On the Capacity of Networks with Correlated Sources
Characterizing the capacity region for a network can be extremely difficult. Even with independent sources, determining the capacity region can be as hard as the open problem of characterizing all information inequalities. The majority of computable outer bounds in the literature are relaxations of the Linear Programming bound which involves entropy functions of random variables related to the sources and link messages. When sources are not independent, the problem is even more complicated. Extension of linear programming bounds to networks with correlated sources is largely open. Source dependence is usually specified via a joint probability distribution, and one of the main challenges in extending linear programming bounds is the difficulty (or impossibility) of characterizing arbitrary dependencies via entropy functions. This paper tackles the problem by answering the question of how well entropy functions can characterize correlation among sources. We show that by using carefully chosen auxiliary random variables, the characterization can be fairly “accurate”.
The fundamental question in network coding is to determine the link capacities required to transmit the sources to the sinks. Characterizing the network coding capacity region is extremely hard. When the sources are independent, the capacity region depends only on the source entropy rates. However, when the sources are dependent, the capacity region depends on the detailed structure of the joint source distribution.
Following , a linear programming outer bound was developed for dependent sources  (see also ). This bound is specified by a set of information inequalities and equalities, and source dependence is represented by the entropy function
where is an index set for the sources and are independent and identically distributed copies of the dependent sources. Thus each has the same joint distribution as the sources, but are independent across different .
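Although the symbols in (1) are elided in this copy, the idea of an entropy function over subsets of sources is easy to make concrete. The following sketch (a hypothetical two-source example, not taken from the paper) computes the entropy of every nonempty subset of two correlated binary sources directly from their joint distribution:

```python
from itertools import chain, combinations
from math import log2

def entropy(joint, vars_subset):
    # Marginalize the joint distribution onto vars_subset, then compute Shannon entropy.
    marg = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in vars_subset)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

# Illustrative distribution: two bits that agree with probability 0.9.
joint = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}

# The entropy function assigns an entropy to every nonempty subset of sources.
h = {subset: entropy(joint, subset)
     for subset in chain(combinations(range(2), 1), combinations(range(2), 2))}
print(h)
```

The resulting vector of subset entropies is exactly the kind of object the linear programming bounds constrain; note that correlation makes the joint entropy strictly smaller than the sum of the marginal entropies.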
However, (1) fails to properly characterize source dependence. We also note that the capacity regions (or best known achievable regions) for many classic multiterminal problems are also expressed as optimizations of linear combinations of joint entropies, subject to linear constraints (e.g., Markov constraints) on joint entropies. Were it not for the specified joint distributions on the sources/side-information typically present in such problems, a numerical solution could be obtained by a linear program. Again, if it were possible to somehow accurately capture the dependence of random variables using entropies, it would lead to a convenient computational approach.
A natural question arises: How accurately can arbitrary dependencies be specified via entropies alone? We will show that by using auxiliary random variables, entropies can in fact be sufficient.
This work on characterizing correlation between random variables using entropy functions was mainly motivated by the problem of characterizing outer bounds on the capacity of networks with correlated sources. In Section II we review known outer bounds characterized using a graph-theoretic approach (referred to as graphical bounds) as well as outer bounds using a geometric approach (referred to as geometric bounds). These bounds are not tight and can be tightened by introducing new auxiliary random variables which more accurately describe the correlation between the source random variables. In Section III, we give a general framework for improving outer bounds with the introduction of auxiliary random variables. In Section III-A, we demonstrate by an example that our LP bound can in fact be tightened via the use of auxiliary random variables. In Sections III-B and III-C, we present two approaches to construct auxiliary random variables to tighten the outer bounds. The constructions via these two approaches are direct generalizations of the auxiliary random variables designed in Example 1, Section III-A. In Section IV, we deal with the more general problem of characterizing probability distributions using entropy functions.
Despite its importance, the maximal gain that can be obtained by network coding is still largely unknown, except in a few scenarios [5, 6]. One example is the single-source scenario, where the capacity region is characterized by the max-flow bound  (see also [7, Chapter 18]) and linear network codes maximize throughput . However, when more than one source is involved, the problem can become quite difficult.
The problem becomes even more complex when the sources are correlated. In the classical literature, the problem of communicating correlated sources over a network is called distributed source compression. For networks of error-free channels with edge capacity constraints, the distributed source compression problem is a feasibility problem: given a network with edge capacity constraints and the joint probability distribution of correlated sources available at certain nodes, is it feasible to communicate the correlated sources to demanding nodes?
A related important problem is the separation of distributed source coding and network coding . Specifically, distributed source coding and network coding are separable if and only if optimality is not sacrificed by separately designing source and network codes. It has been shown in  that the separation holds for two-source two-sink networks; however, it has been shown by examples that the separation fails for two-source three-sink and three-source two-sink networks.
In this section, we present known outer bounds on the capacity of networks with correlated sources. We first describe the network model and define network codes and achievable rates. We then present known graphical and geometric outer bounds.
A network is modelled as a graph where is the set of nodes and is the set of directed edges between certain pairs of nodes. Associated with each edge is a non-negative real number called the capacity of the edge . For edges , we write as a shorthand for . Similarly, for an edge and a node , the notations and respectively denote and . Let be an index set for a number of multicast sessions, and let be the set of source variables. These sources are available at the nodes identified by the mapping
Each source may be demanded by multiple sink nodes, identified by the mapping
where is the set of all subsets of . Each edge carries a random variable which is a function of incident edge random variables and source random variables.
For a given network and connection requirements and , a network code is a set of mappings from input random variables (sources and incoming edges) to output random variables (outgoing edges) at each network node. The mappings must obey constraints implied by the topology. The alphabets of source random variables and edge random variables are denoted by and , respectively.
Definition 1 (Network code)
A network code for a given network is described by sets of its encoding functions and decoding functions .
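For intuition, a network code in the sense of Definition 1 can be made concrete on the classic butterfly network, a standard textbook example that is not drawn from this excerpt: each internal node applies an encoding function to its inputs, and each sink applies a decoding function. A minimal sketch:

```python
# Butterfly network sketch (illustrative, not the paper's example).
# Sources b1, b2 enter the network; the single bottleneck edge carries
# their XOR; each sink combines its direct copy with the coded bit.

def encode_bottleneck(b1, b2):
    return b1 ^ b2  # encoding function on the shared edge

def sink1_decode(b1_direct, coded):
    return b1_direct, b1_direct ^ coded  # recovers (b1, b2)

def sink2_decode(b2_direct, coded):
    return b2_direct ^ coded, b2_direct  # recovers (b1, b2)

# Exhaustively verify that both decoding functions succeed.
for b1 in (0, 1):
    for b2 in (0, 1):
        c = encode_bottleneck(b1, b2)
        assert sink1_decode(b1, c) == (b1, b2)
        assert sink2_decode(b2, c) == (b1, b2)
print("both sinks decode correctly")
```

The point of the example is that the edge functions are constrained by the topology exactly as Definition 1 requires: each output is a function only of the random variables entering that node.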
Now we define an achievable rate tuple. The definition below is different from the usual definition of an achievable rate [7, Definition 21.2] in that the source rates are fixed and the link capacity constraints are variable.
Definition 2 (Achievable rate tuple)
Consider a given network with discrete memoryless sources and underlying probability distribution . A link capacity tuple is called achievable if there exists a sequence of network codes such that for every and every
where is the decoded estimate of at node from via mapping .
The set of all achievable link capacity tuples is denoted by , where the subscript indicates the correlated source case.
II-A Graphical Bounds
In , the author gave a necessary and sufficient condition for the case when each sink requires all the sources. (The results were generalized to networks with noisy channels; however, in this paper we are mainly concerned with networks with error-free channels.) This result includes, as a special case, the necessary and sufficient condition of ,  for networks in which every source is demanded by a single sink.
Theorem 1 (Theorem 3.1, )
For networks of error free channels, the transmission of sources is feasible if and only if
where source sessions are available at some nodes in and all source sessions are demanded by at least one node in , i.e., this is the min-cut of the graph.
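As a hedged illustration of such a cut condition, consider the special case of two correlated sources delivered to a single sink over dedicated links; the cut conditions then reduce to the familiar Slepian-Wolf constraints. The joint distribution below is illustrative only:

```python
from math import log2

def H(dist):
    # Shannon entropy of a distribution given as {outcome: probability}.
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marg(joint, i):
    # Marginal distribution of coordinate i.
    out = {}
    for k, p in joint.items():
        out[k[i]] = out.get(k[i], 0.0) + p
    return out

# Illustrative pair of correlated bits agreeing with probability 0.9.
joint = {(0, 0): 0.45, (1, 1): 0.45, (0, 1): 0.05, (1, 0): 0.05}
Hxy = H(joint)
Hx, Hy = H(marg(joint, 0)), H(marg(joint, 1))
Hx_given_y, Hy_given_x = Hxy - Hy, Hxy - Hx

def feasible(R1, R2):
    # Slepian-Wolf cut conditions for two dedicated links of rates R1, R2.
    return R1 >= Hx_given_y and R2 >= Hy_given_x and R1 + R2 >= Hxy

print(feasible(1.0, 1.0), feasible(0.2, 0.2))
```

This is only the simplest instance; the theorem above covers general min-cuts with sources spread over several nodes.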
As mentioned above, for a few special cases a necessary and sufficient condition for reliable transmission of correlated sources over a network is given in ,  and . However, the problem is an uncharted area in general. Until recently there did not even exist in the literature a nontrivial necessary condition for reliable transmission of correlated sources in general multicast networks. In , we made the first attempt to address this problem by characterizing a graph-based bound, called the "functional dependence bound", for networks with correlated sources and arbitrary sink demands. The functional dependence bound was initially characterized for networks with independent sources . Later, in , we showed that the functional dependence bound is also an outer bound for networks with correlated sources.
In  we gave an abstract definition of a functional dependence graph, which expressed a set of local dependencies between random variables. In particular, we described a test for functional dependence, and gave a basic result relating local and global dependence. Below is the functional dependence bound based on the implications of functional dependence.
Theorem 2 (Functional dependence bound )
Let be a functional dependence graph on the (source and edge) random variables . Let be the collection of all maximal irreducible sets [4, Definition 25]. Then
where and .
The functional dependence region is defined as follows.
where edge-sets are subsets of maximal irreducible sets.
We also generalized existing bounding techniques, which characterize geometric bounds for multicast networks with independent sources, to networks with correlated sources.
II-B Geometric Bounds
In this section, we focus on outer bounds on the achievable rate region for networks with correlated sources using a geometric approach. We present outer bounds using the set of almost entropic variables , and (again called the LP bound) using the set of polymatroid variables , similar to the bounds given for independent sources in [2, Chapter 15].
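The polymatroid axioms underlying the LP bound are simple to state computationally: a set function must vanish on the empty set and be monotone and submodular. A minimal checker (an illustrative sketch, not code from the paper):

```python
from itertools import combinations

def powerset(ground):
    s = list(ground)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_polymatroid(f, ground, tol=1e-9):
    """Check the polymatroid axioms for a set function f on 2^ground:
    f(empty) = 0, monotonicity, and submodularity."""
    sets = powerset(ground)
    if abs(f[frozenset()]) > tol:
        return False
    for A in sets:
        for B in sets:
            if A <= B and f[A] > f[B] + tol:              # monotone
                return False
            if f[A | B] + f[A & B] > f[A] + f[B] + tol:   # submodular
                return False
    return True

# Entropy functions are polymatroids; e.g. two independent fair bits:
f = {frozenset(): 0.0, frozenset('X'): 1.0, frozenset('Y'): 1.0,
     frozenset('XY'): 2.0}
print(is_polymatroid(f, 'XY'))
```

Every entropy vector satisfies these axioms, which is why relaxing the almost entropic region to the polymatroid region yields a computable (linear programming) outer bound.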
Consider a network coding problem for a set of correlated source random variables . Let be the set of all link capacity tuples such that there exists a function (over the set of variables ) satisfying the following constraints:
for all and .
Taking as and in Definition 3 gives us regions and respectively.
Theorem 3 (Outer bound )
It is well known that the region is closed and convex . Moreover, the regions defined by the constraints (11)-(14) are also closed and convex. Replacing by in Theorem 3, we obtain an outer bound , for the capacity of networks with correlated sources. This bound (a linear programming bound) is an outer bound for the achievable rate region, since and Theorem 3 implies
Theorem 4 (Outer bound )
The outer bounds and given above, in terms of the region of almost entropic vectors and the region of polymatroid vectors , may not be tight, since the representation of the regions or together with constraints (11)-(14) does not capture the exact correlation of the source random variables, i.e., the exact joint probability distribution. This is because the same entropy vector induced by the correlated sources may be satisfied by more than one probability distribution. The importance of incorporating knowledge of the source correlation (joint distribution) to improve the cut-set bound was also recently and independently investigated in .
III Improved Outer Bounds
In this section, we give a general framework for improved outer bounds using auxiliary random variables. In Section III-A we demonstrate by an example that the outer bound is not tight, and also give an explicit improved outer bound which is strictly better than the outer bound . In Sections III-B and III-C, we present two generalizations of Example 1 to construct auxiliary random variables to obtain improved bounds.
Consider a set of correlated sources with underlying probability distribution . Construct any auxiliary random variables by choosing a conditional probability distribution function .
Let be the set of all link capacity tuples such that there exists an almost entropic function satisfying the following constraints:
for all and .
Similarly, an outer bound can be defined in terms of polymatroid function .
Theorem 5 (Improved Outer bounds)
An improved functional dependence bound can also be obtained from the functional dependence bound by introducing auxiliary random variables. The improvement of bounds of the form in Definition 4 over the bound without auxiliary random variables depends solely on the construction of the auxiliary random variables.
III-A Looseness of the Outer Bounds
In this section, we demonstrate by an example that
In Figure 1, three correlated sources are available at node 1 and are demanded at nodes respectively. The edges from node to nodes have sufficient capacity to carry the random variable available at node 2. The correlated sources are defined as follows.
where are independent, uniform binary random variables.
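The exact source definitions are elided in this copy. A standard construction of this type (a hypothetical reconstruction, labeled as such) builds three pairwise-correlated sources from three independent uniform bits so that each pair of sources shares exactly one bit:

```python
import itertools

# Hypothetical reconstruction: sources built from independent uniform
# bits b1, b2, b3, with each pair of sources sharing exactly one bit.
def sources(b1, b2, b3):
    return (b1, b2), (b2, b3), (b3, b1)

# Enumerate the joint support; each of the 8 bit patterns has probability 1/8.
support = set()
for bits in itertools.product((0, 1), repeat=3):
    support.add(sources(*bits))

print(len(support))
```

Under this construction each source is uniform on two bits, and the shared bits are exactly the pairwise common information exploited by the auxiliary random variables below.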
The LP bound for the network in Figure 1 is the set of all link capacity tuples such that there exists satisfying the following constraints.
Note that the link capacity tuple is in the region by choosing as the entropy function of the following random variables:
Now, we will characterize an improved LP bound by constructing auxiliary random variables .
An improved LP bound for the network in Figure 1 is the set of all link capacity tuples such that there exists satisfying the following constraints.
Note that, by Definition 6, the link capacity tuple is in the improved LP bound if and only if there exists a polymatroidal satisfying (32)-(45). In the following, we prove that the link capacity tuple is indeed not in , Definition 6, and hence is not achievable.
As , it implies that
On the other hand, by (49), we have
Together with , this implies . Similarly, we can also prove that
Using the same argument, we can once again prove that and implies .
In Definition 4, we presented new improved outer bounds on the capacity region of networks with correlated sources using auxiliary random variables. However, one problem remains to be solved: how to construct auxiliary random variables that tighten the bounds or, more generally, that lead to a characterization of the capacity region for networks with correlated sources. While this appears to be a hard problem in general, we propose three approaches to construct auxiliary random variables. First, we propose to construct auxiliary random variables from common information.
III-B Auxiliary Random Variables from Common Information
The first approach is to construct an auxiliary random variable which is almost the common information of two random variables. This approach is a direct generalization of Example 1 in the previous section, in the sense that the auxiliary random variables in Example 1 are precisely the common information between pairs of source random variables. This fact also implies that the approach leads to a characterization of improved bounds.
Definition 7 (Common Information )
For any random variables and , the common information of and is the random variable (denoted by ) which has the maximal entropy among all other random variables such that
In many cases, it is not easy to find the common information between two random variables. For example, let be a binary random variable such that and . Suppose is another binary random variable independent of and . Then if (see also )
then even if and are almost the same for sufficiently small .
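This fragility can be checked numerically. The (Gács-Körner) common information equals the entropy of the connected components of the bipartite graph on the support of the pair, and a sketch of that computation (illustrative code, not from the paper) shows it collapsing under an arbitrarily small perturbation:

```python
from math import log2

def gk_common_information(joint):
    """Gács-Körner common information via connected components of the
    bipartite support graph of (X, Y)."""
    xs = {x for (x, y) in joint}
    parent = {x: x for x in xs}  # union-find over x-values

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Each y-value links together all x-values it co-occurs with.
    by_y = {}
    for (x, y), p in joint.items():
        if p > 0:
            by_y.setdefault(y, []).append(x)
    for group in by_y.values():
        for x in group[1:]:
            parent[find(x)] = find(group[0])

    # The common variable is the component index; return its entropy.
    comp_prob = {}
    for (x, y), p in joint.items():
        comp_prob[find(x)] = comp_prob.get(find(x), 0.0) + p
    return -sum(p * log2(p) for p in comp_prob.values() if p > 0)

# Perfectly correlated bits: common information is 1 bit.
print(gk_common_information({(0, 0): 0.5, (1, 1): 0.5}))
# A tiny crossover connects the support graph: common information drops to 0.
print(gk_common_information({(0, 0): 0.499, (1, 1): 0.499, (0, 1): 0.002}))
```

The second call illustrates exactly the phenomenon described above: the two variables are almost identical, yet their common information is zero.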
To address this issue, we propose a different way to construct auxiliary random variables. Consider any pair of random variables with probability distribution . For any , let
where the probability distribution of is given by
Note that the smaller is, the more similar the random variable (associated with the conditional distribution ) is to the common information.
Our constructed random variable will be selected from to formulate an improved LP bound where
For a multi-source multicast network with source random variables one can construct random variables from the family of distributions
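A small numerical sanity check of the idea that the auxiliary variable approaches the common information as the parameter shrinks (a generic fact about conditional entropy, not the paper's specific construction): if an auxiliary variable agrees with a deterministic function of the sources except with probability eps, then its conditional entropy given the sources is the binary entropy of eps, which vanishes as eps goes to 0.

```python
from math import log2

def h2(p):
    # Binary entropy function.
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# H(U | X) = h2(eps) when U equals f(X) except with probability eps,
# so the auxiliary variable approaches exact common information as eps -> 0.
for eps in (0.1, 0.01, 0.001):
    print(eps, h2(eps))
```

This is the quantitative sense in which the family of distributions above interpolates toward the common information.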
III-C Linearly Correlated Random Variables
In some scenarios, source random variables are "linearly correlated". In this section we present a construction method for auxiliary random variables describing linear correlation between random variables. This approach is also a direct generalization of Example 1 in the previous section, in the sense that the source random variables there are linearly correlated.
A set of random variables is called linearly correlated if
for any , the support of the probability distribution of is a vector subspace and
is uniformly distributed.
Let be a set of linearly correlated random variables with support vector subspaces
where are linearly independent. That is, is a basis for the subspaces . It can be noticed that there exists a set of linearly independent random variables uniformly distributed over the support induced from a basis of the vector spaces . That is,
The random variable can be written as a function of random variables as follows
where is a coefficient matrix.
Thus, the random variables are linear functions of the random variables . In particular, a random variable is a function of the random variables such that the coefficient of is non-zero. Then we have the following equalities.
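A minimal sketch of linearly correlated sources, with an illustrative coefficient matrix (the paper's matrix and dimensions are elided in this copy): independent uniform bits W play the role of the basis variables, and each source is a GF(2) linear function of them, so the joint support is a vector subspace with a uniform distribution.

```python
import itertools

# Hypothetical coefficient matrix A over GF(2); each source uses two of
# the three independent uniform basis bits W = (w1, w2, w3).
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]

def apply(A, w):
    # Matrix-vector product over GF(2).
    return tuple(sum(a * b for a, b in zip(row, w)) % 2 for row in A)

support = {apply(A, w) for w in itertools.product((0, 1), repeat=3)}
# The support is the image of a linear map, hence a GF(2) subspace:
# it is closed under coordinatewise XOR, and each point is equally likely.
print(len(support))
```

Here the rows of A sum to zero over GF(2), so the map has rank 2 and the support has 4 elements; this mirrors the definition above, where the support of the joint distribution is a vector subspace carrying the uniform distribution.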
IV Probability Distributions using Entropy Functions
The basic question is: how "accurately" can entropy functions specify the correlation among random variables? We partly answer this question by showing that the joint probability distribution among random variables can be completely specified by entropy functions, subject to some moderate constraints. First, we introduce some notation.
Let and be a random variable. Assume without loss of generality that has a positive probability distribution over . Let be the set of all nonempty subsets of . The size of the support of a random variable will be denoted by . (Roughly speaking, is the number of possible values that can take with positive probability.) For notational simplicity, we will not distinguish between a set with a single element and the element itself. Two random variables are regarded as equivalent if they are functions of each other. Therefore, and are regarded as equivalent.
The function is not one-to-one over the interval. Nevertheless, we will use it to define as the unique such that
IV-A Single Random Variable Case
First we consider the problem of characterizing the distribution of a single random variable via entropy functions. To understand the idea, consider a binary random variable such that and . While the entropy of does not determine exactly what the probabilities of are, it essentially determines the probability distribution (up to permutation). To be precise, let be such that where . Then either or . Furthermore, the two possible distributions are in fact permutations of each other.
When is not binary, the entropy alone is not sufficient to characterize the probability distribution of . However, by using auxiliary random variables, it turns out that the distribution of can still be determined.
The idea is best demonstrated by an example. Suppose is ternary, taking values from the set . Suppose also that for all . Define random variables , and such that
Let us further assume that for all . Then by (81) and the strict monotonicity of in the interval , it seems at first glance that the distribution of is uniquely specified by the entropies of the auxiliary random variables. However, this is only half of the story, and there is a catch in the argument: the auxiliary random variables chosen are not arbitrary. When we "compute" the probabilities of from the entropies of the auxiliary random variables, we are assumed to know how the random variables are constructed. Without knowing the "construction", it is unclear how to find the probabilities of from entropies. More precisely, suppose we only know that there exist auxiliary random variables such that (80) holds and their entropies are given by (81) (without knowing that the random variables are specified by (79)). Then we cannot determine precisely what the distribution of is. Having said that, in this paper we will show that the distribution of can in fact be fully characterized by the "joint entropies" of the auxiliary random variables.
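A numerical sketch of the ternary case, with an illustrative distribution (the paper's probabilities are elided in this copy): the indicator auxiliary variables are each binary, and jointly they form a one-hot encoding of the ternary variable, so their joint entropy equals the entropy of the variable itself.

```python
from math import log2

def entropy_of(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Illustrative ternary distribution on {a, b, c}.
pX = {'a': 0.5, 'b': 0.3, 'c': 0.2}

# Indicator auxiliary variables X_s = 1{X = s}: each is binary, with
# entropy h2(p_s), which pins down p_s when p_s <= 1/2.
for s, p in pX.items():
    print(s, entropy_of({1: p, 0: 1 - p}))

# Jointly, (X_a, X_b, X_c) is a one-hot encoding of X, so the joint
# entropy of the indicators equals H(X).
joint = {(1, 0, 0): pX['a'], (0, 1, 0): pX['b'], (0, 0, 1): pX['c']}
print(entropy_of(joint), entropy_of(pX))
```

This illustrates both halves of the argument above: individually the indicator entropies suggest the probabilities, but it is the joint entropies that actually carry the full distributional information.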
IV-A1 Construction of auxiliary random variables
Definition 9 (Constructing auxiliary random variables )
For any , let be the auxiliary random variables such that
Notice that when .
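The details of Definition 9 are elided in this copy. One plausible reading (a hypothetical reconstruction, consistent with the indicator variables used for the ternary variable above) associates with each proper nonempty subset alpha of the support an indicator of the event that X falls in alpha. Under that reading, the entropies of the whole family can be tabulated, and the smallest atom attains the minimum entropy, in line with Property 4:

```python
from itertools import combinations
from math import log2

# Illustrative positive distribution on the support {a, b, c}.
pX = {'a': 0.5, 'b': 0.3, 'c': 0.2}

def subsets(support):
    # All proper nonempty subsets of the support.
    s = list(support)
    return [frozenset(c) for r in range(1, len(s))
            for c in combinations(s, r)]

def H(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def H_alpha(alpha):
    # Entropy of the (hypothetical) indicator variable X_alpha = 1{X in alpha}.
    p_in = sum(pX[x] for x in alpha)
    return H({1: p_in, 0: 1 - p_in})

for alpha in subsets(pX):
    print(sorted(alpha), round(H_alpha(alpha), 4))
```

With these numbers the minimum over the family is attained at the least-probable singleton (and its complement), matching the role the smallest atom plays in Property 4.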
Proposition 1 (Property 1: Distinct)
For any distinct , we have
First note that and hence
Since are nonempty and distinct, there are two possible cases. In the first case, is nonempty. In this case, it can be checked easily that either , , or both must be nonempty. In the second case, . Then clearly both and must be nonempty. Finally, since has a strictly positive probability distribution, we can easily check that the proposition holds.
Proposition 2 (Property 2: Subset)
Suppose . Then
if and only if is nonempty.
By direct verification.
Proposition 3 (Property 3: Partition)
For any , there exists random variables such that
for all .
Assume without loss of generality that . Let
It can be verified directly that (87) is satisfied.
In the rest of the paper, we will assume without loss of generality that
Proposition 4 (Property 4: The smallest atom)
has the minimum entropy among all . In other words
Consider . First notice that for all and hence On the other hand,
Therefore, and consequently . The proposition is thus proved.
Proposition 5 (Property 5: Singleton )
Suppose . Then for any ,
In addition, for all such that ,
Here, follows from the fact that if .
Consider any which is not a subset of . Let , and
It can be verified easily that
As is not a subset of , there exists such that . In this case,
Hence, On the other hand,
In the previous subsection, we defined how to construct a set of auxiliary random variables from , and identified properties of these random variables in relation to the underlying probability distribution. In the following, we will show that the constructed set of auxiliary random variables is in fact sufficient to fully characterize the underlying probability distribution of .
Let be a random variable such that there exist auxiliary random variables
In other words, is a random variable such that there exist auxiliary random variables such that the entropy function of is essentially the same as that of