Exchanging Third-Party Information with Minimum Transmission Cost
Abstract
In this paper, we consider the problem of minimizing the total transmission cost for exchanging channel state information. We propose a network coded cooperative data exchange scheme such that the total transmission cost is minimized while each client can decode all the channel information held by all other clients. We first derive a necessary and sufficient condition for a feasible transmission scheme; based on this condition, there exists a feasible code design that guarantees each client can decode the complete information. We further formulate the problem of minimizing the total transmission cost as an integer linear program. Finally, we discuss the probability that each client can decode the complete information when distributed random linear network coding is used.
Keywords: network coding, cooperative data exchange, channel state information.
I Introduction
In wireless networks, it is always beneficial for the nodes to have global knowledge of channel state information (CSI), e.g., channel gains or link loss probabilities, since global information can greatly ease network optimization and improve performance. Generally, the CSI of a link can be regarded as local information known only to the two connected nodes (e.g., nodes $a$ and $b$). However, for a third-party node, e.g., a node $c$, the channel information of link $(a,b)$ is unknown. In some network designs, such third-party information exchange [1, 2] is often necessary.
Recently, cooperative data exchange among users [3] has become one of the most promising approaches for designing efficient data communications. In cooperative data exchange, each client initially holds a subset of packets, which are broadcast from a server or locally generated by the client itself. The objective is to guarantee that all the clients finally obtain all the packets by cooperatively exchanging data. Recent works showed that network coding [4, 5, 6] can reduce the number of transmissions or the delay required for cooperative data exchange. However, finding the optimal network coded solution that minimizes the number of transmissions [3, 7, 8] or the transmission cost [9, 10] is nontrivial for the general cooperative data exchange problem.
The work in [1] designs a coded cooperative data exchange scheme to minimize the number of transmissions for third-party information exchange. Compared with the general cooperative data exchange problem, in third-party information exchange each client initially has the local CSI of the links to all other clients, and wants to learn all the CSI knowledge that is unknown to it. The work in [1] showed an optimal transmission scheme that minimizes the total number of transmissions for exchanging third-party information.
Although the work in [1] gives the optimal solution for minimizing the total number of transmissions required for third-party information exchange, it does not consider the case where each client is associated with a transmission cost, as studied in [9]. Consider a three-client network as shown in Fig. 1, where $x_{i,j}$ denotes the CSI between client $i$ and client $j$. It is assumed that the links are symmetric, i.e., $x_{i,j} = x_{j,i}$. Initially, client $i$ knows only the local information $x_{i,j}$ for $j \neq i$. Without considering the cost, client 1 and client 2 may be selected to transmit the encoded packets $x_{1,2}+x_{1,3}$ and $x_{1,2}+x_{2,3}$, respectively, to complete the data exchange process. However, if we consider the transmission costs and client 1 has the highest cost, i.e., $w_1 > w_2 \geq w_3$, selecting clients 2 and 3 as the transmitters is a better choice than the former solution in terms of the total transmission cost.
In this paper, we design an algorithm to determine the number of packets that each client should send and how the packets should be encoded for each transmission, so as to minimize the total transmission cost of the third-party information exchange problem. Similar to previous works [3, 7, 1], we assume there is a common control channel which allows reliable broadcast by any client to all the other clients. The main contributions of this paper can be summarized as follows:

We derive a necessary and sufficient condition for a feasible transmission scheme such that there exists a code design for every client to successfully decode all the packets from other clients.

Based on the necessary and sufficient condition for feasible transmission, we formulate the problem of minimizing the total transmission cost as an integer linear program.

Our analysis shows that the clients with lower transmission costs should send more packets than the clients with higher transmission costs.

We analyze the probability that every client can decode all other packets when random linear network coding is locally performed at each client.
The rest of the paper is organized as follows. The problem is formulated in Section II. Section III derives the necessary and sufficient condition for a feasible transmission scheme. In Section IV, we give the optimal solution with the minimum transmission cost and analyze the performance with random network coding. We conclude the paper in Section V.
II Problem Formulation
Consider a network with $N$ clients in $\mathcal{N} = \{1, 2, \ldots, N\}$, where each client $i$ is associated with a transmission cost $w_i$ for sending a single packet. Suppose that $x_{i,j}$ is the CSI (e.g., channel gain or link loss probability) of the link between client $i$ and client $j$. Initially, each client only knows the local CSI, i.e., client $i$ only holds the packets in $X_i = \{x_{i,j} : j \in \mathcal{N}, j \neq i\}$. We assume that the links are symmetric, i.e., $x_{i,j} = x_{j,i}$ for $i \neq j$. In other words, every two clients $i$ and $j$ hold one common packet $x_{i,j}$. Thus, the set of all the packets is $X = \bigcup_{i \in \mathcal{N}} X_i$. Suppose that $K$ is the total number of packets in the network, i.e., $K = |X| = N(N-1)/2$.
There is a lossless broadcast channel through which the clients send and receive packets [3, 1, 7, 8, 9, 10, 11]. Each transmitted packet is encoded over the packets initially held by the sender. Let $b_i$ be the number of packets required to be transmitted by client $i$. The total transmission cost can thus be written as
$C = \sum_{i=1}^{N} w_i b_i.$   (1)
In this paper, our goal is to find a network coded transmission scheme that satisfies the following two conditions:

Each client can finally decode all the packets in $X$ from the packets sent by the other clients via the broadcast channel.

The total transmission cost defined in Eq. (1) is the minimum among all the transmission schemes that satisfy the first condition.
Without loss of generality, we use $\bar{X}_i$ to denote the set of "wanted" packets of client $i$, i.e., $\bar{X}_i = X \setminus X_i$. We also assume that the clients in $\mathcal{N}$ are ordered in nondecreasing order of transmission cost, i.e., $w_1 \leq w_2 \leq \cdots \leq w_N$.
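To make the notation concrete, the following sketch (hypothetical Python, not part of the original formulation; clients are indexed $1, \ldots, N$ and a packet $x_{i,j}$ is represented as the pair $\{i,j\}$) builds the packet sets $X_i$, the wanted sets $\bar{X}_i$, and the total cost of Eq. (1):

```python
from itertools import combinations

def packet_sets(n):
    """Return (X, held): X is the set of all K = N(N-1)/2 packets and
    held[i] is the set X_i of local CSI packets known to client i."""
    X = {frozenset(p) for p in combinations(range(1, n + 1), 2)}
    held = {i: {p for p in X if i in p} for i in range(1, n + 1)}
    return X, held

def total_cost(w, b):
    """Total transmission cost of Eq. (1): C = sum_i w_i * b_i."""
    return sum(wi * bi for wi, bi in zip(w, b))

X, held = packet_sets(4)
wanted = {i: X - held[i] for i in held}        # "wanted" sets
assert len(X) == 4 * 3 // 2                    # K = N(N-1)/2
assert all(len(wanted[i]) == 3 for i in held)  # (N-1)(N-2)/2 wanted packets each
```

Each client holds the $N-1$ packets of its incident links, which is why the wanted set always has $(N-1)(N-2)/2$ elements.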
III Feasible Transmission Scheme
Although the work in [1] already proposed a feasible transmission scheme, which completes the third-party information exchange process with the minimum number of transmissions, it addresses a special case of our problem that does not consider the transmission cost. In this section, we aim to derive a necessary and sufficient condition for a feasible transmission scheme, such that there exists a feasible code design for every client to successfully decode its "wanted" packets. Based on the derived condition, we then give the transmission scheme minimizing the total transmission cost in Section IV.
III-A Encoding Matrix
In this section, we define the encoding matrix of the transmitted packets. Before sending a packet, each client first generates a linearly encoded packet from the packets it initially holds over a finite field. The $k$th encoded packet sent by client $i$ can then be written as a linear combination of the packets in $X_i$, i.e.,
$p_i^k = \sum_{x_{i,j} \in X_i} \alpha_{i,j}^k \, x_{i,j},$   (2)
where $\alpha_{i,j}^k$ is the coefficient selected for packet $x_{i,j}$ in the $k$th encoded packet of client $i$ over the finite field $GF(q)$.
The encoding vectors sent by all the clients can then be collected into a $K \times \sum_{i=1}^{N} b_i$ encoding matrix $A$, whose rows are indexed by the native packets in $X$ and whose columns are indexed by the transmitted packets $p_1^1, \ldots, p_1^{b_1}, \ldots, p_N^{b_N}$.
In the encoding matrix $A$, each column vector denotes the encoding vector of a transmitted packet, and each row vector represents how a native packet is encoded into the transmitted packets. For example, the first column vector is the encoding vector of packet $p_1^1$ sent by client 1, while a nonzero element in the first row means that the corresponding native packet participates in the encoded packet represented by that column. Let $e(p_i^k)$ denote the encoding vector of packet $p_i^k$, which is a column of $A$ of size $K$.
Without loss of generality, for each client, we define a local receiving matrix as follows.
Definition 1
The local receiving matrix of client $i$, denoted $A_i$, is defined as the submatrix of $A$ that includes all the rows and columns of $A$ except the following:

The rows that represent the encoding status of the native packets in $X_i$;

The columns that represent the encoding vectors of the packets sent by client $i$ itself.
Thus, each row vector of $A_i$ denotes how a "wanted" packet of client $i$ is encoded in the received packets.
For example, $A_1$ does not include the first $N-1$ rows of $A$, as these rows represent how the native packets in $X_1$ participate in the received packets, and it does not include the first $b_1$ columns of $A$, as these columns denote the packets sent by client 1 itself. Thus, $A_1$ is a $\frac{(N-1)(N-2)}{2} \times \sum_{j \neq 1} b_j$ matrix containing the encoding vectors received by client 1. We use $v_k$ to denote the $k$th row vector of a local receiving matrix $A_i$.
III-B Condition for a Receiving Matrix with Full Row Rank
We first investigate the condition under which there exists a code design making a receiving matrix have full row rank.
Definition 2
We define a coefficient element as an element of a row encoding vector $v$ whose value may be chosen (nonzero) over $GF(q)$, i.e., a position whose column corresponds to a packet whose sender holds the native packet of that row. Let $c(v)$ be the set of columns of $v$ whose elements are coefficient elements.

For example, if only the first and third elements of a row vector $v$ are coefficient elements, then $c(v) = \{1, 3\}$.

Let $M$ be a general $m \times n$ receiving matrix with $n \geq m$, where $v_k$ is the $k$th row vector of $M$. We then give the necessary and sufficient condition under which there exists a code design ensuring that the rank of $M$ is $m$, as follows.
Lemma 1
There exists a code design such that the rank of the receiving matrix $M$ is $m$, if and only if for any $r$ row vectors $v_{k_1}, \ldots, v_{k_r}$ in $M$, the size of $\bigcup_{j=1}^{r} c(v_{k_j})$ is at least $r$, where $1 \leq r \leq m$.
Proof.
We first prove the necessity, where we assume that there exists a code design such that the rank of the receiving matrix $M$ is $m$.
According to this assumption, we can find in $M$ at least one set of $m$ coefficient elements selected from $m$ different rows and $m$ different columns, since otherwise every $m \times m$ minor of $M$ would vanish. In other words, for the $m$ rows, the size of $\bigcup_{k=1}^{m} c(v_k)$ is at least $m$.
In addition, as the number of rows of $M$ is $m$ and the rank of $M$ is $m$, the row vectors are linearly independent. Hence, every submatrix of $M$ consisting of $r$ rows has rank $r$, the number of rows it includes. So, in any $r$ rows $v_{k_1}, \ldots, v_{k_r}$, we can find at least one set of $r$ coefficient elements selected from different rows and different columns, i.e., $|\bigcup_{j=1}^{r} c(v_{k_j})| \geq r$. Thus, the necessity is proved.
We then prove the sufficiency, where we assume that, for any $r$ row vectors of the receiving matrix $M$, $1 \leq r \leq m$, the size of $\bigcup_{j=1}^{r} c(v_{k_j})$ is at least $r$.
First, we consider the first row vector of $M$. There must be at least one coefficient element in row one, since $|c(v_1)| \geq 1$. We select any such column, e.g., $u_1 \in c(v_1)$. Then, considering the second row vector of $M$, there must be a coefficient element whose column index differs from $u_1$, since $|c(v_1) \cup c(v_2)| \geq 2$. We then select such a column $u_2 \in c(v_2)$, where $u_2 \neq u_1$. We repeat this process, and in each of the following rows we can find a coefficient element whose column index has not been selected so far, since $|\bigcup_{j=1}^{r} c(v_j)| \geq r$ (if the greedy choice is ever blocked, the union condition guarantees, by Hall's marriage theorem, that a system of distinct column representatives still exists after reassigning earlier choices). Let $U = \{u_1, u_2, \ldots, u_m\}$ be the set of columns that have been selected.
Suppose that $M'$ is the $m \times m$ submatrix of $M$ that includes the column vectors of $M$ whose indices are in $U$.
We can then design the feasible code as follows. Among the elements of the $k$th row vector of $M'$, only the coefficient element located in column $u_k$ is assigned a nonzero value, while the coefficient elements in the other columns of row $k$ are assigned zero.
According to the above coefficient assignment, the determinant of the matrix $M'$ can be expressed as the product of $m$ nonzero elements from different rows and different columns, i.e., in the $k$th row, the nonzero element located in column $u_k$ is selected. Since the determinant of $M'$ is nonzero, the rank of $M'$ is $m$. Correspondingly, the rank of $M$ is also $m$. Thus, the sufficiency is proved.
Hence, Lemma 1 is proved. ∎
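The condition of Lemma 1 is exactly Hall's condition on the bipartite graph between rows and the columns carrying their coefficient elements, so it can be tested with a standard augmenting-path matching. The sketch below (hypothetical Python, not from the paper; `support[r][c]` marks the coefficient-element positions) illustrates this:

```python
def has_full_row_rank_design(support):
    """Check the condition of Lemma 1 on a 0/1 support matrix:
    support[r][c] is truthy iff column c carries a coefficient element
    of row r.  By Hall's theorem, "any r rows touch at least r columns"
    holds iff every row can be matched to a distinct column, which we
    test with augmenting paths."""
    m = len(support)
    match = {}                      # column index -> matched row index

    def augment(r, seen):
        # Try to assign row r a column, reassigning earlier rows if needed.
        for c, ok in enumerate(support[r]):
            if ok and c not in seen:
                seen.add(c)
                if c not in match or augment(match[c], seen):
                    match[c] = r
                    return True
        return False

    return all(augment(r, set()) for r in range(m))

# Two rows sharing only one usable column cannot both be matched:
assert has_full_row_rank_design([[1, 1, 0], [0, 1, 1]])
assert not has_full_row_rank_design([[0, 1, 0], [0, 1, 0]])
```

The matching found corresponds to the column set $U$ constructed in the sufficiency proof.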
III-C Necessary and Sufficient Condition for Feasible Transmission
In this section, we aim to find a feasible transmission scheme such that there exists a code design for the encoding matrix $A$ that makes all the local receiving matrices $A_i$ have full row rank (i.e., rank $\frac{(N-1)(N-2)}{2}$), for $i \in \mathcal{N}$. To simplify the following presentation, we first give the following definition.
Definition 3
Let $P$ be a subset of the packets in $X$. We define $D(P)$ as the index set of the clients who hold at least one of the packets in $P$.
For example, for a subset $P = \{x_{1,2}, x_{2,3}\}$, we obtain $D(P) = \{1, 2, 3\}$.
Before deriving the necessary and sufficient condition, we first prove the following lemma.
Lemma 2
For any subset $P$ of native packets in $X$ with $\frac{(d-1)(d-2)}{2} < |P| \leq \frac{d(d-1)}{2}$, where $2 \leq d \leq N$, the size of $D(P)$ is at least $d$.
Proof.
Firstly, we consider the case $|P| = \frac{(d-1)(d-2)}{2} + 1$. We can easily obtain that more than $d-1$ clients are involved in the set $D(P)$. This is because, for any $d-1$ clients, the number of packets held by them but not held by any other client is at most $\frac{(d-1)(d-2)}{2}$ (one packet per pair of these clients). Thus, for $\frac{(d-1)(d-2)}{2}+1$ packets, we still need at least one more client to cover the extra packet. In other words, at least $d$ clients are needed, i.e., $|D(P)| \geq d$.
We then consider $|P| = \frac{d(d-1)}{2}$. As in the above case, more than $d-1$ clients are involved in $D(P)$. The worst case is that the $\frac{d(d-1)}{2}$ packets are held only by $d$ clients, e.g., the packets in $\{x_{i,j} : i, j \in \{1, \ldots, d\}\}$ are held only by the clients in $\{1, \ldots, d\}$. In this case, exactly these $d$ clients can involve the packets in their encoded packets, i.e., $|D(P)| = d$.
When $\frac{(d-1)(d-2)}{2} + 1 < |P| < \frac{d(d-1)}{2}$, we can similarly prove that at least $d$ clients are needed, by considering any $\frac{(d-1)(d-2)}{2}+1$ packets in $P$.
Hence, the lemma is proved.
∎
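Lemma 2 can also be verified exhaustively for small instances. The following sketch (hypothetical Python, for illustration only; packets are the $\binom{N}{2}$ client pairs) checks every subset $P$ in the stated size range:

```python
from itertools import combinations

def holders(packets):
    """D(P): indices of the clients holding at least one packet in P."""
    return set().union(*packets)

def check_lemma2(n, d):
    """Exhaustively verify Lemma 2 for N = n clients: every packet subset P
    with (d-1)(d-2)/2 < |P| <= d(d-1)/2 satisfies |D(P)| >= d."""
    all_packets = [frozenset(p) for p in combinations(range(1, n + 1), 2)]
    lo = (d - 1) * (d - 2) // 2
    hi = d * (d - 1) // 2
    return all(len(holders(P)) >= d
               for size in range(lo + 1, hi + 1)
               for P in combinations(all_packets, size))

assert check_lemma2(5, 3)
assert check_lemma2(5, 4)
```

This is only a brute-force sanity check; the lemma itself covers all $N$ and $d$.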
Based on the above lemmas, we now discuss the necessary and sufficient condition on a feasible transmission scheme for our third-party information exchange problem.
Theorem 1
For any client in $\mathcal{N}$, there exists a code design such that it can decode all its "wanted" packets, if and only if the total number of packets sent by any $d$ clients is at least $\frac{d(d-1)}{2}$. That is,
$\sum_{i \in T} b_i \geq \frac{d(d-1)}{2}, \quad \forall T \subseteq \mathcal{N} \text{ with } |T| = d,$   (3)
where $2 \leq d \leq N-1$.
Proof.
To guarantee that client $i$ can eventually decode its "wanted" packets in $\bar{X}_i$, the rank of its local receiving matrix $A_i$ should be $\frac{(N-1)(N-2)}{2}$.
We first prove the necessity, where we assume that, after receiving the packets sent by the other clients, there exists a code design such that client $i$ can decode its "wanted" packets. In other words, there exists a code design such that the rank of the matrix $A_i$ is $\frac{(N-1)(N-2)}{2}$.
According to Lemma 1, to guarantee that the rank of $A_i$ is $\frac{(N-1)(N-2)}{2}$, for any $r$ row vectors $v_{k_1}, \ldots, v_{k_r}$ of $A_i$, we must have
$\left| \bigcup_{j=1}^{r} c(v_{k_j}) \right| \geq r.$   (4)
Note that each row vector of $A_i$ denotes how a native packet participates in the received encoded packets; hence, $r$ row vectors represent $r$ native packets participating in the encoded packets. According to Lemma 2, for any subset $P$ of packets in $\bar{X}_i$, we have $|D(P)| \geq d$ when $\frac{(d-1)(d-2)}{2} < |P| \leq \frac{d(d-1)}{2}$. In the worst case, for a subset of $\frac{d(d-1)}{2}$ packets, e.g., $P = \{x_{j,k} : j, k \in T\}$ for some set $T$ of clients with $i \notin T$ and $|T| = d$, we have $D(P) = T$. That is, only the $d$ clients in $T$ can encode the packets of this subset into their transmitted packets, so every coefficient element of the corresponding rows lies in a column sent by a client in $T$. Let $R$ be the index set of the row vectors that represent how the native packets of this subset participate in the received packets; thus, $|R| = \frac{d(d-1)}{2}$. According to Eq. (4), we have
$\sum_{j \in T} b_j \geq \left| \bigcup_{k \in R} c(v_k) \right| \geq |R| = \frac{d(d-1)}{2},$   (5)
which thus proves the necessity.
We then prove the sufficiency, where we assume that for any $d$ clients, the total number of packets they send is at least $\frac{d(d-1)}{2}$, i.e., $\sum_{i \in T} b_i \geq \frac{d(d-1)}{2}$ for every $T \subseteq \mathcal{N}$ with $|T| = d$, where $2 \leq d \leq N-1$.
Consider any $r$ row vectors of $A_i$, which represent a set $P$ of $r$ native packets in $\bar{X}_i$, and let $d$ be the integer with $\frac{(d-1)(d-2)}{2} < r \leq \frac{d(d-1)}{2}$. According to Lemma 2, at least $d$ clients, namely those in $D(P)$, can encode these packets into their transmitted packets; moreover, $i \notin D(P)$, since the packets belong to $\bar{X}_i$. Thus, for these $r$ rows, we obtain
$\left| \bigcup_{k \in R} c(v_k) \right| = \sum_{j \in D(P)} b_j,$   (6)
where $R$ is the index set of the row vectors representing the encoding status of these $r$ native packets.
According to the assumption, since $d \leq |D(P)| \leq N-1$, we have
$\sum_{j \in D(P)} b_j \geq \frac{|D(P)|(|D(P)|-1)}{2} \geq \frac{d(d-1)}{2}.$   (7)
In addition, since $r \leq \frac{d(d-1)}{2}$, we obtain
$\left| \bigcup_{k \in R} c(v_k) \right| = \sum_{j \in D(P)} b_j \geq \frac{d(d-1)}{2} \geq r,$   (8)
which means that the number of these rows does not exceed the number of columns carrying their coefficient elements.
Thus, the size of $\bigcup_{k \in R} c(v_k)$ is at least $r$ for any $r$ row vectors of $A_i$, provided that any $d$ clients send at least $\frac{d(d-1)}{2}$ packets in total. According to Lemma 1, $A_i$ then has full row rank $\frac{(N-1)(N-2)}{2}$, which proves the sufficiency.
Thus, we complete the proof of Theorem 1. ∎
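For small $N$, the condition of Theorem 1 can be checked directly by enumerating all client subsets, as in this sketch (hypothetical Python, for illustration; `b[i]` is the number of packets sent by client $i+1$):

```python
from itertools import combinations

def feasible(b):
    """Theorem 1: the transmission numbers b are feasible iff every set T
    of d clients (2 <= d <= N-1) sends at least d(d-1)/2 packets in total."""
    n = len(b)
    return all(sum(b[i] for i in T) >= d * (d - 1) // 2
               for d in range(2, n)
               for T in combinations(range(n), d))

# N = 3: one transmission by each of two clients suffices,
# while three transmissions by a single client do not.
assert feasible([1, 1, 0])
assert not feasible([2, 0, 0])
```

The second example fails because the pair of silent clients violates the condition for $d = 2$.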
IV Transmission Scheme with Minimum Transmission Cost
In this section, we first formulate the problem of minimizing the total transmission cost as an integer linear program. Based on the proposed transmission scheme, we then analyze the performance achieved with random linear network coding over $GF(q)$.
IV-A Transmission Scheme with Minimum Cost
Based on Section III-C, we can formulate the problem of minimizing the total transmission cost, such that all clients can decode their "wanted" packets, as the following Integer Linear Program (ILP):
$\min \sum_{i=1}^{N} w_i b_i$   (9)
subject to
$\sum_{i \in T} b_i \geq \frac{d(d-1)}{2}, \quad \forall T \subseteq \mathcal{N} \text{ with } |T| = d, \ 2 \leq d \leq N-1; \quad b_i \in \mathbb{Z}_{\geq 0}.$   (10)
Based on the above ILP, we can obtain the transmission scheme with the minimum total transmission cost.
We also prove the following theorem, which can be used to further simplify Constraint (10) of the ILP.
Theorem 2
Suppose that $(b_1^*, b_2^*, \ldots, b_N^*)$ is an optimal transmission scheme with the minimum total transmission cost. We must have $b_i^* \geq b_j^*$ whenever $w_i \leq w_j$.
Proof.
We omit the detailed proof due to its simplicity: if $b_i^* < b_j^*$ while $w_i \leq w_j$, exchanging the values of $b_i^*$ and $b_j^*$ preserves all the constraints in (10), which are symmetric over the clients, without increasing the total cost. ∎
Based on the above theorem, we can conclude that a client with a lower transmission cost needs to transmit no fewer packets than a client with a higher transmission cost.
Corollary 1
Constraint (10) of the ILP can be replaced by the following Constraints (11) and (12) without changing the optimal solution:
$\sum_{i=N-d+1}^{N} b_i \geq \frac{d(d-1)}{2}, \quad 2 \leq d \leq N-1,$   (11)
$b_1 \geq b_2 \geq \cdots \geq b_N \geq 0.$   (12)
Proof.
For any $T \subseteq \mathcal{N}$ with $|T| = d$, constraint (12) implies $\sum_{i \in T} b_i \geq \sum_{i=N-d+1}^{N} b_i$, since the clients are ordered with $w_1 \leq \cdots \leq w_N$ and the last $d$ clients send the fewest packets. From Theorem 2, constraint (12) does not exclude any optimal solution. Thus, for any $d$ clients, the total number of packets they send must be no less than $\frac{d(d-1)}{2}$. That is, for any $T$ with $|T| = d$, we have
$\sum_{i \in T} b_i \geq \sum_{i=N-d+1}^{N} b_i \geq \frac{d(d-1)}{2},$   (13)
where $2 \leq d \leq N-1$.
From the above inequality, we obtain that for any $d$ clients, where $2 \leq d \leq N-1$, the total number of packets they need to send is at least $\frac{d(d-1)}{2}$, i.e., Constraint (10) holds, which thus proves the corollary. ∎
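For small $N$, the ILP can be solved by brute force, which also illustrates Theorem 2. The sketch below (hypothetical Python; the cost vector is an assumed example, and each $b_i \leq N-1$ suffices since client $i$ holds only $N-1$ packets) enumerates all transmission schemes and keeps the cheapest feasible one:

```python
from itertools import combinations, product

def feasible_full(b):
    """Constraint (10): every d clients send at least d(d-1)/2 packets."""
    n = len(b)
    return all(sum(b[i] for i in T) >= d * (d - 1) // 2
               for d in range(2, n)
               for T in combinations(range(n), d))

def min_cost(w):
    """Brute-force the ILP for small N (clients ordered by nondecreasing
    cost w).  Returns (cost, b) for the cheapest feasible scheme."""
    n = len(w)
    best = None
    for b in product(range(n), repeat=n):   # b_i in 0..N-1 is enough
        if feasible_full(b):
            c = sum(wi * bi for wi, bi in zip(w, b))
            if best is None or c < best[0]:
                best = (c, b)
    return best

cost, b = min_cost([1, 2, 3, 4])     # hypothetical example costs
assert b[0] >= b[1] >= b[2] >= b[3]  # Theorem 2: cheaper clients send more
```

With these example costs the optimum assigns the most transmissions to the cheapest client and none to the most expensive one, exactly the monotone pattern of Constraint (12).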
IV-B Illustration with an Example
We consider a network with four clients as an example, where the transmission cost of each client is given in Fig. 2. As shown in Fig. 2, the total transmission cost of our transmission scheme is lower than that of the transmission scheme proposed in [1], which aims to minimize the total number of transmissions. In addition, we can easily check that, with the code design in our scheme, each client can decode its "wanted" packets. Fig. 2 also verifies the result given in Theorem 2, i.e., the clients with lower transmission costs send more packets than the clients with higher transmission costs.
IV-C Performance Analysis with Random Network Coding
With the ILP in Eq. (9) and Constraints (11) and (12), we can obtain the optimal number of packets that each client should send so as to minimize the total transmission cost. To guarantee that each client can finally decode its "wanted" packets, we can design a deterministic code as introduced in Lemma 1. However, the deterministic encoding matrix needs to be designed centrally, which may incur high overhead. Instead, we let each client use random linear network coding to locally determine the encoding vectors of the packets it sends.
We let each client locally perform random linear network coding over the packets that it initially holds, where the number of encoded packets that each client should generate is determined by the ILP given in the previous subsection.
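One trial of this distributed scheme can be simulated as follows (hypothetical Python, for illustration only; for simplicity the sketch works over the prime field $GF(257)$ rather than an extension field such as $GF(2^8)$, which does not affect the rank argument):

```python
import random
from itertools import combinations

def rank_mod_p(rows, p):
    """Rank of a matrix (list of rows) by Gauss-Jordan elimination over GF(p)."""
    rows = [list(r) for r in rows]
    rank = 0
    cols = len(rows[0]) if rows else 0
    for c in range(cols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][c], p - 2, p)          # modular inverse, p prime
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][c] % p:
                f = rows[r][c]
                rows[r] = [(x - f * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def simulate(n, b, p=257, seed=0):
    """One trial of random linear coding: client i broadcasts b[i] random
    combinations of its own packets; return True iff every client's
    receiving matrix has full rank on its wanted packets."""
    rng = random.Random(seed)
    packets = [frozenset(e) for e in combinations(range(n), 2)]
    idx = {e: k for k, e in enumerate(packets)}
    sent = []                                   # (sender, coefficient vector)
    for i in range(n):
        for _ in range(b[i]):
            v = [0] * len(packets)
            for e in packets:
                if i in e:                      # client i holds only x_{i,j}
                    v[idx[e]] = rng.randrange(p)
            sent.append((i, v))
    for i in range(n):
        want = [k for k, e in enumerate(packets) if i not in e]
        recv = [[v[k] for k in want] for (s, v) in sent if s != i]
        if rank_mod_p(recv, p) < len(want):     # cannot decode all wanted packets
            return False
    return True
```

A client subtracts the contribution of its own known packets from each received combination, so decodability reduces to the rank of the received coefficients restricted to the wanted columns, which is what the simulation tests.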
Before analyzing further results, we introduce the following Schwartz–Zippel lemma [12].
Lemma 3
(Schwartz–Zippel lemma [12]) Let $P(z_1, \ldots, z_n)$ be a nonzero polynomial of total degree $d$ over a field $\mathbb{F}$. Let $S$ be a finite subset of $\mathbb{F}$, and let the value of each $z_i$ be selected independently and uniformly at random from $S$. Then the probability that the polynomial evaluates to zero is at most $d/|S|$, i.e., $\Pr[P(z_1, \ldots, z_n) = 0] \leq d/|S|$.
Based on the above lemma, we can derive the following lower bound.
Theorem 3
With random linear network coding and the transmission scheme obtained by the ILP, the probability that each client can finally decode its "wanted" packets is at least
$1 - \frac{(N-1)(N-2)}{2q},$   (14)
where $q$ is the field size.
Proof.
As shown in Theorem 1, with the transmission scheme obtained by the ILP, the total number of packets sent by any $d$ clients is at least $\frac{d(d-1)}{2}$, so there exists a feasible choice of the coefficients such that the local receiving matrix $A_i$ of each client $i$ has rank $\frac{(N-1)(N-2)}{2}$.
For a matrix with maximum rank $\frac{(N-1)(N-2)}{2}$, the determinant of a maximal square submatrix is a polynomial of degree at most $\frac{(N-1)(N-2)}{2}$ in the random coefficients, and it is not identically zero by the feasibility argument above. According to Lemma 3, the probability that this determinant equals zero is at most $\frac{(N-1)(N-2)}{2q}$. Hence, the probability that the determinant is nonzero is at least
$1 - \frac{(N-1)(N-2)}{2q},$
where $q$ is the field size.
Thus, the probability that client $i$ can finally decode its "wanted" packets from its local receiving matrix $A_i$ is at least $1 - \frac{(N-1)(N-2)}{2q}$. ∎
Based on the above theorem, when the number of clients is fixed, we can increase the field size to raise the probability that each client can finally decode its "wanted" packets. The lower bound on this probability is shown in Table I.
TABLE I: Lower bound on the probability that each client can decode its "wanted" packets

          N=4     N=6     N=8     N=10    N=12
          K=6     K=15    K=28    K=45    K=66
  q=256   0.9883  0.9609  0.9180  0.8594  0.7852
  q=512   0.9941  0.9805  0.9590  0.9297  0.8926
For example, when the total number of clients is $N = 12$, so that $K = 66$ packets need to be exchanged, the probability that each client can decode its "wanted" packets is more than $0.785$, if we randomly select the coefficients from $GF(2^8)$.
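Assuming the bound of Theorem 3 takes the form $1 - \frac{(N-1)(N-2)}{2q}$, which is consistent with the entries of Table I, the table can be reproduced directly (hypothetical Python):

```python
def decode_prob_lower_bound(n, q):
    """Lower bound of Theorem 3: 1 - (N-1)(N-2) / (2q)."""
    return 1 - (n - 1) * (n - 2) / (2 * q)

# Reproduce the q = 256 row of Table I (values rounded to 4 decimals there):
for n, expected in [(4, 0.9883), (6, 0.9609), (8, 0.9180),
                    (10, 0.8594), (12, 0.7852)]:
    assert abs(decode_prob_lower_bound(n, 256) - expected) < 5e-5
```

Note that the bound depends only on the size $(N-1)(N-2)/2$ of each client's wanted set and on the field size $q$, not on the cost vector.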
V Conclusion
In this paper, we design a network coded cooperative information exchange scheme that minimizes the total transmission cost for exchanging third-party information. We derive a necessary and sufficient condition for a feasible transmission scheme: for any $d$ clients, where $2 \leq d \leq N-1$, if the total number of packets they send is at least $\frac{d(d-1)}{2}$, there exists a feasible code design ensuring that each client can finally obtain its "wanted" packets. We further formulate the problem of minimizing the total transmission cost for third-party information exchange as an integer linear program. Our analysis also shows that the clients with lower transmission costs should send more packets than the clients with higher transmission costs. Finally, based on the transmission scheme obtained by the ILP, we provide a lower bound on the probability that each client can decode its "wanted" packets when random network coding is used.
VI Acknowledgements
This research is partly supported by the International Design Center (grant no. IDG31100102 & IDD11100101). Li’s work is partially supported by NSF under the Grants No CCF082988, CMMI0928092, and OCI1133027.
References
 [1] D. J. Love, B. M. Hochwald, and K. Krishnamurthy, "Exchanging third-party information in a network," in UCSD Information Theory and Applications Workshop, 2007.
 [2] O. Aluko, B. Clerckx, D. Love, and J. Krogmeier, "Enhanced limited-coordination strategies for multiuser MIMO systems," in Asilomar Conference on Signals, Systems and Computers, 2010.
 [3] S. El Rouayheb, A. Sprintson, and P. Sadeghi, "On coding for cooperative data exchange," in Proceedings of IEEE Information Theory Workshop (ITW), Jan. 2010, pp. 1–5.
 [4] R. Ahlswede, N. Cai, S. Li, and R. Yeung, "Network information flow," IEEE Trans. on Information Theory, vol. 46, pp. 1204–1216, 2000.
 [5] S. Katti, H. Rahul, W. Hu, D. Katabi, M. Medard, and J. Crowcroft, "XORs in the air: Practical wireless network coding," in ACM SIGCOMM, 2006.
 [6] X. Wang, C. Yuen, and Y. Xu, "Joint rate selection and wireless network coding for time critical applications," in IEEE Wireless Communications and Networking Conference (WCNC), 2012.
 [7] A. Sprintson, P. Sadeghi, G. Booker, and S. El Rouayheb, "A randomized algorithm and performance bounds for coded cooperative data exchange," in Proceedings of IEEE International Symposium on Information Theory (ISIT), Jun. 2010, pp. 1888–1892.
 [8] N. Milosavljevic, S. Pawar, S. El Rouayheb, M. Gastpar, and K. Ramchandran, "Deterministic algorithm for the cooperative data exchange problem," in Proceedings of IEEE International Symposium on Information Theory (ISIT), Aug. 2011, pp. 410–414.
 [9] D. Ozgul and A. Sprintson, "An algorithm for cooperative data exchange with cost criterion," in Proceedings of Information Theory and Applications Workshop (ITA), Feb. 2011, pp. 1–4.
 [10] S. Tajbakhsh, P. Sadeghi, and R. Shams, "A generalized model for cost and fairness analysis in coded cooperative data exchange," in Proceedings of IEEE International Symposium on Network Coding (NetCod), Jul. 2011, pp. 1–6.
 [11] M. Yan and A. Sprintson, "Weakly secure network coding for wireless cooperative data exchange," in Proceedings of IEEE Global Telecommunications Conference (GLOBECOM), Houston, 2011.
 [12] R. Motwani and P. Raghavan, Randomized Algorithms. Cambridge, U.K.: Cambridge Univ. Press, 1995.