Bayesian Quantized Network Coding via Belief Propagation

Abstract

In this paper, we propose an alternative to routing-based packet forwarding, which uses network coding to increase transmission efficiency in terms of both compression and error resilience. This non-adaptive encoding, called quantized network coding, involves random linear mapping in the real field, followed by quantization to cope with the finite capacity of the links. At the gateway node, which collects the received quantized network coded packets, minimum mean squared error decoding is performed by using belief propagation on a factor graph representation. Our simulation results show a significant improvement in terms of the number of packets required to recover the messages, which can be interpreted as an embedded distributed source coding for correlated messages.

1 Introduction

Data gathering in sensor networks has drawn attention to network coding [1] as an alternative to routing-based packet forwarding [2], because of its flexibility and its robustness to network changes and link failures. In the case of correlated messages, performing network coding on top of distributed source coding [3] is shown to be optimal in terms of the achievable information rates [4]. However, appropriate encoding rates have to be known at the encoders, which requires transmitting overhead information and affects the flexibility and distributed nature of sensor networks.

Recently, the possibility of adopting non-adaptive joint source network coding has been studied by using the concepts of compressed sensing [5] and sparse recovery [6]. In a first major work formulating and investigating the theoretical feasibility of compressed sensing based network coding, we proposed Quantized Network Coding (QNC) [10], which involves random linear network coding in the real field, followed by quantization. In [10], our decoding scheme was based on ℓ1-minimization (using linear programming [13]), which is shown to be optimal for the recovery of exactly sparse messages from noiseless measurements. In this paper, we study optimal Minimum Mean Squared Error (MMSE) decoding and the feasibility of its implementation in practical cases. Specifically, we propose a near optimal MMSE decoding based on Belief Propagation (BP).

Quantized network coding using low density coefficients is described and formulated in Section 2. We discuss optimal MMSE decoding for our QNC scenario with a known prior in Section 3. In Section 4, we describe our BP based MMSE decoding, followed by our simulation results in Section 5. Finally, our concluding remarks are presented in Section 6.

2 Quantized Network Coding

Consider a network (graph), , with the set of nodes , and the set of directed edges (links), . Each edge, , can maintain a lossless transmission from to at a maximum rate of bits per use. The input content of edge (which is the same as its output content) at time index is represented by , and is drawn from a finite alphabet of size , where is the block length transmitted in each time slot between and . For each node, , we define the sets of incoming edges, , and outgoing edges, . Moreover, each node has a random information source, , for which there is a transform matrix, , such that and is -sparse. Throughout this paper, realizations of random variables are represented by lower case letters; in particular, the outcome realization of is represented by . We study (single session) data gathering, where all the messages, ’s, are to be transmitted to a single node, called the decoder (or gateway), .

We define QNC at each node, , as follows [10]:

where is the quantizer (designed based on the value of and , and the distribution of the incoming contents and messages) associated with the outgoing edge . The corresponding network coding coefficients, and , are real-valued: , and satisfy the normalizing condition of (3) in [10]. An initial rest condition is also assumed to be satisfied in our QNC scenario:

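As an illustrative sketch (not part of the original formulation), the QNC operation at a single node can be written in Python. The names beta, alpha, and the midtread uniform quantizer below are hypothetical stand-ins for the network coding coefficients and the edge quantizers:

```python
import numpy as np

def uniform_quantize(value, step):
    """Midtread uniform quantizer (illustrative stand-in for the edge quantizer)."""
    return step * np.round(value / step)

def qnc_node_step(incoming, message, beta, alpha, step):
    """One QNC time step at a node: quantize a random linear combination
    of the incoming edge contents and the node's own message."""
    linear = float(np.dot(beta, incoming)) + alpha * message
    return uniform_quantize(linear, step)

# Two incoming edges with contents 0.4 and -1.2; local message 0.7.
out = qnc_node_step(np.array([0.4, -1.2]), 0.7,
                    beta=np.array([0.5, 0.3]), alpha=0.8, step=0.1)
```

The quantization step here plays the role of matching the real-valued combination to the finite capacity of the outgoing link.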
Denoting the quantization noise at edge by , we have:

This is equivalent to:

where , and:

Representing the marginal measurements (received packets to the decoder) at time , by , we have:

where:

We store marginal measurements, over time, and build up a total measurements vector, :

As a result of the linearity of the QNC scenario (Equation 2), we have [10]:

where and are called total measurement matrix and total effective noise vector, respectively.

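The build-up of the total measurements vector over time can be sketched as follows (a minimal Python illustration with assumed dimensions; A_t, z_tot, and A_tot are illustrative names for the per-slot measurement matrices and the stacked total quantities):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m_slot, T = 20, 2, 4   # messages, gateway in-edges, time slots (assumed sizes)

x = rng.normal(size=n)    # stacked messages (illustrative)

# One marginal measurement per time slot: z_t = A_t @ x + n_t,
# where n_t is the effective (quantization) noise at slot t.
slot_mats, slot_meas = [], []
for t in range(T):
    A_t = rng.normal(size=(m_slot, n)) / np.sqrt(m_slot * T)
    n_t = 0.01 * rng.normal(size=m_slot)
    slot_mats.append(A_t)
    slot_meas.append(A_t @ x + n_t)

# Stacking over time yields the total measurement vector and matrix.
z_tot = np.concatenate(slot_meas)
A_tot = np.vstack(slot_mats)
```

Note that the stacked system remains under-determined (8 measurements for 20 messages in this sketch), which is what makes the sparse prior essential for recovery.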
In [10], a compressed sensing (i.e., ℓ1-min) decoding is used to reconstruct from noisy under-determined measurements, ’s. Being able to recover different values, ’s, from measurements, where is usually much less than , can be interpreted as an embedded distributed compression of the inter-node correlated ’s. Although this is feasible with respect to some distortion, the proposed ℓ1-min decoding does not offer an optimal solution, especially when a prior on is available that carries more information than sparsity alone. In this paper, we address optimal MMSE decoding in a Bayesian QNC scenario by studying the computational complexity of implementing such a decoder. Motivated by the work in [14], a near optimal implementation of MMSE decoding based on belief propagation, and the appropriate design of network coding coefficients, are discussed.

3 Minimum Mean Square Error Decoding

The a priori model used to characterize the messages is a Gaussian mixture model. Specifically, we consider states, ’s ( is the ’th element of vector ), corresponding to the ’s, which are independent binary random variables with:

Each state, , determines if is zero or not:

Therefore, the prior of the independently modelled ’s is as follows:

where is the Dirac delta function, and it also implies:

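A minimal sketch of drawing messages from this spike-and-slab Gaussian mixture prior (in Python; the parameter names p for the activity probability and sigma for the slab standard deviation are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_messages(n, p, sigma):
    """Draw n messages from the spike-and-slab prior: each state is
    Bernoulli(p); the message is 0 when inactive, N(0, sigma^2) when active."""
    s = rng.random(n) < p
    x = np.where(s, rng.normal(0.0, sigma, size=n), 0.0)
    return x, s

x, s = sample_messages(10_000, p=0.1, sigma=1.0)
sparsity = s.mean()   # empirical fraction of non-zero messages, close to p
```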
To simplify the notation, we define to be the transform coefficient from to , describing the transfer of message information through the network. When the quantization noises, ’s, have small variance compared to that of the signal, , the variance of the ’s can be approximated by the variance of the noiseless propagated information, that is:

Moreover, the ’s are approximately independent, and their variance, , is proportional to the variance of the corresponding quantizer input, . Hence:

where is a positive scalar, depending on the quantizer design. Defining

the effective total measurement noise, , can be formulated as:

This implies:

where is the diagonal covariance matrix of quantization noises:

The MMSE estimation of is calculated according to:

where:

and,

Now, having the prior of (Equation 6), the distribution of quantization noises, and the measurement equation of (Equation 5), one could calculate the posterior probability of and its MMSE estimate, . However, this entails a high computational complexity at the decoder, which makes it practically infeasible. To tackle this issue, near optimal MMSE decoding using Belief Propagation (BP) has been proposed [14]. Such decoders are based on the sum-product algorithm [15], which is widely used in the literature on low density parity check codes. In Section 4, we describe the BP based near optimal MMSE decoder used to recover the messages in the considered Bayesian framework.

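To see why exact MMSE decoding is costly, note that the estimate can be computed by enumerating all binary state patterns, which is exponential in the number of messages. A small Python sketch under assumed parameters (spike-and-slab prior with activity probability p and slab variance sigma2, white Gaussian measurement noise of variance nu2; not the paper's decoder, only a brute-force baseline):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def exact_mmse(z, A, p, sigma2, nu2):
    """Exact MMSE estimate of x from z = A @ x + w, w ~ N(0, nu2 I),
    under the spike-and-slab prior, by enumerating all 2**n state
    patterns -- exponential in n, hence infeasible for large networks."""
    m, n = A.shape
    est = np.zeros(n)
    norm = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        s = np.array(bits, dtype=bool)
        # Marginal covariance of z given the states s.
        C = sigma2 * A[:, s] @ A[:, s].T + nu2 * np.eye(m)
        weight = np.prod(np.where(s, p, 1.0 - p))       # prior P(s)
        like = np.exp(-0.5 * z @ np.linalg.solve(C, z)) / np.sqrt(np.linalg.det(C))
        post = weight * like                            # unnormalized P(s | z)
        # Conditional mean E[x | z, s]: non-zero only on the active entries.
        mean = np.zeros(n)
        if s.any():
            mean[s] = sigma2 * A[:, s].T @ np.linalg.solve(C, z)
        est += post * mean
        norm += post
    return est / norm

n, m = 8, 5
A = rng.normal(size=(m, n))
x_true = np.where(rng.random(n) < 0.2, rng.normal(size=n), 0.0)
z = A @ x_true + 0.05 * rng.normal(size=m)
x_hat = exact_mmse(z, A, p=0.2, sigma2=1.0, nu2=0.05**2)
```

Even at n = 8, the loop already visits 256 patterns; at realistic network sizes the enumeration is hopeless, which is what motivates the BP approximation of the next section.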
4 MMSE Decoding via Belief Propagation

Belief propagation1 is used to calculate an approximate version of the posterior probability when a low density factor graph representation of the random linear measurements is available [14]. In [16], BP decoding is extended to recover from random linear measurements even when the graph representation is dense.

Consider the QNC measurement equation of (Equation 5), where the elements of the total effective noise, ’s, are dependent. By the eigendecomposition of their covariance matrix,

we define:

and

for which we have:

In Equation 10, is as follows:

and the ’s are uncorrelated with unit variance. We also assume that the marginal quantization noises, ’s, can be approximated by a Gaussian distribution. As a result of this assumption, the ’s are independent zero mean Gaussian random variables with unit variance.

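The whitening step described above can be sketched as follows (Python; cov and W are illustrative names for the effective noise covariance and the whitening transform built from its eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(3)

def whiten(z, A, cov):
    """Whiten correlated measurement noise: with cov = U diag(lam) U.T,
    the transform W = diag(lam**-0.5) @ U.T gives z' = W z = (W A) x + n',
    where n' is uncorrelated with unit variance."""
    lam, U = np.linalg.eigh(cov)
    W = np.diag(1.0 / np.sqrt(lam)) @ U.T
    return W @ z, W @ A, W

# Illustrative system with correlated effective noise.
m, n = 6, 12
A = rng.normal(size=(m, n))
B = rng.normal(size=(m, m))
cov = B @ B.T + 0.1 * np.eye(m)                 # positive definite covariance
noise = np.linalg.cholesky(cov) @ rng.normal(size=m)
z = A @ rng.normal(size=n) + noise

z_w, A_w, W = whiten(z, A, cov)
```

After this transform, each row of the whitened system carries an independent unit-variance Gaussian noise term, matching the assumption used by the factor graph constraints below.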
The equivalent linear measurement equation of (Equation 10), which characterizes the QNC scenario, can be represented by a factor graph, as shown in Fig. ?. In this graph, each constraint node, , , (gray node) is connected to the subset of variable nodes (white nodes), , , for which . After enough belief-passing iterations between the nodes of the factor graph, an approximate version of the posterior probability of the ’s may be obtained [14].

In the following, we describe BP based decoding for our Bayesian QNC scenario:

  1. The variable nodes have their prior information, i.e. , as an initial belief to start with. Explicitly, node sends this probability density function (PDF), , to its neighbour constraint nodes, .

  2. The received beliefs at the constraint node , as well as the corresponding measurement, , are used to calculate a backward belief. Specifically, for each , where , (Equation 11) leads to the update equation in ( ?).2 In ( ?), is the PDF of a zero mean Gaussian random variable with unit variance; and represent the convolution operator and the iteration index, respectively.

  3. At the variable node , given the received backward beliefs from the neighbour nodes, , and a priori of , the forward beliefs are updated according to:

    where is a normalizing constant, ensuring the unit integral of . Given the posterior probabilities, one may calculate the BP based MMSE estimate of the ’s:

    and the corresponding , as an approximation for .

  4. This procedure is repeated by going back to step 2 until some convergence criterion, such as:

    is met (where controls the precision of decoding).

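The belief combination and MMSE estimation of steps 1–3 can be illustrated for a toy case with a single unknown measured through a few unit-variance Gaussian constraints, with beliefs represented on a discretized grid (a hypothetical sketch; the full decoder iterates such updates over the factor graph until the convergence criterion of step 4 is met):

```python
import numpy as np

def grid_posterior_mmse(z, a, p, sigma, grid):
    """Combine the spike-and-slab prior with Gaussian constraint beliefs
    on a grid, normalize to unit integral, and return the posterior mean
    (the MMSE estimate of step 3)."""
    dx = grid[1] - grid[0]
    # Prior belief: a narrow spike at 0 stands in for the Dirac delta.
    spike = np.isclose(grid, 0.0, atol=dx / 2).astype(float) / dx
    slab = np.exp(-grid**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    belief = (1 - p) * spike + p * slab
    for zi, ai in zip(z, a):
        # Backward belief from the constraint z_i = a_i x + w_i, w_i ~ N(0, 1).
        belief = belief * np.exp(-(zi - ai * grid) ** 2 / 2)
    belief /= belief.sum() * dx          # normalizing constant
    return float(np.sum(grid * belief) * dx)

grid = np.linspace(-5.0, 5.0, 2001)
rng = np.random.default_rng(4)
a = np.array([1.0, -0.7, 0.5])
x_true = 1.5
z = a * x_true + 0.1 * rng.normal(size=3)   # measurements of the lone unknown
x_hat = grid_posterior_mmse(z, a, p=0.5, sigma=2.0, grid=grid)
```

The estimate is pulled toward zero by the spike component of the prior, which is exactly the shrinkage behaviour that distinguishes Bayesian MMSE decoding from plain least squares.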
5 Simulation Results


In this section, we evaluate the performance of the proposed QNC by comparing it with conventional routing based packet forwarding. Specifically, we generate random deployments of a network with nodes and uniformly distributed edges, where one of the nodes is randomly picked to be the gateway (decoder) node. For each deployment, we also randomly generate ’s with a Gaussian mixture distribution, as described in (Equation 6). The messages are generated for different sparsity factors of , and .3 Furthermore, the sparsifying matrix, , is randomly generated and orthonormal.

For each deployment, we run QNC with different block lengths, , and decode the received packets at the decoder node to obtain . The network coding coefficients, ’s and ’s, used in the QNC scenario are generated such that the resulting is a dense Gaussian matrix. Specifically, the ’s are drawn from independent zero mean Gaussian distributions, and the rest of the ’s, , are set to zero. Moreover, the ’s are chosen to be locally orthonormal, as described explicitly in Theorem 3.1 of [10]. Both BP based MMSE decoding and ℓ1-min decoding are used to reconstruct the messages. The BP based decoder is as described in Section 4 and uses the implementation in [16]. The ℓ1-min decoding is described in Theorem 4.1 of [10] and uses the open source optimization toolbox in [17].

For each deployment, we also simulate packet forwarding to transmit the messages to the decoder node. The route used to forward the packets is optimized (in terms of delivery delay) and calculated using Dijkstra's algorithm [18]. The continuous valued messages are quantized at the source nodes by using a uniform quantizer with levels.

For each SNR (quality) level, the best choice of block length, , is found for both the QNC and packet forwarding scenarios, so as to minimize the corresponding delivery delay. We present the results by averaging them over different realizations of network deployments and messages. In Fig. ?, the resulting average SNR is depicted versus the average delivery delay, obtained for different sparsity factors and edge densities, using dense measurement matrices.

As shown in Figs. ?, ?, ?, the performance of QNC is better than that of routing based packet forwarding. Specifically, the adopted ℓ1-min decoder, proposed in [10], already outperforms packet forwarding for all SNR values. Using BP based MMSE decoding improves the performance further for some (especially low) SNR values, compared to ℓ1-min decoding. Moreover, as expected, when the sparsity factor, , of the messages increases (meaning higher correlation between the ’s), the gap between the QNC and packet forwarding curves widens. However, there is a drawback in using the BP decoder for some SNR values, especially when the sparsity factor of the messages, , is high (i.e., when there is not a high correlation between the ’s). Such cases can be explained by the propagation of quantization noise through the network, which increases the noise power in the measurements.

6 Conclusions

We have improved the throughput of sensor networks by introducing a network coding based approach for the transmission of correlated sensed data to a gateway node. Conventional linear network coding is combined with the concepts of Bayesian compressed sensing to efficiently embed distributed source coding in network coding. Furthermore, belief propagation has enabled near optimal decoding of quantized network coded messages while respecting computational resource constraints. Our simulation results show significant savings for QNC in terms of delivery delay when compared with conventional packet forwarding. Moreover, using the proposed BP based MMSE decoder for the QNC scenario further reduces the required delivery delay for (relatively) low SNR values. A remaining open issue in the study of BP based decoding is the derivation of theoretical bounds on the performance of our decoder; this would give a better understanding of the optimality of the adopted decoder with respect to infinite block length information theoretic bounds.

Acknowledgment

This work was supported by Hydro-Québec, the Natural Sciences and Engineering Research Council of Canada and McGill University in the framework of the NSERC/Hydro-Québec/McGill Industrial Research Chair in Interactive Information Infrastructure for the Power Grid.

Footnotes

  1. in some cases, also referred to as message passing
  2. In the BP update stage, the incoming beliefs (messages) to a node are assumed to be independent.
  3. Note that the value of does not affect the results, since it scales the variance of the quantization noises too.

References

  1. C. Fragouli, “Network coding for sensor networks,” Handbook on Array Processing and Sensor Networks, pp. 645–667, 2009.
  2. J. Al-Karaki and A. Kamal, “Routing techniques in wireless sensor networks: a survey,” IEEE Wireless Communications, vol. 11, no. 6, pp. 6–28, 2004.
  3. Z. Xiong, A. Liveris, and S. Cheng, “Distributed source coding for sensor networks,” IEEE Signal Processing Magazine, vol. 21, no. 5, pp. 80–94, 2004.
  4. J. Barros and S. Servetto, “Network information flow with correlated sources,” IEEE Transactions on Information Theory, vol. 52, no. 1, pp. 155–170, 2006.
  5. D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, pp. 1289–1306, April 2006.
  6. J. Haupt, W. Bajwa, M. Rabbat, and R. Nowak, “Compressed sensing for networked data,” IEEE Signal Processing Magazine, vol. 25, pp. 92–101, March 2008.
  7. C. Luo, F. Wu, J. Sun, and C. W. Chen, “Compressive data gathering for large-scale wireless sensor networks,” in Proceedings of the 15th annual international conference on Mobile computing and networking, MobiCom ’09, (New York, NY, USA), pp. 145–156, ACM, 2009.
  8. S. Feizi and M. Medard, “A power efficient sensing/communication scheme: Joint source-channel-network coding by using compressive sensing,” in Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on, pp. 1048–1054, IEEE, 2011.
  9. C. Luo, J. Sun, and F. Wu, “Compressive network coding for approximate sensor data gathering,” in Global Telecommunications Conference (GLOBECOM 2011), IEEE, 2011.
  10. M. Nabaee and F. Labeau, “Quantized network coding for sparse messages,” arXiv preprint arXiv:1201.6271, 2012.
  11. F. Bassi, L. Chao, L. Iwaza, M. Kieffer, et al., “Compressive linear network coding for efficient data collection in wireless sensor networks,” in Proceedings of the 2012 European Signal Processing Conference, pp. 1–5, 2012.
  12. M. Nabaee and F. Labeau, “Restricted isometry property in quantized network coding of sparse messages,” arXiv preprint arXiv:1203.1892, 2012.
  13. E. Candes and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
  14. D. Baron, S. Sarvotham, and R. Baraniuk, “Bayesian compressive sensing via belief propagation,” IEEE Transactions on Signal Processing, vol. 58, no. 1, pp. 269–280, 2010.
  15. F. Kschischang, B. Frey, and H. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
  16. S. Rangan, “Estimation with random linear mixing, belief propagation and compressed sensing,” CoRR, vol. abs/1001.2228, 2010.
  17. M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 1.21.” http://cvxr.com/cvx, 2011.
  18. E. Dijkstra, “A note on two problems in connexion with graphs,” Numerische mathematik, vol. 1, no. 1, pp. 269–271, 1959.