Quantized Network Coding for Correlated Sources

Abstract

Non-adaptive joint source network coding of correlated sources is discussed in this paper. By studying the information flow in the network, we propose quantized network coding as an alternative to packet forwarding. This technique simultaneously offers the advantages of both network coding and distributed source coding. Quantized network coding combines random linear network coding in the (infinite) field of real numbers with quantization to cope with the limited capacity of links. With the aid of results from the compressed sensing literature, we discuss the theoretical and practical feasibility of quantized network coding in lossless networks. We show that, because of the field it operates in, quantized network coding can provide good-quality decoding at a sink node from a reduced number of received packets. Specifically, we discuss the required conditions on the local network coding coefficients by using the restricted isometry property, and suggest a design that yields appropriate linear measurements. Finally, our simulation results show the achieved gain in terms of delivery delay, compared to conventional routing-based packet forwarding.

1 Introduction

Flexible, low-cost, and long-lasting implementations of wireless sensor networks have made them a compelling alternative to conventional wired sensing structures in a wide variety of applications, including medicine, transportation, and military [1]. As a relatively new technology, its trends and challenges lie more in the networking aspects of communication than in classic physical-layer issues [2]. One of these challenges is the gathering of sensed data at a central node of the network, where delivery delay, precision, and robustness to network changes are emerging issues.

As the conventional way of transmitting data in networks, packet forwarding via routing is widely used in different implementations of sensor networks. While it achieves capacity rates in the case of lossless networks [3], packet forwarding requires an appropriate routing protocol [4] to be run. In the case of correlated sources, distributed source coding [5] on top of packet forwarding has been proved optimal in terms of achieved conditional information rates [7]. However, packet forwarding can lead to difficulties because of its need for queuing and its slow adaptation to network changes caused by the deployment of new nodes or by link failures.

These issues, among others, have motivated the invention of network coding [3] as an alternative to packet forwarding in sensor networks [8]. Specifically, in network coding the intermediate nodes send a function of their incoming packets, as opposed to forwarding their original content. Furthermore, the use of random linear functions, known as random linear network coding, has been proved sufficient in lossless networks [10]. Moreover, theoretical analysis shows that when network coding is used for transmission, no queuing is required to achieve the optimal information rates [3]. Network coding in lossy networks can also result in improved achievable rate regions, compared to packet forwarding [12].

Similar to packet forwarding, network coding can be applied separately on top of distributed source coding for correlated sources [14]. On the other hand, one has to perform joint source network decoding in order to achieve optimal performance limits, which may not be feasible because of its computational complexity [15]. Sub-optimal solutions have been proposed to tackle this practicality issue [16], by using low-density codes and the sum-product algorithm [19] for decoding. Like distributed source coding, which requires knowledge of appropriate marginal rates at each encoder node, these approaches need some knowledge of the correlation model of the sources at the encoders' side. Such knowledge of appropriate rates may be a luxury in some cases, especially when the correlation changes over time and needs to be updated frequently. Hence, it is essential to study the possibility of developing non-adaptive joint source network coding for such cases. In this paper, we aim to develop a non-adaptive random linear network coding scheme for efficient joint distributed source network coding of correlated sources in sensor networks.

Recently, the idea of using compressed sensing [20] and sparse recovery concepts in sensor networks has drawn attention [22]. For instance, in [26], a theoretical discussion of sparse recovery from graph-constrained measurements is presented, with an interest in network monitoring applications. Joint source, channel, and network coding was proposed in [28], where random linear mixing was used for compression of temporally and spatially correlated sources. In [29], the practical possibility of finite field network coding of highly correlated sources was investigated, with the aid of low-density codes and belief-propagation-based decoding. However, a solid theoretical investigation of the feasibility of adopting sparse recovery in random linear network coding has not been done previously.

In our earlier work [30], we proposed non-adaptive joint source network coding of exactly sparse sources, with the aid of the results in compressed sensing literature. In this paper, we extend our work to the general case of correlated sources and discuss theoretical and practical aspects of having robust distributed compression.

A detailed description of the data gathering scenario and our notation is presented in Section 2. In Section 3, we introduce and formulate our proposed quantized network coding, followed in Section 4 by a discussion of its theoretical feasibility using the restricted isometry property. In Section 5, we present the decoding algorithm used to recover messages from quantized network coded packets, and derive a performance bound on its recovery error. Our simulation setup and results are presented in Section 6. Finally, in Section 7, we conclude the paper with a discussion of our proposed method and ongoing work on this topic.

2 Problem Description and Notation

In this paper, we consider a lossless model of networks, in which the links have limited capacities. Although this may not be a perfect model in practical cases where the links experience mutual interference, it still reflects the effect of such imperfections through the single-input single-output capacity of the links. Future work may study the case of noisy networks by explicitly modeling the interference between links.

As shown in Fig. ?, we represent the network by a directed graph, $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ and $\mathcal{E}$ are the sets of nodes (vertices) and directed edges (links), respectively. Each node, $v$, belongs to the finite sorted set $\mathcal{V} = \{1, \dots, n\}$, and each edge, $e$, belongs to the finite sorted set $\mathcal{E} = \{1, \dots, |\mathcal{E}|\}$. Further, each edge (link) $e$ can maintain a lossless transmission from its tail node to its head node, at a maximum finite rate of $C_e$ bits per use. We define the sets of incoming and outgoing edges of node $v$, denoted by $In(v)$ and $Out(v)$, respectively, as follows:

$$In(v) = \{e \in \mathcal{E} : \mathrm{head}(e) = v\}, \qquad Out(v) = \{e \in \mathcal{E} : \mathrm{tail}(e) = v\}.$$

The input and output contents of edge $e$ at time index $t$ are represented at the two ends of the link, where $t$ is the discrete (integer) time index, during which a block of length $L$ is transmitted. Since the edges are lossless, the input and output contents coincide; we denote the common content by $Y_e(t)$, taking values in a finite alphabet of size $2^{\lfloor L C_e \rfloor}$, where $\lfloor \cdot \rfloor$ denotes truncation to the lower integer. In the rest of the paper, the realizations of all capital-letter random variables are denoted by the corresponding lower-case letters.

The nodes of the network are equipped with sensors; specifically, each node $v$ has an information source, $X_v$, where $X_v \in \mathbb{R}$. The sensed data are assumed to be correlated, which is a valid assumption in many different applications. We model the correlation between these sensed data by the near-sparseness property, since it can be considered a generalization of compressibility and sparseness. Specifically, defining the sorted vector of $X_v$'s,

$$\underline{x} = [X_1, X_2, \dots, X_n]^T,$$

we assume that $\underline{x}$ is near-sparse in some orthonormal transform domain $\phi$.¹ Explicitly, for $\underline{s} = \phi^T \underline{x}$ and a small positive $\epsilon$, we have:

$$\| \underline{s} - \underline{s}_k \|_{\ell_1} \leq \epsilon,$$

where $\underline{s}_k$ is such that:

$$\| \underline{s}_k \|_{\ell_0} \leq k,$$

obtained by keeping the $k$ largest-magnitude entries of $\underline{s}$, and is called $k$-sparse. An example of the sparsifying transform matrix, $\phi$, is the Karhunen-Loève transform of the messages.
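To fix ideas, the near-sparseness condition can be checked numerically. The short numpy sketch below is ours (not from the paper), with our own function names; it computes the $\ell_1$ distance to the best $k$-term approximation:

```python
import numpy as np

def best_k_term(s, k):
    """Best k-term approximation: keep the k largest-magnitude entries of s."""
    s_k = np.zeros_like(s)
    idx = np.argsort(np.abs(s))[-k:]    # indices of the k largest entries
    s_k[idx] = s[idx]
    return s_k

def is_near_sparse(x, phi, k, eps):
    """Check ||s - s_k||_1 <= eps for the coefficients s = phi^T x."""
    s = phi.T @ x                       # transform-domain coefficients
    return np.linalg.norm(s - best_k_term(s, k), 1) <= eps
```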

Having characterized the correlated information sources and the network, we study the transmission of the $X_v$'s to a single gateway node. The gateway or decoder node, denoted by $v_0 \in \mathcal{V}$, has high computational resources and is usually in charge of forwarding the information to a next-level network, e.g., a wired backbone network. The described (single session) incast of sources to the unique decoder node is referred to as data gathering. The purpose of this paper is to discuss the theoretical and practical feasibility of non-adaptive joint source network coding in the described data gathering scenario. More specifically, we take a compressed sensing approach to handle the transmission of the sensed data.

3 Quantized Network Coding

Random linear network coding for multicast of independent sources has been proposed and studied in [11], where the algebraic operations are in a finite field. Since our work is motivated by the concepts of compressed sensing, whose results are valid in the infinite field of real numbers, we have to use a real-field alternative to conventional finite field network coding. On the other hand, the finite capacity of the edges has to be reconciled with the infinite alphabet of symbols in the adopted real-field network coding. As a result, we propose Quantized Network Coding (QNC), which uses quantization to bridge between the limited capacity of the links and the infinite alphabet of real-field network coded packets.

In [30], for $t \geq 2$, we defined QNC at node $v$ according to:

$$Y_e(t) = Q_e\!\left( \sum_{e' \in In(v)} \beta_{e,e'}\, Y_{e'}(t-1) + \alpha_{e,v}\, X_v \right), \quad \forall\, e \in Out(v),$$

where $Y_e(1) = 0$, for all $e \in \mathcal{E}$, ensures an initial rest condition in the network. The messages, $X_v$'s, are assumed to be constant until the transmission is complete.² The local network coding coefficients, $\beta_{e,e'}$'s and $\alpha_{e,v}$'s, are real-valued and are usually picked semi-randomly. The quantizer operator, $Q_e(\cdot)$, corresponding to outgoing edge $e$, is designed based on the values of $L$ and $C_e$, and the distribution of its input (i.e., the random linear combinations). A simple diagram of QNC at node $v$ is shown in Fig. ?.
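To make the per-node operation concrete, the following minimal numpy sketch (ours, not from the paper) applies one QNC step at a single node; all names, and the exact uniform quantizer shape, are our own assumptions:

```python
import numpy as np

def uniform_quantizer(values, q_range, levels):
    """Sketch of a uniform quantizer over [-q_range, q_range] with `levels`
    cells; the paper only requires Q_e to match the rate 2^(floor(L*C_e))."""
    step = 2.0 * q_range / levels
    idx = np.clip(np.round(values / step), -(levels // 2), levels // 2 - 1)
    return idx * step

def qnc_node_step(y_in, x_v, beta, alpha, q_range, levels):
    """One QNC step at a node: quantized random linear combination of the
    incoming edge contents y_in and the local message x_v.
    beta: (n_out, n_in) local coefficients; alpha: (n_out,) message gains."""
    combo = beta @ y_in + alpha * x_v      # real-field linear combination
    return uniform_quantizer(combo, q_range, levels)
```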

Denoting the quantization noise of edge $e$ at time $t$ by $N_e(t)$, we can reformulate (Equation 2) as follows:

$$Y_e(t) = \sum_{e' \in In(v)} \beta_{e,e'}\, Y_{e'}(t-1) + \alpha_{e,v}\, X_v + N_e(t), \quad e \in Out(v).$$

We define the $|\mathcal{E}| \times |\mathcal{E}|$ adjacency matrix, $F$, and the $|\mathcal{E}| \times n$ matrix, $A$, such that:

$$\{F\}_{e,e'} = \begin{cases} \beta_{e,e'}, & e' \in In(v),\ e \in Out(v), \\ 0, & \text{otherwise}, \end{cases} \qquad \{A\}_{e,v} = \begin{cases} \alpha_{e,v}, & e \in Out(v), \\ 0, & \text{otherwise}. \end{cases}$$

We also define the vectors of edge contents, $\underline{y}(t) = [Y_1(t), \dots, Y_{|\mathcal{E}|}(t)]^T$, and quantization noises, $\underline{n}(t) = [N_1(t), \dots, N_{|\mathcal{E}|}(t)]^T$. As a result, Equation 3 can be re-written in the following form:

$$\underline{y}(t) = F\, \underline{y}(t-1) + A\, \underline{x} + \underline{n}(t).$$
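This recursion can be read directly as a simulation loop over the vector of edge contents. The sketch below is our own illustration, with `quantize` standing in for the elementwise application of the edge quantizers $Q_e(\cdot)$:

```python
import numpy as np

def simulate_qnc(F, A, x, quantize, t_max):
    """Iterate y(t) = quantize(F @ y(t-1) + A @ x) for t = 2..t_max,
    starting from rest, y(1) = 0; the quantization implicitly adds n(t)."""
    y = np.zeros(F.shape[0])
    history = [y.copy()]                    # y(1) = 0
    for _ in range(2, t_max + 1):
        y = quantize(F @ y + A @ x)         # one network-wide QNC step
        history.append(y.copy())
    return history                          # edge contents at t = 1..t_max
```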

Depending on the network deployment, a matrix $B$ defines the relation between the content of the edges, $\underline{y}(t)$, and the received packets at the decoder node. Explicitly, we define the vector of marginal measurements (received packets) at time $t$ at the decoder:

$$\underline{z}(t) = B\, \underline{y}(t),$$

where:

$$\{B\}_{i,e} = \begin{cases} 1, & e \text{ is the } i\text{-th edge in } In(v_0), \\ 0, & \text{otherwise}. \end{cases}$$
By considering (Equation 6) as the difference equation characterizing a linear system, with $\underline{x}$ and the $\underline{n}(t)$'s as its inputs and $\underline{z}(t)$ as its output, and using the results in [31], the $\underline{z}(t)$'s are given by:

$$\underline{z}(t) = \Psi(t)\, \underline{x} + \underline{n}_{\mathrm{eff}}(t),$$

where the marginal measurement matrix, $\Psi(t)$, and the marginal effective noise vector, $\underline{n}_{\mathrm{eff}}(t)$, are calculated as follows:

$$\Psi(t) = B \sum_{t'=2}^{t} F^{\,t-t'} A, \qquad \underline{n}_{\mathrm{eff}}(t) = B \sum_{t'=2}^{t} F^{\,t-t'}\, \underline{n}(t').$$
In Eqs. Equation 8, ?, the matrix power is defined as the repeated multiplication:

$$F^{\,0} = I_{|\mathcal{E}|}, \qquad F^{\,j} = \underbrace{F \cdot F \cdots F}_{j \text{ times}}, \quad j \geq 1.$$
By storing the $\underline{z}(t)$'s at the decoder, we build up the total measurements vector, $\underline{z}_{tot}(t)$, as follows:

$$\underline{z}_{tot}(t) = \left[ \underline{z}^T(2), \dots, \underline{z}^T(t) \right]^T,$$

whose length is $m = (t-1)\, |In(v_0)|$. Therefore, the following can be established:

$$\underline{z}_{tot}(t) = \Psi_{tot}(t)\, \underline{x} + \underline{n}_{\mathrm{eff},tot}(t),$$

where the total measurement matrix, $\Psi_{tot}(t)$, and the total effective noise vector, $\underline{n}_{\mathrm{eff},tot}(t)$, are the concatenation of the marginal measurement matrices, $\Psi(t')$'s, and the marginal effective noise vectors, $\underline{n}_{\mathrm{eff}}(t')$'s. Because of our assumption that transmission starts from rest at $t = 1$, the $\underline{z}(1)$'s are not useful for decoding, and therefore:

$$\Psi_{tot}(t) = \begin{bmatrix} \Psi(2) \\ \vdots \\ \Psi(t) \end{bmatrix}, \qquad \underline{n}_{\mathrm{eff},tot}(t) = \begin{bmatrix} \underline{n}_{\mathrm{eff}}(2) \\ \vdots \\ \underline{n}_{\mathrm{eff}}(t) \end{bmatrix}.$$
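Under this reconstruction, $\Psi_{tot}(t)$ can be accumulated iteratively, without forming matrix powers explicitly; a short numpy sketch of this computation (our own illustration):

```python
import numpy as np

def total_measurement_matrix(F, A, B, t_max):
    """Stack the marginal matrices Psi(t) = B @ (sum_{t'=2}^{t} F^(t-t') @ A)
    for t = 2..t_max into Psi_tot, using the recursion acc <- F @ acc + A."""
    acc = np.zeros((F.shape[0], A.shape[1]))
    blocks = []
    for t in range(2, t_max + 1):
        acc = F @ acc + A            # acc = sum_{t'=2}^{t} F^(t-t') @ A
        blocks.append(B @ acc)       # marginal measurement matrix Psi(t)
    return np.vstack(blocks)         # total measurement matrix Psi_tot(t_max)
```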

In conventional linear network coding, the number of total measurements, $m$, is at least equal to the number of data, $n$. More precisely, the total measurement matrix is of full column rank, which enables us to find a unique solution.³ In this paper, we are interested in investigating the feasibility of robust recovery of $\underline{x}$ when fewer measurements are received at the decoder than the number of messages, i.e., $m < n$.

Considering the characteristic equation (Equation 10) describing the QNC scenario, we can treat it as a compressed sensing measurement equation. This gives us the opportunity to apply results from the literature of compressed sensing and sparse recovery [20] to our QNC scenario with near-sparse messages. However, one needs to examine the conditions required to guarantee sparse recovery in the proposed QNC scenario. In the following, we discuss the theoretical and practical feasibility of robust recovery from a compressed sensing perspective.

4 Restricted Isometry Property

One of the advantages of compressed sensing is that it enables a non-adaptive design for the sensing of sparse signals, where the support (location of non-zero elements) is not known at the encoding side. As the price of this non-adaptive characteristic, we may need more measurements than the exact number of non-zero elements. Fortunately, if appropriate types of linear measurements are chosen, the required number of measurements can be kept much smaller than the number of messages; that is, $m \ll n$.

One of the properties widely used to characterize appropriate measurement matrices in the compressed sensing literature is the Restricted Isometry Property (RIP) [33]. Roughly speaking, it provides a measure of norm conservation under dimensionality reduction [34]. An $m \times n$ matrix $\Theta$ is said to satisfy RIP of order $k$ with constant $\delta_k$ if, for all $k$-sparse vectors $\underline{s}$, we have:

$$(1 - \delta_k)\, \|\underline{s}\|_{\ell_2}^2 \;\leq\; \|\Theta\, \underline{s}\|_{\ell_2}^2 \;\leq\; (1 + \delta_k)\, \|\underline{s}\|_{\ell_2}^2.$$
In [30], we proposed a design for the local network coding coefficients, $\beta_{e,e'}$'s and $\alpha_{e,v}$'s, which results in an appropriate total measurement matrix, $\Psi_{tot}(t)$, in the compressed sensing framework.

It is also numerically shown in [36] that a locally orthonormal set of $\beta_{e,e'}$'s is a better choice than non-orthonormal sets; that is, for all $v$ and all $e_1, e_2 \in In(v)$, we have:

$$\sum_{e \in Out(v)} \beta_{e,e_1}\, \beta_{e,e_2} = \delta(e_1 - e_2),$$

where $\delta(\cdot)$ denotes the Kronecker delta.
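One simple way to realize such locally orthonormal coefficients is via the QR decomposition of a random Gaussian matrix; the sketch below is our own construction, not necessarily the one used in [30, 36]:

```python
import numpy as np

def locally_orthonormal_beta(n_out, n_in, rng):
    """Draw random local coefficients whose rows (one per outgoing edge)
    are orthonormal; requires n_out <= n_in."""
    G = rng.standard_normal((n_in, n_out))
    Q, _ = np.linalg.qr(G)     # Q: (n_in, n_out), orthonormal columns
    return Q.T                 # (n_out, n_in), orthonormal rows

# Example: coefficients for a node with 3 incoming and 2 outgoing edges.
beta = locally_orthonormal_beta(2, 3, np.random.default_rng(0))
assert np.allclose(beta @ beta.T, np.eye(2))
```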

In [36], we established the relation between the satisfaction of RIP and the tail probability

$$\Pr\!\left(\, \left| \|\Theta\, \underline{s}\|_{\ell_2}^2 - \|\underline{s}\|_{\ell_2}^2 \right| > \delta\, \|\underline{s}\|_{\ell_2}^2 \,\right),$$

by proving the following theorem.

By using theorem ?, we can analyze the behavior of $\Psi_{tot}(t)$, resulting from the proposed local network coding coefficients in theorem ?. Specifically, we examine whether we can obtain the same tail probability as a Gaussian ensemble with the same order of measurements. Unfortunately, the complicated relation of the local network coding coefficients and network parameters with the resulting $\Psi_{tot}(t)$ (see Eqs. Equation 4, Equation 5, Equation 8, Equation 11) makes it difficult to derive a simple mathematical form for the tail probability and to reach a clean conclusion about the required number of measurements.

In Fig. ?, we present the numerical values of the tail probabilities (defined in Equation 12) for the resulting $\Psi_{tot}(t)$, using the proposed local network coding coefficients of theorem ?. These tail probabilities are compared with those of i.i.d. Gaussian matrices, versus the number of measurements, $m$, in each case.⁴
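Since a closed form is difficult to obtain, tail probabilities of this kind can be estimated by Monte Carlo sampling over random $k$-sparse vectors. The following numpy sketch reflects our reading of the quantity in Equation 12 and is only an illustration:

```python
import numpy as np

def tail_probability(theta, k, delta, trials, rng):
    """Estimate P( | ||theta @ s||^2 - 1 | > delta ) over random
    k-sparse unit-norm vectors s, by Monte Carlo sampling."""
    n = theta.shape[1]
    hits = 0
    for _ in range(trials):
        s = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        s[support] = rng.standard_normal(k)
        s /= np.linalg.norm(s)                     # unit-norm k-sparse vector
        if abs(np.linalg.norm(theta @ s) ** 2 - 1.0) > delta:
            hits += 1
    return hits / trials
```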

In the following section, we use the aforementioned conclusion about the resulting $\Psi_{tot}(t)$ to derive a performance bound for the QNC scenario.

5 Decoding using Sparse Recovery

Sparse recovery of exactly sparse data can be done by using linear programming [37], where the NP-hard $\ell_0$ minimization is replaced by $\ell_1$ minimization. Fortunately, this alteration of the cost function does not affect the optimality of recovery of exactly sparse vectors from noiseless measurements [37]. However, when dealing with noisy measurements, $\ell_1$-min recovery does not necessarily offer an optimal solution, and there is still much ongoing work on practical, near-optimal recovery algorithms for noisy cases. In the following, we discuss $\ell_1$-min recovery for the QNC scenario and establish theoretical bounds on its recovery error.

Motivated by the work in [20], the compressed sensing based decoder for the QNC scenario solves the following convex optimization:

$$\hat{\underline{x}}(t) = \phi\, \hat{\underline{s}}, \qquad \hat{\underline{s}} = \arg\min_{\underline{s}} \|\underline{s}\|_{\ell_1} \quad \text{s.t.} \quad \left\| \underline{z}_{tot}(t) - \Psi_{tot}(t)\, \phi\, \underline{s} \right\|_{\ell_2} \leq \epsilon_{rec},$$

which can be solved by using linear programming [37]. In the following, we present our results on the recovery error using $\ell_1$-min decoding of Equation 13.
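The simulations in Section 6 solve this program with the CVX package [38]; an equivalent sketch in Python using the cvxpy package (a substitution on our part, with our own names) could look like:

```python
import numpy as np
import cvxpy as cp

def l1_min_decode(z_tot, psi_tot, phi, eps_rec):
    """l1-min decoding: min ||s||_1 s.t. ||z_tot - Psi_tot @ phi @ s||_2 <= eps_rec,
    then map the recovered coefficients back to the message domain."""
    A_eff = psi_tot @ phi                  # effective sensing matrix
    s = cp.Variable(phi.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(s)),
                         [cp.norm(z_tot - A_eff @ s, 2) <= eps_rec])
    problem.solve()
    return phi @ s.value                   # recovered messages x_hat
```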

According to the preceding theorem, the upper bound on the recovery error decreases when the quantization steps are decreased. Since the quantization steps shrink as the block length grows, a smaller upper bound on the $\ell_2$-norm of the recovery error is obtained by increasing the block length, $L$. Although this can be done in practice, it simultaneously increases the point-to-point transmission delays in the network, which may not be desirable. This introduces a trade-off on the choice of block length: one has to find its appropriate value for a specific quality of service (i.e., recovery error).

Based on theorem ?, if the resulting $\Psi_{tot}(t)$ satisfies RIP of the appropriate order with high probability, then robust recovery can be guaranteed. On the other hand, using remarks ?, ?, we can say that the resulting $\Psi_{tot}(t)$ satisfies RIP with high probability while the number of measurements, $m$, has a smaller order than the number of messages, $n$. Therefore, putting these numerical and theoretical results together, QNC can achieve bounded-error recovery ( ?) with a smaller order of measurements (received packets at the decoder) than the number of messages. This saving in the required number of received packets can be interpreted as an embedded distributed compression, achieved by quantized network coding at the nodes.

6 Simulation Results

In this section, we evaluate the performance of quantized network coding through numerical simulations. We are interested in quantifying the compression gains resulting from QNC by obtaining delay-distortion curves in different scenarios.

Although we were able to derive mathematical performance measures for the QNC scenario, they are not comprehensive and do not offer guarantees on statistical performance measures, e.g., mean squared error. Deriving such statistical performance bounds requires substantially more theoretical work on sparse recovery; in the meantime, we rely on numerical evaluations.

We initiate our numerical evaluations by comparing the delay-quality performance of QNC and conventional routing-based packet forwarding for lossy transmission of a set of correlated sources (messages). To set up the simulations, we generate random deployments of directed networks with uniformly distributed edges (making sure that no pair of nodes has two assigned edges). The edges can maintain lossless communication of one bit per use, meaning $C_e = 1$, for all $e \in \mathcal{E}$. One of the nodes is randomly picked to be the gateway node, $v_0$, at which the messages are decoded. To generate a realization of the messages, $\underline{x}$, we first generate a $k$-sparse random vector, $\underline{s}_k$, whose non-zero components are uniformly distributed over a symmetric range. A near-sparse vector, $\underline{s}$, is obtained by adding a zero-mean uniform noise, such that $\|\underline{s} - \underline{s}_k\|_{\ell_1}$ is bounded by $\epsilon$. This is followed by the generation of an orthonormal random matrix, $\phi$, calculation of the random messages $\underline{x} = \phi\, \underline{s}$, and normalization of the range of the $X_v$'s. Different values of the sparsity factor, $k/n$, and near-sparsity parameter, $\epsilon$, are used in our simulations. A summary of the simulation parameters is presented in Table ?.
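A numpy sketch of this message-generation procedure follows; since the exact ranges and normalization constants are elided above, the ±1 range and peak normalization here are our own assumptions:

```python
import numpy as np

def generate_messages(n, k, eps, rng):
    """Generate near-sparse messages: a k-sparse vector plus an l1-bounded
    perturbation, mapped through a random orthonormal transform phi."""
    s_k = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    s_k[support] = rng.uniform(-1.0, 1.0, size=k)      # k-sparse part (assumed range)
    noise = rng.uniform(-1.0, 1.0, size=n)
    s = s_k + eps * noise / np.linalg.norm(noise, 1)   # ||s - s_k||_1 = eps
    phi, _ = np.linalg.qr(rng.standard_normal((n, n))) # random orthonormal phi
    x = phi @ s
    return x / np.max(np.abs(x)), phi                  # normalized messages (assumed)
```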

The parameters of the messages and the networks used in our simulations.
Parameter Value(s)
Number of nodes, $n$
Number of edges, $|\mathcal{E}|$
Block length, $L$
Sparsity factor, $k/n$
Near-sparsity parameter, $\epsilon$
Range of messages
Average [dB]

For each generated random network deployment, we perform QNC with $\ell_1$-min decoding. The local network coding coefficients, $\beta_{e,e'}$'s and $\alpha_{e,v}$'s, are generated according to the conditions of theorem ?. The degrees of freedom are limited by picking the $\beta_{e,e'}$'s such that they are locally orthonormal; the resulting coefficients are then normalized to satisfy the normalization condition of Eq. ? and prevent overflow in the linear combinations of QNC. The edge quantizers, $Q_e(\cdot)$'s, are uniform, with a range matched to their inputs and $2^L$ intervals (since $C_e = 1$). This provides all the parameters and vectors required to simulate quantized network coding and obtain the received packets at the decoder node, the $\underline{z}(t)$'s.⁵ The random coefficients can be generated in a pseudo-random way, so only the generator seed needs to be transmitted to the decoder as a header.

At the decoder, the received measurements up to time $t$, i.e., $\underline{z}_{tot}(t)$, are used to recover the original messages. Specifically, for a realization of the messages, $\underline{x}$, we define $\hat{\underline{x}}(t)$ to be the recovered messages, using $\ell_1$-min decoding according to (Equation 13). The convex optimization involved in (Equation 13) is solved by using the open-source implementation of disciplined convex programming [38].

For each deployment, we also simulate routing-based packet forwarding and compare it with QNC. Routes are computed as the shortest path from each node to the gateway node, using Dijkstra's algorithm [40]. Further, the real-valued messages, $X_v$'s, are quantized at their corresponding source nodes, using uniform quantizers similar to those used in QNC transmission. The aim is to deliver all quantized messages to the decoder node, keeping track of the delivered messages over time in the recovered vector of messages. Moreover, if a message, $X_v$, is not delivered by time index $t$, zero is used as its recovered value:

$$\hat{X}_v(t) = 0.$$

The $\ell_2$-norm of the recovery error, $\|\underline{x} - \hat{\underline{x}}(t)\|_{\ell_2}$, is used as the quality measure in our numerical comparisons. The cost measure in our comparisons is the delivery delay required to achieve a minimum quality of service. Explicitly, the delivery delay for a transmission terminated at time index $t$ is equal to $L \cdot t$ in both the QNC and packet forwarding cases.⁶ In the QNC scenario, for each value of $t$, $k/n$, and $\epsilon$, we calculate the average of the recovery errors over different realizations of network deployments. Since the sparsity of the messages does not affect the performance of packet forwarding, we only need to present its results for different network parameters (i.e., numbers of edges).
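For completeness, the plotted error statistic can be computed as in the small sketch below; whether the paper averages the logarithms or takes the logarithm of the average (and whether errors are normalized by $\|\underline{x}\|$) is not recoverable from the text, so both choices here are our assumptions:

```python
import numpy as np

def avg_log_recovery_error(x_list, xhat_list):
    """Average logarithmic l2-norm of recovery error over deployments, in dB;
    normalization by ||x|| is our assumption."""
    errs = [20.0 * np.log10(np.linalg.norm(x - xh) / np.linalg.norm(x))
            for x, xh in zip(x_list, xhat_list)]
    return float(np.mean(errs))
```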

For a fixed block length, $L$, the average $\ell_2$-norm of the recovery error versus the average delivery delay is depicted in Fig. ?. In Figs. ?, ?, the horizontal axis represents the product $L \cdot t$, which is the delivery delay, for different values of $t$. The vertical axis is the logarithmic average $\ell_2$-norm of the recovery error for the QNC and Packet Forwarding (PF) scenarios. Figs. ?, ? can thus be viewed as rate-distortion curves of a lossy source coding scenario.

As shown in Figs. ?, ?, when the same block length is used, QNC achieves a significant improvement over PF for low values of delivery delay. These low delays correspond to the initial time indices of the transmission, at which only a small number of packets have been received at the decoder, as expected. After enough packets are received at the decoder, QNC reaches its best performance (at around [dB]), limited by the associated quantization noises. The best performance of packet forwarding occurs after a longer period of time than for QNC. On the other hand, the best performance of PF (around [dB]) is better than that of QNC, which can be explained by noise propagation through the network during QNC. However, as shown in the following, QNC outperforms PF over a wide range of delay values when the appropriate block length is adopted in each case.




After simulating the QNC and PF scenarios for different block lengths and calculating the corresponding delays and recovery error norms, we find the best value of the block length for each specific average norm of recovery error (as a measure of quality of service). The resulting $L$-optimized curve for each of the QNC and PF scenarios is depicted in Fig. ?. In Figs. ?- ?, QNC performance is compared with that of PF for different numbers of edges, sparsity factors, and near-sparsity parameters. Generally speaking, these figures show a promising improvement over conventional packet forwarding when QNC is adopted for the transmission of near-sparse messages. The achieved improvement increases as the sparsity factor, $k/n$, decreases, i.e., as the correlation between the messages becomes stronger.

As a drawback, QNC seems to fail when the sparsity model does not describe the correlation structure well. Specifically, if the near-sparsity parameter, $\epsilon$, is too high, the resulting performance of QNC cannot even match that of PF over a wide range of delivery delays (see Fig. ? for instance). In Figs. ?, ?, ?, the effect of $\epsilon$ on the resulting QNC performance is illustrated. As shown there, as long as $\epsilon$ is small (relative to the norm of the messages), there is hardly any difference in QNC performance. But if it is so high that the sparsity model does not characterize the messages fairly, then QNC fails to work properly (since the $\ell_1$-min decoding criterion is no longer a good cost function).


In the routing-based packet forwarding scenarios, the intermediate (sensor) nodes have to perform route training and packet storage. In the QNC scenario, reflecting one of the main advantages of network coding, the intermediate nodes only carry out simple linear combining and quantization, which relieves them of most computational burden. On the other hand, at the decoder side, QNC requires an $\ell_1$-min decoder, which is potentially more complex than the receiver required for packet forwarding. However, this may not be an issue in practical cases, as the gateway node is usually capable of handling computationally heavy operations.

7 Conclusions and Future Work

Joint source network coding of correlated sources was studied from a sparse recovery perspective. In order to achieve non-adaptive encoding, we proposed quantized network coding, which incorporates real-field network coding and quantization and enables decoding via linear programming. Building on the compressed sensing literature, we discussed theoretical guarantees that ensure efficient encoding and robust decoding of the messages. Moreover, we were able to make conclusive statements about the robust recovery of messages when fewer received packets than the number of sources (messages) were available at the decoder. Finally, our computer simulations verified the reduction in average delivery delay achieved by quantized network coding.

Although the proposed sparse recovery algorithm works well for correlated messages with a near-sparse characterization, it does not offer optimal recovery for other cases of correlated sources. Currently, we are studying the feasibility of near-optimal decoding when other forms of prior information about the source are available. Specifically, we have suggested the use of belief-propagation-based decoding [41] in a Bayesian scenario. However, more theoretical work is needed to derive mathematical guarantees for robust recovery. Studying the general case of lossy networks with interference between the links is also a pressing direction for our work.

Footnotes

  1. In this paper, all vectors are column vectors.
  2. This is why the $X_v$'s do not depend on $t$.
  3. Beyond the fact that there should not be any uncertainty involved as a result of noise.
  4. A detailed version of our calculations for the tail probability of $\Psi_{tot}(t)$ can be found in [36].
  5. Lower-case notations are used for realizations of random variables.
  6. In the case of packet forwarding, we do not consider the learning period required to find the optimal routes.

References

  1. I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “A survey on sensor networks,” IEEE Communications Magazine, vol. 40, no. 8, pp. 102–114, 2002.
  2. C. Chong and S. Kumar, “Sensor networks: evolution, opportunities, and challenges,” Proceedings of the IEEE, vol. 91, no. 8, pp. 1247–1256, 2003.
  3. R. Ahlswede, N. Cai, S.-Y. Li, and R. Yeung, “Network information flow,” IEEE Transactions on Information Theory, vol. 46, pp. 1204 –1216, July 2000.
  4. J. Al-Karaki and A. Kamal, “Routing techniques in wireless sensor networks: a survey,” IEEE Wireless Communications, vol. 11, no. 6, pp. 6–28, 2004.
  5. D. Slepian and J. Wolf, “Noiseless coding of correlated information sources,” IEEE Transactions on Information Theory, vol. 19, no. 4, pp. 471–480, 1973.
  6. Z. Xiong, A. Liveris, and S. Cheng, “Distributed source coding for sensor networks,” IEEE Signal Processing Magazine, vol. 21, no. 5, pp. 80–94, 2004.
  7. T. S. Han, “Slepian-wolf-cover theorem for networks of channels,” Information and Control, vol. 47, no. 1, pp. 67 – 83, 1980.
  8. T. Ho, R. Koetter, M. Medard, D. Karger, and M. Effros, “The benefits of coding over routing in a randomized setting,” in IEEE International Symposium on Information Theory, p. 442, june-4 july 2003.
  9. C. Fragouli, “Network coding for sensor networks,” Handbook on Array Processing and Sensor Networks, pp. 645–667, 2009.
  10. R. Koetter and M. Médard, “An algebraic approach to network coding,” IEEE Transactions on Networking, vol. 11, no. 5, pp. 782–795, 2003.
  11. T. Ho, M. Medard, R. Koetter, D. Karger, M. Effros, J. Shi, and B. Leong, “A random linear network coding approach to multicast,” IEEE Transactions on Information Theory, vol. 52, no. 10, pp. 4413 –4430, 2006.
  12. S. Lim, Y. Kim, A. El Gamal, and S. Chung, “Noisy network coding,” IEEE Transactions on Information Theory, vol. 57, no. 5, pp. 3132–3152, 2011.
  13. A. Dana, R. Gowaikar, R. Palanki, B. Hassibi, and M. Effros, “Capacity of wireless erasure networks,” IEEE Transactions on Information Theory, vol. 52, no. 3, pp. 789 –804, 2006.
  14. T. Ho, M. Médard, M. Effros, R. Koetter, and D. Karger, “Network coding for correlated sources,” in Proceedings of Conference on Information Sciences and Systems, 2004.
  15. A. Ramamoorthy, K. Jain, P. A. Chou, and M. Effros, “Separating distributed source coding from network coding,” IEEE Transactions on Networking, vol. 14, pp. 2785–2795, June 2006.
  16. Y. Wu, V. Stankovic, Z. Xiong, and S. Kung, “On practical design for joint distributed source and network coding,” IEEE Transactions on Information Theory, vol. 55, no. 4, pp. 1709–1720, 2009.
  17. G. Maierbacher, J. Barros, and M. Médard, “Practical source-network decoding,” in 6th International Symposium on Wireless Communication Systems, pp. 283–287, IEEE, 2009.
  18. S. Cruz, G. Maierbacher, and J. Barros, “Joint source-network coding for large-scale sensor networks,” in IEEE International Symposium on Information Theory Proceedings, pp. 420–424, IEEE, 2011.
  19. F. Kschischang, B. Frey, and H. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
  20. D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, pp. 1289 –1306, April 2006.
  21. R. Baraniuk, M. Davenport, M. Duarte, and C. Hegde, An Introduction to Compressive Sensing. Addison-Wesley, 2011.
  22. J. Haupt, W. Bajwa, M. Rabbat, and R. Nowak, “Compressed sensing for networked data,” IEEE Signal Processing Magazine, vol. 25, pp. 92 –101, march 2008.
  23. N. Nguyen, D. Jones, and S. Krishnamurthy, “Netcompress: Coupling network coding and compressed sensing for efficient data communication in wireless sensor networks,” in 2010 IEEE Workshop on Signal Processing Systems, pp. 356 –361, oct. 2010.
  24. C. Luo, F. Wu, J. Sun, and C. W. Chen, “Compressive data gathering for large-scale wireless sensor networks,” in Proceedings of the 15th annual international conference on Mobile computing and networking, MobiCom ’09, (New York, NY, USA), pp. 145–156, ACM, 2009.
  25. S. Feizi, M. Médard, and M. Effros, “Compressive sensing over networks,” in 48th Annual Allerton Conference on Communication, Control, and Computing, pp. 1129–1136, IEEE, 2010.
  26. W. Xu, E. Mallada, and A. Tang, “Compressive sensing over graphs,” in IEEE International Conference on Computer Communications (INFOCOM), pp. 2087–2095, IEEE, 2011.
  27. M. Wang, W. Xu, E. Mallada, and A. Tang, “Sparse recovery with graph constraints: Fundamental limits and measurement construction,” in IEEE International Conference on Computer Communications (INFOCOM), pp. 1871–1879, IEEE, 2012.
  28. S. Feizi and M. Medard, “A power efficient sensing/communication scheme: Joint source-channel-network coding by using compressive sensing,” in 49th Annual Allerton Conference on Communication, Control, and Computing, pp. 1048–1054, IEEE, 2011.
  29. F. Bassi, L. Chao, L. Iwaza, M. Kieffer, et al., “Compressive linear network coding for efficient data collection in wireless sensor networks,” in Proceedings of the 2012 European Signal Processing Conference, pp. 1–5, 2012.
  30. M. Nabaee and F. Labeau, “Quantized network coding for sparse messages,” arXiv preprint arXiv:1201.6271, 2012.
  31. T. Kailath, Linear Systems, vol. 1. Prentice-Hall, Englewood Cliffs, NJ, 1980.
  32. E. Candes and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse problems, vol. 23, no. 3, p. 969, 2007.
  33. E. J. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589 – 592, 2008.
  34. R. Baraniuk, “Compressive sensing,” IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–121, 2007.
  35. R. Baraniuk, M. Davenport, R. Devore, and M. Wakin, “A simple proof of the restricted isometry property for random matrices,” Constr. Approx, vol. 2008, 2007.
  36. M. Nabaee and F. Labeau, “Restricted isometry property in quantized network coding of sparse messages,” arXiv preprint arXiv:1203.1892, 2012.
  37. E. Candes and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, pp. 4203 – 4215, December 2005.
  38. M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 1.21.” http://cvxr.com/cvx, 2011.
  39. M. Grant and S. Boyd, “Graph implementations for nonsmooth convex programs,” in Recent Advances in Learning and Control (V. Blondel, S. Boyd, and H. Kimura, eds.), Lecture Notes in Control and Information Sciences, pp. 95–110, Springer-Verlag Limited, 2008.
  40. E. Dijkstra, “A note on two problems in connexion with graphs,” Numerische mathematik, vol. 1, no. 1, pp. 269–271, 1959.
  41. M. Nabaee and F. Labeau, “Bayesian quantized network coding via belief propagation,” arXiv preprint arXiv:1209.1679, 2012.
  42. M. Nabaee and F. Labeau, “One-step quantized network coding for near sparse messages,” arXiv preprint arXiv:1210.7399, 2012.