Going Beyond Pollution Attacks:
Forcing Byzantine Clients to Code Correctly
Abstract
Network coding achieves optimal throughput in multicast networks. However, throughput optimality relies on the network nodes or routers to code correctly. A Byzantine node may introduce junk packets in the network (thus polluting downstream packets and causing the sinks to receive the wrong data) or may choose coding coefficients in a way that significantly reduces the throughput of the network.
Most prior work focused on the problem of Byzantine nodes polluting packets. However, even if a Byzantine node does not pollute packets, he can still significantly affect the throughput of the network by not coding correctly. No previous work has attempted to verify whether a given node coded with random coefficients over all of the packets he was supposed to code over.
We provide two novel protocols (which we call PIP and LogPIP) for detecting whether a node coded correctly over all the packets received (i.e., according to a random linear network coding algorithm). Our protocols enable any node in the network to examine a packet received from another node by running a “verification test”. With our protocols, the worst an adversary can do and still pass the packet verification test is in fact equivalent to random linear network coding, which has been shown to be optimal in multicast networks. Our protocols resist collusion among nodes and are applicable to a variety of settings.
Our topology simulations show that the worst-case throughput under our protocol is two to three times larger than the throughput under various adversarial strategies allowed by prior work. We implemented our protocols in C/C++ and Java, and incorporated them into the Android platform (Nexus One). Our evaluation shows that our protocols impose modest overhead.
1 Introduction
Network coding was first proposed by Ahlswede et al. [ACLY00], who demonstrated that, for certain networks, network coding can produce a higher throughput than the best routing strategy. A subsequent line of work that includes the works of Koetter et al. [KM03], Li et al. [LYC03], and Jaggi et al. [JSC05] showed that random linear coding reaches maximum throughput for multicast networks. Overall, network coding has proved better than routing for both wired and wireless networks and for both multicast and broadcast [NS08]; it has also found applications in increasing the robustness and throughput of peer-to-peer networks (e.g., [GR05]) and in a variety of wireless sensor networks, as surveyed by Narmawala and Srivastava [NS08].
Throughput optimality requires diversity. The throughput guarantees of network coding, however, rely on the assumption that all the nodes in the network code correctly, i.e., each node in the network, when receiving packets, is assumed to transmit a packet that is a random linear combination of the incoming packets; informally, packets that are indeed linear combinations of the incoming packets are said to be valid, and packets that are random linear combinations of the incoming packets are said to be diverse.
The assumption that each node in the network codes correctly may not hold because the network may contain Byzantine nodes, who are malicious or faulty nodes. For example, a Byzantine node may change the payload or the coding vector in a way that is not a linear combination of the received packets, thereby transmitting an invalid (or polluted) packet. The invalid packet will mix with other packets and thus pollute more packets, ultimately causing the decoded information at the sinks to be incorrect.
In fact, a Byzantine node can transmit a valid packet (i.e., a linear combination of the received packets), but still manage to decrease the overall throughput at the sinks. The Byzantine node could choose coefficients for the linear combination in a way that is not random: the node could forward one of the packets (by simply routing), code over only a subset of the packets, or, even worse, choose coefficients that do not contribute any new information to his receivers, thus effectively sending nothing. While the network is not polluted by such a Byzantine node (and the decoded information at the sinks is still valid), the throughput of the network is decreased. In Section 6, as an example, we show that such Byzantine nodes can indeed reduce the throughput to as much as a half or a third in some specific cases, as well as on random topologies. Figure 3 shows a simple example of throughput reduction on the standard butterfly topology.
Insufficiency of prior work to guarantee correctness. A significant body of previous work that includes [KFM04], [CJL06], [GR06], [GR06], [ZKMH07], [YWRG08], [HLK08], [JLK08], [BFKW09], [KTT09], [AB09], [DCNR09], [ABBF10], [LM10], [YSJL10], and [WVNK10] addressed the problem of defending against pollution attacks, where the goal is to enforce or check that the packets sent by each node to be some (not necessarily random) linear combination of the packets sent by the source. Most prior work on enforcing validity of packets has focused on detecting polluted packets right at the point where a Byzantine node injected them into the network [BFKW09], [ZKMH07], [KFM04], [YWRG08], [CJL06], and [ABBF10]: when a Byzantine node injects an invalid packet into the network, the node receiving the packet is able to detect if the packet is invalid by running a test, and can discard the invalid packet right away.
However, none of this work detects Byzantine nodes that deviate from random linear coding of the received packets, thus allowing such nodes to reduce throughput as discussed above. In particular, Byzantine nodes may still simply forward a received packet (rather than code over multiple packets) or use coefficients that provide no new degrees of freedom to downstream nodes, effectively sending no data.
Our result. Given that Byzantine nodes may significantly affect the throughput of the network, we believe that it is important to study the following problem:
How to force a node to code correctly?
(That is, to code both validly and randomly, over all the received packets.)
Our main contribution is a novel protocol for enabling each child of a node to detect whether the node coded correctly over all the packets he was supposed to code over (i.e., according to a random linear network coding algorithm). In our protocol, a child node (where child means a node that receives data from its parent) can check, by running a verification test, that the data received from the parent is the result of correctly coding over the packets the parent receives from his own parents. The child need only examine the packet received from the parent and does not need to know the precise packet payloads used in coding at the parent.
Let the required set of a node v, denoted R(v), be the subset of the parents of v that v is expected to code over. As we will discuss in Section 5, the exact definition of the required set depends on the application; the flexibility in defining it will enable our protocols to be applicable to a variety of settings. For example, some applications may require a node to code over the packets from all his parents; other applications, perhaps due to unreliability of the communication channel, may require nodes to code over at least some minimum number of parents.
Using our protocols, presented in Section 4, a child node can ensure that:

the packet from the parent is the result of coding over the packets from all the nodes in the parent's required set, and

the coding coefficients used by the parent are pseudorandom.
We provide two algorithms, with two different kinds of guarantees: Payload-Independent Protocol (PIP) and Log-Verification PIP (LogPIP). PIP always detects if a node failed to code over all the packets from parents in the required set, whereas LogPIP detects such a violation with an adjustable probability. In cases where nodes can have many parents, LogPIP is faster and more bandwidth efficient. While we use pseudorandom coefficients instead of truly random ones, this does not affect the throughput guarantees of network coding (see Section 4.5); accordingly, we use the two terms interchangeably in this paper.
Furthermore, our protocols are resistant to collusion among nodes: even if a node and one of his children are both Byzantine and collude, the other honest children can still check whether the node coded correctly over any non-colluding parents.
Finally, we assume that there exist penalties for nodes that are found to send incorrect packets, and that these penalties create sufficient incentives against cheating in a detectable manner. A discussion of the exact form of such penalties lies outside the scope of this paper; one should choose the penalty best suited to one's application. To facilitate the use of a penalty system, though, our protocol enables nodes to prove (and not only detect) that a parent cheated (i.e., did not code correctly); moreover, Byzantine nodes cannot falsely accuse honest nodes of not coding correctly.
Thus, we assume that Byzantine nodes will not cheat in a detectable way. We therefore consider an adversarial model in which Byzantine nodes perform the worst possible action to pass the verification test. In Section 4.8, we prove that the worst an adversary can do and still pass our packet verification tests is to code correctly (i.e., according to a random linear network coding scheme), which has been shown to give optimal throughput in multicast networks.
Implementation and evaluation. Our simulations in Section 6 show that the throughput in the best adversarial strategy for our protocol is two to three times larger than the throughput in several adversarial strategies allowed by prior work.
We implemented our protocols in C/C++. We also wrote a Java implementation for Java-based P2P applications and an Android package for smartphone P2P file sharing. Our C/C++ evaluations show that the protocols are reasonably efficient: the running time at a node to prepare for transmitting the data is less than ms, and the time to perform a verification test is ms with PIP and ms with LogPIP. Compared to the overhead introduced by a pollution detection scheme that we analyzed [BFKW09], the additional overheads introduced by our two protocols are respectively less than for PIP and less than for LogPIP. This suggests that, if one is already using a pollution detection scheme, then additionally enforcing diversity of packets will not affect performance by much. Moreover, the overhead of both of our protocols is independent of how large the packet payload is.
2 Related Work
Ahlswede et al. [ACLY00] pioneered the field of network coding. They showed the value of coding at routers and provided theoretical bounds on the capacity of such networks. Works such as those of Koetter et al. [KM03], Li et al. [LYC03], and Jaggi et al. [JSC05] show that, for multicast traffic, linear codes achieve maximum throughput, while coding and decoding can be done in polynomial time. Ho et al. [HKM03] show that random network coding can also achieve maximum network capacity. Network coding has been shown to improve throughput in a variety of networks: wireless [LMK05], peer-to-peer content distribution [GR05], energy [WNE00], distributed storage [Jia06], and others.
Despite its throughput benefits, however, network coding is susceptible to Byzantine attacks. A Byzantine node can inject into the network junk packets, which will mix with correct packets and generate more junk packets, thus resulting in junk data at the sink.
A significant amount of research aims to prevent or recover from pollution attacks [KFM04], [CJL06], [GR06], [ZKMH07], [YWRG08], [HLK08], [JLK08], [BFKW09], [KTT09], [AB09], [DCNR09], [ABBF10], [LM10], [YSJL10], [WVNK10]. Ho et al. [HLK08] attempt to detect at the sinks whether packets have been modified by a Byzantine node. They do so by adding hash symbols obtained as a polynomial function of the data symbols; pollution is indicated by an inconsistency between the packets and the hashes.
Jaggi et al. [JLK08], for example, discuss rate-optimal protocols that survive Byzantine attacks. Their idea is to append extra parity information to the source messages. Kosut et al. [KTT09] provide nonlinear protocols for achieving capacity in the presence of Byzantine adversaries.
There has also been important work on the problem of detecting polluted packets when they are injected; see, for example, [KFM04], [CJL06], [GR06], [ZKMH07], [YWRG08], [BFKW09], [DCNR09], [ABBF10], and [WVNK10]. These schemes are helpful because they prevent polluted packets from mixing with other packets. The most common approach has been the use of a homomorphic cryptographic scheme (such as a signature) [BFKW09], [ZKMH07], [KFM04], [YWRG08], [CJL06], [AB09], [ABBF10]. In a peer-to-peer setting, Krohn et al. [KFM04] propose a scheme based on homomorphic hashes to detect on the fly whether a received packet is valid. The homomorphic hashes are used to verify that the check blocks of downloaded files are indeed linear combinations of the original file blocks. Gkantsidis and Rodriguez [GR06] extend the approach of Krohn et al. to resist pollution attacks in peer-to-peer file distribution systems that use network coding. They also mention the entropy attack, which is similar to our diversity attack. However, they do not solve the problem of forcing a Byzantine client to code diversely. Their approach is to have a node download coding coefficients from neighbors and decide from which neighbors to download the data so as to get the most innovative packets. However, a Byzantine client can still avoid coding diversely: for example, he can choose not to code over the data from a parent that he knows would provide innovative information to his neighbors, thus reducing overall throughput.
Wan et al. [WVNK10] propose limiting pollution attacks by identifying the malicious nodes so that they can be isolated, and Le and Markopoulou [LM10] identify the precise location of Byzantine attackers using a homomorphic MAC scheme.
Zhao et al. [ZKMH07] provide a signature scheme for content distribution with network coding based on linear algebra and cryptography. The source provides all nodes with an invariant vector and public-key information; with that information, all nodes can check the validity of a packet on the fly. [YWRG08] provides homomorphic signature schemes for preventing such Byzantine attacks, but the scheme contains a flaw that voids its guarantees. [CJL06] and [BFKW09] also provide homomorphic signature schemes, with a construction based on elliptic curves; this scheme augments the packet size by only a constant number of bits.
Another recent approach to detecting polluted packets is the algebraic watchdog [KMB10, LAV10] in which nodes sniff on packets from other nodes and try to establish if they are polluted.
However, all these schemes only check whether a packet is valid; they cannot establish whether a packet is diverse. Even if packet validity checks prevent Byzantine nodes from sending junk packets, there remain other ways in which a Byzantine node can affect the throughput without violating any validity check. For example, a Byzantine node can simply not send any data, forward one of the received packets (without coding), code with fixed coefficients, or choose coefficients that minimize the network throughput. In Section 6, we show that Byzantine behavior of this kind does indeed significantly decrease throughput. None of these behaviors is considered (or prevented) by previous work on pollution attacks.
3 Model
We present the network model and then formulate the security problem that we want to solve. In Section 5, we explain how our model and protocols apply to a variety of problem domains.
3.1 Network Model
We consider a network where nodes perform random linear network coding [HKM03] over some finite field. Roughly, each packet is a pair consisting of a payload and a coding vector; nodes "code" by choosing random coefficients and using them to compute linear combinations of the received packets. For example: a node receives two packets; to random linear network code these packets, the node chooses two random coefficients from a certain finite field and computes the resulting coded packet as the coefficient-weighted sum of the two packets, where the computations are also performed in the finite field. In Section 4.1, we provide more details about the structure of a packet.
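As a minimal sketch of this step (the field size and packet layout here are chosen for illustration only, not taken from the paper), a node combines two received packets with random nonzero coefficients:

```python
# Toy random linear network coding over a small prime field.
# P, the packet layout, and the function names are illustrative assumptions.
import random

P = 257  # small prime for illustration; real schemes use a large prime


def code(packets, rng=random):
    """Combine received packets with random nonzero coefficients mod P."""
    coeffs = [rng.randrange(1, P) for _ in packets]
    length = len(packets[0])
    out = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
           for i in range(length)]
    return coeffs, out


# Two incoming packets, each (coding vector || payload) as a vector over Z_P.
p1 = [1, 0, 10, 20]   # coding vector (1, 0), payload (10, 20)
p2 = [0, 1, 30, 40]   # coding vector (0, 1), payload (30, 40)
coeffs, coded = code([p1, p2], random.Random(7))
```

Note that the coding vector is combined with the same coefficients as the payload, so downstream nodes can recover which linear combination of the source blocks the packet represents.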
The network is modeled as a directed graph in the natural way: each node in the network corresponds to a vertex in the graph, and if a node sends data to another node, then there is a directed edge in the graph from the vertex corresponding to the sender to the vertex corresponding to the receiver; we then say that the sender is a parent of the receiver and that the receiver is a child of the sender; similarly, a parent of a parent is a grandparent, and a child of a child is a grandchild. Each node sends one packet per time period to each of his children.
We always denote a generic node in the network by v; he has parents denoted by p1, ..., pn and children denoted by c1, ..., cm. We denote by P(v) the set of parents of v. As discussed, the required set of v, denoted by R(v), is the subset of P(v) indicating which parents v should code over. Ideally, the required set would equal the parent set, but this may not be possible in all settings or applications. (See Section 5, where we discuss various choices of the required set.) See Figure 4 for a diagram of a network using our notation.
Each node has a public key and a corresponding secret key. We assume that each node knows the public key of the source; this is a reasonable assumption present in most previous work on pollution attacks [BFKW09], [ZKMH07], [KFM04], [YWRG08], [CJL06], [ABBF10]; for example, a node may be given this public key upon entering the system.
In some settings (Section 5), we will need each node to have a certificate attesting that his public key is valid and belongs to him; the certificate consists of a signature on the public key from the source or some other trusted party. A node need only obtain such a signature once in his lifetime, and this can be done, for example, when the node joins the network.
In order for a child to check that his parent coded correctly using the protocols that we present in Section 4, the child needs to know the required set of the parent and the public keys of the nodes in this set. Nodes do not need to know the required set (or the set of grandparents) for their parents a priori; in fact, dynamically adjusting the required set is important for dynamic networks. In Section 5.2, we explain how nodes can acquire the required set for each of their parents depending on the application. We also explain for which applications our protocols are most fit and for which they are not. For now, assume that each node knows precisely the nodes in the required set of each parent.
3.2 Threat Model
Nodes in the network may be Byzantine (i.e., malicious or faulty): a node can pollute the data coming from the source by sending out a packet that is invalid or decrease the throughput by sending a packet that is not a result of coding over packets received from each parent in the required set. In Section 6, we discuss several Byzantine behaviors and how they affect the throughput of the network.
Even worse, Byzantine nodes can collude among each other. A node can collude with his parents, children or any other node in the network to pass the verification tests at his honest children.
We consider the adversarial model in which Byzantine nodes will use the best adversarial strategy to decrease the throughput at the sinks while still passing our verification tests. As already discussed, we assume that there exist penalties in place that create enough incentives for not cheating detectably; a discussion of what these penalties should be (e.g., a fine, an investigation, removal from the system, resource choking, reputation decrease, or making topology adjustments) is out of the scope of this paper and one should choose what best fits one’s application.
3.3 Solution Approach and Goals
Similarly to prior work on pollution signatures, we also take a “verification test” solution approach. Our technical goal is to design a protocol that provably implements such a test for correctness:
Verification test by a node when receiving a packet from a parent. A procedure run by a child upon receiving a packet from a parent to verify that the parent generated the packet by coding correctly (i.e., using pseudorandom coefficients over a packet from each parent in his required set). If a Byzantine node passes the verification test performed by an honest child, the Byzantine node must have coded correctly over the required data. Therefore, such a verification test achieves the goal of this paper, because each honest node in the network has the ability to enforce correct random linear network coding at each of his parents.
Specifically, the verification test should satisfy the following properties:

A Byzantine node that does not follow the random linear coding algorithm should be detected with overwhelming probability.

The test must be efficient with respect to computation and bandwidth.

The verification test must be collusion resistant: an honest child should be able to check if his parent coded over all the honest nodes in his required set, regardless of whether other children or grandparents are Byzantine or not.

If the verification test fails, it is possible to prove it. In particular, this implies that a node can not only detect, but also prove, that a parent cheated.
We require that the computational overhead that each node incurs by running the verification test is reasonable and, moreover, we also require that the increase in packet size (due to the extra information sent to later nodes in order to enable them to run the verification test) does not depend on the payload of the packet. (Recall that network coding is particularly useful when the packet payload is large and the overhead of the coefficients becomes negligible.)
The protocols we propose (and which are presented in Section 4) achieve the above four properties.
We remark that tackling collusion is challenging. For example, a node could collude with one of his children: the node could send that child a packet that is not the result of coding over all the nodes in the required set with pseudorandom coefficients, and the colluding child would simply neglect to run the verification test. Still, we want to ensure that the other, honest children can verify that they receive correctly coded packets. This means that each child must be able to check independently, without relying on any shared information that is required to stay secret. Similarly, ideally, if some parents collude with the node, his children should still be able to check that he coded over all the required parents that did not collude with him. This means that the parents cannot hold shared secret data in the protocol, all of which makes the cryptographic protocol more challenging.
Finally, while the network model that we adopt is simple, we show in Section 5 that it is expressive: there we explain how to use this model for a variety of network settings and applications, either directly or with simple extensions.
4 Protocol
We describe the protocols a node needs to run to perform the verification test on each of his parents and to assemble packets to send to his children. For clarity, we present the protocols incrementally, successively adding more security properties. First, however, we introduce some basic notation and the cryptographic tools that we use.
4.1 Notation
A sequence (or tuple) of components is denoted by listing the components between parentheses; for simplicity, we sometimes omit the starting and ending indices of the sequence. The concatenation of two strings s1 and s2 is denoted by s1‖s2.
We denote by n the number of (parent) nodes in the required set of a node; we also write the public and secret keys of a node, and signatures of a message with respect to a node's key pair, where the underlying signature scheme is assumed to satisfy the usual notion of unforgeability (i.e., existential unforgeability under chosen-message attack). For concreteness, we use the DSA algorithm [NIS], whose signatures are short and of constant length.
Let p be the prime number used in any of the pollution signature schemes in [BFKW09], [ZKMH07], [KFM04], [YWRG08], [CJL06], and [ABBF10]. For example, in [BFKW09], p is a large prime.
In network coding, as already mentioned, a packet has the form of a payload together with a coding vector. (In our protocols, we will augment the packet with additional tokens.) The payload is a tuple of chunks, where each chunk is an element of Z_p*, the multiplicative group of integers modulo the prime p. A coding vector is likewise a tuple of chunks, where each chunk is also an element of Z_p*. Hence, a packet consists of the payload chunks followed by the coding-vector chunks. In particular, we can think of the payload, the coding vector, and the packet itself as vectors in a product space of Z_p*.
4.2 Cryptographic tools
We now briefly review the cryptographic tools that we employ in our protocols:
Pseudorandom functions. Informally, a pseudorandom function family is a family of polynomial-time computable functions with the property that, for a sufficiently large security parameter and a random seed, the seeded function "looks like" a random function to any efficient procedure. See [GGM86] for more details.
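In practice, a pseudorandom function can be instantiated, for example, with HMAC keyed by a secret seed; this is an illustrative choice on our part, not the paper's construction:

```python
# Sketch: a PRF instantiated with HMAC-SHA256 under a secret seed.
# The seed value and the mapping to integers are illustrative assumptions.
import hashlib
import hmac


def prf(seed: bytes, x: bytes) -> int:
    """Map input x to a pseudorandom 256-bit integer under key `seed`."""
    digest = hmac.new(seed, x, hashlib.sha256).digest()
    return int.from_bytes(digest, "big")


# The same seed and input always give the same output; different inputs
# give independent-looking values.
a = prf(b"seed", b"packet-1")
b = prf(b"seed", b"packet-2")
```

To derive a coding coefficient, the output could be reduced modulo the field prime; we omit that step here since the prime is scheme-specific.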
Merkle hashes. A Merkle hash [Mer89] is a concise commitment to n elements. Suppose that Alice has n elements and gives Bob a Merkle hash of them. Later, when Bob asks to see some elements, the Merkle hash allows him to check that the elements Alice provides are indeed the same elements over which she computed the Merkle hash. Loosely, to compute the Merkle hash of n elements, Alice places the elements at the leaves of a full binary tree; she recursively computes the label of each internal node as the hash of the concatenation of the labels of its two children. The resulting hash at the root is called the Merkle hash (or commitment) of the elements. Given the elements and their Merkle hash, Alice can reveal an element to Bob by revealing the label of every node's sibling along the path from the leaf containing that element to the root; Bob verifies the correctness of the element by rehashing bottom-up and checking that the resulting hash equals the claimed Merkle hash. The advantage of the Merkle hash is that Bob needs to ask Alice for only a logarithmic number of hash values to check that an element out of n has been correctly included in the Merkle hash. See [Mer89] for more details.
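The commit/open procedure described above can be sketched as follows; this minimal version (assuming a power-of-two number of leaves and SHA-256, neither mandated by the paper) shows the logarithmic-size opening proof:

```python
# Minimal Merkle-hash sketch: commit to a power-of-two list of elements,
# then open one element with a logarithmic-size proof. Illustrative only.
import hashlib


def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def merkle_proof(leaves, idx):
    """Sibling hashes on the path from leaf idx to the root."""
    level = [H(x) for x in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[idx ^ 1])  # sibling of the current node
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof


def merkle_verify(root, leaf, idx, proof):
    h = H(leaf)
    for sib in proof:
        h = H(h + sib) if idx % 2 == 0 else H(sib + h)
        idx //= 2
    return h == root


elems = [b"e0", b"e1", b"e2", b"e3"]
root = merkle_root(elems)
```

For n leaves, the proof contains log2(n) hashes, which is what makes the Log-PIP token logarithmic in size.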
Pollution signatures. A pollution signature scheme (such as [BFKW09], [ZKMH07], [KFM04], [YWRG08], [CJL06], or [ABBF10]) is a signature scheme consisting of the usual triplet of key generation, signing, and verification algorithms, with a special homomorphic property that allows it to be used to detect pollution attacks in network coding.
Specifically, the source runs the key generation algorithm to produce a secret key together with a corresponding public key, which is published for everyone to use. The source augments each outgoing packet with a special signature, generated by running the signing algorithm on the secret key and the packet; we refer to this special signature as a validity signature of the packet with respect to the source's public key.
When a node receives a (signed) packet, he verifies the signature on the packet by running the verification algorithm on the public key, the packet, and the signature.
Pollution signature schemes have the useful homomorphic property that, given several packets together with their validity signatures, any node is able to compute a validity signature of any linear combination of those packets, without communicating with the source. For example, if a node receives two signed packets, then, for any two coefficients, the node can compute a validity signature of the corresponding linear combination of the two packets; in some schemes, this is done by raising each signature to the power of its coefficient and multiplying the results, where each of these computations is performed in a certain field and the equality holds due to the homomorphism. See [BFKW09], [ZKMH07], [KFM04], [YWRG08], [CJL06], and [ABBF10] for more details.
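As a toy illustration of this homomorphic property (not any of the cited schemes, and not a secure signature, since there is no secret key), consider tags of the form tag(m) = g^m mod q: a linear combination of messages corresponds to a product of powers of the tags:

```python
# Toy demonstration that tag(a*m1 + b*m2) = tag(m1)^a * tag(m2)^b mod q.
# q, g, and the scalar "messages" are illustrative; real schemes sign
# whole packet vectors under a secret key.
q = 1009          # small prime, illustrative only
g = 11            # element of Z_q*


def tag(m: int) -> int:
    return pow(g, m, q)


m1, m2 = 123, 456  # two "messages" (stand-ins for packets)
a, b = 7, 13       # coding coefficients
lhs = tag(a * m1 + b * m2)                       # tag of the combination
rhs = (pow(tag(m1), a, q) * pow(tag(m2), b, q)) % q  # combined tags
```

Both sides equal g raised to the same exponent, which is exactly the algebraic fact the homomorphic schemes exploit so that intermediate nodes can re-sign coded packets without the source's key.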
4.3 A Generic Protocol
In order to avoid repetition in the presentation of our protocols, in this section we introduce the general structure that will be followed by each protocol version that we present; later, in any given protocol version, we will replace any unspecified quantities or procedures with concrete values or algorithms.
First, we discuss the new packet structure: every packet transmitted by a generic node is augmented with three cryptographic tokens; the first token has already been used in prior work, while the last two tokens are new to our protocols:

A validity signature, which each child of the node uses to check that the packet is valid (i.e., not polluted); this token has already been used in prior work on pollution attacks.

A test token, which each child of the node uses to run the verification test on him, denoted VerifTest.

A helper token, which each child of the node uses to produce his own test token, via a procedure called Combine.
Specifically, the protocol that a generic node runs, after receiving packets from his parents, in order to produce a packet for each of his children, takes the general form of Algorithm 1, where the procedures VerifTest, CheckHelper, and Combine will be specified later:
In Step 2, for each parent from which the node receives a packet: the node checks the validity signature of the packet to establish whether the parent sent polluted data; then, he checks the test token by running the verification test to establish that the parent coded correctly; next, he needs to make sure the parent sent a correct helper token (without which he could not compute a good test token himself and would fail the verification test at his own children).
If any of the checks above fails, the node reports it and acts in some application-specific way. As we will see in Section 4.7, the node can accompany his complaint with a proof that his parent cheated.
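The per-parent checks of Step 2 can be sketched structurally as follows; the three predicates are placeholders for the concrete validity-signature check, VerifTest, and CheckHelper procedures, whose details come later in this section:

```python
# Structural sketch of the per-parent checks in the generic protocol
# (Algorithm 1, Step 2). The three check functions are placeholders;
# their real definitions are the scheme-specific procedures.
def check_parents(packets, verify_validity_sig, verif_test, check_helper):
    """Return the list of parents whose packet failed a check."""
    cheaters = []
    for parent, pkt in packets.items():
        if not verify_validity_sig(pkt):      # polluted payload?
            cheaters.append((parent, "invalid packet"))
        elif not verif_test(pkt):             # coded over the required set?
            cheaters.append((parent, "bad test token"))
        elif not check_helper(pkt):           # usable helper token?
            cheaters.append((parent, "bad helper token"))
    return cheaters
```

A node would report every entry in the returned list, together with the offending packet, as evidence of cheating.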
In our protocol, each node verifies his parents (if he is not the source) and is in turn verified by his children (if he is not a sink/destination).
4.4 How to Force Byzantine Nodes to Code Over All Required Packets
As a first step, we design a verification test that enables any child of a node to check that the node did indeed code over all of the parent nodes in his required set, i.e., that the packet sent by the node to the child is a linear combination of packets from parents in the required set with all coefficients nonzero.
A naïve solution. The node can simply forward to each of his children all the packets received from parents in the required set. Of course, the node's parents sign (using their own secret keys) the packets they send him, so that each child can be sure that the forwarded packets are indeed from the node's parents. In other words, the node forwards to each child the signed packets from his required parents, the coding coefficients used for each of these packets, and the newly coded payload with its new validity signature. Each child can then establish whether the node coded correctly, because he now has access to all the information received from the parents and can thus check that no zero coefficients were used.
Clearly, this solution is bandwidth inefficient: the payload of a packet can be very large, and the node would send one such payload per required parent to each of his children, reducing throughput by a factor equal to the number of forwarded payloads.
Payload-Independent Protocol (PIP). We now improve on the naïve solution by not including the packet payloads in the test token sent for verification, thus saving considerable bandwidth and throughput.
Each parent sends a helper token consisting of a parent signature on the validity signature:
The text included in the signature prevents a colluding node from giving this helper token to some other node, which could otherwise falsely claim that he received the data from the parent.
The test token of a node is computed by simple concatenation; specifically, Combine concatenates, for each required parent, the helper token received from that parent together with the coding coefficient that the node used for the packet from that parent.
The verification test for this version of the protocol is given in Algorithm 2.
Step 2 verifies that the node provided test data for each required parent. Step 3 checks that the data is authentic. Step 4 establishes that the coefficient used in coding over each parent is nonzero. Step 6 checks that the coded data from the node indeed corresponds to coding over the information from the parents with the claimed coefficients.
We now give some intuition for why Algorithm 2 is a good verification test, leaving a formal proof to Section 4.8. Suppose n does not code over the packet of some parent p. In order for n to produce a validity signature that verifies successfully under the source's public key, n needs to combine only the validity signatures from the parents he did code over and leave p's out of the computation; at Step 6, however, c uses the validity signatures from all required parents (each with a coefficient that was checked to be nonzero in Step 4), so the check would fail.
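The step structure of this verification test can be sketched as follows. The digital signature is simulated with an HMAC and the homomorphic validity signature with a toy linear tag, so the names and data layout here are our own illustrative assumptions, not the paper's actual algorithms.

```python
import hmac, hashlib

P = 2**31 - 1  # illustrative field modulus

def mac(key, msg):
    """Stand-in for a digital signature (illustration only)."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verif_test_pip(test_token, required_set, parent_keys, claimed_tag):
    """Mirrors the step structure of the PIP verification test.
    test_token maps parent_id -> (helper_sig, coeff, parent_tag)."""
    acc = 0
    for pid in required_set:
        if pid not in test_token:                 # Step 2: test data present?
            return False
        helper_sig, coeff, parent_tag = test_token[pid]
        expected = mac(parent_keys[pid], str(parent_tag).encode())
        if helper_sig != expected:                # Step 3: data authentic?
            return False
        if coeff % P == 0:                        # Step 4: nonzero coefficient?
            return False
        acc = (acc + coeff * parent_tag) % P      # toy "homomorphic" combine
    return acc == claimed_tag                     # Step 6: combined tag matches?
```

A node that skips a parent either omits that parent's entry (caught at Step 2) or includes it with a zero coefficient (caught at Step 4) or lets the combined tag mismatch (caught at Step 6).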
CheckHelper at n consists of checking that the helper token is indeed a signature on p's validity signature and is not a signature on zero.
The length of the test token is now proportional to the number of required parents: one pollution signature plus one signature of the scheme introduced in Section 4.1 per parent. Indeed, the length of the test token no longer depends on the payload. Also, recall that the lengths of the signatures are constant. Note, though, that the test token is linear in the number of parents; this may not be a problem, but in applications where the payload is not that large or where there can be many parents, it would be desirable to have a smaller token. Moreover, verifying one digital signature per parent in the verification test (Step 3 above) becomes expensive if the number of parents is not small.
Logarithmic Payload-Independent Protocol (LogPIP). We provide a second protocol in which the length of the helper token is significantly shorter: it grows with the size of a hash (e.g., 160 bits for SHA-1) times the logarithm (base 2) of the number of parents, rather than linearly in the number of parents. The second protocol, however, is probabilistic in its guarantees: rather than enabling a child c to test whether a parent cheated with overwhelming confidence, we enable c to detect misbehavior of n with a certain (adjustable) probability.
Specifically, after receiving the packet from n, node c picks a required parent of n at random and challenges n to prove that he coded correctly over that parent. Of course, n does not know ahead of time on which packets he will be challenged. As shown in Section 6, such a probabilistic approach is still quite effective because the chance that a Byzantine node is detected cheating grows exponentially in the number of times he attempts to cheat. In Section 6, we provide recommendations for when we believe it is more appropriate to use PIP or LogPIP.
The basic idea of LogPIP is that n will send to c a test token that is the root of a Merkle hash tree constructed over the data of the test token used in PIP; namely, a Merkle hash tree whose leaves are the per-parent tuples, ranging over n's required parents. Each child c will then challenge n by asking to see the path in the Merkle hash tree corresponding to a parent of n. In this way, c can check whether n coded over that parent (i.e., used a nonzero coefficient). Of course, n cannot provide arbitrary data to c when replying to the challenge, as guaranteed by the security properties of a Merkle hash. Therefore, if n did not code over a parent, c will discover this with a known probability.
Let H be a hashing scheme. Figure 5 illustrates the Merkle tree that n has to compute and provides notation for our discussion. We slightly modify the traditional Merkle hash by adding data at internal nodes and changing the recursion. Each leaf node in the Merkle tree consists of a "summary" of the data from a required parent of n. Each internal node consists of the validity signature obtained by coding over all the packets at the leaves of the subtree rooted at that internal node, together with a hash of its two children. The root node thus contains the root hash, which serves as the test token, and the validity signature over all of the parents' data. Thus, Combine consists of computing the Merkle hash to obtain the Merkle root. CheckHelper is the same as in PIP.
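A minimal sketch of the Merkle-tree machinery that LogPIP relies on, using plain hashes only (the validity signatures our modified tree carries at internal nodes are omitted; all function names are ours):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()  # SHA-1, as in the size example above

def build_merkle(leaves):
    """Build a plain Merkle tree bottom-up. Returns the list of levels:
    levels[0] holds the leaf hashes, levels[-1] is [root]."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_path(levels, index):
    """Sibling hashes from leaf `index` up to the root: the challenge reply."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])      # sibling at this level
        index //= 2
    return path

def verify_path(leaf, index, path, root):
    """Child-side check that `leaf` sits under `root` at position `index`."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

The reply to one challenge is a single `merkle_path`, whose length is logarithmic in the number of parents; this is exactly why the helper-token overhead shrinks from linear to logarithmic.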
The verification test VerifTest is run differently than in PIP. Each child c receives the packet from n, checks the validity signature, and can then proceed to code and forward the packet. It can then challenge node n to check whether n indeed coded over all the packets. During a challenge, only source signatures and the hashes along one Merkle path are retrieved, due to the Merkle tree property. Moreover, only one digital signature is verified, the one corresponding to the parent from the challenge. For the Merkle recursion, only hash verifications and homomorphic signature operations (which typically consist of multiplying 1024-bit numbers) are performed, so the overall cost is dominated by a single digital signature verification.
The number of challenges is selected based on the desired probability of detection. With t challenges over P required parents, there is a probability of at least t/P of detecting that n did not code over a parent. After k transmissions in which n cheats, the probability of detecting n is at least 1 - (1 - t/P)^k (this lower bound corresponds to n cheating minimally, by not coding over a single parent), which approaches 1 exponentially fast in k. Coupled with penalties, such a probabilistic approach offers incentives against cheating.
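This detection bound is easy to compute. Assuming the node skips exactly one of its P required parents and the child issues t distinct random challenges per transmission, the probability of escaping detection over k cheating transmissions is (1 - t/P)^k:

```python
def detection_probability(num_parents, challenges_per_tx, num_cheating_tx):
    """Probability that a node skipping one parent is caught at least once,
    given `challenges_per_tx` distinct random challenges per transmission."""
    miss_one_tx = 1 - challenges_per_tx / num_parents
    return 1 - miss_one_tx ** num_cheating_tx
```

For example, with 10 parents and a single challenge per transmission, one transmission is caught with probability 0.1, but twenty cheating transmissions are caught with probability above 0.87.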
Node n needs to remember the values that constituted the Merkle tree until his children have finished challenging him. One challenge checks that the node coded correctly over one parent; multiple challenges can be sent at once and processed together.
As for CheckHelper, n still needs to check that each helper token is indeed a signature on the corresponding validity signature and is not a signature on zero, to prevent a malicious parent from causing n to fail the verification test at n's children. Proofs of security for this protocol are included in Section 4.8.
Collusion. Both PIP and LogPIP are collusion resistant: even if a child colludes with n, the other children check n independently. Moreover, if n colludes with a parent p, n still needs to code correctly over the rest of the parents, with whom he did not collude, because he cannot forge those parents' signatures, as long as n has at least one honest child verifying him.
4.5 How to Force Byzantine Nodes to Code Pseudorandomly
As a second step, we design a verification test that enables any child c of a node n to check not only whether a packet received from n is valid and derived using nonzero coefficients over each parent in n's required set (as guaranteed by the solution presented in Section 4.4), but also whether n coded using (pseudo)random coefficients.
The basic idea is to require node n to generate the pseudorandom coefficients from a seed that is also known to each child c, so that each c can generate these same coefficients and use them as part of his verification test on n.
We assume that each client knows a random seed s that is public; a trusted party drew the seed at random when the system started. For example, a client can learn the seed when he joins the system. In a wireless setting with no membership, a node can either have s already hardcoded, or he can obtain it from his neighbors (s can be accompanied by a signature from a trusted party to make sure that malicious neighbors cannot lie about its value). The seed can remain the same for the lifetime of the system.
Using the seed s, the coefficients can then be generated using a pseudorandom function f (defined in Section 4.2). For each parent p in the parent set, the node n computes f keyed with s on the identities of n and p (mapped, of course, to the field of the coefficients) and uses the result as the coding coefficient for the packet from p.
Observe that, contrary to what the definition of the pseudorandomness property [GGM86] prescribes, the seed s is not kept private, but is instead made public. Of course, in such a case, one cannot expect the input-output relation induced by f to be unpredictable; indeed, it is deterministic, because anyone may now compute f (so f is not an "oracle" anymore). Nonetheless, since in our setting the inputs to f are not under the control of Byzantine nodes, and are predetermined, it is easy to show that the outputs of f on these inputs still retain the statistical properties that we are interested in, allowing the network throughput to remain maximal under these "pseudorandom" coefficients.
If one wishes to enable n to use a different set of coding coefficients for each child c, the computation of the coding coefficients can additionally take the identity of c as input; n must then use the resulting coefficient to code over the data from p when preparing a packet for child c. Intuitively, using different coefficients increases throughput in some topologies because of the added diversity; this can be helpful in P2P networks, for example, but less so in a wireless setting, where transmitting different data to different children forgoes the advantage of the shared medium on which multiple children can listen.
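The coefficient derivation above can be sketched as follows. We use HMAC-SHA256 as the PRF and an identifier format of our own choosing; both are illustrative assumptions, not the paper's construction.

```python
import hmac, hashlib

FIELD = 2**31 - 1  # illustrative coefficient field

def prf_coeff(seed: bytes, node_id: str, parent_id: str, child_id: str = "") -> int:
    """Derive the coding coefficient that node `node_id` must use for parent
    `parent_id` from the public seed. Including `child_id` yields the optional
    per-child variant. Anyone who knows the seed can recompute this."""
    msg = f"{node_id}|{parent_id}|{child_id}".encode()
    digest = hmac.new(seed, msg, hashlib.sha256).digest()
    # Map into the nonzero field elements so the coefficient never degenerates
    # to zero (which would amount to skipping the parent).
    return 1 + int.from_bytes(digest, "big") % (FIELD - 1)
```

Because the seed is public and the inputs are fixed by the topology, a child can recompute each expected coefficient and compare it against the one the node actually used.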
The verification tests in the previous sections can now be easily modified to have each child check that n coded over each parent in the required set with exactly these coding coefficients: in Step 4 of Algorithm 2 and in Step 31 of Algorithm 3, node c must check that the coefficient used equals the output of f on the corresponding inputs. With this check in place, Byzantine nodes are forced to code with pseudorandom coefficients. Section 4.8 shows that Byzantine nodes cannot code with different coefficients and still pass the verification test.
4.6 How to Prevent Replay Attacks of Old Data
One problem is that a Byzantine client may code correctly for one transmission, but may attempt to cheat on the next transmission by resending the old data from the first transmission. In some cases, such a strategy reduces throughput; in others, it even pollutes packets downstream in the network. Nevertheless, the Byzantine client will pass any pollution test, because the source uses the same signing keys in both transmissions; the node will also pass our diversity tests above, because he coded correctly over his parents in the first transmission.
Therefore, we need to prevent such replay attacks. In fact, the problem of replay attacks pertains to pollution schemes and is not introduced by our diversity-enforcement scheme. Any solution for that setting suffices in our setting as well, because of the way we build "on top" of validity signatures. Thus, any overhead introduced by such a solution is already incurred by the pollution scheme and does not come with diversity enforcement.
We propose one such replay solution. The idea is to have the source change the validity-signature key with every transmission, so that any attempt by a Byzantine client to reuse old data is detected when checking the validity signature. Let pk_i denote the public key used by the source in the i-th transmission. The source has one master signing key pair whose public verification key is known to all users, as before. To inform nodes of the public key used during a transmission, the source sends with every packet this public key accompanied by a signature of the public key under the master signing key. The source signs the public key to prevent malicious clients from forging public keys of their own and claiming they belong to the source.
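The key-rotation idea can be sketched as follows. For simplicity we stand in an HMAC under the master secret for the master signature; a real deployment would use a public-key signature scheme so that nodes verify with only the public master key. All names here are illustrative.

```python
import hmac, hashlib, os

def sign(key: bytes, msg: bytes) -> bytes:
    """HMAC as a stand-in for the master signature (illustration only)."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def make_transmission_key(master_sk: bytes, tx_index: int):
    """Source-side: draw a fresh per-transmission key and certify it,
    binding it to this transmission's index."""
    tx_key = os.urandom(32)
    cert = sign(master_sk, tx_key + tx_index.to_bytes(8, "big"))
    return tx_key, cert

def check_transmission_key(master_sk: bytes, tx_key: bytes,
                           tx_index: int, cert: bytes) -> bool:
    """Node-side: accept the transmission key only if the source certified it
    for *this* transmission index. A key replayed from an earlier
    transmission carries a certificate for the old index and fails."""
    expected = sign(master_sk, tx_key + tx_index.to_bytes(8, "big"))
    return hmac.compare_digest(cert, expected)
```

Binding the index into the certified message is what makes replayed keys detectable: the old certificate simply does not verify against the current transmission number.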
For our diversity scheme, we use the public key pk_i corresponding to each transmission to add diversity in the coding coefficients across transmissions. Each node should now include pk_i among the inputs to the pseudorandom function when deriving coefficients, and his children will check the inclusion of pk_i in the coding coefficients along with the other tests they perform; without pk_i, the coding coefficients would be the same across different transmissions.
4.7 How to Enable Nodes to Prove Misbehavior
We discuss how any child c of a node n can prove n's misbehavior to a third party when the verification test for n fails. Recall that the ability to convince a third party (such as the source, a membership service, or other authoritative agents in the system) that n did indeed misbehave is important to allow punitive measures to be enacted. Furthermore, the ability to prove misbehavior reinforces the deterrent effect of the verification tests.
We use signatures in a natural way to provide such proofs: Step 5 of Algorithm 1 is modified so that a node n attaches an additional "attest" token to the packet he sends to his children; the attest token consists of a signature of the whole packet under n's own secret key. Each child of n then verifies this signature (and ignores any data from n that does not carry a valid "attest" signature).
If a child c establishes, based on the verification tests in Algorithm 2 or Algorithm 3, that his parent n did not code correctly, he can provide the packet from n together with its attest token as proof to a third party. Any other party knowing the required set of node n can run the VerifTest procedures to establish whether n cheated. Of course, by the unforgeability property of the signature scheme, children of n cannot falsely accuse n of misbehavior.
4.8 Proofs of Security
Theorem 4.1 (Security of PIP).
In protocol PIP, if a generic node n passes all checks at an honest child c, then n coded over the value from each parent p in n's required set with precisely the coefficient prescribed in Section 4.5.
Proof.
Algorithm 2 gives VerifTest for the PIP protocol. If n passes the checks in Step 2, it means that n provided the test triple for each required parent; if n passes the checks in Step 3 and Step 4, it means that each parent indeed provided its helper token and that each coefficient is the prescribed nonzero one; if n passes the check in Step 6, it means that n computed the validity signature by including each parent's value with the corresponding coefficient in the homomorphic computation (described in Section 4.2).
In Step 2 of Algorithm 1, when run by c, the node checks that the validity signature verifies as a signature of the coded data. By the theorem's hypothesis, the pollution signature verifies, so, by the security of the pollution scheme (detailed in [BFKW09]), it must be the case that n included each parent's value when computing the coded data. ∎
Theorem 4.2 (Security of LogPIP).
In protocol LogPIP, if a generic node n did not code over some given parent p from his required set with the coefficient prescribed in Section 4.5, and an honest child c challenges n on t random parents, then the probability that n is detected (i.e., some check fails) is at least t/P, where P is the size of the required set.
Proof.
The strategy of the proof is to present an exhaustive list of cases in which n could not have coded over a parent, and to show that in each case the probability of detection is at least 1/P per challenge (we argue for t = 1; the bound for t challenges follows). Consider the tree T of values that n used when he computed the Merkle hash that he gave to c. Because of the Merkle hash guarantees, n cannot come up with any other tree (that is not a subtree of T) that has the same Merkle root hash. If any leaf in T does not satisfy check (i) in Step 3 of Algorithm 3, n will be caught if c challenges n on the corresponding parent, which happens with probability 1/P. Similarly, if any internal node does not satisfy check (ii), c will detect this with probability at least 1/P. Therefore, we can assume that the first level of internal nodes in T consists of the expected hashes, the prescribed coefficients, and the genuine validity signatures from the parents. If any internal node in T does not satisfy check (iii), this will be detected whenever c is challenged on a value whose path through the Merkle tree passes through the broken internal node; this happens with probability at least 1/P. Therefore, assuming all internal nodes pass check (iii), the validity signature at the top of the tree must be the one obtained by coding over all parents with the prescribed coefficients. If the validity signature at the top of T does not match the one initially provided by n, check (v) fails with probability 1. Assuming this check succeeds, it must be the case that the validity signature initially provided by n is a proper validity signature after coding with the prescribed coefficients over all parents. Since the validity signature matched (check (2) of Algorithm 1 when run at child c), it means that n coded over all parents with the right coefficients, by the guarantees of the validity signature. There are no further cases of possible cheating by n to consider, and since each previous type of cheating is caught with probability at least 1/P, the proof is complete. ∎
5 Applications and Extensions
In this section, we describe applications and extensions of our protocol.
5.1 Types of Required Sets
In our protocols so far, we considered that a child c of node n performs the verification test on a specific set of required parents for n. However, one can use different types of verification tests, some being more useful in certain settings, as we will see. All these verifications, in fact, simply map to verifying a specific required set as before.
A child c can perform any of the following checks for node n:

n coded over all his parents or over a specific set of parents.

Threshold enforcement: n coded over at least t parents. This check can be enforced by having n send an indication of which parents he coded over, together with their public keys and certificates (defined in Section 3.2): c checks that these are at least t in number, checks the certificate of each public key to make sure n did not falsify these keys, and checks that n indeed coded over them.

n coded over a set of parents with some application-level property. For example, n must code over at least two parents designated by some application as high priority and over at least five parents in total. The priority of each node can be included in its certificate. n again indicates to c the nodes he coded over, along with their public keys and certificates, and c checks that at least two certificates contain high priority and that there are at least five in total. Other general application semantics can be supported by this verification case.
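The threshold and priority checks above amount to a simple policy test over the (certificate-verified) parent list. A sketch with hypothetical field names; certificate verification itself is assumed to have happened already:

```python
def check_required_set(reported, threshold=5, high_priority_min=2):
    """Application-level policy check over the parents a node claims to have
    coded over. `reported` is a list of dicts whose certificates have already
    been verified, each with fields 'id' and 'priority' ('high' | 'normal')."""
    if len(reported) < threshold:          # threshold enforcement
        return False
    high = sum(1 for p in reported if p["priority"] == "high")
    return high >= high_priority_min       # application-level property
```

Other application semantics plug in the same way: the child only ever reasons over the certified list, so a malicious node cannot inflate it with invented parents.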
5.2 Applications and Required Sets
In this section, we describe the various settings to which our protocols are applicable, and how the nodes would learn of the required set of their parents.
Our model applies to settings in which a node can learn the required set of his parents, such as:

Systems with a membership service: the membership service can inform a node of his grandparents when the node joins and when changes occur. Some peer-to-peer and content-distribution systems fall in this category.

Systems having a reliable yet potentially low-capacity channel besides the channel where the coding occurs (which may be less reliable, but has higher capacity): the reliable channel can be used to communicate topology changes between nodes. Some examples of applications are decentralized peer-to-peer applications and content distribution, as well as some wireless networks.

Static topologies: these topologies do not change or change rarely. The topology is mostly known to the nodes (e.g., nodes can discover it when joining), so a node will know his grandparents. Wired as well as some wireless network applications fall in this category. For wired networks, since the topology is more static and delays tend to be lower, more aggressive verification tests can be implemented (e.g. the required set is most of the parents or all of the parents, depending on the particular system).

Moderately dynamic wireless topologies: the set of grandparents for a node may change many times, but after each change it remains the same for long enough to allow the node to discover the new grandparents.
Let us discuss how a child can learn about his changing grandparents in dynamic topologies. First of all, for such topologies, we recommend that nodes use the threshold enforcement scheme (described in Item 2 above), because the set of parents of a node changes dynamically. The threshold should be adjusted based on the minimum number of links a node is expected to have in order to code diversely.
Consider that the parents of node n have changed and child c wants to learn about this. We use the same links used by packet flow to inform c of his grandparents. Each new parent p sends to n his public key and the corresponding certificate, and n sends this information to c. Let us discuss the case when n is malicious and may try to inform c of an incorrect parent list. Note that n cannot lie that p is a parent when he is not, because, if n does not have a link to p, then at transmission time nodes will verify that n coded over the data from p, which n could not have done because he did not receive this data. Moreover, n cannot create public keys of his own and claim that some parents with those public keys exist, because each node's key carries a certificate, as discussed. On the other hand, n may try to simply not report any of his parents so that he does not have to forward or code over any data. However, each child will expect n to report at least a threshold t of parents; if n does not do so, c can become suspicious and denounce n as potentially malicious, as discussed in Section 3.2. Therefore, n can choose which parents to code over from the set of parents physically linked to him, but he cannot choose fewer than t such parents.
5.3 Extensions
In this section, we describe how our protocol could be applied to other network coding scenarios.
First, note that we did not make any assumption about what a link or a node really is. A link can be a physical link, a chain of physical links, or even a subnetwork. For example, in a peer-to-peer network, a link can include an entire subnetwork via which some peers send data to a receiving peer. In this case, our protocol can be used to check that the receiving peer coded over all sender peers when he forwards the packets to some other peer. As another example, a link in a wired network may represent a connection, while a link in a wireless network may be the ability to hear/communicate with another node, or an edge induced by the data-transmission graph. Moreover, a node can be a physical node (a router, a peer in a P2P network) or a subnetwork; in fact, a few nodes in our model can form one node for a certain system. Using these observations, we can express constraints of real-world networks:
Multiple packets may be sent on some links. Consider that a parent p has a capacity of k packets on the link to node n. In this case, in our protocol, p is represented as k different nodes, each with a different public key. With this transformation, our protocol can be used unchanged.
Broadcast links. Broadcast in wireless networks can be mapped to our model by having the parent have one link (the same link) to all his children (basically, viewing all children as one child), and our protocols can be applied unchanged.
Multisource network coding. In the multisource network coding case, intermediate nodes combine packets for different files from different sources, but each source operates independently and may not communicate with the others. In such work, the metadata of the packet is augmented with information about which source and which file identifiers the current packet contains.
To support our protocols in the multisource case, note that PIP and LogPIP depend on source information only when checking validity signatures. Moreover, our protocols are built modularly on top of a validity signature and do not depend on any particular scheme. This means that all we need is a multisource validity signature and the rest of the algorithms will remain unchanged. Recent work [ABBF10] proposes such schemes: sources can send packets independently of each other, each packet contains a validity signature, and these signatures can be checked at each intermediate node by knowing the public keys of each of these sources. Children will be able to check if their parents coded over the appropriate grandparents as before.
Asynchronous networks and delay-tolerant networks. A child may receive data from his parents at different times. For efficiency reasons, the child may have to code over the data he has already received and send it forward, rather than wait until a piece has arrived from every parent. In this case, the child can enforce the threshold verification above, thus checking that the received packet is coded over at least a few parents.
Various levels of abstraction. Our protocol can be used at various levels of abstraction. For example, in peertopeer networks, nodes can perform:

End-to-end check. A peer can check that the data from another peer is the result of coding over the data of all of certain sources, even if those sources communicated with the tested peer via other nodes or networks.

Individual node check. A peer can check that the data from another peer is the result of coding over all of certain peers to which that peer should be connected according to the peer-to-peer algorithm or application they run.
Many P2P systems nowadays take advantage of smartphones. In Section 6, we show that our protocol is efficient even when run on a smartphone such as the Android Nexus One.
6 Implementation and Evaluation
In this section, we evaluate the usefulness and the performance of our protocol.
6.1 Simulation
We ran a Python simulation to show that there is significant throughput loss due to Byzantine behavior that is not detected in previous work but is detected by our protocols. We examined three types of node behavior: (Mode 1) Byzantine nodes choose coding coefficients such that their packet provides no new information to their children; (Mode 2) Byzantine nodes simply forward one of the received packets (and do not code); (Mode 3) nodes code with pseudorandom coefficients. Neither Mode 1 nor Mode 2 is detected by prior work on pollution schemes, but both are detected by our protocols. Mode 3, which is the correct behavior, is enforced only by our protocols.
The simulation constructs a graph by assigning edges at random between nodes while maintaining a given minimum cut. The Byzantine nodes are placed on the minimum cut. We ran the simulation for two configurations, differing in the number of nodes and edges, the number of packets sent from the source, the min-cut value, and the number of Byzantine nodes. Figure 6 shows the throughput (i.e., the degrees of freedom) at the sink plotted against the min-cut value. We can see that the throughput difference between Modes 1/2 and Mode 3 is significant. Moreover, when the min-cut value of the network is small, the throughput increase when using Mode 3 can be as large as a factor of two (Figure 6(a)). In Figure 6(b), we can see an even more significant throughput difference, with Mode 3 delivering substantially more degrees of freedom of the data sent by the source than Modes 1 and 2.
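The throughput metric here, degrees of freedom, is the rank of the coefficient vectors received at the sink. A minimal sketch (our own simplified version, not the actual simulation code) contrasting Mode 2 (forwarding) with Mode 3 (random coding):

```python
import random

P = 101  # small prime field for the illustration

def rank_mod_p(rows):
    """Gaussian elimination over GF(P); the rank is the number of degrees
    of freedom a sink recovers from the received coefficient vectors."""
    rows = [r[:] for r in rows]
    rank, col, ncols = 0, 0, (len(rows[0]) if rows else 0)
    while rank < len(rows) and col < ncols:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)          # modular inverse
        rows[rank] = [(x * inv) % P for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % P for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def node_output(received, mode, rng):
    """One outgoing coefficient vector. Mode 2 just forwards one received
    packet; Mode 3 codes over all of them with random nonzero coefficients."""
    if mode == 2:
        return received[0][:]
    coeffs = [rng.randrange(1, P) for _ in received]
    return [sum(c * r[j] for c, r in zip(coeffs, received)) % P
            for j in range(len(received[0]))]
```

A forwarding node on the min-cut contributes the same vector on every outgoing edge (rank 1), whereas a coding node contributes fresh combinations, which is the throughput gap the simulation measures.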
6.2 Implementation
We implemented our protocol as a library (called SecureNetCode) in C/C++ and Java, and also embedded it into the Android platform. The C/C++ implementation is useful for lower-level code that is meant to be fast: network routers, various wireless settings, and other C/C++ programs. The Java implementation is useful for higher-level programs such as P2P applications. We embedded the Java implementation in the Android platform and ran it on a Nexus One smartphone. The reason is that, with the growing popularity of smartphones, more P2P content-distribution applications for smartphones are being developed, some using network coding ([Har11], [Fit08]).
Our library implementation is available at www.mit.edu/~ralucap/netcode.html . It consists of the functions in protocols PIP and LogPIP. The line counts of the C/C++ and Java libraries include comments and whitespace, but exclude standard, number-theory, and cryptographic libraries. To implement certain cryptographic operations on large numbers, we used NTL in C/C++ and BigInteger in Java. As cryptographic algorithms, we used OpenSSL DSA and SHA-1. The validity signature used is 1024 bits in size.
Results. Except for the Android results, which were obtained on a standard Nexus One smartphone, the rest of the results were obtained on a dual-core desktop machine. There was observable variability in the results (especially on the Nexus One), so we repeated each experiment many times and report the average time.
Note that we only evaluate the performance of our diversity scheme and do not evaluate the performance of any particular pollution-signature protocol. The reason is that our protocol is not tied to any such scheme and uses it modularly. To enforce that nodes code with nonzero coefficients (Section 4.4), the most important step for throughput, we invoke the pollution scheme no more often than it is invoked without our diversity checks. To enforce our full protocol with pseudorandom coefficients, during verification each node computes one additional homomorphic operation of the integrity signature (per parent for PIP and per challenge for LogPIP), typically an exponentiation in a certain group. Fortunately, the coding coefficients are typically relatively small, e.g., 64 bits (even though the integrity signature allows them to be much larger, as explained in Section 4.1). Note that the pollution-signature verification, which is expensive, is not invoked any additional times.
In Table 1, we present performance results for PIP and LogPIP using one challenge. We consider an integrity signature of size 1024 bits and coding coefficients of size 64 bits.
Parents | C/C++ PIP | C/C++ LogPIP | Java PIP | Java LogPIP | Android PIP | Android LogPIP
1       | 0.2/0.3   | 0.3/0.2      | 2.3/4.5  | 2.7/4.5     | 4.7/4.2     | 4.9/6.9
2       | 0.2/0.6   | 0.3/0.2      | 2.3/9    | 2.7/4.6     | 4.7/7.6     | 5.1/7.1
3       | 0.2/0.8   | 0.3/0.3      | 2.3/14   | 2.8/4.6     | 4.7/15.4    | 5.7/10.4
5       | 0.2/1.4   | 0.3/0.3      | 2.3/23   | 2.8/4.7     | 4.7/24.4    | 6.7/10.5
7       | 0.2/1.9   | 0.3/0.3      | 2.3/32   | 2.9/4.7     | 4.7/35.4    | 10.2/10.8
10      | 0.2/2.8   | 0.3/0.4      | 2.3/45   | 2.9/4.7     | 4.6/70.6    | 11.9/10.3
15      | 0.2/4.2   | 0.3/0.4      | 2.3/68   | 3.0/4.7     | 4.6/101     | 11.7/10.4
50      | 0.3/14    | 0.4/0.4      | 2.3/224  | 3.4/4.7     | 4.6/351     | 28.5/15.6
—       | 0.95      | 0.95         | 8.8      | 8.8         | 15.4        | 15.4

Entries are of the form "prepare time / verify time", in milliseconds.
We can see that, for verification, as we increase the number of parents, the overhead of LogPIP increases very slowly (logarithmically), compared to the linear growth of PIP. The same holds for packet size, which we evaluate later in this section. Therefore, we recommend using LogPIP for scenarios with more than three parents, and PIP for cases with at most three parents. Alternatively, one could select a hybrid algorithm by performing multiple challenges from LogPIP. The performance of LogPIP grows linearly in the number of challenges, so one can tune the probability of detection (see Section 4) against the performance overhead.
We can see that the C/C++ protocols impose modest overhead. Even for 50 parents, a reasonably large value, the running time at a node to prepare the data for transmission is about 0.4 ms and the time to verify a packet's diversity about 0.4 ms in total for LogPIP; for three parents, the time to verify diversity is 0.8 ms for PIP. All these values are independent of how large the packet payload is. Let us compare this to the cost of a pollution scheme, for example [BFKW09]. In that scheme, verification consists of two bilinear-map computations and modular exponentiations per parent, resulting in a verification time of several milliseconds in C using the PBC library for bilinear maps. The relative overhead of PIP and LogPIP on top of such a scheme is therefore small. Due to this low additional overhead, we believe that if one is already using a pollution scheme, one might as well also use our scheme to provide diversity.
The Java and Android implementations are slower because of the language and/or device limitations of the Nexus One. Nevertheless, we believe these implementations still perform well when used for higher level applications like P2P content distribution.
6.3 Packet Size
For PIP, the packet-size increase is linear in the number of parents P that a node codes over (one helper token, i.e., one signature, per parent), while for LogPIP the sum of the packet-size increase and the information sent during the challenge phase grows only logarithmically in P. Recall that the size of the validity signature depends on the validity scheme used, e.g., [BFKW09]. As discussed in Section 4, the packet size does not increase as the payload grows, so this overhead becomes insignificant when transmitting large files.
7 Conclusions
In this paper, we presented two novel protocols, PIP and LogPIP, for detecting whether a node coded correctly over all the packets received according to a random linear network coding algorithm. No previous work defends against such diversity attacks by Byzantine nodes. Our evaluation shows that our protocols are efficient and the overhead of both of our protocols does not grow with the size of the packet payload.
References
 [AB09] Shweta Agrawal and Dan Boneh. Homomorphic MACs: MAC-based integrity for network coding. In ACNS, 2009.
 [ABBF10] Shweta Agrawal, Dan Boneh, Xavier Boyen, and David Mandell Freeman. Preventing pollution attacks in multi-source network coding. In PKC ’10: Proceedings of the 13th International Conference on Practice and Theory in Public Key Cryptography, pages 161–176. Springer, 2010.
 [ACLY00] Rudolf Ahlswede, Ning Cai, Shuo-Yen Robert Li, and Raymond W. Yeung. Network information flow. IEEE Trans. Inf. Theory, 2000.
 [BFKW09] Dan Boneh, David Mandell Freeman, Jonathan Katz, and Brent Waters. Signing a linear subspace: Signature schemes for network coding. In PKC ’09: Proceedings of the 12th International Conference on Practice and Theory in Public Key Cryptography, pages 68–87. Springer, 2009.
 [CJL06] Denis Charles, Kamal Jain, and Kristin Lauter. Signatures for network coding. In CISS ’06: Proceedings of the 40th Annual Conference on Information Sciences and Systems, pages 857–863, 2006.
 [DCNR09] Jing Dong, Reza Curtmola, and Cristina Nita-Rotaru. Practical defenses against pollution attacks in intra-flow network coding for wireless mesh networks. In WiSec, 2009.
 [Fit08] Frans Fitzek. Network coding for mobile phones. Online at http://blogs.forum.nokia.com/blog/frankfitzeksforumnokiablog/2008/10/06/networkcoding, 2008.
 [GGM86] Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to construct random functions. Journal of the ACM, 1986.
 [GR05] Christos Gkantsidis and Pablo Rodriguez. Network coding for large scale content distribution. In INFOCOM, 2005.
 [GR06] Christos Gkantsidis and Pablo Rodriguez. Cooperative security for network coding file distribution. In INFOCOM, 2006.
 [Har11] Larry Hardesty. Secure, synchronized, social TV. Online at http://web.mit.edu/newsoffice/2011/socialtvnetworkcoding0401.html, 2011.
 [HKM03] Tracey Ho, Ralf Koetter, Muriel Médard, David R. Karger, and Michelle Effros. The benefits of coding over routing in a randomized setting. In ISIT, 2003.
 [HLK08] Tracey Ho, Ben Leong, Ralf Koetter, Muriel Médard, Michelle Effros, and David R. Karger. Byzantine modification detection in multicast networks with random network coding. IEEE Transactions on Information Theory, 54(6):2798–2803, 2008.
 [Jia06] Anxiao Jiang. Network coding for joint storage and transmission with minimum cost. In ISIT ’06: Proceedings of the 2006 IEEE International Symposium on Information Theory, pages 1359–1363. IEEE, 2006.
 [JLK08] Sidharth Jaggi, Michael Langberg, Sachin Katti, Tracey Ho, Dina Katabi, Muriel Médard, and Michelle Effros. Resilient network coding in the presence of Byzantine adversaries. IEEE Trans. Inf. Theory, 2008.
 [JSC05] Sidharth Jaggi, Peter Sanders, Philip A. Chou, Michelle Effros, Sebastian Egner, Kamal Jain, and Ludo M. G. M. Tolhuizen. Polynomial time algorithms for multicast network code construction. IEEE Transactions on Information Theory, 51(6):1973–1982, 2005.
 [KFM04] Maxwell N. Krohn, Michael J. Freedman, and David Mazières. On-the-fly verification of rateless erasure codes for efficient content distribution. In S&P ’04: Proceedings of the 2004 IEEE Symposium on Security and Privacy, pages 226–240. IEEE Computer Society, 2004.
 [KM03] Ralf Koetter and Muriel Médard. An algebraic approach to network coding. IEEE/ACM Transactions on Networking, 11(5):782–795, 2003.
 [KMB10] MinJi Kim, Muriel Médard, and João Barros. A multi-hop multi-source algebraic watchdog. CoRR, 2010.
 [KTT09] Oliver Kosut, Lang Tong, and David Tse. Nonlinear network coding is necessary to combat general Byzantine attacks. In Allerton, 2009.
 [LAV10] Guanfeng Liang, Rachit Agarwal, and Nitin Vaidya. When watchdog meets coding. In INFOCOM, 2010.
 [LM10] Anh Le and Athina Markopoulou. Locating Byzantine attackers in intra-session network coding using SpaceMac. In NetCod, 2010.
 [LMK05] Desmond S. Lun, Muriel Médard, and Ralf Koetter. Efficient operation of wireless packet networks using network coding. In IWCT ’05: Proceedings of the 2005 International Workshop on Convergent Technologies, 2005.
 [LYC03] ShuoYen Robert Li, Raymond W. Yeung, and Ning Cai. Linear network coding. IEEE Trans. Inf. Theory, 49(2):371–381, February 2003.
 [Mer89] Ralph C. Merkle. A certified digital signature. In CRYPTO ’89: Proceedings of the 9th Annual International Cryptology Conference, pages 218–238, New York, NY, USA, 1989. Springer-Verlag New York, Inc.
 [NIS] FIPS PUB 186-3: Digital Signature Standard (DSS). National Institute of Standards and Technology, http://csrc.nist.gov/groups/ST/toolkit/digital_signatures.html.
 [NS08] Zunnun Narmawala and Sanjay Srivastava. Survey of applications of network coding in wired and wireless networks. In NCC ’08: Proceedings of the 14th Annual National Conference on Communications, 2008.
 [WNE00] Jeffrey E. Wieselthier, Gam D. Nguyen, and Anthony Ephremides. On the construction of energy-efficient broadcast and multicast trees in wireless networks. In INFOCOM ’00: Proceedings of the 19th Annual IEEE International Conference on Computer Communications, pages 585–594. IEEE, 2000.
 [WVNK10] Qiyan Wang, Long Vu, Klara Nahrstedt, and Himanshu Khurana. Identifying malicious nodes in network-coding-based peer-to-peer streaming networks. In INFOCOM, 2010.
 [YSJL10] Hongyi Yao, Danilo Silva, Sidharth Jaggi, and Michael Langberg. Network codes resilient to jamming and eavesdropping. CoRR, 2010.
 [YWRG08] Zhen Yu, Yawen Wei, Bhuvaneswari Ramkumar, and Yong Guan. An efficient signature-based scheme for securing network coding against pollution attacks. In INFOCOM, 2008.
 [ZKMH07] Fang Zhao, Ton Kalker, Muriel Médard, and Keesook J. Han. Signatures for content distribution with network coding. In ISIT, 2007.