Codes against Online Adversaries

Abstract

In this work we consider the communication of information in the presence of an online adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x = (x_1, ..., x_n) symbol-by-symbol over a communication channel. The adversarial jammer can view the transmitted symbols x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each symbol x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j ≤ i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. More generally, for a delay parameter d ∈ (0, 1), we study the scenario in which the jammer's decision on the corruption of x_i must depend solely on x_j for j ≤ i - dn.

In this work, we initiate the study of codes for online adversaries, and present a tight characterization of the amount of information one can transmit in both the 0-delay and, more generally, the d-delay online setting. We show that for 0-delay adversaries, the achievable rate asymptotically equals that of the classical adversarial model. For positive values of d we show that the achievable rate can be significantly greater than that of the classical model.

We prove tight results for both additive and overwrite jammers when the transmitted symbols are assumed to be over a sufficiently large field F_q. In the additive case the jammer may corrupt the transmitted symbol x_i by adding to it a corresponding error e_i; in this case the receiver gets the symbol y_i = x_i + e_i. In the overwrite case, the jammer may corrupt x_i by replacing it with a corrupted symbol y_i of his choice. For positive delay d, the symbol x_i may not be known to the adversarial jammer at the time it is being corrupted, hence these two error models, and the corresponding achievable rates, are shown to differ substantially.

Finally, we extend our results to a jam-or-listen online model, where the online adversary can either jam a symbol or eavesdrop on it. This corresponds to several scenarios that arise in practice. We again provide a tight characterization of the achievable rate for several variants of this model.

The rate-regions we prove for each model are information-theoretic in nature and hold for computationally unbounded adversaries. The rate regions are characterized by "simple" piecewise linear functions of p and d. The codes we construct to attain the optimal rate for each scenario are computationally efficient.

1 Introduction

Consider the following adversarial communication scenario. A sender Alice wishes to transmit a message u to a receiver Bob. To do so, Alice encodes u into a codeword x and transmits it over a channel. In this work the codeword x = (x_1, ..., x_n) is considered to be a vector of length n over an alphabet of size q. However, Calvin, a malicious adversary, can observe x and corrupt up to a p-fraction of the transmitted symbols (i.e., pn symbols).

In the classical adversarial channel model, e.g., [6, 3], it is usually assumed that Calvin has full knowledge of the entire codeword x, and based on this knowledge (together with the knowledge of the code shared by Alice and Bob) Calvin can maliciously plan what error to impose on x. We refer to such an adversary as an omniscient adversary. For large values of q (which is the focus of this work) communication in the presence of an omniscient adversary is well understood. It is known that Alice can transmit no more than (1 - 2p)n error-free symbols to Bob when using codewords of block length n. Further, efficient schemes such as Reed-Solomon codes [10, 1] are known to achieve this optimal rate.
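For concreteness, here is a small numeric instance of this benchmark (our own illustration; the figures are not taken from the paper):

```latex
% With block length n, a p-fraction of adversarial symbol errors, and a large
% alphabet, an MDS code (e.g., Reed-Solomon) of minimum distance 2pn + 1
% corrects all pn errors, so the number of error-free message symbols is
%   k = n - 2pn = (1 - 2p) n .
% For example, n = 1000 and p = 0.1 give k = 800 message symbols (rate 0.8).
\[
  k \;=\; n - 2pn \;=\; (1 - 2p)\,n .
\]
```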

Online adversaries

In this work we initiate the analysis of coding schemes that allow communication against certain adversaries that are weaker than the omniscient adversary. We consider adversaries that behave in an online manner. Namely, for each symbol x_i, we assume that Calvin decides whether to change it or not (and if so, how to change it) based on the symbols x_j for j ≤ i alone, i.e., the symbols that he has already observed. In this case we refer to Calvin as an online adversary.

Online adversaries arise naturally in practical settings, where adversaries typically have no a priori knowledge of Alice's message u. In such cases they must simultaneously learn u based on Alice's transmissions, and jam the corresponding codeword accordingly. This causality assumption is reasonable for many communication channels, both wired and wireless, where Calvin is not co-located with Alice. For example, consider the scenario in which the transmission of x is done during n channel uses over time, where at time i the symbol (or packet) x_i is transmitted over the channel. Calvin can only corrupt a packet when it is transmitted (and thus the error he imposes is based on his view so far). To decode the transmitted message, Bob waits until all the packets have arrived. As in the omniscient model, Calvin is restricted in the number of packets he can corrupt. This might be because of limited processing power, limited transmit energy, or a need to keep his location secret.

In addition to the online adversaries described above, we also consider the more general scenario in which Calvin's jamming decisions are delayed. That is, for a delay parameter d, Calvin's decision on the corruption of x_i must depend solely on x_j for j ≤ i - dn. We refer to such adversaries as d-delay online adversaries. Such d-delay online adversaries correspond, for example, to the scenario in which the error transmission of the adversary is delayed due to certain computational tasks that the adversary needs to perform. We show that the 0-delay model (i.e., d = 0) and the d-delay model for d > 0 display different behaviour, hence we treat them separately.

Error model

We consider two types of attacks by Calvin. An additive attack is one in which Calvin can add error symbols e_i to Alice's transmitted symbols x_i. Thus y_i, the i-th symbol Bob receives, equals x_i + e_i. Here addition is defined over the finite field F_q with q elements. An overwrite attack is one in which Calvin overwrites some of Alice's transmitted symbols x_i by the symbols y_i received by Bob. These two attacks are significantly different if we assume that at the time Calvin is corrupting x_i he has no knowledge of its value; this is exactly the positive-delay scenario.

The two attacks we study are intended to model different physical models of Calvin's jamming. For instance, in wired packet-based channels Calvin can directly replace some transmitted packets x_i with fake packets y_i, and therefore behaves like an overwriting adversary. On the other hand, in wireless networks, Bob's received signal is usually a function of both x_i and the additive error e_i.

Lastly we consider the jam-or-listen online adversary. In this scenario, in addition to being an online adversary, if Calvin jams a symbol x_i then he has no idea what value it takes. This model is again motivated by wireless transmissions, where a node can typically either transmit or receive, but not both. For this model, we consider all four combinations of 0-delay/d-delay and additive/overwrite errors.

A rate R is said to be achievable against an adversary Calvin if it is possible for Alice to transmit a message of at least Rn symbols of F_q over n channel uses to Bob (with probability of decoding error going to zero as n grows). The capacity, when communicating in the presence of a certain adversarial model, is defined to be the supremum of all achievable rates. Thus, the capacity characterizes the rate achievable in the adversarial model under study. We denote the capacity of the classical omniscient adversarial channel which can change pn characters by C_omn. We denote the capacity of the d-delay online adversarial channels which can change pn characters by C^d_add for the additive error model, and by C^d_ow for the overwrite error model. For the jam-or-listen adversary, we denote the corresponding capacities by C^d_jl-add or C^d_jl-ow, depending on whether Calvin uses additive or overwrite errors. A more detailed discussion of our definitions and notation is given in Section 2.

Our results

In this work, we initiate the study of codes for online adversaries, and present a tight characterization of the amount of information one can transmit in both the 0-delay and, more generally, the d-delay online setting. To the best of our knowledge, communication in the presence of an online adversary (with or without delay) has not been explicitly addressed in the literature. Nevertheless, we note that the model of online channels, being a natural one, has been "on the table" for several decades, and the analysis of the online channel model appears as an open question in the book of Csiszár and Körner [4] (in the section addressing Arbitrarily Varying Channels [2]). Various variants of causal adversaries have been addressed in the past, for instance [2, 5, 11, 12, 9]; however, the models considered therein differ significantly from ours.

At a high level, we show that for 0-delay adversaries the achievable rate equals that of the classical "omniscient" adversarial model. This may at first come as a surprise, as the online adversary is weaker than the omniscient one, and hence one may suspect that it allows a higher rate of communication. We then show, for positive values of the delay parameter d, that the achievable rate can be significantly greater than that achievable against omniscient adversaries.

We stress that our results are information-theoretic in nature and thus hold even if the adversary is computationally unbounded. The codes we construct to achieve the optimal rates are computationally efficient to design, and for Alice and Bob to implement (i.e., efficiently encodable and decodable). All our results assume that the field size q is significantly larger than the block length n. In some cases it suffices to take q polynomial in n, but in others we need q exponential in n. Both settings lend themselves naturally to real-world scenarios, as in both cases a field element can be represented by a polynomial (in n) number of bits.

The exact statements of our results are in Theorems 1, 2, 3 and 4 below. The technical parameters (including rate, field size, error probability, and time complexity) of our results are summarized in Table 1 of the Appendix. We start by showing that in the 0-delay case, the capacity of the online channel equals that of the stronger omniscient channel model.

Theorem 1 (0-delay model)

For any p ∈ [0, 1], the capacity of the 0-delay online adversarial channel under both the overwrite and the additive error models equals the capacity under the omniscient model. In particular,

C^0_add = C^0_ow = C_omn = max{1 - 2p, 0}.    (1)

Moreover, the capacity can be attained by an efficient encoding and decoding scheme.

Next we characterize the capacity of the d-delay online channel under the additive error model.

Theorem 2 (d-delay with additive error model)

For any p ∈ [0, 1], the capacity of the d-delay online channel for any delay d > 0 under the additive error model is C^d_add = 1 - p. Moreover, the capacity can be attained by an efficient encoding and decoding scheme.

We then turn to study the d-delay online channel under the overwrite error model. The capacity we present is at least as large as that achievable against an additive or overwrite 0-delay adversary who changes pn symbols. However, it is sometimes significantly lower than that achievable against an additive d-delay adversary.

Theorem 3 (d-delay with overwrite error model)

For any p ∈ [0, 1] and any delay d ∈ (0, 1), the capacity of the d-delay online channel under the overwrite error model is

C^d_ow = 1 - p if p ≤ d,   C^d_ow = 1 - 2p + d if d < p < 1/2,   C^d_ow = 0 if p ≥ 1/2.    (2)

Moreover, the capacity can be attained by an efficient encoding and decoding scheme.

Lastly, we show that the optimal rates achievable against a jam-or-listen online adversary equal the corresponding optimal rates achievable against an online adversary, for each of the four combinations of 0- or d-delay and additive or overwrite attacks.

Theorem 4 (jam-or-listen model)

For any p ∈ [0, 1], any delay d, and both the additive and the overwrite error models, the capacity of the d-delay online channel under the jam-or-listen model is equal to that of the corresponding d-delay online channel:

C^d_jl-add = C^d_add   and   C^d_jl-ow = C^d_ow.    (3)

Moreover, the capacity can be attained by the same efficient encoding and decoding schemes as in Theorems 1, 2 and 3.

Outline of proof techniques

The proofs of Theorems 1, 2, 3 and 4 require obtaining several non-trivial upper and lower bounds on the capacity of the corresponding channel models. The lower bounds are proved constructively by presenting efficient encoding and decoding schemes operating at the optimal rates of communication. The upper bounds are typically proven by presenting strategies for Calvin that result in a probability of decoding error that is strictly bounded away from zero regardless of Alice and Bob's encoding/decoding schemes.

Theorem 1 states that communication in the presence of a 0-delay online adversary is no easier than communicating in the presence of the (more powerful) omniscient adversary. There already exist efficient encoding and decoding schemes that allow communication at the optimal rate of 1 - 2p in the presence of an omniscient adversary [10, 1]. Thus our contribution in this scenario is in the design of a strategy for Calvin that does not allow communication at a higher rate. The scheme we present is fairly straightforward, and allows Calvin to enforce a probability of error bounded away from zero whenever Alice and Bob communicate at a rate higher than 1 - 2p. Roughly speaking, Calvin uses a two-phase wait-and-attack strategy. In the first phase (whose length depends on p), Calvin does not corrupt the transmitted symbols but merely eavesdrops. He is thus able to reduce his ambiguity regarding the codeword x that Alice transmits. In the second phase, using the knowledge of x he has gained so far, Calvin designs an error vector to be imposed on the remaining part of the codeword that Alice is yet to transmit.
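To make the two-phase strategy concrete, the sketch below plays Calvin against a toy Reed-Solomon codebook; it is purely illustrative (the field size, the specific codebook, and the deterministic choice of which tail positions to overwrite are our assumptions, not the construction analyzed in the proof):

```python
# Toy illustration (not from the paper): a two-phase "wait and attack" against a
# rate-2/3 Reed-Solomon codebook with p = 1/4, so the attempted rate 2/3 exceeds
# 1 - 2p = 1/2. Calvin eavesdrops on the first (1 - 2p)n symbols, interpolates a
# second codeword x2 that agrees with that prefix, and then overwrites pn of the
# remaining symbols so that Bob's received word is within distance pn of BOTH
# codewords, making unambiguous decoding impossible.
import random

Q = 929                      # small prime field, a stand-in for the large field F_q
N, K = 12, 8                 # block length and message length (rate 2/3)
PN = 3                       # adversary budget: pn = 3 symbol changes (p = 1/4)

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial of degree < len(points) through `points`, mod Q."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % Q) % Q
                den = den * ((xi - xj) % Q) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

def rs_encode(msg):
    """Treat msg as evaluations at 0..K-1 and extend to evaluations at 0..N-1."""
    pts = list(enumerate(msg))
    return [lagrange_eval(pts, a) for a in range(N)]

random.seed(1)
x = rs_encode([random.randrange(Q) for _ in range(K)])

# Phase 1 (wait): Calvin observes the first (1 - 2p)n = 6 symbols without corrupting.
prefix_len = N - 2 * PN
observed = x[:prefix_len]

# Phase 2 (attack): interpolate a different codeword x2 that agrees with the prefix,
# by keeping the observed evaluations and altering the next K - prefix_len of them.
alt = list(enumerate(observed)) + [(prefix_len + t, (x[prefix_len + t] + 1) % Q)
                                   for t in range(K - prefix_len)]
x2 = [lagrange_eval(alt, a) for a in range(N)]

# Calvin overwrites pn tail symbols of x with the corresponding symbols of x2.
y = list(x)
for i in range(prefix_len, prefix_len + PN):
    y[i] = x2[i]

dist = lambda a, b: sum(u != v for u, v in zip(a, b))
print("d(y, x) =", dist(y, x), " d(y, x2) =", dist(y, x2), " budget pn =", PN)
# Both distances are at most pn, so Bob cannot tell whether Alice sent x or x2.
```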

Theorem 2 states that for any d > 0, the capacity of the d-delay online channel under the additive error model is 1 - p. Note that this expression is independent of d. In fact, even if Calvin's attack is delayed by just a single symbol, the rate of communication achievable between Alice and Bob is strictly greater than in the corresponding scenario in Theorem 1! The upper bound follows directly from the simple observation that Calvin can always add random symbols from F_q to the first pn symbols of x, and therefore the corresponding received symbols carry no information. The lower bound involves a non-trivial code construction. In a nutshell, we show a reduction between communicating over the d-delay online channel under the additive error model and communicating over an erasure channel. In an erasure channel, the receiver Bob is assumed to know which of the elements of the transmitted codeword were corrupted by Calvin. As one can efficiently communicate over an erasure channel with rate 1 - p, e.g., [3], we obtain the same rate for our online channel. The main question is now: "In our model, how can Bob detect that a received symbol was corrupted by Calvin?" The idea is to use authentication schemes which are information-theoretically secure, and lend themselves to the adversarial setting at hand. Namely, each transmitted symbol will include some internal redundancy, a signature, which upon decoding will be authenticated. As Calvin is a positive-delay adversary, it is assumed that he is unaware of both the symbol being transmitted and its signature. It is enough that the signature scheme we construct be resilient against such an adversary.

In Theorem 3 both the lower and the upper bound on the capacity require novel constructions. For the upper bound we refine the "wait-and-attack" strategy for Calvin outlined in the discussion above on Theorem 1, to fit the d-delay scenario. For the lower bound, we modify Alice and Bob's encoding/decoding schemes, outlined in the discussion above on Theorem 2, to fit the d-delay overwrite model. Namely, as before, Alice's encoding scheme comprises an erasure code along with a hash function used to authenticate individual symbols. However, in general, an overwrite adversary is more powerful than an additive adversary. This is because an overwriting adversary can substitute any symbol x_i by a new symbol y_i of his choice. Thus Calvin can choose to replace x_i with a symbol y_i that is a valid output of the hash function. Hence the design of the hash function for Theorem 3 is more intricate than the corresponding construction in Theorem 2.

Roughly speaking, in the scheme we propose for the d-delay overwrite scenario, the redundancy added to each symbol contains information that allows pairwise authentication (via a pairwise-independent hash function). Namely, each symbol x_i contains n - 1 signatures (one for each other symbol x_j). Using these signatures, some pairs of symbols x_i and x_j can be mutually authenticated to check whether exactly one of them has been corrupted. (For instance, symbols x_i and x_j such that |i - j| ≤ dn can be used for mutual authentication, since when Calvin corrupts either one of them he does not yet know the value of the other.) This allows Bob to build a consistency graph containing a vertex corresponding to each received symbol, and an edge connecting mutually consistent symbols. Bob then analyzes certain combinatorial properties of this consistency graph to extract a maximal set of mutually consistent symbols. He finally inverts Alice's erasure code to retrieve her message. We view Bob's efficient decoding algorithm as the main technical contribution of this work.

Lastly, Theorem 4 states that a jam-or-listen adversary is just as powerful as the previously described online adversaries. This is interesting because a jam-or-listen adversary is in general weaker than an online adversary, since he never finds out the values of the symbols he corrupts. This theorem is a corollary of Theorems 1, 2 and 3 as follows. The code constructions corresponding to the lower bounds are the same as in Theorems 1, 2 and 3. As for the upper bounds, we note that the attacks described for Calvin in Theorems 1, 2 and 3 actually correspond to a jam-or-listen adversary, and hence are valid attacks for this scenario as well.

Outline

The rest of the paper is organized as follows. In Section 2 we present a detailed description of our adversarial models together with some notation to be used throughout our work. In Section 3 we present the proof of Theorem 2. In Section 4 we present the main technical contribution of this work, the proof of Theorem 3. Theorem 1, although stated first in the Introduction, follows rather easily from the proof of Theorem 3 and is thus presented in Section B of the Appendix. Theorem 4 follows directly from Theorems 1, 2, and 3, and is thus presented in Section C of the Appendix. Some remarks and open problems are finally given in Section 5. The technical parameters of our results are summarized in Table 1 of the Appendix.

2 Definitions and Notation

For clarity of presentation we repeat and formalize the definitions presented earlier. Let q be a power of some prime integer, and let F_q be the field of size q. Throughout this work we assume that the field size q is exponential in n (although some of our results only need q to be polynomial in n) and that our parameters p and d are constants independent of n. For any integer k let [k] denote the set {1, ..., k}. Let R be Alice's rate. A code of block length n and rate R is defined by Alice's encoder and Bob's corresponding decoder, as defined below.

Alice: Alice's message u is assumed to be an element of F_q^{Rn}. In our schemes, Alice will also hold a uniformly distributed secret s, which is assumed to be a number of elements (say t) of F_q. Alice's secret is assumed to be unknown to both Bob and Calvin prior to transmission. Alice's encoder is a deterministic function mapping every pair (u, s) in F_q^{Rn} x F_q^t to a codeword x = (x_1, ..., x_n) in F_q^n.

Calvin/Channel: We assume that Calvin is online, namely at the time that the character x_i is transmitted Calvin has knowledge of a set K_i. Here the knowledge set K_i is a subset of {x_1, ..., x_n} that is defined below according to the different jamming models we study. Using his jamming function Calvin either replaces Alice's transmitted symbol x_i with a corresponding symbol y_i, or adds an error e_i to x_i such that Bob receives y_i = x_i + e_i.

In this work, Calvin's knowledge sets must satisfy the following constraints. Causality/d-delay: Calvin's knowledge set K_i is a subset of {x_j : j ≤ i - dn} (in the 0-delay case, a subset of {x_j : j ≤ i}). Jam-or-listen: If Calvin is a jam-or-listen adversary, K_i is inductively defined so that it does not contain any x_j such that y_j ≠ x_j. That is, Calvin has no knowledge of any symbol he corrupts.

Calvin's jamming function must satisfy the following constraints. For each i, Calvin's jamming function, and in particular the corruption imposed on x_i, depends solely on the set K_i, Alice's encoding scheme, and Bob's decoding scheme. Additive/Overwrite: If Calvin is an additive adversary, y_i = x_i + e_i, with addition defined over F_q. If Calvin is an overwrite adversary, y_i may be any symbol of F_q of Calvin's choice. Power: Bob's received symbol y_i differs from Alice's transmitted symbol x_i for at most pn values of i in [n].
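A minimal simulation of these channel constraints, as one possible reading of the model (function names and structure are ours, not the paper's):

```python
# Minimal model of the jamming constraints (illustrative only, not the paper's
# formalism). An error pattern is a dict {position: value}: "additive" adds the
# value to x_i over F_q, "overwrite" replaces x_i by the value. The power
# constraint limits the number of corruptions to pn.

def apply_adversary(x, errors, mode, q, p):
    assert len(errors) <= int(p * len(x)), "power constraint: at most pn corruptions"
    y = list(x)
    for i, v in errors.items():
        if mode == "additive":
            y[i] = (y[i] + v) % q          # Bob receives x_i + e_i
        elif mode == "overwrite":
            y[i] = v % q                   # Bob receives whatever Calvin wrote
        else:
            raise ValueError(mode)
    return y

# Example: n = 10 symbols over F_101, p = 0.3, so at most 3 corruptions.
x = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(apply_adversary(x, {2: 50, 7: 1}, "additive", q=101, p=0.3))
print(apply_adversary(x, {2: 50, 7: 1}, "overwrite", q=101, p=0.3))
```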

Bob: Bob's decoder is a (potentially) probabilistic function solely of Alice's encoder and the received vector y = (y_1, ..., y_n). It maps every vector in F_q^n to an element of F_q^{Rn}.

Code parameters: Bob is said to make a decoding error if the message he decodes differs from the message u encoded by Alice. The probability of error for a given message u is defined as the probability, over Alice's secret s, Calvin's randomness, and Bob's randomness, that Bob decodes incorrectly. The probability of error of the coding scheme is defined as the maximum over all messages u of the probability of error for message u. Note that these definitions imply that a successful decoding scheme gives a worst-case guarantee. Namely, it implies a high success probability no matter which message was chosen by Alice.

The rate R is said to be achievable if for every ε > 0 and every sufficiently large n there exists a computationally efficient code of block length n and rate R that allows communication with probability of error at most ε. The supremum of the achievable rates is called the capacity and is denoted by C. We denote the capacity of the d-delay online adversarial channels under the additive error model by C^d_add and under the overwrite error model by C^d_ow. For a jam-or-listen adversary we denote the corresponding capacities by C^d_jl-add and C^d_jl-ow.

We put no computational restrictions on Calvin. This is because our proofs are information-theoretic in nature, and are valid even for a computationally unbounded adversary. Our constructions, however, are computationally efficient for Alice and Bob.

Remark 2.1

We can allow Calvin to be even stronger than outlined in the model above. In particular, Calvin's jamming function can also depend on Alice's message u, and our theorems and corresponding proofs are unchanged. The crucial requirement is that each of Calvin's jamming functions be independent of Alice's secret s, conditioned on the symbols in the corresponding knowledge set. That is, the only information Calvin has about Alice's secret is what he gleans by observing the symbols in K_i.

Packets: For several of our code constructions (specifically those in Theorems 2 and 3), it is conceptually and notationally convenient to view each symbol from F_q as a "packet" of m symbols from a smaller finite field F_{q_0} of size q_0 instead. In particular, we assume q = q_0^m. Here m is an integer code-design parameter to be specified later. For a codeword x, Alice treats each symbol (or packet) x_i in F_q as m sub-symbols x_i(1) through x_i(m) from F_{q_0}. Similarly, she treats her secret as a sequence of sub-symbols from F_{q_0}.
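As one concrete (and purely illustrative) reading of this packet convention, a symbol of the large field can be identified with its base-q_0 digit expansion; the paper's construction works with field extensions, for which the integer expansion below is only a stand-in:

```python
# Illustration only: identify a symbol of F_q, q = q0**m, with a packet of m
# sub-symbols over F_q0 via its base-q0 expansion. (A field-extension view gives
# the same bijection; the expansion here is just a convenient stand-in.)

def to_packet(symbol, q0, m):
    digits = []
    for _ in range(m):
        symbol, r = divmod(symbol, q0)
        digits.append(r)
    return digits                       # m sub-symbols over F_q0

def from_packet(digits, q0):
    value = 0
    for d in reversed(digits):
        value = value * q0 + d
    return value

q0, m = 7, 4                            # q = 7**4 = 2401
s = 1234
assert from_packet(to_packet(s, q0, m), q0) == s
print(to_packet(s, q0, m))
```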

3 Proof of Theorem 2

We consider the block length n to be sufficiently large. Throughout, to simplify our presentation, we assume that expressions such as pn or dn are integers. We first prove that 1 - p is an upper bound on C^d_add by showing a "random-add" strategy for Calvin. Namely, consider an adversary who chooses pn elements of F_q uniformly at random and adds them to the first pn symbols in Alice's transmissions. Thus the first pn symbols Bob receives are uniformly distributed random elements of F_q, and carry no information at all. It is not hard to verify that such an adversarial strategy allows communication between Alice and Bob at rate at most 1 - p. This concludes our discussion of the upper bound.

We now describe how Alice and Bob achieve a rate approaching 1 - p with computationally tractable codes. Alice's encoding is in two phases. In the first phase, roughly speaking, she uses an erasure code to encode the approximately (1 - p)n symbols of her message into an erasure-codeword w = (w_1, ..., w_n) with n symbols. The erasure code allows her message to be retrieved from any subset of at least (1 - p)n symbols of the erasure-codeword w. In the second phase, Alice uses "short" random keys and corresponding hash functions to transform each symbol w_i of the erasure-codeword into the corresponding transmitted symbol x_i. This hash function is carefully constructed so that if Calvin (a positive-delay additive adversary) corrupts a symbol x_i, with high probability Bob is able to detect this in a computationally efficient manner by examining the corresponding received symbol y_i. Bob's decoding scheme is also a two-phase process. In the first phase he uses the hash scheme described above to discard the symbols he detects Calvin has corrupted – there are at most pn such symbols. In the second phase Bob uses the remaining symbols and the decoder of Alice's erasure code to retrieve her message. We assume Alice's erasure code is efficiently encodable and decodable (for instance Reed-Solomon codes [10, 1] can be used). In what follows we give our code construction in detail.
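The erasure-code component of this two-phase construction can be sketched as follows, using Reed-Solomon-style polynomial evaluation over a small prime field (an illustration under our own parameter choices; any efficiently decodable code that recovers the message from (1 - p)n known-good symbols would do, and the authentication layer is sketched separately after the hash family is defined below):

```python
# Illustrative erasure code (not the paper's exact instantiation): a Reed-Solomon
# code over a prime field F_Q. Any K of the N coordinates determine the degree-<K
# message polynomial, so the message is recovered from any K = (1 - p)N
# uncorrupted symbols once the corrupted ones have been identified and discarded.

Q = 101

def _interp_eval(points, x):
    """Evaluate at x the unique polynomial of degree < len(points) through `points`, mod Q."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % Q) % Q
                den = den * ((xi - xj) % Q) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

def erasure_encode(msg, n):
    pts = list(enumerate(msg))                 # message symbols = evaluations at 0..K-1
    return [_interp_eval(pts, a) for a in range(n)]

def erasure_decode(received, k):
    """`received` is a list of (position, symbol) pairs, at least k of them, all uncorrupted."""
    pts = received[:k]
    return [_interp_eval(pts, a) for a in range(k)]

msg = [5, 17, 42, 99]                          # K = 4 message symbols
n, p = 8, 0.5                                  # tolerate up to pn = 4 erasures
codeword = erasure_encode(msg, n)
survivors = [(i, codeword[i]) for i in (0, 3, 5, 6)]   # any 4 surviving coordinates
assert len(survivors) >= n - int(p * n)
assert erasure_decode(survivors, len(msg)) == msg
print("recovered:", erasure_decode(survivors, len(msg)))
```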

Let the sub-field size q_0 be sufficiently large (to be specified explicitly later in the proof). Let q = q_0^m. As mentioned in Section 2, Alice treats each symbol x_i of a codeword as a packet, by breaking it into m sub-symbols x_i(1) through x_i(m) from F_{q_0}. She partitions x_i(1) through x_i(m) into three consecutive sequences of sub-symbols of sizes m_D, m_S and m_H respectively. The first m_D sub-symbols are denoted by the set D_i, and correspond to the sub-symbols of w_i, the i-th symbol of the erasure-codeword generated by Alice. The next m_S sub-symbols are denoted by the set S_i, and consist of Alice's secret for packet i, namely, m_S sub-symbols chosen independently and uniformly at random from F_{q_0}. For each i, S_i is chosen independently. The final m_H sub-symbols are denoted by the set H_i, and consist of the hash (or signature) of the information D_i by the function h_{S_i}. Here, h is taken from a family of hash functions (known to all parties in advance) to be defined shortly. All in all, each transmitted symbol of Alice consists of the tuple (D_i, S_i, H_i) = (D_i, S_i, h_{S_i}(D_i)).

We now explicitly describe the construction of each D_i from Alice's message u. Alice chooses her message to consist of (1 - p) n m_D sub-symbols over F_{q_0}. Alice uses an erasure code (resilient to pn m_D erasures) to transform these sub-symbols of u into the vector w comprising n m_D sub-symbols over F_{q_0}. She then denotes consecutive blocks of m_D sub-symbols of w by the corresponding D_i's. More specifically, D_i consists of the sub-symbols of w in locations (i - 1)m_D + 1 through i m_D.

Before completing the description of Alice's encoder by describing the hash family, we outline Bob's decoder. Bob first authenticates each received symbol y_i by checking that its hash part equals the hash of its data part under its own key part. He then decodes using the decoding algorithm of the erasure code on the data parts of all symbols y_i that pass Bob's authentication test.

We now define our hash family and show that with high probability any corrupted symbol will not pass Bob's authentication check. More specifically, we study only corrupted symbols whose data part is corrupted, i.e., for which e_{D_i} ≠ 0. (If e_{D_i} = 0, the erasure decoder described above will not make an error.) Let e_i be the error imposed by Calvin in the transmission of the i-th packet x_i. Hence for an additive adversary Calvin, e_i is defined by y_i = x_i + e_i. Analogously to the corresponding sub-divisions of x_i and y_i, we decompose e_i into the tuple (e_{D_i}, e_{S_i}, e_{H_i}). In particular, we define the sets D'_i, S'_i and H'_i of y_i so as to satisfy D'_i = D_i + e_{D_i}, S'_i = S_i + e_{S_i} and H'_i = H_i + e_{H_i} (addition is performed element-wise over F_{q_0} on the corresponding sub-symbols in each set). For Bob to decode correctly, the property that y_i fails Bob's authentication test whenever e_{D_i} ≠ 0 needs to be satisfied with high probability. More formally, noting that S_i is not known to Calvin and is thus independent of e_i, we need, for all i and all e_i such that e_{D_i} ≠ 0, that the probability over S_i that y_i passes the authentication test is sufficiently small. Or equivalently, that the probability over S_i that h_{S'_i}(D'_i) = H'_i is sufficiently small.

To complete our proof we present our hash family. Recall that D_i consists of m_D = m_H · m_S sub-symbols in F_{q_0}. Let M_{D_i} represent D_i when arranged as an m_H x m_S matrix. Let s_i be the column vector of m_S sub-symbols corresponding to S_i. We define the value of the hash h_{S_i}(D_i) as the length-m_H column vector M_{D_i} s_i. Thus, for the corresponding errors defined above, the received symbol y_i passes Bob's test iff M_{D'_i}(s_i + e_{S_i}) = M_{D_i} s_i + e_{H_i}. Here M_{e_{D_i}} is the matrix representation of e_{D_i}, and e_{S_i} and e_{H_i} are viewed as column vectors. Namely, expanding the above, the corrupted symbol received by Bob is authenticated only if M_{e_{D_i}} s_i = e_{H_i} - M_{D'_i} e_{S_i}.
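The sketch below instantiates this matrix-vector hash with toy dimensions (our choices, not the paper's parameters) and estimates empirically how often a fixed additive forgery, built without knowledge of the secret, survives the check:

```python
# Illustrative instance of the per-symbol hash h_S(D) = M_D * S over F_Q (toy
# dimensions; the paper's construction has the same shape with much larger
# fields). D has a*b sub-symbols arranged as an a-by-b matrix M_D, the secret S
# has b sub-symbols, and the signature H = M_D S has a sub-symbols. Passing the
# check after an additive corruption with e_D != 0 forces M_{e_D} S to hit one
# particular vector, which happens with probability at most 1/Q over S.

import random
Q, a, b = 101, 2, 3
random.seed(0)

def mat(D):                          # arrange the data part as an a-by-b matrix
    return [D[r * b:(r + 1) * b] for r in range(a)]

def hash_val(D, S):                  # H = M_D * S  (mod Q)
    return [sum(m * s for m, s in zip(row, S)) % Q for row in mat(D)]

def authentic(D, S, H):              # Bob's per-symbol check
    return hash_val(D, S) == H

# Calvin's fixed forgery: he may know D, but only guesses the secret S.
D = [random.randrange(Q) for _ in range(a * b)]
eD = [1, 0, 0, 0, 0, 0]              # nonzero error on the data part (rank-1 M_{e_D})
S_guess = [random.randrange(Q) for _ in range(b)]

accepted, trials = 0, 20000
for _ in range(trials):
    S = [random.randrange(Q) for _ in range(b)]          # Alice's fresh secret
    H = hash_val(D, S)
    # Calvin's additive error: e_S = 0, e_D as above, e_H = M_{e_D} * S_guess.
    eH = hash_val(eD, S_guess)
    Dy = [(x + e) % Q for x, e in zip(D, eD)]
    Sy = S                                               # e_S = 0
    Hy = [(x + e) % Q for x, e in zip(H, eH)]
    accepted += authentic(Dy, Sy, Hy)
print(f"accepted forgeries: {accepted}/{trials} (expected about trials/Q = {trials // Q})")
```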

For Calvin to corrupt Alice's transmission in a manner that matters for the erasure decoder, we assume that D'_i ≠ D_i, or equivalently that e_{D_i} ≠ 0; therefore the rank of M_{e_{D_i}} is at least 1. Now, in the equation M_{e_{D_i}} s_i = e_{H_i} - M_{D'_i} e_{S_i}, the left-hand side depends on s_i while the right-hand side does not. Hence the equation is satisfied by at most a 1/q_0 fraction of the values of the vector s_i. Since s_i is uniformly distributed over F_{q_0}^{m_S} and unknown to Calvin, the probability that the corrupted symbol y_i is authenticated is at most 1/q_0.
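Spelled out in our notation, the probability bound behind this statement is the following (a sketch of the standard rank argument):

```latex
% For a corrupted position i with e_{D_i} \neq 0, the authentication test passes
% only if  M_{e_{D_i}} s_i = v  for a vector v determined by Calvin's error and
% the data part, hence independent of the secret s_i. Since the rank of
% M_{e_{D_i}} is at least 1 and s_i is uniform over F_{q_0}^{m_S},
\[
  \Pr_{s_i}\!\left[\, M_{e_{D_i}} s_i = v \,\right]
  \;\le\; q_0^{-\operatorname{rank}(M_{e_{D_i}})}
  \;\le\; \frac{1}{q_0},
\]
% and a union bound over the at most pn corrupted symbols bounds the overall
% probability of a wrongly authenticated symbol by pn/q_0.
```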

All in all, our communication scheme succeeds if each corrupted symbol with e_{D_i} ≠ 0 fails the authentication test. By a union bound over the at most pn corrupted symbols, this happens with probability at least 1 - pn/q_0, as desired. Taking q_0 sufficiently large, and the packet length m large enough that m_S + m_H is negligible compared to m_D, the rate of the code approaches 1 - p, and the field size needed is q = q_0^m.

4 Proof of Theorem 3

Proof of upper bound: We start by addressing the three cases in the upper bound on the capacity C^d_ow. First, if p ≤ d, Calvin overwrites the first pn symbols with symbols chosen uniformly at random, as in the proof of Theorem 2, to attain an upper bound of 1 - p on the achievable rate. Second, if p ≥ 1/2 and the rate is positive, Calvin picks a codeword x' uniformly at random from Alice's codebook. With probability at least 1/2, Alice's true codeword x is distinct from the codeword x'. Calvin then flips an unbiased coin, and depending on the outcome he corrupts either the first half or the second half of x. This corruption is done by replacing the symbols of x by the corresponding symbols of x'. If indeed x ≠ x', Bob has no way of determining whether Alice transmitted x or x'. Thus, Bob's probability of decoding incorrectly is at least 1/4 for large enough n and/or q.

Finally, if d < p < 1/2, we present a "wait-and-attack" strategy for Calvin to prove that 1 - 2p + d is an upper bound on C^d_ow. Suppose not, and that rate 1 - 2p + d + ε is achievable for some ε > 0. Then there are at least q^{(1 - 2p + d + ε)n} possible messages in Alice's codebook. Calvin starts by eavesdropping on, but not corrupting, the first (1 - 2p + d)n symbols Alice transmits. He then overwrites the next dn symbols with symbols chosen uniformly at random from F_q. These locations convey no information to Bob. At this point (after Alice transmits (1 - 2p + d)n + dn symbols), the d-delay Calvin only knows the value of the first (1 - 2p + d)n symbols of x. It can be verified that with probability at least 1/2 over Alice's codeword, after Alice's first (1 - 2p + d)n transmitted symbols, the set of codewords consistent with what Bob and Calvin have observed thus far is of size at least 2. Calvin then picks a random x' from this set. With probability at least 1/2, x' is distinct from Alice's x. Calvin then flips an unbiased coin, and depending on the outcome he corrupts either the first half or the second half of the remaining 2(p - d)n symbols of x. This corruption is done by replacing the symbols of x by the corresponding symbols of x'. If indeed x' ≠ x, Bob has no way of determining whether Alice transmitted x or x'. Thus Bob's probability (over the message set and over the choices of Calvin) of decoding incorrectly is bounded away from zero.
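The budget accounting behind this wait-and-attack bound can be written out as follows (our reconstruction of the counting step, consistent with the attack just described):

```latex
% Calvin eavesdrops on the first \ell symbols, spends dn overwrites to randomize
% the next dn symbols (transmitted before his delayed view of the prefix is
% complete), and then runs the two-codeword confusion attack on the remaining
% n - \ell - dn symbols, replacing half of them. His power constraint requires
\[
  dn + \tfrac{1}{2}\,(n - \ell - dn) \;\le\; pn
  \quad\Longleftrightarrow\quad
  \ell \;\ge\; (1 - 2p + d)\,n ,
\]
% so the confusion attack is available exactly when the rate exceeds 1 - 2p + d,
% matching the middle regime of the capacity expression in Theorem 3.
```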

Proof of lower bound: We now prove that the rate specified in Theorem 3 is indeed achievable with a computationally tractable code. The scheme we present covers all positive rates in the rate-region specified in Theorem 3, i.e., whenever p < 1/2. In particular the rate of our codes equals 1 - p if p ≤ d, and equals 1 - 2p + d if d < p < 1/2. Our scheme follows roughly the ideas that appear in the scheme of Section 3. Namely, Alice's encoding scheme comprises an erasure code along with a hash function used for authentication. However, in general, an overwrite adversary is more powerful than an additive adversary, because it can be directly shown that an overwriting adversary can substitute any symbol x_i by a new symbol y_i that passes the authentication scheme used by Bob in Section 3. We thus propose a more elaborate authentication scheme in which each symbol x_i contains information that allows for pairwise authentication with every other symbol x_j.

Using notation similar to that of Section 3, let u be the message Alice would like to transmit to Bob, and let w be the encoding of u via an efficiently encodable and decodable erasure code (here we use Reed-Solomon codes). Let q_0 be sufficiently large (to be specified explicitly later in the proof). Let q = q_0^m (note that the packet length m here is significantly larger than in Theorem 2). As mentioned in Section 2, Alice treats each symbol x_i of a codeword as a packet, by breaking it into m sub-symbols x_i(1) through x_i(m) from F_{q_0}. She partitions x_i(1) through x_i(m) into three consecutive sequences of sub-symbols of sizes m_D, (n - 1)m_S and (n - 1)m_H respectively. The first m_D sub-symbols are denoted by the set D_i, and correspond to the sub-symbols of w_i, the i-th symbol of the erasure-codeword generated by Alice. The next (n - 1)m_S sub-symbols are arranged into n - 1 sets of m_S sub-symbols each, denoted by the sets S_i^j for each j ≠ i, and consist of Alice's secrets for packet i. That is, each S_i^j consists of m_S sub-symbols chosen independently and uniformly at random from F_{q_0}. For each i and j, S_i^j is chosen independently. The final (n - 1)m_H sub-symbols are arranged into n - 1 sets of m_H sub-symbols each, denoted by the sets H_i^j for each j ≠ i, and consist of the pairwise hashes of the symbols x_i and x_j. We define H_i^j to be h_{S_j^i}(D_i), where h is taken from (a slight variation of) a pairwise-independent family (known in advance to all parties). Namely, H_i^j is the hash of the information D_i from x_i using a key from the transmitted symbol x_j. All in all, each transmitted symbol of Alice consists of the tuple (D_i, {S_i^j}_{j ≠ i}, {H_i^j}_{j ≠ i}). Here m = m_D + (n - 1)(m_S + m_H).

We now explicitly describe the construction of each D_i from Alice's message u. Alice chooses her message to consist of C n m_D sub-symbols over F_{q_0}, where C is an abbreviation of the capacity specified in Theorem 3. Note that the rate of the resulting code then equals C asymptotically in n and m. Alice uses an erasure code (resilient to (1 - C) n m_D erasures) to transform these sub-symbols of u into the vector w comprising n m_D sub-symbols over F_{q_0}. She then denotes consecutive blocks of m_D sub-symbols of w by the corresponding D_i's. More specifically, D_i consists of the sub-symbols of w in locations (i - 1)m_D + 1 through i m_D.

The remainder of the proof is as follows. We first discuss the property of the family of hash functions in use, needed for our analysis. We then describe and analyze Bob’s decoding algorithm.

As mentioned above we use a (variation of a) pairwise-independent hash family with the property that for all D ≠ D', the probability over a uniformly random key S that h_S(D) equals h_S(D') is sufficiently small. Such functions are common in the literature (e.g., see [8, 7]). In fact, we use essentially the same hashes as in Theorem 2, except with different inputs and dimensions. Namely, let M_D and M_{D'} represent D and D' respectively arranged as m_H x m_S matrices. Let s be a length-m_S column vector of sub-symbols corresponding to the key S. We define the hash h_S(D) as the column vector M_D s. Note that h_S(D) = h_S(D') means that M_D s = M_{D'} s, which implies that (M_D - M_{D'}) s = 0. But by assumption D ≠ D', so M_D - M_{D'} ≠ 0, and so M_D - M_{D'} is of rank at least 1. Thus a random s satisfies (M_D - M_{D'}) s = 0 with probability at most 1/q_0.
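The pairwise authentication that this hash family supports can be prototyped as follows; the sketch is ours, uses a standard polynomial one-time MAC over a prime field as a stand-in for the pairwise-independent family, and ignores the exact sub-symbol layout of the packets:

```python
# Toy version of the pairwise authentication between packets (illustration only).
# Each packet i carries its data D_i and, for every partner j, a fresh key
# key[i][j] and a tag tag[i][j] authenticating D_i under the key held by j.
# Two received packets are "mutually consistent" if both directed checks pass.

import random
Q = 10007
random.seed(0)

def mac(key, data):                  # key = (k0, k1); one-time polynomial MAC over F_Q
    k0, k1 = key
    acc, power = k1, k0
    for d in data:
        acc = (acc + d * power) % Q
        power = (power * k0) % Q
    return acc

def make_packets(all_data):
    n = len(all_data)
    key = [[(random.randrange(Q), random.randrange(Q)) for _ in range(n)] for _ in range(n)]
    tag = [[mac(key[j][i], all_data[i]) for j in range(n)] for i in range(n)]
    return [{"D": all_data[i], "key": key[i], "tag": tag[i]} for i in range(n)]

def mutually_consistent(i, j, pkt_i, pkt_j):
    ok_ij = mac(pkt_j["key"][i], pkt_i["D"]) == pkt_i["tag"][j]   # j's key checks i's data
    ok_ji = mac(pkt_i["key"][j], pkt_j["D"]) == pkt_j["tag"][i]   # i's key checks j's data
    return ok_ij and ok_ji

packets = make_packets([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(mutually_consistent(0, 1, packets[0], packets[1]))          # True: both clean

forged = dict(packets[1])                                          # Calvin overwrites packet 1
forged["D"] = [9, 9, 9]
forged["tag"] = [random.randrange(Q) for _ in range(3)]            # forged without Alice's keys
forged["key"] = [(random.randrange(Q), random.randrange(Q)) for _ in range(3)]
print(mutually_consistent(0, 1, packets[0], forged))               # False w.h.p.
```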

We now define Bob's decoder. Let x_i, x_j be two symbols transmitted by Alice, and y_i, y_j be the corresponding symbols received by Bob. Consider the information D_i, the secret S_i^j and the hash value H_i^j in x_i, and let D'_i, S'_i^j and H'_i^j be the corresponding (potentially corrupted) values in y_i. Similarly consider the components of x_j and y_j. Bob checks for mutual consistency between y_i and y_j. Namely, the pair y_i and y_j are said to be mutually consistent if both h_{S'_j^i}(D'_i) = H'_i^j and h_{S'_i^j}(D'_j) = H'_j^i. Clearly, if both y_i and y_j are uncorrupted versions of x_i and x_j respectively, they are mutually consistent. By the analysis above of h, if Calvin does not know the value of x_i, does not corrupt x_i, but corrupts x_j so that D'_j ≠ D_j, then the probability over S_i^j that y_i and y_j are consistent is at most 1/q_0. This is because S'_i^j = S_i^j, the pair (D'_j, H'_j^i) is chosen by Calvin without knowledge of S_i^j, and hence w.h.p. h_{S_i^j}(D'_j) ≠ H'_j^i. We conclude:

Lemma 4.1

With probability at least 1 - 1/q_0, the following y_i and y_j are mutually inconsistent. (i) Causality: If i < j, Calvin corrupts x_i so that D'_i ≠ D_i, and does not corrupt x_j. (ii) d-delay: If |i - j| ≤ dn, and Calvin corrupts exactly one of the symbols x_i and x_j so that either D'_i ≠ D_i or D'_j ≠ D_j.

Bob decodes via the d-Delay Online Overwriting Disruptive Adversary Decoding (d-DOODAD) Algorithm, described in detail below. We first give a high-level overview of the three major steps of d-DOODAD. Bob's first step is to test pairs of received symbols for mutual consistency. In particular he considers only pairs of symbols separated by at most dn locations; in this event Lemma 4.1(ii) implies that Bob detects the corruption of exactly one of a pair of symbols with high probability.

Based on the tests in the first step, in the second step he enumerates subsets of the received symbols as "candidate subsets" for decoding via Alice's erasure code. In particular, each of the candidate subsets satisfies the natural property that it contains at least (1 - p)n mutually consistent y_i's. Naïvely, this enumeration seems computationally intractable since there may be exponentially many such sets. However, there is also a more intricate combinatorial property (Step 2(c) in the d-DOODAD algorithm below) that candidate subsets must satisfy; we discuss this property after presenting the details of the algorithm. The effect of Step 2 below is to drastically curtail the number of candidate subsets that Bob needs to consider, to a number that depends only on p and d (and not on n), hence ensuring that this step is still computationally tractable.

In the third step, for each of the candidate subsets generated in the previous step, Bob uses the decoder for Alice's erasure code to generate a set of linear equations that the sub-symbols of her message must satisfy. Then we claim that any candidate subset that has even one corrupted symbol must generate an inconsistent set of linear equations. Hence Bob decodes by using the decoder for Alice's erasure code on the unique candidate subset that generates a consistent set of linear equations. As we will see, the error probability of our scheme is at most polynomial in n divided by q_0, which vanishes if we set q_0 to be a sufficiently large polynomial in n.

The details of d-DOODAD now follow. We define a connected component of an undirected graph G as a connected subgraph of G such that there is no edge in G between any vertex in the subgraph and any vertex outside it. Also, let T be the linear transform of the Reed-Solomon code that takes the length-(C n m_D) column vector u of Alice's message to the length-(n m_D) column vector w of the erasure codeword, so that w = T u. Let the column vector of sub-symbols corresponding to the received data parts D'_1, ..., D'_n be denoted w'. For any subset I of [n] of size (1 - p)n, let T_I, w_I and w'_I be respectively defined as the restrictions of T, w and w' to the rows/indices corresponding to the packets i in I.

d-Delay Online Overwriting Disruptive Adversary Decoding (d-DOODAD) Algorithm:

  1. Bob constructs a dn-distance mutual consistency graph G with a vertex for each of the n received symbols and an edge-set comprising all mutually consistent pairs (i, j) such that |i - j| ≤ dn (but no other edges). Thus G comprises some number of connected components G_1, ..., G_c.

  2. Let S be a set of connected components of G; we refer to S as a candidate subset, and if S consists of t connected components we say S has size t. Bob enumerates all possible candidate subsets S such that (a) the candidate subset S has size at most p/d + 1; (b) the number of vertices in the subgraphs in S is at least (1 - p)n; and (c) each pair of vertices i and j in the union of the subgraphs in S are mutually consistent.

  3. Let I(S) be the set comprising the indices in [n] corresponding to all symbols in the components of S. Bob picks an arbitrary subset I of I(S) of size (1 - p)n. If the set of linear equations T_I u = w'_I is consistent, he decodes and outputs its unique solution u. Otherwise he discards S and returns to the beginning of Step 3 with the next candidate subset.
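A skeleton of the combinatorial part of this decoder is sketched below (our own simplification: the mutual-consistency test and the Step 3 linear-consistency check are abstracted into the callables `consistent` and `try_decode`, and the bound on the number of enumerated components is a stand-in for the paper's Step 2(a) bound):

```python
# Skeleton of the d-DOODAD combinatorics (illustration only). consistent(i, j)
# should implement the mutual-consistency check between received packets i and
# j; try_decode(indices) should attempt the erasure/linear-consistency decoding
# of Step 3 and return the message or None. dn is the delay in symbols, pn the
# adversary's corruption budget.
from itertools import combinations

def connected_components(n, edges):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in edges:
        parent[find(a)] = find(b)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

def doodad_decode(n, dn, pn, consistent, try_decode):
    # Step 1: dn-distance mutual consistency graph.
    edges = [(i, j) for i in range(n) for j in range(i + 1, min(n, i + dn + 1))
             if consistent(i, j)]
    comps = connected_components(n, edges)
    # Step 2: candidate subsets = small unions of components, big enough and
    # pairwise consistent across components (pn // dn + 1 is our stand-in bound).
    max_comps = pn // dn + 1
    for r in range(1, max_comps + 1):
        for chosen in combinations(comps, r):
            verts = sorted(v for c in chosen for v in c)
            if len(verts) < n - pn:
                continue
            if all(consistent(i, j) for i, j in combinations(verts, 2)):
                # Step 3: attempt erasure decoding from (1 - p)n of these positions.
                msg = try_decode(verts[: n - pn])
                if msg is not None:
                    return msg
    return None

# Tiny smoke test with all packets clean:
demo = doodad_decode(6, dn=2, pn=2,
                     consistent=lambda i, j: True,
                     try_decode=lambda idx: ("decoded from", idx))
print(demo)
```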

Claim 4.1

The d-DOODAD algorithm decodes Alice's message correctly with probability at least 1 - n^2/q_0.

Proof: Throughout we assume that Lemma 4.1 holds for all corresponding y_i and y_j (by the union bound this happens with probability at least 1 - n^2/q_0). Thus corrupted and uncorrupted symbols are non-adjacent in G. We first prove that at least one candidate subset S with only uncorrupted symbols satisfies Steps 2 and 3. We examine the three conditions of Step 2. By the definition of mutual consistency, any set S with only uncorrupted symbols satisfies Step 2(c). Since Calvin can corrupt at most pn symbols, there must be some such S satisfying Step 2(b). To prove that S also satisfies Step 2(a), we observe the following. If Calvin does not corrupt at least dn consecutive symbols between two uncorrupted symbols y_i and y_j (say i < j), there must be a sequence of uncorrupted symbols with indices i = i_1 < i_2 < ... < i_t = j such that any two consecutive symbols in the sequence have indices that differ by at most dn. Then by the definition of G, both y_i and y_j must be in the same connected component of G. But there are at most pn corrupted symbols, hence there are at most p/d disjoint sequences of dn consecutive corrupted symbols (and thus at most p/d + 1 components in S).

Lastly, we show that any S with only uncorrupted symbols that satisfies Step 2 must also satisfy Step 3. To see this, note that any such S has at least (1 - p)n symbols of x. Thus, by the definitions of m_D and w for Theorem 3, S contains at least (1 - p)n m_D uncorrupted sub-symbols of w over F_{q_0}. Also, since S comprises solely uncorrupted symbols, w'_{I(S)} = w_{I(S)}, hence for any subset I of I(S) of size (1 - p)n, w'_I = w_I. But by the properties of erasure codes, the system T_I u = w_I is consistent and its unique solution is Alice's message vector u. Thus Step 3 succeeds and outputs u.

We now show that there does not exist any candidate subset S for which the corresponding output of the d-DOODAD algorithm differs from Alice's real message u. We prove this by contradiction. Suppose a candidate subset S passes all the decoding steps of the d-DOODAD algorithm and results in a decoded message distinct from Alice's message u. We now make a series of observations that successively refine the structure of such an S, ultimately resulting in a contradiction.

First, note that S must contain uncorrupted symbols to pass Step 2(b), since (1 - p)n > pn. In addition, to pass Step 2(c), by Lemma 4.1(i), all the uncorrupted symbols of S must come before all the symbols of S corrupted by Calvin. Now notice that the uncorrupted and the corrupted symbols in S must be separated by a separating set of at least dn consecutive symbols not in S. If not, Lemma 4.1(ii) would imply that w.h.p. S does not satisfy Step 2(c) of d-DOODAD. Now note that the separating set must contain at least dn consecutive symbols corrupted by Calvin. This follows from the fact that S consists of connected components of G. Namely, if the separating set contains fewer than dn consecutive corrupted symbols, there must exist an uncorrupted symbol y_i and a corrupted symbol y_j, both in S, satisfying |i - j| ≤ dn. But this, by Lemma 4.1(ii), would contradict Step 2(c). Notice that if p ≤ d we may conclude our proof at this point.

We now observe that there are at most (p - d)n corrupted symbols in S. This follows from the fact that the separating set contains dn consecutive symbols corrupted by Calvin (not in S), and the fact that Calvin can corrupt at most pn symbols. This, together with Step 2(b) of d-DOODAD, implies that the subset I of I(S) of size (1 - p)n chosen in Step 3 contains a subset I_1 of at least (1 - 2p + d)n indices of uncorrupted symbols. Consider the message vector that Step 3 of d-DOODAD decodes to from the consistent system T_I u = w'_I. Restricting this system to I_1 gives T_{I_1} u = w'_{I_1} = w_{I_1}. Since I_1 is of size at least (1 - 2p + d)n ≥ Cn, by the property of erasure codes [6], this restricted system has a unique solution, namely Alice's message vector u. Thus the decoded message equals u, contradicting our assumption.  

5 Conclusion

In this work we characterize the capacity of online adversarial channels and their variants under the additive and overwrite error models. Our results are tight and our coding schemes are efficient. Throughout, we assume that the communication is over an alphabet of size q, assumed to be large compared to the block length n. An intriguing problem left untouched in this work concerns communication in the online adversarial setting over "small", e.g. binary, alphabets. The authentication schemes used extensively in this work depend integrally on the alphabet size being large. They do not extend naïvely to the binary alphabet case, where new techniques seem to be needed.

Appendix A List of parameters of our codes

Capacity | Minimum field size | Complexity | Probability of error
Theorem 1
Theorem 2
Theorem 3