
# Sending a Message with Unknown Noise

Abhinav Aggarwal, Varsha Dani, Thomas P. Hayes, and Jared Saia. University of New Mexico, Albuquerque, New Mexico, USA
###### Abstract.

Alice and Bob are connected via a two-way channel, and Alice wants to send a message of L bits to Bob. An adversary flips an arbitrary but finite number of bits, T, on the channel. This adversary knows our algorithm and Alice’s message, but does not know any private random bits generated by Alice or Bob, nor the bits sent over the channel, except when these bits can be predicted from knowledge of Alice’s message or our algorithm. We want Bob to receive Alice’s message and both players to terminate, with error probability at most δ, where δ is a parameter known to both Alice and Bob. Unfortunately, the value T is unknown in advance to either Alice or Bob, and the value L is unknown in advance to Bob.

We describe an algorithm to solve the above problem while sending an expected L + O(T + min(T+1, L/log L) log(L/δ)) bits. A special case is when δ = O(1/L^c), for some constant c. Then when T = o(L/log L), the expected number of bits sent is L + o(L), and when T = Ω(L/log L), the expected number of bits sent is O(L + T), which is asymptotically optimal.

Reed Solomon Codes, Interactive Communication, Adversary, Polynomial, AMD Codes, Error Correction Codes, Fingerprinting
copyright: rights retained. conference: International Conference on Distributed Computing and Networking; January 2018; Varanasi, India. journal year: 2018. ccs: Mathematics of computing → Information theory; Computing methodologies → Distributed algorithms; Security and privacy → Security protocols

## 1. Introduction

What if we want to send a message over a noisy two-way channel, and little is known in advance? In particular, imagine that Alice wants to send a message to Bob, but the number of bits flipped on the channel is unknown to either Alice or Bob in advance. Further, the length of Alice’s message is also unknown to Bob in advance. While this scenario seems like it would occur quite frequently, surprisingly little is known about it.

In this paper, we describe an algorithm to efficiently address this problem. To do so, we make a critical assumption about the type of noise on the channel. We assume that an adversary flips bits on the channel, but this adversary is not completely omniscient. The adversary knows our algorithm and Alice’s message, but it does not know the private random bits of Alice and Bob, nor the bits that are sent over the channel, except when these bits do not depend on the random bits of Alice and Bob. Some assumption like this is necessary: if the adversary knows all bits sent on the channel and the number of bits it flips is unknown in advance, then no algorithm can succeed with better than constant probability; see Theorem 6.1 from (Dani et al., 2015) for details. Essentially, in this case, the adversary can run a man-in-the-middle attack to fool Bob into accepting the wrong message.

Our algorithm assumes that a desired error probability, δ, is known to both Alice and Bob, that the adversary flips some number of bits, T, that is finite but unknown in advance, and that the length of Alice’s message, L, is unknown to Bob in advance. Our main result is then summarized in the following theorem.

###### Theorem 1.1 ().

Our algorithm tolerates an unknown number of adversarial errors, T, and for any δ > 0, succeeds in sending a message of length L with probability at least 1 − δ, while sending an expected L + O(T + min(T+1, L/log L) log(L/δ)) bits.

An interesting case to consider is when the error probability is polynomially small in L, i.e. when δ = O(1/L^c) for some constant c. Then when T = o(L/log L), our algorithm sends an expected L + o(L) bits. When T = Ω(L/log L), the number of bits sent is O(L + T), which is asymptotically optimal.

### 1.1. Related Work

Interactive Communication Our work is related to the area of interactive communication. The problem of interactive communication asks how two parties can run a protocol over a noisy channel. This problem was first posed by Schulman (Schulman, 1993, 1992), who describes a deterministic method for simulating interactive protocols on noisy channels with only a constant-factor increase in the total communication complexity. This initial work spurred vigorous interest in the area (see (Braverman, 2012a) for an excellent survey).

Schulman’s scheme tolerates a constant adversarial noise rate, even if the adversary is not oblivious. It critically depends on the notion of a tree code, for which an exponential-time construction was originally provided. This exponential construction time motivated work on more efficient constructions (Braverman, 2012b; Peczarski, 2006; Moore and Schulman, 2014). There were also efforts to create alternative codes (Gelles et al., 2011; Ostrovsky et al., 2009). Recently, elegant computationally-efficient schemes that tolerate a constant adversarial noise rate have been demonstrated (Brakerski and Kalai, 2012; Ghaffari and Haeupler, 2013). Additionally, a large number of results have improved the tolerable adversarial noise rate (Brakerski and Naor, 2013; Braverman and Rao, 2011; Ghaffari et al., 2014; Franklin et al., 2015; Braverman and Efremenko, 2014), as well as tuning the communication costs to a known, but not necessarily constant, adversarial noise rate (Haeupler, 2014).

Interactive Communication with Private Channels Our paper builds on a recent result on interactive communication by Dani et al. (Dani et al., 2015). The model in (Dani et al., 2015) is equivalent to the one in this paper, except that 1) they assume that Alice and Bob are running an arbitrary interactive protocol; and 2) they assume that both Alice and Bob know the number of bits sent in that protocol. In particular, similar to this paper, they assume that the adversary flips an unknown number of bits T, and that the adversary does not know the private random bits of Alice and Bob, or the bits sent over the channel.

If the protocol just sends L bits from Alice to Bob, then the algorithm from (Dani et al., 2015) can solve the problem we consider here, with a probability of error that is polynomially small in L for any fixed constant in the exponent.

For the same probability of error, the algorithm in this paper sends an expected L + O(T + min(T+1, L/log L) log L) bits. This is never worse than (Dani et al., 2015), and can be significantly better; for example, when T = 0, our cost is L + O(log L). In general, for a wide range of values of T, our cost is asymptotically better than that of (Dani et al., 2015). Additionally, unlike (Dani et al., 2015), the algorithm in this paper does not assume that L is known in advance by Bob.

An additional result of (Dani et al., 2015) is a theorem showing that private channels are necessary in order to tolerate an unknown T with better than constant probability of error.

Rateless Codes Rateless error-correcting codes enable the generation of a potentially infinite number of encoding symbols from a given set of source symbols, with the property that the original source symbols can be recovered from any sufficiently large subset of the encoding symbols. Fountain codes (MacKay, 2005; Mitzenmacher, 2004) and LT codes (Palanki and Yedidia, 2004; Luby, 2002; Hashemi and Trachtenberg, 2014) are two classic examples of rateless codes. Erasure codes employ feedback for stopping transmission (Palanki and Yedidia, 2004; Luby, 2002) and for error detection (Hashemi and Trachtenberg, 2014) at the receiver.

Critically, the feedback channel, i.e. the channel from Bob to Alice, is typically assumed to be noise free. We differ from this model in that we allow noise on the feedback channel, and additionally, we tolerate bit flips, while most rateless codes tolerate only bit erasures.

### 1.2. Formal Model

##### Initial State

We assume that Alice initially knows some message M of length L bits that she wants to communicate to Bob, and that both Alice and Bob know an error tolerance parameter δ. However, Bob initially knows neither L nor any other information about M. Alice and Bob are connected by a two-way binary communication channel.

We assume an adversary can flip some a priori unknown, but finite, number of bits on the channel from Alice to Bob or from Bob to Alice. This adversary knows M, δ, and all of our algorithms. However, it does not know any random bits generated by Alice or Bob, or the bits sent over the channel, except when these can be determined from other known information.

##### Channel steps

We assume that communication over the channel is synchronous. A channel step is defined as the amount of time that it takes to send one bit over the channel. As is standard in distributed computing, we assume that all local computation is instantaneous.

##### Silence on the channel

Similar to (Dani et al., 2015), when neither Alice nor Bob sends in a channel step, we say that the channel is silent. In any contiguous sequence of silent channel steps, the bit received on the channel in the first step is set by the adversary for free. By default, the bit received in the subsequent steps of the sequence remains the same, unless the adversary pays for one bit flip each time it wants to change the value of the bit received.

### 1.3. Paper organization

The rest of the paper is organized as follows. We first discuss an algorithm for the case when both Alice and Bob know L in Section 2. We present the analysis of the failure probability, correctness, termination, and number of bits sent by this algorithm in Section 3. Then, we remove the assumption that L is known and provide an algorithm for the unknown-L case in Section 4, along with its analysis. Finally, in Section 5, we conclude the paper by stating the main result and discussing some open problems.

## 2. Known L

We first discuss the case when Bob knows L. We remove this assumption later in Section 4.

Our algorithm makes critical use of Reed-Solomon codes from (Reed and Solomon, 1960). Alice begins by encoding her message as a polynomial of degree d over the finite field GF(q). She sends the values of this polynomial computed at certain elements of the field as message symbols to Bob. Upon receiving an appropriate number of these points, Bob computes the polynomial using the Berlekamp-Welch algorithm (Welch and Berlekamp, 1986) and sends a fingerprint of his guess to Alice. Upon hearing this fingerprint, if Alice finds no errors, she echoes the fingerprint back to Bob; upon receiving a correct copy of the echo, Bob terminates the algorithm. Unless the adversary corrupts many bits, Alice terminates soon after.

However, in the case where Alice does not receive a correct fingerprint of the polynomial from Bob, she sends two more evaluations of the polynomial to Bob. Bob keeps receiving extra evaluations and recomputing the polynomial until he receives the correct fingerprint echo from Alice.
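The encode-and-interpolate idea above can be sketched as follows. This is an illustrative Python sketch, not the paper's Algorithms 1 and 2: the field size Q, the function names, and the treatment of the message as a coefficient vector are assumptions for the example, and recovery is shown only for uncorrupted evaluations via Lagrange interpolation (the paper uses Berlekamp-Welch to also handle corrupted points).

```python
# Illustrative sketch: Alice's message is the coefficient vector of a
# degree-d polynomial over GF(Q); any d+1 clean evaluations determine it.
# Q and the helper names are assumptions for this example.

Q = 8191  # a prime, so arithmetic mod Q forms a field

def poly_eval(coeffs, x, q=Q):
    """Evaluate the polynomial with the given (ascending) coefficients at x, mod q."""
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % q
    return acc

def rs_encode(coeffs, xs, q=Q):
    """Alice's side: evaluations of the message polynomial at the points xs."""
    return [(x, poly_eval(coeffs, x, q)) for x in xs]

def mul_by_linear(p, a, q):
    """Multiply polynomial p (ascending coefficients) by (x + a), mod q."""
    res = [0] * (len(p) + 1)
    for k, c in enumerate(p):
        res[k] = (res[k] + a * c) % q
        res[k + 1] = (res[k + 1] + c) % q
    return res

def interpolate(points, q=Q):
    """Bob's side (error-free case): recover the coefficients from
    len(points) clean evaluations by Lagrange interpolation over GF(q)."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = mul_by_linear(basis, -xj % q, q)
                denom = denom * (xi - xj) % q
        scale = yi * pow(denom, q - 2, q) % q  # Fermat inverse of denom
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % q
    return coeffs
```

For example, `interpolate(rs_encode([5, 17, 42], [1, 2, 3]))` recovers `[5, 17, 42]`. Once Bob holds more evaluations than necessary, some of them may be corrupted, which is where Berlekamp-Welch decoding replaces plain interpolation.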

### 2.1. Notation

Some helper functions and notation used in our algorithm are described in this section. We write x ∈_R S to denote that x is sampled uniformly at random from the set S.

##### Fingerprinting

For fingerprinting, we use a well known theorem by Naor and Naor (Naor and Naor, 1993), slightly reworded as follows:

###### Theorem 2.1 ().

(Naor and Naor, 1993) Fix an integer L and a real η > 0. Then there exist a constant C and an algorithm h such that the following hold for a given random string p.

1. For a string m of length at most L, h(m, p) is a string of length C log(L/η).

2. For any bit strings m and m′ of length at most L, if m = m′, then h(m, p) = h(m′, p); else Pr{h(m, p) = h(m′, p)} ≤ η.

We refer to h(m, p) as the fingerprint of the message m.
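A minimal sketch of the fingerprinting idea, using a standard polynomial hash rather than the actual Naor-Naor construction (the modulus P and the seed handling below are assumptions for illustration): equal strings always produce equal fingerprints, while unequal strings collide on only a few seeds.

```python
# Illustrative stand-in for a fingerprint: a polynomial hash over a prime
# field. P and the interface are assumptions, not the Naor-Naor scheme.

P = 2_147_483_647  # Mersenne prime 2^31 - 1

def fingerprint(bits, seed):
    """Hash a bit string by evaluating it as a polynomial at `seed`, mod P.
    Two distinct strings of length <= n agree on at most n - 1 of the
    P possible seeds, so a random seed exposes a mismatch w.h.p."""
    acc = 0
    for b in reversed(bits):
        acc = (acc * seed + b) % P
    return acc
```

In the protocol's terms, Bob would transmit the pair (seed, fingerprint), and Alice would recompute the hash of her own string under the same seed and compare.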

##### GetPolynomial

Let S be a multiset of tuples of the form (x, y), with x and y elements of GF(q). For each x, we define the majority tuple for x to be the tuple (x, y) that has the highest number of occurrences in S, breaking ties arbitrarily; the majority set of S consists of the majority tuples, one per distinct x. Given the set S and a degree d, we define GetPolynomial(S, d) as a function that returns the degree-d polynomial over GF(q) that is supported by the largest number of points in the majority set of S, breaking ties arbitrarily.

The following theorem, from (Reed and Solomon, 1960) and (Welch and Berlekamp, 1986), provides conditions under which GetPolynomial reconstructs the required polynomial.

###### Theorem 2.2 ().

(Reed and Solomon, 1960; Welch and Berlekamp, 1986) Let P be a polynomial of degree d over some field F, and let S be a multiset of tuples (x, y) with x, y ∈ F. Let g be the number of elements (x, y) of the majority set of S such that y = P(x), and let b be the number of remaining elements of the majority set. Then, if g > b + d, we have GetPolynomial(S, d) = P.
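GetPolynomial can be sketched as follows for tiny fields. The brute-force search over all degree-d polynomials is only for illustration (a real implementation would use Berlekamp-Welch decoding), and the function names are assumptions.

```python
# Illustrative GetPolynomial for tiny fields: take the per-x majority
# tuple, then search all q^(d+1) candidate polynomials for the one
# supported by the most majority points. Brute force; demo only.
from collections import Counter
from itertools import product

def majority_set(S):
    """For each x, keep the tuple (x, y) occurring most often in the multiset S."""
    counts = Counter(S)
    best = {}
    for (x, y), c in counts.items():
        if x not in best or c > counts[(x, best[x])]:
            best[x] = y
    return sorted(best.items())

def get_polynomial(S, d, q):
    """Coefficients (ascending) of the degree-<=d polynomial over GF(q)
    supported by the most points of the majority set of S."""
    pts = majority_set(S)
    best_coeffs, best_support = None, -1
    for coeffs in product(range(q), repeat=d + 1):
        support = sum(
            1 for x, y in pts
            if sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q == y
        )
        if support > best_support:
            best_coeffs, best_support = list(coeffs), support
    return best_coeffs
```

For instance, with q = 7 and d = 1, five evaluations of the polynomial 2 + 3x with one corrupted tuple still decode to `[2, 3]`: here g = 4 and b = 1, so the condition g > b + d of Theorem 2.2 holds.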

##### Algebraic Manipulation Detection Codes

Our algorithm also makes use of Algebraic Manipulation Detection (AMD) codes from (Cramer et al., 2008). For a given η > 0, called the strength of the AMD encoding, these codes provide three functions: amdEnc, amdDec and IsCodeword. The function amdEnc(m, η) creates an AMD encoding of a message m. The function IsCodeword(m′, η) takes a string m′ and returns true if and only if there exists some message m such that amdEnc(m, η) = m′. The function amdDec(m′, η) takes a string m′ for which IsCodeword(m′, η) is true and returns the message m such that amdEnc(m, η) = m′. These functions enable detection of bit corruption in an encoded message with high probability. The following (slightly reworded) theorem from (Cramer et al., 2008) helps establish this:

###### Theorem 2.3 ().

(Cramer et al., 2008) For any η > 0, there exist functions amdEnc, amdDec and IsCodeword such that, for any bit string m of length ℓ:

1. amdEnc(m, η) is a string of length at most c(ℓ + log(1/η)), for some constant c

2. amdDec(amdEnc(m, η), η) = m and IsCodeword(amdEnc(m, η), η) is true

3. For any non-zero bit string s of the same length as amdEnc(m, η), we have

  Pr{IsCodeword(amdEnc(m, η) ⊕ s, η)} ≤ η

With the use of Naor-Naor hash functions along with AMD codes, we are able to provide the required security for the messages exchanged by Alice and Bob. Assume that Bob generates the fingerprint (s, f), where s is the random seed and f is the hash value, and that tampering by the adversary converts it to (s ⊕ t₁, f ⊕ t₂), for some strings t₁ and t₂ of the appropriate lengths. Upon receiving this, Alice compares it against the fingerprint of her own message m′ by computing h(s ⊕ t₁, m′, p, |m′|), for an appropriately chosen η. Then, we require that for any choice of t₁ and t₂,

  Pr{h(s ⊕ t₁, m′, p, |m′|) = (s ⊕ t₁, f ⊕ t₂)} ≤ η

Theorem 2.3 provides us with this guarantee directly.

##### Error-correcting Codes

These codes enable us to encode a message so that it can be recovered even if the adversary corrupts up to a third of its bits. We will denote the encoding and decoding functions by ecEnc and ecDec, respectively. The following theorem, a slight restatement from (Reed and Solomon, 1960), gives the properties of these functions.

###### Theorem 2.4 ().

(Reed and Solomon, 1960) There is a constant c such that for any message m, the length of ecEnc(m) is at most c|m|. Moreover, if a string m′ differs from ecEnc(m) in at most one-third of its bits, then ecDec(m′) = m.

Finally, we observe that the linearity of ecEnc and ecDec ensure that when the error correction is composed with the AMD code, the resulting code has the following properties:

1. If at most a third of the bits of the message are flipped, then the original message can be uniquely reconstructed by rounding to the nearest codeword in the range of ecEnc.

2. Even if an arbitrary set of bits is flipped, the probability of the change not being recognized is at most η, i.e. the same guarantee as the AMD codes.

This is because ecDec is linear, so when the adversary adds noise s to the codeword ecEnc(x), where x is the AMD-encoded message, the decoding function computes ecDec(ecEnc(x) ⊕ s) = x ⊕ ecDec(s). But now ecDec(s) is simply a string, independent of the private randomness of Alice and Bob, that is added to the AMD-encoded message, which is exactly the kind of manipulation the AMD guarantee covers.

##### Silence

In our algorithm, silence on the channel has a very specific meaning. We define the function IsSilence(s) to return true iff the string s has fewer than |s|/4 bit alternations.
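This definition translates directly into code (the |s|/4 threshold matches the b/4 alternation count analyzed in Lemma 3.1; a uniformly random b-bit string has about b/2 alternations, so genuine noise almost never looks silent):

```python
def bit_alternations(s):
    """Number of adjacent positions where the bit string s changes value."""
    return sum(1 for a, b in zip(s, s[1:]) if a != b)

def is_silence(s):
    """True iff s has fewer than |s|/4 bit alternations."""
    return bit_alternations(s) < len(s) / 4
```

An adversary-set constant string such as "00000000" reads as silence, while a noisy string such as "01100101" does not.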

##### Other notation

We use 0^k to denote the k-bit string of all zeros, and ∘ for string concatenation; we also use a listening primitive that returns the bits received on the channel over the next t time steps. For the sake of convenience, we will use log x to mean log₂ x, unless specified otherwise.

### 2.2. Algorithm overview

Our algorithm for the case when L is known is given in two parts: Algorithm 1 is what Alice follows and Algorithm 2 is what Bob follows. Both algorithms assume knowledge of the message length L and the error tolerance δ. The idea is for Alice to compute a degree-d polynomial encoding of M over a field of size q, where d and q are fixed functions of L known to both parties. She begins by sending evaluations of this polynomial over the first d+1 field elements to Bob in plaintext, which Bob uses to reconstruct the polynomial and retrieve the message. He also computes a fingerprint of this polynomial and sends it back to Alice. He encodes this fingerprint with AMD encoding and then ECC encoding, so that any successful tampering will require at least a third of the bits in the encoded fingerprint to be flipped and will be detected with high probability. If Alice receives a correct fingerprint, she echoes it back to Bob. Upon hearing this echo, Bob terminates. The channel from Bob to Alice is now silent, upon intercepting which Alice terminates the protocol as well.

If the adversary flips bits on the channel so that Bob’s fingerprint mismatches, Alice recognizes this mismatch with high probability and exchanges more evaluations of her polynomial with Bob, proceeding in rounds. In each round, Alice computes two more evaluations of the polynomial, at the next two field elements, and sends them to Bob. Bob uses these to reconstruct his polynomial and sends a fingerprint back to Alice. The next round only begins if Alice did not terminate in this round; termination requires the fingerprint to match and Alice to intercept silence after Bob has terminated. We will bound the number of rounds and the failure probability for our algorithm in the next section.
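The control flow of one of Alice's rounds can be sketched schematically. This is an illustrative stand-in, not the paper's exact Algorithm 1: message formats, the AMD/ECC wrapping, and the silence logic are omitted, and all names are assumptions.

```python
def alice_round(evals, next_index, own_fingerprint, received_fingerprint):
    """One round of Alice's loop: if Bob's fingerprint matches her own,
    echo it back; otherwise release the next two polynomial evaluations.
    Returns (action, new next_index)."""
    if received_fingerprint == own_fingerprint:
        return ("echo", received_fingerprint), next_index
    pair = evals[next_index:next_index + 2]
    return ("send_evals", pair), next_index + 2
```

Bob's side mirrors this: on receiving new evaluations he re-runs GetPolynomial and replies with a fresh fingerprint, and on receiving the echo he terminates, leaving the channel silent.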

### 2.3. Example Run

We now discuss an example of a run of our protocol to make the different steps in the algorithm more clear. We illustrate this example in Fig. 1 and provide a step-by-step explanation below.

1. Alice begins by computing a polynomial corresponding to the message and sends its evaluations on the first d+1 field elements to Bob, in plaintext. The adversary now corrupts one of the evaluation tuples, so that the polynomial that Bob reconstructs is different from Alice’s.

2. Bob computes the fingerprint of this polynomial and sends it to Alice. Alice compares this fingerprint against the fingerprint of her own polynomial, and notices a mismatch.

3. In response, Alice remains silent. Bob is now convinced that his version of the polynomial is incorrect, so he sends noise to Alice to ask her for a resend.

4. Alice encodes two more evaluations of the polynomial at the next two field elements and sends them to Bob. The adversary tries to tamper with these evaluations by flipping some bits. For this example, we assume that it flips fewer than a third of the total number of bits in the encoded evaluations. Upon decoding, Bob is able to successfully recover both evaluations and uses the GetPolynomial subroutine to recompute his polynomial, which in this case matches Alice’s.

5. Bob computes the fingerprint of his new polynomial and sends it to Alice. Upon seeing this fingerprint and verifying that it matches her own, Alice is now convinced that Bob has the correct copy of the polynomial, and hence the original message.

6. Alice echoes the hash back to Bob, upon hearing which Bob extracts the message from the polynomial (using its coefficients) and terminates the protocol. Silence follows on the channel from Bob to Alice.

7. Alice intercepts silence and terminates the protocol as well.

The message has now successfully been transmitted from Alice to Bob.

## 3. Analysis

We now prove that our algorithm is correct with probability at least 1 − δ, and compute the number of bits sent. Before proceeding to the proof, we define three bad events:

1. Unintentional Silence. When Bob executes step 18 of his algorithm, the string received by Alice is interpreted as silence.

2. Fingerprint Error. Fingerprint hash collision as per Theorem 2.1.

3. AMD Error. The adversary corrupts an AMD encoded message into an encoding of a different message.

##### Rounds

For both Alice and Bob, we define a round as one iteration of the for loop in our algorithm. We refer to the part of the algorithm before the for loop begins as round 0. The AMD encoding strength in round j is η_j = (δ/6d)(1/2)^⌊j/d⌋: it equals δ/6d initially and decreases by a factor of 2 every d rounds. This way, the number of bits added to the messages grows by a constant every d rounds, which enhances security against corruption.
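The strength schedule can be sketched directly from this closed form (the function name is an assumption; the formula is the η_j used throughout Section 3):

```python
def amd_strength(j, d, delta):
    """AMD strength in round j >= 1: starts at delta/(6*d) and halves
    every d rounds, i.e. (delta/(6*d)) * (1/2)**(j // d)."""
    return (delta / (6 * d)) * 0.5 ** (j // d)
```

Because each value of ⌊j/d⌋ is taken by at most d rounds, the strengths sum to at most (δ/6d)·d·2 = δ/3 over all rounds, which is exactly the total used in the union bounds of Lemmas 3.3 and 3.4.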

### 3.1. Correctness and Termination

We now prove that, with probability at least 1 − δ, Bob terminates the algorithm with the correct guess of Alice’s message.

#### 3.1.1. Unintentional Silence

The following lemmas show that Alice terminates before Bob with probability at most δ/3.

###### Lemma 3.1 ().

For b ≥ 19, the probability that a b-bit string sampled uniformly at random from {0,1}^b has fewer than b/4 bit alternations is at most e^(−b/19).

###### Proof.

Let s be a string sampled uniformly at random from {0,1}^b, where b ≥ 19. Denote by s_i the i-th bit of s. Let X_i be the indicator random variable for the event that s_i ≠ s_{i+1}, for 1 ≤ i ≤ b−1. Note that all X_i’s are mutually independent. Let X be the number of bit alternations in s. Clearly, X = ∑_{i=1}^{b−1} X_i, which gives E[X] = ∑_{i=1}^{b−1} E[X_i], using the linearity of expectation. Since E[X_i] = 1/2 for all i, we get E[X] = (b−1)/2. Using the multiplicative version of Chernoff bounds (Dubhashi and Panconesi, 2009) for 0 < ε < 1,

  Pr{X < (1−ε)E[X]} ≤ e^(−E[X]ε²/2)

To bound Pr{X < b/4}, set ε = (b−2)/(2(b−1)), so that (1−ε)E[X] = b/4, to get

  Pr{X < b/4} ≤ e^(−(b−2)²/(16(b−1))) ≤ e^(−b/19)

where the last inequality holds for b ≥ 19. ∎
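Lemma 3.1 can also be checked empirically (an illustrative Monte Carlo experiment, not part of the proof): for b = 256 the bound e^(−b/19) is about 1.4·10⁻⁶, so strings with fewer than b/4 alternations should essentially never appear in a modest number of samples.

```python
# Monte Carlo check of Lemma 3.1 with illustrative parameters.
import random

def alternations(bits):
    """Count adjacent positions where the bit sequence changes value."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

random.seed(0)  # fixed seed for a deterministic experiment
b, trials = 256, 1000
rare = sum(
    1 for _ in range(trials)
    if alternations([random.getrandbits(1) for _ in range(b)]) < b / 4
)
# rare counts strings with fewer than b/4 alternations; the lemma's
# bound predicts essentially none among 1000 samples
```

A random 256-bit string has about 127 alternations on average, so the threshold of 64 sits many standard deviations below the mean.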

###### Lemma 3.2 ().

Alice terminates the algorithm before Bob with probability at most δ/3.

###### Proof.

Let ξ be the event that Alice terminates before Bob. This happens when the string sent by Bob in step 18, after possible adversarial corruptions, is interpreted as silence by Alice. Let ξ_j be the event that Alice terminates before Bob in round j of the algorithm. Then, using a union bound over the rounds, the fact that b_j = C log(L/η_j), and Lemma 3.1, we get

  Pr{ξ} ≤ ∑_{j≥1} Pr{ξ_j} ≤ ∑_{j≥1} e^(−b_j/19) ≤ ∑_{j≥1} 2^(−b_j/19) = ∑_{j≥1} 2^(−C log(L/η_j)/19) ≤ ∑_{j≥1} 2^(−log(L/η_j)) = ∑_{j≥1} η_j/L ≤ (δ/6Ld) ∑_{j≥0} (1/2)^⌊j/d⌋ = δ/3L ≤ δ/3

Note that Lemma 3.1 is applicable here because for each j, we have b_j ≥ 19. To see this, use the facts that b_j = C log(L/η_j) and η_j ≤ δ/6d to obtain b_j ≥ C log(6Ld/δ), which is at least 19 because C ≥ 19 and 6Ld/δ ≥ 2. ∎

#### 3.1.2. Fingerprint Failure

The following lemma proves that a fingerprint error happens with probability at most δ/3, ensuring the correctness of the algorithm.

###### Lemma 3.3 ().

Upon termination, Bob does not have the correct guess of Alice’s message with probability at most δ/3.

###### Proof.

Let ξ be the event that Bob does not have the correct guess of Alice’s message upon termination. Note that in round j, from Theorem 2.1, the fingerprints fail with probability at most η_j. Using a union bound over these rounds, we get

  Pr{ξ} ≤ ∑_{j≥1} η_j = ∑_{j≥1} (δ/6d)(1/2)^⌊j/d⌋ ≤ (δ/6) ∑_{j≥0} (1/2)^j = δ/3

#### 3.1.3. AMD Failure

###### Lemma 3.4 ().

The probability of AMD failure is at most δ/3.

###### Proof.

Note that in round j, from Theorem 2.3, an AMD failure occurs with probability at most η_j. Hence, using a union bound over the rounds, as in the proof of Lemma 3.3, AMD failure occurs with probability at most ∑_{j≥1} η_j ≤ δ/3. ∎

### 3.2. Probability of Failure

###### Lemma 3.5 ().

Our algorithm succeeds with probability at least 1 − δ.

###### Proof.

Lemmas 3.2, 3.3 and 3.4 ensure that the three bad events, as defined previously, each happen with probability at most δ/3. Hence, using a union bound over the occurrence of these three events, the total probability of failure of the algorithm is at most δ. If the three bad events do not occur, then Alice will continue to send evaluations of the polynomial until Bob has the correct message. Since T is finite, Bob will eventually have the correct message and terminate. ∎

### 3.3. Cost to the algorithm

Recall that Alice and Bob compute their polynomials over GF(q). We refer to every tuple (x, y) that Bob stores after receiving from Alice the evaluation of the polynomial at x, possibly tampered with, as a polynomial evaluation tuple. We call a polynomial evaluation tuple (x, y) in Bob’s set good if y agrees with Alice’s polynomial at x, and bad otherwise.

We begin by stating two important lemmas that relate the number of bits flipped by the adversary to make polynomial evaluation tuples bad to the number of bits required to send them.

###### Lemma 3.6 ().

Let f(m) be the number of bits flipped by the adversary to make m polynomial evaluation tuples bad. Then, f(m) ≥ m if m ≤ d+1, and

  f(m) ≥ (d+1) + (C/6)((m−d−1) log(6Ld/δ) + (m−d−3)²/4d)

otherwise.

###### Proof.

Let m = m₁ + m₂, where m₁ is the number of polynomial evaluation tuples that were not encoded and m₂ is the number of AMD- and error-encoded polynomial evaluation tuples. Clearly, m₁ ≤ d+1. Each of the remaining polynomial evaluation tuples is sent in pairs, one pair per round. Since the adversary needs to flip at least a third of the bits of each encoded polynomial evaluation tuple to make it bad, we have

  f(m) ≥ m₁ + (1/3) ∑_{j=1}^{m₂/2} b_j = m₁ + (C/3) ∑_{j=1}^{m₂/2} (log(6Ld/δ) + ⌊j/d⌋) ≥ m₁ + (C/6)(m₂ log(6Ld/δ) + (m₂−2)²/4d)

Since the number of bits per polynomial evaluation tuple increases monotonically, the expression above becomes f(m) ≥ m if m ≤ d+1, and

  f(m) ≥ (d+1) + (C/6)((m−d−1) log(6Ld/δ) + (m−d−3)²/4d)

otherwise. ∎

###### Lemma 3.7 ().

Let g(m) be the number of bits required to send m polynomial evaluation tuples, where m > d+1. Then,

  g(m) ≤ L + 5C(((m−d−1)/2) log(6Ld/δ) + (m−d+1)²/8d).
###### Proof.

If m ≤ d+1, then we have g(m) ≤ L, since each of these polynomial evaluation tuples is of length L/(d+1). For m > d+1, taking into account the fact that each round involves the exchange of at most 5 messages between Alice and Bob, each of at most b_j bits, we get

  g(m) ≤ L + 5 ∑_{j=1}^{(m−d−1)/2} b_j = L + 5C ∑_{j=1}^{(m−d−1)/2} (log(6Ld/δ) + ⌊j/d⌋) ≤ L + 5C(((m−d−1)/2) log(6Ld/δ) + (m−d+1)²/8d)

∎

###### Lemma 3.8 ().

Let r be any round at the end of which Bob’s reconstructed polynomial differs from Alice’s polynomial. Then the number of bad polynomial evaluation tuples through round r is at least r/4.

###### Proof.

We call a field element x good if its majority tuple agrees with Alice’s polynomial, and bad otherwise. Let g_e be the number of good field elements and b_e be the number of bad field elements up to round r. Similarly, let g_t be the number of good polynomial evaluation tuples and b_t be the number of bad polynomial evaluation tuples up to round r. Then, from Theorem 2.2, we must have g_e ≤ b_e + d. Note that the total number of field elements for which Bob has received polynomial evaluation tuples from Alice through round r is g_e + b_e = min(d+2r+1, q). Adding this equality to the previous inequality, we have

  (3.1) b_e ≥ (1/2) min(2r+1, q−d).

The total number of polynomial evaluation tuples received by Bob up to round is given by

  (3.2) b_t + g_t = d + 2r + 1.

Note that every bad field element is associated with at least ⌊(d+2r+1)/(2 min(d+2r+1, q))⌋ bad polynomial evaluation tuples. This gives b_t ≥ b_e ⌊(d+2r+1)/(2 min(d+2r+1, q))⌋. Using this inequality with Eqs. (3.1) and (3.2), we have

  (3.3) b_t ≥ (1/2) min(2r+1, q−d) ⌊(d+2r+1)/(2 min(d+2r+1, q))⌋ ≥ (1/2) ⌊(d+2r+1) min(2r+1, q−d)/(2 min(d+2r+1, q))⌋

Case I: d+2r+1 ≤ q. For this case, we have

  (3.4) (1/2) ⌊(d+2r+1) min(2r+1, q−d)/(2 min(d+2r+1, q))⌋ = (1/2) ⌊(2r+1)/2⌋ = r/2 ≥ r/4

Case II: d+2r+1 > q. For this case, we have

  (3.5) (1/2) ⌊(d+2r+1) min(2r+1, q−d)/(2 min(d+2r+1, q))⌋ = (1/2) ⌊(d+2r+1)(q−d)/2q⌋ ≥ (1/2) ⌊((2r+1)/2)(1 − d/q)⌋ ≥ r/4

where the last inequality holds since d/q ≤ 1/2 for q ≥ 2d.

Combining Eqs. (3.4) and (3.5), we get b_t ≥ r/4. ∎

We now state a lemma that is crucial to the proof of Theorem 1.1.

###### Lemma 3.9 ().

If Bob terminates before Alice, the total number of bits sent by our algorithm is

  L + O(T + min(T+1, L/log L) log(L/δ)).
###### Proof.

Let r′ be the last round at the end of which Bob’s polynomial differs from Alice’s, or r′ = 0 if Bob’s polynomial is correct at the end of every round. Let T₁ be the number of bits corrupted by the adversary through round r′. Let A₁ represent the total cost through round r′ and A₂ be the cost of the algorithm after round r′. Note that after round r′, the adversary must corrupt one of either (1) the fingerprint, or (2) its echo, or (3) silence on the channel in Step 15 of Alice’s algorithm, in every round, to delay termination. Also, after round r′, Alice and Bob must exchange at least a fingerprint and an echo even if T = 0. Thus, we have

  (3.6) A₂ = O(T + log(L/δ))

Recall that the number of polynomial evaluation tuples sent up to round r′ is d + 2r′ + 1. Then, from Lemma 3.7, we have

  (3.7) A₁ ≤ g(d+2r′+1) ≤ L + 5C(r′ log(6Ld/δ) + (r′+1)²/2d).

From Lemma 3.8, we have that the number of bad polynomial evaluation tuples is at least r′/4. Thus, from Lemma 3.6, we have T₁ ≥ f(r′/4), which implies T₁ ≥ r′/4 if r′/4 ≤ d+1. Otherwise, we have

  (3.8) T₁ ≥ (d+1) + (C/6)((r′/4−d−1) log(6Ld/δ) + (r′/4−d−3)²/4d)

Case I: r′/4 ≤ d+1. Since T₁ is at least the number of bad polynomial evaluation tuples, from Lemma 3.8, we have T₁ ≥ r′/4, which gives r′ ≤ min(4T₁, 4(d+1)). Hence, using Eq. (3.7), we get

  A₁ ≤ L + 5C(r′ log(6Ld/δ) + (r′+1)²/2d) ≤ L + 5C(min(4T₁, 4(d+1)) log(6Ld/δ) + (4d+5)²/2d)
  (3.9) = L + O(min(T₁, L/log L) log(L/δ) + L/log L)

where the last equality holds because d = O(L/log L).

Case II: r′/4 > d+1. From Eq. (3.8), we have

  (3.10) T₁ ≥ (d+1) + (C/6)((r′/4−d−1) log(6Ld/δ) + (r′/4−d−3)²/4d).

Since each summand in the inequality above is positive and C ≥ 6, we get (r′/4 − d − 1) log(6Ld/δ) ≤ T₁, which gives

  (3.11) r′ log(6Ld/δ) ≤ 4T₁ + 4(d+1) log(6Ld/δ).

Similarly, since C ≥ 6, we have (r′/4 − d − 3)² ≤ 4dT₁, which gives r′ ≤ 8√(T₁d) + 4d + 12. Building on this, we get

  (3.12) (r′+1)²/2d ≤ (8√(T₁d) + 4d + 13)²/2d

Hence, from Eqs. (3.7), (3.11) and (3.12), we get

  A₁ ≤ L + 5C(r′ log(6Ld/δ) + (r′+1)²/2d) ≤ L + 5C(4T₁ + 4(d+1) log(6Ld/δ) + (8√(T₁d) + 4d + 13)²/2d)
  (3.13) = L + O(T₁ + (L/log L) log(L/δ))

where the last equality holds because d = O(L/log L) and √(T₁d) = O(T₁ + d).

Combining Eqs. (3.6), (3.9) and (3.13), and using T₁ ≤ T, the total number of bits sent by the algorithm becomes

  A₁ + A₂ = L + O(T + min(T+1, L/log L) log(L/δ))

∎

Putting it all together, we are now ready to state our main theorem.

###### Theorem 3.1 ().

Our algorithm tolerates an unknown number of adversarial errors, T, and for a given δ > 0, succeeds with probability at least 1 − δ, and sends L + O(T + min(T+1, L/log L) log(L/δ)) bits in expectation.

###### Proof.

By Lemma 3.5, with probability at least 1 − δ, Bob terminates before Alice with the correct message. If this happens, then by Lemma 3.9, the total number of bits sent is

  L + O(T + min(T+1, L/log L) log(L/δ))

∎

## 4. Unknown L

We now discuss an algorithm for the case when the message length L is unknown to Bob. The only parameter now known to both Alice and Bob is δ.

Our main idea is to make use of an algorithm from (Aggarwal et al., 2017), which enables Alice to send a message of unknown length to Bob in our model, but is inefficient. (We refer the reader to (Aggarwal et al., 2017) for details on this algorithm; we discuss only its use in this paper.) We thus use a two-phase approach. First, we send the length L of the message (i.e. a total of O(log L) bits) from Alice to Bob using the algorithm of (Aggarwal et al., 2017). Second, once Bob learns the value L, we use the algorithm from Section 2 to communicate the message M. We will show that the total number of bits sent by this two-phase algorithm is asymptotically similar to the case when the message length is known by Bob in advance.

### 4.1. Algorithm Overview

Let π₁ be a noise-free protocol in which Alice sends the length L to Bob, who is unaware of the length (O(log L) in this case) of this message. Let π₂ be a noise-free protocol in which Alice sends M to Bob, who knows the length L a priori. We can write the noise-free protocol π to communicate M from Alice to Bob, who does not know L, as a composition of π₁ and π₂, in this order. Let π₁′ and π₂′ be the simulations of π₁ and π₂, respectively, that are robust to adversarial bit flipping.

To simulate π with desired error probability δ, we proceed in two steps. We first make π₁ robust with error tolerance δ/2 using the algorithm from (Aggarwal et al., 2017), setting its error parameter to δ/2. Then, we make π₂ robust with error tolerance δ/2 using Algorithms 1 and 2. This way, when we compose the robust versions of π₁ and π₂, we get π′ with error probability at most δ (by a union bound). The correctness of π′ immediately follows from the correctness of π₁′ and π₂′, by construction.

### 4.2. Probability of Failure

The failure events for π′ are exactly the failure events for π₁′ and π₂′. In other words, we say π′ fails when one or both of π₁′ and π₂′ fail. Thus, the failure probability of π′ is at most δ/2 + δ/2 = δ, by a simple union bound over the two sub-protocols.

### 4.3. Number of bits sent

To analyze the number of bits sent, let T₁ be the number of bits flipped by the adversary in π₁′ and T₂ be the number of bits flipped by the adversary in π₂′. Recall that the length of the message from Alice to Bob in π₁ is O(log L) and that in π₂ is L. Let A₁ be the number of bits sent in π₁′ and A₂ be the number of bits sent in π₂′. Thus, using the main theorem from (Aggarwal et al., 2017) (with message length O(log L) and error parameter δ/2), we get

  A₁ = O(log L ⋅ log log L + T₁)

Similarly, using Theorem 3.1 from this paper (with error parameter δ/2), we get

  A₂ = L + O(T₂ + min(T₂+1, L/log L) log(L/δ))

Using T = T₁ + T₂, the total number of bits sent by π′ is then L + O(T + min(T+1, L/log L) log(L/δ)). The proof of Theorem 1.1 now follows directly from the above analysis.

Note that another approach to sending a message of unknown length from Alice to Bob would have been to directly use the algorithm in (Aggarwal et al., 2017) on the message M itself. However, this would have incurred a higher blowup than the approach that we take in this paper. More specifically, when T is small, the direct use of the multiparty algorithm gives a multiplicative logarithmic blowup in the number of bits, while our current approach maintains a constant overall blowup by using the heavyweight protocol only for the length of the message (which is exponentially smaller than the message itself).

## 5. Conclusion

We have described an algorithm for sending a message over a two-way noisy channel. Our algorithm is robust to an adversary that can flip an unknown but finite number of bits on the channel. The adversary knows our algorithm and the message to be sent, but does not know the random bits of the sender and receiver, nor the bits sent over the channel. The receiver of the message does not know the message length in advance.

Assume the message length is L, the number of bits flipped by the adversary is T, and δ is an error parameter known to both players. Then our algorithm sends an expected L + O(T + log L ⋅ log log L + min(T + 1, L/log L) ⋅ log L) bits, and succeeds with probability at least 1 − δ. When T = 0 and δ is polynomially small in L, the number of bits sent is L + O(log L ⋅ log log L), which is asymptotically optimal; and when T = Ω(L), the number of bits sent is O(T).

Many open problems remain, including the following. First, can we determine asymptotically matching upper and lower bounds on the number of bits required for our problem? Our current algorithm is asymptotically optimal in the extreme cases T = 0 and T = Ω(L), but is it optimal for intermediate values of T? Second, can we tolerate a more powerful adversary, or different types of adversaries? For example, it seems that our current algorithm can tolerate a completely omniscient adversary, provided that adversary can only flip a chosen bit with some probability that is at most 1 − ε, for some fixed ε > 0. Finally, can we extend our result to the problem of sending a message from a source to a target in an arbitrary network where nodes are connected via noisy two-way channels? This final problem seems closely related to the problem of network coding (Liew et al., 2013; Matsuda et al., 2011; Bassoli et al., 2013), for the case where the amount of noise and the message size are not known in advance. In this final problem, since there are multiple nodes, we would likely also need to address problems of asynchronous communication.

## References

• Aggarwal et al. (2017) Abhinav Aggarwal, Varsha Dani, Thomas P Hayes, and Jared Saia. 2017. Distributed Computing with Channel Noise. arXiv preprint arXiv:1612.05943v2 (2017).
• Bassoli et al. (2013) Riccardo Bassoli, Hugo Marques, Jonathan Rodriguez, Kenneth W Shum, and Rahim Tafazolli. 2013. Network coding theory: A survey. IEEE Communications Surveys & Tutorials 15, 4 (2013), 1950–1978.
• Brakerski and Kalai (2012) Zvika Brakerski and Yael Tauman Kalai. 2012. Efficient Interactive Coding against Adversarial Noise. In 53rd IEEE Annual Symposium on Foundations of Computer Science (FOCS). 160–166.
• Brakerski and Naor (2013) Zvika Brakerski and Moni Naor. 2013. Fast Algorithms for Interactive Coding. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). 443–456.
• Braverman (2012a) Mark Braverman. 2012a. Coding for Interactive Computation: Progress and Challenges. In 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton). 1914–1921.
• Braverman (2012b) Mark Braverman. 2012b. Towards Deterministic Tree Code Constructions. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS). 161–167.
• Braverman and Efremenko (2014) Mark Braverman and Klim Efremenko. 2014. List and Unique Coding for Interactive Communication in the Presence of Adversarial Noise. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on. 236–245.
• Braverman and Rao (2011) Mark Braverman and Anup Rao. 2011. Towards Coding for Maximum Errors in Interactive Communication. In Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing (STOC). 159–166.
• Cramer et al. (2008) Ronald Cramer, Yevgeniy Dodis, Serge Fehr, Carles Padró, and Daniel Wichs. 2008. Detection of algebraic manipulation with applications to robust secret sharing and fuzzy extractors. In Advances in Cryptology–EUROCRYPT 2008. Springer, 471–488.
• Dani et al. (2015) Varsha Dani, Thomas Hayes, Mahnush Movahedi, Jared Saia, and Maxwell Young. 2015. Interactive Communication with Unknown Noise Rate. CoRR abs/1504.06316 (2015).
• Dubhashi and Panconesi (2009) Devdatt P Dubhashi and Alessandro Panconesi. 2009. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press.
• Franklin et al. (2015) Matthew Franklin, Ran Gelles, Rafail Ostrovsky, and Leonard Schulman. 2015. Optimal Coding for Streaming Authentication and Interactive Communication. IEEE Transactions on Information Theory 61, 1 (Jan 2015), 133–145.
• Gelles et al. (2011) Ran Gelles, Ankur Moitra, and Amit Sahai. 2011. Efficient and Explicit Coding for Interactive Communication. In Foundations of Computer Science (FOCS). 768–777.
• Ghaffari and Haeupler (2013) Mohsen Ghaffari and Bernhard Haeupler. 2013. Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding. (2013). Available at: http://arxiv.org/abs/1312.1763.
• Ghaffari et al. (2014) Mohsen Ghaffari, Bernhard Haeupler, and Madhu Sudan. 2014. Optimal Error Rates for Interactive Coding I: Adaptivity and Other Settings. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC). 794–803.
• Haeupler (2014) Bernhard Haeupler. 2014. Interactive channel capacity revisited. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on. IEEE, 226–235.
• Hashemi and Trachtenberg (2014) Morteza Hashemi and Ari Trachtenberg. 2014. Near real-time rateless coding with a constrained feedback budget. In Communication, Control, and Computing (Allerton), 2014 52nd Annual Allerton Conference on. IEEE, 529–536.
• Liew et al. (2013) Soung Chang Liew, Shengli Zhang, and Lu Lu. 2013. Physical-layer network coding: Tutorial, survey, and beyond. Physical Communication 6 (2013), 4–42.
• Luby (2002) Michael Luby. 2002. LT codes. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS). 271–280.
• MacKay (2005) David JC MacKay. 2005. Fountain codes. In Communications, IEE Proceedings-, Vol. 152. IET, 1062–1068.
• Matsuda et al. (2011) Takahiro Matsuda, Taku Noguchi, and Tetsuya Takine. 2011. Survey of network coding and its applications. IEICE transactions on communications 94, 3 (2011), 698–717.
• Mitzenmacher (2004) Michael Mitzenmacher. 2004. Digital fountains: A survey and look forward. In Information Theory Workshop, 2004. IEEE. IEEE, 271–276.
• Moore and Schulman (2014) Cristopher Moore and Leonard J. Schulman. 2014. Tree Codes and a Conjecture on Exponential Sums. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science (ITCS). 145–154.
• Naor and Naor (1993) Joseph Naor and Moni Naor. 1993. Small-bias probability spaces: Efficient constructions and applications. SIAM journal on computing 22, 4 (1993), 838–856.
• Ostrovsky et al. (2009) Rafail Ostrovsky, Yuval Rabani, and Leonard J. Schulman. 2009. Error-Correcting Codes for Automatic Control. Information Theory, IEEE Transactions on 55, 7 (July 2009), 2931–2941.
• Palanki and Yedidia (2004) Ravi Palanki and Jonathan S Yedidia. 2004. Rateless codes on noisy channels. In IEEE International Symposium on Information Theory. Citeseer, 37–37.
• Peczarski (2006) Marcin Peczarski. 2006. An Improvement of the Tree Code Construction. Inform. Process. Lett. 99, 3 (Aug. 2006), 92–95.
• Reed and Solomon (1960) Irving S Reed and Gustave Solomon. 1960. Polynomial codes over certain finite fields. Journal of the society for industrial and applied mathematics 8, 2 (1960), 300–304.
• Schulman (1992) L.J. Schulman. 1992. Communication on Noisy Channels: A Coding Theorem for Computation. In Foundations of Computer Science, 1992. Proceedings., 33rd Annual Symposium on. 724–733.
• Schulman (1993) Leonard J. Schulman. 1993. Deterministic Coding for Interactive Communication. In Proceedings of the Annual ACM Symposium on Theory of Computing (STOC). 747–756.
• Welch and Berlekamp (1986) Lloyd R Welch and Elwyn R Berlekamp. 1986. Error correction for algebraic block codes. (Dec. 30 1986). US Patent 4,633,470.