
New Bounds for the Garden-Hose Model1

Abstract

We show new results about the garden-hose model. Our main results include improved lower bounds based on non-deterministic communication complexity (leading to previously unknown linear lower bounds for Inner Product mod 2 and Disjointness), as well as an upper bound for the Distributed Majority function (previously conjectured to have quadratic complexity). We show an efficient simulation of formulae made of AND, OR, XOR gates in the garden-hose model, which implies that lower bounds on the garden-hose complexity of super-quadratic order will be hard to obtain for explicit functions. Furthermore we study a time-bounded variant of the model, in which even modest savings in time can lead to exponential lower bounds on the size of garden-hose protocols.

1 Introduction

1.1 Background: The Model

Recently, Buhrman et al. [4] proposed a new measure of complexity for finite Boolean functions, called garden-hose complexity. This measure can be viewed as a type of distributed space complexity, and while its motivation is mainly in applications to position based quantum cryptography, the playful definition of the model is quite appealing in itself. Garden-hose complexity can be viewed as a natural measure of space, in a situation where two players with private inputs compute a Boolean function cooperatively. Space-bounded communication complexity has been investigated before [2, 7, 9] (usually for problems with many outputs), and recently Brody et al. [3] have studied a related model of space bounded communication complexity for Boolean functions (see also [17]). In this context the garden-hose model can be viewed as a memoryless model of communication that is also reversible.

To describe the garden-hose model let us consider two neighbors, Alice and Bob. They own adjacent gardens which happen to have empty water pipes crossing their common boundary. These pipes are the only means of communication available to the two. Their goal is to compute a Boolean function f(x, y) on a pair of private inputs x (held by Alice) and y (held by Bob), using water and the pipes across their gardens as a means of communication2.

A garden-hose protocol works as follows: There are s shared pipes. Alice takes some pieces of hose and connects pairs of the open ends of the pipes. She may keep some of the ends open. Bob acts in the same way for his end of the pipes. The connections Alice and Bob place depend on their local inputs x and y, and we stress that every end of a pipe is only connected to at most one other end of a pipe (meaning no Y-shaped pieces of hose may be used to split or combine flows of water). Finally, Alice connects a water tap to one of those open ends on her side and starts the water. Based on the connections of Alice and Bob, water flows back and forth through the pipes and finally ends up spilling on one side.

If the water spills on Alice’s side we define the output to be 0. Otherwise, the water spills on Bob’s side and the output value is 1. It is easy to see that, due to the way the connections are made, the water must eventually spill on one of the two sides, since cycles are not possible.

Note that the pipes can be viewed as a communication channel that can transmit log s bits (by the choice of which pipe carries the water), and that the garden-hose protocol is memoryless, i.e., regardless of the previous history, water from pipe i always flows to pipe j if those two pipes are connected. Furthermore computation is reversible, i.e., one can follow the path taken by the water backwards (e.g. by sucking the water back).

Buhrman et al. [4] have shown that it is possible to compute every function by playing a garden-hose game. A garden-hose protocol consists of the scheme by which Alice chooses her connections depending on her private input x and by which Bob chooses his connections depending on his private input y. Alice also chooses the pipe that is connected to the tap. The protocol computes a function f if for all inputs (x, y) with f(x, y) = 0 the water spills on Alice’s side, and for all inputs with f(x, y) = 1 the water spills on Bob’s side.

The size of a garden-hose protocol is the number of pipes used. The garden-hose complexity GH(f) of a function f is the minimum number of pipes needed in any garden-hose game that computes the value of f(x, y) for all x and y such that f(x, y) is defined.

The garden-hose model is originally motivated by an application to quantum position-verification schemes [4]. In this setting the position of a prover is verified via communications between the prover and several verifiers. An attack on such a scheme is performed by several provers, none of which are in the claimed position. [4] proposes a protocol for position-verification that depends on a function f, and a certain attack on this scheme requires the attackers to share as many entangled qubits as the garden-hose complexity of f. Hence all f with low garden-hose complexity are not suitable for this task, and it becomes desirable to find explicit functions with large garden-hose complexity.

Buhrman et al. [4] prove a number of results about the garden-hose model:

  • Deterministic one-way communication complexity can be used to show lower bounds of up to Ω(n/log n) for many functions.

  • For the Equality problem they refer to a linear lower bound shown by Pietrzak (the proof implicitly uses the fooling set technique from communication complexity [10] [personal communication]).

  • They argue that super-polynomial lower bounds for the garden-hose complexity of a function imply that the function cannot be computed in Logspace, making such bounds hard to prove for ‘explicit’ functions.

  • They define randomized and quantum variants of the model and show that randomness can be removed at the expense of multiplying size by a factor of O(n) (for quantum larger gaps are known).

  • Via a counting argument it is easy to see that most Boolean functions need size 2^{Ω(n)}.

Very recently Chiu et al. [5] have improved the upper bound for the Equality function compared to the previously known bound from [4].

1.2 Our Results

We study garden-hose complexity and establish several new connections with well studied models like communication complexity, permutation branching programs, and formula size.

We start by showing that non-deterministic communication complexity gives lower bounds on the garden-hose complexity of any function f. This improves the lower bounds for several important functions like Inner Product and Disjointness from Ω(n/log n) to Ω(n).

We observe that any 2-way deterministic communication protocol can be converted to a garden-hose protocol so that the complexity is upper bounded by the size of the protocol tree of the communication protocol.

We then turn to comparing the model to another nonuniform notion of space complexity, namely branching programs. We show how to convert any permutation branching program to a garden-hose protocol with only a constant factor loss in size.

The most important application of this simulation is that it allows us to find a garden-hose protocol for the distributed Majority function, DMAJ(x, y) = 1 iff Σ_i x_i · y_i ≥ n/2, that has size O(n · polylog(n)), disproving the conjecture in [4] that this function has quadratic complexity.

Using the garden-hose protocols for Majority, Parity, AND, OR, we show upper bounds on the composition of functions with these.

We then show how to convert any Boolean formula with AND, OR, XOR gates to a garden-hose protocol with a small loss in size. In particular, any formula consisting of arbitrary fan-in 2 gates can be simulated by a garden-hose protocol with an almost linear loss in size. This result strengthens the previous observation that explicit super-polynomial lower bounds for GH(f) will be hard to show: even bounds of order n^{2+ε} would improve on the long-standing best lower bounds on formula size due to Nečiporuk from 1966 [12]. We can also simulate formulae including a limited number of Majority gates of arbitrary fan-in, so one might be worried that even super-linear lower bounds could be difficult to prove. We argue, however, that for formulae using arbitrary symmetric gates we can still get near-quadratic lower bounds using a Nečiporuk-type method. Nevertheless we have to leave super-linear lower bounds on the garden-hose complexity of explicit functions as an open problem.

Next we define a notion of time in garden-hose protocols and prove that for any function f, if we restrict the number of times water can flow through pipes to some value k, we have GH_k(f) ≥ 2^{Ω(D_k(f)/k)}, where GH_k denotes the time-bounded garden-hose complexity, and D_k(f) the k-round deterministic communication complexity. This result leads to strong lower bounds for the time-bounded complexity of e.g. Equality, and to a time-hierarchy based on the pointer jumping problem.

Finally, we further investigate the power of randomness in the garden-hose model by considering private coin randomness ([4] consider only public coin randomness).

1.3 Organization

Most proofs are deferred to the appendix.

2 Preliminaries

2.1 Definition of the Model

We now describe the garden-hose model in graph terminology. In a garden-hose protocol with s pipes there is a set V = {1, …, s} of vertices, one for each pipe, plus one extra vertex, the tap t.

Given their inputs x and y, Alice and Bob want to compute f(x, y). Depending on x, Alice connects some of the vertices in V ∪ {t} in pairs by adding edges that form a matching among the vertices in V ∪ {t}. Similarly, depending on y, Bob connects some of the vertices in V in pairs by adding edges that form a matching in V.

Notice that after they have added the additional edges, a path starting from the vertex t is formed in the resulting graph. Since no vertex has degree larger than 2, this path is unique and ends at some vertex. We define the output of the game to be the parity of the length of the path starting at t. For instance, if the tap is not connected the path has length 0, and the output is 0. If the tap is connected to another vertex, and that vertex is the end of the path, then the path has length 1 and the output is 1, etc.

A garden-hose protocol of size s for f is a mapping from Alice’s inputs x to matchings among V ∪ {t} together with a mapping from Bob’s inputs y to matchings among V. The protocol computes f if for all (x, y) the path has even length iff f(x, y) = 0. The garden-hose complexity GH(f) of f is the smallest s such that a garden-hose protocol of size s exists that computes f.
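To make the path semantics concrete, the following small Python sketch (our own illustration, not part of the formal definition) simulates a single run of a garden-hose protocol: it takes the tap connection and the two matchings, follows the water, and returns the parity of the path length. The function name and input format are only choices made for this illustration.

    # A minimal sketch (our own illustration): simulate one run of a
    # garden-hose protocol.  Pipes carry arbitrary hashable names;
    # `alice_matching` and `bob_matching` pair up pipe ends on the respective
    # sides, and `tap_pipe` is the pipe Alice connects to the tap (or None).
    # The return value is the parity of the path length, i.e. 0 if the water
    # spills on Alice's side and 1 if it spills on Bob's side.

    def run_garden_hose(tap_pipe, alice_matching, bob_matching):
        def symmetric(matching):
            full = {}
            for a, b in matching.items():
                full[a], full[b] = b, a
            return full

        alice, bob = symmetric(alice_matching), symmetric(bob_matching)
        if tap_pipe is None:
            return 0                 # path length 0: water spills at Alice
        length, side, pipe = 0, 'alice', tap_pipe
        while True:
            length += 1              # the water crosses through `pipe`
            side = 'bob' if side == 'alice' else 'alice'
            nxt = (bob if side == 'bob' else alice).get(pipe)
            if nxt is None:          # open end: the water spills on this side
                return length % 2    # even -> Alice (output 0), odd -> Bob (1)
            pipe = nxt               # follow the hose to the next pipe

    # Example with 3 pipes: Alice connects the tap to pipe 1 and pipes 2-3,
    # Bob connects pipes 1-2; the path is tap-1-2-3, spilling on Bob's side.
    print(run_garden_hose(1, {2: 3}, {1: 2}))   # prints 1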

We note that one can form a matrix M_s that has rows labeled by all of Alice’s matchings, and columns labeled by Bob’s matchings, and contains the parities of the corresponding path lengths. A function f has garden-hose complexity at most s iff its communication matrix is a sub-matrix of M_s. M_s is called the garden-hose matrix for size s.

2.2 Communication Complexity, Formulae, Branching Programs

Definition 1.

Let f : {0,1}^n × {0,1}^n → {0,1}. In a communication complexity protocol two players Alice and Bob receive inputs x and y from {0,1}^n. In the protocol the players exchange messages in order to compute f(x, y). Such a protocol is represented by a protocol tree, in which vertices, alternating by layer, belong to Alice or to Bob, edges are labeled with messages, and leaves either accept or reject. See [10] for more details. The communication matrix is the matrix containing f(x, y) in row x and column y.

We say a protocol correctly computes the function f if for all (x, y) the output of the protocol is equal to f(x, y). The communication complexity of a protocol is the maximum number of bits exchanged over all (x, y).

The deterministic communication complexity D(f) of a function f is the complexity of an optimal protocol that computes f.

Definition 2.

The non-deterministic communication complexity N(f) of a Boolean function f is the length of the communication in an optimal two-player protocol in which Alice and Bob can make non-deterministic guesses, and there are three possible outputs: accept, reject, give up. For each (x, y) with f(x, y) = 1 there is a guess that will make the players accept but there is no guess that will make the players reject, and vice versa for inputs with f(x, y) = 0.

Note that the above is the two-sided version of non-deterministic communication complexity. It is well known [10] that N(f) ≤ D(f) ≤ O(N(f)²), and that these inequalities are tight.

Definition 3.

In a public coin randomized protocol for f the players have access to a public source of random bits. For all inputs it is required that the protocol gives the correct output with probability 1 − ε for some constant ε < 1/2. The public coin randomized communication complexity of f, R^{pub}(f), is the complexity of the optimal public coin randomized protocol. Private coin protocols are defined analogously (players now have access only to private random bits), and their complexity is denoted by R(f).

Definition 4.

The deterministic communication complexity of f for protocols with at most k messages exchanged, starting with Alice, is denoted by D_k(f).

Definition 5.

In a simultaneous message passing protocol, both Alice and Bob send messages to a referee. The referee, based on the two messages, computes the output. The simultaneous communication complexity of a function f, denoted R^{||}(f), is the cost of the best simultaneous protocol that computes the function using private randomness and error 1/3.

Next we define Boolean formulae.

Definition 6.

A Boolean formula is a Boolean circuit in which every gate has fan-out 1 (except the output gate). A Boolean formula of depth d is then a tree of depth d. The nodes are labeled by gate functions from a family of allowed gate functions, e.g. the class of the 16 possible functions g : {0,1}² → {0,1} in case the fan-in is restricted to 2. Another interesting class of gate functions is the class of all symmetric functions (of arbitrary fan-in). The formula size of a function f (relative to a class of gate functions) is the smallest number of leaves in a formula computing f.

Finally, we define branching programs. Our definition of permutation branching programs is extended in a slightly non-standard way.

Definition 7.

A branching program is a directed acyclic graph with one source node and two sink nodes (labeled 0 and 1). The source node has in-degree 0. The sink nodes have out-degree 0. All non-sink nodes are labeled by variables x_i and have out-degree 2. The computation on an input x starts from the source node and, depending on the value of the variable queried at a node, either moves along the left outgoing edge or the right outgoing edge of that node. An input x is accepted iff the path defined by x in the branching program leads to the sink node labeled 1. The length of the branching program is the maximum length of any path, and the size is the number of nodes.

A layered branching program of length l is a branching program where all non-sink nodes (except the source) are partitioned into l layers. All the nodes in the same layer query the same variable x_i, and all outgoing edges of the nodes in a layer go to the nodes in the next layer or directly to a sink. The width of a layered branching program is defined to be the maximum number of nodes in any layer of the program. We consider the starting node to be in layer 0 and the sink nodes to be in layer l + 1.

A permutation branching program is a layered branching program where each layer has the same number of nodes, and if x_i is queried in layer j, then the edges labeled with 0 between layers j and j + 1 form an injective mapping from layer j to layer j + 1 (and so do the edges labeled with 1). Thus, for permutation branching programs, if we fix the value of x_i, each node on level j + 1 has in-degree at most 1.

We call a permutation branching program strict if there are no edges to the sinks from internal layers. This is the original definition of permutation branching programs. Programs that are not strict are also referred to as loose for emphasis.

We denote by PBP(f) the minimal size of a permutation branching program that computes f.

We note that simple functions like AND, OR can easily be computed by linear size loose permutation branching programs of width 2, something that is not possible for strict permutation branching programs [1].
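For readers less familiar with these objects, here is a tiny Python illustration (ours, not taken from the literature) of a layered branching program, together with the classical width-2 strict permutation branching program for the parity of n bits: every layer applies either the identity or the swap on its two nodes, so both the 0-edges and the 1-edges form bijections between consecutive layers.

    # A small illustration (not from the paper): a layered branching program is
    # given as a list of layers; each layer queries one variable and maps every
    # node of the layer to a successor node under value 0 and under value 1.
    # For parity the two maps are the identity and the swap of the two nodes,
    # so every layer is a permutation -- this is the classical width-2 strict
    # permutation branching program for XOR.

    def eval_layered_bp(layers, start, x):
        """layers: list of (var_index, succ0, succ1), succ* = dict node -> node."""
        node = start
        for var, succ0, succ1 in layers:
            node = (succ1 if x[var] else succ0)[node]
        return node          # for the parity program the final node is the answer

    def parity_bp(n):
        identity = {0: 0, 1: 1}
        swap     = {0: 1, 1: 0}
        # layer i queries x_i; reading a 1 flips the running parity (swap),
        # reading a 0 keeps it (identity) -- both maps are permutations of {0, 1}
        return [(i, identity, swap) for i in range(n)]

    x = [1, 0, 1, 1]
    print(eval_layered_bp(parity_bp(len(x)), 0, x))   # 1 ^ 0 ^ 1 ^ 1 = 1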

3 Garden-Hose Protocols and Communication Complexity

3.1 Lower Bound via Non-deterministic Communication

In this section we show that non-deterministic communication complexity can be used to lower bound GH(f). This bound is often better than the bound shown in [4], which cannot be larger than O(n/log n).

Theorem 8.

GH(f) ≥ N(f) − O(1).

The main idea is that a non-deterministic protocol simulating the garden-hose game can guess the set of pipes that are used on the path taken on input (x, y), instead of the path itself, reducing the complexity of the protocol. The set that is guessed may be a superset of the actually used pipes, introducing ambiguity. Nevertheless we can make sure that the additionally guessed pipes form cycles and are thus irrelevant.

As an application consider the Inner Product function IP(x, y) = Σ_i x_i y_i mod 2. It is well known that N(IP) ≥ n − O(1) [10], hence we get that GH(IP) = Ω(n). The same bound holds for Disjointness. These bounds improve on the previous Ω(n/log n) bounds for these functions [4]. Furthermore note that the fooling set technique gives only bounds of size O(log n) for the complexity of IP (see [10]), so the technique previously used to get a linear lower bound for Equality fails for IP.

3.2 GH(f) Is at Most the Size of a Protocol Tree for f

Buhrman et al. [4] show that any one-way communication protocol with complexity c can be converted to a garden-hose protocol with O(2^c) pipes. One-way communication complexity can, however, be much larger than two-way communication complexity [16].

Theorem 9.

For any function f, the garden-hose complexity GH(f) is upper bounded by the number of edges in a protocol tree for f.

The construction is better than the previous one in [4] for problems for which one-way communication is far from the many-round communication complexity.

4 Relating Permutation Branching Programs and the Garden-Hose Model

Definition 10.

In a garden-hose protocol a spilling pipe on a player’s side is a pipe such that water spills out of that pipe on the player’s side during the computation for some input (x, y).

We say a protocol has multiple spilling-pipes if there is more than one spilling-pipe on Alice’s side or on Bob’s side.

We now show a technical lemma that helps us compose garden-hose protocols without blowing up the size too much.

Lemma 11.

A garden-hose protocol P for f with multiple spilling pipes can be converted into another garden-hose protocol P′ for f that has only one spilling pipe on Alice’s side and one spilling pipe on Bob’s side. The size of P′ is at most 3 times the size of P plus 1.

Next we are going to show that it is possible to convert a (loose) permutation branching program into a garden-hose protocol with only a constant factor increase in size. We are stating a more general fact, namely that the inputs to the branching program we simulate can be functions (with small garden-hose complexity) instead of just variables. This allows us to use composition.

Lemma 12.

GH(g(f_1, …, f_k)) = O(PBP(g) · max_i GH(f_i)), where g : {0,1}^k → {0,1} and f_1, …, f_k are Boolean functions of the distributed inputs (x, y). The f_i do not necessarily depend on the same input bits.

A first corollary is the following fact already shown in [4]. Nonuniform Logspace is equal to the class of all languages recognizable by polynomial size families of branching programs. Since reversible Logspace equals deterministic Logspace [11], and a reversible Logspace machine (on a fixed input length) can be transformed into a polynomial size permutation branching program, we get the following.

Corollary 13.

Every function f computable in (nonuniform) Logspace satisfies GH(f) = poly(n). This holds for any partition of the variables among Alice and Bob.

5 The Distributed Majority Function

In this section we investigate the complexity of the Distributed Majority function.

Definition 14.

Distributed Majority: DMAJ(x, y) = 1 iff Σ_{i=1}^{n} x_i · y_i ≥ n/2, where x, y ∈ {0,1}^n.

Buhrman et al. [4] have conjectured that the complexity of this function is quadratic, which is what is suggested by the naïve garden-hose protocol for the problem. The naïve protocol implicitly keeps one counter for the current position and one for the sum of the products x_i · y_i seen so far, leading to quadratic size. Here we describe a construction of a permutation branching program of size O(n · polylog(n)) for Majority, which can then be used to construct a garden-hose protocol for the Distributed Majority function. The Majority function MAJ(z_1, …, z_n) is defined to be 1 iff Σ_i z_i ≥ n/2.

Note that the Majority function itself can be computed in the garden-hose model using O(n) pipes (for any way to distribute the inputs to Alice and Bob), since Alice can just communicate the number of 1s among her inputs to Bob. The advantage of using a permutation branching program to compute Majority is that by Lemma 12 we can then find a garden-hose protocol for the composition of MAJ and the Boolean AND, which is the Distributed Majority function. We adapt a construction of Sinha and Thathachar [19], who describe a branching program for the Majority function.

Lemma 15.

PBP(MAJ) = O(n · polylog(n)).

We can now state our result about the composition of functions with small garden-hose complexity via a Majority function.

Lemma 16.

For h = MAJ(f_1, …, f_k), where each function f_i has garden-hose complexity at most s, we have GH(h) = O(k · polylog(k) · s).

The lemma immediately follows from combining Lemma 15 with Lemma 12. Considering DMAJ(x, y) = MAJ(x_1 ∧ y_1, …, x_n ∧ y_n) we get

Corollary 17.

The garden-hose complexity of Distributed Majority is O(n · polylog(n)).

6 Composition and Connection to Formula Size

We wish to relate GH(f) to the formula size of f. To do so we examine the composition of garden-hose protocols by popular gate functions.

Theorem 18.

For h = g(f_1, …, f_k), where each function f_i has garden-hose complexity s_i:

  • if g = AND, then GH(h) = O(Σ_i s_i),

  • if g = OR, then GH(h) = O(Σ_i s_i),

  • if g = XOR, then GH(h) = O(Σ_i s_i),

  • if g = MAJ, then GH(h) = O(k · polylog(k) · max_i s_i).

This result follows from Lemma 16 and Lemma 12 combined with the trivial loose permutation branching programs for AND, OR, XOR.

We now turn to the simulation of Boolean formulae by garden-hose protocols. We use the simulation of formulae over the set of all fan-in 2 gate functions by branching programs due to Giel [6].

Theorem 19.

Let F be a formula for a Boolean function h on inputs f_1, …, f_k, made of AND, OR, XOR gates of arbitrary fan-in. If F has size s and GH(f_i) ≤ m for all i, then for all constants ε > 0 we have GH(h) = O(s^{1+ε} · m).

Proof.

Giel [6] shows the following simulation result:

Fact 1.

Let ε > 0 be any constant. Assume there is a formula with arbitrary fan-in 2 gates and size s for a Boolean function f. Then there is a layered branching program of size O(s^{1+ε}) and constant width that also computes f.

By inspection of the proof it becomes clear that the constructed branching program is in fact a strict permutation branching program. The theorem follows by applying Lemma 12. ∎

Corollary 20.

When the f_i’s are single variables we get GH(f) = O(s^{1+ε}) for all constants ε > 0, where s is the formula size of f. Thus any lower bound on the garden-hose complexity of a function yields a slightly smaller lower bound on formula size (all gates of fan-in 2 allowed).

The best lower bound of Ω(n²/log n) known for the size of formulae over the basis of all fan-in 2 gate functions is due to Nečiporuk [12]. The Nečiporuk lower bound method (based on counting subfunctions) can also be used to give the best general branching program lower bound of Ω(n²/log² n) (see [20]).

Due to the above, any lower bound larger than n² for the garden-hose model would immediately give lower bounds of almost the same magnitude for formula size and permutation branching program size. Proving super-quadratic lower bounds in these models is a long-standing open problem.

Due to the fact that we have small permutation branching programs for Majority, we can even simulate a more general class of formulae involving a limited number of Majority gates.

Theorem 21.

Let F be a formula for a Boolean function h on inputs f_1, …, f_k, made of AND, OR, XOR gates of arbitrary fan-in. Additionally there may be at most a constant number c of Majority gates (of arbitrary fan-in) on any path from the root to the leaves. If F has size s and GH(f_i) ≤ m for all i, then for all constants ε > 0 we have GH(h) = O(s^{1+ε} · m).

Proof.

Proceeding in reverse topological order we can replace all sub-formulae below a Majority gate by garden-hose protocols using Theorem 19, increasing the size of the sub-formula polynomially. Then we can apply Lemma 16 to replace the sub-formula including the Majority gate by a garden-hose protocol. If the size of the formula below the Majority gate is s′, then the resulting garden-hose size is O(s′^{1+ε′}) for any constant ε′ > 0, where the poly-logarithmic factor of Lemma 16 is hidden in the polynomial increase. Since every path from root to leaf has at most c Majority gates, and we may choose the constant in Theorem 19 small enough compared to c, we get our result. ∎

6.1 The Nečiporuk Bound with Arbitrary Symmetric Gates

Since garden-hose protocols can even simulate formulae containing some arbitrary fan-in Majority gates, the question arises whether one can hope for super-linear lower bounds at all. Maybe it is hard to show super-linear lower bounds for formulae having Majority gates? Note that very small formulae for the Majority function itself are not known (see [18] for the currently best constructions), hence we cannot argue that Majority gates do not add power to the model. In this subsection we sketch the simple observation that the Nečiporuk method [12] can be used to give good lower bounds for formulae made of arbitrary symmetric gates of any fan-in. Hence there is no obstacle to near-quadratic lower bounds from the formula size connection we have shown. We stress that nevertheless we do not have any super-linear lower bounds for the garden-hose model.

We employ the communication complexity notation for the Nečiporuk bound from [8].

Theorem 22.

Let f be a Boolean function and B_1, …, B_k a partition of the input bits of f. Denote by D_i(f) the deterministic one-way communication complexity of f when Alice receives all inputs except those in B_i, and Bob the inputs in B_i. Then the size (number of leaves) of any formula for f consisting of arbitrary symmetric Boolean gates is at least Ω(Σ_i D_i(f) / log n).

The theorem is as good as the usual Nečiporuk bound except for the log-factor, and can hence be used to show lower bounds of up to Ω(n²/log² n) on the formula size of explicit functions like IndirectStorageAccess [20].

7 Time Bounded Garden-Hose Protocols

We now define a notion of time in garden-hose complexity.

Definition 23.

Given a garden-hose protocol P for computing a function f, and an input (x, y), we refer to the pipes that carry water in P on (x, y) as the wet pipes. Let T(P) denote the maximum number of wet pipes over all inputs (x, y).

The number of wet pipes on input (x, y) is equal to the length of the path the water takes and thus corresponds to the time the computation takes. Thus it makes sense to investigate protocols with bounded time T(P). Furthermore, the question is whether it is possible to simultaneously optimize T(P) and the number of pipes used.

Definition 24.

We define GH_k(f) to be the complexity of an optimal garden-hose protocol P for computing f such that for any input (x, y) the number of wet pipes is bounded by k.

As an example consider the Equality function (test whether x = y). The straightforward protocol that compares the inputs bit by bit has cost O(n) but needs time Θ(n) in the worst case (a concrete sketch of such a protocol is given below). On the other hand one can easily obtain a protocol with time 2 that has cost 2^n + 1: use 2^n pipes to communicate x to Bob. We have the following general lower bound.

Theorem 25.

For all Boolean functions f we have GH_k(f) ≥ 2^{D_k(f)/k − 1}, where D_k(f) is the deterministic communication complexity of f with at most k rounds (Alice starting).

Proof.

We rewrite the claim as D_k(f) ≤ k · (log GH_k(f) + 1).

Let P be the garden-hose protocol for f that achieves complexity GH_k(f) with at most k wet pipes on every input. The deterministic k-round communication protocol for f simulates P by simply following the flow of the water. In each round Alice or Bob (alternating) sends the name of the pipe currently used by the water in P, which takes at most ⌈log GH_k(f)⌉ bits per round. ∎

Thus for Equality we have for instance that GH_k(EQ) ≥ 2^{n/k − 1}. There is an almost matching upper bound of O(k · 2^{n/k}) by using k blocks of 2^{n/k} pipes to communicate blocks of n/k bits each.

We can easily deduce a time-cost tradeoff from the above: for Equality the product of time and cost is at least Ω(n²/log n), because for time k ≤ n/(2 log n) we get a bound of Ω(n²) on the size, whereas for larger k we can use that the size is always at least Ω(n).
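Returning to the Equality example at the beginning of this section, the following Python sketch (our own illustration; it uses roughly 4n pipes, and the protocol meant in the text may use fewer) makes the linear-cost, linear-time regime concrete. The pipe names and helper functions are choices made only for this example.

    # A sketch (an illustration, not necessarily the protocol meant in the
    # text) of a bit-by-bit garden-hose protocol for Equality with about 4n
    # pipes and worst-case time Theta(n).  For bit i we use pipes
    # ('P', i, 0) and ('P', i, 1) carrying the claim "x_i = b", a forward pipe
    # ('F', i) used after a successful comparison, and a reject pipe ('R', i)
    # that Alice leaves open so that mismatching inputs spill on her side.

    def alice_side(x):
        n = len(x)
        tap = ('P', 0, x[0])                       # tap -> announce x_0
        matching = {('F', i): ('P', i + 1, x[i + 1]) for i in range(n - 1)}
        return tap, matching                       # ('R', i) stay open at Alice

    def bob_side(y):
        n = len(y)
        matching = {('P', i, y[i]): ('F', i) for i in range(n - 1)}      # bit ok
        matching.update({('P', i, 1 - y[i]): ('R', i) for i in range(n)})  # mismatch
        return matching                            # ('P', n-1, y[n-1]) stays open

    def spills_at_bob(x, y):
        tap, alice = alice_side(x)
        bob = bob_side(y)
        sym = lambda m: {**m, **{v: k for k, v in m.items()}}
        alice, bob = sym(alice), sym(bob)
        side, pipe = 'A', tap
        while True:
            side = 'B' if side == 'A' else 'A'     # water crosses through `pipe`
            nxt = (bob if side == 'B' else alice).get(pipe)
            if nxt is None:
                return side == 'B'                 # open end: the water spills here
            pipe = nxt

    for x, y in [([0, 1, 1], [0, 1, 1]), ([0, 1, 1], [0, 0, 1]), ([1], [0])]:
        assert spills_at_bob(x, y) == (x == y)
    print("bit-by-bit Equality sketch OK")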

7.1 A Time-Size Hierarchy

The Pointer Jumping function is well studied in communication complexity. We describe a slight restriction of the problem in which the inputs are bijections between two sets of vertices.

Definition 26.

Let V_A and V_B be two disjoint sets of vertices with |V_A| = |V_B| = n.

Let F_A = { f_A : V_A → V_B | f_A is bijective } and F_B = { f_B : V_B → V_A | f_B is bijective }. For a pair of functions f_A ∈ F_A and f_B ∈ F_B define f(v) = f_A(v) for v ∈ V_A and f(v) = f_B(v) for v ∈ V_B.

Then v_0 = v and v_{t+1} = f(v_t) for t ≥ 0.

Finally, the pointer jumping function PJ_k(f_A, f_B) is defined to be the XOR of all bits in the binary name of v_k, where v_0 = v is a fixed vertex in V_A.

Round-communication hierarchies for PJ_k or related functions are investigated in [15]. Here we observe that PJ_k gives a time-size hierarchy in the garden-hose model. For simplicity we only consider the case where Alice starts.

Theorem 27.
  1. PJ_k can be computed by a garden-hose protocol with time O(k) and size O(k · n).

  2. Any garden-hose protocol for PJ_k that uses time at most k − 1 has size 2^{Ω(n/k³)} (for all k that are not too large, say k ≤ n^{1/4}).

We note that slightly weaker lower bounds hold for the randomized setting.

8 Randomized Garden-Hose Protocols

We now bring randomness into the picture and investigate its power in the garden-hose model. Buhrman et al. [4] have already considered protocols with public randomness. In this section we are mainly interested in the power of private randomness.

Definition 28.

Let RGH^{pub}(f) denote the minimum complexity of a garden-hose protocol for computing f, where the players have access to public randomness and the output is correct with probability 2/3 (over the randomness). Similarly, we can define RGH^{priv}(f), the cost of garden-hose protocols with access to private randomness only.

By standard fingerprinting ideas [10] we can observe the following.

Claim 1.

RGH^{pub}(EQ) = O(1).

Claim 2.

RGH^{priv}(EQ) = n^{O(1)}, and this is achieved by a constant time protocol.

Proof.

The second claim follows from Newman’s theorem [13], showing that any public coin protocol with communication cost c can be converted into a private coin protocol with communication cost c + O(log n) bits on inputs of length n, together with the standard public coin protocol for Equality and the protocol tree simulation of Theorem 9. ∎

Of course we already know that even the deterministic garden-hose complexity of Equality is Θ(n), hence the only thing achieved by the above protocol is the reduction in time complexity. Note that due to our result of the previous section computing Equality deterministically in constant time needs exponentially many pipes.

Buhrman et al. [4] have shown how to de-randomize a public coin protocol at the cost of increasing size by a factor of O(n), so the factor of n in the separation between public coin and deterministic protocols above is the best that can be achieved. This raises the question whether private coin protocols can ever be more efficient in size than the optimal deterministic protocol. We now show that there are no very efficient private coin protocols for Equality.

Claim 3.

RGH^{priv}(EQ) = Ω(√n / log n).

Proof.

To prove this we first note that R^{||}(f) = O(RGH^{priv}(f) · log RGH^{priv}(f)), where R^{||}(f) is the cost of randomized private coin simultaneous message protocols for f (Alice and Bob can simply send their connections to the referee). Hence, R^{||}(EQ) = O(RGH^{priv}(EQ) · log RGH^{priv}(EQ)), but Newman and Szegedy [14] show that R^{||}(EQ) = Ω(√n). ∎

9 Open Problems

  • We show that getting lower bounds on GH(f) larger than n² will be hard. But we know of no obstacles to proving super-linear lower bounds.

  • Possible candidates for quadratic lower bounds could be the Disjointness function with small set size (compared to the universe size n), and the IndirectStorageAccess function.

  • Consider the garden-hose matrix M_s as a communication matrix. How many distinct rows does M_s have? What is the deterministic communication complexity of M_s? The best upper bound is O(s log s), and the lower bound is Ω(s). An improved lower bound would give a problem for which the deterministic communication complexity is larger than the garden-hose complexity.

  • We have proved that RGH^{priv}(EQ) = Ω(√n / log n). Is it true that RGH^{priv}(EQ) = Θ(n)? Is there any problem where RGH^{priv}(f) is smaller than GH(f)?

  • It would be interesting to investigate the relation between the garden-hose model and memoryless communication complexity, i.e., a model in which Alice and Bob must send messages depending on their input and the message just received only. The garden-hose model is memoryless, but also reversible.

Acknowledgement

We thank an anonymous referee for pointing out a mistake in an earlier version of this paper.

Appendix A Appendix

a.1 Non-deterministic Communication

Proof of Theorem 8.

Consider a deterministic garden-hose protocol for f using s pipes. Maybe the most natural approach to simulate its computation by a non-deterministic communication protocol would be to guess the path that the water takes, and to verify this guess locally by Alice and Bob. There are, however, too many paths for this to lead to good bounds. Instead we use a coarser guess. For any given input (x, y), in a computation of the protocol the water traverses a set of pipes. We refer to these pipes as the wet pipes on (x, y). In general a set of wet pipes can correspond to several paths through the network, which must use only edges from the set.

In the non-deterministic protocol Alice guesses a set S of pipes that is supposed to be the set of wet pipes. Since the output is 1 if and only if the number of wet pipes is odd, the size of S immediately tells us whether S is a witness for 1-inputs or for 0-inputs.

Consider an even size set S. Alice computes the connections of the pipes on her side using her input x (as used in the garden-hose protocol). Her connections are consistent with S iff the tap is connected to a pipe in S, and the other pipes in S are all connected in pairs, except one, which is open. Note that none of the pipes in S may be connected to a pipe outside of S. Similarly, S is consistent with Bob’s connections (based on y), if all the pipes in S are paired up (no pipe in S is open and no pipe in S is connected to a pipe outside S).

For odd size S we use an analogous definition of consistency: now Alice has no open pipe in S and all pipes in S are paired up except the one connected to the tap, and Bob has all pipes in S paired up except one that is open.

Suppose that S is consistent with the connections defined by x and y. Denote by W the set of wet pipes, i.e., the pipes on the path the water takes in the garden-hose protocol. We claim that all the pipes in W are in S, and that the remaining pipes in S form cycles. If this is the case then the non-deterministic protocol is correct: since cycles have even length, subtracting them does not change whether the size is even or odd, and hence the sizes of S and W have the same parity, i.e., a consistent S determines the function value correctly. Also note that the communication complexity of the non-deterministic protocol is at most s + 1, since a subset of the s pipes can be communicated with s bits: Alice guesses an S that is consistent with her input and sends it to Bob, who accepts/rejects if S is also consistent with his input, otherwise he gives up (accepting/rejecting takes one additional bit of communication). Note that for partial functions no consistent S may exist for Alice to choose, but in that case she can give up without a result.

To establish correctness we have to show that all pipes in W are in S (and the remaining pipes in S form cycles). Clearly the starting pipe (the one connected to the tap) is in S by the definition of consistency. All remaining pipes in S on Bob’s and Alice’s side are either paired up or (for exactly one pipe) open. Hence we can follow the flow of water without leaving S. This implies that W is contained in S, and since removing W from S leaves no open pipes, all the remaining pipes in S must form a set of cycles. ∎
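The local consistency test used by Alice and Bob in the proof above can be summarized by the following Python sketch (our own illustration; the data format for matchings is an assumption of this example). Alice accepts a guessed set S only if the tap leads into S, no pipe of S is connected to a pipe outside S, and the number of open ends of S on her side matches the parity of |S|; Bob's check is the mirror image.

    # A sketch (our own illustration; the matching format is an assumption of
    # this example) of the local consistency test from the proof of Theorem 8.
    # A matching is a dict pairing pipe names; the tap connection is passed
    # separately on Alice's side.

    def _pairs(matching):
        conn = {}
        for a, b in matching.items():
            conn[a], conn[b] = b, a
        return conn

    def alice_consistent(S, tap_pipe, alice_matching):
        S = set(S)
        if tap_pipe not in S:
            return False                      # the tap must lead into S
        conn = _pairs(alice_matching)
        open_ends = 0
        for p in S:
            if p == tap_pipe:
                continue                      # this pipe is connected to the tap
            q = conn.get(p)
            if q is None:
                open_ends += 1                # open end of S on Alice's side
            elif q not in S:
                return False                  # S must not touch pipes outside S
        # even |S|: water must spill at Alice -> exactly one open end;
        # odd  |S|: water must spill at Bob   -> no open end on Alice's side
        return open_ends == (1 if len(S) % 2 == 0 else 0)

    def bob_consistent(S, bob_matching):
        S = set(S)
        conn = _pairs(bob_matching)
        open_ends = 0
        for p in S:
            q = conn.get(p)
            if q is None:
                open_ends += 1                # open end of S on Bob's side
            elif q not in S:
                return False
        # even |S|: all pipes of S paired up at Bob; odd |S|: one open end
        return open_ends == (0 if len(S) % 2 == 0 else 1)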

a.2 Garden-Hose and Protocol Trees

Proof of Theorem 9.

Given a protocol tree (with E edges) of a two-way communication protocol for a function f, we construct a garden-hose protocol with at most E pipes.

We describe the construction in a recursive way. Let v be any node of the protocol tree belonging to Alice, with children v_1, …, v_d belonging to Bob. In the protocol tree rooted at v a function f_v is computed. If none of the v_i are leaves, then we assume by induction that we can construct a garden-hose protocol P_i for each of the children, where P_i uses at most E_i many pipes, and E_i is the number of edges in the subtree of v_i. The P_i have the tap on Bob’s side. To find a garden-hose protocol for v, we use d additional pipes. Alice sends the water through pipe i to communicate the message corresponding to the edge to v_i. Furthermore the right end of pipe i is connected to the tap of a copy of P_i. The number of pipes used is at most the number of edges in the protocol tree. If one or two of the v_i are leaves, we use the same construction, except that for an accepting leaf we use one extra pipe that is open on Bob’s end, and for a rejecting leaf we just let the water spill on Alice’s side. It is easy to see by induction that the garden-hose protocol accepts on input (x, y) if and only if the protocol tree ends in an accepting leaf. ∎

a.3 One Spilling Pipe

Proof of Lemma 11.

Fix a protocol P that uses s pipes to compute f. In the protocol Alice makes the connections on her side based on her input x. Similarly Bob’s connections are based on his input y. Denote the set of pipes that are open on Alice’s side by O_A and the set of pipes that are open on Bob’s side by O_B.

In the new protocol P′ Alice and Bob have 3s pipes arranged into 3 blocks of s pipes each. Let’s call them B_1, B_2 and B_3. The main idea is to use B_1 to compute f and then use B_2 and B_3 to ‘un-compute’ (to remove the extra information provided by the multiple spilling pipes).

In the construction of P′ Alice and Bob make their connections on B_1, B_2 and B_3 separately, exactly the same way they did in P for s pipes. Alice then connects B_1’s tap-pipe to the tap and keeps the tap-pipes of B_2 and B_3 open. They then add the following connections: Alice connects every pipe in O_A of B_1 to the corresponding pipe in B_2, and Bob connects every pipe in O_B of B_1 to the corresponding pipe in B_3. Note that those pipe ends were open before they were connected, as they include all spilling pipes. B_1 now does not have any open pipes. The only pipes that will ever spill in B_2 and B_3 are their tap-pipes (there may be other open pipes but it is easy to see that they never spill). The tap-pipes of B_2 and B_3 are both on Alice’s side. Finally, Alice uses one more pipe, and connects the tap-pipe of B_3 to the new pipe. Figure 1 shows an example of the construction.

Figure 1: The Construction in Lemma 11

The size of the new protocol is exactly 3s + 1, and there is exactly one spilling pipe on each side, namely the tap-pipe of B_2 (on Alice’s side) and the extra pipe attached to the tap-pipe of B_3 (on Bob’s side), because the only other open pipes are the pipes of O_B in B_2 and the pipes of O_A in B_3. These cannot be reached by the water. All connections made are done by Alice and Bob alone. We now argue that the protocol computes f correctly.

Notice that if f(x, y) = 0, then water flows through B_1 and ends at one of the pipes in O_A. This pipe is connected to the corresponding pipe in B_2. So the water follows the same path backwards in B_2 until it reaches the tap-pipe of B_2. This pipe is open on Alice’s side. Hence water spills on Alice’s side making the output 0 (and it spills at the tap-pipe of B_2).

Similarly, if f(x, y) = 1, water flows through B_1 and ends at one of the pipes in O_B on Bob’s side. Since this pipe is connected to the corresponding pipe in B_3, the water flows backwards in B_3 until it reaches the tap-pipe of B_3. This is on Alice’s side and connected to the extra pipe. This makes the water spill on Bob’s side as desired. ∎

a.4 Permutation Branching Programs to Garden-Hose

Proof of Lemma 12.

In Lemma 11 we have seen that we can turn a garden-hose protocol with multiple spilling pipes into a protocol with exactly one spilling pipe per side. Such a protocol acts exactly like a node in a branching program, except that its decision is based on the value of a function f_i(x, y) rather than a single variable. This observation suffices to simulate decision trees, but in a branching program nodes can have in-degree larger than 1, and we cannot pump water from several sources into a single garden-hose protocol.

We now show how to construct a garden-hose protocol for h = g(f_1, …, f_k). Given a loose permutation branching program for g of size m, we show how to construct a garden-hose protocol.

Let G denote the graph of the branching program. G consists of layers L_0, …, L_l, where the first layer has just one node (the source), the last layer 2 nodes (the sinks), and all intermediate layers have w nodes, so the size is m = O(w · l). Layer j queries some variable of g, whose value on the actual input is given by one of the functions f_{i(j)}(x, y). The 1-edges between L_j and L_{j+1} form an injective mapping, and so do the 0-edges.

The construction goes by replacing the nodes of each layer by garden-hose protocols for f_{i(j)}. Each layer uses 2w copies of the single-spilling-pipe protocol for f_{i(j)} obtained from Lemma 11, arranged in two sub-layers. We refer to these copies as the upper and lower copies, each numbered from 1 to w (and implicitly by their level). Essentially we need the first sub-layer to compute f_{i(j)}(x, y), and the second sub-layer to un-compute it, since we only want to remember the name of the current vertex in G, not the value of f_{i(j)}(x, y).

If the 1-edge of the branching program leads from node a of layer j to node b of layer j + 1, then we connect the 1-spill pipe of the upper a-th copy of level j to the 1-spill pipe of the lower b-th copy of level j. Similarly we make the connections for the 0-spill pipes (on Alice’s side).

To connect layers we connect the tap-pipe of each lower copy on a level to the tap-pipe of the upper copy with the same number on the next level. On the first level the tap-pipe of the upper copy corresponding to the source node is connected to Alice’s tap according to the branching program.

Figure 2: Permutation Branching Program to Garden-Hose Protocol Construction

Figure 2 shows an example of the construction, where each block is a garden-hose protocol computing one of the f_i.

For every edge that goes to the accepting sink of the branching program we use one pipe that is connected to the corresponding upper copy on Alice’s side, if the corresponding spilling pipe is on Alice’s side. Otherwise we leave the spilling pipe open. We proceed analogously for edges to the rejecting sink.

The size of the garden-hose protocol is at most O(m · max_i GH(f_i)), since we use O(w · l) = O(m) copies, each of size at most 3 · max_i GH(f_i) + 1, plus O(m) extra pipes. ∎

a.5 A Permutation Branching Program for Majority

Proof of Lemma 15.

In 1997, Sinha and Thathachar [19] described a branching program of almost linear size for computing Majority. Unfortunately the branching program they construct is not a permutation branching program. Thus it is not immediately clear how to convert their construction into a garden-hose protocol.

To describe a permutation branching program for Majority we first need permutation branching programs for computing the sum of the inputs mod p for small p. Denote by MOD_p the (non-Boolean) function Σ_i x_i mod p. The following is easy to see.

Claim 4.

MOD_p can be computed by a permutation branching program of width p (and length n) so that each input x with Σ_i x_i = z, when starting on the top level at node a, ends at node (a + z) mod p on the last level.

We call this permutation branching program a modulus-p box. The join of two modulus-p resp. modulus-q boxes is a new branching program, in which bottom level nodes of the first box are identified in some way with top level nodes of the second. We employ the following main technical result of Sinha and Thathachar [19], which describes an approximate divider.

Fact 2.

[19] Fix ℓ, the length of an interval I of natural numbers. There are prime numbers p_1 < ⋯ < p_k, where each p_i = O(log ℓ), and a number P = p_1 ⋯ p_k with P ≥ ℓ. Consider inputs x such that Σ_i x_i ∈ I.

Then there is a way to join modulus-p_1, …, modulus-p_k boxes (in this order) into a single branching program, such that all inputs reaching a sink node named z mod P for some z (in the last box) satisfy that Σ_i x_i belongs to one of a collection of sub-intervals of I whose length is only a small fraction of ℓ. The intervals overlap, and each point of I is contained in only a few of the intervals.

Furthermore, the connections between the boxes are such that every output node of the i-th box is connected to one input node of the (i+1)-st box, and every input node of the (i+1)-st box is connected to at most one output node of the i-th box.

The above differs from the presentation in [19] in that we require that the p_i are increasing, so that we can join the boxes without creating nodes with fan-in larger than 1. This means that every box after the first has a few input nodes that are not used.

Note that our goal is to know whether Σ_i x_i is at least n/2 or not. Effectively this means there are three kinds of bottom layer nodes in the branching program constructed above (applied with I = {0, …, n}): those where we know that all inputs reaching the sink have Σ_i x_i < n/2, at which point we can reject, those where Σ_i x_i ≥ n/2, where we accept, and undecided nodes. A bottom layer node is undecided if the interval of possible values of Σ_i x_i reaching that sink contains n/2. At undecided nodes the interval of possible values of Σ_i x_i has been reduced to a small fraction of the original interval. Furthermore, there are only few undecided nodes (since n/2 lies in only that many intervals), but the intervals for those nodes stretch only slightly beyond n/2 on both sides, hence the union of the intervals of all undecided bottom layer nodes is again a much shorter interval. Hence, this construction can be iterated (a logarithmic number of iterations suffices) to decide Majority on all inputs.

Now we need to argue that the whole construction can be made into a permutation branching program. Obviously any mod-p box can be computed by a strict permutation BP of width p and length n. The connections between the boxes are injective mappings. Hence the whole construction for the above fact can be made into a permutation branching program, where dummy nodes need to be added to bring all layers to the same width (the width of the largest box).

The branching program for Majority is then an iteration of the above construction of permutation branching programs. In each level of the iteration some nodes accept, some reject, and some continue on a smaller interval. For all undecided sink nodes we can assume that they continue using the same (short) interval. This continues until the intervals are very short, at which point the problem can be solved by counting.

To do the same iteration in a permutation branching program we need to do the following. We want to turn a building block of the iteration (a permutation branching program as in Fact 2) into a permutation branching program that has only 3 sinks reached by inputs (plus some sinks that are never reached). To do this we first use the original program, followed by 3 copies of the same program in reverse. We connect the undecided sinks of the upper program to the corresponding vertices in the first reversed lower program, and similarly the accepting and rejecting sinks to the corresponding vertices of the other two reversed programs. Then each input that is undecided by the block will end up at the node corresponding to the starting node of the first reverse copy. Similarly inputs that are accepted by the block will leave the second reverse copy at the node corresponding to its starting node, etc. Using dummy nodes this program can be extended to a permutation branching program, with the width increased by a factor of 3 and the length by a factor of 2. Each input leads to one of three nodes. We can now connect the undecided sink of the above construction to the starting vertex of the next block. To turn the whole construction into a strict permutation branching program the accepting and rejecting bottom vertices are connected to extra vertices that remember at which layer/vertex the inputs were accepted/rejected.

The whole construction yields a permutation branching program for Majority. The length of the program is O(n · polylog(n)): there are O(log n) iterations, each joining boxes of length n. Each level of the program has width at most polylog(n), coming from the width of the mod-p boxes and the constant factors needed to turn things into a permutation BP (plus vertices for the accepting/rejecting paths). Hence the total size of the program is O(n · polylog(n)). ∎

a.6 Lower Bound for Formulae with Symmetric Gates

Proof of Theorem 22.

Fix f, the partition B_1, …, B_k, and any formula F computing f that consists of symmetric gates only. Define F_i to be the subtree of F whose leaves are the variables in B_i (its root is the output gate of the formula), and denote by L_i the number of leaves of F_i. Then the size of F is Σ_i L_i. We will show that L_i = Ω(D_i(f) / log n), which proves the theorem.

Alice has all the variables except those in B_i, which go to Bob. Alice (and Bob) have to evaluate all the gates in F_i (this includes the root). They will evaluate the gates in (reverse) topological order. All the leaves of F_i are known to Bob. Denote by P_i the set of paths in F_i that start at a leaf or a gate of fan-in at least 2 inside F_i, end at a gate of fan-in at least 2 inside F_i, and have no such gates in between. Then |P_i| = O(L_i). Also denote by G_i the set of gates in F_i that have fan-in larger than 1 inside F_i; again |G_i| = O(L_i). We will show that the communication is at most O(|P_i| + |G_i| · log n).

Bob goes over the paths and gates in F_i in reverse topological order (i.e., from the leaves up). Let v_1, …, v_r be the vertices of some path in P_i in reverse topological order (i.e., the vertex closest to the root is last). Denote by g the gate at v_{r−1}, the last vertex that has fan-in 1 inside F_i. Alice can tell Bob which function is computed at v_{r−1} in terms of the value already computed (by Bob) at v_1. This takes 2 bits. Hence the total communication to evaluate all paths in P_i is O(|P_i|). For each gate in G_i there are at least 2 inputs inside F_i that have already been computed by Bob. Since the gate is symmetric, it is sufficient for Alice to say how many of her inputs to the gate evaluate to 1, which takes at most O(log n) bits unless the formula is larger than poly(n). So the total communication is at most O(L_i · log n), and L_i = Ω(D_i(f) / log n), unless F has size larger than the claimed bound already. ∎

a.7 Pointer Jumping

Proof Sketch for Theorem 27.

To show part 1) we use a protocol with k · n pipes, organized into k blocks of n pipes each (one pipe per vertex). If Alice has input f_A, then she connects the tap to pipe f_A(v_0) in block 1. For all even numbered blocks j < k she connects the i-th pipe in block j to pipe f_A(i) in block j + 1. Bob connects, for all odd numbered blocks j < k, the i-th pipe in block j to pipe f_B(i) in block j + 1.

Assume that k is odd. Then the k-th vertex of the path is on Bob’s side. If the XOR of the bits of its name is 0, the water needs to spill on Alice’s side. Hence, in block k − 1, for all pipes i for which the name of f_A(i) has even parity, Alice leaves the pipe open instead of connecting it to a pipe in block k. She does make the connections as described above for all pipes i in block k − 1 for which the name of f_A(i) has odd parity.

Similarly, if k is even, then the last vertex is on Alice’s side, and if the parity of its name is odd the spill needs to be on Bob’s side. Hence Bob skips the connections between blocks k − 1 and k for all pipes i in block k − 1 for which the name of f_B(i) has odd parity.

Note that f_A and f_B are bijective, hence the connections made indeed form matchings and are legal. In total we use k · n pipes, and on every input at most k pipes get wet. It is clear that the garden-hose protocol described above computes PJ_k.

Now we turn to part 2. Take any garden-hose protocol for PJ_k with time at most k − 1 that uses s pipes. Due to the simulation in Theorem 25 we get a (k − 1)-round communication protocol (Alice starting) with communication at most (k − 1)(log s + 1). But Nisan and Wigderson [15] show that such protocols need communication Ω(n/k²) (for k that are not too large). Hence s = 2^{Ω(n/k³)}.

The difficulty in applying their result is that Nisan and Wigderson analyze the complexity of PJ_k for uniformly random inputs, not random bijective inputs f_A resp. f_B. Hence we need to make some changes to their proof. The changes needed to make the argument work are minor, however: the uniform distribution on pairs of bijective functions is still a product distribution, and as long as the communication so far is small it is still true that at any vertex in the protocol tree the information about the next pointer is a small constant. The main difference to the original argument is that conditioning on the previous path introduces information about the next pointer, due to the fact that vertices on the path cannot be used again. This can easily be subsumed into the information given via the previous communication. ∎

Footnotes

  1. This work is funded by the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012-T3-1-009) and by the Singapore National Research Foundation.
  2. It should be mentioned that even though Alice and Bob choose to not communicate in any other way, their intentions are not hostile and neither will deviate from a previously agreed upon protocol.

References

  1. D.A. Barrington. Width-3 permutation branching programs, 1985. Technical report, MIT/LCS/TM-293.
  2. P. Beame, M. Tompa, and P. Yan. Communication-space tradeoffs for unrestricted protocols. SIAM Journal on Computing, 23(3):652–661, 1994. Earlier version in FOCS’90.
  3. Joshua Brody, Shiteng Chen, Periklis A. Papakonstantinou, Hao Song, and Xiaoming Sun. Space-bounded communication complexity. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 159–172, 2013.
  4. Harry Buhrman, Serge Fehr, Christian Schaffner, and Florian Speelman. The garden-hose model. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 145–158. ACM, 2013.
  5. Well Y Chiu, Mario Szegedy, Chengu Wang, and Yixin Xu. The garden hose complexity for the equality function. arXiv:1312.7222, 2013.
  6. O. Giel. Branching program size is almost linear in formula size. Journal of Computer and System Sciences, 63(2):222–235, 2001.
  7. H. Klauck. Quantum and classical communication-space tradeoffs from rectangle bounds. In Proceedings of FSTTCS, 2004.
  8. H. Klauck. One-Way Communication Complexity and the Nečiporuk Lower Bound on Formula Size. SIAM J. Comput., 37(2):552–583, 2007.
  9. H. Klauck, R. Špalek, and R. de Wolf. Quantum and classical strong direct product theorems and optimal time-space tradeoffs. SIAM Journal on Computing, 36(5):1472–1493, 2007. Earlier version in FOCS’04. quant-ph/0402123.
  10. Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997.
  11. K.J. Lange, P. McKenzie, and A. Tapp. Reversible space equals deterministic space. Journal of Computer and System Sciences, 2(60):354–367, 2000.
  12. E. I. Nečiporuk. A boolean function. In Soviet Mathematics Doklady, volume 7, 1966.
  13. I. Newman. Private vs. common random bits in communication complexity. Information Processing Letters, 39(2):67–71, 1991.
  14. Ilan Newman and Mario Szegedy. Public vs. private coin flips in one round communication games (extended abstract). In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC ’96, pages 561–570, 1996.
  15. Noam Nisan and Avi Wigderson. Rounds in communication complexity revisited. SIAM J. Comput., 22(1):211–219, February 1993.
  16. C. H. Papadimitriou and M. Sipser. Communication complexity. Journal of Computer and System Sciences, 28(2):260–269, 1984. Earlier version in STOC’82.
  17. P. Papakonstantinou, D. Scheder, and H. Song. Overlays and limited memory communication mode(l)s. In Proc. of the 29th Conference on Computational Complexity, 2014.
  18. I. S. Sergeev. Upper bounds for the formula size of symmetric boolean functions. Russian Mathematics, Iz. VUZ, 58(5):30–42, 2014.
  19. Rakesh Kumar Sinha and Jayram S Thathachar. Efficient oblivious branching programs for threshold and mod functions. Journal of Computer and System Sciences, 55(3):373–384, 1997.
  20. I. Wegener. The Complexity of Boolean Functions. Wiley-Teubner Series in Computer Science, 1987.