New Bounds for the Garden-Hose Model
Abstract
We show new results about the garden-hose model. Our main results include improved lower bounds based on nondeterministic communication complexity (leading to the previously unknown $\Theta(n)$ bounds for Inner Product mod 2 and Disjointness), as well as an upper bound for the Distributed Majority function (previously conjectured to have quadratic complexity). We show an efficient simulation of formulae made of AND, OR, XOR gates in the garden-hose model, which implies that lower bounds on the garden-hose complexity $GH(f)$ of order $\Omega(n^{2+\epsilon})$ will be hard to obtain for explicit functions. Furthermore we study a time-bounded variant of the model, in which even modest savings in time can lead to exponential lower bounds on the size of garden-hose protocols.
1 Introduction
1.1 Background: The Model
Recently, Buhrman et al. [4] proposed a new measure of complexity for finite Boolean functions, called garden-hose complexity. This measure can be viewed as a type of distributed space complexity, and while its motivation is mainly in applications to position-based quantum cryptography, the playful definition of the model is quite appealing in itself. Garden-hose complexity can be viewed as a natural measure of space, in a situation where two players with private inputs compute a Boolean function cooperatively. Space-bounded communication complexity has been investigated before [2, 7, 9] (usually for problems with many outputs), and recently Brody et al. [3] have studied a related model of space-bounded communication complexity for Boolean functions (see also [17]). In this context the garden-hose model can be viewed as a memoryless model of communication that is also reversible.
To describe the garden-hose model let us consider two neighbors, Alice and Bob. They own adjacent gardens which happen to have empty water pipes crossing their common boundary. These pipes are the only means of communication available to the two. Their goal is to compute a Boolean function on a pair of private inputs, using water and the pipes across their gardens as a means of communication.
A garden-hose protocol works as follows: there are $s$ shared pipes. Alice takes some pieces of hose and connects pairs of the open ends of the pipes. She may keep some of the ends open. Bob acts in the same way for his ends of the pipes. The connections Alice and Bob place depend on their local inputs $x$ and $y$, and we stress that every end of a pipe is only connected to at most one other end of a pipe (meaning no Y-shaped pieces of hose may be used to split or combine flows of water). Finally, Alice connects a water tap to one of those open ends on her side and starts the water. Based on the connections of Alice and Bob, water flows back and forth through the pipes and finally ends up spilling on one side.
If the water spills on Alice’s side we define the output to be 0. Otherwise, the water spills on Bob’s side and the output value is 1. It is easy to see that, due to the way the connections are made, the water must eventually spill on one of the two sides, since the water’s path cannot run into a cycle.
Note that the pipes can be viewed as a communication channel that can transmit $\lceil \log s \rceil$ bits, and that the garden-hose protocol is memoryless, i.e., regardless of the previous history, water from pipe $i$ always flows to pipe $j$ if those two pipes are connected. Furthermore computation is reversible, i.e., one can follow the path taken by the water backwards (e.g. by sucking the water back).
Buhrman et al. [4] have shown that it is possible to compute every function $f$ by playing a garden-hose game. A garden-hose protocol consists of the scheme by which Alice chooses her connections depending on her private input $x$ and by which Bob chooses his connections depending on his private input $y$. Alice also chooses the pipe that is connected to the tap. The protocol computes a function $f$ if for all inputs with $f(x,y)=0$ the water spills on Alice’s side, and for all inputs with $f(x,y)=1$ the water spills on Bob’s side.
The size of a garden-hose protocol is the number of pipes used. The garden-hose complexity $GH(f)$ of a function $f$ is the minimum number of pipes needed in any garden-hose game that computes the value of $f(x,y)$ for all $x,y$ such that $f(x,y)$ is defined.
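To make the definitions concrete, here is a minimal simulation of a garden-hose protocol (our own illustration, not from the paper): matchings are given as involution dictionaries, and the simulator follows the water until it spills. The 3-pipe protocol `eq1_protocol` for Equality of two single bits is a hypothetical toy example.

```python
def run_garden_hose(tap_pipe, alice_matching, bob_matching):
    """Follow the water.  The matchings are involution dictionaries pairing
    pipe ends on each side (if m[a] == b then m[b] == a); an end missing from
    the dictionary is open.  Returns 0 if the water spills on Alice's side,
    1 if it spills on Bob's side."""
    pipe, side = tap_pipe, "bob"   # water enters tap_pipe, first exits at Bob
    while True:
        matching = bob_matching if side == "bob" else alice_matching
        if pipe not in matching:                 # an open end: water spills
            return 1 if side == "bob" else 0
        pipe = matching[pipe]                    # a hose routes the water on
        side = "alice" if side == "bob" else "bob"

def eq1_protocol(x, y):
    # Equality of single bits with 3 pipes: Alice attaches the tap to pipe
    # x+1; Bob leaves pipe y+1 open (equal bits spill on his side) and routes
    # the other pipe back through pipe 3, which Alice leaves open.
    other = 2 if y == 0 else 1
    return run_garden_hose(tap_pipe=x + 1,
                           alice_matching={},
                           bob_matching={other: 3, 3: other})
```

On equal bits the water makes one crossing and spills on Bob’s side (output 1); on unequal bits it is routed back and spills on Alice’s side (output 0).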
The garden-hose model is originally motivated by an application to quantum position-verification schemes [4]. In this setting the position of a prover is verified via communications between the prover and several verifiers. An attack on such a scheme is performed by several colluding provers, none of which is in the claimed position. [4] proposes a protocol for position-verification that depends on a function $f$, and a certain attack on this scheme requires the attackers to share as many entangled qubits as the garden-hose complexity of $f$. Hence all $f$ with low garden-hose complexity are not suitable for this task, and it becomes desirable to find explicit functions with large garden-hose complexity.
Buhrman et al. [4] prove a number of results about the garden-hose model:

- Deterministic one-way communication complexity can be used to show lower bounds of up to $\Omega(n/\log n)$ for many functions.

- For the Equality problem they refer to a linear lower bound shown by Pietrzak (the proof implicitly uses the fooling set technique from communication complexity [10] [personal communication]).

- They argue that superpolynomial lower bounds for the garden-hose complexity of a function imply that the function cannot be computed in Logspace, making such bounds hard to prove for ‘explicit’ functions.

- They define randomized and quantum variants of the model and show that randomness can be removed at the expense of multiplying size by a factor of $O(n)$ (for the quantum case larger gaps are known).

- Via a counting argument it is easy to see that most Boolean functions need size $2^{\Omega(n)}$.
1.2 Our Results
We study garden-hose complexity and establish several new connections with well-studied models like communication complexity, permutation branching programs, and formula size.
We start by showing that nondeterministic communication complexity gives lower bounds on the garden-hose complexity of any function $f$. This improves the previously known lower bounds for several important functions like Inner Product and Disjointness to $\Omega(n)$.
We observe that any two-way deterministic communication protocol can be converted into a garden-hose protocol whose complexity is upper bounded by the size of the protocol tree of the communication protocol.
We then turn to comparing the model to another nonuniform notion of space complexity, namely branching programs. We show how to convert any permutation branching program into a garden-hose protocol with only a constant factor loss in size.
The most important application of this simulation is that it allows us to find a garden-hose protocol for the Distributed Majority function, $DMAJ(x,y)=1$ iff $\sum_{i=1}^{n} x_i \cdot y_i \ge n/2$, that has near-linear size $O(n\,\mathrm{polylog}(n))$, disproving the conjecture in [4] that this function has complexity $\Omega(n^2)$.
Using the garden-hose protocols for Majority, Parity, AND, OR, we show upper bounds on the composition of functions with these.
We then show how to convert any Boolean formula with AND, OR, XOR gates into a garden-hose protocol with a small loss in size. In particular, any formula of size $s$ consisting of arbitrary fanin-2 gates can be simulated by a garden-hose protocol of size $O(s^{1+\epsilon})$, for any constant $\epsilon>0$. This result strengthens the previous observation that explicit superpolynomial lower bounds for $GH(f)$ will be hard to show: even bounds of $\Omega(n^{2+\epsilon})$ would improve on the longstanding best lower bounds on formula size due to Nečiporuk from 1966 [12]. We can also simulate formulae including a limited number of Majority gates of arbitrary fanin, so one might worry that even superlinear lower bounds could be difficult to prove. We argue, however, that for formulae using arbitrary symmetric gates we can still get near-quadratic lower bounds using a Nečiporuk-type method. Nevertheless we have to leave superlinear lower bounds on the garden-hose complexity as an open problem.
Next we define a notion of time in garden-hose protocols and prove that for any function $f$, if we restrict the number of times water can flow through pipes to some value $k$, we have $GH_k(f) \ge 2^{\Omega(D_k(f)/k)}$, where $GH_k$ denotes the time-bounded garden-hose complexity, and $D_k(f)$ the $k$-round deterministic communication complexity. This result leads to strong lower bounds on the time-bounded complexity of e.g. Equality, and to a time-hierarchy based on the pointer jumping problem.
Finally, we further investigate the power of randomness in the garden-hose model by considering private coin randomness ([4] consider only public coin randomness).
1.3 Organization
Most proofs are deferred to the appendix.
2 Preliminaries
2.1 Definition of the Model
We now describe the garden-hose model in graph terminology. In a garden-hose protocol with $s$ pipes there is a set $V = \{1,\ldots,s\}$ of vertices plus one extra vertex, the tap $t$.
Given their inputs $x$ and $y$, Alice and Bob want to compute $f(x,y)$. Depending on $x$, Alice connects some of the vertices in $V \cup \{t\}$ in pairs by adding edges that form a matching $E_A(x)$ among the vertices in $V \cup \{t\}$. Similarly, depending on $y$, Bob connects some of the vertices in $V$ in pairs by adding edges that form a matching $E_B(y)$ in $V$.
Notice that after they have added the additional edges, a path starting from vertex $t$ is formed in the graph $(V \cup \{t\},\, E_A(x) \cup E_B(y))$. Since no vertex has degree larger than 2, this path is unique and ends at some vertex. We define the output of the game to be the parity of the length of the path starting at $t$. For instance, if the tap is not connected the path has length 0, and the output is 0. If the tap is connected to another vertex, and that vertex is the end of the path, then the path has length 1 and the output is 1, etc.
A garden-hose protocol of size $s$ for $f$ is a mapping from the inputs $x$ to matchings $E_A(x)$ among $V \cup \{t\}$ together with a mapping from the inputs $y$ to matchings $E_B(y)$ among $V$. The protocol computes $f$ if for all $(x,y)$ the path has even length iff $f(x,y)=0$. The garden-hose complexity $GH(f)$ of $f$ is the smallest $s$ such that a garden-hose protocol of size $s$ exists that computes $f$.
We note that one can form a matrix $M_s$ that has rows labeled by all of Alice’s matchings, and columns labeled by Bob’s matchings, and contains the parity of the corresponding path lengths. A function $f$ has garden-hose complexity at most $s$ iff its communication matrix is a submatrix of $M_s$. $M_s$ is called the garden-hose matrix for size $s$.
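For tiny $s$ the garden-hose matrix $M_s$ can be built explicitly by enumerating all partial matchings on both sides and recording the path parities. The following sketch (our own, with hypothetical helper names) does this for $s=2$:

```python
def partial_matchings(elems):
    """Yield every partial matching of elems as an involution dictionary."""
    elems = list(elems)
    if not elems:
        yield {}
        return
    first, rest = elems[0], elems[1:]
    for m in partial_matchings(rest):          # first stays unmatched
        yield m
    for i, partner in enumerate(rest):         # or first is paired up
        for m in partial_matchings(rest[:i] + rest[i + 1:]):
            m = dict(m)
            m[first], m[partner] = partner, first
            yield m

def path_parity(alice_m, bob_m, tap=0):
    """Parity of the unique path starting at the tap vertex (vertex 0)."""
    length, node, turn = 0, tap, "alice"
    while True:
        m = alice_m if turn == "alice" else bob_m
        if node not in m:
            return length % 2
        node, length = m[node], length + 1
        turn = "bob" if turn == "alice" else "alice"

s = 2
alice_strategies = list(partial_matchings(range(0, s + 1)))  # tap is vertex 0
bob_strategies = list(partial_matchings(range(1, s + 1)))
M = [[path_parity(a, b) for b in bob_strategies] for a in alice_strategies]
```

For $s=2$ Alice has 4 strategies (the partial matchings of $\{t,1,2\}$) and Bob has 2, so $M_2$ is a $4\times 2$ matrix.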
2.2 Communication Complexity, Formulae, Branching Programs
Definition 1.
Let $f : X \times Y \to \{0,1\}$. In a communication complexity protocol two players Alice and Bob receive inputs $x \in X$ and $y \in Y$. In the protocol the players exchange messages in order to compute $f(x,y)$. Such a protocol is represented by a protocol tree, in which vertices, alternating by layer, belong to Alice or to Bob, edges are labeled with messages, and leaves either accept or reject. See [10] for more details. The communication matrix $M_f$ is the matrix containing $f(x,y)$ in row $x$ and column $y$.
We say a protocol correctly computes the function $f$ if for all $(x,y)$ the output of the protocol is equal to $f(x,y)$. The communication complexity of a protocol is the maximum number of bits exchanged over all $(x,y)$.
The deterministic communication complexity $D(f)$ of a function $f$ is the complexity of an optimal protocol that computes $f$.
Definition 2.
The nondeterministic communication complexity $N(f)$ of a Boolean function $f$ is the length of the communication in an optimal two-player protocol in which Alice and Bob can make nondeterministic guesses, and there are three possible outputs (0, 1, and ‘give up’). For each $(x,y)$ with $f(x,y)=1$ there is a guess that will make the players accept, but there is no guess that will make the players reject, and vice versa for inputs with $f(x,y)=0$.
Note that the above is the two-sided version of nondeterministic communication complexity. It is well known [10] that $N(f) \le D(f) \le 2^{O(N(f))}$, and that these inequalities are tight.
Definition 3.
In a public coin randomized protocol for $f$ the players have access to a public source of random bits. For all inputs it is required that the protocol gives the correct output with probability $1-\epsilon$ for some constant $\epsilon < 1/2$. The public coin randomized communication complexity of $f$, $R^{pub}(f)$, is the complexity of the optimal public coin randomized protocol. Private coin protocols are defined analogously (players now have access only to private random bits), and their complexity is denoted by $R(f)$.
Definition 4.
The deterministic communication complexity of protocols with at most $k$ messages exchanged, starting with Alice, is denoted by $D_k(f)$.
Definition 5.
In a simultaneous message passing protocol, both Alice and Bob send messages to a referee. The referee, based on the messages received, computes the output. The simultaneous communication complexity of a function $f$, denoted $R^{||}(f)$, is the cost of the best simultaneous protocol that computes the function using private randomness and error 1/3.
Next we define Boolean formulae.
Definition 6.
A Boolean formula is a Boolean circuit in which every node has fanout 1 (except the output gate). A Boolean formula of depth $d$ is then a tree of depth $d$. The nodes are labeled by gate functions from a family of allowed gate functions, e.g. the class of the 16 possible functions of the form $g:\{0,1\}^2 \to \{0,1\}$ in case the fanin is restricted to 2. Another interesting class of gate functions is the class of all symmetric functions (of arbitrary fanin). The formula size of a function $f$ (relative to a class of gate functions) is the smallest number of leaves in a formula computing $f$.
Finally, we define branching programs. Our definition of permutation branching programs is extended in a slightly nonstandard way.
Definition 7.
A branching program is a directed acyclic graph with one source node and two sink nodes (labeled with 0 and 1). The source node has indegree 0. The sink nodes have outdegree 0. All non-sink nodes are labeled by variables $x_i$ and have outdegree 2. The computation on an input $x$ starts from the source node and, depending on the value of $x_i$ at a node, either moves along the left outgoing edge or the right outgoing edge of that node. An input $x$ is accepted iff the path defined by $x$ in the branching program leads to the sink node labeled 1. The length of the branching program is the maximum length of any path, and the size is the number of nodes.
A layered branching program of length $\ell$ is a branching program where all non-sink nodes (except the source) are partitioned into $\ell$ layers. All the nodes in the same layer query the same variable $x_i$, and all outgoing edges of the nodes in a layer go to the nodes in the next layer or directly to a sink. The width of a layered branching program is defined to be the maximum number of nodes in any layer of the program. We consider the starting node to be in layer 0 and the sink nodes to be in layer $\ell+1$.
A permutation branching program is a layered branching program, where each layer has the same number of nodes, and if $x_i$ is queried in layer $j$, then the edges labeled with 0 between layers $j$ and $j+1$ form an injective mapping from layer $j$ to layer $j+1$ (and so do the edges labeled with 1). Thus, for permutation branching programs, if we fix the value of $x$, each node has indegree at most 1.
We call a permutation branching program strict if there are no edges to the sinks from internal layers. This is the original definition of permutation branching programs. Programs that are not strict are also referred to as loose for emphasis.
We denote by $PBP(f)$ the minimal size of a permutation branching program that computes $f$.
We note that simple functions like AND, OR can easily be computed by linear size loose permutation branching programs of width 2, something that is not possible for strict permutation branching programs [1].
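The loose programs mentioned above can be illustrated concretely. The sketch below (our own encoding, with hypothetical names) evaluates a layered branching program and gives a loose permutation program for AND that exits early to the reject sink (here even width 1 suffices), plus a strict width-2 permutation program for XOR:

```python
def eval_layered_bp(layers, final, x):
    """layers: list of (var_index, edges); edges[node] = (on_0, on_1), where a
    target is a next-layer node or a sink 'acc'/'rej' (loose programs may jump
    to a sink early).  final: map from last-layer nodes to sinks."""
    node = 0  # the source node
    for i, edges in layers:
        node = edges[node][x[i]]
        if node in ("acc", "rej"):
            break
    else:
        node = final[node]
    return 1 if node == "acc" else 0

def and_bp(n):
    # loose permutation BP for AND: reading x_i = 0 exits to the reject sink;
    # the single remaining edge per value is trivially injective.
    return [(i, {0: ("rej", 0)}) for i in range(n)], {0: "acc"}

def xor_bp(n):
    # strict width-2 permutation BP for XOR: x_i = 1 swaps the two nodes
    # (a permutation of the layer), x_i = 0 fixes them.
    return [(i, {0: (0, 1), 1: (1, 0)}) for i in range(n)], {0: "rej", 1: "acc"}
```

Note how the early exit to a sink is exactly what a strict permutation program forbids, which is why AND needs the loose variant.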
3 Garden-Hose Protocols and Communication Complexity
3.1 Lower Bound via Nondeterministic Communication
In this section we show that nondeterministic communication complexity can be used to lower bound $GH(f)$. This bound is often better than the bound shown in [4], which cannot be larger than $O(n/\log n)$.
Theorem 8.
$GH(f) \ge N(f) - 1$.
The main idea is that a nondeterministic protocol that simulates the garden-hose game can guess the set of pipes that are used on the path taken on input $(x,y)$, instead of the path itself, reducing the complexity of the protocol. The set that is guessed may be a superset of the actually used pipes, introducing ambiguity. Nevertheless we can make sure that the additionally guessed pipes form cycles and are thus irrelevant.
As an application consider the function $IP(x,y) = \sum_i x_i y_i \bmod 2$. It is well known that $N(IP) = \Omega(n)$ [10], hence we get that $GH(IP) = \Omega(n)$. The same bound holds for Disjointness. These bounds improve on the previously known bounds for these functions [4]. Furthermore note that the fooling set technique gives only much weaker bounds for the complexity of $IP$ (see [10]), so the technique previously used to get a linear lower bound for Equality fails for $IP$.
3.2 $GH(f)$ Is at Most the Size of a Protocol Tree for $f$
Buhrman et al. [4] show that any one-way communication protocol with complexity $c$ can be converted into a garden-hose protocol with $O(2^c)$ pipes. One-way communication complexity can be much larger than two-way communication [16].
Theorem 9.
For any function $f$, the garden-hose complexity $GH(f)$ is upper bounded by the number of edges in a protocol tree for $f$.
The construction is better than the previous one in [4] for problems for which one-way communication is far from the many-round communication complexity.
4 Relating Permutation Branching Programs and the Garden-Hose Model
Definition 10.
In a garden-hose protocol a spilling-pipe on a player’s side is a pipe such that water spills out of that pipe on the player’s side during the computation for some input $(x,y)$.
We say a protocol has multiple spilling-pipes if there is more than one spilling-pipe on Alice’s side or on Bob’s side.
We now show a technical lemma that helps us compose garden-hose protocols without blowing up the size too much.
Lemma 11.
A garden-hose protocol $P$ for $f$ with multiple spilling-pipes can be converted into another garden-hose protocol $P'$ for $f$ that has only one spilling-pipe on Alice’s side and one spilling-pipe on Bob’s side. The size of $P'$ is at most 3 times the size of $P$, plus 1.
Next we are going to show that it is possible to convert a (loose) permutation branching program into a garden-hose protocol with only a constant factor increase in size. We state a more general fact, namely that the inputs to the branching program we simulate can be functions (with small garden-hose complexity) instead of just variables. This allows us to use composition.
Lemma 12.
Let $f(x,y) = h(g_1(x,y),\ldots,g_n(x,y))$, where $h$ is computed by a loose permutation branching program of size $s$ and $GH(g_i) \le s'$ for all $i$. Then $GH(f) = O(s \cdot s')$. The $g_i$ do not necessarily depend on the same inputs.
A first corollary is the following fact already shown in [4]. Nonuniform Logspace is equal to the class of all languages recognizable by polynomial size families of branching programs. Since reversible Logspace equals deterministic Logspace [11], and a reversible Logspace machine (on a fixed input length) can be transformed into a polynomial size permutation branching program, we get the following.
Corollary 13.
For every $f$ computable in (nonuniform) Logspace, $GH(f)$ is polynomial in $n$. This holds for any partition of the variables among Alice and Bob.
5 The Distributed Majority Function
In this section we investigate the complexity of the Distributed Majority function.
Definition 14.
Distributed Majority: $DMAJ(x,y)=1$ iff $\sum_{i=1}^{n} x_i \cdot y_i \ge n/2$, where $x,y \in \{0,1\}^n$.
Buhrman et al. [4] have conjectured that the complexity of this function is quadratic, which is what is suggested by the naive garden-hose protocol for the problem. The naive protocol implicitly keeps one counter for the position $i$ and one for the running sum, leading to quadratic size. Here we describe a construction of a permutation branching program of size $O(n\,\mathrm{polylog}(n))$ for Majority, which can then be used to construct a garden-hose protocol for the Distributed Majority function. The Majority function $MAJ(z_1,\ldots,z_n)$ is defined to be 1 iff $\sum_{i=1}^{n} z_i \ge n/2$.
Note that the Majority function itself can be computed in the garden-hose model using $O(n)$ pipes (for any way to distribute inputs to Alice and Bob), since Alice can just communicate the sum of her inputs to Bob. The advantage of using a permutation branching program to compute Majority is that by Lemma 12 we can then find a garden-hose protocol for the composition of MAJ and the Boolean AND, which is the Distributed Majority function. We adapt a construction of Sinha and Thathachar [19], who describe a branching program for the Majority function.
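For reference, the composition just described can be stated in a few lines of Python (our own illustration; the threshold convention $\ge n/2$ follows our reading of Definition 14):

```python
def maj(bits):
    # MAJ(z) = 1 iff at least half of the bits are 1
    return 1 if 2 * sum(bits) >= len(bits) else 0

def dmaj(x, y):
    # Distributed Majority is the composition MAJ(x_1*y_1, ..., x_n*y_n):
    # Alice holds x, Bob holds y, and the inner AND mixes both inputs.
    return maj([a & b for a, b in zip(x, y)])
```

The point of the section is that this outer MAJ, computed by a small permutation branching program, composes with the trivial garden-hose protocols for the inner ANDs at near-linear total cost.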
Lemma 15.
$PBP(MAJ) = O(n\,\mathrm{polylog}(n))$.
We can now state our result about the composition of functions with small garden-hose complexity via a Majority function.
Lemma 16.
For $f = MAJ(g_1,\ldots,g_n)$, where each function $g_i$ has garden-hose complexity at most $s$, we have $GH(f) = O(n \cdot s \cdot \mathrm{polylog}(n))$.
Corollary 17.
The garden-hose complexity of Distributed Majority is $O(n\,\mathrm{polylog}(n))$.
6 Composition and Connection to Formula Size
We wish to relate $GH(f)$ to the formula size of $f$. To do so we examine the composition of garden-hose protocols with popular gate functions.
Theorem 18.
For $f = \diamond(g_1,\ldots,g_n)$, where each function $g_i$ has garden-hose complexity at most $s$:

- If $\diamond = \mathrm{AND}$, then $GH(f) = O(n \cdot s)$.

- If $\diamond = \mathrm{OR}$, then $GH(f) = O(n \cdot s)$.

- If $\diamond = \mathrm{XOR}$, then $GH(f) = O(n \cdot s)$.

- If $\diamond = \mathrm{MAJ}$, then $GH(f) = O(n \cdot s \cdot \mathrm{polylog}(n))$.
This result follows from Lemma 16 and Lemma 12 combined with the trivial loose permutation branching programs for AND, OR, XOR.
We now turn to the simulation of Boolean formulae by garden-hose protocols. We use the simulation of formulae over the set of all fanin-2 gate functions by branching programs due to Giel [6].
Theorem 19.
Let $F$ be a formula for a Boolean function $f = F(g_1,\ldots,g_m)$ made of AND, OR, XOR gates of arbitrary fanin. If $F$ has size $s$ and $GH(g_i) \le t$ for all $i$, then for all constants $\epsilon > 0$ we have $GH(f) = O(s^{1+\epsilon} \cdot t)$.
Proof.
Giel [6] shows the following simulation result:
Fact 1.
Let $\epsilon > 0$ be any constant. Assume there is a formula with arbitrary fanin-2 gates and size $s$ for a Boolean function $f$. Then there is a layered branching program of size $O(s^{1+\epsilon})$ and constant width that also computes $f$.
By inspection of the proof it becomes clear that the constructed branching program is in fact a strict permutation branching program. The theorem follows by applying Lemma 12. ∎
Corollary 20.
When the $g_i$’s are single variables we get $GH(f) = O(s^{1+\epsilon})$ for all constants $\epsilon > 0$. Thus any lower bound on the garden-hose complexity of a function yields a slightly smaller lower bound on formula size (all gates of fanin 2 allowed).
The best lower bound known for the size of formulae over the basis of all fanin-2 gate functions is $\Omega(n^2/\log n)$, due to Nečiporuk [12]. The Nečiporuk lower bound method (based on counting subfunctions) can also be used to give the best general branching program lower bound of $\Omega(n^2/\log^2 n)$ (see [20]).
Due to the above, any lower bound of order $n^{2+\epsilon}$ for the garden-hose model would immediately give lower bounds of almost the same magnitude for formula size and permutation branching program size. Proving superquadratic lower bounds in these models is a longstanding open problem.
Due to the fact that we have small permutation branching programs for Majority, we can even simulate a more general class of formulae involving a limited number of Majority gates.
Theorem 21.
Let $F$ be a formula for a Boolean function $f$ made of AND, OR, XOR gates of arbitrary fanin. Additionally there may be at most a constant number of Majority gates (of arbitrary fanin) on any path from the root to the leaves. If $F$ has size $s$, then for all constants $\epsilon > 0$ we have $GH(f) = O(s^{1+\epsilon})$.
Proof.
Proceeding in reverse topological order we can replace all subformulae below a Majority gate by garden-hose protocols using Theorem 19, increasing the size of the subformula polynomially. Then we can apply Lemma 16 to replace the subformula including the Majority gate by a garden-hose protocol. If the size of the formula below the Majority gate is $s'$, then the garden-hose size is $O(s'^{1+\epsilon})$, where the polylogarithmic factor of Lemma 16 is hidden in the polynomial increase. Since every path from root to leaf has at most a constant number of Majority gates, and we may choose the constant used in Theorem 19 to be appropriately smaller than $\epsilon$, we get our result. ∎
6.1 The Nečiporuk Bound with Arbitrary Symmetric Gates
Since garden-hose protocols can even simulate formulae containing some arbitrary fanin Majority gates, the question arises whether one can hope for superlinear lower bounds at all. Maybe it is hard to show superlinear lower bounds for formulae having Majority gates? Note that very small formulae for the Majority function itself are not known (the currently best constructions yield formulae of large polynomial size [18]), hence we cannot argue that Majority gates do not add power to the model. In this subsection we sketch the simple observation that the Nečiporuk method [12] can be used to give good lower bounds for formulae made of arbitrary symmetric gates of any fanin. Hence there is no obstacle to near-quadratic lower bounds from the formula size connection we have shown. We stress that nevertheless we do not have any superlinear lower bounds for the garden-hose model.
We employ the communication complexity notation for the Nečiporuk bound from [8].
Theorem 22.
Let $f$ be a Boolean function and $y_1,\ldots,y_k$ a partition of the input bits of $f$. Denote by $c_i$ the deterministic one-way communication complexity of $f$ when Alice receives all inputs except those in $y_i$, and Bob the inputs in $y_i$. Then the size (number of leaves) of any formula for $f$ consisting of arbitrary symmetric Boolean gates is at least $\sum_{i=1}^{k} \Omega(c_i/\log c_i)$.
The theorem is as good as the usual Nečiporuk bound except for the log-factor, and can hence be used to show lower bounds of up to $\Omega(n^2/\log^2 n)$ on the formula size of explicit functions like IndirectStorageAccess [20].
7 Time-Bounded Garden-Hose Protocols
We now define a notion of time in garden-hose complexity.
Definition 23.
Given a garden-hose protocol $P$ for computing a function $f$, and an input $(x,y)$, we refer to the pipes that carry water in $P$ on $(x,y)$ as the wet pipes. Let $T(P)$ denote the maximum number of wet pipes over all inputs.
The number of wet pipes on input $(x,y)$ is equal to the length of the path the water takes and thus corresponds to the time the computation takes. Thus it makes sense to investigate protocols which have bounded time $T(P)$. Furthermore, the question is whether it is possible to simultaneously optimize $T(P)$ and the number of pipes used.
Definition 24.
We define $GH_k(f)$ to be the complexity of an optimal garden-hose protocol $P$ for computing $f$ such that $T(P) \le k$.
As an example consider the Equality function (test whether $x = y$ for $x,y \in \{0,1\}^n$). The straightforward protocol that compares bit after bit has cost $O(n)$ but needs time $\Omega(n)$ in the worst case. On the other hand one can easily obtain a protocol with time 2 that has cost $2^n$: use $2^n$ pipes to communicate $x$ to Bob. We have the following general lower bound.
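The bit-by-bit protocol just mentioned can be spelled out and tested with a small simulator (our own sketch; pipe names such as `('pass', i)` are hypothetical labels). Stage $i$ returns a matched bit to Alice, who feeds the next stage, while a mismatch is sent back into a pipe Alice leaves open:

```python
def run_garden_hose(tap_pipe, alice_m, bob_m):
    # Follow the water; return 0 if it spills on Alice's side, 1 on Bob's.
    pipe, side = tap_pipe, "bob"
    while True:
        m = bob_m if side == "bob" else alice_m
        if pipe not in m:
            return 1 if side == "bob" else 0
        pipe, side = m[pipe], ("alice" if side == "bob" else "bob")

def eq_bit_by_bit(x, y):
    """Equality via n stages; pipe ('v', i, b) asserts 'bit i of x is b'."""
    n = len(x)
    alice_m, bob_m = {}, {}
    for i in range(n - 1):
        # Alice: a bit confirmed at stage i enters stage i+1 with her bit value
        alice_m[("pass", i)] = ("v", i + 1, x[i + 1])
        alice_m[("v", i + 1, x[i + 1])] = ("pass", i)
    for i in range(n):
        # Bob: a mismatching bit is sent back; Alice leaves ('back', i) open
        bob_m[("v", i, 1 - y[i])] = ("back", i)
        bob_m[("back", i)] = ("v", i, 1 - y[i])
        if i < n - 1:
            bob_m[("v", i, y[i])] = ("pass", i)
            bob_m[("pass", i)] = ("v", i, y[i])
        # at the last stage a matching bit is left open: water spills at Bob
    return run_garden_hose(("v", 0, x[0]), alice_m, bob_m)
```

This uses $O(n)$ pipes but up to three wet pipes per matched bit, so its time is $\Theta(n)$ in the worst case, matching the tradeoff discussed above.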
Theorem 25.
For all Boolean functions $f$ we have $GH_k(f) \ge 2^{D_k(f)/k - 1}$, where $D_k(f)$ is the deterministic communication complexity of $f$ with at most $k$ rounds (Alice starting).
Proof.
We rewrite the claim as $D_k(f) \le k \cdot (\log GH_k(f) + 1)$.
Let $P$ be the garden-hose protocol for $f$ that achieves complexity $GH_k(f)$ with $T(P) \le k$. The deterministic $k$-round communication protocol for $f$ simulates $P$ by simply following the flow of the water. In each round Alice or Bob (alternatingly) sends the name of the pipe used at that time by $P$, using at most $\log GH_k(f) + 1$ bits per round. ∎
Thus for Equality we have for instance that $GH_k(EQ) = 2^{\Omega(n/k)}$. There is an almost matching upper bound of $O(k \cdot 2^{\lceil n/k \rceil})$ by using $k$ blocks of $2^{\lceil n/k \rceil}$ pipes to communicate blocks of $\lceil n/k \rceil$ bits each.
We can easily deduce a time-cost tradeoff from the above: for Equality the product of time and cost is at least $\Omega(n^2/\log n)$, because for time $k \le n/\log n$ we get $k \cdot 2^{\Omega(n/k)} = \Omega(n^2/\log n)$, whereas for larger $k$ we can use that the size is always at least $\Omega(n)$.
7.1 A Time-Size Hierarchy
The Pointer Jumping function is well-studied in communication complexity. We describe a slight restriction of the problem in which the inputs are required to be bijections.
Definition 26.
Let $V_A$ and $V_B$ be two disjoint sets of vertices with $|V_A| = |V_B| = m$.
Let $F_A = \{f_A \mid f_A : V_A \to V_B$ and $f_A$ is bijective$\}$ and $F_B = \{f_B \mid f_B : V_B \to V_A$ and $f_B$ is bijective$\}$. For a pair of functions $f_A \in F_A$ and $f_B \in F_B$ define $f = f_A \cup f_B$, i.e., $f(v) = f_A(v)$ for $v \in V_A$ and $f(v) = f_B(v)$ for $v \in V_B$.
Then $f^{(0)}(v) = v$ and $f^{(k)}(v) = f(f^{(k-1)}(v))$.
Finally, the pointer jumping function $PJ_k(f_A, f_B)$ is defined to be the XOR of all bits in the binary name of $f^{(k)}(v_0)$, where $v_0$ is a fixed vertex in $V_A$.
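Under the conventions assumed in our reconstruction of Definition 26 (start at a fixed $v_0 \in V_A$, Alice's function applied first), the function can be computed as follows (our own sketch):

```python
def pj(fa, fb, k, v0=0):
    """k-step pointer jumping; fa: V_A -> V_B and fb: V_B -> V_A are
    bijections given as lists over indices 0..m-1.  We start at v0 in V_A
    and apply Alice's function first; the output is the XOR of the bits
    of the binary name of the vertex reached."""
    v, on_a_side = v0, True
    for _ in range(k):
        v = fa[v] if on_a_side else fb[v]
        on_a_side = not on_a_side
    return bin(v).count("1") % 2
```

The communication intuition is visible in the loop: each pointer step needs the other player's function, so a protocol with few rounds must shortcut the alternation, which is exactly what the hierarchy below rules out for cheap protocols.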
Round-communication hierarchies for pointer jumping and related functions are investigated in [15]. Here we observe that $PJ_k$ gives a time-size hierarchy in the garden-hose model. For simplicity we only consider the case where Alice starts.
Theorem 27.

- $PJ_k$ can be computed by a garden-hose protocol with time $k$ and size $O(k \cdot m)$.

- Any garden-hose protocol for $PJ_k$ that uses time at most $k-1$ has size exponential in $\Omega(m/\mathrm{poly}(k))$.
We note that slightly weaker lower bounds hold for the randomized setting.
8 Randomized Garden-Hose Protocols
We now bring randomness into the picture and investigate its power in the garden-hose model. Buhrman et al. [4] have already considered protocols with public randomness. In this section we are mainly interested in the power of private randomness.
Definition 28.
Let $GH^{pub}(f)$ denote the minimum complexity of a garden-hose protocol for computing $f$, where the players have access to public randomness, and the output is correct with probability 2/3 (over the randomness). Similarly, we can define $GH^{priv}(f)$, the cost of garden-hose protocols with access to private randomness only.
By standard fingerprinting ideas [10] we can observe the following.
Claim 1.
$GH^{pub}(EQ) = O(1)$.
Claim 2.
$GH^{priv}(EQ) = \mathrm{poly}(n)$, and this is achieved by a constant time protocol.
Proof.
The second claim follows from Newman’s theorem [13], showing that any public coin protocol with communication cost $c$ can be converted into a private coin protocol with communication cost $c + O(\log n)$ bits on inputs of length $n$, together with the standard public coin protocol for Equality and the protocol tree simulation of Theorem 9. ∎
Of course we already know that even the deterministic garden-hose complexity of Equality is $O(n)$, hence the only thing achieved by the above protocol is the reduction in time complexity. Note that due to our result of the previous section computing Equality deterministically in constant time needs exponentially many pipes.
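The public-coin fingerprinting protocol behind the constant-cost Equality upper bound can be sketched as follows (our own toy implementation; the repetition count is chosen to push the one-sided error below 1/3):

```python
import random

def eq_public_coin(x, y, trials=2, rng=None):
    """Public-coin Equality: compare inner products (mod 2) of x and y with
    shared random vectors.  If x != y a single trial errs with probability
    exactly 1/2, so two independent trials give one-sided error 1/4 < 1/3."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in x]           # the shared random string
        if sum(ri * xi for ri, xi in zip(r, x)) % 2 != \
           sum(ri * yi for ri, yi in zip(r, y)) % 2:
            return 0   # a differing fingerprint proves x != y
    return 1           # fingerprints agreed in every trial: report 'equal'
```

Only the two fingerprint bits depend on the inputs, which is why the resulting protocol tree, and hence the garden-hose protocol, has constant size.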
Buhrman et al. [4] have shown how to derandomize a public coin protocol at the cost of increasing its size by a factor of $O(n)$, so the factor in the separation between public coin and deterministic protocols above is essentially the best that can be achieved. This raises the question whether private coin protocols can ever be more efficient in size than the optimal deterministic protocol. We now show that there are no very efficient private coin protocols for Equality.
Claim 3.
$GH^{priv}(EQ) = \Omega(\sqrt{n}/\log n)$.
Proof.
To prove this we first note that $R^{||}(f) = O(GH^{priv}(f) \cdot \log GH^{priv}(f))$, where $R^{||}(f)$ is the cost of randomized private coin simultaneous message passing protocols for $f$ (Alice and Bob can send their connections to the referee). Hence $GH^{priv}(EQ) \cdot \log GH^{priv}(EQ) = \Omega(\sqrt{n})$, since Newman and Szegedy [14] show that $R^{||}(EQ) = \Theta(\sqrt{n})$. ∎
9 Open Problems

- We show that obtaining lower bounds on $GH(f)$ larger than quadratic will be hard. But we know of no obstacles to proving superlinear lower bounds.

- Possible candidates for quadratic lower bounds could be variants of the Disjointness function with restricted set sizes, and the IndirectStorageAccess function.

- Consider the garden-hose matrix $M_s$ as a communication matrix. How many distinct rows does $M_s$ have? What is the deterministic communication complexity of $M_s$? The best upper bound is $O(s \log s)$, and the lower bound is $\Omega(s)$. An improved lower bound would give a problem for which the deterministic communication complexity is larger than the garden-hose complexity.

- We have proved $GH^{priv}(EQ) = \Omega(\sqrt{n}/\log n)$. Is it true that $GH^{priv}(EQ) = \Theta(n)$? Is there any problem where $GH^{priv}(f)$ is smaller than $GH(f)$?

- It would be interesting to investigate the relation between the garden-hose model and memoryless communication complexity, i.e., a model in which Alice and Bob must send messages depending only on their input and the message just received. The garden-hose model is memoryless, but also reversible.
Acknowledgement
We thank an anonymous referee for pointing out a mistake in an earlier version of this paper.
Appendix A
A.1 Nondeterministic Communication
Proof of Theorem 8.
Consider a deterministic gardenhose protocol for using pipes. Maybe the most natural approach to simulate ’s computation by a nondeterministic communication protocol would be to guess the path that the water takes, and verify this guess locally by Alice and Bob. There are, however, too many paths for this to lead to good bounds. Instead we use a coarser guess. For any given input in a computation of the water traverses a set of pipes. We refer to these pipes as the wet pipes in on . In general a set of wet pipes can correspond to several paths through the network, which must use only edges from the set.
In the nondeterministic protocol Alice guesses a set of pipes that is supposed to be . Since is odd if and only if the size of immediately tells us whether is a witness for 1inputs or 0inputs.
Consider an even size set . Alice computes the connections of the pipes on her side using her input (as used in the gardenhose protocol). Her connections are consistent with , iff the tap is connected to a pipe in , and the other pipes in are all connected in pairs, except one, which is open. Note that none of the pipes in may be connected to a pipe outside of . Similarly, is consistent with Bob’s connections (based on ), if all the pipes in are paired up (no pipe in is open and no pipe in is connected to a pipe outside ).
For odd size we use an analogous definition of consistency: Now Alice has no open pipe in and all pipes in are paired up except the one connected to the tap, and Bob has all pipes in paired up except one that is open.
Suppose that S is consistent with the connections defined by x and y. Denote by W the set of pipes on the path the water takes in the gardenhose protocol. We claim that all the pipes in W are in S, and that the remaining pipes in S form cycles. If this is the case then the nondeterministic protocol is correct: since cycles have even length, subtracting them does not change parity, and hence the sizes of S and W have the same parity, i.e., a consistent S determines the function value correctly. Also note that the communication complexity of the nondeterministic protocol is at most s + 1, since a subset of the s pipes used can be communicated with s bits: Alice guesses an S that is consistent with her input and sends it to Bob, who accepts/rejects if S is also consistent with his input, otherwise he gives up (accepting/rejecting takes one additional bit of communication). Note that for partial functions no consistent S may exist for Alice to choose, but in that case she can give up without a result.
To establish correctness we have to show that all pipes in W are in S (and the remaining pipes in S form cycles). Clearly the starting pipe (the one connected to the tap) is in S by the definition of consistency. All remaining pipes in S on Bob's and Alice's side are either paired up or (for exactly one pipe) open. Hence we can follow the flow of water without leaving S. This implies that W is contained in S, and since removing W from S leaves no open pipes, all the remaining pipes in S must form a set of cycles. ∎
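To make the parity argument concrete, the following is a minimal simulation of water flow in a gardenhose protocol. The connections on each side are symmetric partial matchings on the pipes; the number of wet pipes is odd exactly when the water spills on Bob's side. The 4-pipe protocol for AND and all names here are our own illustration, not taken from the paper.

```python
def run(tap_pipe, alice, bob):
    """Follow the water from the tap; return (spill side, list of wet pipes)."""
    wet, pipe, side = [], tap_pipe, 'A'
    while True:
        wet.append(pipe)
        side = 'B' if side == 'A' else 'A'           # water traverses the pipe
        nxt = (bob if side == 'B' else alice).get(pipe)
        if nxt is None:
            return side, wet                         # open end: water spills
        pipe = nxt

def and_protocol(x, y):
    """A hypothetical 4-pipe protocol for AND(x, y): the water spills on
    Bob's side (output 1) exactly when x = y = 1."""
    tap = 1                                          # the tap always feeds pipe 1
    alice = {2: 3, 3: 2} if x else {}                # x = 1: pair pipes 2 and 3
    bob = {1: 2, 2: 1} if y else {1: 4, 4: 1}        # y = 0: divert into pipe 4
    return tap, alice, bob

for x in (0, 1):
    for y in (0, 1):
        spill, wet = run(*and_protocol(x, y))
        assert (spill == 'B') == bool(x and y)
        # odd number of wet pipes iff the water spills on Bob's side
        assert (len(wet) % 2 == 1) == (spill == 'B')
```

The simulator is reused (re-stated) in the sketches for the later constructions, since the model itself never changes: only the wiring does.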
A.2 GardenHose and Protocol Trees
Proof of Theorem 9.
Given a protocol tree with e edges for a two-way communication protocol computing a function f, we construct a gardenhose protocol with at most e pipes.
We describe the construction recursively. Let v be any node of the protocol tree belonging to Alice, with children v_0 and v_1 belonging to Bob. The protocol tree rooted at v computes a function f_v. If neither of the v_i is a leaf, then we assume by induction that we can construct a gardenhose protocol P_i for each of the children, where P_i uses at most e_i pipes, and e_i is the number of edges in the subtree of v_i. The P_i have the tap on Bob's side. To obtain a gardenhose protocol for f_v, we use two additional pipes. Alice sends the water through pipe i to communicate the message corresponding to the edge to v_i. Furthermore the right end of pipe i is connected to the tap of a copy of P_i. The number of pipes used is hence at most the number of edges in the protocol tree. If one or two of the v_i are leaves, we use the same construction, except that for an accepting leaf we use one extra pipe that is open on Bob's end, and for a rejecting leaf we just let the water spill on Alice's side. It is easy to see by induction that the gardenhose protocol accepts if and only if the protocol tree ends in an accepting leaf. ∎
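The recursion above can be sketched in code. The encoding of protocol trees, the helper names, and the AND example tree are our own illustration (the paper gives only the construction): internal nodes alternate owners starting with Alice, each owner routes the water into the pipe of the chosen child, accepting leaves use a crossing pipe only when the required spill side is the far side.

```python
TAP = 'TAP'

def run(tap_pipe, alice, bob):
    """Follow the water from the tap; return the side where it spills."""
    pipe, side = tap_pipe, 'A'
    while True:
        side = 'B' if side == 'A' else 'A'           # water traverses the pipe
        nxt = (bob if side == 'B' else alice).get(pipe)
        if nxt is None:
            return side                              # open end: water spills
        pipe = nxt

def compile_tree(tree, x, y):
    """Wire a gardenhose protocol from a protocol tree, for fixed inputs.

    Nodes are ('node', bit, child0, child1) with owners alternating (Alice
    at the root), or ('leaf', accept). Assumes the root's children are
    internal nodes, so the tap is always connected."""
    alice, bob = {}, {}
    state = {'pipes': 0, 'tap': None}

    def connect(side, entry, p):
        if entry is TAP:
            state['tap'] = p
        else:
            d = alice if side == 'A' else bob
            d[entry], d[p] = p, entry

    def wire(node, entry, side):
        _, bit, *children = node
        chosen = (x if side == 'A' else y)[bit]
        other = 'B' if side == 'A' else 'A'
        for b, child in enumerate(children):
            if child[0] == 'leaf':
                # spilling on Bob's side means accept; a crossing pipe is
                # needed only when the required spill side is the far side
                if child[1] != (side == 'B'):
                    state['pipes'] += 1
                    if b == chosen:
                        connect(side, entry, state['pipes'])
                # otherwise: when chosen, simply let the water spill here
            else:
                state['pipes'] += 1
                p = state['pipes']
                if b == chosen:
                    connect(side, entry, p)
                wire(child, p, other)

    wire(tree, TAP, 'A')
    return state['tap'], alice, bob, state['pipes']

# protocol tree for AND(x0, y0): Alice announces x0, then Bob decides
REJ, ACC = ('leaf', False), ('leaf', True)
TREE = ('node', 0, ('node', 0, REJ, REJ), ('node', 0, REJ, ACC))

for x0 in (0, 1):
    for y0 in (0, 1):
        tap, alice, bob, pipes = compile_tree(TREE, (x0,), (y0,))
        assert run(tap, alice, bob) == ('B' if x0 and y0 else 'A')
        assert pipes <= 6                            # at most one pipe per edge
```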
A.3 One Spilling Pipe
Proof of Lemma 11.
Fix a protocol P that uses s pipes to compute f. In the protocol Alice makes the connections on her side based on her input x. Similarly Bob's connections are based on his input y. Denote the set of pipes that are open on Alice's side by O_A and the set of pipes that are open on Bob's side by O_B.
In the new protocol Alice and Bob have 3s + 1 pipes, 3s of them arranged into 3 blocks of s pipes each. Let's call the blocks P_1, P_2 and P_3. The main idea is to use P_1 to compute f and then use P_2 and P_3 to 'uncompute' (to remove the extra information provided by the multiple spilling pipes).
In the construction of the new protocol Alice and Bob make their connections on P_1, P_2 and P_3 separately, exactly the same way they did in P. Alice then connects P_1's tappipe to the tap and keeps the tappipes of P_2 and P_3 open for now. They then add the following connections: Alice connects every pipe in O_A of P_1 to the corresponding pipe in P_2, and Bob connects every pipe in O_B of P_1 to the corresponding pipe in P_3. Note that those pipes were open before they were connected, as they were all spilling pipes. P_1 now does not have any open pipes. The only pipes that will ever spill in P_2 and P_3 are their tappipes (there may be other open pipes, but it is easy to see that they never spill). The tappipes of P_2 and P_3 are both on Alice's side. Finally, Alice uses one more pipe, and connects the tappipe of P_3 to this new pipe. Figure 1 shows an example of the construction.
The size of the new protocol is exactly 3s + 1, and there is exactly one spilling pipe on each side, namely the tappipe of P_2 on Alice's side and the extra pipe on Bob's side, because the only other open pipes are the pipes of O_B in P_2 and the pipes of O_A in P_3. These cannot be reached by the water. All connections made are done by Alice and Bob alone. We now argue that the protocol computes f correctly.
Notice that if f(x, y) = 0, then the water flows through P_1 and ends at one of the pipes in O_A. This pipe is connected to the corresponding pipe in P_2. So the water follows the same path backwards in P_2 until it reaches the tappipe of P_2. This pipe is open on Alice's side. Hence the water spills on Alice's side, making the output 0 (and it spills at the tappipe of P_2).
Similarly, if f(x, y) = 1, the water flows through P_1 and ends at one of the pipes in O_B on Bob's side. Since this pipe is connected to the corresponding pipe in P_3, the water flows backwards in P_3 until it reaches the tappipe of P_3. This is on Alice's side and connected to the extra pipe. This makes the water spill on Bob's side as desired. ∎
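The three-block construction can be sketched as follows. The encoding and all names are our own illustration; and_protocol is a toy 4-pipe protocol with several open pipes per side, so the transformation is not vacuous on it.

```python
def run(tap_pipe, alice, bob):
    """Follow the water; return (spill side, spilling pipe)."""
    pipe, side = tap_pipe, 'A'
    while True:
        side = 'B' if side == 'A' else 'A'           # water traverses the pipe
        nxt = (bob if side == 'B' else alice).get(pipe)
        if nxt is None:
            return side, pipe
        pipe = nxt

def one_spill(n, tap, alice, bob, extra='extra'):
    """Blocks 1..3 each copy the n-pipe protocol; blocks 2 and 3 uncompute."""
    A, B = {}, {}
    for blk in (1, 2, 3):
        for p, q in alice.items():
            A[(blk, p)] = (blk, q)
        for p, q in bob.items():
            B[(blk, p)] = (blk, q)
    # route Alice-side spills of block 1 back through block 2,
    # and Bob-side spills of block 1 back through block 3
    for p in range(1, n + 1):
        if p not in alice and p != tap:
            A[(1, p)], A[(2, p)] = (2, p), (1, p)
        if p not in bob:
            B[(1, p)], B[(3, p)] = (3, p), (1, p)
    # block 3's tappipe is sent through one extra pipe to spill on Bob's side
    A[(3, tap)], A[extra] = extra, (3, tap)
    return (1, tap), A, B

def and_protocol(x, y):
    """A hypothetical 4-pipe protocol for AND(x, y) with several open pipes."""
    alice = {2: 3, 3: 2} if x else {}
    bob = {1: 2, 2: 1} if y else {1: 4, 4: 1}
    return 4, 1, alice, bob                          # (n, tap, alice, bob)

for x in (0, 1):
    for y in (0, 1):
        n, tap, alice, bob = and_protocol(x, y)
        side, pipe = run(*one_spill(n, tap, alice, bob))
        assert (side == 'B') == bool(x and y)        # output is preserved
        # the only pipes that ever spill: block 2's tappipe on Alice's side
        # and the extra pipe on Bob's side
        assert pipe == ((2, tap) if side == 'A' else 'extra')
```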
A.4 Permutation Branching Programs to GardenHose
Proof of Lemma 12.
In Lemma 11 we have seen that we can turn a gardenhose protocol with multiple spilling pipes into a protocol with exactly one spilling pipe per side. Such a protocol acts exactly like a node in a branching program, except that its decision is based on a function of both x and y instead of a single variable. This observation suffices to simulate decision trees, but in a branching program nodes can have indegree larger than 1, and we cannot pump water from several sources into a single gardenhose protocol.
Given a loose permutation branching program for f of size s, we now show how to construct an equivalent gardenhose protocol for f.
Let G denote the graph of the branching program. G consists of layers L_1, …, L_m, where the first layer has just one node (the source), the last layer 2 nodes (the sinks), and all intermediate layers have w nodes, so the size is s = (m − 2)w + 3. Layer L_t queries some variable, whose value z_t depends on both players' inputs and can be computed by a small gardenhose protocol. The 1-edges between L_t and L_{t+1} are given by one permutation, the 0-edges by another.
The construction replaces the nodes of each layer by gardenhose protocols for the queried values z_t. Each layer uses 2w copies of the protocol for z_t, arranged in two layers. We refer to these copies as the upper and lower copies, each numbered from 1 to w (and implicitly by their level). Essentially we need the first layer to compute z_t, and the second layer to uncompute, since we only want to remember the name of the current vertex in G, not the value of z_t.
If the 1-edge out of node k in layer t goes to node k' in layer t + 1, then we connect the 1-spill pipe of the upper k-th copy to the 1-spill pipe of the lower k'-th copy. Similarly we make the connections for the 0-spill pipes (on Alice's side).
To connect layers we connect the tappipe of the k-th lower copy on level t to the tappipe of the k-th upper copy on level t + 1. On level 1 the tappipe of the upper copy corresponding to the source is connected to Alice's tap, according to the branching program.
Figure 2 shows an example of the construction, where each block is a gardenhose protocol computing one of the queried values.
For every edge that goes to the accepting sink of the branching program we do the following: if the spilling pipe of the corresponding upper copy is on Alice's side, we use one extra pipe connected to it, so that the water ends up spilling on Bob's side; otherwise we simply leave the spilling pipe open. We proceed analogously for edges to the rejecting sink.
The size of the gardenhose protocol is at most proportional to the size of the branching program times the size of the gardenhose protocols used for the queried values. ∎
A.5 A Permutation Branching Program for Majority
Proof of Lemma 15.
In 1997, Sinha et al. [19] described an efficient oblivious branching program for computing Majority. Unfortunately the branching program they construct is not a permutation branching program. Thus it is not immediately clear how to convert their construction into a gardenhose protocol.
To describe a permutation branching program for Majority we first need permutation branching programs for computing the sum of the inputs mod m for small m. Denote by S_m the (non-Boolean) function S_m(z) = (z_1 + ⋯ + z_n) mod m. The following is easy to see.
Claim 4.
S_m can be computed by a permutation branching program of width m so that each input z with S_m(z) = v, when starting on the top level at node u, ends at node (u + v) mod m on the last level.
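The claim amounts to a layered product of cyclic shifts; a minimal sketch (the function names are our own illustration): layer i applies the identity permutation if the queried bit is 0 and the shift u → u + 1 (mod m) if it is 1.

```python
def mod_box(bits, m):
    """Width-m permutation branching program summing the bits mod m:
    one permutation of the m nodes per input bit."""
    return [tuple((u + b) % m for u in range(m)) for b in bits]

def feed(perms, start):
    """Follow the unique path from node `start` on the top level."""
    u = start
    for p in perms:
        u = p[u]                  # follow the (only) outgoing edge of node u
    return u

m, bits = 5, [1, 0, 1, 1, 0, 1]
for start in range(m):
    # starting at node u, the path ends at node (u + sum) mod m
    assert feed(mod_box(bits, m), start) == (start + sum(bits)) % m
```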
We call this permutation branching program a modulus box. The join of two modulus boxes (for moduli m resp. m') is a new branching program, in which the bottom level nodes of the first box are identified in some way with the top level nodes of the second. We employ the following main technical result of Sinha et al. [19], which describes an approximate divider.
Fact 2.
[19] Fix the length of an interval I of natural numbers. There are increasing prime numbers p_1 < ⋯ < p_k, whose number and sizes are chosen as in [19]. Consider inputs z such that z_1 + ⋯ + z_n lies in I.
Then there is a way to join the modulus boxes for p_1, …, p_k (in this order) into a single branching program, such that all inputs reaching a fixed sink node of the last box (a node recording the value of the sum mod p_k) satisfy that z_1 + ⋯ + z_n belongs to one of a family of short intervals inside I, each of length a fixed fraction of the length of I. The intervals overlap, and each point in I lies in only a bounded number of them.
Furthermore, the connections between the boxes are such that every output node of the i-th box is connected to one input node of the (i+1)-st box, and every input node of the (i+1)-st box is connected to at most one output node of the i-th box.
The above differs from the presentation in [19] in that we require that the p_i are increasing, so that we can join the boxes without creating nodes with fanin larger than 1. This means that every box for p_{i+1} has a few input nodes that are not used.
Note that our goal is to know whether the sum of the inputs is greater than n/2 or not. Effectively this means there are three kinds of bottom layer nodes in the branching program constructed above: those where we know that all inputs reaching the sink have sum at most n/2, at which point we can reject; those where the sum is larger than n/2, where we accept; and undecided nodes. A bottom layer node is undecided if the interval of possible sums reaching that sink contains n/2. At undecided nodes the interval of possible values of the sum has been reduced to a fixed fraction of the original interval. Furthermore, there are few undecided nodes (since n/2 is contained in only that many intervals), and the intervals for those nodes stretch only a little beyond n/2 on both sides, hence the union of the intervals of all undecided bottom layer nodes is itself an interval that is a fixed fraction shorter than the original one. Hence, this construction can be iterated (a logarithmic number of times) to decide Majority on all inputs.
Now we need to argue that the whole construction can be made into a permutation branching program. Obviously any modulus box can be computed by a strict permutation BP of width equal to its modulus. The connections between the boxes are injective mappings. Hence the whole construction for the above fact can be made into a permutation branching program, where dummy nodes need to be added to bring all layers to the same width (the largest of the moduli).
The branching program for Majority is then an iteration of the above construction of permutation branching programs. In each level of the iteration some nodes accept, some reject, and some continue on a smaller interval. For all undecided sink nodes we can assume that they continue using the same interval of the reduced size. This continues until the intervals are very short, at which point the problem can be solved by counting.
To do the same iteration in a permutation branching program we need to do the following. We want to turn a building block B of the iteration (a permutation branching program as in Fact 2) into a permutation branching program that has only 3 sinks reached by inputs (plus some sinks that are never reached). To do this we first use the original program, followed by 3 copies of the same program in reverse. We connect the undecided sinks of the upper program to the corresponding vertices in the first reversed lower program, and similarly the accepting and rejecting sinks to the corresponding vertices of the other two reversed programs. Then each input that is undecided by B will end up at the node corresponding to the starting node of the first reverse copy. Similarly inputs that are accepted by B will leave the second reverse copy at the node corresponding to the starting node of B, etc. Using dummy nodes this program can be extended to a permutation branching program, with the width increased by a factor of 3 and the length by a factor of 2. Each input leads to one of three nodes. We can now connect the undecided sink of the above construction to the starting vertex of the next block. To turn the whole construction into a strict permutation branching program the accepting and rejecting bottom vertices are connected to extra vertices that remember at which layer/vertex the inputs were accepted/rejected.
The whole construction yields a permutation branching program for Majority. The length of the program is the number of iterations times the length of the joined modulus boxes. Each level of the program has width at most the largest modulus times the constant factors needed to turn things into a permutation BP (plus vertices for accepting/rejecting paths). Hence the total size of the program is as claimed in Lemma 15.
∎
A.6 Lower Bound for Formulae with Symmetric Gates
Proof of Theorem 22.
Fix a set S of variables and any formula of size at most s computing f and consisting of symmetric gates only. Define T_S to be the subtree of the formula whose leaves are the variables in S (its root is the output gate of the formula), and denote by ℓ_S the number of leaves of T_S. We will show that the communication in the following protocol is O(ℓ_S log s).
Alice has all the variables except those in S, which go to Bob. Alice and Bob have to evaluate all the gates in T_S (this includes the root). They will evaluate the gates in (reverse) topological order. All the leaves are known to Bob. Denote by P_S the set of paths in T_S that start at a leaf or a gate of fanin at least 2 inside T_S, end at a gate of fanin at least 2 inside T_S, and have no such gates in between. Then |P_S| = O(ℓ_S). Also denote by V_S the set of gates in T_S that have fanin larger than 1 inside T_S; again |V_S| = O(ℓ_S).
Bob goes over the paths and gates in T_S in reverse topological order (i.e., from the leaves up). Let u_1, …, u_k be the vertices of some path in P_S in reverse topological order (i.e., the vertex closest to the root is last). All vertices after u_1 on the path have fanin 1 inside T_S. Alice can tell Bob which function is computed at the last such vertex in terms of the value already computed (by Bob) at u_1: since Alice knows all inputs outside of S feeding into these gates, the composed function is one of the four unary Boolean functions. This takes 2 bits. Hence the total communication to evaluate all paths in P_S is 2|P_S|. For each gate v in V_S there are at least 2 inputs in T_S that have already been computed by Bob. Since the gate at v is symmetric, it is sufficient for Alice to say how many of her inputs to the gate evaluate to 1, which takes at most log s bits unless the formula is larger than s. So the total communication is at most 2|P_S| + |V_S| log s = O(ℓ_S log s), unless the formula has size larger than s already.
∎
A.7 Pointer Jumping
Proof Sketch for Theorem 27.
To show part 1) we use a protocol with kn pipes, organized into k blocks of n pipes each. If Alice has input f_A, then she connects the tap to pipe f_A(v_0) in block 1, where v_0 is the fixed start vertex. For all even-numbered blocks t she connects the i-th pipe in block t to pipe f_A(i) in block t + 1. Bob connects, for all odd-numbered blocks t, the i-th pipe in block t to pipe f_B(i) in block t + 1.
Assume that k is odd. Then the k-th vertex of the path is on Bob's side. If the bits of this vertex have XOR 0, then the output is 0 and the water needs to spill on Alice's side. Hence, whenever the bits of f_A(i) have XOR 0, Alice leaves the i-th pipe in block k − 1 open instead of connecting it to a pipe in block k. She does make the connections as described above for all other pipes in block k − 1.
Similarly, if k is even, then the last vertex is on Alice's side, and if the XOR of its bits is 1 the spill needs to be on Bob's side. Hence Bob skips the corresponding connections between blocks k − 1 and k.
Note that f_A and f_B are bijective, hence the connections made are legal. In total we use kn pipes. It is clear that the gardenhose protocol described above computes the pointer jumping function.
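The core routing of part 1 can be sketched as follows; the final gadget that adjusts the spill side to the answer bit is omitted, and all names are our own illustration. The water ends up in block k at the pipe indexed by the k-th vertex of the path.

```python
import random

def run(tap_pipe, alice, bob):
    """Follow the water; return (spill side, spilling pipe)."""
    pipe, side = tap_pipe, 'A'
    while True:
        side = 'B' if side == 'A' else 'A'           # water traverses the pipe
        nxt = (bob if side == 'B' else alice).get(pipe)
        if nxt is None:
            return side, pipe
        pipe = nxt

def pj_pipes(fa, fb, k, v0=0):
    """k blocks of n pipes; block t holds the t-th vertex of the path.
    Bob wires odd blocks to the next one with fb, Alice even blocks with fa."""
    n = len(fa)
    alice, bob = {}, {}
    for t in range(1, k):
        f, d = (fb, bob) if t % 2 == 1 else (fa, alice)
        for i in range(n):
            # connect pipe i of block t to pipe f(i) of block t+1 (and back)
            d[(t, i)], d[(t + 1, f[i])] = (t + 1, f[i]), (t, i)
    return (1, fa[v0]), alice, bob

random.seed(7)
n, k = 8, 5
fa = list(range(n)); random.shuffle(fa)              # Alice's bijection
fb = list(range(n)); random.shuffle(fb)              # Bob's bijection
side, (t, v) = run(*pj_pipes(fa, fb, k))

# compare against the alternating composition v_1 = fa(v_0), v_2 = fb(v_1), ...
w = fa[0]
for step in range(2, k + 1):
    w = fb[w] if step % 2 == 0 else fa[w]
assert (t, v) == (k, w)
```

Since both functions are bijections, the wiring of each level is a perfect matching between consecutive blocks, which is what makes the connections legal.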
Now we turn to part 2. Take any time-bounded gardenhose protocol for the pointer jumping function using s pipes. Due to the simulation in Theorem 25 we get a communication protocol with few rounds (Alice starting) whose communication is proportional to the number of rounds times log s. But Nisan and Wigderson [15] show that protocols with too few rounds need large communication for pointer jumping. Hence s must be exponentially large.
The difficulty in applying their result is that Nisan and Wigderson analyze the complexity of pointer jumping for uniformly random inputs, not random bijective inputs f_A resp. f_B. Hence we need to make some changes to their proof. The changes needed to make the argument work are minor, however: the uniform distribution on pairs of bijective functions is still a product distribution, and as long as the communication is small it is still true that at any vertex in the protocol tree the information about the next pointer is a small constant. The main difference to the original argument is that conditioning on the previous path introduces information about the next pointer, due to the fact that vertices on the path cannot be used again. This can easily be subsumed into the information given via the previous communication. ∎
Footnotes
 This work is funded by the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012T31009) and by the Singapore National Research Foundation.
 It should be mentioned that even though Alice and Bob choose to not communicate in any other way, their intentions are not hostile and neither will deviate from a previously agreed upon protocol.
References
 D. A. Barrington. Width-3 permutation branching programs. Technical Report MIT/LCS/TM-293, 1985.
 P. Beame, M. Tompa, and P. Yan. Communication-space tradeoffs for unrestricted protocols. SIAM Journal on Computing, 23(3):652–661, 1994. Earlier version in FOCS'90.
 Joshua Brody, Shiteng Chen, Periklis A. Papakonstantinou, Hao Song, and Xiaoming Sun. Space-bounded communication complexity. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 159–172, 2013.
 Harry Buhrman, Serge Fehr, Christian Schaffner, and Florian Speelman. The gardenhose model. In Proceedings of the 4th conference on Innovations in Theoretical Computer Science, pages 145–158. ACM, 2013.
 Well Y Chiu, Mario Szegedy, Chengu Wang, and Yixin Xu. The garden hose complexity for the equality function. arXiv:1312.7222, 2013.
 O. Giel. Branching program size is almost linear in formula size. Journal of Computer and System Sciences, 63(2):222–235, 2001.
 H. Klauck. Quantum and classical communicationspace tradeoffs from rectangle bounds. In Proceedings of FSTTCS, 2004.
 H. Klauck. OneWay Communication Complexity and the Nečiporuk Lower Bound on Formula Size. SIAM J. Comput., 37(2):552–583, 2007.
 H. Klauck, R. Špalek, and R. de Wolf. Quantum and classical strong direct product theorems and optimal time-space tradeoffs. SIAM Journal on Computing, 36(5):1472–1493, 2007. Earlier version in FOCS'04. quant-ph/0402123.
 Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997.
 K.J. Lange, P. McKenzie, and A. Tapp. Reversible space equals deterministic space. Journal of Computer and System Sciences, 2(60):354–367, 2000.
 E. I. Nečiporuk. A Boolean function. In Soviet Mathematics Doklady, volume 7, 1966.
 I. Newman. Private vs. common random bits in communication complexity. Information Processing Letters, 39(2):67–71, 1991.
 Ilan Newman and Mario Szegedy. Public vs. private coin flips in one round communication games (extended abstract). In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96, pages 561–570, 1996.
 Noam Nisan and Avi Wigderson. Rounds in communication complexity revisited. SIAM J. Comput., 22(1):211–219, February 1993.
 C. H. Papadimitriou and M. Sipser. Communication complexity. Journal of Computer and System Sciences, 28(2):260–269, 1984. Earlier version in STOC’82.
 P. Papakonstantinou, D. Scheder, and H. Song. Overlays and limited memory communication mode(l)s. In Proc. of the 29th Conference on Computational Complexity, 2014.
 I. S. Sergeev. Upper bounds for the formula size of symmetric boolean functions. Russian Mathematics, Iz. VUZ, 58(5):30–42, 2014.
 Rakesh Kumar Sinha and Jayram S. Thathachar. Efficient oblivious branching programs for threshold and mod functions. Journal of Computer and System Sciences, 55(3):373–384, 1997.
 I. Wegener. The Complexity of Boolean Functions. Wiley-Teubner Series in Computer Science, 1987.