Cancellation-free circuits: An approach for proving superlinear lower bounds for linear Boolean operators (partially supported by the Danish Council for Independent Research, Natural Sciences)
We continue the study of cancellation-free linear circuits. We show that every matrix can be computed by a cancellation-free circuit, and that for almost all matrices such circuits are at most a constant factor larger than the optimum linear circuit computing the matrix. It appears to be easier to prove statements about the structure of cancellation-free linear circuits than about linear circuits in general. We prove two nontrivial superlinear lower bounds. We show that a cancellation-free linear circuit computing the n x n Sierpinski gasket matrix must use at least (n/2) log n gates, and that this is tight. This supports a conjecture by Aaronson. Furthermore, we show that a proof strategy for proving lower bounds on monotone circuits can be converted almost directly into one for proving lower bounds on cancellation-free linear circuits. We use this, together with a result from extremal graph theory due to Andreev, to prove a lower bound of Omega(n^(2-eps)) for infinitely many n x n matrices, for every eps > 0. These lower bounds for concrete matrices are almost optimal, since every matrix can be computed with O(n^2/log n) gates.
1 Introduction and Known Results
Let F_2 be the Galois field of order 2, and let F_2^n be the n-dimensional vector space over F_2. A Boolean function f: F_2^n -> F_2^m is said to be linear if there exists a Boolean m x n matrix A such that f(x) = Ax for every x in F_2^n. This is equivalent to saying that f can be computed using only XOR gates.
An XOR-AND circuit is a directed acyclic graph. There are n + 1 nodes with in-degree 0, called the inputs; one of these is the constant value 1, and the rest are labeled x_1, ..., x_n. All other nodes have in-degree 2 and are called gates. Every gate is labeled either XOR or AND. There are m gates which are called the outputs; these are labeled y_1, ..., y_m. The value of a gate labeled AND is the product of its inputs (children), and the value of a gate labeled XOR is the sum of its two children (addition in F_2). The circuit C, with inputs x = (x_1, ..., x_n), computes the matrix A if the output vector y = (y_1, ..., y_m) computed by C satisfies y = Ax. In other words, output y_i is defined by the ith row of the matrix. The size of a circuit C, denoted |C|, is the number of gates in C. For simplicity, we will let m = n unless otherwise explicitly stated. A circuit is linear if every gate is labeled XOR.
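As an illustration of the model, a linear circuit can be evaluated gate by gate in topological order. The encoding of gates and the tiny example circuit below are ours, not from the paper.

```python
def evaluate(n, gates, outputs, x):
    """Evaluate a linear (XOR-only) circuit on input bits x[0..n-1].
    gates: list of (name, left, right) in topological order; the inputs
    are the integers 0..n-1. outputs: node names of the output gates.
    Returns the output bit vector y."""
    val = {i: x[i] for i in range(n)}
    for name, a, b in gates:
        val[name] = val[a] ^ val[b]  # addition in F_2
    return [val[o] for o in outputs]

# y_1 = x_1 + x_2 and y_2 = x_1 + x_2 + x_3: this circuit computes the
# matrix with rows (1,1,0) and (1,1,1).
gates = [("g0", 0, 1), ("g1", "g0", 2)]
y = evaluate(3, gates, ["g0", "g1"], [1, 0, 1])  # y == [1, 0]
```

Each output y_i is the parity of the inputs selected by the ith row of the matrix, matching the definition above.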
For relatively dense matrices, computing all the rows independently uses O(n) gates for each output, that is, a circuit of size O(n^2). It follows from a theorem by Lupanov [12, 14] that this upper bound can be improved.
Theorem 1.1 (Lupanov)
Every n x n matrix can be computed using a linear circuit of size O(n^2/log n).
A counting argument shows that this is asymptotically tight. In fact, the vast majority of matrices require this number of gates up to a constant factor. Despite this fact, there is no known concrete family of matrices requiring superlinear size.
Another, related circuit model is the one where we allow unbounded fan-in and arbitrary gates (that is, gates computing any predicate are allowed), but require bounded depth. The circuit complexity of such a circuit is the number of wires. Here the lower bound situation is a little better; Alon, Karchmer and Wigderson showed in 1990 that a particular family of matrices requires a superlinear number of wires for linear circuits in this model. This has recently been improved by Gál et al., who have proven that a concrete infinite family of matrices requires a superlinear number of wires when computed in depth 2. Recently, Drucker gave a survey of the strategies used for proving lower bounds on wire complexity for general (not necessarily linear) Boolean operators in bounded depth, and of the limitations of these strategies.
Returning to the circuit model with bounded fan-in, the situation is even worse for general Boolean predicates. Here we know, by a seminal result of Shannon [20, 22], that almost every function requires Omega(2^n/n) gates, but again no superlinear bound is known for a concrete family of functions. A popular, and essentially the only known, technique for proving nontrivial linear lower bounds is gate elimination. The key idea when using gate elimination is to set some of the inputs to constant values, arguing that a certain number of gates get "eliminated" and that the result is a function inductively assumed to require a certain size. Gate elimination was first used by Schnorr to prove a 2n lower bound, later improved to 2.5n by Paul, and again by Blum, who in 1984 presented a 3n - o(n) lower bound for a family of functions over the full binary basis. This is still the best concrete lower bound known. For a description of the gate elimination method, see the survey of Boppana and Sipser or the essay by Blum. In both of these it is mentioned that it is unlikely that the gate elimination method will ever yield superlinear lower bounds.
In the case of general Boolean functions, there are a number of functions conjectured to have superlinear circuit size; examples include any NP-complete language. For linear operators there are, as far as the authors know, only a few families of matrices conjectured to have superlinear size. One of these is the Sierpinski gasket matrix (Aaronson, personal communication), described later in this paper.
One strategy for proving lower bounds is to prove lower bounds for a restricted circuit model, and then to prove that the sizes of circuits computing a function in the restricted model are not too much larger than in the original model. This was essentially the motivation for looking at monotone circuits. Razborov gave a superpolynomial lower bound for the Clique function for monotone circuits. The hope at that time was that monotone circuit complexity was polynomially related to general Boolean circuit complexity. This was later disproven by Razborov, who showed that the gap is superpolynomial. For more details, see the references.
2 Cancellation-free Linear Circuits
For linear circuits, the value computed by every gate is the parity function of some subset of the input variables. That is, the output of every gate u can be considered as a vector val(u) in the vector space F_2^n, where val(u)_i = 1 if and only if x_i is a term in the parity function computed by the gate u. We call val(u) the value vector of u, and for input variables define val(x_i) = e_i, that is, the unit vector having 1 in the ith coordinate and 0 in all others. It is clear by definition that if a gate u has the two children v and w, then val(u) = val(v) + val(w), where addition is coordinatewise in F_2. We say that a linear circuit is cancellation-free if for every pair of gates u, w where w is an ancestor of u, we have val(u) <= val(w), where <= denotes the usual coordinatewise partial order. That is, if x_i is a term in a gate, it is a term in all subsequent gates. The intuition behind this is that if this condition is satisfied, the circuit never exploits the fact that in F_2, x + x = 0. That is, things do not "cancel out" in the circuit. By definition, it is clear that any linear operator can be computed by a cancellation-free circuit. The following proposition comes directly from the definition of cancellation-freeness.
For a linear circuit C, the following are equivalent to C being cancellation-free:

For every pair of vertices u, w in C, there do not exist two disjoint paths in C from u to w.

For every gate u and input x_i with val(u)_i = 0, there is no path from x_i to u in C.

C does not contain the triangle as an undirected minor.
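The definition can also be checked gate by gate: a circuit is cancellation-free exactly when the two children of every gate have disjoint value vectors, since a shared term is where a cancellation x + x = 0 would occur. A small sketch (the circuit encoding is ours):

```python
def is_cancellation_free(n, gates):
    """gates: list of (name, left, right) in topological order; the inputs
    are 0..n-1. The circuit is cancellation-free iff at every gate the
    children's value vectors (term sets) are disjoint, so no variable
    ever cancels out."""
    vec = {i: frozenset([i]) for i in range(n)}
    ok = True
    for name, a, b in gates:
        if vec[a] & vec[b]:          # shared term: it would cancel here
            ok = False
        vec[name] = vec[a] ^ vec[b]  # symmetric difference = addition in F_2
    return ok

# (x_1 + x_2) + (x_2 + x_3): the two children share x_2, so x_2 cancels.
cancelling = [("g0", 0, 1), ("g1", 1, 2), ("g2", "g0", "g1")]
# x_1 + x_2, then + x_3: no term is ever shared.
free = [("g0", 0, 1), ("g1", "g0", 2)]
```

Both circuits compute linear functions, but only the second satisfies the coordinatewise partial-order condition along every path.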
The notion of cancellation-freeness was introduced by Boyar and Peralta in [7, 8]. Those papers concern straight-line programs for computing linear forms, a model equivalent to the one studied in this paper. They proved that the problem of finding shortest linear circuits for linear operators is NP-hard, even when restricted to cancellation-free circuits. They also noticed that most heuristics for constructing small linear circuits never exploit cancellation. They then constructed a gate-minimizing heuristic that does use cancellation.
3 Relationship Between Cancellation-free Linear Circuits and General Linear Circuits
Boyar and Peralta proved that there exists an infinite family of matrices for which the sizes of cancellation-free circuits computing them are at least a constant factor larger than the optimum. We call this ratio the cancellation ratio, rho. We can strengthen the lower bound on rho using a surprisingly simple matrix. This construction is originally due to Svensson.
There exists an infinite family of matrices such that any cancellation-free circuit computing them must have size a constant factor larger than the optimum; this gives a corresponding lower bound on rho.
Consider the matrix:
If one allows cancellation this matrix can be computed by a circuit of size , by first computing to obtain . For , adding to gives . Thus, we use gates to compute . After that we can obtain with one gate since .
Consider any cancellation-free linear circuit computing the matrix. Let the set S contain the gate computing the last output and all of its (non-input) predecessors. Clearly S cannot be small, since this output is the sum of many terms.
Notice that because the circuit is cancellation-free, none of the gates in S can compute any of the other output values. Therefore, for every remaining output we need at least one extra gate. Adding these contributions gives the claimed size, and the resulting ratio proves the theorem. ∎
It turns out that for almost every matrix, the cancellation ratio is constant.
Even if cancellation is allowed, almost every matrix needs Omega(n^2/log n) gates to be computed.
The number of n x n matrices is 2^(n^2). Since there are two inputs to each of the gates, and each of the n outputs is either the output of a gate or an input (or zero), the number of circuits with n inputs, n outputs and s gates is at most

(s + n + 1)^(2s) * (s + n + 1)^n.

Taking the logarithm, one gets

(2s + n) log(s + n + 1).

Recalling that s = n^2/(8 log n), for sufficiently large n this is at most

(2s + n) * 2 log n <= (3/4) n^2,

so the number of distinct circuits is at most 2^((3/4) n^2). Hence the number of matrices that can be computed with at most s gates is at most 2^((3/4) n^2). That is, the fraction of matrices not computable with this many gates is at least

1 - 2^((3/4) n^2) / 2^(n^2) = 1 - 2^(-n^2/4).

Since this fraction tends to 1, almost every matrix has circuit size at least n^2/(8 log n) = Omega(n^2/log n). ∎
We will now show that the construction in the proof of Theorem 1.1 produces a circuit that is cancellation-free. Before stating the lemma and its proof, we need the definition of a rectangular decomposition: Given a Boolean matrix A, the Boolean matrices B_1, ..., B_t constitute a rectangular decomposition if A = B_1 + ... + B_t, where addition is over the reals, and every B_i has rank 1. We say that the weight of B_i is the number of its nonzero columns plus the number of its nonzero rows. The weight of a rectangular decomposition is the sum of the weights of the B_i's. Lupanov showed that every n x n matrix admits a rectangular decomposition of weight O(n^2/log n).
Every n x n matrix can be computed by a cancellation-free linear circuit of size O(n^2/log n).
Let the Boolean matrix A be arbitrary. Consider the rectangular decomposition B_1, ..., B_t guaranteed by Lupanov's theorem. For each i, let c_i (r_i) denote the number of nonzero columns (rows) in B_i. For each i, add the inputs corresponding to the nonzero columns of B_i, using c_i - 1 gates; call the result s_i. Now each output is a sum of some of the s_i's. For each output, add the relevant s_i's; over all outputs this takes at most r_1 + ... + r_t gates. The total number of gates is therefore at most

(c_1 - 1) + ... + (c_t - 1) + r_1 + ... + r_t,

which is at most the weight of the decomposition, that is, O(n^2/log n).

Since the addition in the rectangular decomposition is over the reals, no variable can occur in two terms that get added anywhere in the circuit, so the circuit is cancellation-free. ∎
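The construction in this proof can be sketched concretely. Here the decomposition is given explicitly as row/column index sets (a format of our choosing), and the assertion checks the disjointness that makes the resulting circuit cancellation-free:

```python
def circuit_from_decomposition(n, blocks):
    """blocks: list of (rows, cols) pairs, each describing a rank-1 Boolean
    matrix with ones exactly on rows x cols; summed over the reals these
    give the target matrix. Returns the output value vectors (frozensets
    of input indices) and the number of XOR gates used."""
    gates = 0
    terms = [[] for _ in range(n)]
    for rows, cols in blocks:
        s = frozenset(cols)          # s_i: sum of the block's nonzero columns
        gates += len(cols) - 1       # c_i - 1 XORs build s_i from the inputs
        for i in rows:
            terms[i].append(s)
    outputs = []
    for t in terms:
        gates += max(len(t) - 1, 0)  # XORs combining the block sums per row
        acc = frozenset()
        for s in t:
            assert not (acc & s)     # the real-valued sum forces disjointness,
            acc |= s                 # so the circuit is cancellation-free
        outputs.append(acc)
    return outputs, gates

# Example: the matrix with rows (1,1,0), (1,1,1), (0,0,1) decomposes into
# a block on rows {0,1} x cols {0,1} plus a block on rows {1,2} x cols {2}.
outs, g = circuit_from_decomposition(3, [({0, 1}, {0, 1}), ({1, 2}, {2})])
```

The gate count is (c_1 - 1) + (c_2 - 1) plus one gate to combine the two block sums in the middle row, illustrating the counting in the proof.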
Combining the two lemmas we get the following:
For almost every matrix, the cancellation ratio rho is constant.
4 Lower Bound on the Size of Cancellation-free Circuits Computing the Sierpinski Gasket Matrix.
In this section we will prove that the Sierpinski gasket matrix needs (n/2) log n gates (logarithms are base 2) when computed by a cancellation-free linear circuit, and that this many gates also suffices.
Suppose some subset of the input variables is restricted to the value 0, and look at the resulting circuit. Some of the gates will now compute the value of one of their children, because the other child has become 0. In this case, we say that the gate is eliminated, since it no longer does any computation. The situation can be even more extreme: some gate might "compute" 0 + 0. In both cases, we can remove the gate from the circuit and forward the surviving value if necessary (if the removed gate was an output gate, the forwarded value now serves as that output). In the second case, the parent of the removed gate gets eliminated as well, so the effect might cascade. For any subset of the variables, there is a unique set of gates that become eliminated when setting these variables to 0.
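The elimination process can be made concrete. A minimal sketch (the circuit encoding is ours; the cascade rule below, where a gate's value becomes identically 0 exactly when both children's do, is valid for cancellation-free circuits):

```python
def eliminated_gates(n, gates, zeroed):
    """Propagate a restriction setting the inputs in `zeroed` to 0.
    gates: list of (name, left, right) in topological order; inputs 0..n-1.
    A gate is eliminated when at least one of its children attains the
    constant value 0 under the restriction."""
    zero = {i: (i in zeroed) for i in range(n)}  # value identically 0?
    elim = set()
    for name, a, b in gates:
        if zero[a] or zero[b]:
            elim.add(name)               # forwards its other child, or is 0
        zero[name] = zero[a] and zero[b]  # 0 + 0 = 0 cascades upward
    return elim
```

For example, with g0 = x_1 + x_2 and g1 = g0 + x_3, setting x_1 = x_2 = 0 eliminates g0 (it becomes 0) and then g1 (its child is 0), while setting only x_3 = 0 eliminates just g1.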
The Sierpinski gasket matrix is defined recursively: S_1 = (1), and S_{2n} is the block matrix with S_n in the upper left, the all-zero matrix in the upper right, and S_n in both the lower left and the lower right.
In all of the following, let n = 2^k and let S_n be the n x n Sierpinski gasket matrix. First we need a fact about S_n:
For every n, the determinant of the Sierpinski gasket matrix S_n is 1. In particular, the rows of S_n are linearly independent, also over F_2.
The determinant of a block triangular matrix is the product of the determinants of its diagonal blocks, so det(S_{2n}) = det(S_n) * det(S_n) = 1 by induction. ∎
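The recursion S_1 = (1), S_{2n} = ((S_n, 0), (S_n, S_n)) is easy to instantiate, and the property the lemma relies on, that S_n is lower triangular with 1's on the diagonal, can be checked directly (helper name ours):

```python
def sierpinski(n):
    """Build S_n for n a power of two via the block recursion
    S_1 = [[1]], S_2n = [[S_n, 0], [S_n, S_n]]."""
    if n == 1:
        return [[1]]
    S = sierpinski(n // 2)
    top = [row + [0] * (n // 2) for row in S]  # [S_n, 0]
    bot = [row + row for row in S]             # [S_n, S_n]
    return top + bot

S = sierpinski(8)  # lower triangular, ones on the diagonal
```

Since the matrix is lower triangular with unit diagonal, its determinant is 1 over the reals, hence odd, so the rows stay independent over F_2.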
For every n = 2^k, any cancellation-free circuit that computes the Sierpinski gasket matrix S_n has size at least (n/2) log n.
The proof is by induction on n. For the base case, look at the 2 x 2 matrix S_2. Computing its second output clearly needs at least 1 = (2/2) log 2 gate.
Suppose the statement is true for some n = 2^k; now look at the matrix S_{2n}. Denote the output gates y_1, ..., y_{2n} and the inputs x_1, ..., x_{2n}. Partition the gates of the circuit C into three disjoint sets A, B and D, defined as follows:

A: The gates taking inputs, directly or indirectly, only from x_1, ..., x_n. Equivalently, the gates not reachable from the inputs x_{n+1}, ..., x_{2n}.

B: The gates in C \ A that are not eliminated when the inputs x_1, ..., x_n are set to 0.

D: (C \ A) \ B. That is, the gates in C \ A that do become eliminated when the inputs x_1, ..., x_n are set to 0.

Obviously |C| = |A| + |B| + |D|. We will now give lower bounds on the sizes of A, B, and D.
A: Since the circuit is cancellation-free, the outputs y_1, ..., y_n depend only on x_1, ..., x_n, so these outputs and all their (non-input) predecessors are in A. They compute S_n, so by the induction hypothesis, |A| >= (n/2) log n.
B: Since the gates in B are not eliminated when x_1 = ... = x_n = 0, they compute on the inputs x_{n+1}, ..., x_{2n}. Under this restriction the outputs y_{n+1}, ..., y_{2n} compute S_n on these inputs, so by the induction hypothesis, |B| >= (n/2) log n.
D: The goal is to prove that this set has size at least n. Let E be the set of arcs from A together with the inputs x_1, ..., x_n into C \ A. We first prove that |D| >= |E|.

By definition, all gates in A, and the inputs x_1, ..., x_n, attain the value 0 when x_1, ..., x_n are set to 0. Let (u, w) in E be arbitrary. Since u attains the value 0, the gate w becomes eliminated, so w is in D. Moreover, every w in D can have only one such child, since a gate with both children among A and x_1, ..., x_n would itself belong to A. So |E| <= |D|.

We now show that |E| >= n. Let the endpoints of the arcs of E on the A-side be u_1, ..., u_t, and let their corresponding value vectors be val(u_1), ..., val(u_t).

Now look at the value vectors of the output gates y_{n+1}, ..., y_{2n}. For each of these, the prefix consisting of the first n coordinates must lie in the span of val(u_1), ..., val(u_t), since every path from one of x_1, ..., x_n to such an output must use an arc of E. These prefixes are exactly the rows of S_n, which by the lemma above span a space of dimension n, so t >= n, and hence |E| >= n.

We have that |C| = |A| + |B| + |D|, so

|C| >= (n/2) log n + (n/2) log n + n = (2n/2) log(2n),

which completes the induction.
It turns out that this is tight.
The Sierpinski gasket matrix S_n can be computed by a cancellation-free circuit using (n/2) log n gates.
This is clearly true for n = 1. Assume that S_n can be computed using (n/2) log n gates, and consider the matrix S_{2n}. Construct the circuit in a divide-and-conquer manner: construct S_n recursively on the variables x_1, ..., x_n and on x_{n+1}, ..., x_{2n}. The first recursion gives the outputs y_1, ..., y_n. After this, use n gates to finish the outputs y_{n+1}, ..., y_{2n} by adding the two recursive results coordinatewise. This adds up to exactly 2 * (n/2) log n + n = (2n/2) log(2n) gates. ∎
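The divide-and-conquer construction in this proof can be sketched directly; gates are represented by their value vectors, so the disjointness of the two halves (which makes the circuit cancellation-free) lets union stand in for XOR. The function name and encoding are ours:

```python
def build_sierpinski_circuit(xs):
    """xs: value vectors of the inputs (frozensets of input indices),
    length a power of two. Returns (output value vectors, gate count)
    for the recursive construction of S_n."""
    n = len(xs)
    if n == 1:
        return list(xs), 0
    top, g1 = build_sierpinski_circuit(xs[: n // 2])
    bot, g2 = build_sierpinski_circuit(xs[n // 2 :])
    # First n/2 outputs come from the first recursion unchanged; the
    # rest need n/2 extra XOR gates. The supports are disjoint, so the
    # union below is exactly the XOR of the value vectors.
    combined = [a | b for a, b in zip(top, bot)]
    return top + combined, g1 + g2 + n // 2

outs, count = build_sierpinski_circuit([frozenset([i]) for i in range(8)])
# count == 12 == (8/2) * log2(8), matching the bound
```

The recursion T(n) = 2 T(n/2) + n/2 with T(1) = 0 solves to exactly (n/2) log n, so the upper bound matches the lower bound of Theorem 4.1.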
5 Stronger Lower Bounds
Mehlhorn proved lower bounds on monotone circuits computing "Boolean sums". The same proof strategy can be used to prove lower bounds on cancellation-free linear circuits. For a matrix A, denote by CF(A) the size of a smallest cancellation-free linear circuit that computes A, and by |A| the number of 1's in A. Let K_{p,q} be the complete bipartite graph with p vertices in one vertex set and q in the other.
Let A be an n x n matrix. Interpret A as the adjacency matrix of a bipartite graph in the natural way. If this graph does not contain K_{p,q} for constants p and q, then CF(A) = Omega(|A|).
Consider the class of cancellation-free linear circuits where all sums of at most p variables are available for free, and let CF_p(A) be the size of a smallest such circuit computing A. Obviously CF(A) >= CF_p(A). Since all sums of at most p variables are available for free, anything computed at a gate in such a circuit is a sum of more than p variables. Since the circuit is cancellation-free, value vectors never decrease along a path: the value vector of a successor of a gate u has a 1 in every coordinate where val(u) has a 1. In particular, since the matrix does not contain K_{p,q}, any gate can have a path to at most q - 1 outputs, for otherwise q rows of A would share more than p common 1-coordinates.

For a fixed row i with |A_i| ones, the cost of computing it from the free sums is at least |A_i|/p - 1.

Since a gate has a path to at most q - 1 outputs, if we sum over all rows we count each gate at most q - 1 times. So the total size of the circuit is at least

(1/(q - 1)) * sum over i of (|A_i|/p - 1) = |A|/(p(q - 1)) - n/(q - 1) = Omega(|A|). ∎
Now, proving lower bounds for cancellation-free linear circuits is reduced to the problem of finding dense bipartite graphs not containing K_{p,q}. This problem is known as the Zarankiewicz problem.
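The graph condition in the theorem is easy to state programmatically. A brute-force K_{p,q} check, feasible only for tiny matrices (function name ours):

```python
from itertools import combinations

def contains_Kpq(M, p, q):
    """Does the bipartite graph with n x n adjacency matrix M contain
    K_{p,q}, i.e. p columns whose rows of ones share q common rows?
    Exhaustive over column p-subsets, so exponential in p."""
    n = len(M)
    cols = [frozenset(i for i in range(n) if M[i][j]) for j in range(n)]
    for subset in combinations(range(n), p):
        common = frozenset.intersection(*(cols[j] for j in subset))
        if len(common) >= q:
            return True
    return False

identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
ones = [[1] * 4 for _ in range(4)]
```

The identity matrix is K_{2,2}-free but far too sparse to give a superlinear bound; the Zarankiewicz problem asks how many ones a K_{p,q}-free matrix can have, and Andreev's construction supplies dense examples.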
For every eps > 0, there exists a concrete family of matrices that requires Omega(n^(2-eps)) gates when computed by a cancellation-free linear circuit.
6 Conclusion and Open Problems
What is the value of rho? If rho = O(n^(1-eps)) for some eps > 0, then Corollary 1 provides an unconditional superlinear lower bound for a concrete family of matrices.
In the proof of Theorem 4.1, we did not use the cancellation-free property as extensively as we did in the proof of Theorem 5.1. We only used that there is no path from the inputs x_{n+1}, ..., x_{2n} to the outputs y_1, ..., y_n. Another strategy for proving an unconditional lower bound on the size of circuits computing the Sierpinski gasket matrix would be to prove that for any optimal circuit no such path exists. Then the theorem would follow, even with cancellations.
-  Aaronson, S.: Thread on cstheory.stackexchange.com. http://cstheory.stackexchange.com/questions/1794/circuit-lower-bounds-over-arbitrary-sets-of-gates
-  Alon, N., Karchmer, M., Wigderson, A.: Linear circuits over GF(2). SIAM J. Comput. 19(6), 1064–1067 (1990)
-  Andreev, A.E.: On a family of Boolean matrices. Vestnik Moskovskogo Universiteta 41(2), 97–100 (1986), English translation: Moscow Univ. Math. Bull., 41, 1986 79-82
-  Blum, N.: A Boolean function requiring 3n network size. Theor. Comput. Sci. 28, 337–345 (1984)
-  Blum, N.: On negations in Boolean networks. In: Albers, S., Alt, H., Näher, S. (eds.) Efficient Algorithms. Lecture Notes in Computer Science, vol. 5760, pp. 18–29. Springer (2009)
-  Boppana, R.B., Sipser, M.: The complexity of finite functions. In: Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity (A), pp. 757–804. Laboratory for Computer Science, Massachusetts Institute of Technology (1990)
-  Boyar, J., Matthews, P., Peralta, R.: Logic minimization techniques with applications to cryptology. Journal of Cryptology (2012), to appear
-  Boyar, J., Matthews, P., Peralta, R.: On the shortest linear straight-line program for computing linear forms. In: Ochmanski, E., Tyszkiewicz, J. (eds.) MFCS. Lecture Notes in Computer Science, vol. 5162, pp. 168–179. Springer (2008)
-  Brown, W.: On graphs that do not contain a Thomsen graph. Canad. Math. Bull 9(2), 1–2 (1966)
-  Drucker, A.: Limitations of lower-bound methods for the wire complexity of Boolean operators. Electronic Colloquium on Computational Complexity (ECCC) 18, 125 (2011)
-  Gál, A., Hansen, K.A., Koucký, M., Pudlák, P., Viola, E.: Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates. Electronic Colloquium on Computational Complexity (ECCC) 18, 150 (2011)
-  Jukna, S.: Boolean Function Complexity: Advances and Frontiers. Springer Berlin Heidelberg (2012)
-  Kollár, J., Rónyai, L., Szabó, T.: Norm-graphs and bipartite Turán numbers. Combinatorica 16(3), 399–406 (1996)
-  Lupanov, O.: On rectifier and switching-and-rectifier schemes. Dokl. Akad. Nauk SSSR 111, 1171–1174 (1956)
-  Mehlhorn, K.: Some remarks on Boolean sums. Acta Informatica 12, 371–375 (1979)
-  Paul, W.J.: A 2.5 n-lower bound on the combinational complexity of Boolean functions. SIAM J. Comput. 6(3), 427–443 (1977)
-  Razborov, A.: Lower bounds of monotone complexity of the logical permanent function. Matematicheskie Zametki 37(6), 887–900 (1985), English translation in Mathematical Notes of the Academy of Sci. of the USSR, 37:485-493, 1985
-  Razborov, A.: Lower bounds on the monotone complexity of some Boolean functions. Doklady Akademii Nauk SSSR 281(4), 798–801 (1985), English translation: Soviet Mathematics Doklady 31, 354–357
-  Schnorr, C.P.: Zwei lineare untere schranken für die komplexität Boolescher funktionen. Computing 13(2), 155–171 (1974)
-  Shannon, C.: The synthesis of two-terminal switching circuits. Bell System Technical Journal 28(1), 59–98 (1949)
-  Svensson, J.: Minimizing the Number of XOR Gates in Circuits Computing Linear Forms. Master’s thesis, Department of Mathematics and Computer Science, University of Southern Denmark (2011)
-  Wegener, I.: The Complexity of Boolean Functions. Wiley-Teubner (1987)