Separating OR, SUM, and XOR Circuits¹

¹ This work is an extended version of two preliminary conference abstracts (8; 14).
Abstract
Given an n by n boolean matrix A, we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating these models in terms of their circuit complexities. We give three results towards this goal:

We prove a direct sum type theorem on the monotone complexity of tensor product matrices. As a corollary, we obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^{3/2}/log² n).

We construct so-called k-uniform matrices that admit XOR-circuits of size O(n), but require OR-circuits of size Ω(n²/log² n).

We consider the task of rewriting a given OR-circuit as an XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis.
keywords:
arithmetic circuits, boolean arithmetic, idempotent arithmetic, monotone separations, rewriting

1 Introduction
A basic question in arithmetic complexity is to determine the minimum size of an arithmetic circuit that evaluates a linear map x ↦ Ax. In this work we approach this question from the perspective of relative complexity by varying the circuit model while keeping the matrix A fixed, with the goal of separating different circuit models. That is, our goal is to show the existence of matrices A that admit small circuits in one model but have only large circuits in a different model.
We will focus on boolean arithmetic and the following three circuit models. Our circuits consist of either

only OR gates (i.e., boolean sums; rectifier circuits),

only SUM gates (i.e., integer addition; cancellation-free circuits), or

only XOR gates (i.e., integer addition mod 2).
These three types of circuits have been studied extensively in their own right (see Section 2), but fairly little is known about their relative powers.
Each model admits a natural description both from an algebraic and a combinatorial perspective.
Algebraic perspective
In the three models under consideration, each circuit with n inputs and m outputs computes a vector of linear forms in the formal variables x = (x₁, …, xₙ).
That is, the circuit computes y = Ax, where A is an m by n boolean matrix with entries A_{ij} ∈ {0, 1} and the arithmetic is either

in the boolean semiring ({0, 1}, ∨, ∧),

in the semiring of non-negative integers (ℕ, +, ·), or

in ℤ₂, the integers modulo 2.
As an example, Fig. 1 displays two circuits for computing y = Ax for the same A using two different operators; the circuit on the right requires one more gate.
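To make the three arithmetics concrete, the following sketch (the wire-list representation, the variable names, and the toy circuit are ours, not the paper's) evaluates one and the same circuit graph under the three operators. Because the variable x1 reaches the output y0 along two paths, the three semantics genuinely diverge: under XOR the two occurrences of x1 cancel, while under SUM they are counted twice.

```python
from functools import reduce

# Toy circuit as adjacency lists: each gate lists its children.
# Inputs are "x0", "x1", "x2"; note that x1 reaches y0 along two paths.
WIRES = {"g": ["x0", "x1"], "y0": ["g", "x1"], "y1": ["g", "x2"]}

def evaluate(wires, inputs, op):
    """Evaluate every gate bottom-up with the given binary operator."""
    memo = dict(inputs)
    def val(gate):
        if gate not in memo:
            memo[gate] = reduce(op, (val(c) for c in wires[gate]))
        return memo[gate]
    return {g: val(g) for g in wires}

x = {"x0": 1, "x1": 1, "x2": 0}
or_out  = evaluate(WIRES, x, lambda a, b: a | b)        # boolean semiring
sum_out = evaluate(WIRES, x, lambda a, b: a + b)        # non-negative integers
xor_out = evaluate(WIRES, x, lambda a, b: (a + b) % 2)  # arithmetic mod 2
# y0 is x0 OR x1 under |, the integer x0 + 2*x1 under +, and just x0
# under XOR, since the two copies of x1 cancel.
```

This divergence is also why the SUM model is the most restrictive of the three: in a SUM-circuit each variable reaches each output along at most one path, so the same wiring is simultaneously meaningful in all three arithmetics.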
Combinatorial perspective
A circuit computing y = Ax for a boolean matrix A can also be viewed combinatorially: every gate g is associated with a subset of the formal variables {x₁, …, xₙ}; this set is called the support of g and it is denoted supp(g). The input gates correspond to the singletons {x_j}, j ∈ [n], and every non-input gate computes either

the set union (OR),

the disjoint set union (SUM), or

the symmetric difference (XOR) of its children.
This way an output gate y_i will have supp(y_i) = {x_j : A_{ij} = 1}.
Note the special structure of a SUM-circuit: there is at most one directed path from any input x_j to any output y_i. In fact, from this perspective, every SUM-circuit for A is easy to interpret both as an OR-circuit for A, and as an XOR-circuit for A (equivalently, there are onto homomorphisms from (ℕ, +) to ({0, 1}, ∨) and to (ℤ₂, +)). In this sense, both OR- and XOR-circuits are at least as efficient as SUM-circuits.
Relative complexity
More generally we fix a boolean matrix A and ask how the circuit complexity of computing y = Ax depends on the underlying arithmetic.
To make this quantitative, denote by OR(A), SUM(A), and XOR(A) the minimum number of wires in an unbounded fan-in circuit for computing Ax in the respective models. For simplicity, we restrict our attention to the case of square matrices so that m = n.
For two distinct models X, Y ∈ {OR, SUM, XOR}, we are interested in the complexity ratios

max_A Y(A)/X(A),

where the maximum ranges over n × n boolean matrices A. For example, we have that OR(A) ≤ SUM(A) and that XOR(A) ≤ SUM(A) for all A, by the above fact that each SUM-circuit can be interpreted as an OR-circuit and as an XOR-circuit.
1.1 Our results
We begin by studying the monotone complexity of tensor product matrices of the form A ⊗ B,
where ⊗ denotes the usual Kronecker product of matrices. In Section 3, we prove a direct sum type theorem on their monotone complexity. As a corollary, we obtain matrices that are easy for OR-circuits, with OR-complexity O(n), but hard for SUM-circuits, with SUM-complexity Ω(n^{3/2}/log² n). This implies our first separation:
Theorem 1.
There are n × n boolean matrices A with OR(A) = O(n) but SUM(A) = Ω(n^{3/2}/log² n); in particular, the worst-case SUM/OR complexity ratio is Ω(n^{1/2}/log² n).
We are not aware of any prior lower bound techniques that work against SUM-circuits, but not against OR-circuits. Hence, as far as we know, Theorem 1 is a first step in this direction.
Next, we separate OR- and SUM-circuits from XOR-circuits by considering matrices that look locally random in the following sense:
Definition (k-uniformity).
A random matrix A ∈ {0, 1}^{n×n} is called k-uniform if the entries in every k × k submatrix have a marginal distribution that is uniform on {0, 1}^{k×k}.
Equivalently, a matrix is k-uniform if each of its entries is 0 or 1 with equal probability and the entries in every k × k submatrix are mutually independent.
In Section 4 we construct k-uniform matrices that are easy for XOR-circuits:
Theorem 2.
There are k-uniform matrices A ∈ {0, 1}^{n×n} having XOR(A) = O(n), for any k = O(√n/log n).
These k-uniform matrices turn out to be difficult to compute using monotone circuits. Indeed, as a corollary, we will obtain our second separation:
Corollary 3.
There are n × n boolean matrices A with XOR(A) = O(n) but OR(A) = Ω(n²/log² n); in particular, the worst-case OR/XOR complexity ratio is Ω(n/log² n).
Separations between OR- and XOR-circuits have also been considered by Sergeev et al. (11; 12), who proved a slightly weaker bound. Furthermore, Jukna (17) has informed us that the bound in Corollary 3 can actually be proved more directly using existing methods (15; 28). Nevertheless, we hope our alternative approach via uniform matrices might be of independent interest, for example in closing the gap between the current lower bound Ω(n²/log² n) and the best known upper bound O(n²/log n); see Section 2.
As in the case of the OR/XOR ratio, we conjecture more generally that all the non-trivial complexity gaps between the three models are of order n, up to polylogarithmic factors. While we are unable to enlarge the gap in Theorem 1, or prove any super-constant lower bounds on the XOR/OR ratio, our final result provides some evidence towards these conjectures.
In Section 5, we show that if certain OR-circuits that are derived from CNF formulas could be efficiently rewritten as equivalent SUM- or XOR-circuits, this would imply unexpected consequences for exponential-time algorithms. More precisely, we study the following problem.
 The X-Rewrite problem (for X ∈ {SUM, XOR}):

On input an OR-circuit C, output an X-circuit that computes the same matrix as C.
Both SUM-Rewrite and XOR-Rewrite admit simple algorithms that, on an input circuit of size N, output a circuit of size O(N²) in time O(N²). However, we show that any significant improvement on these algorithms would give a non-trivial exponential-time algorithm for deciding whether an n-variable CNF formula is satisfiable; this violates the strong exponential time hypothesis (13):
Theorem 4.
Neither SUM-Rewrite nor XOR-Rewrite can be solved in time O(N^{2−ε}) for any constant ε > 0, unless the strong exponential time hypothesis fails.
Theorem 4 provides evidence, e.g., for the SUM/OR conjecture in the following sense. If there is a family of matrices witnessing a near-quadratic gap between SUM- and OR-complexity, then clearly no subquadratic-time algorithm for SUM-Rewrite exists: if we are given a minimum-size OR-circuit for such a matrix as input, there is no time to write down a legal output.
Our proof of Theorem 4 shows, in particular, that an O(N^{2−ε}) time algorithm for SUM-Rewrite would give an improved algorithm for counting the number of satisfying assignments of a given CNF formula (#CNF-SAT). Similarly, an O(N^{2−ε}) time algorithm for XOR-Rewrite would give an improved algorithm for deciding whether the number of satisfying assignments is odd (⊕CNF-SAT).
1.2 Notation
A circuit is a directed acyclic graph where the vertices of in-degree (or fan-in) zero are called input gates and all other vertices are called arithmetic gates. One or more arithmetic gates are designated as output gates. The size of the circuit is the number of edges (or wires) in the circuit.
We abbreviate [n] = {1, 2, …, n}; all our logarithms are to base 2 by default; and we write random variables in boldface.
2 Related work
Upper bounds
The trivial depth-1 circuit for a boolean matrix A uses |A| wires, where we denote by |A| the weight of A, i.e., the number of 1-entries in A. Even though |A| might be of order n², Lupanov (as presented by Jukna (16, Lemma 1.2)) constructs depth-2 circuits (applicable in all the three models) of size O(n²/log n) for any A. This implies the universal upper bound

OR(A), SUM(A), XOR(A) = O(n²/log n).
Lower bounds
Standard counting arguments (16, §1.4) show that most matrices have wire complexity Ω(n²/log n) in each of the three models. Combining this with Lupanov's upper bound we conclude that a random matrix does little to separate our models:
Fact 1.
For a uniformly random A ∈ {0, 1}^{n×n}, the ratio of any two of OR(A), SUM(A), and XOR(A) is bounded by a constant w.h.p.
Unsurprisingly, it can also be shown that finding a minimum-size circuit for a given matrix is NP-hard in all the models. For OR- and SUM-circuits this follows from the NP-completeness of the Ensemble Computation problem as defined by Garey and Johnson (10, PO9). For XOR-circuits this was proved by Boyar et al. (5).
OR-circuits
The study of OR-circuits (sometimes called rectifier circuits) has been centered around finding explicit matrices that are hard for OR-circuits. Here, dense rectangle-free matrices and their generalisations, k-free matrices, are a major source of lower bounds.
Definition.
A matrix is called k-free if it does not contain a (k+1) × (k+1) all-1 submatrix. Moreover, a matrix is simply called free if it is 1-free.
Nechiporuk (24) and independently Lamagna and Savage (21) constructed the first examples of dense free matrices, achieving OR(A) = Θ(n^{3/2}). Subsequently, Mehlhorn (22) and Pippenger (26) established the following theorem that gives a general template for this type of lower bound; we use it extensively later.
Theorem 5 (Mehlhorn–Pippenger).
If A is k-free, then OR(A) = Ω(|A|/k²).
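On small instances the freeness condition is easy to check by brute force. In the sketch below (entirely ours) we read "k-free" as "no (k+1)-by-(k+1) all-ones submatrix", per the definition above, and find the least such k for a small random matrix.

```python
from itertools import combinations
import random

def is_k_free(A, k):
    """True iff A contains no (k+1)-by-(k+1) all-ones submatrix (brute force)."""
    n = len(A)
    for rows in combinations(range(n), k + 1):
        # columns that are all-ones on the chosen rows
        ones = [j for j in range(n) if all(A[i][j] for i in rows)]
        if len(ones) >= k + 1:
            return False
    return True

random.seed(0)
n = 12
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
weight = sum(map(sum, A))   # |A|, the weight appearing in Theorem 5
k = next(k for k in range(1, n) if is_k_free(A, k))
```

The brute-force check is exponential in k and only meant to illustrate the definition; the lower-bound template needs matrices that are dense yet k-free for small k.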
XOR-circuits
It is a long-standing open problem to exhibit explicit matrices that require super-linear size XOR-circuits. No such lower bounds are known even for log-depth circuits, and the only successes are in the case of bounded depth (2; 9), (16, §13.5). This, together with Fact 1, makes it particularly difficult to prove lower bounds on the XOR/OR complexity ratio.
SUM-circuits
Additive circuits have been studied extensively in the context of the addition chain problem (see Knuth (18, §4.6.3) for a survey) and its generalisations (27).
In cryptography, as observed by Boyar et al. (5), many heuristics that have been proposed for finding small XOR-circuits produce, in fact, cancellation-free circuits, i.e., circuits that do not exploit the cancellation of variables that is available in ℤ₂. Thus, the ratio SUM(A)/XOR(A) gives a lower bound on the approximation ratio achieved by any such minimisation heuristic.
Algebraic complexity
A particular motivation for studying the separation between OR- and SUM-circuits is to understand the complexity of zeta transforms on partial orders (3). Indeed, the characteristic matrix of every partial order P has an OR-circuit whose size is proportional to the number of covering pairs in P, but the existence of small SUM-circuits (and hence fast zeta transforms) is not currently understood satisfactorily.
Strong exponential time hypothesis
The strong exponential time hypothesis (SETH) of Impagliazzo and Paturi (13) asserts, roughly, that CNF satisfiability admits no algorithm that is exponentially faster than exhaustive search: for every ε > 0 there is a clause width w such that satisfiability of n-variable w-CNF formulas cannot be decided in time O(2^{(1−ε)n}). In particular, under SETH, satisfiability of general n-variable CNF formulas cannot be decided in time 2^{(1−ε)n} · poly for any ε > 0.
3 SUM/OR Separation
In this section we give a direct sum type theorem for the monotone complexity of tensor product matrices. Using this, we obtain a separation of the form

(1)  OR(A ⊗ B) = O(N) and SUM(A ⊗ B) = Ω(N^{3/2}/log² N),

where ⊗ denotes the usual Kronecker product of matrices and N denotes the number of input and output variables. This will prove Theorem 1.
3.1 Tensor products
As a first example, let B be a fixed n × n boolean matrix and consider the matrix product

(2)  X ↦ BX,

where we think of X as an n × n matrix of input variables. If we arrange these variables into a column vector x of length n² by stacking the columns of X on top of one another, then (2) becomes

(3)  x ↦ (I ⊗ B)x,

where I is the n × n identity matrix. That is, I ⊗ B is the n² × n² block matrix having n copies of B on the diagonal.
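The vectorisation step can be checked mechanically. A minimal dependency-free sketch (all names ours), using the boolean semiring for the arithmetic:

```python
def mat_mul(A, B):
    """Boolean matrix product over the OR/AND semiring."""
    n, m, p = len(A), len(B), len(B[0])
    return [[1 if any(A[i][k] and B[k][j] for k in range(m)) else 0
             for j in range(p)] for i in range(n)]

def kron_identity(R, copies):
    """I (x) R: the block-diagonal matrix with `copies` copies of R."""
    n = len(R)
    N = copies * n
    M = [[0] * N for _ in range(N)]
    for c in range(copies):
        for i in range(n):
            for j in range(n):
                M[c * n + i][c * n + j] = R[i][j]
    return M

def vec(X):
    """Stack the columns of X into one long vector (column-major order)."""
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

R = [[1, 0], [1, 1]]
X = [[0, 1], [1, 1]]
M = kron_identity(R, copies=len(X[0]))
lhs = [1 if any(M[i][k] and vec(X)[k] for k in range(len(M))) else 0
       for i in range(len(M))]
rhs = vec(mat_mul(R, X))
```

The block-diagonal structure built by kron_identity shows each copy of R acting on one column of X; this per-column independence is exactly what the monotone lower bound below exploits.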
The transformation (3) famously admits non-trivial XOR-circuits due to the fact that fast matrix multiplication algorithms can be expressed as small bilinear circuits over ℤ₂. However, it is easy to see that in the case of our monotone models, no non-trivial speedup is possible: any circuit for (3) must compute (2) independently n times:

(4)  OR(I ⊗ B) = n · OR(B) and SUM(I ⊗ B) = n · SUM(B).

This follows from the observation that two subcircuits corresponding to two different columns of X cannot share gates due to monotonicity.
Our approach
We will generalise the above setting slightly and use tensor products of the form A ⊗ B to separate OR- and SUM-circuits. Analogously to (2), one can check that the matrix A ⊗ B corresponds to computing the mapping

(5)  X ↦ BXAᵀ.

We aim to show that for suitable choices of A and B computing (5) is easy for OR-circuits but hard for SUM-circuits. We will choose B to have large complexity (e.g., choose B at random), and think of A as dictating how many independent copies of B a monotone circuit must compute.
More precisely, define r_OR(A) and r_SUM(A) as the minimum r such that A can be written as A = PQ over the boolean semiring or over the semiring of non-negative integers, respectively, where P and Q are m × r and r × m matrices. Equivalently, r_OR(A) (resp., r_SUM(A)) is the minimum number of rectangles (resp., non-overlapping rectangles) that are required to cover all the 1-entries of A.
These cover numbers appear often in the study of communication complexity (20). In this context, the matrix Ī (the boolean complement of the m × m identity matrix I) is the usual example demonstrating a large gap between the two concepts (20, Example 2.5):

r_OR(Ī) = O(log m), whereas r_SUM(Ī) = Ω(m).

We will use this gap to show that, up to polylogarithmic factors, SUM(Ī ⊗ B) exceeds OR(Ī ⊗ B) by a factor of roughly m. In terms of the number of input variables N, we will obtain (1).
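For the first half of this gap, a cover of the complemented identity by logarithmically many rectangles is a standard bit-trick; the sketch below (entirely ours) covers every off-diagonal entry with one rectangle per (bit position, bit value) pair.

```python
m = 16
bits = m.bit_length() - 1  # m = 2**bits

# For each bit position b and value v, take the rectangle
#   rows {i : bit b of i equals v}  x  cols {j : bit b of j differs}.
# Two distinct indices differ in some bit, so every off-diagonal entry
# (i, j) with i != j is covered, while no diagonal entry ever is.
rectangles = []
for b in range(bits):
    for v in (0, 1):
        rows = [i for i in range(m) if (i >> b) & 1 == v]
        cols = [j for j in range(m) if (j >> b) & 1 != v]
        rectangles.append((rows, cols))

covered = {(i, j) for rows, cols in rectangles for i in rows for j in cols}
off_diagonal = {(i, j) for i in range(m) for j in range(m) if i != j}
assert covered == off_diagonal   # 2 * log2(m) rectangles suffice
```

Note that these rectangles overlap heavily, which is fine over the boolean semiring but illegal in a disjoint (SUM) cover; that asymmetry is the source of the gap.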
3.2 Upper bound for OR-circuits
Suppose A = PQ over the boolean semiring, where P and Q are m × r and r × m matrices. We can compute (5) as

X ↦ (B(XQᵀ))Pᵀ,

which requires 3 matrix multiplications, each involving r as one of the dimensions (the other dimensions being at most max{m, n}).
If these 3 multiplications are naively implemented with an OR-circuit of depth 3, each layer will contain at most r · max{m, n}² wires so that OR(A ⊗ B) = O(r · max{m, n}²). However, one can still use Lupanov's techniques to save an additional logarithmic factor: Corollary 1.35 in Jukna (16) can be applied to show that each of the three multiplications above can be computed using O(r · max{m, n}²/log max{m, n}) wires. Thus, for r = r_OR(A) we get
Lemma 6.
OR(A ⊗ B) = O(r_OR(A) · max{m, n}²/log max{m, n}) for all A ∈ {0,1}^{m×m} and B ∈ {0,1}^{n×n}. ∎
3.3 Lower bound for SUM-circuits
Intuitively, since low-rank decompositions of Ī are not available in the semiring of non-negative integers, a SUM-circuit for Ī ⊗ B should be forced to compute m independent copies of B. More generally, we ask
Direct sum question.
Do we have SUM(A ⊗ B) = Ω(r_SUM(A) · SUM(B)) for all A, B?
Alas, we can answer this affirmatively only in some special cases. For example, the trivial case A = I was discussed above (4), and it is not hard to generalise the argument to show that the lower bound holds in case A admits a fooling set of size r_SUM(A). (When A is viewed as an incidence matrix of a bipartite graph, a fooling set is a matching no two of whose edges induce a 4-cycle. See (20, §1.3).) However, since this will not be the case when A = Ī, we will settle for the following version, which suffices for the separation result.
Theorem 7.
For all k-free B ∈ {0,1}^{n×n},

(6)  SUM(Ī ⊗ B) = Ω(m · |B|/k²).
For the purposes of the proof we switch to the combinatorial perspective: For Ī and B we introduce two sets of formal variables, u_a for a ∈ [m] and x_j for j ∈ [n]. Moreover, we let ū_a and z_i denote the associated outputs. That is, each output ū_a is defined by one row of Ī, with supp(ū_a) = {u_b : b ≠ a}, and each output z_i is defined by one row of B, with supp(z_i) = {x_j : B_{ij} = 1}. With this terminology, the input variables for Ī ⊗ B are the pairs in {x₁, …, x_n} × {u₁, …, u_m}; we think of the x_j as indexing the rows and the u_a as indexing the columns of the variable matrix X. Finally, Ī ⊗ B corresponds to computing the outputs (i, a) ∈ [n] × [m] with supports

{(x_j, u_b) : B_{ij} = 1 and b ≠ a}.
In the following proof we use the k-freeness of B to "zoom in" on that layer of the circuit which reveals the large wire complexity (similarly to Mehlhorn (22)). We advise the reader to first consider the case k = 1, as this already contains the main idea of the proof.
Proof of Theorem 7.
Let C be a SUM-circuit computing Ī ⊗ B. As a first step, we simplify C by allowing input gates to have larger-than-singleton supports. Namely, let G consist of those gates of C whose supports are contained in a wide row cylinder, i.e., a set of the form {x_j} × U where j ∈ [n] and U ⊆ {u₁, …, u_m}. We simply declare that all computations done by gates in G come for free: we promote a gate in G to an input gate and delete all its incoming wires. We continue to denote the modified circuit by C; clearly, these modifications only decrease its wire complexity.
Call a wire that is connected to an input gate an input wire and denote the set of input wires by W. The wire complexity lower bound (6) will follow already from counting the number of input wires.
For a ∈ [m] denote by C_a the subcircuit of C computing the outputs (i, a), i ∈ [n], and denote by W_a the input wires of C_a; we claim that

(7)  |W_a| = Ω(|B|/k).

Before we prove (7), we note how it implies the theorem. Each input wire w ∈ W is feeding into a non-input gate, and such gates have their supports not contained in a wide row cylinder. Due to the k-freeness of B this means that w can appear only in at most k different W_a. Thus, the sum Σ_{a∈[m]} |W_a| counts each input wire at most k times and, more generally, we have

SUM(Ī ⊗ B) ≥ |W| ≥ (1/k) Σ_{a∈[m]} |W_a| = Ω(m · |B|/k²).
Proof of (7). Fix a ∈ [m]. If W_a is empty the claim is trivial. Otherwise fix a variable u_b with b ≠ a and consider the structure of C_a when restricted to the variables (x_j, u_b), j ∈ [n]. Since this set of variables can be naturally identified with {x₁, …, x_n} by ignoring the second coordinate, we can view C_a as computing a copy of B on the variables (x_j, u_b).
Indeed, we define the support of an input wire w to be the set of j ∈ [n] such that the variable (x_j, u_b) is contained in the support of w. (The support of w is simply the support of the adjacent input gate.) Moreover, we let

W_{a,b} = {w ∈ W_a : the support of w is non-empty}.
Put otherwise, W_{a,b} consists of the input wires that are used by C_a in computing a copy of B on the variables (x_j, u_b). Associate to each w ∈ W_{a,b} a rectangle

R_w = supp(w) × I_w,

where I_w is the set of i ∈ [n] such that w appears in the subcircuit of C_a that computes the output (i, a). Now, the crucial observation is that the collection of rectangles {R_w : w ∈ W_{a,b}} is a non-overlapping cover of B, because C_a computes a copy of B by taking disjoint unions of the supports supp(w). Therefore, we must have that

(8)  |W_{a,b}| ≥ r_SUM(B).
As will be shortly discussed in Section 4.1, a random matrix B ∈ {0,1}^{n×n} is k-free for k = 2 log n and has weight |B| = Ω(n²) w.h.p. Using these facts we obtain the following corollary, which, together with Lemma 6, proves Theorem 1.
Corollary 8.
A random B ∈ {0,1}^{n×n} satisfies SUM(Ī ⊗ B) = Ω(n³/log² n) w.h.p., where Ī is the n × n complemented identity matrix. ∎
4 XOR/OR Separation
In this section we use the probabilistic method to construct k-uniform matrices A that, for large enough n, will witness the following complexity gap with high probability:

XOR(A) = O(n) and OR(A) = Ω(n²/log² n).
In what follows, all matrix arithmetic will be over ℤ₂.
4.1 Motivation for uniform matrices
Suppose first that R is a random matrix where each entry is drawn uniformly and independently from {0, 1}. The probability that R fails to be k-free can be bounded from above by taking the union bound over all possible (k+1) × (k+1) submatrices:

(9)  Pr[R is not k-free] ≤ (n choose k+1)² · 2^{−(k+1)²} ≤ n^{2(k+1)} · 2^{−(k+1)²}.

It is easy to check (and well-known in the context of random graphs (4, §11)) that for k = 2 log n this quantity tends to 0 as n → ∞.
Our key observation here is that the estimate (9) only uses the property that the entries in each (k+1) × (k+1) submatrix of R are mutually independent. Indeed, the above analysis holds even when R is merely (k+1)-uniform. Thus, we have the following lemma.
Lemma 9.
If A is k-uniform for k ≥ 2 log n + 1, then w.h.p., OR(A) = Ω(n²/log² n).
Proof.
Any uniform matrix has pairwise independent entries, so that |A| = Ω(n²) w.h.p. by Chebyshev's inequality. On the other hand, the above discussion implies that A is (2 log n)-free w.h.p. Thus, the claim follows from Theorem 5.
∎
4.2 Proof of Theorem 2
Let k ≤ √n/log n. To construct a k-uniform matrix A we start with an m × n matrix B that satisfies the following two properties:

(1) B has linear XOR-complexity, XOR(B) = O(n).

(2) Each set of k columns of B is linearly independent.
Miltersen (23) shows that such a B can be obtained as a submatrix of certain generating matrices of linear codes, e.g., those of Spielman (29).
Theorem 10 (Miltersen (23, Theorem 1.4)).
Let S ⊆ {0,1}ⁿ. There are matrices B ∈ {0,1}^{m×n} with m = O(log |S|) and XOR(B) = O(n) such that the mapping x ↦ Bx is injective on S.
Indeed, let S be the set of vectors of Hamming weight at most k. Note that if x ↦ Bx is injective on S, then B clearly has property (2): a linear dependency among k columns of B would yield a non-zero x ∈ S with Bx = B0. We also have that |S| ≤ (n+1)^k and so log |S| = O(k log n). Thus, if we set m = Θ(k log n), we can apply Theorem 10 to obtain our desired matrix B.
We can now define

A = Bᵀ C B,

where C is an m × m matrix chosen uniformly at random; note that A is an n × n matrix and that m² = O(k² log² n) = O(n). If we compute Ax = Bᵀ(C(Bx)) in three stages in the obvious way, we obtain

XOR(A) ≤ XOR(B) + m² + XOR(Bᵀ) = O(n),

where we used the fact that XOR(Bᵀ) = O(XOR(B) + m + n); roughly, this follows from simply reversing the direction of the wires in a circuit computing x ↦ Bx (see Jukna (16, p. 46)).
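Assuming the construction has the shape A = BᵀCB indicated by the staged evaluation above, the three stages can be sketched as follows (toy sizes, plain random B standing in for a Miltersen/Spielman matrix, so only the staging, not the uniformity guarantee, is illustrated):

```python
import random

def mul_mod2(A, B):
    """Matrix product over GF(2)."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) % 2
             for j in range(cols)] for i in range(rows)]

random.seed(3)
n, m = 8, 3                       # toy sizes; in the text m = Theta(k log n) << n
B = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
C = [[random.randint(0, 1) for _ in range(m)] for _ in range(m)]
Bt = [list(row) for row in zip(*B)]

A = mul_mod2(mul_mod2(Bt, C), B)  # A = B^T C B, an n x n matrix

x = [[random.randint(0, 1)] for _ in range(n)]  # a column vector

# Three-stage evaluation: B, then the small m x m matrix C, then B^T;
# the n x n matrix A is never materialised.
y = mul_mod2(Bt, mul_mod2(C, mul_mod2(B, x)))
```

The point of the staging is that the middle stage touches only an m × m matrix, so its trivial circuit has m² wires, while the two B-stages reuse one linear-size circuit and its wire-reversed transpose.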
It remains to show that A is k-uniform. In fact, since our definition of A is a generalisation of how k-wise independent random variables are typically constructed (1, §15.2), the proof of the following lemma is somewhat routine.
Lemma 11.
The matrix A is k-uniform.
Proof.
We need to show that each k × k submatrix A_{IJ}, where I, J ⊆ [n] with |I| = |J| = k, is uniformly distributed in {0,1}^{k×k}. Write

A_{IJ} = (B_I)ᵀ C B_J,

where B_J is the submatrix of B consisting of the columns with indices in J.
Claim.
The matrix C B_J is uniformly distributed in {0,1}^{m×k}.
Proof of Claim. Let c_l denote the l-th row of C. The rows c_l B_J, l ∈ [m], of C B_J are mutually independent random variables, since the variables c_l, l ∈ [m], are. Therefore it suffices to show that c_l B_J is uniformly distributed in {0,1}^k for each l.
To this end, fix l; we show that all the outcomes c_l B_J = v, where v ∈ {0,1}^k, are equally likely. For any v there is a vector z ∈ {0,1}^m with z B_J = v, since B_J has k linearly independent columns and hence c ↦ c B_J is onto {0,1}^k. Hence c_l B_J = v iff (c_l + z) B_J = 0. But c_l + z is distributed the same as c_l, so that Pr[c_l B_J = v] is independent of the choice of v, as desired.
Finally, the same analysis as above demonstrates that A_{IJ} = (B_I)ᵀ (C B_J) is uniformly distributed in {0,1}^{k×k}, proving the lemma. ∎
Remark.
Interestingly, Theorem 5 is unable to prove a better lower bound than Ω(n²/log² n) for any matrix A: by the Kővári–Sós–Turán bound, a k-free matrix with k = o(log n) has weight o(n²). Is it true that for every k-uniform A, say with k = 2 log n + 1, we have that OR(A) = Ω(n²/log n) w.h.p.? A positive answer would give the tight bound OR(A) = Θ(n²/log n).
5 Rewriting
In this section we study what would happen if SUM-Rewrite or XOR-Rewrite could be solved in subquadratic time. Namely, we show that this eventuality would contradict the strong exponential time hypothesis. This will prove Theorem 4. As discussed in Section 1.1, we interpret this as evidence for our conjectures on the SUM/OR and XOR/OR complexity ratios.
5.1 Preliminaries
For purposes of computation, we tacitly assume that every gate of an input OR-circuit C touches at least one wire. This is to make each C of size N admit a binary representation of length Õ(N), where the Õ(·) notation hides factors polylogarithmic in N. For concreteness, C might be represented as two lists: (i) the list of gates in C, with output gates indicated, and (ii) the list of wires in C; both lists are given in topological order, with the input wires of each gate forming a consecutive sublist of the list of wires. Whatever the encoding, we assume it is efficient enough so that the following property holds.
Proposition 12.
On input a circuit C of size N (in any of the three models) and a vector x, the output of C on x can be computed in time Õ(N) (in the usual RAM model of computation). ∎
The following proposition records a similar observation for circuit rewriting.
Proposition 13.
Both SUM-Rewrite and XOR-Rewrite can be solved in time O(N²).
Proof.
Suppose we are given an OR-circuit C of size N as input, computing an n × n matrix A. The matrix A can be easily extracted from C in time O(nN) = O(N²) by computing, for each input, the set of outputs it reaches. We then simply output the trivial depth-1 circuit for A, which has size |A| ≤ n² ≤ N² and is a legal SUM-circuit (and XOR-circuit), since in it every variable reaches every output at most once. ∎
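The trivial rewriting algorithm can be sketched as follows (circuit representation and names ours): propagate supports through the circuit to recover the matrix, then emit a depth-1 circuit in which every output reads its variables directly, so each variable occurs once per output and the result is simultaneously a legal SUM- and XOR-circuit.

```python
def compute_matrix(n_inputs, wires, outputs):
    """Row i of the matrix has a 1 in column j iff input j reaches output i."""
    support = {f"x{j}": {j} for j in range(n_inputs)}
    for gate, children in wires.items():          # wires in topological order
        support[gate] = set().union(*(support[c] for c in children))
    return [sorted(support[g]) for g in outputs]

def trivial_rewrite(n_inputs, wires, outputs):
    """Depth-1 rewrite: each output reads its variables directly, once each."""
    rows = compute_matrix(n_inputs, wires, outputs)
    return {f"y{i}": [f"x{j}" for j in row] for i, row in enumerate(rows)}

# An OR-circuit in which the gate g is shared by both outputs, and x2
# reaches y1 along two paths; the rewrite flattens all of this away.
wires = {"g": ["x1", "x2"], "y0": ["x0", "g"], "y1": ["g", "x2"]}
flat = trivial_rewrite(3, wires, ["y0", "y1"])
```

The quadratic bound is visible here: the output has one wire per 1-entry of the matrix, which can be as large as n² even when the input OR-circuit is much smaller.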
5.2 Proof of Theorem 4
The main technical ingredient in our proof is Lemma 14 below, which states that if subquadratic-time rewriting algorithms exist, then certain simple covering problems can be solved faster than in a trivial manner.
In the following we consider set systems defined by sets S₁, …, S_p and T₁, …, T_p that are (not necessarily distinct) subsets of [q]. We say that a pair (i, j) ∈ [p] × [p] is a covering pair if S_i ∪ T_j = [q].
Lemma 14.
Suppose we are given sets S₁, …, S_p, T₁, …, T_p ⊆ [q] as input.

If SUM-Rewrite can be solved in time O(N^{2−ε}) for some constant ε > 0, then the number of covering pairs can be computed in time Õ((pq)^{2−ε}).

If XOR-Rewrite can be solved in time O(N^{2−ε}) for some constant ε > 0, then the parity of the number of covering pairs can be computed in time Õ((pq)^{2−ε}).
Proof of (a).
Let M be a p × p matrix defined by M_{ij} = 1 iff (i, j) is a covering pair. We show how to compute |M|, the number of 1-entries of M, without constructing M explicitly.
Suppose for a moment that we had a small SUM-circuit C for M. The value |M| can be recovered from the circuit in time quasi-linear in its size via the following trick: evaluate C (over the integers) on the all-1 vector to obtain y = M1; but now

(10)  Σ_{i∈[p]} y_i = Σ_{i,j} M_{ij} = |M|.

Unfortunately, we do not know how to construct a small SUM-circuit for M. Instead, our key observation below will be that the complement matrix M̄ admits an OR-circuit C̄ of size only O(pq). By assumption, we can then rewrite C̄ as a SUM-circuit C in time O((pq)^{2−ε}). In particular, the size of the new circuit must also be O((pq)^{2−ε}).
Analogously to (10) we can then recover |M̄| from C in time Õ((pq)^{2−ε}); the number of covering pairs is p² − |M̄|.
Indeed, it remains to describe how to construct the OR-circuit C̄ for M̄ in time O(pq).
Construction
Define a depth-2 OR-circuit C̄ as follows: The 0th layer of C̄ hosts the input gates x_i, i ∈ [p]; the 1st layer contains intermediate gates g_u, u ∈ [q]; and the 2nd layer contains the output gates y_j, j ∈ [p]. Each input gate x_i is connected to the gates g_u for u ∉ S_i; similarly, each output gate y_j is connected to the gates g_u for u ∉ T_j. To see that C̄ computes M̄ note that there is a path from input x_i to output y_j iff there is a u ∈ [q] such that u ∉ S_i and u ∉ T_j, iff S_i ∪ T_j ≠ [q], iff (i, j) is not a covering pair. Note also that C̄ has at most 2pq wires and that the construction takes time O(pq). ∎
Proof of (b).
The proof is the same as above, except we work over ℤ₂: evaluating the rewritten XOR-circuit on the all-1 vector yields the parity of |M̄|, from which the parity of the number of covering pairs follows. ∎
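The depth-2 construction from the proof of part (a) can be sketched as follows (names ours); the boolean evaluation of the circuit recovers exactly the complement matrix, so the covering pairs are its zero entries:

```python
def complement_circuit(S, T, q):
    """Depth-2 OR-circuit: input i -> gate u (for u not in S[i]) -> output j
    (for u not in T[j]); a path i -> u -> j exists iff u is missed by both."""
    in_wires = {u: [i for i in range(len(S)) if u not in S[i]] for u in range(q)}
    out_wires = {j: [u for u in range(q) if u not in T[j]] for j in range(len(T))}
    return in_wires, out_wires

def matrix_of(in_wires, out_wires, p):
    """Boolean (OR) evaluation: entry [j][i] is 1 iff some path joins i to j."""
    return [[1 if any(i in in_wires[u] for u in out_wires[j]) else 0
             for i in range(p)] for j in range(p)]

S = [{0, 1}, {1, 2}, {0, 1, 2}]
T = [{2}, {0, 2}, set()]
in_w, out_w = complement_circuit(S, T, q=3)
M_bar = matrix_of(in_w, out_w, p=3)
covering = sum(1 for i in range(3) for j in range(3) if M_bar[j][i] == 0)
```

Note that a pair (i, j) may have several common uncovered elements u, so the integer evaluation of this OR-circuit would overcount; this is precisely why the argument needs a genuine SUM-rewrite before applying the all-1-vector trick.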
Next, we reduce #CNF-SAT and ⊕CNF-SAT to the covering problems in Lemma 14. Here we are essentially applying a technique of Williams (30, Theorem 5).
Theorem 15.
We have the following reductions:

If SUM-Rewrite can be solved in time O(N^{2−ε}) for some ε > 0, then #CNF-SAT can be solved in time 2^{(1−ε/2)n} · poly(m).

If XOR-Rewrite can be solved in time O(N^{2−ε}) for some ε > 0, then ⊕CNF-SAT can be solved in time 2^{(1−ε/2)n} · poly(m).
Proof.
Let φ be an instance of CNF-SAT over the variables v₁, …, v_n and clauses c₁, …, c_m. Without loss of generality (by inserting one dummy variable as necessary), we may assume that n is even. Call the variables v₁, …, v_{n/2} left variables and the variables v_{n/2+1}, …, v_n right variables.
For each truth assignment a to the left variables, let S_a ⊆ [m] be the set of clauses satisfied by a. Similarly, for each truth assignment b to the right variables, let T_b ⊆ [m] be the set of clauses satisfied by b. Clearly, the compound assignment (a, b) to all the variables satisfies φ if and only if S_a ∪ T_b = [m]. That is, the number of satisfying assignments is precisely the number of covering pairs of the set system S_a, T_b, where a and b each range over the 2^{n/2} half-assignments. Thus, both claims follow from Lemma 14, applied with p = 2^{n/2} and q = m. ∎
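The left/right split can be illustrated on a toy formula (ours); the enumeration of the 2^{n/2} half-assignments is spelled out with a double loop, whereas the actual algorithm of Lemma 14 would count the covering pairs via circuit rewriting instead:

```python
from itertools import product

def count_sat_by_splitting(clauses, n):
    """Count satisfying assignments of a CNF formula (a clause is a set of
    literals: +v means variable v true, -v false; variables 1..n, n even)."""
    half = n // 2
    left_vars = list(range(1, half + 1))
    right_vars = list(range(half + 1, n + 1))

    def satisfied(assignment, variables):
        lits = {v if val else -v for v, val in zip(variables, assignment)}
        return frozenset(c for c, clause in enumerate(clauses) if clause & lits)

    S = [satisfied(a, left_vars) for a in product([0, 1], repeat=half)]
    T = [satisfied(b, right_vars) for b in product([0, 1], repeat=half)]
    full = frozenset(range(len(clauses)))
    # a compound assignment satisfies the formula iff the pair is covering
    return sum(1 for s in S for t in T if s | t == full)

# (v1 or v2) and (not v1 or v3) and (not v2 or not v3), padded to n = 4
clauses = [{1, 2}, {-1, 3}, {-2, -3}]
count = count_sat_by_splitting(clauses, n=4)
```

Here p = 2^{n/2} and q = m, so beating the trivial p² pair enumeration by a polynomial factor translates into beating 2ⁿ for counting satisfying assignments.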
We can now finish the proof of Theorem 4:

For SUM-Rewrite the result follows immediately from Theorem 15, since counting the satisfying assignments in particular decides satisfiability.

For XOR-Rewrite we need to make the following additional argument. As discussed by Cygan et al. (7), the CNF Isolation Lemma of Calabro et al. (6) can be applied to show that any O(2^{(1−δ)n}) time algorithm for ⊕CNF-SAT can be turned into an O(2^{(1−δ′)n}) time Monte Carlo algorithm for CNF-SAT, where δ′ > 0 depends only on δ. Given this, the result follows from Theorem 15.
Acknowledgements
We are grateful to Stasys Jukna for pointing out a more direct proof of Corollary 3 as referenced in the text. We also thank Igor Sergeev for providing many references, in particular, one simplifying our proof of Theorem 2. Furthermore, we thank Jukka Suomela for discussions.
This research is supported in part by the Academy of Finland, grants 132380 and 252018 (M.G.), 252083 and 256287 (P.K.), and by the Helsinki Doctoral Programme in Computer Science – Advanced Computing and Intelligent Systems (J.K.).
References
 Alon and Spencer (2000) N. Alon and J. H. Spencer. The Probabilistic Method. John Wiley & Sons, 2nd edition, 2000.
 Alon et al. (1990) N. Alon, M. Karchmer, and A. Wigderson. Linear circuits over GF(2). SIAM Journal on Computing, 19(6):1064–1067, 1990. doi:10.1137/0219074.
 Björklund et al. (2012) A. Björklund, T. Husfeldt, P. Kaski, M. Koivisto, J. Nederlof, and P. Parviainen. Fast zeta transforms for lattices with few irreducibles. In Proceedings of the 23rd Annual ACMSIAM Symposium on Discrete Algorithms (SODA 2012), pages 1436–1444. SIAM, 2012.
 Bollobás (2001) B. Bollobás. Random Graphs. Number 73 in Cambridge studies in advanced mathematics. Cambridge University Press, 2nd edition, 2001.
 Boyar et al. (2013) J. Boyar, P. Matthews, and R. Peralta. Logic minimization techniques with applications to cryptology. Journal of Cryptology, 26:280–312, 2013. doi:10.1007/s00145-012-9124-7.
 Calabro et al. (2008) C. Calabro, R. Impagliazzo, V. Kabanets, and R. Paturi. The complexity of unique SAT: An isolation lemma for CNFs. Journal of Computer and System Sciences, 74(3):386–393, 2008. doi:10.1016/j.jcss.2007.06.015.
 Cygan et al. (2012) M. Cygan, H. Dell, D. Lokshtanov, D. Marx, J. Nederlof, Y. Okamoto, R. Paturi, S. Saurabh, and M. Wahlström. On problems as hard as CNF-SAT. In Proceedings of the 27th Conference on Computational Complexity (CCC 2012), pages 74–84. IEEE, 2012. doi:10.1109/CCC.2012.36.
 Find et al. (2013) M. G. Find, M. Göös, P. Kaski, and J. H. Korhonen. Separating OR, SUM, and XOR circuits. Submitted, 2013.
 Gál et al. (2012) A. Gál, K. A. Hansen, M. Koucký, P. Pudlák, and E. Viola. Tight bounds on computing errorcorrecting codes by boundeddepth circuits with arbitrary gates. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC 2012), pages 479–494. ACM, 2012. doi:10.1145/2213977.2214023.
 Garey and Johnson (1979) M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NPCompleteness. W.H. Freeman and Company, 1979.
 Gashkov and Sergeev (2011) S. B. Gashkov and I. S. Sergeev. On the complexity of linear Boolean operators with thin matrices. Journal of Applied and Industrial Mathematics, 5:202–211, 2011. doi:10.1134/S1990478911020074.
 Grinchuk and Sergeev (2011) M. I. Grinchuk and I. S. Sergeev. Thin circulant matrixes and lower bounds on complexity of some Boolean operators. Diskretnyĭ Analiz i Issledovanie Operatsiĭ, 18:38–53, 2011.
 Impagliazzo and Paturi (2001) R. Impagliazzo and R. Paturi. On the complexity of SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001. doi:10.1006/jcss.2000.1727.
 Järvisalo et al. (2012) M. Järvisalo, P. Kaski, M. Koivisto, and J. H. Korhonen. Finding efficient circuits for ensemble computation. In Proceedings of the 15th International Conference on Theory and Applications of Satisfiability Testing (SAT 2012), pages 369–382. Springer, 2012. doi:10.1007/978-3-642-31612-8_28.
 Jukna (2006) S. Jukna. Disproving the single level conjecture. SIAM Journal on Computing, 36(1):83–98, 2006. doi:10.1137/S0097539705447001.
 Jukna (2012) S. Jukna. Boolean Function Complexity: Advances and Frontiers, volume 27 of Algorithms and Combinatorics. Springer, 2012.
 Jukna (2013) S. Jukna. Comment on XOR versus OR circuits, April 2013. URL http://www.thi.informatik.uni-frankfurt.de/~jukna/boolean/comment9.html.
 Knuth (1998) D. E. Knuth. The Art of Computer Programming, volume 2. Addison–Wesley, 3rd edition, 1998.
 Kollár et al. (1996) J. Kollár, L. Rónyai, and T. Szabó. Normgraphs and bipartite Turán numbers. Combinatorica, 16(3):399–406, 1996. doi:10.1007/BF01261323.
 Kushilevitz and Nisan (1997) E. Kushilevitz and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
 Lamagna and Savage (1974) E. A. Lamagna and J. E. Savage. Computational complexity of some monotone functions. In IEEE Conference Record of 15th Annual Symposium on Switching and Automata Theory, pages 140–144, 1974. doi:10.1109/SWAT.1974.9.
 Mehlhorn (1979) K. Mehlhorn. Some remarks on Boolean sums. Acta Informatica, 12:371–375, 1979. doi:10.1007/BF00268321.
 Miltersen (1998) P. B. Miltersen. Error correcting codes, perfect hashing circuits, and deterministic dynamic dictionaries. In Proceedings of the 9th Annual ACMSIAM Symposium on Discrete Algorithms (SODA 1998), pages 556–563. SIAM, 1998.
 Nechiporuk (1971) É. I. Nechiporuk. On a Boolean matrix. Systems Theory Research, 21:236–239, 1971.
 Pǎtraşcu and Williams (2010) M. Pǎtraşcu and R. Williams. On the possibility of faster SAT algorithms. In Proceedings of the 21st Annual ACMSIAM Symposium on Discrete Algorithms (SODA 2010), pages 1065–1075. SIAM, 2010.
 Pippenger (1980a) N. Pippenger. On another Boolean matrix. Theoretical Computer Science, 11(1):49–56, 1980a. doi:10.1016/0304-3975(80)90034-1.
 Pippenger (1980b) N. Pippenger. On the evaluation of powers and monomials. SIAM Journal on Computing, 9(2):230–250, 1980b. doi:10.1137/0209022.
 Pudlák and Rödl (2004) P. Pudlák and V. Rödl. Pseudorandom sets and explicit constructions of Ramsey graphs. In Complexity of computations and proofs, volume 13 of Quaderni Di Matematica. 2004.
 Spielman (1996) D. A. Spielman. Lineartime encodable and decodable errorcorrecting codes. IEEE Transactions on Information Theory, 42(6):1723–1731, 1996. doi:10.1109/18.556668.
 Williams (2005) R. Williams. A new algorithm for optimal 2constraint satisfaction and its implications. Theoretical Computer Science, 348(2–3):357–365, 2005. doi:10.1016/j.tcs.2005.09.023.