Faster quantum algorithm for evaluating game trees

# Faster quantum algorithm for evaluating game trees

Ben W. Reichardt School of Computer Science and Institute for Quantum Computing, University of Waterloo.
###### Abstract

We give an -query quantum algorithm for evaluating size- AND-OR formulas. Its running time is poly-logarithmically greater after efficient preprocessing. Unlike previous approaches, the algorithm is based on a quantum walk on a graph that is not a tree. Instead, the algorithm is based on a hybrid of direct-sum span program composition, which generates tree-like graphs, and a novel tensor-product span program composition method, which generates graphs with vertices corresponding to minimal zero-certificates.

For comparison, by the general adversary bound, the quantum query complexity for evaluating a size- read-once AND-OR formula is at least , and at most . However, this algorithm is not necessarily time efficient; the number of elementary quantum gates applied between input queries could be much larger. Ambainis et al. have given a quantum algorithm that uses queries, with a poly-logarithmically greater running time.

## 1 Introduction

An AND-OR formula is a rooted tree in which the internal nodes correspond to AND or OR gates. The size of the formula is the number of leaves. To a formula of size and a numbering of the leaves from to corresponds a function . This function is defined on input by placing bit on the th leaf, for , and evaluating the gates toward the root. Evaluating an AND-OR formula solves the decision version of a MIN-MAX tree, also known as a two-player game tree.

Let be the quantum query complexity for evaluating the size- AND-OR formula . Quantum query complexity is the generalization of classical decision tree complexity to quantum algorithms. Now the general adversary bound of is  [BS04, HLŠ07], and thus . Since the general adversary bound is nearly tight for any boolean function, in particular  [Rei09a]. (Interpreted in a different way, this says that the square of the quantum query complexity of evaluating a boolean function is almost a lower bound on the read-once formula size for that function [LLS06].) However, the algorithm from [Rei09a] is not necessarily even time efficient. That is, even though the number of queries to the input is nearly optimal, the number of elementary quantum gates applied between input queries could be much larger.

Ambainis et al. [ACR07] have given a quantum algorithm that evaluates using queries, with a running time only poly-logarithmically larger after efficient preprocessing. We reduce the query overhead from to only , with the same preprocessing assumption.

###### Theorem 1.1.

Let be an AND-OR formula of size . Then can be evaluated with error at most by a quantum algorithm that uses input queries. After polynomial-time classical preprocessing independent of the input, and assuming unit-time coherent access to the preprocessed string, the running time of the algorithm is .

An improvement from to overhead may not be significant for eventual practical applications. Additionally, the algorithm does not obviously bring us closer to knowing whether the general adversary bound is tight for quantum query complexity, because its overhead is larger than the overhead of the query algorithm from [Rei09a]. (It may be that the analysis used to prove Theorem 1.1 is somewhat loose.)

However, the idea behind the algorithm is still of interest, as it provides a new solution to the problem of evaluating AND-OR formulas with large depth. If is a formula on variables, with depth , then the [ACR07] algorithm, applied directly, evaluates using queries.111Actually, [ACR07, Sec. 7] only shows a bound of queries, but this can be improved to using the bounds on below [ACR07, Def. 1]. The improved analysis of the same algorithm in [Rei09b] tightens this to , which is also the depth-dependence found by [Amb07]. Thus for a highly unbalanced formula, with depth , the quantum algorithm performs no better asymptotically than the trivial -query classical algorithm. Fortunately, Bshouty et al. have given a “rebalancing” procedure that takes AND-OR formula as input and outputs an equivalent AND-OR formula with depth and size  [BCE91, BB94]. Appealing to this result, [ACR07] evaluates using queries.

The algorithm behind Theorem 1.1 gets around the large-depth problem without using formula rebalancing. Instead, the algorithm is based on a novel method for constructing bipartite graphs with certain useful spectral properties. Ambainis et al. run a quantum walk on a graph that matches the formula tree, except with certain edge weights and boundary conditions at the leaves. This tree comes from glueing together elementary graphs for each gate. We term this composition method “direct-sum” composition, because the graph’s adjacency matrix acts on a space that is the direct sum of spaces for each individual gate. Direct-sum composition incurs severe overhead on highly unbalanced formulas, making the query complexity at least proportional to the formula depth.

The new algorithm begins with the same elementary graphs, with even the same edge weights. However, it composes them using a kind of “tensor-product” graph composition method. Overall, this results in graphs that have much lower depth, although they are not trees. By carefully combining this method with direct-sum composition, we obtain a graph on which the algorithm runs a quantum walk. The two approaches are summarized in more detail in Section 1.1 below.

The general formula-evaluation problem is an ideal example of a recursively defined problem. The evaluation of a formula is the evaluation of a function, the inputs of which are themselves independent formulas. As argued in a companion paper [Rei09b], quantum computers are particularly well suited for evaluating formulas. Unlike the situation for classical algorithms, for a large class of formulas the optimal quantum algorithm can work following the formula’s recursive structure. Since Theorem 1.1 does not require AND-OR formula rebalancing, it extends this quantum recursion paradigm. Besides its conceptual appeal, this may also be important because the effect of rebalancing on the general quantum adversary bound appears to be less tractable for formulas over gate sets beyond AND and OR. Therefore the rebalancing step that aids [ACR07] might not be useful to solve the large-depth problem on more general formulas. The hybrid graph composition method is another tool that might generalize more easily.

Our algorithm is developed and analyzed using the framework relating span programs and quantum algorithms from [Rei09a]. The connection to time-efficient quantum algorithms, especially for evaluating unbalanced formulas over arbitrary fixed, finite gate sets, has been developed further in [Rei09b]. Table 1 summarizes some results for the formula-evaluation problem in the classical and quantum models; for a more detailed and inclusive review, see [Rei09b]. Section 1.2 below will go over only the history of quantum algorithms for evaluating AND-OR formulas.

### 1.1 Idea of the algorithm

As an example to illustrate the main idea of our algorithm, consider the AND-OR formula , where and denote and , respectively. In Figure 1, this formula is represented as a tree.

The [ACR07] algorithm starts with the graph in Figure 1, essentially the same as the formula tree, except with extra edges attached to the root and some leaves. The edges should be weighted, but for this intuitive discussion take every edge’s weight to be one.

Consider an input . Modify the graph by attached a dangling edge to vertex if , for . Then it is simple to see that the resulting graph has an eigenvalue-zero eigenvector supported on vertex (or rather its adjacency matrix does) if and only if . If we added an edge off vertex , then the resulting graph would have an eigenvalue-zero eigenvector supported on the new root if and only if .

The [ACR07] algorithm takes advantage of this property by running (phase estimation on) a quantum walk that starts at the root vertex. The algorithm detects the eigenvalue-zero eigenvector in order to evaluate the formula.

This algorithm does not work well on formulas with large depth. For example, consider the maximally unbalanced formula on inputs, a skew tree. The corresponding graph is nearly the length- line graph. It will still have an eigenvalue-zero eigenvector supported on the root if and only if the formula evaluates to zero. However, the algorithm will require time to detect this eigenvector, because its squared support on the root is only after normalization, and because the spectral gap around zero will also be . (The spectral gap determines the precision of the phase estimation procedure, and hence its running time. It corresponds to the squared support of eigenvalue-zero eigenvectors by [Rei09a, Theorem 8.7].) Alternatively, one can argue that the algorithm requires time because it takes that long even to reach the deepest leaf vertices.

Now consider the graph in Figure 2. Again modify the graph according to an input by attaching dangling edges to those vertices with . Considering a few examples should convince the reader that the resulting graph has an eigenvalue-zero eigenvector supported on vertex if and only if . Note, though, that the distance from “output” vertex to any of the “input” vertices to is at most two. The graph is also far from being a tree. Its main feature is that the “constraint” vertices—the vertices aside from —are in one-to-one correspondence with minimal zero-certificates to . For example, for , , but flipping any bit of from to changes the formula evaluation from to . The corresponding constraint vertex is connected to exactly those input vertices for which .

The graphs in Figures 1 and 2 represent two extremes of a family of graphs that evaluate in the same manner. The graph in Figure 1 can be seen (essentially) as coming from plugging together along the individual graphs in Figures 3(a) and 3(b) that evaluate single OR and AND gates in the same manner. We term this “direct-sum” composition. The graph in Figure 2 comes from a certain “tensor-product” composition of the graphs from Figures 3(a) and 3(b). (Figure 3 shows several other examples of tensor-product graph composition.)

With the correct choice of edge weights, tensor-product composition leads to graphs for which the squared support of the unit-normalized eigenvalue-zero eigenvector on the root is for a formula of size . This implies by [Rei09a, Theorem 9.1] a quantum algorithm that uses queries to evaluate the formula. There is no issue with deep formulas. However, the algorithm will not be time efficient. Essentially, the problem is that the number of vertices can be exponentially large in , as can be their degrees, which makes it difficult to implement a quantum walk efficiently.

Theorem 1.1 gets around this problem by using a combination of the two methods of composing graphs. For example, the graph in Figure 2 also evaluates the formula in the same manner. One can think of the vertex as evaluating the subformula . The subgraph it cuts off has been composed in a direct-sum manner with a tensor-product-composed graph for the formula . By combining the two composition methods, we can manage the tradeoffs, controlling the maximum degree and norm of the graph, while also avoiding the formula depth problem.

Although our algorithm can be presented and analyzed entirely in terms of graphs, we will present it in terms of span programs. Span programs are part of a framework for designing and analyzing quantum algorithms [Rei09a], for which Section 2 gives some necessary background. Eigenvalue-zero eigenvectors correspond to span program witnesses and the squared support on the root corresponds to a complexity measure known as the full witness size [Rei09b]. The two graph composition techniques described above correspond to different ways of composing general span programs.

### 1.2 Review of quantum algorithms for evaluating AND-OR formulas

Research on the formula-evaluation problem in the quantum model began with the simple -bit OR function, . Grover gave a quantum algorithm for evaluating with bounded one-sided error using oracle queries and time [Gro96, Gro02].

Grover’s algorithm, together with repetition for reducing errors, can be applied recursively to speed up the evaluation of more general AND-OR formulas. For example, the size- AND-OR formula can be evaluated in queries. Here the extra logarithmic factor comes from using repetition to reduce the error probability of the inner evaluation procedure from a constant to be polynomially small. Call a formula layered if the gates at the same depth are the same. Buhrman, Cleve and Wigderson show that the above argument can be applied to evaluate a layered, depth- AND-OR formula on inputs using queries [BCW98, Theorem 1.15].

Høyer, Mosca and de Wolf [HMW03] consider the case of a unitary input oracle that maps

 ~Ox:|φ⟩⊗|j⟩⊗|b⟩⊗|0⟩↦|φ⟩⊗|j⟩⊗(|b⊕xj⟩⊗|ψx,j,xj⟩+|b⊕¯¯¯xj⟩⊗|ψx,j,¯¯¯xj⟩), (1.1)

where , are pure states with . Such an oracle can be implemented when the function is computed by a bounded-error, randomized subroutine [NC00]. Høyer et al. allow access to and , both at unit cost, and show that can still be evaluated using queries. This robustness result implies that the steps of repetition used by [BCW98] are not necessary, and a depth- layered AND-OR formula can be computed in queries, for some constant . This gives an -query quantum algorithm for the case that the depth is constant, but is not sufficient to cover, e.g., the complete, binary AND-OR formula, for which .

A breakthrough for the formula-evaluation problem came in 2007, when Farhi, Goldstone and Gutmann presented a quantum algorithm for evaluating complete, binary AND-OR formulas [FGG07]. Their algorithm is not based on iterating Grover’s algorithm in any way, but instead runs a quantum walk—analogous to a classical random walk—on a graph derived from the AND-OR formula graph as in Figure 1. The algorithm runs in time in a certain continuous-time query model.

Ambainis et al. discretized the [FGG07] algorithm by reinterpreting a correspondence between discrete-time random and quantum walks due to Szegedy [Sze04] as a correspondence between continuous-time and discrete-time quantum walks [ACR07]. Applying this correspondence to quantum walks on certain weighted graphs, they gave an -query quantum algorithm for evaluating “approximately balanced” formulas, extended in [Rei09b]. Using the formula rebalancing procedure of [BCE91, BB94], the [ACR07] algorithm uses queries in general. This is nearly optimal, since the adversary bound gives an lower bound [BS04].

This author has given an -query quantum algorithm for evaluating arbitrary size- AND-OR formulas [Rei09a]. In fact, the result is more general, stating that the general adversary bound is nearly tight for every boolean function. However, unlike the earlier AND-OR formula-evaluation algorithms, the algorithm is not necessarily time efficient.

## 2 Span programs

In this section, we will briefly recall some of the definitions and results on span programs from [Rei09a, Rei09b]. This section is essentially an abbreviated version of [Rei09b, Sec. 2].

For a natural number , let . For a finite set , let be the inner product space with orthonormal basis . For vector spaces and over , let be the set of linear transformations from into , and let . For , is its operator norm. Let . For a string , let denote its bitwise complement.

### 2.1 Span program full witness size

The full witness size is a span program complexity measure that is important for developing quantum algorithms that are time efficient as well as query efficient.

###### Definition 2.1 (Span program [Kw93]).

A span program consists of a natural number , a finite-dimensional inner product space over , a “target” vector , disjoint sets and for , , and “input vectors” for .

To corresponds a function , defined on by

 fP(x)={1if |t⟩∈Span({|vi⟩:i∈Ifree∪⋃j∈[n]Ij,xj})0otherwise (2.1)

Some additional notation is convenient. Fix a span program . Let . Let be given by . For , let and . Then if . A vector is said to be a witness for if and . A vector is said to be a witness for if and .

###### Definition 2.2 (Witness size).

Consider a span program , and a vector of nonnegative “costs.” Let . For each input , define the witness size of on with costs , , as follows:

 wsizes(P,x)=⎧⎨⎩min|w⟩:AΠ(x)|w⟩=|t⟩∥S|w⟩∥2if fP(x)=1min|w′⟩:⟨t|w′⟩=1Π(x)A†|w′⟩=0∥SA†|w′⟩∥2if fP(x)=0 (2.2)

The witness size of with costs is

 wsizes(P)=maxx∈Bnwsizes(P,x). (2.3)

Define the full witness size by letting and

 fwsizes(P,x) =⎧⎨⎩min|w⟩:AΠ(x)|w⟩=|t⟩(1+∥Sf|w⟩∥2)if fP(x)=1min|w′⟩:⟨t|w′⟩=1Π(x)A†|w′⟩=0(∥|w′⟩∥2+∥SA†|w′⟩∥2)if fP(x)=0 (2.4) fwsizes(P) =maxx∈Bnfwsizes(P,x). (2.5)

When the subscript is omitted, the costs are taken to be uniform, , e.g., . The witness size is defined in [RŠ08]. The full witness size is defined in [Rei09a, Sec. 8], but is not named there. A strict span program has , so , and a monotone span program has for all  [Rei09a, Def. 4.9].

###### Theorem 2.3 ([Rei09a, Theorem 9.3], [Rei09b, Theorem 2.3]).

Let be a span program. Then can be evaluated using

 T=O(fwsize(P)∥abs(AGP)∥) (2.6)

quantum queries, with error probability at most . Moreover, if the maximum degree of a vertex in is , then the time complexity of the algorithm for evaluating is at most a factor of worse, after classical preprocessing and assuming constant-time coherent access to the preprocessed string.

### 2.2 Direct-sum span program composition

Let be a boolean function. Let . For , let be a natural number, with for . For , let . Define by

 y(x)j={fj(xj)if j∈Sxjif j∉S (2.7)

Define by . Given span programs for the individual functions and for , we will construct a span program for .

Let be a span program computing . Let have inner product space , target vector and input vectors indexed by and for and .

For , let be a vector of costs, and let be the concatenation of the vectors . For , let and be span programs computing and , with . For , let have inner product space with target vector and input vectors indexed by and for , . For , let .

Let . Define by if . The idea is that maps to the input span program that must evaluate to true in order for to be available for .

###### Definition 2.4 ([Rei09a, Def. 4.5]).

The direct-sum-composed span program is defined by:

• The inner product space is . Any vector in can be uniquely expressed as , where and .

• The target vector is .

• The free input vectors are indexed by with, for ,

 |v⊕i⟩=⎧⎪⎨⎪⎩|vi⟩Vif i∈Ifree|vi⟩V+|i⟩⊗|tjc⟩if i∈Ijc % and j∈S|i′⟩⊗|vi′′⟩if i=(i′,i′′)∈Ijc×Ijcfree (2.8)
• The other input vectors are indexed by for , , . For , , with for . For , let . For and , let

 |v⊕ii′⟩=|i⟩⊗|vi′⟩. (2.9)

By [Rei09a, Theorem 4.3], and . Moreover, we can bound how quickly the full witness size can grow relative to the witness size:

###### Lemma 2.5 ([Rei09b, Lemma 2.5]).

Under the above conditions, for each input , with ,

• If , let be a witness to with . Then

 fwsizes(Q⊕,x)wsizer(P,y)≤σ(y,|w⟩)+1+∑i∈Ifree|wi|2wsizer(P,y)where σ(y,|w⟩)=maxj∈S:∃i∈Ijyj with ⟨i|w⟩≠0fwsizesj(Pjyj)wsizesj(Pjyj). (2.10)
• If , let be a witness to such that . Then

 fwsizes(Q⊕,x)wsizer(P,y)≤σ(¯y,|w′⟩)+∥|w′⟩∥2wsizer(P,y)where σ(¯y,|w′⟩)=maxj∈S:∃i∈Ij¯yj with ⟨vi|w′⟩≠0fwsizesj(Pj¯yj)wsizesj(Pj¯yj). (2.11)

If , then and should each be taken to be in the above equations.

###### Lemma 2.6 ([Rei09b, Lemma 3.4]).

If is the direct-sum composition along a formula of span programs and , then

 ∥abs(AGP)∥≤2maxv∈φmax{∥abs(AGPv)∥,∥abs(AGP†v)∥}. (2.12)

If the span programs are monotone, then .

## 3 Evaluation of arbitrary AND-OR formulas

Let be an AND-OR formula. In this section, we will prove Theorem 1.1 by applying Theorem 2.3 to a certain composed span program . The construction of has three steps that we will explain in detail below.

1. Eliminate any AND or OR gates with fan-in one, and expand out AND and OR gates with higher fan-ins into gates of fan-in exactly two.

2. Mark a certain subset of the edges of the formula. We call marked edges “checkpoints.”

3. Starting with the span programs for and , compose them for the subformulas cut off by checkpointed edges using a version of tensor-product composition. Compose the resulting span programs across checkpoints using direct-sum composition to yield .

Direct-sum span program composition keeps the norm of the corresponding graph’s adjacency matrix under control, as well as the maximum degree of a vertex in the graph. However, it makes the full witness size grow much larger than the witness size, especially for highly unbalanced formulas. Reduced tensor-product composition generates strict span programs, for which the full witness size stays close to the witness size. However, it allows the norm and the maximum degree of the corresponding graph to grow exponentially quickly. By using both techniques in careful combination, we are able to manage this tradeoff so that Theorem 2.3 can be applied.

Section 3.1 presents the span programs we use for fan-in-two AND and OR gates.

In Section 3.2, we study reduced tensor-product composition for the span programs for AND and OR gates. Reduced tensor-product composition is a version of tensor-product composition that parsimoniously uses fewer dimensions when possible. For AND-OR formulas, it has the advantage that the output span program’s inner product space bears a close connection to false inputs of the formula , similarly to canonical span programs. In order to motivate the checkpointing idea, we explain the problems of a scheme based only on reduced tensor-product composition.

Section 3.3 then presents our full construction of , based on a combination of direct-sum and reduced tensor-product composition.

Section 3.4 contains the analysis of the graphs required to apply Theorem 2.3.

### 3.1 Span programs for AND2 and OR2

We will use the following strict, monotone span programs for fan-in-two AND and OR gates:

###### Definition 3.1 ([Rei09b, Def. 4.1]).

For , define span programs and computing and , , respectively, by

 PAND(s1,s2): |t⟩ =(α1α2), |v1⟩ =(β10), |v2⟩ =(0β2) (3.1) POR(s1,s2): |t⟩ =δ, |v1⟩ =ϵ1, |v2⟩ =ϵ2 (3.2)

Both span programs have , and . Here the parameters , for , are given by

 αj =(sj/sp)1/4 βj =1 (3.3) δ =1 ϵj =(sj/sp)1/4, (3.4)

where . Let and .

Note that . They are largest when .

###### Claim 3.2 ([Rei09b, Claim 4.2]).

The span programs and satisfy:

 wsize(√s1,√s2)(PAND,x)={√spif x∈{11,10,01}√sp2if x=00wsize(√s1,√s2)(POR,x)={√spif x∈{00,10,01}√sp2if x=11 (3.5)

### 3.2 Reduced tensor-product span program composition for AND-OR formulas

Reduced tensor-product composition of span programs is introduced in [Rei09a, Def. 4.6]. We repeat the definition here, specializing to the case of monotone, strict span programs acting on disjoint inputs. Also, for simplicity we consider the case of composing on one span program at a time. After stating the definition, we specialize further to AND-OR formulas, and characterize the reduced tensor-product span program composition of the AND and OR span programs from Definition 3.1.

Consider monotone functions and . Let be defined by

 g(x,y)=f(f′(x),y) (3.6)

for , . Let be a span program computing and let be a span program computing . Assume that and are both monotone, strict span programs.

Let span program be in inner product space , with target vector and input vectors indexed by for . Let be in the inner product space with target vector and input vectors indexed by for . Since and are both monotone, .

###### Definition 3.3 ([Rei09a]).

The tensor-product-composed span program, reduced with respect to the basis for , is , defined by:

• Let . For , define and by

 Vl={V′% if l∉Z, i.e., ⟨l|vi⟩≠0 for some i∈I11Cif l∈Z|πl⟩={|t′⟩if l∉Z∥|t′⟩∥if l∈Z (3.7)
• The inner product space of is . Any vector can be uniquely expressed as , where .

• The target vector is

 |tr⊗⟩=∑l∈[d]⟨l|t⟩|πl⟩Vl. (3.8)
• is strict and monotone, thus for all .

• The input vectors are indexed by

 Ir⊗k1={I11×I′k1if k≤m% Ij1if k>m, where j=k−m+1 (3.9)

and given by

 |vr⊗ι⟩={∑l∈[d]⟨l|vi⟩|v′i′⟩Vlif ι=(i,i′)∈⋃k≤mIr⊗k1∑l∈[d]⟨l|vι⟩|πl⟩Vlif ι∈⋃k>mIr⊗k1 (3.10)

The intuition behind this construction is to compose the span programs in a tensor-product manner somewhat similar to Definition 2.4. From [Rei09a, Def. 4.4], this would give a span program with target vector and input vectors either for or otherwise. However, if all the input vectors are zero in a coordinate , then the first type of input vectors are all zero on . The second type of input vectors are all proportional to the same state on that coordinate, so we might as well just keep as in the above definition. The advantage over tensor-product composition is that the graph associated to the span program potentially has fewer vertices, with lower degrees.

Here we have composed the span program into the first input of . Reduced tensor-product composition into the other inputs is defined symmetrically.

By [Rei09a, Prop. 4.7], for arbitrary costs and ,

 wsize(r,s)(Qr⊗)≤wsize(wsizer(P′),s)(P). (3.11)

Now let us study reduced tensor-product composition for AND-OR formulas. To start with, it will be helpful to give two examples of Definition 3.3, for the cases and . Recall from [Rei09a, Def. 8.1] that the biadjacency matrix for the bipartite graph is given by , where is the matrix whose columns are the input vectors of , as defined in Section 2.1. Assume that is a singleton set, with the first column of . Rearrange the rows so that the nonzero entries of come first (the set from Definition 3.3 comes last), writing , where is nonzero in every entry. Writing , decomposes as

 BGP(11+n)=(|t1⟩|γ⟩C1|t2⟩0C2). (3.12)
• First consider the case that from Definition 3.1. Let be the composed span program from Definition 3.3. Then

 BGQr⊗(12+n)=⎛⎜⎝α1|t1⟩β1|γ⟩0α1C1α2|t1⟩0β2|γ⟩α2C1α|t2⟩00αC2⎞⎟⎠. (3.13)

For comparison, the tensor-product-composed span program from [Rei09a, Def. 4.4] would have the biadjacency matrix

 BGQr⊗(12+n)=⎛⎜ ⎜ ⎜ ⎜⎝α1|t1⟩β1|γ⟩0α1C1α2|t1⟩0β2|γ⟩α2C1α1|t2⟩00α1C2α2|t2⟩00α2C2⎞⎟ ⎟ ⎟ ⎟⎠. (3.14)
• Next consider the case that from Definition 3.1. Let be the composed span program from Definition 3.3. Then

 BGQr⊗(12+n)=(δ|t1⟩ϵ1|γ⟩ϵ2|γ⟩δC1δ|t2⟩00δC2). (3.15)

From Eqs. (3.13) and (3.15), we can derive a bound on the growth of the norm of the entrywise absolute value of the biadjacency matrix for a span program, under reduced tensor-product composition with either or :

###### Lemma 3.4.

Let be a strict, monotone span program on input bits, with