
# On the Spectral Properties of Symmetric Functions

Anil Ada (Department of Computer Science, Carnegie Mellon University. Email: aada@cs.cmu.edu)    Omar Fawzi (LIP, École Normale Supérieure de Lyon. Email: omar.fawzi@ens-lyon.fr)    Raghav Kulkarni (Chennai Mathematical Institute. Email: kulraghav@gmail.com)
###### Abstract

We characterize the approximate monomial complexity, the sign monomial complexity, and the approximate Fourier ℓ1 norm of symmetric functions in terms of simple combinatorial measures of the functions. Our characterization of the approximate Fourier ℓ1 norm solves the main conjecture in [AFH12]. As an application of the characterization of the sign monomial complexity, we prove a conjecture in [ZS09] and provide a characterization for the unbounded-error communication complexity of symmetric-xor functions.

## 1 Introduction

Understanding the structure and complexity of Boolean functions is a main goal in computational complexity theory. Fourier analysis of Boolean functions provides many useful tools in this study. Natural Fourier analytic properties of a Boolean function can be linked to the computational complexity of the function in various settings such as circuit complexity, communication complexity, decision tree complexity, and learning theory.

In this paper, our focus is on trying to understand the Fourier analytic (i.e. spectral) properties of symmetric functions, which are Boolean functions such that permuting the input bits does not change the output. Many basic and fundamental functions such as AND, OR, MAJORITY, and PARITY are symmetric, and having a full understanding of the spectral properties of symmetric functions is a natural goal.

Some of the important spectral properties of Boolean functions are the degree (the largest degree of a monomial with non-zero Fourier coefficient), the monomial complexity (the number of non-zero Fourier coefficients), and the Fourier norms. Often, the degree or the monomial complexity of a Boolean function does not give us useful information, so we study approximate versions like the ε-approximate degree (the minimum degree of a polynomial that point-wise approximates the function) and the sign degree (the minimum degree of a polynomial that sign represents the function). These measures have found numerous applications in computational complexity theory.

Some earlier results on the spectral properties of symmetric functions include the characterization of sign degree [ABFR94], approximate degree [Pat92], and Fourier norm [AFH12].

Our main results are as follows.

• Theorem 3: characterization of approximate monomial complexity of symmetric functions.

• Theorem 3: characterization of sign monomial complexity of symmetric functions.

• Corollary 3: a lower bound on the Fourier infinity norm of symmetric functions.

• Theorem 3: characterization of the approximate Fourier ℓ1 norm of symmetric functions. This solves the main conjecture of [AFH12].

Our results have the following applications in communication complexity.

• Theorem 4: characterization of the unbounded-error communication complexity of symmetric-xor functions. This resolves a conjecture of [ZS09]. This result was obtained independently by Hatami and Qiang [HQ17].

• Theorem 4: verifying the Log Approximation Rank Conjecture for symmetric-xor functions.

To prove these results, we make use of (i) the close connections between Boolean functions and their corresponding two-party xor functions (Proposition 2), and (ii) the known bounds on the approximate rank and the sign rank of two-party symmetric-and functions (Theorem 2 and Theorem 2). We transform these results on two-party symmetric-and functions to the setting of symmetric xor-functions via reductions.

## 2 Preliminaries

### General notation

We use [n] to denote the set {1, 2, …, n}. All the logarithms are base 2. For x ∈ {0,1}^n, |x| denotes the Hamming weight of x, i.e., |x| = ∑_i x_i. For x ∈ {0,1}^n, ¬x denotes the coordinate-wise negation of x. Given x and y in {0,1}^n, x ∧ y denotes the n-bit string obtained by taking the coordinate-wise and of x and y. Similarly, x ⊕ y denotes the n-bit string obtained by taking the coordinate-wise xor of x and y.

A Boolean function f : {0,1}^n → {−1,1} is called symmetric if the function’s output does not change when we permute the input variables. When f is symmetric, we’ll use f to also denote the corresponding function f : {0,1,…,n} → {−1,1} with the understanding that f(x) = f(|x|). We define

$$r_0(f) := \min\{\, r \le \lceil n/2 \rceil \;:\; f(i) = f(i+2) \text{ for all } i \in [r, \lceil n/2 \rceil - 1] \,\}$$
$$r_1(f) := \min\{\, r \le \lfloor n/2 \rfloor - 1 \;:\; f(i) = f(i+2) \text{ for all } i \in [\lceil n/2 \rceil, n - r - 2] \,\}$$

Note that we have f(i) = f(i+2) for all i ∈ [r_0(f), n − r_1(f) − 2]. Then r(f) := max{r_0(f), r_1(f)}. Also, we let

$$\lambda(f) := |\{\, i : f(i) \neq f(i+1) \,\}|,$$

and

$$\rho(f) := |\{\, i : f(i) \neq f(i+2) \,\}|.$$

When the function f is clear from the context, we may drop the argument f from this notation.
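For concreteness, these combinatorial measures can be computed directly from the value vector (f(0), …, f(n)). A minimal Python sketch (the function names r0, r1, lam, and rho are ours, and r(f) is taken to be max{r0(f), r1(f)} as above):

```python
import math

def r0(f):
    # f = [f(0), ..., f(n)]: value vector (+1/-1) of a symmetric function
    n = len(f) - 1
    h = math.ceil(n / 2)
    # smallest r <= ceil(n/2) such that f(i) = f(i+2) for all i in [r, ceil(n/2)-1]
    return min(r for r in range(h + 1)
               if all(f[i] == f[i + 2] for i in range(r, h)))

def r1(f):
    n = len(f) - 1
    h = math.ceil(n / 2)
    # smallest r <= floor(n/2)-1 such that f(i) = f(i+2) for all i in [ceil(n/2), n-r-2]
    return min(r for r in range(n // 2)
               if all(f[i] == f[i + 2] for i in range(h, n - r - 1)))

def r(f):
    return max(r0(f), r1(f))

def lam(f):
    # lambda(f): number of indices i with f(i) != f(i+1)
    return sum(f[i] != f[i + 1] for i in range(len(f) - 1))

def rho(f):
    # rho(f): number of indices i with f(i) != f(i+2)
    return sum(f[i] != f[i + 2] for i in range(len(f) - 2))

# MAJORITY on 5 bits: one value change, near the middle
maj = [-1, -1, -1, 1, 1, 1]
# PARITY on 5 bits: alternates at every weight but is perfectly 2-periodic
par = [(-1) ** i for i in range(6)]
```

For MAJORITY on 5 bits this gives r0 = 3 and ρ = 2, while for PARITY all of r0, r1, and ρ vanish even though λ = n.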

### Fourier analysis

Let f : {0,1}^n → {−1,1} be a Boolean function. We view f as residing in the 2^n-dimensional vector space of real-valued functions ϕ : {0,1}^n → ℝ. We equip this vector space with the inner product ⟨ϕ, ψ⟩ := E_x[ϕ(x)ψ(x)], where x is uniformly distributed over {0,1}^n. For each S ⊆ [n], define the function

$$\chi_S(x) := (-1)^{\sum_{i \in S} x_i}.$$

We refer to these functions as characters or monomials. It is easy to check that the set {χ_S : S ⊆ [n]} forms an orthonormal basis. Therefore every function ϕ : {0,1}^n → ℝ (including every Boolean function) can be written as ϕ = ∑_S \hat{ϕ}(S) χ_S, where the \hat{ϕ}(S) = ⟨ϕ, χ_S⟩ are the real-valued coefficients, called the Fourier coefficients of ϕ. This way of expanding ϕ is called the Fourier expansion of ϕ.

The degree of a function ϕ is defined as deg(ϕ) := max{|S| : \hat{ϕ}(S) ≠ 0} and the monomial complexity is defined as mon(ϕ) := |{S : \hat{ϕ}(S) ≠ 0}|. We also define the Fourier ℓ_p-norm:

$$\|\hat{\phi}\|_p := \Big(\sum_S |\hat{\phi}(S)|^p\Big)^{1/p}.$$

The Fourier infinity norm is defined to be ‖\hat{ϕ}‖_∞ := max_S |\hat{ϕ}(S)|.
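All of these quantities are easy to compute by brute force for small n, which is handy for sanity checks. A Python sketch (the helper names fourier_coeffs, mon, deg, and lp_norm are ours; the set S is encoded as a bitmask):

```python
from itertools import product

def fourier_coeffs(f, n):
    # fhat(S) = E_x[f(x) * chi_S(x)] with chi_S(x) = (-1)^{sum_{i in S} x_i}
    coeffs = {}
    for S in range(2 ** n):
        total = 0.0
        for x in product((0, 1), repeat=n):
            chi = (-1) ** sum(x[i] for i in range(n) if S >> i & 1)
            total += f(x) * chi
        coeffs[S] = total / 2 ** n
    return coeffs

def mon(coeffs):
    # monomial complexity: number of non-zero Fourier coefficients
    return sum(1 for c in coeffs.values() if abs(c) > 1e-9)

def deg(coeffs):
    # degree: largest |S| with a non-zero Fourier coefficient
    return max((bin(S).count("1") for S, c in coeffs.items() if abs(c) > 1e-9),
               default=0)

def lp_norm(coeffs, p):
    # Fourier l_p norm of the coefficient vector
    return sum(abs(c) ** p for c in coeffs.values()) ** (1 / p)

# Example: AND of 2 bits in the +1/-1 convention (-1 iff both inputs are 1)
and2 = fourier_coeffs(lambda x: -1 if x == (1, 1) else 1, 2)
```

For AND on two bits this gives mon = 4, deg = 2, Fourier ℓ1 norm 2, and Fourier ℓ2 norm 1, the last as forced by Parseval for ±1-valued functions.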

For symmetric functions, [AFH12] characterized the Fourier ℓ1-norm in terms of r(f). {theorem}[[AFH12]] Let f : {0,1}^n → {−1,1} be a symmetric function. When r(f) > 1, we have

$$\log \|\hat{f}\|_1 = \Theta\!\left( r(f) \log\frac{n}{r(f)} \right).$$

### Matrix analysis

Let M be a real-valued matrix, with singular values σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_k ≥ 0. The rank of M, denoted rank(M), is the number of non-zero singular values. The Schatten p-norm is defined as follows:

$$\|M\|_p := \Big(\sum_{i=1}^{k} \sigma_i^p\Big)^{1/p}, \qquad \|M\|_\infty := \sigma_1.$$

We then define

$$\text{trace norm: } \|M\|_{\mathrm{tr}} := \|M\|_1, \qquad \text{Frobenius norm: } \|M\|_{\mathrm{F}} := \|M\|_2, \qquad \text{spectral norm: } \|M\| := \|M\|_\infty.$$
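For a 2×2 matrix these norms can be computed by hand, since the singular values are the square roots of the eigenvalues of MᵀM. A small Python sketch of that computation (the helper name schatten_2x2 is ours):

```python
import math

def schatten_2x2(M, p):
    # Schatten p-norm of a real 2x2 matrix M, via its singular values
    # s1 >= s2, the square roots of the eigenvalues of M^T M
    a, b = M[0]
    c, d = M[1]
    t = a * a + b * b + c * c + d * d      # trace of M^T M = s1^2 + s2^2
    det = (a * d - b * c) ** 2             # det of M^T M = (s1 * s2)^2
    disc = math.sqrt(max(t * t - 4 * det, 0.0))
    s1 = math.sqrt((t + disc) / 2)
    s2 = math.sqrt(max((t - disc) / 2, 0.0))
    if p == float("inf"):
        return s1                          # spectral norm
    return (s1 ** p + s2 ** p) ** (1 / p)  # trace norm for p=1, Frobenius for p=2
```

For M = diag(3, −4) the singular values are 4 and 3, so the trace, Frobenius, and spectral norms are 7, 5, and 4 respectively.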

Given two matrices M and N, we write M ≡ N if one can be obtained from the other after reordering the rows and/or the columns.

### Approximation theory

Throughout the paper, ε denotes any constant in (0, 1). Given ϕ, ψ : {0,1}^n → ℝ, we say that ψ ε-approximates ϕ if |ϕ(x) − ψ(x)| ≤ ε for all x ∈ {0,1}^n. Then the ε-approximate monomial complexity of f, denoted by mon_ε(f), is defined as the minimum monomial complexity of a function that ε-approximates f. Similarly we define the ε-approximate Fourier ℓ1 norm ‖\hat{f}‖_{1,ε}. For a matrix M, rank_ε(M) and ‖M‖_{tr,ε} are defined as the minimum rank and the minimum trace norm respectively, of a matrix that ε-approximates M entry-wise.

Given ϕ : {0,1}^n → ℝ and f : {0,1}^n → {−1,1}, we say that ϕ sign-represents f if ϕ(x) > 0 for all x such that f(x) = 1, and ϕ(x) < 0 for all x such that f(x) = −1. The sign monomial complexity of f, denoted mon_±(f), is defined to be the minimum monomial complexity of a function that sign represents f. For a matrix M with entries in {−1,1}, we similarly define the sign-rank, signrank(M).

The following proposition provides a relationship between the approximate trace norm and the approximate rank: {proposition}[Folklore] Let M be a k × k matrix with entries in [−1, 1]. Then,

$$\operatorname{rank}_\varepsilon(M) \ge \left(\frac{\|M\|_{\mathrm{tr},\varepsilon}}{k(1+\varepsilon)}\right)^2.$$
###### Proof.

Let M′ be a matrix that entry-wise ε-approximates M and satisfies rank(M′) = rank_ε(M). Then

$$\|M\|_{\mathrm{tr},\varepsilon} \le \|M'\|_{\mathrm{tr}} \overset{(*)}{\le} \|M'\|_{\mathrm{F}} \sqrt{\operatorname{rank}(M')} \le k(1+\varepsilon)\sqrt{\operatorname{rank}(M')} = k(1+\varepsilon)\sqrt{\operatorname{rank}_\varepsilon(M)},$$

where we used the Cauchy-Schwarz inequality for (∗). ∎

Bruck and Smolensky [BS92] provided an upper bound on the sign monomial complexity of a Boolean function in terms of its Fourier ℓ1-norm. In fact, their proof gives an upper bound on the approximate monomial complexity too. {theorem}[[BS92]] For any f : {0,1}^n → {−1,1} and ε ∈ (0,1),

$$\operatorname{mon}_\varepsilon(f) \le \frac{4n}{\varepsilon^2} \|\hat{f}\|_1^2.$$

Bruck [Bru90] gave a lower bound on the sign monomial complexity of a Boolean function in terms of the Fourier infinity norm of the function. {theorem}[[Bru90]] Let f : {0,1}^n → {−1,1} be a Boolean function. Then

$$\operatorname{mon}_\pm(f) \ge \frac{1}{\|\hat{f}\|_\infty}.$$

### Two-party functions

A capital function name will refer to a function with two inputs, e.g., F : X × Y → {−1,1} where X and Y are some finite sets. We’ll abuse notation and also use F to denote the |X| by |Y| matrix corresponding to F, i.e., the (x,y)’th entry of the matrix contains the value F(x,y). It will always be clear from the context whether F refers to a function or a matrix.

Given f : {0,1}^n → {−1,1}, we’ll define F_{∧n,f} by F_{∧n,f}(x,y) := f(x ∧ y). We denote by F_{⊕n,f} the communication function such that F_{⊕n,f}(x,y) := f(x ⊕ y). We use the notation F_{∧n,k,f} when the inputs x and y are promised to satisfy |x| = |y| = k. Similarly, we define F_{⊕n,k,f}, for 0 ≤ k ≤ n.

In an important paper, Razborov [Raz03] gave close to tight lower bounds on the randomized communication complexity of F_{∧n,f} where f is a symmetric function. His main result can be stated as a lower bound on the approximate trace norm of a certain submatrix of F_{∧n,f}: {theorem}[[Raz03]] For k ≤ n/4, let f : {0,1,…,k} → {−1,1}. If for some ℓ ≤ k/4 we have f(ℓ) ≠ f(ℓ−1), then

$$\|F_{\wedge n,k,f}\|_{\mathrm{tr},1/4} \ge \binom{n}{k} e^{\Omega(\sqrt{k\ell})}.$$

We’ll also need a result from Sherstov [She12] that gives essentially tight lower bounds on the sign-rank of all symmetric-and functions (see Section 4 for this result’s relation to communication complexity).

{theorem}

[[She12]] Let f : {0,1}^n → {−1,1} be a symmetric function. Then

$$\operatorname{signrank}(F_{\wedge n,f}) \ge 2^{\Omega(\lambda(f)/\log^5 n)}.$$

Our main interest in 2-party functions is due to the tight links between the Fourier analytic properties of a Boolean function f and the matrix analytic properties of F_{⊕n,f}.

{proposition}

[Folklore] Let f : {0,1}^n → {−1,1} be any function and let ε ∈ (0,1). Then

(a) rank(F_{⊕n,f}) = mon(f),

(b) rank_ε(F_{⊕n,f}) = mon_ε(f),

(c) signrank(F_{⊕n,f}) = mon_±(f),

(d) ‖F_{⊕n,f}‖_tr = 2^n · ‖\hat{f}‖_1,

(e) ‖F_{⊕n,f}‖_{tr,ε} = 2^n · ‖\hat{f}‖_{1,ε}.
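For small n, the identity rank(F_{⊕n,f}) = mon(f) can be checked by brute force, which is a useful sanity check on the correspondence between f and its xor matrix. A Python sketch using exact Gaussian elimination over the rationals (the helper names xor_matrix and rank are ours):

```python
from fractions import Fraction

def xor_matrix(fvals, n):
    # F(x, y) = f(x XOR y); fvals[m] is the value of f on the bitmask m
    N = 2 ** n
    return [[fvals[x ^ y] for y in range(N)] for x in range(N)]

def rank(M):
    # exact rank via Gaussian elimination with rational arithmetic
    M = [[Fraction(v) for v in row] for row in M]
    rk = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(rk, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[rk], M[piv] = M[piv], M[rk]
        for i in range(rk + 1, len(M)):
            if M[i][c] != 0:
                scale = M[i][c] / M[rk][c]
                M[i] = [a - scale * b for a, b in zip(M[i], M[rk])]
        rk += 1
    return rk

# AND of 2 bits has all 4 Fourier coefficients non-zero, so the xor matrix
# should have rank 4; PARITY has a single non-zero coefficient, so rank 1
and_vals = [1, 1, 1, -1]
par_vals = [(-1) ** bin(m).count("1") for m in range(4)]
```

Here rank(xor_matrix(and_vals, 2)) is 4 while rank(xor_matrix(par_vals, 2)) is 1, matching the monomial complexities of AND and PARITY.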

## 3 Main Results

{theorem}

Let f : {0,1}^n → {−1,1} be a symmetric function. Then,

$$\Omega(r(f)) \le \log \operatorname{mon}_{1/4}(f) \le O\!\left(r(f)\log\frac{n}{r(f)}\right).$$
###### Proof.

Lower bound:

We first note that we may assume that r(f) = r_0(f). In fact, if r(f) = r_1(f) > r_0(f), then we can consider the function g defined as g(x) := f(¬x). We note that mon_{1/4}(g) = mon_{1/4}(f). To see this, given a function ϕ approximating f with mon(ϕ) = mon_{1/4}(f), the function ϕ′ defined by ϕ′(x) := ϕ(¬x) satisfies |g(x) − ϕ′(x)| ≤ 1/4 for all x and mon(ϕ′) = mon(ϕ). This shows that mon_{1/4}(g) ≤ mon_{1/4}(f), and the reverse inequality is symmetric. But we have g(i) = f(n − i), i.e., r_0(g) = r_1(f) (except in a degenerate boundary case, but this case is simple). This implies that r(g) = r(f), so we may work with g instead.

For the remainder of the proof, we set s := r(f) and note that, by the minimality of r_0(f), we have f(s−1) ≠ f(s+1) and s ≤ ⌈n/2⌉.

In light of Proposition 2, part (b), our goal will be to show a lower bound on rank_{1/4}(F_{⊕n,f}). For any t and k such that t + 2k ≤ n, we define the submatrix F_{⊕n−t,k,f_{2k}} of F_{⊕n,f} of size (n−t choose k) × (n−t choose k) by F_{⊕n−t,k,f_{2k}}(x,y) := f(t + |x ⊕ y|) for all x, y ∈ {0,1}^{n−t} with |x| = |y| = k. Note that this is for example the submatrix obtained by considering all the row bitstrings for which the first t bits are set to one (and the column bitstrings for which the first t bits are set to zero) and, among the remaining n−t bits, exactly k are set to 1.

Observe that |x ⊕ y| = |x| + |y| − 2|x ∧ y|. In particular, when |x| = |y| = k, we have |x ⊕ y| = 2k − 2|x ∧ y|. This means that

$$F_{\oplus n-t,k,f_{2k}} = F_{\wedge n-t,k,f'_k},$$

where

$$f'_k(i) = f_{2k}(2k - 2i + t) \quad \text{for } i \in \{0, 1, \dots, k\}.$$

Thus, we’ll show a lower bound on the approximate-rank of F_{∧n−t,k,f′_k}. To do this, first we’ll use Proposition 2, and show a lower bound on the approximate-trace norm. To show a lower bound on the approximate-trace norm, we’ll use Theorem 2 and the fact that

$$f_{2k}(s-1) \neq f_{2k}(s+1) \implies f'_k\Big(k + \tfrac{t-(s-1)}{2}\Big) \neq f'_k\Big(k + \tfrac{t-(s+1)}{2}\Big).$$

In other words, our choice for ℓ in Theorem 2 will be ℓ := k + (t−(s−1))/2. Let’s now specify t and k. Note that we should make sure that t − (s−1) is even. We distinguish two cases depending on whether s is small compared to n or not.

If , then we simply set if is odd and if is even. Then we let . Since , it is easy to check that and as required by Theorem 2. So we have

$$\|F_{\oplus n-t,k,f_{2k}}\|_{\mathrm{tr},1/4} = \|F_{\wedge n-t,k,f'_k}\|_{\mathrm{tr},1/4} \ge \binom{n-t}{k} e^{\Omega(\sqrt{k\ell})},$$

which, by Proposition 2 and our choices for k and ℓ, implies

$$\operatorname{rank}_{1/4}(F_{\oplus n-t,k,f_{2k}}) \ge e^{\Omega(\sqrt{k\ell})} = e^{\Omega(s)}.$$

In the case , we set or depending on the parity of , and . We then have using the fact that . As , this implies that . On the other hand, we have . Now recall that . But which implies that . In addition, as , we also have . As a result, we can apply Theorem 2 and obtain

$$\operatorname{rank}_{1/4}(F_{\oplus n-t,k,f_{2k}}) \ge e^{\Omega(\sqrt{k\ell})} = e^{\Omega(s)}.$$

Using Proposition 2 part (b), we obtain the desired result.

Upper bound:

Using Theorem 2 with ε = 1/4, we have mon_{1/4}(f) ≤ 64 · n · ‖\hat{f}‖_1². Taking the logarithm and using Theorem 2 we get the desired result. ∎

{theorem}

Let f : {0,1}^n → {−1,1} be a symmetric function. Then,

$$\Omega(\rho(f)/\log^5 n) \le \log \operatorname{mon}_\pm(f) \le O(1 + \rho(f)\log n).$$
###### Proof.

Lower bound:

First, we’ll assume that ρ(f) is a constant fraction of n. At the end of the proof, we give an argument for when this is not true.

In light of Proposition 2, part (c), our goal is to show that

$$\log \operatorname{signrank}(F_{\oplus n,f}) = \Omega(\rho(f)/\log^5 n). \qquad (1)$$

Since F_{⊕n,n/3,f_{2n/3}} is a submatrix of F_{⊕n,f}, it suffices to show a lower bound on the sign-rank of F_{⊕n,n/3,f_{2n/3}}. As in the proof of Theorem 3,

$$F_{\oplus n,n/3,f_{2n/3}} = F_{\wedge n,n/3,f'_{n/3}},$$

where

$$f'_{n/3}(i) = f_{2n/3}(2n/3 - 2i) \quad \text{for } i \in \{0, 1, \dots, n/3\}.$$

From the assumption we made at the beginning of the proof, we know that λ(f′_{n/3}) = Ω(n). By Theorem 2, we know that

$$\log \operatorname{signrank}(F_{\wedge n/3,f'_{n/3}}) = \Omega(\lambda(f'_{n/3})/\log^5(n/3)).$$

We show that the above implies

$$\log \operatorname{signrank}(F_{\wedge n,n/3,f'_{n/3}}) = \Omega(\lambda(f'_{n/3})/\log^5(n/3)), \qquad (2)$$

by showing that F_{∧n/3,f′_{n/3}} is a submatrix of F_{∧n,n/3,f′_{n/3}}, as follows. Given x, y ∈ {0,1}^{n/3}, construct x̃, ỹ ∈ {0,1}^n (by padding x and y appropriately with 2n/3 bits each) with the property that the Hamming weights |x̃| = |ỹ| = n/3 and the strings x̃ and ỹ don’t intersect at indices i > n/3. Clearly the mappings x ↦ x̃ and y ↦ ỹ are injective, and |x̃ ∧ ỹ| = |x ∧ y|. So F_{∧n/3,f′_{n/3}} is a submatrix of F_{∧n,n/3,f′_{n/3}}. This establishes (2), and therefore (1). This completes the proof for the case when ρ(f) is a constant fraction of n.

If the changes counted by ρ(f) are happening mostly at odd indices i, then consider the restriction of f in which one input variable is set to 1. If g is this restriction, then F_{⊕n−1,g} is a submatrix of F_{⊕n,f} and therefore it suffices to show a lower bound on signrank(F_{⊕n−1,g}).

If ρ(f) is not a constant fraction of n, then consider the function g on m := O(ρ(f)) variables obtained by fixing n − m of the input variables of f so that the changes counted by ρ(f) survive. This g is such that ρ(g) is a constant fraction of m. Furthermore, note that fixing variables to 1 rather than 0 does not change the sign-rank, as one matrix is obtained from the other by rearranging the columns. Therefore it suffices to show a lower bound on signrank(F_{⊕m,g}).

Upper bound:

We’ll prove by induction on ρ(f) that mon_±(f) ≤ (n+2)^{ρ(f)} whenever ρ(f) ≥ 1. If ρ(f) = 0 then f is either a constant function or a parity function (parity or its negation), and so can be represented exactly using at most two non-zero Fourier coefficients. We also have to explicitly prove the ρ(f) = 1 case. Let’s consider the function f with f(i) = (−1)^i for i < t and f(i) = −1 for i ≥ t, for some t ≥ 1. Observe that the following polynomial sign represents f:

$$(2t - 0.1)(-1)^{x_1 + x_2 + \dots + x_n} + \big((-1)^{x_1} + (-1)^{x_2} + \dots + (-1)^{x_n} - n\big).$$

So mon_±(f) ≤ n + 2. By slightly modifying the above polynomial, we can sign represent any function that behaves like a parity function (parity or its negation) for i < t and behaves like a constant function for i ≥ t. We can also sign represent any function that behaves like a constant function for i < t and behaves like a parity function for i ≥ t. These are the only cases to consider for ρ(f) = 1.
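The sign representation above is easy to verify exhaustively for small n; a brute-force Python check (the helper name sign_rep_ok is ours):

```python
from itertools import product

def sign_rep_ok(n, t):
    # Verify that p(x) = (2t - 0.1) * (-1)^(x_1+...+x_n)
    #                    + ((-1)^x_1 + ... + (-1)^x_n - n)
    # sign represents the function f with f(i) = (-1)^i for i < t
    # and f(i) = -1 for i >= t, where i is the Hamming weight of x
    for x in product((0, 1), repeat=n):
        w = sum(x)
        p = (2 * t - 0.1) * (-1) ** w + ((n - 2 * w) - n)
        f = (-1) ** w if w < t else -1
        if p * f <= 0:   # the signs must agree on every input
            return False
    return True
```

Note that the polynomial has n + 2 monomials (the full parity, the n singleton characters, and the constant), matching the bound mon_±(f) ≤ n + 2 for this case.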

Now suppose ρ(f) > 1. Let m be the largest index such that f(m) ≠ f(m+2). Let g be the function obtained from f as follows: g(i) := f(i) for i ≤ m+1 and g(i) := g(i−2) for i > m+1. Observe that ρ(g) = ρ(f) − 1. Let p be a sign representing polynomial for g with at most (n+2)^{ρ(f)−1} monomials. Let h be the function obtained from f as follows: h(i) := 1 for i ≤ m+1 and, for i > m+1, either h(i) := −1 for even i or h(i) := −1 for odd i (with h(i) := 1 for the remaining indices). Observe that ρ(h) = 1, and so it has a sign representing polynomial q with at most n+2 monomials. The functions g and h are constructed in a way so that the product p·q sign represents f (in particular, the choice of even or odd for i > m+1 is made accordingly). Therefore mon_±(f) ≤ (n+2)^{ρ(f)−1}(n+2) = (n+2)^{ρ(f)}. ∎

As a corollary to the upper bound above, we can give a lower bound on the Fourier infinity norm of a symmetric function.

{corollary}

Let f : {0,1}^n → {−1,1} be a symmetric function. Then

$$\|\hat{f}\|_\infty \ge \frac{1}{(n+2)^{\rho(f)}}.$$
###### Proof.

From the proof of Theorem 3, we have mon_±(f) ≤ (n+2)^{ρ(f)}. Combining this with Theorem 2 gives the desired bound. ∎
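For small n, the corollary can be checked exhaustively: the Fourier coefficient of a symmetric function depends only on |S|, so the infinity norm is cheap to compute. A brute-force Python sketch over all symmetric sign vectors on 4 bits (the helper names finf and rho are ours):

```python
from itertools import product

def finf(fvals):
    # max |fhat(S)| for the symmetric function with value vector
    # fvals = [f(0), ..., f(n)], f evaluated at the Hamming weight; by
    # symmetry the coefficient depends only on |S|, so take S = {1,...,k}
    n = len(fvals) - 1
    best = 0.0
    for k in range(n + 1):
        total = sum(fvals[sum(x)] * (-1) ** sum(x[:k])
                    for x in product((0, 1), repeat=n))
        best = max(best, abs(total) / 2 ** n)
    return best

def rho(fvals):
    return sum(fvals[i] != fvals[i + 2] for i in range(len(fvals) - 2))

# Check finf(f) >= 1/(n+2)^rho(f) for every symmetric f on n = 4 bits
ok = all(finf(list(fv)) + 1e-9 >= 1 / 6 ** rho(list(fv))
         for fv in product((1, -1), repeat=5))
```

The bound is tight at ρ(f) = 0, where f is a constant or a parity and the infinity norm equals 1.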

We now prove the main conjecture of [AFH12].

{theorem}

Let f : {0,1}^n → {−1,1} be a symmetric function. Then,

$$\Omega(r(f)) - \frac{1}{2}\log n \le \log \|\hat{f}\|_{1,1/5} \le \log \|\hat{f}\|_1 \le O\!\left(r(f)\log\frac{n}{r(f)}\right).$$
###### Proof.

The upper bound is in Theorem 2. For the lower bound, let ϕ be such that ϕ (1/5)-approximates f and ‖\hat{ϕ}‖_1 = ‖\hat{f}‖_{1,1/5}. Applying Theorem 2 with ε = 1/20, we get a function ψ that (1/20)-approximates ϕ with mon(ψ) ≤ 1600 · n · ‖\hat{ϕ}‖_1². By the triangle inequality, we have |f(x) − ψ(x)| ≤ 1/5 + 1/20 ≤ 1/4 for all x, so mon_{1/4}(f) ≤ 1600 · n · ‖\hat{f}‖²_{1,1/5}. Thus,

$$\log \|\hat{f}\|_{1,1/5} \ge \frac{1}{2}\log \operatorname{mon}_{1/4}(f) - \frac{1}{2}\log n - \log 40.$$

To conclude, it suffices to use Theorem 3. ∎

## 4 Applications to Communication Complexity

We denote by R_ε(F) the ε-error randomized communication complexity of F. In this model, the players are allowed to share randomness and for all inputs, they are required to output the correct answer with probability at least 1 − ε. We’ll think of ε as some constant less than 1/2.

Here we’ll also be interested in the unbounded-error randomized communication complexity of a function F, denoted U(F). In this model, the players have private randomness and the only requirement from the protocol is that for all inputs, it gives the correct answer with probability greater than 1/2. Notice that achieving error probability 1/2 is trivial: just output a random bit. Also, note that there is no requirement that the success probability be bounded away from 1/2, e.g., the success probability could be 1/2 + 2^{−n}. This makes the model quite powerful and proving lower bounds much harder. It was shown in [PS86] that

$$U(F) = \log_2 \operatorname{signrank}(F) \pm O(1).$$

In a remarkable paper [For02], Forster was able to prove a lower bound on the unbounded error communication complexity of a function using the function’s spectral norm. In particular he was able to show a linear lower bound for the inner-product function. Building on Forster’s work, Sherstov [She12] gave essentially tight lower bounds on the unbounded error communication complexity of all symmetric-and functions (see Theorem 2).

In [ZS09], Shi and Zhang conjecture that the unbounded error communication complexity of a symmetric-xor function F_{⊕n,f} is characterized by ρ(f). We prove this conjecture. First, the proof of Theorem 3 allows us to bound the sign-rank of symmetric-xor functions.

{theorem}

Let f : {0,1}^n → {−1,1} be a symmetric function. Then,

$$\Omega(\rho(f)/\log^5 n) \le \log \operatorname{signrank}(F_{\oplus n,f}) \le O(1 + \rho(f)\log n).$$

This immediately implies:

{corollary}

Let f : {0,1}^n → {−1,1} be a symmetric function. Then,

$$\Omega(\rho(f)/\log^5 n) \le U(F_{\oplus n,f}) \le O(1 + \rho(f)\log n).$$

The second application is related to the Log Approximation Rank Conjecture, which is the randomized communication complexity analog of the famous Log Rank Conjecture. The Log Approximation Rank Conjecture states that there are constants c, ε, ε′ such that for any 2-party function F,

$$\log \operatorname{rank}_{\varepsilon'}(F) \le R_\varepsilon(F) \le \log^c \operatorname{rank}_{\varepsilon'}(F).$$

Here, the lower bound is well-known to be true for all functions, so the conjecture is about establishing the upper bound. This has been done by Razborov [Raz03] for symmetric-and functions F_{∧n,f}. We show that the conjecture holds also for symmetric-xor functions F_{⊕n,f}.

{theorem}

There are constants c, ε, ε′ such that for any two-party function F_{⊕n,f}, where f is symmetric, we have

$$R_\varepsilon(F_{\oplus n,f}) \le \log^c \operatorname{rank}_{\varepsilon'}(F_{\oplus n,f}).$$
###### Proof.

By Proposition 3.4 of [ZS09], we know that

$$R_\varepsilon(F_{\oplus n,f}) \le O\!\big(r(f)\log^2 r(f)\,\log\log r(f)\big).$$

The proof of Theorem 3 allows us to conclude that

$$\log \operatorname{rank}_{1/4}(F_{\oplus n,f}) \ge \Omega(r(f)).$$

Combining the two bounds proves the result. ∎

## References

• [ABFR94] James Aspnes, Richard Beigel, Merrick Furst, and Steven Rudich. The expressive power of voting polynomials. Combinatorica, 14(2):135–148, 1994.
• [AFH12] Anil Ada, Omar Fawzi, and Hamed Hatami. Spectral norm of symmetric functions. In APPROX-RANDOM, pages 338–349, 2012.
• [Bru90] Jehoshua Bruck. Harmonic analysis of polynomial threshold functions. SIAM Journal on Discrete Mathematics, 3:168–177, 1990.
• [BS92] Jehoshua Bruck and Roman Smolensky. Polynomial threshold functions, AC^0 functions, and spectral norms. SIAM Journal on Computing, 21(1):33–42, February 1992.
• [For02] Jürgen Forster. A linear lower bound on the unbounded error probabilistic communication complexity. Journal of Computer and System Sciences, 65(4):612–625, 2002.
• [HQ17] Hamed Hatami and Yingjie Qian. The unbounded-error communication complexity of symmetric xor functions.
• [Pat92] Ramamohan Paturi. On the degree of polynomials that approximate symmetric Boolean functions (preliminary version). In Proceedings of ACM Symposium on Theory of Computing, pages 468–474, 1992.
• [PS86] Ramamohan Paturi and Janos Simon. Probabilistic communication complexity. Journal of Computer and System Sciences, 33(1):106–123, 1986.
• [Raz03] Alexander Razborov. Quantum communication complexity of symmetric predicates. Izvestiya: Mathematics, 67(1):145–159, 2003.
• [She12] Alexander A. Sherstov. The multiparty communication complexity of set disjointness. In STOC’12—Proceedings of the 2012 ACM Symposium on Theory of Computing, pages 525–544. ACM, New York, 2012.
• [ZS09] Zhiqiang Zhang and Yaoyun Shi. Communication complexities of symmetric xor functions. Quantum Information & Computation, 9(3):255–263, 2009.