On the Spectral Properties of Symmetric Functions
Abstract
We characterize the approximate monomial complexity, sign monomial complexity, and the approximate norm of symmetric functions in terms of simple combinatorial measures of the functions. Our characterization of the approximate norm solves the main conjecture in [AFH12]. As an application of the characterization of the sign monomial complexity, we prove a conjecture in [ZS09] and provide a characterization for the unbounded-error communication complexity of symmetric XOR functions.
1 Introduction
Understanding the structure and complexity of Boolean functions is a main goal in computational complexity theory. Fourier analysis of Boolean functions provides many useful tools in this study. Natural Fourier analytic properties of a Boolean function can be linked to the computational complexity of the function in various settings such as circuit complexity, communication complexity, decision tree complexity, and learning theory.
In this paper, our focus is on trying to understand the Fourier analytic (i.e. spectral) properties of symmetric functions, which are Boolean functions such that permuting the input bits does not change the output. Many basic and fundamental functions, such as majority and parity, are symmetric, and having a full understanding of the spectral properties of symmetric functions is a natural goal.
Some of the important spectral properties of Boolean functions are the degree (the largest degree of a monomial with nonzero Fourier coefficient), the monomial complexity (the number of nonzero Fourier coefficients), and the Fourier norms. Often, the degree or the monomial complexity of a Boolean function does not give us useful information, so we study approximate versions like approximate degree (the minimum degree of a polynomial that pointwise approximates the function) and sign degree (the minimum degree of a polynomial that sign represents the function). These measures have found numerous applications in computational complexity theory.
Some earlier results on the spectral properties of symmetric functions include the characterization of sign degree [ABFR94], approximate degree [Pat92], and Fourier norm [AFH12].
Our main results are as follows.
Our results have the following applications in communication complexity.

Theorem 4: verifying the Log Approximation Rank Conjecture for symmetric XOR functions.
To prove these results, we make use of (i) the close connections between Boolean functions and their corresponding two-party XOR functions (Proposition 2), and (ii) the known bounds on the approximate rank and the sign-rank of two-party symmetric AND functions (Theorem 2 and Theorem 2). We transform these results on two-party symmetric AND functions to the setting of symmetric XOR functions via reductions.
2 Preliminaries
General notation
We use $[n]$ to denote the set $\{1, 2, \dots, n\}$. All the logarithms are base 2. For $x \in \{0,1\}^n$, $|x|$ denotes the Hamming weight of $x$, i.e., the number of coordinates of $x$ equal to 1. For a bit $b \in \{0,1\}$, $\bar{b}$ denotes the negation of $b$. Given $x$ and $y$ in $\{0,1\}^n$, $x \wedge y$ denotes the bit string obtained by taking the coordinatewise AND of $x$ and $y$. Similarly, $x \oplus y$ denotes the bit string obtained by taking the coordinatewise XOR of $x$ and $y$.
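For concreteness, this notation can be sketched in Python (the function names below are ours, not from the paper):

```python
# Bit strings are represented as tuples of 0/1 values.

def hamming_weight(x):
    """Number of coordinates of x equal to 1 (denoted |x| in the text)."""
    return sum(x)

def bitwise_and(x, y):
    """Coordinatewise AND of two bit strings of equal length."""
    return tuple(a & b for a, b in zip(x, y))

def bitwise_xor(x, y):
    """Coordinatewise XOR of two bit strings of equal length."""
    return tuple(a ^ b for a, b in zip(x, y))

x, y = (1, 0, 1, 1), (1, 1, 0, 1)
print(hamming_weight(x))   # 3
print(bitwise_and(x, y))   # (1, 0, 0, 1)
print(bitwise_xor(x, y))   # (0, 1, 1, 0)
```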
A Boolean function $f : \{0,1\}^n \to \{-1,1\}$ is called symmetric if the function's output does not change when we permute the input variables. When $f$ is symmetric, we'll use $f$ to also denote the corresponding function on $\{0, 1, \dots, n\}$ with the understanding that $f(x) = f(|x|)$. We define and
Note that we have for all . Then . Also, we let
and
When the function $f$ is clear from the context, we may drop the $f$ from this notation.
Fourier analysis
Let $f : \{0,1\}^n \to \{-1,1\}$ be a Boolean function. We view $f$ as residing in the $2^n$-dimensional vector space of real-valued functions on $\{0,1\}^n$. We equip this vector space with the inner product $\langle f, g \rangle = \mathbb{E}_x[f(x)g(x)]$, where $x$ is uniformly distributed over $\{0,1\}^n$. For each $S \subseteq [n]$, define the function $\chi_S(x) = (-1)^{\sum_{i \in S} x_i}$.
We refer to these functions as characters or monomials. It is easy to check that the set $\{\chi_S : S \subseteq [n]\}$ forms an orthonormal basis. Therefore every function (including every Boolean function) can be written as $f = \sum_{S \subseteq [n]} \hat{f}(S) \chi_S$, where $\hat{f}(S) = \langle f, \chi_S \rangle$ are the real-valued coefficients, called the Fourier coefficients of $f$. This way of expanding $f$ is called the Fourier expansion of $f$.
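The Fourier expansion can be computed by brute force in time exponential in $n$; a minimal sketch (our own illustration, not code from the paper):

```python
from itertools import product

def fourier_coefficients(f, n):
    """Brute-force Fourier expansion: hat_f(S) = E_x[f(x) * chi_S(x)],
    where chi_S(x) = (-1)^{sum_{i in S} x_i}.  S is an indicator tuple."""
    inputs = list(product((0, 1), repeat=n))
    coeffs = {}
    for S in product((0, 1), repeat=n):
        chi = lambda x, S=S: (-1) ** sum(x[i] for i in range(n) if S[i])
        coeffs[S] = sum(f(x) * chi(x) for x in inputs) / 2 ** n
    return coeffs

# Example: 2-bit parity as a +-1-valued function.
f = lambda x: (-1) ** (x[0] ^ x[1])
coeffs = fourier_coefficients(f, 2)
print(coeffs[(1, 1)])   # 1.0 -- all Fourier mass on S = {1, 2}
```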
The degree of a function $f$ is defined as $\deg(f) = \max\{|S| : \hat{f}(S) \neq 0\}$ and the monomial complexity is defined as $\mathrm{mon}(f) = |\{S : \hat{f}(S) \neq 0\}|$. We also define the Fourier norm $\|\hat{f}\|_1 = \sum_{S \subseteq [n]} |\hat{f}(S)|$.
The Fourier infinity norm is defined to be $\|\hat{f}\|_\infty = \max_{S \subseteq [n]} |\hat{f}(S)|$.
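Given the Fourier coefficients, these measures are straightforward to compute; a small sketch (names and the tolerance parameter are ours):

```python
def spectral_measures(coeffs, tol=1e-9):
    """Degree, monomial complexity, and Fourier 1- and infinity-norms,
    from a dict mapping S (a 0/1 indicator tuple) to hat_f(S)."""
    support = [S for S, c in coeffs.items() if abs(c) > tol]
    degree = max((sum(S) for S in support), default=0)
    mon = len(support)
    l1_norm = sum(abs(c) for c in coeffs.values())
    linf_norm = max(abs(c) for c in coeffs.values())
    return degree, mon, l1_norm, linf_norm

# Coefficients of f(x) = (-1)^{x_1} on two variables: one monomial, degree 1.
example_coeffs = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 0.0, (1, 1): 0.0}
print(spectral_measures(example_coeffs))   # (1, 1, 1.0, 1.0)
```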
Matrix analysis
Let $A$ be a real-valued $m \times n$ matrix with singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_{\min(m,n)} \geq 0$. The rank of $A$, denoted $\mathrm{rank}(A)$, is the number of nonzero singular values. The Schatten $p$-norm is defined as follows: $\|A\|_p = \left(\sum_i \sigma_i^p\right)^{1/p}$.
We then define
trace norm: $\|A\|_{tr} = \|A\|_1 = \sum_i \sigma_i$
Frobenius norm: $\|A\|_F = \|A\|_2 = \left(\sum_i \sigma_i^2\right)^{1/2}$
spectral norm: $\|A\| = \sigma_1$
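These norms are easy to illustrate for a diagonal matrix, whose singular values are just the absolute values of its diagonal entries; a minimal sketch (avoiding a full SVD):

```python
def schatten_norm(singular_values, p):
    """Schatten p-norm: (sum_i sigma_i^p)^(1/p)."""
    return sum(s ** p for s in singular_values) ** (1.0 / p)

# For a diagonal matrix diag(3, 4, 0), the singular values are 3, 4, 0.
sigma = [3.0, 4.0, 0.0]
trace_norm = schatten_norm(sigma, 1)     # sum of singular values: 7.0
frobenius = schatten_norm(sigma, 2)      # sqrt(9 + 16) = 5.0
spectral = max(sigma)                    # largest singular value: 4.0
rank = sum(1 for s in sigma if s > 0)    # number of nonzero singular values: 2
print(trace_norm, frobenius, spectral, rank)   # 7.0 5.0 4.0 2
```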
Given two matrices and , we write if one can be obtained from the other after reordering the rows and/or the columns.
Approximation theory
Throughout the paper, $\epsilon$ denotes any constant in $(0,1)$. Given $f$ and $g$, we say that $g$ approximates $f$ if $|f(x) - g(x)| \leq \epsilon$ for all $x$. Then the approximate monomial complexity of $f$, denoted by $\mathrm{mon}_\epsilon(f)$, is defined as the minimum monomial complexity of a function that approximates $f$. Similarly we define the approximate norm $\|\hat{f}\|_{1,\epsilon}$. For a matrix $A$, $\mathrm{rank}_\epsilon(A)$ and $\|A\|_{tr,\epsilon}$ are defined as the minimum rank and the minimum trace norm, respectively, of a matrix that approximates $A$ entrywise.
Given $f$ and $g$, we say that $g$ sign represents $f$ if $g(x) > 0$ for all $x$ such that $f(x) = 1$, and $g(x) < 0$ for all $x$ such that $f(x) = -1$. The sign monomial complexity of $f$, denoted $\mathrm{mon}_\pm(f)$, is defined to be the minimum monomial complexity of a function that sign represents $f$. For a matrix $A$ with entries in $\{-1,1\}$, we similarly define the sign-rank $\mathrm{rank}_\pm(A)$.
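As a toy illustration of the sign-representation condition (the function and polynomial below are our own choices, not examples from the paper), the 3-bit majority function is sign represented by a degree-1 polynomial:

```python
from itertools import product

def sign_represents(g, f, n):
    """Check that g(x) > 0 whenever f(x) = 1 and g(x) < 0 whenever f(x) = -1,
    i.e., g and f agree in sign on every input."""
    return all(g(x) * f(x) > 0 for x in product((0, 1), repeat=n))

# 3-bit majority as a +-1-valued function of 0/1 inputs.
maj3 = lambda x: 1 if sum(x) >= 2 else -1
g = lambda x: x[0] + x[1] + x[2] - 1.5   # a degree-1 sign-representing polynomial
print(sign_represents(g, maj3, 3))   # True
```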
The following proposition provides a relationship between the approximate trace norm and the approximate rank.

Proposition (Folklore). Let $A \in \{-1,1\}^{m \times n}$. Then $\mathrm{rank}_\epsilon(A) \geq \dfrac{\|A\|_{tr,\epsilon}^2}{(1+\epsilon)^2 \, mn}$.
Proof.
Let $B$ be a matrix that entrywise approximates $A$, and let $r = \mathrm{rank}(B)$. Then
$\|A\|_{tr,\epsilon} \leq \|B\|_{tr} \leq \sqrt{r}\,\|B\|_F \leq \sqrt{r}\sqrt{mn}\,(1+\epsilon),$
where we used the Cauchy-Schwarz inequality for $\|B\|_{tr} \leq \sqrt{r}\,\|B\|_F$, and the fact that every entry of $B$ is bounded by $1+\epsilon$ in absolute value. Rearranging gives the claimed bound. ∎
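The Cauchy-Schwarz step $\|B\|_{tr} \leq \sqrt{\mathrm{rank}(B)}\,\|B\|_F$ can be checked numerically on any list of singular values; a quick sanity-check sketch in pure Python:

```python
import math
import random

def trace_vs_frobenius(singular_values):
    """Verify ||B||_tr <= sqrt(rank(B)) * ||B||_F on a list of
    (nonnegative) singular values, up to floating-point slack."""
    tr = sum(singular_values)
    fro = math.sqrt(sum(s * s for s in singular_values))
    rank = sum(1 for s in singular_values if s > 0)
    return tr <= math.sqrt(rank) * fro + 1e-9

random.seed(0)
for _ in range(100):
    sigma = [random.random() for _ in range(10)]
    assert trace_vs_frobenius(sigma)
print("Cauchy-Schwarz check passed")
```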
Two-party functions
A capital function name will refer to a function with two inputs, e.g., $F : \mathcal{X} \times \mathcal{Y} \to \{-1,1\}$, where $\mathcal{X}$ and $\mathcal{Y}$ are some finite sets. We'll abuse notation and also use $F$ to denote the $|\mathcal{X}|$ by $|\mathcal{Y}|$ matrix corresponding to $F$, i.e., the $(x,y)$'th entry of the matrix contains the value $F(x,y)$. It will always be clear from the context whether $F$ refers to a function or a matrix.
Given $f : \{0,1\}^n \to \{-1,1\}$, we'll define $f \circ \textsc{xor} : \{0,1\}^n \times \{0,1\}^n \to \{-1,1\}$ by $(f \circ \textsc{xor})(x,y) = f(x \oplus y)$. Similarly, we denote by $f \circ \textsc{and}$ the communication function such that $(f \circ \textsc{and})(x,y) = f(x \wedge y)$. We'll also use promise versions of these functions, where the inputs $x$ and $y$ are required to satisfy an additional constraint.
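The matrix corresponding to the XOR composition can be generated by brute force; a minimal sketch (our own illustration):

```python
from itertools import product

def xor_matrix(f, n):
    """The 2^n x 2^n matrix F with entry (x, y) equal to f(x XOR y)."""
    inputs = list(product((0, 1), repeat=n))
    return [[f(tuple(a ^ b for a, b in zip(x, y))) for y in inputs]
            for x in inputs]

# Example: 2-bit parity, f(x) = (-1)^{x_1 + x_2}.
f = lambda x: (-1) ** (x[0] + x[1])
F = xor_matrix(f, 2)
print(F[0])   # first row: [1, -1, -1, 1]
```

Since XOR is commutative, the resulting matrix is symmetric, and for parity it has rank 1, matching the single nonzero Fourier coefficient of $f$.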
In an important paper, Razborov [Raz03] gave close to tight lower bounds on the randomized communication complexity of $f \circ \textsc{and}$ where $f$ is a symmetric function. His main result can be stated as a lower bound on the approximate trace norm of a certain submatrix of $f \circ \textsc{and}$:

Theorem ([Raz03]). For , let . If for some we have , then
We'll also need a result from Sherstov [She12] that gives essentially tight lower bounds on the sign-rank of all symmetric AND functions (see Section 4 for this result's relation to communication complexity).
Theorem ([She12]). Let . Then
Our main interest in two-party functions is due to the tight links between the Fourier analytic properties of a Boolean function $f$ and the matrix analytic properties of $f \circ \textsc{xor}$.
Proposition (Folklore). Let $f : \{0,1\}^n \to \{-1,1\}$ be any function and let $F = f \circ \textsc{xor}$. Then the eigenvalues of the matrix $F$ are exactly $\{2^n \hat{f}(S) : S \subseteq [n]\}$, and consequently:

(a) $\mathrm{rank}(F) = \mathrm{mon}(f)$,
(b) $\mathrm{rank}_\epsilon(F) \leq \mathrm{mon}_\epsilon(f)$,
(c) $\mathrm{rank}_\pm(F) \leq \mathrm{mon}_\pm(f)$,
(d) $\|F\|_{tr} = 2^n \|\hat{f}\|_1$,
(e) $\|F\| = 2^n \|\hat{f}\|_\infty$.
3 Main Results
Theorem. Let $f$ be a symmetric function. Then,
Proof.
Lower bound:
We first note that we may assume that . In fact, if , then we can consider the function defined as . We note that . To see this, given a function approximating with , the function defined by satisfies and . This shows that . But we have , i.e., (except if , but this case is simple). This implies that .
For the remainder of the proof, we assume there is an such that and .
In light of Proposition 2, part (b), our goal will be to show a lower bound on . For any such that , we define the submatrix of of size by for all . Note that this is for example the submatrix obtained by considering all the bitstrings for which the first bits are set to one and among the remaining bits, exactly are set to .
Observe that . In particular, when , . This means that
where
Thus, we'll show a lower bound on the approximate rank of . To do this, first we'll use Proposition 2, and show a lower bound on the approximate trace norm. To show a lower bound on the approximate trace norm, we'll use Theorem 2 and the fact that
In other words, our choice for in Theorem 2 will be . Let’s now specify and . Note that we should make sure that is even. We distinguish two cases depending on whether or not.
If , then we simply set if is odd and if is even. Then we let . Since , it is easy to check that and as required by Theorem 2. So we have
which, by Proposition 2 and our choices for and , implies
In the case , we set or depending on the parity of , and . We then have using the fact that . As , this implies that . On the other hand, we have . Now recall that . But which implies that . In addition, as , we also have . As a result, we can apply Theorem 2 and obtain
Using Proposition 2 part (b), we obtain the desired result.
Upper bound:
Theorem. Let $f$ be a symmetric function. Then,
Proof.
Lower bound:
First, we’ll assume is a constant fraction of . At the end of the proof, we give an argument for when this is not true.
In light of Proposition 2, part (c), our goal is to show that
(1) 
Since is a submatrix of , it suffices to show a lower bound on the sign-rank of . As in the proof of Theorem 3,
where
From the assumption we made at the beginning of the proof, we know that . By Theorem 2, we know that
We show that the above implies
(2) 
by showing that is a submatrix of , as follows. Given , construct (by padding and appropriately with bits each) with the property that the Hamming weights and the strings don’t intersect at indices . Clearly the mappings and are injective, and . So is a submatrix of . This establishes (2), and therefore (1). This completes the proof for the case when is a constant fraction of .
If the changes are happening mostly at odd indices , then consider the restriction of in which one input variable is set to 1. If is this restriction, then is a submatrix of and therefore it suffices to show a lower bound on .
If is not a constant fraction of , then consider the function defined as . This is such that a constant fraction of . Furthermore, note that as one is obtained from the other by rearranging the columns. Therefore it suffices to show a lower bound on .
Upper bound:
We’ll prove by induction on that . If then is either a constant function or a parity function (parity or its negation), and so can be represented exactly using at most two nonzero Fourier coefficients. We also have to explicitly prove the case. Let’s consider the function with for and for , for some . Observe that the following polynomial sign represents :
So . By slightly modifying the above polynomial, we can sign represent any function that behaves like a parity function (parity or its negation) for and behaves like a constant function for . We can also sign represent any function that behaves like a constant function for and behaves like a parity function for . These are the only cases to consider for .
Now suppose . Let be the largest index such that . Let be the function obtained from as follows: for and for . Observe that . Let be a sign representing polynomial for with at most monomials. Let be the function obtained from as follows: for and either for or for . Observe that , and so it has a sign representing polynomial with at most monomials. The functions and are constructed in a way so that the product sign represents (in particular, the choice of or for is made accordingly). Therefore .
∎
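A key counting fact behind the induction above is that multiplying two polynomials multiplies their monomial counts at most, because characters multiply as $\chi_S \cdot \chi_T = \chi_{S \triangle T}$. A sketch (our own illustration), representing polynomials as dicts from frozenset monomials to coefficients:

```python
def multiply(p, q):
    """Multiply two polynomials written in the character basis, using
    chi_S * chi_T = chi_{S symmetric-difference T}.  Drops zero terms."""
    result = {}
    for S, a in p.items():
        for T, b in q.items():
            U = S ^ T                      # symmetric difference of frozensets
            result[U] = result.get(U, 0) + a * b
    return {U: c for U, c in result.items() if c != 0}

p = {frozenset(): 1.0, frozenset({1}): 0.5}          # 2 monomials
q = {frozenset({2}): 1.0, frozenset({1, 2}): -0.5}   # 2 monomials
pq = multiply(p, q)
print(len(pq) <= len(p) * len(q))   # True: mon(pq) <= mon(p) * mon(q)
```

In this toy example the product even collapses to a single monomial, since the two contributions to $\chi_{\{1,2\}}$ cancel.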
As a corollary to the upper bound above, we can give a lower bound on the Fourier infinity norm of a symmetric function.
Let be a symmetric function. Then
Proof.
We now prove the main conjecture of [AFH12].
Let be a symmetric function. Then,
4 Applications to Communication Complexity
We denote by the $\epsilon$-error randomized communication complexity of $F$. In this model, the players are allowed to share randomness, and for all inputs they are required to output the correct answer with probability at least $1 - \epsilon$. We'll think of $\epsilon$ as some constant less than $1/2$.
Here we'll also be interested in the unbounded-error randomized communication complexity of a function $F$. In this model, the players have private randomness and the only requirement from the protocol is that for all inputs, it gives the correct answer with probability greater than 1/2. Notice that achieving error probability exactly 1/2 is trivial: just output a random bit. Also, note that there is no requirement that the success probability be bounded away from 1/2; e.g., the success probability could be arbitrarily close to 1/2. This makes the model quite powerful and proving lower bounds much harder. It was shown in [PS86] that
In a remarkable paper [For02], Forster was able to prove a lower bound on the unbounded-error communication complexity of a function using the function's spectral norm. In particular, he was able to show a linear lower bound for the inner-product function. Building on Forster's work, Sherstov [She12] gave essentially tight lower bounds on the unbounded-error communication complexity of all symmetric AND functions (see Theorem 2).
In [ZS09], Shi and Zhang conjecture that the unbounded-error communication complexity of a symmetric XOR function is characterized by . We prove this conjecture. First, the proof of Theorem 3 allows us to bound the sign-rank of symmetric XOR functions.
Let be a symmetric function. Then,
This immediately implies:
Let be a symmetric function. Then,
The second application is related to the Log Approximation Rank Conjecture, which is the randomized communication complexity analog of the famous Log Rank Conjecture. The Log Approximation Rank Conjecture states that there is a constant such that for any two-party function ,
Here, the lower bound is well-known to be true for all functions, so the conjecture is about establishing the upper bound. This has been done by Razborov [Raz03] for symmetric AND functions. We show that the conjecture holds also for symmetric XOR functions.
There are constants such that for any two-party function , where is symmetric, we have
References
 [ABFR94] James Aspnes, Richard Beigel, Merrick Furst, and Steven Rudich. The expressive power of voting polynomials. Combinatorica, 14(2):135–148, 1994.
 [AFH12] Anil Ada, Omar Fawzi, and Hamed Hatami. Spectral norm of symmetric functions. In APPROX-RANDOM, pages 338–349, 2012.
 [Bru90] Jehoshua Bruck. Harmonic analysis of polynomial threshold functions. SIAM Journal on Discrete Mathematics, 3:168–177, 1990.
 [BS92] Jehoshua Bruck and Roman Smolensky. Polynomial threshold functions, AC0 functions, and spectral norms. SIAM Journal on Computing, 21(1):33–42, February 1992.
 [For02] Jürgen Forster. A linear lower bound on the unbounded error probabilistic communication complexity. Journal of Computer and System Sciences, 65(4):612–625, 2002.
 [HQ17] Hamed Hatami and Yingjie Qian. The unbounded-error communication complexity of symmetric XOR functions. https://arxiv.org/abs/1704.00777, 2017.
 [Pat92] Ramamohan Paturi. On the degree of polynomials that approximate symmetric Boolean functions (preliminary version). In Proceedings of the ACM Symposium on Theory of Computing, pages 468–474, 1992.
 [PS86] Ramamohan Paturi and Janos Simon. Probabilistic communication complexity. Journal of Computer and System Sciences, 33(1):106–123, 1986.
 [Raz03] Alexander Razborov. Quantum communication complexity of symmetric predicates. Izvestiya: Mathematics, 67(1):145–159, 2003.
 [She12] Alexander A. Sherstov. The multiparty communication complexity of set disjointness. In Proceedings of the 2012 ACM Symposium on Theory of Computing (STOC), pages 525–544, 2012.
 [ZS09] Zhiqiang Zhang and Yaoyun Shi. Communication complexities of symmetric XOR functions. Quantum Information & Computation, 9(3):255–263, 2009.