A Local Limit Theorem for Cliques in G(n,p)


Ross Berkowitz
Abstract.

We prove a local limit theorem for the number of r-cliques in G(n,p), where p\in(0,1) and r\geq 3 are fixed constants. Our bounds hold in both the \ell^{\infty} and \ell^{1} metrics. The main work of the paper is an estimate for the characteristic function of this random variable. This is accomplished by introducing a new technique for bounding the characteristic function of constant degree polynomials in independent Bernoulli random variables, combined with a decoupling argument.

Key words and phrases:
Local limit theorem, random graph, characteristic function
2010 Mathematics Subject Classification:
05C80, 60F05

1. Introduction

In 1960 Erdős and Rényi introduced the study of G(n,p), the random graph on n vertices where each edge is included independently at random with probability p. In [ER61] they showed, among other results, that the number of cliques of size r in G(n,p) is concentrated about its mean using Chebyshev’s inequality. Since then the Erdős-Rényi random graph has become an object of much study, and many nice results have been obtained concerning the following natural question:

Question 1.

Let H be some fixed graph. What is the distribution of the number of copies of H as a random variable?

In this paper, we will consider this question for the regime where H is the r-clique, K_{r}, and p\in(0,1) is a fixed constant. Let {f_{r}} denote the random variable counting the number of r-cliques in G(n,p) and set \mu=\operatorname*{\mathbb{E}}[f_{r}] and \sigma^{2}=Var(f_{r}).

In the 1980s there were several papers studying which subgraph counts obeyed a central limit theorem (see [KR83, Kar84, NW88, Ruc88], for example). By that time central limit theorems stating that {f_{r}} converges in distribution to the Gaussian were known. That is, for any real numbers a<b,

(1) \Pr\left[f_{r}\in[\mu+a\sigma,\mu+b\sigma]\right]=\frac{1}{\sqrt{2\pi}}\int_{a}^{b}e^{-t^{2}/2}\,dt+o(1)

Note that the central limit theorem in equation 1 bounds the probability that {f_{r}} lies in an interval of length O(\sigma). In this paper we will show that the distribution of {f_{r}} is pointwise close to a discrete Gaussian. Our main result is the following local limit theorem:

Theorem 1.

Fix any 0<\tau<\min(1/12,1/2r). For any m\in\mathbb{N} we have that

\Pr[f_{r}=m]=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(m-\mu)^{2}}{2\sigma^{2}}}+O\left(\frac{1}{\sigma n^{\frac{1}{2}-\tau}}\right)

Because of the quantitative error bound, we are also able to extend this to the following \ell^{1}, or statistical distance bound between {f_{r}} and the discrete Gaussian.

Theorem 2.
\sum_{m\in\mathbb{N}}\left|\Pr({f_{r}}=m)-\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(m-\mu)^{2}}{2\sigma^{2}}\right)\right|=O(n^{-\frac{1}{2}+\tau})

1.1. Related Results

Our methods depend on examining a particular orthogonal basis of the space of functions on G(n,p). For many other applications of such orthogonal decompositions to counting problems on G(n,p), see [Jan94]. If p were allowed to become arbitrarily small as n grows, then {f_{r}} may be shown in some cases to resemble a Poisson random variable. For example, if the edge probability p\sim c/n for some constant c, then Erdős and Rényi [ER61] showed that the number of triangles in G(n,p) converges to a Poisson distribution. This result was a local limit theorem, as it estimated the pointwise probabilities \Pr[f_{3}=k] for k constant. Further, Röllin and Ross [RR15] showed a local limit theorem when p\sim cn^{\alpha} for \alpha\in[-1,-\frac{1}{2}]. In this regime they showed that the triangle counting distribution converges to a translated Poisson distribution (which is in turn close to a discrete Gaussian) in both the \ell^{\infty} and \ell^{1} metrics. In 2014, Gilmer and Kopparty [GK14] proved a local limit theorem for triangle counts in G(n,p) in the regime where p is a fixed constant. Their main theorem was the following pointwise bound:

\Pr[f_{3}=m]=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(m-\mu)^{2}}{2\sigma^{2}}\right)\pm o(n^{-2})

The proof in [GK14] proceeded by using the characteristic function. The main step there was to show that |\varphi(t)-\varphi_{f_{3}}(t)| is small for t\in[-\pi\sigma,~{}\pi\sigma], where \varphi represents the characteristic function of the standard normal distribution, and \varphi_{f_{3}} represents the characteristic function of the triangle counting function f_{3}. [Ber16] extended this result by improving the error bound and obtained a bound on the statistical distance between f_{3} and the discrete Gaussian as well.

1.2. High Level Overview of Techniques

The central technique in this paper is to examine the characteristic function \varphi_{\mathcal{K}}(t), where \mathcal{K}=(f_{r}-\mu)/\sigma is the mean 0 variance 1 normalization of f_{r}. The main calculation is showing that

\int_{-\pi\sigma}^{\pi\sigma}\left|\varphi_{\mathcal{K}}(t)-\varphi(t)\right|\,dt=O(n^{-\frac{1}{2}+\tau})

where \varphi(t):=e^{-t^{2}/2} is the characteristic function of the standard unit normal random variable. The local limit theorem then follows from Fourier inversion for lattices. However, bounding the characteristic function of sums of dependent random variables is a tricky problem and several new ideas were needed.

1.2.1. Estimating \varphi_{\mathcal{K}}(t) for small t

First, building on the method in our earlier work [Ber16], we rewrite our random variable f_{r} as a polynomial, not in the natural 0,1 indicator random variables x_{e}, but instead in the orthogonal p-biased Fourier basis \chi_{e}. This slight change of basis immediately simplifies the proof of the central limit theorem and lays bare the intuition that the number of r-cliques in G(n,p) is almost completely driven by the number of edges present in the graph. In fact, once we switch from x_{e} to \chi_{e} and normalize to unit variance, the degree {r\choose 2} polynomial \mathcal{K}=({f_{r}}-\mu)/\sigma becomes 1-o(1) close to a degree 1 polynomial. This turns out to be sufficient to prove that \varphi_{\mathcal{K}}(t) is close to a Gaussian for t small. Because |e^{itx}-e^{itx^{\prime}}|\leq|t||x-x^{\prime}| for any x,x^{\prime}\in\mathbb{R}, we can simply estimate \varphi_{\mathcal{K}}(t) by noting that

(2) |\operatorname*{\mathbb{E}}[e^{it\mathcal{K}}]-\operatorname*{\mathbb{E}}[e^{it\mathcal{K}^{=1}}]|\leq\operatorname*{\mathbb{E}}[|t\mathcal{K}^{>1}|]

Because \mathcal{K}^{=1} is a sum of i.i.d. Bernoulli random variables, the fact that its characteristic function is close to Gaussian follows from the well-known Berry-Esseen bound. Meanwhile, we will show that, as noted above, \mathcal{K} is concentrated on degree 1 terms and so \mathcal{K}^{>1} is small.

1.2.2. Bounding \varphi_{\mathcal{K}}(t) for slightly larger t

The bound in equation (2) is useful, but crude, and it degrades in usefulness rapidly as t grows. Let X=\mathcal{K}^{=1}=\sum_{e}\hat{\mathcal{K}}(e)\chi_{e} and Y=\mathcal{K}^{>1} (recall that we will expect Y to be small). Then \mathcal{K}=X+Y and we obtain a better approach by using Taylor’s Theorem to rewrite the above estimate as

\displaystyle|\operatorname*{\mathbb{E}}[e^{it\mathcal{K}}]-\operatorname*{\mathbb{E}}[e^{itX}]|=|\operatorname*{\mathbb{E}}[e^{itX}(e^{itY}-1)]|=\operatorname*{\mathbb{E}}\left[e^{itX}\sum_{j=1}^{\ell}\frac{(itY)^{j}}{j!}\right]+O\left(\operatorname*{\mathbb{E}}\left|e^{itX}(tY)^{\ell+1}\right|\right)

Assuming that tY is typically small and that \ell is some large but fixed constant, we will be able to show that e^{itX} and Y^{j} are nearly uncorrelated. To this end we prove a result which, with some omitted terminology, says:

Theorem 3.

Let Z=X+Y, where X=\sum_{i=1}^{n}X_{i} is an i.i.d. sum of Bernoulli random variables. Assume that \hat{\|}Y\hat{\|}_{1}=O(n), and \varphi_{X}(t)=O(\exp(-n^{\Omega(1)})). Then for any fixed \ell

|\varphi_{Z}(t)-\varphi_{X}(t)|=O\left(\left|t\cdot\|Y\|_{2}\right|^{\ell}\right)

1.2.3. Bounding \varphi_{\mathcal{K}}(t) for t even larger still

Several substantial barriers present themselves for adapting the above arguments to bounding \varphi_{\mathcal{K}}(t) for t of order n and beyond. First, in order to apply Theorem 3 profitably, there was the requirement that we consider a random variable of the form X+Y where X=\sum X_{i} is a sum of i.i.d. random variables, and t\|Y\|_{2} is small. This is a source of trouble, as once t>\|Y\|_{2}^{-1} our bound will be worthless. Second, and equally troubling, the characteristic function of the sum of {n\choose 2} i.i.d. Bernoullis, \sum\frac{1}{n}\chi_{e}, is only small for t=O(n), but we require our characteristic function to be small for t\leq\sigma=O(n^{r-1}). It should be noted that this barrier is not artificial. Some subgraph counts, such as the number of disjoint pairs of edges, do obey a central limit theorem by the proofs above, but not a local limit theorem. In these cases the problem occurs because of the breakdown of the characteristic function at t=O(n). Again, this is not accidental, but a consequence of the fact that the number of pairs of disjoint edges is always a square, and therefore almost on a lattice of step size O(n).

The main idea is, very roughly speaking, that the higher order terms of the polynomial \mathcal{K} are responsible for controlling the size of \varphi_{\mathcal{K}}(t) for t large. In particular, when t=n, it is most profitable to look at \mathcal{K}^{=2}, the degree 2 polynomial rather than the \mathcal{K}^{=1} as in the previous arguments. However there is still trouble: what to do with the larger \mathcal{K}^{=1} term? The answer lies in a decoupling trick which allows us to “clear out” the lower order terms. We illustrate with an example extracted from [GK14].

Example 1.1.

Let f_{3} be the triangle counting random variable. Partition the vertex set [n]=U_{0}\cup U_{1} with |U_{0}|=|U_{1}|=n/2. Let B_{0} denote the edges internal to U_{0}, and B_{1} be all other edges. Let X\in\{0,1\}^{B_{0}} and Y\in\{0,1\}^{B_{1}} be random vectors drawn according to the probability distribution G(n,p), and let Y_{0},Y_{1} denote independently drawn copies of Y. Finally, rewrite f_{3}=A(X)+B(Y)+C(X,Y), isolating the monomials in f_{3} which only depend on either X or Y. Then we can bound the characteristic function of f_{3} by the following decoupling trick

\displaystyle|\varphi_{f_{3}}(t)|^{2} \displaystyle=|\operatorname*{\mathbb{E}}_{X,Y}[e^{itf_{3}}]|^{2}=\left|\operatorname*{\mathbb{E}}_{X}\left[e^{itA(X)}\operatorname*{\mathbb{E}}_{Y}e^{it(B(Y)+C(X,Y))}\right]\right|^{2}\leq\operatorname*{\mathbb{E}}_{X}\left|\operatorname*{\mathbb{E}}_{Y}e^{it(B+C)}\right|^{2}
\displaystyle=\operatorname*{\mathbb{E}}_{X}\left(\operatorname*{\mathbb{E}}_{Y_{0}}e^{it(B+C)}\overline{\operatorname*{\mathbb{E}}_{Y_{1}}e^{it(B+C)}}\right)=\operatorname*{\mathbb{E}}_{Y_{0},Y_{1}}\left[e^{it(B(Y_{0})-B(Y_{1}))}\operatorname*{\mathbb{E}}_{X}e^{it(C(X,Y_{0})-C(X,Y_{1}))}\right]
\displaystyle\leq\operatorname*{\mathbb{E}}_{Y_{0},Y_{1}}\left|\operatorname*{\mathbb{E}}_{X}e^{it(C(X,Y_{0})-C(X,Y_{1}))}\right|

In the last line above, the terms A and B, which depended on only one of X or Y, have vanished. Additionally, in the inner expectation we consider C(X,Y_{0})-C(X,Y_{1}) as a polynomial in X for some random but fixed choice of Y_{0},Y_{1}. A moment’s reflection will reveal that the only monomials in C(X,Y) correspond to triangles with two vertices in U_{0} and one vertex in U_{1}. Therefore each triangle represented in C(X,Y) has two edges in B_{1}, but only one in B_{0}. So C(X,Y) is only a polynomial of degree 1 in X.

Then we can use standard methods to analyze \operatorname*{\mathbb{E}}[e^{it[C(X,Y_{0})-C(X,Y_{1})]}], because it is a sum of independent Bernoulli random variables. One last wrinkle in the above that should be mentioned is that the linear function C(X,Y_{0})-C(X,Y_{1}) depends on the samples Y_{0},Y_{1} of edges in B_{1}. But after some work, we can show that with overwhelming probability (in the sampling of Y_{0},Y_{1}), we will have that \operatorname*{\mathbb{E}}[e^{it[C(X,Y_{0})-C(X,Y_{1})]}] is small.

Section 5 develops a version of this decoupling trick for higher degree polynomials. In order to eliminate all monomials of degree at most k-1, we will require a partition of the edge set into k+1 parts and 2k independent samples. One additional difference will be that, upon performing this decoupling trick, we will not always be left with a linear function but rather a polynomial which is highly concentrated on degree 1 terms. But combining some careful analysis with Theorem 3, we will be able to obtain our bounds on \varphi_{\mathcal{K}}(t) in a similar manner to the above example.

1.3. Organization of this Paper

In Section 2 we set up our notation and introduce some facts which will be necessary for the later sections. Section 3 contains the statements and proofs of our main results, modulo the main technical lemmas. In Section 4 we prove Theorem 3, which is our main technical tool for bounding characteristic functions of constant degree polynomials in this paper. In Section 5 we prove our main decoupling Lemma. Section 6 contains our analysis of the properties of the clique counting random variables f_{r} and \mathcal{K}. Finally, Sections 7 through 11 are dedicated to applying the aforementioned Lemma to bounding the characteristic function of \mathcal{K} in different regimes depending on |t|.

2. Preliminaries and Notation

2.1. Definition of our random variables f_{r} and \mathcal{K}

Throughout we will always be working with the probability space G(n,p). We will assume a vertex set of [n]=\{1,2,\ldots,n\}, and a set of indicator random variables x_{e}, one for each edge e\in{[n]\choose 2}. The variable x_{e} will be 1 if edge e is present in our sampled graph and 0 otherwise. All edges will be present independently at random with probability p. We will use \lambda to denote \min(p,1-p). The graph K_{r} is the clique on r vertices; that is, it has r vertices and contains all edges between them. Let f_{r} denote the random variable counting the number of copies of K_{r} in our random graph. We express this as

f_{r}=\sum_{S\equiv K_{r}}x^{S}

where the sum is taken over all {n\choose r} sets of edges S\subset{[n]\choose 2} which are isomorphic to the r-clique K_{r}, and x^{S}:=\prod_{i\in S}x_{i}. We will also frequently refer to the mean and standard deviation of f_{r}. Throughout the paper we will use \mu and \sigma to denote

\mu:=\operatorname*{\mathbb{E}}_{G\sim G(n,p)}f_{r}(G)=p^{r\choose 2}{n\choose r}\qquad\qquad\sigma:=\sqrt{\operatorname*{\mathbb{E}}_{G\sim G(n,p)}[f_{r}^{2}-\mu^{2}]}

Note that \mu and \sigma depend on n, as well as the fixed parameters r and p. Throughout it will be more convenient to work with the normalized copy of f_{r}, which we label \mathcal{K}

\mathcal{K}:=\mathcal{K}_{r}(G):=\frac{f_{r}-\mu}{\sigma}
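To make the objects f_{r}, \mu, and \mathcal{K} concrete, here is a minimal Monte Carlo sketch. It is our own illustration rather than part of the paper's arguments, and the parameter values n=30, p=0.5, r=3 and the sample size are arbitrary choices: it samples G(n,p), counts r-cliques by brute force, and compares the empirical mean of f_{r} with \mu=p^{r\choose 2}{n\choose r}.

```python
import itertools
import math
import random

def clique_count(n, p, r, rng):
    """Sample G(n,p) and count the number of r-cliques by brute force."""
    edges = {e for e in itertools.combinations(range(n), 2) if rng.random() < p}
    return sum(
        all(e in edges for e in itertools.combinations(vertices, 2))
        for vertices in itertools.combinations(range(n), r)
    )

n, p, r = 30, 0.5, 3
rng = random.Random(0)
samples = [clique_count(n, p, r, rng) for _ in range(200)]

mu = p ** math.comb(r, 2) * math.comb(n, r)   # E[f_r]
empirical_mean = sum(samples) / len(samples)
print(f"empirical mean of f_r: {empirical_mean:.1f}, mu: {mu:.1f}")
```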

2.2. Parameters and asymptotics

We will need a fixed but arbitrarily small constant labeled \tau\in(0,1/2r); \tau will be the same constant throughout the entire paper. Additionally, we will always assume that r\geq 3 and p\in(0,1) are fixed constants which do not depend on n. Our results will then apply in the asymptotic setting as n\to\infty. Additionally, all asymptotic notation of the form O(\cdot),~{}o(\cdot), or \Omega(\cdot) will view r,p,\tau as fixed constants.

2.3. p-biased basis

A crucial tool throughout this paper will be the p-biased Fourier basis. Rather than working with the indicator random variable x_{e} directly we will instead apply the following linear transformation

Definition 1 (Fourier Basis).
\chi_{e}:=\chi_{e}(x_{e})=\frac{x_{e}-p}{\sqrt{p(1-p)}}=\begin{cases}-\sqrt{\frac{p}{1-p}}&\mbox{if }x_{e}=0\\ \sqrt{\frac{1-p}{p}}&\mbox{if }x_{e}=1\end{cases}

For S\subset{[n]\choose 2} set \chi_{S}=\prod_{e\in S}\chi_{e}. For S=\varnothing we have \chi_{\varnothing}\equiv 1.

As the x_{e} variables are independent p-biased Bernoulli random variables, we have \operatorname*{\mathbb{E}}[\chi_{S}^{2}]=1 for any S\subset{[n]\choose 2}, and \operatorname*{\mathbb{E}}[\chi_{S}\chi_{T}]=0 for any S\neq T. Therefore the set of functions \{\chi_{S}~{}|~{}S\subset{[n]\choose 2}\} is an orthonormal basis for the space of functions on G(n,p).

We will also use the notation of the Fourier transform. Since the \chi_{S} form an orthonormal basis, for any function f:\{0,1\}^{[n]\choose 2}\to\mathbb{R} we can choose coefficients \hat{f}(S) so that

f=\sum_{S\subset{[n]\choose 2}}\hat{f}(S)\chi_{S}

The coefficients \hat{f}(S) are called the Fourier coefficients of f, and the function \hat{f}:2^{[n]\choose 2}\to\mathbb{R} is called the Fourier transform of f. Moreover we can compute these coefficients by noting that \hat{f}(S)=\operatorname*{\mathbb{E}}[f\chi_{S}]. The degree of a monomial \chi_{S} is |S|, and for an arbitrary function f, we say that its degree equals the degree of the largest monomial in its Fourier expansion. That is, \deg(f)=\max_{\hat{f}(S)\neq 0}|S|.
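As a quick sanity check of the orthonormality and of the identity \hat{f}(S)=\operatorname*{\mathbb{E}}[f\chi_{S}], the following sketch (our own illustration, using an arbitrary toy function on three edge variables) computes these expectations exactly by enumerating the p-biased cube.

```python
import itertools
import math

p = 0.3
def chi(bit):
    # p-biased basis applied to a single 0/1 edge variable
    return (bit - p) / math.sqrt(p * (1 - p))

edges = (0, 1, 2)
def expectation(g):
    # exact expectation over the p-biased cube {0,1}^3
    total = 0.0
    for x in itertools.product((0, 1), repeat=len(edges)):
        weight = math.prod(p if b else (1 - p) for b in x)
        total += weight * g(x)
    return total

def chi_set(S):
    return lambda x: math.prod(chi(x[e]) for e in S)

S, T = (0, 1), (1, 2)
print(expectation(lambda x: chi_set(S)(x) ** 2))              # = 1
print(expectation(lambda x: chi_set(S)(x) * chi_set(T)(x)))   # = 0

# Recovering a Fourier coefficient: build f from known coefficients,
# then check that E[f * chi_S] returns the coefficient of chi_S.
f_hat = {(): 2.0, (0,): 0.7, (0, 1): -0.4}
f = lambda x: sum(c * chi_set(U)(x) for U, c in f_hat.items())
print(expectation(lambda x: f(x) * chi_set((0, 1))(x)))       # = -0.4
```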

Additionally, it will be helpful to refer only to terms of f of a certain degree. For any k\in\mathbb{N} let

\displaystyle f^{=k} \displaystyle:=\sum_{|S|=k}\hat{f}(S)\chi_{S}
\displaystyle f^{>k} \displaystyle:=\sum_{|S|>k}\hat{f}(S)\chi_{S}

The 2-norm of a function \|f\|_{2} and the spectral 1-norm \hat{\|}f\hat{\|}_{1} are defined to be

\displaystyle\|f\|_{2} \displaystyle:=\sqrt{\operatorname*{\mathbb{E}}[f^{2}]}
\displaystyle\hat{\|}f\hat{\|}_{1} \displaystyle:=\sum_{S}|\hat{f}(S)|

Another fact which we will need throughout this paper is Parseval’s Theorem, which allows us to easily compute the variance of a function in terms of its Fourier transform

Theorem 4.

(Parseval’s Theorem)

\operatorname*{\mathbb{E}}[f^{2}]=\sum_{S\subset{[n]\choose 2}}\hat{f}(S)^{2}

Also, it therefore holds that

Var(f)=\operatorname*{\mathbb{E}}[f^{2}]-\operatorname*{\mathbb{E}}[f]^{2}=\operatorname*{\mathbb{E}}[f^{2}]-\hat{f}(\varnothing)^{2}=\sum_{S\neq\varnothing}\hat{f}(S)^{2}

2.4. Function Restrictions

Let f:\{0,1\}^{[n]\choose 2}\to\mathbb{R} be an arbitrary function and H\subset{[n]\choose 2}. Given a setting \beta\in\{0,1\}^{H^{c}} of the edges not in H we define the restricted function f_{\beta}:\{0,1\}^{H}\to\mathbb{R} by

f_{\beta}(\alpha):=f(\alpha,\beta)

Whenever we use this restriction notation the choice of H will be made explicit beforehand, but not referenced in the notation for the sake of compactness. For any S\subset H we may express the Fourier coefficients of \widehat{f_{\beta}}(S) in terms of the coefficients of f as follows:

\widehat{f_{\beta}}(S)=\sum_{T\subset H^{c}}\hat{f}(S\cup T)\chi_{T}(\beta)
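This restriction formula is used repeatedly in Section 8. The sketch below is our own toy check of it, not from the paper: on three edge variables with arbitrary Fourier coefficients, it computes \widehat{f_{\beta}}(S) both from the formula above and directly from the definition \widehat{f_{\beta}}(S)=\operatorname*{\mathbb{E}}_{\alpha}[f_{\beta}(\alpha)\chi_{S}(\alpha)], and checks that the two agree.

```python
import itertools
import math

p = 0.4
def chi(bit):
    return (bit - p) / math.sqrt(p * (1 - p))

H, Hc = (0, 1), (2,)          # free edges and revealed edges
beta = {2: 1}                 # revealed value of edge 2
f_hat = {(): 1.0, (0,): 0.5, (0, 1): -0.3, (1, 2): 0.2, (0, 1, 2): 0.1}

def f(x):                     # evaluate f from its Fourier expansion
    return sum(c * math.prod(chi(x[e]) for e in U) for U, c in f_hat.items())

def coeff_formula(S):
    # sum over T subset of H^c of f_hat(S u T) * chi_T(beta)
    return sum(
        f_hat.get(tuple(sorted(S + T)), 0.0) * math.prod(chi(beta[e]) for e in T)
        for k in range(len(Hc) + 1)
        for T in itertools.combinations(Hc, k)
    )

def coeff_definition(S):      # E_alpha[ f(alpha, beta) chi_S(alpha) ]
    total = 0.0
    for alpha in itertools.product((0, 1), repeat=len(H)):
        weight = math.prod(p if a else (1 - p) for a in alpha)
        x = dict(zip(H, alpha))
        x.update(beta)
        total += weight * f(x) * math.prod(chi(x[e]) for e in S)
    return total

S = (0, 1)
print(coeff_formula(S), coeff_definition(S))   # the two values agree
```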

Additionally, sometimes we wish to restrict by looking at some set of vertices, and only considering edges incident to those vertices. To this end we will define the notion of the vertex support of some set of edges.

Definition 2.

Let S\subset{[n]\choose 2} then \mathrm{supp}(S) is defined to be the set of vertices incident to an edge in S. If we interpret an edge as a set of two vertices, then \mathrm{supp}(S)=\cup_{e\in S}e.

2.5. Characteristic Functions

The bulk of this paper will be concerned with estimating the characteristic function of \mathcal{K}, our normalized copy of f_{r}, the K_{r} counting random variable. First we recall the definition of the characteristic function:

Definition 3.

Let X be a random variable. Then its characteristic function \varphi_{X}:\mathbb{R}\to\mathbb{C} is defined to be

\varphi_{X}(t):=\operatorname*{\mathbb{E}}[e^{itX}]

These are very well studied objects, and they completely determine their associated random variable. In particular, we will need the following inversion formula which specifies the probability distribution of a lattice valued random variable in terms of its characteristic function.

Theorem 5 (Fourier Inversion for Lattices).

Let X be a random variable supported on the lattice \mathcal{L}:=b+h\mathbb{Z}. Let \varphi_{X}(t):=\operatorname*{\mathbb{E}}[e^{itX}], the characteristic function of X. Then for x\in\mathcal{L}

\mathbb{P}(X=x)=\frac{h}{2\pi}\int_{-\frac{\pi}{h}}^{\frac{\pi}{h}}e^{-itx}\varphi_{X}(t)\,dt
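As a small numerical illustration of this formula (our own sketch, using a Binomial variable, which is supported on the lattice 0+1\cdot\mathbb{Z}), point probabilities can be recovered by discretizing the inversion integral:

```python
import cmath
import math

m, p = 10, 0.3     # Binomial(m, p): supported on b + h*Z with b = 0, h = 1
h = 1.0

def phi(t):
    # characteristic function of Binomial(m, p)
    return ((1 - p) + p * cmath.exp(1j * t)) ** m

def inverted_pmf(x, steps=20000):
    # Riemann sum for (h / 2pi) * int_{-pi/h}^{pi/h} e^{-itx} phi(t) dt
    width = 2 * math.pi / h
    dt = width / steps
    total = 0.0
    for k in range(steps):
        t = -math.pi / h + (k + 0.5) * dt
        total += (cmath.exp(-1j * t * x) * phi(t)).real
    return h / (2 * math.pi) * total * dt

for x in range(4):
    exact = math.comb(m, x) * p ** x * (1 - p) ** (m - x)
    print(x, round(inverted_pmf(x), 6), round(exact, 6))
```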

For a proof, see Theorem 4 of chapter 15.3 in volume 2 of Feller [Fel71]. We will also need the following bounds on the characteristic function of the Bernoulli random variable.

Lemma 1.

Let Y be a random variable taking the value 1 with probability p and -1 with probability 1-p. For any |t|<\frac{\pi}{2}, |\operatorname*{\mathbb{E}}[e^{itY}]|<1-\frac{8p(1-p)t^{2}}{\pi^{2}}. Consequently, it also follows that for |t|<\pi we have |\operatorname*{\mathbb{E}}[e^{itx_{e}}]|\leq 1-\frac{2p(1-p)t^{2}}{\pi^{2}} and also for |t|<\sqrt{p(1-p)}\pi we have |\operatorname*{\mathbb{E}}[e^{it\chi_{e}}]|\leq 1-\frac{2t^{2}}{\pi^{2}}.

Proof.

For Y we note that

\displaystyle|\operatorname*{\mathbb{E}}[e^{itY}]|^{2} \displaystyle=|pe^{it}+(1-p)e^{-it}|^{2}=\cos^{2}(t)+(1-2p)^{2}\sin^{2}(t)=1-4p(1-p)\sin^{2}(t)
\displaystyle\leq 1-\frac{16p(1-p)t^{2}}{\pi^{2}}

where the inequality used the fact that |\sin(t)|\geq 2|t|/\pi for |t|\leq\frac{\pi}{2}. The first claimed result now follows by noting that \sqrt{1-x}\leq 1-\frac{x}{2}.

For the subsequent claim we note that for any random variable X, and a,b\in\mathbb{R}

\operatorname*{\mathbb{E}}[e^{it(aX+b)}]=e^{itb}\operatorname*{\mathbb{E}}[e^{% i(at)X}]

Since x_{e}=(Y+1)/2 and \chi_{e}=\frac{x_{e}-p}{\sqrt{p(1-p)}}=\frac{Y+1-2p}{2\sqrt{p(1-p)}}, the result follows. For example:

|\operatorname*{\mathbb{E}}[e^{it\chi_{e}}]|=|\operatorname*{\mathbb{E}}[e^{i\frac{t}{2\sqrt{p(1-p)}}Y}]|\leq 1-\frac{8p(1-p)t^{2}}{4p(1-p)\pi^{2}}=1-\frac{2t^{2}}{\pi^{2}}\qquad∎
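The following short numerical check (our own, with p=0.3 chosen arbitrarily) evaluates |\operatorname*{\mathbb{E}}[e^{it\chi_{e}}]| on a grid and confirms the bound 1-\frac{2t^{2}}{\pi^{2}} for |t|<\sqrt{p(1-p)}\pi:

```python
import cmath
import math

p = 0.3
hi = math.sqrt((1 - p) / p)    # value of chi_e when the edge is present
lo = -math.sqrt(p / (1 - p))   # value of chi_e when the edge is absent

def phi_chi(t):
    # characteristic function of chi_e
    return p * cmath.exp(1j * t * hi) + (1 - p) * cmath.exp(1j * t * lo)

t_max = math.sqrt(p * (1 - p)) * math.pi
for k in range(1, 20):
    t = k * t_max / 20
    assert abs(phi_chi(t)) <= 1 - 2 * t * t / math.pi ** 2 + 1e-12
print("|E[exp(it chi_e)]| <= 1 - 2 t^2 / pi^2 verified on a grid")
```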

2.6. Concentration of low degree polynomials

Throughout, we will lean heavily on the following hypercontractivity bounds, which roughly say that low degree polynomials in the \chi_{e} are reasonably well behaved in terms of moments and concentration. The reference given here is Ryan O’Donnell’s textbook, but the results do not originate there and are due to a series of results by Bonami, Beckner, Borell and others. See the notes in [O’D14] for further reference on the matter.

Theorem 6 ([O’D14] Theorem 10.24).

If f has degree at most d then for any t\geq(2e/\lambda)^{d/2} (recall \lambda=\min(p,1-p)),

\Pr\left[|f(X)|\geq t\|f\|_{2}\right]\leq\lambda^{d}\exp\left(-\frac{d}{2e}\lambda t^{2/d}\right)
Theorem 7 ([O’D14] Theorem 10.21).

If f has degree at most d then for q\geq 1

\operatorname*{\mathbb{E}}[|f|^{2q}]\leq(2q-1)^{dq}\lambda^{d(1-q)}\|f\|_{2}^{2q}

3. Main Results

In this section we give an overview of the proof of our local limit theorems and statistical distance bounds without the proofs of our lemmas and calculations which will follow in subsequent sections. In this section we will use the following notation for the density and characteristic functions of the standard unit normal N(0,1) respectively

\displaystyle\mathcal{N}(x) \displaystyle:=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}
\displaystyle\varphi(t) \displaystyle:=e^{-\frac{t^{2}}{2}}

3.1. Tools For Proving Local Limit Theorems

Our main engine is the Fourier inversion formula given by Theorem 5. Using this theorem, we can obtain our local limit theorem and statistical distance bounds. To this end we cite the following lemmas. For proofs see [Ber16] (although the ideas do not originate there).

Lemma 2.

Let X_{n} be a sequence of random variables supported in the lattices \mathcal{L}_{n}=b_{n}+h_{n}\mathbb{Z}, with characteristic functions \varphi_{n}. Then for any x\in\mathcal{L}_{n}

|h_{n}\mathcal{N}(x)-\mathbb{P}(X_{n}=x)|\leq h_{n}\left(\int_{-\frac{\pi}{h_{n}}}^{\frac{\pi}{h_{n}}}\left|\varphi(t)-\varphi_{n}(t)\right|dt+e^{-\frac{\pi^{2}}{2h_{n}^{2}}}\right)
Lemma 3.

Let X_{n} be a sequence of random variables supported in the lattice \mathcal{L}_{n}:=b_{n}+h_{n}\mathbb{Z}, and with chf’s \varphi_{n}. Assume that the following hold:

  1. \sup_{x\in\mathcal{L}_{n}}|\Pr(X_{n}=x)-h_{n}\mathcal{N}(x)|<\delta_{n}h_{n}

  2. \Pr(|X_{n}|>A)\leq\epsilon_{n}

Then \sum_{x\in\mathcal{L}_{n}}|\Pr(X_{n}=x)-h_{n}\mathcal{N}(x)|\leq 2A\delta_{n}+\epsilon_{n}+\frac{h_{n}}{\sqrt{2\pi}A}e^{\frac{-A^{2}}{2}}.

3.2. Proofs of Main Results

The main calculation of this paper is the following characteristic function bound:

Theorem 8.

Fix 0<\tau<\min(1/2r,~{}1/12). Recall \mathcal{K}:=(f_{r}-\mu)/\sigma, and let \varphi_{\mathcal{K}}(t) be the characteristic function of \mathcal{K}. Then

\int_{-\pi\sigma}^{\pi\sigma}\left|\varphi_{\mathcal{K}}(t)-e^{\frac{-t^{2}}{2}}\right|\,dt=O(n^{-1/2+2\tau})
Proof.

This proof is a combination of our estimates for the characteristic function \varphi_{\mathcal{K}}(t) from sections 7 through 11 . The relevant bounds are

  • For |t|\leq n^{\tau}, we use Lemma 7 to say that |\varphi_{\mathcal{K}}(t)-e^{-t^{2}/2}|=O(n^{-1/2+\tau}).

  • For n^{\tau}<|t|\leq n^{\frac{1}{2}+2\tau} we use Lemma 9 to say that |\varphi_{\mathcal{K}}(t)|=O(n^{-50}).

  • For n^{\frac{1}{2}+2\tau}<|t|\leq n^{\frac{r}{2}-5/12-2\tau} Corollary 3 implies that |\varphi_{\mathcal{K}}(t)|=O(n^{-r^{2}})

  • For n^{\frac{r}{2}-5/12-2\tau}<|t|\leq n^{r-1-2(r-2)\tau} Corollary 4 implies that |\varphi_{\mathcal{K}}(t)|=\exp(-\Omega(n^{\tau}/2r^{2})).

  • For n^{r-1-2(r-2)\tau}<|t|\leq\pi\sigma, Lemma 18 tells us that |\varphi_{\mathcal{K}}(t)|=\exp(-\Omega(n^{1-2(r-2)\tau})).

Note that in order for the last item on this list to be an effective bound, we require \tau<\frac{1}{2r-2}, which is satisfied. Combining all of these pieces we find that

\displaystyle\int_{-\pi\sigma}^{\pi\sigma}\left|\varphi_{\mathcal{K}}(t)-e^{\frac{-t^{2}}{2}}\right|\,dt\leq\int_{-n^{\tau}}^{n^{\tau}}\frac{|t|}{\sqrt{n}}dt+2\int_{n^{\tau}}^{\pi\sigma}e^{-\frac{t^{2}}{2}}dt+2\int_{n^{\tau}<|t|\leq\pi\sigma}|\varphi_{\mathcal{K}}(t)|dt=O(n^{-\frac{1}{2}+2\tau})

Theorem 1 is now just a restatement of the following corollary:

Corollary 1.

Let \mathcal{L}_{n}:=\frac{1}{\sigma}(\mathbb{Z}-\mu). Then for any x\in\mathcal{L}_{n}

\left|\mathbb{P}(\mathcal{K}=x)-\frac{\mathcal{N}(x)}{\sigma}\right|=O\left(\frac{1}{\sigma n^{\frac{1}{2}-2\tau}}\right)
Proof.

Apply Lemma 2 to \mathcal{K} (where h_{n}=1/\sigma and b_{n}=\mu), combined with the estimate for the characteristic function of \mathcal{K} given by Theorem 8. ∎

Next we prove that f_{r} and the discrete Gaussian are close in the \ell^{1} metric as well.

Theorem 2.
\sum_{m\in\mathbb{N}}\left|\Pr(f_{r}=m)-\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(m-\mu)^{2}}{2\sigma^{2}}\right)\right|=O(n^{-\frac{1}{2}+2\tau})
Proof.

It is equivalent to show that for \mathcal{L}=\frac{1}{\sigma}(\mathbb{Z}-\mu) we have

\sum_{x\in\mathcal{L}_{n}}\left|\Pr(\mathcal{K}=x)-\frac{1}{\sigma}\mathcal{N}(x)\right|=O(n^{-1/2+2\tau})

This follows from Lemma 3. We may set \delta_{n}=O(n^{-1/2+2\tau}) by Corollary 1. We may also set A=\log(n)^{2} and take \epsilon_{n}=\Pr(|\mathcal{K}|\geq\log(n)^{2})=O(1/n) (this can be shown in several ways; Theorem 6 will suffice for our purposes but much stronger tools exist). The main term will then be

\sum_{x\in\mathcal{L}_{n}}\left|\Pr(\mathcal{K}=x)-\frac{1}{\sigma}\mathcal{N}(x)\right|\leq O\left(\log^{2}(n)n^{-1/2+2\tau}+\frac{1}{n}+n^{-\omega(1)}\right)

Since choice of \tau was arbitrary, this is sufficient to prove our result. ∎

4. Proof of Theorem 3

Let X=\sum_{i=1}^{n}a_{i}X_{i} be a weighted sum of independent p-biased, mean 0, variance 1 Bernoulli random variables. Let Y be a degree d polynomial in the X_{i} such that Y contains no degree 1 monomials. Assume \sum_{i=1}^{n}a_{i}^{2}=T, and a_{i}^{2}\leq\delta for all i. Set \eta:=\|Y\|_{2} and \epsilon:=\exp[-2t^{2}(T-\delta d\ell)/\pi^{2}]. Let \varphi_{X}(t):=\operatorname*{\mathbb{E}}[e^{itX}] and \varphi(t):=\operatorname*{\mathbb{E}}[e^{it(X+Y)}] be the characteristic functions of X and X+Y respectively.

Theorem 3.

Fix some \ell\in\mathbb{N}. Then for all t such that

|t|<\min\left(\sqrt{p(1-p)}\pi\delta^{-1/2},~{}~{}(2e)^{\frac{\ell}{2}}\lambda^{-\frac{\ell}{2}}\eta^{-1}\right)

it follows that

\displaystyle|\varphi(t)-\varphi_{X}(t)| \displaystyle\leq\ell\epsilon\left(1+\left|t\hat{\|}Y\hat{\|}_{1}\right|^{\ell}\right)+\frac{|t\eta|^{\ell+1}}{(\ell+1)!}\ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)}+\lambda^{d}\exp\left[-\frac{d\lambda}{2e}\left|t\eta\right|^{-2/d}\right]
\displaystyle\qquad+|t\eta|^{\frac{\ell+1}{2}}\lambda^{\frac{3d-\ell}{4}}(\ell+1)\exp\left[-\frac{d\lambda}{4e}\left|t\eta\right|^{-2/d}\right]
Proof.
\displaystyle\varphi(t)-\varphi_{X}(t) \displaystyle=\operatorname*{\mathbb{E}}[e^{it(X+Y)}]-\operatorname*{\mathbb{E}}[e^{itX}]=\operatorname*{\mathbb{E}}[e^{it(X+Y)}-e^{itX}]=\operatorname*{\mathbb{E}}[e^{itX}\left(e^{itY}-1\right)]

By hypothesis we have that \|tY\|_{2}\leq(2e)^{\ell/2}\lambda^{-\ell/2}, so an application of Theorem 6 yields

\Pr(|Yt|\geq 1)\leq\lambda^{d}\exp\left(-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right)

And when |Yt|\leq 1 we can use the degree \ell Taylor polynomial of e^{itY} to say that

\displaystyle\left|e^{itY}-1-\left(\sum_{j=1}^{\ell}\frac{(itY)^{j}}{j!}\right)\right|\leq\frac{e|tY|^{\ell+1}}{(\ell+1)!}

To account for the unlikely event that tY is too large to use this Taylor bound, let A be the event that |tY|\geq 1. Now let Z be the error random variable Z:=1_{A}\left(\left|e^{itY}-\sum_{j=0}^{\ell}\frac{(itY)^{j}}{j!}\right|-\frac{e|tY|^{\ell+1}}{(\ell+1)!}\right). So now we can say that in all cases

\left|e^{itY}-1-\left(\sum_{j=1}^{\ell}\frac{(itY)^{j}}{j!}\right)\right|\leq\frac{e|tY|^{\ell+1}}{(\ell+1)!}+Z

We show that |Z| has small expectation. Uniformly, |Z|\leq 1+(\ell+1)|tY|^{\ell+1}. By Theorem 6:

\Pr(Z\neq 0)=\Pr(A)=\Pr(|tY|\geq 1)\leq\lambda^{d}\exp\left(-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right)

Because Y is a degree d polynomial with \|Y\|_{2}=\eta by Theorem 7

(3) \operatorname*{\mathbb{E}}|Y|^{\ell+1}=\|Y\|_{\ell+1}^{\ell+1}\leq\ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)}\eta^{\ell+1}

So it follows by Cauchy-Schwarz that

\displaystyle\operatorname*{\mathbb{E}}|Z| \displaystyle\leq\operatorname*{\mathbb{E}}\left[1_{A}\cdot(1+(\ell+1)|tY|^{\ell+1})\right]\leq\Pr(A)+\|1_{A}\|_{2}\|(\ell+1)|tY|^{\ell+1}\|_{2}
\displaystyle\leq\lambda^{d}\exp\left[-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right]+\left(\lambda^{d}\exp\left[-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right]\right)^{\frac{1}{2}}\left((\ell+1)^{2}t^{\ell+1}\ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)}\eta^{\ell+1}\right)^{\frac{1}{2}}
(4) \displaystyle=\lambda^{d}\exp\left[-\frac{d\lambda}{2e}\left(t\eta\right)^{-2/d}\right]+(t\eta)^{\frac{\ell+1}{2}}\lambda^{\frac{3d-\ell}{4}}(\ell+1)\exp\left[-\frac{d\lambda}{4e}\left(t\eta\right)^{-2/d}\right]

Next, we analyze the Taylor polynomial of e^{itY} by splitting Y^{j} into a sum of monomials. Let \mathcal{M}_{j} be the set of monomials supported in Y^{j} and say

Y^{j}=\sum_{m\in\mathcal{M}_{j}}a_{m}m

as a result we may write

\operatorname*{\mathbb{E}}\mathopen{}\mathclose{{}\left[e^{itX}\frac{t^{j}Y^{j% }}{j!}}\right]=\frac{t^{j}}{j!}\sum_{m\in\mathcal{M}_{j}}a_{m}\operatorname*{% \mathbb{E}}\mathopen{}\mathclose{{}\left[e^{itX}m}\right]

We now examine this one monomial at a time. Let M denote the set of variables X_{i} appearing in some fixed monomial m. Note that because Y has degree d and m is in Y^{j} we have that |M|\leq dj\leq d\ell. Then:

(5) \displaystyle\operatorname*{\mathbb{E}}[e^{itX}m]=\operatorname*{\mathbb{E}}_{% M}\operatorname*{\mathbb{E}}_{M^{c}}[me^{itX}]=\operatorname*{\mathbb{E}}_{M}% me^{it\sum_{i\in M}a_{i}X_{i}}\operatorname*{\mathbb{E}}_{M^{c}}[e^{it\sum_{i% \in M^{c}}a_{i}X_{i}}]

So we have that

\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}[e^{itX}m]}\right|=% \mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{M}me^{it\sum_{i\in M% }a_{i}X_{i}}\operatorname*{\mathbb{E}}_{M^{c}}[e^{it\sum_{i\in M^{c}}a_{i}X_{i% }}]}\right|\leq|m|\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{M^% {c}}[e^{it\sum_{i\in M^{c}}a_{i}X_{i}}]}\right|

Using the hypothesis |a_{i}t|<\sqrt{p(1-p)}\pi, we obtain from Lemma 1 that

\displaystyle\left|\operatorname*{\mathbb{E}}_{M^{c}}[e^{it\sum_{i\in M^{c}}a_{i}X_{i}}]\right|= \displaystyle\prod_{i\in M^{c}}\left|\operatorname*{\mathbb{E}}[e^{ita_{i}X_{i}}]\right|\leq\prod_{i\in M^{c}}\left(1-\frac{2}{\pi^{2}}t^{2}a_{i}^{2}\right)\leq e^{-\frac{2}{\pi^{2}}t^{2}\sum_{i\in M^{c}}a_{i}^{2}}
\displaystyle\leq e^{-\frac{2}{\pi^{2}}t^{2}(T-\delta|M|)}\leq e^{-\frac{2}{\pi^{2}}t^{2}(T-\delta d\ell)}
\displaystyle=\epsilon

So plugging this back into equation 5 we find that

\left|\operatorname*{\mathbb{E}}\left[e^{itX}\frac{t^{j}Y^{j}}{j!}\right]\right|\leq\sum_{m\in\mathcal{M}_{j}}\frac{|t|^{j}}{j!}|a_{m}|\left|\operatorname*{\mathbb{E}}\left[e^{itX}m\right]\right|\leq\frac{|t|^{j}}{j!}\sum_{m\in\mathcal{M}_{j}}|a_{m}m|\epsilon\leq\frac{|t|^{j}}{j!}\epsilon C_{p}^{dj}\hat{\|}Y^{j}\hat{\|}_{1}\leq\frac{|tC_{p}^{d}|^{j}}{j!}\epsilon\hat{\|}Y\hat{\|}_{1}^{j}

Where the last inequality uses Lemma 4 and the constant

C_{p}=\mathopen{}\mathclose{{}\left(1+\frac{|1-2p|}{\sqrt{p(1-p)}}}\right)\|% \chi_{p}\|_{\infty}=\mathopen{}\mathclose{{}\left(1+\frac{|1-2p|}{\sqrt{p(1-p)% }}}\right)\sqrt{\frac{1-\lambda}{\lambda}}

Summing over all j from 1 to \ell

\displaystyle|\varphi(t)-\varphi_{X}(t)| \displaystyle=\left|\operatorname*{\mathbb{E}}\left[e^{itX}(e^{itY}-1)\right]\right|\leq\left|\sum_{j=1}^{\ell}\operatorname*{\mathbb{E}}\left[e^{itX}\frac{(itY)^{j}}{j!}\right]\right|+\operatorname*{\mathbb{E}}\left|e^{itX}\left(\frac{e|tY|^{\ell+1}}{(\ell+1)!}+Z\right)\right|
\displaystyle\leq\left(\sum_{j=1}^{\ell}\frac{C_{p}^{dj}|t|^{j}}{j!}\epsilon\hat{\|}Y\hat{\|}_{1}^{j}\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\operatorname*{\mathbb{E}}|Y|^{\ell+1}+\operatorname*{\mathbb{E}}|Z|
\displaystyle\leq\ell\epsilon\left(1+\left(tC_{p}^{d}\hat{\|}Y\hat{\|}_{1}\right)^{\ell}\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\operatorname*{\mathbb{E}}|Y|^{\ell+1}+\operatorname*{\mathbb{E}}|Z|

Where the 1+\ldots inside the first parenthesis on the last line is to account for the possibility that t\hat{\|}Y\hat{\|}_{1}\leq 1. We may use equations 3 and 4 to bound the two error terms in the above equation. Putting all of the estimates together yields

\displaystyle|\varphi(t)-\varphi_{X}(t)| \displaystyle=\left|\operatorname*{\mathbb{E}}\left[e^{itX}(e^{itY}-1)\right]\right|\leq\left|\sum_{j=1}^{\ell}\operatorname*{\mathbb{E}}\left[e^{itX}\frac{(itY)^{j}}{j!}\right]\right|+\operatorname*{\mathbb{E}}\left|e^{itX}\left(\frac{e|tY|^{\ell+1}}{(\ell+1)!}+Z\right)\right|
\displaystyle\leq\left(\sum_{j=1}^{\ell}\frac{|t|^{j}}{j!}\epsilon\hat{\|}Y\hat{\|}_{1}^{j}\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\operatorname*{\mathbb{E}}|Y|^{\ell+1}+\operatorname*{\mathbb{E}}|Z|
\displaystyle\leq\ell\epsilon\left(1+\left|t\hat{\|}Y\hat{\|}_{1}\right|^{\ell}\right)+\frac{|t|^{\ell+1}}{(\ell+1)!}\operatorname*{\mathbb{E}}|Y|^{\ell+1}+\operatorname*{\mathbb{E}}|Z|
\displaystyle\leq\ell\epsilon\left(1+\left|t\hat{\|}Y\hat{\|}_{1}\right|^{\ell}\right)+\frac{|t\eta|^{\ell+1}}{(\ell+1)!}\ell^{\frac{d(\ell+1)}{2}}\lambda^{d\left(\frac{1-\ell}{2}\right)}+\lambda^{d}\exp\left[-\frac{d\lambda}{2e}\left|t\eta\right|^{-2/d}\right]
\displaystyle\qquad+|t\eta|^{\frac{\ell+1}{2}}\lambda^{\frac{3d-\ell}{4}}(\ell+1)\exp\left[-\frac{d\lambda}{4e}\left|t\eta\right|^{-2/d}\right]

Lemma 4.

Let f be a polynomial. For any j\in\mathbb{N} we have that

\hat{\|}f^{j}\hat{\|}_{1}\leq\left[1+\frac{|1-2p|}{\sqrt{p(1-p)}}\right]^{j-1}\hat{\|}f\hat{\|}_{1}^{j}
Proof.

We prove this by induction on j. For j=1 the statement is trivial. Now assume it is true for some arbitrary j. Let \mathcal{M} be the set of monomials supported in f and f=\sum_{\mathcal{M}}a_{m}m. Then

\displaystyle\hat{\|}f^{j+1}\hat{\|}_{1} \displaystyle=\hat{\|}\sum_{m\in\mathcal{M}}a_{m}mf^{j}\hat{\|}_{1}\leq\sum_{m% \in\mathcal{M}}|a_{m}|\hat{\|}mf^{j}\hat{\|}_{1}\leq\sum_{m\in\mathcal{M}}|a_{% m}|\mathopen{}\mathclose{{}\left(1+\frac{|1-2p|}{\sqrt{p(1-p)}}}\right)\hat{\|% }f\hat{\|}_{1}^{j}
\displaystyle\leq\mathopen{}\mathclose{{}\left[1+\frac{|1-2p|}{\sqrt{p(1-p)}}}% \right]^{j}\hat{\|}f\hat{\|}_{1}^{j+1}

completing the induction and the proof. ∎

5. Decoupling and Polynomial Degree Reduction

In this section we set up our decoupling technique for reducing the degree of polynomials in characteristic function computations. For an example illustrating the idea, see Example 1.1 in the introduction.

5.1. The \alpha operator

Partition the edge set of {[n]\choose 2} into k+1 parts B_{0},B_{1},\ldots,B_{k}. Let X\in\{0,1\}^{B_{0}} be the indicator random variables denoting which edges from B_{0} are in our graph sampled from G(n,p). Similarly for i\in[k] let Y_{i} denote the indicator random variables for the edges in B_{i}. Also, for any i\in[k] let Y_{i}^{0} and Y_{i}^{1} denote independent random variables with the same marginals as Y_{i}. We let B_{i}^{0} and B_{i}^{1} denote separate copies of the edges in B_{i}, so that we may say Y_{i}^{j}\in\{0,1\}^{B_{i}^{j}}. For v\in\{0,1\}^{k} let Y^{v}:=(Y_{1}^{v_{1}},Y_{2}^{v_{2}},\ldots,Y_{k}^{v_{k}}).

One may interpret this as choosing two random samples Y_{i}^{0} and Y_{i}^{1} from G(n,p) for the edges in each B_{i}. Then X,Y^{v} would correspond to a sampling of every edge in the graph, with which of the copies of each Y_{i} you query controlled by the binary vector v.

Finally, set \mathcal{B}:=\cup_{i=1}^{k}\cup_{j=0}^{1}B_{i}^{j}, and let \mathbf{Y}:=(Y_{1}^{0},Y_{1}^{1},Y_{2}^{0},Y_{2}^{1},\ldots,Y_{k}^{0},Y_{k}^{1})\in\{0,1\}^{\mathcal{B}}. For v\in\{0,1\}^{k} let |v| denote the Hamming weight of v.

We are now ready to define our \alpha operator.

Definition 4.

Given the partition {[n]\choose 2}=B_{0}\cup B_{1}\cup\ldots\cup B_{k} as above, define the operator \alpha on functions of the form f(X,Y_{1},\ldots,Y_{k}), which outputs the function \alpha(f):\{0,1\}^{B_{0}\cup\mathcal{B}}\to\mathbb{R} given by

\displaystyle\alpha(f)(X,\mathbf{Y}) \displaystyle:=\sum_{\textbf{v}\in\{0,1\}^{k}}(-1)^{|v|}f(X,Y^{v})
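In code, \alpha is simply an alternating sum over which copy of each block is queried. The following sketch (our own illustration, for a generic function f given as a Python callable) mirrors Definition 4:

```python
import itertools

def alpha(f, X, Y_pairs):
    """Alternating sum over which copy of each Y_i is queried (Definition 4).

    f       -- a function f(X, Y_1, ..., Y_k)
    X       -- the variables on the edges of B_0
    Y_pairs -- [(Y_1^0, Y_1^1), ..., (Y_k^0, Y_k^1)], independent copies per block
    """
    k = len(Y_pairs)
    total = 0.0
    for v in itertools.product((0, 1), repeat=k):
        sign = (-1) ** sum(v)
        total += sign * f(X, *(pair[bit] for pair, bit in zip(Y_pairs, v)))
    return total
```

For k=1 this reduces to f(X,Y_{1}^{0})-f(X,Y_{1}^{1}), exactly the difference that appeared in Example 1.1.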

Note that \alpha is a linear operator, and in the rest of this section we will describe its action. First we define a pair of terms which will be useful in our analysis.

Definition 5 (Rainbow Sets).

We call a set S\subset{[n]\choose 2} rainbow if S\cap B_{i}\neq\varnothing for all 1\leq i\leq k.

Definition 6 (Flattening from \mathcal{B} to {[n]\choose 2}).

Given a set S\subset B_{0}\cup\mathcal{B} define

\mbox{flat}(S):=\{e\in{[n]\choose 2}\mathrm{~{}s.t.~{}}e^{0}\mbox{ or }e^{1}% \in S\}

That is, \mbox{flat}(S) takes in a subset of \mathcal{B}, and outputs the set of edges in {[n]\choose 2} relevant to S (information about which copy or both of e were in S is omitted).

5.2. Action of \alpha on \chi_{S}

In this subsection we compute the action of \alpha on our basis functions \chi_{S}. Write S=S_{0}\cup S_{1}\cup\ldots\cup S_{k} where S_{i}=S\cap B_{i}. Because \chi_{S}:=\prod_{i=0}^{k}\chi_{S_{i}} we can compute that

\displaystyle\alpha(S) \displaystyle:=\alpha(\chi_{S})=\sum_{\textbf{v}\in\{0,1\}^{k}}(-1)^{|v|}\chi_% {S}(X,Y^{v})=\sum_{\textbf{v}\in\{0,1\}^{k}}\chi_{S_{0}}(X)\prod_{i=1}^{k}(-1)% ^{v_{i}}\chi_{S_{i}}(Y_{i}^{v_{i}})
\displaystyle=\chi_{S_{0}}(X)\prod_{i=1}^{k}\mathopen{}\mathclose{{}\left(\chi% _{S_{i}}(Y_{i}^{0})-\chi_{S_{i}}(Y_{i}^{1})}\right)

If S is not rainbow, then for some i we have S_{i}=\varnothing, so it follows from the above product form that \alpha(S)\equiv 0. Furthermore, if S is rainbow, then

\displaystyle\operatorname*{\mathbb{E}}\alpha(S) \displaystyle=\hat{\alpha}(\varnothing)=0
\displaystyle\|\alpha(S)\|_{2}^{2} \displaystyle=\sum_{T\subset\mathbf{Y}}\hat{\alpha}(T)^{2}=\sum_{v\in\{0,1\}^{% k}}(-1)^{2|v|}=2^{k}

We can also compute that for S,T, both rainbow we have

\displaystyle\operatorname*{\mathbb{E}}[\alpha(\chi_{S})\alpha(\chi_{T})] \displaystyle=\operatorname*{\mathbb{E}}\left[\chi_{S_{0}}\chi_{T_{0}}\right]\prod_{i=1}^{k}\operatorname*{\mathbb{E}}\left[\left(\chi_{S_{i}}(Y_{i}^{0})-\chi_{S_{i}}(Y_{i}^{1})\right)\left(\chi_{T_{i}}(Y_{i}^{0})-\chi_{T_{i}}(Y_{i}^{1})\right)\right]
\displaystyle=\begin{cases}2^{k}&\mbox{if }S_{i}=T_{i}~{}\forall i\\ 0&\mbox{else}\end{cases}

So we have shown that the linear operator \alpha is orthogonal for S rainbow, and 0 on S nonrainbow. In particular, if f=\sum_{S}\hat{f}(S)\chi_{S} then

\displaystyle\|\alpha(f)\|_{2}^{2} \displaystyle=\operatorname*{\mathbb{E}}\mathopen{}\mathclose{{}\left[% \mathopen{}\mathclose{{}\left(\sum_{S\subset E}\alpha(\chi_{S})\hat{f}(S)}% \right)^{2}}\right]=\sum_{S\mbox{ rainbow}}2^{k}\hat{f}(S)^{2}

We will also need to examine products of the form \alpha(\chi_{S})\alpha(\chi_{T}) for distinct sets S,T\subset{[n]\choose 2}.

Lemma 5.

Let S,T\subset{[n]\choose 2} and set \gamma=\frac{1-2p}{\sqrt{p(1-p)}}. For U\subset B_{0}\cup\mathcal{B} we have

\displaystyle\mathopen{}\mathclose{{}\left|\widehat{\alpha(\chi_{S})\alpha(% \chi_{T})}(U)}\right|\leq\begin{cases}0&\mbox{if }S\Delta T\not\subset\mbox{% flat}(U)\mbox{ or }\mbox{flat}(U)\not\subset S\cup T\\ \max\mathopen{}\mathclose{{}\left(\gamma^{|U|},1}\right)&\mbox{if }S\Delta T% \subset\mbox{flat}(U)\subset S\cup T\end{cases}

The proof is mostly a calculation and is contained in appendix A.

5.3. \alpha and decoupling

We are now ready to state our main Lemma of this section.

Lemma 6.

Let f:=f(X,Y) be a function of the independent random variables X,Y_{1},\ldots,Y_{k}. Let \varphi(t)=\operatorname*{\mathbb{E}}[e^{itf(X,Y)}] the characteristic function of f. Then we have

|\varphi(t)|^{2^{k}}\leq\operatorname*{\mathbb{E}}_{\mathbf{Y}}\mathopen{}% \mathclose{{}\left|\operatorname*{\mathbb{E}}_{X}e^{it\alpha(f)(X,\mathbf{Y})}% }\right|
Proof.

We prove this statement inductively on k. For k=0 the proof is trivial, and for k=1 the proof is in Example 1.1.

Let \tilde{X}=(X,Y_{k}) and \tilde{Y}=Y_{1},\ldots,Y_{k-1}. Then by the inductive hypothesis:

|\varphi(t)|^{2^{k-1}}\leq\operatorname*{\mathbb{E}}_{\tilde{Y}}\mathopen{}% \mathclose{{}\left|\operatorname*{\mathbb{E}}_{\tilde{X}}e^{it\sum_{v\in\{0,1% \}^{k-1}}(-1)^{|v|}f(\tilde{X},\tilde{Y}^{v})}}\right|

Applying Cauchy-Schwarz to the inner term of the above expectation we find

\displaystyle\big{|}\operatorname*{\mathbb{E}}_{X,Y_{k}} \displaystyle e^{it\sum_{\textbf{v}\in\{0,1\}^{k-1}}(-1)^{|v|}f(X,Y_{k},\tilde% {Y}^{v})}\big{|}^{2}\leq\operatorname*{\mathbb{E}}_{X}\mathopen{}\mathclose{{}% \left|\operatorname*{\mathbb{E}}_{Y_{k}}e^{it\sum_{v\in\{0,1\}^{k-1}}(-1)^{|v|% }f(X,Y_{k},\tilde{Y}^{v})}}\right|^{2}
\displaystyle=\operatorname*{\mathbb{E}}_{X}\mathopen{}\mathclose{{}\left(% \operatorname*{\mathbb{E}}_{Y_{k}^{0}}e^{it\sum_{v\in\{0,1\}^{k-1}}(-1)^{|v|}f% (X,Y_{k},\tilde{Y}^{v})}}\right)\overline{\mathopen{}\mathclose{{}\left(% \operatorname*{\mathbb{E}}_{Y_{k}^{1}}e^{it\sum_{{v}\in\{0,1\}^{k-1}}(-1)^{|v|% }f(X,Y_{k},\tilde{Y}^{v})}}\right)}
\displaystyle=\operatorname*{\mathbb{E}}_{X,Y_{k}^{0},Y_{k}^{1}}e^{it\sum_{v% \in\{0,1\}^{k}}f(X,Y^{v})(-1)^{|v|}}=\operatorname*{\mathbb{E}}_{X,Y_{k}^{0},Y% _{k}^{1}}e^{it\alpha(f)}

So a second application of Cauchy-Schwarz now tells us that

\displaystyle|\varphi(t)|^{2^{k}} \displaystyle\leq\mathopen{}\mathclose{{}\left(\operatorname*{\mathbb{E}}_{% \tilde{Y}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{\tilde{X}}% e^{it\sum_{v\in\{0,1\}^{k-1}}(-1)^{|v|}f(\tilde{X},\tilde{Y}^{v})}}\right|}% \right)^{2}\leq\operatorname*{\mathbb{E}}_{\tilde{Y}}\mathopen{}\mathclose{{}% \left|\operatorname*{\mathbb{E}}_{\tilde{X}}e^{it\sum_{v\in\{0,1\}^{k-1}}(-1)^% {|v|}f(\tilde{X},\tilde{Y}^{v})}}\right|^{2}
\displaystyle\leq\operatorname*{\mathbb{E}}_{\tilde{Y}}\operatorname*{\mathbb{% E}}_{X,Y_{k}^{0},Y_{k}^{1}}e^{it\alpha(f)}\leq\operatorname*{\mathbb{E}}_{% \mathbf{Y}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{X}e^{it% \alpha(f)(X,\mathbf{Y})}}\right|

6. Properties of the K_{r} counting function

In this section we compute the Fourier transform of f_{r}, the K_{r} counting function, and its normalized brother \mathcal{K}. Recall the definition of f_{r} by

f_{r}=\sum_{\begin{subarray}{c}H\subset{[n]\choose 2}\\ H\equiv K_{r}\end{subarray}}1_{H}

where the sum is over all copies of K_{r} in the complete graph on [n]. Meanwhile for each individual r-clique H, its indicator function is given by

1_{H}(X)=\prod_{e\in H}x_{e}=\prod_{e\in H}\left(\sqrt{p(1-p)}\chi_{e}+p\right)=p^{r\choose 2}\sum_{S\subset H}\left(\frac{1-p}{p}\right)^{|S|/2}\chi_{S}

Summing over all possible choices of H we find that \widehat{f_{r}}(S) is p^{r\choose 2}(\frac{1-p}{p})^{|S|/2} multiplied by the number of different r-cliques containing all of the edges in S. But we know any set S supported on t vertices appears in exactly {n-t\choose r-t} r-cliques. Therefore

(6) f_{r}=p^{r\choose 2}\sum_{t=0}^{r}{n-t\choose r-t}\sum_{|\mathrm{supp}(S)|=t}\left(\frac{1-p}{p}\right)^{|S|/2}\chi_{S}

Combining this formula with Theorem 4 allows us to quickly compute \sigma^{2}, the variance of f_{r}(G) when G is drawn from G(n,p).

(7) \displaystyle\sigma^{2}:=Var_{G\sim G(n,p)}(f_{r}(G)) \displaystyle=\sum_{S\neq\varnothing}\widehat{f_{r}}(S)^{2}=p^{r(r-1)}\sum_{t=2}^{r}{n-t\choose r-t}^{2}\sum_{|\mathrm{supp}(S)|=t}\left(\frac{1-p}{p}\right)^{|S|}
\displaystyle=\frac{p^{r(r-1)-1}(1-p)}{2(r-2)!^{2}}n^{2r-2}+O\left(n^{2r-3}\right)
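The leading order term in equation (7) can be cross-checked numerically. The sketch below is our own: rather than the Fourier route used above, it computes \mathrm{Var}(f_{r}) exactly by grouping pairs of r-cliques according to how many vertices they share, and compares the result with the stated leading term.

```python
import math

def clique_count_variance(n, r, p):
    """Exact Var(f_r): sum over pairs of r-sets sharing at least one edge."""
    m = math.comb(r, 2)                       # edges in one r-clique
    var = 0.0
    for s in range(2, r + 1):                 # number of shared vertices
        pairs = math.comb(n, r) * math.comb(r, s) * math.comb(n - r, r - s)
        shared_edges = math.comb(s, 2)
        var += pairs * (p ** (2 * m - shared_edges) - p ** (2 * m))
    return var

n, r, p = 200, 3, 0.5
exact = clique_count_variance(n, r, p)
leading = p ** (r * (r - 1) - 1) * (1 - p) / (2 * math.factorial(r - 2) ** 2) * n ** (2 * r - 2)
print(exact, leading, exact / leading)        # ratio tends to 1 as n grows
```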

As a reminder recall that \mathcal{K}=\frac{f_{r}-\mu}{\sigma} is the normalized (mean 0, variance 1) rescaling of f_{r}. We note that

(8) \displaystyle W^{1}(\mathcal{K}) \displaystyle=\sum_{|S|=1}\hat{\mathcal{K}}(S)^{2}=1-O\mathopen{}\mathclose{{}% \left(\frac{1}{n}}\right)
(9) \displaystyle W^{>1}(\mathcal{K}) \displaystyle=\sum_{|S|\geq 2}\hat{\mathcal{K}}(S)^{2}=\Theta\mathopen{}% \mathclose{{}\left(\frac{1}{n}}\right)
(10) \displaystyle\hat{\mathcal{K}}(S) \displaystyle=\frac{p^{r\choose 2}\mathopen{}\mathclose{{}\left(\frac{1-p}{p}}% \right)^{|S|/2}{n-|\mathrm{supp}(S)|\choose r-|\mathrm{supp}(S)|}}{\sigma}=% \Theta\mathopen{}\mathclose{{}\left(n^{1-|\mathrm{supp}(S)|}}\right)

Where the above formula for \hat{\mathcal{K}}(S) is valid for all S\neq\varnothing (and \hat{\mathcal{K}}(\varnothing)=0).

7. Bound for |t|\leq n^{\tau}

In this section we prove the following Lemma:

Lemma 7.

For all t=o(n) we have |\varphi_{\mathcal{K}}(t)-e^{-t^{2}/2}|=O\mathopen{}\mathclose{{}\left(\frac{t% }{\sqrt{n}}}\right).

In order to do this, we will need the Berry-Esseen theorem. The following lemma is a restatement of Lemma 1 of Chapter V in Petrov’s Sums of Independent Random Variables [Pet75].

Lemma 8.

Let Q^{2}=\frac{1}{{n\choose 2}} and set X=\sum_{e\in{[n]\choose 2}}Q\chi_{e}. It is the mean 0 variance 1 sum of independent random variables. Further define L_{n} to be

L_{n}:={n\choose 2}\operatorname*{\mathbb{E}}[|Q\chi_{e}|^{3}]=\frac{p^{2}+(1-% p)^{2}}{\sqrt{{n\choose 2}p(1-p)}}=\Theta_{p}\mathopen{}\mathclose{{}\left(1/n% }\right)

then for t\leq\frac{1}{4L_{n}} we have that

(11) \left|\operatorname*{\mathbb{E}}[e^{itX}]-e^{-\frac{t^{2}}{2}}\right|\leq 16L_{n}|t|^{3}e^{\frac{-t^{2}}{3}}
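For intuition, inequality (11) can be checked directly, since \varphi_{X}(t) is just the {n\choose 2}-th power of the characteristic function of a single scaled \chi_{e}. The following small sketch is our own; n=40 and the grid of t values are arbitrary choices.

```python
import cmath
import math

n, p = 40, 0.5
m = math.comb(n, 2)                      # number of edges
Q = 1 / math.sqrt(m)
L_n = (p ** 2 + (1 - p) ** 2) / math.sqrt(m * p * (1 - p))

hi = math.sqrt((1 - p) / p)
lo = -math.sqrt(p / (1 - p))

def phi_X(t):
    # X is a sum of m i.i.d. copies of Q*chi_e, so phi_X is an m-th power
    single = p * cmath.exp(1j * t * Q * hi) + (1 - p) * cmath.exp(1j * t * Q * lo)
    return single ** m

for t in (0.5, 1.0, 2.0, 1 / (4 * L_n)):
    lhs = abs(phi_X(t) - math.exp(-t * t / 2))
    rhs = 16 * L_n * abs(t) ** 3 * math.exp(-t * t / 3)
    print(f"t = {t:6.2f}   |phi_X - gaussian| = {lhs:.3e}   bound = {rhs:.3e}")
```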

With this bound, we are ready to prove Lemma 7.

Proof of Lemma 7.

Decompose \mathcal{K} into two parts: X a mean 0 variance 1 sum of i.i.d. random variables, and Y, which is considered as an error term. Set Q=\frac{1}{\sqrt{n\choose 2}} and let

\displaystyle X:=\sum_{e\in{[n]\choose 2}}Q\chi_{e} \displaystyle Y:=\mathcal{K}-X=\sum_{e\in{[n]\choose 2}}(\hat{\mathcal{K}}(e)-% Q)\chi_{e}+\sum_{|S|\geq 2}\hat{\mathcal{K}}(S)\chi_{S}

We know that all edges e\in{[n]\choose 2} have the same Fourier coefficient \hat{\mathcal{K}}(e), and further that

\sum_{e}\hat{\mathcal{K}}(e)^{2}={n\choose 2}\hat{\mathcal{K}}(e)^{2}=1-W^{>1}% (\mathcal{K})=1-O\mathopen{}\mathclose{{}\left(\frac{1}{n}}\right)

Therefore it follows that

|\hat{\mathcal{K}}(e)-Q|=\mathopen{}\mathclose{{}\left|\frac{\hat{\mathcal{K}}% (e)^{2}-Q^{2}}{\hat{\mathcal{K}}(e)+Q}}\right|=O\mathopen{}\mathclose{{}\left(% \frac{1}{n^{2}}}\right)

So now we can compute

\|Y\|_{2}^{2}=\sum_{e}(\hat{\mathcal{K}}(e)-Q)^{2}+\sum_{|S|\geq 2}\hat{% \mathcal{K}}(S)^{2}=O\mathopen{}\mathclose{{}\left(\frac{1}{n^{2}}}\right)+W^{% >1}(\mathcal{K})=O\mathopen{}\mathclose{{}\left(\frac{1}{n}}\right)

For t\leq\frac{1}{4L_{n}}=\Theta(n), Lemma 8, the above calculation, and Cauchy-Schwarz applied to \operatorname*{\mathbb{E}}[|Y|]^{2} tell us

\displaystyle\left|\varphi_{\mathcal{K}}(t)-e^{-\frac{t^{2}}{2}}\right| \displaystyle=\left|\operatorname*{\mathbb{E}}\left[e^{it\mathcal{K}}\right]-e^{-\frac{t^{2}}{2}}\right|=\left|\operatorname*{\mathbb{E}}\left[e^{it(X+Y)}\right]-e^{-\frac{t^{2}}{2}}\right|\leq\left|\operatorname*{\mathbb{E}}\left[e^{itX}\right]-e^{-\frac{t^{2}}{2}}\right|+\left|\operatorname*{\mathbb{E}}\left[e^{it(X+Y)}\right]-\operatorname*{\mathbb{E}}e^{itX}\right|
\displaystyle\leq 16L_{n}|t|^{3}e^{\frac{-t^{2}}{3}}+\operatorname*{\mathbb{E}}|tY|=O\left(\frac{t^{3}e^{-\frac{t^{2}}{3}}}{n}+\frac{t}{\sqrt{n}}\right)

8. Bound for |t|\in[n^{\tau},n^{\frac{1}{2}+2\tau}]

The goal for this section is to prove the following lemma

Lemma 9.

For n^{\tau}<t\leq n^{\frac{3}{4}}

|\varphi_{\mathcal{K}}(t)|\leq O\mathopen{}\mathclose{{}\left(n^{-50}}\right)

8.1. High Level Proof

In this subsection, we will assume the following helper claims, and then prove Lemma 9.

Claim 1.

For all sufficiently large n and any \alpha\in(n^{-1+\tau},1) there exists a set of edges H\subset{[n]\choose 2} with |H|\geq\alpha{n\choose 2} such that

\sum_{\begin{subarray}{c}S\subset H\\ |S|\geq 2\end{subarray}}n^{2-2|\mathrm{supp}(S)|}=O(\alpha^{2}n^{-1})

For the subsequent claims, assume we have chosen one such H as promised by Claim 1, which will be fixed throughout.

Claim 2.

Let A be the event (over the space of revelations \beta\in\{0,1\}^{H^{c}}) that for every edge e\in H

|\widehat{\mathcal{K}_{\beta}}(e)-\hat{\mathcal{K}}(e)|<\frac{1}{n^{1.4}}

Recall \lambda:=\min(p,1-p). Then \Pr(A)\geq 1-n^{2}\exp\mathopen{}\mathclose{{}\left[-\Omega\mathopen{}% \mathclose{{}\left(n^{\frac{0.4}{r^{2}}}}\right)}\right].

Claim 3.

Let B be the event (over the space of revelations \beta\in\{0,1\}^{H^{c}}) that for every set S\subset H with |S|\geq 2

|\widehat{\mathcal{K}_{\beta}}(S)|\leq Cn^{1-|\mathrm{supp}(S)|}

where C is a fixed constant depending on r and p, but not on n. Then \Pr(B)\geq 1-\exp\left(-\Omega\left[n^{1/r^{2}}\right]\right)

Claim 4.

Assume \beta\in A\cap B. Then for t\in[n^{\tau},n^{3/4}]

\left|\operatorname*{\mathbb{E}}_{H}[e^{it\mathcal{K}_{\beta}}]\right|=O(n^{-50})

Lemma 9 now follows by combining all of these claims.

Proof of Lemma 9.

Let A, and B be as defined in Claims 2 and 3. We can break up \{0,1\}^{H^{c}} into A\cap B and (A\cap B)^{c} and estimate

\displaystyle|\varphi_{\mathcal{K}}(t)| \displaystyle:=\big{|}\operatorname*{\mathbb{E}}_{(\alpha,\beta)\in 2^{n% \choose 2}}[e^{it\mathcal{K}(\alpha,\beta)}]\big{|}\leq\operatorname*{\mathbb{% E}}_{\beta\subset H^{c}}|\operatorname*{\mathbb{E}}_{\alpha\subset H}[e^{it% \mathcal{K}_{\beta}(\alpha)}]|\leq\Pr[(A\cap B)^{c}]+\Pr[A\cap B]\operatorname% *{\mathbb{E}}_{\beta\in(A\cap B)}\mathopen{}\mathclose{{}\left|\operatorname*{% \mathbb{E}}_{\alpha}[e^{it\mathcal{K}_{\beta}}]}\right|

Combining Claims 2, 3, and 4 we can bound the right hand side of the above by O(n^{-50}). ∎

8.1.1. Proof of Claims

Proof Of Claim 1.

Draw a random graph H on n vertices by choosing \alpha{n\choose 2} edges uniformly at random. Then we note that

\displaystyle\operatorname*{\mathbb{E}}\sum_{\begin{subarray}{c}S\subset H\\ |S|\geq 2\end{subarray}}n^{2-2|\mathrm{supp}(S)|}=\sum_{\begin{subarray}{c}S% \subset{n\choose 2}\\ 2\leq|S|\leq r\end{subarray}}n^{2-2|\mathrm{supp}(S)|}\Pr(S\subset H)\leq% \alpha^{2}n^{2}\sum_{i=3}^{r}n^{-2i}\sum_{|\mathrm{supp}(S)|=i}1=O\mathopen{}% \mathclose{{}\left(\alpha^{2}n^{-1}}\right)

So some H must have at most the average value for this sum. ∎

We prove Claim 2 by noting that the formula for \widehat{\mathcal{K}_{\beta}}(S) (a coefficient in the polynomial \mathcal{K}_{\beta}) is itself a low degree polynomial, and therefore may be shown to have tight concentration by hypercontractivity.

Proof Of Claim 2.

Recall from Section 2.4 that

\widehat{\mathcal{K}_{\beta}}(e)=\sum_{T\subset H^{c}}\hat{\mathcal{K}}(e\cup T% )\chi_{T}(\beta)

So \widehat{\mathcal{K}_{\beta}}(e):\{0,1\}^{H^{c}}\to\mathbb{R} is a polynomial in the variables \chi_{e^{\prime}} for e^{\prime}\in H^{c}, and we can begin by estimating its coefficients. First we see that

\operatorname*{\mathbb{E}}[\widehat{\mathcal{K}_{\beta}}(e)]=\widehat{\widehat% {\mathcal{K}}_{\beta}(e)}(\varnothing)=\hat{\mathcal{K}}(e)

Also for any T\subset H^{c} we know that \hat{\mathcal{K}}(e\cup T)\neq 0 only if |\mathrm{supp}(e\cup T)|\leq r. So we can compute:

\displaystyle Var_{\beta}(\widehat{\mathcal{K}_{\beta}}(e)) \displaystyle=\sum_{\begin{subarray}{c}T\subset H^{c}\\ T\neq\varnothing\end{subarray}}\hat{\mathcal{K}}(e\cup T)^{2}=\sum_{i=3}^{r}% \sum_{\begin{subarray}{c}T\subset H^{c}\\ |\mathrm{supp}(T\cup e)|=i\end{subarray}}\hat{\mathcal{K}}(e\cup T)^{2}
\displaystyle\leq\sum_{i=3}^{r}\sum_{|\mathrm{supp}(T\cup e)|=i}\hat{\mathcal{% K}}(e\cup T)^{2}\leq\sum_{i=3}^{r}{n-2\choose i-2}O(n^{2-2i})=O\mathopen{}% \mathclose{{}\left(\frac{1}{n^{3}}}\right)

Since \widehat{\mathcal{K}_{\beta}}(e) has degree less than {r\choose 2}, an application of Theorem 6 gives us that for any e\in H

\Pr\mathopen{}\mathclose{{}\left[\mathopen{}\mathclose{{}\left|\widehat{% \mathcal{K}_{\beta}}(e)-\hat{\mathcal{K}}(e)}\right|\geq\frac{1}{n^{1.4}}}% \right]<\exp\mathopen{}\mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left% (n^{\frac{0.4}{r^{2}}}}\right)}\right)

Applying a union bound over all edges in H completes the proof. ∎

Proof of Claim 3.

Again we use the decomposition

\widehat{\mathcal{K}_{\beta}}(S)=\sum_{T\subset H^{c}}\hat{\mathcal{K}}(S\cup T% )\chi_{T}(\beta)

and note that

\operatorname*{\mathbb{E}}[\widehat{\mathcal{K}_{\beta}}(S)]=\widehat{\widehat% {\mathcal{K}}_{\beta}(S)}(\varnothing)=\hat{\mathcal{K}}(S)

Let |\mathrm{supp}(S)|=s. For any T\subset H^{c} we know that \hat{\mathcal{K}}(S\cup T)\neq 0 if and only if |\mathrm{supp}(S\cup T)|\leq r. There are at most {n-s\choose\ell-s}2^{{\ell\choose 2}}\leq 2^{r^{2}}n^{\ell-s} choices of T such that |\mathrm{supp}(S\cup T)|=\ell. And further for each of these choices we know that \hat{\mathcal{K}}(S\cup T)=\Theta(n^{1-\ell}). Define the helper function

g:=\sum_{\begin{subarray}{c}T\subset H^{c}\\ |\mathrm{supp}(S\cup T)|>s\end{subarray}}\hat{\mathcal{K}}(S\cup T)\chi_{T}(\beta)

We can compute that

\displaystyle Var_{\beta}(g) \displaystyle\leq\sum_{\ell=s+1}^{r}\sum_{|\mathrm{supp}(S\cup T)|=\ell}\mathopen{}\mathclose{{}\left(\hat{\mathcal{K}}(S\cup T)}\right)^{2}\leq\sum_{\ell=s+1}^{r}2^{r^{2}}n^{\ell-s}\Theta(n^{2-2\ell})
\displaystyle\leq O(n^{2-2s-1})

Further we can see that g is a polynomial of degree at most {r\choose 2}, and so by Theorem 6

\displaystyle\Pr\mathopen{}\mathclose{{}\left[|g|\geq n^{1-s+\frac{1}{4}}}\right]\leq\exp\mathopen{}\mathclose{{}\left[-\Omega\mathopen{}\mathclose{{}\left(n^{1/r^{2}}}\right)}\right]

If |g|<n^{1-s+1/4} then we can conclude that

\displaystyle|\widehat{\mathcal{K}_{\beta}}(S)|=\mathopen{}\mathclose{{}\left|\sum_{|\mathrm{supp}(S\cup T)|=s}\hat{\mathcal{K}}(S\cup T)\chi_{T}(\beta)+g(\beta)}\right|\leq 2^{{s\choose 2}}O(n^{1-s})+n^{1-s+\frac{1}{4}}=O(n^{1-s})

So for any S\subset H we find that |\widehat{\mathcal{K}_{\beta}}(S)|\leq O(n^{1-s}) with probability at least 1-\exp(-\Omega(n^{\frac{2}{r^{2}}})). Taking a union bound over all such S finishes the proof. ∎

Proof of Claim 4.

Fix \alpha=\frac{1}{t} and assume that \beta\in A\cap B. Let X and Y be

\displaystyle X:=\sum_{e\in H}\widehat{\mathcal{K}_{\beta}}(e)\chi_{e} \displaystyle Y:=\sum_{\begin{subarray}{c}S\subset H\\ |S|\geq 2\end{subarray}}\widehat{\mathcal{K}_{\beta}}(S)\chi_{S}

then \mathcal{K}_{\beta}=X+Y, where X is an independent sum, and Y is small. To apply Theorem 3 we set \ell=99 and compute the relevant parameters to be

  1. T:=\sum_{e\in H}\widehat{\mathcal{K}_{\beta}}(e)^{2}\geq\sum_{e\in H}\hat{\mathcal{K}}(e)^{2}/4\geq\alpha{n\choose 2}\frac{1}{4n^{2}}=\Omega\mathopen{}\mathclose{{}\left(\frac{1}{t}}\right) where the last step uses the fact that n^{\tau}\leq t\leq n^{\frac{3}{4}}.

  2. \displaystyle\eta^{2}=\|Y\|_{2}^{2}=\sum_{\begin{subarray}{c}S\subset H\\ |S|\geq 2\end{subarray}}\widehat{\mathcal{K}_{\beta}}(S)^{2}\leq\sum_{\begin{subarray}{c}S\subset H\\ |S|\geq 2\end{subarray}}O(n^{2-2|\mathrm{supp}(S)|})=O(\alpha^{2}n^{-1})=O\mathopen{}\mathclose{{}\left(\frac{1}{nt^{2}}}\right)

  3. \hat{\|}Y\hat{\|}_{1}=\sum_{\begin{subarray}{c}S\subset H\\ |S|\geq 2\end{subarray}}|\widehat{\mathcal{K}_{\beta}}(S)|=O(n).

  4. \delta=\max_{e}|\widehat{\mathcal{K}_{\beta}}(e)|\leq\frac{3}{2}\max_{e}|\hat{\mathcal{K}}(e)|\leq\frac{3}{n}

  5. \displaystyle\epsilon \displaystyle=\exp\mathopen{}\mathclose{{}\left(-\frac{t^{2}[T-\delta{r\choose 2% }\ell]}{\pi^{2}}}\right)=\exp\mathopen{}\mathclose{{}\left(-\Omega\mathopen{}% \mathclose{{}\left(t^{2}\mathopen{}\mathclose{{}\left[t^{-1}-\frac{\ell{r% \choose 2}}{n}}\right]}\right)}\right)=\exp\mathopen{}\mathclose{{}\left(-% \Omega\mathopen{}\mathclose{{}\left(t}\right)}\right)=\exp\mathopen{}% \mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left(n^{\tau}}\right)}\right)

Where the third step above used the fact that t=o(n), and the last used that t\geq n^{\tau}. Given these settings of parameters we can plug into Theorem 3 and find that

\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}[e^{it\mathcal{K}_{\beta}}]}\right|\leq O\mathopen{}\mathclose{{}\left(\epsilon n^{\ell}+(n^{-{\frac{1}{2}}})^{\ell+1}+\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{1/r^{2}})}\right)+n^{\frac{\ell+2}{4}}\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{1/r^{2}})}\right)}\right)=O(n^{-50}) ∎

9. Bound for |t|\in[n^{\frac{1}{2}+2\tau},n^{\frac{r}{2}-\frac{5}{12}-\tau}]

9.1. High Level Overview

In this section we discuss how to bound the characteristic function \varphi_{\mathcal{K}}(t) for t\in[n^{\frac{1}{2}+2\tau},n^{\frac{r}{2}-\frac{5}{12}-\tau}]. The central trick is the decoupling tool from Section 5 combined with Theorem 3. The basic outline is that we will partition the edges of {[n]\choose 2} into k+1 pieces, apply Lemma 6 to switch our attention from \mathcal{K} to \alpha(\mathcal{K}), and then further examine a random restriction to edges on some subset U_{0}\subset[n] of the vertices. The restricted polynomial \alpha(\mathcal{K})_{\mathbf{Y}} will have its Fourier mass concentrated on degree 1 terms. We will then use Theorem 3 to bound the characteristic function of \alpha(\mathcal{K})_{\mathbf{Y}}.

9.2. Notation for Section and Setup

Partition the vertex set [n] into [n]=\cup_{i=0}^{k}U_{i}. Assume that for i=1,2,\ldots,k all sets U_{i} have a common size u:=|U_{i}|. U_{0} will contain all the other vertices, and we will always insist that |U_{0}|\geq\frac{n}{k+1}. Once this partition has been made we can refer to a vertex in U_{i} as having been colored with the color i. Thus, tautologically, U_{i} is the set of all vertices colored i. We partition our edge variables into k+1 classes B_{0},B_{1},\ldots,B_{k} by saying an edge e=(x,y) is in B_{i} if the largest color among the colors of its endpoints is color i. Equivalently, if e is an edge between a vertex in U_{i} and a vertex in U_{j}, then e\in B_{\max(i,j)}.
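For concreteness, the following short Python sketch (illustrative only; the helper names are ours and nothing here is taken from the paper) builds such a coloring and the induced edge classes B_{0},\ldots,B_{k} for a small n.

```python
# Illustrative sketch (not from the paper): color the vertices and classify
# each edge of K_n by the largest color among its endpoints, as in Section 9.2.
from itertools import combinations

def partition_edges(n, k, u):
    """Color classes U_1,...,U_k of size u each; every other vertex gets color 0 (U_0).
    Return the coloring and the edge classes B_0,...,B_k, where an edge lies in B_i
    if the largest color among its endpoints is i."""
    color = {}
    vertices = list(range(n))
    for i in range(1, k + 1):
        for v in vertices[(i - 1) * u : i * u]:
            color[v] = i
    for v in vertices[k * u :]:
        color[v] = 0                      # U_0 contains all the remaining vertices
    B = {i: [] for i in range(k + 1)}
    for e in combinations(vertices, 2):
        B[max(color[e[0]], color[e[1]])].append(e)
    return color, B

# Example: n = 12, k = 2 color classes of size u = 3 each; |U_0| = 6 >= n/(k+1).
color, B = partition_edges(12, 2, 3)
print({i: len(B[i]) for i in B})          # sizes of B_0, B_1, B_2
```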

We define the \alpha operator as per Section 5.1 with respect to this partition. For i=1,2,\ldots,k let B_{i}^{0} and B_{i}^{1} denote two separate copies of the edges in B_{i}, and let Y_{i}^{0} and Y_{i}^{1} denote independent identically drawn p-biased edge sets from B_{i}^{0} and B_{i}^{1} respectively. Meanwhile, let X denote the edge variables in B_{0}={U_{0}\choose 2}. Then by Lemma 6

(12) |\varphi_{\mathcal{K}}(t)|^{2^{k}}\leq\operatorname*{\mathbb{E}}_{\mathbf{Y}}% \mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{X}e^{it\sum_{\textbf% {v}\in\{0,1\}^{k}}(-1)^{|v|}\mathcal{K}(X,Y^{v})}}\right|=\operatorname*{% \mathbb{E}}_{\mathbf{Y}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E% }}_{X}[e^{it\alpha(\mathcal{K})_{\mathbf{Y}}(X)}]}\right|

We recall from Section 6 that \alpha(\mathcal{K}) is a function in the variable set X,Y_{1}^{0},Y_{1}^{1},\ldots,Y_{k}^{1}. However, the expectation we wish to bound above is only in terms of the variables in X. We define our restricted functions, as per the notation in Section 2.4, to be

\displaystyle g(X) \displaystyle:=\alpha(\mathcal{K})_{\mathbf{Y}}(X):=\alpha(\mathcal{K})(X,% \mathbf{Y})=\sum_{T\subset B_{0}\cup\mathcal{B}}\hat{\mathcal{K}}(T)\alpha(% \chi_{T})(X,\mathbf{Y})
(13) \displaystyle g_{S}(\mathbf{Y}) \displaystyle:=g_{S}:=\widehat{\alpha(\mathcal{K})_{\mathbf{Y}}}(S)=\sum_{% \begin{subarray}{c}T\subset\mathcal{B}\\ T~{}\mbox{\scriptsize{rainbow}}\end{subarray}}\widehat{\mathcal{K}}(S\cup T)% \alpha(\chi_{T})(\mathbf{Y})

Where in the last line S\subset B_{0}. The rainbow condition in the sum is not technically necessary, but is there to prune out the nonrainbow sets, which have a Fourier coefficient of 0.

Thus, by equation 12, our goal for the rest of this section is to show that \varphi_{g}(t):=\operatorname*{\mathbb{E}}_{X}[e^{itg}] is small with high probability over choice of restriction \mathbf{Y}. To do this we will split the random variable g into two pieces h,d:\{0,1\}^{B_{0}}\to\mathbb{R} where

h(X)=\sum_{e\in B_{0}}g_{e}(\mathbf{Y})\chi_{e}(X)=g^{=1}\qquad\qquad d(X)=\sum_{|S|\geq 2}g_{S}(\mathbf{Y})\chi_{S}(X)=g^{>1}

For h, we hope to show that its characteristic function is small, and for d we will be interested in bounding \hat{\|}d\hat{\|}_{1} and \|d\|_{2} with an eye towards applying Theorem 3.
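As a concrete illustration of this split (a sketch under the assumption that the restricted coefficients g_{S}(\mathbf{Y}) are available as a dictionary keyed by edge sets; the representation is ours, not the paper's), one can separate the degree-1 part h from the tail d and read off the two norms of d that Theorem 3 consumes.

```python
# Illustrative sketch: split a Fourier expansion {S: coefficient} into the
# degree-1 part h (singleton sets S = {e}) and the tail d (|S| >= 2), and
# compute ||d||_2^2 (Parseval for orthonormal characters) and the spectral
# 1-norm of d.  The coefficient dictionary below is a hypothetical stand-in
# for the restricted coefficients g_S(Y).
def split_degree_one(coeffs):
    h = {S: c for S, c in coeffs.items() if len(S) == 1}
    d = {S: c for S, c in coeffs.items() if len(S) >= 2}
    d_two_norm_sq = sum(c * c for c in d.values())        # ||d||_2^2
    d_spectral_one = sum(abs(c) for c in d.values())      # spectral 1-norm of d
    return h, d, d_two_norm_sq, d_spectral_one

# Toy example with edges "ab", "bc", "ac" of B_0 (hypothetical coefficients):
coeffs = {frozenset({"ab"}): 0.3, frozenset({"bc"}): -0.2,
          frozenset({"ab", "bc"}): 0.05, frozenset({"ab", "bc", "ac"}): 0.01}
h, d, n2, n1 = split_degree_one(coeffs)
print(len(h), len(d), n2, n1)   # 2 degree-1 terms, 2 tail terms, ~0.0026, 0.06
```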

9.3. Main proofs of the section modulo lemmas

The main result of this section is the following characteristic function bound:

Lemma 10.

Assume \mathopen{}\mathclose{{}\left(\frac{n}{u}}\right)^{k/2}n^{k/2+\tau}\leq t\leq% \mathopen{}\mathclose{{}\left(\frac{n}{u}}\right)^{k/2}n^{k/2+1/3-\tau} and u=\Omega(n^{2\tau}). Then for any fixed natural number \ell

\operatorname*{\mathbb{E}}_{Y}|\varphi_{g}(t)|=O\mathopen{}\mathclose{{}\left(% n^{-\frac{\ell}{6}}}\right)

Before proving the lemma, we first state some claims, to be proven afterward, about the behavior of g,~{}h, and d.

Claim 5.

With probability \geq 1-\exp(-\Omega(n^{\tau/r^{2}})) over sampling of \mathbf{Y} we have that

\displaystyle\|d\|_{2}^{2} \displaystyle=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(% \frac{u}{n}}\right)^{k}n^{-k-1+2\tau}}\right)
\displaystyle\hat{\|}d\hat{\|}_{1} \displaystyle=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(% \frac{u}{n}}\right)^{\frac{k}{2}}n^{-k/2+1+\tau}}\right)
Claim 6.

Assume that u\geq n^{2\tau}, \tau<\frac{1}{2}, and k\leq r-2. Then there exists a constant C such that for sufficiently large n

\Pr\mathopen{}\mathclose{{}\left[\mathopen{}\mathclose{{}\left|\sum_{e\in B_{0% }}g_{e}^{2}(\mathbf{Y})}\right|\leq C\mathopen{}\mathclose{{}\left(\frac{u}{n}% }\right)^{k}n^{-k}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{% \tau/2r^{2}})}\right)
Claim 7.

For any e\in B_{0} there exists a C>0 such that for all sufficiently large n

\Pr\mathopen{}\mathclose{{}\left[\mathopen{}\mathclose{{}\left|g_{e}(\mathbf{Y})}\right|\geq Cn^{-\frac{k}{2}-1+\tau}\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{\frac{k}{2}}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left(n^{\tau/r^{2}}}\right)}\right)

With these ingredients we can prove Lemma 10.

Proof.

Let A be the event that \|d\|_{2}^{2} and \hat{\|}d\hat{\|}_{1} are small as promised by Claim 5, that g_{e} is small for all e\in B_{0} as promised by Claim 7, and \sum_{e\in B_{0}}g_{e}^{2} is large as promised by Claim 6. By those results we know that \Pr(A)\geq 1-\exp(-\Omega(n^{\tau/2r^{2}})).

We apply Theorem 3 to g=h+d. Conditioning on the event A we can estimate the relevant parameters of that theorem to be

  1. T:=\sum_{e\in{U_{0}\choose 2}}g_{e}^{2}\geq\Omega\mathopen{}\mathclose{{}\left% (\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k}}\right)

  2. \eta^{2}=\|d\|_{2}^{2}\leq O\mathopen{}\mathclose{{}\left(\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k-1+2\tau}}\right)

  3. \hat{\|}d\hat{\|}_{1}=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}% \left(\frac{u}{n}}\right)^{\frac{k}{2}}n^{-k/2+1+\tau}}\right)

  4. \delta^{2}=\max_{e}(g_{e}^{2})=O\mathopen{}\mathclose{{}\left(\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k-2+2\tau}}\right)

  5. \displaystyle\epsilon \displaystyle=\exp\mathopen{}\mathclose{{}\left(-\frac{2t^{2}}{\pi^{2}}% \mathopen{}\mathclose{{}\left[T-\delta{r\choose 2}\ell}\right]}\right)=\exp% \mathopen{}\mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left[\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{k}t^{2}\mathopen{}\mathclose{{}\left(n% ^{-k}-\ell{r\choose 2}n^{-k-2+2\tau}}\right)}\right]}\right)
    \displaystyle=\exp\mathopen{}\mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}t^{2}n^{-k}}\right)}\right)

So for any fixed \ell such that t\leq(2e)^{\ell/2}\lambda^{-\ell/2}\eta^{-1}=\Omega\mathopen{}\mathclose{{}\left((u/n)^{-k/2}n^{\frac{k+1-2\tau}{2}}}\right) Theorem 3 tells us

\displaystyle|\varphi_{g}(t)-\varphi_{h}(t)| \displaystyle\leq\ell\epsilon\mathopen{}\mathclose{{}\left(1+\mathopen{}% \mathclose{{}\left(t\hat{\|}d\hat{\|}_{1}}\right)^{\ell}}\right)+\frac{(t\eta)% ^{(\ell+1)}}{(\ell+1)!}\ell^{\frac{(\ell+1){r\choose 2}}{2}}\lambda^{{r\choose 2% }\mathopen{}\mathclose{{}\left(\frac{1-\ell}{2}}\right)}+\lambda^{{r\choose 2}% }\exp\mathopen{}\mathclose{{}\left[-\frac{{r\choose 2}\lambda}{2e}|t\eta|^{-% \frac{2}{{r\choose 2}}}}\right]
\displaystyle\qquad+\mathopen{}\mathclose{{}\left|t\eta}\right|^{(\ell+1)/2}% \lambda^{\frac{3{r\choose 2}-\ell}{4}}\mathopen{}\mathclose{{}\left(\ell+1}% \right)\exp\mathopen{}\mathclose{{}\left[-\frac{{r\choose 2}\lambda}{4e}% \mathopen{}\mathclose{{}\left|t\eta}\right|^{-\frac{2}{{r\choose 2}}}}\right]
\displaystyle=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k\ell/2}t^{\ell}n^{-\ell k/2+\ell+\ell\tau}\exp\mathopen{}\mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}t^{2}n^{-k}}\right)}\right)+(t\eta)^{\ell+1}}\right.
\displaystyle\mathopen{}\mathclose{{}\left.\qquad+\exp\mathopen{}\mathclose{{}% \left(-\Omega\mathopen{}\mathclose{{}\left[(t\eta)^{-\frac{2}{{r\choose 2}}}}% \right]}\right)+(t\eta)^{\frac{\ell+1}{2}}\exp\mathopen{}\mathclose{{}\left(-% \Omega\mathopen{}\mathclose{{}\left[(t\eta)^{-\frac{2}{{r\choose 2}}}}\right]}% \right)}\right)

Assuming that t\geq n^{k+\tau}u^{-k/2} we find that the first term in the right hand side above is bounded above by \exp\mathopen{}\mathclose{{}\left(-\Omega(n^{2\tau})}\right). Additionally, assuming t\leq u^{-k/2}n^{k+\frac{1}{3}-\tau} we have that t\eta=O(n^{-1/6}), and so the subsequent terms in the above expansion can be bounded above by O\mathopen{}\mathclose{{}\left(n^{-\ell/6}+\exp(-\Omega(n^{1/2r^{2}}))}\right).

Therefore, whenever the event A occurs, we have |\varphi_{g}(t)-\varphi_{h}(t)|\leq O(n^{-\ell/6}). Combining the fact that \Pr(A^{c})\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/2r^{2}})}\right) with the observation that |\varphi_{g}(t)-\varphi_{h}(t)|\leq 2, it follows that

\operatorname*{\mathbb{E}}_{\mathbf{Y}}|\varphi_{g}(t)-\varphi_{h}(t)|\leq O(n% ^{-\ell/6})+2\Pr(A^{c})=O(n^{-\ell/6})

To finish the proof of the lemma, we just have to bound the characteristic function of h. But h is a sum of independent p-biased Bernoulli random variables. So we can compute (again, conditioning on A) that

\displaystyle|\varphi_{h}(t)| \displaystyle=\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}\mathopen{}\mathclose{{}\left[e^{it\sum_{e\in B_{0}}g_{e}\chi_{e}}}\right]}\right|=\prod_{e\in B_{0}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}[e^{itg_{e}\chi_{e}}]}\right|\leq\exp\mathopen{}\mathclose{{}\left(-\frac{4}{\pi^{2}}t^{2}\sum g_{e}^{2}}\right)\leq\exp(-\Omega(t^{2}T))
\displaystyle\leq\exp(-\Omega(n^{2\tau}))

Where the first inequality uses Lemma 1 combined with the assumption on the event A (from Claim 7) that |g_{e}t|=o(1). ∎
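The elementary estimate used in the last display is a Lemma 1–type bound: a single p-biased character \chi=(x-p)/\sqrt{p(1-p)} satisfies |\operatorname*{\mathbb{E}}[e^{i\theta\chi}]|\leq e^{-(2/\pi^{2})\theta^{2}} whenever |\theta|\leq\pi\sqrt{p(1-p)}. The following sketch checks this inequality numerically only (it is not a proof, and the precise constant in the paper's Lemma 1 is not restated here).

```python
# Numerical sanity check (illustrative, not a proof): for the p-biased character
# chi = (x - p)/sqrt(p(1-p)) with x ~ Bernoulli(p), verify that
#     |E[exp(i*theta*chi)]| <= exp(-(2/pi^2) * theta^2)
# whenever |theta| <= pi*sqrt(p(1-p)).
import cmath, math

def char_fn_abs(theta, p):
    s = math.sqrt(p * (1 - p))
    val = p * cmath.exp(1j * theta * (1 - p) / s) + (1 - p) * cmath.exp(-1j * theta * p / s)
    return abs(val)

for p in (0.1, 0.3, 0.5, 0.7):
    s = math.sqrt(p * (1 - p))
    worst = 0.0
    for j in range(1, 1000):
        theta = j / 1000 * math.pi * s            # range of validity |theta| <= pi*sqrt(p(1-p))
        ratio = char_fn_abs(theta, p) / math.exp(-(2 / math.pi ** 2) * theta ** 2)
        worst = max(worst, ratio)
    print(p, worst <= 1.0 + 1e-12)                # True: the bound holds on the sampled grid
```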

We now apply this lemma for appropriate choices of k,\ell, and u to bound \varphi_{\mathcal{K}}(t) in a form suitable for use in the proof of our main local limit theorem in Section 3.

Corollary 2.

Assume \tau<\frac{1}{12}. For any 1\leq k\leq r-2, and n^{\frac{k}{2}+2\tau}\leq t\leq n^{\frac{k}{2}+\frac{7}{12}-2\tau} we have |\varphi_{\mathcal{K}}(t)|\leq O(n^{-r^{2}}).

Proof.

We proceed in two cases. In the first, set u=n/(k+1). Then Lemma 10 tells us that for some constants C_{1},C_{2}, whenever C_{1}n^{k/2+\tau}\leq t\leq C_{2}n^{k/2+1/3-\tau} we have \operatorname*{\mathbb{E}}|\varphi_{g}(t)|=O(n^{-\ell/6}). Furthermore, we know from Lemma 6 that |\varphi_{\mathcal{K}}(t)|^{2^{k}}\leq\operatorname*{\mathbb{E}}_{\mathbf{Y}}|\varphi_{g}(t)|. So choosing \ell=2^{k+5}r^{2} we find that |\varphi_{\mathcal{K}}(t)|=O(n^{-r^{2}}).

In the second case set u=n^{1-\frac{1}{2k}}. Lemma 10 along with the same choice of \ell=2^{k+5}r^{2} will tell us that for n^{\frac{k}{2}+\frac{1}{4}+\tau}\leq t\leq n^{\frac{k}{2}+7/12-\tau} we have |\varphi_{\mathcal{K}}(t)|=O(n^{-r^{2}}). So long as \tau<\frac{1}{12} these intervals will overlap (at least in the limit). ∎

Corollary 3.

If \tau<\frac{1}{12}, then for any t\in[n^{\frac{1}{2}+2\tau},n^{\frac{r}{2}-\frac{5}{12}-2\tau}] we have |\varphi_{\mathcal{K}}(t)|\leq O(n^{-r^{2}}).

Proof.

This follows by taking the union of the bounds in the above corollary for 1\leq k\leq r-2. ∎

9.4. Proof of Claim 6

9.4.1. Showing that \sum g_{e}^{2} is large

In this section, we will show that, with high probability over \mathbf{Y}, \sum_{e\in B_{0}}g_{e}^{2}=\|h\|_{2}^{2} is large. To do this we first separate out the family \mathcal{F}_{e} of subsets of \mathcal{B} which contain most of the Fourier weight of g_{e}(\mathbf{Y}).

Definition 7.
\mathcal{F}_{e}=\{S\subset\mathcal{B}\mathrm{~{}s.t.~{}}|\mathrm{supp}(S)-\mathrm{supp}(e)|=k,~{}S~{}\mbox{\scriptsize{rainbow}}\}

We then carve \sum g_{e}^{2} into pieces as follows. Let

\displaystyle G_{e}(\mathbf{Y}) \displaystyle:=\sum_{S\in\mathcal{F}_{e}}\widehat{g_{e}}(S)\alpha(\chi_{S})(% \mathbf{Y})
\displaystyle H_{e}(\mathbf{Y}) \displaystyle:=\sum_{S\notin\mathcal{F}_{e}}\widehat{g_{e}}(S)\alpha(\chi_{S})% (\mathbf{Y})
\displaystyle Z(\mathbf{Y}) \displaystyle=\sum_{e\in B_{0}}G_{e}^{2}=\sum_{e}(g_{e}-H_{e})^{2}

So in particular G_{e}+H_{e}=g_{e}, and G_{e} is the main term while H_{e} is best thought of as an error term. We now embark on proving the following estimates for G_{e} and H_{e} respectively.

Lemma 11.

Assume that 1\leq k\leq r-2. Then

\displaystyle\|G_{e}\|_{2}^{2} \displaystyle=\Theta\mathopen{}\mathclose{{}\left(n^{-k-2}\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{k}}\right)
\displaystyle\|H_{e}\|_{2}^{2} \displaystyle=O\mathopen{}\mathclose{{}\left(n^{-k-3}\mathopen{}\mathclose{{}% \left(\frac{u}{n}}\right)^{k}}\right)
\displaystyle\|g_{e}\|_{2}^{2} \displaystyle=\Theta\mathopen{}\mathclose{{}\left(n^{-k-2}\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{k}}\right)
Proof.

The third claim follows immediately from the previous two. For H_{e} we recall equation 13 and compute.

\displaystyle 2^{-k}\|H_{e}\|_{2}^{2}=2^{-k}\sum_{S\in\mathcal{F}_{e}^{c}}% \widehat{g_{e}}(S)^{2} \displaystyle=\sum_{t=k+3}^{r}\sum_{\begin{subarray}{c}|V\cup e|=t% \end{subarray}}\sum_{\begin{subarray}{c}S\subset{V+e\choose 2}\\ S~{}\mbox{\scriptsize{rainbow}}\end{subarray}}\hat{\mathcal{K}}(S\cup e)^{2}% \leq\sum_{t=k+3}^{r}\sum_{|V|=t}\sum_{\begin{subarray}{c}S\subset{V+e\choose 2% }\\ S~{}\mbox{\scriptsize{rainbow}}\end{subarray}}O\mathopen{}\mathclose{{}\left(n% ^{-2t+2}}\right)
\displaystyle\leq\sum_{t=k+3}^{r}u^{k}n^{t-k-2}2^{t\choose 2}O(n^{-2t+2})\leq O% \mathopen{}\mathclose{{}\left(n^{-k-3}\mathopen{}\mathclose{{}\left(\frac{u}{n% }}\right)^{k}}\right)

Where the second inequality follows from noting that there are at most u^{k}n^{|\mathrm{supp}(S)|-k-2} ways to choose the support of a rainbow set of edges, and then at most 2^{|\mathrm{supp}(S)|\choose 2} ways to pick edges with that support.

Meanwhile for G_{e}

\displaystyle 2^{-k}\operatorname*{\mathbb{E}}[G_{e}(\mathbf{Y})^{2}] \displaystyle=\sum_{S\in\mathcal{F}_{e}}\hat{\mathcal{K}}(S\cup e)^{2}=\sum_{|% V|=k+2}\sum_{\begin{subarray}{c}\mathrm{supp}(S\cup e)=V\\ S~{}\mbox{\scriptsize{rainbow}}\end{subarray}}\hat{\mathcal{K}}(S\cup e)^{2}=% \sum_{|V|=k+2}\sum_{\begin{subarray}{c}\mathrm{supp}(S\cup e)=V\\ S~{}\mbox{\scriptsize{rainbow}}\end{subarray}}C_{S}n^{-2k-2}
\displaystyle\geq\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}|U_{i}|}\right)C% _{S}n^{-2k-2}
\displaystyle=\Theta\mathopen{}\mathclose{{}\left(n^{-k-2}\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{k}}\right)

Where C_{S} is some constant depending on |S|, which always lies in [\lambda^{r^{2}},1] and can be read off of the Fourier expansion of \mathcal{K} in Section 6. Note that we used the assumption that k+2\leq r to ensure that the sums above were nonempty. ∎

Meanwhile, for each e\in B_{0} we know that |G_{e}-g_{e}|=|H_{e}|, and so g_{e}^{2}\geq G_{e}^{2}-2|G_{e}H_{e}|. But H_{e} is relatively small, so by Cauchy-Schwarz

\|G_{e}H_{e}\|_{2}^{2}=\operatorname*{\mathbb{E}}[G_{e}^{2}H_{e}^{2}]\leq\sqrt% {\operatorname*{\mathbb{E}}[G_{e}^{4}]\operatorname*{\mathbb{E}}[H_{e}^{4}]}=% \|G_{e}\|_{4}^{2}\|H_{e}\|_{4}^{2}

Since \deg(G_{e}),\deg(H_{e})\leq{r\choose 2}, Theorem 7 tells us that

\|G_{e}\|_{4}^{4}\|H_{e}\|_{4}^{4}\leq(3)^{2r^{2}}\lambda^{2r^{2}-4}\|H_{e}\|_{2}^{4}\|G_{e}\|_{2}^{4}=O\mathopen{}\mathclose{{}\left((u/n)^{4k}n^{-4k-10}}\right)

This in turn implies that \|G_{e}H_{e}\|_{2}=O(u^{k}n^{-2k-2.5}). So it follows from Theorem 6 that |G_{e}H_{e}|\leq u^{k}n^{-2k-2.5+\tau} with probability 1-\exp(-\Omega(n^{\tau/2r^{2}})). Therefore with high probability

\sum_{e}g_{e}^{2}\geq\sum_{e\in B_{0}}\mathopen{}\mathclose{{}\left[G_{e}^{2}-% O(u^{k}n^{-2k-2.5+\tau})}\right]=Z-O(u^{k}n^{-2k-1/2+\tau})

We restate this as a lemma.

Lemma 12.

With probability at least 1-\exp(-\Omega(n^{\tau/2r^{2}})) over choice of \mathbf{Y}, we have that

\sum_{e}g_{e}^{2}\geq\sum_{e\in B_{0}}\mathopen{}\mathclose{{}\left[G_{e}^{2}-O(u^{k}n^{-2k-2.5+\tau})}\right]=Z-O(u^{k}n^{-2k-1/2+\tau})

To finish our argument, we require the fact that Z is large with high probability. This will follow immediately from observing that Z is a fixed degree polynomial and computing the variance of Z. Unfortunately, computing this variance is cumbersome, and so the proof of the following lemma is in Appendix B.

Lemma 13.

Let Z=\sum_{e\in B_{0}}G_{e}^{2}. Assume that u\geq n^{2\tau} and 1\leq k\leq r-2. Then there exists a constant C such that for sufficiently large n, \Pr\mathopen{}\mathclose{{}\left[|Z|\leq C\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/r^{2}})}\right)

We are now in a position to prove Claim 6.

Claim 6.

Assume that u\geq n^{2\tau}, \tau<\frac{1}{2}, and k\leq r-2. Then there exists a constant C such that for sufficiently large n

\Pr\mathopen{}\mathclose{{}\left[\mathopen{}\mathclose{{}\left|\sum_{e\in B_{0}}g_{e}^{2}(\mathbf{Y})}\right|\leq C\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/2r^{2}})}\right)
Proof.

Lemma 13 above implies that for some C_{1}>0 we have Z\geq C_{1}u^{k}n^{-2k} with probability 1-\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/r^{2}})}\right). Meanwhile Lemma 12 implies that \sum_{e\in B_{0}}g_{e}^{2}\geq Z-O(u^{k}n^{-2k-1/2+\tau}) with probability 1-\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/2r^{2}})}\right). Combining these inequalities yields the claim. ∎

9.5. Proof of Claim 7

Claim 7.

For any e\in B_{0} there exists a C>0 such that for all sufficiently large n

\Pr\mathopen{}\mathclose{{}\left[\mathopen{}\mathclose{{}\left|g_{e}(\mathbf{Y})}\right|\geq Cn^{-\frac{k}{2}-1+\tau}\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{\frac{k}{2}}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left(n^{\frac{\tau}{r^{2}}}}\right)}\right)
Proof.

We computed in Lemma 11 that \|g_{e}\|_{2}^{2}=\Theta(n^{-k-2}(u/n)^{k}), so that \|g_{e}\|_{2}=O(n^{-\frac{k}{2}-1}(u/n)^{k/2}). We also know that \operatorname*{\mathbb{E}}[g_{e}(\mathbf{Y})]=\hat{g_{e}}(\varnothing)=0. Since g_{e} is a polynomial in \mathbf{Y} of degree at most {r\choose 2}, the result then follows from Theorem 6. ∎

9.6. Proof of Claim 5

First, we compute a bound on the Fourier coefficients g_{S}(\mathbf{Y})=\widehat{\alpha(\mathcal{K})_{\mathbf{Y}}}(S).

Lemma 14.

For all sufficiently large n

g_{S}(\mathbf{Y})^{2}\leq\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}% n^{-k-2|\mathrm{supp}(S)|+2+2\tau}

holds for all S\subset B_{0} with probability 1-\exp(-\Theta(n^{\tau/r^{2}})).

Proof.

For any S, we note that \operatorname*{\mathbb{E}}_{\mathbf{Y}}[g_{S}(\mathbf{Y})]=0. If |\mathrm{supp}(S)|>r-k then g_{S}(\mathbf{Y}) is identically 0. If |\mathrm{supp}(S)|\leq r-k then we compute this quantity to have variance

\displaystyle 2^{-k}\operatorname*{\mathbb{E}}[g_{S}(\mathbf{Y})^{2}] \displaystyle=\sum_{\begin{subarray}{c}T\subset\mathcal{B}\\ T~{}\mbox{\scriptsize{rainbow}}\end{subarray}}\hat{\mathcal{K}}(S\cup T)^{2}=% \sum_{t=k+|\mathrm{supp}(S)|}^{r}\sum_{|V|=t}\sum_{\begin{subarray}{c}\mathrm{% supp}(S\cup T)=V\\ T~{}\mbox{\scriptsize{rainbow}}\end{subarray}}\hat{\mathcal{K}}(S\cup T)^{2}
\displaystyle\leq\sum_{t=k+|\mathrm{supp}(S)|}^{r}\sum_{|V|=t}\sum_{% \begin{subarray}{c}\mathrm{supp}(S\cup T)=V\\ T~{}\mbox{\scriptsize{rainbow}}\end{subarray}}O\mathopen{}\mathclose{{}\left(n% ^{-2t+2}}\right)
\displaystyle\leq\sum_{t=k+|\mathrm{supp}(S)|}^{r}u^{k}n^{t-k-|\mathrm{supp}(S% )|}2^{t\choose 2}O\mathopen{}\mathclose{{}\left(n^{-2t+2}}\right)=O\mathopen{}% \mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k-% 2|\mathrm{supp}(S)|+2}}\right)

We also know that g_{S}(\mathbf{Y}) is a polynomial of degree at most {r\choose 2} and \operatorname*{\mathbb{E}}[g_{S}(\mathbf{Y})]=0. By Theorem 6, for n sufficiently large we have that \Pr[|g_{S}|\geq\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k/2}n^{-k/2-|\mathrm{supp}(S)|+1+\tau}]\leq\exp(-\Omega(n^{\tau/r^{2}})). Since there are at most O(n^{r^{2}}) possible monomials S for which g_{S}(\mathbf{Y})\not\equiv 0, a union bound finishes the proof of the lemma. ∎

Claim 5.

With probability 1-\exp(-\Omega(n^{\tau/r^{2}})) we have that both

\displaystyle\|d\|_{2}^{2} \displaystyle=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(% \frac{u}{n}}\right)^{k}n^{-k-1+2\tau}}\right)
\displaystyle\hat{\|}d\hat{\|}_{1} \displaystyle=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(% \frac{u}{n}}\right)^{\frac{k}{2}}n^{-k/2+1+\tau}}\right)
Proof.

Both of these statements follow from a computation using the bound given in Lemma 14. Throughout we condition on the event that all of the Fourier coefficients of d are as small as promised by Lemma 14. First we bound the 2-norm of d by

\displaystyle\|d\|_{2}^{2} \displaystyle=\sum_{\begin{subarray}{c}S\subset B_{0}\\ |S|\geq 2\end{subarray}}g_{S}(\mathbf{Y})^{2}=\sum_{t=3}^{r-k}\sum_{|\mathrm{% supp}(S)|=t}g_{S}(\mathbf{Y})^{2}\leq\mathopen{}\mathclose{{}\left(\frac{u}{n}% }\right)^{k}\sum_{t=3}^{r-k}\sum_{|\mathrm{supp}(S)|=t}n^{-k-2t+2+2\tau}
\displaystyle\leq\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}\sum_{t=% 3}^{r-k}{n\choose t}2^{t}O\mathopen{}\mathclose{{}\left(n^{-k-2t+2+2\tau}}% \right)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{% n}}\right)^{k}n^{-k-1+2\tau}}\right)

Then we bound the spectral 1 norm by

\displaystyle\hat{\|}d\hat{\|}_{1} \displaystyle=\sum_{S\subset B_{0}}|\hat{d}(S)|=\sum_{t=3}^{r-k}\sum_{|\mathrm{supp}(S)|=t}O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k/2}n^{-k/2-t+1+\tau}}\right)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k/2}n^{-k/2+1+\tau}}\right) ∎

10. Bound for |t|\in[n^{\frac{r}{2}-\frac{1}{6}-2\tau},~{}n^{r-1-r\tau}]

Here we repeat the same setup and notation from Section 9.2, but now we focus exclusively on the special case when k=r-2. The function g=\alpha(\mathcal{K})_{\mathbf{Y}} exhibits some different behavior in this case.

First, let’s look at what happens when k=r-1. Then any rainbow set T\subset\mathcal{B} contains a vertex from each of U_{1},U_{2},\ldots,U_{k}, and so in particular has at least r-1 vertices not in U_{0}. Therefore for any nonempty S\subset{U_{0}\choose 2} we have |\mathrm{supp}(S\cup T)|\geq r+1. But recall that \mathcal{K} is supported on sets of edges spanning at most r vertices. Therefore \alpha(\mathcal{K}) does not depend on the edges in B_{0} at all, and so g(X) is a constant.

But if k=r-2, then for any rainbow T\subset\mathcal{B} and S\subset{U_{0}\choose 2} with |S|>1 we have |\mathrm{supp}(S\cup T)|\geq k+3=r+1. Therefore \hat{g}(S)=0 whenever |S|\geq 2. In particular, for any edge e\in{U_{0}\choose 2}, if T is rainbow and |\mathrm{supp}(e\cup T)|\leq r, then it follows that |\mathrm{supp}(T)-\mathrm{supp}(e)|=k=r-2. Recalling Definition 7, that \mathcal{F}_{e}=\{S\subset\mathcal{B}\mathrm{~{}s.t.~{}}|\mathrm{supp}(S)-\mathrm{supp}(e)|=k,~{}S\mbox{ rainbow}\}, we can restate our observation as the following lemma.

Lemma 15.

Assume k=r-2. Then we have d\equiv 0. Further, for e\in{U_{0}\choose 2} and T\subset\mathcal{B},~{}T\notin\mathcal{F}_{e}, it follows that \widehat{g_{e}}(T)\equiv 0. That is g_{e}=\sum_{T\in\mathcal{F}_{e}}\widehat{g_{e}}(T)\chi_{T}(\mathbf{Y}).

This implies that for any choice of \mathbf{Y}\in\{0,1\}^{\mathcal{B}} we have that \alpha(\mathcal{K})_{\mathbf{Y}}=g is a degree 1 polynomial in independent Bernoulli random variables. Because of this, we can bound the characteristic function of g more directly. Additionally, our analysis of \sum g_{e}^{2}=\|g\|_{2}^{2} also becomes easier.

Lemma 16.

For t\in[(r-1)^{r/2-1}n^{r/2-1+\tau/2},~{}n^{r-2-r\tau}], we have \operatorname*{\mathbb{E}}_{\mathbf{Y}}|\varphi_{g}(t)|=\exp(-\Omega(n^{\tau/2r^{2}})).

Proof.

Set u=n^{2+\tau/(r-2)}t^{-2/(r-2)}, and therefore (u/n)^{r-2}=n^{r-2+\tau}/t^{2}. First, we check that this is a feasible choice of u for Claims 6 and 7, that is n^{2\tau}\leq u\leq n/(r-1). For the lower bound, we find the requirement

n^{2+\tau/(r-2)}t^{-2/(r-2)}\geq n^{2\tau}\implies t\leq n^{r-2-r\tau}

For the upper bound we need

n^{2+\tau/(r-2)}t^{-2/(r-2)}\leq\frac{n}{r-1}\implies t\geq(r-1)^{r/2-1}n^{r/2% -1+\tau/2}

And these are exactly the hypotheses on t. Let A be the event that \sum_{e\in B_{0}}g_{e}^{2}\geq(u/n)^{r-2}n^{-r+2}=n^{\tau}/t^{2}, and g_{e}^{2}\leq(u/n)^{r-2}n^{-r+2\tau}=n^{-2+3\tau}/t^{2} for all e\in B_{0}. By Claims 6 and 7 we have \Pr(A^{c})\leq\exp(-\Omega(n^{\tau/2r^{2}})).

Given that event A occurs we have that |g_{e}t|\leq n^{-1+3\tau/2}<\sqrt{p(1-p)}\pi for sufficiently large n. Therefore we can use Lemma 1 to bound

|\varphi_{g}(t)|=|\operatorname*{\mathbb{E}}[e^{it\sum g_{e}\chi_{e}}]|=\prod_{e\in B_{0}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}[e^{itg_{e}\chi_{e}}]}\right|\leq e^{-\frac{2}{\pi^{2}}t^{2}\sum g_{e}^{2}}\leq\exp(-\Omega(n^{\tau/r^{2}}))

.

To complete the proof of the lemma, we combine this inequality with the bound \Pr(A^{c})\leq\exp(-\Omega(n^{\tau/2r^{2}})) and the fact that |e^{ix}|\leq 1. ∎

Setting u slightly smaller yields a result more suited to slightly larger values of t.

Lemma 17.

For t\in[(r-1)^{r/2-1}n^{r/2-3\tau/2},~{}n^{r-1-2(r-2)\tau}], we have \operatorname*{\mathbb{E}}_{\mathbf{Y}}|\varphi_{g}(t)|=\exp(-\Omega(n^{\tau/2r^{2}})).

The proof is more or less the same as the above, but we include it here as subtle errors would be easy to make.

Proof.

Set u=n^{2+\frac{2-3\tau}{r-2}}t^{-2/(r-2)}, and therefore (u/n)^{r-2}=n^{r-3\tau}/t^{2}. First, we check that this is a feasible choice of u for Claims 6 and 7, that is n^{2\tau}\leq u\leq n/(r-1). For the lower bound, we find the requirement

n^{2+\frac{2-3\tau}{r-2}}t^{-2/(r-2)}\geq n^{2\tau}\implies t\leq n^{r-1-2(r-2% )\tau}

For the upper bound we need

n^{2+\frac{2-3\tau}{r-2}}t^{-2/(r-2)}\leq\frac{n}{r-1}\implies t\geq(r-1)^{r/2% -1}n^{r/2-3\tau/2}

And these are exactly the hypotheses on t. Let A be the event that \sum_{e\in B_{0}}g_{e}^{2}\geq(u/n)^{r-2}n^{-r+2}=n^{2-3\tau}/t^{2}, and g_{e}^{2}\leq(u/n)^{r-2}n^{-r+2\tau}=n^{-\tau}/t^{2} for all e\in B_{0}. By Claims 6 and 7 respectively we have \Pr(A^{c})\leq\exp(-\Omega(n^{\tau/2r^{2}})).

Given that event A occurs we have that |g_{e}t|\leq n^{-\tau/2}<\sqrt{p(1-p)}\pi. Therefore we can use Lemma 1 to bound (again conditional on the event A occurring)

|\varphi_{g}(t)|=|\operatorname*{\mathbb{E}}[e^{it\sum g_{e}\chi_{e}}]|=\prod_{e\in B_{0}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}[e^{itg_{e}\chi_{e}}]}\right|\leq e^{-\frac{4}{\pi^{2}}t^{2}\sum g_{e}^{2}}\leq\exp(-\Omega(n^{2-3\tau}))

.

To complete the proof of the lemma, we combine this inequality with the bound \Pr(A^{c})\leq\exp(-\Omega(n^{\tau/2r^{2}})) and the fact that |e^{ix}|\leq 1. ∎

Combining Lemmas 16 and 17 with Lemma 6 yields the following corollary.

Corollary 4.

Assume 0<\tau<\frac{1}{2r}. For t\in[n^{r/2-1+\tau},~{}n^{r-1-2(r-2)\tau}] we have |\varphi_{\mathcal{K}}(t)|\leq\exp(-\Omega(n^{\tau/2r^{2}})).

11. Bound for large t

For large t, an even more extreme application of Lemma 6 is needed. To do this, we take the following partition of the edge random variables. Partition the vertex set [n] into \lfloor\frac{n}{r}\rfloor r-cliques. Let \mathcal{F} be the family of cliques in this partition. Now let \tilde{B_{0}},B_{1},\ldots,B_{{r\choose 2}-1} be any partition of the edges of these cliques such that each B_{i} contains exactly one edge from each clique in \mathcal{F}. Now set B_{0} to be the union of \tilde{B_{0}} along with all edges of K_{n} not already partitioned into a B_{i} (i.e., edges connecting the different cliques in \mathcal{F} as well as the leftover edges from vertices not put into cliques). See Figure 1 for an example of this partition. In this section, rather than using the orthogonal character functions, it will be more convenient to use indicator vectors x_{e}\in\{0,1\} instead. Additionally for a set of edges S, we will use x^{S} to denote the monomial \prod_{e\in S}x_{e}.
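For concreteness, the following Python sketch (illustrative only; the function name is ours) constructs the family \mathcal{F} of vertex-disjoint r-cliques and edge classes in the manner just described, giving each clique's {r\choose 2} edges one label each and placing every remaining edge of K_{n} into B_{0}.

```python
# Illustrative sketch of the edge partition of Section 11: split [n] into
# floor(n/r) disjoint r-cliques, give each clique's C(r,2) edges one label each
# from {0, 1, ..., C(r,2)-1}, and put every leftover edge of K_n into B_0.
from itertools import combinations

def clique_partition(n, r):
    num_classes = r * (r - 1) // 2
    cliques = [list(range(i * r, (i + 1) * r)) for i in range(n // r)]
    B = {i: [] for i in range(num_classes)}      # B[0] starts out as \tilde{B}_0
    clique_edges = set()
    for clique in cliques:
        for label, e in enumerate(combinations(clique, 2)):
            B[label].append(e)                   # one edge of this clique per class
            clique_edges.add(e)
    for e in combinations(range(n), 2):          # all remaining edges go to B_0
        if e not in clique_edges:
            B[0].append(e)
    return cliques, B

cliques, B = clique_partition(10, 3)             # r = 3: classes B_0, B_1, B_2
print(len(cliques), {i: len(B[i]) for i in B})   # 3 triangles; B_1 and B_2 have 3 edges each
```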

Let X\in\{0,1\}^{B_{0}} and Y_{i}^{0},~{}Y_{i}^{1}\in\{0,1\}^{B_{i}} be independent as in Section 5. As before, for a given setting of \mathbf{Y}\in\{0,1\}^{\mathcal{B}} we define g(X) by setting

\displaystyle g(X):=\alpha(\mathcal{K})_{\mathbf{Y}}(X)=\alpha(\mathcal{K})(X,% \mathbf{Y})

Recall that \alpha is a linear operator, and that furthermore we have \alpha(x^{S})=0 unless S is a rainbow set of edges. However, by construction we know that the only rainbow r-cliques S are exactly the cliques S\in\mathcal{F}. Therefore we have

\alpha(\mathcal{K})=\sum_{S\equiv K_{r}}\alpha(x^{S})=\sum_{S\in\mathcal{F}}% \alpha(x^{S})

For each S\in\mathcal{F}, we have S=e\cup S^{\prime} where e\in B_{0} and S^{\prime}\subset\cup_{i\geq 1}B_{i}. So, for any fixed S\in\mathcal{F}, if we sample \mathbf{Y} at random, then we have that Y_{e^{\prime}}^{0}=1 and Y_{e^{\prime}}^{1}=0 for all edges e^{\prime}\in S^{\prime} with probability at least \lambda^{2{r\choose 2}^{2}} (recall \lambda=\min(p,1-p)). Label this event A_{S}. If A_{S} occurs then it follows that

\alpha(x^{S})(X,Y)=x_{e}\sum_{v\in\{0,1\}^{k}}(-1)^{|v|}x^{S^{\prime}}=x_{e}

as the only nonzero term in the above sum is when v=(0,0,\ldots,0). Given that A_{S} occurs and t\leq\pi\sigma, by Lemma 1 we have

\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{x_{e}}e^{itx^{S}/\sigma}}\right|=\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}_{x_{e}}e^{itx_{e}/\sigma}}\right|\leq 1-\frac{8p(1-p)t^{2}}{\pi^{2}\sigma^{2}}

Let z(\mathbf{Y}) denote the number of cliques S\in\mathcal{F} for which A_{S} occurs. Using the fact that g=\sum_{S\in\mathcal{F}}\alpha(x^{S}) and that the random variables \alpha(x^{S}) are independent, we may compute

|\operatorname*{\mathbb{E}}[e^{itg}]|=\prod_{S\in\mathcal{F}}\mathopen{}\mathclose{{}\left|\operatorname*{\mathbb{E}}[e^{it\alpha(x^{S})}]}\right|\leq\prod_{\begin{subarray}{c}S\in\mathcal{F}\\ A_{S}\mbox{\scriptsize{ occurs}}\end{subarray}}\mathopen{}\mathclose{{}\left(1-\frac{8p(1-p)t^{2}}{\pi^{2}\sigma^{2}}}\right)\leq\exp\mathopen{}\mathclose{{}\left(-\frac{8p(1-p)t^{2}z(\mathbf{Y})}{\pi^{2}\sigma^{2}}}\right)

Since the events A_{S} are independent and each occurs with probability \geq\lambda^{2{r\choose 2}^{2}}, it follows from Chernoff bounds that z(\mathbf{Y})\geq\lambda^{2{r\choose 2}^{2}}n/2r with probability \geq 1-\exp(-\Omega(n)).
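As a quick numerical illustration of this step (not part of the argument; the parameter values below are arbitrary): each clique satisfies A_{S} with probability exactly (p(1-p))^{{r\choose 2}-1}, which is at least the cruder bound \lambda^{2{r\choose 2}^{2}} used above, so \operatorname*{\mathbb{E}}[z(\mathbf{Y})]=\lfloor n/r\rfloor(p(1-p))^{{r\choose 2}-1}.

```python
# Illustrative sketch: exact probability of A_S and the expected number of
# cliques in F for which A_S occurs (n, r, p below are arbitrary choices).
def expected_good_cliques(n, r, p):
    m = r * (r - 1) // 2 - 1              # edges of a clique outside the B_0 class
    prob_A_S = (p * (1 - p)) ** m         # Pr[Y^0_{e'} = 1 and Y^1_{e'} = 0 for all e' in S']
    return (n // r) * prob_A_S            # E[z(Y)] = |F| * Pr[A_S]

for (n, r, p) in [(1000, 3, 0.5), (1000, 4, 0.5), (1000, 4, 0.3)]:
    lam = min(p, 1 - p)
    crude = (n // r) * lam ** (2 * (r * (r - 1) // 2) ** 2)   # the weaker bound used in the text
    print(n, r, p, expected_good_cliques(n, r, p), crude)
```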

So, writing A for the event that z(\mathbf{Y})\geq\lambda^{2{r\choose 2}^{2}}n/2r, we find that

\operatorname*{\mathbb{E}}_{\mathbf{Y}}\mathopen{}\mathclose{{}\left|% \operatorname*{\mathbb{E}}_{X}[e^{it\alpha(\mathcal{K})}]}\right|\leq\Pr(A)% \exp\mathopen{}\mathclose{{}\left(-\frac{8p(1-p)t^{2}}{\pi^{2}\sigma^{2}}\cdot% \frac{\lambda^{2{r\choose 2}^{2}}n}{2r}}\right)+\Pr(A^{c})=\exp\mathopen{}% \mathclose{{}\left(-\Omega\mathopen{}\mathclose{{}\left(\frac{t^{2}n}{\sigma^{% 2}}}\right)}\right)

Combining this with Lemma 6 we have proved the following:

Lemma 18.

For |t|\leq\pi\sigma we have that |\varphi_{\mathcal{K}}(t)|\leq\exp(-\Omega(t^{2}n/\sigma^{2})).

Figure 1. Example illustrating the partition B_{0},B_{1},\ldots,B_{{r\choose 2}-1} from Section 11. In this case (where r=3) we have 3 line styles (loosely dashed, dotted, and thin) representing the 3 different edge classes. Note that the only rainbow triangles are (v_{11},v_{12},v_{13}) and (v_{21},v_{22},v_{23}). B_{0}, here represented by the thin edges, is quite large, but most of its edges do not lie on even a single rainbow triangle, and so g(X) does not depend on them at all.

References

  • [Ber16] Ross Berkowitz. A quantitative local limit theorem for triangles in random graphs, 2016.
  • [ER61] P. Erdős and A. Rényi. On the evolution of random graphs. Bull. Inst. Internat. Statist., 38:343–347, 1961.
  • [Fel71] William Feller. An introduction to probability theory and its applications. Vol. II. Second edition. John Wiley & Sons, Inc., New York-London-Sydney, 1971.
  • [GK14] Justin Gilmer and Swastik Kopparty. A local central limit theorem for the number of triangles in a random graph. ArXiv e-prints, November 2014.
  • [Jan94] Svante Janson. Orthogonal decompositions and functional limit theorems for random graph statistics. Mem. Amer. Math. Soc., 111(534):vi+78, 1994.
  • [Kar84] Michał Karoński. Balanced subgraphs of large random graphs, volume 7 of Seria Matematyka [Mathematics Series]. Uniwersytet im. Adama Mickiewicza w Poznaniu, Poznań, 1984. With a Polish summary.
  • [KR83] Michał Karoński and Andrzej Ruciński. On the number of strictly balanced subgraphs of a random graph. In Graph theory (Łagów, 1981), volume 1018 of Lecture Notes in Math., pages 79–83. Springer, Berlin, 1983.
  • [NW88] Krzysztof Nowicki and John C. Wierman. Subgraph counts in random graphs using incomplete U-statistics methods. In Proceedings of the First Japan Conference on Graph Theory and Applications (Hakone, 1986), volume 72, pages 299–310, 1988.
  • [O’D14] Ryan O’Donnell. Analysis of boolean functions. Cambridge University Press, 2014.
  • [Pet75] V. V. Petrov. Sums of independent random variables. Springer-Verlag, New York-Heidelberg, 1975. Translated from the Russian by A. A. Brown, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 82.
  • [RR15] Adrian Röllin and Nathan Ross. Local limit theorems via Landau-Kolmogorov inequalities. Bernoulli, 21(2):851–880, 2015.
  • [Ruc88] Andrzej Ruciński. When are small subgraphs of a random graph normally distributed? Probab. Theory Related Fields, 78(1):1–10, 1988.

Appendix A Proof of Lemma 5

Lemma 5.

Let S,T\subset{[n]\choose 2}. For 0\leq i\leq k let S_{i}=S\cap B_{i} and T_{i}=T\cap B_{i}.

\alpha(\chi_{S})\alpha(\chi_{T})=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}% \mathopen{}\mathclose{{}\left[\sum_{\iota=0}^{1}-\chi_{S_{i}^{\iota}}\chi_{T_{% i}^{\iota\oplus 1}}+\chi_{S_{i}^{\iota}\Delta T_{i}^{\iota}}\mathopen{}% \mathclose{{}\left(\sum_{U_{i}\subset S_{i}^{\iota}\cap T_{i}^{\iota}}\gamma^{% |U_{i}|}\chi_{U_{i}}}\right)}\right]}\right)

In particular, for U\subset B_{0}\cup\mathcal{B} we have

\displaystyle\mathopen{}\mathclose{{}\left|\widehat{\alpha(\chi_{S})\alpha(% \chi_{T})}(U)}\right|\leq\begin{cases}0&\mbox{if }S\Delta T\not\subset\mbox{% flat}(U)\mbox{ or }\mbox{flat}(U)\not\subset S\cup T\\ \max\mathopen{}\mathclose{{}\left(\gamma^{|U|},1}\right)&\mbox{if }S\Delta T% \subset\mbox{flat}(U)\subset S\cup T\end{cases}
Proof.

For sets S,T\subset\mathcal{B} we compute

\displaystyle\alpha(\chi_{S})\alpha(\chi_{T}) \displaystyle=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}% \mathclose{{}\left[\chi_{S_{i}}(Y_{i}^{0})-\chi_{S_{i}}(Y_{i}^{1})}\right]}% \right)\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}\mathclose{{}% \left[\chi_{T_{i}}(Y_{i}^{0})-\chi_{T_{i}}(Y_{i}^{1})}\right]}\right)
\displaystyle=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}% \mathclose{{}\left[-\chi_{S_{i}}(Y_{i}^{0})\chi_{T_{i}}(Y_{i}^{1})-\chi_{S_{i}% }(Y_{i}^{1})\chi_{T_{i}}(Y_{i}^{0})+\chi_{S_{i}}(Y_{i}^{0})\chi_{T_{i}}(Y_{i}^% {0})+\chi_{S_{i}}(Y_{i}^{1})\chi_{T_{i}}(Y_{i}^{1})}\right]}\right)
\displaystyle=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}% \mathclose{{}\left[\sum_{\iota=0}^{1}-\chi_{S_{i}}(Y_{i}^{\iota})\chi_{T_{i}}(% Y_{i}^{\iota\oplus 1})+\chi_{S_{i}}(Y_{i}^{\iota})\chi_{T_{i}}(Y_{i}^{\iota})}% \right]}\right)
\displaystyle=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}\mathclose{{}\left[\sum_{\iota=0}^{1}-\chi_{S_{i}}(Y_{i}^{\iota})\chi_{T_{i}}(Y_{i}^{\iota\oplus 1})+\chi_{S_{i}\Delta T_{i}}(Y_{i}^{\iota})\prod_{e\in S_{i}\cap T_{i}}\mathopen{}\mathclose{{}\left(1+\frac{1-2p}{\sqrt{p(1-p)}}\chi_{e}(Y_{i}^{\iota})}\right)}\right]}\right)
\displaystyle=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}% \mathclose{{}\left[\sum_{\iota=0}^{1}-\chi_{S_{i}}(Y_{i}^{\iota})\chi_{T_{i}}(% Y_{i}^{\iota\oplus 1})+\chi_{S_{i}\Delta T_{i}}(Y_{i}^{\iota})\mathopen{}% \mathclose{{}\left(\sum_{U_{i}\subset S_{i}\cap T_{i}}\gamma^{|U_{i}|}\chi_{U_% {i}}(Y_{i}^{\iota})}\right)}\right]}\right)
\displaystyle=\mathopen{}\mathclose{{}\left(\prod_{i=1}^{k}\mathopen{}% \mathclose{{}\left[\sum_{\iota=0}^{1}-\chi_{S_{i}^{\iota}}\chi_{T_{i}^{\iota% \oplus 1}}+\chi_{S_{i}^{\iota}\Delta T_{i}^{\iota}}\mathopen{}\mathclose{{}% \left(\sum_{U_{i}\subset S_{i}^{\iota}\cap T_{i}^{\iota}}\gamma^{|U_{i}|}\chi_% {U_{i}}}\right)}\right]}\right)

So we see that this is supported only on sets U such that for each i\in[k] we have that U_{i}=S_{i}^{\iota}\cup T_{i}^{\iota\oplus 1} or S_{i}\Delta T_{i}\subset U_{i}.

In both of these cases S_{i}\Delta T_{i}\subset\mbox{flat}(U_{i})\subset S_{i}\cup T_{i}. Furthermore each of the terms appearing in the expansion of the product in the RHS of the above is unique, and each has coefficients of bounded size. In particular for any set V we find that

\displaystyle\mathopen{}\mathclose{{}\left|\widehat{\alpha(\chi_{S})\alpha(\chi_{T})}(V)}\right|\leq\begin{cases}0&\mbox{if }S\Delta T\not\subset\mbox{flat}(V)\mbox{ or }\mbox{flat}(V)\not\subset S\cup T\\ \max\mathopen{}\mathclose{{}\left(\gamma^{|V|},1}\right)&\mbox{if }S\Delta T\subset\mbox{flat}(V)\subset S\cup T\end{cases}\qquad ∎

Appendix B Proof of Lemma 13

In this section we prove Lemma 13 from Section 9.4. All terminology and parameters should be set as in that section, and are not repeated here.

Lemma 13.

Let Z=\sum_{e\in B_{0}}G_{e}^{2}. Assume that u\geq n^{2\tau} and 1\leq k\leq r-2. Then there exists a constant C_{0} such that for sufficiently large n, \Pr\mathopen{}\mathclose{{}\left[|Z|\leq C_{0}\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/r^{2}})}\right)

To build to this lemma we first analyze the transform of each summand G_{e}^{2} individually. \hat{G_{e}} is supported on sets S\in\mathcal{F}_{e}, that is, sets S such that \mathrm{supp}(S)-\mathrm{supp}(e) consists of 1 vertex from each color class U_{1},U_{2},\ldots,U_{k}. For sets S,T\in\mathcal{F}_{e} Lemma 5 tells us that

\displaystyle\mathopen{}\mathclose{{}\left|\widehat{\alpha(\chi_{S})\alpha(% \chi_{T})}(V)}\right|\leq\begin{cases}0&\mbox{if }S\Delta T\not\subset\mbox{% flat}(V)\mbox{ or }\mbox{flat}(V)\not\subset S\cup T\\ C&\mbox{if }S\Delta T\subset\mbox{flat}(V)\subset S\cup T\end{cases}

where C is some constant depending only on r and p.

Therefore, for V\subset\mathcal{B} we can bound the Fourier coefficients of G_{e}^{2} by

(14) \displaystyle\widehat{G_{e}^{2}}(V)=\sum_{S,T\in\mathcal{F}_{e}}\hat{g_{e}}(S)% \hat{g_{e}}(T)\widehat{\alpha(\chi_{S})\alpha(\chi_{T})}(V)\leq\sum_{% \begin{subarray}{c}S,T\in\mathcal{F}_{e}\\ S\Delta T\subset\mbox{flat}(V)\subset S\cup T\end{subarray}}C^{2}\hat{\mathcal% {K}}(S\cup e)\hat{\mathcal{K}}(T\cup e)

For S\in\mathcal{F}_{e}, we know that \hat{\mathcal{K}}(S\cup e)=\Theta(n^{-|\mathrm{supp}(S\cup e)|+1})=\Theta(n^{-k-1}). So the above sum can be reduced to a counting problem. For any given set V\subset\mathcal{B} we need to count the number of pairs S,T\in\mathcal{F}_{e} such that S\Delta T\subset\mbox{flat}(V)\subset S\cup T. First, to help with this counting problem we define the auxiliary color function c(S)=|\{i\mathrm{~{}s.t.~{}}\mathrm{supp}(S)\cap U_{i}\neq\varnothing~{}\mbox{and }1\leq i\leq k\}|. That is, c(S) is the number of the vertex classes U_{1},U_{2},\ldots,U_{k} that S sees. A few helpful observations:

  • Touching a vertex in U_{0} is not counted in c(V)

  • For any S\in\mathcal{F}_{e} we have c(S)=k

  • The above bullet is not true for all sets in the spectrum of G_{e}^{2}, as for example the empty set has a nontrivial coefficient of \widehat{G_{e}^{2}}(\varnothing)=\operatorname*{\mathbb{E}}[G_{e}^{2}]=\Theta(% (u/n)^{k}n^{-k-2})

We solve this counting problem with the following lemma.

Lemma 19.

Fix some V\subset\mathcal{B} and let \mathcal{A}=\{(S,T)\in\mathcal{F}_{e}^{2}\mathrm{~{}s.t.~{}}S\Delta T\subset\mbox{flat}(V)\subset S\cup T\}. If \mathrm{supp}(V)\cap U_{0}\not\subset\mathrm{supp}(e), then |\mathcal{A}|=0. Otherwise, |\mathcal{A}|\leq O(u^{k-c(V)}).

Proof.

The first claim follows from just noting that S,T\in\mathcal{F}_{e} implies that \mathrm{supp}(S)\cap U_{0}\subset\mathrm{supp}(e) and \mathrm{supp}(T)\cap U_{0}\subset\mathrm{supp}(e).

For the second inequality, we first count the number of ways to choose the support of S\cup T. Since S\Delta T\subset\mbox{flat}(V), it follows that \mathrm{supp}(S\Delta T)\subset\mathrm{supp}(\mbox{flat}(V)). However for any set of edges S,T it must be that \mathrm{supp}(S)\Delta\mathrm{supp}(T)\subset\mathrm{supp}(S\Delta T). Therefore \mathrm{supp}(S)\Delta\mathrm{supp}(T)\subset\mathrm{supp}(V).

Now we establish what \mathrm{supp}(S)\cap U_{i} and \mathrm{supp}(T)\cap U_{i} can look like, depending on the properties of V. We do this in three cases:

  1. |\mathrm{supp}(V)\cap U_{i}|=1. Let v_{i} be the vertex in the intersection. Because \mathrm{supp}(S)\Delta\mathrm{supp}(T)\subset\mathrm{supp}(V) and |\mathrm{supp}(S)\cap U_{i}|=|\mathrm{supp}(T)\cap U_{i}|=1 it follows that \mathrm{supp}(S)\cap U_{i}=\mathrm{supp}(T)\cap U_{i}=v_{i}.

  2. |\mathrm{supp}(V)\cap U_{i}|\geq 2. Since S,T are both rainbow, we have that S\cup T is supported on at most 2 vertices of color i. Combining this with the hypothesis that \mathrm{supp}(V)\subset\mathrm{supp}(S)\cup\mathrm{supp}(T) yields that \mathrm{supp}(V)\cap U_{i}=\mathrm{supp}(S\cup T)\cap U_{i}.

  3. |\mathrm{supp}(V)\cap U_{i}|=0. The above observation that \mathrm{supp}(S)\Delta\mathrm{supp}(T)\subset\mathrm{supp}(V) tells us that (\mathrm{supp}(S)\Delta\mathrm{supp}(T))\cap U_{i}=\varnothing and so \mathrm{supp}(S)\cap U_{i}=\mathrm{supp}(T)\cap U_{i}. This single point of intersection could be any arbitrary point from U_{i}, leaving at most u possible choices.

So combining all these cases we find that

  • If |\mathrm{supp}(V)\cap U_{i}|\geq 1, then \mathrm{supp}(S\cup T)\cap U_{i} is determined uniquely by V

  • If |\mathrm{supp}(V)\cap U_{i}|=0 then we permit that \mathrm{supp}(S\cup T)\cap U_{i} could be any one vertex from U_{i}

Meanwhile \mathrm{supp}(S\cup T)\cap U_{0}\subset\mathrm{supp}(e), and so there are at most 4 possible choices for this set. Combining this over all possible choices, we see that \mathrm{supp}(S\cup T) can take at most 4u^{k-c(V)} distinct possible values. Lastly we note that there are at most O(1) possible ways to decide how to form rainbow sets of edges S,T supported on a fixed set of at most 2k+2 vertices, so this finishes the proof. ∎

Lemma 20.

For j\in\{0,1,2\} and 0\leq c\leq k define

\mathcal{C}_{j}:=\{V\subset\mathcal{B}|\widehat{Z}(V)\neq 0,~{}c(V)=c,~{}|% \mathrm{supp}(V)\cap U_{0}|=j\}

Then:

\displaystyle|\mathcal{C}_{0}|=O(u^{2c-1})\qquad|\mathcal{C}_{1}|=O(nu^{2c})\qquad|\mathcal{C}_{2}|=O(n^{2}u^{2c})
Proof.

We prove the claims on the size of \mathcal{C}_{1},\mathcal{C}_{2} first. Any set V such that \hat{Z}(V)\neq 0 has the property that for some S,T\in\mathcal{F}_{e} we have \mbox{flat}(V)\subset S\cup T. Since any S,T each contain exactly one vertex in each of the U_{i}, it follows that S\cup T is supported on at most 2 vertices in each U_{i}. Furthermore, for V\in\mathcal{C}_{j} there are at most n^{j} possible choices for \mathrm{supp}(V)\cap U_{0}. Therefore there are at most u^{2c(V)}n^{j} ways to choose the vertex support of V. Once the vertices are chosen, there are only O(1) subsets of edges from \mathcal{B} supported on any vertex set of size at most 2k. This shows our bound on |\mathcal{C}_{1}| and |\mathcal{C}_{2}|.

For the claim about |\mathcal{C}_{0}|, note that if V\in\mathcal{C}_{0}, then as above there are some S,T\in\mathcal{F}_{e} such that S\Delta T\subset\mbox{flat}(V). Now let i be the smallest index such that V is supported on a vertex in U_{i}. S has an edge of the form (a,b) where a\in U_{i} and b\in U_{j} for some j<i by the rainbow condition for membership in \mathcal{F}_{e}. It follows that b\notin\mathrm{supp}(V), and hence (a,b)\notin\mbox{flat}(V). Hence it must be the case that (a,b)\in T as well. So S\cap U_{i}=T\cap U_{i}=a. Therefore \mathrm{supp}(V) contains at most 1 vertex from U_{i}. Continuing to count the number of possible choices of the support of V as we did above we find that |\mathcal{C}_{0}|=O(u^{2c-1}). ∎

From these two lemmas we can obtain our desired concentration bound for \sum_{e}G_{e}^{2}.

Lemma 13.

Let Z=\sum_{e\in B_{0}}G_{e}^{2}. Assume that u\geq n^{2\tau} and 1\leq k\leq r-2. Then there exists a constant C such that for sufficiently large n, \Pr\mathopen{}\mathclose{{}\left[|Z|\leq C\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k}n^{-k}}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{\tau/r^{2}})}\right)

Proof.

First we use Lemma 19 to compute \hat{Z}(V) in terms of k and c(V). We do this in three pieces. For i\in\{0,1,2\} let \mathcal{F}_{i} be the family of nonempty subsets V\subset\mathcal{B} such that |\mathrm{supp}(V)\cap U_{0}|=i.

For any V\subset\mathcal{B} and e\in{U_{0}\choose 2} note that \widehat{G_{e}^{2}}(V)=0 unless \mathrm{supp}(V)\cap U_{0}\subset\mathrm{supp}(e). Therefore if V\in\mathcal{F}_{i}, then there are at most n^{2-i} choices of e\in{U_{0}\choose 2} such that \widehat{G_{e}^{2}}(V)\neq 0. Combining this observation with Equation 14 and Lemma 19, for V\in\mathcal{F}_{i} we have

\displaystyle\hat{Z}(V)=\sum_{e\in B_{0}}\widehat{G_{e}^{2}}(V)\leq O\mathopen{}\mathclose{{}\left(n^{2-i}n^{-2k-2}u^{k-c(V)}}\right)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{k-c(V)}n^{-k-c(V)-i}}\right)

Now we are in a position to compute Var(Z). We break the sets up into three parts as per the above calculation. First for sets V\in\mathcal{F}_{0} we use Lemma 20 to say

\displaystyle\sum_{V\in\mathcal{F}_{0}}\hat{Z}(V)^{2} \displaystyle=\sum_{c=1}^{k}\sum_{\begin{subarray}{c}V\in\mathcal{F}_{0}\\ c(V)=c\end{subarray}}\hat{Z}(V)^{2}=\sum_{c=1}^{k}\sum_{\begin{subarray}{c}V% \in\mathcal{F}_{0}\\ c(V)=c\end{subarray}}O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}% \left(\frac{u}{n}}\right)^{2k-2c}n^{-2k-2c}}\right)
\displaystyle=\sum_{c=1}^{k}O\mathopen{}\mathclose{{}\left(\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{2k-2c}n^{-2k-2c}u^{2c-1}}\right)=O% \mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right% )^{2k-1}n^{-2k-1}}\right)

Next for \mathcal{F}_{i} where i=1,2 we find

\displaystyle\sum_{V\in\mathcal{F}_{i}}\hat{Z}(V)^{2} \displaystyle=\sum_{c=1}^{k}\sum_{\begin{subarray}{c}V\in\mathcal{F}_{i}\\ c(V)=c\end{subarray}}\hat{Z}(V)^{2}=\sum_{c=1}^{k}\sum_{\begin{subarray}{c}V\in\mathcal{F}_{i}\\ c(V)=c\end{subarray}}O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}\right)^{2k-2c}n^{-2k-2c-2i}}\right)
\displaystyle=\sum_{c=1}^{k}O\mathopen{}\mathclose{{}\left(\mathopen{}% \mathclose{{}\left(\frac{u}{n}}\right)^{2k-2c}n^{-2k-2c-2i}u^{2c}n^{i}}\right)% =O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{u}{n}}% \right)^{2k}n^{-2k-i}}\right)

Combining these two bounds we find

Var(Z)=\sum_{\begin{subarray}{c}V\subset\mathcal{B}\\ V\neq\varnothing\end{subarray}}\hat{Z}(V)^{2}=\sum_{V\in\mathcal{F}_{0}}\hat{Z% }(V)^{2}+\sum_{V\in\mathcal{F}_{1}}\hat{Z}(V)^{2}+\sum_{V\in\mathcal{F}_{2}}% \hat{Z}(V)^{2}=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(% \frac{u}{n}}\right)^{2k-1}n^{-2k-1}}\right)

We know from Lemma 11 that \operatorname*{\mathbb{E}}[Z]=\Theta(u^{k}n^{-2k}), so the event that |Z|\leq\operatorname*{\mathbb{E}}[Z]/2 implies that

\frac{|Z-\operatorname*{\mathbb{E}}[Z]|}{\sqrt{Var(Z)}}\geq\frac{\operatorname% *{\mathbb{E}}[Z]}{2\sqrt{Var(Z)}}=\Omega\mathopen{}\mathclose{{}\left(\frac{u^% {k}n^{-2k}}{u^{k-\frac{1}{2}}n^{-2k}}}\right)=\Omega\mathopen{}\mathclose{{}% \left(u^{\frac{1}{2}}}\right)=\Omega\mathopen{}\mathclose{{}\left(n^{\tau}}\right)

Since Z is a polynomial of degree at most r(r-1), a standard application of Theorem 6 confirms that \Pr\mathopen{}\mathclose{{}\left[|Z|\leq\operatorname*{\mathbb{E}}[Z]/2}\right]\leq\exp\mathopen{}\mathclose{{}\left(-\Omega(n^{2\tau/r^{2}})}\right), and this is what we needed to show. ∎
