Tight Lower Bounds for Planted Clique
in the Degree-4 SOS Program
Abstract
We give a lower bound of $\tilde{\Omega}(\sqrt{n})$ for the degree-4 Sum-of-Squares SDP relaxation for the planted clique problem. Specifically, we show that on an Erdős–Rényi graph $G \sim G(n,\frac{1}{2})$, with high probability there is a feasible point for the degree-4 SOS relaxation of the clique problem with an objective value of $\tilde{\Omega}(\sqrt{n})$, so that the program cannot distinguish between a random graph and a random graph with a planted clique of size $\tilde{O}(\sqrt{n})$. This bound is tight.
We build on the works of Deshpande and Montanari and Meka et al., who give lower bounds of $\tilde{\Omega}(n^{1/3})$ and $\tilde{\Omega}(n^{1/4})$ respectively. We improve on their results by making a perturbation to the SDP solution proposed in their work, and then showing that this perturbation remains PSD as the objective value approaches $\tilde{\Omega}(\sqrt{n})$.
In an independent work, Hopkins, Kothari and Potechin [HKP15] have obtained a similar lower bound for the degree-4 SOS relaxation.
1 Introduction
In the Maximum Clique problem, the input consists of a graph and the goal is to find the largest subset of vertices all of which are connected to each other. The Maximum Clique problem is NP-hard to approximate within a factor of $n^{1-\epsilon}$ for all $\epsilon > 0$ [Hås96, Kho01].
Karp [Kar76] suggested an average-case version of the Maximum Clique problem on random graphs drawn from the Erdős–Rényi distribution $G(n,\frac{1}{2})$. A heuristic argument shows that an Erdős–Rényi graph has a clique of size roughly $\log_2 n$ with high probability: given such a graph, choose a random vertex, then choose one of its neighbors, then choose a vertex adjacent to both, and continue this process until there is no vertex adjacent to the clique. After $t$ steps, the probability that a given vertex can be added is $2^{-t}$, and so after about $\log_2 n$ steps this process terminates. This heuristic argument can be made precise, and one can show that this greedy algorithm can find a clique of size $(1+o(1))\log_2 n$ in an instance of $G(n,\frac{1}{2})$ in polynomial time.
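The greedy process just described can be sketched in a few lines (a self-contained simulation; the function name and parameters are ours, not from the paper):

```python
import random

def greedy_clique(n, seed=0):
    """Sample G(n, 1/2) and grow a clique greedily, as in the heuristic above."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = (rng.random() < 0.5)
    clique, candidates = [], list(range(n))
    while candidates:
        # pick any vertex adjacent to every clique member chosen so far
        v = candidates.pop(rng.randrange(len(candidates)))
        clique.append(v)
        candidates = [u for u in candidates if adj[u][v]]
    return clique, adj
```

For $n$ in the hundreds, the returned clique typically has size close to $\log_2 n$, matching the heuristic calculation: the candidate set roughly halves with each vertex added.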
Indeed, with some work it can be shown that the largest clique in an instance of $G(n,\frac{1}{2})$ actually has size roughly $2\log_2 n$ with high probability [GM75, Mat76, BE76]. But while a clique of size roughly $\log_2 n$ can easily be found in polynomial time (using the heuristic from the previous paragraph), an efficient algorithm for finding the clique of size roughly $2\log_2 n$ has been much more elusive. In his seminal paper on the probabilistic analysis of combinatorial algorithms, Karp asked whether there exists a polynomial-time algorithm for finding a clique of size $(1+\epsilon)\log_2 n$ for any fixed constant $\epsilon > 0$ [Kar76]. Despite extensive efforts, there has been no algorithmic progress on this question since.
The planted clique problem is a natural variant of this problem wherein the input is promised to be either a graph drawn from $G(n,\frac{1}{2})$ or a graph with a clique of size $k$ planted within its $n$ vertices. The goal of the algorithm is to distinguish between the two distributions.
For $k \geq C\log_2 n$ with a sufficiently large constant $C$, there is a simple quasi-polynomial-time algorithm that distinguishes the two distributions. The algorithm simply tries all subsets of $C\log_2 n$ vertices, looking for a clique. For a random graph $G(n,\frac{1}{2})$, there are no cliques of this size, but there is one in the planted distribution. Clearly, the planted clique problem becomes easier as the planted clique's size increases. Yet there are no polynomial-time algorithms known for this problem for any $k = o(\sqrt{n})$. For $k = \Omega(\sqrt{n})$, a result of Alon et al. uses random matrix theory to argue that looking at the spectrum of the adjacency matrix suffices to solve the decision problem [AKS98].
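Though the following is not the algorithm of [AKS98] itself, the spectral phenomenon it exploits is easy to demonstrate numerically (a hypothetical toy experiment of ours): the top eigenvalue of the $\pm 1$ adjacency matrix of a sample from $G(n,\frac12)$ concentrates near $2\sqrt{n}$, while planting a clique of size $k$ well above $2\sqrt{n}$ pushes the top eigenvalue up to roughly $k$.

```python
import numpy as np

def top_eigenvalue(n, k=0, seed=0):
    """Top eigenvalue of the +/-1 adjacency matrix of G(n, 1/2),
    optionally with a clique planted on the first k vertices."""
    rng = np.random.default_rng(seed)
    A = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    A = A + A.T                          # symmetric, zero diagonal
    if k:
        A[:k, :k] = 1.0 - np.eye(k)      # plant a clique on {0, ..., k-1}
    return float(np.linalg.eigvalsh(A)[-1])
```

With $n = 400$ (so $2\sqrt{n} = 40$) and a planted clique of size $k = 80$, the two cases separate cleanly; the Rayleigh quotient of the normalized clique indicator already certifies a top eigenvalue of at least $k - 1$.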
The works of [FK08, BV09] show that, if one were able to efficiently calculate the injective tensor norm of a certain random order-$r$ tensor, then by extending the spectral algorithm of [AKS98] one would have a polynomial-time algorithm for $k = \tilde{\Omega}(n^{1/r})$. However, there is no known algorithm that efficiently computes the injective tensor norm of an order-$r$ tensor; in fact, computing the injective tensor norm is hard to approximate in the general case [HM13].
While algorithmic progress has been slow, there has been success in proving strong lower bounds for the planted clique problem within specific algorithmic frameworks. The first such bound was given by Jerrum, who showed that a class of Markov chain Monte Carlo algorithms requires a super-polynomial number of steps to find a clique of size $(1+\epsilon)\log_2 n$, for any fixed $\epsilon > 0$, in an instance of $G(n,\frac{1}{2})$ [Jer92]. Feige and Krauthgamer showed that $r$ levels of the Lovász–Schrijver SDP hierarchy are needed to find a hidden clique of size $\tilde{\Theta}(\sqrt{n/2^r})$ [FK00, FK03]. Feldman et al. show (for the planted bipartite clique problem) that any "statistical algorithm" cannot distinguish in a polynomial number of queries between the random and planted cases for $k = O(n^{1/2-\delta})$ [FGR12].
More recently, there has been an effort to replicate the results of [FK00, FK03] for the Sum-of-Squares (or SOS) hierarchy, a more powerful SDP hierarchy. The recent work of [MPW15] achieves a lower bound for any constant number of rounds of the SOS hierarchy, by demonstrating a feasible solution for the corresponding SDP relaxation with a large enough objective value in the random case. The work of [DM15a] achieves a sharper lower bound of $\tilde{\Omega}(n^{1/3})$ for the Meka–Potechin–Wigderson SDP solution, but only for two rounds; a counterexample of Kelner (which may be found in [Bar14]) demonstrates that the analysis of [DM15a] is tight for the integrality gap instance of [DM15a, MPW15] within logarithmic factors.
This line of work brings to the fore the question: can a constant-degree SOS relaxation solve the planted clique problem for some $k = o(\sqrt{n})$? While lower bounds are known for Lovász–Schrijver SDP relaxations for planted clique [FK00, FK03], SOS relaxations can in general be much more powerful than Lovász–Schrijver relaxations. For example, while there are instances of unique games that are hard for a super-constant number of rounds of the Lovász–Schrijver SDP hierarchy [KS09, RS09], recent work has shown that these instances are solved by a constant-degree SOS hierarchy [BBH12].
Moreover, even the degree-4 SOS relaxation proves to be surprisingly powerful in a few applications:

First, the work of Barak et al. [BBH12] shows that a degree-4 SOS relaxation can certify hypercontractivity of low-degree polynomials over the hypercube. This argument is the reason that hard instances for Lovász–Schrijver and other SDP hierarchies constructed via the noisy hypercube gadgets are easily refuted by the SOS hierarchy.

Second, a degree-4 SOS relaxation can certify that the $2 \to 4$ norm of a random subspace of dimension at most $O(\sqrt{n})$ is bounded by a constant (with high probability over the choice of the subspace) [BBH12]. This average-case problem has superficial similarities to the planted clique problem.
In this work, we make modest progress towards a lower bound for SOS relaxations of planted clique by obtaining a nearly tight lower bound for the degree-4 SOS relaxation (corresponding to two rounds, $d = 2$). More precisely, our main result is the following.
Theorem 1.1.
Suppose that $G \sim G(n,\frac{1}{2})$. Then with probability $1 - o(1)$, there exists a feasible solution to the SOS SDP of degree $4$ ($d = 2$) with objective value $\tilde{\Omega}(\sqrt{n})$.\footnote{We have made no effort to optimize logarithmic factors in this work; a more delicate analysis of the required logarithmic factors is certainly possible.}
Note that by the work of [AKS98], this result is tight up to logarithmic factors. In an independent work, Hopkins, Kothari and Potechin [HKP15] have obtained a similar result.
Our work builds heavily on previous work by Meka, Potechin and Wigderson [MPW15] and by Deshpande and Montanari [DM15a]. Since the SDP solution constructed in these works is infeasible for objective values beyond $\tilde{O}(n^{1/3})$, we introduce a modified SDP solution with objective value $\tilde{\Omega}(\sqrt{n})$, and prove that for a random graph the solution is feasible with high probability. At the parameter setting for which the objective value becomes $\tilde{\Omega}(\sqrt{n})$, the SDP solutions of [DM15a, MPW15] violate the PSDness constraint; equivalently, there exists a set $T$ of test vectors such that $x^{\top} \mathcal{M} x < 0$ for all $x \in T$, where $\mathcal{M}$ is the moment matrix of the solution. Our feasible SDP solution is a perturbation of their solution: we add spectral mass to the solution along the vectors from the set $T$, then enforce the linear constraints of the SDP program.
1.1 Notation
We use the symbol $\succeq$ to denote the PSD ordering on matrices, saying that $A \succeq B$ if $A - B$ is PSD and that $A \preceq B$ if $B \succeq A$. When we wish to hide constant factors for clarity, we use $A \lesssim B$ to denote that $A \preceq c \cdot B$ for some constant $c > 0$.
We denote by $\mathbf{1}_n \in \mathbb{R}^n$ the vector such that $(\mathbf{1}_n)_i = 1$ for all $i$, or the all-1's vector. We denote the normalized version of this vector by $\hat{\mathbf{1}}_n = \mathbf{1}_n/\sqrt{n}$. We will drop the subscript $n$ when it is clear from context.
Our notation at times closely follows that of [DM15a], as our paper builds on their results and we recycle many of their bounds.
For convenience, we will use the shorthand $[n] = \{1, \ldots, n\}$. We will abuse notation by using $\binom{[n]}{k}$ to refer to both the binomial coefficient and to the set of subsets of $[n]$ of size $k$. We will also use the notation $\binom{[n]}{\leq k}$ to refer to the union of the sets $\binom{[n]}{j}$ for $j \leq k$. Further, when we give a vector $x \in \mathbb{R}^{\binom{[n]}{2}}$, we will identify the entries of $x$ by unordered pairs of elements of $[n]$.
Throughout the paper, we will (unless otherwise stated) work with some fixed instance $G$ of $G(n,\frac{1}{2})$, and denote by $G_i$ the "centered" $i$th row of the adjacency matrix of $G$, with $j$th entry equal to $1$ if the edge $(i,j) \in G$, equal to $-1$ if the edge $(i,j) \notin G$, and equal to $0$ for $j = i$. We will use $G_{ij}$ to denote the $j$th index of $G_i$.
1.2 Organization
In \prettyrefsec:overview, we give background material on the degree-4 SOS relaxation for the max-clique problem, describe the integrality gap of Deshpande and Montanari for the planted clique problem, and explain the obstacle they face in reaching an integrality gap value of $\tilde{\Omega}(\sqrt{n})$. We then describe our integrality gap instance, motivating our construction using the obstacle for the Deshpande–Montanari and Meka–Potechin–Wigderson witness, and give an overview of our proof that our integrality gap instance is feasible. In \prettyrefsec:mainproof, we prove that our witness is PSD, completing the proof of feasibility. \prettyrefsec:matrixconc contains our concentration bounds for random matrices that arise within our proofs. In our proof, we reuse several bounds proved by Deshpande and Montanari. As far as possible, we restate the claims from [DM15a] as they are used; for convenience, in \prettyrefapp:dmbounds, we list a few other claims from Deshpande and Montanari that we use in this paper.
2 Preliminaries and Proof Overview
In this section, we describe the degree-4 SOS relaxation for the max-clique SDP and give background on the Deshpande–Montanari witness. We then describe our own modified witness, and give an overview of the proof that our witness is feasible (the difficult part being showing that our witness is PSD). The full proof of feasibility is deferred to \prettyrefsec:mainproof.
2.1 Degree-4 SOS Relaxation for Max Clique
The degree-4 SOS relaxation for the maximum clique problem is a semidefinite program whose variables are $\{x_S\}_{S \subseteq [n], |S| \leq 4}$. For a subset $S$ with $|S| \leq 4$, the variable $x_S$ indicates whether $S$ is contained in the maximum clique. For a graph $G$ on $n$ vertices, the program can be described as follows.
Maximize $\sum_{i \in [n]} x_{\{i\}}$  (2.1)
subject to $x_{\emptyset} = 1$; $x_S = 0$ for every $S$ that is not a clique in $G$; and $\mathcal{M} \succeq 0$, where $\mathcal{M} \in \mathbb{R}^{\binom{[n]}{\leq 2} \times \binom{[n]}{\leq 2}}$ is the matrix with entries $\mathcal{M}(S,T) = x_{S \cup T}$.
It is instructive to think of the variable $x_S$ as a pseudo-expectation of a product of indicator variables, or a pseudo-moment: $x_S = \tilde{\mathbb{E}}\big[\prod_{i \in S} \xi_i\big]$, where $\xi_i$ is the indicator that vertex $i$ belongs to the clique.
Intuitively, the constraints of the SDP force the solution to behave somewhat like the moments of a probability distribution over integral solutions, although they needn't correspond to the moments of a true distribution, hence the term pseudo-moment. For more background, see e.g. [Bar14]. The pseudo-moment interpretation of the SDP solution motivates the choice of the witness in the prior work. For example, the objective function in this view is simply the pseudo-expectation of the size of the clique, $\tilde{\mathbb{E}}\big[\sum_{i \in [n]} \xi_i\big]$.
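For instance, writing $\xi_i$ for the indicator that vertex $i$ is in the clique, the uniform pseudo-moments used in the prior work (where each vertex is in the clique with "probability" $k/n$; see the next subsection) give objective value exactly the intended clique size:

```latex
\tilde{\mathbb{E}}\Big[\sum_{i \in [n]} \xi_i\Big]
  \;=\; \sum_{i \in [n]} x_{\{i\}}
  \;=\; n \cdot \frac{k}{n}
  \;=\; k .
```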
If $\mathrm{SDP}(G)$ denotes the optimum value of the SDP relaxation (2.1) on graph $G$, then clearly $\mathrm{SDP}(G)$ is at least the size of the maximum clique in $G$. In order to prove a lower bound for the degree-4 SOS relaxation on $G(n,\frac{1}{2})$, it is sufficient to argue that with overwhelming probability, $\mathrm{SDP}(G)$ is significantly larger than the maximum clique of a random graph. This amounts to exhibiting a feasible SDP solution with large objective value, for an overwhelming fraction of graphs sampled from $G(n,\frac{1}{2})$. Formally, we will show the following:
Theorem 2.1 (Formal version of \prettyrefthm:mainresult).
There exists an absolute constant $c > 0$ such that
$\Pr_{G \sim G(n,\frac{1}{2})}\Big[\mathrm{SDP}(G) \geq \frac{\sqrt{n}}{\log^{c} n}\Big] \geq 1 - o(1).$
We obtain \prettyrefthm:maintechnical by constructing a point, or witness, for each $G$, then proving that the point is feasible with high probability. We defer the description of our witness to \prettyrefdef:sol and \prettyrefdef:solpE, as we spend \prettyrefsec:DMwit and \prettyrefsec:probspace motivating our construction; however, the curious reader may skip ahead to \prettyrefdef:solpE, which does not require additional notation.
2.2 Deshpande–Montanari Witness
Henceforth, fix a graph $G$ that is sampled from $G(n,\frac{1}{2})$. Both the work of Meka, Potechin and Wigderson [MPW15] and that of Deshpande and Montanari [DM15a] construct essentially the same SDP solution for the degree-4 SOS relaxation.
This SDP solution assigns to each clique of size at most $4$ a value that depends only on its size. In essence, their solution takes advantage of the independence of the instance. The motivating observation is that the variable $x_S$ can be thought of as a pseudo-expectation of the indicator that $S$ is a subclique of the planted clique. The idea is then to make this pseudo-expectation of the indicator consistent with the true expectation under the distribution where a clique of size $k$ is planted uniformly at random within the instance of $G(n,\frac{1}{2})$. Thus, every vertex is in the clique "with uniform probability": $x_{\{i\}} = \frac{k}{n}$.
Then, the same principle is applied to edges, triangles, and $4$-cliques, so that $x_S \approx \left(\frac{k}{n}\right)^{|S|}$ for every clique $S$ with $|S| \leq 4$.
This is the general idea of the SDP solution of [DM15a]. More formally, the SDP solution in [DM15a] is specified by four parameters $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ as
$x_S = \alpha_{|S|} \cdot K_S,$
where for a set of vertices $S$, $K_S$ is the indicator that the subgraph induced on $S$ is a clique. The parameters $\alpha_1, \ldots, \alpha_4$ determine the value of the objective function, and the feasibility of the solution. As a convention, we will define $\alpha_0 = 1$.
It is easy to check that the solution satisfies all the linear constraints of the SOS program (2.1), since it assigns nonzero values only to cliques in $G$. The key difficulty is in showing that the moment matrix $\mathcal{M}$ is PSD for an appropriate choice of parameters.
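The linear constraints are easy to check in miniature. Below is a hypothetical toy version of such a witness (only the assignment of values to sets, with the simple choice $\alpha_s = (k/n)^s$ suggested by the pseudo-moment heuristic; the parameters actually used in [DM15a] differ):

```python
import itertools, random

def dm_assignment(n, k, seed=0):
    """Assign x_S = (k/n)^{|S|} to every clique S with |S| <= 4, and 0 otherwise."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in itertools.combinations(range(n), 2)
             if rng.random() < 0.5}
    def is_clique(S):
        return all(frozenset(e) in edges for e in itertools.combinations(S, 2))
    x = {frozenset(): 1.0}                     # x_emptyset = 1
    for s in range(1, 5):
        for S in itertools.combinations(range(n), s):
            x[frozenset(S)] = (k / n) ** s if is_clique(S) else 0.0
    return x, edges
```

By construction the value of a set depends only on its size (and on whether it is a clique), so all linear constraints of the program hold; PSDness of the associated moment matrix is the nontrivial part.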
In order to show that $\mathcal{M} \succeq 0$, it is sufficient to show that $M \succeq 0$, where $M$ is indexed by $\binom{[n]}{\leq 2}$ and given by
$M(S,T) = \alpha_{|S \cup T|} \prod_{i \in S,\, j \in T,\, i \neq j} \mathbb{1}[(i,j) \in G],$
where $\mathbb{1}[(i,j) \in G]$ is the indicator for the presence of the edge $(i,j)$. In words, $M$ is the matrix where the entry $M(S,T)$ is proportional not to the indicator of whether $S \cup T$ is a clique, but to the indicator of whether $G$ has as a subgraph the bipartite clique with bipartitions $S$ and $T$. It is easy to see that the matrix $\mathcal{M}$ is obtained by dropping from $M$ the rows and columns corresponding to sets that are not cliques in $G$. Hence $M \succeq 0$ implies $\mathcal{M} \succeq 0$.
Notice that is a random matrix whose entries depend on the edges in the random graph . At the risk of oversimplification, the approach of both the previous works [MPW15] and [DM15a] can be broadly summarized as follows:

(Expectation) Show that the expected matrix $\mathbb{E}[M]$ has sufficiently large positive eigenvalues.

(Concentration) Show that with high probability over the choice of $G$, the noise matrix $M - \mathbb{E}[M]$ has bounded eigenvalues, so as to ensure that $M = \mathbb{E}[M] + (M - \mathbb{E}[M]) \succeq 0$.
Here we will sketch a few key details of the argument in [DM15a]. The matrix $M$ can be decomposed into blocks according to the sizes of the indexing sets. Deshpande and Montanari use Schur complements to reduce the problem of proving that $M \succeq 0$ to facts about these blocks. Specifically, they show the following lemma:
Lemma 2.2.
Write $M$ in blocks as $M = \begin{pmatrix} M_{11} & M_{12} \\ M_{12}^{\top} & M_{22} \end{pmatrix}$, where $M_{11}$ is the submatrix of $M$ corresponding to monomials of degree at most $1$ and $M_{22}$ is the submatrix corresponding to monomials of degree $2$. Then $M$ is PSD if and only if
$M_{11} \succeq 0$,  (2.2)
$M_{22} - M_{12}^{\top} M_{11}^{+} M_{12} \succeq 0$.  (2.3)
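The block criterion behind such lemmas is the standard Schur-complement characterization of PSDness: a symmetric block matrix $\begin{pmatrix} A & B \\ B^{\top} & C \end{pmatrix}$ is PSD iff $A \succeq 0$, the columns of $B$ lie in the range of $A$, and the Schur complement $C - B^{\top} A^{+} B$ is PSD. A quick numerical sanity check (with hypothetical small matrices, not the actual blocks of $M$):

```python
import numpy as np

def is_psd_via_schur(A, B, C, tol=1e-8):
    """PSD test for the block matrix [[A, B], [B.T, C]] via the Schur complement of A."""
    if np.linalg.eigvalsh(A).min() < -tol:
        return False
    A_pinv = np.linalg.pinv(A)
    if np.linalg.norm(A @ A_pinv @ B - B) > tol:   # columns of B must lie in range(A)
        return False
    schur = C - B.T @ A_pinv @ B
    return np.linalg.eigvalsh(schur).min() >= -tol
```

The value of this reduction is that it converts one global PSDness statement into separate spectral statements about smaller blocks, each of which can be attacked with matrix concentration.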
The most significant challenge is to argue that (2.3) holds with high probability. In fact, the inequality only holds for the Deshpande–Montanari SDP solution with high probability for parameters for which the objective value is $\tilde{O}(n^{1/3})$.
Expected matrix
The expected matrix $\mathbb{E}[M]$ is symmetric with respect to permutations of the vertices. It forms an association scheme (see [MPW15, DM15a]), by virtue of which its eigenvalues and eigenspaces are well understood. In particular, the following proposition in [DM15a] is an immediate consequence of the theory of association schemes.
Proposition 2.3 (Proposition 4.16 in [DM15a]).
$\mathbb{E}[M]$ has three eigenspaces $V_0, V_1, V_2$, such that
$\mathbb{E}[M] = \lambda_0 P_0 + \lambda_1 P_1 + \lambda_2 P_2,$
where $P_0, P_1, P_2$ are the projections to the spaces $V_0, V_1, V_2$ respectively. The eigenvalues are given by,
(2.4)  
(2.5)  
(2.6) 
Further, the eigenspaces are given by,
where we have used $\mathbb{R}^{\binom{[n]}{\leq 2}}$ to denote the space of vectors of real numbers indexed by subsets of $[n]$ of size at most $2$.
Deviation from Expectation
Given the lower bound on eigenvalues of the expected matrix $\mathbb{E}[M]$, the next step would be to bound the spectral norm of the noise $M - \mathbb{E}[M]$. However, since the eigenvalues of $\mathbb{E}[M]$ are stratified (for the given parameters), with one large eigenvalue and several much smaller eigenvalues, standard matrix concentration does not suffice to give tight bounds. To overcome this, Deshpande and Montanari split $M$ and $M - \mathbb{E}[M]$ along the eigenspaces of $\mathbb{E}[M]$.
More precisely, let us split $M$ into two parts, where the first part includes all multilinear entries and the second includes all non-multilinear entries, i.e., entries $M(S,T)$ for which $S \cap T \neq \emptyset$.
The spectral norm of the matrix over the eigenspaces is carefully bounded in [DM15a].
Lemma 2.4.
(Proposition 4.20, 4.25 in [DM15a]) With probability at least $1 - o(1)$, all of the following bounds hold:
(2.7)  
(2.8)  
(2.9) 
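The scale of such bounds can be illustrated with a quick experiment (a hypothetical illustration of ours, far cruder than the eigenspace-by-eigenspace bounds of [DM15a]): a symmetric matrix with i.i.d. $\pm 1$ entries above the diagonal has spectral norm close to $2\sqrt{n}$, dramatically smaller than the trivial entrywise bound of $n$.

```python
import numpy as np

def random_pm1_norm(n, seed=0):
    """Spectral norm of a symmetric random +/-1 matrix with zero diagonal."""
    rng = np.random.default_rng(seed)
    W = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    W = W + W.T
    return float(np.abs(np.linalg.eigvalsh(W)).max())
```

The difficulty in the actual proof is not this single-matrix bound but controlling the noise separately on each eigenspace of $\mathbb{E}[M]$, since a crude uniform bound would be swamped by the small eigenvalues.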
\prettyrefprop:eigenspaces and \prettyreflem:qkbounds are sufficient to conclude that $M \succeq 0$ for parameter choices that correspond to a planted clique of size up to $\tilde{O}(n^{1/3})$. More precisely, to argue that with high probability $M \succeq 0$, it is sufficient to argue that the eigenvalues of $\mathbb{E}[M]$ on each eigenspace dominate the norm of the noise on that eigenspace, i.e.,
Deshpande and Montanari fix the parameters in terms of the intended clique size $k$. Using \prettyrefprop:eigenspaces and \prettyreflem:qkbounds, the above matrix inequality becomes,
(2.10) 
which can be shown to hold for $k = \tilde{O}(n^{1/3})$. Eventually, it is necessary to show (2.3), which is a stronger statement. This is again achieved by showing bounds on the spectra of the relevant submatrices. We refer the reader to [DM15a] for more details of the arguments.
2.3 Problematic Subspace
The SDP solution described above ceases to be PSD at the parameter setting which corresponds to an objective value of $\tilde{\Theta}(n^{1/3})$. The specific obstruction to PSDness arises out of (2.10). More precisely, the bottom principal minor yields a constraint
forcing $k \lesssim n^{1/3}$ up to logarithmic factors. The problematic vectors $x$ for which $x^{\top} M x < 0$ are precisely those that align with a particular subspace.
In fact, we identify a specific subspace that is problematic for the [DM15a] solution. To describe the subspace, let us fix some notation. Define the random variable $G_{ij}$ to be $1$ if $(i,j) \in G$, and $-1$ otherwise. We follow the convention that $G_{ii} = 0$.
Lemma 2.5.
Let the vectors $\{z_i\}_{i \in [n]}$ be defined so that $z_i(\{j,k\}) = G_{ij} G_{ik}$ for $j, k \neq i$, and let $Z = \mathrm{span}\{z_i\}$. Then with probability at least $1 - o(1)$,
Proof.
This is an immediate observation from the various matrix norm bounds in [DM15a] (specifically \prettyreflem:tildes, \prettyreflem:wigner and \prettyrefobs:nullspace). We defer the detailed proof to \prettyrefapp:misc. ∎
The above lemma implies that all the vectors with large singular values lie within the subspace identified in the lemma. Furthermore, we will show the following lemma, which clearly articulates that this subspace is the sole obstruction to PSDness.
Lemma 2.6.
Suppose satisfies
(2.11)  
(2.12)  
(2.13) 
then with probability $1 - o(1)$,
Proof.
Fix . Recall that . We can write the matrix
where
and
and .
It is sufficient to show that each of the two terms is PSD. Using \prettyrefprop:eigenspaces and (2.9), the first term is PSD when condition \prettyrefeq:cond1 holds. Using \prettyrefprop:eigenspaces, \prettyreflem:qkbounds and \prettyreflem:probsubspace we can write,
which is PSD given the bounds in conditions \prettyrefeq:cond2 and \prettyrefeq:cond3. To see this, one shows that all the principal minors are PSD.
On the other hand, for any , we can write
Now we will appeal to the fact that a quadratic $at^2 + bt + c$ is nonnegative for all $t \in \mathbb{R}$ if $a \geq 0$ and $b^2 \leq 4ac$. Since this discriminant condition is satisfied by condition \prettyrefeq:cond2, it is easily seen that the above quadratic form is always nonnegative, implying the claim. ∎
An immediate corollary of the proof of the above lemma is the following.
Corollary 2.7.
Under the hypothesis of \prettyreflem:calcs, with probability $1 - o(1)$,
The above corollary is a consequence of the fact that .
2.4 The Corrected Witness
Suppose we have an unconstrained matrix $M$ that we wish to modify as little as possible so as to ensure $M \succeq 0$. Given a test vector $v$ so that $v^{\top} M v < 0$, the natural update to make is to take $M + \lambda v v^{\top}$ for a suitably chosen $\lambda > 0$. This would suggest creating a new SDP solution by adding such a rank-one term along each problematic vector.
Unfortunately, the SOS SDP relaxation has certain hard constraints, namely that the non-clique entries are fixed at zero. Moreover, the entry $\mathcal{M}(S,T)$ must depend only on $S \cup T$. Adding an arbitrary rank-one term to the SDP solution matrix would almost certainly violate both these constraints. It is thus natural to consider multiplicative updates to the entries of the matrix, which clearly preserve the zero entries of the matrix.
Specifically, the idea would be to consider an update of the form $M \mapsto DMD$, where $D$ is the diagonal matrix with entries given by the vector $\mathbf{1} + \epsilon v$. If the matrix $M$ has a significantly large eigenvalue along $v$, i.e., $M = \lambda vv^{\top} + A$ for some matrix $A$ with $\|A\| \ll \lambda$, then this multiplicative update has a similar effect as an additive update,
where the norm of the final "error" term is relatively small. Recall that, in our setting, the Deshpande–Montanari SDP solution matrix does have a large eigenvalue along the relevant direction. We now formally describe our SDP solution, first as a matrix according to the intuition given above, and then as a set of pseudo-moments.
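The additive half of this picture is easy to verify in a toy setting (hypothetical matrices of ours; the repair only works as cleanly as shown when the negative direction is exactly $v$): adding mass $\lambda vv^{\top}$ along a bad direction $v$ removes the negative eigenvalue there while leaving the orthogonal directions untouched.

```python
import numpy as np

def repair_along(M, v):
    """Add just enough mass along v to make the v-direction of M nonnegative."""
    v = v / np.linalg.norm(v)
    lam = max(0.0, -float(v @ M @ v))   # deficit of M in the direction v
    return M + lam * np.outer(v, v)
```

The multiplicative update $DMD$ used in the paper mimics this additive correction while automatically preserving the zero pattern of the matrix, at the cost of the extra error term discussed above.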
Definition 2.8 (Corrected SDP Witness, matrix view).
Let be defined so that
Define to be the diagonal matrix with on the diagonal. Define to be the restriction of to the nonmultilinear entries. Also let
where . Then our SDP witness is the matrix , defined so that
where is the projection that zeros out rows and columns corresponding to pairs .
Definition 2.9 (Corrected SDP Witness, pseudomoments view).
Let , and let be a set of parameters, to be fixed later. For a subset , let be the graph induced on by . For any subset of at most vertices , , we define
where is some factor chosen for each depending on the choice of , which we will set later to ensure that the final moments matrix is PSD.
Proposition 2.10.
For the parameter settings described above, with probability at least $1 - o(1)$, the solution does not violate any of the linear constraints of the planted clique SDP.
Proof.
First, $x_S = 0$ whenever $S$ is not a clique in $G$, so these entries satisfy the constraints of the SDP. If $S$ is a clique, then $x_S$ is given by,
Notice that $x_S$ is nonzero only if $S$ is a clique, and it depends only on the subgraph induced by $S$. Moreover, the correction term is a sum over i.i.d. mean-zero random variables and therefore satisfies,
A simple union bound over all subsets shows that the bound holds for all of them with probability at least $1 - o(1)$. ∎
It now remains to verify that . We will do this by verifying the Schur complement conditions, as in [DM15a]. Analogous to the submatrix , one can consider the corresponding submatrix of . The expression for is as follows:
Here $D$ is the diagonal matrix with the correction terms on the diagonal, the second term is the matrix corresponding to the non-multilinear entries (entries corresponding to monomials with a repeated index), and the last is the all-ones matrix. The remaining blocks are unchanged, and so we must simply verify the two Schur complement conditions.
This concludes our proof overview. In \prettyrefsec:mainproof, we verify the Schur complement conditions and prove our main result, and in \prettyrefsec:matrixconc we give the random matrix concentration results upon which we rely throughout the proof.
3 Proof of the Main Result
In this section, we will demonstrate that the two Schur complement conditions hold. This will allow us to conclude that our solution matrix is PSD, and therefore is a feasible point for the degree-4 SOS relaxation.
Parameters
Before we proceed further, it will be convenient to parametrize the choice of and . In particular, it will be useful to fix,
(3.1) 
for two parameters , which we will finally fix to and . For this setting of parameters, the eigenvalues from \prettyrefprop:eigenspaces are bounded by,
(3.2) 
When convenient, we will also use the shorthand , , , and .
3.1 Proving that
Here we will make a first step towards verifying the Schur complement conditions of \prettyreflem:schur by showing that . Specifically, we will show the following stronger claim.
Theorem 3.1.
For and , , the following holds with probability at least ,
Proof.
Fix . By definition of , we have
Define . We can apply \prettyreflem:calcs to the term and \prettyrefcorr:calcs for ,