
# Breaking symmetries to rescue Sum of Squares: The case of makespan scheduling

*This work has been partially funded by Project Fondecyt Nr. 1181527.*

Victor Verdugo (Institute of Engineering Sciences, Universidad de O’Higgins, victor.verdugo@uoh.cl) and José Verschae (Institute of Engineering Sciences, Universidad de O’Higgins, jose.verschae@uoh.cl)
###### Abstract

The Sum of Squares (SoS) hierarchy gives an automated technique to create a family of increasingly tight convex relaxations for binary programs. There are several problems for which a constant number of rounds of the hierarchy gives integrality gaps matching the best known approximation algorithm. In many others, however, ad-hoc techniques give significantly better approximation ratios. Notably, the lower bound instances are, in many cases, invariant under the action of a large permutation group. The main purpose of this paper is to study how the presence of symmetries in a formulation degrades the performance of the relaxation obtained by the SoS hierarchy. We do so for the special case of the minimum makespan problem on identical machines. Our first result shows that a linear number of rounds of SoS applied over the configuration linear program yields an integrality gap of at least 1.0009. This improves on the recent work by Kurpisz et al. [40] that shows an analogous result for the weaker LS and SA hierarchies. Then, we consider the weaker assignment linear program and add a well chosen set of symmetry breaking inequalities that removes a subset of the machine permutation symmetries. We show that applying the SoS hierarchy for a constant number of rounds (depending only on ε) to this linear program reduces the integrality gap to 1+ε. Our results suggest that for this classical problem the symmetries of the natural assignment linear program were the main barrier preventing the SoS hierarchy from giving relaxations with integrality gap 1+ε after a constant number of rounds. We leave as an open question whether this phenomenon occurs for different problems where the SoS hierarchy yields weak relaxations.

## 1 Introduction

Lift-and-project methods are powerful techniques for deriving convex relaxations of integer programs. The lift-and-project hierarchies, such as Sherali & Adams (SA), Lovász & Schrijver (LS), or Sum of Squares (SoS), are systematic methods for obtaining a family of increasingly tight relaxations, parameterized by the round of the hierarchy. For all of these hierarchies, r rounds on a problem with n variables yield a convex relaxation with n^{O(r)} variables in the lifted space. Taking n rounds gives an exact description of the integer hull, at the cost of having an exponential number of variables. Arguably, it is not well understood for which problems these hierarchies yield relaxations that match the best possible approximation algorithm. Indeed, there are some positive results, but many other strong negative results for algorithmically easy problems. This shows a natural limitation to the power of hierarchies as one-size-fits-all techniques. Quite remarkably, the instances used for obtaining lower bounds often have a very symmetric structure [43, 21, 56, 40, 58], which suggests a strong connection between the tightness of the relaxation given by these hierarchies and symmetries. The main purpose of this article is to study this connection for a specific relevant problem, namely, minimum makespan scheduling on identical machines.

Minimum makespan scheduling. This problem is one of the first problems considered under the lens of approximation algorithms [20], and since then it has been studied extensively. The input of the problem consists of a set J of jobs, each job j having an integral processing time p_j, and a set M of identical machines. Given an assignment σ : J → M, the load of a machine i is the total processing time of jobs assigned to i, that is, ∑_{j : σ(j)=i} p_j. The objective is to find an assignment of jobs to machines that minimizes the makespan, that is, the maximum load. The problem is strongly NP-hard and admits several polynomial-time approximation schemes (PTASs) based on different techniques, such as dynamic programming, integer programming in fixed dimension, and integer programming under a constant number of constraints [26, 1, 2, 25, 29, 30, 15].

Integrality gaps. The minimum makespan problem has two natural linear relaxations which have been extensively studied in the literature. The assignment linear program uses binary variables x_{ij} which indicate whether job j is assigned to machine i. It is easy to see that its integrality gap is 2. The stronger configuration linear program uses an exponential number of variables y_{iC} which indicate whether the multiset of processing times of the jobs assigned to machine i equals C. Kurpisz et al. [40] showed that the configuration linear program has an integrality gap of at least 1.0009 even after a linear number of rounds of the LS or SA hierarchies. Hence, the same lower bound holds when the ground formulation is the assignment linear program. On the other hand, Kurpisz et al. [40] leave open whether the SoS hierarchy applied to the configuration linear program has a smaller integrality gap after linearly many rounds. Our first main contribution is a negative answer to this question.

###### Theorem 1.

Consider the problem of scheduling identical machines to minimize the makespan. For each n there exists an instance with n jobs such that, after applying Ω(n) rounds of the SoS hierarchy over the configuration linear program, the obtained semidefinite relaxation has an integrality gap of at least 1.0009.

Naturally, since the configuration linear program is stronger than the assignment linear program, our result also holds if we apply Ω(n) rounds of SoS over the assignment linear program. The proof of the lower bound relies on tools from the representation theory of symmetric groups over polynomial rings and is inspired by the recent work of Raymond et al. on symmetric sums of squares over hypercubes [59]. The lower bound is obtained by constructing high-degree pseudoexpectations on one hand, and by obtaining symmetry-reduced decompositions of the polynomial ideal defined by the configuration linear program on the other. The machinery from representation theory allows us to restrict attention to invariant polynomials, and we combine this with a strong pseudoindependence result for a well chosen polynomial spanning set. Our analysis is also connected to the work of Razborov on flag algebras and graph densities, and we believe it can be of independent interest for analyzing lower bounds in the context of SoS in the presence of symmetries [60, 61, 58].

Symmetries and Hierarchies. It is natural to explore whether symmetry handling techniques might help overcome the limitation given by Theorem 1. A natural source of symmetry of the problem comes from the fact that the machines are identical: given a schedule, we obtain another with the same makespan by permuting the assignment over the machines. The same symmetries are encountered in the assignment and configuration linear programs, namely, if σ is a permutation of the machines and x is a feasible solution to the assignment linear program, then the solution obtained by permuting the machine indices of x according to σ is also feasible. In other words, the assignment linear program is invariant under the action of the symmetric group on the set of machines. The question we study is the following: is it possible to obtain a polynomial size linear or semidefinite program with an integrality gap of at most 1+ε that is not invariant under the machine symmetries? That is, our goal is to understand if the group action is deteriorating the quality of the relaxations obtained from the SoS hierarchy. This time, we provide a positive answer.

###### Theorem 2.

Consider the problem of scheduling identical machines to minimize the makespan. After adding a set of linearly many symmetry breaking inequalities to the assignment linear program, a constant number of rounds (depending only on ε) of the SoS hierarchy yields a convex relaxation with an integrality gap of at most 1+ε, for any ε > 0.

The theorem is based on introducing a formulation that breaks the symmetries in the invariant assignment program by adding new constraints. This enforces that any integer feasible solution of the formulation must respect a lexicographic order over the machine configurations. On top of the linear program obtained from adding the aforementioned constraints, we apply the SoS hierarchy. Using the decomposition theorem [33], we can construct a solution that is integral on a well chosen subset of machines. Our symmetry breaking inequalities imply that between two consecutive machines in this subset, our solution assigns approximately the same configurations, and thus we can construct an approximately optimal solution.

### 1.1 Related work

Upper bounds. The first application of semidefinite programming in the context of approximation algorithms was the work of Goemans & Williamson for Max-Cut [19]. There are not many positive results in this line for other combinatorial optimization problems, but of particular interest to our work is the SoS based approximation scheme by Karlin et al. for the Max-Knapsack problem [33]. They use a structural decomposition result satisfied by the SoS hierarchy, which distinguishes it from other classic hierarchies. Recently, for a constant number of machines, Levey and Rothvoss designed an approximation scheme with a sub-exponential number of rounds in the weaker SA hierarchy [45]. The SoS method has also received a lot of attention in the design of algorithms for high-dimensional problems. Among them we find matrix and tensor completion [8, 57], tensor decomposition [50] and clustering [36]. See the recent survey of Raghavendra et al. for high-dimensional estimation using the SoS method [58]. In the context of hierarchies we refer to Laurent [42] for a detailed comparison between SoS and other hierarchies. For applications in approximation algorithms we refer to the survey of Rothvoss [63].

Lower bounds. The first lower bound in the context of Positivstellensatz certificates was obtained by Grigoriev [21], showing the necessity of a linear number of SoS rounds to refute an easy Knapsack instance. A similar result was obtained by Laurent [43] on the number of rounds needed to certify the infeasibility of certain Max-Cut instances, and more recently by the work of Kurpisz et al. on unconstrained polynomial optimization [37]. The same authors show that for a certain polynomial-time single machine scheduling problem, the SoS hierarchy exhibits an unbounded integrality gap even in the high-degree regime [37, 39]. Remarkable are the works of Grigoriev [22] and Schoenebeck [65] exhibiting the difficulty for SoS of certifying the unsatisfiability of a family of random 3-SAT instances in subexponential time, and recently there have been efforts on unifying frameworks to show lower bounds for random CSPs [6, 35, 34]. For estimation and detection problems, lower bounds have been shown for the clique problem, densest subgraph and tensor PCA, among others [27, 7].

Invariant Sum of Squares. Remarkable in this line is the work of Gatermann & Parrilo, who studied how to obtain reduced sums of squares certificates of non-negativity when the polynomial is invariant under the action of a group, using tools from representation theory [16]. Recently, Raymond et al. built on the Gatermann & Parrilo method to construct symmetry-reduced sum of squares certificates for polynomials over k-subset hypercubes [59]. Furthermore, the authors make an interesting connection with the Razborov method and flag algebras [60, 61]. Blekherman et al. provided degree bounds on rational representations for certificates over the hypercube, recovering as a corollary known lower bounds for combinatorial optimization problems like Max-Cut [10, 44]. Other applications of symmetries in semidefinite and linear programming can be found in combinatorial optimization [62, 5, 24, 38, 14], graph theory [23, 13, 12, 56] and coding theory [17, 4, 48].

Symmetry Handling in Integer Programming. The integer programming community has dealt with symmetries by either breaking them [32, 46, 28], or devising symmetry-aware exact algorithms such as isomorphism pruning [51], orbital branching [54] and orbitopal fixing [31]. The work by Ostrowski [53] combines hierarchies and symmetry handling, but with a fundamentally different approach from ours. The author uses the SA hierarchy and reduces the dimension of the lifted relaxation to obtain a faster algorithm. It is worth noticing that such an approach does not help reduce the exponential dependency on the number of rounds, nor does it diminish the integrality gap. For an extensive treatment we refer to the surveys by Margot [52] and Liberti [47].

## 2 Preliminaries: Sum of Squares (SoS) and Pseudoexpectations

In what follows we denote by R[x] the ring of polynomials in the variables x = (x_e)_{e∈E} with real coefficients. Binary integer programming belongs to a larger class of problems in polynomial optimization, where the constraints are defined by polynomials in these indeterminates. More specifically, consider the set

 K = {x ∈ R^E : g_i(x) ≥ 0 for all i ∈ M, h_j(x) = 0 for all j ∈ J, x_e² − x_e = 0 for all e ∈ E}, (1)

where g_i ∈ R[x] for all i ∈ M and h_j ∈ R[x] for all j ∈ J. In particular, for binary integer programming the equality and inequality constraints are affine functions.

Ideals, quotients and square-free polynomials. We denote by I_E the ideal of polynomials in R[x] generated by {x_e² − x_e : e ∈ E}, and let R[x]/I_E be the quotient ring of polynomials modulo the ideal I_E. That is, f and g are in the same equivalence class of the quotient ring if f − g ∈ I_E, which we denote f ≡ g mod I_E. Alternatively, f ≡ g mod I_E if and only if the polynomials evaluate to the same values on the vertices of the hypercube, that is, f(x) = g(x) for all x ∈ {0,1}^E.

###### Example 1.

The polynomial 3x₁²x₂ + 7x₁x₂² − 10x₁x₂ is in the ideal I_{1,2} generated by the polynomials x₁² − x₁ and x₂² − x₂, since

 3x₁²x₂ + 7x₁x₂² − 10x₁x₂ = 3x₂(x₁² − x₁) + 7x₁(x₂² − x₂),

and therefore, 3x₁²x₂ + 7x₁x₂² ≡ 10x₁x₂ mod I_{1,2}. Alternatively, this follows as both polynomials take the same values on {0,1}².

Observe that the equivalence classes in the quotient ring are in bijection with the square-free polynomials in R[x], that is, polynomials where no variable appears squared. In what follows we identify elements of R[x]/I_E in this way. Given S ⊆ E, we denote by x_S the square-free monomial obtained from the product of the variables indexed by the elements in S, that is, x_S = ∏_{e∈S} x_e. The degree of a polynomial f is denoted by deg(f), and we say that s is a sum of squares polynomial, for short SoS, if there exist polynomials q_1, …, q_N in the quotient ring such that s ≡ ∑_t q_t² mod I_E.
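This identification can be carried out mechanically. The following sketch (our own illustration, not from the paper) represents a polynomial as a map from exponent tuples to coefficients and reduces it modulo I_E by capping every exponent at 1; applied to Example 1, both polynomials reduce to the same square-free representative.

```python
from collections import defaultdict

def squarefree_reduce(terms):
    """Reduce a polynomial modulo I_E = <x_e^2 - x_e : e in E>.
    `terms` maps exponent tuples to coefficients; modulo I_E every
    positive exponent collapses to 1, so each equivalence class has a
    unique square-free representative."""
    reduced = defaultdict(int)
    for exponents, coeff in terms.items():
        squarefree = tuple(min(e, 1) for e in exponents)
        reduced[squarefree] += coeff
    return {mon: c for mon, c in reduced.items() if c != 0}

# Example 1: 3*x1^2*x2 + 7*x1*x2^2 and 10*x1*x2 lie in the same class.
f = {(2, 1): 3, (1, 2): 7}   # 3*x1^2*x2 + 7*x1*x2^2
g = {(1, 1): 10}             # 10*x1*x2
print(squarefree_reduce(f))  # {(1, 1): 10}, the representative of g
```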

Certificates and SoS method. The question of certifying the infeasibility of (1) is hard in general, but sometimes it is possible to find simple certificates of infeasibility. We say that there exists a degree-d SoS certificate of infeasibility for K if there exist SoS polynomials s₀ and s_i for i ∈ M, and polynomials r_j for j ∈ J, all of them in the quotient ring, such that

 −1 ≡ s₀ + ∑_{i∈M} s_i g_i + ∑_{j∈J} r_j h_j mod I_E, (2)

and the degree of every polynomial on the right hand side is at most d. Observe that if K is feasible, then the right hand side is guaranteed to be non-negative for at least one assignment of x in {0,1}^E, which contradicts the equality above. In the case of binary integer programming, if K is infeasible there exists a degree-d SoS certificate, where d can be bounded in terms of |E| [42, 55].

###### Example 2.

Given ε > 0, consider the program defined by x₁x₂ − ε ≥ 0 and x₁ + x₂ − 1 = 0, with E = {1,2}. This program is infeasible since the equality constraint forces that exactly one of the binary variables is one and the other is zero, that is, their product is null. Let s₀ = 0, s₁ = 1/ε and r = −(x₁ + x₂)/(2ε). We check that they provide a degree-2 SoS certificate of infeasibility,

 (1/ε)(x₁x₂ − ε) − (1/(2ε))(x₁ + x₂)(x₁ + x₂ − 1) ≡ (1/ε)(x₁x₂ − ε) − (1/(2ε))·2x₁x₂ ≡ −1 mod I_{1,2}.
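Since congruence modulo I_{1,2} is exactly agreement on {0,1}², the identity above can be verified by brute force. In the sketch below (our own check), the multipliers s₁ = 1/ε and r = −(x₁+x₂)/(2ε) are read off the displayed identity:

```python
from itertools import product

def certificate_value(x1, x2, eps):
    """Evaluate s1*g + r*h for Example 2, where g = x1*x2 - eps is the
    inequality, h = x1 + x2 - 1 is the equality, s1 = 1/eps is a constant
    (hence SoS) multiplier and r = -(x1 + x2)/(2*eps)."""
    g = x1 * x2 - eps
    h = x1 + x2 - 1
    return (1 / eps) * g - (x1 + x2) / (2 * eps) * h

# Agreement with -1 on all of {0,1}^2 is congruence to -1 modulo I_{1,2}.
for x1, x2 in product((0, 1), repeat=2):
    assert abs(certificate_value(x1, x2, 0.25) + 1) < 1e-9
print("certificate identity verified on {0,1}^2")
```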

The SoS algorithm iteratively checks the existence of a SoS certificate, parameterized by the degree, and each step of the algorithm is called a round. Since there is an upper bound on the certificate degree, the method is guaranteed to finish [55, 9]. Furthermore, the existence of a degree-d SoS certificate can be decided by solving a semidefinite program, in time |E|^{O(d)}. This approach can be seen as the dual of the hierarchy proposed by Lasserre, which has been studied extensively in the optimization and algorithms community [41, 42, 63, 11].

Pseudoexpectations. To determine the existence of a SoS certificate one solves a semidefinite program, and the solutions of these programs determine the coefficients of elements in the dual space of linear operators. We say that a linear functional ˜E over the quotient ring is a degree-d SoS pseudoexpectation for (1) if it satisfies the following properties:

1. ˜E(1) = 1,

2. ˜E(q²) ≥ 0 for all q with deg(q²) ≤ d,

3. ˜E(q² g_i) ≥ 0 for all i ∈ M, for all q with deg(q² g_i) ≤ d,

4. ˜E(q h_j) = 0 for all j ∈ J, for all q with deg(q h_j) ≤ d.
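A genuine expectation over any distribution on feasible points satisfies all four properties, which makes for a useful sanity check of the definitions. A minimal sketch (our own toy program, not an instance from the paper):

```python
from itertools import product

# Toy program: h(x) = x1 + x2 - 1 = 0 over {0,1}^2, with no inequalities.
feasible = [x for x in product((0, 1), repeat=2) if x[0] + x[1] - 1 == 0]

def E(poly):
    """Expectation of `poly` (a function of a 0/1 point) under the uniform
    distribution over feasible points; this is a pseudoexpectation of
    every degree."""
    return sum(poly(x) for x in feasible) / len(feasible)

h = lambda x: x[0] + x[1] - 1
assert E(lambda x: 1) == 1                    # property 1
assert E(lambda x: (x[0] - x[1]) ** 2) >= 0   # property 2, q = x1 - x2
assert E(lambda x: x[0] * h(x)) == 0          # property 4, q = x1
print("expectation over feasible points passes the pseudoexpectation checks")
```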

###### Lemma 1.

Suppose that K defined in (1) is infeasible. If there exists a degree-d SoS pseudoexpectation for K, then there is no degree-d SoS certificate of infeasibility.

###### Proof.

Suppose there exists a degree-d certificate of infeasibility for K, that is, SoS polynomials s₀ and s_i for i ∈ M, and polynomials r_j for j ∈ J satisfying (2), and let ˜E be a degree-d pseudoexpectation. Property (1) and linearity of the pseudoexpectation imply that ˜E(−1) = −1, and

 ˜E(s₀ + ∑_{i∈M} s_i g_i + ∑_{j∈J} r_j h_j) = ˜E(s₀) + ∑_{i∈M} ˜E(s_i g_i) + ∑_{j∈J} ˜E(r_j h_j) ≥ 0,

due to linearity and properties (2)-(4) of the pseudoexpectation. This yields a contradiction. ∎

The minimum value of d for which there exists a SoS certificate of infeasibility tells how hard the program (1) is for the SoS method. Lemma 1 provides a way of finding lower bounds on this minimum certificate degree, and we use it later for studying this number in the context of scheduling, in Section 3. The higher the degree of a pseudoexpectation, the higher is the minimum degree of a certificate of infeasibility. There are many examples of problems that are extremely easy to certify for humans, but not for the SoS method. For example, given a positive integer k, consider the program ∑_{e∈E} x_e = k + 1/2 over binary variables. This problem is clearly infeasible, but there is no low-degree SoS certificate of infeasibility, as shown originally by Grigoriev and recently by others using different approaches [21, 56]. In the following, we say low-degree when the degree of a certificate or pseudoexpectation is small compared to the number of variables.

Sherali & Adams certificates. There is a weaker notion of certificates obtained using linear programming due to Sherali & Adams (SA) [66]. In the case of equality constrained programs they correspond to finding linear operators satisfying properties (1) and (4), and we say they are degree-d SA pseudoexpectations.

## 3 Lower bound: Symmetries are hard for SoS

In this section we show that the SoS method fails to provide a low-degree certificate of infeasibility for a certain family of scheduling instances. The program we analyze in this section is known as the configuration linear program, which has proven to be powerful for different scheduling and packing problems [67, 18].

### 3.1 Configuration Linear Program

Given a value T > 0, a configuration corresponds to a multiset C of processing times such that its total sum does not exceed T. The multiplicity m(p, C) indicates the number of times that the processing time p appears in the multiset C. The load of a configuration is just its total processing time, ∑_p p · m(p, C). Given T, let C denote the set of all configurations with load at most T.
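Configurations can be enumerated by a simple recursion over the distinct processing times. A sketch (our own illustration), encoding a configuration C by its multiplicity vector (m(p, C))_p:

```python
def configurations(sizes, T):
    """Enumerate all configurations with load at most T, encoded as tuples
    of multiplicities over the distinct processing times in `sizes`."""
    if not sizes:
        return [()]
    p, rest = sizes[0], sizes[1:]
    configs = []
    for mult in range(T // p + 1):                    # copies of size p
        for tail in configurations(rest, T - mult * p):
            configs.append((mult,) + tail)
    return configs

# Processing times {2, 3} and load bound T = 5: five configurations,
# e.g. (1, 1) encodes one job of size 2 plus one job of size 3 (load 5).
print(configurations([2, 3], 5))
```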

For each combination of a machine i and a configuration C ∈ C, the program has a variable y_{iC} that models whether machine i is scheduled with jobs with processing times according to configuration C. Letting n_p denote the number of jobs in J with processing time p, we can write the following binary linear program:

 ∑_{C∈C} y_{iC} = 1 for all i ∈ M,
 ∑_{i∈M} ∑_{C∈C} m(p,C) y_{iC} = n_p for all p ∈ {p_j : j ∈ J},
 y_{iC} ∈ {0,1} for all i ∈ M, for all C ∈ C.

Hard instances. We briefly describe the construction of a family of hard instances for the configuration linear program in [40]. For each odd k, the instance has 3k machines and 15 different job sizes, each appearing with multiplicity proportional to k. There exists a set of special configurations, called matching configurations, such that the program above is feasible if and only if the program restricted to the matching configurations is feasible. The infeasibility of the latter program comes from the fact that there is no 1-factorization of a regular multigraph version of the Petersen graph [40, Lemma 2].

###### Theorem 3 ([40]).

For each odd k, there exists an SA pseudoexpectation of degree linear in k for the configuration linear program. In particular, there is no low-degree SA certificate of infeasibility.

### 3.2 A symmetry-reduced decomposition of the scheduling ideal

In what follows, we consider the set of machines [m] = {1, …, m}. The variable ground set for the configuration linear program is [m] × C, and the symmetric group S_m acts over the monomials according to σ·y_{iC} = y_{σ(i)C}, for every σ ∈ S_m. The action extends linearly to the whole ring, and the configuration linear program is invariant under this action, that is, for every σ ∈ S_m and every feasible solution y, the permuted solution σy is also feasible. We say that a polynomial f is S_m-invariant if σf = f for every σ ∈ S_m. In particular, if f is invariant we have f = (1/m!) ∑_{σ∈S_m} σf, which is the symmetrization or Reynolds operator of the group action applied to f.

We say that a linear function ˜E over the quotient ring is S_m-symmetric if for every polynomial f and every σ ∈ S_m we have ˜E(σf) = ˜E(f). For the rest of this section we restrict attention to programs defined by equality constraints, as is the case for the configuration linear program.

###### Lemma 2.

Let ˜E be a symmetric linear operator over the quotient ring such that ˜E(s) ≥ 0 for every invariant SoS polynomial s of degree at most d. Then, ˜E(s) ≥ 0 for every SoS polynomial s with deg(s) ≤ d.

That is, when ˜E is symmetric it is enough to check the condition in the lemma above to satisfy property (2) of pseudoexpectations. Therefore, in this case we can restrict our attention to those polynomials that are invariant and SoS.

###### Proof of Lemma 2.

Since the operator is symmetric, for every SoS polynomial s in the quotient ring with deg(s) ≤ d we have ˜E(s) = ˜E(sym(s)), where sym(s) = (1/m!) ∑_{σ∈S_m} σs. The polynomial sym(s) is invariant, and it is SoS since each σs is a sum of squares, and hence so is their average. Since deg(sym(s)) ≤ d, we have ˜E(sym(s)) ≥ 0 and we conclude that ˜E(s) ≥ 0. ∎

In the following we focus on understanding polynomials that are invariant and SoS. To analyze the action of the symmetric group over the ring we introduce some tools from representation theory [64] to characterize the invariant S_m-modules of the polynomial ring. We keep the exposition minimal for our purposes and we follow in part the notation used by Raymond et al. [59]. We refer to [64] for a deeper treatment of the representation theory of symmetric groups.

Isotypic decompositions. We say that an S_m-module is irreducible if its only invariant subspaces are the trivial ones. Any S_m-module V can be decomposed into irreducible modules, and the decomposition is indexed by the partitions of m. A partition of m is a vector λ = (λ₁, …, λ_t) such that λ₁ ≥ ⋯ ≥ λ_t and λ₁ + ⋯ + λ_t = m. We write λ ⊢ m when λ is a partition of m. Then, V can be decomposed as

 V=⨁λ⊢mVλ, (3)

that is, a direct sum where each V_λ is an S_m-submodule associated with the partition λ [64]. Each of the subspaces in the direct sum is called an isotypic component. A tableau of shape λ is a bijective filling between [m] and the cells of a grid with t rows, where row r has length λ_r. In this case, the shape or Young diagram of the tableau is λ. For a tableau τ_λ of shape λ, we denote by row_r(τ_λ) the subset of [m] that fills row r in the tableau.

###### Example 3.

Fix m and a partition λ ⊢ m, and consider two tableaux of shape λ (figure omitted). Both have the same Young diagram, but they fill the rows with different elements of [m], and hence the sets row_r(τ_λ) differ.

The row group R_{τ_λ} is the subgroup of S_m that stabilizes the rows of the tableau τ_λ, that is,

 Rτλ={σ∈Sm:σ⋅rowr(τλ)=rowr(τλ) for every r∈[t]}. (4)
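For small m, the row group of definition (4) can be computed by brute force. A sketch (our own illustration, with machines indexed from 0):

```python
from itertools import permutations

def row_group(rows):
    """Return the permutations sigma of [m] (as tuples, sigma[i] being the
    image of i) that map every row of the tableau onto itself."""
    m = sum(len(row) for row in rows)
    return [sigma for sigma in permutations(range(m))
            if all({sigma[i] for i in row} == set(row) for row in rows)]

# Tableau of shape (2, 1) on {0, 1, 2} with rows {0, 2} and {1}:
G = row_group([{0, 2}, {1}])
print(len(G))  # 2!*1! = 2 permutations
```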

Invariant SoS polynomials. We go back now to the case of the configuration linear program. Let Q_ℓ be the quotient ring restricted to polynomials of degree at most ℓ and let Q_ℓ = ⨁_{λ⊢m} Q_ℓ^λ be its isotypic decomposition. Given a tableau τ_λ of shape λ, let W_{τ_λ} be the row subspace of fixed points in Q_ℓ^λ for the row group R_{τ_λ}, that is,

 Wτλ={q∈Qℓλ:σq=q for all σ∈Rτλ} (5)

It can be shown that for any tableau of shape λ the dimension of W_{τ_λ} is the same [59, Lemma A.10]. The following result follows from the work of Gatermann & Parrilo in the context of symmetry reduction for invariant semidefinite programs [16]. They use it to show that an invariant semidefinite program can be decomposed into many programs of smaller dimension, one per isotypic module. In what follows, ⟨A, B⟩ = Tr(A^⊤B) is the inner product in the space of square matrices defined by the trace. Given a partition, we also consider the subset of partitions of m that are lexicographically larger than it.

###### Theorem 4.

Suppose that p is a degree-d SoS and S_m-invariant polynomial. For each partition λ, let τ_λ be a tableau of shape λ and let P_λ be a set of polynomials such that span(P_λ) = W_{τ_λ}. Then, for each partition λ there exists a positive semidefinite matrix Q_λ such that p ≡ ∑_{λ⊢m} ⟨Q_λ, sym(P_λ P_λ^⊤)⟩, where (P_λ P_λ^⊤)_{uv} = p_u p_v for p_u, p_v ∈ P_λ.

###### Remark 1.

The theorem above is based on the recent work of Raymond et al. [59, p. 324, Theorem 3]. The key facts are that the number of partitions needed in the decomposition is reduced to a number that does not depend on m, and that it is enough to have a spanning set for each isotypic module, which is a relaxation from the original result of Gatermann & Parrilo that required a basis [16]. In our case the symmetric group acts differently than in Raymond et al., but the proof follows the same lines. We include a proof of our version in the Appendix for completeness.

Together with Lemma 2, it is then enough to study pseudoexpectations for each of the partitions separately. In particular, Theorem 4 gives us flexibility in the spanning set that we use for describing the row subspaces. We remark that for each partition we can take any tableau with that shape, and consider a spanning set for its corresponding row subspace. In the following, for a matrix B with entries in the quotient ring, let ˜E(B) be the matrix obtained by applying ˜E to each entry of B.

###### Lemma 3.

Suppose that for each partition λ, the spanning set P_λ of W_{τ_λ} is such that ˜E(P_λ P_λ^⊤) is positive semidefinite. Then, for each SoS polynomial s with deg(s) ≤ d we have ˜E(s) ≥ 0.

###### Proof.

By Lemma 2 it is enough to prove the claim for s invariant and degree-d SoS. By Theorem 4, for each λ there exists a positive semidefinite matrix Q_λ such that s ≡ ∑_{λ⊢m} ⟨Q_λ, sym(P_λ P_λ^⊤)⟩. Therefore, ˜E(s) = ∑_{λ⊢m} ⟨Q_λ, ˜E(P_λ P_λ^⊤)⟩ ≥ 0, since both Q_λ and ˜E(P_λ P_λ^⊤) are positive semidefinite for each partition λ, and the trace inner product of two positive semidefinite matrices is non-negative. ∎

### 3.3 Spanning sets of the scheduling ideal

In this section we show how to construct the spanning sets of the row subspaces in order to apply Lemma 3, which together with a particular linear operator provides the existence of a high-degree SoS pseudoexpectation. The structure of the configuration linear program allows us to replace the canonical spanning set of monomials by one that is combinatorially interpretable and adapted to our purposes.

Partial schedules. We say that S ⊆ [m] × C is a partial schedule if for every machine i we have deg_S(i) ≤ 1, where deg_S(i) is the vertex degree of i in the (directed) bipartite graph with vertex partition [m] and C, and edge set S. We denote by M(S) the set of machines incident to a partial schedule S, that is, M(S) = {i ∈ [m] : deg_S(i) = 1}. For convenience, we say that S is a partial schedule over N ⊆ [m] if M(S) ⊆ N. Sometimes it is convenient to see a partial schedule as a function from M(S) to C, so we also say that S is a partial schedule with domain M(S).

###### Example 4.

Suppose that m = 3 and C = {C₁, C₂}. The set S = {(1, C₁), (2, C₁)} is a partial schedule. The machine 3 is not incident to S. In this case |M(S)| = 2, since there are two machines, 1 and 2, incident to S. The domain of S is {1, 2}. The set {(1, C₁), (1, C₂)} is not a partial schedule since machine 1 has degree 2.
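The degree condition that defines partial schedules is immediate to check. A small sketch (our own illustration; configurations are just labels here):

```python
from collections import Counter

def is_partial_schedule(S):
    """S is a set of (machine, configuration) pairs; it is a partial
    schedule iff every machine has degree at most 1 in the bipartite
    graph with edge set S."""
    degrees = Counter(machine for machine, _ in S)
    return all(d <= 1 for d in degrees.values())

def machines(S):
    """M(S): the set of machines incident to S."""
    return {machine for machine, _ in S}

print(is_partial_schedule({(1, 'C1'), (2, 'C1')}))  # True, M(S) = {1, 2}
print(is_partial_schedule({(1, 'C1'), (1, 'C2')}))  # False, machine 1 twice
```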

Scheduling ideal. Let sched be the ideal of polynomials in the ring generated by

 {∑C∈CyiC−1:i∈[m]}∪{y2iC−yiC:i∈[m],C∈C}, (6)

Recall that the set of polynomials above enforces the machines in a scheduling solution to be assigned exactly one configuration. In the following lemmas we show that this set of constraints induces a nice structure for constructing spanning sets in the quotient ring.

###### Lemma 4.

If S is not a partial schedule, then y_S ≡ 0 mod sched.

###### Proof.

Since S is not a partial schedule, there exists a machine i and two different configurations C₁, C₂ such that (i, C₁), (i, C₂) ∈ S. Then it is enough to prove that for every pair of different configurations we have y_{iC₁}y_{iC₂} ≡ 0 mod sched. To that end, fix the configuration C₁, and we have

 ∑_{C∈C, C≠C₁} y_{iC₁} y_{iC} ≡ ∑_{C∈C, C≠C₁} y_{iC₁} y_{iC} + y²_{iC₁} − y_{iC₁} ≡ y_{iC₁}(∑_{C∈C} y_{iC} − 1) ≡ 0 mod sched,

and then we conclude the claim. In particular, y_S ≡ 0 mod sched. ∎

###### Lemma 5.

Let S be a partial schedule of cardinality at most ℓ. Then,

 y_S ∈ span({y_{S'} : |S'| = ℓ and S' is a partial schedule}).
###### Proof.

Assume that |S| < ℓ, since otherwise we are done. Let H ⊆ [m] ∖ M(S) with |H| = ℓ − |S|, that is, H is a subset of machines that is not incident to the edges in the bipartite graph defined by S. Observe that since S is a partial schedule, it is incident to exactly |S| machines. Since ∑_{C∈C} y_{hC} ≡ 1 mod sched for every machine h, we have

 yS≡yS∏h∈H∑C∈CyhC≡∑L∈CHyS∪Lmodsched,

where C^H is the set of partial schedules with domain H. In particular, for every L ∈ C^H we have that S ∪ L is a partial schedule, and |S ∪ L| = ℓ. ∎

Let Q_ℓ be the quotient ring of polynomials of degree at most ℓ modulo the ideal sched. Lemmas 4 and 5 above directly imply the following theorem.

###### Theorem 5.

The quotient ring Q_ℓ is spanned by {y_S : |S| = ℓ and S is a partial schedule}.

### 3.4 Spanning sets of the invariant row subspace

In the previous section we provided a reduced spanning set for the quotient ring modulo sched. In the following we construct spanning sets for the invariant row subspaces. Given a tableau τ_λ with shape λ, the hook tableau hook(τ_λ) is the tableau whose shape is a hook: its first row is equal to the first row of τ_λ, and the remaining elements of [m] fill the rest of the cells in increasing order over the rows, one element per row. That part is called the tail of the hook, and we denote by tail(τ_λ) the elements of [m] in the tail of hook(τ_λ), and by row(τ_λ) the elements in the first row of the tableau.

###### Example 5.

Consider a tableau τ_λ and its hook tableau hook(τ_λ) (figure omitted). The tableau on the left has shape λ and the tableau on the right is hook(τ_λ); both share the first row, and the elements of tail(τ_λ) occupy the one-cell rows of the hook.

The following lemma gives a spanning set for the row subspaces obtained from the hook tableau. We denote by sym_{hook(τ_λ)} the symmetrization with respect to the row group of hook(τ_λ),

 symhook(τλ)(f)=1|Rhook(τλ)|∑σ∈Rhook(τλ)σf. (7)
###### Lemma 6.

Given a tableau τ_λ, the row subspace W_{hook(τ_λ)} of the quotient ring is spanned by

 {sym_{hook(τ_λ)}(y_S) : |S| = ℓ and S is a partial schedule}. (8)
###### Proof.

The row subspace is spanned by the symmetrized monomials of degree at most ℓ [59, Lemma 2]. By Theorem 5 the monomials are spanned by the partial schedules of size equal to ℓ, so the lemma follows by linearity of the symmetrization operator. ∎

In the row subgroup R_{hook(τ_λ)}, the elements of [m] that are in the tail remain fixed, while the elements of the first row are permuted arbitrarily. In particular, R_{hook(τ_λ)} is isomorphic to the symmetric group on row(τ_λ). Therefore, any permutation in R_{hook(τ_λ)} acts over a monomial y_S by separating the bipartite graph into those vertices in tail(τ_λ) that are fixed and the rest, in row(τ_λ), that can be permuted.

Configuration profiles and extensions. Observe that the bipartite graphs corresponding to two different partial schedules are isomorphic if and only if the degree of every configuration is the same in both graphs. We say that a partial schedule S is in γ-profile, with γ a vector of non-negative integers indexed by C, if for every configuration C we have deg_S(C) = γ_C. Observe that a partial schedule in γ-profile has size ∑_{C∈C} γ_C, a quantity that we denote by ∥γ∥. We denote by supp(γ) the support of the vector γ, namely, supp(γ) = {C ∈ C : γ_C > 0}.

###### Definition 1.

Given a partial schedule T, we say that a partial schedule A over [m] ∖ M(T) is a γ-extension if A is in γ-profile. We denote by F(T, γ) the set of γ-extensions of T. In particular, every γ-extension has size ∥γ∥.

###### Example 6.

Suppose that m = 3, C = {C₁, C₂} and T = {(1, C₁)}. For the profile γ with γ_{C₁} = 1 and γ_{C₂} = 0, we have that F(T, γ) = {{(2, C₁)}, {(3, C₁)}}. For the profile γ with γ_{C₁} = γ_{C₂} = 1, we have that F(T, γ) = {{(2, C₁), (3, C₂)}, {(2, C₂), (3, C₁)}}.
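F(T, γ) can be enumerated by distributing the multiset of configurations prescribed by γ over the machines outside M(T). A brute-force sketch (our own illustration):

```python
from itertools import permutations

def gamma_extensions(T, gamma, m):
    """Enumerate F(T, gamma): the partial schedules over [m] \\ M(T) whose
    profile is `gamma` (a map from configuration to multiplicity)."""
    used = {machine for machine, _ in T}
    free = [i for i in range(1, m + 1) if i not in used]
    slots = [C for C, mult in gamma.items() for _ in range(mult)]
    return {frozenset(zip(chosen, slots))
            for chosen in permutations(free, len(slots))}

# m = 3 machines, T uses machine 1; profile asking for one C1 and one C2:
exts = gamma_extensions({(1, 'C1')}, {'C1': 1, 'C2': 1}, m=3)
print(len(exts))  # 2 ways to place C1 and C2 on machines {2, 3}
```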

Given a partial schedule T and a profile γ, let B_{T,γ} be the polynomial defined by

 BT,γ=∑A∈F(T,γ)yA, (9)

if F(T, γ) is nonempty, and B_{T,γ} = 0 otherwise. In words, the polynomial above corresponds to the sum over all those partial schedules in γ-profile that are not incident to M(T). The following theorem is the main result of this section.

###### Theorem 6.

Let λ ⊢ m and let τ_λ be a tableau of shape λ. Then, the row subspace W_{hook(τ_λ)} of Q_ℓ is spanned by

 P_λ = ⋃_{ω=0}^{ℓ} {y_T B_{T,γ} : T is a partial schedule with M(T) = tail(τ_λ) and ∥γ∥ = ω}. (10)
###### Proof of Theorem 6.

By Lemma 6 it is enough to check that the set of polynomials in (8) is spanned by those in (10). Let S be a partial schedule of size ℓ. Let tail(S, τ_λ) be the subset of S that is incident to the tail of the tableau, that is, tail(S, τ_λ) = {(i, C) ∈ S : i ∈ tail(τ_λ)}, and let row(S, τ_λ) = S ∖ tail(S, τ_λ) be the edges of the partial schedule incident to the first row of the tableau.

###### Claim 1.

For every partial schedule S we have sym_{hook(τ_λ)}(y_S) = y_{tail(S,τ_λ)} · sym_{hook(τ_λ)}(y_{row(S,τ_λ)}).

Observe that tail(S, τ_λ) is a partial schedule over tail(τ_λ). Similarly as we did in Lemma 5, the partial schedule incident to the tail can be completed to be in the span of partial schedules with domain equal to tail(τ_λ), that is,

 y_{tail(S,τ_λ)} ≡ y_{tail(S,τ_λ)} ∏_{h∈tail(τ_λ)∖tail(S,τ_λ)} ∑_{C∈C} y_{hC} ≡ ∑_{L∈C^{tail(τ_λ)∖tail(S,τ_λ)}} y_{tail(S,τ_λ)∪L} mod sched,

where C^{tail(τ_λ)∖tail(S,τ_λ)} is the set of partial schedules with domain tail(τ_λ) ∖ tail(S, τ_λ). Thus, every partial schedule in the summation above has domain tail(τ_λ). Therefore, by Claim 1 it is enough to check that there exists a constant κ such that

 sym_{row(τ_λ)}(y_{row(S,τ_λ)}) = κ · B_{tail(τ_λ),γ}

for some profile γ with ∥γ∥ = |row(S, τ_λ)|. Let γ be the profile of the partial schedule row(S, τ_λ). The equality follows since R_{hook(τ_λ)} permutes the machines of the first row arbitrarily, and σ·row(S, τ_λ) is a γ-extension for every permutation σ in the row group; each γ-extension is obtained the same number of times, which determines the constant κ. ∎

###### Proof of Claim 1.

Observe that for every permutation σ ∈ R_{hook(τ_λ)}, we have

 σy_S = ∏_{(i,C)∈S} y_{σ(i)C} = ∏_{(i,C)∈tail(S,τ_λ)} y_{σ(i)C} ∏_{(i,C)∈row(S,τ_λ)} y_{σ(i)C} = y_{tail(S,τ_λ)} · σy_{row(S,τ_λ)},

since the permutation fixes the edges in tail(S, τ_λ). Therefore, symmetrizing yields

 sym_{hook(τ_λ)}(y_S) = (1/|R_{hook(τ_λ)}|) ∑_{σ∈R_{hook(τ_λ)}} σy_S = y_{tail(S,τ_λ)} · (1/|R_{hook(τ_λ)}|) ∑_{σ∈R_{hook(τ_λ)}} σy_{row(S,τ_λ)} = y_{tail(S,τ_λ)} · sym_{hook(τ_λ)}(y_{row(S,τ_λ)}). ∎

### 3.5 High-degree SoS pseudoexpectation: Proof of Theorem 1

We now have the algebraic ingredients to study the scheduling ideal, and we next detail the SA pseudoexpectations from Theorem 3, which are the base for our lower bound. Recall that for every odd k, the hard instance has 3k machines, and the linear operators we consider are supported over partial schedules incident to a set of six so-called matching configurations, C₁, …, C₆. Consider the linear operator ˜E such that for every partial schedule S supported on the matching configurations,

 ˜E(y_S) = (1/(3k)_{|S|}) ∏_{j=1}^{6} (k/2)_{δ_S(C_j)}, (11)

where (a)_b is the lower factorial function, that is, (a)_b = a(a−1)⋯(a−b+1), and δ_S(C_j) is the number of machines that S schedules with configuration C_j. The linear operator is zero elsewhere. We state formally the main result that implies Theorem 1.
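The value that (11) assigns to a partial schedule is a ratio of products of lower factorials, and can be computed directly. A sketch (our own transcription of (11), indexing the six matching configurations by 0, …, 5):

```python
def falling(a, b):
    """Lower (falling) factorial: a * (a-1) * ... * (a-b+1)."""
    out = 1
    for t in range(b):
        out *= a - t
    return out

def pexp(S, k, num_matching=6):
    """Value of the operator in (11) on a partial schedule S supported on
    the matching configurations; delta[j] counts the machines that S
    schedules with configuration j."""
    delta = [sum(1 for _, C in S if C == j) for j in range(num_matching)]
    value = 1.0
    for d in delta:
        value *= falling(k / 2, d)
    return value / falling(3 * k, len(S))

# One machine on matching configuration 0, for k = 4:
print(pexp({(1, 0)}, k=4))  # (k/2)_1 / (3k)_1 = 2/12
```

Note that the empty schedule gets value 1, consistent with property (1) of pseudoexpectations.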

###### Theorem 7.

For every odd k, the linear operator ˜E is an SoS pseudoexpectation of degree linear in k for the configuration linear program of the hard instance.

###### Proof of Theorem 1.

For every odd k, the instance described in Section 3.1 is infeasible. By Theorem 7, the operator ˜E is an SoS pseudoexpectation of degree linear in k, which in turn implies by Lemma 1 that there is no SoS certificate of infeasibility of that degree. For an instance with n jobs, we take the greatest odd k such that the corresponding hard instance has at most n jobs. The theorem follows by considering the instance above with dummy jobs of processing time equal to zero. ∎

Theorem 3 guarantees that for every odd k, ˜E is an SA pseudoexpectation of degree linear in k. In particular, properties (1) and (4) are satisfied. Since the configuration linear program is constructed from equality constraints, it is enough to check property (2) for high enough degree. To check property (2) we require a notion of conditional pseudoexpectations.

Conditional pseudoexpectations. Given a partial schedule T, consider the operator ˜E_T such that

 ˜E_T(y_S) = (1/(3k−|T|)_{|S|}) ∏_{j=1}^{6} (k/2 − δ_T(C_j))_{δ_S(C_j)} (12)

for every partial schedule S over the machines not in M(T), and zero otherwise. Observe that if T = ∅ it corresponds to the linear operator in (11). The following lemmas about the conditional pseudoexpectation in (12) are key for proving that ˜E is a high-degree SoS pseudoexpectation. We state the lemmas and show how to conclude Theorem 1 using them. In particular, in Lemma 9 we prove a strong pseudoindependence property satisfied by the conditional pseudoexpectations and the polynomials (9) in the spanning set. We then prove the lemmas.

###### Lemma 7.

The linear operator ˜E is S_{3k}-symmetric.

###### Lemma 8.

Let T be a partial schedule. Then, the following holds:

1. If is a partial schedule and , then .

2. If are two partial schedules such that and , then

 ˜ET(yRyS)=