A structural approach to kernels for ILPs:
Treewidth and Total Unimodularity
Abstract
Kernelization is a theoretical formalization of efficient preprocessing for hard problems. Empirically, preprocessing is highly successful in practice, for example in state-of-the-art ILP solvers like CPLEX. Motivated by this, previous work studied the existence of kernelizations for ILP-related problems, e.g., for testing feasibility of $Ax \leq b$ for an integer vector $x$. In contrast to the observed success of CPLEX, however, the results were largely negative. Intuitively, practical instances have far more useful structure than the worst-case instances used to prove these lower bounds.
In the present paper, we study the effect that subsystems that have (a Gaifman graph of) bounded treewidth or that are totally unimodular have on the kernelizability of the ILP feasibility problem. We show that, on the positive side, if these subsystems have a small number of variables on which they interact with the remaining instance, then we can efficiently replace them by smaller subsystems of size polynomial in the domain without changing feasibility. Thus, if large parts of an instance consist of such subsystems, then this yields a substantial size reduction. Complementing this we prove that relaxations to the considered structures, e.g., larger boundaries of the subsystems, allow worst-case lower bounds against kernelization. Thus, these relaxed structures give rise to instance families that cannot be efficiently reduced, by any approach.
1 Introduction
The notion of kernelization from parameterized complexity is a theoretical formalization of preprocessing (i.e., data reduction) for hard combinatorial problems. Within this framework it is possible to prove worst-case upper and lower bounds for preprocessing; see, e.g., recent surveys on kernelization [17, 18]. Arguably one of the most successful examples of preprocessing in practice is the set of simplification routines within modern integer linear program (ILP) solvers like CPLEX (see also [1, 12, 19]). Since ILPs have high expressive power, already the problem of testing feasibility of an ILP is hard; there are immediate reductions from a variety of well-known hard problems. Thus, the problem also inherits many lower bounds, in particular, lower bounds against kernelization.
integer linear program feasibility – ilpf
Input: A matrix $A \in \mathbb{Z}^{m \times n}$ and a vector $b \in \mathbb{Z}^m$.
Question: Is there an integer vector $x \in \mathbb{Z}^n$ with $Ax \leq b$?
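For intuition, here is a minimal brute-force sketch of this decision problem for instances whose variables range over a small finite domain (an assumption we add for illustration; the problem itself allows arbitrary integer vectors). All names are ours.

```python
import itertools
import numpy as np

def ilpf_bruteforce(A, b, domain):
    """Check feasibility of Ax <= b by enumerating all assignments.

    A: (m x n) integer matrix, b: length-m integer vector,
    domain: iterable of allowed integer values per variable.
    Exponential in n; only meant to illustrate the decision problem.
    """
    A = np.asarray(A)
    b = np.asarray(b)
    n = A.shape[1]
    for x in itertools.product(domain, repeat=n):
        if np.all(A @ np.array(x) <= b):
            return True
    return False

# Example: x1 + x2 <= 1 and x1 + x2 >= 1 over domain {0,1}; feasible, e.g. x = (1, 0).
print(ilpf_bruteforce([[1, 1], [-1, -1]], [1, -1], domain=(0, 1)))  # True
```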
Despite this negative outlook, a formal theory of preprocessing, such as kernelization aims to be, needs to provide a more detailed view on one of the most successful practical examples of preprocessing, even if worst-case bounds will rarely match empirical results. With this premise we take a structural approach to studying kernelization for ilpf. We pursue two main structural aspects of ILPs. The first one is the treewidth of the so-called Gaifman graph underlying the constraint matrix $A$. As a second aspect we consider ILPs whose constraint matrix has large parts that are totally unimodular. Both bounded treewidth and total unimodularity of the whole system imply that feasibility (and optimization) are tractable. (Small caveat: for bounded treewidth this also requires bounded domain.) We study the effect of having subsystems that have bounded treewidth or that are totally unimodular. We determine when such subsystems allow for a substantial reduction in instance size. Our approach differs from previous work [15, 16] in that we study structural parameters related to treewidth and total unimodularity rather than considering parameters such as the dimensions $n$ and $m$ of the constraint matrix $A$ or the sparsity thereof.
Treewidth and ILPs.
The Gaifman graph $G(A)$ of a matrix $A$ is a graph with one vertex per column of $A$, i.e., one vertex per variable, such that variables that occur in a common constraint form a clique in $G(A)$ (see Section 3.1). This perspective allows us to consider the structure of an ILP by graph-theoretical means. In the context of graph problems, a frequently employed preprocessing strategy is to replace a simple (i.e., constant-treewidth) part of the graph that attaches to the remainder through a constant-size boundary by a smaller gadget that enforces the same restrictions on potential solutions. There are several meta-kernelization theorems (cf. [13]) stating that large classes of graph problems can be effectively preprocessed by repeatedly replacing such protrusions by smaller structures. It is therefore natural to consider whether large protrusions in the Gaifman graph $G(A)$, corresponding to subsystems of the ILP, can safely be replaced by smaller subsystems.
We give an explicit dynamic programming algorithm to determine which assignments to the boundary variables (see Section 3.3) of a protrusion can be extended to feasible assignments of the remaining variables in the protrusion. Then we show that, given a list of feasible assignments to the boundary of the protrusion, the corresponding subsystem of the ILP can be replaced by new constraints. If there are $t$ variables in the boundary and their domain is bounded by $d$, we find a replacement system with $O(t \cdot d^t)$ variables and constraints that can be described in $d^t \cdot (t \log d)^{O(1)}$ bits. By an information-theoretic argument we prove that equivalent replacement systems require $\Omega(d^t)$ bits to encode in the worst case. Moreover, we prove that large-domain structures are indeed an obstruction for effective kernelization: we exhibit a family of instances with a single variable of large domain (all others are binary), and with given Gaifman decompositions into protrusions and a small shared part of encoding size $s$, that admits no kernelization or compression to size polynomial in $s$.
On the positive side, we apply the replacement algorithm to protrusion decompositions of the Gaifman graph to shrink ilpf instances. When an ilpf instance can be decomposed into a small number of protrusions with small boundary domains, replacing each protrusion by a small equivalent gadget yields an equivalent instance whose overall size is bounded. The recent work of Kim et al. [13] on meta-kernelization has identified a structural graph parameter $k$ such that graphs from an appropriately chosen family with parameter value $k$ can be decomposed into $O(k)$ protrusions. If the Gaifman graph of an ilpf instance satisfies these requirements, the ilpf problem has kernels of size polynomial in $k$. Concretely, one can show that bounded-domain ilpf has polynomial kernels when the Gaifman graph excludes a fixed graph as a topological minor and the parameter is the size of a modulator of the graph to constant treewidth. We do not pursue this application further in the paper, as it follows from our reduction algorithms in a straightforward manner.
Total unimodularity.
Recall that a matrix is totally unimodular (TU) if every square submatrix has determinant $0$, $+1$, or $-1$. If $A$ is TU then feasibility of $Ax \leq b$, for any integral vector $b$, can be tested in polynomial time. (Similarly, one can efficiently optimize any linear objective $c^{T}x$ subject to $Ax \leq b$.) We say that a matrix is totally unimodular plus $k$ columns if it can be obtained from a TU matrix by changing entries in at most $k$ columns. Clearly, changing a single entry may break total unimodularity, but changing only few entries should still give a system of constraints that is much simpler than the worst case. Indeed, if, e.g., all variables are binary (domain $\{0,1\}$) then one may check feasibility by simply trying all $2^k$ assignments to the variables whose columns in $A$ were modified. The system on the remaining variables will be TU and can be tested efficiently.
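The guess-and-check idea described above can be sketched as follows; the helper below assumes SciPy's linprog is available and relies on the standard fact that a TU constraint matrix with integral right-hand side has an integral feasible point whenever its LP relaxation is feasible. Function and parameter names are ours, not from the paper.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def feasible_tu_plus_k(A, b, modified_cols):
    """Feasibility of Ax <= b when A is TU except in `modified_cols`
    and the variables of those columns are binary (domain {0,1}).

    Guess the k binary values, move their contribution to the right-hand
    side, and test the remaining TU system via its LP relaxation: for a TU
    matrix and integral right-hand side, LP feasibility implies integer
    feasibility.  Illustrative sketch only.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    mod = list(modified_cols)
    rest = [j for j in range(A.shape[1]) if j not in set(mod)]
    for guess in itertools.product((0, 1), repeat=len(mod)):
        b_res = b - A[:, mod] @ np.array(guess, dtype=float)
        res = linprog(c=np.zeros(len(rest)), A_ub=A[:, rest], b_ub=b_res,
                      bounds=[(None, None)] * len(rest), method="highs")
        if res.status == 0:   # LP feasible => integer feasible for the TU part
            return True
    return False
```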
From the perspective of kernelization it is interesting whether a small value of $k$ allows a reduction in size for $Ax \leq b$ or, in other words, whether one can efficiently find an equivalent system of size polynomial in $k$. We prove that this depends on the structure of the system on the variables with unmodified columns. If this remaining system decomposes into separate subsystems, each of which depends only on a bounded number of variables in non-TU columns, then by a similar reduction rule as for the treewidth case we get a reduced instance of size polynomial in $k$ and the domain size $d$. Complementing this we prove that in general, i.e., without this bounded dependence, there is no kernelization to size polynomial in $k$; this holds even if $k$ counts the number of entry changes needed to obtain $A$ from a TU matrix, rather than the (usually smaller) number of modified columns.
Related work.
Several lower bounds for kernelization for ilpf and other ILP-related problems follow already from lower bounds for other (less general) problems. For example, unless NP $\subseteq$ coNP/poly and the polynomial hierarchy collapses (NP $\nsubseteq$ coNP/poly is a standard assumption in computational complexity; it is stronger than P $\neq$ NP, and it is known that NP $\subseteq$ coNP/poly implies a collapse of the polynomial hierarchy), there is no efficient algorithm that reduces every instance of ilpf to an equivalent instance of size polynomial in $n$ (here $n$ refers to the number of columns of $A$, i.e., the number of variables); this follows from lower bounds for hitting set [9] or for satisfiability [8] and, thus, holds already for binary variables. The direct study of kernelization properties of ILPs was initiated in [15, 16] and focused on the influence of row- and column-sparsity of $A$ on kernelization results in terms of the dimensions $n$ and $m$ of $A$. At a high level, the outcome is that unbounded-domain variables rule out essentially all nontrivial attempts at polynomial kernelizations. In particular, ilpf admits no kernelization to size polynomial in $n + m$ when variable domains are unbounded, unless NP $\subseteq$ coNP/poly; this remains true under strict bounds on sparsity [15]. For bounded-domain variables the situation is a bit more positive: there are generalizations of positive results for hitting set and satisfiability (when sets/clauses have bounded size). One can reduce to size polynomial in the number of variables in general [16], and to size polynomial in the solution size when seeking a feasible $x$ of bounded weight for a sparse covering ILP [15].
Organization.
Section 2 contains preliminaries about parameterized complexity, graphs, and treewidth. In Section 3 we analyze the effect of treewidth on preprocessing ILPs, while we consider the effect of large totally unimodular submatrices in Section 4. In Section 5 we discuss some differences between totally unimodular and bounded-treewidth subsystems. We conclude in Section 6.
2 Preliminaries
Parameterized complexity and kernelization.
A parameterized problem is a set $Q \subseteq \Sigma^* \times \mathbb{N}$ where $\Sigma$ is any finite alphabet and $\mathbb{N}$ denotes the nonnegative integers. In an instance $(x, k) \in \Sigma^* \times \mathbb{N}$ the second component $k$ is called the parameter. A parameterized problem $Q$ is fixed-parameter tractable (FPT) if there is an algorithm that, given any instance $(x, k)$, takes time $f(k) \cdot |x|^{O(1)}$ and correctly determines whether $(x, k) \in Q$; here $f$ is any computable function. A kernelization for $Q$ is an algorithm that, given $(x, k)$, takes time polynomial in $|x| + k$ and returns an instance $(x', k')$ such that $(x, k) \in Q$ if and only if $(x', k') \in Q$ (i.e., the two instances are equivalent) and $|x'| + k' \leq h(k)$; here $h$ is any computable function, and we also call it the size of the kernel. If $h$ is polynomially bounded in $k$, then we have a polynomial kernelization. We also define (polynomial) compression; the only difference with kernelization is that the output is any instance $x'$ with respect to any fixed language $L$, i.e., we demand that $(x, k) \in Q$ if and only if $x' \in L$ and that $|x'| \leq h(k)$. A polynomial-parameter transformation from a parameterized problem $Q$ to a parameterized problem $Q'$ is a polynomial-time mapping that transforms each instance $(x, k)$ of $Q$ into an instance $(x', k')$ of $Q'$, with the guarantee that $(x, k) \in Q$ if and only if $(x', k') \in Q'$ and $k' \leq p(k)$ for some polynomial $p$.
Lower bounds for kernelization.
For one of our lower bound proofs we use the notion of a cross-composition from [7], which builds on the framework for lower bounds for kernelization by Bodlaender et al. [5] and Fortnow and Santhanam [11].
Definition 1.
An equivalence relation $\mathcal{R}$ on $\Sigma^*$ is called a polynomial equivalence relation if the following two conditions hold:

There is an algorithm that, given two strings $x, y \in \Sigma^*$, decides whether $x$ and $y$ belong to the same equivalence class in time polynomial in $|x| + |y|$.

For any finite set $S \subseteq \Sigma^*$ the equivalence relation partitions the elements of $S$ into at most $(\max_{x \in S} |x|)^{O(1)}$ equivalence classes.
Definition 2.
Let $L \subseteq \Sigma^*$ be a set and let $Q \subseteq \Sigma^* \times \mathbb{N}$ be a parameterized problem. We say that $L$ cross-composes into $Q$ if there is a polynomial equivalence relation $\mathcal{R}$ and an algorithm that, given $t$ strings $x_1, \ldots, x_t$ belonging to the same equivalence class of $\mathcal{R}$, computes an instance $(x^*, k^*) \in \Sigma^* \times \mathbb{N}$ in time polynomial in $\sum_{i=1}^{t} |x_i|$ such that:

$(x^*, k^*) \in Q$ if and only if $x_i \in L$ for some $1 \leq i \leq t$,

$k^*$ is bounded by a polynomial in $\max_{i} |x_i| + \log t$.
Theorem 1 ([7]).
If the set $L$ is NP-hard under Karp reductions and cross-composes into the parameterized problem $Q$, then there is no polynomial kernel or compression for $Q$ unless NP $\subseteq$ coNP/poly.
Graphs.
All graphs in this work are simple, undirected, and finite. For a finite set $X$ and positive integer $n$, we denote by $\binom{X}{n}$ the family of size-$n$ subsets of $X$. The set $\{1, \ldots, n\}$ is abbreviated as $[n]$. An undirected graph $G$ consists of a vertex set $V(G)$ and an edge set $E(G) \subseteq \binom{V(G)}{2}$. For a set $S \subseteq V(G)$ we use $G[S]$ to denote the subgraph of $G$ induced by $S$. We use $G - S$ as a shorthand for $G[V(G) \setminus S]$. For $v \in V(G)$ we use $N_G(v)$ to denote the open neighborhood of $v$. For $S \subseteq V(G)$ we define $N_G(S) := \left(\bigcup_{v \in S} N_G(v)\right) \setminus S$. The boundary of $S$ in $G$, denoted $\partial_G(S)$, is the set of vertices in $S$ that have a neighbor in $V(G) \setminus S$.
Treewidth and protrusion decompositions.
A tree decomposition of a graph $G$ is a pair $(T, \mathcal{X})$, where $T$ is a tree and $\mathcal{X} = \{X_t \mid t \in V(T)\}$ is a family of subsets of $V(G)$ called bags, such that (i) $\bigcup_{t \in V(T)} X_t = V(G)$, (ii) for each edge $\{u, v\} \in E(G)$ there is a node $t \in V(T)$ with $\{u, v\} \subseteq X_t$, and (iii) for each $v \in V(G)$ the nodes $\{t \mid v \in X_t\}$ induce a connected subtree of $T$. The width of the tree decomposition is $\max_{t \in V(T)} |X_t| - 1$. The treewidth of a graph $G$, denoted $\mathbf{tw}(G)$, is the minimum width over all tree decompositions of $G$. An optimal tree decomposition of an $n$-vertex graph of treewidth $w$ can be computed in time $2^{O(w^3)} \cdot n$ using Bodlaender's algorithm [3]. A 5-approximation to treewidth can be computed in time $2^{O(w)} \cdot n$ using the recent algorithm of Bodlaender et al. [6]. A vertex set $X \subseteq V(G)$ such that $\mathbf{tw}(G - X) \leq c$ is called a treewidth-$c$ modulator.
For a positive integer $r$, an $r$-protrusion in a graph $G$ is a vertex set $X \subseteq V(G)$ such that $\mathbf{tw}(G[X]) \leq r$ and $|\partial_G(X)| \leq r$. An $(\alpha, r)$-protrusion decomposition of a graph $G$ is a partition $P_0, P_1, \ldots, P_\ell$ of $V(G)$ such that (1) for every $1 \leq i \leq \ell$ we have $N_G(P_i) \subseteq P_0$, (2) $\max(\ell, |P_0|) \leq \alpha$, and (3) for every $1 \leq i \leq \ell$ the set $P_i \cup N_G(P_i)$ is an $r$-protrusion in $G$. We sometimes refer to $P_0$ as the shared part.
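The non-treewidth conditions of this definition are simple to verify; a minimal sketch (names ours) that checks conditions (1), (2), and the boundary-size part of (3), but deliberately omits the treewidth requirement:

```python
def check_protrusion_conditions(adj, P0, parts, alpha, r):
    """Partially verify an (alpha, r)-protrusion decomposition.

    adj: dict mapping each vertex to a set of its neighbours.
    P0: the shared part; parts: the remaining classes P_1..P_l.
    Checks that the classes partition V, that each N(P_i) lies in P_0,
    that max(l, |P0|) <= alpha, and that each P_i u N(P_i) has at most r
    boundary vertices.  The treewidth condition is *not* checked here.
    """
    vertices = set(adj)
    classes = [set(P0)] + [set(P) for P in parts]
    if set().union(*classes) != vertices or sum(map(len, classes)) != len(vertices):
        return False                       # not a partition of V
    if max(len(parts), len(P0)) > alpha:   # condition (2)
        return False
    for P in map(set, parts):
        nbrs = set().union(*(adj[v] for v in P)) - P
        if not nbrs <= set(P0):            # condition (1): N(P_i) within P_0
            return False
        X = P | nbrs                       # the candidate r-protrusion
        boundary = {v for v in X if adj[v] - X}
        if len(boundary) > r:              # boundary-size part of condition (3)
            return False
    return True
```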
3 ILPs of bounded treewidth
We analyze the influence of treewidth on preprocessing ilpf. In Section 3.1 we give formal definitions to capture the treewidth of an ILP, and introduce a special type of tree decomposition used to solve ILPs efficiently. In Section 3.2 we study the parameterized complexity of ilpf parameterized by treewidth. Tractability turns out to depend on the domain of the variables. An instance of ilpf has domain size $d$ if, for every variable $x_i$, there are constraints $x_i \leq u_i$ and $-x_i \leq -\ell_i$ for some integers $\ell_i \leq u_i$ with $u_i - \ell_i \leq d - 1$. (All positive results also work under more relaxed definitions of domain size $d$, e.g., any choice of $d$ integers for each variable, at the cost of technical complication.) The feasibility of bounded-treewidth, bounded-domain ILPs is used in Section 3.3 to formulate a protrusion replacement rule. It allows the number of variables in an ILP of domain size $d$ that is decomposed by a protrusion decomposition to be reduced to a quantity polynomial in the decomposition size and in $d$ (for constant protrusion width). In Section 3.4 we discuss limitations of the protrusion-replacement approach.
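As a concrete illustration of how such domain bounds appear as rows of the constraint system, a small sketch (helper name ours):

```python
def domain_rows(var_index, num_vars, lower, upper):
    """Two <=-rows expressing lower <= x_i <= upper for variable i.

    Returns (rows, rhs) to be appended to A and b; a domain of size d
    corresponds to upper - lower <= d - 1.
    """
    row_up = [0] * num_vars
    row_up[var_index] = 1          #  x_i <= upper
    row_lo = [0] * num_vars
    row_lo[var_index] = -1         # -x_i <= -lower, i.e. x_i >= lower
    return [row_up, row_lo], [upper, -lower]
```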
3.1 Tree decompositions of linear programs
Given a constraint matrix $A \in \mathbb{Z}^{m \times n}$ we define the corresponding Gaifman graph $G = G(A)$ as follows [10, Chapter 11]. We let $V(G) = \{x_1, \ldots, x_n\}$, i.e., the variables of the system $Ax \leq b$. We let $\{x_i, x_j\} \in E(G)$ if and only if there is an $r \in [m]$ with $A_{r,i} \neq 0$ and $A_{r,j} \neq 0$. Intuitively, two vertices are adjacent if the corresponding variables occur together in some constraint.
Observation 1.
For every row $r$ of $A$, the variables with nonzero coefficients in row $r$ form a clique in $G(A)$. Consequently (cf. [4]), any tree decomposition of $G(A)$ has a node whose bag contains all variables with nonzero coefficients in row $r$.
To simplify the description of our dynamic programming procedure, we will restrict the form of the tree decompositions that the algorithm is applied to. This is common practice when dealing with graphs of bounded treewidth: one works with nice tree decompositions consisting of leaf, join, forget, and introduce nodes. When using dynamic programming to solve ILPs it will be convenient to have another type of node, the constraint node, to connect the structure of the Gaifman graph to the constraints in the ILP. To this end, we define the notion of a nice Gaifman decomposition including constraint nodes.
Definition 3.
Let $A \in \mathbb{Z}^{m \times n}$. A nice Gaifman decomposition of $A$ of width $w$ is a triple $(T, \mathcal{X}, \mathcal{C})$, where $T$ is a rooted tree, $(T, \mathcal{X})$ is a width-$w$ tree decomposition of the Gaifman graph $G(A)$, and $\mathcal{C}$ assigns constraints to nodes, with:
(1) The tree $T$ has $O(n + m)$ nodes.
(2) Every row of $A$ is assigned to exactly one node of $T$. If row $r$ is mapped to node $t$ then $\mathcal{C}(t)$ is a list of pointers to the nonzero coefficients in row $r$.
(3) Every node $t$ of $T$ has one of the following types:
 leaf:
$t$ has no children and $|X_t| = 1$,
 join:
$t$ has exactly two children $t_1, t_2$ and $X_t = X_{t_1} = X_{t_2}$,
 introduce:
$t$ has exactly one child $t'$ and $X_t = X_{t'} \cup \{v\}$ for some variable $v \notin X_{t'}$,
 forget:
$t$ has exactly one child $t'$ and $X_t = X_{t'} \setminus \{v\}$ for some variable $v \in X_{t'}$,
 constraint:
$t$ has exactly one child $t'$, $X_t = X_{t'}$, and $t$ stores a constraint (row) of $A$ involving variables that are all contained in $X_t$.
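For the dynamic program discussed below it is convenient to have an explicit representation of these node types; one possible (illustrative, not prescribed by the paper) layout in Python:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class GaifmanNode:
    """One node of a nice Gaifman decomposition (illustrative layout).

    kind:     'leaf', 'join', 'introduce', 'forget', or 'constraint'.
    bag:      the variables (column indices) in this node's bag.
    row:      for constraint nodes, the index of the row of A stored here.
    nonzeros: for constraint nodes, pointers to that row's nonzero
              coefficients as (column, coefficient) pairs.
    """
    kind: str
    bag: Tuple[int, ...]
    children: List["GaifmanNode"] = field(default_factory=list)
    row: Optional[int] = None
    nonzeros: Optional[List[Tuple[int, float]]] = None
```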
The following proposition shows how to construct the Gaifman graph for a given matrix $A$. It will be used in later proofs.
Proposition 1.
Given a matrix $A \in \mathbb{Z}^{m \times n}$ in which each row contains at most $w$ nonzero entries, the adjacency matrix of $G(A)$ can be constructed in $O(n^2 + n \cdot m + m \cdot w^2)$ time.
Proof.
Initialize an all-zero $n \times n$ adjacency matrix in $O(n^2)$ time. Scan through $A$ to collect the indices of the nonzero entries in each row in $O(n \cdot m)$ time. For each row, for each of the at most $\binom{w}{2}$ pairs of distinct nonzero entries in the row, set the corresponding entries of the adjacency matrix to one. ∎
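A direct sketch of this procedure (names ours):

```python
import numpy as np

def gaifman_adjacency(A):
    """Adjacency matrix of the Gaifman graph G(A).

    Two variables (columns) are adjacent iff some row has nonzero
    coefficients in both, so each row's support becomes a clique.
    """
    A = np.asarray(A)
    m, n = A.shape
    adj = np.zeros((n, n), dtype=bool)         # all-zero adjacency matrix
    for r in range(m):
        support = np.flatnonzero(A[r])         # nonzero columns of row r
        for a in range(len(support)):          # clique on the row's support
            for b in range(a + 1, len(support)):
                i, j = support[a], support[b]
                adj[i, j] = adj[j, i] = True
    return adj
```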
We show how to obtain a nice Gaifman decomposition of width $w$ for a matrix $A$ from any tree decomposition of its Gaifman graph of width $w$.
Proposition 2.
There is an algorithm that, given $A \in \mathbb{Z}^{m \times n}$ and a width-$w$ tree decomposition of the Gaifman graph $G(A)$, computes a nice Gaifman decomposition of $A$ having width $w$ in $O(n \cdot w^2 + n \cdot m \cdot w)$ time.
Proof.
Building a nice tree decomposition. From the tree decomposition of $G(A)$ we can derive a chordal supergraph of $G(A)$ with maximum clique size bounded by $w + 1$, by completing the vertices of each bag into a clique [14, Lemma 2.1.1]. This can be done in $O(n \cdot w^2)$ time by scanning through the contents of the bags of the decomposition. A perfect elimination order of the chordal supergraph can be used to obtain a nice tree decomposition of $G(A)$ having width $w$ on $O(n)$ nodes [14, Lemma 13.1.3]. The nice tree decomposition consists of leaf, join, introduce, and forget nodes.
Incorporating constraint nodes. We augment the nice tree decomposition with constraint nodes to obtain a nice Gaifman decomposition of $A$, as follows. We scan through matrix $A$ and store, for each row, a list of pointers to the nonzero entries in that row. This takes $O(n \cdot m)$ time. Since a graph of treewidth $w$ does not have cliques of size more than $w + 1$, by Observation 1 each row of $A$ has at most $w + 1$ nonzero entries. We maintain a list of the rows in $A$ that have not yet been associated to a constraint bag in the Gaifman decomposition. We traverse the rooted tree in post-order. For each node $t$, we inspect the corresponding bag $X_t$ and test, for each constraint that is not yet represented by the decomposition, whether all variables involved in the constraint are contained in the bag. This can be determined in $O(w)$ time per constraint as follows. For each variable in $X_t$ we test whether the corresponding row in $A$ has a nonzero coefficient for that variable; if so, we increase a counter. If the final value of the counter matches the precomputed number of nonzero coefficients in the row then the bag contains all variables involved in the constraint. In that case we update the tree as follows: we make a new node $t^*$, assign $X_{t^*} := X_t$, and let $t^*$ store a copy of the precomputed list of pointers to the nonzero coefficients in the constraint. We make $t^*$ the parent of $t$. If $t$ is not the root, then it originally had a parent $\hat{t}$; we make $\hat{t}$ the parent of $t^*$ instead. This operation effectively splices a node of degree two into the tree. Since the newly introduced node has the same bag as $t$, the relation between the bags of parents and children for the existing nodes of the tree remains unaltered (e.g., a forget node in the old tree will be a forget node in the new tree). The newly introduced node satisfies the requirements of a constraint node. We then continue processing the remainder of the tree to obtain the final nice Gaifman decomposition. As the original tree contains $O(n)$ nodes, while we spend $O(m \cdot w)$ time per node to incorporate the constraint bags, this phase of the algorithm takes $O(n \cdot m \cdot w)$ time. By Observation 1, for each constraint of $A$ the involved variables occur together in some bag. Hence we will detect such a bag in the procedure, which results in a constraint node for the row. Since the nice tree decomposition that we started from contained $O(n)$ nodes, while we introduce one node for each constraint in $A$, the resulting tree has $O(n + m)$ nodes. This shows that the result satisfies all properties of a nice Gaifman decomposition and concludes the proof. ∎
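The splicing step above can be sketched as follows, reusing the GaifmanNode layout from the earlier sketch (names ours):

```python
def splice_constraint_node(node, parent, row, nonzeros):
    """Insert a constraint node directly above `node`.

    The new node copies the bag of `node`, stores row `row` of A, and takes
    over `node`'s position below `parent` (or becomes the new root when
    `parent` is None), matching the splicing step of Proposition 2.
    """
    new = GaifmanNode(kind="constraint", bag=node.bag,
                      children=[node], row=row, nonzeros=nonzeros)
    if parent is not None:
        parent.children[parent.children.index(node)] = new
    return new  # root of the modified subtree (new tree root if parent is None)
```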
3.2 Feasibility on Gaifman graphs of bounded treewidth
We discuss the influence of treewidth on the complexity of ilpf. It turns out that for unbounded-domain variables the problem remains weakly NP-hard on instances with Gaifman graphs of treewidth at most two (Theorem 2). On the other hand, the problem can be solved by a simple dynamic programming algorithm with runtime $d^{O(w)} \cdot (n + m)$, where $d$ is the domain size and $w$ denotes the width of a given tree decomposition of the Gaifman graph (Theorem 3). In other words, the problem is fixed-parameter tractable in the combined parameter $d + w$, and efficiently solvable when the treewidth is bounded and $d$ is polynomially bounded in the input size.
Both results are not hard to prove, and fixed-parameter tractability of ilpf parameterized by domain size plus treewidth can also be derived from Courcelle's theorem (cf. [10, Corollary 11.43]). Nevertheless, for the sake of self-containment and concrete runtime bounds we provide direct proofs. Theorem 3 is used as a subroutine of our protrusion reduction algorithm.
Theorem 2.
ilp feasibility remains weakly NP-hard when restricted to instances whose Gaifman graph has treewidth two.
Proof.
We give a straightforward reduction from subset sum to this restricted variant of ilpf. Recall that an instance of subset sum consists of a set of integers $a_1, \ldots, a_n$ and a target value $t$; the task is to determine whether some subset of the integers sums to exactly $t$. Given such an instance we create variables $s_1, \ldots, s_n$ that encode the selection of a subset and variables $p_1, \ldots, p_n$ that effectively store partial sums; the variables $s_i$ are constrained to domain $\{0, 1\}$. Concretely, we aim to compute
$p_i = \sum_{j \leq i} a_j s_j$
for all $i \in [n]$. Clearly, this is correctly enforced by the following constraints (each equality expressed as two inequalities): $p_1 = a_1 s_1$ and $p_i = p_{i-1} + a_i s_i$ for all $2 \leq i \leq n$.
Finally, we enforce $p_n = t$. Clearly, a subset of the integers with sum $t$ translates canonically to a feasible assignment of the variables, and vice versa.
It remains to check the treewidth of the corresponding Gaifman graph. We note that for this purpose it is not necessary to split equalities into inequalities or to perform similar normalizations, since this does not affect whether sets of variables occur in at least one shared constraint. Thus, we can make a tree decomposition (in fact, a path decomposition) consisting of a path on $n$ nodes with bags $X_1 = \{s_1, p_1\}$ and $X_i = \{p_{i-1}, s_i, p_i\}$ for $2 \leq i \leq n$. Clearly, for each constraint there is a bag containing all its variables, correctly handling all edges of the Gaifman graph, and the bags containing any variable form a connected subtree. It follows that the Gaifman graph of the constructed instance has treewidth at most two. ∎
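A sketch of this reduction, producing the rows and right-hand sides of $Ax \leq b$ for a subset sum instance (variable ordering and names ours; nonnegative inputs assumed):

```python
def subset_sum_to_ilpf(numbers, target):
    """Encode SUBSET SUM as an ILPF instance Ax <= b of Gaifman treewidth 2.

    Columns: s_1..s_n (selection, domain {0,1}), then p_1..p_n (partial
    sums, here bounded by 0..target, assuming nonnegative inputs).
    Equalities p_i = p_{i-1} + a_i*s_i and p_n = target are written as
    pairs of <=-constraints.  Illustrative sketch of the proof above.
    """
    n = len(numbers)
    num_vars = 2 * n
    def s(i): return i - 1          # column of s_i
    def p(i): return n + i - 1      # column of p_i
    rows, rhs = [], []

    def add_eq(coeffs, value):      # coeffs: {column: coefficient}
        row = [0] * num_vars
        for col, c in coeffs.items():
            row[col] = c
        rows.append(row); rhs.append(value)                  # lhs <= value
        rows.append([-c for c in row]); rhs.append(-value)   # lhs >= value

    for i in range(1, n + 1):
        for col, coeff, bound in ((s(i), 1, 1), (s(i), -1, 0),        # 0 <= s_i <= 1
                                  (p(i), 1, target), (p(i), -1, 0)):  # 0 <= p_i <= t
            row = [0] * num_vars
            row[col] = coeff
            rows.append(row); rhs.append(bound)
        eq = {p(i): 1, s(i): -numbers[i - 1]}
        if i > 1:
            eq[p(i - 1)] = -1
        add_eq(eq, 0)                # p_i - p_{i-1} - a_i * s_i = 0
    add_eq({p(n): 1}, target)        # p_n = target
    return rows, rhs
```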
Theorem 3.
Instances of ilpf of domain size $d$ with a given nice Gaifman decomposition of width $w$ can be solved in time $d^{O(w)} \cdot (n + m)$.
Proof.
Let $(A, b)$ denote an instance of ilpf of domain size $d$ and let $(T, \mathcal{X}, \mathcal{C})$ denote a given nice Gaifman decomposition of width $w$ for $A$. We describe a simple dynamic programming algorithm for testing whether there exists an integer vector $x$ such that $Ax \leq b$. For ease of presentation we assume that each domain is $\{0, \ldots, d-1\}$; it is straightforward, but technical, to use arbitrary (possibly different) domains of at most $d$ values for each variable.
With a node $t$, apart from its bag $X_t$, we associate the set $V_t$ of all variables appearing in $X_t$ or in the bag of any descendant of $t$. By $A_t$ we denote all constraints (rows of $A$) that the Gaifman decomposition maps to a descendant of $t$ (including $t$ itself).
Our goal is to compute for each node $t$ the set of all feasible assignments to the variables in $X_t$ when taking into account only the constraints in $A_t$. The set of feasible assignments for $t$ will be recorded in a table $f_t$ indexed by tuples $(a_1, \ldots, a_k) \in \{0, \ldots, d-1\}^{k}$ where $k = |X_t|$. The entry at $(a_1, \ldots, a_k)$ corresponds to assigning $x_{i_j} := a_j$, where $X_t = \{x_{i_1}, \ldots, x_{i_k}\}$ and $i_1 < \ldots < i_k$. Its value will be $1$ if we determined that there is a feasible assignment for $V_t$ with respect to the constraints $A_t$ that extends $(a_1, \ldots, a_k)$; otherwise the value is $0$. We will now describe how to compute the tables in a bottom-up manner, by outlining the behavior on each node type.

Leaf node $t$ with bag $X_t$. Since $A_t = \emptyset$ as $t$ has no children, we simply have $f_t[(a_1, \ldots, a_k)] = 1$ for all $(a_1, \ldots, a_k) \in \{0, \ldots, d-1\}^{k}$ with $k = |X_t|$, which is computed in time $O(d^{k})$.

Forget node $t$ with child $t'$ and bag $X_t = X_{t'} \setminus \{v\}$. It suffices to project the table $f_{t'}$ down to only contain information about the feasible assignments for $X_t$. To make this concrete, let $X_{t'} = \{x_{i_1}, \ldots, x_{i_k}\}$ with $i_1 < \ldots < i_k$ and $v = x_{i_j}$. Thus, $X_t = X_{t'} \setminus \{x_{i_j}\}$. We let
$f_t[(a_1, \ldots, a_{j-1}, a_{j+1}, \ldots, a_k)] = \max_{a_j \in \{0, \ldots, d-1\}} f_{t'}[(a_1, \ldots, a_k)]$
for all $(a_1, \ldots, a_{j-1}, a_{j+1}, \ldots, a_k) \in \{0, \ldots, d-1\}^{k-1}$, i.e., we set the entry to $1$ if some choice of $a_j$ extends the assignment to one that is feasible for $V_{t'}$ with respect to all constraints in $A_{t'}$; else it takes value $0$. Each table entry takes time $O(d)$ and in total we spend time $O(d^{k})$; note that $V_t = V_{t'}$ and $A_t = A_{t'}$ since no constraint is assigned to a forget node.

Introduce node $t$ with child $t'$ and bag $X_t = X_{t'} \cup \{v\}$. Let $X_t = \{x_{i_1}, \ldots, x_{i_k}\}$ with $i_1 < \ldots < i_k$ and $v = x_{i_j}$, implying that $X_{t'} = X_t \setminus \{x_{i_j}\}$. As $t$ is an introduce node we have $v \notin V_{t'}$ and $A_t = A_{t'}$, and therefore $A_t$ does not constrain the value of $v$ in any way. We set $f_t$ as follows:
$f_t[(a_1, \ldots, a_k)] = f_{t'}[(a_1, \ldots, a_{j-1}, a_{j+1}, \ldots, a_k)]$
for all $(a_1, \ldots, a_k) \in \{0, \ldots, d-1\}^{k}$. We use time $O(d^{k})$.

Join node $t$ with children $t_1, t_2$ and bag $X_t = X_{t_1} = X_{t_2}$. At a join node $t$, from child $t_1$ we get all assignments that are feasible for $V_{t_1}$ regarding constraints $A_{t_1}$, and from child $t_2$ we get the feasible assignments regarding constraints $A_{t_2}$. We have $A_t = A_{t_1} \cup A_{t_2}$, and therefore an assignment is feasible for $A_t$ if and only if it is feasible for both $A_{t_1}$ and $A_{t_2}$. It suffices to merge the information of the two child nodes. Letting $k = |X_t|$, we set
$f_t[(a_1, \ldots, a_k)] = \min\{f_{t_1}[(a_1, \ldots, a_k)],\, f_{t_2}[(a_1, \ldots, a_k)]\}$
for all $(a_1, \ldots, a_k) \in \{0, \ldots, d-1\}^{k}$. This takes time $O(d^{k})$.

Constraint node $t$ with child $t'$, mapped to row $r$. Let $X_t = X_{t'} = \{x_{i_1}, \ldots, x_{i_k}\}$ with $i_1 < \ldots < i_k$. We know that $A_t = A_{t'} \cup \{r\}$ and therefore an assignment of values to $V_t = V_{t'}$ is feasible with respect to the constraints $A_t$ if and only if it is feasible for the constraints $A_{t'}$ and also satisfies constraint $r$. We therefore initialize $f_t$ by setting
$f_t[(a_1, \ldots, a_k)] = f_{t'}[(a_1, \ldots, a_k)]$
for all $(a_1, \ldots, a_k) \in \{0, \ldots, d-1\}^{k}$. Now, we need to discard those assignments that are not feasible for the additional constraint $r$. For each $(a_1, \ldots, a_k)$ with $f_t[(a_1, \ldots, a_k)] = 1$ we process the row $r$. Using the pointers to the nonzero coefficients in row $r$ that are stored in $t$, along with the fact that all variables constrained by $r$ are contained in bag $X_t$ by Definition 3, we can evaluate the constraint in $O(w)$ time. If the sum of values times coefficients exceeds $b_r$ then the constraint is not satisfied and we set $f_t[(a_1, \ldots, a_k)]$ to $0$. This takes time $O(w)$ per assignment. In total we use time $O(d^{k} \cdot w)$.
At the end of the computation we have the table $f_z$ where $z$ denotes the root of the tree decomposition $T$. By definition, it encodes all assignments to $X_z$ that can be extended to assignments of $V_z$ that are feasible for all constraints in $A_z$. By Definition 3 the set $A_z$ for the root contains all constraints of $A$, and thus any entry with value $1$ implies that $Ax \leq b$ has an integer solution. Conversely, any integer solution must lead to a nonzero entry in $f_z$. By Definition 3 the number of nodes in $T$ is $O(n + m)$. The total time needed for the dynamic programming is therefore bounded by $O(d^{w+1} \cdot w \cdot (n + m))$, which is $d^{O(w)} \cdot (n + m)$. ∎
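A compact sketch of this dynamic program, operating on the GaifmanNode layout introduced earlier (names ours; bags of join and constraint nodes are assumed to list their variables in the same order as their children's bags):

```python
from itertools import product

def gaifman_dp_feasible(root, b, d):
    """Decide feasibility of Ax <= b with all domains {0,...,d-1} by dynamic
    programming over a nice Gaifman decomposition rooted at `root`.
    Tables map assignments of the bag variables (tuples ordered as in
    node.bag) to True/False, as in the proof of Theorem 3."""

    def table(node):
        if node.kind == "leaf":            # no constraints below: everything feasible
            return {a: True for a in product(range(d), repeat=len(node.bag))}
        if node.kind == "forget":          # project child table onto the smaller bag
            child, = node.children
            pos = [child.bag.index(v) for v in node.bag]
            out = {a: False for a in product(range(d), repeat=len(node.bag))}
            for a_child, ok in table(child).items():
                if ok:
                    out[tuple(a_child[p] for p in pos)] = True
            return out
        if node.kind == "introduce":       # new variable is unconstrained so far
            child, = node.children
            t_child = table(child)
            return {a: t_child[tuple(a[node.bag.index(v)] for v in child.bag)]
                    for a in product(range(d), repeat=len(node.bag))}
        if node.kind == "join":            # feasible iff feasible for both subtrees
            t1, t2 = table(node.children[0]), table(node.children[1])
            return {a: t1[a] and t2[a] for a in t1}
        if node.kind == "constraint":      # additionally check the stored row
            child, = node.children
            col_pos = {v: i for i, v in enumerate(node.bag)}
            out = {}
            for a, ok in table(child).items():
                lhs = sum(c * a[col_pos[j]] for j, c in node.nonzeros)
                out[a] = ok and lhs <= b[node.row]
            return out
        raise ValueError(node.kind)

    return any(table(root).values())
```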
3.3 Protrusion reductions
To formulate the protrusion replacement rule, which is the main algorithmic asset used in this section, we need some terminology. For a nonnegative integer $t$, a $t$-boundaried ILP is an instance of ilpf in which $t$ distinct boundary variables are distinguished among the total variable set $\{x_1, \ldots, x_n\}$. If $B = (x_{i_1}, \ldots, x_{i_t})$ is a sequence of variables of $(A, b)$, we will also use $(A, b, B)$ to denote the corresponding $t$-boundaried ILP. The feasible boundary assignments of a boundaried ILP are those assignments to the boundary variables that can be completed into an assignment that is feasible for the entire system.
Definition 4.
Two $t$-boundaried ILPs $(A, b, B)$ and $(A', b', B')$ are equivalent if they have the same feasible boundary assignments.
The following lemma shows how to compute equivalent boundaried ILPs for any boundaried input ILP. The replacement system is built by adding, for each infeasible boundary assignment, a set of constraints on auxiliary variables that explicitly blocks that assignment.
Lemma 1.
There is an algorithm with the following specifications: (1) It gets as input a $t$-boundaried ILP $(A, b, B)$ with domain size $d$, with $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$, and a width-$w$ nice Gaifman decomposition of $A$. (2) Given such an input it takes time $d^{t + O(w)} \cdot (n + m)^{O(1)}$. (3) Its output is an equivalent $t$-boundaried ILP $(A', b', B')$ of domain size $d$ containing $O(t \cdot d^t)$ variables and $O(t \cdot d^t)$ constraints, with all coefficients and right-hand sides of $(A', b')$ bounded by $d$ in absolute value.
Proof.
The lemma is a combination of two ingredients. Using Theorem 3 we can efficiently test whether a given assignment to the boundary variables can be extended to a feasible solution for $(A, b)$. Then, knowing the set of all assignments that can be feasibly extended, we block each infeasible partial assignment by introducing a small number of variables and constraints, allowing us to fully discard the original constraints and non-boundary variables. The latter step uses a construction from an earlier paper by Kratsch [16, Theorem 5], which we repeat here for completeness.
Finding feasible partial assignments. Consider a partial assignment $(\beta_1, \ldots, \beta_t) \in \{0, \ldots, d-1\}^{t}$ to the boundary variables $x_{i_1}, \ldots, x_{i_t}$. To determine whether this assignment can be extended to a feasible assignment for $(A, b)$, we enforce these equalities in the ILP. Concretely, for each $j \in [t]$ we replace the domain-bounding constraints for $x_{i_j}$ in the system by $x_{i_j} \leq \beta_j$ and $-x_{i_j} \leq -\beta_j$. We obtain a new system $Ax \leq \tilde{b}$ with $m$ constraints. Since the modified constraints involve only a single variable each, the modifications do not affect the Gaifman graph. Moreover, the modifications do not affect which entries in the constraint matrix have nonzero values, implying that the given decomposition also serves as a nice Gaifman decomposition of the modified system. The partial assignment to the boundary variables can be feasibly extended if and only if $Ax \leq \tilde{b}$ is feasible. We may invoke the algorithm of Theorem 3 with the given nice Gaifman decomposition to decide feasibility in $d^{O(w)} \cdot (n + m)$ time. By iterating over all $d^{t}$ possible partial assignments to the boundary variables with values in $\{0, \ldots, d-1\}$, we determine which partial assignments can be feasibly extended. Let $\mathcal{L}$ be a list of the partial assignments that can not be extended to a feasible solution for $(A, b)$.
Blocking infeasible partial assignments. Using $\mathcal{L}$ we construct an equivalent $t$-boundaried ILP $(A', b', B')$ as follows. Based on the length of $\mathcal{L}$ we can determine the number of variables that will be used in the new system, which helps to write down the constraint matrix efficiently. The number of variables and the number of constraints in the new system will be $O(t \cdot (1 + |\mathcal{L}|))$. The system is built as follows. For each boundary variable of $(A, b, B)$ we introduce a corresponding variable in the new system and constrain its domain to $\{0, \ldots, d-1\}$ using two inequalities; this yields $t$ variables and $2t$ constraints.
For each infeasible partial assignment $(\beta_1, \ldots, \beta_t)$ in the list $\mathcal{L}$, we add new variables $y_j$ and $z_j$ for all $j \in [t]$, together with the following constraints:
(1)  
(2) 
We claim that an assignment to the boundary variables can be extended to the newly introduced variables so as to satisfy the constraints (1) and (2) if and only if the partial assignment is not $(\beta_1, \ldots, \beta_t)$. In the first direction, assume that the boundary assignment equals $(\beta_1, \ldots, \beta_t)$. Then, taking into account the domains of $y_j$ and $z_j$, the equalities (1) force values for all $j$ under which constraint (2) is violated, which shows that the partial assignment cannot be feasibly extended. In the other direction, if the boundary assignment differs from $(\beta_1, \ldots, \beta_t)$, then there is a position $j$ at which it deviates. Due to the domains of the new variables, and since the relevant contribution to the equality (1) is a multiple of $d$, the variables for position $j$ can be set so that constraint (2) is fulfilled.
The only coefficients used in the constraints that block a partial assignment are bounded by $d$ in absolute value; as the equalities of (1) are represented using two inequalities, each such coefficient also appears negated. The values of the blocked assignment, which appear in the right-hand side vector, arise from the coordinates of an attempted partial assignment with values in $\{0, \ldots, d-1\}$ (with negated values from representing equalities by inequalities). As the coefficients of the domain-enforcing constraints are all plus or minus one, with right-hand sides bounded by $d$ in absolute value, the structure of the constructed system matches that described in the lemma statement. For each infeasible partial assignment we introduce $O(t)$ variables with two domain-enforcing constraints each, along with the inequalities expressing the equalities of (1) and a single constraint for (2), i.e., $O(t)$ constraints in total. Since there are only $d^{t}$ partial assignments to check, we have $|\mathcal{L}| \leq d^{t}$ and therefore the system has $O(t \cdot d^{t})$ variables and $O(t \cdot d^{t})$ constraints. Consequently, the constraint matrix has $O(t \cdot d^{t})$ nonzero entries and it is not hard to verify that it can be written down in time linear in its size. It remains to prove that the constructed $t$-boundaried ILP is equivalent to the input structure.
Consider an assignment to the boundary variables. If the assignment can be extended to a feasible assignment for $(A, b)$, then the boundary variables take values in $\{0, \ldots, d-1\}$ (since $(A, b)$ has domain size $d$) and therefore satisfy the domain restrictions of the new system. Since the partial assignment has a feasible extension, it is not on the list $\mathcal{L}$. For each set of constraints that was added to block an infeasible partial assignment, the claim above therefore shows that the related variables $y_j$ and $z_j$ can be set to satisfy their constraints. Hence the partial assignment can be extended to a feasible assignment for $(A', b')$. In the reverse direction, suppose that a partial assignment can be feasibly extended for $(A', b')$. By the claim above, the partial assignment differs from each of the blocked points on $\mathcal{L}$. Since $\mathcal{L}$ contains all infeasible partial assignments with values in $\{0, \ldots, d-1\}$, and feasible partial assignments for $(A', b')$ take values in $\{0, \ldots, d-1\}$ since we restricted the domains of the boundary variables, the considered partial assignment has an extension that is feasible for $(A, b)$. This shows that the two boundaried ILPs are indeed equivalent, and concludes the proof. ∎
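For the special case of binary boundary variables there is a particularly simple way to block a single infeasible boundary assignment, the standard "no-good" cut. This is not the gadget used in the proof above (which handles general domain size $d$ via auxiliary variables), but it illustrates the blocking idea:

```python
def no_good_cut(assignment):
    """Linear <=-constraint excluding exactly one 0/1 assignment to x_1..x_t.

    For an infeasible boundary assignment (b_1,...,b_t) in {0,1}^t, the cut
        sum_{i: b_i = 1} x_i - sum_{i: b_i = 0} x_i <= (number of ones) - 1
    is violated only by that assignment.  Standard no-good cut, shown only
    to illustrate blocking; the paper's gadget handles larger domains.
    """
    coeffs = [1 if b_i == 1 else -1 for b_i in assignment]
    rhs = sum(assignment) - 1
    return coeffs, rhs

# Example: blocking (1, 0, 1) yields  x1 - x2 + x3 <= 1,
# which only the assignment x = (1, 0, 1) violates.
```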
Intuitively, we can simplify an ilpf instance with a given protrusion decomposition by replacing all protrusions with equivalent boundaried ILPs of small size via Lemma 1. We get a new instance containing all replacement constraints plus all original constraints that are fully contained in the shared part.
Theorem 4.
For each constant $r$ there is an algorithm that, given an instance $(A, b)$ of ilpf with domain size $d$, along with an $(\alpha, r)$-protrusion decomposition $P_0, P_1, \ldots, P_\ell$ of the given Gaifman graph $G(A)$, outputs an equivalent instance $(A', b')$ of ilpf with domain size $d$ whose number of variables is polynomial in $\alpha$ and $d$, in time polynomial in the input size. Each constraint of $(A', b')$ is either a constraint in $(A, b)$ involving only variables from $P_0$, or one of the new constraints introduced by the replacement, whose coefficients and right-hand sides are bounded by $d$ in absolute value.
Proof.
The main idea of the proof is to apply Lemma 1 to replace each protrusion in the Gaifman graph by a small subsystem that is equivalent with respect to the boundary variables. For the sake of efficiency, we start by scanning through $A$ once to compute for each row of $A$ a list of pointers to the nonzero coefficients in that row. This takes $O(n \cdot m)$ time. We handle the protrusions $P_i \cup N_G(P_i)$ for $i \in \{1, \ldots, \ell\}$ (recall that $\ell \leq \alpha$) consecutively, iteratively replacing each protrusion by a small equivalent boundaried ILP to obtain an equivalent instance.
Replacing protrusions. Consider some $i \in [\ell]$ with $P_i \neq \emptyset$; we show how to replace the variables of $P_i$ by a small subsystem that has the same effect on the existence of global solutions. The definition of protrusion decomposition ensures that $N_G(P_i) \subseteq P_0$ and that $G[P_i \cup N_G(P_i)]$ has treewidth at most $r$. From the system $Ax \leq b$ we extract the constraints involving at least one variable in $P_i$. We collect these constraints, together with domain-enforcing constraints for the variables of $P_i \cup N_G(P_i)$, into a subsystem. Since every such constraint acts only on variables of $P_i \cup N_G(P_i)$ by the definition of a protrusion decomposition, the number of variables of the subsystem is at most $|P_i| + |N_G(P_i)|$. Since the variables with nonzero coefficients in a constraint induce a clique in the Gaifman graph, while $G[P_i \cup N_G(P_i)]$ has treewidth at most $r$ and therefore does not have cliques of size more than $r + 1$, it follows that a constraint involving a variable from $P_i$ acts on at most $r + 1$ variables. We can therefore identify the constraints involving a variable in $P_i$ by only inspecting the rows of $A$ containing at most $r + 1$ nonzero entries, which implies that the subsystem can be extracted in $O(n + m \cdot r)$ time.
Let $N_G(P_i) = \{x_{j_1}, \ldots, x_{j_{t_i}}\}$ be the neighbors of $P_i$ in $G(A)$, i.e., the variables of $P_0$ that appear in a common constraint with a variable in $P_i$. As $r$ is a constant, we can compute a tree decomposition of $G[P_i \cup N_G(P_i)]$ of width $O(r)$ in linear time [3, 6]. Using Proposition 2 this yields a nice Gaifman decomposition of the subsystem in polynomial time. Interpreting the subsystem as a $t_i$-boundaried ILP with boundary $N_G(P_i)$, we invoke Lemma 1 to compute an equivalent $t_i$-boundaried ILP in polynomial time for constant $r$. By Lemma 1, the numbers in the replacement system are bounded by $d$ in absolute value and it has $O(t_i \cdot d^{t_i})$ variables and constraints. We modify the instance as follows, while preserving the fact that it has domain size $d$. We remove all variables of $P_i$ and all constraints involving them from the system $Ax \leq b$. For each non-boundary variable of the replacement system we add a corresponding new variable. For each constraint of the replacement system, containing some boundary variables and some non-boundary variables, we add a new constraint with the same coefficients and right-hand side. All occurrences of boundary variables of the replacement system are replaced by the corresponding existing variables of $N_G(P_i)$; occurrences of non-boundary variables are replaced by occurrences of the corresponding newly introduced variables.
Observe that these replacements preserve the variable set $P_0$, and that the newly introduced constraints only involve variables of $N_G(P_i) \subseteq P_0$ and newly introduced variables. We can therefore perform this replacement step independently for each protrusion $P_i$ with $i \in [\ell]$. Since each variable set $P_i$ for $i \in [\ell]$ is removed and replaced by a new set of variables, the final system resulting from these replacements has $|P_0| + \sum_{i=1}^{\ell} O(t_i \cdot d^{t_i})$ variables, which is bounded as claimed since the definition of a protrusion decomposition ensures that $\max(\ell, |P_0|) \leq \alpha$. When building $(A', b')$, the procedure above removes from $Ax \leq b$ all constraints that involve at least one variable in $P_i$ with $i \in [\ell]$. Hence the only constraints in $(A', b')$ are (1) the constraints of $Ax \leq b$ that only involve variables in $P_0$, and (2) the