A Lower Bound on CNF Encodings of the At-Most-One Constraint
Abstract
Constraint “at most one” is a basic cardinality constraint which requires that at most one of its boolean inputs is set to $1$. This constraint is widely used when translating a problem into a conjunctive normal form (CNF) and we investigate its CNF encodings suitable for this purpose. An encoding differs from a CNF representation of a function in that it can use auxiliary variables. We are especially interested in propagation complete encodings which have the property that unit propagation is strong enough to enforce consistency on input variables. We show a lower bound on the number of clauses in any propagation complete encoding of the “at most one” constraint. The lower bound almost matches the size of the best known encodings. We also study the important case of 2CNF encodings, where we show a slightly better lower bound. The lower bound holds also for the related “exactly one” constraint.
1 Introduction
In this paper we study the properties of one of the most basic cardinality constraints: the “at most one” constraint on boolean variables, which requires that at most one input variable is set to $1$. This constraint is widely used when translating a problem into a conjunctive normal form (CNF). Since the “at most one” constraint is antimonotone, it has a unique minimal prime CNF representation, which requires $\binom{n}{2}$ negative clauses, where $n$ is the number of input variables. However, there are CNF encodings of size $O(n)$ which use additional auxiliary variables. Several encodings for this constraint were considered in the literature. Let us mention the sequential encoding [21], which addresses also more general cardinality constraints. The same encoding was also called the ladder encoding in [17], and it forms the smallest variant of the commander-variable encodings [18]. After a minor simplification, it requires $3n-6$ clauses and $n-3$ auxiliary variables. Similar, but not smaller encodings can also be obtained as special cases of totalizers [5] and cardinality networks [1]. Currently the smallest known encoding is the product encoding introduced by Chen [10]. It consists of $2n + 4\sqrt{n} + O(\sqrt[4]{n})$ clauses and uses $O(\sqrt{n})$ auxiliary variables. The sequential and the product encodings are described in Section 3.1 and Section 3.2 with some modifications. It is worth noting that the product encoding can be derived using a monotone circuit of size $O(n)$ for a threshold function described in [14] and in [22]. Section 3.3 provides more detail on this.
Other encodings introduced in the literature for the “at most one” constraint use more clauses than either sequential or product encoding does. These include the binary encoding [6, 16] and the bimander encoding [17].
All the encodings for the “at most one” constraint we have mentioned are in the form of a 2CNF formula, which is a CNF formula where all clauses consist of at most two literals. This restricted structure guarantees that the encodings are propagation complete. The notion of propagation completeness was introduced by [8] as a generalization of unit refutation completeness introduced by [13]. We say that a formula $\varphi$ is propagation complete if for any set of literals $L$, the following property holds: either $\varphi$ together with the literals of $L$ is contradictory and this can be detected by unit propagation, or unit propagation started with $L$ derives all literals that logically follow from $\varphi$ and $L$. It was shown in [3] that a prime 2CNF formula is always propagation complete. Since unit propagation is a standard tool used in state-of-the-art SAT solvers [7], this makes 2CNF formulas as a part of a larger instance simple for them.
When encoding a constraint into a CNF formula, a weaker condition than propagation completeness of the resulting formula is often required. Namely, we require that unit propagation on the encoding is strong enough to enforce some kind of local consistency, for instance generalized arc consistency (GAC), see for example [4]. In this case we only care about propagation completeness with respect to input variables and not necessarily about behaviour on auxiliary variables. Later we formalize this notion as propagation complete encoding (PC encoding). Let us note that this name was also used in [9] to denote an encoding of a given constraint which is propagation complete with respect to all variables including the auxiliary ones.
Chen [10] conjectures that the product encoding is the smallest possible PC encoding of the “at most one” constraint. In this paper we provide support for the positive answer to this conjecture: we prove a lower bound on the number of clauses in any propagation complete encoding of the “at most one” constraint on $n$ variables that almost matches the size of the product encoding. The lower bound actually holds for the related “exactly one” constraint as well. We also consider the important special case of 2CNF encodings, for which we achieve a slightly better lower bound.
We should note that having a smaller encoding is not necessarily an advantage when a SAT solver is about to be used. Adding auxiliary variables can be costly because the SAT solver has to deal with them and possibly use them for decisions. However, encodings using auxiliary variables can be useful for constraints whose CNF representation is too large. Moreover, the experimental results in [20] suggest that a SAT solver can be modified to minimize the disadvantage of introducing auxiliary variables. Another experimental evaluation of various cardinality constraints and their encodings appears in [15]. A propagation complete encoding can also be used as a part of a general purpose CSP solver where unit propagation can serve as a propagator of GAC, see [4].
The paper is organized as follows. In Section 2, we give the necessary definitions and formulate the main result in Theorem 2.8. In particular, we introduce the notion of a P-encoding that captures the common properties of propagation complete encodings of the “at most one” and the “exactly one” constraints used for the lower bounds. Moreover, we define a specific form of a P-encoding which we call a regular form and formulate Theorem 2.10, which is the basis of the proofs of the lower bounds: it allows us to consider separately the encodings in the regular form and the encodings not in this form. We also prove that Theorem 2.10 is sufficient for the lower bound on the size of the considered encodings. In Section 3, we recall the known results and present some auxiliary results we use in the rest of the paper. In Section 4, we prove the properties of the encodings not in the regular form that imply Theorem 2.10. Section 5 contains the proof of a lower bound on the size of any propagation complete encoding of the “at most one” and the “exactly one” constraints, obtained by an analysis of the encodings in the regular form. In Section 6, we prove a lower bound on the size of 2CNF encodings of the “at most one” constraint by a different analysis of the encodings in the regular form. We close the paper with notes on possible directions for further research in Section 7 and concluding remarks in Section 8.
A preliminary version of this paper appeared in [19]. Due to page limitations, several proofs were omitted or only sketched in the conference version. In this version of the paper, we have included all proofs and improved their readability. The lower bounds were slightly improved since the conference version as well.
2 Definitions and Results
In this section we introduce the notions used throughout the paper, state the main results and give an overview of their proof. We use $\subset$ to denote strict inclusion.
2.1 At-Most-One and Exactly-One Functions
In this paper we are interested in two special cases of cardinality constraints represented by “at most one” and “exactly one” functions. These functions differ only on the zero input.
Definition 2.1.
For every $n$, the function $\mathrm{AMO}_n(x_1, \dots, x_n)$ (at most one) is defined as follows: given an assignment $\mathbf{a} = (a_1, \dots, a_n) \in \{0,1\}^n$, the value $\mathrm{AMO}_n(\mathbf{a})$ is $1$ if and only if there is at most one index $i$ for which $a_i = 1$.
Definition 2.2.
For every $n$, the function $\mathrm{EO}_n(x_1, \dots, x_n)$ (exactly one) is defined as follows: given an assignment $\mathbf{a} = (a_1, \dots, a_n) \in \{0,1\}^n$, the value $\mathrm{EO}_n(\mathbf{a})$ is $1$ if and only if there is exactly one index $i$ for which $a_i = 1$.
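For concreteness, the two functions can be written as executable predicates; a minimal Python sketch (the names `amo` and `eo` are ours, not from the paper):

```python
def amo(assignment):
    # AMO: at most one input is set to 1
    return sum(assignment) <= 1

def eo(assignment):
    # EO: exactly one input is set to 1
    return sum(assignment) == 1

# The two functions differ only on the all-zero input.
assert amo((0, 0, 0, 0)) and not eo((0, 0, 0, 0))
assert amo((0, 1, 0, 0)) and eo((0, 1, 0, 0))
assert not amo((1, 1, 0, 0)) and not eo((1, 1, 0, 0))
```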
We study propagation complete encodings of these two functions using their common generalization called P-encoding introduced in Definition 2.5.
2.2 CNF Encoding
We work with formulas in conjunctive normal form (CNF formulas). For standard notation see e.g. [11]. Namely, a literal is a variable $x$ (positive literal) or its negation $\neg x$ (negative literal). If $x$ is a variable, then let $\mathrm{lit}(x) = \{x, \neg x\}$. If $\mathbf{x}$ is a vector of variables, then we denote by $\mathrm{lit}(\mathbf{x})$ the union of $\mathrm{lit}(x)$ over $x \in \mathbf{x}$. For simplicity, we write $x \in \mathbf{x}$ if $x$ is a variable that occurs in $\mathbf{x}$, so $\mathbf{x}$ is considered as a set here, although the order of the variables in $\mathbf{x}$ is important. Given a literal $l$, the term $\mathrm{var}(l)$ denotes the variable in the literal $l$, that is, $\mathrm{var}(l) = x$ for $l \in \{x, \neg x\}$. Given a set of literals $L$, $\mathrm{var}(L) = \{\mathrm{var}(l) \mid l \in L\}$.
A clause is a disjunction of a set of literals which does not contain a complementary pair of literals. A formula is in conjunctive normal form (CNF) if it is a conjunction of a set of clauses. In this paper, we consider only formulas in conjunctive normal form and we often simply refer to a formula, by which we mean a CNF formula. We treat clauses as sets of literals and formulas as sets of clauses. In particular, the order of the literals in a clause or of the clauses in a formula is not important and we use common set relations and operations (set membership, inclusion, set difference, etc.) on clauses and formulas. The empty clause (the contradiction) is denoted $\bot$ and the empty formula (the tautology) is denoted $\top$.
A unit clause consists of a single literal. A binary clause consists of two literals. A CNF formula, each clause of which contains at most $k$ literals, is said to be a $k$CNF formula.
A partial assignment $\alpha$ of variables $\mathbf{z}$ is a subset of $\mathrm{lit}(\mathbf{z})$ that does not contain a complementary pair of literals, so we have $|\alpha \cap \mathrm{lit}(z)| \le 1$ for each $z \in \mathbf{z}$. By $\varphi(\alpha)$ we denote the formula obtained from $\varphi$ by the partial setting of the variables defined by $\alpha$.
A CNF formula $\varphi$ represents a boolean function on the variables in $\mathrm{var}(\varphi)$. We say that a clause $C$ is an implicate of a formula $\varphi$ if any satisfying assignment of $\varphi$ satisfies $C$ as well, i.e. $\varphi(\mathbf{a}) = 1$ implies $C(\mathbf{a}) = 1$ for every assignment $\mathbf{a}$. We denote this property with $\varphi \models C$. We say that $C$ is a prime implicate of $\varphi$ if no proper subclause $C' \subsetneq C$ is an implicate of $\varphi$. Note that whether a clause is a (prime) implicate of $\varphi$ depends only on the function represented by $\varphi$ and we can therefore speak about implicates of the function as well. We say that a CNF formula $\varphi$ is prime if it consists only of prime implicates of the function it represents. By the size of the formula $\varphi$ we mean the number of clauses in $\varphi$; it is denoted $|\varphi|$, which is consistent with considering a CNF formula as a set of clauses.
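For small $n$, the fact that the set of all $\binom{n}{2}$ negative binary clauses represents $\mathrm{AMO}_n$ can be checked exhaustively; a sketch in our own naming, using DIMACS-style integer literals ($i$ means $x_i$, $-i$ means $\neg x_i$):

```python
from itertools import combinations, product

def pairwise_amo(n):
    # The minimal prime CNF representation of AMO_n:
    # one negative binary clause for every pair of inputs.
    return [(-i, -j) for i, j in combinations(range(1, n + 1), 2)]

def satisfies(clauses, assignment):
    # assignment maps variable index -> 0/1; a literal is satisfied
    # iff its sign agrees with the value of its variable.
    return all(any((l > 0) == bool(assignment[abs(l)]) for l in c) for c in clauses)

n = 4
cnf = pairwise_amo(n)
assert len(cnf) == n * (n - 1) // 2   # binomial(n, 2) clauses
for bits in product((0, 1), repeat=n):
    assignment = dict(enumerate(bits, start=1))
    assert satisfies(cnf, assignment) == (sum(bits) <= 1)
```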
In this paper we also consider encodings of boolean functions defined as follows.
Definition 2.3 (Encoding).
Let $f(\mathbf{x})$ be a boolean function on variables $\mathbf{x} = (x_1, \dots, x_n)$. Let $\varphi(\mathbf{x}, \mathbf{y})$ be a CNF formula on variables $\mathbf{x}$ and $\mathbf{y} = (y_1, \dots, y_m)$, where $m \ge 0$. We call $\varphi$ an encoding of $f$ if for every $\mathbf{a} \in \{0,1\}^n$ we have
(1) $f(\mathbf{a}) = 1 \iff (\exists\, \mathbf{b} \in \{0,1\}^m)\ \varphi(\mathbf{a}, \mathbf{b}) = 1.$
The variables in $\mathbf{x}$ and $\mathbf{y}$ are called input variables and auxiliary variables, respectively.
2.3 Propagation Complete Encoding
We are interested in encodings which are propagation complete. This notion relies on unit resolution, which is a special case of general resolution. We say that two clauses $C_1$ and $C_2$ are resolvable if there is exactly one literal $l$ such that $l \in C_1$ and $\neg l \in C_2$. The resolvent of these clauses is then defined as $C = (C_1 \setminus \{l\}) \cup (C_2 \setminus \{\neg l\})$. If one of $C_1$ and $C_2$ is a unit clause, we say that $C$ is derived by unit resolution from $C_1$ and $C_2$. We say that a clause $C$ can be derived from $\varphi$ by unit resolution (or unit propagation) if $C$ can be derived from $\varphi$ by a series of unit resolutions. We denote this fact with $\varphi \vdash_1 C$.
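Unit propagation can be sketched as a fixed-point computation over DIMACS-style integer literals; the following minimal implementation is our own illustration, not code from the paper:

```python
def unit_propagate(clauses, assumptions):
    # Returns (derived_literals, contradiction_flag); derived includes assumptions.
    # A literal is a nonzero integer; -l is its complement.
    derived = set(assumptions)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in derived for l in clause):
                continue  # clause already satisfied by a derived literal
            unassigned = [l for l in clause if -l not in derived]
            if not unassigned:
                return derived, True   # empty resolvent: contradiction detected
            if len(unassigned) == 1 and unassigned[0] not in derived:
                derived.add(unassigned[0])  # unit resolution step
                changed = True
    return derived, False

# AMO_2 as the single clause (NOT x1 OR NOT x2): assuming x1 propagates NOT x2.
lits, conflict = unit_propagate([(-1, -2)], {1})
assert not conflict and -2 in lits
```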
Definition 2.4 (Propagation complete encoding).
Let $f(\mathbf{x})$ be a boolean function on variables $\mathbf{x} = (x_1, \dots, x_n)$. Let $\varphi(\mathbf{x}, \mathbf{y})$ be a CNF formula on variables $\mathbf{x}$ and $\mathbf{y} = (y_1, \dots, y_m)$. We call $\varphi$ a propagation complete encoding (PC encoding) of $f$ if it is an encoding of $f$ and for any partial assignment $\alpha \subseteq \mathrm{lit}(\mathbf{x})$ and for each literal $l \in \mathrm{lit}(\mathbf{x})$, such that
(2) $f(\mathbf{x}) \wedge \bigwedge_{e \in \alpha} e \models l,$
we have
(3) $\varphi(\mathbf{x}, \mathbf{y}) \wedge \bigwedge_{e \in \alpha} e \vdash_1 l \quad \text{or} \quad \varphi(\mathbf{x}, \mathbf{y}) \wedge \bigwedge_{e \in \alpha} e \vdash_1 \bot.$
If $\varphi$ is a prime CNF formula, we call it a prime PC encoding.
Note that the definition of a propagation complete encoding is less restrictive than requiring that the formula $\varphi$ is propagation complete as defined in [8]. The difference is that in a PC encoding we only consider literals on input variables as assumptions and consequences of unit propagation. The definition of a propagation complete formula [8] does not distinguish input and auxiliary variables and the implication from (2) to (3) is required for all literals on all variables.
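For small formulas, the implication from (2) to (3) restricted to input literals can be verified exhaustively. The sketch below assumes the formula is already known to be an encoding of AMO (so the models of the formula projected to the inputs are exactly the models of the function), represents literals as nonzero integers, and uses our own naming throughout:

```python
from itertools import combinations, product

def unit_propagate(clauses, assumptions):
    derived, changed = set(assumptions), True
    while changed:
        changed = False
        for c in clauses:
            if any(l in derived for l in c):
                continue
            rest = [l for l in c if -l not in derived]
            if not rest:
                return derived, True          # contradiction detected
            if len(rest) == 1 and rest[0] not in derived:
                derived.add(rest[0]); changed = True
    return derived, False

def models(clauses, variables):
    # all total assignments of `variables` satisfying the clause set
    for bits in product((0, 1), repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any((l > 0) == bool(a[abs(l)]) for l in c) for c in clauses):
            yield a

def is_pc_encoding_of_amo(clauses, inputs, auxiliary):
    variables = list(inputs) + list(auxiliary)
    for k in range(len(inputs) + 1):
        for chosen in combinations(inputs, k):
            for signs in product((1, -1), repeat=k):
                alpha = {s * v for s, v in zip(signs, chosen)}
                ms = [m for m in models(clauses, variables)
                      if all((l > 0) == bool(m[abs(l)]) for l in alpha)]
                derived, conflict = unit_propagate(clauses, alpha)
                if not ms:
                    if not conflict:
                        return False  # inconsistency missed by unit propagation
                    continue
                for v in inputs:   # every entailed input literal must be derived
                    if all(m[v] == 1 for m in ms) and v not in derived:
                        return False
                    if all(m[v] == 0 for m in ms) and -v not in derived:
                        return False
    return True

# The pairwise representation of AMO_3 is a prime 2CNF, hence a PC encoding.
assert is_pc_encoding_of_amo([(-1, -2), (-1, -3), (-2, -3)], [1, 2, 3], [])
```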
The following notation is used in the rest of the paper. Let $l_1, \dots, l_k$ be literals on variables in $\mathbf{x} \cup \mathbf{y}$. Then $\mathrm{UP}(\varphi, l_1, \dots, l_k)$ denotes the set of literals that can be derived by unit resolution from $\varphi \wedge l_1 \wedge \dots \wedge l_k$, that is
$\mathrm{UP}(\varphi, l_1, \dots, l_k) = \{\, l \mid \varphi \wedge l_1 \wedge \dots \wedge l_k \vdash_1 l \,\}.$
2.4 Propagation Complete Encodings of $\mathrm{AMO}_n$ and $\mathrm{EO}_n$
Propagation complete encodings of $\mathrm{AMO}_n$ and $\mathrm{EO}_n$ share two common properties which we capture under the notion of P-encoding.
Definition 2.5 (P-encoding).
Let $\varphi(\mathbf{x}, \mathbf{y})$ be a formula with $\mathbf{x} = (x_1, \dots, x_n)$, $\mathbf{y} = (y_1, \dots, y_m)$, $n \ge 2$, $m \ge 0$. We say that $\varphi$ is a P-encoding if it satisfies the following two conditions:
1. $\varphi \wedge x_i$ is satisfiable for each $i \in \{1, \dots, n\}$,
2. $\neg x_j \in \mathrm{UP}(\varphi, x_i)$ holds for each $i, j \in \{1, \dots, n\}$ with $i \neq j$.
One can easily verify that being a P-encoding is a necessary condition for a formula to be a propagation complete encoding of $\mathrm{AMO}_n$ or $\mathrm{EO}_n$.
Lemma 2.6.
Let $\varphi$ be a PC encoding of $\mathrm{AMO}_n$ or $\mathrm{EO}_n$. Then $\varphi$ is a P-encoding.
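The two conditions of Definition 2.5 can be tested by brute force on small formulas; a sketch with our own naming, assuming DIMACS-style integer literals:

```python
from itertools import product

def unit_propagate(clauses, assumptions):
    derived, changed = set(assumptions), True
    while changed:
        changed = False
        for c in clauses:
            if any(l in derived for l in c):
                continue
            rest = [l for l in c if -l not in derived]
            if not rest:
                return derived, True
            if len(rest) == 1 and rest[0] not in derived:
                derived.add(rest[0]); changed = True
    return derived, False

def satisfiable_with(clauses, variables, literal):
    # brute-force satisfiability of (clauses AND literal)
    for bits in product((0, 1), repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any((l > 0) == bool(a[abs(l)]) for l in c) for c in clauses):
            if (literal > 0) == bool(a[abs(literal)]):
                return True
    return False

def is_p_encoding(clauses, inputs, auxiliary):
    variables = list(inputs) + list(auxiliary)
    # Condition 1: the formula together with x_i is satisfiable for each input.
    if not all(satisfiable_with(clauses, variables, i) for i in inputs):
        return False
    # Condition 2: unit propagation from x_i derives NOT x_j for each j != i.
    for i in inputs:
        derived, _ = unit_propagate(clauses, {i})
        if any(-j not in derived for j in inputs if j != i):
            return False
    return True

# The pairwise representation of AMO_3 satisfies both conditions.
assert is_p_encoding([(-1, -2), (-1, -3), (-2, -3)], [1, 2, 3], [])
```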
2.5 The Main Result
Let us introduce the following notation.
Definition 2.7.
We denote the minimum size of a P-encoding with $n$ input variables by $p(n)$ and the minimum size of a 2CNF P-encoding with $n$ input variables by $p_2(n)$.
We pay special attention to 2CNF encodings of $\mathrm{AMO}_n$. The minimum size of these encodings is determined in Section 6. One can prove by contradiction that there are no 2CNF encodings of $\mathrm{EO}_n$ with $n \ge 3$ input variables as follows. Given an encoding $\varphi$ of $\mathrm{EO}_n$, we can eliminate an auxiliary variable $y$ from $\varphi$ by removing the clauses containing $y$ or $\neg y$ and replacing them with the resolvents of all pairs of these clauses resolvable using the variable $y$. We call this step DP-elimination of $y$, since its repetition for all variables is one of the parts of the Davis-Putnam algorithm [12] (see also [7]). After eliminating all auxiliary variables, the remaining formula is a 2CNF representation of $\mathrm{EO}_n$, since 2CNF formulas are closed under resolution. This is a contradiction, since $x_1 \vee \dots \vee x_n$ is a prime implicate of $\mathrm{EO}_n$ and it is not a binary clause for $n \ge 3$.
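The DP-elimination step described above can be sketched directly; the function below is our own illustration, not code from the paper:

```python
from itertools import product

def dp_eliminate(clauses, var):
    # Remove all clauses containing var or -var and add all resolvents on var.
    pos = [c for c in clauses if var in c]
    neg = [c for c in clauses if -var in c]
    rest = [c for c in clauses if var not in c and -var not in c]
    for cp, cn in product(pos, neg):
        resolvent = (set(cp) - {var}) | (set(cn) - {-var})
        if not any(-l in resolvent for l in resolvent):  # drop tautologies
            rest.append(tuple(sorted(resolvent)))
    return rest

# Eliminating y from (NOT x1 OR y) AND (NOT y OR NOT x2) yields (NOT x1 OR NOT x2).
y = 3
assert dp_eliminate([(-1, y), (-y, -2)], y) == [(-2, -1)]
```

Since the resolvent of two binary clauses has at most two literals, repeated elimination keeps a 2CNF formula 2CNF, which is the closure property used in the argument above.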
We are now ready to state the main result of this paper.
Theorem 2.8.
Every PC encoding of $\mathrm{AMO}_n$ or $\mathrm{EO}_n$ has size at least the minimum size of a P-encoding, the smallest size of a 2CNF encoding of $\mathrm{AMO}_n$ is equal to the minimum size of a 2CNF P-encoding, and

For we have .

For we have .

For we have .
The lower bound for 2CNF encodings is tight, since for every $n$, there is a 2CNF encoding of $\mathrm{AMO}_n$ matching it, and this encoding is a P-encoding. The lower bound for general encodings is almost tight, since for every sufficiently large $n$, the product encoding [10] of $\mathrm{AMO}_n$ is a 2CNF P-encoding whose size exceeds the bound only by a lower order term. Moreover, in Section 3.4, we prove that the minimum size of a P-encoding is a close estimate of the minimum size of PC encodings of the functions $\mathrm{AMO}_n$ and $\mathrm{EO}_n$. Namely, for each of these functions, there is a PC encoding whose size exceeds this minimum by at most a small additive constant.
The first part of Theorem 2.8 follows from Lemma 2.6 and Lemma 6.1. Our proof of parts 2 and 3 relies on the notion of P-encodings in regular form, which we define below. Let $\varphi$ be a P-encoding with input variables $x_1, \dots, x_n$. Given a variable $x_i$, $1 \le i \le n$, unit propagation on the formula $\varphi \wedge x_i$ starts with clauses which contain the negative literal $\neg x_i$. It is important to distinguish different types of P-encodings according to the structure of these clauses. For each $i \in \{1, \dots, n\}$ let us denote
(4) $\varphi_{\neg x_i} = \{\, C \in \varphi \mid \neg x_i \in C \,\}.$
Definition 2.9.
Let $\varphi$ be a P-encoding with input variables $x_1, \dots, x_n$. We say that $\varphi$ is in regular form if the following holds for each $i \in \{1, \dots, n\}$:
1. $|\varphi_{\neg x_i}| = 2$,
2. clauses in $\varphi_{\neg x_i}$ contain no input variables other than $x_i$,
3. clauses in $\varphi_{\neg x_i}$ are binary.
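The three conditions can be checked mechanically on small formulas. The sketch below follows our reading of Definition 2.9 (in particular, condition 1 as "exactly two clauses contain the negative literal") and uses DIMACS-style integer literals:

```python
def neg_occurrence_set(clauses, i):
    # the set of clauses containing the negative literal -x_i
    return [c for c in clauses if -i in c]

def in_regular_form(clauses, inputs):
    for i in inputs:
        block = neg_occurrence_set(clauses, i)
        if len(block) != 2:
            return False                     # condition 1: exactly two clauses
        for c in block:
            if any(abs(l) in inputs and abs(l) != i for l in c):
                return False                 # condition 2: no other input variables
            if len(c) != 2:
                return False                 # condition 3: binary clauses
    return True

# Pairwise AMO_3: each input has two binary negative-occurrence clauses,
# but they mention other input variables, so the formula is not regular.
assert not in_regular_form([(-1, -2), (-1, -3), (-2, -3)], {1, 2, 3})
```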
It is interesting to note that the construction of the product encoding introduced by Chen [10] leads to an encoding in regular form and this form is probably the best for most values of $n$. On the other hand, there are infinitely many rare values of $n$ for which using a P-encoding not in regular form makes it possible to slightly reduce the size. This is used in the description of the product encoding in Section 3.2.
The following theorem is used later to reduce the analysis of the minimum size P-encodings to the analysis of P-encodings in regular form together with an induction argument. The theorem will be used for both general CNF and 2CNF formulas. Since the minimum size of a P-encoding can be different in these two classes of formulas, we do not use the assumption that $\varphi$ is a minimum size P-encoding and instead include an alternative (item 1 of the theorem) that has the same effect and can be used for both general CNF and 2CNF formulas.
Theorem 2.10.
Let $\varphi$ be a P-encoding with $n \ge 3$ input variables that is not in regular form. Then one of the following holds:
1. there is a P-encoding with $n$ input variables consisting of fewer clauses than $\varphi$,
2. there is a P-encoding $\varphi'$ with $n - 1$ input variables such that $|\varphi'| \le |\varphi| - 2$.
Moreover, if $\varphi$ is a 2CNF formula, then the P-encoding in the respective item can be chosen as a 2CNF formula as well.
We give a proof of Theorem 2.10 in Section 4. This theorem allows the following approach to proving a lower bound. If a given P-encoding is not in regular form, we use induction on $n$, and if it is in regular form, we prove a lower bound directly. Although we later combine Theorem 2.10 with additional arguments to prove stronger lower bounds, the following simple corollary of this theorem already gives the main term of the lower bound.
Corollary 2.11.
Let $\varphi$ be a P-encoding with $n \ge 2$ input variables. Then $\varphi$ consists of at least $2n - 1$ clauses.
Proof.
Assume that $\varphi$ is a minimum size P-encoding for $n$ input variables. Without loss of generality, we can assume that it is a prime P-encoding (see also Lemma 3.5 below). We prove by induction on $n$ that $|\varphi| \ge 2n - 1$, which is the stated lower bound. For $n = 2$, one can check (see also Lemma 5.1 below) that $\varphi$ must contain at least three clauses, thus $|\varphi| \ge 3 = 2n - 1$. Now let us assume that $n \ge 3$. Since $\varphi$ is of minimum size, no P-encoding with $n$ input variables with fewer clauses exists and item 1 of Theorem 2.10 does not apply. If $\varphi$ is not in regular form then by item 2 of Theorem 2.10, there is a P-encoding $\varphi'$ with $n - 1$ input variables such that $|\varphi'| \le |\varphi| - 2$. By the induction hypothesis $|\varphi'| \ge 2(n - 1) - 1$, thus
$|\varphi| \ge |\varphi'| + 2 \ge 2n - 1.$
If $\varphi$ is in regular form, then we have $|\varphi| \ge 2n$, since by definition, the union of $\varphi_{\neg x_i}$, $i \in \{1, \dots, n\}$, contains $2n$ clauses. ∎
3 Known and Auxiliary Results
In this section we state known and preliminary results used throughout the paper. We start by recalling some of the known good encodings of $\mathrm{AMO}_n$, with some modifications.
3.1 Sequential Encoding
Let us present a variant of the sequential encoding [21], which addresses also more general cardinality constraints. This construction has also been called the ladder encoding in [17]. The following recurrence describes the sequential encoding of $\mathrm{AMO}_n$ with a minor simplification which reduces its size to $3n - 6$ and the number of auxiliary variables to $n - 3$. The base case is
$\varphi_3(x_1, x_2, x_3) = (\neg x_1 \vee \neg x_2) \wedge (\neg x_1 \vee \neg x_3) \wedge (\neg x_2 \vee \neg x_3)$
and for each $n \ge 4$, let
$\varphi_n(x_1, \dots, x_n) = (\neg x_1 \vee \neg x_2) \wedge (\neg x_1 \vee y) \wedge (\neg x_2 \vee y) \wedge \varphi_{n-1}(y, x_3, \dots, x_n),$
where $y$ is an auxiliary variable not used in $\varphi_{n-1}$. By induction on $n$, one can verify that $\varphi_n$ is an encoding of $\mathrm{AMO}_n$ with $n - 3$ auxiliary variables and of size $3n - 6$. Since it is a prime 2CNF, it is propagation complete, see [3]. Hence, we have the following.
Lemma 3.1.
For every $n \ge 3$, there is a 2CNF PC encoding of $\mathrm{AMO}_n$ of size $3n - 6$.
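One standard way to generate a sequential/ladder AMO encoding can be sketched as follows; the exact base case, variable numbering, and helper names are our choices, and the clause counts are checked by brute force:

```python
from itertools import product

def sequential_amo(n):
    # Inputs are 1..n, auxiliary variables are n+1, n+2, ...
    # Each step merges the two current "first" variables into a fresh auxiliary.
    assert n >= 3
    first, second = 1, 2
    rest = list(range(3, n + 1))
    aux = n + 1
    clauses = []
    while len(rest) > 1:
        clauses += [(-first, -second), (-first, aux), (-second, aux)]
        first, second = aux, rest.pop(0)
        aux += 1
    z = rest[0]  # base case: pairwise AMO on the last three variables
    clauses += [(-first, -second), (-first, -z), (-second, -z)]
    return clauses, aux - 1  # clause list and the largest variable index used

def encodes_amo(clauses, n, top):
    # Brute force: the projection of the models onto inputs 1..n must be AMO_n.
    for bits in product((0, 1), repeat=n):
        sat = False
        for ext in product((0, 1), repeat=top - n):
            full = dict(enumerate(bits + ext, start=1))
            if all(any((l > 0) == bool(full[abs(l)]) for l in c) for c in clauses):
                sat = True
                break
        if sat != (sum(bits) <= 1):
            return False
    return True

clauses, top = sequential_amo(6)
assert len(clauses) == 3 * 6 - 6        # 3n - 6 clauses
assert top - 6 == 6 - 3                  # n - 3 auxiliary variables
assert encodes_amo(clauses, 6, top)
```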
Since $\mathrm{AMO}_n$ is a symmetric function, the order of the variables in the formula for this function can be chosen arbitrarily without changing the function. When a different order of the variables is used in a recurrence, the obtained formula has a different form. Let us introduce the tree encoding $\tau_n$ by the following recurrence. The base case is
$\tau_3(x_1, x_2, x_3) = (\neg x_1 \vee \neg x_2) \wedge (\neg x_1 \vee \neg x_3) \wedge (\neg x_2 \vee \neg x_3)$
and for each $n \ge 4$, let
$\tau_n(x_1, \dots, x_n) = (\neg x_1 \vee \neg x_2) \wedge (\neg x_1 \vee y) \wedge (\neg x_2 \vee y) \wedge \tau_{n-1}(x_3, \dots, x_n, y),$
where $y$ is an auxiliary variable not used in $\tau_{n-1}$. By a similar argument as above, $\tau_n$ is a PC encoding of $\mathrm{AMO}_n$.
The sizes of the sequential and the tree encoding are the same and both are 2CNF formulas. Let us consider a graph whose vertices are the variables and whose edges are the two-element sets $\{\mathrm{var}(l_1), \mathrm{var}(l_2)\}$, where $l_1 \vee l_2$ is a clause of the formula. Both formulas can be decomposed into triples of clauses which correspond to triangles in this graph, and the triangles are connected via their vertices into a tree structure. In the graph for the sequential encoding, these triangles form a path, and in the graph for the tree encoding, they form a tree with diameter $O(\log n)$.
3.2 Product Encoding
Chen [10] introduced the product encoding of $\mathrm{AMO}_n$, which has size $2n + O(\sqrt{n})$. The product encoding outperforms the sequential encoding only once $n$ is sufficiently large. On the other hand, we show below that the sequential encoding is the smallest possible for small values of $n$. It is not clear up to which value of $n$ this remains the case.
Let us present a slightly optimized version of the product encoding $\chi_n$, using a combination with the sequential encoding for some values of $n$. The combination is obtained by considering two candidates for the recursive construction of the product encoding and using the better of them for each $n$. The base case is
$\chi_3(x_1, x_2, x_3) = (\neg x_1 \vee \neg x_2) \wedge (\neg x_1 \vee \neg x_3) \wedge (\neg x_2 \vee \neg x_3).$
If $n \ge 4$, the first candidate formula for $\chi_n(x_1, \dots, x_n)$ is
(5) $(\neg x_1 \vee \neg x_2) \wedge (\neg x_1 \vee y) \wedge (\neg x_2 \vee y) \wedge \chi_{n-1}(y, x_3, \dots, x_n)$
as in the recurrence used for the sequential encoding. For small $n$, let $\chi_n$ be (5). For larger $n$, we also use the formula described by Chen [10]. Let $p = \lceil \sqrt{n} \rceil$ and $q = \lceil n / p \rceil$. Clearly, we have $pq \ge n$. Arrange the input variables in pairwise different cells of a rectangular array of dimension $p \times q$. Let $a$ and $b$ be the functions such that $a(i)$ is the row index and $b(i)$ the column index of the cell containing $x_i$. Let $r_1, \dots, r_p$ and $c_1, \dots, c_q$ be new auxiliary variables. Then, the second candidate for $\chi_n(x_1, \dots, x_n)$ is the formula
(6) $\bigwedge_{i=1}^{n} \bigl[(\neg x_i \vee r_{a(i)}) \wedge (\neg x_i \vee c_{b(i)})\bigr] \wedge \chi_p(r_1, \dots, r_p) \wedge \chi_q(c_1, \dots, c_q).$
It is worth noting that formula (6) is in regular form, see Definition 2.9. The size of (5) is $|\chi_{n-1}| + 3$ and the size of (6) is $2n + |\chi_p| + |\chi_q|$. Let $\chi_n$ be the smaller of these formulas, where any of the candidates can be used if their sizes are the same. It appears that formula (5) is smaller than (6) for some small values of $n$ and for infinitely many other numbers, in particular for values of $n$ just above the thresholds where $\lceil \sqrt{n} \rceil$ increases, since there the size of the candidate (6) jumps while the size of the candidate (5) grows only by $3$ with each additional variable.
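A single level of the construction behind candidate (6) can be sketched as follows; the grid layout and variable numbering are our choices, and the row/column selectors are closed with the pairwise representation instead of a recursive call:

```python
from itertools import product
from math import ceil, isqrt

def product_amo(n):
    # One level of the product construction: place inputs 1..n in a p x q grid,
    # add row/column selector variables, and close them with pairwise AMO.
    p = isqrt(n)
    if p * p < n:
        p += 1                     # p = ceil(sqrt(n))
    q = ceil(n / p)
    rows = list(range(n + 1, n + p + 1))
    cols = list(range(n + p + 1, n + p + q + 1))
    clauses = []
    for i in range(n):
        a, b = divmod(i, q)        # row and column of input i+1
        clauses.append((-(i + 1), rows[a]))
        clauses.append((-(i + 1), cols[b]))
    for group in (rows, cols):     # pairwise AMO on the selectors
        clauses += [(-u, -v) for k, u in enumerate(group) for v in group[k + 1:]]
    return clauses, n + p + q

def encodes_amo(clauses, n, top):
    # Brute force: the projection of the models onto inputs 1..n must be AMO_n.
    for bits in product((0, 1), repeat=n):
        sat = False
        for ext in product((0, 1), repeat=top - n):
            full = dict(enumerate(bits + ext, start=1))
            if all(any((l > 0) == bool(full[abs(l)]) for l in c) for c in clauses):
                sat = True
                break
        if sat != (sum(bits) <= 1):
            return False
    return True

clauses, top = product_amo(6)
assert encodes_amo(clauses, 6, top)
```

Two inputs in the same row force two different column selectors, two inputs in different rows force two different row selectors; either way the pairwise clauses on the selectors are violated, which is the core of the construction.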
Clearly, the size of $\chi_n$ is at most $|\chi_{n-1}| + 3$, which is the size of (5), and using this, one can prove by induction on $n$ that for all $n \ge 3$, we have
(7) $|\chi_n| \le 3n - 6.$
Asymptotically, a better bound was proven by Chen [10]. We present here a proof of this bound for completeness.
Lemma 3.2 (Chen [10]).
We have $|\chi_n| \le 2n + 4\sqrt{n} + O(\sqrt[4]{n})$.
3.3 Relationship to Monotone Circuits
Let us briefly describe a connection between the product encoding and the monotone circuit of size $O(n)$ for a threshold function, described in [14] and in Section 6, Theorem 2.3 in [22]. In the simplest case, the construction yields the product encoding for $\mathrm{AMO}_n$. More generally, if $k$ is any constant, we obtain a PC encoding of the “at most $k$” constraint of size $O(n)$. This is the smallest known encoding for this constraint if $k$ is small. On the other hand, for larger $k$, a smaller encoding can be obtained using Batcher's sorting networks. For large $n$, an even smaller encoding can be obtained using AKS sorting networks.
Let $\mathrm{Th}^n_{k+1}$ be the threshold function “at least $k+1$” of $n$ variables. By the results cited above, there is a monotone circuit for this function consisting of $O(n)$ binary AND and OR gates. In order to obtain asymptotically the same number of clauses in an encoding, the circuit has to be transformed in such a way that we replace groups of binary OR gates computing a disjunction of several previous gates by a single OR gate with multiple inputs. The Horn part of the Tseitin encoding of the circuit after this transformation consists of $O(n)$ clauses. If we add a negative unit clause on the output of the circuit, we obtain an encoding of the “at most $k$” constraint. Moreover, using the specific structure of the circuit, one can verify that this encoding is propagation complete. In particular, if $k = 1$, the obtained encoding is the product encoding of the constraint “at most one”.
3.4 P-Encodings and Encodings of $\mathrm{AMO}_n$ and $\mathrm{EO}_n$
We use P-encodings as a representation of common properties of PC encodings of $\mathrm{AMO}_n$ and $\mathrm{EO}_n$. Although the converse of Lemma 2.6 is not true (see below for an example), a partial converse is valid.
Lemma 3.3.
A P-encoding with $n$ input variables is an encoding (not necessarily propagation complete) of either $\mathrm{AMO}_n$ or $\mathrm{EO}_n$.
Proof.
Consider a P-encoding $\varphi(\mathbf{x}, \mathbf{y})$, where $\mathbf{x} = (x_1, \dots, x_n)$. The functions $\mathrm{AMO}_n$ and $\mathrm{EO}_n$ differ only on the zero assignment. In order to prove the statement, it is sufficient to prove that the function encoded by $\varphi$ agrees with $\mathrm{AMO}_n$ and $\mathrm{EO}_n$ on the remaining assignments.
Consider a nonzero assignment $\mathbf{a}$ of the input variables. First, assume that $a_i = a_j = 1$ for two different indices $i$ and $j$. Such an $\mathbf{a}$ is a falsifying assignment of both functions $\mathrm{AMO}_n$ and $\mathrm{EO}_n$. By condition 2, $\mathbf{a}$ cannot be extended to a model of $\varphi$. For the remaining case assume that $a_i = 1$ for some $i$ and $a_j = 0$ for all $j \neq i$. Such an $\mathbf{a}$ is a satisfying assignment of both the functions $\mathrm{AMO}_n$ and $\mathrm{EO}_n$. By condition 1 we have that $\varphi \wedge x_i$ is satisfiable, and condition 2 implies that any satisfying assignment of $\varphi \wedge x_i$ sets all other input variables to $0$. This means that $\mathbf{a}$ can be extended to a satisfying assignment of $\varphi$. ∎
Consider the formula
where $\mathrm{AMO}_n(\mathbf{x})$ is the prime representation of $\mathrm{AMO}_n$ and $y$ is an auxiliary variable. One can verify that this formula is a P-encoding and an encoding of $\mathrm{EO}_n$; however, it is not a PC encoding of $\mathrm{EO}_n$. This implies that the converse of Lemma 2.6 is not true.
Although we believe that the smallest PC encodings of $\mathrm{AMO}_n$ and of $\mathrm{EO}_n$ have size equal to the minimum size of a P-encoding, we can prove only the following bounds.
Proposition 3.4.
Let $n \ge 2$, let $\varphi_A$ be a smallest PC encoding of $\mathrm{AMO}_n$ and let $\varphi_E$ be a smallest PC encoding of $\mathrm{EO}_n$. Then
Proof.
The lower bounds follow from Lemma 2.6. Let $\varphi$ be a P-encoding of minimum size. One can verify that
(9) 
where is a new auxiliary variable, is a PC encoding of . This implies . Moreover, one can verify that
is a PC encoding of . This implies . ∎
3.5 Simple Reductions of Encodings
In this section we present additional properties of encodings that can be assumed without loss of generality, since every encoding can be modified to satisfy them without increasing its size.
Lemma 3.5.
The prime CNF formula obtained from a given P-encoding by replacing every clause with a prime implicate contained in it is also a P-encoding.
Proof.
If the number of occurrences of an auxiliary variable $y$ in an encoding is at most 4, then DP-elimination of $y$ does not increase the size of the formula (see Section 2.5 for the definition of DP-elimination) and leads to an encoding of the same function with a smaller number of auxiliary variables. Indeed, if $y$ has $p$ positive and $q$ negative occurrences with $p + q \le 4$, then the number of resolvents is at most $pq \le p + q$, the number of removed clauses. This allows us to make the following observation.
Lemma 3.6.
Let $\varphi$ be an encoding of a function $f$ of minimum size that, moreover, has the minimum number of auxiliary variables among such encodings. Then any auxiliary variable occurs in at least 5 clauses of $\varphi$.
With a little effort one can show that DP-elimination also preserves propagation completeness of an encoding. In particular, Lemma 3.6 holds also for a PC encoding of minimum size; however, this is not used in this paper.
3.6 Substituting Variables in Unit Propagation
One of the reduction steps we use later to simplify an encoding is the substitution of a variable with a literal on a variable already present in the formula. If $\varphi$ is a formula and $l_1, l_2$ are literals, we denote by $\varphi[l_1 \mapsto l_2]$ the formula obtained from $\varphi$ using the substitution of $l_2$ for $l_1$. More precisely, if the literal $l_1$ is positive, then the variable $\mathrm{var}(l_1)$ is substituted by the literal $l_2$. If $l_1$ is negative, then the variable $\mathrm{var}(l_1)$ is substituted by the literal $\neg l_2$. An important property of this operation is that this kind of substitution preserves unit propagation.
Lemma 3.7.
Let be a formula, let , such that and assume, is satisfiable. Then
Lemma 3.7 is a consequence of a more general statement with essentially the same proof, which we are going to show first. Let us consider a substitution $\sigma$ of the variables in $\mathrm{var}(\varphi)$ by literals on the same set of variables. The substitution extends to the literals so that for every variable $x$, we have $\sigma(\neg x) = \neg \sigma(x)$. Moreover, the substitution extends to the clauses and the formulas in CNF as follows. If $C = l_1 \vee \dots \vee l_k$ is a clause with variables from $\mathrm{var}(\varphi)$, then $\sigma(C)$ is defined as $\sigma(l_1) \vee \dots \vee \sigma(l_k)$ if there is no complementary pair of literals among $\sigma(l_1), \dots, \sigma(l_k)$, and as $\top$ otherwise. If $\varphi$ is a CNF formula, then $\sigma(\varphi)$ consists of the clauses $\sigma(C)$ for $C \in \varphi$ with $\sigma(C) \neq \top$. In particular, $\varphi[l_1 \mapsto l_2] = \sigma(\varphi)$, where $\sigma$ is a map on the literals defined for every literal $l$ as
(10) $\sigma(l) = l_2$ if $l = l_1$, $\sigma(l) = \neg l_2$ if $l = \neg l_1$, and $\sigma(l) = l$ otherwise.
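The extension of a substitution to clauses and formulas, including the removal of tautological images, can be sketched as follows (representation and names are ours):

```python
def apply_substitution(sigma, clauses):
    # sigma maps each variable to a literal; it extends to negative literals by
    # sigma(-x) = -sigma(x), then to clauses, dropping tautological images.
    def lit(l):
        return sigma[abs(l)] if l > 0 else -sigma[abs(l)]
    out = []
    for c in clauses:
        image = {lit(l) for l in c}
        if not any(-l in image for l in image):   # skip clauses mapped to "true"
            out.append(tuple(sorted(image)))
    return out

# Substituting x2 by NOT x1 turns (NOT x1 OR NOT x2) into the tautology
# (NOT x1 OR x1), which is dropped; (x2 OR x3) becomes (NOT x1 OR x3).
sigma = {1: 1, 2: -1, 3: 3}
assert apply_substitution(sigma, [(-1, -2), (2, 3)]) == [(-1, 3)]
```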
Applying a substitution to a formula preserves resolution proofs as we show in the following lemma.
Lemma 3.8.
Let $\varphi$ be a formula and let $C_1, \dots, C_t$ be a resolution proof of $C_t$ from $\varphi$. If $\sigma$ is a substitution as above, then there is a sequence $D_1, \dots, D_t$, where each $D_i$ is a clause or $\top$, such that for each $i$, either $D_i = \top$ or $D_i \subseteq \sigma(C_i)$,
and the sequence of clauses contained in $D_1, \dots, D_t$ is a resolution proof of the clauses contained in it from the clauses in $\sigma(\varphi)$. If the original proof is a unit resolution proof, so is the derived proof.
Proof.
For each , we have either or , where . In order to prove the claim, let us construct by induction on . Some of the clauses can repeat in the constructed sequence. Assume, the sequence is constructed and is empty or satisfies the requirements formulated above. If , choose . If , then , where and , , and for some literal and sets of literals and . If , then choose . Otherwise, there is no conflict in .
If, moreover, the variable has no occurrence in , then and are clauses and are resolvable using the literals and . This implies , , and . If and are resolvable, then and we can choose . Otherwise, either and we have or and we have . Hence, we can choose or so that .
Assume that one of the literals and has an occurrence in . Since is a clause, only one of the literals and is contained in it. If , then and we can choose . Similarly, if , we can choose .
If is a unit clause, then is a unit clause. This implies the last statement of the lemma. ∎
4 Reducing to Regular Form
This section is devoted to the proof of Theorem 2.10. We start with basic properties of P-encodings.
Lemma 4.1.
Let $n \ge 2$ and let $\varphi$ be a P-encoding with input variables $x_1, \dots, x_n$ and auxiliary variables $\mathbf{y}$. For each two distinct $i, j \in \{1, \dots, n\}$ it holds that

,

,

$\varphi$ contains a binary clause containing the literal $\neg x_i$.
Proof.
Suppose that $\varphi$ satisfies the assumption. The claims of the lemma can be proven as follows.

Since $\neg x_j \in \mathrm{UP}(\varphi, x_i)$, there is a series of unit resolutions starting from $\varphi \wedge x_i$ whose first step uses a binary clause containing $\neg x_i$. ∎
The following lemma shows that fixing any set of input variables to zero in a P-encoding gives us a P-encoding on the remaining input variables.
Lemma 4.2.
Let $\varphi$ be a P-encoding with input variables $x_1, \dots, x_n$, let $I \subset \{1, \dots, n\}$ be a nonempty set of indices and consider the partial assignment $\alpha = \{\neg x_i \mid i \in I\}$. Then $\varphi(\alpha)$ is a P-encoding with the input variables $x_i$, $i \notin I$.
Proof.
We now concentrate on clauses with negative literals on input variables.
Lemma 4.3.
Let $\varphi$ be a prime P-encoding, $i \in \{1, \dots, n\}$, and $C \in \varphi_{\neg x_i}$. Then one of the following is satisfied:
1. $C = \neg x_i \vee D$, where $D \subseteq \mathrm{lit}(\mathbf{y})$,
2. $C = \neg x_i \vee \neg x_j$ for some $j \neq i$.
Proof.
We have $C = \neg x_i \vee D$ for a nonempty set of literals $D$. If $D$ contains a literal on an input variable, consider the following two cases.
If $x_j \in D$ for some $j \neq i$, then $C \setminus \{x_j\}$ is an implicate as well, which is in contradiction with the primality of $C$.
Otherwise, $\neg x_j \in D$ for some $j \neq i$ and, since $\neg x_i \vee \neg x_j$ is already an implicate and $C$ is prime, we obtain $C = \neg x_i \vee \neg x_j$. ∎
The following proposition shows that $|\varphi_{\neg x_i}| \ge 2$ for every input variable $x_i$ in a minimum size P-encoding $\varphi$.
Proposition 4.4.
Let $n \ge 2$ and let $\varphi$ be a P-encoding with input variables $x_1, \dots, x_n$. Let $i \in \{1, \dots, n\}$ and suppose that $|\varphi_{\neg x_i}| = 1$. Then there is another P-encoding $\varphi'$ with input variables $x_1, \dots, x_n$ satisfying $|\varphi'| < |\varphi|$. Moreover, if $\varphi$ is a 2CNF formula, then so is $\varphi'$.
Proof.
Using Lemma 3.5, we can assume that $\varphi$ is a prime formula. By Lemma 4.1, there is a binary clause $C = \neg x_i \vee l$ with some literal $l$. Let us assume for a contradiction that $l \in \mathrm{lit}(\mathbf{x})$. By Lemma 4.3, $C = \neg x_i \vee \neg x_j$ for some $j \neq i$. Let $k \notin \{i, j\}$. We have that $\neg x_i \in \mathrm{UP}(\varphi, x_k)$. Since $C$ is the only clause of $\varphi$ containing $\neg x_i$, unit resolution uses $C$ to derive $\neg x_i$ and does not use $\neg x_i$ in any of the later steps. Hence, we have $x_j \in \mathrm{UP}(\varphi, x_k)$, which is a contradiction with item 2 of Lemma 4.1. This implies $l \in \mathrm{lit}(\mathbf{y})$.
Consider the substitution $\sigma$ that maps $l$ to $x_i$ (and hence $\neg l$ to $\neg x_i$) and leaves the remaining literals unchanged, and let us show that $\varphi' = \sigma(\varphi)$ satisfies the conditions 1 and 2.

Let us show that is a satisfiable formula for each . If , we have that is satisfiable and that using the clause . Thus, the formula is satisfiable and both literals and get value in each of its satisfying assignments. It follows that is satisfiable as well.
If , we have . Since is the only clause in that contains , it holds that . Thus, both literals and get value in any satisfying assignment of . It follows that is satisfiable as well.

Follows from Lemma 3.7 for the formula and , where , .
The substitution changes $C = \neg x_i \vee l$ to the tautology $\neg x_i \vee x_i$, which is omitted in $\varphi'$. Hence $\varphi'$ has size smaller than $\varphi$. This completes the proof. ∎
Let us present an example of a formula which shows that Proposition 4.4 does not hold for PC encodings of instead of P-encodings. The formula (9) with auxiliary variables is a PC encoding of with a single occurrence of , for which the construction in Proposition 4.4 provides a formula which is a PC encoding of .
P-encodings that are not in regular form can be reduced to P-encodings with a smaller number of input variables by the following statements. We start with P-encodings which violate Condition 1 of Definition 2.9. Recall that this condition requires that $|\varphi_{\neg x_i}| = 2$ for every input variable $x_i$ of a P-encoding in regular form.
Proposition 4.5.
Let $\varphi$ be a P-encoding with input variables $x_1, \dots, x_n$ such that $|\varphi_{\neg x_i}| \neq 2$ for some $i$. Then, there is a formula $\varphi'$ which satisfies one of the following:
1. $\varphi'$ is a P-encoding with $n$ input variables and $|\varphi'| < |\varphi|$,
2. $\varphi'$ is a P-encoding with $n - 1$ input variables and