Hunting for Tractable Languages for Judgment Aggregation
Abstract
Judgment aggregation is a general framework for collective decision making that can be used to model many different settings. Due to its general nature, the worst case complexity of essentially all relevant problems in this framework is very high. However, these intractability results are mainly due to the fact that the language to represent the aggregation domain is overly expressive. We initiate an investigation of representation languages for judgment aggregation that strike a balance between (1) being limited enough to yield computational tractability results and (2) being expressive enough to model relevant applications. In particular, we consider the languages of Krom formulas, (definite) Horn formulas, and Boolean circuits in decomposable negation normal form (DNNF). We illustrate the use of the positive complexity results that we obtain for these languages with a concrete application: voting on how to spend a budget (i.e., participatory budgeting).
Ronald de Haan, Institute for Logic, Language and Computation, University of Amsterdam, me@ronalddehaan.eu
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Introduction
Judgment aggregation is a general framework to study methods for collective opinion forming, which has been investigated in the area of computational social choice (see, e.g., ? ?, ? ?). The framework is set up in such a general way that it can be used to model an extremely wide range of scenarios—including, e.g., the setting of voting (?). On the one hand, this generality is an advantage: methods studied in judgment aggregation can be employed in all these scenarios. On the other hand, however, this generality severely hinders the use of judgment aggregation methods in applications. Because there are no restrictions on the type of aggregation settings that are modeled, relevant computational tasks across the board are computationally intractable in the worst case. In other words, no performance guarantees are available that warrant the efficient use of judgment aggregation methods for applications—not even for simple settings. For example, computing the outcome of a judgment aggregation scenario is NP-hard for all aggregation procedures studied in the literature that satisfy the rudimentary quality condition of consistency (?; ?; ?).
These negative computational complexity results are in many cases due purely to the expressivity of the language used to represent aggregation scenarios (full propositional logic, or CNF formulas)—not to the structure of the scenario being modeled. In other words, the known negative complexity results draw an overly negative picture.
To correct this gloomy and misleading image, a more detailed and more fine-grained perspective is needed on the way that application settings are modeled in the general framework of judgment aggregation. In this paper, we take a first look at the complexity of judgment aggregation scenarios using this more sensitive point of view. That is, we initiate an investigation of representation languages for judgment aggregation that (1) are modest enough to yield positive complexity results for relevant computational tasks, yet (2) are general enough to model interesting and relevant applications.
Concretely, we look at several restricted propositional languages that strike a balance between expressivity and tractability in other settings, and we study to what extent such a balance is attained in the setting of judgment aggregation. In particular, we look at Krom (2CNF), Horn and definite Horn formulas, and we consider the class of Boolean circuits in decomposable negation normal form (DNNF). We study the impact of these restricted languages on the complexity of computing outcomes for a number of judgment aggregation procedures studied in the literature. We obtain a wide range of (positive and negative) results. Most of the results we obtain are summarized in Tables 3, 4 and 5, located in later sections.
In particular, we obtain several interesting positive complexity results for the case where the domain is represented using a Boolean circuit in DNNF. Additionally, we illustrate how this representation language of Boolean circuits in DNNF—that combines expressivity and tractability—can be used to get tractability results for a specific application: voting on how to spend a budget. This application setting can be seen as an instantiation of the setting of Participatory Budgeting (see, e.g., ? ?).
Related Work
Judgment aggregation has been studied in the field of computational social choice from (among others) a philosophy, economics, and computer science perspective (see, e.g., ? ?, ? ?, ? ?, ? ?, ? ?, ? ?). The complexity of computing outcomes for judgment aggregation procedures has been studied by, among others, ? (?), ? (?), ? (?), ? (?) and ? (?). See Table 2 for complexity results that are relevant for this paper.
Roadmap
We begin by explaining the framework of judgment aggregation. We then study to what extent the known languages of Krom and (definite) Horn formulas lead to suitable results for judgment aggregation. We continue by looking at the class of DNNF circuits—studied in the field of knowledge compilation—and we illustrate how results for this class of circuits can be used for a concrete application of judgment aggregation (that of voting on how to allocate a budget). We conclude with outlining some promising ways in which the research path that we set out can be followed.
An overview of notions from propositional logic and computational complexity theory that we use can be found in the appendix. The proofs of some results are omitted from the main paper and are located in the additional material at the end—these results are marked with a star (⋆).
Judgment Aggregation
We begin by introducing the setting of Judgment Aggregation (?; ?; ?; ?). In this paper, we will use a variant of the framework that has been studied by, e.g., ? (?), ? (?) and ? (?).¹

¹ This framework is also known under the name of binary aggregation with integrity constraints, and can be used interchangeably with other Judgment Aggregation frameworks from the literature—as shown by ? (?).
Let I = {x_1, …, x_n} be a finite set of issues, in the form of propositional variables. Intuitively, these issues are the topics about which the individuals want to combine their judgments. A truth assignment r : I → {0, 1} is called a ballot, and represents an opinion that individuals and the group can have. We will also denote a ballot r by the binary vector (r(x_1), …, r(x_n)). Moreover, we say that a partial truth assignment σ is a partial ballot, and that σ agrees with a ballot r if r(x_i) = σ(x_i) whenever σ(x_i) is defined, for all x_i ∈ I. We use an integrity constraint Γ to restrict the set of feasible opinions (for both the individuals and the group). The integrity constraint Γ is a propositional formula (or, more generally, a single-output Boolean circuit) whose variables can include x_1, …, x_n. We define the set of rational ballots to be the ballots that are consistent with the integrity constraint Γ. We say that finite sequences P = (r_1, …, r_p) of rational ballots are profiles. A profile contains a ballot for each individual participating in the judgment aggregation scenario. Where convenient, we equate a profile P with the multiset containing r_1, …, r_p.
A judgment aggregation procedure (or rule) F, for the set I of issues and the integrity constraint Γ, is a function that takes as input a profile P, and that produces a nonempty set F(P) of ballots. A procedure F is called consistent if for all Γ and P it holds that each ballot in F(P) is consistent with Γ. Consistency is a central requirement for judgment aggregation procedures, and all rules that we consider in this paper are consistent.
An example of a simple judgment aggregation procedure is the majority rule (defined for profiles with an odd number of ballots). We let the majority outcome be the partial ballot m(P) such that, for each x_i ∈ I: m(P)(x_i) = 1 if a strict majority of the ballots in P set x_i to 1; m(P)(x_i) = 0 if a strict majority of the ballots in P set x_i to 0; and m(P)(x_i) is undefined otherwise. The majority rule returns the majority outcome m(P). The majority rule is efficient to compute, but is not consistent (as shown in Example 1).
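As an illustration (not part of the original formal development), the majority outcome can be sketched in a few lines of Python; the encoding of ballots as 0/1 tuples indexed by issue is our assumption, and `None` marks an issue on which neither value has a strict majority:

```python
from typing import Optional, Sequence, Tuple

Ballot = Tuple[int, ...]  # one 0/1 entry per issue

def majority_outcome(profile: Sequence[Ballot]) -> Tuple[Optional[int], ...]:
    """Issue-wise majority vote; None marks a tie (entry left undefined)."""
    p = len(profile)
    outcome = []
    for i in range(len(profile[0])):
        ones = sum(ballot[i] for ballot in profile)
        if 2 * ones > p:            # strict majority for 1
            outcome.append(1)
        elif 2 * (p - ones) > p:    # strict majority for 0
            outcome.append(0)
        else:                       # tie: issue stays undefined
            outcome.append(None)
    return tuple(outcome)
```

On the classic discursive-dilemma profile (1,1,1), (1,0,0), (0,1,0) under a constraint requiring the third issue to equal the conjunction of the first two, this returns (1,1,0), which violates the constraint—illustrating why the majority rule is not consistent.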
Example 1.
Consider the judgment aggregation scenario where the issues, the integrity constraint Γ, and the profile P are as shown in Table 1. The majority outcome m(P) is inconsistent with Γ.
Judgment Aggregation Procedures
Next, we introduce the judgment aggregation rules that we use in this paper. These procedures are all consistent and include many of those that have been studied in the literature (for an overview see, e.g., ? ?).
Several procedures that we consider can be seen as instantiations of a general template: scoring procedures. Let I be a set of issues and Γ an integrity constraint. Moreover, let s be a scoring function that assigns a value s_r(l) to each literal l with respect to a ballot r. The scoring judgment aggregation procedure F_s that corresponds to s is defined as follows:

    F_s(P) = argmax over rational ballots r* of Σ_{r ∈ P} Σ_{l satisfied by r*} s_r(l).

That is, F_s selects the rational ballots r* that maximize the cumulative score of all literals agreeing with r*, with respect to all ballots r ∈ P.
The median (or Kemeny) procedure med is based on the scoring function s defined by letting s_r(l) = 1 if r satisfies l, and s_r(l) = 0 otherwise, for each literal l and each ballot r. Alternatively, the med procedure can be defined as the rule that selects the rational ballots that minimize the cumulative Hamming distance to the profile P. The Hamming distance d(r, r′) between two ballots r and r′ is the number of issues on which they differ, i.e., d(r, r′) = |{ x_i ∈ I : r(x_i) ≠ r′(x_i) }|.
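For small instances, the distance-based definition of med can be checked directly by exhaustive search. The sketch below is ours and is exponential in the number of issues (precisely the bottleneck the rest of the paper addresses); the integrity constraint is modeled, as an assumption, by an arbitrary Python predicate over 0/1 tuples:

```python
from itertools import product

def hamming(b1, b2):
    """Number of issues on which two ballots differ."""
    return sum(x != y for x, y in zip(b1, b2))

def med_outcomes(n_issues, constraint, profile):
    """Median (Kemeny) rule by brute force: the rational ballots that
    minimize the cumulative Hamming distance to the profile."""
    rational = [b for b in product((0, 1), repeat=n_issues) if constraint(b)]
    total = {b: sum(hamming(b, r) for r in profile) for b in rational}
    best = min(total.values())
    return [b for b in rational if total[b] == best]
```

On the discursive-dilemma profile (1,1,1), (1,0,0), (0,1,0) with the constraint that the third issue equals the conjunction of the first two, three rational ballots tie at cumulative distance 4, so med returns all three.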
The reversal scoring procedure rev is based on the scoring function s such that, for each literal l and each ballot r, the score s_r(l) is the minimal number of issues whose truth value needs to be flipped in r to get a rational ballot that sets l to false.
The maxcard Condorcet (or Slater) procedure mcc is also based on the Hamming distance. Let P be a profile. The mcc procedure selects the rational ballots that minimize the Hamming distance to the majority outcome m(P).
The Young procedure young selects those ballots that can be obtained as a rational majority outcome by deleting a minimal number of ballots from the profile. Let P be a profile, and let k(P) denote the smallest number such that deleting k(P) individual ballots from P results in a profile P′ for which m(P′) is a complete and rational ballot. We let the outcome of the Young procedure be the set of rational ballots r such that deleting k(P) individual ballots from P results in a profile P′ with m(P′) = r.
The MaxHamming procedure maxham is also based on the Hamming distance. Let r be a single ballot, and let P be a profile. We define the max-Hamming distance between r and P to be max_{r′ ∈ P} d(r, r′). The MaxHamming procedure selects the rational ballots that minimize the max-Hamming distance to P.
The ranked agenda (or Tideman) procedure ra is based on the notion of majority strength.² Let P be a profile and let l be a literal over the issues. The majority strength ms(l) of l for P is the number of ballots r ∈ P such that r satisfies l. Let ⊴ be a fixed linear order on Lit (the tie-breaking order). Based on ⊴ and the majority strength, we define the linear order ≼ on Lit. Let l, l′ ∈ Lit. Then l ≼ l′ if either (i) ms(l) > ms(l′), or (ii) ms(l) = ms(l′) and l ⊴ l′. Then ra(P) = {r}, where the ballot r is defined inductively as follows. Let l_1, …, l_2n be the literals in Lit, ordered such that l_j ≼ l_{j+1} for each j. Let σ_0 be the empty truth assignment. For each j ∈ {1, …, 2n}, check whether both (a) σ_{j−1} does not yet assign a truth value to the variable of l_j, and (b) σ′_j is consistent with Γ, where σ′_j is obtained from σ_{j−1} by setting l_j to true (and keeping the assignments to variables not occurring in l_j unchanged). If both are the case, then let σ_j = σ′_j. Otherwise, let σ_j = σ_{j−1}. Then r = σ_2n. Intuitively, the procedure iterates over the literals in the order specified by ≼. Each literal is set to true whenever this does not lead to an inconsistency with previously assigned literals.

² Here, we consider a variant of the ranked agenda procedure that works with a fixed tie-breaking order. Other variants, where all possible tie-breaking orders are considered in parallel, have also been studied in the literature (see, e.g., ? ?).
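The iterative definition of ra can be sketched directly. In the illustration below (ours), the consistency check is a brute-force search over completions rather than a SAT call, the fixed tie-breaking order is taken to be lexicographic on (issue, value) pairs, and a literal that cannot be consistently set to true has its complement fixed immediately—equivalent to skipping it and setting the complement when its turn comes, since a blocked literal stays blocked as the partial assignment grows:

```python
from itertools import product

def extendable(partial, n, constraint):
    """Brute-force stand-in for a consistency oracle: can the partial
    assignment {issue: 0/1} be extended to a rational ballot?"""
    free = [i for i in range(n) if i not in partial]
    for bits in product((0, 1), repeat=len(free)):
        candidate = dict(partial)
        candidate.update(zip(free, bits))
        if constraint(tuple(candidate[i] for i in range(n))):
            return True
    return False

def ranked_agenda(n, constraint, profile):
    """Ranked agenda (Tideman) with a lexicographic tie-breaking order."""
    # majority strength of each literal (issue i, value v)
    strength = {(i, v): sum(b[i] == v for b in profile)
                for i in range(n) for v in (0, 1)}
    # descending strength, ties broken by the fixed lexicographic order
    order = sorted(strength, key=lambda lit: (-strength[lit], lit))
    partial = {}
    for i, v in order:
        if i in partial:
            continue
        if extendable({**partial, i: v}, n, constraint):
            partial[i] = v
        else:
            partial[i] = 1 - v   # complement is necessarily consistent
    return tuple(partial[i] for i in range(n))
```

On the discursive-dilemma profile below, the two majority-supported positive literals are fixed first, after which the third issue is forced to true by the constraint.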
Outcome Determination
When given a judgment aggregation scenario (i.e., an agenda, an integrity constraint, and a profile of individual opinions), an important computational task is to compute a possible collective opinion, for a fixed judgment aggregation procedure. This task is often referred to as outcome determination. Moreover, often it makes sense to seek possible collective opinions that satisfy certain properties (e.g., whether or not a given issue is accepted in the collective opinion).
Essentially, this is a search problem: the task is to find one of (possibly) multiple solutions. However, to make the theoretical complexity analysis easier, we will consider the following decision variant of this problem.
Outcome(F)
Instance: A set I of issues with an integrity constraint Γ, a profile P, and a partial ballot σ.
Question: Is there a ballot r ∈ F(P) such that r agrees with σ?
An outcome witnessing a yes-answer can be obtained by solving this decision problem a linear number of times. In addition to the basic task of finding one outcome (that agrees with a given partial ballot σ), one could consider other computational tasks, e.g., representing the set of outcomes in a succinct way that admits certain queries/operations to be performed efficiently. For example, it might be desirable to enumerate all (possibly exponentially many) outcomes with polynomial delay. It could also be desirable to check whether all outcomes agree with a given partial ballot (skeptical reasoning). For the sake of simplicity, in this paper we will stick to the decision problem described above. All tractability results that we obtain for the decision problem can straightforwardly be extended to tractability results for the above computational tasks.
For the judgment aggregation procedures that we considered above, Outcome(F) is computationally hard. For an overview, see Table 2.
  F        complexity of Outcome(F)
  med      complete (?)
  rev      complete (?)
  mcc      complete (?)
  young    complete (?)
  maxham   complete (?)
  ra       complete (?)
Krom and (Definite) Horn Formulas
In this section, we consider the fragments of Krom (2CNF), Horn and definite Horn formulas—for a formal definition of these fragments, see the appendix. These fragments can be used to express settings where only basic dependencies between issues play a role—see Example 2 for an indication.
Example 2.
Krom (2CNF) formulas can be used to express dependencies of the form "if we decide to use software tool 1 (x_1) or software tool 2 (x_2), then we need to purchase the entire package (x_3)": (¬x_1 ∨ x_3) ∧ (¬x_2 ∨ x_3).
Definite Horn formulas can be used to express dependencies of the form "if we hire both researcher 1 (x_1) and researcher 2 (x_2), then we need to rent another office (x_3)": ¬x_1 ∨ ¬x_2 ∨ x_3.
For some judgment aggregation rules these fragments make computing outcomes tractable, and for other judgment aggregation rules they do not. We begin by considering the rules med and mcc. Computing outcomes for these rules is tractable when restricted to Krom formulas, but not when restricted to (definite) Horn formulas.
Proposition 1.
Outcome(med) is hard even when restricted to the case where the integrity constraint Γ is a definite Horn formula.
Proposition 2.
Outcome(mcc) is hard even when restricted to the case where the integrity constraint Γ is a definite Horn formula.
The following result refers to the notion of majority consistency (see, e.g., ? ?). A profile P is majority consistent (with respect to an integrity constraint Γ) if the majority outcome m(P) is consistent with Γ. A judgment aggregation procedure is majority consistent if for each integrity constraint Γ and each profile P that is majority consistent (w.r.t. Γ), the procedure outputs all and only those complete ballots that agree with the (partial) ballot m(P).
Theorem 3.
For all judgment aggregation procedures F that are majority consistent, e.g., med and mcc, Outcome(F) is polynomial-time solvable when the integrity constraint Γ is a Krom formula.
Proof.
The general idea behind this proof is to use the property that when Γ is a Krom formula, the majority outcome is always consistent. Let (I, Γ, P, σ) be an instance of Outcome(F) where Γ is a Krom formula. We consider the majority outcome m(P).
We show that the partial ballot m(P) is consistent with Γ. Suppose, to derive a contradiction, that m(P) is inconsistent with Γ. Then there must be some clause (l_1 ∨ l_2) of Γ, of size at most 2, such that m(P) sets both l_1 and l_2 to false. By definition of m(P), then a strict majority of the ballots in P set l_1 to false, and a strict majority of the ballots in P set l_2 to false. By the pigeonhole principle, then there must be some ballot in P that sets both l_1 and l_2 to false. However, since (l_1 ∨ l_2) is a clause of Γ, we get that this ballot does not satisfy Γ, which is a contradiction with our assumption that all ballots in the profile satisfy Γ. Thus, we can conclude that m(P) is consistent with Γ.
Since F is majority consistent, we know that F(P) contains exactly those complete ballots that agree with m(P) and are consistent with Γ. Since Γ is a Krom formula, deciding whether F(P) contains a ballot that agrees with σ amounts to a 2-SAT check, and can be done in polynomial time. ∎
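The final step of the proof relies on the fact that satisfiability of a Krom formula, with some variables forced by a partial ballot, is decidable in polynomial time. As an illustrative sketch, a standard implication-graph 2-SAT check (the literal encoding 2v for x_v and 2v+1 for ¬x_v is our convention) looks as follows:

```python
def krom_satisfiable(n, clauses, fixed=None):
    """2-SAT via the implication graph and Kosaraju's SCC algorithm.
    Literal encoding: 2*v is x_v, 2*v + 1 is ¬x_v.
    clauses: pairs of literals; fixed: {variable: bool} values forced
    by a partial ballot, handled as unit clauses."""
    fixed = fixed or {}
    m = 2 * n
    adj = [[] for _ in range(m)]

    def add_or(a, b):                 # clause (a ∨ b): ¬a → b and ¬b → a
        adj[a ^ 1].append(b)
        adj[b ^ 1].append(a)

    for a, b in clauses:
        add_or(a, b)
    for v, val in fixed.items():
        lit = 2 * v if val else 2 * v + 1
        add_or(lit, lit)              # unit clause (lit ∨ lit)

    # first pass: order nodes by DFS finish time (iterative DFS)
    order, seen = [], [False] * m
    for start in range(m):
        if seen[start]:
            continue
        seen[start] = True
        stack = [(start, 0)]
        while stack:
            node, idx = stack.pop()
            if idx < len(adj[node]):
                stack.append((node, idx + 1))
                nxt = adj[node][idx]
                if not seen[nxt]:
                    seen[nxt] = True
                    stack.append((nxt, 0))
            else:
                order.append(node)

    # second pass: strongly connected components on the reverse graph
    radj = [[] for _ in range(m)]
    for u in range(m):
        for w in adj[u]:
            radj[w].append(u)
    comp = [-1] * m
    c = 0
    for u in reversed(order):
        if comp[u] != -1:
            continue
        comp[u] = c
        stack = [u]
        while stack:
            x = stack.pop()
            for w in radj[x]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    # satisfiable iff no variable shares an SCC with its negation
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n))
```

For example, the single clause ¬x_0 ∨ x_1 is satisfiable with x_0 forced to true, but becomes unsatisfiable once x_1 is additionally forced to false.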
We continue with the maxham procedure, for which computing outcomes is tractable neither when restricted to Krom formulas nor when restricted to definite Horn formulas.
Proposition 4.
Outcome(maxham) is hard even when restricted to the case where Γ = ⊤ (i.e., where there is no integrity constraint).
Outcome(maxham) restricted to the case where Γ = ⊤ coincides with a problem known as Closest String for binary alphabets (see, e.g., ? ?). To the best of our knowledge, this is the first time that the exact complexity of (this variant of) this problem has been identified. Outcome(maxham) is also very similar to the problem of computing outcomes for the minimax rule in approval voting (?).
Corollary 5.
Outcome(maxham) is hard even when restricted to the case where Γ is a Krom formula or a (definite) Horn formula.
Finally, we consider the procedure ra, for which computing outcomes is tractable for both Krom and Horn formulas.
Theorem 6.
Let C be a class of propositional formulas (or Boolean circuits) with the following two properties:
(1) C is closed under instantiation, i.e., for any φ ∈ C and any partial truth assignment σ it holds that φ[σ] ∈ C; and
(2) satisfiability of formulas in C is polynomial-time solvable.
Then Outcome(ra) is polynomial-time solvable when restricted to the case where Γ ∈ C.
Proof (sketch).
Let C be a class of propositional formulas that satisfies the conditions stated above, and let Γ ∈ C. We can then compute ra(P) by directly using the iterative definition of the ballot r given in the description of the ranked agenda procedure. This definition iteratively constructs partial ballots σ_0, …, σ_2n. Ballot σ_0 is the empty ballot, and for each j, ballot σ_j is constructed from σ_{j−1} by using only the operations of instantiating the integrity constraint and checking satisfiability of the resulting formula. Due to the properties of C, these operations are all polynomial-time solvable. Thus, constructing ra(P) can be done in polynomial time. ∎
Corollary 7.
For each of the classes of Krom formulas, Horn formulas, and definite Horn formulas, Outcome(ra) is polynomial-time solvable when restricted to the case where Γ belongs to that class.
An overview of the complexity results that we established in this section can be found in Table 3.
  complexity of Outcome(F), restricted to Horn / definite Horn
  med      complete (Proposition 1)
  mcc      complete (Proposition 2)
  maxham   complete (Corollary 5)
  ra       in P (Corollary 7)

  complexity of Outcome(F), restricted to Krom
  med      in P (Theorem 3)
  mcc      in P (Theorem 3)
  maxham   complete (Corollary 5)
  ra       in P (Corollary 7)
The results that we obtained for Horn formulas can all be straightforwardly extended to the fragment of renamable Horn formulas—e.g., the fragment of renamable Horn formulas satisfies the requirements of Theorem 6. A propositional formula is renamable Horn if there is a set of variables such that becomes Horn when all literals over are complemented.
Boolean Circuits in DNNF
Next, we consider the case where the integrity constraints are restricted to Boolean circuits in Decomposable Negation Normal Form (DNNF). This is a class of Boolean circuits studied in the area of knowledge compilation. We illustrate how this class of circuits is useful for judgment aggregation.
Knowledge Compilation
Knowledge compilation (see, e.g., ? ?, ? ?, ? ?) refers to a collection of approaches for solving reasoning problems in the area of artificial intelligence and knowledge representation and reasoning that are computationally intractable in the worst-case asymptotic sense. These reasoning problems typically involve knowledge in the form of a Boolean function—often represented as a propositional formula. The general idea behind these approaches is to split the reasoning process into two phases: (1) compiling the knowledge into a different format that allows the reasoning problem to be solved efficiently, and (2) solving the reasoning problem using the compiled knowledge. Since the entire reasoning problem is computationally intractable, at least one of these two phases must be intractable. Indeed, typically the first phase does not enjoy performance guarantees on the running time—upper bounds on the size of the compiled knowledge are often desired instead. One of the advantages of this methodology is that one can reuse the compiled knowledge for many instances, which could lead to a smaller overall running time.
A prototypical example of a problem studied in the setting of knowledge compilation is that of clause entailment (see, e.g., ? ?, ? ?). In this problem, one is given a knowledge base, say in the form of a propositional formula φ in CNF, together with a clause δ. The question is to decide whether φ entails δ. This problem is coNP-complete in general. The knowledge compilation approach to solving this problem is to first compile the CNF formula into an equivalent expression in a different format. For example, one could consider the formalism of Boolean circuits in Decomposable Negation Normal Form (DNNF) (or DNNF circuits, for short).
DNNF circuits are a particular class of Boolean circuits in Negation Normal Form (NNF). A Boolean circuit C in NNF is a directed acyclic graph with a single root (a node with no ingoing edges) where each leaf is labelled with ⊤, ⊥, x, or ¬x for a propositional variable x, and where each internal node is labelled with ∧ or ∨. (An arc in the graph from u to v indicates that v is a child node of u.) The set of propositional variables occurring in C is denoted by Var(C). For any truth assignment α, we define the truth value assigned to C by α in the usual way, i.e., each node is assigned a truth value based on its label and the truth values assigned to its children, and the truth value assigned to C is the truth value assigned to the root of the circuit. DNNF circuits are Boolean circuits in NNF that satisfy the additional property of decomposability. A circuit is decomposable if for each conjunction in the circuit, the conjuncts do not share variables. That is, for each node v in C that is labelled with ∧ and for any two children v_1 and v_2 of this node, it holds that Var(C_1) ∩ Var(C_2) = ∅, where C_1 and C_2 are the subcircuits of C that have v_1 and v_2 as root, respectively. An example of a DNNF circuit is given in Figure 1.
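As an illustration, NNF circuits can be encoded as nested tuples (our encoding, not the paper's): ('lit', x, polarity) at the leaves, ('and', children) and ('or', children) at internal nodes, and ('true',) / ('false',) for the constants. Decomposability is then a simple syntactic check:

```python
def variables(node):
    """Set of variables occurring in a nested-tuple NNF circuit."""
    if node[0] == 'lit':
        return {node[1]}
    if node[0] in ('and', 'or'):
        vs = set()
        for child in node[1]:
            vs |= variables(child)
        return vs
    return set()          # ('true',) or ('false',)

def decomposable(node):
    """DNNF condition: the children of every AND share no variables."""
    if node[0] in ('and', 'or'):
        if node[0] == 'and':
            seen = set()
            for child in node[1]:
                vs = variables(child)
                if seen & vs:
                    return False
                seen |= vs
        return all(decomposable(child) for child in node[1])
    return True

def evaluate(node, assignment):
    """Evaluate the circuit under a truth assignment {variable: bool}."""
    if node[0] == 'lit':
        return bool(assignment[node[1]]) == node[2]
    if node[0] == 'and':
        return all(evaluate(child, assignment) for child in node[1])
    if node[0] == 'or':
        return any(evaluate(child, assignment) for child in node[1])
    return node[0] == 'true'

# (x0 ∧ ¬x1) ∨ (¬x0 ∧ x1): an XOR, written as a DNNF circuit
XOR_DNNF = ('or', (('and', (('lit', 0, True), ('lit', 1, False))),
                   ('and', (('lit', 0, False), ('lit', 1, True)))))
```

XOR_DNNF is decomposable because each conjunction mentions x0 and x1 on variable-disjoint subcircuits; an AND with two literals on the same variable would fail the check.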
The problem of clause entailment can be solved in polynomial time when the propositional knowledge is given as a DNNF circuit (?). Moreover, every CNF formula can be translated to an equivalent DNNF circuit—without guarantees on the size of the circuit. Thus, one could solve the problem of clause entailment by first compiling the CNF formula into an equivalent DNNF circuit (without guarantees on the running time or on the size of the result) and then deciding the entailment in time polynomial in the size of the circuit.
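The clause-entailment procedure on DNNF circuits can be sketched in a few lines, using a nested-tuple circuit encoding, ('lit', variable, polarity) / ('and', …) / ('or', …), as our assumption: condition the circuit on the negation of the (non-tautological) clause, then run the linear-time DNNF satisfiability check, which is sound because conjuncts of a decomposable AND share no variables:

```python
def condition(node, partial):
    """Instantiate a circuit by a partial assignment {variable: bool};
    conditioning preserves decomposability."""
    if node[0] == 'lit':
        var, polarity = node[1], node[2]
        if var in partial:
            return ('true',) if partial[var] == polarity else ('false',)
        return node
    if node[0] in ('and', 'or'):
        return (node[0], tuple(condition(child, partial) for child in node[1]))
    return node

def satisfiable(node):
    """DNNF satisfiability: an AND needs all children satisfiable,
    which is sound only because its conjuncts share no variables."""
    if node[0] == 'lit':
        return True
    if node[0] == 'and':
        return all(satisfiable(child) for child in node[1])
    if node[0] == 'or':
        return any(satisfiable(child) for child in node[1])
    return node[0] == 'true'

def entails_clause(circuit, clause):
    """circuit |= clause  iff  circuit, conditioned on the negation of
    the clause, is unsatisfiable; clause: (variable, polarity) pairs."""
    negation = {var: not polarity for var, polarity in clause}
    return not satisfiable(condition(circuit, negation))

# example knowledge base: XOR over x0 and x1, as a DNNF circuit
XOR_EXAMPLE = ('or', (('and', (('lit', 0, True), ('lit', 1, False))),
                      ('and', (('lit', 0, False), ('lit', 1, True)))))
```

The XOR example entails the clause (x0 ∨ x1), since every model sets one of the two variables to true, but it does not entail the unit clause (x0).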
Next, we will show how representation languages such as DNNF circuits can be used in the setting of Judgment Aggregation, and we will argue how Judgment Aggregation can benefit from the approach of first compiling knowledge (without performance guarantees) before using the compiled knowledge to solve the initial problem.
Algebraic Model Counting
We will use the technique of algebraic model counting (?) to execute several judgment aggregation procedures efficiently using the structure of DNNF circuits. Algebraic model counting is a generalization of the problem of counting models of a Boolean function that uses the addition and multiplication operators of a commutative semiring.
Definition 1 (Commutative semiring).
A semiring is a structure (A, ⊕, ⊗, e⁰, e¹), where:
- addition ⊕ is an associative and commutative binary operation over the set A;
- multiplication ⊗ is an associative binary operation over the set A;
- ⊗ distributes over ⊕;
- e⁰ ∈ A is the neutral element of ⊕, i.e., for all a ∈ A, a ⊕ e⁰ = a;
- e¹ ∈ A is the neutral element of ⊗, i.e., for all a ∈ A, a ⊗ e¹ = a; and
- e⁰ is an annihilator for ⊗, i.e., for all a ∈ A, a ⊗ e⁰ = e⁰.
When ⊗ is commutative, we say that the semiring is commutative. When ⊕ is idempotent (i.e., a ⊕ a = a for all a ∈ A), we say that the semiring is idempotent.
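The definitions above can be packaged as a small data structure; the max-plus instance shown is the one used later in the paper, while the Python representation itself is our illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    plus: Callable[[Any, Any], Any]   # "addition", associative and commutative
    times: Callable[[Any, Any], Any]  # "multiplication", distributes over plus
    zero: Any                         # neutral for plus, annihilator for times
    one: Any                          # neutral for times

# the max-plus algebra: carrier R ∪ {-∞}, ⊕ = max, ⊗ = +,
# zero element -∞, one element 0
MAX_PLUS = Semiring(plus=max,
                    times=lambda a, b: a + b,
                    zero=float('-inf'),
                    one=0.0)
```

Since max(a, a) = a, the addition of MAX_PLUS is idempotent, so this is a commutative, idempotent semiring.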
Definition 2 (Algebraic model counting).
Given:
- a Boolean function f over a set X of propositional variables;
- a commutative semiring (A, ⊕, ⊗, e⁰, e¹); and
- a labelling function λ mapping literals over the variables in X to values in the set A,
the task of algebraic model counting (AMC) is to compute

    AMC(f, λ) = ⊕_{α ⊨ f} ⊗_{l ∈ α} λ(l),

where α ranges over the models of f and l over the literals made true by α.
We can solve the task of algebraic model counting efficiently for DNNF circuits when the semiring satisfies an additional condition.
Definition 3 (Neutral pair).
Let (A, ⊕, ⊗, e⁰, e¹) be a semiring, and let λ be a labelling function for some set X of propositional variables. The pair (⊕, λ) is neutral if for all variables x ∈ X it holds that λ(x) ⊕ λ(¬x) = e¹.
Theorem 8 (? ?, Thm 5).
When f is represented as a DNNF circuit, and the semiring (A, ⊕, ⊗, e⁰, e¹) and the labelling function λ have the properties that (i) ⊕ is idempotent, and (ii) (⊕, λ) is neutral, then the algebraic model counting problem AMC(f, λ) is polynomial-time solvable—when given the circuit and λ as input, and when the operations of addition (⊕) and multiplication (⊗) over A can be performed in polynomial time.
We will use the result of Theorem 8 to show that outcome determination for several judgment aggregation procedures is tractable for the case where Γ is a DNNF circuit. To do so, we will consider the following commutative, idempotent semiring (also known as the max-plus algebra). We let A = ℝ ∪ {−∞}, ⊕ = max, ⊗ = +, e⁰ = −∞, and e¹ = 0. Whenever we have a labelling function λ such that (⊕, λ) is neutral—i.e., such that max(λ(x), λ(¬x)) = 0 for each variable x—we satisfy the conditions of Theorem 8.
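For the max-plus semiring, the AMC recursion over a DNNF circuit is particularly simple: OR nodes take the maximum over their children and AND nodes take the sum (sound for AND because decomposability makes the conjuncts variable-disjoint). A sketch, again over a nested-tuple circuit encoding of our own choosing:

```python
def amc_maxplus(node, label):
    """Algebraic model counting over a DNNF circuit in the max-plus
    semiring: ⊕ = max at OR nodes, ⊗ = + at AND nodes. `label` maps
    (variable, polarity) pairs to numbers; per Theorem 8 this is exact
    when max is idempotent (it is) and the labelling is neutral."""
    if node[0] == 'lit':
        return label[(node[1], node[2])]
    if node[0] == 'and':
        return sum(amc_maxplus(child, label) for child in node[1])
    if node[0] == 'or':
        return max(amc_maxplus(child, label) for child in node[1])
    return 0.0 if node[0] == 'true' else float('-inf')

# XOR over x0 and x1 as a DNNF circuit (nested-tuple encoding)
XOR_CIRCUIT = ('or', (('and', (('lit', 0, True), ('lit', 1, False))),
                      ('and', (('lit', 0, False), ('lit', 1, True)))))
```

With a neutral labelling (each pair of complementary literals has maximum label 0), amc_maxplus returns the best achievable total label over the models of the circuit.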
Theorem 9.
Outcome(med) and Outcome(mcc) are polynomial-time computable when Γ is a DNNF circuit.
Proof.
We prove the statement for Outcome(med); the case for Outcome(mcc) is analogous. Let (I, Γ, P, σ) be an instance of Outcome(med). We solve the problem by reducing it to the problem of algebraic model counting. For the semiring, we use the max-plus algebra described above. We construct the labelling function λ as follows. For each issue x_i, we count the number N⁺_i of ballots r ∈ P such that r(x_i) = 1, and we count the number N⁻_i of ballots r ∈ P such that r(x_i) = 0. That is, we let N⁺_i and N⁻_i be the majority strength of x_i and ¬x_i, respectively, in the profile P. For each issue we pick the constant c_i = max(N⁺_i, N⁻_i), and we let λ(x_i) = N⁺_i − c_i and λ(¬x_i) = N⁻_i − c_i. This ensures that λ satisfies the condition of neutrality (i.e., that max(λ(x_i), λ(¬x_i)) = 0 for each issue x_i).
This choice of λ has the property that the ballots in med(P) are exactly those complete ballots that satisfy Γ and whose total label equals AMC(Γ, λ). That is, the set med(P) consists of those rational ballots that achieve the value of the algebraic model counting problem AMC(Γ, λ). We can solve the instance of the decision problem Outcome(med) by solving the algebraic model counting problem twice: once for Γ and once for Γ[σ], the constraint instantiated with the partial ballot σ. The instance is a yes-instance if and only if the value obtained for Γ[σ], combined with the labels of the literals fixed by σ, equals the value obtained for Γ. By Theorem 8, this can be done in polynomial time.
To make this algorithm work for the case of Outcome(mcc), one only needs to adapt the values of λ(x_i) and λ(¬x_i). Instead of setting them based on the majority strengths of x_i and ¬x_i, we let λ(x_i) = 0 if a strict majority of the ballots r ∈ P have r(x_i) = 1, and we let λ(x_i) = −1 otherwise. Similarly, we let λ(¬x_i) = 0 if a strict majority of the ballots r ∈ P have r(x_i) = 0, and we let λ(¬x_i) = −1 otherwise. ∎
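The labelling step of this reduction can be made concrete as follows; the per-issue shifting constant max(N⁺, N⁻), used to meet the neutrality condition max(λ(x), λ(¬x)) = 0, is our assumption in this sketch:

```python
def med_labelling(profile, n_issues):
    """Labelling for the max-plus AMC reduction for the median rule:
    shift each issue's majority strengths so that the larger of the
    two labels is 0 (the neutrality condition for ⊕ = max)."""
    label = {}
    for i in range(n_issues):
        pos = sum(b[i] == 1 for b in profile)   # majority strength of  x_i
        neg = len(profile) - pos                # majority strength of ¬x_i
        shift = max(pos, neg)
        label[(i, True)] = pos - shift
        label[(i, False)] = neg - shift
    return label
```

On the profile (1,1,1), (1,0,0), (0,1,0), each majority-supported literal gets label 0 and each minority literal gets label −1, so maximizing the total label over rational ballots minimizes the cumulative Hamming distance to the profile.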
Representing the integrity constraint as a DNNF circuit makes it possible to perform more tasks efficiently than just the decision problem Outcome(F). For example, the algorithms for algebraic model counting can be used to produce a DNNF circuit that represents the set of outcomes, allowing further operations to be carried out efficiently.
Theorem 10.
Outcome(rev) is polynomial-time computable when Γ is a DNNF circuit.
Proof (sketch).
The polynomial-time algorithm for Outcome(rev) is analogous to the algorithm for Outcome(med) described in the proof of Theorem 9. The only modification that needs to be made is to adapt the numbers λ(x_i) and λ(¬x_i), for each issue x_i. Instead of identifying these numbers with (shifted) majority strengths of x_i and ¬x_i, we identify them with the total reversal score of x_i and ¬x_i, respectively, over the profile P. For general propositional formulas Γ, the reversal scoring function is NP-hard to compute. However, since Γ is given as a DNNF circuit, we can compute the reversal scores, and thereby λ(x_i) and λ(¬x_i), in polynomial time—by using another reduction to the problem of algebraic model counting. We omit the details of this latter reduction. ∎
Intuitively, the results of Theorems 9 and 10 are a consequence of the fact that DNNF circuits allow polynomialtime weighted maximal model computation, and that the judgment aggregation procedures med, mcc and rev are based on weighted maximal model computation. These results can therefore also straightforwardly be extended to other judgment aggregation procedures that are based on weighted maximal model computation.
Other Results
We can extend some previously established results (Proposition 4 and Theorem 6) to the case of DNNF circuits.
Corollary 11.
Outcome(ra) is polynomial-time computable when restricted to the case where Γ is a DNNF circuit.
Corollary 12.
Outcome(maxham) remains as hard as in the general case even when restricted to the case where Γ is a DNNF circuit.
A similar result for young follows from a result that we will establish in the next section (Proposition 18).
Corollary 13.
Outcome(young) remains as hard as in the general case even when restricted to the case where Γ is a DNNF circuit.
An overview of the results established so far in this section can be found in Table 4.
A Compilation Approach
The results of Theorems 9 and 10 and Corollary 11 pave the way for another approach towards finding cases where judgment aggregation procedures can be performed efficiently. The idea behind this approach is to compile the integrity constraint into a DNNF circuit—regardless of whether this compilation process enjoys a polynomial-time worst-case performance guarantee. There are several off-the-shelf tools available that compile CNF formulas into DNNF circuits using optimized methods based on SAT solving algorithms (?; ?; ?). Since the class of DNNF circuits is expressively complete—i.e., every Boolean function can be expressed using a DNNF circuit—it is possible to compile any integrity constraint into an equivalent DNNF circuit.
The downside is that the circuit could be of exponential size, or it could take exponential time to compute it. However, once the circuit is computed and stored in memory, one can use several judgment aggregation procedures efficiently: med, mcc, rev and ra.
Thus, this approach restricts the computational bottleneck to the compilation phase, before any judgments are solicited from the individuals in the judgment aggregation scenario. Once the compilation phase has been completed, there are polynomial-time guarantees on the aggregation phase (polynomial in the size of the compiled DNNF circuit).
CNF Formulas of Bounded Treewidth
The tractability results for DNNF circuits can be leveraged to get parameterized tractability results for the case where the integrity constraint is a CNF formula with a ‘treelike’ structure.
Parameterized Complexity Theory & Treewidth
In order to explain the results that follow, we briefly introduce some relevant concepts from the theory of parameterized complexity. For more details, we refer to textbooks on the topic (see, e.g., ? ?, ? ?). The central notion in parameterized complexity is that of fixed-parameter tractability—a notion of computational tractability that is more lenient than the traditional notion of polynomial-time solvability. In parameterized complexity, running times are measured in terms of the input size n as well as a problem parameter k. Intuitively, the parameter is used to capture structure that is present in the input and that can be exploited algorithmically. The smaller the value of the problem parameter k, the more structure the input exhibits. Formally, we consider parameterized problems that capture the computational task at hand as well as the choice of parameter. A parameterized problem L is a subset of Σ* × ℕ for some fixed alphabet Σ. An instance (x, k) of L contains the problem input x and the parameter value k. A parameterized problem L is fixed-parameter tractable if there is a deterministic algorithm that for each instance (x, k) decides whether (x, k) ∈ L and that runs in time f(k) · |x|^c, where f is a computable function of k, and c is a fixed constant. Algorithms running within such time bounds are called fpt-algorithms. The idea behind these definitions is that fixed-parameter tractable running times are scalable whenever the value of k is small.
A commonly used parameter is the treewidth of a graph. Intuitively, the treewidth measures the extent to which a graph is like a tree: trees and forests have treewidth 1, cycles have treewidth 2, and so forth. The notion of treewidth is defined as follows. A tree decomposition of a graph $G = (V, E)$ is a pair $(T, (B_t)_{t \in T})$ where $T$ is a tree and $(B_t)_{t \in T}$ is a family of subsets of $V$ such that:

(1) for every $v \in V$, the set $\{ t \in T : v \in B_t \}$ is nonempty and connected in $T$; and

(2) for every edge $\{v, w\} \in E$, there is a $t \in T$ such that $v, w \in B_t$.

The width of the decomposition $(T, (B_t)_{t \in T})$ is the number $\max_{t \in T} |B_t| - 1$. The treewidth of $G$ is the minimum of the widths of all tree decompositions of $G$. Let $G$ be a graph and $k$ a nonnegative integer. There is an fpt-algorithm (parameterized by $k$) that computes a tree decomposition of $G$ of width at most $k$ if it exists, and fails otherwise (?).
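The two conditions on tree decompositions can be checked mechanically; the following sketch (illustrative, not from the paper) verifies both conditions for a given candidate decomposition, with the tree represented as an adjacency dictionary and the bags as sets of graph vertices:

```python
# Check that (tree, bags) is a valid tree decomposition of the graph
# (vertices, edges). tree maps each tree node to its set of neighbours;
# bags maps each tree node to a set of graph vertices.
def is_tree_decomposition(vertices, edges, tree, bags):
    # Condition (1): for every vertex v, the tree nodes whose bag
    # contains v are nonempty and induce a connected subtree.
    for v in vertices:
        nodes = {t for t in tree if v in bags[t]}
        if not nodes:
            return False
        start = next(iter(nodes))
        seen, stack = {start}, [start]   # DFS within the induced nodes
        while stack:
            t = stack.pop()
            for u in tree[t]:
                if u in nodes and u not in seen:
                    seen.add(u)
                    stack.append(u)
        if seen != nodes:
            return False
    # Condition (2): every edge of the graph is contained in some bag.
    for u, v in edges:
        if not any(u in bags[t] and v in bags[t] for t in tree):
            return False
    return True

def width(bags):
    return max(len(b) for b in bags.values()) - 1

# A path a-b-c admits a decomposition with bags {a,b} and {b,c}:
tree = {1: {2}, 2: {1}}
bags = {1: {"a", "b"}, 2: {"b", "c"}}
print(is_tree_decomposition({"a", "b", "c"}, [("a", "b"), ("b", "c")], tree, bags))
print(width(bags))  # 1, as expected for a tree
```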
Encoding Results
We can then use results from the literature to establish tractability results for computing outcomes of various judgment aggregation procedures for integrity constraints whose variable interactions have a tree-like structure. Let $\varphi$ be a CNF formula. The incidence graph of $\varphi$ is the graph $G = (V, E)$, where $V$ contains a vertex for each variable and each clause of $\varphi$, and where $E$ contains an edge $\{x, c\}$ for each clause $c$ of $\varphi$ and each variable $x \in \mathrm{Var}(c)$. The incidence treewidth of $\varphi$ is defined as the treewidth of the incidence graph of $\varphi$.
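As an illustration (not from the paper), the incidence graph can be built in a few lines; clauses are encoded DIMACS-style, with a positive integer for a variable and a negative integer for its negation:

```python
# Build the incidence graph of a CNF formula given as a list of clauses,
# each clause a list of nonzero ints (positive: x, negative: not-x).
def incidence_graph(cnf):
    # One vertex per variable and per clause; an edge connects a clause
    # vertex to the vertex of every variable occurring in that clause.
    edges = set()
    for i, clause in enumerate(cnf):
        for lit in clause:
            edges.add((("clause", i), ("var", abs(lit))))
    variables = {("var", abs(l)) for c in cnf for l in c}
    clauses = {("clause", i) for i in range(len(cnf))}
    return variables | clauses, edges

# (x1 or not-x2) and (x2 or x3)
V, E = incidence_graph([[1, -2], [2, 3]])
print(len(V), len(E))  # 5 vertices (3 variables + 2 clauses), 4 edges
```

The incidence treewidth of the formula is then the treewidth of this graph, which can be computed (or approximated) with standard tools.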
We can leverage the results of Theorems 9 and 10 and Corollary 11 to get fixed-parameter tractability results for computing outcomes of med, mcc, rev and ra for integrity constraints with small incidence treewidth.
Proposition 14 (? ?, ? ?).
Let $\varphi$ be a CNF formula of incidence treewidth $k$. Constructing a DNNF circuit that is equivalent to $\varphi$ can be done in fixed-parameter tractable time (parameterized by $k$).
Corollary 15.
The problems Outcome(med), Outcome(mcc), Outcome(rev) and Outcome(ra) are fixed-parameter tractable when parameterized by the incidence treewidth of the integrity constraint $\Gamma$.
Case Study: Budget Constraints
In this section, we illustrate how the results of the previous section can contribute to providing a computational complexity analysis for an application setting. The setting that we consider as an example is that of budget constraints. This setting is closely related to that of Participatory Budgeting (see, e.g., ? ?), where citizens propose projects and vote on which projects get funded by public money. In the setting that we consider, each issue $x_i$ represents whether or not some measure is implemented. Each such measure has an implementation cost $c_i$ associated with it. Moreover, there is a total budget $b$ that cannot be exceeded; that is, each ballot (individual or collective) can set a set of variables to true such that the cumulative cost of these variables is at most $b$ (and must set the remaining variables to false). The integrity constraint encodes that the total cost of the variables that are set to true does not exceed the total budget. (For the sake of simplicity, we assume that the costs and the total budget are all positive integers.)
The concepts and tools from judgment aggregation are useful and relevant in this setting. This is witnessed, for instance, by the fact that simply taking a majority vote will not always lead to a suitable collective outcome. Consider the example where there are three measures $x_1, x_2, x_3$ that are each associated with cost 1, and where there is a budget of 2. Moreover, suppose that there are three individuals. The first individual votes to implement measures $x_1$ and $x_2$; the second votes for measures $x_2$ and $x_3$, and the third for $x_1$ and $x_3$. Each of the individuals' opinions is consistent with the budget. However, taking a majority measure-by-measure vote results in implementing all three measures, which exceeds the budget. (In other words, the individual opinions are all rational, whereas the collective majority opinion is not.) This example is illustrated in Figure 2; in this figure, we encode the budget constraint using a DNNF circuit.
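The flavour of this majority paradox can be checked directly; in the sketch below (our own instantiation of the example) we take three measures with cost 1 each and a budget of 2:

```python
# Three unit-cost measures, budget 2, and three ballots that each
# respect the budget: the issue-by-issue majority outcome does not.
costs = [1, 1, 1]
budget = 2
ballots = [
    [1, 1, 0],  # individual 1: measures 1 and 2
    [0, 1, 1],  # individual 2: measures 2 and 3
    [1, 0, 1],  # individual 3: measures 1 and 3
]

def within_budget(ballot):
    return sum(c for c, v in zip(costs, ballot) if v) <= budget

# A measure is accepted if a strict majority of individuals votes for it.
majority = [1 if sum(b[i] for b in ballots) > len(ballots) / 2 else 0
            for i in range(len(costs))]

print(all(within_budget(b) for b in ballots))  # True: every ballot is rational
print(majority)                                # [1, 1, 1]
print(within_budget(majority))                 # False: total cost 3 exceeds 2
```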

Encoding into a PolynomialSize DNNF Circuit
To use the framework of judgment aggregation to model settings with budget constraints, we need to encode budget constraints using integrity constraints. One can do this in several ways. We consider an encoding using DNNF circuits (as in Figure 1(b)). Let $\{x_1, \dots, x_n\}$ be a set of issues, let $c = (c_1, \dots, c_n)$ be a vector of implementation costs, and let $b$ be a total budget. We say that an integrity constraint $\Gamma$ encodes the budget constraint for $c$ and $b$ if for each complete ballot $r$ it holds that $r$ satisfies $\Gamma$ if and only if $\sum_{i : r(x_i) = 1} c_i \le b$.
We can encode budget constraints efficiently using DNNF circuits by expressing them as binary decision diagrams. A binary decision diagram (BDD) is a particular type of NNF circuit. Let $C$ be an NNF circuit. We say that a node of $C$ is a decision node if (i) it is a leaf or (ii) it is a disjunction node expressing $(x \wedge C_1) \vee (\neg x \wedge C_2)$, where $x$ is a variable and $C_1$ and $C_2$ are decision nodes. A binary decision diagram is an NNF circuit whose root is a decision node. A free binary decision diagram (FBDD) is a BDD that satisfies decomposability (see, e.g., ? ?, ? ?).
Theorem 16.
For each number $n$ of issues, each cost vector $c = (c_1, \dots, c_n)$ and each budget $b$, we can construct a DNNF circuit encoding the budget constraint for $c$ and $b$ in time polynomial in $n$ and $b$.
Proof.
We construct an FBDD encoding the budget constraint for $c$ and $b$ as follows. Without loss of generality, suppose that $c_i \le b$ for each $i$. We introduce a decision node $D_{i,b'}$ for each $i \in \{1, \dots, n+1\}$ and each $b' \in \{0, \dots, b\}$. Take arbitrary $i$ and $b'$. If $i = n+1$, we let $D_{i,b'} = \top$. If $i \le n$, we distinguish two cases: either (i) $b'' \ge 0$ or (ii) $b'' < 0$, where $b'' = b' - c_i$. In case (i), we let $D_{i,b'} = (x_i \wedge D_{i+1,b''}) \vee (\neg x_i \wedge D_{i+1,b'})$. In case (ii), we let $D_{i,b'} = \neg x_i \wedge D_{i+1,b'}$. We let the root of the FBDD be the node $D_{1,b}$, and we remove all nodes that are not descendants of $D_{1,b}$. Intuitively, the subcircuit rooted at $D_{i,b'}$ represents all truth assignments to the variables $x_i, \dots, x_n$ that fit within a budget of $b'$. For each node $D_{i,b'}$ it holds that the variables in the leaves reachable from $D_{i,b'}$ are among $x_i, \dots, x_n$. Therefore, we constructed an FBDD. Moreover, each complete ballot $r$ satisfies the circuit if and only if $\sum_{i : r(x_i) = 1} c_i \le b$. Thus, we obtain a DNNF circuit, constructed in time polynomial in $n$ and $b$, that encodes the budget constraint for $c$ and $b$. ∎
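The construction in the proof tabulates one decision node per pair of a variable index and a residual budget. The sketch below (illustrative, not from the paper) does not materialise circuit nodes; instead it evaluates that table, here written `D(i, b)`, directly as Booleans for a fixed ballot, and checks the result against the definition of the budget constraint:

```python
# D(i, b) is true iff the part of the ballot on variables x_i, ..., x_n
# fits within the residual budget b; this mirrors the node table of the
# FBDD construction, evaluated for one fixed ballot.
def accepts(costs, budget, ballot):
    n = len(costs)

    def D(i, b):
        if i == n:           # no variables left: trivially within budget
            return True
        if ballot[i]:        # x_i set to true: pay c_i if possible
            return costs[i] <= b and D(i + 1, b - costs[i])
        return D(i + 1, b)   # x_i set to false: budget unchanged

    return D(0, budget)

# Sanity check against the definition of the budget constraint.
costs, budget = [2, 3, 4], 5
for ballot in [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]:
    total = sum(c for c, v in zip(costs, ballot) if v)
    assert accepts(costs, budget, list(ballot)) == (total <= budget)
print("all ballots checked")
```

Note that only $O(n \cdot b)$ distinct pairs `(i, b)` can occur, which matches the polynomial size bound of the circuit in the theorem.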
Complexity Results
Using the encoding result of Theorem 16, we can establish polynomial-time solvability results for computing outcomes for several judgment aggregation procedures in the setting of budget constraints.
Corollary 17.
Outcome(med), Outcome(mcc), Outcome(rev), and Outcome(ra) are polynomial-time computable when restricted to the case where $\Gamma$ expresses a budget constraint.
For the young and maxham procedures, we obtain intractability results for the case of budget constraints: for both procedures, computing outcomes is $\Theta_2^p$-hard.
Proposition 18.
Outcome(young) is $\Theta_2^p$-hard when restricted to the case where $\Gamma$ expresses a budget constraint.
Corollary 19.
Outcome(maxham) is $\Theta_2^p$-hard when restricted to the case where $\Gamma$ expresses a budget constraint.
Proof.
The result follows directly from Proposition 4. ∎
An overview of the complexity results that we established in this section can be found in Table 5.
F        complexity of Outcome(F)
med      in P (Corollary 17)
rev      in P (Corollary 17)
mcc      in P (Corollary 17)
young    $\Theta_2^p$-hard (Proposition 18)
maxham   $\Theta_2^p$-hard (Corollary 19)
ra       in P (Corollary 17)
Directions for Future Research
In this paper, we provided a set of initial results for restricted languages for judgment aggregation, but these results are only the tip of an iceberg that remains to be explored. We outline some directions for interesting future work on this topic.
One first direction is to establish the complexity of Outcome(F) for cases that are left open in this paper—for example, for young and rev for the case of Krom and (definite) Horn formulas. Another direction is to pinpoint the complexity of Outcome(F) for the languages that we considered for other judgment aggregation rules studied in the literature (see, e.g., ? ?).
Yet another direction is to extend tractability results obtained in this paper—e.g., for Krom and Horn formulas—to formulas that are ‘close’ to Krom or Horn formulas. One could use the notion of backdoors for this (see, e.g., ? ?).
Finally, further restricted languages of propositional formulas or Boolean circuits need to be studied, to get a more complete picture of where the boundaries of the expressivity-tractability balance lie in the setting of judgment aggregation. A good source for additional languages is the field of knowledge compilation (see, e.g., ? ?, ? ?, ? ?), where many restricted languages have been studied with respect to their expressivity and support for performing various operations tractably.
Conclusion
In this paper, we initiated the hunt for representation languages for the setting of judgment aggregation that strike a balance between (1) allowing relevant computational tasks to be performed efficiently and (2) being expressive enough to model interesting and relevant application settings. Concretely, we considered Krom and (definite) Horn formulas, and we studied the class of Boolean circuits in DNNF. We studied the impact of these languages on the complexity of computing outcomes for a number of judgment aggregation procedures studied in the literature. Additionally, we illustrated the use of these languages for a specific application setting: voting on how to spend a budget.
Appendix A Appendix: Preliminaries
We give an overview of some notions from propositional logic and computational complexity that we use in the paper.
Propositional Logic
Propositional formulas are constructed from propositional variables using the Boolean operators $\wedge$, $\vee$ and $\neg$. A literal is a propositional variable $x$ (a positive literal) or a negated variable $\neg x$ (a negative literal). A clause is a finite set of literals, not containing a complementary pair $x$, $\neg x$, and is interpreted as the disjunction of these literals. A formula in conjunctive normal form (CNF) is a finite set of clauses, interpreted as the conjunction of these clauses. For each $k$, a $k$-clause is a clause that contains at most $k$ literals, and $k$CNF denotes the class of all CNF formulas consisting only of $k$-clauses. 2CNF is also denoted by Krom, and 2CNF formulas are also known as Krom formulas. A Horn clause is a clause that contains at most one positive literal. A definite Horn clause is a clause that contains exactly one positive literal. We let Horn denote the class of all CNF formulas that contain only Horn clauses (Horn formulas), and we let DefHorn denote the class of all CNF formulas that contain only definite Horn clauses (definite Horn formulas).
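The syntactic tests behind these fragments are simple literal counts; the following sketch (illustrative, not from the paper) classifies a clause given as a set of nonzero integers, positive for a variable and negative for its negation:

```python
# Classify a clause (a set of nonzero ints) into the fragments above.
def is_krom(clause):
    return len(clause) <= 2                      # at most two literals

def is_horn(clause):
    return sum(1 for l in clause if l > 0) <= 1  # at most one positive literal

def is_definite_horn(clause):
    return sum(1 for l in clause if l > 0) == 1  # exactly one positive literal

clause = {-1, -2, 3}            # not-x1 or not-x2 or x3
print(is_krom(clause))           # False: three literals
print(is_horn(clause))           # True
print(is_definite_horn(clause))  # True
```

A CNF formula belongs to Krom, Horn, or DefHorn exactly when every one of its clauses passes the corresponding test.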
For a propositional formula $\varphi$, Var$(\varphi)$ denotes the set of all variables occurring in $\varphi$. Moreover, for a set $X$ of variables, Lit$(X)$ denotes the set of all literals over variables in $X$, i.e., Lit$(X) = X \cup \{ \neg x : x \in X \}$. We use the standard notion of (truth) assignments for Boolean formulas and truth of a formula under such an assignment. For any formula $\varphi$ and any truth assignment $\alpha$, we let $\varphi[\alpha]$ denote the formula obtained from $\varphi$ by instantiating variables in the domain of $\alpha$ with $\alpha$ and simplifying the formula accordingly. By a slight abuse of notation, if $\alpha$ is defined on all of Var$(\varphi)$, we let $\varphi[\alpha]$ denote the truth value of $\varphi$ under $\alpha$.
Computational Complexity Theory
We assume the reader to be familiar with the complexity classes P and NP, and with basic notions such as polynomialtime reductions. For more details, we refer to textbooks on computational complexity theory (see, e.g., ? ?).
In this paper, we also refer to the complexity classes $\Theta_2^p$ and $\Delta_2^p$, which consist of all decision problems that can be solved by a polynomial-time algorithm that queries an NP oracle $O(\log n)$ times or polynomially many times, respectively. Formally, algorithms with access to an oracle are defined as follows. Let $Q$ be a decision problem. A Turing machine $M$ with access to a $Q$-oracle is a Turing machine with a dedicated oracle tape and dedicated states $q_{\text{query}}$, $q_{\text{yes}}$ and $q_{\text{no}}$. Whenever $M$ is in the state $q_{\text{query}}$, it does not proceed according to the transition relation, but instead it transitions into the state $q_{\text{yes}}$ if the oracle tape contains a string $y$ that is a yes-instance for the problem $Q$, i.e., if $y \in Q$, and it transitions into the state $q_{\text{no}}$ if $y \notin Q$. Intuitively, the oracle solves arbitrary instances of $Q$ in a single time step. The class $\Theta_2^p$ (resp. $\Delta_2^p$) consists of all decision problems $P$ for which there exists a deterministic Turing machine $M$ with access to an NP oracle that decides for each instance $x$ of size $n$ whether $x \in P$ in time polynomial in $n$ by querying the oracle at most $O(\log n)$ times (resp. polynomially many times).
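A typical way to stay within $O(\log n)$ oracle queries is binary search over a numeric quantity. The sketch below (illustrative, not from the paper) finds the maximum number of variables that can simultaneously be set to true in a model of a CNF formula; the NP oracle is replaced by a brute-force stand-in for a real SAT solver:

```python
from itertools import product

# Oracle question (NP): does cnf have a model setting at least t
# variables to true? Answered here by brute force over assignments.
def satisfies(cnf, assignment):
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in cnf)

def oracle(cnf, n, t):
    return any(satisfies(cnf, dict(enumerate(bits, start=1))) and sum(bits) >= t
               for bits in product([False, True], repeat=n))

def max_true_vars(cnf, n):
    lo, hi = 0, n                  # invariant: the answer lies in [lo, hi]
    while lo < hi:                 # O(log n) oracle queries in total
        mid = (lo + hi + 1) // 2
        if oracle(cnf, n, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# (x1 or x2) and (not-x1 or not-x2): exactly one of x1, x2 true, x3 free.
print(max_true_vars([[1, 2], [-1, -2]], 3))  # 2
```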
Let $\mathcal{C}$ be a class of propositional formulas. The following problem is complete for the class $\Theta_2^p$ under polynomial-time reductions when $\mathcal{C}$ is the class of all propositional formulas (?; ?; ?).
MaxModel
Instance: A satisfiable propositional formula $\varphi$, and a variable $x \in \mathrm{Var}(\varphi)$.
Question: Is there a model of $\varphi$ that sets a maximal number of variables in Var$(\varphi)$ to true (among all models of $\varphi$) and that sets $x$ to true?
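For small instances, this question can be decided by exhaustive search; the sketch below (illustrative only, exponential-time) enumerates all models and inspects those with a maximum number of true variables:

```python
from itertools import product

# Brute-force MaxModel: among the models of cnf (over variables 1..n)
# with a maximum number of true variables, is there one setting x true?
def max_model(cnf, n, x):
    def satisfies(bits):
        return all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf)

    models = [bits for bits in product([False, True], repeat=n)
              if satisfies(bits)]
    best = max(sum(m) for m in models)  # the formula is assumed satisfiable
    return any(m[x - 1] for m in models if sum(m) == best)

# (not-x1 or not-x2): the maximum models set exactly one of x1, x2 true,
# so the answer is True for both variables.
print(max_model([[-1, -2]], 2, 1))  # True
print(max_model([[-1, -2]], 2, 2))  # True
```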
For any class $\mathcal{C}$ of propositional formulas, we let MaxModel($\mathcal{C}$) denote the problem MaxModel restricted to formulas $\varphi \in \mathcal{C}$.
Acknowledgments.
This work was supported by the Austrian Science Fund (FWF), project J4047.
References
 [Arora and Barak 2009] Arora, S., and Barak, B. 2009. Computational Complexity – A Modern Approach. Cambridge University Press.
 [Benade et al. 2017] Benade, G.; Nath, S.; Procaccia, A. D.; and Shah, N. 2017. Preference elicitation for participatory budgeting. In Proc. of the 31st AAAI Conf. on Artificial Intelligence (AAAI 2017), 376–382. AAAI Press.
 [Bodlaender 1996] Bodlaender, H. L. 1996. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput. 25(6):1305–1317.
 [Bova et al. 2015] Bova, S.; Capelli, F.; Mengel, S.; and Slivovsky, F. 2015. On compiling CNFs into structured deterministic DNNFs. In Proc. of the 18th Intern. Conf. on Theory and Applications of Satisfiability Testing (SAT 2015), 199–214.
 [Brams, Kilgour, and Sanver 2004] Brams, S. J.; Kilgour, D. M.; and Sanver, M. R. 2004. A minimax procedure for negotiating multilateral treaties. In Proc. of the 2004 Annual Meeting of the American Political Science Association.
 [Cadoli et al. 2002] Cadoli, M.; Donini, F. M.; Liberatore, P.; and Schaerf, M. 2002. Preprocessing of intractable problems. Inf. Comput. 176(2):89–120.
 [Chen and Toda 1995] Chen, Z.Z., and Toda, S. 1995. The complexity of selecting maximal solutions. Inf. Comput. 119:231–239.
 [Cygan et al. 2015] Cygan, M.; Fomin, F. V.; Kowalik, L.; Lokshtanov, D.; Marx, D.; Pilipczuk, M.; Pilipczuk, M.; and Saurabh, S. 2015. Parameterized Algorithms. Springer.
 [Darwiche and Marquis 2002] Darwiche, A., and Marquis, P. 2002. A knowledge compilation map. J. Artif. Intell. Res. 17:229–264.
 [Darwiche 2004] Darwiche, A. 2004. New advances in compiling CNF into decomposable negation normal form. In de Mántaras, R. L., and Saitta, L., eds., Proc. of the 16th European Conf. on Artificial Intelligence, (ECAI 2004), 328–332. IOS Press.
 [Darwiche 2014] Darwiche, A. 2014. Tractable knowledge representation formalisms. In Bordeaux, L.; Hamadi, Y.; and Kohli, P., eds., Tractability: Practical Approaches to Hard Problems. Cambridge University Press. 141–172.
 [Dietrich and List 2007] Dietrich, F., and List, C. 2007. Arrow’s theorem in judgment aggregation. Social Choice and Welfare 29(1):19–33.
 [Dietrich 2007] Dietrich, F. 2007. A generalised model of judgment aggregation. Social Choice and Welfare 28(4):529–565.
 [Downey and Fellows 2013] Downey, R. G., and Fellows, M. R. 2013. Fundamentals of Parameterized Complexity. Springer Verlag.
 [Endriss and de Haan 2015] Endriss, U., and de Haan, R. 2015. Complexity of the winner determination problem in judgment aggregation: Kemeny, Slater, Tideman, Young. In Proc. of the 14th Intern. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2015).
 [Endriss et al. 2016] Endriss, U.; Grandi, U.; de Haan, R.; and Lang, J. 2016. Succinctness of languages for judgment aggregation. In Proc. of the 15th Intern. Conf. on the Principles of Knowledge Representation and Reasoning (KR 2016). AAAI Press.
 [Endriss, Grandi, and Porello 2012] Endriss, U.; Grandi, U.; and Porello, D. 2012. Complexity of judgment aggregation. J. Artif. Intell. Res. 45:481–514.
 [Endriss 2016] Endriss, U. 2016. Judgment aggregation. In Brandt, F.; Conitzer, V.; Endriss, U.; Lang, J.; and Procaccia, A., eds., Handbook of Computational Social Choice. Cambridge University Press, Cambridge.
 [Gaspers and Szeider 2012] Gaspers, S., and Szeider, S. 2012. Backdoors to satisfaction. In Bodlaender, H. L.; Downey, R.; Fomin, F. V.; and Marx, D., eds., The Multivariate Algorithmic Revolution and Beyond, 287–317. Springer Verlag.
 [Gergov and Meinel 1994] Gergov, J., and Meinel, C. 1994. Efficient analysis and manipulation of OBDDs can be extended to FBDDs. IEEE Transactions on Computers 43(10):1197–1209.
 [Grandi and Endriss 2013] Grandi, U., and Endriss, U. 2013. Lifting integrity constraints in binary aggregation. Artificial Intelligence 199:45–66.
 [Grandi 2012] Grandi, U. 2012. Binary Aggregation with Integrity Constraints. Ph.D. Dissertation, University of Amsterdam.
 [Grossi and Pigozzi 2014] Grossi, D., and Pigozzi, G. 2014. Judgment Aggregation: A Primer. Morgan & Claypool Publishers.
 [de Haan and Slavkovik 2017] de Haan, R., and Slavkovik, M. 2017. Complexity results for aggregating judgments using scoring or distancebased procedures. In Proc. of the 16th International Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2017).
 [Kimmig, Van den Broeck, and De Raedt 2017] Kimmig, A.; Van den Broeck, G.; and De Raedt, L. 2017. Algebraic model counting. J. of Applied Logic 22:46–62.
 [Krentel 1988] Krentel, M. W. 1988. The complexity of optimization problems. J. of Computer and System Sciences 36(3):490–509.
 [Lang and Slavkovik 2014] Lang, J., and Slavkovik, M. 2014. How hard is it to compute majoritypreserving judgment aggregation rules? In Proc. of the 21st European Conf. on Artificial Intelligence (ECAI 2014). IOS Press.
 [Lang et al. 2017] Lang, J.; Pigozzi, G.; Slavkovik, M.; van der Torre, L.; and Vesic, S. 2017. A partial taxonomy of judgment aggregation rules and their properties. Social Choice and Welfare 48(2):327–356.
 [Li, Ma, and Wang 2002] Li, M.; Ma, B.; and Wang, L. 2002. On the closest string and substring problems. J. of the ACM 49(2):157–171.
 [List and Pettit 2002] List, C., and Pettit, P. 2002. Aggregating sets of judgments: An impossibility result. Economics and Philosophy 18(1):89–110.
 [Marquis 2015] Marquis, P. 2015. Compile! In Bonet, B., and Koenig, S., eds., Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI 2015), 4112–4118. AAAI Press.
 [Muise et al. 2012] Muise, C. J.; McIlraith, S. A.; Beck, J. C.; and Hsu, E. I. 2012. Dsharp: Fast dDNNF compilation with sharpSAT. In Kosseim, L., and Inkpen, D., eds., Proc. of the 25th Canadian Conf. on Artificial Intelligence (Canadian AI 2012), 356–361. Springer Verlag.
 [Oztok and Darwiche 2014a] Oztok, U., and Darwiche, A. 2014a. CVwidth: A new complexity parameter for CNFs. In Proc. of the 21st European Conf. on Artificial Intelligence (ECAI 2014), 675–680. IOS Press.
 [Oztok and Darwiche 2014b] Oztok, U., and Darwiche, A. 2014b. On compiling CNF into decisionDNNF. In Proc. of the 20th Intern. Conf. on Principles and Practice of Constraint Programming (CP 2014), 42–57. Springer Verlag.
 [Rothe 2016] Rothe, J. 2016. Economics and Computation. Springer.
 [Wagner 1990] Wagner, K. W. 1990. Bounded query classes. SIAM J. Comput. 19(5):833–846.
Appendix B Additional Material: Lemmas and Proofs
As additional material, we provide proofs for all statements in the main paper marked with a star ($\star$), as well as additional lemmas used for these proofs.
Lemma 20.
MaxModel(3CNF) is $\Theta_2^p$-complete.
Proof.
We sketch a reduction from MaxModel for arbitrary propositional formulas. Let $(\varphi, x)$ be an instance of MaxModel. By using the standard Tseitin transformation, we can transform $\varphi$ into a 3CNF formula $\psi$ with $\mathrm{Var}(\psi) = \mathrm{Var}(\varphi) \cup Z$ for some set $Z$ of new variables, such that for each truth assignment $\alpha : \mathrm{Var}(\varphi) \to \{0, 1\}$ it holds that $\varphi[\alpha]$ is true if and only if there exists a truth assignment $\beta : Z \to \{0, 1\}$ such that $\psi[\alpha \cup \beta]$ is true.
We then transform $\psi$ into a 3CNF formula $\psi'$ with $\mathrm{Var}(\psi') = \mathrm{Var}(\psi) \cup Z'$, for the set $Z' = \{ z' : z \in Z \}$ of fresh variables, such that the maximal models of $\psi'$ correspond exactly to the maximal models of $\varphi$. We define $\psi'$ as follows:
$$\psi' = \psi \cup \{\, \{z, z'\},\ \{\neg z, \neg z'\} : z \in Z \,\}.$$
Each model of $\psi'$ then must set the same number of variables in $Z \cup Z'$ to true, namely $|Z|$ of them. ∎
Lemma 21.
is complete.
Proof.
We give a reduction from MaxModel(3CNF). Let $(\varphi, x)$ be an instance of MaxModel(3CNF), where $\varphi$ consists of the clauses $d_1, \dots, d_m$ over the variables $x_1, \dots, x_n$. Without loss of generality, we may assume that each clause is of size exactly 3, and that $\varphi$ is satisfied by the "all zeroes" assignment, that is, by the assignment $\alpha$ such that $\alpha(x_i) = 0$ for all $i$. We construct an instance as follows.
For each clause , we introduce fresh variables and , for and . Moreover, for each , we introduce fresh variables , , for and for . We then let consist of the following clauses. For each , we add the clauses:
ensuring that at most one variable among can be true. Moreover, for each and each , we add the clauses:
ensuring that the variables and get the same truth value, for each and each .
Then, for each , we add the clause , ensuring that at most one variable among is true. Moreover for each we add the clauses:
and:
ensuring that the variables and get the same truth value, for each and each .
Finally, we add the following clauses to , for each clause of . Let be a clause of , and let be the th literal in , for . If for some , we add the clause , and if for some , we add the clause .
To finish our construction, we let , for the unique such that .
Before we show correctness of this reduction, we establish several other properties of the formula . Any maximal model of sets at least variables to true. Since the “all zeroes” assignment satisfies , we can satisfy by setting all variables to true, setting all variables to false, and for each setting all variables to true for some , and setting all variables to false for the other . This model of sets variables to true.
Moreover, by construction of