A Polynomial-time Fragment of Epistemic Probabilistic Argumentation (Technical Report)


Nico Potyka
University of Osnabrück, Germany
npotyka@uos.de
Abstract

Probabilistic argumentation allows reasoning about argumentation problems in a way that is well-founded by probability theory. However, in practice, this approach can be severely limited by the fact that probabilities are defined by adding an exponential number of terms. We show that this exponential blowup can be avoided in an interesting fragment of epistemic probabilistic argumentation and that some computational problems that have been considered intractable can be solved in polynomial time. We give efficient convex programming formulations for these problems and explore how far our fragment can be extended without losing tractability.

Keywords:
Probabilistic Argumentation, Algorithms for Probabilistic Argumentation, Complexity of Probabilistic Argumentation

1 Introduction

Abstract argumentation Dung (1995) deals with the question of which arguments a rational agent can accept. This question is answered independently of the content of the arguments, based only on their relationships. To this end, abstract argumentation problems can be modeled as graphs, where nodes correspond to arguments and edges to special relations like attack or support. In the basic setting introduced in Dung (1995), only attack relations were considered. In bipolar argumentation, this framework is extended with support relations Amgoud et al. (2004); Boella et al. (2010); Cayrol and Lagasquie-Schiex (2013); Cohen et al. (2014). Another useful extension is to go beyond the classical two-valued view that arguments can only be accepted or rejected. Examples include ranking frameworks that can be based on fixed point equations Besnard and Hunter (2001); Leite and Martins (2011); Barringer et al. (2012); Correia et al. (2014) or the graph structure Cayrol and Lagasquie-Schiex (2005); Amgoud and Ben-Naim (2013), and weighted argumentation frameworks Baroni et al. (2015); Rago et al. (2016); Amgoud and Ben-Naim (2017); Mossakowski and Neuhaus (2018); Potyka (2018). Probabilistic argumentation frameworks express uncertainty by building on probability theory and probabilistic reasoning methods. Uncertainty can be introduced, for example, over possible worlds, over subgraphs of the argumentation graph or over classical extensions Dung and Thang (2010); Li et al. (2011); Rienstra (2012); Hunter (2014); Doder and Woltran (2014); Polberg and Doder (2014); Thimm et al. (2017); Kido and Okamoto ([n. d.]); Rienstra et al. (2018); Thimm et al. (2018); Riveret et al. (2018). For the subgraph-based approach, the computational complexity has been studied extensively in Fazzinga et al. (2013, 2018).

Our focus here is on the epistemic approach to probabilistic argumentation that has evolved from work in Thimm (2012); Hunter (2013). The idea is to consider probability functions over possible worlds in order to assign degrees of belief to arguments. Based on the relationships between arguments, the possible degrees of belief are then restricted by semantical constraints. Two basic computational problems have been introduced in Hunter and Thimm (2016). The satisfiability problem asks whether a given set of semantical constraints over an argumentation graph can be satisfied by a probability function. The entailment problem is to answer queries about the probability of arguments. To this end, lower and upper bounds on the probability of the argument are computed based on the probability functions that satisfy the given semantical constraints. Based on their close relationship to problems considered in probabilistic reasoning, it has been conjectured that these problems are intractable. However, as we will explain, both problems can actually be solved in polynomial time. Intuitively, the reason is that the semantical constraints can only talk about atomic probability statements. For this reason, reasoning with probability functions over possible worlds turns out to be equivalent to reasoning with functions that assign probabilities to arguments directly. We call these functions probability labellings as they can be seen as generalizations of labellings in classical abstract argumentation Caminada and Gabbay (2009) that, intuitively, label arguments as rejected (probability $0$), accepted (probability $1$) or undecided (probability $0.5$).

We explain the epistemic probabilistic argumentation approach from Thimm (2012); Hunter (2013); Hunter and Thimm (2016) in more detail in Section 2 and introduce a slight generalization of the computational problems considered in Hunter and Thimm (2016). Even more general variants of these problems have been considered in Hunter et al. (2018), but these variants are too general to obtain polynomial runtime guarantees, as we will explain in Sections 4 and 5. In Section 3, we show that reasoning with probability labellings is equivalent to reasoning with probability functions when only atomic probability statements are considered and use this observation to show that both the satisfiability and the entailment problem considered in Hunter and Thimm (2016) and their generalizations can be solved in polynomial time. We then look at how far we can extend our language towards the language considered in Hunter et al. (2018) by connecting arguments or constraints with logical connectives. In Section 4, we look at more expressive queries. We cannot avoid an exponential blowup when considering arbitrary queries. However, we show that when applying the principle of maximum entropy, conjunctive queries can still be answered in polynomial time. In particular, we show that a compact representation of the maximum entropy probability function that satisfies the constraints can be computed in polynomial time. In Section 5, we look at more expressive constraints. We find that the constraint language cannot be extended much further: as soon as we allow connecting two arguments or their negations by a single conjunction or disjunction inside probability statements, or allow connecting two constraints disjunctively, the satisfiability problem becomes intractable.

2 Background

We consider bipolar argumentation frameworks (BAFs) $(\mathcal{A}, \mathit{Att}, \mathit{Sup})$ consisting of a set of arguments $\mathcal{A}$, an attack relation $\mathit{Att} \subseteq \mathcal{A} \times \mathcal{A}$ and a support relation $\mathit{Sup} \subseteq \mathcal{A} \times \mathcal{A}$. $\mathit{Att}(A) = \{B \in \mathcal{A} \mid (B, A) \in \mathit{Att}\}$ denotes the set of attackers of an argument $A$ and $\mathit{Sup}(A) = \{B \in \mathcal{A} \mid (B, A) \in \mathit{Sup}\}$ denotes its supporters. We visualize bipolar argumentation frameworks as graphs, where arguments are denoted as nodes, solid edges denote attack relations and dashed edges denote support relations. Figure 1 shows an example BAF with four arguments $A$, $B$, $C$ and $D$.

Figure 1: A simple example BAF: $A$ and $B$ attack each other, $C$ supports $A$, and $D$ supports $C$ and attacks $B$.

We define a possible world as a subset of arguments $w \subseteq \mathcal{A}$. Intuitively, $w$ contains the arguments that are accepted. As usual, $2^{\mathcal{A}}$ denotes the set of all subsets of $\mathcal{A}$, that is, the set of all possible worlds. In order to talk about an agent's beliefs in arguments' acceptance states, we can consider probability functions $P: 2^{\mathcal{A}} \to [0,1]$ such that $\sum_{w \subseteq \mathcal{A}} P(w) = 1$. We denote the set of all probability functions over $2^{\mathcal{A}}$ by $\mathcal{P}_{\mathcal{A}}$. The probability of an argument $A$ under $P$ is defined by adding the probabilities of all worlds in which $A$ is accepted, that is, $P(A) = \sum_{w \subseteq \mathcal{A},\, A \in w} P(w)$. $P(A)$ can be understood as a degree of belief of an agent, where $P(A) = 1$ means complete acceptance and $P(A) = 0$ means complete rejection.
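
To make the definition concrete, the following Python sketch (not part of the original report; all names are illustrative) computes the probability of an argument by summing over all possible worlds. It also makes the exponential blowup explicit: the sum ranges over all $2^{|\mathcal{A}|}$ subsets of the argument set.

```python
from itertools import chain, combinations

def all_worlds(args):
    """Enumerate all subsets of the argument set (exponentially many)."""
    return chain.from_iterable(combinations(args, r) for r in range(len(args) + 1))

def prob_of_argument(P, args, a):
    """P maps possible worlds (frozensets) to probabilities; sum over worlds accepting a."""
    return sum(P.get(frozenset(w), 0.0) for w in all_worlds(args) if a in w)

# Two arguments and a distribution over the four possible worlds.
args = ["A", "B"]
P = {frozenset(): 0.1, frozenset({"A"}): 0.4,
     frozenset({"B"}): 0.2, frozenset({"A", "B"}): 0.3}
print(prob_of_argument(P, args, "A"))  # 0.4 + 0.3 = 0.7
```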

Given an argumentation graph, a probability function should maintain reasonable relationships between the probabilities of arguments based on their relationships in the graph. For example, if an argument is accepted, its attackers should not be accepted. In order to capture this intuition, several constraints have been introduced in the literature that can be imposed on the probability functions. For the satisfiability and entailment problem in Hunter and Thimm (2016), the following constraints have been considered (for attack-only graphs).

COH:

$P$ is called coherent if for all $A, B \in \mathcal{A}$ with $(B, A) \in \mathit{Att}$, we have $P(A) \le 1 - P(B)$.

SFOU:

$P$ is called semi-founded if $P(A) \ge 0.5$ for all $A \in \mathcal{A}$ with $\mathit{Att}(A) = \emptyset$.

FOU:

$P$ is called founded if $P(A) = 1$ for all $A \in \mathcal{A}$ with $\mathit{Att}(A) = \emptyset$.

SOPT:

$P$ is called semi-optimistic if $P(A) \ge 1 - \sum_{B \in \mathit{Att}(A)} P(B)$ for all $A \in \mathcal{A}$ with $\mathit{Att}(A) \neq \emptyset$.

OPT:

$P$ is called optimistic if $P(A) \ge 1 - \sum_{B \in \mathit{Att}(A)} P(B)$ for all $A \in \mathcal{A}$.

JUS:

$P$ is called justifiable if $P$ is coherent and optimistic.

The intuition for these constraints comes from the idea that probability $0.5$ represents indifference, whereas probabilities smaller (larger) than $0.5$ tend towards rejection (acceptance) of the argument. Coherence imposes an upper bound on the beliefs in arguments based on the beliefs in their attackers. Semi-Foundedness says that an agent should not tend to reject an argument if there is no reason for this. Foundedness even demands that the argument should be fully accepted in this case. Semi-optimistic and Optimistic give lower bounds on the belief in an argument based on the beliefs in its attackers. Usually, not all constraints are employed, but a subset is selected that seems reasonable for a particular application.
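
As a small illustration, the following Python sketch checks coherence and foundedness for a given assignment of degrees of belief. It is not from the report; it assumes the definitions as stated above and encodes attacks as (attacker, attacked) pairs.

```python
def is_coherent(p, attacks):
    """COH: P(A) <= 1 - P(B) for every attack (B, A); p maps arguments to degrees of belief."""
    return all(p[a] <= 1 - p[b] + 1e-9 for (b, a) in attacks)

def is_founded(p, attacks):
    """FOU: P(A) = 1 for every argument without attackers."""
    attacked = {a for (_, a) in attacks}
    return all(abs(p[a] - 1.0) < 1e-9 for a in p if a not in attacked)

# Degrees of belief for the BAF of Figure 1 (attacks: A<->B and D->B).
p = {"A": 1.0, "B": 0.0, "C": 1.0, "D": 1.0}
attacks = [("A", "B"), ("B", "A"), ("D", "B")]
print(is_coherent(p, attacks), is_founded(p, attacks))  # True True
```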

Example

If we demand COH and FOU for the BAF in Figure 1, we get $P(C) = 1$ and $P(D) = 1$ from FOU. From COH, we get $P(B) \le 1 - P(A)$, $P(A) \le 1 - P(B)$ and $P(B) \le 1 - P(D)$. Since $P(D) = 1$, the last inequality implies $P(B) = 0$.

One could define natural dual constraints for support-only graphs, for example:

S-COH:

$P$ is called s-coherent if for all $A, B \in \mathcal{A}$ with $(B, A) \in \mathit{Sup}$, we have $P(A) \ge P(B)$.

PES:

$P$ is called pessimistic if $P(A) \le \sum_{B \in \mathit{Sup}(A)} P(B)$ for all $A \in \mathcal{A}$.

Example

If we add S-COH to our previous example, we get $P(A) \ge P(C)$ and $P(C) \ge P(D)$. Since we already know that $P(C) = 1$, we can conclude $P(A) = 1$. Overall, the constraints imply $P(A) = P(C) = P(D) = 1$ and $P(B) = 0$.

If both attack and support relations are present, one may want to further refine constraints like Optimism and Pessimism to take account of both attackers and supporters simultaneously. In order to provide more flexibility, a general constraint language has been considered recently that captures all of the previous examples Hunter et al. (2018). This language allows constraints over complex formulas of arguments and connecting constraints via logical connectives. However, for now, we consider only a simple fragment here for which we can obtain polynomial performance guarantees.

Definition (Linear Atomic Constraint, Satisfiability).

A linear atomic constraint is an expression of the form $c_1 \cdot P(A_1) + \dots + c_k \cdot P(A_k) \le c_0$, where $c_0, c_1, \dots, c_k \in \mathbb{Q}$ and $A_1, \dots, A_k \in \mathcal{A}$. A probability function $P$ satisfies a linear atomic constraint iff $\sum_{i=1}^{k} c_i \cdot P(A_i) \le c_0$. $P$ satisfies a set of linear atomic constraints $C$, denoted as $P \models C$, iff it satisfies all constraints in $C$. In this case, $C$ is called satisfiable.

Note that $\ge$ and $=$ can be expressed as well. For $\ge$, just note that $\sum_{i=1}^{k} c_i \cdot P(A_i) \ge c_0$ is equivalent to $\sum_{i=1}^{k} -c_i \cdot P(A_i) \le -c_0$. For $=$, note that $\le$ and $\ge$ together are equivalent to $=$. We merely restrict our language to constraints with $\le$ in order to keep the notation simple. Notice that this restriction is also not important for complexity considerations because the number of constraints changes only by a constant factor. All semantical constraints that we mentioned before are indeed linear atomic constraints.

Inspired by the probabilistic entailment problem from probabilistic logic Nilsson (1986); Georgakopoulos et al. (1988); Hansen and Jaumard (2000), the authors in Hunter and Thimm (2016) considered the following reasoning problems: Given a partial probability assignment (constraints of the form $P(A) = p_A$ for some arguments $A \in \mathcal{A}$ and $p_A \in [0,1]$) and a subset of the semantical constraints (linear atomic constraints),

  1. decide whether there is a probability function that satisfies the partial probability assignment and the semantical constraints,

  2. compute lower and upper bounds on the probability of an argument among the probability functions that satisfy the partial probability assignment and the semantical constraints,

  3. decide whether given lower and upper bounds on the probability of an argument are taken by probability functions that satisfy the partial probability assignment and the semantical constraints.

Because of their similarity to intractable probabilistic reasoning problems, it was conjectured that these problems are intractable as well. However, as we will explain in the next section, all three problems can be solved in polynomial time.

Before doing so, we make the computational problems more precise. We will only generalize the first (satisfiability) and second (entailment) problem from Hunter and Thimm (2016). Since the second problem can be solved in polynomial time, there is no need to look at the third problem, which is just a decision variant of the second problem. Formally, we consider the following two problems:

PArgAtSAT:

Given a finite set of linear atomic constraints $C$, decide whether it is satisfiable.

PArgAtENT:

Given a finite set of satisfiable linear atomic constraints $C$ and an argument $A \in \mathcal{A}$, compute lower and upper bounds on the probability of $A$ among the probability functions that satisfy $C$. More precisely, solve the two optimization problems

$$\min_{P \in \mathcal{P}_{\mathcal{A}}} P(A) \qquad \text{and} \qquad \max_{P \in \mathcal{P}_{\mathcal{A}}} P(A)$$

such that

$$P \models C.$$

PArg stands for probabilistic argumentation, At for the restriction to linear atomic constraints and SAT and ENT stand for satisfiability and entailment, respectively. Notice that the computational problems from Hunter and Thimm (2016) are indeed a special case because the partial probability assignments can just be encoded as constraints in $C$.

Example

Consider the BAF in Figure 1. Say our partial probability assignment assigns probability to and to . These assignments correspond to the two linear constraints and . Say we also impose COH. Then, we additionally have the constraints and . Taken together, these constraints imply that every probability function that satisfies all constraints, must satisfy , (partial assignment constraints), and (follow with coherence constraints). Note that when also adding the foundedness constraints and , the set of constraints becomes unsatisfiable.

3 Probability labellings and Two Polynomial-time Algorithms

We define a probability labelling as a function $\ell: \mathcal{A} \to [0,1]$. That is, a probability labelling assigns a degree of belief to arguments directly, rather than in an indirect way using possible worlds. $\mathcal{L}_{\mathcal{A}}$ denotes the set of all probability labellings over $\mathcal{A}$. We will now show that probability labellings correspond to equivalence classes of probability functions and that by restricting to these equivalence classes (represented by probability labellings), we can solve PArgAtSAT and PArgAtENT in polynomial time.

We call two probability functions $P_1, P_2 \in \mathcal{P}_{\mathcal{A}}$ atomically equivalent, denoted as $P_1 \equiv_a P_2$, iff $P_1(A) = P_2(A)$ for all $A \in \mathcal{A}$. Atomic equivalence is an equivalence relation. $[P]$ denotes the equivalence class of $P$ and $\mathcal{P}_{\mathcal{A}}/\!\equiv_a$ denotes the set of all equivalence classes. We first note that there is a one-to-one relationship between $\mathcal{P}_{\mathcal{A}}/\!\equiv_a$ and $\mathcal{L}_{\mathcal{A}}$.

Proposition

The function $\Lambda: \mathcal{P}_{\mathcal{A}}/\!\equiv_a \to \mathcal{L}_{\mathcal{A}}$ defined by $\Lambda([P]) = \ell_P$, where $\ell_P(A) = P(A)$ for all $A \in \mathcal{A}$, is a bijection.

In particular, for every labelling $\ell \in \mathcal{L}_{\mathcal{A}}$, there is a probability function $P_\ell \in \mathcal{P}_{\mathcal{A}}$ such that $\Lambda([P_\ell]) = \ell$ and $P_\ell(A) = \ell(A)$ for all $A \in \mathcal{A}$.

Proof.

First note that $\Lambda$ is well-defined: this is because, for all $P' \in [P]$, we have $P'(A) = P(A)$ for all $A \in \mathcal{A}$ by definition of $\equiv_a$.

$\Lambda$ is injective, for if $\Lambda([P_1]) = \Lambda([P_2])$, then $P_1(A) = P_2(A)$ for all $A \in \mathcal{A}$. That is, $P_1 \equiv_a P_2$ and $[P_1] = [P_2]$.

$\Lambda$ is also surjective. To see this, consider an arbitrary $\ell \in \mathcal{L}_{\mathcal{A}}$. Define $P_\ell$ via $P_\ell(w) = \prod_{A \in w} \ell(A) \cdot \prod_{A \in \mathcal{A} \setminus w} (1 - \ell(A))$ for all $w \subseteq \mathcal{A}$. We prove by induction over the number of arguments that $\sum_{w \subseteq \mathcal{A}} P_\ell(w) = 1$. For the base case, consider $\mathcal{A} = \{A_1\}$. Then $P_\ell(\emptyset) + P_\ell(\{A_1\}) = (1 - \ell(A_1)) + \ell(A_1) = 1$. For the induction step, consider $\mathcal{A} = \{A_1, \dots, A_{n+1}\}$ and let $\mathcal{A}' = \{A_1, \dots, A_n\}$. Then
\begin{align*}
\sum_{w \subseteq \mathcal{A}} P_\ell(w) &= \sum_{w \subseteq \mathcal{A}} \prod_{A \in w} \ell(A) \prod_{A \in \mathcal{A} \setminus w} (1 - \ell(A))\\
&= (1 - \ell(A_{n+1})) \cdot \sum_{w \subseteq \mathcal{A}'} \prod_{A \in w} \ell(A) \prod_{A \in \mathcal{A}' \setminus w} (1 - \ell(A))\\
&\quad + \ell(A_{n+1}) \cdot \sum_{w \subseteq \mathcal{A}'} \prod_{A \in w} \ell(A) \prod_{A \in \mathcal{A}' \setminus w} (1 - \ell(A)) = 1.
\end{align*}

In the second and third row, we partitioned the worlds into those that reject $A_{n+1}$ (second row) and those that accept $A_{n+1}$ (third row). Notice that the sums in the second and third row correspond to possible worlds over a set of arguments of size $n$, so that our induction hypothesis implies that they sum up to 1. Hence, $P_\ell$ is a probability function. Furthermore, for all $A_i \in \mathcal{A}$, we have

$$P_\ell(A_i) = \sum_{w \subseteq \mathcal{A},\, A_i \in w} P_\ell(w) = \ell(A_i) \cdot \sum_{w \subseteq \mathcal{A} \setminus \{A_i\}} \prod_{A \in w} \ell(A) \prod_{A \in (\mathcal{A} \setminus \{A_i\}) \setminus w} (1 - \ell(A)) = \ell(A_i),$$

where we used again the fact that the remaining sum has to sum up to $1$. Hence, $\Lambda([P_\ell]) = \ell$ and $\Lambda$ is also surjective and thus bijective. ∎

Intuitively, $\Lambda$ determines a compact representative for the equivalence class $[P]$, namely the probability labelling $\ell_P$. We say that a probability labelling $\ell$ satisfies a linear atomic constraint $\sum_{i=1}^{k} c_i \cdot P(A_i) \le c_0$ iff $\sum_{i=1}^{k} c_i \cdot \ell(A_i) \le c_0$. The following proposition explains that we can capture the set of all probability functions that satisfy a constraint by the set of labellings that satisfy the constraint.

Proposition

A linear atomic constraint is satisfied by a probability function $P$ if and only if the probability labelling $\ell_P$ and all $P' \in [P]$ satisfy the constraint.

Proof.

This follows immediately from the satisfaction definition and the observation that $P'(A) = \ell_P(A) = P(A)$ for all $A \in \mathcal{A}$ and all $P' \in [P]$. ∎

To begin with, we show that PArgAtSAT can be decided in polynomial time.

Proposition

PArgAtSAT can be solved in polynomial time. In particular, when given arguments $\mathcal{A} = \{A_1, \dots, A_n\}$ and constraints $C = \{\sum_{i=1}^{n} c_{j,i} \cdot P(A_i) \le c_{j,0} \mid 1 \le j \le m\}$, then $C$ is satisfiable if and only if the linear optimization problem

$$\min \sum_{j=1}^{m} \eta_j$$

such that

$$\sum_{i=1}^{n} c_{j,i} \cdot x_i - \eta_j \le c_{j,0} \quad (1 \le j \le m), \qquad 0 \le x_i \le 1 \quad (1 \le i \le n), \qquad \eta_j \ge 0 \quad (1 \le j \le m)$$

has minimum 0.

Proof.

First notice that every probability labelling $\ell$ corresponds to a vector $x \in [0,1]^n$ such that $x_i = \ell(A_i)$. The points $(x, \eta) \in [0,1]^n \times \mathbb{R}^m_{\ge 0}$ are intuitively composed of a labelling and a vector of slack variables that relax the constraints.

To begin with, we show that the optimization problem always has a minimum. Let $(x^0, \eta^0)$ be defined by $x^0_i = 0$ for all $i$ and $\eta^0_j = \max\{0, -c_{j,0}\}$ for all $j$. Then $(x^0, \eta^0)$ is a feasible solution because for all constraints, we get $\sum_{i=1}^{n} c_{j,i} \cdot x^0_i - \eta^0_j = -\eta^0_j \le c_{j,0}$. Hence, the feasible region is non-empty and the theory of linear programming implies that the minimum exists Bertsimas and Tsitsiklis (1997). In particular, since the objective $\sum_{j=1}^{m} \eta_j$ is non-negative, it is clear that the minimum can never be smaller than $0$.

We show next that the minimum is $0$ if and only if there is a labelling that satisfies $C$. Assume first that the minimum is $0$ and let $(x^*, \eta^*)$ be an optimal solution, so that $\eta^*_j = 0$ for all $j$. Consider $\ell$ defined by $\ell(A_i) = x^*_i$. We have $\sum_{i=1}^{n} c_{j,i} \cdot \ell(A_i) = \sum_{i=1}^{n} c_{j,i} \cdot x^*_i \le c_{j,0} + \eta^*_j = c_{j,0}$ for all $j$. Hence, $\ell$ satisfies $C$.

Conversely, assume that there is a probability labelling $\ell$ that satisfies $C$. Let $x$ be defined by $x_i = \ell(A_i)$ and consider the point $(x, 0)$. We have $\sum_{i=1}^{n} c_{j,i} \cdot x_i - 0 \le c_{j,0}$ for all $j$. Therefore, $(x, 0)$ is a feasible solution. In particular, it yields $0$ for the objective function and hence is minimal.

We know from the theory of linear programming that linear optimization problems can be solved in polynomial time with respect to the number of optimization variables and constraints Bertsimas and Tsitsiklis (1997). We have $n + m$ optimization variables and $n + m$ constraints (non-negativity constraints are free). Hence, we can decide in polynomial time whether there exists a probability labelling that satisfies $C$. If there is such a labelling $\ell$, then the probability function $P_\ell$ from Proposition 3 satisfies $C$ according to Proposition 3. Conversely, if there is no probability function that satisfies $C$, then there can be no labelling that satisfies it either. For if there was such a labelling $\ell$, then $P_\ell$ would satisfy $C$ as well. Hence, $C$ can be satisfied by a probability function if and only if the minimum of our linear optimization problem is $0$. Hence, PArgAtSAT can be solved in polynomial time by the given linear program. ∎
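
The linear program above can be set up directly with an off-the-shelf LP solver. The following Python sketch uses scipy's linprog; it is only an illustration of the encoding (the ProBabble implementation mentioned in Section 7 is based on CPLEX), and the constraint matrix encodes the Figure 1 example as used throughout this section.

```python
import numpy as np
from scipy.optimize import linprog

def satisfiable(coeffs, rhs, n_args, tol=1e-8):
    """Decide PArgAtSAT for constraints sum_i coeffs[j][i] * P(A_i) <= rhs[j].

    Variables are (x_1..x_n, eta_1..eta_m): a probability labelling plus one
    slack variable per constraint. The constraints are satisfiable iff the
    minimal total slack is 0.
    """
    m = len(coeffs)
    c = np.concatenate([np.zeros(n_args), np.ones(m)])             # minimize sum of slacks
    A_ub = np.hstack([np.array(coeffs, dtype=float), -np.eye(m)])  # coeffs . x - eta_j <= rhs[j]
    bounds = [(0.0, 1.0)] * n_args + [(0.0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=np.array(rhs, dtype=float), bounds=bounds, method="highs")
    return res.status == 0 and res.fun <= tol

# Figure 1 with argument order (A, B, C, D): COH for the attacks (the mutual
# attack between A and B yields a single inequality) and FOU for C and D.
coeffs = [[1, 1, 0, 0],   # P(A) + P(B) <= 1
          [0, 1, 0, 1],   # P(B) + P(D) <= 1
          [0, 0, -1, 0],  # P(C) >= 1
          [0, 0, 0, -1]]  # P(D) >= 1
rhs = [1, 1, -1, -1]
print(satisfiable(coeffs, rhs, 4))  # True, e.g. P(A)=P(C)=P(D)=1, P(B)=0
```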

We can apply similar ideas to show that PArgAtENT can be solved in polynomial time.

Proposition

PArgAtENT can be solved in polynomial time. In particular, when given arguments $\mathcal{A} = \{A_1, \dots, A_n\}$ and constraints $C = \{\sum_{i=1}^{n} c_{j,i} \cdot P(A_i) \le c_{j,0} \mid 1 \le j \le m\}$ such that $C$ is satisfiable, then the lower and upper bounds on the probability of $A_k$ are the results of the following linear optimization problems:

$$\min x_k \qquad \text{and} \qquad \max x_k$$

such that

$$\sum_{i=1}^{n} c_{j,i} \cdot x_i \le c_{j,0} \quad (1 \le j \le m), \qquad 0 \le x_i \le 1 \quad (1 \le i \le n).$$
Proof.

For concreteness and w.l.o.g. assume that we want to compute bounds on the probability of $A_1$. We look only at the minimization problem for computing the lower bound (for the maximization problem everything is completely analogous). That is, we consider the following linear optimization problem:

$$\min x_1$$

such that

$$\sum_{i=1}^{n} c_{j,i} \cdot x_i \le c_{j,0} \quad (1 \le j \le m), \qquad 0 \le x_i \le 1 \quad (1 \le i \le n).$$

By assumption, $C$ is satisfiable. Hence, the feasible region is non-empty and the theory of linear programming implies that the minimum exists and can be computed in polynomial time Bertsimas and Tsitsiklis (1997). The minimum found corresponds exactly to the smallest probability that is assigned to $A_1$ by a probability labelling that satisfies the constraints. To see this, note that if we take a minimal solution $x^*$, we can construct a labelling $\ell$ with $\ell(A_i) = x^*_i$ that satisfies the constraints as in the previous proof. In particular, $\ell(A_1) = x^*_1$. There can be no probability labelling $\ell'$ that satisfies the constraints and assigns a smaller probability to $A_1$ because each such labelling yields a feasible vector $x'$ with $x'_1 = \ell'(A_1) < x^*_1$, contradicting the minimality of $x^*$.

Similar to before, it follows that the minimum also corresponds to the smallest probability that is assigned to $A_1$ by a probability function that satisfies $C$. If the minimum is taken by a labelling $\ell$, we know that the corresponding probability function $P_\ell$ from Proposition 3 yields the same probability and satisfies $C$ according to Proposition 3. Hence, the minimum cannot be smaller than the probability taken by probability functions that satisfy $C$. Conversely, if there is a probability function $P$ that satisfies $C$ and gives $P(A_1) = p$, then the labelling $\ell_P$ gives $\ell_P(A_1) = p$ as well and satisfies $C$ according to Proposition 3. Hence, the minimum cannot be larger than the probability taken by probability functions that satisfy $C$ either, and so it must indeed be equal. Hence, PArgAtENT can be solved in polynomial time by the given linear program. ∎
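
Analogously, the entailment bounds can be computed with two small linear programs over the labelling variables only. A minimal sketch (again illustrative, reusing the constraint encoding of the Figure 1 example):

```python
import numpy as np
from scipy.optimize import linprog

def entailment_bounds(coeffs, rhs, n_args, k):
    """Lower/upper bound on P(A_k) over all labellings x with coeffs . x <= rhs and 0 <= x <= 1."""
    A_ub = np.array(coeffs, dtype=float)
    b_ub = np.array(rhs, dtype=float)
    box = [(0.0, 1.0)] * n_args
    obj = np.zeros(n_args)
    obj[k] = 1.0
    lo = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=box, method="highs")   # minimize P(A_k)
    hi = linprog(-obj, A_ub=A_ub, b_ub=b_ub, bounds=box, method="highs")  # maximize P(A_k)
    return lo.fun, -hi.fun

# COH + FOU encoding of the Figure 1 example, argument order (A, B, C, D).
coeffs = [[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, -1, 0], [0, 0, 0, -1]]
rhs = [1, 1, -1, -1]
print(entailment_bounds(coeffs, rhs, 4, 1))  # bounds on P(B): (0.0, 0.0)
```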

4 Complex Queries

Until now, we only looked at probabilities of arguments. However, the real power of probability functions is that they allow computing probabilities for arbitrary formulas over arguments. By a formula over a set of arguments $\mathcal{A}$, we mean an expression that is formed by connecting the arguments in $\mathcal{A}$ via logical connectives. Satisfaction of formulas by possible worlds is explained in the usual recursive way. For example, $w \models \neg F$ iff $w$ does not satisfy $F$ and $w \models F \wedge G$ iff $w$ satisfies both $F$ and $G$. The probability of a formula $F$ under $P$ is defined by adding the probabilities of all worlds that satisfy $F$, that is, $P(F) = \sum_{w \subseteq \mathcal{A},\, w \models F} P(w)$. Unfortunately, we now have to add an exponential number of terms. There is probably no general way to avoid this problem because the entailment problem can now be used to solve the propositional satisfiability problem. In order to make this precise, we define a 3CNF-query as a formula of the form $\bigwedge_{i=1}^{k} (l_{i,1} \vee l_{i,2} \vee l_{i,3})$ over arguments, where $k \in \mathbb{N}$ and $l_{i,j} \in \{A, \neg A \mid A \in \mathcal{A}\}$ for $1 \le i \le k$ and $1 \le j \le 3$.

Proposition

Let $C$ be a satisfiable set of linear atomic constraints over $\mathcal{A}$ and let $q$ be a 3CNF-query. Then the following problem is NP-complete: decide whether the upper bound on the probability of $q$ among the probability functions that satisfy $C$ is non-zero.

Proof.

For membership, we need a result from Linear Programming theory. Among the optimal solutions of an $N$-dimensional linear program, there must be one that satisfies $N$ linearly independent constraints with equality Bertsimas and Tsitsiklis (1997). In our context, this means that at least $2^{|\mathcal{A}|} - |C| - 1$ non-negativity constraints must be satisfied with equality. That is, among the optimal probability functions, there must be one that has at most $|C| + 1$ non-zero probabilities. Let $W^{+} = \{w \subseteq \mathcal{A} \mid P(w) > 0\}$. Then for arbitrary formulas $F$ over $\mathcal{A}$, $P(F) = \sum_{w \in W^{+},\, w \models F} P(w)$. Hence, the set of pairs $\{(w, P(w)) \mid w \in W^{+}\}$ provides a certificate of polynomial size such that checking the constraints and $P(q) > 0$ can be done in polynomial time.

For hardness, we give a polynomial-time reduction from 3SAT. Given a propositional 3CNF formula $F$ with atoms $a_1, \dots, a_n$, we introduce corresponding arguments $A_1, \dots, A_n$. Let $q$ be the query obtained from $F$ by replacing $a_i$ with $A_i$ for $1 \le i \le n$. We do not add any constraints, so that all $P \in \mathcal{P}_{\mathcal{A}}$ satisfy our constraints. Then the upper bound on the probability of $q$ is non-zero iff $F$ is satisfiable. To see this, note that if $F$ is satisfiable, there is an interpretation that satisfies $F$ and a corresponding possible world $w^*$ that satisfies $q$. Then the probability function with $P(w^*) = 1$ and $P(w) = 0$ for all other possible worlds gives $P(q) = 1 > 0$. Conversely, if $F$ is not satisfiable, $q$ is not satisfiable either and $P(q) = 0$ for all $P$ because the sum $\sum_{w \models q} P(w)$ does not contain any terms. ∎

There are, however, some interesting special cases that can be solved efficiently. One case is answering conjunctive queries under the principle of maximum entropy.

The entropy of a probability function $P$ over $2^{\mathcal{A}}$ is defined as $H(P) = -\sum_{w \subseteq \mathcal{A}} P(w) \cdot \log P(w)$. It can be seen as a measure of uncertainty. Indeed, the entropy is always non-negative and maximal if $P$ is the uniform distribution. Intuitively, by maximizing entropy among the probability functions that satisfy a set of constraints, we select the probability distribution that adds as little information as possible. The principle of maximum entropy has been justified by several characterizations with common-sense properties Johnson and Shore (1983); Jaynes (1983); Paris and Vencovská (1990); Kern-Isberner (2001).

For a probability labelling $\ell$ over arguments $\mathcal{A} = \{A_1, \dots, A_n\}$, we define its entropy as $H(\ell) = -\sum_{i=1}^{n} \big(\ell(A_i) \cdot \log \ell(A_i) + (1 - \ell(A_i)) \cdot \log(1 - \ell(A_i))\big)$. As we show next, $H(\ell)$ corresponds to the maximum entropy taken in the equivalence class $\Lambda^{-1}(\ell)$ and the maximum is taken by the corresponding probability function $P_\ell$.

Proposition

For every labelling $\ell \in \mathcal{L}_{\mathcal{A}}$, the probability function $P_\ell$ defined by $P_\ell(w) = \prod_{A \in w} \ell(A) \cdot \prod_{A \in \mathcal{A} \setminus w} (1 - \ell(A))$ for all $w \subseteq \mathcal{A}$, maximizes entropy among all $P \in [P_\ell]$. In particular, $H(P_\ell) = H(\ell)$.

Proof.

Consider an arbitrary probability function $P \in [P_\ell]$. For all formulas $F$, we let $\mathbb{1}_F$ denote the indicator function that yields $\mathbb{1}_F(w) = 1$ iff $w \models F$ and $0$ otherwise. Then we have
\begin{align*}
H(P_\ell) - H(P) &= -\sum_{w \subseteq \mathcal{A}} P_\ell(w) \log P_\ell(w) + \sum_{w \subseteq \mathcal{A}} P(w) \log P(w)\\
&= -\sum_{w \subseteq \mathcal{A}} P(w) \sum_{A \in \mathcal{A}} \big(\mathbb{1}_A(w) \log \ell(A) + (1 - \mathbb{1}_A(w)) \log (1 - \ell(A))\big) + \sum_{w \subseteq \mathcal{A}} P(w) \log P(w)\\
&= \sum_{w \subseteq \mathcal{A}} P(w) \log \frac{P(w)}{P_\ell(w)} \ge 0,
\end{align*}

where, for the second equality, we used the fact that $P(A) = \ell(A)$ for all $A \in \mathcal{A}$ and in the last row, we used the observation that the previous formula corresponds to the KL-divergence between two probability functions that is always non-negative. Furthermore, the KL-divergence is $0$ if and only if both arguments are equal Yeung (2008), that is, if and only if $P = P_\ell$. Therefore, $H(P_\ell) \ge H(P)$ and $H(P_\ell) > H(P)$ whenever $P \ne P_\ell$. In particular, $H(P_\ell) \ge H(P)$ for all $P \in [P_\ell]$. ∎

Hence, in order to compute the probability function with maximum entropy, we can just compute the labelling with maximum entropy. The corresponding probability function then maximizes entropy. This is the basic idea of the following proposition.

Proposition

Given a satisfiable finite set of linear atomic constraints $C$, the optimization problem

$$\max_{P \in \mathcal{P}_{\mathcal{A}}} H(P)$$

such that

$$P \models C$$

has a unique solution $P^{\mathit{ME}}_C$ and $\ell_{P^{\mathit{ME}}_C}$ is the unique solution of the optimization problem

$$\max_{\ell \in \mathcal{L}_{\mathcal{A}}} H(\ell)$$

such that

$$\ell \models C.$$

In particular, $\ell_{P^{\mathit{ME}}_C}$ can be computed in polynomial time.

Proof.

Both optimization problems have a strictly concave and continuous objective function. Maximizing such a function subject to consistent linear constraints yields a unique solution Nocedal and Wright (2006). In particular, these problems can be solved by interior-point methods in polynomial time in the number of optimization variables and constraints Boyd and Vandenberghe (2004). For the first problem, the number of optimization variables is exponential in the number of arguments, but for the second problem the number of optimization variables equals the number of arguments. Hence, the second problem can be solved in polynomial time. Since the solution $\ell^*$ of the second problem maximizes entropy among all probability labellings that satisfy $C$, and the probability distributions corresponding to the labellings maximize entropy among their equivalence classes according to Proposition 4, $P_{\ell^*}$ must equal $P^{\mathit{ME}}_C$. ∎
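
The maximum entropy labelling is the solution of a small convex program over the labelling variables. The Python sketch below uses scipy's SLSQP solver as a stand-in for the interior-point methods referenced in the proof; it is purely illustrative (names and solver choice are ours, not the report's implementation).

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_labelling(coeffs, rhs, n_args):
    """Labelling with maximum entropy among those satisfying coeffs . x <= rhs and 0 <= x <= 1."""
    A = np.array(coeffs, dtype=float)
    b = np.array(rhs, dtype=float)

    def neg_entropy(x):
        x = np.clip(x, 1e-12, 1 - 1e-12)  # avoid log(0) at the boundary
        return float(np.sum(x * np.log(x) + (1 - x) * np.log(1 - x)))

    constraints = [{"type": "ineq", "fun": lambda x: b - A @ x}]  # feasible iff b - A x >= 0
    x0 = np.full(n_args, 0.5)
    res = minimize(neg_entropy, x0, bounds=[(0.0, 1.0)] * n_args,
                   constraints=constraints, method="SLSQP")
    return res.x

# COH + FOU encoding of the Figure 1 example, argument order (A, B, C, D).
coeffs = [[1, 1, 0, 0], [0, 1, 0, 1], [0, 0, -1, 0], [0, 0, 0, -1]]
rhs = [1, 1, -1, -1]
print(max_entropy_labelling(coeffs, rhs, 4))  # approximately [0.5, 0.0, 1.0, 1.0]
```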

Having computed $\ell_{P^{\mathit{ME}}_C}$, we have a compact representation of $P^{\mathit{ME}}_C$. Of course, constructing $P^{\mathit{ME}}_C$ explicitly would take exponential time again. Fortunately, for some queries, we can just work with the compact representation directly. This includes, in particular, conjunctive queries, as we explain in the following proposition.

Proposition

Let $\mathcal{A} = \{A_1, \dots, A_n\}$, let $C$ be a satisfiable set of linear atomic constraints and let $q = l_1 \wedge \dots \wedge l_k$ be a conjunction of literals, that is, $l_j \in \{A_{i_j}, \neg A_{i_j}\}$ for some $A_{i_j} \in \mathcal{A}$, $1 \le j \le k$. Let $P^{\mathit{ME}}_C$ be the probability function that maximizes entropy among all probability functions that satisfy $C$. Then $P^{\mathit{ME}}_C(q)$ can be computed in polynomial time even if $P^{\mathit{ME}}_C$ is unknown. In particular,

$$P^{\mathit{ME}}_C(q) = \prod_{j:\, l_j = A_{i_j}} \ell^{\mathit{ME}}(A_{i_j}) \cdot \prod_{j:\, l_j = \neg A_{i_j}} \big(1 - \ell^{\mathit{ME}}(A_{i_j})\big),$$

where $\ell^{\mathit{ME}}$ is the probability labelling that maximizes entropy among all probability labellings that satisfy $C$.

Proof.

First note that $w \models A_i$ if $A_i \in w$ and $w \not\models A_i$ otherwise. Dually, $w \models \neg A_i$ if $A_i \notin w$ and $w \not\models \neg A_i$ otherwise. Let $\mathcal{A}_q = \{A_{i_1}, \dots, A_{i_k}\}$ denote the atoms occurring in $q$ and let $v_j = \ell^{\mathit{ME}}(A_{i_j})$ if $l_j = A_{i_j}$ and $v_j = 1 - \ell^{\mathit{ME}}(A_{i_j})$ if $l_j = \neg A_{i_j}$. We know from Proposition 4 that for all $w \subseteq \mathcal{A}$ with $w \models q$, we have

$$P^{\mathit{ME}}_C(w) = \prod_{j=1}^{k} v_j \cdot \prod_{A \in w \setminus \mathcal{A}_q} \ell^{\mathit{ME}}(A) \cdot \prod_{A \in (\mathcal{A} \setminus \mathcal{A}_q) \setminus w} \big(1 - \ell^{\mathit{ME}}(A)\big),$$

where we split up the arguments in $\mathcal{A}_q$ (indexed by $j$) since we know their interpretation (because $w \models q$). Therefore,

$$P^{\mathit{ME}}_C(q) = \sum_{w \models q} P^{\mathit{ME}}_C(w) = \prod_{j=1}^{k} v_j \cdot \sum_{w' \subseteq \mathcal{A} \setminus \mathcal{A}_q} \prod_{A \in w'} \ell^{\mathit{ME}}(A) \prod_{A \in (\mathcal{A} \setminus \mathcal{A}_q) \setminus w'} \big(1 - \ell^{\mathit{ME}}(A)\big) = \prod_{j=1}^{k} v_j,$$

where we used the fact that the remaining sum equals $1$ as we explained in the proof of Proposition 3 (the products correspond to probabilities of a probability function over $2^{\mathcal{A} \setminus \mathcal{A}_q}$).

The product can be computed in linear time when we know $\ell^{\mathit{ME}}$. We can compute $\ell^{\mathit{ME}}$ in polynomial time as explained in Proposition 4. Hence, we can compute $P^{\mathit{ME}}_C(q)$ in polynomial time. ∎
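
Given the maximum entropy labelling, a conjunctive query is thus just a product of labelling values. A minimal sketch (illustrative names; the labelling values are those of the Figure 1 example computed above):

```python
def conjunctive_query(labelling, literals):
    """Probability of a conjunction of literals under the independent representation.

    literals maps an argument to True (positive literal) or False (negated literal).
    """
    p = 1.0
    for arg, positive in literals.items():
        p *= labelling[arg] if positive else 1.0 - labelling[arg]
    return p

# Maximum entropy labelling of the Figure 1 example: A ~ 0.5, B = 0, C = D = 1.
l_me = {"A": 0.5, "B": 0.0, "C": 1.0, "D": 1.0}
print(conjunctive_query(l_me, {"A": True, "B": False}))  # P(A and not B) = 0.5 * 1.0 = 0.5
```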

However, even under the principle of maximum entropy, queries cannot become arbitrarily complex. In this case, 3CNF-queries are even sufficient to solve the counting problem #3SAT.

Proposition

The following problem is #P-hard: Given a satisfiable set of linear atomic constraints $C$ over $\mathcal{A}$ and a 3CNF-query $q$, compute $P^{\mathit{ME}}_C(q)$, where $P^{\mathit{ME}}_C$ is the probability function that maximizes entropy among all probability functions that satisfy $C$.

Proof.

We give a polynomial-time reduction from #3SAT. Given a propositional 3CNF formula $F$, we construct a corresponding argument query $q$ as in the proof of Proposition 4. We let $C = \emptyset$ so that $P^{\mathit{ME}}_C$ is just the uniform distribution with $P^{\mathit{ME}}_C(w) = \frac{1}{2^{|\mathcal{A}|}}$ for all $w \subseteq \mathcal{A}$. Then $P^{\mathit{ME}}_C(q) = \frac{|\{w \subseteq \mathcal{A} \mid w \models q\}|}{2^{|\mathcal{A}|}}$ and $2^{|\mathcal{A}|} \cdot P^{\mathit{ME}}_C(q)$ is the number of possible worlds that satisfy $q$, which equals the number of propositional interpretations that satisfy $F$. ∎

Similar to the proof of Proposition 4, it can be seen that the corresponding decision problem that asks whether the query has a non-zero probability is NP-complete. While queries can be difficult to compute in general, there are still some interesting special cases that can be solved efficiently. For example, consider the query $A \vee B$ that asks for the probability that $A$ or $B$ (or both) are accepted. Then the query is equivalent to $(A \wedge B) \vee (A \wedge \neg B) \vee (\neg A \wedge B)$. Since the three conjunctions are exclusive (they cannot be satisfied by the same worlds), we have $P(A \vee B) = P(A \wedge B) + P(A \wedge \neg B) + P(\neg A \wedge B)$. Hence, we can answer the disjunctive query by three conjunctive queries that can be computed in polynomial time. More generally, if we can rewrite a query efficiently as a disjunction of $k$ exclusive conjunctions, the query can be answered by $k$ conjunctive queries. However, in general, $k$ can grow exponentially with the number of atoms in the query.
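
For instance, the disjunctive query about $A$ and $B$ can be answered by the three exclusive conjunctive queries mentioned above. A minimal self-contained sketch with the labelling values from the running example:

```python
def query_a_or_b(labelling, a, b):
    """P(a or b) as the sum of the exclusive conjunctions (a,b), (a,not b), (not a,b)."""
    pa, pb = labelling[a], labelling[b]
    return pa * pb + pa * (1 - pb) + (1 - pa) * pb

l_me = {"A": 0.5, "B": 0.0}
print(query_a_or_b(l_me, "A", "B"))  # 0.5
```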

5 Complex Constraints

In this section, we look at how far we can extend the expressiveness of our constraint language. Unfortunately, there are strong limitations. As soon as we allow constraining the probability of the disjunction of two literals, the satisfiability problem becomes intractable. We define a linear 2DN constraint as an expression of the form $\sum_{i=1}^{k} c_i \cdot P(l_{i,1} \vee l_{i,2}) \le c_0$, where $c_0, c_1, \dots, c_k \in \mathbb{Q}$ and the $l_{i,j} \in \{A, \neg A \mid A \in \mathcal{A}\}$ are literals over $\mathcal{A}$. We say that a probability function $P$ satisfies such a constraint iff $\sum_{i=1}^{k} c_i \cdot P(l_{i,1} \vee l_{i,2}) \le c_0$ holds.

Proposition

The satisfiability problem for Linear 2DN Constraints is NP-complete.

Proof.

For membership, we can construct a polynomial certificate like in the proof of Proposition 4.

For hardness, we can give a polynomial-time reduction from 2PSAT, the problem of deciding whether a set of probability statements of the form $P(c) = p$ over propositional 2-clauses $c$ is satisfiable. As shown in Georgakopoulos et al. (1988), 2PSAT is NP-complete. We can introduce an argument for every propositional atom as in the previous proofs and represent every statement $P(c) = p$ with a linear 2DN constraint $P(l_1 \vee l_2) = p$ over the corresponding argument literals (recall that equality can be expressed by two $\le$-constraints). Then, clearly, the set of linear 2DN constraints can be satisfied if and only if the 2PSAT instance can be satisfied. ∎

The problem does not get significantly easier when considering conjunction instead of disjunction. We define a linear 2CN constraint as an expression of the form $\sum_{i=1}^{k} c_i \cdot P(l_{i,1} \wedge l_{i,2}) \le c_0$ and say that $P$ satisfies such a constraint iff $\sum_{i=1}^{k} c_i \cdot P(l_{i,1} \wedge l_{i,2}) \le c_0$ holds.

Proposition

The satisfiability problem for Linear 2CN Constraints is NP-complete.

Proof.

Membership follows as in the previous proposition.

For hardness, we can give a polynomial-time reduction from satisfiability of linear 2DN constraints that we considered before. Consider an arbitrary linear 2DN constraint $\sum_{i=1}^{k} c_i \cdot P(l_{i,1} \vee l_{i,2}) \le c_0$. Notice that every formula $l_1 \vee l_2$ can be equivalently expressed as a disjunction of three exclusive conjunctions of length $2$. For example, $A \vee B \equiv (A \wedge B) \vee (A \wedge \neg B) \vee (\neg A \wedge B)$. Since these conjunctions cannot be satisfied simultaneously, we have $P(l_1 \vee l_2) = P(C_1) + P(C_2) + P(C_3)$ for the corresponding exclusive conjunctions $C_1, C_2, C_3$. In general, every linear 2DN constraint can therefore be equivalently represented by a linear 2CN constraint $\sum_{i=1}^{k} c_i \cdot \big(P(C_{i,1}) + P(C_{i,2}) + P(C_{i,3})\big) \le c_0$, where the $C_{i,j}$ are exclusive conjunctions of two literals chosen as before to satisfy $P(l_{i,1} \vee l_{i,2}) = P(C_{i,1}) + P(C_{i,2}) + P(C_{i,3})$. The number of constraints remains unchanged and their size changes only by a constant factor. In particular, a set of linear 2DN constraints is satisfiable if and only if the corresponding set of linear 2CN constraints is satisfiable. ∎

So talking about the probability of formulas in constraints is difficult. However, instead of allowing logical connectives within probability statements, we could consider logical connections of constraints as considered in Hunter et al. (2018). Note that connecting constraints conjunctively does not add anything semantically. This is because there is no difference between adding two constraints or their conjunction to a knowledge base when the usual interpretation of conjunction is used. Adding negation basically means allowing for strict inequalities. Negation alone does not add any additional difficulties and the problem can be reduced to the case without negation with constant cost Hunter et al. (2018). The most interesting case is allowing for connecting constraints disjunctively. We define a 2D linear atomic constraint as an expression of the form $c_1 \vee c_2$, where $c_1$ and $c_2$ are linear atomic constraints. We say that a probability function satisfies such a constraint iff it satisfies $c_1$ or $c_2$. Unfortunately, the satisfiability problem for 2D linear atomic constraints is intractable.

Proposition

The satisfiability problem for 2D Linear Atomic Constraints is NP-complete. In particular, the problem remains NP-complete even when the disjunctively connected linear atomic constraints are restricted to the form $c_1 \cdot P(A_1) + c_2 \cdot P(A_2) \le c_0$ with $A_1, A_2 \in \mathcal{A}$, that is, even when they can contain at most two probability terms.

Proof.

Membership follows again from noticing that a labelling that satisfies the constraints is a certificate that can be checked in polynomial time.

For hardness, we give a polynomial-time reduction from 3SAT to satisfiability of 2D Linear Atomic Constraints. As before, for every propositional atom, we introduce a corresponding argument. Consider a clause . We introduce three additional auxiliary arguments and encode the clause by four 2D linear atomic constraints. We use the constraints for and . Notice that is equivalent to and is equivalent to . The constraint must be satisfied if the -th literal is not satisfied. The last constraint expresses that at most one literal is allowed not to be satisfied. If all three literals are falsified, the last constraint is not satisfied. If the first or second literal is satisfied, the first atom in the disjunction will be satisfied; if the third literal is satisfied, the second atom will be satisfied. Our reduction introduces three new arguments and four additional constraints per clause, so the size is polynomial.

If $F$ is satisfiable, then there is a possible world (interpretation) $w$ that satisfies $q$. Consider the probability function $P$ that assigns probability $1$ to $w$ and $0$ to all other worlds. Let $\ell_P$ denote the probability labelling corresponding to $P$. We extend $\ell_P$ to a probability labelling over all arguments. For every clause, $w$ satisfies one literal and we set the corresponding auxiliary argument to and the other two to . Then the 2D linear atomic constraints are satisfied by the extended labelling and the corresponding probability function satisfies the constraints as well as shown before. Hence, the 2D linear atomic constraints are satisfiable.

Conversely, if all 2D linear atomic constraints are satisfied by a probability function , then every world with must satisfy (strictly speaking, also interprets the auxiliary arguments, but those can just be ignored). For the sake of contradiction, assume that this is not the case. That is, there is a world with that does not satisfy . Then there is a clause in such that satisfies neither nor nor . If , then and if , then . Then the constraints can only be satisfied if for . But then and the constraint is violated, which contradicts our assumption that satisfies the constraints. Hence, indeed every world with must satisfy and since there must be at least one world with non-zero probability (otherwise, cannot be a probability function), is satisfiable. ∎

However, it may still be possible to extend our fragment to statements of the form $c_1 \vee c_2$, where every linear atomic constraint $c_i$ can only contain a single probability term. This would allow making conditional statements like in the rationality property Hunter and Thimm (2014):

RAT:

$P$ is called rational if for all $A, B \in \mathcal{A}$ with $(B, A) \in \mathit{Att}$, we have that $P(B) > 0.5$ implies $P(A) \le 0.5$.

We may reuse ideas for 2SAT in order to handle such constraints efficiently. However, we currently cannot say for certain if this is possible in polynomial time and leave this question for future work.

6 Related Work

As mentioned in the introduction, there is a large variety of other probabilistic argumentation frameworks Dung and Thang (2010); Li et al. (2011); Rienstra (2012); Hunter (2014); Doder and Woltran (2014); Polberg and Doder (2014); Thimm et al. (2017); Kido and Okamoto ([n. d.]); Rienstra et al. (2018); Thimm et al. (2018); Riveret et al. (2018). We sketch three early works here to give an impression of some ideas. Dung and Thang (2010) consider probability functions over possible worlds as well, but the mechanics are very different from what we saw here. Instead of considering all possible probability functions that satisfy particular constraints, a single probability function is derived from a set of probabilistic rules. Roughly speaking, these rules express the likelihood of assumptions under given preconditions. Multiple rules for one assumption are only allowed if they can be ordered by specificity. Li et al. (2011) consider functions that assign probabilities to arguments (like probability labellings) and attack relations. The functions are supposed to be given and allow assigning a probability to subgraphs of the given argumentation framework using common independence assumptions. Then the probability of an argument is defined by taking the probability of every subgraph and adding those probabilities for which the argument is accepted in the subgraph under a particular semantics. Since the number of subgraphs is exponential, the authors present a Monte-Carlo algorithm to approximate the probability of an argument. In Rienstra (2012), probabilities are again introduced over possible worlds. Again, a single probability distribution is derived from rules. However, in contrast to Dung and Thang (2010), these rules are probabilistic extensions of a light form of ASPIC rules Prakken (2010); Caminada and Amgoud (2007). They are also more flexible in that they do not need to be ordered according to specificity.

Riveret et al. (2018) recently introduced a very general probabilistic argumentation framework that generalizes many ideas that have been considered before in the literature. The authors consider probability functions over subsets of defeasible theories or over subgraphs. The latter approach can then be seen as a generalization of the former, which abstracts from the structure of arguments. The authors discuss probabilistic labellings that should not be confused with probability labellings that we considered here. Roughly speaking, in Riveret et al. (2018), a probabilistic labelling frame corresponds to a probability function over subsets of possible classical labellings over an argumentation framework. These probabilistic labelling frames can then be used to assign probabilities to arguments. In this sense, a probabilistic labelling considered in Riveret et al. (2018) induces a probability labelling as considered here. However, the focus in Riveret et al. (2018) is on conceptual questions and computational problems are not discussed.

Our polynomial-time algorithms are based on a connection between probability functions and probability labellings. The relationship is established by considering an equivalence relation over probability functions. Conceptually similar ideas have been considered in probabilistic-logical reasoning. However, in this area, equivalence relations are introduced over possible worlds. Roughly speaking, the possible worlds are partitioned into equivalence classes that interpret the formulas that appear in the knowledge base in the same manner. Reasoning algorithms can then be modified to work on probability functions over equivalence classes Fischer and Schramm (1996); Kern-Isberner and Lukasiewicz (2004); Finthammer and Beierle (2012); Potyka (2016). If the number of equivalence classes is small, a significant speedup can be obtained. However, identifying compact representatives for these equivalence classes is intractable in general Potyka et al. (2015). In particular, in general, the number of equivalence classes over possible worlds can still be exponential. Indeed, many polynomial cases that we found here cannot be solved in polynomial time with this approach. For example, if one atomic constraint is given for every argument, every equivalence class of possible worlds will contain exactly one possible world, so that actually nothing is gained.

Hunter and Thimm (2016) also considered an inconsistency-tolerant generalization of the entailment problem that still works when there are conflicts between the partial probability assignment constraints and the semantical constraints. We can probably derive similar polynomial runtime guarantees for this problem. However, the approach in Hunter and Thimm (2016) is based on the assumption that the semantical constraints are consistent. This is no problem for the semantical constraints considered in Hunter and Thimm (2016) because the probability of attacked arguments is only bounded from above and the probability of non-attacked arguments is only bounded from below. However, in bipolar argumentation frameworks, we want to consider more complicated relationships and the constraints can easily become inconsistent. Therefore, it is interesting to also analyze other variants that use ideas for paraconsistent probabilistic reasoning Daniel (2009); Potyka and Thimm (2015) or reasoning with priorities Potyka (2015). It is also interesting to note that our satisfiability test from Proposition 3 actually corresponds to an inconsistency measure. If the knowledge base is consistent, the returned value will be $0$; otherwise it measures by how much probability functions must violate the constraints numerically Potyka (2014).

7 Discussion and Future Work

We showed that the satisfiability and entailment problems for the epistemic probabilistic argumentation approach considered in Hunter and Thimm (2016) can be solved in polynomial time. In fact, arbitrary linear atomic constraints can be considered. For the query language, we found that conjunctive queries can still be answered in polynomial time under the principle of maximum entropy. General disjunctive queries are intractable, but if they can be expressed as a compact disjunction of exclusive conjunctions, the query can be reduced to a sequence of conjunctive queries. We found that the constraint language cannot be extended significantly. However, it may still be possible to allow disjunctions of two probability statements that each contain a single probability term, which would allow expressing conditional constraints like RAT. Another interesting question for future work is whether we can compute conjunctive queries for the entailment problem in polynomial time even without using the principle of maximum entropy.

We focused mainly on complexity results and did not speak much about the runtime guarantees of our convex programming formulations. In general, interior-point methods can solve convex programs in cubic time in the number of optimization variables and constraints Boyd and Vandenberghe (2004). This means that all convex programs that we introduced here can be solved in cubic time in the size of the argumentation problem in the worst case. Our linear programs for satisfiability and entailment can often be solved faster by using the Simplex algorithm. Even though the Simplex algorithm has exponential worst-case runtime, in practice, the runtime usually depends only linearly on the number of optimization variables and quadratically on the number of constraints Matousek and Gärtner (2007). Implementations for satisfiability and entailment can be found in the Java library ProBabble (https://sourceforge.net/projects/probabble/). You have to install IBM CPLEX in order to use ProBabble, but IBM offers free licenses for academic purposes. Problems with thousands of arguments can usually be solved within a few hundred milliseconds. Without the labelling approach, the same amount of time would be needed for 10–15 arguments already because the number of possible worlds grows exponentially.

References

  • Amgoud and Ben-Naim (2013) Leila Amgoud and Jonathan Ben-Naim. 2013. Ranking-based semantics for argumentation frameworks. In Scalable Uncertainty Management (SUM). Springer, 134–147.
  • Amgoud and Ben-Naim (2017) Leila Amgoud and Jonathan Ben-Naim. 2017. Evaluation of arguments in weighted bipolar graphs. In European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty (ECSQARU). Springer, 25–35.
  • Amgoud et al. (2004) Leila Amgoud, Claudette Cayrol, and Marie-Christine Lagasquie-Schiex. 2004. On the bipolarity in argumentation frameworks. In International Workshop on Non-Monotonic Reasoning (NMR), Vol. 4. 1–9.
  • Baroni et al. (2015) Pietro Baroni, Marco Romano, Francesca Toni, Marco Aurisicchio, and Giorgio Bertanza. 2015. Automatic evaluation of design alternatives with quantitative argumentation. Argument & Computation 6, 1 (2015), 24–49.
  • Barringer et al. (2012) Howard Barringer, Dov M Gabbay, and John Woods. 2012. Temporal, numerical and meta-level dynamics in argumentation networks. Argument & Computation 3, 2-3 (2012), 143–202.
  • Bertsimas and Tsitsiklis (1997) Dimitris Bertsimas and John N Tsitsiklis. 1997. Introduction to linear optimization. Vol. 6. Athena Scientific Belmont, MA.
  • Besnard and Hunter (2001) Philippe Besnard and Anthony Hunter. 2001. A logic-based theory of deductive arguments. Artificial Intelligence 128, 1-2 (2001), 203–235.
  • Boella et al. (2010) Guido Boella, Dov M Gabbay, Leon van der Torre, and Serena Villata. 2010. Support in abstract argumentation. In Computational Models of Argument (COMMA). Frontiers in Artificial Intelligence and Applications, IOS Press, 40–51.
  • Boyd and Vandenberghe (2004) Stephen Boyd and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press, New York, NY, USA.
  • Caminada and Amgoud (2007) Martin Caminada and Leila Amgoud. 2007. On the evaluation of argumentation formalisms. Artificial Intelligence 171, 5 (2007), 286–310.
  • Caminada and Gabbay (2009) Martin WA Caminada and Dov M Gabbay. 2009. A logical account of formal argumentation. Studia Logica 93, 2-3 (2009), 109.
  • Cayrol and Lagasquie-Schiex (2005) Claudette Cayrol and Marie-Christine Lagasquie-Schiex. 2005. Graduality in Argumentation. Journal of Artificial Intelligence Research (JAIR) 23 (2005), 245–297.
  • Cayrol and Lagasquie-Schiex (2013) Claudette Cayrol and Marie-Christine Lagasquie-Schiex. 2013. Bipolarity in argumentation graphs: Towards a better understanding. International Journal of Approximate Reasoning 54, 7 (2013), 876–899.
  • Cohen et al. (2014) Andrea Cohen, Sebastian Gottifredi, Alejandro J. García, and Guillermo R. Simari. 2014. A Survey of Different Approaches to Support in Argumentation Systems. Knowledge Eng. Review 29, 5 (2014), 513–550.
  • Correia et al. (2014) Marco Correia, Jorge Cruz, and Joao Leite. 2014. On the Efficient Implementation of Social Abstract Argumentation. In ECAI. 225–230.
  • Daniel (2009) L. Daniel. 2009. Paraconsistent Probabilistic Reasoning. Ph.D. Dissertation. L’École Nationale Supérieure des Mines de Paris.
  • Doder and Woltran (2014) Dragan Doder and Stefan Woltran. 2014. Probabilistic argumentation frameworks–a logical approach. In International Conference on Scalable Uncertainty Management. Springer, 134–147.
  • Dung (1995) Phan Minh Dung. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial intelligence 77, 2 (1995), 321–357.
  • Dung and Thang (2010) Phan Minh Dung and Phan Minh Thang. 2010. Towards (probabilistic) argumentation for jury-based dispute resolution. COMMA 216 (2010), 171–182.
  • Fazzinga et al. (2018) Bettina Fazzinga, Sergio Flesca, and Filippo Furfaro. 2018. Probabilistic bipolar abstract argumentation frameworks: complexity results. In IJCAI. 1803–1809.
  • Fazzinga et al. (2013) Bettina Fazzinga, Sergio Flesca, and Francesco Parisi. 2013. On the complexity of probabilistic abstract argumentation. In IJCAI. 898–904.
  • Finthammer and Beierle (2012) Marc Finthammer and Christoph Beierle. 2012. Using equivalences of worlds for aggregation semantics of relational conditionals. In Annual Conference on Artificial Intelligence. Springer, 49–60.
  • Fischer and Schramm (1996) Volker G Fischer and Manfred Schramm. 1996. Tabl-a tool for efficient compilation of probabilistic constraints. (1996).
  • Georgakopoulos et al. (1988) George Georgakopoulos, Dimitris Kavvadias, and Christos H Papadimitriou. 1988. Probabilistic satisfiability. Journal of complexity 4, 1