Strategic Argumentation is NP-Complete


Guido Governatori (NICTA, Australia)
Francesco Olivieri (Griffith Univ., Australia; NICTA, Australia; Verona Univ., Italy)
Simone Scannapieco (Griffith Univ., Australia; NICTA, Australia; Verona Univ., Italy)
Antonino Rotolo (Bologna Univ., Italy)
Matteo Cristani (Verona Univ., Italy)
Abstract

In this paper we study the complexity of strategic argumentation for dialogue games. A dialogue game is a 2-player game where the parties play arguments. We show how to model dialogue games in a skeptical, non-monotonic formalism, and we show that the problem of deciding what move (set of rules) to play at each turn is an NP-complete problem.


1 Introduction and Motivation

Over the years many dialogue games for argumentation have been proposed to study questions such as which conclusions are justified, or how procedures for debate and conflict resolution should be structured to arrive at a fair and just outcome. We observe that the outcome of a debate does not depend solely on the premises of a case, but also on the strategies that the parties in a dispute actually adopt. To the best of our knowledge, this aspect has not received proper attention in the literature of the field.

Almost all the AI literature on the strategic aspects of argumentation (see Section 3 for a brief overview) assumes argument games with complete information, i.e., dialogues where the structure of the game is common knowledge among the players. Consider, however, the following example due to (?) (which in turn modifies an example taken from (?)):

This dialogue exemplifies an argument game occurring in witness examinations in legal courts. The peculiarity of this game is that the exchange of arguments reflects an asymmetry of information between the players: each player does not know the other player’s knowledge, and thus cannot predict which arguments will be attacked and which counterarguments will be employed to attack them. Indeed, (?) points out, for instance, that attacks , but only when is given: hence, the attack of the proponent is made possible only when the opponent discloses some private information with the move .

Despite the encouraging results offered by (?), we argue that relaxing the complete-information assumption leads in general to intractable frameworks. In this paper, in particular, we explore the computational cost of argument games of incomplete information where the (internal) logical structure of arguments is considered.

In this case relaxing complete information, such as when players do not share the same beliefs and set of arguments, simply amounts to the fact that they have different logical theories, i.e., different sets of rules from which arguments supporting logical conclusions can be built. Hence, if the proponent has the objective to prove that some conclusion is true, there is no obvious way of preferring an argument obtained from a minimal subset of her theory (which could at first sight minimise the chances of successful attacks from the opponent) over the maximal set of arguments obtained from the whole theory (which could at first sight maximise the chances to defeat any counterarguments of the opponent).

The layout of the paper is as follows. Section 2 offers a gentle introduction and motivation for our research problem. Section 3 reviews relevant related work, thus presenting further motivations behind our contribution. Section 4 presents the logic used for building arguments in dialogues (Argumentation Logic): it is a variant of Defeasible Logic (?) with linear complexity; a second logic (Agent Logic, which also has linear complexity) is then recalled from (?). In this second logic it is possible to formulate the NP-complete “Restoring Sociality Problem”; our objective is to prove that this problem can be mapped into the problem of interest here, the so-called “Strategic Argumentation Problem”, which consists in deciding, for each player, what move to play at each turn (thus showing that the Strategic Argumentation Problem is NP-complete as well). Section 5 defines dialogue protocols for games of incomplete information based on Argumentation Logic and formulates the Strategic Argumentation Problem. Section 6 shows how to transform a theory in Agent Logic into an equivalent one in Argumentation Logic, and presents the main theorem of computational complexity for argument games.

2 A Gentle Introduction to the Problem

In the most typical forms of strategic argumentation, two players exchange arguments in a dialogue game: in the simplest case, a proponent (hereafter P) has the objective to prove a conclusion (a literal of the language) and an opponent (hereafter O) presents counterarguments to the moves of P. If we frame this intuition in proof-theoretic settings, such as those developed in (???) where arguments are defined as inference trees formed by applying rules, exchanging arguments means exchanging logical theories (consisting of rules) proving conclusions. Assume, for instance, that the argument game is based on a finite set F of indisputable facts and a finite set R of rules: facts initially fire rules, and this leads to building proofs for literals.

If F and R are common knowledge of P and O, successful strategies in argument games are trivially identified: each player can compute whether the entire theory (consisting of F and R) logically entails the conclusion at issue. In this situation the game consists of a single move.

Suppose now that F is known by both players, but R is partitioned into three subsets: a set of rules known by both players and two subsets corresponding, respectively, to P’s and O’s private knowledge (what P and O privately know to be true). This scenario exemplifies an argument game of incomplete information. In this context, each player can use all rules belonging to her private knowledge as well as all the public rules. The public rules are not just the rules known by both players from the outset, but also the rules that, though initially belonging to the private information of the other player, have been played in previous turns.

Let us suppose we work with a skeptical non-monotonic framework, i.e., a logical machinery where, whenever two conflicting conclusions are obtainable from different rules, the system refrains from drawing either conclusion. Assuming a game where the players have private and public knowledge, the problem of deciding what move (set of rules) to play at each turn amounts to establishing whether some subset of a player’s rules can be successful. Is there any safe criterion for selecting successful strategies?
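To make the question concrete, consider the naive procedure below: a move is a subset of the player’s private rules, and the only obvious way to find a successful one is to try them all. This is a minimal sketch under our own naming (winning_moves and the entails oracle are illustrative assumptions, not the paper’s notation); rules are assumed to be hashable values such as labels.

```python
from itertools import combinations

def winning_moves(private_rules, public_rules, entails, goal):
    """Enumerate candidate moves: every subset of the private rules.
    'entails(rules, goal)' is a hypothetical stand-in for the
    defeasible entailment check of Section 4 (computable in linear
    time); the subset enumeration is what makes move selection hard."""
    for k in range(len(private_rules) + 1):
        for move in combinations(sorted(private_rules), k):
            if entails(public_rules | set(move), goal):
                yield set(move)
```

Even with a linear-time entailment test, this search is exponential in the number of private rules, which anticipates the NP-completeness result of Section 6.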

Consider the following three examples.

P and O are debating the truthfulness of a statement, say l: P argues that l is the case, whilst O argues for the truthfulness of the opposite claim (henceforth ∼l). Each player has her own (private) arguments, not known by the opponent, but both share the factual knowledge as well as some inference rules. Suppose P has the following private arguments:

while O has

where and . The notation exemplifies arguments as chains of rules: for instance, argument implies that contains three rules , , and , one for each link of the chain.

The point of the example is that if P decides to announce all her private arguments, then she is not able to prove her thesis . Indeed, she would not have counterarguments defeating and . If instead she argues with and the subpart of , keeping hidden from O the way to prove the premises of , then she proves

Consider now this new setting:

If P’s intent is to prove and she plays , then O wins the game. However, if P plays (or even ), this allows her to succeed. Here, a minimal subset of is successful. However, the situation can be reversed (for similar reasons) for O:

In this second case, the move is not successful for P, while playing the whole ensures victory.

In the remainder of this paper, we will study this research question in the context of Defeasible Logic. We will show that the problem of deciding what set of rules to play at a given move (the Strategic Argumentation Problem) is NP-complete even when the problem of deciding whether a given theory (defeasibly) entails a literal can be computed in polynomial time. We will map the NP-complete Restoring Sociality Problem proposed in (?) into the Strategic Argumentation Problem. To this end, we first propose a standard Defeasible Logic to formalise the argumentation framework (Subsection 4.1) and then present the BIO agent defeasible logic (Subsection 4.2). Finally, in Section 6 we show how to transform an agent defeasible theory into an equivalent argumentation one, and we present the main theorem of computational complexity.

3 Related Work

Despite the game-like character of arguments and debates, game-theoretic investigations of argumentation are still rare, both in the AI argumentation literature and in game theory itself (an exception in the latter is (?)).

Most existing game-theoretic investigations of argumentation in AI, such as (?????), proceed within Dung’s abstract argumentation paradigm, while (?), though working on argumentation semantics related to Dung’s approach, develops a framework where the internal logical structure of arguments is also made explicit.

(?) presents a notion of argument strength within the class of games of strategy. The measure of the strength of an argument emerges from confronting proponent and opponent in a repeated game of argumentation strategy whose payoffs reflect the long-term interaction between proponent and opponent strategies.

Other types of game analyses have been used for argumentation. In particular, argumentation games have been reconstructed as two-player extensive-form games of perfect information (???). (For a discussion of using extensive-form games, see also (?).) While (?) works on zero-sum games, (?) does not adopt this view, because preferences over outcomes are specified in terms of expected utility, combining the probability of success of arguments (with respect to a third party, an adjudicator such as a judge) with the costs and benefits associated with arguments, thus making it possible that withdrawing an argument is the most preferred option. Besides this difference, in both approaches uncertainty is introduced through different probabilities of success depending on a third party, such as an external audience or a judge, whose attitude towards the arguments exchanged by proponent and opponent is uncertain.

All these works assume that argument games have complete information, which, as we noticed, is an oversimplification in many real-life contexts (such as legal disputes). How can we go beyond complete information? In game-theoretic terms, one of the simplest ways of analyzing argument games of incomplete information is to frame them as Bayesian extensive games with observable actions (?, chap. 12): this is possible because every player observes the argumentative move of the other player, and uncertainty only derives from an initial move of chance that distributes (payoff-relevant) private information among the players in the form of logical theories: chance selects types for the players by assigning to them possibly different theories from the set of all possible theories constructible from a given language. If this hypothesis is correct, notice that (i) Bayesian extensive games with observable actions allow one to simply extend the argumentation models proposed, e.g., in (??), and (ii) the probability distributions over players’ types can lead to directly measuring the probability of justification for arguments and conclusions, even when arguments are internally analyzed (?). Despite this, complexity results for Bayesian games are far from encouraging (see (?) for games of strategy). If we move to Bayesian extensive games with observable actions, things are not encouraging either. Indeed, we conjecture that considerations similar to those presented by (?) apply to argument games: the calculation of the perfect Bayesian equilibrium solution can be tremendously complex, due both to the size of the strategy space (as a function of the size of the game tree; computing it can itself be computationally hard (?)) and to the dependence between the variables representing strategies and players’ beliefs. A study of these game-theoretic issues cannot be developed here and is left to future work: this paper, instead, considers a more basic question, namely the computational problem of exploring solutions in the logical space of strategies when arguments have an internal structure.

In this sense, this contribution does not directly develop any game-theoretic analysis of argumentation games of incomplete information, but it offers results about the computational cost of logically characterizing the problems that any argumentation game with incomplete information potentially raises. Relevant recent papers that studied argumentation with incomplete information without any direct game-theoretic analysis are (?) and (?), which worked within the paradigm of abstract argumentation. The general idea in these works is to devise a system for dynamic argumentation games where agents’ knowledge bases can change and where such changes are precisely caused by the exchange of arguments. (?) presents a first version of the framework and an algorithm, for which the authors prove a termination result. (?) generalizes this framework (by relaxing some constraints) and devises a computational method to decide which arguments are accepted by translating the argumentation framework into logic programming; this further result, however, is possible only when the players are eager to give all their arguments, i.e., when proponent and opponent eventually put forward all possible arguments in the game.

4 Logic

In this section we shall introduce the two logics used in this paper. The first is the logic used in the dialogue game: the logic in which the knowledge of the players and the structure of the arguments are represented, and in which reasoning is performed. We call this logic “Argumentation Logic” and we use the Defeasible Logic of (?). (?) provides the relationships between this logic (and some of its variants) and abstract argumentation, and (?) shows how to use this logic for dialogue games. The second logic, called Agent Logic, is the logic in which the “restoring sociality problem” (a known NP-complete problem) (?) was formulated. It is included in this paper to show how to reduce the restoring sociality problem to the strategic argumentation problem, thus proving that the latter is also NP-complete. The Agent Logic is an extension of Defeasible Logic with modal operators for beliefs, intentions and obligations (?).

Admittedly, this section takes up a large part of the paper, but it is required for the reader to understand the mechanisms behind our proof of NP-completeness.

4.1 Argumentation Logic

A defeasible argumentation theory is a standard defeasible theory consisting of a set of facts (indisputable statements), a set of rules, and a superiority relation among rules saying when a single rule may override the conclusion of another rule. A rule A(r) → C(r) is a strict rule: whenever the premises A(r) are indisputable, so is the conclusion C(r). A defeasible rule A(r) ⇒ C(r) is a rule that can be defeated by contrary evidence. Finally, A(r) ⇝ C(r) is a defeater, which is used to prevent some conclusion but cannot be used to draw any conclusion.

Definition 1 (Language).

Let PROP be a set of propositional atoms and Lab be a set of labels. Define:

Literals

Lit = PROP ∪ {¬p | p ∈ PROP}. If q is a literal, ∼q denotes the complementary literal (if q is a positive literal p, then ∼q is ¬p; and if q is ¬p, then ∼q is p);

Rules

r: A(r) ↪ C(r), where r ∈ Lab is a unique label, A(r) ⊆ Lit is the antecedent of r, C(r) ∈ Lit is the consequent of r, and ↪ ∈ {→, ⇒, ⇝} denotes the type of r.

We use R[q] to indicate all rules with consequent q. We denote the sets of strict rules, defeasible rules, strict and defeasible rules, and defeaters by R_s, R_d, R_sd, and R_dft, respectively.

Definition 2 (Defeasible Argumentation Theory).

A defeasible argumentation theory is a structure D = (F, R, >)

where

  • F ⊆ Lit is a finite set of facts;

  • R is a finite set of rules;

  • the superiority relation > ⊆ R × R is acyclic, irreflexive, and asymmetric.
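For concreteness, here is one possible Python encoding of such a theory; the names (Rule, Theory, neg) and the string convention for literals (“e”, “~e”) are our own illustrative choices, not the paper’s notation.

```python
from dataclasses import dataclass, field

def neg(lit: str) -> str:
    """Complementary literal: ~q for q, and q for ~q."""
    return lit[1:] if lit.startswith("~") else "~" + lit

@dataclass(frozen=True)
class Rule:
    label: str
    antecedent: frozenset   # literals that must already be proved
    consequent: str         # a single literal
    kind: str               # "strict", "defeasible", or "defeater"

@dataclass
class Theory:
    facts: set                  # indisputable literals
    rules: list                 # list of Rule
    superiority: set = field(default_factory=set)  # (stronger, weaker) label pairs

    def rules_for(self, lit: str):
        """R[lit]: all rules with consequent lit."""
        return [r for r in self.rules if r.consequent == lit]
```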

Definition 3 (Proofs).

Given an argumentation theory D, a proof P of length n in D is a finite sequence P(1), …, P(n) of labelled formulas of the type +Δq, −Δq, +∂q, and −∂q, where the proof conditions defined in the rest of this section hold. P(1..n) denotes the initial part of the derivation of length n.

We start with some terminology.

Definition 4.

Given # ∈ {Δ, ∂} and a proof P in D, a literal q is #-provable in P if there is a line P(m) of P such that P(m) = +#q. A literal q is #-rejected in P if there is a line P(m) of P such that P(m) = −#q.

The definition of Δ describes just forward chaining of strict rules:

+Δ: If P(n+1) = +Δq then
(1) q ∈ F, or
(2) ∃r ∈ R_s[q] s.t. ∀a ∈ A(r), a is Δ-provable.

−Δ: If P(n+1) = −Δq then
(1) q ∉ F, and
(2) ∀r ∈ R_s[q], ∃a ∈ A(r) s.t. a is Δ-rejected.

For a literal q to be definitely provable, either q is a fact, or there is a strict rule with head q whose antecedents have all been definitely proved previously. To establish that q cannot be definitely proven, we must establish that every strict rule with head q has at least one antecedent that is definitely rejected.
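Forward chaining for definite conclusions is straightforward to realise; the sketch below computes the +Δ set as a fixpoint, reusing the Theory encoding sketched after Definition 2 (again, an illustration under our own names).

```python
def definitely_provable(theory) -> set:
    """+Delta as a fixpoint: start from the facts and repeatedly fire
    strict rules whose antecedents are all already proved."""
    proved = set(theory.facts)
    changed = True
    while changed:
        changed = False
        for r in theory.rules:
            if (r.kind == "strict" and r.consequent not in proved
                    and r.antecedent <= proved):
                proved.add(r.consequent)
                changed = True
    return proved
```

Since the theory is finite, the fixpoint is reached after linearly many rule firings; every literal outside the returned set is definitely rejected (−Δ).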

The following definition is needed to introduce the defeasible provability.

Definition 5.

A rule r ∈ R[q] is applicable in the proof condition for ±∂ iff ∀a ∈ A(r), a is ∂-provable. A rule r is discarded in the condition for ±∂ iff ∃a ∈ A(r) such that a is ∂-rejected.

+∂: If P(n+1) = +∂q then
(1) +Δq ∈ P(1..n) or
(2) (2.1) −Δ∼q ∈ P(1..n) and
(2.2) ∃r ∈ R_sd[q] s.t. r is applicable, and
(2.3) ∀s ∈ R[∼q], either s is discarded, or
(2.3.1) ∃t ∈ R[q] s.t. t is applicable and t > s.
−∂: If P(n+1) = −∂q then
(1) −Δq ∈ P(1..n) and either
(2.1) +Δ∼q ∈ P(1..n) or
(2.2) ∀r ∈ R_sd[q], either r is discarded, or
(2.3) ∃s ∈ R[∼q] s.t. s is applicable, and
(2.3.1) ∀t ∈ R[q], either t is discarded or not t > s.

To show that q is defeasibly provable we have two choices: (1) we show that q is already definitely provable; or (2) we need to argue using the defeasible part of the theory D. In this second case, ∼q must not be definitely provable (2.1), and there must exist an applicable strict or defeasible rule for q (2.2). Every attack is either discarded (2.3) or defeated by a stronger rule (2.3.1). −∂ is defined in an analogous manner and follows the principle of strong negation, which is closely related to the function that simplifies a formula by moving all negations to an innermost position in the resulting formula and replaces the positive tags with the respective negative tags, and the other way around (?).
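The clauses above can be read almost directly as code. The sketch below checks the +∂ conditions for a single literal, given the conclusions already settled by earlier proof lines; it reuses neg and the Theory encoding from the earlier sketches. It deliberately ignores the inductive bookkeeping of real derivations, so it illustrates the clause structure rather than providing a full prover.

```python
def applicable(rule, proved):
    """Definition 5: every antecedent already defeasibly proved."""
    return rule.antecedent <= proved

def defeasibly_holds(theory, q, definite, proved):
    """Clauses of +d for literal q.  'definite' is the +Delta set and
    'proved' holds the +d conclusions from earlier proof lines."""
    if q in definite:                                   # clause (1)
        return True
    if neg(q) in definite:                              # clause (2.1) fails
        return False
    support = [r for r in theory.rules_for(q)
               if r.kind in ("strict", "defeasible") and applicable(r, proved)]
    if not support:                                     # clause (2.2) fails
        return False
    for s in theory.rules_for(neg(q)):                  # clause (2.3): each attack...
        if not applicable(s, proved):
            continue                                    # ...is discarded, or
        beaten = any((t.label, s.label) in theory.superiority
                     for t in theory.rules_for(q) if applicable(t, proved))
        if not beaten:                                  # (2.3.1): undefeated attack
            return False
    return True
```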

4.2 Agent Logic

A defeasible agent theory is a standard defeasible theory enriched with 1) modes for rules, 2) modalities (belief, intention, obligation) for literals, and 3) relations for conversion and conflict resolution. We report below only the distinctive features; for a detailed exposition see (?).

Definition 6 (Language).

Let PROP and Lit be the sets of propositional atoms and literals of Definition 1, MOD = {BEL, OBL, INT} be the set of modal operators, and Lab be a set of labels. Define:

Modal literals: ModLit = {Xl, ¬Xl | l ∈ Lit, X ∈ {OBL, INT}}
Rules: r: A(r) ↪_X C(r),

where r ∈ Lab is a unique label, A(r) ⊆ Lit ∪ ModLit is the antecedent of r, C(r) ∈ Lit is the consequent of r, ↪ ∈ {→, ⇒, ⇝} is the type of r, and X ∈ MOD is the mode of r.

R^X (R^X[q]) denotes all rules of mode X (with consequent q), and R = ⋃_{X ∈ MOD} R^X.

Observation 1.

Rules for intention and obligation are meant to introduce modalities: for example, if we have an intention rule with consequent b and the rule fires, then we obtain INTb. On the contrary, belief rules produce plain literals and not modal literals.

Rule conversion

It is sometimes meaningful to use rules of one modality Y as if they were rules for another modality X, i.e., to convert one mode of conclusion into a different one. Formally, we define the asymmetric binary relation Convert ⊆ MOD × MOD such that Convert(Y, X) means ‘a rule of mode Y can also be used to produce conclusions of mode X’. This corresponds to the following rewriting rule:

a_1, …, a_n ⇒_Y c   becomes   Xa_1, …, Xa_n ⇒_X c,

where Convert(Y, X) and A(r) ≠ ∅.

Conflict-detection/resolution

We define an asymmetric binary relation Conflict ⊆ MOD × MOD such that Conflict(Y, X) means ‘modes Y and X are in conflict and mode Y prevails over X’.

Definition 7 (Defeasible Agent Theory).

A defeasible agent theory is a structure

D = (F, R^BEL, R^OBL, R^INT, >, Convert, Conflict)

where

  • F ⊆ Lit ∪ ModLit is a finite set of facts;

  • R^BEL, R^OBL, R^INT are three finite sets of rules for beliefs, obligations, and intentions;

  • the superiority relation > is acyclic and such that: i. if r > s then C(r) = ∼C(s); and ii. rules with different modes Y and X are comparable only if Conflict(Y, X) holds;

  • Convert ⊆ MOD × MOD is a set of convert relations;

  • Conflict ⊆ MOD × MOD is a set of conflict relations.

A proof is now a finite sequence of labelled formulas of the type +Δ_X q, −Δ_X q, +∂_X q, and −∂_X q, with X ∈ MOD.

The following definition states the special status of belief rules, and that the introduction of a modal operator corresponds to being able to derive the associated literal using the rules for the modal operator.

Definition 8.

Given # ∈ {Δ, ∂} and a proof P in D, q is #-provable in P if there is a line P(m) of P such that either

  1. q is a literal and P(m) = +#_BEL q, or

  2. q is a modal literal Xp and P(m) = +#_X p, or

  3. q is a modal literal ¬Xp and P(m) = −#_X p.

Instead, q is #-rejected in P if there is a line P(m) of P such that either

  1. q is a literal and P(m) = −#_BEL q, or

  2. q is a modal literal Xp and P(m) = −#_X p, or

  3. q is a modal literal ¬Xp and P(m) = +#_X p.

We are now ready to report the definition of ±Δ_X.

+Δ_X: If P(n+1) = +Δ_X q then
(1) q ∈ F if X = BEL, or Xq ∈ F otherwise, or
(2) ∃r ∈ R_s^X[q] s.t. ∀a ∈ A(r), a is Δ-provable, or
(3) ∃r ∈ R_s^Y[q] s.t. Convert(Y, X) and ∀a ∈ A(r), Xa is Δ-provable.

−Δ_X: If P(n+1) = −Δ_X q then
(1) q ∉ F if X = BEL, and Xq ∉ F otherwise, and
(2) ∀r ∈ R_s^X[q], ∃a ∈ A(r) s.t. a is Δ-rejected, and
(3) ∀r ∈ R_s^Y[q], if Convert(Y, X), then ∃a ∈ A(r) s.t. Xa is Δ-rejected.

The sole difference with respect to Δ is that now we may use rules of a different mode Y to derive conclusions of mode X through the conversion mechanism. In this framework, only belief rules may convert to other modes. When that is the case, every antecedent of the belief rule in clause (3) must be (definitely) proven with modality X.

We reformulate the definition of being applicable/discarded, now taking into account also the Convert and Conflict relations.

Definition 9.

Given a proof P and X, Y ∈ MOD:

  • A rule r ∈ R[q] is applicable in the proof condition for ±∂_X iff

    1. r ∈ R^X and ∀a ∈ A(r), a is ∂-provable, or

    2. r ∈ R^Y, Convert(Y, X), and ∀a ∈ A(r), Xa is ∂-provable.

  • A rule r is discarded in the condition for ±∂_X iff

    1. r ∈ R^X and ∃a ∈ A(r) such that a is ∂-rejected; or

    2. r ∈ R^Y and, if Convert(Y, X), then ∃a ∈ A(r) such that Xa is ∂-rejected, or

    3. r ∈ R^Y and neither Convert(Y, X) nor Conflict(Y, X).

We are now ready to provide the proof conditions for ±∂_X:

+∂_X: If P(n+1) = +∂_X q then
(1) +Δ_X q ∈ P(1..n) or
(2) (2.1) −Δ_X ∼q ∈ P(1..n) and
(2.2) ∃r ∈ R_sd[q] s.t. r is applicable, and
(2.3) ∀s ∈ R[∼q], either s is discarded, or
(2.3.1) ∃t ∈ R[q] s.t. t is applicable and t > s, and,
where s ∈ R^Y and t ∈ R^Z, either Z = Y, or Convert(Z, Y) and Conflict(Y, Z).
−∂_X: If P(n+1) = −∂_X q then
(1) −Δ_X q ∈ P(1..n) and either
(2.1) +Δ_X ∼q ∈ P(1..n) or
(2.2) ∀r ∈ R_sd[q], either r is discarded, or
(2.3) ∃s ∈ R[∼q] s.t. s is applicable, and
(2.3.1) ∀t ∈ R[q], either t is discarded, or not t > s, or,
where s ∈ R^Y and t ∈ R^Z, Z ≠ Y and,
if Convert(Z, Y), then not Conflict(Y, Z).

Again, the only difference with respect to ∂ is that we now have rules of different modes, and thus we have to ensure the appropriate relationships among the rules. Hence, clause (2.3.1) prescribes that either the attacking rule s and the counterattacking rule t have the same mode, or that t can be used to produce a conclusion of the mode of s (i.e., via Convert and Conflict). Notice that this last case is reported for the sake of completeness, but it is useless in our framework, since it plays a role only in theories with more than three modes.

Being the strong negation of its positive counterpart, −∂_X is defined in an analogous manner.

We define the extension of a defeasible theory as the set of all its positive and negative conclusions. In (??), the authors proved that computing the extension of a theory, in both the argumentation and the agent logic, is linear in the size of the theory.

Let us introduce some preliminary notions, which are needed for formulating the “restoring sociality problem” (?) (and recalled below).

  • Given an agent defeasible theory D, a literal q is supported in D iff there exists a rule r ∈ R[q] such that r is applicable; otherwise q is not supported. For X ∈ MOD we use +Σ_X q and −Σ_X q to indicate that q is supported / not supported by rules of mode X.

  • Primitive intentions of an agent are those intentions given as facts in a theory.

  • Primary intentions and obligations are those derived using only rules for intentions and obligations (without any rule conversion).

  • A social agent is an agent for which obligation rules are stronger than any conflicting intention rules but weaker than any conflicting belief rules.

4.3 Restoring Sociality Problem

Instance:
Let I be a finite set of primitive intentions, OBLl a primary obligation, and D a theory such that I ⊆ F and the derivation of OBLl is blocked by rules that behave as intention rules through conversion, against the sociality of the agent.
Question:
Is there a theory D′, equal to D apart from containing only a proper subset I′ of I instead of I, such that every primary obligation derivable in D is still derivable in D′ and its derivation is no longer blocked?

Let us consider the theory consisting of

is a belief rule, and so it is stronger than the obligation rule . In addition, the belief rule is not applicable (i.e., ), since there is no way to prove . There are no obligation rules for , so . However, rule behaves as an intention rule, since all its antecedents can be proved as intentions, i.e., and . Hence, since is stronger than , the derivation of is prevented, against the sociality of the agent.

The related decision problem is whether it is possible to avoid this “deviant” behaviour by giving up some primitive intentions, retaining all the (primary) obligations, and keeping the set of primitive intentions as close as possible to the original one.
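Viewed computationally, the problem pairs an exponential search space with a cheap verification step, the hallmark of NP. The sketch below is our own illustration: restore_sociality brute-forces subsets of the primitive intentions, while the hypothetical oracle is_social stands for the linear-time entailment checks described above.

```python
from itertools import combinations

def restore_sociality(theory, intentions, is_social):
    """Search for a maximal proper subset of the primitive intentions
    whose retention makes the agent social again.  'is_social(theory,
    kept)' is a hypothetical linear-time check (extensions are
    computable in linear time); the subset enumeration is the
    exponential part."""
    for size in range(len(intentions) - 1, -1, -1):     # prefer larger subsets
        for kept in combinations(sorted(intentions), size):
            if is_social(theory, set(kept)):
                return set(kept)
    return None
```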

Theorem 10 ((?)).

The Restoring Sociality Problem is NP-complete.

5 Dialogue Games

A dialogue game involves a sequence of interactions between two players, the Proponent P and the Opponent O. The content of the dispute is that P attempts to establish the validity of a particular thesis (called the critical literal within our framework), whereas O attacks P’s claims in order to refute that thesis. We strengthen this standard setting by requiring that the Opponent carry the burden of proof on the opposite thesis, and not just the duty to refute the Proponent’s thesis.

The challenge between the parties is formalised by means of argument exchange. In most concrete instances of argumentation frameworks, arguments are defined as chains of reasoning based on facts and rules captured in some formal language (in our case, a defeasible derivation). Each party adheres to the game rules defined below.

The players partially share knowledge of a defeasible theory. Each participant has private knowledge of some of the rules of the theory. The remaining rules are known by both parties (this set may be empty); these rules, along with all the facts of the theory and the superiority relation, represent the common knowledge of both participants.

By putting forward a private argument during a step of the game, the agent increases the common knowledge by the rules used within the argument just played.

Define the argumentation theory to be (F, R_P ∪ R_O ∪ R_com, >) such that i. R_P, R_O, and R_com are pairwise disjoint, ii. R_P (R_O) is the private knowledge of the Proponent (Opponent), and iii. R_com is the (possibly empty) set of rules known by both participants. We use the superscript notation R_P^t, R_O^t, R_com^t, and T^t to denote such sets (and the resulting theory) at turn t.

We assume that the theory is coherent and consistent, i.e., there is no literal q such that: i. the theory proves both +∂q and −∂q, and ii. the theory proves both +∂q and +∂∼q.

We now formalise the game rules, that is, how the common theory is modified on the basis of the move played at turn t.

The parties start the game by choosing the critical literal l to be discussed: the Proponent has the burden of proving l by using the current common knowledge along with a subset of R_P, whereas the Opponent’s final goal is to prove ∼l using R_O instead of R_P.

The players may not present arguments in parallel: they take turns in making their moves.

The repertoire of moves at each turn includes just 1) putting forward an argument, and 2) passing.

When putting forward an argument at turn t, the Proponent (Opponent) may bring a demonstration whose terminal literal differs from l (∼l). When a player passes, she declares her defeat and the game ends. This happens when there is no combination of her remaining private rules that proves her thesis.

Hence, the initial state of the game is at turn t = 0, with R_com^0 = R_com, R_P^0 = R_P, and R_O^0 = R_O.

If the initial common theory already proves l, the Opponent starts the game. Otherwise, the Proponent does so.

At turn t, if the Proponent plays R ⊆ R_P^{t−1}, then

  • T^t = (F, R_com^{t−1} ∪ R, >) (T^1 = (F, R_com ∪ R, >) if t = 1);

  • T^t proves +∂l;

  • T^t proves −∂∼l;

  • R_P^t = R_P^{t−1} \ R, R_com^t = R_com^{t−1} ∪ R, and R_O^t = R_O^{t−1};

  • the turn passes to the Opponent.

At turn t, if the Opponent plays R ⊆ R_O^{t−1}, then

  • T^t = (F, R_com^{t−1} ∪ R, >);

  • T^t proves +∂∼l;

  • T^t proves −∂l;

  • R_O^t = R_O^{t−1} \ R, R_com^t = R_com^{t−1} ∪ R, and R_P^t = R_P^{t−1};

  • the turn passes to the Proponent.
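The protocol can be summarised by the following schematic loop, a simplification under our own names: entails(rules, goal) abstracts the defeasible entailment check on the common theory (facts and superiority are fixed and left implicit), neg gives the complementary literal, and choose_move is the hard step studied below.

```python
from itertools import chain, combinations

def choose_move(private, common, entails, goal):
    """Find a subset of the private rules that, added to the common
    rules, proves the mover's thesis; None means the player must pass."""
    subsets = chain.from_iterable(
        combinations(sorted(private), k) for k in range(len(private) + 1))
    for move in subsets:
        if entails(common | set(move), goal):
            return set(move)
    return None

def dialogue_game(common, private_p, private_o, entails, lit, neg):
    """Alternate turns until one player has no winning move and passes."""
    proponent_turn = not entails(common, lit)   # O starts if lit already holds
    private = {True: set(private_p), False: set(private_o)}
    while True:
        goal = lit if proponent_turn else neg(lit)
        move = choose_move(private[proponent_turn], common, entails, goal)
        if move is None:                        # passing declares defeat
            return "O" if proponent_turn else "P"
        private[proponent_turn] -= move         # played rules become public
        common = common | move
        proponent_turn = not proponent_turn
```

Each successful move must be non-empty (the common theory proves the mover’s opponent’s thesis at that point), so the private sets shrink and the game terminates.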

5.1 Strategic Argumentation Problem

Proponent’s instance for turn t: Let l be the critical literal, R_P^{t−1} be the set of the private rules of the Proponent, and T be such that either T = T^{t−1} if t > 1, or T = (F, R_com, >) otherwise.

Question: Is there a subset S of R_P^{t−1} such that (F, R_com^{t−1} ∪ S, >) proves +∂l?

Opponent’s instance for turn t: Let l be the critical literal, R_O^{t−1} be the set of the private rules of the Opponent, and T = T^{t−1}.

Question: Is there a subset S of R_O^{t−1} such that (F, R_com^{t−1} ∪ S, >) proves +∂∼l?
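Both instances have the same shape, which makes membership in NP immediate: a certificate is simply the subset of private rules played, and checking it requires a single entailment test, linear in the size of the theory. A one-line sketch, with the same hypothetical entails oracle as above:

```python
def verify_move(certificate, common, entails, goal):
    """NP verification step: does the played subset, added to the
    common rules, prove the mover's thesis?"""
    return entails(common | set(certificate), goal)
```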

6 Reduction

We now show how to transform Agent Logic (Section 4.2) into Argumentation Logic (Section 4.1). Basically, we need to transform both literals and rules: whereas the agent theory deals with three different modes of rules and with modal literals, the argumentation theory has rules without modes and only non-modal literals.

The two main ideas of the transformations proposed in Definitions 11 and 12 are:

  • Flatten all modal literals with respect to internal negations and modalities. For instance, a modal literal such as INTp is flattened into a fresh plain literal (intuitively, “p is intended”), while its negation ¬INTp is flattened into the complement of that literal.

  • Remove the modes from rules for BEL, OBL, and INT. Thus, a rule with mode X and consequent c is transformed into a standard, non-modal rule whose conclusion embeds the mode X. An exception is made for belief rules, given that they do not produce modal literals: a belief rule is therefore translated into a rule with the same (plain) conclusion.

One transformation flattens the propositional part of a literal and syntactically represents negations; the other flattens modalities.
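A possible reading of the two transformations in code is sketched below; the naming scheme for fresh atoms (“obl_p”, “int_p”, “n_p”) and the string convention for negation (“~”) are our own assumptions, chosen only to illustrate the flattening.

```python
def flatten(mode: str, lit: str) -> str:
    """Flatten a modal literal X(lit) into a plain literal: belief
    literals stay as they are; obligation/intention literals embed the
    mode in a fresh atom, with an inner negation pushed into the atom's
    name so that the result is a proper atom.  The outer negation of a
    modal literal is handled by the complement over flattened literals."""
    if mode == "BEL":
        return lit
    prefix = {"OBL": "obl_", "INT": "int_"}[mode]
    body = "n_" + lit[1:] if lit.startswith("~") else lit
    return prefix + body

def flatten_rule(mode, antecedents, consequent):
    """Strip the mode from a rule: flatten every antecedent (each given
    as a (mode, literal) pair) and embed the rule's mode in its head."""
    ante = frozenset(flatten(m, a) for m, a in antecedents)
    return ante, flatten(mode, consequent)
```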

Definition 11.

Let D be a defeasible agent theory. Define two syntactic transformations, one acting on the propositional part of literals and one on modalities, as follows:

Given that in BIO a belief modal literal is not written BELp but simply p, the flattening leaves a literal unchanged whenever the considered mode is BEL, while it embeds the mode whenever the mode is OBL or INT.

We need to redefine the concept of complement in order to map BIO modal literals into an argumentation logic whose literals are obtained through the flattening. Thus, if q is a flattened positive literal then ∼q is its negation, and vice versa; moreover, the flattened counterparts of Xp and ¬Xp are complementary to one another.

We now propose a detailed description of facts and rules introduced by Definition 12.

In the “restoring sociality problem” we have to select a subset of factual intentions, while in the “strategic argumentation problem” we choose a subset of rules to play in order to defeat the opponent’s argument. Therefore, factual intentions are modelled as strict rules with empty antecedent, while factual beliefs and obligations are facts of the argumentation theory.

We recall that, while proving ∂_X, a rule in BIO may fire either if it is of mode X, or through Convert, or through Conflict. Hence, a rule in the agent theory has many counterparts in the argumentation theory.

Specifically, the first counterpart is built from the original rule by removing the mode and flattening each antecedent as well as the consequent, which in turn embeds the mode introduced by the rule.

Moreover, if the rule is a belief rule, then it may be used through conversion to derive conclusions of another mode. To capture this feature we introduce a rule whose conclusion embeds the target mode and where each antecedent is flattened with that mode, according either to clause (3) of ±Δ_X or to condition 2. of Definition 9.

In BIO it is easy to determine which rules may fire against one another, since the consequents of rules are non-modal literals: even when the rules have different modes and the conflict mechanism is used, their conclusions are two complementary literals. Given the definition of complementary flattened literals we introduced after Definition 11, this is not the case for the literals of the argumentation theory. The situation is depicted in the following theory.

.

Here, may fire against through while cannot, given that is not the complement of . In the same fashion, if we derive then may fire against because of , while if we have either or then the conflict between beliefs and intentions is activated by the use of through either or , respectively. Nonetheless, in both cases there is no counterpart of in able to fire against .

To obviate this issue, we introduce a defeater where we flatten the antecedents of and whose conclusion is the intentional reading of the conclusion of , namely . This means that when fires, so does , attacking . Notice that, being a defeater, such a rule cannot derive directly, but just prevents the opposite conclusion. The same idea is adopted for rules and : defeaters are needed to model the conflict between beliefs and intentions (as for rule in the previous example), whereas other defeaters take care of situations where may be used to convert into and prevails over by .

Thus in the previous example, we would have: , , , , with .
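In the spirit of this construction (and reusing the Rule and flatten sketches above), a defeater for the conflict between a rule and an intention-based attacker could be generated as follows; the encoding is our illustrative guess, not the paper’s exact definition.

```python
def conflict_defeater(rule_label, antecedents, consequent):
    """Build a defeater that fires whenever the original rule fires:
    same (flattened) antecedents, and as head the intentional reading
    of the original conclusion.  Being a defeater, it never derives
    its head; it only blocks the complementary conclusion."""
    ante = frozenset(flatten(m, a) for m, a in antecedents)
    return Rule(label=rule_label + "_dft",
                antecedent=ante,
                consequent=flatten("INT", consequent),
                kind="defeater")
```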

Antecedents in BIO may be negations of modal literals; in that framework, a theory proves such a negated modal literal if the theory rejects the corresponding literal (as stated by condition 3. of Definition 8). In the argumentation theory we have to prove the corresponding flattened literal instead. This is mapped through conditions 8–10 of Definition 12 and the last condition of the proof theory.

Definition 12.

Let D be a defeasible agent theory. Define an argumentation defeasible theory D′ such that

(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9)
(10)
(11)

We name D′ the argumentation counterpart of D.

The following result proves the correctness of the transformation given in Definition 12: the transformation preserves positive and negative provability for any given literal.

Theorem 13.

Let D be a defeasible agent theory and D′ the argumentation counterpart of D. Given # ∈ {Δ, ∂} and X ∈ MOD:

  1. +#_X q is provable in D iff the flattened counterpart of Xq is +#-provable in D′;

  2. −#_X q is provable in D iff the flattened counterpart of Xq is −#-provable in D′.

Proof.

The proof is by induction on the length of a derivation P. For the inductive base, we consider all possible derivations of length 1 for a given literal q. Given the proof tags’ specifications in Definitions 11 and 12, the inductive base only takes into consideration derivations for ±Δ, since proving ±∂ requires at least 2 steps.

.

This is possible either when clause (1) or clause (2) of +Δ_X in D holds.

For (1), we have either i. and or then