Analysis of Dialogical Argumentation
via Finite State Machines
Dialogical argumentation is an important cognitive activity by which agents exchange arguments and counterarguments as part of some process such as discussion, debate, persuasion, and negotiation. Whilst numerous formal systems have been proposed, there is a lack of frameworks for implementing and evaluating these proposals. First-order executable logic has been proposed as a general framework for specifying and analysing dialogical argumentation. In this paper (already published in the Proceedings of the International Conference on Scalable Uncertainty Management (SUM'13), LNCS 8078, Pages 1-14, Springer, 2013), we investigate how we can implement systems for dialogical argumentation using propositional executable logic. Our approach is to present and evaluate an algorithm that generates a finite state machine that reflects a propositional executable logic specification for a dialogical argumentation together with an initial state. We also consider how the finite state machines can be analysed, with the minimax strategy being used as an illustration of the kinds of empirical analysis that can be undertaken.
Anthony Hunter Department of Computer Science, University College London, Gower Street, London WC1E 6BT, UK
Dialogical argumentation involves agents exchanging arguments in activities such as discussion, debate, persuasion, and negotiation (?). Dialogue games are now a common approach to characterizing argumentation-based agent dialogues (e.g. (?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?)). Dialogue games are normally made up of a set of communicative acts called moves, and a protocol specifying which moves can be made at each step of the dialogue. In order to compare and evaluate dialogical argumentation systems, we proposed in a previous paper that first-order executable logic could be used as a common theoretical framework to specify and analyse dialogical argumentation systems (?).
In this paper, we explore the implementation of dialogical argumentation systems in executable logic. For this, we focus on propositional executable logic as a special case, and investigate how a finite state machine (FSM) can be generated as a representation of the possible dialogues that can emanate from an initial state. The FSM is a useful structure for investigating various properties of the dialogue, including conformance to protocols, and application of strategies. We provide empirical results on generating FSMs for dialogical argumentation, and on how they can be analysed using the minimax strategy. We demonstrate through a preliminary implementation that it is computationally viable to generate the FSMs and to analyse them. This has wider implications for using executable logic to apply dialogical argumentation in practical uncertainty management applications, since we can now empirically investigate the performance of the systems in handling inconsistency in data and knowledge.
Propositional executable logic
In this section, we present a propositional version of the executable logic which we will show is amenable to implementation. This is a simplified version of the framework for first-order executable logic in (?).
We assume a set of atoms which we use to form propositional formulae in the usual way using disjunction, conjunction, and negation connectives. We construct modal formulae using the ⊞, ⊟, ⊕, and ⊖ modal operators. We only allow literals to be in the scope of a modal operator. If α is a literal, then each of ⊞α, ⊟α, ⊕α, and ⊖α is an action unit. Informally, we describe the meaning of action units as follows: ⊞α means that the action by an agent is to add the literal α to its next private state; ⊟α means that the action by an agent is to delete the literal α from its next private state; ⊕α means that the action by an agent is to add the literal α to the next public state; and ⊖α means that the action by an agent is to delete the literal α from the next public state.
We use the action units to form action formulae as follows using the disjunction and conjunction connectives: (1) if μ is an action unit, then μ is an action formula; and (2) if μ and ν are action formulae, then μ ∨ ν and μ ∧ ν are action formulae. Then, we define the action rules as follows: if φ is a classical formula and ψ is an action formula, then φ ⇒ ψ is an action rule. For instance, b(a) ∧ ¬c(a) ⇒ ⊕c(a) ∧ ⊟b(a) is an action rule (which we might use in an example where b denotes belief, c denotes claim, and a is some information).
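To make these definitions concrete, action units and action rules could be encoded as follows. This is a minimal sketch in Python (the language of the implementation reported later); the class names and the string encoding of literals are our own illustrative choices, and the action formula of a rule's head is assumed to be given in disjunctive normal form.

```python
from dataclasses import dataclass

# Illustrative encoding (names are ours): an action unit pairs one of the
# four operators with a ground literal; an action rule pairs a condition
# (a set of literals, read conjunctively) with the alternative sets of
# action units in its head (the action formula in disjunctive normal form).
OPS = ("add_private", "del_private", "add_public", "del_public")

@dataclass(frozen=True)
class ActionUnit:
    op: str       # one of OPS
    literal: str  # e.g. "c(a)" for "a is claimed"

@dataclass(frozen=True)
class ActionRule:
    condition: frozenset  # literals that must all be entailed
    disjuncts: tuple      # each disjunct is a frozenset of ActionUnits

# Example rule: if the agent believes a and a is not yet claimed, then
# claim a publicly and delete the belief from the next private state.
rule = ActionRule(
    condition=frozenset({"b(a)", "not c(a)"}),
    disjuncts=(frozenset({ActionUnit("add_public", "c(a)"),
                          ActionUnit("del_private", "b(a)")}),),
)
```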
Implicit in the definitions for the language is the fact that we can use it as a meta-language (?). For this, the object-language will be represented by terms in this meta-language. For instance, the object-level formula a ∧ b can be represented by a term and(a, b), where the object-level literals a and b are represented by constant symbols, and the connective is represented by a function symbol. Then we can form the atom belief(and(a, b)), where belief is a predicate symbol. Note, in general, no special meaning is ascribed to the predicate symbols or terms. They are used as in classical logic. Also, the terms and predicates are all ground, and so it is essentially a propositional language.
We use a state-based model of dialogical argumentation with the following definition of an execution state. To simplify the presentation, we restrict consideration in this paper to two agents. An execution represents a finite or infinite sequence of execution states. If the sequence is finite, then e(t) denotes the terminal state; otherwise t = ∞.
An execution is a tuple (s₁, a₁, p, a₂, s₂) where, for each n with 0 ≤ n ≤ t, s₁(n) is a set of ground literals, a₁(n) is a set of ground action units, p(n) is a set of ground literals, a₂(n) is a set of ground action units, and s₂(n) is a set of ground literals. For each n with n ≤ t, an execution state is e(n) = (s₁(n), a₁(n), p(n), a₂(n), s₂(n)), where e(0) is the initial state. We assume a₁(0) = a₂(0) = ∅. We call s₁(n) the private state of agent 1 at time n, a₁(n) the action state of agent 1 at time n, p(n) the public state at time n, a₂(n) the action state of agent 2 at time n, and s₂(n) the private state of agent 2 at time n.
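The five components of an execution state can be captured directly as a record type. A sketch in Python (the field names are ours, not the paper's):

```python
from dataclasses import dataclass

# One execution state per time point n: the private and action states of
# each agent, plus the shared public state.
@dataclass(frozen=True)
class ExecutionState:
    s1: frozenset = frozenset()  # private state of agent 1 (ground literals)
    a1: frozenset = frozenset()  # action state of agent 1 (ground action units)
    p: frozenset = frozenset()   # public state (ground literals)
    a2: frozenset = frozenset()  # action state of agent 2
    s2: frozenset = frozenset()  # private state of agent 2

# An execution is then a (finite or lazily generated) sequence of
# ExecutionState values, with the state at n = 0 as the initial state;
# the action states at n = 0 are empty, as required by the definition.
initial = ExecutionState(s1=frozenset({"b(a)"}))
```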
In general, there is no restriction on the literals that can appear in the private and public state. The choice depends on the specific dialogical argumentation we want to specify. This flexibility means we can capture diverse kinds of information in the private state about agents by assuming predicate symbols for their own beliefs, objectives, preferences, arguments, etc, and for what they know about other agents. The flexibility also means we can capture diverse information in the public state about moves made, commitments made, etc.
The first 5 steps of an infinite execution, where each row in the table is an execution state, and where b denotes belief and c denotes claim.
We define a system in terms of the action rules for each agent, which specify what moves the agent can potentially make based on the current state of the dialogue. In this paper, we assume agents take turns, and at each time point the actions are from the head of just one rule (as defined in the rest of this section).
A system is a tuple (Rules₁, Rules₂, Initials), where Rulesᵢ is the set of action rules for agent i ∈ {1, 2}, and Initials is the set of initial states.
Given the current state of an execution, the following definition captures which rules are fired. For agent i, these are the rules that have the condition literals satisfied by the current private state sᵢ(n) and public state p(n). We use classical entailment, denoted ⊨, for satisfaction, but other relations could be used (e.g. Belnap's four-valued logic). In order to relate an action state in an execution with an action formula, we require the following definition.
For an action state a(n) and an action formula ψ, a(n) satisfies ψ, denoted a(n) ⊩ ψ, as follows: (1) a(n) ⊩ μ iff μ ∈ a(n), when μ is an action unit; (2) a(n) ⊩ ψ₁ ∧ ψ₂ iff a(n) ⊩ ψ₁ and a(n) ⊩ ψ₂; and (3) a(n) ⊩ ψ₁ ∨ ψ₂ iff a(n) ⊩ ψ₁ or a(n) ⊩ ψ₂.
For an action state a(n) and an action formula ψ, a(n) minimally satisfies ψ, denoted a(n) ⊩* ψ, iff a(n) ⊩ ψ and, for all A ⊂ a(n), A ⊮ ψ.
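Satisfaction and minimal satisfaction can be sketched as follows, with action units encoded as strings and compound action formulae as nested tuples (an encoding of our own choosing); minimality is checked by brute force over proper subsets, which is adequate for the small action states arising in the examples.

```python
from itertools import combinations

# Illustrative encoding (ours): an action unit is a string such as
# "add_public:c(a)"; ("and", f, g) and ("or", f, g) build compound
# action formulae.
def satisfies(actions, formula):
    if isinstance(formula, str):  # base case: an action unit
        return formula in actions
    op, f, g = formula
    if op == "and":
        return satisfies(actions, f) and satisfies(actions, g)
    return satisfies(actions, f) or satisfies(actions, g)  # "or"

def minimally_satisfies(actions, formula):
    # The action state minimally satisfies the formula iff it satisfies
    # it and no proper subset of it does.
    if not satisfies(actions, formula):
        return False
    items = list(actions)
    return not any(satisfies(set(sub), formula)
                   for r in range(len(items))
                   for sub in combinations(items, r))

f = ("and", "add_public:c(a)", "del_private:b(a)")
```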
Consider the execution in Example 1. For agent 1 at n = 1, the action state a₁(1) minimally satisfies the head of the action rule fired at that step.
We give two constraints on an execution to ensure that it is well-behaved. The first (propagated) ensures that each subsequent private state (respectively each subsequent public state) is the current private state (respectively current public state) for the agent updated by the actions given in the action state. The second (engaged) ensures that an execution does not have one state with no actions followed immediately by another state with no actions (otherwise the dialogue can lapse), except at the end of the dialogue where neither agent has further actions.
An execution is propagated iff, for all i ∈ {1, 2} and for all n < t, sᵢ(n+1) = (sᵢ(n) \ Delsᵢ(n)) ∪ Addsᵢ(n) and p(n+1) = (p(n) \ Dels₀(n)) ∪ Adds₀(n), where Addsᵢ(n) = {α | ⊞α ∈ aᵢ(n)}, Delsᵢ(n) = {α | ⊟α ∈ aᵢ(n)}, Adds₀(n) = {α | ⊕α ∈ a₁(n) ∪ a₂(n)}, and Dels₀(n) = {α | ⊖α ∈ a₁(n) ∪ a₂(n)}.
Let e be an execution with terminal index t. e is finitely engaged iff (1) t ≠ ∞; (2) for all n < t − 1, if a₁(n) ∪ a₂(n) = ∅, then a₁(n+1) ∪ a₂(n+1) ≠ ∅; (3) a₁(t−1) ∪ a₂(t−1) = ∅; and (4) a₁(t) ∪ a₂(t) = ∅. e is infinitely engaged iff (1) t = ∞; and (2) for all n, if a₁(n) ∪ a₂(n) = ∅, then a₁(n+1) ∪ a₂(n+1) ≠ ∅.
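The propagated constraint amounts to a simple update of the private and public states by the action states. A sketch under our string encoding of action units (here deletions are applied before additions, one of the two possible conventions):

```python
def propagate_private(private, actions):
    # Apply an agent's action state to its private state: remove the
    # literals flagged for deletion, then add those flagged for addition.
    adds = {u.split(":", 1)[1] for u in actions if u.startswith("add_private:")}
    dels = {u.split(":", 1)[1] for u in actions if u.startswith("del_private:")}
    return (private - dels) | adds

def propagate_public(public, actions1, actions2):
    # The next public state is updated by the public actions of both agents.
    acts = actions1 | actions2
    adds = {u.split(":", 1)[1] for u in acts if u.startswith("add_public:")}
    dels = {u.split(":", 1)[1] for u in acts if u.startswith("del_public:")}
    return (public - dels) | adds
```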
The next definition shows how a system provides the initial state of an execution and the actions that can appear in an execution. It also ensures turn taking by the two agents.
Let S = (Rules₁, Rules₂, Initials) be a system and let e = (s₁, a₁, p, a₂, s₂) be an execution. S generates e iff (1) e is propagated; (2) e is finitely engaged or infinitely engaged; (3) e(0) ∈ Initials; and (4) for all n with 0 < n ≤ t:

If n is odd, then a₂(n) = ∅ and either a₁(n) = ∅ or there is an action rule φ ⇒ ψ ∈ Rules₁ s.t. s₁(n−1) ∪ p(n−1) ⊨ φ and a₁(n) ⊩* ψ.

If n is even, then a₁(n) = ∅ and either a₂(n) = ∅ or there is an action rule φ ⇒ ψ ∈ Rules₂ s.t. s₂(n−1) ∪ p(n−1) ⊨ φ and a₂(n) ⊩* ψ.
We can obtain the execution in Example 1 with the following rules: (1) ; and (2) .
Generation of finite state machines
In (?), we showed that for any executable logic system with a finite set of ground action rules and an initial state, there is an FSM that consumes exactly the finite execution sequences of the system for that initial state. That result assumes that each agent makes all its possible actions at each step of the execution. Moreover, that result only showed that these FSMs exist; it did not give any way of obtaining them.
In this paper, we focus on propositional executable logic where the agents take turns and the head of only one action rule is used at each step, and we show how we can construct an FSM that represents the set of executions of a system for an initial state. For this, each state of the FSM is a tuple (s₁(n), p(n), s₂(n)), and each letter in the alphabet is a tuple (a₁(n), a₂(n), x), where n is an execution step and x is the agent holding the turn at step n (with x = 0 when neither agent acts).
A finite state machine (FSM) M represents a system S = (Rules₁, Rules₂, Initials) for an initial state e(0) iff
(5) Trans is the smallest set of transitions s.t. for all executions e generated by S from e(0) and for all n < t, there is a transition from the state (s₁(n), p(n), s₂(n)) to the state (s₁(n+1), p(n+1), s₂(n+1)) labelled with the letter (a₁(n+1), a₂(n+1), x), where x is 1 when a₁(n+1) ≠ ∅ and n+1 is odd, x is 2 when a₂(n+1) ≠ ∅ and n+1 is even, and x is 0 when a₁(n+1) ∪ a₂(n+1) = ∅.
Let M be the following FSM where = ; = ; = . = , ; and = . M represents the system in Ex 1.
For each system S and each initial state e(0), there is an FSM M such that M represents S for e(0).
A string σ₁ … σₜ reflects an execution e iff, for each n with 1 ≤ n ≤ t, σₙ is the tuple (a₁(n), a₂(n), x).
Let S be a system, and let M be an FSM that represents S for e(0). Then:

for all strings σ s.t. M accepts σ, there is an execution e s.t. S generates e, e(0) is its initial state, and σ reflects e;

for all finite executions e s.t. S generates e and e(0) is its initial state, there is a string σ such that M accepts σ and σ reflects e.
So for each initial state of a system, we can obtain an FSM that is a concise representation of the executions of the system for that initial state. In Figure 3, we provide an algorithm for generating these FSMs. We show the correctness of the algorithm as follows.
Let S = (Rules₁, Rules₂, Initials) be a system and let e(0) ∈ Initials. If M represents S w.r.t. e(0) and M′ is the FSM returned by the algorithm for S and e(0), then M = M′.
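Although Figure 3 gives the algorithm, its core can be sketched as a breadth-first exploration of the reachable execution states, with turn-taking made explicit by pairing each state with the agent to move. In this sketch, `successors` is an assumed parameter (not part of the paper's pseudocode) that returns the (label, next state) pairs licensed by the given agent's fired action rules:

```python
from collections import deque

def generate_fsm(initial, successors):
    # Breadth-first exploration of the reachable execution states.  Each
    # FSM state pairs an execution state with the agent to move next, and
    # each transition is labelled by the acting agent and its actions.
    # `successors(state, agent)` is assumed to return the (label, next
    # state) pairs licensed by the agent's fired action rules.
    start = (initial, 1)                  # agent 1 holds the first turn
    states, transitions = {start}, set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        state, agent = node
        for label, nxt_state in successors(state, agent):
            nxt = (nxt_state, 3 - agent)  # alternate turns
            transitions.add((node, (agent, label), nxt))
            if nxt not in states:
                states.add(nxt)
                queue.append(nxt)
    return states, transitions
```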
An FSM provides a more efficient representation of all the possible executions than the set of executions for an initial state. For instance, if there is a set of states that appear in some permutation of each of the executions then this can be more compactly represented by an FSM. And if there are infinite sequences, then again this can be more compactly represented by an FSM.
Once we have an FSM of a system with an initial state, we can ask obvious simple questions such as: is termination possible, is termination guaranteed, and is one system subsumed by another? So by translating a system into an FSM, we can harness substantial theory and tools for analysing FSMs.
Next we give a couple of very simple examples of FSMs obtained from executable logic. In these examples, we assume that agent 1 is trying to win an argument with agent 2. We assume that agent 1 has a goal, represented by a goal predicate in the private state of agent 1 for some argument. In its private state, each agent has zero or more arguments, each represented by an argument predicate, and zero or more attacks, each represented by an attack predicate from one argument to another. In the public state, each argument that has been posited is represented by an argument predicate. Each agent can add an attack to the public state if the attacked argument is already in the public state and the agent has the attacker in its private state. We have encoded the rules so that after an argument has been used as an attacker, it is removed from the private state of the agent so that it does not keep firing the action rule (this is one of a number of ways that we can avoid repetition of moves).
For the following action rules, with the initial state where the private state of agent 1 is , the public state is empty, and the private state of agent 2 is , we get the FSM in Figure 1.
The terminal state therefore contains the following argument graph.
Hence the goal argument is in the grounded extension of the graph (as defined in (?)).
For the following action rules, with the initial state where the private state of agent 1 is , the public state is empty, and the private state of agent 2 is , we get the FSM in Figure 2.
The terminal state therefore contains the following argument graph.
Hence the goal argument is in the grounded extension of the graph.
In the above examples, we have considered a formalisation of dialogical argumentation where agents exchange abstract arguments and attacks. It is straightforward to formalize other kinds of example to exchange a wider range of moves, richer content (e.g. logical arguments composed of premises and conclusion (?)), and richer notions (e.g. value-based argumentation (?)).
Minimax analysis of finite state machines
Minimax analysis is applied to two-person games for deciding which moves to make. We assume two players called MIN and MAX. MAX moves first, and they take turns until the game is over. An end function determines when the game is over. Each state where the game has ended is an end state. A utility function (i.e. a payoff function) gives the outcome of the game (e.g. chess has win, draw, and lose). The minimax strategy is that MAX aims to get to an end state that maximizes its utility regardless of what MIN does.
We can apply the minimax strategy to the FSMs generated for dialogical argumentation as follows: (1) undertake a breadth-first search of the FSM; (2) stop searching at a node on a branch if the node is an end state according to the end function (note, this is not necessarily a terminal state in the FSM); (3) apply the utility function to each leaf node (i.e. to each end state) in the search tree to give the value of the node; (4) traverse the tree in post-order, and calculate the value of each non-leaf node as follows, where the non-leaf node is at depth d and has children n₁, …, nₖ:
If d is odd, then the value of the node is the maximum of the values of n₁, …, nₖ.
If d is even, then the value of the node is the minimum of the values of n₁, …, nₖ.
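Steps (3) and (4) amount to a standard post-order minimax evaluation. A minimal sketch, where `children` and `utility` are assumed parameters giving the search-tree structure and the leaf values:

```python
def minimax(node, children, utility, depth=1):
    # Post-order evaluation of the search tree: a leaf (end state) gets
    # its utility; an odd-depth node (agent 1 / MAX to move) takes the
    # maximum of its children's values; an even-depth node (agent 2 /
    # MIN to move) takes the minimum.
    kids = children(node)
    if not kids:
        return utility(node)
    values = [minimax(k, children, utility, depth + 1) for k in kids]
    return max(values) if depth % 2 == 1 else min(values)
```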
There are numerous types of dialogical argumentation that can be modelled using propositional executable logic and analysed using the minimax strategy. Before we discuss some of these options, we consider some simple examples where we assume that the search tree is exhaustive (so each branch only terminates when it reaches a terminal state in the FSM), and the utility function returns 1 if the goal argument is in the grounded extension of the graph in the terminal state, and returns 0 otherwise.
From the FSM in Example 5, we get the minimax search tree in Figure (a), and from the FSM in Example 6, we get the minimax search tree in Figure (b). In each case, the terminal states contain an argument graph in which the goal argument is in the grounded extension of the graph. So each leaf of the minimax tree has a utility of 1, and each non-leaf node has the value 1. Hence, agent 1 is guaranteed to win each dialogue whatever agent 2 does.
The next example is more interesting from the point of view of using the minimax strategy since agent 1 has a choice of what moves it can make and this can affect whether or not it wins.
In this example, we assume agent 1 has two goals, but it can only present arguments for one of them. So if it makes the wrong choice, it can lose the game. The executable logic rules are given below and the resulting FSM is given in Figure 4. For the minimax tree (given in Figure (c)), the left branch results in an argument graph in which the goal argument is not in the grounded extension, whereas the right branch terminates in an argument graph in which the goal argument is in the grounded extension. By a minimax analysis, agent 1 wins.
We can use any criterion for identifying the end states. In the above, we have used the exhaustive end function, for which an end state (i.e. a leaf node in the search tree) is a terminal state in the FSM followed by two empty transitions. If a branch does not come to a terminal state in the FSM, then it is an infinite branch. We could instead use a non-repetitive end function, where the search tree stops when there are no new nodes to visit. For instance, for Example 4, we could use the non-repetitive end function to give a search tree that contains one branch. Another simple option is a fixed-depth end function, which has a specified maximum depth for any branch of the search tree. More advanced options for end functions include the concession end function: when an agent has a losing position, and it knows that it cannot add anything to change the position, then it concedes.
There is also a range of options for the utility function. In the examples, we have used grounded semantics to determine whether a goal argument is in the grounded extension of the argument graph specified in the terminal public state. A refinement is the weighted utility function, which weights the utility assigned by the grounded utility function by a factor that decreases with the depth of the leaf. The aim of this is to favour shorter dialogues. Further definitions for utility functions arise from using other semantics, such as preferred or stable semantics, and richer formalisms, such as value-based argumentation (?).
In this study, we have implemented three algorithms: the generator algorithm, which takes an initial state and a set of action rules for each agent, and outputs the generated FSM; a breadth-first search algorithm, which takes an FSM and a choice of end function, and outputs a search tree; and a minimax assignment algorithm, which takes a search tree and a choice of utility function, and outputs a minimax tree. These implemented algorithms were used together so that, given an initial state and rules for each agent, the overall output was a minimax tree. This could then be used to determine whether or not agent 1 had a winning strategy (given the initial state). The implementation incorporates the exhaustive end function, and two choices of utility function (grounded and weighted grounded).
The implementation is in Python 2.6 and was run on a Windows XP PC with an Intel Core 2 Duo CPU E8500 at 3.16 GHz and 3.25 GB RAM. For the evaluation, we also implemented an algorithm for generating test inputs. Each test input comprised an initial state and a set of action rules for each agent. Each initial state involved 20 arguments randomly assigned to the two agents, and up to 20 attacks per agent. For each attack in an agent's private state, the attacker is an argument in the agent's private state, and the attacked argument is an argument in the other agent's private state. The results are presented in Table 1.
| Average no. attacks | Average no. FSM nodes | Average no. FSM transitions | Average no. tree nodes | Average run time | Median run time | No. of runs timed out |
As can be seen from these results, up to about 15 attacks per agent, the implementation runs in negligible time. However, above 15 attacks per agent, the time did increase markedly, and a substantial minority of these runs timed out. To indicate the size of the larger FSMs, consider the last line of the table, where the runs had an average of 18.02 attacks per agent: for this set, 8 out of 100 runs had 80+ nodes in the FSM. Of these 8 runs, the number of states was between 80 and 163, and the number of transitions was between 223 and 514.
The algorithm is somewhat naive in a number of respects. For instance, the algorithm for finding the grounded extension considers every subset of the set of arguments. Clearly, more efficient algorithms can be developed, or the calculation can be subcontracted to a system such as ASPARTIX (?). Nonetheless, there are interesting applications where 20 arguments would be reasonable, and so we have shown that we can analyse such situations successfully using the minimax strategy; with some refinement of the algorithms, it is likely that larger FSMs can be constructed and analysed.
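One such refinement: rather than enumerating every subset of arguments, the grounded extension can be computed as the least fixed point of the characteristic function, iteratively collecting the defended arguments starting from the empty set. A sketch (not the implementation evaluated above):

```python
def grounded_extension(arguments, attacks):
    # Grounded extension as the least fixed point of the characteristic
    # function: repeatedly collect the arguments defended by the current
    # set, starting from the empty set.  `attacks` is a set of
    # (attacker, attacked) pairs.
    def defended(s):
        out = set()
        for a in arguments:
            attackers = {x for (x, y) in attacks if y == a}
            if all(any((z, x) in attacks for z in s) for x in attackers):
                out.add(a)
        return out

    extension = set()
    while True:
        nxt = defended(extension)
        if nxt == extension:
            return extension
        extension = nxt
```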
Since the main aim was to show that FSMs can be generated and analysed, we only used a simple kind of argumentation dialogue. It is straightforward to develop alternative and more complex scenarios using the language of propositional executable logic (e.g. for capturing beliefs, goals, uncertainty, etc.) for specifying richer behaviour.
In this paper, we have investigated a uniform way of presenting and executing dialogical argumentation systems based on a propositional executable logic. As a result, different dialogical argumentation systems can be compared and implemented more easily than before. The implementation is generic in that any action rules and initial states can be used to generate the FSM, and properties of the resulting FSMs can be identified empirically.
In the examples in this paper, we have assumed that when an agent presents an argument, the only reaction the other agent can have is to present a counterargument (if it has one) from a set that is fixed in advance of the dialogue. Yet when agents argue, one agent can reveal information that can be used by the other agent to create new arguments. We illustrate this in the context of logical arguments. Here, we assume that each argument is a tuple comprising a set of formulae (the premises) that entails a formula (the claim). In Figure (a), we see an argument graph instantiated with logical arguments, with some of the arguments presented by agent 1 and the rest presented by agent 2. Since agent 1 is being exhaustive in the arguments it presents, agent 2 can get a formula that it can use to create a counterargument. In Figure (b), agent 1 is selective in the arguments it presents, and as a result, agent 2 lacks a formula it needs in order to construct its counterarguments. We can model this argumentation in propositional executable logic, generate the corresponding FSM, and provide an analysis in terms of the minimax strategy that would ensure that agent 1 is selective in this way, thereby ensuring that it behaves more intelligently. We can capture each of these arguments as a proposition and use the minimax strategy in our implementation to obtain the tree in Figure (b).