Dynamic Reasoning Systems

DANIEL G. SCHWARTZ Florida State University
July 2013
Abstract

A dynamic reasoning system (DRS) is an adaptation of a conventional formal logical system that explicitly portrays reasoning as a temporal activity, with each extralogical input to the system and each inference rule application being viewed as occurring at a distinct time step. Every DRS incorporates some well-defined logic together with a controller that serves to guide the reasoning process in response to user inputs. Logics are generic, whereas controllers are application-specific. Every controller does, nonetheless, provide an algorithm for nonmonotonic belief revision. The general notion of a DRS comprises a framework within which one can formulate the logic and algorithms for a given application and prove that the algorithms are correct, i.e., that they serve to (i) derive all salient information and (ii) preserve the consistency of the belief set. This paper illustrates the idea with ordinary first-order predicate calculus, suitably modified for the present purpose, and two examples. The latter example revisits some classic nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how these can be resolved in the context of a DRS, using an expanded version of first-order logic that incorporates typed predicate symbols. All concepts are rigorously defined and effectively computable, thereby providing the foundation for a future software implementation.

Keywords: Nonmonotonic reasoning, belief revision, dynamic reasoning

Categories and Subject Descriptors: I.2.3 [Artificial Intelligence]: Deduction and Theorem Proving

General Terms: Algorithms, Theory

ACM Reference Format: Daniel G. Schwartz. 2014. Dynamic reasoning systems.

Author’s address: Daniel G. Schwartz, Department of Computer Science, Mail Code 4530, Florida State University, Tallahassee, FL 32306.

1 Introduction

The notion of a dynamic reasoning system (DRS) was introduced in [27] for purposes of formulating reasoning involving a logic of ‘qualified syllogisms’. The idea arose in an effort to devise some rules for evidence combination. The logic under study included a multivalent semantics where propositions were assigned a probabilistic ‘likelihood value’ in the interval $[0,1]$, so that the likelihood value plays the role of a surrogate truth value. The situation being modeled is where, based on some evidence, a proposition $P$ is assigned a likelihood value $l_1$, and then later, based on other evidence, $P$ is assigned a value $l_2$, and it subsequently is desired to combine these values based on some rule into a resulting value $l_3$. This type of reasoning cannot be represented in a conventional formal logical system with the usual Tarski semantics, since such systems do not allow that a proposition may have more than one truth value; otherwise the semantics would not be mathematically well-defined. Thus the idea arose to speak more explicitly about different occurrences of the proposition $P$, where the occurrences are separated in time. In this manner one can construct a well-defined semantics by mapping the different time-stamped occurrences of $P$ to different likelihood/truth values.

In turn, this led to viewing a ‘derivation path’ as it evolves over time as representing the knowledge base, or belief set, of a reasoning agent that is progressively building and modifying its knowledge/beliefs through ongoing interaction with its environment (including inputs from human users or other agents). It also presented a framework within which one can formulate a Doyle-like procedure for nonmonotonic ‘reason maintenance’ [5, 33]. Briefly, if the knowledge base harbors inconsistencies due to contradictory inputs from the environment, then in time a contradiction may appear in the reasoning path (knowledge base, belief set), triggering a back-tracking procedure aimed at uncovering the ‘culprit’ propositions that gave rise to the contradiction and disabling (disbelieving) one or more of them so as to remove the inconsistency.

Reasoning is nonmonotonic when the discovery and introduction of new information causes one to retract previously held assumptions or conclusions. This is to be contrasted with classical formal logical systems, which are monotonic in that the introduction of new information (nonlogical axioms) always increases the collection of conclusions (theorems). [27] contains an extensive bibliography and survey of the works related to nonmonotonic reasoning as of 1997. In particular, this includes a discussion of (i) the classic paper by McCarthy and Hayes [20] defining the ‘frame problem’ and describing the ‘situation calculus’, (ii) Doyle’s ‘truth maintenance system’ [5] and subsequent ‘reason maintenance system’ [33], (iii) McCarthy’s ‘circumscription’ [19], (iv) Reiter’s ‘default logic’ [26], and (v) McDermott and Doyle’s ‘nonmonotonic logic’ [21]. With regard to temporal aspects, works by Shoham and by Perlis are also discussed. [30, 31] explores the idea of making time an explicit feature of the logical formalism for reasoning ‘about’ change, and [32] describes a vision of ‘agent-oriented programming’ that is along the same lines as the present DRS, portraying reasoning itself as a temporal activity. In [6, 7, 8, 9, 23, 25] Perlis and his students introduce and study the notion of ‘step logic’, which studies reasoning as ‘situated’ in time, and in this respect also has elements in common with the notion of a DRS. Additionally mentioned but not elaborated upon in [27] is the so-called AGM framework [1, 11, 12], named after its originators. Nonmonotonic reasoning and belief revision are related in that the former may be viewed as a variety of the latter.

These cited works are nowadays regarded as the classic approaches to nonmonotonic reasoning and belief revision. Since 1997 the AGM approach has risen in prominence, due in large part to the publication [15], which builds upon and substantially advances the AGM framework. AGM defines a belief set as a collection of propositions that is closed with respect to the classical consequence operator, and operations of ‘contraction’, ‘expansion’ and ‘revision’ are defined on belief sets. [15] made the important observation that a belief set can conveniently be represented as the consequential closure of a finite ‘belief base’, and these same AGM operations can be defined in terms of operations performed on belief bases. Since that publication, AGM has enjoyed a steadily growing population of adherents. A recent publication [10] overviews the first 25 years of research in this area.

Another research thread that has risen to prominence is the logic-programming approach to nonmonotonic reasoning known as Answer Set Prolog (AnsProlog). A major work on AnsProlog is the treatise [3]. This line of research suggests that an effective approach to nonmonotonic reasoning can be formulated in an extension of the well-known Prolog programming language. Interest in this topic has spawned a series of eleven conferences on Logic Programming and Nonmonotonic Reasoning, the most recent of which is [4].

The DRS framework discussed in the present work has elements in common with both AGM and AnsProlog, but also differs from these in several respects. Most importantly, the present focus is on the creation of computational algorithms that are sufficiently articulated that they can effectively be implemented in software and thereby lead to concrete applications. This element is still lacking in AGM, despite Hansson’s contribution regarding finite belief bases. The AGM operations continue to be given only as set-theoretic abstractions and have not yet been translated into computable algorithms. Regarding AnsProlog, this research thread holds promise of a new extension of Prolog, but, similarly to AGM, the necessary algorithms have yet to be formulated.

A way in which the present approach varies from the original AGM approach, but happens to agree with the views expressed by [15] (cf. pp. 15-16), is that it dispenses with two of the original ‘rationality postulates’, namely, the requirements that the underlying belief set be at all times (i) consistent, and (ii) closed with respect to logical entailment. The latter is sometimes called the ‘omniscience’ postulate, inasmuch as the modeled agent is thus characterized as knowing all possible logical consequences of its beliefs.

These postulates are intuitively appealing, but they have the drawback that they lead to infinitary systems and thus cannot be directly implemented on a finite computer. To wit, the logical consequences of even a fairly simple set of beliefs will be infinite in number; and assurance of consistency effectively requires omniscience since one must know whether the logical consequences of the given beliefs include any contradictions. Dropping these postulates does have anthropomorphic rationale, however, since humans themselves cannot be omniscient in the sense described, and, because of this, often harbor inconsistent beliefs without being aware of it. Thus it is not unreasonable that our agent-oriented reasoning models should have these same characteristics. Similar remarks may be found in the cited pages of [15].

The present work differs from the AGM approach in several other respects. First, what is here taken as a ‘belief set’ is neither a belief set in the sense of AGM and Hansson nor a Hansson-style belief base. Rather it consists of the set of statements that have been input by an external agent as of some time $t$, together with the consequences of those statements that have been derived in accordance with the algorithms provided in a given ‘controller’. Second, by labeling the statements with the time step when they are entered into the belief set (either by an external agent or derived by means of an inference rule), one can use the labels as a basis for defining the associated algorithms. Third, whereas Gärdenfors, Hansson, and virtually all others who have worked with the AGM framework have confined their language to be only propositional, the present work takes the next step to full first-order predicate logic. This is significant inasmuch as the consistency of a finite set of propositions with respect to the classical consequence operation can be determined by truth-table methods, whereas the consistency of a finite set of statements in first-order predicate logic is undecidable (a consequence of the famous undecidability results of Church and Turing). For this reason the present work develops a well-defined semantics for the chosen logic and establishes a soundness theorem, which in turn can be used to establish consistency. Last, the present use of a controller is itself new, and leads to a new efficacy for applications.

The notion of a controller was not present in the previous work [27]. Its introduction here thus fills an important gap in that treatment. The original conception of a DRS provided a framework for modeling the reasoning processes of an artificial agent to the extent that those processes follow a well-defined logic, but it offered no mechanism for deciding what inference rules to apply at any given time. What was missing was a means to provide the agent with a sense of purpose, i.e., mechanisms for pursuing goals. This deficiency is remedied in the present treatment. The controller responds to inputs from the agent’s environment, expressed as propositions in the agent’s language. Inputs are classified as being of various ‘types’, and, depending on the input type, a reasoning algorithm is applied. Some of these algorithms may cause new propositions to be entered into the belief set, which in turn may invoke other algorithms. These algorithms thus embody the agent’s purpose and are domain-specific, tailored to a particular application. But in general their role is to ensure that (i) all salient propositions are derived and entered into the belief set, and (ii) the belief set remains consistent. The latter is achieved by invoking a Doyle-like reason maintenance algorithm whenever a contradiction, i.e., a proposition of the form $P \wedge \neg P$, is entered into the belief set.

This work accordingly represents a rethinking, refinement, and extension of the earlier work, aimed at (i) providing mathematical clarity to some relevant concepts that previously were not explicitly defined, (ii) introducing the notion of a controller and spelling out its properties, and (iii) illustrating these ideas with a small collection of example applications. The present effort may be viewed as laying the groundwork for a future project to produce a software implementation of the DRS framework, this being a domain-independent software framework into which can be plugged domain-specific modules as required for any given application. Note that the present mathematical work is a necessary prerequisite for the software implementation inasmuch as this provides the needed formal basis for an unambiguous set of requirements specifications.

The following Section 2 provides a fully detailed definition of the notion of a DRS. Section 3 presents the syntax and semantics for first-order predicate logic, suitably adapted for the present purpose, and proves a series of needed results including a Soundness Theorem. This section also introduces some derived inference rules for use in the ensuing example applications. Section 4 illustrates the core ideas in an application to a simple document classification system. Section 5 extends this to an application for multiple-inheritance reasoning, a form of default reasoning underlying frame-based expert systems. This provides new resolutions for some well-known puzzles from the literature on nonmonotonic reasoning. Section 6 consists of concluding remarks.

Regarding the examples, it may be noted that a Subsumption inference rule plays a central role, giving the present work elements in common also with the work on Description Logic (DL), cf. [2]. The DL notion of a ‘role’ is not employed here, however, inasmuch as the concept of an object having certain properties is modeled in Section 5 through the use of typed predicate symbols.

An earlier, condensed, version of Sections 2, 3 and 4 has been published as [28]. The works [38, 40] contain precursors to the present notion of a DRS controller, and a DRS application using a ‘logic of belief and trust’ has been described in [37, 39, 40, 41]. While the present work employs classical first-order predicate calculus, the DRS framework can accommodate any logic for which there exists a well-defined syntax and semantics.

All proofs of propositions and theorems have been placed in the electronic appendix.

2 Dynamic Reasoning Systems

A dynamic reasoning system (DRS) comprises a model of an artificial agent’s reasoning processes to the extent that those processes adhere to the principles of some well-defined logic. Formally it is comprised of a ‘path logic’, which provides all the elements necessary for reasoning, and a ‘controller’, which guides the reasoning process.

Figure 2.1. Classical formal logical system.

Figure 2.2. Dynamic reasoning system.

For contrast, and by way of introductory overview, the basic structure of a classical formal logic system is portrayed in Figure 2.1 and that of a DRS in Figure 2.2. A classical system is defined by providing a language consisting of a set of propositions, selecting certain propositions to serve as axioms, and specifying a set of inference rules saying how, from certain premises, one can derive certain conclusions. The theorems then amount to all the propositions that can be derived from the axioms by means of the rules. Such systems are monotonic in that adding new axioms always serves to increase the set of theorems. Axioms are of two kinds: logical and extralogical (or proper, or nonlogical). The logical axioms together with the inference rules comprise the ‘logic’. The extralogical axioms comprise information about the application domain. A DRS begins similarly with specifying a language consisting of a set of propositions. But here the ‘logic’ is given in terms of a set of axiom schemas, some inference rules as above, and some rules for instantiating the schemas. The indicated derivation path serves as the belief set. Logical axioms may be entered into the derivation path by applying instantiation rules. Extralogical axioms are entered from an external source (human user, another agent, a mechanical sensor, etc.). Thus the derivation path evolves over time, with propositions being entered into the path either as extralogical axioms or derived by means of inference rules in accordance with the algorithms provided in the controller. Whenever a new proposition is entered into the path it is marked as ‘believed’. In the event that a contradiction arises in the derivation path, a nonmonotonic belief revision process is invoked which leads to certain previously believed propositions becoming disbelieved, thereby removing the contradiction. The full details for the two components of a DRS are given in Sections 2.1 and 2.2.

2.1 Path Logic

A path logic consists of a language, axiom schemas, inference rules, and a derivation path, as follows.

Language: Here denoted $L$, this consists of all expressions (or formulas) that can be generated from a given set of symbols in accordance with a collection of production rules (or an inductive definition, or some similar manner of definition). As symbols typically are of different types (e.g., individual variables, constants, predicate symbols, etc.) it is assumed that there is an unlimited supply (uncountably many if necessary) of each type. Moreover, as is customary, some symbols will be logical symbols (e.g., logical connectives, quantifiers, and individual variables), and some will be extralogical symbols (e.g., individual constants and predicate symbols). It is assumed that $L$ contains at least the logical connectives for expressing negation and conjunction, herein denoted $\neg$ and $\wedge$, or a means for defining these connectives in terms of the given connectives. For example, in the following we take $\neg$ and $\to$ as given and use the standard definition of $\wedge$ in terms of these.

Axiom Schemas: Expressed in some meta notation, these describe the expressions of $L$ that are to serve as logical axioms.

Inference Rules: These must include one or more rules that enable instantiation of the axiom schemas. All other inference rules will be of the usual kind, i.e., stating that, from expressions having certain forms (premise expressions), one may infer an expression of some other form (a conclusion expression). Of the latter, two kinds are allowed: logical rules, which are considered to be part of the underlying logic, and extralogical rules, which are associated with the intended application. Note that logical axioms are expressions that are derived by applying the axiom schema instantiation rules. Inference rules may be viewed formally as mappings from $L$ into itself.

The rule set may include derived rules that simplify deductions by encapsulating frequently used argument patterns. Rules derived using only logical axioms and logical rules will also be logical rules, and derived rules whose derivations employ extralogical rules will be additional extralogical rules.

Derivation Paths: These consist of a sequence of pairs $(L_t, B_t)$, $t = 0, 1, 2, \ldots$, where $L_t$ is the sublanguage of $L$ that is in use at time $t$, and $B_t$ is the belief set in effect at time $t$. Such a sequence is generated as follows. Since languages are determined by the symbols they employ, it is useful to speak more directly in terms of the set $S_t$ comprising the symbols that are in use at time $t$ and then let $L_t$ be the sublanguage of $L$ that is based on the symbols in $S_t$. With this in mind, let $S_0$ be the logical symbols of $L$, so that $L_0$ is the minimal language employing only logical symbols, and let $B_0 = \emptyset$. Then, given $(L_t, B_t)$, the pair $(L_{t+1}, B_{t+1})$ is formed in one of the following ways:

  1. $S_{t+1} = S_t$ (so that $L_{t+1} = L_t$) and $B_{t+1}$ is obtained from $B_t$ by adding an expression that is derived by application of an inference rule that instantiates an axiom schema,

  2. $S_{t+1} = S_t$ and $B_{t+1}$ is obtained from $B_t$ by adding an expression that is derived from expressions appearing earlier in the path by application of an inference rule of the kind that infers a conclusion from some premises,

  3. $S_{t+1} = S_t$ and an expression employing these symbols is added to $B_t$ to form $B_{t+1}$,

  4. some new extralogical symbols are added to $S_t$ to form $S_{t+1}$, and an expression employing the new symbols is added to $B_t$ to form $B_{t+1}$,

  5. $S_{t+1} = S_t$ and $B_{t+1}$ is obtained from $B_t$ by applying a belief revision algorithm as described in the following.

Note that the use of axiom schemas together with schema instantiation rules here replaces the customary approach of defining logical axioms as all formulas having the ‘forms’ described by the schemas and then including these axioms among the set of ‘theorems’. The reason for adopting this alternative approach is to ensure that the DRS formalism is finitary, and hence, machine implementable—it is not possible to represent an infinite set of axioms (or theorems) on a computer. That the two approaches are equivalent should be obvious. Expressions entered into the belief set in accordance with either (3) or (4) will be extralogical axioms. A DRS can generate any number of different derivation paths, depending on the extralogical axioms that are input and the inference rules that are applied.
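As an illustration of how a derivation path might be realized in software, the following minimal sketch treats formulas as opaque strings and records one (symbol set, belief set) snapshot per time step; the class and method names are assumptions of this sketch, not part of the formal definition.

\begin{verbatim}
# A minimal sketch of a derivation path as a sequence of (symbol set,
# belief set) snapshots.  Formulas are opaque strings; LOGICAL_SYMBOLS
# stands in for the fixed logical vocabulary.
from typing import List, Set

LOGICAL_SYMBOLS: Set[str] = {"~", "->", "forall", "falsum", "(", ")", ","}


class DerivationPath:
    def __init__(self) -> None:
        # time 0: minimal language (logical symbols only), empty belief set
        self.symbols: List[Set[str]] = [set(LOGICAL_SYMBOLS)]
        self.beliefs: List[List[str]] = [[]]

    def _step(self, symbols: Set[str], beliefs: List[str]) -> None:
        self.symbols.append(symbols)
        self.beliefs.append(beliefs)

    def add_schema_instance(self, formula: str) -> None:
        # option (1): logical axiom obtained by instantiating an axiom schema
        self._step(set(self.symbols[-1]), self.beliefs[-1] + [formula])

    def add_inference(self, formula: str) -> None:
        # option (2): conclusion of a rule applied to earlier entries
        self._step(set(self.symbols[-1]), self.beliefs[-1] + [formula])

    def add_extralogical(self, formula: str, new_symbols=frozenset()) -> None:
        # options (3) and (4): extralogical axiom, possibly with new symbols
        self._step(set(self.symbols[-1]) | set(new_symbols),
                   self.beliefs[-1] + [formula])

    def revise(self, revised_beliefs: List[str]) -> None:
        # option (5): the result of applying a belief revision algorithm
        self._step(set(self.symbols[-1]), revised_beliefs)
\end{verbatim}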

Whenever an expression is entered into the belief set it is assigned a label comprised of:

  1. A time stamp, this being the value $t+1$ of the subscript on the belief set $B_{t+1}$ formed by entering the expression into the belief set in accordance with any of the above items (1) through (4). The time stamp effectively serves as an index indicating the expression’s position in the belief set.

  2. A from-list, indicating how the expression came to be entered into the belief set. In case the expression is entered in accordance with the above item (1), i.e., using a schema instantiation rule, this list consists of the name (or other identifier) of the schema and the name (or other identifier) of the inference rule if the system has more than one such rule. In case the expression is entered in accordance with above item (2), the list consists of the indexes (time stamps) of the premise expressions and the name (or other identifier) of the inference rule. In case the expression is entered in accordance with either of items (3) or (4), i.e., is an extralogical axiom, the list will consist of some code indicating this (e.g., es standing for ‘external source’) possibly together with some identifier or other information regarding the source.

  3. A to-list, being a list of indexes of all expressions that have been entered into the belief set as a result of rule applications involving the given expression as a premise. Thus to-lists may be updated at any future time.

  4. A status indicator having the value bel or disbel according as the proposition asserted by the expression currently is believed or disbelieved. The primary significance of this status is that only expressions that are believed can serve as premises in inference rule applications. Whenever an expression is first entered into the belief set, it is assigned status bel. This value may then be changed during belief revision at a later time. When an expression’s status is changed from bel to disbel it is said to have been retracted.

  5. An epistemic entrenchment factor, this being a numerical value indicating the strength with which the proposition asserted by the expression is held. This terminology is adopted in recognition of the work by Gärdenfors, who initiated this concept [11, 12], and is used here for essentially the same purpose, namely, to assist when making decisions regarding belief retractions. Depending on the application, however, this value might alternatively be interpreted as a degree of belief, as a certainty factor, as a degree of importance, or some other type of value to be used for this purpose. (Gärdenfors asserts that the notion of a degree of epistemic entrenchment is distinct from that of a degree (or probability) of belief. ‘This degree [of entrenchment] is not determined by how probable a belief is judged to be but rather by how important the belief is to inquiry and deliberation.’ [11], p. 17. Nonetheless, degrees of belief could be used as a basis for managing belief retraction if this were deemed appropriate for a given application.) In the present treatment, epistemic entrenchment values are assigned only to axioms. No provision for propagating these factors to expressions derived via rule inferences is provided, although this would be a natural extension of the present treatment. It is agreed that logical axioms always receive the highest possible epistemic entrenchment value, whatever scale or range may be employed.

  6. A knowledge category specification, having one of the values a priori, a posteriori, analytic, and synthetic. These terms are employed in recognition of the philosophical tradition initiated by Kant [18]. Logical axioms are designated as a priori; extralogical axioms are designated as a posteriori; expressions whose derivations employ only logical axioms and logical inference rules are designated as analytic; and expressions whose derivations employ any extralogical axioms or extralogical rules are designated as synthetic. The latter is motivated by the intuition that an ability to apply inference rules and thereby carry out logical derivations is itself a priori knowledge, so that, even if the premises in a rule application are all a posteriori and/or the rule itself is extralogical, the rule application entails a combination of a priori and a posteriori knowledge, and the conclusion of the application thus qualifies as synthetic (rather than a posteriori) under most philosophical interpretations of this term.
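To make this bookkeeping concrete, the following is a minimal sketch of one way such labels might be represented in a software implementation; the class and field names are illustrative assumptions only, not part of the formal definition.

\begin{verbatim}
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Union


class Status(Enum):
    BEL = "bel"            # currently believed
    DISBEL = "disbel"      # retracted (disbelieved)


class Category(Enum):
    A_PRIORI = "a priori"
    A_POSTERIORI = "a posteriori"
    ANALYTIC = "analytic"
    SYNTHETIC = "synthetic"


@dataclass
class Label:
    time_stamp: int                   # index of the entry in the belief set
    from_list: List[Union[int, str]]  # premise indexes, rule/schema names, or "es"
    to_list: List[int] = field(default_factory=list)  # later conclusions using this entry
    status: Status = Status.BEL
    entrenchment: float = 0.5         # epistemic entrenchment factor
    category: Category = Category.A_POSTERIORI


# Example: an extralogical axiom received from an external source at time 7.
axiom_label = Label(time_stamp=7, from_list=["es"])
\end{verbatim}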

Thus when an expression $P$ is entered into the belief set, it is more exactly entered as an expression-label pair $(P, \lambda)$, where $\lambda$ is the label. A DRS’s language, axiom schemas, and inference rules comprise a logic in the usual sense. It is required that this logic be consistent, i.e., for no expression $P$ is it possible to derive both $P$ and $\neg P$. The belief set may become inconsistent, nonetheless, through the introduction of contradictory extralogical axioms.

In what follows, only expressions representing a posteriori and synthetic knowledge may be retracted; expressions of a priori knowledge are taken as being held unequivocally. Thus the term ‘a priori knowledge’ is taken as synonymous with ‘belief held unequivocally’, and ‘a posteriori knowledge’ is interpreted as ‘belief possibly held only tentatively’ (some a posteriori beliefs may be held unequivocally). Thus the distinction between knowledge and belief is somewhat blurred, and what is referred to as a ‘belief set’ might alternatively be called a ‘knowledge base’, as is often the practice in AI systems.

2.2 Controller

A controller effectively determines the modeled agent’s purpose or goals by managing the DRS’s interaction with its environment and guiding the reasoning process. With regard to the latter, the objectives typically include (i) deriving all expressions salient to the given application and entering these into the belief set, and (ii) ensuring that the belief set remains consistent. To these ends, the business of the controller amounts to performing the following operations.

  1. Receiving input from its environment, e.g., human users, sensors, or other artificial agents, expressing this input as expressions in the given language $L$, and entering these expressions into the belief set in the manner described above (derivation path items (3) and (4)). During this operation, new symbols are appropriated as needed to express concepts not already represented in the current $L_t$.

  2. Applying inference rules in accordance with some extralogical objective (some plan, purpose, or goal) and entering the derived conclusions into the belief set in the manner described above (derivation path items (1) and (2)).

  3. Performing any actions that may be prescribed as a result of the above reasoning process, e.g., moving a robotic arm, returning a response to a human user, or sending a message to another artificial agent.

  4. Whenever necessary, applying a ‘dialectical belief revision’ algorithm for contradiction resolution in the manner described below.

  5. Applying any other belief revision algorithm as may be prescribed by the context of a particular application.

In some systems, the above may include other types of belief revision operations, but such will not be considered in the present work.

A contradiction is an expression of the form $P \wedge \neg P$. Sometimes it is convenient to represent the general notion of contradiction by the falsum symbol, $\bot$. Contradiction resolution is triggered whenever a contradiction or a designated equivalent expression is entered into the belief set. We may assume that this only occurs as the result of an inference rule application, since it obviously would make no sense to enter a contradiction directly as an extralogical axiom. The contradiction resolution algorithm entails three steps, sketched schematically after the list:

  1. Starting with the from-list in the label on the contradictory expression, backtrack through the belief set following from-lists until one identifies all extralogical axioms that were involved in the contradiction’s derivation. Note that such extralogical axioms must exist, since, by the consistency of the logic, the contradiction cannot constitute analytical knowledge, and hence must be synthetic.

  2. Change the belief status of one or more of these extralogical axioms, as many as necessary to invalidate the derivation of the given contradiction. The decision as to which axioms to retract may be dictated, or at least guided by, the epistemic entrenchment values. In effect, those expressions with the lower values would be preferred for retraction. In some systems, this retraction process may be automated, and in others it may be human assisted.

  3. Forward chain through to-lists starting with the extralogical axiom(s) just retracted, and retract all expressions whose derivations were dependent on those axioms. These retracted expressions should include the contradiction that triggered this round of belief revision (otherwise the correct extralogical axioms were not retracted).
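The following sketch renders these three steps in code, reusing the illustrative Label, Status, and Category classes sketched in Section 2.1 and assuming the belief set is a list of (formula, label) pairs indexed by time stamp; the single-culprit selection policy shown is merely one possible automation of step 2.

\begin{verbatim}
def dialectical_belief_revision(belief_set, contradiction_index):
    # Step 1: backtrack through from-lists to find the extralogical axioms
    # involved in the derivation of the contradiction.
    culprits, frontier = set(), [contradiction_index]
    while frontier:
        idx = frontier.pop()
        _, label = belief_set[idx]
        premises = [p for p in label.from_list if isinstance(p, int)]
        if not premises and label.category is Category.A_POSTERIORI:
            culprits.add(idx)              # an extralogical axiom
        else:
            frontier.extend(premises)      # keep backtracking

    # Step 2: retract (here) the least entrenched culprit; in other systems
    # this choice may involve several axioms or be human assisted.
    target = min(culprits, key=lambda i: belief_set[i][1].entrenchment)
    belief_set[target][1].status = Status.DISBEL

    # Step 3: forward chain through to-lists, retracting every expression
    # whose derivation depended on the retracted axiom (this should include
    # the contradiction itself).
    frontier = list(belief_set[target][1].to_list)
    while frontier:
        idx = frontier.pop()
        _, label = belief_set[idx]
        if label.status is Status.BEL:
            label.status = Status.DISBEL
            frontier.extend(label.to_list)
\end{verbatim}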

This belief revision algorithm is reminiscent of Hegel’s ‘dialectic’, described as a process of ‘negation of the negation’ [17]. In that treatment, the latter (first occurring) negation is a perceived internal conflict (here a contradiction), and the former (second occurring) one is an act of transcendence aimed at resolving the conflict (here removing the contradiction). In recognition of Hegel, the belief revision/retraction process formalized in the above algorithm will be called Dialectical Belief Revision.

2.3 General Remarks

Specifying a DRS requires specifying a path logic and a controller. A path logic is specified by providing (i) a language $L$, (ii) a set of axiom schemas, and (iii) a set of inference rules. A controller is specified by providing (i) the types of expressions that the DRS can receive as inputs from external sources, with each such type typically being described as expressions having a certain form, and (ii) for each such input type, an algorithm that is to be executed when the DRS receives an input of that type. Such an algorithm typically involves applying inference rules, thereby deriving new formulas to be entered into the belief set, and it might specify other actions as well, such as moving a robotic arm, writing some information to a file, or returning a response to the user. All controllers are assumed to include a mechanism for dialectical belief revision as described above.
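Schematically, a controller can be organized as a dispatcher from input types to reasoning algorithms, as in the following self-contained sketch; the type names, the classification heuristic, and the handler shown are assumptions made for illustration, not part of the formal specification.

\begin{verbatim}
from typing import Callable, Dict, List

Formula = str
Handler = Callable[["Controller", Formula], None]


class Controller:
    def __init__(self) -> None:
        self.belief_set: List[Formula] = []
        self.handlers: Dict[str, Handler] = {}

    def register(self, input_type: str, handler: Handler) -> None:
        self.handlers[input_type] = handler

    def classify(self, formula: Formula) -> str:
        # Placeholder syntactic classification of inputs into types.
        return "membership" if "->" not in formula else "rule"

    def receive(self, formula: Formula) -> None:
        # Operation 1: accept an input and run the algorithm for its type.
        self.handlers[self.classify(formula)](self, formula)


def handle_membership(ctrl: Controller, formula: Formula) -> None:
    # Operation 2, schematically: enter the input and (in a fuller version)
    # any salient conclusions derived from it.
    ctrl.belief_set.append(formula)


ctrl = Controller()
ctrl.register("membership", handle_membership)
ctrl.receive("Science(Doc1)")
\end{verbatim}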

Thus defined a DRS may be viewed as representing the ‘mind’ of an intelligent agent, where this includes both the agent’s reasoning processes and its memory. At any time $t$, the belief set $B_t$ represents the agent’s conscious awareness as of that time. Since the extralogical axioms can entail inconsistencies, this captures the fact that an agent can ‘harbor’ inconsistencies without being aware of this. The presence of inconsistencies only becomes evident to the agent when they lead to a contradictory expression being explicitly entered into the belief set, in effect, making the agent consciously aware of a contradiction that was implicit in its beliefs. This then triggers a belief revision process aimed at removing the inconsistency that gave rise to the contradiction.

Depending on the application, the controller may be programmed to carry out axiom schema instantiations and perform derivations based on logical axioms. Such might be the case, for example, if the logical rules were to include a ‘resolution rule’ and the controller incorporated a Prolog-like theorem prover. In many applications, however, it may be more appropriate to base the controller on a few suitably chosen derived rules. The objective in this would be to simplify the controller’s design by encapsulating frequently used argument patterns. In such cases, the use of axiom schemas and logical inference rules is implicit, but no logical axioms per se need be entered into the derivation path. Accordingly, all members of the belief set will be either a posteriori or synthetic and thus subject to belief revision. This is illustrated in the examples that follow.

3 First-Order Logic

3.1 Formalism

This section presents classical first-order logic (FOL) in a form suitable for incorporation into a DRS. The treatment follows [14]. As symbols for the language $L$ we shall have: individual variables, $x_1, x_2, \ldots$, denoted generically by $x, y$, etc.; individual constants, $a_1, a_2, \ldots$, denoted generically by $a, b$, etc.; predicate symbols, infinitely many for each arity (where arity is indicated by superscripts), $\alpha^n_1, \alpha^n_2, \ldots$, denoted generically by $\alpha, \beta, \gamma$, etc.; punctuation marks, namely, the comma and left and right parentheses; the logical connectives $\neg$ and $\to$; the (universal) quantifier symbol, $\forall$; and the falsum symbol, $\bot$. (This omits the customary function symbols, as they are not needed for the examples discussed here.)

Here the logical symbols will be the individual variables, punctuation marks, logical connectives, quantifier symbol, and falsum symbol. The extralogical symbols will be the individual constants and the predicate symbols. Thus, as discussed in Section 2.1, the sublanguages of $L$ will differ only in their choice of individual constants and predicate symbols.

Given a sublanguage $L'$ of $L$, the terms of $L'$ will be the individual variables and the individual constants of $L'$. The atomic formulas of $L'$ will be the falsum symbol $\bot$ and all expressions of the form $\alpha^n(t_1, \ldots, t_n)$, where $\alpha^n$ is an $n$-ary predicate symbol of $L'$ and $t_1, \ldots, t_n$ are terms of $L'$. The formulas of $L'$ will be the atomic formulas of $L'$ together with all expressions having the forms $\neg P$, $(P \to Q)$, and $(\forall x)P$, where $P$ and $Q$ are formulas of $L'$ and $x$ is an individual variable.

Further logical connectives and the existential quantifier symbol can be introduced as means for abbreviating other formulas:

$(P \wedge Q)$ for $\neg(P \to \neg Q)$

$(P \vee Q)$ for $(\neg P \to Q)$

$(P \leftrightarrow Q)$ for $((P \to Q) \wedge (Q \to P))$

$(\exists x)P$ for $\neg(\forall x)\neg P$

For readability, parentheses may be dropped according to (i) $\neg$ takes priority over $\wedge$ and $\vee$, (ii) $\wedge$ and $\vee$ take priority over $\to$ and $\leftrightarrow$, and (iii) outermost surrounding parentheses are unneeded.

In a formula of the form $(\forall x)P$, the expression $(\forall x)$ is a (universal) quantifier and $P$ is the scope of the quantifier. If $x$ occurs in a formula $P$ within the scope of an occurrence of $(\forall x)$ in $P$, then that occurrence of $x$ is bound in $P$ by that occurrence of $(\forall x)$. If $x$ occurs in $P$ and is not bound by any quantifier, then that occurrence of $x$ is free in $P$. Note that the same variable can have both free and bound occurrences in the same formula $P$. A formula that does not contain any free variable occurrences is closed.

If an occurrence of $x$ is free in $P$, then a different variable $y$ is substitutable for that occurrence of $x$ if the occurrence is not within the scope of the quantifier $(\forall y)$ (i.e., putting $y$ in place of $x$ does not create a binding of $y$). Note that this implies that $x$ is always substitutable for any of its own free occurrences in any $P$. An individual constant is substitutable for any free occurrence of any variable in any $P$.

Where $P$ is a formula, $x$ is an individual variable, and $t$ is an individual term that is substitutable for $x$ in $P$, $P(x/t)$ denotes the formula obtained from $P$ by replacing all free occurrences of $x$ in $P$ with occurrences of $t$. Note that the above implies that, if $x$ does not occur free in $P$, or $x$ does not appear at all in $P$, then $P(x/t)$ is just $P$. Note also that, if $t$ is not substitutable for $x$ in $P$, then the notation $P(x/t)$ is undefined.

This notation can be extended to arbitrarily many simultaneous replacements as follows. Where $P$ is a formula, the variables $x_1, \ldots, x_n$ are distinct, and the terms $t_1, \ldots, t_n$ are substitutable for all the free occurrences of the respective variables in $P$, $P(x_1, \ldots, x_n / t_1, \ldots, t_n)$ denotes the formula obtained from $P$ by replacing all free occurrences of $x_1, \ldots, x_n$, respectively, with occurrences of $t_1, \ldots, t_n$.
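As a small illustration, using a hypothetical binary predicate symbol $\alpha$ and unary predicate symbol $\beta$: if
\[
P \;=\; ((\forall y)\alpha(x,y) \to \beta(x)), \qquad \text{then} \qquad P(x/a) \;=\; ((\forall y)\alpha(a,y) \to \beta(a)).
\]
On the other hand, $P(x/y)$ is undefined, since the occurrence of $x$ in $\alpha(x,y)$ lies within the scope of the quantifier $(\forall y)$, so that $y$ is not substitutable for $x$ in $P$.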

The axiom schemas will be the meta-level expressions (observing the same rules as for formulas for dropping parentheses):
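Stated with the formula meta symbols $P$, $Q$, $R$ and the individual variable meta symbol $\mathbf{x}$ introduced below, and taking schema (4) to be a defining axiom for $\bot$ (the formulation shown for (4) is one natural choice, consistent with its role in Proposition 3.4), the schemas may be rendered as:
\begin{align*}
&(1)\quad P \to (Q \to P)\\
&(2)\quad (P \to (Q \to R)) \to ((P \to Q) \to (P \to R))\\
&(3)\quad (\neg P \to \neg Q) \to (Q \to P)\\
&(4)\quad \bot \leftrightarrow (P \wedge \neg P)\\
&(5)\quad (\forall \mathbf{x})P \to P(\mathbf{x}/t)\\
&(6)\quad (\forall \mathbf{x})(P \to Q) \to (P \to (\forall \mathbf{x})Q)
\end{align*}
Here $t$ in (5) stands for an arbitrary individual term, and the side conditions on (5) and (6) (substitutability of $t$, and $\mathbf{x}$ not occurring free in $P$) are enforced by the schema instantiation rules given below.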

Let the formula meta symbols be denoted generically by $P, Q, R$, and let $\mathbf{x}$ be the individual variable meta symbol. Where $S$ is a schema, let $S(P_1, \ldots, P_n / Q_1, \ldots, Q_n; \mathbf{x}/x)$ be the formula obtained from $S$ by replacing all occurrences of the formula meta symbols $P_1, \ldots, P_n$, respectively, with occurrences of the formulas $Q_1, \ldots, Q_n$, and replacing each occurrence of $\mathbf{x}$ with an occurrence of the individual variable $x$.

The inference rules will be:

Schema Instantiation 1

Where $S$ is one of axiom schemas (1) through (4), infer $S(P_1, \ldots, P_n / Q_1, \ldots, Q_n)$, where $P_1, \ldots, P_n$ are all the distinct formula meta symbols occurring in $S$, and $Q_1, \ldots, Q_n$ are formulas.

Schema Instantiation 2

Where $S$ is axiom schema (5), infer $(\forall x)P \to P(x/t)$, where $P$ is any formula, $t$ is any individual term, and $x$ is any individual variable.

Schema Instantiation 3

Where $S$ is axiom schema (6), infer $(\forall x)(P \to Q) \to (P \to (\forall x)Q)$, where $P$ and $Q$ are any formulas and $x$ is any individual variable that does not occur free in $P$.

Modus Ponens

From $P$ and $P \to Q$ infer $Q$, where $P$ and $Q$ are any formulas.

Generalization

From $P$ infer $(\forall x)P$, where $P$ is any formula and $x$ is any individual variable.

Regarding schema (5) and Schema Instantiation 2, note first that, if $x$ occurs free in $P$, since $x$ is substitutable for itself in $P$, by taking $t$ to be $x$ this rule allows one to derive $(\forall x)P \to P$. Next note that, if $x$ does not occur free in $P$, then, by definition of the notation $P(x/t)$, $P(x/t)$ is just $P$, and the same rule allows one to derive $(\forall x)P \to P$. It follows that all formulas of the form $(\forall x)P \to P$ are logical axioms.

It is not difficult to establish that, if one leaves out the falsum symbol $\bot$ and schema (4), this formalism is equivalent to the first-order predicate calculus of Hamilton [14]. The present schemas (1), (2), (3), and (6) are identical to Hamilton’s corresponding axiom schemas, and it can be seen that schema (5) together with Schema Instantiation 2 effectively replaces Hamilton’s two remaining quantifier axiom schemas under the present restricted notion of term that does not involve function symbols and the agreement that writing $P(x/t)$ implies that $t$ is substitutable for $x$ in $P$. To wit, the above note that, if $x$ does not occur free in $P$, then one can derive $(\forall x)P \to P$, gives the first of these, and the second is obtained by the fact that, if $t$ is substitutable for $x$ in $P$, then one can derive $(\forall x)P \to P(x/t)$. Thus all of Hamilton’s logical axioms are logical axioms here. Moreover, excluding (4), the present formalism does not permit the introduction of logical axioms not found in Hamilton’s calculus. In other words, with the exception of (4), both formalisms have the same logical axioms. Because of this equivalence, the present formalism can make use of numerous results proved in [14]. Moreover, Hamilton’s system differs from that of Mendelson [22] only in the third propositional axiom schema, and Exercise 4, Chapter 2 of [14] shows that Hamilton’s and Mendelson’s systems are equivalent. This allows appropriation of numerous results from [22].

A (first-order) theory $T$ will consist of a sublanguage of the foregoing language $L$, denoted $L_T$ and called the language of $T$, the foregoing axiom schemas (1) through (6), the foregoing inference rules, and a set of formulas of $L_T$ to serve as the extralogical axioms of $T$. By a proof in $T$ is meant a sequence $P_1, \ldots, P_n$ of formulas of $L_T$ such that each $P_i$ is either (i) a logical axiom, i.e., is derivable by means of one of the Schema Instantiation rules 1 through 3, (ii) an extralogical axiom, or (iii) can be inferred from formulas occurring before $P_i$ in the sequence by means of either Modus Ponens or Generalization. Such a sequence is a proof of the last member $P_n$. A formula $P$ of $L_T$ is a theorem of $T$ if it has a proof in $T$. The notation $T \vdash P$ is used to indicate that $P$ is a theorem of $T$, and $T \not\vdash P$ is used to indicate the contrary.

Consider an entry $(L_t, B_t)$ in the derivation path of a DRS as defined in Section 2.1. A formula in $B_t$ will be active if its status is bel; otherwise it is inactive. The theory determined by $(L_t, B_t)$ will be the first-order theory whose language is $L_t$ and whose extralogical axioms are the active extralogical axioms in $B_t$.

The Dialectical Belief Revision algorithm has the effect of changing the status of some formulas in the belief set from bel to disbel. This is true also for other belief revision algorithms. Because of this, the fact that the active extralogical axioms in $B_t$ are the extralogical axioms of the theory determined by $(L_t, B_t)$ does not guarantee that the active formulas in $B_t$ are all theorems of that theory. That this implication can sometimes fail motivates the following.

A belief revision algorithm for a DRS is normal if, for any entry $(L_t, B_t)$ in a derivation path for the DRS, given that the active formulas in $B_t$ are theorems of the theory determined by $(L_t, B_t)$, the active formulas in the belief set $B_{t+1}$ that results from an application of that algorithm will be theorems of the theory determined by $(L_{t+1}, B_{t+1})$.

Proposition 3.1. For any DRS, Dialectical Belief Revision is normal.

A DRS is normal if all its belief revision algorithms are normal.

Proposition 3.2. In a normal DRS, for each theory $T$ determined by a pair $(L_t, B_t)$ in a derivation path for the DRS, the active formulas in $B_t$ will be theorems of $T$.

Where $\Gamma$ is a set of formulas in $L_T$, let $T \cup \Gamma$ be the theory obtained from $T$ by adjoining the members of $\Gamma$ as extralogical axioms.

Proposition 3.3. Let $T$ be the theory determined by an entry $(L_t, B_t)$ in the derivation path for a normal DRS, let $T'$ be the theory with language $L_t$ and no extralogical axioms, and let $\Gamma$ be the set of active formulas in $B_t$. Then, for any formula $P$ of $L_t$, $T \vdash P$ iff $T' \cup \Gamma \vdash P$.

It is assumed that the reader is familiar with the notion of tautology from the Propositional Calculus (PC). Axiom schemas (1), (2), and (3), together with Modus Ponens, are the axiomatization of PC found in [14]. Where $T$ is a theory and $P$ is a formula of $L_T$, let $T \vdash_{\mathrm{PC}} P$ indicate that $P$ has a proof in $T$ using only schemas (1), (2), and (3), Schema Instantiation 1, and Modus Ponens. Then, by treating atomic formulas of $L_T$ as propositions of PC, one has the following.

Theorem 3.1. (Soundness Theorem for PC) For any theory $T$, if $T \vdash_{\mathrm{PC}} P$, then $P$ is a tautology.

Theorem 3.2. (Adequacy Theorem for PC) For any theory $T$, if $P$ is a tautology, then $T \vdash_{\mathrm{PC}} P$.

These theorems can be used to show that axiom schema (4) serves merely as a defining axiom for $\bot$ and does not enable proving any additional formulas not involving $\bot$. Let $T \vdash_{\mathrm{PC}\bot} P$ indicate that $P$ has a proof in $T$ using only schemas (1), (2), (3), and (4), Schema Instantiation 1, and Modus Ponens.

Proposition 3.4. If $P$ does not contain any occurrences of $\bot$ and $T \vdash_{\mathrm{PC}\bot} P$, then $T \vdash_{\mathrm{PC}} P$.

A theorem of $T$ is said to be derivable in $T$. A formula $P$ is said to be derivable from a set $\Gamma$ of formulas in $T$ if $T \cup \Gamma \vdash P$. Note that this is equivalent to saying that, for any theory $T'$ extending $T$, if $T' \vdash Q$ for all $Q \in \Gamma$, then $T' \vdash P$. When $\Gamma$ is given as a list of one or more formulas, e.g., $\{P_1, \ldots, P_n\}$, the surrounding braces will be dropped, i.e., $T \cup \{P_1, \ldots, P_n\} \vdash P$ will be shortened to $T \cup P_1, \ldots, P_n \vdash P$.

An inference rule is a statement of the form ‘From some premises having some forms $P_1, \ldots, P_n$, infer the conclusion having some form $Q$’. In the context of a theory $T$, an inference rule may be viewed as a mapping from $L_T$ into itself, with an application of the rule being represented as an $(n+1)$-tuple of formulas of $L_T$, $(P_1, \ldots, P_n, Q)$, where $P_1, \ldots, P_n$ are the premises and $Q$ is the conclusion. An inference rule is valid (or derivable) in a theory $T$ if the conclusion is always derivable from the premises in $T$, i.e., if $T \cup P_1, \ldots, P_n \vdash Q$. Consider the following inference rules for an arbitrary theory $T$.

Hypothetical Syllogism

From $P \to Q$ and $Q \to R$ infer $P \to R$, where $P$, $Q$, and $R$ are any formulas.

Aristotelian Syllogism

From $(\forall x)(P \to Q)$ and $P(x/a)$, infer $Q(x/a)$, where $P$ and $Q$ are any formulas, $x$ is any individual variable, and $a$ is any individual constant.

Subsumption

From $(\forall x)(\alpha(x) \to \beta(x))$ and $(\forall x)(\beta(x) \to \gamma(x))$, infer $(\forall x)(\alpha(x) \to \gamma(x))$, where $\alpha$, $\beta$, and $\gamma$ are any unary predicate symbols, and $x$ is any individual variable.

And-Introduction

From $P$ and $Q$ infer $P \wedge Q$.

And-Elimination

From $P \wedge Q$ infer $P$ and $Q$.

Conflict Detection

From $(\forall x)(P \to \neg Q)$, $P(x/a)$, and $Q(x/a)$ infer $\bot$, where $P$ and $Q$ are any formulas, $x$ is any individual variable, and $a$ is any individual constant.

Contradiction Detection

From $P$ and $\neg P$ infer $\bot$, where $P$ is any formula.

Hypothetical Syllogism is a well-known principle of classical logic. Aristotelian Syllogism captures the reasoning embodied in the famous argument ‘All men are mortal; Socrates is a man; therefore Socrates is mortal’, by taking $P$ for ‘is a man’, $Q$ for ‘is mortal’, and $a$ for ‘Socrates’. A concept $\alpha$ subsumes a concept $\beta$ if the set of objects represented by $\alpha$ contains the set represented by $\beta$ as a subset. Thus Subsumption captures the transitivity of this subsumption relationship. In the context of a DRS, Conflict Detection can be used for triggering Dialectical Belief Revision. This is an example of one such triggering rule; others surely are possible.
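For instance, with hypothetical predicate symbols $\mathit{Man}$ and $\mathit{Mortal}$ and individual constant $\mathit{Socrates}$, the corresponding application of Aristotelian Syllogism is
\[
\frac{(\forall x)(\mathit{Man}(x) \to \mathit{Mortal}(x)) \qquad \mathit{Man}(\mathit{Socrates})}{\mathit{Mortal}(\mathit{Socrates})}
\]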

Aristotelian Syllogism, Subsumption, and Conflict Detection will be used in the Document Management Assistant application developed in Section 4. In this respect, they may be considered to be application-specific. They are not domain-specific, however, inasmuch as they happen to be valid in any first-order theory, as demonstrated by the following.

Proposition 3.5. The above seven inference rules are valid in any theory $T$.

A theory $T$ is inconsistent if there is a formula $P$ of $L_T$ such that both $T \vdash P$ and $T \vdash \neg P$; otherwise $T$ is consistent.

Proposition 3.6. A theory $T$ is consistent iff there is a formula $P$ of $L_T$ such that $T \not\vdash P$.

This shows that any inconsistent system with the full strength of first-order logic is trivial in that all formulas are formally derivable.

A set $\Gamma$ of formulas of a language $L'$ is consistent if the theory with language $L'$ and with the formulas in $\Gamma$ as extralogical axioms is consistent; otherwise $\Gamma$ is inconsistent. For an entry $(L_t, B_t)$ in the derivation path of a DRS, $B_t$ is consistent if the set of active formulas in $B_t$ is consistent; otherwise $B_t$ is inconsistent.

Proposition 3.7. For an entry $(L_t, B_t)$ in the derivation path of a normal DRS, $B_t$ is consistent if and only if the theory determined by $(L_t, B_t)$ is consistent.

3.2 Semantics and Main Results Regarding First-Order Logic

This formulation of a semantics for first-order logic follows [29]. Let $T$ be a theory of the kind described above. An interpretation $I$ for the language $L_T$ consists of (i) a nonempty set $D_I$ serving as the domain of $I$, the elements of which are called individuals, (ii) for each individual constant $a$ of $L_T$, assignment of a unique individual $I(a) \in D_I$, and (iii) for each $n$-ary predicate symbol $\alpha^n$ of $L_T$, assignment of an $n$-ary relation $I(\alpha^n)$ on $D_I$. For each individual $d \in D_I$, let $\bar{d}$ be a new individual constant, i.e., one not among the individual constants of $L_T$, to serve as the name of $d$. Let $L_I$ be the language obtained from $L_T$ by adjoining the names of the individuals in $D_I$ as new extralogical symbols. Where $I$ is an interpretation for $L_T$, let $I(\bar{d}) = d$ for all $d \in D_I$. For a formula $P$ of a language $L'$ and an interpretation $I$ for $L'$, an $I$-instance of $P$ is a formula of the form $P(x_1, \ldots, x_n / \bar{d}_1, \ldots, \bar{d}_n)$, where $x_1, \ldots, x_n$ are the distinct individual variables occurring free in $P$ and $\bar{d}_1, \ldots, \bar{d}_n$ are names of individuals in $D_I$. Note that $I$-instances as defined above are closed.

Given an interpretation $I$ for a language $L'$, a truth valuation is a mapping $v_I$ from the closed formulas of $L_I$ to the truth values $\{T, F\}$ satisfying:

  1. For $P$ being the atomic formula $\bot$, $v_I(P) = F$.

  2. For $P$ atomic and having the form $\alpha^n(a_1, \ldots, a_n)$, $v_I(P) = T$ iff $(I(a_1), \ldots, I(a_n)) \in I(\alpha^n)$ (the relation $I(\alpha^n)$ holds for the $n$-tuple $(I(a_1), \ldots, I(a_n))$). Note that, since $P$ is closed, the $a_i$ must all be individual constants and may be names of individuals in $D_I$.

  3. For $P$ of the form $\neg Q$, $v_I(P) = T$ iff $v_I(Q) = F$.

  4. For $P$ of the form $Q \to R$, $v_I(P) = T$ iff either $v_I(Q) = F$ or $v_I(R) = T$.

  5. For $P$ of the form $(\forall x)Q$, $v_I(P) = T$ iff $v_I(Q(x/\bar{d})) = T$ for every $d \in D_I$.

Items 3 and 4 encapsulate the usual truth tables for $\neg$ and $\to$. By definition of the various abbreviated forms, it will follow that, for closed $P$:

For $P$ of the form $Q \wedge R$, $v_I(P) = T$ iff $v_I(Q) = T$ and $v_I(R) = T$.

For $P$ of the form $Q \vee R$, $v_I(P) = T$ iff either $v_I(Q) = T$ or $v_I(R) = T$.

For $P$ of the form $Q \leftrightarrow R$, $v_I(P) = T$ iff $v_I(Q) = v_I(R)$.

For $P$ of the form $(\exists x)Q$, $v_I(P) = T$ iff $v_I(Q(x/\bar{d})) = T$ for some $d \in D_I$.
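These clauses are directly executable over a finite interpretation. The following sketch uses a nested-tuple representation of formulas and a variable assignment in place of the names $\bar{d}$; the representation and all names are assumptions of this sketch.

\begin{verbatim}
def holds(formula, interp, env=None):
    """True iff formula is true under interp, a dict with keys 'domain',
    'constants' (constant -> individual) and 'predicates' (symbol -> set of
    tuples of individuals), relative to the variable assignment env."""
    env = env or {}
    tag = formula[0]
    if tag == "falsum":                       # clause 1
        return False
    if tag == "atom":                         # clause 2
        _, pred, args = formula
        vals = tuple(env.get(a, interp["constants"].get(a, a)) for a in args)
        return vals in interp["predicates"][pred]
    if tag == "not":                          # clause 3
        return not holds(formula[1], interp, env)
    if tag == "imp":                          # clause 4
        return (not holds(formula[1], interp, env)) or holds(formula[2], interp, env)
    if tag == "forall":                       # clause 5
        _, var, body = formula
        return all(holds(body, interp, {**env, var: d}) for d in interp["domain"])
    raise ValueError("unknown formula tag: " + str(tag))


# Example: (forall x)(Science(x) -> TheLibrary(x)) in a two-element domain.
interp = {
    "domain": ["d1", "d2"],
    "constants": {"Doc1": "d1"},
    "predicates": {"Science": {("d1",)}, "TheLibrary": {("d1",), ("d2",)}},
}
f = ("forall", "x", ("imp", ("atom", "Science", ["x"]),
                            ("atom", "TheLibrary", ["x"])))
assert holds(f, interp)
\end{verbatim}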

A closed formula $P$ is true or valid in an interpretation $I$ if $v_I(P) = T$. An open formula $P$ is valid in an interpretation $I$ if $v_I(P') = T$ for all $I$-instances $P'$ of $P$. The notation $I \models P$ is used to denote that $P$ is valid in $I$. A formula $P$ of a language $L'$ is logically valid if it is valid in every interpretation of $L'$.

Proposition 3.8. Let $P$ be a formula of $L'$ in which no individual variable other than $x$ occurs free, let $a$ be an individual constant of $L'$, let $I$ be an interpretation of $L'$, and let $d = I(a)$. Then $v_I(P(x/a)) = T$ iff $v_I(P(x/\bar{d})) = T$.

Proposition 3.9. For any language $L'$, the logical axioms of $L'$, i.e., all formulas derivable by means of the schema instantiation rules 1 through 3, are logically valid.

An inference rule is validity preserving if, for every application $(P_1, \ldots, P_n, Q)$ comprised of formulas in a language $L_T$ for a theory $T$, and for every interpretation $I$ for $L_T$, $I \models P_1, \ldots, I \models P_n$ implies $I \models Q$.

Proposition 3.10. Modus Ponens and Generalization are validity preserving.

Theorem 3.3. (Soundness Theorem for FOL) Let $T$ be a theory with no extralogical axioms and let $P$ be a formula of $L_T$. If $T \vdash P$, then $P$ is logically valid.

One can establish the converse, referred to in [14] as the Adequacy Theorem (Proposition 4.41). This also appears in [22] as Corollary 2.18. The proof in [14] can be adapted to the present system because of the equivalence between that system and the formalism studied here. It is only necessary to verify that the present notion of semantic interpretation is equivalent to that of [14]. This amounts to observing that the present notion of $I$-instance is equivalent with the notion of ‘valuation’ in [14]. Details are omitted as this result is not needed in the present work.

An interpretation $I$ for the language $L_T$ of a theory $T$ is a model of $T$ if all theorems of $T$ are valid in $I$. The notation $I \models T$ expresses that $I$ is a model of $T$.

Theorem 3.4. (Consistency Theorem) If a theory $T$ has a model, then $T$ is consistent.

Proposition 3.11. If $T$ is a theory with no extralogical axioms, then $T$ is consistent.

An interpretation $I$ for a language $L'$ is a model for a set $\Gamma$ of formulas of $L'$ if all the formulas in $\Gamma$ are valid in $I$. The notation $I \models \Gamma$ expresses that $I$ is a model of $\Gamma$.

Proposition 3.12. Let $\Gamma$ be the extralogical axioms of a theory $T$ and let $I$ be an interpretation for $L_T$. If $I \models \Gamma$, then $I \models T$.

For an entry $(L_t, B_t)$ in the derivation path for a DRS (Section 2.1), an interpretation $I$ for $L_t$ will be a model of $B_t$ if $I \models \Gamma$, where $\Gamma$ is the set of active formulas in $B_t$. Let $I \models B_t$ indicate that $I$ is a model of $B_t$.

Proposition 3.13. Let $(L_t, B_t)$ be an entry in a derivation path for a normal DRS. If there is an interpretation $I$ of $L_t$ that is a model of $B_t$, then $B_t$ is consistent.

4 Example 1: A Document Management Assistant

A DRS is used to represent the reasoning processes of an artificial agent interacting with its environment. For this purpose the belief set should include a model of the agent’s environment, with this model evolving over time as the agent acquires more information about the environment. This section illustrates this idea with a simple DRS based on first-order logic representing an agent that assists its human users in creating and managing a taxonomic document classification system. Thus the environment in this case consists of the document collection together with its users. The objective in employing such an agent is to build concept taxonomies suitable for browsing. In this manner the DRS functions as a Document Management Assistant (DMA).

In the DMA, documents are represented by individual constants, and document classes are represented by unary predicate symbols. Membership of document $a$ in class $\alpha$ is expressed by the atomic formula $\alpha(a)$; the property of class $\alpha$ being a subset of class $\beta$ is expressed by $(\forall x)(\alpha(x) \to \beta(x))$, where $x$ is any individual variable; and the property of two classes $\alpha$ and $\beta$ being disjoint is expressed by $(\forall x)(\alpha(x) \to \neg\beta(x))$, where $x$ is any individual variable. A taxonomic classification hierarchy may thus be constructed by entering formulas of these forms into the belief set as extralogical axioms. It will be assumed that these axioms are input by human users.

In addition to the belief set, the DMA will employ an extralogical graphical structure representing the taxonomy. A formula of the form $\alpha(a)$ will be represented by an is-an-element-of link from a node representing $a$ to a node representing $\alpha$, a formula of the form $(\forall x)(\alpha(x) \to \beta(x))$ will be represented by an is-a-subclass-of link from a node representing $\alpha$ to a node representing $\beta$, and a formula of the form $(\forall x)(\alpha(x) \to \neg\beta(x))$ will be represented by an are-disjoint link between the nodes representing $\alpha$ and $\beta$. This structure will be organized as a directed acyclic graph (DAG) without redundant links with respect to the is-an-element-of and is-a-subclass-of links (i.e., ignoring are-disjoint links), where by a redundant link is meant a direct link from some node to an ancestor of that node other than the node’s immediate ancestors (i.e., other than its parents). To this end the controller will maintain a data structure that represents the current state of this graph. Whenever an axiom expressing a document-class membership is entered into the belief set, a corresponding is-an-element-of link will be entered into the graph, unless this would create a redundant path. Whenever an axiom expressing a subclass-superclass relationship is entered into the belief set, an is-a-subclass-of link will be entered into the graph, unless this would create either a cycle or a redundant path. Whenever an axiom expressing class disjointedness is entered into the belief set, a corresponding link expressing this will be entered into the graph. To accommodate this activity, the derivation path as a sequence of pairs $(L_t, B_t)$ is augmented to become a sequence of triples $(L_t, B_t, G_t)$, where $G_t$ is the state of the graph at time $t$.
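The following is a minimal sketch of this graph bookkeeping, in which a requested link is accepted only if it would create neither a cycle nor a redundant path; the class and method names are illustrative assumptions only.

\begin{verbatim}
class TaxonomyGraph:
    def __init__(self):
        self.parents = {}        # node -> set of nodes reached by direct links
        self.disjoint = set()    # unordered pairs of disjoint categories

    def _ancestors(self, node):
        seen, stack = set(), [node]
        while stack:
            for p in self.parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    def add_link(self, child, parent):
        """Add an is-an-element-of or is-a-subclass-of link child -> parent,
        unless doing so would create a cycle or a redundant path."""
        self.parents.setdefault(child, set())
        self.parents.setdefault(parent, set())
        if child == parent or child in self._ancestors(parent):
            return False         # would create a cycle
        if parent in self._ancestors(child):
            return False         # parent already reachable: redundant link
        self.parents[child].add(parent)
        return True

    def add_disjoint(self, cat1, cat2):
        self.disjoint.add(frozenset((cat1, cat2)))
\end{verbatim}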

This is illustrated in Figure 4.1, showing a graph that would be defined by entering formulas of the foregoing forms into the belief set, where TheLibrary, Science, Engineering, Humanities, ComputerScience, Philosophy, and ArtificialIntelligence are taken as alternative labels for the predicate letters $\alpha_1, \ldots, \alpha_7$, where Doc1, Doc2, Doc3 are alternative labels for the individual constants $a_1, a_2, a_3$, and where $x$ is the individual variable.

The overall purpose of the DMA is to support browsing and search by the human users. The browsing capability is provided by the graph. For this one would develop tools for ‘drilling down’ through the graph, progressively narrowing in on specific topics of interest. User queries would consist of keyword searches and would employ the belief set directly, i.e., they do not necessarily require the graph. In such queries, the keywords are presumed to be the names of classification categories. The algorithms associated with the DMA’s controller are designed to derive all document-category classifications implicit in the graph and enter these into the belief set. These document-category pairs can be stored in a simple database, and keyword searches, possibly involving multiple keywords connected by ‘or’ or ‘and’ can be implemented as database queries. For ‘and’ queries, however, one may alternatively use the graph structure to find those categories’ common descendants.
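A minimal sketch of such keyword retrieval over the derived category-document pairs, taking ‘and’ queries as intersections and ‘or’ queries as unions (all names are illustrative):

\begin{verbatim}
from collections import defaultdict

def build_index(classifications):
    index = defaultdict(set)
    for category, document in classifications:
        index[category].add(document)
    return index

def query(index, keywords, mode="and"):
    sets = [index.get(k, set()) for k in keywords]
    if not sets:
        return set()
    return set.intersection(*sets) if mode == "and" else set.union(*sets)

pairs = [("Science", "Doc1"), ("ComputerScience", "Doc2"), ("Science", "Doc2")]
idx = build_index(pairs)
assert query(idx, ["Science", "ComputerScience"]) == {"Doc2"}
\end{verbatim}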

As described in Section 2.1, entering a formula into the belief set also entails possibly expanding the current language by adding any needed new symbols and assigning the new formula an appropriate label. For the above formulas, the from-list will consist of an indication that the source of the formula was a human user; let us use a code for this serving as an alternative to the aforementioned es. As an epistemic entrenchment value, let us arbitrarily assign each formula the value 0.5 on the scale $[0,1]$. Thus each label will have the form described in Section 2.1, with this from-list code, the entrenchment value 0.5, status bel, and knowledge category a posteriori.

Using the same value 0.5 for every epistemic entrenchment value effectively makes these values nonfunctional with respect to Dialectical Belief Revision. When a choice must be made regarding which of several formulas to disbelieve, this choice will either be random or made by the human user.

Figure 4.1. A taxonomy fragment.

Whenever the user wishes to enter a new formula into the belief set, the formula is provided to the controller, which executes a reasoning process depending on the type (or form) of the formula. This process may lead to the controller’s carrying out inference rule applications, the results of which are provided back to the controller, which may in turn lead to further rule applications and/or belief revision processes. In this activity the controller additionally modifies the language and graph as appropriate. As discussed in Section 2.2, the general purpose of a controller is two-fold: (i) to derive all salient information for the intended application, and (ii) to ensure that the belief set remains consistent. In the present case, the salient information is the graph together with the document classifications implicit in the graph, i.e., all formulas of the form $\alpha(a)$ that can be derived from the formulas describing the graph. In an application, these category-document pairs can be stored in a simple database and used for keyword-based sorting and retrieval (search).

4.1 Formal Specification of the DMA

These considerations motivate the following formal specification for the DMA. For the path logic, let the language $L$ be the language for first-order logic defined in Section 3.1, let the axiom schemas be (1) through (6), and let the inference rules be those of Section 3.1 together with Aristotelian Syllogism, Subsumption, and Conflict Detection. In accordance with Section 2.1, let $L_0$ be the minimal sublanguage of $L$ consisting of all formulas that can be built up from the atomic formula $\bot$, and let $B_0 = \emptyset$. In addition, let $G_0$ be the empty graph.

For the controller, all inputs by human users must be formulas having one of the forms (i) , (ii) , where and are distinct, and (iii) , where and are distinct. As mentioned, part of the function of the controller is to maintain the graphs that are represented in the derivation path by the . These graphs have two types of nodes: one type representing documents corresponding to individual constant symbols, and one type representing classification categories corresponding to unary predicate symbols. There are three kinds of links: is-an-element-of links from documents to categories, is-a-subclass-of links between categories, and are-disjoint links between categories. It is desired that the controller maintain these graphs, ignoring are-disjoint links, as directed acyclic graphs without redundant links, as described above.
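
A minimal Python sketch of such a graph structure is given below, assuming an adjacency-set representation; the class and method names are not taken from the specification. Redundancy is tested by asking whether the target node is already reachable before a new link is added, and are-disjoint links are kept separately and ignored by the reachability test, in keeping with the requirement that the remaining links form a directed acyclic graph without redundant links.

```python
from collections import defaultdict

class TaxonomyGraph:
    """Sketch of the controller's graph: document nodes, category nodes, and
    the three kinds of links described above.  Names are illustrative only."""

    def __init__(self):
        self.documents = set()
        self.categories = set()
        self.member_of = defaultdict(set)    # document -> categories it is directly in
        self.subclass_of = defaultdict(set)  # category -> its direct superclasses
        self.disjoint = set()                # unordered category pairs (frozensets)

    def ancestors(self, cat):
        """All categories reachable from cat via is-a-subclass-of links."""
        seen, stack = set(), [cat]
        while stack:
            for parent in self.subclass_of[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    def add_subclass_link(self, sub, sup):
        """Add 'sub is-a-subclass-of sup' unless it would be redundant or cyclic."""
        self.categories.update((sub, sup))
        if sup in self.ancestors(sub):       # already implied: redundant path
            return False
        if sub in self.ancestors(sup):       # would create a loop
            return False
        self.subclass_of[sub].add(sup)
        return True

    def add_membership_link(self, doc, cat):
        """Add 'doc is-an-element-of cat' unless it is already implicit."""
        self.documents.add(doc)
        self.categories.add(cat)
        implied = set()
        for c in self.member_of[doc]:
            implied |= self.ancestors(c) | {c}
        if cat in implied:                   # redundant path
            return False
        self.member_of[doc].add(cat)
        return True

    def add_disjoint_link(self, cat1, cat2):
        self.categories.update((cat1, cat2))
        self.disjoint.add(frozenset((cat1, cat2)))

g = TaxonomyGraph()
g.add_subclass_link("ComputerScience", "Science")
g.add_subclass_link("Science", "TheLibrary")
print(g.add_subclass_link("ComputerScience", "TheLibrary"))   # False: redundant
```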

To complete the specification of the controller it is necessary to provide algorithms that will be executed depending on the type of user input. In the present system, it is convenient to distinguish between two kinds of input event, the first being when a formula is provided to the controller by a human user, and the second being when a formula is provided to the controller as the result of an inference rule application. The following describes algorithms associated with five such event types. Of these, Event Types 1, 4, and 5 correspond to user inputs. These are initiating events, each of which may lead to a sequence of events of some of the other types.

In all the events it is assumed that, if the formula provided to the controller already exists and is active in the current belief set, its input is immediately rejected. This prevents unnecessary duplicates. In each of the following, assume that the most recent entry into the derivation path is .

Event Type 1: A formula of the form is provided to the controller by a human user. If either or is not in the symbol set for , form by adding the missing ones to the symbol set for ; otherwise set . Form from by adding the labeled formula . If there are no nodes representing either or in , form by adding such nodes together with an is-an-element-of link from the node to the node. If one of and is represented by a node in but the other is not, form by adding a node representing the one that is missing together with an is-an-element-of link from the node to the node. Note that if a category node is being added, this will become a root node. If both and are represented by nodes in , form by adding an is-an-element-of link from the node to the node, unless this would create a redundant path in the graph.

Search for active formulas of the forms or , where is the predicate symbol of the input formula and is some predicate symbol other than , and, if found, search for an active occurrence of , where is the individual constant of the input formula, and for each successful such search, apply Conflict Detection to infer and provide this formula to the controller. This is an event of Type 2.

Let be the most recent belief set, i.e., either it is or it is the belief set that has resulted from the processes associated with the indicated events of Type 2, if any occurred. If the input formula is still active, search for any active formulas having the form , where is the predicate symbol of the input formula. For each such formula, apply Aristotelian Syllogism to this and the formula to infer , and provide this formula to the controller. This is an event of Type 3.
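
The conflict-detection step in the preceding description (which recurs in Event Type 3 below) can be pictured with the following sketch. It assumes disjointness declarations are recorded as unordered category pairs and active memberships as document-category pairs; the function name and return format are illustrative only.

```python
def find_conflicts(new_doc, new_cat, disjoint_pairs, active_memberships):
    """Contradictions triggered by asserting that new_doc belongs to new_cat.

    disjoint_pairs: set of frozensets {catA, catB} declared disjoint
    active_memberships: set of (doc, cat) pairs currently believed
    Each returned item names the two clashing membership assertions; the
    controller would hand the resulting contradiction to Event Type 2.
    """
    conflicts = []
    for pair in disjoint_pairs:
        if new_cat in pair:
            (other_cat,) = pair - {new_cat}
            if (new_doc, other_cat) in active_memberships:
                conflicts.append(((new_doc, new_cat), (new_doc, other_cat)))
    return conflicts

disjoint = {frozenset({"Engineering", "Humanities"})}
believed = {("Doc2", "Humanities")}
print(find_conflicts("Doc2", "Engineering", disjoint, believed))
# [(('Doc2', 'Engineering'), ('Doc2', 'Humanities'))]
```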

Event Type 2: The formula is provided to the controller as the result of an application of Conflict Detection. Let . Form from by (i) adding the labeled formula , where the from-list contains the name of the inference rule (Conflict Detection) that was used to conclude this occurrence of , together with the indexes of the formulas that served as premises in the rule application, and (ii) updating the to-lists of all formulas that thus served as premises by including the index . Let .

Now invoke the Dialectical Belief Revision algorithm on as described in Section 2.2, starting with the formula just added to the belief set. As a result of this process, some formulas in the current belief set will have their status changed from bel to disbel. Let be the belief set obtained from by making these changes in the relevant formulas’ labels. Let . Obtain from by removing any elements representing formulas whose statuses have thus been changed to disbel. Specifically, (i) if a formula of the form is disbelieved, remove the is-an-element-of link connecting the node representing to the node representing , and remove the node representing , unless it is connected to some node other than the one representing , and (ii) if a formula of the form is disbelieved, remove the is-a-subclass-of link connecting the node representing to the node representing , and remove the node representing , unless it is connected to some node other than the one representing .
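
The graph update that follows Dialectical Belief Revision might be sketched as below. The revision step itself is not implemented; the function takes the links corresponding to the newly disbelieved formulas (in an assumed tagged-tuple encoding) and removes them. In this simplified representation a node exists only insofar as it occurs in some link, so removing a node that is no longer connected amounts to deleting its now-empty entry.

```python
def prune_graph(member_of, subclass_of, disbelieved_links):
    """Remove the links corresponding to newly disbelieved formulas.

    member_of:   dict document -> set of categories (is-an-element-of links)
    subclass_of: dict category -> set of categories (is-a-subclass-of links)
    disbelieved_links: iterable of ('element', doc, cat) or ('subclass', sub, sup)
    """
    for kind, x, y in disbelieved_links:
        if kind == "element":
            member_of.get(x, set()).discard(y)
        elif kind == "subclass":
            subclass_of.get(x, set()).discard(y)
    # Delete now-empty entries, so that nodes left with no links of their own
    # disappear from the representation.
    for table in (member_of, subclass_of):
        for node in [n for n, links in table.items() if not links]:
            del table[node]

member_of = {"Doc1": {"Science"}}
subclass_of = {"Science": {"TheLibrary"}}
prune_graph(member_of, subclass_of, [("element", "Doc1", "Science")])
print(member_of, subclass_of)   # {} {'Science': {'TheLibrary'}}
```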

Event Type 3: A formula of the form is provided to the controller as a result of an inference rule application (Aristotelian Syllogism). In this case, both and are already in , so let . Form from by (i) adding the labeled formula , where the from-list contains the name of the inference rule that was used to infer (Aristotelian Syllogism), together with the indexes of the formulas that served as premises in the rule application, and (ii) updating the to-lists of all formulas that thus served as premises by including the index . Let . Note that no modification of the graph is warranted, since the membership of the document associated with in the category associated with is already implicit in the graph, so that a link between the respective nodes would form a redundant path.

Search for active formulas of the forms or , where is the predicate symbol of the input formula and is some predicate symbol other than , and, if found, search for an active occurrence of , where is the individual constant of the input formula, and for each successful such search, apply Conflict Detection to infer and provide this formula to the controller. This is an event of Type 2.

Let be the most recent belief set, i.e., either it is or it is the belief set that has resulted from the processes associated with the indicated events of Type 2, if any occurred. If the input formula is still active, search for any active formulas having the form , where is the predicate symbol of the input formula. For each such formula, apply Aristotelian Syllogism to this and the formula to infer , and provide this formula to the controller. This is a recursive occurrence of an event of Type 3.
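
The recursion through Events 1 and 3 amounts to propagating a new membership assertion upward through the subsumption hierarchy. The following non-incremental sketch computes the same set of conclusions in one pass, assuming subsumptions are stored as a map from each category to its direct superclasses; the real algorithm interleaves this propagation with conflict detection and duplicate rejection at every step, and the sample taxonomy is only loosely modeled on Figure 4.1.

```python
def derived_memberships(doc, cat, subclass_of, already_believed):
    """All new memberships obtainable from 'doc in cat' by repeated
    applications of Aristotelian Syllogism.

    subclass_of: dict category -> set of its direct superclasses
    already_believed: set of (doc, category) pairs already in the belief set
    Duplicates of existing beliefs are skipped, mirroring the controller's
    rejection of duplicate formulas.
    """
    conclusions, stack = [], [cat]
    seen = {c for d, c in already_believed if d == doc} | {cat}
    while stack:
        for parent in subclass_of.get(stack.pop(), set()):
            if parent not in seen:
                seen.add(parent)
                conclusions.append((doc, parent))
                stack.append(parent)
    return conclusions

# A small taxonomy loosely modeled on Figure 4.1 (not an exact reproduction):
subclass_of = {
    "ArtificialIntelligence": {"ComputerScience", "Philosophy"},
    "ComputerScience": {"Science", "Engineering"},
    "Philosophy": {"Humanities"},
    "Science": {"TheLibrary"},
    "Engineering": {"TheLibrary"},
    "Humanities": {"TheLibrary"},
}
print(derived_memberships("Doc3", "ArtificialIntelligence", subclass_of, set()))
# Doc3 is placed in ComputerScience, Philosophy, Science, Engineering,
# Humanities, and TheLibrary (the order of the pairs may vary).
```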

Event Type 4: A formula of the form is provided to the controller by a human user. If both and are already in , begin by performing the following three checks, stopping as soon as one of them rejects the input. First, search to determine if either of the formulas and is active, and, if so, reject the input and inform the user that the input is disallowed inasmuch as it contradicts the current belief set. Second, explore all ancestors of to see if they include , and, if so, reject the input and inform the user that the input is disallowed inasmuch as it would create a redundant path in the subsumption hierarchy. Third, explore all ancestors of as expressed by formulas in to determine whether these include , and, if so, reject the input and inform the user that the input is disallowed inasmuch as it would create a loop in the subsumption hierarchy. If the input is not rejected for any of these reasons, do the following.

If either or is not in the symbol set for , form by adding the ones that are missing, otherwise let . Form from by adding the labeled formula . If there are no nodes representing either or in , form by adding such nodes together with an is-a-subclass-of link from the node to the node. If one of and is represented by a node in but the other is not, form by adding a node representing the one that is missing together with an is-a-subclass-of link from the node to the node. If both and are represented by nodes in , form by adding an is-a-subclass-of link from the node to the node.

Now search for any active formulas of the form where is the predicate symbol in the input formula, and, for each such formula, apply Aristotelian Syllogism to infer , and provide this to the controller. This is an event of Type 3.
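
The three rejection checks for Event Type 4 can be sketched as follows. The sketch assumes the input asserts that category sub is a subclass of category sup, reads the first check as asking whether the two categories have been declared disjoint, and takes the redundancy and loop tests to be reachability tests in the indicated directions; these readings, like the function names, are assumptions made for illustration.

```python
def check_subclass_input(sub, sup, subclass_of, disjoint_pairs):
    """Return None if the input 'sub is-a-subclass-of sup' is acceptable,
    otherwise a string explaining why it must be rejected.

    subclass_of: dict category -> set of direct superclasses
    disjoint_pairs: set of frozensets of categories declared disjoint
    """
    def ancestors(cat):
        seen, stack = set(), [cat]
        while stack:
            for parent in subclass_of.get(stack.pop(), set()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    if frozenset((sub, sup)) in disjoint_pairs:
        return "rejected: contradicts an active disjointness declaration"
    if sup in ancestors(sub):
        return "rejected: would create a redundant path in the subsumption hierarchy"
    if sub in ancestors(sup):
        return "rejected: would create a loop in the subsumption hierarchy"
    return None   # acceptable: proceed to update the language, belief set, and graph

subclass_of = {"ComputerScience": {"Science"}, "Science": {"TheLibrary"}}
print(check_subclass_input("ComputerScience", "TheLibrary", subclass_of, set()))
# rejected: would create a redundant path in the subsumption hierarchy
print(check_subclass_input("TheLibrary", "ComputerScience", subclass_of, set()))
# rejected: would create a loop in the subsumption hierarchy
```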

Event Type 5: A formula of the form is provided to the controller by a user. If either or is not in the symbol set for , form by adding the ones that are missing, otherwise let . Form from by adding the labeled formula . If there are no nodes representing either or in , form by adding such nodes together with an are-disjoint link between the two nodes. If one of and is represented by a node in but the other is not, form by adding a node representing the one that is missing together with an are-disjoint link between the two nodes. If both and are represented by nodes in , then form by adding an are-disjoint link between the two nodes.

Having accomplished this, search for active formulas of the forms and , and, if found, apply Conflict Detection to infer the formula , and provide this to the controller. This is an event of Type 2.

It may be noted that the above does not provide for the user’s removing links from the graph, or more exactly, changing the status of an extralogical axiom from bel to disbel. An ability to do so would obviously be desirable in any practical application. In particular, if the user wished to modify the graph by inserting a new category represented by between two existing categories represented by and , one would need to remove the link between (the categories represented by) and , and then add a link from to and a link from to . It is not difficult to see that removing a link can be handled in a straightforward manner, simply by following to-lists starting with the formula representing the link between and and changing the status of all formulas whose derivations relied on that formula to disbel. In effect, this undoes the event of adding the link in question, as well as any further additions to the belief set that may have been based on the presence of the link.
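
A sketch of this link-removal procedure is given below: starting from the index of the formula representing the link, follow to-lists transitively and change the status of every dependent formula to disbel. The dictionary encoding of the belief set is an assumption, and the sketch follows only the recorded derivations, exactly as described above.

```python
def disbelieve_with_dependents(start_index, belief_set):
    """Set the formula at start_index to 'disbel', then follow to-lists
    transitively and disbelieve every formula whose derivation depended on it.

    belief_set: dict index -> {'status': 'bel' | 'disbel', 'to_list': [indexes]}
    """
    stack = [start_index]
    while stack:
        entry = belief_set[stack.pop()]
        if entry["status"] == "bel":
            entry["status"] = "disbel"
            stack.extend(entry["to_list"])

belief_set = {
    1:  {"status": "bel", "to_list": [10]},   # the formula for the link being removed
    9:  {"status": "bel", "to_list": [10]},   # another premise of formula 10; stays believed
    10: {"status": "bel", "to_list": []},     # derived using formula 1
}
disbelieve_with_dependents(1, belief_set)
print({i: e["status"] for i, e in belief_set.items()})
# {1: 'disbel', 9: 'bel', 10: 'disbel'}
```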

Note also that none of the given events employ the Subsumption rule. This is because, in the present example, information regarding subsumption among the categories was not included as part of the ‘salient information’. Such information could be added in a future example, but this would make the associated algorithms more complex.

Proposition 4.1. The DMA is a normal DRS.

4.2 Illustration

The application of the algorithms associated with the foregoing events can be illustrated by considering the inputs needed to create the concept taxonomy shown in Figure 4.1. Let us abbreviate ‘TheLibrary’, ‘Science’, ‘Engineering’, ‘Humanities’, ‘ComputerScience’, ‘Philosophy’, and ‘ArtificialIntelligence’, respectively, by ‘TL’, ‘S’, ‘E’, ‘H’, ‘CS’, ‘P’, and ‘AI’. In accordance with the definition of derivation path in Section 2.1, the language will be the language generated by the logical symbols given in Section 3.1, i.e., by . This means that the only formula in is . Also in accordance with Section 2.1, belief set . In accordance with the definition of the DMA, set .

Consider an input of the first formula in the foregoing list, namely, . This is an event of Type 4. The language is formed from by adding the symbols S and TL (or, more exactly, the predicate letters and ), i.e., . The belief set is formed from by adding the labeled formula . The graph is formed from by adding the vertices TL, S and the edge (link) .

The inputs of the next six formulas in the foregoing list are all handled similarly, each comprising an event of Type 4. This leads to a language generated by symbol set . Belief set will consist of the seven indicated labeled formulas with indexes (time stamps) 1 through 7. Graph will consist of the vertices TL, S, E, H, CS, P, AI and the seven is-a-subclass-of links shown in Figure 4.1.

Input of is an event of Type 5. This gives , is formed from by adding the given input formula together with a label having index 8, and is formed from by adding the are-disjoint edge (E, H).

Consider input of the formula . This is an event of Type 1. This gives symbol set , the belief set is formed from by adding the input formula with a label having index 9, and graph is obtained from by adding the is-an-element-of edge . The algorithm for Event Type 1 then proceeds to apply Aristotelian Syllogism to the input formula and the formula that was input in step 1 (counting as step 0). This derives the formula and provides this formula to the controller. This is an event of Type 3. The effect of the algorithm for this event type is that , , and is formed from by adding the newly derived formula with a label having index 10. In addition, the from-list in the label for the derived formula is set to , and the to-lists in the labels for the formulas with indexes 9 and 1 are set to . For brevity in the following, assume that similar updates of from-lists and to-lists are performed as appropriate in accordance with the definitions in Section 2.1. Note that a from-list will refer to at most one inference rule and set of premises, whereas a to-list may contain indexes of any number of derived conclusions.

Consider input of the formula . This is an event of Type 1. This gives , is formed from by adding the input formula with a label having index 11, and is formed from by adding the edge . The algorithm for Event Type 1 then proceeds to apply Aristotelian Syllogism to this formula and the formula that was input in step 2. This derives the formula and provides this formula to the controller. This is an event of Type 3. Since the derived formula is already in the belief set, the rule forbidding duplicates applies, and the algorithm for Event Type 3 is not invoked.

Consider input of the formula . This is an event of Type 1. This gives symbol set , the belief set is formed from by adding the input formula with a label having index 12, and graph is obtained from by adding the is-an-element-of edge . The algorithm for Event Type 1 then proceeds to apply Aristotelian Syllogism to this formula and the formula that was input in step 7. This derives the formula and provides this formula to the controller. This is an event of Type 3. The algorithm for Type 3 is invoked, giving , is formed from by adding the derived formula with a label having index 13, and . Then, continuing with the algorithm for Event Type 3, Aristotelian Syllogism is applied to this formula and the formula that was input in step 4. This derives the formula and provides this formula to the controller. This is a recursive invocation of Event Type 3 leading to , is formed from by adding the derived formula with a label having index 14, and . Then, continuing with the algorithm for Event Type 3, Aristotelian Syllogism is applied to this formula and the formula that was input in step 4 (as this is the next formula in the belief set to which the inference rule can be applied). This derives the formula and provides this formula to the controller, which is another event of Type 3. The algorithm for this event type yields , is formed from by adding the derived formula with a label having index 15, and . Since there are no opportunities to apply Aristotelian Syllogism with this formula, the recursion now backtracks to the first invocation of Event Type 3, since there is another opportunity to apply Aristotelian Syllogism at that point, this time to the derived formula and the formula that was input in step 5. The algorithm proceeds similarly with the foregoing, giving , is formed from by adding the formula with a label having index 16, , and then Aristotelian Syllogism is applied deriving , giving an event of Type 3 whose algorithm is not invoked because of the rule forbidding duplicates in the belief set.

Consider input of the formula . This is an event of Type 1. Similarly with the foregoing, this gives symbol set , is formed from by adding the input formula with a label having index 17, and is formed from by adding the edge . The algorithm for Event Type 1 then proceeds to apply Aristotelian Syllogism to this formula and the formula that was input in step 6. This derives the formula and provides this formula to the controller. This is an event of Type 3. The algorithm for Type 3 is invoked, giving , is formed from by adding the derived formula with a label having index 18, and . Then, continuing with the algorithm for Event Type 3, Aristotelian Syllogism is applied to this formula and the formula that was input in step 3. This derives the formula and provides this formula to the controller, which is another event of Type 3. The algorithm for this event type yields , is formed from by adding the derived formula with a label having index 19, and . Since there are no opportunities to apply Aristotelian Syllogism with this formula, the algorithm terminates.

This completes the construction of the taxonomy in Figure 4.1. At this point the language is the one generated by symbol set , the belief set consists of the labeled formulas described above listed in the order of their indexes 1 through 19, and consists of the nodes and edges shown in Figure 4.1. Note that, at each time step , the belief set contains all formulas of the form that are implicit in the graph .

Now suppose the user inputs . This is an event of Type 1. Since both CS and Doc3 are in the current symbol set, this gives , is formed from by adding the input formula with a label having index 20, and is formed from by adding the edge . The algorithm for Event Type 1 then proceeds to apply Aristotelian Syllogism to this formula and the formula that was input in step 4. This derives the formula and provides this formula to the controller. This is an event of Type 3. The algorithm for Type 3 is invoked, giving