# Polytool: Polynomial Interpretations as a Basis for Termination Analysis of Logic Programs

MANH THANG NGUYEN (deceased on June 3, 2009) and DANNY DE SCHREYE
Department of Computer Science, K. U. Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium
Danny.DeSchreye@cs.kuleuven.ac.be

JÜRGEN GIESL
LuFG Informatik 2, RWTH Aachen
Ahornstr. 55, D-52074 Aachen, Germany
giesl@informatik.rwth-aachen.de

PETER SCHNEIDER-KAMP
Dept. of Mathematics and Computer Science, U. Southern Denmark
Campusvej 55, DK-5230 Odense M, Denmark
petersk@imada.sdu.dk

###### Abstract

Our goal is to study the feasibility of porting termination analysis techniques developed for one programming paradigm to another paradigm. In this paper, we show how to adapt termination analysis techniques based on polynomial interpretations, which are well known in the context of term rewrite systems (TRSs), to obtain new (non-transformational) termination analysis techniques for definite logic programs (LPs). This leads to an approach that can be seen as a direct generalization of the traditional techniques in termination analysis of LPs, where linear norms and level mappings are used. Our extension generalizes these to arbitrary polynomials. We extend a number of standard concepts and results on termination analysis to the context of polynomial interpretations. We also propose a constraint-based approach for automatically generating polynomial interpretations that satisfy the termination conditions. Based on this approach, we implemented a new tool, called Polytool, for automatic termination analysis of LPs.

KEYWORDS: Termination analysis, acceptability, polynomial interpretations.

## 1 Introduction

Termination analysis plays an important role in the study of program correctness. A termination proof is mostly based on a mapping from computational states to some well-founded ordered set. Termination is guaranteed if the mapped values of the encountered states during a computation, under this mapping, decrease w.r.t. the order.

For LPs, termination analysis is done by mapping terms and atoms to a well-founded set of natural numbers by means of norms and level mappings. Proving termination is based on the search for a suitable norm and level mapping such that the resulting predicate calls decrease under the mapping.

Until now, most termination techniques for LPs have been based on the use of linear norms and linear level mappings, which measure the size of each term or atom as a linear combination of the sizes of its sub-terms. For example, the Hasta-La-Vista system [SerebrenikandDeSchreye03] infers one specific linear norm and linear level mapping. In the context of numerical computations, it includes a refinement of this, based on a case analysis. The tool cTI [MesnardBagnara05] uses a concrete linear norm. The analyzers TermiLog [lindenstrauss97, Termilog] and TerminWeb [Codishetal99, terminWeb02] use a combination of several linear norms to obtain an approximation of the program and then infer linear level mappings for termination analysis of the approximated program. However, the restriction to linear norms and level mappings limits the power of termination analysis considerably. To illustrate this point, consider the following example, der, that formulates rules for computing the repeated derivative of a function in some variable u. This example from [DeSchreyeSerebrenik01, NaomiWST97] is inspired by a similar term rewriting example from [Dershowitz95].

###### Example 1 (der)
 \displaystyle d(\mathit{der}(u),1).
 \displaystyle d(\mathit{der}(X+Y),\mathit{DX}+\mathit{DY}):\!\!-\;d(\mathit{der}(X),\mathit{DX}),d(\mathit{der}(Y),\mathit{DY}).
 \displaystyle d(\mathit{der}(X*Y),X*\mathit{DY}+Y*\mathit{DX}):\!\!-\;d(\mathit{der}(X),\mathit{DX}),d(\mathit{der}(Y),\mathit{DY}).
 \displaystyle d(\mathit{der}(\mathit{der}(X)),\mathit{DDX}):\!\!-\;d(\mathit{der}(X),\mathit{DX}),d(\mathit{der}(\mathit{DX}),\mathit{DDX}).

We are interested in proving termination of this program w.r.t. the set of queries S=\{\,d(t_{1},t_{2})\mid t_{1} is a ground term and t_{2} is an arbitrary term\,\}. So the set of queries is specified by a mode that considers the first argument of d as an input argument and the second one as an output argument.

As shown in [NaomiWST97, MNTDannyd05], no termination proof is possible using a linear norm and a linear level mapping. Indeed, all existing non-transformational termination analyzers for LPs mentioned above fail to prove termination of this example. \square

In this paper, we propose a general framework for termination proofs of LPs based on polynomial interpretations. Using polynomial interpretations as a basis for ordering terms in TRSs was first introduced by Lankford in [Lankford79]. It is currently one of the best known and most widely used techniques in TRS termination analysis.

We develop the approach within an LP context. Classical approaches in LP termination use interpretations that map to natural numbers (using linear polynomial functions). In contrast, we will use interpretations that map to polynomials (using arbitrary polynomial functions). To adapt the classical LP approaches to polynomial interpretations, we use the concepts of “abstract norm” and “abstract level mapping” [Verschaetse&DeSchreye91]. We show that with our new approach, one can also prove termination of programs like Example 1.

We also developed an automated tool (Polytool) for termination analysis based on our approach [Nguyen&DeSchreye06]. We embedded this within the constraint-based approach developed in [Decorteetal98] and combined it with the non-linear Diophantine constraint solver developed by Fuhs et al. [Fuhsc07] (implemented in the AProVE system [Giesletal06]) to provide a completely automated system.

The paper is organized as follows. In the next section, we present some preliminaries. In Section 3, we introduce the notion of polynomial interpretations in logic programming and show how this approach can be used to prove termination. In Section 4, we discuss the automation of the approach. In Section 5, we provide and discuss the results of our experimental evaluation. We end with a conclusion in Section LABEL:conclusion.

## 2 Preliminaries

After introducing the basic terminology of LPs in Section 2.1, we recapitulate the concepts of norms and level mappings in Section 2.2 and explain their use for termination proofs in Section 2.3.

### 2.1 Notations and Terminology

We assume familiarity with LP concepts and with the main results of logic programming [Apt90, Lloyd87]. In the following, P denotes a definite logic program. We use \mathit{Var}_{P}, \mathit{Fun}_{P}, and \mathit{Pred}_{P} to denote the sets of variables, function, and predicate symbols of P. Given an atom A, \mathit{rel}(A) denotes the predicate occurring in A. Let p, q be predicates occurring in the program P. We say that p refers to q if there is a clause in P such that p is in its head and q is in its body. We say that p depends on q if (p,q) is in the transitive closure of the relation “refers to”. If p depends on q and vice versa, p and q are called mutually recursive, denoted by p\backsimeq q. A clause in P with a predicate p in its head and a predicate q in its body, such that p and q are mutually recursive, is called a (mutually) recursive clause. Within such a recursive clause, the body-atoms with predicate symbol q are called (mutually) recursive atoms. Let \mathit{Term}_{P} and \mathit{Atom}_{P} denote, respectively, the sets of all terms and atoms that can be constructed from P.

In this paper, we focus our attention on definite logic programs and SLD-derivations where the left-to-right selection rule is used. Such derivations are referred to as LD-derivations; the corresponding derivation tree is called LD-tree. We say that a query Q LD-terminates for a program P, if the LD-tree for (P,Q) is finite (left-termination [Lloyd87]). In the following, we usually speak of “termination” instead of “LD-termination” or “left-termination”.

### 2.2 Norms and Level Mappings

The concepts of norm and level mapping are central in termination analysis of logic programs.

###### Definition 1 (norm, level mapping)

A norm is a mapping {\parallel}.{\parallel}:\mathit{Term}_{P}\rightarrow\mathbb{N}. A level-mapping is a mapping {\mid}.{\mid}:\mathit{Atom}_{P}\rightarrow\mathbb{N}.

Several examples of norms can be found in the literature [Bossietal91]. One of the most commonly used norms is the list-length norm {\parallel}.{\parallel}_{\ell} which maps lists to their lengths and any other term to 0. Another frequently used norm is the term-size norm {\parallel}.{\parallel}_{\tau} which counts the number of function symbols in a term. Both of them belong to a class of norms called linear norms which is defined as follows.

###### Definition 2 (linear norm and level mapping [Serebrenik03])

A norm {\parallel}.{\parallel} is a linear norm if it is recursively defined by means of the following schema:

1. {\parallel}X{\parallel}=0 for any variable X,

2. {\parallel}f(t_{1},\ldots,t_{n}){\parallel}=f_{0}+\sum_{i=1}^{n}f_{i}{% \parallel}t_{i}{\parallel} where f_{i}\in\mathbb{N} and n\geq 0.

Similarly, a level mapping {\mid}.{\mid} is a linear level mapping if it is defined by means of the following schema:

1. {\mid}p(t_{1},\ldots,t_{n}){\mid}=p_{0}+\sum_{i=1}^{n}p_{i}{\parallel}t_{i}{\parallel} where p_{i}\in\mathbb{N} and n\geq 0.

### 2.3 Conditions for Termination w.r.t. General Orders

A quasi-order on a set S is a reflexive and transitive binary relation \succsim defined on elements of S. We define the associated equivalence relation \approx as s\approx t if and only if s\succsim t and t\succsim s. A well-founded order on S is a transitive relation \succ where there is no infinite sequence s_{0}\succ s_{1}\succ\ldots with s_{i}\in S. A reduction pair (\succsim,\succ) consists of a quasi-order \succsim and a well-founded order \succ that are compatible (i.e., t_{1}\succsim t_{2}\succ t_{3} implies t_{1}\succ t_{3}). We also need the following notion of a call set.

###### Definition 3 (call set)

Let \mathit{P} be a program and \mathit{S} be a set of atomic queries. The call set, \mathit{Call}(P,S), is the set of all atoms \mathit{A}, such that a variant of \mathit{A} is the selected atom in some derivation for \mathit{(P,Q)}, for some Q\in S.

Most often, one regards infinite sets \mathit{S} of queries. For instance, this is the case in Example 1. As in Example 1, \mathit{S} is then specified in terms of modes or types. As a consequence, in an automated approach, a safe over-approximation of \mathit{Call}(P,S) needs to be computed, using a mode or a type inference technique (e.g., [Bruynoogheetal05, GallagherHB05, HeatonACK00, Janssensetal92]).

In order to obtain a termination criterion that is suitable for automation, one usually estimates the effect of the atoms in the bodies of clauses by suitable interargument relations. This notion can be defined for arbitrary reduction pairs.

###### Definition 4 (interargument relation [DeSchreyeSerebrenik01])

Let P be a program, p be a predicate in P, and (\succsim,\succ) be a reduction pair on \mathit{Term}_{P}. An interargument relation for p in P w.r.t. (\succsim,\succ) is a relation R_{p} with the same arity as p: R_{p}=\{p(t_{1},\ldots,t_{n})\mid t_{i}\in\mathit{Term}_{P}\mbox{ for all $1% \leq i\leq n$, and }\varphi_{p}(t_{1},\ldots,t_{n})\}, where:

1. \varphi_{p}(t_{1},\ldots,t_{n}) is a boolean expression (in terms of disjunction, conjunction, and negation) of inequalities s\succsim s^{\prime} or s\succ s^{\prime}, in which

2. s,s^{\prime} are constructed from t_{1},\ldots,t_{n} by applying function symbols from \mathit{Fun}_{P}.

\mathit{R_{p}} is a valid interargument relation for \mathit{p} in P w.r.t. (\succsim,\succ) if and only if for every p(t_{1},\ldots,t_{n})\in\mathit{Atom}_{P}: P\models p(t_{1},\ldots,t_{n}) implies p(t_{1},\ldots,t_{n})\in R_{p}.

###### Example 2 (interargument relation)

Let P be the standard \mathit{append} program that computes list concatenation. Then there are a number of valid interargument relations. Consider the reduction pair (\succsim,\succ) corresponding to the list-length norm {\parallel}.{\parallel}_{\ell}, i.e., t_{1}\succsim t_{2} if and only if {\parallel}t_{1}{\parallel}_{\ell}\geq{\parallel}t_{2}{\parallel}_{\ell} and t_{1}\succ t_{2} if and only if {\parallel}t_{1}{\parallel}_{\ell}>{\parallel}t_{2}{\parallel}_{\ell}. For instance, valid interargument relations for append w.r.t. (\succsim,\succ) take the form R_{\mathit{append}}=\{\mathit{append}(t_{1},t_{2},t_{3})\mid t_{1},t_{2},t_{3}\in\mathit{Term}_{P}\wedge\varphi_{\mathit{append}}(t_{1},t_{2},t_{3})\}, where \varphi_{\mathit{append}}(t_{1},t_{2},t_{3}) could be:

1. t_{3}\succsim t_{2}\wedge t_{3}\succsim t_{1},

2. t_{3}\succsim t_{2},

3. [t_{1},t_{2}|t_{3}]\succ[t_{2}|t_{3}], or

4. \mathit{true}.

Of course, usually only the first two interargument relations are useful for termination analysis. \mathit{\square}
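The first two candidate relations can be spot-checked on a sample of true append/3 facts; this sketch assumes Python lists as the term encoding, so the list-length norm is simply `len`:

```python
# Enumerate ground triples for which append(t1, t2, t3) is true, and check
# the candidate interargument relations under the list-length norm (len).
def append_facts(items=('a', 'b', 'c', 'd')):
    for i in range(len(items) + 1):
        for j in range(len(items) + 1 - i):
            t1, t2 = list(items[:i]), list(items[i:i + j])
            yield t1, t2, t1 + t2          # append(t1, t2, t3) holds

ok1 = all(len(t3) >= len(t2) and len(t3) >= len(t1) for t1, t2, t3 in append_facts())
ok2 = all(len(t3) >= len(t2) for t1, t2, t3 in append_facts())
```

Such sampling is only a sanity check, of course; validity must be established for all elements of the Herbrand model, as discussed in Section 3.2.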

Finally, we need the notion of rigidity, in order to deal with bindings that are due to unification in LD-derivations. These bindings would have to be back-propagated to the variables in the initial goal. We reformulate rigidity for arbitrary reduction pairs.

###### Definition 5 (rigidity - adapted from [DeSchreyeSerebrenik01])

A term or atom A\in\mathit{Term}_{P}\cup\mathit{Atom}_{P} is called rigid w.r.t. a reduction pair (\succsim,\succ) if A\approx A\sigma holds for any substitution \sigma. A set of terms (or atoms) S is called rigid w.r.t. (\succsim,\succ) if all its elements are rigid w.r.t. (\succsim,\succ).

###### Example 3 (rigidity)

The list [X|t] (X is a variable, t is a ground term) is rigid w.r.t. the reduction pair (\succsim,\succ) corresponding to the list-length norm. For any substitution \sigma, we have {\parallel}[X|t]{\sigma}{\parallel}_{\ell}=1+{\parallel}t{\parallel}_{\ell}={% \parallel}[X|t]{\parallel}_{\ell}. Therefore, [X|t]\sigma\approx[X|t] w.r.t. (\succsim,\succ).

However, the list [X|t] is not rigid w.r.t. the reduction pair (\succsim^{\prime},\succ^{\prime}) corresponding to the term-size norm {\parallel}.{\parallel}_{\tau}, i.e., t_{1}\succsim^{\prime}t_{2} if and only if {\parallel}t_{1}{\parallel}_{\tau}\geq{\parallel}t_{2}{\parallel}_{\tau} and t_{1}\succ^{\prime}t_{2} if and only if {\parallel}t_{1}{\parallel}_{\tau}>{\parallel}t_{2}{\parallel}_{\tau}. \mathit{\square}
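Both rigidity claims can be illustrated concretely; the tuple encoding with variables as strings is an assumption of this sketch, and we instantiate X in the list [X] (an instance of [X|t] with t = []):

```python
# List-length and term-size norms on a tuple term encoding (variables as strings).
def list_length(t):
    return 1 + list_length(t[2]) if isinstance(t, tuple) and t[0] == '.' else 0

def term_size(t):
    return 0 if isinstance(t, str) else 1 + sum(term_size(a) for a in t[1:])

base  = ('.', 'X', ('[]',))              # [X | []], X a variable
inst1 = ('.', ('b',), ('[]',))           # sigma = {X -> b}
inst2 = ('.', ('f', ('b',)), ('[]',))    # sigma = {X -> f(b)}

# Rigid w.r.t. list-length: the norm is unchanged by the substitutions.
rigid_ll = list_length(base) == list_length(inst1) == list_length(inst2)
# Not rigid w.r.t. term-size: the norm varies with the substitution.
rigid_ts = term_size(base) == term_size(inst1) == term_size(inst2)
```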

The following definition introduces the desired termination criterion, i.e., it recalls the definition of rigid order-acceptability w.r.t. a set of atoms.

###### Definition 6 (rigid order-acceptability [DeSchreyeSerebrenik01])

Let S be a set of atomic queries. A program P is rigid order-acceptable w.r.t. S if there exists a reduction pair (\succsim,\succ) on \mathit{Atom}_{P} where \mathit{Call}(P,S) is rigid w.r.t. (\succsim,\succ) and where for each predicate p in P, there is a valid interargument relation R_{p} in P w.r.t. (\succsim,\succ) such that

1. for any clause A:\!\!-\;B_{1},B_{2},\ldots,B_{n} in P,

2. for any atom B_{i}\in\{B_{1}\ldots,B_{n}\} such that \mathit{rel}(B_{i})\backsimeq\mathit{rel}(A),

3. for any substitution \theta such that the atoms B_{1}\theta,\ldots,B_{i-1}\theta are elements of their associated interargument relations R_{\mathit{rel}(B_{1})},\ldots,R_{\mathit{rel}(B_{i-1})}: A\theta\succ B_{i}\theta.

Theorem 1 states that rigid order-acceptability is a sufficient condition for termination. We refer to [Serebrenik03], Theorems 3.32 and 3.54, for the proof of Theorem 1.

###### Theorem 1 (termination criterion by rigid order-acceptability)

If P is rigid order-acceptable w.r.t. S, then P terminates for any query in S.

Rigid order-acceptability is sufficient for termination, but is not necessary for it (see [DeSchreyeSerebrenik01]). With Definition 6 and Theorem 1, proving termination of a program requires verifying the rigidity of the call set, verifying the validity of interargument relations for predicates, and verifying the decrease conditions for the (mutually) recursive clauses.

We will not discuss here the decidability or undecidability results related to various problems concerning: (i) the rigidity of the call set and (ii) the validity of interargument relations. The interested reader may refer to the relevant literature.

In the remainder of this paper, we address these issues in the setting of a given set S of queries, an inferred order based on polynomial interpretations, abstractions of S based on types, type inference to approximate the call set, and interargument relations based on inequalities between polynomials.

## 3 Polynomial Interpretation of a Logic Program

The approach presented in the previous section can be considered a theoretical framework for termination analysis of LPs based on general orders on terms and atoms. In this section, we specialize it to orders based on polynomial interpretations.

We first introduce polynomial interpretations in Section 3.1. Then in Section 3.2 we reformulate the termination conditions for LPs from Section 2.3 for polynomial interpretations.

### 3.1 Polynomial Interpretations

In this paper, we only consider polynomials with natural numbers as coefficients (so-called “natural coefficients”). Because natural numbers will occur many times in this paper, we will simply refer to them as “numbers”.

We say that a variable X occurs in a polynomial p if the polynomial contains a monomial with a coefficient different from 0 and X occurs in this monomial. If X_{1},\ldots,X_{n} are all the variables occurring in a polynomial p, we often denote p as p(X_{1},\ldots,X_{n}). For every polynomial p, there is an associated polynomial function F_{p}=\lambda X_{1},\ldots,X_{n}. p(X_{1},\ldots,X_{n}). For numbers or polynomials x_{1},\ldots,x_{n}, we often write “p(x_{1},\ldots,x_{n})” instead of “F_{p}(x_{1},\ldots,x_{n})”. Given p(X_{1},\ldots,X_{n}) and m\geq 1 we also have an associated polynomial function F_{p,m}=\lambda X_{1},\ldots,X_{n},Y_{1},\ldots,Y_{m}. \ p(X_{1},\ldots,X_{n}). For such an associated function on an extended domain, we often write “p(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})” to denote “F_{p,m}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})”.
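The associated polynomial function F_{p} can be sketched on a simple data representation; the dict-of-exponent-tuples encoding below is an assumption of this illustration:

```python
# Sketch: a polynomial over X1..Xn is a dict mapping an exponent tuple
# (e1, ..., en) to its natural coefficient; poly_eval computes F_p.
from math import prod

def poly_eval(p, xs):
    """Evaluate p at the point xs = (x1, ..., xn)."""
    return sum(c * prod(x ** e for x, e in zip(xs, exps)) for exps, c in p.items())

# p(X1, X2) = X1^2 + 2*X1*X2
p = {(2, 0): 1, (1, 1): 2}
```

For example, `poly_eval(p, (3, 4))` computes 3^2 + 2*3*4 = 33. The extended functions F_{p,m} simply ignore the extra arguments y_1, ..., y_m.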

###### Definition 7 (orders on polynomials)

Let p and q be two polynomials. Let X_{1},\ldots,X_{n} be all variables occurring in p or q. The quasi-order \succsim_{\mathbb{N}} is defined as p\succsim_{\mathbb{N}}q if and only if p(x_{1},\ldots,x_{n})\geq q(x_{1},\ldots,x_{n}) for all x_{1},\ldots,x_{n}\in\mathbb{N}. The strict order \succ_{\mathbb{N}} is defined as p\succ_{\mathbb{N}}q if and only if p(x_{1},\ldots,x_{n})>q(x_{1},\ldots,x_{n}) for all x_{1},\ldots,x_{n}\in\mathbb{N}.

Observe that (\succsim_{\mathbb{N}},\succ_{\mathbb{N}}) is a reduction pair. In other words, \succ_{\mathbb{N}} is well-founded and transitive, \succsim_{\mathbb{N}} is reflexive and transitive, and \succsim_{\mathbb{N}} and \succ_{\mathbb{N}} are compatible. Let \Sigma denote the set of all polynomials with natural coefficients. Note that all these polynomials p are weakly monotonic, i.e., x_{i}\geq y_{i} for all 1\leq i\leq n implies p(x_{1},\ldots,x_{n})\geq p(y_{1},\ldots,y_{n}).
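Since Definition 7 quantifies over all naturals, automated tools work with sufficient criteria rather than deciding the orders directly. One simple such criterion (a sketch, not the method used by Polytool) is coefficient-wise domination, using the dict-of-exponent-tuples encoding for polynomials with natural coefficients:

```python
def geq_N(p, q):
    """Sufficient (not necessary) test for p >=_N q: every coefficient of q
    is dominated by the corresponding coefficient of p."""
    return all(p.get(e, 0) >= c for e, c in q.items())

def gt_N(p, q, n):
    """Sufficient test for p >_N q over n variables: domination plus a strictly
    larger constant term, so p(x) - q(x) >= p0 - q0 >= 1 for all x in N^n."""
    zero = (0,) * n
    return geq_N(p, q) and p.get(zero, 0) > q.get(zero, 0)

p_der = {(2,): 1, (1,): 2, (0,): 2}   # X^2 + 2X + 2
q_id  = {(1,): 1}                     # X
```

Here `gt_N(p_der, q_id, 1)` succeeds: X^2 + 2X + 2 > X for all natural X.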

A polynomial interpretation maps each function and each predicate symbol of the program to a polynomial.

###### Definition 8 (polynomial interpretation)

A polynomial interpretation \mathit{I} for a logic program P maps each symbol f of arity n in \mathit{Fun}_{P}\cup\mathit{Pred}_{P} to a polynomial p_{f}(X_{1},\ldots,X_{n}).

Every polynomial interpretation induces a norm and a level mapping. Although it is standard in logic programming to distinguish between norms and level mappings, to simplify the formalization, here we will only introduce a level mapping and define it on both terms and atoms.

###### Definition 9 (polynomial level mapping)

The level mapping associated with a polynomial interpretation I, is a mapping {\mid}.{\mid}_{I}:\mathit{Term}_{P}\cup\mathit{Atom}_{P}\rightarrow\Sigma, which is defined recursively as:

1. {\mid}X{\mid}_{I}=X if X is a variable,

2. {\mid}f(t_{1},\ldots,t_{n}){\mid}_{I}=p_{f}({\mid}t_{1}{\mid}_{I},\ldots,{\mid% }t_{n}{\mid}_{I}), where p_{f}=I(f).
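The recursion of Definition 9 can be sketched directly; as a simplifying assumption, this sketch evaluates {\mid}t{\mid}_{I} pointwise at a given assignment of numbers to variables, rather than computing the polynomial symbolically:

```python
# I maps each symbol to a Python function, terms are (functor, arg1, ...) tuples,
# variables are strings, and env assigns a number to each variable.
def level(t, I, env):
    if isinstance(t, str):                 # a variable X: |X|_I = X
        return env[t]
    f, *args = t
    return I[f](*(level(a, I, env) for a in args))

# Toy interpretation (an assumption of this sketch): I(f) = X^2 + 1, I(a) = 2
I = {'f': lambda x: x * x + 1, 'a': lambda: 2}
```

For instance, `level(('f', ('f', ('a',))), I, {})` computes (2^2 + 1)^2 + 1 = 26.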

Every polynomial interpretation induces corresponding orders.

###### Definition 10 (reduction pair corresponding to polynomial interpretation)

Let I be a polynomial interpretation. We define the relations \succsim_{I} and \succ_{I} on \mathit{Term}_{P}\cup\mathit{Atom}_{P} as follows:

1. s\succsim_{I}t if and only if {\mid}s{\mid}_{I}\succsim_{\mathbb{N}}{\mid}t{\mid}_{I} for any s,t\in\mathit{Term}_{P}\cup\mathit{Atom}_{P}

2. s\succ_{I}t if and only if {\mid}s{\mid}_{I}\succ_{\mathbb{N}}{\mid}t{\mid}_{I} for any s,t\in\mathit{Term}_{P}\cup\mathit{Atom}_{P}

Again, observe that the orders induced by a polynomial interpretation form a reduction pair.

###### Example 4 (polynomial interpretation for “der”)

Let I be a polynomial interpretation with

 \begin{array}{lcl}I(+)=I(*)&=&p_{+}(X_{1},X_{2})=p_{*}(X_{1},X_{2})=X_{1}+X_{2}+2\\ I(u)=I(1)&=&p_{u}=p_{1}=1\\ I(\mathit{der})&=&p_{\mathit{der}}(X)=X^{2}+2X+2\\ I(d)&=&p_{d}(X_{1},X_{2})=X_{1}\end{array}

Then d(\mathit{der}(X+Y),DX+DY)\succ_{I}d(\mathit{der}(X),DX), since {\mid}d(\mathit{der}(X+Y),DX+DY){\mid}_{I}=(X+Y+2)^{2}+2(X+Y+2)+2\succ_{\mathbb{N}}{\mid}d(\mathit{der}(X),DX){\mid}_{I}=X^{2}+2X+2. \square
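This comparison can be spot-checked numerically over small naturals (a sampled sanity check, not a proof over all of \mathbb{N}); since p_{d}(X_{1},X_{2})=X_{1}, the comparison reduces to p_{\mathit{der}}(p_{+}(X,Y))>p_{\mathit{der}}(X):

```python
# Interpretation I from the example above.
p_plus = lambda a, b: a + b + 2          # I(+) = I(*)
p_der  = lambda a: a * a + 2 * a + 2     # I(der)

# |d(der(X+Y), DX+DY)|_I > |d(der(X), DX)|_I, sampled over small naturals.
ok = all(p_der(p_plus(X, Y)) > p_der(X) for X in range(20) for Y in range(20))
```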

### 3.2 Termination of Logic Programs by Polynomial Interpretations

We now re-state Definition 6 and Theorem 1 for the special case of polynomial interpretations. So instead of interargument relations for arbitrary orders as in Definition 4, we now use interargument relations w.r.t. polynomial interpretations.

###### Definition 11 (interargument relation w.r.t. a polynomial interpretation)

Let P be a program, p be a predicate in P, and I be a polynomial interpretation. R_{p} is an interargument relation for p in P w.r.t. I iff R_{p} is an interargument relation for p in P w.r.t. (\succsim_{I},\succ_{I}).

Instead of rigidity w.r.t. general orders as in Definition 5, we define rigidity w.r.t. polynomial interpretations.

###### Definition 12 (rigidity w.r.t. a polynomial interpretation)

A term or atom A\in\mathit{Term}_{P}\cup\mathit{Atom}_{P} is called rigid w.r.t. a polynomial interpretation I iff A is rigid w.r.t. (\succsim_{I},\succ_{I}), i.e., iff A\;{\approx}_{I}\,A\sigma holds for any substitution \sigma. A set of terms (or atoms) S is called rigid w.r.t. I if all its elements are rigid w.r.t. I.

For polynomial interpretations, rigidity can also be characterized in an alternative way using relevant variables.

###### Definition 13 (relevant variables)

Let I be a polynomial interpretation and A be a term or atom. A variable X in A is called relevant w.r.t. I if there exists a substitution \{X\rightarrow t\} of a term t for X, such that A\{X\rightarrow t\}\not\approx_{I}A.

###### Example 5 (relevant variables)

Let A=[X|Y] and \mathit{I} be the interpretation corresponding to the list-length norm {\parallel}.{\parallel}_{\ell}, i.e., {\mid}[H|T]{\mid}_{I}=1+{\mid}T{\mid}_{I}. Then the only relevant variable of A is Y. \square

###### Proposition 1 (alternative characterization of rigidity)

Let I be a polynomial interpretation and A be a term or atom. Then A is rigid w.r.t. I iff A has no relevant variables w.r.t. I.

Proof. Obvious from Definitions 12 and 13. \square

Using the notions of interargument relations and rigidity w.r.t. a polynomial interpretation, we obtain the following specialization of Theorem 1:

###### Corollary 1 (termination criterion with polynomial rigid order-acceptability)

Let S be a set of atomic queries and P be a program. Let I be a polynomial interpretation, where \mathit{Call}(P,S) is rigid w.r.t. I and where for each predicate p in P, there is a valid interargument relation R_{p} in P w.r.t. I such that

1. for any clause A:\!\!-\;B_{1},B_{2},\ldots,B_{n} in P,

2. for any atom B_{i}\in\{B_{1}\ldots,B_{n}\} such that \mathit{rel}(B_{i})\backsimeq\mathit{rel}(A),

3. for any substitution \theta such that the atoms B_{1}\theta,\ldots,B_{i-1}\theta are elements of their associated interargument relations R_{\mathit{rel}(B_{1})},\ldots,R_{\mathit{rel}(B_{i-1})}:

4. A\theta\succ_{I}B_{i}\theta.

Then P terminates for any query in S.

Proof. The corollary immediately follows from Theorem 1. \square

Corollary 1 can be applied to verify termination of a logic program w.r.t. a set of queries. More precisely, we have to check that all conditions in the following termination proof procedure are satisfied by some polynomial interpretation I. In Section 4 we will discuss how to find such an interpretation automatically.

###### Procedure 1 (a procedure for automatic termination analysis)

The termination proof procedure derived from Corollary 1 contains the following three steps:

1. Step 1: The call set \mathit{Call}(P,S) must be rigid w.r.t. I. In other words, no query A in the call set may have a relevant variable w.r.t. I.

2. Step 2: For a clause that has body-atoms between the head and a (mutually) recursive body-atom, valid interargument relations of those atoms w.r.t. I need to be inferred.

3. Step 3: For every clause, the polynomial level mapping of the head w.r.t. I should be larger than that of any (mutually) recursive body-atom, given that interargument relations for intermediate body-atoms hold.

For Step 2, we can follow the standard approach for LPs to verify that a relation R holds for all elements of the Herbrand model (see e.g. [Lloyd87]). To this end, one has to verify T_{P}(R)\subseteq R, where T_{P} is the immediate consequence operator corresponding to the program P. Thus, we verify the validity of interargument relations by first checking whether they are correct for the facts in the program. Then for every clause, if the interargument relations hold for all body-atoms, the interargument relation for the head should also hold.

###### Example 6 (applying Corollary 1 to the “der”-program)

Consider again the “der”-program from Example 1 and the set of queries S=\{d(t_{1},t_{2})\mid t_{1} is a ground term and t_{2} is an arbitrary term\}. Note that here, \mathit{Call}(P,S)=S. Let I be the polynomial interpretation from Example 4. Then no A\in\mathit{Call}(P,S) has a relevant variable w.r.t. I. This means that \mathit{Call}(P,S) is rigid w.r.t. I.

Let R_{d}=\{d(t_{1},t_{2})\mid t_{1},t_{2}\in Term_{P},t_{1}\succ_{I}t_{2}\} be an interargument relation for the predicate d. Checking the validity of R_{d} is equivalent to verifying the correctness of the following conditions for any substitution \theta:

\mathit{der}(u)\theta\succ_{I}(1)\theta

\mathit{der}(X)\theta\succ_{I}\mathit{DX}\theta and \mathit{der}(Y)\theta\succ_{I}\mathit{DY}\theta implies

\mathit{der}(X+Y)\theta\succ_{I}(\mathit{DX}+\mathit{DY})\theta

\mathit{der}(X)\theta\succ_{I}\mathit{DX}\theta and \mathit{der}(Y)\theta\succ_{I}\mathit{DY}\theta implies

\mathit{der}(X*Y)\theta\succ_{I}(X*\mathit{DY}+Y*\mathit{DX})\theta

\mathit{der}(X)\theta\succ_{I}\mathit{DX}\theta and \mathit{der}(\mathit{DX})\theta\succ_{I}\mathit{DDX}\theta implies

\mathit{der}(\mathit{der}(X))\theta\succ_{I}\mathit{DDX}\theta.

To prove termination, we also need the following decrease conditions for any substitution \theta:

d(\mathit{der}(X+Y),\mathit{DX}+\mathit{DY})\theta\succ_{I}d(\mathit{der}(X),\mathit{DX})\theta

d(\mathit{der}(X),\mathit{DX})\theta satisfies R_{d} implies

d(\mathit{der}(X+Y),\mathit{DX}+\mathit{DY})\theta\succ_{I}d(\mathit{der}(Y),\mathit{DY})\theta

d(\mathit{der}(X*Y),X*\mathit{DY}+Y*\mathit{DX})\theta\succ_{I}d(\mathit{der}(X),\mathit{DX})\theta

d(\mathit{der}(X),\mathit{DX})\theta satisfies R_{d} implies

d(\mathit{der}(X*Y),X*\mathit{DY}+Y*\mathit{DX})\theta\succ_{I}d(\mathit{der}(Y),\mathit{DY})\theta

d(\mathit{der}(\mathit{der}(X)),\mathit{DDX})\theta\succ_{I}d(\mathit{der}(X),\mathit{DX})\theta

d(\mathit{der}(X),\mathit{DX})\theta satisfies R_{d} implies

d(\mathit{der}(\mathit{der}(X)),\mathit{DDX})\theta\succ_{I}d(\mathit{der}(\mathit{DX}),\mathit{DDX})\theta

The conditions above are equivalent to the following inequalities on the variables X,Y,\mathit{DX},\mathit{DY},\mathit{DDX}. For the conditions on the valid interargument relation, we obtain:

 \begin{array}{l}5>1\\ \forall X,Y,\mathit{DX},\mathit{DY}\in\mathbb{N}:\;X^{2}+2X+2>\mathit{DX}\wedge Y^{2}+2Y+2>\mathit{DY}\;\Rightarrow\\ \quad(X+Y+2)^{2}+2(X+Y+2)+2>\mathit{DX}+\mathit{DY}+2\\ \forall X,Y,\mathit{DX},\mathit{DY}\in\mathbb{N}:\;X^{2}+2X+2>\mathit{DX}\wedge Y^{2}+2Y+2>\mathit{DY}\;\Rightarrow\\ \quad(X+Y+2)^{2}+2(X+Y+2)+2>X+\mathit{DY}+Y+\mathit{DX}+6\\ \forall X,\mathit{DX},\mathit{DDX}\in\mathbb{N}:\;X^{2}+2X+2>\mathit{DX}\wedge\mathit{DX}^{2}+2\mathit{DX}+2>\mathit{DDX}\;\Rightarrow\\ \quad(X^{2}+2X+2)^{2}+2(X^{2}+2X+2)+2>\mathit{DDX}\end{array}

And for the decrease conditions we obtain:

 \begin{array}{l}\forall X,Y\in\mathbb{N}:\,(X+Y+2)^{2}+2(X+Y+2)+2>X^{2}+2X+2\\ \forall X,Y,\mathit{DX}\in\mathbb{N}:\,X^{2}+2X+2>\mathit{DX}\;\Rightarrow\;(X+Y+2)^{2}+2(X+Y+2)+2>Y^{2}+2Y+2\\ \forall X,Y\in\mathbb{N}:\,(X+Y+2)^{2}+2(X+Y+2)+2>X^{2}+2X+2\\ \forall X,Y,\mathit{DX}\in\mathbb{N}:\,X^{2}+2X+2>\mathit{DX}\;\Rightarrow\;(X+Y+2)^{2}+2(X+Y+2)+2>Y^{2}+2Y+2\\ \forall X\in\mathbb{N}:\,(X^{2}+2X+2)^{2}+2(X^{2}+2X+2)+2>X^{2}+2X+2\\ \forall X,\mathit{DX}\in\mathbb{N}:\,X^{2}+2X+2>\mathit{DX}\;\Rightarrow\;(X^{2}+2X+2)^{2}+2(X^{2}+2X+2)+2>\mathit{DX}^{2}+2\mathit{DX}+2\end{array}

The above inequalities are easily verified for all instantiations of the variables by numbers. Hence, the program terminates w.r.t. the set of queries S. \square
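All of the above inequalities can be spot-checked mechanically by sampling small naturals (a sanity check of the example, not a proof over all of \mathbb{N}):

```python
# Interpretation from Example 4: I(+) = I(*) = X1+X2+2, I(der) = X^2+2X+2.
plus = lambda a, b: a + b + 2
der  = lambda x: x * x + 2 * x + 2
R = range(6)

ok = all([
    der(1) > 1,                                              # fact d(der(u), 1): 5 > 1
    all(der(plus(X, Y)) > plus(DX, DY)
        for X in R for Y in R for DX in R for DY in R
        if der(X) > DX and der(Y) > DY),                     # +-clause of R_d
    all(der(plus(X, Y)) > plus(plus(X, DY), plus(Y, DX))
        for X in R for Y in R for DX in R for DY in R
        if der(X) > DX and der(Y) > DY),                     # *-clause of R_d
    all(der(der(X)) > DDX
        for X in R for DX in R for DDX in R
        if der(X) > DX and der(DX) > DDX),                   # nested-der clause of R_d
    all(der(plus(X, Y)) > der(X) for X in R for Y in R),     # decrease conditions
    all(der(plus(X, Y)) > der(Y) for X in R for Y in R),
    all(der(der(X)) > der(X) for X in R),
    all(der(der(X)) > der(DX) for X in R for DX in R if der(X) > DX),
])
```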

## 4 Automating the Termination Proof

A key question is how to automate the search for a polynomial interpretation and for interargument relations. In other words, to prove termination of a logic program, one has to synthesize the coefficients of the polynomials associated with the function and predicate symbols, as well as the formulas \varphi_{p}(t_{1},\ldots,t_{n}) defining the interargument relations. Following the constraint-based approach of [Decorteetal98], we do not choose a particular polynomial interpretation and particular interargument relations. Instead, we introduce a general symbolic form for the polynomials associated with the function and predicate symbols and for the interargument relations. As an example, assume that polynomials of degree 2 are selected for the interpretation. Then instead of assigning the polynomial p_{q}(X_{1},X_{2})=X_{1}^{2}+2X_{1}X_{2} to a predicate symbol q of arity 2, we would, for example, assign the symbolic polynomial p_{q}(X_{1},X_{2})=q_{00}+q_{10}X_{1}+q_{01}X_{2}+q_{11}X_{1}X_{2}+q_{1}X_{1}^{2}+q_{2}X_{2}^{2}, where the q_{i} and q_{ij} are unknown coefficients ranging over \mathbb{N}. So our approach for termination analysis works as follows:

• introduce symbolic versions of the polynomials associated with function and predicate symbols,

• express all conditions resulting from Corollary 1 as constraints on the coefficients (e.g. q_{00},q_{10},q_{01},\ldots),

• solve the resulting system of constraints to obtain values for the coefficients.

Each solution for this constraint system gives rise to a concrete polynomial interpretation and to concrete valid interargument relations such that all conditions of Corollary 1 are satisfied. Therefore, each solution gives a termination proof.
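The idea can be illustrated on a toy example (the program p([H|T]) :- p(T) and the brute-force search are assumptions of this sketch; Polytool and AProVE instead solve the Diophantine constraints symbolically): assign a symbolic linear polynomial c_{0}+c_{1}H+c_{2}T to the list constructor and search for natural coefficients making the head strictly larger than the recursive body atom.

```python
from itertools import product

def decreasing(c0, c1, c2, samples=range(8)):
    """Sampled check that |p([H|T])| = c0 + c1*H + c2*T > T = |p(T)|."""
    return all(c0 + c1 * H + c2 * T > T for H in samples for T in samples)

# Enumerate small candidate coefficient triples and keep the solutions.
solutions = [c for c in product(range(3), repeat=3) if decreasing(*c)]
```

Every solution found here satisfies c_{0}\geq 1 and c_{2}\geq 1, i.e., the list constructor must strictly count its tail; each such triple yields a termination proof for the toy program.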

In order to assign symbolic polynomials to the function and predicate symbols, we choose to assign linear polynomials to predicate symbols and linear or simple-mixed polynomials to function symbols. These classes of polynomials are defined as follows:

1. The linear class: each monomial of a polynomial in this class contains at most one variable of at most degree 1:
p(X_{1},\ldots,X_{n})=p_{0}+\sum_{k=1}^{n}p_{k}X_{k}

2. The simple-mixed class: each monomial of a polynomial in this class contains either a single variable of at most degree 2 or several variables of at most degree 1:
p(X_{1},\ldots,X_{n})=\sum_{j_{1},\ldots,j_{n}\in\{0,1\}}p_{j_{1}{\ldots}j_{n}}X^{j_{1}}_{1}\ldots X^{j_{n}}_{n}+\sum_{k=1}^{n}p_{k}X^{2}_{k}

The above classes of polynomials have proved to be particularly useful for automated termination proofs of TRSs. For more details on these classes of polynomials we refer to [contejean05jar, Steinbach92]. In our work, these choices resulted from extensive experiments with different kinds of polynomials, where our goal was to optimize both the efficiency and the power of the termination analyzer.
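The two classes can be made concrete by enumerating their exponent vectors. The following Python sketch is our own illustration (exponent tuples in the same dictionary-of-monomials style as above):

```python
from itertools import product

def linear_monomials(n):
    """Exponent tuples of the linear class: the constant monomial
    plus each single variable X_k to the first power."""
    mons = [tuple(0 for _ in range(n))]
    for k in range(n):
        mons.append(tuple(1 if i == k else 0 for i in range(n)))
    return mons

def simple_mixed_monomials(n):
    """Exponent tuples of the simple-mixed class: all products of
    distinct variables with exponents in {0, 1}, plus each square X_k^2."""
    mons = list(product((0, 1), repeat=n))
    for k in range(n):
        mons.append(tuple(2 if i == k else 0 for i in range(n)))
    return mons

# For a binary symbol: {1, X1, X2} versus {1, X2, X1, X1*X2, X1^2, X2^2}.
print(len(linear_monomials(2)))        # 3
print(len(simple_mixed_monomials(2)))  # 6
```

For a binary function symbol this yields exactly the six monomials used for I(+) and I(*) in the example below.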

In Section 4.1, we first reformulate the conditions of our termination criterion in Corollary 1, using the above symbolic forms of polynomials. Then in Section 4.2, we transform these symbolic conditions into constraints on the unknown coefficients of the symbolic polynomials. Afterwards, in Section 4.3 we show how these resulting Diophantine constraints can be solved automatically. Finally, we conclude with a comparison of our contributions with related work from term rewriting in Section 4.4.

### 4.1 Reformulating the Termination Conditions

In this subsection, we reformulate all termination conditions of Corollary 1, i.e., of Procedure 1. These include the rigidity property (Step 1), the valid interargument relations (Step 2), and the decrease conditions (Step 3). The reformulation results in symbolic constraints, based on the symbolic forms of the polynomial interpretations.

#### 4.1.1 Rigidity Conditions (Procedure 1, Step 1)

There are several ways to approximate \mathit{Call}(P,S) (e.g., [Bruynoogheetal05, GallagherHB05, HeatonACK00, Janssensetal92]). In this paper, we apply the approximation technique of [GallagherHB05, Janssensetal92]. More precisely, we first specify the set of queries as a set of rigid type graphs. Then the technique in [GallagherHB05, Janssensetal92] is used to compute a new, finite set of rigid type graphs which approximate \mathit{Call}(P,S). Each of these new rigid type graphs represents a so-called call pattern. For further details, we refer to [GallagherHB05, Janssensetal92].

In the following, we recapitulate the notion of rigid type graphs and show how rigidity conditions are derived from the set of call patterns. We first recall some basic definitions from [Janssensetal92], originally based on linear norms and level mappings, and extend them to the case of general polynomial interpretations. The example at the end of this subsection will illustrate these definitions.

###### Definition 14 (rigid type graph [Janssensetal92])

A rigid type graph T is a 5-tuple, (\mathit{Nodes},\mathit{ForArcs},\mathit{BackArcs},\mathit{Label},\mathit{% ArgPos}), where

1. \mathit{Nodes} is a finite non-empty set of nodes.

2. \mathit{ForArcs}\subseteq\mathit{Nodes}\times\mathit{Nodes} such that (\mathit{Nodes},\mathit{ForArcs}) is a tree.

3. \mathit{BackArcs}\subseteq\mathit{Nodes}\times\mathit{Nodes} such that for every arc (m,n)\in\mathit{BackArcs}, node n is an ancestor of node m in the tree (\mathit{Nodes},\mathit{ForArcs}).

4. \mathit{Label} is a function \mathit{Nodes}\rightarrow\mathit{Fun}_{P}\cup\mathit{Pred}_{P}\cup\{\mbox{{MAX}},\mbox{{OR}}\}.

5. If a node n is labelled with f\in\mathit{Fun}_{P}\cup\mathit{Pred}_{P} and f has arity k, then the node n has exactly k outgoing arcs (counting both \mathit{ForArcs} and \mathit{BackArcs}). These arcs are labelled with the numbers 1,\ldots,k. For every such arc (n,m), \mathit{ArgPos}(n,m) returns the corresponding label from \{1,\ldots,k\}.

The intuition behind rigid type graphs is related to the tree representation of terms and atoms in LP. A rigid type graph generalizes the tree representation of an atom by allowing:

• nodes labeled by MAX, denoting any term,

• nodes labeled by OR, denoting the union of all denotations of the sub-graphs rooted at this node,

• backarcs, denoting repeated traversals of a sub-graph.

For each rigid type graph representing a set of atoms S, each MAX node in the graph corresponds to a possible occurrence of a variable in the atoms of S. The set S is rigid w.r.t. the polynomial interpretation I iff none of these variables is relevant w.r.t. I. In the following, we formulate this rigidity condition syntactically, based on the rigid type graph.

###### Definition 15 (critical path [Decorteetal98])

Let T\!=\!(\mathit{Nodes},\mathit{ForArcs},\mathit{BackArcs},\mathit{Label},% \mathit{ArgPos}) be a rigid type graph. A critical path in T is a path of arcs from the tree \mathit{ForArcs} which goes from the root node of the tree to a node labelled MAX.

The following proposition is extended from [Decorteetal93], where in [Decorteetal93] each function or predicate symbol is associated with a linear norm or level mapping. It provides a method to generate constraints for rigidity.

###### Proposition 2 (checking rigidity by critical paths)

Let P be a program and T=(\mathit{Nodes},\mathit{ForArcs},\mathit{BackArcs},\mathit{Label},\mathit{ArgPos}) be a rigid type graph representing a set of atoms S. Let I be a polynomial interpretation, where for any function or predicate symbol f of arity k we have I(f)=p_{f}(X_{1},\ldots,X_{k})=\sum_{0\leq j_{1},\ldots,j_{k}\leq M_{f}}f_{{j_{1}}\ldots{j_{k}}}X_{1}^{j_{1}}\ldots X_{k}^{j_{k}}. The set S is rigid w.r.t. I iff on every critical path of T there exists an arc (n,m) with \mathit{Label}(n)=f, \mathit{arity}(f)=k, and \mathit{ArgPos}(n,m)=i such that \sum_{j_{i}>0}f_{{j_{1}}\ldots{j_{k}}}=0.

Proof. Since we only regard polynomials with non-negative coefficients f_{{j_{1}}\ldots{j_{k}}}, the condition \sum_{j_{i}>0}f_{{j_{1}}\ldots{j_{k}}}=0 is equivalent to the requirement that f_{{j_{1}}\ldots{j_{k}}}=0, whenever j_{i}>0. This in turn is equivalent to the condition that X_{i} is not involved in p_{f}(X_{1},\ldots,X_{k}). Hence, the condition in the above proposition is equivalent to the requirement that for any MAX node, there is at least one function or predicate symbol f on the critical path to this MAX node, for which the argument position corresponding to the path is not involved in p_{f}. So equivalently, the atoms in the set S have no relevant variables w.r.t. I. According to Proposition 1, this is equivalent to rigidity w.r.t. I.

The following corollary shows how to express the above rigidity check as a constraint on the coefficients of the polynomial interpretation. To this end, we express the existence condition of an appropriate arc (n,m) by a suitable multiplication.

###### Corollary 2 (symbolic condition for checking rigidity)

Let T be a rigid type graph representing a set of atoms S and let \mathit{CP} be a critical path of T. Let (n^{1},m^{1}),\ldots,(n^{e},m^{e}) be all arcs in \mathit{CP} such that for all d\in\{1,\ldots,e\}, \mathit{Label}(n^{d})=f^{d} is a function or predicate symbol of some arity k^{d} and \mathit{ArgPos}(n^{d},m^{d})=i^{d}. If for any such \mathit{CP} we have

 \prod_{d=1}^{e}\;\Big(\sum_{j_{i^{d}}>0}f^{d}_{j_{1}\ldots j_{k^{d}}}\Big)\;=\;0,

then S is rigid w.r.t. I.

###### Example (symbolic termination conditions for the “der”-program)

For Example 1, we define a symbolic polynomial interpretation I as follows.

 \begin{array}[]{rcl}I(+)&=&p_{1}X_{1}^{2}+p_{2}X_{2}^{2}+p_{11}X_{1}X_{2}+p_{10}X_{1}+p_{01}X_{2}+p_{00}\\ I(*)&=&m_{1}X_{1}^{2}+m_{2}X_{2}^{2}+m_{11}X_{1}X_{2}+m_{10}X_{1}+m_{01}X_{2}+m_{00}\\ I(\mathit{der})&=&\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0}\\ I(u)&=&c_{u}\\ I(1)&=&c_{1}\\ I(d)&=&d_{0}+d_{1}X_{1}+d_{2}X_{2}\end{array}

We will reformulate the termination conditions for this example in symbolic form. However, for reasons of space, we will not give all polynomial constraints. Instead, in order to illustrate the main ideas, in each subsection we only present one constraint of the corresponding type.

Instead of checking termination of the “\mathit{der}”-program w.r.t. the set of queries S=\{d(t_{1},t_{2})\mid t_{1} is a ground term, t_{2} is an arbitrary term\} as in Example 1, we now regard the set of queries S_{1}=\{d(t_{1},t_{2})\mid t_{1} is of the form der(t^{\prime}_{1}), where t^{\prime}_{1} is a ground term constructed from the function symbols u, +, *, \mathit{der}, and t_{2} is an arbitrary term\}. S_{1} is represented by the type graph in Figure LABEL:fig:example:rigidcondi-der.

Obviously, termination of the program w.r.t. S_{1} also implies termination w.r.t. S. This can be proved easily by showing that for any query Q\in S\setminus S_{1}, the program trivially terminates by finite failure.

In our example, type inference [Janssensetal92] computes the call set \mathit{Call}(P,S_{1})=S_{1}, i.e., the graph in Figure LABEL:fig:example:rigidcondi-der also represents \mathit{Call}(P,S_{1}). Its only critical path consists of just the arc from the root to the node labelled MAX. Hence from the graph, the following rigidity condition is generated according to Corollary 2:

 d_{2}=0

\square
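The rigidity check of Proposition 2 and Corollary 2 can be sketched as follows. This is a hypothetical Python fragment of our own: polynomials are given as dictionaries from exponent tuples to (already instantiated) coefficient values, and argument positions are 0-based here, while the text counts from 1.

```python
def arg_not_involved(poly, i):
    """Condition of Proposition 2 for a single arc: the sum of all
    coefficients of monomials in which X_i occurs must be zero, i.e.
    X_i is not involved (all coefficients are natural numbers)."""
    return sum(c for exps, c in poly.items() if exps[i] > 0) == 0

def critical_path_rigid(path):
    """Corollary 2: a critical path is harmless if some arc on it enters
    an argument position that is not involved in the node's polynomial.
    `path` is a list of (polynomial, argument_position) pairs."""
    return any(arg_not_involved(poly, i) for poly, i in path)

# I(d) = d0 + d1*X1 + d2*X2 with the solution d2 = 0 from the example:
p_d = {(0, 0): 1, (1, 0): 1, (0, 1): 0}   # d0 = 1, d1 = 1, d2 = 0
print(critical_path_rigid([(p_d, 1)]))    # True: X2 is not involved
```

With d_2 > 0 the same critical path would violate rigidity, which is exactly why the symbolic condition d_2 = 0 is generated.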

#### 4.1.2 Valid Interargument Relations (Procedure 1, Step 2)

Next we consider the other symbolic constraints, derived for valid interargument relations and decrease conditions. We will show that they all take the form

 \forall\overline{X}\in\mathbb{N}:\;p_{1}\geq q_{1}\wedge\ldots\wedge p_{n}\geq q_{n}\;\Rightarrow\;p_{n+1}\geq q_{n+1}

where n\geq 0 and p_{i},q_{i} are polynomials with natural coefficients. Here, \overline{X} is the tuple of all variables occurring in p_{1},\ldots,p_{n+1},q_{1},\ldots,q_{n+1}. There are a number of works on inferring valid interargument relations of predicates. In [Decorteetal98], interargument relations are formulated as inequalities between a linear combination of the “inputs” and a linear combination of the “outputs”. We will not define input and output arguments formally in this paper, since we do not use them in our approach; informally, inputs are the arguments of a predicate symbol which are only called with ground terms and outputs are the remaining arguments. We propose a new form of interargument relation, namely polynomial interargument relations, which are of the following form:

 \displaystyle R_{p}=\{p(t_{1},\ldots,t_{n})\mid i_{p}({\mid}t_{1}{\mid}_{I},\ldots,{\mid}t_{n}{\mid}_{I})\succsim_{\mathbb{N}}o_{p}({\mid}t_{1}{\mid}_{I},\ldots,{\mid}t_{n}{\mid}_{I})\} (0)

where i_{p} and o_{p} are polynomials with natural coefficients. The form of interargument relations in [Decorteetal98] can be considered a special case of the form (4.1.2) above, where i_{p}({\mid}t_{1}{\mid}_{I},\ldots,{\mid}t_{n}{\mid}_{I}) is constructed from the input arguments only and o_{p}({\mid}t_{1}{\mid}_{I},\ldots,{\mid}t_{n}{\mid}_{I}) from the outputs only. Since the approach in [Decorteetal98] only considers relations between the input and output arguments of the predicates, it has some limitations. In some cases, the desired relation does not compare inputs with outputs, but holds among the inputs only or among the outputs only. In particular, if all arguments of a predicate are inputs (or outputs), then the approach in [Decorteetal98] fails to infer any useful relation among them. The following example illustrates this point: the program computes the natural division of the first and second arguments of the predicate \mathit{div} and returns the result in its third argument.

###### Example 7 (div)
 \begin{array}[]{ll}\mathit{div}(X,s(Y),0):\!\!-\;\mathit{less}(X,s(Y)).&\\ \mathit{div}(X,s(Y),s(Z)):\!\!-\;\mathit{sub}(X,s(Y),R),\mathit{div}(R,s(Y),Z).&(0)\\ \mathit{sub}(X,0,X).&\\ \mathit{sub}(s(X),s(Y),Z):\!\!-\;\mathit{sub}(X,Y,Z).&\\ \mathit{less}(0,s(Y)).&\\ \mathit{less}(s(X),s(Y)):\!\!-\;\mathit{less}(X,Y).&\end{array}

We consider the set of queries S=\{\,\mathit{div}(t_{1},t_{2},t_{3})\mid t_{1} and t_{2} are ground terms, and t_{3} is an arbitrary term\,\}. This program terminates for all these queries. If we look at Clause (7), the decrease in size between the head and the recursive body-atom can be established if we can infer a suitable valid interargument relation for \mathit{sub}. This relation should imply that within Clause (7), the first argument of \mathit{sub} is greater than its third argument. However, with the approach in [Decorteetal98], inferring such an interargument relation for \mathit{sub} is impossible. Since the first two \mathit{sub}-arguments are used as inputs and the last one is an output, that approach can only infer interargument relations where a linear combination of the sizes of the first and second arguments is greater than or equal to the size of the third argument. Then, we cannot conclude that for every successful answer substitution for the call \mathit{sub}(X,s(Y),R) in Clause (7), the first \mathit{sub}-argument X is strictly greater than the third \mathit{sub}-argument R. In contrast, if we use Form (4.1.2), then it is possible to infer the following valid interargument relation for \mathit{sub}:

 R_{\mathit{sub}}=\{\mathit{sub}(t_{1},t_{2},t_{3})\mid{\mid}t_{1}{\mid}_{I}% \succsim_{\mathbb{N}}{\mid}t_{2}{\mid}_{I}+{\mid}t_{3}{\mid}_{I}\}

Note that in the right-hand side {\mid}t_{2}{\mid}_{I}+{\mid}t_{3}{\mid}_{I} of the above inequality, we have both an input argument t_{2} and an output argument t_{3}. This valid polynomial interargument relation guarantees that for any successful answer substitution for the call \mathit{sub}(X,s(Y),R) in Clause (7), we have {\mid}X{\mid}_{I}\succ_{\mathbb{N}}{\mid}R{\mid}_{I} if {\mid}s(Y){\mid}_{I}\succsim_{\mathbb{N}}1. Our implementation in the system Polytool is indeed able to infer this interargument relation using the constraint-solving technique explained below. Therefore, Polytool can prove termination of “div”. With the form of interargument relations from [Decorteetal98] instead, Polytool would not be able to solve this problem. \square
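The relation R_{\mathit{sub}} can be checked numerically on the success set of \mathit{sub}. The Python sketch below is our own illustration and assumes the term-size norm {\mid}s(t){\mid}=1+{\mid}t{\mid} and {\mid}0{\mid}=0 (the concrete interpretation I is not fixed in this example), so numerals s^{n}(0) are represented directly by their sizes.

```python
def sub(x, y):
    """Success set of sub on numerals s^n(0), represented by their sizes
    under the assumed norm |s(t)| = 1 + |t|, |0| = 0. The clauses are
    sub(X, 0, X) and sub(s(X), s(Y), Z) :- sub(X, Y, Z)."""
    if y == 0:
        return x              # sub(X, 0, X)
    if x > 0:
        return sub(x - 1, y - 1)
    return None               # finite failure: sub(0, s(Y), Z) has no clause

# R_sub demands |t1| >= |t2| + |t3| for every successful answer.
for x in range(10):
    for y in range(10):
        z = sub(x, y)
        if z is not None:
            assert x >= y + z, (x, y, z)
print("R_sub holds on all sampled answers")
```

Under this norm the relation even holds with equality (z = x - y whenever x >= y), which is why the strict decrease needed in Clause (7) follows as soon as {\mid}s(Y){\mid}_{I}\geq 1.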

Similar to the symbolic form of polynomial interpretations, we also use a symbolic form of polynomial interargument relations. To this end, we take symbolic polynomials i_{p} and o_{p}. For the inference of valid interargument relations, we then apply the technique proposed in [Decorteetal98], cf. Procedure 1, Step 2. For any sequence of terms t_{1},\ldots,t_{n}, let \mathbf{R}_{p}(t_{1},\ldots,t_{n}) abbreviate the inequality i_{p}({\mid}t_{1}{\mid}_{I},\ldots,{\mid}t_{n}{\mid}_{I})\geq o_{p}({\mid}t_{1}{\mid}_{I},\ldots,{\mid}t_{n}{\mid}_{I}). The goal is to impose constraints on the polynomials i_{p} and o_{p} which ensure that the corresponding interargument relation R_{p}=\{p(t_{1},\ldots,t_{n})\mid\forall\overline{X}\in\mathbb{N}:\,\mathbf{R}_{p}(t_{1},\ldots,t_{n})\} is valid. To this end, we generate for every clause of the program:

 p(\overline{t}):\!\!-\;p_{1}(\overline{t_{1}}),\ldots,p_{n}(\overline{t_{n}})

the constraint

 \forall\overline{X}\in\mathbb{N}:\;\mathbf{R}_{p_{1}}(\overline{t_{1}})\wedge\ldots\wedge\mathbf{R}_{p_{n}}(\overline{t_{n}})\;\Rightarrow\;\mathbf{R}_{p}(\overline{t}).

It is clear that this formula has Form (4.1.2).

###### Example 8 (symbolic interargument relation for the “der”-program)

We continue the “der” example from Section 4.1.1 and use linear polynomials for i_{\mathit{der}} and o_{\mathit{der}}, i.e., i_{\mathit{der}}(X,Y)=i_{0}+i_{1}X+i_{2}Y and o_{\mathit{der}}(X,Y)=o_{0}+o_{1}X+o_{2}Y. Hence, the symbolic form of the polynomial interargument relation for the predicate d is

 R_{d}=\{d(t_{1},t_{2})\mid i_{0}+i_{1}{\mid}t_{1}{\mid}_{I}+i_{2}{\mid}t_{2}{\mid}_{I}\succsim_{\mathbb{N}}o_{0}+o_{1}{\mid}t_{1}{\mid}_{I}+o_{2}{\mid}t_{2}{\mid}_{I}\}.

There are four clauses in the “der”-program from which constraints for valid interargument relations are inferred. We only present the constraint resulting from the last clause:

 d(\mathit{der}(\mathit{der}(X)),\mathit{DDX}):\!\!-\;d(\mathit{der}(X),\mathit{DX}),d(\mathit{der}(\mathit{DX}),\mathit{DDX})

Here, we obtain the constraint

 \displaystyle\forall X,\mathit{DX},\mathit{DDX}\in\mathbb{N}:\;\mathbf{R}_{d}(\mathit{der}(X),\mathit{DX})\wedge\mathbf{R}_{d}(\mathit{der}(\mathit{DX}),\mathit{DDX})\Rightarrow\mathbf{R}_{d}(\mathit{der}(\mathit{der}(X)),\mathit{DDX}). (0)

\square

#### 4.1.3 Decrease Conditions (Procedure 1, Step 3)

Finally, one has to require the decrease condition between the head and any (mutually) recursive body-atom in any (mutually) recursive clause. So for any clause

 p(\overline{t}):\!\!-\;p_{1}(\overline{t_{1}}),\ldots,p_{n}(\overline{t_{n}})

of the program where p\backsimeq p_{i} (i.e., where p and p_{i} are mutually recursive), we require

 \forall\overline{X}\in\mathbb{N}:\;\mathbf{R}_{p_{1}}(\overline{t_{1}})\wedge\ldots\wedge\mathbf{R}_{p_{i-1}}(\overline{t_{i-1}})\;\Rightarrow\;{\mid}p(\overline{t}){\mid}_{I}\,\geq\,{\mid}p_{i}(\overline{t_{i}}){\mid}_{I}+1.

Obviously, the formula is in Form (4.1.2).

###### Example 9 (constraints for the decrease conditions of “der”)

There are three recursive clauses in the “der”-program for which decrease conditions are inferred. We present the decrease condition for the recursive body-atom d(\mathit{der}(\mathit{DX}),\mathit{DDX}) of the last clause:

 \begin{array}[]{l}\forall X,\mathit{DX},\mathit{DDX}\in\mathbb{N}:\\ \quad i_{0}+i_{1}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})+i_{2}\mathit{DX}\;\geq\;o_{0}+o_{1}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})+o_{2}\mathit{DX}\\ \quad\Rightarrow\quad(0)\\ \quad d_{0}+d_{1}(\mathit{der}_{2}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})^{2}+\mathit{der}_{1}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})+\mathit{der}_{0})+d_{2}\mathit{DDX}\\ \quad\geq\;d_{0}+d_{1}(\mathit{der}_{2}\mathit{DX}^{2}+\mathit{der}_{1}\mathit{DX}+\mathit{der}_{0})+d_{2}\mathit{DDX}+1.\end{array}

\square

### 4.2 From Symbolic Conditions to Constraints on Coefficients

Our goal is to find a polynomial interpretation such that all constraints generated in the previous section are satisfied. To this end, we transform all these constraints into Diophantine constraints. In this transformation, we first eliminate implications, cf. Section 4.2.1. Afterwards, in Section 4.2.2, the universally quantified variables (e.g., X,\mathit{DX},\mathit{DDX},\ldots) are removed and the former unknown coefficients (e.g., \mathit{der}_{0},\mathit{der}_{1},\mathit{der}_{2},\ldots) become the new variables. If the resulting Diophantine constraints can be solved, then the program under consideration is terminating. As we analyzed in Section 4.1.1, all generated rigidity constraints have the form given in Corollary 2. Hence, these are already Diophantine constraints which only contain unknown coefficients, but no universally quantified variables. The other constraints, generated for the valid interargument relations and the decrease conditions, have the following form:

 \forall\overline{X}\in\mathbb{N}:\;p_{1}\geq q_{1}\wedge\ldots\wedge p_{n}\geq q_{n}\;\Rightarrow\;p_{n+1}\geq q_{n+1}

where n\geq 0 and p_{i},q_{i} are polynomials with natural coefficients. In the following, we introduce a two-phase method to transform all constraints of Form (4.1.2) into Diophantine constraints on the unknown coefficients.

#### 4.2.1 First Phase: Removing Implications

The constraints of Form (4.1.2) are implications. In the first phase, such constraints are transformed into inequalities without premises, i.e., into constraints of the form

 \forall\overline{X}\in\mathbb{N}:\;p\,\geq\,0.

However, here p is a polynomial with integer (i.e., possibly negative) coefficients. The transformation is sound: if the new constraints of Form (4.2.1) are satisfied by some substitution which instantiates the unknown coefficients with numbers, then this substitution also satisfies the original constraints of Form (4.1.2). The idea for the transformation is the following. Constraints of the form (4.1.2) may have an arbitrary number n of premises p_{i}\geq q_{i}. We first transform them into constraints with at most one premise. Obviously, p_{1}\geq q_{1}\wedge\ldots\wedge p_{n}\geq q_{n} implies p_{1}+\ldots+p_{n}\geq q_{1}+\ldots+q_{n}. Thus, instead of (4.1.2), it would be sufficient to demand

 \forall\overline{X}\in\mathbb{N}:\;p_{1}+\ldots+p_{n}\,\geq\,q_{1}+\ldots+q_{n}\;\Rightarrow\;p_{n+1}\geq q_{n+1}.

So in order to combine the n polynomials in the premise, we can use the polynomial \mathit{prem}(X_{1},\ldots,X_{n})=X_{1}+\ldots+X_{n}. Then instead of (4.1.2), we may require

 \forall\overline{X}\in\mathbb{N}:\;\mathit{prem}(p_{1},\ldots,p_{n})\,\geq\,\mathit{prem}(q_{1},\ldots,q_{n})\;\Rightarrow\;p_{n+1}\geq q_{n+1}.

A similar method was also used for termination analysis of logic programs in [Decorteetal98] and for termination of term rewriting in [JAR07, Section 7.2] to transform disjunctions of polynomial inequalities into one single inequality. For example, the constraint

can now be transformed into

Since the latter constraint is valid, the former one is valid as well. However, in order to make the approach more powerful, one could also use other polynomials \mathit{prem} in order to combine the n inequalities in the premise. The reason is that if \mathit{prem} is restricted to be the addition, then many valid constraints of the form (4.1.2) would be transformed into invalid ones. For example, the valid constraint

would be transformed into the invalid constraint

For instance, the constraint does not hold for X_{1}=4, X_{2}=0, and X_{3}=2. To make the transformation more general and more powerful, we therefore permit the use of arbitrary polynomials \mathit{prem} with natural coefficients. In the above example, now the resulting constraint

would indeed be valid for a suitable choice of \mathit{prem}. For instance, one could choose \mathit{prem} to be the addition of the first argument with the square of the second argument (i.e., \mathit{prem}(X_{1},X_{2})=X_{1}+X^{2}_{2}). By the introduction of the new polynomial \mathit{prem}, every constraint of the form (4.1.2) can now be transformed into an implication with at most one premise. It remains to transform such implications further into unconditional inequalities. Obviously, instead of

 \mathit{prem}(p_{1},\ldots,p_{n})\geq\mathit{prem}(q_{1},\ldots,q_{n})\;% \Rightarrow\;p_{n+1}\geq q_{n+1}, (0)

it is sufficient to demand

 p_{n+1}-q_{n+1}\geq\mathit{prem}(p_{1},\ldots,p_{n})-\mathit{prem}(q_{1},% \ldots,q_{n}). (0)

This observation was already used in the work of [Decorteetal98] and also in termination techniques for term rewriting to handle such conditional polynomial inequalities [CADE98, CADE07]. However, the approach can still be improved. Recall that we used an arbitrary polynomial \mathit{prem} to combine the polynomials in the former premises. In a similar way, one could also apply an arbitrary polynomial \mathit{conc} to the polynomials p_{n+1} and q_{n+1} in the former conclusion. To see why this can be necessary, consider the valid constraint

 \forall X\in\mathbb{N}:\;2X\geq 2\;\Rightarrow\;X\geq 1.

With the transformation above, it would be transformed into the unconditional constraint

 \forall X\in\mathbb{N}:\;X-1\,\geq\,2X-2,

which is invalid (e.g., for X=3). We have encountered several examples of this kind in our experiments, which motivates this further extension. In such examples, it would be better to apply a suitable polynomial \mathit{conc} to the polynomials X and 1 in the former conclusion. Then we would obtain

 \forall X\in\mathbb{N}:\;\mathit{conc}(X)-\mathit{conc}(1)\,\geq\,2X-2

instead. By choosing \mathit{conc}(X)=2X, now the resulting constraint is valid. So to summarize, in the first phase of our transformation, any constraint of the form (4.1.2) is transformed into the unconditional constraint

 \forall\overline{X}\in\mathbb{N}:\,\mathit{conc}(p_{n+1})-\mathit{conc}(q_{n+1})\,\geq\,\mathit{prem}(p_{1},\ldots,p_{n})-\mathit{prem}(q_{1},\ldots,q_{n}). (0)

Here, \mathit{prem} and \mathit{conc} are two arbitrary new polynomials. The only requirement that we have to impose is that \mathit{conc} must not be a constant. Indeed, if \mathit{conc} were a constant, then (4.2.1) would no longer imply that (4.1.2) holds for all instantiations of the variables in the polynomials p_{1},\ldots,p_{n+1},q_{1},\ldots,q_{n+1}. Note that we do not need a similar requirement on \mathit{prem}: if a constant \mathit{prem} satisfied (4.2.1), then (4.1.2) would trivially hold. The following proposition proves the soundness of this transformation.

###### Proposition 3 (Soundness of Removing Implications)

Let \mathit{prem} and \mathit{conc} be two polynomials with natural coefficients, where \mathit{conc} is not a constant. Moreover, let p_{1},\ldots,p_{n+1},q_{1},\ldots,q_{n+1} be arbitrary polynomials with natural coefficients. If

 \forall\overline{X}\in\mathbb{N}:\;\;\;\mathit{conc}(p_{n+1})-\mathit{conc}(q_{n+1})-\mathit{prem}(p_{1},\ldots,p_{n})+\mathit{prem}(q_{1},\ldots,q_{n})\;\geq\;0

is valid, then

 \forall\overline{X}\in\mathbb{N}:\;p_{1}\geq q_{1}\wedge\ldots\wedge p_{n}\geq q_{n}\;\Rightarrow\;p_{n+1}\geq q_{n+1}

is also valid.

Proof. For any tuple of numbers \overline{x}, let p_{i}(\overline{x}) and q_{i}(\overline{x}) denote the numbers that result from p_{i} and q_{i} by instantiating the variables \overline{X} by the numbers \overline{x}. So if p(X_{1},X_{2}) is the polynomial X_{1}^{2}+2X_{1}X_{2}, then p(2,1)=8. Suppose that there is a tuple of numbers \overline{x} with p_{i}(\overline{x})\geq q_{i}(\overline{x}) for all i\in\{1,\ldots,n\}. We have to show that then p_{n+1}(\overline{x})\geq q_{n+1}(\overline{x}) holds as well. Since \mathit{prem} only has natural coefficients, it is weakly monotonic. Thus, p_{i}(\overline{x})\geq q_{i}(\overline{x}) for all i\in\{1,\ldots,n\} implies \mathit{prem}(p_{1}(\overline{x}),\ldots,p_{n}(\overline{x}))\geq\mathit{prem}(q_{1}(\overline{x}),\ldots,q_{n}(\overline{x})) and thus, \mathit{prem}(p_{1}(\overline{x}),\ldots,p_{n}(\overline{x}))-\mathit{prem}(q_{1}(\overline{x}),\ldots,q_{n}(\overline{x}))\geq 0. The prerequisites of the proposition ensure

 \mathit{conc}(p_{n+1})-\mathit{conc}(q_{n+1})\;\geq\;\mathit{prem}(p_{1},% \ldots,p_{n})-\mathit{prem}(q_{1},\ldots,q_{n})

for all instantiations of the variables. Hence, we also obtain \mathit{conc}(p_{n+1}(\overline{x}))-\mathit{conc}(q_{n+1}(\overline{x}))\geq 0 or, equivalently,

 \mathit{conc}(p_{n+1}(\overline{x}))\geq\mathit{conc}(q_{n+1}(\overline{x})). (0)

Now suppose that p_{n+1}(\overline{x})\not\geq q_{n+1}(\overline{x}). Since p_{n+1}(\overline{x}) and q_{n+1}(\overline{x}) are numbers (not polynomials with variables), we would then have p_{n+1}(\overline{x})<q_{n+1}(\overline{x}). Since \mathit{conc} only has non-negative coefficients and since it is not a constant, it is strictly monotonic. Thus, p_{n+1}(\overline{x})<q_{n+1}(\overline{x}) would imply

 \mathit{conc}(p_{n+1}(\overline{x}))<\mathit{conc}(q_{n+1}(\overline{x}))

in contradiction to (4.2.1). Hence, we have p_{n+1}(\overline{x})\geq q_{n+1}(\overline{x}), as desired.
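The soundness direction of Proposition 3 can also be checked numerically on a small instance. The Python sketch below is our own illustration: the premise 2X \geq 2, the conclusion X \geq 1, and the choices \mathit{prem}(X)=X and \mathit{conc}(X)=2X are picked here for demonstration only.

```python
def prem(v):
    """Combines the (single) premise polynomial; here the identity."""
    return v

def conc(v):
    """Applied to the conclusion polynomials; 2X, not a constant."""
    return 2 * v

# Hypothetical instance of Form (4.1.2): p1 = 2X, q1 = 2, p2 = X, q2 = 1.
for x in range(100):
    p1, q1, p2, q2 = 2 * x, 2, x, 1
    # The unconditional constraint produced by the first phase holds:
    assert conc(p2) - conc(q2) - prem(p1) + prem(q1) >= 0
    # Proposition 3: hence the original implication holds as well.
    if p1 >= q1:
        assert p2 >= q2
print("Proposition 3 verified on the sample")
```

Note that with a constant \mathit{conc} the unconditional constraint could hold while the implication fails, which is exactly why \mathit{conc} must not be a constant.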

For the symbolic form of \mathit{prem} and \mathit{conc}, we again choose linear or simple-mixed polynomials. From our experiments, this choice provided good results on the benchmark programs, while remaining reasonably efficient. By applying Proposition 3, we can now transform all constraints for the termination proof into unconditional constraints of the form (4.2.1). If there exists a substitution of the unknown coefficients by numbers that makes the resulting unconditional constraints valid, then the same substitution also satisfies the original conditional constraints.

###### Example 10 (applying Proposition 3 to the “der”-program)

We choose the decrease condition (9) in Example 9 to show how an implication is transformed into an unconditional constraint. Since the constraint (9) has only one premise, the polynomial \mathit{prem} has arity 1 here. We choose a simple-mixed form for \mathit{prem} and a linear form for \mathit{conc}:

 \mathit{prem}(X)=\mathit{prem}_{0}+\mathit{prem}_{1}X+\mathit{prem}_{2}X^{2}\qquad\mathit{conc}(X)=\mathit{conc}_{0}+\mathit{conc}_{1}X.

Since \mathit{conc} must not be a constant, one also has to impose the constraint

 \mathit{conc}_{1}>0.

Now we can transform (9) into an unconditional constraint. Here, we use the following abbreviations:

 \begin{array}[]{lll}p_{1}&=&i_{0}+i_{1}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})+i_{2}\mathit{DX}\\ q_{1}&=&o_{0}+o_{1}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})+o_{2}\mathit{DX}\\ p_{2}&=&d_{0}+d_{1}(\mathit{der}_{2}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})^{2}+\mathit{der}_{1}(\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0})+\mathit{der}_{0})+d_{2}\mathit{DDX}\\ q_{2}&=&d_{0}+d_{1}(\mathit{der}_{2}\mathit{DX}^{2}+\mathit{der}_{1}\mathit{DX}+\mathit{der}_{0})+d_{2}\mathit{DDX}+1\end{array}

Then (9) is the constraint

 \forall X,\mathit{DX},\mathit{DDX}\in\mathbb{N}:\;p_{1}\geq q_{1}\;\Rightarrow\;p_{2}\geq q_{2}

and its transformation yields

 \begin{array}[]{l}\forall X,\mathit{DX},\mathit{DDX}\in\mathbb{N}:\\ \quad\mathit{conc}_{0}+\mathit{conc}_{1}\,p_{2}-\mathit{conc}_{0}-\mathit{conc}_{1}\,q_{2}-\mathit{prem}_{0}-\mathit{prem}_{1}\,p_{1}-\mathit{prem}_{2}\,p_{1}^{2}+\mathit{prem}_{0}+\mathit{prem}_{1}\,q_{1}+\mathit{prem}_{2}\,q_{1}^{2}\;\geq\;0.\end{array}

By applying standard simplifications, the constraint can be rewritten to the following form:

 \displaystyle\forall X,\mathit{DX}\in\mathbb{N}:\;M_{1}X^{4}+M_{2}X^{3}+M_{3}X^{2}+M_{4}X+M_{5}\mathit{DX}^{2}+M_{6}\mathit{DX}+M_{7}X^{2}\mathit{DX}+M_{8}X\mathit{DX}+M_{9}\;\geq\;0 (0)

where M_{1},\ldots,M_{9} are polynomials over the unknown coefficients \mathit{prem}_{j}, i_{j}, o_{j}, \mathit{der}_{j}, and d_{j} with j\in\{0,1,2\} and \mathit{conc}_{j} with j\in\{0,1\}. For example, we have

 M_{1}\;=_{\mathit{def}}\;\mathit{conc}_{1}\,d_{1}\,\mathit{der}_{2}^{3}+\mathit{prem}_{2}\,o_{1}^{2}\,\mathit{der}_{2}^{2}-\mathit{prem}_{2}\,i_{1}^{2}\,\mathit{der}_{2}^{2}.

\square

#### 4.2.2 Second Phase: Removing Universally Quantified Variables

In this phase, we transform any constraint of the form

 \forall\overline{X}\in\mathbb{N}:\;p\,\geq\,0,

where p is a polynomial with integer coefficients, into a set of Diophantine constraints on the unknown coefficients. The transformation is again sound: if there is a solution for the resulting set of Diophantine constraints, then this solution also satisfies the original constraint (4.2.1). We use a straightforward transformation proposed by [Hongetal98], which is also used in all related tools for termination of term rewriting: one simply requires that all coefficients of the polynomial p are non-negative. This criterion is only sufficient, not necessary; for instance, p(X)=(X-1)^{2}\geq 0 holds for all X\in\mathbb{N}, but X^{2}-2X+1 has a negative coefficient.
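The criterion of [Hongetal98] amounts to a one-line check once a polynomial is represented by its coefficients. The following Python sketch (our own illustration, reusing the dictionary-of-monomials representation assumed earlier) also demonstrates that the criterion is sufficient but not necessary:

```python
def nonneg_criterion(coeffs):
    """Sufficient criterion: a polynomial is >= 0 on the naturals
    if all of its (integer) coefficients are non-negative."""
    return all(c >= 0 for c in coeffs.values())

# p(X) = X^2 - 2X + 1 = (X - 1)^2 is non-negative on N, yet the
# criterion rejects it, so the criterion is not necessary.
p = {(2,): 1, (1,): -2, (0,): 1}
print(nonneg_criterion(p))                        # False
assert all((x - 1) ** 2 >= 0 for x in range(50))  # but p(x) >= 0 holds
```

Applied to the constraint of Example 10, this check is exactly what turns the single inequality (10) into the Diophantine constraints M_{1}\geq 0,\ldots,M_{9}\geq 0.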

###### Example 11 (removing universally quantified variables for the “der”-program)

We continue the transformation of Example 10. Here, we obtained the constraint (10). We derive the following set of Diophantine constraints which contains the unknown coefficients \mathit{conc}_{j}, \mathit{prem}_{j}, i_{j}, o_{j}, \mathit{der}_{j}, and d_{j} as variables: M_{1}\geq 0,M_{2}\geq 0,\ldots,M_{9}\geq 0. \square

### 4.3 Solving Diophantine Constraints

The previous sections showed that one can formulate all termination conditions in symbolic form and transform them automatically into a set of Diophantine constraints. The problem then becomes solving a system of non-linear Diophantine constraints with the unknown coefficients as variables. If the Diophantine constraints are solvable, then the logic program under consideration is terminating. Solving such problems has been studied intensively, especially in the context of constraint logic programming. Moreover, there are approaches from termination analysis of term rewriting for solving such Diophantine constraints automatically, e.g., [SMTCADE09, contejean05jar, Fuhsc07]. In [Fuhsc07], Diophantine constraints are encoded into propositional logic and the resulting SAT problem is handed to a SAT solver. As shown in [Fuhsc07], this approach is significantly more efficient than solving Diophantine constraints by dedicated solvers like [contejean05jar] or by standard implementations of constraint logic programming like in SICStus Prolog.
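A naive brute-force search over a small finite domain already illustrates this final step. The Python sketch below is our own toy illustration (the SAT encoding of [Fuhsc07] is far more efficient in practice, and the three constraints are hypothetical, merely in the style of Section 4.2):

```python
from itertools import product

def solve(constraints, variables, max_val=2):
    """Search for natural-number values (0..max_val) of the unknown
    coefficients satisfying all Diophantine constraints."""
    for values in product(range(max_val + 1), repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return env
    return None   # no solution within this finite domain

# Toy constraint system: conc1 must be positive (conc is not a constant),
# plus two non-negativity constraints on combined coefficients.
constraints = [
    lambda e: e["conc1"] > 0,
    lambda e: e["conc1"] * e["d1"] - e["i1"] >= 0,
    lambda e: e["d1"] + e["o1"] - 1 >= 0,
]
sol = solve(constraints, ["conc1", "d1", "i1", "o1"])
print(sol is not None)  # True: a solution exists in the domain {0, 1, 2}
```

Restricting coefficients to a small finite domain is also what the SAT-based approach does; the difference is that the exponential search is delegated to a modern SAT solver instead of explicit enumeration.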

###### Example 12 (solving Diophantine constraints for the “der”-program)

We start with the symbolic polynomial interpretation from Section 4.1.1 (e.g., with I(\mathit{der})=\mathit{der}_{2}X^{2}+\mathit{der}_{1}X+\mathit{der}_{0}) and obtain the solution \mathit{der}_{2}=1 and \mathit{der}_{0}=\mathit{der}_{1}=2, which corresponds to X^{2}+2X+2. Similarly, we start with the symbolic form of the polynomial interargument relation as in Example 8:

 R_{d}=\{d(t_{1},t_{2})\mid i_{0}+i_{1}{\mid}t_{1}{\mid}_{I}+i_{2}{\mid}t_{2}{\mid}_{I}\succsim_{\mathbb{N}}o_{0}+o_{1}{\mid}t_{1}{\mid}_{I}+o_{2}{\mid}t_{2}{\mid}_{I}\}.

Then we get the solution i_{1}=1, i_{0}=i_{2}=0, o_{2}=1, o_{0}=o_{1}=0. This corresponds to the interargument relation R_{d}=\{d(t_{1},t_{2})\mid{\mid}t_{1}{\mid}_{I}\succsim_{\mathbb{N}}{\mid}t_{2}{\mid}_{I}\}. So we obtain the concrete simple-mixed polynomial interpretation from Example 4 and the concrete interargument relation from Example 6. \square
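As a quick sanity check on the solved values, one can verify that the resulting interpretation is strictly monotone on the naturals (as required for a well-founded polynomial order) and that the interargument relation behaves as stated. The function names below are ours; the clauses of the "der" program themselves are not reproduced in this sketch.

```python
def I_der(x):
    # Concrete interpretation from the solution der_2 = 1, der_1 = der_0 = 2.
    return x**2 + 2 * x + 2

def in_R_d(size_t1, size_t2):
    # Interargument relation R_d with i_1 = 1, o_2 = 1 and all other
    # coefficients 0: d(t1, t2) is in R_d iff |t1|_I >= |t2|_I.
    return size_t1 >= size_t2

# I_der is strictly monotone on an initial segment of the naturals.
assert all(I_der(n + 1) > I_der(n) for n in range(100))
assert in_R_d(5, 3) and not in_R_d(2, 5)
```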

### 4.4 Relation to Approaches from Term Rewriting

Finally, we briefly discuss the connection between our approach for automated LP termination proofs from Sections 4.1–4.3 and related approaches used for termination analysis of TRSs. Section 4.1 describes how to obtain constraints for a symbolic polynomial order which guarantee that the requirements of our termination criterion are fulfilled. This is similar to related approaches in term rewriting, where one also chooses a symbolic polynomial interpretation and constructs corresponding inequalities. If one applies polynomial interpretations directly for termination analysis of TRSs, then these inequalities ensure that every rewrite rule is strictly decreasing. If one uses more sophisticated termination techniques like the dependency pair method [ArtsGiesl00, JAR07, Hirokawa_Middeldorp04], then one builds inequalities which ensure that dependency pairs are weakly or strictly decreasing and that rules are weakly decreasing. The decrease conditions of dependency pairs correspond to our decrease conditions in Section 4.1.3, and the requirement that rules are weakly decreasing roughly corresponds to our symbolic constraints for valid interargument relations in Section 4.1.2. Still, there are subtle differences. For example, in LPs a predicate symbol may have several output arguments, which is the reason for the different polynomials i_{p} and o_{p} in our polynomial interargument relations. Moreover, while term rewriting uses matching for evaluation, logic programming uses unification. This is the reason for our additional rigidity conditions in Section 4.1.1. The approach in Section 4.2 shows how to find suitable values for the symbolic coefficients. This is the same problem as in the corresponding techniques for term rewriting. However, the usual techniques in term rewriting can only handle unconditional inequalities. Therefore, we developed a new method in Section 4.2.1 to remove conditions; this is a new contribution of the present paper.
In fact, motivated by its success in the tool Polytool, two of the authors of the current paper later adapted this method to term rewriting (see [MAXPOLO, Footnote 14]). The techniques of the short Sections 4.2.2 and 4.3 are identical to the corresponding approaches used in term rewriting. We only included them here to make the presentation self-contained and to complete its illustration with the “\mathit{der}”-example.

## 5 Experimental Evaluation

In this section we discuss the experimental evaluation of our approach. We implemented our technique in a system called Polytool [Nguyen&DeSchreye06], written in SICStus Prolog (for the source code, we refer to http://www.cs.kuleuven.be/~manh/polytool). Essentially, the Polytool system consists of four modules: the first module is the type inference engine, where we use the inference system of [GallagherHB05]. The second module generates all termination conditions using symbolic polynomials as in Section 4.1. The third module transforms the resulting polynomial constraints into Diophantine constraints, as in Section 4.2. The final module is a Diophantine constraint solver, cf. Section 4.3; we selected the SAT-based Diophantine solver [Fuhsc07] of the AProVE tool [Giesletal06]. We tested the performance of Polytool on a collection of 296 examples. The collection (Table LABEL:table1) consists of all benchmarks for logic programming from the Termination Problem Data Base (TPDB), where all examples that contain arithmetic or built-in predicates were removed. Polytool applies the following strategy: first, we search for a linear polynomial interpretation. If we cannot find such an interpretation satisfying the termination conditions, then we search for a simple-mixed polynomial interpretation. More precisely, we then still interpret predicate symbols by linear polynomials, but we map function symbols to simple-mixed polynomials. We use symbolic polynomials for \mathit{conc} and \mathit{prem} similar to those in Section 4.2.1: if the polynomial interpretation is linear, then both \mathit{conc} and \mathit{prem} are linear; otherwise, we use a linear form for \mathit{conc} and a simple-mixed form for \mathit{prem}. The domain for all unknown coefficients in the generated Diophantine constraints is fixed to the set \{0,1,2\}. The experiments were performed on an AMD 64-bit machine with 2 GB of RAM running Linux.
We performed an experimental comparison with other leading systems for automated termination analysis of logic programs, namely Polytool-WST07, cTI-1.1 [MesnardBagnara05], TerminWeb [Codishetal99, terminWeb02], TALP [OhlebuschAAECC], and AProVE [Giesletal06]. For TALP, the option of non-linear polynomial interpretations was chosen. For cTI-1.1, we selected the “default” option. For AProVE and TerminWeb, the fully automatic modes were chosen. We did not include the tool Hasta-La-Vista [SerebrenikandDeSchreye03] in the evaluation because it is a predecessor of Polytool. We used a time limit of 60 seconds for testing each benchmark on each termination tool; this time limit is also used in the annual termination competition. In Table LABEL:table1, we give the number of benchmarks which are proved terminating (“YES”), the number of benchmarks which could not be proved terminating but where processing ended within the time limit (“FAILURE”), and the number of benchmarks where the tool did not stop before the timeout (“TIMEOUT”). The number in square brackets is the average runtime (in seconds) that a particular tool needs to prove termination of benchmarks (or to fail to prove termination of them within the time limit). The detailed experiments (including the source code of the benchmarks and the termination proofs produced by the tools) can be found at http://www.cs.kuleuven.be/~manh/polytool/POLY/journal07.html. Note that the two examples \mathit{der} and \mathit{div} presented in this paper do not occur in the TPDB. For completeness, we mention that Polytool and AProVE succeed on \mathit{der}, whereas cTI-1.1 and TerminWeb fail and TALP reaches the timeout. For \mathit{div}, all systems except TALP succeed. In the next subsections we discuss the results of the experiments; for a more detailed discussion, we refer to [ThangThesis].
