Solving non-linear Horn clauses using a linear Horn clause solver
In this paper we show that checking satisfiability of a set of non-linear Horn clauses (also called a non-linear Horn clause program) can be achieved using a solver for linear Horn clauses. We achieve this by interleaving a program transformation with a satisfiability checker for linear Horn clauses (also called a solver for linear Horn clauses). The program transformation is based on the notion of tree dimension, which we apply to a set of non-linear clauses, yielding a set whose derivation trees have bounded dimension. Such a set of clauses can be linearised. The main algorithm then proceeds by applying the linearisation transformation and a linear Horn clause solver to a sequence of sets of clauses with successively increasing dimension bound. The approach is then further developed by using a solution of clauses of lower dimension to (partially) linearise clauses of higher dimension. We constructed a prototype implementation of this approach and performed some experiments on a set of verification problems; the results show some promise.
Many software verification problems can be reduced to checking satisfiability of a set of Horn clauses (the verification conditions). In this paper we propose an approach for checking satisfiability of a set of non-linear Horn clauses (clauses whose body contains more than one non-constraint atom) using a linear Horn clause solver. A program transformation based on the notion of tree dimension is applied to a set of non-linear Horn clauses; this gives a set of clauses that can be linearised and then solved using a linear solver for Horn clauses. This combination of dimension-bounding, linearisation and then solving with a linear solver is repeated for successively increasing dimension. The dimension of a tree is a measure of its non-linearity – for example a linear tree (whose nodes have at most one child) has dimension zero while a complete binary tree has dimension equal to its height.
A given set of Horn clauses P can be transformed into a new set of clauses P^{≤k}, whose derivation trees are the subset of P's derivation trees with dimension at most k. It is known that P^{≤k} can be transformed to a linear set of clauses preserving satisfiability; hence if we can find a model of the linear set of clauses then P^{≤k} also has a model.
The algorithm terminates with success if a model (solution) of P^{≤k} is also a model (after appropriate translation of predicate names) of P. However if it is not a solution of P, then we proceed to generate P^{≤k+1} and repeat the procedure. The algorithm terminates if P^{≤k} is shown to be unsatisfiable (unsafe) for some k, since this implies that P is also unsatisfiable.
A more sophisticated version of the algorithm attempts to use the model of P^{≤k} to (partially) linearise P^{≤k+1}. We can exploit the model of P^{≤k} in the following way: if P^{≤k+1} has a counterexample that does not use the (approximate) solution for P^{≤k}, then P is unsatisfiable. We continue this process for successively increasing values of k until we find a solution or a counterexample to P, or until resources are exhausted.
As an example program, we consider a set of constrained Horn clauses in Figure 1 which defines the Fibonacci function. It is an interesting problem since its derivations are trees whose dimensions depend on an input argument. The last clause represents a property of the Fibonacci function expressed as an integrity constraint.
We have made a prototype implementation of this approach and performed some experiments on a set of software verification problems, which shows some promise. The main contributions of this paper are as follows.
We present a linearisation procedure for dimension-bounded Horn clauses using partial evaluation (Section 3).
We give an algorithm for solving a set of non-linear Horn clauses using a linear Horn clause solver (Section 4).
We demonstrate the feasibility of our approach in practice by applying it to non-linear Horn clause problems (Section 5).
A constrained Horn clause (CHC) is a first order formula of the form ∀(p(X) ← C ∧ p1(X1) ∧ ⋯ ∧ pn(Xn)) (using Constraint Logic Programming syntax), where C is a conjunction of constraints with respect to some constraint theory, X, X1, …, Xn are (possibly empty) vectors of distinct variables, p, p1, …, pn are predicate symbols, p(X) is the head of the clause and C ∧ p1(X1) ∧ ⋯ ∧ pn(Xn) is the body. An atomic formula, or simply atom, is a formula p(X) where p is a non-constraint predicate symbol and X a tuple of arguments. Atoms are sometimes written as A or B, possibly with sub- or superscripts.
A clause is called non-linear if it contains more than one atom in the body, otherwise it is called linear. A set of Horn clauses P is called linear if it only contains linear clauses, otherwise it is called non-linear. Integrity constraints are a special kind of Horn clauses whose head is false, where false is always interpreted as False. A set of Horn clauses is sometimes called a (constraint logic) program.
An interpretation of a set of CHCs P is represented as a set of constrained facts of the form A ← C, where A is an atomic formula p(X), X is a tuple of distinct variables and C is a constraint over X with respect to some constraint theory. An interpretation that makes each clause in P True is called a model of P. We say a set of Horn clauses (including integrity constraints) is safe (solvable) iff it has a model. In some works, e.g. [6, 30], a model is also called a solution, and we use the two terms interchangeably in this paper.
A labeled tree c(t1, …, tn) is a tree whose nodes are labeled by identifiers, where c is the label of the root and t1, …, tn are labeled trees, the children of the root.
Definition 1 (Tree dimension)
Given a labeled tree t = c(t1, …, tn), the tree dimension of t, written dim(t), is defined as follows: if n = 0 then dim(t) = 0; otherwise, let d = max{dim(t1), …, dim(tn)}; then dim(t) = d if exactly one child attains the maximum d, and dim(t) = d + 1 if two or more children attain the maximum d.
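Representing a labeled tree as a pair (label, children), the recursive definition can be sketched in Python (an illustrative sketch, not part of the paper's implementation):

```python
def dim(t):
    """Tree dimension of a labeled tree t = (label, [children])."""
    _, children = t
    if not children:
        return 0
    ds = [dim(c) for c in children]
    m = max(ds)
    # dimension increases only when at least two children attain the maximum
    return m + 1 if ds.count(m) >= 2 else m
```

For example, a linear chain has dimension 0, while a complete binary tree of height 2 has dimension 2.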
Given a set of Horn clauses P, we associate with each clause a unique identifier c whose arity is the number of non-constraint atoms in the body of the clause.
Labelled trees can represent Horn clause derivations, where node labels are clause identifiers.
Definition 2 (Trace tree)
A trace tree for an atom A in a set of Horn clauses P is a labelled tree c(t1, …, tn) where c is the identifier of a clause A ← C, A1, …, An in P (with variables suitably renamed) and t1, …, tn are trace trees for A1, …, An in P respectively.
There is a one-one correspondence between trace trees and derivation trees of Horn clauses up to variable renaming. Thus when we speak about the dimension of a Horn clause derivation, we refer to the dimension of its corresponding trace tree.
To make the paper self-contained, we describe the transformation to produce a dimension-bounded set of clauses. Given a set of CHCs P and k ∈ ℕ, we split each predicate p occurring in P into the predicates p^{≤d} and p^{(d)} where 0 ≤ d ≤ k. An atom with predicate p^{≤d} or p^{(d)} is denoted p^{≤d}(X) or p^{(d)}(X) respectively. Such atoms have derivation trees of dimension at most d and exactly d respectively.
Definition 3 (At-most-k-dimension program P^{≤k})
Given a set of CHCs P and k ≥ 0, P^{≤k} is defined as follows:
1. If p(X) ← C ∈ P (a clause with no body atoms), then p^{(0)}(X) ← C ∈ P^{≤k}.
2. If p^{(d)}(X) ← B ∈ P^{≤k}, then p^{≤d'}(X) ← p^{(d)}(X) ∈ P^{≤k} for every d' with d ≤ d' ≤ k (the ε-clauses).
3. If p(X) ← C, p1(X1), …, pn(Xn) ∈ P with n ≥ 1, then for every d with 0 ≤ d ≤ k and one of the following:
(a) For some j with 1 ≤ j ≤ n: set Bj = pj^{(d)}(Xj) and Bi = pi^{≤d-1}(Xi) for i ≠ j (exactly one body atom of dimension d, the others of dimension at most d−1);
(b) For some j1, j2 with 1 ≤ j1 < j2 ≤ n: set Bi = pi^{(d-1)}(Xi) if i ∈ {j1, j2} and Bi = pi^{≤d-1}(Xi) otherwise (at least two body atoms of dimension d−1);
if all the Bi are defined, i.e. if d ≥ 1, or n = 1 in case (a), then p^{(d)}(X) ← C, B1, …, Bn ∈ P^{≤k}.
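Ignoring constraints and arguments and keeping only the predicate skeleton of each clause, the clause-generation rules of Definition 3 can be sketched as follows (an illustrative Python sketch; the tags 'ex' and 'le' stand for the exactly-d predicates p^{(d)} and the at-most-d predicates p^{≤d}):

```python
from itertools import combinations

def at_most_k(clauses, k):
    """Predicate-skeleton version of Definition 3.
    clauses: list of (head, body) where body is a tuple of predicate names.
    A bounded predicate is a triple (p, tag, d)."""
    out = set()
    for h, body in clauses:
        n = len(body)
        if n == 0:
            # a constraint-only clause yields a dimension-0 clause
            out.add(((h, 'ex', 0), ()))
            continue
        for d in range(k + 1):
            # case (a): one body atom of dimension d, the rest at most d-1
            if d >= 1 or n == 1:
                for j in range(n):
                    b = tuple((q, 'ex', d) if i == j else (q, 'le', d - 1)
                              for i, q in enumerate(body))
                    out.add(((h, 'ex', d), b))
            # case (b): two body atoms of dimension d-1, the rest at most d-1
            if d >= 1 and n >= 2:
                for j1, j2 in combinations(range(n), 2):
                    b = tuple((q, 'ex', d - 1) if i in (j1, j2) else (q, 'le', d - 1)
                              for i, q in enumerate(body))
                    out.add(((h, 'ex', d), b))
    # epsilon clauses: p<=d' :- p(d) for every d <= d' <= k
    preds = {h for h, _ in clauses} | {q for _, b in clauses for q in b}
    for p in preds:
        for d in range(k + 1):
            for d2 in range(d, k + 1):
                out.add(((p, 'le', d2), ((p, 'ex', d),)))
    return out
```

Applied to the skeleton of the Fibonacci program (a fact clause and a clause with two recursive body atoms) with k = 1, the sketch produces, among others, the fact fib^{(0)}, the clause fib^{(1)} ← fib^{(0)}, fib^{(0)}, and the ε-clause fib^{≤1} ← fib^{(0)}.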
P^{≤k} is also called the k-dimension-bounded program corresponding to P. When the value of k is not important, any program generated using Definition 3 is called a dimension-bounded program. The relation between P and its k-dimension-bounded program is given in Proposition 1, where M[[P]] denotes the least model of P (the usual "logical consequence" operator).
Proposition 1 (Relation between P and P^{≤k})
Let P be a program and P^{≤k} (k ≥ 0) be its k-dimension-bounded program. Let p(a) be an atom where p is a predicate of P and p^{≤k}(a) be an atom where p^{≤k} is the corresponding predicate of P^{≤k}. Then we have: p^{≤k}(a) ∈ M[[P^{≤k}]] implies p(a) ∈ M[[P]].
In other words, Proposition 1 says that the set of facts that can be derived from P^{≤k} is a subset of the set of facts that can be derived from P, taking the predicate renaming into account. In this sense P^{≤k} is an under-approximation of P. In particular, if P^{≤k} is unsatisfiable then so is P.
Let M be an interpretation of a dimension-bounded set of clauses P^{≤k}. That is, M is a set of constrained facts of the form p^{≤d}(X) ← C or p^{(d)}(X) ← C. An interpretation of P is constructed from M as follows.
Definition 4 (M̂: an interpretation of P constructed from an interpretation M of P^{≤k})
Let M be an interpretation of P^{≤k}. Then M̂ is the following set of constrained facts: M̂ = { p(X) ← C1 ∨ ⋯ ∨ Cm | p is a predicate of P and p'(X) ← C1, …, p'(X) ← Cm are the constrained facts in M whose predicates p' are dimension-bounded versions of p }.
The set M̂ is a disjunctive interpretation of P, where the interpretation of a predicate p is the disjunction of the interpretations of the corresponding dimension-bounded versions of p in M.
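The construction of M̂ amounts to grouping the constrained facts of M by their base predicate and taking the disjunction of their constraints. A minimal sketch, with constraints as strings and ';' read as disjunction (illustrative only):

```python
from collections import defaultdict

def hat(M):
    """Build the disjunctive interpretation M-hat from an interpretation M
    of a dimension-bounded program.
    M: list of ((pred, tag, d), constraint) pairs, where (pred, tag, d)
    encodes a dimension-bounded version of the predicate pred."""
    by_pred = defaultdict(list)
    for (p, _tag, _d), c in M:
        by_pred[p].append(c)
    # the interpretation of p is the disjunction of its bounded versions
    return {p: ' ; '.join(cs) for p, cs in by_pred.items()}
```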
The at-most-0-dimension program of Fib in Figure 1 is depicted in Figure 3. In textual form we represent a predicate p^{≤k} by p[k] and a predicate p^{(k)} by p(k). The at-most-1-dimension program of Fib in Figure 1 is depicted in Figure 4. Note that the 0-dimension program is included in the 1-dimension program. In general, all the clauses in P^{≤k} are also in P^{≤k+1}. This provides a basis for an iterative strategy over dimension-bounded sets of Horn clauses. Since some programs have derivation trees of unbounded dimension, trying to verify a property for each dimension separately is not by itself a practical strategy. It only becomes a viable approach if a solution of P^{≤k} for some k is general enough to hold for all dimensions of P.
3 Linearisation strategies for dimension-bounded set of Horn clauses
In this section, we present linearisation strategies for sets of clauses of bounded dimension. It is known that a dimension-bounded set of clauses can be linearised, preserving satisfiability. We describe a practical technique for linearisation, based on partial evaluation of an interpreter.
3.1 Linearisation based on partial evaluation
Partial evaluation (PE) has been studied for a variety of languages including logic programs [22, 15, 26, 23, 28]. We follow the pattern of transforming a program (a set of Horn clauses) by specialising an interpreter for that program [14, 23]. Let pe be a partial evaluator, I an interpreter and P an object program. Then the partial evaluation of I with respect to P, denoted pe(I, P), represents the "compilation" of P using the semantics given by I.
We first write an interpreter for Horn clause programs, which is also written as a set of Horn clauses. Given a (possibly empty) conjunction of atoms (called a goal) the interpreter constructs a derivation, implementing a standard left-to-right, depth-first search. In the interpreter predicate solve(Gs), Gs is the goal, represented as a list of atoms. The basic step of the interpreter is represented by the clauses for solve(Gs) shown in Figure 5. If the conjunction is not empty, its first atom G is selected along with a matching Horn clause G ← Cs, B in the program being interpreted, where Cs is a conjunction of constraints and B is a conjunction of atoms. This clause is represented by hornClause(G,Cs,B) in the interpreter. The body B of the clause is conjoined with the remaining goal atoms, and the derivation continues with the new goal Gs1. If the conjunction is empty, the derivation is successful (second clause).
To interpret a dimension-bounded set of clauses (say with bound k), we use the fact that in all successful runs of the interpreter in which goals are selected in increasing order of dimension, the size of the conjunction of goals (that is, the length of the argument of solve) has an upper bound related to k. This bound is known as the index of the set of clauses and is determined by k and the maximum number of non-constraint atoms in the body of clauses in P. Given this index, we can augment the interpreter with a check on the size of the conjunction, ensuring that it never exceeds the index. In addition, due to the requirement of increasing dimension in the selection of atoms, a left-to-right computation rule is not sufficient; therefore we permute the set of atoms in each clause body, since in at least one permutation the goals will be ordered by dimension. With these changes the interpreter remains complete for clauses of the given maximum index, at the possible cost of some redundancy in the search.
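The goal-size check and the body permutation can be illustrated on a propositional analogue of the interpreter (an illustrative sketch that terminates only for acyclic programs; clauses map each predicate to the list of its clause bodies):

```python
from itertools import permutations

def solve(goal, clauses, index):
    """Bounded-goal-size derivation search for propositional Horn clauses.
    goal: tuple of predicate names; clauses: dict mapping a predicate to a
    list of bodies (each a tuple of predicates); index: bound on goal size."""
    if not goal:
        return True                      # empty goal: derivation succeeded
    if len(goal) > index:
        return False                     # goal exceeds the index bound
    g, rest = goal[0], goal[1:]
    for body in clauses.get(g, []):
        # try every permutation of the body: at least one permutation lists
        # the atoms in increasing order of dimension
        for perm in permutations(body):
            if solve(perm + rest, clauses, index):
                return True
    return False
```

For instance, with the clauses a ← b, c and the facts b and c, the goal a is derivable with index 2 but not with index 1.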
These additions result in the interpreter whose top level is shown in Figure 6. The interpreter predicate solve(Gs,Index,L) means that the conjunction of goals Gs is to be solved, where L and Index are numbers representing the size of Gs and the maximum permitted size of the stack of goals respectively.
Partial evaluation of the interpreter.
Given a set of facts of the form hornClause(G,Cs,B) representing the Horn clauses to be linearised, and some value of Index, the interpreter can be partially evaluated. We use Logen  to perform the partial evaluation with respect to a call to go(Index), which initiates a proof of the goal false (see first clause of interpreter). All interpreter computations are partially evaluated except for the calls to solve(Gs,Index,L) and the execution of constraints within the goal solveConstraints(Cs). Furthermore Logen performs standard structure-flattening and predicate renaming operations, yielding a set of clauses of the form solve_i(X) :- Cs, solve_j(Y), where solve_i(X) and solve_j(Y) are instantiations of solve(Gs,Index,L) and Cs is a constraint. Thus the resulting clauses are linear, and furthermore preserve the meaning of the original clauses as given by the interpreter, by correctness of the partial evaluation procedure. The linearisation procedure is independent of the constraint theory underlying the clauses.
Proposition 2 (Linearisation by partial evaluation)
Let P be a program and P^{≤k} (k ≥ 0) be its k-dimension-bounded program. Let β be the maximum number of atoms in the clause bodies of P and let Index be the corresponding index of P^{≤k}. Let P_L be a partial evaluation of the interpreter in Figure 6 with respect to P^{≤k} and the goal go(Index). Then P^{≤k} has a model iff P_L has a model.
Note that linearisation required partial evaluation of the perm predicate, giving a blow-up in program size related to the length of the clause bodies. This is further discussed at the end of Section 5.
3.2 Obtaining linear over-approximations with a partial model
First we note that the set of predicates in P^{≤k} is a subset of the set of predicates in P^{≤k+1}. Given a model M for the predicates in P^{≤k}, P^{≤k+1} can be linearised if we replace each occurrence of a predicate from P^{≤k} in the body of a clause in P^{≤k+1} with the corresponding constraint from the model M. The resulting set of clauses is linear since each clause in P^{≤k+1} contains at most one body atom whose predicate is in P^{≤k+1} but not in P^{≤k}. Furthermore if P^{≤k+1} has a model then so does the set of clauses resulting from the replacement; the converse is however not the case since the model represents an over-approximation of P^{≤k+1}. An example is given in Section 4.
More generally, we can replace any subset of the occurrences of predicates from P^{≤k} in P^{≤k+1}. We summarise this in the following lemma.
Lemma 1 (Linear over-approximation)
Let M be a model of the predicates in P^{≤k}, represented by a set of constrained facts p(X) ← C where p is a predicate in P^{≤k}. Let P' be any set of clauses obtained from P^{≤k+1} by replacing some of the occurrences of predicates from P^{≤k} in the bodies of clauses in P^{≤k+1} with their corresponding interpretation in M. Then:
If P' has a model then so does P^{≤k+1};
If the clause bodies of P' contain no predicate from P^{≤k}, then P' is linear.
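The replacement step of Lemma 1 can be sketched as follows, representing a clause as a (head, constraint, body-atoms) triple and a model as a map from predicates to constraints; argument renaming is glossed over (we assume, for illustration, that the constrained facts use the clause's variable names):

```python
def substitute(clauses, model):
    """Replace body atoms whose predicate has an interpretation in `model`
    by the corresponding constraint; other body atoms are kept."""
    out = []
    for head, constr, atoms in clauses:
        kept = []
        cs = [constr]
        for atom in atoms:
            pred = atom.split('(', 1)[0]
            if pred in model:
                cs.append(model[pred])   # atom replaced by its interpretation
            else:
                kept.append(atom)        # atom kept: not interpreted by model
        out.append((head, ', '.join(cs), kept))
    return out
```

A clause whose every body atom is interpreted by the model becomes purely constrained, and a clause left with at most one body atom is linear.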
4 Algorithm for solving sets of non-linear Horn clauses
A basic procedure for solving a set of non-linear Horn clauses using a linear Horn clause solver is presented as Algorithm 1, together with its sub-procedure LINEARISE. We use the term "linear solver" for a linear Horn clause solver, for brevity. The main procedure SOLVE(P) takes a set of non-linear Horn clauses P as input and outputs (upon termination) (safe, solution) if P is solvable, or (unsafe, counterexample) otherwise. We represent a counterexample as a trace tree; for a linear program it corresponds to a sequence of clauses used to derive a counterexample.
Let M be an interpretation of a set of Horn clauses P. Let t be any trace tree for some atom in P (Definition 2) and let heads(t) be the set of head predicates of the clauses whose identifiers label the nodes of t. Then the set of facts of M used by t is defined to be the set of constrained facts in M whose predicate occurs in heads(t).
Informally, if this set is empty, the derivation corresponding to t does not use any predicate interpreted by M. This notion is used in Algorithm 2.
Algorithm 2 is an extended version of Algorithm 1; it uses the solution for P^{≤k} to help linearise P^{≤k+1}, and also allows a more refined termination condition based on whether or not the solution for P^{≤k} is used in constructing a counterexample for P^{≤k+1}.
The procedures make use of several sub-procedures which will be described next.
4.1 Components of the algorithm
SOLVE_LINEAR(P): solves a set of linear Horn clauses P. We assume the following about a linear solver: (i) if it terminates on P, then it returns either safe and a solution, or unsafe and a counterexample; (ii) it is sound, that is, if it returns safe and a solution M for P then P has a model and M is a solution (model) of P; if it returns unsafe and a counterexample cEx then P is unsafe and cEx is a witness. In our setting (Algorithms 1 and 2), the input corresponds to a linearised version of P^{≤k} for some P and k. For technical reasons, the top level predicate false(k) of P^{≤k}, if any, is renamed to false before passing to a linear solver.
In essence, any Horn clause solver which complies with our assumptions, for example QARMC, the convex polyhedral analyser or ELDARICA, can be used in a black-box fashion, but in this paper we make use of a solver based on abstract interpretation over the domain of convex polyhedra, without refinement using finite tree automata. The solver produces the following solution for the program in Figure 3. We can check that it is in fact a solution (model).
fib(0)(A,B) :- [-A>= -1, A>=0, B=A].
fib[0](A,B) :- [-A>= -1, A>=0, B=A].
false :- <>.  % <> means that there is no model for false, so we can discard it
LINEARISE(P, k, S) generates a linear set of clauses from P^{≤k} and an interpretation S for the predicates of P^{≤k-1}. Let S be a set of constrained facts of the form p(X) ← C, where p is a predicate from P^{≤k-1}; the procedure replaces each body occurrence of an atom p(X) in the clauses of P^{≤k} by the corresponding constraint C. This produces a set of clauses, say P'. Then the procedure LINEARISE_PE(P', Index) is called, which is the linearisation procedure based on partial evaluation described in Section 3, where Index is a bound on the stack usage for linearising P'.
An excerpt from P^{≤1} is shown below.
false(1) :- A>5, B<A, fib(1)(A,B).
fib(1)(A,B) :- A>1, C=A-2, E=A-1, B=F+D, fib(1)(C,D), fib[0](E,F).
fib(0)(A,B) :- B=A, A=<1, A>=0.
After reusing the solution obtained for P^{≤0} and linearising, we obtain the following set of linear clauses.
false(1) :- A>5, B<A, fib(1)(A,B).
fib(1)(A,B) :- -A>= -2, A>1, A-C=2, B-D=1, fib(1)(C,D).
Continuing to run our algorithm, the following solution obtained for P^{≤2} becomes a solution for the program in Figure 1 (the original program), and the algorithm terminates.
fib(0)(A,B) :- [-A>= -1, A>=0, B=1].
fib[0](A,B) :- [-A>= -1, A>=0, B=1].
fib(1)(A,B) :- [A>=2, A+ -B=0].
fib[1](A,B) :- [A+ -B>= -1, B>=1, -A+B>=0].
fib(2)(A,B) :- [A>=4, -2*A+B>= -3].
fib[2](A,B) :- [A>=0, B>=1, -A+B>=0].
Algorithm 1: Procedure SOLVE(P)
Input: a set of CHCs P
Output: (safe, solution) or (unsafe, cex)
  k := 0
  repeat:
    (status, result) := SOLVE_LINEAR(LINEARISE(P, k, ∅))
    if status = safe then
      if result is a solution of P then return (safe, result)   /* solution found */
      else k := k + 1                                            /* try a higher dimension */
    else return (unsafe, result)                                 /* result is a cex */
Algorithm 2: Procedure SOLVE(P), extended with reuse of lower-dimension solutions
Input: a set of CHCs P
Output: (safe, solution) or (unsafe, cex)
  k := 0; S := ∅
  repeat:
    (status, result) := SOLVE_LINEAR(LINEARISE(P, k, S))
    if status = safe then
      if result is a solution of P then return (safe, result)
      else S := S ∪ result; k := k + 1
    else                                                /* result is a cex trace t */
      if t uses no fact of S then return (unsafe, t)    /* a real, linear cex */
      else remove from S the facts used by t            /* refine and re-linearise */
Procedure LINEARISE(P, k, S)
Input: a set of CHCs P, an integer k and a set of constrained facts S
Output: a linearised set of clauses
  P^{≤k} := the at-most-k-dimension program of P        /* Definition 3 */
  P' := SUBSTITUTE(P^{≤k}, S)   /* substitute atoms of P^{≤k} with their interpretations from S */
  Index := the index determined by k and the maximal number of body atoms of P'
  return LINEARISE_PE(P', Index)                        /* Section 3.1 */
4.2 Reuse of solutions, refinement and linearisation
Algorithm 2 solves non-linear Horn clauses in essentially the same way as Algorithm 1, but incorporates a refinement phase in the case that the linear solver finds a counterexample. This counterexample possibly uses some of the model of the lower-dimension predicates, in which case it is not certain whether it is a false alarm or a real counterexample. If the counterexample used some of the predicate solutions from the lower-dimension model, then we discard those solutions and return to the linearisation step. If the counterexample does not use any of those predicate solutions, then it is a real counterexample. We clarify this with an example program (linear for simplicity) shown below.
c1. false :- X=0, p(X).
c2. false :- q(X).
c3. p(X) :- X>0.
c4. q(X) :- X=0.
Suppose we have an approximate solution p(X) :- true for the predicate p. Using this solution, the above program is transformed into the following program.
c1. false :- X=0, p(X).
c2. false :- q(X).
c3. p(X) :- true.   (approximate solution)
c4. q(X) :- X=0.
The trace c1(c3) is a counterexample for this transformed program but not for the original program (since it uses the approximate solution for the predicate p). However the trace c2(c4) is a counterexample for this program as well as for the original, since it does not use any approximate solution for the predicates appearing in the counterexample.
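The counterexample check of the refinement phase can be sketched as follows: collect the head predicates of the clauses used in a trace and test whether any of them was given only an approximate solution (an illustrative Python sketch; head_of is an assumed map from clause identifiers to head predicates):

```python
def preds_used(trace, head_of):
    """Head predicates of all clauses occurring in a trace tree.
    trace: nested tuple (clause_id, child_trace, ...)."""
    cid, *children = trace
    used = {head_of[cid]}
    for child in children:
        used |= preds_used(child, head_of)
    return used

def is_real_cex(trace, head_of, approx_preds):
    """A counterexample trace is real if it uses no predicate whose
    solution was only an approximation."""
    return not (preds_used(trace, head_of) & approx_preds)
```

On the example above, the trace c1(c3) is rejected as possibly spurious since it uses the approximated predicate p, while c2(c4) is accepted as a real counterexample.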
A schematic overview of Algorithm 2 is shown in Figure 7. At each iteration of the abstraction-refinement loop, the at-most-k-dimension under-approximation P^{≤k} of P is computed, then linearised and solved using a solver for linear Horn clauses.
Proposition 3 (Soundness)
If SOLVE(P) returns (safe, M) then M is a model (solution) of P; if it returns (unsafe, t) then P is unsatisfiable and t is a counterexample.
Another property of Algorithm 2 is that of progress, that is, the same counterexample does not arise more than once.
5 Experimental results
We made a prototype implementation of Algorithm 2 in a tool called LHornSolver.
Program | Safety | #iter. | Time (s) | Safety | #iter. | Time (s)
In the table, Program identifies a program, Safety gives the verification result, and #iter. and Time (s) respectively give the number of refinement iterations and the time in seconds needed to solve a program, for both RAHFT and LHornSolver. Note that the underlying abstract interpreter, the convex polyhedral analyser (CPA), is the same for both RAHFT and LHornSolver, but LHornSolver uses it to solve linear Horn clauses, even though the CPA is not optimised for linear problems. The column #iter. for LHornSolver gives the value of k for which a solution of the under-approximation P^{≤k} of a set of clauses P becomes a solution for P, or for which P^{≤k} becomes unsafe. The symbol "?" means that the result is unknown within the given time bound. The result "safe" means that the program is safe (solvable) and "unsafe" means it is unsafe.
LHornSolver solves 27 out of 44 (about 61%) problems within a second. In most of these problems, a solution of an under-approximation P^{≤k} becomes a solution for the original program, or the under-approximation becomes unsafe, for a fairly small value of k. This suggests that the solvability of a problem is shallow with respect to its dimension. This demonstrates the feasibility of solving a set of non-linear Horn clauses using a solver for linear Horn clauses.
In contrast, RAHFT solves all the problems. The difference in results may be due to the following reason: the linear solver used in LHornSolver is the CPA without refinement. The solver terminates but may produce false alarms. If we use the CPA with refinement, then we lose predicate names (due to program transformation), so the solutions or counterexamples produced by the tool do not correspond to the original program (it is very hard to keep track of the changes). This hinders the reuse of solutions from lower dimension to linearise programs of higher dimension, or to refine them using the counterexample trace. Other solvers which do not modify the programs but produce solutions or counterexamples could in principle be used as the linear solver; we leave this for future work. Another disadvantage of using the CPA is that, if it cannot solve a linear program, it emits an abstract trace which is checked for feasibility. If the trace is spurious then LHornSolver returns unknown (in principle we could refine the program, but refinement has the problem mentioned above). It is therefore unlikely that the trace picked non-deterministically by the tool turns out to be a real counterexample. We noticed in our experiments that the trace picked was spurious most of the time and LHornSolver immediately returned an "unknown" answer before the timeout. This also explains why the solving time of LHornSolver is less than that of RAHFT.
The interpreter described in Figure 6 computes a permutation of the atoms in a clause body; partial evaluation of the permutation procedure can cause a blow-up of the size of the linearised program, relative to the number of atoms in clause bodies. In our experiments the maximum number of atoms in the bodies of the clauses in our benchmark programs was 5 and the value of k was relatively small. The permutation procedure can be avoided if we first generate an at-most-k-dimension program whose body atoms are ordered by increasing dimension. This needs unfolding of the ε-clauses, since atoms whose predicate is of the form p^{≤d} cannot be ordered directly; only atoms with predicates of the form p^{(d)} can be ordered. We have not yet evaluated the trade-offs between these two approaches.
6 Related Work
In the world of Horn clause solvers, after fixing a constraint theory, we can distinguish solvers depending on whether they can handle general non-linear Horn clauses or not. A majority of solvers [19, 17, 31, 30, 24] handle non-linear Horn clauses, but there are notable exceptions such as VeriMAP.
Our linearisation method based on partial evaluation, described in Section 3.1, is related to the linearisation method based on fold-unfold transformations described by De Angelis et al. While their procedure transforms the target set of clauses directly, we transform an interpreter for the clauses using a generic partial evaluation procedure. Any clause transformation procedure could be formulated as a meta-program, with partial evaluation applied to that program to yield the specified transformation; thus neither approach offers any more power than the other. However the use of partial evaluation is arguably more flexible. The interpreter that is partially evaluated in our procedure is a standard interpreter for Horn clauses, modified with a bound on the size of goals, directly incorporating a general result that there is an upper bound on the size of goals in derivations with dimension-bounded programs. This provides a very generic starting point for the transformation with an explicit relation to the semantics of the clauses. A whole family of similar transformations could be formulated by varying the interpreter (for example using breadth-first search). The procedure of De Angelis et al. is tailored to a restrictive setting where only the goal clauses (integrity constraints) are non-linear and the rest of the clauses are linear; correctness has to be established separately for that case.
Ganty, Iosif and Konečný used the notion of tree dimension for computing summaries of procedural programs by underapproximating them. Roughly speaking, they compute procedure summaries iteratively, starting from the program behaviors captured by derivation trees of dimension zero; they then reuse these summaries to compute summaries for the behaviors captured by derivation trees of dimension one, and so on for successively higher dimensions. Kafle, Gallagher and Ganty adapted the idea of dimension-based underapproximations to the setting of Horn clause systems. They gave empirical evidence supporting the thesis that for small values of the dimension the solutions are general enough to hold for every dimension. Their approach still required the use of general Horn-clause solvers capable of handling non-linear clauses. In this paper, we lift this requirement and allow the use of solvers for linear clauses only. Moreover, we provide an abstraction-refinement loop that enables the solutions for lower dimensions to be reused when searching for solutions in higher dimensions.
7 Conclusion and future work
We presented an abstraction-refinement approach for solving a set of non-linear Horn clauses using an off-the-shelf linear Horn clause solver. It was achieved through a linearisation of a dimension-bounded set of Horn clauses (which is known to be linearisable) using partial evaluation, together with the use of a linear Horn clause solver. Experiments on a set of non-linear Horn clause verification problems show that the approach is feasible (a linear solver can be used for solving non-linear problems) and that the solvability of a problem is often shallow with respect to its dimension.
A linear set of clauses is essentially a transition system. Many tools exist whose input languages have a form such as C programs (without procedure calls), control flow graphs, Boogie programs, and similar formalisms whose semantics is usually given as a transition system. The results of this paper suggest that such tools could be applied to the verification of non-linear Horn clauses.
In the future, we plan to compare our results with those of a specialised linear Horn clause solver like VeriMAP and of other non-linear Horn clause solvers. We also plan to experiment with different linearisation strategies for Horn clauses and study their effects in Horn clause verification.
The authors would like to thank José F. Morales for his help with Ciao Prolog foreign language interface and some parts of the implementation.
- thanks: The research leading to these results has been supported by EU FP7 project 318337, ENTRA - Whole-Systems Energy Transparency, EU FP7 project 611004, coordination and support action ICT-Energy, EU FP7 project 610686, POLCA - Programming Large Scale Heterogeneous Infrastructures, Madrid Regional Government project S2013/ICE-2731, N-Greens Software - Next-GeneRation Energy-EfficieNt Secure Software, and the Spanish Ministry of Economy and Competitiveness project No. TIN2015-71819-P, RISCO - RIgorous analysis of Sophisticated COncurrent and distributed systems.
- Foto N. Afrati, Manolis Gergatsoulis & Francesca Toni (2003): Linearisability on Datalog programs. Theor. Comput. Sci. 308(1-3), pp. 199–226, doi:10.1016/S0304-3975(02)00730-2.
- Roberto Bagnara, Patricia M. Hill & Enea Zaffanella (2008): The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Sci. Comput. Program. 72(1-2), pp. 3–21, doi:10.1016/j.scico.2007.08.001.
- Christel Baier & Cesare Tinelli, editors (2015): Tools and Algorithms for the Construction and Analysis of Systems - 21st International Conference, TACAS 2015, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, April 11-18, 2015. Proceedings. Lecture Notes in Computer Science 9035, Springer, doi:10.1007/978-3-662-46681-0.
- Dirk Beyer (2015): Software Verification and Verifiable Witnesses - (Report on SV-COMP 2015). In Baier & Tinelli , pp. 401–416, doi:10.1007/978-3-662-46681-0_31.
- Nikolaj Bjørner, Kenneth L. McMillan & Andrey Rybalchenko (2013): On Solving Universally Quantified Horn Clauses. In Francesco Logozzo & Manuel Fähndrich, editors: SAS, LNCS 7935, Springer, pp. 105–125. Available at http://dx.doi.org/10.1007/978-3-642-38856-9_8.
- Patrick Cousot & Radhia Cousot (1977): Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints. In Robert M. Graham, Michael A. Harrison & Ravi Sethi, editors: POPL, ACM, pp. 238–252. Available at http://doi.acm.org/10.1145/512950.512973.
- Patrick Cousot & Nicolas Halbwachs (1978): Automatic Discovery of Linear Restraints Among Variables of a Program. In: POPL, ACM Press, pp. 84–96, doi:10.1145/512760.512770. Available at http://dl.acm.org/citation.cfm?id=512760.
- Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi & Maurizio Proietti (2014): VeriMAP: A Tool for Verifying Programs through Transformations. In Erika Ábrahám & Klaus Havelund, editors: TACAS, LNCS 8413, Springer, pp. 568–574. Available at http://dx.doi.org/10.1007/978-3-642-54862-8_47.
- Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi & Maurizio Proietti (2015): Proving correctness of imperative programs by linearizing constrained Horn clauses. TPLP 15(4-5), pp. 635–650, doi:10.1017/S1471068415000289.
- Bruno Dutertre (2014): Yices 2.2. In Armin Biere & Roderick Bloem, editors: Computer-Aided Verification (CAV’2014), Lecture Notes in Computer Science 8559, Springer, pp. 737–744, doi:10.1007/978-3-319-08867-9_49.
- Javier Esparza, Pierre Ganty, Stefan Kiefer & Michael Luttenberger (2011): Parikh’s theorem: A simple and direct automaton construction. Inf. Process. Lett. 111(12), pp. 614–619, doi:10.1016/j.ipl.2011.03.019.
- Javier Esparza, Stefan Kiefer & Michael Luttenberger (2007): On Fixed Point Equations over Commutative Semirings. In: STACS 2007, 24th Annual Symposium on Theoretical Aspects of Computer Science, Proceedings, LNCS 4393, Springer, pp. 296–307, doi:10.1007/978-3-540-70918-3_26.
- John P. Gallagher (1986): Transforming Logic Programs by Specialising Interpreters. In: Proceedings of the 7th European Conference on Artificial Intelligence (ECAI-86), Brighton, pp. 109–122.
- John P. Gallagher (1993): Tutorial on Specialisation of Logic Programs. In David A. Schmidt, editor: Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, PEPM'93, Copenhagen, Denmark, June 14-16, 1993, ACM, pp. 88–98, doi:10.1145/154630.154640.
- Pierre Ganty, Radu Iosif & Filip Konečný (2013): Underapproximation of Procedure Summaries for Integer Programs. In Nir Piterman & Scott A. Smolka, editors: TACAS 2013. Proceedings, Lecture Notes in Computer Science 7795, Springer, pp. 245–259, doi:10.1007/978-3-642-36742-7_18.
- Sergey Grebenshchikov, Ashutosh Gupta, Nuno P. Lopes, Corneliu Popeea & Andrey Rybalchenko (2012): HSF(C): A Software Verifier Based on Horn Clauses - (Competition Contribution). In Cormac Flanagan & Barbara König, editors: TACAS, LNCS 7214, Springer, pp. 549–551, doi:10.1007/978-3-642-28756-5_46.
- Sergey Grebenshchikov, Nuno P. Lopes, Corneliu Popeea & Andrey Rybalchenko (2012): Synthesizing software verifiers from proof rules. In Jan Vitek, Haibo Lin & Frank Tip, editors: ACM SIGPLAN PLDI, ACM, pp. 405–416, doi:10.1145/2254064.2254112.
- Arie Gurfinkel, Temesghen Kahsai & Jorge A. Navas (2015): SeaHorn: A Framework for Verifying C Programs (Competition Contribution). In Baier & Tinelli, pp. 447–450, doi:10.1007/978-3-662-46681-0_41.
- Manuel V. Hermenegildo, Francisco Bueno, Manuel Carro, Pedro López-García, Edison Mera, José F. Morales & Germán Puebla (2012): An overview of Ciao and its design philosophy. TPLP 12(1-2), pp. 219–252, doi:10.1017/S1471068411000457.
- Hossein Hojjat, Filip Konečný, Florent Garnier, Radu Iosif, Viktor Kuncak & Philipp Rümmer (2012): A Verification Toolkit for Numerical Transition Systems - Tool Paper. In Dimitra Giannakopoulou & Dominique Méry, editors: FM. Proceedings, Lecture Notes in Computer Science 7436, Springer, pp. 247–251, doi:10.1007/978-3-642-32759-9_21.
- Neil D. Jones, Carsten K. Gomard & Peter Sestoft (1993): Partial Evaluation and Automatic Program Generation. Prentice Hall.
- Neil D. Jones (2004): Transformation by interpreter specialisation. Sci. Comput. Program. 52, pp. 307–339, doi:10.1016/j.scico.2004.03.010.
- Bishoksan Kafle & John P. Gallagher (2015): Horn clause verification with convex polyhedral abstraction and tree automata-based refinement. Computer Languages, Systems & Structures, doi:10.1016/j.cl.2015.11.001.
- Bishoksan Kafle, John P. Gallagher & Pierre Ganty (2015): Decomposition by tree dimension in Horn clause verification. In Alexei Lisitsa, Andrei P. Nemytykh & Alberto Pettorossi, editors: VPT, EPTCS 199, pp. 1–14, doi:10.4204/EPTCS.199.1.
- Michael Leuschel (1994): Partial Evaluation of the “Real Thing”. In Laurent Fribourg & Franco Turini, editors: LOPSTR, Proceedings, Lecture Notes in Computer Science 883, Springer, pp. 122–137, doi:10.1007/3-540-58792-6_8.
- Michael Leuschel, Daniel Elphick, Mauricio Varea, Stephen-John Craig & Marc Fontaine (2006): The Ecce and Logen partial evaluators and their web interfaces. In John Hatcliff & Frank Tip, editors: PEPM 2006, ACM, pp. 88–94, doi:10.1145/1111542.1111557.
- Michael Leuschel & Germán Vidal (2014): Fast offline partial evaluation of logic programs. Inf. Comput. 235, pp. 70–97, doi:10.1016/j.ic.2014.01.005.
- Michael Luttenberger & Maximilian Schlund (2016): Convergence of Newton’s Method over Commutative Semirings. Inf. Comput. 246, pp. 43–61, doi:10.1016/j.ic.2015.11.008.
- Kenneth L. McMillan & Andrey Rybalchenko (2013): Solving Constrained Horn Clauses using Interpolation. Technical Report, Microsoft Research.
- Philipp Rümmer, Hossein Hojjat & Viktor Kuncak (2013): Disjunctive Interpolants for Horn-Clause Verification. In Natasha Sharygina & Helmut Veith, editors: CAV, Lecture Notes in Computer Science 8044, Springer, pp. 347–363, doi:10.1007/978-3-642-39799-8_24.