On Deciding Local Theory Extensions via E-matching
Satisfiability Modulo Theories (SMT) solvers incorporate decision procedures for theories of data types that commonly occur in software. This makes them important tools for automating verification problems. A limitation frequently encountered is that verification problems are often not fully expressible in the theories supported natively by the solvers. Many solvers allow the specification of application-specific theories as quantified axioms, but their handling is incomplete outside of narrow special cases.
In this work, we show how SMT solvers can be used to obtain complete decision procedures for local theory extensions, an important class of theories that are decidable using finite instantiation of axioms. We present an algorithm that uses E-matching to generate instances incrementally during the search, significantly reducing the number of generated instances compared to eager instantiation strategies. We have used two SMT solvers to implement this algorithm and conducted an extensive experimental evaluation on benchmarks derived from verification conditions for heap-manipulating programs. We believe that our results are of interest to both the users of SMT solvers as well as their developers.
To the memory of Morgan Deters
Published in Computer Aided Verification
The final publication is available at Springer via: http://link.springer.com/chapter/10.1007%2F978-3-319-21668-3_6
Satisfiability Modulo Theories (SMT) solvers are a cornerstone of today’s verification technology. Common applications of SMT include checking verification conditions in deductive verification [DBLP:conf/lpar/Leino10, DBLP:conf/esop/FilliatreP13], computing program abstractions in software model checking [DBLP:conf/fmcad/McMillan11, DBLP:journals/jar/BrilloutKRW11, DBLP:conf/cav/AlbarghouthiLGC12], and synthesizing code fragments in software synthesis [DBLP:conf/cav/BodikT12, DBLP:conf/popl/BeyeneCPR14]. Ultimately, all these tasks can be reduced to satisfiability of formulas in certain first-order theories that model the semantics of prevalent data types and software constructs, such as integers, bitvectors, and arrays. The appeal of SMT solvers is that they implement decision procedures for efficiently reasoning about formulas in these theories. Thus, they can often be used off the shelf as automated back-end solvers in verification tools.
Some verification tasks involve reasoning about universally quantified formulas, which goes beyond the capabilities of the solvers’ core decision procedures. Typical examples include verification of programs with complex data structures and concurrency, yielding formulas that quantify over unbounded sets of memory locations or thread identifiers. From a logical perspective, these quantified formulas can be thought of as axioms of application-specific theories. In practice, such theories often remain within decidable fragments of first-order logic [DBLP:journals/jar/BrilloutKRW11, DBLP:conf/atva/BouajjaniDES12, DBLP:conf/tacas/AlbertiGS14, DBLP:conf/popl/ItzhakyBILNS14]. However, their narrow scope (which is typically restricted to a specific program) does not justify the implementation of a dedicated decision procedure inside the SMT solver. Instead, many solvers allow theory axioms to be specified directly in the input constraints. The solver then provides a quantifier module that is designed to heuristically instantiate these axioms. These heuristics are in general incomplete and the user is given little control over the instance generation. Thus, even if there exists a finite instantiation strategy that yields a decision procedure for a specific set of axioms, the communication of strategies and tactics to SMT solvers is a challenge [DBLP:conf/birthday/MouraP13]. Further, the user cannot communicate the completeness of such a strategy. In this situation, the user is left with two alternatives: either she gives up on completeness, which may lead to usability issues in the verification tool, or she implements her own instantiation engine as a preprocessor to the SMT solver, leading to duplication of effort and reduced solver performance.
The contributions of this paper are two-fold. First, we provide a better understanding of how complete decision procedures for application-specific theories can be realized with the quantifier modules that are implemented in SMT solvers. Second, we explore several extensions of the capabilities of these modules to better serve the needs of verification tool developers. The focus of our exploration is on local theory extensions [SS05, IhlemannETAL08LocalReasoninginVerification]. A theory extension extends a given base theory with additional symbols and axioms. Local theory extensions are a class of such extensions that can be decided using finite quantifier instantiation of the extension axioms. This class is attractive because it is characterized by proof and model-theoretic properties that abstract from the intricacies of specific quantifier instantiation techniques [G01, SS05, DBLP:conf/frocos/HorbachS13]. Also, many well-known theories that are important in verification but not commonly supported by SMT solvers are in fact local theory extensions, even if they have not been presented as such in the literature. Examples include the array property fragment [DBLP:conf/vmcai/BradleyMS06], the theory of reachability in linked lists [DBLP:conf/vmcai/RakamaricBH07, DBLP:conf/popl/LahiriQ08], and the theories of finite sets [DBLP:conf/birthday/Zarba03] and multisets [Zarba02CombiningMultisetsIntegers].
We present a general decision procedure for local theory extensions that relies on E-matching, one of the core components of the quantifier modules in SMT solvers. We have implemented our decision procedure using the SMT solvers CVC4 [conf/cav/BarrettCDHJKRT11] and Z3 [MouraBjoerner08Z3] and applied it to a large set of SMT benchmarks coming from the deductive software verification tool GRASShopper [DBLP:conf/cav/PiskacWZ13, grasshopper]. These benchmarks use a hierarchical combination of local theory extensions to encode verification conditions that express correctness properties of programs manipulating complex heap-allocated data structures. Guided by our experiments, we developed generic optimizations in CVC4 that improve the performance of our base-line decision procedure. Some of these optimizations required us to implement extensions in the solver’s quantifier module. We believe that our results are of interest to both the users of SMT solvers as well as their developers. For users we provide simple ways of realizing complete decision procedures for application-specific theories with today’s SMT solvers. For developers we provide interesting insights that can help them further improve the completeness and performance of today’s quantifier instantiation modules.
Sofronie-Stokkermans [SS05] introduced local theory extensions as a generalization of locality in equational theories [DBLP:conf/kr/GivanM92, G01]. Further generalizations include Psi-local theories [IhlemannETAL08LocalReasoninginVerification], which can describe arbitrary theory extensions that admit finite quantifier instantiation. The formalization of our algorithm targets local theory extensions, but we briefly describe how it can be generalized to handle Psi-locality. The original decision procedure for local theory extensions presented in [SS05], which is implemented in H-Pilot [DBLP:conf/cade/IhlemannS09], eagerly generates all instances of extension axioms upfront, before the base theory solver is called. As we show in our experiments, eager instantiation is prohibitively expensive for many local theory extensions that are of interest in verification because it results in a high-degree polynomial blowup in the problem size.
In [Jacobs09], Swen Jacobs proposed an incremental instantiation algorithm for local theory extensions. The algorithm is a variant of model-based quantifier instantiation (MBQI). It uses the base theory solver to incrementally generate partial models from which relevant axiom instances are extracted. The algorithm was implemented as a plug-in to Z3, and experiments showed that it helps to reduce the overall number of axiom instances that need to be considered. However, the benchmarks used were artificially generated. Jacobs' algorithm is orthogonal to ours, as the focus of this paper is on how to use SMT solvers to decide local theory extensions without adding substantial new functionality to the solvers. A combination of the two approaches is feasible, as we discuss in more detail below.
Other variants of MBQI include its use in the context of finite model finding [ReyEtAl-CADE-13], and the algorithm described in [GM09], which is implemented in Z3. This algorithm is complete for the so-called almost uninterpreted fragment of first-order logic. While this fragment is not sufficiently expressive for the local theory extensions that appear in our benchmarks, it includes important fragments such as Effectively Propositional Logic (EPR). In fact, we have also experimented with a hybrid approach that uses our E-matching-based algorithm to reduce the benchmarks first to EPR and then solves them with Z3’s MBQI algorithm.
E-matching was first described in [Nelson:1980:TPV:909447] and has since been implemented in various SMT solvers [MB07, GBT09]. In practice, user-provided triggers can be given as hints for finer-grained control over quantifier instantiations in these implementations. More recent work [Dross2012] has made progress towards formalizing the semantics of triggers for the purpose of specifying decision procedures for a number of theories. A more general but incomplete technique [reynolds14quant_fmcad] addresses the prohibitively large number of instantiations produced by E-matching by prioritizing instantiations that lead to ground conflicts.
We start our discussion with a simple example that illustrates the basic idea behind local theory extensions. Consider the following set G of ground literals:
We interpret G in the theory of linear integer arithmetic extended with a monotonically increasing function f. One satisfying assignment for G is:
We now explain how an SMT solver can be used to conclude that G is indeed satisfiable in the above theory.
SMT solvers commonly provide built-in decision procedures for common theories such as the theory of linear integer arithmetic (LIA) and the theory of equality over uninterpreted functions (UF). However, they do not natively support the theory of monotone functions. The standard way to enforce f to be monotonic is to axiomatize this property,

∀x, y. x ≤ y ⇒ f(x) ≤ f(y)    (2)
and then let the SMT solver check whether G together with this axiom is satisfiable via a reduction to its natively supported theories. In our example, the reduction target is the combination of LIA and UF, which we refer to as the base theory, denoted T0. We refer to the axiom as a theory extension of the base theory and to the function symbol f as an extension symbol.
Most SMT solvers divide the work of deciding ground formulas in a base theory and axioms of theory extensions between different modules. A quantifier module looks for substitutions of the variables x and y within an axiom by ground terms s and t. We denote such a substitution as σ = [s/x, t/y] and the instance of an axiom K with respect to this substitution as Kσ. The quantifier module iteratively adds the generated ground instances as lemmas to G until the base theory solver derives a contradiction. However, if G is satisfiable, as in our case, then the quantifier module does not know when to stop generating instances of the axiom, and the solver may diverge, effectively enumerating an infinite model of G.
For a local theory extension, we can syntactically restrict the instances that need to be considered before concluding that G is satisfiable to a finite set of candidates. More precisely, a theory extension is called local if, in order to decide satisfiability of G together with the extension axioms, it is sufficient to consider only those instances in which all ground terms already occur in G or in the axioms. The monotonicity axiom (2) is a local theory extension of the base theory. The local instances of the axiom for G are:
Note that we do not need to instantiate x and y with the other ground terms occurring in G. Adding the above instances to G yields a ground formula
which is satisfiable in the base theory. Since the monotonicity axiom is a local theory extension, we can immediately conclude that G is also satisfiable in the extended theory.
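To make the instantiation step concrete, here is a minimal Python sketch of local instantiation for a monotonicity axiom ∀x, y. x ≤ y ⇒ f(x) ≤ f(y): only substitutions whose f-terms already occur among the ground f-arguments are generated. The constant names a and b are hypothetical stand-ins for the f-arguments in the example, not taken from the paper.

```python
# Sketch: local instantiation of the monotonicity axiom
#   forall x, y. x <= y -> f(x) <= f(y)
# Only substitutions mapping x and y to arguments of existing
# ground f-terms are kept, so no new ground terms are introduced.

def local_instances(f_args):
    """Instances of the monotonicity axiom over existing f-terms only."""
    return [f"{x} <= {y} -> f({x}) <= f({y})"
            for x in f_args for y in f_args]

instances = local_instances(["a", "b"])
# 2 ground f-arguments give 2 * 2 = 4 local instances
```

With two ground f-arguments, only four instances are needed before the ground solver can decide the problem, regardless of how many other ground terms occur elsewhere in the input.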
Recognizing Local Theory Extensions.
There are two useful characterizations of local theory extensions that can help users of SMT solvers in designing axiomatizations that are local. The first one is model-theoretic [G01, SS05]. Consider again the set of ground clauses from our example. When checking satisfiability in the base theory, the SMT solver may produce the following model:
This is not a model of the original monotonicity axiom. However, if we restrict the interpretation of the extension symbol f in this model to the ground terms occurring in the input, we obtain the partial model
This partial model can now be embedded into the model (1) of the theory extension. If such embeddings from partial models to total models of the theory extension always exist for all sets of ground literals G, then the extension is local. The second characterization of local theory extensions is proof-theoretic and states that a set of axioms constitutes a local theory extension if it is saturated under (ordered) resolution [DBLP:conf/lics/BasinG96]. This characterization can be used to automatically compute local theory extensions from non-local ones [DBLP:conf/frocos/HorbachS13].
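The restriction step in the model-theoretic characterization can be illustrated with a small Python sketch: a total interpretation of the extension symbol is cut down to the arguments that actually occur as ground terms. The interpretation table below is a hypothetical example, not the model from the paper.

```python
# Sketch: restricting a total interpretation of the extension symbol f
# to the ground f-arguments occurring in the input, yielding the
# partial model used in the embedding condition.

def restrict(interp, ground_args):
    """Keep f's value only on arguments that occur as ground terms."""
    return {x: v for x, v in interp.items() if x in ground_args}

total = {0: 5, 1: 7, 2: 0, 3: 9}   # hypothetical total f on {0, ..., 3}
partial = restrict(total, {0, 1})  # only 0 and 1 occur in the input
```

Locality then asks whether every such partial model extends to a total model of the axioms; here the partial table {0: 5, 1: 7} is monotone and can clearly be completed.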
Note that the locality property depends both on the base theory and on the specific axiomatization of the theory extension. For example, the following axiomatization of a monotone function f over the integers, which is logically equivalent to equation (2) over the integers, is not local:
Similarly, if we replace the inequalities in equation (2) by strict inequalities, then the extension is no longer local for the integer base theory. However, if we move to a base theory in which the order is dense (as in linear real arithmetic), then the strict version of the monotonicity axiom is again a local theory extension.
In the next two sections, we show how we can use the existing technology implemented in quantifier modules of SMT solvers to decide local theory extensions. In particular, we show how E-matching can be used to further reduce the number of axiom instances that need to be considered before we can conclude that a given set of ground literals is satisfiable.
Sorted first-order logic.
We present our problem in sorted first-order logic with equality. A signature Σ is a tuple (S, Ω, Π), where S is a countable set of sorts and Ω and Π are countable sets of function and predicate symbols, respectively. Each function symbol f ∈ Ω has an associated arity n ≥ 0 and an associated sort s1 × ... × sn → s0 with si ∈ S for all i. Function symbols of arity 0 are called constant symbols. Similarly, each predicate symbol P ∈ Π has an arity n ≥ 0 and a sort s1 × ... × sn. We assume dedicated equality symbols =s ∈ Π with the sort s × s for all sorts s ∈ S, though we typically drop the explicit subscript. Terms are built from the function symbols in Ω and (sorted) variables taken from a countably infinite set X that is disjoint from Ω. We denote by t : s that term t has sort s.
A Σ-atom is of the form P(t1, ..., tn), where P is a predicate symbol of sort s1 × ... × sn and the ti are terms with ti : si. A Σ-formula is either a Σ-atom A, ¬F, F1 ∧ F2, F1 ∨ F2, or ∀x : s. F, where F, F1, and F2 are Σ-formulas. A Σ-literal is either A or ¬A for a Σ-atom A. A Σ-clause is a disjunction of Σ-literals. A Σ-term, atom, or formula is said to be ground if no variable appears in it. For a set of formulas F, we denote by st(F) the set of all ground subterms that appear in F.
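The set st(F) of ground subterms can be computed by a simple recursion over the term structure. The following Python sketch represents terms as nested tuples, an assumption of this illustration rather than the paper's notation.

```python
# Sketch: collecting all ground subterms of a term.
# Terms are nested tuples: ("f", ("g", ("a",))) stands for f(g(a)),
# and ("a",) is the constant a.

def subterms(t):
    """All subterms of a ground term, including t itself."""
    out = {t}
    for arg in t[1:]:       # recurse into the argument terms
        out |= subterms(arg)
    return out

t = ("f", ("g", ("a",)))
# subterms(t) = { f(g(a)), g(a), a }
```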
A Σ-sentence is a Σ-formula with no free variables, where the free variables of a formula are defined in the standard fashion. We typically omit Σ when it is clear from the context.
Given a signature Σ, a Σ-structure M is a function that maps each sort s ∈ S to a non-empty set M(s), each function symbol f of sort s1 × ... × sn → s0 to a function M(f) : M(s1) × ... × M(sn) → M(s0), and each predicate symbol P of sort s1 × ... × sn to a relation M(P) ⊆ M(s1) × ... × M(sn). We assume that all structures interpret each symbol =s by the equality relation on M(s). For a Σ'-structure M, where Σ' extends a signature Σ with additional sorts and function symbols, we write M|Σ for the Σ-structure obtained by restricting M to Σ.
Given a structure M and a variable assignment ν, the evaluation of a term t in M under ν is defined as usual. For an atom A of the form P(t1, ..., tn), the structure M satisfies A under ν iff the tuple of evaluated arguments belongs to M(P); this is written M, ν ⊨ A. From this satisfaction relation of atoms and Σ-structures, we derive the standard notions of the satisfiability of a formula, satisfaction of a set of formulas, validity, and entailment. If a Σ-structure M satisfies a Σ-sentence F, we call M a model of F.
Theories and theory extensions.
A theory T over signature Σ is a set of Σ-structures. We call a Σ-sentence an axiom if it is the universal closure of a Σ-clause, and we denote a set of Σ-axioms by K. We consider theories defined as the class of Σ-structures that are models of a given set of Σ-sentences.
Let Σ0 be a signature and assume that the signature Σ1 extends Σ0 by new sorts and function symbols. We call the new elements extension symbols and terms starting with extension symbols extension terms. Given a Σ0-theory T0 and Σ1-axioms K, we call (T0, K) the theory extension of T0 with K. The extended theory T0 ∪ K is the set of all Σ1-structures that are models of K and whose reducts to Σ0 are in T0. We often identify the theory extension (T0, K) with the theory T0 ∪ K.
We formally define the problem of satisfiability modulo theory and the notion of local theory extensions in this section.
Let T be a theory over signature Σ. Given a Σ-formula F, we say F is satisfiable modulo T if there exists a structure M in T and an assignment ν of the variables in F such that M, ν ⊨ F. We define the ground satisfiability modulo theory problem as the corresponding decision problem for quantifier-free formulas.
Problem 1 (Ground satisfiability problem for a Σ-theory T).
Input: A quantifier-free Σ-formula F.
Output: sat if F is satisfiable modulo T, unsat otherwise.
We say the satisfiability problem for T is decidable if there exists a procedure for the above problem that always terminates with sat or unsat. We write entailment modulo a theory T as ⊨T.
We say an axiom of a theory extension is linear if all of its variables occur under at most one extension term. We say it is flat if there is no nesting of terms containing variables. It is easy to linearize and flatten the axioms by using additional variables and equalities. For example, an axiom containing a nested extension term f(g(x)) may be rewritten so that it contains only the flat term f(z), where z is a fresh variable constrained by the guard g(x) = z. For the remainder of the paper, we assume that all extension axioms are flat and linear. For simplicity of presentation, we assume that if a variable appears below a function symbol, then that symbol is an extension symbol.
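Flattening can be sketched programmatically. The following Python fragment replaces nested arguments of a function application by fresh variables and records the corresponding guard equations. The tuple representation and the fresh-variable naming scheme z0, z1, ... are assumptions of this sketch, and only one level of nesting is handled.

```python
import itertools

# Sketch: flattening one function application. A nested extension term
# such as f(g(x)) becomes f(z0), together with the guard g(x) = z0,
# where z0 is a fresh variable. Terms are nested tuples.

def flatten(term, fresh):
    """Return a flat term plus guard equations (nested_term, fresh_var)."""
    if len(term) == 1:              # constant or variable: already flat
        return term, []
    new_args, guards = [], []
    for arg in term[1:]:
        if len(arg) > 1:            # nested function application
            z = ("z%d" % next(fresh),)
            guards.append((arg, z))  # record guard arg = z
            new_args.append(z)
        else:
            new_args.append(arg)
    return (term[0], *new_args), guards

flat, guards = flatten(("f", ("g", ("x",))), itertools.count())
# flat = f(z0), guards = [ (g(x), z0) ]
```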
Definition 2 (Local theory extensions).
A theory extension (T0, K), consisting of a base theory T0 and extension axioms K, is local if for any set of ground literals G the following holds: G is satisfiable modulo T0 ∪ K if and only if G ∧ K[G] is satisfiable modulo T0 extended with free function symbols. Here, K[G] is the set of instances of K in which all subterms of the instantiation are subterms of G or of K (in other words, the instances do not introduce new terms).
For simplicity, in the rest of this paper we work with base theories that have decision procedures not only for the theory itself, but also for its extension with free function symbols. Thus, when we speak of satisfiability of a formula over the extended signature with respect to the base theory, we mean satisfiability in the base theory with the extension symbols treated as free function symbols. In SMT terms, we only consider extensions of theories containing uninterpreted functions (UF).
A naive decision procedure for the ground SMT problem of a local theory extension is thus to generate all instances K[G] of the axioms that do not introduce new ground terms, thereby reducing the problem to the ground SMT problem of the base theory extended with free functions.
Hierarchical extensions. Note that local theory extensions can be stacked to form hierarchies
Such a hierarchical arrangement of extension axioms is often useful to modularize locality proofs. In such cases, the condition that variables are only allowed to occur below extension symbols (of the current extension) can be relaxed to any extension symbol of the current level or below. The resulting theory extension can be decided by composing decision procedures for the individual extensions. Alternatively, one can use a monolithic decision procedure for the resulting theory, which can also be viewed as a single local theory extension of the base theory. In our experimental evaluation, which involved hierarchical extensions, we followed the latter approach.
In this section, we describe a decision procedure for a local theory extension (T0, K) that can be easily implemented in most SMT solvers with quantifier instantiation support. We describe our procedure as a theory module in a typical SMT solver architecture, and for simplicity we separate out the interaction between the theory solver and the core SMT solver. Abstractly, the procedure takes as input:
the original formula F,
a set of extension axioms K,
a set A of axiom instances that have already been generated,
a set G of satisfiable ground literals that entails F and the instances in A, and
a set E of equalities between terms.
It either returns
sat, denoting that F is satisfiable; or
a new set A' of instantiations of the axioms.
For completeness, we briefly describe how we envisage the interaction of this module within a DPLL(T) SMT solver. Let the input problem be F together with the axioms K. The SAT solver, along with the theory solvers for the base theory, finds a subset G of literals from F and the instances A such that its conjunction is satisfiable modulo the base theory. If no such satisfying assignment exists, the SMT solver stops with unsat. One can think of G as simply the literals on the SAT solver trail. G is sent to the module along with the information known about equalities between terms. The set A can also be thought of as internal state maintained by the module, with new instances sent out as theory lemmas and A updated after each call. If the module returns sat, the SMT solver also returns sat and stops. On the other hand, if the module returns a new set of instances, the SMT solver continues the search to additionally satisfy these.
E-matching. In order to describe our procedure, we introduce the well-studied E-matching problem. Given a universally quantified Σ-sentence F, let vars(F) denote its quantified variables. Define a Σ-substitution σ for F to be a mapping from vars(F) to Σ-terms of corresponding sorts. Given a Σ-term t, let tσ denote the term obtained by substituting the variables in t as prescribed by σ. Two substitutions σ1 and σ2 with the same domain are equivalent modulo a set of equalities E if E entails xσ1 = xσ2 for each variable x in their domain. We denote this as σ1 ≈E σ2.
Problem 3 (E-matching).
Input: A set of ground equalities E, a set of ground Σ-terms T, and patterns p1, ..., pn.
Output: The set of substitutions σ over the variables in the patterns, modulo ≈E, such that for each pattern pi there exists a t in T with E entailing piσ = t.
E-matching is a well-studied problem, particularly in the context of SMT. An algorithm for E-matching that is efficient and backtrackable is described in [MB07]. We denote this procedure by ematch.
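To illustrate the role of the equalities, the following Python sketch matches a single flat, linear pattern f(x) against a set of ground terms modulo a set of ground equalities, using a tiny union-find and returning one substitution per equivalence class. This is only a toy model of E-matching; real implementations such as [MB07] operate incrementally on the solver's E-graph, and the term representation here is an assumption of the sketch.

```python
# Toy E-matching for the flat linear pattern f(x) against ground
# terms, modulo ground equalities E. Substitutions that agree modulo
# E are returned only once.

def ematch(pattern_fn, ground_terms, equalities):
    parent = {}
    def find(t):                     # union-find with path halving
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    for s, t in equalities:          # merge E-classes for each s = t
        parent[find(s)] = find(t)
    subs = {}
    for t in ground_terms:
        if t[0] == pattern_fn:       # t = f(arg): x can map to arg
            subs.setdefault(find(t[1]), t[1])
    return set(subs.values())        # one substitution per E-class

gts = {("f", ("a",)), ("f", ("b",)), ("f", ("c",))}
# with a = b asserted, the candidate bindings for x collapse to two
# E-classes: {a, b} and {c}
matches = ematch("f", gts, [(("a",), ("b",))])
```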
The procedure is given in Fig. 1. Intuitively, it adds all the new instances along the current search path that are required for local theory reasoning as given in Definition 2, but modulo equality. For each axiom in K, the algorithm collects the extension terms containing variables; for the monotonicity axiom of Sect. 2, these are the terms f(x) and f(y). These terms serve as patterns for the E-matching procedure. Next, with the help of the E-matching algorithm, all new instances are computed (more precisely, instances that are equivalent modulo the known equalities to instances generated earlier are skipped). If there are no new instances for any axiom in K, and the current set of literals implies the input formula, we stop with sat, as effectively we have established that the input is satisfiable modulo the theory extension. Otherwise, we return the new set of instances.
We note that the algorithm may look inefficient because of the nested loops and the bookkeeping needed to track which substitutions have already been tried and which are new. In actual implementations, however, all of this is handled by the E-matching algorithm. There has been significant research on fast, incremental algorithms for E-matching in the context of SMT, and one advantage of our approach is that it can directly leverage this work.
Correctness. The correctness argument relies on two aspects: first, that if the SMT solver answers sat (resp. unsat), then the input is satisfiable (resp. unsatisfiable) modulo the theory extension; and second, that the procedure terminates.
For the case where the output is unsat, correctness follows from the fact that the generated set A contains only instances of the axioms K. The sat case is trickier, but the main idea is that the set of instances generated by the procedure is logically equivalent to the local instances K[G]. Thus, when the solver stops, G ∧ K[G] is satisfiable modulo the base theory, and hence, by locality, G is satisfiable modulo the theory extension. Since G entails the input formula, the input is satisfiable modulo the theory extension as well.
Termination relies on the fact that the instantiations returned by the procedure do not add new terms, and are always a subset of the local instances K[G]. Since K[G] is finite, the procedure eventually stops producing new instantiations. Assuming that we have a terminating decision procedure for the ground SMT problem of the base theory, we obtain a terminating decision procedure for the theory extension.
An SMT solver equipped with this theory module is thus a decision procedure for the ground satisfiability problem modulo the theory extension.
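The overall fixpoint structure of the procedure, and why it terminates, can be illustrated with a schematic Python loop. Here instantiate and smt_check are placeholders for the E-matching-based instantiation step and the base-theory solver, respectively; the toy run below is purely illustrative and does not reproduce the paper's implementation.

```python
# Schematic top-level loop: instantiate axioms until no new instances
# appear, then hand the accumulated ground problem to the base solver.
# Termination: instantiate() only draws on the fixed set of ground
# terms, so the set of possible instances is finite.

def decide(ground_terms, instantiate, smt_check):
    seen = set()
    while True:
        new = instantiate(ground_terms) - seen
        if not new:
            # fixpoint reached: by locality, the ground check suffices
            return smt_check(seen)
        seen |= new

# Toy run: monotonicity-style instances over two f-arguments, with a
# stub solver that simply reports "sat".
inst = lambda ts: {(x, y) for x in ts for y in ts}
result = decide({"a", "b"}, inst, lambda _: "sat")
```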
We briefly explain how our approach can be extended to the more general notion of Psi-local theory extensions [IhlemannETAL08LocalReasoninginVerification]. Sometimes it is not sufficient to consider only local instances of extension axioms to decide satisfiability modulo a theory extension. For example, consider the following set G of ground literals:
Suppose we interpret G in a theory of an injective function f with a partial inverse g on some set. We can axiomatize this theory as a theory extension of the theory of uninterpreted functions using the axiom
The set G is unsatisfiable in the theory extension, but the local instances of the axiom with respect to the ground terms st(G) are insufficient to yield a contradiction in the base theory. However, if we consider the local instances with respect to a larger set Ψ(st(G)) of ground terms
then we obtain, among others, the instances
Together with G, these instances are unsatisfiable in the base theory.
The set Ψ(st(G)) is computed as follows:
It turns out that considering local instances with respect to Ψ(st(G)) is sufficient to check satisfiability modulo the theory extension for arbitrary sets of ground clauses G. Moreover, Ψ(st(G)) is always finite. Thus, we still obtain a decision procedure for the theory extension via finite instantiation of the extension axioms. Psi-local theory extensions formalize this idea. In particular, if Ψ satisfies certain properties, including monotonicity and idempotence, one can again provide a model-theoretic characterization of completeness in terms of embeddings of partial models. We refer the reader to [IhlemannETAL08LocalReasoninginVerification] for the technical details.
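As a heavily simplified illustration, a closure operator in the spirit of Ψ for the inverse-function example might add a term g(f(t)) for every ground term f(t), so that local instances of the inverse axiom become available. The function names f and g and the exact closure rule are assumptions of this sketch and do not reproduce the paper's definition of Ψ.

```python
# Hedged sketch of a Psi-style closure: for every ground term f(t),
# add the term g(f(t)) to the term set. Terms are nested tuples,
# e.g. ("f", ("a",)) stands for f(a).

def psi_closure(ground_terms):
    closed = set(ground_terms)
    for t in ground_terms:
        if t[0] == "f":
            closed.add(("g", t))    # add the missing g(f(t)) term
    return closed

terms = {("f", ("a",)), ("f", ("b",)), ("a",), ("b",)}
closure = psi_closure(terms)
# two new terms g(f(a)) and g(f(b)) are added
```

Because each application adds only finitely many terms and applying the operator again adds nothing new, the closure is finite and idempotent, which is exactly what the decision procedure needs.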
To use our algorithm for deciding satisfiability of a set of ground literals modulo a Psi-local theory extension , we simply need to add an additional preprocessing step in which we compute and define where is a fresh predicate symbol. Then calling our procedure for with decides satisfiability of modulo .
6 Implementation and Experimental Results
Benchmarks. We evaluated our techniques on a set of benchmarks generated by the deductive verification tool GRASShopper [grasshopper-tool]. The benchmarks encode memory safety and functional correctness properties of programs that manipulate complex heap-allocated data structures. The programs are written in a type-safe imperative language without garbage collection. The tool makes no simplifying assumptions about these programs, such as acyclicity of heap structures.
GRASShopper supports mixed specifications in (classical) first-order logic and separation logic (SL) [Reynolds02SeparationLogic]. The tool reduces the program and specification to verification conditions that are encoded in a hierarchical combination of (Psi-)local theory extensions. This hierarchy of extensions is organized as follows:
Base theory: at the lowest level we have UFLIA, the theory of uninterpreted functions and linear integer arithmetic, which is directly supported by SMT solvers.
GRASS: the first extension layer consists of the theory of graph reachability and stratified sets. This theory is a disjoint combination of two local theory extensions: the theory of linked lists with reachability [DBLP:conf/popl/LahiriQ08] and the theory of sets over interpreted elements [DBLP:conf/birthday/Zarba03].
Frame axioms: the second extension layer consists of axioms that encode the frame rule of separation logic. This theory extension includes arrays as a subtheory.
Program-specific extensions: The final extension layer consists of a combination of local extensions that encode properties specific to the program and data structures under consideration. These include:
axioms defining memory footprints of SL specifications,
axioms defining structural constraints on the shape of data structures,
sortedness constraints, and
axioms defining partial inverses of certain functions, e.g., to express injectivity of functions and to specify the content of data structures.
We refer the interested reader to [DBLP:conf/cav/PiskacWZ13, grasshopper, DBLP:conf/cav/PiskacWZ14] for further details about the encoding.
The programs considered include sorting algorithms, common data structure operations such as inserting and removing elements, as well as complex operations on abstract data types. Our selection of data structures consists of singly and doubly-linked lists, sorted lists, nested linked lists with head pointers, binary search trees, skew heaps, and a union-find data structure. The input programs comprise 108 procedures with a total of 2000 lines of code, 260 lines of procedure contracts and loop invariants, and 250 lines of data structure specifications (including some duplicate specifications that could be shared across data structures). GRASShopper reduces the verification of these specifications to 816 SMT queries, each of which serves as one benchmark in our experiments. Of these, 802 benchmarks are unsatisfiable. The remaining 14 satisfiable benchmarks stem from programs that have bugs in their implementation or specification; all of these are genuine bugs that users of GRASShopper made while writing the programs. (See www.cs.nyu.edu/~kshitij/localtheories/ for the programs and benchmarks used.) We considered several versions of each benchmark, which we describe in more detail below. Each version is encoded as an SMT-LIB 2 input file.
Experimental setup. All experiments were conducted on the StarExec platform [StumpST14] with a CPU time limit of one hour and a memory limit of 100 GB. We focus on the SMT solvers CVC4 [conf/cav/BarrettCDHJKRT11] and Z3 [MouraBjoerner08Z3], as both support UFLIA and quantifiers via E-matching. (We used the version of Z3 downloaded from the git master branch at http://z3.codeplex.com on Jan 17, 2015.) The version of CVC4 we used is a fork of v1.4 with special support for quantifiers, available at www.github.com/kbansal/CVC4/tree/cav14-lte-draft.
In order to be able to test our approach with both CVC4 and Z3, wherever possible we transformed the benchmarks to simulate our algorithm. We describe these transformations next. First, the quantified formulas in the benchmarks were linearized, flattened, and annotated with patterns to simulate Step 1(a) of our algorithm (this was done by GRASShopper in our experiments, but may also be handled by an SMT solver aware of local theories). Both CVC4 and Z3 support using these annotations to control instantiations in their E-matching procedures. In order to handle Psi-local theories, the additional terms required for completeness were provided as dummy assertions, so that they appear as ground terms to the solver. In CVC4, we also made some internal changes to treat these assertions specially and to apply certain additional optimizations, which we describe later in this section.
Our first experiment compares the effectiveness of eager instantiation with that of incremental instantiation up to congruence (as done by E-matching). We instrumented GRASShopper to eagerly instantiate all axioms upfront. Figure 4 charts the number of eager instantiations against the number of E-matching instantiations for each query in a logarithmic plot. (Timeouts for CVC4 are not included.) Points lying on the central line have an equal number of instantiations in both series, while points on the lower line have 10 times as many eager instantiations as E-matching instantiations; the upper line corresponds to the inverse ratio. Most benchmarks require substantially more eager instantiations. Subfigure (a) compares upfront instantiation with a baseline implementation of our E-matching algorithm; points along the x-axis required no instantiations in CVC4 to conclude unsat. We have plotted the charts up to 10^10 instantiations. There were four outlying benchmarks for which upfront instantiation produced between 10^10 and 10^14 instances; E-matching required zero instantiations for all four. Subfigure (b) compares against an optimized version of our algorithm implemented in CVC4. It shows that incremental solving reduces the number of instantiations significantly, often by several orders of magnitude. The details of these optimizations are given later in the section.
Next, we did a more thorough comparison of running times and of the number of benchmarks solved on the uninstantiated benchmarks. These results are shown in Table 1. The benchmarks are partitioned according to the types of data structures occurring in the programs from which the benchmarks have been generated. Here, “sl” stands for singly-linked, “dl” for doubly-linked, and “sls” for sorted singly-linked. The binary search tree, skew heap, and union find benchmarks have all been summarized in the “trees” row. The row “soundness” contains unsatisfiable benchmarks that come from programs with incorrect code or specifications. These programs manipulate various types of data structures. The actual satisfiable queries that reveal the bugs in these programs are summarized in the “sat” row.
We simulated our algorithm and ran these experiments on both CVC4 (C) and Z3, obtaining similar improvements with both. We ran each with three configurations:
Default (UD). For comparison purposes, we ran the solvers with their default options. CVC4’s default solver uses an E-matching based heuristic instantiation procedure, whereas Z3’s uses both E-matching and model-based quantifier instantiation (MBQI). For both solvers, the default procedures are incomplete for our benchmarks.
Local (UL). These columns refer to the E-matching based complete procedure for local theory extensions (the algorithm in Fig. 1). (The configuration C UL had one memory-out on a benchmark in the tree family.)
Local with optimizations (ULO). Doing instantiations inside the solver instead of upfront opens up room for optimizations: one can try some instantiations before others, or reduce the number of instantiations using other heuristics that do not affect completeness. The results in these columns show the effect of all such optimizations.
As noted above, the UL and ULO procedures are both complete, whereas UD is not. This is also reflected in the “sat” row in Table 1: incomplete instantiation-based procedures cannot hope to answer “sat”. A significant improvement can be seen between the UL and ULO columns. The general thrust of the optimizations was to avoid a blowup in the number of instantiations by doing ground theory checks on a subset of the instantiations. Our intuition is that the theory lemmas learned from these checks eliminate large parts of the search space before we do further instantiations.
For example, we used a heuristic for Psi-local theories inspired by the observation that the axioms involving Psi-terms are needed mostly for completeness, and that most of the time we can prove unsatisfiability without instantiating axioms with these terms. We tried an approach in which the instantiations were staged: first, instantiations were generated according to the algorithm in Fig. 1 for locality with respect to the ground terms of the original query; only when those were saturated were the instantiations for the auxiliary Psi-terms generated. We found this to be very helpful. Since this required non-trivial changes inside the solver, we only implemented this optimization in CVC4; but we believe that staging instantiations for Psi-local theories is a good strategy in general.
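Staging can be sketched as a generic driver loop (the `check` callback below is a hypothetical stand-in for the ground solver; the real implementation lives inside CVC4's quantifier module):

```python
def staged_check(check, rounds):
    """Assert instantiation rounds one stage at a time.

    `rounds` is ordered: instances for the original ground terms
    first, instances for the auxiliary Psi-terms last.  A later
    stage is only ever tried if the earlier ones fail to refute.
    `check(instances)` returns "unsat" or "unknown".
    """
    asserted = []
    for stage in rounds:
        asserted.extend(stage)
        if check(asserted) == "unsat":
            return "unsat"        # Psi-term instances were never needed
    return "sat"                  # all stages saturated without refutation

# Toy run: the first stage already refutes the query, so the
# (potentially much larger) Psi-term stage is skipped entirely.
calls = []
def check(instances):
    calls.append(list(instances))
    return "unsat" if "ax[t0]" in instances else "unknown"

result = staged_check(check, [["ax[t0]"], ["ax[psi0]", "ax[psi1]"]])
print(result)       # unsat
print(len(calls))   # 1
```

Answering "sat" after the final stage is justified only by the locality argument of our algorithm, i.e. once all required instances have been asserted.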
A second optimization, again with the aim of cutting down instantiations, was to add assertions of the form t = s ∨ t ≠ s to the benchmarks, where t and s are ground terms. This forces an arbitrary arrangement over the ground terms before the instantiation procedure kicks in. Intuitively, the solver first does checks with many terms equal to each other (and hence fewer instantiations), eliminating as much of the search space as possible. Only when an equality or disequality is relevant to the reasoning is the solver forced to instantiate with terms that are disequal to each other. One may contrast this with ideas used successfully in the care-graph-based theory combination framework in SMT, where one needs to try all possible arrangements of equalities over terms. It has been observed that equality or disequality is often relevant only for a subset of the pairs of terms. Whereas in theory combination this idea is used to cut down the number of arrangements that need to be considered, we use it to reduce the number of unnecessary instantiations. We found that this helped CVC4 considerably on many benchmarks.
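As a sketch, the splitting assertions can be produced by a few lines of preprocessing (SMT-LIB-style strings, purely illustrative):

```python
from itertools import combinations

def arrangement_splits(ground_terms):
    """Tautologies (t = s) \\/ (t != s) over all pairs of ground
    terms of the same sort.  Asserting them makes the SAT core
    commit to an arrangement of the terms before the instantiation
    procedure kicks in."""
    return [f"(assert (or (= {t} {s}) (distinct {t} {s})))"
            for t, s in combinations(ground_terms, 2)]

splits = arrangement_splits(["a", "b", "(f a)"])
for s in splits:
    print(s)
# (assert (or (= a b) (distinct a b)))
# (assert (or (= a (f a)) (distinct a (f a))))
# (assert (or (= b (f a)) (distinct b (f a))))
```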
Another optimization was to instantiate special cases of the axioms first, by enforcing equalities between variables of the same sort, before doing a full instantiation. We did this for axioms that yield a particularly large number of instances (the number of instantiations growing with the fourth power of the number of ground terms). Again, we believe this could be a good heuristic in general.
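A quick back-of-the-envelope calculation shows why this matters for such axioms: with n ground terms and a 4-variable axiom, full instantiation yields n^4 instances, while each enforced equality between two of the axiom's variables drops one exponent:

```python
def num_instances(n_terms, n_vars, n_enforced_equalities=0):
    """Instance count for one axiom: each equality enforced between
    two of its variables removes one free variable to instantiate."""
    return n_terms ** (n_vars - n_enforced_equalities)

n = 50  # illustrative number of ground terms
print(num_instances(n, 4))     # 6250000  full instantiation
print(num_instances(n, 4, 1))  # 125000   special case x1 = x2 first
```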
Effectively propositional logic (EPR) is the fragment of first-order logic consisting of formulas of the shape ∃x1...xm ∀y1...yn. F, with F quantifier-free, where none of the universally quantified variables yi appears below a function symbol in F. Theory extensions that fall into EPR are always local. Our third exploration was to see whether we can exploit dedicated procedures for this fragment when such fragments occur in the benchmarks. For the EPR fragment, Z3 has a complete decision procedure that uses model-based quantifier instantiation. We therefore implemented a hybrid approach in which we did an upfront partial instantiation down to the EPR fragment, using E-matching with respect to top-level equalities (as described in our algorithm). The resulting EPR benchmark is then decided using Z3’s MBQI mode. This approach can only be expected to help where there are EPR-like axioms in the benchmarks, and some of our benchmarks make heavier use of such axioms. We found that on singly-linked list and tree benchmarks this hybrid algorithm significantly outperforms all other solver configurations that we tried in our experiments. On the other hand, on nested list benchmarks, which make heavier use of purely equational axioms, this technique does not help compared to only using E-matching, because the partial instantiation already yields ground formulas.
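For intuition, the EPR restriction is purely syntactic and easy to check; the following illustrative code (our own tuple encoding of terms, not any solver's API) tests whether a universally quantified variable occurs below a function symbol:

```python
def is_epr(universal_vars, literals):
    """True iff no universally quantified variable occurs below a
    function symbol.  Terms are strings (variables/constants) or
    tuples ('f', arg1, ...); each literal is a predicate
    application ('P', t1, ..., tn)."""
    def bad(term, under_function):
        if isinstance(term, str):
            return under_function and term in universal_vars
        # term is a function application: its arguments sit below it
        return any(bad(a, True) for a in term[1:])
    # arguments of the top-level predicate are not below a function
    return not any(any(bad(a, False) for a in lit[1:]) for lit in literals)

print(is_epr({"y"}, [("P", "y", "c")]))    # True:  forall y. P(y, c)
print(is_epr({"y"}, [("P", ("f", "y"))]))  # False: y occurs below f
```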
The results with our hybrid algorithm are summarized in column Z3 PM of Table 2. Since EPR is a special case of local theories, we also tried our E-matching based algorithm on these benchmarks. We found that staged instantiation improves performance on these as well. The optimizations that help on the uninstantiated benchmarks also work here. These results are summarized in the same table.
Overall, our experiments indicate that there is considerable potential in the design of quantifier modules to further improve the performance of SMT solvers, and at the same time to make them complete on more expressive decidable fragments.
We have presented a new algorithm for deciding local theory extensions, a class of theories that plays an important role in verification applications. Our algorithm relies on existing SMT solver technology so that it can be easily implemented in today’s solvers. In its simplest form, the algorithm does not require any modifications to the solver itself but only trivial syntactic modifications to its input. These are: (1) flattening and linearizing the extension axioms; and (2) adding trigger annotations to encode locality constraints for E-matching. In our evaluation we have experimented with different configurations of two SMT solvers, implementing a number of optimizations of our baseline algorithm. Our results suggest interesting directions to further improve the quantifier modules of current SMT solvers, promising better performance and usability for applications in automated verification.