Synthesising Interprocedural Bit-Precise Termination Proofs (extended version)

The research leading to these results has received funding from the ARTEMIS Joint Undertaking under grant agreement number 295311 “VeTeSS”, and ERC project 280053 (CPROVER).


Hong-Yi Chen, Cristina David, Daniel Kroening, Peter Schrammel and Björn Wachter
Department of Computer Science, University of Oxford

Proving program termination is key to guaranteeing the absence of undesirable behaviour, such as hanging programs and even security vulnerabilities such as denial-of-service attacks. To make termination checks scale to large systems, interprocedural termination analysis seems essential. This is a largely unexplored area of research in termination analysis, where most effort has focussed on difficult single-procedure problems. We present a modular termination analysis for C programs using template-based interprocedural summarisation. Our analysis combines a context-sensitive, over-approximating forward analysis with the inference of under-approximating preconditions for termination. Bit-precise termination arguments are synthesised over lexicographic linear ranking function templates. Our experimental results show that our tool 2LS outperforms state-of-the-art alternatives, and demonstrate the clear advantage of interprocedural reasoning over monolithic analysis in terms of efficiency, while retaining comparable precision.

I Introduction

Termination bugs can compromise safety-critical software systems by making them unresponsive; e.g., termination bugs can be exploited in denial-of-service attacks [CVE]. Termination guarantees are therefore instrumental for software reliability. Termination provers, static analysis tools that aim to construct a termination proof for a given input program, have made tremendous progress. They enable fully automatic proofs for complex loops that may require linear lexicographic (e.g. [BG13, LH14]) or non-linear termination arguments (e.g. [BMS05b]). However, major practical challenges remain in analysing real-world code.

First of all, as observed by [FKS12], most approaches in the literature are specialised to linear arithmetic over unbounded mathematical integers. Although unbounded arithmetic may reflect the intuitively expected program behaviour, the program actually executes over bounded machine integers. The semantics of C requires unsigned integers to wrap around when they over/underflow; hence, arithmetic on n-bit-wide unsigned integers must be performed modulo 2^n. According to the C standards, over/underflows of signed integers are undefined behaviour, but in practice they also wrap around on most architectures. Thus, accurate termination analysis requires a bit-precise analysis of program semantics, and tools must be configurable with architectural specifications such as the width of data types and endianness. The following examples illustrate that termination behaviour on machine integers can be completely different from that on mathematical integers. For example, the following code:

void foo1(unsigned n) { for(unsigned x=0; x<=n; x++); }

does terminate with mathematical integers, but does not terminate with machine integers if n equals the largest unsigned integer. On the other hand, the following code:

void foo2(unsigned x) { while(x>=10) x++; }

does not terminate with mathematical integers, but terminates with machine integers because unsigned machine integers wrap around.

A second challenge is to make termination analysis scale to larger programs. The yearly Software Verification Competition (SV-COMP) [DBLP:conf/tacas/Beyer15] includes a division on termination analysis, which gives a representative picture of the state of the art. The SV-COMP’15 termination benchmarks are challenging termination problems on small programs: they contain at most 453 instructions (average 53), at most 7 functions (average 3), and at most 4 loops (average 1).

In this paper, we present a technique that we have successfully run on programs that are one order of magnitude larger, containing up to 5000 instructions. Larger instances require different algorithmic techniques to scale, e.g., modular interprocedural analysis rather than monolithic analysis. This poses several conceptual and practical challenges that do not arise in monolithic termination analysers. For example, when proving termination of a program, a possible approach is to try to prove that all procedures in the program terminate universally, i.e., in any possible calling context. However, this criterion is too optimistic, as termination of individual procedures often depends on the calling context, i.e., procedures terminate conditionally, only in specific calling contexts.

Hence, an interprocedural analysis strategy is to verify universal program termination in a top-down manner by proving termination of each procedure relative to its calling contexts, and propagating upwards which calling contexts guarantee termination of the procedure. Since it is too difficult to determine these contexts precisely, analysers compute preconditions for termination. A sufficient precondition identifies those pre-states in which the procedure will definitely terminate, and is thus suitable for proving termination. By contrast, a necessary precondition identifies the pre-states in which the procedure may terminate; its negation describes those states in which the procedure will not terminate, which is useful for proving nontermination.

In this paper we focus on the computation of sufficient preconditions. Preconditions enable the reuse of information, and thus scalability, as it is frequently possible to avoid repeated analysis of parts of the code base, e.g. libraries whose procedures are called multiple times or that did not undergo modifications between successive analysis runs.

  1. We propose an algorithm for interprocedural termination analysis. The approach is based on a template-based static analysis using SAT solving. It combines context-sensitive, summary-based interprocedural analysis with the inference of preconditions for termination based on template abstractions. We focus on non-recursive programs, which cover a large portion of software written, especially in domains such as embedded systems.

  2. We provide an implementation of the approach in 2LS, a static analysis tool for C programs. Our instantiation of the algorithm uses template polyhedra and lexicographic linear ranking function templates. The analysis is bit-precise and relies purely on SAT-solving techniques.

  3. We report the results of an experimental evaluation on 597 procedural SV-COMP benchmarks with a total of 1.6 million lines of code, which demonstrates the scalability and applicability of the approach to programs with thousands of lines of code.

II Preliminaries

In this section, we introduce basic notions of interprocedural and termination analysis.

Program model and notation.

We assume that programs are given in terms of acyclic call graphs (we consider non-recursive programs with multiple procedures), where individual procedures are given in terms of symbolic input/output transition systems. Formally, the input/output transition system of a procedure p is a triple (Init_p, Trans_p, Out_p), where Trans_p(x, x') is the transition relation; the input relation Init_p(x^in, x) defines the initial states of the transition system and relates them to the inputs x^in; the output relation Out_p(x, x^out) connects the transition system to the outputs x^out of the procedure. Inputs are procedure parameters, global variables, and memory objects that are read by p. Outputs are return values, and potential side effects such as global variables and memory objects written by p. The internal states x are commonly the values of the variables at the loop heads in p.

These relations are given as first-order logic formulae resulting from the logical encoding of the program semantics. Fig. 2 shows the encoding of the two procedures in Fig. 1 into such formulae. (c?a:b is the conditional operator, which returns a if c evaluates to true, and b otherwise.) The inputs of h are the parameter y, and the outputs consist of the return value, denoted x^out. The transition relation Trans_h encodes the loop over the internal state variables x and x'. We may need to introduce Boolean variables to model the control flow, as shown in Fig. 2. Multiple and nested loops can be similarly encoded in Trans.

Note that we view these formulae as predicates, e.g. Trans_p(x, x'), with the given parameters; writing Trans_p(a, b) then means substituting a and b for the parameters x and x'. Moreover, vectors of variables x are distinguished from scalar variables x: the former is a vector, whereas the latter is a scalar.

Each call to a procedure q at call site i in a procedure p is modelled by a placeholder predicate occurring in the formula Trans_p. The placeholder predicate ranges over intermediate variables representing the actual input and output parameters of the call. Placeholder predicates evaluate to true, which corresponds to havocking procedure calls. In procedure f in Fig. 2, the placeholder for the procedure call to h ranges over the actual input and output parameters z and w, respectively.

A full description of the program encoding is given in the Appendix.

Basic concepts.

Moving on to interprocedural analysis, we introduce formal notation for the basic concepts below:

Definition 1 (Invariants, Summaries, Calling Contexts).

For a procedure p given by (Init_p, Trans_p, Out_p) we define:

  • An invariant is a predicate Inv such that:
    ∀x^in, x, x'. (Init_p(x^in, x) ⟹ Inv(x)) ∧ (Inv(x) ∧ Trans_p(x, x') ⟹ Inv(x'))

  • Given an invariant Inv, a summary is a predicate Sum such that:
    ∀x^in, x, x^out. Init_p(x^in, x) ∧ Inv(x) ∧ Out_p(x, x^out) ⟹ Sum(x^in, x^out)

  • Given an invariant Inv, the calling context for a procedure call q at call site i in the given procedure p is a predicate CallCtx_q over the actual input and output parameters of the call such that:
    ∀x^in, x, x'. Inv(x) ∧ Trans_p(x, x') ⟹ CallCtx_q(x_q^in, x_q^out)

These concepts have the following roles: Invariants abstract the behaviour of loops. Summaries abstract the behaviour of called procedures; they are used to strengthen the placeholder predicates. Calling contexts abstract the caller’s behaviour w.r.t. the procedure being called. When analysing the callee, the calling contexts are used to constrain its inputs and outputs. In Sec. III we will illustrate these notions on the program in Fig. 1.

unsigned f(unsigned z) {
  unsigned w=0;
  if(z>0) w=h(z);
  return w;
}

unsigned h(unsigned y) {
  unsigned x;
  for(x=0; x<10; x+=y);
  return x;
}
Fig. 1: Example.
Fig. 2: Encoding of Example 1.

Since we want to reason about termination, we need the notions of ranking functions and preconditions for termination.

Definition 2 (Ranking function).

A ranking function for a procedure p with invariant Inv is a function r from the set of program states to a well-founded domain such that:

∀x, x'. Inv(x) ∧ Trans_p(x, x') ⟹ r(x) > r(x')

We denote by RR(r) a set of constraints that guarantees that r is a ranking function. The existence of a ranking function for a procedure guarantees its universal termination.

The weakest termination precondition for a procedure describes exactly the inputs for which it terminates. If it is true, the procedure terminates universally; if it is false, the procedure does not terminate for any input. Since the weakest precondition is intractable to compute, or even uncomputable, we under-approximate the precondition. A sufficient precondition for termination guarantees that the program terminates for all inputs that satisfy it.

Definition 3 (Precondition for termination).

Given a procedure p, a sufficient precondition for termination is a predicate Precond over the inputs x^in such that there exist an invariant Inv and a ranking function r with:

∀x^in, x, x'. (Precond(x^in) ∧ Init_p(x^in, x) ⟹ Inv(x)) ∧ (Inv(x) ∧ Trans_p(x, x') ⟹ Inv(x') ∧ r(x) > r(x'))

Note that false is always a trivial model for Precond, but not a very useful one.

III Overview of the Approach

In this section, we introduce the architecture of our interprocedural termination analysis. Our analysis combines, in a non-trivial synergistic way, the inference of invariants, summaries, calling contexts, termination arguments, and preconditions, which have a concise characterisation in second-order logic (see Definitions 1 to 3). At the lowest level, our approach relies on a solver backend for second-order problems, which is described in Sec. V.

To see how the different analysis components fit together, we now go through the pseudo-code of our termination analyser (Algorithm 1). The top-level function is given the entry procedure of the program as argument and proceeds in two analysis phases.

Phase one is an over-approximate forward analysis, which recursively descends into the call graph from the entry point. For each procedure call in the current procedure, it infers an over-approximating calling context, using procedure summaries and other previously computed information. Before analysing a callee, the analysis checks whether the callee has already been analysed and whether the stored summary can be re-used, i.e., whether it is compatible with the new calling context. Finally, once summaries for all callees are available, the analysis infers loop invariants and a summary for the current procedure itself, which are stored for later re-use by means of a join operator.

The second phase is an under-approximate backward analysis, which infers termination preconditions. Again, we recursively descend into the call graph. Analogously to the forward analysis, we infer for each procedure call an under-approximating calling context (using under-approximate summaries, as described in Sec. IV), and recurse only if necessary. Finally, we compute the under-approximating precondition for termination. This precondition is inferred w.r.t. the termination conditions that have been collected: the backward calling context, the preconditions for termination of the callees, and the termination arguments for the procedure itself (see Sec. IV). Note that the superscripts in predicate symbols indicate over- and under-approximation, respectively.

1 global ;
2 function 
3       foreach procedure call in  do
4             ;
5             if  then
6                   ;
11       ;
12       foreach procedure call in  do
13             ;
14             if  then
15                   ;
17            ;
21       ;
22       ;
23       return ;
Algorithm 1

Our algorithm uses over- and under-approximation in a novel, systematic way. In particular, we address the challenging problem of finding meaningful preconditions:

  • The precondition in Definition 3 admits the trivial solution false. How do we find a good candidate? To this end, we “bootstrap” the process with a candidate precondition: a single input value, for which we compute a termination argument. The key observation is that the resulting termination argument is typically more general, i.e., it shows termination for many further entry states. A more general precondition is then computed by precondition inference w.r.t. the termination argument.

  • A second challenge is to compute under-approximations. Obviously, the predicates in the definitions in Sec. II can be over-approximated by using abstract domains such as intervals. However, there are only a few methods for under-approximating analyses. In this work, we use a method similar to [CGL+08] to obtain under-approximating preconditions w.r.t. a given property: we infer an over-approximating precondition w.r.t. the negated property and negate the result. In our case, the property of interest is the termination condition.


We illustrate the algorithm on the simple example given as Fig. 1 with the encoding in Fig. 2. f calls a procedure h. Procedure h terminates if and only if its argument y is non-zero, i.e., procedure h only terminates conditionally. The call of h is guarded by the condition z>0, which guarantees universal termination of procedure f.

Let us assume that unsigned integers are 32 bits wide, and that we use an interval abstract domain for invariant, summary and precondition inference, but a trivial abstract domain with the elements ⊥ (unreachable) and ⊤ for computing calling contexts, i.e., we can only prove that calls are unreachable.

Our algorithm proceeds as follows. The first phase starts from the entry procedure f. Descending into the call graph, we must compute an over-approximating calling context for procedure h, for which no calling context has been computed before. In the trivial domain, this calling context is ⊤. Hence, we recursively analyse h. Given that h does not contain any procedure calls, we compute an over-approximating summary and invariant for h. This information can then be used to compute a summary and invariant for the entry procedure f.

The backward analysis starts again from the entry procedure f. It computes an under-approximating calling context for procedure h before descending into the call graph. It then computes an under-approximating precondition for termination or, more precisely, an under-approximating summary whose projection onto the input variable of h is the precondition y ≥ 1. By applying this summary at the call site of h in f, we can now compute the precondition for termination of f, which is true and thus proves universal termination of f.

We illustrate the effect of the choice of the abstract domain on the analysis of the example program. Assume we replace the trivial domain for calling contexts by the interval domain. In this case, the forward analysis computes the calling context z ≥ 1. The calling context is computed over the actual parameters z and w. It is renamed to the formal parameter y and the return value when it is used for constraining the pre/postconditions in the analysis of h. Subsequently, the backward analysis computes the precondition for termination of h using the union of all calling contexts in the program. Since h terminates unconditionally in these calling contexts, we trivially obtain the precondition true, which in turn proves universal termination of f.

IV Interprocedural Termination Analysis

We can view Alg. 1 as solving a series of formulae in second-order predicate logic with existentially quantified predicates, for which we are seeking satisfiability witnesses. (To be precise, we are not only looking for witness predicates but for good approximations of weakest or strongest predicates; finding such biased witnesses is a feature of our synthesis algorithms.) In this section, we state the constraints we solve, including all the side constraints arising from the interprocedural analysis. Note that this is not a formalisation exercise: these are precisely the formulae solved by our synthesis backend, which is described in Section V.

IV-A Universal Termination

1 global ;
2 function 
3       foreach procedure call in  do
4             ;
5             if  then
6                   ;
11       ;
12       foreach procedure call in  do
13             if  then
14                   ;
15                   ;
20       ;
21       ;
22       return ;
Algorithm 2 for universal termination

For didactic purposes, we start with a simplification of Algorithm 1 that is able to show universal termination (see Algorithm 2). This variant reduces the backward analysis to a termination check per procedure and propagates back the qualitative result obtained: terminating, potentially non-terminating, or non-terminating.

This section states the constraints that are solved to compute the outcome of the functions underlined in Algorithm 2 and establishes its soundness:

  • the forward calling contexts (Def. 4)

  • the forward invariants and summaries (Def. 5)

  • the termination argument (Lemma 3)

Definition 4 (Forward calling context).

A forward calling context for a procedure call q at call site i in procedure p in calling context CallCtx_p is a satisfiability witness CallCtx_q of the following formula:

∀x^in, x, x', x^out. CallCtx_p(x^in, x^out) ∧ Init_p(x^in, x) ∧ Inv(x) ∧ Trans_p(x, x') ∧ g ⟹ CallCtx_q(x_q^in, x_q^out)

where g is the guard condition of procedure call q at call site i in p, capturing the branch conditions from conditionals. For example, the guard of the procedure call to h in f in Fig. 1 is z > 0. The summary occurring in Trans_p is the currently available summary for h (cf. the global variables in Alg. 1). Assumptions correspond to assume() statements in the code.

Lemma 1.

The forward calling context of Def. 4 is over-approximating.

Proof sketch.

The calling context of the entry-point procedure is true; also, the summaries are initially assumed to be true, i.e. over-approximating. Hence, given that the invariants and summaries are over-approximating, the computed calling context is over-approximating by the soundness of the synthesis (see Thm. 3 in Sec. V). ∎


Let us consider procedure f in Fig. 1. f is the entry procedure, hence its calling context is true (equivalently, 0 ≤ z ≤ 2^32−1 when using the interval abstract domain for 32-bit integers). Then, we instantiate Def. 4 (for procedure f) to compute the calling context of the call to h. We assume that we have not yet computed a summary for h; thus, its summary is true. Remember that the placeholder predicate evaluates to true. Notably, there are no assumptions in the code, meaning that the assumption conjunct is true.

A solution is, for instance, z ≥ 1 for the actual input parameter, and true for the actual output parameter.

Definition 5 (Forward invariants and summary).

A forward summary Sum and invariants Inv for procedure p in calling context CallCtx_p are satisfiability witnesses of the following formula:

∀x^in, x, x', x^out. CallCtx_p(x^in, x^out) ⟹ ( (Init_p(x^in, x) ⟹ Inv(x)) ∧ (Inv(x) ∧ Trans_p(x, x') ⟹ Inv(x')) ∧ (Init_p(x^in, x) ∧ Inv(x) ∧ Out_p(x, x^out) ⟹ Sum(x^in, x^out)) )

Lemma 2.

The summary Sum and invariants Inv of Def. 5 are over-approximating.

Proof sketch.

By Lemma 1, the calling context is over-approximating. Also, the callee summaries are initially assumed to be true, i.e. over-approximating. Hence, given that these are over-approximating, Sum and Inv are over-approximating by the soundness of the synthesis (Thm. 3). ∎


Let us consider procedure h in Fig. 1. We have computed the calling context y ≥ 1 (with actual parameters renamed to formal ones). Then, we need to obtain witnesses Inv and Sum to the satisfiability of the instantiation of Def. 5 (for procedure h).

A solution is, for instance, the invariant 0 ≤ x ≤ 2^32−1 and the summary y ≥ 1 ∧ 10 ≤ x^out ≤ 2^32−1.

Remark 1.

Since Def. 4 and Def. 5 are interdependent, we can compute them iteratively until a fixed point is reached in order to improve the precision of calling contexts, invariants and summaries. However, for efficiency reasons, we perform only the first iteration of this (greatest) fixed point computation.

Lemma 3 (Universal termination).

A procedure p with forward invariants Inv terminates if there is a termination argument, i.e., a ranking function r such that:

∀x, x'. Inv(x) ∧ Assertions ∧ Trans_p(x, x') ⟹ r(x) > r(x')

The assertions in this formula correspond to assert() statements in the code. They can be assumed to hold because assertion-violating traces terminate. Over-approximating forward information may lead to the inclusion of spurious non-terminating traces; for that reason, we might not find a termination argument even though the procedure terminates. As we essentially under-approximate the set of terminating procedures, we will not give false positives. Regarding the solving algorithm for this formula, we refer to Sec. V.


Let us consider function h in Fig. 1. Assume we have the invariant and calling context computed above. Thus, we have to solve the instantiation of Lemma 3 for h.

When using a linear ranking function template, we obtain as solution, for example, the ranking function r(x) = 10 − x (for the calling context y = 1).

If there is no trace from procedure entry to exit, then we can prove non-termination, even when using over-approximations:

Lemma 4 (Non-termination).

A procedure p with forward calling context and forward invariants never terminates if its summary is unsatisfiable (false).

Termination information is then propagated through the (acyclic) call graph (Algorithm 2):

Proposition 1.

A procedure is declared

  1. non-terminating if it is non-terminating by Lemma 4.

  2. terminating if

    1. all its procedure calls that are potentially reachable (i.e. whose calling contexts are satisfiable) are declared terminating, and

    2. itself is terminating according to Lemma 3;

  3. potentially non-terminating, otherwise.

Our implementation is more efficient than Algorithm 2 because it avoids computing a termination argument for a procedure if one of its callees is potentially non-terminating.

Theorem 1.

If the entry procedure of a program is declared terminating, then the program terminates universally. If the entry procedure of a program is declared non-terminating, then the program never terminates.

Proof sketch.

By induction over the acyclic call graph using Prop. 1. ∎

IV-B Preconditions for Termination

Before introducing conditional termination, we discuss preconditions for termination.

If a procedure terminates only conditionally, like procedure h in Fig. 1, Lemma 3 will not be able to find a satisfying ranking function. However, we would like to know under which preconditions, i.e. for which values of y in the example above, the procedure terminates.

Input: procedure with invariant , additional termination conditions
Output: precondition
1 ;
2 let ;
3 while  do
4       ;
5       solve for ;
6       if UNSAT then return ;
7       else
8             let be a model of ;
9             let ;
10             let ;
11             if  then  ;
12             else
13                   let ;
14                   let ;
15                   ;
Algorithm 3

We can state this problem as defined in Def. 3. In Algorithm 3 we search for the precondition, the invariant, and the termination argument in an interleaved manner. Note that false is a trivial solution for the precondition; we thus have to aim at finding a good under-approximation of the maximal solution (the weakest precondition).

We bootstrap the process by assuming the precondition true and searching for a concrete input value. If such a value exists, we compute an invariant under the precondition candidate and use Lemma 3 to search for the corresponding termination argument.

If we fail to find a termination argument, we block the precondition candidate and restart the bootstrapping process. Otherwise, the algorithm returns a termination argument that is valid for the concrete input value. Now we need to find a sufficiently weak precondition for which this termination argument guarantees termination. To this end, we compute an over-approximating precondition for those inputs for which we cannot guarantee termination (this includes additional termination conditions coming from the backward calling context and the preconditions of procedure calls, see Sec. IV-C). The negation of this precondition is an under-approximation of those inputs for which the procedure terminates. Finally, we add this negated precondition to our precondition before we start over the bootstrapping process to find precondition candidates outside the current precondition for which we might be able to guarantee termination.


Let us consider again function h in Fig. 1. We bootstrap by assuming the precondition true and searching for a value of y that reaches the loop. One possibility is y = 0. We then compute the invariant under the precondition candidate y = 0 and get x = 0. Obviously, we cannot find a termination argument in this case. Hence, we block this candidate and search for values of y satisfying y ≠ 0. This constraint is, for instance, satisfied by y = 1. This time we get the invariant 0 ≤ x ≤ 10 and the ranking function r(x) = 10 − x. Thus, we have to solve

for an over-approximating precondition for non-termination over the interval template on y. In this case, this precondition turns out to be y = 0, and therefore its negation y ≥ 1 is the under-approximating precondition that we get. Finally, we have to check for further precondition candidates outside y ≥ 1 and the blocked candidate y = 0, but this search is obviously UNSAT. Hence, we return the sufficient precondition for termination y ≥ 1.

IV-C Conditional Termination

We now extend the formalisation to Algorithm 1, which additionally requires the computation of under-approximating calling contexts and sufficient preconditions for termination (see Alg. 3).

First, Algorithm 3 computes an over-approximating invariant entailed by the candidate precondition. This invariant is computed through Def. 5 by conjoining the candidate precondition to the antecedent. Then, the corresponding termination argument is computed by applying Lemma 3 using this invariant instead of the forward invariant. Since the termination argument is under-approximating, we are sure that the procedure terminates for this candidate precondition if a termination argument is found.

Remark 2.

The available under-approximate information , where