Deriving approximation tolerance constraints from verification runs††thanks: This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Centre “On-The-Fly Computing” (SFB 901).
Approximate computing (AC) is an emerging paradigm for energy-efficient computation. The basic idea of AC is to sacrifice high precision for low energy by allowing for hardware which only carries out "approximately correct" calculations. For software verification, this challenges the validity of verification results for programs run on approximate hardware.
In this paper, we present a novel approach to examine program correctness in the context of approximate computing. In contrast to all existing approaches, we start with a standard program verification and compute the allowed tolerances for AC hardware from that verification run. More precisely, we derive a set of constraints which – when met by the AC hardware – guarantees the verification result to carry over to AC. Our approach is based on the framework of abstract interpretation. On the practical side, we furthermore (1) show how to extract tolerance constraints from verification runs employing predicate abstraction as an instance of abstract interpretation, and (2) show how to check such constraints on hardware designs. We exemplify our technique on example C programs and a number of recently proposed approximate adders.
Approximate computing (AC) [21, 16] is a new computing paradigm which aims at reducing energy consumption at the cost of computation precision. A number of application domains can tolerate AC because they are inherently resilient to imprecision (e.g., machine learning, big data analytics, image processing, speech recognition). Computation precision can be reduced either by directly manipulating program executions on the algorithmic level (e.g., by loop perforation) or by employing approximate hardware for program execution. Approximation on the level of hardware can be achieved by techniques like voltage overscaling or by directly building imprecise hardware designs with less chip area. The approximate adders which we will later use employ the latter technique, and simply have limited carry propagation.
For software verification, the use of approximate hardware challenges soundness and raises the question of whether the achieved verification result will really be valid when the program is executed. So far, work on correctness in the context of approximate computing has either studied quantitative reliability, i.e., the probability that outputs of functions have correct values [11, 24] (employed for the language Rely), or differences between approximate and precise executions [22, 14] (applying differential program verification). Alternatively, some approaches plainly use types and type checking to separate the program into precise and approximate parts (language EnerJ). All of these techniques take a hardware-centric approach: take the (non-)guarantees of the hardware, and develop new analysis methods working under such weak guarantees. The opposite direction, namely to use standard program analysis procedures and have the verification impose constraints on the allowed approximation, has not been studied so far. This is despite the fact that such an approach directly allows re-use of existing verification technology, both for program verification and for checking the constraints on the approximate hardware. Another advantage of this approach is that the imposed constraints can be checked on multiple hardware designs, as we did in our examples.
In this paper, we propose a new strategy for making software verification reliable for approximate computing. Within the broad spectrum of AC techniques, we focus on deterministic approximate designs, i.e., approximate hardware designs with deterministic truth tables. We start with a verification run proving safety of a program. From now on, we assume that the safety property is encoded with assertions or specific error labels ERR. With a proper instrumentation of the program, various properties can be encoded in such a way. In Section 4 we describe, as an example, how to encode termination proofs.
Our approach derives from that verification run requirements on the hardware executing the program. We call such requirements tolerance constraints. A tolerance constraint acts like a pre/postcondition pair and describes the expected output of a hardware design when supplied with specified inputs. The derived tolerance constraints capture the assumptions the verification run has made on the executing hardware. Thus, they are specific to the program and safety property under consideration. Tolerance constraints refer to program statements, e.g., statements using addition as operation. Typically, tolerance constraints are much less restrictive than the precise truth table of a hardware operation would dictate.
To instantiate this general idea, we had to select the underlying verification technique. We discuss the alternatives in Section 7, after we have presented our concrete instantiation. In the following, we formulate the derivation of tolerance constraints within the framework of abstract interpretation, thus making the technique applicable to all abstract-interpretation-based program analyses. We prove soundness of our technique by showing that a program which has been proven correct will also run correctly on AC when the employed approximate hardware satisfies the derived tolerance constraints.
To see our technique in practice, we instantiate the general framework based on abstract interpretation with predicate abstraction [15, 3]. In this case, tolerance constraints are pairs of predicates on inputs and expected outputs of a hardware operation. As a first example, take a look at the left program in Figure 1. It writes to an array within a for-loop. The property to be checked (encoded as an error state ERR) is an array-index-inside-bounds check. Using x and y as inputs and z as output of the adder, the tolerance constraint on addition (+) derived from a verification run showing correctness is the pair of pre- and postcondition (0 <= x <= 989 and y = 10, 0 <= z <= 999). It states that the hardware adder should guarantee that adding 10 to a value in between 0 and 989 never brings us outside the range [0, 999], and thus the program never crashes with an index-out-of-bounds exception.
Using the analysis tool CPAchecker for verification runs, we implemented the extraction of tolerance constraints from abstract reachability graphs constructed during verification. The constraints are output in SMT-Lib format. To complete the picture, we have furthermore implemented a procedure for tolerance checking on hardware designs. This technique constructs a specific checker circuit out of a given hardware design (in Verilog) and a tolerance constraint. We have evaluated our overall approach on example C programs, e.g., taken from the software verification competition benchmark, using as AC hardware different approximate adders from the literature (Verilog designs taken from the accompanying website). During evaluation, we examined whether a program which uses an approximate adder still terminates, adheres to a protocol, or remains memory safe. Additionally, we looked at certain properties of additions like monotonicity to capture the different behavior of precise and approximate adders.
We start by formally defining the syntax and semantics of programs, and by introducing the framework of abstract interpretation .
For our formal framework, we assume programs to have integer variables only (for the practical evaluation, however, we allow for arbitrary C programs). We fix a set of binary operators on integers, the integer constants, and a set of comparison operators on integers.
Programs use variables out of a set of variables, and have two sorts of statements: (1) conditionals assume b, where b is a boolean condition over the variables formed using the comparison operators, and (2) assignments v:=expr, where expr is an expression over variables and constants formed with the binary operators. Formally, programs are given by control flow automata.
A control flow automaton (CFA) consists of a finite set of locations, an initial location, a set of edges, and a set of error locations.
Note that we mark the error locations in programs with the label
ERR (or similar). A concrete state of a program is a mapping from variables to integer values. For a state, we define a state update with respect to a variable and a value in the usual way. For a state s and a boolean condition b, we write s |= b to state that b is true in s. A configuration of a program is a pair of a location and a state.
The semantics of program statements is given by (partial) next transformers: an assume statement acts as a filter, defined only on states satisfying its condition, and an assignment updates the state accordingly. We lift the next transformers to sets of states elementwise; note that the lifted functions are total. The next transformers together with the control flow determine the transition system of a program.
The concrete transition system of a CFA consists of
a set of configurations ,
an initial configuration where for all ,
a transition relation with if and .
An error location is reachable in if there is a path from to a configuration with . If no error location is reachable, we say that the transition system is free of errors.
For verifying that a program is free of errors, we use the framework of abstract interpretation (AI) . Thus we assume that the verification run from which we derive tolerance constraints is carried out by an analysis tool employing abstract interpretation as basic verification technology.
Instead of concrete states, instances of AI frameworks employ abstract domains and execute abstract versions of the next transformers on them. An abstract domain is equipped with an ordering and has to form a complete lattice (as does the powerset of concrete states). To relate abstract and concrete domain, two monotonic functions are used: an abstraction function alpha and a concretisation function gamma. The pair (alpha, gamma) has to form a Galois connection, i.e., S is contained in gamma(alpha(S)) for every set of concrete states S, and alpha(gamma(a)) is below a for every abstract value a. We require the least element of the abstract lattice (which we denote by bottom) to be mapped onto the least element of the powerset lattice, the empty set.
On the abstract domain, the AI instance defines a total abstract next transformer . To be useful for verification, the abstract transformer has to faithfully reflect the behaviour of the concrete transformer.
An abstract next transformer is a safe approximation of the concrete next transformer if, for every statement and abstract value a, applying the lifted concrete transformer to gamma(a) yields a subset of the concretisation of the abstract successor of a.
Using the abstract next transformer, we can construct an abstract transition system of a program.
The abstract transition system of a program with respect to an abstract domain and functions consists of
a set of configurations ,
an initial configuration where with for all ,
a transition relation with if and .
An abstract configuration is reachable in the abstract transition system if there is a path from the initial configuration to it. We denote the set of reachable configurations by Reach. An error location is reachable if there is a path to a configuration at an error location whose abstract state differs from bottom. Note that we allow paths to configurations with abstract state bottom, since bottom represents the empty set of concrete states, and thus does not stand for a concretely reachable error.
The abstract transition system can be used for checking properties of the concrete program whenever the abstract transformers are safe approximations.
Let a CFA be given together with its concrete and its abstract transition system according to some abstract domain and safe abstract next transformer. Then the following holds: if the abstract transition system is free of errors, so is the concrete one.
3 Tolerance Constraints
The framework of abstract interpretation is used to verify that a program is free of errors. To this end, the abstract transition system is built and inspected for reachable error locations. However, the construction of the abstract transition system, and thus the soundness of verification, relies on the abstract transformer safely approximating the concrete transformer; in particular, this means that we verify properties of program executions which use the concrete transformers for next-state computation. This assumption no longer holds when we run our programs on approximate hardware.
For a setting with approximate hardware, we have approximate next transformers for some or all of our statements. The key question is now the following: under which conditions on these approximate transformers will our verification result carry over to the AC setting? To this end, we need to find out what "properties" of a statement the verification run has actually used. This can be seen in the abstract transition system by looking at the transitions labelled with a specific statement, and extracting the abstract states before and after this statement. A tolerance constraint for a statement includes all such pairs of abstract states, specifying a number of pre- and postconditions for the statement.
Let be an abstract transition system of a program , a statement. Let be the family of transitions in .
The tolerance constraint for in is the family of pairs of abstract states .
While the concrete transformers, by safe approximation, fulfill all these constraints, the approximate transformers may or may not adhere to them.
A next transformer fulfills a tolerance constraint if the following property holds for all :
When programs are run on approximate hardware, the execution will use some approximate and some precise next transformers depending on the actual hardware. For instance, the execution might employ an approximate adder, and thus all statements using addition will be approximate. We let be the transition system of program constructed by using for the approximate statements and standard concrete transformers for the rest. This lets us now formulate our main theorem about the validity of verification results on AC hardware.
Let a program be given and let a next transformer for an approximate statement fulfill the tolerance constraint derived from an abstract transition system wrt. some abstract domain and safe abstract next transformers. Then we get: if the abstract transition system is free of errors, so is the transition system using the approximate transformer.
Proof: Let be the tolerance constraint for in . Assume the contrary, i.e., there is a path to an error location in : such that . We show by induction that there exists a path in the abstract transition system such that .
- Induction base.
since and by Galois connection properties.
- Induction step.
Let , and . Let (hence ). Now we need to consider two cases:
Case (1): : Then the next transformer applied to reach the next configuration is the standard transformer. Thus let . By safe approximation of , we get . By monotonicity of : . By Galois connection: . Hence, by transitivity .
Case (2): : Let . By definition of tolerance constraint extraction, the pair has to be in the family of tolerance constraints, i.e., . Since fulfills the constraint, .
4 Preserving Termination
So far, we have been interested in the preservation of already proven safety properties on approximate hardware. Another important issue is the preservation of termination: whenever we have managed to show that a program terminates on precise hardware, we would also like to get constraints that guarantee termination on AC hardware. In order to extend our approach to termination, we make use of an approach for encoding termination proofs as safety properties .
We start by explaining standard termination proofs. Nontermination arises when we have loops in programs and the loop condition never becomes false in a program execution. In control flow automata, a loop is a sequence of locations such that there are statements , , with and . Any such location is said to be on the loop. Every well-structured loop has a condition and a loop body: the start of a loop body is a location such that there are locations and a boolean condition such that and are in the CFA and is on a loop, but either is not on a loop, or is on a different loop. Basically, we just consider CFAs of programs constructed with while or for constructs, not with gotos or recursion. However, the latter is also possible when the verification technique used for proving termination covers such programs.
A non-terminating run of a CFA is an infinite sequence of configurations and statements in its transition system. If a CFA has no non-terminating runs, then it terminates.
In every non-terminating run, at least one loop start occurs infinitely often.
We assume some standard technique to be employed for proving termination. Such techniques typically consist of (a) the synthesis of a termination argument, and (b) the check of validity of this termination argument. Termination arguments are either given as monolithic ranking functions or as disjunctively well-founded transition invariants. Here, we will describe the technique for monolithic ranking functions.
(1) For every loop, define a ranking function on the program variables, mapping into a set with a well-founded order and a least element.
(2) Show the ranking function to decrease with every loop execution.
(3) Show the ranking function to be greater than or equal to the least element at every loop start.
If properties (2) and (3) hold, we say that the ranking function is valid. Note that we are not interested in computing ranking functions here; we just want to make use of existing verification techniques. The following proposition states a standard result for ranking functions (see e.g. [23, 20]).
If every loop of a program has a valid ranking function, then the program terminates.
As an example, consider the program on the left of Figure 2. It computes the sum of all numbers from 0 up to some constant N. It terminates since the loop variable i is constantly increased. As ranking function we can take the difference between N and i, using the well-founded ordering on the natural numbers.
In order to encode the above technique in terms of assertions, we instrument a program along the lines used in the tool Terminator, thereby obtaining an instrumented program as follows. Let x1, ..., xn be the variables occurring in the program. At starts of loop bodies we insert
if (!(f_l(x1, ..., xn) >= bot_W)) ERR: old_x1 := x1; ... old_xn := xn;
and at loop ends we insert
if (!(f_l(x1, ..., xn) <_W f_l(old_x1, ..., old_xn))) ERR:
when given a ranking function f_l and a well-founded ordering <_W with bottom element bot_W.
If the instrumented program is free of errors, then the original program terminates.
Hence we can use standard safety proving for termination as well (once we have a ranking function), and thereby derive tolerance constraints. On the right of Figure 2 we see the instrumented version of the program. Here, we have already applied an optimization: we only make a copy of variable i, since the ranking function refers only to i and N, and N does not change anyway.
5 Constraint Extraction for Predicate Analysis
Section 3 has formally defined the extraction of tolerance constraints from abstract transition systems and has proven its soundness. Now, we will take a closer look at constraint extraction in practice. To this end, we choose an instance of the abstract interpretation framework, namely predicate abstraction [15, 3]. Furthermore, instead of deriving constraints for statements, we derive constraints for operators since in practice we do not have specific hardware for whole statements but just for the operations used in expressions within a statement.
We start with defining predicate abstraction. For this, we fix a set of predicates over the program variables. In practice, these predicates will be incrementally computed by a counterexample-guided abstraction refinement approach, which we just assume to exist (and which is provided by the tool that we employ for our experiments). We let the abstract domain be conjunctions of predicates or their negations (also directly written as sets of literals, where the empty conjunction denotes true and an inconsistent conjunction denotes false):
The Galois connection is given in the standard way: the abstraction maps a set of states to the strongest conjunction of literals holding on all of them, and the concretisation maps a conjunction to the set of states satisfying it. Abstract elements are ordered by implication. For the definition of the abstract next transformers, see for instance the literature on predicate abstraction. Note that tolerance constraints in this domain take the form of pairs of such predicate conjunctions.
This abstract domain can be used to show the program from Figure 1 to be free of errors. Figure 3 shows the abstract transition system of the program using a suitable predicate set. The predicates holding in an abstract configuration, i.e., its abstract state, are written next to the purple location. We see that the location labeled ERR occurs in the graph, but the abstract state in this configuration is false, and, thus, we say that this error is not reachable.
For the extraction of tolerance constraints for operators, we assume our statements to take the form of three-address code (3AC). In three-address code form, all operators occur in programs only in statements of the form v := v1 op v2, where v1 and v2 are variables or constants. Every program can be brought into such a form (e.g., intermediate representations generated during compilation take this form). We use this 3AC form because we need to isolate operators, and only have statements with one (possibly approximate) operator in them. Note that the program of Figure 1 is in 3AC form.
Furthermore, the tolerance constraints, i.e., pre- and postcondition predicates, derived from abstract transition systems are specified over the program variables. As an example, take the operator +. In the program, this operator occurs in the statement j:=j+10. The tolerance constraint for this statement derived from the abstract transition system in Figure 3 is the pair (0 <= j <= 989, 0 <= j <= 999). This constraint refers to the program variable j. If the approximate adder used for + has inputs x and y and output z, this constraint first of all needs to be brought into a form using the variables x, y and z. This is achieved using the following replacement operator.
Let . The predicate is obtained from by replacing all occurrences of by . We lift this to sets by letting . For constants , we define .
For all such that :
For the constraint above, statement j:=j+10, and an adder with inputs x and y and output z, the replacement maps j in the precondition to x, the constant 10 to y (introducing the conjunct y = 10), and j in the postcondition to z. This is the constraint which ultimately needs to be checked for the approximate hardware.
In the following we assume all binary operators to take two integer inputs and produce one integer output, assume x, y and z not to occur as variables in the program nor in the predicates, and use the prefix "mapped" to refer to the constraints obtained after the replacement.
An approximate operator adheres to a tolerance constraint (i.e., a mapped constraint over x, y and z) if, for all inputs satisfying the precondition, its output satisfies the postcondition.
Adherence to constraints by operators implies adherence to constraints by statements using these operators.
Let a tolerance constraint be extracted for a statement using an approximate operator. If the operator adheres to the mapped constraint, then the statement transformer using it adheres to the statement constraint.
Proof: We need to show that the statement transformer adheres to the statement constraint. We first take the definition and rewrite it slightly.
The last implication is now shown as follows:
Let an abstract transition system be constructed using safe approximations and let all approximate operators adhere to the constraints derived from it. Then: if the abstract transition system is free of errors, so is the program executed with the approximate operators.
As proof of concept, we integrated our proposed constraint extraction into the software analysis tool CPAchecker, a tool for C program analysis which is configurable for abstract-interpretation-based analyses. Mainly, we added a constraint extraction algorithm plus some additional helper classes. Our constraint extraction algorithm builds on top of CPAchecker's predicate analysis, which uses the technique of adjustable block encoding, a technique which allows specifying at which locations an abstraction should be computed. For our extraction we need to make sure that we have an abstract state immediately before and after each statement which uses the operation of interest (the operation of interest is made configurable in CPAchecker). To identify these abstraction points and later the tolerance constraints, we first need to identify the statements using the operation. Afterwards, we run CPAchecker's standard predicate analysis, which provides us with an abstract reachability graph (ARG), a structure similar to the abstract transition system. In the ARG, the predicates are given in SMT-Lib format since CPAchecker uses state-of-the-art SMT solvers for predicate analysis. From the ARG, we extract the tolerance constraints and write one SMT file per constraint, which is in the input format required by our next tool building the hardware checker. The SMT file mainly contains the description of the pre/postcondition pairs plus additional information about the signature of the statement for which the constraint was extracted. The signature is needed by the next tool to construct the checker.
To run the tolerance constraint extraction within CPAchecker, one can use the configuration file predicateAnalysis-ToleranceConstraintsExtraction-PLUS.properties that we used in our evaluation to extract tolerance constraints for additions.
6 Constraint Checking
The final step of our technique is the check of the extracted constraints on actual hardware designs of approximate operations. For simplicity of presentation, we restrict the following explanations to the case of a single constraint (a generalization to a family of constraints is straightforward). The input to the checking phase thus consists of a constraint, an approximate operator, and the corresponding program statement. The checking of the constraint on a given hardware design with inputs x and y and output z (in our case specified in Verilog) proceeds in three steps:
The mapped tolerance constraint is constructed. As a result, the tolerance constraint uses the variables x, y and z when referring to the inputs and output of the design. Additional variables of the program (besides those mapped to x and y) may still occur in the constraint which are not used in the hardware design. We denote these variables as side variables.
The mapped constraint is transformed into Verilog code giving a checker circuit. The checker circuit is created as Verilog code in two steps. First, the logical formulae of the tolerance constraint are compiled to Verilog code. In this, side variables are treated like other inputs. We then fix a single output of the checker, the error flag, by setting it to the conjunction of the precondition and the negated postcondition.
The generated tolerance constraint checker is afterwards combined with the hardware design of into an adherence checker. For our examples, the AC hardware designs are also given in Verilog. The combination is done using a top module that contains and wires the design of and the tolerance checker as sub-modules. The wiring is done as depicted in Figure 4.
The resulting circuit is afterwards checked for safety, i.e., that for no combination of values on the primary inputs the error flag is raised. This step can be done using standard hardware verification techniques (unsatisfiability checking).
As an example, consider again the program given on the left side of Figure 1. The tolerance constraint extracted for operator + is the pair (Q_1, Q_2) given below, and the program statement is j:=j+10. In SMT-Lib format, the constraint is
(define-fun Q_1 () Bool (and (<= 0 |main::j|) (<= |main::j| 989)))
(define-fun Q_2 () Bool (and (<= 0 |main::j@1|) (<= |main::j@1| 999)))
The structural mapping of the variables is represented as |main::j| to x, the constant 10 to y, and |main::j@1| to z. As a result, the mapped constraint can be represented as follows.
(define-fun mappedQ_1 () Bool (and (and (<= 0 x) (<= x 989)) (= y 10)))
(define-fun mappedQ_2 () Bool (and (<= 0 z) (<= z 999)))
Figure 5 gives the Verilog code of the checker circuit belonging to this mapped constraint. Note that the lengths of the input vectors have to be adapted to fit those provided by the hardware design.
In our experiments, we used the software analysis tool CPAchecker to extract the tolerance constraints from a verification run. We employed the tools Yosys and ABC for synthesis and generation of a CNF formula that encodes the value of the error flag in dependence on all the inputs. Using PicoSAT, we checked unsatisfiability of the formula, which denotes that the error flag is never raised, i.e., that the tolerance constraints are met by the implementation.
In the following, we give the results of our experiments. In our experiments we studied tolerance constraints for addition (since this is the only operation for which approximate hardware is currently publicly available). While it is often accepted that in approximate computing a computation result is not functionally equivalent to a precise computation result, an approximate computation must still be well-behaved. For example, memory accesses should remain safe, the program should still terminate, or it should stick to a certain protocol. That is why during program verification we considered one of these properties instead of functional behavior.
We extracted tolerance constraints from the verification of a number of handcrafted programs (including our three examples) and some programs from the subcategory ProductLines of the SV-COMP (some additions first had to be brought into three-address code form, and in some programs we replaced some constant assignments by proper additions).
We chose our programs to get tolerance constraints from a variety of verification problems, and are very well aware that these programs are not typical candidates for approximate computing.
The handcrafted programs AddOne, EvenSum, and MonotonicAdd should examine the addition of positive numbers.
Programs sum, quotient, and mirror_matrix use the previously described technique to encode termination proofs with assertions.
To artificially enforce a difference between the behavior of the approximate adders, we used program SpecificAdd, which checks that the addition of 30 and 50 is indeed 80.
The programs from the SV-COMP (the last 10 programs shown in Table 1) check protocol properties, e.g. correct locking behavior.
We checked the tolerance constraints on a standard, non-approximate ripple carry adder (RCA) and a set of approximate adders provided by the Karlsruhe library (called ACA-I, ACA-II (ACA_II_N16_Q4), ETAII, GDA and GeAr). Table 1 shows our results. For each program, we show the number of additions, the number of program statements, the number of constraints extracted, and whether an adder meets the tolerance constraints or not.
Our first observation is that, except for program SpecificAdd (which we created to show a different behavior between the approximate adders), either all approximate adders meet the extracted tolerance constraints or none of them do. This is because all approximate adders use the same principle: reduction of the carry chain. In their addition, they use a set of subadders, and the carry bit of the previous subadder is either dropped or imprecisely predicted. The effect of this reduction only shows up for specific numbers, and these specific numbers differ among the approximate adders. Hence, adding 30 and 50 failed only in the approximate adder ACA-II.
Interestingly, the approximate adders meet the extracted tolerance constraints for all of the SV-COMP programs. On the one hand, not all additions in the programs have an effect on the correctness of the program (and thus verification imposes no constraints on them). On the other hand, the additions considered during verification which did have an effect typically increase a variable value in a small range by one, which can be computed precisely by the first subadder of all approximate adders.
For our own programs, one can see that all sorts of cases occur: all approximate adders satisfy the extracted constraints, some do and some do not (program SpecificAdd), or none do. An instance of the latter case is our example program from the right of Figure 1. The variable which is increased by 1 can be any positive integer (it is an input). The derived constraint for the + operator captures exactly this requirement. For our verification of the property, we require that the increase of that variable does not result in value zero, which can be the case if the carry propagation is imprecise. Thus, here the approximate designs fail to satisfy the constraint. Hence, an execution of the program on approximate hardware with these adders could reach the error state. The imprecise carry propagation is also the reason why the approximate adders cannot guarantee termination of programs sum, quotient, and mirror_matrix. For termination, all three programs rely on an addition which is strongly monotonic up to a certain threshold (the maximal int value). However, due to the imprecise carry propagation, an addition of two positive integers may result in value zero.
To compute requirements on AC hardware with the help of program verification, further approaches are conceivable. For example, one could model the approximate operation as a function call. This means that the approximate operations in a program, e.g., the approximate addition, must be replaced by a call to a corresponding function. Then one applies a verification technique which computes function summaries. The function summary for the approximate operation, in principle a description of a pre-/postcondition pair, gives us the constraint on the AC hardware.
An alternative would likewise model the approximate operation as a function call, but assume the behavior of that function to be unknown. In this case, one may use a technique like , which tries to generate the weakest specification for the function that still ensures program correctness w.r.t. the desired property. This specification, again in principle an encoding of a pre-/postcondition pair, describes the requirement on the AC hardware.
We are confident that both alternatives could be combined with our general approximation tolerance constraints approach: the function summary or the inferred specification would have to be transformed into a tolerance constraint checker. This seems feasible because  and  already appear to use logic formulae to express the function summary and the specification.
For this paper, we decided to use abstract interpretation as a first instance for generating tolerance constraints for AC hardware. A disadvantage of abstract interpretation is that we may obtain multiple constraints, whereas the two alternatives each generate a single one. In practice, we address this by conjoining the multiple constraints into a single constraint during the generation of the tolerance constraint checker. On the other hand, we neither need to transform the approximate operations into function calls, and the generation of three-address code is standard in compilers. Moreover, we are already familiar with abstract interpretation, and the verification tool CPAchecker, which we typically use for verification, is based on abstract interpretation and analyzes functions via inlining.
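The conjunction step can be sketched as follows (the concrete constraints are illustrative, not the ones our tool derives): each derived constraint becomes a predicate over the adder implementation, and the generated checker is simply their conjunction.

```c
#include <stdint.h>

typedef uint8_t (*adder_fn)(uint8_t, uint8_t);

/* An exact 8-bit adder, used below as the implementation under check. */
static uint8_t exact_add8(uint8_t a, uint8_t b) { return (uint8_t)(a + b); }

/* Illustrative constraint 1: x > 0 implies add(x, 1) != 0. */
static int c1(adder_fn add)
{
    for (int x = 1; x < 255; x++)
        if (add((uint8_t)x, 1) == 0) return 0;
    return 1;
}

/* Illustrative constraint 2: adding zero is the identity. */
static int c2(adder_fn add)
{
    for (int x = 0; x < 256; x++)
        if (add((uint8_t)x, 0) != (uint8_t)x) return 0;
    return 1;
}

/* The generated checker conjoins all derived constraints. */
int tolerance_check(adder_fn add) { return c1(add) && c2(add); }
```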
In this paper, we have proposed a new way of making software verification robust against approximate hardware. Its basic principle is the derivation of constraints on AC hardware from verification runs. We have shown our technique to be sound, i.e., we have shown that the verification result carries over to a setting with AC hardware whenever the hardware satisfies the derived constraints. First experimental results have shown that the verification result often, but not always, carries over. More experiments will, however, be necessary once further AC implementations of operations besides approximate adders become available.
-  Aho, A.V., Sethi, R., Ullman, J.D.: Compilers: Principles, Techniques, and Tools. Addison-Wesley (1986)
-  Albarghouthi, A., Dillig, I., Gurfinkel, A.: Maximal specification synthesis. In: POPL. pp. 789–801. ACM (2016)
-  Ball, T., Podelski, A., Rajamani, S.K.: Boolean and cartesian abstraction for model checking C programs. STTT 5(1), 49–58 (2003)
-  Barrett, C., Fontaine, P., Tinelli, C.: The SMT-LIB Standard: Version 2.5. Tech. rep., Department of Computer Science, The University of Iowa (2015), available at http://www.SMT-LIB.org
-  Berkeley Logic Synthesis and Verification Group: ABC: A system for sequential synthesis and verification (2005)
-  Beyer, D.: Software verification and verifiable witnesses. In: Baier, C., Tinelli, C. (eds.) TACAS, LNCS, vol. 9035, pp. 401–416. Springer Berlin Heidelberg (2015)
-  Beyer, D., Keremoglu, M.E., Wendler, P.: Predicate abstraction with adjustable-block encoding. In: FMCAD. pp. 189–198. FMCAD Inc (2010)
-  Beyer, D., Keremoglu, M.: CPAchecker: A Tool for Configurable Software Verification. In: CAV, pp. 184–190. LNCS, Springer (2011)
-  Biere, A.: PicoSAT. http://fmv.jku.at/picosat (2013)
-  Carbin, M., Kim, D., Misailovic, S., Rinard, M.C.: Verified integrity properties for safe approximate program transformations. In: Albert, E., Mu, S. (eds.) Workshop on Partial Evaluation and Program Manipulation. pp. 63–66. ACM (2013)
-  Carbin, M., Misailovic, S., Rinard, M.C.: Verifying quantitative reliability for programs that execute on unreliable hardware. In: Hosking, A.L., Eugster, P.T., Lopes, C.V. (eds.) OOPSLA. pp. 33–52. ACM (2013)
-  Cook, B., Podelski, A., Rybalchenko, A.: Termination proofs for systems code. In: Schwartzbach, M.I., Ball, T. (eds.) PLDI. pp. 415–426. ACM (2006)
-  Cousot, P., Cousot, R.: Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Graham, R.M., Harrison, M.A., Sethi, R. (eds.) POPL. ACM (1977)
-  Gopalakrishnan, G., Haran, A., Lahiri, S., Rakamaric, Z.: Automated differential program verification for approximate computing, unpublished
-  Graf, S., Saidi, H.: Construction of abstract state graphs with PVS. In: Grumberg, O. (ed.) CAV, LNCS, vol. 1254, pp. 72–83. Springer Berlin Heidelberg (1997)
-  Han, J., Orshansky, M.: Approximate computing: An emerging paradigm for energy-efficient design. In: 18th IEEE European Test Symposium. pp. 1–6. IEEE Computer Society (2013)
-  Henzinger, T.A., Jhala, R., Majumdar, R., McMillan, K.L.: Abstractions from proofs. In: Jones, N.D., Leroy, X. (eds.) POPL. pp. 232–244. ACM (2004)
-  Hoare, C.A.R.: Procedures and parameters: An axiomatic approach. In: Engeler, E. (ed.) Symposium on Semantics of Algorithmic Languages. pp. 102–116. Springer Berlin Heidelberg, Berlin, Heidelberg (1971)
-  Kahng, A.B., Kang, S.: Accuracy-configurable adder for approximate arithmetic designs. In: DAC. pp. 820–825. ACM (2012)
-  Apt, K.R., de Boer, F.S., Olderog, E.R.: Verification of Sequential and Concurrent Programs. Springer, London (2009)
-  Kugler, L.: Is “good enough” computing good enough? Commun. ACM 58(5), 12–14 (2015)
-  Lahiri, S.K., Rakamarić, Z.: Towards automated differential program verification for approximate computing. In: 2015 Workshop on Approximate Computing Across the Stack (WAX) (2015), http://sampa.cs.washington.edu/wax2015/papers/lahiri.pdf
-  Manna, Z., Pnueli, A.: Temporal verification of reactive systems: Progress (1996), draft
-  Misailovic, S., Carbin, M., Achour, S., Qi, Z., Rinard, M.C.: Chisel: reliability- and accuracy-aware optimization of approximate computational kernels. In: Black, A.P., Millstein, T.D. (eds.) OOPSLA. pp. 309–328. ACM (2014)
-  Pauck, F.: Generierung von Eigenschaftsprüfern in einem Hardware/Software-Co-Verifikationsverfahren. Bachelor's thesis, Paderborn University (2014)
-  Podelski, A., Rybalchenko, A.: Transition invariants. In: LICS. pp. 32–41. IEEE Computer Society (2004)
-  Sampson, A., Dietl, W., Fortuna, E., Gnanapragasam, D., Ceze, L., Grossman, D.: EnerJ: approximate data types for safe and general low-power computation. In: Hall, M.W., Padua, D.A. (eds.) PLDI. pp. 164–174. ACM (2011)
-  Sery, O., Fedyukovich, G., Sharygina, N.: Interpolation-based function summaries in bounded model checking. In: HVC. pp. 160–175 (2011)
-  Shafique, M., Ahmad, W., Hafiz, R., Henkel, J.: A low latency generic accuracy configurable adder. In: DAC. pp. 86:1–86:6. ACM (2015)
-  Verma, A.K., Brisk, P., Ienne, P.: Variable latency speculative addition: A new paradigm for arithmetic circuit design. In: DATE. pp. 1250–1255. ACM (2008)
-  Wolf, C.: Yosys open synthesis suite. http://www.clifford.at/yosys/
-  Ye, R., Wang, T., Yuan, F., Kumar, R., Xu, Q.: On reconfiguration-oriented approximate adder design and its application. In: ICCAD. pp. 48–54. IEEE Press (2013)
-  Zhu, N., Goh, W.L., Yeo, K.S.: An enhanced low-power high-speed adder for error-tolerant application. In: International Symposium on Integrated Circuits. pp. 69–72. IEEE (2009)