We present the design, implementation, and foundation of a verifier
for higher-order functional programs with generics and recursive
data types. Our system supports proving safety and termination
using preconditions, postconditions and assertions. It supports writing proof hints
using assertions and recursive calls. To formalize the soundness of the system we introduce
System FR, a calculus supporting System F polymorphism, dependent refinement
types, and recursive types (including recursion through contravariant positions of
function types). Through the
use of sized types, System FR supports reasoning about termination of lazy data
structures such as streams. We formalize a reducibility argument using the Coq
proof assistant and prove the soundness of a type-checker with
respect to call-by-value semantics, ensuring type safety and normalization for
typeable programs.
Our program verifier is implemented as an alternative verification-condition
generator for the Stainless tool, which relies on an existing SMT-based
solver backend for automation.
We demonstrate the efficiency of our approach by verifying a
collection of higher-order functional programs comprising around 14000 lines of
polymorphic higher-order Scala code, including graph search algorithms, basic
number theory, monad laws, functional data structures, and assignments from
popular Functional Programming MOOCs.
Automatically verifying the correctness of higher-order
programs is a challenging problem that applies to most
modern programming languages and proof assistants. Despite
extensive research in program
verifiers (Nipkow
et al., 2002a; Bertot and
Castéran, 2004a; Abel, 2010; Norell, 2007; Brady, 2013; Vazou et al., 2014; Swamy et al., 2013; Leino, 2010)
there remain significant challenges and trade-offs in
checking safety and termination.
A motivation for our work is the family of implementations
that verify polymorphic functional programs using SMT solvers (Suter
et al., 2011; Vazou et al., 2014).
To focus on foundations, we look at simpler verifiers that do not perform invariant
inference and are mostly based on unfolding recursive
definitions and encoding higher-order functions into SMT theories
(Suter
et al., 2011; Voirol
et al., 2015).
A recent implementation of such a verifier
is the Stainless system (https://github.com/epfl-lara/stainless),
which handles a subset of
Scala (Odersky
et al., 2008).
The goal of Stainless is to verify that function contracts hold and that all functions terminate.
Unfortunately, to the best of our knowledge, its termination checking procedure is not documented,
and even its soundness can be doubted.
Researchers have shown (Hupel and Kuncak, 2016) how to map certain patterns of specified Scala programs into Isabelle/HOL
for verification,
but the link-up imposes a number of restrictions on
data type definitions and can certify only a fraction of the programs that
the original verifier can prove. This paper
seeks foundations for verification and termination checking of functional
programs with such a rich set of features.
Termination is desirable for many executable functions in
programs and is even more important in formal specifications.
A non-terminating function definition such as f(x)=1+f(x) could be
easily mapped to a contradiction and violate the conservative
extension principle for definitions. Yet termination in the presence of higher-order
functions and data types is challenging to ensure. For example,
when using non-monotonic recursive types, terms can diverge
even without the explicit use of recursive functions, as
illustrated by the following snippet of
Scala code:
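The snippet itself is not reproduced in this text; the following sketch (ours, in plain Scala, so names may differ from the original figure) shows the kind of program meant, where a class that is recursive through a contravariant function field lets a non-recursive definition diverge:

    case class D(f: D => Int)      // recursive type: D occurs to the left of an arrow
    def app(d: D): Int = d.f(d)    // not a recursive function: app never mentions itself
    // app(D(app)) reduces to D(app).f(D(app)), i.e. to app(D(app)) again, so it diverges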
Furthermore, even though the concept of termination for all function inputs is an intuitively clear
property, its modular definition is subtle:
a higher-order function G taking another function f as an argument
should terminate when given any terminating function f, which in
turn can be applied to expressions involving further calls to G.
We were thus led to
type-theoretic
techniques, where the reducibility method
has long been used
to show strong normalization of expressive calculi (Tait, 1967), (Girard, 1990, Chapter 6), (Harper, 2016).
As a natural framework
for analyzing support for first-class functions with preconditions and
postconditions, we embraced dependent refinement types
similar to those in Liquid Haskell (Vazou et al., 2014), together with
a refinement-based notion of subtyping.
To explain verification condition generation in the higher-order case (including the question of
which assumptions should be visible for a given assertion),
we resorted to the well-known dependent (Π) function types.
To support polymorphism we
incorporated type quantifiers, as in System F (Girard, 1971, 1990).
We found that the presence of refinement types allowed us to explain
the soundness of well-founded recursion based on user-defined measures.
To provide expressive support for iterative unfolding of recursive functions,
we introduced rules that make function bodies available while type checking recursive functions.
For recursive types, many existing
systems introduce separate notions of inductive and
co-inductive definitions. We found this distinction
less natural for developers and chose to support expressive recursive types
(without a necessary restriction to positive recursion)
using sized types (Abel, 2010).
We draw inspiration from a number of existing systems, yet our solution
has a new combination of features that work nicely together. For example,
we can encode user-defined termination measures for functions using a general fixpoint combinator
and refinement types that ensure the termination condition semantically. The recursion
in programs is thus not syntactically restricted as in, e.g., System F.
We combined these features into a new type system, System FR,
which we present as a bidirectional type checking algorithm.
The algorithm generates type checking
and type inference goals by traversing terms and types, until it reaches a point
where it has to check that a given term evaluates to true. This typically
arises when we want to check that a term t has a refinement type
{x:T|b}, which is the case when t has type T, and when the term
b evaluates to true in the context where x equals t. Following
the tradition of SMT-based verifiers (Detlefs
et al., 1998; Barnett
et al., 2004), we call checks that
some terms evaluate to true verification conditions.
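For instance, for a (hypothetical) function of ours with a refinement-style postcondition such as

    def abs(x: BigInt): BigInt = {
      if (x < 0) -x else x
    } ensuring (res => res >= 0)   // the result has refinement type {res: BigInt | res >= 0}

checking the body against the refinement type of the result amounts to the verification condition that the boolean term (if (x < 0) -x else x) >= 0 evaluates to true in a context where x is an arbitrary BigInt.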
We prove the soundness of our type system using a reducibility interpretation of types.
The goal of our verification system is to ensure that a given term belongs to the
semantic denotation of a given type. For simple types such as natural numbers, this denotation
is the set of untyped lambda calculus terms that
evaluate, in a finite number of steps,
to a non-negative integer.
For function types, the denotation consists, as is typical in reducibility approaches,
of terms that, when applied to terms in the denotation of the argument type, evaluate to terms
in the denotation of the result type.
Such a denotation gives us a unified framework for function contracts expressed as refinement types. The approach ensures
termination of programs because the denotation of a type
only contains terms that are terminating under call-by-value semantics.
We have formally proven using the Coq proof assistant (Bertot and
Castéran, 2004a) the soundness
of our typing algorithm, implying that when verification conditions generated for
checking that a term t belongs to a type T are semantically valid, the term
t belongs to the semantic denotation of the type T.
The bidirectional typing algorithm handles the expressive types in a
deterministic and predictable way, which enables good and localized error
reporting to the user.
To solve the generated verification conditions, we use an existing
implementation that invokes the Inox solver (https://github.com/epfl-lara/inox), which translates them into the first-order
language of SMT solvers (Voirol
et al., 2015).
Our semantics of types provides a definition of soundness for such solvers;
any solver that respects the semantics can be used with our verification condition generator.
Our bidirectional type checking algorithm thus becomes a new, trustworthy
verification condition generator for Stainless. We were successful in verifying many existing
Stainless benchmarks using the new approach.
We summarize our contributions as follows:
We present a rich type system, called System FR, that combines System F with dependent types,
refinements, equality types, and recursive
types (Sections 3 and 4).
We define a bidirectional type-checking algorithm for System FR
(Section 5.6). Our algorithm generates
verification conditions that are then solved by the (existing) SMT-based
solver Inox.
Figure 1. Template of a recursive function with user-given contracts and a decreasing measure.
2. Examples of Program Verification and Termination Checking
Our goal is to verify correctness and termination of pure Scala
functions written as in Figure 1.
pre[x] is the precondition of the function f, and is written
by the user in the same language as the body of f. The precondition may
contain arbitrary expressions and calls to other functions. Similarly, the user
specifies in post the property that the results of the function should satisfy. To
ensure termination of f (which might call itself recursively), the user
may also provide a measure using the decreases keyword, which is also an
expression (of type Nat, the type of natural numbers) written in the same
language.
τ1 and τ2 may be arbitrary types, including
function types or algebraic data types. Informally, the function is terminating
and correct, if, for every value v of type τ1 such that pre[v] evaluates to true, f(v) returns (in a finite number of
steps) a value res of type τ2 such that post[v,res]
evaluates to true.
By using dependent and refinement types, this can be summarized by saying that
the function f has type:
Πx:{x:τ1|pre[x]}.{res:τ2|post[x,res]}.
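In Stainless surface syntax, a concrete instance of this template (a hypothetical example of ours, not one of the figures) could look as follows; require, decreases, and ensuring carry pre[x], the measure, and post[x,res] respectively:

    import stainless.lang._

    def sum(n: BigInt): BigInt = {
      require(n >= 0)       // pre[x]
      decreases(n)          // termination measure
      if (n == 0) BigInt(0) else n + sum(n - 1)
    } ensuring (res => res >= n)   // post[x, res]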
Figure 2. The function filter filters elements of a list based on a
predicate p, and count counts the number of occurrences of x
in a list.
Figure 3. A partition function specified using filter, with a
termination measure given by size.
Figure 5. A function that checks whether a list is sorted and a
function that merges two sorted lists
As an example, consider the list type as defined in Figure 3.
We use Z to denote the type of integers (corresponding
to Scala’s BigInt in actual source code). The
function filter filters elements from a list, while count counts the
number of occurrences of an integer in the list. These two functions have no pre- or
postconditions. The decreases clauses specify that the functions
terminate because the size of the list decreases at each recursive call.
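The code of Figure 2 is not reproduced here; the two functions have roughly the following shape (a sketch of ours using the List type from stainless.collection for concreteness, rather than the List defined in the figure):

    import stainless.lang._
    import stainless.collection._   // List, Cons, Nil

    def filter(l: List[BigInt], p: BigInt => Boolean): List[BigInt] = {
      decreases(l.size)
      l match {
        case Nil()       => Nil[BigInt]()
        case Cons(x, xs) => if (p(x)) Cons(x, filter(xs, p)) else filter(xs, p)
      }
    }

    def count(x: BigInt, l: List[BigInt]): BigInt = {
      decreases(l.size)
      l match {
        case Nil()       => BigInt(0)
        case Cons(y, ys) => (if (x == y) BigInt(1) else BigInt(0)) + count(x, ys)
      }
    }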
Using these functions we define partition in Figure 3,
which takes a list l of natural numbers and partitions it according to a
predicate p: Z⇒Bool. We prove in the postcondition
that partitioning coincides with applying filter to the list with p
and its negation.
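Continuing the sketch above, partition and its postcondition relating it to filter could be written as:

    def partition(l: List[BigInt], p: BigInt => Boolean): (List[BigInt], List[BigInt]) = {
      decreases(l.size)
      l match {
        case Nil()       => (Nil[BigInt](), Nil[BigInt]())
        case Cons(x, xs) =>
          val (yes, no) = partition(xs, p)
          if (p(x)) (Cons(x, yes), no) else (yes, Cons(x, no))
      }
    } ensuring (res => res._1 == filter(l, p) && res._2 == filter(l, y => !p(y)))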
Figure 4 shows a theorem
that partition also preserves the multiplicity of each element. We
use here count to state the property, but we could have used multisets
instead (a type which is natively supported in Stainless). The holds keyword is a shorthand for ensuring { res => res }. The
@induct annotation instructs the system to add a recursive call to partitionMultiplicity on the tail of l when l is not empty. This
gives us access to the multiplicity property for the tail of l, which the
system can then use automatically to prove that the property holds for l
itself. This corresponds to a proof by induction on l.
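A sketch of what such a theorem can look like in Stainless syntax, reusing count and partition from the sketches above (the exact statement of Figure 4 may differ):

    import stainless.annotation._   // provides @induct

    @induct
    def partitionMultiplicity(l: List[BigInt], x: BigInt, p: BigInt => Boolean): Boolean = {
      val (l1, l2) = partition(l, p)
      count(x, l) == count(x, l1) + count(x, l2)
    }.holds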
Figure 5 shows a function isSorted that checks whether a
list is sorted, and a function merge that combines two sorted lists in a
sorted list.
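Their definitions follow the usual recursive pattern; a sketch of ours (again over stainless.collection lists, and without the contracts that Figure 5 may additionally place on merge):

    def isSorted(l: List[BigInt]): Boolean = {
      decreases(l.size)
      l match {
        case Cons(x, tail @ Cons(y, _)) => x <= y && isSorted(tail)
        case _                          => true
      }
    }

    def merge(l1: List[BigInt], l2: List[BigInt]): List[BigInt] = {
      require(isSorted(l1) && isSorted(l2))
      decreases(l1.size + l2.size)
      (l1, l2) match {
        case (Nil(), _) => l2
        case (_, Nil()) => l1
        case (Cons(x, xs), Cons(y, ys)) =>
          if (x <= y) Cons(x, merge(xs, l2)) else Cons(y, merge(l1, ys))
      }
    }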
When given the above input, the system proves the termination
of all functions, establishes that postconditions of functions hold, and
shows that the theorem holds, without any user interaction or additional
annotations.
Our system also supports reasoning about infinite data structures, including
streams that are computed on demand. These data structures are challenging to deal with because even defining termination
of an infinite stream is non-obvious, especially in the absence of a concrete operation that uses
the stream. Given some type X, Stream[X] represents the type of infinite streams containing elements in X. In
a mainstream call-by-value language such as Scala, this type can be defined as:
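A sketch of such a definition (the exact declaration used in our development may differ) is:

    case class Stream[X](head: X, tail: () => Stream[X])   // the tail is forced only when called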
For the sake of concise syntax, we typeset a function taking unit, (u:Unit)=>e, using Scala’s syntax ()⇒e for a function of zero
parameters. Given a stream s: Stream[X], we can call s.head
to get the head of the stream (which is of type X), or s.tail to get
the tail of the stream (which is of type ()⇒Stream[X]). We can use
recursion to define streams, as shown in Figures 6, 7, and 8. The @ghost annotation is used to
mark the ghost parameters n of these functions. These parameters are used
as annotations to guide our type-checker, but they do not influence the
computation and can be erased at runtime. For instance, an erased version of
constant (without ghost code and without type annotation) looks like:
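With Stream as above, it looks roughly as follows:

    def constant[X](x: X): Stream[X] = Stream(x, () => constant(x))   // recursion hidden under a lambda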
Informally, we can say that the constant stream is terminating.
Indeed, it has the interesting property that, despite the recursion, for every
n∈N, we can take the first n elements in finite time (no divergence
in the computation). We say that constant(x) is an
n-non-diverging stream. Moreover, when a stream is n-non-diverging
for every n∈N, we simply say that it is non-diverging, which
means that we can take as many elements as we want without diverging, which is
the case for constant(x). Note that non-divergence of constant
cannot be shown by defining a measure on its argument x that strictly
decreases on each recursive call, because constant is called recursively
on the exact same argument x. Instead, we define a measure on the ghost
argument n of the annotated version. This corresponds to using
type-based
termination (Abel, 2008, 2007; Barthe
et al., 2008),
where the type of the function for the recursive call is smaller than the type
of the caller. We expand on that technique in Section 5.2.
In the annotated version of constant from Figure 8, the
notation Stream[X](n) stands for streams of elements in X which are
n-non-diverging. The type of constant then states that constant can
be called with any (ghost) parameter n to build an n-non-diverging
stream. Since parameter n is computationally irrelevant, this proves that
the erased version of constant returns a non-diverging stream. At the
moment, our formalization fully supports streams, but we have not yet modified the
Scala frontend to parse (ghost) parameters such as n in functions like the ones given above.
Instead, we construct such functions internally in our tool as syntax trees.
Figure 9.
Grammar for untyped lambda calculus terms
3. Syntax and Operational Semantics
We now give a formal syntax for terms and show
(in the Appendix) the call-by-value operational semantics. This untyped lambda calculus with
pairs, tagged unions, integers, booleans, and error values models programs that our
verification system supports. It is Turing complete and rather conventional.
3.1. Terms of an Untyped Calculus
Let V be a set of variables. We let Terms be the set of all
(untyped) terms (see Figure 9) which includes the unit term
(), pairs, booleans, natural numbers, a recursor rec for iterating over
natural numbers, a pattern matching operator match for natural numbers, a
recursion operator fix, an error term err to represent crashes, and a
generic term refl to represent proofs of equality between terms. The
recursor rec can be simulated using fix and match but we keep
it in the paper for presenting examples.
The terms fold(t) and unfold t1 in x ⇒ t2 are used to represent data structures
(such as lists or streams), where ‘fold()’ plays the role of a constructor,
and ‘unfold … in’ the role of a deconstructor. The terms Λt and
t[] are used to represent the erasure of type abstractions and type
instantiation terms (for polymorphism) of the form Λα.t and
t[τ], where α is a type variable and τ is a type. These
type-annotated terms will be introduced in a further section.
The term size(t) is a special term to internalize the sizes of syntax trees
of values (ignoring lambdas) of our language. It is used as the measure of
recursive functions, such as the examples on lists shown in
Section 2.
Given a term t, we denote by fv(t) the set of all free variables of t.
Terms are considered up to renaming of locally bound variables
(alpha-renaming).
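For concreteness, the term syntax can be pictured as a Scala algebraic data type along the following lines (a sketch of ours, not the data type used in our implementation; boolean and arithmetic operations, projections, and left/right injections are elided):

    sealed abstract class Term
    case class Var(x: String) extends Term
    case object UnitLit extends Term                                    // ()
    case object Zero extends Term
    case class Succ(t: Term) extends Term
    case class Lambda(x: String, body: Term) extends Term               // λx. t
    case class App(t1: Term, t2: Term) extends Term
    case class Pair(t1: Term, t2: Term) extends Term
    case class NatMatch(scrut: Term, zero: Term, m: String, succ: Term) extends Term  // match
    case class NatRec(n: Term, base: Term, step: Term) extends Term     // rec
    case class Fix(y: String, body: Term) extends Term                  // fix(y ⇒ t)
    case class Fold(t: Term) extends Term
    case class Unfold(t1: Term, x: String, t2: Term) extends Term       // unfold t1 in x ⇒ t2
    case class TypeAbs(t: Term) extends Term                            // Λ t
    case class TypeInst(t: Term) extends Term                           // t[]
    case class Size(t: Term) extends Term                               // size(t)
    case object Err extends Term                                        // err
    case object Refl extends Term                                       // refl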
3.2. Call-by-Value Operational Semantics
The set Val of values of our language is defined (inductively) to be
zero, (), true, false, refl, every variable x, every lambda
term λx.t or Λt, the terms of the form succ(v) or
fold(v) where v∈Val, and the terms of the form (v1,v2) where
v1,v2∈Val.
The call-by-value small-step relation between two terms t1,t2∈Terms,
written t1↪t2, is standard for the most part and given in
Figure 23 (Appendix A). Given a term t and a
value v, t[x↦v] denotes the term t where every free occurrence of
x has been replaced by v.
To evaluate the fixpoint operator fix, we use the rule fix(y⇒t)↪t[y↦λ().fix(y⇒t)],
which substitutes for y the fix term wrapped under a lambda with a unit argument.
We wrap fix in a lambda term
because we want all substituted terms to be values in our
call-by-value semantics, and fix itself is not a value. This also means that, to make a
recursive call within t, one has to use y() instead of y.
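As a small example of this rule, the term loop ≡ fix(y ⇒ y()) reduces as loop ↪ (λ().loop)() ↪ loop ↪ …: the recursive call y() first receives the wrapped fixpoint λ().fix(y ⇒ y()) for y, and then unwraps it by applying it to ().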
To define the semantics of size(), we use a (mathematical) function
size_semantics() that returns the size of a value, ignoring lambdas for which
it returns 0. The precise definition is given in
Figure 24 (Appendix A).
We denote by ↪∗ the reflexive and transitive closure of
↪.
A term t is normalizing if there exists a value v such that t↪∗v.
Figure 10.
Grammar for types τ, where x∈V is a term
variable, α∈V is a type variable (t denotes type-annotated terms of Figure 14
that complete the mutually recursive definition)
4. Types, Semantics and Reducibility
We give in Figure 10 the grammar for the types τ that
our verification system supports. Given two types τ1 and τ2, we use
the notation τ1→τ2 for Πx:τ1.τ2 when
x is not a free variable of τ2. Similarly, we use the notation τ1×τ2 for Σx:τ1.τ2 when x is not a free variable of τ2.
For recursive types, we introduce the notation:
Rec(α⇒τ)≡∀n:Nat.Rec(n)(α⇒τ)
Then, the type of (non-diverging) streams informally introduced in Section 2
can be understood as a notation, when X is a type, for:
Stream[X]≡Rec(α⇒X×(Unit⇒α)).
Similarly, for a natural number n, the type of n-non-diverging streams
Streamn[X] is a notation for Rec(n)(α⇒X×(Unit⇒α)).
Using this notation, we can also define finite data structures such as lists of
natural numbers, as follows:
List≡Rec(α⇒Unit+Nat×α).
We show in Section 4.3 that these types indeed correspond to
streams and lists respectively.
Let Type be the set of all types. We define a (unary) logical relation on
types to describe terms that do not get stuck (e.g. due to the error term
err, or due to an ill-formed application such as ‘true zero’) and that
terminate to a value of the given type. Our definition is inspired by the notion
of reducibility or hereditary termination (see
e.g. (Tait, 1967; Girard, 1990; Harper, 2016)), which we use
as a guiding principle for designing the type system and its extensions.
4.1. Reducibility for Closed Terms
For each type τ, we define in Figure 11, mutually recursively, the sets of
reducible values ⟦τ⟧θ,v and
reducible terms ⟦τ⟧θ,t. In that sense, a type τ
can be understood as a specification that some terms satisfy (and some do
not).
These definitions require an environment θ, called an
interpretation, to give meaning to type variables. Concretely, an
interpretation is a partial map from type variables to sets of terms.
An interpretation θ has the constraint that for every type variable
α ∈ dom(θ), θ(α) is a reducibility
candidate, which, in our setting, means that all terms in
θ(α) are (erased) values. The set of all reducibility candidates is
denoted by Candidates ⊆ 2^Terms, and an interpretation θ
is therefore a partial map in V ↦ Candidates.
When the interpretation has no influence on the definition, we may omit it. For
instance, for every θ ∈ (V ↦ Candidates), we have
⟦Nat⟧θ,v = {zero, succ(zero), succ(succ(zero)), …}, so we can just denote
this set by ⟦Nat⟧v.
By construction, ⟦τ⟧θ,v only contains (erased) values (of
type τ), while ⟦τ⟧θ,t contains (erased) terms that
reduce to a value in ⟦τ⟧θ,v. For example, a term in ⟦Nat→Nat⟧θ,t is
not only normalizing as a term of its own, but also normalizes whenever applied
to a value in ⟦Nat⟧θ,v.
Figure 11. Definition of reducibility for values and for terms for each type.
The function basetype() is an auxiliary function, used in the base case of
the definition for recursive types.
The type {x:τ|b} represents the values v of type τ for
which b[x↦v] evaluates to true. We use this type as a
building block for writing specifications (pre and postconditions).
The type ∀x:τ1.τ2 represents the values that are in the
intersection of the types τ2[x↦v] when v ranges over values of
type τ1.
The sum type τ1+τ2 represents values that are either of the form
left(v) where v is a reducible value of τ1, or of the form
right(v) where v is a reducible value of τ2.
The set of reducible values for the equality type
⟦t1 ≡ t2⟧θ,v makes use of a notion of equivalence on
terms which is based on the operational semantics. More specifically, we say that
t1 and t2 are equivalent, denoted t1 ≈ t2, if for
every value v, we have t1 ↪∗ v iff t2 ↪∗ v. Note that
this equivalence relation is defined even if we do not know anything about the
types of terms t1 and t2, and it ensures that if one of the terms reduces
to a value, then so does the other.
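For example, (λx.x) zero ≈ zero, since both reduce (in zero or more steps) to the value zero and to no other value; and any two diverging terms are equivalent to each other, since neither reduces to any value.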
The type ∀α:Type.τ is the polymorphic type from System F. The set
⟦∀α:Type.τ⟧θ,v is defined by using the environment
θ to bind the type variable α to an arbitrary reducibility
candidate.
We use the recursive type Rec(n)(α⇒τ) as a building
block for representing data structures such as lists or streams. The definition
of reducibility for the recursive type makes use of an auxiliary function
basetype() that can be seen as an (upper) approximation of the recursive
type. Note that basetypeα(τ) removes the type variable α
from τ.
Our reducibility definition respects typical lemmas that are needed to prove the
soundness of typing rules, such as the following substitution lemma (see
(Girard, 1971) for the lemma on System F), which we have formally proven
(see also Section 5.8 below).
Lemma 4.1 (Substitution).
Let τ1 and τ2 be two types, and let α be a type variable that
may appear in τ1 but not in τ2. Let θ be a type interpretation.
Then, we have: ⟦τ1[α ↦ τ2]⟧θ,v = ⟦τ1⟧θ[α ↦ ⟦τ2⟧θ,v],v.
4.2. Reducibility for Open Terms
Having defined reducibility for closed terms,
we now define what it means for a term t with free term and type variables to
be reducible for a type τ. Informally, we want to ensure that for every
interpretation of the type variables, and for every substitution of values for
the term variables, the term t reduces in a finite number of steps to a value
in type τ. This is formalized by a (semantic) typing relation
Θ;Γ ⊨Red t : τ, which is defined as follows.
First, a context Θ;Γ is made of a finite set Θ ⊆ V of type variables and of a sequence Γ of pairs in V×Type. The domain of Γ, denoted dom(Γ), is the
list of variables (in V) appearing in the left-hand-sides of the pairs. We
implicitly assume throughout the paper that all variables appearing in the
domains are distinct. This enables us to use Γ as a partial map from
V to Type. We use a sequence to represent Γ as the order of
variables is important, since a variable may have a (dependent) type which
refers to previous variables in the context.
Given a partial map γ∈V↦Terms, we write γ(t) for
the term t where every variable x is replaced by γ(x). We use the
same notation γ(τ) for applying a substitution to a type τ.
Given a context Θ;Γ, a reducible substitution for
Θ;Γ is a pair of partial maps θ ∈ V ↦ Candidates and
γ ∈ V ↦ Terms where:
dom(θ)=Θ,
dom(γ)=dom(Γ), and
∀x ∈ dom(Γ). γ(x) ∈ ⟦γ(Γ(x))⟧θ,v.
Note that the substitution γ is also applied to the type Γ(x),
since Γ(x) may be a dependent type with free term variables.
The set of all
pairs of reducible substitutions for Θ;Γ is denoted
⟦Θ;Γ⟧v.
Finally, given a context Θ;Γ, a term t and a type τ, we say
that Θ;Γ ⊨Red t : τ holds when for every pair of substitutions
(θ,γ) for the context Θ;Γ, the term γ(t) belongs to the
reducible terms at type γ(τ). Formally,
Θ;Γ ⊨Red t : τ is defined to hold when:
∀(θ,γ) ∈ ⟦Θ;Γ⟧v. γ(t) ∈ ⟦γ(τ)⟧θ,t.
Our bidirectional type checking and inference algorithm in Section 5 is a sound (even if incomplete)
procedure to check
Θ;Γ ⊨Red t : τ.
4.3. Recursive Types
We explain in this section how to interpret the type
Rec(n)(α⇒τ) (see reducibility definition in
Figure 11) and how the Stream[X] and List types
represent streams and lists.
4.3.1. Infinite Streams
For a natural number n, consider the type Sn ≡ Streamn[Nat] ≡ Rec(n)(α⇒Nat×(Unit⇒α)). Let
us first see what Sn represents for small values of n. As a shortcut, we
use the notations 0, 1, 2, … for zero, succ(zero),
succ(succ(zero)), …
The definition of ⟦S0⟧v refers to basetypeα(Nat×(Unit⇒α)), which is Nat×⊤ by
definition. This means that ⟦S0⟧v is the set of values of the
form fold((a,v)), where a ∈ ⟦Nat⟧v, and v ∈ Val.
By unrolling the definition, we get that ⟦S1⟧v is the set of
values of the form fold(v) where v is in ⟦Nat×(Unit⇒α)⟧[α↦⟦S0⟧v],v, which is the same
(by Lemma 4.1) as ⟦Nat×(Unit⇒S0)⟧v. Therefore, ⟦S1⟧v is the set of values of the
form fold((a,f)) where a ∈ ⟦Nat⟧v and f ∈ ⟦Unit⇒S0⟧v. This means that when f is applied to
(), it terminates and returns a value in ⟦S0⟧v.
Similarly, ⟦S2⟧v is the set of values of the form
fold((a,f)) where a ∈ ⟦Nat⟧v and f ∈ ⟦Unit⇒S1⟧v.
To summarize, we can say that for every n ∈ ⟦Nat⟧v, Sn
represents values of the language that behave as streams of natural numbers, as
long as they are unfolded at most n+1 times. This matches the property we
mentioned in Section 2, as Sn represents the streams that are
n+1-non-diverging.
We can show that as n grows, Sn gets more and more constraints:
⟦S0⟧v ⊇ ⟦S1⟧v ⊇ ⟦S2⟧v ⊇ …
In the limit, a value v ∈ ⟦∀n:Nat.Sn⟧v (which is in
every Sn for n ∈ ⟦Nat⟧v) represents a stream of natural
numbers that, regardless of the number of times it is unfolded, does not
diverge, i.e. a non-diverging stream. Equivalently, we have
v ∈ ⟦Stream[Nat]⟧v.
4.3.2. Finite Lists
Types of the form Rec(α⇒τ) can also be used to
represent finite data structures such as lists.
We let Listn be a notation for
Rec(n)(α⇒Unit+Nat×α), so that:
List≡∀n:Nat.Listn.
Here are some examples to show how lists are encoded:
The empty list is fold(left()),
A list with one element n is fold(right(n,fold(left()))),
More generally, given an element n and a list l, we can construct the
list n::l by writing: fold(right(n,l)).
Let us now see why List represents the type of all finite lists of natural
numbers. The first thing to note is that given n ∈ ⟦Nat⟧v,
Listn does not represent the lists of size n. For instance, we
know that ⟦List0⟧v is the set of values of the form
fold(v) where v ∈ ⟦basetypeα(Unit+Nat×α)⟧v,
i.e. v ∈ ⟦⊤⟧v = Val.
Therefore, List0
contains lists of all sizes (and also all values that do not represent lists,
such as fold(zero) or fold(λx.())).
Instead, Listn can be understood as the values that, as long as they are
unfolded no more than n times, behave as lists. Just like for streams, we
have: ⟦List0⟧v ⊇ ⟦List1⟧v ⊇ ⟦List2⟧v ⊇ …
(This monotonicity comes from the fact that α only appears in positive
positions in the definitions of the recursive types for streams and lists.)
In the limit, we can show that List contains all finite lists, and nothing
more.
Lemma 4.2. Let v ∈ Val be a value. Then, v ∈ ⟦List⟧v if and only if
there exists k ≥ 0 and a1, …, ak ∈ ⟦Nat⟧v such that
v = fold(right(a1, … fold(right(ak, fold(left()))) …)).
It can seem surprising that the type of streams Rec(α⇒Nat×(Unit⇒α)) contains infinite streams while the type of lists
Rec(α⇒Unit+Nat×α) only contains finite
lists. The reason is that, in a call-by-value language, a value representing an
infinite list would need to have an infinite syntax tree, with infinitely many
fold()’s (which is not possible). On the other hand, we can represent
infinite streams by hiding recursion underneath a lambda term as shown in
Section 2.
5. A Bidirectional Type-Checking Algorithm
In this section, we give procedures for inferring a type τ for
a term t in a context Θ;Γ, denoted
Θ;Γ ⊢ t ⇑ τ, as well as for checking that the type of a term t
is τ, denoted Θ;Γ ⊢ t ⇓ τ. We introduce rules of
our procedures throughout this section; the full set of rules is given in
figures 12 and 13.