The ∆-framework
Abstract
We introduce the ∆-framework, LF∆, a dependent type theory based on the Edinburgh Logical Framework LF, extended with the strong proof-functional connectives, i.e. strong intersection, minimal relevant implication and strong union. Strong proof-functional connectives take into account the shape of logical proofs, thus reflecting polymorphic features of proofs in formulæ. This is in contrast to classical or intuitionistic connectives, where the meaning of a compound formula depends only on the truth value or the provability of its subformulæ. Our framework encompasses a wide range of type disciplines. Moreover, since relevant implication permits to express subtyping, LF∆ also subsumes Pfenning’s refinement types. We discuss the design decisions which have led us to the formulation of LF∆, study its metatheory, and provide various examples of applications. Our strong proof-functional type theory can be plugged into existing common proof assistants.
University of Udine; Université Côte d’Azur, INRIA
1 Introduction
This paper provides a unifying framework for two hitherto unreconciled understandings of types: types-as-predicates à la Curry and types-as-propositions (sets) à la Church. The key to our unification consists in introducing strong proof-functional connectives [40, 3, 4] in a dependent type theory such as the Edinburgh Logical Framework (LF) [22]. Both Logical Frameworks and Proof-Functional Logics consider proofs as first-class citizens, albeit differently. Strong proof-functional connectives take seriously into account the shape of logical proofs, thus allowing polymorphic features of proofs to be made explicit in formulæ. They provide a finer semantics than classical/intuitionistic connectives, where the meaning of a compound formula depends only on the truth value or the provability of its subformulæ. However, existing approaches to strong proof-functional connectives are all quite idiosyncratic in mentioning proofs. Logical Frameworks, on the other hand, provide a uniform approach to proof terms in object logics, but they have not fully capitalized on subtyping.
This situation calls for a natural combination of the two understandings of types, which should benefit both worlds. On the side of Logical Frameworks, the expressive power of the metalanguage would be enhanced thus allowing for shallower encodings of logics, a more principled use of subtypes [37], and new possibilities for formal reasoning in existing interactive theorem provers. On the side of type disciplines for programming languages, a principled framework for proofs would be provided, thus supporting a uniform approach to “proof reuse” practices based on type theory [38, 12, 20, 9, 6].
Therefore, in this paper, we extend LF with the connectives of strong intersection, strong union, and minimal relevant implication of Proof-Functional Logics [40, 3, 4]. We call this extension the ∆-framework (LF∆), since it builds on the ∆-calculus introduced in [31, 17]. Moreover, we illustrate by way of examples that LF∆ subsumes many expressive type disciplines in the literature [37, 3, 4, 38, 12].
Extending the judgments-as-types Curry-Howard paradigm to logics supporting strong proof-functional connectives is not immediate, since these connectives need to compare the shapes of derivations and do not just take into account the provability of propositions, i.e. the inhabitation of the corresponding type. In order to capture successfully strong logical connectives such as ∩ or ∪, we need to be able to express rules of the following shape:
where the premises are related by a suitable equivalence between logical proofs. Notice that the above rules immediately suggest intriguing applications to polymorphic constructions, i.e. the same evidence can be used as a proof of different statements. Pottinger [40] was the first to study the strong connective ∩. He contrasted it to the intuitionistic connective ∧ as follows: “The intuitive meaning of ∩ can be explained by saying that to assert A ∩ B is to assert that one has a reason for asserting A which is also a reason for asserting B … (while) … to assert A ∧ B is to assert that one has a pair of reasons, the first of which is a reason for asserting A and the second of which is a reason for asserting B”. A logical theorem involving intuitionistic conjunction which does not hold for strong conjunction is the conjunction of two formulæ whose normal proofs have a different number of abstractions: otherwise there should exist a closed term having simultaneously both one and two abstractions. Lopez-Escobar [32] and Mints [35] investigated extensively logics featuring both strong and intuitionistic connectives, especially in the context of realizability interpretations.
Dually, for strong union it is in the elimination rule that proof equality needs to be checked. Following Pottinger, we could say that to draw a conclusion from A ∪ B is to have a single reason which is both a reason when A holds and a reason when B holds. The two connectives differ, since there are intuitionistic theorems involving ∨ which are not derivable for ∪: otherwise there would exist a term which behaves both as I and as K.
Following [4], Strong (or Minimal Relevant) Implication can be viewed as a special case of implication whose related function space is the simplest possible one, namely the one containing only the identity function. The relevant and intuitionistic operators differ, since weakening theorems such as A ⊃ (B ⊃ A) are not relevantly derivable. Relevant implication allows for a natural introduction of subtyping, in that a relevant implication between two types morally asserts that the first is a subtype of the second. Relevant implication amounts to a notion of “proof-reuse”. Combining the remarks in [4, 3], minimal relevant implication, strong intersection and strong union correspond respectively to the implication, conjunction and disjunction operators of Meyer and Routley’s Minimal Relevant Logic [34].
Strong connectives arise naturally in investigating the propositions-as-types analogy for intersection and union type assignment systems. Intersection types were introduced by Coppo, Dezani et al. in the late 70’s [13, 15, 16, 5] to support a form of ad hoc polymorphism for untyped λ-calculi, à la Curry. Intersection types were originally used as an (undecidable) type assignment system for pure λ-calculi, i.e. for finitary descriptions of denotational semantics [14]. This line of research was later explored by Abramsky [1] in a full-fledged Stone duality. Union types were introduced semantically, by MacQueen, Plotkin, and Sethi [33, 3]. In [3], strong intersection, union and subtyping were thoroughly studied in the context of type-assignment systems, see Figure 1. A classical example of the expressiveness of union types is due to Pierce [38]: without union types, the best information we can get for
is a boolean type. Designing a λ-calculus à la Church with intersection and union types is problematic. The usual approach of simply adding types to binders does not work, as shown in Figure 2.
The same difficulties arise with union types. Intersection and union type disciplines started to be investigated in explicitly typed programming language settings à la Church much later, by Reynolds and Pierce [41, 38], Wells et al. [48, 49], Liquori et al. [29, 18], Frisch et al. [21] and Dunfield [19]. From a logical point of view, there are many proposals for a suitable logic to fit intersection: among them we cite [37, 47, 42, 36, 11, 10, 39], previous papers by the authors [17, 30], and a type checker implementation [45]. In [17], two of the present authors proposed the ∆-calculus as a typed λ-calculus à la Church corresponding to the type assignment system à la Curry with intersection and union, but without subtyping. The relation between Church-style and Curry-style calculi was expressed using an essence function that intuitively erases all the type information in terms (the full definition is shown in Figure 4).
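To fix intuitions, here is a minimal sketch of such an essence function in Python. The term constructors (`Var`, `Lam`, `App`, `Pair`) are our own illustration, not the ∆-calculus syntax; annotations are kept as opaque strings.

```python
# A minimal sketch (not the authors' implementation) of the "essence" function:
# it erases all type annotations from a Church-style term, yielding a pure
# Curry-style lambda term. Constructor names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:          # Church-style: the binder carries a type annotation
    var: str
    ann: str        # the type annotation, kept as an opaque string here
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Pair:         # strong pair: both components must share one essence
    left: object
    right: object

def essence(t):
    """Erase type annotations; both components of a strong pair must agree."""
    if isinstance(t, Var):
        return t.name
    if isinstance(t, Lam):
        return ("lam", t.var, essence(t.body))
    if isinstance(t, App):
        return ("app", essence(t.fn), essence(t.arg))
    if isinstance(t, Pair):
        l, r = essence(t.left), essence(t.right)
        assert l == r, "strong pair components must have the same essence"
        return l
    raise TypeError(t)

# The identity annotated at two different types has one and the same essence:
m = Pair(Lam("x", "nat", Var("x")), Lam("x", "bool", Var("x")))
print(essence(m))   # ('lam', 'x', 'x')
```

The point of the sketch is the `Pair` case: the essence is defined only when both components erase to the same pure term, mirroring the constraint on strong pairs.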
LF∆, introduced in this paper, extends [31] with union types, dependent types and minimal relevant implication. The novelty of LF∆ in the context of Logical Frameworks lies in the full-fledged use of strong proof-functional connectives, which to our knowledge has never been explored before. Clearly, all LF∆ terms have a computational counterpart.
Pfenning’s work on Refinement Types [37] pioneered an extension of the Edinburgh Logical Framework with subtyping and intersection types. His approach capitalises on a tame and essentially ad hoc notion of subtyping, and the logical strength of that system does not go beyond LF (i.e. simple types). The logical power of LF∆ makes it possible to type all strongly normalizing terms. Furthermore, subtyping in LF∆ arises naturally as a derived notion from the more fundamental concept of minimal relevant implication, as illustrated in Section 2.
Miquel [36] discusses an extension of the Calculus of Constructions with implicit typing, which subsumes a kind of proof-functional intersection. His approach has motivations opposite to ours. While LF∆ provides a Church-style version of Curry-style type assignment systems, Miquel’s Implicit Calculus of Constructions encompasses some features of Curry-style systems in an otherwise Church-style Calculus of Constructions. In LF∆ we can also express ad hoc polymorphism, while in the Implicit Calculus only structural polymorphism can be encoded: indeed, the identity cannot be assigned there the intersection of two instances of its type [28]. Kopylov [27] adds a dependent intersection type constructor to NuPRL, allowing the resulting system to support dependent records (a very useful data structure for encoding mathematics). The implicit product type of Miquel, together with the dependent intersection type of Kopylov and a suitable equality type, is used by Stump [46] to enrich an impredicative second-order system, in order to derive induction.
In order to achieve our goals, we could have simply carried out an encoding of LF∆ in LF. But, due to the side-conditions characterizing proof-functional connectives, this could have been achieved only through a deep encoding. As an example, in Figure 8 we give an encoding of a subsystem of [3], where subtyping is simulated using relevant arrows. This encoding illustrates the expressive power of LF in treating proofs as first-class citizens, and it was also a source of inspiration for LF∆.
All the examples discussed in this paper have been checked by an experimental proof development environment for LF∆ [45] (see Bull and BullSubtyping in [44]).
Synopsis. In Section 2, we introduce LF∆ and outline its metatheory, together with a discussion of the main design decisions. In Section 3, we provide motivating examples. In Section 4, we outline the details of the implementation and discuss future work.
2 The ∆-framework: LF with proof-functional operators
The syntax of LF∆ pseudo-terms is given in Figure 3. For the sake of simplicity, we identify α-convertible terms. Signatures and contexts are defined as finite sequences of declarations, as in LF. Observe that we could formulate LF∆ in the style of [23], using only canonical forms and without reductions, but we prefer the standard LF format to support better intuition. There are three proof-functional objects, namely strong conjunctions (typed with strong intersections) with two corresponding projections, strong disjunctions (typed with strong unions) with two corresponding injections, and strong (or relevant) abstractions (typed with relevant implications). Indeed, a relevant implication is not a dependent one, because the essence of any inhabitant of such a type is essentially the identity function, as enforced by the typing rules. Note that injections need to be decorated with the injected type in order to ensure unicity of typing.
We need to generalize the notion of essence, introduced in [17, 30], to syntactically connect pure λ-terms and type-annotated LF∆ terms. The essence function compositionally erases all type annotations, see Figure 4.
One could argue that the choice of component used in the definition of the essence of strong pairs/co-pairs is arbitrary and could have been replaced by the other one: however, the typing rules will ensure that, if a strong pair (resp. co-pair) is typable, then its two components have the same essence. Thus, strong pairs/co-pairs are constrained. The rule for the essence of a relevant application is justified by the fact that the relevant abstraction amounts to just a type decoration.
The six basic reductions for LF∆ objects appear on the left in Figure 5. Congruence rules are as usual, except for the two cases dealing with pairs and co-pairs, which appear on the right of Figure 5. Here redexes need to be reduced “in parallel” in order to preserve the identity of the essences of the two components. We consider the symmetric, reflexive, and transitive closure of the reduction, i.e. the compatible closure of the reduction induced by the first six rules on the left in Figure 5, together with the last two congruence rules in the same figure. In order to make this definition truly functional, as well as to be able to prove a simple subject reduction result, we need to constrain pairs and co-pairs to have congruent components up to erasure of type annotations. This is achieved by imposing equality of essences in both constructs. We will therefore regard pairs and co-pairs whose components have a different “infrastructure” simply as not well-defined terms. The effects of this choice are reflected in the congruence rules of the reduction relation, which ensure that reductions can only be carried out in parallel along the two components.
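The parallel congruence rule for pairs can be illustrated with a toy reducer. The constructors and the reduction strategy below are our own simplification, not the rules of Figure 5: the point is only that a step inside a pair fires in both components at once, so their essences stay synchronized.

```python
# A toy illustration (assumed constructors, not the paper's syntax) of the
# congruence rule for strong pairs: a reduction step inside a pair must fire
# in BOTH components at once, so their essences stay identical.

def subst(t, x, v):
    """Naive capture-unsafe substitution, adequate for the closed example below."""
    if t == x:
        return v
    if isinstance(t, tuple):
        if t[0] == "lam":
            _, y, b = t
            return t if y == x else ("lam", y, subst(b, x, v))
        if t[0] == "app":
            return ("app", subst(t[1], x, v), subst(t[2], x, v))
        if t[0] == "pair":
            return ("pair", subst(t[1], x, v), subst(t[2], x, v))
    return t

def step(t):
    """One reduction step; returns None if t is in normal form."""
    if isinstance(t, tuple):
        if t[0] == "app" and isinstance(t[1], tuple) and t[1][0] == "lam":
            _, x, body = t[1]
            return subst(body, x, t[2])
        if t[0] == "pair":
            l, r = step(t[1]), step(t[2])
            if l is None and r is None:
                return None
            # parallel congruence: never reduce only one component
            assert l is not None and r is not None
            return ("pair", l, r)
        if t[0] == "app":
            f = step(t[1])
            if f is not None:
                return ("app", f, t[2])
            a = step(t[2])
            return None if a is None else ("app", t[1], a)
        if t[0] == "lam":
            b = step(t[2])
            return None if b is None else ("lam", t[1], b)
    return None

# Both components hold the same redex (in the real calculus they would carry
# different type annotations); one step reduces both simultaneously.
redex = ("app", ("lam", "x", "x"), "y")
p = ("pair", redex, redex)
print(step(p))   # ('pair', 'y', 'y')
```

The `assert` in the `pair` case is the sketch's rendering of the well-formedness constraint: a pair whose components disagree on where redexes occur is simply not a legal term.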
The restriction on reductions in pairs/co-pairs and the new constructs do not cause any problems in showing that reduction is locally confluent:
[Local confluence]
The reduction relation on well-formed LF∆-terms is locally confluent.
The extended type theory LF∆ is a formal system for deriving judgements of the following forms:
The set of rules for object formation is defined in Figure 6, while the rules for signatures, contexts, kinds and families are defined as usual in the Appendix: all typing rules are syntax-directed. Note that proof-functionality is enforced by the essence side-conditions in the rules for strong pairs, strong co-pairs and relevant abstractions. In the conversion rule we rely on an external notion of equality. An option could have been to add an internal notion of equality directly in the type system, and to prove that the external and the internal definitions of equality are equivalent, as was proved for semi-full Pure Type Systems [43]. Yet another possibility could be to compare type essences, for a suitable extension of the essence function to types and kinds. Unfortunately, this would lead to undecidability of type checking, in connection with relevant implication, as the following example shows: given two constants whose types are mutually converse relevant implications between two types, one can build a typable term whose essence is a pure variable, thereby identifying the two types.
Since the intended meaning of relevant implication is “essentially” the identity, introducing variables or constants whose type is a relevant implication amounts to assuming axioms corresponding to type inclusions, such as those that equate the two types above. As a consequence, equality of essences becomes undecidable. Thus, we rule out such options in relating relevant implications in LF∆ to subtypes in the type assignment system of [3].
2.1 Relating LF∆ to the type assignment system of [3]
We compare and contrast certain design decisions of LF∆ with the type assignment system of [3]. The proof of strong normalization for LF∆ will rely, in fact, on a forgetful mapping from LF∆ to that system. As pointed out in [3], the elimination rule for union types breaks subject reduction for one-step reduction, but this can be recovered using a suitable parallel reduction. The well-known counterexample for one-step reduction, due to Pierce, is
where I is the identity.
In the typing context above, the first and the last terms can be typed, while the terms in the fork cannot. The reason is that the subject in the conclusion of the union-elimination rule uses a context which can have more than one hole, as in the present case. Notice that there is only one redex, and its reduction leads directly to the final term: no other intermediate (untypable) terms are possible.
The following result will be useful in the next section. {theorem} The system of [3] without the universal type ω gives types only to strongly normalizing terms. A proof is embedded in Theorem 4.8 of [3]. It can also be obtained using the general computability method presented in Section 4 of [25], by interpreting intersection and union types precisely as intersections and unions in the lattice of computability sets.
2.2 LF∆ metatheory
LF∆ can play the role of a Logical Framework only if it is decidable. Due to lack of space, we list here only the main results; the complete list appears in the Appendix. The first important step states that if a term is typable, then its type is unique up to conversion. {theorem}[Unicity of types and kinds]

If an object is assigned two families, then the two families are equal up to conversion.

If a family is assigned two kinds, then the two kinds are equal up to conversion.
Strong normalization is proved as in LF. First, we encode LF∆-terms into terms of the type assignment system of [3], in such a way that redexes in the source language correspond to redexes in the target language, and we use Theorem 2.1. To this end, we introduce two forgetful mappings, defined in Figure 11 of the Appendix, which erase dependencies in types and drop proof-functional constructors in terms. Special care is needed in dealing with redexes occurring in type-dependencies, because these need to be flattened at the level of terms.
[Strong normalization]

LF∆ is strongly normalizing, i.e.:

If a kind is well-formed, then it is strongly normalizing.

If a family is well-kinded, then it is strongly normalizing.

If an object is well-typed, then it is strongly normalizing.

Moreover, every strongly normalizing pure λ-term can be annotated so as to be the essence of an LF∆-term.
Local confluence and strong normalization entail confluence by Newman’s lemma, so we have {theorem}[Confluence] LF∆ is confluent, i.e.:

If a kind reduces both to K1 and to K2, then there exists a kind K3 such that both K1 and K2 reduce to K3.

If a family reduces both to σ1 and to σ2, then there exists a family σ3 such that both σ1 and σ2 reduce to σ3.

If an object reduces both to M1 and to M2, then there exists an object M3 such that both M1 and M2 reduce to M3.
Then, we have subject reduction, whose proof relies on technical lemmas about inversion and subderivation properties (see Appendix).
[Subject reduction of LF∆]

If a kind is well-formed and reduces to another kind, then the latter is well-formed.

If a family has a kind and reduces to another family, then the latter has the same kind.

If an object has a type and reduces to another object, then the latter has the same type.
Finally, we define a possible algorithm for checking judgements in LF∆, by computing a type or a kind for a term and then testing it for definitional equality against the given type or kind. This is achieved by reducing both to their unique normal forms and checking that they are identical up to α-conversion. Therefore we finally have:
[Decidability] All the type judgments of LF∆ are recursively decidable.
Minimal Relevant Implications and Type Inclusion. Type inclusion and the rules of subtyping are related to the notion of minimal relevant implication, see [4, 17]. The insight is quite subtle, but ultimately very simple, and this is what makes it appealing. The apparently intricate rules of subtyping and type inclusion, which occur in many systems and might even appear ad hoc at times, can all be explained away in our principled approach, by proving that the relevant-implication type is inhabited by a term whose essence is essentially a variable.
In the following theorem we show how relevant implication subsumes the type-inclusion rules of the theory of [3], without rules (5) and (13) (dealing with the universal type ω) and rule (10) (the distributivity rule) in Figure 1: we call the result the restricted subtype theory. Note that the reason to drop rule (10) is that the corresponding relevant-implication type cannot be inhabited in LF∆.
[Type Inclusion] A relevant implication between two types (where both types contain neither dependencies nor relevant families) is inhabited iff the corresponding inclusion holds in the restricted subtype theory of [3], enriched with a reflexivity axiom for each constant in the signature. As for the system of Refinement Types introduced by Pfenning in [37], we have the following theorem: {corollary}[Pfenning’s Refinement Types] Pfenning’s refinement judgment can be encoded in LF∆ by adding a suitable constant to the signature obtained by replacing each subsorting clause with a constant whose type is the corresponding relevant implication. Moreover, while Pfenning needs to add explicitly the rules of subtyping to his system, we inherit them naturally in LF∆ from the rules for minimal relevant implication.
3 Examples
As we have argued in the previous sections, the point of this paper is a uniform and principled approach to the encoding of a plethora of type disciplines and systems which ultimately stem from, or can capitalize on, strong proof-functional connectives and subtyping. The ∆-framework presented in this paper is the first to accommodate all the examples and counterexamples that have appeared in the literature. The complete development, comprising both the implementation of the framework and the example encodings, can be found in [44].
We start this section by showing the expressive power of LF∆ in encoding classical features of typing disciplines with strong intersection and union.
Auto application. The judgement typing self-application in the type assignment system of [3] is rendered in LF∆ by a corresponding LF∆-judgement.

Polymorphic identity. The judgement assigning to the identity the intersection of two instances of its type in [3] is rendered in LF∆ by the corresponding judgement.

Commutativity of union. The judgement expressing the commutativity of union in [3] is rendered in LF∆ by the judgement

Pierce’s expression of page 2. The expressive power of union types highlighted by Pierce is rendered in LF∆ by
The above example illustrates the advantages of taking LF∆ as a framework. In plain LF we could render it only through a deep encoding, ending up with the verbose code in pierce_program.v [44].
Hereditary Harrop Formulæ. The encoding of Hereditary Harrop’s formulæ is one of the motivating examples given by Pfenning for introducing refinement types in [37]. In LF∆ it can be expressed as in Figure 7 and type checked in [45] using our concrete syntax (file pfenning_harrop.bull [44]), without any reference to intersection types, by a subtle use of union types. We also add rules for solving and backchaining. Hereditary Harrop formulæ can be recursively defined using two mutually recursive syntactical categories, called programs and goals:
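The mutually recursive grammar can be sketched concretely. The following Python model uses the standard first-order hereditary Harrop grammar, not necessarily the paper's precise signature: one formula syntax, with two mutually recursive predicates carving out goals and programs, which is exactly the refinement that the union-type (or refinement-type) encoding internalizes.

```python
# A rough sketch of the standard hereditary Harrop grammar (constructors are
# our own illustration): a single formula syntax refined by two mutually
# recursive predicates for goals and programs.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Imp:          # antecedent => consequent
    ante: object
    cons: object

@dataclass(frozen=True)
class All:          # forall x. body
    var: str
    body: object

def is_goal(f):
    # G ::= A | G ∧ G | P ⊃ G | ∀x.G
    if isinstance(f, Atom):
        return True
    if isinstance(f, And):
        return is_goal(f.left) and is_goal(f.right)
    if isinstance(f, Imp):
        return is_prog(f.ante) and is_goal(f.cons)
    if isinstance(f, All):
        return is_goal(f.body)
    return False

def is_prog(f):
    # P ::= A | P ∧ P | G ⊃ A | ∀x.P
    if isinstance(f, Atom):
        return True
    if isinstance(f, And):
        return is_prog(f.left) and is_prog(f.right)
    if isinstance(f, Imp):
        return is_goal(f.ante) and isinstance(f.cons, Atom)
    if isinstance(f, All):
        return is_prog(f.body)
    return False

clause = All("x", Imp(Atom("q"), Atom("p")))             # forall x. q => p
print(is_prog(clause), is_goal(Imp(clause, Atom("q"))))  # True True
```

Note how every atom is both a goal and a program: this overlap is what the union-type encoding captures directly, while a naive encoding would need either duplication or explicit inclusion axioms.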
Using Theorem A, we can provide an alternative encoding of atoms, goals and programs which is more faithful to the one by Pfenning. Namely, we can introduce in the signature suitable constants representing the corresponding inclusion axioms in Pfenning’s encoding. Our approach based on union types, while retaining the same expressivity, permits to shortcut certain inclusions and also to rule out certain exotic goals and exotic programs. Indeed, for the purpose of establishing the adequacy of the encoding, it is sufficient to avoid variables involving union types in the derivation contexts.
Natural Deductions in Normal Form. The second motivating example for intersection types given in [37] is natural deductions in normal form. We recall that a natural deduction is in normal form if no application of an elimination rule of a logical connective immediately follows the corresponding introduction rule in the main branch of a subderivation.
The encoding we give in LF∆ is a slightly improved version of the one in [37]: like Pfenning, we restrict ourselves to the purely implicational fragment. As in the previous example, we use union types to define normal forms, either as pure elimination deductions from hypotheses or as normal-form deductions. As above, we could also have used intersection types. This example is interesting in itself, being the prototype of the encoding of type systems using canonical and atomic syntactic categories [23], and also of Fitch Set Theory [26].
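The two syntactic categories can be made concrete with a small check of the grammar of β-normal forms for the implicational fragment. The term constructors below are our own illustration, not the paper's encoding.

```python
# An illustrative check (assumed term constructors) of the two syntactic
# categories: atomic deductions (eliminations from a hypothesis) and normal
# deductions, i.e. the beta-normal forms of the implicational fragment.

def is_atomic(t):
    # At ::= x | At Nf   (a hypothesis followed by eliminations)
    if isinstance(t, str):
        return True
    if isinstance(t, tuple) and t[0] == "app":
        return is_atomic(t[1]) and is_normal(t[2])
    return False

def is_normal(t):
    # Nf ::= lam x. Nf | At   (introductions over an atomic core)
    if isinstance(t, tuple) and t[0] == "lam":
        return is_normal(t[2])
    return is_atomic(t)

k = ("lam", "x", ("lam", "y", "x"))            # K, in normal form
redex = ("app", ("lam", "x", "x"), "y")        # a beta-redex, not normal
print(is_normal(k), is_normal(redex))          # True False
```

A β-redex fails the check precisely because an abstraction can never appear in head position of an atomic deduction: this is the grammatical rendering of "no elimination immediately after an introduction".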
Adequacy, Canonical Forms, Exotic Terms. In the presence of union types, we have to pay special attention to the exact formulation of Adequacy Theorems, as in the Harrop formulæ example above. Otherwise exotic terms arise, built from two distinct contexts (i.e. terms with holes) which cannot be naturally simplified even when suitably related. More work needs to be done to streamline how to exclude, or even capitalize on, exotic terms.
Metacircular Encodings. The following diagram summarizes the network of adequate encodings/inclusions that can be defined between LF, LF∆, and the type assignment system of [3]. We label an encoding of one system into another with sh (resp. dp) to denote a shallow (resp. deep) embedding, and we also record when one system is an extension of another. Due to lack of space, but with the intention of providing a better formal understanding of the semantics of strong intersection and union types in a logical framework, we provide in Figure 8 a deep LF encoding of a presentation of the ∆-calculus à la Church [17]. A shallow encoding of the ∆-calculus in LF∆ (file intersection_union.bull [44]) can be mechanically type checked in [45]. A shallow encoding of LF in LF∆ (file lf.bull), making essential use of intersection types, can also be type checked.
LF encoding of the ∆-calculus. Figure 8 presents a pure LF encoding of a presentation of the ∆-calculus à la Church, in Coq syntax, using HOAS. We use HOAS in order to take advantage of the higher-order features of the framework: other abstract syntax representation techniques would not be much different, only more verbose. The Eq predicate plays the same role as the essence function in LF∆, namely, it encodes the judgement that two proofs (i.e. two terms of type (OK _)) have the same structure. This is crucial in the Pair axiom (i.e. the introduction rule of the intersection type constructor), where we can inhabit the type (inter s t) only when the proofs of its component types s and t share the same structure (i.e. we have a witness of type (Eq s t M N), where M has type (OK s) and N has type (OK t)). A similar role is played by the Eq premise in the Copair axiom (i.e. the elimination rule of the union type constructor). We have an Eq axiom for each proof rule. Examples of this encoding can be found in intersection_union.v [44].
4 Implementation and Future Work
In a previous paper [45], we implemented in OCaml suitable algorithms for type reconstruction as well as type checking. In [30] we implemented a subtyping algorithm which extends the well-known Hindley algorithm for intersection types [24] with union types. The subtyping algorithm has been mechanically proved correct in Coq, extending Bessai’s mechanized proof of a subtyping algorithm for intersection types [8].
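For intuition, here is a deliberately simplified structural subtype check over intersection and union types. It omits the distributivity rules of the full theory, so it is only a sound approximation of the algorithm of [30], with our own tuple-based type representation.

```python
# A simplified sketch of a structural subtype check with intersection and
# union types (distributivity rules are deliberately omitted, so this is
# sound but incomplete w.r.t. the full theory). Types are nested tuples;
# atoms are strings.

def sub(s, t):
    # decompose the right-hand intersection and left-hand union first
    if isinstance(t, tuple) and t[0] == "inter":
        return sub(s, t[1]) and sub(s, t[2])
    if isinstance(s, tuple) and s[0] == "union":
        return sub(s[1], t) and sub(s[2], t)
    if isinstance(t, tuple) and t[0] == "union":
        if sub(s, t[1]) or sub(s, t[2]):
            return True
    if isinstance(s, tuple) and s[0] == "inter":
        if sub(s[1], t) or sub(s[2], t):
            return True
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] == "arrow":
        # contravariant in the domain, covariant in the codomain
        return sub(t[1], s[1]) and sub(s[2], t[2])
    return s == t

a, b = "a", "b"
assert sub(("inter", a, b), a)              # an intersection is below each part
assert sub(a, ("union", a, b))              # each part is below a union
assert sub(("union", a, b), ("union", b, a))  # commutativity of union
assert sub(("arrow", ("union", a, b), a),     # (a|b)->a  <=  a->a
           ("arrow", a, a))
print("ok")
```

The order of the clauses matters: intersections on the right and unions on the left are decomposed eagerly, which is the standard way to keep such a checker syntax-directed.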
A Read-Eval-Print Loop allows the user to introduce axioms and definitions, and provides some basic terminal-style features like error pretty-printing, subexpression highlighting, and file loading. Moreover, it can type-check a proof or normalize it, using a strong reduction evaluator. We use the syntax of Pure Type Systems [7] to improve the compactness and the modularity of the kernel. Binders are implemented using de Bruijn indices. We implemented the conversion rule in the simplest way possible: when we need to compare types, we syntactically compare their normal forms. Abstract and concrete syntax are mostly aligned: the concrete syntax is similar to that of Coq (see Bull and BullSubtyping [44]).
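As an illustration of this design, here is a minimal normalize-and-compare conversion check over de Bruijn terms. The representation is our own sketch, not Bull's actual kernel: integers are de Bruijn indices and tuples are abstractions and applications.

```python
# A minimal sketch of the conversion test described above: terms use de
# Bruijn indices, both sides are normalized, and the normal forms are
# compared syntactically.

def shift(t, d, cutoff=0):
    """Shift free indices of t (those >= cutoff) by d."""
    if isinstance(t, int):
        return t + d if t >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, v, j=0):
    """Substitute v for index j in t, adjusting indices."""
    if isinstance(t, int):
        return shift(v, j) if t == j else (t - 1 if t > j else t)
    if t[0] == "lam":
        return ("lam", subst(t[1], v, j + 1))
    return ("app", subst(t[1], v, j), subst(t[2], v, j))

def normalize(t):
    """Normal-order normalization (may diverge on non-normalizing terms)."""
    if isinstance(t, int):
        return t
    if t[0] == "lam":
        return ("lam", normalize(t[1]))
    f = normalize(t[1])
    if isinstance(f, tuple) and f[0] == "lam":
        return normalize(subst(f[1], t[2]))
    return ("app", f, normalize(t[2]))

def conv(a, b):
    """Conversion check: reduce both sides to normal form and compare."""
    return normalize(a) == normalize(b)

ident = ("lam", 0)
# (\x. x) (\x y. x)  is convertible to  \x y. x
print(conv(("app", ident, ("lam", ("lam", 1))), ("lam", ("lam", 1))))  # True
```

De Bruijn indices make the final comparison a plain structural equality, since α-equivalent terms have identical representations.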
We are currently designing a higher-order unification algorithm for LF∆ terms and a bidirectional refinement algorithm, similar to the one found in [2]. The refinement can be split into two parts: the essence refinement and the typing refinement. In the same way, there will be a unification algorithm for essence terms and one for LF∆ terms. The bidirectional refinement algorithm aims at partial type inference, so as to give as much information as possible to a hypothetical solver, or to the unifier: for instance, when searching for an inhabitant of a given type, part of the shape of the candidate term and of its essence can already be inferred.
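The bidirectional discipline can be sketched on simple types only (the actual refiner must additionally handle dependent types, essences and metavariables; all names below are illustrative): `infer` synthesizes a type from a term, while `check` pushes an expected type into it.

```python
# A small sketch of bidirectional typing on simple types: variables,
# annotations and applications synthesize; abstractions are checked.

def infer(ctx, t):
    if isinstance(t, str):                       # variable: look it up
        return ctx[t]
    if t[0] == "ann":                            # (M : T) switches direction
        check(ctx, t[1], t[2])
        return t[2]
    if t[0] == "app":
        f = infer(ctx, t[1])
        assert isinstance(f, tuple) and f[0] == "arrow", "expected a function"
        check(ctx, t[2], f[1])                   # argument checked at domain
        return f[2]
    raise TypeError("cannot synthesize a type for %r" % (t,))

def check(ctx, t, ty):
    if isinstance(t, tuple) and t[0] == "lam":   # checked against arrow types
        assert isinstance(ty, tuple) and ty[0] == "arrow", "expected an arrow"
        return check(dict(ctx, **{t[1]: ty[1]}), t[2], ty[2])
    assert infer(ctx, t) == ty, "type mismatch"

ident = ("lam", "x", "x")
check({}, ident, ("arrow", "a", "a"))                  # succeeds silently
print(infer({}, ("ann", ident, ("arrow", "b", "b"))))  # ('arrow', 'b', 'b')
```

The division of labour is the one alluded to above: checking mode propagates information downwards (here, into the binder), so an unannotated abstraction never needs to guess its domain.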
Appendix A Appendix
LF∆ can play the role of a logical framework only if it is decidable. The road map which we follow to establish decidability is the standard one, see e.g. [22]. In particular, we prove, in order: uniqueness of types and kinds, structural properties, normalization for raw well-formed terms, and hence confluence. Then we prove the inversion property, the subderivation property, subject reduction, and finally decidability. {lemma} The following structural properties hold for all the judgement forms of LF∆:

Weakening: a derivable judgement remains derivable in any valid extension of its context.

Strengthening: a declaration can be removed from the context, provided the variable it declares does not occur free in the remainder of the judgement.

Transitivity: a judgement derivable from a declaration can be combined, by substitution, with a derivation of an object of the declared type.

Permutation: two declarations in the context can be permuted, provided the variable declared by the first does not occur free in the second, and the second remains valid at its new position.
[Unicity of Types and Kinds]

If an object is assigned two families, then the two families are equal up to conversion.

If a family is assigned two kinds, then the two kinds are equal up to conversion.
In order to prove strong normalization, we follow the pattern used for pure LF. Namely, we map LF∆-terms into terms of the system of [3] in such a way that redexes in the source language are mapped into redexes in the target language, and then take advantage of Theorem 2.1. Special care is needed in dealing with redexes occurring in type-dependencies, because these need to be flattened at the level of terms.
The forgetful mappings used to this end are defined in Figure 11.