Gödel Logic: from Natural Deduction to Parallel Computation
Propositional Gödel logic G extends intuitionistic logic with the non-constructive principle of linearity (A → B) ∨ (B → A). We introduce a Curry–Howard correspondence for G and show that a simple natural deduction calculus can be used as a typing system. The resulting functional language extends the simply typed λ-calculus via a synchronous communication mechanism between parallel processes, which increases its expressive power. The normalization proof employs original termination arguments and proof transformations implementing forms of code mobility. Our results provide a computational interpretation of G, thus proving A. Avron’s 1991 thesis.
Logical proofs are static. Computations are dynamic. It is a striking discovery that the two coincide: formulas correspond to types in a programming language, logical proofs to programs of the corresponding types, and the removal of detours from proofs to the evaluation of programs. This correspondence, known as the Curry–Howard isomorphism, was first discovered for constructive proofs, in particular for intuitionistic natural deduction and the typed λ-calculus, and was later extended to classical proofs, despite their use of non-constructive principles such as the excluded middle [18, 2] or reductio ad absurdum [17, 30].
Gödel logic, Avron’s conjecture and previous attempts
Twenty-five years have gone by since Avron conjectured that Gödel logic G – one of the most useful and interesting logics intermediate between intuitionistic and classical logic – might provide a basis for parallel λ-calculi. Despite the interest of the conjecture and despite various attempts, no Curry–Howard correspondence has so far been provided for G. The main obstacle has been the lack of an adequate natural deduction calculus. Well-designed natural deduction inferences can indeed be naturally interpreted as program instructions, in particular as typed λ-terms. Normalization, which corresponds to the execution of the resulting programs, can then be used to obtain proofs containing only formulas that are subformulas of the hypotheses or of the conclusion. However, the problem of finding a natural deduction calculus for G with this property, called analyticity, looked hopeless for decades.
All approaches explored so far to provide a precise formalization of G as a logic for parallelism either sacrificed analyticity or tried to devise forms of natural deduction whose structures mirror hypersequents – sequents operating in parallel. Hypersequents were indeed successfully used to define an analytic calculus for G and were intuitively connected to parallel computations: the key rule introduced by Avron to capture the linearity axiom – called communication – enables sequents to exchange their information and hence to “communicate”. The first analytic natural deduction calculus proposed for G indeed uses parallel intuitionistic derivations joined together by the hypersequent separator. Normalization is obtained there only by translation into Avron’s calculus: no reduction rules for deductions and no corresponding λ-calculus were provided. The former task was later carried out via a propositional hyper natural deduction with a normalization procedure; the definition of a corresponding λ-calculus and Curry–Howard correspondence were left as an open problem, which might have a complex solution due to the elaborate structure of hyper deductions. Another attempt along the “hyper line” has also been made. However, not only is the proposed proof system not shown to be analytic, but the associated λ-calculus is not a Curry–Howard isomorphism: its computation rules are not related to proof transformations, i.e., Subject Reduction does not hold.
λG: Our Curry–Howard Interpretation of Gödel Logic
We introduce a natural deduction calculus and a Curry–Howard correspondence for propositional G. We add to the λ-calculus an operator that represents, from the programming viewpoint, parallel computations and communications between them; from the logical viewpoint, the linearity axiom; and from the proof-theoretic viewpoint, the hypersequent separator among sequents. We call the resulting calculus λG: the parallel λ-calculus for G. λG relates to the natural deduction calculus NG for G as the typed λ-calculus relates to the natural deduction calculus for intuitionistic logic.
We prove: the perfect match between computation steps and proof reductions in the Subject Reduction Theorem; the Normalization Theorem, by providing a terminating reduction strategy for λG; and the Subformula Property, as a corollary. The expressive power of λG is illustrated through examples of programs and connections with the π-calculus [26, 33].
The natural deduction calculus NG that we use as a type system for λG is particularly simple: it extends intuitionistic natural deduction with the rule (com) (its typed version is displayed below), which was first considered in earlier work to define a natural deduction calculus for G, but with no normalization procedure. The calculus follows the basic principle of natural deduction that new axioms require new computational reductions; this contrasts with the basic principle of sequent calculus employed in the “hyper approach”, namely that new axioms require new deduction structures. Hence we keep the calculus simple and deal with the complexity of the hypersequent structure on the operational side. Consequently, the programs corresponding to proofs maintain the syntactical simplicity of the λ-calculus. The normalization procedure for NG extends Prawitz’s method with ideas inspired by hypersequent cut-elimination, by normalization in classical logic, and by the embedding between hypersequents and systems of rules; the latter shows that (com) reformulates Avron’s communication rule.
The inference rules of NG are decorated with λG-terms, so that we can directly read proofs as typed programs. The decoration of the inferences is standard, and the typed version of (com) is:
We use the communication variable to represent a private channel between two parallel processes. The computational reductions associated with (com) – cross reductions – enjoy a natural interpretation in terms of higher-order process passing, a feature which is not directly rendered through communication by reference and is also present in the higher-order π-calculus. Nonetheless, cross reductions handle more subtle migration issues. In particular, a cross reduction can be activated whenever a communication channel is ready to transfer information between two parallel processes:
II Preliminaries on Gödel logic
Also known as Gödel–Dummett logic, Gödel logic naturally turns up in a number of different contexts; among them, due to the natural interpretation of its connectives as functions over the real interval [0, 1], G is one of the best known ‘fuzzy logics’.
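To make the fuzzy reading concrete, here is a small sketch (ours, not the paper’s) of the standard Gödel truth functions over [0, 1], which also checks that the linearity axiom is valid under this semantics:

```python
# Gödel semantics over [0, 1]: conjunction is min, disjunction is
# max, and implication returns 1 when the antecedent does not
# exceed the consequent, and the consequent's value otherwise.

def g_and(a, b):
    return min(a, b)

def g_or(a, b):
    return max(a, b)

def g_imp(a, b):
    return 1.0 if a <= b else b

def linearity(a, b):
    # the linearity axiom (A -> B) v (B -> A)
    return g_or(g_imp(a, b), g_imp(b, a))

# Since either v(A) <= v(B) or v(B) <= v(A), one disjunct is
# always 1, so the axiom takes value 1 under every valuation.
samples = [i / 10 for i in range(11)]
assert all(linearity(a, b) == 1.0 for a in samples for b in samples)
```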
Although propositional G is obtained by adding the linearity axiom to any proof calculus for intuitionistic logic, analytic calculi for G have only been defined in formalisms extending the sequent calculus. Among them, arguably, Avron’s hypersequent calculus is the most successful one. In general, a hypersequent calculus is defined by incorporating a sequent calculus (Gentzen’s LJ, in the case of G) as a sub-calculus and allowing sequents to live in the context of finite multisets of sequents.
A hypersequent is a multiset of sequents, written as Γ1 ⊢ A1 | … | Γn ⊢ An where, for all i = 1, …, n, Γi ⊢ Ai is an ordinary sequent.
The symbol “|” is a meta-level disjunction; this is reflected by the presence in the calculus of the external structural rules of weakening and contraction, operating on whole sequents rather than on formulas. The hypersequent design opens the possibility of defining new rules that allow the “exchange of information” between different sequents. It is this type of rule that increases the expressive power of hypersequent calculi compared to sequent calculi. The additional rule employed in Avron’s calculus for G is the so-called communication rule, presented below in a slightly reformulated version (as usual, a possibly empty side hypersequent is allowed in the premises and conclusion):
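Modulo notational variants, the rule and the derivation of the linearity axiom from it can be rendered as follows (our reconstruction; Λ denotes a possibly empty side hypersequent):

```latex
% Avron-style communication rule (one common reformulation)
\[
\frac{\Lambda \mid \Gamma_1, \Gamma_2 \vdash A
      \qquad
      \Lambda \mid \Gamma_1, \Gamma_2 \vdash B}
     {\Lambda \mid \Gamma_1 \vdash A \mid \Gamma_2 \vdash B}
\;(\mathit{com})
\]
% Deriving linearity: take Gamma_1 = {A} and Gamma_2 = {B}
\[
\frac{A, B \vdash B \qquad A, B \vdash A}
     {A \vdash B \mid B \vdash A}\;(\mathit{com})
\qquad\leadsto\qquad
\frac{\vdash A \to B \mid \vdash B \to A}
     {\vdash (A \to B) \lor (B \to A)}
\;(\lor\text{-intro, external contraction})
\]
```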
III Natural Deduction
The very first step in the design of a Curry–Howard correspondence is to lay a solid logical foundation. No architectural mistake is allowed at this stage: the natural deduction calculus must be structurally simple and the reduction rules as elementary as possible. We present such a natural deduction system for Gödel logic: NG extends Gentzen’s propositional natural deduction calculus with a rule accounting for the linearity axiom (A → B) ∨ (B → A). We describe the reduction rules for transforming every deduction into an analytic one and present the ideas behind the Normalization Theorem, which is proved in the λ-calculus framework in Section VI.
NG is the natural deduction version of the sequent calculus with systems of rules; the latter embeds into Avron’s hypersequent calculus for G. Indeed, there is a mapping between derivations in Avron’s calculus and derivations in the LJ sequent calculus for intuitionistic logic with the addition of the system of rules
where the upper rules can only be applied (possibly many times) above the left and the right premise, respectively, of the lowermost rule. The above system, which reformulates Avron’s communication rule, immediately translates into the natural deduction rule below, whose addition to intuitionistic natural deduction leads to a natural deduction calculus for G.
Not all branches of a derivation containing the above rule are themselves derivations. To avoid this, and to keep the proof of the Subformula Property (Theorem V.4) as simple as possible, we use the equivalent rule (com) below, first considered in earlier work.
Definition III.1 (NG).
The natural deduction calculus NG extends intuitionistic natural deduction with the rule (com):
Let ⊢NJ and ⊢NG indicate the derivability relations in intuitionistic natural deduction and in NG, respectively.
Theorem III.1 (Soundness and Completeness).
For any set of formulas Γ and any formula A, A is derivable from Γ in Gödel logic if and only if Γ ⊢NG A.
One direction: applications of (com) can be simulated by ∨ eliminations having as major premiss an instance of the linearity axiom. The other direction easily follows by the following derivation of the linearity axiom:
Notation. To shorten derivations, we will henceforth use
respectively, and call them communication inferences.
As usual, we will use ¬A as shorthand for A → ⊥. Moreover, we exploit the equivalence in G of A ∨ B and ((A → B) → B) ∧ ((B → A) → A) and treat ∨ as a defined connective.
III-A Reduction Rules and Normalization
A normal deduction in NG should have two essential features: every intuitionistic Prawitz-style reduction should have been carried out, and the Subformula Property should hold. Due to the (com) rule, the former is not always enough to guarantee the latter. Here we present the main ideas behind the normalization procedure for NG and the needed reduction rules. The computational interpretation of the rules will be carried out through the λG calculus in Section IV.
The main steps of the normalization procedure are as follows:
We permute down all applications of (com).
The resulting deduction – we say it is in parallel form – consists of purely intuitionistic subderivations joined together by consecutive (com) inferences occurring immediately above the root. This transformation is a key tool in the embedding between hypersequents and systems of rules. The needed reductions are instances of Prawitz-style permutations for ∨ elimination. Their list can be obtained by translating into natural deduction the permutations in Fig. 1.
Once a parallel form is obtained, we interleave the following two steps.
We apply the standard intuitionistic reductions to the parallel branches of the derivation.
This way we normalize each single intuitionistic derivation, and this can be done in parallel. The resulting derivation, however, need not yet satisfy the Subformula Property. Intuitively, the problem is that communications may discharge hypotheses that have nothing to do with their conclusion.
We apply specific reductions to replace the (com) applications that violate the Subformula Property.
These reductions – called cross reductions – account for hypersequent cut-elimination. They allow us to get rid of the new detours that appear in configurations like the one below on the left. To remove these detours, a first idea would be to simultaneously move the two subdeductions to the right and to the left, thus obtaining the derivation below on the right:
In fact, in the context of Krivine’s realizability, Danos and Krivine studied the linearity axiom as a theorem of classical logic and discovered that its realizers implement a restricted version of this transformation. Their transformation, however, does not lead to the Subformula Property for NG. The unrestricted transformation above, on the other hand, cannot work: the moved subdeduction might contain the hypothesis discharged on the other side, and hence cannot be moved to the right. Even worse, it may depend on hypotheses that are locally open, i.e., discharged below it but above the (com) application. Again, it is not possible to move the subdeduction to the right as naively thought, otherwise new global hypotheses would be created.
We overcome these barriers with our cross reductions. Let us highlight the hypotheses of the two subdeductions that are discharged below them but above the application of (com). Assume, moreover, that neither subdeduction contains, among its hypotheses, those discharged by (com) on the opposite side. A cross reduction transforms the deduction below left into the deduction below right (when the original proof discharges in each branch exactly one occurrence of the hypotheses)
and into the following deduction, in the general case
where the double-bar notation stands for an application of (com) between two sets of hypotheses: to prove the formulas of the first set from the conjunction of the formulas of the second, one proves the conjunction of the formulas of the first set by means of a communication inference and then obtains each of its formulas by a series of ∧ eliminations, and vice versa.
Mindless applications of the cross reductions might lead to dangerous loops, see, e.g., Example IV.2. To avoid them, we allow cross reductions to be performed only when the proof is not analytic. Thanks to this and to other restrictions, we prove termination and thus the Normalization Theorem.
IV The λG-Calculus
We introduce λG, our parallel λ-calculus for G. λG extends the standard Curry–Howard correspondence for intuitionistic natural deduction with a parallel operator that interprets the inference for the linearity axiom. We describe λG-terms and their computational behavior, proving as the main result of the section the Subject Reduction Theorem, which states that the reduction rules preserve types.
- Linearity Axiom
- Ex Falso Quodlibet
The table above defines a type assignment for λG-terms, called proof terms, which is isomorphic to NG. The typing rules for axioms, implication, conjunction and ex falso quodlibet are standard and give rise to the simply typed λ-calculus, while parallelism is introduced by the rule for the linearity axiom.
Proof terms may contain variables of type A for every formula A; whenever the type is not important, we denote them simply as x, y, …. For clarity, the variables introduced by the (com) rule will often be denoted by the letters a, b, …, but they do not form a separate syntactic category. A variable bound by a λ is called a λ-variable, and a variable binding the two sides of a parallel composition is called a communication variable: it represents a private communication channel between the two parallel processes.
The free and bound variables of a proof term are defined as usual; for the new parallel terms, all free occurrences of the communication variable in the two components are bound in the whole term. In the following we assume the standard renaming conventions and alpha equivalences that are used to avoid variable capture in the reduction rules.
Notation. The connective ∧ associates to the right, ⟨t1, …, tn⟩ denotes the term ⟨t1, ⟨t2, …, tn⟩⟩, and πi, for 0 ≤ i < n, the sequence of projections selecting the i-th element of the sequence. Therefore, for every sequence of formulas A1, …, An, the expression A1 ∧ ⋯ ∧ An denotes A1 ∧ (A2 ∧ ⋯ ∧ An), or A1 if n = 1.
Often, when the list x1, …, xn includes all the free variables of a proof term t of type A, we shall write x1 : A1, …, xn : An ⊢ t : A. From the logical point of view, this represents a natural deduction of A from the hypotheses A1, …, An. We shall write ⊢ t : A whenever t is closed, and the notation ⊢ A means provability of A in propositional Gödel logic. If the parallel operator does not occur in t, then t is a simply typed λ-term representing an intuitionistic deduction.
We define as usual the notion of context as the part of a proof term that surrounds a hole, represented by some fixed variable. The notation C[t] singles out a particular occurrence of the subterm t in the whole term C[t]. We shall only need particularly simple contexts, which happen to be simply typed λ-terms.
Definition IV.1 (Simple Contexts).
A simple context C is a simply typed λ-term with some fixed variable occurring exactly once. For any proof term u of the same type as that variable, C[u] denotes the term obtained by replacing the variable with u in C, without renaming any bound variable.
As an example, the expression is a simple context and the term can be written as .
We define below the notion of stack, corresponding to Krivine’s stacks and also known as a continuation, because it embodies a series of tasks waiting to be carried out. A stack represents, from the logical perspective, a series of elimination rules; from the λ-calculus perspective, a series of either projection operations or arguments.
Definition IV.2 (Stack).
A stack is a sequence σ = σ1 σ2 … σn such that, for every i, exactly one of the following holds: either σi = u, with u a proof term, or σi = πj, with j ∈ {0, 1}. We denote the empty sequence by ε. If t is a proof term, t σ denotes as usual the term (…((t σ1) σ2) … σn).
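The way a stack acts on a term can be sketched as follows, over an illustrative tuple encoding of terms (the names and encoding are ours, not the paper’s syntax):

```python
# A stack sigma is a list of pending eliminations: ("arg", u)
# applies the term to u, ("proj", i) takes the i-th component.

def apply_stack(t, stack):
    # t sigma = (((t s1) s2) ... sn), folding eliminations leftward
    for s in stack:
        if s[0] == "arg":
            t = ("app", t, s[1])
        else:
            t = ("proj", s[1], t)
    return t

f = ("var", "f")
assert apply_stack(f, [("arg", ("var", "x")), ("proj", 0)]) == \
    ("proj", 0, ("app", ("var", "f"), ("var", "x")))
```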
We now define the notion of strong subformula, which is essential for defining the reduction rules of the λG-calculus and for proving Normalization. The technical motivations will become clear in Sections V and VI, but the intuition is that the new types created by cross reductions must always be strong subformulas of already existing types. To define the concept of strong subformula we also need the following definition.
Definition IV.3 (Prime Formulas and Prime Factors).
A formula is said to be prime if it is not a conjunction. Every formula is a conjunction of prime formulas, called its prime factors.
Definition IV.4 (Strong Subformula).
A formula B is said to be a strong subformula of a formula A if B is a proper subformula of some prime proper subformula of A.
Note that in the present context prime formulas are either atomic formulas or arrow formulas, so a strong subformula of A must actually be a proper subformula of an arrow formula that is a proper subformula of A. The following characterization of the strong subformula relation will often be used.
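Under a tuple encoding of formulas (our illustration; `atom`, `imp`, `and` are assumed constructors), prime factors and strong subformulas can be computed directly from the two definitions above:

```python
# Formulas: ("atom", name) | ("imp", A, B) | ("and", A, B)

def prime_factors(f):
    # every formula is a conjunction of prime (non-conjunction) factors
    if f[0] == "and":
        return prime_factors(f[1]) + prime_factors(f[2])
    return [f]

def proper_subformulas(f):
    # all subformulas of f except f itself
    if f[0] == "atom":
        return []
    subs = []
    for g in (f[1], f[2]):
        subs.append(g)
        subs.extend(proper_subformulas(g))
    return subs

def is_strong_subformula(b, a):
    # b is a proper subformula of some prime proper subformula of a
    return any(b in proper_subformulas(p)
               for p in proper_subformulas(a) if p[0] != "and")

A, B, C = ("atom", "A"), ("atom", "B"), ("atom", "C")
F = ("imp", ("imp", A, B), C)
assert is_strong_subformula(A, F)      # A sits inside the arrow A -> B
assert not is_strong_subformula(C, F)  # C is proper, but not strong
```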
Proposition IV.1 (Characterization of Strong Subformulas).
Suppose B is any strong subformula of A. Then:
If A = A1 ∧ ⋯ ∧ An, where n > 1 and A1, …, An are prime, then B is a proper subformula of one among A1, …, An.
If A = C → D, then B is a proper subformula of a prime factor of C or D.
See the Appendix.∎
Definition IV.5 (Multiple Substitution).
Let t be a proof term, x0, …, xn a sequence of variables, and u a proof term. The multiple substitution replaces each occurrence of the variable xi in t with the i-th projection of u.
We now seek a measure of how complex the communication channel of a term is. Logic will be our guide. First, it makes sense to consider the two implication types with which the channel occurs in the two parallel processes. Moreover, assume the whole term has type A and its free variables have types A1, …, An. The Subformula Property tells us that, no matter what our notion of computation turns out to be, when the computation is done, no object of a type more complex than the types of the inputs and the output should appear. Hence, if the prime factors of the channel’s types are not subformulas of A1, …, An, A, then these prime factors should be taken into account in the complexity measure we are looking for. The actual definition is the following.
Definition IV.6 (Communication Complexity).
Let u ∥a v be a proof term with free variables of types A1, …, An. Assume that a occurs with type B → C in u and thus with type C → B in v.
The pair (B, C) is called the communication kind of a.
The communication complexity of a is the maximum among 0 and the numbers of symbols of the prime factors of B or C that are neither proper subformulas of the type of u ∥a v nor strong subformulas of one among A1, …, An.
We now explain the basic reduction rules for the proof terms of λG, which are given in Figure 1. As usual, we also have closure under contexts: C[t] ↦ C[u] whenever t ↦ u, for any context C. We denote by ↦* the reflexive and transitive closure of the one-step reduction ↦.
Intuitionistic Reductions. These are the familiar computational rules for the simply typed λ-calculus, representing the operations of applying a function to an argument and taking a component of a pair. From the logical point of view, they are the standard Prawitz reductions for intuitionistic natural deduction.
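A minimal sketch of the two reductions, over an illustrative tuple encoding of λ-terms (our naming, not the paper’s; substitution is simplified by assuming bound names are distinct from the free names of the argument):

```python
# Terms: ("var", x) | ("lam", x, body) | ("app", t, u)
#        ("pair", t, u) | ("proj", i, t)

def subst(t, x, u):
    # naive substitution t[u/x]; assumes bound names of t are
    # distinct from the free names of u (no capture handling)
    tag = t[0]
    if tag == "var":
        return u if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, u))
    if tag in ("app", "pair"):
        return (tag, subst(t[1], x, u), subst(t[2], x, u))
    return ("proj", t[1], subst(t[2], x, u))

def beta(t):
    # (lam x. body) u  ->  body[u/x]
    if t[0] == "app" and t[1][0] == "lam":
        _, (_, x, body), arg = t
        return subst(body, x, arg)
    return t

def proj(t):
    # <t0, t1> pi_i  ->  t_i
    if t[0] == "proj" and t[2][0] == "pair":
        return t[2][1 + t[1]]
    return t

identity = ("lam", "x", ("var", "x"))
assert beta(("app", identity, ("var", "y"))) == ("var", "y")
assert proj(("proj", 0, ("pair", ("var", "a"), ("var", "b")))) == ("var", "a")
```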
Cross Reductions. The reduction rules for the parallel operator model a communication mechanism between parallel processes. In order to apply a cross reduction to a term
several conditions have to be met. These conditions are both natural and needed for the termination of computations.
First, we require the communication complexity of the channel to be greater than 0; this, again, is a warning that the Subformula Property does not hold. Here we use a logical property as a computational criterion for answering the question: when should computation stop? An answer is crucial here because, as shown in Example IV.2, unrestricted cross reductions do not always terminate. In λ-calculi the Subformula Property fares pretty well as a stopping criterion. In a sense, it detects all the essential operations that really have to be done. For example, in the simply typed λ-calculus, a closed term that has the Subformula Property must be a value, that is, a λ-abstraction or a pair. Indeed, a closed term which is not a value must be of the form t σ for some stack σ (see Definition IV.2), where t is a redex; but then t would have a more complex type than the type of the whole term, contradicting the Subformula Property.
Second, we require the two processes to be normal simply typed λ-terms: simply typed λ-terms, because they are easier to execute in parallel; normal, because we want their computations to go on until they are really stuck and communication is unavoidable. Third, we require the displayed occurrences of the channel to be as rightmost as possible; this is needed for logical soundness: how could the transmitted term otherwise be moved to the other side, e.g., if it contained the channel itself?
Assuming that all the conditions above are satisfied, we can now start to explain the cross reduction
Here, the communication channel has been activated, because the two processes are synchronized and ready to transfer their messages. The parallel operator lets the two occurrences of the channel communicate: one message travels to the right and the other to the left, each replacing the argument of the opposite channel occurrence. If the messages were data, like numbers or constants, everything would be simple and they could be sent as they are; but in general this is not possible. The problem is that the free variables of a message which are bound in its process by some λ cannot be permitted to become free; otherwise, the connection between the binders and the occurrences of the variables would be lost, and these could no longer be replaced by actual values when the inputs for the λs become available. The same holds symmetrically for the other message. For example, in a situation like
the naive transformation would just be wrong: the transmitted term would never get back actual values for its variables when they become available.
These issues are typical of process migration, and can be solved with the concepts of code mobility and closure. Informally, code mobility is defined as the capability to dynamically change the bindings between code fragments and the locations where they are executed. Indeed, in order to be executed, a piece of code needs a computational environment and its resources, like data, program counters or global variables. In our case the simple contexts are the computational environments, or closures, of the two processes, and the bound variables are the resources they need. Now, moving a process outside its environment always requires extreme care: the bindings between a process and the environment resources must be preserved. This is the task of the migration mechanisms, which allow a migrating process to correctly resume its execution in the new location. Our migration mechanism creates a new communication channel between the programs that have been exchanged: the exchanged code fragments keep their original bindings to their global variables.
The change of variables has the effect of reconnecting the exchanged code fragments to their old inputs:
In this way, when they become available, the data will be sent to the migrated code fragments through the new channel. Note that in the result of the cross reduction the two original processes are cloned, because their code fragments can be needed again. Thus the channel behaves as a replicated input and replicated output channel. E.g., in the π-calculus, replicated input is coded by the bang operator of linear logic:
With symmetrical message passing and a “!” also in front of the output, one would obtain a version of our cross reduction. Finally, as detailed in Ex. VII.2, whenever the exchanged messages are closed terms the cross reduction is simpler and only maintains the first two of the four processes produced in the general case.
Example IV.1 (Parallelism in λG and in the π-calculus).
A private channel is rendered in the π-calculus [26, 33] by the restriction operator ν. Recall that the π-calculus term P | Q represents two processes running in parallel. The corresponding λG term is defined using a fresh channel. As no cross reduction involving the two components can be applied, the whole term reduces to neither of them, so that both can run in parallel.
Let a be a communication variable occurring bound in the normal terms u and v. Without the condition on the communication complexity of a, a loop could be generated:
In Sec. VI we show that if the communication complexity of a were greater than 0, this reduction sequence would terminate. What, then, is happening here? Intuitively, u and v are normal simply typed λ-terms, which forces the communication complexity of a to be 0.
Permutation Reductions. These regulate the interaction between parallel operators and the other computational constructs. The first four reductions are the Prawitz-style permutation rules between parallel operators and eliminations. We also add two other groups of reductions: three permutations between parallel operators and introductions, and two permutations between parallel operators themselves. The first group will be needed to rewrite any proof term into a parallel composition of simply typed λ-terms (Proposition V.3). The second group is needed to address the scope-extrusion issue of private channels. We point out that a parallel operator is allowed to commute with other parallel operators only when it is strictly necessary, that is, when the communication complexity of its channel is greater than 0 and thus signals a violation of the Subformula Property.
Example IV.3 (Scope Extrusion and the π-calculus).
As example of scope extrusion, let us consider the term
Here the first process wishes to send a channel to another process along a second channel, but this is not possible, the former channel being private. This issue is solved in the π-calculus using the congruence νa (P | Q) ≡ P | νa Q, provided that a does not occur in P, a condition that can always be satisfied by α-conversion. Gödel logic offers, and actually forces, a different solution, which is not just permuting the parallel operator inward but also duplicating it:
After this reduction the channel can be sent. If the communication variable does not occur in the other component, we have a further simplification step:
obtaining associativity of parallel composition as in the π-calculus. However, if the variable does occur there, this last reduction step is not possible and we keep both copies of the process: it is indeed natural to allow both parallel components to communicate with it.
Everything works as expected: the reduction steps in Fig. 1 preserve types at the level of proof terms, i.e., they correspond to logically sound proof transformations. Indeed:
Theorem IV.2 (Subject Reduction).
If t : A and t ↦* u, then u : A and all the free variables of u appear among those of t.
It is enough to prove the theorem for basic reductions: if t : A and t ↦ u, then u : A. The proof that the intuitionistic reductions and the permutation rules preserve the type is completely standard (full proof in the Appendix). Cross reductions require straightforward considerations as well. Indeed, suppose
Since the redex is well typed, its constituent terms are correct. Therefore the terms obtained from them by multiple substitution (Definition IV.5) are correct as well. The assumptions are that the first variable sequence consists of the free variables of the first message that are bound in its context, the second of the free variables of the second message that are bound in its context, that the old channel occurs in neither message, and that the new channel is fresh. Therefore, by construction, all these variables are bound in the reduct, and no new free variable is created.∎
where the two processes are normal simply typed λ-terms placed in simple contexts; the first variable sequence consists of the free variables of the first message that are bound in its context, and the second of the free variables of the second message that are bound in its context; the two new types are the conjunctions of the types of the variables in these sequences; the displayed occurrences of the channel are the rightmost in both processes; the new channel is fresh; and the communication complexity of the old channel is greater than 0.
Definition IV.7 (Normal Forms and Normalizable Terms).
A redex is a term t such that t ↦ u, for some term u, by one of the basic reductions of Figure 1. A term t is called a normal form or, simply, normal if there is no u such that t ↦ u. We denote by NF the set of normal λG-terms.
A sequence, finite or infinite, of proof terms t1, t2, … is said to be a reduction of t if t = t1 and, for all i, ti ↦ ti+1. A proof term t of λG is normalizable if there is a finite reduction of t whose last term is a normal form.
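A normalizable term is thus one for which some strategy reaches a normal form in finitely many steps; a generic driver loop for such a strategy can be sketched as follows (the names and the fuel bound are our own, not the paper’s):

```python
# A generic normalization driver: repeatedly contract a redex chosen
# by the strategy until no redex remains (step returns None).

def normalize(t, step, fuel=10_000):
    """step(t) returns a reduct of t, or None when t is normal."""
    for _ in range(fuel):
        u = step(t)
        if u is None:
            return t
        t = u
    raise RuntimeError("no normal form reached within the fuel bound")

# toy strategy: counting down to 0 plays the role of reduction
assert normalize(5, lambda n: n - 1 if n > 0 else None) == 0
```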
Definition IV.8 (Parallel Form).
A term is in parallel form whenever, removing the parentheses, it can be written as a parallel composition t1 ∥ t2 ∥ ⋯ ∥ tn, where each ti, for 1 ≤ i ≤ n, is a simply typed λ-term.
V The Subformula Property
We show that normal λG-terms satisfy the important Subformula Property (Theorem V.4). This, in turn, implies that our Curry–Howard correspondence for G is meaningful from the logical perspective and produces analytic proofs.
We start by establishing an elementary property of simply typed λ-terms, which will turn out to be crucial for our normalization proof. It ensures that the type of every bound hypothesis appearing in a normal intuitionistic proof is a strong subformula of one of the premises or a proper subformula of the conclusion. This property sheds light on the complexity of cross reductions, because it implies that the new formulas introduced by these operations are always smaller than the local premises.
Proposition V.1 (Bound Hypothesis Property).
Suppose t is a normal simply typed λ-term of type A whose free variables have types A1, …, An, and let z be a variable of type B occurring bound in t. Then one of the following holds:
B is a proper subformula of a prime factor of A.
B is a strong subformula of one among A1, …, An.
By induction on . See the Appendix. ∎
The next proposition says that each occurrence of any hypothesis of a normal intuitionistic proof must be followed by an elimination rule whenever the hypothesis is neither ⊥ nor a subformula of the conclusion nor a proper subformula of some other premise.
Let t be a normal simply typed λ-term of type A whose free variables have types A1, …, An, and let x be one of its free variables, of type B.
One of the following holds:
Every occurrence of x in t is of the form x u, for some proof term u, or x πi, for some projection πi.
B is ⊥, or B is a subformula of A, or B is a proper subformula of one among the formulas A1, …, An.
Easy structural induction on the term. See the Appendix.∎
Proposition V.3 (Parallel Normal Form Property).
If t is a normal λG-term, then it is in parallel form.
Easy structural induction on using the permutation reductions. See the Appendix.∎
We finally prove the Subformula Property: a normal proof does not contain concepts that do not already appear in the premises or in the conclusion.
Theorem V.4 (Subformula Property).
Suppose t is a normal λG-term of type A whose free variables have types A1, …, An. Then: for each communication variable occurring bound in t with communication kind (B, C), the prime factors of B and C are proper subformulas of A1, …, An, A.
The type of any subterm of t which is not a bound communication variable is either a subformula or a conjunction of subformulas of the formulas A1, …, An, A.
We proceed by induction on t. By Proposition V.3, t is in parallel form and each of its parallel components is a simply typed λ-term. We only show the case in which t is a parallel composition of two terms u and v joined by a channel a. Let (B, C) be the communication kind of a; we first show that the communication complexity of a is 0. We reason by contradiction and assume that it is greater than 0. Now, u and v are either simply typed λ-terms or themselves parallel compositions. The second case is not possible, otherwise a permutation reduction could be applied to t. Thus u and v are simply typed λ-terms. Since the communication complexity of a is greater than 0, the types of the occurrences of a are not subformulas of A1, …, An, A. By Prop. V.2, every occurrence of a in u and in v is applied to a term or to a projection. Hence, we can write
where the displayed occurrences of a are the rightmost ones, inside simple contexts. Hence a cross reduction of t can be performed, which contradicts the fact that t is normal. Since we have established that the communication complexity of a is 0, the prime factors of B and C must be proper subformulas of A1, …, An, A. Now, by induction hypothesis applied to u and v, for each communication variable occurring bound in them, the prime factors of its communication kind are proper subformulas of the corresponding premises and conclusion, and thus of A1, …, An, A; moreover, the type of any subterm of u or v which is not a communication variable is either a subformula or a conjunction of subformulas of those formulas, and thus of A1, …, An, A.∎
Our statement of the Subformula Property is slightly different from the usual one. However the latter can be easily recovered using the communication rule of Section III and additional reduction rules. As the resulting derivations would be isomorphic but more complicated, we prefer the current statement.
VI. The Normalization Theorem
Our goal is to prove the Normalization Theorem for : every proof term reduces in a finite number of steps to a normal form. By Subject Reduction, this implies that proofs normalize. We shall define a reduction strategy for terms of : a recipe for selecting, in any given term, the subterm to which one of our basic reductions is to be applied. We remark that the permutations between communications have been adopted to simplify the normalization proof; at the same time, however, they undermine strong normalization, because they enable silly loops, as in cut-elimination for sequent calculi. Further restrictions on the permutations might be enough to recover strong normalization, but we leave this as an open problem.
The idea behind our normalization strategy is to employ a suitable complexity measure for terms and, each time a reduction has to be performed, to choose the term of maximal complexity. Since cross reductions can be applied as long as there is a violation of the Subformula Property, the natural approach is to define the complexity measure as a function of some fixed set of formulas, representing the formulas that can be safely used without violating the Subformula Property.
Definition VI.1 (Complexity of Parallel Terms).
Let be a finite set of formulas. The -complexity of a term is the sequence of four natural numbers, where:
if the communication kind of is , then is the maximum among and the number of symbols of the prime factors of or that are not subformulas of some formula in ;
is the number of occurrences of in and ;
is the sum of the lengths of the intuitionistic reductions of and to reach intuitionistic normal form;
is the number of occurrences of in and .
The -communication-complexity of is .
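Concretely, a complexity in the sense of Definition VI.1 can be modeled as a 4-tuple of natural numbers compared lexicographically; Python tuples compare lexicographically out of the box, so the intended order needs no extra code. The sketch below (subterm names and complexity values are made up, and list order stands in for the "smallest subterm" choice) illustrates how the strategy selects the subterm of maximal complexity:

```python
# D-complexities modeled as 4-tuples of naturals; Python's built-in
# tuple comparison is lexicographic, matching the intended ordering.

def max_complexity_subterm(subterms):
    """subterms: list of (name, complexity) pairs, complexity a 4-tuple.
    Returns the first pair of maximal complexity; 'first in list order'
    stands in here for 'smallest subterm' (an illustrative assumption)."""
    best = max(c for _, c in subterms)
    for t, c in subterms:
        if c == best:
            return t, c

# Toy data: three hypothetical subterms with their complexities.
subterms = [("u", (2, 1, 0, 1)), ("v", (2, 3, 5, 0)), ("w", (1, 9, 9, 9))]
print(max_complexity_subterm(subterms))  # prints ('v', (2, 3, 5, 0))
```

Note that ("w", (1, 9, 9, 9)) loses despite its large later components: the first component dominates, which is exactly the behavior the lexicographic order is meant to enforce.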
For clarity, we define the recursive normalization algorithm that represents the constructive content of the proofs of Prop. VI.1 and VI.3, which are used to prove the Normalization Theorem. In essence, our master reduction strategy iterates the basic reduction relation defined below, whose goal is to permute the smallest redex of maximal complexity until and are simply typed λ-terms, then to normalize them, and finally to apply the cross reductions.
Definition VI.2 (Side Reduction Strategy).
Let be a term with free variables and be the set of the proper subformulas of and the strong subformulas of the formulas . Let be the smallest subterm of , if any, among those of maximal -complexity and let be its -complexity. We write
whenever has been obtained from by applying to :
a permutation reduction
if and or ;
a sequence of intuitionistic reductions normalizing both and , if and ;
a cross reduction if and , immediately followed by intuitionistic reductions normalizing the newly generated simply typed λ-terms and, if possible, by applications of the cross reductions and to the whole term;
a cross reduction and if .
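Since each case of the definition fires according to which components of the selected redex's complexity are nonzero, the dispatch can be pictured as follows. The mapping between complexity components and reduction kinds shown here is an illustrative assumption, not the exact guards of Definition VI.2:

```python
# Hypothetical dispatch for the side reduction strategy: given the
# complexity (c1, c2, c3, c4) of the selected redex, pick the kind of
# basic reduction to apply. The guard structure below reconstructs the
# elided conditions only schematically (an assumption for illustration).

def choose_reduction(c):
    c1, c2, c3, c4 = c
    if c4 > 0:
        return "permutation reduction"
    if c3 > 0:
        return "intuitionistic normalization"
    if c2 > 0:
        return "cross reduction + renormalization"
    return "simplification cross reduction"

print(choose_reduction((3, 0, 2, 0)))  # prints intuitionistic normalization
```

The point of the ordering is that each case is only reachable once the later-component work is exhausted, mirroring how the strategy always attacks the component that currently dominates the lexicographic comparison.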
Definition VI.3 (Master Reduction Strategy).
We define a normalization algorithm taking as input a typed term and producing a term such that . Assume that the free variables of are and let be the set of the proper subformulas of and the strong subformulas of the formulas . The algorithm performs the following operations.
If is not in parallel form, then, using permutation reductions, it is reduced to a term in parallel form, and the algorithm is recursively executed on the result.
If is a simply typed λ-term, it is normalized and returned. If is not a redex, then let and . If is normal, it is returned. Otherwise, is recursively executed.
If is a redex, we select the smallest subterm of having maximal -communication-complexity . A sequence of terms is produced
such that has -communication-complexity strictly smaller than . We substitute for in obtaining and recursively execute .
We observe that in step 2 of the algorithm, by construction, is not a redex. After and are normalized to and respectively, it can still be the case that is not normal, because some free variables of and may disappear during the normalization, causing a new violation of the Subformula Property that transforms into a redex, even though was not one.
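The recursive control structure of the algorithm can be sketched on a deliberately crude toy model of terms (an assumption for illustration only: a "simply typed term" carries just a redex count, a "parallel term" just the number of cross reductions still needed; actual terms of the calculus are of course far richer):

```python
# Toy model of the master normalization algorithm's recursion.
#   ("lam", n)       -- a simply typed lambda-term with n redexes
#   ("par", u, v, k) -- a parallel composition needing k cross reductions
# The shape of the recursion mirrors steps 2-3: normalize both sides,
# then perform a cross reduction, which strictly decreases k but may
# create new intuitionistic redexes in the sides.

def normalize(t):
    if t[0] == "lam":                  # step 2: intuitionistic normalization
        return ("lam", 0)
    _, u, v, k = t
    u, v = normalize(u), normalize(v)  # normalize the two components
    if k == 0:                         # communication exhausted: it disappears
        return ("lam", 0)
    # step 3: one cross reduction; k decreases, a fresh redex may appear
    return normalize(("par", ("lam", 1), v, k - 1))

print(normalize(("par", ("lam", 2), ("par", ("lam", 0), ("lam", 1), 1), 2)))
# prints ('lam', 0)
```

Termination of this toy is immediate because k strictly decreases at each recursive call on a parallel term; the actual proof replaces this single counter with the lexicographic complexity measure.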
The first step of the normalization proof consists in showing that any term can be reduced to a parallel form.
Let be any term. Then , where is a parallel form.
Easy structural induction on . See the Appendix.∎
We now prove that any term in parallel form can be normalized with the help of the algorithm .
Let be a term in parallel form which is not a simply typed λ-term and a set of formulas containing all proper subformulas of and closed under subformulas. Assume that is the maximum among the -communication-complexities of the subterms of . Assume that the free variables of are such that for every , either each strong subformula of is in , or each proper prime subformula of which has more than symbols is in . Suppose moreover that no subterm with -communication-complexity contains a subterm of the same -communication-complexity. Then there exists such that and the maximum among the -communication-complexities of the subterms of is strictly smaller than .
We prove the lemma by lexicographic induction on the pair
where is the number of subterms of with maximal -complexity among those with -communication-complexity .
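The lexicographic order on tuples of natural numbers is well-founded: a strictly decreasing sequence is finite even when later components grow back after an earlier component drops, which is exactly what licenses this style of induction. A minimal runnable illustration (the regrowth value 2 is an arbitrary choice):

```python
# Well-foundedness of the lexicographic order on N^4: any strictly
# decreasing chain terminates, even though later components may regrow
# arbitrarily whenever an earlier component is decremented.

def reduce_once(c):
    """Strictly decrease c in the lexicographic order, or return None."""
    c1, c2, c3, c4 = c
    if c4: return (c1, c2, c3, c4 - 1)
    if c3: return (c1, c2, c3 - 1, 2)   # later component grows back
    if c2: return (c1, c2 - 1, 2, 2)
    if c1: return (c1 - 1, 2, 2, 2)
    return None                         # reached (0, 0, 0, 0)

steps, c = 0, (0, 1, 1, 1)
while c is not None:
    steps += 1
    c = reduce_once(c)
print(steps)  # prints 14: the descent is finite despite the regrowth
```

In the proof, the first component tracks the number of subterms of maximal complexity, so each pass either shrinks that number or leaves it fixed while strictly decreasing a later component; the toy above shows why such a process cannot run forever.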
Let be the smallest subterm of having -complexity . Four cases can occur.
(a) , with . We first show that the term is a redex. Assume that the free variables of are among and that the communication kind of is .
Suppose by contradiction that all the prime factors of and are proper subformulas of or strong subformulas of one among . Given that , there is a prime factor of or such that has symbols and does not belong to . There are two possible cases: (i) is a proper subformula of a prime proper subformula