Abstract

We take a fresh look at the logics of informational dependence and independence of Hintikka and Sandu and Väänänen, and their compositional semantics due to Hodges. We show how Hodges’ semantics can be seen as a special case of a general construction, which provides a context for a useful completeness theorem with respect to a wider class of models. We shed some new light on each aspect of the logic. We show that the natural propositional logic carried by the semantics is the logic of Bunched Implications due to Pym and O’Hearn, which combines intuitionistic and multiplicative connectives. This introduces several new connectives not previously considered in logics of informational dependence, but which we show play a very natural rôle, most notably intuitionistic implication. As regards the quantifiers, we show that their interpretation in the Hodges semantics is forced, in that they are the image under the general construction of the usual Tarski semantics; this implies that they are adjoints to substitution, and hence uniquely determined. As for the dependence predicate, we show that this is definable from a simpler predicate, of constancy or dependence on nothing. This makes essential use of the intuitionistic implication. The Armstrong axioms for functional dependence are then recovered as a standard set of axioms for intuitionistic implication. We also prove a full abstraction result in the style of Hodges, in which the intuitionistic implication plays a very natural rôle.

Section 1 Introduction

Our aim in this paper is to take a fresh look at the logics of informational dependence and independence [Hintikka and Sandu, 1989, Hintikka and Sandu, 1996, Väänänen, 2007], and their compositional semantics due to Wilfrid Hodges [Hodges, 1997a, Hodges, 1997b]. We shall focus on Dependence Logic, introduced by the second author [Väänänen, 2007].

The main objective of Hodges’ work was to provide a compositional model-theoretic semantics for the IF-logic of Hintikka and Sandu [Hintikka and Sandu, 1989, Hintikka and Sandu, 1996], which matched their “game-theoretical semantics”. This was achieved by lifting the standard Tarski semantics of first-order formulas, given in terms of satisfaction in a structure with respect to an assignment to the free variables, to satisfaction by sets of assignments.

We seek a deeper understanding of Hodges’ construction:

  • First and foremost, what is going on? Where does the Hodges construction come from? Is it canonical in any way? Why does it work? What structures are really at play here?

  • Because of the equivalence of Dependence Logic (or variants such as IF-logic) under this semantics to (a significant fragment of) second-order logic, there is no hope for a completeness theorem. But we may get a useful completeness theorem with respect to a wider class of models. Understanding the general algebraic context for the semantics points the way to such a completeness notion.

  • We can also look for representation theorems, with some infinitary ingredients.

The results of our investigation are quite surprising conceptually (at least to us). The main points can be summarized as follows.

  • We find a general context for Hodges’ construction. We shall not treat it in full generality here, as the general account is best stated in the language of categorical logic [Lawvere, 1969, Pitts, 2000], and we wish to avoid undue technicalities. However, we will indicate the possibilities for a general algebraic semantics, as the basis for a useful completeness theorem.

  • We find that the natural propositional logic associated with the Hodges construction is the logic of Bunched Implication of Pym and O’Hearn [O’Hearn and Pym, 1999, Pym, 2002], which combines intuitionistic and multiplicative linear connectives.

  • This not only yields a more natural view of the strangely asymmetric notions of conjunction and disjunction in the Hodges semantics (one is intuitionistic, while “disjunction” is actually multiplicative conjunction!), it also brings into prominence some connectives not previously considered in the setting of IF-logic or Dependence logic, in particular intuitionistic implication. This enables a novel analysis of the Dependence predicate of [Väänänen, 2007], as a Horn clause with respect to a more primitive predicate of single-valuedness. The well-known Armstrong axioms for functional dependence [Armstrong, 1974] then fall out as a standard axiomatization of intuitionistic (but not classical!) implication.

  • Intuitionistic implication also plays a natural rôle in our version of a full abstraction theorem in the sense of Hodges.

  • The construction is shown to lift the interpretation of the standard quantifiers in a canonical way, so that quantifiers are uniquely determined as the adjoints to substitution [Lawvere, 1969], just as in the standard Tarski semantics of first-order logic. This is also extended to characterizations of the dependence-friendly quantifiers of [Väänänen, 2007] as adjoints.

The plan of the remainder of the paper is as follows. In the next section we provide background on branching quantifiers, IF-logic, dependence logic, and Hodges’ semantics. Then in section 3 we show how the Hodges semantics is an instance of a general algebraic construction, in which the connectives of BI-logic arise naturally. In section 4, we show that the interpretation of the quantifiers in the Hodges construction is the canonical lift of the standard interpretation of the quantifiers as adjoints, and hence is uniquely determined. We also use the intuitionistic implication to show how the dependence-friendly quantifiers can be interpreted as certain adjoints. In section 5, we show how the intuitionistic implication arises naturally in the proof of a full abstraction theorem. In section 6, we show how the dependence predicate can be analyzed in terms of a more primitive predicate of single-valuedness, using the intuitionistic implication. This turns the “Armstrong axioms” into standard theorems of intuitionistic implicational logic. The final section outlines some further directions.

Section 2 Dependence, Independence and Information Flow

We begin with a standard example: the formal definition of continuity for a function f on the real numbers:

∀a ∀ε > 0 ∃δ > 0 ∀x (|x - a| < δ → |f(x) - f(a)| < ε)

This definition is often explained in current calculus courses in terms of an “epsilon-delta game”. The Adversary proposes a number, ε > 0, as a measure of how close we must stay to the value of f(a); we must then respond with a number, δ > 0, such that, whenever the input x is within the interval (a - δ, a + δ), the output f(x) does indeed pass the ε-test of closeness to f(a). Clearly, the choice of δ will depend on that of ε; the nesting of the quantifiers expresses this dependency.

This is the definition of global continuity of f, expressed in terms of local continuity at every point a. This means that the choice of δ will depend, not only on ε, but on a also. Now consider the definition of uniform continuity:

∀ε > 0 ∃δ > 0 ∀a ∀x (|x - a| < δ → |f(x) - f(a)| < ε)

Here δ still depends on ε, but must be chosen independently of a. This variation in dependency is tracked syntactically by the different order of the quantifiers. Indeed, it seems that it was only after the distinction between pointwise and uniform notions of continuity, and, especially, convergence, had been clarified in 19th-century analysis, that the ground was prepared for the introduction of predicate calculus.

More generally, dependence or independence of bounds on various parameters is an important issue in many results on estimates in number theory and analysis. Hodges quotes a nice example from one of Lang’s books [Lang, 1964] in [Hodges, 1997a].

Intuitively, there is an evident relation between these notions and that of information flow. Dependence indicates a form of information flow; independence is the absence of information flow.

2.1 Beyond first-order logic

It turns out that mere rearrangement of the order of quantifiers in first-order formulas is not sufficient to capture the full range of possibilities for informational dependence and independence. This was first realized almost 50 years ago, with Henkin’s introduction of branching quantifiers [Henkin, 1961]. The simplest case is the eponymous Henkin quantifier:

( ∀x ∃y )
( ∀u ∃v )  A(x, y, u, v)

The intention is that y must be chosen depending on x, but independently of the choice of u; while v must be chosen depending on u, but independently of the choice of x. The meaning of this formula can be explicated by introducing Skolem functions f and g: an equivalent formula will be

∃f ∃g ∀x ∀u. A(x, f(x), u, g(u))

Here the constraints on dependencies are tracked by the dependence of the Skolem functions on certain variables, but not on others. Note that the Skolemized sentence is second-order; in fact, it belongs to the Σ¹₁ fragment of second-order logic. This second-order rendition of the meaning of the Henkin quantifier cannot be avoided, in the sense that the Henkin quantifier strictly increases the expressive power of first-order logic, and in fact the extension of first-order logic with the Henkin quantifier is equivalent in expressive power to the Σ¹₁ fragment [Henkin, 1961].

Examples

  1. Consider

    This expresses that the sets A and B are equinumerous.

  2. Now consider

    This expresses that A is an infinite set.

These examples show that the Henkin quantifier is not expressible in first-order logic.

2.2 Further developments

The next major development was the introduction of IF-logic (“independence-friendly logic”) by Jaakko Hintikka and Gabriel Sandu [Hintikka and Sandu, 1989]. The intention of IF-logic is to highlight informational dependence and independence. It provides a linear syntax for expressing branching quantification (and more), e.g. the Henkin quantifier can be written in linear notation as:

∀x ∃y ∀u (∃v/x) A(x, y, u, v)

The “slashed quantifier” (∃v/x) has the intended reading “there exists a v not depending on x”. Note the strange syntactic form of this quantifier, with its “outward-reaching” scope for x.

Dependence Logic

A simplified approach was introduced by the second author, and developed extensively in the recent monograph [Väänänen, 2007]. The main novelty in the formulation of the logic is to use an atomic dependence predicate =(x₁, …, xₙ, y), which holds if y depends on x₁, …, xₙ, and only on these variables. We can then define “dependence-friendly quantifiers” as standard quantifiers guarded with the dependence predicate:

∃y (=(x₁, …, xₙ, y) ∧ φ)

This yields essentially the same expressive power as IF-logic.

2.3 Compositionality: Hodges’ Semantics

But, what does it all mean? Hintikka claimed that a compositional semantics for IF logic could not be given [Hintikka, 1998]. Instead he gave a “Game-Theoretical Semantics”, essentially reduction to Skolem form as above.

Wilfrid Hodges showed that it could [Hodges, 1997a, Hodges, 1997b].

Before giving Hodges’ construction, it will be useful firstly to recall Tarski’s solution to the problem of how to define the truth of a sentence in a first-order structure M with underlying set A. In order to do this, he had to deal with the more general case of open formulas. The idea was to define

M ⊨_s φ

where V is a finite set of variables including those occurring free in φ, and s is an assignment of elements of A to V. Typical clauses include:

M ⊨_s ∃x. φ  iff  for some a ∈ A: M ⊨_{s[a/x]} φ

Here s[a/x] is the assignment defined on V ∪ {x} as follows: s[a/x](x) = a, and s[a/x](y) = s(y) for y ∈ V \ {x}.

This is the very prototype of a compositional semantic definition. Via Dana Scott, this idea led to the use of environments in denotational semantics [Scott, 1969]. Environments are nowadays ubiquitous in all forms of semantics in computer science [Winskel, 1993, Mitchell, 1996].

Teams

Hodges’ key idea was to see that one must lift the semantics of formulas from single assignments to sets of assignments. Notions of dependence of one variable on others are only meaningful among a set of assignments. Hodges called these sets “trumps”; we follow [Väänänen, 2007] in calling them teams.
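To make the point that dependence is a property of a set of assignments rather than of a single one, here is a minimal sketch in Python. The representation (assignments as dicts, a team as a list of them) and all names are ours, purely for illustration.

```python
# A hypothetical team over variables x, y: a set of Tarski assignments,
# here represented as plain dicts (our own encoding, for illustration).
team = [
    {"x": 0, "y": 0},
    {"x": 1, "y": 2},
    {"x": 2, "y": 4},
]

def agree_on(s, t, variables):
    return all(s[v] == t[v] for v in variables)

# Dependence of y on x is a property of the team as a whole: any two
# assignments that agree on x must agree on y. No single assignment
# can witness or refute it.
y_depends_on_x = all(s["y"] == t["y"]
                     for s in team for t in team
                     if agree_on(s, t, ["x"]))
print(y_depends_on_x)  # True: y = 2*x throughout this team
```

Removing any single assignment leaves the dependence intact; adding, say, {"x": 0, "y": 5} would destroy it.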

We consider the semantics of Dependence logic [Väänänen, 2007]. Formulas are built up from standard atomic formulas and their negations and the dependence predicates, by conjunction, disjunction, and universal and existential quantification. We shall distinguish between the usual atomic formulas (including equality statements) over the first-order signature we are working with, and the dependence formulas. In the case of the standard atomic formulas, we shall also allow their negations, and as usual refer to positive and negated atomic formulas collectively as literals. We shall not allow negations of dependence formulas; we will see later how to access negative information about dependence, using the new connectives we will introduce in the next section.

The set of all individual variables is denoted Var. A team on a finite set of variables V ⊆ Var is a set of Tarski assignments on V. We define the following operations on teams:

  • If X is a team on V and W ⊆ V, then X↾W is the team on W defined by: X↾W = {s↾W : s ∈ X}.

  • If X is a team on V, x ∈ Var, and F : X → A, then X[F/x] is the team on V ∪ {x} defined by: X[F/x] = {s[F(s)/x] : s ∈ X}.

The Satisfaction Relation

We define a satisfaction relation

X ⊨_V φ

where the free variables of φ are contained in V, and X is a team on V. (In practice, we elide V.)

Firstly, for literals L we have:

X ⊨ L  iff  for all s ∈ X: M ⊨_s L

where ⊨_s is the standard Tarskian definition of satisfaction of an atomic formula or its negation in a structure with respect to an assignment s.

Connectives and Quantifiers

The clauses for connectives and quantifiers are as follows:

X ⊨ φ ∧ ψ  iff  X ⊨ φ and X ⊨ ψ
X ⊨ φ ∨ ψ  iff  X = Y ∪ Z for some Y, Z with Y ⊨ φ and Z ⊨ ψ
X ⊨ ∀x. φ  iff  X[A/x] ⊨ φ, where X[A/x] = {s[a/x] : s ∈ X, a ∈ A}
X ⊨ ∃x. φ  iff  X[F/x] ⊨ φ for some F : X → A

Semantics of the dependence predicate

Given a set of variables V and W ⊆ V, we define the following notions:

  • An equivalence relation on assignments on V:

    s ≈_W t  ⟺  s↾W = t↾W

  • A function F : X → A depends only on W, written F : X →_W A, if F = G ∘ π_W for some G, where π_W : X → X↾W, s ↦ s↾W, is the evident projection. Note that if such a G exists, it is unique.

Now we can define:

X ⊨ =(x₁, …, xₙ, y)  iff  for all s, t ∈ X: s ≈_W t implies s(y) = t(y), where W = {x₁, …, xₙ}.

Note that this expresses functional dependence, exactly as in database theory [Armstrong, 1974].
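Stated operationally, the atom can be checked on a finite team exactly as one checks a functional dependency on a database table. The following Python sketch (our own encoding of assignments as dicts, names illustrative) does this in one pass:

```python
def satisfies_dep(team, xs, y):
    """Team satisfaction of the dependence atom =(x1,...,xn, y):
    any two assignments agreeing on x1..xn must agree on y
    (a functional dependency, as in database theory)."""
    value_of = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if key in value_of and value_of[key] != s[y]:
            return False
        value_of[key] = s[y]
    return True

X = [{"u": 0, "v": 1, "w": 1},
     {"u": 0, "v": 1, "w": 1},
     {"u": 1, "v": 0, "w": 2}]
print(satisfies_dep(X, ["u", "v"], "w"))  # True
print(satisfies_dep(X, [], "w"))          # False: w is not constant on X
```

The empty-list case xs = [] already hints at the constancy predicate analyzed in section 6: dependence on nothing means having a fixed value.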

An equivalent definition can be given in terms of the dependency condition on functions: X ⊨ =(x₁, …, xₙ, y) iff the function s ↦ s(y) on X depends only on {x₁, …, xₙ}.

Strictly speaking, this is the “positive part” of the definition as given in [Väänänen, 2007] following Hodges. There is also a negative part, which defines satisfaction for ¬φ as for the positive definition, but with respect to the De Morgan dual of φ.

This allows for a “game-theoretic negation”, which formally “interchanges the rôles of the players”. It is simpler, and from our perspective loses nothing, to treat this negation as a defined operation, and work exclusively with formulas in negation normal form as above.

The theory of dependence logic: metalogical properties, connections with second-order logic, complexity and definability issues, et cetera, is extensively developed in [Väänänen, 2007]. However, as explained in the Introduction, many basic questions remain. We shall now show how the Hodges semantics can be seen in a new light, as arising from a general construction.

Section 3 The Hodges construction revisited

An important clue to the general nature of the construction is contained in the observation by Hodges [Hodges, 1997a] (and then in [Väänänen, 2007]) that the sets of teams denoted by formulas of IF-logic or Dependence logic are downwards closed: that is, if a team X satisfies a formula and Y ⊆ X, then Y also satisfies it. This is immediately suggestive of well-known constructions on ordered structures.

3.1 A general construction

We recall a couple of definitions. A commutative ordered monoid is a structure (M, ·, 1, ≤), where (M, ≤) is a partially ordered set, and (M, ·, 1) is a commutative monoid (a set M with an associative and commutative operation · with unit 1), such that · is monotone:

a ≤ b and a′ ≤ b′  ⟹  a · a′ ≤ b · b′

The primary example we have in mind is the set of all teams on a set of variables V, which we think of as the commutative ordered monoid (T(V), ∪, ∅, ⊆).

A commutative quantale is a commutative ordered monoid where the partial order is a complete lattice, and · distributes over all suprema: a · ⋁ᵢ bᵢ = ⋁ᵢ (a · bᵢ).

Let M be a commutative ordered monoid. Then D(M), the set of lower (or downwards-closed) sets of M, ordered by inclusion, is the free commutative quantale generated by M [Mitchell and Simmons, 2001].

A downwards closed subset S of a partially ordered set P is a set such that:

b ≤ a ∈ S  ⟹  b ∈ S

Thus this notion generalizes the downwards closure condition on sets of teams.

The following notation will be useful. Given S ⊆ P, where P is a partially ordered set, we define

↓S = {b ∈ P : ∃a ∈ S. b ≤ a},

the downwards closure of S. A set S is downwards closed if and only if S = ↓S.

As a commutative quantale, D(M) is a model of intuitionistic linear logic (phase semantics [Yetter, 1990, Rosenthal, 1990, Girard, 1987]). In particular, we have

U ⊗ V = ↓{a · b : a ∈ U, b ∈ V}

We note that when the definition of ⊗, the multiplicative conjunction, is specialized to our concrete setting (where the monoid operation is union of teams), it yields the definition of disjunction in the Hodges semantics!
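The coincidence is easy to see concretely. In the sketch below (our own hashable encoding: an assignment is a frozenset of (variable, value) pairs, a team a frozenset of assignments), the tensor of two down-closed sets of teams is computed by pointwise union, which is exactly the "split the team" clause for disjunction:

```python
def tensor(U, V):
    """On down-closed sets of teams, U ⊗ V = {X ∪ Y : X ∈ U, Y ∈ V}:
    exactly the team clause 'X splits as Y ∪ Z with Y ⊨ φ, Z ⊨ ψ'."""
    return {X | Y for X in U for Y in V}

# Assignments encoded as hashable frozensets of (variable, value) pairs.
s1 = frozenset({("x", 0)})
s2 = frozenset({("x", 1)})
U = {frozenset(), frozenset({s1})}  # down-closure of {s1}
V = {frozenset(), frozenset({s2})}  # down-closure of {s2}
print(frozenset({s1, s2}) in tensor(U, V))  # True: {s1,s2} = {s1} ∪ {s2}
```

Note that when U and V are down-closed the result is again down-closed, since any subteam of X ∪ Y splits as a subteam of X union a subteam of Y.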

The multiplicative implication ⊸ has not been considered previously in the setting of IF-logic and Dependence logic. However, it is perfectly well defined, and is in fact uniquely specified as the adjoint of the linear conjunction:

W ⊆ U ⊸ V  ⟺  W ⊗ U ⊆ V

Note that linear implication automatically preserves downwards closure.
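The adjunction can be verified exhaustively on a tiny example. The following brute-force sketch (names and encoding ours; teams as frozensets over a two-assignment universe) enumerates all down-closed sets of teams and checks both directions of the adjunction:

```python
from itertools import combinations

def subteams(assignments):
    """All teams over a finite universe of assignments."""
    s = list(assignments)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def down_closed_sets(universe):
    """Enumerate all down-closed sets of teams over the universe."""
    teams = subteams(universe)
    for r in range(len(teams) + 1):
        for c in combinations(teams, r):
            U = set(c)
            if all(Y in U for X in U for Y in subteams(X)):
                yield U

def tensor(U, V):
    """Multiplicative conjunction: pointwise union of teams."""
    return {X | Y for X in U for Y in V}

def limp(U, V, universe):
    """Linear implication U ⊸ V: the teams X with X ∪ Y ∈ V for every Y ∈ U."""
    return {X for X in subteams(universe)
            if all((X | Y) in V for Y in U)}

universe = ["s", "t"]  # two abstract assignments
ok = all((tensor(W, U) <= V) == (W <= limp(U, V, universe))
         for U in down_closed_sets(universe)
         for V in down_closed_sets(universe)
         for W in down_closed_sets(universe))
print(ok)  # True: W ⊗ U ⊆ V iff W ⊆ U ⊸ V, for all down-closed U, V, W
```

One can also check directly in this setting that limp always returns a down-closed set, illustrating the remark above.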

3.2 What is the propositional logic of dependence?

In fact, D(M) carries a great deal of structure. Not only is it a commutative quantale (and hence carries an interpretation of linear logic), but it is also a complete Heyting algebra, and hence carries an interpretation of intuitionistic logic.

We have the clauses

X ⊨ φ ∧ ψ  iff  X ⊨ φ and X ⊨ ψ
X ⊨ φ ∨ ψ  iff  X ⊨ φ or X ⊨ ψ
X ⊨ φ → ψ  iff  for all Y ⊆ X: Y ⊨ φ implies Y ⊨ ψ

The situation where we have both intuitionistic logic and multiplicative linear logic coexisting is the setting for BI logic, the “logic of Bunched Implications” of David Pym and Peter O’Hearn [O’Hearn and Pym, 1999, Pym, 2002], which forms the basis for Separation logic (Reynolds and O’Hearn) [Reynolds, 2002], an increasingly influential logic for verification. The down-set construction D is exactly the way a “forcing semantics” for BI-logic is converted into an algebraic semantics as a “BI-algebra”, i.e. a structure which is both a commutative quantale and a complete Heyting algebra [Pym et al., 2004]. D(M) is in fact the free complete BI-algebra over the commutative ordered monoid M.
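The Heyting implication on down-closed sets of teams has a direct team reading: a team is in U ⇒ V just in case all of its subteams that lie in U also lie in V. A small brute-force sketch (our encoding; teams as frozensets of hashable assignments):

```python
from itertools import combinations

def subteams(team):
    s = list(team)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def heyting_imp(U, V, universe):
    """Intuitionistic implication on down-closed sets of teams:
    X ∈ (U ⇒ V) iff every subteam of X lying in U also lies in V."""
    return {X for X in subteams(universe)
            if all(Y in V for Y in subteams(X) if Y in U)}

s1 = frozenset({("x", 0)})
s2 = frozenset({("x", 1)})
U = {frozenset(), frozenset({s1})}   # down-closure of {s1}
V = {frozenset()}                    # only the empty team
imp = heyting_imp(U, V, [s1, s2])
print(frozenset({s2}) in imp)  # True: no nonempty subteam of {s2} lies in U
print(frozenset({s1}) in imp)  # False: {s1} is in U but not in V
```

Note the quantification over subteams: this is what later makes the intuitionistic implication the right tool for relativizing to teams satisfying a precondition, both in the full abstraction argument and in the analysis of dependence.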

This provides one reason for proposing BI-logic as the right answer to the question posed at the beginning of this subsection. The compelling further evidence for this claim will come from the natural rôle played by the novel connectives we are introducing into the logic of dependence. This rôle will become apparent in the subsequent developments in this paper.

3.3 BID-logic and its team semantics

We shall spell out the extended logical language we are led to consider, and its concrete team semantics, extending the Hodges-style semantics already given in section 2.

We call the extended language BID, for want of a better name. Formulas are built from atomic formulas and their negations, and dependence formulas, by the standard first-order quantifiers, and the following propositional connectives: the intuitionistic (or “additive”) connectives ∧, ∨, →, and the multiplicative connectives ⊗ and ⊸.

Team Semantics for BID-logic

The team semantics for BID-logic is as follows:

X ⊨ φ ∧ ψ  iff  X ⊨ φ and X ⊨ ψ
X ⊨ φ ∨ ψ  iff  X ⊨ φ or X ⊨ ψ
X ⊨ φ → ψ  iff  for all Y ⊆ X: Y ⊨ φ implies Y ⊨ ψ
X ⊨ φ ⊗ ψ  iff  X = Y ∪ Z for some Y, Z with Y ⊨ φ and Z ⊨ ψ
X ⊨ φ ⊸ ψ  iff  for all teams Y: Y ⊨ φ implies X ∪ Y ⊨ ψ

The clauses for atomic formulas and their negations and for the dependence formulas and quantifiers are as given in section 2.

As already noted, the semantics of ∧ and ⊗ coincide with those given for conjunction and disjunction in section 2. The connectives ∨ and →, intuitionistic or additive disjunction and implication, and the multiplicative implication ⊸, are new as compared to IF-logic or Dependence logic.

3.4 The semantics of sentences

It is worth spelling out the semantics of sentences explicitly. By definition, sentences have no free variables, and there is only one assignment on the empty set of variables, which we can think of as the empty tuple (). In the Tarski semantics, there are only two possibilities for the set of satisfying assignments of a sentence, ∅ and {()}, which we can identify with false and true respectively. When we pass to the team semantics for BID-logic, there are three possibilities for the down-closed set of teams assigned to a sentence: ∅, {∅}, or {∅, {()}}. Thus the semantics of sentences is trivalent in general.

In his papers, Hodges works only with non-empty teams, and has bivalent semantics for sentences. However, there is no real conflict between his semantics and ours. Let BID⁻ be BID-logic without the linear implication ⊸. Note that BID⁻ properly contains Dependence logic, which is expressively equivalent to IF-logic [Väänänen, 2007].

Proposition 1

Every formula in BID⁻-logic is satisfied by the empty team; hence in particular every sentence of BID⁻-logic has either {∅} or {∅, {()}} as its set of satisfying teams, and the semantics of sentences in BID⁻-logic is bivalent.

Proof   A straightforward induction on formulas of BID⁻-logic.  

On the other hand, linear implication clearly violates this property. Note that the empty team satisfies φ ⊸ ψ if and only if every team satisfying φ also satisfies ψ. We obtain as an immediate corollary:

Proposition 2

Linear implication is not definable in BID⁻-logic, and a fortiori is not definable in Dependence logic or IF-logic.

3.5 The general Hodges construction

We shall briefly sketch, for the reader conversant with categorical logic, the general form of the construction.

The standard Tarski semantics of first-order logic is a special case of Lawvere’s notion of hyperdoctrine [Lawvere, 1969]. We refer to [Pitts, 2000] for a lucid expository account. Construing D as a functor in the appropriate fashion, we can give a general form of the Hodges construction as a functor from classical hyperdoctrines to BI-hyperdoctrines [B. Biering, 2007]. Given a classical hyperdoctrine, we define a BI-hyperdoctrine on the same base category by composing it with the functor D.

Note that the category of partially ordered sets is order-enriched, and D is an order-enriched functor, so it preserves adjoints, and hence in particular preserves the interpretations of the quantifiers. This observation is spelled out in more detail in Proposition 4.

This exactly generalizes the concrete Hodges construction, which is obtained by applying D to the standard Tarski hyperdoctrine.

A full account will be given elsewhere.

Section 4 Quantifiers are adjoints in the Hodges construction

We recall the team semantics for the quantifiers:

X ⊨ ∃x. φ  iff  X[F/x] ⊨ φ for some F : X → A
X ⊨ ∀x. φ  iff  X[A/x] ⊨ φ
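These clauses can be executed directly over a finite universe. In the sketch below (our own representation: teams as lists of dicts, a formula as an arbitrary Boolean-valued property phi of teams), the existential searches over all choice functions F : X → A:

```python
from itertools import product

def forall_team(X, x, A, phi):
    """X ⊨ ∀x φ iff X[A/x] ⊨ φ: expand each assignment with every value of x."""
    return phi([dict(s, **{x: a}) for s in X for a in A])

def exists_team(X, x, A, phi):
    """X ⊨ ∃x φ iff X[F/x] ⊨ φ for some choice function F : X → A."""
    rows = list(X)
    return any(phi([dict(s, **{x: a}) for s, a in zip(rows, choice)])
               for choice in product(A, repeat=len(rows)))

A = [0, 1, 2]
X = [{"x": 0}, {"x": 1}]
phi = lambda team: all(s["y"] == s["x"] + 1 for s in team)
print(exists_team(X, "y", A, phi))  # True: take F(s) = s["x"] + 1
print(forall_team(X, "y", A, phi))  # False
```

The exponential search over choice functions is of course only for illustration; it makes vivid that the existential quantifier supplements the whole team uniformly, one witness per assignment.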

We may wonder what underlying principles dictate these definitions.

To answer this question, we firstly recall the fundamental insight due to Lawvere [Lawvere, 1969] that quantifiers are adjoints to substitution.

4.1 Quantifiers as adjoints

Consider a function f : X → Y. This induces a function

f⁻¹ : 𝒫(Y) → 𝒫(X),  f⁻¹(T) = {a ∈ X : f(a) ∈ T}

This function has both a left adjoint ∃_f : 𝒫(X) → 𝒫(Y), and a right adjoint ∀_f : 𝒫(X) → 𝒫(Y). These adjoints are uniquely specified by the following conditions. For all S ⊆ X, T ⊆ Y:

∃_f(S) ⊆ T ⟺ S ⊆ f⁻¹(T)        f⁻¹(T) ⊆ S ⟺ T ⊆ ∀_f(S)

The unique functions satisfying these conditions can be defined explicitly as follows:

∃_f(S) = {b ∈ Y : ∃a ∈ S. f(a) = b}        ∀_f(S) = {b ∈ Y : ∀a ∈ X. f(a) = b ⟹ a ∈ S}
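Both adjunction conditions can be checked exhaustively on a small example. A brute-force sketch (names ours, for illustration only):

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def preimage(f, T, X):
    """Substitution: the inverse image of T under f."""
    return {a for a in X if f(a) in T}

def exists_f(f, S):
    """Left adjoint: the direct image."""
    return {f(a) for a in S}

def forall_f(f, S, X, Y):
    """Right adjoint: b is included iff the whole fibre over b lies in S."""
    return {b for b in Y if all(a in S for a in X if f(a) == b)}

X, Y = {0, 1, 2, 3}, {0, 1}
f = lambda a: a % 2  # a toy "projection"
ok = all((exists_f(f, S) <= T) == (S <= preimage(f, T, X)) and
         (preimage(f, T, X) <= S) == (T <= forall_f(f, S, X, Y))
         for S in powerset(X) for T in powerset(Y))
print(ok)  # True: both adjunction conditions hold for all S, T
```

Here `<=` is Python's subset test, so the two equalities are exactly the two bi-implications displayed above.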

Given a formula φ with free variables in {v₁, …, vₙ}, it will receive its Tarskian denotation in 𝒫(Aⁿ) as the set of satisfying assignments:

⟦φ⟧ = {(a₁, …, aₙ) ∈ Aⁿ : M ⊨ φ[a₁, …, aₙ]}

We have a projection function

π : Aⁿ⁺¹ → Aⁿ : (a₁, …, aₙ, aₙ₊₁) ↦ (a₁, …, aₙ)

Note that this projection is the Tarskian denotation of the tuple of terms (v₁, …, vₙ). We can characterize the standard quantifiers as adjoints to this projection:

⟦∃vₙ₊₁. φ⟧ = ∃_π(⟦φ⟧)        ⟦∀vₙ₊₁. φ⟧ = ∀_π(⟦φ⟧)

If we unpack the adjunction conditions for the universal quantifier, they yield the following bidirectional inference rule:

Γ ⊢_{V ∪ {x}} φ
――――――――――――――
Γ ⊢_V ∀x. φ

Here the set V keeps track of the free variables in the assumptions Γ. Note that the usual “eigenvariable condition” is automatically taken care of in this way.

Since adjoints are uniquely determined, this characterization completely captures the meaning of the quantifiers.

4.2 Quantifiers in the Hodges semantics

We shall now verify that the definitions of the quantifiers in the Hodges semantics are exactly the images under of their standard interpretations in the Tarski semantics, and hence in particular that they are adjoints to substitution. Thus these definitions are forced.

It will be convenient to work with the semantic view of quantifiers, as operators on subsets. Consider formulas with free variables in {v₁, …, vₙ₊₁}. The Tarski semantics over a structure M assigns such formulas values in 𝒫(Aⁿ⁺¹). We can regard the quantifiers ∃vₙ₊₁, ∀vₙ₊₁ as functions 𝒫(Aⁿ⁺¹) → 𝒫(Aⁿ).

For any n, we write D(𝒫(Aⁿ)) for the set of downwards closed sets of teams on the variables v₁, …, vₙ (identifying a team on these variables with a subset of Aⁿ). This provides the corresponding “space” of semantic values for formulas in the Hodges semantics. The interpretation of quantifiers in that semantics is given by the following set operators:

∃_H(U) = {X : X[F/vₙ₊₁] ∈ U for some F : X → A}        ∀_H(U) = {X : X[A/vₙ₊₁] ∈ U}

We extend the definition of D to act on monotone functions f : P → Q:

D(f)(S) = ↓{f(a) : a ∈ S}

In the case that , where , we write .

Proposition 3

The Hodges quantifiers are the image under D of the Tarski quantifiers:

∃_H = D(∃)        ∀_H = D(∀)

Proof   Firstly, we show that for all . Suppose that . Let . This means that

Using the axiom of choice, there exists a function such that

Since is downwards closed, this implies that , as required.

The converse follows immediately from the fact that

Next we show that . Since

if , then by downwards closure. The converse follows similarly from .  

Proposition 4

The Hodges quantifiers are adjoints to substitution:

  1. The Hodges existential ∃_H is left adjoint to substitution along the projection.

  2. The Hodges universal ∀_H is right adjoint to substitution along the projection.

Proof   It is straightforward to verify the adjunction conditions directly. We give a more conceptual argument. There is a natural pointwise ordering on monotone functions between partially ordered sets, f, g : P → Q:

f ≤ g  ⟺  for all a ∈ P: f(a) ≤ g(a)

D is an order-enriched functor with respect to this ordering. Functoriality means that

D(g ∘ f) = D(g) ∘ D(f),    D(id) = id

while order-enrichment means that

f ≤ g  ⟹  D(f) ≤ D(g)

These properties imply that D automatically preserves adjointness. That is, if we are given monotone maps

f : P → Q,    g : Q → P

such that id ≤ g ∘ f and f ∘ g ≤ id, i.e. so that f is left adjoint to g, then

id = D(id) ≤ D(g ∘ f) = D(g) ∘ D(f)

and similarly D(f) ∘ D(g) ≤ id, so D(f) is left adjoint to D(g) (and of course D(g) is right adjoint to D(f)). Combining this with Proposition 3 yields the required result.  

4.3 The dependence-friendly quantifiers

We shall also give characterizations of the dependence-guarded quantifiers as certain adjoints: this will be our first use of the intuitionistic implication.

We recall the definition of the dependence-friendly existential quantifier:

∃y (=(x₁, …, xₙ, y) ∧ φ)

There has not been a comparably natural notion of dependence-friendly universal quantification. According to our analysis, this is because the appropriate connective needed to express the right notion, namely intuitionistic implication, has not been available. Using it, we can define such a quantifier:

∀y (=(x₁, …, xₙ, y) → φ)

As evidence for the naturalness of these quantifiers, we shall express them both as adjoints.

Firstly, we recall that intuitionistic conjunction and implication are related by another fundamental adjointness [Lawvere, 1969]:

U ∧ V ⊆ W  ⟺  U ⊆ V → W

This can be expressed as a bidirectional inference rule:

φ ∧ ψ ⊢ θ
―――――――――
φ ⊢ ψ → θ

Next, we extend our semantic notation to the dependence-friendly quantifiers. Given , we define :

Now we can define the semantic operators corresponding to the dependence-friendly quantifiers:

Proposition 5

The dependence-friendly existential is left adjoint to the following operation:

The dependence-friendly universal is right adjoint to the following operation:

Proof   A direct verification is straightforward, but it suffices to observe that adjoints compose, and then to use Proposition 4 and the adjointness between intuitionistic conjunction and implication recalled above.  

Of course, the analysis we have given in this sub-section applies to any guarded quantifiers; the dependence predicates play no special rôle here. The point is to show how the intuitionistic connectives round out the logic in a natural fashion. We shall apply them to a finer analysis of dependence itself in section 6.

Section 5 Full Abstraction

We shall now prove a full abstraction result in the sense of Hodges [Hodges, 1997a]. The point of this is to show that, even if we take sentences and their truth-values as primary, the information contained in the semantics of formulas in general is not redundant, since whenever two formulas receive different denotations, they make different contributions overall to the truth-values assigned to sentences.

The fact that such a result holds for BID⁻-logic is notable, in that the logic is highly non-classical, while the semantics of sentences is bivalent. For BID-logic, the set of possible truth values for open formulas is huge even in finite models [Cameron and Hodges, 2001], while the semantics of sentences is trivalent.

While our argument follows that of Hodges [Hodges, 1997a], we find a natural rôle for the intuitionistic implication, and can give a very simple proof, while Hodges’ argument goes through the correspondence with the game-theoretical semantics.

To formalize full abstraction, we introduce the notion of a sentential context with respect to a set of variables V. This is a formula C[·] with an occurrence of a “hole” [·] such that inserting a formula with free variables in V into the hole yields a sentence. Now consider two formulas φ and ψ of BID-logic, with free variables in V. We say that the formulas are semantically equivalent if they have the same denotations, i.e. the same sets of satisfying teams, in all interpretations with respect to all structures. We say that φ and ψ are observationally equivalent if for all sentential contexts C[·] for V, C[φ] and C[ψ] are assigned the same truth values in all interpretations. The fact that semantic equivalence implies observational equivalence follows immediately from the compositional form of the semantics. The converse is full abstraction.

Proposition 6

The team semantics is fully abstract for any sublanguage of BID-logic containing universal quantification and intuitionistic implication.

Proof   Suppose that in some interpretation the denotation of φ contains a team X which is not in the denotation of ψ. Extend the language with an n-ary relation symbol S, and the interpretation by assigning to S the relation {s(v₁, …, vₙ) : s ∈ X}. Then use the context

∀v₁ ⋯ ∀vₙ (S(v₁, …, vₙ) → [·])

where the free variables in φ and ψ are contained in V = {v₁, …, vₙ}. Then C[φ] is true (satisfied by the empty tuple), since for every team Y satisfying S(v₁, …, vₙ), Y ⊆ X, and hence by assumption and downwards closure, Y satisfies φ. This means that all teams over V satisfy the implication S(v₁, …, vₙ) → φ, and hence C[φ] is satisfied. On the other hand, C[ψ] is not satisfied by the empty tuple, since X satisfies S(v₁, …, vₙ), while X does not satisfy ψ by assumption.  

Note that the use of the intuitionistic implication in relativizing to those teams satisfying the precondition is exactly what is needed.

Section 6 Analyzing Dependence

We now turn to the dependence predicate itself. Since it encapsulates the “jump” from first-order to second-order semantics, we cannot be too hopeful about taming it axiomatically. But it turns out that we can give a finer analysis in BID-logic.

Consider the following “trivial” case of dependence:

=(y)

This expresses that y depends on nothing at all, and hence has a fixed value: functional dependency for the constant function. Semantically, this is the following simple special case of the semantics of dependence:

X ⊨ =(y)  iff  for all s, t ∈ X: s(y) = t(y)

Using the intuitionistic implication, we can define the general dependence predicate from this special case:

=(x₁, …, xₙ, y)  ≡  (=(x₁) ∧ ⋯ ∧ =(xₙ)) → =(y)

Proposition 7

The definition of =(x₁, …, xₙ, y) from the constancy predicate is semantically equivalent to the definition given previously.

Proof   This is just an exercise in unwinding the definitions. Note that the intuitionistic implication lets us range over all subsets of the team which lie in a single equivalence class under agreement on x₁, …, xₙ, and require that y is constant on those subsets.  
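The unwinding can also be checked mechanically on small teams. The sketch below (our own encoding, names illustrative) compares the direct functional-dependence reading of the atom with the Horn-clause reading, where the intuitionistic implication quantifies over all subteams:

```python
from itertools import combinations

def subteams(X):
    return [list(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def constant(team, v):
    """The constancy atom =(v): v takes at most one value on the team."""
    return len({s[v] for s in team}) <= 1

def dep_direct(X, xs, y):
    """=(xs, y) read directly as functional dependence."""
    return all(s[y] == t[y] for s in X for t in X
               if all(s[x] == t[x] for x in xs))

def dep_via_con(X, xs, y):
    """(=(x1) ∧ ... ∧ =(xn)) → =(y), with → read intuitionistically:
    every subteam on which all the xi are constant has y constant."""
    return all(constant(Y, y) for Y in subteams(X)
               if all(constant(Y, x) for x in xs))

X = [{"x": 0, "y": 1}, {"x": 1, "y": 2}, {"x": 0, "y": 1}]
print(dep_direct(X, ["x"], "y"), dep_via_con(X, ["x"], "y"))  # True True
```

The two readings agree: a subteam on which the xᵢ are constant is exactly a subset of one equivalence class, and requiring y constant there is requiring functional dependence.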

6.1 Armstrong Axioms

The current stock of plausible axioms for the dependence predicates is limited to the Armstrong axioms from database theory [Armstrong, 1974]. These are a standard complete set of axioms for functional dependence. They can be given as follows.

(1) Always =(x, x).
(2) If =(x, y, z), then =(y, x, z).
(3) If =(x, x, y), then =(x, y).
(4) If =(x), then =(y, x).
(5) If =(y, z) and =(x, y), then =(x, z).
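Each axiom can be checked against the team semantics by brute force. The sketch below (our own encoding) verifies the transitivity-style axiom over every team on three binary variables:

```python
from itertools import product

def dep(team, xs, y):
    """Team satisfaction of =(xs, y) as functional dependence."""
    return all(s[y] == t[y] for s in team for t in team
               if all(s[x] == t[x] for x in xs))

vals = [0, 1]
rows = [dict(zip("xyz", v)) for v in product(vals, repeat=3)]
teams = [[rows[i] for i in range(len(rows)) if mask >> i & 1]
         for mask in range(2 ** len(rows))]

# If =(x, y) and =(y, z) hold in a team, then so does =(x, z).
ok = all(dep(T, ["x"], "z")
         for T in teams
         if dep(T, ["x"], "y") and dep(T, ["y"], "z"))
print(ok)  # True for all 256 teams
```

The same exhaustive loop, with the hypotheses and conclusion changed, validates each of the other axioms on this small universe.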

However, in the light of our analysis, the Armstrong axioms simply fall out as standard properties of implication and conjunction. If we set p = =(x), q = =(y), r = =(z), and use the definition of dependence in terms of constancy to translate the Armstrong axioms into purely implicational form, we see that they correspond to the following:

(1) p → p.
(2) (p → (q → r)) → (q → (p → r)).
(3) (p → (p → q)) → (p → q).
(4) p → (q → p).
(5) (q → r) → ((p → q) → (p → r)).

These are the well-known axioms I, C, W, K, B respectively [Curry and Feys, 1958], which form a complete axiomatization of intuitionistic (but not classical!) implication. A standard example of a classically valid implicational formula which is not derivable from these axioms is Peirce’s law: ((p → q) → p) → p.

Thus we have reduced the understanding of the dependence predicate to understanding of the, prima facie simpler, constancy predicate =(y).

Section 7 Further Directions

In this final section, we shall sketch a number of further directions. Detailed accounts are under development, and will appear elsewhere.

7.1 Completeness

Predicate BI-logic is a well developed formalism, with a proof theory which is sound and complete relative to an algebraic semantics [Pym, 2002]. Since BID-logic is a special case, we have a sound ambient inference system. Of course this is not complete for the intended semantics for BID-logic — and cannot be. We may hope to obtain completeness for some smaller class of models, possibly on the lines of the Henkin completeness theorem for higher-order logic [Henkin, 1950].

7.2 Diagrams

Now fix a particular interpretation in a structure with a given universe. Consider the following construction. We introduce a constant for each element of the universe, the usual first-order diagram (all true atomic sentences), and the following infinitary axiom:

We can define the constancy predicate (and hence the dependence predicate) by the following infinitary formula:

Note how the two different connectives (one additive, the other multiplicative) feature naturally.

This gives a logical (albeit infinitary) characterization of dependence.

7.3 Representation

We can also consider representation theory for the structures arising from our construction. We seek lattice-theoretic properties of these structures which suffice to characterize them.

Firstly, we note that the down-closures of single teams are exactly the complete join-primes of the lattice.

Moreover, these join-primes order-generate, i.e. every element is the join of the join-primes below it. All of this structure is in terms of the intuitionistic disjunction.

Next, we note that the join-primes are closed under the multiplicative conjunction, which is moreover idempotent on the join-primes, endowing them with the structure of a semilattice. This is very different from the semilattice structure given by intuitionistic disjunction.

The double singletons are exactly the complete atoms in this semilattice, which is complete atomic in the usual sense.

Syntactically, assuming names for elements, we can describe these atomic join-primes in the lattice of propositions over variables as

These are of course the tuples. (Down-closures of) arbitrary teams are then described by multiplicative conjunctions whose factors range over such atoms. Arbitrary elements are joins (intuitionistic disjunctions) of such elements. So there is a normal form for general elements:

Moreover, from the lattice-theoretic properties it is easily shown that the ordering between such normal forms agrees with the set inclusion ordering.
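These representation-theoretic claims can be probed on a toy instance (our sketch; the universe and all names are assumptions): for teams over a single two-valued variable, the lattice of down-closed collections of teams has exactly the principal down-sets (down-closures of single teams) as its nonbottom join-primes, and these order-generate.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

# Teams over one two-valued variable, identified with subsets of {0, 1}.
TEAMS = powerset([0, 1])

def down_closure(teams):
    """Close a collection of teams downwards under the subteam order."""
    return frozenset(t for t in TEAMS if any(t <= u for u in teams))

# The lattice: down-closed collections of teams; join is union.
elements = {down_closure(c) for c in powerset(TEAMS)}

def is_join_prime(p):
    """Nonbottom p such that p <= union(S) forces p <= some member of S."""
    if not p:
        return False
    for S in powerset(elements):
        if S and p <= frozenset().union(*S) and not any(p <= s for s in S):
            return False
    return True

primes = {p for p in elements if is_join_prime(p)}
# The join-primes are exactly the principal down-sets...
assert primes == {down_closure([t]) for t in TEAMS}
# ...and they order-generate: each element is the join of the primes below it.
for e in elements:
    below = [p for p in primes if p <= e]
    assert e == (frozenset().union(*below) if below else frozenset())
```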

7.4 Expressiveness

One of the defining characteristics of Dependence Logic as well as IF-logic is that they can be expressed in Existential Second Order Logic, Σ¹₁, and conversely, every Σ¹₁-definable property of structures can be expressed with a sentence of Dependence Logic. Both are true even on finite structures. To see what this connection with Σ¹₁ means, let us adopt the notation that if X is a team on a set of variables, then rel(X) is the corresponding relation. Hodges [Hodges, 1997b] associates with every formula φ of IF-logic (equivalently, of Dependence Logic) with free variables in the set {x1, ..., xn} an Existential Second Order sentence τ(φ), with R an n-ary predicate symbol, such that in any model M and for any team X on {x1, ..., xn} the following holds:

M ⊨_X φ if and only if (M, rel(X)) ⊨ τ(φ).
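As a concrete rendering of this notation (a sketch; the variable order and names are our assumptions), the passage from a team to its associated relation is simply:

```python
def rel(team, variables):
    """rel(X): the n-ary relation of value tuples of the team's assignments,
    taken in a fixed order of the variables."""
    return {tuple(s[v] for v in variables) for s in team}

# A team on {x1, x2} with two assignments and its associated binary relation:
X = [{"x1": 0, "x2": 1}, {"x1": 1, "x2": 1}]
assert rel(X, ("x1", "x2")) == {(0, 1), (1, 1)}
```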
Conversely, if Φ is any Existential Second Order sentence, then there is a sentence φ of Dependence Logic such that the following holds for all models M:

M ⊨ φ if and only if M ⊨ Φ.
Virtually all model theoretic properties of Dependence Logic follow from this relationship with Σ¹₁, for example, the Compactness Theorem, the downward and upward Löwenheim-Skolem Theorems, the Interpolation Theorem, and the fact that every sentence φ of Dependence Logic for which there exists a “negation” ψ such that for all M

M ⊨ ψ if and only if M ⊭ φ

is actually first order definable. Also the interesting fact that the class of properties of finite structures expressible in Dependence Logic is exactly NP follows from this. Because of these connections it is quite interesting to ask whether extensions such as BID can likewise be embedded in Σ¹₁, the existential fragment of Second Order Logic.

Now the question arises which semantics one should use. To be able to compare results with Dependence Logic and IF-logic, we use the full semantics familiar from [Hodges, 1997a] and [Väänänen, 2007].

Proposition 8

There is no translation of any extension of Dependence Logic containing either intuitionistic implication or linear implication into existential second order logic Σ¹₁. The same is true on finite models, assuming NP ≠ co-NP.

Proof   Let φ∞ be a formula of Dependence Logic in the empty vocabulary such that for any nonempty team X: M ⊨_X φ∞ if and only if M is infinite. Let θ denote a sentence in the empty vocabulary, only satisfied by the empty team. Suppose there were an Existential Second Order sentence Φ such that a model M and a team X on M satisfy φ∞ → θ if and only if (M, rel(X)) satisfies Φ. If M is any finite model and X = {s}, where s is any assignment, then M ⊨_X φ∞ → θ, whence (M, rel(X)) ⊨ Φ. By the Compactness Theorem of Existential Second Order Logic, Φ has an infinite model (M′, R′). Thus M′ and the team X′ with rel(X′) = R′ satisfy φ∞ → θ. Moreover, M′ ⊨_X′ φ∞, since M′ is infinite. By the definition of the semantics of →, since X′ satisfies φ∞ in M′, X′ must satisfy θ, a contradiction.

Let us then consider finite models. It is easy to write down a formula φ₃ of Dependence Logic in the vocabulary of graphs such that for any nonempty team X on a graph G: G ⊨_X φ₃ if and only if G is 3-colorable. Let θ be as above. If G is any graph that is not 3-colorable and X = {s}, where s is any assignment, then G ⊨_X φ₃ → θ. On the other hand, suppose G is 3-colorable, but G and some nonempty team X satisfy φ₃ → θ. By the definition of the semantics of →, since X satisfies φ₃ in G, X must satisfy θ, a contradiction. Thus a graph G and a nonempty team X satisfy φ₃ → θ if and only if G is not 3-colorable. Suppose now there were an Existential Second Order sentence Φ such that a graph G and a team X satisfy φ₃ → θ if and only if (G, rel(X)) satisfies Φ. Then we could check if a graph is not 3-colorable by checking if Φ is satisfied by G and rel(X) for a team X = {s}, where s can be any assignment. The latter is in NP, so we get NP = co-NP.

The same argument can be used to show that linear implication leads outside of Σ¹₁: Suppose φ∞ is as above and there is an Existential Second Order sentence Φ such that a model M and a team X satisfy φ∞ ⊸ θ if and only if (M, rel(X)) satisfies Φ. If M is any finite model and X = {s}, then M ⊨_X φ∞ ⊸ θ, whence (M, rel(X)) ⊨ Φ. By the Compactness Theorem of Existential Second Order Logic, Φ has an infinite model (M′, R′). Thus M′ and the team X′ with rel(X′) = R′ satisfy φ∞ ⊸ θ. In particular, M′ is infinite, and every nonempty team in M′ satisfies φ∞. Since any such team Y satisfies φ∞ in this model, by the definition of the semantics of ⊸, the combination of X′ with Y must satisfy θ, a contradiction.  

The proof actually shows that BID fails to satisfy the Compactness Theorem. A similar argument shows that BID fails to satisfy the Downward Löwenheim-Skolem Theorem.

Proposition 9

There is a translation of BID into Full Second Order Logic.

Proof   We follow [Hodges, 1997b] (see also [Väänänen, 2007]) and present only the additions needed over and above Dependence Logic and IF-logic:

In conclusion, we may say that and BID seem to have a more robust and uniform algebraic structure than Dependence Logic and IF-logic. We anticipate that this is reflected also in an effective proof theory, still to be developed. On the other hand the price of this seems to be that “nice” model theoretic properties are lost, at least in the full semantics. Perhaps there are some underlying, hitherto unidentified, reasons why logics developed for dependence cannot simultaneously have a “nice” model theory and effective proof theory. After all, we know from Lindström’s Theorem ([Lindström, 1969]) that there are intrinsic obstacles to having model-theoretically defined extensions of first order logic with both nice proof theory and nice model theory. However, we have a trivalent logic, unlike the setting considered by Lindström. So it is too early to say whether there are general reasons why BID does not satisfy Compactness and other model theoretic properties familiar from Dependence Logic, or whether we have just not found the right concepts yet.





  • Armstrong, 1974 Armstrong, W. W.: 1974, ‘Dependency structures of data base relationships’. In: Information Processing 74. Proc. IFIP Congress. pp. 580–583, North Holland.
  • Biering et al., 2007 Biering, B., L. Birkedal, and N. Torp-Smith: 2007, ‘BI-hyperdoctrines, higher-order separation logic, and abstraction’. ACM Transactions on Programming Languages and Systems 29(5).
  • Cameron and Hodges, 2001 Cameron, P. and W. Hodges: 2001, ‘Some combinatorics of imperfect information’. J. Symbolic Logic 66(2), 673–684.
  • Curry and Feys, 1958 Curry, H. B. and R. Feys: 1958, Combinatory Logic Volume 1, Studies in Logic and the Foundations of Mathematics. North Holland.
  • Davey and Priestley, 2002 Davey, B. A. and H. A. Priestley: 2002, Introduction to Lattices and Order. Cambridge University Press, second edition.
  • Fagin, 1977 Fagin, R.: 1977, ‘Functional Dependencies in a Relational Data Base and Propositional Logic’. IBM Journal of Research and Development 21(6), 543–544.
  • Girard, 1987 Girard, J.-Y.: 1987, ‘Linear Logic’. Theoretical Computer Science 50(1), 1–101.
  • Henkin, 1950 Henkin, L.: 1950, ‘Completeness in the Theory of Types’. J. Symbolic Logic 15, 81–91.
  • Henkin, 1961 Henkin, L.: 1961, ‘Some remarks on infinitely long formulas’. In: Infinitistic Methods. Proc. Symposium on Foundations of Mathematics. pp. 167–183, Pergamon.
  • Hintikka, 1998 Hintikka, J.: 1998, The Principles of Mathematics Revisited. Cambridge University Press.
  • Hintikka, 2002 Hintikka, J.: 2002, ‘Hyperclassical logic (a.k.a. IF logic) and its implications for logical theory’. Bulletin of Symbolic Logic 8(3), 404–423.
  • Hintikka and Sandu, 1989 Hintikka, J. and G. Sandu: 1989, ‘Informational independence as a semantical phenomenon’. In: J. E. F. et al. (ed.): Logic, Methodology and Philosophy of Science VIII. pp. 571–589, Elsevier.
  • Hintikka and Sandu, 1996 Hintikka, J. and G. Sandu: 1996, ‘Game-theoretical Semantics’. In: J. van Benthem and A. ter Meulen (eds.): Handbook of Logic and Language. Elsevier.
  • Hodges, 1997a Hodges, W.: 1997a, ‘Compositional Semantics for a Language of Imperfect Information’. Logic Journal of the IGPL 5(4), 539–563.
  • Hodges, 1997b Hodges, W.: 1997b, ‘Some strange quantifiers’. In: J. Mycielski, G. Rozenberg, and A. Salomaa (eds.): Structures in Logic and Computer Science, Vol. 1261 of Lecture Notes in Computer Science. Springer, pp. 51–65.
  • Lang, 1964 Lang, S.: 1964, Algebraic numbers. Addison-Wesley.
  • Lawvere, 1969 Lawvere, F. W.: 1969, ‘Adjointness in foundations’. Dialectica 23, 281–296.
  • Lindström, 1969 Lindström, P.: 1969, ‘On extensions of elementary logic’. Theoria 35, 1–11.
  • Milner, 1977 Milner, R.: 1977, ‘Fully Abstract Models of Typed Lambda-Calculi’. Theoretical Computer Science 4, 1–22.
  • Mitchell, 1996 Mitchell, J. C.: 1996, Foundations for Programming Languages. MIT Press.
  • Mitchell and Simmons, 2001 Mitchell, W. P. R. and H. Simmons: 2001, ‘Monoid Based Semantics for Linear Formulas’. J. Symbolic Logic 66(4), 1597–1619.
  • O’Hearn and Pym, 1999 O’Hearn, P. W. and D. J. Pym: 1999, ‘The Logic of Bunched Implications’. Bulletin of Symbolic Logic 5(2), 215–244.
  • Pitts, 2000 Pitts, A.: 2000, ‘Categorical logic’. In: S. Abramsky, D. Gabbay, and T. Maibaum (eds.): Handbook of Logic in Computer Science, Vol. 5. Oxford University Press, pp. 39–128.
  • Plotkin, 1977 Plotkin, G. D.: 1977, ‘LCF considered as a Programming Language’. Theoretical Computer Science 5, 223–255.
  • Pym, 2002 Pym, D. J.: 2002, The Semantics and Proof Theory of the Logic of Bunched Implications, Vol. 26 of Applied Logic Series. Kluwer.
  • Pym et al., 2004 Pym, D. J., P. W. O’Hearn, and H. Yang: 2004, ‘Possible worlds and resources: the semantics of BI’. Theoretical Computer Science 315, 257–305.
  • Reynolds, 2002 Reynolds, J.: 2002, ‘Separation logic: a logic for shared mutable data structures’. In: Proc. LiCS 2002. IEEE.
  • Rosenthal, 1990 Rosenthal, K. I.: 1990, Quantales and Their Applications, No. 234 in Pitman Research Notes in Mathematics. Longman Scientific and Technical.
  • Scott, 1969 Scott, D. S.: 1969, ‘Outline of a Mathematical Theory of Computation’. Technical Monograph PRG-2, Oxford University Computing Laboratory.
  • Tarski, 1936 Tarski, A.: 1936, ‘Der Wahrheitsbegriff in den formalisierten Sprachen’. Studia Philosophica 1, 261–405.
  • Tarski and Vaught, 1956 Tarski, A. and R. Vaught: 1956, ‘Arithmetical extensions of relational systems’. Compositio Mathematica 13, 81–102.
  • Urquhart, 1972 Urquhart, A.: 1972, ‘Semantics for Relevant Logics’. J. Symbolic Logic 37(1), 159–169.
  • Väänänen, 2001 Väänänen, J.: 2001, ‘Second-order logic and foundations of mathematics’. Bull. Symbolic Logic 7(4), 504–520.
  • Väänänen, 2007 Väänänen, J.: 2007, Dependence Logic, Vol. 70 of London Mathematical Society Student Texts. Cambridge University Press.
  • Winskel, 1993 Winskel, G.: 1993, The Formal Semantics of Programming Languages. MIT Press.
  • Yetter, 1990 Yetter, D. N.: 1990, ‘Quantales and (Noncommutative) Linear Logic’. J. Symbolic Logic 55(1), 41–64.