Unification and Logarithmic Space
Abstract
We present an algebraic characterization of the complexity classes Logspace and NLogspace, using an algebra with a composition law based on unification. This new bridge between unification and complexity classes is inspired by proof theory, more specifically linear logic and Geometry of Interaction.
We show how unification can be used to build a model of computation by means of specific subalgebras associated to finite permutation groups.
We then prove that whether an observation (the algebraic counterpart of a program) accepts a word can be decided within logarithmic space. We also show that the construction can naturally represent pointer machines, an intuitive way of understanding logarithmic space computing.
Keywords:
Implicit Complexity, Unification, Logarithmic Space, Proof Theory, Pointer Machines, Geometry of Interaction.
Introduction
Proof theory and complexity theory. There is a longstanding tradition of relating proof theory (more specifically linear logic [1]) and implicit complexity theory that dates back to the introduction of bounded [2] and light [3] logics. Control over the modalities [4, 5], type assignment [6] and stratification of exponential boxes [7], to name a few, led to a clearer understanding of the complexity bounds linear logic can impose on the cut-elimination procedure.
We propose to push this approach further by adopting a more semantic and algebraic point of view, which will allow us to capture nondeterministic logarithmic space computation.
Geometry of Interaction. As the study of cut-elimination has grown into a central topic in proof theory, its mathematical modeling became of great interest. The Geometry of Interaction [8] research program led to mathematical models of cut-elimination in terms of paths in proof-nets [9], token machines [10] and operator algebras [11]. It has already been applied to complexity questions [12, 13].
Recent works [13, 14, 15] studied the link between Geometry of Interaction and logarithmic space, relying on the theory of von Neumann algebras. Those three articles are indubitably sources of inspiration for this work, but the whole construction is carried out anew, in a simpler framework.
Unification. Unification is one of the key concepts of theoretical computer science, for it is used in logic programming and is a classical subject of study for complexity theory. It was shown [16, 17] that one can model cut-elimination with unification techniques.
Execution will be expressed in terms of matching in a unification algebra. This is a simple framework, yet expressive enough to encode the action of finite permutation groups on an unbounded tensor product, which is a crucial ingredient of our construction.
Contribution. We carry on the methodology of bridging Geometry of Interaction and complexity theory with a renewed approach. It relies on a simpler representation of execution in a unification-based algebra, proved to capture exactly logarithmic space complexity.
While the representation of inputs (words over a finite alphabet) comes from the classical Church representation of lists, observations (the algebraic counterpart of programs) are shown to correspond very naturally to a notion of pointer machines. This correspondence allows us to prove that reversibility (of machines) is related to the algebraic notion of isometricity (of observations).
Organization of this article. In Sect. 1 we review some classical results on unification of first-order terms and use them to build the algebra that will constitute our computational setting.
We explain in Sect. 2 how words and computing devices (observations) can be modeled by particular elements of this algebra. The way they interact to yield a notion of language recognized by an observation is described in Sect. 3.
Finally, we show in Sect. 4 that our construction captures exactly logarithmic space computation, both deterministic and nondeterministic.
1 The Unification Algebra
1.1 Unification
Unification can be generally thought of as the study of formal solving of equations between terms.
This topic was introduced by Herbrand, but became widespread only after the work of J. A. Robinson on automated theorem proving. Unification is also at the core of the logic programming language Prolog and of type inference for functional programming languages such as Caml and Haskell.
Specifically, we will be interested in the following problem:
Given two terms, can they be “made equal” by replacing their variables?
Definition 1
(terms)
We consider the following set T of first-order terms:
T ::= x | a | T • T
where x, y, z, … are variables, a, b, c, … are constants and • is a binary function symbol.
For any term t, we will write Var(t) the set of variables occurring in t. We say that a term t is closed when Var(t) = ∅, and denote T₀ the set of closed terms.
Notation. The binary function symbol • is not associative, but we will write it by convention as right associating to lighten notations: t • u • v := t • (u • v).
Definition 2
(substitution)
A substitution is a map θ from variables to terms such that the set dom(θ) := { x | θ(x) ≠ x } (the domain of θ) is finite.
A substitution with domain {x₁, …, xₖ} such that θ(xᵢ) = uᵢ will be written as {x₁ ↦ u₁, …, xₖ ↦ uₖ}.
If t is a term, we write θ(t) the term where any occurrence of any variable x has been replaced by θ(x).
If θ and ψ are two substitutions, their composition θ ∘ ψ is defined as the substitution mapping any variable x to θ(ψ(x)).
Remark. The composition of substitutions is such that (θ ∘ ψ)(t) = θ(ψ(t)) holds for any term t.
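As a concrete illustration (our own toy encoding, not notation from the text: variables are Python strings starting with '?', constants are other strings, and t • u is the 2-tuple (t, u)), substitutions can be modeled as finite maps and the remark checked directly:

```python
def subst(th, t):
    """Apply a substitution (simultaneously) to a term."""
    if isinstance(t, str) and t.startswith('?'):   # variable
        return th.get(t, t)
    if isinstance(t, tuple):                       # t = (t1, t2), i.e. t1 • t2
        return (subst(th, t[0]), subst(th, t[1]))
    return t                                       # constant

def compose(th, ps):
    """Composition of substitutions: applying compose(th, ps) amounts to
    applying ps first, then th."""
    out = {x: subst(th, u) for x, u in ps.items()}
    for x, u in th.items():
        if x not in ps:
            out[x] = u
    return {x: u for x, u in out.items() if u != x}  # drop trivial bindings
```

For instance, composing {y ↦ a} with {x ↦ y • b} yields {x ↦ a • b, y ↦ a}, and applying the composite to any term gives the same result as the two successive applications.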
Definition 3
(renamings and instances)
A renaming is a substitution α that maps variables to variables and is bijective. A term t′ is a renaming of t if t′ = α(t) for some renaming α.
Two substitutions θ and ψ are equal up to renaming if there is a renaming α such that θ = α ∘ ψ.
A substitution ψ is an instance of θ if there is a substitution ρ such that ψ = ρ ∘ θ.
Proposition 4
Let θ and ψ be two substitutions. If θ is an instance of ψ and ψ is an instance of θ, then they are equal up to renaming.
Definition 5
(unification)
Two terms t and u are unifiable if there is a substitution θ, called a unifier of t and u, such that θ(t) = θ(u).
We say that a unifier θ of t and u is a most general unifier (MGU) of t and u if any other unifier of t and u is an instance of θ.
Remark. It follows from Proposition 4 that any two MGU of a pair of terms are equal up to renaming.
We will be interested mostly in the weaker variant of unification where one can first perform renamings on the terms so that their variables are distinct; we therefore introduce a specific vocabulary for it.
Definition 6
(disjointness and matching)
Two terms t and u are matchable if t′ and u′ are unifiable, where t′, u′ are renamings (Definition 3) of t and u such that Var(t′) ∩ Var(u′) = ∅.
If two terms are not matchable, they are said to be disjoint.
Example. x and a • x are not unifiable (because of the occurrence of x on both sides).
But they are matchable, as a • x can be renamed into a • y, which is unifiable with x.
More generally, disjointness is stronger than nonunifiability.
The crucial feature of first-order unification is the (decidable) existence of most general unifiers for unification problems that have a solution.
Proposition 7
(MGU)
If a unification problem has a unifier, then it has a MGU.
Whether two terms are unifiable is decidable and, in case they are, a MGU of them can be computed.
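To make this effective statement concrete, here is a minimal Robinson-style unification procedure (an illustration of ours: the encoding of terms as Python strings and tuples, and all function names, are assumptions rather than notation from the text):

```python
# Terms: a variable is a string starting with '?', a constant is any other
# string, and t • u is the 2-tuple (t, u).
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(th, t):
    """Apply th to t, resolving chains of bindings."""
    if is_var(t):
        return subst(th, th[t]) if t in th else t
    if isinstance(t, tuple):
        return (subst(th, t[0]), subst(th, t[1]))
    return t

def occurs(x, t, th):
    """Occurs check: does variable x appear in t (under th)?"""
    t = subst(th, t)
    if is_var(t):
        return x == t
    return isinstance(t, tuple) and (occurs(x, t[0], th) or occurs(x, t[1], th))

def mgu(t, u, th=None):
    """Return a most general unifier of t and u as a dict, or None."""
    th = dict(th or {})
    t, u = subst(th, t), subst(th, u)
    if t == u:
        return th
    if is_var(t):
        if occurs(t, u, th):
            return None          # no unifier: x against a term containing x
        th[t] = u
        return th
    if is_var(u):
        return mgu(u, t, th)
    if isinstance(t, tuple) and isinstance(u, tuple):
        th = mgu(t[0], u[0], th)
        return None if th is None else mgu(t[1], u[1], th)
    return None                  # clash of distinct constants
```

For instance, `mgu(('?x', 'a'), ('b', '?y'))` returns `{'?x': 'b', '?y': 'a'}`, while `mgu('?x', ('a', '?x'))` returns `None` (the occurs check fires).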
As unification grew in importance, the study of its complexity gained in attention. A complete survey [18] tells the story of the bounds getting sharpened: general first-order unification was finally proved [19] to be a Ptime-complete problem.
In this article, we are concerned with a much simpler case of the problem: the matching (Definition 6) of linear terms (i.e. terms where each variable occurs at most once). This case can be solved in a space-efficient way.
Proposition 8
(matching in logarithmic space [20, Lemma 20])
Whether two linear terms t and u with disjoint sets of variables are unifiable, and if so finding a MGU of them, can be computed in logarithmic space in the size of t and u (the size of a term being the total number of occurrences of symbols in it) on a deterministic Turing machine.
The lemma in [20] actually states that the problem is in NC¹, a complexity class of parallel computation known to be included in Logspace.
We will use only a special case of the result, matching a linear term against a closed term.
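This special case is simple enough to be spelled out: matching a linear term against a closed term is a single structural traversal, with no occurs check and no composition of substitutions (a sketch of ours, with hypothetical names; variables are strings starting with '?', constants other strings, and t • u a 2-tuple):

```python
def match_linear(t, c, th=None):
    """Match a linear term t (each variable occurs at most once) against
    a closed term c. Returns the matching substitution, or None."""
    th = {} if th is None else th
    if isinstance(t, str) and t.startswith('?'):       # variable: just bind it
        th[t] = c
        return th
    if isinstance(t, tuple) and isinstance(c, tuple):  # descend on t1 • t2
        th = match_linear(t[0], c[0], th)
        return None if th is None else match_linear(t[1], c[1], th)
    return th if t == c else None                      # constants must agree
```

Each symbol of the two terms is visited at most once: linearity of t and closedness of c make the occurs check and the composition of bindings unnecessary, which is the intuition behind the space bound above.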
1.2 Flows and Wirings
We now use the notions we just saw to build an algebra with a product based on unification. Let us start with a monoid with a partially defined product, which will be the basis of the construction.
Definition 9
(flows)
A flow is an oriented pair of terms written t ↼ u, with t, u ∈ T such that Var(t) = Var(u).
Flows are considered up to renaming: for any renaming α, t ↼ u = α(t) ↼ α(u).
We will write F the set of (equivalence classes of) flows.
We set I := x ↼ x and (t ↼ u)† := u ↼ t, so that (·)† is an involution of F.
A flow t ↼ u can be thought of as a ‘match … with u -> t’ construct in an ML-style language. The composition of flows follows this intuition.
Definition 10
(product of flows)
Let F = t ↼ u and G = v ↼ w be two flows.
Suppose we have chosen two representatives of the renaming classes of F and G such that their sets of variables are disjoint.
The product of F and G is defined if u and v are unifiable with MGU θ (the choice of a MGU does not matter because of the remark following Definition 5), and in that case: FG := θ(t) ↼ θ(w).
Definition 11
(action on closed terms)
If t ↼ u is a flow and c is a closed term, (t ↼ u)(c) is defined whenever u and c are unifiable, with MGU θ, and in that case (t ↼ u)(c) := θ(t).
Examples. Composition of flows: (x • a ↼ x)(b ↼ c) = (b • a) ↼ c.
Action on a closed term: (x • a ↼ a • x)(a • b) = b • a.
Remark. The condition on variables ensures that the result of the action is a closed term (because Var(t) ⊆ Var(u)) and that the action is injective on its domain of definition (because Var(u) ⊆ Var(t)). Moreover, the action is compatible with the product of flows: (FG)(c) = F(G(c)), and both are defined at the same time.
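The product and action of flows can be prototyped directly on top of unification. The sketch below is our own encoding (a flow t ↼ u is a pair (t, u); variables are strings starting with '?'; representatives are renamed apart by tagging variable names), matching Definitions 10 and 11:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(th, t):
    if is_var(t):
        return subst(th, th[t]) if t in th else t
    if isinstance(t, tuple):
        return (subst(th, t[0]), subst(th, t[1]))
    return t

def occurs(x, t, th):
    t = subst(th, t)
    if is_var(t):
        return x == t
    return isinstance(t, tuple) and (occurs(x, t[0], th) or occurs(x, t[1], th))

def mgu(t, u, th=None):
    th = dict(th or {})
    t, u = subst(th, t), subst(th, u)
    if t == u:
        return th
    if is_var(t):
        if occurs(t, u, th):
            return None
        th[t] = u
        return th
    if is_var(u):
        return mgu(u, t, th)
    if isinstance(t, tuple) and isinstance(u, tuple):
        th = mgu(t[0], u[0], th)
        return None if th is None else mgu(t[1], u[1], th)
    return None

def rename(flow, tag):
    """Choose a representative of the renaming class with tagged variables."""
    def r(t):
        if is_var(t):
            return t + tag
        if isinstance(t, tuple):
            return (r(t[0]), r(t[1]))
        return t
    return (r(flow[0]), r(flow[1]))

def product(f, g):
    """(t ↼ u)(v ↼ w) := θ(t) ↼ θ(w) with θ an MGU of u and v, else None."""
    (t, u), (v, w) = rename(f, '1'), rename(g, '2')
    th = mgu(u, v)
    return None if th is None else (subst(th, t), subst(th, w))

def act(f, c):
    """Action on a closed term c: θ(t) with θ an MGU of u and c, else None."""
    t, u = rename(f, '1')
    th = mgu(u, c)
    return None if th is None else subst(th, t)
```

For instance, `product((('?x', 'a'), '?x'), ('b', 'c'))` yields `(('b', 'a'), 'c')`, the flow (b • a) ↼ c, and `act((('?x', 'a'), ('a', '?x')), ('a', 'b'))` yields `('b', 'a')`, the closed term b • a.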
By adding a formal zero element 0 (representing the failure of unification) to the set of flows, one could turn the product into a completely defined operation, making F ∪ {0} an inverse monoid. However, we will need to consider the wider algebra of sums of flows, which is easily defined directly from the partially defined product.
Definition 12
(wirings)
Wirings are linear combinations of flows with complex coefficients (formally: almost-everywhere null functions from F to C), endowed with the sum computed pointwise, the product obtained by extending the product of flows by bilinearity (undefined products being treated as 0), and the involution (·)† extended from flows, conjugating the coefficients.
We write U the set of wirings and refer to it as the unification algebra.
Remark. Indeed, U is a unital ∗-algebra: it is an associative algebra (considering the product defined above) with an involution (·)† and a unit I = x ↼ x.
Definition 13
(partial isometries)
A partial isometry is a wiring F satisfying F F† F = F.
Example. a ↼ b + b ↼ a is a partial isometry.
While U offers the general algebraic background to work in, we will need to consider particular kinds of wirings to study computation.
Definition 14
(concrete and isometric wirings)
A wiring is concrete whenever it is a sum of flows with all coefficients equal to 1.
An isometric wiring is a concrete wiring that is also a partial isometry.
Given a set A of wirings, we write A⁺ for the set of all concrete wirings of A.
Isometric wirings enjoy a direct characterization.
Proposition 15
(isometric wirings)
The isometric wirings are exactly the wirings of the form Σᵢ tᵢ ↼ uᵢ
with the tᵢ pairwise disjoint (Definition 6) and the uᵢ pairwise disjoint.
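This characterization yields an effective test on concrete wirings, sketched below in our own toy encoding (a wiring is a list of flows (t, u); variables are strings starting with '?'; the helper names are ours): a concrete wiring is a partial isometry exactly when its sources are pairwise disjoint and its targets are pairwise disjoint.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(th, t):
    if is_var(t):
        return subst(th, th[t]) if t in th else t
    if isinstance(t, tuple):
        return (subst(th, t[0]), subst(th, t[1]))
    return t

def occurs(x, t, th):
    t = subst(th, t)
    if is_var(t):
        return x == t
    return isinstance(t, tuple) and (occurs(x, t[0], th) or occurs(x, t[1], th))

def mgu(t, u, th=None):
    th = dict(th or {})
    t, u = subst(th, t), subst(th, u)
    if t == u:
        return th
    if is_var(t):
        if occurs(t, u, th):
            return None
        th[t] = u
        return th
    if is_var(u):
        return mgu(u, t, th)
    if isinstance(t, tuple) and isinstance(u, tuple):
        th = mgu(t[0], u[0], th)
        return None if th is None else mgu(t[1], u[1], th)
    return None

def rename_term(t, tag):
    if is_var(t):
        return t + tag
    if isinstance(t, tuple):
        return (rename_term(t[0], tag), rename_term(t[1], tag))
    return t

def disjoint(t, u):
    """Disjointness (Definition 6): not unifiable once renamed apart."""
    return mgu(rename_term(t, '1'), rename_term(u, '2')) is None

def isometric(flows):
    """Test of Proposition 15 on a concrete wiring given as a list of
    flows (t, u): pairwise disjoint sources and pairwise disjoint targets."""
    for i in range(len(flows)):
        for j in range(i + 1, len(flows)):
            if not disjoint(flows[i][0], flows[j][0]):
                return False
            if not disjoint(flows[i][1], flows[j][1]):
                return False
    return True
```

For instance `isometric([('a', 'b'), ('b', 'a')])` holds, while `isometric([('a', 'b'), ('a', 'c')])` fails, the two flows sharing the source a.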
It will be useful to consider the action of wirings on closed terms. For this purpose we extend Definition 11 to wirings.
Definition 16
(action on closed terms)
Let E be the free vector space with basis T₀.
Wirings act on base vectors of E in the following way: (Σᵢ λᵢ Fᵢ)(c) := Σᵢ λᵢ Fᵢ(c), where the Fᵢ are flows, c ∈ T₀, and Fᵢ(c) is understood as 0 when it is undefined (Definition 11),
which extends by linearity into an action on the whole E.
Isometric wirings have a particular behavior in terms of this action.
Lemma 17
(isometric action)
Let F be an isometric wiring and c a closed term.
We have that F(c) and F†(c) are either 0 or another closed term (seen as an element of E).
1.3 Tensor Product and Permutations
We now define the representation in U of structures that provide enough expressivity to model computation.
Unbounded tensor products will allow us to represent data of arbitrary size, and finite-support permutations will be used to manipulate these data.
Notations. Given any set A of wirings or closed terms, we write ⟨A⟩ the vector space generated by A, i.e. the set of finite linear combinations of elements of A (for instance U = ⟨F⟩).
Moreover, we set Id := ⟨{I}⟩ (with I = x ↼ x as in Definition 9), which is the algebra of multiples of the identity.
For brevity we write “algebra” instead of the more correct “subalgebra of U” (i.e. a subset of U that is stable under linear combinations, product and involution).
Definition 18
(tensor product)
Let F = t ↼ u and G = v ↼ w be two flows.
Suppose we have chosen representatives of these renaming classes that have their sets of variables disjoint. We define their tensor product as
F ⊗ G := (t • v) ↼ (u • w).
The operation is extended to wirings by bilinearity.
Given two algebras A and B, we define their tensor product as the algebra
A ⊗ B := ⟨{ F ⊗ G | F ∈ A, G ∈ B }⟩.
This actually defines an embedding of the algebraic tensor product of algebras into U, which means in particular that (F ⊗ G)(F′ ⊗ G′) = FF′ ⊗ GG′. It ensures also that the operation ⊗ indeed yields algebras.
Notation. As t • (u • v) ≠ (t • u) • v, the operation ⊗ is not associative. We carry on our convention and write it as right associating: A ⊗ B ⊗ C := A ⊗ (B ⊗ C).
Definition 19
(unbounded tensor)
Let A be an algebra.
We define the algebras A^{⊗n} for all n ≥ 1 as A^{⊗1} := A and A^{⊗(n+1)} := A ⊗ A^{⊗n},
and the algebra A^{⊗∞} := ⋃_{n≥1} A^{⊗n}.
We will consider finite permutations, but allow them to be composed even when their domains of definition do not match.
Notations. Let 𝔖ₙ be the set of permutations of {1, …, n}. If σ ∈ 𝔖ₙ and m ≥ n, we define σ↑m ∈ 𝔖ₘ as the permutation extended to {1, …, m} by letting σ↑m(i) = i for i > n.
We also write 𝔖 := ⋃ₙ 𝔖ₙ.
Definition 20
(representation)
To a permutation σ ∈ 𝔖ₙ we associate the flow
[σ] := x_{σ⁻¹(1)} • ⋯ • x_{σ⁻¹(n)} • y ↼ x₁ • ⋯ • xₙ • y.
A permutation σ ∈ 𝔖ₙ will act on the first n components of the unbounded tensor product (Definition 19) by permuting them and leaving the rest unchanged.
The wirings [σ] internalize this action: in the above definition, the variable y at the end stands for the components beyond the n-th, which are not affected.
Example. Let σ ∈ 𝔖₂ be the permutation swapping the two elements of {1, 2}. We have [σ] = x₂ • x₁ • y ↼ x₁ • x₂ • y and [σ](a • b • c) = b • a • c.
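The representation of permutations can be prototyped in the same spirit (our own encoding and helper names: variables are strings starting with '?', constants other strings, t • u a 2-tuple; the permutation is given as a dict on {1, …, n}). Note how the trailing variable lets the same flow act on tensors of any larger length:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(th, t):
    if is_var(t):
        return subst(th, th[t]) if t in th else t
    if isinstance(t, tuple):
        return (subst(th, t[0]), subst(th, t[1]))
    return t

def mgu(t, u, th=None):
    th = dict(th or {})
    t, u = subst(th, t), subst(th, u)
    if t == u:
        return th
    if is_var(t):
        th[t] = u          # no occurs check needed here: we only match
        return th          # linear patterns against closed terms
    if is_var(u):
        return mgu(u, t, th)
    if isinstance(t, tuple) and isinstance(u, tuple):
        th = mgu(t[0], u[0], th)
        return None if th is None else mgu(t[1], u[1], th)
    return None

def perm_flow(sigma):
    """Flow [σ] for σ ∈ S_n, given as a dict {i: σ(i)} on {1, …, n}:
    x_{σ⁻¹(1)} • … • x_{σ⁻¹(n)} • y ↼ x_1 • … • x_n • y."""
    n = len(sigma)
    inv = {v: k for k, v in sigma.items()}
    def chain(idx):
        t = '?y'
        for i in range(n, 0, -1):
            t = ('?x%d' % idx[i], t)
        return t
    return (chain(inv), chain({i: i for i in range(1, n + 1)}))

def act(flow, c):
    """Action of a flow on a closed term."""
    t, u = flow
    th = mgu(u, c)
    return None if th is None else subst(th, t)
```

With σ the swap of {1, 2}, `act(perm_flow({1: 2, 2: 1}), ('a', ('b', 'c')))` yields `('b', ('a', 'c'))`; applied to the longer tensor `('a', ('b', ('c', 'd')))` the same flow yields `('b', ('a', ('c', 'd')))`, the trailing variable leaving the extra components untouched.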
Proposition 21
(representation)
For all σ, τ ∈ 𝔖 we have [σ][τ] = [σ ∘ τ] and [σ]† = [σ⁻¹] (extending the permutations to a common {1, …, n} when needed).
Definition 22
(permutation algebra)
For n ∈ N we set 𝒮ₙ := ⟨{ [σ] | σ ∈ 𝔖ₙ }⟩.
We define then 𝒮 := ⋃ₙ 𝒮ₙ, which we call the permutation algebra.
Proposition 21 ensures that the 𝒮ₙ and 𝒮 are indeed algebras.
2 Words and Observations
The representation of words over an alphabet in the unification algebra comes directly from the translation of Church lists in linear logic and their interpretation in Geometry of Interaction models [11, 16].
This proof-theoretic origin is a useful guide for intuition, even if we give here a more straightforward definition of the notion.
From now on, we fix a set D := {l, r} of two distinguished constant symbols, to be thought of as directions (left and right).
Definition 23
(word algebra)
To a set S of closed terms, we associate the algebra A(S) := ⟨{ u ↼ t | t, u ∈ S }⟩
(which is indeed an algebra because unification of closed terms is simply equality).
The word algebra associated to a finite set Σ of constant symbols is the algebra defined as W_Σ := A(Σ) ⊗ A(D) ⊗ Id ⊗ A(T₀) ⊗ Id.
The words we consider are cyclic, with a begin/end marker ⋆, a reserved constant symbol. For example, the word aab is to be thought of as ⋆aab, read circularly.
We consider therefore that the alphabet Σ always contains the symbol ⋆.
Definition 24
(word representation)
Let w = c₁ ⋯ cₙ be a word over Σ and t₀, t₁, …, tₙ be distinct closed terms.
The representation of w with respect to t₀, …, tₙ is an isometric wiring (Definition 14), defined as
w̄_{t₀,…,tₙ} := Σ_{i=0}^{n} (cᵢ₊₁ • r • y • tᵢ₊₁ • x ↼ cᵢ • r • y • tᵢ • x) + (cᵢ • l • y • tᵢ • x ↼ cᵢ₊₁ • l • y • tᵢ₊₁ • x)
where the indices are computed modulo n + 1 and c₀ := ⋆. The first two components of each term carry the letter at the current position and the direction of travel, the variable y is passed along unchanged (it will match the state component of observations), the tᵢ encode the positions, and the variable x stands for the remaining components.
We now define observations, the programs computing on representations of words. They lie in a particular algebra based on the representation of permutations presented in Sect. 1.3.
Definition 25
(observation algebra)
An observation over a finite set of symbols Σ is any element of (A(Σ) ⊗ A(D) ⊗ A(S) ⊗ 𝒮)⁺, where S is a finite set of closed terms (playing the role of states), i.e. a finite sum of flows of the form
c′ • d′ • q′ • x_{σ⁻¹(1)} • ⋯ • x_{σ⁻¹(k)} • y ↼ c • d • q • x₁ • ⋯ • xₖ • y
with q, q′ ∈ S closed terms, c, c′ ∈ Σ, d, d′ ∈ D, and σ ∈ 𝔖ₖ a permutation.
Moreover when an observation happens to be an isometric wiring, we will call it an isometric observation.
3 Normativity: Independence from Representations
We are going to define how observations accept and reject words. This needs to be discussed, because there is an issue with word representations: an observation is an element of a fixed algebra and can therefore only interact with representations of a word, and there are many possible representations of the same word (in Definition 24, different choices of closed terms t₀, …, tₙ lead to different representations). Therefore one has to ensure that acceptance or rejection is independent of the representation, so that the notion makes the intended sense.
The termination of computations will correspond to the algebraic notion of nilpotency, which we recall here.
Definition 26
(nilpotency)
A wiring F is nilpotent if Fⁿ = 0 for some n ∈ N.
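For concrete wirings, nilpotency up to a given bound can be tested naively by iterating the product of flows. This is a brute-force sketch of ours (nothing like the logarithmic-space procedure of Sect. 4; flows are pairs (t, u) with '?'-prefixed strings as variables, and flows equal only up to renaming are not identified, so the test is meant for closed flows):

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(th, t):
    if is_var(t):
        return subst(th, th[t]) if t in th else t
    if isinstance(t, tuple):
        return (subst(th, t[0]), subst(th, t[1]))
    return t

def occurs(x, t, th):
    t = subst(th, t)
    if is_var(t):
        return x == t
    return isinstance(t, tuple) and (occurs(x, t[0], th) or occurs(x, t[1], th))

def mgu(t, u, th=None):
    th = dict(th or {})
    t, u = subst(th, t), subst(th, u)
    if t == u:
        return th
    if is_var(t):
        if occurs(t, u, th):
            return None
        th[t] = u
        return th
    if is_var(u):
        return mgu(u, t, th)
    if isinstance(t, tuple) and isinstance(u, tuple):
        th = mgu(t[0], u[0], th)
        return None if th is None else mgu(t[1], u[1], th)
    return None

def rename(flow, tag):
    def r(t):
        if is_var(t):
            return t + tag
        if isinstance(t, tuple):
            return (r(t[0]), r(t[1]))
        return t
    return (r(flow[0]), r(flow[1]))

def product(f, g):
    (t, u), (v, w) = rename(f, '1'), rename(g, '2')
    th = mgu(u, v)
    return None if th is None else (subst(th, t), subst(th, w))

def nilpotent(W, bound):
    """Is some power W^k of the concrete wiring W (a list of flows) equal
    to 0, for k <= bound? Sound only up to the given bound."""
    V = list(W)
    for _ in range(bound):
        if not V:
            return True
        nxt = []
        for f in V:
            for g in W:
                h = product(f, g)
                if h is not None and h not in nxt:
                    nxt.append(h)
        V = nxt
    return not V
```

For instance `nilpotent([('a', 'b'), ('b', 'c')], 5)` holds (the wiring a ↼ b + b ↼ c cubes to 0), while `nilpotent([('a', 'b'), ('b', 'a')], 5)` fails: the powers of that wiring never vanish.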
Definition 27
(automorphism)
An automorphism of an algebra A is a bijective linear application φ : A → A such that for all F, G ∈ A: φ(FG) = φ(F)φ(G) and φ(F†) = φ(F)†.
Example. The map exchanging two constants a and b in every term defines an automorphism of U.
Notation. If φ is an automorphism of A and ψ is an automorphism of B, we write φ ⊗ ψ the automorphism of A ⊗ B defined for all F ∈ A and G ∈ B as (φ ⊗ ψ)(F ⊗ G) := φ(F) ⊗ ψ(G).
Definition 28
(normative pair)
A pair (A, B) of algebras is a normative pair whenever any automorphism φ of A can be extended into an automorphism φ̄ of the algebra generated by A and B such that φ̄(G) = G for any G ∈ B.
The two following propositions set the basis for a notion of acceptance/rejection independent of the representation of a word.
Proposition 29
(automorphic representations)
Any two representations of a word w over Σ are automorphic: there exists an automorphism φ of A(T₀) ⊗ Id such that w̄_{t₀′,…,tₙ′} = φ(w̄_{t₀,…,tₙ}) (φ being applied to the position components of the representation).
Proof. Consider a bijection β : T₀ → T₀ such that β(tᵢ) = tᵢ′ for all i. Then set φ(u ↼ t) := β(u) ↼ β(t), extended by linearity. ∎
Proposition 30
(nilpotency and normative pairs)
Let (A, B) be a normative pair and φ an automorphism of A. Let
F ∈ A, and let O ∈ B. Then O F is nilpotent if and only if O φ(F) is nilpotent.
Proof. Let φ̄ be the extension of φ as in Definition 28 and k ∈ N.
We have for all k that (O φ(F))ᵏ = φ̄((O F)ᵏ).
By injectivity of φ̄, (O F)ᵏ = 0 if and only if (O φ(F))ᵏ = 0. ∎
Corollary 31
(independence)
Let (A, B) be a normative pair, w a word over Σ and O ∈ B an observation. The product O w̄ of O with a representation w̄ of w is nilpotent for one choice of the closed terms t₀, …, tₙ if and only if it is nilpotent for all choices of them.
The basic components of the word and observation algebras we introduced earlier can be shown to form a normative pair.
Theorem 32
The pair (A(T₀) ⊗ Id, 𝒮) is normative.
Proof (sketch). By simple computations, the set of linear combinations of products of elements of A(T₀) ⊗ Id and 𝒮
can be shown to be a ∗-algebra: the algebra generated by the two.
An automorphism φ of A(T₀) ⊗ Id can be written as φ(u ↼ t) = β(u) ↼ β(t) for all t, u ∈ T₀, with β a bijection of T₀.
We set φ̄ to apply β componentwise to the closed components of flows, which extends φ into an automorphism of the generated algebra by linearity. Finally, φ̄ leaves each [σ] unchanged, as these flows contain only variables. It is then easy to check that φ̄ has the required properties. ∎
Remark. Here we sketched a direct proof for brevity, but this can also be shown by involving a little more mathematical structure (actions of permutations on the unbounded tensor and crossed products) which would give a more synthetic proof.
We can then define the notion of the language recognized by an observation, via Corollary 31.
Definition 33
(language of an observation)
Let O be an observation over Σ.
The language recognized by O is the following set of words over Σ:
L(O) := { w | O w̄ is nilpotent for any representation w̄ of w }.
4 Wirings and Logarithmic Space
Now that we have defined our framework and shown how observations can compute, we study the complexity of deciding whether an observation accepts a word (Sect. 4.1), and how wirings can decide any language in (N)Logspace (Sect. 4.2).
4.1 Soundness of Observations
The aim of this subsection is to prove the following theorem:
Theorem 34
(space soundness)
Let O be an observation over Σ.
1. L(O) is decidable in nondeterministic logarithmic space.
2. If O is isometric, then L(O) is decidable in deterministic logarithmic space.
Actually, the result stands for the complements of these languages, but as coNLogspace = NLogspace by the Immerman–Szelepcsényi theorem, this makes no difference.
The main tool for this purpose is the notion of computation space: finite-dimensional subspaces of E (Definition 16) on which we will be able to observe the behavior of certain wirings. It can be understood as the place where all the relevant computation takes place.
Definition 35
(separating space)
A subspace S of E is separating for a wiring F whenever F(S) ⊆ S and, for any k, the fact that Fᵏ(v) = 0 for all v ∈ S implies Fᵏ = 0.
Observations are finite sums of flows. We can naturally associate a finite-dimensional vector space to an observation and a finite set of closed terms.
Definition 36
(computation space)
Let T = {t₀, …, tₙ} be a set of distinct closed terms and O an observation.
Let k be the smallest integer and S the smallest (finite) set of closed terms such that O ∈ (A(Σ) ⊗ A(D) ⊗ A(S) ⊗ 𝒮ₖ)⁺ (Definition 25).
The computation space Comp_O(T) is the subspace of E generated by the terms
c • d • q • u₁ • ⋯ • uₖ₊₁
where c ∈ Σ, d ∈ D, q ∈ S, and the uⱼ ∈ T.
The dimension of Comp_O(T) is |Σ| × |D| × |S| × |T|^{k+1} (where |X| is the cardinal of X), which is polynomial in |T|.
Lemma 37
(separation)
For any observation O and any word w represented as w̄ with respect to the closed terms T = {t₀, …, tₙ}, the space Comp_O(T) is separating for the wiring O w̄.
Proof (of Theorem 34). With these lemmas at hand, we can define the nondeterministic algorithm below. It takes as an input the representation w̄ of a word w of length n.
The observation O being a constant of the problem, one can compute once and for all the integer k and the set S of Definition 36.
All computation paths (the “pick” at lines 3 and 8 being nondeterministic choices) accept if and only if (O w̄)^N = 0 for some N lesser or equal to the dimension of the computation space Comp_O(T). By Lemma 37, this is equivalent to O w̄ being nilpotent.
The term chosen at line 3 is representable by an integer of size at most logarithmic in the dimension of Comp_O(T) and is erased by the one chosen at line 8 every time we go through the while loop. The counters involved are integers proportional to the dimension of the computation space, hence representable in logarithmic space in the size of the input (Definition 36).
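Since the listing is easier to follow with an executable counterpart, here is a deterministic, brute-force rendering of the underlying idea (our own encoding and names; it tracks the whole set of reachable closed terms instead of nondeterministically picking one, so it runs in polynomial time but certainly not in logarithmic space):

```python
# A concrete wiring is a list of flows (t, u); variables are strings
# starting with '?', constants other strings, t • u a 2-tuple.
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def subst(th, t):
    if is_var(t):
        return subst(th, th[t]) if t in th else t
    if isinstance(t, tuple):
        return (subst(th, t[0]), subst(th, t[1]))
    return t

def mgu(t, u, th=None):
    th = dict(th or {})
    t, u = subst(th, t), subst(th, u)
    if t == u:
        return th
    if is_var(t):
        th[t] = u            # patterns are matched against closed terms,
        return th            # so the occurs check can be skipped
    if is_var(u):
        return mgu(u, t, th)
    if isinstance(t, tuple) and isinstance(u, tuple):
        th = mgu(t[0], u[0], th)
        return None if th is None else mgu(t[1], u[1], th)
    return None

def rename_term(t, tag):
    if is_var(t):
        return t + tag
    if isinstance(t, tuple):
        return (rename_term(t[0], tag), rename_term(t[1], tag))
    return t

def step(flows, terms):
    """One application of the wiring: image of a set of closed terms."""
    out = []
    for t, u in flows:
        for c in terms:
            th = mgu(rename_term(u, '1'), c)
            if th is not None:
                r = subst(th, rename_term(t, '1'))
                if r not in out:
                    out.append(r)
    return out

def nilpotent_on(flows, terms, bound):
    """Does iterating the wiring kill every closed term of `terms`
    within `bound` steps?"""
    cur = list(terms)
    for _ in range(bound):
        if not cur:
            return True
        cur = step(flows, cur)
    return False
```

On a separating space this test decides nilpotency; the logarithmic-space algorithm follows a single nondeterministically chosen term instead of the whole set, and only counts iterations up to the dimension of the computation space.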
4.2 Completeness: Representing Pointer Machines as Wirings
To prove the converse of Theorem 34, we show that wirings can encode a special kind of read-only multi-head Turing machine: pointer machines. The definition of this model is guided by our understanding of the computation of wirings: the machines won't have the ability to write, and acceptance will be defined as termination of all paths of computation. For a survey of this topic, one may consult the first author's thesis [21, Chap. 4]; the main novelty of this part of our work is to notice that reversible computation is represented by isometric operators.
Definition 38
(pointer machine)
A pointer machine over an alphabet Σ is a tuple M = (k, Q, →) where
1. k ∈ N is an integer, the number of pointers,
2. Q is a finite set, the states of the machine,
3. → ⊆ (Σ × D × Q) × (Σ × D × Q × 𝔖ₖ) is the set of transitions of the machine
(we will write (c, d, q) → (c′, d′, q′, σ) the transitions, for readability).
A pointer machine will be called deterministic if for any (c, d, q), there is at most one (c′, d′, q′) and one σ such that (c, d, q) → (c′, d′, q′, σ). In that case we can see → as a partial function, and we say that M is reversible if this partial function is a partial injection.
We call the first of the k pointers the main pointer; it is the only one that can move. The other pointers are referred to as the auxiliary pointers. An auxiliary pointer can become the main pointer during the computation, thanks to the permutations σ in the transitions.
Definition 39
(configuration)
Given the length n of a word w over Σ and a pointer machine M with k pointers, a configuration of M is an element of Σ × D × Q × (Z/(n+1)Z)ᵏ.
The element of Q is the state of the machine and the element of Σ is the letter the main pointer points at. The element of D is the direction of the next move of the main pointer, and the elements of Z/(n+1)Z correspond to the positions of the (main and auxiliary) pointers on the input.
As the input tape is considered cyclic with the special symbol ⋆ marking the beginning of the word (recall Definition 24), the pointer positions are integers modulo n + 1 for an input word of length n.
Definition 40
(transition)
Let w be a word of length n over Σ and M be a pointer machine with k pointers.
A transition of M on input w is a triple of configurations
(c, d, q, p₁, …, pₖ) →ₘ (c̃, d̃, q, p̃₁, p₂, …, pₖ) →ₛ (c′, d′, q′, p̃_{σ(1)}, …, p̃_{σ(k)})
such that
1. d̃ is the other element of D than d,
2. p̃₁ = p₁ + 1 if d = r, and p̃₁ = p₁ − 1 if d = l,
3. p̃ᵢ = pᵢ for i = 2, …, k,
4. c is the letter at position p₁ and c̃ is the letter at position p̃₁,
5. and (c̃, d̃, q) → (c′, d′, q′, σ) belongs to →.
There is no constraint on c′, but every time this value differs from the letter the main pointer actually points at, the computation will halt on the next MOVE phase, because there is a mismatch between the value that is supposed to have been read and the actual letter of w stored at this position, and that would contradict the first part of item 4. In terms of wirings, the MOVE phase →ₘ corresponds to the application of the representation of the word, whereas the SWAP phase →ₛ corresponds to the application of the observation.
Definition 41
(acceptance)
We say that M accepts w if any sequence of configurations C₁, C₂, … such that Cᵢ →ₘ →ₛ Cᵢ₊₁ for all i is necessarily finite.
We write L(M) the set of words accepted by M.
This means informally: we consider that a pointer machine accepts a word if it cannot ever loop, whatever configuration it starts from. That a lot of computation paths accept “wrongly” is no worry, since only rejection is meaningful: our pointer machines compute in a “universally nondeterministic” way, to stick to the acceptance condition of wirings, nilpotency.
Proposition 42
(space and pointer machines)
If L ∈ NLogspace, then there exists a pointer machine M such that L(M) = L.
Moreover, if L ∈ Logspace, then M can be chosen to be reversible.
Proof (sketch). It is well-known that read-only Turing machines – or equivalently (non)deterministic multi-head finite automata – characterize (N)Logspace [22]. It takes little effort to see that our pointer machines are just a reasonable rearrangement of this model, since it is always possible to encode the missing information in the states of the machine.
That acceptance and rejection are “reversed” is harmless in the deterministic (or equivalently reversible [23]) case, and one uses coNLogspace = NLogspace to get the expected result in the nondeterministic case. ∎
As we said, our pointer machines are designed to be easily simulated by wirings, so that we get the expected result almost for free.
Theorem 43
(space completeness)
If L ∈ NLogspace, then there exists an observation O such that L(O) = L.
Moreover, if L ∈ Logspace, then O can be chosen to be an isometric wiring.
Proof. By Proposition 42, there exists a pointer machine M = (k, Q, →) such that L(M) = L. We associate to the set Q a set of distinct closed terms and write q̄ the term associated to q ∈ Q. To any element (c, d, q) → (c′, d′, q′, σ) of → we associate the flow
c′ • d′ • q̄′ • x_{σ⁻¹(1)} • ⋯ • x_{σ⁻¹(k)} • y ↼ c • d • q̄ • x₁ • ⋯ • xₖ • y
and we define the observation O as the sum of these flows.
One can easily check that this translation preserves the language recognized (there is even a step-by-step simulation of the computation of M on w by the wiring O w̄) and relates reversibility with isometricity: in fact, M is reversible if and only if O is an isometric wiring. Then, if L ∈ Logspace, M is deterministic and can always be chosen to be reversible [23]. ∎
Discussion
The language of the unification algebra gives us a twofold point of view on computation, either through algebraic structures (that are described finitely by wirings) or through pointer machines. We may therefore start exploring possible variations of the construction, combining intuitions from both worlds.
For instance, the choice of a normative pair can affect the expressivity of the construction: the more restrictive the notion of representation of a word is, the more liberal that of an observation can become, as suggested by T. Seiller. Whether and how this can affect the corresponding complexity class is definitely a direction for future work.
Another pending question about this approach to complexity classes is to delimit the minimal prerequisites of the construction, its core.
Earlier works [13, 14, 15] made use of von Neumann algebras to get a setting that is expressive enough; we lighten the construction by using simpler objects. Yet, the possibility of representing the action of permutations on an unbounded tensor product is a common denominator that seems deeply related to logarithmic space and pointer machines.
The logical counterpart of this work also needs clarifying. Indeed, the idea of representation of words comes directly from prooftheory, while the notion of observation does not seem to correspond to any known logical construction.
Finally, since execution in our setting is based on iterated matching, which can be computed efficiently by parallel machines, it seems possible to relate our model to parallel computation.
References
 [1] Girard, J.Y.: Linear logic. Theoretical Computer Science 50 (1987) 1–102
 [2] Girard, J.Y., Scedrov, A., Scott, P.J.: Bounded Linear Logic: A Modular Approach to Polynomial Time Computability. Theoretical Computer Science 97(1) (1992) 1–66
 [3] Girard, J.Y.: Light linear logic. In Leivant, D., ed.: Logic and Computational Complexity. Volume 960 of Lecture Notes in Computer Science. (1995) 145–176
 [4] Schöpp, U.: Stratified Bounded Affine Logic for Logarithmic Space. In: LICS, IEEE Computer Society (2007) 411–420
 [5] Dal Lago, U., Hofmann, M.: Bounded Linear Logic, Revisited. Logical Methods in Computer Science 6(4) (2010)
 [6] Gaboardi, M., Marion, J.Y., Ronchi Della Rocca, S.: An Implicit Characterization of PSPACE. ACM Transactions on Computational Logic 13(2) (2012) 18
 [7] Baillot, P., Mazza, D.: Linear logic by levels and bounded time complexity. Theoretical Computer Science 411(2) (2010) 470–503
 [8] Girard, J.Y.: Towards a Geometry of Interaction. In: Proceedings of the AMS Conference on Categories, Logic and Computer Science. (1989)
 [9] Asperti, A., Danos, V., Laneve, C., Regnier, L.: Paths in the lambda-calculus. In: LICS, IEEE Computer Society (1994) 426–436
 [10] Laurent, O.: A token machine for full geometry of interaction (extended abstract). In Abramsky, S., ed.: Typed Lambda Calculi and Applications. Volume 2044 of Lecture Notes in Computer Science. Springer Berlin Heidelberg (2001) 283–297
 [11] Girard, J.Y.: Geometry of interaction 1: Interpretation of System F. Studies in Logic and the Foundations of Mathematics 127 (1989) 221–260
 [12] Baillot, P., Pedicini, M.: Elementary Complexity and Geometry of Interaction. Fundamenta Informaticae 45(12) (2001) 1–31
 [13] Girard, J.Y.: Normativity in Logic. In: Epistemology versus Ontology. Volume 27 of Logic, Epistemology, and the Unity of Science. Springer (2012) 243–263
 [14] Aubert, C., Seiller, T.: Characterizing co-NL by a group action. arXiv preprint abs/1209.3422 (2012)
 [15] Aubert, C., Seiller, T.: Logarithmic Space and Permutations. arXiv preprint abs/1301.3189 (2013)
 [16] Girard, J.Y.: Geometry of Interaction III: Accommodating the Additives. In: Advances in Linear Logic. Volume 222 of London Mathematical Society Lecture Note Series, Cambridge University Press (1995) 329–389
 [17] Girard, J.Y.: Three lightings of logic (Invited Talk). In Ronchi Della Rocca, S., ed.: CSL. Volume 23 of Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013) 11–23
 [18] Knight, K.: Unification: A Multidisciplinary Survey. ACM Computing Surveys 21(1) (1989) 93–124
 [19] Dwork, C., Kanellakis, P.C., Mitchell, J.C.: On the sequential nature of unification. Journal of Logic Programming 1(1) (1984) 35–50
 [20] Dwork, C., Kanellakis, P.C., Stockmeyer, L.J.: Parallel Algorithms for Term Matching. SIAM Journal on Computing 17(4) (1988) 711–731
 [21] Aubert, C.: Linear Logic and Subpolynomial Classes of Complexity. PhD thesis, Université Paris 13 – Sorbonne Paris Cité (November 2013)
 [22] Hartmanis, J.: On Non-Determinancy in Simple Computing Devices. Acta Informatica 1(4) (1972) 336–344
 [23] Lange, K.J., McKenzie, P., Tapp, A.: Reversible Space Equals Deterministic Space. Journal of Computer and System Sciences 60(2) (2000) 354–367