Axiomatization of Finite Algebras
Abstract
We show that the set of all formulas in a given tuple of variables that are valid in a finite class of finite algebras is always a regular tree language, and we compute a finite axiom set for it. We give a rational reconstruction of Barzdins’ liquid flow algorithm [BB91]. We show a sufficient condition for the existence of a class of prototype algebras for a given theory. Such a class allows one to prove a formula simply by testing whether it holds in the prototype algebras.
1 Introduction
Abstraction is a key issue in artificial intelligence. In the setting of mathematical logic and model theory, it is concerned with the relation between concrete algebras and abstract statements about them in a formal language. For purely equational theories, the well–known construction of an initial model (e.g. [DJ90]) allows one to compute a kind of prototypical algebra for a given theory. In the other direction (i.e. from concrete algebras to theories), however, no computable procedures are yet known. While it is trivial to check whether a given formula is valid in a given finite algebra, it is not clear how to find a finite description of all valid formulas.
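The brute-force validity check alluded to above can be sketched as follows. The two-element algebra and all names (`domain`, `ops`, `valid`) are illustrative assumptions, not taken from the paper; the point is only that validity of a universally quantified equation in a finite algebra is decidable by exhausting all variable assignments.

```python
from itertools import product

# A hypothetical finite algebra, given by its carrier and operation tables.
domain = {0, 1}
ops = {"and": lambda x, y: x & y,
       "or":  lambda x, y: x | y}

def eval_term(term, env):
    """Evaluate a term, encoded as a variable name or a nested tuple
    ('f', t1, ..., tn), under an assignment env of variables to values."""
    if isinstance(term, str):
        return env[term]
    f, *args = term
    return ops[f](*(eval_term(a, env) for a in args))

def valid(lhs, rhs, variables):
    """Check validity of (forall variables. lhs = rhs) in the algebra
    by trying every assignment of domain elements to the variables."""
    return all(eval_term(lhs, dict(zip(variables, vals)))
               == eval_term(rhs, dict(zip(variables, vals)))
               for vals in product(domain, repeat=len(variables)))

# Absorption law x and (x or y) = x holds; x or y = x does not.
print(valid(("and", "x", ("or", "x", "y")), "x", ["x", "y"]))   # True
print(valid(("or", "x", "y"), "x", ["x", "y"]))                 # False
```

The hard problem the paper addresses is the converse direction: finding a finite description of *all* equations that pass this test.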
In 1991, Barzdin and Barzdin [BB91] proposed their liquid–flow algorithm, which takes an incompletely given finite algebra and acquires hypotheses about probable axioms. We give a rational reconstruction of this work that is based on well–known algorithms on regular tree grammars. We give a correspondence between Barzdins’ notions and grammar notions, showing that the liquid–flow algorithm in fact amounts to a combination of classical grammar algorithms (Thm. 3.1).
The correspondence leads to synergies in both directions: Barzdins’ approach could be extended somewhat, and a classical algorithm seems to be improvable in its time complexity using the liquid–flow technique.
Next, we focus on a completely given algebra and show how to compute finite descriptions of the set of all variable–bounded formulas valid in it. This set is described by a grammar (Thm. 3.2) and by an axiom set (Thm. 3.4).
We relate our work to Birkhoff’s variety theorem [MT92], which states that a class of algebras can be characterized by equational axioms only up to its variety closure. If a finite class of finite algebras is finitely axiomatizable at all, we can compute an equational axiom set for it (Cor. 1).
As an application in the field of automated theorem proving, we give a sufficient criterion for establishing whether a class of algebras is a prototype class for a given theory (Cor. 2). If the criterion applies, the validity of any formula in the given variables can be decided quickly and simply by merely testing it in the prototype class, avoiding the search space of usual theorem-proving procedures: the formula follows from the theory if and only if it is satisfied in every prototype algebra.
Section 2 recalls some formal definitions. In order to make this paper self–contained, we also recall well–known results on regular tree grammars that are used in the sequel. Section 3 first gives a rational reconstruction of Barzdins’ liquid flow algorithm; then we show how to compute an axiom set for a finite class of finite algebras. In Sect. 4 and 5, we discuss the applications to Birkhoff characterizations and to prototype algebras in theorem proving, respectively. A full version including all proofs can be found in [Bur02].
2 Definitions and Notations
Definition 1
[Sorted term, substitution] We assume familiarity with the classic definitions of terms and substitutions in a many–sorted framework. Let be a finite set of sorts. A signature is a set of function symbols , each of which has a fixed domain and range. Let be an infinite set of variables, each of a fixed sort. For and , denotes the set of all well–sorted terms of sort over and ; let . Let denote the unique sort of a term . denotes a well–sorted substitution that maps each variable to the term . ∎
Definition 2
[Algebra] We consider w.l.o.g. term algebras factorized by a set of operation–defining equations. In this setting, a finite many–sorted algebra of signature is given by a nonempty finite set of constants for each sort and a set consisting of exactly one equation for each with and each , where . The are just the domains of for each sort , while defines the operations from on these domains. Define . We write for the congruence relation induced by ; each ground term equals exactly one . ∎
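Def. 2 can be sketched concretely: a finite algebra is an operation table, and each table entry induces one operation-defining ground equation. The two-element group and all names below are a hypothetical example, not the paper's notation.

```python
# Hypothetical two-element group: constants name the domain elements,
# and each table entry f(a1, ..., an) = a is one defining equation.
domain = ["e", "x"]
table = {("mul", ("e", "e")): "e",
         ("mul", ("e", "x")): "x",
         ("mul", ("x", "e")): "x",
         ("mul", ("x", "x")): "e"}

# The defining equation set: exactly one ground equation per table entry.
equations = [f"{f}({', '.join(args)}) = {r}" for (f, args), r in table.items()]
print(equations)
# ['mul(e, e) = e', 'mul(e, x) = x', 'mul(x, e) = x', 'mul(x, x) = e']
```

Under the induced congruence, every ground term then equals exactly one of the constants, as the definition states.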
We will only allow closed quantified equations as formulas. This is sufficient since an arbitrary formula can always be transformed into prenex normal form, and we can model predicates and junctors by functions into a dedicated sort .
Definition 3
[Formula, theory] For a –tuple such that for , define as the set of all quantifier prefixes over . Any expression of the form for and is called a formula over and . We will sometimes omit the index of . We denote a formula by , and a set of formulas, also called theory, by . ∎
When encoding predicates and junctors using a dedicated truth-value sort, it is necessary and sufficient to fix the interpretation of this sort accordingly for every algebra under consideration in order to obtain an appropriate semantics (if in some algebra the interpretations of the truth values coincided, any formula would be valid in it). Therefore, we define below the notion of an admitted algebra, and let the definitions of validity etc. depend on it.
We tacitly assume that

when we consider only equations, each algebra is admitted, while,

when we consider arbitrary predicates, junctors and a dedicated truth-value sort, only algebras with an appropriate interpretation of that sort are admitted.
Definition 4
[Admitted algebras] Let a signature be given. Let be a set of sorts; and let be the set of all that have all argument and result sorts in . Let a fixed –algebra be given; we denote its domain sets by .
We say that a –algebra is admitted if for each and for all and . ∎
Definition 5
[Validity] For an admitted –algebra and a formula , we write if is valid in , where equality symbols in are interpreted as identity relations on , rather than by an arbitrary congruence on it. For a class of admitted –algebras , and a theory , we similarly write , , and . Define if implies for each admitted –algebra .
If we choose , each algebra is admitted. Choosing , , and as the two–element Boolean algebra, we prescribe the interpretation of for each admitted algebra. ∎
Definition 6
[Complete theorem sets] For a –algebra , and a tuple of variables as in Def. 3, define
as the set of all formulas over and that are valid in . The elements of can be considered as terms over the extended signature
For a class of –algebras, define . ∎
Example 2.1
The algebra , defined by and , is a –algebra for . The set contains the formula , but not , since . ∎
Definition 7
[Regular tree grammar] A regular tree grammar consists of rules whose right-hand sides are built from expressions of the form f(N1,…,Nn) or from single nonterminals, where the Ni are nonterminal symbols and f is a function symbol; note that n may also be 0. Each such f(N1,…,Nn) or nonterminal on a right-hand side is called an alternative. Nonterminals are each assigned a sort, which has to fit with the sorts of the alternatives they occur in.
The size of a grammar is its total number of alternatives. The language produced by a nonterminal of a grammar is a set of ground terms over the signature; if the nonterminal is the grammar’s start symbol, we also speak of the language of the grammar itself. A grammar is called deterministic if no two different rules have identical alternatives.
Define the generalized height of a ground term by
where the height of a constant, and the base case in general, may be defined arbitrarily. For a nonterminal of a grammar, define its height as the minimal height of any term in its language; it is infinite if the language is empty. ∎
Theorem 2.2
[Properties of regular tree grammars]

Incorporating [McA92, Sect.6]
Given a finite many–sorted –algebra , a grammar of size can be computed in time such that each nonterminal produces exactly the ground terms evaluating to the corresponding domain element.
Externing [McA92, Sect.3]
Given a deterministic grammar , a set of ground equations can be computed in time such that, for all ground terms, derivability from the same nonterminal coincides with congruence modulo the computed equation set.

Lifting [CDG99, Thm.7 in Sect.1.4]
Given a grammar and a ground substitution , a grammar of size can be computed in time whose language consists of exactly those terms whose instance under the substitution lies in the original language. Note that the signature gets extended by the substituted variables; these variables are treated as constants in the new grammar.

Intersection [CDG99, Sect.1.3]
Given grammars, a grammar of size proportional to the product of their sizes can be computed in the same time bound such that, for each pair of nonterminals, the language produced is the intersection of the languages of its components.
Restriction [special case of 4]
Intersection of one grammar with a term universe over a sub-signature, such that the resulting language is the restriction of the original one to that universe, can be done in time linear in the grammar size by removing all symbols not in the sub-signature.

Union [CDG99, Sect.1.3]
Given grammars, a grammar of size can be computed in time, by adding one rule, such that its language is the union of the given grammars’ languages.

Composition [Trivial]
Given an –ary function symbol , a grammar , and nonterminals , a grammar of size can be computed in time, by adding one rule, such that a certain new nonterminal produces exactly the terms built from the given function symbol applied to terms produced by the given nonterminals.

Weight computation [AM91, Sect.4]
Given a grammar , the heights can be computed for all nonterminals simultaneously in time . 
Language enumeration [BH96, Fig.21]
Given a grammar and the heights of all nonterminals, the elements of can be enumerated in order of increasing height in time linear in the sum of their sizes by a simple Prolog program. ∎
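The “Incorporating” construction of Thm. 2.2.1 can be sketched as follows: the nonterminals are the domain elements, and each operation-table entry f(b1,…,bn) = a contributes one alternative to the nonterminal of a, so that each nonterminal produces exactly the ground terms evaluating to its element. The Boolean algebra and all names here are illustrative assumptions, not from the paper.

```python
from itertools import product

# Illustrative finite algebra: Booleans with two constants and two
# binary operations, given as evaluation functions plus arities.
ops = {"tt": lambda: 1, "ff": lambda: 0,
       "and": lambda x, y: x & y, "or": lambda x, y: x | y}
arity = {"tt": 0, "ff": 0, "and": 2, "or": 2}
domain = [0, 1]

# Incorporating: nonterminal a gets the alternative (f, (b1, ..., bn))
# whenever f(b1, ..., bn) = a holds in the algebra.  The grammar size is
# bounded by the total number of operation-table entries.
rules = {a: [] for a in domain}
for f, n in arity.items():
    for args in product(domain, repeat=n):
        rules[ops[f](*args)].append((f, args))

# The nonterminal for 1 now produces exactly the ground terms that
# evaluate to 1: tt, or-terms with at least one 1-argument, and(1,1), ...
print(rules[1])
```

Membership of a ground term in a nonterminal’s language can then be decided bottom-up, which is the automaton view of the same grammar.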
3 Equational Theories of Finite Algebras
First, we give a rational reconstruction of the liquid flow algorithm of Barzdin and Barzdin [BB91]. They use labeled graphs to compute an axiom set from an incompletely given finite algebra. Their approach can be reformulated in terms of regular tree languages using the correspondence of notions shown in Fig. 1. Our following theorem corresponds to their main result, Thm. 2. It is in fact a slight extension, as it allows for sorts and for substitutions that map several variables to the same value.
On the other hand, the liquid flow algorithm turns out to be an improvement of the weight computation algorithm from Thm. 2.2.8. Both are fixpoint algorithms, and identical except for minor, but important, modifications. The algorithm from Thm. 2.2.8 has a higher complexity, while Barzdins’ algorithm runs in linear time, exploiting the fact that, by monotonicity, the first value assigned to a nonterminal must already be its final one. Since in each cycle at least one nonterminal must change its assigned value (unless the fixpoint has been reached already), by an appropriate incremental technique (water front), linear complexity can be achieved. A formal complexity proof of this improved grammar fixpoint algorithm, extended to somewhat more general weight definitions, shall appear in [Bur02].
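The naive weight-computation fixpoint of Thm. 2.2.8 can be sketched as below, on the same alternative-list representation; the toy grammar is a made-up example. Barzdins’ improvement replaces the repeated full sweeps by a “water front” worklist that settles each nonterminal once, in order of discovered height, much like a breadth-first search.

```python
import math

def heights(rules):
    """Naive fixpoint: height(N) is the minimum over alternatives
    (f, (N1, ..., Nk)) of 1 + max(height(Ni)); a nullary alternative
    yields height 0 (one common convention, cf. Def. 7).  The full
    sweeps make this roughly quadratic in the worst case."""
    h = {n: math.inf for n in rules}
    changed = True
    while changed:
        changed = False
        for n, alts in rules.items():
            for f, kids in alts:
                cand = 1 + max((h[k] for k in kids), default=-1)
                if cand < h[n]:
                    h[n] = cand
                    changed = True
    return h

# Toy grammar:  N1 ::= tt | and(N1, N1),   N0 ::= neg(N1) | and(N0, N1)
rules = {1: [("tt", ()), ("and", (1, 1))],
         0: [("neg", (1,)), ("and", (0, 1))]}
print(heights(rules))   # {1: 0, 0: 1}
```

A nonterminal whose language is empty keeps the height infinity, matching the convention of Def. 7.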
Theorem 3.1
[Reconstruction of Barzdin] Given a –algebra , domain elements , and defining for and , the set of terms is a regular tree language. A grammar for it can be computed in time . After computing nonterminal weights in time , the language elements can be enumerated in order of increasing height in linear time.
Proof
Barzdin [BB91] | Tree Grammars
sample | equations from Def. 2
open term, level | term in , height
closed term | term in 
sample graph | grammar
domain node | nonterminal
functional node | expression
upper node | , if 
lower nodes | 
node weight | language height from Def. 7
chain of dotted arcs | rule rhs with alternatives ordered by increasing height
annotated sample graph | grammar with heights of nonterminals obtained from Thm. 2.2.8
liquid–flow algorithm | (improved) height computation algorithm from Thm. 2.2.8
–term | term in 
minimal –term of domain node | term of minimal height
minimal –term of functional node | term of minimal height
Theorem 2 | Theorem 3.1
Theorem 1 | Theorem 3.1 for 
Barzdin and Barzdin allow an algebra to be specified incompletely, since their main goal is to acquire hypotheses about probable axioms.
We now investigate the special case that the substitutions in Thm. 3.1 describe all possible assignments of algebra domain elements to the variables . This way, we obtain certainty about the computed axioms – they are guaranteed to be valid in the given algebra.
Theorem 3.2
[Computing complete theorem sets] Let be a –tuple of variables, be a finite –algebra and a finite class of finite –algebras, then and are regular tree languages.
Proof (sketch)
Define . For each , let . Let and for . For and , let , where is a new binary infix function symbol. Let for each . We have
Now, for each , apply set operations corresponding to to the ; e.g. if and , let
In general, we get
For each , let , where is a new unary prefix function symbol. Let , then . From this, we immediately get a grammar for by Thm. 2.2.4. ∎
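The intersection step invoked at the end of the proof (Thm. 2.2.4) is the classical product construction on tree grammars. A minimal sketch on the alternative-list representation, with made-up toy grammars:

```python
from itertools import product

def intersect(rules1, rules2):
    """Product grammar: the pair nonterminal (n, m) gets the alternative
    (f, ((k1, l1), ..., (kj, lj))) whenever n has (f, (k1, ..., kj)) in
    the first grammar and m has (f, (l1, ..., lj)) in the second; the
    result has size O(|G1| * |G2|), matching the theorem's bound."""
    prod = {}
    for (n, alts1), (m, alts2) in product(rules1.items(), rules2.items()):
        prod[(n, m)] = [(f, tuple(zip(k1, k2)))
                        for f, k1 in alts1
                        for g, k2 in alts2
                        if f == g and len(k1) == len(k2)]
    return prod

# Toy grammars over a shared signature; only 'tt' is common to both.
g1 = {"A": [("tt", ()), ("and", ("A", "A"))]}
g2 = {"B": [("tt", ()), ("or", ("B", "B"))]}
print(intersect(g1, g2))   # {('A', 'B'): [('tt', ())]}
```

Each pair nonterminal produces exactly the terms produced by both of its components, which is what the construction in the proof relies on.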


Example 3.3
Let us compute for the algebra from Exm. 2.1 and . We have the substitutions and use the naming convention
e.g. . A “*” may serve as a don’t-care symbol, e.g. .
From Thm. 3.2, after incorporating, lifting, and restricting, we obtain e.g. , with nonterminals and . As turns out to be empty, we simply have
We obtain by just adding the rule . To compute , we build all intersections without “*”; only four of them turn out to be nonempty; their rules are shown in Fig. 2. The grammar consists of these rules and an additional one for its start symbol . The grammars , , and have similar starting rules, which use only nonterminals containing a “*”. Since the rules for the latter are trivial, only the one for is shown. Finally, the grammar for consists of all these rules and an additional one for its start symbol . Figure 3 shows some example derivations. ∎
Theorem 3.4
[Computing complete axiom sets] The sets and obtained from Thm. 3.2 can be represented as the deductive closure of a finite set of formulas, called and , respectively. We have:
Proof (sketch)
First, we consider the purely universal formulas in . Using the notions of Thm. 3.2, the grammar is deterministic since no union operations were involved in its construction. Using Thm. 2.2.2, we get a finite set of equations, each of which we compose with the appropriate universal quantifier prefix . The resulting formula set implies any purely universal equation valid in . By construction of , it can reduce each term in any quantified equation in to a unique normal form. Let denote the set of all those normal forms; it is finite since is finite.
Next, for any quantifier prefix containing some “”, let
Any formula in can then be deduced from and in and in , where and are the normal forms of and , respectively.
Finally, let . The proof for is similar. ∎
Observe that the variables are introduced as constants into the grammars; hence the equation set in the above proof is a set of ground equations. A closer look at the algorithm referred to by Thm. 2.2.2 reveals that it in fact generates a Noetherian ground–rewriting system assigning unique normal forms. In any case, no proper instance of any formula from the axiom set is needed to derive any formula in the theorem set. By permitting proper instantiations, we may delete formulas that are instances of others, thus reducing their total number significantly. To find such subsumed formulas, an appropriate indexing technique may be used, see e.g. [Gra94].



Example 3.5
Continuing Exm. 3.3, and referring to the notions in the proof of Thm. 3.4, we obtain the set shown at the top of Fig. 4, where we chose the normal forms as shown there, each of minimal size (the algorithm of Thm. 2.2.2 can easily be modified to work with arbitrarily chosen normal forms instead of external constants).