# Towards Automated Proof Strategy Generalisation

Gudmund Grov
School of Mathematical and Computer Sciences
Heriot-Watt University, Edinburgh

Ewen Maclean
School of Informatics
University of Edinburgh
###### Abstract

The ability to automatically generalise (interactive) proofs, and to use such generalisations to discharge related conjectures, is a very hard problem which remains unsolved; this paper shows how we hope to make a start on solving it. We develop a notion of goal types to capture key properties of goals, which enables abstraction over the specific order and number of sub-goals arising when composing tactics. We show that goal types form a lattice, and utilise this property in the techniques we develop to automatically generalise proof strategies so that they can be reused for proofs of related conjectures. We illustrate our approach with an example.

## 1 Introduction

When verifying large systems one often ends up applying the same proof strategy many times, albeit with small variations. An expert user/developer of a theorem proving system would often implement common proof patterns as so-called tactics, and use these to automatically discharge “similar” conjectures. However, other users often need to prove each conjecture manually. Our ultimate goal is to automate the process of generalising a proof (or possibly a few proofs) into a proof strategy sufficiently generic to prove “similar” conjectures. In this paper we make a small step towards this goal by developing a suitable representation with the necessary formal properties, and give two generic methods which utilise this representation to generalise a proof.

Whilst the manual repetition of similar proofs has been observed across different formal methods, for example Event-B, B and VDM (see ), we will focus on a subset of separation logic, used to reason about pointer-based programs (we believe, however, that our approach remains generic across different formal methods). In this subset, there are two binary operations ∗ and ∧ and a predicate pure, with the following axioms:

(A ∗ B) ∗ C ⇔ A ∗ (B ∗ C)    (ax1)
pure(B) → ((A ∧ B) ∗ C ⇔ (A ∗ C) ∧ B)    (ax2)

These axioms pertain specifically to separation logic, and allow pure/functional content to be expressed apart from shape content, used to describe resources. Now, consider the conjecture:

 p:pure(e),h:c∗((f∗(d∗b)∧e)∧e)∗a⊢ ((c∗f)∗(d∧e))∗((b∧e)∗a) (1)

which demonstrates the typical form of a goal resulting from proving properties about heap structures, involving some resource content and some functional content. For example, one could view the letters as propositions about space on a heap, with e carrying some functional information, for example about order.

Figure 1 illustrates a proof of this conjecture in the Isabelle theorem prover. Next, consider the following “similar” conjecture:

 p′:pure(d),h′:a∗(((b∗c)∧d)∗e)⊢((a∗((b∧d)∗c))∗e) (2)

which again demonstrates this typical form. This conjecture can be proven by the following sequence of tactic applications:

apply (subst ax1); apply (subst ax2); apply (rule p'); apply (rule h')

Our goal is to be able to apply some form of analogous reasoning to use the proof shown in Figure 1 to automatically discharge (2). However, a naive reuse of this proof will not work since:

• The two proofs contain different numbers of tactic applications.

• Naive generalisations such as “apply subst ax1 until it fails” will fail, since subst ax1 is still applicable for (2) after the first application. Continued application will cause the rest of the proof to fail.

• The “analogous” assumptions have different names, e.g. p vs. p′ and h vs. h′, thus rule p will not work for (2).

Even if the proofs are not identical, they are still captured by the same proof strategy. In fact, the proofs can be described as a simple version of the mutation proof strategy developed to reason about functional properties in separation logic. Here, we assume the existence of a hypothesis H (i.e. h or h′), with some desirable properties we will return to. The strategy can then be described as:

continue applying ax1 while there are no symbols at the same position in the goal and H;
then apply ax2 while the goal does not match H and there is a fact P which can discharge the condition of ax2;
finally, discharge the goal with H.

Figure 2: The mutation strategy

The rest of the paper will focus on how we can automatically discover such a strategy from the proof shown in Figure 1. To achieve this, a suitable proof strategy representation is required. Firstly, as we can see from the strategy, the representation needs to include properties of the sub-goals/proof states as well as information about tactics. Moreover, sub-goals arising from a tactic application are often treated differently (e.g. the condition arising from the use of ax2), thus some “flow information” is required. From this we argue that:

A graph where the nodes contain the goals, and the edges, annotated with goal information, work as channels for the goals, is a suitable representation to support the automatic generalisation of proof strategies from proofs.

Previously, a graph-based language to express proof strategies has been developed , which we briefly summarise in the next section. However, the annotation of goals on edges has not been developed, and developing it is a key challenge in achieving our ambitious goal. A key contribution of this paper is the development of such a goal type, which serves as a specification of a particular goal and can be generalised across proofs. The example has shown that there is a vast amount of information which the goal type needs to capture:

• The conclusion to be proven, e.g. the conclusion of (1) initially.

• The facts available, including local assumptions (such as p and h), and axioms/lemmas (e.g. ax1 and ax2).

• Properties between facts and the conclusion (or other facts). For example, ax2 is applied because the condition of it can be discharged by p.

• Properties relating goals to tactics; e.g. after applying ax2 one subgoal is discharged by p but not the other.

Moreover, other information could also be essential for a particular proof strategy, for example: definitions; fixed/shared variables; and variance between steps (e.g. for each step a “distance” between H and the goal is reduced).

We argue that a language needs to be able to capture such properties, and we are not familiar with any proof language which can capture them in a natural way. The development of a goal type to capture this is a key contribution and the topic of §3. We then briefly show how this can be utilised when evaluating a conjecture over the strategy in §4. The second key contribution of this paper is the topic of §5. Here, we utilise the graph language and goal types to generalise a proof into a proof strategy, illustrated by re-discovering the mutation strategy. A key feature is that we view the goal types as a lattice which can naturally be generalised. This is combined with graph transformations to find common generalisable sub-strategies and loops with termination conditions. We discuss related work and conclude in §6 and §7.

## 2 Background on the Proof Strategy Language

The graphical proof strategy language was introduced in , built upon the mathematical formalism of string diagrams . A string diagram consists of boxes and wires, where the wires are used to connect the boxes. Both boxes and wires can contain data, and data on the edges provides a type-safe mechanism for composing two graphs. Crucially, string diagrams allow dangling edges. If such an edge has no source, then it becomes an input for the graph, and dually, if an edge has no destination then it is an output of the graph.

In a proof strategy graph, the wires are labelled with goal types, which are developed in the next section. A box is either a tactic or a list of goals. Such goal boxes are used for evaluation by propagating them towards the outputs, as shown in Figure 3.

There are two types of tactics. The first is known as a graph tactic, which is simply a node holding one or more graphs which can be unfolded. This is used to introduce hierarchies to enhance readability. A second usage, in the case where it holds more than one child graph, is to represent branching in the search space, as there are multiple ways of unfolding such a graph. Note however that, as explained in , graph tactics are evaluated in-place and are thus not unfolded first.

The other type of tactic is an atomic tactic, which corresponds to a tactic of the underlying theorem prover. Here, we assume a tactic works on a proof state (containing named hypotheses, the open conjecture, fixed variables, etc.). When evaluated, such a tactic turns a proof state (goal) into a list of new proof states (sub-goals). Since this may also involve search, it returns a set of such lists of proof states; thus it has the type

 proof state→{[proof state]}

Here, for a type τ, [τ] is the type of finite lists of τ, and {τ} is the type of finite sets whose elements are of type τ.
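To make this type concrete, the following is a minimal Python transcription. It is a sketch under our own modelling assumptions: `ProofState` and `split_tactic` are hypothetical names, and the set of lists is modelled as a frozenset of tuples so that it is hashable.

```python
from typing import Callable, FrozenSet, Tuple

# Hypothetical stand-in: we leave the proof state abstract (here just a string).
ProofState = str

# An atomic tactic maps a proof state to a set of alternative results,
# each result being a list (tuple) of new sub-goal proof states;
# the outer set captures the search a tactic may perform.
Tactic = Callable[[ProofState], FrozenSet[Tuple[ProofState, ...]]]

def split_tactic(ps: ProofState) -> FrozenSet[Tuple[ProofState, ...]]:
    """A toy tactic with no search: one alternative, producing two sub-goals."""
    return frozenset({(ps + ".1", ps + ".2")})

print(split_tactic("g"))  # frozenset({('g.1', 'g.2')})
```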

For this paper, we assume two atomic tactics: subst and rule, which perform a single substitution or resolution step, respectively. The argument may be either a single rule or a set of rules (all of them are then attempted). It can also be a description of a set of rules, which we call a class, introduced in the next section.

In order to apply an atomic tactic in the strategy language, it has to be typed with goal types, also introduced next. Let α and the βi represent goal type variables. A typed tactic is then a function of the form:

 α→{[β1]×[β2]×…×[βn]}

This type has to be reflected in our representation of goal nodes, to which we will return in §4 after we have developed our notion of goal types, the next topic.

## 3 Towards a Theory of Goal Types

### 3.1 Classes

A goal type must be able to capture the intuition of the user, potentially using all the information listed in §1. This information is then used to guide the proof and send sub-goals to the correct tactic. To achieve this we firstly need to capture important properties of the conclusion of the conjecture. Next, it is important to note that, in general, most of the information available is not relevant; including it will act as noise (and increase the chance of “over-fitting” a strategy to a particular proof). Thus, we need to be able to separate the wheat from the chaff, and capture properties of the ‘relevant’ facts, where facts refer both to lemmas/axioms and to assumptions which are local to the conclusion. Henceforth we will term a fact or conclusion an element. There is a large set of such element properties, e.g.:

• a particular shape or sub-shape;

• the symbols used, or symbols at particular positions (e.g. top symbol);

• certain types of operators are available, e.g. (1) contains associative-commutative operators;

• the element contains variables we can apply induction to or (shared) meta-variables;

• certain rules are applicable;

• the element’s origin, e.g. it is from group theory or it is a property of certain operator.

This list is by no means complete, and here we will focus on two such properties:

• top_symbol describes the top level symbol;

• has_symbol describes the symbols it must contain.

Each such feature will have associated data:

 data:=int | term | position | boolean

where term refers to a term of the underlying logic, and a position refers to an index in a term tree. A class describes a family of elements for which certain such features hold. A class could, for example, describe a conclusion or a hypothesis for which certain properties hold.

###### Definition 1.

A class is a map

class := name ↦ [[data]]

such that for each name in the domain of a class, there is an associated predicate on an element, termed the matcher. There are two special cases where the predicate always succeeds or always fails on certain data, denoted by ⊤ and ⊥ as described below. A class matches a conclusion/fact if the predicate for each feature holds.

The intuition behind the list of lists of data is that it represents a property in DNF: the outer list is a disjunction and each inner list is a conjunction. For the conjecture in (1), for example, one class may identify the conclusion, another may identify the first assumption but not the second, and a third may capture both assumptions and the conclusion. We call this a semantic representation of the data.

We write the constant space of feature names as N and, for a class C with f ∈ N, C(f) is the data associated with feature f in class C. We define the semantic representation of the data for a particular feature in a class using the notation C(f)s for some data. By semantic representation, we mean that the structure of the list of data is mapped to a representation about which we can reason – as above, where a list of lists of data represents a formula in DNF. It is then possible to reason about this data. For example, for a feature of a class we write:

[[a1 ⋯ am], ⋯, [b1 ⋯ bn]]s = (⟦a1⟧ ∩ ⋯ ∩ ⟦am⟧) ∪ ⋯ ∪ (⟦b1⟧ ∩ ⋯ ∩ ⟦bn⟧)    (3)

where ⟦a⟧ denotes the interpretation of a as an atom.
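The DNF reading of the data can be sketched as a small matcher. This is our own illustration, not the paper's implementation: elements are modelled as sets of symbols, and `'*'`, `'&'`, `'|'` are ASCII stand-ins for ∗, ∧, ∨.

```python
def matches_dnf(dnf, element_symbols):
    """Interpret [[a1,...,am], ..., [b1,...,bn]] as the DNF formula
    (a1 AND ... AND am) OR ... OR (b1 AND ... AND bn): the element
    satisfies the data if some inner conjunction is wholly contained
    in its set of symbols."""
    return any(all(sym in element_symbols for sym in clause) for clause in dnf)

# has_symbol data in DNF, using ASCII stand-ins for the operators:
dnf = [["*", "&"], ["|", "*"]]
print(matches_dnf(dnf, {"*", "&", "pure"}))  # True: the first clause holds
print(matches_dnf(dnf, {"pure"}))            # False: no clause holds
```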

Classes form a bounded lattice, on which we can define a meet and a join. We show how to compute the join ⊔ (least upper bound) and meet ⊓ (greatest lower bound) for two classes C1 and C2. We define the most general class as ⊤ and the empty class as ⊥, and write the most general element of a feature f as ⊤f and the least general as ⊥f.

###### Definition 2.

C is the greatest lower bound of C1 and C2 if C(f) = C1(f) ⊓f C2(f) for every feature f, where ⊓f computes the greatest lower bound for feature f.

###### Definition 3.

C is the least upper bound of C1 and C2 if C(f) = C1(f) ⊔f C2(f) for every feature f, where ⊔f computes the least upper bound for feature f.

For the features top_symbol and has_symbol we define ⊓f and ⊔f semantically:

###### Definition 4.

(C1(f) ⊓f C2(f))s = C1(f)s ∩ C2(f)s and (C1(f) ⊔f C2(f))s = C1(f)s ∪ C2(f)s.

We further define ⟦⊤f⟧ and ⟦⊥f⟧ to be U (the universal set) and ∅, respectively. To show that classes form a partial order, we prove the following properties about meet and join:

###### Theorem 1.

⊓ and ⊔ are commutative and associative operations.

###### Proof.

It suffices to prove that ⊓f and ⊔f are commutative and associative for each feature f. In our example we use Definition 4; the result follows since ∩ and ∪ are commutative, associative and idempotent operations in set theory. ∎

###### Theorem 2.

⊓ and ⊔ satisfy the absorption laws C1 ⊔ (C1 ⊓ C2) = C1 and C1 ⊓ (C1 ⊔ C2) = C1.

###### Proof.

It suffices to prove that ⊓f and ⊔f satisfy the absorption laws for each feature f. This follows from the fact that they are defined as set-theoretic operations. It also follows that ⊓ and ⊔ are idempotent: C ⊓ C = C and C ⊔ C = C. ∎

Since ⟦⊤f⟧ is U and ⟦⊥f⟧ is ∅, it is trivial to show that C ⊔ ⊤ = ⊤ and C ⊓ ⊥ = ⊥ for any class C. Thus, classes form a bounded lattice.
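The lattice operations over the DNF data can be sketched under the semantics of Definition 4. This is an illustrative model of ours: clauses are conjunctions of atoms, conjunction of DNFs distributes pairwise, and semantic equality is checked by enumerating all subsets of a small, assumed-finite atom universe.

```python
from itertools import chain, combinations

def holds(dnf, elem):
    """An element (set of atoms) satisfies a DNF if some clause is contained in it."""
    return any(all(a in elem for a in clause) for clause in dnf)

def meet(d1, d2):
    """Conjunction of two DNFs: distribute clauses pairwise (semantic intersection)."""
    return [sorted(set(c1) | set(c2)) for c1 in d1 for c2 in d2]

def join(d1, d2):
    """Disjunction of two DNFs (semantic union)."""
    return d1 + d2

def equiv(d1, d2, atoms):
    """Semantic equality, checked over every subset of a finite atom universe."""
    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return all(holds(d1, set(s)) == holds(d2, set(s)) for s in subsets)

atoms = ["*", "&", "|"]              # ASCII stand-ins for the operators
c1_has = [["*", "&"], ["|", "*"]]    # C1(has_symbol) from (4)
c3_has = [["*", "&", "|"]]           # C3(has_symbol) from (5)

# Absorption: d JOIN (d MEET e) is semantically equal to d.
assert equiv(join(c1_has, meet(c1_has, c3_has)), c1_has, atoms)
# Subtyping via the meet: C3(has_symbol) MEET C1(has_symbol) = C3(has_symbol).
assert equiv(meet(c3_has, c1_has), c3_has, atoms)
print("lattice laws hold on the example")
```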

Orthogonality is a key property to reduce non-determinism during evaluation of a strategy, whilst subtyping of classes is a key feature for our generalisation techniques discussed in §5:

###### Definition 5.

C1 and C2 are orthogonal if C1 ⊓ C2 = ⊥; we write this as C1 ⊥ C2. C1 is a subtype of C2, written C1 <: C2, if C1 ⊓ C2 = C1.

As an example, consider classes with the features has_symbol and top_symbol:

C1: {(top_symbol ↦ [[∗]]), (has_symbol ↦ [[∗,∧],[∨,∗]])}
C2: {(top_symbol ↦ [[∧]]), (has_symbol ↦ [[∗,∧],[∨,∗]])}    (4)
C3: {(top_symbol ↦ [[∗]]), (has_symbol ↦ [[∗,∧,∨]])}    (5)

Here C1 ⊥ C2, as there is a feature (top_symbol) for which the meet is ⊥f, since ⟦∗⟧ ∩ ⟦∧⟧ = ∅ by the semantics. To determine whether C3 is a subtype of C1 we must show that C3(f) ⊓f C1(f) = C3(f) for all features f. Using Definition 4, for has_symbol we must prove:

((⟦∗⟧ ∩ ⟦∧⟧) ∪ (⟦∨⟧ ∩ ⟦∗⟧)) ∩ (⟦∗⟧ ∩ ⟦∧⟧ ∩ ⟦∨⟧) = ⟦∗⟧ ∩ ⟦∧⟧ ∩ ⟦∨⟧

which is true; the same must be shown for top_symbol, which in this case follows trivially.

### 3.2 Links

A class identifies a cluster of elements with certain common properties. However, certain types of properties hold between elements – e.g. a conditional fact can only be applied if the condition can be discharged. Moreover, certain properties rely on information pertaining to previous nodes in the proof tree, e.g. a measure has to be reduced in a rewriting step to ensure termination. Such properties include:

• common symbols between two elements, or the position they are at;

• common shapes between two elements;

• embedding of one element into another;

• some form of difference between elements;

• some sort of measure that reduces/increases between elements.

We call such properties links. Moreover, we abstract links to make them relations between classes rather than between elements. Links are given an existential meaning: a link between two classes entails that there exist elements in them such that a property holds. In addition, we introduce a parent function on links to refer to the parent node. The meaning of this will become clearer in the next section, where we discuss evaluation.

###### Definition 6.

A link is a map

link := (name × class × class) ↦ [[data]]

such that for each name in the domain of a link there is an associated predicate called a matcher. A link matches a conclusion/fact if the predicate on each element holds.

We write the constant space of link names as NL and, for a link L with (n, C1, C2) in its domain, L(n, C1, C2) is the data associated with feature n and classes C1 and C2 for link L.

We will only consider the link features is_match and symb_at_pos for this exposition. The data of the former are booleans in DNF, and its matcher succeeds if the result of an exact match between the elements is the same as the semantic value of the data. The data of the latter are lists of positions, where for example

{(symb_at_pos, C1, C2) ↦ [[pos]]}

states that there exist elements of classes C1 and C2 where the symbol at position pos is the same. To state that there is no position where this is the case, we introduce a ⊥n element for each n ∈ NL, as we did with classes. In general, there will be more complicated links, with more complicated output data values. Defining these is ongoing work.
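A symb_at_pos matcher can be sketched over terms modelled as nested tuples, where a position is a path of argument indices. This is our own toy model (the paper leaves terms and positions abstract), with ASCII operator names.

```python
def positions(term, path=()):
    """Enumerate (path, head-symbol) pairs of a term tree.
    Terms are nested tuples ('sym', arg1, arg2, ...) or atoms (strings)."""
    head = term[0] if isinstance(term, tuple) else term
    yield path, head
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], start=1):
            yield from positions(sub, path + (i,))

def symb_at_pos(t1, t2):
    """Positions at which both terms carry the same head symbol."""
    p2 = dict(positions(t2))
    return [p for p, s in positions(t1) if p2.get(p) == s]

# goal: (a * b) & c   vs   hypothesis: (a * d) | c
goal = ("&", ("*", "a", "b"), "c")
hyp  = ("|", ("*", "a", "d"), "c")
print(symb_at_pos(goal, hyp))  # [(1,), (1, 1), (2,)]
```

An empty result plays the role of the ⊥ data value: no position carries a shared symbol.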

In order to define orthogonality and subtyping, we define the meet and join for each name in NL.

###### Definition 7.

L is the greatest lower bound of L1 and L2 if L(n) = L1(n) ⊓n L2(n) for every link feature n, where ⊓n computes the greatest lower bound for link feature n.

###### Definition 8.

L is the least upper bound of L1 and L2 if L(n) = L1(n) ⊔n L2(n) for every link feature n, where ⊔n computes the least upper bound for link feature n.

As with classes, we introduce a semantic representation for the links, using the notation L(n)s for some data. Since the data is a list of lists of positions, we use the same semantics as in (3). The intuition is that we should be able to generalise the link to account for the same symbol existing at multiple positions within the hypotheses and conclusion. The proofs and definitions of the lattice theory follow similarly to those for classes.

We then define orthogonality and subtyping for links:

###### Definition 9.

L1 and L2 are orthogonal if L1 ⊓ L2 = ⊥; we write this as L1 ⊥ L2. L1 is a subtype of L2, written L1 <: L2, if L1 ⊓ L2 = L1.

### 3.3 Goal Types

A goal type is a description of the conclusion, the related facts, and the links between them:

###### Definition 10.

A goal type is a record:

GoalType := { link : link, facts : { class }, concl : class }

where concl is the class describing the conclusion of a goal, facts is a set of classes of relevant facts, and link is a link relating classes of facts and concl.

Note that we keep a set of classes of facts to account for specifying the existence of multiple classes of hypotheses. For example, in our example conjecture, hypothesis p forms one class while h forms another. Henceforth we assume that all members of facts are orthogonal – dealing with the general case which allows overlapping is future work. Orthogonality and subtyping of two goal types reduce to orthogonality and subtyping of their respective classes and links. Due to the assumed orthogonality between the facts, orthogonality ⊥ of goal types has a universal interpretation over the fact classes, whilst subtyping <: has an existential one.
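Definition 10 can be transcribed directly as a record. The field representations below are our own choices (the paper leaves classes and links abstract); DNF data is a list of lists as in §3.1, and `'*'`/`'&'` are ASCII stand-ins.

```python
from dataclasses import dataclass
from typing import Dict, List

# A class maps feature names to DNF data (a list of lists).
Class = Dict[str, list]

@dataclass
class GoalType:
    """Def. 10 as a record: conclusion class, fact classes, and links."""
    concl: Class
    facts: List[Class]   # assumed pairwise orthogonal (see text)
    link: dict           # (link name, class, class) -> DNF data

# A goal-type sketch for the running example: P picks out pure facts,
# H picks out shape hypotheses.
P = {"top_symbol": [["pure"]]}
H = {"top_symbol": [["*"]], "has_symbol": [["*", "&"]]}
gt = GoalType(concl={"top_symbol": [["*"]]}, facts=[H, P], link={})
print(gt.concl)  # {'top_symbol': [['*']]}
```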

## 4 Lifting of Goals and Tactics

Here, we will briefly outline how evaluation is achieved with the goal types introduced. Firstly, recall from Figure 3 that a single evaluation step is achieved by a tactic consuming the goal node on the input edge and producing the resulting sub-goals on the correct output edges. Since a goal node contains a list of goals, this can be captured by the meta graphical rewrite-rule shown in Figure 4. The details are given in , but one evaluation step works as follows:

1. Match and partly instantiate the LHS of the meta-rule.
2. Evaluate the tactic function for the matched input and output types.
3. Finish instantiating the RHS with the lists gsi from the tactic.
4. Apply the fully instantiated rule(s).

Figure 4: Evaluation meta-rule

where α and the βi are goal type variables. We assume the tactic is atomic, but this is trivial to extend to graph tactics. Further note that there are additional rules to split a list into a sequence of singleton lists and to delete empty list nodes. For more details we refer to .

In the second step of this algorithm, the underlying tactic has to be lifted from the type proof state → {[proof state]} to the form α → {[β1] × … × [βn]}.

First we need to introduce a goal. This can be seen as an instance of a goal type for a particular proof state:

###### Definition 11.

A goal is a record:

goal := { fmap : class ↦ {fact}, ps : proof state, parent : {goal} }

where parent is either a singleton or the empty set – empty if this is the first goal. Type checking relies on the “typing predicates” associated with classes and links. A goal is of type GT iff:

• The conclusion of ps matches concl.

• For each class C ∈ facts, fmap(C) is defined and non-empty, and each fact in fmap(C) matches C.

• For each link there exist elements of the two linked classes such that the matcher holds. Moreover, for each element of one class there must be an element of the other for which the link holds (and dually the other way around).

Now, to lift a tactic we need to: unlift the goal to project the underlying proof state; apply the tactic; and lift the resulting proof states to goals with types in β1,…,βn (which become instantiated to specific goal types when matching the RHS in the first step). For a list of proof states ps, let lp(β1,…,βn; ps) be the set of all partitions of ps lifted into lists of goals gs1,…,gsn, such that all of the goals in the i-th list have goal type βi. Then, we define lifting as:

lift(tac) = λg. lp(β1,…,βn; tac(unlift(g)))  if g is of type α;  ∅ otherwise

We are then left to define unlifting and lifting for a single goal node and a single goal type. Firstly, a naive unlifting of a goal simply projects the goal state. More elaborate unliftings are tactic dependent, and may e.g. add all facts from a particular fact class as active assumptions beforehand.

Lifting is a partial function, and an element of lp is only defined if lifting of all elements succeeds. There are several (type-safe) ways to implement lifting. Here, we show a procedure which assumes that all relevant information is passed down the graph from the original goal node. Any fact “added” to a goal node is thus a fact generated by the tactic. However, one may “activate” existing facts explicitly in the tactic, which will then be used by lifting. A new goal is then lifted from the (new) proof state ps, previous goal g, and goal type GT as follows:

1. Set the ps field to the new proof state and the parent field to {g}; fail if the conclusion of ps does not match concl.

2. For each class C ∈ facts, set fmap(C) to be all facts in the range of the parent’s fmap, together with newly generated facts, which match C. If fmap(C) is empty (or undefined) for any C, then fail.

3. Check all link features. For each class which is used by a link feature, filter out any element not “captured” by a related link match. Fail if there does not exist an element in the related classes for which one of the links holds, or if any fmap(C) (for C ∈ facts) is empty after this filtering step.
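The steps above can be sketched for a single goal and goal type. This is an illustration under strong simplifying assumptions of ours: elements are sets of symbols, both features reduce to containment tests, and the link check of step 3 is elided.

```python
def matches(cls, elem):
    """A class matches an element (modelled as a set of symbols) if each
    feature's DNF data holds. Simplification: both top_symbol and
    has_symbol are reduced to symbol-containment tests here."""
    return all(any(all(a in elem for a in clause) for clause in dnf)
               for dnf in cls.values())

def lift(ps, parent_fmap, goal_type):
    """Sketch of the lifting of a new proof state `ps` (a dict with a
    'concl' element and a list of 'new_facts') against `goal_type`.
    Returns the new goal's fmap, or None on failure."""
    # Step 1: the conclusion must match the conclusion class.
    if not matches(goal_type["concl"], ps["concl"]):
        return None
    # Step 2: each fact class collects inherited and newly generated facts
    # that match it; fail if any class ends up empty.
    candidates = [f for fs in parent_fmap.values() for f in fs] + ps["new_facts"]
    fmap = {}
    for i, cls in enumerate(goal_type["facts"]):
        hits = [f for f in candidates if matches(cls, f)]
        if not hits:
            return None
        fmap[i] = hits
    return fmap

# Running-example sketch with ASCII stand-ins for the operators:
H = {"has_symbol": [["*", "&"]], "top_symbol": [["*"]]}
P = {"top_symbol": [["pure"]]}
gt = {"concl": {"top_symbol": [["*"]]}, "facts": [H, P]}
ps = {"concl": {"*", "&"}, "new_facts": [{"pure"}]}
print(lift(ps, {0: [{"*", "&"}]}, gt))  # each fact class gets its matching facts
```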

## 5 Generalising Strategies

A proof is generalised into a strategy by first lifting the proof tree into a proof strategy graph, and then applying graph transformation techniques which utilise the goal type lattice to generalise goal types. Simple generalisations of tactics are also used. One important property when performing such generalisations is that any proof valid in a strategy should remain valid after generalisation, for which we provide informal justification below. Note, however, that we do not deal with termination.

### 5.1 Deriving Goal Types from Proof States

In this section we will discuss how to generalise the proof shown in Figure 1 into the mutation strategy shown in Figure 2, utilising the lattice structure of goal types.

However, first we need to turn the proof tree of Figure 1 into a low-level proof strategy graph of the same shape. We utilise techniques described in  to get the initial proof tree. Now, since the shape is the same this reduces to (1) generalising proof states into goal types and (2) generalising the tactics.

(1) To generalise a proof state into a goal type we have taken an approach which can be seen as a “locally maximum” derivation of the goal type, where each assumption becomes a separate class, and each class is made as specific as possible. Any link features that hold are also included. Consequently, the goal type will be as far down the lattice as possible whilst still being able to lift the goal state it is derived from. To illustrate, we show how the proof state of (1) is lifted to a goal type. Let

H = {has_symbol ↦ [[∗,∧]], top_symbol ↦ [[∗]]}
P = {has_symbol ↦ [[pure]], top_symbol ↦ [[pure]]}
G = {has_symbol ↦ [[∗,∧]], top_symbol ↦ [[∗]]}
L = {(symb_at_pos, G, H) ↦ [[⊥]], (symb_at_pos, G, P) ↦ [[⊥]], (symb_at_pos, H, P) ↦ [[⊥]]}

The goal type then becomes {concl = G, facts = {H, P}, link = L}. Note that the last two link features are useless, and are therefore ignored henceforth. However, this shows that in the presence of larger goal states and/or more properties, heuristics will be required to reduce the size of the goal types and filter out such “useless information”. This is future work.

(2) Tactics are kept, with the difference that if a local assumption is used (e.g. p or h), its respective class is used instead.

The resulting tree is shown left-most in Figure 5. For space reasons we have not included the goal types, but provide a name when one is referred to in the text. This graph is slightly more general than the original proof, as it allows a very slight variation of the goals. However, it still relies on, e.g., the exact number of applications of each tactic.
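The “locally maximum” derivation of a class from one element can be sketched as follows. The term encoding, the assumed operator alphabet and the restriction of has_symbol to operators are our own simplifications; ASCII names stand in for ∗, ∧, ∨ and pure.

```python
OPERATORS = {"*", "&", "|", "pure"}   # assumed operator alphabet

def symbols(term):
    """All head symbols of a term tree; terms are nested tuples, leaves strings."""
    if isinstance(term, tuple):
        return {term[0]}.union(*(symbols(t) for t in term[1:]))
    return {term}

def derive_class(term):
    """'Locally maximum' class for one element: as specific as our two
    features allow -- its top symbol, and the operators it contains."""
    top = term[0] if isinstance(term, tuple) else term
    return {"top_symbol": [[top]],
            "has_symbol": [sorted(symbols(term) & OPERATORS)]}

# h : c * ((f * (d * b) & e) & e) * a, written as a nested tuple
h = ("*", "c", ("*", ("&", ("&", ("*", "f", ("*", "d", "b")), "e"), "e"), "a"))
print(derive_class(h))  # {'top_symbol': [['*']], 'has_symbol': [['&', '*']]}
```

On this encoding the derived class mirrors the class H of the text: top symbol ∗, containing the symbols ∗ and ∧.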

### 5.2 Generalising Tactics

Next, we need to generalise tactics. A simple example of this is when sets of rules are used as arguments for the subst and rule tactics: subst ax1 and subst ax2 can, for instance, be generalised into subst {ax1, ax2}. Another example turns a tactic into a graph tactic which nests both tactics (and can be unfolded to either). A proviso for both is that their input and output goal types can be generalised. Both these generalisations only increase the search space and are thus proper generalisations.

Graph tactics can also be generalised by combining the graphs they nest into one. We return to this with an example below. We will write gen(t1, t2) for the generalisation of two tactics t1 and t2.

### 5.3 Generalising Goal Types

In the context of goal types, generalisation refers to computing the most general goal type subsuming two existing goal types, while weakening applies to a single goal type and makes its description more general. Crucial to both is that multiple possible generalised and weakened goal types exist.

We use the notion of least upper bound for the goal type lattice, described in §3 using the join operator ⊔, to define generalisation for goal types. For classes we write:

###### Definition 12.

C is a generalisation of C1 and C2, also written gen(C1, C2), if C = C1 ⊔ C2.

As an example, consider the two classes shown in (4) and (5). We can compute gen(C2, C3) by appealing to the set-theoretic semantics and transferring back to the class representation. For the features f1 = top_symbol and f2 = has_symbol we compute:

C(f1)s = ⟦∧⟧ ∪ ⟦∗⟧ ⇝ C(f1) = [[∧],[∗]]
C(f2)s = ((⟦∗⟧ ∩ ⟦∧⟧) ∪ (⟦∨⟧ ∩ ⟦∗⟧)) ∪ (⟦∗⟧ ∩ ⟦∧⟧ ∩ ⟦∨⟧) = (⟦∗⟧ ∩ ⟦∧⟧) ∪ (⟦∨⟧ ∩ ⟦∗⟧) ⇝ C(f2) = [[∗,∧],[∨,∗]]

producing a generalised class:

C: {(top_symbol ↦ [[∧],[∗]]), (has_symbol ↦ [[∗,∧],[∨,∗]])}

The definition of generalisation for links extends similarly from the associated lattice theory described in §3.2. Recall that we assume orthogonality of fact classes. We define a function gen_map over two sets of (fact) classes, which generalises the fact classes pairwise; if two classes in the generalised set fail to be orthogonal, only their generalisation is retained, thus ensuring orthogonality. We can then define a function gen on goal types:

gen(GT1, GT2) := { concl = gen(GT1(concl), GT2(concl)), facts = gen_map(GT1(facts), GT2(facts)), link = gen(GT1(link), GT2(link)) }
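The class-level join underlying gen can be sketched as follows, reproducing the gen(C2, C3) computation above. The clause-subsumption step (dropping a clause that strictly contains another) is our rendering of the absorption seen in the semantic calculation; ASCII names stand in for the operators.

```python
def join_dnf(d1, d2):
    """Least upper bound of two DNF data values: union of the clauses,
    dropping any clause that strictly contains another (it is absorbed)."""
    clauses = [frozenset(c) for c in d1 + d2]
    out, seen = [], set()
    for c in clauses:
        if any(other < c for other in clauses):   # strictly subsumed clause
            continue
        if c not in seen:
            seen.add(c)
            out.append(sorted(c))
    return out

def gen_class(c1, c2):
    """gen(C1, C2): feature-wise join of two classes."""
    return {f: join_dnf(c1.get(f, []), c2.get(f, []))
            for f in set(c1) | set(c2)}

# C2 and C3 from (4) and (5), with '&','|','*' as ASCII operator stand-ins
C2 = {"top_symbol": [["&"]], "has_symbol": [["*", "&"], ["|", "*"]]}
C3 = {"top_symbol": [["*"]], "has_symbol": [["*", "&", "|"]]}
g = gen_class(C2, C3)
print(g["top_symbol"])   # [['&'], ['*']]
print(g["has_symbol"])   # [['&', '*'], ['*', '|']]
```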

### 5.4 (Re-)Discovering the Mutation Strategy

Armed with these techniques for generalising the edges and nodes of a proof strategy, we now develop two techniques which allow us to generalise our proof into the required mutation strategy.

Firstly, we need to abstract over the number of repeated sequential applications of the same tactic – i.e. we need to discover loops. When working in a standard LCF tactic language , the problem is to know: (a) on which goals (in the case of side conditions) the tactic should be repeated; and (b) when to stop. This was highlighted in , where a regular expression language, closely aligned with common LCF tacticals, was used to learn proof tactics, and hand-crafted heuristics were defined to state when to stop a loop (heuristics which would, incidentally, fail for our example).

The advantage of our approach is that we can utilise the goal types to identify termination conditions – reducing termination and goal focus to the same case, and thus also handling the more general proof-by-cases paradigm. We illustrate our approach with what can be seen as an inductive representation of tactic looping, as shown by rules loop1 and loop2 of Figure 6. For loop1, we can see that it is correct since the goal type on the exit edge ensures that a goal will leave the loop when it matches this type. Moreover, the pre-condition ensures that the tactic can handle the input type. For loop2, similar arguments hold for the generalised edge.

Consider the left-most graph of Figure 5, which is the proof tree lifted to a graph. Here, the stippled box highlights the sub-graph which matches the rules shown above. loop1 is applied first, followed by two applications of loop2. The classes are identical, so we only discuss the link classes, which have the following values:

where the GT2i denote the goal types in the intermediate stages of the repeated application of the tactic subst ax1. Now, for the sequence to be detected as a loop, we must first discover

GT2′ = gen(gen(GT21, GT22), GT23) = GT21

and show that the required pre-conditions of the loop rules hold. These are both true since the intermediate goal types are equal, and the entry and exit goal types are orthogonal due to the existence of ⊥ in the data argument, denoting an empty feature.

The next step (s2) of Figure 5 layers the highlighted sub-graphs into the graph tactics pax2a and pax2b. Such layering can be done for a (connected) sub-graph if the inputs and outputs of the sub-graph are respectively orthogonal.

Next, we again apply rule loop1 to the pax2a and pax2b sequence. However, this requires us to generalise these two graph tactics, i.e. combining the two graphs they contain into one. Now, as shown in , in the category of string graphs, two graphs are composed by a push-out over a common boundary. We can combine two graph tactics in the same way by a push-out over the largest common sub-graph. This is shown on the right-most diagram of Figure 6, which becomes the last step (s3) of Figure 5.

This graph is in fact the mutation strategy of Figure 2, with the addition that we have given semantics to the edges. Now, the first feedback loop is identified by , while the second feedback loop is identified by .

## 6 Related Work

We extend , which introduces the underlying strategy language, by developing a theory for goal types which we show form a lattice, and using this property to develop techniques for generalising strategies.

Our goal types can be seen as a lightweight implementation of pre/post-condition used in proof planning  – with the additional property that the language captures the flow of goals. It can be seen as further extending the marriage of procedural and declarative approaches to proof strategies [2, 13, 8], and addressing issues related to goal flow and goal focus highlighted in  – for a more detailed comparison we refer to .

The lattice-based technique developed for goal type generalisation is similar to anti-unification , which generalises two terms into one (with substitutions back to the original terms). Whilst each feature is primitive, the goal type has several dimensions. More expressive class/link features, which are future work, may require higher-order anti-unification  – and such ideas may also be applicable to graph generalisation. Other work that may become relevant to our techniques includes the graph abstractions/transformations used in algorithmic heap-based program verification, such as , and in the parallelisation of functional programs .
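To make the comparison concrete, first-order anti-unification in the style of Plotkin can be sketched as follows. This is an illustrative simplification: terms are nested tuples with the function symbol first, and (unlike a full least-general-generalisation algorithm) repeated disagreement pairs receive fresh variables rather than sharing one.

```python
import itertools

def anti_unify(t1, t2, subst1, subst2, fresh=None):
    """Generalise t1 and t2 into one term, introducing a fresh variable
    wherever they disagree; subst1/subst2 map each variable back to the
    corresponding sub-term of the original."""
    if fresh is None:
        fresh = itertools.count(1)
    # identical sub-terms generalise to themselves
    if t1 == t2:
        return t1
    # same function symbol and arity: recurse on the arguments
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(
            anti_unify(a, b, subst1, subst2, fresh)
            for a, b in zip(t1[1:], t2[1:]))
    # disagreement: introduce a fresh variable
    v = f"X{next(fresh)}"
    subst1[v], subst2[v] = t1, t2
    return v

s1, s2 = {}, {}
g = anti_unify(("f", "a", ("g", "b")), ("f", "c", ("g", "b")), s1, s2)
print(g)   # ('f', 'X1', ('g', 'b'))
print(s1)  # {'X1': 'a'}
```

Applying the substitution s1 (resp. s2) to g recovers the first (resp. second) original term, mirroring how our goal type generalisation records the path back down the lattice.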

As already discussed, the problem when ignoring goal information is that one cannot describe, e.g., where to send a goal or when to terminate a loop, in a way sufficiently abstract to capture a large class of proofs. Instead, often crude heuristics have to be used in the underlying tactic language. This is the case for , which uses a regular expression language (close to LCF tactics), originally developed in  to learn proof plans.  further claims that explanation-based generalisation (EBG) is applied to derive pre/post-conditions, but no details of this are provided. An EBG approach is also applied to generalise Isabelle proof terms into more generic theorems in . This could provide an alternative starting point for us; however, one may argue that much of the user intent would be lost by working in the low-level proof term representation. Further, note that our work focuses on proofs of conjectures which require structure, meaning machine learning techniques – such as , which learns heuristics to select relevant axioms/rules for automated provers – are not sufficient. However, in , an approach was outlined to combine essentially our techniques with more probabilistic techniques to cluster interactive proofs .

We would also like to utilise work on proof and proof script refactoring . This could be achieved either as a pre-processing step, or by porting these techniques to our graph based language. Finally, albeit for source code,  argues for the use of graphs to perform refactorings, which further justifies our graph based representation of proof strategies for the work presented here.

## 7 Conclusion and Future Work

In this paper we have reported on our initial results in creating a technique to generalise proofs into high-level proof strategies which can be used to automatically discharge similar conjectures. This paper has two contributions: (1) the introduction of goal types to describe properties of goals, using a lattice structure to enable generalisations; (2) two generic techniques, based upon loop discovery, to generalise a proof strategy. The techniques were motivated and illustrated by an example from separation logic. We are in the process of implementing them in Isabelle, combined with the Quantomatic graph rewriting engine , and plan to test them on more examples, using a larger set of properties to represent the goal types. In particular, we are interested in less syntactic properties, such as the origin of a goal, or whether it is in a decidable sub-logic.

We also showed how the lattice structure corresponds to sub-typing, and we plan to incorporate sub-typing into the underlying theory of the language in order to utilise it when composing graphs. Further, we plan to develop more techniques for generalising graphs, which may include developing an underlying theory of graph generalisation that is less restrictive than rewriting.

Finally, we have already touched upon the need for heuristic guidance in this work, as there will be many ways of generalising. We also plan to apply the techniques to extract strategies from a corpus of proofs. Here we believe we have a much better chance of finding and generalising common sub-strategies, and may also incorporate probabilistic techniques as a pre-filter . Such work may help indicate which class/link features are more common, which can be used to improve the generalisation heuristics discussed above. Further, we would like to remove the restriction that facts have to be orthogonal, and improve the sub-typing to handle this case.

We only briefly discussed the process of turning proofs into initial low-level proof strategy graphs. With partners on the AI4FM project (www.ai4fm.org) we are working on utilising their work on capturing the full proof process, where the user may (interactively) highlight the key features of a proof (step) . This can further help the generalisation heuristics.

### Acknowledgements

This work has been supported by EPSRC grants: EP/H023852, EP/H024204 and EP/J001058. We would like to thank Alan Bundy, Aleks Kissinger, Lucas Dixon, members of the AI4FM project, Katya Komendantskaya, Jonathan Heras and Colin Farquhar for feedback and discussions.

## References

•  A. Asperti, W. Ricciotti, C. Sacerdoti, and C. Tassi. A new type for tactics. In PLMMS’09, pages 229–232, 2009.
•  Serge Autexier and Dominik Dietrich. A tactic language for declarative proofs. In ITP’10, volume 6172 of LNCS, pages 99–114. Springer, 7 2010.
•  I.B. Boneva, A. Rensink, M.E. Kurban, and J. Bauer. Graph abstraction and abstract graph transformation. Technical Report TR-CTI, University of Twente, July 2007.
•  A. Bundy. The use of explicit plans to guide inductive proofs. In R. Lusk and R. Overbeek, editors, CADE9, pages 111–120. Springer-Verlag, 1988.
•  A. Bundy, G. Grov, and C.B. Jones. Learning from experts to aid the automation of proof search. In PreProc of AVoCS’09, pages 229–232, 2009.
•  Lucas Dixon and Aleks Kissinger. Open graphs and monoidal theories. CoRR, abs/1011.4114, 2010.
•  Hazel Duncan. The use of Data-Mining for the Automatic Formation of Tactics. PhD thesis, University of Edinburgh, 2002.
•  M. Giero and F. Wiedijk. MMode, a Mizar mode for the proof assistant Coq. Technical report, January 07 2004.
•  Michael J. C. Gordon, Robin Milner, and Christopher P. Wadsworth. Edinburgh LCF, volume 78 of Lecture Notes in Computer Science. Springer, 1979.
•  G. Grov and A. Kissinger. A graphical language for proof strategies. Submitted to ITP’13. Also available at arXiv:1302.6890.
•  G. Grov, E. Komendantskaya, and A. Bundy. A statistical relational learning challenge - extracting proof strategies from exemplar proofs.
•  G. Grov and G. Michaelson. Hume box calculus: robust system development through software transformation. HOSC, 23:191–226, 2010.
•  John Harrison. A mizar mode for HOL. In TPHOLs, volume 1125 of LNCS, pages 203–220. Springer, 1996.
•  M. Jamnik, M. Kerber, and C. E Benzmuller. Learning Method Outlines in Proof Planning. Technical Report CSRP-01-8, University of Birmingham (CS), 2001.
•  E.B. Johnsen and C. Lüth. Theorem reuse by proof term transformation. In TPHOLs 2004, volume 3223 of LNCS, pages 152–167. Springer, 2004.
•  A. Kissinger, A. Merry, L. Dixon, R. Duncan, M. Soloviev, and B. Frot. Quantomatic, 2011.
•  E. Komendantskaya, J. Heras, and G. Grov. Machine learning in proof general: Interfacing interfaces. CoRR, abs/1212.3618, 2012.
•  U. Krumnack, A. Schwering, H. Gust, and K-U Kühnberger. Restricted higher-order anti-unification for analogy making. In AJAI 2007, volume 4830 of LNAI, pages 273–282. Springer, 2007.
•  Ewen Maclean and Andrew Ireland. Mutation in linked data structures. In ICFEM, volume 6991 of LNCS, pages 275–290. Springer, 2011.
•  T. Mens, N. Van Eetvelde, S. Demeyer, and D. Janssens. Formalizing refactorings with graph transformations. Journal of Software Maintenance, 17(4):247–276, 2005.
•  Tom Mitchell. Machine Learning. McGraw-Hill, 1997.
•  G. D. Plotkin. A note on inductive generalization. In Machine Intelligence 5, pages 153–163, Edinburgh, 1969. Edinburgh University Press.
•  J. C. Reynolds. Separation logic: A logic for shared mutable data structures. In Logic in Computer Science, pages 55–74. IEEE Computer Society, 2002.
•  E. Tsivtsivadze, J. Urban, H. Geuvers, and T. Heskes. Semantic graph kernels for automated reasoning. In Proc. 11th SIAM Int. Conf. on Data Mining, pages 795–803. SIAM / Omnipress, 2011.
•  Andrius Velykis. Inferring the proof process. In Christine Choppy, David Delayahe, and Kaïs Klaï, editors, FM2012 Doctoral Symposium, Paris, France, August 2012.
•  I. Whiteside, D. Aspinall, L. Dixon, and G. Grov. Towards formal proof script refactoring. In CICM’11, volume 6824 of LNCS, pages 260–275. Springer, 2011.