Matrix Graph Grammars: Transformation of Restrictions


Pedro Pablo Pérez Velasco
School of Computer Science
Universidad Autónoma de Madrid
Ciudad Universitaria de Cantoblanco
   28049 - Madrid    Spain
pedro.perez@uam.es
Abstract

In the Matrix approach to graph transformation we represent simple digraphs and rules with Boolean matrices and vectors, and the rewriting is expressed using Boolean operations only. In previous works, we developed analysis techniques enabling the study of the applicability of rule sequences, their independence, state reachability and the minimal digraph able to fire a sequence. See [MGGBook] for a comprehensive introduction. In [MGGfundamenta], graph constraints and application conditions (so-called restrictions) have been studied in detail. In the present contribution we tackle the problem of translating post-conditions into pre-conditions and vice versa. Moreover, we shall see that application conditions can be moved along productions inside a sequence (restriction delocalization). As a practical-theoretical application we show how application conditions allow us to perform multidigraph rewriting (as opposed to simple digraph rewriting) using Matrix Graph Grammars.


Keywords: Matrix Graph Grammars, Graph Dynamics, Graph Transformation, Restrictions, Application Conditions, Preconditions, Postconditions, Graph Constraints.

1 Introduction

Graph transformation [graGraBook, handbook] is becoming increasingly popular as a means to describe system behavior due to its graphical, declarative and formal nature. For example, it has been used to describe the operational semantics of Domain Specific Visual Languages (DSVLs, [JVLC]), taking advantage of the fact that it is possible to use the concrete syntax of the DSVL in the rules, which then become more intuitive to the designer.

The main formalization of graph transformation is the so-called algebraic approach [graGraBook], which uses category theory in order to express the rewriting step. Prominent examples of this approach are the double [DPO:handbook, graGraBook] and single [SPO:handbook] pushout (DPO and SPO), for which interesting analysis techniques have been developed, for example to check sequential and parallel independence between pairs of rules [graGraBook, handbook] or the calculation of critical pairs [Heckel, Lambers].

Frequently, graph transformation rules are equipped with application conditions (ACs) [AC:Ehrig, graGraBook, HeckelW95], stating extra (in addition to the left hand side) positive and negative conditions that the host graph should satisfy for the rule to be applicable. The algebraic approach has proposed a kind of ACs with predefined diagrams (i.e. graphs and morphisms making up the condition) and quantifiers regarding the existence or not of matchings of the different graphs of the constraint in the host graph [AC:Ehrig, graGraBook]. Most analysis techniques for plain rules (without ACs) then have to be adapted for rules with ACs (see e.g. [Lambers] for critical pairs with negative ACs). Moreover, different adaptations may be needed for different kinds of ACs. Thus, a uniform approach to analyze rules with arbitrary ACs would be very useful.

In previous works [JuanPP_1, JuanPP_2, JuanPP_4, MGGBook] we developed a framework (Matrix Graph Grammars, MGGs) for the transformation of simple digraphs. Simple digraphs and their transformation rules can be represented using Boolean matrices and vectors. Thus, the rewriting can be expressed using Boolean operators only. One important point is that, unlike other approaches, we explicitly represent the rule dynamics (addition and deletion of elements) instead of only the static parts (rule pre- and postconditions). This point of view enables new analysis techniques, such as checking independence of a sequence of arbitrary length and a permutation of it, or obtaining the smallest graph able to fire a sequence. On the theoretical side, our formalization of graph transformation introduces concepts from many branches of mathematics like Boolean algebra, group theory, functional analysis, tensor algebra and logic [MGGBook, MGGCombinatorics, MGGmodel]. This wealth of available mathematical results opens the door to analysis methods not developed so far, like sequential independence and explicit parallelism not limited to pairs of sequences, applicability, graph congruence and reachability. On the practical side, implementations of our analysis techniques, being based on Boolean algebra manipulations, are expected to show good performance.

In MGGs we do not only consider the elements that must be present in order to apply a production (left hand side, LHS, also known as the certainty part) but also those elements that potentially prevent its application (known as the nihil or nihilation part). Refer to [MGGfundamenta] where, besides this, application conditions and graph constraints are studied for the MGG approach. The present contribution is a continuation of [MGGfundamenta], where a comparison with related work can also be found. We shall tackle pre- and postconditions, their transformation, the sequential version of these results and multidigraph rewriting.

Paper organization. Section 2 gives an overview of Matrix Graph Grammars. Section 3 revises application conditions as studied in [MGGfundamenta]. Postconditions and their equivalence to certain sequences are addressed in Sec. 4. Section 5 tackles the transformation of preconditions into postconditions. The converse, more natural from a practical point of view, is also addressed. The transformation of restrictions is generalized in Sec. 6, in which delocalization – how to move application conditions from one production to another inside the same sequence – is also studied together with variable nodes. As an application of restrictions to MGGs, Sec. 7 shows how to make MGG deal with multidigraphs instead of just simple digraphs without major modifications to the theory. The paper ends in Sec. 8 with some conclusions, further research remarks and acknowledgements.

2 Matrix Graph Grammars Overview

We work with simple digraphs, which we represent as $G = (M, V)$, where $M$ is a Boolean matrix for edges (the graph adjacency matrix) and $V$ a Boolean vector for vertices or nodes. (The vector for nodes is necessary because in MGG nodes can be added and deleted; we mark the existing nodes with a 1 in the corresponding position of the vector.) The left of Fig. 1 shows a graph representing a production system made up of a machine (controlled by an operator) which consumes and produces pieces through conveyors. Self loops in operators and machines indicate that they are busy.

Figure 1: Simple Digraph Example (left). Matrix Representation (right)

Well-formedness of graphs (i.e. absence of dangling edges) can be checked by verifying the identity $\big\| (M \vee M^t) \odot \overline{V} \big\|_1 = 0$, where $\odot$ is the Boolean matrix product (like the regular matrix product, but with "and" and "or" instead of multiplication and addition), $M^t$ is the transpose of the matrix $M$, $\overline{V}$ is the negation of the nodes vector $V$, and $\|\cdot\|_1$ is an operation (a norm, actually) that results in the "or" of all the components of the vector. We call this property compatibility (refer to [JuanPP_1]). Note that $M \odot \overline{V}$ results in a vector that contains a 1 in position $i$ when there is an outgoing edge from node $i$ to a non-existing node. A similar expression with the transpose of $M$ is used to check for incoming edges.
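As an illustration of this check, here is a minimal Python sketch (not part of the MGG formalism itself; names are ours) encoding a graph as a Boolean adjacency matrix plus a node vector and testing compatibility:

```python
import numpy as np

def bool_mat_vec(M, v):
    """Boolean matrix product M (.) v: 'and' instead of *, 'or' instead of +."""
    return (M & v[None, :]).any(axis=1)

def compatible(M, V):
    """norm_1((M v M^t) (.) neg(V)) == 0: no edge touches a non-existing node."""
    dangling = bool_mat_vec(M | M.T, ~V)
    return not dangling.any()   # the 1-norm is the 'or' of all components

# Edge from node 0 to the non-existing node 2 => the graph is not compatible.
M = np.array([[0, 0, 1],
              [0, 0, 0],
              [0, 0, 0]], dtype=bool)
V = np.array([1, 1, 0], dtype=bool)
print(compatible(M, V))  # False
```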

A type is assigned to each node in $V$ by a function $\lambda: V \to T$ from the set of nodes $V$ to a set of types $T$. In Fig. 1 types are represented as an extra column in the matrices, where the numbers before the colon distinguish elements of the same type; this is just a visual aid. For edges we use the types of their source and target nodes. A typed simple digraph is $G_T = (G, \lambda)$. From now on we shall assume typed graphs and shall drop the subindex.

A production or grammar rule $p: L \to R$ is a morphism of typed simple digraphs, defined as a mapping that transforms $L$ into $R$ with the restriction that the type of the image must be equal to the type of the source element (we shall come back to this topic in Sec. 6). More explicitly, with $L = (L_E, L_V)$ and $R = (R_E, R_V)$, a production consists of two partial injective mappings $p_E: L_E \to R_E$ and $p_V: L_V \to R_V$ such that $\lambda_R \circ p_E = \lambda_L|_{\mathrm{Dom}(p_E)}$ and $\lambda_R \circ p_V = \lambda_L|_{\mathrm{Dom}(p_V)}$, where $\mathrm{Dom}$ stands for domain, $E$ for edges and $V$ for vertices.

A production is statically represented as $p: L \to R$. The matrices and vectors of these graphs are arranged so that the elements identified by the morphism match (this is called completion, see below). Alternatively, a production adds and deletes nodes and edges; therefore it can be dynamically represented by encoding the rule's LHS together with matrices and vectors representing the addition and deletion of edges and nodes: $p = (L, e_E, r_E, e_V, r_V, T)$, where $T$ contains the types of the new nodes, $e_E$ and $e_V$ are the deletion Boolean matrix and vector, and $r_E$ and $r_V$ are the addition Boolean matrix and vector (we call such matrices $e$ for "erase" and $r$ for "restock"). They have a 1 in the positions where an element is to be deleted or added, respectively. The output of rule $p$ is calculated by the Boolean formula $R = p(L) = r \vee \overline{e}\,L$, which applies both to nodes and edges (the "and" symbol $\wedge$ is usually omitted in formulae, so $\overline{e}L = \overline{e} \wedge L$, with precedence of $\wedge$ over $\vee$).
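The dynamic formulation can be executed directly on the Boolean representation. A minimal sketch, assuming all matrices and vectors have already been completed to a common size (function names are illustrative):

```python
import numpy as np

def apply_rule(L_E, L_V, e_E, e_V, r_E, r_V):
    """R = p(L) = r v (neg(e) ^ L), applied to edges and nodes alike."""
    R_E = r_E | (~e_E & L_E)   # edges: add r_E, keep the non-deleted edges of L
    R_V = r_V | (~e_V & L_V)   # nodes: the same Boolean formula
    return R_E, R_V
```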

Figure 2: (a) Rule Example. (b) Static Formulation. (c) Dynamic Formulation

Example. Figure 2 shows a rule and its associated matrices. The rule models the consumption of a piece (Pack) by a machine (Mach), input via the conveyor (Conv). There is an operator (Oper) managing the machine. Compatibility of the resulting graph must be ensured; thus the rule cannot be applied if the machine is already busy, as it would end up with two self loops, which is not allowed in a simple digraph. This restriction of simple digraphs can be useful in this kind of situation and acts like a built-in negative application condition. Later we will see that the nihilation matrix takes care of this restriction.

In order to operate with the matrix representation of graphs of different sizes, an operation called completion adds extra rows and columns with zeros to matrices and vectors, and rearranges rows and columns so that the identified edges and nodes of the two graphs match. For example, in Fig. 2, if we need to operate $L$ and $R$, completion adds a fourth zero row and a fourth zero column to $R$ (for the deleted Pack node). No further modification is needed because the rest of the elements have the right types and are placed properly. (In the present contribution we shall assume that completion is performed somehow; this is closely related to non-determinism. The reader is referred to [MGGmodel] for further details.)
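A sketch of the padding half of completion (the rearrangement of rows and columns, i.e. the identification of elements across graphs, is assumed to have been fixed already):

```python
import numpy as np

def pad(M, V, size):
    """Grow adjacency matrix M and node vector V to 'size' nodes with zeros."""
    n = V.shape[0]
    M2 = np.zeros((size, size), dtype=bool)
    V2 = np.zeros(size, dtype=bool)
    M2[:n, :n] = M
    V2[:n] = V
    return M2, V2
```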

With the purpose of considering the elements in the host graph that disable a rule application, we extend the notation for rules with a new simple digraph $N$, which specifies the two kinds of forbidden edges: those incident to nodes which are going to be erased and any edge added by the rule (which cannot be added twice, since we are dealing with simple digraphs). $N$ has non-zero elements in positions corresponding to newly added edges, and to non-deleted edges incident to deleted nodes. Matrices are derived in the following order: $(L, R) \mapsto (e, r) \mapsto N$. Thus, a rule is statically determined by its LHS and RHS, $p: L \to R$, from which it is possible to give a dynamic definition $(L, e, r)$, with $e = L\,\overline{R}$ and $r = R\,\overline{L}$, to end up with a full specification including its environmental behavior: $(L, N, e, r)$. No extra effort is needed from the grammar designer because $N$ can be automatically calculated: $N = p\left(\overline{D}\right)$, with $D = \overline{e_V} \otimes \overline{e_V}^{\,t}$ the matrix of edge positions whose two endpoints survive the rule. (The symbol $\otimes$ denotes the tensor or Kronecker product, which sums up the covariant and contravariant parts and multiplies every element of the first vector by the whole second vector.) The evolution of the nihilation matrix (what elements can not appear in the RHS) is given by the inverse of the production: $p^{-1}(N) = e \vee \overline{r}\,N$. See [MGGfundamenta] for more details.
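Under the formulas just quoted, the nihilation matrix can be computed mechanically. A sketch under the same assumptions as above (completed matrices; e_V is the node-deletion vector):

```python
import numpy as np

def nihilation(e_E, r_E, e_V):
    """N = p(neg(D)) with D = neg(e_V) (x) neg(e_V)^t: forbidden edges are
    those added by the rule plus non-deleted edges incident to deleted nodes."""
    D = np.outer(~e_V, ~e_V)     # D[i,j] = 1 iff neither endpoint is deleted
    return r_E | (~e_E & ~D)     # p applied to the complement of D
```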

Inspired by the Dirac or bra-ket notation [braket], we split the static part (initial state, the bra $\langle L|$) from the dynamics (element addition and deletion, the ket $|p\rangle$): $R = \langle L \,|\, p \rangle$. The ket operators (those to the right side of the bra-ket) can be moved to the bra (left hand side) by using their adjoints.

Matching is the operation of identifying the LHS of a rule inside a host graph. Given a rule $p: L \to R$ and a simple digraph $G$, any total injective morphism $m_L: L \to G$ is a match for $p$ in $G$ (MGG considers only injective matches); thus it is one of the ways of completing $L$ in $G$. Besides, we shall consider the elements that must not be present, through a match $m_N: N \to \overline{G}$ of the nihilation matrix in the complement of the host graph.

Given the grammar rule $p: L \to R$ and the graph $G = (G_E, G_V)$, $d = (p, m)$ – with $m = (m_L, m_N)$ – is called a direct derivation with result $H = p^*(G)$ if the following conditions are satisfied:

  1. There exist total injective morphisms $m_L: L \to G$ and $m_N: N \to \overline{G}$ with $m_L(n) = m_N(n)$, $\forall n \in L_V$.

  2. The match $m_L$ induces a completion of $L$ in $G$. Matrices $e$ and $r$ are then completed in the same way to yield $e^*$ and $r^*$. The output graph is calculated as $H = p^*(G) = r^* \vee \overline{e^*}\,G$.

The negation, when applied to graphs alone (not specifying the nodes) – e.g. $\overline{G}$ in the first condition above – is carried out just on edges. Notice that in particular the first condition above guarantees that $L$ and $N$ are matched on the same nodes in the host graph $G$.
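Putting the two conditions together, a direct derivation can be sketched as follows. This is a simplification in which the match is taken to be the identity alignment, i.e. L, N, e and r are assumed already completed over G's nodes:

```python
import numpy as np

def direct_derivation(G_E, G_V, L_E, L_V, N_E, e_E, e_V, r_E, r_V):
    """Return H = p*(G), or None if the rule is not applicable here."""
    # Condition 1: L found in G, and N found in the complement of G (edges).
    if (L_E & ~G_E).any() or (L_V & ~G_V).any():
        return None                 # no total injective morphism m_L: L -> G
    if (N_E & G_E).any():
        return None                 # m_N: N -> neg(G) fails
    # Condition 2: H = r* v (neg(e*) ^ G).
    return r_E | (~e_E & G_E), r_V | (~e_V & G_V)
```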

In direct derivations dangling edges can occur because the nihilation matrix only considers edges incident to nodes appearing in the rule's LHS, and not the whole host graph. In MGG an operator $T_\varepsilon$ takes care of dangling edges, which are deleted by adding a preproduction (known as $\varepsilon$-production) before the original rule; refer to [JuanPP_1, JuanPP_2]. Thus, rule $p$ is transformed into the sequence $p; p_\varepsilon$, where $p_\varepsilon$ deletes the dangling edges and $p$ remains unaltered.

There are occasions in which two or more productions should be matched to the same nodes. This is achieved with the marking operator $T_\mu$ introduced in Chap. 6 of [MGGBook]. A grammar rule and its associated $\varepsilon$-production are one example, and we shall find more in future sections.

In [JuanPP_1, JuanPP_2, JuanPP_4, MGGBook] some analysis techniques for MGGs have been developed, which we shall skim through. One important feature of MGG is that sequences of rules can be analyzed, to some extent, independently of any host graph. A rule sequence is represented by $s_n = p_n; p_{n-1}; \ldots; p_1$, where application is from right to left, i.e. $p_1$ is applied first. For its analysis, the sequence is completed by identifying the nodes across rules which are assumed to be mapped to the same node in the host graph.

Once the sequence is completed, sequence coherence [JuanPP_1, MGGBook, MGGCombinatorics] allows us to know if, for the given identification, the sequence is potentially applicable, i.e. if no rule disturbs the application of those following it. The formula for coherence results in a matrix and a vector (which can be interpreted as a graph) with the problematic elements. If the sequence is coherent, both are zero; if not, they contain the problematic elements. A coherent sequence is compatible if its application produces a simple digraph, that is, if no dangling edges are produced in intermediate steps.

Given a completed sequence, the minimal initial digraph (MID) is the smallest graph that permits the application of such a sequence. Conversely, the negative initial digraph (NID) contains all elements that should not be present in the host graph for the sequence to be applicable. In this way, the NID is a graph that should be found in $\overline{G}$ for the sequence to be applicable (i.e. none of its edges can be found in $G$). See Sec. 6 in [MGGCombinatorics] or Chaps. 5 and 6 in [MGGBook].

Other concepts we developed aim at checking sequential independence (same result) between a sequence and a permutation of it. G-congruence detects if two sequences, one a permutation of the other, have the same MID and NID. It returns two matrices and two vectors, representing two graphs which are the differences between the MIDs and NIDs of each sequence, respectively. Thus, if they are zero, the sequences have the same MID and NID. Two coherent and compatible completed sequences that are G-congruent are sequentially independent. See Sec. 7 in [MGGCombinatorics] or Chap. 7 in [MGGBook].

3 Previous Work on Application Conditions in MGG

In this section we shall brush up on application conditions (ACs) as introduced for MGG in [MGGfundamenta], with non-fixed diagrams and quantifiers. For the quantification, a full-fledged monadic second order logic formula is used (MSOL, see e.g. [Courcelle]). One of the contributions of [MGGfundamenta] is that a rule with an AC can be transformed into (sequences of) plain rules by adding the positive information to the left hand side of the production and the negative to the nihilation matrix.

A diagram $\mathfrak{d}$ is a set of simple digraphs $\{A_i\}_{i \in I}$ and a set of partial injective morphisms $\{d_j\}_{j \in J}$ between them, $d_j: A_{j_1} \to A_{j_2}$. The diagram is well defined if every cycle of morphisms commutes. A graph constraint (GC) is a pair $(\mathfrak{d}, \mathfrak{f})$ where $\mathfrak{d}$ is a well defined diagram and $\mathfrak{f}$ a sentence with variables in $\{A_i\}$ and predicates $P$ and $Q$; see eqs. (1) and (2). Formulae are restricted to have no free variables except for the default second argument of predicates $P$ and $Q$, which is the host graph in which we evaluate the GC. GC formulae are made up of expressions about graph inclusions. The predicates $P$ and $Q$ are given by:

$P(A, G) = \forall e \, [\, e \in A \Rightarrow e \in G \,]$   (1)
$Q(A, G) = \exists e \, [\, e \in A \wedge e \in G \,]$   (2)

where predicate $\in$ states that element $e$ (a node or an edge) is in the corresponding graph. Predicate $P(A, G)$ means that graph $A$ is included in $G$. Predicate $Q(A, G)$ asserts that there is a partial morphism between $A$ and $G$, which is defined on at least one edge ($e$ ranges over all edges). The notation (syntax) will be simplified by making the host graph the default second argument for predicates $P$ and $Q$. Besides, it will be assumed that by default total morphisms are demanded: unless otherwise stated, predicate $P$ is assumed, so $\exists A$ abbreviates $\exists A\,[P(A)]$. We take the convention that negations in abbreviations apply to the predicate (e.g. $\overline{A}$ abbreviates $\neg P(A)$) and not to the negation of the graph's adjacency matrix.
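On the Boolean representation, the two predicates admit a one-line reading each. An illustrative sketch over completed edge matrices (our naming, not the paper's):

```python
import numpy as np

def P(A, G):
    """P(A, G): every edge of A is in G (a total morphism exists)."""
    return not (A & ~G).any()

def Q(A, G):
    """Q(A, G): some edge of A is in G (a partial morphism on >= 1 edge)."""
    return bool((A & G).any())
```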

Figure 3: Diagram Example

Example. The GC in Fig. 3 is satisfied if for every $A_1$ in the host graph it is possible to find a related $A_2$, i.e. its associated formula is $\forall A_1 \exists A_2\,[P(A_1) \Rightarrow P(A_2)]$, equivalent by definition to $\forall A_1 \exists A_2\,[A_1 \Rightarrow A_2]$. Nodes and edges in $A_1$ and $A_2$ are related through the morphism of the diagram, in which the image of the machine in $A_1$ is the machine in $A_2$. To enhance readability, each graph in the diagram has been marked with the quantifier given in the formula. The GC in Fig. 3 expresses that each machine should have an output conveyor.

Given the rule $p: L \to R$ with nihilation matrix $N$, an application condition AC (over the free variable $G$) is a GC satisfying:

  1. $\exists A_{i_0}$ such that $A_{i_0} = L$ and $\exists A_{i_1}$ such that $A_{i_1} = N$.

  2. $\mathfrak{f}$ is such that $G$ is the only free variable.

  3. $\mathfrak{f}$ must demand the existence of $L$ in $G$ and the existence of $N$ in $\overline{G}$.

For simplicity, we usually do not explicitly show condition 3 in the formulae of ACs, nor the nihilation matrix in the diagram; $L$ and $N$ are existentially quantified before any other graph of the AC. Notice that the rule's LHS and its nihilation matrix can be interpreted as the minimal AC a rule can have. For technical reasons addressed in Sec. 5 (related to converting pre into postconditions) we assume that morphisms in the diagram do not have codomain $L$ or $N$. This is easily solved, as we may always use their inverses due to the morphisms' injectiveness.

It is possible to embed arbitrary ACs into rules by including the positive and negative conditions in $L$ and $N$, respectively. Intuitively: "MGG + AC = MGG" and "MGG + GC = MGG". In [MGGfundamenta] two basic operations are introduced: closure – which transforms universal into existential quantifiers – and decomposition – which transforms partial morphisms into total morphisms. Notice that a match is an existentially quantified total morphism. It is proved in [MGGfundamenta] that any AC can be embedded into its corresponding direct derivation. This is achieved by transforming the AC into some sequences of productions. There are four basic types of ACs/GCs. Let $(\mathfrak{d}, \mathfrak{f})$ be a graph constraint with diagram $\mathfrak{d} = \{A\}$ and consider the associated production $p$. The case $\mathfrak{f} = \exists A\,[P(A)]$ is just the matching of $A$ in the host graph $G$. It is equivalent to the sequence $p; id_A$, where $id_A$ has $A$ as LHS and RHS, so it simply demands its existence in $G$. We introduce the operator $T_\exists$ that replaces $p$ by $p; id_A$ and leaves the diagram and the formula unaltered. If the formula $\mathfrak{f} = \forall A\,[P(A)]$ is considered, we can reduce it to a sequence of matchings via the closure operator $T_\forall$, whose result is:

$T_\forall\big(\forall A\,[P(A)]\big) = \exists A^1 \ldots \exists A^m \big[\, \bigwedge_{i=1}^{m} P(A^i) \,\big]$   (3)

with $m$ the number of potential matches of $A$ in $G$, $A^i \simeq A$ ($\simeq$ denotes isomorphism) and, for $i \neq j$, $A^i$ and $A^j$ related by a maximal non-empty partial morphism (they are distinct potential occurrences of $A$). This is equivalent to the sequence $p; id_{A^m}; \ldots; id_{A^1}$. If the application condition has formula $\mathfrak{f} = \exists A\,[Q(A)]$, we can proceed by defining the decomposition operator $T_{\mathfrak{P}}$ with action:

$T_{\mathfrak{P}}\big(\exists A\,[Q(A)]\big) = \exists A_1 \ldots \exists A_n \big[\, \bigvee_{i=1}^{n} P(A_i) \,\big]$   (4)

where $A_i$ contains a single edge of $A$ and $n$ is the number of edges of $A$. This is equivalent to the set of sequences $\{\, p; id_{A_i} \,\}_{i \in \{1, \ldots, n\}}$.

Less evident are formulas of the form $\mathfrak{f} = \nexists A\,[P(A)]$. Fortunately, operators $T_\forall$ and $T_{\mathfrak{P}}$ commute when composed, so we can get along with the operator $T_{\forall\mathfrak{P}} = T_\forall \circ T_{\mathfrak{P}}$. The image of $T_{\forall\mathfrak{P}}$ on such ACs is given by:

$T_{\forall\mathfrak{P}}\big(\nexists A\,[P(A)]\big) = \exists A_1^1 \ldots \exists A_n^m \big[\, \bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} P(A_j^i, \overline{G}) \,\big]$   (5)
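The decomposition behind eq. (4) is easy to make concrete: a host graph satisfies $Q(A)$ precisely when $P(A_i)$ holds for one of the single-edge subgraphs $A_i$ of $A$. A small sketch (our naming):

```python
import numpy as np

def single_edge_subgraphs(A):
    """Yield one Boolean matrix per edge of A (the A_i of the decomposition)."""
    for i, j in zip(*np.nonzero(A)):
        A_i = np.zeros_like(A)
        A_i[i, j] = True
        yield A_i

def Q_via_decomposition(A, G):
    """Q(A, G) as the disjunction of P(A_i, G) over all single-edge A_i."""
    return any(not (A_i & ~G).any() for A_i in single_edge_subgraphs(A))
```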

An AC is said to be coherent if it is not a contradiction (false in all scenarios), compatible if, together with the rule's actions, it produces a simple digraph, and consistent if there exists a host graph $G$ satisfying the AC (written $G \vDash AC$, meaning that the formula of the AC, evaluated in $G$, holds; please refer to [MGGfundamenta] for the details) to which the production is applicable. As ACs can be transformed into equivalent (sets of) sequences, it is proved in [MGGfundamenta] that coherence and compatibility of an AC are equivalent to coherence and compatibility of the associated (set of) sequence(s), respectively. Also, an AC is consistent if and only if its equivalent (set of) sequence(s) is applicable. Besides, all results and analysis techniques developed for MGG can be applied to sequences with ACs. Some examples follow:

  • As a sequence is applicable if and only if it is coherent and compatible (see Sec. 6.4 in [MGGBook]), an AC is consistent if and only if it is coherent and compatible.

  • Sequential independence allows us to delay or advance constraints inside a sequence. As long as the productions do not modify the elements of the constraints, this amounts to transforming preconditions into postconditions. More on this in Sec. 5.

  • Initial digraph calculation solves the problem of finding a host graph that satisfies a given AC/GC. There are some limitations, though. For example it is necessary to limit the maximum number of nodes when dealing with universal quantifiers. This has no impact in some cases, for example when non-uniform MGG submodels are considered (see nodeless MGG in [MGGmodel]).

  • Graph congruence characterizes sequences with the same initial digraph. Therefore it can be used to study when two GCs/ACs are equivalent for all morphisms or for some of them.

Summarizing, there are two basic results in [MGGfundamenta]. First, it is always possible to embed an application condition into the LHS of the production or derivation: the left hand side of a production receives the elements that must be found ($L$) and those whose presence is forbidden ($N$). Second, it is always possible to find a sequence or a set of sequences of plain productions whose behavior is equivalent to that of the production plus the application condition.

4 Postconditions

In this section we shall introduce postconditions and state some basic facts about them, analogous to those for preconditions. We shall enlarge the notation by appending an arrow pointing left on top of a condition to indicate that it is a precondition and an arrow pointing right for a postcondition. Examples are $\overleftarrow{A}$ for a precondition and $\overrightarrow{A}$ for a postcondition. If it is clear from the context, arrows will be omitted.

Definition (Precondition and Postcondition). An application condition set on the LHS of a production is known as a precondition. If it is set on the RHS then it is known as a postcondition.

Operators $T_\forall$ and $T_{\mathfrak{P}}$ are defined similarly for postconditions. The following proposition establishes an equivalence between the basic formulae (match, decomposition, closure and negative application condition) and certain sequences of productions.

Proposition. Let $\overrightarrow{A}$ be a postcondition for the production $p$. Then we can obtain a set of sequences equivalent to $p$ together with the basic formulae as follows:

$\exists A\,[P(A)] \;\longmapsto\; id_A; p$   (6)
$\forall A\,[P(A)] \;\longmapsto\; id_{A^m}; \ldots; id_{A^1}; p$   (7)
$\exists A\,[Q(A)] \;\longmapsto\; \{\, id_{A_i}; p \,\}_{i \in \{1, \ldots, n\}}$   (8)
$\nexists A\,[P(A)] \;\longmapsto\; \big\{\, \overline{id}_{A_{j_1}^1}; \ldots; \overline{id}_{A_{j_m}^m}; p \,\big\}_{j_i \in \{1, \ldots, n\}}$   (9)

where $m$ is the number of potential matches of $A$ in the image of the host graph, $n$ is the number of edges in $A$ and $\overline{id}_A$ asks for the existence of $A$ in the complement of the image of the host graph.

Proof
For the first case (match), the AC states that an additional graph $A$ has to be found in the image of the host graph. This is easily achieved by applying $id_A$ to the image of $p$, i.e. by considering $id_A; p$. The elements in $A$ are related to those in $R$ according to the identifications in a morphism that has to be given in the diagram of the postcondition. In the four cases considered in the proposition we can move from composition to concatenation by means of the marking operator $T_\mu$. Recall that $T_\mu$ guarantees that the identifications in the diagram are preserved.

The second case (closure) is very similar. We have to verify all potential appearances of $A$ in the image of the host graph, because $\forall A\,[P(A)]$ reduces to a conjunction of existentials as in eq. (3). We proceed as in the first case, but this time with a finite number of compositions: $id_{A^m} \circ \ldots \circ id_{A^1} \circ p$.

For decomposition, is not found in the host graph if for some matching there is at least one missing edge. It is thus similar to matching but for a single edge. The way to proceed is to consider the set of sequences that appear in eq. (8). Negative application conditions (NACs) are the composition of eqs. (7) and (8).

One of the main points of the techniques available for preconditions is to analyze rules with ACs by translating them into sequences of flat rules, and then analyzing the sequences of flat rules instead.

Theorem. Any well-defined postcondition can be reduced to the study of the corresponding set of sequences.

Proof
The proof follows that of Th. 4.1 in [MGGfundamenta] and is included here for completeness' sake. Let the depth of a graph for a fixed node $n_0$ be the maximum over the shortest paths (to avoid cycles) starting in any node different from $n_0$ and ending in $n_0$. The depth of a graph is the maximum depth over all its nodes. Notice that the depth is $0$ if and only if the graphs in the diagram are unrelated. We shall apply induction on the depth of the AC.

A diagram is a graph where the nodes are digraphs $A_i$ and the edges are morphisms $d_{ij}: A_i \to A_j$. There are 16 possibilities for depth $0$ in an AC made up of a single element $A$, summarized in Table 1.

(1*) $\exists A\,[P(A, G)]$    (5*) $\neg\forall A\,[\neg P(A, G)]$    (9*) $\exists A\,[\neg P(A, G)]$    (13*) $\neg\forall A\,[P(A, G)]$
(2*) $\forall A\,[P(A, G)]$    (6*) $\neg\exists A\,[\neg P(A, G)]$    (10*) $\forall A\,[\neg P(A, G)]$    (14*) $\neg\exists A\,[P(A, G)]$
(3*) $\exists A\,[Q(A, G)]$    (7*) $\neg\forall A\,[\neg Q(A, G)]$    (11*) $\exists A\,[\neg Q(A, G)]$    (15*) $\neg\forall A\,[Q(A, G)]$
(4*) $\forall A\,[Q(A, G)]$    (8*) $\neg\exists A\,[\neg Q(A, G)]$    (12*) $\forall A\,[\neg Q(A, G)]$    (16*) $\neg\exists A\,[Q(A, G)]$
Table 1: All Possible Diagrams for a Single Element

Elements in the same row for each pair of columns are related using the equalities $\neg\forall A\,[\mathfrak{f}] = \exists A\,[\neg\mathfrak{f}]$ and $\neg\exists A\,[\mathfrak{f}] = \forall A\,[\neg\mathfrak{f}]$, so it is possible to reduce the study to cases (1*) – (4*) and (9*) – (12*). The identities $\neg P(A, G) = Q(A, \overline{G})$ and $\neg Q(A, G) = P(A, \overline{G})$ reduce (9*) – (12*) to formulae (1*) – (4*), now evaluated in the complement $\overline{G}$ of the host graph.

Proposition 4 considers the four basic cases, which correspond to (1*) – (4*) in Table 1, showing that in fact they can all be reduced to matchings in the image of the host graph, i.e. to (1*) in Table 1, verifying the theorem in the base case.

Now we move on to the induction step, which considers combinations of quantifiers. Well-definedness guarantees independence with respect to the order in which elements in the postcondition are selected. When there is a universal quantifier $\forall A$, according to eq. (7), elements of $A$ are replicated as many times as potential instances of $A$ can be found in the host graph. In order to continue the procedure we have to clone the rest of the diagram for each replica of $A$, except those graphs which are existentially quantified before $A$ in the formula. That is, if we have a formula $\exists A_1 \forall A_2 \exists A_3\,[\ldots]$, when performing the closure of $A_2$ we have to replicate $A_3$ as many times as $A_2$, but not $A_1$. Moreover, $A_3$ has to be connected to each replica of $A_2$, preserving the identifications of the corresponding morphism of the diagram. More in detail: when closure is applied to $A_i$, we iterate on all other graphs $A_j$ in the diagram. There are three possibilities:

  • If $A_j$ is existentially quantified after $A_i$ – $\forall A_i \ldots \exists A_j$ – then it is replicated as many times as $A_i$. Appropriate morphisms are created between each pair of replicas $A_i^k$ and $A_j^k$ if a morphism $d_{ij}$ existed. The new morphisms identify elements in $A_j^k$ and $A_i^k$ according to $d_{ij}$. This permits finding different matches of $A_j$ for each $A_i^k$, some of which can be equal. (If for example there are three instances of $A_i$ in the image of the host graph but only one of $A_j$, then the three replicas of $A_j$ are matched to the same part of the image.)

  • If $A_j$ is existentially quantified before $A_i$ – $\exists A_j \ldots \forall A_i$ – then it is not replicated, but just connected to each replica of $A_i$ if necessary. This ensures that a unique $A_j$ has to be found for each $A_i^k$. Moreover, the replication of $A_i$ has to preserve the shape of the original diagram. That is, if there is a morphism $d_{ji}$ then each $d_{ji}^k$ has to preserve the identifications of $d_{ji}$ (this means that we take only those matches which preserve the structure of the diagram).

  • If $A_j$ is universally quantified (no matter whether it is quantified before or after $A_i$), again it is replicated as many times as $A_i$. Afterwards, $A_j$ will itself need to be replicated due to its universality. The order in which these replications are performed is not relevant, as the two closures commute.

The previous theorem and the corollaries that follow heavily depend on the host graph and its image (through the match), so the analysis techniques developed so far in MGG which are independent of the host graph cannot be applied. The "problem" is the universal quantifier. We can consider the initial digraph and dispose to some extent of the host graph and its image. This is related to the fact (Sec. 5) that it is possible to transform postconditions into equivalent preconditions.

Two applications of Th. 4 are the following corollaries, which characterize coherence, compatibility and consistency of postconditions.

Corollary. A postcondition is coherent if and only if its associated (set of) sequence(s) is coherent. Also, it is compatible if and only if its associated (set of) sequence(s) is compatible, and it is consistent if and only if its associated (set of) sequence(s) is applicable.

Corollary. A postcondition is consistent if and only if it is coherent and compatible.

Example. Let's consider the diagram in Fig. 4 with formula $\overrightarrow{\mathfrak{f}} = \forall A_1 \exists A_2\,[A_1 \Rightarrow A_2]$. The postcondition states that if an operator is connected to a machine, then such machine is busy. The formula has an implication, so it is not possible to directly generate the set of sequences, because the postcondition also holds when the left of the implication is false. The closure operator reduces the postcondition to existential quantifiers, which is represented to the right of the figure. With two potential matches of $A_1$, the resulting modified formula would be $\exists A_1^1 \exists A_1^2 \exists A_2^1 \exists A_2^2 \,\big[ (A_1^1 \Rightarrow A_2^1) \wedge (A_1^2 \Rightarrow A_2^2) \big]$.

Figure 4: Postcondition Example

Once the formula has existentials only, we manipulate it to get rid of implications, using $A_1^i \Rightarrow A_2^i \equiv \overline{A_1^i} \vee A_2^i$. This leads to a set of four sequences: $\big\{\, \overline{id}_{A_1^2}; \overline{id}_{A_1^1}; p, \;\; id_{A_2^2}; \overline{id}_{A_1^1}; p, \;\; \overline{id}_{A_1^2}; id_{A_2^1}; p, \;\; id_{A_2^2}; id_{A_2^1}; p \,\big\}$. Thus, the graph $G$ and the production $p$ satisfy the postcondition if and only if some sequence in the set is applicable to $G$.
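To see where the four sequences come from, here is a worked expansion (added for illustration; the replica count $m = 2$ is what makes the count come out to four):

\[
\bigwedge_{i=1}^{2}\big(\overline{A_1^i} \vee A_2^i\big) \;=\; \big(\overline{A_1^1}\,\overline{A_1^2}\big) \,\vee\, \big(\overline{A_1^1}\,A_2^2\big) \,\vee\, \big(A_2^1\,\overline{A_1^2}\big) \,\vee\, \big(A_2^1\,A_2^2\big)
\]

Each disjunct yields one sequence; for instance the conjunction $A_2^1 A_2^2$ yields $id_{A_2^2}; id_{A_2^1}; p$.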

Something left undefined is the order of the productions $id_{A_2^i}$ and $\overline{id}_{A_1^i}$ in the sequences. Consistency does not depend on the ordering of these productions – as long as the first to be applied is production $p$ – because they are sequentially independent (they do not add nor delete any edge or node). If they are not sequentially independent, then there exists at least one inconsistency. This inconsistency can be detected using the previous corollaries, independently of the order of the productions.

5 Moving Conditions

In this section we give two different proofs that it is possible to transform preconditions into equivalent postconditions and back again. The first proof (sketched) makes use of category theory while the second relies on the characterizations of coherence, G-congruence and compatibility. To ease exposition we shall focus on the certainty part only as the nihilation part would follow using the inverse of the production.

We shall start with a case that can be addressed using equations (6) – (9), Th. 4 and Cor. 4: the one in which the transformed postcondition for a given precondition does not change. (This is not so unrealistic; it happens, for example, if the production preserves all elements appearing in the precondition.) The question of whether it is always possible to transform a precondition into a postcondition – and back again – in this restricted case would be equivalent to asking for sequential independence of the production and the identities $id_A$ or $\overline{id}_A$:

$p; id_{\overleftarrow{A}} \;=\; id_{\overrightarrow{A}}; p$   (10)

where the sequence to the left of the equality corresponds to a precondition and the sequence to the right corresponds to its equivalent postcondition.

Figure 5: Precondition to Postcondition Transformation

In general the production may act on elements that appear in the diagram of the precondition, spoiling sequential independence. The left and center of Fig. 5 – in which the first basic AC (match) is considered – suggest that the pre-to-post transformation is a categorical pushout (a square in which the production and the precondition are known, and the postcondition has to be calculated) in the category of simple digraphs and partial morphisms.

Theorem 4 proves that any postcondition can be reduced to the match case. Besides, we can trivially consider total morphisms (instead of partial ones) by restricting the domain and the codomain of each morphism to the elements on which it is defined. For the post-to-pre transformation we can either use pullbacks, or pushouts plus the inverse of the production involved.

To see that precondition satisfaction is equivalent to postcondition satisfaction using category theory, we should check that the different pushouts can be constructed and that satisfaction is preserved in both directions (refer to Fig. 5). Although some topics remain untouched, such as dangling edges, we shall not carry on with category theory.

Figure 6: Restriction to Common Parts: Total Morphism

Example. Let there be given the precondition to the left of Fig. 6, with formula $\exists A\,[P(A)]$. To calculate its associated postcondition we can apply the production to $A$ and obtain $p(A)$, represented also to the left of the same figure. Notice however that it is not possible to find a match of the rule's LHS in $A$ because of one of its nodes. One possible solution is to restrict the production to the elements that the LHS and $A$ have in common. This is done to the right of Fig. 6.

Theorem. Any consistent precondition is equivalent to some consistent postcondition and vice versa.

Proof
