
Serialisable Multi-Level Transaction Control:
A Specification and Verification

The research reported in this paper results from the project Behavioural Theory and Logics for Distributed Adaptive Systems supported by the Austrian Science Fund (FWF): [P26452-N15]. The first author, Humboldt research prize awardee in 2007/08, gratefully acknowledges partial support by a renewed research grant from the Alexander von Humboldt Foundation in 2014.

The final publication is available at Elsevier via https://doi.org/10.1016/j.scico.2016.03.008.

©2017. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/.

Egon Börger Università di Pisa, Dipartimento di Informatica, I-56125 Pisa, Italy
boerger@di.unipi.it
   Klaus-Dieter Schewe Software Competence Centre Hagenberg, A-4232 Hagenberg, Austria
kdschewe@acm.org
   Qing Wang Research School of Computer Science, Australian National University, Australia
qing.wang@anu.edu.au
Abstract

We define a programming language independent controller TaCtl for multi-level transactions and an operator TA, which, when applied to concurrent programs with multi-level shared locations containing hierarchically structured complex values, turns their behavior with respect to some abstract termination criterion into a transactional behavior. We prove the correctness property that concurrent runs under the transaction controller are serialisable, assuming an Inverse Operation Postulate to guarantee recoverability. For its applicability to a wide range of programs we specify the transaction controller TaCtl and the operator TA in terms of Abstract State Machines (ASMs). This allows us to model concurrent updates at different levels of nested locations in a precise yet simple manner, namely in terms of partial ASM updates. It also provides the possibility to use the controller TaCtl and the operator TA as a plug-in when specifying concurrent system components in terms of sequential ASMs.


1 Introduction

This paper is about the use of generalized multi-level transactions as a means to control the consistency of concurrent access of programs to shared locations, which may contain hierarchically structured complex values, and to prevent the values stored at these locations from being changed almost randomly. According to Beeri, Bernstein and Goodman [6] most real systems with shared data have multiple levels, where each level has its own view of the data and its own set of operations, such that operations on one level may be conflict-free while they require conflicting lower-level operations.

A multi-level transaction controller interacts with concurrently running programs (i.e., sequential components of an asynchronous system) to control whether access to a possibly structured shared location can be granted or not, thus ensuring a certain form of consistency for these locations. This includes in particular the resolution of low-level conflicts by higher-level updates as provided by multi-level transactions [6, 35, 36] in distributed databases [7, 31]. A commonly accepted consistency criterion is that the joint behavior of all transactions (i.e., programs running under transactional control) with respect to the shared locations is equivalent to a serial execution of those programs. Serialisability guarantees that each transaction can be specified independently from the transaction controller, as if it had exclusive access to the shared locations.

It is expensive and cumbersome to specify transactional behavior and prove its correctness again and again for components of the great number of concurrent systems. Our goal is to define once and for all an abstract (i.e. programming language independent) transaction controller TaCtl which can simply be “plugged in” to turn the behavior of concurrent programs (i.e., components M of any given asynchronous system) into a transactional one. This involves also defining an operator TA that transforms a program M into a new one TA(M, TaCtl), by means of which the programs M are forced to listen to the controller TaCtl when trying to access shared locations. To guarantee recoverability where needed we use an Inverse Operation Postulate (Sect. 4.4) for component machines M; its satisfaction is a usage condition for submitting M to the transaction controller.

For the sake of generality we define the operator TA and the controller in terms of Abstract State Machines (ASMs), which can be read and understood as pseudo-code, so that TaCtl and the operator TA can be applied to code written in any programming language (to be precise: any language whose programs come with a notion of single step, the level at which our controller imposes shared memory access constraints to guarantee transactional code behavior). The precise semantics underlying the pseudo-code interpretation of ASMs (for which we refer the reader to [12]) allows us to mathematically prove the correctness of our controller and operator.

Furthermore, we generalize the strictly hierarchical view of multiple levels by using the partial update concept for ASMs developed in [24] and further investigated in [27] and [34]. This abstraction by partial updates simplifies the transaction model, as it allows us to model databases with complex values and to provide an easy-to-explain, yet still precise model of multi-level transactions, where dependencies between updates of complex database values are dealt with in terms of compatibility of appropriate value changing operators (see also [28]). In fact, technically speaking the model we define here is an ASM refinement (in the sense of [8]) of some of the components of the model published in [10], namely by a) generalizing the flat transaction model to multi-level transactions, which increase the concurrency in transactions, and b) including an Abort mechanism. Accordingly, the serializability proof is a refinement of the proof in [10], as the refined model is a conservative extension of the model for flat transactions. (For a detailed illustration of combined model and proof refinement we refer the reader to the Java compiler correctness verification in [5].)

We concentrate on transaction controllers that employ locking strategies such as the common two-phase locking protocol (2PL) [32]. That is, each transaction first has to acquire a (read- or write- or more generally operator-) lock for a shared, possibly nested location, whereby the access to the location to perform the requested operations is granted. Locks are released after the transaction has successfully committed and no more access to the shared locations is necessary.

There are of course other approaches to transaction handling, see e.g. [14, 21, 28, 33] and the extensive literature there covering classical transaction control for flat transactions, timestamp-based, optimistic and hybrid transaction control protocols, as well as other non-flat transaction models such as sagas. To model each of these approaches would fill a book; our more modest goal here is to concentrate on one typical approach to illustrate with rigour and in full detail a method by which such transaction handling techniques can be specified and proved to be correct. For the same reason we do not consider fairness issues, though they are important for concurrent runs.

In Section 2 we first give a more detailed description of the key ideas of multi-level transactions and their relationship to partial updates. We define TaCtl and the operator TA in Section 3 and the TaCtl components in Section 4. In Section 5 we prove the correctness of these definitions.

We assume the reader to have some basic knowledge of ASMs, covering the definitions—provided 20 years ago in [22] and appearing in textbook form in [12, Sect.2.2/4]—of what ASMs (i.e. their transition rules) are and how their execution in given environments performs state changes by applying sets of updates to locations. Nevertheless, at the places where some technical details about ASMs need to be referred to, we briefly describe their notation and their meaning, so that the paper can be understood also by a more general audience of readers who view ASMs as a semantically well-founded form of pseudo-code that performs computations over arbitrary structures.

2 Multi-Level Transactions and Partial Updates

While standard flat transaction models start from a view of operation sequences at one level, where each operation reads or writes a shared location—in less abstract terms these are usually records or pages in secondary storage—the multi-level transaction model [6, 35, 36] relaxes this view in various ways. The key idea is that there are multiple levels, each with its own view of the data and its own set of operations.

The operations on a higher level may be compatible with one another, whereas the operations on a lower level implementing them are not. As a motivating example, consider pages in secondary storage and the records stored in these pages. Updating two different records in the same page should be compatible, whereas simultaneously writing the whole page should not be. When updating a particular record, this record should be locked for writing; as writing the record also requires writing the page, the page should be locked as well. However, the page lock can be released immediately after writing, as it suffices to block updates to the record until commit. So another transaction could get access to a different record on the same page, with a long-lasting lock on that record and another temporary lock on the page.
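To make the page/record example concrete, the following small sketch (in Python; all names such as PageStore are hypothetical and the code is merely illustrative, not the paper's specification) combines long-term record locks held until commit with page locks that are released at the end of each step:

    import threading

    class PageStore:
        def __init__(self):
            self.page_locks = {}     # page_id -> lock held only for the duration of one step
            self.record_locks = {}   # (page_id, record_id) -> transaction id, held until commit
            self.pages = {}          # page_id -> {record_id: value}
            self._guard = threading.Lock()

        def write_record(self, tx, page_id, record_id, value):
            with self._guard:
                owner = self.record_locks.get((page_id, record_id))
                if owner not in (None, tx):
                    return False                                   # record is locked by another transaction
                self.record_locks[(page_id, record_id)] = tx       # long-term record lock, kept until commit
                page_lock = self.page_locks.setdefault(page_id, threading.Lock())
            with page_lock:                                        # temporary page lock for this step only
                self.pages.setdefault(page_id, {})[record_id] = value
            return True                                            # page lock released, record lock kept

        def commit(self, tx):
            with self._guard:
                for loc in [l for l, owner in self.record_locks.items() if owner == tx]:
                    del self.record_locks[loc]                     # release the long-term locks at commit

    store = PageStore()
    assert store.write_record("T1", "p1", "r1", "a")               # T1 locks record r1, briefly locks page p1
    assert store.write_record("T2", "p1", "r2", "b")               # T2 may use another record of the same page
    assert not store.write_record("T2", "p1", "r1", "c")           # but r1 stays record-locked until T1 commits
    store.commit("T1")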

A second key idea of the multi-level transaction model stressed in [33, 35, 36] is that some high-level operations may even be compatible when applied to the same shared location. Standard examples are addition, subtraction or insertion of values into a set. For instance, if a field in a record is to be updated by adding 3 to the stored value, then another operation subtracting 2 could be executed as well without causing inconsistencies. Consequently, the strictness of a lock can be relaxed, as a plus-lock can co-exist with another plus-lock, but must prevent an arbitrary update or a times-lock (for multiplication).
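For instance, for any stored value v the addition of 3 and the subtraction of 2 commute, whereas a multiplication does not commute with them in general:

\[
(v + 3) - 2 \;=\; (v - 2) + 3, \qquad\text{but in general}\qquad (v + 3)\cdot 2 \;\neq\; (v \cdot 2) + 3 .
\]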

We will demonstrate in the following sections that these key ideas of the multi-level transaction model can be easily and precisely captured by refinement of the ASM-based transaction handler in [10]. Since to execute a step a component ASM computes a set of updates (on which the transaction controller TaCtl can speculate for lock handling etc.), it suffices to incorporate partial updates (as handled in [34]) into the model developed in [10]. For the first idea of the multi-level transaction model we exploit the subsumption relation between locations defined in [34]: a location l subsumes a location l' iff in all states S the value of l, i.e. $\mathit{val}_S(l)$, uniquely determines the value of l', i.e. $\mathit{val}_S(l')$. For instance, the value of a page uniquely determines the values of the records in it, but also a tree value determines the values of its subtrees and leaves. The notion of subsumption offers a simple realization of the concept of temporary locks: temporary locks are needed on all subsuming locations.
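Spelled out with the evaluation notation $\mathit{val}_S$ used below (our formalisation of the cited definition), subsumption reads:

\[
l \text{ subsumes } l' \quad\text{iff}\quad \forall S_1, S_2:\;\; \mathit{val}_{S_1}(l) = \mathit{val}_{S_2}(l) \;\Rightarrow\; \mathit{val}_{S_1}(l') = \mathit{val}_{S_2}(l') .
\]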

The second idea of compatible operations can be captured by introducing particular operation-dependent locks, which fine-tune the exclusive write locks. Some of these operation-locks may be compatible with each other, such that different transactions may simultaneously execute operations on the same location. Naturally, this is only possible with partial updates, each defined by an operator op and an argument v: the new value stored at a location l is obtained by evaluating op(val, v), where val is the current value of l. If operators are compatible in the sense that the final result is independent of the order in which the operators are applied, then several such partial updates can be executed at the same time.
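In these terms, compatibility of two partial updates $(op_1, v_1)$ and $(op_2, v_2)$ of the same location amounts to order independence (our formulation of the condition used below):

\[
op_1\bigl(op_2(x, v_2),\, v_1\bigr) \;=\; op_2\bigl(op_1(x, v_1),\, v_2\bigr) \qquad\text{for all values } x .
\]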

Thus, the refinement of the concurrent ASM in [10] for handling flat transactions affects several aspects:

  • Each component machine resulting from the transaction operator will have to ask for more specific operation-locks and to execute partial updates together with other machines.

  • Each component machine will also have to release temporary locks at the end of each step.

  • In case already the partial updates of M itself are incompatible, i.e. such that they cannot be merged into a single genuine update, the machine should not fire at all; instead, it must be completely Aborted, i.e., all its steps will have to be undone immediately.

  • The LockHandler component requires a more sophisticated condition for granting locks, which takes subsumption into account.

  • The Recovery component will have to be extended to capture the Undoing of partial updates as well, for which inverse operations are required.

  • The DeadlockHandler and Commit components remain unaffected.

While these refinements with partial updates to capture multi-level transactions require only a few changes—which also extend easily to the serializability proof—they also highlight some not so obvious deficiencies in the model of multi-level transactions itself. In [6] it is claimed that each higher-level operation is implemented by lower-level ones. For instance, an update of a record requires reading and writing a page. This is also true for object-oriented or complex value systems. For instance, in [33] it is anticipated that there could be levels for objects, records and pages, such that an operation on an object would require several update operations on records storing parts of the object. However, in the light of partial updates it is the object that subsumes the record. This implies that the definition of level-specific conflict relations [35, 33] with the condition that a high-level conflict must be reflected in a low-level one, but not vice versa, is too specific. It is true for fields, records and pages, but cannot be applied any more, when the higher-level locations subsume the lower-level ones. On the other hand, using subsumption for the definition of levels does not work either, as objects and pages are conceptually different and should not be considered as residing on the same level. To this end the use of subsumption between locations makes the idea behind multi-level transactions much clearer and formally consistent. In particular, the notion of level itself becomes irrelevant in this setting, so in a sense the transaction model formalised in this article can be seen as a moderate generalisation of the multi-level transaction model.

A second strengthening and generalisation of the concept of multi-level transactions realized in our model comes from the observation that in order to undo a partial update inverse operations are not just nice to have, but must exist, because otherwise recoverability cannot be guaranteed. This also shows that a transaction model cannot be treated in isolation without taking recovery into account at the same time.

3 The Transaction Controller and Operator

As explained above, a transaction controller performs the lock handling, the deadlock detection and handling, the recovery mechanism (for partial recovery) and the commit or abortion of single machines—we use Abstract State Machines to describe programs. Thus we define TaCtl as consisting of the five components specified in Sect. 4. We use SmallCaps for rules and italics for functions, sets, predicates, relations.

  • LockHandler

  • DeadlockHandler

  • Recovery

  • Commit

  • Abort

3.1 The Transaction Operator TA(M, TaCtl)

The operator TA transforms the component machines M of any concurrent system $\mathcal{A}$ (in particular an asynchronous, concurrent ASM [11]) into components TA(M, TaCtl) of a concurrent system TA($\mathcal{A}$, TaCtl), where each component runs as a transaction under the control of TaCtl. Thus TA($\mathcal{A}$, TaCtl) is defined as follows. (For notational economy we use the same name TA once to denote an operator applied to a set of component machines and TaCtl, once to denote an operator applied to single component machines and TaCtl. From the context it is always clear which one we are talking about.)
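A plausible way to write this definition (the set-builder notation is ours; $\mathcal{A}$ denotes the set of component machines) is:

\[
TA(\mathcal{A}, TaCtl) \;=\; \{\, TA(M, TaCtl) \mid M \in \mathcal{A} \,\} \;\cup\; \{\, TaCtl \,\} .
\]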

It remains to explain the definition of TA(M, TaCtl) given below. TaCtl keeps a dynamic set of those machines M whose runs it currently has to supervise. This is to guarantee that M operates in a transactional manner until it has terminated its transactional behavior (so that TaCtl can Commit it). (In this paper we deliberately keep the termination criterion abstract so that it can be refined in different ways for different transaction instances.) To turn the behavior of a machine M into a transactional one, first of all M has to register itself with the controller TaCtl, i.e., to be inserted into the set of transactions currently to be handled. For Undoing some steps M made in the given transactional run as part of a recovery, a last-in first-out queue is needed, which for each step of M keeps track of the newly requested locks and of the recovery updates needed to Restore the values of the locations changed in this step. When M enters the set of transactions, this queue has to be initialized (to the empty queue).

The crucial transactional feature is that each non-private (i.e. shared or monitored or output) location (see [12, Ch.2.2.3] for the classification of locations and functions) a machine M needs to read or write for performing a step has to be locked for this purpose; M tries to obtain such locks by calling the LockHandler. In case no new locks are needed by M in its current state, or the LockHandler has granted the requested locks, M canGo, i.e. it tries to perform its next step: if it cannot fire (due to an inconsistency of the update set—we borrow the name from CoreASM [15]—computed from the assignment and the partial update instructions of M, see below) it calls the Abort component. If canGo holds, we require M to perform its step together with one step of all machines N that canGo simultaneously with M and share some locations to be updated with M (possibly via some compatible update operations on those locations, see below). (This view of concurrency is an instance of the general definition of concurrent ASMs provided in [11].) This means to Aggregate the (below called genuine) updates M yields in its current state together with the partial updates of M and with the genuine and partial updates of all these machines N. In addition a RecoveryRecord component has to Record for each of these machines N, in its queue, the newly obtained locks together with the recovery updates needed should it become necessary to Undo the updates contributed by N to this Aggregate-step. Then M continues its transactional behavior until it has terminated. In case the LockHandler refuses the requested locks, namely because another machine N currently holds some of these locks, M has to wait for N; in fact it continues its transactional behavior by calling again the LockHandler for the needed locks—until the needed locked locations are unlocked, when N's transactional behavior is Commited, whereafter a new request for these locks may be granted. (A refinement, in fact a desirable optimization, consists in replacing such a waiting cycle by suspending M until the needed locks are released. Such a refinement can be obtained in various ways, a simple one consisting in letting M simply stay waiting until the locks become available and refining the LockHandler to only choose requests it can grant, doing nothing otherwise. See Sect. 4.)
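The lock/wait/go/commit flow just described can be illustrated by a small, self-contained toy sketch (in Python; this is not the control state ASM of Fig. 1, all names such as LockTable and run_transaction are ours, and the consistency check with its Abort path is omitted):

    class LockTable:
        def __init__(self):
            self.owner = {}                       # location -> machine currently holding a lock on it

        def request(self, machine, locations):
            if any(self.owner.get(l) not in (None, machine) for l in locations):
                return False                      # refused: some location is locked by another machine
            for l in locations:
                self.owner[l] = machine           # grant all requested locks at once
            return True

        def release_all(self, machine):
            for l in [l for l, m in self.owner.items() if m == machine]:
                del self.owner[l]

    def run_transaction(name, steps, locks, state):
        """steps: list of (locations, update_function); a step is retried while its locks are refused."""
        for locations, update in steps:
            while not locks.request(name, locations):
                pass                              # 'wait': in the controller this means retrying in later steps
            update(state)                         # fire the step (Aggregate and consistency check omitted)
        locks.release_all(name)                   # commit: all locks are released only at the very end

    locks, state = LockTable(), {"x": 0}
    run_transaction("T1", [(["x"], lambda s: s.update(x=s["x"] + 3))], locks, state)
    assert state["x"] == 3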

As a consequence deadlocks may occur, namely when a cycle occurs in the transitive closure of the wait-for relation. To resolve such deadlocks the DeadlockHandler component of TaCtl chooses some machines as victims for a recovery. (To simplify the serializability proof in Sect. 5, and without loss of generality, we define a reaction of machines M to their victimization only when they are in TA-mode, not while waiting for locks. This is to guarantee that no locks are granted to a machine as long as it waits.) After a victimized machine M is recovered by the Recovery component of TaCtl it can exit its recovery mode and continue its transactional behavior.

This explains the following definition of TA(M, TaCtl) as a control state ASM, i.e. an ASM with a top level Finite State Machine control structure. We formulate it by the flowchart diagram of Fig. 1, which has a precise control state ASM semantics (see the definition in [12, Ch.2.2.6]). (The components for the recovery feature are highlighted in the flowchart by a different colouring.) The macros which appear in Fig. 1 are defined in the rest of this section.

Figure 1: TA(M,TaCtl)

3.2 The Macros

The predicate triggering a lock request holds if in the current state of M at least one of two cases happens: either M reads some shared or monitored location which is not yet locked for M for reading or writing, or M writes some shared or output location which is not yet locked for M for the requested write operation. We compute the set of such needed, but not yet locked, locations by a function (whose arguments we omit for layout reasons in Fig. 1).

Whether a lock for a location can be granted to a machine depends on the kind of read or write operation the machine wants to perform on the location.

3.2.1 Updates and partial updates.

In basic ASMs a write operation is denoted by assignment instructions of the form $f(t) := v$, resulting in any given state S in an update of the location $(f, \mathit{val}_S(t))$ by the value $\mathit{val}_S(v)$ [12, pg.29]. Here $\mathit{val}_S(\cdot)$ denotes the evaluation of a term in state S (under a given interpretation of free variables). We call such updates genuine (in [34] they are called exclusive) to distinguish them from partial updates. The reader who is not knowledgeable about ASMs may interpret locations correctly as array variables with variable name f and index $\mathit{val}_S(t)$.

Analogously, we denote write operations that involve partial updates via an operation op by update instructions of the form

$f(t) :=_{op} v$

(we write $:=_{op}$ for a partial update via op), which require an overall update of the location by a value to which this instruction contributes the value $\mathit{val}_S(v)$ via op. A typical example of such operations is parallel counting (used already in [9]) where, say, seven occurrences of a partial update instruction

$x :=_{+} 1$

in a state S are aggregated into a genuine update $x := \mathit{val}_S(x) + 7$. Other examples are tree manipulation by simultaneous updates of independent subtrees or, more generally, term rewriting by simultaneous independent subterm updates, etc., see [33, 34]. Aggregate (which is implemented as a component in CoreASM [15]) specifies how to compute and perform the desired overall update effect, i.e. the genuine update set yielded by the set of all genuine updates and the multiset of all partial updates involving any location l, together with all other higher or lower level location updates the new value of l may depend upon due to an update to be performed at that level by some machine in the considered step.
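A minimal sketch of such an aggregation of compatible partial updates (in Python; the names aggregate and plus are ours, not CoreASM's API):

    from functools import reduce

    def aggregate(current_value, partial_updates):
        """partial_updates: multiset of (op, arg) pairs, assumed pairwise compatible,
        i.e. the result does not depend on the order of application."""
        return reduce(lambda acc, upd: upd[0](acc, upd[1]), partial_updates, current_value)

    plus = lambda x, v: x + v                  # the '+' operation of the parallel counting example
    x = 35
    x = aggregate(x, [(plus, 1)] * 7)          # seven partial updates x :=_+ 1 yield x := 35 + 7
    assert x == 42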

Therefore, a location can be locked for reading, locked for writing via a genuine update, or locked for a partial update using an operation op (an op-lock). We also use temporary locks of a location l. Like a genuine write-lock, such a temporary lock blocks location l for exclusive use by M. However, temporary locks are immediately released at the end of a single step of M. As explained in Section 2, the purpose of such temporary locks is to ensure that an implied write operation on a subsuming location (i.e., a partial update) can be executed, but the lock is not required any more after completion of the step, as other non-conflicting partial updates should not be prohibited.

Even if machine M has locked a location l temporarily or for a partial update operation, it still needs a read lock to be allowed to Read l, because for a partially updated location a different machine could acquire another, compatible operation lock on l whose effect is not controllable by M alone. For this reason partial update operations are defined below to be incompatible with Read and genuine Write.

To CallLockHandler for the locks requested by M in its current state means to insert this request into the LockHandler’s set of lock requests still to be handled. Similarly we let CallCommit(M) resp. CallAbort(M) stand for the insertion of M into a corresponding request set of the Commit resp. Abort component.

Once a machine canGo, because it has acquired all locks needed for its next proper step, it must be checked whether the update set it yields in its current state is consistent, so that it can fire: if this is not the case, M is Aborted, whereby it interrupts its transactional behavior.

Here the update set is defined as the set of updates M yields in state S (see the definition in [12, Table 2.2, pg.74]), once the resulting genuine updates have been computed for all partial updates to be performed by M in S (in CoreASM [15] this computation is done by corresponding plug-ins). If this update set is consistent, Aggregate performs not only the (genuine and partial) updates of M, but also those of any other machine N which shares some to-be-updated location with M and canGo simultaneously with M.

The constraints defined in the next section for granting locks and the consistency condition for update sets guarantee that Aggregate computes and performs a consistent update set.

3.2.2 Remark on notation.

As usual with programming languages, for ASMs we consider (names for) functions, rules, locations, etc., as elements of the universe, for which sets (like the set of transactions currently under control or the set of victims) and relations (like subsumption) can be mathematically defined and used in rules. In accordance with usual linguistic reflection notation we also quantify over such elements, e.g. over the machines N participating in a step, meaning that N stands for an execution of (a step of) the ASM denoted by the logical variable N.

The RecoveryRecord component has to Record for each machine N participating in the step its recovery updates (defined below, where we need the details for the Recovery machine) and the newly obtained locks.

3.2.3 Remark on nondeterminism.

The ASM framework provides two ways to deal with nondeterminism. ‘True’ nondeterminism can be expressed using the choose construct to define machines of the form choose x with φ(x) do R(x)

where R(x) has to be an ASM rule. Nondeterminism can also be modeled ‘from outside’ by using choice functions, say a function f, in machines of the form let x = f(...) in R(x)

where in the view of the transition rules everything is deterministic once a definition of the choice function is given. Using one or the other form of nondeterminism influences the underlying logic for ASMs (see [12, Ch.8.1]).

The locks acquired for a machine as above depend on the chosen value for x, so that when M performs its next step it must have the same value for x to execute R(x). To ‘synchronize’ this choice of x between lock acquisition and rule execution we assume here that nondeterminism in component machines is expressed by choice functions.

4 The Transaction Controller Components

4.1 The Commit Component

A CallCommit(M) by machine M enables the Commit component, which handles one commit request at a time. For this we use the choose operator, so we can leave the order in which the requests are handled refinable by different instantiations of TaCtl.

Commiting M means to Unlock all locations l that are locked by M. (Unlock is called only in states where M holds no temporary lock.) Note that each lock obtained by M remains with M until the end of M’s transactional behavior. Since M performs a CallCommit(M) when it has terminated its transactional computation, nothing more has to be done to Commit M besides deleting M from the set of commit requests and from the set of transactions still to be handled. (We omit clearing the queue, since it is initialized when M is inserted into the set of transactions.)

The lock locations are shared by the Commit, LockHandler and Recovery components, but these components never have the same M simultaneously in their request or victim set, respectively: when M has performed a CallCommit(M), it has terminated its transactional computation and does not participate any more in any lock request or victimization. Furthermore, by definition no M can at the same time issue a lock request (possibly triggering the LockHandler component) and be a victim (possibly triggering the Recovery component).

4.2 The LockHandler Component

As for Commit we use the choose operator also for the LockHandler, to leave the order in which the lock requests are handled refinable by different instantiations of TaCtl.

The strategy we adopted in [10] for lock handling with only genuine updates was to refuse all locks for locations requested by M, if at least one of the following two cases occurs:

  • some of the requested locations is locked by another transactional machine N for writing,

  • some of the requested locations is a location to be written that is locked by another transactional machine N for reading.

In other words, read operations of different machines are compatible and upgrades from read to write locks are possible. In the presence of partial updates, which have to be performed simultaneously by one or more transactional machines, this compatibility relation has to be extended to partial operations, while guaranteeing the consistency of the result of the Aggregate mechanism which binds together shared updates to the same location. We adopt the following constraints defined in [34]:

  • A genuine Write is incompatible with a Read or genuine Write or any partial operation of any other machine.

  • A Read is incompatible with any Write (whether genuine or involving a partial operation op).

  • Two partial operations $op_1$ and $op_2$ are incompatible on a location l if in some state applying $op_1$ to update l first and then $op_2$ yields a different result from first applying $op_2$ and then $op_1$.

However, to guarantee the serializability of transactions in the presence of partial updates of complex data structures, consistency is needed also in case one update concerns a substructure of another update. Therefore we stipulate that a lock request for l by a machine M is refused as long as a location l' which subsumes l is locked by another machine N. The subsumption definition is taken from [34, Def.2.1]: a location l' subsumes l iff in all states the value of l' uniquely determines the value of l (see Section 2).
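The constraints above, including the subsumption rule, can be summarised in a small executable sketch (in Python; the lock representation, the function names and the way commutativity information is passed are our illustrative choices, not the paper's ASM):

    READ, WRITE = "read", "write"                           # genuine read / genuine (exclusive) write

    def compatible(kind1, kind2, commute):
        """kind is READ, WRITE, or an operator name standing for a partial-update lock."""
        if WRITE in (kind1, kind2):
            return False                                    # a genuine Write conflicts with everything
        if READ in (kind1, kind2):
            return kind1 == kind2 == READ                   # Read is only compatible with Read
        return commute(kind1, kind2)                        # two partial operations: need commutativity

    def grantable(machine, location, kind, held_locks, subsumes, commute):
        """held_locks: list of (machine, location, kind); subsumes is the relation of [34, Def.2.1]."""
        for (m, l, k) in held_locks:
            if m == machine:
                continue                                    # a machine never conflicts with itself
            if l == location and not compatible(kind, k, commute):
                return False
            if l != location and subsumes(l, location):
                return False                                # a lock on a subsuming location blocks l
        return True

    plus_commutes = lambda a, b: a == b == "+"              # '+'-locks of different machines may coexist
    no_subsume = lambda a, b: False
    held = [("M1", "x", "+")]
    assert grantable("M2", "x", "+", held, no_subsume, plus_commutes)
    assert not grantable("M2", "x", WRITE, held, no_subsume, plus_commutes)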

To RefuseRequestedLocks it suffices to set the communication interface of M accordingly; this makes the refusal visible to M for each location l and operation op for which the lock was requested.

4.3 The DeadlockHandler Component

A deadlock originates if two machines wait for each other, i.e. are in a wait-for cycle. In other words, a deadlock occurs when for some (not yet victimized) machine M the pair (M, M) is in the transitive (not reflexive) closure of the wait-for relation. In this case the DeadlockHandler selects for recovery a (typically minimal) subset of transactions; they are victimized, and in this mode (control state) they are backtracked until they are recovered. The selection criteria are intrinsically specific for particular transaction controllers, driving a usually rather complex selection algorithm in terms of number of conflict partners, priorities, waiting time, etc. In this paper we leave their specification for TaCtl abstract (read: refinable in different directions) by using the choose operator.
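For concreteness, here is a minimal sketch of detecting such wait-for cycles (in Python; the graph representation of the wait-for relation is our choice):

    def deadlocked(wait_for):
        """wait_for: dict machine -> set of machines it waits for.
        Returns the machines lying on some wait-for cycle (candidates for victimisation)."""
        candidates = set()
        for start in wait_for:
            seen, frontier = set(), set(wait_for.get(start, ()))
            while frontier:                       # compute all machines reachable from start
                m = frontier.pop()
                if m in seen:
                    continue
                seen.add(m)
                frontier |= set(wait_for.get(m, ()))
            if start in seen:                     # start reachable from itself: it lies on a cycle
                candidates.add(start)
        return candidates

    # M1 and M2 wait for each other, M3 only waits for M1: only M1 and M2 are on a cycle.
    assert deadlocked({"M1": {"M2"}, "M2": {"M1"}, "M3": {"M1"}}) == {"M1", "M2"}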

4.4 The Recovery Component

Also for the Recovery component we use the choose operator, to leave the order in which the victims are chosen for recovery refinable by different instantiations of TaCtl. In order to be recovered, a machine M is backtracked by Undo steps until it is no longer a victim, in which case it is deleted from the set of victims, so that by definition it is recovered. This happens at the latest when M’s recovery record has become empty.

To define an Undo step we have to provide the details of the function used above in RecoveryRecord. This function collects for any given machine M and state S first of all the recovery updates by which one can Restore the overwritten values in those locations to which M in S writes via an assignment instruction; in [10], where we considered only genuine updates, a function collecting exactly these recovery updates was already used.

In addition, for each partial update instruction to be Aggregated, the function collects the information needed to compute the ‘inverse’ update, information that is needed when the controller has to Undo at the concerned location the effect of that partial update by M (but not of simultaneous partial updates concerning the same location by other machines). This information consists in an inverse operation together with the appropriate value v for its second argument, whereas the first argument is provided only when the Undo takes place. For the approach to ASMs with partial updates defined in [34] and adopted here, one has to postulate that such inverse operations and values are defined for all partial update operations and satisfy the following constraint for partial update instructions:

  • Inverse Operation Postulate
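One possible formal reading of this postulate (the symbol $op^{-1}$ for the inverse operation and the exact shape of the condition are our assumption) is:

\[
op^{-1}\bigl(op(x, v),\, v\bigr) \;=\; x \qquad\text{for all values } x ,
\]

for every partial update operation op and contributed value v. Together with the compatibility (commutativity) of the simultaneously applied partial operations, this suffices for the independence property of Undo discussed below.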

This postulate can be justified by the requirement that any transaction should be recoverable [33]. If recoverability cannot be guaranteed, a transaction controller must (in principle) be able to undo updates that were issued long ago, which would be completely infeasible for any real system, where once a transaction has committed, it can leave the system and none of its updates can be undone any more. As partial update operations $op_1$ and $op_2$ from two different machines $M_1$ and $M_2$ could be executed simultaneously, for each of these operations it must be foreseen that it may be undone, even if the issuing transaction of the other operation has already committed—with multi-level transactions this situation has become possible. As the original value at the location at the time of the partial update using $op_1$ is no longer available—anyway, it may have been updated many times by other compatible partial updates—$M_1$ must be able to undo its part of the update independently from all other updates including Undone ones; that is to say, after Undoing $op_1$ the resulting value at the location must be just the one that would have resulted if only all other (not yet Undone) partial updates had been executed. This is guaranteed by the inverse operation postulate.

In the original work on multi-level transactions, including [6], recovery is not handled at all, which leads to the misleading conclusion that commutativity of high-level operations—those that can be defined by partial updates—is sufficient for obtaining increased transaction throughput by means of additionally permitted schedules. However, commutativity (better called operator compatibility, see [34]) has to be complemented by the inverse operation postulate to ensure recoverability. Inverse operators are claimed in the MLR recovery system [30], but no satisfactory justification was given.

There may be more than one update instruction M performs for the same location, so that the corresponding inverse updates have to be Aggregated with the recovery updates used by Restore.
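To fix intuitions, here is a small self-contained sketch (in Python; the data layout of the LIFO record and all names are ours) of how recovery records can combine Restore values for genuine updates with inverse operations for partial updates:

    def record_step(record_stack, genuine_old_values, inverse_partial_ops):
        """genuine_old_values: {location: previous value} to Restore;
        inverse_partial_ops: list of (location, inverse_op, arg) for the partial updates of this step."""
        record_stack.append((genuine_old_values, inverse_partial_ops))

    def undo_last_step(record_stack, state):
        old_values, inverse_ops = record_stack.pop()         # last-in, first-out
        state.update(old_values)                             # Restore the genuinely overwritten values
        for loc, inv_op, arg in inverse_ops:
            state[loc] = inv_op(state[loc], arg)             # remove only this machine's own contribution

    state = {"x": 10}
    stack = []
    record_step(stack, {}, [("x", lambda cur, v: cur - v, 3)])   # inverse of the partial update x :=_+ 3
    state["x"] += 3                                              # the partial update itself
    undo_last_step(stack, state)
    assert state["x"] == 10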

The inverse operation postulate cannot guarantee that the inverse operations commute with each other in general. However, it can be guaranteed that commutativity holds on the values to which the inverse operations are applied in Undo steps. For this let $op_i^{-1}$ be inverse to $op_i$ for $i = 1, 2$, such that both operations are compatible and both inverse operations have to execute simultaneously on a location l. That is, if v is the actual value of l in the current state, we need to show $op_1^{-1}(op_2^{-1}(v, v_2), v_1) = op_2^{-1}(op_1^{-1}(v, v_1), v_2)$. As these are Undo operations, we can assume that $op_i$ with value $v_i$ for $i = 1, 2$ have been executed on some previous value of location l. Thus, due to commutativity we must have $v = op_2(op_1(w, v_1), v_2) = op_1(op_2(w, v_2), v_1)$ for some value w. From this we get the desired equality, as the following calculation shows.
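Assuming the postulate in the form $op_i^{-1}(op_i(x, v_i), v_i) = x$ for all $x$ (the reading sketched above), the calculation runs as follows:

\begin{align*}
op_1^{-1}(op_2^{-1}(v, v_2), v_1)
  &= op_1^{-1}\bigl(op_2^{-1}(op_2(op_1(w, v_1), v_2),\, v_2),\, v_1\bigr)
   = op_1^{-1}(op_1(w, v_1),\, v_1) = w,\\
op_2^{-1}(op_1^{-1}(v, v_1), v_2)
  &= op_2^{-1}\bigl(op_1^{-1}(op_1(op_2(w, v_2), v_1),\, v_1),\, v_2\bigr)
   = op_2^{-1}(op_2(w, v_2),\, v_2) = w,
\end{align*}

so both orders of the two Undo steps yield the same value $w$.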

Note that in our description of the DeadlockHandler and the (partial) Recovery we deliberately left the strategy for victim selection and Undo abstract, so fairness considerations will have to be discussed elsewhere. It is clear that if always the same victim is selected for partial recovery, the same deadlocks may be created again and again. However, it is well known that fairness can be achieved by choosing an appropriate victim selection strategy.

4.5 The Abort Component

The Abort component can be succinctly defined as a turbo ASM [12, Ch.4.1]:

We use the turbo ASM construct only here, and we do so for notational convenience, to avoid tedious programming of an iteration. We do not use it to form the component ASMs which go into TA($\mathcal{A}$, TaCtl).

5 Correctness Theorem

In this section we show the desired correctness property: if all monitored or shared locations of any component are output or controlled locations of some other component and all output locations of any component are monitored or shared locations of some other component (closed system assumption; this assumption means that the environment is assumed to be one of the component machines), each run of TA($\mathcal{A}$, TaCtl) is equivalent to a serialization of the terminating transaction runs, namely the $M_1$-run followed by the $M_2$-run etc., where $M_i$ is the $i$-th machine of $\mathcal{A}$ which performs a commit in the run. To simplify the exposition (i.e. the formulation of statement and proof of the theorem) we only consider machine steps which take place under the transaction control; in other words, we abstract from any step M makes before being Inserted into or after being Deleted from the set of machines which currently run under the control of TaCtl.

First of all we have to make precise what a serial multi-agent ASM run is and what equivalence of runs means in the general multi-agent ASM framework.

5.1 Definition of run equivalence

Let $S_0, S_1, S_2, \ldots$ be a (finite or infinite) run of TA($\mathcal{A}$, TaCtl). In general we may assume that TaCtl runs forever, whereas each machine running as a transaction will be Commited or Aborted at some time—once Commited it will only change values of non-shared and non-output locations. (It is possible that one ASM enters the system several times as a transaction controlled by TaCtl. However, in this case each of these registrations will be counted as a separate transaction, i.e. as different ASMs in $\mathcal{A}$.) To simplify the proof, but without loss of generality, we assume that each update concerning an Aborted machine is eliminated from the run. For $i \geq 0$ let $\Delta_i$ denote the unique set of genuine updates resp. $\dot\Delta_i$ the multiset of partial updates leading to an Aggregated consistent set of updates defining the transition from $S_i$ to $S_{i+1}$. By definition of TA($\mathcal{A}$, TaCtl), each $\Delta_i$ resp. $\dot\Delta_i$ is the union of the corresponding sets resp. multisets (we indicate multiset operations by an upper index m) of the agents executing the components TA(M, TaCtl) resp. TaCtl:
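In symbols (the notation $\Delta_i(M)$, $\dot\Delta_i(M)$ for the contributions of the single agents is ours, following the conventions introduced above):

\[
\Delta_i \;=\; \bigcup_{M} \Delta_i(M) \;\cup\; \Delta_i(TaCtl),
\qquad
\dot\Delta_i \;=\; {\bigcup}^{m}_{M}\, \dot\Delta_i(M) \;\cup^{m}\; \dot\Delta_i(TaCtl) .
\]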

$\Delta_i(M)$ contains the genuine and $\dot\Delta_i(M)$ the partial updates defined by the machine TA(M, TaCtl) in state $S_i$; by a slight abuse of language we speak about steps and updates of M also when they really are done by TA(M, TaCtl). Mainly this concerns the transitions between the control states of Fig. 1, which are performed during the run of M under the control of the transaction controller TaCtl. When we want to name an original update of M (not one of the updates added by the transformation or by the Record component) we call it a proper M-update. $\Delta_i$(TaCtl) resp. $\dot\Delta_i$(TaCtl) contain the genuine resp. partial updates by the transaction controller in this state. The sequence