Abstract
We present a tractable method for synthesizing arbitrarily large concurrent programs, for a shared-memory model with common hardware-available primitives such as atomic registers, compare-and-swap, and load-linked/store-conditional. The programs we synthesize are dynamic: new processes can be created and added at run time, and so our programs are not finite-state, in general. Nevertheless, we successfully exploit automatic synthesis and model-checking methods based on propositional temporal logic. Our method is algorithmically efficient, with complexity polynomial in the number of component processes (of the program) that are "alive" at any time. Our method does not explicitly construct the automata-theoretic product of all processes that are alive, thereby avoiding state explosion. Instead, for each pair of processes that interact, our method constructs an automata-theoretic product (pair-machine) which embodies all the possible interactions of these two processes. From each pair-machine, we can synthesize a correct pair-program which coordinates the two involved processes as needed. We allow such pair-programs to be added dynamically at run time. They are then "composed conjunctively" with the currently alive pair-programs to re-synthesize the program as it results after addition of the new pair-program. We are thus able to add new behaviors, which result in new properties being satisfied, at run time. This "incremental composition" step has complexity independent of the total number of processes: it requires only the mechanical analysis of the two processes in the pair-program and their immediate neighbors, i.e., the other processes with which they interact directly. We establish a "large model" theorem which shows that the synthesized large program inherits correctness properties from the pair-programs.
Synthesis of Large Dynamic Concurrent Programs from Dynamic Specifications
Paul C. Attie
Department of Computer Science
American University of Beirut
and
Center for Advanced Mathematical Sciences
American University of Beirut
paul.attie@aub.edu.lb
July 6, 2019
1 Introduction
We exhibit a method of mechanically synthesizing a concurrent program consisting of a large, and dynamically varying, number of sequential processes executing in parallel. Our programs operate in shared memory, using commonly available hardware primitives such as read and write operations on atomic registers, compare-and-swap, and load-linked/store-conditional. Even though our synthesis method is largely mechanical, we only require that each process have a finite number of actions, and that the data referred to in action guards be finite. Underlying data that processes operate on, and which does not affect action guards, can be infinite. Also, since the number of processes can increase without limit, the synthesized program as a whole is not finite-state. In addition, our method is computationally efficient: it does not explicitly construct the automata-theoretic product of a large number of processes (e.g., all processes that are "alive" at some point) and is therefore not susceptible to the state-explosion problem, i.e., the exponential growth of the number of global states with the number of processes, which is widely acknowledged to be the primary impediment to large-scale application of mechanical verification methods.
Rather than build a global product, our method constructs the product of small numbers of sequential processes, and in particular, the product of each pair of processes that interact, thereby avoiding the exponential complexity in the number of processes that are "alive" at any time. The product of each pair of interacting processes, or pair-machine, is a Kripke structure which embodies the interaction of the two processes. Each pair-machine can be constructed manually and then efficiently model-checked (since it is small) to verify pair-properties: behavioral properties of the interaction of the two processes, when viewed in isolation from the remaining processes. Alternatively, the pair-properties can be specified first, and the pair-machine automatically synthesized from the pair-properties by the use of mechanical synthesis methods such as [EC82, MW84, KV97]. Again this is efficient, since the pair-machines are small.
Corresponding to each pair-machine is a pair-program, a syntactic realization of the pair-machine, which generates the pair-machine as its global-state transition diagram. Finally, we syntactically compose all of the pair-programs. This composition has a conjunctive nature: a process can make a transition iff that transition is permitted by all of the pair-programs in which it participates. We allow such pair-programs to be added dynamically at run time. They are then composed with the currently alive pair-programs to re-synthesize the program as it results after addition of the new pair-program. We are thus able to add new behaviors, which result in new properties being satisfied, at run time. The use of pairwise composition greatly facilitates this, since the addition of a new pair-program does not disturb the correctness properties satisfied by the currently present pair-programs. We establish a "large model" theorem which shows that the synthesized large program inherits correctness properties from the pair-programs.
Since the pair-machines are small, and since the composition step operates on syntax, i.e., the pair-programs themselves, and not their state-transition diagrams, our method is computationally efficient. In particular, the dynamic addition of a single pair-program requires a mechanical synthesis or model-checking step whose complexity is independent of the total number of alive processes at the time, and depends only on checking the products of the two processes involved in the pair-program, together with some of their neighbors, i.e., the processes with which they immediately interact. Our method thus overcomes the severe limitations previously imposed by state explosion on the applicability of automatic synthesis methods, and extends these methods to the new domain of dynamic programs.
Our method can generate systems under arbitrary process interconnection schemes, e.g., fully connected, ring, star. In our model of parallel computation, two processes are interconnected if and only if (1) one process can inspect the local state of the other process, or (2) both processes read and/or write a common variable, or both.
The method requires the pair-programs to satisfy certain technical assumptions, thus it is not completely general. Nevertheless, it is applicable in many interesting cases. We illustrate our method by synthesizing a ring-based two-phase commit protocol. Using the large model theorem, we show that correctness properties that two processes of the ring satisfy when interacting in isolation carry over when those processes are part of the ring. We then easily construct a correctness proof for the ring using these properties. We note that the ring can contain an arbitrarily large number of processes, i.e., we really synthesize a family of rings, one for each natural number.
A crucial aspect of our method is its soundness: which correctness properties can be established for our synthesized programs? We establish a "large model" theorem which shows that the synthesized program inherits all of the correctness properties of the pair-programs, i.e., the pair-properties. We express our pair-properties in the branching-time temporal logic of [GL94] minus the next-time operator. In particular, propositional invariants and some temporal leads-to properties of any pair-program also hold of the synthesized program. (A temporal leads-to property has the following form: if condition 1 holds now, then condition 2 eventually holds. The logic can express temporal leads-to if condition 1 is purely propositional.) In addition, we can use a suitable deductive system to combine the pair-properties to deduce correctness properties of the large program which are not directly expressible in pairwise fashion.
This paper extends our previous work [AE98] on the synthesis of large concurrent programs in four important directions:

It eliminates the requirement that all pair-programs be isomorphic to each other, which in effect constrains the synthesized program to contain only one type of interaction amongst its component processes. In our method, every process can be non-isomorphic with every other process, and our method remains computationally efficient.

It extends the set of correctness properties that are preserved from propositional invariants and propositional temporal leads-to properties (i.e., leads-to properties where the conditions are purely propositional) to formulae that can contain arbitrary nesting of temporal modalities.

It eliminates the requirement that the number of processes of the synthesized program be fixed: our previous work synthesized an infinite family of programs, each of which contains a large, but fixed, number of processes. By contrast, the current method produces a single program, in which the number of processes can dynamically increase at runtime.

It produces programs that do not require a large grain of atomicity: in [Att99, AE98], each process needed to atomically inspect the state of all of its neighbors (i.e., all processes with which it is composed in some pair-program) in a single transition. By contrast, the current method produces programs that operate using only hardware-available primitives for interprocess communication and synchronization.
To demonstrate the utility of our method, we apply it to synthesize a two-phase commit protocol and a replicated data service.
Related work.
Previous synthesis methods [AM94, DWT90, EC82, KMTV00, KV97, MW84, PR89a, PR89b] all rely on some form of exhaustive state-space search, and thus suffer from the state-explosion problem: synthesizing a concurrent program consisting of sequential processes, each with local states, requires building the global state transition diagram of size . There are a number of methods proposed for verifying correctness properties of an infinite family of finite-state processes [APR01, CGB86, EK00, EN96, PRZ01, SG92]. All of these deal with an infinite family of concurrent programs, where each program consists of a possibly large, but fixed, set of processes. No method to date can verify or synthesize a single concurrent program in which processes can be dynamically created at run time. Furthermore, all methods to date that deal with large concurrent programs, apart from our own previous work [Att99, AE98], make the "parametrized system" assumption: the processes can be partitioned into a small number of "equivalence classes," within each of which all processes are isomorphic. Hence, in eliminating these two significant restrictions, our method is a significant improvement over the previous literature, and moves automated synthesis methods closer to the realm of practical distributed algorithms. We illustrate this point by using our method to synthesize a replicated data service based on the algorithms of [FGL99, LLSG92]. Our algorithm is actually more flexible, since it permits the dynamic addition of more replicas at run time. Some synthesis methods in the literature synthesize "open systems," or "reactive modules," which interact with an environment and are required to satisfy a specification regardless of the environment's behavior. The main argument for open-systems synthesis is that open systems can deal with any "input" which the environment presents. We can achieve this effect by using the "exists next-time" modality of the temporal logic CTL [EC82, Eme90].
We illustrate this in our replicated data service example, where we specify that a client can submit operations at any time.
The rest of the paper is as follows. Section 2 presents our model of concurrent computation. Section 3 discusses temporal logic and fairness. Section 4 presents a restricted version of the method, which is only applicable to static concurrent programs: those with a fixed set of processes. This approach simplifies the development and exposition of our method. Section 5 establishes the soundness of the synthesis method for static programs. Section 6 presents the two-phase commit example, which can be treated with the restricted method. Section 7 presents the general synthesis method, which can produce dynamic concurrent programs. Section 8 shows that the general method is sound. Section 9 outlines how the synthesized programs can be implemented using atomic registers. In Section 10 we use our method to synthesize an eventually-serializable replicated data service. Section 11 discusses further work and concludes.
2 Model of Concurrent Computation
We assume the existence of a possibly infinite, universal set Pids of unique process indices. A concurrent program consists of a finite, unbounded, and possibly varying number of sequential processes running in parallel, i.e., where execute in parallel and are the processes that have been “created” so far. For technical convenience, we do not allow processes to be “destroyed” in our model. Process destruction can be easily emulated by having a process enter a “sink” state, from which it has no enabled actions.
With every process , we associate a single, unique index, namely . Two processes are similar if and only if one can be obtained from the other by swapping their indices. Intuitively, this corresponds to concurrent algorithms where a single “generic” indexed piece of code gives the code body for all processes.
As stated above, we compose a dynamically varying number of pair-programs to synthesize the overall program. To define the syntax and semantics of the pair-programs, we use the synchronization skeleton model of [EC82]. The synchronization skeleton of a process is a state machine where each state represents a region of code that performs some sequential computation and each arc represents a conditional transition (between different regions of sequential code) used to enforce synchronization constraints. For example, a node labeled may represent the critical section of . While in , may increment a single variable, or it may perform an extensive series of updates on a large database. In general, the internal structure and intended application of the regions of sequential code are unspecified in the synchronization skeleton. The abstraction to synchronization skeletons thus eliminates all steps of the sequential computation from consideration.
Formally, the synchronization skeleton of each process is a directed graph where each node is a unique local state of , and each arc has a label of the form , where each is a guarded command [Dij76] and is guarded-command "disjunction," i.e., the arc is equivalent to arcs between the same pair of nodes, each labeled with one of the . (Here denotes the integers from to inclusive.) Let denote the synchronization skeleton of process with all the arc labels removed.
Roughly, the operational semantics of is that if one of the evaluates to true, then the corresponding body can be executed. If none of the evaluates to true, then the command "blocks," i.e., waits until one of the holds. (This interpretation was proposed by [Dij82].) Each node must have at least one outgoing arc, i.e., a skeleton contains no "dead ends," and two nodes are connected by at most one arc in each direction. A (global) state is a tuple of the form where each is the current local state of , and is a list giving the current values of all the shared variables (we assume these are ordered in a fixed way, so that specifies a unique value for each shared variable). A guard is a predicate on states, and a body is a parallel assignment statement that updates the values of the shared variables. If is omitted from a command, it is interpreted as , and we write the command as . If is omitted, the shared variables are unaltered, and we write the command as .
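The blocking semantics of a guarded-command "disjunction" can be sketched concretely. The following is a minimal illustration in Python (a hypothetical encoding, not the paper's notation): each branch is a guard/body pair, any branch whose guard holds may fire, and if no guard holds the command offers no successor, i.e., it blocks.

```python
# Blocking semantics of B_1 -> A_1 (+) ... (+) B_n -> A_n (sketch):
# any branch whose guard holds may fire; if none holds, the command blocks.

def fire(branches, state):
    """branches: list of (guard, body) pairs, where guard: state -> bool
    and body: state -> state. Returns all possible successor states;
    an empty list means the command is blocked in this state."""
    return [body(state) for guard, body in branches if guard(state)]

# Example command: increment x while x < 2, otherwise block.
branches = [(lambda s: s["x"] < 2, lambda s: {"x": s["x"] + 1})]
```

Here `fire(branches, {"x": 0})` yields one successor, while `fire(branches, {"x": 2})` yields none, modeling the "wait until some guard holds" interpretation.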
We model parallelism in the usual way by the nondeterministic interleaving of the "atomic" transitions of the individual synchronization skeletons of the processes . Hence, at each step of the computation, some process with an "enabled" arc is nondeterministically selected to be executed next. Assume that the current state is and that contains an arc from to labeled by the command . If is true in , then a permissible next state is where is the list of updated values for the shared variables produced by executing in state . The arc from to is said to be enabled in state . An arc that is not enabled is disabled, or blocked. A (computation) path is any sequence of states where each successive pair of states is related by the above next-state relation. If the number of processes is fixed, then the concurrent program can be written as , where is fixed. In this case, we also specify a set of global states in which execution is permitted to start. These are the initial states. The program is then written as . An initialized (computation) path is a computation path whose first state is an initial state. A state is reachable iff it lies along some initialized path.
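The interleaved operational semantics can be sketched as follows (an illustrative Python encoding under assumed names, not the paper's formal notation): each process has a skeleton whose arcs carry a guard on the global state and a body updating the shared variables, and at each step one enabled arc of one process may fire.

```python
# Interleaving semantics (sketch): at each step, one enabled arc of one
# process is nondeterministically chosen; we enumerate all enabled moves.

def enabled_moves(locals_, shared, skeletons):
    """skeletons: pid -> list of (src, dst, guard, body).
    Yields (pid, new_locals, new_shared) for every enabled arc."""
    for pid, arcs in skeletons.items():
        for src, dst, guard, body in arcs:
            if src == locals_[pid] and guard(locals_, shared):
                nl = dict(locals_)
                nl[pid] = dst
                yield pid, nl, body(shared)

# Toy mutual exclusion between two processes: enter the critical section
# C only if the neighbor is not in C (a hypothetical guard).
def skel(me, other):
    return [("N", "C", lambda ls, sh: ls[other] != "C", lambda sh: sh),
            ("C", "N", lambda ls, sh: True, lambda sh: sh)]

skeletons = {1: skel(1, 2), 2: skel(2, 1)}
```

From the state where both processes are in N, either may enter C; once process 1 is in C, the only enabled arc of process 2 entering C is blocked, so only process 1 can move (by exiting C).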
3 Temporal Logic and Fairness
is a propositional branching-time temporal logic [Eme90] whose formulae are built up from atomic propositions, propositional connectives, the universal and existential path quantifiers, and the linear-time modalities next-time (by process ) and strong until . The sublogic [GL94] is the "universal fragment" of : it results from by restricting negation to propositions and eliminating the existential path quantifier . The sublogic [EC82] results from restricting so that every linear-time modality is paired with a path quantifier, and vice versa. The sublogic [GL94] results from restricting in the same way. The linear-time temporal logic PTL [MW84] results from removing the path quantifiers from .
We have the following syntax for . We inductively define a class of state formulae (true or false of states) using rules (S1)–(S3) below and a class of path formulae (true or false of paths) using rules (P1)–(P3) below:

The constants and are state formulae. is a state formula for any atomic proposition .

If are state formulae, then so are , .

If is a path formula, then is a state formula.

Each state formula is also a path formula;

If are path formulae, then so are , .

If are path formulae, then so are , .
The linear-time temporal logic PTL [MW84] consists of the set of path formulae generated by rules (S1) and (P1)–(P3). We also introduce some additional modalities as abbreviations: (eventually) for , (always) for , (weak until) for , (infinitely often) for , and (eventually always) for .
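The abbreviations above are the standard derived linear-time modalities; since their symbols were lost in typesetting, we record the usual definitions here (writing $\mathsf{U}$ for strong until):

```latex
\begin{align*}
  \mathsf{F}\,p        &\equiv \mathit{true}\;\mathsf{U}\;p            && \text{(eventually)}\\
  \mathsf{G}\,p        &\equiv \neg\mathsf{F}\,\neg p                  && \text{(always)}\\
  p\;\mathsf{W}\;q     &\equiv (p\;\mathsf{U}\;q) \lor \mathsf{G}\,p   && \text{(weak until)}\\
  \overset{\infty}{\mathsf{F}}\,p &\equiv \mathsf{G}\,\mathsf{F}\,p    && \text{(infinitely often)}\\
  \overset{\infty}{\mathsf{G}}\,p &\equiv \mathsf{F}\,\mathsf{G}\,p    && \text{(eventually always)}
\end{align*}
```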
Likewise, we have the following syntax for .

The constants and are state formulae. and are state formulae for any atomic proposition .

If are state formulae, then so are , .

If is a path formula, then is a state formula.

Each state formula is also a path formula;

If are path formulae, then so are , .

If are path formulae, then so are , , and .
The logic [GL94] is obtained by replacing rules (S3),(P1)–(P3) by (S3’):

If are state formulae, then so are , , and .
The set of state formulae generated by rules (S1)–(S3) and (P0) forms . The logic is the logic without the modality. We define the logic to be the logic without the modality, and the logic to be without the modality, and the logic to be where the atomic propositions are drawn only from .
Formally, we define the semantics of formulae with respect to a structure consisting of

, a countable set of states. Each state is a mapping from the set of atomic propositions into , and

, where is a binary relation on giving the transitions of process .
Here , where is the set of atomic propositions that “belong” to process . Other processes can read propositions in , but only process can modify these propositions (which collectively define the local state of process ).
A path is a sequence of states such that , and a fullpath is a maximal path. A fullpath is infinite unless for some there is no such that . We use the convention (1) that denotes a fullpath and (2) that denotes the suffix of , provided , where , the length of , is when is infinite and when is finite and of the form ; otherwise is undefined. We also use the usual notation to indicate truth in a structure: (respectively ) means that is true in structure at state (respectively of fullpath ). In addition, we use to mean , where is a set of states. We define inductively:

and . iff . iff .

iff and
iff or 
iff for every fullpath in :

iff

iff and
iff or 
iff is defined and and
iff there exists such that
and for all :
iff for all
if for all , then
When the structure is understood from context, it may be omitted (e.g., is written as ). Since the other logics are all sublogics of , the above definition provides semantics for them as well. We refer the reader to [Eme90] for details in general, and to [GL94] for details of .
3.1 Fairness
To guarantee liveness properties of the synthesized program, we use a form of weak fairness. Fairness is usually specified as a linear-time logic (i.e., PTL) formula , and a fullpath is fair iff it satisfies . To state correctness properties under the assumption of fairness, we relativize satisfaction () so that only fair fullpaths are considered. The resulting notion of satisfaction, , is defined by [EL87] as follows:

iff for every fair fullpath in :
Effectively, path quantification is only over the paths that satisfy .
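As a concrete sanity check, weak fairness on an ultimately periodic fullpath can be decided mechanically. The sketch below uses the standard notion (a process continuously enabled from some point on must eventually execute), since the paper's exact fairness formula was lost in extraction; the path representation is hypothetical.

```python
# Weak fairness on an ultimately periodic fullpath, represented by the
# cycle that repeats forever. Each cycle element is
# (enabled_pids, executing_pid).

def weakly_fair(cycle, pids):
    """Returns False iff some process is enabled throughout the cycle
    (hence continuously enabled from some point on) but never executes."""
    for i in pids:
        always_enabled = all(i in en for en, _ in cycle)
        ever_executes = any(ex == i for _, ex in cycle)
        if always_enabled and not ever_executes:
            return False  # process i is starved despite being enabled
    return True
```

A cycle in which process 2 is always enabled but only process 1 ever executes is unfair; alternating execution of both processes is fair.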
4 Synthesis of Static Concurrent Programs
To simplify the development and exposition of our method, we first present a restricted case, where we synthesize static concurrent programs, i.e., those with a fixed set of processes. We extend the method to dynamic concurrent programs in Section 7 below.
As stated earlier, our aim is to synthesize a large concurrent program without explicitly generating its global state transition diagram, and hence without incurring time and space complexity exponential in the number of component processes of . We achieve this by breaking the synthesis problem down into two steps:

For every pair of processes in that interact directly, synthesize a pair-program that describes their interaction.

Combine all the pair-programs to produce .
When we say and interact directly, we mean that each process can read the other process's atomic propositions (which, recall, encode that process's local state), and that they have a set of shared variables that they both read and write. We define the interconnection relation as follows: iff and interact directly, and is an formula specifying this interaction. In the sequel we let denote the specification associated with , and we say that is the domain of . We introduce the "spatial modality" , which quantifies over all pairs such that and are related by . Thus, is equivalent to . We stipulate that is "irreflexive," that is, for all , and that every process interacts directly with at least one other process: . Furthermore, for any pair of process indices , contains at most one pair such that and . In the sequel, we say that and are neighbors when or , for some . We shall sometimes abuse notation and write (or ) for . We also introduce the following abbreviations: denotes the set ; and denotes the set . Since the interconnection relation embodies a complete specification, we shall refer to a program that has been synthesized from as an program, and to its component processes as processes.
Since our focus in this article is on avoiding state explosion, we shall not explicitly address step 1 of the synthesis method outlined above. Any method for deriving concurrent programs from temporal logic specifications can be used to generate the required pair-programs, e.g., the synthesis method of [EC82]. Since a pair-program has only states (where is the size of each sequential process), the problem of deriving a pair-program from a specification is considerably easier than that of deriving an program from the specification. Hence, the contribution of this article, namely the second step above, is to reduce the more difficult problem (deriving the program) to the easier problem (deriving the pair-programs). We proceed as follows.
For the sake of argument, let us first assume that all the pair-programs
are actually isomorphic to each other.
Let . We denote the pair-program for processes and by
, where is the set of initial states, is
the synchronization skeleton for process in this pairprogram, and
is the synchronization skeleton for process .
We take
and generalize it in a natural way to an program.
We show that our generalization preserves a large class of correctness
properties. Roughly the idea is as follows. Consider first the
generalization to three pairwise interconnected processes ,
i.e., . (Note the abuse of notation: we have omitted the formulae.)
With respect to process , the
proper interaction (i.e., the interaction required to satisfy the
specification) between process and process is captured by the
synchronization commands that label the arcs of .
Likewise, the proper interaction between process and process
is captured by the arc labels of . Therefore, in the
three-process program consisting of processes executing
concurrently, (and where process is interconnected to both process
and process ), the proper interaction for process with
processes and is captured as follows: when process
traverses an arc, the synchronization command which labels that arc
in is executed “simultaneously” with the synchronization
command which labels the corresponding arc in . For
example, taking as our specification the mutual exclusion problem, if
executes the mutual exclusion protocol with respect to both
and , then, when enters its critical section, both
and must be outside their own critical sections.
Based on the above reasoning, we determine that the synchronization
skeleton for process in the aforementioned three-process program
(call it ) has the same basic graph structure as
and , and an arc label in is a “composition” of
the labels of the corresponding arcs in and .
In addition, the initial states of the three-process
program are exactly those states that “project” onto initial states
of all three pair-programs (, , and ).
Generalizing the above to the case of an arbitrary interconnection relation , we see that the skeleton for process in the program (call it ) has the same basic graph structure as , and a transition label in is a "composition" of the labels of the corresponding transitions in , where , i.e., processes are all the neighbors of process . Likewise, the set of initial states of the program is exactly those states all of whose "projections" onto all the pairs in give initial states of the corresponding pair-program.
We now note that the above discussion does not use in any essential way the assumption that pair-programs are isomorphic to each other. In fact, the above argument can still be made if pair-programs are not isomorphic, provided that they induce the same local structure on all common processes. That is, for pair-programs and , we require that , where result from removing all arc labels from , respectively. Also, the initial state sets of all the pair-programs must be such that there is at least one state that projects onto some initial state of every pair-program (and hence the initial state set of the program will be nonempty). We assume, in the sequel, that these conditions hold. Also, all quoted results from [AE98] have been reverified to hold in our setting, i.e., when the similarity assumptions of [AE98] are dropped.
Before formally defining our synthesis method, we need some technical definitions.
Since and have the same local structure, they have the same nodes (remember that and are synchronization skeletons). A node of is a mapping of to . We will refer to such nodes as states. A state of the pair-program is a tuple where are states, states, respectively, and give the values of all the variables in . We refer to states of as states. An state inherits the assignments defined by its component and states: , , where , and are arbitrary atomic propositions in , , respectively.
We now turn to programs. If interconnection relation has domain , then we denote an program by . is the set of initial states, and is the synchronization skeleton for process () in this program. A state of is a tuple , where , () is an state and give the values of all the shared variables of the program (we assume some fixed ordering of these variables, so that the values assigned to them are uniquely determined by the list ). We refer to states of an program as states. An state inherits the assignments defined by its component states : , where , and is an arbitrary atomic proposition in . We shall usually use to denote states. If , then we define a program exactly like an program, but using interconnection relation instead of . state is similarly defined.
Let be an state. We define a state-to-formula operator that takes an state as an argument and returns a propositional formula that characterizes in that , and for all states such that : , where ranges over the members of . is defined similarly. We define the state projection operator . This operator has several variants. First of all, we define projection onto a single process from both states and states: if , then , and if , then . This gives the state corresponding to the state , state , respectively. Next we define projection of an state onto a pair-program: if , then , where are those values from that denote values of variables in . This gives the state corresponding to the state , and is well defined only when . We also define projection onto the shared variables in from both states and states: if , then , and if , then , where are those values from that denote values of variables in . Finally, we define projection of an state onto a program. If , then , where is the domain of , and are those values from that denote values of variables in . This gives the state (defined analogously to an state) corresponding to the state and is well defined only when .
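The projection of a global state onto a pair-program can be sketched concretely. The following is an illustrative Python encoding (the variable-ownership map `pair_vars` and the tuple representation are assumptions, not the paper's notation): projecting onto the pair of processes i and j keeps their two local states and only the shared variables belonging to that pair.

```python
# State projection onto a pair-program (sketch): keep the local states
# of i and j, plus only the shared variables owned by the pair (i, j).

def project_state(locals_, shared, i, j, pair_vars):
    """locals_: pid -> local state; shared: var -> value;
    pair_vars[(i, j)]: the set of shared variables of the (i, j) pair."""
    keep = pair_vars[(i, j)]
    return (locals_[i], locals_[j], {v: shared[v] for v in keep})
```

Projecting a three-process state onto the pair (1, 2) discards process 3's local state and any shared variables not owned by the pair.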
To define projection for paths, we first extend the definition of path (and fullpath) to include the index of the process making the transition, e.g., each transition is labeled by an index denoting this process. For example, a path in would be represented as , where . Let be an arbitrary path in . For any such that , define a block (cf. [CGB86] and [BCG88]) of to be a maximal subsequence of that starts and ends in a state and does not contain a transition by any such that . Thus we can consider to be a sequence of blocks with successive blocks linked by a single transition such that (note that a block can consist of a single state). It also follows that for any pair of states in the same block. This is because a transition that is not by some such that cannot affect any atomic proposition in , nor can it change the value of a variable in ; and a block contains no such transition. Thus, if is a block, we define to be for some state in . We now give the formal definition of path projection. We use the same notation as for state projection. Let denote the th block of .
Definition 1 (Path projection)
Let be where is a block for all . Then the Path Projection Operator is given by:
Thus there is a onetoone correspondence between blocks of and states of , with the th block of corresponding to the th state of (note that path projection is well defined when is finite).
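Path projection can likewise be sketched: since transitions by processes outside the pair cannot change the pair's local states or shared variables, each block contributes exactly one projected state. The encoding below (list of states plus the pid of each transition, and a caller-supplied state-projection function) is an illustrative assumption, not the paper's notation.

```python
# Path projection (sketch of Definition 1): collapse each block -- a
# maximal run of transitions by processes outside the pair -- into the
# single pair-state shared by all states of that block.

def project_path(states, trans_pids, pair, proj):
    """states: s_0 .. s_n; trans_pids[k]: pid taking s_k -> s_{k+1};
    proj: projection of one global state onto the pair."""
    out = [proj(states[0])]
    for k, pid in enumerate(trans_pids):
        if pid in pair:                  # block boundary: the pair moves
            out.append(proj(states[k + 1]))
    return out
```

A transition by an outside process leaves the projection unchanged, so the projected path has one state per block, matching the one-to-one correspondence noted above.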
The above discussion leads to the following definition of the synthesis method, which shows how an process of the program is derived from the pair-processes of the pair-programs :
Definition 2 (Pairwise synthesis)
An process is derived from the
pair-processes , for all as follows:
contains a move from to with label
iff
for every in contains a move from to
with label
.
The initial state set of the program is derived from
the initial state sets of the pair-programs as follows:
.
Here and are guarded-command "disjunction" and "conjunction," respectively. Roughly, the operational semantics of is that if one of the guards evaluates to true, then the corresponding body can be executed. If neither nor evaluates to true, then the command "blocks," i.e., waits until one of evaluates to true. (This interpretation was proposed by [Dij82].) We call an arc whose label has the form a pair-move. In compact notation, a pair-process has at most one move between any pair of local states.
The operational semantics of is that if both of the guards evaluate to true, then the bodies can be executed in parallel. If at least one of , evaluates to false, then the command “blocks,” i.e., waits until both of evaluate to true. We call an arc whose label has the form an move. In compact notation, an process has at most one move between any pair of local states.
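The conjunctive composition behind Definition 2 can be sketched as follows (illustrative Python under assumed names, not the paper's notation): an arc of process i from local state s to t is enabled iff, for every neighbor j, the (i, j) pair-program has a corresponding move whose guard holds; the bodies of all those pair-moves then execute together.

```python
# Conjunctive composition (sketch): process i's move s -> t is enabled
# iff every pair-program involving i permits that move in the current
# global state.

def i_move_enabled(i, s, t, neighbors, pair_guards, state):
    """neighbors[i]: the processes j interconnected with i;
    pair_guards[(i, j)]: (s, t, state) -> guard value of the
    corresponding (i, j) pair-move."""
    return all(pair_guards[(i, j)](s, t, state) for j in neighbors[i])
```

For mutual exclusion, say, process 1's entry into its critical section is enabled only when both of its neighbors' pair-guards hold; a single blocking pair-guard blocks the composed move.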
The above definition is, in effect, a syntactic transformation that can be carried out in linear time and space (in both and ). In particular, we avoid explicitly constructing the global state transition diagram of , which is of size exponential in .
Let be the global state transition diagrams of , respectively. The technical definitions are given below, and follow the operational semantics given in Section 2.
Definition 3 (Pairstructure)
Let . The semantics of is given by the pairstructure where

is a set of states,

gives the initial states of , and

is a transition relation giving the transitions of . A transition by is in if and only if all of the following hold:

,

and are states, and

there exists a move in such that there exists :

,

, and

.

Here if and if .

In a transition , we say that is the start state and that is the finish state. The transition is called a transition. In the sequel, we use as an alternative notation for the transition . is Hoare-triple notation [Hoa69] for total correctness, which in this case means that execution of always terminates (termination is obvious, since the right-hand side of is a list of constants) and, when the shared variables in have the values assigned by , leaves these variables with the values assigned by . states that the value of guard in state is ( is defined by the usual inductive scheme: "" iff , iff and , iff ). We consider that possesses a correctness property expressed by an formula if and only if .
The semantics of is given by the global state transition diagram generated by its execution. We call the global state transition diagram of an system an structure.
Definition 4 (structure)
The semantics of