
# Maximally Permissive Controlled System Synthesis for Modal Logic

Supported by the EU FP7 Programme under grant agreement no. 295261 (MEALS).

A.C. van Hulst, M.A. Reniers, and W.J. Fokkink (Eindhoven University of Technology, The Netherlands)
###### Abstract

We propose a new method for controlled system synthesis on non-deterministic automata, which includes the synthesis for deadlock-freeness, as well as invariant and reachability expressions. Our technique restricts the behavior of a Kripke-structure with labeled transitions, representing the uncontrolled system, such that it adheres to a given requirement specification in an expressive modal logic, while all non-invalidating behavior is retained. This induces maximal permissiveness in the context of supervisory control. The research presented in this paper allows a system model to be constrained according to a broad set of liveness, safety and fairness specifications of desired behavior, and embraces most concepts from Ramadge-Wonham supervisory control, including controllability and marker-state reachability. Synthesis is defined in this paper as a formal construction, which allowed a careful validation of its correctness using the Coq proof assistant.

## 1 Introduction

This paper presents a new technique for controlled system synthesis on non-deterministic automata for requirements in modal logic. The controlled systems perspective treats the system under control — the plant — and a system component which restricts the plant behavior — the controller — as a single integrated entity. This means that we take a model of all possible plant behavior, and construct a new model which is constrained according to a logical specification of desired behavior — the requirements. The automated generation, or synthesis, of such a restricted behavioral model incorporates a number of concepts from supervisory control theory [6], which affirm the generated model as being a proper controlled system, in relation to the original plant specification. Events are strictly partitioned into being either controllable or uncontrollable, such that synthesis only disallows events of the first type. In addition, synthesis preserves all behavior which does not invalidate the requirements, thereby inducing maximal permissiveness [6] in the context of supervisory control. The requirement specification formalism extends Hennessy-Milner Logic [10] with invariant, reachability, and deadlock-freeness expressions, and is also able to express the supervisory control concept of marker-state reachability [13].

The intended contribution of this paper is two-fold. First, it presents a technique for controlled system synthesis in a non-deterministic context. Second, it defines synthesis for a modal logic which is able to capture a broad set of requirements.

Regarding the first contribution, it should be noted that supervisory control synthesis is often approached using a deterministic model of both plant and controller. Notably, the classic Ramadge-Wonham supervisory control theory [13] is a well-researched example of this setup. The resulting controller restricts the behavior of the deterministic plant model, thereby ensuring that it operates according to the requirements via event-based synchronization. A controlled system cannot be constructed in this way for a non-deterministic model, as illustrated by example in Fig. 1. Assume that we wish to restrict all technically possible behavior of the indicator light of a printer (Fig. 1a) such that the indicator light switches immediately after a single event. In the solution shown in Fig. 1b, the self-loop at the right-most state is disallowed, as indicated using dashed lines, while all other behavior is preserved. Note that it is not possible to construct this maximally-permissive solution using event-based synchronization, as shown in [4]. However, an outcome as shown in Fig. 1b can be obtained by applying synthesis for the corresponding modal property, using the method described in this paper. As this example clearly shows, the strict separation between plant and controller is not possible for non-deterministic models, and therefore we interpret the controlled system as a singular entity.

The synthesized requirement in Fig. 1b represents a typical example of a requirement in the modal logic applied in this paper. This requirement formalism, which extends Hennessy-Milner Logic with invariant and reachability operators, and also includes a test for deadlock-freeness, is able to express a broad set of liveness, safety, and fairness properties. For instance, an important liveness concept in supervisory control theory involves marker-state reachability, which is informally expressed as the requirement that it is always possible to reach a state which is said to be marked. This requirement is modeled as $\Box\Diamond\mathit{marked}$, using the requirement specification logic, in conjunction with assigning $\mathit{marked}$ as a separate property to the designated states in the Kripke-model.

Safety-related requirements, which model the absence of faulty behavior, include deadlock-avoidance, expressed as $\Box\,\mathit{dlf}$ (i.e., invariantly, deadlock-free), and safety requirements of a more general nature. For instance, one might require that some type of communicating system is always able to perform a receive step, directly after every send step. Such a property is expressed as $\Box\,[\mathit{send}]\langle\mathit{receive}\rangle\mathit{true}$, using the requirement specification logic applied in this paper. In addition, we argue that this logic is able to model a limited class of fairness properties. One might require from a system which uses a shared resource that in every state, the system has access to the resource (the state satisfies a property $p$), or it can do an $e$-step to claim the resource, after which access is achieved immediately. In order to constrain the behavior of the plant specification such that it adheres to this requirement, we synthesize the property $\Box(p \vee \langle e \rangle p)$.

The remainder of this paper is set up as follows. We consider a number of related works on control synthesis in Section 2. Preliminary definitions in Section 3 introduce formal notions up to a formal statement of the synthesis problem. Section 4 concerns the formal definition of the synthesis construction, while Section 5 lists a number of important theorems indicating correctness of the synthesis approach, including detailed proofs; these proofs are available in computer-verified form as well [16].

## 2 Related Work

Earlier work by the same authors concerning synthesis for modal logic includes a recursive synthesis method for Hennessy-Milner Logic [17], and a synthesis method for a subset of the logic considered in this paper, with additional restrictions on combinations of modal operators [18].

We analyze related work along three lines: 1) allowance of non-determinism in plant specifications, 2) expressiveness of the requirement specification formalism, and 3) adherence to some form of maximal permissiveness. Along these lines, we compare related work with the intended improvements in this paper.

Ramadge-Wonham supervisory control [13] defines a broadly-embraced methodology for controller synthesis on deterministic plant models for requirements specified using automata. It defines a number of key elements in the relationship between plant and controlled system, such as controllability, marker-state reachability, deadlock-freeness and maximal permissiveness. Despite the fact that a strictly separated controller offers advantages from a developmental or implementational point of view, we argue that increased abstraction and flexibility justifies research into control synthesis for non-deterministic models. In addition, we emphasize that the automata-based description of desired behavior in the Ramadge-Wonham framework [13] does not allow the specification of requirements of an existential nature. For instance, in this framework it is not possible to specify that a step labeled with a particular event must exist, hence the choice of modal logic as our requirement formalism.

Work by Pnueli and Rosner [12] concerns a treatment of synthesis for reactive systems, based upon a finite transducer model of the plant, and a temporal specification of desired behavior. This synthesis construction is developed further for deterministic automata in [12], but the treatment remains non-maximal. This research is extended in [2], which connects reactive synthesis to Ramadge-Wonham supervisory control using a parity-game based approach. The methodology described in [2] transforms the synthesis control problem for $\mu$-calculus formulas in such a way that the set of satisfying models of a $\mu$-calculus formula coincides with the set of controllers which enforce the controlled behavior. Although non-determinism is allowed in plant specifications in [2], the treatment via loop-automata does not allow straightforward modeling of all (infinite) behaviors. Also, maximal permissiveness is not specified as a criterion for control synthesis in [2]. Interesting follow-up research is found in [3], for non-deterministic controllers over non-deterministic processes. However, the specification of desired behavior is limited to alternating automata [3], which do not allow complete coverage of invariant expressions over all modalities, or an equivalent thereof. Reactive synthesis is further applied to hierarchical [1] and recursive [11] component-based specifications. These works, which are both based upon a deterministic setting, provide a quite interesting setup from a developmental perspective, due to their focus on the re-usability of components.

Research in [19] relates Ramadge-Wonham supervisory control to an equivalent model-checking problem, resulting in important observations regarding the mutual exchangeability and complexity analysis of both problems. Despite the fact that research in [19] is limited to a deterministic setting, and synthesis results are not guaranteed to be maximally permissive, it does incorporate a quite expressive set of $\mu$-calculus requirements. Other research based upon a dual approach between control synthesis and model checking studies the incremental effects of transition removal upon the validity of $\mu$-calculus formulas [14], based on [7].

Research by D’Ippolito and others [8], [9] is based upon the framework of the world machine model for the synthesis of liveness properties, stated in fluent temporal logic. A distinction is made between controlled and monitored behavior, and between system goals and environment assumptions [8]. A controller is then derived from a winning strategy in a two-player game between original and required behavior, as expressed in terms of the notion of generalized reactivity, as introduced in [8]. Research in [8] also emphasizes the fact that pruning-based synthesis is not adequate for control of non-deterministic models, and it defines synthesis of liveness goals under a maximality criterion, referred to as a best-effort controller. However, this maximality requirement is trace-based and is therefore not able to signify inclusion of all possible infinite behaviors. In addition, some results in [8] are based upon the assumption of a deterministic plant specification.

## 3 Definitions

We assume a set $E$ of events and a set $P$ of state-based properties. In addition, we assume a strict partition of $E$ into controllable events $C$ and uncontrollable events $U$, such that $C \cup U = E$ and $C \cap U = \emptyset$. State-based properties are used to capture state-based information, and are assigned to states using a labeling function. Fig. 1 shows example properties, as well as example events, which are assumed to be controllable in this example. Events are used to capture system dynamics, and represent actions occurring when the system transitions between states. Controllable events may be used to model actuator actions in the plant, while an uncontrollable event may represent, for instance, a sensor reading. Basic properties and events are used to model plant behavior in the form of a Kripke-structure [5] with labeled transitions, to be abbreviated as Kripke-LTS, as formalized in Definition 1. Note that we assume finiteness of the given transition relation.

###### Definition 1

We define a Kripke-LTS as a four-tuple $(X, L, \longrightarrow, x)$ for state-space $X$, labeling function $L : X \to 2^{P}$, finite transition relation $\longrightarrow\, \subseteq X \times E \times X$, and initial state $x \in X$. The universe of all Kripke-LTSs is denoted by $\mathcal{K}$.
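As a concrete illustration (not part of the formal development), Definition 1 can be encoded directly in Python; the class and field names below are ours, chosen for readability:

```python
# Illustrative encoding of a Kripke-LTS (Definition 1); identifiers are ours.
class KripkeLTS:
    """Four-tuple (X, L, ->, x): state-space, labeling, transitions, initial state."""
    def __init__(self, states, labeling, trans, init):
        self.states = set(states)        # state-space X
        self.labeling = dict(labeling)   # L : X -> 2^P, sets of state-based properties
        self.trans = set(trans)          # finite relation, triples (source, event, target)
        self.init = init                 # initial state x, a member of X
        assert init in self.states, "initial state must be in the state-space"
        assert all(s in self.states and t in self.states
                   for (s, _e, t) in self.trans), "transitions must stay within X"
```

A structure is then built by listing states, labels, and labeled transitions explicitly, matching the finiteness assumption stated above.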

As usual, we will use the notation $x \xrightarrow{e} x'$ to denote that $(x, e, x') \in\, \longrightarrow$. The reflexive-transitive closure $\longrightarrow^{*}$ of a transition relation is defined in the following way: for all $x \in X$ it holds that $x \longrightarrow^{*} x$, and if there exist $x'' \in X$ and $e \in E$ such that $x \xrightarrow{e} x''$ and $x'' \longrightarrow^{*} x'$, then $x \longrightarrow^{*} x'$.
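For finite transition relations, the reflexive-transitive closure amounts to a standard reachability search; a minimal sketch (function name ours):

```python
def rt_closure(trans, start):
    """All states x' with start ->* x', for a finite set of (src, event, tgt) triples."""
    seen, todo = {start}, [start]
    while todo:
        x = todo.pop()
        for (s, _e, t) in trans:
            if s == x and t not in seen:   # follow any outgoing step once
                seen.add(t)
                todo.append(t)
    return seen
```

The base case of the definition (every state reaches itself) corresponds to seeding the search with the start state.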

Two different behavioral preorders are applied in this paper. The first is the simulation preorder, which is reiterated in Definition 2. Simulation is used to signify inclusion of behavior, while synthesis may alter the transition structure due to, for instance, unfolding. Simulation as applied in this paper is a straightforward adaptation of the definition of simulation in [15].

###### Definition 2

For $k = (X, L, \longrightarrow, x) \in \mathcal{K}$ and $k' = (X', L', \longrightarrow', x') \in \mathcal{K}$ we say that $k$ and $k'$ are related via simulation (notation: $k \preceq k'$) if there exists a relation $R \subseteq X \times X'$ such that $(x, x') \in R$ and for all $(y, y') \in R$ the following holds:

1. We have $L(y) = L'(y')$; and

2. If $y \xrightarrow{e} z$ then there exists a step $y' \xrightarrow{e} z'$ in $\longrightarrow'$ such that $(z, z') \in R$.

Partial bisimulation [4] is an extension of simulation such that the subset $U$ of uncontrollable events is bisimulated. For plant specification $k$ and synthesis result $s$ we require that $s$ is related to $k$ via partial bisimulation. This signifies the fact that synthesis did not disallow any uncontrollable event, which implies controllability in the context of supervisory control. Research in [4] details the nature of this partial bisimulation preorder.

###### Definition 3

If $k = (X, L, \longrightarrow, x) \in \mathcal{K}$ and $k' = (X', L', \longrightarrow', x') \in \mathcal{K}$, then $k$ and $k'$ are related via partial bisimulation (notation: $k \preceq_{U} k'$) if there exists a relation $R \subseteq X \times X'$ such that $(x, x') \in R$ and for all $(y, y') \in R$ the following holds:

1. We have $L(y) = L'(y')$;

2. If $y \xrightarrow{e} z$ then there exists a step $y' \xrightarrow{e} z'$ in $\longrightarrow'$ such that $(z, z') \in R$; and

3. If $y' \xrightarrow{e} z'$ in $\longrightarrow'$ for $e \in U$ then there exists a step $y \xrightarrow{e} z$ such that $(z, z') \in R$.
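For finite systems, Definition 3 can be decided by starting from the largest label-respecting candidate relation and repeatedly discarding pairs that violate one of the clauses; the sketch below is our own illustrative fixed-point refinement, not an algorithm from the paper. With an empty set of uncontrollable events it decides plain simulation (Definition 2).

```python
def _steps(trans, x):
    """Outgoing (event, target) steps of state x."""
    return [(e, t) for (s, e, t) in trans if s == x]

def partial_bisim(L1, T1, x1, L2, T2, x2, U):
    """Decide (X1,L1,T1,x1) <=_U (X2,L2,T2,x2) per Definition 3 (finite systems).
    Illustrative sketch; with U == set() this is simulation (Definition 2)."""
    S1 = {x1} | {y for (s, _, t) in T1 for y in (s, t)}
    S2 = {x2} | {y for (s, _, t) in T2 for y in (s, t)}
    # Clause 1: only pairs with equal labels can ever be related.
    R = {(p, q) for p in S1 for q in S2
         if L1.get(p, set()) == L2.get(q, set())}
    while True:
        drop = set()
        for (p, q) in R:
            # Clause 2: every step of p must be matched by q within R.
            fwd = all(any(e2 == e and (p2, q2) in R for (e2, q2) in _steps(T2, q))
                      for (e, p2) in _steps(T1, p))
            # Clause 3: every uncontrollable step of q must be matched by p.
            back = all(any(e1 == e and (p1, q2) in R for (e1, p1) in _steps(T1, p))
                       for (e, q2) in _steps(T2, q) if e in U)
            if not (fwd and back):
                drop.add((p, q))
        if not drop:
            return (x1, x2) in R
        R -= drop
```

The usage below shows the intended asymmetry: a candidate that drops an uncontrollable step of the plant is still a simulant, but not partially bisimilar.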

Requirements are specified using the modal logic $F$ given in Definition 5, which is built upon the set $B$ of state-based formulas in Definition 4.

###### Definition 4

The set $B$ of state-based formulas is defined by the grammar: $b ::= \mathit{true} \mid \mathit{false} \mid p \mid \neg b \mid b \wedge b \mid b \vee b$, for $p \in P$.

As indicated in Definition 4, state-based formulas are constructed from a straightforward Boolean algebra which includes the basic expressions $\mathit{true}$ and $\mathit{false}$, as well as a state-based property test $p$, for $p \in P$. Formulas in $B$ are then combined using the standard Boolean operators $\neg$, $\wedge$ and $\vee$.

###### Definition 5

The requirement specification logic $F$ is defined by the grammar: $f ::= b \mid f \wedge f \mid b \vee f \mid [e]f \mid \langle e \rangle f \mid \Box f \mid \Diamond b \mid \mathit{dlf}$, for $b \in B$ and $e \in E$.

We briefly consider the elements of the requirement logic $F$. Basic expressions $b \in B$ from Definition 4 function as the basic building blocks in the modal logic $F$. Conjunction is included, having its usual semantics, while disjunctive formulas are restricted to those of the form $b \vee f$, having a state-based formula in the left-hand disjunct. This restriction guarantees correct synthesis solutions, since it enables a local state-based test for retaining the appropriate transitions. The formula $[e]f$ can be used to test whether $f$ holds after every $e$-step, while the formula $\langle e \rangle f$ is used to assess whether there exists an $e$-step after which $f$ holds. These two operators thereby follow their standard semantics from Hennessy-Milner Logic [10]. An invariant formula $\Box f$ tests whether $f$ holds in every reachable state, while a reachability expression $\Diamond b$ may be used to check whether there exists a path such that the state-based formula $b$ holds at some state on this path. Note that the sub-formula of a reachability expression is restricted to a state-based formula $b$. This is used to acquire unique synthesis solutions. The deadlock-free test $\mathit{dlf}$ tests whether there exists an outgoing step of a particular state. Combined with the invariant operator, the formula $\Box\,\mathit{dlf}$ can be used to specify that the entire synthesized system should be deadlock-free. Validity of formulas in $B$ and $F$, with respect to a Kripke-LTS $k \in \mathcal{K}$, is as shown in Definition 6.

###### Definition 6

For $k = (X, L, \longrightarrow, x) \in \mathcal{K}$ and $f \in F$ we define if $k$ satisfies $f$ (notation: $k \models f$) as follows:

$$
\begin{array}{c}
\dfrac{}{k \models \mathit{true}} \qquad
\dfrac{p \in L(x)}{(X,L,\longrightarrow,x) \models p} \qquad
\dfrac{k \not\models b}{k \models \neg b} \qquad
\dfrac{k \models f \quad k \models g}{k \models f \wedge g} \qquad
\dfrac{k \models f}{k \models f \vee g} \qquad
\dfrac{k \models g}{k \models f \vee g} \\[2ex]
\dfrac{\forall\, x \xrightarrow{e} x' \,.\; (X,L,\longrightarrow,x') \models f}{(X,L,\longrightarrow,x) \models [e]f} \qquad
\dfrac{x \xrightarrow{e} x' \quad (X,L,\longrightarrow,x') \models f}{(X,L,\longrightarrow,x) \models \langle e \rangle f} \\[2ex]
\dfrac{\forall\, x \longrightarrow^{*} x' \,.\; (X,L,\longrightarrow,x') \models f}{(X,L,\longrightarrow,x) \models \Box f} \qquad
\dfrac{x \longrightarrow^{*} x' \quad (X,L,\longrightarrow,x') \models b}{(X,L,\longrightarrow,x) \models \Diamond b} \qquad
\dfrac{x \xrightarrow{e} x'}{(X,L,\longrightarrow,x) \models \mathit{dlf}}
\end{array}
$$
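For finite transition sets, the satisfaction rules of Definition 6 translate directly into a recursive checker. The sketch below is ours; formulas are represented as nested tuples, an encoding we choose purely for illustration:

```python
def reachable(trans, x):
    """States in the reflexive-transitive closure from x."""
    seen, todo = {x}, [x]
    while todo:
        y = todo.pop()
        for (s, _e, t) in trans:
            if s == y and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def sat(L, trans, x, f):
    """Decide (X, L, ->, x) |= f per Definition 6.
    Formulas are tuples, e.g. ("inv", ("reach", ("prop", "m")))."""
    tag = f[0]
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "prop":
        return f[1] in L.get(x, set())
    if tag == "not":
        return not sat(L, trans, x, f[1])
    if tag == "and":
        return sat(L, trans, x, f[1]) and sat(L, trans, x, f[2])
    if tag == "or":
        return sat(L, trans, x, f[1]) or sat(L, trans, x, f[2])
    if tag == "box":    # [e]f: f holds after every e-step
        return all(sat(L, trans, t, f[2])
                   for (s, e, t) in trans if s == x and e == f[1])
    if tag == "dia":    # <e>f: f holds after some e-step
        return any(sat(L, trans, t, f[2])
                   for (s, e, t) in trans if s == x and e == f[1])
    if tag == "inv":    # invariant: f holds at every reachable state
        return all(sat(L, trans, y, f[1]) for y in reachable(trans, x))
    if tag == "reach":  # reachability: some reachable state satisfies b
        return any(sat(L, trans, y, f[1]) for y in reachable(trans, x))
    if tag == "dlf":    # deadlock-freeness: an outgoing step exists
        return any(s == x for (s, _e, _t) in trans)
    raise ValueError(f"unknown operator: {tag}")
```

Marker-state reachability from Section 1, for example, is the tuple `("inv", ("reach", ("prop", "m")))` under this encoding.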

We may now formulate the synthesis problem, in terms of the previous definitions, as Definition 7. Research in this paper focuses on resolving this problem.

###### Definition 7

Given $k \in \mathcal{K}$ and $f \in F$, find, in a finite method, an $s \in \mathcal{K}$ such that the following holds: 1) $s \models f$, 2) $s \preceq k$, 3) $s \preceq_{U} k$, 4) For all $s' \in \mathcal{K}$ such that $s' \preceq k$ and $s' \models f$ it holds that $s' \preceq s$; or determine that such an $s$ does not exist.

These four properties are interpreted in the context of supervisory control synthesis as follows. Property 1 (validity) states that the synthesis result satisfies the synthesized formula. Property 2 (simulation) asserts that the synthesis result is a restriction of the original behavior, while property 3 (controllability) ensures that no accessible uncontrollable behavior is disallowed during synthesis. Controllability is achieved if the synthesis result is related to the original plant-model via partial bisimulation, which adds bisimulation of all uncontrollable events to the second property. Note that the third property implies the second property, as can be observed in Definitions 2 and 3. Property 4 (maximality) states that synthesis removes the least possible behavior, and thereby induces maximal permissiveness. That is, the behavior of every alternative synthesis option is included in the behavior of the synthesis result.

## 4 Synthesis

The purpose of this section is to present the formal definition of the synthesis construction. Synthesis as defined in this paper involves three major steps, after which a modified Kripke-LTS is obtained. If synthesis is successful, the resulting structure satisfies all synthesis requirements, as stated in Definition 7. The first stage of synthesis transforms the original transition relation $\longrightarrow$, for state-space $X$, into a new transition relation $\longrightarrow_{0}$ over the state-formula product space $X \times F$. This allows us to indicate precisely which modal (sub-)formula needs to hold at each point in the new transition relation. The second step removes transitions based upon an assertion of synthesizability of formulas assigned to the target states of transitions. This second step is repeated until no more transitions are removed. The third and final synthesis step tests whether synthesis has been successful by evaluating whether the synthesizability predicate holds for every remaining state. An overview of the synthesis process is shown in Fig. 2.

A formal derivation of the starting point in the synthesis process is shown in Definition 9. This definition relies upon the notion of sub-formulas, as formalized in Definition 8.

###### Definition 8

We say that $f$ is a sub-formula of $g$ (notation $f \in \mathit{sub}(g)$) if this can be derived by the following rules:

$$
\dfrac{}{f \in \mathit{sub}(f)} \qquad
\dfrac{f \in \mathit{sub}(g)}{f \in \mathit{sub}(g \wedge h)} \qquad
\dfrac{f \in \mathit{sub}(h)}{f \in \mathit{sub}(g \wedge h)} \qquad
\dfrac{f \in \mathit{sub}(g)}{f \in \mathit{sub}(\Box g)}
$$

As shown in Definition 8, sub-formulas align precisely with the restrictions on formula expansion for conjunctive and invariant formulas, as embedded in the formula reductions shown in Definition 9. These restrictions on formula expansion guarantee finiteness of formula reductions.

###### Definition 9

For state-space $X$ and original transition relation $\longrightarrow\, \subseteq X \times E \times X$, we define the starting point $\longrightarrow_{0}\, \subseteq (X \times F) \times E \times (X \times F)$ of synthesis as follows:

$$
\begin{array}{c}
\dfrac{x \xrightarrow{e} x'}{(x, b) \xrightarrow{e}_0 (x', \mathit{true})} \qquad
\dfrac{(x,f) \xrightarrow{e}_0 (x',f') \quad (x,g) \xrightarrow{e}_0 (x',g') \quad g' \in \mathit{sub}(f')}{(x, f \wedge g) \xrightarrow{e}_0 (x', f')} \\[2ex]
\dfrac{(x,f) \xrightarrow{e}_0 (x',f') \quad (x,g) \xrightarrow{e}_0 (x',g') \quad g' \notin \mathit{sub}(f')}{(x, f \wedge g) \xrightarrow{e}_0 (x', f' \wedge g')} \qquad
\dfrac{x \xrightarrow{e} x' \quad x \models b}{(x, b \vee f) \xrightarrow{e}_0 (x', \mathit{true})} \qquad
\dfrac{(x,f) \xrightarrow{e}_0 (x',f')}{(x, b \vee f) \xrightarrow{e}_0 (x', f')} \\[2ex]
\dfrac{x \xrightarrow{e} x'}{(x, [e]f) \xrightarrow{e}_0 (x', f)} \qquad
\dfrac{x \xrightarrow{e} x' \quad e \neq e'}{(x, [e']f) \xrightarrow{e}_0 (x', \mathit{true})} \qquad
\dfrac{x \xrightarrow{e} x'}{(x, \langle e \rangle f) \xrightarrow{e}_0 (x', f)} \qquad
\dfrac{x \xrightarrow{e} x'}{(x, \langle e \rangle f) \xrightarrow{e}_0 (x', \mathit{true})} \\[2ex]
\dfrac{(x,f) \xrightarrow{e}_0 (x',f') \quad f' \in \mathit{sub}(\Box f)}{(x, \Box f) \xrightarrow{e}_0 (x', \Box f)} \qquad
\dfrac{(x,f) \xrightarrow{e}_0 (x',f') \quad f' \notin \mathit{sub}(\Box f)}{(x, \Box f) \xrightarrow{e}_0 (x', \Box f \wedge f')} \\[2ex]
\dfrac{x \xrightarrow{e} x'}{(x, \Diamond b) \xrightarrow{e}_0 (x', \mathit{true})} \qquad
\dfrac{x \xrightarrow{e} x'}{(x, \Diamond b) \xrightarrow{e}_0 (x', \Diamond b)} \qquad
\dfrac{x \xrightarrow{e} x'}{(x, \mathit{dlf}) \xrightarrow{e}_0 (x', \mathit{true})}
\end{array}
$$
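The rules of Definition 9 can be read as a function that, given the label of a plant step and a formula at the source, returns the possible target formulas. The following sketch mirrors that reading under our tuple encoding; the function names are ours, and the diamond case follows our reading of the two $\langle e \rangle$ rules (the obligation may be discharged now, or deferred as $\mathit{true}$):

```python
TRUE = ("true",)

def sub(f):
    """Sub-formulas per Definition 8: identity, conjunction, and invariant."""
    s = {f}
    if f[0] == "and":
        s |= sub(f[1]) | sub(f[2])
    elif f[0] == "inv":
        s |= sub(f[1])
    return s

def reducts(f, e, holds):
    """Target formulas f' with (x, f) -e->_0 (x', f') for a plant step x -e-> x'.
    `holds(b)` decides x |= b at the source state. Sketch of Definition 9."""
    tag = f[0]
    if tag in ("true", "false", "prop", "not"):   # state-based formula b
        return {TRUE}
    if tag == "and":                               # fold via sub(...) to stay finite
        out = set()
        for f1 in reducts(f[1], e, holds):
            for g1 in reducts(f[2], e, holds):
                out.add(f1 if g1 in sub(f1) else ("and", f1, g1))
        return out
    if tag == "or":                                # b v g, state-based left disjunct
        out = set(reducts(f[2], e, holds))
        if holds(f[1]):
            out.add(TRUE)
        return out
    if tag == "box":                               # [e']g
        return {f[2]} if f[1] == e else {TRUE}
    if tag == "dia":                               # <e'>g: fulfil now, or defer
        return {f[2], TRUE} if f[1] == e else {TRUE}
    if tag == "inv":                               # invariant, folded via sub(...)
        return {f if g1 in sub(f) else ("and", f, g1)
                for g1 in reducts(f[1], e, holds)}
    if tag == "reach":
        return {TRUE, f}
    if tag == "dlf":
        return {TRUE}
    raise ValueError(tag)
```

Note how the $\mathit{sub}$ test keeps conjunctive and invariant reducts from growing unboundedly, which is exactly the finiteness argument used in Theorem 5.1.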

The starting point of synthesis is subjected to transition removal via a synthesizability test for formulas assigned to the target states of transitions. In generalized form, we define a formula $f$ to be synthesizable in the state-formula pair $(x, g)$ if this can be derived by the rules in Definition 11. For an appropriate definition of synthesizability, it is necessary to extend the notion of sub-formulas in such a way that a state-based evaluation can be incorporated, in order to handle disjunctive formulas correctly. This leads to the sub-formula notion called $\mathit{part}$, which is shown in Definition 10.

###### Definition 10

We say that a formula $f$ is a part of a formula $g$ in the context of a state-based evaluation for $x \in X$ (notation: $f \in \mathit{part}(x, g)$) if this can be derived as follows:

$$
\dfrac{}{f \in \mathit{part}(x, f)} \qquad
\dfrac{f \in \mathit{part}(x, g)}{f \in \mathit{part}(x, g \wedge h)} \qquad
\dfrac{f \in \mathit{part}(x, h)}{f \in \mathit{part}(x, g \wedge h)} \qquad
\dfrac{x \not\models b \quad f \in \mathit{part}(x, g)}{f \in \mathit{part}(x, b \vee g)} \qquad
\dfrac{f \in \mathit{part}(x, g)}{f \in \mathit{part}(x, \Box g)}
$$
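Definition 10 is a straightforward recursive check; a self-contained sketch under our tuple encoding of formulas (function and parameter names are ours):

```python
def in_part(f, holds_here, g):
    """Decide f in part(x, g) per Definition 10, with formulas as nested tuples
    such as ("and", g, h), ("or", b, g), ("inv", g).
    `holds_here(b)` decides x |= b at the state under consideration."""
    if f == g:
        return True
    if g[0] == "and":
        return in_part(f, holds_here, g[1]) or in_part(f, holds_here, g[2])
    if g[0] == "or":   # b v g': the rule applies only when x does not satisfy b
        return (not holds_here(g[1])) and in_part(f, holds_here, g[2])
    if g[0] == "inv":  # invariant: descend into the body
        return in_part(f, holds_here, g[1])
    return False
```

The state-based guard on the disjunctive case is the whole point of the extension: a right-hand disjunct only counts as a part of the obligation when the left-hand disjunct fails at the state.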

Partial formulas as shown in Definition 10 are used in the definition of synthesizability as shown in Definition 11. In particular, this is used in the definition of synthesizability for formulas of type $\langle e \rangle f$. In addition, partial formulas play a major role in the correctness proofs of the synthesis method.

###### Definition 11

With regard to an intermediate relation $\longrightarrow_{n}$ in the synthesis procedure, we say that a formula $f$ is synthesizable in the state-formula pair $(x, g)$ (notation: $(x, g) \uparrow f$) if this can be derived as follows:

$$
\begin{array}{c}
\dfrac{x \models b}{(x,g) \uparrow b} \qquad
\dfrac{(x,g) \uparrow f_1 \quad (x,g) \uparrow f_2}{(x,g) \uparrow f_1 \wedge f_2} \qquad
\dfrac{x \models b}{(x,g) \uparrow b \vee f} \qquad
\dfrac{(x,g) \uparrow f}{(x,g) \uparrow b \vee f} \qquad
\dfrac{}{(x,g) \uparrow [e]f} \\[2ex]
\dfrac{(x,g) \xrightarrow{e}_n (x',g') \quad f \in \mathit{part}(x',g') \quad (x',g') \uparrow f}{(x,g) \uparrow \langle e \rangle f} \qquad
\dfrac{(x,g) \uparrow f}{(x,g) \uparrow \Box f} \\[2ex]
\dfrac{(x,g) \longrightarrow^{*}_{n} (x',g') \quad x' \models b}{(x,g) \uparrow \Diamond b} \qquad
\dfrac{(x,g) \xrightarrow{e}_n (x',g')}{(x,g) \uparrow \mathit{dlf}}
\end{array}
$$
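The rules of Definition 11 likewise translate into a recursive predicate over a given intermediate relation. The sketch below is ours; it inlines Definition 10 (`in_part`) so that the listing is self-contained, and represents the relation as a set of triples over state-formula pairs:

```python
def in_part(f, holds_here, g):
    """f in part(x, g), Definition 10; holds_here(b) decides x |= b."""
    if f == g:
        return True
    if g[0] == "and":
        return in_part(f, holds_here, g[1]) or in_part(f, holds_here, g[2])
    if g[0] == "or":
        return (not holds_here(g[1])) and in_part(f, holds_here, g[2])
    if g[0] == "inv":
        return in_part(f, holds_here, g[1])
    return False

def reach_pairs(rel, start):
    """State-formula pairs reachable from `start` via the current relation."""
    seen, todo = {start}, [start]
    while todo:
        p = todo.pop()
        for (s, _e, t) in rel:
            if s == p and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def syn(pair, f, rel, holds):
    """(x, g) 'synthesizable for f' over relation `rel` (Definition 11).
    `holds(x, b)` decides x |= b for state-based b."""
    x, g = pair
    tag = f[0]
    if tag in ("true", "false", "prop", "not"):
        return holds(x, f)
    if tag == "and":
        return syn(pair, f[1], rel, holds) and syn(pair, f[2], rel, holds)
    if tag == "or":
        return holds(x, f[1]) or syn(pair, f[2], rel, holds)
    if tag == "box":
        return True                      # [e]f is an axiom: no premise
    if tag == "dia":                     # <e>f': a matching step must carry f'
        for (s, e, t) in rel:
            if s == pair and e == f[1]:
                if in_part(f[2], lambda b: holds(t[0], b), t[1]) \
                        and syn(t, f[2], rel, holds):
                    return True
        return False
    if tag == "inv":                     # invariant: only the body, see discussion
        return syn(pair, f[1], rel, holds)
    if tag == "reach":                   # some reachable pair satisfies b
        return any(holds(y, f[1]) for (y, _h) in reach_pairs(rel, pair))
    if tag == "dlf":
        return any(s == pair for (s, _e, _t) in rel)
    raise ValueError(tag)
```

As the text below explains, this predicate is deliberately partial (e.g. for invariants) and only meaningful in combination with the repeated-removal loop of Definition 12.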

It is important to note here that the synthesizability test serves as a partial assessment. The synthesizability predicate for $f$ holds in the state-formula pair $(x, g)$ if it is possible to modify the outgoing transitions of $(x, g)$ in such a way that $f$ becomes satisfied in $x$. However, synthesizability is not straightforwardly definable for a number of formulas. For instance, it cannot be directly assessed whether it is possible to satisfy an invariant formula. Therefore, the synthesizability test in Definition 11 is designed to operate in conjunction with the process of repeated transition removal, as shown in Fig. 2. This is reflected, for instance, in the definition of synthesizability for an invariant formula $\Box f$, which only relies upon $f$ being synthesizable. However, since synthesizability needs to hold at every reachable state for synthesis to be successful, such a definition of synthesizability for invariant formulas is appropriate due to its role in the entire synthesis process. A synthesis example for an invariant formula is shown in Fig. 3.

Using the definitions stated before, we are now ready to define the main synthesis construction. That is, how transitions are removed from the synthesis starting point $\longrightarrow_{0}$, and how the subsequent intermediate transition relations $\longrightarrow_{n}$ are constructed. In addition, more clarity is required with regard to reaching a stable point during synthesis, and verifying whether the synthesis construction has been completed successfully.

###### Definition 12

For $n \in \mathbb{N}$ and intermediate relation $\longrightarrow_{n}$, we define the $(n+1)$-th iteration $\longrightarrow_{n+1}$ in the synthesis construction as follows:

$$
\dfrac{(x,f) \xrightarrow{e}_n (x',f') \quad e \in U}{(x,f) \xrightarrow{e}_{n+1} (x',f')} \qquad
\dfrac{(x,f) \xrightarrow{e}_n (x',f') \quad (x',f') \uparrow f'}{(x,f) \xrightarrow{e}_{n+1} (x',f')}
$$

The corresponding system model is defined as $(X \times F, L', \longrightarrow_{n}, (x, f))$, using the labeling function $L'$, such that $L'((y, g)) = L(y)$, for all $y \in X$ and $g \in F$.
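The iteration of Definition 12 is a fixed-point computation: repeatedly keep only those transitions that are labeled with an uncontrollable event or whose target pair passes the synthesizability test against the current relation. A schematic loop (ours), with the synthesizability test of Definition 11 supplied as a callback:

```python
def synthesis_fixpoint(trans0, U, syn):
    """Iterate Definition 12 until stable. A transition (src, e, tgt) survives
    one round if e is uncontrollable (e in U) or syn(tgt, rel) holds, where
    `syn` stands in for the synthesizability test on the target pair's formula."""
    rel = set(trans0)
    while True:
        kept = {(s, e, t) for (s, e, t) in rel if e in U or syn(t, rel)}
        if kept == rel:        # stable point reached: nothing was removed
            return rel
        rel = kept
```

The usage below uses a deliberately toy synthesizability test (targets must have an outgoing step, a deadlock-freeness-like condition) to show the cascade: removing one transition can invalidate an earlier one in the next round, while uncontrollable transitions are never removed.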

One last definition remains, namely completeness of the synthesis construction. The formula reductions induced by Definition 9 are finite, which implies a terminating construction of the transition relation $\longrightarrow_{0}$. Since $\longrightarrow_{0}$ consists of finitely many transitions, only finitely many steps may be removed. This means that at some point, no more transitions are removed, and a stable point will be reached. If at this point, synthesizability holds at every reachable state, synthesis is successful. Otherwise, it is not. It is natural that a formal notion representing the first situation serves as a premise for a number of correctness results. This notion is formalized as completeness in Definition 13.

###### Definition 13

For $k = (X, L, \longrightarrow, x) \in \mathcal{K}$, $f \in F$ and $n \in \mathbb{N}$, we say that $\longrightarrow_{n}$ is complete if the following holds:

For all $(x, f) \longrightarrow^{*}_{n} (x', f')$ it holds that $(x', f') \uparrow f'$.
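Definition 13 then amounts to checking the synthesizability predicate at every state-formula pair reachable in the stable relation; a short sketch (names ours, with the test again supplied as a callback):

```python
def reachable_pairs(rel, init):
    """State-formula pairs reachable from `init` via the relation."""
    seen, todo = {init}, [init]
    while todo:
        p = todo.pop()
        for (s, _e, t) in rel:
            if s == p and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def complete(rel, init, syn):
    """Definition 13: the relation is complete if every reachable pair passes
    the synthesizability test for the formula it carries."""
    return all(syn(p, rel) for p in reachable_pairs(rel, init))
```

If this check fails at the stable point, synthesis has been unsuccessful and no controlled system satisfying Definition 7 is produced.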

## 5 Correctness

In this section, we state the theorems for the key properties related to synthesis correctness: termination, validity, simulation, controllability and maximality, as given in Definition 7. All proofs are computer-verified using the Coq proof assistant [16]. The first result is shown in Theorem 5.1: the synthesis construction is always terminating.

###### Theorem 5.1

For $k = (X, L, \longrightarrow, x) \in \mathcal{K}$, having finite $\longrightarrow$, and $f \in F$, there exists an $n \in \mathbb{N}$ such that $\longrightarrow_{m}\, =\, \longrightarrow_{n}$ for all $m \geq n$.

###### Proof

Observing the synthesis construction in Definition 12, it is straightforward that from the starting point of synthesis $\longrightarrow_{0}$, transitions are only removed, and not added. This means that once we are able to show that $\longrightarrow_{0}$ is finite, given that $\longrightarrow$ is finite, then the synthesis construction is terminating. In other words, only finitely many transitions will ever be removed, if they do not satisfy the synthesizability test for the formula assigned to the target state. The focus of this proof is therefore on the finiteness of $\longrightarrow_{0}$.

Let $\Rightarrow$ denote the formula reduction relation as implicitly defined in Definition 9. That is, $f \Rightarrow f'$ if $(x, f) \xrightarrow{e}_0 (x', f')$ for some $x, x' \in X$ and $e \in E$. The reflexive-transitive closure of $\Rightarrow$, denoted as $\Rightarrow^{*}$, is defined in the natural way. It is clear that if $(x, f) \xrightarrow{e}_0 (x', f')$ then $f \Rightarrow^{*} f'$. This means that if, for each $f \in F$, there exists a finite set $G \subseteq F$ such that $f' \in G$ for all $f \Rightarrow^{*} f'$, then only finitely many transitions are constructed in $\longrightarrow_{0}$, under the assumption that $\longrightarrow$ is finite. We prove this property by induction towards the structure of $f$.

If $f = b$, for $b \in B$, then choose $G = \{b, \mathit{true}\}$, which is clearly finite. Two other cases can be handled in a similar way. For $f = \Diamond b$, choose $G = \{\Diamond b, \mathit{true}\}$ and for $f = \mathit{dlf}$, choose $G = \{\mathit{dlf}, \mathit{true}\}$.

For the case $f = g \wedge h$, by induction we obtain two finite sets $G$ and $H$, containing the formula-reducts of $g$ and $h$ respectively. If we choose $G \cup H \cup \{g' \wedge h' \mid g' \in G,\, h' \in H\}$, then this set is clearly also finite. Assume that $g \Rightarrow^{*} g'$ and $h \Rightarrow^{*} h'$; then by induction towards the length of the reduction it is clear that every reduct of $g \wedge h$ is contained in this set. Since $g \wedge h$ itself is contained as well, this completes the proof for finiteness of reductions under conjunction. For the next case $f = b \vee g$, for $b \in B$, we obtain a finite set $G$ representing the finiteness of the set of $g$-reducts. Then simply choose $\{b \vee g, \mathit{true}\} \cup G$, which is clearly also finite. The cases for $f = [e]g$ and $f = \langle e \rangle g$ can be handled in a similar way. By induction we obtain a finite set $G$ corresponding to the formula-reducts of $g$. For these two respective cases it is sufficient to choose $\{[e]g, \mathit{true}\} \cup G$ and $\{\langle e \rangle g, \mathit{true}\} \cup G$.

The case for $f = \Box g$, for some $g \in F$, is somewhat more involved. Let $G$ be obtained via induction, thus containing all $g'$ such that $g \Rightarrow^{*} g'$. Assume that $G$ is restricted such that it strictly contains no other elements than those which satisfy this condition. We then define the function $c$ in the following way: $c$ maps every non-empty subset $G' \subseteq G$ to a conjunction of its elements, in some fixed order.

As our witness, we then choose $\{\Box g\} \cup \{\Box g \wedge c(G') \mid \emptyset \neq G' \subseteq G\}$, which is finite since $G$ contains finitely many elements. Clearly it holds that $\Box g$ is contained in this set. For each reduct $f'$ of $\Box g$, there exists a $G' \subseteq G$ such that $f'$ equals $\Box g$ or $\Box g \wedge c(G')$. The application of $\mathit{sub}$ in the formula reductions for conjunction and invariant formulas in Definition 9 ensures that a reduct which is already a sub-formula of the current conjunction is not added as a new conjunct, as can be derived using induction towards $\Rightarrow^{*}$. Hence every reduct of $\Box g$ remains within this finite set.

The second result is shown in Theorem 5.2: If synthesis is complete then the synthesis result satisfies the synthesized formula. Since synthesis is terminating, as shown in Theorem 5.1, this results in a stable point in the synthesis process. It may then be quickly assessed whether synthesis is complete, by checking whether synthesizability is satisfied in every remaining reachable state, upon which the result in Theorem 5.2 holds.

###### Lemma 1

If $(x, g) \uparrow g$ and $f \in \mathit{part}(x, g)$, then $(x, g) \uparrow f$.

###### Proof

By induction towards the derivation of $f \in \mathit{part}(x, g)$, using Definition 11.

###### Theorem 5.2

If $\longrightarrow_{n}$ is complete then $(X \times F, L', \longrightarrow_{n}, (x, f)) \models f$.

###### Proof

Assume $k = (X, L, \longrightarrow, x)$ and $(X \times F, L', \longrightarrow_{n}, (x, f))$. We show a more generalized result: if $g \in \mathit{part}(y, h)$ for a pair $(y, h)$ with $(x, f) \longrightarrow^{*}_{n} (y, h)$, and $\longrightarrow_{n}$ is complete, then $(X \times F, L', \longrightarrow_{n}, (y, h)) \models g$. This immediately leads to the required result, since $f \in \mathit{part}(x, f)$. Note that we have $(y, h) \uparrow h$ by Definition 13 and due to $(x, f) \longrightarrow^{*}_{n} (y, h)$. Also, we have $(y, h) \uparrow g$, by Lemma 1.

Apply induction towards the structure of $g$. Suppose that $g = b$, for some $b \in B$. Then from $(y, h) \uparrow b$ we have $y \models b$, which directly leads to $(X \times F, L', \longrightarrow_{n}, (y, h)) \models b$, due to the fact that validity of a state-based formula only depends upon the labels assigned to $y$, and $L'((y, h)) = L(y)$.

If $g = g_1 \wedge g_2$, and $g_1 \wedge g_2 \in \mathit{part}(y, h)$, then $g_1 \in \mathit{part}(y, h)$ and $g_2 \in \mathit{part}(y, h)$, as is clear from Definition 10. By induction, we then have $(y, h) \models g_1$ and $(y, h) \models g_2$, writing $(y, h)$ for the model rooted at that pair. For the next case, suppose that $g = b \vee g'$. If $y \models b$, then $(y, h) \models b \vee g'$ directly. However, if $y \not\models b$, then the left-hand disjunct also does not hold at $(y, h)$, so $g' \in \mathit{part}(y, h)$ must be true. In addition, we have $(y, h) \uparrow g'$. This is precisely the reason why it is necessary to incorporate a state-based evaluation in Definition 10. Application of the induction hypothesis now gives $(y, h) \models g'$.

Suppose that $g = [e]g'$, and assume that $(y, h) \xrightarrow{e}_n (y', h')$. Using Definition 10, we may then conclude that $g' \in \mathit{part}(y', h')$. We apply induction in order to obtain $(y', h') \models g'$. Due to the assumption of completeness, the induction premise for completeness is satisfied for $(y', h')$ as well. If $g = \langle e \rangle g'$, then by Lemma 1 we have $(y, h) \uparrow \langle e \rangle g'$. By Definition 11, there now exists a step $(y, h) \xrightarrow{e}_n (y', h')$ such that $g' \in \mathit{part}(y', h')$. The latter condition shows why it is important to have the condition $f \in \mathit{part}(x', g')$ in Definition 11, for the formula $\langle e \rangle f$. We apply the induction hypothesis to derive $(y', h') \models g'$, for this $(y', h')$. Again, the induction premise for completeness in $(y', h')$ is satisfied due to the existence of the step $(y, h) \xrightarrow{e}_n (y', h')$, and completeness of $\longrightarrow_{n}$.

The next case considers $g = \Box g'$, for some $g' \in F$. Assume the existence of a step-sequence $(y, h) \longrightarrow^{*}_{n} (y', h')$. By Definitions 10 and 9, it is clear that $g' \in \mathit{part}(y', h')$, and therefore the induction hypothesis applies at $(y', h')$. This allows us to apply the induction hypothesis for $g'$, in order to obtain $(y', h') \models g'$ for each such $(y', h')$. Hence, we obtain $(y, h) \models \Box g'$.

Suppose that $g = \Diamond b$, for some $b \in B$. By Lemma 1 we have $(y, h) \uparrow \Diamond b$, hence there exists a path $(y, h) \longrightarrow^{*}_{n} (y', h')$ such that $y' \models b$, leading directly to $(y, h) \models \Diamond b$. For $g = \mathit{dlf}$, the derivation from Lemma 1 also leads directly to $(y, h) \models \mathit{dlf}$.

We show that our synthesis method adheres to controllability by verifying that the synthesis result is related to the original plant model via partial bisimulation in Theorem 5.3. Note that this implies simulation.

###### Lemma 2

If $x \xrightarrow{e} x'$ and $e \in U$ and $f \in F$, then there exists an $f'$ such that for all $n \in \mathbb{N}$, we have $(x, f) \xrightarrow{e}_n (x', f')$.

###### Proof

Using induction towards the structure of $f$, we may derive the existence of an $f'$, such that $(x, f) \xrightarrow{e}_0 (x', f')$. Given that $e \in U$, it is then straightforwardly derivable that $(x, f) \xrightarrow{e}_n (x', f')$, by induction on $n$.

###### Theorem 5.3

If $\longrightarrow_{n}$ is complete then $(X \times F, L', \longrightarrow_{n}, (x, f)) \preceq_{U} (X, L, \longrightarrow, x)$.

###### Proof

Let $k = (X, L, \longrightarrow, x)$. According to Definition 3, we need to provide a witness relation $R \subseteq (X \times F) \times X$, such that $((x, f), x) \in R$. Choose $R = \{((y, g), y) \mid (x, f) \longrightarrow^{*}_{n} (y, g)\}$; note that $L'((y, g)) = L(y)$ by definition. Suppose that $((y, g), y) \in R$. If there exists a step $(y, g) \xrightarrow{e}_n (y', g')$, then by Definition 9, there also exists a step $y \xrightarrow{e} y'$, upon which we may conclude that $((y', g'), y') \in R$, since completeness of $\longrightarrow_{n}$ extends to completeness at $(y', g')$, for every reachable pair. If $y \xrightarrow{e} y'$, for $e \in U$, then by Lemma 2, there exists a step $(y, g) \xrightarrow{e}_n (y', g')$, which again leads to the conclusion that $((y', g'), y') \in R$.

As a final result, we show that the synthesis result is maximal within the simulation preorder, with respect to all simulants of the original system which satisfy the synthesized formula. This result, which implies maximal permissiveness in the context of supervisory control, is shown in Theorem 5.4.

###### Lemma 3

For $k = (X, L, \longrightarrow, x) \in \mathcal{K}$, $f \in F$ and $s = (Y, M, \longrightarrow_{s}, y) \in \mathcal{K}$ such that $s \preceq k$ and $s \models f$, and if $y \xrightarrow{e}_{s} y'$ and $x \xrightarrow{e} x'$ such that $(Y, M, \longrightarrow_{s}, y') \preceq (X, L, \longrightarrow, x')$, there exists an $f'$ such that $(x, f) \xrightarrow{e}_0 (x', f')$ and $(Y, M, \longrightarrow_{s}, y') \models f'$.

###### Proof

By induction towards the structure of $f$. Note that simulation as given in Definition 2 includes strict equivalence of labels in related states. This implies that validity of a state-based formula is preserved under simulation. This fact must be used in order to derive existence of a step for a disjunctive formula $b \vee f$ in the induction argument for $\longrightarrow_{0}$.

###### Lemma 4

If $(x, f) \xrightarrow{e}_0 (x', f')$ and if $(x', f') \uparrow f'$, with relation to $\longrightarrow_{n}$ for every $n \in \mathbb{N}$, then $(x, f) \xrightarrow{e}_n (x', f')$ for every $n \in \mathbb{N}$.

###### Proof

Note that the premise $(x', f') \uparrow f'$, with relation to $\longrightarrow_{n}$, should be interpreted as if $\longrightarrow_{n}$ were applied in Definition 11. Apply induction towards $n$. If $n = 0$ then it is clear that $(x, f) \xrightarrow{e}_0 (x', f')$. Suppose that $(x, f) \xrightarrow{e}_n (x', f')$ for some $n \in \mathbb{N}$, and $(x', f') \uparrow f'$, with relation to $\longrightarrow_{n}$. Then, by Definition 12, it is clear that all conditions for the derivation of $(x, f) \xrightarrow{e}_{n+1} (x', f')$ are satisfied.

###### Lemma 5

If and such that , and if such that , then