Coordination via Interaction Constraints I:
Local Logic
Abstract
Wegner describes coordination as constrained interaction. We take this approach literally and define a coordination model based on interaction constraints and partial, iterative and interactive constraint satisfaction. Our model captures behaviour described in terms of synchronisation and data flow constraints, plus various modes of interaction with the outside world provided by external constraint symbols, on-the-fly constraint generation, and coordination variables. Underlying our approach is an engine that performs (partial) satisfaction of the sets of constraints. Our model extends previous work on three counts: firstly, a more advanced notion of external interaction is offered; secondly, our approach enables local satisfaction of constraints with appropriate partial solutions, avoiding global synchronisation over the entire constraint set; and, as a consequence, constraint satisfaction can occur concurrently, so multiple parts of a set of constraints can be solved and can interact with the outside world asynchronously, unless synchronisation is required by the constraints.
This paper describes the underlying logic, which enables a notion of local solution, and relates this logic to the more global approach of our previous work based on classical logic.
1 Introduction
Coordination models and languages [coordination] address the complexity of systems of concurrent, distributed, mobile and heterogeneous components by separating the parts that perform the computation (the components) from the parts that “glue” these components together. The glue code offers a layer between components to intercept, modify, redirect, and synchronise communication among components, and to facilitate monitoring and managing their resource usage, typically separately from the resources themselves.
Wegner describes coordination as constrained interaction [wegner]. We take this approach literally and represent coordination using constraints. Specifically, we take the view that a component connector specifies a (series of) constraint satisfaction problems, and that valid interaction between a connector and its environment corresponds to the solutions of such constraints.
In previous work [reo:deconstructing] we took the channel-based coordination model \reo [reo:primer], extracted the constraints underlying each channel and their composition, and formulated behaviour as a constraint satisfaction problem. There we identified that interaction consists of two phases: solving and updating constraints. Behaviour depends upon the current state. The semantics were described per state, in a series of rounds. Behaviour in a particular step is phrased in terms of synchronisation and data flow constraints, which describe the synchronisation and the data flow possibilities of the participating ports. Data flow on the end of a channel occurs when a single datum is passed through that end. Within a particular round data flow may occur on some number of ends; this is equated with the notion of synchrony. The constraints were based on a synchronisation and a data flow variable for each port. Splitting the constraints into synchronisation and data flow constraints is very natural, and it closely resembles the constraint automata model [reo:ca06]. These constraints are solved during the solving phase. Evolution over time is captured by incorporating state information into the constraints, and updating the state information between solving phases. Stronger motivation for the use of constraint-based techniques for the \reo coordination model can be found in our previous work [reo:deconstructing]. By abstracting from the channel metaphor and using only the constraints, the implementation is free to optimise constraints, eliminating costly infrastructure such as unnecessary channels. Furthermore, constraint-solving techniques are well studied in the literature, and there are heuristics to search efficiently for solutions, offering a significant improvement over other models underlying Reo implementations. To increase the expressiveness and usefulness of the model, we added external state variables, external function symbols and external predicates to the model.
These external symbols enable modelling of a wider range of primitives whose behaviour cannot be expressed by constraints alone, either because the internal constraint language is not expressive enough, or because they wrap external entities, such as those with externally maintained state. The constraint satisfaction process was extended with means for interacting with external entities to resolve external function symbols and predicates.
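To make the idea of synchronisation and data flow variables concrete, the following is a minimal sketch, not the paper's actual formalism: the port names `a` and `b`, the toy data domain, and the brute-force solver are our own assumptions. It models a single synchronous channel from end `a` to end `b` in the style of the earlier (classical) semantics, with an explicit no-flow value and a flow axiom relating each synchronisation variable to its data flow variable.

```python
# Toy model: each port p has a synchronisation variable sync_p (does data
# flow on p?) and a data flow variable data_p (which datum flows?).
from itertools import product

DATA = [1, 2]      # illustrative data domain
NO_FLOW = None     # explicit "no data flow" value (classical semantics)

def sync_channel(sol):
    # Sync channel a -> b: both ends fire together with the same datum.
    return sol["sync_a"] == sol["sync_b"] and sol["data_a"] == sol["data_b"]

def flow_axiom(sol, port):
    # Flow axiom: a port carries a real datum iff it synchronises.
    return (sol[f"data_{port}"] != NO_FLOW) == sol[f"sync_{port}"]

def solutions():
    # Brute-force enumeration of all total assignments (classical style).
    sols = []
    for sa, sb, da, db in product([True, False], [True, False],
                                  DATA + [NO_FLOW], DATA + [NO_FLOW]):
        sol = {"sync_a": sa, "sync_b": sb, "data_a": da, "data_b": db}
        if sync_channel(sol) and all(flow_axiom(sol, p) for p in ("a", "b")):
            sols.append(sol)
    return sols
```

Note that the assignment where nothing flows on either end is itself a valid solution; this "do nothing" solution is what the locality development later relies on.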
In this paper, we make three major contributions to the model:
 Partiality

Firstly, we allow solutions for the constraints and the set of known predicates and functions to be partial [partiallogic]. We introduce a minimal notion of partial solution which admits solutions defined only on the relevant parts (variables) of a connector. External symbols that are only discovered on the fly are more faithfully modelled in a partial setting.
 Locality

Secondly, we assume a do nothing solution for the constraints of each primitive exists, where no data is communicated. This assumption, in combination with partiality, allows certain solutions for part of a connector to be consistently extended to solutions for the full connector. Furthermore, our notion of locality enables independent parts of the connector to evolve concurrently.
 Interaction

Thirdly, we formalise the constraint satisfaction process with respect to the interaction with the external world, and we introduce external constraint symbols. These can be seen as lazy constraints, which are only processed by the engine on demand, by consulting an external source. These can be used to represent, for example, a stream of possible choices, which are requested on demand, such as the pages of different flight options available on an airline booking web page.
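The lazy, on-demand flavour of external constraint symbols can be sketched as follows. This is an illustration of the idea only, under our own assumptions: the class name, the `satisfiable` interface, and the flight-page source are hypothetical, not part of the paper's formal model.

```python
# Sketch: an external constraint symbol backed by a source that is only
# consulted when the engine actually needs (more) options.

def flight_pages():
    # Hypothetical external source: pages of flight options (route, price),
    # fetched one page at a time on demand.
    yield [("AMS-LIS", 120)]
    yield [("AMS-LIS", 95), ("AMS-OPO", 110)]

class ExternalConstraint:
    def __init__(self, source):
        self._source = source
        self._known = []            # options fetched so far

    def satisfiable(self, pred):
        # Check already-fetched options first; only if none matches do we
        # ask the external source for the next page.
        while True:
            if any(pred(opt) for opt in self._known):
                return True
            try:
                self._known.extend(next(self._source))
            except StopIteration:
                return False        # source exhausted, no option matches

ext = ExternalConstraint(flight_pages())
```

Here a query such as "is there a flight under 100?" triggers fetching pages only until a matching option appears, mirroring how the engine processes external constraint symbols on demand.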
Organization
The next section gives an overview of the approach taken in this paper, providing a global picture and relating the different semantics we present for our constraints. The rest of the paper is divided into two main parts. The first part describes how constraints are defined, defines four different semantics for variants of the constraint language, and relates them. We present a classical semantics in § 3 and two partial semantics in § 4, and exploit possible concurrency by allowing local solutions in § 5. The second part introduces a constraint-based engine to perform the actual coordination, searching for and applying solutions for the constraints. We describe stateful primitives in § 6, and add interaction in § 7. We draw some conclusions about this work in § 8.
2 Coordination = Scheduling + Data Flow
We view coordination as a constraint satisfaction problem, where solutions for the constraints determine how data should be communicated among components. More specifically, solutions to the constraints describe where and which data flow. Synchronisation variables describe the where, and data flow variables describe the which. With respect to our previous work [reo:deconstructing], we move from a classical semantics to a local semantics, where solutions address only part of the connector, as only a relevant subset of the variables of the constraints is required for solutions. We perform this transformation from classical to local semantics in a stepwise manner, distinguishing four different semantics that yield different notions of valid solution, mapping synchronisation and data flow variables to appropriate values:
 Classical semantics


solutions are always total (for the variables of the connector under consideration);

an explicit value \NOFLOW is added to the data domain to represent the data value when there is no data flow;

an explicit flow axiom is added to constraints to ensure the proper relationship between synchronisation variables and data flow variables; and

constraints are solved globally across the entire ‘connector’.

 Partial semantics


solutions may be partial, not binding all variables in a constraint;

the \NOFLOW value is removed and modelled by leaving the data flow variable undefined; and

as the previous flow axiom is no longer expressible, the relationship between synchronisation and data flow variables is established by a new meta-flow axiom, which acts after constraints have been solved to filter out invalid solutions.

 Simple semantics


solutions are partial, and the semantics is such that only certain “minimal” solutions, which define only the necessary variables, are found; and

the meta-flow axiom is expressible in this logic, so a simple flow axiom can again be added to the constraints.

 Local semantics


formulæ are partitioned into blocks, connected via shared variables;

each block is required to always admit a do nothing solution;

some solutions in a block can be found without looking at its neighbours, whenever there is no flow on its boundary synchronisation variables;

two or more such solutions are locally compatible;

blocks can be merged in order to find more solutions, in collaboration, when existing solutions do not ensure the no-flow condition over the boundary synchronisation variables; and

the search space underlying constraints is smaller than in the previous semantics, and there is a high degree of locality and concurrency.

We present formal embeddings between these logics, with respect to solutions that obey the various (meta-)flow axioms (linking solutions for synchronisation and data flow variables). We call such solutions firings. The first step is from a classical to a partial semantics. The number of solutions increases, as new (smaller) solutions also become valid. We then move to a simple semantics to regain an expressible flow axiom, where only some “minimal” partial solutions are accepted. In the last step we present a local semantics, where we avoid the need to inspect even more constraints: namely, we avoid visiting constraints added conjunctively to the system, by introducing some requirements on solutions to blocks of constraints.
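The role of the meta-flow axiom as a post hoc filter on partial solutions can be sketched as follows. This is a minimal illustration under our own naming assumptions (ports encoded as `sync_p`/`data_p` keys of a dictionary; unbound variables are simply absent), not the paper's formal definition of a firing.

```python
# A partial solution is a dict that may leave variables unbound.
# Meta-flow axiom (sketch): the data variable of a port is defined
# exactly when that port's synchronisation variable is bound to True.

def is_firing(partial):
    # Collect the ports mentioned by any bound variable.
    ports = {k.split("_", 1)[1] for k in partial}
    return all(
        (f"data_{p}" in partial) == (partial.get(f"sync_{p}") is True)
        for p in ports
    )
```

For example, binding `sync_a = True` without a datum for `a`, or a datum for `b` without `sync_b = True`, is filtered out, while a solution that pairs flow with a datum on each bound port passes.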
3 Coordination via Constraint Satisfaction
In previous work we described coordination in terms of constraint satisfaction. The main problem with that approach is that the constraints needed to be solved globally, which means that it does not scale as the basis of an engine for coordination. In this section, we adapt the underlying logic and notion of solution to increase the amount of available locality and concurrency in the constraints. Firstly, we move from the standard classical interpretation of the logic to a partial interpretation. This offers some improvement, but the solutions of a formula need to be filtered using a semantic variant of the flow axiom, which is undesirable because filtering them out during the constraint satisfaction process could be significantly faster. We improve on this situation by introducing a simpler notion of solution for formulæ, requiring only the relevant variables to be assigned. This approach avoids post hoc filtering of solutions. Unfortunately, even simple solutions still require more or less global constraint satisfaction. Although many constraints may correspond to no behaviour within parts of the connector (indeed, all constraints admit such solutions), the constraint satisfier must still visit those constraints to determine this fact. In the final variant, we simply assume that the no-behaviour solution can be chosen for any constraint not visited by the constraint solver, and thus the solver can find solutions to constraints without actually visiting all of them. This means that more concurrency is available and different parts of the implicit constraint graph can be solved independently and concurrently.
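The grouping of constraints into independently solvable parts can be sketched as connected components of the constraint graph: two constraints belong to the same block exactly when they share a variable. The sketch below is illustrative only; the representation of constraints as variable sets and the union-find helper are our own assumptions, not the paper's engine.

```python
# Sketch: partition a "soup of constraints" into blocks connected via
# shared variables, so that disjoint blocks can be solved concurrently.
from collections import defaultdict

def blocks(constraints):
    # constraints: dict mapping a constraint id to its set of variables.
    parent = {c: c for c in constraints}

    def find(c):
        # Union-find with path halving.
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    # Constraints sharing a variable end up in the same component.
    by_var = defaultdict(list)
    for c, vs in constraints.items():
        for v in vs:
            by_var[v].append(c)
    for cs in by_var.values():
        for other in cs[1:]:
            parent[find(other)] = find(cs[0])

    groups = defaultdict(set)
    for c in constraints:
        groups[find(c)].add(c)
    return sorted(map(sorted, groups.values()))
```

For instance, a FIFO on variables {a, b} and a filter on {b, c} fall into one block, while a lossy channel on {x, y} forms a separate block that can be solved independently.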
We start by motivating our work via an example; we then describe the classical approach to constraint satisfaction and its problems, before gradually refining our underlying logic to be more amenable to scalable satisfaction.
3.1 Coordination of a complex data generator
We introduce a motivating example, depicted in Figure 1, where a Complex Data Generator (CDG) sends data to Client. Data communication is controlled via a coordinating connector. The connector consists of a set of composed coordination building blocks, each with some associated constraints describing their behavioural possibilities. We call these building blocks simply primitives. The CDG and the Client are also primitives, and play the same coordination game by providing some constraints reflecting their intentions to read or write data.
Figure 1 uses a graphical notation to depict how the different primitives are connected. Each box represents a primitive with some associated constraints, connected to each other via shared variables. For example, the CDG shares variable with \fifo, \filterp, and \syncdrain, indicating that the same data flows through the lines connecting these primitives in the figure. The arrows represent the direction of data flow, thus the value of is given by and further constrained by the other attached primitives.
Most of the coordination primitives are channels from the \reo coordination language [reo:primer]. Previous work [reo:deconstructing] described a constraint-based approach to modelling \reo networks. Here we forego the graphical representation to emphasise the idea that a coordinating connector can be seen as a soup of constraints linked by shared variables. One optimisation we immediately employ is using the same name for ends which are connected by synchronous channels or replicators. (Semantically, this view of synchronous channels and replicators is valid. Note that in the original description of \reo, nodes act both as mergers and replicators. This behaviour can be emulated using merger and replicator primitives, as we have done. The result is a simpler notion of node: a 1:1 node which both synchronises and copies data from the output port to the input port.) Primitives act as constraint providers, which are combined until they reach a consensus regarding where and which data will be communicated. Only then does a possible communication of data take place, after which the primitives update their constraints.
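The renaming optimisation just mentioned can be sketched as a simple aliasing pass: ends joined by a synchronous channel or replicator collapse to one shared name, and the channel itself disappears from the constraint soup. The function name, constraint representation, and example end names below are our own illustrative assumptions.

```python
# Sketch: give ends connected by synchronous channels/replicators a single
# shared variable name, so those channels need no constraints of their own.

def merge_sync_ends(constraints, sync_pairs):
    # constraints: {constraint_id: set_of_variable_names}
    # sync_pairs: pairs of ends joined by a synchronous channel/replicator.
    alias = {}

    def canon(v):
        # Follow the alias chain to the canonical name.
        while v in alias:
            v = alias[v]
        return v

    for a, b in sync_pairs:
        ra, rb = canon(a), canon(b)
        if ra != rb:
            alias[rb] = ra

    # Rewrite every constraint to use canonical names only.
    return {c: {canon(v) for v in vs} for c, vs in constraints.items()}
```

After merging, a filter on end `b` and a FIFO on ends `c` and `d`, with `b` and `c` joined synchronously, simply share the name `b`, and no separate sync-channel constraint remains.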
In the particular case of the example in Figure 1, there is a complex data generator (CDG) that can write one of several possible values, a filter that can only receive (and send) data if it validates a given predicate , a component (User approval) that approves values based on interaction with a user, a destination client that receives the final result, and some other primitives that impose further constraints. We will come back to this example after introducing some basic notions about the constraints.
Notation
We write to denote a global set of possible data that can flow on the network. \NOFLOW is a special constant not in that represents no data flow. \X denotes a set of synchronisation variables over , a set of data flow variables over , a set of predicate symbols, and a set of function symbols such that . (Actually, is the Herbrand universe over function symbols .) We use the following variables to range over various domains: , , , and . Recall that synchronisation variables and data flow variables are intimately related, as one describes whether data flows and the other describes what the data is.
3.2 Classical Semantics
Consider the logic with the following syntax of formulæ () and terms ():
is true. We assume that one of the internal predicates in is equality, which is denoted using the standard infix notation . The other logical connectives can be encoded as usual: ; ; ; and . Constraints can be easily extended with an existential quantifier, provided that it does not appear in a negative position, or alternatively, that it is used only at the top level.
The semantics is based on a relation , where is a total map from to and from to , and is an arity-indexed total map from to , for each , where is the set of all predicate symbols of arity , and \T is the set of all possible ground terms (terms with no variables) plus the constant \NOFLOW. The semantics is given by a satisfaction relation, defined as follows. The function replaces all variables by , and we assume that whenever , for some .
[Classical Satisfaction]