March 6, 2018
Fisheye Consistency: Keeping Data in Synch in a Georeplicated World
Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this lack, as a first contribution, this paper introduces the notion of proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions into the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and the proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so the paper not only extends the domain of consistency conditions, but provides a generic provably correct solution of direct relevance to modern georeplicated systems.
Asynchronous message-passing system, Broadcast abstraction, Causal consistency, Data consistency, Data replication, Geographical distribution, Linearizability, Provable property, Sequential consistency.
Data consistency in distributed systems Distributed computer systems are growing in size, be it in terms of machines, data, or geographic distribution. Ensuring strong consistency guarantees (e.g., linearizability) in such large-scale systems has attracted a lot of attention over the years, and remains today a highly challenging area, for reasons of cost, failures, and scalability. One popular strategy to address these challenges has been to propose and implement weaker guarantees (e.g., causal consistency, or eventual consistency).
These weaker consistency models are not a desirable goal in themselves, but rather an unavoidable compromise to obtain acceptable performance and availability [8, 13, 40]. These works generally try to minimize violations of strong consistency, as such violations create anomalies for programmers and users. They further emphasize the low probability of such violations in their real deployments.
Recent related works For brevity, we cannot name all the many weak consistency conditions that have been proposed in the past. We focus instead on the most recent works in this area. One of the main hurdles in building systems and applications based on weak consistency models is how to generate an eventually consistent and meaningful image of the shared memory or storage. In particular, a paramount sticking point is how to handle conflicting concurrent write (or update) operations and merge their results in a way that suits the target application. To that end, various conditions that enable custom conflict resolution, and a host of corresponding data types, have been proposed and implemented [4, 5, 10, 14, 26, 30, 35, 36].
Another form of hybrid consistency conditions can be found in the seminal works on release consistency [18, 21] and hybrid consistency [7, 16], which distinguish between strong and weak operations such that strong operations enjoy stronger consistency guarantees than weak operations. Additional mechanisms and frameworks that enable combining operations of varying consistency levels have been recently proposed in the context of large scale and geo-replicated data centers [38, 40].
Motivation and problem statement In spite of their benefits, the above consistency conditions generally ignore the relative “distance” between nodes in the underlying “infrastructure”, where the notions of “distance” and “infrastructure” may be logical or physical, depending on the application. This is unfortunate as distributed systems must scale out and geo-replication is becoming more common. In a geo-replicated system, the network latency and bandwidth connecting nearby servers is usually at least an order of magnitude better than what is obtained between remote servers. This means that the cost of maintaining strong consistency among nearby nodes becomes affordable compared to the overall network costs and latencies in the system.
Some production-grade systems acknowledge the importance of distance when enforcing consistency, and do propose consistency mechanisms based on node locations in a distributed system (e.g., whether nodes are located in the same or in different data centers). Unfortunately, these production-grade systems usually do not distinguish between semantics and implementation. Rather, their consistency model is defined in operational terms, whose full implications can be difficult to grasp. In Cassandra, for instance, the application can specify for each operation the type of consistency guarantee it desires. For example, the constraints QUORUM and ALL require the involvement of a quorum of replicas and of all replicas, respectively; while LOCAL_QUORUM is satisfied when a quorum of the local data center is contacted, and EACH_QUORUM requires a quorum in each data center. These guarantees are defined by their implementation, but do not provide the programmer with a precise image of the consistency they deliver.
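To make the operational flavor of these definitions concrete, the small sketch below models (in our own, deliberately simplified terms, not Cassandra's actual implementation) when a quorum-style consistency level is satisfied, given per-data-center acknowledgement counts:

```python
# Simplified model of quorum-style consistency levels (illustrative only).
# `acks` maps each data center to the number of replica acknowledgements
# received so far; `replicas` maps each data center to its replica count.

def majority(n):
    return n // 2 + 1

def satisfied(level, acks, replicas):
    total_acks = sum(acks.values())
    total_replicas = sum(replicas.values())
    if level == "ALL":
        return total_acks == total_replicas
    if level == "QUORUM":          # majority of all replicas, wherever they are
        return total_acks >= majority(total_replicas)
    if level == "LOCAL_QUORUM":    # majority in the coordinator's data center
        return acks["local"] >= majority(replicas["local"])
    if level == "EACH_QUORUM":     # majority in every data center
        return all(acks[dc] >= majority(replicas[dc]) for dc in replicas)
    raise ValueError(level)
```

With 3 replicas in each of two data centers, two local acknowledgements already satisfy LOCAL_QUORUM, but EACH_QUORUM additionally needs two acknowledgements from the remote site; the guarantee is thus defined by which replicas answered, not by the orderings the application may observe.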
The need to take "distance" into account in consistency models, and the current lack of any formal underpinning to do so, are exactly what motivates the hybridization of consistency conditions that we propose in this paper (which we call fisheye consistency). Fisheye consistency conditions provide strong guarantees only for operations issued at nearby servers. In particular, there are many applications where one can expect that concurrent operations on the same objects are likely to be generated by geographically nearby nodes, e.g., due to business hours in different time zones, or because these objects represent localized information, etc. In such situations, a fisheye consistency condition would in fact provide global strong consistency at the cost of maintaining only locally strong consistency.
Consider for instance a node p that is "close" to a node q, but "far" from a node r: a causally consistent read/write register will offer p the same (weak) guarantees on the operations of q as on the operations of r. This may be suboptimal, as many applications could benefit from varying levels of consistency conditioned on "how far" nodes are from each other. Stated differently: a node can accept that "remote" changes only reach it with weak guarantees (e.g., because information takes time to travel), but it wants changes "close" to it to come with strong guarantees (as "local" changes might impact it more directly).
In this work, we propose to address this problem by integrating a notion of node proximity in the definition of data consistency. To that end, we formally define a new family of hybrid consistency models that links the strength of data consistency with the proximity of the participating nodes. In our approach, a particular hybrid model takes as input a proximity graph, and two consistency conditions taken from a set of totally ordered consistency conditions, namely a strong one and a weaker one. A classical set of totally ordered conditions is the following one: linearizability, sequential consistency, causal consistency, and PRAM consistency. Moreover, as already said, the notion of proximity can be geographical (cluster-based physical distribution of the nodes), or purely logical (as in some peer-to-peer systems).
The philosophy we advocate is related to that of Parallel Snapshot Isolation (PSI). PSI combines strong consistency (snapshot isolation) for transactions started at nodes in the same site of a geo-replicated system, but only ensures causality among transactions started at different sites. In addition, PSI prevents write-write conflicts by disallowing concurrent transactions with conflicting write sets, with the exception of commutable objects.
Although PSI and our work operate at different granularities (fisheye consistency is expressed on individual operations, each accessing a single object, while PSI addresses general transactions), they both show the interest of consistency conditions in which nearby nodes enjoy stronger semantics than remote ones. In spite of this similarity, however, the family of consistency conditions we propose distinguishes itself from PSI in a number of key dimensions. First, PSI is a specific condition, while fisheye consistency offers a general framework for defining multiple such conditions. PSI only distinguishes between nodes at the same physical site and remote nodes, whereas fisheye consistency accepts arbitrary proximity graphs, which can be physical or logical. Finally, the definition of PSI is given by a reference implementation, whereas fisheye consistency is defined in functional terms, as restrictions on the ordering of operations that can be seen by applications, independently of the implementation we propose. As a result, we believe that our formalism makes it easier for users to express and understand the semantics of a given consistency condition, and to prove the correctness of a program written w.r.t. such a condition.
Roadmap The paper is composed of 6 sections. Section 2 introduces the system model and two classical data consistency conditions, namely sequential consistency (SC) and causal consistency (CC). Then, Section 3 defines the notion of a proximity graph and the associated fisheye consistency condition, which considers SC as its strong condition and CC as its weak condition. Section 4 presents a broadcast abstraction, and Section 5 builds on top of this communication abstraction a distributed algorithm implementing this hybrid proximity-based data consistency condition. These algorithms are generic, where the genericity parameter is the proximity graph. Interestingly, their two extreme instantiations provide natural implementations of SC and CC. Finally, Section 6 concludes the paper.
2 System Model and Basic Consistency Conditions
2.1 System model
The system consists of n processes denoted p_1, …, p_n. We denote by Π the set of all processes. Each process is sequential and asynchronous. "Asynchronous" means that each process proceeds at its own speed, which is arbitrary, may vary with time, and remains always unknown to the other processes. Said differently, there is no notion of a global time that could be used by the processes.
Processes communicate by sending and receiving messages through channels. Each channel is reliable (no message loss, duplication, creation, or corruption), and asynchronous (transit times are arbitrary but finite, and remain unknown to the processes). Each pair of processes is connected by a bi-directional channel.
2.2 Basic notions and definitions
This section is a short reminder of the fundamental notions typically used to define the consistency guarantees of distributed objects, namely, operation, history, partial order on operations, and history equivalence. Interested readers will find in-depth presentations of these notions in textbooks such as [9, 19, 27, 31].
Concurrent objects with sequential specification A concurrent object is an object that can be simultaneously accessed by different processes. At the application level the processes interact through concurrent objects [19, 31]. Each object is defined by a sequential specification, which is a set including all the correct sequences of operations and their results that can be applied to and obtained from the object. These sequences are called legal sequences.
Execution history The execution of a set of processes interacting through objects is captured by a history Ĥ = (H, →H), where →H is a partial order on the set H of the object operations invoked by the processes.
Concurrency and sequential history If two operations are not ordered in a history, they are said to be concurrent. A history is said to be sequential if it does not include any concurrent operations. In this case, the partial order is a total order.
Equivalent history Let Ĥ|p_i represent the projection of Ĥ onto the process p_i, i.e., the restriction of Ĥ to the operations occurring at process p_i. Two histories Ĥ and Ĥ′ are equivalent if no process can distinguish them, i.e., if Ĥ|p_i = Ĥ′|p_i for every process p_i.
Legal history Ŝ being a sequential history, let Ŝ|X represent the projection of Ŝ onto the object X. A history Ŝ is legal if, for any object X, the sequence Ŝ|X belongs to the specification of X.
Process order Notice that, since we assume that processes are sequential, we restrict the discussion in this paper to execution histories Ĥ for which, for every process p_i, Ĥ|p_i is sequential. This total order is also called the process order for p_i.
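These notions translate directly into a small executable model. The following sketch (our own encoding, not part of the paper's formalism) represents an operation as a tuple, and implements per-process projection and history equivalence:

```python
from collections import namedtuple

# An operation: invoking process, accessed object, kind ("read"/"write"), value.
Op = namedtuple("Op", "proc obj kind value")

def projection(history, proc):
    """Restriction of a history (here, a list of operations in some total
    order) to the operations of one process."""
    return [op for op in history if op.proc == proc]

def equivalent(h1, h2):
    """Two histories are equivalent if no process can distinguish them,
    i.e., they have identical per-process projections."""
    procs = {op.proc for op in h1} | {op.proc for op in h2}
    return all(projection(h1, p) == projection(h2, p) for p in procs)
```

For example, swapping two operations of different processes yields an equivalent history, since every per-process projection is unchanged.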
2.3 Sequential consistency
Intuitively, an execution is sequentially consistent if it could have been produced by executing (with the help of a scheduler) the processes on a monoprocessor. Formally, a history Ĥ is sequentially consistent (SC) if there exists a sequential history Ŝ such that:
Ŝ is legal (the specification of each object is respected),
Ĥ and Ŝ are equivalent (no process can distinguish Ĥ, what occurred, from Ŝ, what we would like to see in order to reason about the execution).
One can notice that SC does not demand that the sequence Ŝ respect the real-time occurrence order on the operations. This is the fundamental difference between linearizability and SC.
An example of a sequentially consistent history Ĥ is shown in Figure 1. Let us observe that, although one operation occurs before another in physical time, the later read does not see the effect of the earlier write operation, and still returns the initial value. A legal sequential history Ŝ, equivalent to Ĥ, can nevertheless easily be built.
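This definition suggests a brute-force test of sequential consistency for small histories over read/write registers: search for a legal permutation of all operations that preserves every process order. A minimal sketch, under our own encoding of operations as (process, object, kind, value) tuples, and assuming registers are initialized to 0:

```python
from itertools import permutations

def legal(seq):
    """A sequence over read/write registers is legal if every read returns
    the value of the latest preceding write (or 0 initially)."""
    state = {}
    for proc, obj, kind, val in seq:
        if kind == "write":
            state[obj] = val
        elif state.get(obj, 0) != val:
            return False
    return True

def respects_process_order(seq, history):
    """Checks that seq orders each process's operations as in the history
    (operations are assumed distinct, so they can be used as dict keys)."""
    pos = {op: i for i, op in enumerate(seq)}
    per_proc = {}
    for op in history:            # history lists each process's ops in order
        per_proc.setdefault(op[0], []).append(op)
    return all(pos[a] < pos[b]
               for ops in per_proc.values()
               for a, b in zip(ops, ops[1:]))

def sequentially_consistent(history):
    """Exhaustive search for a witness sequence (exponential: sketch only)."""
    return any(legal(seq) and respects_process_order(seq, history)
               for seq in permutations(history))
```

The classic non-SC pattern, where two processes each write one register and then read 0 from the other's register, is rejected: no witness permutation exists.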
2.4 Causal consistency
In a sequentially consistent execution, all processes perceive all operations in the same order, which is captured by the existence of a sequential and legal history Ŝ. Causal consistency relaxes this constraint for read/write registers, and allows different processes to perceive different orders of operations, as long as causality is preserved.
Formally, a history Ĥ in which processes interact through concurrent read/write registers is causally consistent (CC) if:
There is a causal order →co on the operations of Ĥ, i.e., a partial order that links each read to at most one latest write (or otherwise to an initial value), such that the value returned by the read is the one written by this latest write, and that respects the process order of all processes.
For each process p_i, there is a sequential and legal history Ŝ_i that
is equivalent to Ĥ_i, where Ĥ_i is the sub-history of Ĥ that contains all the operations of p_i, plus the writes of all the other processes,
respects →co (i.e., →co ⊆ →S_i).
Intuitively, this definition means that all processes see causally related write operations in the same order, but can see operations that are not causally related (w.r.t. →co) in different orders.
An example of a causally consistent execution is given in Figure 2. In this execution, two processes observe two write operations on the same register in two different orders. This is acceptable in a causally consistent history because the two writes are not causally related. It would not be acceptable in a sequentially consistent history, where the same total order on operations must be observed by all the processes. (When considering read/write objects, this constitutes the main difference between SC and CC.)
3 The Family of Fisheye Consistency Conditions
This section introduces a hybrid consistency model based on (a) two consistency conditions and (b) the notion of a proximity graph defined on the computing nodes (processes). The two consistency conditions must be totally ordered in the sense that any execution satisfying the stronger one also satisfies the weaker one. Linearizability and SC define such a pair of consistency conditions, and similarly SC and CC are such a pair.
3.1 The notion of a proximity graph
Let us assume that, for physical or logical reasons linked to the application, each process (node) can be considered either close to or remote from other processes. This notion of "closeness" can be captured through a proximity graph denoted G, whose vertices are the processes of the system. The edges of G are undirected. N_G(p_i) denotes the set of neighbors of p_i in G.
The aim of G is to state the level of consistency imposed on processes, in the following sense: the existence of an edge between two processes in G imposes a stronger data consistency level than between processes not connected in G.
Example To illustrate the semantics of G, we extend the original scenario used by Ahamad, Neiger et al. to motivate causal consistency. Consider the three processes of Figure 3, p1, p2, and p3. Processes p1 and p2 interact closely with one another and behave symmetrically: they concurrently write the shared variable x, then set the flags f1 and f2 respectively to 1, and finally read x. By contrast, process p3 behaves sequentially w.r.t. p1 and p2: p3 waits for p1 and p2 to write on x, using the flags f1 and f2, and then writes x.
If we assume a model that provides causal consistency at a minimum, the write of x by p3 is guaranteed to be seen after the writes of p1 and p2 by all processes (because p3 waits on f1 and f2 to be set to 1). Causal consistency however does not impose any consistent order on the writes of p1 and p2 on x. In the execution shown on Figure 4, this means that although p1 reads 2 in x (and thus sees the write of p2 after its own write), p2 might still read 1 in x (thus perceiving 'x.write(1)' and 'x.write(2)' in the opposite order to that of p1).
Sequential consistency removes this ambiguity: in this case, in Figure 4, p2 can only read 2 (the value it wrote) or 3 (written by p3), but not 1. Sequential consistency is however too strong here: because the write operation of p3 is already causally ordered with those of p1 and p2, this operation does not need any additional synchronization effort. This situation can be seen as an extension of the write concurrency freedom condition: p3 is here free of concurrent writes w.r.t. p1 and p2, making causal consistency equivalent to sequential consistency for p3. Processes p1 and p2 however write to x concurrently, in which case causal consistency is not enough to ensure strongly consistent results.
If we assume p1 and p2 execute in the same data center, while p3 is located on a distant site, this example illustrates a more general case in which, because of a program's logic or activity patterns, no operations at one site ever conflict with those at another. In such a situation, rather than enforce a strong (and costly) consistency in the whole system, we propose a form of consistency that is strong for processes within the same site (here p1 and p2), but weak between sites (here between p1 and p2 on one hand and p3 on the other).
In our model, the synchronization needs of individual processes are captured by the proximity graph introduced at the start of this section and shown in Figure 5: p1 and p2 are connected, meaning the operations they execute should be perceived as strongly consistent w.r.t. one another; p3 is connected to neither p1 nor p2, meaning a weaker consistency is allowed between the operations executed at p3 and those of p1 and p2.
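A proximity graph is simply an undirected graph over process identifiers. The sketch below (our own encoding) captures the neighborhood relation and the consistency level it mandates between two processes:

```python
class ProximityGraph:
    """A proximity graph: processes as vertices, undirected edges."""

    def __init__(self, processes, edges):
        self.processes = set(processes)
        self.edges = {frozenset(e) for e in edges}   # undirected

    def neighbors(self, p):
        """N_G(p): the set of processes connected to p."""
        return {q for q in self.processes
                if frozenset((p, q)) in self.edges}

    def required_consistency(self, p, q):
        """Strong consistency between neighbors, weak otherwise."""
        if frozenset((p, q)) in self.edges:
            return "sequential"
        return "causal"
```

On the example of Figure 5, the graph has the single edge {p1, p2}: operations of p1 and p2 must be mutually sequentially consistent, while p3's operations only need causal consistency w.r.t. the others.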
3.2 Fisheye consistency for the pair (sequential consistency, causal consistency)
When applied to the scenario of Figure 4, fisheye consistency combines two consistency conditions (a strong and a weaker one, here sequential and causal consistency) and a proximity graph G to form a hybrid distance-based consistency condition, which we call G-fisheye (SC,CC)-consistency.
The intuition in combining SC and CC is to require that (write) operations be observed in the same order by all processes if:
They are causally related (as in causal consistency),
Or they occur on "close" nodes (as defined by G).
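These two conditions can be combined into a single predicate. The sketch below (all names are ours) decides whether two write operations must be perceived in the same order by all processes, given a causal order and the proximity graph:

```python
def must_order(op1, op2, causal_pairs, edges):
    """True iff op1 and op2 must be seen in the same total order by every
    process: either they are causally related, or their issuing processes
    are neighbors in the proximity graph G.
    Operations are (process, label) tuples; `causal_pairs` is the causal
    order as a set of (op, op) pairs; `edges` is G as a set of frozensets."""
    causally_related = (op1, op2) in causal_pairs or (op2, op1) in causal_pairs
    close = frozenset((op1[0], op2[0])) in edges
    return causally_related or close
```

Writes of processes that are neither neighbors nor causally related may be perceived in different orders by different processes, exactly as under plain causal consistency.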
Formal definition Formally, we say that a history Ĥ is G-fisheye (SC,CC)-consistent if:
There is a causal order →co induced by Ĥ (as in causal consistency); and
→co can be extended to a subsuming order →G (i.e. →co ⊆ →G) so that
→G|(p_i,p_j) is a total order for every edge (p_i, p_j) of G, where →G|(p_i,p_j) is the restriction of →G to the write operations of p_i and p_j; and
for each process p_i there is a history Ŝ_i that
(a) is sequential and legal;
(b) is equivalent to Ĥ_i; and
(c) respects →G, i.e., →G ⊆ →S_i.
If we apply this definition to the example of Figure 4 with the proximity graph proposed in Figure 5, we obtain the following: because p1 and p2 are connected in G, x.write(1) by p1 and x.write(2) by p2 must be totally ordered in →G (and hence in any sequential history Ŝ_i perceived by any process p_i). x.write(3) by p3 must be ordered after the writes on x by p1 and p2 because of the causality imposed by →co. As a result, if the system is G-fisheye (SC,CC)-consistent, the value read by p2 can be equal to 2 or 3, but not to 1. This set of possible values is the same as in sequential consistency, with the difference that G-fisheye (SC,CC)-consistency does not impose any total order on the operations of p3.
Given a system of n processes, let G∅ denote the graph with no edges, and Gall the graph with an edge connecting each pair of distinct processes. It is easy to see that CC is G∅-fisheye (SC,CC)-consistency. Similarly, SC is Gall-fisheye (SC,CC)-consistency.
A larger example Figure 6 and Table 1 illustrate the semantics of G-fisheye (SC,CC)-consistency on a second, larger, example. In this example, the processes p1 and p2 on one hand, and p3 and p4 on the other hand, are neighbors in the proximity graph (shown on the left). There are two pairs of write operations: the writes of 2 and 3 on the register x, and the writes of 4 and 5 on the register y. In a sequentially consistent history, both pairs of writes must be seen in the same order by all processes. As a consequence, if one process sees the value 2 first and then the value 3 for x, the other processes must do the same, and only the value 3 can be returned by x?. For the same reason, only one of the two written values can be returned by y?, as shown in the first line of Table 1.
In a causally consistent history, however, both pairs of writes (on x and on y) are causally independent. As a result, any two processes can see each pair in different orders: x? may return 2 or 3, and y? 4 or 5 (second line of Table 1).
G-fisheye (SC,CC)-consistency provides intermediate guarantees: because the two processes that write to x are neighbors in G, their writes must be observed in the same order by all processes, and x? must return 3, as in a sequentially consistent history. However, because the two processes that write to y are not connected in G, their writes may be seen in different orders by different processes (as in a causally consistent history), and y? may return 4 or 5 (last line of Table 1).
4 Construction of an Underlying (SC,CC)-Broadcast Operation
Our implementation of G-fisheye (SC,CC)-consistency relies on a broadcast operation with hybrid ordering guarantees. In this section, we present this hybrid broadcast abstraction, before moving on to the actual implementation of G-fisheye (SC,CC)-consistency in Section 5.
4.1 G-fisheye (SC,CC)-broadcast: definition
The hybrid broadcast we propose, denoted G-(SC,CC)-broadcast, is parametrized by a proximity graph G, which determines which kind of delivery order applies to which messages, according to the position of the sender in G. Messages (SC,CC)-broadcast by processes that are neighbors in G must be delivered in the same order at all the processes, while the delivery of the other messages only needs to respect causal order.
The (SC,CC)-broadcast abstraction provides the processes with two operations, denoted toco_broadcast() and toco_deliver(). We say that messages are toco-broadcast and toco-delivered.
Causal message order Let M be the set of messages that are toco-broadcast. The causal message delivery order, denoted →M, is defined as follows [11, 34]. Let m1, m2 ∈ M; m1 →M m2 iff one of the following conditions holds:
m1 and m2 have been toco-broadcast by the same process, with m1 first;
m1 was toco-delivered by a process before this process toco-broadcast m2;
There exists a message m′ such that m1 →M m′ and m′ →M m2.
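On a finite execution trace, the relation defined by these three rules is the transitive closure of the pairs produced by the first two. A minimal sketch, under our own event encoding:

```python
def causal_order(direct_pairs, messages):
    """Computes the causal message order as the transitive closure (rule 3)
    of the direct precedence pairs given by rules 1 and 2.
    `direct_pairs` is a set of (m1, m2) pairs, `messages` the message set."""
    order = set(direct_pairs)
    changed = True
    while changed:                      # naive fixed-point iteration
        changed = False
        for m1, m2 in list(order):
            for m3 in messages:
                if (m2, m3) in order and (m1, m3) not in order:
                    order.add((m1, m3))
                    changed = True
    return order
```

For instance, if a precedes b (same sender) and b was delivered before c was broadcast, then a →M c follows by closure.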
Definition of the G-fisheye (SC,CC)-broadcast The (SC,CC)-broadcast abstraction is defined by the following properties.
If a process toco-delivers a message m, this message was toco-broadcast by some process. (No spurious message.)
A message is toco-delivered at most once. (No duplication.)
- G-delivery order.
For all processes p_i and p_j such that (p_i, p_j) is an edge of G, and for all messages m_i and m_j such that m_i was toco-broadcast by p_i and m_j was toco-broadcast by p_j, if a process toco-delivers m_i before m_j, no process toco-delivers m_j before m_i.
- Causal order.
If m →M m′, no process toco-delivers m′ before m.
If a process toco-broadcasts a message m, this message is toco-delivered by all processes.
It is easy to see that if G has no edges, this definition boils down to causal delivery, and if G is fully connected (a clique), this definition specifies total order delivery respecting causal order. Finally, if G is fully connected and we suppress the "causal order" property, the definition boils down to total order delivery.
4.2 G-fisheye (SC,CC)-broadcast: algorithm
Local variables To implement the G-fisheye (SC,CC)-broadcast abstraction, each process p_i manages three local variables.
causal_i[1..n] is a local vector clock used to ensure a causal delivery order of the messages; causal_i[j] is the sequence number of the next message that p_i will toco-deliver from p_j.
total_i[1..n] is a vector of logical clock values such that total_i[i] is the local logical clock of p_i (Lamport's clock), and total_i[j] is the value of total_j[j] as known by p_i.
pending_i is a set containing the messages received and not yet toco-delivered by p_i.
Description of the algorithm Recall that, for simplicity, we assume that the channels are FIFO. Algorithm 1 describes the behavior of a process p_i. This behavior is decomposed into four parts.
The first part (lines 1-6) is the code of the operation toco_broadcast(m). Process p_i first increases its local clock total_i[i] and sends the protocol message tocobc(m, causal_i, total_i[i]) to each other process. In addition to the application message m, this protocol message carries the control information needed to ensure the correct toco-delivery of m, namely, the local causality vector (causal_i) and the value of the local clock (total_i[i]). Then, this protocol message is added to the set pending_i, and causal_i[i] is increased by 1 (this captures the fact that the future application messages toco-broadcast by p_i will causally depend on m).
The second part (lines 8-15) is the code executed by p_i when it receives a protocol message tocobc(m, causal, clock) from p_j. When this occurs, p_i first adds this protocol message to pending_i, and updates its view of the local clock of p_j (total_i[j]) to the sending date of the protocol message (namely, clock). Then, if the local clock of p_i is late (total_i[i] < total_i[j]), p_i catches up (line 12), and informs the other processes of it (line 13).
The third part (lines 17-19) is the processing of a catch_up message from a process p_j. In this case, p_i updates its view of p_j's local clock to the date carried by the catch_up message. Let us notice that, as channels are FIFO, a view total_i[j] can only increase.
The final part (lines 21-34) is a background task executed by p_i, in which the application messages are toco-delivered. A first set contains the protocol messages that were received, have not yet been toco-delivered, and are "minimal" with respect to the causality relation →M. This minimality is determined from the vector clock carried by each message, and the current value of p_i's vector clock (causal_i). If only causal consistency were considered, the messages in this set could be delivered.
Then, p_i extracts from this set the messages that can be toco-delivered. Those are usually called stable messages. The notion of stability refers here to the delivery constraint imposed by the proximity graph G. More precisely, a first subset is computed, which contains the messages that (thanks to the FIFO channels and the catch_up messages) cannot be made unstable (with respect to the total delivery order defined by the logical clocks) by messages that p_i will receive in the future. Then a second subset is computed, containing the messages whose toco-delivery cannot be made incorrect, w.r.t. G, by any message received and not yet toco-delivered.
Once a non-empty set has been computed, p_i extracts the message m whose timestamp ⟨clock, sender⟩ is "minimal" with respect to the timestamp-based total order (the identity of m's sender breaks ties). This message is then removed from pending_i and toco-delivered. Finally, if m was toco-broadcast by some p_j with j ≠ i, causal_i[j] is increased to take this toco-delivery into account (all the messages toco-broadcast by p_j in the future will causally depend on m, and this is encoded in causal_i[j]). If j = i, this causality update was done at line 5.
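The two delivery tests at the heart of this task can be sketched compactly. Under our own simplified message encoding (a message carries its sender, the sender's causal vector at broadcast time, and a scalar clock), the first function is the classical causal-delivery test, and the second a G-stability test: no still-unreceived message from a neighbor of the sender can precede the message in the ⟨clock, sender⟩ total order, because FIFO channels guarantee that clocks of unreceived messages exceed the last known value:

```python
def causally_ready(msg, delivered_count):
    """Causal-delivery test: every message the sender depended on at
    broadcast time must already be delivered locally. msg["vc"] maps each
    process to the number of its messages the sender depended on;
    delivered_count maps each process to the number delivered locally."""
    return all(delivered_count.get(p, 0) >= c for p, c in msg["vc"].items())

def g_stable(msg, known_clock, neighbors_of_sender):
    """Stability test w.r.t. the <clock, sender> total order (process ids
    are comparable). The smallest timestamp a still-unreceived message from
    neighbor q can carry is (known_clock[q] + 1, q), since clocks in
    messages from q strictly increase and channels are FIFO."""
    ts = (msg["clock"], msg["sender"])
    return all((known_clock[q] + 1, q) > ts for q in neighbors_of_sender)
```

A message is extracted for toco-delivery only when both predicates hold; with an edgeless graph, g_stable is vacuously true and the algorithm degenerates to causal broadcast, as noted above for G∅.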
Theorem 1. Algorithm 1 implements a G-fisheye (SC,CC)-broadcast.
4.3 Proof of Theorem 1
The proof combines elements of the proofs of the traditional causal-order [12, 34] and total-order broadcast algorithms [23, 8] on which Algorithm 1 is based. It relies in particular on the monotonicity of the clocks causal_i and total_i, and on the reliability and FIFO properties of the underlying communication channels. We first prove some useful lemmata, before proving termination, causal order, and G-delivery order in intermediate theorems. We finally combine these intermediate results to prove Theorem 1.
We use the usual partial order on vector clocks: V ≤ V′ ⟺ ∀ k ∈ {1, …, n} : V[k] ≤ V′[k],
with its accompanying strict partial order: V < V′ ⟺ (V ≤ V′) ∧ (V ≠ V′).
We use the lexicographic order <lex on the scalar clocks: ⟨c, i⟩ <lex ⟨c′, j⟩ ⟺ (c < c′) ∨ (c = c′ ∧ i < j).
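These orders can be stated executably. The following sketch implements the componentwise partial order on vector clocks and the lexicographic total order on ⟨scalar clock, process id⟩ timestamps:

```python
def vc_leq(v1, v2):
    """Usual partial order on vector clocks: componentwise <=."""
    return all(a <= b for a, b in zip(v1, v2))

def vc_lt(v1, v2):
    """Accompanying strict partial order: <= and not equal."""
    return vc_leq(v1, v2) and v1 != v2

def ts_lt(ts1, ts2):
    """Lexicographic order on <scalar clock, process id> timestamps.
    Python tuple comparison is already lexicographic."""
    return ts1 < ts2
```

Note that the vector-clock order is only partial: two clocks can be incomparable in both directions, which is precisely how concurrent messages manifest.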
We start with three useful lemmata on causal_i and total_i. These lemmata establish the traditional properties expected of logical and vector clocks.
The following holds on the clock values taken by causal_i:
The successive values taken by causal_i in process p_i are monotonically increasing.
The sequence of causal_i values attached to the tocobc messages sent out by process p_i is strictly increasing.
Proof Proposition 1 is derived from the fact that the two lines that modify causal_i (lines 5 and 32) only increase its value. Proposition 2 follows from Proposition 1 and the fact that line 5 ensures that successive tocobc messages cannot include identical causal_i values.
The following holds on the clock values taken by total_i:
The successive values taken by total_i[i] in process p_i are monotonically increasing.
The sequence of total_i[i] values included in the tocobc and catch_up messages sent out by process p_i is strictly increasing.
The successive values taken by total_i[j] in process p_i are monotonically increasing.
Proof Proposition 1 is derived from the fact that the lines that modify (lines 2 and 12) only increase its value (in the case of line 12 because of the condition at line 11). Proposition 2 follows from Proposition 1, and the fact that lines 2 and 12 insures successive tobobc and catch_up messages cannot include identical values.
To prove Proposition 3, we first show that the sequences of clock values a process receives from each other process are strictly increasing (1). Indeed, the entries a process maintains for the other processes can only be modified at lines 10 and 18, by values included in tocobc and catch_up messages, when these messages are received. Because the underlying channels are FIFO and reliable, Proposition 2 implies that the sequence of clock values received from any given process is also strictly increasing, which shows (1). Proposition 3 follows.
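The monotonicity argument can be illustrated with a toy guarded update: an entry is only ever overwritten by a strictly larger received value (in the spirit of the guard at line 11), so its successive values are monotonically increasing even when clock values carried by different kinds of messages arrive interleaved. The function below is an illustrative assumption, not the algorithm's actual code.

```python
# Toy illustration (not Algorithm 1 itself): a vector-clock entry is
# only ever overwritten by a strictly larger received value, so its
# successive values form a monotonically increasing sequence.
def advance(vc, j, received):
    if received > vc[j]:  # guarded update: only ever move forward
        vc[j] = received

vc = [0, 0, 0]
history = []
# Clock values carried by tocobc and catch_up messages may interleave.
for received in [1, 3, 2, 5]:
    advance(vc, 1, received)
    history.append(vc[1])
assert history == [1, 3, 3, 5]  # monotonically increasing
```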
We prove the reverse implication by induction on the protocol's execution at a process. When the protocol is initialized, its vector clock is null; because the above is true of any process, with Lemma 2, the invariant initially holds for every message that is toco-broadcast by a process.
Let us now assume the invariant holds at some point of the execution. The only step at which the invariant might become violated is when the vector clock is modified at line 32. When this increment occurs, the condition of the lemma potentially becomes true for additional messages. We want to show that there is only one such additional message, and that this message is the one that has just been delivered at line 31, thus completing the induction and proving the lemma.
For clarity’s sake, let us distinguish the value of the clock entry just before line 32 from its value just after; the latter is the former incremented.
The message satisfies the condition checked at line 24, and hence its causal timestamp is bounded accordingly. At line 24, the message has not yet been delivered (otherwise it would not be in the set computed there). Using the contrapositive of our induction hypothesis, we obtain (7).
Because of line 5, this is the only message toco-broadcast by its sender whose causal timestamp verifies (7). From this uniqueness and (7), we conclude that after the clock entry has been incremented at line 32, if a message sent by this process verifies the condition of the lemma, then
either the condition already held before the increment, and by the induction assumption, the message has already been delivered;
or the condition has just become true, in which case the message verifies (7), and it has just been delivered at line 31.
Theorem 2 (Termination). All messages toco-broadcast using Algorithm 1 are eventually toco-delivered by all processes in the system.
Proof. We show termination by contradiction. Assume a process toco-broadcasts a message with some timestamp, and that this message is never toco-delivered by some process.
If the destination process is not the sender, then because the underlying communication channels are reliable, it receives at some point the tocobc message containing the broadcast message (line 8), after which the message is among those it has received but not yet toco-delivered.
The message might never be toco-delivered because it never meets the condition to be selected at line 24 into the set of stable messages (noted below). We show by contradiction that this is not the case. First, and without loss of generality, we can choose the message so that it has a minimal causal timestamp among all the messages that the destination process never toco-delivers (be they from the same sender or from any other process). Minimality means here that every message with a strictly smaller causal timestamp is eventually toco-delivered.
Let us now assume the message is never selected into this set, i.e., that its selection condition at line 24 always fails. This means there is some process whose corresponding entry of the vector clock never catches up with the message's timestamp.
In the first case, we can consider the message sent by process i just before (which exists, since the failing condition implies process i has sent at least one earlier message). That message has a smaller timestamp, and hence, from (11), the same holds for it.
In the second case, applying Lemma 3 to the point when the sender toco-broadcast the message at line 3, we find an earlier message sent by process i that was received by the sender before it toco-broadcast its own message. In other words, that earlier message belongs to the causal past of the broadcast message, and because of the condition at line 24 and the increment at line 32, it must be toco-delivered first.
As in the first case, (11) then also applies to this message.
We conclude that if a message from some process is never toco-delivered by the destination process, then after some point it remains indefinitely in the set of messages that are received and stable but not yet toco-delivered.
Without loss of generality, we can now choose the message with the smallest total-order timestamp among all the messages never delivered by the destination process. Since these timestamps are totally ordered, and no timestamp is allocated twice, there is exactly one such message.
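The uniqueness claim rests on the classical tie-break by process identity: a timestamp is a (scalar clock, sender id) pair, so two distinct messages can never receive the same timestamp, and sorting by these pairs yields a single total order. A minimal illustration:

```python
# Timestamps as (scalar clock, sender id) pairs: even when clock values
# collide, the sender id makes every timestamp unique, so sorting yields
# one single total order.
timestamps = [(3, 1), (3, 0), (2, 2), (5, 1)]
total_order = sorted(timestamps)
assert total_order == [(2, 2), (3, 0), (3, 1), (5, 1)]
assert len(set(total_order)) == len(total_order)  # no timestamp allocated twice
```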
We first note that because channels are reliable, all processes eventually receive the tocobc protocol message of