Why You Can’t Beat Blockchains: Consistency and High Availability in Distributed Systems
Abstract
We study the issue of data consistency in highly available distributed systems. Specifically, we consider a distributed system that replicates its data at multiple sites, which is prone to partitions, and which is expected to be highly available. In such a setting, strong consistency, where all replicas of the system synchronously apply every operation, is not possible to implement. However, many weaker consistency criteria, which allow more behaviors than strong consistency, are implementable in distributed systems.
We focus on determining the strongest consistency criterion that can be implemented in a distributed system that tolerates partitions. We show that no criterion stronger than Monotonic Prefix Consistency (MPC) can be implemented. MPC is the consistency criterion underlying blockchains.
1 Introduction
Replication is a mechanism that enables sites from different geographical locations to access a shared data structure with low latency. It consists of creating copies of this data structure on each site of a distributed system. Ideally, replication should be transparent, in the sense that the users of the data structure should not notice discrepancies between the different copies of the data structure.
An ideal replication scheme could be implemented by keeping all sites synchronized after each update to the data structure. This ideal model is called strong consistency, or linearizability [1]. The disadvantage of this model is that it can cause large delays for users, and, worse, the data structure might not be available to use at all times. This may happen, for instance, if some sites of the system are unreachable, i.e., partitioned from the rest of the network. Briefly, it is not possible to implement strong consistency in a distributed system while ensuring high availability [2, 3].
Given this impossibility, developers rely on weaker notions of consistency, such as causal consistency [4]. Weaker consistency criteria do not require sites to be exactly synchronized as in strong consistency. For instance, causal consistency allows different sites to apply updates to the data structure in different orders, as long as the updates are not causally related. Informally, a consistency criterion specifies the behaviors that are allowed by a replicated data structure. In this sense, causal consistency is more permissive than strong consistency. We also say that strong consistency is stronger than causal consistency, as strong consistency allows strictly fewer behaviors than causal consistency. A natural question is then: What is the strongest consistency criterion that can be implemented by a replicated data structure?
In [5], it was proven that nothing stronger than observable causal consistency (a variant of causal consistency) can be implemented. It is an open question whether observable causal consistency itself is actually implementable. Moreover, [5] does not study consistency criteria that are not comparable to observable causal consistency. Indeed, there exist consistency criteria that are neither stronger than causal consistency, nor weaker, and which can be implemented by a replicated data structure.
In our paper, we explore one such consistency criterion. More precisely, we prove that, under some conditions which are natural in a large distributed system, nothing stronger than monotonic prefix consistency (MPC) [6] can be implemented. This result does not contradict the result from [5], since MPC and causal consistency are incomparable.
The reason why MPC and observable causal consistency are incomparable is as follows. MPC requires all sites to apply updates in the same order (but not necessarily synchronized at the same time, as in strong consistency), while causal consistency allows non-causally related updates to be applied in different orders. On the other hand, causal consistency requires all causally related updates to be applied in an order respecting causality, while MPC imposes no such constraint.
MPC corresponds to the consistency criterion that blockchains implement with high probability [7, 8, 9]. A blockchain is a replicated data structure, composed of a list of blocks. Under some conditions, blocks can be appended at the end of the list, and all participants of the blockchain agree on the order in which blocks are appended.
Overall, our contribution is to prove that, for a notion of behavior where updates are anonymous and their times and places of origin do not matter (as is the case in large-scale open implementations such as blockchains), nothing stronger than MPC can be implemented in a distributed setting. Blockchains therefore implement a consistency model which is closest to strong consistency and is achievable in a distributed setting. Moreover, we remark that clients who are only sensitive to our notion of behavior cannot tell the difference between a strongly consistent implementation and an MPC implementation.
In the rest of this paper, we first give preliminary notions and a formal definition of the problem we are addressing (Sections 2 and 3). We then turn our attention to the MPC model by defining it formally and through an implementation (Section 4). We prove that, given the notion of behavior mentioned above, and under conditions natural in a large-scale network (availability, convergence), nothing stronger than MPC can be implemented (Section 5). Then we compare MPC with other consistency models (Section 6), and conclude (Section 7).
2 Implementations of Replicated Data Structures
An implementation of a replicated data structure consists of several sites that communicate by sending messages. Messages are delivered asynchronously by the network, and can be reordered or delayed. To be able to build implementations that provide liveness guarantees, we assume all messages are eventually delivered by the network.
Each site of an implementation maintains a local state. This local state reflects the view that the site has on the replicated data structure, and may contain arbitrary data. Each site implements the protocol by means of an update handler, a query handler, and a message handler.
The update handler is used by clients to submit updates to the data structure. The update handler may modify the local states of the site, and broadcast a message to the other sites. Later, when another site receives the message, its message handler is triggered, possibly updating the local state of the site, and possibly broadcasting a new message.
The query handler is used by clients to make queries on the data structure. The query handler returns an answer to the client, and is a readonly operation that does not modify the local state or broadcast messages.
Remark 1
Our model only supports broadcast and not general peer-to-peer communication, but this is without loss of generality. We can simulate sending a message to a particular site by writing the identifier of the receiving site in the broadcast message. All other sites would then simply ignore messages that are not addressed to them.
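To make this model concrete, here is a minimal Python sketch of a site with its three handlers. All names are our own, and the particular handlers shown implement a naive broadcast-based replication, for illustration only; they do not correspond to any specific implementation from the paper.

```python
# Hypothetical sketch of the site model of Section 2: each site keeps a
# local state and exposes an update handler, a message handler, and a
# read-only query handler. Broadcasting is modeled by returning a message
# (or None) that the network is assumed to deliver to all other sites.

class Site:
    def __init__(self, pid):
        self.pid = pid
        self.state = []          # initial local state: the empty list

    def on_update(self, value):
        """Update handler: may modify the state and broadcast a message."""
        self.state = self.state + [value]
        return ("apply", value)  # message to broadcast (None = no message)

    def on_message(self, msg):
        """Message handler: may modify the state and broadcast a message."""
        kind, value = msg
        if kind == "apply" and value not in self.state:
            self.state = self.state + [value]
        return None

    def on_query(self):
        """Query handler: read-only, returns an answer to the client."""
        return list(self.state)
```

A client submitting an update on one site triggers a broadcast; delivering that message to another site brings the two local states in agreement.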
In this paper, we consider implementations of the list data structure. The list supports an update operation that adds an element to the list, and a query operation that returns the whole list, i.e., the sequence of elements added so far.
Definition 1
Let be the set of updates, and be the set of all possible answers to queries.
We focus on the list data structure because queries return the history of all updates that ever happened. In that regard, lists can encode any other data structure whose operations can be split into updates and queries, by adding a processing layer that runs after the query operation of the list returns all updates. Data structures that contain operations which are queries and updates at the same time (e.g. the Pop operation of a stack) are outside the scope of this paper. We now proceed to give the formal syntax for implementations, and then the corresponding operational semantics.
Definition 2
An implementation is a tuple
where

is a nonempty set of local states,

is a nonempty finite set of process identifiers,

associates to each process an initial local state,

is a set of messages,

is a function, called the handler of incoming messages, which updates the local state of a site when a message is received, and possibly broadcasts a new message,

is a function, called the handler of updates, which modifies the local state when an update is submitted, and possibly broadcasts a message.

is a function, called the handler of client queries, which returns an answer to client queries.
The set is defined as , where is a special symbol denoting the fact that no message is sent.
Before defining the semantics of implementations, we introduce a few notations. We first define the notion of action, used to denote events that happen during the execution. Each action contains a unique action identifier , and the process identifier where the action occurs.
Definition 3
A broadcast action is a tuple , and a receive action is a tuple , where is the message identifier and is the message. An update action or a write action is a tuple where . Finally, a query action or a read action is a tuple where .
Executions are then defined as sequences of actions, and are considered up to renaming of action and message identifiers.
Definition 4
An execution is a sequence of broadcast, receive, query and update actions where no two actions have the same identifier aid, and no two broadcast actions have the same message identifier mid.
We now describe how implementations operate on a given site .
Definition 5
We say that a sequence of actions from site pid follows if there exists a sequence of states such that , and for all , we have:

if with , then . This means that upon a write action, a site must update its state as defined by the update handler;

if with , then and . This condition states that query actions do not modify the state, and that the answer given to query actions must be as specified by the query handler, depending on the current state ;

if , then . Broadcast actions do not modify the local state;

if , then . The reception of a message modifies the local state as specified by .
Moreover, we require that broadcast actions are performed if and only if they are triggered by the handler of incoming messages, or the handler of client requests. Formally, for all , if and only if either:

and such that and
, or 
, , and such that
and .
When all conditions hold, we say that is a run for . Note that when a run exists for a sequence of actions, it is unique.
We then define the set of executions generated by , denoted . In particular, this definition models the communication between sites, and specifies that a receive action may happen only if there exists a broadcast action with the same message identifier preceding the receive action in the execution. Moreover, a fairness condition ensures that, in an infinite execution, every broadcast action must have a corresponding receive action on every site.
Definition 6
Let be an implementation. The set of executions generated by is such that if and only if the three following conditions hold:

Projection: for all , the projection follows ,

Causality: for every receive action , there exists a broadcast action before in ,

Fairness: if is infinite, then for every site and every broadcast action performed on any site , there exists a receive action in ,
where is the subsequence of of actions performed by process pid:

;

;

whenever .
For the rest of the paper, we consider that updates are unique, in the sense that an execution may not contain two update actions that write the same value . This assumption only serves to simplify the presentation of our result, and can be made without loss of generality. In practice, updates can be made unique by attaching a unique timestamp to them.
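One simple way to realize this in practice is to tag each submitted value with a (site identifier, local counter) pair; a sketch, with names of our own choosing:

```python
import itertools

def make_unique_tagger(pid):
    """Return a function that tags each submitted value with a
    (pid, sequence-number) pair, making every update unique."""
    counter = itertools.count()
    return lambda value: (pid, next(counter), value)

tag = make_unique_tagger(pid=1)
assert tag("x") == (1, 0, "x")
assert tag("x") == (1, 1, "x")  # same value, yet a distinct update
```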
3 Problem Definition
In this section, we explain how we compare implementations using the notion of trace. Informally, the trace of an execution corresponds to what is observable from the point of view of clients using the data structure.
Our notion of trace is based on two assumptions: 1) Clients know the order of the queries they have done on a site, but not the relative positions of their queries with respect to other clients’ queries, 2) Updates are anonymous, and their origin is not relevant for the implementation. This models freely available data structures, such as the Bitcoin blockchain, where any person can disseminate a transaction in the network, and the place and time where the transaction was created are not relevant for the protocol execution.
More precisely, a trace records an unordered set of anonymous updates (without a site identifier), and records for each site the sequence of queries that happened on this site.
Definition 7
The trace corresponding to an execution is denoted , where is a labelled partially ordered set such that:

is the set of action identifiers of query actions of ;

is a transitive and irreflexive relation over , sometimes called the program order, ordering queries performed on the same site; more precisely, we have if are action identifiers performed by the same site, and that appear in that order in ;

is the labelling function such that for any , is the answer of the query action corresponding to aid in ;
and is the set of elements that appear in an update action of .
We illustrate this definition with the following example.
Example 1
Consider the execution in Figure 1, and its corresponding trace . ( are site identifiers, are unique message identifiers, and are messages.)
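Definition 7 can be made concrete with a small Python sketch that extracts a trace from an execution; the tuple-based representation of actions below is our own, not the paper's formal syntax.

```python
def trace(execution):
    """Compute the trace of an execution given as a list of actions.
    Actions are tuples ("update", aid, pid, value) or
    ("query", aid, pid, answer); broadcast/receive actions are ignored.
    Following Def. 7, returns the set of query action identifiers, the
    program order over them, their labels, and the anonymous update set."""
    queries = []   # action identifiers of query actions, in order
    labels = {}    # aid -> answer returned by that query
    per_site = {}  # pid -> aids of queries on that site, in order
    updates = set()  # unordered, anonymous set of updated elements
    for action in execution:
        if action[0] == "query":
            _, aid, pid, answer = action
            queries.append(aid)
            labels[aid] = answer
            per_site.setdefault(pid, []).append(aid)
        elif action[0] == "update":
            updates.add(action[3])
    # program order: aid1 < aid2 iff both occur on the same site, in order
    order = {(a, b) for aids in per_site.values()
             for i, a in enumerate(aids) for b in aids[i + 1:]}
    return set(queries), order, labels, updates
```

Note that, as in Definition 7, the site and position of each update are deliberately forgotten: only the set of written elements survives in the trace.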
Then, we compare implementations by looking at the set of traces they produce. The fewer traces an implementation produces, the stronger it is, and the closer it is to strong consistency.
Definition 8
The notation is extended to sets of executions pointwise. An implementation is stronger than , denoted iff
The implementations and are said to be equivalent, denoted , iff and . Moreover, is strictly stronger than , denoted , iff and .
Our goal is to identify the strongest implementations. These are the implementations that are minimal according to the order . More specifically, these are the implementations for which there does not exist an implementation strictly stronger than .
4 Monotonic Prefix Consistency (MPC)
4.1 Description of MPC
Often called consistent prefix [6, 10], the MPC model requires that all sites of the replicated system agree on the order of write operations (i.e., updates on the state). There exists a global order such that every read operation, whatever the site that performs it, always returns a prefix of this order. Moreover, read operations which execute on the same site are monotonic. This means that subsequent reads at the same site reflect a nondecreasing prefix of writes, i.e., the prefix must either increase or remain unchanged.
Note that the global order on write operations on which the sites agree does not necessarily satisfy causality among these operations, nor real-time order. In other words, the order in which clients submit write operations does not translate into any constraints on the global order in which these updates apply at all sites. With respect to freshness, MPC does not guarantee that a read operation will return all of the preceding writes, only a prefix of these writes. For instance, some sites can be later than other sites in applying some updates.
4.2 An Implementation of MPC
For illustration purposes, we give a basic implementation of MPC in Figure 2. The idea is to let Site 1 decide on the order of all update operations. In general, the consensus mechanism can be arbitrary, and symmetric with respect to sites, but we present this one for its simplicity.
Though this is not the case in the model we presented in Section 2, we assume here that messages are received in the same order they were broadcast. More precisely, if one site broadcasts two messages, then every site will receive them in the order in which they were broadcast. In practice, this can be implemented by adding a local version number to each broadcast message.
Upon receiving an update (L20), Site with forwards the update to Site . When receiving an update (L16) or a forwarded message (L25), Site updates its local state, and broadcasts an Apply message to the other sites. Finally, when receiving an Apply message (L30), Site with , updates its local state.
The query handler of each site (L35) simply returns the local state. This implementation ensures three properties that we formalize in the next section.

Monotonicity: The list (maintained in the local state ) of a site grows over time.

Prefix: At any moment, given two lists and of two sites, is a prefix of or vice versa.

Consistency: The list of a site only contains values that come from some update.
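Since Figure 2 is not reproduced here, the following Python sketch is our reconstruction of the leader-based scheme from the description above; all class and message names are assumed. A single global FIFO queue stands in for the ordered message delivery assumed in this section.

```python
from collections import deque

class MpcSite:
    """Sketch of the MPC implementation: Site 1 decides the global order."""
    def __init__(self, pid, network):
        self.pid, self.network, self.log = pid, network, []

    def update(self, value):
        if self.pid == 1:                      # leader: append and announce
            self.log.append(value)
            self.network.broadcast(1, ("apply", value))
        else:                                  # follower: forward to Site 1
            self.network.send(self.pid, 1, ("fwd", value))

    def receive(self, msg):
        kind, value = msg
        if self.pid == 1 and kind == "fwd":    # leader orders forwarded update
            self.log.append(value)
            self.network.broadcast(1, ("apply", value))
        elif self.pid != 1 and kind == "apply":  # follower applies in order
            self.log.append(value)

    def query(self):
        return list(self.log)                  # query handler: local state

class FifoNetwork:
    """FIFO message delivery, as assumed in Section 4.2."""
    def __init__(self, n):
        self.sites = [MpcSite(i + 1, self) for i in range(n)]
        self.queue = deque()
    def send(self, src, dst, msg):
        self.queue.append((dst, msg))
    def broadcast(self, src, msg):
        for s in self.sites:
            if s.pid != src:
                self.queue.append((s.pid, msg))
    def deliver_all(self):
        while self.queue:                      # messages may enqueue messages
            dst, msg = self.queue.popleft()
            self.sites[dst - 1].receive(msg)
```

After all messages are delivered, every site holds the same list, in the order chosen by Site 1, illustrating the Monotonicity, Prefix, and Consistency properties above.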
4.3 Formal Definition of MPC
Definition 9
Given two lists , we say that is a prefix of , denoted , if there exists such that . Moreover, is a strict prefix of , denoted , if and .
By abuse of notation, we extend the prefix order to elements of Ans, which are of the form where is a list (see Def. 1). Moreover, we also use the prefix notations for other types of sequences, such as executions. We now formally define MPC.
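The prefix order of Definition 9 is straightforward to state in code; a minimal sketch over Python lists:

```python
def is_prefix(u, v):
    """Return True iff list u is a prefix of list v (Definition 9)."""
    return len(u) <= len(v) and v[:len(u)] == u

def is_strict_prefix(u, v):
    """Strict prefix: a prefix that is not the whole list."""
    return is_prefix(u, v) and u != v

assert is_prefix([], ["a"])
assert is_prefix(["a"], ["a", "b"])
assert not is_prefix(["b"], ["a", "b"])
assert is_strict_prefix(["a"], ["a", "b"])
assert not is_strict_prefix(["a"], ["a"])
```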
Definition 10
MPC is the set of traces where satisfying the following conditions:

Monotonicity: A query done after aid on the same site cannot return a smaller list. For all , if , then .

Prefix: Queries done on different sites are compatible, in the sense that one is a prefix of the other. For all , or .

Consistency: Queries only return elements that come from a write. For all , and for any element of , we have .
The set of traces generated by the implementation in Figure 2 is exactly MPC.
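Definition 10 can be turned into a direct checker over a trace; the dictionary-based representation below is our own simplification, recording for each site the sequence of query answers in program order.

```python
def _is_prefix(u, v):
    """Prefix order on lists (Definition 9)."""
    return len(u) <= len(v) and v[:len(u)] == u

def satisfies_mpc(site_queries, updates):
    """Check the three MPC conditions of Definition 10.
    site_queries maps each site to the list of answers (each a list of
    elements) returned by its queries, in program order; updates is the
    set of written elements."""
    answers = [a for qs in site_queries.values() for a in qs]
    # Monotonicity: a later query on the same site returns no smaller list
    monotonic = all(_is_prefix(qs[i], qs[i + 1])
                    for qs in site_queries.values()
                    for i in range(len(qs) - 1))
    # Prefix: any two answers are comparable under the prefix order
    prefix = all(_is_prefix(a, b) or _is_prefix(b, a)
                 for a in answers for b in answers)
    # Consistency: answers only contain elements that come from a write
    consistent = all(x in updates for a in answers for x in a)
    return monotonic and prefix and consistent
```

For instance, two sites returning ["a"] and ["a", "b"] satisfy MPC, while two sites returning ["a"] and ["b"] violate the Prefix condition.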
4.4 Relation between MPC and Blockchains
In practice, the traces that the Bitcoin protocol [7] produces belong to MPC with high probability, as shown in [8, 9]. More precisely, they proved that the blockchains of two honest participants are compatible, in the sense that one should be a prefix of the other with high probability, when ignoring the last blocks.
5 Nothing Stronger Than MPC in a Distributed Setting
We now proceed to our main result, stating that there exists no convergent implementation stronger than MPC. Convergent in our setting means that every write action performed should eventually be taken into account by all sites. We formalize this notion in Section 5.1.
We focus on convergent implementations in order to avoid trivial implementations that do not provide progress guarantees. For instance, implementations that do not communicate and always return the empty list for all queries are not convergent.
In Section 5.2, we prove several lemmas that hold for all implementations. We make use of these lemmas to prove our main theorem in Section 5.3.
5.1 Convergence Property
Convergence is formalized using the notion of eventual consistency (see e.g. [11, 12] for definitions similar to the one we use here). A trace is eventually consistent if every write is eventually propagated to all sites. More precisely, for every action , the number of queries that do not contain in their list must be finite. Note that this implies that all finite traces are eventually consistent.
Definition 11
A trace with is eventually consistent if for every , the set is finite. An implementation is convergent if all of its traces are eventually consistent.
5.2 Implementation Properties
We give a few lemmas that describe closure properties of the set of executions generated by implementations in our setting. The following lemma states that the implementation is available for updates, meaning that, given a finite execution, it is always possible to perform a new write at the end of the execution.
Lemma 1 (Update Availability)
Let be an implementation. Let be a finite execution in , and let . Let . Then, there exists an execution such that is a prefix of and .
Proof
Since , we know that follows and that there exists a run for . Let . We distinguish two cases:
(1) If , let , where is a fresh action identifier that does not appear in , and pid is any process identifier in .
(2) If , let , where and are fresh action identifiers that do not appear in , and mid is a fresh message identifier.
In both cases, we prove that belongs to by adding at the end of the run (once in case 1 or twice in case 2). Moreover, we have , which concludes our proof.
The next lemma shows that the implementation is available for queries. This means that given a finite execution, we can perform a query on any site and obtain an answer.
Lemma 2 (Query Availability)
Let be an implementation. Let be a finite execution and . Then, there exist and such that the execution belongs to .
Proof
Similar to the proof of Lemma 1, but using the query handler, instead of the update handler. This proof is also simpler, as there is no need to consider messages, since the query handler cannot broadcast any message. Therefore, in this proof, only case 1 needs to be considered.
Then, we prove that it is possible to remove a finite number of query actions from any finite or infinite execution.
Lemma 3 (Invisible Reads)
Let be an implementation. Let be an execution (finite or infinite) of the form , where , and . Then, .
Proof
This is a direct consequence of Definition 5, which specifies that query actions do not modify the local state of sites, and do not broadcast messages.
Finally, we prove a property about convergent implementations. We prove in Lemma 5 that given any finite execution , it is always possible to add a query action that returns a list containing all the elements appearing in a write action of . The proof relies on the notion of limit (as an infinite execution) of an infinite sequence of finite executions, and on Lemma 4, which shows that, under a fairness condition, the limit of executions in also belongs to .
Definition 12
Given an infinite sequence of finite sequences , such that for all , , the limit of is the (unique) infinite sequence such that for all , .
Lemma 4 (Limit)
Let be an implementation. Let be an infinite sequence of finite executions, such that for all , , , and such that for all , for all broadcast actions in , and for all , there exists such that contains a corresponding receive action.
Then, the limit of belongs to .
Proof
According to Definition 6, we have three points to prove. (1) (Projection) First, we want to show that, for all , the projection follows . For all , we know that , and deduce that follows . Let be the run of . Note that for all , we have . Let be the limit of the runs . By construction, is a run of , which shows that follows .
(2) (Causality) We need to prove that every receive action in has a corresponding broadcast action that precedes it in . Let be a prefix of that contains . Since , we know that there exists a broadcast action corresponding to , and that precedes in . Finally, since , precedes in .
(3) (Fairness) We want to prove that for every broadcast action of and for every site , there exists a corresponding receive action . Let be a prefix of that contains . By assumption of the current lemma, there exists such that contains a receive action corresponding to . Moreover, since , belongs to , which concludes our proof.
Lemma 5 (Convergence)
Let be a convergent implementation. Let be a finite execution and . Let be the set of elements appearing in an update action of , i.e., .
Then, can be extended in an execution where contains every element of , i.e., . Moreover, we can define such an extension that does not contain any query or update actions.
Proof
We build an infinite sequence of finite executions , where for every , . Moreover, we have and for every , , and is obtained from as follows.
For every broadcast action in , and for every , if there is no receive action in , then we add one when constructing . Moreover, if the message handler specifies that a message should be sent when msg is received, we add a new broadcast action that sends , immediately following the receive action. Finally, using Lemma 2, we add a query action (read) on site pid.
Then, we define to be the limit of . By Lemma 4, we have . Since is convergent, we know that is eventually consistent. This ensures that for every , out of the infinite number of queries that belong to , only finitely many do not contain .
Therefore, there exists such that ends with a query action that contains every element of . By construction, is of the form . Using Lemma 3, we remove every query action that appears in , and obtain an execution of the form where contains every element of , and where does not contain any query or update actions.
5.3 Nothing Is Stronger Than MPC in a Distributed Setting
We now proceed with the proof that no convergent implementation is strictly stronger than MPC. We start with an implementation that is strictly stronger than MPC and derive a contradiction.
More precisely, using the lemmas proved in Section 5.2, we prove that any trace of MPC belongs to . First, we show that this holds for finite traces, by using an induction on the number of write operations in the trace. Then, we extend the proof to infinite traces by going to the limit.
Theorem 5.1
Let be a convergent implementation. Then, is not strictly stronger than MPC: .
Proof
Assume that is strictly stronger than MPC, i.e., . Our goal is to prove that , therefore leading to a contradiction. In terms of traces, we want to prove that .
Let . Our goal is to prove that .
Case where is finite. We prove this part by induction in Lemma 6.
Case where is infinite. Let . We first order all the query actions in as a sequence such that for every , , and for every , (in the program order of ) implies . Defining such a sequence is possible thanks to the Monotonicity property of MPC.
For each , we define a finite trace that contains all query actions with , and the subset of that contains all elements appearing in these query actions, i.e. . Our goal is to construct an execution such that , and such that for all , . We then define as the limit of . By Lemma 4, we have . Since , we deduce that , which concludes the proof.
We now explain how to construct , for every , by induction on . Let be the empty execution and . For , we define by starting from , and extending it as follows. By induction, we know that , and want to extend it into an execution such that .
(Similar to Lemma 5) For every broadcast action in , and for every , if there is no receive action in , then we add one when constructing . Moreover, if the message handler specifies that a message should be sent when msg is received, we add a new broadcast action that sends , immediately following the receive action.
Lemma 6 below, used in Theorem 5.1, shows that no convergent implementation can produce strictly fewer finite traces than MPC.
Lemma 6
Let be a convergent implementation such that , and let be a finite trace of MPC. Then, there is a finite execution such that .
Proof
Let . We proceed by induction on the size of , denoted .
Case . In that case, the set is empty. First, by definition of , we have where is the empty execution. Then, for each read operation in , and using Lemma 2, we add a read operation to the execution. We obtain an execution .
We then have to prove that , meaning that all the read operations of return the empty list, as in . By our assumption that , we know that . By definition of MPC, and since contains no write operation, the Consistency property of MPC ensures that all the read actions of return the empty list. Therefore, we have , which concludes our proof.
Case . We consider two subcases. (1) There exists a write whose value does not appear in . We consider the trace . By definition of MPC, belongs to MPC, and we deduce by induction hypothesis that there exists an execution such that . By Lemma 1, we extend in an execution so that , which is what we wanted to prove.
(2) All the writes of appear in the reads of . By the Consistency and Prefix properties of MPC, there exists a nonempty sequence of elements from , such that all read actions return a prefix of , and there exist read actions that return the whole list .
Let , where is the last element of . Let be the trace , such that is the trace where every query action labelled by is replaced by a query action labelled by , and implicitly, every query action labelled by any prefix of is unchanged. Let be the set of the newly added query actions, and let be the set of site identifiers that appear in an action of .
By definition of MPC, we have . By induction hypothesis, we deduce that there exists a finite execution such that .
Then, by Lemma 1, we add at the end of an update action (on some site and with some fresh ), which is of the form , so we get an execution such that .
Using Lemma 5, we extend in an execution by adding queries to the sites in , as many as were replaced by queries in . Since , and since by Lemma 5, the answers to these queries must contain all the elements of , we conclude that the only possible answer for all these queries is the entire list .
Finally, we use Lemma 3 to remove the queries from , and we obtain an execution in whose trace is .
6 Comparison with Other Consistency Criteria
6.1 Relation between MPC and other consistency criteria
Consistency criteria are usually defined in terms of full traces that contain both the read and write operations in the program order (see e.g. [11]). The definition of trace we used in this paper (Def. 7, Section 3) puts the writes in an unordered set, unrelated to the read operations. This choice is justified in large-scale, open implementations, such as the Bitcoin blockchain. Indeed, in these systems, any participant can perform a write operation (e.g., a Bitcoin transaction), and the origin of the write has no relevance for the protocol.
When considering full traces, MPC as a consistency criterion is strictly weaker than strong consistency. Indeed, MPC allows a trace where a read preceded by a write on the same site ignores that write.
As explained in the introduction, MPC is not comparable to causal consistency. MPC allows full traces that causal consistency forbids and vice versa. Therefore, our result stating that nothing stronger than MPC can be implemented in a distributed setting does not contradict earlier results of [13] and [5], which show that nothing stronger than variants of causal consistency can be implemented.
6.2 Relation with other criteria when using our notion of trace
When using our notion of trace, MPC is strictly stronger than causal consistency. First, MPC is stronger than causal consistency because every trace of MPC can be produced by a causally consistent system. The main reason is that our notion of trace does not capture any causality relation. Moreover, there are some traces that causal consistency produces and which do not belong to MPC, e.g. a trace where Site 1 has a operation, and Site 2 has a , where and are not causally related (hence MPC is strictly stronger than causal consistency).
Moreover, it is interesting to note that, for our notion of trace, the traces allowed by MPC are exactly the traces allowed by strong consistency. This entails that, if the replicated data structure is used by clients who can only observe our traces, then there is no need to implement strong consistency. In short, MPC and strong consistency are indistinguishable to these clients.
7 Conclusion
We have investigated the question of what is the strongest consistency criterion that can be implemented when replicating a data structure, in distributed systems under availability and partition-tolerance requirements. Earlier work had established the impossibility of implementing strong consistency in such a system model, but left open the question of the strongest criterion that can be implemented. In this paper we have focused on the Monotonic Prefix Consistency (MPC) criterion. We proposed an implementation of MPC and showed that no criterion stronger than MPC can be implemented. Importantly, blockchain protocols, such as Bitcoin, implement MPC with high probability, and therefore come as close as possible to strong consistency.
In future work we plan to investigate how the strongest achievable consistency criterion depends on observability – that is, the information encoded in a trace – and study conditions for the (non)existence of a strongest consistency criterion. We are also interested in extending our result to other system models. Specifically, answering the question of what is the strongest consistency criterion that can be implemented in systems where updates are not anonymous, or where the system is permissioned, i.e., where different sites may have different roles, such as primarybackup replication schemes.
Footnotes
 In Bitcoin-like protocols, the most recent blocks are ignored, as they are considered unsafe to use until newer blocks are appended after them.
References
 Herlihy, M., Wing, J.M.: Linearizability: A correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst. 12(3) (1990)
 Brewer, E.: CAP twelve years later: How the “rules” have changed. Computer 45(2) (2012)
 Gilbert, S., Lynch, N.A.: Brewer’s conjecture and the feasibility of consistent, available, partition-tolerant web services. SIGACT News 33(2) (2002) 51–59
 Lamport, L.: Time, clocks, and the ordering of events in a distributed system. Commun. ACM 21(7) (July 1978) 558–565
 Attiya, H., Ellen, F., Morrison, A.: Limitations of highly-available eventually-consistent data stores. IEEE Transactions on Parallel and Distributed Systems 28(1) (2017) 141–155
 Terry, D.: Replicated data consistency explained through baseball. Technical Report MSR-TR-2011-137, Microsoft Research (October 2011)
 Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system (2008)
 Pass, R., Seeman, L., Shelat, A.: Analysis of the blockchain protocol in asynchronous networks. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT’17. Volume 10211 of Lecture Notes in Computer Science., Paris, France (April 2017) 643–673
 Garay, J., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol: Analysis and applications. In: Annual International Conference on the Theory and Applications of Cryptographic Techniques, Springer (2015) 281–310
 Guerraoui, R., Pavlovic, M., Seredinschi, D.A.: Tradeoffs in replicated systems. IEEE Data Engineering Bulletin 39 (2016) 14–26
 Burckhardt, S.: Principles of Eventual Consistency. Now Publishers (October 2014)
 Bouajjani, A., Enea, C., Hamza, J.: Verifying eventual consistency of optimistic replication systems. In Jagannathan, S., Sewell, P., eds.: The 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL’14, San Diego, CA, USA, January 2014, ACM (2014) 285–296
 Mahajan, P., Alvisi, L., Dahlin, M.: Consistency, availability, convergence. Technical Report TR1122, Computer Science Department, University of Texas at Austin (May 2011)