Differentially Private Iterative Synchronous Consensus

Zhenqi Huang    Sayan Mitra    Geir Dullerud
University of Illinois at Urbana-Champaign

The iterative consensus problem requires a set of processes or agents with different initial values to interact and update their states so that they eventually converge to a common value. Protocols solving iterative consensus serve as building blocks in a variety of systems where distributed coordination is required, such as load balancing, data aggregation, sensor fusion, filtering, clock synchronization, and platooning of autonomous vehicles. In this paper, we introduce the private iterative consensus problem, where agents are required to converge while protecting the privacy of their initial values from honest but curious adversaries. In many applications, protecting the initial states suffices to protect all subsequent states of the individual participants.

First, we adapt the notion of differential privacy in this setting of iterative computation. Next, we present a server-based and a completely distributed randomized mechanism for solving private iterative consensus with adversaries who can observe the messages as well as the internal states of the server and a subset of the clients. Finally, we establish the tradeoff between privacy and the accuracy of the proposed randomized mechanism.

1 Introduction

This paper addresses the problem of reaching agreement in a group iteratively while preserving each individual's privacy. The setup consists of agents, each with some initial information modeled as the valuation of a variable. The problem requires the agents to interact with each other and update their internal states, so that eventually they all converge to a common decision or value. This agreement can then be used for coordinating the actions of the participating agents. Indeed, iterative consensus has been used as a building block for designing a variety of distributed coordination protocols for load balancing [6, 23], filtering and sensor fusion [16, 24], clock synchronization, and flocking [4, 18, 12, 19, 13], to name a few.

A natural, synchronous, and widely studied consensus mechanism involves, at each round, every agent updating its state as a weighted average of its own value and the values of its neighboring agents. This update rule can be expressed as $x(t+1) = A\,x(t)$, where $x(t)$ is the vector of agent values and $A$ is a symmetric stochastic matrix whose entry $a_{ij}$ defines the communication weight between agents $i$ and $j$. It turns out that this class of consensus mechanisms¹ converges to the average of the initial values of the agents, and a measure of the speed of convergence is given by the second largest eigenvalue in absolute value of the matrix $A$. (¹We refrain from calling these mechanisms algorithms because they are designed to converge and not to terminate.) More general necessary and sufficient conditions for achieving consensus with synchronous mechanisms, including cases where the matrix is time-varying, have been studied in [21, 17] (see [14] for a complete overview). Sufficient conditions for achieving consensus with message delays and losses have been developed in [22, 3], and more recently, a theorem prover-based verification framework for these mechanisms has been presented in [15, 5]. Furthermore, stochastic variants of the consensus mechanism in the presence of communication noise have been studied in [23, 11].
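As an illustration, the weighted-averaging update can be simulated directly. The sketch below is a toy under stated assumptions (a path graph with a hand-picked symmetric weight, not a construction from the paper): since the weight matrix is symmetric and stochastic, repeated application drives all states to the initial average.

```python
import numpy as np

# Toy synchronous averaging x(t+1) = A x(t) on a path graph.
# The weight w = 0.4 is an arbitrary choice that keeps A symmetric
# and stochastic, so the iteration converges to the initial average.
def consensus_matrix_path(n, w=0.4):
    A = np.eye(n)
    for i in range(n - 1):
        A[i, i] -= w
        A[i + 1, i + 1] -= w
        A[i, i + 1] += w
        A[i + 1, i] += w
    return A

A = consensus_matrix_path(5)
x = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
avg = x.mean()
for _ in range(200):
    x = A @ x
assert np.allclose(x, avg, atol=1e-6)
```

The second largest eigenvalue of `A` in absolute value governs how fast the assertion becomes true; a smaller magnitude means faster mixing.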

In this paper we study the private consensus problem, which requires the agents to preserve the privacy of their initial values from an adversary who can see all the messages being exchanged, while also achieving convergence to the average of the initial values. The notion of privacy used in this paper is derived from the idea of differential privacy, first introduced in [8] (see [9] for a survey) in the context of “one-shot” computations on statistical databases. Roughly speaking, differential privacy ensures that the removal (or addition) of a single participant from a database does not substantially affect the output of any analysis. It follows that an adversary looking at the output of any analysis cannot breach the privacy and security of individual participants with high confidence.

In [10], the notion of differential privacy is extended along two dimensions. First, it covers streaming and online computations in which the adversary can observe the entire sequence of outputs of the analysis algorithm. Second, it allows the adversary to observe the internal state of the algorithm (pan privacy) in addition to the communication messages.

This work is motivated by closed-loop applications where the output of the analysis is used as feedback by the participating agents in updating their states. As a starting point in this investigation, we use a client-server setup for iterative consensus. The clients are the agents with private initial values. In each round, the clients send some information to the server based on their current state; the server updates its own state based on the clients' information and sends feedback to the clients. Finally, the clients update their states according to some local control law based on the server's feedback. The clients are required to converge, while their initial values should be protected from any honest but curious adversary with access to the messages (between the clients and the server) as well as the server's internal state. We call this the Synchronous Private Consensus (SPC) problem.

In many distributed control systems, protecting the initial information implies protection of the current state. For example, consider a platoon of vehicles which is required to move as a group at the same speed, while keeping the vehicles' positions private. If the agents use a solution to the SPC problem for deciding on the common speed, then their initial velocities as well as their positions will be protected even if their initial positions and control laws are compromised.

In Section 3 we propose a randomized mechanism for solving the SPC problem. The key idea is to add a particular type of random noise to the clients' messages to the server. Specifically, for a client with internal state $x_i(t)$ at round $t$, the message it sends to the server is $x_i(t) + \eta_i(t)$, where $\eta_i(t)$ is a random (real) number chosen according to a Laplace distribution whose parameter decays geometrically with $t$. In contrast, the noise values added in [10] for implementing an approximate online counter are always chosen from the same Laplace distribution. The feedback provided by the server is the mean of the noisy messages it receives, and the clients update their states by taking a linear combination of this feedback and their earlier state. This weighted average is an example of a simple type of client dynamics.

In Section 4, we generalize the client-server mechanism to a distributed setting where the adversary can access the messages and the states of a subset of compromised clients. The mechanism guarantees differential privacy of the good clients and we derive a sufficient condition for convergence based on the communication pattern of the clients.

As randomization is used for achieving privacy, this mechanism guarantees convergence to the average in a probabilistic sense: given a probability $p$ and a radius $r$, we say that the mechanism is $(p, r)$-accurate if, from any initial state, with probability $p$ the system converges to a value within distance $r$ of the average. In Section 5, we discuss the tradeoff between privacy and accuracy. There are two parameters in the definition of the mechanism which can be chosen to get different levels of privacy and accuracy. If these parameters are tuned to obtain $\epsilon$-differential privacy, then we show that the achievable accuracy radius depends inversely on the privacy level $\epsilon$ and on the accuracy probability, and improves with the number of agents $n$.

The rest of the paper is organized as follows. In Section 2, we introduce the synchronous private consensus problem, and then formally define differential privacy, convergence, and accuracy. In Sections 3 and 4, we present and analyze the client-server and the distributed mechanisms for SPC. In Section 6, we compare our work with existing research papers in this area. In Section 7, we summarize our results and discuss possible future directions.

2 Preliminaries

For a natural number $N$, we denote the set $\{1, \ldots, N\}$ by $[N]$. For an $\mathbb{R}$-valued vector $v$ of length $N$ and an index $i \in [N]$, we denote the $i$th component of $v$ by $v_i$.

The mechanisms presented in this paper rely on random real numbers drawn according to the Laplace distribution. $Lap(b)$ denotes the Laplace distribution with probability density $f(x) = \frac{1}{2b} e^{-|x|/b}$. This distribution has mean $0$ and variance $2b^2$. For a random variable $X \sim Lap(b)$ and any $t \ge 0$, $\Pr[|X| \ge t] = e^{-t/b}$.
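These properties of $Lap(b)$ can be checked empirically. The snippet below is a sanity-check sketch using NumPy's Laplace sampler; the sample size and tolerances are arbitrary choices:

```python
import numpy as np

# Empirical check of Lap(b): variance 2*b^2 and tail P(|X| >= t) = exp(-t/b).
rng = np.random.default_rng(0)
b, t = 2.0, 3.0
xs = rng.laplace(scale=b, size=200_000)
assert abs(xs.var() - 2 * b**2) < 0.2        # variance close to 2 b^2
tail = np.mean(np.abs(xs) >= t)
assert abs(tail - np.exp(-t / b)) < 0.01     # tail close to exp(-t/b)
```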

2.1 Problem Statement

We state the synchronous private consensus (SPC) problem in the following setting. The system consists of $n$ clients with private initial values and one server. The clients and the server may have internal states, and they communicate over channels. Each round has four phases: first, the clients send messages to the server; next, the server performs computations to update its state; then it responds to the clients with messages; and finally, the clients update their own internal states based on the response from the server.

Several vulnerabilities threaten to compromise the private initial values of the clients: (1) an intruder can have full access to all the communication channels; that is, he can peek inside all the messages going back and forth between the clients and the server. Furthermore, (2) the intruder can access the server's internal state.

Roughly, a randomized mechanism for the clients and the server solves the synchronous private consensus problem if eventually all the clients converge to the average of their initial values with high probability and it guarantees that the intruder cannot learn about the initial private client values with any high level of confidence. We proceed to precisely define accuracy, convergence, and privacy.

Our definition of privacy is a modification of the notion of differential privacy introduced in [10] in the context of streaming algorithms. Let $\mathcal{X}$ be the domain of individual internal states and messages.

Definition 1 (Adjacency).

Two vectors $\theta, \theta' \in \mathcal{X}^n$ are $\delta$-adjacent, for some $\delta > 0$, if there exists a single index $i \in [n]$ such that $|\theta_i - \theta'_i| \le \delta$, and for all $j \ne i$, $\theta_j = \theta'_j$.

Definition 2 (Differential Privacy).

Let $\mathcal{S}$ be the domain of global states equipped with a metric. Let $\mathcal{M}$ be the set of all possible message sequences and $\mathcal{I}$ be the set of all possible sequences of internal states of the algorithm. A randomized mechanism preserves $\epsilon$-differential privacy if for all sets $M \subseteq \mathcal{M}$ and $I \subseteq \mathcal{I}$, and for all pairs of $\delta$-adjacent initial global states $\theta$ and $\theta'$, $\Pr_\theta[M \times I] \le e^{\epsilon}\,\Pr_{\theta'}[M \times I]$.

We use the standard mean square notion of convergence, which has been used in the context of consensus protocols [11]. Let $x_i(t)$ be the local state of agent $i$ at the beginning of round $t$; $\theta_i = x_i(0)$ denotes the secret initial state of agent $i$.

Definition 3 (Convergence).

A randomized mechanism is said to converge if for any initial configuration and any pair of agents $i, j$, $\lim_{t \to \infty} \mathbb{E}\big[(x_i(t) - x_j(t))^2\big] = 0$, where the expectation is over the coin-flips of the algorithm.

Definition 4 (Accuracy).

For any initial state $\theta$, a randomized mechanism is said to achieve $(p, r)$-accuracy if every execution starting from $\theta$ converges to a state within $r$ of the average $\bar\theta = \frac{1}{n}\sum_i \theta_i$, with probability at least $p$.

Our goal is to design a solution to the SPC problem that is guaranteed to converge. In addition, for an adversary observing the sequence of messages passing through the channels as well as the sequence of internal states of the server (and possibly some of the clients), the probabilities of executions corresponding to adjacent initial local states and these sequences have to be related as in Definition 2.

3 A Client-Server Mechanism and its Analysis

In this section, we present a randomized mechanism for solving the synchronous private consensus problem. This mechanism has three parameters: a linear interpolation coefficient $d \in (0, 1]$, an initial noise scale $c > 0$, and a noise decay rate $q \in (0, 1)$. The mechanism is specified by the following client and server actions, which define the four phases of each round $t \in \mathbb{N}$:

  1. Client $i$ sends the message $w_i(t) = x_i(t) + \eta_i(t)$ to the server, where $\eta_i(t)$ is a random noise generated from the distribution $Lap(c\,q^t)$.

  2. The server updates its own state as the average of all client messages: $z(t) = \frac{1}{n}\sum_{i=1}^{n} w_i(t)$.

  3. The server sends $z(t)$ to all clients.

  4. Client $i$ updates its state by linearly interpolating between $x_i(t)$ and $z(t)$ with coefficient $d$, that is, $x_i(t+1) = (1-d)\,x_i(t) + d\,z(t)$.
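A toy run of the four phases is sketched below. The parameter names `d`, `q`, `c` and the update rule are as rendered in this section; treat them as assumptions about the mechanism rather than a reference implementation. Because the server's noisy feedback is common to all clients, their mutual disagreement contracts by a factor $(1-d)$ per round regardless of the noise:

```python
import numpy as np

# Toy run of the client-server rounds with geometrically decaying
# Laplace noise Lap(c*q^t). Parameter values are arbitrary but satisfy
# the assumed privacy constraint q > 1 - d.
rng = np.random.default_rng(1)
n, rounds = 10, 400
d, q, c = 0.5, 0.9, 1.0
theta = rng.uniform(0, 10, n)                    # private initial values
x = theta.copy()
for t in range(rounds):
    w = x + rng.laplace(scale=c * q**t, size=n)  # noisy reports to server
    z = w.mean()                                 # server feedback
    x = (1 - d) * x + d * z                      # client update
assert np.ptp(x) < 1e-6                          # clients agree with each other
assert abs(x.mean() - theta.mean()) < 5.0        # loose check: near the average
```

The final states agree with each other exactly (up to floating point), while the common value drifts from the true average by a random amount whose variance is bounded by the decaying noise schedule.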


3.1 Analysis

For $t \in \mathbb{N}$, let $x(t)$ be the vector defining the states of the clients at the beginning of round $t$. Similarly, $\eta(t)$ and $w(t)$ are the vectors of noises and messages. An execution of the mechanism is an infinite sequence of the form $x(0), \eta(0), w(0), z(0), x(1), \ldots$. Observe that given an initial vector $x(0)$ and the sequence of noise vectors $\eta(0), \eta(1), \ldots$, the execution of the system is completely specified. That is, for all $t$, it defines the messages $w(t)$, the internal states of the clients $x(t)$, and that of the server $z(t)$. Thus, for brevity we will sometimes write an execution $\alpha$ as an infinite sequence of the form $x(0), \eta(0), \eta(1), \ldots$. The prefix of $\alpha$ up to round $t$ is denoted by $\alpha_t$. We denote the set of possible executions from an initial state $\theta$ as $A_\theta$.

For a given execution $\alpha$, the adversary can observe the subsequence of messages $w(t)$ and the server's states $z(t)$. We denote this subsequence by $obs(\alpha)$. Hence, two executions $\alpha$ and $\alpha'$ are indistinguishable to an adversary if $obs(\alpha) = obs(\alpha')$. For a set of observation sequences $O$, the set of all possible executions from $\theta$ which correspond to some observation in $O$ is the set $obs^{-1}_\theta(O)$. We restate the definition of differential privacy in this context.

Definition 5 (Differential Privacy).

A randomized mechanism preserves $\epsilon$-differential privacy if for any set of observation sequences $O$ and any pair of $\delta$-adjacent initial global states $\theta$ and $\theta'$,
$\Pr_\theta\big[obs^{-1}_\theta(O)\big] \le e^{\epsilon}\,\Pr_{\theta'}\big[obs^{-1}_{\theta'}(O)\big]. \qquad (2)$

Lemma 1 (Privacy).

For $q > |1-d|$, the mechanism guarantees $\epsilon$-differential privacy with $\epsilon = \dfrac{\delta\,q}{c\,(q - |1-d|)}$.


Let $\theta$ and $\theta'$ be arbitrary $\delta$-adjacent initial global states. Without loss of generality, we assume that for some index $i$, $\theta'_i = \theta_i + \delta$. Fix any subset of observation sequences $O$. We will show that Equation (2) holds by establishing a bijective correspondence between the executions in $obs^{-1}_\theta(O)$ and $obs^{-1}_{\theta'}(O)$. For brevity, we denote these sets by $A$ and $A'$.

First, we define a bijection $f : A \to A'$. For $\alpha \in A$ defined by the sequence $x(0), \eta(0), \eta(1), \ldots$, we define $f(\alpha) = \alpha'$ as the execution given by the sequence $x'(0), \eta'(0), \eta'(1), \ldots$,
where for each $t$,

$x'(0) = \theta'$, $\quad \eta'_i(t) = \eta_i(t) - \delta(1-d)^t$, and $\eta'_j(t) = \eta_j(t)$ for $j \ne i$. Clearly, $f(\alpha)$ is a valid execution of the mechanism starting from $\theta'$.

The following proposition relates the states and the observable vectors of two corresponding executions.

Proposition 2.

For all $t$, with $i$ the distinguished index,

  1. $x'_i(t) - x_i(t) = \delta(1-d)^t$, and $x'_j(t) = x_j(t)$ for all $j \ne i$,

  2. $w'(t) = w(t)$,

  3. $z'(t) = z(t)$.


The proof is by induction on $t$. For the base case $t = 0$, observe that $x'_i(0) - x_i(0) = \theta'_i - \theta_i = \delta$; otherwise, $x'_j(0) = x_j(0)$.

For the inductive step, assume that the proposition holds for all rounds up to $t$. From Equation (1), we have $x'_i(t+1) = (1-d)\,x'_i(t) + d\,z'(t)$ and $x_i(t+1) = (1-d)\,x_i(t) + d\,z(t)$. The difference of these two equations gives $x'_i(t+1) - x_i(t+1) = (1-d)\big(x'_i(t) - x_i(t)\big) = \delta(1-d)^{t+1}$.

For any other client $j$, it follows immediately from $x'_j(t) = x_j(t)$ and $z'(t) = z(t)$ that $x'_j(t+1) = x_j(t+1)$.

Now we consider the clients' reports $w(t+1)$. For the $i$th client, $w'_i(t+1) = x'_i(t+1) + \eta'_i(t+1) = x_i(t+1) + \delta(1-d)^{t+1} + \eta_i(t+1) - \delta(1-d)^{t+1} = w_i(t+1)$. For any other client $j$, $w'_j(t+1) = x'_j(t+1) + \eta'_j(t+1) = w_j(t+1)$. So the reports satisfy $w'(t+1) = w(t+1)$. The matching of the server's internal state immediately follows. ∎

Parts 2 and 3 of the above proposition establish that $\alpha$ and $f(\alpha)$ are indistinguishable, that is, they indeed produce the same observation sequence.

Next we relate the probabilities of any finite prefix of an individual execution $\alpha \in A$ and of its corresponding execution $f(\alpha) \in A'$, for a particular observation sequence:

Integrating over all executions $\alpha \in A$, we get

$\Pr_\theta[A] \;\le\; \prod_{t=0}^{\infty} \exp\!\Big(\frac{\delta\,|1-d|^t}{c\,q^t}\Big)\,\Pr_{\theta'}[A'],$

where $\Pr_\theta$ and $\Pr_{\theta'}$ are probability measures over $A$ and $A'$ defined by the randomized mechanism. If $q > |1-d|$, then as $t \to \infty$ the product converges to $e^{\epsilon}$, where $\epsilon = \frac{\delta\,q}{c\,(q - |1-d|)}$, and we obtain the required inequality for $\epsilon$-differential privacy.
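The geometric-series step can be checked numerically. The closed form used below, $\epsilon = \delta q / (c(q - |1-d|))$, is the reconstruction adopted in this rendering and should be read as an assumption:

```python
import math

# Product of per-round density-ratio bounds exp(delta*|1-d|**t / (c*q**t)):
# the total exponent is a geometric series converging iff q > |1 - d|.
d, q, c, delta = 0.5, 0.9, 1.0, 1.0
ratio = abs(1 - d) / q
assert ratio < 1                                  # privacy constraint q > |1-d|
exponent = (delta / c) * sum(ratio**t for t in range(200))
closed_form = delta * q / (c * (q - abs(1 - d)))
assert math.isclose(exponent, closed_form, rel_tol=1e-9)
```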

Lemma 3 (Convergence).

The mechanism described above achieves convergence.


We define a global potential function as $V(x) = \sum_i (x_i - \bar{x})^2$, where $\bar{x} = \frac{1}{n}\sum_i x_i$. Using matrix notation, the update rule for all the agents can be written as $x(t+1) = P\,x(t) + \frac{d}{n}\mathbf{1}\mathbf{1}^T \eta(t)$, where $P = (1-d)I + \frac{d}{n}\mathbf{1}\mathbf{1}^T$.

Since the feedback $z(t)$ is common to all clients, the deviation vector $y(t) = x(t) - \bar{x}(t)\mathbf{1}$ satisfies $y(t+1) = (1-d)\,y(t)$. Hence $V(x(t)) = \|y(t)\|^2 = (1-d)^{2t}\,V(x(0))$. For $d \in (0, 1]$, $V(x(t))$ converges exponentially to $0$, which implies convergence. ∎

From the update rule, each agent adds an identical random quantity $d\,z(t)$ to its interpolated state in round $t$. Although the average value drifts with this random quantity, the relative distances between local states are not affected. As a result, the mechanism converges deterministically.

Lemma 4 (Accuracy).

For any $p \in (0, 1)$, the randomized mechanism achieves $(p, r)$-accuracy for a radius $r$ that depends on $d$, $c$, $q$, $n$, and $p$.


This is a special case of a more general proof shown later; see the proof of Lemma 8, instantiated for this centralized setting. ∎

In this section we proposed a solution to the centralized synchronous consensus problem and formally established its privacy, convergence, and accuracy properties. We will discuss the trade-offs between privacy and accuracy in Section 5.

4 A Distributed Mechanism

In this section, we present a second synchronous randomized mechanism for solving the private consensus problem which does not use a server but instead relies on the clients exchanging information with their neighbors in a truly distributed fashion. Let $G = (V, E)$ be an undirected connected graph, where $V$ is the set of vertices and $E$ is the set of edges. Let $N_i$ be the set of neighbors of node $i$, with whom it communicates. Let $\deg_i = |N_i|$ be the degree of node $i$ in $G$.

As in the previous setting, an intruder has access to all the communication channels as well as the internal states of a set of compromised clients (but cannot overwrite them). Our mechanism will protect the privacy of clients who are not compromised. Thus, in this context, Definition 5 is modified by restricting the notion of -adjacency to uncompromised agents.

Now we state a mechanism to solve the distributed SPC problem. Besides the state variable $x_i$, which holds the consensus value, client $i$ holds another auxiliary state $z_i$. The mechanism has parameters $c > 0$, $q \in (0, 1)$, and a vector $d \in (0, 1]^n$. Instead of sharing an identical linear combination factor, client $i$ has an independent coefficient $d_i$, which is the $i$th element of the vector $d$. At each round $t$:

  1. Client $i$ sends the message $w_i(t) = x_i(t) + \eta_i(t)$ to every $j \in N_i$, where $\eta_i(t)$ is a random noise generated from the distribution $Lap(c\,q^t)$.

  2. Client $i$ updates $z_i$ as the average of its own message $w_i(t)$ and the messages it receives: $z_i(t) = \frac{1}{\deg_i + 1}\big(w_i(t) + \sum_{j \in N_i} w_j(t)\big)$.

  3. Client $i$ updates $x_i$ by linearly interpolating between $x_i(t)$ and $z_i(t)$ with coefficient $d_i$, that is, $x_i(t+1) = (1-d_i)\,x_i(t) + d_i\,z_i(t)$.
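The distributed rounds can be sketched on a small ring graph. Everything below (the ring topology, per-client weights drawn uniformly, and averaging the client's own noisy report with its neighbors') is an illustrative assumption, not the paper's experimental setup:

```python
import numpy as np

# Toy distributed rounds on a ring of n clients. Each client averages
# its own noisy report with its two neighbors' reports, then moves
# toward that average with its own weight d_i.
rng = np.random.default_rng(2)
n, rounds = 8, 500
q, c = 0.9, 1.0
d = rng.uniform(0.4, 0.6, n)                     # per-client weights d_i
nbrs = [((i - 1) % n, (i + 1) % n) for i in range(n)]
x = rng.uniform(0, 10, n)
for t in range(rounds):
    w = x + rng.laplace(scale=c * q**t, size=n)  # noisy broadcasts
    z = np.array([(w[i] + w[j] + w[k]) / 3 for i, (j, k) in enumerate(nbrs)])
    x = (1 - d) * x + d * z
assert np.ptp(x) < 1e-2                          # clients reach agreement
```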


4.1 Analysis

The analysis of the distributed mechanism parallels the analysis presented in Section 3. An execution is defined as in the centralized setting, except that in this case $z(t)$ is a vector rather than a scalar. Privacy guarantees are not meaningful for the corrupted nodes themselves. Let $C \subseteq V$ be the set of corrupted nodes.

Lemma 5 (Privacy).

For $q > 1 - d_{\min}$, where $d_{\min}$ is the minimum element of the vector $d$, the distributed mechanism guarantees $\epsilon$-differential privacy with respect to the uncorrupted nodes, with $\epsilon = \dfrac{\delta\,q}{c\,(q - 1 + d_{\min})}$.

We omit the proof of Lemma 5 as it is a straightforward generalization of the proof of Lemma 1.

In contrast to Lemma 3, the convergence of the distributed mechanism depends on the structure of the graph $G$. Before stating the convergence result, we introduce the Laplacian matrix $L$ of graph $G$ with elements:

$L_{ii} = \deg_i$, $\quad L_{ij} = -1$ if $(i, j) \in E$, $\quad L_{ij} = 0$ otherwise.

The Laplacian matrix of any graph is known to have several nice properties. It is by definition symmetric with real entries, hence it can be diagonalized by an orthogonal matrix. It is positive semidefinite, hence its real eigenvalues can be ordered as $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$. Furthermore, $\lambda_1 = 0$, and $\lambda_2 > 0$ if and only if the graph is connected. Let $v_1, \ldots, v_n$ be a set of orthonormal eigenvectors of $L$ such that $v_k$ corresponds to $\lambda_k$. In addition, denote $D = \mathrm{diag}(d_1, \ldots, d_n)$. We state a sufficient condition for convergence as follows.

Assumption 1.

Assume that the graph $G$ and the weight vector $d$ have the following properties.

  1. $\lambda_2 > 0$, that is, graph $G$ is connected.

  2. The weights $d_i$ are small enough that the combined update matrix contracts the disagreement among clients (a spectral condition involving $D$, the node degrees, and $\lambda_n$).

Lemma 6 (Convergence).

The distributed mechanism described above achieves convergence if Assumption 1 holds.


We define a potential function $V(x)$ as the squared disagreement among the clients.

Using matrix notation and Assumption 1 ($\lambda_2 > 0$), and according to Equations (6) and (7), the update equation of client $i$ is:

$x_i(t+1) = (1-d_i)\,x_i(t) + \frac{d_i}{\deg_i + 1}\Big(w_i(t) + \sum_{j \in N_i} w_j(t)\Big).$




We define a noise vector $\xi(t)$ collecting the Laplace terms, and a matrix $P$ with elements determined by $D$, the node degrees, and the adjacency structure. The update rule for all the agents can then be written as $x(t+1) = P\,x(t) + \xi(t)$. Then,


Taking the expectation of both sides with respect to the coin flips of the algorithm, starting from any state:

The cross term vanishes because (i) $x(t)$ and $\xi(t)$ are independent, and (ii) by Equation (10), $\xi(t)$ has zero mean.

Now we will prove that there exists a constant $\gamma < 1$ bounding the contraction of $\mathbb{E}[V(x(t))]$. Because $L$ is positive semidefinite, we have $x^T L x \ge 0$ for all $x$. From Assumption 1 and Equation (11), we obtain:


The following proposition helps obtain a bound on the quadratic form $x^T L x$.

Proposition 7.

For any $y \in \mathbb{R}^n$, $y^T L y \le \lambda_n\, y^T y$.


First, we show that the proposition holds for any eigenvector of $L$. For the eigenvector $v_1$ corresponding to $\lambda_1 = 0$, we have $L v_1 = 0$ and the inequality holds trivially. For any other eigenvector $v_k$ with corresponding eigenvalue $\lambda_k$, we have $v_k^T L v_k = \lambda_k \le \lambda_n$. Next, we prove that the proposition holds for any vector $y \in \mathbb{R}^n$. Because $\{v_k\}$ is an orthonormal basis, any $y$ can be written as $y = \sum_k c_k v_k$. For any such $y$, we have:

$y^T L y = \sum_k c_k^2 \lambda_k \le \lambda_n \sum_k c_k^2 = \lambda_n\, y^T y.$

Thus, for any $y$, the inequality holds. Also, by Assumption 1, the resulting contraction factor is strictly less than $1$. Then, for some $\gamma < 1$, Equation (13) reduces to

As $t \to \infty$, the contribution of the first term converges to $0$. For the second term, recall that each element of $\xi(t)$ is a linear combination of i.i.d. $Lap(c\,q^t)$ random variables. For $q \in (0, 1)$, the variance of these terms decays geometrically, so their contribution also converges to $0$. Combining, we have $\mathbb{E}[V(x(t))] \to 0$ as $t \to \infty$. ∎

In general, the expected consensus value of the distributed algorithm does not coincide with the initial average. Intuitively, a node with a higher degree or slower evolution will have a heavier weight on the consensus value. In this context, Definition 4 is modified by replacing the average with a weighted average $\sum_i w_i\,\theta_i$, where the weight $w_i$ is proportional to $(\deg_i + 1)/d_i$.

Lemma 8 (Accuracy).

The distributed mechanism achieves $(p, r)$-accuracy, where the radius $r$ is determined by the variance of the accumulated noise, as derived below.


Let us fix an initial state $\theta$ and define the weighted target and the accumulated drift accordingly. We rewrite Equation (9) as:

Adding up all the equations and dividing by the total weight, we get:

From the definition of the noise and Equation (10), we have

Since $q < 1$, the series converges.

By Chebyshev's inequality, for any $r > 0$, $\Pr\big[|X - \mathbb{E}[X]| \ge r\big] \le \mathrm{Var}(X)/r^2$:

Choosing $r$ such that $\mathrm{Var}(X)/r^2 = 1 - p$, we have $\Pr\big[|X - \mathbb{E}[X]| \ge r\big] \le 1 - p$. By Lemma 6, every execution converges. The lemma then follows. ∎
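The Chebyshev step can be illustrated empirically: the accumulated drift is a sum of independent zero-mean Laplace terms with geometrically decaying scales, so its variance is a convergent geometric series and the tail bound follows. The parameter values below are arbitrary:

```python
import numpy as np

# Drift X = sum_t eta(t), eta(t) ~ Lap(c*q^t): Var(X) = sum_t 2 c^2 q^(2t).
rng = np.random.default_rng(3)
c, q, rounds, trials = 1.0, 0.8, 100, 100_000
var = sum(2 * c**2 * q ** (2 * t) for t in range(rounds))
scales = c * q ** np.arange(rounds)
X = rng.laplace(scale=scales, size=(trials, rounds)).sum(axis=1)
assert abs(X.var() - var) < 0.1 * var            # empirical variance matches
r = np.sqrt(var / 0.1)                           # radius for failure prob 0.1
assert np.mean(np.abs(X) >= r) <= var / r**2     # Chebyshev bound holds
```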

The trade-off between accuracy and privacy of this mechanism is similar to that of the client-server mechanism of Section 3 and we discuss them together next.

5 Discussion on Results

We proposed two mechanisms that achieve iterative private consensus over an infinite horizon by adding a stream of noise to the messages sent by the clients (to each other or to the server). The scale of the Laplace distribution of the noise added in every round decreases and ultimately converges to $0$, that is, the noise distribution converges to the Dirac distribution at $0$. The mechanisms have three parameters: the linear combination factor $d$, the initial noise scale $c$, and the noise decay rate $q$. The constraint to achieve privacy over an infinite horizon is $q > 1 - d$, which roughly means that the noise should decay slower than the system's inertia so as to “cover” the trail of the dynamics.

From Lemmas 1 and 5 we observe that $\epsilon$ decreases with larger $c$ or $q$. This implies that the system has higher privacy if the noise values are picked from a Laplace distribution with larger parameters (and hence larger standard deviation). From Lemmas 4 and 8, however, a more dispersive noise results in worse accuracy. The tradeoff between privacy and accuracy for different noise decay rates ($q$) is illustrated in Figure 1. If we fix the remaining parameters, we observe that for $\epsilon$-differential privacy with $n$ agents and accuracy probability $p$, the accuracy radius scales inversely with $\epsilon$. For specific values of these parameters, the dependence between privacy and accuracy is shown in Figure 1.

Figure 1: Privacy and accuracy as functions of the noise decay rate $q$ in the centralized mechanism, for fixed values of the parameters $d$, $c$, and $n$.

6 Related Work

Our consensus mechanism has similarities with the protocols for computing sums and inner products presented in [1], in that all these protocols rely on adding noise to the states communicated among the participants. Our mechanism differs in the type of noise (geometrically decaying Laplace) that is added. Moreover, in our setup, the computed outputs are used as feedback for updating the states of the participants to achieve convergence.

In [7], a framework for securely computing general types of aggregates is presented. Every client splits its private data into pieces and sends them to different servers. If at least one server is not compromised, then the iterative aggregate computation is guaranteed to preserve the privacy of the individuals. Our mechanism is quite different, and it guarantees privacy even if the only server is compromised.

In [25], the authors present distributed protocols for computing maximum values among all participants. In this protocol, the clients communicate a global vector of maximum values over a ring network. In each step, the client processing the global vector either honestly replaces values in the global state when one of its local values is larger (with a probability that decays exponentially), or it replaces the values in the vector with randomly generated small numbers. The metric of privacy is Loss of Privacy, which characterizes the adversary's additional knowledge from observing intermediate results beyond the final results. This work uses a quite different definition of privacy compared to ours. In addition, some features of our mechanism, such as feedback-driven updates and an infinite horizon, are not present in that protocol.

7 Conclusions and Future Directions

In this paper, we formalized the Synchronous Private Consensus problem and proposed two mechanisms for solving it. The first relies on the client-server model of communication and the second is purely distributed. The key idea is to add to the clients' messages to the server (or to other clients) random noise drawn from a Laplace distribution that converges to the Dirac distribution. The messages with large noise give differential privacy, and as the noise level attenuates, the system converges to the target value with a probability that depends inversely on the privacy parameter and on the number of participants. The feedback from the server is the mean of all the noisy messages sent, and the clients update their states by taking a linear combination of this feedback and their previous state. We formally proved the privacy and convergence of this mechanism. The key proof technique for privacy relies on constructing a bijective map between two sets of executions starting from different but adjacent initial states.

To the best of our knowledge, this is the first investigation of differential privacy in the context of control systems where the ultimate goal is convergence. Our results suggest several directions for future work. First, we aim to apply our method to a larger set of control problems that arise from iterative closed-loop control. Novel applications arise from differential privacy and, more generally, security of distributed cyber-physical systems where the physical state is updated smoothly according to differential equations.

Second, we are also interested in exploring the tradeoff between privacy and performance under more general system dynamics. In the SPC problem we discussed, the dynamics of the system are discrete and linear. We expect to extend the analysis to continuous or nonlinear systems. Also, establishing a lower bound for the problem would be of significance.

An orthogonal direction is to develop automated verification and synthesis algorithms for controllers that preserve differential privacy. Along these lines, a verification framework for streaming algorithms has been presented in [2, 20]. The challenge will be to extend these ideas to synthesis and feedback control systems.


  • [1] E. Abbe, A. E. Khandani, and A. W. Lo. Privacy-preserving methods for sharing financial risk exposures. CoRR, abs/1111.5228, 2011.
  • [2] G. Barthe, B. Köpf, F. Olmedo, and S. Z. Béguelin. Probabilistic relational reasoning for differential privacy. In Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2012.
  • [3] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and distributed computation: numerical methods. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1989.
  • [4] V. Blondel, J. Hendrickx, A. Olshevsky, and J. Tsitsiklis. Convergence in multiagent coordination consensus and flocking. In Proceedings of the Joint forty-fourth IEEE Conference on Decision and Control and European Control Conference, pages 2996–3000, 2005.
  • [5] K. M. Chandy, S. Mitra, and C. Pilotto. Convergence verification: From shared memory to partially synchronous systems. In In proceedings of Formal Modeling and Analysis of Timed Systems (FORMATS‘08), volume 5215 of LNCS, pages 217–231. Springer Verlag, 2008.
  • [6] G. Cybenko. Load balancing for distributed memory multiprocessors. Journal of Parallel and Distributed Computing, 7:279–301, 1989.
  • [7] Y. Duan, J. Canny, and J. Zhan. P4p: practical large-scale privacy-preserving distributed computation robust against malicious users. In Proceedings of the 19th USENIX conference on Security, USENIX Security’10, pages 14–14, Berkeley, CA, USA, 2010. USENIX Association.
  • [8] C. Dwork. Differential privacy. In AUTOMATA, LANGUAGES AND PROGRAMMING, volume 4052 of Lecture Notes in Computer Science, 2006.
  • [9] C. Dwork. Differential privacy: a survey of results. In Proceedings of the 5th international conference on Theory and applications of models of computation, TAMC’08, pages 1–19, Berlin, Heidelberg, 2008. Springer-Verlag.
  • [10] C. Dwork, M. Naor, G. Rothblum, and T. Pitassi. Differential privacy under continual observation. In Proceedings of the 42nd ACM symposium on Theory of computing, 2010.
  • [11] M. Huang and J. Manton. Coordination and consensus of networked agents with noisy measurements: stochastic algorithms and asymptotic behavior. SIAM Journal on Control and Optimization, 48, 2009.
  • [12] A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6):988–1001, 2003.
  • [13] T. Johnson and S. Mitra. Safe flocking in spite of actuator faults using directional failure detectors. Journal of Nonlinear Systems and Applications, 2(1-2):73–95, 2011.
  • [14] M. Mesbahi and M. Egerstedt. Graph-theoretic Methods in Multiagent Networks. Princeton University Press, 2010.
  • [15] S. Mitra and K. M. Chandy. A formalized theory for verifying stability and convergence of automata in pvs. In In proceedings of Theorem Proving in Higher Order Logics (TPHOLS‘08). LNCS, 2008. to appear.
  • [16] R. Olfati-saber. Distributed kalman filtering and sensor fusion in sensor networks. In Network Embedded Sensing and Control, volume LNCIS 331, pages 157–167. Springer-Verlag, 2006.
  • [17] R. Olfati-Saber, J. Fax, and R. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, January 2007.
  • [18] R. Saber and R. Murray. Flocking with obstacle avoidance: cooperation with limited communication in mobile networks. volume 2, pages 2022–2028 Vol.2, Dec. 2003.
  • [19] H. G. Tanner, A. Jadbabaie, and G. J. Pappas. Flocking in fixed and switching networks. IEEE Transactions on Automatic Control, 52:866–868, 2007.
  • [20] M. C. Tschantz, D. Kaynar, and A. Datta. Formal verification of differential privacy for interactive systems. Electronic Notes in Theoretical Computer Science, 2011.
  • [21] J. N. Tsitsiklis. Problems in Decentralized Decision Making and Computation. PhD thesis, Department of EECS, MIT, November 1984.
  • [22] J. N. Tsitsiklis. On the stability of asynchronous iterative processes. Theory of Computing Systems, 20(1):137–153, December 1987.
  • [23] L. Xiao, S. Boyd, and S.-J. Kim. Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput., 67(1):33–46, Jan. 2007.
  • [24] L. Xiao, S. Boyd, and S. Lall. A scheme for robust distributed sensor fusion based on average consensus. In Proceedings of the 4th international symposium on Information processing in sensor networks, IPSN ’05, Piscataway, NJ, USA, 2005. IEEE Press.
  • [25] L. Xiong, S. Chitti, and L. Liu. Preserving data privacy in outsourcing data aggregation services. ACM Trans. Internet Technol., 7(3), Aug. 2007.