Can You Really Backdoor Federated Learning?


Ziteng Sun
Cornell University
zs335@cornell.edu
Peter Kairouz
Google
kairouz@google.com
Ananda Theertha Suresh
Google
theertha@google.com
H. Brendan McMahan
Google
mcmahan@google.com
Work done while ZS was an intern at Google.
Abstract

The decentralized nature of federated learning makes detecting and defending against adversarial attacks a challenging task. This paper focuses on backdoor attacks in the federated learning setting, where the goal of the adversary is to reduce the performance of the model on targeted tasks while maintaining a good performance on the main task. Unlike existing works, we allow non-malicious clients to have correctly labeled samples from the targeted tasks. We conduct a comprehensive study of backdoor attacks and defenses for the EMNIST dataset, a real-life, user-partitioned, and non-iid dataset. We observe that in the absence of defenses, the performance of the attack largely depends on the fraction of adversaries present and the “complexity” of the targeted task. Moreover, we show that norm clipping and “weak” differential privacy mitigate the attacks without hurting the overall performance. We have implemented the attacks and defenses in TensorFlow Federated (TFF), a TensorFlow framework for federated learning. In open sourcing our code, our goal is to encourage researchers to contribute new attacks and defenses and evaluate them on standard federated datasets.

1 Introduction

Modern machine learning systems can be vulnerable to various kinds of failures, such as bugs in preprocessing pipelines and noisy training labels, as well as attacks that target each step of the system’s training and deployment pipelines. Examples of attacks include data and model update poisoning (4; 20), model evasion (28; 4; 15), model stealing (30), and data inference attacks on users’ private training data (26).

The distributed nature of federated learning (22), particularly when augmented with secure aggregation protocols (6), makes detecting and correcting for these failures and attacks a particularly challenging task. Adversarial attacks can be broadly classified into two types based on the goal of the attack: untargeted and targeted attacks. Under untargeted attacks (5; 14; 11), the goal of the adversary is to corrupt the model in such a way that it fails to achieve near-optimal performance on the main task at hand (e.g., classification), often referred to as the primary task. Under targeted attacks (often referred to as backdoor attacks) (9; 18; 16), the goal of the adversary is to ensure that the learned model behaves differently on certain targeted sub-tasks while maintaining good overall performance on the primary task. For example, in image classification, the attacker may want the model to misclassify some “green cars” as birds while ensuring that other cars are correctly classified.

For both targeted and untargeted attacks, the attacks can be further classified into two types based on the capability of the attacker: model update poisoning or data poisoning. In data poisoning attacks (4; 27; 33; 24; 17), the attacker can modify a subset of the training samples, and this subset is unknown to the learner. In federated learning systems, since the training process runs on local devices, fully compromised clients can change their model updates arbitrarily, which is called a model update poisoning attack (2; 3). Model update poisoning attacks are even harder to counter when secure aggregation (SecAgg) (6), which ensures that the server cannot inspect each user’s update, is deployed in the aggregation of local updates.

Since untargeted attacks reduce the overall performance of the primary task, they are easier to detect. On the other hand, targeted backdoor attacks are harder to detect because the goal of the adversary is often unknown a priori. Hence, following (2; 3), we consider targeted model update poisoning attacks and refer to them as backdoor attacks. Existing approaches against backdoor attacks (27; 19; 31; 12; 32; 25) either require a careful examination of the training data or full control of the training process at the server, neither of which may apply in the federated learning case. We evaluate various attacks and defenses proposed in recent papers on a medium-scale federated learning task with more realistic parameters using TensorFlow Federated (29). Our goal, in open sourcing our code, is to encourage researchers to evaluate new attacks and defenses on standard tasks.

2 Backdoor Attack Scenario

We consider the notation and definitions of federated learning as defined in (22). (While (22) considers relatively small problems, in more realistic scenarios for mobile devices the total number of users may be $10^6$ or higher, with the number of clients selected per round typically constant, say 100 to 1000.) In particular, let $N$ be the total number of users. At each round $t$, the server randomly selects $m$ clients for some $m \le N$. Let $S_t$ be this set and $n_k$ be the number of samples at client $k$. Denote the model parameters at round $t$ by $w_t$. Each selected user $k$ computes a model update, denoted by $\Delta w_t^k$, based on their local data. The server updates its model by aggregating the $\Delta w_t^k$'s, i.e.,

$$w_{t+1} = w_t + \eta \sum_{k \in S_t} \frac{n_k}{\sum_{j \in S_t} n_j} \, \Delta w_t^k,$$

where $\eta$ is the server learning rate. We model the parameters of backdoor attacks as follows.
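To make the aggregation rule concrete, here is a minimal NumPy sketch (not the paper's TensorFlow Federated implementation; the flat-vector representation and function name are our own):

```python
# Weighted FedAvg-style server update: w_{t+1} = w_t + eta * sum_k (n_k / sum_j n_j) * delta_k.
import numpy as np

def server_update(w_t, client_updates, client_num_samples, eta=1.0):
    """w_t: flat parameter vector; client_updates: list of same-shaped update vectors."""
    weights = np.asarray(client_num_samples, dtype=float)
    weights /= weights.sum()                      # n_k / sum_j n_j
    aggregate = sum(wk * dk for wk, dk in zip(weights, client_updates))
    return w_t + eta * aggregate                  # eta is the server learning rate
```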

Sampling of adversaries. If an $\alpha$ fraction of the clients are completely compromised, then each round may contain anywhere between $0$ and $m$ adversaries. Under random sampling of clients, the number of adversaries in each round follows a hypergeometric distribution. We refer to this attack model as the random sampling attack. Another model we consider in this work is the fixed frequency attack, in which a single adversary appears once every $1/f$ rounds (2; 3). For a fair comparison between the two attack models, we set the frequency $f$ equal to the expected number of adversaries per round under random sampling (i.e., $f = \alpha m$, so the period between attacks is inversely proportional to the total number of attackers).
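The following sketch (illustrative only; the client counts are the EMNIST-scale values used in Section 5) simulates how many adversarial updates the server receives per round under the two attack models:

```python
# Compare adversary arrivals under random sampling vs. fixed frequency.
import numpy as np

rng = np.random.default_rng(0)
N, m = 3383, 30                 # total clients, clients selected per round (Section 5)
num_adversaries = 113           # compromised clients, alpha = 113/3383, about 3.3%
rounds = 1000

# Random sampling attack: adversaries per round follow a hypergeometric distribution
# (draw m clients without replacement from N, of which num_adversaries are malicious).
random_sampling = rng.hypergeometric(num_adversaries, N - num_adversaries, m, size=rounds)

# Fixed frequency attack: exactly one adversary every 1/(alpha * m) rounds.
alpha = num_adversaries / N
period = max(1, int(round(1.0 / (alpha * m))))   # = 1 round for these values
fixed_frequency = np.array([1 if t % period == 0 else 0 for t in range(rounds)])

print("mean adversaries per round:", random_sampling.mean(), fixed_frequency.mean())
```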

Backdoor tasks. Recall that in backdoor attacks, the goal of the adversary is to ensure that the model fails on some targeted tasks. For example, in text classification the backdoor task might be to suggest a particular restaurant’s name after observing the phrase “my favorite restaurant is”. Unlike (2; 3), we allow non-malicious clients to have correctly labeled samples from the targeted backdoor tasks. For instance, if the adversary wants the model to misclassify some green cars as birds, we allow non-malicious clients to have samples from these targeted green cars correctly labeled as cars.

Further, we form the backdoor task by grouping examples from multiple selected “target clients”. Since examples from different target clients follow different distributions, we refer to the number of target clients as the “number of backdoor tasks” and explore its effect on the attack’s success rate. Intuitively, the more backdoor tasks we have, the richer the feature space the attacker is trying to break, and therefore the harder it is for the attacker to successfully backdoor the model without breaking its performance on the main task.

3 Model Update Poisoning Attacks

We focus on model update poisoning attacks based on the model replacement paradigm proposed by (2; 3). When only one attacker is selected in round $t$ (WLOG assume it is client 1), the attacker attempts to replace the whole model by a backdoored model $w^*$ by sending

$$\Delta w_t^1 = \beta \, (w^* - w_t), \qquad (1)$$

where $\beta = \frac{\sum_{j \in S_t} n_j}{\eta \, n_1}$ is a boost factor. Then we have

$$w_{t+1} = w^* + \eta \sum_{k \in S_t,\, k \neq 1} \frac{n_k}{\sum_{j \in S_t} n_j} \, \Delta w_t^k,$$

which will be in a small neighbourhood of $w^*$ if we assume the model has sufficiently converged and hence the other updates $\Delta w_t^k$ for $k \neq 1$ are small. If multiple attackers appear in the same round, we assume that they can coordinate with each other and divide this update evenly.
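A minimal sketch of the model replacement update in (1), assuming the attacker knows the server learning rate and the total weight of the selected clients (function and variable names are our own):

```python
import numpy as np

def boosted_malicious_update(w_t, w_star, n_attacker, total_samples, eta=1.0):
    """Return the boosted update beta * (w_star - w_t) from Eq. (1)."""
    beta = total_samples / (eta * n_attacker)     # boost factor
    return beta * (w_star - w_t)
```

Plugged into the aggregation rule of Section 2, the attacker's term alone moves the global model exactly to $w^*$; the remaining benign updates only perturb it slightly near convergence.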

Obtaining a backdoored model. To obtain a backdoored model $w^*$, we assume that the attacker has a set $D_{\mathrm{bkd}}$ which describes the backdoor task – for example, different kinds of green cars labeled as birds. We also assume the attacker has a set $D_{\mathrm{trn}}$ of training samples generated from the true distribution. Note that for practical applications, such data may be harder for the attacker to obtain.

Unconstrained boosted backdoor attack. In this case, the adversary trains a model based on $D_{\mathrm{bkd}}$ and $D_{\mathrm{trn}}$ without any constraints and sends the update based on (1) back to the service provider. One popular training strategy is to initialize with $w_t$ and train the model on $D_{\mathrm{bkd}} \cup D_{\mathrm{trn}}$ for a few epochs. This attack generally results in a large update norm and serves as a baseline.

Norm bounded backdoor attack. Unconstrained backdoor attacks can be defended against by norm clipping, as discussed below. To overcome this, we consider the norm bounded backdoor attack. Here, at each round, the attacker trains on the backdoor task subject to the constraint that the model update has norm at most $M/\beta$, so that the update has norm bounded by $M$ after being boosted by the factor $\beta$. This can be done by training the model with multiple rounds of projected gradient descent, where in each round we train the model using the unconstrained training strategy and project it back onto the $\ell_2$ ball of radius $M/\beta$ around $w_t$.
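A sketch of this projected-gradient-descent training loop (the local `train_step` on the attacker's data is assumed and not specified here):

```python
import numpy as np

def project_to_ball(w, center, radius):
    """Project w onto the L2 ball of the given radius around center."""
    delta = w - center
    norm = np.linalg.norm(delta)
    if norm > radius:
        delta *= radius / norm
    return center + delta

def norm_bounded_attack(w_t, train_step, M, beta, num_steps=100):
    w = w_t.copy()
    for _ in range(num_steps):
        w = train_step(w)                         # unconstrained step on backdoor + clean data
        w = project_to_ball(w, w_t, M / beta)     # keep the (pre-boost) update small
    return beta * (w - w_t)                       # boosted update with norm at most M
```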

4 Defenses

We consider the following defenses for backdoor attacks.

Norm thresholding of updates. Since boosted attacks are likely to produce updates with large norms, a reasonable defense is for the server to simply ignore updates whose norm is above some threshold $M$; in more complex schemes, $M$ could even be chosen in a randomized fashion. However, in the spirit of investigating what a strong adversary might accomplish, we assume the adversary knows the threshold $M$ and can hence always return malicious updates within this magnitude. Giving this strong advantage to the adversary makes the norm-bounding defense equivalent to the following norm-clipping approach:

$$w_{t+1} = w_t + \eta \sum_{k \in S_t} \frac{n_k}{\sum_{j \in S_t} n_j} \cdot \frac{\Delta w_t^k}{\max\!\left(1, \|\Delta w_t^k\|_2 / M\right)}.$$

This ensures that the norm of each model update is at most $M$, which limits the influence any single client, malicious or not, can have on the aggregate model.
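A minimal server-side sketch of this norm-clipping aggregation (again NumPy rather than the paper's TFF code):

```python
import numpy as np

def clip_update(delta, M):
    """Scale delta down so its L2 norm is at most M."""
    return delta / max(1.0, np.linalg.norm(delta) / M)

def clipped_server_update(w_t, client_updates, client_num_samples, M, eta=1.0):
    clipped = [clip_update(d, M) for d in client_updates]
    weights = np.asarray(client_num_samples, dtype=float)
    weights /= weights.sum()
    return w_t + eta * sum(wk * dk for wk, dk in zip(weights, clipped))
```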

(Weak) differential privacy. A mathematically rigorous way of defending against backdoor attacks is to train models with differential privacy (21; 13; 1). These approaches were extended to the federated setting by (23), by first clipping updates (as above) and then adding Gaussian noise. We explore the effect of this method. However, the amount of noise traditionally added to obtain a reasonable differential privacy guarantee is relatively large. Since our goal is not privacy but preventing attacks, we add a small amount of noise that is empirically sufficient to limit the success of attacks.
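A sketch of the clip-then-add-noise server update; the noise scale here is a tunable parameter rather than one calibrated for a formal differential privacy guarantee:

```python
import numpy as np

def dp_server_update(w_t, client_updates, client_num_samples, M, sigma, eta=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = [d / max(1.0, np.linalg.norm(d) / M) for d in client_updates]  # norm clipping
    weights = np.asarray(client_num_samples, dtype=float)
    weights /= weights.sum()
    aggregate = sum(wk * dk for wk, dk in zip(weights, clipped))
    noise = rng.normal(0.0, sigma, size=aggregate.shape)   # per-coordinate Gaussian noise
    return w_t + eta * (aggregate + noise)
```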

5 Experiments

In the above backdoor attack framework, we conduct experiments on the EMNIST dataset (10; 8). This is a writer-annotated handwritten digit classification dataset collected from 3383 users, each with roughly 100 images of digits written in their own unique style. We train a five-layer convolutional neural network with two convolution layers, one max-pooling layer, and two dense layers using federated learning in the TensorFlow Federated framework (29). At each round of training, we select 30 clients. Each client trains the model on their own local data for 5 epochs with batch size 20 and client learning rate 0.1. We use a server learning rate of 1.
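A Keras sketch of a model with this layer structure (filter counts, kernel sizes, and hidden width below are illustrative assumptions, not the paper's exact values):

```python
import tensorflow as tf

def build_emnist_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),   # 10 digit classes
    ])
```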

In our experiments, the backdoor task is to classify 7s from multiple selected “target clients” as 1s. Note that our attack does not require 7s from other clients to be classified as 1s. Since 7s coming from different target clients follow different distributions (the writers have different writing styles), we refer to the number of target clients as the “number of backdoor tasks”.
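A minimal sketch (not the paper's data pipeline) of how the attacker's poisoned examples can be assembled from the target clients' data:

```python
import numpy as np

def build_backdoor_dataset(target_clients_data, source_label=7, target_label=1):
    """target_clients_data: list of (images, labels) NumPy array pairs, one per target client."""
    images, labels = [], []
    for x, y in target_clients_data:
        mask = (y == source_label)                          # the target clients' 7s...
        images.append(x[mask])
        labels.append(np.full(mask.sum(), target_label))    # ...relabeled as 1s
    return np.concatenate(images), np.concatenate(labels)
```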

Random sampling vs. fixed frequency attacks. To begin with, we conduct experiments for the two attack models discussed in Section 2 under different fractions of adversaries. The results are shown in Figure 1 (unconstrained attack) and Figure 2 (norm bounded attack); additional plots are shown in Figures 5 and 6 in the appendix. The figures show that the two attack models behave similarly, although fixed-frequency attacks are slightly more effective than random sampling attacks. Furthermore, with a fixed-frequency attack it is easy to tell whether an attacker participated in a particular round. Hence, to give the attacker an additional advantage and for ease of interpretability, we focus our analysis on fixed-frequency attacks in the rest of this section.

Fraction of corrupted users. In Figures 1 and 2 (also Figures 5 and 6 in the appendix), we consider a backdoor objective consisting of 30 backdoor tasks (around 300 images). We perform unconstrained and norm-bounded attacks with different fractions of malicious users; both fixed-frequency (left column) and random sampling (right column) attacks are considered. For the fixed-frequency attack, this corresponds to attack frequencies ranging from 1 (attacking every round) down to 1/10 (attacking once every ten rounds). From these experiments, we can infer that the success of the backdoor attack largely depends on the fraction of adversaries, and the performance of the attack degrades quickly as the fraction of fully compromised users decreases.

(a) Attack frequency = 1
(b) Number of attackers = 113
(c) Attack frequency = 1/10
(d) Number of attackers = 11
Figure 1: Unconstrained attack for fixed-frequency attacks (left column) and random sampling attacks (right column) with different fractions of attackers. The green line is the cumulative mean of the backdoor accuracy.
(a) Attack frequency = 1
(b) Number of attackers = 113
(c) Attack frequency = 1/10
(d) Number of attackers = 11
Figure 2: Constrained attack with norm bound 10 for fixed-frequency attacks (left column) and random sampling attacks (right column) with different fractions of attackers. The green line is the cumulative mean of the backdoor accuracy.

Number of backdoor tasks. The number of backdoor tasks affects the attack in two ways. First, the more backdoor tasks there are, the harder it is to backdoor a fixed-capacity model while maintaining its performance on the main task. Second, since we assume benign users have correctly labeled samples from the backdoor tasks, they can correct the attacked model with these samples. In Figure 3, we consider the norm bounded attack with norm bound 10 and 10, 20, 30, and 50 backdoor tasks. We can see from the plots that the more backdoor tasks there are, the harder it is to fit a malicious model.

(a) Backdoor Size = 10
(b) Backdoor Size = 20
(c) Backdoor Size = 30
(d) Backdoor Size = 50
Figure 3: The effect of backdoor size (number of backdoor tasks) for the constrained attack with norm bound 10.

Norm bound for the update. In Figure 4(a), we consider norm-bounded updates from each user. We assume one attacker appears in every round, which corresponds to roughly 3.3% of users being corrupted, and we consider norm bounds of 3, 5, and 10 (the 90th percentile of the benign users' update norms is below 2 for most rounds), which translates to a norm bound of $M/\beta$ on the attacker's update before boosting. We can see from the plot that selecting 3 as the norm bound successfully mitigates the attack with almost no effect on the performance of the main task. Hence, norm bounding appears to be a valid defense against current backdoor attacks.

(a) Norm clipping bound: blue: unattacked baseline, green: 3, red: 5, black: 10
(b) Gaussian noise (norm bound = 5): red and green: with added Gaussian noise, blue: unattacked baseline
Figure 4: Effect of norm bounding and Gaussian noise. Dotted: main task. Solid: backdoor task.

Weak differential privacy. In Figure 4(b), we consider norm bounding plus Gaussian noise. We use a norm bound of 5, which by itself does not mitigate the attack, and add independent Gaussian noise with variance 0.025 to each coordinate. From the plots, we can see that adding Gaussian noise helps mitigate the attack beyond norm clipping alone, without hurting the overall performance much. We note that, similar to previous works on differential privacy (1), we do not provide a recipe for selecting the norm bound and the variance of the Gaussian noise. Rather, we show that some reasonable values motivated by the differential privacy literature perform well. Discovering algorithms to learn these bounds and noise values remains an interesting open research direction.

6 Discussion

We studied backdoor attacks and defenses for federated learning on the EMNIST dataset, a realistic user-partitioned dataset. In the absence of any defense, we showed that the performance of the adversary largely depends on the fraction of adversaries present; hence, for reasonable success, the adversary needs to control a large number of clients. Perhaps surprisingly, norm clipping limits the success of known backdoor attacks considerably. Furthermore, adding a small amount of Gaussian noise, in addition to norm clipping, can further mitigate the effect of adversaries. This gives rise to several interesting questions.

Better attacks and defenses. In the norm bounded case, multiple iterations of “pre-boosted” projected gradient descent may not be the best possible attack in a single round. In fact, the adversary may attempt to directly craft the “worst-case” model update that satisfies the norm bound (without any boosting). Moreover, if the attacker knows they can attack in multiple rounds, there might be better strategies for doing so under a norm bound. Similarly, more advanced defenses should be investigated.

Effect of model capacity. Another factor that may affect the performance of backdoor attacks is model capacity, especially since it is conjectured that backdoor attacks exploit the spare capacity of deep networks (19). How model capacity interacts with backdoor attacks is an interesting question from both theoretical and practical perspectives.

Interaction of defenses with SecAgg. Norm-based defenses require the server to bound the norm of each individual update, which is at odds with secure aggregation, where the server only sees the aggregate. Existing approaches to range proofs (e.g., Bulletproofs (7)) can guarantee such bounds using secure multiparty computation, but implementing them in a computationally and communication efficient way is still an active research direction. Norm clipping can thus be made compatible with SecAgg given an efficient implementation of multi-party range proofs.

References

  • [1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318.
  • [2] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov (2018). How to backdoor federated learning. arXiv preprint arXiv:1807.00459.
  • [3] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo (2019). Analyzing federated learning through an adversarial lens. In Proceedings of the 36th International Conference on Machine Learning, pp. 634–643.
  • [4] B. Biggio, B. Nelson, and P. Laskov (2012). Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on Machine Learning, pp. 1467–1474.
  • [5] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer (2017). Machine learning with adversaries: Byzantine tolerant gradient descent. In Advances in Neural Information Processing Systems 30, pp. 119–129.
  • [6] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth (2017). Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175–1191.
  • [7] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille, and G. Maxwell (2018). Bulletproofs: short proofs for confidential transactions and more. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 315–334.
  • [8] S. Caldas, P. Wu, T. Li, J. Konečný, H. B. McMahan, V. Smith, and A. Talwalkar (2018). LEAF: a benchmark for federated settings. arXiv preprint arXiv:1812.01097.
  • [9] X. Chen, C. Liu, B. Li, K. Lu, and D. Song (2017). Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526.
  • [10] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik (2017). EMNIST: extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2921–2926.
  • [11] G. Damaskinos, E. M. El Mhamdi, R. Guerraoui, R. Patra, and M. Taziki (2018). Asynchronous Byzantine machine learning (the case of SGD). arXiv preprint arXiv:1802.07928.
  • [12] I. Diakonikolas, G. Kamath, D. Kane, J. Li, J. Steinhardt, and A. Stewart (2019). Sever: a robust meta-algorithm for stochastic optimization. In Proceedings of the 36th International Conference on Machine Learning, pp. 1596–1606.
  • [13] C. Dwork, F. McSherry, K. Nissim, and A. Smith (2006). Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference.
  • [14] E. M. El Mhamdi, R. Guerraoui, and S. Rouault (2018). The hidden vulnerability of distributed learning in Byzantium. In Proceedings of the 35th International Conference on Machine Learning, pp. 3521–3530.
  • [15] I. J. Goodfellow, J. Shlens, and C. Szegedy (2015). Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations (ICLR).
  • [16] T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg (2019). BadNets: evaluating backdooring attacks on deep neural networks. IEEE Access 7, pp. 47230–47244.
  • [17] P. J. Huber (1997). Robustness: where are we now? Lecture Notes-Monograph Series, pp. 487–498.
  • [18] C. Liao, H. Zhong, A. Squicciarini, S. Zhu, and D. Miller (2018). Backdoor embedding in convolutional neural network models via invisible perturbation. arXiv preprint arXiv:1808.10307.
  • [19] K. Liu, B. Dolan-Gavitt, and S. Garg (2018). Fine-pruning: defending against backdooring attacks on deep neural networks. In International Symposium on Research in Attacks, Intrusions, and Defenses, pp. 273–294.
  • [20] Y. Liu, S. Ma, Y. Aafer, W. Lee, J. Zhai, W. Wang, and X. Zhang (2018). Trojaning attack on neural networks. In 25th Annual Network and Distributed System Security Symposium (NDSS).
  • [21] Y. Ma, X. Zhu, and J. Hsu (2019). Data poisoning against differentially-private learners: attacks and defenses. arXiv preprint arXiv:1903.09860.
  • [22] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017). Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273–1282.
  • [23] H. B. McMahan, D. Ramage, K. Talwar, and L. Zhang (2018). Learning differentially private recurrent language models. In International Conference on Learning Representations (ICLR).
  • [24] S. Mei and X. Zhu (2015). Using machine teaching to identify optimal training-set attacks on machine learners. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 2871–2877.
  • [25] Y. Shen and S. Sanghavi (2019). Learning with bad training data via iterative trimmed loss minimization. In Proceedings of the 36th International Conference on Machine Learning, pp. 5739–5748.
  • [26] R. Shokri, M. Stronati, C. Song, and V. Shmatikov (2017). Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18.
  • [27] J. Steinhardt, P. W. W. Koh, and P. S. Liang (2017). Certified defenses for data poisoning attacks. In Advances in Neural Information Processing Systems, pp. 3517–3529.
  • [28] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus (2014). Intriguing properties of neural networks. In 2nd International Conference on Learning Representations (ICLR).
  • [29] TensorFlow Federated. https://www.tensorflow.org/federated
  • [30] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart (2016). Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium, pp. 601–618.
  • [31] B. Tran, J. Li, and A. Madry (2018). Spectral signatures in backdoor attacks. In Advances in Neural Information Processing Systems, pp. 8000–8010.
  • [32] B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao (2019). Neural Cleanse: identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP).
  • [33] H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli (2015). Support vector machines under adversarial label contamination. Neurocomputing 160, pp. 53–62.

Appendix A Additional figures for experiments

(a) Attack frequency = 1/3
(b) Number of attackers = 38
(c) Attack frequency = 1/5
(d) Number of attackers = 23
Figure 5: Unconstrained attack for fixed-frequency attacks (left column) and random sampling attacks (right column) with different fractions of attackers. The green line is the cumulative mean of the backdoor accuracy.
(a) Attack frequency = 1/3
(b) Number of attackers = 38
(c) Attack frequency = 1/5
(d) Number of attackers = 23
Figure 6: Constrained attack with norm bound 10 for fixed-frequency attacks (left column) and random sampling attacks (right column) with different fractions of attackers. The green line is the cumulative mean of the backdoor accuracy.