Network-Density-Controlled Decentralized Parallel Stochastic Gradient Descent in Wireless Systems

Abstract

This paper proposes a communication strategy for decentralized learning in wireless systems. Our discussion is based on decentralized parallel stochastic gradient descent (D-PSGD), which is one of the state-of-the-art algorithms for decentralized learning. The main contribution of this paper is to raise a novel open question for decentralized learning in wireless systems: the density of a network topology may significantly influence the runtime performance of D-PSGD. In general, it is difficult to guarantee delay-free communications without any communication deterioration in real wireless network systems because of path loss and multi-path fading. These factors significantly degrade the runtime performance of D-PSGD. To alleviate such problems, we first analyze the runtime performance of D-PSGD by considering real wireless systems. This analysis yields the key insights that a dense network topology (1) does not significantly improve the training accuracy of D-PSGD compared to a sparse one, and (2) strongly degrades the runtime performance because it generally requires low-rate transmissions. Based on these findings, we propose a novel communication strategy in which each node estimates the optimal transmission rate such that the communication time during D-PSGD optimization is minimized under a constraint on network density, which is characterized by the radio propagation properties. The proposed strategy improves the runtime performance of D-PSGD in wireless systems. Numerical simulations reveal that the proposed strategy is capable of enhancing the runtime performance of D-PSGD.

Decentralized learning, stochastic gradient descent, radio propagation, edge computing

I Introduction

Driven by the rapid development of deep neural networks (DNNs), many machine learning techniques have been proposed over the past decade. In general, constructing an accurate DNN incurs high computational costs and requires massive numbers of training samples. This problem has motivated many researchers to investigate machine learning techniques that exploit distributed computing resources, such as multiple graphics processing units in one computer, multiple servers in a data center, or smartphones distributed over a city [9, 4]. If one can efficiently utilize distributed computation resources, classifiers (or regressors) can be trained in a shorter time period compared to utilizing one machine with single-thread computation.

Several researchers have proposed algorithms for distributed machine learning [15, 11, 6, 3, 5, 12]. According to these past studies, we can categorize distributed machine learning techniques into (a) centralized [15, 11, 6], and (b) decentralized settings [3, 5, 12].

Centralized algorithms assume the presence of a centralized server to which all nodes can connect. Generally, centralized algorithms construct more accurate classifiers than decentralized algorithms because a centralized server allows such algorithms to exploit the conditions of all computation nodes (e.g., number of datasets, computational capabilities, and network status), facilitating the construction of an optimal learning strategy. However, applications of centralized algorithms are restricted to specific situations, such as federated learning [6, 7, 8], because all nodes must communicate with the centralized server. In contrast, decentralized algorithms enable systems to construct a classifier in a distributed manner over a local wireless network, thereby facilitating novel applications of machine learning such as image recognition in cooperative autonomous driving [14] and the detection of white space in spectrum sharing systems [1], without any cloud or edge computing servers. To explore further applications of distributed machine learning, this paper studies decentralized learning algorithms in wireless systems.

I-A Problem of Decentralized Learning in Wireless Systems

There is a crucial problem that must be considered to realize decentralized machine learning on wireless network systems. Existing algorithms for decentralized machine learning [3, 5, 12] mainly consist of the following two steps: (1) updating local models and (2) communicating between nodes. In the procedure for local model updating, each computation node refines the model parameters of the classifier to be trained utilizing its own dataset (specific training samples at each computation node). During the communication procedure, the updated model parameters are shared between neighboring nodes. These procedures are performed iteratively until training loss converges. However, the communication procedure tends to be a bottleneck in terms of runtime performance because the number of model parameters that must be communicated is often enormous (e.g., VGG16 [10] requires more than 100 million model parameters). Furthermore, in wireless systems, the communication time required to guarantee successful communication tends to increase based on path loss and multipath fading [2]. These factors significantly deteriorate the runtime performance of machine learning.

This problem is challenging because it must be addressed by choosing between lower and higher transmission rates. Let us consider a situation where the transmitter can control its communication coverage by adjusting the transmission rate under a given transmission power and bandwidth (e.g., Wi-Fi with adaptive modulation techniques). In general, high-rate transmission can easily reduce communication time. However, this strategy reduces communication coverage, meaning that the network topology becomes sparse. Some theoretical works [5, 12] have argued that the training accuracy of decentralized algorithms deteriorates in a sparse network topology. In contrast, low-rate transmission makes network topologies denser, meaning that training accuracy versus the number of iterations can be improved. However, runtime performance deteriorates because total communication time increases. We summarize these relationships in Figs. 1(a) and 1(b), and the tradeoff between training accuracy and runtime performance caused by differences in network topology in Fig. 1(c).

Therefore, it is important to develop a communication strategy for decentralized learning in wireless systems that improves runtime performance.

Fig. 1: Tradeoffs between transmission rate, network density, and communication time. Past studies [5, 12] have shown that the upper bounds on the training accuracy of decentralized learning algorithms depend on the density of the network topology. (a) High-rate transmission leads to shorter communication time between nodes, but it can make the network topology sparse, thereby degrading the training accuracy of the classifier [5, 12]. (b) Low-rate transmission facilitates the construction of a dense network topology, resulting in a more accurate classifier, but requires longer communication times. (c) A numerical example of the training accuracy versus runtime, which clearly shows the tradeoff between the training accuracy and the runtime performance of decentralized learning.

I-B Objective of This Paper

In this paper, we analyze the performance of decentralized learning by considering the influence of network topology in wireless systems and propose a novel communication strategy for improving runtime performance. We specifically focus on decentralized parallel stochastic gradient descent (D-PSGD) [5], one of the state-of-the-art algorithms for decentralized learning, as the reference algorithm for our discussion. Wang et al. [12] formulated a relationship between network density and the performance of D-PSGD. They analyzed the performance of D-PSGD from the perspective of the average squared gradient norm of the learning model, which directly affects training accuracy. Based on this analysis, we first discuss when and how network density affects the runtime performance of D-PSGD. This discussion yields the following two insights: a dense network topology (1) does not significantly improve the training accuracy of D-PSGD compared to a sparse one, and (2) strongly degrades the runtime performance because it generally requires low-rate transmissions. These insights suggest that the runtime performance of D-PSGD can be improved by high-rate transmission, which makes the network topology relatively sparse but shortens the communication time between nodes (e.g., Fig. 1(c)). Motivated by these insights, we propose a communication strategy in which each node transmits at a high rate whenever possible. In this method, each node adapts its transmission rate such that the time required for model sharing is minimized under a constraint on network topology density. By increasing the transmission rate without making the network less dense than necessary, this method improves runtime performance while maintaining training accuracy. To the best of our knowledge, this work is the first attempt to incorporate the characteristics of wireless channels into the D-PSGD algorithm in wireless systems.

II System Model

II-A Overview of D-PSGD

Consider a situation in which n nodes are randomly deployed in a two-dimensional area. The i-th node stores independent and identically distributed datasets that follow the probability distribution D_i, and holds the model parameter vector x_i of the classifier (or regressor) to be trained, whose data size is D [bits]. We assume that each node location has been preliminarily shared with all nodes via periodic short-length communication (e.g., beaconing). Additionally, we also assume that all nodes can be roughly (ms-order) synchronized once the aforementioned periodic short-length communication or a global positioning system is deployed.

The objective of distributed learning in a decentralized setting is to optimize the model vector. According to [5], this objective can be modeled as

\min_{x} f(x) := \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\xi \sim D_i} \left[ F_i(x; \xi) \right],   (1)

where ξ denotes a data sample and F_i(·; ·) represents the loss function for the i-th node. After the optimization, each node can utilize x_i as its classifier. Note that the objective f(x) is not directly calculated during the optimization.

Under the conditions described above, decentralized learning can be performed utilizing a D-PSGD optimizer. D-PSGD iteratively performs the following procedure until the value of the loss function is minimized: (1) updating the model parameter at each node based on its dataset with the learning rate η, (2) sharing updated model parameters with connected neighboring nodes, and (3) averaging received and own model parameters. The pseudo-code of this algorithm is summarized in Algorithm 1. In Algorithm 1, we denote the set of model vectors at the k-th iteration as {x_{k,1}, ..., x_{k,n}}.

0:  initial point x_{0,i} = x_0, learning rate η, and number of iterations K.
1:  for k = 0, 1, ..., K-1 do
2:     Randomly sample ξ_{k,i} from local data of the i-th node.
3:     Broadcast x_{k,i} and receive model parameters x_{k,j} to/from neighboring nodes.
4:     Calculate the intermediate model x_{k+1/2,i} by averaging the received and own models.
5:     Update the local model parameters: x_{k+1,i} = x_{k+1/2,i} - η ∇F_i(x_{k,i}; ξ_{k,i}).
6:  end for
Algorithm 1 D-PSGD on the i-th node [5]
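
To make the iteration above concrete, the following is a minimal sketch of one D-PSGD step in Python/NumPy, assuming the row-stochastic averaging matrix W introduced in Sec. II-C; the function and variable names are ours, not the authors' implementation.

import numpy as np

def dpsgd_step(models, W, grads, eta):
    # One D-PSGD iteration (a sketch, not the authors' implementation).
    #   models: (n, d) array, row i is the local model x_{k,i}
    #   W:      (n, n) row-stochastic averaging matrix
    #   grads:  (n, d) array, row i is node i's stochastic gradient
    #   eta:    learning rate
    averaged = W @ models          # Steps 3-4: share and average the neighbors' models
    return averaged - eta * grads  # Step 5: local stochastic gradient update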

II-B Radio Propagation Model and Protocol

In wireless systems, the communication coverage is strongly affected by the relationships between the radio propagation characteristics, bandwidth, transmission rate, etc. In order to discuss the influence of these relationships on the performance of D-PSGD, we consider a typical wireless channel.

Because the communication coverage is mainly determined by the path loss, we model the received signal power at a distance d [m] as P_r(d) = P_t − 10 α log_10(d) [dBm], where P_t is the transmission power in dBm and α is the path loss index. We assume that all nodes transmit with the same P_t and the same bandwidth B. Under these conditions, the channel capacity at distance d can be expressed as

C(d) = B \log_2 \left( 1 + \gamma(d) \right),   (2)

where γ(d) = 10^{(P_r(d) − P_N)/10} is the signal-to-noise ratio and P_N is the noise floor in dBm. Additionally, we define the channel-capacity matrix C, whose (i, j) element C_ij represents the channel capacity between the i-th and the j-th nodes.

This paper assumes situations where each node can control its communication coverage by adjusting its transmission rate. In such situations, we consider that each node broadcasts its own updated model at a transmission rate R_i [bps] (Step 3 in Algorithm 1). If R_i ≤ C_ij, the j-th node can accurately receive the model parameters from the i-th node. We assume that α and P_N are constant over the area and that they can be given as prior knowledge to all nodes. Additionally, to avoid communication collisions between the nodes in Step 3 of Algorithm 1, the nodes share the spectrum based on time division multiplexing; the model parameters are broadcast by the nodes in consecutive order, starting from the terminal on the west side of the target area. With these assumptions, the communication time spent in one iteration is given by

T_{\mathrm{comm}} = \sum_{i=1}^{n} \frac{D}{R_i}.   (3)

If the transmission power and bandwidth are constrained, the transmission rate must be reduced to expand communication coverage (i.e., to make the network dense). This fact indicates that there is a tradeoff between network density and communication time when sharing model parameters. Therefore, even if a denser topology improves the training accuracy of D-PSGD for a given number of iterations, the runtime performance would deteriorate.
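
As an illustration of Eqs. (2) and (3), the following sketch computes the channel-capacity matrix and the per-iteration communication time from node coordinates. It follows the log-distance path loss model of this section; the function names and default parameter values are illustrative assumptions of ours, not the paper's settings.

import numpy as np

def capacity_matrix(coords, p_tx_dbm=10.0, noise_dbm=-90.0, bw_hz=20e6, alpha=3.5):
    # Channel capacity C_ij [bps] for every node pair, Eq. (2).
    coords = np.asarray(coords, dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, 1.0)                           # avoid log(0) on the diagonal
    p_rx_dbm = p_tx_dbm - 10.0 * alpha * np.log10(dist)   # log-distance path loss
    snr = 10.0 ** ((p_rx_dbm - noise_dbm) / 10.0)
    return bw_hz * np.log2(1.0 + snr)

def comm_time_per_iteration(rates_bps, model_bits):
    # Eq. (3): TDM broadcast, so the per-iteration time is the sum over all nodes.
    return float(np.sum(model_bits / np.asarray(rates_bps, dtype=float)))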

Note that our discussion can be extended to fading channels without loss of generality of our claim. This can be achieved by considering the following condition for successful communications: R_i + Δ ≤ C_ij, where Δ is a constant scalar that behaves as a margin against the uncertainty of fading channels. This condition enables each node to set a transmission rate that allows accurate communication.

II-C Modeling D-PSGD Using an Averaging Matrix

Previous studies [5, 12] have utilized an averaging matrix W, which is automatically determined based on the network topology, for the analysis of D-PSGD. This averaging matrix satisfies W 1_n = 1_n, where 1_n is an n-dimensional column vector of ones. Each element W_ij can be calculated by

W_{ij} = \frac{e_{ij}}{\sum_{l=1}^{n} e_{il}},   (4)

where e_ij ∈ {0, 1} represents the connectivity between the i-th and the j-th nodes.

The use of W allows us to analyze the influence of network topology on D-PSGD. The model updating rule at the k-th iteration (i.e., Step 5 in Algorithm 1) can be re-defined as

x_{k+1,i} = \sum_{j=1}^{n} W_{ij} x_{k,j} - \eta \nabla F_i(x_{k,i}; \xi_{k,i}).   (5)

In this paper, we also utilize Eq. (5) for analyzing the influence of network topology on the runtime performance of D-PSGD.
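
Connecting the two subsections, the following sketch builds the averaging matrix of Eq. (4) from the channel-capacity matrix and the selected transmission rates; the uniform row weights are our assumption for illustration.

import numpy as np

def averaging_matrix(capacity, rates_bps):
    # Row-stochastic W implied by the connectivity e_ij (a sketch of Eq. (4)).
    rates = np.asarray(rates_bps, dtype=float)
    e = (rates[None, :] <= capacity).astype(float)  # e_ij = 1 if node i can decode node j's broadcast
    np.fill_diagonal(e, 1.0)                        # each node always keeps its own model
    return e / e.sum(axis=1, keepdims=True)         # each row sums to one, so W 1_n = 1_n

Combined with the capacity_matrix sketch above and the update rule of Eq. (5), this is sufficient to simulate the topology side of D-PSGD.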

III Network-Density-Controlled D-PSGD

III-A Effects of Network Density

Fig. 2: Effects of ζ on D-PSGD for fixed values of the Lipschitz constant of the objective function, the variance bound of mini-batch SGD, and the learning rate. Panels (a)-(c) plot the upper bound of Eq. (7) for different numbers of iterations K, and panel (d) shows the effect of the number of nodes n. For various values of K and n, if ζ is below a certain threshold, reducing ζ further does not improve the upper bound significantly, at least on the order level. This numerical example implies that we can boost runtime performance by making the network topology more sparse (i.e., making the transmission rate higher) without significant degradation of training accuracy.

We will briefly discuss how the density of a network topology influences the training accuracy of D-PSGD. Wang et al. [12] analyzed the performance of D-PSGD through a convergence analysis of the expected average squared gradient norm of the learning model over K iterations, where K is the number of iterations of the D-PSGD optimization. Because this expected value is directly related to training accuracy, we refer to this value as "training accuracy" throughout this paper.

According to [12], the training accuracy of D-PSGD decreases as the parameter ζ := max(|λ_2|, |λ_n|) increases, where λ_2 and λ_n are the 2nd and n-th largest eigenvalues of W, respectively. The parameter ζ approaches zero as the number of non-zero elements in W increases. This behavior suggests that the value of ζ represents the sparseness of a network topology, because a denser network topology causes the number of non-zero elements in W to increase.
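
As a quick numerical check of this behavior, ζ can be computed directly from W. The example below is our own illustration (not taken from the paper) and compares a fully connected topology with a ring topology of n = 6 nodes.

import numpy as np

def zeta(W):
    # Largest non-unit eigenvalue magnitude of the averaging matrix W.
    return float(np.sort(np.abs(np.linalg.eigvals(W)))[-2])

n = 6
W_dense = np.full((n, n), 1.0 / n)                  # fully connected: uniform averaging
ring = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
ring[0, -1] = ring[-1, 0] = 1.0                     # close the ring
W_ring = ring / ring.sum(axis=1, keepdims=True)

print(zeta(W_dense))   # 0.0   (densest topology)
print(zeta(W_ring))    # ~0.67 (sparser topology, larger zeta)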

To derive theoretical guarantees on the performance of D-PSGD, the authors of [12] introduced the following assumptions:

  • (Smoothness): ||∇F_i(x; ξ) − ∇F_i(y; ξ)|| ≤ L ||x − y||, where L is the Lipschitz constant of the loss function F_i.

  • (Lower bounded): f(x) ≥ f_inf.

  • (Unbiased gradients): E_{ξ∼D_i}[∇F_i(x; ξ)] = ∇f(x), where ∇f(x) is the gradient of f(x).

  • (Bounded variance): E_{ξ∼D_i}||∇F_i(x; ξ) − ∇f(x)||² ≤ β ||∇f(x)||² + σ², where β and σ² are non-negative constants that are inversely proportional to the mini-batch size.

  • (Averaging matrix): ζ = max(|λ_2|, |λ_n|) < 1.

  • (Learning rate): the learning rate η should satisfy

    (6)

Under these assumptions, when all local models are initialized with the same vector x_0, the average squared gradient norm over K iterations is bounded by:

(7)

This equation indicates that the upper bound of the average squared gradient norm can be expressed based on the following two factors. The first (term (1) in Eq. (7)) is a component obtained from fully-synchronized SGD (i.e., ζ = 0). The second (term (2) in Eq. (7)) is a component generated by network errors, which are influenced by the density of the network topology. Eq. (7) implies that the training accuracy is strongly affected by ζ when K and n are large. Therefore, we evaluated the effects of these parameters on the training accuracy.

Figs. 2(a)-(c) plot numerical examples of Eq. (7) for three different numbers of iterations K. To highlight the influence of the network topology on the training accuracy of D-PSGD, we plot three curves: the total upper bound (the value of the right-hand side of Eq. (7)), the effect of fully-synchronized SGD (the value of term (1) on the right-hand side of Eq. (7)), and the effect of network errors (the value of term (2) on the right-hand side of Eq. (7)). These examples show that as the number of iterations increases, the impact of network density on the training accuracy of D-PSGD increases, i.e., the effect of network errors becomes dominant in the upper bound. However, the effect is small when the value of ζ is below a certain threshold; although the effect of ζ becomes significant as ζ approaches one, the upper bound remains on the same order below this threshold. The effect of the number of nodes n is presented in Fig. 2(d). Although the effect of ζ on the training accuracy increases as n increases, a similar threshold behavior can be observed in this case. These numerical examples suggest that runtime performance can be improved by making a network topology more sparse (i.e., by increasing the transmission rates of nodes) without a significant degradation in training accuracy.

III-B Proposed Communication Strategy

As shown earlier, setting a higher transmission rate under a constraint on the network density will improve the runtime performance. Considering the relationships between transmission rate, network density, communication time, and the training accuracy of D-PSGD, we propose a novel communication strategy. In this strategy, each node selects a suitable transmission rate R_i prior to initiating D-PSGD. Once R_i is determined, each node broadcasts its model vector at the transmission rate R_i. This transmission rate is selected such that the communication time is minimized under a constraint with respect to ζ. This strategy can be modeled as

\{R_i^{\ast}\} = \mathop{\arg\min}_{\{R_i\} \in \mathcal{R}} \sum_{i=1}^{n} \frac{D}{R_i} \quad \mathrm{s.t.} \quad \zeta \leq \zeta_{\mathrm{th}},   (8)

where \mathcal{R} denotes the set of candidate transmission rates and ζ_th represents the predetermined maximum value of ζ (which satisfies Eq. (6)). This strategy enables each node to increase its transmission rate R_i, resulting in a sparser network topology. Because the constraint on ζ prevents significant degradation of training accuracy, runtime performance can be improved while maintaining training accuracy.

III-C Solver for Eq. (8)

Eq. (8) should be solved at each node in a decentralized manner. There are several methods for optimizing R_i based on the given conditions, such as prior knowledge (i.e., with or without location information) and channel characteristics. This paper considers that both the pre-shared node locations and the path loss characteristics, i.e., the received signal power P_r(d), the bandwidth B, and the noise floor P_N, can be obtained beforehand. With this knowledge, each node can construct the channel-capacity matrix C independently. This matrix enables the i-th node to estimate the required transmission rate that guarantees successful communications with the j-th node. Thus, Eq. (8) can be expressed as a combinatorial problem. In this paper, each node solves this problem utilizing a brute-force search. We summarize these procedures in Algorithm 2. Even though each node solves this problem in a decentralized manner, all nodes arrive at the same result.

After the transmission rates are determined, each node initiates D-PSGD with its optimized transmission rate.

0:  Transmission power P_t, noise floor P_N, bandwidth B, path loss index α, node locations, and ζ_th.
1:  Calculate the channel-capacity matrix C using Eq. (2).
2:  for all candidates of {R_i} do
3:     Construct a candidate of {R_i} by selecting one rate from each row of C.
4:     Construct the averaging matrix W using Eq. (4).
5:     Calculate ζ.
6:     Search for the {R_i} that minimizes the communication time under the constraint ζ ≤ ζ_th.
7:  end for
8:  return  optimized {R_i}
Algorithm 2 Estimation of Optimal Transmission Rates (Solver for Eq. (8))
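
A compact realization of this brute-force search is sketched below; the function name, candidate-rate construction, and uniform averaging weights are our assumptions, intended only to illustrate how Steps 1-6 fit together.

import itertools
import numpy as np

def optimize_rates(capacity, zeta_th, model_bits):
    # Brute-force solver for Eq. (8) (a sketch of Algorithm 2).
    # Candidate rates for node i are the capacities C_ij in the i-th row (Step 3).
    n = capacity.shape[0]
    candidates = [np.unique(np.delete(capacity[i], i)) for i in range(n)]
    best_rates, best_time = None, np.inf
    for rates in itertools.product(*candidates):            # all rate combinations
        R = np.asarray(rates, dtype=float)
        e = (R[None, :] <= capacity).astype(float)           # e_ij: node i decodes node j
        np.fill_diagonal(e, 1.0)                             # keep own model
        W = e / e.sum(axis=1, keepdims=True)                 # averaging matrix, Eq. (4)
        z = np.sort(np.abs(np.linalg.eigvals(W)))[-2]        # largest non-unit eigenvalue
        t_comm = np.sum(model_bits / R)                      # communication time, Eq. (3)
        if z <= zeta_th and t_comm < best_time:              # Step 6
            best_rates, best_time = R, t_comm
    return best_rates, best_time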

IV Performance Evaluation

We simulated the proposed strategy on a computer employing a multi-core CPU. This computer employs an AMD Ryzen Threadripper 2970WX, which consists of 24 physical cores¹, and runs Ubuntu 18.04 LTS. The simulation program was implemented with PyTorch 1.0.1 on Python 3.7.3.

We conducted simulations of a case where six nodes are placed in a 200 m × 200 m area, as shown in Fig. 3(a). We focus on the training accuracy at Node 1.

IV-A Experimental Setup

We evaluated the proposed strategy on an image classification task utilizing the Fashion-MNIST dataset [13], which has been widely used as a benchmark for image classification performance in the machine learning community. This dataset includes 60,000 training images that have already been categorized into ten different categories, as well as 10,000 test images. Each sample in this dataset is a single-channel, 8-bit image with a resolution of 28 × 28 pixels. In this experiment, we utilized a convolutional neural network (CNN) as the architecture to perform image classification. The details of the CNN are as follows: two convolutional layers (with 10 and 20 channels, respectively, each activated by a rectified linear unit (ReLU) function), two max-pooling layers, and fully-connected layers with 320, 50, and 10 units, where the hidden units are activated by ReLU and the final 10 units by the softmax function. Additionally, dropout was applied to the second convolutional layer and the first fully-connected layer with a dropout ratio of 0.5. The total number of model parameters for the CNN was therefore 21,840, and its data size was D = 21,840 × 32 = 698,880 bits (32-bit floating-point numbers). Each node broadcast the model data to neighboring nodes to train the CNN utilizing D-PSGD. To train the CNN with the D-PSGD optimizer, we shuffled all of the training samples and then distributed them equally to the six computation nodes. Therefore, each node was given 10,000 independently and identically distributed training samples. Additionally, we set the batch size for the D-PSGD optimization to 1, meaning that the number of iterations per epoch was 10,000, because each node was given 10,000 training samples.
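
For concreteness, the following PyTorch module is consistent with the architecture and parameter count described above; it is our reconstruction for illustration, and the class and layer names are not necessarily those of the authors' implementation.

import torch.nn as nn
import torch.nn.functional as F

class FashionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)    # 260 parameters
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)   # 5,020 parameters
        self.conv2_drop = nn.Dropout2d(p=0.5)           # dropout on the second conv layer
        self.fc1 = nn.Linear(320, 50)                   # 16,050 parameters
        self.fc2 = nn.Linear(50, 10)                    # 510 parameters

    def forward(self, x):                               # x: (batch, 1, 28, 28)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, p=0.5, training=self.training) # dropout on the first FC layer
        return F.log_softmax(self.fc2(x), dim=1)        # softmax output (log form for NLL loss)

The layer sizes sum to 260 + 5,020 + 16,050 + 510 = 21,840 parameters, which matches D = 698,880 bits at 32 bits per parameter.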

We executed six processes in parallel to train the CNN with D-PSGD on the computer, where we assigned one physical core to each process. The runtime of the calculation portion of D-PSGD was measured as the real elapsed time on the computer, and the communication time was calculated by Eq. (3). Note that we fixed the random seed at the start of the simulation to ensure reproducibility.
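
For reference, a minimal way to account for runtime in such a simulation is to measure the wall-clock computation time of each iteration and add the analytically computed communication time of Eq. (3). The snippet below is our own illustration of this bookkeeping, not the authors' simulator.

import time

def simulated_runtime(n_iters, compute_step, comm_time_per_iter):
    # Accumulate measured compute time plus the modeled communication time.
    total = 0.0
    for _ in range(n_iters):
        start = time.perf_counter()
        compute_step()                   # one local D-PSGD update (real elapsed time)
        total += time.perf_counter() - start
        total += comm_time_per_iter      # Eq. (3), added analytically
    return total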

IV-B Runtime Performance Results

In this section, we discuss the experimental results for the proposed strategy in terms of runtime performance.

We analyzed the performance of the proposed strategy by varying the path loss index α because communication coverage, which is a key factor influencing runtime performance, is strongly affected by path loss. The path loss index is an environment-dependent factor that is determined empirically. It tends to take on large values in environments with many obstacles, e.g., indoor and urban channels [2].

Fig. 3(b) presents the dependence of the training accuracy on the number of epochs. We highlight examples of the obtained training accuracy values at 100 epochs: 0.841 (ζ_th = 0.1), 0.833 (ζ_th = 0.3), and 0.821 (ζ_th = 0.8). These results indicate that the training accuracy decreases slightly as ζ_th increases. They agree with the theoretical and numerical evaluations of the performance of D-PSGD in Fig. 2 and Eq. (7). Note that this epoch performance does not depend on the path loss index α because the proposed method always constructs the same network topology for a given ζ_th and node placement, regardless of α.

Figs. 3(c)-(f) present the runtime performances for different values of the path loss index α. When α is large, a greater value of ζ_th (i.e., a higher transmission rate and a sparser network topology) significantly improves runtime performance, although it slightly degrades the training accuracy versus epoch performance. We highlight comparisons of the real elapsed time required for the training accuracy to exceed 0.8 in the case of a large path loss index. The required times when setting ζ_th to 0.1, 0.3, and 0.8 were approximately 270, 132, and 33.8 minutes, respectively. This comparison shows that the runtime performance with ζ_th = 0.8 is approximately 3.9 times faster than that with ζ_th = 0.3, and 8.0 times faster than that with ζ_th = 0.1. Therefore, we contend that the runtime performance can be improved significantly by setting ζ_th to a large value (i.e., a high transmission rate) when the path loss index is large.

These results suggest that high-rate transmissions with a sparse network topology will facilitate the development of efficient decentralized machine learning, especially in situations such as indoor or urban channels.

Fig. 3: Training accuracy at Node 1 for fixed transmission power, bandwidth, noise floor, and learning rate. Panel (a) shows the node placement, panel (b) the training accuracy versus epochs, and panels (c)-(f) the training accuracy versus runtime for different values of the path loss index α. Although ζ_th has almost no effect on the epoch performance, a greater value of ζ_th clearly improves runtime performance, especially in situations where the path loss index is large.

V Conclusion

We proposed a novel communication strategy for D-PSGD in wireless systems by incorporating the influence of the network topology. We found that the influence of network density on the training accuracy of D-PSGD is limited as long as ζ is kept below a certain threshold. Based on this finding, we designed a communication strategy for D-PSGD in which each node communicates at a high transmission rate under a constraint on the network density. This strategy improves the runtime performance of D-PSGD while retaining high training accuracy.

Numerical evaluations showed that the network topology (transmission rate) strongly influences the runtime performance, especially in situations where the path loss index is large. We conclude that the influence of the network topology is a crucial, non-negligible factor for performing decentralized learning effectively in wireless systems, especially in indoor or urban scenarios.

In future work, we will develop sophisticated optimization methods for the proposed strategy that can be applied to more complex situations such as those where location information is not available.

Acknowledgements

This research was funded by The Telecommunications Advancement Foundation and the Japan Society for the Promotion of Science through KAKENHI under Grant 19K14988.

Footnotes

  1. Simultaneous multi-threading (SMT) was disabled.

References

  1. M. Bkassiny, Y. Li, and S. K. Jayaweera (2013) A survey on machine-learning techniques in cognitive radios. IEEE Commun. Surveys and Tuts. 15 (3), pp. 1136–1159.
  2. A. J. Goldsmith (2005) Wireless Communications. Cambridge University Press.
  3. A. Koloskova, S. Stich, and M. Jaggi (2019) Decentralized stochastic optimization and gossip algorithms with compressed communication. arXiv:1902.00340.
  4. H. Li, K. Ota, and M. Dong (2018) Learning IoT in edge: deep learning for the Internet of Things with edge computing. IEEE Netw. 32 (1), pp. 96–101.
  5. X. Lian, C. Zhang, H. Zhang, C. Hsieh, W. Zhang, and J. Liu (2017) Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In Advances in NeurIPS 30, pp. 5330–5340.
  6. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Proc. AISTATS 2017.
  7. S. Niknam, H. S. Dhillon, and J. H. Reed (2019) Federated learning for wireless communications: motivation, opportunities and challenges. arXiv:1908.06847.
  8. T. Nishio and R. Yonetani (2019) Client selection for federated learning with heterogeneous resources in mobile edge. In Proc. IEEE ICC 2019.
  9. W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu (2016) Edge computing: vision and challenges. IEEE Internet of Things J. 3 (5), pp. 637–646.
  10. K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
  11. S. U. Stich (2019) Local SGD converges fast and communicates little. In Proc. ICLR 2019.
  12. J. Wang and G. Joshi (2018) Cooperative SGD: a unified framework for the design and analysis of communication-efficient SGD algorithms. arXiv:1808.07576.
  13. H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747.
  14. H. Ye, L. Liang, G. Y. Li, J. Kim, L. Lu, and M. Wu (2018) Machine learning for vehicular networks: recent advances and application examples. IEEE Veh. Technol. Mag. 13 (2), pp. 94–101.
  15. M. Zinkevich, M. Weimer, L. Li, and A. Smola (2010) Parallelized stochastic gradient descent. In Advances in NeurIPS 23.