Multi-view Point Cloud Registration with Adaptive Convergence Threshold and its Application on 3D Model Retrieval

Yaochen Li · Ying Liu · Rui Sun · Rui Guo · Li Zhu · Yong Qi
1 School of Software Engineering, Xi’an Jiaotong University, Shaanxi, China
2 Department of Computer Science, Xi’an Jiaotong University, Shaanxi, China
Email: yaochenli@mail.xjtu.edu.cn
Abstract

Multi-view point cloud registration is a hot topic in the communities of multimedia technology and artificial intelligence (AI). In this paper, we propose a framework that reconstructs 3D models by a multi-view point cloud registration algorithm with adaptive convergence threshold, and subsequently applies them to 3D model retrieval. The iterative closest point (ICP) algorithm is combined with the motion average algorithm for the registration of multi-view point clouds. After the registration process, we design applications for 3D model retrieval. A geometric saliency map is computed based on the vertex curvature. A test facial triangle is then generated from the saliency map and compared with the standard facial triangle, so that face and non-face models can be discriminated. Experiments and comparisons demonstrate the effectiveness of the proposed framework.

Keywords:
Point cloud registration · ICP algorithm · convergence threshold · geometric saliency · 3D model retrieval

1 Introduction

Multimedia and artificial intelligence (AI) are important technologies that support people’s daily life and economic activities Lu_HM (). They can be applied to cross-modal retrieval Xu_X (), motor anomaly detection Li_YJ (), etc. Multi-view point cloud registration is a hot topic in the communities of artificial intelligence and multimedia. With the rapid development of laser scanning technology, it is possible to obtain high-precision data of an object’s surface Lomonosov_E (). However, due to the limited scanning range, a single laser scan usually cannot capture the complete object. Therefore, it is important to effectively integrate and match point cloud data collected from different views to generate a complete model.

Pair-wise registration is the basis of multi-view registration. The iterative closest point (ICP) algorithm is one of the classic algorithms for pair-wise registration, which computes the optimal spatial transformation between a pair of point clouds. However, the algorithm suffers from local convergence and cannot handle the case where the point clouds only partially overlap. Chetverikov et al. Chetv_D () improve the ICP algorithm into the trimmed iterative closest point (TrICP) algorithm, which utilizes the overlapping ratio for effective point selection. Lomonosov et al. Lomonosov_E () apply a genetic algorithm for global search to obtain a good initial registration. Sandhu et al. Sandhu_R () apply particle filtering to estimate the initial registration effectively, and improve the ICP algorithm with global convergence properties. However, these methods can only deal with the registration between a pair of point clouds.

Multi-view registration of point clouds is implemented on the basis of pair-wise registration, and is more difficult since more parameters need to be considered Zhu_JH (); Guo_R (). The multi-view registration problem can be solved by quadratic programming of Lie algebra parameters Shi_SW (), where each node and each edge in the graph denote a point cloud and a pair-wise registration, respectively. Chen et al. Chen_Y () propose the basic scheme of multi-view registration, which matches adjacent point clouds by the ICP algorithm and transforms all the point cloud data into a single global coordinate system. However, this method is easily affected by the local convergence property of the ICP algorithm. Bergevin et al. Bergevin_R () propose an improved ICP algorithm that handles the rotations and translations among all the point clouds. However, this method is computationally expensive, especially for large data sets. Guo et al. Guo_YL () and Fantoni et al. Fantoni () propose approaches that register point clouds by extracting vertex features. However, registration fails when insufficient features can be extracted.

As an application built on the multi-view registration of point clouds, 3D face detection is a hot issue in the community of 3D model retrieval. In the studies of Akagunduz_E (), a generic method for 3D face detection and modeling is proposed, where multi-scale analysis is implemented by computing Gaussian and mean curvatures. Creusot et al. Creusot_C () present an automatic method to detect key points on 3D faces, which constitutes a ‘local shape dictionary’. In the studies of Rabiu et al. Rabiu_H (), a face segmentation method with adaptive radius is presented, where the intrinsic properties of the face are derived from Gaussian and mean curvatures for segmentation. Similar methods are described in the studies of Boukamcha_H (); Wang_Y ().

In this paper, we propose a multi-view point cloud registration method with an adaptive convergence threshold. The proposed method improves the classic ICP algorithm and combines it with the motion average algorithm. The reconstructed 3D model is then applied to 3D model retrieval. As a typical application, 3D face detection is implemented by clustering the geometric saliency into facial triangles; face and non-face models are discriminated by the matching error between the test and standard facial triangles. The main contributions of the paper are summarized as follows: 1) An adaptive convergence threshold is applied to the ICP algorithm, which brings convenience to the registration of multi-view point cloud data; 2) The motion average algorithm is integrated with the ICP algorithm for the registration of multi-view point cloud data; 3) A new 3D face detection method is proposed based on the geometric saliency of the surface, utilizing the matching of facial triangles.

The remainder of the paper is organized as follows: Section 2 introduces the multi-view point cloud registration algorithm with adaptive convergence threshold. Section 3 describes the application of 3D model retrieval based on the multi-view registration results. The experiments and analysis are presented in Section 4, followed by the conclusion and future work in Section 5.

2 Multi-view Point Cloud Registration with Adaptive Convergence Threshold

In this section, we propose the algorithm for multi-view point cloud registration with adaptive convergence threshold. Firstly, the concept of adaptive convergence threshold is introduced. The ICP algorithm with adaptive convergence threshold and the motion average algorithm are described subsequently. Finally, the proposed multi-view registration algorithm is presented.

2.1 Adaptive Convergence Threshold

One of the stop conditions for the pair-wise registration of point clouds is the point cloud distance: the registration of two point clouds terminates once the distance between the point sets falls below a certain threshold. This distance reaches a minimum when the two sets are well aligned. We therefore predict the ideal inter-set distance for registration and use it as the distance threshold.

According to the studies of Wang et al. Wang_Y (), when two point clouds are ideally matched, each corresponding point lies inside a circle whose radius equals the horizontal resolution, centered at its matched point. Moreover, the distance between a point pair is affected by the overlapping rate of the clouds: the average point-pair distance decreases as the overlapping rate increases. A weighted overlapping rate is therefore utilized to make the distance estimate more reasonable. The ranging error is another factor that influences the registration result. With the weight of the overlapping rate added, the resulting threshold for registration convergence is expressed by:

(1)

where the threshold depends on the number of coincident points, the total number of points in the clouds, and the range accuracy of the scanning system.

In this paper, a laser point cloud is utilized instead of an array point cloud. As a result, the above equation is modified in two respects.

  • There is no parameter for the horizontal resolution in the laser scanning system. Since uniform scanning is adopted, the spacing between points can be regarded as uniformly distributed. The average point spacing is therefore used in place of the horizontal resolution.

  • The point clouds are pre-selected before registration, and registration is only performed when the overlap exceeds 50%. The overlapping-rate term is therefore removed, since it has little influence on the system ranging in this case.

The modified convergence threshold is defined as follows:

(2)
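Since the closed form of Eq. (2) is not reproduced above, the following minimal sketch only illustrates how such an adaptive threshold could be estimated from the data, assuming the two factors named in the text: the average point spacing (standing in for the horizontal resolution) and the range accuracy of the scanner. The function name and the way the two terms are combined are assumptions, not the paper's exact formula.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_convergence_threshold(points, range_accuracy=0.5e-3):
    """Estimate an adaptive convergence threshold for one laser scan.

    `points` is an (N, 3) array. The mean nearest-neighbour spacing stands in
    for the horizontal resolution, and `range_accuracy` is the ranging error
    of the scanner; how the two are combined here is an assumption, since the
    closed form of Eq. (2) is not reproduced in the text.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)       # k=2: the first hit is the point itself
    mean_spacing = dists[:, 1].mean()        # average point spacing of the scan
    return np.hypot(mean_spacing, range_accuracy)
```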

2.2 ICP Algorithm based on Adaptive Convergence Threshold

Figure 1: Flow diagram of ICP algorithm with adaptive convergence threshold.

Given the data point cloud P and the model point cloud Q, we denote R as the rotation parameter and t as the translation parameter. The flow diagram of the ICP method with adaptive convergence threshold is shown in Fig. 1. The registration process mainly includes the following steps (a minimal code sketch is given after this list):

  • Localization of the point pairs. Firstly, we transform the point cloud P using the initial rotation and translation acquired from the rough registration. Secondly, we search the nearest point in Q for each point in P. The overlapping ratio is computed simultaneously.

  • Computation of the parameters (R, t). The sum of the distances of all corresponding point pairs is calculated, and the optimal rigid transformation that minimizes this sum is computed.

  • Transformation of the point cloud. The point cloud P is transformed with the parameters (R, t) obtained in the second step.

  • Registration of the point cloud data. The iteration terminates under two conditions: (1) the average distance between the newly transformed point cloud and the model point cloud Q is less than the adaptive threshold of Eq. (2); (2) the maximum number of iterations is reached. Otherwise, the transformed point cloud is taken as the new data cloud and the iteration continues.
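To make the four steps concrete, the following sketch shows one possible implementation of the ICP loop with the adaptive stop condition. The SVD-based rigid transform and the k-d tree nearest-neighbour search are standard choices; the function names and the `threshold` argument (assumed to come from Eq. (2)) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto points Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp_adaptive(P, Q, threshold, max_iter=100):
    """ICP whose stop condition uses the adaptive convergence threshold."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(Q)                        # the model cloud stays fixed
    for _ in range(max_iter):
        _, idx = tree.query(P)               # step 1: closest-point pairs
        R, t = best_rigid_transform(P, Q[idx])   # step 2: optimal rigid motion
        P = P @ R.T + t                      # step 3: transform the data cloud
        R_total, t_total = R @ R_total, R @ t_total + t
        dists, _ = tree.query(P)             # step 4: adaptive stop condition
        if dists.mean() < threshold:
            break
    return R_total, t_total
```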

2.3 Motion Average Algorithm

The motion average algorithm aims to reduce the error accumulation in the multi-view registration problem. It obtains error correction terms by comparing the precise relative motions with the approximate relative motions implied by the current global motions. The error correction terms are converted from the Lie group to the Lie algebra, and column vectors are extracted in the Lie algebra space. Subsequently, the error correction information is assigned to the global motion of each point cloud, so that the accumulated error is alleviated. The main steps of the motion average algorithm are as follows (a code sketch follows the list):

  • (1) Compute the correction value ΔM_ij between the measured relative motion M_ij and the relative motion M_j M_i^{-1} implied by the current global motions:

    ΔM_ij = M_ij (M_j M_i^{-1})^{-1}    (3)
  • (2) The correction value is transformed from the Lie group to the corresponding Lie algebra Shi_SW ():

    Δm_ij = log(ΔM_ij)    (4)
  • (3) A column vector is extracted from each Lie algebra matrix:

    δ_ij = vec(Δm_ij)    (5)

    The vector Δ_glob carrying the correction information for the global motions is obtained from the column vector Δ_rel, which stacks all the error correction terms, and the matrix D, which encodes the point cloud relations:

    Δ_rel = [ δ_ij ]    (6)
    Δ_glob = D† Δ_rel    (7)
  • (4) Extract the column vector of each point cloud from Δ_glob and restore its Lie algebra form:

    δ_k = Δ_glob[k]    (8)
    Δm_k = mat(δ_k)    (9)
  • (5) The Lie algebra element is converted back into the Lie group for each point cloud and applied to rectify the global motion of each frame:

    M_k ← exp(Δm_k) M_k    (10)
  • (6) The rectified global motions are taken as the new initial values. Steps (1)-(6) are repeated until every error correction vector is small enough, i.e. ‖Δ_glob‖ < ε.
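The sketch below illustrates one way the above steps could be realized, under the first-order approximation commonly used in Lie-algebraic motion averaging (the correction of a relative motion is approximated by the difference of the two global corrections). The notation follows the symbols assumed above; for clarity the 4x4 Lie-algebra matrices are flattened directly rather than reduced to 6-vectors. This is an illustrative sketch, not the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import expm, logm, pinv

def motion_average(global_motions, relative_motions, tol=1e-6, max_iter=50):
    """Lie-algebraic motion averaging sketch.

    global_motions  : list of 4x4 matrices M_i (initial global motions)
    relative_motions: dict {(i, j): M_ij} from pair-wise registration
    """
    n = len(global_motions)
    M = [m.copy() for m in global_motions]
    for _ in range(max_iter):
        rows, rhs = [], []
        for (i, j), M_ij in relative_motions.items():
            # Eq. (3)-(4): correction between measured and implied relative motion
            dM = M_ij @ M[i] @ np.linalg.inv(M[j])
            d = np.real(logm(dM)).reshape(-1)          # flattened Lie-algebra matrix
            # first-order relation: correction of M_ij ~ correction_j - correction_i
            row = np.zeros((d.size, n * d.size))
            row[:, j * d.size:(j + 1) * d.size] = np.eye(d.size)
            row[:, i * d.size:(i + 1) * d.size] = -np.eye(d.size)
            rows.append(row)
            rhs.append(d)
        D, b = np.vstack(rows), np.concatenate(rhs)
        x = pinv(D) @ b                                # Eq. (6)-(7): least-squares corrections
        if np.linalg.norm(x) < tol:                    # step (6): stop when corrections vanish
            break
        for k in range(n):                             # Eq. (8)-(10): rectify each global motion
            dm_k = x[k * 16:(k + 1) * 16].reshape(4, 4)
            M[k] = expm(dm_k) @ M[k]
    return M
```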

2.4 Multi-View Registration Algorithm

The inputs of the algorithm are the point cloud data and the initial global motions of all frames, while the output is the final precise global motions. The main steps are as follows: (1) The overlapping rate is computed between every two frames of point clouds; (2) If the computed overlapping rate is greater than the preset threshold, the pair-wise registration is executed, and the ICP algorithm with adaptive convergence threshold is applied to obtain the relative motion; (3) A more accurate global motion is computed by motion averaging, and the corresponding error is computed at the same time; (4) If the computed error is less than a certain threshold, the iteration stops; otherwise, the current global motions are used as the new initial values for the next iteration.

The process of multi-view registration is summarized in Algorithm 1. The function Overlap(P_i, P_j) returns the overlap percentage between point clouds P_i and P_j. The function ICP_AT(P_i, P_j) returns the relative motion between P_i and P_j computed by the ICP algorithm with adaptive convergence threshold. The function MotionAverage takes the relative motions of the (possibly non-adjacent) point cloud pairs as input and computes the accurate global motion of each frame by motion averaging. R_prev and R_cur denote the rotation matrices of adjacent iterations, and the function Error(R_prev, R_cur) measures the matching error between them.

Algorithm 1 Multi-View Point Cloud Registration
Require: point clouds P_1, ..., P_n; initial global motions M_1, ..., M_n
Ensure: precise global motions M_1, ..., M_n
1:  while err > ε do
2:     for i = 1 to n - 1 do
3:        for j = i + 1 to n do
4:           r ← Overlap(P_i, P_j)
5:           if r > the preset overlap threshold then
6:              M_ij ← ICP_AT(P_i, P_j)
7:           end if
8:        end for
9:     end for
10:    {M_i} ← MotionAverage({M_ij}, {M_i});
11:    R_cur ← rotation parts of {M_i};
12:    err ← Error(R_prev, R_cur);
13:    R_prev ← R_cur;
14: end while
15: return the precise global motions {M_i}
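Putting the pieces together, a driver of the following form would realize Algorithm 1. The helper names (adaptive_convergence_threshold, icp_adaptive, motion_average) refer to the earlier sketches and the overlap callback is assumed, so this is an illustrative outline rather than the authors' code.

```python
import numpy as np

def rigid_to_matrix(R, t):
    """Pack a rotation R and translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def multi_view_registration(clouds, global_motions, overlap,
                            overlap_threshold=0.5, error_tol=1e-4, max_iter=20):
    """Outline of Algorithm 1 using the earlier sketches.

    `overlap(P, Q)` is an assumed callback returning the overlap ratio of two
    clouds; initializing each pair-wise ICP from the current global motions is
    omitted for brevity.
    """
    M = [m.copy() for m in global_motions]
    for _ in range(max_iter):
        prev_R = [m[:3, :3].copy() for m in M]
        relative = {}
        for i in range(len(clouds)):
            for j in range(i + 1, len(clouds)):
                if overlap(clouds[i], clouds[j]) > overlap_threshold:
                    thr = adaptive_convergence_threshold(clouds[i])
                    R, t = icp_adaptive(clouds[i], clouds[j], thr)
                    relative[(i, j)] = rigid_to_matrix(R, t)   # relative motion i -> j
        M = motion_average(M, relative)                        # refine the global motions
        err = max(np.linalg.norm(M[k][:3, :3] - prev_R[k]) for k in range(len(M)))
        if err < error_tol:                                    # matching error between iterations
            break
    return M
```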

3 The Application of 3D Model Retrieval

The multi-view registration of point clouds can support many useful applications, such as 3D model retrieval and 3D model reconstruction. The 3D models reconstructed from the registration results are more complete, and thus provide more geometric features for model retrieval. In this section, we use the 3D face model as an example to illustrate the process of 3D model retrieval. The model retrieval process mainly includes two steps: (1) geometric saliency computation, and (2) facial triangle matching.

3.1 Geometric Saliency Computation

Given the mean curvatures of the discrete vertices, the geometric saliency of a 3D face model is represented with the Laplace-Beltrami operator, which aggregates the curvature information over the 3D model vertices Lee_CH ().

We denote N(v, σ) as the collection of vertices within a distance threshold σ of the vertex v. The Gaussian-weighted curvature of the vertex v can be computed by:

G(v, σ) = Σ_{x∈N(v,2σ)} C(x) exp(−‖x−v‖² / (2σ²)) / Σ_{x∈N(v,2σ)} exp(−‖x−v‖² / (2σ²))    (11)

where C(x) is the mean curvature of vertex x.

In order to compute the geometric saliency, the vertex saliency at different scales is defined by:

S_i(v) = | G(v, σ_i) − G(v, 2σ_i) |    (12)

where σ_i is the standard deviation of the Gaussian filter at scale i; the scales used in the experiment are multiples of a base scale, which is set to 0.3% of the diagonal length of the model's bounding box. It is necessary to cluster the salient regions after obtaining the geometric saliency map. Firstly, a saliency threshold is set to extract the high-saliency regions. Secondly, a distance threshold is specified, and salient regions whose mutual distance is less than the distance threshold are merged into the same cluster.
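A minimal sketch of the saliency computation and clustering is given below, assuming the Gaussian-weighted curvature of Eq. (11), the center-surround saliency of Eq. (12) accumulated over the chosen scales, and a simple greedy proximity clustering; the function names, the summation over scales, and the clustering scheme are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def gaussian_weighted_curvature(verts, curv, sigma):
    """G(v, sigma): Gaussian-weighted average of vertex curvature (Eq. 11)."""
    tree = cKDTree(verts)
    G = np.empty(len(verts))
    for i, v in enumerate(verts):
        idx = tree.query_ball_point(v, 2.0 * sigma)            # neighbourhood N(v, 2*sigma)
        w = np.exp(-np.sum((verts[idx] - v) ** 2, axis=1) / (2.0 * sigma ** 2))
        G[i] = np.dot(w, curv[idx]) / w.sum()
    return G

def mesh_saliency(verts, curv, scales):
    """Centre-surround saliency |G(v, s) - G(v, 2s)| accumulated over scales (Eq. 12)."""
    return sum(np.abs(gaussian_weighted_curvature(verts, curv, s) -
                      gaussian_weighted_curvature(verts, curv, 2.0 * s))
               for s in scales)

def cluster_salient_regions(verts, saliency, sal_thresh, dist_thresh):
    """Greedy proximity clustering of the high-saliency vertices."""
    high = np.where(saliency > sal_thresh)[0]
    clusters, assigned = [], set()
    for i in high:
        if i in assigned:
            continue
        members = [j for j in high if np.linalg.norm(verts[i] - verts[j]) < dist_thresh]
        assigned.update(members)
        clusters.append(verts[members].mean(axis=0))           # cluster centre
    return clusters
```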

3.2 Triangle Match and Error Computation

After the saliency clustering, any three salient regions form a candidate triangle. Assuming n salient regions exist in the 3D facial mesh, candidate facial triangles are selected by combination of these regions. The eye and nose regions of a 3D face carry the most distinctive regional features, so the standard facial graph model is composed of the eye and nose regions. The facial triangle is illustrated in Fig. 2.

According to the features of the 3D face model, the registration between the standard and test facial triangles is performed with the following steps (sketched in code below): (1) The test triangle is shifted so that its center coincides with that of the standard facial triangle; (2) The test triangle is rotated so that its normal vector is consistent with that of the standard facial triangle; (3) The test triangle is scaled so that the total distance between its vertices and those of the standard triangle is minimized. Based on the matching error between the test and the standard triangles, the face and non-face models can be discriminated.

Figure 2: Illustration of the facial triangle. (a) Vertex sets of the facial salient regions. (b) Topology of the facial model. (c) Facial triangle model.
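The three alignment steps can be sketched as follows, assuming the test and standard triangles are given as 3x3 vertex arrays with corresponding vertex order; the Rodrigues rotation between the two normals and the least-squares scale factor are implementation choices, not taken from the paper.

```python
import numpy as np

def align_triangles(test, standard):
    """Align a 3x3 test facial triangle to the standard one; return the error."""
    centre = standard.mean(axis=0)
    # (1) shift: make the two centroids coincide
    test = test - test.mean(axis=0) + centre
    # (2) rotate: align the triangle normals (Rodrigues formula)
    n_t = np.cross(test[1] - test[0], test[2] - test[0])
    n_s = np.cross(standard[1] - standard[0], standard[2] - standard[0])
    n_t, n_s = n_t / np.linalg.norm(n_t), n_s / np.linalg.norm(n_s)
    v, c = np.cross(n_t, n_s), np.dot(n_t, n_s)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + K + K @ K / (1.0 + c)    # undefined if the normals are opposite
    test = (test - centre) @ R.T + centre
    # (3) scale about the centroid to minimise the total vertex distance
    d_t, d_s = test - centre, standard - centre
    s = np.sum(d_t * d_s) / np.sum(d_t * d_t)
    test = centre + s * d_t
    return np.linalg.norm(test - standard, axis=1).sum()       # matching error
```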

4 Experiments and Analysis

In this section, we design experiments and comparisons to evaluate the proposed algorithms. The experimental data for multi-view point cloud registration are collected from the Stanford 3D Model Database Stanford (), as shown in Fig. 3, Fig. 4 and Fig. 5.

Figure 3: (a) Bunny model. (b) The model data plotted from different views.
Figure 4: (a) Dragon model. (b) The model data plotted from different views.
Figure 5: (a) Happy model. (b) The model data plotted from different views.
Figure 6: Comparison of models in mottled degree. (a) 3D model. (b) Initial model after rough registration. (c) Final model after multi-view registration.

Firstly, we compare the sectional views of the rough and precise registration results of the multi-view point clouds, as shown in Fig. 6. The comparison of mottled degree shows that the registration of the point clouds is most accurate after precise registration. Because the results of the low-rank and sparse matrix decomposition method (LRS-Llalm), the motion averaging based ICP (MA-ICP) and the proposed method (MA-ATICP) are similar in terms of mottled degree, we further compare them with quantitative data and cross-sections.

To quantitatively evaluate these approaches, the objective function is utilized as the error criterion for the accuracy of the multi-view registration results. Table 1 compares the runtime and the objective function value of the registration results of the aforementioned approaches, where the bold numbers denote the best performance among the compared approaches. As shown in Table 1, the MA-ATICP algorithm obtains almost the best performance in both accuracy and efficiency. MA-ATICP is also much faster, especially for large point clouds.

Model     LRS-Llalm           MA-ICP              MA-ATICP
          Obj      Time(s)    Obj      Time(s)    Obj      Time(s)
Bunny     0.6454   50.2754    0.8533   301.998    0.6434   40.9418
Dragon    0.4399   80.9986    0.5152   251.916    0.4355   28.5850
Happy     0.137    321.4806   0.1821   1540.374   0.1389   59.0338
Table 1: Performance comparisons based on different models.
Figure 7: Cross-sections of the multi-view registration results for the compared approaches. (a) 3D model. (b) Cross-section of the initial model. (c) Cross-section of LRS-Llalm. (d) Cross-section of MA-ICP. (e) Cross-section of MA-ATICP.
Figure 8: Cross-sections of partially amplified results for Bunny and Dragon. (a) Cross-section of the initial model. (b) Amplified cross-section of the initial model. (c) Amplified cross-section of LRS-Llalm. (d) Amplified cross-section of MA-ICP. (e) Amplified cross-section of MA-ATICP.

In order to evaluate the registration accuracy in a more intuitive way, Fig. 7 shows the cross-sections of the compared methods on the Bunny, Dragon and Happy models, and Fig. 8 provides cross-sections of partially amplified results. As shown in Fig. 7 and Fig. 8, the MA-ATICP algorithm obtains the most accurate registration results among the compared methods.

Figure 9: The objective function value of the registration results for the LRS-Llalm in each MC trial, (a) and (b) are the amplified results of (c).
Figure 10: The objective function value of the registration results for the MA-ICP in each MC trial, (a) and (b) are the amplified results of (c).
Figure 11: The objective function value of the registration results for the MA-ATICP in each MC trial, (a) and (b) are the amplified results of (c).

For the evaluation of algorithm robustness, the three approaches are tested on the Bunny model with perturbed initial parameters. The experiments are conducted by adding uniform noise to the initial global motions. In order to eliminate randomness, we implement 50 Monte Carlo (MC) trials with respect to three noise levels Guo_R (), as shown in Table 2. The mean value and standard deviation of the objective function are compared, as well as the mean runtime; the bold numbers denote the best performance. To view the registration results in a more intuitive way, Fig. 9, Fig. 10 and Fig. 11 depict the objective function value of the registration results for each compared method in every MC trial. The comparison results demonstrate that the MA-ATICP algorithm obtains the most accurate and robust registration results under different noise levels.
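The exact perturbation scheme is not spelled out in the text; the sketch below shows one plausible way to add uniform noise to an initial global motion for a Monte Carlo trial (uniform Euler-angle noise on the rotation, optional uniform noise on the translation), which is an assumption rather than the protocol used in the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def perturb_global_motion(M, rot_noise_rad, trans_noise=0.0, rng=None):
    """Add uniform noise to an initial 4x4 global motion (one MC trial)."""
    rng = np.random.default_rng() if rng is None else rng
    angles = rng.uniform(-rot_noise_rad, rot_noise_rad, size=3)   # uniform Euler-angle noise
    R_noise = Rotation.from_euler('xyz', angles).as_matrix()
    M_noisy = M.copy()
    M_noisy[:3, :3] = R_noise @ M[:3, :3]
    M_noisy[:3, 3] = M[:3, 3] + rng.uniform(-trans_noise, trans_noise, size=3)
    return M_noisy
```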

The LRS-Llalm algorithm incorporates the low-rank and sparse decomposition method to implement the multi-view registration, which may fail due to a high ratio of missing relative motions. The MA-ICP algorithm adopts the traditional ICP algorithm for the registration of partially overlapping double-view point clouds, and an ideal multi-view registration result cannot be obtained when the initial error is too large. In comparison with these baseline methods, the MA-ATICP algorithm reaches the best performance.

Method      Noise level 1 (rad)            Noise level 2 (rad)            Noise level 3 (rad)
            Obj mean  Obj std  Time(s)     Obj mean  Obj std  Time(s)     Obj mean  Obj std  Time(s)
LRS-Llalm   0.6500    0.0048   94.635      0.6514    0.3445   104.857     1.0227    1.0071   209.742
MA-ICP      0.8532    0.0002   324.3       0.9164    0.4474   312.582     1.4980    1.6478   302.994
MA-ATICP    0.6464    0.0029   20.149      0.6505    0.0067   21.696      0.6992    0.0793   25.477
Table 2: Performance comparisons under varied noise levels.
Figure 12: Illustration of 3D face matching process.
index matching error index matching error index matching error
1 13.3297 13 6.6086 25 9.1297
2 13.8684 14 9.3108 26 5.2139
3 13.0567 15 8.8308 27 7.3424
4 10.9594 16 10.6764 28 10.6764
5 14.7189 17 10.8464 29 7.5995
6 6.9949 18 8.1302 30 18.2861
7 10.8464 19 7.9781 31 11.5770
8 13.9106 20 14.0030 32 8.8903
9 8.6825 21 5.8430 33 8.5329
10 8.0042 22 12.0190 34 6.1100
11 7.8590 23 14.5614 35 15.4480
12 11.2252 24 11.9552 36 15.2528
Table 3: The matching errors computed for 36 3D face models.

The registration of multi-view point cloud data lays a solid foundation for the application of 3D model retrieval. As a typical example of 3D model retrieval, we further evaluate the effectiveness of 3D face detection. The reconstructed 3D face is utilized as test data for 3D face detection, and the matching error between the test and the standard models is computed, as shown in Fig. 12. Moreover, the matching errors for 36 test 3D faces are listed in Table 3. Face and non-face models can be easily discriminated by setting a proper threshold on the matching error.

5 Conclusion and Future Works

In this paper, we propose a new algorithm for multi-view point cloud registration with adaptive convergence threshold. The point cloud registration is implemented based on an improved ICP algorithm combined with the motion average algorithm. For the application of 3D model retrieval, we design a method for 3D face detection using geometric saliency. The test facial triangle is generated from the saliency map and compared with the standard facial triangle.

In the future, we will apply scale factors to specify the transformation relationship between the test and the benchmark point cloud data. The partially overlapping point cloud data will then be aligned based on the optimal scaling ICP method. For the application of 3D model retrieval, new vertex descriptors will be considered in addition to the curvature descriptor. Moreover, further research based on 3D models will be studied, such as 3D building models and 3D vehicle models.

References

  • (1) Lu HM, Li YJ, Chen M. Brain intelligence: Go beyond artificial intelligence, Mobile Networks and Applications, 23(2):368-375 (2018)
  • (2) Xu X, He L, Lu HM et al. Deep adversarial metric learning for cross-modal retrieval, World Wide Web, pp. 1-6 (2018)
  • (3) Li YJ, Lu HM, Kihara K, et al. Motor anomaly detection for aerial unmanned vehicles using temperature sensor, Artificial Intelligence and Robotics, 752(1): 295-304 (2017)
  • (4) Chane CS, Schutze R, Krsek P. Registration of arbitrary multi-view 3D acquisitions, Computers in Industry, 64(9):1082-1089 (2013)
  • (5) Chetverikov D, Stepanov D, Krsek P. Robust Euclidean Alignment of 3D point sets: The Trimmed Iterative Closest Point Algorithm, Image and Vision Computing, 27(11):1201-1208 (2006)
  • (6) Lomonosov E, Chetverikov D, Ekart A. Pre-registration of arbitrary oriented 3D surfaces using a genetic algorithm. Pattern Recognition Letters, 27(11):1201-1208 (2006)
  • (7) Zhu JH, Meng DY, Li ZY, et al. Robust Registration of Partially Overlapping Point Sets via Genetic Algorithm with Growth Operator, IET Image Processing, 8(10):582-590 (2014)
  • (8) Guo R, Zhu JH, Li YC, et al. Weighted motion averaging for the registration of multi-view range scans, Multimedia Tools and Applications, 77(1):10651-10668 (2018)
  • (9) Shi SW, Chuang YT, Yu TY. An efficient and accurate method for the relaxation of multiview registration error, IEEE Transactions on Image Processing, 17(6):968-981 (2009)
  • (10) Sandhu R, Dambreville S, Tannenbaum A. Point Set Registration via Particle Filtering and Stochastic Dynamics, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1459-1473 (2010)
  • (11) Zhu JH, Du SY, Li ZY, et al. A partial Point Sets Registration Algorithm based on Particle Filter, Science China: Information Sciences, 44(7):86-89 (2010)
  • (12) Chen Y, Medioni G. Object Modeling by Registration of Multiple Range Images, Image and Vision Computing, 10(3):145-155 (1992)
  • (13) Bergevin R, Soucy M, Gagnon H, et al. Towards A General Multi-View Registration Technique, IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(5):540-547 (1996)
  • (14) Guo YL, Sohel F, Bennamoun M et al. An accurate and robust range image registration algorithm for 3D object modeling, IEEE Transactions on Multimedia, 16(5):1377-1390 (2014)
  • (15) Fantoni S, Castellani U, Fusiello A, Accurate and automatic alignment of range surfaces, Conference on 3D Imaging, pp. 73-80 (2012)
  • (16) Akagunduz E, Ulusoy I. 3D face detection using transform invariant features, Electronics Letters, 46(13):905-907 (2010)
  • (17) Creusot C, Pear N, Austin J. Automatic keypoint detection on 3D faces using a dictionary of local shapes. International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, pp. 204-211 (2011)
  • (18) Rabiu H, Saripan M, Marhaban MH, et al. 3D-based face segmentation using adaptive radius. IEEE International Conference on Signal and Image Processing Applications, pp. 237-240 (2013)
  • (19) Lei J, Zhou J, Mottaleb MA, et al. Detection, localization and pose classification of ear in 3D face images, IEEE International Conference on Image Processing, pp. 4200-4204 (2013)
  • (20) Boukamcha H, Elhallek M, Smach F. 3D face landmark auto detection. World Symposium on Computer Networks and Information Security, pp. 1-6 (2015)
  • (21) Wang Y, Wang F, Wang TF, et al. 3D facial mesh detection using geometric saliency of surface, IEEE International Conference on Multimedia and Expo, pp. 1-4 (2011)
  • (22) Lee CH, Varshney A, Jacobs DW. Mesh saliency, ACM Transactions on Graphics, 24(3):659-666 (2005)
  • (23) http://graphics.stanford.edu/data/3Dscanrep/