
# Zeroth-Order Stochastic Alternating Direction Method of Multipliers for Nonconvex Nonsmooth Optimization

Feihu Huang, Shangqian Gao, Songcan Chen, Heng Huang
Department of Electrical & Computer Engineering, University of Pittsburgh, USA
College of Computer Science & Technology, Nanjing University of Aeronautics and Astronautics   MIIT Key Laboratory of Pattern Analysis & Machine Intelligence, China
JD Finance America Corporation
feh23@pitt.edu,  shg84@pitt.edu,  s.chen@nuaa.edu.cn,  heng.huang@pitt.edu
Corresponding Author.
###### Abstract


## 1 Introduction

Alternating direction method of multipliers (ADMM) [Gabay and Mercier, 1976; Boyd et al., 2011] is a popular optimization tool for solving composite and constrained problems in machine learning. In particular, ADMM can efficiently optimize problems with complicated structured regularization such as the graph-guided fused lasso [Kim et al., 2009], which are too complicated for other popular optimization methods such as proximal gradient methods [Beck and Teboulle, 2009]. Thus, ADMM has been widely studied in recent years. For large-scale optimization, the stochastic ADMM method [Ouyang et al., 2013] has been proposed. Due to the variance of the stochastic gradient, however, these methods suffer from a slow convergence rate. To speed up the convergence, faster stochastic ADMM methods [Suzuki, 2014; Zheng and Kwok, 2016] have recently been proposed by using variance reduction (VR) techniques such as SVRG [Johnson and Zhang, 2013]. In fact, ADMM is also highly successful in solving various nonconvex problems such as tensor decomposition and learning neural networks [Taylor et al., 2016]. Thus, some fast nonconvex stochastic ADMM methods have been developed [Huang et al., 2016].

Currently, most ADMM methods need to compute gradients of the loss functions at each iteration. However, in many machine learning problems, an explicit expression of the gradient of the objective function is difficult or infeasible to obtain. For example, in black-box situations, only prediction results (i.e., function values) are provided [Chen et al., 2017]. In bandit settings [Agarwal et al., 2010], the player only receives partial feedback in terms of loss function values, so it is impossible to obtain an explicit gradient of the loss function. Clearly, classic optimization methods based on first-order gradients or second-order information are not suited to these problems. Thus, zeroth-order optimization methods [Nesterov and Spokoiny, 2017; Duchi et al., 2015] have been developed that use only function values in the optimization.

In this paper, we focus on using zeroth-order methods to solve the following nonconvex nonsmooth problem:

$$\min_{x,\{y_j\}_{j=1}^k}\ F(x, y_{[k]}) := \frac{1}{n}\sum_{i=1}^{n} f_i(x) + \sum_{j=1}^{k}\psi_j(y_j), \qquad \text{s.t. } Ax + \sum_{j=1}^{k} B_j y_j = c, \tag{1}$$

where $f_i(x)$, for all $i \in \{1,\ldots,n\}$, is a nonconvex and smooth function, and each $\psi_j(y_j)$ is a convex and nonsmooth function. In machine learning, the function $\frac{1}{n}\sum_{i=1}^n f_i(x)$ can be used for the empirical loss, the functions $\{\psi_j\}_{j=1}^k$ for multiple structured penalties (e.g., sparse + group sparse), and the constraint $Ax + \sum_{j=1}^k B_j y_j = c$ for encoding the structure pattern of model parameters such as a graph structure. Due to the flexibility in splitting the objective function into the loss $\frac{1}{n}\sum_{i=1}^n f_i(x)$ and each penalty $\psi_j(y_j)$, ADMM is an efficient method for solving the above constrained problem. However, in problem (1) we can only access the objective values rather than the explicit function $f_i(x)$, so the classic ADMM methods are unsuitable for this problem.

Recently, [Liu et al., 2018a; Gao et al., 2018] proposed zeroth-order stochastic ADMM methods, which use only the objective values in the optimization. However, these zeroth-order ADMM-based methods build on the convexity of the objective function. Clearly, such methods are limited in many applications such as adversarial attacks on black-box deep neural networks (DNNs). Because problem (1) includes multiple nonsmooth regularization functions and a linear constraint, the existing nonconvex zeroth-order algorithms [Ghadimi and Lan, 2013; Liu et al., 2018b; Gu et al., 2018] are not suitable for this problem.

In this paper, we therefore propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) to solve problem (1), based on the coordinate smoothing gradient estimator [Liu et al., 2018b]. In particular, ZO-SVRG-ADMM and ZO-SAGA-ADMM build on SVRG [Johnson and Zhang, 2013] and SAGA [Defazio et al., 2014], respectively. Moreover, we study the convergence properties of the proposed methods. Table 1 summarizes the convergence properties of the proposed methods and related ones.

### 1.1 Challenges and Contributions

Although both SVRG and SAGA show good performance in first-order and second-order methods, applying these techniques to a nonconvex zeroth-order ADMM method is not trivial. There exist at least two main challenges:

• Due to the failure of Fejér monotonicity of the iterates, the convergence analysis of nonconvex ADMM is generally quite difficult [Wang et al., 2015]. With the inexact zeroth-order estimated gradient, this difficulty becomes even greater in nonconvex zeroth-order ADMM methods.

• To guarantee convergence of our zeroth-order ADMM methods, we need to design a new effective Lyapunov function, which cannot simply follow the existing nonconvex (stochastic) ADMM methods [Huang et al., 2016; Jiang et al., 2019].

Thus, we carefully construct Lyapunov functions in the theoretical analysis below to ensure convergence of the proposed methods. In summary, our major contributions are as follows:

• We propose a class of fast zeroth-order stochastic ADMM methods (i.e., ZO-SVRG-ADMM and ZO-SAGA-ADMM) to solve the problem (1).

• We prove that both ZO-SVRG-ADMM and ZO-SAGA-ADMM have a convergence rate of $O(1/T)$ for nonconvex nonsmooth optimization, where $T$ is the number of iterations. In particular, our methods not only reach the existing best convergence rate for nonconvex optimization, but can also effectively solve many machine learning problems with multiple complex regularization penalties.

• Extensive experiments conducted on black-box classification and structured adversarial attacks on black-box DNNs validate the efficiency of the proposed algorithms.

## 2 Related Works

Zeroth-order (gradient-free) optimization is a powerful tool for solving machine learning problems in which the gradient of the objective function is not available or is computationally prohibitive. Recently, zeroth-order optimization methods have been widely applied and studied. For example, they have been applied to bandit feedback analysis [Agarwal et al., 2010] and black-box attacks on DNNs [Chen et al., 2017]. [Nesterov and Spokoiny, 2017] proposed several random zeroth-order methods using a Gaussian smoothing gradient estimator. To deal with nonsmooth regularization, [Liu et al., 2018a; Gao et al., 2018] proposed zeroth-order online/stochastic ADMM-based methods.

So far, the above algorithms mainly build on the convexity of the problems. In fact, zeroth-order methods are also highly successful in solving various nonconvex problems such as adversarial attacks on black-box DNNs [Chen et al., 2017]. Thus, [Ghadimi and Lan, 2013; Liu et al., 2018b; Gu et al., 2018] began to study zeroth-order stochastic methods for nonconvex optimization. To deal with nonsmooth regularization, [Ghadimi et al., 2016; Huang et al., 2019] proposed nonconvex zeroth-order proximal stochastic gradient methods. However, these methods are still not well suited to some complex machine learning problems, such as the structured adversarial attack on black-box DNNs described in the following experiments.

### 2.1 Notations

Let $\|\cdot\|$ denote the Euclidean norm, and let $y_{[j]} = (y_1, \ldots, y_j)$ for $j \le k$. Given a positive definite matrix $G$, $\|x\|_G^2 = x^T G x$; $\sigma_{\max}(G)$ and $\sigma_{\min}(G)$ denote the largest and smallest eigenvalues of $G$, respectively. $\sigma^A_{\max}$ and $\sigma^A_{\min}$ denote the largest and smallest eigenvalues of the matrix $A^T A$.

## 3 Preliminaries

In this section, we begin by restating a standard $\epsilon$-approximate stationary point of problem (1), as in [Huang et al., 2016; Jiang et al., 2019].

###### Definition 1.

Given $\epsilon > 0$, the point $(x^*, y^*_{[k]}, \lambda^*)$ is said to be an $\epsilon$-approximate stationary point of problem (1) if it holds that

$$\mathbb{E}\big[\mathrm{dist}(0, \partial L(x^*, y^*_{[k]}, \lambda^*))^2\big] \le \epsilon, \tag{2}$$

where $L(x, y_{[k]}, \lambda)$ denotes the Lagrangian function of problem (1), $\mathrm{dist}(0, \partial L) = \inf_{v \in \partial L}\|v\|$, and

$$\partial L(x, y_{[k]}, \lambda) = \begin{bmatrix} \nabla_x L(x, y_{[k]}, \lambda) \\ \partial_{y_1} L(x, y_{[k]}, \lambda) \\ \vdots \\ \partial_{y_k} L(x, y_{[k]}, \lambda) \\ -\big(Ax + \sum_{j=1}^{k} B_j y_j - c\big) \end{bmatrix}.$$

Next, we make some mild assumptions regarding problem (1) as follows:

###### Assumption 1.

Each function $f_i(x)$ is $L$-smooth for $i \in \{1, \ldots, n\}$, i.e.,

$$\|\nabla f_i(x) - \nabla f_i(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^d,$$

which implies

$$f_i(x) \le f_i(y) + \nabla f_i(y)^T(x - y) + \frac{L}{2}\|x - y\|^2.$$
###### Assumption 2.

The gradient of each function $f_i(x)$ is bounded, i.e., there exists a constant $\delta > 0$ such that $\|\nabla f_i(x)\| \le \delta$ for all $x \in \mathbb{R}^d$ and $i \in \{1, \ldots, n\}$.

###### Assumption 3.

The function $f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x)$ and the functions $\psi_j(y_j)$ for all $j \in \{1, \ldots, k\}$ are lower bounded; denote $f^* = \inf_x f(x)$ and $\psi^*_j = \inf_{y_j} \psi_j(y_j)$ for $j \in \{1, \ldots, k\}$.

###### Assumption 4.

$A$ is a full row or column rank matrix.

Assumption 1 has been commonly used in the convergence analysis of nonconvex algorithms [?]. Assumption 2 is widely used for stochastic gradient-based and ADMM-type methods [?]. Assumptions 3 and 4 are usually used in the convergence analysis of ADMM methods [??]. Without loss of generality, we assume the matrix $A$ has full column rank in the rest of this paper.

## 4 Fast Zeroth-Order Stochastic ADMMs

In this section, we propose a class of zeroth-order stochastic ADMM methods to solve problem (1). First, we define the augmented Lagrangian function of problem (1) as follows:

$$\mathcal{L}_\rho(x, y_{[k]}, \lambda) = f(x) + \sum_{j=1}^{k}\psi_j(y_j) - \Big\langle \lambda,\ Ax + \sum_{j=1}^{k} B_j y_j - c \Big\rangle + \frac{\rho}{2}\Big\|Ax + \sum_{j=1}^{k} B_j y_j - c\Big\|^2, \tag{3}$$

where $\lambda$ and $\rho > 0$ denote the dual variable and the penalty parameter, respectively.
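To make the definition concrete, Eq. (3) can be evaluated in a few lines of NumPy. This is a minimal sketch under the notation above; the function and argument names are our own choices, and the toy instance in the test is illustrative only.

```python
import numpy as np

def aug_lagrangian(f, psis, A, Bs, c, rho, x, ys, lam):
    """Augmented Lagrangian L_rho(x, y_[k], lambda) of Eq. (3).

    f    : callable, smooth loss f(x)
    psis : list of callables, nonsmooth penalties psi_j(y_j)
    A, Bs: constraint matrices A and [B_1, ..., B_k]
    c    : constraint vector; rho: penalty parameter > 0
    """
    r = A @ x + sum(B @ y for B, y in zip(Bs, ys)) - c   # constraint residual
    return (f(x) + sum(psi(y) for psi, y in zip(psis, ys))
            - lam @ r + 0.5 * rho * (r @ r))
```

At any feasible point the residual vanishes, so the value reduces to $f(x) + \sum_j \psi_j(y_j)$ regardless of $\lambda$ and $\rho$.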

In problem (1), the explicit expression of the objective function is not available; only the function value of $f_i(x)$ is available. To avoid computing the explicit gradient, we therefore use the coordinate smoothing gradient estimator [Liu et al., 2018b] to estimate gradients: for $i \in \{1, \ldots, n\}$,

$$\hat{\nabla} f_i(x) = \sum_{j=1}^{d} \frac{1}{2\mu_j}\big(f_i(x + \mu_j e_j) - f_i(x - \mu_j e_j)\big)\, e_j, \tag{4}$$

where $\mu_j > 0$ is a coordinate-wise smoothing parameter, and $e_j$ is a standard basis vector with 1 at its $j$-th coordinate and 0 otherwise.
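For intuition, here is a minimal NumPy sketch of the estimator in Eq. (4), using a single smoothing parameter $\mu$ for all coordinates (the paper allows a per-coordinate $\mu_j$); the function name is ours.

```python
import numpy as np

def zo_coord_grad(f, x, mu=1e-4):
    """Coordinate-wise smoothing gradient estimator of Eq. (4):
    a central difference of f along each standard basis vector e_j."""
    d = x.size
    g = np.zeros(d)
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        g[j] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g
```

Each estimate costs $2d$ function queries; for a quadratic $f$, the central difference is exact up to floating-point rounding.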

Based on the above estimated gradients, we propose a zeroth-order ADMM (ZO-ADMM) method to solve problem (1) by executing the following iterations, for $t = 0, 1, \ldots$:

$$\begin{aligned} y^{t+1}_j &= \arg\min_{y_j}\ \mathcal{L}_\rho\big(x_t,\, y^{t+1}_{[1:j-1]},\, y_j,\, y^{t}_{[j+1:k]},\, \lambda_t\big), \quad j = 1, \ldots, k, \\ x_{t+1} &= \arg\min_{x}\ \hat{\mathcal{L}}_\rho\big(x,\, y^{t+1}_{[k]},\, \lambda_t,\, \hat{\nabla} f(x_t)\big), \\ \lambda_{t+1} &= \lambda_t - \rho\Big(Ax_{t+1} + \sum_{j=1}^{k} B_j y^{t+1}_j - c\Big), \end{aligned} \tag{5}$$

Here, because we use the inexact zeroth-order gradient to update $x$, we define an approximate function over $x$ as follows:

$$\hat{\mathcal{L}}_\rho\big(x, y^{t+1}_{[k]}, \lambda_t, \hat{\nabla} f(x_t)\big) = f(x_t) + \hat{\nabla} f(x_t)^T(x - x_t) + \frac{1}{2\eta}\|x - x_t\|^2_G + \sum_{j=1}^{k}\psi_j(y^{t+1}_j) - \lambda_t^T\Big(Ax + \sum_{j=1}^{k} B_j y^{t+1}_j - c\Big) + \frac{\rho}{2}\Big\|Ax + \sum_{j=1}^{k} B_j y^{t+1}_j - c\Big\|^2, \tag{6}$$

where $G$ is a positive definite matrix, $\hat{\nabla} f(x_t)$ is the zeroth-order gradient estimate, and $\eta > 0$ is a step size. Considering that the matrix $A^T A$ may be large, one can set $G = rI_d - \eta\rho A^T A$ with $r \ge \eta\rho\,\sigma_{\max}(A^T A) + 1$ to linearize the term $\frac{\rho}{2}\|Ax + \sum_{j=1}^{k} B_j y^{t+1}_j - c\|^2$.
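To make the updates in Eqs. (5) and (6) concrete, the toy sketch below instantiates ZO-ADMM for the special case $k = 1$, $A = I$, $B_1 = -I$, $c = 0$, $G = I_d$, and $\psi_1(y) = \tau\|y\|_1$, so the $y$-step is a soft-threshold and the linearized $x$-step has a closed form. The concrete loss and all parameter values are our illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4):
    # coordinate-wise central-difference estimator, Eq. (4)
    g = np.zeros(x.size)
    for j in range(x.size):
        e = np.zeros(x.size); e[j] = 1.0
        g[j] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g

def soft(v, t):                                  # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# toy instance: min_x f(x) + tau * ||y||_1  s.t.  x - y = 0
b = np.array([1.0, -0.2, 0.05])
f = lambda x: 0.5 * np.sum((x - b) ** 2)         # accessed only via values
tau, rho, eta = 0.1, 1.0, 0.5
x = np.zeros(3); y = np.zeros(3); lam = np.zeros(3)
for _ in range(1000):
    y = soft(x - lam / rho, tau / rho)           # y-step: exact prox
    g = zo_grad(f, x)                            # zeroth-order gradient
    x = (x / eta - g + lam + rho * y) / (1.0 / eta + rho)  # linearized x-step
    lam = lam - rho * (x - y)                    # dual step
```

For this convex toy problem, the iterates approach the soft-thresholded solution $\mathrm{soft}(b, \tau)$, with the primal residual $x - y$ vanishing.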

In problem (1), not only is the gradient of $f_i(x)$ unavailable, but the sample size $n$ is also very large. Thus, we propose the fast ZO-SVRG-ADMM and ZO-SAGA-ADMM to solve problem (1), based on SVRG and SAGA, respectively.

Algorithm 1 shows the algorithmic framework of ZO-SVRG-ADMM. In Algorithm 1, we use the estimated stochastic gradient $\hat{\nabla} f_{\mathcal{I}_t}(x^s_t) = \frac{1}{b}\sum_{i_t \in \mathcal{I}_t}\big(\hat{\nabla} f_{i_t}(x^s_t) - \hat{\nabla} f_{i_t}(\tilde{x}^s)\big) + \hat{\nabla} f(\tilde{x}^s)$, where $\mathcal{I}_t$ is a mini-batch of size $b$ and $\tilde{x}^s$ is the snapshot point of the $s$-th epoch. We have $\mathbb{E}[\hat{\nabla} f_{\mathcal{I}_t}(x^s_t)] \neq \nabla f(x^s_t)$, i.e., this stochastic gradient is a biased estimate of the true full gradient. Although SVRG has shown great promise, it relies on the assumption that the stochastic gradient is an unbiased estimate of the true full gradient. Thus, adapting the ideas of SVRG to zeroth-order ADMM optimization is not a trivial task. To handle this challenge, we choose an appropriate step size $\eta$, penalty parameter $\rho$, and smoothing parameters $\{\mu_j\}$ to guarantee the convergence of our algorithms, as discussed in the following convergence analysis.
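The SVRG-style variance-reduced zeroth-order gradient described above can be sketched as follows; the helper names and mini-batch handling are ours, since Algorithm 1 itself is not reproduced in this extraction.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4):
    # coordinate-wise central-difference estimator, Eq. (4)
    g = np.zeros(x.size)
    for j in range(x.size):
        e = np.zeros(x.size); e[j] = 1.0
        g[j] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g

def zo_svrg_grad(fs, x, x_snap, g_snap, batch, mu=1e-4):
    """SVRG-style estimate: batch mean of [g_i(x) - g_i(x_snap)] plus the
    snapshot gradient g_snap, where every g_i is the zeroth-order estimator
    (hence the estimate is biased w.r.t. the true gradient of f)."""
    corr = np.zeros(x.size)
    for i in batch:
        corr += zo_grad(fs[i], x, mu) - zo_grad(fs[i], x_snap, mu)
    return corr / len(batch) + g_snap

# sanity check: at the snapshot point the estimate collapses to g_snap
fs = [lambda v, a=a: (v - a) @ (v - a) for a in (0.0, 1.0, -2.0)]
x0 = np.array([0.5, -0.5])
g_snap = sum(zo_grad(fi, x0) for fi in fs) / len(fs)   # full ZO gradient
v = zo_svrg_grad(fs, x0, x0, g_snap, batch=[1])
```

The correction term vanishes at the snapshot point, so the estimate equals the full zeroth-order gradient there; away from the snapshot, the correction reduces the variance of the mini-batch estimate.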

Algorithm 2 shows the algorithmic framework of ZO-SAGA-ADMM. In Algorithm 2, we use the estimated stochastic gradient $\hat{\nabla} f_{\mathcal{I}_t}(x_t) = \frac{1}{b}\sum_{i_t \in \mathcal{I}_t}\big(\hat{\nabla} f_{i_t}(x_t) - \hat{\nabla} f_{i_t}(z^t_{i_t})\big) + \frac{1}{n}\sum_{i=1}^{n} \hat{\nabla} f_i(z^t_i)$, where $z^t_i$ stores the point at which the zeroth-order gradient of $f_i$ was last evaluated. Similarly, we have $\mathbb{E}[\hat{\nabla} f_{\mathcal{I}_t}(x_t)] \neq \nabla f(x_t)$.
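Analogously, the SAGA-style estimator replaces the epoch snapshot with a table of past reference points $z_i$, one per component function. The class name and table-update rule below are our illustration of this bookkeeping.

```python
import numpy as np

def zo_grad(f, x, mu=1e-4):
    # coordinate-wise central-difference estimator, Eq. (4)
    g = np.zeros(x.size)
    for j in range(x.size):
        e = np.zeros(x.size); e[j] = 1.0
        g[j] = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)
    return g

class ZOSagaGrad:
    """SAGA-style estimator: keeps one stored zeroth-order gradient per
    component f_i, evaluated at its reference point z_i."""
    def __init__(self, fs, z0, mu=1e-4):
        self.fs, self.mu = fs, mu
        self.table = [zo_grad(f, z0, mu) for f in fs]     # g_i(z_i)

    def estimate(self, x, batch):
        avg = sum(self.table) / len(self.table)
        corr = sum(zo_grad(self.fs[i], x, self.mu) - self.table[i]
                   for i in batch) / len(batch)
        for i in batch:                                   # refresh table
            self.table[i] = zo_grad(self.fs[i], x, self.mu)
        return corr + avg

fs = [lambda v, a=a: (v - a) @ (v - a) for a in (0.0, 1.0, -2.0)]
x0 = np.array([0.5, -0.5])
saga = ZOSagaGrad(fs, x0)
v = saga.estimate(x0, batch=[2])   # all z_i == x0, so the correction vanishes
```

Unlike SVRG, no full-gradient recomputation is needed per epoch; the table average plays the role of the snapshot gradient.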

## 5 Convergence Analysis

In this section, we will study the convergence properties of the proposed algorithms (ZO-SVRG-ADMM and ZO-SAGA-ADMM). For notational simplicity, let

$$\nu_1 = \frac{L}{4} + \frac{9L^2}{\sigma^A_{\min}}, \quad \nu_2 = k\Big(\rho^2\sigma^B_{\max}\sigma^A_{\max} + \rho^2(\sigma^B_{\max})^2 + \sigma^2_{\max}(H)\Big), \quad \nu_3 = 6L^2 + \frac{3\sigma^2_{\max}(G)}{\eta^2}, \quad \nu_4 = \frac{18L^2}{\sigma^A_{\min}\rho^2} + \frac{3\sigma^2_{\max}(G)}{\sigma^A_{\min}\eta^2\rho^2}.$$

### 5.1 Convergence Analysis of ZO-SVRG-ADMM

In this subsection, we analyze convergence properties of the ZO-SVRG-ADMM.

Given the sequence $\{(x^s_t, y^{s,t}_{[k]}, \lambda^s_t)\}$ generated by Algorithm 1, we define a Lyapunov function:

$$R^s_t = \mathbb{E}\Big[\mathcal{L}_\rho(x^s_t, y^{s,t}_{[k]}, \lambda^s_t) + \frac{18L^2 d}{\sigma^A_{\min}\rho b}\|x^s_{t-1} - \tilde{x}^s\|^2 + c_t\|x^s_t - \tilde{x}^s\|^2\Big],$$

where the positive sequence $\{c_t\}$ satisfies

$$c_t = \begin{cases} \dfrac{36L^2 d}{\sigma^A_{\min}\rho b} + \dfrac{2Ld}{b} + (1+\beta)c_{t+1}, & 1 \le t \le m, \\[4pt] 0, & t \ge m+1. \end{cases}$$

In addition, we define a useful variable $\gamma$.

###### Theorem 1.

Suppose the sequence $\{(x^s_t, y^{s,t}_{[k]}, \lambda^s_t)\}$ is generated by Algorithm 1. Choosing the step size $\eta$, the penalty parameter $\rho$, and the parameter $\beta$ appropriately, we have

$$\min_{s,t}\ \mathbb{E}\big[\mathrm{dist}(0, \partial L(x^s_t, y^{s,t}_{[k]}, \lambda^s_t))^2\big] \le O\Big(\frac{d^{2l}}{T}\Big) + O\big(d^{2+2l}\mu^2\big),$$

where $T$ denotes the total number of iterations and $R^*$ denotes a lower bound of the function $R^s_t$. It follows that, if the smoothing parameter $\mu$ and the total iteration number $T$ satisfy

$$\frac{1}{\mu^2} \ge \frac{2d^{2+2l}}{\epsilon}\max\Big\{\nu_1\nu_2 + \frac{3L^2}{2},\ \nu_1\nu_3 + \frac{9L^2}{\sigma^A_{\min}\rho^2},\ \nu_1\nu_4\Big\}, \qquad T = \frac{4\nu_{\max}(R^1_0 - R^*)}{\epsilon\gamma},$$

then the iterate attaining the above minimum is an $\epsilon$-approximate stationary point of problem (1).

###### Remark 1.

Theorem 1 shows that, with appropriate choices of the step size $\eta$, the penalty parameter $\rho$, the smoothing parameter $\mu$, and the mini-batch size $b$, the ZO-SVRG-ADMM has a convergence rate of $O(1/T)$. The parameter $l$ trades the dependence on the dimension $d$ in the rate against the mini-batch size. With the best trade-off, the ZO-SVRG-ADMM achieves the existing optimal function query complexity for finding an $\epsilon$-approximate local solution.

### 5.2 Convergence Analysis of ZO-SAGA-ADMM

In this subsection, we provide the convergence analysis of the ZO-SAGA-ADMM.

Given the sequence $\{(x_t, y^t_{[k]}, \lambda_t)\}$ generated by Algorithm 2, we define a Lyapunov function

$$\Omega_t = \mathbb{E}\Big[\mathcal{L}_\rho(x_t, y^t_{[k]}, \lambda_t) + \frac{18L^2 d}{\sigma^A_{\min}\rho b}\,\frac{1}{n}\sum_{i=1}^{n}\|x_{t-1} - z^{t-1}_i\|^2 + c_t\,\frac{1}{n}\sum_{i=1}^{n}\|x_t - z^t_i\|^2\Big],$$

where the positive sequence $\{c_t\}$ satisfies

$$c_t = \begin{cases} \dfrac{36L^2 d}{\sigma^A_{\min}\rho b} + \dfrac{2Ld}{b} + (1-p)(1+\beta)c_{t+1}, & 0 \le t \le T-1, \\[4pt] 0, & t \ge T. \end{cases}$$

In addition, we define a useful variable $\gamma$.

###### Theorem 2.

Suppose the sequence $\{(x_t, y^t_{[k]}, \lambda_t)\}$ is generated by Algorithm 2. Choosing the step size $\eta$ and the penalty parameter $\rho$ appropriately, we have

$$\min_{1 \le t \le T}\ \mathbb{E}\big[\mathrm{dist}(0, \partial L(x_t, y^t_{[k]}, \lambda_t))^2\big] \le O\Big(\frac{d^{2l}}{T}\Big) + O\big(d^{2+2l}\mu^2\big),$$

where $\Omega^*$ denotes a lower bound of the function $\Omega_t$. It follows that, if the parameters $\mu$ and $T$ satisfy

$$\frac{1}{\mu^2} \ge \frac{2d^{2+2l}}{\epsilon}\max\Big\{\nu_1\nu_2 + \frac{3L^2}{2},\ \nu_1\nu_3 + \frac{9L^2}{\sigma^A_{\min}\rho^2},\ \nu_1\nu_4\Big\}, \qquad T = \frac{4\kappa_{\max}}{\epsilon\gamma}(\Omega_0 - \Omega^*),$$

then the iterate attaining the above minimum is an $\epsilon$-approximate stationary point of problem (1).

###### Remark 2.

Theorem 2 shows that, with appropriate choices of the step size $\eta$, the penalty parameter $\rho$, and the smoothing parameter $\mu$, the ZO-SAGA-ADMM has a convergence rate of $O(1/T)$. As in Remark 1, the parameter $l$ trades the dependence on the dimension $d$ in the rate against the mini-batch size. With the best trade-off, the ZO-SAGA-ADMM achieves the existing optimal function query complexity for finding an $\epsilon$-approximate local solution.

## 6 Experiments

In this section, we compare our algorithms (ZO-SVRG-ADMM, ZO-SAGA-ADMM) with ZO-ProxSVRG and ZO-ProxSAGA [Huang et al., 2019], the deterministic zeroth-order ADMM (ZO-ADMM), and zeroth-order stochastic ADMM without variance reduction (ZO-SGD-ADMM) on two applications: 1) robust black-box binary classification, and 2) structured adversarial attacks on black-box DNNs.

### 6.1 Robust Black-Box Binary Classification

In this subsection, we focus on a robust black-box binary classification task with the graph-guided fused lasso. Given a set of training samples $\{(a_i, b_i)\}_{i=1}^n$, where $a_i \in \mathbb{R}^d$ and $b_i \in \{-1, +1\}$, we find the optimal parameter $x$ by solving the problem:

$$\min_{x \in \mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n} f_i(x) + \tau_1\|x\|_1 + \tau_2\|\hat{G}x\|_1, \tag{7}$$

where $f_i(x)$ is the black-box loss function, which only returns its function value for a given input. Here, we specify $f_i(x)$ as the nonconvex robust correntropy-induced loss [He et al., 2011]. The matrix $\hat{G}$ encodes the sparsity pattern of a graph obtained by sparse inverse covariance selection, as in [Kim et al., 2009]. In the experiment, we fix the mini-batch size, the smoothing parameter, and the penalty parameters $\tau_1$ and $\tau_2$.
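The exact form of the loss is not shown in this extraction; a commonly used correntropy-induced loss is $\ell_\sigma(r) = \frac{\sigma^2}{2}\big(1 - \exp(-r^2/\sigma^2)\big)$, which is bounded (hence robust to outliers) and nonconvex. The sketch below evaluates the full objective of problem (7) under this assumed loss; the loss form, $\sigma$, and parameter values are our assumptions, not necessarily the paper's exact choices.

```python
import numpy as np

def correntropy_loss(x, a_i, b_i, sigma=1.0):
    """Assumed correntropy-induced loss for one sample (a_i, b_i):
    bounded above by sigma^2 / 2, zero at a perfect fit."""
    r = b_i - a_i @ x
    return 0.5 * sigma**2 * (1.0 - np.exp(-(r / sigma) ** 2))

def objective(x, A, b, G_hat, tau1=1e-3, tau2=1e-3):
    """Black-box objective of problem (7): empirical loss plus the l1
    and graph-guided fused lasso penalties."""
    loss = np.mean([correntropy_loss(x, a, y) for a, y in zip(A, b)])
    return loss + tau1 * np.abs(x).sum() + tau2 * np.abs(G_hat @ x).sum()
```

Because the loss saturates at $\sigma^2/2$, a single grossly mislabeled sample cannot dominate the empirical risk, which is the motivation for using it in robust classification.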

In the experiment, we use several public real datasets (20news is from https://cs.nyu.edu/~roweis/data.html; the others are from www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), which are summarized in Table 2. For each dataset, we use half of the samples as training data and the rest as testing data. Figure 1 shows that the objective values of our algorithms decrease faster than those of the other algorithms as the CPU time increases. In particular, our algorithms perform better than the zeroth-order proximal algorithms. It is relatively difficult for these zeroth-order proximal methods to handle the nonsmooth penalties in problem (7): one has to use an iterative method (such as the classic ADMM) to solve the proximal operator inside them.

### 6.2 Structured Attacks on Black-Box DNNs

In this subsection, we use our algorithms to generate adversarial examples that attack pre-trained DNN models whose parameters are hidden from us and whose outputs alone are accessible. Moreover, we consider an interesting question: "What possible structures could adversarial perturbations have to fool black-box DNNs?" Thus, we use the zeroth-order algorithms to find a universal structured adversarial perturbation $x$ that can fool the samples $\{a_i\}_{i=1}^n$, which can be formulated as the following problem:

$$\min_{x \in \mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n}\max\Big\{F_{l_i}(a_i + x) - \max_{j \neq l_i} F_j(a_i + x),\ 0\Big\} + \tau_1\sum_{p=1}^{P}\sum_{q=1}^{Q}\|x_{G_{p,q}}\|_2 + \tau_2\|x\|_2^2 + \tau_3 h(x), \tag{8}$$

where $F(\cdot)$ represents the final-layer output of the neural network before the softmax, $l_i$ is the true label of sample $a_i$, and $h(x)$ ensures the validity of the created adversarial examples. Specifically, $h(x) = 0$ if $(a_i + x)$ stays in the valid pixel range for all $i$, and $h(x) = +\infty$ otherwise. Following [Xu et al., 2018], we use the overlapping group lasso to obtain structured perturbations. Here, the overlapping groups $\{G_{p,q}\}$ are generated by dividing an image into sub-groups of pixels.
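The overlapping group penalty in (8) can be computed by sliding a small pixel window over the perturbation image. Stride 1 follows the experimental setup below, while the window size $k$ and the function name are our illustrative choices.

```python
import numpy as np

def overlap_group_norm(x_img, k=2, stride=1):
    """sum_{p,q} ||x_{G_{p,q}}||_2 over all k x k windows of the image,
    i.e., the overlapping group lasso term in Eq. (8)."""
    H, W = x_img.shape
    total = 0.0
    for p in range(0, H - k + 1, stride):
        for q in range(0, W - k + 1, stride):
            total += np.linalg.norm(x_img[p:p + k, q:q + k])
    return total
```

With stride 1 each pixel belongs to several groups, which is what encourages spatially contiguous (structured) perturbations rather than scattered pixels.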

In the experiment, we use pre-trained DNN models on MNIST and CIFAR-10 as the target black-box models. For MNIST, we select 20 samples from a target class; for CIFAR-10, we select 30 samples, with the mini-batch sizes set accordingly. The smoothing parameter and the regularization parameters $\tau_1$, $\tau_2$, and $\tau_3$ are fixed for each dataset. For both datasets, the overlapping group lasso uses a small kernel with a stride of one.

Figure 3 shows that the attack losses (i.e., the first term of problem (8)) of our methods decrease faster than those of the other methods as the number of iterations increases. Figure 2 shows that our algorithms can learn structured perturbations and successfully attack the corresponding DNNs.

## 7 Conclusions

In this paper, we proposed the fast ZO-SVRG-ADMM and ZO-SAGA-ADMM methods based on the coordinate smoothing gradient estimator, which use only the objective function values in the optimization. Moreover, we proved that the proposed methods have a convergence rate of $O(1/T)$. In particular, our methods not only reach the existing best convergence rate for nonconvex optimization, but can also effectively solve many machine learning problems with complex nonsmooth regularizers.

## Acknowledgments

F.H., S.G., H.H. were partially supported by U.S. NSF IIS 1836945, IIS 1836938, DBI 1836866, IIS 1845666, IIS 1852606, IIS 1838627, IIS 1837956. S.C. was partially supported by the NSFC under Grant No. 61806093 and No. 61682281, and the Key Program of NSFC under Grant No. 61732006.

## References

• [Agarwal et al., 2010] Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT, pages 28–40. Citeseer, 2010.
• [Beck and Teboulle, 2009] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1):183–202, 2009.
• [Boyd et al., 2011] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011.
• [Chen et al., 2017] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Workshop on Artificial Intelligence and Security, pages 15–26. ACM, 2017.
• [Defazio et al., 2014] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, pages 1646–1654, 2014.
• [Duchi et al., 2015] John C Duchi, Michael I Jordan, Martin J Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE TIT, 61(5):2788–2806, 2015.
• [Gabay and Mercier, 1976] Daniel Gabay and Bertrand Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976.
• [Gao et al., 2018] Xiang Gao, Bo Jiang, and Shuzhong Zhang. On the information-adaptive variants of the admm: an iteration complexity perspective. Journal of Scientific Computing, 76(1):327–363, 2018.
• [Ghadimi and Lan, 2013] Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
• [Ghadimi et al., 2016] Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1-2):267–305, 2016.
• [Gu et al., 2018] Bin Gu, Zhouyuan Huo, Cheng Deng, and Heng Huang. Faster derivative-free stochastic algorithm for shared memory machines. In ICML, pages 1807–1816, 2018.
• [He et al., 2011] Ran He, Wei-Shi Zheng, and Bao-Gang Hu. Maximum correntropy criterion for robust face recognition. IEEE TPAMI, 33(8):1561–1576, 2011.
• [Huang et al., 2016] Feihu Huang, Songcan Chen, and Zhaosong Lu. Stochastic alternating direction method of multipliers with variance reduction for nonconvex optimization. arXiv preprint arXiv:1610.02758, 2016.
• [Huang et al., 2019] Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, and Heng Huang. Faster gradient-free proximal stochastic methods for nonconvex nonsmooth optimization. In AAAI, 2019.
• [Jiang et al., 2019] Bo Jiang, Tianyi Lin, Shiqian Ma, and Shuzhong Zhang. Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis. Computational Optimization and Applications, 72(1):115–157, 2019.
• [Johnson and Zhang, 2013] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
• [Kim et al., 2009] Seyoung Kim, Kyung-Ah Sohn, and Eric P Xing. A multivariate regression approach to association analysis of a quantitative trait network. Bioinformatics, 25(12):i204–i212, 2009.
• [Liu et al., 2018a] Sijia Liu, Jie Chen, Pin-Yu Chen, and Alfred Hero. Zeroth-order online alternating direction method of multipliers: Convergence analysis and applications. In AISTATS, volume 84, pages 288–297, 2018.
• [Liu et al., 2018b] Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Paishun Ting, Shiyu Chang, and Lisa Amini. Zeroth-order stochastic variance reduction for nonconvex optimization. In NIPS, pages 3731–3741, 2018.
• [Nesterov and Spokoiny, 2017] Yurii Nesterov and Vladimir G. Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17:527–566, 2017.
• [Ouyang et al., 2013] Hua Ouyang, Niao He, Long Tran, and Alexander G Gray. Stochastic alternating direction method of multipliers. ICML, 28:80–88, 2013.
• [Suzuki, 2014] Taiji Suzuki. Stochastic dual coordinate ascent with alternating direction method of multipliers. In ICML, pages 736–744, 2014.
• [Taylor et al., 2016] Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training neural networks without gradients: a scalable admm approach. In ICML, pages 2722–2731, 2016.
• [Wang et al., 2015] Fenghui Wang, Wenfei Cao, and Zongben Xu. Convergence of multi-block bregman admm for nonconvex composite problems. arXiv preprint arXiv:1505.03063, 2015.
• [Xu et al., 2018] Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Deniz Erdogmus, Yanzhi Wang, and Xue Lin. Structured adversarial attack: Towards general implementation and better interpretability. arXiv preprint arXiv:1808.01664, 2018.
• [Zheng and Kwok, 2016] Shuai Zheng and James T Kwok. Fast-and-light stochastic admm. In IJCAI, pages 2407–2613, 2016.