
# On the estimation of initial conditions in kernel-based system identification

Riccardo S. Risuleo, Giulio Bottegal and Håkan Hjalmarsson R. S. Risuleo, G. Bottegal and H. Hjalmarsson are with the ACCESS Linnaeus Center, School of Electrical Engineering, KTH Royal Institute of Technology, Sweden (e-mail addresses: risuleo@kth.se , bottegal@kth.se , hjalmars@kth.se). This work was supported by the European Research Council under the advanced grant LEARN, contract 267381 and by the Swedish Research Council under contract 621-2009-4017.
###### Abstract

Recent developments in system identification have brought attention to regularized kernel-based methods, where, adopting the recently introduced stable spline kernel, prior information on the unknown process is enforced. This reduces the variance of the estimates and thus makes kernel-based methods particularly attractive when few input-output data samples are available. In such cases however, the influence of the system initial conditions may have a significant impact on the output dynamics. In this paper, we specifically address this point. We propose three methods that deal with the estimation of initial conditions using different types of information. The methods consist of various mixed maximum likelihood–maximum a posteriori estimators which estimate the initial conditions and tune the hyperparameters characterizing the stable spline kernel. To solve the related optimization problems, we resort to the expectation-maximization method, showing that the solutions can be attained by iterating among simple update steps. Numerical experiments show the advantages, in terms of accuracy in reconstructing the system impulse response, of the proposed strategies, compared to other kernel-based schemes not accounting for the effect of initial conditions.


## 1 Introduction

Regularized regression has a long history [tikhonov1977solutions]. It has become a standard tool in applied statistics [hastie2005elements], mainly due to its capability of reducing the mean square error (MSE) of the regressor estimate [james1961estimation], when compared to standard least squares [ljung1999system]. Recently, a novel method based on regularization has been proposed for system identification [pillonetto2014kernel]. In this approach, the goal is to estimate the impulse response of the system using so-called kernel-based methods [scholkopf2002learning]. To this end, the class of stable spline kernels has recently been proposed in [pillonetto2010new][pillonetto2011prediction]. The main feature of these kernels is that they encode prior information on the exponential stability of the system and on the smoothness of the impulse response. These features have made stable spline kernels suitable for other estimation problems, such as the reconstruction of exponential decays [pillonetto2010regularized] and correlation functions [bottegal2013regularized]. Other kernels for system identification have been introduced in subsequent studies, see for instance [chen2012estimation][chen2014constructive].

Stable spline kernels are parameterized by two hyperparameters, that determine magnitude and shape of the kernel and that need to be estimated from data. An effective approach for hyperparameter estimation relies upon empirical Bayes arguments [maritz1989empirical]. Specifically, exploiting the Bayesian interpretation of regularization [wahba1990spline], the impulse response is modeled as the realization of a Gaussian process whose covariance matrix corresponds to the kernel. The hyperparameters are then estimated by maximizing the marginal likelihood of the output data, obtained by integrating out the dependence on the impulse response. Given a choice of hyperparameters, the unknown impulse response is found by computing its minimum MSE Bayesian estimate [pillonetto2014kernel].

One situation where kernel-based methods are preferable is when data records are short (e.g., of length about five times the rise time of the system). This is mainly for two reasons:

1. Kernel-based methods do not require the selection of a model order. Standard parametric techniques (such as the prediction error method [ljung1999system][soederstroem1988system]) need to rely on model selection criteria, such as AIC or BIC, if the structure of the system is unknown [beghelli1990frisch]. These could be unreliable when faced with small data sets.

2. The bias introduced by regularization reduces the variance. With small data records, the variance can be very high. If the bias is of the right kind, it will compensate for the variance effect in the MSE [hastie2005elements, Ch. 2.9].

When data records are very short (e.g., two times the rise time of the system) we cannot ignore the effect of the initial conditions. In fact, if the system is not at rest before the experiment is performed, then there are transient effects that cannot be explained using only the collected data. Standard workarounds, such as discarding those output samples that depend on the initial conditions or setting the initial conditions to zero [ljung1999system, Ch. 10.1], may give unsatisfactory results. Thus, it seems preferable to deal with the initial conditions by estimating them. In this paper we discuss how to incorporate the estimation of the initial conditions in the context of kernel-based system identification. We discuss three possible approaches to the problem. First, we propose a method that incorporates the unknown initial conditions as parameters, to be estimated along with the kernel hyperparameters. Then, assuming that the input is an autoregressive–moving-average (ARMA) stationary process, we propose to estimate the initial conditions using the available samples of the input, thus designing a minimum variance estimate of the initial conditions from the input samples. Finally, we design a mixed maximum a posteriori–marginal likelihood (MAP–ML) estimator (see [yeredor2000joint]) that effectively exploits information from both input and output data. We solve the optimization problems using novel iterative schemes based on the expectation-maximization (EM) method [dempster1977maximum], similar to the technique used in our previous works [risuleo2015kernel] and [bottegal2015blind], where methods for Hammerstein and blind system identification are proposed. We show that each iteration consists of a set of simple update rules which either are available in closed form or involve scalar optimization problems that can be solved using a computationally efficient grid search.

The paper is organized as follows. In Section 2, we formulate the problem of system identification with uncertainty on the initial conditions. In Section 3, we provide a short review of kernel-based system identification. In Section 4, we propose the initial-conditions estimation strategies and the related system identification algorithms. In Section 5, we show the results of numerical experiments, in which the discussed methods are compared with standard techniques used to deal with unknown initial conditions. In Section 6, we summarize the work and conclude the paper.

## 2 Problem formulation

We consider the output error model of the form

$$y_t = \sum_{k=0}^{\infty} g_k u_{t-k} + v_t, \tag{1}$$

where $\{g_k\}_{k=0}^{\infty}$ is the impulse response of a linear time-invariant system. For notational convenience, we assume there are no delays in the system ($g_0 \neq 0$). We approximate the impulse response by considering its first $n$ samples $g_0,\ldots,g_{n-1}$, where $n$ is chosen large enough to capture the system dynamics. The system is driven by the input $u_t$, and the measurements of the output are corrupted by the process $v_t$, which is zero-mean white Gaussian noise with variance $\sigma^2$.

Given a set of $N$ measurements, denoted by $\{u_t\}_{t=0}^{N-1}$ and $\{y_t\}_{t=0}^{N-1}$, we are interested in estimating the first $n$ samples of the impulse response. To this end, we formulate the system identification problem as the linear regression problem

$$y = Ug + v, \tag{2}$$

where we have introduced the following vector/matrix notation

$$y := \begin{bmatrix} y_0 \\ \vdots \\ y_{N-1} \end{bmatrix}, \qquad g := \begin{bmatrix} g_0 \\ \vdots \\ g_{n-1} \end{bmatrix}, \qquad v := \begin{bmatrix} v_0 \\ \vdots \\ v_{N-1} \end{bmatrix},$$

$$U = \begin{bmatrix} u_0 & u_{-1} & u_{-2} & \cdots & u_{-n+1} \\ u_1 & u_0 & u_{-1} & \cdots & u_{-n+2} \\ u_2 & u_1 & u_0 & \cdots & u_{-n+3} \\ \vdots & \vdots & \vdots & & \vdots \\ u_{N-1} & u_{N-2} & u_{N-3} & \cdots & u_{N-n} \end{bmatrix}. \tag{3}$$

The matrix $U$ contains the samples $u_{-n+1},\ldots,u_{-1}$, which we call initial conditions, that are unavailable. Common ways to overcome this problem are, for instance:

• Discard the first $n-1$ collected samples of $y$. However, if $N$ is not much larger than $n$, this entails a considerable loss of information.

• Assume that the system is at rest before the experiment is performed (i.e., $u_t = 0$ for $t < 0$). This assumption might be too restrictive or unrealistic.
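To make the role of the initial conditions concrete, the regression matrix (3) can be assembled as in the following minimal Python sketch (the function name and interface are ours); omitting the initial conditions reproduces the at-rest assumption above.

```python
import numpy as np

def regression_matrix(u_plus, n, u_minus=None):
    """Build the N x n Toeplitz matrix U of eq. (3).

    u_plus:  available input samples u_0, ..., u_{N-1}.
    u_minus: initial conditions u_{-n+1}, ..., u_{-1};
             zeros if None (the "system at rest" assumption).
    """
    N = len(u_plus)
    if u_minus is None:
        u_minus = np.zeros(n - 1)
    # Full input sequence u_{-n+1}, ..., u_{N-1}; entry i holds u_{i-n+1}
    u_full = np.concatenate([u_minus, u_plus])
    U = np.empty((N, n))
    for t in range(N):
        # Row t contains u_t, u_{t-1}, ..., u_{t-n+1}
        U[t, :] = u_full[t:t + n][::-1]
    return U
```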

In this paper, our aim is to study how to exploit the available information to estimate the initial conditions, in order to improve the identification performance. Specifically, we will present three estimators that make different use of the available information.

## 3 Kernel-based system identification

In this section we briefly review the kernel-based approach introduced in [pillonetto2010new][pillonetto2011prediction]. Exploiting the Bayesian interpretation of kernel-based methods [wahba1990spline], we model the unknown impulse response as a Gaussian random process, namely

$$g \sim \mathcal{N}(0, \lambda K_\beta). \tag{4}$$

We parameterize the covariance matrix (the kernel) with the hyperparameters $\lambda$ and $\beta$. The structure of the kernel determines the properties of the realizations of (4); its choice is therefore of paramount importance. An effective kernel for system-identification purposes is the stable spline kernel [pillonetto2010new][pillonetto2011kernel]. In particular, in this paper we use the first-order stable spline kernel (or TC kernel in [chen2012estimation]), which is defined as

$$\{K_\beta\}_{i,j} := \beta^{\max(i,j)}, \tag{5}$$

where $\beta$ is a scalar in the interval $[0,\,1)$. The role of this hyperparameter is to regulate the rate of exponential decay of the impulse responses drawn from (4). The hyperparameter $\lambda \geq 0$ is a scaling factor that regulates the amplitude of the realizations of (4).
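The kernel (5) is straightforward to construct numerically. The sketch below (Python; function and variable names are ours) builds $K_\beta$ and draws one impulse response from the prior (4):

```python
import numpy as np

def tc_kernel(n, beta):
    """First-order stable spline (TC) kernel of eq. (5): {K_beta}_{ij} = beta^max(i,j)."""
    idx = np.arange(1, n + 1)
    return beta ** np.maximum.outer(idx, idx)

# A realization of the prior (4): g ~ N(0, lambda * K_beta).
# Larger beta gives slower exponential decay; larger lambda, larger amplitude.
rng = np.random.default_rng(0)
g = rng.multivariate_normal(np.zeros(50), 2.0 * tc_kernel(50, 0.9))
```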

We collect the hyperparameters into the vector

$$\rho := \begin{bmatrix} \lambda & \beta \end{bmatrix}^T \tag{6}$$

and introduce the following notation:

$$u := \begin{bmatrix} u_- \\ u_+ \end{bmatrix}, \qquad u_- := \begin{bmatrix} u_{-n+1} \\ \vdots \\ u_{-1} \end{bmatrix}, \qquad u_+ := \begin{bmatrix} u_0 \\ \vdots \\ u_{N-1} \end{bmatrix},$$

where $u_-$ contains the unknown initial conditions. Since we have assumed a Gaussian distribution for the noise, the joint description of $y$ and $g$ is Gaussian, parameterized by $\rho$ and $u_-$. Therefore, we can write

$$p\!\left(\begin{bmatrix} y \\ g \end{bmatrix}; \rho, u_-\right) \sim \mathcal{N}\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \Sigma_y & \Sigma_{yg} \\ \Sigma_{gy} & \lambda K_\beta \end{bmatrix}\right), \tag{7}$$

where $\Sigma_y = U \lambda K_\beta U^T + \sigma^2 I$ and $\Sigma_{yg} = \Sigma_{gy}^T = U \lambda K_\beta$. It follows that the posterior distribution of $g$ given $y$ is Gaussian, namely

$$p(g|y; \rho, u_-) = \mathcal{N}(\hat g, \Sigma_{g|y}), \tag{8}$$

where

$$\hat g = \lambda K_\beta U^T \Sigma_y^{-1} y, \qquad \Sigma_{g|y} = \lambda K_\beta - \lambda K_\beta U^T \Sigma_y^{-1} U \lambda K_\beta. \tag{9}$$

Equation (8) implies that the minimum variance estimator of (in the Bayesian sense, see [anderson2012optimal]) is

$$\hat g = \mathbb{E}\left[g|y; \rho, u_-\right]. \tag{10}$$

The estimate (10) depends on the hyperparameter vector $\rho$ and on the initial conditions. These quantities need to be estimated from data. In the next section we focus our attention on the estimation of the kernel hyperparameters and the initial conditions, describing different strategies to obtain these quantities.
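For fixed hyperparameters, noise variance, and initial conditions, the estimator (10) reduces to standard Gaussian conditioning. A minimal sketch (Python; the function name is ours, and the TC kernel of (5) is inlined):

```python
import numpy as np

def posterior_estimate(y, U, lam, beta, sigma2):
    """Posterior mean and covariance of g given y, as in (8)-(10)."""
    n = U.shape[1]
    idx = np.arange(1, n + 1)
    P = lam * beta ** np.maximum.outer(idx, idx)      # prior covariance lam * K_beta
    Sigma_y = U @ P @ U.T + sigma2 * np.eye(len(y))   # covariance of y in (7)
    g_hat = P @ U.T @ np.linalg.solve(Sigma_y, y)     # posterior mean, eq. (10)
    Sigma_post = P - P @ U.T @ np.linalg.solve(Sigma_y, U @ P)
    return g_hat, Sigma_post
```

With small noise and an informative input, the posterior mean approaches the true impulse response.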

###### Remark 1.

The estimator (10) also depends on the noise variance $\sigma^2$. In this work, we assume that this parameter is known. It can, for instance, be estimated by fitting a least-squares estimate of the system and then computing the sample variance of the residuals.∎

## 4 Estimation of initial conditions and hyperparameters

In most works on kernel-based system identification (see e.g. [pillonetto2014kernel] for a survey), the authors adopt an empirical-Bayes approach to estimate the hyperparameters that define the kernel. This amounts to maximizing the marginal likelihood (ML) of the output, found by integrating $g$ out of (7).

In the standard case, that is, when $u_-$ is assumed to be known, the ML estimator of the hyperparameters corresponds to

$$\hat\rho = \arg\max_{\rho}\, p(y; \rho, u_-). \tag{11}$$

We start from (11) to design new estimators for the initial conditions and the kernel hyperparameters.

### 4.1 Model-less estimate

The most straightforward generalization of (11) is to include the initial conditions among the ML parameters. The initial conditions become unknown quantities that parameterize the impulse response estimator. The ML criterion then becomes

$$\hat\rho, \hat u_- = \arg\max_{\rho,\, u_-}\, p(y; \rho, u_-), \tag{12}$$

where the maximization is carried out over the unknown initial conditions as well. This problem is nonconvex and possibly high dimensional, as the number of initial conditions to be estimated is equal to the number of impulse response samples. To overcome this difficulty, we devise a strategy based on the expectation-maximization method that yields a solution to (12) by iterating simple updates. To this end, suppose we have estimates $\hat\rho^{(k)}$ and $\hat u_-^{(k)}$ of the unknown quantities; we calculate the current estimate $\hat g^{(k)}$ of the impulse response, as well as its posterior covariance $\hat\Sigma_{g|y}^{(k)}$, from (9). Define the matrix

$$\hat S^{(k)} = \hat\Sigma^{(k)}_{g|y} + \hat g^{(k)} \hat g^{(k)T}, \tag{13}$$

which is the second moment of the current estimated impulse response. We introduce the discrete-derivator matrix

$$\Delta_{ij} = \delta_{i,j} - \delta_{i+1,j}, \tag{14}$$

and we calculate the second moment of the derivative of the estimated impulse response at iteration $k$, denoted $\hat D^{(k)}$, given by

$$\hat D^{(k)} := \Delta \hat S^{(k)} \Delta^T. \tag{15}$$

The Toeplitz matrix of the input samples can be split into two parts, namely $U = U_+ + U_-$, where $U_+$ is fully determined by the available samples and $U_-$ is composed of the unknown initial conditions. Define the matrix $R$ that satisfies the relation $\operatorname{vec}(U_-) = R u_-$, and call $\hat G^{(k)}$ the Toeplitz matrix of the estimated impulse response at the $k$th iteration. Furthermore, define

$$\hat A^{(k)} := R^T\left[\hat S^{(k)} \otimes I_N\right] R, \qquad \hat b^{(k)T} := y^T \hat G^{(k)} - \operatorname{vec}(U_+)^T\left[\hat S^{(k)} \otimes I_N\right] R. \tag{16}$$

With all the definitions in place, we can state the theorem that provides us with the iterative update of the estimates that solves (12).

###### Theorem 1.

Consider the hyperparameter estimator (12). Starting from any initial guess $\hat u_-^{(0)}$, $\hat\rho^{(0)}$ of the initial conditions and the hyperparameters, compute

$$\hat u_-^{(k+1)} = \bigl(\hat A^{(k)}\bigr)^{-1} \hat b^{(k)}, \tag{17}$$
$$\hat\beta^{(k+1)} = \arg\min_{\beta \in [0,1)} Q(\beta), \tag{18}$$
$$\hat\lambda^{(k+1)} = \frac{1}{n}\sum_{i=1}^{n} \hat d_i^{(k)}\, w_{\hat\beta^{(k+1)},i}, \tag{19}$$

with $\hat A^{(k)}$ and $\hat b^{(k)}$ defined in (16), and

$$Q(\beta) := n \log f(\beta) + \frac{n(n-1)}{2}\log\beta - \log(1-\beta), \tag{20}$$
$$f(\beta) := \sum_{i=1}^{n-1} \hat d_i^{(k)} \beta^{-i} + \hat d_n^{(k)} (1-\beta)\beta^{1-n}, \tag{21}$$

where $\hat d_i^{(k)}$ is the $i$th diagonal element of (15), and $w_{\beta,i}$ is the $i$th element of

$$w_\beta := \frac{1}{\beta - \beta^2}\begin{bmatrix} 1 & \dfrac{1}{\beta} & \cdots & \dfrac{1}{\beta^{n-2}} & \dfrac{1-\beta}{\beta^{n-1}} \end{bmatrix}, \tag{22}$$

when $\beta \neq 0$. Let $\hat\rho^{(k)} := [\hat\lambda^{(k)}\;\hat\beta^{(k)}]^T$; then the sequences $\{\hat\rho^{(k)}\}$ and $\{\hat u_-^{(k)}\}$ converge to a maximum of (12).

See Appendix. ∎

###### Remark 2.

The EM method does not guarantee convergence of the sequences to a global maximum (see [mclachlan2007algorithm] and [tseng2004analysis]). However, experiments (see Section 5) have shown that, for this particular problem, the EM method converges to the global maximum independently of how it is initialized.

Thus, we can use Theorem 1 to find a maximum of the marginal likelihood of the hyperparameters and the initial conditions; the limits of the sequences of estimates are then plugged into (10) to solve the impulse response estimation problem.
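The scalar updates (18)-(19) are cheap to implement. The sketch below (Python; names are ours) performs one hyperparameter update given the diagonal $\hat d^{(k)}$ of (15), with the minimization of $Q(\beta)$ in (20) carried out by grid search, as suggested in the text:

```python
import numpy as np

def hyperparameter_update(d):
    """One (beta, lambda) update, eqs. (18)-(22), from the diagonal d of D_hat in (15)."""
    n = len(d)

    def Q(beta):                                   # eq. (20), with f from eq. (21)
        f = np.sum(d[:n - 1] * beta ** -np.arange(1, n)) \
            + d[n - 1] * (1 - beta) * beta ** (1 - n)
        return n * np.log(f) + n * (n - 1) / 2 * np.log(beta) - np.log(1 - beta)

    grid = np.linspace(0.01, 0.99, 99)             # grid search over [0, 1)
    beta = grid[np.argmin([Q(b) for b in grid])]

    w = np.empty(n)                                # w_beta from eq. (22)
    w[:n - 1] = beta ** -np.arange(n - 1)
    w[n - 1] = (1 - beta) * beta ** (1 - n)
    w /= beta - beta ** 2
    lam = np.mean(d * w)                           # eq. (19)
    return beta, lam
```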

### 4.2 Conditional mean estimate

The model-less estimator presented in Section 4.1 estimates the initial conditions using only information present in the system output $y$. It does not rely on any model of the input signal $u_t$. To show how an available model can be used to estimate the missing initial conditions, we introduce the following assumption.

###### Assumption 1.

The input $u_t$ is a realization of a stationary Gaussian process with zero mean and known rational spectrum. Equivalently, $u_t$ is a realization of an ARMA process with known coefficients.∎

Assumption 1 implies that $u_t$ can be expressed as the output of a difference equation driven by white Gaussian noise $e_t$ with unit variance [papoulis2002probability], namely

$$u_t + d_1 u_{t-1} + \cdots + d_p u_{t-p} = c_0 e_t + \cdots + c_p e_{t-p}, \tag{23}$$

where $e_t \sim \mathcal{N}(0,1)$. Since, under Assumption 1, we can construct the probability density of the input process, a possible approach to solving (11) is to estimate the missing initial conditions from the input process itself. To this end, consider (23). If we define $D$ as the Toeplitz matrix of the coefficients $d_1,\ldots,d_p$ and $C$ as the Toeplitz matrix of the coefficients $c_0,\ldots,c_p$, then we can write

$$u = -Du + Ce, \qquad e := \begin{bmatrix} e_{-n+1} & \cdots & e_{N-1} \end{bmatrix}^T, \tag{24}$$

so that $u \sim \mathcal{N}(0, \Sigma_u)$, with

$$\Sigma_u = (I+D)^{-1} C C^T (I+D)^{-T}. \tag{25}$$

We thus have a joint probabilistic description of the initial conditions $u_-$ and the available input samples $u_+$. We can partition $\Sigma_u$ into four blocks according to the sizes of $u_-$ and $u_+$:

$$\Sigma_u = \begin{bmatrix} \Sigma_- & \Sigma_{-+} \\ \Sigma_{+-} & \Sigma_+ \end{bmatrix}.$$

It follows (see [anderson2012optimal]) that the posterior distribution of the unavailable data is $\mathcal{N}(u_{-|+}, \Sigma_{-|+})$, where

$$u_{-|+} = \Sigma_{-+}\Sigma_+^{-1} u_+, \qquad \Sigma_{-|+} = \Sigma_- - \Sigma_{-+}\Sigma_+^{-1}\Sigma_{+-}. \tag{26}$$

So we can find the minimum variance estimate of $u_-$ as the conditional mean, namely $\hat u_- = u_{-|+}$.
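For a concrete illustration, the estimate (26) can be computed directly from the ARMA coefficients in (23). The sketch below (Python; names are ours) builds the finite-dimensional covariance (25) under the convention of (24), i.e., treating $e$ as the noise driving the finite window of samples:

```python
import numpy as np

def conditional_ic(u_plus, n, d, c):
    """Conditional-mean estimate of u_- from u_+, eqs. (24)-(26).

    d: AR coefficients d_1, ..., d_p of (23) (without the leading 1).
    c: MA coefficients c_0, ..., c_p of (23).
    """
    N = len(u_plus)
    m = (n - 1) + N                                  # length of u = [u_-; u_+]

    def lower_toeplitz(first_col):
        col = np.zeros(m)
        col[:len(first_col)] = first_col
        T = np.zeros((m, m))
        for j in range(m):
            T[j:, j] = col[:m - j]
        return T

    A = lower_toeplitz(np.concatenate(([1.0], d)))   # I + D of eq. (24)
    C = lower_toeplitz(np.asarray(c, dtype=float))
    AiC = np.linalg.solve(A, C)
    Sigma_u = AiC @ AiC.T                            # eq. (25)
    Sigma_mp = Sigma_u[:n - 1, n - 1:]               # Sigma_{-+}
    Sigma_p = Sigma_u[n - 1:, n - 1:]                # Sigma_{+}
    return Sigma_mp @ np.linalg.solve(Sigma_p, u_plus)   # eq. (26)
```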

Having an estimate of the initial conditions, we need to find the hyperparameters that define the kernel. In this case, empirical Bayes amounts to solving the ML problem

$$\hat\rho = \arg\max_{\rho}\, p(y; \rho, \hat u_-), \tag{27}$$

where the unknown initial conditions have been replaced by their conditional mean. The following theorem states how to solve the maximization using the EM method.

###### Theorem 2.

Consider the hyperparameter estimator (27). Starting from an initial guess $\hat\rho^{(0)}$ of the hyperparameters, compute

$$\hat\beta^{(k+1)} = \arg\min_{\beta \in [0,1)} Q(\beta), \tag{28}$$
$$\hat\lambda^{(k+1)} = \frac{1}{n}\sum_{i=1}^{n} \hat d_i^{(k)}\, w_{\hat\beta^{(k+1)},i}, \tag{29}$$

with $Q(\beta)$, $\hat d_i^{(k)}$, and $w_\beta$ defined as in Theorem 1. Let $\hat\rho^{(k)} := [\hat\lambda^{(k)}\;\hat\beta^{(k)}]^T$; then the sequence $\{\hat\rho^{(k)}\}$ converges to a maximum of (27).

See Appendix. ∎

###### Remark 3.

The updates (28) and (29) require the evaluation of (15) at each iteration. In this case the estimate of the impulse response is given by (10), where $u_-$ is replaced by its conditional mean.∎

### 4.3 Joint input-output estimate

The conditional mean estimator presented in Section 4.2 exploits the structure of the input to estimate the missing samples. The model-less estimator, instead, uses information contained in the output samples. In this section we show how to merge these two information sources, defining a joint input-output estimator of the initial conditions.

We use Assumption 1 to account for the statistical properties of $u_t$. We propose the following mixed MAP–ML estimator

$$\hat\rho, \hat u_- = \arg\max_{\rho,\, u_-}\, p(y|u_-, u_+; \rho)\, p(u_-|u_+), \tag{30}$$

where we have highlighted the dependence on the known input sequence $u_+$. A key role is played by the term $p(u_-|u_+)$: it acts as a prior distribution for the unknown values of $u_-$ and puts weight on the values that better agree with the observed data $u_+$.

In this case too, the solution can be found with an iterative procedure based on the EM method.

###### Theorem 3.

Consider the hyperparameter estimator (30). Starting from an initial guess $\hat u_-^{(0)}$, $\hat\rho^{(0)}$ of the initial conditions and the hyperparameters, compute

$$\hat u_-^{(k+1)} = \left(\frac{\hat A^{(k)}}{\sigma^2} + \Sigma_{-|+}^{-1}\right)^{-1}\left(\frac{\hat b^{(k)}}{\sigma^2} + \Sigma_{-|+}^{-1} u_{-|+}\right), \tag{31}$$
$$\hat\beta^{(k+1)} = \arg\min_{\beta \in [0,1)} Q(\beta), \tag{32}$$
$$\hat\lambda^{(k+1)} = \frac{1}{n}\sum_{i=1}^{n} \hat d_i^{(k)}\, w_{\hat\beta^{(k+1)},i}, \tag{33}$$

with $\hat A^{(k)}$ and $\hat b^{(k)}$ from (16), and with $Q(\beta)$, $\hat d_i^{(k)}$, and $w_\beta$ defined in Theorem 1. Let $\hat\rho^{(k)} := [\hat\lambda^{(k)}\;\hat\beta^{(k)}]^T$; then the sequences $\{\hat\rho^{(k)}\}$ and $\{\hat u_-^{(k)}\}$ converge to a maximum of (30).

See Appendix. ∎

###### Remark 4.

This estimator incorporates the prior information about the initial conditions through the mean $u_{-|+}$ and the covariance matrix $\Sigma_{-|+}$. If we suppose that we can manipulate $\Sigma_{-|+}$, we can see this estimator as a more general one, containing the model-less and conditional-mean estimators as limit cases. In fact, setting $\Sigma_{-|+}^{-1} = 0$, we get the model-less estimator. Conversely, setting $\Sigma_{-|+} = 0$, we obtain the conditional-mean estimator, as (31) would yield a degenerate iteration where all the updates are equal to $u_{-|+}$. We point out, however, that the model-less estimator does not rely on any assumption on the input model, whereas the joint input-output estimator requires that the input be a Gaussian process with known pdf.

## 5 Numerical experiments

### 5.1 Experiment setup

To compare the proposed methods, we perform five numerical experiments, each one consisting of 200 Monte Carlo simulations, with sample sizes 150, 200, 250, 300, and 400. At each Monte Carlo run, we generate a dynamic system of order 40. The system is such that its zeros are constrained within the circle of radius 0.99 in the complex plane, while the magnitude of the poles is no larger than 0.95. The impulse response length is 100 samples. The input is obtained by filtering white noise with unit variance through an 8th-order ARMA filter of the form (23). The coefficients of the filter are randomly chosen at each Monte Carlo run, such that the poles of the filter are constrained within the annulus of radii 0.8 and 0.95 in the complex plane.

Random trajectories of input and noise are generated at each run. In particular, the noise variance is such that the ratio between the variance of the noiseless output of the system and the noise variance is equal to 20.

The following estimators are compared during the experiments.

• KB-IC-Zeros: This method does not attempt any estimation of the initial conditions. It sets their value to 0 (which, when Assumption 1 holds, corresponds to the a-priori mean of the vector $u_-$). The kernel hyperparameters are obtained by solving (11) with $u_- = 0$.

• KB-Trunc: This method also avoids the estimation of the initial conditions, by discarding the first $n-1$ output samples, which depend on the unknown vector $u_-$. The hyperparameters are obtained by solving (11) using the truncated data.

• KB-IC-ModLess: This is the model-less kernel-based estimator presented in Section 4.1.

• KB-IC-Mean: This is the conditional mean kernel-based estimator presented in Section 4.2.

• KB-IC-Joint: This is the joint input-output kernel-based estimator presented in Section 4.3.

• KB-IC-Oracle: This estimator has access to the true vector $u_-$ and estimates the kernel hyperparameters using (11).

The performance of the estimators is evaluated by means of the fitting score, computed as

$$\mathrm{FIT}_i = 100\left(1 - \frac{\|g_i - \hat g_i\|_2}{\|g_i - \bar g_i\|_2}\right), \tag{34}$$

where $g_i$ is the impulse response generated at the $i$th run, $\bar g_i$ its mean, and $\hat g_i$ the estimate computed by the tested method.
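The score (34) is computed per run; a one-line sketch (the function name is ours):

```python
import numpy as np

def fit_score(g, g_hat):
    """Fitting score of eq. (34), in percent; 100 means perfect reconstruction."""
    return 100.0 * (1.0 - np.linalg.norm(g - g_hat) / np.linalg.norm(g - np.mean(g)))
```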

### 5.2 Results

Table 1 shows the average fit (in percent) of the impulse response over the Monte Carlo experiments. We can see that, for short data sequences, the amount of information discarded by the estimator KB-Trunc degrades its performance with respect to the other estimators. The estimator KB-IC-Zeros performs better, but suffers from the effects of the wrong assumption that the system was at rest before the experiment was performed. From these results, we see that the estimation of the initial conditions has a positive effect on the accuracy of the estimated impulse response. For larger data records, the performance of the estimators KB-IC-Mean and KB-IC-ModLess improves, as more samples become available.

When the available data becomes larger, all the methods perform well, with fits that are in the neighborhood of the fit of the oracle.

The positive performance of KB-IC-Mean indicates that the predictability of the input can be exploited to improve the estimates, and that model-based approaches to initial-condition estimation outperform model-less methods (if the input model is known). The further improvement of KB-IC-Joint indicates that the output measurements carry additional information about the unobserved initial conditions, information that is not contained in the input process itself.

## 6 Discussion

We have proposed three new methods for estimating the initial conditions of a system when using kernel-based methods. Assuming that the input is a stationary ARMA process with known spectrum, we have designed mixed MAP–ML criteria that aim at jointly estimating the hyperparameters of the kernel and the initial conditions of the system. To solve the related optimization problems, we have proposed novel EM-based iterative schemes. Each scheme consists of a sequence of simple update rules, given by unconstrained quadratic problems or scalar optimization problems. Numerical experiments have shown that the proposed methods outperform other standard methods, such as truncation or zero initial conditions.

The methods presented here estimate $n-1$ initial conditions (where $n$ is the length of the FIR approximation of the true system), since no information on the order of the system is given. Assuming that the system order is known and equal to, say, $m$, the number of initial conditions to be estimated would boil down to $m$. However, there would also be unknown transient responses which need to be identified. These transients would be characterized by impulse responses with the same poles as the overall system impulse response, but with different zeros. How to design a kernel correlating these transient responses with the system impulse response is still an open problem.

## Appendix: Proofs

### 6.1 Theorem 1

Consider the ML criterion (12). To apply the EM method, we consider the complete log-likelihood

$$L(y,g) = -\frac{1}{2\sigma^2}\|y - Ug\|^2 - \frac{N}{2}\log\sigma^2 - \frac{1}{2} g^T (\lambda K_\beta)^{-1} g - \frac{1}{2}\log\det(\lambda K_\beta),$$

where we have introduced $g$ as a latent variable. Suppose that we have computed the estimates $\hat\rho^{(k)}$ of the hyperparameters and $\hat u_-^{(k)}$ of the initial conditions. We define the function

$$Q(\rho, u_-; \hat\rho^{(k)}, \hat u_-^{(k)}) := \mathbb{E}\left[L(y,g)\right], \tag{35}$$

where the expectation is taken with respect to the conditional density $p(g|y; \hat\rho^{(k)}, \hat u_-^{(k)})$, defined in (8). We obtain (neglecting terms independent of the optimization variables)

$$Q(\rho, u_-; \hat\rho^{(k)}, \hat u_-^{(k)}) = Q_1(u_-; \hat\rho^{(k)}, \hat u_-^{(k)}) + Q_2(\rho; \hat\rho^{(k)}, \hat u_-^{(k)}), \tag{36}$$

where

$$Q_1(u_-; \hat\rho^{(k)}, \hat u_-^{(k)}) = -\frac{1}{2\sigma^2}\left(\operatorname{tr}\{U^T U \hat S^{(k)}\} - 2\, y^T U \hat g^{(k)}\right), \tag{37}$$
$$Q_2(\rho; \hat\rho^{(k)}, \hat u_-^{(k)}) = -\frac{1}{2}\operatorname{tr}\{(\lambda K_\beta)^{-1}\hat S^{(k)}\} - \frac{1}{2}\log\det(\lambda K_\beta), \tag{38}$$

where $\hat S^{(k)}$ is defined in (13). Define the new parameters from

$$\hat\rho^{(k+1)}, \hat u_-^{(k+1)} = \arg\max_{\rho,\, u_-}\, Q(\rho, u_-; \hat\rho^{(k)}, \hat u_-^{(k)}). \tag{39}$$

By iterating between (35) and (39), we obtain a sequence of estimates that converges to a maximum of (12) (see, e.g., [mclachlan2007algorithm] for details). Using (36), (39) splits into the two maximization problems

$$\hat u_-^{(k+1)} = \arg\max_{u_-}\, Q_1(u_-; \hat\rho^{(k)}, \hat u_-^{(k)}), \tag{40}$$
$$\hat\rho^{(k+1)} = \arg\max_{\rho}\, Q_2(\rho; \hat\rho^{(k)}, \hat u_-^{(k)}). \tag{41}$$

Consider the maximization of (37) with respect to $u_-$. Using the matrix $R$, we have

$$\operatorname{tr}\{U_-^T U_- \hat S^{(k)}\} = u_-^T R^T\left[\hat S^{(k)} \otimes I_N\right] R\, u_-, \tag{42}$$
$$\operatorname{tr}\{U_-^T U_+ \hat S^{(k)}\} = u_-^T R^T\left[\hat S^{(k)} \otimes I_N\right] \operatorname{vec}(U_+), \tag{43}$$

where $\otimes$ denotes the Kronecker product. We now collect the terms in $u_-$ and obtain

$$Q_1(u_-; \hat\rho^{(k)}, \hat u_-^{(k)}) = -\frac{1}{2\sigma^2} u_-^T \hat A^{(k)} u_- + \frac{1}{\sigma^2} u_-^T \hat b^{(k)}, \tag{44}$$

with $\hat A^{(k)}$ and $\hat b^{(k)}$ defined in (16). Hence, (40) is an unconstrained quadratic optimization problem, whose solution is given by (17).

Consider now the maximization of (38) with respect to $\rho$. We can calculate the derivative of $Q_2$ with respect to $\lambda$, obtaining

$$\frac{\partial Q_2}{\partial \lambda} = \frac{1}{2\lambda^2}\operatorname{tr}\{K_\beta^{-1}\hat S^{(k)}\} - \frac{n}{2\lambda}, \tag{45}$$

which is equal to zero for

$$\lambda^* = \frac{1}{n}\operatorname{tr}\{K_\beta^{-1}\hat S^{(k)}\}. \tag{46}$$

We thus have an expression of the optimal value of $\lambda$ as a function of $\beta$. If we insert this value into $Q_2$, we obtain

$$Q_2\!\left(\begin{bmatrix} \lambda^* \\ \beta \end{bmatrix}; \hat\rho^{(k)}, \hat u_-^{(k)}\right) = -\frac{n}{2}\log\operatorname{tr}\{K_\beta^{-1}\hat S^{(k)}\} - \frac{1}{2}\log\det K_\beta + k_1, \tag{47}$$

where $k_1$ is a constant. We now rewrite the first-order stable spline kernel using the factorization (see [carli2014maximum])

$$K_\beta = \Delta^{-1} W_\beta \Delta^{-T}, \tag{48}$$

where $\Delta$ is defined in (14) and

$$W_\beta := (\beta - \beta^2)\operatorname{diag}\left\{1, \beta, \ldots, \beta^{n-2}, \frac{\beta^{n-1}}{1-\beta}\right\}. \tag{49}$$

From (15), we find

$$Q_2\!\left(\begin{bmatrix} \lambda^* \\ \beta \end{bmatrix}; \hat\rho^{(k)}, \hat u_-^{(k)}\right) = -\frac{n}{2}\log\left(\sum_{i=1}^{n} \hat d_i^{(k)} w_{\beta,ii}^{-1}\right) - \frac{1}{2}\sum_{i=1}^{n}\log w_{\beta,ii} + k_2, \tag{50}$$

where $\hat d_i^{(k)}$ and $w_{\beta,ii}$ are the $i$th diagonal elements of $\hat D^{(k)}$ and of $W_\beta$, respectively, and $k_2$ is a constant. Defining $f(\beta)$ as in (21), maximizing (50) over $\beta$ is equivalent to minimizing $Q(\beta)$ in (20). Using a similar reasoning, we can rewrite (46) as (19). ∎
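As a numerical sanity check, the factorization (48)-(49) can be verified directly (a standalone sketch; here $\Delta$ is taken with ones on the diagonal and $-1$ on the first superdiagonal):

```python
import numpy as np

n, beta = 6, 0.7
idx = np.arange(1, n + 1)
K = beta ** np.maximum.outer(idx, idx)                 # TC kernel, eq. (5)
Delta = np.eye(n) - np.eye(n, k=1)                     # difference matrix
w = (beta - beta ** 2) * np.concatenate(
    (beta ** np.arange(n - 1), [beta ** (n - 1) / (1 - beta)]))
W = np.diag(w)                                         # W_beta, eq. (49)
Dinv = np.linalg.inv(Delta)
assert np.allclose(K, Dinv @ W @ Dinv.T)               # eq. (48)
```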

### 6.2 Theorem 2

Consider the ML criterion (27). The proof follows the same arguments as the proof of Theorem 1, with the optimization carried out over $\rho$ only and with the initial conditions fixed at $\hat u_- = u_{-|+}$.∎

### 6.3 Theorem 3

Consider the ML criterion (30) and the complete-data log-likelihood

$$L_2(y,g) := \log p(y, g, u_-; \rho). \tag{51}$$

Given any estimates $\hat\rho^{(k)}$ and $\hat u_-^{(k)}$, take the expectation with respect to $p(g|y; \hat\rho^{(k)}, \hat u_-^{(k)})$. We obtain

$$\mathbb{E}\left[L_2(y,g)\right] = Q_1(u_-; \hat\rho^{(k)}, \hat u_-^{(k)}) + Q_2(\rho; \hat\rho^{(k)}, \hat u_-^{(k)}) - \frac{1}{2}(u_- - u_{-|+})^T \Sigma_{-|+}^{-1} (u_- - u_{-|+}), \tag{52}$$

with $Q_1$ and $Q_2$ defined in (37) and (38). Collecting the terms in $u_-$ and using (44), we obtain an unconstrained quadratic optimization problem in $u_-$, whose solution gives (31). The optimization in $\rho$ follows the same procedure as in Theorem 1 and gives (32) and (33). ∎
