Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy

Hunter Gabbard (corresponding author), Chris Messenger, Ik Siong Heng, Francesco Tonolini, Roderick Murray-Smith
SUPA, School of Physics and Astronomy,
University of Glasgow,
Glasgow G12 8QQ, United Kingdom

School of Computing Science,
University of Glasgow,
Glasgow G12 8QQ, United Kingdom
September 16, 2019
preprint: APS/123-QED

Gravitational wave (GW) detection is now commonplace Abbott and others (2016, 2016, 2017) and as the sensitivity of the global network of GW detectors improves, we will observe many transient GW events per year Veitch et al. (2014). For each of these events the current methods used to estimate their source parameters employ optimally sensitive Searle et al. (2009) but computationally costly Bayesian inference approaches Veitch et al. (2014). For binary black hole (BBH) signals, existing complete GW analyses can take of order hours to days to complete Veitch et al. (2014), and for binary neutron star (BNS) signals this increases by at least an order of magnitude Abbott and others (2017). It is for this latter class of signal (and neutron star black hole (NSBH) systems) that counterpart electromagnetic (EM) signatures are expected, containing prompt emission on timescales of 1 second – 1 minute. The current fastest method for alerting EM follow-up observers, Bayestar Singer and Price (2016), can provide estimates on a timescale of order a minute, but only on a limited range of key source parameters. Here we show that a conditional variational autoencoder (CVAE) Tonolini et al. (2019); Pagnoni et al. (2018) pre-trained on BBH signals, and without being given any precomputed posteriors, can return Bayesian posterior probability estimates on source parameters. The training procedure need only be performed once for a given prior parameter space (or signal class) and the resulting trained machine can then generate samples describing the posterior distribution orders of magnitude faster than existing techniques.

The problem of detecting GWs has largely been solved through the use of template based matched-filtering Usman et al. (2016), a process recently replicated using machine learning techniques George and Huerta (2018); Gabbard et al. (2018); Gebhard et al. (2017). Once a GW has been identified through this process, Bayesian inference, known to be the optimal approach Searle et al. (2009), is used to extract information about the source parameters of the detected GW signal.

In the standard Bayesian GW inference approach, we assume a signal model and a noise model, each of which may have unknown parameters that we are either interested in inferring or prefer to marginalise away. Each parameter is given an astrophysically motivated prior probability distribution and in the GW case, we typically assume a Gaussian additive noise model (in reality, the data is not truly Gaussian). Given a noisy GW waveform, we would like to find an optimal procedure for inferring some set of the unknown GW parameters. Such a procedure should be able to give us an accurate estimate of the parameters of our observed signal, whilst accounting for the uncertainty arising from the noise in the data.

According to Bayes’ Theorem, a posterior probability distribution on a set of parameters, conditional on the measured data, can be represented as

$$p(x|y) \propto p(y|x)\,p(x),$$

where $x$ are the parameters, $y$ is the observed data, $p(x|y)$ is the posterior, $p(y|x)$ is the likelihood, and $p(x)$ is the prior on the parameters. The constant of proportionality, which we omit here, is $1/p(y)$: the probability of our data $p(y)$ is known as the Bayesian evidence or the marginal likelihood. We typically ignore $p(y)$ since it is a constant and for parameter estimation purposes we are only interested in the shape of the posterior.
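As a toy illustration of the relation above (not part of the letter's analysis), the sketch below evaluates an unnormalised posterior on a grid for a single amplitude parameter of a known signal shape in Gaussian noise; all names and the signal model are illustrative placeholders.

```python
import numpy as np

# Toy example: infer the amplitude A of a known signal shape buried in
# Gaussian noise, on a grid. Setup is hypothetical, for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
template = np.sin(2 * np.pi * 8 * t)          # known signal shape
true_A, sigma_n = 1.5, 1.0
y = true_A * template + rng.normal(0, sigma_n, t.size)   # observed data

A_grid = np.linspace(0, 3, 301)               # uniform prior range on A
# Gaussian log-likelihood for each grid value of A
log_like = np.array([-0.5 * np.sum((y - A * template) ** 2) / sigma_n ** 2
                     for A in A_grid])
log_post = log_like                           # flat prior adds only a constant
post = np.exp(log_post - log_post.max())      # shape only; evidence ignored
post /= np.trapz(post, A_grid)                # normalise numerically

A_map = A_grid[np.argmax(post)]               # posterior mode
```

Note that the evidence $p(y)$ never needs to be computed: exponentiating the shifted log-posterior and normalising numerically recovers the posterior shape, exactly as described in the text.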

Due to the size of the parameter space typically encountered in GW parameter estimation and the volume of data analysed, we must stochastically sample the parameter space in order to estimate the posterior. Sampling is done using a variety of techniques including Nested Sampling Skilling (2006); Veitch et al. (2017); Speagle (2019) and Markov chain Monte Carlo (MCMC) methods Foreman-Mackey et al. (2013); Vousden et al. (2015). The primary software tools used by the advanced Laser Interferometer Gravitational-wave Observatory (LIGO) parameter estimation analysis are LALInference and Bilby Veitch et al. (2014); Ashton et al. (2018), which offer multiple sampling methods.
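To make the idea of stochastic posterior sampling concrete, here is a minimal Metropolis-Hastings sketch for a one-parameter toy problem. This is purely illustrative: the production samplers named above (nested sampling, ensemble MCMC) are far more sophisticated, and every name below is a placeholder.

```python
import numpy as np

# Minimal Metropolis-Hastings sampler for the mean of Gaussian data.
# Illustrative only; LALInference/Bilby use much more advanced samplers.
rng = np.random.default_rng(1)
data = rng.normal(1.5, 1.0, 200)              # toy observed data, true mean 1.5

def log_posterior(mu):
    if not (-5.0 <= mu <= 5.0):               # uniform prior support
        return -np.inf
    return -0.5 * np.sum((data - mu) ** 2)    # Gaussian log-likelihood

samples, mu = [], 0.0
logp = log_posterior(mu)
for _ in range(5000):
    mu_prop = mu + rng.normal(0, 0.2)         # symmetric random-walk proposal
    logp_prop = log_posterior(mu_prop)
    if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject step
        mu, logp = mu_prop, logp_prop
    samples.append(mu)
posterior_samples = np.array(samples[1000:])  # drop burn-in
```

The chain's retained states are draws from the posterior; their mean and spread estimate the posterior mean and width, which is exactly the representation (samples rather than a closed-form density) that the benchmark samplers in this letter produce.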

Machine learning has featured prominently in many areas of GW research over the last few years. These techniques have been shown to be particularly promising in signal detection George and Huerta (2018); Gabbard et al. (2018); Gebhard et al. (2019), glitch classification George et al. (2017); Zevin et al. (2017) and earthquake prediction Coughlin et al. (2017). We also highlight a recent development in GW parameter estimation (published in parallel with, and independently of, this work) where one- and two-dimensional marginalised Bayesian posteriors are produced rapidly using neural networks. This is done without the need to compute the likelihood or posterior during training, which is also a characteristic of the approach described in this letter.

Recently, a type of neural network known as a CVAE was shown to perform exceptionally well when applied to computational imaging inference Tonolini et al. (2019); Sohn et al. (2015), text-to-image inference Yan et al. (2015), high-resolution synthetic image generation Nguyen et al. (2016) and the fitting of incomplete heterogeneous data Nazabal et al. (2018). It is this type of machine learning network that we apply in the GW case to accurately approximate the Bayesian posterior $p(x|y)$.

The construction of a CVAE begins with the definition of a quantity to be minimised (referred to as a cost function). We can relate this aim to that of approximating the posterior distribution by minimising the cross-entropy, defined as

$$H(p, r_\theta) = -\int dx\; p(x|y)\log r_\theta(x|y),$$

between the true posterior $p(x|y)$ and $r_\theta(x|y)$, the parametric distribution that we will use neural networks to construct and which we aim to make equal to the true posterior. In this case $\theta$ represents a set of trainable neural network parameters. Starting from this point it is possible to derive a computable form for the cross-entropy that is reliant on a set of unknown functions that can be modelled by variational encoder and decoder neural networks. The details of the derivation are described in the methods section and in Tonolini et al. (2019). The final form of the cross-entropy loss function is given by the bound

$$H \lesssim \frac{1}{N_{\rm b}}\sum_{n=1}^{N_{\rm b}}\Big[-\log r_{\theta_2}(x_n|z_n,y_n) + {\rm KL}\big[q_\phi(z|x_n,y_n)\,\|\,r_{\theta_1}(z|y_n)\big]\Big],$$

with $z_n$ drawn from $q_\phi(z|x_n,y_n)$, and requires three fully-connected networks; two encoder networks (labelled $E_1$, $E_2$ in Fig. 1) representing the functions $r_{\theta_1}(z|y)$ and $q_\phi(z|x,y)$ respectively, and one decoder network ($D$) representing the function $r_{\theta_2}(x|z,y)$. The function ${\rm KL}[\cdot\|\cdot]$ denotes the Kullback–Leibler (KL) divergence and the variable $z$ represents locations within the latent space. This latter object is typically a lower dimensional space within which the encoder networks attempt to represent their input data. In practice, during the training procedure the various integrations that are part of the derivation are approximated by a sum over a batch of training data samples (indexed by $n$ above, with batch size $N_{\rm b}$) at each stage of training. Training is performed via a series of steps detailed in the methods section.
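The two loss components for a single training example can be sketched numerically for the diagonal-Gaussian case described in the text. The arrays below are random placeholders standing in for the encoder and decoder outputs; the variable names are illustrative and not from the authors' code.

```python
import numpy as np

# Sketch of the per-example CVAE loss, assuming all distributions are
# uncorrelated (diagonal) Gaussians. "Network outputs" are random stand-ins.
rng = np.random.default_rng(2)
n_z, n_x = 8, 4                                 # latent and physical dims

mu_q, logvar_q = rng.normal(size=n_z), rng.normal(size=n_z)    # encoder (x,y)
mu_r1, logvar_r1 = rng.normal(size=n_z), rng.normal(size=n_z)  # encoder (y only)

# Reparameterised latent draw z ~ q(z|x,y)
z = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=n_z)

mu_r2, logvar_r2 = rng.normal(size=n_x), rng.normal(size=n_x)  # decoder output
x_true = rng.normal(size=n_x)                   # true physical parameters

# Reconstruction term: -log of a diagonal Gaussian evaluated at x_true
recon = 0.5 * np.sum(logvar_r2
                     + (x_true - mu_r2) ** 2 / np.exp(logvar_r2)
                     + np.log(2 * np.pi))

# Closed-form KL between the two diagonal Gaussians in the latent space
kl = 0.5 * np.sum(logvar_r1 - logvar_q
                  + (np.exp(logvar_q) + (mu_q - mu_r1) ** 2) / np.exp(logvar_r1)
                  - 1.0)

total_cost = recon + kl                         # averaged over a batch in practice
```

In a real implementation both terms would be computed with an automatic-differentiation framework so that gradients flow back through the reparameterised draw to all three networks.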

Figure 1: The configuration of the CVAE neural network. During training (left-hand side), a training set of noisy GW signals ($y$) and their corresponding true parameters ($x$) are given as input to encoder network $E_2$, while only $y$ is given to $E_1$. The KL divergence (Eq. 7) is computed between the encoder output latent space representations (defined by their means and variances), forming one component of the total cost function. Samples ($z$) from the latent space representation are generated and passed to the decoder network $D$ together with the original input data $y$. The output of the decoder describes a distribution in the physical parameter space and the ELBO cost component is computed by evaluating that distribution at the value of the original input $x$. When performed in batches this scheme allows the computation of the total cost function. After having trained the network, we test (right-hand side) using only the encoder $E_1$ and the decoder $D$ to produce samples ($x$) from the posterior $r_\theta(x|y)$.

We present results on single detector GW test BBH waveforms in simulated advanced detector noise, and compare between variants of the existing Bayesian approaches and the CVAE. Posteriors produced by the Bilby inference library Ashton et al. (2018) are used as a benchmark in order to assess the efficiency and quality of our machine learning approach against the existing methods for posterior sampling.

For the benchmark analysis we assume that 5 parameters are unknown: the component masses $m_1$ and $m_2$, the luminosity distance $d_{\rm L}$, the time of coalescence $t_0$, and the phase at coalescence $\phi_0$. For each parameter we use a uniform prior, with ranges and fixed values defined in Table 2. We use a sampling frequency of 256 Hz and a timeseries duration of 1 second. The waveform model used is IMRPhenomPv2 Khan et al. (2018) with a fixed minimum cutoff frequency. For each input test waveform we run the benchmark analysis using multiple sampling algorithms available within Bilby. For each run and sampler we extract samples from the posterior on the 5 physical parameters.

The CVAE training process used as input waveforms corresponding to parameters drawn from the same priors as assumed for the benchmark analysis. The waveforms are also of identical duration, sampling frequency, and waveform model to those used in the benchmark analysis. When each waveform is placed within a training batch it is given a unique detector noise realisation, after which the data is whitened using the same advanced detector power spectral density (PSD) Abbott et al. (2016) from which the simulated noise is generated (although we whiten the data as input to our network, the whitening is simply to scale the input to a level more suitable for neural networks and need not be performed with the true PSD). The CVAE posterior results are produced by passing our whitened noisy testing set of GW waveforms as input into the testing path of the pre-trained CVAE (Fig. 1). For each input waveform we sample until we have generated posterior samples on 4 physical parameters ($m_1$, $m_2$, $d_{\rm L}$, $t_0$). We choose to output a subset of the full 5-dimensional space to demonstrate that parameters (such as $\phi_0$ in this case) can, if desired, be marginalised out within the CVAE procedure itself, rather than after training.
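The whitening step mentioned above can be sketched as follows: transform the timeseries to the frequency domain, divide by the per-bin noise standard deviation implied by the one-sided PSD, and transform back. The flat PSD here is a stand-in for the advanced-detector design PSD used in the text, and the normalisation convention is one common choice.

```python
import numpy as np

# Sketch of whitening a 1 s, 256 Hz timeseries given a one-sided noise PSD.
# A flat placeholder PSD is used here; the paper uses the detector design PSD.
fs = 256                                        # sampling rate (Hz), as in the text
rng = np.random.default_rng(3)
data = rng.normal(size=fs)                      # 1 s of unit-variance white noise

freqs = np.fft.rfftfreq(data.size, d=1 / fs)
psd = np.full(freqs.shape, 2.0 / fs)            # one-sided PSD of this noise
psd[0] = np.inf                                 # suppress the DC bin

data_f = np.fft.rfft(data)
white_f = data_f / np.sqrt(psd * fs / 2.0)      # divide by noise sigma per bin
whitened = np.fft.irfft(white_f, n=data.size)   # ~unit-variance whitened output
```

Because the input noise here already has the assumed PSD, the whitened output has approximately unit variance; with coloured noise the same operation flattens the spectrum so all frequency bins contribute on an equal footing to the network input.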

Figure 2: Corner plot showing 2 and 1-dimensional marginalised posterior distributions for one example test dataset. Filled (red) contours represent the posteriors obtained from the CVAE approach and solid (blue) contours are the posteriors output from our baseline analysis (Bilby using the dynesty sampler). In each case, the contour boundaries enclose fixed levels of probability. One dimensional histograms of the posterior distribution for each parameter from both methods are plotted along the diagonal. Blue and red vertical lines represent the symmetric confidence bounds for Bilby and the CVAE respectively. Black crosses and vertical black lines denote the true parameter values of the simulated signal. The original whitened noisy timeseries and the noise-free signal are plotted in blue and cyan respectively in the upper right hand panel. The test signal was simulated with an optimal signal-to-noise ratio of 13.9.

We can immediately illustrate the accuracy of our machine learning predictions by directly plotting 2 and 1-dimensional marginalised posteriors generated using the output samples from our CVAE and Bilby approaches superimposed on each other. We show this for one example test dataset in Fig. 2 where the strong agreement between both Bilby (blue) and the CVAE (red) is clear.

Figure 3: One-dimensional p-p plots for each parameter, for each benchmark sampler and for VItamin. The curves were constructed using the 256 test datasets. The dashed black diagonal line indicates the ideal result.

A standard test used within the GW parameter estimation community is the production of so-called p-p plots, which we show for our analysis in Fig. 3. The plot is constructed by computing a p-value for each output test posterior on a particular parameter, evaluated at the true simulation parameter value (the fraction of posterior samples lying below the true simulation value). We then plot the cumulative distribution of these p-values Veitch et al. (2014). Curves consistent with the black dashed diagonal line indicate that the 1-dimensional Bayesian probability distributions are consistent with the frequentist interpretation: the truth will lie within an interval containing a given fraction of the posterior probability with the corresponding frequency. It is clear that our new approach shows deviations from the diagonal that are entirely consistent with those observed in all benchmark samplers. In Fig. 5 (see Sec. III) we also show distributions of the KL divergence statistic between all samplers on the joint posterior distributions. It is also clear that the KL divergences between VItamin and any other sampler are consistent with the distributions between any 2 existing benchmark samplers.
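The p-p construction described above can be sketched directly: draw many simulated events, compute the fraction of posterior samples below each true value, and compare the sorted p-values to the diagonal. The Gaussian posteriors below are synthetic stand-ins for a well-calibrated sampler.

```python
import numpy as np

# Sketch of a p-p plot construction. For a well-calibrated sampler the
# p-values are uniform, so their empirical CDF tracks the diagonal.
rng = np.random.default_rng(4)
n_events, n_samps = 256, 1000

p_values = []
for _ in range(n_events):
    truth = rng.normal()                         # true parameter value
    # calibrated toy posterior: unit-width Gaussian about a noisy estimate
    centre = truth + rng.normal()
    samples = centre + rng.normal(size=n_samps)
    p_values.append(np.mean(samples < truth))    # p-value for this event

p_values = np.sort(np.array(p_values))
empirical_cdf = np.arange(1, n_events + 1) / n_events
max_deviation = np.max(np.abs(p_values - empirical_cdf))  # distance from diagonal
```

Plotting `empirical_cdf` against `p_values` reproduces one curve of a figure like Fig. 3; a miscalibrated (too narrow or too wide) posterior would bow the curve away from the diagonal.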

The dominant computational cost of running VItamin lies in the training time, which takes of order several hours to complete. Completion is determined by comparing posteriors produced by the machine learning model and those of Bilby iteratively during training. We additionally assess whether the cost curves (Fig. 4) have converged, such that their slope is near zero. We use a single Nvidia Tesla V100 graphics processing unit (GPU), although consumer grade “gaming” GPU cards are equally fast for this application.

We stress that once trained, there is no need to retrain the network unless the user wishes to use different priors or assume different noise characteristics. The speed at which posterior samples are generated for all samplers used, including VItamin, is shown in Table 1. Run-time for the benchmark samplers is defined as the time to complete their analyses when configured using their default parameters Ashton et al. (2018). For VItamin this time is defined as the total time to produce a fixed number of posterior samples. For our test case of BBH signals, VItamin produces samples from the posterior at a rate which is orders of magnitude faster than our benchmark analyses using current inference techniques, representing a dramatic speed-up in performance.

sampler    run time (seconds)
           min    max     median
Dynesty    602    1538    774
Emcee      2005   11927   4351
Ptemcee    3354   12771   4982
Cpnest     1431   5405    2287
VItamin    1      1       1
Table 1: Durations required to produce samples from each of the different posterior sampling approaches. The benchmark samplers all produced sample counts dependent on the default sampling parameters used. The reader may note that benchmark sampler run times are a few orders of magnitude lower than what is typical of a complete BBH analysis; this is primarily due to our use of a reduced parameter space, low sampling rate and choice of sampler hyperparameters. For the VItamin sampler a fixed number of samples are produced as representative of a typical posterior; the run time is independent of the signal content in the data and is therefore constant for all test cases.

In this letter we have demonstrated that machine learning can reproduce, to a high degree of accuracy, Bayesian posterior probability distributions. This is accomplished using CVAEs trained on simulated GW signals, without requiring the input of precomputed posterior estimates. We have demonstrated that our neural network model, once trained, can produce complete and accurate posterior estimates on sub-second timescales, achieving the same quality of results as the trusted benchmark analyses used within the LIGO-Virgo Collaboration.

The significance of our results is most evident in the orders of magnitude increase in speed over existing approaches. This will help the LIGO-Virgo Collaboration alert EM follow-up partners with minimum latency, enabling tightly coupled, closed-loop control of sensing resources for maximum information gain. Improved low-latency alerts will be especially pertinent for signals from BNS mergers (e.g. GW170817 Abbott and others (2017)) and NSBH signals, where parameter estimation speed will no longer be a limiting factor in observing the prompt EM emission expected on short time scales (the complete low-latency pipeline includes a number of steps: GW data acquisition is followed by the transfer of data, the analysis itself, the communication of results to the EM astronomy community, and physical aspects such as slewing observing instruments to the correct pointing). This is not achievable with existing LIGO-Virgo Collaboration (LVC) analysis tools, although fast limited-parameter estimates are provided by tools such as Bayestar Singer and Price (2016).

The predicted number of future detections of BNS mergers The KAGRA Collaboration et al. (2013); The LIGO Scientific Collaboration and the Virgo Collaboration (2018) will severely strain the GW community’s current computational resources using existing Bayesian methods. Future iterations of our approach will provide full parameter estimation on compact binary coalescence (CBC) signals in of order a second on a single GPU. Our trained network is also modular, and can be shared and used easily by any user to produce results. The specific analysis described in this letter assumes a uniform prior on the signal parameters. However, this is a choice, and the network can be trained with any prior the user demands, or users can cheaply resample accordingly from the output of the network trained on the uniform prior. We also note that our method will be invaluable for population studies, since populations may now be generated and analysed in a fully Bayesian manner on a vastly reduced time scale.

For BBH signals, GW data is usually sampled at several kHz, dependent upon the mass of the binary. We have chosen to use the noticeably low sampling rate of 256 Hz and a single detector configuration largely in order to decrease the computational time required to develop our approach. We do not anticipate any problems in extending our analysis to higher sampling frequencies, other than an increase in training time and a larger burden on GPU memory. Our lower sampling rate naturally limited the chosen BBH mass parameter space to high mass signals. We similarly do not anticipate that extending the parameter space to lower masses will lead to problems, but do expect that a larger number of training samples may be required. Future work will incorporate a multi-detector configuration, at which point parameter estimation will be extended to include sky localisation.

In reality, GW detectors are affected by non-Gaussian noise artefacts and time-dependent variation in the detector noise PSD. Existing methods incorporate a parameterised PSD estimation into their inference Littenberg and Cornish (2015). To account for these within our scheme, we would retrain our network at regular intervals using samples of real detector noise (preferably recent examples to best reflect the state of the detectors). Our work can naturally be extended to include the full range of CBC signal types but also to any and all other parameterised GW signals and to analyses of GW data beyond that of ground based experiments. Given the abundant benefits of this method, we hope that a variant of this approach will form the basis for future GW parameter estimation.

I Acknowledgements.

We would like to acknowledge valuable input from the LIGO-Virgo Collaboration, specifically from Will Farr and the parameter estimation and machine-learning working groups. We would additionally like to thank Szabi Marka for posing this challenge to us. We thank Nvidia for the generous donation of a Tesla V-100 GPU used in addition to LVC computational resources. The authors also gratefully acknowledge the Science and Technology Facilities Council of the United Kingdom. CM and SH are supported by the Science and Technology Research Council (grant No. ST/ L000946/1) and the European Cooperation in Science and Technology (COST) action CA17137. FT acknowledges support from Amazon Research and EPSRC grant EP/M01326X/1, and RM-S EPSRC grants EP/M01326X/1 and EP/R018634/1.

II Addendum

II.1 Competing Interests

The authors declare that they have no competing financial interests.

II.2 Correspondence

Correspondence and requests for materials should be addressed to Hunter Gabbard.

III Methods

Conditional variational autoencoders are a form of variational autoencoder which is conditioned on an observation, where in our case the observation is a 1-dimensional GW timeseries signal $y$. The autoencoders from which variational autoencoders are derived are typically used for problems involving image reconstruction and/or dimensionality reduction. They perform a regression task whereby the autoencoder attempts to predict its own input (i.e. to model the identity function) through a “bottleneck layer”, a limited and therefore distilled representation of the input parameter space. An autoencoder is composed of two neural networks, an encoder and a decoder Gallinari et al. (1987). The encoder network takes as input a vector whose number of dimensions is fixed and predefined by the user. The encoder converts the input vector into a (typically) lower dimensional space, referred to as the latent space. A representation of the data in the latent space is passed to the decoder network, which generates a reconstruction of the original input data to the encoder network. Through training, the two sub-networks learn how to efficiently represent a dataset within a lower dimensional latent space which takes on the most important properties of the input training data. In this way, the data can be compressed with little loss of fidelity. Additionally, the decoder simultaneously learns to decode the latent space representation and reconstruct that data back to its original form (the input data).

The primary difference between a variational autoencoder Pagnoni et al. (2018) and an autoencoder concerns the method by which locations within the latent space are produced. In our variant of the variational autoencoder, the output of the encoder is interpreted as a set of parameters governing statistical distributions (in our case the means and variances of multivariate Gaussians). In proceeding to the decoder network, samples ($z$) from the latent space are randomly drawn from these distributions and fed into the decoder, thereby adding an element of variation into the process. A particular input can then have a range of possible outputs. In both the decoder and the encoder networks we use fully-connected layers (although this is not a constraint and any trainable network architecture may be used).
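The sampling step described above is the so-called reparameterisation trick, which the following sketch illustrates with placeholder encoder outputs (the names and dimensions are illustrative only).

```python
import numpy as np

# Sketch of the sampling step that distinguishes a variational autoencoder
# from a plain autoencoder: the encoder emits distribution parameters, and
# the latent vector passed to the decoder is a random draw from them.
rng = np.random.default_rng(5)
n_z = 8

mu, log_var = rng.normal(size=n_z), rng.normal(size=n_z)   # encoder output
sigma = np.exp(0.5 * log_var)

# Reparameterisation: z = mu + sigma * eps keeps the draw differentiable
# with respect to (mu, log_var) during back-propagation.
eps = rng.normal(size=n_z)
z = mu + sigma * eps

# The same input therefore maps to a different z on every call.
z2 = mu + sigma * rng.normal(size=n_z)
```

Writing the draw as a deterministic function of (mu, sigma) plus external noise is what allows gradients of the cost function to flow back through the sampling operation into the encoder weights.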

III.1 Cost function derivation

We will now derive the cost function and the corresponding network structure, beginning with the statement defining the aim of the analysis. We wish to obtain a function $r_\theta(x|y)$ that reproduces the posterior distribution $p(x|y)$ (the probability of our physical parameters $x$ given some measured data $y$). The cross-entropy between 2 distributions is defined in Eq. 2, where we have made the distributions explicitly conditional on $y$ (our measurement). In this case $p(x|y)$ is the target distribution (the true posterior) and $r_\theta(x|y)$ is the parametric distribution that we will use neural networks to construct. The variable $\theta$ represents the trainable neural network parameters.

The cross-entropy is minimised when $r_\theta(x|y) = p(x|y)$ and so by minimising

$$H = -\mathbb{E}_{p(y)}\left[\int dx\; p(x|y)\log r_\theta(x|y)\right],$$

where $\mathbb{E}_{p(y)}$ indicates the expectation value over the distribution of measurements $y$, we make the parametric distribution as similar as possible to the target for all possible measurements $y$.

Converting the expectation value into an integral over $y$ weighted by $p(y)$ and applying Bayes’ theorem we obtain

$$H = -\int dx\; p(x)\int dy\; p(y|x)\log r_\theta(x|y),$$

where $p(x)$ is the prior distribution on the physical parameters $x$.

The CVAE network outlined in Fig. 1 makes use of a conditional latent variable model and our parametric model is constructed from the product of 2 separate distributions marginalised over the latent space,

$$r_\theta(x|y) = \int dz\; r_{\theta_1}(z|y)\, r_{\theta_2}(x|z,y).$$

We have used $\theta_1$ and $\theta_2$ to indicate that the 2 separate networks modelling these distributions will be trained on these parameter sets respectively. Both new conditional distributions are modelled as multivariate uncorrelated Gaussian distributions (governed by their means and variances). However, this still allows $r_\theta(x|y)$ to take a general form (although it does limit it to be unimodal).

One could be forgiven for thinking that setting up networks that simply aim to minimise $H$ over $\theta_1$ and $\theta_2$ would be enough to solve this problem. However, as shown in Sohn et al. (2015), this is an intractable problem and a network cannot be trained directly to do this. Instead we define a recognition function $q_\phi(z|x,y)$ that will be used to derive an evidence lower bound (ELBO). Here we use $\phi$ to represent the trainable parameters of an encoder network ($E_2$).

Let us first define the KL divergence between 2 of our distributions as

$${\rm KL}\big[q_\phi(z|x,y)\,\|\,r_\theta(z|x,y)\big] = \int dz\; q_\phi(z|x,y)\log\left(\frac{q_\phi(z|x,y)}{r_\theta(z|x,y)}\right). \qquad (7)$$

It can be shown, after some manipulation, that

$$\log r_\theta(x|y) = {\rm ELBO} + {\rm KL}\big[q_\phi(z|x,y)\,\|\,r_\theta(z|x,y)\big],$$

where the ELBO is given by

$${\rm ELBO} = \mathbb{E}_{q_\phi(z|x,y)}\left[\log\frac{r_{\theta_1}(z|y)\,r_{\theta_2}(x|z,y)}{q_\phi(z|x,y)}\right],$$

and is so-named since the KL divergence cannot be negative and has a minimum of zero. Therefore, if we were to find a function $q_\phi$ (optimised on $\phi$) that minimised the KL divergence then we can state that

$$\log r_\theta(x|y) \geq {\rm ELBO}.$$
After some further manipulation of Eq. 9 we find that

$$\log r_\theta(x|y) \geq \mathbb{E}_{q_\phi(z|x,y)}\big[\log r_{\theta_2}(x|z,y)\big] - {\rm KL}\big[q_\phi(z|x,y)\,\|\,r_{\theta_1}(z|y)\big].$$
We can now substitute this inequality into Eq. 5 (our cost function) to obtain

$$H \leq -\int dx\; p(x)\int dy\; p(y|x)\Big[\mathbb{E}_{q_\phi(z|x,y)}\big[\log r_{\theta_2}(x|z,y)\big] - {\rm KL}\big[q_\phi(z|x,y)\,\|\,r_{\theta_1}(z|y)\big]\Big],$$

which can in practice be approximated as a stochastic integral over draws of $x$ from the prior, $y$ from the likelihood function $p(y|x)$, and $z$ from the recognition function, giving us the batch-averaged loss function quoted earlier, the actual function evaluated within the training procedure.

Figure 4: The cost as a function of training iteration. We show the ELBO cost function component (blue), the KL divergence component (orange) and total cost (green). The total cost is simply a summation of the 2 components and one training iteration is defined as training over one batch of signals.
Parameter name         symbol       min          max    units
mass 1                 $m_1$        35           80     solar masses
mass 2                 $m_2$        35           80     solar masses
luminosity distance    $d_{\rm L}$  1            3      Gpc
time of coalescence    $t_0$        0.65         0.85   seconds
phase at coalescence   $\phi_0$     0            $2\pi$ radians
right ascension        $\alpha$     1.375        -      radians (fixed)
declination            $\delta$     -1.2108      -      radians (fixed)
inclination            $\iota$      0            -      radians (fixed)
polarisation           $\psi$       0            -      radians (fixed)
spins                  -            0            -      - (fixed)
epoch                  -            1126259642   -      GPS time (fixed)
detector               -            LIGO Hanford -      - (fixed)
Table 2: The uniform prior boundaries and fixed parameter values used on the BBH signal parameters for the benchmark and the CVAE analyses. Mass 2 is additionally constrained such that $m_2 \leq m_1$.

III.2 The training procedure

We have now set up a cost function composed of 3 probability functions that have well defined inputs and outputs, where the mapping of those inputs to outputs is governed by the parameter sets $\theta_1$, $\theta_2$ and $\phi$. These parameters are the weights and biases of 3 neural networks acting as (variational) encoder, decoder, and encoder respectively. To train such a network one must connect the inputs and outputs appropriately to compute the cost function, and back-propagate cost function derivatives to update the network parameters. The network structure shown schematically in Fig. 1 shows how, for a batch of corresponding $(x, y)$ pairs, the cost function is computed during each iteration of training.

Training is performed via a series of steps illustrated in Fig. 1.

  • The encoder $E_1$ is given a set of training GW signals ($y$) and encodes each into a set of variables defining a distribution in the latent space. These variables describe the first 2 central moments (mean and variance) for each dimension of an uncorrelated (diagonal covariance) multivariate Gaussian distribution.

  • The encoder $E_2$ takes a combination of both the data $y$ and the true parameters $x$ defining the GW signal, and encodes this into parameters defining another uncorrelated multivariate Gaussian distribution in the same latent space, again representing a set of means and variances.

  • We then sample from the latent-space distribution output by encoder $E_2$, giving us samples $z$ within the latent space.

  • These samples, along with their corresponding data $y$, then go to the decoder $D$, which outputs a set of parameters (much like the encoder outputs) that define the moments of an uncorrelated multivariate Gaussian distribution in the physical parameter space.

  • The first term of the loss function (the ELBO component) is then computed by evaluating the probability density defined by the decoder output at the true training values $x$. This component of the loss allows the network to learn how to predict accurate values of $x$ but also to learn the intrinsic variation due to the noise properties of the data $y$. It is important to highlight that the GW parameter predictions from the decoder $D$ do describe a multivariate Gaussian, but as is shown in our results (see Fig. 2), this does not imply that our final output posterior estimates will also be multivariate Gaussians.

  • Finally, the loss component described by the KL divergence between the distributions output by $E_2$ and $E_1$ is computed using

    $${\rm KL}\big[q_\phi(z|x_n,y_n)\,\|\,r_{\theta_1}(z|y_n)\big] = \mathbb{E}_{q_\phi(z|x_n,y_n)}\left[\log\frac{q_\phi(z|x_n,y_n)}{r_{\theta_1}(z|y_n)}\right]. \qquad (13)$$

    Here we highlight that we do not desire that the network tries to make these 2 distributions equal to each other. Rather, we want the ensemble network to minimise the total cost (of which this is a component).

As is standard practice in machine learning applications, the cost is computed over a batch of training samples and repeated for a pre-defined number of iterations.
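For the diagonal-Gaussian distributions used throughout, the KL term in Eq. 13 has a closed form; the sketch below (with arbitrary placeholder means and variances standing in for the two encoder outputs) checks that closed form against a Monte-Carlo estimate of the expectation.

```python
import numpy as np

# Closed-form KL between two diagonal Gaussians, verified by Monte Carlo.
# The means/variances are arbitrary stand-ins for the encoder outputs.
rng = np.random.default_rng(6)
mu_q, var_q = np.array([0.5, -1.0]), np.array([0.8, 1.5])   # q_phi(z|x,y)
mu_r, var_r = np.array([0.0, 0.3]), np.array([1.0, 2.0])    # r_theta1(z|y)

kl_closed = 0.5 * np.sum(np.log(var_r / var_q)
                         + (var_q + (mu_q - mu_r) ** 2) / var_r - 1.0)

# Monte-Carlo estimate of E_q[log q(z) - log r(z)]
z = mu_q + np.sqrt(var_q) * rng.normal(size=(200000, 2))
log_q = -0.5 * np.sum(np.log(2 * np.pi * var_q) + (z - mu_q) ** 2 / var_q, axis=1)
log_r = -0.5 * np.sum(np.log(2 * np.pi * var_r) + (z - mu_r) ** 2 / var_r, axis=1)
kl_mc = np.mean(log_q - log_r)
```

Having this closed form is what makes the KL component of the training cost cheap to evaluate exactly at every iteration, with no sampling noise.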

III.3 Network and training parameters

For our purposes, we found that a fixed number of training iterations, batch size and learning rate were sufficient, together with a training set large enough to adequately cover the BBH parameter space. We additionally ensure that an (effectively) infinite number of noise realisations are employed by making sure that every time a training sample is used it is given a unique noise realisation, despite only having a finite number of waveforms. Each neural network ($E_1$, $E_2$, $D$) is composed of 3 fully connected layers with ReLU Nair and Hinton (2010) activation functions between layers. We consider training complete when both components of the loss function have converged to approximately constant values, or when comparisons with benchmark test posteriors indicate no significant changes in the output posterior.

III.4 The testing procedure

After training has completed and we wish to use the network for inference, we follow the procedure described in the right hand panel of Fig. 1. Given a new data sample $y$ (not taken from the training set) we simply input this into the encoder $E_1$, from which we obtain a single set of parameters describing a distribution (conditional on the data $y$) in the latent space. We then repeat the following steps:

  • We randomly draw a latent space sample $z$ from the latent space distribution defined by the encoder output.

  • Our sample $z$ and the corresponding original data $y$ are fed as input to our pre-trained decoder network ($D$). The decoder network returns a set of moments which describe a multivariate Gaussian distribution in the physical parameter space.

  • We then draw a random realisation from that distribution.

A comprehensive representation in the form of samples drawn from the entire joint posterior distribution can then be obtained by simply repeating this procedure with the same input data $y$ (see Eq. 6).
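The three steps above can be sketched end-to-end with random linear maps standing in for the trained networks (all names, dimensions and weight matrices below are illustrative placeholders, not the trained model).

```python
import numpy as np

# Sketch of the test-time sampling loop: one latent distribution per input y;
# each posterior sample comes from draw-z -> decode -> draw-x.
rng = np.random.default_rng(7)
n_y, n_z, n_x = 32, 8, 4

W_enc = rng.normal(size=(2 * n_z, n_y)) * 0.1          # stand-in for encoder E1
W_dec = rng.normal(size=(2 * n_x, n_z + n_y)) * 0.1    # stand-in for decoder D

y = rng.normal(size=n_y)                               # one whitened test input
enc_out = W_enc @ y
mu_z, log_var_z = enc_out[:n_z], enc_out[n_z:]         # latent moments (fixed per y)

posterior_samples = []
for _ in range(1000):
    z = mu_z + np.exp(0.5 * log_var_z) * rng.normal(size=n_z)    # step 1: draw z
    dec_out = W_dec @ np.concatenate([z, y])                     # step 2: decode
    mu_x, log_var_x = dec_out[:n_x], dec_out[n_x:]
    x = mu_x + np.exp(0.5 * log_var_x) * rng.normal(size=n_x)    # step 3: draw x
    posterior_samples.append(x)
posterior_samples = np.array(posterior_samples)        # samples from r_theta(x|y)
```

Because each pass is a few matrix multiplications and Gaussian draws, generating the full sample set is trivially fast and parallelisable, which is the source of the speed-up reported in Table 1.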

III.5 Additional tests

Figure 5: Distributions of KL-divergence values between posteriors produced by different samplers. The KL divergence is computed between all samplers with every other sampler over all 256 GW test cases. The distributions of the resulting KL divergence values are then plotted, with each color representing a different sampler combination including VItamin as one of the sampler pairs. The grey distributions represent the results from all benchmark sampler pairs for comparison. Both the $x$ and $y$ axes are scaled logarithmically for readability.

The KL divergence between 2 distributions is a measure of their similarity, and we use this to compare the output posterior estimates between samplers for the same input test data. To do this we run each independent sampler (including the CVAE) on the same test data to produce samples from the corresponding posterior. We then compute the KL divergence between the output distributions from each sampler with itself and with all other samplers. For distributions that are identical the KL divergence is equal to zero, but since we are representing our posterior distributions using finite numbers of samples, identical distributions result in small but non-zero KL-divergence values. In Fig. 5 we show the distributions of these KL divergences for the 256 test GW samples, where we see that the CVAE approach, when compared to the benchmark samplers, has distributions consistent with those produced when comparing between 2 different benchmark samplers.
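One simple way to estimate a KL divergence from two finite sample sets is to histogram both on a common binning and apply the discrete KL formula; the 1-D sketch below does this for two nearly identical Gaussian sample sets (a stand-in for the joint-posterior comparison in the text; bin choice and regularisation are illustrative).

```python
import numpy as np

# Histogram-based KL estimate between two samplers' output sample sets.
# Illustrative 1-D stand-in; bin widths and the epsilon floor are choices.
rng = np.random.default_rng(8)
samples_a = rng.normal(0.00, 1.0, 20000)       # sampler A posterior samples
samples_b = rng.normal(0.05, 1.0, 20000)       # sampler B, nearly identical

bins = np.linspace(-5, 5, 51)
p, _ = np.histogram(samples_a, bins=bins, density=True)
q, _ = np.histogram(samples_b, bins=bins, density=True)
width = np.diff(bins)

eps = 1e-12                                    # avoid log(0) in empty bins
kl_ab = np.sum(width * p * np.log((p + eps) / (q + eps)))
```

The estimate is small but non-zero even for identical underlying distributions, illustrating the finite-sample floor discussed in the text; in higher dimensions, nearest-neighbour KL estimators are commonly preferred over histograms.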

