Unpaired Image Enhancement Featuring Reinforcement-Learning-Controlled Image Editing Software


This paper tackles unpaired image enhancement, a task of learning a mapping function which transforms input images into enhanced images in the absence of input-output image pairs. Our method is based on generative adversarial networks (GANs), but instead of simply generating images with a neural network, we enhance images utilizing image editing software such as Adobe® Photoshop® for the following three benefits: enhanced images have no artifacts, the same enhancement can be applied to larger images, and the enhancement is interpretable. To incorporate image editing software into a GAN, we propose a reinforcement learning framework where the generator works as the agent that selects the software’s parameters and is rewarded when it fools the discriminator. Our framework can use high-quality non-differentiable filters present in image editing software, which enables image enhancement with high performance. We apply the proposed method to two unpaired image enhancement tasks: photo enhancement and face beautification. Our experimental results demonstrate that the proposed method achieves better performance than state-of-the-art methods based on unpaired learning.


Introduction

Image enhancement is a task of learning a mapping function which transforms input images into enhanced images. If we have a large number of original and enhanced image pairs, the task can be solved by image-to-image translation methods, which have made significant progress [10, 18] owing to the recent development of convolutional neural networks (CNNs). However, in many cases, it is difficult to collect a large number of such image pairs. To avoid this problem, we address an image enhancement task that does not require paired image datasets; that is, unpaired image enhancement. In this paper, we propose an unpaired image enhancement method which can be applied to real-world tasks.

A simple approach to unpaired image enhancement is to use unpaired image-to-image translation methods, which are mainly based on generative adversarial networks (GANs) [7]. One such method is CycleGAN [20], where generators with an encoder-decoder architecture are trained with cycle consistency. However, when using CNNs as decoders in real-world tasks, there are three problems. First, images generated by a CNN-based decoder have artifacts that can be attributed to the CNN architecture. Because artifacts can seriously degrade the quality of images, they can be a fatal defect in practical applications. Second, CNN-based decoders can only generate images with limited resolution in practice (e.g., 512×512px in the CycleGAN paper). Recent high-resolution displays need 2000px or larger images, but generating images at high resolution makes the training unstable and time-consuming. Third, image-to-image translation with CNN-based decoders is not interpretable. Because the procedure is a black box, users cannot understand or manually adjust it.

To achieve unpaired image enhancement that is free of artifacts, scale-invariant, and interpretable, we use image editing software, such as Adobe Photoshop, which edits the input image based on input parameters. Using image editing software in the processing flow has the following three benefits: edited images have no artifacts because the software is carefully designed for professional use, the same editing can be applied to large images using the scale-invariant image editing filters provided by the software, and the editing is interpretable, allowing users to easily adjust it manually. By using image editing software, we can achieve high-quality and highly practical image enhancement. To utilize image editing software in a GAN, we propose a reinforcement learning (RL) framework where the generator works as the agent controlling the software. While a generator in a general GAN generates images directly, the generator in our framework selects the software’s parameters and is rewarded when the edited result fools the discriminator. By training the framework with RL, we can use high-quality non-differentiable image editing software.

To evaluate the performance of the proposed method, we apply it to two unpaired image enhancement tasks: photo enhancement and face beautification. The experimental results show that the proposed method achieves better performance than previous approaches.

This paper makes the following contributions:

  • We achieve unpaired image enhancement that is without artifacts, is scale-invariant, and is also interpretable.

  • We use image editing software and propose an RL framework to incorporate image editing software into a GAN. The generator is trained as the agent to select the software’s parameters and is rewarded when it fools the discriminator.

  • We apply the proposed framework to the tasks of photo enhancement and face beautification.

Figure 1: Overview of our method. In our framework, the generator is trained with RL to control image editing software, and the output of the discriminator is used as the reward.

Related Works

Image-to-Image Translation

We formulate image enhancement as a task of learning the mapping from original images to images with the desired characteristics, which is an image-to-image translation problem. A major CNN-based method for image-to-image translation is pix2pix [10], which uses a conditional GAN [7] to learn a mapping from source to target images. Based on this method, Wang et al. \shortcitewang2018high achieved high-resolution image-to-image translation using multi-scale generators and discriminators. These paired-learning methods require a large number of pairs of input and output images, but in many cases, such image pairs cannot be obtained. To solve this problem, Zhu et al. \shortcitezhu2017unpaired developed an unpaired image-to-image translation technique named CycleGAN, where two GANs are trained using cycle consistency. Kim et al. \shortcitekim2017learning and Yi et al. \shortciteyi2017dualgan also proposed similar methods, named DiscoGAN and DualGAN, respectively. Choi et al. \shortcitechoi2018stargan proposed a method named StarGAN that can handle translation between multiple domains. We propose a method that is more practical than applying these methods directly to image enhancement.

Reinforcement Learning for Image Processing

In recent years, deep RL has been applied to image processing. Cao et al. \shortcitecao2017attention applied RL to the super-resolution of facial images; in that study, areas to be enhanced are sequentially selected by RL. Li et al. \shortciteli2018a2 proposed an RL-based image cropping method, where an agent sequentially updates the cropping window, enabling high-speed cropping. Yu et al. \shortciteyu2018crafting used RL to select a toolchain from a toolbox for image restoration. Furuta et al. \shortcitefuruta2019fully proposed a fully convolutional network that allows agents to perform pixel-wise manipulations.

One of the benefits of RL is that a framework containing non-differentiable functions can be optimized. Ganin et al. \shortciteganin2018synthesizing proposed a reinforced adversarial learning method for synthesizing simple images of letters or digits using a non-differentiable renderer. Because the image editing software we use and its renderer are both non-differentiable, we adopt part of their training strategy in our unpaired image enhancement method.

Photo Enhancement

Photo enhancement can be formulated as a translation between low-quality original images and high-quality expert-retouched images. Bychkovsky et al. \shortcitebychkovsky2011learning created a large-scale paired dataset for photo enhancement. They hired five expert retouchers and created a collection of five sets of 5,000 input-output image pairs. Using this paired dataset, Yan et al. \shortciteyan2016automatic proposed an automatic photo adjustment framework, which considers the local semantics of an image. Gharbi et al. \shortcitegharbi2017deep developed a CNN to predict the coefficients of a locally affine model in a bilateral space and achieved high-speed edge-preserving photo enhancement. Wang et al. \shortcitewang2019underexposed built an underexposed image dataset and proposed a network that can handle diverse lighting conditions.

Collecting pairs of original and expert-retouched images is labor-intensive. To address this problem, unpaired learning methods have been proposed. Chen et al. \shortcitechen2018deep made some improvements to CycleGAN to develop a stable two-way GAN framework. Park et al. \shortcitepark2018distort created pseudo-input-retouched pairs by randomly distorting high-quality reference images. Hu et al. \shortcitehu2018exposure proposed a deep RL-based framework that applies retouching operations sequentially. Their method is similar to our proposed method, but their architecture can only use differentiable filters. While the available filters in their framework are limited, our method can use a variety of filters because our method does not require filters to be differentiable. In addition, the same framework can be applied to a completely different task such as face beautification.

Face Attribute Manipulation

Face beautification, a task of converting a less attractive face into an attractive face, is one application of face attribute manipulation. One method for face attribute manipulation is CycleGAN, but the model is difficult to train, and the generated images may include artifacts. Several GAN-based approaches have been proposed to overcome this problem. Shen et al. \shortciteshen2017learning achieved efficient face attribute manipulation by generating only the difference between the images before and after the manipulation instead of generating the entire image. Zhang et al. \shortcitezhang2018generative introduced spatial attention to avoid edits in unrelated parts.

Another approach called deep feature interpolation (DFI), which does not use GANs, was proposed by Upchurch et al. \shortciteupchurch2017deep. By manipulating the deep features of the input image with a specific attribute vector and performing backpropagation to the image space, the image after the manipulation can be obtained. Using DFI, Chen et al. \shortcitechen2018facelet achieved fast and high-quality face attribute manipulation with an end-to-end CNN that learns attribute vectors. Chen et al. \shortcitechen2019semantic developed a model that decomposes a facial attribute into multiple semantic components, each corresponding to a specific face region. These techniques have produced great results, but face attribute manipulation using CNNs inevitably generates artifacts. This is a serious issue in face beautification.


Method

Our goal is to learn a mapping function which transforms input images into enhanced images in the absence of input-output image pairs. We formulate this task as unpaired image-to-image translation from a source domain $X$ to a target domain $Y$, where $X$ and $Y$ contain original images and images with the desired characteristics, respectively. We denote the data distributions as $x \sim p_{data}(x)$ and $y \sim p_{data}(y)$. A simple approach would be to train a CNN-based generator as in CycleGAN [20]. However, CNN-based generators have several problems: the generated image has artifacts, the generator is not scale-invariant, and the translation is not interpretable. To achieve high-quality image enhancement that addresses these problems, we introduce image editing software such as Adobe Photoshop. This image editing software $\phi$ takes an image $x$ and an action vector $a \in \mathbb{R}^N$ as input and outputs the edited image $\phi(x, a)$, where $N$ is the number of filters in the image editing software $\phi$. To incorporate the image editing software into a GAN, we propose an RL framework, which consists of the image editing software, one generator, and one discriminator. In this framework, the generator works as an agent selecting parameters for the software and is rewarded when it fools the discriminator. Through the training process, the distribution defined by the generator gradually approaches $p_{data}(y)$. We show the overview of our framework in Figure 1 and give detailed explanations of the discriminator and the generator in the following sections.


Discriminator

The training process of our discriminator is the same as that of discriminators in general GANs; that is, it learns to distinguish generated images from real images. We follow Wasserstein GAN with gradient penalty (WGAN-GP) [8] and define the loss function as follows:

$$\mathcal{L}_D = \mathbb{E}_{x \sim p_{data}(x),\, a \sim \pi}\left[D(\phi(x, a))\right] - \mathbb{E}_{y \sim p_{data}(y)}\left[D(y)\right] + \lambda_{gp}\mathcal{L}_{gp},$$

where minimizing the first and second terms increases the Wasserstein distance between generated images and real images, $\lambda_{gp}$ is a weight for $\mathcal{L}_{gp}$, and $\mathcal{L}_{gp}$ is a regularization term for the discriminator to stay in the set of Lipschitz continuous functions:

$$\mathcal{L}_{gp} = \mathbb{E}_{\hat{x}}\left[\left(\left\lVert \nabla_{\hat{x}} D(\hat{x}) \right\rVert_2 - 1\right)^2\right].$$

Here, $\hat{x}$ is an image sampled along straight lines between generated images $\phi(x, a)$ and real images in $Y$.
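As a rough, self-contained illustration of this loss, the sketch below uses 1-D "images" and a linear critic $D(x)=\sum_i w_i x_i$ (so the gradient norm has a closed form); it is a toy under these assumptions, not the paper's implementation.

```python
# Toy WGAN-GP critic loss with 1-D "images" and a linear critic.
def critic(w, x):
    # D(x) = sum_i w_i * x_i
    return sum(wi * xi for wi, xi in zip(w, x))

def interpolate(real, fake, eps):
    # x_hat is sampled along the straight line between a real and a fake image.
    return [eps * r + (1.0 - eps) * f for r, f in zip(real, fake)]

def gradient_penalty(w):
    # For a linear critic, ||grad_x D(x_hat)||_2 equals ||w||_2 everywhere.
    grad_norm = sum(wi * wi for wi in w) ** 0.5
    return (grad_norm - 1.0) ** 2

def critic_loss(w, fakes, reals, lam=10.0):
    # Minimizing the first two terms widens the gap D(real) - D(fake);
    # the penalty keeps the critic close to 1-Lipschitz.
    d_fake = sum(critic(w, x) for x in fakes) / len(fakes)
    d_real = sum(critic(w, y) for y in reals) / len(reals)
    return d_fake - d_real + lam * gradient_penalty(w)
```

In a real implementation the gradient norm is obtained by automatic differentiation at the interpolated point rather than in closed form.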


Generator

We aim to incorporate image editing software into a GAN framework. That is, our generator takes an original image $x$ as input and outputs parameters for the software. A simple approach is to design differentiable image editing software $\phi_d$. A generator $G$ that generates parameters for $\phi_d$ can be directly optimized by minimizing the following loss:

$$\mathcal{L}_G = -\mathbb{E}_{x \sim p_{data}(x)}\left[D\left(\phi_d(x, G(x))\right)\right].$$

However, this method cannot use non-differentiable software such as Adobe Photoshop as $\phi_d$.

To utilize non-differentiable image editing software $\phi$, we train the generator using RL. In RL, an agent decides which action to execute according to the current state. We define an original image $x$ as the state and the parameter vector $a$ as the action. In the existing RL methods for image processing [2, 12, 19, 5, 6], the agent receives operated images and decides actions sequentially, whereas our agent receives an image and selects an action only once. This is because $\phi$ is not a linear function; for sequential actions $a_1$ and $a_2$,

$$\phi(\phi(x, a_1), a_2) \neq \phi(x, a_1 + a_2).$$

Because it is hard for users to interpret sequential actions, we use only single-step actions.
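This non-linearity is easy to reproduce with two toy filters, an assumed exposure gain followed by an assumed gamma curve (neither is the actual software's filter): applying the two actions sequentially differs from applying their sum at once.

```python
def phi(x, a):
    # Toy editing software: a = (exposure, gamma).
    exposure, gamma = a
    y = min(max(x * (2.0 ** exposure), 0.0), 1.0)  # gain, clipped to [0, 1]
    return y ** (2.0 ** gamma)                      # gamma adjustment

x = 0.5
a1, a2 = (0.0, 0.5), (0.5, 0.0)
sequential = phi(phi(x, a1), a2)                   # phi(phi(x, a1), a2)
combined = phi(x, (a1[0] + a2[0], a1[1] + a2[1]))  # phi(x, a1 + a2)
```

Here `sequential` and `combined` differ by a clearly visible margin, which is why the agent selects all parameters in a single step instead of accumulating them.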

We define the reward so that the edited image $\phi(x, a)$ cannot be distinguished from images of the target domain $Y$. The simplest reward is $r = D(\phi(x, a))$, but maximizing only $D(\phi(x, a))$ can lead to a lack of consistency between $x$ and $\phi(x, a)$. To deceive the discriminator with as small a change as possible, we define the reward as follows:

$$r(x, a) = D(\phi(x, a)) - \beta\, \mathrm{MSE}(x, \phi(x, a)),$$

where the second term calculates the mean squared error between the two images and $\beta$ is its weight.
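A minimal sketch of this reward, with images flattened to lists of pixel values (the weight name `beta` is an assumption):

```python
def mse(x, y):
    # Mean squared error between two flattened images.
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / len(x)

def reward(d_score, original, edited, beta=100.0):
    # Discriminator score minus a weighted penalty on large edits.
    return d_score - beta * mse(original, edited)
```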

We select advantage actor-critic (A2C) [14] as the RL method, following the training strategy of Ganin et al. \shortciteganin2018synthesizing. A2C consists of a value network $V$ and a policy network $\pi$. The value network $V$ is a module that estimates the value of the current state $x$. The loss to optimize $V$ is defined as follows:

$$\mathcal{L}_V = \mathbb{E}_{x \sim p_{data}(x),\, a \sim \pi}\left[\left(r(x, a) - V(x)\right)^2\right].$$

The policy network $\pi$ is a module that outputs the probability of each action in the current state $x$ and is trained to maximize the expected reward:

$$\mathcal{J}_\pi = \mathbb{E}_{x \sim p_{data}(x),\, a \sim \pi}\left[\left(r(x, a) - V(x)\right)\log \pi(a \mid x) + c\, H(\pi(\cdot \mid x))\right].$$

Intuitively, if the reward obtained by an action $a$ is greater than the reward predicted by the value network, the probability of $a$ increases. The second term calculates the entropy of the policy with weight $c$, which encourages the agent to explore and prevents convergence to local optima.
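The single-step value and policy objectives can be sketched in plain Python as follows; the policy term is written as a loss (the negated objective), and the constant `c` is the assumed entropy weight.

```python
import math

def value_loss(r, v):
    # Regress V(x) toward the observed single-step reward.
    return (r - v) ** 2

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def policy_loss(probs, action, r, v, c=0.001):
    # Higher-than-predicted reward -> decrease loss by raising the
    # probability of the taken action; the entropy bonus (weight c)
    # encourages exploration.
    advantage = r - v
    return -advantage * math.log(probs[action]) - c * entropy(probs)
```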

Network Architecture

In this paper, we use a discriminator and a generator whose architectures are shown in Figure 2. The discriminator has a general CNN architecture similar to the one used in WGAN-GP [8]. The generator consists of the policy network and the value network, which share the two-dimensional (2D) convolutional layers.

The software $\phi$ can take continuous parameters, but an agent that selects continuous actions is hard to train. Therefore, we design our agent to take discrete actions and the policy network to output probabilities for each discrete action. We name the output of the policy network $P$, which is an $N \times M$ matrix, where $M$ is the number of discrete steps of the parameters. Each parameter $n$ has a maximum value $a^{(n)}_{max}$ and a minimum value $a^{(n)}_{min}$. We divide the range between the maximum and minimum values into $M$ steps, and the policy network outputs the probability of each discrete action as follows:

$$\pi\!\left(a^{(n)} = a^{(n)}_{min} + \frac{m-1}{M-1}\left(a^{(n)}_{max} - a^{(n)}_{min}\right) \,\middle|\, x\right) = P_{n,m},$$

where $m \in \{1, \dots, M\}$. To represent the relationship between adjacent discrete steps (e.g., $P_{n,m}$ and $P_{n,m+1}$), we use one-dimensional (1D) convolutional layers to produce the probabilities from the CNN feature. We do not use padding for the 1D convolutional layers, because padding can generate strange probability values at both ends of the steps and can destabilize training.
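A toy sketch of this head: a no-padding ("valid") 1-D convolution over the step axis followed by a softmax yields probabilities for the discrete steps of one filter; the feature values, kernel, and lengths below are illustrative, not the trained network.

```python
import math

def conv1d_valid(feat, kernel):
    # "Valid" 1-D convolution: no padding, so the output is shorter than the
    # input and no artificial values appear at the ends of the step axis.
    k = len(kernel)
    return [sum(feat[i + j] * kernel[j] for j in range(k))
            for i in range(len(feat) - k + 1)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A feature of length M + 2 convolved with a width-3 kernel gives M logits.
feat = [0.1 * i for i in range(35)]
probs = softmax(conv1d_valid(feat, [0.2, 0.5, 0.3]))
```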

Figure 2: Network architecture of the discriminator and the generator.
Figure 3: Qualitative comparison with other methods on a test image from the MIT-Adobe 5K dataset [1].
Table 1: The result of the quantitative comparison on the MIT-Adobe 5K dataset [1].
Table 2: The result of the user study on the MIT-Adobe 5K dataset [1].
Figure 4: Application process of the filters. Values in parentheses are filter parameters, which are normalized to [-1, 1].

Train and Test

While training, we resize all images to a fixed small resolution and select the action probabilistically according to $\pi$, that is,

$$a \sim \pi(a \mid x).$$

The resized image is edited according to $a$. While testing, the agent takes a resized image as the state and selects the action deterministically,

$$a = \operatorname*{arg\,max}_{a'} \pi(a' \mid x).$$

Then, the selected action is applied to the original image, because the operation of the image editing software is scale-invariant.
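The two selection rules can be sketched as follows (helper names are illustrative); training samples from the categorical distribution, testing picks its mode.

```python
import random

def sample_action(probs, rng=random.Random(0)):
    # Training: sample a discrete step index from pi(a|x) by inverse CDF.
    u, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            return i
    return len(probs) - 1  # guard against floating-point round-off

def greedy_action(probs):
    # Testing: pick the most probable discrete step deterministically.
    return max(range(len(probs)), key=lambda i: probs[i])
```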

We train the discriminator and the generator alternately. According to the WGAN-GP paper [8], the discriminator should be updated more frequently than the generator. Following Ganin et al.’s \shortciteganin2018synthesizing training strategy, we create a replay buffer which keeps images generated through the training process. For every update of the generator, the discriminator is updated $n_d$ times using images from the replay buffer.
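A minimal replay buffer along these lines might look like the sketch below (the capacity and sampling scheme are assumptions, not the paper's settings):

```python
import collections
import random

class ReplayBuffer:
    """Stores generated images; the discriminator samples past generations
    from here several times per generator update."""

    def __init__(self, capacity=1000):
        # Oldest images are discarded automatically once capacity is reached.
        self.images = collections.deque(maxlen=capacity)

    def add(self, image):
        self.images.append(image)

    def sample(self, n):
        # Draw up to n distinct stored images at random.
        return random.sample(list(self.images), min(n, len(self.images)))
```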


Photo Enhancement


Dataset

We apply the proposed method to photo enhancement, a task of converting an original photo into an expert-retouched photo. We use the MIT-Adobe 5K dataset [1] for training and testing. The dataset consists of 5,000 photos, each of which is retouched by five experts. Following Chen et al. \shortcitechen2018deep, we use the images retouched by Expert C as the target domain images. To create unpaired image sets, we use 2,250 original images and a non-overlapping set of 2,250 retouched images as training data, and the remaining 500 pairs are used as test data.


Implementation

We choose Adobe Lightroom® as the image editing software $\phi$. This tool can adjust the color, brightness, and contrast of an image by manipulating various filter parameters. From the available filters, we choose the following: Dehaze, Clarity, Contrast, Exposure, Temp, Tint, Whites, Blacks, Highlights, Shadows, Vibrance, and Saturation. Because it is difficult to use Lightroom directly, we reproduce the filters in Python. We optimize the discriminator and the generator using Adam [11]. The gradient penalty weight, the MSE weight in the reward, the entropy weight, the number of discrete steps $M$, and the number of discriminator updates $n_d$ are 10, 100, 0.001, 33, and 5, respectively.
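As an example of what such a reproduction might look like, here is a hypothetical exposure-style filter. The actual Lightroom tone curves are proprietary, so the mapping below (a parameter in [-1, 1] interpreted as up to ±2 stops of gain) is purely an illustrative assumption.

```python
def exposure_filter(pixels, value):
    """Hypothetical exposure filter on pixels in [0, 1].

    `value` in [-1, 1]; value = 0.5 corresponds to +1 stop (doubling)
    under the assumed +/- 2 stop range. NOT the real Lightroom curve.
    """
    gain = 2.0 ** (2.0 * value)
    return [min(max(p * gain, 0.0), 1.0) for p in pixels]
```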

Quantitative Evaluation

We conduct a quantitative comparison with existing methods. We measure the difference between our result images and the expert-retouched images using two common metrics, PSNR and SSIM; in general, higher PSNR and SSIM indicate better results. To confirm that the proposed method is scale-invariant, we conduct evaluations with small and large images whose longer sides are 512px and 2048px, respectively. We compare our method with CycleGAN [20] and several unpaired photo enhancement methods: Exposure [9], Distort-and-Recover (D&R) [15], and Deep Photo Enhancer (DPE) [4]. CycleGAN and DPE, which use CNN-based decoders, are trained on small images; when testing on large images, their small-size results are upsampled to the large size using bicubic interpolation. D&R and Exposure, which are filter-based methods, can apply the same enhancement to both small and large images.
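For reference, PSNR on images with pixel values in [0, 1] can be computed as below; this is the standard definition of the metric, not the authors' evaluation code.

```python
import math

def psnr(x, y, peak=1.0):
    # Peak signal-to-noise ratio between two flattened images in [0, peak].
    err = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / err)
```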

The result of the comparison is shown in Table 1. Our method achieves the best performance at both sizes. DPE and the proposed method achieve almost the same SSIM at the small size, on which the models are trained, but DPE’s SSIM drops sharply on large images because the method is not scale-invariant. D&R and Exposure are filter-based methods and perform well on large images, but the filters used in these methods are simple ones, resulting in scores lower than ours. Compared to these filter-based methods, our method can use high-quality non-differentiable filters and achieves image enhancement with high performance.

To analyze our method, we conduct ablation experiments. First, we focus on the differentiability of the filters. Our proposed framework is trained with RL, which enables us to use non-differentiable filters in Lightroom. To verify that the non-differentiable filters contribute to the high performance, we replace them with differentiable filters used by Hu et al. \shortcitehu2018exposure (Ours w/ Differentiable Filters). As shown in the result, we obtain higher performance by using filters in Lightroom, and the availability of non-differentiable filters is important to the high performance.

We also conduct experiments where we remove the mean squared error from the reward (Ours w/o MSE) and replace the 1D convolutional layers with a fully connected layer (Ours w/o 1D Conv.). The results show that the mean squared error and the 1D convolutional layers are necessary factors for the high performance.

Qualitative Evaluation

We show a qualitative comparison with the other methods on a test image in Figure 3. In addition to the methods compared in the quantitative evaluation, we use the “auto white-balance” and “auto-tone adjustment” functions available in Adobe Lightroom, which we name Lightroom (auto). As shown in the results, Lightroom (auto) makes the colors dull, CycleGAN generates artifacts at the boundary between the sky and the building, Exposure overexposes the image, and D&R outputs a slightly darker image than the target. Compared to these methods, our method enhances the image without any artifacts and properly reproduces the expert’s retouch. DPE achieves almost the same quality as ours but is scale-sensitive, as shown in the quantitative evaluation.

We show the sequential application process of the filters in Figure 4. Our proposed framework uses image editing software, which enables users to interpret the enhancement and manually adjust it. Note that although the filters are sequentially applied, the agent selects all filter parameters at once.

User Study

We evaluate the proposed method through a user study. We randomly select 100 original images from the 500 test pairs and perform enhancement using each existing method and the proposed method. 20 crowdworkers hired via Amazon Mechanical Turk are presented with the 100 groups of results from the existing and proposed methods, arranged randomly to avoid bias. We then ask the crowdworkers to give a five-grade rating from 1 (Bad) to 5 (Excellent). Table 2 shows the average of all evaluations. Our proposed method obtains a higher evaluation than all the existing methods, which shows that it is capable of high-quality enhancement.

Figure 5: Qualitative comparisons with other methods on test images from the SCUT-FBP5500 dataset [13].

Face Beautification


Dataset

We apply the proposed method to face beautification, a task of converting a less attractive face into an attractive face. For training and testing, we use the SCUT-FBP5500 dataset [13], which contains a total of 5,500 facial images with attractiveness scores in [1, 5]. We consider the images with the top 1,500 attractiveness scores as attractive images and the others as less attractive images. The less attractive images with the lowest 1,500 attractiveness scores and all attractive images are used for training, and the remaining less attractive images are used for testing. We extract key points using the method of Kazemi et al. \shortcitekazemi2014one to align face positions and resize the images to 224×224px. The area outside of the face is masked out with zeros during training to remove background information.


Implementation

For the image editing software $\phi$, we choose the Face-Aware Liquify function in Adobe Photoshop, which provides filters that morph facial images by changing geometric structure such as eye size or face contour. From the available filters, we choose the following: Eye Size, Nose Height, Nose Width, Upper Lip, Lower Lip, Mouth Width, Mouth Height, Forehead, Chin Height, and Chin Contour. Because it is difficult to use Adobe Photoshop directly, we reproduce the filters in Python. The hyperparameters are the same as those used for photo enhancement, except that the MSE weight and the number of discrete steps $M$ are 300 and 17, respectively.

Table 3: The result of the user study on the SCUT-FBP5500 dataset [13].

Qualitative Evaluation

In Figure 5, we show qualitative comparisons with CycleGAN [20] and several face attribute manipulation methods: ResGAN [16], DFI [17], and Facelet [3]. All of these methods use CNNs to manipulate portraits. As shown in the results, ResGAN mainly generates artifacts around the eyes. Although CycleGAN, DFI, and Facelet try to make the faces look attractive, the edited images have artifacts derived from the structure of CNNs, which can prove fatal for the task of face beautification. Compared to these methods, our method naturally beautifies the faces by manipulating geometric structure, such as enlarging the eyes or thinning the contours.

User Study

We evaluate the proposed method through a user study. 100 images are randomly selected from the less attractive images excluding those used for training, and we perform beautification using each existing method and the proposed method. We ask crowdworkers to evaluate the images according to naturalness and preference in the same way as for photo enhancement. Table 3 shows the average of all evaluations. The proposed method obtains a higher evaluation than all existing methods, which shows that it is capable of high-quality beautification.


Conclusion

In this study, we address unpaired image enhancement, a task of learning a mapping function which transforms input images into enhanced images in the absence of input-output image pairs. Existing CNN-based methods have the following problems: the generated images have artifacts due to the neural network architecture, only images with limited resolution can be generated, and the enhancement cannot be interpreted. To solve these problems, we use image editing software such as Adobe Photoshop, which performs high-quality enhancement while avoiding these problems. To use image editing software in a GAN, we propose an RL framework where the generator works as an agent controlling the software and the output of the discriminator is used as the reward. The framework can use carefully designed non-differentiable filters, which enable high-quality enhancement. We apply the proposed method to photo enhancement and face beautification, and the experimental results show that our method performs better than existing methods.


Acknowledgments

A part of this research was supported by JSPS KAKENHI Grant Number 19K22863.


References

  1. V. Bychkovsky, S. Paris, E. Chan and F. Durand (2011) Learning photographic global tonal adjustment with a database of input/output image pairs. In CVPR, pp. 97–104.
  2. Q. Cao, L. Lin, Y. Shi, X. Liang and G. Li (2017) Attention-aware face hallucination via deep reinforcement learning. In CVPR, pp. 690–698.
  3. Y. Chen, H. Lin, M. Shu, R. Li, X. Tao, X. Shen, Y. Ye and J. Jia (2018) Facelet-bank for fast portrait manipulation. In CVPR, pp. 3541–3549.
  4. Y. Chen, Y. Wang, M. Kao and Y. Chuang (2018) Deep photo enhancer: unpaired learning for image enhancement from photographs with GANs. In CVPR, pp. 6306–6314.
  5. R. Furuta, N. Inoue and T. Yamasaki (2019) Fully convolutional network with multi-step reinforcement learning for image processing. In AAAI, pp. 3598–3605.
  6. Y. Ganin, T. Kulkarni, I. Babuschkin, S. A. Eslami and O. Vinyals (2018) Synthesizing programs for images using reinforced adversarial learning. In ICML, pp. 1652–1661.
  7. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In NIPS, pp. 2672–2680.
  8. I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin and A. C. Courville (2017) Improved training of Wasserstein GANs. In NIPS, pp. 5767–5777.
  9. Y. Hu, H. He, C. Xu, B. Wang and S. Lin (2018) Exposure: a white-box photo post-processing framework. In ACM TOG, Vol. 37, pp. 26.
  10. P. Isola, J. Zhu, T. Zhou and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR, pp. 1125–1134.
  11. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  12. D. Li, H. Wu, J. Zhang and K. Huang (2018) A2-RL: aesthetics aware reinforcement learning for image cropping. In CVPR, pp. 8193–8201.
  13. L. Liang, L. Lin, L. Jin, D. Xie and M. Li (2018) SCUT-FBP5500: a diverse benchmark dataset for multi-paradigm facial beauty prediction. In ICPR, pp. 1598–1603.
  14. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver and K. Kavukcuoglu (2016) Asynchronous methods for deep reinforcement learning. In ICML, pp. 1928–1937.
  15. J. Park, J. Lee, D. Yoo and I. S. Kweon (2018) Distort-and-recover: color enhancement using deep reinforcement learning. In CVPR, pp. 5928–5936.
  16. W. Shen and R. Liu (2017) Learning residual images for face attribute manipulation. In CVPR, pp. 4030–4038.
  17. P. Upchurch, J. Gardner, G. Pleiss, R. Pless, N. Snavely, K. Bala and K. Weinberger (2017) Deep feature interpolation for image content changes. In CVPR, pp. 7064–7073.
  18. T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR, pp. 8798–8807.
  19. K. Yu, C. Dong, L. Lin and C. C. Loy (2018) Crafting a toolchain for image restoration by deep reinforcement learning. In CVPR, pp. 2443–2452.
  20. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, pp. 2223–2232.