Learn to Model Motion from Blurry Footages

Wenbin Li, Da Chen, Zhihan Lv, Yan Yan, Darren Cosker
Department of Computing, Imperial College London, UK; Department of Computer Science, University College London, UK; Department of Computer Science, University of Bath, UK; Department of Information Engineering and Computer Science (DISI), University of Trento, Italy; Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), University of Bath, UK
Abstract

It is difficult to recover the motion field from real-world footage given a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first construct a CNN architecture using a novel learnable directional filtering layer. This layer encodes the angle and distance similarity matrix between blur and camera motion, and is able to enhance the blur features of camera-shake footage. The proposed CNN is then integrated into an iterative optical flow framework, which enables modelling and solving both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against the state-of-the-art approaches.

keywords:
Optical Flow, Convolutional Neural Network (CNN), Video/Image Deblurring, Directional Filtering
journal: Pattern Recognition

1 Introduction

In the image space, the information conveyed by the dynamic behavior of an object of interest, or by the motion of the camera itself, is decisive for interpreting natural phenomena. Dense motion estimation, in particular optical flow between a consecutive image pair, is the most low-level characterization of such information: it estimates a dense field corresponding to the displacement of each pixel. It has become one of the most active fields of computer vision because such characterizations can be embedded into a large number of higher-level computer vision tasks and application domains. Indeed, one may be interested in tracking APO (); APO_JIFS (); tang (), 3D reconstruction reflection (), segmentation, as well as virtual reality, augmented reality and post-production Rotopp2016 (); tv ().

A typical optical flow pipeline relies on solving a brightness energy with the assistance of patch detection, matching, constrained optimization and interpolation. For many state-of-the-art approaches – even though the precision has reached a reasonable level – the related applications are still limited by difficult photometric effects and low runtime performance. In recent years, deep Convolutional Neural Networks (CNNs) have developed rapidly, providing hidden features and end-to-end knowledge representation for many central problems, e.g. motion and texture style. Such knowledge representation is able to improve both the robustness and the speed of the typical optical flow pipeline.

Camera-shake blur is a common photometric effect in real-world footage, often caused by fast camera motion under low light conditions. Such an effect may introduce nearly invariant blur at every pixel, and brings extra difficulty into typical optical flow estimation because the basic brightness constancy HS () is violated. However, the blur in daily video footage (24 FPS) can be directionally characterized moBlur (). This observation enables an extra prior to enhance camera-shake deblurring Zhong () and further recover precise optical flow from blurry images. Such a directional prior requires strict pre-knowledge of the camera motion direction, which can be obtained by an external sensor moBlur ().
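To give a concrete picture of what "directionally characterized" blur looks like at ordinary frame rates, the sketch below builds a near-linear blur kernel oriented along an assumed camera motion direction. It is purely illustrative: the kernel length, support size and angle are hypothetical and not taken from the paper.

```python
import numpy as np

def linear_blur_kernel(angle_rad, length=15, size=31):
    """Illustrative near-linear blur kernel: a short line segment laid out
    along the (assumed) camera motion direction, normalized to sum to one."""
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-length / 2.0, length / 2.0, 10 * length):
        x = int(round(c + t * np.cos(angle_rad)))
        y = int(round(c + t * np.sin(angle_rad)))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] += 1.0
    return k / k.sum()

k = linear_blur_kernel(np.deg2rad(30.0))
print(k.shape, k.sum())  # (31, 31) 1.0
```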

1.1 Contributions

In this paper, we study the problem of recovering accurate optical flow from frames of real-world video footage corrupted by camera-shake blur. The main idea is to learn directional filters that encode the angle and distance similarity between blur and camera motion. Such filters are further applied to enhance the optical flow estimation. Our proposed method only relies on the input images, and does not need any other information, e.g. ground truth camera motion or a blur prior.

In overview, we propose a novel hybrid approach: (1) we construct a CNN architecture using a learnable directional filtering layer. Our network is able to extract the blur and latent features from a blurry image, and further recover the blur kernel in an iterative deconvolution fashion (Sec. 4); (2) we integrate our network into a variational optical flow energy, which is optimized within a hybrid coarse-to-fine framework (Sec. 5).

In the evaluation (Sec. 6), we quantitatively compare our method to four baselines on synthetic Ground Truth (GT) sequences. These baselines include two blur-oriented optical flow approaches and two other publicly available state-of-the-art methods. We also give a qualitative comparison on real-world blurry footage.

2 Related Work

In this section, we briefly discuss related work in the fields of image deblurring and optical flow estimation.

2.1 Image Deblurring

Image blur is a common photometric effect in everyday capture. It is often caused by fast camera movement under low light conditions. Such global blur can be formulated as follows:

$$B = I \otimes k + n$$

where the observed blurred image $B$ is represented as a combination of spatial noise $n$ and a convolution between the latent sharp image $I$ and a spatially invariant blur kernel $k$ w.r.t. the Point Spread Function. To solve for $I$ and $k$, a blind deconvolution is normally performed on $B$:

$$\min_{I,\,k}\ \left\| I \otimes k - B \right\|_2^2 + \rho(I)$$

where $\rho(\cdot)$ represents a regularization that penalizes spatial smoothness with a sparsity prior FMD (). To solve this ill-posed problem, many approaches rely on additional priors regarding properties of the observed images krishnan2011blind (); Xu_deblur (); panblind (); michaeli2014blind (); sun2013edge (); xiang2012image (); yun2011linearized (); shao2016regularized (). Pan et al. panblind (), for example, propose a blind deconvolution method that takes advantage of the dark channel he2011single (), based on the observation that dark pixels in the observed image are normally averaged with neighboring pixels along the blur. Krishnan et al. krishnan2011blind () introduce a novel scale-invariant regularizer that generates a more stable kernel by compensating for the attenuation of high frequencies.
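As a rough illustration of this image formation model (not the authors' code), the following sketch synthesizes a blurred observation from a latent image using FFT-based convolution plus additive noise; the latent image and kernel here are random placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
latent = rng.random((128, 128))           # placeholder latent sharp image I
kernel = np.ones((1, 9)) / 9.0            # placeholder horizontal blur kernel k
noise = 0.01 * rng.standard_normal((128, 128))

# B = I (x) k + n : spatially invariant convolution plus spatial noise
blurred = fftconvolve(latent, kernel, mode="same") + noise
print(blurred.shape)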

Taking efficient inference into account, several algorithms FMD (); xu2010two (); Zhong (); Shan () have also been proposed to solve the deblurring problem. Cho and Lee FMD () adopt a predicted edge map as a prior and solve the blind deconvolution energy within a coarse-to-fine framework. Xu et al. xu2010two (), however, discuss a key observation that salient edges do not always help blur kernel estimation. Such edges can greatly increase the blur ambiguity in many common scenes. Hence, instead of using an edge map, they propose an automatic gradient selection scheme that eliminates "noisy" edges for kernel initialization. Furthermore, Zhong et al. Zhong () introduce an approach to reduce noise using a pre-filtering process. Such a process preserves the useful image information by reducing the noise along a specific direction.

Both the natural image property based and the efficient inference based methods mentioned above are able to provide highly accurate deblurring results for general spatially invariant camera-shake blur. However, these methods often struggle with spatially varying blur. A handful of approaches have been proposed to solve this problem gupta2010single (); hirsch2011fast (); whyte2012non (); hu2014joint (); khare2016blind (). Gupta et al. gupta2010single () propose a Motion Density Function to represent the camera motion, which is further adopted to recover the spatially varying blur kernel. Hu et al. hu2014joint () consider the varying depth of the scene, whereas most deblurring methods assume a constant depth for simplicity. They apply a unified layer-based model to jointly estimate the depth and the deblurring result from the underlying geometric relationship caused by camera motion.

Since each of the methods mentioned above has its own specialty and limitations, there is no general solution for images blurred by mixed sources, e.g. a mixture of fast camera and object movement and scene depth variation. In such cases, the image blur is hard to represent by a global model. With the development of the Convolutional Neural Network (CNN), several CNN based deblurring methods have been proposed to address this problem. Hradiš et al. hradivs2015convolutional () apply a CNN to restore blurred text documents, which is restricted to highly structured data. Xu et al. xu2014deep () propose a more general deblurring method. They design a neural network that is guided by traditional deconvolution schemes.

The methods mentioned above usually take a single blurred image as input. There are also hardware-assisted methods that are supposed to improve the precision and performance of deblurring levin (); Joshi (); tai08 (); huimage (). Levin et al. levin () propose a uniform deblurring method using known camera arc motion: a uniformly deblurred image can be estimated by controlling the camera movement along a parabolic arc. As an extension of this work, Joshi et al. Joshi () propose to estimate the acceleration and angular velocity of the camera using inertial sensors, i.e. gyroscopes and accelerometers. Instead of highly accurate dedicated sensors, Hu et al. huimage () introduce a deblurring approach using smartphone inertial sensors. These methods with extra camera motion information often yield higher performance compared to those relying only on a single blurred image. However, they require a complex camera setup and precise calibration.

2.2 Optical Flow

Figure 1: Our Iterative Deblurring Network. Our network first filters the input image along various directions. The filtered images are then transformed to a learned feature representation for the subsequent kernel and latent image estimation. The resulting latent image, together with the blurred one, is input to the next iteration until the final latent image and blur kernel are obtained.

Dense motion estimation, in particular optical flow, has been widely studied as it can be applied in many computer vision applications, e.g. video segmentation grundmann2010efficient (), recognition wang2013action () and virtual reality vr (). Many estimation methods have obtained impressive performance in terms of reliability and accuracy on the Middlebury Middlebury () and Sintel Sintel () benchmarks. Most of these works are based on the pioneering optical flow method proposed by Horn and Schunck HS (). They combine a data term and a smoothness term into an energy function, where the former assumes constancy of some image feature – typically the Brightness Constancy Constraint (BCC) – and the latter controls how the motion field varies (such as the Motion Smoothness Constraint). This energy function is then optimized across the entire image to reach the global motion field. The original formulation is generally applicable but often limited by challenges such as large displacement, non-rigid motion, motion boundary discontinuities and motion blur Sintel (). Numerous works have been proposed to address these challenges by introducing additional constraints and more advanced optimization procedures black1996robust (); Brox (); sun2010layered (); xu2012motion (); LME (); tu2016weighted (); ahmad2008human (); smokeDa (); GT (); gt_case (); vi_nc (). Brox et al. Brox () bring a gradient constancy assumption into the data term in order to reduce the dependency on the BCC, and a discontinuity-preserving spatio-temporal smoothness constraint to deal with motion discontinuities. Xu et al. xu2012motion () propose an extended coarse-to-fine (EC2F) refinement framework that takes advantage of feature matching techniques. Li et al. LME () apply a Laplacian mesh energy to handle non-rigid deformation in the scene.

Moreover, Neural Network based methods have recently become popular. Revaud et al. revaud2015epicflow () propose an edge-preserving interpolation based on a sparse deep convolutional matching result. The sparse-to-dense interpolation result is then applied to initialize the optimization process for obtaining the final motion field. However, this method strongly relies on the quality of the sparse matching, where parameters are set manually. Dosovitskiy et al. dosovitskiy2015flownet () propose an automatic approach for matching and interpolation. Guided by a correlation layer, their network can better predict the flow to initialize the refinement. Furthermore, Teney&Hebert teney2016learning () introduce a stand-alone CNN structure for motion estimation requiring less training data. The result, however, is inferior to state-of-the-art methods.

The presence of blur in the scene often causes traditional optical flow methods to fail because it violates the brightness constancy assumption. Only a few approaches have been introduced to address this problem Portz (); he2010motion (); wulff2014modeling (); tu2015estimating (); li2013nonrigid (). Portz et al. Portz () treat the appearance of each input frame as a parameterized function combining pixel motion and motion blur. The motion cues are then integrated into the data term of the energy function. However, this favors a smooth motion field and usually fails at motion boundaries. To solve this problem, Wulff and Black wulff2014modeling () treat the motion blur as a function of layer motion and segmentation determined by a generative model. An optimization is then applied to minimize the pixel error between the input blurred images and a synthetic image. Tu et al. tu2015estimating () edit the data term using a blur detection based matching method. Their approach is supposed to improve the flow regularization at motion boundaries. Li et al. moblur_nc () embed an additional camera motion channel into a hybrid framework in order to obtain the deblurring and motion estimation results iteratively. Their method requires a physical motion tracker to obtain the ground truth motion accompanying the moving camera. Such motion information serves as a hard constraint in the image deblurring step. Besides, their method needs careful manual tuning for different sequences, e.g. kernel size, the number of levels of the image pyramid, etc.

In summary, current methods struggle to estimate optical flow from blurry images because the blur may break the photometric properties and further mislead the common regularization. Our proposed method represents the blurred image using CNN features, which are then used to recover optical flow within a fast optimization framework.

In the following sections, we first discuss our pipeline for recovering the optical flow from blurred footage (Sec. 3). We then introduce our main contributions: a novel CNN based deblurring framework (Sec. 4), our hybrid optical flow framework (Sec. 5) and the evaluation (Sec. 6).

3 Recover Motion Field from Blurred Footage

The typical optical flow framework considers a pair of adjacent images, and follows the Brightness Constancy assumption ($E_{\mathrm{data}}$) and a global smoothness constraint ($E_{\mathrm{smooth}}$), as follows:

$$E(\mathbf{w}) = E_{\mathrm{data}}(\mathbf{w}) + \lambda\, E_{\mathrm{smooth}}(\mathbf{w}) \tag{1}$$

where $B_1$ and $B_2$ denote the current frame and its successor respectively. Those observed images can also be represented using the relative latent image and blur kernel, $B_1 = I_1 \otimes k_1$ and $B_2 = I_2 \otimes k_2$. The optical flow field, denoted by $\mathbf{w} = (u, v)^{\top}$, can be obtained by minimizing this functional.

However, given such a pair of blurred images, the blur may damage the image structure and violate the basic Brightness Constancy assumption of optical flow estimation. The resulting large number of outliers would introduce uncertain errors into the energy optimization. The straightforward solution is to remove the blur before performing the optical flow estimation. The deblurring process may sharpen the images but permanently changes the pixel intensities and may introduce unpredictable artifacts. The alternative is to match the non-uniform blur moblur_nc (); Portz () between the input images:

$$\bar{B}_1 = B_1 \otimes k_2, \qquad \bar{B}_2 = B_2 \otimes k_1 \tag{2}$$

where the resulting uniformly blurred images $\bar{B}_1$ and $\bar{B}_2$ are used in the Blur Brightness and Blur Gradient Constancy terms:

$$E_{\mathrm{data}}(\mathbf{w}) = \int_{\Omega} \left( \left| \bar{B}_2(\mathbf{x}+\mathbf{w}) - \bar{B}_1(\mathbf{x}) \right|^2 + \gamma \left| \nabla \bar{B}_2(\mathbf{x}+\mathbf{w}) - \nabla \bar{B}_1(\mathbf{x}) \right|^2 \right) d\mathbf{x} \tag{3}$$

where $\nabla$ denotes the spatial gradient and $\gamma$ is a linear weight. The smoothness term regularizes the global flow variation as follows:

$$E_{\mathrm{smooth}}(\mathbf{w}) = \int_{\Omega} \psi_L\!\left( \left| \nabla u \right|^2 + \left| \nabla v \right|^2 \right) d\mathbf{x} \tag{4}$$

where a Lorentzian regularization $\psi_L$ is applied to preserve motion boundaries. The blur matching is supposed to protect the color properties of the images, and further keeps the color correlation and consistency across the input images. In Table 2, we quantitatively evaluate how the blur matching significantly improves the flow precision.
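The sketch below is a minimal numpy illustration of the cross-blur matching idea (Eq. 2) and the resulting blur brightness/gradient constancy residual (cf. Eq. 3). It assumes both kernels are already known and that the second image has been warped by the current flow; the function names and the weight value are illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import fftconvolve

def cross_blur(B1, B2, k1, k2):
    # Blur each observation with the *other* frame's kernel so that both
    # carry the same combined blur (Eq. 2), instead of deblurring the inputs.
    B1_bar = fftconvolve(B1, k2, mode="same")
    B2_bar = fftconvolve(B2, k1, mode="same")
    return B1_bar, B2_bar

def data_residual(B1_bar, B2_bar_warped, gamma=0.5):
    # Blur brightness + gradient constancy residual (cf. Eq. 3); the second
    # matched-blur image is assumed to be warped by the current flow estimate.
    gy1, gx1 = np.gradient(B1_bar)
    gy2, gx2 = np.gradient(B2_bar_warped)
    brightness = (B2_bar_warped - B1_bar) ** 2
    gradient = (gx2 - gx1) ** 2 + (gy2 - gy1) ** 2
    return (brightness + gamma * gradient).mean()
```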

In the following sections, we present our CNN based approach which consists of two stacked modules: (1) a layered network for blind deconvolution; (2) an iterative optical flow framework.

4 A Layered Network for Blind Deconvolution

Figure 2: Intermediary Visualization of the Layers. In our implementation, the network consists of six layers, from left to right: directional filtering, convolution filtering, tanh nonlinearity, linear combination on hidden layers, tanh nonlinearity and a linear combination to four feature representations, in particular two sharp images and two blurry ones.

As shown in Fig. 1, we propose a multi-iteration coarse-to-fine blind deconvolution module built around a trainable convolutional neural network. In each iteration, the input images go through the following processes:

4.1 Directional Filtering

The blur in daily photography may be highly nonlinear and hard to predict. It can, however, be parameterized in a near linear form if it comes from daily video footage captured at an ordinary frame rate (24 FPS). In this context, directional filters Zhong (); moblur_nc () can be effective for regularizing the blur within the deconvolution operation. A common form reads:

$$B^{\theta}(\mathbf{x}) = \frac{1}{Z} \sum_{t=-r}^{r} G_{\sigma}(t)\, B\!\left(\mathbf{x} + t\,[\cos\theta,\ \sin\theta]^{\top}\right) \tag{5}$$

where $\mathbf{x}$ represents a pixel location, $G_{\sigma}$ denotes a Gaussian kernel and $\theta$ controls the filtering direction. The filter is further normalized by $Z = \sum_{t} G_{\sigma}(t)$. Such a directional filter is able to remove general noise while not affecting the signal along the orthogonal direction. Given a ground truth blur direction $\theta^{*}$, the filtered image $B^{\theta^{*}}$ may destroy the color properties but is supposed to enhance the useful blur information.

In our network, we propose a novel Directional Filtering (DF) layer which calculates a new group of image representations by applying directional filters across different directions. This process aims to remove the spatial noise while preserving the blur information. We therefore model the first filtering layer using weights shared across all locations. Our filters read:

$$\mathcal{D}(B) = \sum_{i=1}^{N} w_i \left( f_{\theta_i} * B \right) \tag{6}$$

where $f_{\theta_i}$ denotes the directional filter and $w_i$ weights the strength of the specific filtering direction. $N$ is the number of direction samples. In our implementation, we uniformly sample the directions within $[0, \pi)$. After the directional filtering, we further construct the deep feature representation for the images.
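The following sketch illustrates one way such a DF layer can be realized with numpy/scipy: a bank of 1-D Gaussian stencils oriented along uniformly sampled directions, combined with per-direction weights. The kernel radius, sigma and the way the weights are supplied are assumptions for illustration; in the actual network the weights would be learned.

```python
import numpy as np
from scipy.ndimage import convolve

def directional_filter(theta, radius=7, sigma=2.0):
    """1-D Gaussian smoothing kernel laid out along direction theta,
    rasterized into a small 2-D stencil and normalized (illustrative)."""
    size = 2 * radius + 1
    k = np.zeros((size, size))
    for t in range(-radius, radius + 1):
        x = int(round(radius + t * np.cos(theta)))
        y = int(round(radius + t * np.sin(theta)))
        k[y, x] += np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def df_layer(image, weights, n_dirs=8):
    # DF layer (cf. Eq. 6): weighted sum of the image filtered along uniformly
    # sampled directions; `weights` stands in for the learnable parameters.
    thetas = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    out = np.zeros_like(image)
    for w, theta in zip(weights, thetas):
        out += w * convolve(image, directional_filter(theta), mode="nearest")
    return out
```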

4.2 Feature Representation

Similar to Schuler et al. learndeblur (), our network does not predict the latent image directly but performs a Feature Representation step which computes gradient image representations and preliminary estimates for the subsequent kernel and latent image estimation. Our scheme extracts features from a subset of pixels; the local feature information is then integrated by a global combination. Such a heuristic strategy can greatly shrink the number of parameters to optimize.

To extract our Feature Representation, we adopt a sub-network with three layers, namely convolution, nonlinearity and linear combination. We first apply a group of Convolutional Filters (CFs) to the denoised (directionally filtered) images, then transform the values using the tanh function. The resulting features are further combined linearly into a new representation as follows:

$$\hat{L} = \sum_{j} \alpha_j \tanh\!\left( c_j * \mathcal{D}(B) \right), \qquad \hat{B} = \sum_{j} \beta_j \tanh\!\left( c_j * \mathcal{D}(B) \right) \tag{7}$$

where $c_j$ denotes a set of Convolutional Filters that are shared between the latent and blur representations, $\tanh$ presents the nonlinearity, while $\alpha_j$ and $\beta_j$ weight the linear combinations for the latent estimate $\hat{L}$ and the blur estimate $\hat{B}$ respectively. Fig. 2 shows the intermediary features for each layer of our network. Note that we stack the tanh and linear combination layers twice after the convolutional layer to obtain a proper level of nonlinearity learndeblur (). In practice, those layers can be further stacked for difficult cases, i.e. strong blur and noise xu2014deep ().

After those layers, we obtain the new feature representations $\hat{L}$ and $\hat{B}$ respectively. These featured images are then used to estimate the latent image and kernel.
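A minimal numpy sketch of this convolution–tanh–linear-combination step is shown below, under the assumption (following the reading of Eq. 7 above) that one set of shared convolution filters feeds two learned linear combinations. The filters and weights are random placeholders, not trained values, and the exact wiring of the paper's network may differ.

```python
import numpy as np
from scipy.ndimage import convolve

def feature_representation(denoised, conv_filters, alpha, beta):
    """Shared convolution filters, tanh nonlinearity, then two learned linear
    combinations producing a latent-oriented and a blur-oriented estimate."""
    hidden = [np.tanh(convolve(denoised, f, mode="nearest")) for f in conv_filters]
    latent_feat = sum(a * h for a, h in zip(alpha, hidden))
    blur_feat = sum(b * h for b, h in zip(beta, hidden))
    return latent_feat, blur_feat

# toy usage with random 5x5 filters and random combination weights
rng = np.random.default_rng(1)
filters = [rng.standard_normal((5, 5)) * 0.1 for _ in range(8)]
L_hat, B_hat = feature_representation(rng.random((64, 64)),
                                      filters, rng.random(8), rng.random(8))
```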

4.3 Kernel and Latent Image Estimation

Once we have the current feature representations $\hat{L}$ and $\hat{B}$, a variation of Cho&Lee FMD (); moblur_nc () is adopted for the Kernel and Latent Image Estimation. Our method consists of two steps: (1) Given $\hat{L}$ and $\hat{B}$, we calculate their gradient maps along the horizontal and vertical directions, which further preserves the high frequency information, i.e. edges and image structure. (2) The resulting gradient maps $\partial_{*}\hat{L}$ and $\partial_{*}\hat{B}$ are then used to optimize the energy:

$$\min_{k}\ \sum_{\partial_{*} \in \{\partial_x,\ \partial_y\}} \omega_{*} \left\| \partial_{*}\hat{L} \otimes k - \partial_{*}\hat{B} \right\|_2^2 + \beta \left\| k \right\|_2^2 \tag{8}$$

Figure 3: Kernels for Training. To train our network, 16 kernels are estimated from a real-world footage moblur_nc (). We then generate 10 variations of each kernel by rotation. Those kernels are resized to various sizes and then applied to the sharp training images.

where $\omega_{*}$ linearly weights the derivatives in either direction while $\beta$ is a weight for the Tikhonov regularization on the kernel. Here both the initial $\hat{L}$ and $\hat{B}$ come from our feature representations. The energy function above is highly nonlinear and is minimized following an iterative numerical scheme from FMD (); moblur_nc (). The resulting pre-optimal kernel $k$ can then be used to estimate the latent image within a Non-blind Deconvolution process:

$$\min_{L}\ \left\| L \otimes k - B \right\|_2^2 + \lambda \left\| \nabla L \right\|_2^2 \tag{9}$$

By minimizing the energy function above, we obtain the latent image $L$. Depending on the desired quality of the final deblurring, $L$ can be stacked with the blurred image along the third dimension. Such a stacked image is input to our network and run through the layers iteratively until the final $L$ and $k$ are obtained. In this case, all the learned filters of our network have three dimensions.
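For quadratic energies of this kind, a standard closed-form solver works in the Fourier domain; the sketch below shows that common approach for a gradient-regularized non-blind deconvolution. It is a generic reference implementation, not necessarily the exact solver or regularization weights used in the paper.

```python
import numpy as np

def nonblind_deconv(blurred, kernel, lam=2e-3):
    """Closed-form minimizer of ||L (x) k - B||^2 + lam * ||grad L||^2 in the
    Fourier domain (assumes circular boundary conditions)."""
    H, W = blurred.shape
    K = np.fft.fft2(kernel, s=(H, W))
    B = np.fft.fft2(blurred)
    # frequency responses of forward-difference derivative filters
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=(H, W))
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(H, W))
    denom = np.abs(K) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(np.conj(K) * B / denom))
```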

In summary, our network only learns the free parameters in the Directional Filtering and Feature Representation steps, and fixes the hyper-parameters in the Kernel and Latent Image Estimation. In this case, similar to learndeblur (), our learning model focuses on learning filters with a limited receptive field instead of the full dimensionality of the input blurred images.

4.4 Parameter Training

Figure 4: Features Learnt from Our Network.

Similar to traditional approaches, we synthesize pairs of latent and blurred images to train our network. We randomly sample 1,000 images (cropped to 480×480 pixels) from the recent large-scale dataset MIFDB16 (), which contains around 34,000 feature-rich sharp images from three synthetic scenes. To synthesize the associated blurred images, we first adopt 16 kernels from a real-world footage moblur_nc (). As shown in Fig. 3, those kernels are near linear. For each of these selected kernels, 10 variations are generated by rotation, giving 160 distinct kernel variations. We randomly resize each of those kernels to different sizes, then apply them to the selected sharp images respectively. After this process, we obtain 160,000 pairs of training images. During the training, we randomly add either Gaussian (0.01) or Salt&Pepper (0.15) noise.
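A possible data synthesis routine along these lines is sketched below: rotate a real-world kernel, rescale it, convolve it with a sharp image and add noise. The rotation angle, scale factor and noise parameters are passed in by the caller and are only placeholders for the ranges described in the text.

```python
import numpy as np
from scipy.ndimage import rotate, zoom
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def synthesize_pair(sharp, base_kernel, angle_deg, scale, noise="gauss"):
    """Build one sharp/blurred training pair from a real-world kernel."""
    k = rotate(base_kernel, angle_deg, reshape=False, order=1)  # rotated variation
    k = zoom(k, scale, order=1)                                 # resized kernel
    k = np.clip(k, 0, None)
    k /= k.sum()
    blurred = fftconvolve(sharp, k, mode="same")
    if noise == "gauss":
        blurred += 0.01 * rng.standard_normal(sharp.shape)      # Gaussian noise
    else:
        mask = rng.random(sharp.shape) < 0.15                   # Salt & Pepper noise
        blurred[mask] = rng.integers(0, 2, mask.sum()).astype(float)
    return blurred, k
```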

To obtain a proper result, we may perform several iterations of our network, which leads to a large number of parameters to train. Here we follow the stage based training strategy from learndeblur (). We start with the first iteration by applying the loss function to the ground truth and estimated results. We then fix the parameters from the previous iteration and only update those of the next iteration, repeating until the last iteration. This training process is more efficient than the end-to-end strategy because it limits the number of updated parameters at each training stage. In practice, we adopt a fixed learning rate (0.01) and a decay rate (0.95).
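A structural sketch of such a stage-wise schedule is shown below. The helper names (`stages`, `train_step`) and the epoch count are hypothetical; the point is only that stage t updates the parameters of network iteration t while all earlier iterations stay frozen, with a fixed learning rate and decay.

```python
def train_stagewise(stages, train_step, n_epochs=10, lr=0.01, decay=0.95):
    """Stage-wise schedule: stage t updates only the parameters of network
    iteration t; earlier iterations remain frozen (hypothetical helpers)."""
    for t, params in enumerate(stages):
        rate = lr
        for _ in range(n_epochs):
            train_step(params, frozen=stages[:t], learning_rate=rate)
            rate *= decay  # fixed decay rate, as described in the text

# toy usage: three "iterations", each with its own parameter dictionary
stages = [{"w": 0.0} for _ in range(3)]
train_stagewise(stages, lambda p, frozen, learning_rate: None, n_epochs=2)
```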

Figure 5: Training Performance of Our Network. The MSE metric is measured during training for different architectures of our network (Sec. 4.4). A larger number of hidden layers, DFs and CFs improves the general performance.

Fig. 5 illustrates the training performance of four variations of our approach, i.e. LMoF, LMoF-Deep, LMoF-NoDF and LMoF-Deep-NoDF. Here LMoF denotes our method using a network of 8 DFs, 8 CFs, 8 hidden layers and linear combination, while LMoF-Deep presents an enhanced version using a deeper network of 24 DFs, 48 CFs, 30 hidden layers and linear combination. LMoF-NoDF and LMoF-Deep-NoDF are the corresponding versions without the DF layer. LMoF and LMoF-Deep outperform the versions without the DF layer. It is worth noting that with a larger number of DFs, CFs, hidden layers and network iterations, the deblurring quality can be greatly improved.

The experiments in Table 1 (LMoF versus LMoF-NoDF) illustrate that the final optical flow precision is improved by around 20% across all trials. The deeper version shows a similar comparative trend: LMoF-Deep gives lower training error than LMoF-Deep-NoDF, and generally the best precision for most of the trials in Table 1. We believe that these improvements in training and results are not due to overfitting. However, the number of network iterations significantly affects the computational speed. In this context, we run three iterations in each of our implementations to balance overall performance and precision.

Algorithm 1: CNN based Optical Flow Framework
Input:  a blurred image pair B1, B2
Output: optical flow field w
 1:  Construct an n-level top-down image pyramid
 2:  Level index l
 3:  Initialize …
 4:  for l = coarse to fine do
 5:      …
 6:      B1, B2 and w resized to the l-th scale
 7:      foreach … do
 8:          CNNFeatureNet ( … )
 9:          ( … )
11:          ( … )
12:      endfor
13:      Blur matching: B̄1, B̄2
14:      EnergyOpt ( … )
15:      …
16:  endfor

In the next section, we embed our proposed network into an optical flow framework.

5 An Iterative Optical Flow Framework

Algorithm 1 sketches the proposed CNN based Optical Flow Framework which interleaves our layered network and an iterative optical flow optimization.

Within our framework, the input images are first resized into a coarse-to-fine (top-down) pyramid. On each pyramid level: (1) the resized images are input into our Layered Network for Blind Deconvolution (Sec. 4), which yields intermediary latent images and kernels. (2) This information is then used to generate the uniform Motion Energy for Blurred Images (Sec. 3). (3) This blur-aware energy is optimized for the incremental optical flow field (Sec. 5.1). Finally, the estimated flow and latent images are propagated to the next level until convergence. Note that our framework is not a simple combination of image deblurring and optical flow estimation. Our Layered Network for Blind Deconvolution is deeply embedded (Per-level) into every level of the image pyramid, and the following blur matching step (step 13, Algorithm 1) further preserves brightness constancy. In this case, the CNN based deblurring process is automatically adapted to the image size (the different levels of the image pyramid). Table 2 quantifies the advantage of our Per-level strategy on the ground truth dataset.
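The loop below sketches this interleaving in the spirit of Algorithm 1. It is a structural outline only: `deblur_net` and `energy_opt` are placeholders standing in for the layered network (Sec. 4) and the variational solver (Sec. 5.1), and the flow upsampling between levels is handled in the simplest possible way.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def coarse_to_fine_flow(B1, B2, deblur_net, energy_opt, levels=5, factor=0.8):
    """Structural sketch of the per-level deblur / blur-match / refine loop."""
    w = None
    for level in reversed(range(levels)):                     # coarse -> fine
        s = factor ** level
        b1, b2 = zoom(B1, s, order=1), zoom(B2, s, order=1)
        (L1, k1), (L2, k2) = deblur_net(b1), deblur_net(b2)   # latent images + kernels
        # blur matching: re-blur each frame with the other frame's kernel (Eq. 2)
        b1_bar = fftconvolve(b1, k2, mode="same")
        b2_bar = fftconvolve(b2, k1, mode="same")
        if w is None:
            w = np.zeros(b1.shape + (2,))
        else:                                                  # upsample flow, rescale magnitudes
            ratio = (b1.shape[0] / w.shape[0], b1.shape[1] / w.shape[1], 1)
            w = zoom(w, ratio, order=1) / factor
        w = energy_opt(b1_bar, b2_bar, w)                      # incremental flow refinement
    return w
```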

In the next subsection, we introduce our energy optimization scheme in detail.

5.1 Energy Optimization

Figure 6: Synthetic Ground Truth Sequences. Our additional ground truth sequences are generated by applying real-world blur kernels onto selected frames of the Sintel sequences Bamboo1 and Market2.

Row format: Baseline | Time (s) | AEE, AAE for Grove2, Hydrangea, Rub.Whale, Urban2, Bamboo1, Market2.
LMoF-Deep | 35 |
LMoF | 29 |
LMoF-NoDF | 27 | 0.71 2.78 0.96 0.98 3.88 1.42 3.12 2.46 8.61 4.54 8.98
Li et al. moblur_nc () | 39 | 2.19
Portz et al. Portz () | 79 | 1.14 4.11 2.62 3.55 3.12 8.18 3.44 5.10 2.32 9.02 4.68 8.91
Brox et al. Brox () | 22 | 1.24 4.53 2.26 3.47 2.44 7.98 2.92 4.60 4.86 5.69 6.96 10.18
MDP xu2012motion () | 422 | 1.06 3.46 3.40 3.55 3.70 8.21 5.62 6.82 2.97 10.54 5.88 9.59
FlowNetS dosovitskiy2015flownet () | 0.09 | 1.31 4.48 1.78 3.37 1.20 6.75 2.55 4.24 7.21
FlowNetC dosovitskiy2015flownet () | 0.12 | 1.43 4.80 2.49 3.96 1.87 7.15 3.60 5.55 1.96 6.51 4.18 7.98
Teney&Hebert teney2016learning () | 7 | 0.78 3.21 0.91 2.78 0.88 5.74 1.97 4.10 2.93 6.33 5.49 9.09

Table 1: Quantitative Measure on GT dataset (Li et al.’s benchmark + our customized Sintel). Our method (LMoF-Deep, LMoF and LMoF-NoDF) is compared to the other baselines on the metrics of Average Endpoint Error (AEE), Average Angle Error (AAE) and Average Time Consumption (in second).

Figure 7: Visual Comparison on Bamboo1 and Market2. First Row: the blurry images and the GT flow fields. For the remaining rows, from left to right, the First and Third Columns are the error maps with respect to the GT flow fields, and the Second and Last Columns are the flow fields of each baseline.

To solve our highly nonlinear optical flow energy, Eq. 1, we follow a Nested Fixed Point based optimization scheme Brox () which has recently been used in state-of-the-art approaches. Following Brox (), we define abbreviations for the motion-compensated image differences and their spatial derivatives.

We first apply the Euler-Lagrange equations to the energy Eq. 1. The resulting functional is further minimized in a coarse-to-fine fashion (Algorithm 1). We initialize the flow field on the coarsest level, and iteratively update the flow field on the next finer level as $\mathbf{w} \leftarrow \mathbf{w} + d\mathbf{w}$. Here $d\mathbf{w} = (du, dv)^{\top}$ denotes the increments, which still carry the nonlinearity of the remaining system. These increments are the solutions of

(10)
(11)

where the contained robust terms provide robustness to flow discontinuities at object boundaries, and also act as regularizers for a gradient constraint in motion space. All of those terms are detailed as follows:

(12)

For the further linearization on the system Eqs. (10, 11), please refer to our supplementary document.
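The sketch below illustrates the outer step of such a nested fixed-point scheme: backward-warp the matched-blur second image by the current flow, solve the linearized system for the increment, and update the flow. The inner linear solver is left as a placeholder (`solve_linear_system`), since its exact form depends on the linearization detailed in the supplementary document.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, w):
    """Backward-warp `image` by the current flow estimate w = (u, v)."""
    H, W = image.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([yy + w[..., 1], xx + w[..., 0]])
    return map_coordinates(image, coords, order=1, mode="nearest")

def fixed_point_update(B1_bar, B2_bar, w, solve_linear_system):
    # One outer iteration: warp, linearize around w, solve for the increment
    # dw (e.g. with conjugate gradients), then update the flow field.
    B2_warp = warp(B2_bar, w)
    dw = solve_linear_system(B1_bar, B2_warp, w)   # placeholder inner solver
    return w + dw
```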

Baselines | Time | Urban2 AEE | Urban2 AAE | Market2 AEE | Market2 AAE
Independent Deblurring + Flow EnergyOpt | 9 | 3.66 | 5.78 | 6.63 | 12.46
Independent Deblurring + Blur Matching + Flow EnergyOpt | 9 | 2.07 | 4.11 | 5.17 | 10.60
Per-Level Deblurring + Flow EnergyOpt | 29 | 2.33 | 4.02 | 5.59 | 9.91
Per-Level Deblurring + Blur Matching + Flow EnergyOpt (LMoF) | 29 | 1.28 | 2.99 | 4.23 | 8.90

Table 2: Quantitative Measure on GT sequences Urban2 and Market2. Four variations of our methods are evaluated along with different deblurring methods (Independent or Per-Level) and Blur Matching strategies (on or off) on the metrics of Average Endpoint Error (AEE), Average Angle Error (AAE) and Average Time Consumption (in second).

Baselines | Time | Urban2 AEE | Urban2 AAE | Market2 AEE | Market2 AAE
Independent Chakrabarti Chakrabarti2016 () + BM + FE | 139 | 3.69 | 6.37 | 6.51 | 12.21
Per-Level Chakrabarti Chakrabarti2016 () + BM + FE | 802 | 2.91 | 5.83 | 6.04 | 11.66
Independent Hradiš et al. hradivs2015convolutional () + BM + FE | 60 | 4.19 | 7.84 | 6.92 | 12.97
Per-Level Hradiš et al. hradivs2015convolutional () + BM + FE | 360 | 3.28 | 7.69 | 5.64 | 12.33
Independent Xu&Jia xu2010two () + BM + FE | 63 | 2.13 | 5.44 | 5.29 | 9.71
Per-Level Xu&Jia xu2010two () + BM + FE | 363 | 3.42 | 5.96 | 6.66 | 9.92
Independent Levin et al. levin2011efficient () + BM + FE | 275 | 4.47 | 7.80 | 7.19 | 11.71
Per-Level Levin et al. levin2011efficient () + BM + FE | 1563 | 5.12 | 7.89 | 7.91 | 12.23
Independent ours + BM + FE | 9 | 2.07 | 4.11 | 5.17 | 10.60
Per-Level ours + BM + FE (LMoF) | 29 | 1.28 | 2.99 | 4.23 | 8.90

Table 3: Quantitative Measure on GT sequences Urban2 and Market2. Two of our implementations are compared to eight variations which combine different deblurring baselines (Independent or Per-Level) into our optical flow framework using our blur matching strategy (BM) and energy optimization (FE).

5.2 Implementation

In our implementation, we use a customized C++/CUDA version of Caffe jia2014caffe () for both network training and testing. During training, we sample 8 directions for the directional filtering. The training takes around a week for each network iteration on a platform with an Intel i7 3.5 GHz CPU and a GTX 780 4 GB GPU. Furthermore, we implement the optical flow framework in C++ and construct the image pyramid with a downsampling factor of 0.8. The final system is solved using Conjugate Gradients with 60 iterations.
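Two small sketches illustrate these implementation choices, with assumed helper names and a toy linear operator: first, how many pyramid levels a 0.8 downsampling factor yields for a given frame size (the 32-pixel minimum side is a hypothetical stopping criterion, not stated in the paper); second, running conjugate gradients with a 60-iteration cap, as quoted in the text.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def num_levels(height, width, factor=0.8, min_side=32):
    """Pyramid depth for a given downsampling factor (illustrative stopping rule)."""
    n = 1
    while min(height, width) * factor ** n >= min_side:
        n += 1
    return n

print(num_levels(436, 1024))  # e.g. Sintel-sized frames

# Each linearized system is solved with conjugate gradients; maxiter=60
# mirrors the iteration count quoted above (toy SPD operator for demonstration).
A = LinearOperator((100, 100), matvec=lambda x: 2.0 * x)
b = np.ones(100)
x, info = cg(A, b, maxiter=60)
```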

6 Evaluation

In this section, we perform an evaluation by comparing three variations of our proposed approach – i.e. LMoF, LMoF-Deep and LMoF-NoDF (Sec. 4.4) – to four other well-known optical flow methods, i.e. Portz et al. Portz (), Li et al. moblur_nc (), MDP xu2012motion () and Brox et al. Brox (). Portz et al.'s approach introduces the uniform blur parameterization and provides sharp image alignment for both the camera-shake and object blur cases. Li et al.'s method applies directional filtering and gives the recent state-of-the-art precision for camera-shake blur. MDP is currently one of the top methods on the Middlebury benchmark Middlebury (), while Brox et al.'s shares a similar optimization scheme with the proposed method. We use the default parameter settings for all baselines.

In the following subsections, we evaluate our method on a synthetic GT dataset and on real-world sequences respectively.

6.1 Customized Benchmark

It is difficult to quantitatively evaluate optical flow on real-world blurry scenes, which may lead to ambiguous matching. Portz et al. propose a synthetic benchmark that gives blurry object motion within a blur-free background but lacks camera-shake blur. Furthermore, by carefully sampling the useful correspondences, Li et al. moBlur () synthesize a benchmark for camera-shake blurred scenes by convolving selected blur kernels with a customized subset of the well-known Middlebury dataset. Such a benchmark is challenging as it contains many small details that can easily be destroyed by blur.

In this evaluation, we bring more challenges. As shown in Fig. 6, we synthesize two additional GT sequences by applying Li et al.'s GT methodology to selected Sintel Sintel () sequences (Market2 and Bamboo1, downsampled). This extra benchmark introduces more difficulties, e.g. mixed blur, large displacement and illumination changes.

Table 1 shows the quantitative comparison of our methods (three implementations) against the other baselines. Our LMoF-Deep yields the best Average Endpoint Error (AEE) precision for all the sequences. It also ranks second best in Average Angular Error (AAE) on Market2, and offers the top AAE measure for all other trials. Li et al.'s is the state-of-the-art approach in the community and provides very competitive precision compared to LMoF – a fast version of our method. Their approach results in the second best AEE accuracy on Grove2, Hydrangea, Rub.Whale and Bamboo1, as well as the third best AEE measure on Urban2. Our other implementations, LMoF and LMoF-NoDF, also outperform the baselines Portz et al.'s, Brox et al.'s and MDP for most of the trials. All our implementations show reasonable speed in the experiments. Note that most baselines give relatively large errors (more than 3 pixels AEE and 6 degrees AAE) on Market2 because the sequence contains additional difficulties, e.g. mixed blur (object motion blur and camera blur), large displacements and noise.

Figure 8: AEE Measure on Hydrangea while Ramping Up the Noise. Left: the numerical analysis for varying noise levels. Right: visualizations of the flow fields.

Figure 9: Visual Comparison on the Real-world Sequences Chessboard, Desktop and Books. First Row: the two input frames. For the remaining rows, from left to right, the Second, Fourth and Last Columns are the flow fields of each baseline, and the First, Third and Fifth Columns are the warping results using each baseline's flow field.

Table 1 also demonstrates the advantage of our method over two neural network based optical flow approaches, i.e. FlowNet dosovitskiy2015flownet () (FlowNetS and FlowNetC) and Teney&Hebert teney2016learning (), which provide an end-to-end process to recover optical flow from the input images. We observe that both implementations of FlowNet (FlowNetS and FlowNetC) yield large errors for the small motion scenes (Grove2, Hydrangea, Rub.Whale and Urban2) while they give relatively higher accuracy for the large motion cases, i.e. Bamboo1 (2.00 pixels AEE) and Market2 (4.01 pixels AEE). Furthermore, Teney&Hebert encode a hidden coarse-to-fine optimizer within the network. With this advantage, they give improved results for the small motion scenes and outperform the traditional approaches Brox et al. and MDP in most trials. However, our methods produce the top precision measure for all the sequences except Market2 (FlowNetS, 8.18 degrees AAE).

Fig. 7 visualizes the AEE errors of all the baselines on Bamboo1 and Market2. Our methods yield less detail loss and clearer object boundaries overall. Here Brox et al.'s overly smooths the object details of the scene, and MDP leads to extra errors because its feature detection and matching process is compromised by the blur, which even brings errors into the final energy. We observe that all the baselines produce large errors in the left area of Market2 because the object there moves quickly and leads to extra motion blur. Such spatially varying blur cannot be handled by any of our baselines, and is out of the scope of this paper.

Moreover, Table 2 shows the quantitative analysis of different deblurring strategies within our proposed approach. Here Per-level denotes the deblurring strategy used in our implementations. For each level of the image pyramid (coarse-to-fine), our deblurring network stacks the blurred image and the latent one propagated from the previous level in order to compute the optimized latent image. This latent image is then propagated to the next level. Hence, by the final level, our deblurring network has run a number of network iterations equal to the number of iterations per pass (Fig. 1) multiplied by the number of levels of the image pyramid, on each of the input images. In contrast, Independent deblurring denotes the process where our deblurring network and the optical flow optimization are treated as two independent steps. In this case, the deblurring network runs only once on the full resolution images. We observe that our Per-level strategy significantly improves the precision on both the small motion (Urban2) and large motion (Market2) scenes, but takes longer (29 s vs. 9 s) to compute. The quantitative analysis also illustrates that the Blur Matching (see Sec. 3) further improves the final results.

In Table 3, we further evaluate how our deblurring network contributes to the final optical flow estimation. To highlight our advantage, we propose eight variations by replacing our deblurring network with four selected deblurring approaches in either the Independent or the Per-level fashion. Hradiš et al. hradivs2015convolutional () and Chakrabarti Chakrabarti2016 () are neural network based approaches. The former gives high quality deblurring results on fine details, e.g. text and license plates, while the latter achieves the state of the art for general real-world scenes. Levin et al. levin2011efficient () and Xu&Jia xu2010two () are non-learning methods. The former is one of the most popular approaches in practice, while the latter shows high performance on noisy images. Please note that we adopt the default, fixed parameter setting throughout all trials.

It is observed that our method (both Independent and Per-level) yields the best precision measure for both trials while also being much faster than any other baseline. We also observe that Hradiš et al. hradivs2015convolutional () and Chakrabarti Chakrabarti2016 () provide improved error measures when applied with the Per-level deblurring strategy. However, Levin et al. levin2011efficient () and Xu&Jia xu2010two () give relatively higher accuracy when performed as an Independent process. Our optimization framework (FE) contains a coarse-to-fine image pyramid in a top-down fashion. In the Per-level deblurring strategy, a baseline has to be run at different resolutions (different levels of the image pyramid). It is difficult for the non-learning methods to adapt to different resolutions without manually tuning their parameters, which may introduce extra errors. The neural network based approaches, however, can overcome this issue if the training data is sufficient to cover different sizes of blur kernels.

Using the sequence Hydrangea, Fig. 8 quantifies and visualizes the effect of ramping up the noise level. As the noise increases, all baselines give larger errors overall. The AEE of our implementations remains at a reasonable level (within 3.2 pixels AEE) while the errors of Brox et al., Portz et al. and MDP climb up quickly. Given the largest noise level, our LMoF-Deep gives the best precision (1.61 pixels AEE), Li et al.'s yields a very competitive measure (1.88 pixels AEE), while LMoF gives 2.13 pixels AEE. The robustness of these three approaches against noise benefits from the directional filtering, which efficiently removes the noise but preserves the useful information moBlur ().

Within this evaluation, we also compare our proposed approach to the recently popular method of Li et al. moblur_nc (), which uses ground truth camera motion to regularize the optical flow estimation. Their method gives good precision on real-world blurry footage but strictly requires additional hardware and difficult calibration. They also have to tune parameters carefully for different scenes. Our method models the optical flow from blurry footage using a convolutional neural network. This is an end-to-end approach which does not need any manual parameter tuning or additional information/hardware. It provides rapid computation and adapts to various image resolutions and kernel sizes. In our quantitative analysis (Table 1), our method yields an AEE improvement and a faster runtime compared to Li et al. moblur_nc ().

6.2 Real-world Scenes with Camera-shake Blur

To illustrate the feasibility of our method, we qualitatively compare our approach to the other baselines on real-world sequences. As shown in Fig. 9, from left to right, these are the sequences Chessboard, Desktop and Books. Chessboard contains real-world photometric effects of nonrigid deformations and small occlusions, Desktop contains large camera motion and some featureless regions, and Books gives large displacement and out-of-plane rotation. We observe that our methods give sharper flow at object boundaries, as well as better shape preservation in the image warping.

7 Conclusion

In this paper, we investigate the problem of recovering optical flow from camera-shake video footage. We first propose a novel CNN architecture for video frame deblurring using an extra directional similarity and filtering layer. In practice, such learnable filters are able to adaptively preserve the directional blur information without prior knowledge of the camera motion. We then highlight the benefits of the Per-level integration of our network into an iterative optical flow framework. The evaluation demonstrates that our hybrid framework gives overall competitive precision and higher runtime performance.

The limitations of our method lie in the presence of mixed blur, globally varying blur and spatial noise. Such difficulties could be alleviated by using more comprehensive training data.

Acknowledgements

This work was partially conducted while Wenbin Li was affiliated with the UCL Department of Computer Science and the University of Bath. We thank Gabriel Brostow and the UCL PRISM Group for their helpful comments. The authors are partially supported by the Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA) EP/M023281/1, and EPSRC projects EP/K023578/1 and EP/K02339X/1.

References

  • (1) W. Li, D. Cosker, M. Brown, An anchor patch based optimisation framework for reducing optical flow drift in long image sequences, in: Asian Conference on Computer Vision (ACCV’12), Springer, 2012, pp. 112–125.
  • (2) W. Li, D. Cosker, M. Brown, Drift robust non-rigid optical flow enhancement for long sequences, Journal of Intelligent and Fuzzy Systems 31 (5) (2016) 2583–2595.
  • (3) R. Tang, D. Cosker, W. Li, Global alignment for dynamic 3d morphable model construction, in: Workshop on Vision and Language (V&LW’12), pp. 1–2.
  • (4) C. Godard, P. Hedman, W. Li, G. J. Brostow, Multi-view reconstruction of highly specular surfaces in uncontrolled environments, in: 3D Vision (3DV), 2015 International Conference on, IEEE, 2015, pp. 19–27.
  • (5) W. Li, F. Viola, J. Starck, G. J. Brostow, N. D. Campbell, Roto++: Accelerating professional rotoscoping using shape manifolds, ACM Transactions on Graphics (In proceeding of ACM SIGGRAPH’16) 35 (4).
  • (6) G. Ren, W. Li, E. O’Neill, Towards the design of effective freehand gestural interaction for interactive tv, Journal of Intelligent and Fuzzy Systems 31 (5) (2016) 2659–2674.
  • (7) B. Horn, B. Schunck, Determining optical flow, Artificial intelligence 17 (1-3) (1981) 185–203.
  • (8) W. Li, Y. Chen, J. Lee, G. Ren, D. Cosker, Robust optical flow estimation for continuous blurred scenes using rgb-motion imaging and directional filtering, in: IEEE Winter Conference on Application of Computer Vision (WACV’14), IEEE, 2014, pp. 792–799.
  • (9) L. Zhong, S. Cho, D. Metaxas, S. Paris, J. Wang, Handling noise in single image deblurring using directional filters, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR’13), 2013, pp. 612–619.
  • (10) S. Cho, S. Lee, Fast motion deblurring, ACM Transactions on Graphics (TOG’09) 28 (5) (2009) 145.
  • (11) D. Krishnan, T. Tay, R. Fergus, Blind deconvolution using a normalized sparsity measure, in: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, IEEE, 2011, pp. 233–240.
  • (12) L. Xu, S. Zheng, J. Jia, Unnatural l0 sparse representation for natural image deblurring, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR’13), 2013, pp. 1107–1114.
  • (13) J. Pan, D. Sun, H. Pfister, M.-H. Yang, Blind image deblurring using dark channel prior.
  • (14) T. Michaeli, M. Irani, Blind deblurring using internal patch recurrence, in: European Conference on Computer Vision, Springer, 2014, pp. 783–798.
  • (15) L. Sun, S. Cho, J. Wang, J. Hays, Edge-based blur kernel estimation using patch priors, in: Computational Photography (ICCP), 2013 IEEE International Conference on, IEEE, 2013, pp. 1–8.
  • (16) S. Xiang, G. Meng, Y. Wang, C. Pan, C. Zhang, Image deblurring with matrix regression and gradient evolution, Pattern Recognition 45 (6) (2012) 2164–2179.
  • (17) S. Yun, H. Woo, Linearized proximal alternating minimization algorithm for motion deblurring by nonlocal regularization, Pattern Recognition 44 (6) (2011) 1312–1326.
  • (18) W.-Z. Shao, H.-S. Deng, Q. Ge, H.-B. Li, Z.-H. Wei, Regularized motion blur-kernel estimation with adaptive sparse image prior learning, Pattern Recognition 51 (2016) 402–424.
  • (19) K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior, IEEE transactions on pattern analysis and machine intelligence 33 (12) (2011) 2341–2353.
  • (20) L. Xu, J. Jia, Two-phase kernel estimation for robust motion deblurring, in: European conference on computer vision, Springer, 2010, pp. 157–170.
  • (21) Q. Shan, J. Jia, A. Agarwala, High-quality motion deblurring from a single image, ACM Transactions on Graphics (TOG’08) 27 (3) (2008) 73.
  • (22) A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, B. Curless, Single image deblurring using motion density functions, in: European Conference on Computer Vision, Springer, 2010, pp. 171–184.
  • (23) M. Hirsch, C. J. Schuler, S. Harmeling, B. Schölkopf, Fast removal of non-uniform camera shake, in: 2011 International Conference on Computer Vision, IEEE, 2011, pp. 463–470.
  • (24) O. Whyte, J. Sivic, A. Zisserman, J. Ponce, Non-uniform deblurring for shaken images, International journal of computer vision 98 (2) (2012) 168–186.
  • (25) Z. Hu, L. Xu, M.-H. Yang, Joint depth estimation and camera shake removal from single blurry image, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2014, pp. 2893–2900.
  • (26) V. Khare, P. Shivakumara, P. Raveendran, M. Blumenstein, A blind deconvolution model for scene text detection and recognition in video, Pattern Recognition 54 (2016) 128–148.
  • (27) M. Hradiš, J. Kotera, P. Zemcík, F. Šroubek, Convolutional neural networks for direct text deblurring, in: Proceedings of BMVC, 2015, pp. 2015–10.
  • (28) L. Xu, J. S. Ren, C. Liu, J. Jia, Deep convolutional neural network for image deconvolution, in: Advances in Neural Information Processing Systems, 2014, pp. 1790–1798.
  • (29) A. Levin, P. Sand, T. S. Cho, F. Durand, W. T. Freeman, Motion-invariant photography, ACM Transactions on Graphics (TOG’08) 27 (3) (2008) 71.
  • (30) N. Joshi, S. B. Kang, C. L. Zitnick, R. Szeliski, Image deblurring using inertial measurement sensors, ACM Transactions on Graphics (TOG’10) 29 (4) (2010) 30.
  • (31) Y.-W. Tai, H. Du, M. S. Brown, S. Lin, Image/video deblurring using a hybrid camera, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR’08), 2008, pp. 1–8.
  • (32) Z. Hu, L. Yuan, S. Lin, M.-H. Yang, Image deblurring using smartphone inertial sensors.
  • (33) M. Grundmann, V. Kwatra, M. Han, I. Essa, Efficient hierarchical graph-based video segmentation, in: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, IEEE, 2010, pp. 2141–2148.
  • (34) H. Wang, C. Schmid, Action recognition with improved trajectories, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 3551–3558.
  • (35) Z. Lv, X. Li, W. Li, Virtual reality geographical interactive scene semantics research for immersive geography learning, Neurocomputing 0 (0) (2016) 12.
  • (36) S. Baker, D. Scharstein, J. Lewis, S. Roth, M. Black, R. Szeliski, A database and evaluation methodology for optical flow, International Journal of Computer Vision (IJCV’11) 92 (2011) 1–31.
  • (37) D. J. Butler, J. Wulff, G. B. Stanley, M. J. Black, A naturalistic open source movie for optical flow evaluation, in: European Conference on Computer Vision (ECCV’12), 2012, pp. 611–625.
  • (38) M. J. Black, P. Anandan, The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields, Computer vision and image understanding 63 (1) (1996) 75–104.
  • (39) T. Brox, A. Bruhn, N. Papenberg, J. Weickert, High accuracy optical flow estimation based on a theory for warping, in: European Conference on Computer Vision (ECCV’04), 2004, pp. 25–36.
  • (40) D. Sun, E. B. Sudderth, M. J. Black, Layered image motion with explicit occlusions, temporal consistency, and depth ordering, in: Advances in Neural Information Processing Systems, 2010, pp. 2226–2234.
  • (41) L. Xu, J. Jia, Y. Matsushita, Motion detail preserving optical flow estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (9) (2012) 1744–1757.
  • (42) W. Li, D. Cosker, M. Brown, R. Tang, Optical flow estimation using laplacian mesh energy, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR’13), IEEE, 2013, pp. 2435–2442.
  • (43) Z. Tu, R. Poppe, R. C. Veltkamp, Weighted local intensity fusion method for variational optical flow estimation, Pattern Recognition 50 (2016) 223–232.
  • (44) M. Ahmad, S.-W. Lee, Human action recognition using shape and clg-motion flow from multi-view image sequences, Pattern Recognition 41 (7) (2008) 2237–2252.
  • (45) D. Chen, W. Li, P. Hall, Dense motion estimation for smoke, in: Asian Conference on Computer Vision (ACCV’16), Springer, 2016, pp. 225–239.
  • (46) W. Li, D. Cosker, Z. Lv, M. Brown, Nonrigid optical flow ground truth for real-world scenes with time-varying shading effects, IEEE Robotics and Automation Letters (RA-L’16) 2 (1) (2017) 231–238.
  • (47) W. Li, D. Cosker, Z. Lv, M. Brown, Dense nonrigid ground truth for optical flow in real-world scenes, in: IEEE Conference on Automation Science and Engineering (CASE’16), 2016, pp. 1–8.
  • (48) W. Li, D. Cosker, Video interpolation using optical flow and laplacian smoothness, Neurocomputing 220 (2017) 236–243.
  • (49) J. Revaud, P. Weinzaepfel, Z. Harchaoui, C. Schmid, Epicflow: Edge-preserving interpolation of correspondences for optical flow, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1164–1172.
  • (50) A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, T. Brox, Flownet: Learning optical flow with convolutional networks, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2758–2766.
  • (51) D. Teney, M. Hebert, Learning to extract motion from videos in convolutional neural networks, arXiv preprint arXiv:1601.07532.
  • (52) T. Portz, L. Zhang, H. Jiang, Optical flow in the presence of spatially-varying motion blur, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR’12), 2012, pp. 1752–1759.
  • (53) X. He, T. Luo, S. Yuk, K. Chow, K.-Y. Wong, R. Chung, Motion estimation method for blurred videos and application of deblurring with spatially varying blur kernels, in: Computer Sciences and Convergence Information Technology (ICCIT), 2010 5th International Conference on, IEEE, 2010, pp. 355–359.
  • (54) J. Wulff, M. J. Black, Modeling blurred video with layers, in: European Conference on Computer Vision, Springer, 2014, pp. 236–252.
  • (55) Z. Tu, R. Poppe, R. Veltkamp, Estimating accurate optical flow in the presence of motion blur, Journal of Electronic Imaging 24 (5) (2015) 053018.
  • (56) W. Li, Nonrigid surface tracking, analysis and evaluation, Ph.D. thesis, University of Bath (2013).
  • (57) W. Li, Y. Chen, J. Lee, G. Ren, D. Cosker, Blur robust optical flow using motion channel, Neurocomputing 220 (2016) 170–180.
  • (58) C. Schuler, M. Hirsch, S. Harmeling, B. Scholkopf, Learning to deblur, IEEE Transactions on Pattern Analysis and Machine Intelligence PP (99) (2015) 1–1.
  • (59) N. Mayer, E. Ilg, P. Häusser, P. Fischer, D. Cremers, A. Dosovitskiy, T. Brox, A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation, in: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR’16), 2016.
  • (60) A. Chakrabarti, A neural approach to blind motion deblurring, in: European Conference Computer Vision, Springer International Publishing, 2016, pp. 221–235.
  • (61) A. Levin, Y. Weiss, F. Durand, W. T. Freeman, Efficient marginal likelihood optimization in blind deconvolution, in: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, IEEE, 2011, pp. 2657–2664.
  • (62) Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, Caffe: Convolutional architecture for fast feature embedding, in: Proceedings of the ACM International Conference on Multimedia, ACM, 2014, pp. 675–678.