Efficient blind deblurring under high noise levels

Abstract

The goal of blind image deblurring is to recover a sharp image from a motion-blurred one without knowing the camera motion. Current state-of-the-art methods perform remarkably well on images with no noise or very low noise levels. However, the noiseless assumption is not realistic, since low-light conditions, which require longer exposure times, are the main cause of motion blur. In fact, motion blur and moderate to high noise often appear together. Most works approach this problem by first estimating the blur kernel and then deconvolving the noisy blurred image. In this work, we first show that current state-of-the-art kernel estimation methods based on the $\ell_0$ gradient prior can be adapted to handle high noise levels while keeping their efficiency. Then, we show that a fast non-blind deconvolution method can be significantly improved by first denoising the blurry image. The proposed approach yields results that are equivalent to those obtained with much more computationally demanding methods.

Jérémy Anger, Mauricio Delbracio, and Gabriele Facciolo
CMLA, ENS Cachan, CNRS, Université Paris-Saclay, 94235 Cachan, France
IIE, Universidad de la República, Uruguay

Work partly financed by Office of Naval Research grant N00014-17-1-2552, Agencia Nacional de Investigación e Innovación (ANII, Uruguay) grant FCE_1_2017_135458, Programme ECOS Sud – UdelaR – Paris Descartes U17E04, DGA Astrid project « filmer la Terre » n°ANR-17-ASTR-0013-01, MENRT, and a DGA PhD scholarship jointly supported with FMJH.


Index Terms—  Image deblurring, blur kernel estimation, deconvolution, high noise

1 Introduction

Blind image deblurring is an ill-posed image restoration problem that aims to restore a sharp image given a blurry one. Motion blur occurs when there is relative motion between the camera and the scene during the exposure time. This phenomenon is most visible in low light conditions, when the integration time has to be longer to compensate for the lack of photons. The formation of a blurry image is frequently modeled as the convolution between the sharp image $u$ and a latent blur kernel $k$, leading to

$v = k \ast u + n$,   (1)

where $\ast$ denotes the convolution and $n$ models acquisition noise (usually white Gaussian noise). The goal of blind image deblurring is to recover the image $u$ without knowing $k$. Most methods propose a two-step process: first estimating the blur kernel and then applying a non-blind deconvolution algorithm [1, 2, 3, 4]. The above stationary kernel model can be generally extended to a non-uniform model [5, 3]. However, this comes at the price of a non-negligible computational cost with, in general, only a minor quality improvement [6, 7].
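
As a concrete illustration of Equation (1), the following sketch synthesizes a blurry, noisy observation from a sharp image. The helper name, the kernel and the noise level are placeholders, not values used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_and_add_noise(u, k, sigma, seed=0):
    """Convolve a sharp image u with a kernel k and add white Gaussian noise."""
    rng = np.random.default_rng(seed)
    v = fftconvolve(u, k, mode="same")            # stationary (uniform) blur
    v += sigma * rng.standard_normal(u.shape)     # acquisition noise n
    return v
```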

Fig. 1: Blind deblurring under high noise. From left to right: noisy blurred input, Tao [8] (19.10dB), Zhong [9] (20.18dB), Zhou [10] (20.90dB), Pan [4] (20.97dB), and the proposed method (21.66dB). The proposed method is able to estimate the kernel and restore a high-quality image.

Current state-of-the-art methods, either variational [4, 11, 12] or learning based [8, 13], work very well on images with no noise or very low noise levels. However, the noiseless assumption is not realistic considering the low light conditions that lead to the motion blur in the first place.

Kernel estimation. Only a handful of blind deblurring algorithms from the literature consider the realistic case of moderate or high noise. Tai and Lin [14] show that denoising the image before estimating the kernel leads to an oversmoothing of details in the blurry image and thus to errors in the estimated kernel. Instead, they propose to iteratively denoise the image and estimate the kernel. The ad-hoc denoising step uses the motion information from the kernel. Xu et al. [15] propose a two-step kernel estimation. The first step only estimates a coarse kernel. The second step uses an iterative support refinement of the kernel that enforces sparsity without an explicit prior. Zhong et al. [9] also observe that denoising before kernel estimation results in poor performance. To circumvent this, they design directional filters which reduce the noise level while preserving blur information in the orthogonal direction. The blur kernel is then reconstructed from projections using the inverse Radon transform. Pan et al. [4] propose a kernel estimation method based on the $\ell_0$ image gradient prior, which allows high quality estimations in low noise level settings [6]. However, the authors indicate that the method underperforms in medium and high noise conditions [16]. In this paper, we propose an adaptation of the $\ell_0$-based kernel estimation method which is both efficient and robust to noise.

Non-blind deconvolution. Once the blurring kernel is estimated, most methods apply a non-blind deconvolution algorithm to restore the sharp image $u$. The fastest deconvolution methods usually rely on image priors that do not perform well under high noise conditions (e.g., Total Variation). In the past decade, better image priors have been introduced to offer higher quality non-blind deconvolution. For example, EPLL [17] learns a mixture of Gaussian models to encode representative patches from natural images, and proposes an iterative algorithm to restore the image in presence of Gaussian blur. Generic frameworks such as Plug-and-Play priors [18] and, more recently, Regularization by Denoising [19] make it possible to use any image denoiser as a prior for restoration problems. Similarly, Zhong et al. [9] propose to use NL-means at each step of an iterative non-blind deconvolution, and Tai and Lin [14] incorporate a motion-aware denoiser for blind deblurring. While these methods significantly outperform basic priors such as TV, they are usually prohibitively slow due to the complex optimizations involved. Other methods propose to first invert the blur with little regularization and then denoise the result [20, 21]. While computationally efficient, these methods require solving the difficult problem of removing correlated noise.

Contributions. We study the robustness to noise of the kernel estimation method introduced by Pan et al. [4] and improve it by making it robust to high noise levels while maintaining good performance in terms of quality and speed. These adaptations are not specific to this particular method and can be included in most methods that alternate between sharp image prediction and kernel estimation. We then propose a non-blind deconvolution method capable of handling moderate to high noise, which uses denoising as a preprocessing step. While conceptually simple, the proposed method is competitive with state-of-the-art approaches that iterate denoising inside the algorithm, which are much more computationally demanding.

2 Proposed method

The proposed method first estimates the kernel by iterating between two steps: (i) sharp image prediction and (ii) kernel estimation. Then, once the kernel has been estimated, the final image is restored using a non-blind deconvolution algorithm.
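
The following high-level sketch summarizes this pipeline; all helper names are hypothetical stand-ins for the steps detailed in Sections 2.1 to 2.3, and the number of outer iterations is an illustrative assumption.

```python
# Hypothetical helpers (init_kernel, predict_sharp_image, estimate_kernel,
# denoise, non_blind_deconvolve) stand in for the steps of Sections 2.1-2.3.
def blind_deblur(v, kernel_size, noise_sigma, n_iters=5):
    k = init_kernel(kernel_size)                  # e.g. a delta kernel
    for _ in range(n_iters):
        u = predict_sharp_image(v, k)             # Sec. 2.1, l0 gradient prior
        k = estimate_kernel(u, v, kernel_size)    # Sec. 2.2, Eq. (6) solver
    v_den = denoise(v, noise_sigma)               # Sec. 2.3, preprocessing
    return non_blind_deconvolve(v_den, k)         # e.g. TV / hyper-Laplacian
```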

2.1 Sharp image prediction

The goal of this step is to recover the main structures of the latent sharp image using the previously estimated blur kernel and imposing additional prior information about sharp images. One very effective prior is the $\ell_0$ gradient prior, introduced for image deblurring by Pan et al. [22], leading to the optimization problem

$\hat u = \arg\min_u \|u \ast k - v\|_2^2 + \lambda \|\nabla u\|_0$.   (2)

The energy (2) is minimized using a half-quadratic splitting formulation, which leads to iteratively solving two sub-problems

$g \leftarrow \arg\min_g \beta \|g - \nabla u\|_2^2 + \lambda \|g\|_0$,   (3)
$u \leftarrow \arg\min_u \|u \ast k - v\|_2^2 + \beta \|\nabla u - g\|_2^2$.   (4)

The closed-form solution of sub-problem (3) is a hard thresholding of the gradients of $u$, whereas sub-problem (4) corresponds to the deconvolution of $v$ with an attachment term between the vector field $g$ and $\nabla u$. Unless specified otherwise, $\lambda$ and $\beta$ are set according to [4]. The weight $\lambda$ controls the amount of details – and noise – that should be contained in $u$. After a complete sharp prediction step, the parameter $\lambda$ is decreased until it reaches a minimum threshold $\lambda_{\min}$ [4].
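
A minimal sketch of one prediction pass is given below, assuming circular boundary conditions and grayscale images; it alternates the hard thresholding of sub-problem (3) with the Fourier-domain solution of sub-problem (4). The parameter values and the $\beta$ schedule are illustrative assumptions, not the settings of [4].

```python
import numpy as np

def psf2otf(k, shape):
    """Zero-pad the kernel k to `shape` and circularly center it at the origin."""
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def predict_sharp_image(v, k, lam=4e-3, beta=2e-2, beta_max=1e5):
    """One sharp image prediction pass: iterate sub-problems (3) and (4)."""
    dx = np.array([[1.0, -1.0]])          # horizontal finite difference
    dy = np.array([[1.0], [-1.0]])        # vertical finite difference
    K, Dx, Dy = psf2otf(k, v.shape), psf2otf(dx, v.shape), psf2otf(dy, v.shape)
    V = np.fft.fft2(v)
    u = v.copy()
    while beta < beta_max:
        # sub-problem (3): hard thresholding of the gradients of u
        gx = np.real(np.fft.ifft2(Dx * np.fft.fft2(u)))
        gy = np.real(np.fft.ifft2(Dy * np.fft.fft2(u)))
        small = gx ** 2 + gy ** 2 < lam / beta
        gx[small] = 0.0
        gy[small] = 0.0
        # sub-problem (4): quadratic in u, closed form in the Fourier domain
        num = np.conj(K) * V + beta * (np.conj(Dx) * np.fft.fft2(gx)
                                       + np.conj(Dy) * np.fft.fft2(gy))
        den = np.abs(K) ** 2 + beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
        u = np.real(np.fft.ifft2(num / den))
        beta *= 2.0                        # continuation on beta
    return u
```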

We observed that when the blurry image is contaminated with noise and $\lambda$ is small, the solution contains spikes fitting the noise. In order to obtain a clean, albeit coarser, estimation of $u$, the regularization weight has to be increased until noise is no longer included in the solution. Since the $\ell_0$ minimization acts as a hard thresholding, using a larger threshold results in a more conservative removal of noise artifacts. However, as the regularization increases, details that would otherwise have been restored are removed from the solution.

To summarize, the sharp image prediction step based on the $\ell_0$ gradient prior can be made robust to noise by adapting the regularization limit $\lambda_{\min}$ so that noise artifacts are filtered out. This tuning should be performed per noise level.

2.2 Kernel estimation

This step uses the current sharp image prediction $u$ and the blurry image $v$ to estimate a blur kernel. Since the support of the blur kernel is significantly smaller than the image, this problem is usually well posed if both images are noiseless. In such conditions, simple priors for the kernel can be employed, leading to efficient computations. For example, a well-known minimization problem for the kernel estimation step is

$\hat k = \arg\min_k \|u \ast k - v\|_2^2 + \gamma \|k\|_2^2$.   (5)

Variants of this energy have been proposed. For example, Cho et al. [2] showed that by formulating the data term in a filtered domain (e.g. using image gradients) the conditioning of the problem was improved. This speeds up convergence when using a conjugate gradient algorithm but increases the weight of the frequencies most affected by noise. As the blurry image gets noisier, noise in the estimation also increases, with little control. A trick often found in kernel estimation implementations [4, 12, 23], consists in filtering the kernel values after its estimation using both a hard thresholding and a connected component filtering, removing low amplitude noise but also biasing the estimation.

(a)
(b)
(c)
(d)
(e)
Fig. 2: Kernels estimated from a blurry noisy image: (a) ground truth, (b) our result including every prior and constraint from Equation (6), (c) removing one of the two kernel regularizers, (d) removing both regularizers, and (e) additionally formulating the data term in a filtered domain. Notice how noise increases as priors are removed.

Instead, we propose to use better suited priors and kernel constraints by minimizing

$\hat k = \arg\min_{k} \|u \ast k - v\|_2^2 + \lambda_1 \|k\|_1 + \lambda_2 \|\nabla k\|_2^2 \quad \text{s.t.} \quad k \ge 0,\ \mathrm{supp}(k) \subseteq D$,   (6)

where $D$ is a rectangular domain covering the support of $k$, and $\lambda_1$ and $\lambda_2$ are regularization parameters. The regularizers $\|k\|_1$ and $\|\nabla k\|_2^2$ were motivated in Xiong et al. [24] for their effectiveness for kernel estimation, and the spatial constraints were studied in Almeida et al. [25]. To highlight the importance of each constraint and prior, we evaluate their contribution by successively removing them and running the full blind kernel estimation method for two noise levels. Results are shown in Figure 2. The kernel (2b) was estimated using Equation (6) with all terms active. We then successively removed the two regularizers (2c, 2d), and finally used gradients in the data term [2] (2e). Notice how each prior helps removing noise and reduces the difference with the ground truth (2a). Using a filtered domain to estimate the kernel introduces errors that can otherwise easily be avoided.

We propose an efficient solver for (6) based on half-quadratic splitting [26]. Introducing an auxiliary variable $z$ for the kernel, our kernel estimation step iterates as follows

$k \leftarrow \arg\min_k \|u \ast k - v\|_2^2 + \lambda_2 \|\nabla k\|_2^2 + \beta \|k - z\|_2^2$,   (7)
$z \leftarrow \arg\min_{z \ge 0,\ \mathrm{supp}(z) \subseteq D} \lambda_1 \|z\|_1 + \beta \|k - z\|_2^2$.   (8)

Assuming circular boundary conditions for the convolution, the subproblem (7) can be solved efficiently using two discrete Fourier transforms

$k = \mathcal{F}^{-1}\!\left( \dfrac{\overline{\mathcal{F}(u)}\,\mathcal{F}(v) + \beta\,\mathcal{F}(z)}{|\mathcal{F}(u)|^2 + \lambda_2 \left(|\mathcal{F}(\partial_x)|^2 + |\mathcal{F}(\partial_y)|^2\right) + \beta} \right)$.   (9)

The subproblem (8) enforces non-negativity and a given spatial support for $k$, and its solution corresponds to a soft thresholding

$z = \mathbb{1}_D \cdot \max\!\left(k - \dfrac{\lambda_1}{2\beta},\ 0\right)$.   (10)

Similarly to continuation methods, $\beta$ starts with a low value and is multiplied by a constant factor at each iteration. The method stops when $\beta$ reaches a maximum value, which bounds the number of iterations to a small constant, with a few FFTs per iteration. In comparison, conjugate gradient methods usually require more iterations, each involving several FFTs, even in ideal conditions [2], and are unstable in presence of noise. Finally, even though unrealistic circular boundary conditions are assumed in Equation (9), we observed that the regularization terms in conjunction with an edge-tapering procedure [27] are sufficient to avoid boundary artifacts.
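
Below is a minimal sketch of this solver. It reuses the psf2otf helper from the sharp-prediction sketch above; the regularization weights and the $\beta$ schedule are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

def estimate_kernel(u, v, ksize, lam1=1e-3, lam2=1e-2,
                    beta=1e-2, beta_max=1e2, rho=2.0):
    """Estimate a blur kernel of size `ksize` from the prediction u and input v."""
    h, w = ksize
    dx = np.array([[1.0, -1.0]])
    dy = np.array([[1.0], [-1.0]])
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    Dx, Dy = psf2otf(dx, u.shape), psf2otf(dy, u.shape)  # psf2otf as above
    # support mask D: an h-by-w window centered (circularly) at the origin
    D = np.zeros(u.shape)
    D[:h, :w] = 1.0
    D = np.roll(D, (-(h // 2), -(w // 2)), axis=(0, 1))
    z = np.zeros(u.shape)
    while beta < beta_max:
        # step (7): quadratic in k, closed form in the Fourier domain
        num = np.conj(U) * V + beta * np.fft.fft2(z)
        den = np.abs(U) ** 2 + lam2 * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2) + beta
        k_full = np.real(np.fft.ifft2(num / den))
        # step (8): soft thresholding + non-negativity + support constraint
        z = D * np.maximum(k_full - lam1 / (2.0 * beta), 0.0)
        beta *= rho                        # continuation on beta
    k = np.roll(z, (h // 2, w // 2), axis=(0, 1))[:h, :w]  # crop the support
    return k / max(k.sum(), 1e-12)                         # normalize to sum 1
```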

Coarse-to-fine kernel estimation. Alternating between kernel estimation and sharp image prediction allows to successfully retrieve small kernels. A coarse-to-fine scheme is generally employed to efficiently recover large kernels [2]. Our implementation is based on [28], which upscales the predicted sharp image by a factor of two using bicubic interpolation. However, instead of the several iterations per scale performed in [28], our method requires only two iterations per scale, warm-starting the second one with the previous estimate of $k$. This also allows reducing the number of inner iterations required for the sharp prediction step by adjusting the parameters of (3) and (4). These modifications constitute a significant speed-up with no loss of performance, as we show in the experimental section. A sketch of the multiscale loop is given below.
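
A possible form of this multiscale loop is sketched here, building on the predict_sharp_image and estimate_kernel sketches above and using bicubic resampling from SciPy; the number of scales and the kernel initialization are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_kernel_estimation(v, ksize, n_scales=4):
    """Coarse-to-fine blind kernel estimation, two alternations per scale."""
    k = None
    for s in reversed(range(n_scales)):             # coarsest scale first
        f = 0.5 ** s
        v_s = zoom(v, f, order=3)                   # bicubic downscaling
        h = max(3, int(round(ksize[0] * f)) | 1)    # odd kernel size at scale
        w = max(3, int(round(ksize[1] * f)) | 1)
        if k is None:
            k = np.zeros((h, w))
            k[h // 2, w // 2] = 1.0                 # start from a delta kernel
        else:
            k = zoom(k, (h / k.shape[0], w / k.shape[1]), order=3)
            k = np.maximum(k, 0.0)
            k /= max(k.sum(), 1e-12)
        for _ in range(2):                          # two alternations per scale;
            u_s = predict_sharp_image(v_s, k)       # the second refines the
            k = estimate_kernel(u_s, v_s, (h, w))   # kernel from the first
    return k
```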

2.3 Non-blind deconvolution

Non-blind deconvolution algorithms generally reach high-quality results in noiseless settings. The main difficulties come from errors in the estimated kernel or from frequency components that are cancelled by the blurring kernel. Priors such as total variation (TV) [29] are efficient at reducing the ringing artifacts arising from these errors, and fast solvers exist [30]. However, in presence of noise, the weight associated with the regularization has to be increased, and in the case of total variation artifacts such as staircasing start to appear, hence the need for more natural image priors.

Given recent progress in the denoising field [20, 31, 32, 33], we argue that preprocessing the image with a denoiser before non-blind deconvolution is now a viable, and very efficient, way to handle the noise. While a direct inversion of the blur on a denoised image can still produce ringing artifacts, using a TV prior with a low regularization weight is sufficient to remove the ringing while avoiding staircasing, giving the result a more natural aspect than a strong TV regularization applied without the denoising preprocessing. A similar approach was studied in Badri and Yahia [34].

We found that the quality gain obtained from this procedure is quite independent of the denoiser, and selected the implementation from [35] of the FFDNet CNN denoiser [33].
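
The sketch below illustrates the resulting denoise-then-deconvolve pipeline. For simplicity it replaces the TV / hyper-Laplacian solver of [30] with a closed-form deconvolution using an $\ell_2$ penalty on the gradients, and takes the denoiser as a generic callable (FFDNet in the paper); psf2otf is the helper defined in the sharp-prediction sketch.

```python
import numpy as np

def deconvolve_l2(v, k, mu=1e-3):
    """Closed-form deconvolution with an l2 penalty on the image gradients."""
    dx = np.array([[1.0, -1.0]])
    dy = np.array([[1.0], [-1.0]])
    K = psf2otf(k, v.shape)               # psf2otf as in the earlier sketch
    Dx, Dy = psf2otf(dx, v.shape), psf2otf(dy, v.shape)
    num = np.conj(K) * np.fft.fft2(v)
    den = np.abs(K) ** 2 + mu * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))

def restore(v, k, denoise, sigma, mu=1e-3):
    """Denoise the blurry image first, then deconvolve with low regularization."""
    return deconvolve_l2(denoise(v, sigma), k, mu)
```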

3 Experiments

In what follows, we present several deblurring results on synthetic and real images. We compare our results against Zhong et al. [9], which is robust to noise, Pan et al. [4], which uses the $\ell_0$ gradient prior, and more recent blind methods [10, 8]. We first assess the performance of our kernel estimation method under challenging noise levels, then show qualitative results from our non-blind deconvolution procedure before evaluating blind results. Finally, we compare blind deblurring results on a real-world image.

Fig. 3: Sample of three estimated kernels (from the dataset of Levin et al. [36]) with added Gaussian noise. From left to right: blurry noisy input, ground truth, Pan [4], Zhong [9], and the proposed method.

Noise-robust kernel estimation. In order to assess the performance of our kernel estimation, we extend the dataset of Levin et al. [36] by adding three increasing levels of Gaussian noise to the blurry images. As a measure of quality of the estimated kernels, we compute the root mean square error (RMSE), minimized over integer translations of the kernel. Table 1 shows the results for Pan et al. [4], Zhong et al. [9] and our kernel estimation on this dataset. As expected, in the noiseless case all kernels are well estimated. However, as the noise increases, the results of Pan et al. degrade quickly while Zhong's and ours remain robust.

In addition to this quantitative study, Figure 3 shows a sample of kernels estimated by the three methods on noisy blurred inputs. Visual inspection of the kernels is in accordance with the quantitative measure: the kernels of Pan et al. show no robustness to noise, those of Zhong et al. recover the correct overall shape, while our method estimates sharper kernels.

Method              Low noise   Medium noise   High noise
Pan et al. [4]        0.132        0.163          0.171
Zhong et al. [9]      0.137        0.143          0.158
Proposed              0.123        0.136          0.151
Table 1: Comparison of kernel estimation methods (RMSE, lower is better) on the dataset of Levin et al. [36] with added noise at three increasing levels. Kernels are registered to the ground truth with integer translations before computing the RMSE.
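
For reference, the kernel error reported in Table 1 can be sketched as follows; the shift search range is an illustrative assumption.

```python
import numpy as np

def kernel_rmse(k_est, k_gt, max_shift=5):
    """RMSE between two same-size kernels, minimized over integer translations."""
    k_est = k_est / k_est.sum()
    k_gt = k_gt / k_gt.sum()
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(k_est, (dy, dx), axis=(0, 1))
            best = min(best, np.sqrt(np.mean((shifted - k_gt) ** 2)))
    return best
```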

(a) Input, (b) Zhong et al. [9], (c) without denoising, (d) with denoising.
Fig. 4: Non-blind deconvolution with the ground-truth kernel. Regularization weights for the final deconvolution were optimized for PSNR over a set of 5 images.

Non-blind deconvolution under high noise. We propose a non-blind deblurring method based on denoising the image before deconvolution. We compare three non-blind deconvolution methods: Zhong et al. [9], Krishnan and Fergus [30] (used here with a TV regularization), and our method, which combines FFDNet denoising with the deconvolution of [30]. Figure 4 compares non-blind deconvolution results using the ground-truth kernel on a highly noisy input. Regularization weights for all three methods are tuned for best average PSNR over five images from [37] (including the image in Figure 4a). We observe that our method recovers more details than Zhong et al. [9] while producing a smoother result than Krishnan and Fergus [30], thanks to the denoising preprocessing.

Blind deblurring comparison. The previous experiments indicated good performance for the kernel estimation and the non-blind deconvolution. We now validate the complete blind deblurring method and compare against competitive methods on three levels of noise. Table 2 shows the PSNR (computed after registering the images with the ground truth and cropping to avoid boundary effects) over 5 images from [37]. Running times are also reported in Table 2 for single-thread CPU execution on an Intel Xeon E5-2650. For this experiment, the kernel estimation parameters were set once and kept identical for all noise levels. A visual comparison of the results at the highest noise level is shown in Figure 1. In such challenging situations, most methods fail to estimate the kernel and the deconvolution introduces ringing or regularization artifacts that are much less present in our result. More visual results and source code are available online at the project webpage (https://goo.gl/p5Rndy).

Method              Low noise   Medium noise   High noise   Runtime
Pan et al. [4]        26.60        24.29          23.81       165s
Zhou et al. [10]      27.35        25.31          24.01        72s
Tao et al. [8]        24.99        22.76          20.28       123s
Zhong et al. [9]      24.39        23.84          23.38       154s
Proposed              27.68        26.20          25.10        17s
Table 2: Comparison of the PSNR (in dB) of the blind deblurring results. The reported values correspond to the average PSNR after registration over 5 images. Regularization parameters are tuned for best PSNR for each noise level.

Real-world images. Figure 5 shows the results on a real-world image from Zhong et al. [9]. We estimated the noise standard deviation and applied our blind deblurring method. Even though the deblurring results are close, the method of Zhong et al. took 250s for the kernel estimation and 370s for the non-blind deconvolution (MATLAB implementation), while our method took 10s to estimate the kernel, 6s to denoise and 10s to deconvolve the image (C++ implementation).

(a) Input, (b) Zhong et al. [9], (c) Proposed.
Fig. 5: Blind deblurring of a real image from [9] (contrast enhanced for visualization).

4 Conclusion

We showed that even though kernel estimation is often understood as being very unstable in the presence of noise, it is possible to obtain robust estimations. First, we showed that the $\ell_0$ gradient prior can actually be very robust to noise if the regularization weight is set sufficiently high, leading to a noise-free sharp image prediction. Then, the kernel estimation step should also take the noise into account, and we proposed a splitting strategy to exploit spatial and non-negativity constraints as well as two regularization terms on the kernel. Finally, for the final non-blind deconvolution, an effective and efficient way to handle high noise is simply to denoise the blurry image before deconvolution. Qualitative and quantitative results highlighted the strength of our method compared to other noise-handling methods.

As future work, we would like to improve the non-blind deconvolution part by using a network trained on blurry images, as well as to apply other restoration methods, for example to remove JPEG compression artifacts.

References

  • [1] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” ACM Trans. Graph., vol. 25, no. 3, pp. 787, 2006.
  • [2] S. Cho and S. Lee, “Fast motion deblurring,” ACM Trans. Graph., vol. 28, no. 5, pp. 1, 2009.
  • [3] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf, “Fast removal of non-uniform camera shake,” in ICCV, 2011.
  • [4] J. Pan, Z. Hu, Z. Su, and M.-h. Yang, “Deblurring Text Images via L0-Regularized Intensity and Gradient Prior,” in CVPR, 2014.
  • [5] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images.,” Int. J. Comput. Vis., vol. 98, no. 2, pp. 168–186, 2012.
  • [6] W.-S. Lai, Z. Hu, N. Ahuja, and M.-H. Yang, “A Comparative Study for Single Image Blind Deblurring,” CVPR, 2016.
  • [7] R. Köhler, M. Hirsch, B. Mohler, B. Schölkopf, and S. Harmeling, “Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database,” LNCS, vol. 7578, no. 7, pp. 27–40, 2012.
  • [8] X. Tao, H. Gao, Y. Wang, X. Shen, J. Wang, and J. Jia, “Scale-recurrent Network for Deep Image Deblurring,” in CVPR, 2018.
  • [9] L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang, “Handling noise in single image deblurring using directional filters,” in CVPR, 2013.
  • [10] X. Zhou, M. Vega, F. Zhou, R. Molina, and A. K. Katsaggelos, “Fast Bayesian blind deconvolution with Huber Super Gaussian priors,” Digital Signal Processing: A Review Journal, vol. 60, pp. 122–133, 2017.
  • [11] L. Xu, S. Zheng, and J. Jia, “Unnatural L0 sparse representation for natural image deblurring,” in CVPR, 2013.
  • [12] J. Pan, D. Sun, H. Pfister, and M.-H. Yang, “Blind Image Deblurring Using Dark Channel Prior,” in CVPR, 2016.
  • [13] J. Zhang, J. Pan, J. Ren, Y. Song, L. Bao, R. W. H. Lau, and M.-H. Yang, “Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks,” in CVPR, 2018.
  • [14] Y. W. Tai and S. Lin, “Motion-aware noise filtering for deblurring of noisy and blurry images,” in CVPR, 2012.
  • [15] L. Xu and J. Jia, “Two-phase kernel estimation for robust motion deblurring,” LNCS, vol. 6311, no. 1, pp. 157–170, 2010.
  • [16] J. Pan, Z. Hu, Z. Su, and M. H. Yang, “L0-Regularized Intensity and Gradient Prior for Deblurring Text Images and beyond,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 2, pp. 342–355, 2015.
  • [17] D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in ICCV, 2011.
  • [18] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in GlobalSIP. IEEE, 2013, pp. 945–948.
  • [19] Y. Romano, M. Elad, and P. Milanfar, “The little engine that could: Regularization by denoising (red),” SIAM J Imaging Sci, vol. 10, no. 4, pp. 1804–1844, 2017.
  • [20] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080–2095, 2007.
  • [21] C. J. Schuler, H. Christopher Burger, S. Harmeling, and B. Scholkopf, “A machine learning approach for non-blind image deconvolution,” in CVPR, 2013.
  • [22] J. Pan and Z. Su, “Fast l0-Regularized Kernel Estimation for Robust Motion Deblurring,” IEEE Signal Processing Letters, vol. 20, no. 9, pp. 841–844, 2013.
  • [23] A. Chakrabarti, “A neural approach to blind motion deblurring,” in LNCS, 2016, vol. 9907 LNCS, pp. 221–235.
  • [24] N. Xiong, R. W. Liu, M. Liang, D. Wu, Z. Liu, and H. Wu, “Effective alternating direction optimization methods for sparsity-constrained blind image deblurring,” Sensors (Switzerland), vol. 17, no. 1, pp. 1–27, 2017.
  • [25] M. S. C. Almeida and M. Figueiredo, “Blind Image Deblurring With Unknown Boundaries Using the Alternating Direction Method of Multipliers,” in IEEE ICIP, 2013.
  • [26] D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process., vol. 4, no. 7, pp. 932–946, 1995.
  • [27] S. Reeves, “Fast image restoration without boundary artifacts,” IEEE Trans. Image Process., vol. 14, no. 10, pp. 1448–1453, oct 2005.
  • [28] J. Anger, G. Facciolo, and M. Delbracio, “Blind image deblurring using the l0 gradient prior,” Image Proc. OnLine, 2019.
  • [29] L. I. Rudin and S. Osher, “Total variation based image restoration with free local constraints,” in IEEE ICIP, 1994.
  • [30] D. Krishnan and R. Fergus, “Fast Image Deconvolution using Hyper-Laplacian Priors,” in Advances in Neural Information Processing Systems, 2009, pp. 1033–1041.
  • [31] M. Lebrun, A. Buades, and J.-M. Morel, “Implementation of the "Non-Local Bayes" (NL-Bayes) Image Denoising Algorithm,” Image Proc. OnLine, vol. 3, pp. 1–42, 2013.
  • [32] G. Facciolo, N. Pierazzo, and J.-M. Morel, “Conservative Scale Recomposition for Multiscale Denoising (The Devil is in the High Frequency Detail),” SIAM J Imaging Sci, vol. 10, no. 3, pp. 1603–1626, jan 2017.
  • [33] K. Zhang, W. Zuo, and L. Zhang, “Ffdnet: Toward a fast and flexible solution for cnn based image denoising,” IEEE Trans. Image Process., 2018.
  • [34] H. Badri and H. Yahia, “Handling noise in image deconvolution with local/non-local priors,” in IEEE ICIP, 2014.
  • [35] M. Tassano, J. Delon, and T. Veit, “An Analysis and Implementation of the FFDNet Image Denoising Method,” Image Proc. OnLine, vol. 9, pp. 1–25, 2019.
  • [36] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in CVPR, 2009.
  • [37] J. Anger, G. Facciolo, and M. Delbracio, “Estimating an image’s blur kernel using natural image statistics, and deblurring it: An analysis of the goldstein-fattal method,” Image Proc. OnLine, vol. 8, pp. 282–304, 2018.