Solar Image Restoration with the Cycle-GAN Based on Multi-Fractal Properties of Texture Features

Peng Jia, Yi Huang, Bojun Cai, Dongmei Cai
Abstract

Texture is one of the most obvious characteristics of solar images, and it is normally described by texture features. Because textures in solar images of the same wavelength are similar, we assume that texture features of solar images are multi-fractals. Based on this assumption, we propose a purely data-based image restoration method: with several high resolution solar images as references, we use the Cycle-Consistent Adversarial Network to restore blurred images of the same steady physical process, in the same wavelength, obtained by the same telescope. We test our method with simulated and real observation data and find that it can improve the spatial resolution of solar images without loss of any frames. Because our method needs neither a paired training set nor additional instruments, it can be used as a post-processing method for solar images obtained either by seeing limited telescopes or by telescopes with ground layer adaptive optics systems.

techniques: image processing – sun: general – atmospheric effects

Peng Jia (ORCID: 0000-0001-6623-0931; robinmartin20@gmail.com)
College of Physics and Optoelectronics, Taiyuan University of Technology, Taiyuan, 030024, China
Department of Physics, Durham University, South Road, Durham, DH1 3LE, UK
Key Laboratory of Advanced Transducers and Intelligent Control Systems, Ministry of Education and Shanxi Province, Taiyuan University of Technology, Taiyuan, 030024, China

Yi Huang, Bojun Cai and Dongmei Cai
College of Physics and Optoelectronics, Taiyuan University of Technology, Taiyuan, 030024, China

1 Introduction

The imaging process of optical telescopes can be modeled by Equation 1:

$I(x,y) = O(x,y) \ast \mathrm{PSF}(x,y) + N(x,y)$   (1)

where $O(x,y)$ and $I(x,y)$ are the original and observed images, $\ast$ is the convolution operator, $\mathrm{PSF}(x,y)$ is the point spread function (PSF) of the whole optical system and $N(x,y)$ stands for the noise from the background and the detector. During real observations, many different effects introduce a variable $\mathrm{PSF}(x,y)$ and $N(x,y)$. These effects make the observed image $I(x,y)$ differ from the original image $O(x,y)$ and hamper further scientific research.
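For concreteness, the degradation of Equation 1 can be sketched in a few lines of Python; the function name and the zero-mean Gaussian noise model are illustrative assumptions, not part of the paper.

```python
# A minimal sketch of the imaging model in Equation 1. The noise model
# (zero-mean Gaussian) and its level are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def observe(original, psf, noise_sigma=0.01):
    """Blur the original image with the PSF and add background/detector noise."""
    blurred = fftconvolve(original, psf, mode="same")          # O * PSF
    noise = np.random.normal(0.0, noise_sigma, blurred.shape)  # N
    return blurred + noise                                     # I = O * PSF + N
```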

For ground based solar observations, because the exposure time is short (dozens of milliseconds) and the field of view is large (compared to the isoplanatic angle), the atmospheric turbulence and the thermal and gravity deformations of the optical system introduce a PSF with strong spatial and temporal variations. Even with the help of adaptive optics and active optics systems, the residual error still introduces a variable PSF. The variable PSF of solar images is different from that of ordinary night time astronomical observations; it is often called the short exposure PSF, because the exposure time is only dozens of milliseconds. The short exposure PSF cannot be described by any contemporary analytical PSF model, such as the Moffat or the Gaussian model, and it is the main limitation for ground based solar observations.

Several different image restoration methods have been proposed to reduce the effects of the short exposure PSF and increase the spatial resolution of astronomical images, such as the blind deconvolution algorithm (Jefferies1993), the speckle reconstruction algorithm (Labeyrie1970; Luehe1993), the phase diversity algorithm (Paxman1992; Paxman1996; Matt1994) and the multi-object multi-frame blind deconvolution algorithm (Noort2005). These methods rely on different hypotheses or priors about the PSF or the original image, such as wavefront measurements or the assumption that the image is invariant between different frames, and have achieved remarkable performance.

Texture is a fundamental characteristic of an image: it describes the spatial arrangement of gray scales in the image. Normally, texture features are used to evaluate textures. In our recent paper (Huang2019), we have shown the multi-fractal properties of texture features in solar images of different wavelengths. Based on the results of that paper, we use a Cycle-Consistent Adversarial Network (CycleGAN) to restore solar images with the multi-fractal properties as a regularization condition. Our method can restore an arbitrary number of solar images obtained by the same telescope within a few days, with only several high resolution images as references. We discuss the multi-fractal property of texture features in Section 2 and introduce our method in Section 3. In Section 4, we show the performance of our method with real and simulated observation data. We draw our conclusions and anticipate our future work in Section 5.

2 The Multi-Fractal Property of Texture Features In Solar Images

Textures are mostly related to spatially repetitive structures which are formed by several repeating elements (castelli2002image). Similar to other natural images, solar images also contain many textures, such as the granulation in TiO and the filaments in H-alpha, as shown in Figure 1. A texture feature is a description of the spatial arrangement of the gray scale in an image and is usually used to describe the regularity or coarseness of an image. Manually designed texture features have been successfully used to describe the arrangement of texture constituents in a quantitative way (tamura1978textural; manjunath1996texture; li2019automatic). However, textures in solar images are not arranged in a regular or periodic way, which makes it hard to design adequate texture features by hand. According to our experience, textures in solar images have the following properties:
1. In the same wavelength, textures from different solar images are similar.
2. For the same solar image, the shape variation of textures in the same spatial scale satisfies the same statistical law. For example, filaments bend with smooth curves instead of polylines.

These properties suggest that although texture features are not organized in a regular way in solar images, the relative weights of different texture features are stable; that is, if we measure the relative weights of texture features in a statistical way, the probability distribution in the same wavelength should be the same. We can use multi-fractal properties to describe texture features in solar images (Jia2014; peng2017discrimination). The multi-fractal property of texture features means that the spatial distribution of textures in solar images (coded by texture features) satisfies the same continuous power spectrum, and that for different scales the exponents of the spectrum are different.

Because textures are just the appearance of the physical process behind them, and the physical process that generates the textures does not change, the multi-fractal property of texture features is valid for all solar images obtained in the same wavelength; i.e., texture features of solar images in the same wavelength satisfy the same power spectrum. Recent works suggest that neural networks are very good at representing complex functions. In this paper, we take advantage of this property and propose to use neural networks to evaluate multi-fractal properties from solar images. The multi-fractal properties are encoded in the neural network and can be visualized through the feature maps of each layer. In our recent paper (Huang2019), we show the multi-fractal properties of G-band images. In this paper, the multi-fractal properties are used as a regularization condition of the CycleGAN, just like other regularization conditions used in traditional deconvolution algorithms, such as the total-variation condition. Therefore, we do not try to extract the multi-fractal property of texture features directly; instead, we use many high resolution images to represent it in a statistical way.
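Although the multi-fractal property is not extracted explicitly, the underlying power-law assumption can be probed with a standard diagnostic. The sketch below computes a radially averaged power spectrum of an image; it is an illustrative check under our own assumptions (function name, binning), not part of the CycleGAN pipeline.

```python
# A hedged diagnostic sketch: radially averaged power spectrum of a 2-D image,
# often used to check for the power-law behaviour that fractal-like textures imply.
import numpy as np

def radial_power_spectrum(image, n_bins=64):
    """Return the radially averaged power spectrum P(k) of a 2-D image."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)            # radial frequency of each pixel
    bins = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    idx = np.digitize(r.ravel(), bins)
    p = power.ravel()
    return np.array([p[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(1, n_bins + 1)])
```

A roughly straight line of log P(k) against log k would indicate the power-law scaling discussed above.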

In real observations, we can obtain many high resolution solar images through speckle reconstruction, phase diversity, multi-object multi-frame deconvolution or observations with diffraction-limited adaptive optics systems, such as single-conjugate or multi-conjugate adaptive optics systems (Rao2018). The spatial resolution of these images is around the diffraction limit of the telescope, so they reveal the highest spatial frequency that the observation data contain. As the texture features from high resolution solar images are realizations of the theoretical multi-fractal property, we can use the multi-fractal properties of texture features from high resolution images as a restriction condition for image post-processing methods, as we will discuss in the next section.

Figure 1. Examples of high resolution solar images obtained with the (a) H-alpha and (b) TiO filters of the New Vacuum Solar Telescope (NVST). Besides the large structures, there are many textures in these images. One can easily tell the difference between images from these two wavelengths, because textures in the same wavelength are relatively stable.

3 The CycleGAN for Image Restoration with the Multi-fractal Property

3.1 Introduction of the CycleGAN

The deep convolutional neural network (DCNN) is a type of deep learning framework that is widely used in image restoration (xu2014deep; wieschollek2016end-to-end; zhang2017learning). For solar images, several different DCNNs have been proposed for image restoration or enhancement (Diazbaso2018; Asensio2018). These methods are based on supervised learning, which requires pairs of high resolution and blurred images as a training set to model the degradation process, i.e., the PSF. However, for real observations, obtaining such a training set is hard. Besides, the number and diversity of images in the training set are usually not large enough to represent the different image degradation processes. A trained DCNN will output unacceptable results when blurred images have a different PSF from that of the training set. The requirement of many paired images in the training set limits wider application of these image restoration methods.

The generative adversarial network (GAN) is a generative model (goodfellow2014generative) which contains two DNNs: a generator G and a discriminator D. Given a real data set R, G tries to create fake data that look like the genuine data from R, while D tries to discriminate the fake data from the genuine data. The GAN can be trained effectively with the back-propagation algorithm even when there are only limited training data. For image restoration of galaxies, a GAN has been successfully trained with only 4105 pairs of training images (Schawinski2017). However, as we discussed above, the GAN also models the degradation process from these training images. For solar observations, because the atmospheric turbulence induced short exposure PSF is much more complex than the long exposure PSF, the number of training images required by the GAN increases greatly and the performance of the GAN is strongly influenced by limited training data.

Limited training data is also a problem in other image related tasks. zhu2017unpaired proposed the Cycle-Consistent Adversarial Network (CycleGAN) to solve this problem. The CycleGAN is an unsupervised learning algorithm which contains a pair of generative adversarial networks. Given two sets of images, one GAN learns the image mapping and the other GAN learns the inverse mapping. Under the constraint that an image after mapping and inverse mapping should be similar to itself, and vice versa (the cycle consistency loss), the CycleGAN can restore blurred images directly with high resolution images as references. It should be noted that, when used for image restoration, although the supervised DCNN, the GAN and the CycleGAN all try to learn the restoration function, the CycleGAN has a very different hypothesis from the other methods: the CycleGAN is constrained by the probability distribution of the data (the multi-fractal property of texture features in this paper), while the other methods are constrained by the blur properties contained in pairs of training images. The detailed structure of the CycleGAN used in this paper is discussed in the next subsection.

3.2 Structure of the CycleGAN for Solar Image Restoration

Figure 2. The architecture of the CycleGAN used in this paper. The left panel shows the basic structure of the CycleGAN. I stands for the observed image and O stands for the reference high resolution image. Restore and PSF stand for the two functions to be learned: the restoration function and the PSF. $D_I$ and $D_O$ are two discriminators used to evaluate the generator outputs. The middle and right panels show the two learning processes in the CycleGAN. The blue dots are blurred images and the red dots are high resolution images. The CycleGAN restores I to O' and then blurs O' back to I', and, vice versa, blurs O to I' and restores it back to O'. The cycle-consistency loss is introduced to make sure that these translations do not change the image.

Figure 3. The architecture of the generator (a) and the discriminator (b). The generator, shown in panel (a), consists of convolutional layers (Conv, yellow), instance normalization layers (IN, red), transposed convolutional layers (ConvT, blue) and residual blocks (purple). The discriminator, shown in panel (b), consists of convolutional layers, instance normalization layers and a sigmoid output layer (dark purple). The high resolution image is transformed into a blurred image by the generator, and the blurred image is sent to the discriminator for discrimination. There are two sets of the generator and the discriminator in the CycleGAN and they have the same structure.

The structure of the CycleGAN used in this paper is shown in Figure 2 (the complete code used in this paper is written in the Python programming language with the PyTorch package and can be downloaded from http://aojp.lamost.org). Because short exposure PSFs in solar observations have very complex structures, we use a very deep CNN as the generator, as shown in Figure 3 and Appendix A. This generator is inspired by zhu2017unpaired. However, since the CycleGAN must learn the short exposure PSF, which has complex structures, we modify it and use smaller convolution kernels to increase its representation ability. For the discriminator, we use an ordinary CNN of the kind normally used in image style transfer (isola2017image-to-image; Yi2017DualGAN; zhu2017unpaired).

As shown in Figure 2, $I$ and $O$ are the sets of blurred images and high resolution images. One GAN in the CycleGAN tries to learn the restoration function; it has a generator $G_{R}$ and a discriminator $D_{O}$ and can be written as $G_{R}: I \rightarrow O$. The other GAN tries to learn the PSF; it has a generator $G_{P}$ and a discriminator $D_{I}$ and can be written as $G_{P}: O \rightarrow I$. We apply the ordinary adversarial loss to both of these GANs:

$\mathcal{L}_{GAN}(G_{R}, D_{O}) = \mathbb{E}_{o \sim p(O)}[\log D_{O}(o)] + \mathbb{E}_{i \sim p(I)}[\log(1 - D_{O}(G_{R}(i)))]$   (2)
$\mathcal{L}_{GAN}(G_{P}, D_{I}) = \mathbb{E}_{i \sim p(I)}[\log D_{I}(i)] + \mathbb{E}_{o \sim p(O)}[\log(1 - D_{I}(G_{P}(o)))]$   (3)

where $\mathbb{E}_{o \sim p(O)}$ stands for the expectation over the probability distribution of $O$, and vice versa for $\mathbb{E}_{i \sim p(I)}$. These two adversarial losses make the distributions generated by the two generators close to the real distributions of $O$ and $I$. However, because the generator is very complex, with the adversarial loss alone it could learn other mappings that also match the distribution of $O$ or $I$. To further restrict the space of possible mapping functions, we use the cycle consistency loss to constrain the solution space:

$\mathcal{L}_{cyc}(G_{R}, G_{P}) = \mathbb{E}_{i \sim p(I)}[\| G_{P}(G_{R}(i)) - i \|_{1}] + \mathbb{E}_{o \sim p(O)}[\| G_{R}(G_{P}(o)) - o \|_{1}]$   (4)

where $\| \cdot \|_{1}$ stands for the 1-norm. The cycle consistency loss guarantees that, for each image, the CycleGAN will bring it back to its original value:

$G_{P}(G_{R}(i)) \approx i$   (5)
$G_{R}(G_{P}(o)) \approx o$   (6)

Because the image restoration algorithm should not flip the gray values between different pixels, we use the identity loss to constrain the contrast of the image and prevent rapid changes of gray scale between different pixels:

$\mathcal{L}_{identity}(G_{R}, G_{P}) = \mathbb{E}_{o \sim p(O)}[\| G_{R}(o) - o \|_{1}] + \mathbb{E}_{i \sim p(I)}[\| G_{P}(i) - i \|_{1}]$   (7)

At last, we calculate the total variation of $G_{R}(i)$ and $G_{P}(o)$ and use it as the total variation loss to improve the image quality and reduce the artifacts generated by the CycleGAN:

$\mathcal{L}_{TV} = \| \nabla_{h} G_{R}(i) \|_{2} + \| \nabla_{v} G_{R}(i) \|_{2} + \| \nabla_{h} G_{P}(o) \|_{2} + \| \nabla_{v} G_{P}(o) \|_{2}$   (8)

where $\| \cdot \|_{2}$ stands for the 2-norm and $\nabla_{h}$ and $\nabla_{v}$ are the horizontal and vertical gradients of these images. We calculate the weighted summation of the above loss functions and use the total loss defined below to train the CycleGAN, where $\lambda$ is the relative weight of the cycle consistency loss:

$\mathcal{L} = \mathcal{L}_{GAN}(G_{R}, D_{O}) + \mathcal{L}_{GAN}(G_{P}, D_{I}) + \lambda \mathcal{L}_{cyc} + \mathcal{L}_{identity} + \mathcal{L}_{TV}$   (9)
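To make Equations 2-9 concrete, the following is a hedged PyTorch sketch of the combined generator-side loss. The module names (G_restore, G_psf, D_O, D_I), the binary cross-entropy form of the log-based adversarial terms and the default weight lambda_cyc are our assumptions, not the released code.

```python
# A hedged sketch of the total training loss in Equation 9 (generator side).
import torch
import torch.nn.functional as F

def total_loss(G_restore, G_psf, D_O, D_I, i_blur, o_high, lambda_cyc=10.0):
    """Weighted sum of the losses in Equations 2-9; lambda_cyc is an assumption."""
    o_fake = G_restore(i_blur)   # restored image O'
    i_fake = G_psf(o_high)       # re-blurred image I'

    # Adversarial terms (Eqs. 2 and 3): push generated images towards "real"
    d_o, d_i = D_O(o_fake), D_I(i_fake)
    adv = (F.binary_cross_entropy(d_o, torch.ones_like(d_o))
           + F.binary_cross_entropy(d_i, torch.ones_like(d_i)))

    # Cycle-consistency loss (Eq. 4): Eqs. 5 and 6 should hold approximately
    cyc = F.l1_loss(G_psf(o_fake), i_blur) + F.l1_loss(G_restore(i_fake), o_high)

    # Identity loss (Eq. 7): generators should not alter images of their own domain
    idt = F.l1_loss(G_restore(o_high), o_high) + F.l1_loss(G_psf(i_blur), i_blur)

    # Total-variation loss (Eq. 8) on both generated images
    def tv(img):
        dh = img[..., :, 1:] - img[..., :, :-1]   # horizontal gradient
        dv = img[..., 1:, :] - img[..., :-1, :]   # vertical gradient
        return dh.norm(p=2) + dv.norm(p=2)

    return adv + lambda_cyc * cyc + idt + tv(o_fake) + tv(i_fake)
```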

3.3 Other Restriction Conditions for the CycleGAN in Image Restoration

Because the CycleGAN models the degradation process according to the statistical probability distribution of texture features in images, the restrictions on this model lie both in the images and in the degradation process. First of all, the CycleGAN does not have any paired training images as supervision; it is supervised by the two sets $O$ and $I$, where $O$ and $I$ are high resolution images and blurred images respectively. They should satisfy the following properties. For $O$ (a preprocessing sketch follows this list):
1. High resolution images need to have the same or smaller pixel scale than that of the blurred images. We then down-sample all the high resolution images to the same pixel scale as the blurred images in data set $I$.
2. Apparent structures should be removed from the high resolution images, or at least kept small, because apparent structures such as sunspots have different textures and will change the multi-fractal properties.
3. We need enough images with textures to represent the multi-fractals in a statistical way. According to our experience, at least 100 frames of reference images are required; however, this is still far fewer than ordinary DNN based image restoration methods need.
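Below is a minimal sketch of preparing the reference set O along the lines of items 1 and 3, assuming the high resolution images are NumPy arrays. The scale factor, patch size and crop count are illustrative assumptions, and the removal of apparent structures (item 2) is assumed to be done beforehand.

```python
# A hedged sketch of building the reference set O: down-sample to the blurred
# pixel scale (item 1) and crop texture-rich patches (item 3).
import numpy as np
from scipy.ndimage import zoom

def make_reference_set(high_res_images, scale=0.5, patch=128, n_per_image=100, rng=None):
    rng = rng or np.random.default_rng(0)
    patches = []
    for img in high_res_images:
        img = zoom(img, scale)  # match the pixel scale of the blurred data (item 1)
        for _ in range(n_per_image):
            y = rng.integers(0, img.shape[0] - patch)
            x = rng.integers(0, img.shape[1] - patch)
            patches.append(img[y:y + patch, x:x + patch])  # random texture crops
    return np.stack(patches)
```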

As the CycleGAN models the restoration function and the PSF, which have very strong spatial and temporal variations, we need to set several restrictions on $I$ to make the CycleGAN robust in real applications (a sketch of the patch-based restoration follows this list):
1. When restoring a single frame of a solar image, it is better to divide it into smaller images with sizes of around dozens of arcsec.
2. For several continuous frames of solar images, it is better to cut the regions of interest (with sizes of around dozens of arcsec) from these images and directly restore these temporally continuous small images with the CycleGAN.
3. To reduce the image processing time, it is better to train the CycleGAN with texture-rich small images cut from the blurred images. After training, the restoration generator $G_{R}$ can be used directly to restore all the blurred images.
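The following is a hedged sketch of restoring a sequence of region-of-interest cut-outs with a trained restoration generator, as in items 2 and 3; the function name and tensor layout are assumptions.

```python
# A hedged sketch of restoring temporally continuous cut-outs with a trained
# restoration generator G_restore.
import torch

@torch.no_grad()
def restore_frames(G_restore, frames):
    """frames: float tensor of shape (n_frames, 1, H, W), H and W a few dozen arcsec."""
    G_restore.eval()
    return torch.stack([G_restore(f.unsqueeze(0)).squeeze(0) for f in frames])
```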

Last but not least, the reference images and the blurred images need to have the same multi-fractal property; requiring images taken in the same wavelength band within a few days is strict enough for our method. We tested our method with real observation data and found that the neural network used in this paper is complex enough for the image restoration task, because we did not find any images that the CycleGAN failed to restore. With the above restrictions, the CycleGAN can be effectively trained within several thousand iterations (a hedged sketch of the training loop follows). In the next section, we will show implementations of our algorithm.
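For completeness, here is a hedged sketch of the training loop, reusing the total_loss sketch from Section 3.2; the optimizer settings, label conventions and the assumption that the loader yields independently drawn (blurred, high resolution) pairs are ours, not the paper's.

```python
# A hedged sketch of CycleGAN training: alternate generator and discriminator
# updates for a fixed number of iterations.
import itertools
import torch
import torch.nn.functional as F

def d_loss(D, real, fake):
    """Discriminator loss: real images labelled 1, generated images labelled 0."""
    p_real, p_fake = D(real), D(fake.detach())
    return (F.binary_cross_entropy(p_real, torch.ones_like(p_real))
            + F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))

def train(G_restore, G_psf, D_O, D_I, loader, n_iter=6000, lr=2e-4):
    opt_G = torch.optim.Adam(
        itertools.chain(G_restore.parameters(), G_psf.parameters()), lr=lr)
    opt_D = torch.optim.Adam(
        itertools.chain(D_O.parameters(), D_I.parameters()), lr=lr)
    for i_blur, o_high in itertools.islice(itertools.cycle(loader), n_iter):
        opt_G.zero_grad()
        total_loss(G_restore, G_psf, D_O, D_I, i_blur, o_high).backward()  # Eq. 9
        opt_G.step()

        opt_D.zero_grad()
        (d_loss(D_O, o_high, G_restore(i_blur))
         + d_loss(D_I, i_blur, G_psf(o_high))).backward()
        opt_D.step()
```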

4 Implementations of the CycleGAN for Image Restoration

4.1 Performance Evaluation with Simulated Observation Data

Over-fitting is a major problem that could limit the performance of our algorithm. Because the CycleGAN is a generative model, over-fitting would make the generator remember the structure of the high resolution images and generate fake structures during image restoration. To test our algorithm, we use the CycleGAN to restore simulated blurred images. Two sets of images are used in this paper: observations carried out in the H-alpha wavelength between 02 April 2018 and 03 April 2018, and observations carried out in the TiO band on 17 November 2014. Both data sets were observed by the NVST (liu2014new): the H-alpha data at 656.28 nm with a bandpass of 0.025 nm and a pixel scale of 0.136 arcsec, and the TiO data at 705.8 nm with a bandpass of 1 nm and a pixel scale of 0.052 arcsec (for more details, please refer to http://fso.ynao.ac.cn/cn/introduction.aspx?id=8). These images are restored by speckle reconstruction methods (Li2015High).
From 5 high resolution H-alpha images, we crop 500 small images as references and another 5 images as test images. From 5 high resolution TiO solar images, we likewise crop 500 small images as references and another 5 images as test images. According to real observation conditions, we use the Monte Carlo method to simulate several high fidelity atmospheric turbulence phase screens with $D/r_0$ of 10 for the H-alpha data and 4 for the TiO data (jia2015simulation; jia2015real-time), where $D$ stands for the diameter of the telescope and $r_0$ for the coherence length of the atmospheric turbulence. We calculate simulated short exposure PSFs from these phase screens through far-field propagation (Basden2018The). At last, we convolve the test images with temporally continuous PSFs to generate 100 simulated blurred images each as the simulated blurred H-alpha and TiO data (a hedged sketch of this step follows).
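The convolution step can be sketched as below, assuming the simulated short-exposure PSFs are already available as NumPy arrays; the phase screen simulation and far-field propagation themselves are outside the sketch.

```python
# A hedged sketch of generating simulated blurred frames: convolve a test image
# with a sequence of simulated short-exposure PSFs.
import numpy as np
from scipy.signal import fftconvolve

def simulate_blurred_sequence(test_image, short_exposure_psfs, noise_sigma=0.0):
    frames = []
    for psf in short_exposure_psfs:
        psf = psf / psf.sum()                        # keep the total flux unchanged
        frame = fftconvolve(test_image, psf, mode="same")
        if noise_sigma > 0:
            frame = frame + np.random.normal(0.0, noise_sigma, frame.shape)
        frames.append(frame)
    return np.stack(frames)                          # shape: (n_frames, H, W)
```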

These simulated blurred images and high resolution images in the same wavelength are used to train the CycleGAN for 2000 iterations. Because every simulated blurred image has a corresponding original high resolution image, we can compare them to test our algorithm. The results are shown in Figure 4. We carefully checked these images and find that the resolution of the restored images is improved and there is no observable difference between the restored images and the original images. We also calculated the median filter-gradient similarity (MFGS; Deng2017) of the simulated blurred images, the original high resolution images and the restored images to further test our method (a sketch of the metric follows). For both the H-alpha and the TiO data, the mean MFGS of the restored images is higher than that of the blurred images and close to that of the original high resolution images. According to these results, the image quality is increased by our method.
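For reference, the sketch below implements one common formulation of the MFGS metric, in which the total gradient magnitude of the median-filtered image is compared with that of the original image; the exact formulation used by Deng2017 and the kernel size are assumptions here.

```python
# A hedged sketch of the MFGS image-quality metric, assuming the common
# gradient-magnitude-sum formulation; the median-filter size is an assumption.
import numpy as np
from scipy.ndimage import median_filter

def mfgs(image, size=3):
    def total_gradient(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy).sum()
    g = total_gradient(image)
    g_mf = total_gradient(median_filter(image, size=size))
    return 2.0 * g_mf * g / (g_mf ** 2 + g ** 2)  # approaches 1 for a sharp, noise-free image
```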

Figure 4. Simulated short exposure images, restored images and the original high resolution images. The top row shows the simulated blurred images, the middle row shows the original high resolution images and the bottom row shows the images restored by our method. The left two columns are images in the H-alpha wavelength and the right two columns are images in TiO. From these figures, we can see that the spatial resolution is increased by our method and, compared with the original high resolution images, there are no obvious artefacts in the restored images.

4.2 Performance Evaluation with Real Observation Data

In this part, we use real observation data from the NVST to evaluate the performance of the CycleGAN and give our recommendations on how to use it in real applications. According to the size of the isoplanatic angle, an image should be cut into small sub-images for restoration. However, considering the processing speed, we use larger sub-images (equivalent to around an arcminute) for restoration. These are much larger than the isoplanatic angle and the restoration quality drops slightly, but it is a trade-off we have to make. In real applications, there are two scenarios of image restoration: restoration of several continuous solar images in a small region of interest, or restoration of a single frame with a relatively large size.

For the first scenario, we use both H-alpha images observed between 02 April and 03 April 2018 and TiO images observed on 19 August 2017 to test our algorithm on small regions. For each wavelength, 500 small images are extracted from 5 frames of blurred images as $I$, and 500 small images are extracted from one speckle reconstructed image as $O$. Because the CycleGAN is deep and complex, the maximal number and size of the reference and blurred images are in practice limited by the computer (in this paper, we use a computer with two Nvidia GTX 1080 graphics cards, 128 GB of memory and two Xeon E5 2650 processors; it costs 4498 seconds to train the CycleGAN for 6000 iterations). After training, we use the restoration generator to directly restore two regions of interest in the blurred images. It costs around 1 minute to process 333 frames of these blurred images. Two frames of restored images are shown in Figure 5, and interested readers can find animated versions of these figures in the online version of this paper: 100 frames of blurred H-alpha images and 79 frames of TiO images before and after restoration, along with their MFGS values. From these figures, we can see that the resolution of the restored images is improved, and the mean MFGS is increased for both the TiO and the H-alpha data.

Figure 5. Two frames of real observation images and their corresponding restored images. The left two images are from the H-alpha wavelength and the right two images are from TiO. It is obvious that the restored images have better quality. Animated versions of these figures are available in the online version of this paper.

For the second scenario, we test the performance of our algorithm on large images. A blurred image from the NVST is shown in the left panel of Figure 6. The validation part is the image in the white box and the rest of the image is used as the training part. We extract 500 small images from the training part as $I$ and use the 500 high resolution images used above as $O$ in the CycleGAN. After 6000 iterations, the images in $I$ are restored and we use the trained restoration generator to restore the images in the white box. The results are shown in the right panel of Figure 6. We can see that the spatial resolution of the observed image is improved and the difference in image quality between the validation part and the training part is very small. The MFGS is increased from 0.78 to around 0.89 for this image.

Figure 6. A real observation image and the corresponding restored image. It is obvious that the restored image has higher spatial resolution. The MFGS is increased from 0.78 to around 0.89 in the reference region, and the MFGS in the test region of the restored image is comparable.

5 Conclusions

As more and more high resolution solar images are obtained, we propose a purely data-based image restoration method to make better use of these data. We assume that texture features of solar images in the same wavelength are multi-fractals, and we use a deep neural network, the CycleGAN, to restore blurred images with several high resolution images from the same telescope as references. Our method does not need paired images as a training set. Instead, with only several high resolution images observed in the same wavelength, our method can give promising restoration results for every frame of real observation data without any additional instruments. We use simulated blurred images to test our algorithm, compare the reconstructed images with the real images, and find that the MFGS is increased by our method. Besides, we further use our algorithm to restore real observation images. Although the image-quality improvement from our method is slightly smaller than that of speckle reconstruction, our method can restore every frame of blurred images, while the speckle reconstruction method uses many blurred images (dozens or hundreds) to obtain a single restored frame. Our method is suitable for future observation data obtained by seeing limited telescopes or telescopes with ground layer adaptive optics systems. Because our method does not make any prior assumption about the degradation process, it can also be used to restore images of other astronomical objects with textural features, such as galaxies, nebulae or supernova remnants.

The authors are grateful to the anonymous referee for his or her comments and suggestions, which have greatly improved the quality of this manuscript. The authors would like to thank Dr. Yongyuan Xiang and Professor Kaifan Ji from Yunnan Astronomical Observatory for their suggestions and for providing solar observation data from the NVST. P.J. would like to thank Professor Hui Liu and Professor Zhong Liu from Yunnan Observatory, Professor Yong Zhang from Nanjing Institute of Astronomical Optics and Technology, Dr. Alastair Basden, Dr. Tim Morris, Dr. James Osborn and Dr. Matthew Townson from Durham University, Dr. Yang Guo and Dr. Qi Hao from Nanjing University and Dr. Qinmin Zhang from Purple Mountain Observatory, who provided very helpful suggestions for this paper. This work is supported by the National Natural Science Foundation of China (NSFC) (11503018) and the Joint Research Fund in Astronomy (U1631133) under a cooperative agreement between the NSFC and the Chinese Academy of Sciences (CAS). P.J. is supported by the China Scholarship Council to study at Durham University. The data used in this paper were obtained by the New Vacuum Solar Telescope in the Fuxian Solar Observatory of Yunnan Astronomical Observatory, CAS.

APPENDIX

A Detailed Structure of the CycleGAN

Table 1: Structure of the Generator. Layers in order:

Conv2d → IN → ReLU → Conv2d → IN → ReLU → Conv2d → ResidualBlock → ResidualBlock → ResidualBlock → ResidualBlock → ResidualBlock → ConvT2d → IN → ReLU → ConvT2d → IN → ReLU → Conv2d

Conv2d is a standard convolutional layer. IN is an instance normalization layer which normalizes each image through $I' = (I - \mu)/\sigma$, where $\mu$ and $\sigma$ are the mean value and standard deviation of $I$. ReLU is the activation function. ResidualBlock is a special neural network structure whose input is fed both into its first layer and added to its output, as defined in Table 3. ConvT2d is a transposed convolutional layer which up-samples the input data through learnable weights.

Table 2: Structure of the Discriminator. Layers in order:

Conv2d → LeakyReLU(0.2) → Conv2d → IN → LeakyReLU(0.2) → Conv2d → IN → LeakyReLU(0.2) → Conv2d → Sigmoid

LeakyReLU is a leaky ReLU activation function, which has a small slope for negative values; in this paper we use a negative slope of 0.2. Sigmoid is the output layer of the discriminator and is used for classification of the input signals. All other layers have the same definitions as those in Table 1.
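Below is a hedged PyTorch sketch of the discriminator in Table 2. The kernel sizes, strides and channel counts were lost in extraction, so they are filled with common PatchGAN-style assumptions.

```python
# A hedged sketch of the Table 2 discriminator; numeric hyper-parameters are assumptions.
import torch.nn as nn

def make_discriminator(in_ch=1, base=64):
    return nn.Sequential(
        nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base * 2),
        nn.LeakyReLU(0.2),
        nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base * 4),
        nn.LeakyReLU(0.2),
        nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),
        nn.Sigmoid(),  # classification output, as in Table 2
    )
```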

Table 3: Structure of the ResidualBlock. Layers in order:

Conv2d → IN → ReLU → Conv2d → IN

The ResidualBlock was first introduced by He2015. All layers have the same definitions as those in Table 1. The input is fed into the first layer and added to the output simultaneously ($y = F(x) + x$, where $F(x)$ is the output of the last layer, $x$ is the input of the ResidualBlock and $y$ is the output of the ResidualBlock).
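Finally, here is a hedged PyTorch sketch of the ResidualBlock (Table 3) and the generator (Table 1). The kernel sizes, strides and channel counts were lost in extraction and are assumptions; the paper only notes that the kernels are smaller than in the original CycleGAN generator.

```python
# A hedged sketch of the Table 3 residual block and the Table 1 generator;
# numeric hyper-parameters are assumptions.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
            nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # y = F(x) + x, as in Table 3

def make_generator(in_ch=1, base=64, n_blocks=5):
    layers = [
        nn.Conv2d(in_ch, base, 3, padding=1), nn.InstanceNorm2d(base), nn.ReLU(),
        nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
        nn.InstanceNorm2d(base * 2), nn.ReLU(),
        nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1),
    ]
    layers += [ResidualBlock(base * 4) for _ in range(n_blocks)]  # 5 blocks, per Table 1
    layers += [
        nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2, padding=1, output_padding=1),
        nn.InstanceNorm2d(base * 2), nn.ReLU(),
        nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
        nn.InstanceNorm2d(base), nn.ReLU(),
        nn.Conv2d(base, in_ch, 3, padding=1),
    ]
    return nn.Sequential(*layers)
```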

