Solar hard X-ray imaging by means of Compressed Sensing and Finite Isotropic Wavelet Transform

M. A. Duval-Poo: Dipartimento di Matematica, Università degli Studi di Genova, Via Dodecaneso 35, 16146 Genova, Italy
M. Piana: Dipartimento di Matematica, Università degli Studi di Genova, Via Dodecaneso 35, 16146 Genova, Italy; CNR-SPIN, Via Dodecaneso 33, 16146 Genova, Italy
A. M. Massone: CNR-SPIN, Via Dodecaneso 33, 16146 Genova, Italy
Key words: Sun: flares – Sun: X-rays, gamma rays – Techniques: image processing


Aims: This paper shows that compressed sensing realized by means of regularized deconvolution and the Finite Isotropic Wavelet Transform is effective and reliable in hard X-ray solar imaging.

Methods: The method utilizes the Finite Isotropic Wavelet Transform with the Meyer function as the mother wavelet. Further, compressed sensing is realized by optimizing a sparsity-promoting regularized objective function by means of the Fast Iterative Shrinkage-Thresholding Algorithm. Finally, the regularization parameter is selected by means of the Miller criterion.

Results: The method is applied to both synthetic data mimicking the Spectrometer/Telescope for Imaging X-rays (STIX) measurements and experimental observations provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The performance of the method is compared with the results provided by standard visibility-based reconstruction methods.

Conclusions: The results show that the application of the sparsity constraint and the use of a continuous, isotropic framework for the wavelet transform provide notable spatial accuracy and significantly reduce the ringing effects due to the instrument point spread functions.

1 Introduction

Imaging spectroscopy is a powerful tool for exploring the physics underlying particle acceleration and transport in solar flares. In order to realize imaging spectroscopy in the hard X-ray range, in 2002 NASA launched the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) (Lin et al. 2002), whose data have resulted in hard X-ray images of unprecedented angular and energy resolution. In a nutshell, the RHESSI rotating collimators modulate the X-ray flux coming from the Sun, providing as a result sparse samples of its Fourier transform, named visibilities, picked up at specific points of the Fourier plane, named the (u,v) plane in this context.

The Spectrometer/Telescope for Imaging X-rays (STIX) (Benz et al. 2012) is one of the ten instruments in the payload of Solar Orbiter, which will be launched by ESA close to the Sun in 2018. Analogously to RHESSI, the main goal of STIX is to measure hard X-ray photons emitted during solar flares in order to determine the intensity, spectrum, timing and location of accelerated electrons near the Sun. The imaging system that characterizes this device relies on the Moiré pattern concept (Oster et al. 1964) and, similarly to RHESSI, it provides a sampling of the Fourier transform of the photon flux in the (u,v) plane (Giordano et al. 2015).

For both RHESSI and STIX, image reconstruction is needed to determine the actual spatial photon flux distribution from the few Fourier components acquired by the hard X-ray collimators, and several methods have been developed to this end (Högbom 1974; Cornwell & Evans 1985; Aschwanden & Schmahl 2002; Bong et al. 2006; Massone et al. 2009), but none of them exploits a methodology that has been widely applied in astronomical imaging in the last decade, i.e. compressed sensing (Donoho 2006; Candes & Wakin 2008; Bobin et al. 2008). The present paper describes a possible use of compressed sensing for regularized deconvolution in RHESSI and STIX imaging. In order to work, compressed sensing requires data incoherency and a sparse representation of the solution of the image reconstruction problem. Both RHESSI and STIX sample the Fourier domain in a way characterized by a notable level of incoherency; on the other hand, a typical hard X-ray image configuration is made of few, simple shapes (mainly, foot-points and loops) and therefore it is straightforward to represent it as the superposition of a small number of basis functions. It is well known that an advantageous approach to realize compressed sensing is to use a wavelet transform, since wavelets can provide, in addition to compression, a multi-scale signal representation.

Most wavelet implementations are associated with multi-resolution analysis (MRA) (Mallat 1999), mainly because of their computational effectiveness. However, such implementations are far from optimal for applications like filtering and deconvolution, owing to the fact that they are not redundant, i.e., the dimension of the image decreases at coarser scales (Starck et al. 2007). As an alternative to MRA, the Isotropic Undecimated Wavelet Transform (IUWT) is rather often utilized in radio-interferometry (Li et al. 2011; Garsden et al. 2015), and this for two main reasons: first, it provides redundancy and, second, it is better suited for restoring astronomical sources (e.g., stars, galaxies, flares), which are mostly isotropic or quasi-isotropic.

IUWT relies on a discrete wavelet transform path, which is not fully appropriate when the input data are provided by a rather sparse sampling of the frequency domain, as in visibility-based hard X-ray imaging. Therefore, in the present paper we built a 2D isotropic wavelet transform that follows the continuous wavelet transform path. Specifically, our Finite Isotropic Wavelet Transform (FIWT) is inspired by the shearlet transform implementation proposed by Häuser & Steidl (2014) and is a redundant transform, which can be effectively applied in deconvolution problems like the RHESSI and STIX ones. This wavelet system is built in the frequency domain by using a 2D isotropic extension of the 1D Meyer mother wavelet (Mallat 1999), but other functions can be used with comparable results (Mallat 1999; Portilla & Simoncelli 2000). In order to reduce the numerical instability due to the ill-posedness of the deconvolution problem, we formulated a sparsity-enhancing regularized version of the FIWT multi-scale decomposition, which we named the Finite Isotropic waVElet Compressed Sensing method (5-CS), and we used it to obtain reconstructions of hard X-ray images from synthetic STIX and experimental RHESSI data. We also point out that our approach adopts the analysis prior formulation instead of the synthesis prior formulation followed in the case of radio-interferometry visibilities (Li et al. 2011).

Finally, it is worth noting that an alternative way to realize compressed sensing in imaging is to use a dictionary made of few shapes, replicated many times for different scales and positions, and then to realize sparsity with respect to this dictionary. However, 5-CS realizes compressed sensing utilizing a continuous wavelet transformation and therefore it does not need any catalogue of basis images to work. This has two advantages: first, it does not require knowing in advance all possible source shapes for the construction of a catalogue; second, and more importantly, it avoids the higher computational demand of catalogue-based compressed sensing.

The paper is organized as follows. In Section 2 we formally set up the imaging problem for RHESSI and STIX. Section 3 defines the FIWT and its implementation. Section 4 describes the image reconstruction method based on FIWT and compressed sensing and Section 5 discusses the results obtained starting from STIX and RHESSI visibilities. Our conclusions are offered in Section 6.

2 The hard X-ray reconstruction problem

This section provides a quick overview of the model of data formation for RHESSI and STIX.

In the energy domain, RHESSI data range from a few keV to a few MeV with an energy resolution of around 1 keV. The imaging module is composed of nine Rotating Modulation Collimators (RMCs), each one formed by a pair of equally spaced fine grids placed in front of the detecting device. Each pair is composed of identical grids characterized by a given pitch, different from the ones characterizing all the other pairs of grids. The rotational motion of the spacecraft around its own axis causes a periodic modulation of the incident flux. As a result (Hurford et al. 2002), the instrument samples the (u,v) plane according to nine circles, as shown in Figure 1 (a). It is worth mentioning that the number of samples in each circle is not fixed but determined in an optimal way during the computation procedure of the visibilities.

Figure 1: Sampling of the (u,v) plane performed by RHESSI (left panel) and STIX (right panel). For better visualization, panel (a) shows the sampling starting from detector 3.

STIX is formed by 30 detectors recording X-ray photons in the range 4 – 150 keV. On each detector, the incident flux is modulated by means of a sub-collimator formed by two distant grids with slightly different pitches and slightly different orientations. The effect of this grid configuration is to create the superposition of two spatial modulations, named a Moiré pattern. The recording process on the detector associated with each Moiré pattern provides a specific STIX visibility. Therefore, the STIX recording hardware samples the frequency domain at 30 different points placed on spirals, as shown in Figure 1 (b). A rigorous description of the data formation process in STIX can be found in Giordano et al. (2015).

As a consequence of this kind of hardware design, the mathematical model for data formation in RHESSI and STIX can be formulated in matrix form as

$$\mathbf{V} = \mathbf{H} \circ (\mathbf{F}\mathbf{f}) \ , \qquad (1)$$

where the original photon flux image to reconstruct is lexicographically re-ordered to define the vector $\mathbf{f} \in \mathbb{R}^{N}$, with $N$ the number of image pixels. Further, $\mathbf{V}$ is a sparse vector whose non-zero components correspond to the measured visibilities, $\mathbf{F}$ is the discretized Fourier transform, $\mathbf{H}$ is a sparse binary matrix that realizes the sampling in the (u,v) plane, and $\circ$ denotes the entry-wise product. If we apply the discretized inverse Fourier transform $\mathbf{F}^{-1}$ on both sides of (1) we obtain

$$\mathbf{F}^{-1}\mathbf{V} = (\mathbf{F}^{-1}\mathbf{H}) * \mathbf{f} \ , \qquad (2)$$

where $*$ is the convolution operator. Therefore, the reconstruction of the flux image $\mathbf{f}$ from the given visibilities can be essentially viewed as a deconvolution problem, where $\mathbf{F}^{-1}\mathbf{H}$ is the point spread function (PSF) and $\mathbf{F}^{-1}\mathbf{V}$ is the dirty map from which the PSF blurring effect must be removed.
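The data formation model just described can be sketched numerically. The following Python/NumPy example is purely illustrative (the toy flux map and the random (u,v) mask are assumptions of this sketch, not the actual RHESSI or STIX sampling): it computes visibilities as masked Fourier samples of a toy map and verifies that the dirty map is the convolution of the PSF with the map.

```python
import numpy as np

n = 64                                  # image side; N = n*n pixels
rng = np.random.default_rng(0)

# Toy flux map: two Gaussian foot-points (illustrative, not a real flare)
y, x = np.mgrid[0:n, 0:n]
f = np.exp(-((x - 20.0)**2 + (y - 40.0)**2) / 20.0) \
  + np.exp(-((x - 44.0)**2 + (y - 24.0)**2) / 20.0)

# H: sparse binary mask selecting a few (u,v) samples; it is symmetrized
# (H[k] = H[-k mod n]) so that the visibilities of a real map are Hermitian
H = np.zeros((n, n))
H.flat[rng.choice(n * n, size=60, replace=False)] = 1.0
H = np.clip(H + np.roll(np.flip(H, axis=(0, 1)), 1, axis=(0, 1)), 0.0, 1.0)

# Visibilities: V = H o (F f), the entry-wise masked Fourier transform of f
V = H * np.fft.fft2(f)

# Dirty map and PSF: applying the inverse transform turns the entry-wise
# masking into a (circular) convolution of the flux with the PSF
dirty_map = np.fft.ifft2(V).real
psf = np.fft.ifft2(H).real
```

The convolution here is circular because of the discrete Fourier transform; the dirty map is what a deconvolution method takes as input, and the PSF sidelobes are the source of the ringing artifacts discussed later.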

3 The Finite Isotropic Wavelet Transform

Let $\{\psi_{a,\mathbf{t}}\}$ be a family of functions defined by the translation and dilation of a mother wavelet function $\psi \in L^2(\mathbb{R}^2)$, i.e.

$$\psi_{a,\mathbf{t}}(\mathbf{x}) = \frac{1}{a}\,\psi\!\left(\frac{\mathbf{x}-\mathbf{t}}{a}\right) \ , \qquad (3)$$

where $\mathbf{x}$ is a point in $\mathbb{R}^2$, and $\mathbf{t} \in \mathbb{R}^2$ and $a > 0$ are the translation and dilation parameters, respectively. The normalization factor $1/a$ ensures that $\|\psi_{a,\mathbf{t}}\|_2 = \|\psi\|_2$, where $\|\cdot\|_2$ is the norm in $L^2(\mathbb{R}^2)$. The wavelet transform of a function $f \in L^2(\mathbb{R}^2)$ is defined by (Mallat 1999)

$$W_\psi f(a,\mathbf{t}) = \langle f, \psi_{a,\mathbf{t}}\rangle = \langle \hat f, \hat\psi_{a,\mathbf{t}}\rangle \ , \qquad (4)$$

where $\langle\cdot,\cdot\rangle$ is the scalar product in $L^2(\mathbb{R}^2)$ and $\hat f$ and $\hat\psi$ are the Fourier transforms of $f$ and $\psi$, respectively. If the admissibility condition is satisfied, i.e.

$$C_\psi = \int_{\mathbb{R}^2} \frac{|\hat\psi(\boldsymbol{\omega})|^2}{|\boldsymbol{\omega}|^2}\, d\boldsymbol{\omega} < \infty \ , \qquad (5)$$

then it is possible to define the inverse wavelet transform as

$$f(\mathbf{x}) = \frac{1}{C_\psi}\int_0^{\infty}\!\!\int_{\mathbb{R}^2} W_\psi f(a,\mathbf{t})\, \psi_{a,\mathbf{t}}(\mathbf{x})\, d\mathbf{t}\, \frac{da}{a^3} \ . \qquad (6)$$
We constructed a new specific wavelet transform, named the Finite Isotropic Wavelet Transform (FIWT), using the 2D isotropic extension, in the Fourier domain, of a 1D symmetric function. Specifically, we started from the 1D Meyer mother function $\hat\psi_M$ (Mallat 1999) and constructed the 2D isotropic mother wavelet function as

$$\hat\psi(\boldsymbol{\omega}) = \hat\psi_M(|\boldsymbol{\omega}|) \ . \qquad (7)$$

Consequently, from (3) we obtained

$$\hat\psi_{a,\mathbf{t}}(\boldsymbol{\omega}) = a\, e^{-2\pi i\, \boldsymbol{\omega}\cdot\mathbf{t}}\, \hat\psi_M(a|\boldsymbol{\omega}|) \ . \qquad (8)$$

Further, in order to span the whole (u,v) plane, we included in the wavelet framework the scaling function $\phi$ and constructed it again in the Fourier domain as

$$\hat\phi(\boldsymbol{\omega}) = \hat\phi_M(|\boldsymbol{\omega}|) \ , \qquad (9)$$

where $\hat\phi_M$ is the 1D Meyer scaling function. The FIWT is finally defined from (3), i.e.

$$W_\psi f(a,\mathbf{t}) = \mathcal{F}^{-1}\!\big[\hat f(\cdot)\, a\,\hat\psi_M(a|\cdot|)\big](\mathbf{t}) \ , \qquad (10)$$

where $\mathcal{F}^{-1}$ is the inverse Fourier transform.
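As a concrete illustration, the 2D isotropic Meyer windows can be built directly on a discrete frequency grid. The sketch below is a hypothetical implementation (the polynomial ramp and the support intervals [1/3, 2/3] and [2/3, 4/3] are the standard Meyer choices, while the dyadic scale normalization and all names are assumptions of this example): it constructs the scaling window and three wavelet scales and checks that their squared moduli sum to one, i.e. that they tile the frequency plane.

```python
import numpy as np

def nu(t):
    # Meyer auxiliary polynomial: smooth ramp from 0 at t=0 to 1 at t=1
    t = np.clip(t, 0.0, 1.0)
    return t**4 * (35.0 - 84.0*t + 70.0*t**2 - 20.0*t**3)

def meyer_scaling(r):
    # 1D Meyer scaling window (magnitude): 1 on |r| <= 1/3, decays to 0 at 2/3
    return np.where(np.abs(r) <= 2/3,
                    np.cos(0.5 * np.pi * nu(3.0 * np.abs(r) - 1.0)), 0.0)

def meyer_wavelet(r):
    # 1D Meyer wavelet window (magnitude), supported on 1/3 <= |r| <= 4/3
    r = np.abs(r)
    out = np.zeros_like(r)
    lo = (r >= 1/3) & (r <= 2/3)
    hi = (r > 2/3) & (r <= 4/3)
    out[lo] = np.sin(0.5 * np.pi * nu(3.0 * r[lo] - 1.0))
    out[hi] = np.cos(0.5 * np.pi * nu(1.5 * r[hi] - 1.0))
    return out

# Isotropic 2D extension: evaluate the 1D windows on the radius |omega|
n = 64
u, v = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
r = np.hypot(u, v)

phi_hat = meyer_scaling(r)                                # low-pass band
psi_hats = [meyer_wavelet(r / 2.0**j) for j in range(3)]  # 3 wavelet scales

# Partition of unity in energy: the filter bank tiles the frequency plane
tiling = phi_hat**2 + sum(p**2 for p in psi_hats)
```

These windows are the discrete analogue of the bands shown in Figure 2: the scaling window covers the low frequencies and each wavelet scale covers one annulus.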

In the imaging framework the flux distribution is pixelized at positions

$$\mathbf{x}_{ij} = \mathbf{x}_0 + \left(i - \frac{n}{2},\, j - \frac{n}{2}\right)\Delta \ , \qquad i,j = 0,\dots,n-1 \ ,$$

where $\mathbf{x}_0$ is the image center, $L = n\Delta$ is the image field-of-view and $\Delta$ is the pixel dimension in arcsec. With the same lexicographical re-ordering used in Section 2 we obtain the vector $\mathbf{f}$ from $\{f(\mathbf{x}_{ij})\}$, where $N = n^2$ is the number of pixels of the image to restore and the components of $\mathbf{f}$ correspond to the flux intensity in each pixel. In accordance with Section 2, the vector $\mathbf{f}$ is again the unknown of the image reconstruction problem.

In the wavelet transformation we will consider $J$ scales obtained according to the discretization

$$a_j = 2^{-j} \ , \qquad j = 0,\dots,J-1 \ .$$

On the other hand, the discretization of the translation parameter is made according to

$$\mathbf{t}_{ij} = \mathbf{x}_{ij} \ , \qquad i,j = 0,\dots,n-1 \ ,$$

i.e. the translations coincide with the pixel positions. Then, for each scale $a_j$ and for each translation $\mathbf{t}$, we construct the $N$-dimensional vector $\boldsymbol{\psi}_{j,\mathbf{t}}$, which corresponds to the pixelization and re-ordering of the wavelet (3) for $a = a_j$. Analogously, for each translation $\mathbf{t}$ we construct the $N$-dimensional vector $\boldsymbol{\phi}_{\mathbf{t}}$, which is the result of the pixelization and re-ordering of the scaling function $\phi$. This leads to the construction of a set of $N$-dimensional vectors $\{\mathbf{w}_k\}_{k=1}^{M}$, with $M = (J+1)N$. This set provides a Parseval frame, i.e., given $\mathbf{f} \in \mathbb{R}^N$, then

$$\sum_{k=1}^{M} |\langle \mathbf{f}, \mathbf{w}_k\rangle|^2 = \|\mathbf{f}\|_2^2 \ ,$$

where $\langle\cdot,\cdot\rangle$ denotes the canonical inner product in $\mathbb{R}^N$. Finally, $\mathbf{W}$ is the $M \times N$ matrix whose rows are made of the vectors $\mathbf{w}_k$ for $k = 1,\dots,M$. The fact that $\{\mathbf{w}_k\}$ is a Parseval frame implies that $\mathbf{W}^T\mathbf{W} = \mathbf{I}$ and that the forward and inverse discretized FIWT can be written as

$$\mathbf{c} = \mathbf{W}\mathbf{f} \ , \qquad \mathbf{f} = \mathbf{W}^T\mathbf{c} \ ,$$

respectively, where $\mathbf{c}$ is the column vector of dimension $M$ whose components are

$$c_k = \langle \mathbf{f}, \mathbf{w}_k\rangle \ , \qquad k = 1,\dots,M \ .$$
We conclude by observing that the computational complexity of the FIWT is around $O(J\,N \log N)$, since each sub-band can be computed by means of a Fast Fourier Transform.
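The forward and inverse discretized FIWT described above can be sketched as an FFT-based filter bank. The following Python example is a hypothetical, self-contained implementation (the Meyer windows, the scale choices, and all function names are assumptions of this sketch, not the authors' code): it computes the sub-band coefficient maps of a random image and verifies both the Parseval property and perfect reconstruction.

```python
import numpy as np

def nu(t):
    # Meyer auxiliary polynomial: smooth ramp from 0 to 1 on [0, 1]
    t = np.clip(t, 0.0, 1.0)
    return t**4 * (35.0 - 84.0*t + 70.0*t**2 - 20.0*t**3)

def meyer_scaling(r):
    # 1D Meyer scaling window (magnitude), low-pass
    return np.where(np.abs(r) <= 2/3,
                    np.cos(0.5 * np.pi * nu(3.0 * np.abs(r) - 1.0)), 0.0)

def meyer_wavelet(r):
    # 1D Meyer wavelet window (magnitude), band-pass on 1/3 <= |r| <= 4/3
    r = np.abs(r)
    out = np.zeros_like(r)
    lo = (r >= 1/3) & (r <= 2/3)
    hi = (r > 2/3) & (r <= 4/3)
    out[lo] = np.sin(0.5 * np.pi * nu(3.0 * r[lo] - 1.0))
    out[hi] = np.cos(0.5 * np.pi * nu(1.5 * r[hi] - 1.0))
    return out

def fiwt_filters(n, n_scales=3):
    # Isotropic Meyer filter bank evaluated on the discrete frequency grid
    u, v = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
    r = np.hypot(u, v)
    return [meyer_scaling(r)] + [meyer_wavelet(r / 2.0**j) for j in range(n_scales)]

def fiwt_forward(f, filters):
    # c = W f: one coefficient map per sub-band (redundant transform)
    f_hat = np.fft.fft2(f)
    return [np.fft.ifft2(f_hat * m).real for m in filters]

def fiwt_inverse(coeffs, filters):
    # f = W^T c: filter each sub-band again and sum (valid for a Parseval frame)
    return sum(np.fft.ifft2(np.fft.fft2(c) * m).real for c, m in zip(coeffs, filters))

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))
filters = fiwt_filters(64)
coeffs = fiwt_forward(f, filters)
rec = fiwt_inverse(coeffs, filters)
```

Since each sub-band costs one FFT pair, this realization reflects the O(J N log N) complexity noted above.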

Figure 2: 2D isotropic extensions of the Meyer scaling function and mother wavelet with no translation. The functions are represented in the Fourier domain for 3 scales.

4 The image reconstruction method

The reconstruction of $\mathbf{f}$ from $\mathbf{V}$ is an ill-posed problem (Bertero & Boccacci 1998) and therefore some prior knowledge about the image is required to regularize the reconstruction problem (Engl et al. 1996). A valid approach in hard X-ray imaging is to regularize the inverse problem with the $\ell_1$ norm in some transformation domain. In fact, the method we propose in this paper, which we named Finite Isotropic waVElet transform Compressed Sensing (5-CS), addresses the optimization problem

$$\min_{\mathbf{f}} \ \frac{1}{2}\,\|\mathbf{H}\circ(\mathbf{F}\mathbf{f}) - \mathbf{V}\|_2^2 + \lambda\,\|\mathbf{W}\mathbf{f}\|_1 \ . \qquad (20)$$

The data term of the objective function to minimize, $\|\mathbf{H}\circ(\mathbf{F}\mathbf{f}) - \mathbf{V}\|_2^2$, quantifies the prediction error with respect to the measurements. The regularization term, $\|\mathbf{W}\mathbf{f}\|_1$, is designed to penalize an estimate that would not exhibit the sparsity property with respect to the FIWT. The regularization parameter, $\lambda$, provides a tradeoff between fidelity to the measurements and sparsity.

Problem (20) can be numerically solved by means of any algorithm for non-linear optimization. Here we used the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) (Beck & Teboulle 2009), a solver for $\ell_1$-regularized problems widely used in many fields, mainly because of its reliability and rapid convergence. Specifically, here FISTA is implemented by imposing the conservation of the flux at each iteration and by utilizing a standard rule for optimally stopping the iterations.
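A minimal FISTA iteration for a problem of this form can be sketched as follows. For readability, the sketch uses an orthonormal FFT and promotes sparsity directly in the pixel domain (the identity in place of the FIWT operator); all names, the toy data, and the parameter values are assumptions of this example, not the actual 5-CS implementation, which also enforces flux conservation.

```python
import numpy as np

def soft(x, tau):
    # Soft-thresholding: the proximal operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(V, H, lam, n_iter=100):
    # FISTA for: min_f 0.5 * ||H o (F f) - V||_2^2 + lam * ||f||_1 ,
    # with F the orthonormal 2D DFT, so the gradient has Lipschitz constant 1
    f = np.zeros(H.shape)
    f_prev = f.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = f + ((t - 1.0) / t_next) * (f - f_prev)        # momentum extrapolation
        resid = H * np.fft.fft2(y, norm="ortho") - V
        grad = np.fft.ifft2(H * resid, norm="ortho").real  # gradient of the data term
        f_prev, f, t = f, soft(y - grad, lam), t_next      # gradient + shrinkage step
    return f

# Toy problem: a sparse map observed through a symmetrized random (u,v) mask
rng = np.random.default_rng(1)
n = 32
f_true = np.zeros((n, n))
f_true[5, 7], f_true[20, 15] = 1.0, 0.8
H = (rng.random((n, n)) < 0.4).astype(float)
H = np.clip(H + np.roll(np.flip(H, axis=(0, 1)), 1, axis=(0, 1)), 0.0, 1.0)
V = H * np.fft.fft2(f_true, norm="ortho")

f_hat = fista(V, H, lam=0.01)
```

In the actual 5-CS setting the penalty is $\|\mathbf{W}\mathbf{f}\|_1$ with $\mathbf{W}$ the FIWT operator, so the shrinkage step must handle the analysis prior rather than plain pixel-domain thresholding.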

The only open parameter of method (20) is the regularization parameter $\lambda$, which plays a crucial role in the reconstruction process. Finding an appropriate value for $\lambda$ is often not a trivial problem, depending on both the criteria adopted for assessing the quality of the reconstructed image and the amount of information known about the original image and its noise. There exist several strategies in the literature to properly estimate the regularization parameter, of which the discrepancy principle (Morozov 1984), the Miller method (Miller 1970), the generalized cross-validation (GCV) method (Golub et al. 1979), and the L-curve method (Hansen 1992) are the most used ones. In this work, we chose the Miller method (Miller 1970) because of its simplicity and because it is not computationally demanding. According to this selection criterion, if the following bounds

$$\|\mathbf{H}\circ(\mathbf{F}\mathbf{f}) - \mathbf{V}\|_2 \le \varepsilon \ , \qquad \|\mathbf{W}\mathbf{f}\|_1 \le E \qquad (21)$$

are known, or can be estimated from the dirty map, then the regularization parameter can be chosen as $\lambda = (\varepsilon/E)^2$. In order to satisfy conditions (21), the bounds are estimated by performing the first iteration of the FISTA algorithm with $\lambda = 0$. The regularization parameter is then set equal to

$$\lambda = \left(\frac{\|\mathbf{H}\circ(\mathbf{F}\mathbf{f}^{(1)}) - \mathbf{V}\|_2}{\|\mathbf{W}\mathbf{f}^{(1)}\|_1}\right)^{2} \ ,$$

where $\mathbf{f}^{(1)}$ denotes the first FISTA iterate.
We compared the performance of the Miller method with that of the L-curve criterion in the case of the reconstruction of an image of the July 23 2002 event (00:29:10 - 00:30:19 UT). More specifically, Figure 3 compares the reconstructions provided by 5-CS for four values of the regularization parameter: $\lambda = 0$, the value provided by the L-curve method, the value provided by the Miller method, and a value realizing over-regularization. The values provided by the L-curve and Miller methods are very close and therefore the corresponding reconstructions are very similar, although the Miller method requires less computational effort.
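The Miller-style selection just described can be sketched numerically. In the snippet below (a hypothetical, simplified example: the identity stands in for the FIWT operator, a positivity projection stands in for the flux-conservation constraint, and the toy data are illustrative), the two bounds are estimated from a single gradient step with the regularization switched off, and the parameter is set to the squared ratio of the estimates.

```python
import numpy as np

# Toy data: a sparse map observed through a symmetrized random (u,v) mask
rng = np.random.default_rng(1)
n = 32
f_true = np.zeros((n, n))
f_true[8, 8], f_true[20, 12] = 1.0, 0.5
H = (rng.random((n, n)) < 0.3).astype(float)
H = np.clip(H + np.roll(np.flip(H, axis=(0, 1)), 1, axis=(0, 1)), 0.0, 1.0)
V = H * np.fft.fft2(f_true, norm="ortho")

# One gradient step from zero with lambda = 0 (i.e. the dirty map),
# followed by a positivity projection standing in for flux conservation
f1 = np.maximum(np.fft.ifft2(H * V, norm="ortho").real, 0.0)

eps = np.linalg.norm(H * np.fft.fft2(f1, norm="ortho") - V)  # residual bound estimate
E = np.abs(f1).sum()                                          # sparsity bound estimate
lam = (eps / E) ** 2
```

The resulting value plays the role of the Miller parameter in the comparison of Figure 3; in the full method the sparsity bound is computed on the FIWT coefficients rather than on the pixels.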

Figure 3: July 23 2002 solar flare image reconstruction by 5-CS with different choices for $\lambda$: (a) $\lambda = 0$ (non-regularized solution), (b) $\lambda$ as provided by the L-curve method, (c) $\lambda$ as provided by the Miller method, (d) an over-regularized solution. The time interval is 00:29:10-00:30:19 UT. Detectors from 2 through 9 have been employed.

5 Experimental results

In this section we perform an experimental assessment of the proposed reconstruction algorithm. First, we evaluate our method on STIX synthetic visibilities, where the simulations are performed by means of the STIX Data Processing Software. Next, 5-CS is validated against experimental visibilities produced by real flare measurements recorded by RHESSI.

5.1 The case of STIX synthetic data

(a) Simulated Double Footpoint Flare (SDFF1)
(b) Simulated Double Footpoint Flare (SDFF2)
(c) Simulated Loop Flare (SLF1)
(d) Simulated Loop Flare (SLF2)
Figure 4: Reconstructed images for four different STIX simulated flare events. Left column: the original source shape. Middle column: reconstructions provided by CLEAN using the simulated visibilities. Right column: reconstructions provided by 5-CS.

We simulated four flaring events with different configurations. The first two images (Figure 4, (a) and (b)) have the same overall photon flux, but in the first image the two foot-points have different intensity and different size, while in the second one the brightness and dimension of the two sources are the same. The other two images (Figure 4, (c) and (d)) are characterized by a higher overall intensity and contain two loops with significantly different curvatures.

Figure 4 compares the images reconstructed by 5-CS with the ones provided by CLEAN (Högbom 1974) and Table 1 contains some physical parameters characterizing the simulated and reconstructed images (in this Table the positions and the full width at half maximum (FWHM) are measured in arcsec). These results imply that 5-CS and CLEAN recover the physical parameters with comparable accuracy, although 5-CS is able to significantly reduce the impact of spurious artifacts.

Parameter              Simulated Map   CLEAN           5-CS
Simulated Double Footpoint Flare (SDFF1)
Peak 1       X          20.0           20.4 ± 0.9      19.2 ± 1.5
             Y         -20.0          -18.9 ± 0.9     -21.8 ± 1.1
             FWHM       15.0            9.6 ± 3.4       9.0 ± 4.4
Peak 2       X          -5.0           -4.0 ± 0.7      -5.0 ± 1.2
             Y           5.0            4.8 ± 0.6       3.5 ± 2.9
             FWHM       10.0           12.8 ± 0.8      11.9 ± 1.6
Flux Ratio               2.00           1.41 ± 0.10     1.78 ± 0.42
Simulated Double Footpoint Flare (SDFF2)
Peak 1       X         -30.0          -28.9 ± 0.6     -29.3 ± 0.8
             Y           0.0            1.0 ± 0.7       0.4 ± 0.7
             FWHM       10.0           13.2 ± 0.9       9.6 ± 0.8
Peak 2       X          30.0           29.5 ± 0.5      28.0 ± 0.8
             Y           0.0           -0.1 ± 0.6      -1.4 ± 0.5
             FWHM       10.0           12.9 ± 0.6       9.3 ± 0.6
Flux Ratio               1.00           0.99 ± 0.10     1.02 ± 0.09
Simulated Loop Flare (SLF1)
Peak         X          10.0            9.0 ± 0.5       9.6 ± 0.5
             Y         -15.0          -13.6 ± 0.5     -14.6 ± 0.7
Simulated Loop Flare (SLF2)
Peak         X          10.0            9.8 ± 0.4       8.7 ± 0.7
             Y           0.0           -0.1 ± 0.3      -0.4 ± 0.7
Table 1: Physical parameters inferred from the reconstructed maps in Figure 4 and compared to the ones in the simulated maps (position and FWHM are measured in arcsec). The reconstructions have been repeated over many realizations of the STIX simulated data; the Table reports the average values of the parameters and the corresponding standard deviations.

5.2 The case of RHESSI experimental measurements

We first considered five flaring events at specific energy channels and time intervals and compared the reconstructions provided by 5-CS with the ones obtained by two standard visibility-based methods, namely CLEAN (Högbom 1974) and uv_smooth (Massone et al. 2009). In particular, we considered flaring events characterized by rather different morphologies, like double foot-points (20 February 2002), loops (15 April 2002; 13 May 2013), extended sources (31 August 2004), and extended plus compact sources (2 December 2003). The results in Figure 5 show that sparsity promotion and the use of a continuous wavelet formulation reduce the artifacts and provide a higher spatial resolution. For all experiments we used RHESSI detectors from 2 to 9 and a 3-scale decomposition for the wavelet-based deconvolution methods. Figures 6 and 7 focus on datasets acquired by RHESSI during the July 23 2002 event. In particular, Figure 6 reproduces the same analysis performed by Emslie et al. (2003) using CLEAN and clearly points out how 5-CS better preserves the sources' morphology as the energy increases, reduces the artifacts in between the different sources and maintains the image reliability particularly at high energies. On the other hand, Figure 7 shows the time evolution of the flaring emission at a fixed energy channel and seems to reject the presence of emission along a curved locus joining the northern and southern sources, in contrast to what was argued by Massone et al. (2009), but in accordance with the results in Emslie et al. (2003).

(a) February 20, 2002 – 11:06:02-11:06:34 UT – (22-26 keV)
(b) April 15, 2002 – 00:05:00-00:10:00 UT – (12-14 keV)
(c) December 02, 2003 – 22:54:00-22:58:00 UT – (18-20 keV)
(d) August 31, 2004 – 05:33:00-05:38:00 UT – (10-12 keV)
(e) May 13, 2013 – 02:04:16-02:04:48 UT – (6-12 keV)
Figure 5: Reconstructed images from different flaring events using three reconstruction methods: CLEAN (left), uv_smooth (center), and 5-CS (right). RHESSI detectors from 2 to 9 have been used to generate the visibilities.
Figure 6: Images reconstructed by 5-CS for the July 23, 2002 flaring event through different energy ranges (00:29:10-00:30:19 UT). Visibilities collected by RHESSI collimators from 2 to 9 have been used for the reconstructions.
Figure 7: Images reconstructed by 5-CS for the July 23, 2002 flaring event through different time intervals (36-41 keV energy channel). Visibilities collected by RHESSI collimators from 2 to 9 have been used for the reconstructions.

5.3 Computational performance

The experiments in this paper have been performed by means of a second generation 4-core Intel i7-2600 (3.40 GHz) CPU using IDL 8.4 on Ubuntu 16.04. The computational time complexity of 5-CS is approximately $O(T\,N \log N)$, where $N$ is the image size and $T$ is the number of iterations required by FISTA for solving the optimization problem. We did not include the number of employed scales $J$ for FIWT in the time complexity analysis, since $J$ is usually a small number.

Figure 8 shows the computational performance of 5-CS under different configurations in the case of the first STIX simulation (Figure 4(a)). In Figure 8(a) we can observe the linear relation between the elapsed time and the number of iterations for the reconstruction of an image map of fixed size and number of scales. On the other hand, Figure 8(b) shows how the computational time increases when 5-CS recovers images of increasing size, the number of iterations and of considered scales being fixed. Since our method usually solves the reconstruction problem with fewer than 100 iterations, we can conclude that the average time required by 5-CS for reconstructing a single standard-size image is between 1.5 and 2.5 seconds.

Figure 8: Computational performance of 5-CS. The computational time is compared versus (a) the number of FISTA iterations and (b) the size of the reconstructed image. The test is made using the synthetic STIX visibilities used in Figure 4(a).

6 Conclusions

Hard X-ray solar space telescopes typically provide their experimental measurements in the form of visibilities, i.e., sparse samples of the Fourier transform of the incoming flux. Therefore producing images in this setting requires the application of reconstruction methods that realize the inversion of the Fourier transform from limited data. Several methods have been utilized so far but none of them explicitly exploited compressed sensing, i.e. the use of a sparsity-enhancing penalty term in regularization. Here we introduced a wavelet-based deconvolution method promoting sparsity for hard X-ray image reconstruction from visibilities. The main aspects of this method are that

  • It relies on a continuous isotropic wavelet transform, consistently with the fact that X-ray sources are either isotropic or characterized by a slow change of shape.

  • It avoids the use of computationally demanding catalogue-based compressed sensing.

  • It realizes regularization by solving a minimization problem whose penalty term promotes sparsity.

  • It realizes numerical optimization by means of a fast iterative algorithm.

The applications to both synthetic STIX and experimental RHESSI visibilities show the reliability of the method in terms of both the spatial resolution achieved and the reduction of spurious artifacts. Finally, the computational burden required by the method is low and competitive in view of possible big data applications. The implementation of the algorithm within Solar SoftWare, which is under way, will allow the systematic use of this approach on RHESSI observations and its future application to STIX measurements.

We would like to acknowledge Federico Benvenuto for useful discussions. This research work has been supported by the ASI/INAF grant "Solar Orbiter ILWS - Supporto scientifico per la realizzazione degli strumenti METIS e SWA/DPU nelle fasi B2-C1".


  • Aschwanden & Schmahl (2002) Aschwanden, M. J. & Schmahl, E. 2002, Solar Physics, 210, 193
  • Beck & Teboulle (2009) Beck, A. & Teboulle, M. 2009, SIAM journal on imaging sciences, 2, 183
  • Benz et al. (2012) Benz, A., Krucker, S., Hurford, G., et al. 2012, in SPIE Astronomical Telescopes + Instrumentation, International Society for Optics and Photonics, 84433L
  • Bertero & Boccacci (1998) Bertero, M. & Boccacci, P. 1998, Introduction to inverse problems in imaging (CRC press)
  • Bobin et al. (2008) Bobin, J., Starck, J. L., & Ottensamer, R. 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 718
  • Bong et al. (2006) Bong, S. C., Li, L. J., Gary, D. E., & Yun, H. S. 2006, Astrophysical Journal, 636, 1159
  • Candes & Wakin (2008) Candes, E. J. & Wakin, M. B. 2008, IEEE Signal Processing Magazine, 25, 21
  • Cornwell & Evans (1985) Cornwell, T. J. & Evans, K. 1985, Astronomy and Astrophysics, 143, 77
  • Donoho (2006) Donoho, D. L. 2006, IEEE Transactions on Information Theory, 52, 1289
  • Emslie et al. (2003) Emslie, A. G., Kontar, E. P., Krucker, S., & Lin, R. P. 2003, The Astrophysical Journal, 595, L107
  • Engl et al. (1996) Engl, H. W., Hanke, M., & Neubauer, A. 1996, Regularization of inverse problems, Vol. 375 (Springer Science & Business Media)
  • Garsden et al. (2015) Garsden, H., Girard, J. N., Starck, J. L., et al. 2015, Astronomy & Astrophysics, 575, A90
  • Giordano et al. (2015) Giordano, S., Pinamonti, N., Piana, M., & Massone, A. M. 2015, SIAM Journal on Imaging Sciences, 8, 1315
  • Golub et al. (1979) Golub, G. H., Heath, M., & Wahba, G. 1979, Technometrics, 21, 215
  • Hansen (1992) Hansen, P. C. 1992, SIAM review, 34, 561
  • Häuser & Steidl (2014) Häuser, S. & Steidl, G. 2014, arXiv preprint
  • Högbom (1974) Högbom, J. 1974, Astronomy and Astrophysics Supplement Series, 15, 417
  • Hurford et al. (2002) Hurford, G. J., Schmahl, E. J., Schwartz, R. A., et al. 2002, Solar Physics, 210, 61
  • Li et al. (2011) Li, F., Cornwell, T. J., & de Hoog, F. 2011, Astronomy & Astrophysics, 528, A31
  • Lin et al. (2002) Lin, R., Dennis, B., Hurford, G., et al. 2002, Solar Physics, 210, 3
  • Mallat (1999) Mallat, S. 1999, A wavelet tour of signal processing (Academic Press)
  • Massone et al. (2009) Massone, A. M., Emslie, A. G., Hurford, G., et al. 2009, The Astrophysical Journal, 703, 2004
  • Miller (1970) Miller, K. 1970, SIAM Journal on Mathematical Analysis, 1, 52
  • Morozov (1984) Morozov, V. A. 1984, Methods for solving incorrectly posed problems (Springer New York)
  • Oster et al. (1964) Oster, G., Wasserman, M., & Zwerling, C. 1964, JOSA, 54, 169
  • Portilla & Simoncelli (2000) Portilla, J. & Simoncelli, E. P. 2000, International journal of computer vision, 40, 49
  • Starck et al. (2007) Starck, J.-L., Fadili, J., & Murtagh, F. 2007, IEEE Transactions on Image Processing, 16, 297