Sparse Faraday Rotation Measure Synthesis

M. Andrecut, J. M. Stil, and A. R. Taylor

Institute for Space Imaging Science, Department of Physics and Astronomy,
University of Calgary, Calgary, Alberta, T2N 1N4, Canada
Abstract

Faraday rotation measure synthesis is a method for analyzing multichannel polarized radio emission, and it has emerged as an important tool in the study of galactic and extra-galactic magnetic fields. The method requires the recovery of the Faraday dispersion function from measurements restricted to limited wavelength ranges, which is an ill-conditioned deconvolution problem. Here we discuss a recovery method that assumes a sparse approximation of the Faraday dispersion function in an over-complete dictionary of functions. We discuss the general case, in which both thin and thick components are included in the model, and we present the implementation of a greedy deconvolution algorithm. We illustrate the method with several numerical simulations that emphasize the effect of the covered range and sampling resolution in the Faraday depth space, and the effect of noise on the observed data.

Methods: data analysis - Techniques: polarimetric - magnetic fields

1 Introduction

Faraday rotation is a physical phenomenon in which the position angle of linearly polarized radiation propagating through a magneto-ionic medium is rotated as a function of frequency. Work on astrophysical Faraday rotation was initiated by Burn (1966), and since then several important contributions have been added to this topic (Gardner & Whiteoak, 1966; Sokoloff et al., 1998, 1999; Kronberg, 1994; Vallee, 1980; Widrow, 2002). Recently, Faraday rotation measure (RM) synthesis has been re-introduced as an important method for analyzing multichannel polarized radio data, where multiple emitting regions are present along a single line of sight (Brentjens & de Bruyn, 2005; Heald, 2009). In practice, the method requires the recovery of the Faraday dispersion function from measurements restricted to limited wavelength ranges, which is an ill-conditioned deconvolution problem that raises important computational difficulties. To date, three different approaches have been proposed to solve this problem. A first approach uses an adaptation of the Hogbom CLEAN algorithm (Hogbom, 1974) to the RM deconvolution (Heald, 2009). The second approach is wavelet-based, and assumes symmetries of the emitting regions in order to extend the observed data to the unmeasured part of the λ² domain (Frick et al., 2010). The third approach (Wiaux et al., 2009; Li et al., 2011) is based on the compressed sensing paradigm (Donoho, 2006; Candes & Tao, 2006). All these methods are more or less successful in the case of mixed problems, i.e. when both thin and thick components are included in the model. For example, in a recent paper it has been shown that RM synthesis may yield an erroneous Faraday structure in the presence of multiple, interfering RM components, even when cleaning of the Faraday spectrum is performed (Farnsworth et al., 2011). Also, to our knowledge these methods have not been evaluated in the presence of noise added to the observed data, a situation that makes the deconvolution problem even more difficult. Thus, the development of robust deconvolution methods for the recovery of the Faraday dispersion function in a given spectral range is crucial for RM synthesis applications.

Inspired by the above mentioned contributions, in this paper we discuss the case of a sparse approximation of the complex Faraday dispersion function F(φ), i.e. we assume that F(φ) can be approximated by a small number of discrete components, which can be either thin or thick. We also present the implementation of a greedy deconvolution algorithm, and we illustrate the described method with several numerical simulations that emphasize the effect of the covered range and sampling resolution in the Faraday depth space, and the effect of noise on the observed data. The numerical results show that the described method performs quite well for simple component mixtures, at typical sampling resolution values and coverage ranges in the Faraday depth space, and that it is quite robust in the presence of noise. We show that the described technique is well suited for exploratory data analysis, where prior information about the component distributions is not available, and it can be used as a complement to the previously proposed methods.

Although a sparse solution is an idealized model of a complex astrophysical system, the potential complexity of the solutions is adequate for a wide range of astrophysical situations. The sparseness requirement steers the solution to include the smallest number of components required to fit an observed Faraday depth spectrum. Double-lobed radio galaxies that are not resolved by the telescope may experience different Faraday rotation in each lobe because of differences in the foreground on scales smaller than the beam. The lobes themselves may be extended and experience differential Faraday rotation as well. A sparse solution may consist of two discrete Faraday components representing each lobe. If the data are good enough to detect differential Faraday rotation across the source, the solution may include one or more components with a finite extent in Faraday depth. Complex source structure may be built up out of a dictionary of basic thin and thick Faraday components, subject to the requirement that the solution remains sparse.

In the diffuse interstellar medium, a case where Faraday rotation of Galactic synchrotron emission is dominated by a single HII region along the line of sight is an example of a system that is well approximated with two components in Faraday depth, e.g. the circular Faraday screen discussed by Haverkorn et al. (2003) and De Bruyn et al. (2009). As in the case of double-lobed radio sources, the sparse solution is not limited to two delta functions in Faraday depth, as it can increase in complexity if warranted by the data.

The assumption of sparseness may fail if there is power over a large range of Faraday depths, comparable with the range defined by the minimum and maximum Faraday depth detectable in a survey. This may occur in some supernova remnants with complex structure and strong magnetic fields.

2 Rotation measure synthesis

In this section we give a brief description of the Faraday RM synthesis problem, following the formulation introduced in Brentjens & de Bruyn (2005).

The Faraday rotation is characterized by the Faraday depth φ (in rad m⁻²), which is defined as:

(1)   \phi(r) = 0.81 \int_{\mathrm{source}}^{\mathrm{observer}} n_e \, \mathbf{B} \cdot d\mathbf{r},

where n_e is the electron density (in cm⁻³), B is the magnetic field (in μG), and dr is the infinitesimal path length (in parsecs). We also define the complex polarization as:

(2)   P(\lambda^2) = p\, I\, e^{2i\chi(\lambda^2)} = Q(\lambda^2) + i U(\lambda^2),

where p is the fractional polarization, Q, U, I are the Stokes parameters, and χ is the polarization angle observed at wavelength λ:

(3)   \chi(\lambda^2) = \frac{1}{2} \arctan\!\left[ \frac{U(\lambda^2)}{Q(\lambda^2)} \right].

The Faraday RM is defined as the derivative of the polarization angle χ(λ²) with respect to λ²:

(4)   \mathrm{RM} = \frac{d\chi(\lambda^2)}{d\lambda^2}.

We now identify the RM with the Faraday depth φ, and we assume that the observed polarization originates from the emission at all possible values of φ, such that:

(5)   P(\lambda^2) = \int_{-\infty}^{+\infty} F(\phi)\, e^{2i\phi\lambda^2}\, d\phi,

where F(φ) is the complex Faraday dispersion function (the intrinsic polarized flux, as a function of the Faraday depth). Thus, in principle F(φ) is the inverse Fourier transform of the observed quantity P(λ²):

(6)   F(\phi) = \frac{1}{\pi} \int_{-\infty}^{+\infty} P(\lambda^2)\, e^{-2i\phi\lambda^2}\, d\lambda^2.

However, this operation is ill-defined, since we cannot observe P(λ²) for λ² < 0, and also in practice the observations are limited to an interval [λ²_min, λ²_max].

In order to deal with the above limitations, the observed polarization is defined as:

(7)   \tilde{P}(\lambda^2) = W(\lambda^2)\, P(\lambda^2),

where W(λ²) is the observation window function, with W(λ²) > 0 for λ² ∈ [λ²_min, λ²_max], and W(λ²) = 0 otherwise. Therefore, we obtain the reconstructed dispersion function:

(8)   \tilde{F}(\phi) = K \int_{-\infty}^{+\infty} \tilde{P}(\lambda^2)\, e^{-2i\phi\lambda^2}\, d\lambda^2,

where

(9)   K = \left[ \int_{-\infty}^{+\infty} W(\lambda^2)\, d\lambda^2 \right]^{-1}

is the normalization constant for the observation window. The reconstructed dispersion function can also be written as:

(10)   \tilde{F}(\phi) = F(\phi) * R(\phi),

where * is the convolution operator, and

(11)   R(\phi) = K \int_{-\infty}^{+\infty} W(\lambda^2)\, e^{-2i\phi\lambda^2}\, d\lambda^2

is the RM spread function (RMSF).

Using the shift theorem, we can also write:

(12)   \tilde{F}(\phi) = K \int_{-\infty}^{+\infty} \tilde{P}(\lambda^2)\, e^{-2i\phi(\lambda^2 - \lambda_0^2)}\, d\lambda^2,

and

(13)   R(\phi) = K \int_{-\infty}^{+\infty} W(\lambda^2)\, e^{-2i\phi(\lambda^2 - \lambda_0^2)}\, d\lambda^2,

where λ₀² is the mean of the sampled λ² values in [λ²_min, λ²_max].

The goal of the analysis is to find F(φ) from the observed values P(λ_m²) (i.e. Q(λ_m²) and U(λ_m²)) over M discrete channels λ_m², m = 1, ..., M, with the given weights w_m. Since the measured values are discrete (each value constitutes an integral over the channel centered at λ_m²), we should consider the discrete versions of the above equations, i.e.:

(14)   \tilde{F}(\phi) \simeq K \sum_{m=1}^{M} w_m\, P(\lambda_m^2)\, e^{-2i\phi(\lambda_m^2 - \lambda_0^2)},

and respectively

(15)   R(\phi) \simeq K \sum_{m=1}^{M} w_m\, e^{-2i\phi(\lambda_m^2 - \lambda_0^2)},

where K = (\sum_{m=1}^{M} w_m)^{-1}. The reconstructed function \tilde{F}(\phi) depends on the window W(λ²), which acts as a filter, and it improves substantially by increasing the coverage in the λ² space. Obviously, \tilde{F}(\phi) is a "dirty" reconstruction of F(φ), i.e. the convolution of F(φ) with R(φ), and a deconvolution step is necessary to recover F(φ).
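
To make the discrete reconstruction concrete, the following NumPy sketch evaluates Eqs. (14) and (15); the function name and its inputs (channel wavelengths squared, weights, and the Faraday depth grid) are placeholders of our choosing, not part of the original formulation.

import numpy as np

def dirty_rm_synthesis(P_obs, lam2, weights, phi):
    # "Dirty" Faraday dispersion function, Eq. (14), and RMSF, Eq. (15).
    # P_obs   : complex array (M,) of observed P(lambda_m^2) = Q + iU
    # lam2    : array (M,) of channel wavelengths squared [m^2]
    # weights : array (M,) of channel weights w_m
    # phi     : array (N,) of Faraday depths [rad m^-2]
    lam2 = np.asarray(lam2, dtype=float)
    weights = np.asarray(weights, dtype=float)
    K = 1.0 / weights.sum()                               # discrete form of Eq. (9)
    lam2_0 = np.sum(weights * lam2) / weights.sum()       # mean of the sampled lambda^2
    kernel = np.exp(-2j * np.outer(lam2 - lam2_0, phi))   # M x N derotation kernels
    F_dirty = K * (weights * P_obs) @ kernel              # Eq. (14)
    rmsf = K * weights @ kernel                           # Eq. (15)
    return F_dirty, rmsf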

3 Sparse approximation

3.1 Discrete representation

In general, the number of data points is limited by the number of independent measurement channels, and therefore there are many different potential Faraday dispersion functions consistent with the measurements (Burn, 1966; Brentjens & de Bruyn, 2005; Heald, 2009; Frick et al., 2010; Li et al., 2011; Farnsworth et al., 2011). The usual approach to resolving such ambiguities is to impose some extra constraints on the Faraday dispersion function. Our approach is based on the recently introduced framework of compressive sensing (Donoho, 2006; Candes & Tao, 2006). Compressive sensing relies on the observation that many types of signals can be well approximated by a sparse expansion in terms of a suitable basis, or dictionary of functions. The main idea of compressive sensing is that if the signal is sparse, then a small number of measurements contain sufficient information for its approximate or exact recovery. In our case, the problem is to reconstruct a sparse F(φ) from a relatively small number of measurements. Therefore, we assume that the model of F(φ) is sparse in an over-complete dictionary of functions. By over-complete we understand that the number of functions in the dictionary is larger than the number of independent observation channels. Thus, the dictionary functions may be redundant (linearly dependent), and therefore non-orthogonal. In order to give a proper formulation of this approach we need to introduce a discrete representation of the φ space.

It is known (Brentjens & de Bruyn, 2005) that, for a discretely sampled Faraday dispersion function, the full width at half maximum of the main peak of the RMSF is given by:

(16)   \delta\phi \approx \frac{2\sqrt{3}}{\Delta\lambda^2},

where Δλ² = λ²_max − λ²_min is the width of the observation interval. Also, using a uniform grid in λ² space one can estimate the maximum observable Faraday depth by:

(17)   |\phi_{\max}| \approx \frac{\sqrt{3}}{\delta\lambda^2},

where δλ² is the width of an observing channel (Brentjens & de Bruyn, 2005). This estimation of |φ_max| is only an approximation, since in reality only the frequency is sampled linearly. Therefore, in our discrete representation we consider a nonlinear grid in the λ² space: λ_m² = c²/ν_m², m = 1, ..., M, where ν_m is the central frequency of the channel m, and c is the speed of light. Also, we consider a linear grid in the φ space, where the computational window [−φ_max, φ_max], the sampling resolution Δφ ≤ δφ, and the number of grid points N are related by:

(18)   N = 2 \left\lfloor \phi_{\max} / \Delta\phi \right\rfloor + 1,

where ⌊·⌋ is the integer part.

The model of is therefore characterized by a uniform grid, , , and a vector , which is assumed sparse, i.e. it has a small number of non-zero components, corresponding to the complex amplitudes of the sources located on the grid. For example, a thin source with the amplitude , located at , will be approximated by the product of with a Dirac function , while a thick source will be characterized by a contiguous set of non-zero amplitudes in the vector , which requires a different set of adaptive functions, capable of capturing their position and extensive support in the space. The goal of the analysis is to find the vector , which is a discrete approximation of the Faraday dispersion function , from the measurements and , .
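
The grid construction described above can be sketched as follows; the oversampling factor, the evaluation of the channel width in λ² at the highest frequency, and the helper name are our own assumptions.

import numpy as np

C_LIGHT = 2.998e8  # speed of light [m/s]

def build_grids(freqs_hz, dnu_hz, oversample=2):
    # Nonlinear lambda^2 grid and linear phi grid, following Eqs. (16)-(18).
    # freqs_hz : array (M,) of channel centre frequencies [Hz]
    # dnu_hz   : width of one observing channel [Hz]
    freqs = np.asarray(freqs_hz, dtype=float)
    lam2 = (C_LIGHT / freqs) ** 2                            # nonlinear grid in lambda^2 space
    dlam2_band = lam2.max() - lam2.min()                     # Delta lambda^2, width of the band
    dlam2_chan = 2.0 * C_LIGHT**2 * dnu_hz / freqs.max()**3  # channel width in lambda^2 (approx.)

    delta_phi = 2.0 * np.sqrt(3.0) / dlam2_band              # FWHM of the RMSF main peak, Eq. (16)
    phi_max = np.sqrt(3.0) / dlam2_chan                      # maximum observable Faraday depth, Eq. (17)

    dphi = delta_phi / oversample                            # sampling resolution in phi space
    n_half = int(phi_max / dphi)                             # integer part, as in Eq. (18)
    phi = dphi * np.arange(-n_half, n_half + 1)              # N = 2*n_half + 1 grid points
    return lam2, phi, delta_phi, phi_max

For the WSRT layout of Section 5, for example, freqs_hz would span the 315 MHz to 375 MHz band quoted there.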

3.2 Dirac approximation

Since, in general, we can have N > M, the Dirac functions δ(φ − φ_n), n = 0, ..., N−1, form an over-complete dictionary in the φ space. The decomposition of F(φ) with respect to the Dirac over-complete dictionary is:

(19)   F(\phi) \approx \sum_{n=0}^{N-1} f_n\, \delta(\phi - \phi_n).

From the equations (5) and (7) we obtain:

(20)   \tilde{P}(\lambda_m^2) = w_m \sum_{n=0}^{N-1} f_n\, e^{2i\phi_n \lambda_m^2}, \quad m = 1, ..., M.

We observe that the transformation of f into \tilde{P} can be written in matrix form as follows:

(21)   \tilde{P} = W \Phi f,

where

(22)   \tilde{P} = [\tilde{P}(\lambda_1^2), ..., \tilde{P}(\lambda_M^2)]^T

is the M-dimensional complex vector of observations, Φ is the M × N matrix with the Fourier terms:

(23)   \Phi_{mn} = e^{2i\phi_n \lambda_m^2},

and W is the diagonal M × M matrix, with the diagonal elements equal to the channel weights: W_{mm} = w_m.
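
A minimal sketch of the weighted Fourier matrix of Eqs. (21)-(23) for the Dirac dictionary case is given below; the function name is ours.

import numpy as np

def measurement_matrix(lam2, phi, weights):
    # Weighted Fourier matrix W*Phi of Eqs. (21)-(23), Dirac dictionary case.
    # Row m, column n holds w_m * exp(2i * phi_n * lambda_m^2).
    lam2 = np.asarray(lam2, dtype=float)
    weights = np.asarray(weights, dtype=float)
    phase = 2j * np.outer(lam2, phi)
    return weights[:, None] * np.exp(phase)   # M x N complex matrix

The observation model of Eq. (21) is then P_tilde ≈ measurement_matrix(lam2, phi, weights) @ f for a candidate amplitude vector f.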

If we are searching for the sparsest solution possible, then the ℓ₀ norm of f:

(24)   \|f\|_0 = \lim_{p \to 0} \sum_{n=0}^{N-1} |f_n|^p
(25)   \phantom{\|f\|_0} = |\{ n : f_n \neq 0 \}|,

i.e. the number of non-zero components, should be minimized. This sparseness assumption leads to the following optimization problem:

(26)   \min_f \|f\|_0 \quad \text{subject to} \quad \tilde{P} = W \Phi f.

However, finding the minimum ℓ₀ norm is an NP-complete problem, which requires a combinatorial search of the parameter space and therefore is practically unfeasible (Donoho, 2006; Candes & Tao, 2006). A better approach is to replace the ℓ₀ norm with the ℓ₁ norm:

(27)   \|f\|_1 = \sum_{n=0}^{N-1} |f_n|,

which transforms the combinatorial problem into a convex problem that can be solved in polynomial time (Boyd & Vandenberghe, 2004), and has been shown to give solutions close to the ℓ₀ norm solutions (Chen et al., 2001). Thus, the problem can be reformulated as finding the vector f such that:

(28)   \min_f \|f\|_1 \quad \text{subject to} \quad \tilde{P} = W \Phi f.

One can see that we do not make any assumption on the number of non-zero components, we just assume that their number is smaller than the number of measurements M.

So far we have not considered the influence of noise on the observed data. We assume a complex noise vector z = [z_1, ..., z_M]^T, with the components having the real and respectively imaginary parts sampled from a normal distribution with zero mean and standard deviation σ. Thus, the transformation of f into \tilde{P} can be rewritten as:

(29)   \tilde{P} = W \Phi f + z,

and the minimization problem can be reformulated as:

(30)   \min_f \|f\|_1 \quad \text{subject to} \quad \|\tilde{P} - W \Phi f\|_2 \leq \varepsilon.

The use of the ℓ₁ norm induces sparsity in f, while the constraint ensures consistency with the data. Since \tilde{P} is observed in the presence of noise, it is reasonable not to enforce \tilde{P} = W Φ f exactly, and to stop the minimization process when the norm of the residual becomes comparable with the standard deviation of the noise (ε ≃ σ√(2M), since each of the M channels contributes independent Gaussian noise of standard deviation σ in both the real and the imaginary part).

3.3 Generalization

Dirac functions can be used to approximate thin sources only. In order to approximate thick sources we extend the dictionary by incorporating a set of functions characterized by adaptive translation and scaling properties, such that they are capable of capturing the position and the extent of thick sources in the φ space. Thus, we assume that F(φ) has a sparse approximation in an over-complete dictionary of functions u_k(φ), called atoms (Mallat & Zhang, 1993):

(31)   F(\phi) \approx \sum_{k=0}^{K-1} a_k\, u_k(\phi).

Here, K is the number of atoms in the dictionary, and only a small number of the complex coefficients a_k are assumed to be non-zero. Thus, by introducing the N × K matrix U, with the elements U_{nk} = u_k(φ_n), and the sparse complex vector a = [a_0, ..., a_{K−1}]^T, and taking into account that:

(32)   f = U a,

we obtain the following minimization problem:

(33)   \min_a \|a\|_1 \quad \text{subject to} \quad \|\tilde{P} - A a\|_2 \leq \varepsilon,

where

(34)   A = W \Phi U

is a complex M × K matrix. In this more general case, the goal is to find the sparse vector a in the dictionary space. Obviously, when the dictionary is reduced to the Dirac basis we have K = N and U = I, where I is the N × N identity matrix, and therefore A reduces to the weighted Fourier matrix, A = W Φ.

3.4 Over-complete dictionaries

We should note that an over-complete dictionary that leads to sparse representations can be chosen as a pre-specified set of analysis functions (wavelets, Gaussian packets, Gabor functions etc.), or designed by adapting its content to a given set of signal examples (Candes & Tao, 2006; Mallat & Zhang, 1993). The success of such dictionaries in applications depends on how well they can sparsely describe the signals in question. A general family of analysis functions can be obtained by scaling and translating a single normalized window function u(φ), with ‖u‖ = 1. Therefore, for any scale s > 0 and translation t we define the atom of the dictionary as follows:

(35)   u_{s,t}(\phi) = \frac{1}{\sqrt{s}}\, u\!\left( \frac{\phi - t}{s} \right).

Therefore, the index of the atom function depends on both the scale and the translation parameters: k = (s, t). Thus, in order to represent F(φ) in the dictionary, we need to select an appropriate countable subset of atoms u_k, k = (s_k, t_k), such that F(φ) can be written as a linear expansion. Depending on the choice of the atoms u_k, the expansion coefficients a_k will give explicit information about the behavior of F(φ). For example, we should note here that different wavelet transforms correspond to different families of atoms. In our definition, we do not limit the dictionary to a single wavelet basis; on the contrary, we consider an over-complete set, which may also contain different concatenated families (sub-dictionaries) of such analysis functions. In order to illustrate this approach numerically, let us consider the boxcar dictionary, defined as:

(36)   u(\phi) = \begin{cases} 1, & 0 \leq \phi < 1, \\ 0, & \text{otherwise}. \end{cases}

An important characteristic of the boxcar dictionary is that it can capture sources with arbitrary thickness. Another advantage is its easy discretization. In our case, the discretization grid has N points with the sampling resolution Δφ. Thus, assuming that the maximum width of a boxcar atom is JΔφ, where 1 ≤ J ≤ N, then for each scale j = 1, ..., J and translation n = 0, ..., N − j, we can define a boxcar function with the index k = (j, n), such that:

(37)   u_k(\phi_i) = \begin{cases} 1/\sqrt{j}, & n \leq i < n + j, \\ 0, & \text{otherwise}. \end{cases}

Therefore, one can define a maximum of K = \sum_{j=1}^{J} (N - j + 1) boxcar functions on such a grid, and we can easily build a discrete dictionary matrix U of size N × K, as sketched below. In this paper we limit our discussion to the boxcar dictionary defined above, since it is simple enough to illustrate the approach, and to provide meaningful results. Also, this dictionary includes by construction the Dirac set of functions, which in this case are the first N functions, with the scale j = 1. A similar approach can be used to build sub-dictionaries corresponding to other families of analysis functions.
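
The discrete boxcar dictionary can be assembled as in the following sketch; the function name and the column ordering (scale-major, translations within each scale) are our own choices.

import numpy as np

def boxcar_dictionary(N, J):
    # Discrete boxcar dictionary U (Section 3.4): N x K matrix whose columns are
    # unit-norm boxcar atoms of widths 1..J samples; the width-1 columns are the
    # Dirac functions.
    atoms = []
    for j in range(1, J + 1):                 # scale: boxcar width in grid samples
        for n in range(0, N - j + 1):         # translation: starting grid index
            u = np.zeros(N)
            u[n:n + j] = 1.0 / np.sqrt(j)     # normalized boxcar of width j at position n
            atoms.append(u)
    return np.column_stack(atoms)             # K = sum_{j=1..J} (N - j + 1) columns

Combined with the sketch after Eq. (23), the full system matrix of Eq. (34) is then A = measurement_matrix(lam2, phi, weights) @ boxcar_dictionary(N, J).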

3.5 Multi-scale analysis

The sparse decomposition can also be used to perform a multi-scale analysis, by considering all the dictionaries U^{(J)}, where J = 1, ..., N is the maximum boxcar width. Also, let us assume that a^{(J)} is the solution obtained for the scale J, i.e. the recovered discrete representation of F(φ) with the dictionary U^{(J)} is f^{(J)} = U^{(J)} a^{(J)}. We consider a matrix S, where each line with the index J corresponds to the solution obtained for the scale J, i.e. S_{Jn} = |f_n^{(J)}|. Obviously, the solution will depend on the maximum scale J used in each dictionary U^{(J)}, and by visualizing the matrix S we obtain a representation of the behavior of the solution at different scales.
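
One possible sketch of this multi-scale representation is given below; rm_mp denotes the matching pursuit solver of Section 4 (a NumPy sketch is given there) and boxcar_dictionary is the helper sketched in Section 3.4, both treated here as black boxes.

import numpy as np

def multiscale_solution(P_obs, Phi_w, sigma, J_max):
    # Build the matrix S: row J-1 holds |f^(J)|, the solution recovered with the
    # boxcar dictionary limited to maximum scale J.
    N = Phi_w.shape[1]                       # number of points on the phi grid
    rows = []
    for J in range(1, J_max + 1):
        U = boxcar_dictionary(N, J)          # dictionary up to scale J
        A = Phi_w @ U                        # system matrix, Eq. (34)
        a = rm_mp(P_obs, A, sigma)           # sparse coefficients at this scale
        rows.append(np.abs(U @ a))           # recovered |f| on the phi grid
    return np.vstack(rows)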

4 Matching pursuit

The sparse optimization problem, defined in the previous section, is known as Basis Pursuit Denoising (BPD) (Donoho, 2006), and if written in a Lagrangian form:

(38)   \min_a \left\{ \frac{1}{2} \|\tilde{P} - A a\|_2^2 + \gamma \|a\|_1 \right\},

it can be thought of as a least squares problem with an ℓ₁ regularizer, where γ > 0 is a parameter that controls the trade-off between sparsity and reconstruction fidelity. Thus, BPD solves a regularization problem with a trade-off between having a small residual and making the solution simple in the ℓ₁ sense. The solutions of BPD are often the best computationally tractable approximation of the under-determined system of equations (Donoho & Tanner, 2005). In our case, since the direct space and the inverse Fourier space are perfectly incoherent, the problem can be solved using linear programming techniques whose computational complexities are polynomial. However, for the sparse RM approximation problem, the BPD approach requires the solution of a very large convex, non-quadratic optimization problem, and therefore suffers from high computational complexity. Due to the complexity of the linear programming approach, several other optimization methods have been proposed to solve the BPD problem (Donoho, 2006; Candes & Tao, 2006). Here, we consider a method based on sub-optimal greedy algorithms, which requires far less computation. Our goal is not only to obtain a good sparse expansion, but also to provide a fast computational method; therefore we focus our attention on the greedy Matching Pursuit (MP) algorithm (Mallat & Zhang, 1993), which is among the fastest known algorithms for this type of problem (Chen et al., 2001). MP has many applications in signal and image coding, shape representation and recognition, data compression etc. One of its main features is that it can be applied to arbitrary dictionaries.

Starting from an initial approximation a^{(0)} = 0 and residual r^{(0)} = \tilde{P}, the algorithm uses an iterative greedy strategy to pick the columns A_k of the matrix A which best reduce the residual. At every step t the current residual can be decomposed as follows:

(39)   r^{(t)} = \frac{\langle r^{(t)}, A_k \rangle}{\|A_k\|^2} A_k + r^{(t+1)},

where r^{(t+1)} is the future residual, and ⟨·,·⟩ is the standard inner product operator in the complex Hilbert space. Since A_k and r^{(t+1)} are orthogonal, ⟨A_k, r^{(t+1)}⟩ = 0, we have:

(40)   \|r^{(t)}\|^2 = \frac{|\langle r^{(t)}, A_k \rangle|^2}{\|A_k\|^2} + \|r^{(t+1)}\|^2.

In order to minimize the norm of the future residual, the algorithm should choose the column vector which maximizes the projection on the current residual:

(41)   k^{(t)} = \arg\max_k \frac{|\langle r^{(t)}, A_k \rangle|}{\|A_k\|}.

Therefore, after choosing the best column one can update the solution and the residual as follows:

(42)   a^{(t+1)}_{k^{(t)}} = a^{(t)}_{k^{(t)}} + \alpha^{(t)},
(43)   r^{(t+1)} = r^{(t)} - \alpha^{(t)} A_{k^{(t)}},

where

(44)   \alpha^{(t)} = \frac{\langle r^{(t)}, A_{k^{(t)}} \rangle}{\|A_{k^{(t)}}\|^2}.

Thus, after T iteration steps the resulting solution a^{(T)} is a sparse vector with at most T non-zero coefficients. The algorithm stops when the maximum number of iterations has been reached (which usually is set to T ≤ M), or when the norm of the residual becomes comparable with the standard deviation of the noise. The reconstruction of the target signals is then given by:

(45)   \hat{f} = U a^{(T)},
(46)   \hat{P} = A a^{(T)} = W \Phi \hat{f}.

The pseudo-code of the RM-MP algorithm is listed in the Appendix.
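
As a complement to the pseudo-code in the Appendix, a minimal NumPy version of the RM-MP iteration of Eqs. (39)-(44) could look as follows; the stopping threshold ε = σ√(2M) is our assumption for "comparable with the noise level", and the function name is ours.

import numpy as np

def rm_mp(P_obs, A, sigma, max_iter=None):
    # Greedy matching pursuit for the sparse RM problem (Eqs. 39-44).
    # P_obs : complex observation vector P~ (M,)
    # A     : complex system matrix W*Phi*U (M, K)
    # sigma : standard deviation of the noise on Q and U
    M, K = A.shape
    if max_iter is None:
        max_iter = M                               # default iteration budget
    col_norm = np.linalg.norm(A, axis=0)           # ||A_k|| for every column
    a = np.zeros(K, dtype=complex)                 # sparse solution vector
    r = np.asarray(P_obs, dtype=complex).copy()    # initial residual r = P~
    eps = sigma * np.sqrt(2.0 * M)                 # stop when ||r|| reaches the noise level (assumed)
    for _ in range(max_iter):
        proj = A.conj().T @ r                      # <r, A_k> for every column
        k = np.argmax(np.abs(proj) / col_norm)     # best column, Eq. (41)
        alpha = proj[k] / col_norm[k] ** 2         # projection coefficient, Eq. (44)
        a[k] += alpha                              # update the solution, Eq. (42)
        r -= alpha * A[:, k]                       # update the residual, Eq. (43)
        if np.linalg.norm(r) <= eps:
            break
    return a

The recovered Faraday dispersion function on the φ grid then follows from Eq. (45) as f = U @ a.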

5 Numerical results

5.1 Two different experiment layouts

In order to illustrate the described deconvolution method, we have considered two different experiment configurations, corresponding to two different ranges of observed frequencies. The first one is consistent with observations with the Westerbork Synthesis Radio Telescope (WSRT) in the frequency range 315 MHz to 375 MHz, as described in Brentjens & de Bruyn (2005). The second one is consistent with observations with the Arecibo telescope in the frequency range 1225 MHz to 1525 MHz, for The Galactic ALFA Continuum Survey (GALFACTS), as described in Taylor & Salter (2010). The separation between the two frequency windows is roughly 1 GHz, and therefore the maximum observable Faraday depth and the width at half maximum of the main peak of the RMSF are quite different. Here we show that the RM-MP method provides very good results in both cases.

As a testbed for numerical simulations, we have considered a mixed scenario consisting of three components with different widths, such that the simulation results provide the response of the algorithm to a full range of component widths. The first one is a thin component given by: . The second is a thick component given by: if , and otherwise. The third is a thicker component defined by: if , and otherwise. Thus, this scenario can be easily scaled for different computational windows , where is given in . Also, we have considered that all the observational channels are equally weighted: i.e. , , and .

5.2 WSRT

The various parameters associated with the WSRT experiment layout (Brentjens & de Bruyn, 2005) are listed below:

Frequency range: , ;

Wave length range: , , ;

Number of channels: ;

Full width at half maximum of the main peak of the RMSF: ;

Maximum observable Faraday depth: ;

Let us first consider the ideal noiseless case, when the sampling resolution in the φ space is equal to the width at half maximum of the main peak of the RMSF, Δφ = δφ, and the computational window is . In this particular case, as shown in Figure 1, the RM-MP algorithm provides an exact solution, since no information is lost in the measurement. One can also notice that in this noiseless, exactly sampled case, the solution is independent of the scale used in the dictionary, as can be seen in the multi-scale representation for . However, the problem becomes ill-defined in the following situations: noise is present; the sampling resolution becomes finer than the width at half maximum of the main peak of the RMSF, Δφ < δφ; and the number of independent observed channels is smaller than the number of points in the φ space, M < N. In this case the system becomes under-determined, and therefore some information is lost. In order to exemplify this situation, we consider a scenario in which all these factors are present. We add noise with the standard deviation to the Q and U values. We limit the computational window to , and we increase the number of points on the φ grid to , which is double the number of observation channels M. This results in a sampling resolution of . The obtained results (for , ) are shown in Figure 2. One can see that the phase of some components cannot be reliably recovered anymore, since there is not enough information in the signal for them to be detected properly. We should note that the problem is correctly resolved in the noiseless case (not shown here). Thus, the effect of noise addition consists of a partial loss of information about the phase of F(φ), which is expected, since the number of solutions compatible with the data increases dramatically with the added noise. Also, we should point out that the solution improves by increasing the signal-to-noise ratio, as shown in Figure 3, where we have increased the amplitude of the components by a factor of 1.5, keeping their phase unchanged. One can see that in this case the RM-MP method resolves all the components correctly. This result suggests that an adequate signal-to-noise ratio is required for the method to be successful.

5.3 Arecibo

The GALFACTS survey, carried out with the Arecibo telescope, has the following parameters:

Frequency range: , ;

Wave length range: , , ;

Full width at half maximum of the main peak of the RMSF: ;

The maximum observable Faraday depth, , is inversely proportional to the width of the observation channel , and by increasing the number of channels, the maximum observable Faraday depth becomes unreasonably high. Therefore, in order to obtain meaningful results, we have to limit both the number of observation channels in the λ² space, and the computational window in the φ space.

First we consider that the computational window is limited to and the number of observation channels is . Also, we consider the same testbed as for the WSRT case, and we add noise with the standard deviation . In addition, we increase the number of points on the φ grid to , and therefore we obtain a sampling resolution of . The obtained results (for , ) are shown in Figure 4. One can see that all the components are relatively well resolved, with a small error in the phase, but with almost exact amplitudes. In the next experiment we zoom in further in the φ space, and we impose a computational window of , keeping the same number of observation channels and the same number of points on the grid, such that . The results are again reasonably good, as shown in Figure 5, with a small variation of the phase due to the uncertainty introduced by the noise addition. However, if we zoom in even further in the φ space, the solution is not as good anymore, as can be seen in Figure 6. In this case we have a much finer sampling resolution , corresponding to a computational window of , a number of observation channels , and a number of points on the grid . This is a consequence of the fact that by increasing , we have also increased the standard deviation of the noise to , such that the signal-to-noise ratio is smaller than before. Again, an improved solution can be obtained by increasing the amplitude of the components, such that the signal-to-noise ratio is higher.

5.4 Beyond the RMSF resolution

In the previous numerical experiments we have shown that the RM-MP algorithm is able to correctly resolve the components of the input model, if the separation between the components is larger than the width at half maximum of the main peak of the RMSF. In order to estimate the response of the RM-MP algorithm at resolutions beyond the RMSF limit, we consider two Dirac components separated by a distance smaller than δφ, where δφ is the width at half maximum of the main peak of the RMSF. The numerical experiments show that the RM-MP algorithm cannot resolve the two components correctly, but returns a boxcar function centered at the exact position, with a width equal to the separation between the two components. In order to illustrate this result we consider the WSRT scenario, with , and , which gives a sampling resolution in the φ space. In Figure 8 we give the width of the output boxcar function as a function of the input separation. One can see that for all performed experiments the width of the recovered boxcar closely matches the input separation. Also, in Figure 9 we show a typical example, where the input separation is only a fraction of δφ. Thus, even at resolutions beyond the width at half maximum of the main peak of the RMSF, the RM-MP algorithm provides some useful information, i.e. the position and the separation width of the two components.

5.5 Discussion

The above numerical experiments have shown that the sparse RM-MP method works well for relatively simple sparse problems. We should note that the method can also be used to recover more complex dispersion functions. For example, let us consider the situation from Figure 7, where we have two thin components and two thick components. The first thick component is modeled as a Gaussian, while the second is modeled as a boxcar function. Also, we assume the noisy WSRT experiment configuration, with: , , and . One can see that all the sources are almost exactly recovered, including the thick Gaussian, even though the dictionary does not contain any Gaussian functions. In fact, the shape of the Gaussian is reconstructed from several boxcar functions of the dictionary. Thus, the boxcar dictionary can be used to recover more complex functions. However, the success of the method depends on another aspect which has not yet been discussed. More specifically, the performance of the RM-MP method depends on the number of observation channels M, the number of points N on the φ grid, and the number of non-zero components in the discrete representation of the Faraday dispersion function. An important question here is: given M and N, what is the maximum number of non-zero components that still allows a faithful recovery of F(φ)? In (Donoho, 2006; Candes & Tao, 2006) it has been shown that a T-sparse signal of length N, with T ≪ N, can be recovered from only M random measurements (projections), where M is of the order of T log(N/T). The answer to this question is not obvious for the sparse RM synthesis problem, since the reconstruction process will depend on the experiment layout, i.e. the observed frequency band and the width at half maximum of the main peak of the RMSF. This is an important theoretical question which we would like to address in future developments, in order to improve the performance of the method.

6 Conclusions

The recently introduced Faraday RM synthesis is becoming an important tool for analyzing multichannel polarized radio data and for deriving properties of astrophysical magnetic fields. The method requires the solution of an ill-conditioned deconvolution problem in order to recover the intrinsic Faraday dispersion function, and therefore the development of robust methods has become crucial for RM synthesis applications. Here, we have assumed that the complex Faraday dispersion function can be approximated by a small number of discrete components from an over-complete dictionary, and we have developed a greedy algorithm to solve the deconvolution problem. The method uses an over-complete dictionary of functions which can be efficiently used in a multi-scale context, and which can easily include different types of analysis functions. We have also presented several numerical simulations showing the effect of the covered range and sampling resolution in the Faraday depth space, and the effect of noise on the observed data. The numerical results show that the described method performs well at common resolution values and coverage ranges in the Faraday depth space, and it is quite robust in the presence of noise. Therefore, the described technique is well suited for exploratory data analysis, and it can be used as a complement to the previously proposed methods.

Appendix A Appendix material


The pseudo-code of the RM-MP algorithm:

a = 0 ; solution vector (K-dimensional)

r = P~ ; initial residual (M-dimensional)

A = W*Phi*U ; system matrix (M x K dimensional)

sigma ; standard deviation of the noise

eps ; stopping (regularization) parameter (set by the noise level sigma)

T ; maximum number of iterations

c ; the projection coefficient

c* ; the selected projection coefficient

k* ; the index of the selected column

for(t = 1, ..., T)

{

    c* = 0;

    for(k = 0, ..., K-1)

    {

        c = <r, A_k> / ||A_k||;

        if(|c| > |c*|)

        {

            c* = c;

            k* = k;

        }

    }

    a_{k*} = a_{k*} + c* / ||A_{k*}||;

    r = r - (c* / ||A_{k*}||) * A_{k*};

    if(||r|| <= eps) then break;

}

return a;

References

  • Brentjens & de Bruyn (2005) Brentjens, M. & de Bruyn, A. 2005, A&A, 441, 1217.
  • Boyd & Vandenberghe (2004) Boyd, S. & Vandenberghe, L. 2004, Convex Optimization, Cambridge University Press.
  • Burn (1966) Burn, B. J. 1966, MNRAS, 133, 67.
  • Candes & Tao (2006) Candes, E., Tao, T. 2006, IEEE Trans. Inf. Theory, V52, 5406.
  • Chen et al. (2001) Chen, S., Donoho, D., Saunders, M., 2001, SIAM Rev., 43, 129.
  • De Bruyn et al. (2009) De Bruyn, A. G., Bernardi, G., & The LOFAR Team 2009, in The Low-Frequency Radio Universe, ed. D. J. Saikia, D. A. Green, Y. Gupta, & T. Venturi, ASP Conf. Ser. 407, 3.
  • Donoho & Tanner (2005) Donoho, D., Tanner, J. 2005, PNAS, 102(27), 9446.
  • Donoho (2006) Donoho, D. 2006, IEEE Trans. Inf. Theory, V52, 1289.
  • Farnsworth et al. (2011) Farnsworth, D., Rudnick, L., Brown, S. 2011, AJ, V141(6), 191.
  • Frick et al. (2010) Frick, P., Sokoloff, D., Stepanov, R., and Beck, R. 2010, MNRAS:Lett., 401, L24.
  • Gardner & Whiteoak (1966) Gardner, F. F. & Whiteoak, J. B. 1966, ARA&A, 4, 245.
  • Haverkorn et al. (2003) Haverkorn, M., Katgert, P., & de Bruyn, A. G. 2003, A&A, 404, 233.
  • Heald (2009) Heald, G. 2009, Cosmic Magnetic Fields: from Planets, to Stars and Galaxies, 259, 591.
  • Hogbom (1974) Hogbom, J. 1974, A&A Supp., 15, 417.
  • Li et al. (2011) Li, F., Brown, S., Cornwell, T. J., & de Hoog, F. 2011, A&A, 531, A126.
  • Kronberg (1994) Kronberg, P. P. 1994, Rep. Prog. Phys., 57, 325.
  • Mallat & Zhang (1993) Mallat, S., Zhang, Z. 1993, IEEE Trans. Signal Process., V41, 3397.
  • Sokoloff et al. (1998) Sokoloff, D. D., Bykov, A. A., Shukurov, A., et al. 1998, MNRAS, 299, 189.
  • Sokoloff et al. (1999) Sokoloff, D. D., Bykov, A. A., Shukurov, A., et al. 1999, MNRAS, 303, 207.
  • Taylor & Salter (2010) Taylor, A. R. & Salter, C. J. 2010, in The Dynamic Interstellar Medium: A Celebration of the Canadian Galactic Plane Survey, ed. R. Kothes, T. L. Landecker, & A. G. Willis (San Francisco: ASP), 402.
  • Vallee (1980) Vallee, J. P. 1980, A&A, 86, 251.
  • Wiaux et al. (2009) Wiaux, Y., Jacques, L., Puy, G., Scaife, A., & Vandergheynst, P. 2009, MNRAS, 395, 1733.
  • Widrow (2002) Widrow, L. M. 2002, Rev. Mod. Phys., 74, 775.
Figure 1: WSRT experiment layout, noiseless exact sampling case: and . The figure is organized bottom-up: the bottom row is the measured data, i.e. , , and ; the second row is the input (original) model of F(φ); the third row is the dirty F̃(φ); the fourth row is the F(φ) recovered by the RM-MP algorithm; and the fifth row is the multi-scale representation of the solution (see text for details).
Figure 2: WSRT experiment layout, noisy sampling case (): , and .
Figure 3: WSRT experiment layout, noisy sampling case (), with a higher signal to noise ratio: , and .
Figure 4: Arecibo experiment layout, noisy sampling case (): , and .
Figure 5: Arecibo experiment layout, noisy sampling case (): , and .
Figure 6: Arecibo experiment layout, noisy sampling case (): , and .
Figure 7: WSRT experiment layout, noisy sampling case (): , and .
Figure 8: The width of the boxcar function response as a function of the separation width between two Dirac components.
Figure 9: A typical response of the RM-MP algorithm for two Dirac components separated by . WSRT experiment layout, noiseless sampling case: , and .