Global 21 cm signal experiments: A designer's guide

Adrian Liu acliu@berkeley.edu Dept. of Astronomy and Berkeley Center for Cosmological Physics, UC Berkeley, Berkeley, CA 94720, USA Dept. of Physics and MIT Kavli Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA    Jonathan R. Pritchard Imperial Center for Inference and Cosmology, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ, United Kingdom    Max Tegmark Dept. of Physics and MIT Kavli Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA    Abraham Loeb Dept. of Astronomy, Harvard University, Cambridge, MA 02138, USA
July 5, 2019
Abstract

The global (i.e. spatially averaged) spectrum of the redshifted 21 cm line has generated much experimental interest lately, thanks to its potential to be a direct probe of the Epoch of Reionization and the Dark Ages, during which the first luminous objects formed. Since the cosmological signal in question has a purely spectral signature, most experiments that have been built, designed, or proposed have essentially no angular sensitivity. This can be problematic because with only spectral information, the expected global signal can be difficult to distinguish from foreground contaminants such as Galactic synchrotron radiation, since both are spectrally smooth and the latter is many orders of magnitude brighter. In this paper, we establish a systematic mathematical framework for global signal data analysis. The framework removes foregrounds in an optimal manner, complementing spectra with angular information. We use our formalism to explore various experimental design trade-offs, and find that 1) with spectral-only methods, it is mathematically impossible to mitigate errors that arise from uncertainties in one's foreground model; 2) foreground contamination can be significantly reduced for experiments with fine angular resolution; 3) most of the statistical significance in a positive detection during the Dark Ages comes from a characteristic high-redshift trough in the brightness temperature; 4) measurement errors decrease more rapidly with integration time for instruments with fine angular resolution; and 5) better foreground models can help reduce errors, but once a modeling accuracy of a few percent is reached, significant improvements in accuracy will be required to further improve the measurements. We show that if observations and data analysis algorithms are optimized based on these findings, an instrument with a wide beam can achieve highly significant detections of even extended (high $\Delta z$) reionization scenarios with a reasonable amount of integration time. This is in strong contrast to instruments without angular resolution, which cannot detect gradual reionization. Ionization histories that are more abrupt can be detected with our fiducial instrument at the level of tens to hundreds of $\sigma$. The expected errors are similarly low during the Dark Ages, where only a modest integration time is needed for a detection of the expected cosmological signal.

pacs:
95.75.-z,98.80.-k,95.75.Pq,98.80.Es

I Introduction

Measurements of the highly redshifted 21 cm line are thought to be the primary way to make direct observations of the epoch of reionization and the preceding dark ages, when the first luminous objects were formed from primordial fluctuations Loeb and Furlanetto (2013). Theoretical studies have shown that observations of the 21 cm line will not only provide crucial constraints on a relatively poorly understood period of structure formation, during which a complex interplay of dark matter physics and baryonic astrophysics produced large scale changes in the intergalactic medium Furlanetto et al. (2006); Morales and Wyithe (2010); Pritchard and Loeb (2012), but that the enormous volume probed by the 21 cm line will eventually also allow one to make exquisite measurements of fundamental physics parameters Barkana and Loeb (2005); McQuinn et al. (2006); Mao et al. (2008); Clesse et al. (2012). It is thus no surprise that numerous experimental groups are making concerted efforts to arrive at the first positive detection of the cosmological signal.

To date, most observational efforts have focused on understanding the fluctuations in the 21 cm line by measuring the brightness temperature power spectrum (although there have also been theoretical proposals to capture the non-Gaussianity of the signal Wyithe and Morales (2007); Barkana and Loeb (2008); Gluscevic and Barkana (2010); Pan and Barkana (2012)). These include the Murchison Widefield Array (MWA) Tingay et al. (2012), the Precision Array for Probing the Epoch of Reionization (PAPER) Parsons et al. (2010), the Low Frequency Array (LOFAR) Rottgering et al. (2006), and the Giant Metrewave Radio Telescope Epoch of Reionization (GMRT-EoR) Paciga et al. (2011) projects. These experiments are difficult: high sensitivity requirements dictate long integration times to reach the expected amplitude of the faint cosmological signal, which is dominated by strong sources of foreground emission (such as Galactic synchrotron radiation) by several orders of magnitude in all parts of the sky Bernardi et al. (2009, 2010); de Oliveira-Costa et al. (2008). While these experiments have lately made much progress towards a detection of the cosmological signal, the challenges remain daunting.

As an alternative way to detect the cosmological signal, there have recently been attempts to measure the global signal, where one averages the signal over all directions in the sky and focuses on extracting a globally averaged spectrum, probing the evolution of the mean signal through cosmic history Furlanetto (2006). These observations are complementary to the power spectrum measurements, and have been shown to be an incisive probe of many physical parameters during reionization Pritchard and Loeb (2010). Examples of global signal experiments include the Experiment to Detect the Global EoR Signature (EDGES) Bowman and Rogers (2010), the Large aperture Experiment to detect the Dark Ages (LEDA) Greenhill and Bernardi (2012), the Long Wavelength Array (LWA) Bowman (2012), and if funded, the Dark Ages Radio Explorer (DARE) Burns et al. (2012).

Compared to power spectrum measurements, global signal experiments are in some ways easier, and in other ways more difficult Shaver et al. (1999). A rough calculation of the thermal noise reveals that global signal experiments need far less integration time to reach the required sensitivities for a detection of the cosmological signal. However, the problem of foreground mitigation is much more challenging. In a power spectrum measurement, one probes detailed maps of brightness temperature fluctuations in all three dimensions. The maps are particularly detailed in the line-of-sight direction, since for a spectral line like the 21 cm line this translates to the frequency spectrum of one's measurement, and typical instruments have extremely fine spectral resolution. The result is that the cosmological signal fluctuates extremely rapidly with frequency, since one is probing local structure. In contrast, the foregrounds are spectrally smooth. This difference has formed the basis of most foreground subtraction schemes that have been proposed for power spectrum measurements, and theoretical calculations have been encouraging Wang et al. (2006); Gleser et al. (2008); Bowman et al. (2009); Liu et al. (2009, 2009); Jelić et al. (2008); Harker et al. (2010, 2009); Petrovic and Oh (2011); Liu and Tegmark (2011). Global signal experiments cannot easily take advantage of this difference in fluctuation scale, for the globally averaged spectrum does not trace local structures, but instead probes the average evolution with redshift. The resulting signals are thus expected to be rather smooth functions of frequency, which makes them difficult to separate from the smooth foregrounds. Traditionally, experiments have attempted to perform spectral separations anyway, and have thus been limited to ruling out sharp spectral signatures such as those that might arise from rapid reionization Bowman and Rogers (2010).

In this paper, we confront the general problem of extracting the global signal from measurements that are contaminated by instrumental noise and foregrounds, placing the problem in a systematic mathematical framework. We adopt the philosophy that one should take advantage of every possible difference between the cosmological signal and contaminants, not just the spectral differences. To do so, we first develop optimal data analysis methods for cosmological signal estimation, using angular information to complement spectral information. Traditional spectral-only methods are a special case in our formalism, and we prove that in such methods, foreground subtraction simply amounts to subtracting a “best-guess” foreground model from the data and doing nothing to mitigate possible errors in the model itself.

Additionally, we build on existing foreground models in the literature, and use our improved model along with our data analysis formalism to predict expected measurement errors given various experimental parameters. Since our formalism allows these measurement errors to be calculated analytically (given a foreground model), we are able to explore parameter space efficiently to identify the best experimental designs and the best data analysis methods. We find that the most important such “lesson learned” is that angular information is necessary to suppress foregrounds to a low enough level to differentiate between different cosmological models.

Our paper therefore has two logically separate (but closely related) goals. The first is to develop a robust signal extraction formalism, and we fulfill this goal without assuming a precise model for the cosmological signal, thus immunizing ourselves to uncertainties in theoretical modeling. The second is to build intuition for instrumental design and to forecast the performance of different types of instruments. This part does assume a theoretical form for the cosmological signal.

In this paper our focus is on foreground removal, so for simplicity we do not consider calibration errors. However, we do note that an accurate bandpass calibration (or at least a good estimate of the calibration error) is a prerequisite for accurate foreground removal. It is for this reason that much effort has been spent on better instrument characterization recently Rogers and Bowman (2012). We also do not include frequency-dependent beam effects, although the formalism of this paper is general enough to include them. Both calibration errors and frequency-dependent beams are inevitably instrument-specific issues, so we defer detailed investigations of them to future work.

The rest of this paper is organized as follows. In Section II we detail the properties of the cosmological signal and our foreground and instrumental noise model. Readers that are more interested in data analysis methods may wish to skim this section (perhaps reading in detail only the summary in Section II.2.4), and skip ahead to Section III, where we establish our mathematical framework for signal extraction and error computation. Those that are interested in experimental design may wish to focus on Sections IV and V, where we explore various experimental trade-offs for observations targeting the dark ages and reionization respectively. Forecasts for the performances of some fiducial experiments are presented in Section VI, and we summarize our conclusions in Section VII.

II The cosmological global spectrum and sources of measurement contamination

II.1 Model of the signal

We begin by sketching the ingredients of our global signal model. For more details there exist several good reviews on the physics of the 21 cm signal Furlanetto et al. (2006); Morales and Wyithe (2010); Pritchard and Loeb (2012). The 21 cm signal arises from the physical properties of neutral hydrogen in the intergalactic medium (IGM), specifically from the density of neutral hydrogen and the spin temperature, which describes the relative number of hydrogen atoms with proton and electron spin aligned or anti-aligned. These quantities respond to radiation from luminous sources and so are expected to vary from place to place.

Figure 1: Target global signal as predicted by the model of Pritchard and Loeb (2010). The exact details of this signal are uncertain and depend upon the nature of the first galaxies.

Fluctuations in the 21 cm signal are being targeted by radio interferometers such as LOFAR, MWA, PAPER, and GMRT, as mentioned in Section I. These fluctuations are scale-dependent, with most power on the characteristic scale of ionized or heated regions, which are believed to be tens of arcminutes across. When viewed with a beam larger than this characteristic size, these fluctuations will average out, giving a measure of the mean or "global" signal. In this paper, we concentrate on measuring this isotropic part of the signal and consider the anisotropic fluctuations as a source of noise. Studies by Bittner and Loeb (2011) showed that during reionization a beam size of a few degrees is sufficient to smooth out most of the fluctuations. In this paper, any contribution of the fluctuations left over after convolving with the large beam will be considered as irreducible noise.

The basic dependence of the differential brightness temperature $T_b$ on the average ionized fraction $\bar{x}_i$ and spin temperature $T_S$ is

$T_b \approx 27\,(1-\bar{x}_i)\left(1-\frac{T_\gamma}{T_S}\right)\left(\frac{1+z}{10}\right)^{1/2}\,\textrm{mK}$,   (1)

where $T_\gamma$ is the CMB temperature and we have used the WMAP7 cosmological parameters to fix the baryon and matter abundances Larson et al. (2011). The key redshift dependence comes via $\bar{x}_i(z)$ and $T_S(z)$, which we model following the approach of (Pritchard and Loeb, 2008), where the reader will find technical details. This model incorporates (1) the ionizing radiation from galaxies, (2) X-ray heating from galaxies (Pritchard and Furlanetto, 2007), and (3) Lyman-alpha emission from galaxies, and assumes a simple prescription linking the star formation rate to the fraction of mass in collapsed structures above a critical mass threshold required for atomic hydrogen cooling.
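To make the scaling concrete, here is a small numerical sketch (ours, not part of the original analysis); the roughly 27 mK prefactor is the commonly quoted value for WMAP7-like abundances, and the function name and example values are illustrative only.

import numpy as np

def tb_mK(z, x_i, T_s, T_cmb0=2.725):
    # Differential 21 cm brightness temperature in mK, using the standard
    # scaling T_b ~ 27 (1 - x_i) (1 - T_gamma/T_s) sqrt((1+z)/10).
    # The ~27 mK prefactor assumes WMAP7-like baryon/matter abundances.
    T_gamma = T_cmb0 * (1.0 + z)  # CMB temperature at redshift z
    return 27.0 * (1.0 - x_i) * (1.0 - T_gamma / T_s) * np.sqrt((1.0 + z) / 10.0)

# Absorption when the spin temperature lies below the CMB temperature...
print(tb_mK(20.0, x_i=0.0, T_s=10.0))    # strongly negative (absorption trough)
# ...and emission once X-rays have heated the gas well above the CMB.
print(tb_mK(8.0, x_i=0.5, T_s=1.0e4))    # positive (emission during reionization)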

This model predicts a signal that divides into three qualitatively different regimes (see Figure 1). The first, a shallow absorption feature, begins as the gas thermally decouples from the CMB and ends as our Universe becomes too rarefied for collisions to couple the spin temperature to the gas temperature. Next, a second and possibly deeper absorption feature occurs as the first galaxies form. This is initiated as Lyman alpha photons illuminate the Universe, coupling spin and gas temperatures strongly, and ends as increasing X-ray emission heats the IGM above the CMB, leading to an emission signal. This emission signal is the third key feature, which slowly dies away in a "reionization step" as ionizing UV photons ionize the IGM. As described in Pritchard and Loeb (2010); Morandi and Barkana (2012) there is considerable uncertainty in the exact positions and details of these features, but the basic picture seems robust.

The last two features—an absorption trough driven by the onset of galaxy formation111For linguistic convenience, we will include this absorption trough as part of the Dark Ages, even though it really marks the very end of the Dark Ages. and an emission step accompanying reionization—form the focus of this paper, since these seem most likely to be detectable (a fact that we will explain and rigorously justify in Section IV.2). The earliest absorption trough seems unlikely to be detected in the near future, since it occurs at frequencies of a few tens of MHz, which are more strongly affected by the Earth's ionosphere and where Galactic foregrounds are considerably brighter.

It is important to note that the data analysis methods presented later in this paper do not depend on the precise form of the cosmological signal. This differs from other studies in the literature (such as Harker et al. (2012)), which assume a parametric form for the signal and fit for parameters. In contrast, we use the theoretical signal shown in Figure 1 only to assess detection significance and to identify trade-offs in instrumental design; our foreground subtraction and signal extraction algorithms will be applicable even if the true signal differs from our theoretical expectations, making our approach quite a robust one.

II.2 Generalized noise model

We now construct our noise model. We define the generalized noise (or henceforth just “noise”) to be any contribution to the measurement that is not the global signal as described in the previous section. As mentioned above, by this definition our noise contains more than just instrumental noise and foregrounds. It also includes the anisotropic portion of the cosmological signal. In other words, the “signal” in a tomographic measurement (where one measures angular anisotropies on various scales) is an unwanted contaminant in our case, since we seek to measure the global signal (i.e. the monopole).

If we imagine performing an experiment that images $N_{\rm pix}$ pixels on the sky over $N_{\rm freq}$ frequency channels, the noise contribution in the various pixels at the various frequencies can be grouped into a vector $\mathbf{n}$ of length $N_{\rm pix}N_{\rm freq}$, equal to the number of voxels in our survey. It is comprised of three contaminants:

$n_{i\alpha} = n^{\rm fg}_{i\alpha} + n^{\rm inst}_{i\alpha} + n^{\rm aniso}_{i\alpha}$,   (2)

where $\mathbf{n}^{\rm fg}$, $\mathbf{n}^{\rm inst}$, and $\mathbf{n}^{\rm aniso}$ signify the foregrounds, instrumental noise, and anisotropic cosmological signal, respectively. Throughout this paper, we use Greek indices to signify the radial/frequency direction, and Latin indices to signify the spatial directions. Note that $\mathbf{n}$ is formally a vector even though we assign separate spatial and spectral indices to it for clarity. In the following subsections we discuss each of these three contributions to the noise, with an eye towards how each can be mitigated or removed in a real measurement. We will construct detailed models containing parameters that are mostly constrained empirically. However, since these constraints are often somewhat uncertain, we will vary many of them as we explore parameter space in Sections IV and V. Our conclusions should therefore be robust to reasonable changes in our assumptions.

Finally, we stress that in what follows, our models are comprised of two conceptually separate—but closely related—pieces. To understand this, note that the generalized noise $\mathbf{n}$ in Equation (2) is a random vector, both because the instrumental noise is sourced by random thermal fluctuations and because the foregrounds and the cosmological signal have modeling uncertainties associated with them. Thus, to fully describe the behavior of $\mathbf{n}$, we need to specify two pieces of information: a mean (our "best guess" of what the foregrounds and other noise sources look like as a function of frequency and angle) and a covariance (which quantifies the uncertainty and correlations in our best guess). We will return to this point in Section II.2.4 when we summarize the essential features of our model. Readers may wish to skip directly to that section if they are more interested in the "designer's guide" portion of the paper than the mathematical details of our generalized noise model.

II.2.1 Foreground Model

Given that foregrounds are likely to be the largest contaminant in a measurement of the global signal, it is important to have a foreground model that is an accurate reflection of the actual contamination faced by an experiment, as a function of both angle and frequency. Having such a model that describes the particular realization of foregrounds contaminating a certain measurement is crucial for optimizing the foreground removal process, as we shall see in Section III. However, constructing such a model is difficult to do from first principles, and is much more difficult than what is typically done, which is to capture only the statistical behavior of the foregrounds (e.g. by measuring quantities such as the spatial average of a spectral index). It is thus likely that a full foreground model will have to be based at least partially on empirical data.

Unfortunately, the community currently lacks full-sky, low noise, high angular resolution survey data in the low frequency regime relevant to global signal experiments. Foreground models must therefore be constructed via interpolations and extrapolations from measurements that are incomplete both spatially and spectrally. One such effort is the Global Sky Model (GSM) of de Oliveira-Costa et al. (2008). In that study, the authors obtained foreground survey data at a number of different frequencies, and formed a series of foreground maps, stored in a vector $\mathbf{y}$. The maps were then used to define a spectral covariance matrix $\mathbf{C}$:

$C_{\alpha\beta} = \frac{1}{N_{\rm pix}}\sum_{i=1}^{N_{\rm pix}} y_{i\alpha}\, y_{i\beta}$,   (3)

where $N_{\rm pix}$ is the number of pixels in a spectrally well-sampled region of the sky, and in accordance with our previous notation, $y_{i\alpha}$ denotes the measured foregrounds in the $i$th pixel at the $\alpha$th frequency channel. From this covariance, a dimensionless frequency correlation matrix $\mathbf{R}$ was formed:

$R_{\alpha\beta} \equiv \frac{C_{\alpha\beta}}{\sqrt{C_{\alpha\alpha}\,C_{\beta\beta}}}$.   (4)

By performing an eigenvalue decomposition of $\mathbf{R}$ into its principal components, the authors found that the spectral features of the foregrounds were dominated by the first three principal components, which could be used as spectral templates for constructing empirically based foreground models. The GSM approach was found to reproduce the input survey data to good accuracy.
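To make this GSM-style construction concrete, the following sketch (our illustration, using a toy power-law sky rather than the real survey data) builds the frequency correlation matrix of Equations (3) and (4) and extracts its principal components:

import numpy as np

def frequency_correlation(maps):
    # maps: (n_pix, n_freq) array of foreground maps.
    # Returns the correlation matrix R of Eq. (4) and its principal components.
    n_pix = maps.shape[0]
    C = maps.T @ maps / n_pix                    # spectral covariance, Eq. (3)
    d = np.sqrt(np.diag(C))
    R = C / np.outer(d, d)                       # normalize to unit diagonal, Eq. (4)
    eigvals, eigvecs = np.linalg.eigh(R)         # principal component decomposition
    order = np.argsort(eigvals)[::-1]
    return R, eigvals[order], eigvecs[:, order]

# Toy sky: power laws with pixel-to-pixel scatter in the spectral index
rng = np.random.default_rng(0)
freqs = np.linspace(30.0, 100.0, 70)             # MHz
alphas = 2.5 + 0.1 * rng.standard_normal(1000)
maps = (freqs[None, :] / 150.0) ** (-alphas[:, None])
R, lam, modes = frequency_correlation(maps)
print(lam[:5])                                   # dominated by the first few eigenmodes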

Being relatively quick and accurate, the GSM has been used as a fiducial foreground model in many studies of the global signal to date Pritchard and Loeb (2010); Harker et al. (2012). However, this may be insufficient for two reasons. First, as mentioned above, the GSM approach predicts the magnitude of foregrounds by forming various linear combinations of three principal component (i.e. spectral eigenmode) templates. Thus, if the GSM is considered the “true” foreground contribution in our models, it becomes formally possible to achieve perfect foreground removal simply by projecting out just three spectral modes from the data. This is too optimistic an assumption222The reader should thus be cautious when comparing our predictions to forecasts in the literature that use the GSM as their only foreground model. For identical experimental parameters, one should expect our work to give larger error bars. However, our new signal extraction algorithms (particularly those that make use of angular information) more than overcome this handicap, and our conclusions will in fact be quite optimistic.. The other weakness of the GSM is that it does not include bright point sources, which are expected to be quite numerous at the sensitivities of most experiments.

In this paper, we use the GSM as a starting point, but add our own spectral extensions to deal with the aforementioned shortcomings. The extensions come partly from the phenomenological model of Liu and Tegmark (2012), which in our notation can be summarized by writing down a matrix that is analogous to the $\mathbf{C}$ defined above:

$C_{\alpha\beta} = C^{\rm ps}_{\alpha\beta} + C^{\rm sync}_{\alpha\beta} + C^{\rm ff}_{\alpha\beta}$,   (5)

where $C^{\rm ps}$, $C^{\rm sync}$, and $C^{\rm ff}$ refer to foreground contributions from unresolved extragalactic point sources, Galactic synchrotron radiation, and Galactic free-free emission, respectively. Each contribution takes the generic form

$C^{\rm src}_{\alpha\beta} = A_{\rm src}^2\left(\frac{\nu_\alpha\nu_\beta}{\nu_*^2}\right)^{-\alpha_{\rm src}}\exp\left[\frac{\Delta\alpha_{\rm src}^2}{2}\ln^2\left(\frac{\nu_\alpha\nu_\beta}{\nu_*^2}\right)\right]$,   (6)

where the amplitude $A_{\rm src}$, mean spectral index $\alpha_{\rm src}$, and index scatter $\Delta\alpha_{\rm src}$ take different values for the synchrotron, unresolved point source, and free-free contributions, and $\nu_*$ is a reference frequency. Our strategy is to perform a principal component decomposition on this model, and to use its higher order principal components to complement the three components provided by the GSM, completing the basis.

We can check that our strategy is a sensible one by forming $\mathbf{R}$ [Equation (4)] for both the GSM and our phenomenological model. For this test, we form the matrices for a set of observations over 70 frequency channels spanning the frequency range relevant to observations probing the first luminous sources. We then compute the eigenvalue spectra of both models, and the result is shown in Figure 2. The GSM, built from three principal components, contains just three eigenvalues. The first three eigenvalues of the phenomenological model agree with these values quite well, with the phenomenological model being slightly more conservative. It is thus reasonable to use the higher eigenmodes of the phenomenological model to "complete" the foreground model spectrally. We expect this completion to be necessary because there are more than three eigenvalues above the thermal noise limit for a typical experiment Liu and Tegmark (2012). Spatially, we form an angular model by averaging the GSM maps over all frequencies, and give all of the higher spectral eigenmodes this averaged angular dependence. While in general we expect the spatial structure to differ from eigenmode to eigenmode, our simple assumption is a conservative one, since any additional variation in spatial structure provides extra information which can be used to aid foreground subtraction.

Figure 2: Comparison with the global sky model de Oliveira-Costa et al. (2008). Our eigenvalues are represented by the blue circles, while those of the GSM are denoted by the purple squares. The GSM contains only three eigenvalues, which are in reasonable agreement with our model.

Next, we add bright point sources to our model. The brightnesses of these sources are distributed according to the source count function

(7)

and we give each source a spectrum of

$S(\nu) = S_*\left(\frac{\nu}{\nu_*}\right)^{-\alpha}$,   (8)

where $S_*$ is the brightness drawn from the source count function, and $\alpha$ is the spectral index, drawn from a Gaussian distribution whose mean and standard deviation are taken from Liu et al. (2009). Spatially, we distribute the bright point sources randomly across the sky. This is, strictly speaking, an unrealistic assumption, since point sources are known to be clustered. However, a uniform distribution suffices for our purposes because it is a conservative assumption—any spatial clustering would constitute an additional foreground signature to aid foreground removal.
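As an illustrative sketch of this point-source prescription (ours; the source count normalization, faint-end cutoff, and spectral index distribution below are placeholder values, not the paper's fiducial ones):

import numpy as np

rng = np.random.default_rng(0)

def draw_sources(n_src, s_min=0.1, gamma=1.75, alpha_mean=2.5, alpha_std=0.5):
    # Fluxes from a power-law source count dn/dS ~ S^-gamma above s_min
    # (inverse-CDF sampling), spectral indices from a Gaussian, and
    # positions distributed uniformly over the sphere.
    u = rng.random(n_src)
    fluxes = s_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))
    alphas = rng.normal(alpha_mean, alpha_std, n_src)
    ra = rng.uniform(0.0, 2.0 * np.pi, n_src)
    dec = np.arcsin(rng.uniform(-1.0, 1.0, n_src))
    return fluxes, alphas, ra, dec

def source_spectrum(s_star, alpha, freqs, nu_star=150.0):
    # Power-law spectrum as in Eq. (8): S(nu) = S_* (nu / nu_*)^-alpha.
    return s_star * (freqs / nu_star) ** (-alpha)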

Putting all these ingredients together, the result is a series of foreground maps like that shown in Figure 3, one at every frequency. This model constitutes a set of "best guess" foreground templates. Now, while our model is based on empirical data, it does possess uncertainties. A robust method for foreground subtraction thus needs to be able to account for possible errors in the generalized noise model. To set the stage for this, we define our best guess model described above to be the mean (amidst uncertainties in the construction of the model) of the vector $\mathbf{n}$ in Equation (2), i.e.

$\langle n^{\rm fg}_{i\alpha}\rangle = m_{i\alpha}$,   (9)

where $\mathbf{m}$ is our best-guess model. The covariance of the foregrounds is defined as the error in our foreground model:

$C^{\rm fg}_{i\alpha,j\beta} \equiv \langle n^{\rm fg}_{i\alpha}\, n^{\rm fg}_{j\beta}\rangle - m_{i\alpha} m_{j\beta} = \varepsilon^2\, m_{i\alpha}\, m_{j\beta}\, Q_{ij}\, R_{\alpha\beta}$,   (10)

where we have assumed that the error in our foreground model is proportional to the model itself, with a proportionality constant $\varepsilon$ between $0$ and $1$. The matrices $\mathbf{Q}$ and $\mathbf{R}$ encode the spatial and spectral correlations in our foreground model errors, respectively.

With this form, we are essentially assuming that there is some constant percentage error in every pixel and frequency channel of our model. Note that the model error will in general depend on the angular resolution, for if we start with a high resolution foreground model and spatially downsample it to the resolution of our experiment, some of the errors in the original model will cancel out. As a simple, crude model, we can suppose that the errors in different pixels average down as the square root of the number of pixels. Since the number of pixels scales as the inverse of $\Delta\theta^2$, where $\Delta\theta$ is some generic angular resolution, we have

$\varepsilon = \varepsilon_0\left(\frac{\Delta\theta_0}{\Delta\theta}\right)$,   (11)

where $\Delta\theta$ is the angular resolution of our current experiment, $\Delta\theta_0$ is the "native" angular resolution of our foreground model, and $\varepsilon_0$ is the fractional error in our model at this native resolution. Equation (11) implicitly assumes a separation of scales, where we deal only with instruments whose angular resolution is coarser than the correlation length of the modeling errors specified by $\mathbf{Q}$ (described below). We will remain in this regime for the rest of the paper. Our foreground covariance thus becomes

$C^{\rm fg}_{i\alpha,j\beta} = \varepsilon_0^2\left(\frac{\Delta\theta_0}{\Delta\theta}\right)^{2} m_{i\alpha}\, m_{j\beta}\, Q_{ij}\, R_{\alpha\beta}$.   (12)

Since the angular structure of our foreground model is based on that of the GSM de Oliveira-Costa et al. (2008), we adopt conservative fiducial values of $\varepsilon_0$ and $\Delta\theta_0$ for this paper.
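The following sketch (ours, with placeholder values for the fiducial parameters) shows how the foreground covariance of Equation (12) can be assembled from the template and the correlation matrices:

import numpy as np

def foreground_covariance(m, Q, R, eps0, theta0, theta):
    # C_fg[(i,a),(j,b)] = eps^2 m_ia m_jb Q_ij R_ab, with the resolution-scaled
    # fractional error eps = eps0 * (theta0 / theta) of Eq. (11).
    eps = eps0 * (theta0 / theta)
    n_pix, n_freq = m.shape
    C = eps**2 * np.einsum('ia,jb,ij,ab->iajb', m, m, Q, R)
    return C.reshape(n_pix * n_freq, n_pix * n_freq)

# Tiny toy example (2 pixels, 3 channels) just to illustrate the shapes
m = np.array([[300.0, 200.0, 150.0],
              [600.0, 400.0, 300.0]])     # Kelvin-scale foreground template
Q = np.eye(2)                             # uncorrelated model errors between pixels
R = np.full((3, 3), 0.9) + 0.1 * np.eye(3)
C_fg = foreground_covariance(m, Q, R, eps0=0.1, theta0=1.0, theta=5.0)
print(C_fg.shape)                         # (6, 6)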

Figure 3: Foreground template at a representative frequency. The color scale is linear and has been capped to bring out visual contrast even though much of the sky is far brighter.

To capture spatial correlations333We emphasize that the spatial correlations encoded by $\mathbf{Q}$ are spatial correlations in the foreground error, not correlations in the spatial structure of the foreground emission itself. The spatial structure of the foregrounds is captured by the $m_{i\alpha}$ terms of Equations (9) and (10), and will typically be correlated over much larger scales than the errors. The data analysis formalism that we present in Section III will take into account both types of correlation. in our foreground modeling errors, we choose the matrix $\mathbf{Q}$ to correspond to the continuous kernel

(13)

Aside from a constant factor (which is needed to make the discrete and continuous descriptions consistent Tegmark (1997); Knox (1995)), this is known as a Fisher function, the analog of a Gaussian on a sphere. The quantity $\sigma_Q$ measures the spread of the kernel, and since the GSM's spectral fits were performed on pixels of a particular angular resolution, we set $\sigma_Q$ to match that scale for this work444Note that $\sigma_Q$ does not necessarily have to be equal to $\Delta\theta_0$. For instance, if one's foreground model is based on a survey with some instrumental beam size that is oversampled in an attempt to capture all the features in the map, one would be in a situation where the two scales differ..

For the spectral correlation matrix $\mathbf{R}$, suppose we imagine that our foreground model was constructed by spectrally fitting every pixel of a foreground survey to a power law of the form

$y(\nu) = A\left(\frac{\nu}{\nu_*}\right)^{-\alpha}$,   (14)

where $A$ is a normalization constant that will later cancel out, $\alpha$ is a spectral index, and $\nu_*$ is a reference frequency for the fits, chosen separately for the observations targeting the first luminous sources and for the observations targeting reionization. (The reference frequency is somewhat arbitrary, and in practice one would simply adjust it to get the best possible fits when constructing one's foreground model.) The spectral index will have some error associated with it, due in part to uncertainties in the foreground survey and in part to the fact that foreground spectra are not perfect power laws. We model this error as being Gaussian distributed, so that the probability distribution of spectral indices is given by

$p(\alpha) = \frac{1}{\sqrt{2\pi}\,\Delta\alpha}\exp\left[-\frac{(\alpha-\alpha_0)^2}{2\Delta\alpha^2}\right]$,   (15)

where $\alpha_0$ is a fiducial spectral index for typical foregrounds. The parameter $\Delta\alpha$ controls the spectral coherence of the foregrounds, and we choose a rather large value in order to be conservative.

With this, the mean spectral fit to our foreground survey is

$\langle y(\nu)\rangle = \int p(\alpha)\, A\left(\frac{\nu}{\nu_*}\right)^{-\alpha} d\alpha = A\left(\frac{\nu}{\nu_*}\right)^{-\alpha_0}\exp\left[\frac{\Delta\alpha^2}{2}\ln^2\left(\frac{\nu}{\nu_*}\right)\right]$.   (16)

Sampling this function at the discrete set of frequencies corresponding to the frequency channels of our global signal experiment, we can form a mean vector $\boldsymbol{\mu}$ with components $\mu_\alpha \equiv \langle y(\nu_\alpha)\rangle$. A covariance matrix of the power law fits is then given by

$\Sigma_{\alpha\beta} = \langle y(\nu_\alpha)\, y(\nu_\beta)\rangle - \mu_\alpha\, \mu_\beta$,   (17)

where $\boldsymbol{\mu}$ is the discretized version of $\langle y(\nu)\rangle$, and

$\langle y(\nu_\alpha)\, y(\nu_\beta)\rangle = A^2\left(\frac{\nu_\alpha\nu_\beta}{\nu_*^2}\right)^{-\alpha_0}\exp\left[\frac{\Delta\alpha^2}{2}\ln^2\left(\frac{\nu_\alpha\nu_\beta}{\nu_*^2}\right)\right]$.   (18)

Finally, we take the covariance $\boldsymbol{\Sigma}$ and insert it into the left hand side of Equation (4) (just as we did with $\mathbf{C}$ earlier) to form our spectral correlation matrix $\mathbf{R}$. Note that the normalization constant $A$ cancels out in the process, as we claimed.
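A short numerical sketch of this construction (our illustration, with placeholder choices for the band, the reference frequency, the fiducial spectral index, and the index scatter):

import numpy as np

def spectral_correlation(freqs, nu_star, alpha0=2.5, dalpha=0.5):
    # Frequency correlation matrix R for power-law foregrounds with a
    # Gaussian-distributed spectral index, following Eqs. (14)-(18).
    lx = np.log(freqs / nu_star)
    mu = np.exp(-alpha0 * lx + 0.5 * dalpha**2 * lx**2)       # mean fit (A = 1), Eq. (16)
    L = lx[:, None] + lx[None, :]                             # ln(nu_a nu_b / nu_*^2)
    second = np.exp(-alpha0 * L + 0.5 * dalpha**2 * L**2)     # second moment, Eq. (18)
    Sigma = second - np.outer(mu, mu)                         # covariance of the fits, Eq. (17)
    d = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(d, d)                             # normalize as in Eq. (4)

freqs = np.linspace(30.0, 100.0, 70)
R = spectral_correlation(freqs, nu_star=150.0)
eigvals = np.linalg.eigvalsh(R)[::-1]
print(eigvals[:5] / eigvals[0])    # roughly exponentially decaying eigenvalue spectrum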

In Figure 4, we show the eigenvalues of $\mathbf{R}$, and in Figure 5, the first few eigenmodes. Just as with the foregrounds themselves, the roughly exponential decay of the eigenvalues shows that the foreground correlations are dominated by the first few eigenmodes, which are smooth functions of frequency.

Figure 4: First few eigenvalues of the spectral correlation matrix $\mathbf{R}$. The eigenvalues decay roughly exponentially, which means that the foreground correlations are dominated by the first few eigenmodes.
Figure 5: First few eigenvectors of the spectral correlation matrix $\mathbf{R}$. These spectral eigenmodes are seen to be quite smooth, and have increasing structure as one goes to higher modes. In conjunction with Figure 4, this shows that the frequency correlations in the foregrounds are dominated by smooth spectral eigenmodes.

II.2.2 Instrumental Noise

We model the instrumental noise in every pixel and every frequency channel as uncorrelated. Additionally, we make the assumption that there are no systematic instrumental effects, so that $\langle\mathbf{n}^{\rm inst}\rangle = 0$. What remains is the random contribution to the noise. Assuming a sky-noise dominated instrument, the amplitude of this contribution is given by the radiometer equation. In our notation, this gives rise to a covariance of the form555In principle, the cosmological signal also contributes to the sky-noise, and so there is in fact some cosmological information in the noise. For the noise term only, we make the approximation that the cosmological signal contributes negligibly to the sky temperature, since the foregrounds are so much brighter. For any reasonable noise level, this should be an excellent assumption, and in any case a conservative one, as extra sources of cosmological information will only serve to tighten observational constraints.

$N^{\rm inst}_{i\alpha,j\beta} \equiv \langle n^{\rm inst}_{i\alpha}\, n^{\rm inst}_{j\beta}\rangle = \frac{T_{\rm sky}^2(\hat{\mathbf{r}}_i,\nu_\alpha)}{\Delta\nu\,\tau}\,\delta_{ij}\,\delta_{\alpha\beta}$,   (19)

where $\tau$ is the integration time per pixel, $\Delta\nu$ is the channel width, and $T_{\rm sky}$ is the sky temperature (which, for a sky-noise dominated instrument, is given by our foreground template).
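In code, the per-voxel thermal noise level implied by the radiometer equation can be sketched as follows (a simple illustration under the sky-noise dominated assumption; the numbers are placeholders):

import numpy as np

def radiometer_noise_std(t_sky_K, channel_width_hz, t_int_s):
    # Thermal noise standard deviation (in K) per pixel and channel:
    # sigma = T_sky / sqrt(delta_nu * tau), with the system temperature
    # dominated by the sky itself.
    return t_sky_K / np.sqrt(channel_width_hz * t_int_s)

# Example: a 3000 K sky, 1 MHz channels, 100 hours of integration per pixel
print(radiometer_noise_std(3000.0, 1.0e6, 100.0 * 3600.0))   # ~5 mK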

The instrumental noise is different from all other signal sources in an experiment (including the cosmological signal and all other forms of generalized noise) in that it is the only contribution to the measurement that is added after the sky has been convolved by the beam of an instrument. All the "other" contributions should be convolved with the beam. For instance, if an instrument has a Gaussian beam with a standard deviation $\sigma_b$, then these contributions to the covariance are multiplied by $e^{-\ell^2\sigma_b^2}$ in spherical harmonic space Knox (1995); Tegmark and Efstathiou (1996). However, since the instrumental noise is the only part of our measurement that does not receive this correction, it is often convenient to adopt the convention that all measured maps of the sky are already deconvolved prior to the main data analysis steps. This allows us to retain all of the expressions for the cosmological signal and the foregrounds that we derived above, and to instead modify the instrumental noise to reflect the deconvolution. In this paper, we will adopt the assumption of already-deconvolved maps.

If the instrumental noise contribution were rotation-invariant on the sky (as is the case for many Cosmic Microwave Background experiments), modifying the noise to reflect deconvolution would be simple. One would simply multiply the instrumental noise covariance by $e^{\ell^2\sigma_b^2}$ in spherical harmonic space Knox (1995); Tegmark and Efstathiou (1996). Unfortunately, since we are assuming a sky-noise dominated instrument, the assumption of rotation invariance breaks down, thanks to structures such as the Galactic plane, which clearly contributes to the sky signal in an angularly dependent way. The simple prescription of multiplying by $e^{\ell^2\sigma_b^2}$ thus becomes inadequate, and a better method is required.

Suppose we work in the continuous limit and let $B(\hat{\mathbf{r}}\cdot\hat{\mathbf{r}}')$ be an integration kernel that represents a Gaussian instrumental beam, so that

(20)

where $\tilde{T}$ is the convolved version of the original sky and the $Y_{\ell m}$ are the spherical harmonics. With this, our effective (i.e. deconvolved) instrumental noise covariance is given by

(21)

where $N^{\rm inst}$ is the continuous version of the instrumental noise covariance of Equation (19), and $B^{-1}$ is the deconvolution kernel (given by the operator inverse of $B$), which in spherical harmonic space simply multiplies by $e^{\ell^2\sigma_b^2/2}$. For notational simplicity, we have omitted constants and suppressed the frequency dependence (since it has nothing to do with the spatial deconvolution). Now, suppose we make the approximation that the foregrounds are spatially smooth and slowly varying. In real space, $B^{-1}$ is a rapidly oscillating function that is peaked around $\hat{\mathbf{r}} = \hat{\mathbf{r}}'$. We may thus move the two copies of $T_{\rm sky}$ outside the integral, setting their arguments to $\hat{\mathbf{r}}$ and $\hat{\mathbf{r}}'$:

(22)

If the $T_{\rm sky}$ terms were not present, this would be equivalent to the "usual" prescription, where the effective noise kernel involves transforming to spherical harmonic space, multiplying by $e^{\ell^2\sigma_b^2}$, and transforming back. Here, the prescription is similar, except the kernel is modulated in amplitude by the sky signal.

In conclusion, we can take instrumental beams into account simply by replacing with an effective instrumental noise covariance of the form

(23)

where , and is a deconvolved white noise covariance (i.e. one that is multiplied by in spherical harmonic space) with proportionality constant .

In deriving our deconvolved noise covariance, we made the assumption that the sky emission is spatially smooth and slowly varying. This allowed us to treat the deconvolution analytically even in the case of non-white noise, and in Section III.2 it will allow us to analytically derive an optimal estimator for the cosmological signal. While our assumption of smooth emission is of course only an approximation, we expect it to be a rather good one for our purposes. The only component of the sky emission where smoothness may be a bad assumption is the collection of bright point sources. However, we will see in Section III.2 that the optimal estimator heavily downweights the brightest regions of the sky, so extremely bright point sources are effectively excluded from the analysis anyway.
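The effect of working with already-deconvolved maps can be illustrated with a short sketch (ours): the white noise power is boosted by the inverse beam on scales finer than the beam, and the effective covariance additionally modulates this kernel by the local sky brightness.

import numpy as np

def deconvolved_noise_power(ells, sigma_pix2, sigma_b_rad):
    # White noise power boosted by exp(l^2 sigma_b^2): dividing out a Gaussian
    # beam amplifies the noise on angular scales finer than the beam.
    return sigma_pix2 * np.exp(ells**2 * sigma_b_rad**2)

ells = np.arange(200)
n_eff = deconvolved_noise_power(ells, sigma_pix2=1.0e-4, sigma_b_rad=np.radians(2.0))
# The full effective noise kernel is then further multiplied by a factor
# proportional to T_sky(r) T_sky(r') between any two directions r and r'.
print(n_eff[::50])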

II.2.3 Cosmological Anisotropy Noise

In measuring the global signal, we are measuring the monopole contribution to the sky. As mentioned above, any anisotropic contribution to the cosmological power is therefore a noise contribution as far as a global signal experiment is concerned. By construction, these non-monopole contributions have a zero mean after spatially averaging over the sky, and thus do not result in a systematic bias to a measurement of the global signal. They do, however, have a non-zero variance, and therefore contribute to the error bars.

Although it is strictly speaking non-zero, we can safely ignore the cosmological anisotropy noise because it is negligibly small compared to the foreground noise. Through a combination of analytic theory Lewis and Challinor (2007) and simulation analysis Bittner and Loeb (2011), the cosmological anisotropies have been shown to be negligible on angular scales larger than a few degrees, which is the regime that we remain in for this paper.

II.2.4 Generalized noise model summary

In the subsections above, we have outlined the various contributions to the generalized noise that plagues any measurement of the global signal. Of these contributions, only foregrounds have a non-zero mean, so the mean of our generalized noise model is just that of the foregrounds:

$\langle n_{i\alpha}\rangle = \langle n^{\rm fg}_{i\alpha}\rangle = m_{i\alpha}$.   (24)

Foregrounds therefore have a special status amongst the different components of our generalized model, for they are the only contribution with the potential to cause a systematic bias in our global signal measurement. The other contributions appear only in the total noise covariance, taken to be the sum of the foreground covariance and the effective instrumental noise covariance:

$\mathbf{N} \equiv \langle\mathbf{n}\mathbf{n}^t\rangle - \langle\mathbf{n}\rangle\langle\mathbf{n}\rangle^t = \mathbf{C}^{\rm fg} + \mathbf{N}^{\rm inst}$,   (25)

where as noted above, we are neglecting the cosmological anisotropy noise.

In the foreground subtraction/data analysis scheme that we describe in Section III, we will think of the mean as a foreground template that is used to perform a first subtraction. However, there will inevitably be errors in our templates, and thus our scheme also takes into account the covariance of our model. In our formalism, the mean term therefore represents our best guess as to what the foreground contamination is, and the covariance quantifies the uncertainty in our guess. We note that this is quite different from many previous approaches in the literature, where either the foreground modeling error is ignored (e.g. when the foreground spectra are assumed to be perfect polynomials), or the mean is taken to be zero and the covariance is formed by taking the ensemble average of the outer product of the foreground template error. The former approach is clearly unrealistic, while the latter approach has a number of shortcomings. For example, it is difficult to compute the necessary ensemble average, since foregrounds are difficult to model from first principles, and empirically the only sample that we have for taking this average is our Galaxy. As a solution to this, ensemble averages are often replaced by spatial (i.e. angular) averages. But this is unsatisfactory for our purposes, since in Section III.2 we will be using the angular structure of foregrounds to aid with foreground subtraction, and this is impossible if the information has already been averaged out. Even if an ensemble average could somehow be taken (perhaps by running a large suite of radiative transfer foreground simulations), a foreground subtraction scheme that involved minimizing the resulting variance would be non-optimal for two reasons. First, in such a scheme one would be guarding against foreground power from a “typical” galaxy, which is irrelevant—all that matters to an experiment are the foregrounds that are seen in our Galaxy, even if they are atypical. In addition, foregrounds are not Gaussian distributed, and thus a minimization of the variance is not necessarily optimal.

Our approach—taking the mean to be an empirical foreground template and the covariance to be the errors in this template—solves these problems. Since the covariance arises from measurement errors (which can usually be modeled to an adequate accuracy), taking the ensemble average is no longer a problem. And with the mean term being a template for the foregrounds as seen by our experiment, our foreground model is tailored to our Galaxy, even if our Galaxy happens to be atypical. Finally, while the foregrounds themselves are certainly not Gaussian, it is a much better approximation to say that the uncertainties in our model are Gaussian, at least if the uncertainties are relatively small. Constructing our foreground model in this way thus allows us to take advantage of the optimal data analysis techniques that we introduce in Section III.

II.3 Why it's hard

Before we proceed to describe how the global signal can be optimally extracted, we pause to describe the challenges ahead666Not included in this paper is the fact that an instrument might have a non-trivial frequency response that needs to be calibrated extremely well. In principle, if one has 1) sufficiently good instrumental calibration and 2) an exquisitely accurate foreground model, then it will always be possible to pick out a small cosmological signal from beneath the foreground sources, however bright they might be. In this paper we concentrate on lessening the second requirement by proposing reliable foreground subtraction algorithms. Tackling the problem of calibration is beyond the scope of this paper, but encouraging progress has recently been made in engineering tests Rogers and Bowman (2012).. As an initial "straightforward" approach, one can imagine measuring the global signal by taking a simple spatial average of a measured sky. The corresponding foreground contamination would be obtained by spatially averaging our model, which is shown in Figure 6 along with our expected theoretical signal from Figure 1. A straightforward measurement would thus be completely dominated by the bright foregrounds. In addition, both the foreground contamination and the theoretical signal are smooth as a function of frequency, making it difficult to use foreground subtraction techniques that have been proposed for tomographic maps, where the cosmological signal is assumed to vary much more rapidly as a function of frequency than the foregrounds. It is therefore crucial that optimal foreground cleaning methods are employed in the analysis, and in the following section we derive such methods.

Figure 6: A comparison of the absolute value of the expected theoretical signal spectrum (blue circles) and the mean foreground contamination (purple squares). This highlights the difficulty of global signal experiments: the foregrounds are many orders of magnitude brighter than the signal we seek to measure.

III A framework for global signal measurements

In this section, we develop the mathematical framework for analyzing data from global signal experiments. We begin with a measurement equation. For an experiment with both spectral and angular sensitivity, we have

$\mathbf{x} = \mathbf{A}\mathbf{s} + \mathbf{n}$,   (26)

where $\mathbf{x}$ is a vector of length $N_{\rm pix}N_{\rm freq}$ containing the measurement, $\mathbf{n}$ is the generalized noise contribution of Section II.2, $\mathbf{s}$ is a vector of length $N_{\rm freq}$ containing the global signal that we wish to measure, and $\mathbf{A}$ is a vertical stack of $N_{\rm pix}$ identity matrices of size $N_{\rm freq}\times N_{\rm freq}$. The effect of multiplying the global signal by $\mathbf{A}$ is to copy the theoretical spectrum to every pixel on the sky before the noise and foregrounds are added. The term $\mathbf{A}\mathbf{s}$ therefore represents a data ball777Or perhaps a "data shell", since there is a lower limit to the redshift of the experiment. that contains ideal (noiseless and foregroundless) data that depends only on the radial distance from the center, and not on the direction. To every voxel of this data volume the combined noise and foreground contribution is added, giving the set of measured voxel temperatures that comprise $\mathbf{x}$. Note that by thinking of the measurement vector as a set of voxel temperatures, we have implicitly assumed that a prior mapmaking step has been performed on the raw time-ordered data. This step should ideally take into account instrumental complications such as instrumental beam sidelobes.
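In code, the copy operator and the measurement equation can be sketched as follows (our illustration with toy dimensions):

import numpy as np

def make_copy_operator(n_pix, n_freq):
    # A: vertical stack of n_pix identity matrices of size n_freq x n_freq,
    # so that A @ s copies the global spectrum s into every sky pixel.
    return np.tile(np.eye(n_freq), (n_pix, 1))

n_pix, n_freq = 4, 3
A = make_copy_operator(n_pix, n_freq)
s = np.array([-0.05, 0.01, 0.02])                 # toy global signal spectrum (K)
n = 0.001 * np.random.default_rng(0).standard_normal(n_pix * n_freq)
x = A @ s + n                                     # Eq. (26): measured voxel temperatures
print(A.shape, x.shape)                           # (12, 3) (12,)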

For measurements with no angular sensitivity, we can define $\bar{\mathbf{x}}$ to be the globally averaged measurement. In this notation, our measurement equation becomes

$\bar{\mathbf{x}} = \mathbf{s} + \bar{\mathbf{n}}$,   (27)

where $\bar{\mathbf{n}}$ is the angularly averaged noise and foreground contribution, with mean

$\langle\bar{n}_\alpha\rangle = \frac{1}{N_{\rm pix}}\sum_i \langle n_{i\alpha}\rangle = \frac{1}{N_{\rm pix}}\sum_i m_{i\alpha} \equiv \bar{m}_\alpha$,   (28)

where in the last step we used the definition in Equation (24), and covariance

$\bar{N}_{\alpha\beta} \equiv \langle\bar{n}_\alpha\bar{n}_\beta\rangle - \langle\bar{n}_\alpha\rangle\langle\bar{n}_\beta\rangle = \frac{1}{N_{\rm pix}^2}\sum_{ij} N_{i\alpha,j\beta}$.   (29)

Our goal is to derive a statistically optimal estimator $\hat{\mathbf{s}}$ for the true global signal $\mathbf{s}$. With an eye towards optimizing experimental design, we will construct optimal estimators for both measurement equations, treating the spectral-only measurements in Section III.1 and the spectral-plus-angular measurements in Section III.2. We will prove that beyond simply subtracting a best-guess model for the foregrounds, there is formally nothing that can be done to mitigate foreground residuals by using spectral information only. In contrast, adding angular foreground information partially immunizes one from errors in the best-guess model, and allows the final error bars to be reduced.

III.1 Methods using only spectral information

For methods that use only spectral information, we write down an arbitrary linear estimator of the form

$\hat{\mathbf{s}} = \mathbf{M}\bar{\mathbf{x}} + \mathbf{b}$,   (30)

where $\mathbf{M}$ is an $N_{\rm freq}\times N_{\rm freq}$ matrix and $\mathbf{b}$ is a vector of length $N_{\rm freq}$, whose forms we will derive by minimizing the variance of the estimator.

Taking the ensemble average of our estimator and inserting Equation (27) gives

$\langle\hat{\mathbf{s}}\rangle = \mathbf{M}\mathbf{s} + \mathbf{M}\bar{\mathbf{m}} + \mathbf{b}$,   (31)

which shows that in order for our estimator to avoid having a systematic additive bias, one should select

$\mathbf{b} = -\mathbf{M}\bar{\mathbf{m}}$.   (32)

With this choice, we have $\langle\hat{\mathbf{s}}\rangle = \mathbf{M}\mathbf{s}$. The variance of this estimator can be similarly computed, yielding

$\langle\hat{\mathbf{s}}\hat{\mathbf{s}}^t\rangle - \langle\hat{\mathbf{s}}\rangle\langle\hat{\mathbf{s}}\rangle^t = \mathbf{M}\bar{\mathbf{N}}\mathbf{M}^t$.   (33)

We can minimize this variance subject to the normalization constraint by using Lagrange multipliers. To do so we minimize

(34)

with respect to the elements of . Taking the necessary derivatives and solving for the Lagrange multiplier that satisfies the normalization constraint, one obtains

$M_{\alpha\beta} = \frac{(\bar{\mathbf{N}}^{-1})_{\alpha\beta}}{(\bar{\mathbf{N}}^{-1})_{\alpha\alpha}}$.   (35)

Inserting this into our general form for the estimator, we find

$\hat{s}_\alpha = \frac{\left[\bar{\mathbf{N}}^{-1}\left(\bar{\mathbf{x}} - \bar{\mathbf{m}}\right)\right]_\alpha}{(\bar{\mathbf{N}}^{-1})_{\alpha\alpha}}$.   (36)

In words, this prescription states that one should take the data, subtract off the known foreground contamination, and then inverse variance weight the result before re-weighting to form the final estimator. The inverse variance weighting performs a statistical suppression/subtraction of instrumental noise and foreground contamination. Loosely speaking, this step corresponds to the subtraction of polynomial modes in the spectra as simulated in Pritchard and Loeb (2010); Morandi and Barkana (2012) and implemented in Bowman and Rogers (2010). The final re-weighting rescales the modes so that the previous subtraction step does not cause the estimated modes to be biased high or low. For instance, if a certain mode is highly contaminated by foregrounds, it will be strongly down-weighted by the inverse variance weighting, and thus give an artificially low estimate of the mode unless it is scaled back up.
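A numerical sketch of this recipe (ours), taking the re-weighting to be the per-channel normalization described above:

import numpy as np

def spectral_only_estimator(x_bar, m_bar, N_bar):
    # Subtract the foreground template, apply inverse-covariance weighting,
    # then rescale each channel so that the weighting itself does not bias
    # the amplitude of the recovered spectrum high or low.
    Ninv = np.linalg.inv(N_bar)
    weighted = Ninv @ (x_bar - m_bar)
    return weighted / np.diag(Ninv)

# Toy example with 3 frequency channels
N_bar = np.array([[2.0, 0.5, 0.1],
                  [0.5, 1.5, 0.4],
                  [0.1, 0.4, 1.0]])
s_true = np.array([-0.10, 0.00, 0.05])
m_bar = np.array([300.0, 250.0, 210.0])
x_bar = s_true + m_bar                       # noiseless toy measurement
print(spectral_only_estimator(x_bar, m_bar, N_bar))
# The recovered values equal the input only up to the window function,
# reflecting the windowed nature of this estimator.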

The corresponding measurement error covariance for this estimator is given by

(37)

It is instructive to compare this with the errors predicted by the Fisher matrix formalism. By the Cramer-Rao inequality, an estimator that is unwindowed (i.e. one that has $\mathbf{M} = \mathbf{I}$) will have a covariance that is at least as large as the inverse of the Fisher matrix. Computing the Fisher matrix thus allows one to estimate the best possible error bars that can be obtained from a measurement. In the approximation that fluctuations about the mean are Gaussian, the Fisher matrix takes the form

$F_{\alpha\beta} = \frac{1}{2}\,\textrm{tr}\!\left[\mathbf{C}^{-1}\mathbf{C}_{,\alpha}\,\mathbf{C}^{-1}\mathbf{C}_{,\beta}\right] + \boldsymbol{\mu}_{,\alpha}^{t}\,\mathbf{C}^{-1}\boldsymbol{\mu}_{,\beta}$,   (38)

where commas denote derivatives with respect to the parameters that one wishes to measure, and $\boldsymbol{\mu}$ and $\mathbf{C}$ denote the mean and covariance of the measured data. In our case, the goal is to measure the global spectrum, so the parameters are the values of the spectrum at various frequencies. Put another way, we can write our mean measurement equation as

$\langle\bar{\mathbf{x}}\rangle = \sum_\beta s_\beta\,\mathbf{e}_\beta + \bar{\mathbf{m}}$,   (39)

where $\mathbf{e}_\beta$ is a unit vector with 0's everywhere except for a 1 at the $\beta$th frequency channel. The derivative with respect to the parameter $s_\alpha$ (i.e. the derivative with respect to the mean measured spectrum in the $\alpha$th frequency channel) is therefore simply equal to $\mathbf{e}_\alpha$. Since the measurement covariance does not depend on the cosmological signal $\mathbf{s}$, our Fisher matrix reduces to

$F_{\alpha\beta} = \mathbf{e}_\alpha^{t}\,\bar{\mathbf{N}}^{-1}\mathbf{e}_\beta = \left(\bar{\mathbf{N}}^{-1}\right)_{\alpha\beta}$,   (40)

which shows that the Fisher matrix is simply the inverse covariance, i.e. $\mathbf{F} = \bar{\mathbf{N}}^{-1}$. This implies that the covariance of the optimal unwindowed method is equal to the original noise and foreground covariance, which means that the error bars on the estimated spectrum are no smaller than if no line-of-sight foreground subtraction were attempted beyond the initial removal of the foreground bias [Equations (31) and (32)]. In our notation, an unwindowed estimator would be one with $\mathbf{M} = \mathbf{I}$ [from Equation (31) plus the requirement that the estimator be unbiased, i.e. Equation (32)]. But this means our optimal unwindowed estimator is

$\hat{\mathbf{s}} = \bar{\mathbf{x}} - \bar{\mathbf{m}}$,   (41)

which says to subtract our best-guess foreground spectrum model and to do nothing more!

To perform just a direct subtraction on a spectrum that has already been spatially averaged and to do nothing else [as Equation (41) suggests] is undesirable, because the error covariance in our foreground model is simply propagated directly through to our final measurement covariance. This would be fine if we had perfect knowledge of our foregrounds. Unfortunately, uncertainties in our foreground models mean that residual foregrounds can result in rather large error bars. As an example, suppose we plugged our foreground covariance from Section II.2.1 into the spatially averaged covariance [Equation (29)]. Even if we optimistically assume completely uncorrelated foreground model errors (so that they average down), the resulting error bar is far larger than the amplitude of the cosmological signal that one expects.

If one uses the minimum variance estimator [Equation (36)] instead, one can reduce the error bars slightly (by giving up on the unwindowed requirement, the estimator can evade the Cramer-Rao bound). However, this reduction is purely cosmetic, for Equations 36 and 41 differ only by the multiplication of an invertible matrix. There is thus no difference in information content between the two estimators, and the minimum variance estimator will not do any better when one goes beyond the measured spectrum to constrain theoretical parameters.

Intuitively, one can do no better than a “do nothing” algorithm because the global signal that we seek to measure is itself a spectrum (with unique cosmological information in every frequency channel), so a spectral-only measurement provides no redundancy. With no redundancy, one can only subtract a best-guess foreground model, and it is impossible to minimize statistical errors in the model itself.

The key, then, is to have multiple, redundant measurements that all contain the same cosmological signal. This can be achieved by designing experiments with angular information (i.e. ones that do not integrate over the entire sky automatically). Because the global signal is formally a spatial monopole, measurements in different pixels on the sky have identical cosmological contributions, but different foreground contributions, allowing us to further distinguish between foregrounds and cosmological signal.

III.2 Methods using both spectral and angular information

For an experiment with both spectral and angular information, the relevant measurement equation is Equation (26), and the minimum variance, unwindowed estimator for the signal can be shown Tegmark (1997) to take the form

$\hat{\mathbf{s}} = \left[\mathbf{A}^t\mathbf{N}^{-1}\mathbf{A}\right]^{-1}\mathbf{A}^t\mathbf{N}^{-1}\left(\mathbf{x} - \mathbf{m}\right)$.   (42)

With this estimator, the error covariance can also be shown to be

$\boldsymbol{\Sigma} \equiv \langle\hat{\mathbf{s}}\hat{\mathbf{s}}^t\rangle - \langle\hat{\mathbf{s}}\rangle\langle\hat{\mathbf{s}}\rangle^t = \left[\mathbf{A}^t\mathbf{N}^{-1}\mathbf{A}\right]^{-1}$.   (43)

These equations in principle encode all that we need to know for data analysis, and even allow for a generalization of the results that follow in the rest of this paper. For example, frequency-dependent beams (a complication that we ignore, as noted earlier) can be incorporated into the analysis by making suitable adjustments to $\mathbf{N}$. Recall from Section II.2.2 that our convention is to assume that all data sets have already been suitably deconvolved prior to our analysis. Taking into account a frequency-dependent beam is therefore just a matter of including the effects of a frequency-dependent deconvolution in the generalized noise covariance. For the Gaussian beams considered in this paper, for instance, we can simply make the beam width $\sigma_b$ a function of frequency.
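For small problems, Equations (42) and (43) can be evaluated by brute force, as in the following sketch (ours; it inverts the full voxel-space covariance directly, which is exactly what the remainder of this section is designed to avoid for realistic survey sizes):

import numpy as np

def global_signal_gls(x, m, N, n_pix, n_freq):
    # s_hat = [A^t N^-1 A]^-1 A^t N^-1 (x - m), with error covariance
    # Sigma = [A^t N^-1 A]^-1, where A copies the spectrum to every pixel.
    A = np.tile(np.eye(n_freq), (n_pix, 1))
    Ninv = np.linalg.inv(N)
    AtNinv = A.T @ Ninv
    Sigma = np.linalg.inv(AtNinv @ A)        # Eq. (43)
    s_hat = Sigma @ (AtNinv @ (x - m))       # Eq. (42)
    return s_hat, Sigma

# Toy problem: 50 pixels, 3 channels, diagonal noise of varying amplitude
rng = np.random.default_rng(1)
n_pix, n_freq = 50, 3
s_true = np.array([-0.10, 0.00, 0.05])
m = np.zeros(n_pix * n_freq)                          # pretend the template is exact
sigma = 0.05 * (1.0 + rng.random(n_pix * n_freq))
N = np.diag(sigma**2)
x = np.tile(s_true, n_pix) + m + sigma * rng.standard_normal(n_pix * n_freq)
s_hat, Sigma = global_signal_gls(x, m, N, n_pix, n_freq)
print(s_hat, np.sqrt(np.diag(Sigma)))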

Despite their considerable flexibility, we will not be using Equations (42) and (43) in their most general form. For the rest of the paper, we will make the approximation of frequency-independent beams, which allows us to make some analytical simplifications. Doing so not only allows us to build intuition for what the matrix algebra is doing, but also makes it computationally feasible to explore a wide region of parameter space, as we do in later sections. In its current form, Equation (42) is too computationally expensive to be evaluated over and over again because it involves the inversion of $\mathbf{N}$, which is an $N_{\rm pix}N_{\rm freq}\times N_{\rm pix}N_{\rm freq}$ matrix. Given that $N_{\rm pix}N_{\rm freq}$ is likely to be quite large for a survey with full spectral and angular information, it would be wise to avoid direct matrix inversions.

We thus seek to derive analytic results for $\mathbf{N}^{-1}$ and $\left[\mathbf{A}^t\mathbf{N}^{-1}\mathbf{A}\right]^{-1}$, where $\mathbf{N}$ is given by Equation (25). We begin by factoring out the foreground templates from our generalized noise covariance, so that , where888Recall that throughout this paper, we use Greek indices to signify the spectral dimension (or spectral eigenmodes) and Latin indices to signify angular dimensions. , just as we defined in Section II.2.2. This is essentially a whitening procedure999In what follows, , , , and all refer to the same matrix, but use different unit conventions and/or are expressed in different bases. The original generalized noise is assumed to be in "real space" i.e. frequency and spatial angles on the sky; is in the same basis, but is in units where the foreground model has been divided out; is the same as , but in a spatial angle and spectral eigenforeground basis; is the same as , but in a spherical harmonic and spectral eigenforeground basis., making the generalized noise independent of frequency or sky pixel. Since both expressions involve $\mathbf{N}^{-1}$, we proceed by finding $\mathbf{N}^{-1}$, and to do so we move into a diagonal basis. The instrumental part is already diagonal and easily invertible, so our first step is to perform a similarity transformation in the frequency-frequency components in order to diagonalize $\mathbf{R}$ (which, recall from Section II.2, quantifies the spectral correlations of the foregrounds). In other words, we can write the whitened covariance as

(44)

where $B_{\alpha k}$ signifies the value of the $\alpha$th component (i.e. frequency channel) of the $k$th eigenvector of $\mathbf{R}$, and $\lambda_k$ is the corresponding eigenvalue. Note also that the eigenvectors are orthonormal.

Our next step is to diagonalize $\mathbf{Q}$, the spatial correlations of the foregrounds, as well as the whitened instrumental noise covariance, which is spatially correlated after the data has been deconvolved. If we assume that the correlations are rotationally invariant101010Recall that $\mathbf{Q}$ encodes the spatial correlations in the errors of our foreground model. It is thus entirely possible to break rotation invariance, for instance by using a foreground model that is constructed from a number of different surveys, each possessing different error characteristics and different sources of error. For this paper we ignore this possibility in order to gain analytical intuition, but we note that it can be corrected by finding the eigenvectors and eigenvalues of $\mathbf{Q}$, just as we did with $\mathbf{R}$., these matrices will be diagonal in a spherical harmonic basis. For computational convenience, we will now work in the continuous limit. As discussed above and shown in Tegmark (1997); Knox (1995), this is entirely consistent with the discrete approach provided one augments all noise covariances with the appropriate pixel-size factor. In the continuous limit, the spatial correlations of the foregrounds therefore take the form

(45)

which (except for our factor) is known as a Fisher function, the analog of a Gaussian on a sphere. For , this reduces to the familiar Gaussian:

(46)

where is the angle between the two locations on the sphere. Switching to a spherical harmonic basis, we have

(47)

where denotes the spherical harmonic with azimuthal quantum number and magnetic quantum number , and the last approximation holds if .
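The small-angle limit quoted above can be checked numerically. The snippet below assumes the standard von Mises–Fisher normalization on the sphere, which may differ from our convention by overall factors, and verifies that it approaches the corresponding Gaussian within a few widths of the peak.

```python
import numpy as np

# Numerical check (under the standard von Mises--Fisher normalization) that the
# Fisher function on the sphere approaches a Gaussian in the large-concentration limit.
kappa = 200.0                                   # illustrative concentration parameter
theta = np.linspace(0.0, 3.0 / np.sqrt(kappa), 500)   # a few characteristic widths
fisher = kappa / (4 * np.pi * np.sinh(kappa)) * np.exp(kappa * np.cos(theta))
gauss = kappa / (2 * np.pi) * np.exp(-0.5 * kappa * theta**2)
print(np.max(np.abs(fisher - gauss) / gauss))   # ~ a few percent; improves as theta -> 0
```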

For the instrumental noise term, we saw in Section II.2.2 that after dividing out the instrument’s beam, we have

(48)

and adding this to the foreground contribution gives us the equivalent of but in spherical harmonic space:

(49)

The diagonal nature of in this equation allows a straightforward inversion:

(50)

thus allowing us to write the inverse matrix in the original spatial basis as

(51)

where and is a unitary matrix that transforms from a real-space angular basis to a spherical harmonic basis. The inverse of in the original basis is then obtained by evaluating .
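Computationally, this diagonal structure means that the inverse covariance can be applied one spherical harmonic mode at a time, without ever constructing a large pixel-space matrix. The following sketch illustrates the pattern; the placeholder foreground and noise angular power spectra (and the use of healpy for the harmonic transforms) are assumptions of the example, not part of our formalism.

```python
import numpy as np
import healpy as hp

def apply_inverse_angular_cov(map_in, cl_fg, n_ell, lmax):
    """Apply the inverse of a rotationally invariant covariance to a whitened map.

    cl_fg : placeholder angular power spectrum of the whitened foreground-model errors
    n_ell : placeholder instrumental noise power after beam deconvolution (rises at high ell)
    """
    nside = hp.get_nside(map_in)
    alm = hp.map2alm(map_in, lmax=lmax)     # rotate to the spherical harmonic basis
    inv_cl = 1.0 / (cl_fg + n_ell)          # diagonal inverse, applied mode by mode
    alm = hp.almxfl(alm, inv_cl)
    return hp.alm2map(alm, nside)           # rotate back to pixel space
```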

We are now ready to assemble the pieces to form which, in the notation of our various changes of basis, can be written as . We first compute

(52)

where is the reciprocal temperature (in units of ) at the frequency and the pixel of our foreground templates. Note that in the above expression, there is no sum over yet. This is accomplished by the angular summation matrix , giving

(53)

where there is similarly no sum over , and signifies our reciprocal foreground templates in spherical harmonic space. The inverse covariance is thus given by

(54)

Defining to be the total integration time over the survey\footnote{That is, refers to the amount of integration time spent by a single beam on a small patch of the sky with area equal to our pixel size, and refers to the total integration time of a single beam scanning across the entire sky. An experiment capable of forming independent beams simultaneously would require one to replace with in our formalism.}, and making the substitution gives

(55)

At this point, notice that the angular cross-power spectrum between two reciprocal maps and at frequencies and respectively is given by

(56)

where the “” superscript serves to remind us that is the cross-power spectrum of the reciprocal maps, not the original foreground templates. With this, our expression for the inverse measurement covariance can be written as

(57)

Permuting the sums and recalling that and are the eigenvalue and eigenvector of our foreground spectral correlation matrix respectively, this expression can be further simplified to give

(58)

This provides a fast prescription for computing . One first inverts a series of relatively small matrices (each given by the expression in square brackets). These inversions are not computationally burdensome, for and are frequency indices, so the matrices are only of dimension . One then uses publicly available fast routines to compute the angular cross-power spectrum, multiplies by and by the inverted matrices, and sums over . The resulting matrix is then inverted, a task that is far more computationally feasible than a brute-force evaluation of , which would require inverting an matrix.
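To illustrate the computational pattern only, a Python sketch using healpy's anafast routine for the cross-power spectra might look as follows. The contents of the small bracketed matrices and the exact prefactors are set by Equation (58) and are treated here as precomputed inputs; the way the pieces are combined below is schematic, and only the structure (small per-mode inversions, fast cross-spectra, a sum over multipoles, and one final small inversion) is taken from the text.

```python
import numpy as np
import healpy as hp

def inverse_measurement_covariance(recip_maps, per_ell_blocks, lmax):
    """Schematic version of the fast prescription described above.

    recip_maps     : (n_freq, n_pix) reciprocal foreground-template maps
    per_ell_blocks : (lmax+1, n_freq, n_freq) the small bracketed matrices of Eq. (58),
                     assumed precomputed; their contents are not reproduced here.
    """
    n_freq = len(recip_maps)

    # cross-power spectra of the reciprocal maps for every frequency pair
    cl_recip = np.zeros((n_freq, n_freq, lmax + 1))
    for a in range(n_freq):
        for b in range(a, n_freq):
            cl = hp.anafast(recip_maps[a], recip_maps[b], lmax=lmax)
            cl_recip[a, b] = cl_recip[b, a] = cl

    inv_sigma = np.zeros((n_freq, n_freq))
    for ell in range(lmax + 1):
        block_inv = np.linalg.inv(per_ell_blocks[ell])   # small n_freq x n_freq inversion
        # illustrative ell-weighting; the exact prefactor is fixed by Eq. (58)
        inv_sigma += (2 * ell + 1) / (4 * np.pi) * cl_recip[:, :, ell] * block_inv
    return inv_sigma   # a final np.linalg.inv gives the covariance itself
```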

Using essentially identical tricks, we can also derive a simplified version of our estimator [Equation (42)]. The result is

(59)

where and are continuous versions of the measured sky signal and the foreground model respectively, and the weights are defined as

(60)

In words, this estimator calls for the following data analysis recipe:

  1. Take the measured sky signal and subtract the best-guess foreground model .

  2. Downweight regions of the sky that are believed to be heavily contaminated by foregrounds by dividing by our best-guess model . [This and the previous step of course simplify to give , as we wrote in Equation (59)].

  3. Express the result in a spherical harmonic and spectral eigenmode basis.

  4. Take each mode (in spherical harmonic and frequency eigenmode space) and multiply by weights .

  5. Transform back to real (angle plus frequency) space.

  6. Divide by to downweight heavily contaminated foreground regions once more.

  7. Sum over the entire sky to reduce the data to a single spectrum.

  8. Normalize the spectrum by applying [the inverse of Equation (58)] to ensure that modes that were heavily suppressed prior to our averaging over the sky are rescaled back to their correct amplitudes. (Note that the error bars will also be correspondingly rescaled so that heavily contaminated modes are correctly labeled as lower signal-to-noise measurements).
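For readers who find code easier to parse than prose, the sketch below walks through Steps 1–8 with placeholder inputs. The foreground model, spectral eigenmodes, weights, and normalization matrix are all assumed to have been precomputed (e.g., along the lines of the earlier snippets), and the snippet is meant to convey the flow of the calculation rather than to reproduce our pipeline exactly.

```python
import numpy as np
import healpy as hp

def estimate_global_signal(data, fg_model, V, weights, sigma, lmax):
    """Follow Steps 1-8 of the recipe (schematic; all inputs are placeholders).

    data, fg_model : (n_freq, n_pix) measured maps and best-guess foreground model
    V              : (n_freq, n_eig) spectral eigenforeground modes (columns)
    weights        : (n_eig, lmax+1) the weights of Eq. (60), assumed precomputed
    sigma          : (n_freq, n_freq) normalization matrix [inverse of Eq. (58)]
    """
    n_freq, n_pix = data.shape
    nside = hp.get_nside(data[0])

    # Steps 1-2: subtract the foreground model, then whiten by dividing by it
    whitened = (data - fg_model) / fg_model

    # Step 3: spherical harmonic transform of each channel, then project onto eigenmodes
    alm = np.array([hp.map2alm(whitened[i], lmax=lmax) for i in range(n_freq)])
    alm_eig = V.T @ alm                      # (n_eig, n_alm) in the eigenmode basis

    # Step 4: weight each (eigenmode, multipole) combination
    for xi in range(alm_eig.shape[0]):
        alm_eig[xi] = hp.almxfl(alm_eig[xi], weights[xi])

    # Step 5: transform back to frequency and pixel space
    alm_w = V @ alm_eig
    maps_w = np.array([hp.alm2map(alm_w[i], nside) for i in range(n_freq)])

    # Steps 6-7: divide by the foreground model again and sum over the sky
    spectrum = np.sum(maps_w / fg_model, axis=1)

    # Step 8: normalize so that heavily suppressed modes are restored to their correct amplitude
    return sigma @ spectrum
```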

Figure 7: Weights [Equation (60)] in spherical harmonic and eigenmode space for combining data in our optimal estimator of the global signal [Equation (59)]. Low and high modes are downweighted because of contamination by foregrounds and instrumental noise, respectively.

The recipe outlined here takes full advantage of spectral and angular information to mitigate foregrounds and produce the smallest possible error bars under the constraint that there be no multiplicative or additive bias, i.e., under the constraint that . To gain intuition for how this recipe works, we can plot the weights to see how our optimal prescription uses the various spherical harmonic and eigenforeground modes. This is shown in Figure 7. At low spherical harmonic , the first few (i.e., smoothest) eigenmodes are dominated by foregrounds, so they are severely downweighted. At high , the limited angular resolution of our instrument means that instrumental noise dominates regardless of spectral eigenmode, so all eigenmodes are weighted equally in an attempt to average down the noise. At high (spectrally unsmooth eigenmodes), the foreground contamination is negligible, so the weightings are dictated entirely by the instrumental noise, with the only trend being a downweighting of the noisier high modes.

It is important to emphasize that these weights are applied to whitened versions of the data, i.e., data from which the best-guess foreground model was divided out in Step 2 above. If this were not the case, the notion of weights as a function of would be meaningless: since our goal is to estimate the monopole (in other words, the global signal), all that would be required would be to pick out the component. The monopole of our original data, however, is a linear combination of different modes in the whitened units, and the weights tell us what the optimal linear combination is, taking into account the interplay between foreground and instrumental noise contamination. In contrast, the spectral-only methods are a “one size fits all” approach with no whitening and a simple weighting that consists solely of the row of Figure 7, ignoring the fact that at some the foregrounds dominate, whereas at others the instrumental noise dominates.

The rest of this paper will be dedicated to examining the error properties of our optimal estimator. The goal is to gain an intuitive understanding for the various trade-offs that go into designing an experiment to measure the global spectrum.

III.3 Making the cosmological signal more apparent

In Sections IV.2 and V, we will find that despite our best efforts at foreground mitigation, the resulting error bars on our measured signal can sometimes still be large, particularly for instruments that lack angular resolution. These errors will be dominated by residual foregrounds at every frequency, and are unavoidable without an exquisitely accurate foreground model. The resulting measurements will thus look very much like foreground spectra.

Often, we will find that this happens even when the detection significance is high. This apparent paradox can be resolved by realizing that the large errors are frequently due to a small handful of foreground modes that dominate every single frequency channel. Put another way, the contaminants are fundamentally sparse in the right basis, since a small number of modes are responsible for most of the contamination. Plotting our results as a function of frequency is therefore simply a bad choice of basis. By moving to a better basis (such as one in which our measurement covariance is diagonal, as we will discuss in Section IV.2), it becomes apparent that the cosmological signal can be detected at high significance.
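As a minimal illustration of this change of basis (with a hypothetical error covariance and measured spectrum standing in for the real quantities), one can rotate the measurement into the eigenbasis of its covariance and examine the modes individually:

```python
import numpy as np

def per_mode_significance(spectrum, covariance):
    """Rotate a measured spectrum into the eigenbasis of its error covariance.

    In that basis the errors are uncorrelated, so the handful of badly
    foreground-contaminated modes (large eigenvalues) can be identified and set
    aside, while the remaining modes carry the detection significance.
    Assumes the covariance is symmetric and positive definite.
    """
    eigvals, eigvecs = np.linalg.eigh(covariance)
    rotated = eigvecs.T @ spectrum           # measurement in the diagonal basis
    snr_per_mode = rotated / np.sqrt(eigvals)
    return rotated, np.sqrt(eigvals), snr_per_mode
```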

Still, it is somewhat unfortunate that the frequency basis is a bad one, as the cosmological signal is most easily interpreted as a function of frequency. It would thus be useful to be able to plot, as a function of frequency, a measured spectrum that is not dominated by the largest amplitude foreground modes, even if we are unable to remove them. Essentially, one can arrive at a measurement that is closer to the “true” cosmological signal by simply giving up on certain modes. This means resigning oneself to the fact that some modes will be forever lost to foregrounds, and that one will never be able to measure those components of the cosmological signal. As discussed for an analogous problem in Liu and Tegmark (2012), this can be accomplished by subjecting our recovered signal to a Wiener filter :