Mapping quasar light echoes in 3D with forest tomography
Abstract
The intense radiation emitted by luminous quasars dramatically alters the ionization state of their surrounding IGM. This so-called proximity effect extends out to tens of Mpc and manifests as large coherent regions of enhanced Lyman-α (Lyα) forest transmission in absorption spectra of background sightlines. Here we present a novel method based on Lyα forest tomography which is capable of mapping these quasar ‘light echoes’ in three dimensions. Using a dense grid (10-100) of faint background galaxies as absorption probes, one can measure the ionization state of the IGM in the vicinity of a foreground quasar, yielding detailed information about the quasar’s radiative history and emission geometry. An end-to-end analysis – combining cosmological hydrodynamical simulations post-processed with a quasar emission model, realistic estimates of galaxy number densities, and instrument + telescope throughput – is conducted to explore the feasibility of detecting quasar light echoes. We present a new fully Bayesian statistical method that allows one to reconstruct quasar light echoes from thousands of individual low S/N transmission measurements. Armed with this machinery, we undertake an exhaustive parameter study and show that light echoes can be convincingly detected for luminous quasars over the relevant redshift range, and that the quasar age of individual objects can be constrained to good relative precision for the expected range of ages between 1 Myr and 100 Myr. The observational requirements are relatively modest – moderate-resolution multi-object spectroscopy at low S/N is sufficient, requiring three hour integrations using existing instruments on 8 m class telescopes.
Subject headings:
quasars: general – intergalactic medium – reionization

1. Introduction
Quasars are the most luminous non-transient sources in the Universe and can be observed throughout cosmic history (Bañados et al., 2016; Padovani et al., 2017). They draw their immense power from accretion of matter onto a supermassive black hole (SMBH), and while every massive galaxy is expected to host a SMBH, most of these remain in a quiescent stage and only a small fraction appear as luminous quasars at any given time. The processes which trigger quasar activity are presently unknown and models deliver a diversity of explanations for the sources of nuclear activity (e.g. Di Matteo et al., 2005; Springel et al., 2005; Hopkins et al., 2007; Novak et al., 2011; Cisternas et al., 2011). Observationally, one would like to constrain the duration of bright quasar phases, the so-called quasar lifetime. However, observations have so far not converged on a conclusive picture (see e.g. Martini 2004 for a review). For example, clustering measurements of quasars can constrain the integrated time galaxies (or cosmic halos) host quasars (Adelberger & Steidel, 2005; Croom et al., 2005; Shen et al., 2009; White et al., 2012; Conroy & White, 2013). Other techniques focusing on individual objects are more sensitive to the duration of the last quasar burst. Here, some authors claim rather short times (Schawinski et al., 2015; Eilers et al., 2017), while others see indications for extended quasar activity (Jakobsen et al., 2003; Gonçalves et al., 2008; Worseck & Wisotzki, 2006; Schmidt et al., 2017, 2018).
Due to their high luminosity quasars have a profound impact on their environment on various scales. A better understanding of quasar activity cycles would therefore not only lead to a better understanding of the physics powering active galactic nuclei (AGN) and the growth of SMBH (Soltan, 1982; Shankar et al., 2009; Kelly et al., 2010) but also shed light on the impact of AGN on their host galaxies, in particular quasar feedback which is usually invoked in galaxy formation simulations (Di Matteo et al., 2005; Hopkins et al., 2008b, a). In addition, quasars are the dominant source of hard ionizing photons () in the Universe and it has been proposed that quasar flickering could have a substantial impact on the ionization state of metals in the circumgalactic medium (Oppenheimer et al., 2018; Segers et al., 2017). On even larger scales, quasars dominate the metagalactic UV background at (Faucher-Giguère et al., 2009; Haardt & Madau, 2012; Khaire, 2017; Kulkarni et al., 2018), which maintains the photoionization of the intergalactic medium (IGM) and drives the reionization of He ii, where the latter directly couples the morphology of helium reionization to the emission properties of quasars (e.g. Davies et al., 2017; Schmidt et al., 2018).
The impact of quasars on the IGM is probably also the most promising way to characterize their emission histories. Due to their enormous output of ionizing photons, quasars create so-called proximity zones in the IGM: megaparsec-sized regions with enhanced photoionization and therefore reduced Lyα forest absorption. This proximity effect can be observed in various ways.
The line-of-sight proximity effect describes reduced IGM absorption in quasar spectra close to the quasar position (Carswell et al., 1982; Bajtlik et al., 1988; Scott et al., 2000; Dall’Aglio et al., 2008; Calverley et al., 2011). Since this observation takes place along the light cone, one cannot directly probe the quasar luminosity at past times. However, radiative transfer effects result in some sensitivity to the quasar emission history on scales comparable to the photoionization timescale. Because the H i photoionization timescale is very short at these redshifts, this method usually delivers only lower limits on the quasar age (Eilers et al., 2017, 2018). At higher redshift, during the epoch of reionization, the photoionization timescale is longer and a substantially neutral IGM can further delay the buildup of proximity zones, facilitating sensitivity to longer quasar ages (Davies et al., 2018a, b). Alternatively, one can analyze the He ii forest, for which the photoionization rate is lower and the photoionization timescale correspondingly longer (Khrykin et al., 2016, 2018).
Apart from the line-of-sight proximity effect, there is the transverse proximity effect, which comes into play for close quasar pairs. Here, a background sightline passes close to a foreground quasar, probing the forest absorption in the vicinity of this foreground quasar. The big advantage of this configuration is that the path from the foreground quasar to a point on the background sightline, and from there to Earth, is longer than the direct path from the foreground quasar to Earth. Therefore, the IGM absorption along the background sightline probes past times and is directly sensitive to the emission history of the foreground quasar. This enables constraints on the quasar age based purely on geometry and the finite speed of light (see e.g. Dobrzycki & Bechtold, 1991; Smette et al., 2002; Jakobsen et al., 2003; Adelberger, 2004; Hennawi et al., 2006; Hennawi & Prochaska, 2007; Furlanetto & Lidz, 2011). In addition, the background sightline probes the foreground quasar’s radiation from a different viewing angle than our vantage point on Earth. Quasar unification models predict that each quasar is obscured (e.g. by a dusty torus) toward a substantial part of the sky (Antonucci, 1993; Urry & Padovani, 1995; Netzer, 2015; Lusso et al., 2013). The transverse proximity effect provides an opportunity to directly test these models and to constrain the emission geometry of individual quasars.
To date, the H i transverse proximity effect has, for various reasons, delivered inconclusive results (e.g. Liske & Williger, 2001; Schirber et al., 2004; Croft, 2004; Hennawi et al., 2006; Hennawi & Prochaska, 2007; Kirkman & Tytler, 2008; Lau et al., 2016). For example, at the redshifts typical of most observations, the mean IGM transmission is already fairly high, and the resulting enhancement is relatively small, making the proximity effect difficult to detect. For He ii, however, the opacity at similar redshifts is much higher and a single foreground quasar can cause a dramatic increase in the He ii forest transmission along a background sightline. The best example of this is the discovery of a strong He ii proximity effect along the Q0302-003 sightline (Heap et al., 2000; Jakobsen et al., 2003), which enables joint constraints on the age and obscuration of this foreground quasar (Schmidt et al., 2018). However, Schmidt et al. (2018) analyzed other similar sightlines and found a diversity of quasar emission properties. In particular, the absence of transmission spikes for several foreground quasars suggests that these quasars are either very young or highly obscured. This degeneracy cannot be broken as long as only single background sightlines are analyzed. Unfortunately, the number of He ii sightlines is limited (28), making the analysis of large quasar samples infeasible.
A special variant of the transverse proximity effect does not rely on quasars as background sources but uses faint star-forming galaxies. These are sufficiently abundant that the proximity zone of the foreground quasar can be probed via the forest absorption along many background sightlines. This technique, first proposed by Adelberger (2004) and Visbal & Croft (2008), allows one to map quasar light echoes in three dimensions and in much more detail than is possible with single background sightlines. The concept is illustrated in Figure 1, which clearly shows how the parabolic-shaped appearance of the quasar proximity zone, caused by the finite age of the quasar (see § 3.3.1), can be seen in the forest absorption along the background sightlines. Despite the great potential, this tomographic mapping of quasar light echoes has so far never been attempted in practice, probably because the observational requirements were judged to be too challenging.
Tomographic reconstructions of the large-scale structure of the Universe using the Lyα forest absorption in the spectra of faint background galaxies were recently pioneered by Lee et al. (2014a, b). They showed that using faint star-forming galaxies as background sources delivers sightline densities sufficient to interpolate between sightlines and to reconstruct a tomographic map of the IGM absorption on Mpc scales. By carefully assessing the observational requirements, Lee et al. (2014a) determined that tomography is indeed possible with current-generation facilities, in particular 8–10 m class telescopes and existing multi-object spectrographs. Since then, the COSMOS Lyman-Alpha Mapping And Tomography Observations (Clamato) survey has proven the feasibility in practice and delivered the first tomographic map of the IGM (Lee et al., 2014b, 2017). The prime objective of the Clamato survey is to map the large-scale structure of the Universe, e.g. to find protoclusters (Stark et al., 2015b; Lee et al., 2016) and to study the cosmic web (Stark et al., 2015a; Krolewski et al., 2018). Similar techniques, based however on SDSS/BOSS spectra, were employed by Cai et al. (2015, 2017) to map large-scale overdensities.
In light of these developments, this paper revisits the question of mapping quasar light echoes with Ly forest tomography. Our aim is to demonstrate feasibility, assess sensitivity, and determine the optimal observing strategy by conducting an end-to-end analysis of the experiment, starting from the observational requirements, computation of realistic models, and finally fully Bayesian inference of parameter constraints. To keep the complexity of this pilot study at an acceptable level and to limit the computational expense, we adopt a simple isotropic lightbulb model for the quasar emission and focus only on measurements of the quasar age. In the future, we will relax these assumptions and consider more realistic anisotropic emission from quasars as well as more complex lightcurves.
The structure of this paper is as follows. We summarize all relevant observational parameters like quasar luminosity, instrument properties (sensitivity, spectral resolution, field-of-view), and achievable background sightline density in §2. We will then describe our models of the 3D proximity effect, which are based on state-of-the-art cosmological hydrodynamical simulations postprocessed with a quasar emission model (§3, §4). To compare observational data to our models and infer posterior probability distributions for the quasar age, we develop a sophisticated Bayesian method based on likelihood-free inference (§5). Finally, we apply this analysis pipeline to mock observations and determine the achievable precision on quasar age and assess dependencies on various observational parameters (§6), allowing us to choose the optimal observing strategy.
Throughout the paper we use a flat ΛCDM cosmology with parameters matching those used for the computation of the Nyx hydro simulation, which are broadly consistent with the Planck Collaboration et al. (2018) results. We use comoving distances throughout. In this paper, we consider a simple lightbulb model for the quasar lightcurve in which the quasar turns on, shines with constant luminosity for its full lifetime, and then turns off. This lifetime is, however, different from the quasar age, which describes the time from turn-on until the emission of the photons that are observed on Earth today (see Figure 8). Magnitudes are given in the AB system (Oke & Gunn, 1983).
2. Observational Setup
To set the stage for our undertaking and to define the observational framework of our study, we discuss all relevant observational parameters in this section. This includes the luminosity of potential quasar targets, achievable background sightline densities, required spectral resolution, exposure times, field-of-view (FoV), and other elements. For most of these parameters we will motivate some initial estimates and build up a fiducial observing strategy. We will later in § 6 explore in detail how the quality of the inferred quasar properties depends on these choices and show how the strategy can be optimized.
A key question for mapping quasar light echoes with forest tomography is the optimal quasar redshift to operate at. At low redshift, the average IGM transmission is quite high (e.g. 85%), and in consequence even the brightest foreground quasar can only cause a very limited enhancement in the forest transmission. In such conditions, the stochastic nature of the forest absorption and unavoidable uncertainties in estimating the continuum of the background sources might conceal the transmission enhancement. With increasing redshift, and thus lower mean IGM transmission (e.g. 45%), foreground quasars can cause a stronger transmission enhancement that is easier to detect in a tomographic map. This clearly motivates working at higher redshifts. On the other hand, the number density and brightness of available background galaxies drop dramatically with increasing redshift. One therefore has to accept a much sparser sampling of the proximity region by background sightlines, work with lower S/N spectra, or substantially increase the exposure times. Finding the best compromise between the strong transmission enhancement at high redshift and the fine sampling of the tomographic map at low redshift is one of the primary intentions of this work and requires a detailed assessment of the relevant observational parameters.
| Instrument | FoV | Resolving power^a | Limiting magnitude^b |
|---|---|---|---|
| VLT/FORS II | | 945 | |
| Keck/LRIS | | 1435 | |
| Keck/DEIMOS | | 1852 | |
| Subaru/PFS | diameter | 2300 | t.b.d. |

^a Resolving power with slit. ^b Limiting band magnitude to reach the required S/N in the fiducial exposure time.
2.1. Quasar Luminosities
One would ideally target the brightest quasars at a given redshift since these cause the strongest enhancement in transmission and have the largest proximity zones. We have therefore compiled a collection of the most luminous quasars at each redshift based on the SDSS DR14Q spectroscopic quasar catalog (Pâris et al., 2017). For each redshift bin, we selected the ten brightest objects and show the resulting sample in Figure 2. Conversion from observed SDSS magnitude to monochromatic luminosity is based on the Lusso et al. (2015) quasar template and Galactic extinctions from Schlafly & Finkbeiner (2011) (https://irsa.ipac.caltech.edu/applications/DUST/).
Clearly, the most luminous quasars are found at intermediate redshifts; toward higher redshifts the peak luminosity slowly decreases, while it drops steeply at the lowest redshifts. By computing a running average (Gaussian filter) of the brightest quasar per redshift bin, we obtain a suitable representation of the evolution of the most luminous quasars in the Universe. These are indeed the ideal targets for our experiment, and for the rest of this paper we will use this smooth function (shown in light brown in Figure 2) as the fiducial quasar luminosity.
We stress that other quasar catalogs (e.g. Schmidt & Green, 1983; Véron-Cetty & Véron, 2010; Flesch, 2015; Schindler et al., 2018) list additional ultraluminous quasars. However, including these predominantly affects lower redshifts and does not change the overall picture at the redshifts for which the Lyα forest is accessible from the ground.
2.2. Field-of-View
The region of interest around the foreground quasar is set by the size of its proximity region, i.e. the region where the photoionization rate due to the quasar substantially exceeds the UV background photoionization rate. The optical depth in the forest scales approximately inversely with the ionization rate. Therefore, naively, any appreciable increase in the photoionization rate due to the quasar's ionizing flux should result in a detectable effect.
In Figure 3, we show the expected quasar photoionization rate as a function of transverse distance, assuming a fiducial quasar luminosity consistent with the most luminous quasars in the Universe (see Figure 2), the Lusso et al. (2015) quasar template, and no Lyman limit absorption by the IGM (see Equations 5 and 6 in §3.3 for more details). Compared to the UV background photoionization rate (Becker & Bolton, 2013), such hyperluminous quasars dominate the photoionization rate out to large distances (Figure 3). However, as listed in Table 1, the field-of-view (FoV) of classical multi-object spectrographs is usually much smaller, implying that covering the full extent of a proximity zone would require multiple pointings. To remain efficient, one might rather focus on the central region where the quasar radiation causes the strongest impact on the IGM transmission. The exception is the Subaru Prime Focus Spectrograph (first light in 2021), which will have a large circular field-of-view that could cover the full proximity region with a single pointing. However, our reference concept focuses on the capabilities of currently existing instruments and therefore assumes a circular FoV that could be covered by a VLT/FORS II or Keck/LRIS mosaic, or a mosaic with DEIMOS. Within this region, the photoionization rate of the quasar exceeds the UV background by more than an order of magnitude and will strongly alter the IGM transmission. The usefulness of larger fields will be explored later in §6.8.
For the calculation underlying Figure 3 we have assumed a fixed redshift, but we stress that the amplitude of the UV background as well as the conversion from proper transverse distance to angular size depend only weakly on redshift. Therefore, Figure 3 is representative of the full redshift range we consider in this paper.
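The distance out to which a quasar outshines the UV background can be estimated in a few lines. The sketch below is illustrative only: the Lyman-limit luminosity, power-law slopes, and UV background rate are assumed round numbers, not the paper's fiducial values.

```python
import math

# All numerical values below are assumptions for illustration.
L_NU_912 = 1.0e32       # erg s^-1 Hz^-1 at the Lyman limit (assumed)
SIGMA_0 = 6.35e-18      # cm^2, H i photoionization cross-section at the edge
H_PLANCK = 6.626e-27    # erg s
ALPHA_SUM = 1.7 + 2.75  # assumed spectral slope + cross-section slope
GAMMA_UVB = 1.0e-12     # s^-1, typical UV background photoionization rate
CM_PER_MPC = 3.086e24

def gamma_qso(r_pmpc):
    """H i photoionization rate [1/s] at proper distance r [Mpc],
    ignoring IGM absorption (mean free path set to infinity)."""
    r_cm = r_pmpc * CM_PER_MPC
    f_nu = L_NU_912 / (4.0 * math.pi * r_cm ** 2)   # flux density at the sightline
    return SIGMA_0 * f_nu / (H_PLANCK * ALPHA_SUM)  # analytic power-law integral

# radius where the quasar rate drops to the UV background level (Gamma ~ 1/r^2)
r_eq = math.sqrt(gamma_qso(1.0) / GAMMA_UVB)
print(f"Gamma_QSO = Gamma_UVB at R ~ {r_eq:.1f} proper Mpc")
```

With these assumed numbers the crossover lands at roughly ten proper Mpc, consistent with proximity zones extending over tens of Mpc.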
2.3. Spectral Resolution
As already pointed out by Adelberger (2004), peculiar velocities pose a substantial limitation for tomographic quasar light echo measurements, since they introduce non-trivial distortions in the reconstructed map. Without a priori knowledge of the density field which sources these motions, there is little benefit in taking spectra with substantially better resolution than this velocity scale. For now, we take the corresponding resolving power as the reference resolution and later explore in detail how the fidelity of our reconstructed tomographic map depends on the spectral resolution of the input spectra (§ 6.6).
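The correspondence between a velocity scale and a resolving power is simply R = c/Δv. The velocity values below are examples, not the paper's adopted numbers:

```python
# resolving power corresponding to a velocity scale: R = c / dv
C_KMS = 299_792.458  # speed of light in km/s

for dv in (100.0, 200.0, 300.0):  # example velocity scales in km/s
    print(f"dv = {dv:.0f} km/s  ->  R ~ {C_KMS / dv:.0f}")
```

For peculiar velocities of a few hundred km/s, this yields resolving powers of order one to a few thousand, comparable to the spectrographs listed in Table 1.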
2.4. Required S/N and Exposure Times
The stochastic nature of the forest absorption causes substantial fluctuations when measuring the mean IGM transmission. For example, within a short chunk of a spectrum, the scatter about the mean transmission is considerable (see Figure 11). There is therefore limited gain in obtaining high S/N spectra, since beyond a moderate S/N the intrinsic fluctuations in the IGM absorption dominate the measurement uncertainty. The exact numbers depend of course on the IGM mean transmission and therefore on redshift; however, the in general quite modest requirements on spectral S/N are one of the key factors that make forest tomography feasible with current-generation telescopes (Lee et al., 2014a).
To allow for easier comparison between different instruments and spectral resolutions, we define the S/N as that achieved per resolution element at the reference resolution, independent of the actual resolution or pixel scale of the observations.^2 Specifying the S/N in this way ensures that for higher resolving power the light is more dispersed and more finely sampled, but the overall number of detected photons, and therefore the required exposure time to reach a certain S/N, is conserved. The choice of the reference resolution is arbitrary; however, it is close to the minimum resolving power we require for this project. In addition, to be independent of the actual IGM absorption, we define the S/N as the continuum-to-noise ratio in the region of the forest.

^2 Spectrographs might have a higher spectral resolution (see Table 1) and certainly a finer pixel scale to sample the line-spread function with up to 8 pixels. Therefore, a chunk of the spectrum corresponding to one reference resolution element will be sampled by several pixels, and the actual S/N per pixel will be correspondingly lower.
In Table 1 we list the approximate limiting magnitude to reach the required S/N within the fiducial exposure time for different spectrographs. This calculation assumes the background galaxies have a power-law spectrum (Bouwens et al., 2009). The limiting magnitude quoted is that which yields the required continuum S/N in the forest region at our reference quasar redshift. We have chosen to parametrize the apparent magnitude of the background galaxies in a broadband filter that is conveniently observable and samples the UV continuum of the galaxies redwards of the Lyα forest. For higher redshifts, one technically has to specify magnitudes in a redder band to avoid contamination by the forest. However, for the purpose of this work this is of no concern, since we specify unabsorbed continuum magnitudes anyway.
The sensitivity of spectrographs is in general wavelength dependent, and the S/N at the quasar position achieved in a fixed exposure time (or, vice versa, the limiting magnitude) depends – even for an identical background galaxy – on the redshift of the foreground quasar. We show this dependence for FORS II, LRIS and DEIMOS in Figure 4. The exposure time estimates are based on the Keck (http://etc.ucolick.org/web_s2n/lris, http://etc.ucolick.org/web_s2n/deimos) and ESO (https://www.eso.org/observing/etc/bin/gen/form?INS.NAME=FORS+INS.MODE=spectro, Version P102.5) exposure time calculators and assume good but realistic conditions (FORS II: E2V blue detector and G600V grating or MIT red detector and G600RI grating; airmass = 1.2; fractional lunar illumination FLI = 0.0; seeing = 0.7″, 47% chance). However, the dependence of the achieved S/N on wavelength is relatively weak (see Figure 4) and can to some degree be mitigated by using either red- or blue-optimized instrument setups. To avoid complicating matters further, we ignore this wavelength dependence for the rest of our study and simply assume that for our adopted limiting magnitude the required S/N can be achieved in the fiducial exposure time, independent of quasar redshift. In practice, when planning actual observations, the true sensitivity of the instruments has to be taken into account and the exposure times, limiting magnitudes or S/N ratios adjusted accordingly. If adjustments to the observational parameters are necessary, these follow the usual background-limited scaling relations as long as the objects are substantially fainter than the sky brightness.
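In the sky-noise dominated regime, exposure time grows as the square of the target S/N and as 10^(0.8 Δm) for fainter sources. A minimal sketch of this standard background-limited scaling, with purely illustrative numbers:

```python
# Standard background-limited scaling of exposure time with target S/N and
# source magnitude: t ∝ (S/N)^2 * 10^(0.8 * Δm). Illustrative values only.
def scale_exptime(t_ref_h, sn_ref, sn_new, dmag):
    """Rescale a reference exposure time [hours] for a new S/N target and a
    source fainter by dmag magnitudes (sky-noise dominated regime)."""
    return t_ref_h * (sn_new / sn_ref) ** 2 * 10.0 ** (0.8 * dmag)

# e.g. doubling the S/N target on a source 0.5 mag fainter
t = scale_exptime(3.0, 2.0, 4.0, 0.5)
print(f"required exposure: {t:.1f} h")
```

Doubling the S/N alone quadruples the exposure time; each additional magnitude of source faintness costs a further factor of 10^0.8 ≈ 6.3.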
2.5. Background Sightline Density
The most crucial factor for our experiment is probably the achievable density of background sightlines. We estimate this closely following the approach outlined in Lee et al. (2014a). Based on a luminosity function Φ(M, z), which specifies the number of galaxies per luminosity and comoving volume, the sightline density is given by
n_bg = ∫_{z_min}^{z_max} (dl_c/dz) [ ∫_{−∞}^{M(m_lim)} Φ(M, z) dM ] dz    (1)
where is the limiting apparent magnitude of our survey and the comoving line element along the line-of-sight. Background galaxies can contribute to the tomographic map at the quasar redshift if their redshift falls in the redshift interval defined by
(1 + z_QSO) λ_Lyα / λ_max − 1  ≤  z_bg  ≤  (1 + z_QSO) λ_Lyα / λ_min − 1    (2)
in which λ_min and λ_max denote – in rest wavelengths – the usable part of the background spectra.
We use the luminosity functions of Cucciati et al. (2012) at lower redshifts and combine them with the Bouwens et al. (2015) measurements at higher redshifts. In both cases, we use the analytic Schechter representation of the luminosity function and interpolate the function parameters between redshifts. To convert from the absolute UV magnitude specified in the luminosity functions to the apparent magnitude in our observed bandpass, we use the standard conversion (Hogg, 1999) and assume a power-law galaxy SED (Bouwens et al., 2009).
We show the result in Figure 5, expressed both in terms of sightlines per square degree and as the average comoving separation between sightlines. Clearly, fainter limiting magnitudes allow a higher sightline density and a finer sampling of the tomographic map. However, the achievable density of background sources drops rapidly with increasing quasar redshift, so that at the highest redshifts, even for very faint limiting magnitudes, the average sightline separation remains substantially larger than at lower redshifts.
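The integral in Equation 1 can be evaluated in a few lines. The sketch below uses assumed, illustrative Schechter parameters and a single fixed comoving path length in place of the paper's redshift-interpolated fits:

```python
import math

# Assumed illustrative Schechter parameters (roughly UV-LF-like; NOT the
# paper's fitted Cucciati/Bouwens values):
PHI_STAR = 4.0e-3   # Mpc^-3 mag^-1, normalization
M_STAR = -20.7      # characteristic UV magnitude
ALPHA = -1.5        # faint-end slope

def schechter(M):
    """Schechter function in magnitudes: dN/dM per comoving Mpc^3."""
    x = 10.0 ** (0.4 * (M_STAR - M))
    return 0.4 * math.log(10.0) * PHI_STAR * x ** (ALPHA + 1) * math.exp(-x)

def n_brighter(M_lim, M_bright=-25.0, n_steps=4000):
    """Comoving density of galaxies brighter than M_lim (trapezoidal rule)."""
    dM = (M_lim - M_bright) / n_steps
    total = 0.5 * (schechter(M_bright) + schechter(M_lim))
    for i in range(1, n_steps):
        total += schechter(M_bright + i * dM)
    return total * dM

DELTA_L = 60.0  # assumed comoving path length of the usable forest window [Mpc]
for M_lim in (-21.0, -20.0, -19.0):
    n2d = n_brighter(M_lim) * DELTA_L  # sightlines per comoving Mpc^2
    print(f"M_lim = {M_lim}: mean separation ~ {1.0 / math.sqrt(n2d):.2f} cMpc")
```

Pushing the limiting magnitude fainter steeply increases the surface density of sightlines, which is the trend shown in Figure 5.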
2.6. Summary of Observational Parameters
In the sections above we have collected all of the observational parameters relevant for our forest tomography project. This now allows us to explore the parameter space by varying single quantities like the desired S/N, the field-of-view observed, or the spectral resolution. It also allows us to explore certain paths through the parameter space, e.g. varying the foreground quasar redshift while simultaneously adjusting the quasar brightness, limiting magnitude, and therefore background sightline density, to keep the required exposure time constant. We will use this later in §6 to find the optimal observing strategy for the project.
3. Models / Simulations
Given the observational framework outlined above, we create realistic models of the forest in the vicinity of bright quasars. Our models are based on outputs of cosmological hydrodynamical simulations which we postprocess with a photoionization model that explicitly incorporates finite quasar ages. From these we create mock forest spectra that resemble a given (e.g. the actually observed) pattern of background sightlines in the vicinity of a foreground quasar. In a final step, we forward model observational effects, in particular finite spectral resolution, finite , and continuum fitting errors. The overall scheme is nearly identical to the one we developed in Schmidt et al. (2018) to create He ii forest spectra.
3.1. Nyx Cosmological Hydrodynamical Simulations
We use simulations computed with the Eulerian hydrodynamical simulation code Nyx (Almgren et al., 2013; Lukić et al., 2015). The simulation box has a large size of , required to capture the full extent of the proximity zone of hyperluminous quasars. The hydrodynamics is computed on a fixed grid of resolution elements and the same number of dark matter particles is used to compute the gravitational forces. This results in a resolution of per pixel, sufficient to resolve the H i forest at (Lukić et al., 2015). The simulation runs make no use of adaptive mesh refinement since the H i forest signal originates from the majority of the volume (Lukić et al., 2015). Refining the resolution in the dense regions at the expense of underdense regions is therefore not beneficial for our case. Also, since the prime objective of the simulation is IGM science, no star or galaxy formation prescriptions were included. The simulations were run using a homogeneous, optically thin UV background with photoionization and heating rates from Haardt & Madau (2012). As described below, we rescale the H i photoionization rates to closely match the observed mean transmission but keep the thermal structure unchanged. We have simulation outputs available at redshifts , 2.5, 3.0, 3.5, 4.0, 4.5 and 5. Depending on the desired foreground quasar redshift, we take the snapshot closest in redshift and extract density, velocity and temperature along skewers. These will later be post-processed to simulate the observed H i Ly transmission along background sightlines.
For the host dark matter halos of the foreground quasars we choose massive halos, consistent with the minimum mass of quasar host halos indicated by clustering studies (e.g. Richardson et al., 2012; White et al., 2012). As described in more detail in Sorini et al. (2018), halos in the Nyx simulations are identified by finding topologically connected components above 138 times the mean density (Lukić et al., in prep.). This gives similar results to the particle-based friends-of-friends algorithm (Davis et al., 1985). From the Nyx halo catalog we select all halos above this mass limit and randomly draw halos from this subset.
3.1.1 Sightline Pattern and Skewer Extraction
Given the selected halos, skewers are extracted along one of the grid axes using the sightline pattern illustrated in Figure 6. The whole pattern is rescaled, i.e. stretched or compressed in the radial direction, to match the desired average sightline density (see Figure 5), and sightlines with a transverse separation larger than the adopted field-of-view are discarded. Along the line-of-sight (i.e. in velocity space), we center the skewers on the halo position in redshift space, taking the peculiar velocity of the halo into account. With the redshift of the foreground quasar as the origin, we assign individual redshifts to every pixel along the skewers.
To better represent redshift evolution of the density field, we rescale the density of each pixel according to
ρ(z) = ρ_sim × [ (1 + z) / (1 + z_sim) ]³    (3)
We convert from simulated cosmic baryon density to hydrogen number density using a primordial hydrogen abundance of 76% (Coc et al., 2015). The temperature and velocity fields are taken directly from the simulation box without any change.
3.2. Background Photoionization Rates
Apart from the ionizing radiation of the foreground quasar, we adopt a spatially uniform UV background. Oñorbe et al. (2017) presented an empirical relation for the cosmic mean transmitted H i flux fitted to existing measurements (Fan et al., 2006; Becker et al., 2007; Kirkman et al., 2007; Faucher-Giguère et al., 2008; Becker et al., 2013) of the form
⟨F⟩(z) = exp[ −τ_eff(z) ]    (4)
where τ_eff denotes the effective optical depth and z the redshift. For each available simulation snapshot, we determine the mean transmission in a large set of random skewers and iteratively adjust the homogeneous H i UV background until the mean transmission matches the relation from Oñorbe et al. (2017). We interpolate the values determined at the fixed snapshot redshifts using a cubic spline to obtain a smooth function. This allows us to assign to each pixel the appropriate H i UV background matched to its redshift.
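The iterative adjustment can be implemented as a simple root find, since in photoionization equilibrium the Lyα optical depth scales inversely with the H i photoionization rate. A minimal sketch; the lognormal optical-depth distribution below is a toy stand-in for simulated skewers:

```python
import math
import random

def mean_flux(tau_list, A):
    """Mean transmitted flux after scaling all optical depths by A."""
    return sum(math.exp(-A * t) for t in tau_list) / len(tau_list)

def rescale_uvb(tau_list, F_target, n_iter=40):
    """Bisect for the optical-depth scale factor A (tau ~ 1/Gamma_UVB in
    photoionization equilibrium) such that <exp(-A*tau)> = F_target."""
    lo, hi = 1e-3, 1e3
    for _ in range(n_iter):
        mid = math.sqrt(lo * hi)  # bisect in log space
        if mean_flux(tau_list, mid) > F_target:
            lo = mid  # transmission too high -> strengthen absorption
        else:
            hi = mid
    return math.sqrt(lo * hi)

# toy example: lognormal optical depths, target mean transmission 0.70
random.seed(0)
tau = [random.lognormvariate(0.0, 1.5) for _ in range(20_000)]
A = rescale_uvb(tau, 0.70)
print(A, mean_flux(tau, A))
```

Since the mean flux is strictly monotonic in the scale factor, the bisection always converges; in practice one would iterate over skewers from each snapshot rather than a toy distribution.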
3.3. Foreground Quasar Photoionization Rates
Based on the assumed apparent magnitude of the foreground quasar and adopting the Lusso et al. (2015) template for the spectral energy distribution of quasars, we compute the quasar luminosity L_\nu, and from this the flux density at the background sightline according to
F_\nu(R) = \frac{L_\nu}{4\pi R^2} \, e^{-R/\lambda_{\rm mfp}} . \qquad (5)
Here, R denotes the proper 3-D distance from the foreground quasar to a specific position along the background sightline and \lambda_{\rm mfp} is the mean free path to H i ionizing photons. We ignore IGM absorption by setting \lambda_{\rm mfp} = \infty, which is appropriate for the redshifts and scales we probe. (At higher redshifts, \lambda_{\rm mfp} becomes comparable to the size of the proximity region (Worseck et al., 2014) and IGM absorption can no longer be neglected.)
We assume the quasar spectra to be of power-law shape with slope \alpha beyond the Lyman limit (Lusso et al., 2015) and for simplicity also treat the spectral dependence of the hydrogen photoionization cross-section as a power law of the form \sigma_\nu = \sigma_0 \, (\nu/\nu_0)^{-\beta}. Using the cross-sections at the ionization edges from Verner et al. (1996a) leads to the H i photoionization rate
\Gamma^{\rm QSO}_{\rm HI} = \int_{\nu_0}^{\infty} \frac{F_\nu \, \sigma_\nu}{h\nu} \, d\nu = \frac{F_{\nu_0} \, \sigma_0}{h \, (\alpha + \beta)} , \qquad (6)
where h denotes Planck's constant and \nu_0 is the frequency of the H i ionization edge, i.e. h\nu_0 = 13.6 eV.
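The closed-form integral is easy to verify numerically. The sketch below substitutes x = \nu/\nu_0 and checks the dimensionless integral against the analytic result; the slope values and the edge cross-section (sigma0 ~ 6.3e-18 cm^2 at 13.6 eV) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative (assumed) inputs: spectral slope alpha, cross-section slope
# beta, H I cross-section at the ionization edge, and an arbitrary flux
# normalization F_nu0 at the edge frequency.
alpha, beta = 1.7, 2.75
sigma0 = 6.3e-18        # cm^2
F_nu0 = 1e-21           # erg s^-1 cm^-2 Hz^-1
h = 6.626e-27           # erg s (Planck constant)

# Gamma = int_{nu0}^inf F_nu sigma_nu / (h nu) dnu with power laws
# F_nu = F_nu0 (nu/nu0)^-alpha and sigma_nu = sigma0 (nu/nu0)^-beta.
# Substituting x = nu/nu0 gives (F_nu0 sigma0 / h) * int_1^inf x^-(a+b+1) dx,
# which evaluates to F_nu0 sigma0 / (h (alpha + beta)).
gamma_analytic = F_nu0 * sigma0 / (h * (alpha + beta))

integral, _ = quad(lambda x: x ** -(alpha + beta + 1.0), 1.0, np.inf)
gamma_numeric = F_nu0 * sigma0 / h * integral
```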
The photoionization of He ii by the quasar might also have an effect on the thermal structure of the IGM (Bolton et al., 2009, 2010, 2012). However, proper treatment of this thermal proximity effect would require radiative transfer calculations (Meiksin et al., 2010; Khrykin et al., 2017), which is beyond the scope of this study.
3.3.1 Finite Quasar Age
The above calculation implicitly assumes isotropic emission and a quasar luminosity that is constant for all times. In this section, we relax the latter assumption and compute which parts of the background sightlines are illuminated by a foreground quasar of a given finite age.
A background sightline at transverse distance probes the foreground quasar's emission at earlier times than the light we directly receive from the quasar (see e.g. Adelberger, 2004; Kirkman & Tytler, 2008; Furlanetto & Lidz, 2011; Schmidt et al., 2017, 2018). This arises from the fact that the geometric path length from the foreground quasar to a location along the background sightline, and from there to the observer (as probed in absorption by the background sightline), is longer than the direct path from the foreground quasar to Earth. The total comoving path length is composed of the comoving distance from Earth to a point on the background sightline at redshift z and from there to the foreground quasar. The sum of both can be converted to a redshift and corresponding lookback time at which the ionizing radiation from the foreground quasar had to be emitted.
This lookback time at emission can be compared to the lookback time corresponding to the redshift of the foreground quasar. The difference \Delta t is the additional time it takes the quasar's radiation to first reach a certain point on the background sightline. If one neglects cosmic expansion (which we do not do in practice but do here for the sake of illustration), this simplifies to
\Delta t = \frac{a(z_{\rm QSO})}{c} \left( \sqrt{R_\perp^2 + R_\parallel^2} + R_\parallel \right) , \qquad (7)
with R_\perp and R_\parallel denoting the transverse and line-of-sight comoving separations from the quasar, c the speed of light, and a(z_{\rm QSO}) the cosmic scale factor at the quasar redshift. Curves of constant time difference \Delta t are thus parabolas. In Figure 7 we give a detailed illustration of this behavior, showing \Delta t along four background sightlines that pass by a foreground quasar at a range of transverse separations.
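A minimal sketch of this geometry, with comoving separations in Mpc and the result in years; the function name and unit choices are ours.

```python
import numpy as np

C_KMS = 299792.458        # speed of light [km/s]
KM_PER_MPC = 3.0857e19
SEC_PER_YR = 3.156e7

def time_difference(r_perp, r_par, z_qso):
    """Extra light-travel time (years) probed at comoving transverse and
    line-of-sight separations r_perp, r_par [Mpc] from the quasar,
    neglecting cosmic expansion over the echo:
        dt = a(z_qso)/c * (sqrt(r_perp^2 + r_par^2) + r_par).
    Positive r_par points away from Earth (behind the quasar)."""
    a = 1.0 / (1.0 + z_qso)
    path_mpc = np.sqrt(r_perp ** 2 + r_par ** 2) + r_par
    return a * path_mpc * KM_PER_MPC / C_KMS / SEC_PER_YR
```

For example, a sightline at 10 comoving Mpc transverse separation from a z = 3.5 quasar probes, at the quasar's line-of-sight position (r_par = 0), emission from roughly 7 Myr before the light we receive directly.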
Since any point on a background sightline probes the quasar luminosity at a (different) earlier time than we observe the quasar today, we have to specify a quasar lightcurve L(t). For this we assume a simple lightbulb model of the form
L(t) = L_0 \left[ \theta(t + t_{\rm age}) - \theta(t - t_{\rm off}) \right] , \qquad (8)
in which L_0 is the currently observed quasar luminosity and the two terms are Heaviside step functions that turn the quasar on and off. That is, we define the time at which the photons we observe today on Earth were emitted as t = 0. In this model, the quasar turned on at t = -t_{\rm age} and therefore we currently observe it at the age t_{\rm age}. Its total lifetime is t_{\rm Q} and it will turn off at time t_{\rm off} = t_{\rm Q} - t_{\rm age}. Since we observe the quasar on today, turn-off will happen at some point t_{\rm off} > 0, i.e. in the future. The diagram in Figure 8 illustrates these various times and aids visualization.
A certain point on a background sightline with time difference \Delta t probes L(-\Delta t). Therefore, points on a background sightline for which \Delta t < t_{\rm age} appear to an observer on Earth to be illuminated by the foreground quasar. Since \Delta t increases monotonically along the line of sight, i.e. with R_\parallel or z (see Figure 7), all points beyond the dividing surface where \Delta t = t_{\rm age} appear not yet illuminated, simply because there was not enough time for the photons to reach these locations.
There is of course the possibility to assume different and probably more realistic quasar lightcurves. Ly\alpha forest tomography should in principle be able to map the full emission history and to constrain quasar variability on Myr timescales. For the moment, however, we adopt the most basic lightbulb model and will explore the potential to constrain more complex quasar lightcurves in a future paper.
3.3.2 Quasar Obscuration
In contrast to the established AGN unification paradigm (e.g. Antonucci, 1993; Urry & Padovani, 1995; Elvis, 2000; Netzer, 2015; Assef et al., 2013; Lusso et al., 2013; Padovani et al., 2017), which suggests that all quasars are obscured from some vantage points, for the purposes of this paper we assume that quasars emit isotropically. The method we present here is in principle capable of determining the quasar emission geometry as well as the emission history. However, modeling the quasar radiation, e.g. as a bi-conical emission pattern caused by an obscuring torus, adds a substantial amount of complexity to the analysis. Such a non-isotropic quasar emission model requires three more parameters: two angles describing the orientation of the quasar emission cone or bi-cone, and an opening angle setting the amount of obscuration. We will demonstrate in a future paper how these parameters can also be inferred from tomographic observations, but for the sake of simplicity we focus in this paper solely on quasar age and assume isotropic quasar emission.
3.4. Ionization State of Hydrogen
Based on the temperature, velocity, and cosmic baryon density extracted from the Nyx simulation boxes, and adopting quasar and UV background photoionization rates as described above, we solve for the hydrogen ionization state. At the redshifts we have in mind for this experiment, hydrogen in the IGM is highly ionized by the metagalactic UV background (e.g. Haardt & Madau, 2012; Planck Collaboration et al., 2018). We therefore assume ionization equilibrium, and that the IGM instantaneously adjusts to a change in the photoionization rate. This is well justified since the equilibration timescale for H i at these redshifts is short compared to the timescales of interest.
For calculating the H i density we follow, as in Schmidt et al. (2018), the approach described in Rahmati et al. (2013). We take the total ionization rate as the sum of photoionization and collisional ionization. For the photoionization we include the self-shielding prescription from Rahmati et al. (2013), in which the effective photoionization rate in high-density regions is substantially reduced. For the collisional ionization we adopt the prescription by Theuns et al. (1998). We tie the fractions of helium in the He i and He ii states to the hydrogen ionization state by simply assuming that helium follows hydrogen. Given the similar ionization energies, this is justified and a common assumption. We ignore the double ionization of helium for this study, since it adds unnecessary complications and has no substantial effect on the results. We adopt the Case A recombination coefficients from Storey & Hummer (1995). This is appropriate since H i is highly ionized and the IGM is optically thin on the relevant scales. With these inputs, the hydrogen ionized fraction can be easily computed.
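The equilibrium computation can be illustrated with a stand-alone sketch. Note that it uses a simple power-law approximation to the Case A recombination coefficient rather than the Storey & Hummer (1995) tables, and neglects collisional ionization, self-shielding, and electrons from helium; all numerical choices here are illustrative assumptions.

```python
import numpy as np

def neutral_fraction(n_h, temp, gamma):
    """Equilibrium H I fraction x = n_HI/n_H from photoionization balance
        n_HI * Gamma = alpha_A(T) * n_e * n_HII,
    taking n_e = n_HII (electrons from helium neglected) and a power-law
    approximation to the Case A recombination coefficient.
    Inputs: n_h [cm^-3], temp [K], gamma [s^-1]."""
    alpha_a = 4.18e-13 * (temp / 1e4) ** -0.72      # cm^3 s^-1, approximate
    # With x = n_HI/n_H the balance becomes a quadratic:
    #   alpha*n_h*x^2 - (gamma + 2*alpha*n_h)*x + alpha*n_h = 0 .
    a = alpha_a * n_h
    b = -(gamma + 2.0 * a)
    return (-b - np.sqrt(b * b - 4.0 * a * a)) / (2.0 * a)
```

In the highly ionized limit this reduces to the familiar x \approx \alpha_A n_H / \Gamma, so the neutral fraction simply scales inversely with the total photoionization rate.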
3.5. Computing Synthetic Spectra
After determining the H i density along the skewers as stated above, the next step in our modeling procedure is to create synthetic spectra. For each pixel along the skewers we compute an individual Voigt absorption line profile with the strength, line width and velocity shift corresponding to the physical conditions in that pixel. Oscillator strengths are taken from Verner et al. (1996b). We benefit here from the high resolution of the Nyx box, which is sufficient to resolve the Ly\alpha forest. Redshift space distortions (peculiar velocities) are included by displacing the absorption profile with the line-of-sight velocity from the Nyx simulation. Thermal broadening is computed according to b = \sqrt{2 k_B T / m_H} for the Doppler parameter, with T denoting the gas temperature in a pixel and m_H the atomic mass of hydrogen. The Lorentzian scale parameter is based on the transition probability from Verner et al. (1996b). The final transmission spectrum at a pixel in redshift space is the combination of all absorption profiles along the skewer.
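The per-pixel profile computation can be sketched via the Faddeeva function, which gives the Voigt function as H(a, u) = Re[w(u + ia)]; the normalization tau0 and the Lorentzian width are placeholder inputs, not values from the paper.

```python
import numpy as np
from scipy.special import wofz

KB = 1.381e-16     # Boltzmann constant [erg/K]
M_H = 1.673e-24    # hydrogen atom mass [g]

def voigt_tau(v_kms, temp, tau0, gamma_kms=0.01):
    """Optical-depth profile of one pixel's absorber on a velocity grid.
    Doppler parameter b = sqrt(2 k_B T / m_H); the Voigt function is
    evaluated as H(a, u) = Re[w(u + i a)] with the Faddeeva function.
    tau0 and the Lorentzian width gamma_kms are illustrative placeholders."""
    b_kms = np.sqrt(2.0 * KB * temp / M_H) / 1e5   # thermal b [km/s]
    u = np.asarray(v_kms) / b_kms                  # velocity in Doppler units
    a = gamma_kms / b_kms                          # damping parameter
    return tau0 * wofz(u + 1j * a).real
```

At T = 10^4 K this gives b \approx 12.8 km/s, i.e. the thermal width of a typical forest line is well below the instrumental resolution considered later.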
3.6. Error Forward Model
Finally, we degrade these idealized spectra to account for observational effects by forward modeling finite spectral resolution, continuum errors, and photon-counting and instrumental noise.
3.6.1 Resolution
We convolve the spectra with a Gaussian line spread function of appropriate width to simulate the finite resolution of the spectrograph utilized, parametrized by its resolving power R. We also rebin the spectra, which results in Nyquist sampling for the adopted resolving powers.
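This two-step degradation (Gaussian LSF of FWHM c/R, then averaging pixels into coarser bins) can be sketched as follows; the sampling values used in the example are arbitrary.

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]

def smooth_and_rebin(flux, dv_pix, resolving_power, dv_bin):
    """Convolve a uniformly sampled spectrum (dv_pix km/s per pixel) with a
    Gaussian LSF of FWHM c/R, then average groups of pixels into
    dv_bin-wide output bins."""
    sigma_pix = C_KMS / resolving_power / 2.3548 / dv_pix   # FWHM -> sigma
    half = int(5.0 * sigma_pix)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()                                   # unit area
    smoothed = np.convolve(flux, kernel, mode="same")
    n = int(round(dv_bin / dv_pix))                          # pixels per bin
    m = (len(smoothed) // n) * n
    return smoothed[:m].reshape(-1, n).mean(axis=1)
```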
3.6.2 Adding Noise to the Spectra
We add random Gaussian noise to the rebinned spectra to mimic the desired signal-to-noise ratio. As already stated in §2.4, we specify the signal-to-noise per resolution element to keep the measure independent of spectral resolution. The actual S/N per pixel will be lower by a factor of \sqrt{n_{\rm pix}}, with n_{\rm pix} denoting the number of bins or pixels that sample a resolution element. (For extremely low spectral resolution, n_{\rm pix} might be less than one and the S/N per pixel in fact higher.) This procedure makes the exposure time required to reach a certain S/N (approximately) independent of the adopted spectral resolution.
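The conversion follows from averaging independent Gaussian noise: combining n pixels improves the S/N by \sqrt{n}, so each individual pixel is noisier by that factor. A one-line sketch (the velocity widths in the example are placeholders, not the paper's adopted values):

```python
import numpy as np

def snr_per_pixel(snr_per_resel, dv_resel, dv_pix):
    """Convert a S/N quoted per resolution element of width dv_resel [km/s]
    into the S/N of a single pixel of width dv_pix [km/s]:
    n = dv_resel/dv_pix pixels average down as sqrt(n)."""
    n = dv_resel / dv_pix
    return snr_per_resel / np.sqrt(n)
```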
3.6.3 Continuum Error
In addition to a random uncertainty per pixel due to photon-counting and instrumental noise, there is a more systematic effect related to uncertainties in the continuum fitting. This effect could have a potentially severe impact on the measured forest transmission and therefore on the reconstruction of the quasar light echo. We thus forward model continuum fitting uncertainties following the prescription given in Krolewski et al. (2018). Our final forward-modeled transmission is of the form
F_{\rm obs} = F_{\rm sim} + \delta_N + \delta_C , \qquad (9)

where F_{\rm sim} is the simulated transmission while \delta_N and \delta_C are random Gaussian deviates with zero mean and standard deviations \sigma_N and \sigma_C. The first term, \delta_N, is related to the photon-counting and instrumental noise already mentioned above and is drawn for each bin individually. The standard deviation \sigma_N is related to the desired S/N and is, for the adopted bin width, approximately
(10) |
The second term corresponds to the continuum uncertainty and is identical for all bins along the same background sightline. Following Lee (2012), Krolewski et al. (2018) presented an empirical relation for the continuum uncertainty as a function of the S/N of the data. Converting the given relation to the sampling used in this study, we obtain
(11) |
which we will use throughout the paper to model continuum uncertainties.
4. Simulated Observations of Quasar Light Echoes
In what follows we briefly discuss the results of our simulations and try to build some intuition about the appearance of the H i quasar proximity effect in 3D. In Figure 9 we show a two-dimensional slice through our simulation box for a luminous foreground quasar. Without the foreground quasar, the IGM has a mean transmission set by the UV background, with substantial scatter around this value due to cosmic density fluctuations. The ionizing radiation of the quasar increases the ionization state of hydrogen and pushes the H i transmission close to unity in its immediate vicinity. Even out to larger scales of several tens of megaparsec, a clear enhancement in the H i forest transmission is visible. The region of enhanced transmission has a clear boundary towards higher redshift (positive R_\parallel, right in Figure 9), caused by the finite speed of light and the finite age of the quasar. The parabola-shaped surface corresponding to \Delta t = t_{\rm age} marks the boundary between the illuminated and non-illuminated regions. For all points to the right (higher redshift, larger \Delta t), the quasar has not shone long enough to make these regions appear illuminated for an observer on Earth.
However, while the sharp boundary of the quasar light echo exists in real space, this will not be the case in redshift space. A comparison of the top two panels in Figure 9 illustrates the impact of redshift space distortions on the light echo structure. The large-scale velocity field displaces the apparent position of absorption features in a non-trivial way and causes the quasar light echo to be in some regions less and in other regions more extended in redshift space than in real space. This spatial distortion is related to the large-scale density field, since these density fluctuations source the bulk velocity flows. This can most easily be seen close to a large overdensity behind the quasar (upper half of the plot) that appears to drag the quasar light echo to the right, towards the overdensity.
In principle, forest tomography delivers a map of the large-scale density structure, which might allow one to derive a model of the peculiar velocities and remove at least some part of these distortion from the tomographic map. However, due to the complex nature of the redshift space distortion, we do not undertake such reconstructions here. Peculiar velocities will therefore result in a form of correlated noise for the characterization of quasar light echoes.
We quantify the amount of redshift space distortions in the map shown in Figure 9 and find that the distribution of line-of-sight velocities is approximately Gaussian. Although redshift space distortions are correlated and therefore not directly comparable to the effects of finite spectral resolution, the width of this velocity distribution corresponds approximately to that of a moderate resolving power. Taking observations with substantially higher spectral resolution should therefore be of little benefit. However, we explore this dependence more thoroughly in §6.6.
In Figure 10 we illustrate the time evolution of the proximity zone. Here, we again show the same slice through the simulation box as in Figure 9 but compute the ionization state and Ly\alpha forest transmission for a sequence of different quasar ages. This demonstrates how the quasar light echo expands into the IGM and how an increasingly large region around the quasar appears to be illuminated as seen from Earth.
4.1. IGM Transmission Statistics
In Figure 11 we illustrate the flux probability distribution of the pixels within our tomographic map. For three background sightlines at different transverse separations, we show the expected median transmission in wide bins as well as the 16th–84th percentile region. At low redshift (Figure 11, left) it is difficult to determine whether a specific part of a background sightline is illuminated by the quasar. Although the ionizing radiation from the foreground quasar dominates over the UV background, the expected median transmission for the illuminated case overlaps with the 84th percentile of the distribution for the unilluminated case in nearly all bins. A significant difference between the illuminated and unilluminated cases exists only very close to the quasar, where the cosmic overdensity of the host halo causes excess absorption in the absence of the quasar's ionizing radiation, whereas the transmission is high if the quasar illuminates this region. Therefore, it will be very challenging to detect and characterize quasar light echoes at such low redshifts.
The situation substantially improves at higher redshift. Toward the upper end of our redshift range (Figure 11, right), the mean IGM transmission drops considerably while the expected transmission in the illuminated case remains basically the same. Given that the IGM scatter only slightly increases, the 16th–84th percentile regions of the two distributions have basically no overlap, and one can in principle infer with high confidence, for any individual bin along a background sightline, whether that bin is illuminated by the foreground quasar. This higher contrast between illuminated and unilluminated parts of the IGM at high redshift makes a detection of the proximity effect far easier than at low redshift, however at the expense of a reduced background sightline density (see Figure 5).
5. Inferring Parameters from Quasar Light Echoes
Inferring quasar properties from tomographic observations requires a statistical comparison of the observed data to a set of models. The analysis scheme we develop for this task has to be able to cope with several challenges.
First, it has to combine and jointly fit the information from all transmission measurements along all background sightlines. This can, depending on the quasar redshift and limiting magnitude of the observations, result in up to twenty thousand individual measurements per tomographic map.
Second, the statistical analysis has to keep track of the correlations between individual measurements and cope with the intrinsically non-Gaussian transmission distributions in individual bins. This is of particular concern in the illuminated parts of the background sightlines, which have transmissions close to 100% (see Figure 11). Also, the measured IGM transmission has a very non-linear behavior with respect to the model parameters: roughly speaking, a given spatial position switches between two nearly binary states, depending on whether that part of the sightline is illuminated by the foreground quasar or not.
Third, we require the analysis to be fully Bayesian. This will allow us to deduce posterior probabilities for the inferred parameters and to determine meaningful confidence intervals. Additionally, it is desirable to have an analysis method that can handle degeneracies between model parameters. This is technically not necessary for the current paper, since we focus on inferring a single parameter, the quasar age, but our goal for the future is to generalize the modeling and inference to enable joint fits for quasar age, quasar orientation, and quasar obscuration. Particularly for short quasar ages and high obscuration, one expects significant degeneracies between parameters.
Our approach solves the issues outlined above using a set of dedicated models of the proximity effect and a Bayesian scheme employing so-called likelihood-free inference.
5.1. Transmission Probability Distribution Functions
For a given foreground quasar, proximity effect models are created as described in §3, based on outputs of a cosmological hydrodynamical simulation postprocessed with a quasar photoionization model. We compute a model grid in quasar age t_{\rm age} with 16 logarithmically spaced values. For each of these models, we create 100 different model realizations which have the same quasar properties (i.e. redshift, luminosity, and age) and employ the same sightline pattern (illustrated in Figure 6) but are centered on different host halos and therefore have different IGM density, velocity and temperature structures. These 100 independent model realizations are necessary to properly characterize the stochasticity of the IGM absorption. We forward model observational effects like finite spectral resolution, signal-to-noise ratio and continuum fitting errors to make the model outputs directly comparable to observed spectra. Each spectrum extends over a large region around the foreground quasar's position, and we bin the spectra in chunks of fixed length.
For each chunk we obtain a kernel density estimate (KDE) of the probability distribution function (PDF) of the transmission values in the chunk, based on the 100 independent realizations of that model. This results in smooth functions p_{ij}(F \,|\, \mathbf{M}) that describe the probability of measuring an IGM transmission F in bin (i, j), given the model parameters \mathbf{M}. Here, the index i identifies the different background sightlines within the sightline pattern while j denotes a certain chunk along the sightline. As long as we only focus on the quasar age, the parameter vector \mathbf{M} has only one component, t_{\rm age}. However, we still denote it as a vector since this approach allows for an easy and straightforward generalization of the inference to additional parameters like quasar obscuration or orientation. Following this, each model of the H i proximity effect is fully described by a set of transmission probability functions p_{ij}(F \,|\, \mathbf{M}).
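The construction of these smooth transmission PDFs can be sketched with a Gaussian KDE. The mock "realizations" below are random stand-ins for the 100 simulated transmissions per chunk; in the real analysis each (sightline, chunk, age) triple gets its own KDE built from the simulation outputs.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Mock stand-in for the simulation output: 100 transmission values in one
# chunk for each model on a (here tiny) grid of quasar ages.  The Gaussian
# draws only emulate IGM scatter; real PDFs come from the 100 realizations.
ages = [1e6, 1e7, 1e8]
samples = {t: np.clip(rng.normal(0.4 + 0.1 * np.log10(t / 1e6), 0.1, 100),
                      0.0, 1.0)
           for t in ages}

# One KDE per (chunk, model); evaluating it gives p_ij(F | t_age).
pdfs = {t: gaussian_kde(samples[t]) for t in ages}
p_obs = float(pdfs[1e7](0.55)[0])   # density at an observed transmission
```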
5.2. Likelihood Computation
Due to the high dimensionality of the observable, i.e. several thousand individual transmission measurements, determining the likelihood poses a very challenging task. The usual approach of approximating the likelihood as a multi-variate Gaussian is inadequate for our problem since the individual transmission PDFs are not well described by Gaussians. In addition, determining the significant correlations between all elements of the data vector would likely require an excessive number of model realizations. Therefore, we follow a likelihood-free approach for which we never have to actually write down an analytic form of the likelihood function.
In Schmidt et al. (2018) we solved a similar problem. There, however, the dimensionality of the problem was low (at most three transmission measurements and two model parameters), which made it feasible to simply map the full likelihood space by brute-force sampling. An approach like this would be completely infeasible for our current case. A fully Bayesian treatment is only achievable if the dimensionality of the problem can be drastically reduced. We do this by first computing a so-called pseudo-likelihood, which acts in many ways like a proper likelihood except that it ignores correlations between the individual transmission measurements. In a second step, we then map this pseudo-likelihood to a proper posterior probability distribution. Our approach is in many ways inspired by Alsing et al. (2018) and Davies et al. (2018b), but is customized to the problem at hand and differs in many ways from either of these strategies.
For a given set of observed IGM transmissions F_{ij}, we define the pseudo-likelihood as

\tilde{L}(\mathbf{F} \,|\, \mathbf{M}) = \prod_{i,j} p_{ij}(F_{ij} \,|\, \mathbf{M}) . \qquad (12)
Therefore, we evaluate the transmission probability function of each chunk at the observed transmission level and compute the product of these probabilities. If the individual bins were uncorrelated, this pseudo-likelihood would indeed represent the true likelihood function. However, since this is in general not the case, the pseudo-likelihood will not result in the correct parameter uncertainties and may also produce biased results.
We therefore only use it to find the parameter vector \mathbf{M}_{\rm max} that maximizes the pseudo-likelihood. This acts as a data compression and reduces the dimensionality of the data (up to several thousand measurements) to the dimensionality of the model parameter space (a few parameters, or in our current case only one). During this process some information might be lost; however, \mathbf{M}_{\rm max} does retain the essence of the information contained in the data, and an approach like this has proven to be a rather efficient and successful data compression algorithm (Davies et al., 2018b).
Since we only have models available for a set of discrete parameter values, the maximization process described above requires interpolation between models. We do this by interpolating the logarithm of the transmission probabilities, \log p_{ij}, evaluated at the specific observed value of F_{ij}, using a simple quadratic interpolation scheme.
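The grid maximization with quadratic refinement can be sketched as follows; it assumes a uniformly spaced grid in log t_age and fits a parabola through the best grid point and its two neighbours.

```python
import numpy as np

def maximize_pseudo_likelihood(log_age_grid, logp_grid):
    """Refine the maximizer of the summed log transmission probabilities,
    log L~(t) = sum_ij log p_ij(F_ij | t), given its values on a uniformly
    spaced grid in log t_age: take the best grid point and fit a parabola
    through it and its two neighbours."""
    k = int(np.argmax(logp_grid))
    if k == 0 or k == len(logp_grid) - 1:
        return log_age_grid[k]              # maximum sits on a grid edge
    y0, y1, y2 = logp_grid[k - 1], logp_grid[k], logp_grid[k + 1]
    denom = y0 - 2.0 * y1 + y2              # curvature, negative at a max
    if denom == 0.0:
        return log_age_grid[k]
    step = log_age_grid[k + 1] - log_age_grid[k]
    # vertex of the parabola through three equally spaced points
    return log_age_grid[k] + 0.5 * step * (y0 - y2) / denom
```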
Compressing the dimensionality of the observable to the dimensionality of the model as described above now allows a fully Bayesian treatment of the problem. This requires that we determine the mapping between our summary statistic \mathbf{M}_{\rm max} and the true parameter vector \mathbf{M}, which means we require the conditional probability distribution P(\mathbf{M} \,|\, \mathbf{M}_{\rm max}). The technical feasibility of this approach was demonstrated by Alsing et al. (2018), although based on a different data compression scheme. Using Bayes' theorem, the conditional probability distribution can be written as
P(\mathbf{M} \,|\, \mathbf{M}_{\rm max}) = \frac{P(\mathbf{M}_{\rm max} \,|\, \mathbf{M}) \, P(\mathbf{M})}{P(\mathbf{M}_{\rm max})} . \qquad (13)
Here P(\mathbf{M}) is our prior on the model parameters, which we assume to be flat in the logarithm of the quasar age, and P(\mathbf{M}_{\rm max}) is the evidence, which is basically just a normalization. Since we have a generative model that can create mock data realizations for a given parameter set, we can relatively easily determine P(\mathbf{M}_{\rm max} \,|\, \mathbf{M}). The low dimensionality of the problem makes it computationally feasible to approximate this distribution by simply computing samples. In practice, we draw 1600 parameter values from the prior, compute model realizations for these values that yield mock measurements, and straightforwardly determine \mathbf{M}_{\rm max} for each realization. The latter is done by evaluating the transmission probabilities for each of the 1600 mock realizations, computing the pseudo-likelihood in Equation 12, and finding the value that maximizes it.
The resulting distribution of the 1600 samples is shown in Figure 12. As one can see, most samples are located relatively close to the 1:1 relation. Clearly, the \mathbf{M}_{\rm max} that maximizes the pseudo-likelihood is a good proxy for the maximum of the true likelihood. The width of the distribution around the 1:1 relation is a measure of the width of the posterior and therefore of the uncertainty in the parameter estimate.
We note, however, that in addition to this general scatter around the 1:1 relation, there are some outliers and artifacts in the distribution. Where our method is less sensitive (see Figure 11), the pseudo-likelihood maximizer has the tendency to run towards the upper boundary of the model grid. This effect happens mostly at low redshift (high mean IGM transmission) and long quasar ages, and has in general very little impact on our analysis. Also, the maximizer has a slight tendency to pick values that lie exactly on the model grid. This behavior is related to the relatively simple scheme we use for interpolation between the discrete models. Using a more sophisticated interpolator (e.g. Gaussian process interpolation, Habib et al. 2007; Walther et al. 2018a, b) would likely eliminate this issue. For now, however, we monitor this behavior and ensure that, even where this artifact appears, it does not negatively impact our mapping.
To derive proper posterior probabilities, we require a smooth and continuous version of this distribution. We therefore apply a kernel density estimate to the 1600 computed samples, which yields slightly better results than the Gaussian mixture model employed by Alsing et al. (2018). We show the resulting interpolated distribution as gray shading in Figure 12. Note that, given our flat prior, this smoothed distribution is proportional to the joint probability of \mathbf{M} and \mathbf{M}_{\rm max}. Slicing the now smooth distribution at a specific value of \mathbf{M}_{\rm max} and re-normalizing the result to unity finally yields the proper posterior probability P(\mathbf{M} \,|\, \mathbf{M}_{\rm max}).
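The mapping step can be sketched end-to-end with mock samples: draw ages from a flat prior in log t, emulate the scatter of the pseudo-likelihood maximizer with Gaussian noise (an assumption for illustration only), build the joint KDE, and slice it at a measured value.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Mock version of the mapping: 1600 ages from a flat prior in log t, and
# the pseudo-likelihood maximizer emulated by adding Gaussian scatter
# (a stand-in for running the full analysis on 1600 mock realizations).
log_t_true = rng.uniform(6.0, 8.5, 1600)
log_t_max = log_t_true + rng.normal(0.0, 0.15, 1600)

joint = gaussian_kde(np.vstack([log_t_true, log_t_max]))   # smooth joint PDF

def posterior(log_t_max_obs, grid):
    """Slice the smooth joint distribution at the measured maximizer and
    re-normalize to unit integral (flat prior assumed)."""
    p = joint(np.vstack([grid, np.full_like(grid, log_t_max_obs)]))
    return p / (p.sum() * (grid[1] - grid[0]))

grid = np.linspace(6.0, 8.5, 250)
post = posterior(7.5, grid)
```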
Having set up our transmission probabilities for a set of discrete parameter values and determined the posterior via the procedure described above, we can, for any simulated mock or observed set of IGM transmissions, first determine the parameter vector \mathbf{M}_{\rm max} that maximizes the pseudo-likelihood and then convert this into a proper posterior probability P(\mathbf{M} \,|\, \mathbf{M}_{\rm max}). In this way, we have a powerful and computationally feasible method to derive posterior probabilities in a fully Bayesian way that includes all effects related to correlations and non-Gaussianities, without the requirement to write down a likelihood function. It simply relies on the fact that we have a generative model capable of completely forward modeling mock observations.
6. Results
In the following, we present the constraints Ly\alpha forest tomography can impose on the quasar age. To validate that our complex statistical analysis works as expected, we create mock observations, analyze them with the fitting scheme described above, and derive realistic posterior probabilities. Furthermore, we conduct an extensive parameter study to explore how the precision of the inferred quasar age depends on the properties of the foreground quasar, namely its redshift and UV luminosity, as well as on the observational setup. The latter is defined by the spectral resolution, the covered field-of-view, and the limiting magnitude for achieving the desired S/N in a given exposure time. In consequence, these parameters also define the average sightline density and the number of sightlines that probe the proximity region of the foreground quasar. Determining the dependencies between these various parameters will be essential for choosing the optimal survey strategy for this project.
6.1. Simulation Grid
We create models of the 3D proximity effect for foreground quasars over a range of redshifts. The fiducial setup assumes a fixed limiting magnitude for the background galaxies. For background sources of this brightness, existing multi-object spectrographs on 8–10 m class telescopes (e.g. Keck/DEIMOS) should in good conditions achieve the required S/N (see Figure 4 in §2.4). As shown in Figure 5, the same limiting magnitude corresponds, depending on the foreground quasar redshift, to different average sightline separations. In Table 2, we list the adopted values together with the number of background sightlines that fall into our field of view. We adopt the sightline pattern shown in Figure 6. Table 2 also lists the adopted luminosity of the brightest available quasars at these redshifts (see Figure 2).
For each of the models in Table 2, we randomly select halos from the simulation box, defining 100 independent sets of temperature, density and line-of-sight velocity along the background skewers. Each sightline pattern created in this way is then processed with our photoionization model as described in §3, assuming 16 different, logarithmically spaced quasar ages. The background sightline spectra are binned in chunks, resulting in 131 independent transmission measurements per sightline. This defines, for each model listed in Table 2 and each quasar age, a set of transmission probabilities.
To realize the mapping between the pseudo-likelihood maximizer and the true quasar age shown in Figure 12, we compute a second set of models adopting the same parameters as listed in Table 2. However, instead of simulating 16 discrete ages with 100 IGM realizations each, we compute 1600 realizations for which we draw the quasar age randomly from our prior and pair it with a randomly selected halo above the minimum halo mass. As stated in §5.2, we adopt a prior that is flat in the logarithm of the quasar age.
For validation of our method, we create a further set of models that act as mock observations. For these we adopt a set of fixed quasar ages and choose random halos above the minimum halo mass. All other parameters are identical to the ones listed in Table 2. For each quasar age, we compute 25 different IGM realizations. All models described above are post-processed to mimic the desired spectral resolution, signal-to-noise ratio and continuum uncertainties (see §3.6.3).
After computing these various kinds of models, we have everything together for an end-to-end test of our method. For this, we apply the statistical analysis to the mock observations and infer posterior probabilities for the quasar age, following the procedures described in §5.
In this fitting process, we assume perfect knowledge of the foreground quasar redshift and do not include any uncertainties on this quantity. Given the overall expense of the tomographic observations and the extreme luminosity of the foreground quasar, it would in reality add only insignificant additional effort to obtain highly precise redshift estimates, e.g. from the [O iii] line using infrared spectroscopy, or alternatively from the CO or [C ii] fine-structure lines in the sub-mm regime.
We also assume perfect knowledge of the UV background and the quasar's ionizing emissivity. In practice this means that the models we use for fitting have exactly the same mean transmission and quasar ionizing emissivity as the mock observations. The mean IGM transmission is relatively well known (to better than 2%, Becker et al. 2013), and the uncertainties are probably dominated by the statistical fluctuations within the map. The quasar luminosity in the non-ionizing UV () is directly observable and should be known to very high precision. The extrapolation from there to the ionizing regime adds some uncertainty due to the a priori unknown quasar SED. However, variations in the quasar spectral slope translate into only moderate uncertainties in the ionizing flux, e.g. about based on Lusso et al. (2015). In addition, we would preferentially select target quasars that have confirmed flux beyond the Lyman limit (), which could give additional constraints on the quasar SED. In the future, we intend to include the (possibly time dependent) quasar luminosity in the analysis procedure and fit for the quasar ionizing flux.
Note. – Parameters are chosen to keep the total observing time approximately constant, e.g. 3 with Keck/DEIMOS.
6.2. Example Posteriors
A set of posterior probabilities derived in the described way for a model with is shown in Figure 13. The figure clearly shows that our method works well and yields satisfactory estimates for the quasar age. The posterior probabilities are localized and centered near the true values. In most cases, the true value lies well within the confidence interval (16th – 84th percentile), and there are very few cases in which the derived estimate deviates substantially from the true value. Averaging the 25 individual posterior probabilities of each model gives an estimate of the achieved accuracy. This also shows that our method yields unbiased results. A slight exception from this might be the case, which approaches the highest quasar ages that can be constrained with forest tomography. At such long , as can be seen in Figure 10, the edge of the proximity region starts to blend smoothly into the IGM and the proximity region extends far beyond the adopted FoV. The exact behavior depends, however, on the luminosity of the quasar and the mean transmission of the IGM, and therefore on redshift. Also note that the cut-off of the posterior probabilities towards high might to some degree be artificial, since our model grid only extends up to ; beyond this point the results should be treated as lower limits. In general, this test demonstrates that forest tomography is indeed able to constrain quasar ages in a precise and reliable fashion.
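The averaging of the 25 realizations can be sketched as follows. The posteriors here are mock Gaussians on a log-age grid, and the widths and realization-to-realization scatter are illustrative numbers, not values from our simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock posteriors for 25 IGM realizations of one model, all centered
# near the true age; grid, widths, and scatter are illustrative.
t_grid = np.linspace(0.0, 2.0, 200)      # log10(t_Q / Myr)
dt = t_grid[1] - t_grid[0]
true_log_t = 1.0

posteriors = []
for _ in range(25):
    center = true_log_t + rng.normal(0.0, 0.05)   # realization scatter
    p = np.exp(-0.5 * ((t_grid - center) / 0.06) ** 2)
    posteriors.append(p / (p.sum() * dt))          # normalize each PDF

# Averaging the individual posteriors probes accuracy and bias:
# an unbiased method should peak at the true age.
mean_posterior = np.mean(posteriors, axis=0)
peak = t_grid[np.argmax(mean_posterior)]
```

If the peak of the stacked posterior sits at the input age, the estimator is unbiased in the sense used above.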
Figure 13 also reveals that, at least to first order, all seven models show a similar width of the posterior probabilities, more or less independent of the true age of the quasar. Since the axis in Figure 13 is scaled logarithmically, this translates to an approximately constant relative uncertainty . This general behavior can also be seen in the mapping from to which defines the width of the derived posterior probabilities. The distribution shown in Figure 12 has approximately constant width around the 1:1 relation. However, there is some dependence on the quasar age which we discuss in more detail below.
6.3. Dependence on
To quantify the precision of the derived estimate, we define the relative uncertainty as the 16th–84th percentile interval of the posterior probability, parametrized as a function of , which yields .
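One way to compute such a percentile-based relative uncertainty is sketched below. The mock posterior, the factor-of-two convention for the half-width, and the small-width conversion from dex to a linear fraction are our illustrative choices, not the paper's exact expression.

```python
import numpy as np

rng = np.random.default_rng(7)

# Mock posterior samples in log10(t_Q); width of 0.06 dex is illustrative.
log_t_samples = rng.normal(loc=1.0, scale=0.06, size=100_000)

# 16th-84th percentile interval of the posterior in log-age.
p16, p84 = np.percentile(log_t_samples, [16, 84])
half_width_dex = 0.5 * (p84 - p16)

# For small widths, a half-width of w dex corresponds to a relative
# uncertainty of roughly ln(10) * w in linear t_Q.
rel_uncertainty = np.log(10.0) * half_width_dex
```

For the 0.06 dex mock posterior this gives a relative uncertainty of roughly 14%, comparable to the precisions quoted below.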
Figure 14 shows this relative precision, derived from individual posteriors averaged over the 25 realizations per model, as a function of quasar age. As one can see, the highest relative precision, around 10%, is achieved for young quasars (). For very long quasar ages, similarly small uncertainties around 15% can be reached. At intermediate ages around , the precision is only around 20%. However, this depends on the quasar redshift. Quantitatively understanding the origin of the dependence of the precision on quasar age is not trivial, but we believe it is related to a combination of the smearing effect of redshift space distortions and the geometry of the region illuminated by the quasar.
As shown, the quality of the derived constraints depends slightly on the adopted foreground quasar redshift. For low quasar redshifts, e.g. the case, the achieved precision is in general not as good as for , and deteriorates in particular for long quasar ages.
6.4. Dependence on
In Figure 15, we explore the redshift dependence of our method in more detail. In what follows, we keep the limiting magnitude for achieving fixed at , which results in an approximately constant exposure time of around 10 ks (see Figures 4 and 5). We also assume the same fixed FoV of . As already outlined in §2, one has to find a compromise between sampling the quasar light echo with many background sightlines at low and the overall stronger proximity effect at higher redshift, owing to the lower average IGM transmission and thus increased contrast in the proximity zone (see Figure 11).
Figure 15 indicates that the latter effect clearly dominates. For , we achieve an age precision better than 20%, nearly independent of redshift. Below this, however, the precision degrades substantially, despite sampling the quasar proximity zone with up to 216 background sightlines at . The deterioration is particularly dramatic for long quasar ages, for which the transmission enhancement caused by the quasar's ionizing radiation would have to be detected at large distances from the quasar, where the correspondingly small transmission enhancement becomes indistinguishable from the average (high) IGM transmission, given the relatively large stochastic fluctuations. As already discussed in §4.1 and shown clearly in Figure 11, redshifts are not well suited to map quasar light echoes, and one should in general aim for higher redshifts where the mean IGM transmission is lower.
At very high quasar redshifts () the average separation between sightlines becomes comparable to the size of our adopted field-of-view and the number of contributing background sightlines is rather low (see Table 2). The rigid sightline pattern we adopt in our analysis (see Figure 6) is not optimized for this regime. This discretization in background sightline density causes undesired jumps and wiggles in the curves shown in Figure 15, e.g. at .^8 In the low sightline density regime, a random placement of background sightlines in the FoV would clearly be more appropriate.
^8 At , and two rings of our pattern fit into the FoV, resulting in . At , the average sightline separation is and the quasar proximity region is sampled by only 6 background sightlines.
A better assessment of the performance of our method at and beyond might therefore require a slightly different approach and a dedicated study. This might reveal that the method can be pushed to even higher redshifts. However, one has to note that in this study we model the proximity effect with a rather simple model that has only one free parameter. Therefore, a single background sightline theoretically delivers sufficient information to fully constrain the model. However, our previous studies of the He ii transverse proximity effect (Schmidt et al., 2017, 2018) showed that measurements along single background sightlines, even in the case of low mean IGM transmission and high contrast, are often not sufficient to deliver strong and unique constraints on quasar properties once quasar age, obscuration and orientation effects are taken into account. We expect that in general a higher number of background sightlines is necessary to constrain models more complex than the one we consider here. We will address these questions in more detail in a future paper.
6.5. Dependence on S/N
In §2.4 we argued that a relatively low is sufficient for our analysis, since the stochastic IGM absorption by itself introduces a substantial amount of noise into the transmission measurement. In this section, we quantify this effect and determine the actual dependence of our parameter inference on the data quality. We therefore take the models listed in Table 2 and re-compute them with different . The associated continuum error is adjusted as well, following the procedure described in §3.6.3.
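The degradation step can be sketched as follows. This is a simplified stand-in for our noise model: the constant per-pixel noise at a target S/N and the 10% continuum scatter are assumed values for illustration, not the procedure of §3.6.3.

```python
import numpy as np

rng = np.random.default_rng(3)

def degrade_spectrum(flux, snr, continuum_rms=0.10, rng=rng):
    """Degrade a noiseless mock transmission spectrum to a target S/N per
    pixel and apply a multiplicative continuum error (one draw per
    spectrum). Both ingredients are illustrative simplifications."""
    noisy = flux + rng.normal(0.0, 1.0 / snr, size=flux.size)
    continuum_factor = 1.0 + rng.normal(0.0, continuum_rms)
    return noisy / continuum_factor

flux = np.full(1000, 0.8)          # flat mock transmission spectrum
obs = degrade_spectrum(flux, snr=5.0)
```

At S/N = 5 the per-pixel noise (0.2 in transmission) is comparable to the stochastic IGM fluctuations themselves, which is why higher S/N yields diminishing returns.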
The dependence on the achieved shows some diversity for different quasar redshifts and quasar ages. We therefore show a broad selection of curves in Figure 16. The top panel varies at fixed , illustrating different redshifts; the bottom panel fixes the redshift to and varies the quasar age. In general, one sees that for , the achieved precision quickly deteriorates, whereas above the curves flatten, indicating that the uncertainty on the inferred quasar age depends only very weakly on the data quality. Increasing the of the data slightly improves the precision, but this gain is so small that it is in practice probably not worth acquiring data with .
Our generally adopted value of lies, for all , well within the flat region of the curves in Figure 16, where the precision of the parameter inference is not limited by the of the data. It is therefore worthwhile to consider substantially relaxing the requirement on data quality, e.g. to , which might yield only slightly inferior results at approximately one fourth of the exposure time. However, the data quality still has to be good enough to properly identify the objects as high-redshift background galaxies and determine their redshifts, which in practice often requires for sources that do not have strong emission lines.
6.6. Dependence on Spectral Resolution
Another aspect of the observing strategy is the required spectral resolution. As argued in §2.3, tomography will at some point be limited by the peculiar velocities in the IGM, and in §4 we determined that these redshift space distortions have an amplitude of , indicating that spectral resolving powers of the order of should be sufficient. In Figure 17 we show that the actual requirement is even lower. Provided , we observe an extremely weak dependence of the resulting age precision on the resolution. Even below this, the achieved precision is only moderately impacted.
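The effect of finite resolving power can be emulated by convolving a mock spectrum with a Gaussian line-spread function of width FWHM = c/R. This is a generic sketch; the pixel scale, R value, and mock absorption feature are illustrative assumptions.

```python
import numpy as np

# Gaussian line-spread function for resolving power R; pixel scale assumed.
c_kms = 299792.458
R = 1000.0
fwhm_kms = c_kms / R                       # ~300 km/s resolution element
pix_kms = 30.0                             # assumed pixel size
sigma_pix = fwhm_kms / (2.355 * pix_kms)   # convert FWHM to sigma in pixels

x = np.arange(-25, 26)
kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
kernel /= kernel.sum()                     # unit-area kernel

spectrum = np.ones(500)
spectrum[240:260] = 0.2                    # a mock absorption feature
smoothed = np.convolve(spectrum, kernel, mode="same")
```

Because the transmission is averaged over large chunks anyway, such smoothing barely affects the inference, consistent with the weak resolution dependence seen in Figure 17.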
Therefore, spectral resolution is essentially of no concern for characterizing quasar light echoes. Nearly every multi-object spectrograph should deliver sufficient resolution, even in low-resolution modes. Resolving powers of already come close to the regime of slitless grism or prism spectroscopy. It might be possible to benefit from these observing strategies for forest tomography without a significant penalty on the achieved precision.^9 However, the adopted spectral resolution has to be good enough to identify the objects as high-redshift background galaxies. In any case, our adopted fiducial resolving power of is more than sufficient.
^9 We stress, however, that slitless spectroscopy suffers from substantially higher sky noise compared to slit spectroscopy. If at all, slitless spectroscopy is therefore only an option for space-based observations.
6.7. Dependence on Sightline Density
A further key parameter for the tomographic mapping of quasar light echoes is the density of background sightlines. Clearly, this depends on the limiting magnitude of the observations and the redshift of the foreground quasar. The exact relation between these parameters is illustrated in Figure 5.
To explore how our precision depends on the sightline density, we compute another set of models for which we vary the limiting magnitude of the observations. This alters the average separation of sightlines and, since we keep the FoV constant, the number of sightlines. For simplicity, we only conduct this exercise for a quasar redshift of . The exact details of these models are given in Table 3.
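The bookkeeping that links these quantities can be sketched simply. The surface density and FoV values below are placeholders, not numbers taken from Figure 5 or Table 3; for a roughly uniform sightline distribution the mean separation scales as the inverse square root of the surface density.

```python
import numpy as np

# Placeholder values: sightline surface density reached at a given limiting
# magnitude, and an assumed square FoV.
sigma = 0.5                              # sightlines per arcmin^2 (illustrative)
fov_arcmin2 = 20.0 * 20.0                # assumed FoV area in arcmin^2

# Number of sightlines in the FoV and their typical separation.
n_sightlines = sigma * fov_arcmin2
avg_separation = 1.0 / np.sqrt(sigma)    # arcmin, for uniform coverage
```

Going deeper in limiting magnitude raises sigma, shrinking the average separation and refining the sampling of the light echo at fixed FoV.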