A new Measurement of the Intergalactic Temperature at
Abstract
We present two measurements of the temperature-density relationship (TDR) of the intergalactic medium (IGM) using a sample of 13 high-quality quasar spectra and high-resolution numerical simulations of the IGM. Our approach is based on fitting the neutral hydrogen column density N_HI and the Doppler parameter b of the absorption lines in the Lyα forest. The first measurement is obtained using a novel Bayesian scheme which takes into account the statistical correlations between the parameters characterising the lower cutoff of the b–N_HI distribution and the power-law parameters T_0 and γ describing the TDR. This approach yields constraints on T_0 and γ that are independent of the assumed pressure smoothing of the small-scale density field. In order to explore the information contained in the overall b–N_HI distribution rather than only the lower cutoff, we obtain a second measurement based on a similar Bayesian analysis of the median Doppler parameter for separate column-density ranges of the absorbers. In this case we obtain constraints on T_0 and γ in good agreement with the first measurement. Our Bayesian analysis reveals strong anti-correlations between the inferred T_0 and γ for both methods, as well as an anti-correlation of the inferred T_0 and the pressure smoothing length for the second method, suggesting that the measurement accuracy can in the latter case be substantially increased if independent constraints on the smoothing are obtained. Our results are in good agreement with other recent measurements of the thermal state of the IGM probing similar (over)density ranges.
keywords:
intergalactic medium – quasars: absorption lines

1 Introduction
The intergalactic medium (IGM), containing the overwhelming majority of the Universe’s baryons, retains key information about the cosmic transformations that occurred during helium and hydrogen reionisation (e.g., Hui & Gnedin, 1997; Gnedin & Hui, 1998; Theuns et al., 2001; Theuns et al., 2002; Hui & Haiman, 2003). Many authors have studied the signature of thermal heating caused by the ionizing photons in the absorption profiles of the Lyα forest, with the goal of probing the temperature-density relation (TDR) of the IGM (e.g. Haehnelt & Steinmetz, 1998; Schaye et al., 1999; McDonald et al., 2001; Zaldarriaga et al., 2001; Lidz et al., 2010). These works have been motivated by the simple form that the low-density TDR should take according to theoretical predictions, and by the potential implications for the history of reionization. Analytical studies and hydrodynamics simulations have indicated that the low-density gas in the IGM should be concentrated in a narrow region of the temperature-density plane along a power law
T = T_0 Δ^(γ − 1)        (1)
where T_0 is the temperature at mean density, Δ is the density divided by the mean density of the Universe and γ − 1 is the index of the power-law relation (Hui & Gnedin, 1997). The shape of the TDR at different redshifts is dependent on the timing of reionization and on the nature of the sources and physical mechanisms responsible for the heating. If photoheating of residual neutral hydrogen is the dominant heat source, then γ is predicted to asymptote towards ≃ 1.6 well after reionization (Hui & Gnedin, 1997; McQuinn & Upton Sanderbeck, 2016).
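As a concrete illustration of eqn. 1, the relation can be evaluated directly; the numerical values below are illustrative placeholders, not measurements from this work:

```python
import numpy as np

def tdr_temperature(delta, T0, gamma):
    """Temperature-density relation T = T0 * Delta^(gamma - 1) (eqn. 1)."""
    return T0 * np.asarray(delta, dtype=float) ** (gamma - 1.0)

# Illustrative values only: at mean density (Delta = 1) the relation
# returns T0 by construction; for gamma > 1 overdense gas is hotter.
T_mean = tdr_temperature(delta=1.0, T0=1.2e4, gamma=1.5)
T_over = tdr_temperature(delta=4.0, T0=1.0e4, gamma=1.5)
```

Note that for an inverted TDR (γ < 1, discussed below) the same expression gives temperatures that decrease with increasing density.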
Table 1. The 13 quasars used and the characteristics of the reduced spectra.

Object  z_em  Δz  Source  S/N  Pixel size  ESO program/reference

PKS 2126−158  3.28  0.0939  UVES  40–90  2.5 km s⁻¹  166.A-0106(A)
Q0347−383  3.23  0.1485  UVES  50–55  2.5 km s⁻¹  68.B-0115(A)
J134258−135559  3.21  0.2545  UVES  30–50  2.5 km s⁻¹  68.A-0492(A)
HS1425+6039  3.18  0.2356  HIRES  55–75  0.04 Å  Sargent
Q0636+6801  3.17  0.3152  HIRES  35–60  0.04 Å  Sargent
J210025−064146  3.14  0.1356  HIRES  20–25  1.3 km s⁻¹  KODIAQ, O’Meara et al. (2015)
Q0420−388  3.12  0.2884  UVES  75–120  2.5 km s⁻¹  166.A-0106(A)
HE0940−1050  3.08  0.2910  UVES  35–70  2.5 km s⁻¹  166.A-0106(A)
GB1759+7539  3.05  0.1859  HIRES  27–32  2.0 km s⁻¹  Outram et al. (1999)
J013301−400628  3.02  0.1929  UVES  25–35  2.5 km s⁻¹  69.A-0613(A), 073.A-0071(A), 074.A-0306(A)
J040718−441013  3.02  0.0962  UVES  40–115  2.5 km s⁻¹  68.A-0361(A), 68.A-0600(A), 70.A-0017(A)
J224708−601545  3.00  0.3293  UVES  35–75  2.5 km s⁻¹  075.A-0158(B)
HE2347−2547  2.88  0.2205  UVES  95–125  2.5 km s⁻¹  077.A-0646(A), 166.A-0106(A)
Although several statistical analyses of the Lyα forest find values of γ close to this prediction (Rudie et al., 2012; Bolton et al., 2014; Boera et al., 2016), other measurements which consider the flux probability distribution function (PDF) have claimed evidence for an inverted (γ < 1) TDR (Bolton et al., 2008; Viel et al., 2009; Calura et al., 2012; Garzilli et al., 2012), for which unconventional heating mechanisms such as blazar heating have been invoked as an explanation (Broderick et al., 2012; Chang et al., 2012; Pfrommer et al., 2012; Puchwein et al., 2012; Lamberts et al., 2015). Some authors have suggested that this discrepancy could be ascribable to unaccounted systematic uncertainties due to the “continuum fitting” of QSO absorption spectra necessary for the calculation of the flux PDF (Lee, 2012), or to an overestimation of the statistical significance of the measurements (Rollinde et al., 2013). An alternative solution to reconcile the apparent discrepancies between the measurements and the expected thermal state of a photoheated IGM was proposed in Rorai et al. (2017b), who analyzed the PDF of the low-opacity pixels in a very high signal-to-noise quasar spectrum in order to constrain the TDR in the low-density IGM. Rorai et al. (2017b) found that uncertainties in the continuum placement alone cannot explain the discrepancy with conventional models for the thermal state of the IGM. Instead, they found that a flat or inverse TDR (with high temperature in underdense regions) is indeed favoured by the PDF, though perhaps only at very low densities (Δ ≲ 1). They also showed that different Lyα forest statistics that give discrepant results, like the power spectrum or those based on line-fitting methods, are sensitive to disjoint density ranges. Rorai et al. (2017b) thus challenged the description of the low-density TDR as a single spatially-invariant power law.
To investigate the low-density TDR further, here we undertake a traditional Voigt-profile fitting decomposition of the Lyα forest absorption for a sample of 13 quasar spectra. We use a set of IGM models obtained by post-processing high-resolution hydrodynamics simulations and generate model spectra with the same noise and resolution characteristics. We apply the same Voigt decomposition to these spectra so that they may then be compared directly with the telescope data. Following Schaye et al. (1999, 2000); Ricotti et al. (2000); Rudie et al. (2012); Bolton et al. (2014), we analyse the shape of the cutoff for narrow lines in the plane defined by the column density N_HI and the line width b. Moreover, we introduce a Bayesian formalism to study not only the uncertainties on the inferred thermal parameters, including the pressure smoothing, but also their degeneracies. We further develop a new technique based on the medians of the b distribution for separate column-density ranges, in order to exploit the information contained in the bulk of the distribution in the b–N_HI plane.
This article is structured as follows. We start by presenting in § 2 the sample of quasar spectra we use in our analysis and how these data are treated, in particular with respect to metal contaminants. We then describe the hydrodynamics simulations and the models to which we compare the data (§ 3). In § 4 we explain how we analyse the statistical properties of lines in the Lyα forest to extract information about the thermal properties of the IGM. The results of these analyses are illustrated in § 5 and subsequently discussed in § 6, where we also examine agreements and disagreements with previous studies. We draw our final conclusions in § 7.
2 Data
A sample of high signal-to-noise ratio quasar spectra from ESO Ultraviolet and Visual Echelle Spectrograph (Dekker et al., 2000, UVES) and Keck High Resolution Echelle Spectrograph (Vogt et al., 1994, HIRES) archival data with coverage of the Lyα forest in the chosen redshift range was selected. This redshift range is chosen to complement the reanalysis by Bolton et al. (2014) of the data presented by Rudie et al. (2012), which provided constraints on the TDR parameters at lower redshift. Note that the methods illustrated in this paper can in principle be applied to higher redshift data, but we found that at higher redshifts the stronger blending of lines makes the decomposition into Voigt profiles increasingly ambiguous. Requiring a large segment of the chosen absorption redshift range to be clear of the quasar proximity region (chosen to be within 4000 km s⁻¹ of the quasar redshift) sets a lower limit on the emission redshift, while the need to avoid blending with the Lyβ forest sets an upper limit. A list of the 13 objects used, and the characteristics of the reduced spectra, is given in Table 1. The exposures for the nine UVES spectra were reduced using the European Southern Observatory (ESO) UVES Common Pipeline Language software (v4.2.8) and combined using UVES_popler (Murphy, 2016), as described in detail in Boera et al. (2014). The two spectra kindly provided by W.L.W. Sargent were reduced using makee (Barlow, 2008). Continuum estimates in the Lyα forest are based on fitting low-order curves to the high points in the spectra.
Here we wish to compare the results of fitting Voigt profiles to the Lyα forest in the observational spectra using VPFIT (Carswell & Webb, 2014) with those from simulated data (see § 3), so we have to be aware of some features in the data which cannot be reproduced in the simulations, and of some restrictions the simulations may place on the way the observations are analyzed. These are:

The resolution of the object spectra is not accurately known, since the observations were not always slit limited, nor would the seeing have been constant. Here we assume a Gaussian resolution element with a full-width-half-maximum (FWHM) of 6.5 km s⁻¹, which is a reasonable approximation for both the HIRES and UVES data. Most features have Doppler parameters well above the instrumental width, and for sample inclusion we choose b ≥ 8 km s⁻¹ (corresponding to FWHM ≥ 13.3 km s⁻¹), so even 10% uncertainties in the instrumental FWHM do not make a significant difference. We therefore convolve all simulated spectra with a Gaussian kernel with FWHM = 6.5 km s⁻¹.
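The correspondence between a Doppler parameter and the FWHM quoted above follows from the Gaussian line profile exp(−v²/b²); a minimal check:

```python
import math

def fwhm_from_b(b_kms: float) -> float:
    """Convert a Doppler parameter b to the FWHM of the corresponding
    Gaussian line profile exp(-v^2 / b^2): FWHM = 2 * sqrt(ln 2) * b."""
    return 2.0 * math.sqrt(math.log(2.0)) * b_kms

# The b >= 8 km/s sample cut corresponds to FWHM of about 13.3 km/s,
# roughly twice the assumed instrumental FWHM of 6.5 km/s.
fwhm_cut = fwhm_from_b(8.0)
```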

The continuum estimates are based on large-scale high points in the observational data, and may be in error. VPFIT allows a linear continuum adjustment as a function of wavelength over the fitting region, and inserts this adjustment automatically when the overall fit accuracy comes down to a specified level. For consistency, this was used both for the observational data and for the simulations.

VPFIT occasionally introduces lines with very large Doppler parameters which appear to be better described as long-range continuum adjustments. To remove these we omitted features with Doppler parameters above an upper threshold.

The zero level may be offset by a small amount in the observed spectra, while it is accurately known in the simulated ones. A zero-level adjustment was introduced during the automatic fitting process when the normalized χ² rose above a set level and there were five or more contiguous pixels of the fitted profile below a specified fraction of the continuum value. The details are given in sections 6.5 and 7.4 of the VPFIT documentation (Carswell & Webb, 2014).

Heavy element lines contaminate the Lyα forest in the observational data, but are absent in the simulations. We identified as many as we could from systems which showed heavy element lines longward of the Lyα emission, and then chose wavelength regions within the forest to avoid the stronger ones. Heavy element lines are usually narrow, so the 8 km s⁻¹ threshold adopted for this analysis will remove most of the ones we have not identified. Also, in cases where metal lines are clustered in groups, they are still fitted as separate narrow components at the signal-to-noise ratios of our data sample.

The simulated spectra cover a fixed small range of just over 1000 km s⁻¹ (or 15.5 Å at redshift z = 2.75), and all lines in each of these spectra were fitted simultaneously. In the observed data, VPFIT was applied to regions of varying size, depending on the local line density and the positions of heavy element lines; these regions were chosen to be between 10 and 25 Å long.

The flux noise in the observational data is not constant from object to object, or even within an object spectrum, where it depends on the signal. The continuum-level noise, σ_c, may be estimated by interpolating between regions of the spectrum where there is little absorption, and the zero-level noise, σ_z, by doing so between saturated lines. For the simulations the noise at each pixel can then be set as a function of σ_c, σ_z and the transmitted flux F normalized by the continuum. We identified 53 spectral sections characterised by different (σ_c, σ_z) pairs. To account for this, the simulated sightlines are divided into 53 subsets with path lengths proportional to the path lengths of the 53 sections. To each of them we add flux-dependent Gaussian noise using the appropriate values of σ_c and σ_z.
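The noise-addition step can be sketched as follows; the linear interpolation in variance between the zero-level and continuum-level noise is our assumption for illustration, not necessarily the exact form used for the data:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_flux_dependent_noise(flux, sigma_c, sigma_z):
    """Add Gaussian noise whose amplitude interpolates between the
    zero-level noise sigma_z (at flux = 0) and the continuum-level
    noise sigma_c (at flux = 1). The interpolation in variance is an
    assumed form; the paper's exact prescription is not reproduced."""
    flux = np.asarray(flux, dtype=float)
    var = sigma_z**2 + np.clip(flux, 0.0, 1.0) * (sigma_c**2 - sigma_z**2)
    return flux + rng.normal(0.0, np.sqrt(var))
```

Applied to a simulated sightline, pixels near the continuum receive noise of amplitude σ_c while saturated pixels receive σ_z.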

Since flux noise is estimated on a pixel-by-pixel basis, we rebin the simulated flux in pixels of 2.5 km s⁻¹, which is the pixel scale used in most of the data sample. For simplicity, in the few cases where the pixel size of the data is not 2.5 km s⁻¹, σ_c and σ_z of the corresponding simulated sets are rescaled accordingly before adding noise. We have checked, for a sample of spectra, that this rescaling procedure produces fits consistent with those in which the same pixel size and noise level as the data were used.

VPFIT adds as many components as necessary until a satisfactory fit is obtained. However, it may fail to converge to an acceptable fit to either observational or model data, for example if there is a saturated line. Where this happens the fits from those regions are omitted. Model spectral regions were chosen with noise characteristics mimicking the observational ones with acceptable fits, and, in cases where convergence failed, different sightlines through the model were chosen until an acceptable fit was obtained.
All fitted components are characterized by their central redshift z, column density N_HI (in cm⁻²) and Doppler parameter b (in km s⁻¹), and VPFIT provides estimates and uncertainties for all these quantities. The observational data yielded 2271 fitted components over the total path length of the sample. A potential problem with the approach taken to avoid metal-line contamination is that CIV and MgII doublets may occur without counterparts redwards of the Lyα emission line and would then not be identified inside the forest. To assess the impact of these contaminants we have compiled a sample of lower-redshift quasars and identified CIV and MgII doublets lying in the range 4316–4802 Å, i.e. the range covered by the data (the same data were recently analysed by Kim et al., 2016). For consistency, we only consider doublets with no associated lines from lower ions (and longer-wavelength transitions), which would have been detected in the first place by our identification process. The Voigt-profile fit parameters from this sample are then used to add the opacity of these contaminants to the simulated spectra and estimate the impact on our results. We have checked that by doing so the results of this paper are not significantly affected, once our chosen cuts on b and N_HI are applied (see below).
3 Models
In order to predict the observed statistical properties of the Lyα forest, we used simulated spectra from the set of hydrodynamics simulations described in Becker et al. (2011, hereafter B11). The simulations were run using the parallel Tree-SPH (smoothed particle hydrodynamics) code GADGET-3, which is an updated version of the publicly available code GADGET-2 (Springel, 2005). The fiducial simulation volume is a 10 Mpc periodic box containing both gas and dark matter particles. This resolution is chosen specifically to resolve the Lyα forest at these redshifts (Bolton & Becker, 2009). The simulations were all started from initial conditions generated using the transfer function of Eisenstein & Hu (1999). The cosmological parameters are consistent with constraints from the cosmic microwave background (Reichardt et al., 2009; Jarosik et al., 2011). The IGM is assumed to be of primordial composition with a helium fraction by mass of 0.24 (Olive & Skillman, 2004). The gravitational softening length was set to 1/30th of the mean linear interparticle spacing, and star formation was included using a simplified prescription which converts all sufficiently overdense, cool gas particles into collisionless stars. In this work we will only use the outputs at z = 2.75.
The gas in the simulations is assumed to be optically thin and in ionization equilibrium with a spatially uniform ultraviolet background (UVB). The UVB corresponds to the galaxies-and-quasars emission model of Haardt & Madau (2001) (hereafter HM01). Hydrogen is reionized early in these models, and low-density gas subsequently follows a tight power-law temperature-density relation, T = T_0 Δ^(γ−1), where T_0 is the temperature of the IGM at mean density (Hui & Gnedin, 1997; Valageas et al., 2002). As in B11, the photoheating rates from HM01 are rescaled by different constant factors in order to explore a variety of thermal histories. Here we assume photoheating rates ε_i = ζ ε_i^HM01, where ε_i^HM01 are the HM01 photoheating rates for the species i ∈ {HI, HeI, HeII} and ζ is a free parameter. Note that, differently from B11, we do not consider models where the heating rates are density-dependent. In fact, we vary ζ with the only purpose of varying the degree of pressure smoothing in the IGM, while the TDR is imposed in post-processing. In practice, we only use the hydrodynamics simulations to obtain realistic density and velocity fields. For this reason, we will refer to ζ as the ‘smoothing parameter’. We then impose a specific temperature-density relationship on top of the density distribution, instead of assuming the temperature calculated in the original hydrodynamics simulation. This means that in our models the temperature is only a function of the density, strictly following eqn. 1 at all densities up to a maximum overdensity. As done in Rorai et al. (2017b), we set the temperature to be constant at higher densities, in order to avoid unphysically high values. Note however that such densities correspond to strongly saturated absorbers, which are not used in our analysis (see below). We opt for this strategy in order to explore a wide range of parametrizations of the thermal state of the IGM, at the price of reducing the temperature-density diagram of the gas to a deterministic relation between T and Δ.
In this work, we use a total of 107 models based on hydrodynamics simulations run with different values of the smoothing parameter ζ. The grid of parameters spans values between 0.4 and 1.9 for γ, together with a broad range of temperatures at mean density T_0.
Finally we calculate the optical depth to Lyα photons for a set of 1024 synthetic spectra in each model, assuming that the gas is optically thin and taking into account peculiar motions and thermal broadening. We scale the UV background photoionization rate in order to match the observed mean flux of the Lyα forest at the central redshift of the sample (Becker et al., 2013).¹
¹ In reality the average transmission of the Lyα forest evolves throughout this redshift bin. We have verified, however, that modelling the forest using a single value for the mean flux over this redshift range has only a small effect on the results when compared to the statistical uncertainties (see Appendix A).
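The mean-flux matching step can be sketched as a one-parameter root find: in the optically thin limit, rescaling the photoionization rate corresponds to rescaling the optical depths by a constant factor. Function and variable names are ours, and the target value is illustrative:

```python
import numpy as np

def rescale_tau(tau, target_mean_flux, tol=1e-6):
    """Find the scalar A such that <exp(-A * tau)> equals the target
    mean flux. This mimics rescaling the UV-background photoionization
    rate, since tau scales inversely with it in the optically thin
    limit. Simple bisection sketch with illustrative tolerances."""
    lo, hi = 0.0, 1.0
    # Expand the bracket until the solution is contained.
    while np.mean(np.exp(-hi * tau)) > target_mean_flux:
        hi *= 2.0
    # <F> is monotonically decreasing in A, so bisection converges.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(np.exp(-mid * tau)) > target_mean_flux:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The rescaled optical depths A·τ then reproduce the desired mean transmission by construction.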
We stress that in this scheme the pressure smoothing and the temperature are set independently. While not entirely physical, this allows us to separate the impact on the Lyα forest of the instantaneous temperature, which depends mostly on the heating at the current redshift, from that of pressure smoothing, which results from the integrated interplay between pressure and gravity across the whole thermal history (Gnedin & Hui, 1998).
3.1 Parameterization of the pressure smoothing
Varying the smoothing parameter ζ allows us to test the effect of different thermal histories on the structure of the IGM density field. In order to characterize it in a model-independent way, we adopt the definition proposed by Kulkarni et al. (2015) for the pressure smoothing length λ_P in hydrodynamics simulations. This is based on the real-space Lyα flux F_real(x), calculated as the transmitted flux of Lyα photons in the fluctuating Gunn-Peterson approximation
F_real(x) = exp[ −(3 λ_0³ Λ_α / 8π) n_HI(x) / H(z) ]        (2)
where λ_0 = 1215.67 Å is the rest-frame wavelength, Λ_α is the Einstein coefficient of the transition, H(z) is the Hubble parameter and n_HI(x) is the neutral hydrogen number density at the point x. Critically, F_real does not include the effects of thermal broadening or peculiar velocities on the transmitted flux. In the optically thin regime this is a non-linear transformation of the density field that suppresses high densities but preserves isotropy. In Kulkarni et al. (2015) it is shown that the power spectrum of F_real is sensitive to the thermal history of the low-density IGM, and is well approximated by the function
P_real(k) = A k^n exp[ −(k/k_F)² ]        (3)
where k is the Fourier wavenumber and A, n and k_F (from which the pressure smoothing length λ_P is derived) are the free parameters. We calculate the F_real field for our hydrodynamics simulations, using the output at z = 2.75. Consistently with what we have done for the post-processed models described in the previous section, we rescale the optical depths so that the mean flux in a set of synthetic spectra extracted from the simulations matches the observed value at the corresponding redshift. We obtain λ_P from the fit of the F_real power spectrum; this gives, for example, λ_P = 108 kpc for ζ = 1.45. Here and elsewhere in the paper we always express the smoothing length in comoving kpc.
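The fit itself can be sketched as a standard non-linear least-squares problem, assuming a fitting function of the power-law-times-Gaussian-cutoff form described above; the grid values and the convention relating λ_P to the inverse cutoff wavenumber are our assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_real(k, A, n, kF):
    """Assumed fitting form: power law with a Gaussian small-scale
    cutoff, P(k) = A * k**n * exp(-(k / kF)**2)."""
    return A * k**n * np.exp(-(k / kF) ** 2)

# Synthetic demonstration: recover the cutoff scale from noise-free
# mock power-spectrum points (all numbers are illustrative).
k = np.linspace(0.5, 20.0, 60)
P_mock = p_real(k, 2.0, -0.5, 10.0)
popt, _ = curve_fit(p_real, k, P_mock, p0=(1.0, 0.0, 5.0))
# One common convention takes the smoothing length as the inverse of
# the cutoff wavenumber (an assumption here, not taken from the text).
lambda_p = 1.0 / popt[2]
```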
4 Methodology
Simulated spectra for the various IGM models were produced to be compared with the data, and we derived Doppler parameters and column densities for the lines using VPFIT in exactly the same way as for the telescope data (see § 2 and § 3). The distribution of lines in the b–N_HI plane forms the basis for our statistical analysis. For both the data and the simulations there may be velocity structure which is not well represented by Voigt profiles. In the fitting process this can result in a number of non-physical components, which can be quite close together, or in blended, broad but weak lines. A feature of these is that the error estimates tend to be large as a consequence of the presence of neighbouring systems. An example can be seen in Fig. 1, showing the results of applying the algorithm to a simulated spectrum. To remove many of these, and to prevent the distributions being dominated by noise, we require the relative error estimate of the Doppler parameter to be smaller than 50%, and the error on log N_HI to be smaller than 0.3. Additionally, at high column densities the Lyα line is usually saturated to a level where the column densities derived from fitting only this transition are unreliable, so we exclude systems above a maximum column density from the analysis. Finally, for the highest noise levels in the sample the line detection limit sets the minimum column density we adopt for the comparison samples. To summarize, our statistical analysis is based on the absorption components with b ≥ 8 km s⁻¹, column density between these limits, Doppler-parameter relative error smaller than 50% and log column density error < 0.3. With these restrictions the observational data sample consists of 1625 points in the b–N_HI space. The line distribution for the data and for three different models is shown in Fig. 2.
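The sample cuts above can be expressed as a boolean mask over the fitted line lists; the column-density limits are left as optional arguments since their exact values are not reproduced in this extract:

```python
import numpy as np

def select_lines(b, db, logN, dlogN, b_min=8.0,
                 logN_min=None, logN_max=None):
    """Boolean mask implementing the sample cuts: Doppler parameter
    above the narrow-line threshold, relative b error < 50% and
    log N_HI error < 0.3. logN_min / logN_max stand in for the
    detection and saturation limits, whose values are not given here."""
    b, db = np.asarray(b), np.asarray(db)
    logN, dlogN = np.asarray(logN), np.asarray(dlogN)
    mask = (b >= b_min) & ((db / b) < 0.5) & (dlogN < 0.3)
    if logN_min is not None:
        mask &= logN >= logN_min
    if logN_max is not None:
        mask &= logN <= logN_max
    return mask
```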
4.1 The lower cutoff in the b–N_HI distribution
As noted previously (Schaye et al., 1999; Theuns et al., 2000; McDonald et al., 2001), the b–N_HI distribution shows a pronounced cutoff at low values of b. This is usually interpreted as a signature of thermal broadening setting a lower limit to the absorption line widths. The position of this cutoff depends on the column density, suggesting that the temperature systematically varies with the density of the gas. This has motivated several studies to use the slope and normalisation of the lower cutoff of the b–N_HI distribution as a direct probe of the TDR of the IGM. Pressure smoothing also broadens the lines by increasing the physical size of absorbers, which increases their velocity width due to the Hubble flow. This effect can be taken into account by means of theoretical/analytical arguments or hydrodynamics simulations (Schaye et al., 1999; Theuns et al., 2000; Garzilli et al., 2015).
The lower cutoff in the b–N_HI distribution is generally assumed to be a power-law relation between line width and column density,
b_cut(N_HI) = b_0 (N_HI / N_HI,0)^(Γ − 1)        (4)
where b_0 and Γ − 1 are the parameters connected with the TDR and N_HI,0 is a reference column density, which is often chosen as the column density corresponding to gas at the mean density (see Bolton et al., 2014, for a discussion). In Fig. 2 we show the fitted cutoff for our observed absorption-line sample and for samples of simulated absorption lines from three different thermal models. In our analysis we arbitrarily fix the value of N_HI,0, but we will relax the assumptions on the functional dependencies between the cutoff parameters and the thermal parameters. The standard algorithm used to fit this cutoff, which we also adopt, was first introduced by Schaye et al. (1999) and is based on a recursive rejection process: the expression in eqn. 4 is fitted to the line distribution. We then calculate the mean absolute deviation of the points from the fit in the b dimension, and discard all lines whose value of b exceeds the fitted relation by more than this deviation. The fit and the rejection of upper outliers are iteratively repeated until convergence, i.e. when all points lie below the upper mean deviation and the only outliers are below the fit. At that stage, the remaining lower outliers are discarded and the fit is repeated one last time to define the final values of b_0 and Γ. The errors associated with log b_0 (it is convenient to operate in logarithmic space) and Γ are estimated via a bootstrap technique applied to the line sample. In a first approximation we follow the standard practice of considering b_0 and Γ as statistically independent and treat them separately. Correlations between these parameters are addressed in the next section.
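The recursive rejection fit can be sketched as follows, in the spirit of Schaye et al. (1999); the rejection thresholds and convergence details are simplified assumptions rather than the exact procedure used in the paper:

```python
import numpy as np

def fit_cutoff(logN, b, logN0=13.0, max_iter=100):
    """Iterative lower-cutoff fit: fit log b = log b0 + (Gamma - 1) *
    (logN - logN0), discard points more than one mean absolute
    deviation ABOVE the fit, repeat until only lower outliers remain,
    then drop the lower outliers and refit. Thresholds are
    illustrative simplifications of the published algorithm."""
    x = np.asarray(logN, dtype=float) - logN0
    y = np.log10(np.asarray(b, dtype=float))
    keep = np.ones(len(x), dtype=bool)
    for _ in range(max_iter):
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        resid = y - (intercept + slope * x)
        mad = np.mean(np.abs(resid[keep]))
        upper = keep & (resid > mad)
        if not upper.any():
            break
        keep &= ~upper
    keep &= resid >= -mad            # finally drop the lower outliers
    slope, intercept = np.polyfit(x[keep], y[keep], 1)
    return 10.0**intercept, slope + 1.0   # b0 at logN0, and Gamma
```

Applied to a mock line sample with one-sided scatter above a power-law envelope, the routine recovers the envelope parameters approximately.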
One may notice from Fig. 2 that the lines fitted in the data sample (left panel) present more outliers below the cutoff compared to the models (right three panels). To understand what could generate this difference, we visually inspected all individual absorbers in the data significantly narrower than the cutoff. Among 29 outliers, we found 7 that are compatible with being unidentified metal lines and another 7 which are absorbers associated with metals at the same redshift. The latter are metal-enriched systems whose temperature might be affected by metal cooling, which is not accounted for in our models. We have tested the effect of removing these 14 lines and reapplying the cutoff-fitting procedure. This did not significantly change the estimated cutoff parameters, but it reduced the estimated uncertainties by about 20%. For this reason we consider keeping all the outliers to be the most conservative choice.
In the published literature, the relations between the narrow-line cutoff parameters and the thermal parameters (T_0 and γ, and in this work also λ_P) were based on theoretical arguments set out by Schaye et al. (1999; see also Rudie et al., 2012) or calibrated with hydrodynamics simulations (Bolton et al., 2014). Here we will first make the ansatz that such relations can be written in the form
log b_0 = β_0 + β_1 log T_0        (5)
Γ − 1 = η_0 + η_1 (γ − 1)        (6)
where the coefficients are determined using the full set of models described in § 3 (see Fig. 3). Analogously to previous works, we for now assume no dependence on the pressure smoothing length λ_P, which for standard hydrodynamics simulations implicitly assumes that the thermal history is correct and hence consistently taken into account. Note that these relations are slightly different from what is usually assumed in other works. We will verify the validity of this ansatz a posteriori using the model grid (see Fig. 3).
In the next section we discuss how we relax these assumptions in order to allow a more general relation between the parameters describing the lower cutoff of the b–N_HI distribution and the parameters describing the thermal state and history of the IGM, including the smoothing length λ_P.
4.2 Generalising the calibration of the cutoff parameters
The relations described by eqns. 5 and 6 are based on the heuristic argument that a renormalization of the TDR changes the line widths at all column densities by a similar proportion, at least for lines that fall near the cutoff. Although Fig. 3 suggests that such an approximation is reasonable, we would like to consider the possibility of more general dependencies in order to find a scheme that quantifies the effect of the pressure smoothing on the line distribution.
We approach this problem by starting with the assumption that a generic observable O can be approximated by a linear combination of the logarithms of the relevant parameters. More precisely, we define the combination G as
G(T_0, γ, λ_P) = a_0 + a_1 log T_0 + a_2 γ + a_3 log λ_P        (7)
where a_0, a_1, a_2 and a_3 are free parameters. The choice of using logarithmic quantities (except for γ, which is itself an index) is equivalent to assuming a power-law relation of O with T_0 and λ_P. Note that changing the units in which T_0 and λ_P are expressed would only affect the offset a_0. In general, O could be a more general function of the parameters than a linear relation, so the validity of this approximation will need to be verified a posteriori. If G is not a good description of the observable, or if there are statistical uncertainties in the models (e.g. due to finite box size), it is natural to expect some scatter. This can be accounted for by introducing some flexibility in choosing the coefficients.
A convenient way of doing so is to define a likelihood for fitting O with G and to include it in a Markov chain Monte Carlo (MCMC) algorithm run in the space of the coefficients. This can be combined with a likelihood of the measured observable when compared to the value of G for a given choice of the coefficients and of the thermal parameters. This allows us to draw quantitative inferences on the calibration coefficients and on the thermal parameters at the same time.
In our analysis the total likelihood is therefore composed of two parts. The first part quantifies how well the parameter combination defined by G fits the points in the training grid of models. Given the observable O, this can be written as
ln L_cal = −(1/2) Σ_i [G(T_0^(i), γ^(i), λ_P^(i)) − O_i]² / σ_i²        (8)
where the sum is performed over all the models in the grid, and O_i and σ_i are the observable and its uncertainty in the i-th model. An MCMC run which employs L_cal as likelihood suffices to estimate the posterior distribution of the calibration coefficients, i.e. the range of parameter combinations which can be used to relate the observable to the IGM parameters via eqn. 7. In principle, this distribution could be used as a prior for a second MCMC run which includes the thermal parameters. However, we find it more practical to combine L_cal with a likelihood in which the same values of the coefficients are employed to make a prediction for O and test it against the data. Assuming the error is Gaussian, this can simply be expressed as
ln L_data = −(1/2) [G(T_0, γ, λ_P) − O_data]² / σ_data²        (9)
where O_data and σ_data are estimated from the data. The two parts of the log-likelihood can then be summed to obtain the total likelihood, which we use to run an MCMC in the 7-dimensional space of the calibration and thermal parameters {a_0, a_1, a_2, a_3, T_0, γ, λ_P}.
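A compact sketch of the total log-likelihood of eqns. 8 and 9 for a scalar observable is given below; the array layouts and function signature are our own, not the paper's implementation:

```python
import numpy as np

def log_like_total(a, theta, grid_params, grid_obs, grid_sigma,
                   obs_data, sigma_data):
    """Sum of the calibration log-likelihood (eqn. 8, over the model
    grid) and the data log-likelihood (eqn. 9) for a scalar observable.
    a = (a0, a1, a2, a3) are the calibration coefficients of eqn. 7 and
    theta = (T0, gamma, lambda_P) the thermal parameters."""
    def G(T0, gamma, lam):
        # Linear combination in log T0, gamma and log lambda_P (eqn. 7).
        return a[0] + a[1]*np.log10(T0) + a[2]*gamma + a[3]*np.log10(lam)

    T0g, gg, lg = grid_params            # arrays over the model grid
    lnL_cal = -0.5*np.sum((G(T0g, gg, lg) - grid_obs)**2 / grid_sigma**2)
    lnL_dat = -0.5*(G(*theta) - obs_data)**2 / sigma_data**2
    return lnL_cal + lnL_dat
```

An MCMC sampler would then explore the 7 parameters (a_0 … a_3, T_0, γ, λ_P) with this function as the log-posterior (up to priors).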
This scheme is computationally inexpensive and achieves several goals:

It generalizes the simple power-law calibration assumed in eqn. 5.

It finds the parameter combinations to which the observable is sensitive, providing a way to quantitatively express its degeneracy direction.

The uncertainty of this calibration can be determined based on the scatter of the simulated points and their uncertainties.

It returns the constraints on the thermal parameters, marginalizing over the uncertainty on the coefficients of eqn. 7.
The main drawback is that the validity of eqn. 7 as an approximation of the observable must be verified a posteriori, using the estimated coefficients from the MCMC run.
In the case where the observable is not a scalar but a vector, it is possible to generalize the likelihood expressions 8 and 9 to multivariate Gaussian likelihoods. This increases the dimensionality of the calculation, because there must be four coefficients for each component of the observable. It is also necessary to calculate the full covariance matrix between the different components of O.
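The multivariate generalization amounts to a Gaussian log-likelihood with a full covariance matrix; a minimal sketch:

```python
import numpy as np

def multivariate_lnlike(resid, cov):
    """Log-likelihood of a residual vector (observable minus model
    prediction) under a multivariate Gaussian with covariance matrix
    cov, e.g. estimated by bootstrapping the line sample. Generalizes
    eqns. 8 and 9 to vector observables such as the two cutoff
    parameters."""
    resid = np.asarray(resid, dtype=float)
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (resid @ inv @ resid + logdet
                   + len(resid) * np.log(2.0 * np.pi))
```

For the bivariate case used in this work the covariance matrix is 2×2, with the off-diagonal term carrying the bootstrap-estimated correlation between the two components.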
In this work, we have applied this formalism to the observables that describe the lower cutoff of the distribution and to the differential median parameters that we will define in the next section. In both cases this involved an 11-parameter MCMC analysis and the adoption of a bivariate Gaussian likelihood, for both eqn. 8 and eqn. 9. The correlation between and has been calculated by bootstrapping the sample of absorption lines, as was done for the standard deviations.
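The bootstrap estimate of the uncertainties and of the correlation between a pair of observables can be sketched as follows; the function name and sample layout are our own illustrative choices, but the procedure (resampling the line list with replacement and recomputing the statistic) is the one described above.

```python
import numpy as np

def bootstrap_covariance(lines, statistic, n_boot=1000, seed=0):
    """Covariance of a vector-valued statistic (e.g. the two cutoff
    parameters, or the two median Doppler parameters) estimated by
    resampling the absorption-line sample with replacement.

    `lines` is an array with one row per fitted line; `statistic` maps
    such an array to a length-2 observable vector."""
    rng = np.random.default_rng(seed)
    n = len(lines)
    draws = np.array([statistic(lines[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    return np.cov(draws, rowvar=False)   # 2x2: variances on the diagonal
```

The standard deviations correspond to the square roots of the diagonal, while the off-diagonal term provides the correlation entering the bivariate Gaussian likelihood.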
4.3 Differential median of the line width distribution
Although the cutoff is certainly the most prominent feature of the distribution, it is reasonable to expect that the thermal properties of the IGM do not only affect the narrowest lines in the forest. Taking advantage of the additional information contained in the bulk of the line population is an obvious way of trying to refine the constraints achievable with a Voigt-profile decomposition approach. Some of our preliminary efforts to do so encountered systematic issues related to very broad lines ( km s). Our models were not able to reproduce the observed line distribution in that range, and, more problematically, inferences on the thermal parameters from various fitting schemes depended strongly on how the upper range was chosen or on the statistic employed to characterize the line distribution. We concluded that the broad lines are in most cases probably an artefact of the fitting procedure rather than a reflection of physical properties of the IGM (see for example Fig. 1), and that they are therefore more subject to systematic errors, in particular due to the uncertainty of the continuum placement.
For this reason, we turned our attention to a statistical estimator which is not sensitive to the distribution of lines at extreme values of . A natural choice is the median. In order to capture the column-density dependence of the Doppler-parameter distribution, we adopt the following approach. We define as the median of the line width distribution for cm and as the median for cm. We then apply a linear transformation to parametrized as
(10) 
where the transformation coefficient is such that the medians and of the new quantity are identical (for the same domains defined above). It is straightforward to determine iteratively. This calculation is different from simply considering the differences of the median , because depends on the positions of all individual lines in the plane and not just on the median of in the two parts. However, for simplicity we will refer to this method as the ’differential median’ method. We choose (the median calculated before the transformation) and as our final observables for estimating the thermal parameters, to which we apply the analysis technique illustrated in the previous section. Analogously to the cutoff method, we calculate uncertainties and covariance for and by bootstrapping the line sample. An illustration of our differential median statistic is shown in Fig. 4, both for the data and for the same three thermal models shown in Fig. 2. Its sensitivity to the parameters of the TDR is shown by the different slopes and normalizations of the lines for the different thermal models, which qualitatively have the same parameter dependencies as the cutoff.
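The iterative determination of the transformation coefficient can be sketched as follows. Since the exact parametrization of eqn. 10 is not reproduced here, we assume for illustration a pivoted linear form b' = b - alpha*(log N - pivot) and solve by bisection for the alpha that equalizes the two medians; the bin boundary and pivot values are hypothetical.

```python
import numpy as np

def differential_median_coeff(logN, b, split=13.6, pivot=13.6):
    """Find alpha such that the medians of b' = b - alpha*(logN - pivot)
    in the low- and high-column-density subsamples coincide. The gap
    between the two medians decreases monotonically with alpha, so a
    simple bisection converges."""
    lo, hi = logN < split, logN >= split

    def median_gap(alpha):
        b_prime = b - alpha * (logN - pivot)
        return np.median(b_prime[hi]) - np.median(b_prime[lo])

    a, c = -100.0, 100.0          # bracket assumed wide enough
    for _ in range(100):
        m = 0.5 * (a + c)
        if median_gap(a) * median_gap(m) <= 0:
            c = m
        else:
            a = m
    return 0.5 * (a + c)
```

Because the transformation shifts every line by an amount depending on its column density, the recovered coefficient reflects the full distribution of lines in the plane, not merely the two medians, as noted in the text.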
5 Results
In the left-hand panels of Fig. 3 we show the values of in the simulations as a function of the temperature at mean density and as a function of the slope of the TDR . When fitting the linear relations in eqn. 5, we obtain ; (corresponding to the black dashed lines). The cutoff-fitting algorithm applied to the data returns and , marked as the red shaded region in the plot. By applying the above coefficients to convert these numbers to a measurement of the TDR parameters, we get and . The reported uncertainties take into account the propagated (small) errors of the cutoff calibration coefficients.
The generalization of the relation between the thermal parameters and and the parameters characterising the lower cutoff of the distribution described in § 4.2 is shown in the right-hand panels of Fig. 3. These plots are analogous to the left-hand panels except that the axes are combinations of the thermal parameters as defined in eqn. 7. The coefficients for eqn. 7 are taken as the mean values of the MCMC posterior distribution, applying the method described in § 4.2 to a joint analysis of and . More precisely, the average combinations are and . These relations do not represent a new ’calibration’ in that the coefficients are free to vary consistently with the uncertainties on the values extracted from the simulations. However, showing that the relations between the observables and the typical combinations from the MCMC fall close to the identity relation (dashed line) is necessary to validate our approach. The constraints on the thermal parameters derived from the same MCMC analysis are shown in the green contour plots in Fig. 6. A degeneracy between the inferred values of and is evident, which is a consequence of taking the statistical correlation between the measured and into account. The rightmost panel demonstrates that the lower cutoff in the simulated distribution is not sensitive to the smoothing length . When marginalized over all parameters of the MCMC analysis, including the coefficients for and , the values inferred for the TDR parameters are and (quoted as mean and standard deviation from the posterior distribution). The uncertainties are smaller than those inferred from the lower cutoff, and the two measurements are in good agreement.
The results obtained from the differential median technique are presented in Fig. 5. The two panels show that the quantities and are reasonably approximated by a combination of the form of eqn. 7, although the scatter is not negligible. As in Fig. 3, the red bands trace the 1σ limits for the measured parameters: (left) and . The relations are well described by and . The posterior distribution of the thermal parameters, shown as blue contours in Fig. 6, reveals degeneracies between the inferred values of all three considered parameters. In particular, and in contrast to the cutoff method, there is a significant sensitivity to the pressure smoothing length . This confirms that the overall distribution contains information about the spatial smoothing of the IGM gas, as already noted by Garzilli et al. (2015). The marginalized uncertainties for the TDR parameters are in this case K and , in good agreement with those inferred from the lower cutoff of the distribution. The fact that using the full distribution does not substantially improve the accuracy of the constraints, compared to using just the lower cutoff, is due to the degeneracy with , which was negligible for the lower-cutoff method. On the other hand, the emergence of this degeneracy implies that the precision can be substantially improved by combining these results with independent measurements of the smoothing length. At the moment, the only available constraints on are those obtained from the analysis of correlated absorption in close quasar pairs (Rorai et al., 2013, 2017a). The horizontal bars in Fig. 6 represent the result from Rorai et al. (2017a) in the redshift bins (red) and (black), both overlapping with the redshift range analysed in the present work. The nominal value of the measurement has been decreased by 10%, in order to match the definition of pressure smoothing as given by the cutoff (see § 3.1 and Fig. S11 in Rorai et al., 2017a).
As can be seen, the whole range for considered in this paper is consistent with either of the two data points. More precise measurements of the smoothing, or an analysis conducted over the same redshift limits used in this work, are required to break the degeneracy and improve the accuracy on and achievable with the differential median technique.
An alternative way of visualizing our results is to look at the constraints in the temperature-density plane. This does not require us to reduce the posterior distribution to two parameters with relative uncertainties. It also allows a straightforward comparison with the measurement of the temperature at overdensity obtained with the ’curvature’ statistic in BB11 and Boera et al. (2014), without requiring error propagation by which some information might be lost. We proceed as follows: given the posterior distribution for and obtained from one of our MCMCs, we calculate the marginalized distribution of for the range of overdensities we are interested in. We then plot the 16th and the 84th percentiles as a function of as the 1σ limits in the temperature-density plane. The results for the various techniques described in this paper are shown in Fig. 7. The black dotted lines show the 16th and 84th percentiles of the temperature distributions obtained from the lower-cutoff MCMC analysis, while the red shaded region shows the same for the differential median method. Note that the curvature results do not include uncertainties related to pressure smoothing. There is good agreement between the two measurements, and there is also good consistency with the results from BB11 and Boera et al. (2014) at the same redshift (red square and pentagon, respectively), although our dataset partially overlaps with the sample used in Boera et al. (2014). Note that the uncertainties of the temperature measured with the median technique are lower at mild overdensities () than at the mean density. This is a consequence of the particular degeneracy direction in the plane of Fig. 5.
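Given posterior samples of the TDR parameters, the band in the temperature-density plane is obtained by evaluating the power law for every sample and taking percentiles at each overdensity. A minimal sketch (the array names are our own):

```python
import numpy as np

def temperature_band(T0_samples, gamma_samples, deltas):
    """For each posterior sample (T0, gamma), evaluate the power-law TDR
    T(Delta) = T0 * Delta**(gamma - 1) on a grid of overdensities and
    return the 16th and 84th percentiles at each overdensity, i.e. the
    1-sigma band plotted in the temperature-density plane."""
    T = T0_samples[:, None] * deltas[None, :] ** (gamma_samples[:, None] - 1.0)
    return np.percentile(T, [16, 84], axis=0)
```

Because the percentiles are taken over the joint samples, any anticorrelation between the two parameters is propagated automatically, which is why the band can be narrower at mild overdensities than at the mean density.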
For reference we also show an analogous comparison at between the results of BB11 (blue square), Boera et al. (2014) (blue pentagon) and the cutoff results from Bolton et al. (2014) (blue circle). These limits assume that and are uncorrelated, which is likely incorrect given the results of our analysis at slightly higher redshifts. The light blue shaded area in the background represents the propagated 1σ limits on assuming the measured values and uncertainties of and from Bolton et al. (2014).
6 Discussion
A more comprehensive comparison of the main results of this work with recent constraints on the thermal state of the IGM from the literature is presented in Fig. 8, where we show the evolution of (upper panel) and (lower panel) as a function of redshift. The red triangles correspond to the constraints from our fit to the lower cutoff of the distribution (§ 4.2), while the red squares are those obtained from the differential median method (§ 4.3). The black solid lines are the predictions of a recent hydrodynamics simulation from Puchwein et al. (2015) in which the UV background is assumed to follow the model by Haardt & Madau (2012). This simulation fits well both the observational points from this work and those reported by Bolton et al. (2014) (blue circles). If we assume a slope of the TDR as in the Puchwein et al. (2015) simulation, (i.e. the black line in the lower panel), we can extrapolate the curvature measurements of (BB11, Boera et al., 2014) to mean density. The corresponding values of are shown as the dark blue connected circles for BB11, and the orange connected triangles for an updated version of the results of Boera et al. (2014) (Boera, private communication). The good agreement between our measurements of and the evolution of inferred by extrapolating the curvature temperature measurements to mean density, using the values predicted by the Puchwein et al. (2015) simulation, is a non-trivial result suggesting that our understanding of the thermal state of the IGM is converging from multiple different approaches. Convergence is also suggested by the measurements of obtained with a wavelet-analysis technique by Lidz et al. (2010) (cyan triangles) and Garzilli et al. (2012) (grey pentagons). The discrepancy with their measurements in the redshift bins close to that of our measurement is less than .
In the lower panel, the measurement of at derived from a joint analysis of the flux PDF and the wavelet coefficients (Garzilli et al., 2012) is clearly in tension with the work presented here and with the analysis of Bolton et al. (2014). As discussed earlier, other measurements based on the flux PDF (e.g. Viel et al., 2009; Calura et al., 2012) have also claimed that an "inverted" TDR () is required to match the data (but see Lee, 2012; Rollinde et al., 2013; Lee et al., 2015). This apparent discrepancy has been recognized for some time and was recently discussed in considerable detail by Rorai et al. (2017b). By analysing a high signal-to-noise quasar spectrum at , Rorai et al. (2017b) confirmed that the flux PDF constrains the TDR to be flat or to rise towards low density; however, as they point out, this is true only for gas around and below the mean density, to which the PDF statistic is most sensitive. Rorai et al. (2017b) further show that techniques for measuring the IGM temperature that rely on quantifying the smoothness of the absorption profile, such as those discussed in this paper or the curvature method developed by BB11, are mainly sensitive to overdense gas, i.e. . This has also been highlighted by Bolton et al. (2014), who show the relation between column density and optical-depth-weighted density (see Fig. 1 in their paper). The fact that the work presented here constrains a spatially invariant TDR to have corroborates the conclusion of Rorai et al. (2017b) that a single, spatially-invariant power law is not able to describe the TDR of the low-density IGM.
This may perhaps be explained by simulations of HeII reionization where radiative transfer effects have been implemented (Abel & Haehnelt, 1999; Paschos et al., 2007; McQuinn et al., 2009; Compostella et al., 2013; La Plante et al., 2017), in which the lower densities show a bimodal temperature distribution with increased temperatures and a flattening of the relation between temperature and density in regions where helium has most recently become doubly ionized.
Finally, we note that in a recent work Garzilli et al. (2015) found that the cutoff of the line distribution is significantly sensitive to the pressure smoothing, in contrast to what our analysis shows (see Fig. 6). This could possibly be due to differences in the fitting algorithms, both for the individual lines and for the cutoff, but it is most likely related to the particular range of column density we have selected. Fig. 8 in their paper suggests that the effect of pressure smoothing on the low lines is most prominent for lines with low column densities, while here we have only considered those with cm, for which the line width is dominated by thermal broadening, as argued also by Bolton et al. (2014).
7 Conclusions
We have analysed the forest of a sample of 13 high-resolution quasar spectra in the redshift range with the help of high-resolution numerical simulations of the IGM. The continuum-normalized spectra were decomposed into individual HI absorbers using vpfit, and we have carefully identified and excluded regions potentially contaminated by metal lines. We have then used the lower cutoff in the distribution and a newly introduced statistic based on the medians of the line-width distribution in separate column-density ranges to obtain two new measurements of the temperature of the IGM. In both cases we employed Bayesian MCMC techniques to constrain the thermal parameters, using a grid of thermal models where the TDR has been imposed in post-processing. Our results can be summarized as follows.

Fitting the lower cutoff of the distribution in the standard way gives K and .

The parameters describing the lower cutoff and those describing the TDR are strongly correlated in simulations, but there is significant scatter due to the residual dependence on other thermal parameters (in particular the smoothing length ).

We have introduced a new calibration of the temperature measurement based on a linear combination of the thermal parameters (in logarithmic space) whose coefficients are left free to vary. For this new calibration we have implemented an MCMC analysis in which the calibration coefficients are estimated together with the thermal quantities and .

Including the correlation between the statistical uncertainties of and in our likelihood analysis gives K and . Taking the correlation between and into account shows that the inferred values for the two parameters are strongly degenerate. Conversely, no sensitivity is found to the smoothing length .

We have introduced an alternative statistical estimator for the thermal parameters which is based on the medians and of the line-width distribution for cm and cm, respectively. For this we have defined the transformation , with chosen such that for .

By applying the parameters and to the same MCMC technique used for the lower cutoff method, we obtain K and , in good agreement with our measurement from the lower cutoff. For the measurement based on the differential median method we find a strong degeneracy with the smoothing length , suggesting, unsurprisingly, that the overall distribution contains information about the small-scale spatial structure of the IGM.

When we use the posterior distribution for the TDR parameters to infer the temperature at , we obtain values consistent with the measurements at the same redshift from BB11 and Boera et al. (2014).

Our constraints on are in disagreement with claims of an inverted TDR based on the flux PDF, similar to published results using statistics based on the smoothness of the absorption. Our findings further corroborate those of Rorai et al. (2017b), who argued that this is due to the different overdensity range probed by the PDF.
The work presented here marks a further step toward a consistent characterization of the thermal state of the IGM at . The agreement with other recent measurements at the same redshift and with high-resolution hydrodynamics models is an encouraging sign that our understanding of the thermal state of the IGM is converging. The techniques developed and presented here can easily be applied to datasets at other redshifts and should improve the well-established approach of characterising the thermal state of the IGM with the help of the distribution of forest absorbers. Our analysis takes into account the uncertainties in the calibration between the lower cutoff (or the median parameters in different column-density ranges) and the thermal parameters, as well as parameter correlations and second-order dependencies which were previously neglected. The degeneracies we find, especially in our analysis based on the differential median, suggest that a joint analysis with different statistics constraining the pressure smoothing length will significantly improve the precision of current constraints on the thermal state of the IGM.
Acknowledgements
We are grateful to W. L. W. Sargent for kindly providing two of the spectra analysed in this work, Volker Springel for making Gadget3 available and Girish Kulkarni for sharing the code for the calculation of in SPH simulations. We thank the anonymous referee for useful comments and suggestions, which greatly improved this manuscript. MH acknowledges support by ERC ADVANCED GRANT 320596 “The Emergence of Structure during the epoch of Reionization”. GB was supported by the National Science Foundation through grant AST-1615814. JSB acknowledges the support of a Royal Society University Research Fellowship. MTM thanks the Australian Research Council for Discovery Project grant DP130100568. This work made use of the DiRAC High Performance Computing System (HPCS) and the COSMOS shared memory service at the University of Cambridge. These are operated on behalf of the STFC DiRAC HPC facility. This equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1.
References
 Abel & Haehnelt (1999) Abel T., Haehnelt M. G., 1999, ApJ, 520, L13
 Barlow (2008) Barlow T., 2008, MAKEE: MAuna Kea Echelle Extraction, http://www.astro.caltech.edu/∼tb/makee
 Becker et al. (2011) Becker G. D., Bolton J. S., Haehnelt M. G., Sargent W. L. W., 2011, MNRAS, 410, 1096
 Becker et al. (2013) Becker G. D., Hewett P. C., Worseck G., Prochaska J. X., 2013, MNRAS, 430, 2067
 Boera et al. (2014) Boera E., Murphy M. T., Becker G. D., Bolton J. S., 2014, MNRAS, 441, 1916
 Boera et al. (2016) Boera E., Murphy M. T., Becker G. D., Bolton J. S., 2016, MNRAS, 456, L79
 Bolton & Becker (2009) Bolton J. S., Becker G. D., 2009, MNRAS, 398, L26
 Bolton et al. (2008) Bolton J. S., Viel M., Kim T., Haehnelt M. G., Carswell R. F., 2008, MNRAS, 386, 1131
 Bolton et al. (2014) Bolton J. S., Becker G. D., Haehnelt M. G., Viel M., 2014, MNRAS, 438, 2499
 Broderick et al. (2012) Broderick A. E., Chang P., Pfrommer C., 2012, ApJ, 752, 22
 Calura et al. (2012) Calura F., Tescari E., D’Odorico V., Viel M., Cristiani S., Kim T.S., Bolton J. S., 2012, MNRAS, 422, 3019
 Carswell & Webb (2014) Carswell R. F., Webb J. K., 2014, VPFIT: Voigt profile fitting program, Astrophysics Source Code Library (ascl:1408.015)
 Chang et al. (2012) Chang P., Broderick A. E., Pfrommer C., 2012, ApJ, 752, 23
 Compostella et al. (2013) Compostella M., Cantalupo S., Porciani C., 2013, MNRAS, 435, 3169
 Dekker et al. (2000) Dekker H., D’Odorico S., Kaufer A., Delabre B., Kotzlowski H., 2000, in Iye M., Moorwood A. F., eds, Proc. SPIE Vol. 4008, Optical and IR Telescope Instrumentation and Detectors. pp 534–545
 Eisenstein & Hu (1999) Eisenstein D. J., Hu W., 1999, ApJ, 511, 5
 Garzilli et al. (2012) Garzilli A., Bolton J. S., Kim T.S., Leach S., Viel M., 2012, MNRAS, 424, 1723
 Garzilli et al. (2015) Garzilli A., Theuns T., Schaye J., 2015, MNRAS, 450, 1465
 Gnedin & Hui (1998) Gnedin N. Y., Hui L., 1998, MNRAS, 296, 44
 Haardt & Madau (2001) Haardt F., Madau P., 2001, in Neumann D. M., Tran J. T. V., eds, Clusters of Galaxies and the High Redshift Universe Observed in Xrays. (arXiv:astroph/0106018)
 Haardt & Madau (2012) Haardt F., Madau P., 2012, ApJ, 746, 125
 Haehnelt & Steinmetz (1998) Haehnelt M. G., Steinmetz M., 1998, MNRAS, 298, L21
 Hui & Gnedin (1997) Hui L., Gnedin N. Y., 1997, MNRAS, 292, 27
 Hui & Haiman (2003) Hui L., Haiman Z., 2003, ApJ, 596, 9
 Jarosik et al. (2011) Jarosik N., et al., 2011, ApJS, 192, 14
 Kim et al. (2016) Kim T.S., Carswell R. F., Mongardi C., Partl A. M., Mücket J. P., Barai P., Cristiani S., 2016, MNRAS, 457, 2005
 Kulkarni et al. (2015) Kulkarni G., Hennawi J. F., Oñorbe J., Rorai A., Springel V., 2015, ApJ, 812, 30
 La Plante et al. (2017) La Plante P., Trac H., Croft R., Cen R., 2017, ApJ, 841, 87
 Lamberts et al. (2015) Lamberts A., Chang P., Pfrommer C., Puchwein E., Broderick A. E., Shalaby M., 2015, ApJ, 811, 19
 Lee (2012) Lee K.G., 2012, ApJ, 753, 136
 Lee et al. (2015) Lee K.G., et al., 2015, ApJ, 799, 196
 Lidz et al. (2010) Lidz A., FaucherGiguère C.A., Dall’Aglio A., McQuinn M., Fechner C., Zaldarriaga M., Hernquist L., Dutta S., 2010, ApJ, 718, 199
 McDonald et al. (2001) McDonald P., MiraldaEscudé J., Rauch M., Sargent W. L. W., Barlow T. A., Cen R., 2001, ApJ, 562, 52
 McQuinn & Upton Sanderbeck (2016) McQuinn M., Upton Sanderbeck P. R., 2016, MNRAS, 456, 47
 McQuinn et al. (2009) McQuinn M., Lidz A., Zaldarriaga M., Hernquist L., Hopkins P. F., Dutta S., FaucherGiguère C.A., 2009, ApJ, 694, 842
 Murphy (2016) Murphy M., 2016, UVES_popler: UVES_popler: POstPipeLine Echelle Reduction software, doi:10.5281/zenodo.56158, https://doi.org/10.5281/zenodo.56158
 O’Meara et al. (2015) O’Meara J. M., et al., 2015, AJ, 150, 111
 Olive & Skillman (2004) Olive K. A., Skillman E. D., 2004, ApJ, 617, 29
 Outram et al. (1999) Outram P. J., Chaffee F. H., Carswell R. F., 1999, MNRAS, 310, 289
 Paschos et al. (2007) Paschos P., Norman M. L., Bordner J. O., Harkness R., 2007, preprint, (arXiv:0711.1904)
 Pfrommer et al. (2012) Pfrommer C., Chang P., Broderick A. E., 2012, ApJ, 752, 24
 Puchwein et al. (2012) Puchwein E., Pfrommer C., Springel V., Broderick A. E., Chang P., 2012, MNRAS, 423, 149
 Puchwein et al. (2015) Puchwein E., Bolton J. S., Haehnelt M. G., Madau P., Becker G. D., Haardt F., 2015, MNRAS, 450, 4081
 Reichardt et al. (2009) Reichardt C. L., et al., 2009, ApJ, 694, 1200
 Ricotti et al. (2000) Ricotti M., Gnedin N. Y., Shull J. M., 2000, ApJ, 534, 41
 Rollinde et al. (2013) Rollinde E., Theuns T., Schaye J., Pâris I., Petitjean P., 2013, MNRAS, 428, 540
 Rorai et al. (2013) Rorai A., Hennawi J. F., White M., 2013, ApJ, 775, 81
 Rorai et al. (2017a) Rorai A., et al., 2017a, Science, 356, 418
 Rorai et al. (2017b) Rorai A., et al., 2017b, MNRAS, 466, 2690
 Rudie et al. (2012) Rudie G. C., Steidel C. C., Pettini M., 2012, ApJ, 757, L30
 Schaye et al. (1999) Schaye J., Theuns T., Leonard A., Efstathiou G., 1999, MNRAS, 310, 57
 Schaye et al. (2000) Schaye J., Theuns T., Rauch M., Efstathiou G., Sargent W. L. W., 2000, MNRAS, 318, 817
 Springel (2005) Springel V., 2005, MNRAS, 364, 1105
 Theuns et al. (2000) Theuns T., Schaye J., Haehnelt M. G., 2000, MNRAS, 315, 600
 Theuns et al. (2001) Theuns T., Mo H. J., Schaye J., 2001, MNRAS, 321, 450
 Theuns et al. (2002) Theuns T., Schaye J., Zaroubi S., Kim T., Tzanavaris P., Carswell B., 2002, ApJ, 567, L103
 Valageas et al. (2002) Valageas P., Schaeffer R., Silk J., 2002, A&A, 388, 741
 Viel et al. (2009) Viel M., Bolton J. S., Haehnelt M. G., 2009, MNRAS, 399, L39
 Vogt et al. (1994) Vogt S. S., et al., 1994, in Crawford D. L., Craine E. R., eds, Proc. SPIE Vol. 2198, Instrumentation in Astronomy VIII. p. 362, doi:10.1117/12.176725
 Zaldarriaga et al. (2001) Zaldarriaga M., Hui L., Tegmark M., 2001, ApJ, 557, 519
Appendix A Effect of Temperature and Optical Depth Variation
To understand the effect of an evolving optical depth and a possible variation in temperature, we conducted the following test:

we created a model which is a mixture of four models with different effective optical depth, . These values encompass the evolution of the mean transmission across our redshift bin.

we created another model which is a mixture of three models with different temperatures at mean density K. This mixture could be interpreted either as a (rather extreme) temperature evolution with redshift or as spatial fluctuations.
We then calculated the cutoff and the differential median statistic for the two mixed models and for their individual components. The results are shown in Fig. 9.
In the case of the optical depth mixture (right panel), the differences between the total model (black solid lines) and the four components (coloured dashed lines) are minimal, especially for the differential median statistic. One of the four cutoffs slightly deviates from the others at low column densities, but given that there is no clear trend and given the magnitude of the error (the outlier model has an uncertainty of , four times larger than the other models), we consider it a statistical fluctuation due to the noise/bootstrap realizations.
In the case of the temperature mixture (left panel), it is evident that the differential median of the total model (black solid) depends on the average temperature, i.e. it coincides with the middle of the three components, while the cutoff tracks the coldest of the three. This suggests that, if there is significant temperature evolution or fluctuation, measurements based on the cutoff will be biased towards lower values, unlike the median statistic. However, our measurements based on the two methods are in good agreement, and if anything the cutoff analysis points toward slightly higher temperatures, which leads us to conclude that this effect is negligible at this level of precision.