# Urchin: A Reverse Ray Tracer for Astrophysical Applications

###### Abstract

We describe Urchin, a reverse ray tracing radiative transfer scheme optimised to model self-shielding from the post-reionisation ultraviolet (UV) background in cosmological simulations. The reverse ray tracing strategy provides several benefits over forward ray tracing codes, including: (1) the preservation of adaptive density field resolution; (2) completely uniform sampling of gas elements by rays; (3) the preservation of Galilean invariance; (4) the ability to sample the UV background spectrum with hundreds of frequency bins; and (5) exact preservation of the input UV background spectrum and amplitude in optically thin gas. The implementation described here focuses on Smoothed Particle Hydrodynamics (SPH); however, the method can be applied to any density field representation in which resolution elements admit ray intersection tests and can be associated with optical depths. We characterise the errors in our implementation in stages, beginning with comparisons to known analytic solutions and ending with a realistic model of the cosmological UV background incident onto a suite of spherically symmetric models of gaseous galactic halos.

###### keywords:

methods: numerical – radiative transfer – intergalactic medium – quasars: absorption lines – diffuse radiation – ultraviolet: general


## 1 Introduction

Cosmological gas dynamics simulations (e.g. Cen_1994; Theuns_1998; Theuns98b; Hernquist_1996), semi-analytic models (e.g. Bi_1997) and analytic calculations (e.g. Schaye_2001) have enabled us to understand the connection between galaxies, the intergalactic medium (IGM), and the large-scale structure of the Universe. HI Lyman-α absorbers in the spectra of distant quasars are a particularly useful observational probe of this structure. Models indicate that in the post-reionisation Universe, the majority of Lyman-α forest absorption lines (i.e. lines with column densities $N_{\rm HI} \lesssim 10^{17.2}\,{\rm cm^{-2}}$) arise in filamentary structures in the IGM, with the highest column density lines being associated with the circumgalactic medium of galaxies. These lines are produced in systems which are highly ionised by the ultraviolet (UV) background (see Meiksin_09, for a recent review). An absorber with a column density $N_{\rm HI} \gtrsim 10^{17.2}\,{\rm cm^{-2}}$ has an optical depth greater than unity for Lyman-Limit photons and is called a Lyman-Limit System (LLS). Above a column density of $N_{\rm HI} \approx 2\times10^{20}\,{\rm cm^{-2}}$, ‘damping wings’ due to the natural line-broadening of the Lyman-α line are detectable, and the system is called a damped Lyman-α absorber (DLA). These absorbers probe the interface between the IGM and galaxies as well as the interstellar medium (ISM) of the galaxies themselves. Hydrogen begins to self-shield from the UV background in the LLS column density range, and the reduction in ionising flux plays a major role in setting the ionisation state of these absorbers. The Urchin code described in this paper is designed to model HI self-shielding in the LLS and DLA range.

At present, the largest observational catalogues of self-shielded absorbers are produced through semi-automated searches (e.g. Prochaska_04; Noterdaeme_2012) of data from the Sloan Digital Sky Survey (SDSS^{1}^{1}1www.sdss3.org). Current and planned expansions to the SDSS, such as the Baryon Oscillation Spectroscopic Survey (BOSS, Boss) and BigBOSS (BigBoss), will increase the amount of available data by a factor of ten. Because the atmosphere absorbs the observed wavelengths of the Lyman-α transition at low redshift, ground-based surveys for DLAs are limited to redshifts $z \gtrsim 1.7$. Surveys for LLSs require spectral coverage of the Lyman-Limit transition for an accurate determination of $N_{\rm HI}$ and are therefore limited to redshifts of $z \gtrsim 2.5$ when performed from the ground. The new Cosmic Origins Spectrograph^{2}^{2}2www.stsci.edu/hst/cos on the Hubble Space Telescope provides significant capacity to probe lower redshift systems (e.g. Battisti_12), while the Advanced Camera for Surveys and Wide Field Camera 3 have recently been used to complete a survey for LLSs at lower redshifts (Omeara_12).

In addition to Lyman series transitions, post-reionisation neutral hydrogen can also be effectively probed using the 21-cm emission line. The most recent determination of the local HI mass function is from the Arecibo Legacy Fast ALFA (ALFALFA) survey (Martin_10), which will have detected galaxies in HI 21-cm throughout the local Universe when it is complete. The Square Kilometer Array (SKA^{3}^{3}3www.skatelescope.org) represents the long-term future for this type of radio astronomy; however, construction will not begin for several years. In preparation for this, a host of pathfinder telescopes (ASKAP^{4}^{4}4www.atnf.csiro.au/projects/askap, MeerKAT^{5}^{5}5www.ska.ac.za/meerkat, WSRT^{6}^{6}6www.astron.nl/radio-observatory/astronomers/wsrt-astronomers, EVLA^{7}^{7}7https://science.nrao.edu/facilities/evla) will soon make 21-cm emission surveys as data rich as their optical and near-infrared counterparts. In addition, pilot surveys for 21-cm absorption in the spectra of radio-bright sources have shown potential (e.g. Gupta_10; Darling_2011), and at least two of these pathfinders (ASKAP and MeerKAT) will also perform large, blind absorption surveys.

The combined output from these surveys will provide transformative information on the gas content of galaxies and their modes of accretion. It will also generate samples that trace the large-scale structure of the Universe with biases different from those of optically selected samples. A prerequisite for making model predictions of hydrogen emission or absorption is the accurate calculation of the distribution of neutral hydrogen (e.g. Duffy_12; VandeVoort_12). Methods that accomplish this task with a minimum of free parameters will be able to take full advantage of observational data. These issues motivate many of the design choices for Urchin.

The standard approach for treating the post-reionisation UV background in cosmological simulations is to impose a spatially uniform but time-varying radiation field and calculate the HI fraction in the optically thin limit. Pioneering work on modelling self-shielding in gas dynamics simulations was done by Katz_1996 and Haehnelt_1998 by post-processing column density maps. More recent theoretical work has incorporated radiative transfer through 3-D density fields to calculate the attenuation of the UV background in dense gas (Razoumov_2006; Kohler_07; Pontzen_2008; Altay_2011; McQuinn_2011; Yajima_11; Fumagalli_11; Cen_12; Erkal_2012; Rahmati_13_evo; Rahmati_13_str).

In this paper we present and test Urchin, a reverse ray tracing scheme designed to calculate self-shielding corrections in the post-reionisation Universe. The code can be applied to any density field representation (e.g. particles, adaptive grids, unstructured meshes) in which the resolution elements can be associated with optical depths and subjected to ray intersection tests. The main benefits of Urchin are: (1) preservation of the adaptive density field resolution present in many gas dynamics codes; (2) uniform sampling of gas resolution elements with rays; (3) preservation of Galilean invariance; (4) high spectral resolution; and (5) preservation of the standard uniform UV background in optically thin gas. The format of this paper is as follows. In §2 we introduce our notation and review some basic physics related to radiative transfer. In §3 we give a general description of our reverse ray tracing approach and place it in context by comparing it to alternative approaches. In §4 we discuss the details of our implementation using smoothed particle hydrodynamics (SPH) density fields. In §5 we present the results of tests meant to validate Urchin, and in §6 we discuss the results, suggest improvements for future versions of the code, and conclude.

## 2 Definitions and Basic Physics

In this section, we define our notation and review some of the relevant physics. All simulations discussed in this work utilise a cubic simulation volume. For brevity, we will refer to any of these simulation volumes as boxes and their six faces as walls. When referring to distances, we will distinguish between proper and comoving measures using the prefixes ‘p’ and ‘c’ (e.g. pkpc, cMpc). In this work, we consider only hydrogen and leave the inclusion of other elements, particularly helium, to future work. For our description of the radiation field, we adopt the notation of Rybicki_86.

The specific intensity, $I_\nu$, fully characterises a radiation field and is defined as the energy passing through an area element $\mathrm{d}A$ into a solid angle element $\mathrm{d}\Omega$ in time $\mathrm{d}t$ due to photons with frequency between $\nu$ and $\nu + \mathrm{d}\nu$. Several useful characterisations of the radiation field can be expressed as integrals over this quantity. Considering photons with frequency $\nu \geq \nu_{\rm HI}$ and an optically thin medium, we can write the number density of hydrogen ionising photons, $n_\gamma$, and the photoionisation rate, $\Gamma$, at the point $\mathbf{r}$ respectively as:

$$n_\gamma(\mathbf{r}) = \frac{1}{c}\int_{\nu_{\rm HI}}^{\infty} \mathrm{d}\nu \oint \mathrm{d}\Omega\, \frac{I_\nu(\mathbf{r})}{h\nu}, \qquad (1)$$

$$\Gamma(\mathbf{r}) = \int_{\nu_{\rm HI}}^{\infty} \mathrm{d}\nu \oint \mathrm{d}\Omega\, \frac{I_\nu(\mathbf{r})\,\sigma_\nu}{h\nu}, \qquad (2)$$

where $h\nu_{\rm HI}$ and $\sigma_\nu$ are the ionisation energy and photoionisation cross-section of hydrogen. In the case of a medium with finite opacity, the above quantities can be written in terms of the optically thin value of $I_\nu$ by making the replacement $I_\nu \rightarrow I_\nu\, e^{-\tau_\nu}$, where $\tau_\nu$ is the optical depth between the sources producing $I_\nu$ and $\mathbf{r}$.

The frequency-averaged (‘grey’) photoionisation cross-section is defined as $\sigma_{\rm grey} \equiv \Gamma / (c\, n_\gamma)$, i.e. the photon-number-weighted mean of $\sigma_\nu$. Under the grey approximation, every polychromatic spectrum (characterised by $n_\gamma$ and $\Gamma$) corresponds to an equivalent monochromatic spectrum with the same $n_\gamma$ and $\Gamma$. The flux of the monochromatic spectrum is fixed by $n_\gamma$, and the energy of its photons is $h\nu_{\rm grey}$, with $\nu_{\rm grey}$ implicitly defined by $\sigma_{\nu_{\rm grey}} = \sigma_{\rm grey}$.
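To make the grey definition concrete, the following sketch evaluates $\sigma_{\rm grey} = \Gamma/(c\,n_\gamma)$ numerically for an assumed power-law background, $J_\nu \propto \nu^{-1.5}$, with the approximate cross-section scaling $\sigma_\nu \propto \nu^{-3}$ above the Lyman limit. The spectrum and frequency range are illustrative choices, not values from this paper.

```python
import numpy as np

H_PLANCK = 6.626e-27   # erg s
NU_HI = 3.29e15        # Hz, hydrogen ionisation threshold
SIGMA_HI = 6.30e-18    # cm^2, cross-section at the Lyman limit

def trapz(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

nu = np.logspace(np.log10(NU_HI), np.log10(40.0 * NU_HI), 2000)
J_nu = (nu / NU_HI) ** -1.5           # arbitrary normalisation (cancels)
sigma_nu = SIGMA_HI * (nu / NU_HI) ** -3

# Gamma and c*n_gamma from Eqs (1)-(2); for an isotropic field the
# angular integrals give the same 4*pi factor, which cancels in the ratio.
gamma = trapz(4.0 * np.pi * J_nu * sigma_nu / (H_PLANCK * nu), nu)
c_ngamma = trapz(4.0 * np.pi * J_nu / (H_PLANCK * nu), nu)
sigma_grey = gamma / c_ngamma         # photon-weighted mean cross-section
print(sigma_grey / SIGMA_HI)
```

For this particular spectrum the grey cross-section comes out to roughly one third of the Lyman-limit value, showing how high-frequency photons pull the mean well below $\sigma_{\nu_{\rm HI}}$.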

## 3 Reverse Ray Tracing Method

Urchin is designed to efficiently model the residual neutral hydrogen in the post-reionisation universe. In this section, we describe the algorithms used in Urchin and conceptual departures from alternative radiative transfer codes. We begin with a brief description of the standard treatment of the UV background in cosmological gas dynamics simulations.

### 3.1 Standard Treatment of Post-Reionisation UV Background

In the post-reionisation Universe, cosmic hydrogen is kept highly ionised by a pervasive UV background (Gunn_65) produced by galaxies and quasars. Quantitatively, the volume-averaged neutral fraction at redshifts $z \lesssim 6$ is of the order of $10^{-4}$ (Fan06; Becker07). A rapid transition to higher neutral fractions signals the end of reionisation, evidence for which has recently been observed in the form of a damping wing in the spectrum of a $z > 7$ quasar (Mortlock_11). At lower redshifts, the UV background determines both the ionisation state and temperature of gas in the IGM and sets the rate at which denser gas can cool, accrete onto small galaxies, and form stars (Efstathiou_1992; Okamoto_08).

The most widely used treatment of the post-reionisation UV background in cosmological simulations is based on three approximations: (1) optically thin gas; (2) a spatially uniform UV background; and (3) photo/collisional ionisation equilibrium. These approximations are valid for the majority of cosmic gas, i.e. for highly ionised hydrogen; however, they break down for the majority of gas observable in HI surveys. The approximation of optically thin gas begins to break down in absorption systems that probe accretion from the IGM onto galaxies (LLSs) and completely fails in regions of significant self-shielding (DLAs), where the strongest HI signals arise. This approximation is the most important one to remedy for HI surveys. The second approximation disregards large-scale gradients in the UV background as well as point sources. These fluctuations can have an effect on absorber statistics (e.g. Rahmati_13; Yajima_11; Croft_04), but the magnitude is not as large as that due to self-shielding. A notable exception is absorbers in the proximity zones of bright sources (Schaye_06). The third approximation, photo/collisional equilibrium, likely holds for dense gas where HI is self-shielded and recombination times are short compared to UV background variability, but will break down near variable sources and in less dense gas. Relaxing these approximations is an efficient way to improve model predictions for HI surveys. In this release of Urchin we focus on self-shielding.
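Approximation (3) reduces the ionisation state to an algebraic balance. As a minimal sketch (not Urchin's actual routine, and using standard approximate rate fits rather than the fits adopted in this paper), the equilibrium neutral fraction $x$ of hydrogen-only gas follows from equating ionisations with recombinations, $x\,(\Gamma + n_e C(T)) = \alpha(T)\, n_e\, (1-x)$ with $n_e = (1-x)\,n_{\rm H}$, which is a quadratic in $x$:

```python
import numpy as np

def alpha_B(T):
    """Case B recombination rate [cm^3/s], rough power-law fit."""
    return 2.59e-13 * (T / 1.0e4) ** -0.7

def coll_ion(T):
    """Collisional ionisation rate [cm^3/s], approximate fit."""
    return 5.85e-11 * np.sqrt(T) * np.exp(-157809.1 / T)

def neutral_fraction(n_H, T, gamma):
    """Solve x (gamma + (1-x) n_H C) = alpha (1-x)^2 n_H for x in [0,1].

    Expanding gives (A+C) n_H x^2 - (gamma + (2A+C) n_H) x + A n_H = 0;
    the smaller root is the physical one.
    """
    A, C = alpha_B(T), coll_ion(T)
    a = (A + C) * n_H
    b = -(gamma + (2.0 * A + C) * n_H)
    c = A * n_H
    return (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Highly ionised, IGM-like conditions: x ~ a few 1e-5
x_thin = neutral_fraction(1e-4, 1e4, 1e-12)
print(x_thin)
```

Switching off the radiation field (`gamma=0`) drives the gas almost fully neutral at $10^4$ K, as expected.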

### 3.2 Reverse Ray Tracing - Motivation

Numerical techniques for continuum radiative transfer have been developed based around ray tracing methods (e.g. Nakamoto_01; Maselli_03; Razoumov_05; Susa_06; Whalen_06; Altay_08), the closely related method of characteristics (e.g. Mellema_06; Rijkhorst_06), angular moments of the radiative transfer equation (e.g. Gnedin_01; Aubert_08; Petkova_09; Finlator_09), and transport on unstructured meshes (e.g. Pawlik_08; Pawlik_11; Paardekooper_10). A detailed comparison between many codes currently in use is documented in the Cosmological Radiative Transfer Comparison Project (Iliev_06; Iliev_09).

The above methods are all based on following radiation from its source to the point where it is absorbed or scattered. When dealing with the post-reionisation UV background, the goal is to build up a radiation field that is known from both theoretical and observational studies to be mostly uniform. In these methods, the UV background field at a given point is the sum of radiation that has been transported from all sources being considered. However, in many methods this is true only in a statistical sense. For example, in Monte Carlo ray tracers such as Sphray, resolution elements are updated whenever a ray intersects them. The ray only carries information about the flux from one source, but if enough rays are traced, each resolution element will be updated by the rays from many sources. In Urchin we attempt a different type of solution to this problem. As opposed to building up a mostly uniform UV background from multiple sources, we begin with the standard approximations described above (optically thin gas, uniform field, photo-collisional equilibrium) and then calculate deviations from that solution. For the majority of cosmic gas, the optically thin photoionisation rate from this uniform field, $\Gamma^{\rm thin}$, is in fact a good approximation. The current version of Urchin relaxes the optically thin approximation from the standard treatment by attenuating this uniform radiation field in denser regions that begin to self-shield. The fraction of gas (by mass or volume) where this correction is necessary is guaranteed to be small by the nature of the problem, allowing us to concentrate the available computational resources where they are needed. In future versions of the code we will relax the second and third approximations in the standard treatment by adding large-scale gradients, proximity regions, and non-equilibrium effects.
Currently Urchin operates on static density fields as a post-processing step, but in principle it could be coupled to existing gas dynamics codes in a straightforward way (e.g. in a framework such as that described in Zwart_09).

### 3.3 Reverse Ray Tracing - Algorithm

Consider a density field discretised into resolution elements labelled by an index $i$. We will refer to these resolution elements as particles; however, our reverse ray tracing technique can be applied to any density field representation in which the resolution elements can be associated with optical depths and subjected to ray intersection tests. From each particle, we cast $N_{\rm ray}$ ray segments out to a distance $L_{\rm ray}$, in directions that uniformly cover the solid angle around the particle. We use the HEALPix algorithm (2005ApJ...622..759G) to determine ray directions. We then calculate the optical depth along each of these ray segments and sum over all rays to obtain an estimate of the self-shielded photoionisation rate, $\Gamma^{\rm shield}$, at the location of each particle,

$$\Gamma^{\rm shield} = \frac{1}{N_{\rm ray}} \sum_{j=1}^{N_{\rm ray}} \int_{\nu_{\rm HI}}^{\infty} \mathrm{d}\nu\, \frac{4\pi J_\nu\, \sigma_\nu}{h\nu}\, e^{-\tau_{\nu,j}} \equiv \Gamma^{\rm thin}\, e^{-\tau_{\rm eff}}, \qquad (3)$$

where $J_\nu$ is the mean intensity of the UV background and $\tau_{\nu,j}$ is the optical depth along ray segment $j$.

For most particles the optically thin approximation is very good, and the effective optical depth will be small, $\tau_{\rm eff} \ll 1$, along all directions. However, for the small fraction of particles that dominate the HI abundance, the effective optical depth will be large ($\tau_{\rm eff} \gg 1$). We then use $\Gamma^{\rm shield}$ in an analytic solution for the equilibrium neutral fraction (Eq. 23 in the Appendix) to update the ionisation state of the particle. An altered neutral fraction for one particle leads to altered optical depths along ray segments that pass through it, and hence we iterate this procedure until the neutral fraction converges for all particles. This typically happens in tens of iterations, with those few particles at the threshold between optically thin and optically thick converging last. $N_{\rm ray}$ and $L_{\rm ray}$ are numerical parameters of the scheme, and we have found that in fully cosmological runs, appropriate choices of $N_{\rm ray}$ and $L_{\rm ray}$ lead to converged column density distribution functions.
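The iteration just described can be sketched in one dimension. The following toy (monochromatic, two "rays" per cell, photoionisation equilibrium without collisional ionisation, all constants assumed for the demo) illustrates the convergence loop; it is not the Urchin implementation:

```python
import numpy as np

SIGMA = 6.30e-18    # cm^2, HI cross-section at the Lyman limit
ALPHA = 2.59e-13    # cm^3/s, case B recombination at 10^4 K
GAMMA_THIN = 1e-12  # 1/s, optically thin photoionisation rate (assumed)
DX = 1.0e20         # cm, cell size
L_RAY = 3.0e21      # cm, ray segment length

n_H = np.full(200, 1e-4)  # cm^-3, diffuse background density
n_H[90:110] = 0.1         # dense, self-shielding slab in the middle

x = ALPHA * n_H / (GAMMA_THIN + ALPHA * n_H)  # optically thin start
nseg = int(L_RAY / DX)
for iteration in range(100):
    # Running HI column; col[k] is the column in cells 0..k-1.
    col = np.concatenate(([0.0], np.cumsum(n_H * x * DX)))
    tau_l = np.empty_like(x)
    tau_r = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - nseg), min(len(x), i + nseg)
        tau_l[i] = SIGMA * (col[i] - col[lo])      # cells to the left
        tau_r[i] = SIGMA * (col[hi] - col[i + 1])  # cells to the right
    gamma = 0.5 * GAMMA_THIN * (np.exp(-tau_l) + np.exp(-tau_r))
    x_new = ALPHA * n_H / (gamma + ALPHA * n_H)    # equilibrium update
    if np.max(np.abs(x_new - x) / x_new) < 1e-4:   # convergence test
        break
    x = x_new
print(iteration, x[0], x[100])
```

The edge cells stay at their optically thin neutral fraction while the slab centre saturates near unity, and the loop converges in a handful of iterations, mirroring the behaviour described above.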

### 3.4 Reverse Ray Tracing - Advantages

Reverse ray tracing presents some important advantages in the post-reionisation regime, related to the dramatic differences in the character of the radiation field during and after reionisation. In the following sub-sections, we focus on questions that typically occur in ray tracing schemes attempting to model the post-reionisation cosmological UV background: 1) Where should rays originate and terminate? 2) How should rays be traced to uniformly sample the gas elements? 3) How much spectral resolution can be attained? and 4) What optimisations can be applied? We will show that the answers to these questions are simpler in reverse ray tracing schemes than in forward versions.

#### 3.4.1 Where Should Rays Originate?

In forward ray tracing schemes, the UV background at a given point in space is the result of transporting photons along rays originating at sources of radiation. These sources can be divided into two categories: those inside the box (internal sources) and those outside the box (external sources). Modelling the UV background from first principles using only internal sources imposes stringent conditions on the size of the box. A reasonable absolute minimum scale is one mean free path of photons at the Lyman limit, $\lambda_{\rm mfp}$. Prochaska_09 find that this mean free path is several tens of pMpc at $z \simeq 3.6$, decreasing towards higher redshift. At $z \approx 3$ this is larger than the box size of the majority of cosmological gas dynamics simulations able to resolve any galaxy formation processes, and it becomes larger at lower redshifts. For this reason, forward ray tracing schemes inevitably must rely on tracing many rays from external sources in addition to internal ones.

The contribution from external sources is usually modelled by casting rays inward from the walls. These rays are then followed until a given fraction of their initial photon content is absorbed. The most straightforward method of choosing ray origins is a pseudo-random sampling of points on the walls. The appropriate flux is usually determined by requiring that the photoionisation rate in optically thin gas match an input value. This method has two drawbacks. The first, discussed in the next sub-section, is the difficulty of uniformly sampling an adaptive density field. The second is the artificial gradient in photoionisation rate that will develop between regions near the centre of the box, where the background rays will be most attenuated, and regions near the walls, where the background will be at its optically thin strength. This is what we term the Galilean invariance problem. It adds ambiguity to any calibration of flux from the walls and makes the calibration dependent on the box size. In general, tracing rays from the walls leads unavoidably to a loss of symmetry in the UV background and difficulty in flux calibration. These problems can be alleviated somewhat by tracing rays from randomly selected under-dense regions within the box (Maselli_05), but not eliminated entirely. Some authors (e.g. McQuinn_2011) have circumvented this problem by excising small regions around halos and ray tracing them individually; however, this neglects filamentary gas between halos.

In a reverse ray tracing method, these problems are not present. All symmetries of the UV background are maintained and the photoionisation rate at a given particle position is independent of any properties of the box. In addition, the intensity of the UV background in optically thin gas is equal to its value in the standard approximation (optically thin gas, uniform field, photo-collisional equilibrium) by construction. This conveniently allows for direct comparisons between results with and without radiative transfer with no need for calibrating fluxes.

#### 3.4.2 The Uniform Sampling Problem

Modern gas dynamical simulations discretise density fields with adaptive resolution elements. This typically leads to better spatial resolution (i.e. smaller resolution elements) in regions of higher gas density. A radiative transfer approach which relies on rays uniformly cast from the simulation walls leads to poor sampling of the resolution elements. Specifically, the overdense self-shielded regions where radiative transfer is most important are undersampled, and the underdense optically thin regions where radiative transfer is unnecessary are oversampled. A scheme in which rays split and merge (Wise_11) can maintain a constant number of rays intersecting each resolution element but comes at the cost of increased overhead. On the other hand, the reverse ray-tracing algorithm samples each particle with the same number of rays by construction. This helps focus computational power where it is needed.

#### 3.4.3 Spectral Resolution

In a forward ray tracing scheme, the radiation field at a particle position is the summation of contributions from different rays. For monochromatic spectra, all attenuation information can be characterised by the number of photons in a ray. To accurately handle multi-frequency spectra, one must substantially increase the number of monochromatic rays traced or carry spectral information along with each ray. Similarly, for moment methods, each frequency group must be treated separately by the solver. The commonly used grey approximation, i.e. an optimal monochromatic choice, can lead to order-of-magnitude errors in the neutral fraction (see Figure 3 below). Additionally, Mirocha_12 have shown that at least four frequency bins between 13.6 and 100 eV are required to obtain converged results when modelling neutral fractions in HII regions produced by smooth spectra such as thermal emission and power laws. To include the effects of sharp spectral features, for example the helium Lyman-α recombination line present in the UV background spectrum of HM_12 (see Figure 10), more frequency bins are required.

In reverse ray tracing, the rays simply sample the optical depth along a particular direction. Consequently, arbitrarily complex input spectra can be numerically integrated over frequency to determine a shielded photoionisation rate, and the neutral fraction of a particle can be updated based on full knowledge of the amplitude and spectral shape of the local radiation field. When using 100 frequency bins in Urchin, the numerical integration over the UV background spectrum consumes a negligible fraction of the computing time. This level of spectral resolution is comparable to the resolution with which modern spectral models are defined. For reference, the recent model spectrum of HM_12 contains 150 samples between one and ten Rydbergs. This allows for studies of spectral hardness and eases the inclusion of sharp features such as recombination lines into the UV background spectrum. As an added bonus, knowledge of the optical depth in all directions around a particle during the update allows one to choose the appropriate ‘case A’ (recombinations to all levels) or ‘case B’ (recombinations to all but the ground state) recombination rates in the on-the-spot approximation.
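The spectral-resolution point can be illustrated with a sketch (assumed power-law $J_\nu \propto \nu^{-1.5}$ and $\sigma_\nu \propto \nu^{-3}$ scaling, not the HM_12 spectrum): given the optical depth at the Lyman limit along a ray, the shielded rate follows from a bin-by-bin attenuation.

```python
import numpy as np

H_PLANCK = 6.626e-27
NU_HI = 3.29e15        # Hz
SIGMA_HI = 6.30e-18    # cm^2

nu = np.logspace(np.log10(NU_HI), np.log10(10.0 * NU_HI), 100)  # ~100 bins
J_nu = (nu / NU_HI) ** -1.5
sigma_nu = SIGMA_HI * (nu / NU_HI) ** -3

def gamma_shielded(tau_912):
    """Photoionisation rate integral with attenuation exp(-tau_nu),
    where tau_nu = tau_912 * (nu/nu_HI)^-3 for pure HI opacity."""
    tau_nu = tau_912 * (nu / NU_HI) ** -3
    f = 4.0 * np.pi * J_nu * sigma_nu * np.exp(-tau_nu) / (H_PLANCK * nu)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(nu)))

ratio = gamma_shielded(5.0) / gamma_shielded(0.0)
print(ratio)
```

For $\tau_{912} = 5$ this gives a shielded-to-thin ratio of roughly 0.1, whereas a monochromatic (grey) treatment at the Lyman limit would give $e^{-5} \approx 0.007$: the hardening of the transmitted spectrum matters at exactly the order-of-magnitude level discussed above.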

#### 3.4.4 Optimisations

The reverse ray tracing approach also lends itself to some important optimisations that are not easy to implement in forward ray tracing techniques. The most expedient is simply avoiding radiative transfer where it is unnecessary. As discussed above, particles that are not self-shielded and not in proximity zones are treated correctly in the uniform and optically thin limit. When looping over the particles in reverse ray tracing schemes, those that satisfy this criterion can be skipped. Identifying these particles is implementation dependent; however, in cosmological simulations, the majority of post-reionisation gas does not need to participate in radiative transfer. We will discuss our technique for identifying these particles in the next section.

There is also a sense of ray locality and ray independence inherent in the method that can be useful in optimisations. Any sub-volume in a box can be treated independently of any other as long as a buffer of thickness $L_{\rm ray}$ is included around the sub-volumes. In addition, each ray can be made independent during a single iteration by waiting to update the neutral fractions of particles until shielded photoionisation rates have been calculated for all of them. This strategy may increase the number of iterations necessary for convergence but would make each iteration much faster.

## 4 Implementation Details

Urchin applies a specified radiation field to a given density field to determine the properties of self-shielded regions. Spatially uniform models of the cosmological UV background, which provide $J_\nu$, are publicly available (for example HM_12)^{8}^{8}8http://www.ucolick.org/~pmadau/CUBA. The main parameters in Urchin are the frequency range of ionising photons included in the calculation, the number of rays cast per particle, $N_{\rm ray}$, their proper length, $L_{\rm ray}$, the criterion for choosing between case A and case B recombination rates, and the convergence criterion. The choice of spectrum and frequency range determines the optically thin photoionisation rate, $\Gamma^{\rm thin}$, from Eq. (2). Our default choices of $N_{\rm ray}$ and $L_{\rm ray}$ give converged numerical results at the redshifts considered in this work. In addition, we use case B recombination rates for particles with large effective optical depth.

Initially the neutral fraction of each particle is set to its optically thin value, $x^{\rm thin}$. This allows for the calculation of the HI column density along each ray cast from a particle. Next we calculate a shielded photoionisation rate, $\Gamma^{\rm shield}$, for each particle and an effective optical depth, $\tau_{\rm eff}$, using Eq. (3). Particles with small $\tau_{\rm eff}$ maintain $x = x^{\rm thin}$ for all subsequent iterations, while all other particles are updated each iteration. We continue to loop over the particles which initially had significant $\tau_{\rm eff}$ until the neutral fractions converge. For typical parameter choices, we find that more than 99 per cent of particles have converged neutral fractions after five iterations in a cosmological run, with the remaining 1 per cent requiring tens of iterations. In our current implementation, we update the neutral fraction of each particle as soon as its self-shielded photoionisation rate has been computed. This update scheme works well provided the list of particles is traversed from high to low density; however, there is no fundamental restriction on when the particles should be updated. Implementations which calculate $\Gamma^{\rm shield}$ for each particle before performing any updates of the neutral fractions are independent of the order in which the particles are looped over, which could be useful in parallel strategies.

### 4.1 Reverse Ray Tracing in SPH

Although Urchin can be applied to many types of density field discretisations (particles, grids, unstructured meshes), in this section we discuss our implementation in Smoothed Particle Hydrodynamics (SPH, Gingold_77; Lucy_77).

#### 4.1.1 Column Densities in SPH

Our calculation of SPH column densities makes use of several improvements over the algorithm described in Altay_08. In the SPH formalism, the number density of HI at any location $\mathbf{r}$ can be calculated using the scatter approach as

$$n_{\rm HI}(\mathbf{r}) = \frac{1}{m_{\rm p}} \sum_j m_j\, x_j\, W(q_j, h_j), \qquad (4)$$

where $m_{\rm p}$ is the mass of a proton, $m_j$ is the mass of particle $j$ in hydrogen, $x_j$ is its neutral fraction, $q_j = |\mathbf{r} - \mathbf{r}_j|/h_j$ is the distance between particle $j$ and the point $\mathbf{r}$ in units of the particle’s smoothing length $h_j$, and $W$ is the SPH smoothing kernel. The column density through an SPH distribution along a path parameterised by $l$ can then be written as:

$$N_{\rm HI} = \int n_{\rm HI}\, \mathrm{d}l = \frac{1}{m_{\rm p}} \sum_j m_j\, x_j \int W(q_{j,l}, h_j)\, \mathrm{d}l, \qquad (5)$$

where the summation is over particles with smoothing volumes intersected by the path, and the subscripts in the variable $q_{j,l}$ indicate that it is a function of the summation index $j$ and the distance along the path $l$. In this way, the calculation of optical depths is reduced to determining which particles are intersected by a ray and evaluating line integrals through the smoothing kernel $W$.
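As a sketch of how Eqs (4) and (5) translate into code (a toy, not Urchin's implementation), the following accumulates $N_{\rm HI}$ along a ray from per-particle kernel line integrals. The Gaussian width $\sigma = h/3$ and all particle values are made-up demo inputs, and the chord integral is done numerically here rather than analytically:

```python
import numpy as np

M_P = 1.673e-24  # g, proton mass

def kernel_line_integral(b, h, npts=513):
    """Numerical chord integral of a 3-D Gaussian kernel at impact
    parameter b (b, h in cm); sigma = h/3 is an assumed relation."""
    sigma = h / 3.0
    norm = (2.0 * np.pi * sigma ** 2) ** -1.5
    l = np.linspace(-h, h, npts)
    w = norm * np.exp(-(b * b + l * l) / (2.0 * sigma ** 2))
    return float(np.sum(0.5 * (w[1:] + w[:-1]))) * (l[1] - l[0])

def column_density(origin, direction, pos, h, m_H, x_HI):
    """Eq. (5): sum over particles whose smoothing spheres the ray hits."""
    N = 0.0
    for j in range(len(h)):
        rel = pos[j] - origin
        s = np.dot(rel, direction)       # distance along the ray
        b2 = np.dot(rel, rel) - s * s    # impact parameter squared
        if s > 0.0 and b2 < h[j] ** 2:   # ray-sphere intersection test
            N += (m_H[j] * x_HI[j] / M_P) * kernel_line_integral(
                np.sqrt(b2), h[j])
    return N

# One fully neutral particle; ray through its centre vs. an offset ray:
pos = np.array([[5.0e21, 0.0, 0.0]])
h, m_H, x_HI = np.array([3.0e21]), np.array([1.0e33]), np.array([1.0])
ray = np.array([1.0, 0.0, 0.0])
N_centre = column_density(np.zeros(3), ray, pos, h, m_H, x_HI)
N_offset = column_density(np.array([0.0, 1.0e21, 0.0]), ray, pos, h, m_H, x_HI)
print(N_centre, N_offset)
```

The column falls off with impact parameter as the kernel profile dictates, which is the behaviour the analytic chord integral in the next subsection delivers without numerical quadrature.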

The Gaussian function has many properties that make it a natural choice for the smoothing kernel; however, its lack of compact support leads to an impractical sum over all particles in the simulation volume. To remedy this, many SPH codes make use of spline functions with an approximately Gaussian shape, such as the cubic spline (MonLat85),

$$W(q, h) = \frac{1}{\pi h^3} \begin{cases} 1 - \frac{3}{2}q^2 + \frac{3}{4}q^3 & 0 \leq q \leq 1 \\ \frac{1}{4}\left(2 - q\right)^3 & 1 < q \leq 2 \\ 0 & q > 2. \end{cases} \qquad (6)$$

In Urchin we use a truncated Gaussian kernel that allows us to obtain line integrals at a given impact parameter analytically. A normalised Gaussian function centred at the origin is given by

$$G(r) = \left(2\pi\sigma^2\right)^{-3/2} \exp\left(-\frac{r^2}{2\sigma^2}\right), \qquad (7)$$

where $r$ is the distance from the origin and $\sigma^2$ is the variance. We truncate the kernel at $r = h$, fix $\sigma$ in terms of the smoothing length $h$, and demand that the kernel satisfy the condition $\int_0^h W(r, h)\, 4\pi r^2\, \mathrm{d}r = 1$ to obtain the normalisation $\mathcal{N}$:

$$W(r, h) = \begin{cases} \mathcal{N} \exp\left(-\dfrac{r^2}{2\sigma^2}\right) & r \leq h \\ 0 & r > h, \end{cases} \qquad (8)$$

where

$$\mathcal{N}^{-1} = \left(2\pi\sigma^2\right)^{3/2} \left[\mathrm{erf}\left(\frac{h}{\sqrt{2}\,\sigma}\right) - \sqrt{\frac{2}{\pi}}\, \frac{h}{\sigma}\, \exp\left(-\frac{h^2}{2\sigma^2}\right)\right], \qquad (9)$$

and the error function is defined as,

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, \mathrm{d}t. \qquad (10)$$

The column density through such a Gaussian kernel at impact parameter $b$, between the path limits $l_1$ and $l_2$, is then

$$\int_{l_1}^{l_2} W\, \mathrm{d}l = \mathcal{N}\, \sigma \sqrt{\frac{\pi}{2}}\, \exp\left(-\frac{b^2}{2\sigma^2}\right) \left[\mathrm{erf}\left(\frac{l_2}{\sqrt{2}\,\sigma}\right) - \mathrm{erf}\left(\frac{l_1}{\sqrt{2}\,\sigma}\right)\right]. \qquad (11)$$

We use this kernel to calculate the HI column densities along each ray using Eq. (5). Incidentally, the fact that the Gaussian kernel can be decomposed into a product of three 1-D functions makes it useful for smoothing SPH particles onto 2-D or 3-D grids.
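A quick numerical consistency check of the truncated Gaussian kernel can be made by integrating the chord integrals over all impact parameters inside the truncation sphere, which must recover unit kernel mass. The choice $h = 3\sigma$ below is an arbitrary assumption for the demo, not the relation used in the code:

```python
import math

def norm(sigma, h):
    """Normalisation of a 3-D Gaussian truncated at radius h."""
    t = h / (math.sqrt(2.0) * sigma)
    frac = math.erf(t) - math.sqrt(2.0 / math.pi) * (h / sigma) * math.exp(-t * t)
    return (2.0 * math.pi * sigma ** 2) ** -1.5 / frac

def line_integral(b, l1, l2, sigma, h):
    """Analytic chord integral at impact parameter b between l1 and l2."""
    s2 = math.sqrt(2.0) * sigma
    return (norm(sigma, h) * sigma * math.sqrt(math.pi / 2.0)
            * math.exp(-b * b / (2.0 * sigma ** 2))
            * (math.erf(l2 / s2) - math.erf(l1 / s2)))

# Sum 2*pi*b*db * (chord integral) over the truncation sphere:
sigma, h = 1.0, 3.0
total, db = 0.0, 1.0e-3
b = 0.5 * db
while b < h:
    l_max = math.sqrt(h * h - b * b)  # chord stays inside the sphere
    total += 2.0 * math.pi * b * line_integral(b, -l_max, l_max, sigma, h) * db
    b += db
print(total)  # ≈ 1
```

Because the normalisation is constructed so that the kernel integrates to unity inside $r \leq h$, the cylindrical sum of chord integrals reproduces 1 to the accuracy of the quadrature.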

#### 4.1.2 Self-contribution to self-shielding in SPH

When passing from a continuous density field representation to a discrete one, Eq. (23) for the equilibrium neutral fraction at the point $\mathbf{r}$ becomes partially implicit: in the discrete case, the neutral fraction of a particle depends on itself through the particle’s own contribution to the optical depth along a ray. This is a generic problem of discretisation and has been discussed previously in Abel_99 and Mellema_06.

However, when the density field is represented with SPH particles, further complications arise due to the weighted sum over neighbours. For example, the HI density at location $\mathbf{r}$ depends on the neutral fractions of all particles that appear in the sum of Eq. (4). This makes a straightforward scheme that updates neutral fractions based on $\Gamma^{\rm shield}$ calculated at the centre of an SPH particle unstable. The reason for this instability is illustrated in the top part of Figure 1. Consider three SPH particles which are initially optically thin but are located in a region that will eventually become self-shielded. Rays traced outward (in any direction) from each particle will probe optical depth contributed by all three. This will in turn cause the neutral fraction of each particle to be increased. In the next iteration, each ray will find increased optical depth and each neutral fraction will increase again. This process will continue and eventually spread to adjacent particles, causing unphysical growth of neutral regions. The cause of this instability is the multi-valued relationship between points in space and resolution elements in SPH. As a counterexample, consider the uniform grid density field represented in the bottom part of Figure 1. This discretisation has a single-valued relationship between points in space and resolution elements. In this case, changes in the neutral fraction of each element do not necessarily change the optical depth encountered by each ray. For example, an increase in the neutral fraction of element 1 has no effect on the optical depth encountered by rays traced from elements 2 or 3. This prevents the instability from occurring.

We resolve this numerical artefact in SPH as follows. The particles intersected by each ray are split into two distinct groups, labelled ‘near’ and ‘far’, with $\tau_\nu = \tau_\nu^{\rm near} + \tau_\nu^{\rm far}$. The near particles of particle $i$ are those that contribute to $i$’s (neutral) density in Eq. (4), i.e. the particle’s neighbours. All other particles that contribute to the sum in Eq. (5) are labelled ‘far’. The numerical instability only involves near particles; therefore we can treat the far particles with the algorithm described in §3. We model the near particles as a uniform slab with the same temperature and density as particle $i$. The thickness of the slab is quantified by the total hydrogen optical depth of the near particles at the Lyman Limit, defined as $\tau_0^{\rm near} \equiv N_{\rm H}^{\rm near}\, \sigma_{\nu_{\rm HI}}$. The column density $N_{\rm H}^{\rm near}$ is calculated as in Eq. (5), except that all of the neutral fractions are set to unity.

The radiation incident onto the slab is determined by the user-supplied spectrum and the optical depth through the far particles. For each ray, this incoming flux provides a photoionisation rate at the surface of the slab. We then solve for the ionisation structure in the slab and associate the photoionisation rate at the bottom of the slab with the contribution from that ray to the total shielded photoionisation rate of the particle. In the case of monochromatic radiation, there is an analytic solution for the photoionisation rate as a function of depth into the slab which can be used directly (see Appendix C.4). For polychromatic spectra we tabulate the solution as a function of four variables,
^{9} The optical depth is a function of frequency and not a scalar like the other parameters; however, its full shape is determined by a single evaluation, for example at the Lyman limit.

The first variable is the ratio of the amplitude of the incident radiation field to the density of the slab; the second determines the incident spectrum (i.e. how much the user-supplied spectrum has been hardened before entering the slab); the third determines the recombination and collisional ionisation rates; and the fourth is the thickness of the slab. In both the analytic and the lookup-table case, we label this solution S and note that several of these parameters take different values for each of the rays traced from a particle. In summary, the total shielded photoionisation rate for a particle is computed in the following way. We trace the rays and calculate,

Γ_slab,i = ∫ (4π J_ν / hν) σ_ν exp(−τ_far,i(ν)) dν,    (12)

Γ_shield,i = S( Γ_slab,i / n_H, τ_far,i, T, τ_near,i ),    (13)

Γ_shield = (1/N_ray) Σ_i Γ_shield,i.    (14)

This whole process is illustrated in Fig. 2. The determination of the shielded photoionisation rate has no dependence on the neutral fraction of the particle in question or any of its neighbours, and therefore the solution is numerically stable. However, it requires extra computational effort to evaluate the slab solution and introduces errors due to the lack of interaction between different near slabs. In the following section we will quantify these errors.
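The per-ray flow of Eqs. (12)-(14) can be sketched as follows. This is a minimal illustration, not the Urchin implementation: each ray is reduced to a single far column density, and the tabulated slab solution S is replaced by a crude grey-attenuation stand-in.

```python
import math

SIGMA_LL = 6.30e-18   # cm^2, HI cross-section at the Lyman limit

def shielded_gamma(far_columns, gamma_thin, tau_near):
    """Average the per-ray shielded rates (the structure of Eqs. 12-14).

    `far_columns` holds one far HI column density per traced ray.  The
    tabulated slab function S is replaced here by a crude grey stand-in:
    attenuation through half of the near slab.
    """
    total = 0.0
    for n_far in far_columns:
        gamma_slab = gamma_thin * math.exp(-SIGMA_LL * n_far)   # rate at slab surface
        gamma_bottom = gamma_slab * math.exp(-0.5 * tau_near)   # stand-in for S
        total += gamma_bottom
    return total / len(far_columns)   # average over the rays
```

Because each ray is handled independently, the result depends only on far columns and the fixed near-slab opacity, which is what makes the scheme stable.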

## 5 Tests and Verification

In this section, we present several tests performed in order to validate Urchin. We begin with simple test cases with a known analytic solution, and end with more realistic tests that involve gaseous galactic halos drawn from a cosmological simulation. For those tests in which an analytic solution is not available, we compare the Urchin results to those of a straightforward numerical solver. After we have verified this solver against analytic solutions, we will simply refer to its results as the analytic solution.

### 5.1 Test 1: Analytic Slab Solution

This test involves plane-parallel radiation incident from one side onto a slab of hydrogen gas of finite thickness, uniform density, and fixed uniform temperature. The surface of the slab is coincident with the z = 0 plane and the bulk extends in the positive z direction. In the case of monochromatic radiation, this problem has an analytic solution which we derive in Appendix C.4. The purpose of this test is to verify the numerical solver and to illustrate the dependence of the solution on the assumed spectrum of the incoming radiation.

The equilibrium neutral fraction as a function of depth, x(z), is obtained by dividing the slab into many thin slices perpendicular to the z-axis. Starting from the irradiated surface and working downwards, we solve for the neutral fraction in one slice at a time using the following algorithm: 1) determine the HI optical depth above the current slice; 2) calculate an attenuated photoionisation rate in the slice; 3) determine the neutral fraction in the slice by plugging the attenuated rate into an analytic solution (Eq. 23). To avoid errors due to finite slice width, we choose the number of slices such that each has a total hydrogen optical depth below unity. This guarantees they will always be optically thin when considered individually. The numerical parameters of this test are a slab thickness in pkpc, a density equal to 500 times the cosmic mean at redshift 3, and a fixed temperature in K. We compare the analytic solution to that obtained by the numerical solver in Fig. 3 (pink line labelled grey versus green symbols): they agree very well.
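The slice-by-slice sweep can be sketched as below. The grey cross-section, recombination rate, and slab parameters are illustrative assumptions rather than the values used in the paper, and collisional ionisation is neglected so that each slice reduces to a quadratic.

```python
import math

SIGMA = 6.30e-18   # cm^2, grey HI cross-section (assumed)
ALPHA = 4.0e-13    # cm^3 s^-1, recombination rate (assumed, ~10^4 K)

def equilibrium_x(gamma, n_h):
    """Smaller root of ALPHA*n_h*(1-x)^2 = gamma*x (collisions neglected)."""
    b = 2.0 * ALPHA * n_h + gamma
    return (b - math.sqrt(b * b - 4.0 * (ALPHA * n_h) ** 2)) / (2.0 * ALPHA * n_h)

def sweep_slab(gamma0, n_h, length_cm, n_slices):
    """March from the irradiated surface downwards: steps 1-3 in the text."""
    dz = length_cm / n_slices
    tau, xs = 0.0, []
    for _ in range(n_slices):
        x = equilibrium_x(gamma0 * math.exp(-tau), n_h)  # attenuated rate -> x
        xs.append(x)
        tau += SIGMA * n_h * x * dz                      # add slice HI opacity
    return xs

xs = sweep_slab(gamma0=1e-12, n_h=1e-2, length_cm=3.1e23, n_slices=30000)
```

The slice count is chosen so that even a fully neutral slice stays below unit optical depth, mirroring the criterion in the text.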

An advantage of the reverse ray tracing approach is the high fidelity with which the spectrum of the ionising background can be sampled. This is non-trivial in forward ray tracing schemes and many rely on a simplified spectrum (see discussion in Mirocha_12). We can use the numerical solver to quantify the accuracy of such approximations. The slab parameters were chosen in order to produce a fully neutral region on the far side of the slab, and a correspondingly high total column density, when the redshift-3 HM01 (HM01) UV background is incident. In what follows, we consider the specific intensity of the HM01 UV background at redshift 3 between the Lyman limit and an upper frequency cutoff. In Figure 3 we compare the HM01 solution with three different monochromatic approximations: i) 1 Rydberg photons with a flux that produces the same (optically thin) number density of ionising photons as HM01, ii) 1 Rydberg photons with a flux that produces the same (optically thin) photoionisation rate as HM01, and iii) the grey approximation, which reproduces both the number density of photons and the photoionisation rate. Photons in the grey approximation have an energy of 19.2 eV.

In all cases the monochromatic solutions transition to fully neutral more abruptly than the true HM01 solution. This is due to the photons having a fixed photoionisation cross-section. The spread in frequencies in the HM01 spectrum smooths the transition from highly ionised to fully neutral. In the worst case, this can cause an error of several orders of magnitude in the neutral fraction. The monochromatic spectrum normalised to the same photo-ionisation rate (yellow line) recovers the correct neutral fraction in the optically thin region but underestimates the depth of the ionised region by almost 100 pkpc. The monochromatic spectrum with the same number density of ionising photons (olive line) underestimates the neutral fraction in the optically thin region by dex, but recovers the location of the ionisation front to better than 10 kpc. The grey approximation (pink line) recovers the ionised fraction in the optically thin region and the location of the ionisation front, but still has a maximum error of dex.
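The grey photon energy can be recovered numerically. As a sketch, we stand in for HM01 with a ν^-1 power law between 1 and 4 Rydberg and take σ ∝ ν^-3 (both assumptions of this illustration, not the paper's inputs); matching both the photon number density and the photoionisation rate then fixes the grey frequency, and the result lands close to the 19.2 eV quoted above.

```python
import math

RYD_EV = 13.606     # eV per Rydberg
N_STEPS = 100000

def grey_energy_ev(slope=-1.0, nu_min=1.0, nu_max=4.0):
    """Grey photon energy for J_nu ~ nu**slope with sigma ~ nu**-3.

    The grey cross-section is the photon-number-weighted mean of sigma;
    matching both the photon density and the photoionisation rate then
    fixes the frequency via sigma(nu_grey) = <sigma>.
    """
    dnu = (nu_max - nu_min) / N_STEPS
    num = den = 0.0
    for i in range(N_STEPS):
        nu = nu_min + (i + 0.5) * dnu     # midpoint rule in nu (Lyman-limit units)
        w = nu ** slope / nu              # photon number weight: J_nu / (h nu)
        num += w * nu ** -3.0 * dnu       # weighted cross-section (threshold units)
        den += w * dnu
    sigma_grey = num / den
    return RYD_EV * sigma_grey ** (-1.0 / 3.0)
```

For the assumed ν^-1 spectrum this evaluates to about 19.6 eV; the full HM01 shape shifts it slightly.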

The dependence of the solution on the spectrum of radiation illustrates the need for accurate multi-frequency treatments. An advantage of Urchin is that it can treat arbitrary spectral shapes accurately with approximately 100 frequency bins. In addition, the full attenuated ionising spectrum at a particle position is known when neutral fractions are calculated. For completeness we examine several other frequently used approximations to the HM01 spectrum in Appendix C.2, and various fitting formulae used to approximate the hydrogen photoionisation cross-section in Appendix C.1.

### 5.2 Test 2: Uniform Slab - Radiation Incident from One Side

In this test we compare Urchin solutions to those of the numerical solver. To this end, we create a set of uniform slabs with the same geometry as in Test 1, but vary the volume densities such that the projected HI column densities through the slabs cover the range over which self-shielding becomes important. To create SPH realizations of the uniform density fields we generate glass-like particle distributions at three resolutions (hereafter labelled N16, N32, and N64). To model the plane-parallel radiation in Urchin, we trace a single ray from each particle towards the irradiated surface of the slab. We calculate column densities by projecting all SPH particles onto a plane and measuring the mean column on a fine grid of 2048 pixels. Similar projections were used in Altay_2011.

In the top panel of Figure 4, we compare the numerical solver's solutions (solid lines) to those produced by Urchin (dashed lines) for slabs with different densities (different colours). The Urchin solutions are from the lowest (N16) resolution SPH density fields. Urchin faithfully follows the dependence of neutral fraction on depth for all models, including those where the gas becomes mostly neutral. The bottom panel of the figure quantifies the errors in neutral hydrogen column densities calculated from the SPH realizations (solid lines). Errors are below 0.05 dex at low resolution (N16, blue line), and improve with increasing resolution. The dashed lines quantify the Urchin errors in the case of monochromatic radiation, for which the analytic solution does not require the construction of the interpolation table discussed in §4.1.2. The errors are slightly larger here as the ionisation front becomes very steep when the radiation is monochromatic. However, this test demonstrates that our method of splitting the optical depth into a contribution from near and far particles works well for single-ray applications.

### 5.3 Test 3: Uniform Slab - Radiation Incident from Two Sides

This test is identical to Test 2, except we irradiate the slabs from both sides. To obtain the analytic solution, we modify the numerical solver as follows. First we initialise the neutral fractions of all slab slices to their optically thin values, then we loop over the slices, calculating the optical depth both above and below a given slice. These optical depths are used to calculate a photoionisation rate and hence a new value for the neutral fraction. We continue iterating over the slices until the neutral fraction in each slice has converged to one part in ten thousand. The solution is symmetric with respect to the centre of the slab. To model the plane-parallel radiation in Urchin, we trace two rays from each particle, one towards each face of the slab.
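A minimal version of the modified solver might look like the following; the rates and slab parameters are illustrative assumptions, and a simple Jacobi-style update stands in for whatever sweep order the actual solver uses.

```python
import math

SIGMA = 6.30e-18   # cm^2, grey HI cross-section (assumed)
ALPHA = 4.0e-13    # cm^3 s^-1, recombination rate (assumed)

def equilibrium_x(gamma, n_h):
    """Neutral fraction solving ALPHA*n_h*(1-x)^2 = gamma*x."""
    b = 2.0 * ALPHA * n_h + gamma
    return (b - math.sqrt(b * b - 4.0 * (ALPHA * n_h) ** 2)) / (2.0 * ALPHA * n_h)

def solve_two_sided(gamma0, n_h, length_cm, n_slices=2000, tol=1e-4):
    """Iterate slice neutral fractions until converged to `tol`."""
    dz = length_cm / n_slices
    x = [equilibrium_x(2.0 * gamma0, n_h)] * n_slices   # optically thin start
    while True:
        dtau = [SIGMA * n_h * xi * dz for xi in x]      # per-slice HI opacity
        total = sum(dtau)
        x_new, tau_up = [], 0.0
        for i in range(n_slices):
            tau_dn = total - tau_up - dtau[i]           # opacity below this slice
            g = gamma0 * (math.exp(-tau_up) + math.exp(-tau_dn))
            x_new.append(equilibrium_x(g, n_h))
            tau_up += dtau[i]
        err = max(abs(a - b) / b for a, b in zip(x, x_new))
        x = x_new
        if err < tol:
            return x

x = solve_two_sided(gamma0=1e-12, n_h=1e-2, length_cm=3.0e21)
```

For a mostly ionised slab this converges in a handful of iterations and preserves the symmetry about the mid-plane by construction.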

As in the previous test, we compare the Urchin (dashed lines) and analytic (solid lines) solutions for the neutral fraction in the top panel of Fig. 5. The goal of this test is to examine the accuracy of the near/far split described in §4.1.3 when multiple rays are used (diagrammed in panel b of Figure 2). The algorithm introduces errors in the calculation of the shielded photoionisation rate because each ray is considered independently. To illustrate this point, consider the process of calculating the rate for a particle situated in the middle of a slab whose equilibrium solution does not form a neutral core. First a ray is traced and a slab-surface photoionisation rate is calculated as in Eq. (12). This is then used as input to calculate, as in Eq. (13), the photoionisation rate at the bottom of the near slab. The error occurs because the near slab should also be irradiated from the opposite direction, as all of the gas is highly ionised. Neglecting this second contribution leads to an overestimate of the opacity of each of the near slabs and in turn to an overestimate of the neutral fraction in the slab. The errors are most severe during the transition from optically thin slabs to slabs that form a neutral core. In slabs that do form a neutral core the error is absent, as at least one ray always encounters a high optical depth.

The resulting errors on the column density through the slab are shown in the bottom panel of Fig. 5. Different colours refer to different numerical resolutions, with solid lines representing the HM01 radiation field, and dashed lines the grey approximation. The Urchin optical depth is within 0.1 dex of the analytic result at both the low- and high-column ends of the range for the highest resolution slab. These column density limits correspond to the cases where the slab is either mostly ionised everywhere, or develops a neutral core. In the intermediate regime, Urchin overestimates the neutral hydrogen column density by up to 0.15 dex at N64 resolution, and 0.25 dex at N16 resolution. The errors in the case of monochromatic radiation (dashed lines) are not significantly different, showing that the Urchin error is due to the near/far split, rather than the implementation of the look-up table for polychromatic spectra (as opposed to the analytic solution used for monochromatic spectra).

### 5.4 Test 4: Galactic Halos - Uniform UV Background

In previous sections we applied Urchin to very simple slab geometries for which we could calculate accurate analytic solutions. In this section we concentrate on the more realistic case of galactic halos. In particular we focus on halos extracted from the OWLS suite of cosmological galaxy formation simulations (Schaye_2010). This suite consists of a reference model (ref) and more than 50 variations around that reference model which explore changes to sub-grid physics and other parameters. The ref model, which we make use of here, included a model for pressure in the numerically unresolved cold interstellar medium, star formation, the timed release of 11 chemical elements by type I and type II supernovae and AGB stars, radiative cooling due to the same elements in the presence of the HM01 ionising background, and energetic feedback from supernovae (DallaVecchia_08; Schaye_08; Wiersma_09_cool; Wiersma_09_chem).

#### 5.4.1 Stacking Halos

To identify these halos we first calculate a friends-of-friends (FoF) group catalogue using the standard algorithm of Davis85 run on the dark matter particles, linking baryonic particles to their nearest dark matter particle. We then identify bound sub-structures (i.e. halos) within these FoF groups using the subfind algorithm (Springel_01; Dolag_09) and associate these halos with galaxies. Our goal is to create objects that have approximate spherical symmetry, so we can calculate self-shielding analytically, yet are representative of typical galaxy density profiles encountered in simulations. To this end, we 'stack' haloes of similar mass. First, haloes are grouped into mass bins according to their total gas mass, with the lowest mass bin edge set by a minimum of 100 gas particles and the bin width equal to 0.3 dex. We then centre all haloes in a given bin on the location of each halo's most bound particle. The mass of each particle in a stack is then adjusted such that the sum of all particle masses in the stack corresponds to the logarithmic centre of the mass bin. Finally, the smoothing lengths and densities of all particles in a stack are recalculated. These stacks, while still containing some particle noise, are now (nearly) spherically symmetric. In Figure 6 we show images of each halo stack out to a radius of 100 pkpc. The colour scale is logarithmic. We show this figure mainly to give the reader a visual impression of how the halo stacks become less spherically symmetric in the higher mass bins due to the smaller number of halos in each bin.
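The binning and mass-rescaling steps can be sketched as follows; the data layout is hypothetical, and the recentring and smoothing-length recalculation are omitted from this sketch.

```python
import math

def stack_haloes(haloes, m_min, dex_width=0.3):
    """Group haloes into log-mass bins and rescale particle masses.

    `haloes` is a list of (gas_mass, particle_masses) pairs.  Each
    stack's particle masses are rescaled so that they sum to the
    logarithmic centre of the mass bin.
    """
    stacks = {}
    for gas_mass, parts in haloes:
        k = int(math.floor(math.log10(gas_mass / m_min) / dex_width))
        stacks.setdefault(k, []).extend(parts)
    out = {}
    for k, parts in stacks.items():
        target = m_min * 10 ** ((k + 0.5) * dex_width)  # logarithmic bin centre
        scale = target / sum(parts)
        out[k] = [m * scale for m in parts]
    return out
```

The rescaling guarantees that every stack represents a single, well-defined total gas mass despite the spread of halo masses within the bin.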

#### 5.4.2 Radial Profiles

In Fig. 7 we plot radial density profiles of the stacked haloes. In each panel there are four black dashed lines. The lower horizontal line indicates the cosmic mean hydrogen density at the redshift of the simulation, the upper horizontal line indicates the star-formation density threshold used in ref, the vertical line indicates the gravitational softening length, and the diagonal line is an arbitrarily normalised power law to guide the eye. Both the total and neutral hydrogen number densities are shown with shaded regions. The two quantities are equal at small radii, causing the shaded regions in each panel to form a shape similar to the Greek letter lambda.

#### 5.4.3 Total Hydrogen Radial Profiles

Total hydrogen density profiles were calculated from the SPH stacks and are plotted as the upper blue shaded regions. The width of each region corresponds to one-sigma variations around the mean density at a given radius; this is a measure of the deviations from spherical symmetry shown in Figure 6. The shaded regions are mass-weighted averages, but for reference we also plot the volume-weighted average, i.e. the sum of all particle masses in a radial shell divided by the volume of the shell, as a green line in the bottom left panel. The volume-weighted density hugs the lower edge of the mass-weighted region. In an SPH distribution with no particle noise the two quantities would be equal; however, any clumping will increase the mass-weighted quantity relative to the volume-weighted quantity. The offset between the two at small radii indicates the level of particle noise in the spherically symmetric part of the SPH stacks, while the differences at large radii are due to clumps caused by stacking a finite number of halos. Note that this noise was mostly absent in the previous tests due to the use of glass-like distributions.

For each mass bin we construct two smooth analytic profiles by fitting a polynomial through the one-sigma variations described above (upper red shaded region). The differences between the two are only visible at the larger radii of the more massive bins. We use the point where the volume-weighted density intersects the cosmic mean to define an outer radius for each analytic profile, which is why the red shaded regions are truncated at smaller radii than the blue. This provides us with a perfectly smooth and spherically symmetric approximation to our SPH stacks. In general, the density at a given radius increases monotonically with halo mass, as does the radius of the halos. Each halo has a profile close to a single power law at intermediate radii but becomes steeper at both smaller and larger radii.

#### 5.4.4 Neutral Hydrogen Radial Profiles

To calculate neutral hydrogen profiles in spherically symmetric gas, we did the following. Each halo is divided into radial shells. From each shell, we trace rays which sample angles between the outward and inward radial directions. The photoionisation rate in each shell is calculated by summing the contribution from each ray and is then used to calculate a new neutral fraction. We loop over all shells and iterate until the neutral fraction in each shell has converged to one part in ten thousand. We find that our results are numerically converged for the adopted numbers of shells and rays. We use this procedure to calculate neutral hydrogen profiles from the two analytic density profiles, and show the results as the lower red shaded regions.
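The geometric kernel such a shell solver needs, namely the column along a ray through concentric constant-density shells, can be sketched as below; the interface is our own illustrative choice.

```python
import math
from bisect import bisect_left

def ray_column(r0, theta, edges, dens):
    """Column density along a ray through concentric constant-density shells.

    The ray starts at radius `r0` and makes angle `theta` with the
    outward radial direction; `edges` are ascending outer shell radii
    and `dens[i]` is the density inside edges[i].
    """
    c = math.cos(theta)
    b = r0 * math.sin(theta)            # impact parameter of the ray
    ts = [0.0]
    for big_r in edges:                 # ray/sphere crossing parameters
        disc = big_r * big_r - b * b
        if disc > 0.0:
            s = math.sqrt(disc)
            for t in (-r0 * c - s, -r0 * c + s):
                if t > 1e-12:
                    ts.append(t)
    ts = sorted(set(ts))
    col = 0.0
    for t1, t2 in zip(ts, ts[1:]):      # accumulate density * path length
        tm = 0.5 * (t1 + t2)
        rm = math.sqrt(max(0.0, r0 * r0 + tm * tm + 2.0 * r0 * tm * c))
        if rm < edges[-1]:
            col += dens[bisect_left(edges, rm)] * (t2 - t1)
    return col
```

Multiplying such columns by a cross-section gives the per-ray optical depths that enter each shell's photoionisation rate.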

We also use Urchin to calculate the neutral fraction of every particle in each stack and construct average profiles in the same way as we calculated the SPH density profiles. These are shown as the lower blue shaded areas. When irradiated from outside, each halo develops a characteristic ionisation structure with a neutral core, a sharp ionisation front, and an optically thin region. In the core, the gas is close to fully neutral. When the density drops sufficiently, the neutral core transitions to a steep ionisation front. At lower densities still, the ionisation profiles transition into the optically thin regime in which the neutral fraction is proportional to the density. In all cases, the Urchin solution overlaps with the analytic solution. The differences between the two are largest in the ionisation front, while Urchin matches the analytic solution in both the optically thin and optically thick regimes. This is the spherical analogue of the situation in Test 3, in which the errors were largest for intermediate column density slabs. In a spherical geometry, even with a neutral core, some rays will sample the problematic intermediate columns.

#### 5.4.5 Neutral Hydrogen Column Density Profiles

In Figure 8 we show the neutral hydrogen column density as a function of impact parameter for the halo stacks. To calculate profiles in the analytic case, we simply integrate lines of sight through the spherically symmetric shells shown in Figure 7 and tabulate the results as a function of impact parameter. We calculate these profiles for both spherical mass profiles and shade the region between the two in red. To calculate the same quantity from the SPH distributions, we project all particles in a stack onto a plane and measure the column density on a fine grid of 2048 pixels. We then bin these pixels in impact parameter and show the one-sigma variation around the mean as a shaded blue region. For density profiles as steep as these, the dominant contribution to the column along a line of sight comes from the smallest radii probed. Because of this, we can associate lines of sight in Figure 8 with the innermost radius they probe. This gives rise to a region in the plots corresponding to each of the regions described in the previous section (neutral core, ionisation front, and optically thin outskirts). In all mass bins, the Urchin solution overlaps with the analytic solution, giving us confidence that Urchin can be used to accurately model neutral hydrogen absorbers in a cosmological context.

## 6 Discussion and Conclusions

We have presented and described a new publicly available radiative transfer code called Urchin. It is optimised to model the residual neutral hydrogen in the post-reionisation Universe and relies on reverse ray tracing to avoid problems typically associated with modelling the UV background. In particular, our implementation allows for the preservation of symmetry in the input radiation field, a completely uniform sampling of gas resolution elements, a high-fidelity sampling of spectral features, and some optimisations not possible in forward ray tracing schemes.

We have validated Urchin in four sets of tests that range from comparison to analytic solutions in simple slab geometries to solving for the ionisation structure in gaseous galactic halos. The errors discussed in §5 have two root causes. One is particle resolution, and the other is the multi-valued relationship between points in space and resolution elements in SPH. The only solution to the first problem is to run simulations with more particles. The second problem prompted us to introduce the near/far split and the analytic slab solution discussed in §4. This solution solves the multi-valued problem but is restrictive in its use of plane-parallel geometry and its treatment of each ray/near-slab pair independently (i.e. without considering the other pairs). It is possible that an approach that parameterises a spherical analytic solution, using some average of the near-slab optical depths and some average of the incoming radiation from all rays as parameters, would be more successful. In addition, the analytic solution presented in Eq. 36 considers uniform density slabs, although there is a density derivative in the fully general solution. Estimating the density gradient along each ray and using it as an extra parameter in the slab solution could also prove useful in improving the solution.

In the current implementation, we have shown that solutions generated by Urchin reproduce analytic solutions in the case of density fields approximating gaseous galactic halos. We view Urchin as a first step in a series of incremental improvements to the standard treatment of the UV background. Future steps will include helium as in Altay_08, proximity zones from internal point sources, an equilibrium heating/cooling model, and non-equilibrium corrections for ionisation and heating.

## Acknowledgments

This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915. The calculations for this paper were performed on the ICC Cosmology Machine, which is part of the DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS, and Durham University.

## References

## Appendix A Urchin Parameter Choices

The main inputs expected from a user of Urchin are an optically thin spectrum and parameters determining the number of rays traced per particle and the distance each ray is traced. In some cases, the correct choices for these parameters are obvious; in others there is no unique correct choice. We will examine a few examples to clarify the situation.

Cases involving plane-parallel radiation incident onto isolated gas slabs are in the first category. The number of rays is fixed to one or two depending on whether the radiation is incident from one or both sides. In addition, the rays need to be at least as long as the slab in the direction they are traced. Once the rays have exited the slab, they will not accumulate any additional optical depth and will therefore not change the results. These choices are independent of the input spectrum. For cases in which an isotropic optically thin background is incident on isolated halos, the ray length should again be equal to the linear extent of the object in question, but the number of rays to trace for each particle is not obvious. Urchin allows a user to select from the available Healpix resolutions, for which the number of rays is set by a resolution parameter equal to a positive integer power of two. A larger number of resolution elements in the density field will require higher angular resolution to fairly sample the region surrounding each particle. In general, the appropriate angular resolution is entirely dependent on the resolution of the density field.

To date, Urchin has mostly been applied to the problem of modelling the post-reionisation UV background in periodic cosmological simulations. In this case, it is important to keep in mind that the input optically thin spectrum (for example HM_12) already includes attenuation on large scales. What we wish to model is the local attenuation in overdense regions. It is also instructive to consider what happens as the ray length is increased from zero to infinity. For zero-length rays there is no attenuation and the results are identical to the optically thin limit. As the ray length is increased, the local environment of the particle is probed and the shielded photoionisation rate begins to differ from that of the optically thin case in self-shielded regions. As it is increased further, the rays begin to probe regions of the cosmological volume that are completely uncorrelated with their starting locations. At this point, the results are equivalent to lowering the normalisation of the input optically thin spectrum.

One possible strategy for producing converged results is the following. Begin with rays which are short compared to the size of density field resolution elements. Increase the ray length in steps which are some fraction of the mean free path for the grey approximation of the input spectrum. In most realistic applications, the results will converge before the rays reach a sizeable fraction of the mean free path. Next, test for convergence in the number of rays. We note that these results will depend on the maximum frequency one includes in the input spectrum and on the particular statistic one is interested in. Our suggested values of the ray length (in pkpc) and of 12 rays per particle result from our application of the above algorithm to the HI column density distribution function when truncating the UV background spectrum at four Rydbergs (see Altay_2011). As with all numerical results, the surest way to have confidence in any answer produced by Urchin is to perform convergence tests.
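The suggested procedure can be sketched as a loop over ray lengths; `statistic` is a hypothetical stand-in for a full Urchin run reduced to a single number, not a real Urchin interface.

```python
def converge_ray_length(statistic, l0, mfp_grey, rtol=0.01, max_steps=100):
    """Increase the ray length until `statistic` stabilises to `rtol`.

    `statistic` is a stand-in callable returning the quantity of
    interest (e.g. an HI column density distribution amplitude) for a
    given ray length; steps are a fixed fraction of the grey mean free path.
    """
    length = l0
    prev = statistic(length)
    for _ in range(max_steps):
        length += 0.1 * mfp_grey          # step as a fraction of the mean free path
        cur = statistic(length)
        if abs(cur - prev) <= rtol * abs(prev):
            return length, cur
        prev = cur
    raise RuntimeError("not converged; inspect the statistic or raise max_steps")

# usage with a toy statistic that saturates once rays span the absorber
length, value = converge_ray_length(lambda l: min(l, 1.0), l0=0.5, mfp_grey=1.0)
```

The same loop, re-run over the number of rays at fixed length, covers the second convergence test mentioned above.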

## Appendix B Comparison Between Smoothing Kernels

In Fig. 9, we compare the traditional spline kernel (see Eq. 6) and the truncated gaussian used in Urchin (see Eq. 8). We note that the gaussian has dropped to less than 1% of its maximum value where it is truncated. The normalisation of the two kernels is identical by construction, and their shapes are very similar, with differences between the two always smaller than 3% of the maximum value. Using the truncated gaussian as opposed to the cubic spline slightly increases the contribution from the core and the wings of the kernel while slightly decreasing the contribution from intermediate radii. While good SPH kernels possess certain desirable qualities, for example compact support, symmetry, and smoothness (see Price_05), their specific shape is somewhat arbitrary.

## Appendix C Analytic Hydrogen Ionization Solutions

### C.1 Photoionisation Cross-Section

The photoionization cross section for hydrogenic atoms in the ground state can be expressed analytically as,

σ_ν = (2^9 π^2 / (3 e^4)) α a_0^2 Z^-2 (ν_1/ν)^4 exp(4 − 4 arctan(ε)/ε) / (1 − exp(−2π/ε)),  ε ≡ (ν/ν_1 − 1)^(1/2)    (15)

where α is the fine-structure constant, a_0 is the Bohr radius, Z is the atomic number of the atom, ν_1 is the threshold frequency, and e is the base of the natural logarithm. Two fitting formulae are commonly used in the literature. The first is a power-law form, usually accompanied by a citation to 1989agna.book.....O,

σ_ν ≈ 6.30×10^-18 Z^-2 (ν/ν_1)^-3 cm^2.    (16)

This simple form can be useful for analytic treatments, but is not as accurate as the fit due to 1996ApJ...465..487V,

σ(E) = σ_0 [(x − 1)^2 + y_w^2] y^(0.5P − 5.5) (1 + (y/y_a)^(1/2))^-P,  x = E/E_0 − y_0,  y = (x^2 + y_1^2)^(1/2),    (17)

with, for HI, E_0 = 0.4298 eV, σ_0 = 5.475×10^4 Mb, y_a = 32.88, P = 2.963, and y_w = y_0 = y_1 = 0.

We explore the errors these approximations introduce into column density calculations in the next section.
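The three forms can be compared directly. The constants below are our transcription of the standard values (threshold cross-section 6.30×10^-18 cm^2, and the HI fit parameters of Verner et al. 1996) and should be checked against the original tables before serious use.

```python
import math

SIGMA0 = 6.30e-18  # cm^2, threshold cross-section (Eq. 15 prefactor)

def sigma_exact(nu_ratio):
    """Analytic ground-state hydrogenic cross-section; nu_ratio = nu/nu_1."""
    if nu_ratio < 1.0:
        return 0.0
    if nu_ratio == 1.0:
        return SIGMA0
    eps = math.sqrt(nu_ratio - 1.0)
    return (SIGMA0 * nu_ratio ** -4.0
            * math.exp(4.0 - 4.0 * math.atan(eps) / eps)
            / (1.0 - math.exp(-2.0 * math.pi / eps)))

def sigma_powerlaw(nu_ratio):
    """Power-law approximation (Eq. 16)."""
    return SIGMA0 * nu_ratio ** -3.0 if nu_ratio >= 1.0 else 0.0

def sigma_verner(energy_ev):
    """Verner et al. (1996) fit for HI (Eq. 17 with y_w = y_0 = y_1 = 0)."""
    if energy_ev < 13.6:
        return 0.0
    x = energy_ev / 0.4298
    f = ((x - 1.0) ** 2 * x ** (0.5 * 2.963 - 5.5)
         * (1.0 + math.sqrt(x / 32.88)) ** -2.963)
    return 5.475e4 * 1e-18 * f   # sigma_0 given in Mb; 1 Mb = 1e-18 cm^2
```

At four Rydberg the power law already falls noticeably below the exact expression, while the Verner fit tracks it closely.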

### C.2 Power-law approximations to the Haardt & Madau 2001 spectrum

The left panel in Figure 10 compares the shape of the HM01 (HM01) spectrum to power-law approximations for a range of spectral indices. The HM01 spectrum has features due to re-emission, such as the helium Lyman-α emission at 3 Rydberg. The effect of using such approximate spectra as opposed to the full HM01 spectrum is illustrated in the middle panel. We calculated the equilibrium neutral fraction as a function of depth for plane-parallel radiation entering a slab with a uniform hydrogen density of 500 times the cosmic mean at redshift 3 and a fixed temperature. Power-law approximations to HM01 work reasonably well, as long as the spectra are normalised to give the same number density of ionising photons as the original HM01 spectrum (dashed lines). Normalising the power-law spectra to the same photoionisation rate does not work well (solid lines). The right panel in Figure 10 compares the differences in neutral columns, at a fixed depth into the slab, between the full HM01 spectrum and these power-law approximations. The largest error occurs when the slab fails to become fully neutral, as in the flat spectrum case. The monochromatic grey approximation used in Section 5.1 (pink bar) overestimates the central HI column. The bottom two bars show the effect of approximating the photoionisation cross-section by a simple power law versus using the more accurate fit from 1996ApJ...465..487V, and of using the HM01 spectrum up to 10 Rydberg, as opposed to 4 Rydberg as we have done up to now.

### C.3 Time dependent solution of the neutral fraction

The rate of change of the hydrogen neutral fraction is determined by the rate of photoionisation Γ, collisional ionisation γ, and recombination α, as well as the number density of free electrons n_e.

dx/dt = α n_e (1 − x) − γ n_e x − Γ x,  x ≡ n_HI / n_H.    (18)

If we decompose the free electron number density into a contribution from ionised hydrogen and a contribution from all heavier elements, we can write:

n_e = (1 − x) n_H + χ n_H,  χ ≡ n_e,Z / n_H.    (19)

Substituting into the previous equation yields:

dx/dt = α n_H (1 − x)(1 − x + χ) − γ n_H x (1 − x + χ) − Γ x.    (20)

Grouping terms in powers of the neutral fraction, we can write this in the form of a Riccati equation:

dx/dt = A x^2 + B x + C,    (21)

A = (α + γ) n_H,  B = −[(2 + χ) α n_H + (1 + χ) γ n_H + Γ],  C = (1 + χ) α n_H.    (22)

The roots of the quadratic term are

x_- = [−B − (B^2 − 4AC)^(1/2)] / (2A),    (23)

x_+ = [−B + (B^2 − 4AC)^(1/2)] / (2A).    (24)

To determine which of these roots is the physical equilibrium solution we consider the case of pure hydrogen (χ = 0) in the absence of radiation (Γ = 0). In this case,

A = (α + γ) n_H,    (25)

B = −(2α + γ) n_H,    (26)

C = α n_H,    (27)

B^2 − 4AC = γ^2 n_H^2,    (28)

x_- = α / (α + γ),    (29)

x_+ = 1.    (30)

The root x_- is the collisional ionisation equilibrium neutral fraction and is the physical equilibrium solution, while x_+ = 1 is unphysical whenever collisional ionisation operates.

The collisional ionisation and recombination rates depend on temperature (see for example the fits in Theuns_1998) which, in general, will change as the ionisation state evolves. However, in the case of constant temperature, density, photoionisation rate, and χ, the coefficients A, B, and C are constants as well. We can rewrite the derivative using Vieta's formulas as,

dx/dt = A (x − x_+)(x − x_-),    (31)

and solve for the time-dependent solution by separation of variables

∫_{x_0}^{x} dx' / [(x' − x_+)(x' − x_-)] = A t,    (32)

x(t) = [x_+ (x_0 − x_-) − x_- (x_0 − x_+) E] / [(x_0 − x_-) − (x_0 − x_+) E],  E ≡ exp[A (x_+ − x_-) t].    (33)

In a gas composed only of hydrogen and helium, χ is bounded between 0 and (1 − X)/(2X), where X is the hydrogen mass fraction of the gas. The solution is fully determined once values for the density, temperature, photoionisation rate, χ, and the initial neutral fraction x_0 are specified. In addition, all of the dependence on the ionisation state of elements other than hydrogen is contained in the variable χ. For all of the tests performed in this paper we set χ = 0.
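The closed-form solution is straightforward to evaluate numerically. The sketch below follows the Riccati grouping of Eqs. (21)-(22) and the solution form of Eqs. (31)-(33), with illustrative rate values.

```python
import math

def riccati_coeffs(n_h, gamma, alpha, gamma_col, chi=0.0):
    """Coefficients of dx/dt = A x^2 + B x + C (cf. Eqs. 21-22)."""
    a = (alpha + gamma_col) * n_h
    b = -((2.0 + chi) * alpha * n_h + (1.0 + chi) * gamma_col * n_h + gamma)
    c = (1.0 + chi) * alpha * n_h
    return a, b, c

def roots(a, b, c):
    """Quadratic roots (x_plus, x_minus); x_minus is the stable equilibrium."""
    disc = math.sqrt(b * b - 4.0 * a * c)
    return (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)

def x_of_t(t, x0, n_h, gamma, alpha, gamma_col, chi=0.0):
    """Closed-form neutral fraction after time t (cf. Eqs. 31-33)."""
    a, b, c = riccati_coeffs(n_h, gamma, alpha, gamma_col, chi)
    xp, xm = roots(a, b, c)
    e = math.exp(a * (xp - xm) * t)
    return ((xp * (x0 - xm) - xm * (x0 - xp) * e)
            / ((x0 - xm) - (x0 - xp) * e))
```

In the pure-hydrogen, radiation-free limit the roots reduce to 1 and α/(α+γ), and the time-dependent solution relaxes to the latter.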

### C.4 Neutral fraction in a plane-parallel slab

In the case of plane-parallel monochromatic radiation with a photon flux incident on a semi-infinite slab of hydrogen gas with constant density and temperature, the ionisation structure can be calculated analytically. We orient our coordinate system such that the surface of the slab is coincident with the z = 0 plane and the positive z-axis extends into the slab. The photoionisation rate at depth z is Γ(z) = Γ_0 exp[−τ(z)], where Γ_0 is the photoionisation rate at z = 0 and σ is the photoionisation cross-section for the monochromatic radiation. In equilibrium, Eq. (20) with χ = 0 implies:

α n_H (1 − x)^2 − γ n_H x (1 − x) = Γ_0 exp[−τ(z)] x,    (34)

where we have defined the optical depth τ(z) = σ n_H ∫_0^z x(z') dz'. Taking the logarithm of both sides and differentiating with respect to z gives:

σ n_H x = (dx/dz) [ 1/x + (2α(1 − x) + γ(1 − 2x)) / (α(1 − x)^2 − γ x(1 − x)) ].    (35)

Solving Eq. (35) for dz and integrating over the interval from the surface value to x yields the depth z at which the neutral fraction is x. We can write this inverse solution in terms of the equilibrium neutral fraction in the absence of radiation (i.e. in collisional ionisation equilibrium), x_c = α/(α + γ), and the value of the neutral fraction at the surface of the slab, x_0. The solution is then:

σ n_H z(x) = (1/x_0 − 1/x) + (2 + γ/α) ln(x/x_0) − ln[(1 − x)/(1 − x_0)] − (1 + γ/α) ln[(x_c − x)/(x_c − x_0)].    (36)

This allows x(z) to be mapped out. Once x(z) is known, a photoionisation rate can be determined using Eq. 34. We use this solution to verify our numerical solver, and then use that solver in more general (non-monochromatic) cases to verify Urchin. In addition, Eq. (36) forms the basis of the near-slab solution described in §4.
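As a consistency check on a closed form of this type (the expression below is our own integration of the equilibrium condition under the same assumptions, and is illustrative rather than a verbatim transcription of Eq. 36), one can compare it against its differential form by finite differences:

```python
import math

ALPHA = 4.0e-13       # cm^3 s^-1, recombination rate (assumed)
GAMMA_COL = 1.0e-14   # cm^3 s^-1, collisional ionisation rate (assumed)
SIGMA = 6.30e-18      # cm^2
N_H = 1.0e-2          # cm^-3

def z_of_x(x, x0):
    """Depth at which the neutral fraction equals x (closed form)."""
    xc = ALPHA / (ALPHA + GAMMA_COL)     # collisional-equilibrium fraction
    g = GAMMA_COL / ALPHA
    val = ((1.0 / x0 - 1.0 / x)
           + (2.0 + g) * math.log(x / x0)
           - math.log((1.0 - x) / (1.0 - x0))
           - (1.0 + g) * math.log((xc - x) / (xc - x0)))
    return val / (SIGMA * N_H)

def dz_dx(x):
    """Integrand from the differential form (Eq. 35 rearranged for dz/dx)."""
    u = ALPHA * (1.0 - x) ** 2 - GAMMA_COL * x * (1.0 - x)
    term = (2.0 * ALPHA * (1.0 - x) + GAMMA_COL * (1.0 - 2.0 * x)) / (x * u)
    return (1.0 / x ** 2 + term) / (SIGMA * N_H)
```

The depth grows monotonically with the neutral fraction and diverges as x approaches the collisional-equilibrium value, as expected for a semi-infinite slab.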