How well can Charge Transfer Inefficiency be corrected?
A parameter sensitivity study for iterative correction
Radiation damage to space-based Charge-Coupled Device (CCD) detectors creates defects which result in an increasing Charge Transfer Inefficiency (CTI) that causes spurious image trailing. Most of the trailing can be corrected during post-processing by modelling the charge trapping and moving electrons back to where they belong. However, such correction is not perfect, and damage continues to accumulate in orbit. To aid future development, we quantify the limitations of current approaches, and determine where imperfect knowledge of model parameters most degrades measurements of photometry and morphology.
As a concrete application, we simulate “worst case” galaxy and star images to test the performance of the Euclid visual instrument detectors. There are two separable challenges. First, even if the model used to correct CTI is identical to that used to add CTI, % of the spurious ellipticity is corrected in our setup. This is because readout noise is not subject to CTI, but gets over-corrected during correction. Second, if we assume the first issue to be solved, the charge trap density will need to be known to within %, and the characteristic release time of the dominant species to within %. This work presents the next level of definition of in-orbit CTI calibration procedures for Euclid.
keywords: space vehicles: instruments — instrumentation: detectors — methods: data analysis
The harsh radiation environment above the Earth’s atmosphere gradually degrades all electronic equipment, including the sensitive Charge-Coupled Device (CCD) imaging detectors used in the Hubble Space Telescope (HST) and Gaia (Lindegren et al. 2008), and proposed for use by Euclid (Laureijs et al. 2011). CCD detectors work by collecting photoelectrons which are stored within a pixel created by an electrostatic potential well. After each exposure these electrons are transferred via a process called clocking, where alternate electrodes are held high and low to move charge through the pixels towards the serial register. The serial register is then clocked towards the output circuit where charge-to-voltage conversion occurs providing an output signal dependent on the charge contained within a pixel. The amount of charge lost with each transfer is described by the Charge Transfer Inefficiency (CTI). One of the results of radiation-induced defects within the silicon lattice is the creation of charge traps at different energy levels within the silicon band-gap. These traps can temporarily capture electrons and release them after a characteristic delay, increasing the CTI. Any electrons captured during charge transfer can re-join a charge packet later, as spurious charge, often observed as a charge tail behind each source.
Charge trailing can be (partially) removed during image post-processing. Since charge transfer is the last process to happen during data acquisition, the fastest and most successful attempts to correct CTI take place as the second step of data reduction, right after the analogue-to-digital converter bias has been subtracted (e.g. Bristow 2003). By modelling the solid-state physics of the readout process in HST’s Advanced Camera for Surveys (ACS), then iteratively reversing the model, Massey et al. (2010) demonstrated a -fold reduction in the level of charge trailing. The algorithm was sped up by Anderson & Bedin (2010) and incorporated into STScI’s default HST analysis pipeline (Smith et al. 2012). As the radiation damage accumulated, the trailing became larger and easier to measure. With an updated and more accurate HST model, Massey (2010) achieved a -fold reduction. In an independent programme for Gaia, Short et al. (2013) developed a model using different underlying assumptions about the solid-state physics in CCDs. Massey et al. (2014) created a meta-algorithm that can reproduce either approach through a choice of parameters, and optimised these parameters for HST to correct % of the charge trailing.
The current level of achievable correction is acceptable for most immediate applications. However, radiation damage is constantly accumulating in HST and Gaia; and increasing accuracy is required as datasets grow, and statistical uncertainties shrink. One particularly challenging example of stringent requirements in future surveys will be the measurement of faint galaxy shapes by Euclid.
In this paper, we investigate the effect of imperfect CTI correction on artificial images with known properties. We add charge trailing to simulated data using a CTI model, then correct the data using a CTI model with imperfectly known parameters. After each stage, we compare the measured photometry (flux) and morphology (size and shape) of astronomical sources to their true (or perfectly corrected) values. We develop a general model to predict these errors based on the errors in the CTI model parameters. We focus on the most important parameters of a ‘volume-driven’ CTI model: the density of charge traps, the characteristic time in which they release captured electrons, and the power-law index describing how an electron cloud fills up the physical pixel volume.
This paper is organised as follows. In Sect. 2, we simulate Euclid images and present our analysis methods. In Sect. 3, we address the challenge of measuring an average ellipticity in the presence of strong noise. We present our CTI model and measure the CTI effects as a function of trap release timescale in Sect. 4. Based on laboratory measurements of an irradiated CCD273 (Endicott et al. 2012), we adopt a baseline trap model for the Euclid VIS instrument (Sect. 5). In this context, we discuss how well charge trailing can be removed in the presence of readout noise. We go on to present our results for the modified correction model () and derive tolerances in terms of the trap parameters based on Euclid requirements. We discuss these results in Sect. 6 and conclude in Sect. 7.
2 Simulations and data analysis
2.1 Simulated galaxy images
Charge Transfer Inefficiency has the greatest impact on small, faint objects that are far from the readout register (i.e. that have undergone a great number of transfers). To quantify the worst-case scenario, we therefore simulate the smallest, faintest galaxy whose properties are likely to be measured, with an exponential flux profile whose broad wings (compared to a Gaussian profile) also make it more susceptible to CTI. To beat down shot noise, we simulate noisy image realisations for each measurement. We locate these galaxies pixels from both the serial readout register and the amplifier, uniformly randomising the sub-pixel centre to average out effects that depend on proximity to a pixel boundary. All our simulated galaxies have the same circularly symmetric profile, following the observation by Rhodes et al. (2010) that this produces the same mean result as randomly oriented elliptical galaxies with no preferred direction.
We create the simulated images spatially oversampled by a factor 20, convolve them with a similarly oversampled Point Spread Function (PSF), then resample them to the final pixel scale. We use a preliminary PSF model and the pixels of the Euclid VIS instrument, but our setup can easily be adapted to other instruments, e.g. ACS. To the image signal of electrons, we add a uniform sky background of electrons, as expected for a s VIS exposure, and Poisson photon noise to both the source and the background. After clocking and charge trailing (if it is being done; see Sect. 4.1), we then add additional readout noise, which follows a Gaussian distribution with a root mean square (rms) of electrons, the nominal Euclid VIS value.
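The simulation recipe above can be sketched in a few lines of Python. This is a schematic stand-in, not the paper's pipeline: it uses plain numpy/scipy, a Gaussian in place of the preliminary Euclid VIS PSF model, and made-up values for the disk scale, flux, sky level, and readout noise, since the paper's exact numbers are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_stamp(stamp=24, oversample=20, r_d=1.0, psf_sigma=0.9,
                   flux=1700.0, sky=105.0, read_noise=4.5, seed=0):
    """Oversampled exponential disk -> PSF convolution -> rebinning -> noise.
    All numerical values are illustrative placeholders, not Euclid VIS values."""
    rng = np.random.default_rng(seed)
    n = stamp * oversample
    y, x = np.indices((n, n), dtype=float)
    # randomise the sub-pixel centre to average out pixel-boundary effects
    cy, cx = (n - 1) / 2.0 + rng.uniform(-0.5, 0.5, 2) * oversample
    r = np.hypot(x - cx, y - cy) / oversample          # radius in final pixels
    disk = np.exp(-r / r_d)                            # exponential profile
    # stand-in Gaussian PSF on the oversampled grid (the paper uses a
    # preliminary Euclid VIS PSF model instead)
    k = np.arange(n) - n // 2
    g = np.exp(-0.5 * (k / (psf_sigma * oversample)) ** 2)
    psf = np.outer(g, g)
    img = fftconvolve(disk, psf / psf.sum(), mode='same')
    # rebin each block of oversampled pixels into one final pixel
    img = img.reshape(stamp, oversample, stamp, oversample).sum(axis=(1, 3))
    img = flux * img / img.sum() + sky                 # scale, add sky
    img = rng.poisson(img).astype(float)               # shot noise
    # readout noise is added only after clocking/trailing (Sect. 4.1)
    img += rng.normal(0.0, read_noise, img.shape)
    return img

stamp = simulate_stamp()
```

In the actual pipeline the Poisson step precedes the clocking and the Gaussian readout noise follows it; the sketch collapses these into one function for brevity.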
In the absence of charge trailing, the final galaxies have a mean of , and a Full Width at Half Maximum (FWHM) size of , as measured by SExtractor (Bertin & Arnouts 1996). This size, the same as that of the PSF and at the small end of the range expected from Fig. 4 of Massey et al. (2013), makes our galaxies the most challenging in terms of CTI correction. Examples of input, degraded, and corrected images are shown in Fig. 1.
Separately, we perform a second suite of simulations, containing realisations of a Euclid VIS PSF at . The PSF simulations follow the above recipe, but skip the convolution of the PSF with an exponential disk.
2.2 Image analysis
On each of the sets of images (input, degraded, and corrected), we detect the sources using SExtractor. Moments of the brightness distribution and fluxes of the detected objects are measured using an IDL implementation of the RRG (Rhodes, Refregier & Groth 2001) shape measurement method. RRG is more robust than SExtractor for faint images, combining Gaussian-weighted moments of the image to measure integrated source flux
$$F = \int W(\vec{x})\, I(\vec{x})\; \mathrm{d}^2x ,$$
where $I(\vec{x})$ is the image, $W(\vec{x})$ is a Gaussian weight function with standard deviation $w$, and the integral extends over the source image; the position
$$\bar{x}_i = \frac{1}{F} \int x_i\, W(\vec{x})\, I(\vec{x})\; \mathrm{d}^2x ;$$
and the ellipticity
$$e = \{e_1, e_2\} = \left\{ \frac{Q_{11} - Q_{22}}{Q_{11} + Q_{22}},\; \frac{2\,Q_{12}}{Q_{11} + Q_{22}} \right\} ,$$
where the second-order brightness moments are
$$Q_{ij} = \frac{1}{F} \int (x_i - \bar{x}_i)(x_j - \bar{x}_j)\, W(\vec{x})\, I(\vec{x})\; \mathrm{d}^2x .$$
For measurements on stars, we chose a window size of , the Euclid prescription for stars. For galaxies, we seek to reproduce the window functions used in weak lensing surveys. We adopt the radius of the SExtractor object (e.g. Leauthaud et al. 2007), which truncates more of the noise and thus returns more robust measurements.
Note that we are measuring a raw galaxy ellipticity, a proxy for the (reduced) shear in which we are actually interested (cf. Kitching et al. 2012 for a recent overview of the effects a cosmic shear measurement pipeline needs to address). A full shear measurement pipeline must also correct the ellipticity for convolution with the telescope’s PSF and calibrate it via a shear ‘responsivity factor’ (Kaiser, Squires & Broadhurst 1995). The first operation typically enlarges by a factor of , and the second lowers it by about the same amount. Since this is already within the precision of other concerns, we shall ignore both conversions. The absolute calibration of shear measurement with RRG may not be sufficiently accurate for future surveys. However, it certainly has sufficient relative accuracy to measure small deviations in galaxy ellipticity when an image is perturbed.
3 High precision ellipticity measurements
3.1 Measurements of a non-linear quantity
A fundamental difficulty arises in our attempt to measure galaxy shapes to very high precision by averaging over a large number of images. Mathematically, the problem is that calculating the ellipticity directly from the moments and then taking the expectation value over all objects, i.e.:
means dividing one noisy quantity by another noisy quantity. Furthermore, the numerator and denominator are highly correlated. If the noise in each follows a Gaussian distribution, and their expectation values are zero, the probability density function of the ratio is a Lorentzian (also known as Cauchy) distribution. If the expectation values of the Gaussians are nonzero, as we expect, the ratio distribution becomes a generalised Lorentzian, the Marsaglia-Tin distribution (Marsaglia 1965, 2006; Tin 1965). In either case, the ratio distribution has no finite first or second moments, i.e. its expectation value, and hence its variance, are undefined. Implications of this for shear measurement are discussed in detail by Melchior & Viola (2012); Refregier et al. (2012); Kacprzak et al. (2012); Miller et al. (2013); Viola, Kitching & Joachimi (2014).
Therefore, we cannot simply average over ellipticity measurements for simulated images. The mean estimator (Eq. 6) would not converge, but follow a random walk in which entries from the broad wings of the distribution pull the average up or down by an arbitrarily large amount.
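The non-convergence is easy to demonstrate numerically. In the sketch below (plain numpy; sample sizes are arbitrary), batch means of a ratio of zero-mean Gaussians retain the full Cauchy-like scatter of a single draw, while ordinary Gaussian batch means tighten as expected:

```python
import numpy as np

# The mean of a ratio of zero-mean Gaussians is itself Cauchy-distributed,
# so averaging over a batch does not reduce its scatter, whereas Gaussian
# batch means shrink as 1/sqrt(N). Minimal numerical illustration.
rng = np.random.default_rng(42)
batches, N = 500, 4000

ratio_means = np.array([np.mean(rng.standard_normal(N) / rng.standard_normal(N))
                        for _ in range(batches)])
gauss_means = np.array([np.mean(rng.standard_normal(N)) for _ in range(batches)])

def iqr(a):
    """Interquartile range: robust even when moments diverge."""
    q1, q3 = np.percentile(a, [25, 75])
    return q3 - q1

# IQR of the ratio batch means stays ~2 (a single standard Cauchy draw),
# while the Gaussian batch-mean IQR shrinks to ~1.35/sqrt(N).
print(iqr(ratio_means), iqr(gauss_means))
```

The interquartile range is used rather than the variance precisely because the variance of the ratio estimator is undefined.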
3.2 “Delta method” (Taylor expansion) estimators for ellipticity
As an alternative estimator, we employ what is known in statistics as the ‘delta method’: a Taylor expansion of Eq. (6) around the expectation value of the denominator (e.g. Casella & Berger 2002). The expectation value of the ratio of two random variables $X$ and $Y$ is thus approximated by:
$$\mathrm{E}\!\left[\frac{X}{Y}\right] \approx \frac{\mu_X}{\mu_Y} - \frac{\mathrm{cov}(X,Y)}{\mu_Y^{2}} + \frac{\mu_X\,\sigma_Y^{2}}{\mu_Y^{3}} + \ldots\,,$$
where $\mu$, $\sigma$, and $\sigma^{2}$ denote the expectation value, standard deviation, and variance of a variable, and $\mathrm{cov}(X,Y)$ its covariance with the other. The zero-order term in Eq. (7) is the often-used approximation that substitutes the ratio of the averages for the average of the ratio. We note that, beginning from the first order, there are two terms per order, with opposite signs. Inserting Eq. (5) into Eq. (7), the first-order estimator for the ellipticity, in terms of the brightness distribution moments, reads:
with the corresponding uncertainties, likewise derived using the delta method (e.g. Casella & Berger 2002):
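As a numerical sanity check of the first-order delta method, the sketch below compares the expansion E[X/Y] ≈ μ_X/μ_Y − cov(X,Y)/μ_Y² + μ_X σ_Y²/μ_Y³ against a brute-force Monte Carlo average for correlated Gaussians whose denominator lies far from zero (all parameter values are arbitrary illustrations, not quantities from the paper):

```python
import numpy as np

# First-order delta method for E[X/Y] with correlated Gaussians; note the
# two first-order terms with opposite signs, as remarked in the text.
rng = np.random.default_rng(1)
mu = np.array([2.0, 10.0])                 # mu_X, mu_Y (denominator >> 0)
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])               # strong correlation, as for Q_ij

xy = rng.multivariate_normal(mu, cov, size=2_000_000)
mc = np.mean(xy[:, 0] / xy[:, 1])          # brute-force Monte Carlo average

delta = mu[0] / mu[1] - cov[0, 1] / mu[1]**2 + mu[0] * cov[1, 1] / mu[1]**3
print(mc, delta)
```

With the denominator ten standard deviations from zero, the pathological tails are negligible and the two estimates agree to well within the Monte Carlo error.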
3.3 Application to our simulations
For our input galaxies, the combined effect of the first-order terms in eq. (8) is %. Second-order contributions to the estimator are small, so we truncate after the first order. However, because of the divergent moments of the Marsaglia-Tin distribution, the third and higher-order contributions to the Taylor series increase again.
Nevertheless, while this delta-method estimator neither mitigates noise bias nor overcomes the infinite moments of the Marsaglia-Tin distribution at a fundamental level, it sufficiently suppresses the random-walk behaviour for the purposes of this study: averaging over noise realisations of the same object. We advocate re-casting the Euclid requirements in terms of the Stokes parameters (Viola, Kitching & Joachimi 2014). These are the numerators and denominator of eq. (6) and are well-behaved Gaussians with finite first and second moments.
The formal uncertainties on ellipticity we quote in the rest of this article are the standard errors given by eq. (10). Because our experimental setup re-uses the same simulated sources (a necessity, given the computational expense of the large numbers needed), our measurements are intrinsically correlated (Sect. 4.2). Hence, the error bars we show overestimate the true uncertainties.
4 The effects of fast and slow traps
4.1 How CTI is simulated
The input images are degraded using a C implementation of the Massey et al. (2014) CTI model. During each pixel-to-pixel transfer, in a cloud of $n_e$ electrons, the number captured is
$$n_c = \left(1 - \mathrm{e}^{-\alpha\, n_e^{1-\beta}}\right) \sum_i \rho_i \left(\frac{n_e}{w}\right)^{\beta} ,$$
where the sum is over the different charge trap species with densities $\rho_i$ per pixel, and $w$ is the full-well capacity. Parameter $\alpha$ controls the speed at which electrons are captured by traps within the physical volume of the charge cloud, which grows in a way determined by the well fill power $\beta$.
Release of electrons from charge traps is modelled by a simple exponential decay, with a fraction $1 - \mathrm{e}^{-1/\tau}$ escaping during each subsequent transfer. The characteristic release timescale $\tau$ depends on the physical nature of the trap species and the operating temperature of the CCD.
In this paper, we make the simplifying ‘volume-driven’ assumption that charge capture is instantaneous ($\alpha \to \infty$). Based on laboratory studies of an irradiated VIS CCD (detailed in Sect. A), we adopt a baseline well fill power, and an end-of-life total density of one trap per pixel. In our first, general tests, we investigate a single trap species and explore the consequences of different values of $\tau$.
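A schematic, single-species version of such a volume-driven clocking model can be written as follows. This is an illustrative sketch, not the Massey et al. (2014) C code: the trap density, release time, well fill power, and full-well capacity are placeholders, and capture is taken to be instantaneous as in the text.

```python
import numpy as np

def add_cti(column, rho=1.0, tau=3.0, beta=0.58, fwc=200_000.0):
    """Trail a 1-D CCD column; column[0] is the pixel nearest the serial
    register. Volume-driven, single-species toy model with instantaneous
    capture; all parameter values are placeholders."""
    col = np.asarray(column, dtype=float).copy()
    readout = np.zeros_like(col)
    trapped = np.zeros_like(col)              # charge currently held in traps
    release = 1.0 - np.exp(-1.0 / tau)        # escape fraction per transfer
    for k in range(len(col)):
        # exponential release feeds whichever packet now occupies each pixel
        freed = trapped * release
        n = col + freed
        trapped -= freed
        # instantaneous capture: fill traps inside the charge-cloud volume
        want = rho * (n / fwc) ** beta
        grab = np.clip(want - trapped, 0.0, n)
        trapped += grab
        col = n - grab
        # one clock cycle: every packet moves one pixel toward the register
        readout[k] = col[0]
        col = np.roll(col, -1)
        col[-1] = 0.0
    return readout
```

Running it on a delta-function charge packet reproduces the qualitative behaviour described above: the packet loses a few electrons to traps between it and the register, and an exponentially decaying trail appears behind it.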
4.2 Iterative CTI correction
[Fig. 2 panel captions: as a function of the trap parameter, (i) galaxy simulations, degraded images including readout noise; (ii) galaxy simulations after correction in software post-processing, with perfect knowledge of the charge traps; (iii) star simulations, degraded images including readout noise; (iv) star simulations after correction in software post-processing, with perfect knowledge of the charge traps.]
The Massey et al. (2014) code can also be used to ‘untrail’ the CTI. If required, we use iterations to attempt to correct the image (possibly with slightly different model parameters). Note that we perform this correction only after adding readout noise in the simulated images.
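The untrailing strategy, repeatedly applying the forward trailing model and subtracting the inferred trail, can be illustrated with a toy linear trailing operator standing in for the full clocking code (the kernel values and iteration count below are arbitrary):

```python
import numpy as np

# Iterative 'untrailing': we only have a *forward* trailing model, so we
# invert it by fixed-point iteration. The toy kernel stands in for the
# Massey et al. (2014) clocking code; its values are made up.
kernel = np.array([0.92, 0.05, 0.02, 0.01])   # pixel keeps 92%, rest trails

def trail(x):
    # causal trail toward higher pixel index (away from the register)
    return np.convolve(x, kernel)[:len(x)]

def untrail(observed, n_iter=5):
    est = observed.copy()
    for _ in range(n_iter):
        est = est + (observed - trail(est))   # fixed-point update
    return est

truth = np.zeros(30)
truth[10] = 1000.0
obs = trail(truth)        # trailed 'observation'
rec = untrail(obs)        # iteratively corrected estimate
print(np.abs(rec - truth).max())
```

Each iteration multiplies the residual error by (I − T), where T is the trailing operator; since CTI only perturbs the image slightly, this contraction converges in a handful of iterations.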
Our main interest in this study is the impact of uncertainties in the trap model on the recovered estimate of an observable (e.g. ellipticity). Therefore, we present our results in terms of differences between the estimators measured for the corrected images, and the input values:
Because, for each object, the noise in the two measurements is strongly correlated, it partially cancels out. Thus, the actual uncertainty of each difference is lower than quoted. Moreover, because we re-use the same noise realisation in all our measurements (cases of different trap parameters), these measurements are correlated as well.
4.3 CTI as a function of trap timescale
The impact of charge trapping depends on the defect responsible. Figure 2 demonstrates the effect of charge trap species with different release times on various scientific observables. To compute each datum (filled symbols), we simulate galaxies, add shot noise, add CTI trailing in the serial direction (i.e. vertical in Fig. 1), and only then add readout noise. Separately, we simulate stars. Using eqs. (8)–(11), we measure mean values of photometry (top panel), astrometry (second panel), and morphology (size in the third, and ellipticity in the bottom panel). Our results confirm what Rhodes et al. (2010) found in a different context.
Three trap regimes are apparent for all observables. Very fast traps ( transfers) do not displace electrons far from the object; thus their effect on photometry is minimal (top panel of Fig. 2). We observe significant relative changes in position, size, and ellipticity, forming a plateau at low , because even if captured electrons are released after the shortest amount of time, some of them will be counted one pixel off their origin. This is probably an artifact: we expect the effect of traps with to be different in a model simulating the transfer between the constituent electrodes of the physical pixels, rather than entire pixels.
Very slow traps ( transfers) result in electrons being carried away over a long distance such that they can no longer be assigned to their original source image. Hence, they cause a charge loss compared to the CTI-free case. However, because charge is removed from nearly everywhere in the image, their impact on astrometry and morphology is small.
The most interesting behaviour is seen in the transitional region, for traps with a characteristic release time of a few transfer times. If electrons re-emerge several pixels from their origin, they are close enough to be still associated with their source image, but yield the strongest distortions in size and ellipticity measurements. This produces local maxima in the lower two panels of Fig. 2. If these measurements are scientifically important, performance can – to some degree – be optimised by adjusting a CCD’s clock speed or operating temperature to move release times outside the most critical range (Murray et al. 2012).
In the star simulations (crosses in Fig. 2 for degraded images, plus signs for CTI-corrected images), the CTI effects are generally smaller than for the faint galaxies, because the stars we simulate are brighter and thus experience less trailing relative to their signal. Still, we measure about the same spurious ellipticity and even a slightly higher relative size bias for the stars. The explanation is that the quadratic terms in the second-order moments (eq. 5) allow for larger contributions from the outskirts of the object, given the right circumstances. In particular, the wider window size explains the differences between the galaxy and PSF simulations. Notably, the peak in the and curves shifts from for the galaxies to for the stars. Because the wider window function gives more weight to pixels away from the centroid, the photometry becomes more sensitive to slower traps.
For a limited number of trap configurations, we have also tried varying the trap density or the number of transfers (i.e. object position on the CCD). In both cases, the dependence is linear. Overall, for all tested observables, the measurements in the degraded images (Fig. 2, solid symbols) are well-fit by the empirical fitting function
which combines an arc-tangent drop (“D”) and a Gaussian peak (“G”). The best-fit amplitudes (, and ), positions on the axis ( and ), and widths ( and ) are listed in Table LABEL:tab:taufits. The same functional form provides a good match to the residuals after CTI correction (open symbols in Fig. 2). These residuals are caused by readout noise, which is not subject to CTI trailing, but undergoes CTI correction (see Sect. 5.3.2).
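For illustration, the sketch below fits a function of this "drop plus peak" family to synthetic data with scipy. The exact parametrisation of eq. (14) is not reproduced here; this particular arctan-plus-Gaussian form in log τ, and all numerical values, are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# One plausible realisation of an 'arctan drop + Gaussian peak' fitting
# function; the true parametrisation is the paper's eq. (14).
def drop_plus_peak(log_tau, A_D, t_D, s_D, A_G, t_G, s_G):
    drop = A_D * (0.5 - np.arctan((log_tau - t_D) / s_D) / np.pi)
    peak = A_G * np.exp(-0.5 * ((log_tau - t_G) / s_G) ** 2)
    return drop + peak

# synthetic 'measurements' drawn from the same family, then re-fitted
rng = np.random.default_rng(3)
log_tau = np.linspace(-1.0, 3.0, 40)
true_p = (1.0, 1.5, 0.4, 0.6, 0.3, 0.35)        # made-up parameters
y = drop_plus_peak(log_tau, *true_p) + rng.normal(0.0, 0.01, log_tau.size)

fit_p, _ = curve_fit(drop_plus_peak, log_tau, y,
                     p0=(1.0, 1.0, 0.5, 0.5, 0.0, 0.5))
print(fit_p)
```

The drop term captures the loss of flux to very slow traps, and the Gaussian term the morphology peak at intermediate release times.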
4.4 Predictive model for imperfect correction
We set out to construct a predictive model of , the CTI effect in an observable relative to the underlying true value (eq. 13). There are two terms, the CTI degradation (eq. 14), and a second term for the effect of the ‘inverse’ CTI correction allowing for a slightly imperfect CTI model:
Since CTI trailing perturbs an image by only a small amount, the correction acts on an almost identical image. Assuming the coefficients of eq. (14) to be constant, we get:
where is approximately constant, and depends on the readout noise (see Section 5.3). We could expand this equation as a Taylor series, but the derivatives of do not provide much further insight.
Because eq. (12) is non-linear in the number of signal electrons, our observation (Sect. 4.3) that the effects of CTI behave linearly in is not a trivial result. Assuming this linearity, we can expand eq. (16) and factor out . The combined effect of several trap species with release timescales and densities can then be written as:
in which we dropped the superscript of for the sake of legibility. We are going to test this model in the remainder of this study, where we consider a mixture of three trap species. We find eq. (17) to correctly describe measurements of spurious ellipticity , as well as the relative bias in source size and flux .
5 Euclid as a concrete example
5.1 Context for this study
To test the general prediction eq. (17), we now evaluate the effect of imperfect CTI correction in simulations of Euclid data, with a full Euclid CTI model featuring multiple trap species (see Sect. 5.2). We call this the experiment.
Akin to Prod’homme et al. (2012) for Gaia, this study is useful in the larger context of the flow down of requirements from Euclid’s science goals (Refregier et al. 2010) to its imaging capabilities (Massey et al. 2013) and instrument implementation (Cropper et al. 2013, 2014). In particular, Massey et al. (2013) highlight that the mission’s overall success will be determined both by its absolute instrumental performance and our knowledge about it. We now present the next step in the flow down: to what accuracy do we need to constrain the parameters of the Massey et al. (2014) CTI model? Future work will then determine which calibration observations are required to achieve this accuracy.
While the final Euclid requirements remain to be confirmed, we adopt the current values as discussed by Cropper et al. (2013). Foremost, the “CTI contribution to the PSF ellipticity shall be per ellipticity component”.
The Euclid VIS PSF model will bear an uncertainty due to CTI, that translates into an additional error on measured galaxy properties. For the bright stars (which have much higher ) tracing the PSF, Cropper et al. (2013) quote a required knowledge of to a precision . We test this requirement with our second suite of simulations, containing realisations of a Euclid VIS PSF at (cf. Sec. 2.1).
In reality, CTI affects the charge transport in both CCD directions, serial and parallel. For the sake of simplicity, we only consider serial CTI, and thus underestimate the total charge trailing. There is no explicit photometric error budget allocated to CTI, while “ground data processing shall correct for the detection chain response to better than % error in photometry in the nominal VIS science images”.
5.2 CTI model for the Euclid VISual instrument
[Baseline trap model parameters: trap density [px] and release timescale [px].]
Based on a suite of laboratory test data, we define a baseline model of the most important CTI parameters (, , ). We degrade our set of simulated galaxies using this baseline model. The experiment then consists of correcting the trailing in the degraded images with slight alterations to the model. We investigate correction models, resulting in a total of simulated galaxies used in this study.
Exposure to the radiation environment in space was simulated in the laboratory by irradiating a prototype of the e2v CCD273 to be used for Euclid VIS with a MeV equivalent fluence of (Prod’homme et al. 2014; Verhoeve et al. 2014). Characterisation experiments were performed in nominal VIS conditions of temperature and a readout frequency. We refer to Appendix A for further details on the experiments and data analysis.
We emphasize that our results pertain to faint and small galaxies, with an exponential disk profile (viz. Sect. 2.1), placed at the maximum distance from the readout register ( transfers). Furthermore, we assume the level of radiation damage expected at the end of Euclid’s six-year mission. Because CTI trailing increases roughly linearly with time in orbit (cf. Massey et al. 2014), the CTI experienced by the typical faintest galaxy (i.e. at half the maximum distance to the readout register and three years into the mission) will be smaller by a factor of compared to the results quoted below.
Where not stated otherwise, the nominal Euclid VIS rms readout noise of electrons was used. Table LABEL:tab:traps summarises the baseline model constructed from these analyses. The default well fill power is . Slow traps with clock cycles and dominate our baseline model, with small fractions of medium-fast (, ) and fast (, ) traps. Figure 12 shows how the trails change with changing trap parameters.
5.3 Readout noise impedes perfect CTI correction
5.3.1 Not quite there yet: the zeropoint
First, we consider the ellipticities measured in the degraded and corrected images, applying the same baseline model in the degradation and correction steps. The reasons why this experiment does not retrieve the input ellipticity after correction are the Poissonian image noise and the Gaussian readout noise. We quantify this in terms of the spurious ellipticity, and shall refer to it as the zeropoint of the experiment. The spurious ellipticity in the serial direction is . Thus, this experiment on worst-case galaxies using the current software exceeds the Euclid requirement of by a factor of . With respect to the degraded image, % of the CTI-induced ellipticity is corrected. Virtually the same zeropoint is predicted by adding the contributions of the three species from single-species runs based on the full galaxies. We point out that these results on the faintest galaxies furthest from the readout register have been obtained using non-flight readout electronics (cf. Short et al. 2014).
From our simulation of bright () stars, we measure the residual bias in source size after CTI correction of , in moderate excess of the requirement . While the of the star simulations is selected to represent the typical Euclid VIS PSF tracers, the same arguments of longest distance from the readout register and non-flight status of the electronics apply.
5.3.2 The effect of readout noise
In Fig. 3, we explore the effect of varying the rms readout noise in our simulations about the nominal value of electrons (grey lines) discussed in Sect. 5.3.1. We continue to use the baseline trap model for both degradation and correction. For the rms readout noise, we assumed a range of values between and electrons. For the faint galaxies (Fig. 3, left panel), we find the spurious ellipticity to increase with readout noise in a way well described by a second-order polynomial. A similar, cubic fit can be found for the size bias measured from the star simulations (Fig. 3, right panel), but with a hint of saturation at the highest tested readout noise level.
The most important result from Fig. 3 is that in the absence of readout noise, if the correction assumes the correct trap model, it removes the CTI perfectly, with and . The quoted uncertainties are determined by the () galaxy images we simulated. We conclude that the combination of our simulations and correction code passes this crucial sanity check. If the rms readout noise is electrons ( electrons), the spurious ellipticity (the relative size bias) stays within Euclid requirements.
5.4 Sensitivity to imperfect CTI modelling
5.4.1 Morphology biases as a function of well fill power, and determining tolerance ranges
Now that we have assessed the performance of the correction using the same CTI model as for the degradation (given the specifications of our simulations), we turn to the experiment for determining the sensitivities to imperfections in the CTI model. To this end, we assume the zeropoint offset (or ) of Sect. 5.3.1 to be corrected, and ‘shift’ the requirement range to be centred on it (see, e.g., Fig. 4).
Figure 4 shows the experiment for the well fill power . If the degraded images are corrected with the baseline , we retrieve the zeropoint measurement from Sect. 5.3.1. For the experiment, we corrected the degraded images with slightly different well fill powers . The upper plot in Fig. 4 shows the resulting in galaxies, and the lower plot in stars. We find a strong dependence of both the spurious serial ellipticity and on .
In order to determine a tolerance range with respect to a CTI model parameter with baseline value (here, the well fill power ), we fit the measured bias (e.g. , cf. eq. 13) as a function of . By assuming a polynomial
of low order , we perform a Taylor expansion around . In eq. 18, is the zeropoint (Sect. 5.3.1) to which we have shifted our requirement margin. The coefficients are determined using the IDL singular value decomposition least-square fitting routine SVDFIT. For consistency, our fits include as the zeroth order. In Fig. 4, the best-fitting quadratic (linear) fits to () are shown as a solid and dashed line, respectively.
In both plots, the data stick tightly to the best-fitting lines, given the measurement uncertainties. If the measurements were uncorrelated, this would be a suspiciously orderly trend. However, as already pointed out in Sect. 3.3, we re-use the same simulations, with the same peaks and troughs in the noise, in all data points shown in Figs. 4 to 9. Hence, we do not expect the data points to deviate from the regression lines to the degree their quoted uncertainties would indicate. As a consequence, we do not use the reduced χ² values our fits commonly yield for any interpretation.
Because the interpretation of the reduced χ² is tainted by the correlation between our data points, we use an alternative criterion to decide the degree of the polynomial: if the uncertainty returned by SVDFIT allows for a coefficient consistent with zero, we do not consider this or higher-order terms. For the panels of Fig. 4, this procedure yields (). The different signs of the slopes are expected because appears in the denominator of eq. (4).
Given a requirement , e.g. , the parametric form (eq. 18) of the sensitivity curves allows us to derive tolerance ranges to changes in the trap parameters. Assuming the zeropoint (the bias at the correct value of ) to be accounted for, we find the limits of the tolerance range as the solutions of
with the smallest absolute values of on either side. Using eq. (19), we obtain from the requirement on the spurious ellipticity , for which the quadratic term is small. From the requirement on the relative size bias we obtain . In other words, the ellipticity sets the more stringent requirement: we need to be able to constrain to an accuracy of at least in absolute terms. This analysis assumes calibration by a single charge injection line at full well capacity, such that eq. (12) needs to be extrapolated to lower signal levels. We acknowledge that Euclid planning has already adopted the use of additional faint charge injection lines, lessening the need to extrapolate.
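The tolerance-range computation of eq. (19) amounts to finding, on either side of zero, the nearest roots of the fitted polynomial shifted by plus or minus the requirement. A minimal helper might look as follows; the example coefficients and requirement value are made up, not the paper's fits.

```python
import numpy as np

def tolerance_range(coeffs, requirement):
    """Given bias(D) = sum_k coeffs[k] * D**k (ascending powers, zeropoint
    already subtracted so coeffs[0] = 0), return the nearest real roots of
    |bias| = requirement on either side of D = 0. Illustrative sketch."""
    lo = hi = None
    for sign in (+1.0, -1.0):
        c = np.array(coeffs, dtype=float)
        c[0] -= sign * requirement             # solve bias(D) = sign * req
        for r in np.roots(c[::-1]):            # np.roots wants highest power first
            if abs(r.imag) < 1e-12:
                r = r.real
                if r > 0 and (hi is None or r < hi):
                    hi = r
                if r < 0 and (lo is None or r > lo):
                    lo = r
    return lo, hi

# e.g. a linear-plus-quadratic sensitivity curve: bias = -0.1*D + 0.004*D**2
lo, hi = tolerance_range([0.0, -0.1, 0.004], 0.0011)
print(lo, hi)
```

When the quadratic term is small, the result is close to the linear estimate ± requirement/|slope|, as in the example above.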
5.4.2 Ellipticity bias as a function of trap density
We now analyse the sensitivity of towards changes in the trap densities. Figure 5 shows the experiment for one or more of the trap densities of the baseline model. The upper panel of Fig. 5 presents the spurious ellipticity for five different branches of the experiment. In each of the branches, we modify the densities of one or several of the trap species. For example, the upward triangles in Fig. 5 denote that the correction model applied to the images degraded with the baseline model used a density of the fast trap species , tested at several values of with . The densities of the other species are kept to their baseline values in this case. The other four branches modify (downward triangles); (squares); and (diamonds); and all three trap species (circles).
Because a value of reproduces the baseline model in all branches, all of them recover the zeropoint measurement of there (cf. Sect. 5.3.1). Noticing that for the degraded images relative to the input images, we explain the more negative for as the effect of undercorrecting the CTI. This applies to all branches of the experiment. Likewise, with increasing , the residual undercorrection at the zeropoint decreases. Eventually, with even higher , we overcorrect the CTI and measure .
Over the range of we tested, responds linearly to a change in the densities. Indeed, our model (eq. 17), which is linear in the and additive in the effects of the different trap species, provides an excellent description of the measured data, both for and (Fig. 5, lower panel). The lines in Fig. 5 denote the model prediction from a simplified version of eq. (17),
In eq. (20), we assumed the release timescales to be correct, i.e. .
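Because eq. (20) is linear in each density offset and additive over trap species, its evaluation reduces to a weighted sum. A minimal sketch (function and variable names are assumptions; the per-species slopes would come from the fits described above):

```python
def predicted_ellipticity_bias(density_offsets, slopes, zeropoint=0.0):
    """Additive linear model: bias = zeropoint + sum_i slope_i * offset_i,
    where offset_i = rho_i / rho_i(baseline) - 1 for trap species i."""
    return zeropoint + sum(s * d for s, d in zip(slopes, density_offsets))
```

The additivity means the all-species branch is predicted by summing the three single-species slopes, which is what the lines in Fig. 5 test.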
Next, we compute the tolerance by which, in each branch of the experiment, we may deviate from the correct trap model and still recover the zeropoint within the Euclid requirements of and , respectively. Again, we calculate these tolerances about the zeropoints (cf. eq. 20), which we found to exceed the requirements in Sect. 5.3.1, but assume to be corrected for in this experiment.
In accordance with the linearity in , applying the Taylor expansion recipe of Sect. 5.4.1, we find the data in Fig. 5 to be well represented by first-order polynomials (eq. 18). The results for we obtain from eq. (19) are summarised in Table LABEL:tab:tolerances. For all species, the constraints from for faint galaxies are tighter than the ones from for bright stars.
Considering only the fast traps, can change by % and still be within the Euclid VIS requirements, given that the measured zeropoint has been corrected for. While a tolerance of % is found for , the slow traps put a much tighter tolerance of % on the density . This is expected because slow traps amount to % of all baseline model traps (Table LABEL:tab:traps). Varying the density of all trap species in unison, we measure a tolerance of %.
Computing the weighted mean of the intercepts in Fig. 5, we derive better constraints on the zeropoints: for the faint galaxies, and for the bright stars.
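The combination of the per-branch intercepts follows the standard inverse-variance weighting; a short sketch (names are illustrative):

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of `values` and its 1-sigma
    uncertainty, given per-point uncertainties `sigmas`."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))
```

Combining the five branches in this way shrinks the zeropoint uncertainty relative to any single branch, which is why the pooled constraints quoted above are tighter.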
5.4.3 Ellipticity bias as a function of trap release time
Figure 6 shows the results of the experiment in which we vary one or more of the release timescales of the trap model. The upper panel of Fig. 6 presents the spurious ellipticity for five different branches of the experiment. In each branch, we modify the release timescales of one or several of the trap species by multiplying them by a factor .
As in Fig. 5, the upward triangles in Fig. 6 denote that the correction model applied to the images degraded with the baseline model used a release timescale of for the fast trap species. The release timescales of the other species are kept at their baseline values in this case. The other four branches modify (downward triangles), (squares), (diamonds), and all three trap species (circles).
Because a value of reproduces the baseline model in all branches, all of them recover the zeropoint measurement of there. The three trap species differ in how the spurious ellipticity they induce varies as a function of . On the one hand, for , we observe more negative for , and less negative values for , with a null at . On the other hand, for the slow traps (), we find for , and values more negative than the zeropoint for . The curve of shows a maximum at , with a weak dependence on .
Key to understanding the spurious ellipticity as a function of the is the dependence on the release timescale for a single trap species that we presented in Fig. 2 and expressed by the empirical fitting function (eq. 14) with the parameters quoted in Table LABEL:tab:tolerances. While the correction algorithm effectively removes the trailing when the true is used, the residual of the correction depends on the difference between the trailing for the true timescale and for the timescale actually used in the correction. This dependence is captured by the predictive model (eq. 17), which simplifies for the situation in Fig. 6 () to
with (lines in Fig. 6). In the branches modifying and/or , but not , the measurements over the whole range of agree with the empirical model within their uncertainties. If is varied, eq. (21) overestimates significantly for