
Difference image analysis: Automatic kernel design using information criteria

Abstract

We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially-invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularisation. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularised delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.

keywords:
methods: statistical - techniques: image processing - techniques: photometric - methods: data analysis.

1 Introduction

In astronomy, the technique of difference image analysis (DIA) aims to measure changes, from one image to another, in the objects (e.g. stars, galaxies, etc.) observed in a particular field. Typically these changes consist of variations in flux and/or position. However, the variations in the object properties that we are interested in are entangled with the differences in the sky-to-detector (or scene-to-image) transformation between pairs of images. Therefore, the DIA method must carefully model the changes in astrometry, throughput, background, and blurring between an image pair in order to extract the required astronomical information.

The state of the art in DIA has evolved substantially over the last decade and a half. Possibly the most complicated part of DIA is the optimal modelling of the convolution kernel describing the changes in point-spread function (PSF) between images. The seminal paper by Alard & Lupton (1998) set the current framework for doing this by detailing the expansion of the kernel as a linear combination of basis functions. Alard (2000) subsequently showed how to model a spatially varying convolution kernel by modelling the coefficients of the kernel basis functions as polynomials of the image coordinates. The most important ingredient then in constructing a kernel solution in the Alard DIA framework is the definition of the set of kernel basis functions. The main developments in this area were achieved by Alard & Lupton (1998), who defined the Gaussian basis functions, Bramich (2008) and Miller, Pennypacker & White (2008) who introduced the delta basis functions (DBFs), and Becker et al. (2012) (hereafter Be12) who conceived of the regularised DBFs. A detailed discussion of the kernel basis functions presented in the DIA literature may be found in Bramich et al. (2013) (hereafter Br13).

The traditional Gaussian basis functions require the specification of numerous parameters while demanding precise sub-pixel image registration for optimal results, as do many other sets of kernel basis functions (e.g. the network of bicubic B-spline functions introduced by Yuan & Akerlof 2008). Consequently, the optimal choice of parameters for generating such sets of basis functions is not obvious, although some investigation into this issue has been performed (Israel, Hessman & Schuh 2007). In contrast, the DBFs have the ultimate flexibility to represent a discrete kernel of any form while requiring the absolute minimal user specification; namely the kernel size and shape (or equivalently the set of “active” kernel pixels). They may even be used to model fractional pixel offsets between images, avoiding the need for image resampling in the absence of other image misalignments (rotation, scale, shear and distortion). Unsurprisingly then, DIA photometry for kernels employing DBFs has been shown to be better than that produced for kernels using Gaussian basis functions (Albrow et al. 2009). However, the use of DBFs yields somewhat noisier kernel solutions than is desirable due to the relatively large number of parameters in the kernel model. To tackle this weakness of the DBFs, Be12 developed the regularised DBFs through the elegant application of Tikhonov regularisation to the kernel model. This refined approach produces very clean and low-noise kernel solutions at the expense of introducing an extra parameter $\lambda$ into the kernel definition, where the value of $\lambda$ controls the strength of the regularisation. Be12 recommend values of $\lambda$ between 0.1 and 1 for square kernels of size 19×19 pixels, although they caution that the optimal value will likely be data set dependent.

The next logical step in the development of DIA is to investigate how the properties of the image pair under consideration influence the composition of the optimal kernel model (i.e. the optimal set of DBFs, the optimal values of their coefficients, and the optimal value of $\lambda$). In this context, “optimality” refers both to the Principle of Parsimony, in that the optimal kernel model should constitute the simplest configuration of DBFs that provides a sufficiently good fit to the data, and to appropriate/relevant model performance measure(s). The proposed investigation may be accomplished both by generating and analysing a comprehensive set of simulated images, and by testing on a wide variety of real image data. Neither of these tasks have yet been attempted.

Various model selection criteria have been developed from different statistical view-points as implementations of the Principle of Parsimony (e.g. the Akaike information criterion - Akaike 1974, the Bayesian information criterion - Schwarz 1978, etc.) and each one may be used to automatically select a parsimonious model from a set of models. Due to the sheer number of possible combinations of DBFs that may constitute the kernel model, the set of models that can be considered will be limited to a set of feasible candidate kernel models defined via the adoption of an appropriate kernel design algorithm. The performance of each model selection criterion may then be assessed by measuring the quality of the corresponding kernel solution with respect to one or more desired metric(s). The final result will then be a recommendation, dependent on the properties of the image pair under consideration, as to which model selection criterion should be adopted to consistently yield the best kernel solutions for the specified kernel design algorithm.

In this paper, we report on the results of having carried out the proposed investigation for both the unregularised and regularised DBFs (Section 2) using simulated images (Section 5) and real data (Section 6). We restrict attention to the case of solving for a spatially-invariant convolution kernel. The performance of three proposed kernel design algorithms (Section 4) coupled with up to eight model selection criteria (Section 3) was assessed with regards to model error (simulations only), fit quality, and photometric accuracy. In total 19 methods were tested. The conclusions and recommendations from our investigation are detailed in Section 7.

2 Modelling The Convolution Kernel

In this section, we briefly describe the methods used in this paper to solve for the spatially-invariant convolution kernel matching the PSF between two images of the same field.

2.1 Solving For A Spatially-Invariant Kernel: Recap

Consider a pair of registered images of the same field with the same dimensions and sampled on the same pixel grid. To avoid invalidating the assumption of a spatially-invariant kernel model, the image registration should be such that at most there is a translational offset of a few pixels between the images, with no rotational (or other) image misalignments. Let the images be referred to as the reference image $R$ and the target image $I$, with pixel values $R_{ij}$ and $I_{ij}$, respectively, where $i$ and $j$ are pixel indices referring to the column and row of an image.

We model the target image as a model image $M$ formed by the convolution of the reference image with a spatially-invariant discrete convolution kernel $K$ plus a spatially-invariant (constant) differential background $B_{0}$:

$M_{ij} = \sum_{l,m} K_{lm}\,R_{(i+l)(j+m)} + B_{0}$   (1)

where the $M_{ij}$ are the pixel values of the model image. As in Alard & Lupton (1998), we model $K$ as a linear combination of basis functions:

$K_{lm} = \sum_{q=1}^{N_{\kappa}} a_{q}\,\kappa_{q,lm}$   (2)

where the $K_{lm}$ are the kernel pixel values, $l$ and $m$ are pixel indices corresponding to the column and row of the discrete kernel, $N_{\kappa}$ is the number of kernel basis functions, and the $\kappa_{q,lm}$ are the pixel values of the $q$th discrete kernel basis function with corresponding coefficient $a_{q}$. Substitution of equation (2) into equation (1) yields:

$M_{ij} = \sum_{q=1}^{N_{\kappa}} a_{q}\,B_{q,ij} + B_{0}$   (3)

with:

$B_{q,ij} = \sum_{l,m} \kappa_{q,lm}\,R_{(i+l)(j+m)}$   (4)

The image $B_{q}$ is referred to as a basis image. The model image has $N_{\kappa} + 1$ parameters. Note that equation (3) may be derived as a special case of equation (8) from Br13.

Assuming that the target image pixel values $I_{ij}$ are independent observations drawn from normal (or Gaussian) distributions with means $M_{ij}$ and standard deviations $\sigma_{ij}$, and that the parameters $a_{q}$ and $B_{0}$ of the model image have uniform Bayesian prior probability distribution functions (PDFs), then the maximum likelihood estimator (MLE) of the $a_{q}$ and $B_{0}$ may be found by minimising the chi-squared:

$\chi^{2} = \sum_{ij} \left( \frac{I_{ij} - M_{ij}}{\sigma_{ij}} \right)^{2}$   (5)

This is a general linear least-squares problem (see Press et al. 2007) with associated normal equations in matrix form:

$H\,\bar{a} = \bar{b}$   (6)

where the symmetric and positive-definite $(N_{\kappa}+1)\times(N_{\kappa}+1)$ matrix $H$ is the least-squares matrix, the vector $\bar{a}$ is the vector of model parameters, and $\bar{b}$ is another vector. For the vector of parameters:

$\bar{a} = \left( a_{1}, a_{2}, \ldots, a_{N_{\kappa}}, B_{0} \right)^{T}$   (7)

the elements of $H$ and $\bar{b}$ are given in terms of the basis images by the following, where the constant basis image of equation (10) corresponds to the differential background parameter $B_{0}$:

$H_{qq'} = \sum_{ij} \frac{B_{q,ij}\,B_{q',ij}}{\sigma_{ij}^{2}}$   (8)
$b_{q} = \sum_{ij} \frac{B_{q,ij}\,I_{ij}}{\sigma_{ij}^{2}}$   (9)
$B_{(N_{\kappa}+1),ij} = 1$   (10)

Cholesky factorisation of $H$, followed by forward and back substitution, is the most efficient and numerically stable method (Golub & Van Loan 1996) for obtaining the solution $\hat{\bar{a}}$ to the normal equations (i.e. $\hat{\bar{a}}$ is the vector of MLEs of the model parameters). The inverse matrix $H^{-1}$ is the covariance matrix of the parameter estimates, and consequently the uncertainty $\sigma_{a_{q}}$ in each $\hat{a}_{q}$ is given by:

$\sigma_{a_{q}} = \sqrt{\left(H^{-1}\right)_{qq}}$   (11)

For the spatially-invariant kernel, the photometric scale factor $P$ between the reference and target image is a constant:

$P = \sum_{l,m} K_{lm}$   (12)

As noted by Bramich (2008), it is good practice to subtract an estimate of the sky background level from $R$ before solving for the kernel and differential background in order to minimise any correlation between $P$ and $B_{0}$.

We adopt a noise model for the model-image pixel uncertainties $\sigma_{ij}$ of:

$\sigma_{ij}^{2} = \frac{\sigma_{0}^{2}}{F_{ij}^{2}} + \frac{M_{ij}}{G\,F_{ij}}$   (13)

where $\sigma_{0}$ is the CCD readout noise (ADU), $G$ is the CCD gain (e$^{-}$/ADU), and $F$ is the flat-field image. The $\sigma_{ij}$ depend on the $M_{ij}$, which renders our maximum likelihood problem non-linear and also requires that the MLE of the model image parameters is obtained by minimising $-2\ln\mathcal{L} = \chi^{2} + \sum_{ij}\ln(2\pi\sigma_{ij}^{2})$ instead of $\chi^{2}$. However, iterating the solution by considering the $\sigma_{ij}$ and the model image parameters in turn as fixed is an appropriate linearisation of the problem that still allows the model image parameters to be determined by minimising $\chi^{2}$ at each iteration as described above (since the $\sigma_{ij}$ are considered as constant whenever the model image parameters are being estimated). For the first iteration, we estimate the $\sigma_{ij}$ by approximating $M_{ij}$ in equation (13) with $I_{ij}$. A sigma-clip algorithm is employed at the end of each iteration except for the first to prevent outlier target-image pixel values from influencing the solution (e.g. cosmic rays, variable stars, etc.); the criterion for pixel rejection is $|I_{ij} - M_{ij}| > N_{\sigma}\,\sigma_{ij}$ for a fixed threshold $N_{\sigma}$. Only 3-4 iterations are required for convergence, and the final solution is highly insensitive to the initial choice of the $\sigma_{ij}$ (e.g. setting all of the $\sigma_{ij}$ to unity for the first iteration gives exactly the same result as setting the $\sigma_{ij}$ by approximating $M_{ij}$ in equation (13) with $I_{ij}$). Finally, it should be noted that lack of iteration introduces a bias into the kernel and differential background solution (see Br13 for a discussion and examples).
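To make the iteration scheme concrete, the following minimal NumPy sketch (our own illustration, not the authors' code; all function and variable names are hypothetical) solves the normal equations by Cholesky factorisation with the noise model of equation (13) refreshed between iterations, omitting sigma-clipping:

```python
import numpy as np

def solve_kernel_iteratively(I, basis_images, sigma0, gain, flat, n_iter=3):
    """Sketch of the iterated linear solve of Section 2.1: the DBF
    coefficients and differential background B0 are re-estimated with
    the noise model (eq. 13) refreshed from the current model image."""
    # Design matrix: one column per basis image plus a constant column for B0.
    A = np.column_stack([B.ravel() for B in basis_images] + [np.ones(I.size)])
    y, f = I.ravel(), flat.ravel()
    # First iteration: approximate M_ij by I_ij in the noise model.
    var = sigma0**2 / f**2 + np.maximum(y, 0.0) / (gain * f)
    for _ in range(n_iter):
        w = 1.0 / var
        H = (A * w[:, None]).T @ A       # least-squares matrix (eq. 8)
        b = A.T @ (w * y)                # right-hand side (eq. 9)
        c = np.linalg.cholesky(H)        # H = c c^T
        a_hat = np.linalg.solve(c.T, np.linalg.solve(c, b))
        M = A @ a_hat                    # model image pixel values (eq. 3)
        # Refresh the noise model; the maximum() guards against negative M.
        var = sigma0**2 / f**2 + np.maximum(M, 0.0) / (gain * f)
    return a_hat, M.reshape(I.shape)
```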

The difference image $D$ is defined by:

$D_{ij} = I_{ij} - M_{ij}$   (14)

from which we may define a normalised difference image $\tilde{D}$:

$\tilde{D}_{ij} = \frac{D_{ij}}{\sigma_{ij}}$   (15)

In the absence of varying objects, and for a reliable noise model, the distribution of the $\tilde{D}_{ij}$ values provides an indication of the quality of the difference image; namely, the $\tilde{D}_{ij}$ should follow a Gaussian distribution with zero mean and unit standard deviation. If the $\tilde{D}_{ij}$ follow a Gaussian distribution with significant bias or standard deviation greater than unity, then systematic errors are indicated, which may be due to under-fitting. If they follow a Gaussian distribution with standard deviation less than unity, then over-fitting may be indicated. If they follow a non-Gaussian distribution, then an inappropriate noise model may be at least part of the cause.

2.2 The Delta Basis Functions

The final ingredient required to construct a kernel solution is the definition of the set of kernel basis functions, which in turn defines the set of basis images. In this paper we consider only the delta basis functions, which are defined by:

$\kappa_{q,lm} = \delta_{l l_{q}}\,\delta_{m m_{q}}$   (16)

where a one-to-one correspondence associates the $q$th kernel basis function with the discrete kernel pixel coordinates $(l_{q}, m_{q})$, and $\delta_{uv}$ is the Kronecker delta-function:

$\delta_{uv} = \begin{cases} 1 & \text{if } u = v \\ 0 & \text{otherwise} \end{cases}$   (17)

As such, each DBF and its corresponding coefficient represent a single kernel pixel and its value, respectively. Note that this definition of the DBFs ignores the transformation that is required when the photometric scale factor is spatially varying (Br13).

The DBFs have a conveniently simple expression for the corresponding basis images:

$B_{q,ij} = R_{(i+l_{q})(j+m_{q})}$   (18)
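Because each DBF basis image is just a shifted copy of the reference image, the basis images can be generated without any explicit convolution. A minimal sketch (our own illustration; the wrap-around behaviour of np.roll means that a kernel-sized border should be excluded from the fit in practice):

```python
import numpy as np

def dbf_basis_image(R, l_q, m_q):
    """Basis image for the DBF at kernel-pixel offset (l_q, m_q): the
    reference image translated by that offset (eq. 18). Axis conventions
    are illustrative; np.roll wraps pixels around the image edges."""
    return np.roll(np.roll(R, -m_q, axis=0), -l_q, axis=1)
```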

2.3 Regularising The Delta Basis Functions

For the DBFs, Be12 introduced a refinement to the normal equations to control the trade-off between noise and resolution in the kernel solution. They used Tikhonov regularisation (see Press et al. 2007) to penalise kernel solutions that are too noisy by adding a penalty term to the chi-squared that is derived from the second derivative of the kernel surface and whose strength is parameterised by a tuning parameter $\lambda$. The addition of a penalty term to the chi-squared is equivalent to adopting a non-uniform Bayesian prior PDF on the model parameters. The corresponding maximum penalised likelihood estimator (MPLE) of the $a_{q}$ and $B_{0}$ is obtained by minimising:

$\chi^{2}_{\lambda} = \chi^{2} + \lambda\,n\,\bar{a}^{T}\mathbf{L}\,\bar{a}$   (19)

where $n$ is the number of data values (i.e. target-image pixels) and $\mathbf{L}$ is an $(N_{\kappa}+1)\times(N_{\kappa}+1)$ matrix with elements:

$\mathbf{L}_{qq'} = \begin{cases} \text{the number of DBFs adjacent to the } q\text{th DBF} & \text{if } q = q' \le N_{\kappa} \\ -1 & \text{if the } q\text{th and } q'\text{th DBFs are adjacent} \\ 0 & \text{otherwise} \end{cases}$   (20)

We consider two DBFs to be adjacent if they share a common kernel-pixel edge, connected if they can be linked via any number of pairs of adjacent DBFs, and disconnected if they are not connected. Note that the elements of the last row and column of $\mathbf{L}$, corresponding to the differential background parameter $B_{0}$, are all zero.

The matrix $\mathbf{L}$ is the Laplacian matrix representing the connectivity graph of the set of DBFs (cf. graph theory). It is symmetric, diagonally dominant, and positive-semidefinite. All of the eigenvalues of $\mathbf{L}$ are non-negative, while $N_{\mathrm{dis}} + 1$ of them are equal to zero. Here, $N_{\mathrm{dis}}$ is the number of disconnected sets of connected DBFs within the full set of DBFs (i.e. the number of components of the connectivity graph), and the extra zero eigenvalue comes from the zero row and column corresponding to $B_{0}$. Consequently, the rank of $\mathbf{L}$ is $N_{\kappa} - N_{\mathrm{dis}}$, as is the rank of $\lambda n\mathbf{L}$ for $\lambda > 0$, which are facts that we will use later in Section 3.4. It is also useful to note that if all of the DBFs are connected to each other, then $\mathbf{L}$ and $\lambda n\mathbf{L}$ are both of rank $N_{\kappa} - 1$. In Appendix A, we present a couple of example kernels with their corresponding $\mathbf{L}$ matrices.
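The Laplacian matrix can be assembled directly from the list of kernel pixels in a kernel design. A minimal sketch of one possible construction (our own illustration; names are hypothetical):

```python
import numpy as np

def connectivity_laplacian(kernel_pixels):
    """Laplacian matrix of the DBF connectivity graph. `kernel_pixels` is
    a list of (l, m) kernel-pixel coordinates, one per DBF; two DBFs are
    adjacent if their pixels share an edge. The matrix is padded with a
    zero row and column for the differential background parameter B0."""
    index = {pix: q for q, pix in enumerate(kernel_pixels)}
    n = len(kernel_pixels)
    L = np.zeros((n + 1, n + 1))
    for (l, m), q in index.items():
        for neighbour in ((l + 1, m), (l - 1, m), (l, m + 1), (l, m - 1)):
            if neighbour in index:
                L[q, q] += 1.0                   # degree (diagonal) term
                L[q, index[neighbour]] -= 1.0    # adjacency term
    return L
```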

The expression in equation (19) is at a minimum when its gradient with respect to each of the parameters $a_{q}$ and $B_{0}$ is equal to zero. Performing the differentiations and rewriting the set of linear equations in matrix form, we obtain the regularised normal equations:

$H_{\lambda}\,\bar{a} = \bar{b}$   (21)

where:

$H_{\lambda} = H + \lambda\,n\,\mathbf{L}$   (22)

Obtaining the solution to the regularised normal equations now proceeds as for the normal equations in Section 2.1. The covariance matrix of the parameter estimates is similarly given by $H_{\lambda}^{-1}$.

3 Model Selection Criteria

Here we describe our statistical tool-kit of model selection criteria that we will use for deciding on the best set of DBFs to be employed in the modelling of the convolution kernel. The criteria are valid for linear models, such as our model image in equation (3), and for data drawn from independent Gaussian distributions, which is a valid approximation to the Poissonian statistics of photon detection for CCD image data that only breaks down at very low signal levels ($\lesssim$16 e$^{-}$). We direct the reader to Konishi & Kitagawa (2008) for an essential reference on the information criteria presented below.

3.1 Hypothesis Testing For Nested Models

$\Delta\chi^{2}$-test

The $\Delta\chi^{2}$-test may be used to compare two models A and B with parameter sets $\bar{\theta}_{\mathrm{A}}$ and $\bar{\theta}_{\mathrm{B}}$ of sizes $N_{\mathrm{A}}$ and $N_{\mathrm{B}}$, respectively, that are nested (i.e. $\bar{\theta}_{\mathrm{A}} \subset \bar{\theta}_{\mathrm{B}}$). The $\Delta\chi^{2}$-statistic is defined by:

$\Delta\chi^{2} = \chi^{2}_{\mathrm{A}} - \chi^{2}_{\mathrm{B}}$   (23)

where $\chi^{2}_{\mathrm{A}}$ and $\chi^{2}_{\mathrm{B}}$ are the chi-squared values of models A and B, respectively (see equation (5)). Under the null hypothesis that model B does not provide a significantly better fit than model A, the $\Delta\chi^{2}$-statistic follows a chi-squared distribution with $N_{\mathrm{B}} - N_{\mathrm{A}}$ degrees of freedom (DoF). We set our threshold for rejection of the null hypothesis at 1 per cent (e.g. $\Delta\chi^{2} > 6.63$ for DoF = 1). We adopt the chi-squared values of models A and B as those calculated during the first iteration of our kernel solution procedure to enable a fair comparison between models, since they are both computed using the same pixel uncertainties (i.e. the $\sigma_{ij}$ estimated by approximating $M_{ij}$ in equation (13) with $I_{ij}$). However, the values of the model image parameters are still taken as those calculated in the final iteration of the kernel solution procedure.

Model selection using the $\Delta\chi^{2}$-test applies only to models A, B, …, Z with sequentially nested parameter sets $\bar{\theta}_{\mathrm{A}} \subset \bar{\theta}_{\mathrm{B}} \subset \cdots \subset \bar{\theta}_{\mathrm{Z}}$. Starting with models A and B, the $\chi^{2}$ is minimised for each model and the $\Delta\chi^{2}$-test is used to determine whether or not model B provides a significantly better fit than model A. If it does not, then model A is accepted as the correct model and the procedure terminates; otherwise, the next pair of models B and C are evaluated using the same method. The procedure continues by evaluating sequential model pairs in this fashion until either the test indicates that the next model does not provide a significantly better fit or until there are no more models to test.
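As an illustration, a hedged sketch of this sequential procedure (our own pseudo-implementation; the `models` input, a list of (chi-squared, number of parameters) pairs for the sequentially nested models computed with common pixel uncertainties, is a hypothetical structure):

```python
from scipy.stats import chi2

def sequential_delta_chi2_test(models, alpha=0.01):
    """Sequential nested-model selection with the Delta-chi-squared test.
    Returns the index of the selected model in `models`, which must be
    ordered by increasing number of parameters."""
    selected = 0
    for k in range(1, len(models)):
        chi2_a, n_a = models[selected]
        chi2_b, n_b = models[k]
        stat, dof = chi2_a - chi2_b, n_b - n_a
        if stat > chi2.ppf(1.0 - alpha, dof):
            selected = k        # model B provides a significantly better fit
        else:
            break               # accept the simpler model and terminate
    return selected
```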

$F$-test

The $F$-test may also be used to compare two nested models A and B. The $F$-statistic is defined by:

$F = \frac{\left(\chi^{2}_{\mathrm{A}} - \chi^{2}_{\mathrm{B}}\right)/\left(N_{\mathrm{B}} - N_{\mathrm{A}}\right)}{\chi^{2}_{\mathrm{B}}/\left(n - N_{\mathrm{B}}\right)}$   (24)

where $n$ is the number of data values. Again, under the null hypothesis that model B does not provide a significantly better fit than model A, $F$ follows an $F$-distribution with DoF $(N_{\mathrm{B}} - N_{\mathrm{A}},\, n - N_{\mathrm{B}})$. We set our threshold for rejection of the null hypothesis at 1 per cent (e.g. $F > 4.63$ for DoF = (2, 1000)) and we compute the $F$-statistic using the chi-squared values of models A and B calculated during the first iteration of our kernel solution procedure. Model selection with the $F$-test applies to models A, B, …, Z with sequentially nested parameter sets and proceeds in the same way as model selection with the $\Delta\chi^{2}$-test.

3.2 Information Criteria For Maximum Likelihood

The principle of maximum likelihood assumes a uniform prior PDF on the model parameters. A consequence of this is that, as parameters are added to a model, the maximum likelihood always increases, rendering it useless for the purpose of model selection between models with different dimensionality. Information criteria are used as an alternative for evaluating models with different numbers of parameters. They may be applied regardless of whether the models under consideration are nested or non-nested.

AICc

The Akaike information criterion (AIC; Akaike 1974) is derived as an asymptotic approximation to the Kullback-Leibler divergence (Kullback & Leibler 1951), which measures the distance of a candidate model from the true underlying model under the assumption that the true model is of infinite dimension and is therefore not represented in the set of candidate models. The aim of the AIC is to evaluate models based on their prediction accuracy.

A version of the AIC for Gaussian linear regression problems that corrects for the small-sample bias, while being asymptotically the same as the AIC for $n \to \infty$, was derived by Sugiura (1978):

$\mathrm{AIC_{c}} = -2\ln\mathcal{L}(\hat{\bar{\theta}}) + \frac{2N_{\mathrm{p}}\,n}{n - N_{\mathrm{p}} - 1}$   (25)

where $\mathcal{L}(\bar{\theta})$ is the likelihood function for the vector of $N_{\mathrm{p}}$ model parameters $\bar{\theta}$, and $\hat{\bar{\theta}}$ is a vector of MLEs for the model parameters. Model selection with the AICc is performed by maximising the likelihood for each model, and then minimising AICc over the full set of models under consideration.

TIC

The Takeuchi information criterion (TIC; Takeuchi 1976) is a generalisation of the AIC (Konishi & Kitagawa 2008) given by:

$\mathrm{TIC} = -2\ln\mathcal{L}(\hat{\bar{\theta}}) + 2\,\mathrm{tr}\!\left[\hat{I}\,\hat{J}^{-1}\right]$   (26)

where tr is the matrix trace operator, and $\hat{J} = J(\hat{\bar{\theta}})$ and $\hat{I} = I(\hat{\bar{\theta}})$. The matrices $J(\bar{\theta})$ and $I(\bar{\theta})$ are defined as:

$J(\bar{\theta}) = -\frac{1}{n}\sum_{i=1}^{n}\frac{\partial^{2}\ln\mathcal{L}_{i}(\bar{\theta})}{\partial\bar{\theta}\,\partial\bar{\theta}^{T}}$   (27)
$I(\bar{\theta}) = \frac{1}{n}\sum_{i=1}^{n}\frac{\partial\ln\mathcal{L}_{i}(\bar{\theta})}{\partial\bar{\theta}}\frac{\partial\ln\mathcal{L}_{i}(\bar{\theta})}{\partial\bar{\theta}^{T}}$   (28)
$\ln\mathcal{L}(\bar{\theta}) = \sum_{i=1}^{n}\ln\mathcal{L}_{i}(\bar{\theta})$   (29)

where $\mathcal{L}_{i}(\bar{\theta})$ is the likelihood function for the $i$th (single) data point. Model selection with the TIC proceeds as for the AICc.

BIC

The Bayesian approach to model selection is to choose the model with the largest Bayesian posterior probability. By approximating the posterior probability of each model, Schwarz (1978) derived the Bayesian information criterion (BIC) for model selection:

$\mathrm{BIC} = -2\ln\mathcal{L}(\hat{\bar{\theta}}) + N_{\mathrm{p}}\ln n$   (30)

The BIC generally includes a heavier penalty than the AICc for more complicated models (e.g. in the regime $n > e^{2} \approx 7.4$, for which $\ln n > 2$), therefore favouring models with fewer parameters than those favoured by the AICc. Model selection with the BIC proceeds as for the AICc.

BIC*

Konishi, Ando & Imoto (2004) performed a deeper Bayesian analysis to derive an improved BIC, denoted here by BIC*:

$\mathrm{BIC^{*}} = -2\ln\mathcal{L}(\hat{\bar{\theta}}) + N_{\mathrm{p}}\ln\!\left(\frac{n}{2\pi}\right) + \ln\left|J(\hat{\bar{\theta}})\right|$   (31)

Model selection with the BIC* proceeds as for the AICc.

It is worth mentioning that the BIC and BIC* are consistent model selection criteria in that they select with high probability the true model from the set of candidate models whenever the true model is represented in the set of candidate models.

3.3 Information Criteria For Maximum Penalised Likelihood

The AIC, AICc, TIC, BIC and BIC* apply only to models estimated by maximum likelihood.

GIC

Konishi & Kitagawa (1996) derived a further generalisation of the AIC and TIC, called the generalised information criterion (GIC), that can be applied to model selection for models with parameters estimated by maximum penalised likelihood:

$\mathrm{GIC}(\lambda) = -2\ln\mathcal{L}(\hat{\bar{\theta}}_{\lambda}) + 2\,\mathrm{tr}\!\left[\hat{Q}\,\hat{R}^{-1}\right]$   (32)

where $\hat{\bar{\theta}}_{\lambda}$ is a vector of MPLEs for the model parameters, and:

$\hat{R} = -\frac{1}{n}\sum_{i=1}^{n}\left[\frac{\partial^{2}\ln\mathcal{L}_{i}(\bar{\theta})}{\partial\bar{\theta}\,\partial\bar{\theta}^{T}}\right]_{\bar{\theta}=\hat{\bar{\theta}}_{\lambda}} + \lambda\,\mathbf{L}$   (33)
$\hat{Q} = \frac{1}{n}\sum_{i=1}^{n}\left[\left(\frac{\partial\ln\mathcal{L}_{i}(\bar{\theta})}{\partial\bar{\theta}} - \lambda\,\mathbf{L}\,\bar{\theta}\right)\frac{\partial\ln\mathcal{L}_{i}(\bar{\theta})}{\partial\bar{\theta}^{T}}\right]_{\bar{\theta}=\hat{\bar{\theta}}_{\lambda}}$   (34)

Here $\mathbf{L}$ is an $(N_{\kappa}+1)\times(N_{\kappa}+1)$ matrix, and we have used the fact that it is symmetric to slightly simplify the Konishi & Kitagawa (1996) expressions for $\hat{R}$ and $\hat{Q}$ (their equation (21)). Model selection with the GIC is performed by minimising GIC over $\lambda$ for each model, and then selecting the model for which GIC is minimised over the full set of models under consideration.

BICp

Using the same Bayesian analysis as for the derivation of the BIC*, Konishi, Ando & Imoto (2004) also extended the BIC to apply to model selection for models with parameters estimated by maximum penalised likelihood. For $\lambda n\mathbf{L}$ of rank $r$, and denoting the product of the non-zero eigenvalues of $\lambda n\mathbf{L}$ by $\prod^{+}(\lambda n\mathbf{L})$, they derived:

$\mathrm{BIC_{p}}(\lambda) = -2\ln\mathcal{L}(\hat{\bar{\theta}}_{\lambda}) + \lambda\,n\,\hat{\bar{\theta}}_{\lambda}^{T}\mathbf{L}\,\hat{\bar{\theta}}_{\lambda} + \ln\left|n\hat{R}\right| - (N_{\mathrm{p}} - r)\ln 2\pi - \ln\left({\textstyle\prod^{+}}(\lambda n\mathbf{L})\right)$   (35)

Model selection with the BICp proceeds as for the GIC.

3.4 Information Criteria For DIA

We may adapt the various information criteria from Sections 3.2 and 3.3 to our problem of solving for the kernel and differential background in DIA. The model image has $N_{\mathrm{p}} = N_{\kappa} + 1$ parameters, and we use the notation $\bar{\theta} = \bar{a}$, $\hat{\bar{\theta}} = \hat{\bar{a}}$ and $\hat{\bar{\theta}}_{\lambda} = \hat{\bar{a}}_{\lambda}$.

Firstly, we compute the log-likelihood function for data drawn from Gaussian distributions as:

$\ln\mathcal{L}(\bar{a}) = -\frac{1}{2}\chi^{2} - \frac{1}{2}\sum_{ij}\ln\sigma_{ij}^{2} - \frac{n}{2}\ln 2\pi$   (36)

For model selection purposes, the last term is constant and can be ignored. Secondly, we note that since the $\sigma_{ij}$ are considered as constant at each iteration of the maximum likelihood problem in Section 2.1, the matrices $J$ and $I$ evaluated at $\hat{\bar{a}}$ are given by:

$\hat{J} = \frac{1}{n}H$   (37)
$\hat{I}_{qq'} = \frac{1}{n}\sum_{ij}\frac{\left(I_{ij} - M_{ij}\right)^{2} B_{q,ij}\,B_{q',ij}}{\sigma_{ij}^{4}}$   (38)

For computational purposes it is useful to note that $\hat{I}$ is symmetric. From these two expressions, we may derive the following results:

$\mathrm{tr}\!\left[\hat{I}\,\hat{J}^{-1}\right] = n\,\mathrm{tr}\!\left[\hat{I}\,H^{-1}\right]$   (39)
$\ln\left|\hat{J}\right| = \ln|H| - N_{\mathrm{p}}\ln n$   (40)

Finally, we consider that the solution of the normal equations requires the computation of the Cholesky factorisation $H = \Lambda\Lambda^{T}$, where $\Lambda$ is a lower triangular matrix with positive diagonal entries $\Lambda_{qq}$, from which we may immediately calculate the determinant of $H$ as $|H| = \prod_{q}\Lambda_{qq}^{2}$. Hence, with minimal extra computation, the Cholesky factorisation of $H$ yields:

$\ln|H| = 2\sum_{q=1}^{N_{\mathrm{p}}}\ln\Lambda_{qq}$   (41)

Therefore, using equations (36) and (39) - (41) for the maximum likelihood problem in Section 2.1, we have the following formulae for the relevant information criteria from Section 3.2:

$\mathrm{AIC_{c}} = \chi^{2}(\hat{\bar{a}}) + \frac{2N_{\mathrm{p}}\,n}{n - N_{\mathrm{p}} - 1}$   (42)
$\mathrm{TIC} = \chi^{2}(\hat{\bar{a}}) + 2n\,\mathrm{tr}\!\left[\hat{I}\,H^{-1}\right]$   (43)
$\mathrm{BIC} = \chi^{2}(\hat{\bar{a}}) + N_{\mathrm{p}}\ln n$   (44)
$\mathrm{BIC^{*}} = \chi^{2}(\hat{\bar{a}}) + \ln|H| - N_{\mathrm{p}}\ln 2\pi$   (45)
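Assuming the forms of equations (41), (42), (44) and (45) as reconstructed above, these criteria can be evaluated cheaply by reusing the Cholesky factor from the normal-equations solve. A minimal sketch (our own illustration):

```python
import numpy as np

def information_criteria(chi2_min, H, n):
    """Evaluate AICc, BIC and BIC* for a kernel model, obtaining ln|H|
    from the Cholesky factor of the least-squares matrix H (eq. 41).
    Terms that are constant over all candidate models are dropped."""
    n_p = H.shape[0]                         # number of model parameters
    chol = np.linalg.cholesky(H)
    logdet_H = 2.0 * np.sum(np.log(np.diag(chol)))
    aicc = chi2_min + 2.0 * n_p * n / (n - n_p - 1)             # eq. 42
    bic = chi2_min + n_p * np.log(n)                            # eq. 44
    bic_star = chi2_min + logdet_H - n_p * np.log(2.0 * np.pi)  # eq. 45
    return aicc, bic, bic_star
```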

Considering now the maximum penalised likelihood problem, for constant $\sigma_{ij}$ we have $\partial\chi^{2}_{\lambda}/\partial\bar{a} = 2(H_{\lambda}\bar{a} - \bar{b})$, which is equal to $\bar{0}$ when evaluated at $\hat{\bar{a}}_{\lambda}$ (using equations (21) and (22)). Then, using equations (33) and (34), the matrices $\hat{R}$ and $\hat{Q}$ evaluated at $\hat{\bar{a}}_{\lambda}$ are given by:

$\hat{R} = \frac{1}{n}H_{\lambda}$   (46)
$\hat{Q} = \hat{I}_{\lambda} - \lambda^{2}\left(\mathbf{L}\,\hat{\bar{a}}_{\lambda}\right)\left(\mathbf{L}\,\hat{\bar{a}}_{\lambda}\right)^{T}$   (47)

Writing $\hat{I}_{\lambda}$ for the matrix $\hat{I}$ of equation (38) evaluated at $\hat{\bar{a}}_{\lambda}$, then, from these two expressions, we may derive the following results:

$\mathrm{tr}\!\left[\hat{Q}\,\hat{R}^{-1}\right] = n\,\mathrm{tr}\!\left[\hat{Q}\,H_{\lambda}^{-1}\right]$   (48)
$\ln\left|n\hat{R}\right| = \ln\left|H_{\lambda}\right|$   (49)

Also, the Cholesky factorisation $H_{\lambda} = \Lambda_{\lambda}\Lambda_{\lambda}^{T}$ yields:

$\ln|H_{\lambda}| = 2\sum_{q=1}^{N_{\mathrm{p}}}\ln\Lambda_{\lambda,qq}$   (50)

Finally, we note that the matrix $\lambda n\mathbf{L}$ is of rank $N_{\kappa} - N_{\mathrm{dis}}$, and hence $r = N_{\kappa} - N_{\mathrm{dis}}$.

Therefore, using equations (36) and (48) - (50) for the maximum penalised likelihood problem in Section 2.3, we have the following formulae for the relevant information criteria from Section 3.3:

$\mathrm{GIC}(\lambda) = \chi^{2}(\hat{\bar{a}}_{\lambda}) + 2n\,\mathrm{tr}\!\left[\hat{Q}\,H_{\lambda}^{-1}\right]$   (51)
$\mathrm{BIC_{p}}(\lambda) = \chi^{2}(\hat{\bar{a}}_{\lambda}) + \lambda\,n\,\hat{\bar{a}}_{\lambda}^{T}\mathbf{L}\,\hat{\bar{a}}_{\lambda} + \ln\left|H_{\lambda}\right| - (N_{\mathrm{p}} - r)\ln 2\pi - \ln\left({\textstyle\prod^{+}}(\lambda n\mathbf{L})\right)$   (52)

4 Kernel Design Algorithms

Let us introduce the concept of a kernel design, which we define as a specific choice of DBFs (or, equivalently, kernel pixels) to be employed in the modelling of the convolution kernel. From a master set of $N_{\kappa}$ DBFs, the model selection criteria will each select a single “best” kernel design, which requires the evaluation of the criteria via the estimation of the model image parameters for each of the $2^{N_{\kappa}}$ possible kernel designs. This computational problem is formidable and currently infeasible for the values of $N_{\kappa}$ that are required for typical kernel models (e.g. a relatively small 9x9 kernel pixel grid yields $2^{81} \approx 2.4\times10^{24}$ potential kernel designs!). Furthermore, branch-and-bound algorithms (e.g. Furnival & Wilson 1974) for speeding up this exhaustive search are only applicable to some of our model selection criteria in Section 3.4.

It is well known that by not considering all of the possible combinations of predictor variables in a linear regression problem (e.g. by using stepwise regression for variable selection), the optimal set of predictors may be misidentified. However, in our case, we know from the nature/purpose of the kernel (and copious amounts of prior experience!) that the true kernel model has a peak signal at the kernel coordinates corresponding to the translational offset between the reference and target images (which is at the kernel origin when they are properly registered) and that this signal decays away from the peak. There may be other peaks (e.g. due to a telescope jump in the target image), but again these also have profiles that decay away from the peak(s). The best kernel designs are therefore generally limited to sets of DBFs in close proximity that form relatively compact and regular shapes. Based on these observations, we have devised two algorithms for automatic kernel design that compare a manageable number of sensible kernel models: the circular kernel design algorithm (Section 4.1) and the irregular kernel design algorithm (Section 4.2).

4.1 The Circular Kernel Design Algorithm

 r_min (pix)   r_max (pix)   N_DBF        r_min (pix)   r_max (pix)   N_DBF
 0.000         1.000           1          4.472         5.000          69
 1.000         1.414           5          5.000         5.099          81
 1.414         2.000           9          5.099         5.385          89
 2.000         2.236          13          5.385         5.657          97
 2.236         2.829          21          5.657         5.831         101
 2.829         3.000          25          5.831         6.000         109
 3.000         3.162          29          6.000         6.083         113
 3.162         3.606          37          6.083         6.325         121
 3.606         4.000          45          6.325         6.403         129
 4.000         4.123          49          6.403         6.708         137
 4.123         4.243          57          6.708         7.000         145
 4.243         4.472          61
Table 1: The number of DBFs N_DBF in a circular kernel design for different ranges of the kernel radius r. The ranges are defined by r_min <= r < r_max. The table may be extended as appropriate for larger values of r.

One very simple way to greatly reduce the number of candidate kernel designs, in line with the expected kernel properties, is to restrict the kernel designs to those that correspond to a circularly shaped pixel grid centred at the origin of the kernel pixel coordinates. We therefore define a circular kernel design of radius $r$ as the set of DBFs corresponding to the kernel pixels whose centres lie at or within $r$ pixels of the kernel origin, which is taken to be at the centre of the central kernel pixel. As $r$ is increased, the circular kernel design includes progressively more DBFs, leading to a set of nested kernel designs. In Table 1, we list the number of DBFs in a circular kernel design for a range of values of $r$.
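For illustration, a short sketch (our own) that enumerates circular kernel designs and reproduces the DBF counts of Table 1:

```python
import numpy as np

def circular_design(radius):
    """DBF offsets (l, m) for a circular kernel design: the kernel pixels
    whose centres lie at or within `radius` pixels of the kernel origin."""
    r_int = int(np.ceil(radius))
    return [(l, m) for l in range(-r_int, r_int + 1)
                   for m in range(-r_int, r_int + 1)
                   if l * l + m * m <= radius * radius]

# The distinct nested designs appear at radii sqrt(l^2 + m^2); e.g. the
# first rows of Table 1: r = 0 -> 1 DBF, r = 1 -> 5 DBFs, r = sqrt(2) -> 9.
for r in sorted({np.hypot(l, m) for l in range(3) for m in range(3)})[:4]:
    print(r, len(circular_design(r)))
```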

The circular kernel design algorithm (CKDA) works for a pair of images and an adopted model selection criterion. The algorithm sequentially evaluates a set of nested model images. It finishes when the current model image under consideration fails the selection criterion, and consequently the previously considered model image is selected. For the $\Delta\chi^{2}$ and $F$-tests, this means that the current model image does not provide a significantly better fit than the previous one. For the information criteria, this means that the current model image yields a larger value of the criterion than the previous one, where $\lambda$ has already been optimised individually for each model if appropriate. The algorithm proceeds as follows (a minimal code sketch follows the steps):

  1. Fit the model image with the differential background as the only parameter and calculate the desired model selection criterion.

  2. Adopt a circular kernel design of radius $r = 0$ pix, which defines a kernel model with a single DBF. Fit the model image consisting of the differential background and the kernel model, and calculate the desired model selection criterion. If the model image from (i) is selected, then finish.

  3. Increment $r$ until the kernel model includes a new (larger) set of DBFs. Fit the model image consisting of the differential background and the new kernel model, and calculate the desired model selection criterion. If the model image from the previous iteration is selected, then finish.

  4. Repeat step (iii) until the algorithm terminates.
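A minimal sketch of the CKDA loop for an information criterion (our own illustration, reusing circular_design from the sketch above; fit_model is a hypothetical callable that fits the model image for a given design and returns the criterion value, with lower values preferred):

```python
import numpy as np

def ckda(fit_model, max_radius=7.0):
    """Circular kernel design algorithm: sequentially evaluate the nested
    circular designs and stop at the first rise of the criterion. An empty
    design fits the differential background alone (step i)."""
    best_design, best_crit = [], fit_model([])
    # Radii at which the nested circular designs gain new DBFs (cf. Table 1).
    radii = sorted({np.hypot(l, m) for l in range(8) for m in range(8)
                    if np.hypot(l, m) <= max_radius})
    for radius in radii:                         # steps (ii)-(iv)
        design = circular_design(radius)
        crit = fit_model(design)
        if crit >= best_crit:
            break                # the previous model image is selected
        best_design, best_crit = design, crit
    return best_design
```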

Note that the CKDA is intended to be applied to reference and target images that are registered to within a single pixel (but without requiring sub-pixel alignment necessitating image resampling).

Special care must be taken when using sigma-clipping during the fitting of the model images in the CKDA. Since each model image fit within the algorithm has the potential to clip different sets of target-image pixel values, the calculation of the model selection criterion may end up employing different sets of pixels at each step, which leads to undesirable jumps in its value that are unrelated to the properties of the fits. If sigma-clipping is required due to the presence of outlier pixel values, then, to avoid this problem, it is recommended to run the CKDA to conclusion without using sigma-clipping and to use the selected model image to identify and clip the outliers. The CKDA may then be re-run ignoring this same set of clipped pixel values at each step, and the whole process may be iterated more than once if necessary. This issue with sigma-clipping applies to all kernel design algorithms, and also whenever a fair comparison between algorithms is required (see Section 6).

In the early phases of testing the CKDA, we ran the algorithm past the finishing point to check that kernel designs with larger radii than the radius of the selected design do not yield smaller values of the information criterion, which, if it were the case, would indicate that the algorithm is terminating too early at a local minimum. We found that only in a relatively small proportion of the simulations (Section 5) is a slightly smaller value of the information criterion achieved for a kernel design with a larger radius than the selected design (usually 2-3 steps larger in Table 1), and that when this occurs, the values of the model performance metrics (Section 5.2) for the two designs are very similar, with no systematic improvement for the kernel design with the larger radius. Given that running the CKDA to larger radii comes at considerable cost in terms of processing power, the termination criterion of the CKDA was fixed at the first minimum of the information criterion. The same conclusions were also found to apply to the irregular kernel design algorithm (Section 4.2).

4.2 The Irregular Kernel Design Algorithm

Another way to limit the number of candidate kernel designs is to “grow” the kernel model as a connected set of DBFs from a single “seed” DBF by including one new DBF at each iteration. We call this the irregular kernel design algorithm (IKDA), and it works for a pair of images and an adopted model selection criterion as follows (a minimal code sketch follows the steps):

  1. Fit the model image with the differential background as the only parameter and calculate the desired model selection criterion.

  2. Define a master set of DBFs by taking an appropriately large grid of kernel pixels centred on the pixel at the kernel origin. For each DBF in the master set, fit the model image consisting of the differential background and a kernel model with the single DBF, and calculate the desired model selection criterion. If the model image from (i) is selected in all cases, then finish. Otherwise, accept the DBF from the master set that gives the best model image (according to the selection criterion) as the first DBF to be included in the kernel model (referred to as the seed DBF). Remove the seed DBF from the master set.

  3. Find the subset of DBFs from the master set that are adjacent to at least one of the DBFs in the kernel model from the previous iteration. For each candidate DBF in this subset, fit the model image consisting of the differential background and a new kernel model with a set of DBFs that is the union of the set of DBFs in the kernel model from the previous iteration with the candidate DBF. If the model image from the previous iteration is selected in all cases, then finish. Otherwise, accept the candidate DBF that gives the best model image as the next DBF to be included in the kernel model. Remove the accepted candidate DBF from the master set.

  4. Repeat step (iii) until the algorithm terminates.
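A minimal sketch of the IKDA growth loop for an information criterion (our own illustration; fit_model is the same hypothetical callable as in the CKDA sketch):

```python
def ikda(fit_model, half_size=9):
    """Irregular kernel design algorithm: grow a connected set of DBFs
    from a seed DBF, adding the best adjacent DBF at each iteration and
    stopping at the first rise of the criterion."""
    master = {(l, m) for l in range(-half_size, half_size + 1)
                     for m in range(-half_size, half_size + 1)}
    design, best_crit = [], fit_model([])               # step (i)
    while True:
        if design:                                      # step (iii)
            candidates = {(l + dl, m + dm) for (l, m) in design
                          for dl, dm in ((1, 0), (-1, 0), (0, 1), (0, -1))}
            candidates &= master
        else:                                           # step (ii): seed DBF
            candidates = set(master)
        if not candidates:
            return design
        crit, best = min((fit_model(design + [c]), c) for c in candidates)
        if crit >= best_crit:
            return design        # the previous model image is selected
        design.append(best)
        master.discard(best)
        best_crit = crit
```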

Note that the IKDA may be applied to reference and target images that are not registered to within a single pixel since step (ii) is effectively a form of image registration. Again, special care must be taken with the application of sigma-clipping within the IKDA (see Section 4.1).

The IKDA may generate different sequences of DBFs during the growth of the kernel model for different model selection criteria. However, for the $\Delta\chi^{2}$ and $F$-statistics, the IKDA follows the same sequence of DBFs, since both statistics are maximised at each iteration of the IKDA by minimising $\chi^{2}$. For similar reasons, the IKDA follows the same sequence of DBFs for the AICc and the BIC. In these cases, the different model selection criteria simply terminate the IKDA at different points in the sequence. Still, regardless of the actual model selection criterion used, we find that the IKDA always grows the kernel solution outwards from the selected seed DBF.

There are various alternative ways in which the kernel model may be grown within the IKDA. We have experimented with dropping the constraint that each new DBF must be adjacent to at least one DBF in the previous kernel model. However, this produced similar kernel solutions to those produced by the IKDA with the adjacency constraint but with an extra scattering of isolated DBFs arbitrarily far from the peak signal in the kernel. We also experimented with relaxing the definition of “adjacent” to include more nearby kernel pixels, but the results from these versions of the IKDA are virtually indistinguishable in terms of the model performance metrics from the results for the IKDA described above (both for the simulated and real data). Hence we have not considered these variations on the IKDA any further.

Finally, we mention that the IKDA may be modified to generate multiple seed DBFs (possibly as part of step (ii) or by generating a new seed DBF after the algorithm terminates for the first time). This modification would be useful for adapting to situations similar to when the telescope has jumped during a target image exposure, and consequently the true kernel model consists of two or more disconnected peaks.

5 Testing Automatic Kernel Design Algorithms On Simulated Images

The main aim of this paper is to find out which combination of kernel design algorithm and model selection criterion consistently selects a kernel model that provides the best performance in terms of model error, fit quality, and photometric accuracy. The conclusions drawn from our investigation will likely depend on the properties of the reference and target images, and hence we must systematically map out the performance of each method accordingly. This task is most efficiently performed by generating and analysing simulated images. Furthermore, with respect to model error, the performance of each method may only be measured through the use of simulations where the true model image is known. Simulations also provide a setting in which the noise model is precisely known since it is used to generate the simulated data. Thus simulations allow for the degree of under- or over-fitting to be assessed accurately. For these reasons, we have performed detailed DIA simulations for a wide range of image properties.

5.1 Generating Simulated Images

We employed a Monte Carlo method for our investigation. We adopted reasonable fixed values for the CCD readout noise $\sigma_{0}$ (ADU) and gain $G$ (e$^{-}$/ADU). For each simulation, we generated both noiseless and noisy versions of a reference and target image pair, along with the noise maps used for generating the noisy images, via the following procedure:

  1. The size of the reference image was set to 141×141 pixels.

  2. The sky background level $S$ (ADU) for the reference image was drawn from a uniform distribution over a fixed interval.

  3. The log of the field star density, parameterised as the number of stars per 100×100 pixel image region, was drawn from a uniform distribution, and this density was used to calculate the number of stars to be generated in the reference image.

  4. The pixel coordinates of each star in the reference image were drawn from a uniform distribution over the image area. Also, for each star, the flux $f$ (ADU) was drawn from a power-law distribution with cumulative form $N(>f) \propto f^{-3/2}$. This flux distribution is appropriate when imaging to a fixed depth through a uniform space density of stars (e.g. a good approximation for certain volumes in our Galaxy). For the purposes of performing PSF photometry on the difference image, and without loss of generality, the pixel coordinates of the brightest star were modified by an integer pixel shift to lie within the central pixel of the reference image.

  5. The same normalised two-dimensional Gaussian profile of full-width at half-maximum (FWHM) $w_{\mathrm{ref}}$ pixels was adopted for the profile of each star in the reference image. The value of $w_{\mathrm{ref}}$ was drawn from a uniform distribution over an interval (in pixels) adequately covering the under- to over-sampled regimes.

  6. A square image pixel array $R^{\mathrm{true}}$ of size 141×141 pixels was created with all of the pixel values set to the sky background level $S$.

  7. For each star in the reference image, the Gaussian profile was centred at the star pixel coordinates and sampled at 7 times the image resolution over the image area. The over-sampled Gaussian profile was then binned (by averaging) to match the image resolution and re-normalised to a sum of unity. Finally, it was scaled by the star flux and added to the image $R^{\mathrm{true}}$ (a code sketch of this oversample-and-bin procedure follows the list).

  8. An image of standard deviations (i.e. a noise map) $\sigma_{R}$ corresponding to $R^{\mathrm{true}}$ was created via:

    $\sigma_{R,ij}^{2} = \sigma_{0}^{2} + \frac{R^{\mathrm{true}}_{ij}}{G}$   (53)

    which may be derived from equation (13) by setting the $F_{ij}$ to unity. A 141×141 pixel image of values $g_{ij}$ drawn from a Gaussian distribution with zero mean and unit standard deviation was also generated and used to construct a noisy reference image via:

    $R_{ij} = R^{\mathrm{true}}_{ij} + \sigma_{R,ij}\,g_{ij}$   (54)
  9. The size of the target image was set to 141×141 pixels.

  10. For simplicity, the sky background level for the target image was set to $S$, which is equivalent to assuming a differential background of zero.

  11. A single sub-pixel shift in each of the $x$ and $y$ image coordinate directions was drawn from a uniform distribution and applied to the pixel coordinates of the stars in the reference image to generate the coordinates of the same stars in the target image. The fluxes of the stars in the target image were assumed to be the same as their fluxes in the reference image, which is equivalent to assuming non-variable stars and a photometric scale factor of unity.

  12. The convolution kernel matching the PSF between the reference and target images was assumed to be a normalised two-dimensional Gaussian profile of FWHM $w_{K}$ pixels, where a non-negative or negative value of $w_{K}$ indicates that the kernel convolves the reference or target image PSF, respectively. The value of $w_{K}$ was drawn from a uniform distribution and the FWHM $w_{\mathrm{tar}}$ of the Gaussian profile for the stars in the target image was then calculated from:

    $w_{\mathrm{tar}} = \sqrt{w_{\mathrm{ref}}^{2} + \mathrm{sgn}(w_{K})\,w_{K}^{2}}$   (55)
  13. A square image pixel array $I^{\mathrm{true}}$ of size 141×141 pixels was created with all of the pixel values set to the sky background level $S$.

  14. The flux profiles of the stars in the target image were added to $I^{\mathrm{true}}$ using the same method as that used in step (vii) for the reference image.

  15. An image of standard deviations $\sigma_{I}$ corresponding to $I^{\mathrm{true}}$ was created via:

    $\sigma_{I,ij}^{2} = \sigma_{0}^{2} + \frac{I^{\mathrm{true}}_{ij}}{G}$   (56)

    A new 141×141 pixel image of values $g'_{ij}$ drawn from a Gaussian distribution with zero mean and unit standard deviation was also generated and used to construct a noisy target image via:

    $I_{ij} = I^{\mathrm{true}}_{ij} + \sigma_{I,ij}\,g'_{ij}$   (57)
  16. The images $R$, $I$ and $\sigma_{I}$ were each trimmed to a size of 101×101 pixels. Hence the number of data values in each simulation is $n = 101^{2} = 10201$.

  17. The signal-to-noise (S/N) ratio of the noisy target image, S/N($I$), was calculated as:

    $\mathrm{S/N}(I) = \left[\sum_{ij}\frac{\left(I^{\mathrm{true}}_{ij} - S\right)^{2}}{\sigma_{I,ij}^{2}}\right]^{1/2}$   (58)

    The value of $\log_{10}(\mathrm{S/N}(I))$ is distributed approximately uniformly due to the way the field star density is generated in step (iii). It is important to note that it does not necessarily follow that a high-S/N target image has a bright high-S/N star in the centre. The high S/N in the target image may be the consequence of the presence of a reasonable number of faint stars. In this case, the star at the centre of the image is of low S/N, even though it is the brightest star in the image.
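A minimal sketch of the oversample-and-bin stellar-profile generation used in steps (vii) and (xiv) (our own illustration; names are hypothetical):

```python
import numpy as np

def add_star(image, x0, y0, flux, fwhm, oversample=7):
    """Add a normalised two-dimensional Gaussian stellar profile to `image`,
    sampled at `oversample` times the pixel resolution, binned back down by
    averaging, renormalised to unit sum, and scaled by the star flux."""
    ny, nx = image.shape
    # Sub-pixel coordinate grids (pixel j spans coordinates j-0.5 to j+0.5).
    x = (np.arange(nx * oversample) + 0.5) / oversample - 0.5
    y = (np.arange(ny * oversample) + 0.5) / oversample - 0.5
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    profile = np.exp(-((x[None, :] - x0) ** 2 + (y[:, None] - y0) ** 2)
                     / (2.0 * sigma ** 2))
    binned = profile.reshape(ny, oversample, nx, oversample).mean(axis=(1, 3))
    image += flux * binned / binned.sum()
```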

In total, we generated the reference and target images for 548,392 simulations. We call this set of images “Simulation set S1”.

The above method for generating reference and target images represents the case where the reference image has approximately the same S/N ratio as the target image. However, it is common practice in DIA to create a high-S/N reference image by stacking images or integrating longer. We therefore also repeated the whole procedure of generating simulated images for reference images with ten times less variance in each pixel value than the corresponding target images (or $\sqrt{10} \approx 3.2$ times better S/N). This was achieved by scaling the $\sigma_{R,ij}$ in step (viii) by $1/\sqrt{10}$. This second set of reference and target images, “Simulation set S10”, was generated for a total of 529,571 simulations. Figure 1 shows an example noisy reference and target image pair from one of these simulations.

5.2 Model Performance Metrics

We used the following metrics to measure the quality of each kernel and differential background solution (a code sketch evaluating the per-simulation quantities follows the list):

  • Model error: The mean squared error (MSE) measures how well the fitted model image $M$ matches the true model image, which for our simulations is the noiseless target image $I^{\mathrm{true}}$. It is defined by:

    $\mathrm{MSE} = \frac{1}{n}\sum_{ij}\left(M_{ij} - I^{\mathrm{true}}_{ij}\right)^{2}$   (59)

    Kernel and differential background solutions with the smallest values of MSE exhibit the best performance in terms of model error.

    We also consider the photometric scale factor $P$ and the differential background $B_{0}$ as supplementary measures of model error. For our simulations, the closer to unity the value of $P$ and the closer to zero the value of $B_{0}$, the better the performance of a kernel and differential background solution with respect to model error. Systematic errors in the photometric scale factor are especially important, since a fractional error $\epsilon$ in $P$ introduces a fractional error of $\epsilon$ into the photometry (Bramich et al. 2015).

  • Fit quality: The bias and excess variance in the fitted model image may be measured by the following statistics:

    $\mathrm{MFB} = \frac{1}{n}\sum_{ij}\frac{I_{ij} - M_{ij}}{\sigma_{ij}}$   (60)
    $\mathrm{MFV} = \frac{1}{n}\sum_{ij}\frac{\left(I_{ij} - M_{ij}\right)^{2}}{\sigma_{ij}^{2}}$   (61)

    MFB is the mean fit bias and MFV is the mean fit variance, with units of sigma and sigma-squared, respectively. The closer to zero the value of MFB, and the closer to unity the value of MFV, the better the performance of a kernel and differential background solution with respect to fit quality.

  • Photometric accuracy: To assess the photometric accuracy, we perform PSF fitting on the difference image at the position of the brightest star in the reference image under the assumption that the reference image PSF is perfectly known. In detail, we generate a normalised two-dimensional Gaussian profile of FWHM $w_{\mathrm{ref}}$ pixels centred at the pixel coordinates of the brightest star in the reference image (guaranteed by construction to be within half a pixel of the image centre) and sampled at 7 times the image resolution. The over-sampled Gaussian is then binned (by averaging) to match the image resolution, convolved with the kernel solution, trimmed in extent to a circularly shaped pixel grid around the star coordinates, and renormalised. This model PSF for the target image is then optimally scaled to the difference image at the position of the brightest star by simultaneously fitting a scaling factor and an additive constant, and using the known pixel variances $\sigma_{I,ij}^{2}$ in the target image. The difference flux $f_{\mathrm{diff}}$ of the brightest star on the photometric scale of the reference image is then computed by dividing the fitted scaling factor by the photometric scale factor $P$.

    The theoretical minimum variance $\sigma_{\mathrm{min}}^{2}$ in the difference flux for PSF fitting with a scaling factor only is given by:

    $\sigma_{\mathrm{min}}^{2} = \left[\sum_{ij}\frac{\left(P^{\mathrm{true}}\,\Psi_{ij}\right)^{2}}{\sigma_{I,ij}^{2}}\right]^{-1}$   (62)

    where $P^{\mathrm{true}}$ is the true photometric scale factor ($P^{\mathrm{true}} = 1$ in our simulations) and $\Psi$ is the true PSF for the brightest star in the target image (a normalised two-dimensional Gaussian profile of FWHM $w_{\mathrm{tar}}$ pixels in our simulations). Since all of the stars in the simulations are non-variable, the best kernel and differential background solutions should yield a distribution of values of $f_{\mathrm{diff}}/\sigma_{\mathrm{min}}$ with zero mean and unit variance. Hence, for a set of simulations indexed by $s$, appropriate measures for assessing the photometric accuracy are:

    $\mathrm{MPB} = \frac{1}{N_{\mathrm{sim}}}\sum_{s}\frac{f_{\mathrm{diff},s}}{\sigma_{\mathrm{min},s}}$   (63)
    $\mathrm{MPV} = \frac{1}{N_{\mathrm{sim}}}\sum_{s}\left(\frac{f_{\mathrm{diff},s}}{\sigma_{\mathrm{min},s}}\right)^{2}$   (64)

    MPB is the mean photometric bias and MPV is the mean photometric variance, with units of $\sigma_{\mathrm{min}}$ and $\sigma_{\mathrm{min}}^{2}$, respectively. We note that even though MPV is normalised by the theoretical minimum variance in the difference flux, it may still achieve values that are less than unity when the target image is over-fit and/or when the model PSF for the target image differs from the true PSF.
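Assuming the forms of equations (59)-(61) as reconstructed above, the per-simulation quantities can be computed as in the following sketch (our own illustration):

```python
import numpy as np

def simulation_metrics(I_noisy, I_true, M, var):
    """Per-simulation model error and fit quality metrics: noisy target
    I_noisy, noiseless target I_true, fitted model image M, and pixel
    variances var. MPB and MPV are accumulated over many simulations."""
    mse = np.mean((M - I_true) ** 2)         # model error (eq. 59)
    resid = (I_noisy - M) / np.sqrt(var)
    mfb = np.mean(resid)                     # mean fit bias (eq. 60)
    mfv = np.mean(resid ** 2)                # mean fit variance (eq. 61)
    return mse, mfb, mfv
```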

5.3 Results

Figure 1: A reference and target image pair from simulation set S10 are shown at the top. The corresponding results for each of the 19 kernel solution methods are shown below. For each method, the difference image, kernel solution, and model performance metrics are displayed. The difference images and kernels are each displayed using common linear scales. Processing times were measured for non-optimised code running on a desktop computer with an Intel Core i7-2600 CPU (3.40 GHz) and 16 GB RAM.
Figure 2: The true PSF for the brightest star in the target image from Figure 1 is shown at the top left. The true PSF is a normalised two-dimensional Gaussian of FWHM 4.78 pix centred in the image stamp using the sub-pixel coordinates of the brightest star. A selection of six model PSFs for the target image are shown in the top row and labelled with the corresponding kernel solution methods. The residuals of these model PSFs from the true PSF are shown in the bottom row. Each row of plots uses the linear scale reproduced at the right-hand end of the row.
Figure 3: Examples from the simulations of the variation of the GIC and BICp criteria as a function of $\lambda$ for the kernel solution methods CKDA-GIC (black), CKDA-BICp (red), 19x19-GIC (green), and 19x19-BICp (blue). For the CKDA-GIC and CKDA-BICp methods, the curves correspond to the selected kernel radius. For each curve, the minimum is marked with a solid circle. For the CKDA-GIC and 19x19-GIC methods, the value of the curve at $\lambda = 0$ is marked on the left-hand side of the plot with a triangle. Note that while GIC converges to TIC for $\lambda \to 0$, BICp diverges as $\lambda \to 0$ because of the divergence of the term involving $\prod^{+}(\lambda n\mathbf{L})$ in equation (52). Hence no triangles are plotted for the CKDA-BICp and 19x19-BICp methods.

For each possible combination of kernel design algorithm and model selection criterion, we computed kernel and differential background solutions for all of the reference and target image pairs in both of the simulation sets S1 and S10. Furthermore, for comparison purposes, for each simulation we solved for a model image employing a square 19x19-pixel kernel design, which corresponds to the unregularised kernel analysed in Be12. We also solved for the same 19x19-pixel kernel design with regularised DBFs, where the optimal choice of $\lambda$ was determined using either GIC or BICp (equations 51 and 52), which corresponds to the regularised kernel analysed in Be12. In all cases, we used three iterations for each solution, but without employing sigma-clipping, since the simulated images do not suffer from outlier pixel values (see Section 2.1). The optimisation of $\lambda$ for the GIC and BICp model selection criteria was performed using a binary search algorithm in $\ln\lambda$ over a fixed range, with a final resolution in $\lambda$ of 15 per cent, while also considering the limit $\lambda \to 0$. Finally, the corresponding model performance metrics from Section 5.2 were calculated for each solution.
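The text specifies a binary search in $\ln\lambda$; as an illustration we sketch the closely related golden-section-style (ternary) search, under the assumption that the criterion curve is unimodal in $\ln\lambda$ (our own hypothetical helper):

```python
import numpy as np

def optimise_lambda(criterion_of, lam_lo=1e-4, lam_hi=10.0, rel_tol=0.15):
    """Ternary search in ln(lambda) for the regularisation strength that
    minimises GIC or BICp, to a final resolution of ~15 per cent in lambda.
    `criterion_of(lam)` returns the criterion value at tuning parameter lam."""
    lo, hi = np.log(lam_lo), np.log(lam_hi)
    while hi - lo > np.log1p(rel_tol):
        third = (hi - lo) / 3.0
        m1, m2 = lo + third, hi - third
        if criterion_of(np.exp(m1)) < criterion_of(np.exp(m2)):
            hi = m2          # the minimum lies in [lo, m2]
        else:
            lo = m1          # the minimum lies in [m1, hi]
    return np.exp(0.5 * (lo + hi))
```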

Hereafter we use a string of the form ALGORITHM-CRITERION to refer to a specific combination of kernel design algorithm (CKDA or IKDA) and model selection criterion ($\Delta\chi^{2}$, $F$, AICc, TIC, GIC, BIC, BIC* or BICp). For the 19x19-pixel kernel designs, we use 19x19-UNREG, 19x19-GIC, and 19x19-BICp. Each of these combinations constitutes a kernel solution method, and hence we have 19 methods to consider.

In Figure 1, we show the difference images, kernel solutions, and model performance metrics for each of the 19 kernel solution methods applied to the reference and target image pair displayed at the top (taken from simulation set S10). The target image is of medium S/N, and the reference and target images are both over-sampled. Notice how the regularisation in the 19x19-GIC and 19x19-BICp methods drastically reduces the noise in the kernel compared to the 19x19-UNREG method, as demonstrated previously in Be12. Notice also how, as expected, the BIC-type criteria (BIC, BIC* and BICp) select kernel designs with fewer DBFs than the kernel designs selected by the AIC-type criteria (AICc, TIC and GIC). Somewhat surprising is the “spidery” form of the kernel solutions generated by the IKDA. A selection of model PSFs for this target image, used to perform PSF fitting on the difference images, are displayed in the top row of Figure 2 alongside the true PSF for the brightest star. The residuals of these model PSFs from the true PSF (bottom row) demonstrate that the spidery form of the IKDA kernel solutions has no discernible detrimental effect, when compared to the other kernel solutions, on the convolution of the reference image PSF to obtain the target image PSF.

To provide an idea of what the functional forms of GIC and BICp look like, we plot these quantities as a function of $\lambda$ for two example simulations in Figure 3. Each plot shows the curves for the CKDA-GIC, CKDA-BICp, 19x19-GIC and 19x19-BICp methods.