A New Algorithm For Difference Image Analysis
Abstract
In the context of difference image analysis (DIA), we present a new method for determining the convolution kernel matching a pair of images of the same field. Unlike the standard DIA technique, which involves modelling the kernel as a linear combination of basis functions, we consider the kernel as a discrete pixel array and solve for the kernel pixel values directly using linear least-squares. The removal of basis functions from the kernel model is advantageous for a number of compelling reasons. Firstly, it removes the need for the user to specify such functions, which makes for a much simpler user application and avoids the risk of an inappropriate choice. Secondly, basis functions are constructed around the origin of the kernel coordinate system, which requires that the two images are perfectly aligned for an optimal result. The pixel kernel model is sufficiently flexible to correct for image misalignments, and in the case of a simple translation between images, image resampling becomes unnecessary. Our new algorithm can be extended to spatially varying kernels by solving for individual pixel kernels in a grid of image subregions and interpolating the solutions to obtain the kernel at any one pixel.
keywords:
techniques: image processing, techniques: photometric, methods: statistical

1 Introduction
Difference image analysis (DIA) has rapidly moved to the forefront of modern techniques for making time-series photometric measurements on digital images. The method attempts to match one image to another by deriving a convolution kernel describing the changes in the point spread function (PSF) between images. When applied to a time-series of images using a high signal-to-noise reference image, the differential photometry that can be performed on the difference images regularly provides superior accuracy to more traditional profile-fitting photometry, achieving errors close to the theoretical Poisson limits. Moreover, DIA is the only reliable way to analyse the most crowded stellar fields.
One will find DIA in use in many projects studying object variability. For example, microlensing searches (e.g. Bond et al. 2001; Woźniak et al. 2001) have been revolutionised by the ability of DIA to deal with exceptionally crowded fields, and surveys for transiting planets (e.g. Bramich et al. 2005; Mochejska et al. 2005) looking for shallow (~1%) photometric eclipses have benefited substantially from the extra accuracy obtained with this method. Also, DIA is not limited to stellar photometry, as illustrated by the discovery of light echoes from three ancient supernovae in the Large Magellanic Cloud (Rest et al. 2005).
The first attempts at image subtraction are summarised in the introduction of Alard & Lupton (1998) (from now on AL98) and are based on trying to determine the convolution kernel by taking the ratio of the Fourier transforms of matching bright isolated stars on each image (Tomaney & Crotts 1996). Development of DIA reached an important landmark in AL98 with their algorithm to determine the convolution kernel directly in image space (rather than Fourier space) from all pixels in the images by decomposing the kernel onto a set of basis functions. The algorithm is very successful and efficient, and with the extension to a space-varying kernel solution described in Alard (2000) (from now on AL00), the method has become the current standard in DIA. In fact, all DIA packages either use the associated software package ISIS2.2 (http://www2.iap.fr/users/alard/package.html; e.g. Woźniak 2000; Gössl & Riffeser 2002), or are implementations of the Alard algorithm (e.g. Bond et al. 2001). We refer to the method described in AL98 and AL00 as the Alard algorithm.
In this letter we suggest a change to the main algorithm to determine the convolution kernel that retains the linearity of the least-squares problem and yet is simpler to implement, has fewer input parameters and is in general more robust (Section 2). We compare our algorithm directly to the Alard algorithm (Section 3), and suggest further techniques that increase the quality of the subtracted images. We conclude in Section 4.
2 A New Approach To The Kernel Solution
2.1 Motivation
Consider a pair of registered images of the same dimensions, one being the reference image with pixels $R_{ij}$, and the other the current image to be analysed with pixels $I_{ij}$, where $i$ and $j$ are pixel indices referring to the column and row of the image. Ideally the reference image will be the better-seeing image of the two and have a very high signal-to-noise ratio. This can be achieved in practice by stacking a set of best-seeing images.
As with the method of AL98, we use the model
$I_{ij} = (R \otimes K)_{ij} + B_{ij}$   (1)
to represent the current image $I$, where we wish to find a suitable convolution kernel $K$ and differential background $B$. Formulating this as a least-squares problem, we want to minimise the chi-squared
$\chi^2 = \sum_{ij} \left( \frac{I_{ij} - M_{ij}}{\sigma_{ij}} \right)^2$   (2)
where $M_{ij} = (R \otimes K)_{ij} + B_{ij}$ is the model for the current image and the $\sigma_{ij}$ represent the pixel uncertainties.
At this point in the Alard algorithm, the problem is converted to standard linear least-squares by decomposing the kernel onto a set of gaussian basis functions, each multiplied by polynomials of the kernel coordinates $u$ and $v$, and by assuming that the differential background is represented by a polynomial function of the image coordinates $x$ and $y$. Spatial variation of the convolution kernel is modelled by further multiplying the kernel basis functions by polynomials in $x$ and $y$.
This method has a major drawback in that it assumes that the chosen kernel decomposition is sufficiently complex to model the convolution kernel in detail. How do we know that we are making the correct choice of basis functions? Different situations may require different combinations of basis functions of varying complexity. In fact, a feature of all current DIA packages (which are all based on the AL98 prescription for kernel basis functions) is the requirement that the user defines the number of gaussian basis functions used, their associated sigma values and the degrees of the modifying polynomials. This sort of parameterisation can end up being confusing for the user, and can require a large amount of experimentation to obtain the optimal result for a specific data set.
2.2 Solving For A Spatially Invariant Kernel Solution
With this motivation, we have developed a new DIA algorithm in which we make no assumptions about the functional form of the basis functions representing the kernel. Considering a spatially invariant kernel, we represent the kernel as a pixel array with pixels $K_{lm}$, where $l$ and $m$ are pixel indices corresponding to the column and row of the kernel. We also define the differential background as some unknown constant $B_0$. Hence we may rewrite equation (1) as:
$I_{ij} = \sum_{lm} K_{lm} \, R_{(i+l)(j+m)} + B_0$   (3)
This equation has $N_K + 1$ unknowns for which we require a solution, where $N_K$ is the number of kernel pixels. Note that the kernel may be of any shape that includes the pixel $(l, m) = (0, 0)$, and so to preserve symmetry in all directions, we adopt a circular kernel (instead of the standard square shape).
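As a concrete illustration, the pixels belonging to such a circular kernel can be enumerated directly. This is a minimal sketch in Python; the function name and the centre-based radius convention are our own:

```python
import math

def circular_kernel_offsets(radius):
    """Enumerate the (l, m) pixel offsets of a circular kernel.
    The kernel always contains the central pixel (0, 0), and a pixel
    is included when its centre lies within `radius` of the origin."""
    r = int(math.ceil(radius))
    return [(l, m)
            for l in range(-r, r + 1)
            for m in range(-r, r + 1)
            if l * l + m * m <= radius * radius]
```

For example, a radius of 10 pix yields a kernel of 317 pixels, the kernel size quoted in Section 3.3.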
In order to solve for $K_{lm}$ and $B_0$ in the least-squares sense, we note that the $\chi^2$ in equation (2) is at a minimum when the gradient of $\chi^2$ with respect to each of the parameters $K_{lm}$ and $B_0$ is equal to zero. Performing the differentiations and rewriting the set of linear equations in matrix form, we obtain the matrix equation $U a = V$ with:
$U_{pq} = \sum_{ij} \frac{R_{(i+l_p)(j+m_p)} \, R_{(i+l_q)(j+m_q)}}{\sigma_{ij}^2}, \qquad V_p = \sum_{ij} \frac{I_{ij} \, R_{(i+l_p)(j+m_p)}}{\sigma_{ij}^2}$   (4)
where $p$ and $q$ are generalised indices for the vector of unknown quantities $a = (K_{l_1 m_1}, \ldots, K_{l_{N_K} m_{N_K}}, B_0)^{T}$, with associated kernel indices $(l_p, m_p)$ and $(l_q, m_q)$ respectively, and where the factor $R_{(i+l_p)(j+m_p)}$ is replaced by unity for the index corresponding to $B_0$. Finding the solutions for $K_{lm}$ and $B_0$ (iteration is required for a self-consistent solution, since the solution depends on the pixel variances $\sigma_{ij}^2$, which in turn depend on the image model values; see Section 2.3) requires the construction of the matrix $U$ and vector $V$, inverting $U$ and calculating $a = U^{-1} V$.
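To make the construction of $U$ and $V$ concrete, the following sketch solves for a spatially invariant pixel kernel and constant background on a pair of small arrays. The function name is ours, and edge pixels whose model values would need off-image reference pixels are simply excluded:

```python
import numpy as np

def solve_pixel_kernel(R, I, offsets, sigma=None):
    """Least-squares solution for the kernel pixel values K_lm and the
    constant differential background B0 in the model
    I_ij = sum_lm K_lm R_(i+l)(j+m) + B0, via the normal equations U a = V."""
    ny, nx = I.shape
    r = max(max(abs(l), abs(m)) for l, m in offsets)
    sl = np.s_[r:ny - r, r:nx - r]          # pixels with a complete model value
    y = I[sl].ravel()
    w = np.ones_like(y) if sigma is None else 1.0 / sigma[sl].ravel() ** 2
    # One basis vector per kernel pixel (a shifted copy of the reference
    # image), plus a constant vector for the differential background.
    cols = [np.roll(R, (-l, -m), axis=(0, 1))[sl].ravel() for l, m in offsets]
    cols.append(np.ones_like(y))
    A = np.stack(cols, axis=1)
    U = (A * w[:, None]).T @ A              # U_pq of equation (4)
    V = (A * w[:, None]).T @ y              # V_p of equation (4)
    a = np.linalg.solve(U, V)
    return a[:-1], a[-1]                    # kernel values, B0

# Demo: a one-pixel shift should be recovered as a shifted delta kernel,
# which is the behaviour exploited in test A of Section 3.1.
rng = np.random.default_rng(1)
R = rng.random((20, 20))
I = np.roll(R, (-1, -1), axis=(0, 1))       # I_ij = R_(i+1)(j+1)
offsets = [(l, m) for l in range(-2, 3) for m in range(-2, 3)]
K, B0 = solve_pixel_kernel(R, I, offsets)
```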
Every pixel on both the reference image and current image has the potential to be included in the calculation of and . However, we ignore bad/saturated pixels on both images, and also any pixels on the current image for which the calculation of the corresponding model pixel value includes a bad/saturated pixel on the reference image. This implies that a single bad/saturated pixel on the reference image can discount a set of pixels equal to the kernel area on the current image. Hence bad/saturated pixels on the reference image should be kept to a minimum, and excessively large kernels should be avoided.
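The propagation of reference-image bad pixels into the set of unusable current-image pixels amounts to a dilation of the bad-pixel mask by the kernel footprint. A sketch (numpy only; the function name is ours):

```python
import numpy as np

def unusable_pixel_mask(ref_bad, cur_bad, radius):
    """Current-image pixels to exclude from U and V: the current
    image's own bad/saturated pixels, plus every pixel whose model
    value draws on a bad/saturated reference pixel anywhere within a
    circular kernel footprint of the given radius (in pixels)."""
    ny, nx = ref_bad.shape
    bad = cur_bad.copy()
    r = int(np.ceil(radius))
    for i, j in zip(*np.nonzero(ref_bad)):
        for l in range(-r, r + 1):
            for m in range(-r, r + 1):
                if l * l + m * m <= radius * radius:
                    ii, jj = i - l, j - m   # model pixel (ii, jj) uses ref pixel (i, j)
                    if 0 <= ii < ny and 0 <= jj < nx:
                        bad[ii, jj] = True
    return bad
```

A single bad reference pixel in the image interior therefore discounts a kernel-area's worth of current-image pixels (317 pixels for a circular kernel of radius 10).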
The kernel sum $P = \sum_{lm} K_{lm}$ is a measure of the mean scale factor between the reference image and the current image, and consequently it includes the effects of relative exposure time and atmospheric extinction. We refer to $P$ as the photometric scale factor. Although it is not essential, we suggest that a constant background estimate is subtracted from the reference image before solving for the kernel and differential background, since this will minimise any correlation between $P$ and $B_0$.
Finally, we mention that a difference image $D$ is defined as $D_{ij} = I_{ij} - M_{ij}$, where $M_{ij}$ denotes the model pixel values given by the right-hand side of equation (3). Assuming that most objects in the reference image are constant sources, a difference image will consist of random noise (mainly Poisson noise from photon counting) except where a source has varied in brightness or the background pattern has varied. Sources that are brighter or dimmer at the epoch of the current image relative to the epoch of the reference image will show up as positive or negative flux residuals, respectively, on the difference image. These areas may be measured to yield a difference flux for each object of interest.
2.3 Uncertainties Arise From The Model, Not The Data
We take the following standard CCD noise model for the pixel variances $\sigma_{ij}^2$:
$\sigma_{ij}^2 = \frac{\sigma_0^2}{F_{ij}^2} + \frac{M_{ij}}{G \, F_{ij}}$   (5)
where $\sigma_0$ is the CCD readout noise (ADU), $G$ is the CCD gain (e-/ADU) and $F_{ij}$ is the master flat field image. Note that the $\sigma_{ij}$ depend on the image model $M_{ij}$ and consequently, fitting becomes an iterative process. Note also that we assume that the reference image and master flat field image are noiseless, since these are high S/N ratio images. Finally, if the current image was registered with the reference image via a geometric transformation, then the flat field that is actually used in the noise model must be the result of the same transformation applied to the original master flat field.
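Equation (5) is straightforward to implement; a minimal sketch, with the model image and readout noise both in ADU:

```python
import numpy as np

def pixel_variance(model, flat, readnoise, gain):
    """Pixel variances of equation (5): flat-fielded readout noise plus
    Poisson noise computed from the *model* counts, in ADU^2.
    `readnoise` is in ADU and `gain` in e-/ADU."""
    model = np.asarray(model, float)
    return (readnoise / flat) ** 2 + model / (gain * flat)
```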
In order to calculate an initial kernel and differential background solution, we set the $\sigma_{ij}$ using the image values $I_{ij}$ in place of the model values. In subsequent iterations, we use the current image model $M_{ij}$ to set the $\sigma_{ij}$ as per equation (5). We also employ a 3σ clip algorithm during the iterative model fitting process in order to prevent outlier image pixel values from entering the solution. After each iteration, we calculate the absolute normalised residuals $\epsilon_{ij} = |I_{ij} - M_{ij}| / \sigma_{ij}$ for all pixels. Any pixels with $\epsilon_{ij} > 3$ are ignored in subsequent iterations. The iterations are stopped when no more image pixels are rejected and at least two iterations have been performed.
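The iteration scheme can be sketched for the simplest possible model, a one-pixel kernel (a photometric scale $P$) plus constant background, which keeps the fit to two parameters while showing the variance update and the cumulative 3σ clip. This is a simplified stand-in for the full kernel fit; all names are ours:

```python
import numpy as np

def iterate_fit(R, I, flat, readnoise, gain, max_iter=20):
    """Iterative least squares with model-based variances for the toy
    model I ~ P*R + B0.  Variances follow equation (5); pixels whose
    absolute normalised residual exceeds 3 are dropped from later
    iterations; stop once no pixel is rejected and >= 2 iterations ran."""
    good = np.ones(I.size, bool)
    var = (readnoise / flat) ** 2 + I / (gain * flat)   # first pass: data-based
    A = np.stack([R.ravel(), np.ones(R.size)], axis=1)
    for it in range(max_iter):
        w = good / var.ravel()
        U = (A * w[:, None]).T @ A
        V = (A * w[:, None]).T @ I.ravel()
        P, B0 = np.linalg.solve(U, V)
        M = P * R + B0                                   # current image model
        var = (readnoise / flat) ** 2 + M / (gain * flat)
        eps = np.abs(I - M).ravel() / np.sqrt(var.ravel())
        rejected = good & (eps > 3.0)
        good = good & ~rejected
        if it >= 1 and not rejected.any():
            break
    return P, B0, good

# Demo: exact linear data with one cosmic-ray-like outlier.
R = np.linspace(10.0, 100.0, 200)
I = 2.0 * R + 5.0
I[7] += 500.0
P, B0, good = iterate_fit(R, I, flat=1.0, readnoise=3.0, gain=1.0)
```

The outlier is clipped on the first pass and the second pass recovers the underlying scale and background exactly.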
2.4 Solving For A Spatially Variant Kernel Solution
In extending our new method to solving for a spatially variant kernel solution, we preserve flexibility by splitting the image area into a grid of subregions and solving for the kernel and differential background in each subregion. The coarse grid of kernel and differential background solutions may be interpolated to yield the solution corresponding to any given image pixel. In this way we make no assumptions about how the kernel and differential background vary across the image area. This is in contrast to AL00, whose method extends the kernel basis functions by further multiplication by polynomials in the image coordinates $x$ and $y$, and therefore requires two more input parameters from the user, namely the degrees of the polynomials describing the spatial variation of the kernel and the differential background.
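A sketch of the interpolation step: given the coarse grid of per-subregion kernel solutions, the kernel at an arbitrary pixel is obtained by bilinear interpolation between the four surrounding subregion centres. The function name and the centre-based coordinate convention are ours:

```python
import numpy as np

def kernel_at(kernel_grid, x, y, nx, ny):
    """Bilinearly interpolate a grid of kernel solutions to the image
    position (x, y).  kernel_grid[gy][gx] is the kernel (an array of
    pixel values) solved for subregion (gx, gy); the image is nx by ny
    pixels.  Positions outside the grid of centres are clamped."""
    gny, gnx = len(kernel_grid), len(kernel_grid[0])
    gx = min(max(x / nx * gnx - 0.5, 0.0), gnx - 1.0)
    gy = min(max(y / ny * gny - 0.5, 0.0), gny - 1.0)
    x0, y0 = int(gx), int(gy)
    x1, y1 = min(x0 + 1, gnx - 1), min(y0 + 1, gny - 1)
    fx, fy = gx - x0, gy - y0
    return ((1 - fy) * ((1 - fx) * kernel_grid[y0][x0] + fx * kernel_grid[y0][x1])
            + fy * ((1 - fx) * kernel_grid[y1][x0] + fx * kernel_grid[y1][x1]))
```

The differential background solutions can be interpolated in exactly the same way.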
3 Comparisons With The Alard Algorithm
3.1 Initial Tests
To illustrate the potential advantages of our new kernel solution method over that of AL98, we carry out a set of simple tests on a 1024×1024 pixel CCD image of the globular cluster NGC 1904. In each test we use the original image as the reference image $R$ and a transformed version of the original image as the current image $I$, where the transformations employed are simple, spatially invariant and typical of astronomical imaging. We attempt to solve for the kernel using our new method, which is implemented in a software package called DANDIA (Bramich, in prep.), and we compare the solution to that obtained using the ISIS2.2 software from AL00. We use the ISIS2.2 default parameters specifying 3 gaussian basis functions of differing sigmas (in pix) with modifying polynomials of degree 6, 4 and 3, respectively. For both software packages, we choose to solve for a spatially invariant kernel of size 27×27 pixels, and a constant differential background.
The better the match between the convolved reference image and the current image, the smaller the value of $\chi^2$. We gauge the relative quality of the kernel solutions by calculating a noise ratio from the values of $\chi^2$ obtained for a small 80×80 pixel subregion using ISIS2.2 and DANDIA, respectively.
The results of the tests described below are shown in Figure 1:

In test A, the current image has been created by shifting the reference image by one pixel in each of the positive $x$ and $y$ spatial directions, without resampling. The corresponding kernel should be the identity kernel (central pixel value of 1 and 0 elsewhere) shifted by one pixel in each of the negative $x$ and $y$ kernel coordinates. DANDIA recovers this kernel to within numerical rounding errors, whereas ISIS2.2 recovers a peak pixel value of 0.995 with other absolute pixel values of up to 0.004. Consequently the residuals in the ISIS2.2 difference image are considerably worse than those for DANDIA, as reflected in the noise ratio.

In test B, the current image has been created by convolving the reference image with a gaussian of FWHM 4.0 pix. Both DANDIA and ISIS2.2 recover the kernel successfully, but DANDIA outperforms ISIS2.2 in the noise ratio.

In test C, we shifted the reference image by half a pixel in each of the positive $x$ and $y$ spatial directions to create the current image, an operation that required the resampling of the reference image. We used the cubic OMOMS resampling method (see Section 3.2). ISIS2.2 fails to reproduce the highly complicated kernel matching the two images, whereas DANDIA does a nearly perfect job.

In test D, we simulate a telescope jump by blending the reference image with a resampled version of itself shifted by 3.5 pixels in each of the positive $x$ and $y$ spatial directions. The corresponding kernel is a combination of the identity kernel and a shifted version of the kernel from test C. DANDIA accurately reproduces this kernel with a central pixel value of 0.60015, whereas ISIS2.2 produces a poor approximation of the kernel with a central pixel value of 0.631.
It is evident that the gaussian basis functions used in ISIS2.2 limit the flexibility of the kernel solution to modelling kernels that are centred near the kernel centre and that have scale sizes similar to the sigmas of the gaussians employed. It is only in test B that ISIS2.2 can closely model the kernel, simply because the kernel itself is a gaussian. Tests A, C & D show how ISIS2.2 is unable to model sharp, complicated and off-centred kernels. DANDIA does not suffer from any of these limitations since it makes no assumption about the kernel shape, and hence it performs superbly in all of the above tests.
3.2 Image Resampling
In Section 2, we make the assumption that the reference image and current image are registered, which implies that one of the images has been transformed to the pixel coordinate system of the other image, usually via image resampling. Ideally one should transform the reference image to the current image since the reference image forms part of the model. In this way, the pixel variances in the current image are left uncorrelated from pixel to pixel. However, most implementations of DIA transform the current image to the coordinate system of the reference image using image resampling.
We suggest two improvements to this methodology. Firstly, if resampling is to be employed, one should use an optimal resampling method. We employ the cubic OMOMS (Optimal Maximal-Order-Minimal-Support) basis function for resampling, which is constructed from a linear combination of the cubic B-spline function and its derivatives. The OMOMS class of functions has the highest approximation order and smallest approximation error constant for a given support (Blu et al. 2001).
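For reference, the cubic O-MOMS basis function of Blu et al. (2001) has the closed piecewise form $\beta^3(x) + \beta^{3\prime\prime}(x)/42$, which evaluates to:

```python
import numpy as np

def omoms3(x):
    """Cubic O-MOMS basis function (Blu et al. 2001), support |x| < 2.
    Like the cubic B-spline, it is non-interpolating, so resampling with
    it requires a prefiltering step to obtain the expansion coefficients."""
    x = np.abs(np.asarray(x, float))
    near = 0.5 * x**3 - x**2 + x / 14 + 13 / 21      # 0 <= |x| < 1
    far = -x**3 / 6 + x**2 - 85 / 42 * x + 29 / 21   # 1 <= |x| < 2
    return np.where(x < 1, near, np.where(x < 2, far, 0.0))
```

The integer-shifted copies of this basis sum to unity at every position, as required for a resampling kernel that preserves the image background.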
Secondly, our kernel model does not use basis functions that are functions of the kernel pixel coordinates. Consequently, for two images that require only a translation to be registered, the image resampling is incorporated in the kernel solution, avoiding the problem of correlated pixel noise. DIA is used extensively for extracting lightcurves of objects in time-series images, which usually have only a small pixel shift between images. By translating the current image to the reference image by an integer pixel shift, avoiding image resampling, the kernel solution process can do the rest of the job of matching the reference image to the current image.
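A sketch of the integer-pixel registration step (a helper of our own; pixels shifted in from outside the frame are flagged rather than invented):

```python
import numpy as np

def integer_register(image, dx, dy):
    """Shift a (float) image by the integer parts of (dx, dy) without
    resampling.  Pixels shifted in from outside the frame are set to
    NaN so they can be masked as bad.  Returns the shifted image and
    the residual subpixel offsets, which are left for the kernel
    solution to absorb."""
    sx, sy = int(round(dx)), int(round(dy))
    ny, nx = image.shape
    out = np.full_like(image, np.nan)
    out[max(0, sy):ny - max(0, -sy), max(0, sx):nx - max(0, -sx)] = \
        image[max(0, -sy):ny - max(0, sy), max(0, -sx):nx - max(0, sx)]
    return out, dx - sx, dy - sy
```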
3.3 Final Tests
We now test our new algorithm on a pair of 1024×1024 pixel images of NGC 1904 from the same camera, with FWHMs of 3.2 pix and 4.9 pix. Using matching star pairs, we derive a linear transformation between the images that consists of a translation with negligible rotation, shear and scaling. From the calibration images, we measure a gain of 1.48 e-/ADU and a readout noise of 4.64 ADU, and we construct a master flat field for use in the noise model. On the left of Figure 2, we present 100×100 pixel cutouts of the reference image (the better-seeing image) and the current image.
When calculating the $\sigma_{ij}$ of the difference images, we use a modified version of equation (5) to account for the noise contribution from the single-exposure reference image:
$\sigma_{ij}^2 = \frac{\sigma_0^2}{F_{ij}^2} + \frac{M_{ij}}{G \, F_{ij}} + H^2 \sum_{lm} K_{lm}^2 \, \sigma_{R,(i+l)(j+m)}^2$   (6)
where $K_{lm}$ here denotes the space-variant kernel evaluated at pixel $(i, j)$, $\sigma_{R,(i+l)(j+m)}^2$ are the reference image pixel variances (computed as in equation 5), and $H$ is a factor correcting for the noise distortion from resampling the reference image. The value of $H$ depends on the resampling method used and the coordinate transformation applied. We calculate $H$ by generating a 1024×1024 pixel image of values drawn from a normal distribution with zero mean and unit sigma, resampling the image using the same method and transformation as that applied to the reference image, and then fitting a gaussian to the histogram of transformed pixel values, the sigma of which indicates the value of $H$. For cubic OMOMS resampling and the transformation between our two test images, we obtain $H$ = 0.884.
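The Monte-Carlo estimate of $H$ can be sketched as follows. Here a cubic-spline subpixel shift from scipy stands in for the cubic O-MOMS transform actually used, and for brevity the sigma is taken directly from the pixel values rather than from a gaussian fit to their histogram:

```python
import numpy as np
from scipy.ndimage import shift

def estimate_H(resample, size=512, trim=8, seed=0):
    """Estimate the noise-distortion factor H: resample a zero-mean,
    unit-sigma gaussian noise image with the same method/transformation
    as applied to the reference image and measure the output sigma.
    Edges affected by boundary handling are trimmed first."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    core = resample(noise)[trim:-trim, trim:-trim]
    return core.std()

# Half-pixel shift with cubic-spline interpolation (a stand-in for OMOMS);
# interpolation smooths white noise, so H comes out below unity.
H = estimate_H(lambda im: shift(im, (0.5, 0.5), order=3))
```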
Our first pair of tests involves registering the images by resampling the reference image via cubic OMOMS and then using DANDIA (test E) and ISIS2.2 (test F) to generate difference images. For DANDIA, we solve for an array of circular kernels corresponding to a 10×10 grid of image subregions, where each kernel contains 317 pixels. The kernel used to convolve each pixel on the reference image is calculated via bilinear interpolation of the array of kernels. The results of test E are displayed in the upper middle panel of Figure 2, where we show the difference image normalised by the pixel noise from equation (6) with a linear scale from −2 to 2. Two variable stars (RR Lyraes) are visible, and the cosmic ray from the reference image has created a negative flux residual on the difference image. In the same panel we plot the histogram of normalised pixel values overlaid with a gaussian fit, and calculate a $\chi^2$, ignoring the small pixel areas including the variable stars and the cosmic ray (250 pix). The 100×100 pixel cutout corresponds to one image subregion used to determine a kernel solution, and hence we may calculate a reduced chi-squared by assuming 318 free parameters (the 317 kernel pixel values plus the differential background).
For ISIS2.2, we solve for a spatially variant kernel of degree 2 with a spatially variant differential background of degree 3, in addition to the other default kernel basis function parameters (see Section 3.1; 328 free parameters). The results of test F are shown in the upper right panel of Figure 2. We obtain the $\chi^2$ and, assuming 3 free parameters per image subregion, the corresponding reduced chi-squared (see Figure 2).
Tests G & H involve registering the images to within 1 pixel by translating the reference image via an integer pixel shift. Then we apply DANDIA (test G) and ISIS2.2 (test H) to obtain kernel solutions, avoiding the use of resampling. For DANDIA and ISIS2.2, we obtain reduced chi-squared values of 0.99 and 1.00, respectively (see Figure 2).
Visually, the normalised difference image cutouts in Figure 2 are very similar, and differences are only noticeable after detailed scrutiny. However, the analysis reveals that our algorithm performs considerably better than the Alard algorithm (test E performs 0.60 better than test F, and test G performs 0.38 better than test H), and that image resampling degrades the difference images (test G performs 0.48 better than test E, and test H performs 0.70 better than test F). The highest quality difference image was produced by using DANDIA on the two images aligned to within 1 pixel but without resampling (test G, which performs 1.08 better than test F).
4 Conclusions
We have presented a new method for determining the convolution kernel matching a best-seeing reference image to another image of the same field. The method involves modelling the kernel as a pixel array, avoiding the use of possibly inappropriate basis functions, and eliminating the need for the user to specify which basis functions to use via numerous parameters. For images that require a translation to be registered, the kernel pixel array incorporates the resampling process in the kernel solution, avoiding the need to resample images, which degrades their quality and creates correlated pixel noise. Kernels modelled by basis functions may only partly compensate for subpixel translations since the basis functions are centred at the origin of the kernel coordinates.
We have shown that our new method can produce higher quality difference images than ISIS2.2. Ideally the reference image should be aligned with the current image, preferably without resampling, but using OMOMS resampling when necessary. The flexibility of our kernel model allows the construction of difference images for telescope jumps, or trailed images, which is where ISIS2.2 fails. These improvements have important implications for time-series photometric surveys. Better quality difference images imply more accurate lightcurves, and the increased kernel flexibility will lead to less data loss due to telescope tracking and/or focus errors.
Acknowledgements
D.M. Bramich would like to thank K. Horne and M. Irwin for their useful advice, and A. Arellano Ferro for supplying the test images. This work is dedicated to Phoebe and Chloe Bramich Muñiz.
References
 Alard (2000) Alard C. 2000, A&AS, 144, 363
 Alard & Lupton (1998) Alard C. & Lupton R.H., 1998, ApJ, 503, 325
 Blu et al. (2001) Blu T., Thévenaz P. & Unser M., 2001, IEEE Transactions on Image Processing, 10, 1069
 Bond et al. (2001) Bond I.A. et al., 2001, MNRAS, 327, 868
 Bramich et al. (2005) Bramich D.M. et al., 2005, MNRAS, 359, 1096
 Gössl & Riffeser (2002) Gössl C.A. & Riffeser A., 2002, A&A, 381, 1095
 Mochejska et al. (2005) Mochejska B.J. et al., 2005, AJ, 129, 2856
 Rest et al. (2005) Rest A. et al., 2005, Nature, 438, 1132
 Tomaney & Crotts (1996) Tomaney A.B. & Crotts A.P.S., 1996, AJ, 112, 2872
 Woźniak (2000) Woźniak P.R., 2000, Acta Astronomica, 50, 421
 Woźniak et al. (2001) Woźniak P.R., Udalski A., Szymański M., Kubiak M., Pietrzyński G., Soszyński I. & Żebruń K., 2001, Acta Astronomica, 51, 175