
Performance of an Algorithm for Estimation of Flux, Background and Location on One-Dimensional Signals

Abstract

Optimal estimation of signal amplitude, background level, and photocentre location is crucial to the combined extraction of astrometric and photometric information from focal plane images, and in particular from the one-dimensional measurements performed by Gaia on intermediate to faint magnitude stars. Our goal is to define a convenient maximum likelihood framework, suited to efficient iterative implementation and to assessment of noise level, bias, and correlation among variables. The analytical model is investigated numerically and verified by simulation over a range of magnitude and background values. The estimates are unbiased, with a well-understood correlation between amplitude and background, and with a much lower correlation of either of them with location, further alleviated in case of signal symmetry. Two versions of the algorithm are implemented and tested against each other, respectively, for independent and combined parameter estimation. Both are effective and provide consistent results, but the latter is more efficient because it takes into account the flux-background estimate correlation.

astrometry - instrumentation: miscellaneous - methods: analytical - methods: numerical - techniques: image processing

1 Introduction

Astrometric measurements are often concerned with the limiting precision achievable in the estimate of relative position among celestial sources, imaged by some instrument, and with the practical definition of convenient location algorithms (Lindegren, 1978; Gai et al., 1998). Such mathematical frameworks often identify convenient quantities, in particular an assessment of the limiting achievable precision, e.g. in terms of the Cramér-Rao lower bound on location error (Mendez et al., 2013), or of systematic errors associated to the mismatch between the measurement model and actual data distribution (Gai et al., 2013a). The connection between maximum likelihood (ML), related to Cramér-Rao, and least square approach is also investigated in the literature (Lobos et al., 2015). The models have been tested in real observations, even from the ground (Cameron et al., 2009), where the atmospheric turbulence is usually the limiting factor, and in the lab (Gai et al., 2001), sometimes to very high precision (Zhai et al., 2011).

Often the problem formulation is simplified with the adoption of a one-dimensional (1D) signal model, which is more directly applicable, e.g., to the Gaia measurements over its intermediate to faint magnitude sample, which is briefly reviewed in Sec. 1.1, but can nonetheless be considered sufficiently general, in particular by dealing separately with image location on each coordinate. Besides, the problem is sometimes more conveniently formulated in terms of estimating not only the image location, but also the photometric level at the same time, i.e. working in a bidimensional space with respect to the unknowns (Mendez et al., 2014). The rationale is that, even in case of known objects, the actual photon count accumulated in a specific exposure is dependent on the current measurement conditions (atmospheric transmission, effective exposure time, and instrument parameters), which may not be stable or known with accuracy adequate to the precision goals.

In astronomical practice, measurement precision is affected also by background, which is in many cases estimated over image regions close to the object of interest and considered free from residual target flux or spoilers. Background estimation can be performed in many different ways, depending on a number of assumptions related to the actual observing conditions and goals, and requires careful evaluation. It is therefore a critical part of several data reduction and analysis packages, e.g. the public domain AstroImageJ (Collins et al., 2016).

The subject of this paper is an algorithm for the estimation of signal Amplitude, Background, and photoCentre location (ABC) on 1D data corresponding to intermediate magnitude Gaia observations. We deal, therefore, with a three-dimensional (3D) problem, in the sense of Mendez et al. (2014), formulated in Sec. 2 in maximum likelihood terms. The expressions are expanded in a form suited to an iterative solution, assuming the variables are either independent or a combined set. These two approaches are implemented as different algorithms, and their noise and correlation properties are derived, evidencing relevant consequences for symmetric signals.

Because the mathematical expressions involved cannot be easily solved except in the case of extremely simple signal models, the framework is verified by numerical simulation as described in Sec. 3, also taking into account some of the aspects most relevant to implementation in a data reduction system (Morbidelli et al., 2012).

The two algorithms have comparable performance with respect to noise and systematic error, but the correlation between flux and background, discussed in Sec. 4, makes the independent estimate significantly less efficient.

1.1 Gaia Observations

The Gaia mission (Gaia Collaboration et al., 2016; Perryman, 2005; de Bruijne, 2012) is aimed at global astrometry at the level of a few tens of micro-arcsec (hereafter, µas), producing an all-sky catalogue of position, proper motion, and parallax, complete down to the mission limiting magnitude. The Gaia concept (http://sci.esa.int/gaia/) relies on the self-consistency of the astrometric information of celestial objects throughout operation, factoring out the instrument parameters and their evolution by calibration of the overall data set. The Hipparcos experience suggests that the approach is viable with respect to the detection and modeling of the instrumental parameter secular evolution and of long term variations over time scales longer than a few revolution periods, i.e. above about one day.

The detector of Gaia (Short et al., 2005; Kohley et al., 2012) is a large CCD mosaic operated in Time Delay Integration (TDI) mode, split into different regions for operation purposes: the detection of the incoming stars by the initial two strips of CCDs, the Sky Mapper; wideband imaging on the Astrometric Field (AF); and low dispersion imaging on the Blue/Red Photometer. Additionally, a fraction of the field feeds the Radial Velocity Spectrometer, operating over a restricted magnitude range, which also provides astrophysical characterization of the bright stellar sample. The Basic Angle Monitoring device provides auxiliary metrology information, i.e. a real-time estimate of the Basic Angle value (Gielesen et al., 2012; Gai et al., 2013b).

The regions of interest over the astrometric focal plane are defined by onboard logic on the detection results in the Sky Mapper: for each detected star, the placement of ensuing observing windows on each CCD is computed.
The elementary exposure has a fixed duration and provides, for most intermediate magnitude stars, a 1D data set of 18 samples, binned in the across-scan (low resolution) direction. Fainter stars are binned and read on 12 samples. Brighter stars are read and downloaded as bidimensional elementary images. The brightest stars are read with shorter exposure times, reducing the effective integration by activation of on-chip gates.

1.2 Estimate of Flux, Background and Photocentre

The elementary 1D signal depends on the instrument response (including optics, TDI, attitude, and CCD) and on the individual source magnitude and spectrum. The apparent amplitude of a star is thus slightly different on each exposure during the transit, due to the different electro-optical response of subsequent CCDs. Keeping track of the star flux level over the AF is therefore a valuable contribution to instrument monitoring, as many stars are continuously crossing the field at different heights, each sampling a whole CCD strip. The effective broadband magnitude of each star can be defined, e.g., by its average signal level over the whole transit. Source variability, if any, is most likely identified between observations at different epochs.

Moreover, due to field superposition from the two Gaia telescopes, the background from both sky areas is superposed on the image of each star, so that every star is measured over a different background at each epoch. The estimation of background is therefore required at the individual exposure level, since it is not known in advance or easily modeled.

Finally, the most obvious parameter desired for each elementary exposure on the astrometric field is the star location, i.e. the estimate of its photocentre.

The next section is devoted to setting up the mathematical framework describing the maximum likelihood search for photometric level, background and image location, in a formulation convenient for iterative numerical implementation.

2 Method: Maximum Likelihood

The signal is one-dimensional, centred on the “true” photoCentre position x_C and represented at the pixel positions x_k by the model

s_k = A \, T(x_k - x_C) + B \qquad (1)

where T is the flux-independent Template (normalised to unit integral), A is the Amplitude of the total photometric flux, and B is the Background, assumed to be uniform. The signal template is derived from sufficiently large, homogeneous data sets (e.g. from the same CCD and spectral class) in calibration (Gai et al., 2013a; Busonero et al., 2014), to ensure that a good Signal to Noise Ratio (SNR) is achieved.
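
As an illustration of Eq. 1, the sketch below evaluates the signal model in Python, with a unit-integral Gaussian standing in for the calibrated template T; the window size, template width, and parameter values are illustrative assumptions, not the Gaia calibration products.

```python
import numpy as np

def template(u, sigma=1.1):
    """Stand-in for the calibrated template T: a Gaussian sampled at the
    pixel offsets u = x_k - x_C, normalised to unit sum over the window
    (approximating the unit-integral condition for a unit pixel size)."""
    t = np.exp(-0.5 * (u / sigma) ** 2)
    return t / t.sum()

def signal_model(x, A, B, xc):
    """Eq. 1: s = A * T(x - x_C) + B, with a uniform background B."""
    return A * template(x - xc) + B

x = np.arange(12) - 5.5          # a 12-sample window, in pixel units (illustrative)
s = signal_model(x, A=2.0e4, B=50.0, xc=0.3)
```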

The elementary exposure generates the actual data samples d_k, affected by photon noise from the signal and background, readout noise, and possibly other contributions. The discrepancy between the detected signal and its model is then

\epsilon_k = d_k - s_k \qquad (2)

assumed unbiased and uncorrelated, i.e., respectively,

\langle \epsilon_k \rangle = 0 , \qquad \langle \epsilon_k \, \epsilon_j \rangle = \sigma_k^2 \, \delta_{kj} , \qquad (3)

where \sigma_k^2 is the signal variance, assumed to be known.

The estimate of centroid, flux and background can be implemented in a maximum likelihood framework by the requirement of minimising an error function (corresponding to the \chi^2 in other applications), chosen as the weighted square discrepancy F:

F = \sum_k \frac{(d_k - s_k)^2}{\sigma_k^2} \qquad (4)

Hereafter, for simplicity, the pixel index k will be dropped; the summations cover the whole pixel range.

The solution minimises the sum of squared differences between samples and corresponding model values. It may be noted that the signal model formulation in Eq. 1 is such that the solution cannot be strictly derived in terms of linear least squares, because, e.g., amplitude and photo-centre are intrinsically coupled. As often happens, it is possible to envisage an iterative solution, based on an acceptable initial guess of the parameters, and successive approximations providing progressively better estimates at each iteration. Given the simple signal profile, i.e. a bell-shaped curve affected by moderate noise for a reasonable SNR, the problem is still expected to have a single optimum solution. Moreover, the least square solution often provides a good approximation to the maximum likelihood solution and to the Cramér-Rao limit (Lobos et al., 2015).
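
For reference, the error function of Eq. 4 can be evaluated directly from the data and the known per-sample variance; a minimal sketch, reusing the signal_model function defined in the previous sketch.

```python
import numpy as np

def square_discrepancy(d, var, x, A, B, xc):
    """Eq. 4: weighted square discrepancy between the data d and the model s,
    with known per-sample variance var = sigma^2 (arrays over the window)."""
    r = d - signal_model(x, A, B, xc)   # signal_model from the sketch above
    return np.sum(r ** 2 / var)
```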

The stationary point of the error functional satisfies the system of equations

\partial F / \partial A = \partial F / \partial B = \partial F / \partial x_C = 0 ,

i.e.

\sum \frac{d - s}{\sigma^2} \, \frac{\partial s}{\partial p} = 0 , \qquad p \in \{ A, B, x_C \} , \qquad (5)

which can be solved with ordinary matrix methods after the variables are made explicit.

The three variables can be expanded at first order with respect to the current (input) estimates at iteration n of amplitude, background, and centre location, (A_n, B_n, x_{C,n}), to derive the corrections (\delta A, \delta B, \delta x_C) required to achieve the improved (output) estimates (A_{n+1}, B_{n+1}, x_{C,n+1}), i.e.

A_{n+1} = A_n + \delta A , \quad B_{n+1} = B_n + \delta B , \quad x_{C,n+1} = x_{C,n} + \delta x_C . \qquad (6)

The improved estimates, output of iteration n, become the current estimates fed as input to iteration n+1:

(A_n, B_n, x_{C,n}) \leftarrow (A_{n+1}, B_{n+1}, x_{C,n+1}) .

In particular, the normalised template is expanded as

T(x - x_{C,n} - \delta x_C) \simeq T(x - x_{C,n}) - T'(x - x_{C,n}) \, \delta x_C .

The updated version of the full signal model becomes

s \simeq A_n T(x - x_{C,n}) + B_n + T(x - x_{C,n}) \, \delta A + \delta B - A_n T'(x - x_{C,n}) \, \delta x_C ,

where the second-order term in \delta A \, \delta x_C is dropped.
At each iteration, the problem can be tackled either by computing the next estimate independently for each parameter, or collectively for the set of three corrections at the same time; hereafter, the two approaches are labeled Independent Estimate (IE) and Combined Estimate (CE), respectively. The former approach is expected to provide a simpler formulation and an easier connection with known results; on the other hand, the latter approach is expected to be more sound, at the cost of more cumbersome expressions. Both are evaluated and discussed below, adopting for the sake of simplicity the notation

T \equiv T(x - x_{C,n}) , \quad T' \equiv \left. \frac{dT}{dx} \right|_{x - x_{C,n}} , \quad w \equiv 1 / \sigma^2 , \quad r \equiv d - A_n T - B_n . \qquad (7)

The computational cost is evaluated in simulation.

We take advantage of the maximum likelihood framework also to investigate the residual errors of the estimates, in particular with respect to bias, variance, and correlation. In general, some correlation may be expected because all parameters are derived from the same data, and it can be justified intuitively by the characteristics of the signal model in Eq. 1: e.g. given an underestimate of the signal amplitude A, whatever the centre position x_C, the overall photon count can be retrieved only by an overestimate of the background B, i.e. the two parameters are anti-correlated.

2.1 Independent Estimate (IE)

Eqs. 5 are expanded separately with regard to \delta A, \delta B, and \delta x_C, i.e. to the single variable of interest involved in the specific derivative of the error function. Thus, we get the simple expressions

\sum w \, T \, (r - T \, \delta A) = 0 , \quad \sum w \, (r - \delta B) = 0 , \quad \sum w \, T' \, (r + A_n T' \, \delta x_C) = 0 , \qquad (8)

where the notation of Eq. 7 is used; the equations are solved independently for each variable, providing the corrections

\delta A = \frac{\sum w \, T \, r}{\sum w \, T^2} \qquad (9)
\delta B = \frac{\sum w \, r}{\sum w} \qquad (10)
\delta x_C = - \frac{\sum w \, T' \, r}{A_n \sum w \, T'^2} \qquad (11)
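
A single IE iteration may be sketched as below, reusing the template function of the earlier sketch; the template derivative is taken numerically, and the update formulas follow the single-variable corrections as reconstructed in Eqs. 9 to 11 (an assumption on the exact form, not the original implementation).

```python
import numpy as np

def ie_step(d, var, x, A, B, xc):
    """One Independent Estimate iteration: each correction comes from its own
    single-variable weighted least-squares condition (cf. Eqs. 9 to 11)."""
    w = 1.0 / var                  # weights w = 1 / sigma^2, per sample
    T = template(x - xc)           # template at the current centre estimate
    dT = np.gradient(T, x)         # numerical template derivative T'
    r = d - (A * T + B)            # residual w.r.t. the current model
    dA = np.sum(w * T * r) / np.sum(w * T ** 2)
    dB = np.sum(w * r) / np.sum(w)
    dxc = -np.sum(w * dT * r) / (A * np.sum(w * dT ** 2))
    return A + dA, B + dB, xc + dxc
```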

Some properties of the solution can be derived from the signal properties in Eq. 3. First, we remark that Eq. 6 can be used also to describe the discrepancy between the “true” value of each parameter and its current estimate at a given iteration, including the last estimate; formally, the true values take the place of the next estimates (A_{n+1}, B_{n+1}, x_{C,n+1}). Then, the expressions for the solutions in Eqs. 9 to 11 can be evaluated to assess the underlying statistics.

It is possible to derive expressions for the parameter variance from Eqs. 9 to 11:

\sigma_A^2 = \left( \sum w \, T^2 \right)^{-1} \qquad (12)
\sigma_B^2 = \left( \sum w \right)^{-1} \qquad (13)
\sigma_{x_C}^2 = \left( A^2 \sum w \, T'^2 \right)^{-1} \qquad (14)

The solution is unbiased, as can be shown by evaluating the expectation value of Eqs. 9 to 11. The right-hand terms vanish, so that the expected discrepancy between the true values and their (final) estimates is also zero:

\langle A - \hat{A} \rangle = \langle B - \hat{B} \rangle = \langle x_C - \hat{x}_C \rangle = 0 . \qquad (15)

The actual discrepancy after a finite number of iterations is investigated by simulation in the next sections.

The correlation between parameters can be evaluated in terms of Pearson’s (linear) correlation coefficient formula:

(16)
(17)
(18)

The correlation values shall be investigated in the simulations below.

A simple qualitative consideration can be derived from the above Eqs. 16 to 18: in case of a symmetric signal distribution, with anti-symmetric derivative (i.e. respectively an even and an odd function), and in circumstances leading to a symmetric readout region and signal variance, the correlation between photo-centre and either amplitude or background vanishes. This is due to the summation of odd functions (a combination of the template, its derivative, and variance) over a symmetric region, and can be expected to hold approximately also in case of moderate deviation from the ideal case. In practice, exact cancellation would not be achieved, because residual errors on the parameters imply computation of the above functions in slightly off-centered positions, thus departing from symmetry.
The obvious correlation between amplitude and background is retained.

2.2 Combined Estimate (CE)

Eqs. 5 are expanded simultaneously for all parameters, truncating to the first order under normal conditions of smoothness.

The overall solution must satisfy the system of equations

\delta A \sum w \, T^2 + \delta B \sum w \, T - \delta x_C \, A_n \sum w \, T \, T' = \sum w \, T \, r \qquad (19)
\delta A \sum w \, T + \delta B \sum w - \delta x_C \, A_n \sum w \, T' = \sum w \, r \qquad (20)
\delta A \sum w \, T \, T' + \delta B \sum w \, T' - \delta x_C \, A_n \sum w \, T'^2 = \sum w \, T' \, r \qquad (21)

and the explicit solution is defined, e.g., by Cramer’s rule (still acceptably efficient with three equations). We may build the system determinant

D = \det M , \qquad (22)

where M is the 3 \times 3 coefficient matrix of Eqs. 19 to 21, and, for convenience, the auxiliary determinants

D_A , \; D_B , \; D_{x_C} ,

obtained by replacing in M the column associated with the corresponding variable by the right-hand side vector, so that the variable corrections are

\delta A = D_A / D , \quad \delta B = D_B / D , \quad \delta x_C = D_{x_C} / D . \qquad (23)
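
Numerically, the combined corrections can be obtained by building the weighted normal matrix of the linearised system and calling a standard linear solver in place of Cramer's rule (equivalent for a well-conditioned 3-by-3 system); a sketch under the same assumptions as the earlier sketches, with d and var given as arrays over the window.

```python
import numpy as np

def ce_normal_system(d, var, x, A, B, xc):
    """Weighted normal system of the linearised model in (A, B, x_C);
    equivalent to Eqs. 19 to 21 up to a common factor in the third row."""
    w = 1.0 / var
    T = template(x - xc)
    dT = np.gradient(T, x)
    r = d - (A * T + B)
    J = np.column_stack([T, np.ones_like(T), -A * dT])  # ds/dA, ds/dB, ds/dx_C
    M = J.T @ (w[:, None] * J)                           # normal matrix
    v = J.T @ (w * r)                                    # right-hand side
    return M, v

def ce_step(d, var, x, A, B, xc):
    """One Combined Estimate iteration: solve for (dA, dB, dx_C) simultaneously."""
    M, v = ce_normal_system(d, var, x, A, B, xc)
    dA, dB, dxc = np.linalg.solve(M, v)
    return A + dA, B + dB, xc + dxc
```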

As above, the solution is unbiased:

\langle A - \hat{A} \rangle = \langle B - \hat{B} \rangle = \langle x_C - \hat{x}_C \rangle = 0 . \qquad (24)

The computation of the parameter variance and covariance is somewhat more cumbersome (although not exceedingly difficult) than in the previous IE case, and its main steps are reported in the Appendix (Covariance Matrix of the Combined Estimate) for the interested reader.
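
In the linearised regime, the covariance of the combined estimate reduces to the inverse of the same normal matrix built above, which should coincide with the solution of the Appendix system; the sketch below, reusing the helpers defined earlier, also illustrates the symmetry argument of Sec. 2.1.

```python
import numpy as np

def ce_covariance(var, x, A, xc):
    """Covariance of the combined estimate in the linearised regime:
    inverse of the weighted normal matrix (noise-free template, known variance)."""
    w = 1.0 / var
    T = template(x - xc)
    dT = np.gradient(T, x)
    J = np.column_stack([T, np.ones_like(T), -A * dT])
    cov = np.linalg.inv(J.T @ (w[:, None] * J))
    sig = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sig, sig)      # Pearson correlation coefficients
    return cov, corr
```

For an even template on a symmetric window, corr[0, 2] and corr[1, 2] come out close to zero, while corr[0, 1] remains appreciably negative, in line with the expected amplitude-background anti-correlation and the decoupling of the location estimate.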

The results of combined and independent solutions are compared in the simulations detailed in Sec. 3.

2.3 Initial Guess and Stopping Criterion

A very simple approach is adopted, not taking into account information from external sources on the currently expected value of signal amplitude, sky background and/or centre location.

The background is estimated to be simply a fraction f of the minimum pixel signal level:

B_0 = f \, \min(d) , \quad 0 < f < 1 . \qquad (25)

The total signal amplitude is then estimated as the integral of the signal subtracted by such background:

A_0 = \sum \left( d - B_0 \right) . \qquad (26)

The photo-centre location is finally estimated as the barycentre, or first moment, of the signal distribution:

x_{C,0} = \frac{\sum x \, (d - B_0)}{\sum (d - B_0)} . \qquad (27)

As the iterative method works by reducing the square discrepancy between signal and template in Eq. 4, a convenient stopping criterion appears to be the variation of such quantity between subsequent iterations. When the amplitude of discrepancy variation becomes smaller than a suitable acceptance threshold, convergence is assumed to have been reached.
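
Putting the pieces together, the initial guess of Eqs. 25 to 27 and the stopping rule on the square discrepancy variation can be sketched as follows; the background fraction and the threshold are illustrative placeholders rather than the values adopted in the simulations, and ie_step, ce_step and square_discrepancy are the functions sketched above.

```python
import numpy as np

def initial_guess(d, x, frac=0.5):
    """Eqs. 25-27: background as a fraction of the minimum sample, amplitude as
    the background-subtracted sum, centre as the barycentre (frac is illustrative)."""
    B0 = frac * d.min()
    A0 = np.sum(d - B0)
    xc0 = np.sum(x * (d - B0)) / A0
    return A0, B0, xc0

def fit_abc(d, var, x, step=ce_step, tol=1e-4, max_iter=50):
    """Iterate a step function (ie_step or ce_step) until the variation of the
    square discrepancy drops below tol (tol is an illustrative threshold)."""
    A, B, xc = initial_guess(d, x)
    F_old = square_discrepancy(d, var, x, A, B, xc)
    for _ in range(max_iter):
        A, B, xc = step(d, var, x, A, B, xc)
        F_new = square_discrepancy(d, var, x, A, B, xc)
        if abs(F_old - F_new) < tol:
            break
        F_old = F_new
    return A, B, xc
```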

The issue of initial guess of parameters, and of a convenient value for the stopping criterion, is evaluated throughout the simulations.

3 Simulation Results

The estimation described above depends on the signal profile, so that the evaluation of the general expressions can only be done on specific cases. To provide an assessment of the performance that may be expected in realistic cases, we investigate the results provided by the above derivation on a set of data spanning a significant range of instrument response variation (represented by means of optical aberrations) and source spectral types (blackbody temperatures).

3.1 Simulation Implementation

The data set was used in previous studies (Gai et al., 2013a), and is described therein. We briefly recall that each effective detected 1D signal, labeled LSF for similarity with the corresponding optical Line Spread Function, is derived through numerical computation of the diffraction integral, providing the Point Spread Function (PSF). The PSF is then combined with simple source spectra (blackbodies at given temperatures) and detection effects (nominal pixel size, Modulation Transfer Function [MTF] and TDI operation; across-scan binning). The process is replicated for a set of independent instances.

For simplicity, the data are generated and processed over 12 pixels throughout the whole simulation range, even if this does not match in full detail the actual Gaia operations recalled in Sec. 1.1. The simulation is then run over the selected magnitude range, in uniform magnitude steps, for a set of background level values (in photons/pixel/exposure).

For each combination of magnitude and background, every LSF instance is properly scaled and used as input to the mathematical framework from Sec. 2, deriving the expected noise level and correlation on the amplitude, background and location estimates in Sec. 3.2. Then, the appropriate noise is superposed on the signals, including shot noise from amplitude and background, and a fixed readout noise (in equivalent photons). The noisy data are fed to our implementation of the IE and CE algorithms, providing the experimental results discussed in Sec. 3.3. We also evaluate the computational implications of our implementation.
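
A noisy realisation of an exposure can be generated as below: Poisson shot noise on the source plus background counts, and Gaussian readout noise; the readout noise value is an illustrative placeholder, and the per-sample variance handed to the estimator is computed from the noiseless model.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_exposure(x, A, B, xc, ron=5.0):
    """One simulated exposure: Poisson shot noise on signal plus background,
    Gaussian readout noise of ron equivalent photons (illustrative value)."""
    s = signal_model(x, A, B, xc)          # noiseless expectation per sample
    d = rng.poisson(s) + rng.normal(0.0, ron, size=s.size)
    var = s + ron ** 2                     # known variance model, per sample
    return d, var
```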

3.2 Predicted Performance

As a first step, we evaluate the expected error level on independent and combined solutions, and we derive their correlation coefficients. The computation is noiseless in the sense that each instance is defined by the current nominal values of parameters, signal template, and variance.

Figure 1: Noise level, predicted by independent and combined algorithms, on signal amplitude (top panel), background (middle) and location (bottom panel)

Predicted Estimate Noise

We first compare the predictions of the two algorithms in terms of noise and correlation. Intensity values are in photons; location is in micro-arcsec (µas).

In Fig. 1, the estimated noise on signal amplitude (top panel), background level (middle panel) and photo-centre (bottom panel) is shown, from both the independent and the combined algorithm, over the magnitude range, for a fixed background level (in photons per pixel). The overall noise trends evidence that the performance of either algorithm is quite similar, with slightly lower values from IE. This is not surprising, as IE implicitly assumes that all of the signal information is used for the least squares estimation of the current parameter only, whereas the combined algorithm, more realistically, spreads the available information among all of them. The “greedy” independent algorithm therefore provides a slight underestimate of the error.

Figure 2: Relative degradation of parameter noise due to background, with respect to the zero-background case.

The amplitude error (top panel) grows for increasing magnitude, as can be expected from natural limitations; moreover, the difference between IE and CE noise estimates also slightly increases. On the contrary, the noise on background estimate (middle panel) is smaller at fainter magnitude, with negligible variation of the difference between algorithms. This behaviour is understandable, as higher signal levels completely “flood” the background, making it harder to estimate, and inducing larger residual noise. The location performance from the two algorithms (bottom panel) is basically undistinguishable on this scale.

At the faint end, the error curves evidence larger spread and an increase with respect to the near straight line shown at brighter magnitudes, dominated by the source photon noise. This is not surprising, because at fainter magnitude the overall SNR drops and is progressively more affected by higher background levels. The effect is small on the scale of Fig. 1, therefore the variation with respect to the zero background case is shown in Fig. 2, respectively, for relative amplitude noise (top panel), differential background noise (middle panel) and relative location error (bottom panel).

Predicted Correlation Among Parameters

Over the selected range of magnitude and background, the correlation coefficients of errors between parameter pairs are computed, from Eqs. 16 to 18 for IE and from the corresponding system of equations for CE; the results are shown in Fig. 3, respectively as the correlation coefficient between amplitude and background (top panel, sign reversed), amplitude and location (middle panel), and background and location (bottom panel).

Figure 3: Predicted correlation between parameters: amplitude and background (top); amplitude and location (middle); background and location (bottom).

The correlation between signal amplitude and background (top panel) is not negligible, and this is not surprising, as mentioned in Sec. 2. The correlations between signal amplitude and location (middle panel), and respectively between location and background (bottom panel), are both very low, and compatible with the sample dispersion (not shown in the figure). For each term, the difference between algorithms is marginal.
The spread among the correlation curves increases at faint magnitudes. As for the predicted noise, this can be related to the increasing relevance of background level, which acts as a uniform pedestal on the signal variance. The relative weight of signal and background changes over the simulation range, and this affects the results on both noise and correlation.

Figure 4: IE simulation noise on amplitude (top), background (middle) and location (bottom) with respect to model prediction

The performance degradation with increasing background is larger at fainter magnitudes (due to the decreasing SNR). The effects appear to be acceptable on the estimation of amplitude (top panel), negligible on the background itself (middle panel), and marginal on the location noise (bottom panel) down to the faint magnitude range, where the error grows by up to a factor of two for the worst combination of magnitude and background.

3.3 Performance on Noisy Data

The initial guess on parameters is generated from the noisy data according to the initial guess criterion from Sec. 2.3. The noisy data are then processed by both the independent and the combined algorithm, using the current noiseless LSF as template. The stopping criterion corresponds to a limit on the square discrepancy variation between subsequent iterations; the convenience of such a choice, based on initial experimentation on the algorithm performance, is discussed on the basis of the results presented in Sec. 3.3.5.

The predicted performance is verified on noisy data using a single noise instance for each LSF; the result statistics is therefore a combination of noise propagation and sample variability. The algorithm performance is discussed in statistical terms on estimated parameter noise and discrepancy over the LSF dataset.

Figure 5: RMS discrepancy between IE and CE results on amplitude (top), background (middle) and location (bottom)

Estimate Noise from Simulation

The mean noise on amplitude (top panel), background (middle panel) and location (bottom panel), evaluated as the RMS discrepancy with respect to the input (“true”) values, is shown for IE in Fig. 4, together with the corresponding IE predictions (crosses) from Sec. 2.1, Eqs. 12 to 14.

Figure 6: Mean discrepancy of IE estimates with respect to input values on amplitude (top), background (middle) and location (bottom)

The discrepancy between predicted and simulated noise increases with increasing background and fainter magnitudes, both factors associated to decreasing SNR. It may be noted that in such conditions the first order expansion used in Sec. 2 is a progressively less accurate approximation.

Mutual Consistency and Bias

The numerical results of the two algorithms are compared with each other; in particular, we focus on the RMS discrepancy between IE and CE outputs, shown in Fig. 5, respectively, for amplitude (top panel), background (middle panel) and location (bottom panel). It appears that the estimates from both algorithms coincide over most of the range, within a very small fraction of the noise level of each parameter. Therefore, CE results were not shown in Fig. 4, as they would have been mostly indistinguishable from IE values on the plots.

The mean discrepancy between IE results and the input values, considered as an indication of systematic estimation error, i.e. bias, is shown in Fig. 6, respectively, for amplitude (top panel), background (middle panel) and centre (bottom panel). The bias appears to be consistent with zero, taking into account the noise on each parameter (Fig. 4). Given the consistency between IE and CE (Fig. 5), we can conclude that the latter algorithm provides unbiased estimates as well.

Correlation Among Parameters from Simulation

An experimental assessment of the correlation between estimated parameters is also implemented by (a) taking the discrepancy between algorithm results and input values for each parameter; (b) evaluating the mean over the simulation sample of the product of residuals for each parameter pair; and (c) normalising by the corresponding standard deviations.
The residual correlation is shown in Fig. 7, retrieving the predicted values, i.e. a significant anti-correlation between flux and background, and values close to zero for the other combinations. Due to the finite simulation size, the experimental noise on the correlation coefficients is expected to scale as the inverse square root of the number of instances, consistent with the fluctuation in the results.
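
Steps (a) to (c) amount to the sample correlation of the parameter residuals over the simulated instances; a minimal sketch, where the array layout is an assumption.

```python
import numpy as np

def residual_correlation(estimates, truths):
    """Sample correlation among parameter residuals.
    estimates, truths: arrays of shape (n_instances, 3) holding (A, B, x_C)."""
    res = estimates - truths              # step (a): residuals per parameter
    return np.corrcoef(res, rowvar=False) # steps (b)-(c): normalised mean products
```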

Figure 7: Experimental correlation among simulation residuals of amplitude vs. background (top), amplitude vs. location (middle), and background vs. location (bottom).

Processing Time

The average number of iterations required for convergence is shown in Fig. 8, respectively, for IE (top) and CE (bottom). The IE algorithm requires a larger number of iterations than CE, with larger variation over the range of magnitude and background. A minimum in the average number of iterations is found for some combinations of magnitude and background level.
It may be noted that, with increasing magnitude and decreasing SNR, the error on the initial guess of the parameters naturally degrades. Besides, at fainter magnitudes we expect larger errors on at least some parameters (relative amplitude and location), as from Fig. 1. This means that larger residual errors (in absolute terms) are acceptable, because they induce sufficiently small variations of the square discrepancy, thus satisfying the stopping criterion. The shape of the convergence curves appears to reflect the interplay between the initial guess and the stopping criterion.

The average processing time per LSF instance is larger for IE than for CE, consistent with the larger number of iterations required by IE for convergence. A histogram of the IE and CE processing time per instance, for one representative combination of magnitude and background, is shown in Fig. 9. Given the number of iterations associated to this case (Fig. 8), roughly twice as large for IE as for CE, it appears that in our case the time per iteration is comparable.

The simulation is implemented in the Matlab environment on a desktop computer with 3.3 GHz CPU and 8 GB RAM, under 64 bit Windows operating system.

Figure 8: Mean number of iterations required for convergence by IE (top) and CE (bottom)

Square Discrepancy and Convergence

The value of the square discrepancy at convergence, averaged over the LSF sample, is shown in Fig. 10 for IE (top) and CE (bottom). It may be noted that the value is consistent with the expectation for a \chi^2 statistic, taking into account that the input signal is composed of 12 samples and three parameters are estimated (i.e. nine degrees of freedom).

The parameter adjustment trend and the square discrepancy evolution throughout subsequent iterations of the IE and CE algorithms have been recorded for a small sample of LSF instances, at a fixed signal amplitude and background level (in photons per pixel), with the centre location close to zero. The curves related to the first four LSF instances are shown in Fig. 11, respectively, for amplitude, background, photo-centre, and square discrepancy (top to bottom).

The convergence is much faster for CE than for IE; moreover, the CE trend is quite monotonic for all parameters. Conversely, the IE estimates of the amplitude and background corrections evidence significant oscillations, in phase opposition, around a slowly decreasing trend. We remark that anti-correlation between the amplitude and background estimates was expected from first principles, as mentioned in Sec. 2.

The trend of square discrepancy is monotonically decreasing and retains an appreciable slope throughout several iterations. All curves are in logarithmic units, evidencing a rapid evolution of the algorithm.

The stopping criterion for our main simulation has been selected according to the observation that, at this level, the corrections on all parameters are much smaller than the corresponding predicted noise level (e.g. shown in Fig. 1). Therefore, the numerical noise is well within the intrinsic uncertainty of the measurements.

An alternative threshold value, as well as different initial guesses, may in principle be adopted, depending on implementation trade-offs.

Figure 9: Histogram of the processing time per instance for IE and CE, at a fixed magnitude and background level (photons/pixel)
Figure 10: Average square discrepancy over simulation sample as a function of magnitude for IE (top) and CE (bottom)
Figure 11: Parameter adjustment vs. iterations on four LSF instances by IE and CE algorithms. From top to bottom: amplitude, background, photo-centre, and square discrepancy.

4 Discussion: Algorithm Performance

The IE and CE algorithms have been developed under the concept of an underlying signal model to which measurements are to be matched in a maximum likelihood approach, thanks to a suitable choice of parameters: in our case amplitude, background, and location. The match between model and observations is noise limited, provided they are mutually consistent, as ensured by calibration and monitoring of the instrument response.

Hereafter, we discuss some peculiarities of the two algorithms evaluated.

Both IE and CE appear to provide the same results, with differences (Fig. 5) much smaller than the typical error (Fig. 4) in our simulation. Therefore, the choice between algorithms is not suggested by considerations on the correctness of their results. Our coding was rather straightforward, i.e. not necessarily optimised for either IE or CE, and practical performance depends on a number of additional implementation aspects: operating system, programming language, overall SW infrastructure, availability of efficient libraries, and so on.

In our framework, the CE algorithm (Eqs. 19 to 21) is more efficient than IE (Eqs. 9 to 11), in spite of the larger amount of computation required, since it takes fewer iterations, each with comparable processing time, as shown in Sec. 3.3.4. In a different implementation, the relationship between the CE and IE iteration time may be somewhat different or even reversed, but the number of iterations (set by deterministic processing of the same data through the equations) will be preserved.

The IE noise prediction (Fig. 1) is confirmed to be somewhat optimistic with respect to the simulation results (Fig. 4), which are more consistent with the CE noise prediction. However, the discrepancy is small, so that the IE predictions (Eqs. 12 to 14) can be retained as acceptable “easy” estimates.

Figure 12: A vs. B adjustments throughout IE and CE processing for the first four LSF instances (log units)

The rationale of the difference between the IE and CE behaviour evidenced in Fig. 11 can be better understood by a direct comparison of the amplitude and background adjustments, plotted against each other in Fig. 12 in logarithmic units. The IE parameter adjustments zigzag back and forth, whereas the corresponding CE values follow a direct route toward convergence.

It is our understanding that such behavior is related to the correlation between flux and background, taken into account by CE and totally ignored by IE. The correlation effect can be seen more clearly by superposing the contour plot of the square discrepancy on the A vs. B plot, in Fig. 13.

The flux-background relationship generates a square discrepancy surface rotated with respect to the axes. The CE algorithm operates in a way similar to the gradient descent method (Press et al., 2002), because the shape of the discrepancy function is embedded in the underlying mathematical framework. Conversely, the IE algorithm acts as a descent method on a single coordinate at a time, and therefore cannot aim directly at the overall minimum of the error functional.

The application of the proposed mathematical framework to the monitoring and diagnostics of relevant signal description parameters, related to either the astrophysical source or the instrument (Busonero & Gai, 2010; Busonero et al., 2012, 2014), will be addressed in our future work.

Figure 13: A vs. B adjustments superposed to square discrepancy contours

5 Conclusion

We analyse an algorithm for estimation of signal amplitude, background, and photo-centre location, on 1D data corresponding to intermediate magnitude Gaia observations, in the maximum likelihood framework. The analytical models assess the expected noise level, bias and estimate correlation as a function of signal profile, magnitude, and background.

The performance predicted by analytical models has been verified by simulation over a range of magnitude and background values, on a sample of instances of instrument response and source temperature. The estimates are unbiased, with low correlation between location and either amplitude or background, and well-understood correlation between amplitude and background. Symmetry properties of the signal further alleviate the correlation among results, in particular making the location estimate independent of both flux and background.

The iterative implementation of the algorithm proves to be effective; the suggested choice of the parameter initial guess and of the stopping criterion appears to work satisfactorily. The two algorithm versions tested against each other, respectively, for independent and combined estimation of the parameters, are mutually consistent. However, CE is significantly more efficient than IE, as convergence in the latter case appears to be slowed down by the natural correlation between the flux and background estimates.

The activity has been partially funded by the Italian Space Agency (ASI) under contracts Gaia Mission - The Italian Participation to DPAC, 2014-025-R.1.2015. We are grateful to our referee, whose remarks greatly helped in clarifying the text and evidencing our results.

Appendix - Covariance Matrix of the Combined Estimate

The elements of the covariance matrix can be derived from Eqs. 19 to 21, with some manipulations, e.g. squaring each of them and multiplying them by each other, then taking the expectation values and using the basic statistics from Eq. 3 and Eq. 24. We thus get a linear system (Eqs. 28 to 33) in the six unknowns: the variances \sigma_A^2, \sigma_B^2, \sigma_{x_C}^2, and the covariances \langle \delta A \, \delta B \rangle, \langle \delta A \, \delta x_C \rangle, \langle \delta B \, \delta x_C \rangle.

Eq. 19 squared:

(28)

Eq. 20 squared:

(29)

Eq. 21 squared:

(30)

Eq. 19 × Eq. 20:

(31)

Eq. 19 × Eq. 21:

(32)

Eq. 20 × Eq. 21:

(33)

The system is based on known information (the signal template T, the variance \sigma^2, and the nominal flux estimate A_n), and it can be solved with standard techniques to provide the terms of the covariance matrix. As the solutions are unbiased (Eq. 24), the former three unknowns are the variances of the flux, background, and photo-centre estimates. The latter three unknowns are related to the Pearson’s correlation coefficients according to expressions similar to those in Eqs. 16 to 18.
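
As a numerical cross-check of the covariance matrix discussed here, the prediction of the linearised normal matrix can be compared with the sample covariance of repeated fits on independent noise realisations; a sketch reusing the functions of the earlier sketches, with an illustrative number of trials and illustrative parameter values.

```python
import numpy as np

x = np.arange(12) - 5.5
A_true, B_true, xc_true = 2.0e4, 50.0, 0.1

fits = []
for _ in range(2000):                      # independent noise realisations
    d, var = noisy_exposure(x, A_true, B_true, xc_true)
    fits.append(fit_abc(d, var, x))        # combined (CE) fit of (A, B, x_C)
fits = np.array(fits)

cov_mc = np.cov(fits, rowvar=False)        # Monte Carlo covariance of the estimates
cov_an, _ = ce_covariance(var, x, A_true, xc_true)   # linearised prediction
```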

References

  1. Busonero, D., & Gai, M. 2010, Mem. Soc. Astron. Italiana Supplementi, 14, 226
  2. Busonero, D., Lattanzi, M. G., Gai, M., Licata, E., & Messineo, R. 2014, in Proc. SPIE, Vol. 9150, Modeling, Systems Engineering, and Project Management for Astronomy VI, 91500K
  3. Busonero, D., Loreggia, D., & Riva, A. 2012, in Proc. SPIE, Vol. 8449, Modeling, Systems Engineering, and Project Management for Astronomy V, 844917
  4. Cameron, P. B., Britton, M. C., & Kulkarni, S. R. 2009, Astron. J., 137, 83
  5. Collins, K. A., Kielkopf, J. F., & Stassun, K. G. 2016, ArXiv e-prints, arXiv:1601.02622
  6. de Bruijne, J. H. J. 2012, Astrophys. Space Sci., 341, 31
  7. Gai, M., Busonero, D., & Cancelliere, R. 2013a, Publ. Astron. Soc. Pac., 125, 444
  8. Gai, M., Carollo, D., Delbò, M., et al. 2001, Astron. Astrophys., 367, 362
  9. Gai, M., Casertano, S., Carollo, D., & Lattanzi, M. G. 1998, Publ. Astron. Soc. Pac., 110, 848
  10. Gai, M., Riva, A., Busonero, D., Buzzi, R., & Russo, F. 2013b, Publ. Astron. Soc. Pac., 125, 1383
  11. Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, Astron. Astrophys., 595, A1
  12. Gielesen, W., de Bruijn, D., van den Dool, T., et al. 2012, in Proc. SPIE, Vol. 8442, Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, 84421R
  13. Kohley, R., Garé, P., Vétel, C., Marchais, D., & Chassat, F. 2012, in Proc. SPIE, Vol. 8442, Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, 84421P
  14. Lindegren, L. 1978, in IAU Colloq. 48: Modern Astrometry, ed. F. V. Prochazka & R. H. Tucker, 197–217
  15. Lobos, R. A., Silva, J. F., Mendez, R. A., & Orchard, M. 2015, Publ. Astron. Soc. Pac., 127, 1166
  16. Mendez, R. A., Silva, J. F., & Lobos, R. 2013, Publ. Astron. Soc. Pac., 125, 580
  17. Mendez, R. A., Silva, J. F., Orostica, R., & Lobos, R. 2014, Publ. Astron. Soc. Pac., 126, 798
  18. Morbidelli, R., Messineo, R., Busonero, D., Riva, A., & Vecchiato, A. 2012, in Proc. SPIE, Vol. 8451, Software and Cyberinfrastructure for Astronomy II, 84513B
  19. Perryman, M. A. C. 2005, in ESA Special Publication, Vol. 576, The Three-Dimensional Universe with Gaia, ed. C. Turon, K. S. O’Flaherty, & M. A. C. Perryman, 15
  20. Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 2002, Numerical recipes in C++ : the art of scientific computing
  21. Short, A., Hopkinson, G., Laborie, A., et al. 2005, in Proc. SPIE, Vol. 5902, Focal Plane Arrays for Space Telescopes II, ed. T. J. Grycewicz & C. J. Marshall, 31–44
  22. Zhai, C., Shao, M., Goullioud, R., & Nemati, B. 2011, Proceedings of the Royal Society of London Series A, 467, 3550