Abstract

The X-Ray Telescope (XRT) onboard the Hinode satellite, launched 23 September 2006 by the Japan Aerospace Exploration Agency (JAXA), is part of a joint mission between Japan, the United States, and the United Kingdom to study the solar corona. In particular, XRT was designed to study solar plasmas with temperatures between 1 and 10 MK at high spatial resolution. Prior to analysis, the data product from this instrument must be properly calibrated and the data values quantified in order to accurately assess the information they contain. We present here the standard methods of calibration for these data. The calibration is performed on an empirical basis, using the least complicated correction that accurately describes the data while suppressing spurious features. By analyzing the uncertainties remaining in the data after calibration, we conclude that the procedure is successful, as the remaining uncertainty is dominated by photon noise. This calibration software is available in the SolarSoft software library.

Section 1 Introduction

The X-Ray Telescope (XRT; Golub et al., 2007) on Hinode (Kosugi et al., 2007) is a high resolution grazing incidence soft X-ray imager launched in 2006. The instrument is primarily designed to measure the hot (thermal) coronal plasma of the Sun. Details of the camera system can be found in Kano et al. (2008). To fully understand the significance of the photometric observations and quantify the results, it is necessary to calibrate the data and estimate the remaining noise and uncertainty. The radiometric calibration for quantitative photometric analysis described here is performed to improve the reliability and integrity of XRT data. A set of routines to perform these calibrations is included in the standard XRT packages within the SolarSoft software library (SSW: Freeland and Handy, 1998).

The standard procedures of data calibration applied to visible light telescopes cannot be fully applied to XRT due to a few factors. In particular, we do not have access to a uniform (spectrally or spatially) X-ray source with which to make flat fields in flight, which limits our ability to track temporal variations in instrumental sensitivity and thus reduces the options for radiometric calibration. In spite of these complications, a robust system of calibration has been developed through empirical analysis. These calibrations are not an attempt to determine every source of data degradation, but an empirical correction for all notable sources of inaccuracy that can be reliably corrected.

In addition to discussing the data calibration, we also provide estimates of the systematic uncertainties remaining after calibration. These include the variance from the calibration itself (such as from the vignetting and the dark correction), as well as the uncertainty from non-correctable sources such as JPEG compression. For the latter we provide an analysis of the cause of JPEG compression errors and have developed an accurate estimate of the magnitude of these uncertainties. We have found these errors to be small but notable. Calculations of these systematic uncertainties are provided with the calibration software, giving users a quantifiable measure of the precision of the data. We discuss the magnitude of photon counting errors but have not included these estimates in the software output, as they are strongly dependent on assumptions about the conditions within the particular coronal plasma producing the emission detected by XRT. Also included in the discussion are pixel maps returned by the software which locate pixels that cannot be corrected (and thus the user should avoid using), such as pixels affected by dust, contamination, and saturation.

In Section 2 we discuss the zero-point determination for XRT, covering dark frames in Section 2.1, the odd-even bias voltage readout of the camera in Section 2.2, and the calculation of the zero-point correction in Section 2.3. In Section 3.1 we discuss the use of Fourier filtering in the calibration, and in Section 3.2 the uncertainties arising from it. Section 4 discusses the geometrical (wavelength-independent) vignetting. Normalizing the images to a consistent exposure time is discussed in Section 5. The implications of using JPEG compression are discussed in Section 6. Section 7 discusses the pixel maps optionally returned by the software. Undesired signals we do not treat are discussed in Section 8. We then discuss combining the different sources of uncertainty in Section 9, and examine the improvements from the calibration with a test case in Section 10.

Section 2 Zero-point Determination

2.1 Dark Overview

Even when no light is incident on the detector, charge will still accumulate, creating extraneous signal. This extraneous signal (along with electronic bias) creates (at a minimum) a zero-point offset that will be present whenever the CCD is read. The calibration software permits the user several methods to correct for this. In some cases it is possible to speculate on the origin of certain effects (e.g., orbital temperature variation of CCD dark current), while in other cases the source of an effect is unclear (e.g., the exponential portion of the “ski-ramp” dark shape shown in Figure 1). In many cases it is not feasible to separate the different noise sources (such as dark and readout noise), so it is necessary to treat them together. We are primarily concerned throughout with developing an optimum calibration for XRT data, and have not focused on why the instrument behaves in a certain way. We have therefore grouped together various effects which are naturally calibrated at a given stage, even if they have different root causes (e.g., bias, dark current).

The traditional method of dark correction is direct subtraction of the median (or mean) of contemporaneous dark images taken with the telescope shutter closed. This method of dark correction is available to users through the optional keyword dark_type = 1. In the case of XRT, however, this straightforward method is complicated by numerous variable noise effects. These variable effects occur on a variety of spatial and temporal scales, and can change even from one frame to the next. They include an overall ski-ramp shape of the dark along columns, a basal level dependent on CCD temperature and binning, as well as various electronic noise patterns with varying amplitudes and frequencies. Because of this variability, averaging together dark frames, even those taken close in time, can actually increase noise. We have opted instead for a semi-empirical approach as the default dark subtraction (dark_type = 0). The approach uses an empirical model dark generated by a dedicated routine, whose parameters have been calibrated from the analysis of over 2000 dark frames as discussed in Section 2.3. The mean level of this model dark is then adjusted to conform to the dark frames acquired contemporaneously with the X-ray images to be calibrated. A fully empirical model without zero-point adjustment (dark_type = 2) is also included; in Section 2.3 we demonstrate that the default zero-point-adjusted (“hybrid”) model yields the best results, both for recovery of the zero point and for minimizing noise.
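To make the hybrid approach concrete, the zero-point adjustment can be sketched in a few lines of Python (the operational implementation is in SSW IDL; the function and variable names here are illustrative only):

import numpy as np

def hybrid_dark(model_dark, nearby_darks):
    """Adjust the zero point of an analytic model dark (dark_type = 0): the 2-D
    shape comes from the model, the zero point from contemporaneous darks."""
    # Median-combine the (up to five) dark frames taken nearest in time.
    median_dark = np.median(np.stack(nearby_darks), axis=0)
    # Shift the model so its spatial mean matches that of the median dark.
    return model_dark + (median_dark.mean() - model_dark.mean())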

2.2 Odd-Even Bias Voltage Differences

A design feature of the CCD camera sets bias voltages in odd and even pixel columns to slightly different levels (a small offset in digital numbers, DN). This offset is approximately constant in time and its source has not been fully identified. When using direct dark subtraction (dark_type = 1), this effect is automatically removed from the data. When using the model-based options (including the default), we correct for this offset by subtracting from half of the columns the median difference between odd and even CCD columns, ignoring pixels where the signal response becomes non-linear (above roughly 2500 DN, hereafter referred to as “saturated”; see Section 7). This correction is performed by a dedicated subroutine, since the pattern appears at the Nyquist frequency.
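A minimal sketch of this correction in Python (illustrative only; the saturation level and the choice of which half of the columns is shifted follow the description above):

import numpy as np

def remove_odd_even_offset(img, sat_level=2500):
    """Remove the constant bias offset between odd and even CCD columns,
    ignoring saturated pixels when estimating the offset."""
    img = img.astype(float).copy()
    unsat = img < sat_level
    # Approximate the median odd-even difference by the difference of medians.
    offset = (np.median(img[:, 1::2][unsat[:, 1::2]])
              - np.median(img[:, 0::2][unsat[:, 0::2]]))
    img[:, 1::2] -= offset   # subtract the offset from half of the columns
    return img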

2.3 The Dark Frame Model

An XRT dark frame is largely constant along rows, but exhibits a distinctive “ski-ramp” profile along columns. We found that the functional form that matches this profile with the fewest parameters is given by:

F(y) = a·exp(−y/w) + s·y + b

where F(y) is the flux in DN along column position y, and a (ramp amplitude), w (ramp width), s (slope), and b (base level) are fitting constants for a given image (Figure 1). Other functional forms were explored (e.g., polynomials) but none matched the average shape as accurately with so few parameters. Each of the fitting constants has dependencies on other factors. These dependencies were determined by fitting Equation (2.3) to 2129 darks of 2048×2048 pixels taken between mission start and February 2008. Smaller numbers of 2×2, 4×4, and 8×8 binned darks (934 total) were also studied to determine variations of the functional form with pixel binning. The behavior of the fit parameters with various CCD and exposure properties has been studied. The ramp amplitude a increases non-linearly with exposure time. An approximate fit using a minimum of parameters is described below and illustrated in Figure 2:

Figure 1.: The average column profile of a typical 2048×2048 dark, overplotted with the four-parameter “ski-ramp” fit.
Figure 2.: Fit of the dependence of the “ski-ramp” amplitude parameter on exposure time.
(\theequation)

where the lower and upper ranges represent the average values of the data for short and long exposure times, respectively.

The base level b depends primarily on CCD pixel binning, but shows a secondary dependence on CCD temperature. Specifically,

(\theequation)

where

(\theequation)

The coefficients depend on the CCD pixel binning as shown in Table 1. The dependence of the base level on CCD temperature is shown in Figure 3 for 1×1 binning (i.e., full resolution) as an example.

Binning    Coefficient 1    Coefficient 2
1            86.08            0.1695
2           247.84            2.459
4           517.65            4.425
8          1067.09            8.898

Table 1.: Variation of the base level coefficients with CCD pixel binning for the dark frame model.
Figure 3.: Quadratic fit of the dependence of “ski-ramp” base parameter on CCD temperature for 1x1 binning.

The ramp width parameter decreases with CCD pixel binning as

(\theequation)

This dependence is shown in Figure 4. The slope increases slightly with CCD temperature as

(\theequation)

which is shown in Figure 5. Smaller fields of view have the dark profile of the corresponding bottom portion of a full-frame dark, i.e.,

(\theequation)
Figure 4.: Fit of the dependence of the “ski-ramp” width parameter on CCD pixel binning.
Figure 5.: Linear fit of dependence of “ski-ramp” slope parameter on CCD temperature.

Optimally, the arithmetic difference between an observed dark frame, D, and a model dark, M, would yield a frame with a mean of zero and only random noise remaining. To test the dark model for deviations from this ideal, we analyzed all of our test darks to determine two simple measures of the goodness of fit: the scatter of the average residual, σ_⟨Δ⟩, i.e.

σ_⟨Δ⟩ = stddev_i ( ⟨Δ_i⟩ ),    where Δ_i = D_i − M_i is the residual of dark frame i,

and the average of the scatter within the residuals:

⟨σ_Δ⟩ = mean_i ( σ(Δ_i) ).

Since we have adjusted the model so that the average over all residuals is zero, the first diagnostic, σ_⟨Δ⟩, essentially gives a measure of the scatter in the zero-point determination. The second, ⟨σ_Δ⟩, gives a measure of how well the model matches the 2-D dark shape.
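In Python, these two diagnostics amount to the following (a sketch; the rejection of frames with excessive radiation hits mentioned below is not reproduced):

import numpy as np

def dark_fit_diagnostics(observed_darks, model_darks):
    """Return (scatter of the per-frame mean residual, mean of the per-frame
    residual scatter) for a set of darks and their corresponding models."""
    residuals = [obs - mod for obs, mod in zip(observed_darks, model_darks)]
    frame_means = np.array([r.mean() for r in residuals])
    frame_sigmas = np.array([r.std() for r in residuals])
    return frame_means.std(), frame_sigmas.mean()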

The values of the constants in Equation (2.3) have been computed for the entire set of 2129 darks. We also computed analogous values for the case where the model dark is defined as the median of the five dark frames taken nearest in time to the dark to be corrected. We found that the model dark was better at determining the shape of the dark, with the lowest ⟨σ_Δ⟩, while the traditional median was better at reducing scatter in the zero-point amplitude, with the lowest σ_⟨Δ⟩. There were systematic deviations in the zero-point level on intermediate timescales (months) which were not corrected by the pure model dark. These uncorrected deviations led us to develop a hybrid model as the default, wherein the average of the model dark is adjusted to match the average of the median of the five temporally nearest dark frames. The values of σ_⟨Δ⟩ and ⟨σ_Δ⟩ are then used to compute the combined uncertainty introduced by dark subtraction and bias correction such that

σ_dark = sqrt( σ_⟨Δ⟩² + ⟨σ_Δ⟩² ).

Results of the analyses are shown in Figures 6, 7, and 8. Note that the number of points in each case is less than the total number analyzed, as some points were removed for excessive radiation hits or, in the cases using median darks, because there were insufficient darks of adequate type and quality to generate the median.

One can clearly see in Figure 6 that the median and median-adjusted (“hybrid”) models (lower panel of Figure 6) show lower scatter in the zero point than the pure model (upper panel of Figure 6), mostly because many intermediate-timescale trends are removed. We note that ⟨σ_Δ⟩ splits into multiple, roughly fixed levels (Figure 7). These levels are primarily due to the data compression level (Figure 8 and the JPEG compression discussion in Section 6); compression alters high-frequency content in the data, and thus high-frequency noise. Using the median dark (Figure 8, right) yields a higher average ⟨σ_Δ⟩ with a larger range at each compression level because of intrinsic noise in the median dark compared to the (noise-free) analytic model. There are also some additional semi-fixed levels of scatter compared to the analytic models. These semi-fixed levels appear to be due to a combination of mixed compression levels within a given median dark and the effects of varying numbers of radiation hits.

Figure 6.: Plots of σ_⟨Δ⟩ (in DN) for full-frame images with 1×1 binning. The upper plot uses an empirical model dark (dark_type = 2). The lower plot uses a median of five dark frames (dark_type = 1); the results for the median (dark_type = 1) and hybrid (dark_type = 0) models are identical. Note the reduced scatter in the cases of dark_type = 1 or 0.
Figure 7.: Plots of ⟨σ_Δ⟩ (in DN) for full-frame images with 1×1 binning. The top plot is for cases using the empirical or hybrid model (dark_type = 0 or 2), and the second plot is for the same cases after Fourier filtering (see the discussion of Fourier filtering in Section 3.1). The lower two plots are for the median dark (dark_type = 1), with the lowest plot including Fourier filtering. Note that the Fourier filtering reliably lowers ⟨σ_Δ⟩. The different discrete levels visible are due to different JPEG compression levels (more details in Section 6); in the case of dark_type = 1, additional levels are added by mixtures of compression type within a median dark. The noise reduction for the case of dark_type = 1 with full Fourier noise removal results from the reduction of high-frequency noise in the medianing, which leaves less periodic signal for the filter to remove.
Figure 8.: Plots of ⟨σ_Δ⟩ as a function of compression level, showing scatter about distinct mean levels which vary with compression level and dark_type (left panel: model dark, dark_type = 0, 2; right panel: median dark, dark_type = 1). The median dark (right) shows higher average values (due to noise in the median) and multiple concentrations at a given compression level, caused in part by mixtures of different compression levels used to create the median dark.

Section 3 Fourier filtering

3.1 Fourier Filtering

All XRT data exhibit moderate to high frequency “ripples” whose amplitudes and frequencies change in time. While the amplitude of these features is small (a few DN), they can nevertheless be troublesome, especially in fainter parts of an image and in portions where the intensity gradients are small (making quasi-regular variations more noticeable). Due to their relatively low amplitudes, the features are most easily seen in dark frames, where the signal level is low and nearly flat. These features do not completely cancel out, even when fairly large numbers of darks are averaged, because of rapid frame-to-frame variability. They can be easily discerned in Fourier transforms (Figure 9) and come in several varieties: type 1) features with constant horizontal and vertical frequency and variable amplitude; type 2) features with constant frequency in one direction, spanning all frequencies in the other direction, with variable amplitude; type 3) features with constant frequency in one direction and variable amplitude, but in the shape of a moving pulse in the other direction, dropping to zero outside a limited range. The temporal variation of these features can be seen in Figure 10. Type 2 and 3 features are more pervasive than type 1 features.

Figure 9.: Top left: 2-D FFT of a dark taken on 6 November 2006, log-scaled and thresholded, showing typical noise features (e.g., localized peaks, streaks spanning one frequency axis at a fixed frequency on the other, and pulses restricted in frequency range). Top right: Log of the fraction of the Fourier amplitude which is filtered out of the same dark by the filtering routine (scale at right). Bottom left: Central 256×256 pixels of the same dark (after “ramp” and Nyquist removal) before Fourier filtering. Bottom right: Same as bottom left, after filtering (scale for both bottom panels at right). Note that high frequency periodic noise is suppressed, but some lower frequency noise remains, due to shielding of low-frequency portions of the transform to prevent damage to actual data signals.
Figure 10.: Left: Same 2-D FFT shown in Figure 9 (top right) with some typical noise features: localized peaks at fixed frequencies (circled in red), streaks spanning one frequency axis at a fixed frequency on the other (red arrows), and peaks (blue circles) and pulses (blue arrows) at fixed frequency in one direction but variable in the other. Right: 2-D FFT of a dark taken about 2 min later, displayed as in the left panel. Note the motion in frequency of some of the marked features (blue), corresponding to a different noise ripple pattern in the dark (compare Figure 9, bottom panels).

Each data image is corrected with a Fourier filtering procedure, which applies a tapered filter to each of these features in Fourier space, suppressing them down to the average noise level measured local to, but outside of, the feature. Two thresholds are used in the filtering. The first is the number of standard deviations above the local fluctuations (in Fourier space) that a signal must reach before it is suppressed. The second is a threshold in Fourier amplitude (in standard deviations above the median large-scale Fourier noise level) above which no corrections are performed; this threshold avoids damaging the “real data” part of the transform. Thus the first threshold controls how strong a Fourier feature must be before it is suppressed, and the second governs what part of the transform is considered “real data” by the program and shielded from any alteration. The default values of these parameters are 3.5 and 4.5, respectively.
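The essence of the procedure can be illustrated with the following simplified Python sketch (not the flight/SSW code; the local background and width estimators, box size, and lack of tapering are placeholders):

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def fourier_despike(img, n_suppress=3.5, n_protect=4.5, box=15):
    """Suppress narrow peaks in the 2-D Fourier amplitude that exceed the local
    background by n_suppress local standard deviations, while shielding strong
    'real data' amplitudes above the n_protect threshold."""
    F = np.fft.fft2(img)
    amp = np.abs(F)
    local_bg = median_filter(amp, size=box)
    local_sig = np.sqrt(uniform_filter((amp - local_bg) ** 2, size=box))
    spikes = amp > local_bg + n_suppress * local_sig            # candidate noise features
    protected = amp > np.median(amp) + n_protect * np.std(amp - local_bg)
    fix = spikes & ~protected
    scale = np.ones_like(amp)
    scale[fix] = local_bg[fix] / amp[fix]                        # pull peaks to the local level
    return np.real(np.fft.ifft2(F * scale))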

Generally, features of type 1 and 2 are well removed from the solar parts of the transform. Due to their variability in frequency, noise features of type 3 can sometimes remain uncorrected in the final product. An image of a dark frame before and after Fourier filtering is shown in Figure 9.

3.2 Fourier Filter Uncertainties

We have computed the average effect of the Fourier filtering, which typically reduces the scatter by 25% (in the 1×1 binning case), though the reduction is smaller for higher binning levels. This reduction in scatter for the dark correction from Fourier filtering is included in the calculation of the dark uncertainty σ_dark described in Section 2.3. While generally improving the noise floor of XRT, the use of a Fourier filter to suppress temporally variable readout signals is not without drawbacks and sources of uncertainty. Although we have taken great pains to design the Fourier filter so as not to damage the real data, uncertainties in the best choice of the suppression and protection thresholds for a given data set make some added error unavoidable. In addition, some error is introduced by the inadvertent removal of “real” information at the filtered frequencies. Thus, while suppressing the spurious readout signals, we introduce small errors due to imperfections in the filtering process itself, specifically in how the filter protects “real” data. To estimate the uncertainty in the filtering process, we altered both threshold parameters by one unit in each direction and computed a Fourier filter error amplitude image:

(\theequation)

where the terms are the Fourier-filtered images obtained with each threshold increased or decreased by one unit. We then attempted to model the resulting error amplitude using various image properties.
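A hedged sketch of this perturbation estimate in Python, assuming the error image is the mean of the two absolute differences (filt stands for any filtering function taking the two thresholds as arguments, such as the sketch in Section 3.1):

import numpy as np

def fourier_filter_error(img, filt, n_suppress=3.5, n_protect=4.5):
    """Estimate the Fourier-filter error amplitude by perturbing each threshold
    by one unit in each direction and averaging the absolute differences."""
    d_s = np.abs(filt(img, n_suppress + 1, n_protect) - filt(img, n_suppress - 1, n_protect))
    d_p = np.abs(filt(img, n_suppress, n_protect + 1) - filt(img, n_suppress, n_protect - 1))
    return 0.5 * (d_s + d_p)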

It was found that the error amplitude, above a base level, mostly consists of “islands” of enhanced noise in areas of the image with sharp gradients (e.g., near active regions). It can be reasonably modeled with a properly trimmed, scaled, and smoothed version of the original image, according to

(\theequation)

where the coefficients are fitting parameters, and smooth() represents an n×n pixel running-mean, unweighted (“boxcar”) smoothing, reiterated four times. We studied groups of eight unbinned full-frame images, with varying filters and exposure times, from mission start (October 2006) to May 2008, each group representing data from one month. A fixed intensity threshold in DN was found to be suitable for trimming the data before fitting. The other parameters were determined using non-linear least-squares fitting across image parameters. Numerous combinations of image parameters were tested, but the best fits were achieved with the mean data level and the average unsigned amplitude of the local spatial gradient of the image (computed as the 2-D spatial derivative of the image using 3-point Lagrangian interpolation in IDL).

There was a notable change in the functional dependence of the error amplitude once the XRT CCD became affected by contamination spots in July 2007 (Narukage et al., 2011) and again in January 2008. We therefore model the error separately for the three epochs defined by the contamination spots (i.e., pre-July 2007, July 2007 through January 2008, and post-January 2008). For these three epochs the best-fit parameters are given in Table 2. The likely cause of the variation is the introduction of numerous small sharp “edges” from the spots themselves. While the gradients induced by individual spots are (typically) on a smaller spatial scale than those of active regions, they are considerably more numerous and more spatially uniform. In summary, the base level increases with the scale of gradients in the image. During the non-spotted epoch, the normalization depends on the average counts and inversely on the image gradients; after the formation of contamination spots, it depends primarily on the mean count rate.

The parameters in Table 2 are appropriate for full-resolution images. Based on test cases, the scaling to other binnings is found to be well described by

(\theequation)

This model is not a precise pixel-for-pixel match to the error amplitude, but rather follows its larger scale structure; errors scatter around this model on a fine scale. Overall, Fourier filter errors are a minor component of the overall error budget, as will be shown below.

Epoch    Parameter 1    Parameter 2    Parameter 3
I        0.24           26             round[40 ]
II       0.26           77             round[26]
III      0.26           79             round[28]

Table 2.: Coefficients for the Fourier filter uncertainty for three epochs defined by the absence or presence of CCD contamination spots. Epoch I = prior to 24 July 2007; Epoch II = 24 July 2007 through 20 January 2008; Epoch III = after 20 January 2008. For each coefficient, the residual scatter in the fitting is expressed as the error in the logarithm of the coefficient.

Section 4 Vignetting

In the astronomical community there is an ambiguity in describing vignetting, with some authors using the term to describe only the geometrical factors that result in uneven illumination of the focal plane (e.g. due to obscuration by baffles), and other authors using the term to describe all possible effects including, e.g., wavelength-dependent reflectivity of a grazing incidence mirror. In the present work, we conform to the former usage, wherein only geometrical effects independent of the wavelength of incident photons are considered. A known source of wavelength-dependent variation of illumination, other than the reflectivity of grazing incidence telescopes, is photon scattering due to residual roughness of the mirror. Correcting for photon scattering or for wavelength-dependent reflectivity requires knowledge of the photon wavelengths, which in the case of a broadband instrument like XRT is only possible with knowledge of the temperature-dependent emission measure and element abundance of the observed plasma. Such a strongly model dependent analysis is clearly outside the scope of this calibration. However we note that a wavelength-dependent vignetting in the case of XRT should be expected to manifest as a systematic bias of filter-ratio temperatures with respect to off-axis angle, an effect which to our knowledge has not been observed.

The effect of vignetting in XRT was measured before launch at the X-ray Calibration Facility at NASA’s Marshall Space Flight Center in Huntsville, AL during the end-to-end testing. As the telescope was tested in its fully assembled configuration, and with monochromatic Cu-L photons, these tests included all sources of non-uniform illumination of the focal plane within the optical path and focused on the wavelength independent and rotationally symmetric sources of vignetting.

We fit the measured CCD response as a function of off-axis angle with a linear function, normalized to 1 on axis. The mirror vendor provided a functional vignetting of the form

(\theequation)

where the normalization is set by the manufacturer-specified graze angle of the mirror. However, the end-to-end testing measurements did not sample enough off-axis positions to fully populate the image plane, and so interpolation/extrapolation from the sparsely sampled data points does not provide sufficient precision to determine the vignetting function uniquely. At the same time, the end-to-end testing gave no clear evidence for deviations from the expected vignetting profile, a result which indicates that the mirror is the only significant component contributing to vignetting in the focal plane. Later analysis of solar images made at different spacecraft pointings with respect to Sun center supported the conclusion that Equation (\theequation) adequately represents the vignetting detected in XRT images, in all four of the thinnest focal plane analysis filters (Ishibashi, 2008, private communication).

The vignetting is corrected by the calibration software, which divides the image by this function, reversing the effects of vignetting. Errors due to the fit that remain after the correction were determined from additional study of the scatter in the dither analysis data mentioned above (Ishibashi, 2008, private communication). We found:

(\theequation)

In the central region of the CCD the vignetting uncertainty is quite small (0.45%), though it grows larger near the edges of the full field of view. It is worth noting that while the X-ray intensities (and thereby emission measures) measured from off-axis sources are affected by the vignetting, ratios of the intensities are unaffected, since the vignetting function is multiplicative and wavelength independent.
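For illustration, a vignetting correction of this kind reduces to dividing by a rotationally symmetric function of off-axis angle; in the Python sketch below the linear profile and its slope are placeholders, not the calibrated XRT function:

import numpy as np

def devignette(img, plate_scale_arcsec=1.0, slope_per_arcmin=0.01, center=None):
    """Divide an image by a wavelength-independent, rotationally symmetric
    vignetting function of off-axis angle, normalized to 1 on-axis."""
    ny, nx = img.shape
    if center is None:
        center = ((nx - 1) / 2.0, (ny - 1) / 2.0)   # optical axis (assumed at image center)
    y, x = np.indices(img.shape)
    theta = np.hypot(x - center[0], y - center[1]) * plate_scale_arcsec / 60.0  # arcmin
    vignette = 1.0 - slope_per_arcmin * theta        # placeholder linear profile
    return img / vignette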

Section 5 Exposure Time Normalization

XRT uses a rotating focal plane shutter with three differently sized slots to control the amount of time that the detector is exposed (Golub et al., 2007). A variety of exposure lengths can be achieved by rotating the shutter through a combination of slots. The CCD is flushed at the beginning of the exposure and read immediately after the end of the exposure, to minimize the accrual of stray light and dark current.

The actual length of time the CCD is exposed to light is measured by an optical encoder on the shutter, recorded in the image header, and used to normalize the images. The uncertainty in the exposure time is thus limited by the precision of the stored value, which is negligible compared to the other sources of uncertainty.
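In practice this normalization is a single division by the measured shutter-open time; a short Python illustration (the header keyword name here is assumed, not necessarily the actual XRT keyword):

def normalize_exposure(img, header, key="EXPTIME"):
    """Convert DN to DN per second using the measured (not commanded) exposure
    time recorded in the image header; the keyword name is illustrative."""
    return img / float(header[key])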

Section 6 JPEG Compression

In 2007 the transceiver for Hinode’s X-band antenna failed, forcing all scientific telemetry to use the lower bandwidth of the S-band transceiver. To accommodate this reduced telemetry, the instruments on Hinode have used a stronger compression than the lossless DPCM algorithm. The alternative adopted is the lossy algorithm of the Joint Photographic Experts Group (JPEG). JPEG compression is one of the most commonly used consumer file compression algorithms, and the file format is ubiquitous in digital photography. The compression is a multistep process and is performed by the Mission Data Processor (MDP) on board the spacecraft. Understanding the mechanism of JPEG compression is useful for understanding and calculating the errors it causes. We find (and show below) that even though visible artifacts of the JPEG compression can be detected in some X-ray images, the photometric magnitude of the uncertainty is quite small, on the order of 2–3% for typical images of coronal active regions.

The first step in JPEG compression is to center the data around zero to prepare it for a discrete cosine transform (DCT). The centering is performed by subtracting a pedestal equal to half of the bit-limited range of the data. The second step is to subdivide the image into N-pixel by M-pixel subregions (hereafter referred to as macropixels). Most JPEG compression algorithms, including that used on Hinode, use macropixels of 8 pixels by 8 pixels. A DCT is then applied independently to each macropixel.

The transformed macropixel is then normalized by a quantization table, which suppresses the high frequency signals. The strength of the JPEG compression is determined and denoted by the particular quantization table used. The high frequency information in the data is lost when the array is recast as an integer array after normalization, which truncates low signal values to zero. By storing only the non-zero amplitudes of the low frequencies using Huffman entropy encoding, high frequency data are discarded and a smaller file size is achieved. The compressed file size depends on the amount of high frequency signal in the original data as well as the particular quantization table used. Decompression is performed by reversing the compression process.
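The per-macropixel arithmetic can be illustrated with a generic JPEG-style round trip in Python (this mimics the steps described above, not the flight MDP code; the pedestal assumes 12-bit data and the quantization table is a placeholder):

import numpy as np
from scipy.fft import dctn, idctn

def jpeg_roundtrip(block, qtable):
    """Lossy round trip for one 8x8 macropixel: pedestal removal, 2-D DCT,
    quantization (which discards low-amplitude, mostly high-frequency terms),
    then the inverse operations."""
    shifted = block.astype(float) - 2048.0          # pedestal: half of a 12-bit range (assumed)
    coeffs = dctn(shifted, norm="ortho")
    quantized = np.round(coeffs / qtable)            # integer quantization step
    return idctn(quantized * qtable, norm="ortho") + 2048.0

# Example with a flat placeholder quantization table (flight tables vary with the Q level):
qtable = np.full((8, 8), 16.0)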

For XRT the JPEG compression value ranges from 100 to 50 (Table 3). The mildest compression (Q100) loses minimal information (generally just round-off error) and still creates a file substantially smaller than the raw image or its DPCM-compressed equivalent. The strongest compression (Q50) shrinks the file to a small fraction of the raw image and of the DPCM compression, though such high compression significantly alters the original image and is thus rarely used.

The compression levels used for science data are significantly milder (i.e., less lossy) than those commonly used in consumer applications. Also, coronal images often contain less high frequency information than consumer images (such as that caused by text or hard edges), which makes JPEG artifacts less common in science images than in consumer digital photography. Apart from numerical rounding error, the nature of the transformation generally conserves flux within each macropixel. Most of the error we observe comes from smearing the high frequency components throughout the macropixel.

To determine the uncertainty created by JPEG compression, we applied JPEG compression to 1253 images obtained from different science data sets that had used lossless DPCM compression. Compression was performed using an algorithm designed to mimic the method and computational architecture of the MDP. We then studied the discrepancy between the original and compressed images. A discrepancy histogram is shown in Figure 11. The discrepancy does not follow a single Gaussian distribution, suggesting that a more sophisticated approach to determining the uncertainty is required.

Figure 11.: Histogram of discrepancies from low and high compression when compared to the uncompressed data. A single Gaussian does not provide a sufficient fit; the data can be fit by two separate Gaussians, as shown. The blue and green curves represent the two individual Gaussians, with the red curve representing their sum.

The most efficient proxy found for the uncertainty in JPEG compression is the range of values within a macropixel. Generally speaking, the larger the range of values within the macropixel, the more high frequency signal there is for JPEG compression to suppress, and thus the larger the compression error. Since flux is generally conserved in a macropixel, this results in the smearing that creates the notorious JPEG “block” artifacts.

As suggested by these factors, we find the largest uncertainty from JPEG compression occurs at the edges of active regions, where the signal rapidly transitions from a few DN/pixel in the quiet sun to well over a thousand DN/pixel in the active region. Utilizing the large data set of 1253 images, we have made histograms of the average absolute discrepancy per macropixel for each possible value of the (maximum - minimum) pixel range within a macropixel. These histograms are easily fit by Gaussians, and the centers of the Gaussians give the JPEG compression uncertainty for each pixel within the macropixel, as illustrated in Figure 12. We use these empirically determined values to estimate the uncertainty in the calibration software: the software determines the max-min value for each macropixel of the image and assigns each macropixel an uncertainty using the best-fit curves, as shown in Figure 12. The asymptotic value of the average uncertainty per macropixel for the available JPEG values is shown in Table 3. It is important to remember that the values in Table 3 are asymptotes of the max-min versus average error curves (Figure 12); they are neither maximum errors nor strictly average errors.
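The bookkeeping is straightforward; a Python sketch of the per-macropixel assignment (range_to_error stands in for the fitted piecewise curve of Figure 12 and is hypothetical here):

import numpy as np

def jpeg_uncertainty_map(img, range_to_error, block=8):
    """Assign each 8x8 macropixel an uncertainty from its (max - min) pixel
    range via the empirically fitted curve range_to_error."""
    ny, nx = img.shape
    err = np.zeros(img.shape, dtype=float)
    for j in range(0, ny - ny % block, block):
        for i in range(0, nx - nx % block, block):
            tile = img[j:j + block, i:i + block]
            err[j:j + block, i:i + block] = range_to_error(tile.max() - tile.min())
    return err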

Figure 12.: Plot of the average macropixel max-min values versus absolute uncertainty for two compression levels (+). Overplotted is a best-fit polynomial spliced with the asymptotic value of the curve. This piecewise continuous curve is used by the calibration software to calculate the JPEG uncertainty.

JPEG value    Asymptotic uncertainty (DN)
100           0.3
98            0.7
95            1.55
92            2.45
90            3.1
85            4.5
75            7.0
65            10.0
50            15.0

Table 3.: Asymptotic uncertainty for varying JPEG compression factors. These values of uncertainty are the asymptotes of the average uncertainty per macropixel for each max-min value, as shown in Figure 12.

Section 7 Pixel Maps

There are several properties/effects of the CCD which are noted and mapped by the calibration software but not otherwise corrected, often because there is no demonstrably reliable way of making a quantitative correction. Certain useful maps are available as optional outputs of the software. Each effect has been assigned a unique grade in the maps.

Pixels missing in telemetry are replaced by the local data average and noted in the missing pixel map. Saturated pixels (DN > 2500; grade = 1), so-called saturation “bleed” pixels (where charge transfer from saturated pixels has corrupted values; grade = 2), contamination spots (grade = 4), dust specks (grade = 8), and possible “hot” pixels (grade = 16) are flagged in the pixel grade map, an optional output of the software.
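Since the grades are powers of two, they can be combined and tested as bit flags; a short illustrative Python snippet:

# Grade values from the pixel grade map (bit flags, so a pixel can carry several).
SATURATED, BLEED, SPOT, DUST, HOT = 1, 2, 4, 8, 16

def good_pixel_mask(grade_map, reject=SATURATED | BLEED | SPOT | DUST):
    """Boolean mask of pixels free of the selected defects. Hot pixels are only
    flagged as a precaution, so they are kept by default."""
    return (grade_map & reject) == 0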

Contamination spots were first seen as a result of the first CCD bakeout on 23 July 2007, when an unknown organic contaminant collected in spots on the CCD (Narukage et al., 2011). The data are checked after each CCD bakeout and the spots are periodically remapped. These spots are partially opaque to X-rays, particularly at the longer wavelengths normally admitted by the thinner filters. The spots act as an anti-reflection coating at visible wavelengths, thus increasing the G-band signal in the spots. Attempts to create an effective wavelength-dependent flat field to correct for the effect of the contamination spots have so far proven unsuccessful, though a cosmetic correction can be performed. Software to perform the cosmetic correction exists in SSW, and one method will be included as an option in the latest update to the calibration software; however, we stress that this is not a scientific calibration of the spots. The use of pixels affected by contamination spots is strongly discouraged.

Dust specks were noted before launch and essentially block most incoming radiation. Hot pixels are defined as persistently over-bright pixels seen in averaged dark frames; the resulting maps are a combined result of independent analysis by R. Kano and coauthor Saar. These pixels are flagged as a precaution; it is not clear that they are significantly degraded in their calibration relative to “normal” pixels.

Section 8 Additional Systematic Effects Outside the Scope of the Calibration

Some effects on the instrument are more difficult to correct. Many are model dependent, and thus beyond our ability to correct or estimate with confidence. Cosmic ray streaks are not corrected by the calibration software, as the most effective repair is cosmetic and thus not scientifically robust (though the cosmetic repair is optionally available within the software).

The grazing incidence mirror used by XRT is a source of scattered light. Correcting for this scattered light requires a model-dependent and non-trivial deconvolution, and is thus not performed by the calibration software. The uncertainties due to scattered light are similarly difficult to estimate, and as such are not considered.

We have chosen not to include the uncertainties from photon counting in the standard calibration, as they rely heavily on models of the emitting plasma, as shown in Section 9. Modeling the photon counting uncertainty requires knowledge of the temperature and density of the emitting plasma, which can then be used to estimate the number of electrons each photon will excite in the CCD, a quantity that is strongly wavelength dependent. It is non-trivial to estimate the differential emission measure of solar plasma with broadband imager data. The interested user can estimate these uncertainties using software already available in SSW.

In May 2012, a calibration shift was detected, believed to have been caused by a small breach in the entrance filter on the outer annulus of the telescope. The breach in this filter allows extra visible light to fall onto the detector at the back of the telescope. While the full extent of this shift is still under investigation, it has been determined that the calibrations discussed here are not affected by it. The correction for this effect is still under development and will be detailed in a later paper after a more complete analysis can be performed.

Due to the normal and expected degradation over its lifetime, the CCD is beginning (as of late 2012) to exhibit signs of charge transfer inefficiency (CTI). CTI is caused by the CCD not fully transferring accumulated charge from one pixel to the next during readout, which creates a faint smeared trail in vertically adjacent pixels. This is a common problem in similar devices and tends to increase during the life of the CCD (Janesick, 2001). In the case of XRT, the CTI is noticeable in a few pixels in low-signal areas, with a magnitude that is of the same order as the dark noise (a few DN/pixel). In general CTI can be remedied by an annealing process whereby the CCD is exposed to heat, though the onboard heaters for XRT are unable to heat the CCD to high enough temperatures to noticeably improve charge transfer efficiency. At the time of this writing, no reliable quantitative correction procedure for CTI effects has been established.

Section 9 Uncertainties

9.1 Combining the Systematic Uncertainties

A preliminary version of this discussion of combining and comparing the systematic uncertainties can be found in Kobelski et al. (2012); we include an updated presentation here with more accurate estimates of the Fourier filter uncertainties (as discussed in Section 3.2). Because the vignetting correction is multiplicative, the systematic uncertainties (dark, Nyquist, Fourier, vignetting, and JPEG) do not all add in simple quadrature. The dark, Fourier filtering, and JPEG uncertainties do add in quadrature, yielding

σ_base = sqrt( σ_dark² + σ_Fourier² + σ_JPEG² ).

Since the vignetting correction is a divisor to the image, we must add the uncertainty due to vignetting as a relative uncertainty, and thus

σ_total = I_final · sqrt( (σ_base / I_dark)² + σ_vig² ),

where I_final is the fully corrected image, I_dark is the dark-corrected image, and σ_vig is the relative vignetting uncertainty from Section 4. A comparison between these individual uncertainties is shown in Figure 13. In all data sets we tested that used lossy JPEG compression at typical science levels or stronger, the JPEG uncertainty was the dominant source of systematic uncertainty, though still dwarfed by reasonable estimates of the photon counting uncertainty. We also note that the uncertainties introduced by the 2012 calibration shift would add in quadrature to the expression above.
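A sketch of this combination in Python, assuming the quadrature sum of the DN-level terms and a relative vignetting term as described above (the 0.45% default comes from Section 4):

import numpy as np

def combine_systematics(img_prep, img_dark_corr, sig_dark, sig_fourier, sig_jpeg,
                        sig_vig_rel=0.0045):
    """Combine dark, Fourier, and JPEG uncertainties (DN, in quadrature) with
    the relative vignetting uncertainty, scaled to the fully corrected image."""
    sig_base = np.sqrt(sig_dark ** 2 + sig_fourier ** 2 + sig_jpeg ** 2)
    return np.abs(img_prep) * np.sqrt((sig_base / img_dark_corr) ** 2 + sig_vig_rel ** 2)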

Figure 13.: A comparison of the different systematic (non-statistical) uncertainties for a randomly selected but typical image. Note the varying scales. The JPEG term generally dominates the uncertainty when lossy compression is used. The upper left is the reverse-color raw image. The upper right shows the percent error from the dark noise, which scales inversely with total signal (0–1.0%). The middle left shows the JPEG compression uncertainty for the same image (0–4.2%); note that very few pixels have 4% uncertainties, most are much lower. The middle right is the logarithm of the ratio of JPEG uncertainty to dark uncertainty; the JPEG uncertainty is almost always larger than the dark uncertainty. The lower left is the percent error from the Fourier filtering, which is very small while still reducing the dark uncertainty. All of these plots are normalized by the corrected intensity. The vignetting uncertainty is not shown, as it is 0.45% across the whole field of view. The total systematic uncertainty is in the lower right. This plot is updated from a similar plot in Kobelski et al. (2012), which did not include the Fourier filtering uncertainties.

9.2 Photon Counting Uncertainties

To determine the uncertainties from photon counting, we must translate the digitized DN value from the detector into the number of photons incident on the detector. This is a difficult (if not ill-posed) inversion problem. The difficulty arises partially from the fact that the quantum efficiency and gain of the detector are wavelength dependent, such that the number of electrons produced by a single incident photon varies with the wavelength of the photon. With a broadband instrument such as XRT, we must therefore estimate the temperature of the emitting plasma to determine the number of incident photons for a given DN. The photon counting uncertainty is temperature dependent and thus not well known, especially when considering multi-thermal plasmas. An example of this temperature dependence can be seen in Figure 14. As previously mentioned, due to this model dependence we have not included photon counting uncertainties among the systematic uncertainties returned by the calibration software.
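For completeness, a rough sketch of how a user might propagate photon noise under an isothermal assumption (dn_per_photon is the model-dependent conversion at the assumed temperature, to be taken from the instrument response, e.g. via the SSW response tools; the function below is illustrative only):

import numpy as np

def photon_noise_dn(img_dn, dn_per_photon):
    """Poisson photon-noise estimate in DN for an assumed isothermal plasma:
    convert DN to an estimated photon count, take the square root, convert back to DN."""
    photons = np.clip(img_dn, 0, None) / dn_per_photon
    return np.sqrt(photons) * dn_per_photon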

9.3 Comparing the Uncertainties

We have measured the uncertainties, including photon noise, for multiple data sets so as to compare their magnitudes. Typical comparisons are shown in Figure 14. The temperatures chosen in Figure 14 (log T = 5.5 and log T = 6.9) illustrate the variation in photon counting noise across the temperature range of XRT. The dominance of photon noise over the systematic uncertainties is clearly evident. The photon counting noise was calculated using the expected instrumental response to a plasma of a given temperature, as discussed in Narukage et al. (2011). The success of the calibration can be seen in how small the systematic uncertainties are compared to the photon counting noise.

The regions of high JPEG uncertainty are generally in the low DN range, where the photon noise is also very large. As the assumed temperature is increased, the dominance of the photon noise becomes even more significant, becoming nearly 30 times larger than the systematic uncertainties. While the photon noise can be limited operationally (e.g., via deeper exposures and pixel binning), the photon counting uncertainty will always dominate the systematic uncertainties. It is also worth noting the small effect of JPEG compression when compared to the omnipresent photon counting noise. All of these factors suggest that while the JPEG compression uncertainty is non-negligible, it is quantifiable and does not significantly impair the data when compared to other factors.

Figure 14.: The top images are 512×512 pixel maps of the logarithm of the ratio of photon noise to systematic (non-statistical) noise for Ti_poly observations from January 2011, with contours illustrating the systematic uncertainty percent error. We assumed log T = 5.5 for the left plot and 6.9 for the right. The contours give reference values where the ratio is 0.4 and 0.9. The bottom plot shows the ratio of photon noise to systematic noise as a function of signal for each assumed temperature, and also plots the percent uncertainty for both sources for the image set used above. The dotted line is the photon noise, while the lower dashed line is the systematic noise. In addition to showing the dominance of the photon counting noise over the systematic uncertainty, these plots also illustrate the strong effect the assumed temperature has on the photon counting uncertainty. This plot is adapted and updated from a similar plot in Kobelski et al. (2012).

Section 10 Test Case

To illustrate the utility and capabilities of the calibration software, we demonstrate a sample analysis of XRT observations of active region AR 11158 taken on 15 February 2011. This data set was chosen for its large dynamic range as well as a fairly standard level of JPEG compression. The active region produced a few flares, including a GOES X-class flare at 01:56 UT. Figure 15 shows an unprepped image and a comparison image to illustrate the effects of the calibration. The prepped image has improved contrast across the image, especially in the eastern section of the active region. The roughness in the quiet sun regions of the percent-change plot arises because the prep process removes the high frequency noise in this region, thus smoothing the background levels.

Figure 15.: Reverse-color Ti_poly image of AR 11158. On the left is the image before processing, and on the right is the percent change of the same image after the prep process. The percent change is the difference between the unprepped and prepped image, normalized by the unprepped image. The always-positive result shows that the raw image always contains more DN/pixel than the prepped image, as extraneous signal is removed by the prep process. The processing improves the perceived contrast of the active region, and removes noise in the low-signal regions. The box in the unprepped image marks the area integrated for the light curves plotted in Figure 16.

As can be seen in the right panel of Figure 15, the correction from the calibration is stronger in the quiet sun regions where there is inherently a smaller signal compared to noise. Where more flux is detected, the difference between the prepped and unprepped data becomes smaller, though it is still significant.

Figure 16 illustrates the effects of the calibration process. The top light curve shows the unprepped raw and the calibrated data from the boxed region in Figure 15. For most of the data, the prep process determines that approximately 40% of the signal is extraneous, as shown in the bottom panel of Figure 16. Deviations from this average value illustrate the dynamics of the calibration, with which small brightenings become more prominent, as seen when comparing to the upper two light curves. The brightenings around 02:00 UT strongly illustrate this effect and show, when compared to the lower plot, that the difference between the upper and middle plots arises from more than just exposure time normalization. Additionally, the ability to estimate systematic uncertainties enables meaningful photometric measurements, particularly important for distinguishing small brightenings from random fluctuations of X-ray intensity.

Figure 16.: Light curves of the boxed region in Figure 15. The top panel shows the raw uncalibrated data (solid line) and the calibrated data (dotted line), normalized by the number of pixels in the region. The second panel shows the data after calibration, normalized by exposure time; the narrower error bars are the calculated systematic uncertainty, while the larger, wider error bars are the photon counting uncertainty. The final panel shows the difference between the raw data and the prepped data, normalized by the raw data. The strong deviations from a flat line show the dynamics of the subtraction, i.e., more than just a spatially flat dark image was removed. Note that exposure time and pixel normalization do not matter for the lower panel, as all of the normalization factors cancel out.

Section 11 Conclusion

The current empirical calibration of XRT data is robust and greatly improves the reliability and accuracy of the data. Estimates of systematic uncertainties are provided by the calibration software to assist users in quantitative photometry of coronal features with XRT. In all cases the systematic uncertainties are found to be smaller than the (model dependent) photon counting uncertainties. The authors and the XRT team recommend that any radiometric analysis of these data include the corrections described in this paper, as performed by the calibration software. This analysis can also serve as a starting point for a more thorough correction of the data by the inclined and motivated user. Most of the methods used here are not limited to the analysis of X-ray data, and are thus viable ways to empirically calibrate data sets from other missions.

Acknowledgements

Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, and with NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). This work was supported by NASA under contract NNM07AB07C with the Smithsonian Astrophysical Observatory.

References

  • Freeland, S.L., Handy, B.N.: 1998, Data analysis with the SolarSoft system. Solar Phys. 182, 497 – 500.
  • Golub, L., DeLuca, E., Austin, G., Bookbinder, J., Caldwell, D., Cheimets, P., et al.: 2007, The X-Ray Telescope (XRT) for the Hinode mission. Solar Phys. 243, 63 – 86.
  • Janesick, J.R.: 2001, Scientific Charge-Coupled Devices, SPIE Press Monograph PM83, SPIE, Bellingham, Washington.
  • Kano, R., Sakao, T., Hara, H., Tsuneta, S., Matsuzaki, K., Kumagai, K., et al.: 2008, The Hinode X-Ray Telescope (XRT): Camera design, performance and operations. Solar Phys. 249, 263 – 279.
  • Kobelski, A., Saar, S., McKenzie, D.E., Weber, M., Reeves, K., DeLuca, E.: 2012, Measuring uncertainties in the Hinode X-Ray Telescope. In: Golub, L., De Moortel, I., Shimizu, T. (eds.) Fifth Hinode Science Meeting, ASP Conf. Ser. 456, 241 – 244.
  • Kosugi, T., Matsuzaki, K., Sakao, T., Shimizu, T., Sone, Y., Tachikawa, S., et al.: 2007, The Hinode (Solar-B) mission: An overview. Solar Phys. 243, 3 – 17.
  • Narukage, N., Sakao, T., Kano, R., Hara, H., Shimojo, M., Bando, T., et al.: 2011, Coronal-temperature-diagnostic capability of the Hinode/X-Ray Telescope based on self-consistent calibration. Solar Phys. 269, 169 – 236.