Theory of Dispersed Fixed-Delay Interferometry for Radial Velocity Exoplanet Searches
The dispersed fixed-delay interferometer (DFDI) represents a new instrument concept for high-precision radial velocity (RV) surveys for extrasolar planets. A combination of Michelson interferometer and medium-resolution spectrograph, it has the potential to perform multi-object surveys, where most previous RV techniques have been limited to observing only one target at a time. Because of the large sample of extrasolar planets needed to better understand planetary formation, evolution, and prevalence, this new technique represents a logical next step in instrumentation for RV extrasolar planet searches, and has been proven with the single-object Exoplanet Tracker (ET) at Kitt Peak National Observatory, and the multi-object W. M. Keck/MARVELS Exoplanet Tracker at Apache Point Observatory. The development of the ET instruments has necessitated developing a detailed understanding of the physical principles of the DFDI technique. Here we summarize the fundamental theoretical material needed to understand the technique and provide an overview of the physics underlying the instrument's operation. We also derive some useful analytical formulae that can be used to estimate the level of various sources of error generic to the technique, such as photon shot noise when using a fiducial reference spectrum, contamination by secondary spectra (e.g., crowded sources, spectroscopic binaries, or moonlight contamination), residual interferometer comb, and reference cross-talk error. Following this, we show that the use of a traditional gas absorption fiducial reference with a DFDI can incur significant systematic errors that must be taken into account at the precision levels required to detect extrasolar planets.
1 The DFDI Concept and the ET Program
1.1 The Need for a New Instrument
Despite the remarkable achievements in extrasolar planet detection over the last decade, identification of many more planets is still needed to constrain formation and evolutionary models. This is partially because of the unexpected diversity of planet properties uncovered, and partially because of a lack of large, well-defined, unbiased target search lists – the primary concern naturally having been to find planets in the first place. To this point many surveys have been subject to completeness issues or in some cases deliberate biases toward planet detection (e.g. da Silva et al. 2006), making it difficult to perform robust statistical analyses of the known planet sample. Armitage (2007) concluded that there is still a strong need for large uniform surveys to enlarge the statistical sample available: drawing on the unbiased survey of Fischer & Valenti (2005), he was only able to find a uniform subsample of 22 of the over 170 planets then known that satisfied the requirements for a statistical comparison with models.
A few thousand stars have been searched across the various RV surveys since the first RV discoveries of giant extrasolar planets around solar-type stars (Mayor & Queloz 1995), including a large fraction of the bright, stable, late-type stars. Improved instrument light throughput would help facilitate the survey of fainter stars. (A review of RV discoveries is given by Udry et al. 2007.) Although the rate of detections from transit surveys will likely increase, transit surveys can only detect the small fraction of planets which happen to eclipse their parent stars (roughly a 10% probability for hot Jupiters, from geometrical considerations – Kane et al. 2004). Furthermore, the complementary information gained from RV detections remains of great value. There is therefore a strong case for finding a technique capable of RV surveys down to faint magnitudes and at faster speeds than have been achieved over the last decade. The Exoplanet Tracker (ET) instruments are a new type of fiber-fed RV instrument based on the 'dispersed fixed-delay interferometer' (DFDI), built with the goal of satisfying this requirement.
1.2 The DFDI Principle
The radial velocity technique for detecting exoplanets consists of measuring the reflex motion of the parent star due to an orbiting planet, by measuring very precisely the resulting Doppler shifts of the stellar absorption lines. Achieving this requires extremely high precision: internal precisions now typically reach down to the 3 m/s level (Butler et al. 1996; Vogt et al. 2000), and even as low as 1 m/s or better (Pepe et al. 2005). For comparison, a Jupiter analogue in a circular orbit around a solar-type star would cause sinusoidal radial velocity variations with an amplitude of about 12.5 m/s. Exoplanet radial velocity surveys have traditionally depended on recording very high resolution echelle spectra, either cross correlating the spectra with reference template spectra, or fitting functions to the line profiles themselves to measure the positions of the centroids.
The DFDI technique, upon which the Exoplanet Tracker (ET) instruments are based, comprises a Michelson interferometer followed by a low or medium resolution post-disperser (also referred to by Erskine (2003) as an externally dispersed interferometer, or ‘EDI’, emphasizing the distinction from techniques where the dispersing element is internal to the interferometer). The effective resolution of the instrument is determined primarily by the interferometer, so the post-dispersing spectrograph can be of much lower resolution than in traditional dispersive techniques, and consequently can be smaller, cheaper, and have higher throughput (Ge 2002; Ge et al. 2003a, b). The technique is closely related to Fourier transform spectroscopy: the post-disperser effectively creates a continuum of very narrow bandpasses for the interferometer, increasing the interference fringe contrast. All the information needed is contained in the fringe phase and visibility. It emerges that since we are only interested in the Doppler shift of the lines, measurements are required at essentially only one value of interferometer delay (hence ‘fixed delay’).
The cost of the instrument is comparatively low, and most importantly, it can operate in a single-order mode: where traditional echelle spectrograph techniques operate by spreading a single stellar spectrum over an entire CCD detector in multiple orders, here the spectrum only takes up one strip along the detector. Spectra from multiple stars can therefore be lined up at once on a single detector. In combination with a wide field multi-fiber telescope, this makes multi-object RV planet surveying possible (Ge 2002; Ge et al. 2002; Mahadevan et al. 2003). The multi-object Keck ET instrument based on the DFDI technique is one of the first instruments to be built with this purpose (Ge et al. 2009). (Comparable traditional dispersive multi-object instruments are the VLT GIRAFFE and UVES/FLAMES spectrographs (Loeillet et al. 2008), and the MMT Hectochelle (Szentgyorgyi & Furész 2007).)
The very high levels of precision required for planet detection and the difficulty of directly measuring absolute wavelengths mean that some kind of stationary reference spectrum is invariably used as a calibration. Various types of fiducial reference have been employed to overcome these problems (e.g. Griffin & Griffin 1973; Campbell & Walker 1979), but the references of choice have generally become ThAr emission lamps (Baranne et al. 1996) and iodine vapor absorption cells placed within the optical beam path (Butler et al. 1996). In this respect, the ET instruments are the same, and we discuss the use of such references with the DFDI technique in this paper.
1.3 A Brief History
The idea of using the combination of a Michelson interferometer with a postdisperser was first proposed for precision Doppler planet searches by D. J. Erskine in 1997, at Lawrence Livermore National Laboratory (Erskine & Ge 2000; Ge 2002; Ge et al. 2002; Erskine 2003). The same approach is being followed by Edelstein et al. (2006) in the infra-red, in an attempt to find planets around late-type stars. A similar approach is discussed by Mosser et al. (2003) for asteroseismology and the measurement of stellar oscillations; more recently the technique has also been adopted for the USNO Dispersed Fourier Transform Spectrograph (dFTS) instrument (Hajian et al. 2007; Behr et al. 2009) (in this last case, the interferometer delay is also varied so that high resolution spectra can be reconstructed in addition to extracting Doppler shift information – see also Erskine & Edelstein (2004)).
The idea of dispersed interferometry itself is by no means new: Michelson himself recognized the use of interferometers for spectroscopy (Michelson 1903), and even proposed combining a disperser in series with a Michelson interferometer. In this case the disperser, a prism, was placed before the interferometer, allowing only a narrow bandwidth of light to enter the interferometer in the first place. In what was likely the first realization of a DFDI, Edser & Butler (1898) placed a Fabry–Perot type interferometer in front of a spectrograph to produce dispersed fringes (effectively an interferometer comb – see section 2.4), which they used as a fiducial reference for measuring the wavelengths of spectral lines. (It was mistakenly stated in van Eyken et al. (2004a) that Edser & Butler (1898) used a Michelson rather than a Fabry–Perot interferometer, which has certain disadvantages in this application.) Such dispersed fringes were later to become known as 'Edser–Butler fringes' (Lawson 2000).
Somewhat later, along with the development of P. Connes’ SISAM (“spectromètre interférentiel à sélection par l’amplitude de modulation,” described in Jacquinot 1960), various combinations of interferometers with dispersers began to be seen in the field of astronomy. Examples include Geake et al. (1959), using a Fabry-Perot in front of a spectrograph to increase throughput; and the later SHS (“spatial heterodyne spectroscopy,” Harlander et al. 1992) and HHS (“heterodyned holographic spectroscopy,” Frandsen et al. 1993; Douglas 1997) techniques, using internally dispersed interferometers, where the interferometer mirrors were replaced with gratings. Barker & Hollenbach (1972) outlined an early example of the use of true fixed-delay interferometry for velocimetry, measuring the velocities of laser-illuminated projectiles in the laboratory. The use of a Michelson interferometer for actual astronomical RV measurements was proposed shortly afterward by Gorskii & Lebedev (1977) and Beckers & Brown (1978). Forrest & Ring (1978) also proposed using a Michelson interferometer with a fixed delay for high-precision Doppler measurements of single spectral lines for the detection of stellar oscillations, and more recent examples of similar spectroscopic techniques include Connes (1985) and McMillan et al. (1993, 1994). Others have also used similar techniques for Doppler imaging over very narrow bandpasses, notably the WAMDII (wide-angle Doppler imaging interferometer) and GONG (Global Oscillation Network Group) projects (Shepherd et al. 1985; Harvey & The GONG Instrument Team 1995).
Many of these interferometric instruments, however, suffered from the limitation of having an extremely narrow bandpass, tending to limit their application to only bright targets. The DFDI technique used in the ET instruments allows for an arbitrarily wide bandpass, limited only by the spectrograph capabilities, while still retaining the high resolution spectral information needed for precision velocity measurements. The first such DFDI instruments were built at the Lawrence Livermore National Laboratory and the Lick 1m telescope between 1997 and 1999, and were reported in Erskine & Ge (2000) and Ge et al. (2002). The ET project was undertaken shortly after.
1.4 The ET Project
The ET project began at Penn State University in 2000, continuing at the University of Florida from 2004. Early lab tests were performed at Penn State, and prototype test runs were conducted at the McDonald Observatory Hobby-Eberly Telescope in late 2001, and at the Palomar 200 inch telescope in early 2002 (Ge et al. 2003b; Mahadevan 2006).
Two ET instruments have now been built: the single-object prototype ET (van Eyken et al. 2004b; Mahadevan et al. 2008a), permanently installed at the KPNO 2.1m telescope in 2003 after a temporary test run in August 2002; and the multi-object Keck ET, first installed at the APO Sloan 2.5m telescope in March 2005, upgraded and moved to a more stable location at the same telescope later that year, and then further upgraded and fully installed as a facility instrument housed in its own custom-built room in September 2008. The latter instrument will function as the workhorse for the SDSS III "Multi-object APO Radial Velocity Exoplanet Large-area Survey" (MARVELS; Ge et al. 2009).
Proof of concept was achieved using the KPNO ET with the first DFDI planet detection, a confirmation of the known companion to 51 Pegasi (van Eyken et al. 2004a). Our first planet discovery, HD 102195b (ET-1), was also later made using this instrument (Ge et al. 2006). The multi-object Keck ET is a full scale instrument developed to satisfy the survey requirements laid out in section 1.1, and it is anticipated that it will be able to make a significant contribution to the field of extrasolar planet searches over the next decade (Ge et al. 2009).
2 Instrument Principles and Theory
Although various forms of the DFDI have been employed before, the concept, particularly in its specific application to exoplanet finding, is rather new. Much of the work in understanding the data from the instrument has therefore involved coming to a full understanding of the physics of the instrument itself. Related theory is discussed in a number of sources (for example Goodman 1985; Erskine & Ge 2000; Lawson 2000; Ge 2002; Ge et al. 2002; Erskine 2003; Mosser et al. 2003; van Eyken et al. 2003); an attempt is made here to draw together, expand on, and precisely state the theoretical material needed for a complete understanding of the instrument, and to provide an overview of the physics underlying the instrument’s working from the perspective of precision RV planet detection. The approach taken here allows for some important insights, particularly regarding certain errors arising from the use of a common-path fiducial reference spectrum such as that from an iodine gas absorption cell. In addition we derive in section 3 some useful general formulae that can be applied to estimate analytically the magnitude of both these and a number of other types of error generic to the technique.
Taken together, this discussion should provide some of the fundamentals necessary for understanding and interpreting DFDI data. Appendix B gives a derivation showing the relation to the approach employed by Erskine (2003), to which the approach here is complementary.
2.1 Formation of a Fringing Spectrum
Figure 1 shows a highly simplified schematic of a DFDI, consisting of the two main components, a fiber-fed Michelson interferometer and a disperser, followed by a detector. Light input from the fiber is split into two paths along the arms of the interferometer and then recombined at the beamsplitter. The output is fed to the disperser, represented for convenience as a prism, though generally this will be a spectrograph. An etalon is placed in one of the interferometer arms to create a fixed optical path difference (or 'delay'), $d$, between the two arms, while allowing for adequate field widening (Hilliard & Shepherd 1966; Mahadevan et al. 2008a). The delay $d$ is typically on the order of millimeters. In practice, an iodine vapor cell can also be placed in the optical path before or after the interferometer to act as a fiducial reference (section 2.6).
Inputting a wide collimated beam of monochromatic light into the instrument with both interferometer mirrors exactly perpendicular to the light travel path will give either a bright or a dark fringe at the output of the interferometer (figure 1A), depending on whether the exact path difference between the two arms corresponds to constructive or destructive interference. If we were to scan one of the mirrors back and forth, the flux at the interferometer output would vary sinusoidally as a function of the delay $d$. If we now tilt this mirror about an axis in the plane of the page, we effectively scan a small range of delays along the $y$ direction (i.e. perpendicular to the axis of the tilt and in the plane of the mirror, corresponding to the slit direction in the spectrograph). Hence we would see a series of parallel bright and dark fringes, now varying sinusoidally as a function of $y$. (Another way of sampling the fringes is to scan the interferometer delay in very small steps (see Erskine 2003): this allows for certain advantages in calibration as well as a one-dimensional spectrum which requires less detector real-estate, but comes at the disadvantage of requiring an actively controlled interferometer. The principles are the same, however.)
Consider first, for the sake of argument, a very high (actually infinite) resolution spectrograph disperser: following the beam through until it reaches the detector plane would result in a single emission line with fringes along the slit direction, as shown in figure 1C. Switching the input spectrum to white light, which can be thought of as a continuum of neighboring delta functions in wavelength ($\lambda$) space, leads to a similar fringe pattern on the detector at every wavelength channel. Because the optical path difference, measured in numbers of wavelengths, is different for different wavelengths, each fringe is slightly offset in phase from its neighbors (and very slightly different in period). This gives rise to the series of parallel lines known as the interferometer 'comb,' shown in figure 1D. Going further and inputting a stellar spectrum into the instrument would simply give the product of the stellar spectrum and the comb, as in figure 1E. Finally, changing to the real case of a low or medium resolution spectrograph as for an ET-type instrument, the comb is no longer (or barely) resolved, and we see a spectrum like that in figure 1F. Such a spectrum is sometimes referred to as a spectrum "channeled with fringes," also known as Edser–Butler fringes (Edser & Butler 1898; Lawson 2000; Ge 2002). The remaining fringes contain high spatial frequency Doppler information that has been heterodyned down to lower spatial frequencies by the interferometer comb (Erskine 2003; Mahadevan 2006). It is this heterodyning that allows for the use of a low-resolution spectrograph at low dispersion, and is the key to the DFDI technique.
2.2 Fringe Phase and Visibility
Above we outlined a simple intuitive way of understanding the formation of the DFDI fringing spectrum. For a full mathematical description, we proceed by a slightly different route. Each wavelength channel on the detector has an associated sinusoidal fringe running along the slit direction, where by 'channel,' we mean specifically an infinitesimally wide strip of the spectrum along the slit direction at pixel position $x$, where $x$ need not necessarily be an integer. A given fringe has an associated phase and visibility, where visibility is a measure of the contrast in the fringe, defined as the ratio of the amplitude of the fringe to its central (mean) flux value. Equivalently, this can be stated as $V = (I_{max} - I_{min})/(I_{max} + I_{min})$, where $I_{max}$ and $I_{min}$ are the maximum and minimum flux values in the fringe (Michelson 1903). Here we introduce the concept of a 'whirl' (Erskine & Ge 2000): the phase and visibility for a fringe can together be thought of as representing a vector, with the phase representing the vector's direction and the visibility representing its magnitude. These quantities can be determined in a number of ways; in general we simply fit a sinusoid. An ensemble of such vectors representing a full spectrum of channels is called a whirl. The whirl is the directly measured quantity from a fringing spectrum and contains the information relevant to velocity determination. Vector operations such as addition, subtraction, and scalar products can be performed on these whirls just as for the individual vectors (Erskine & Ge 2000).
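As a concrete illustration of the sinusoid fit mentioned above, the phase and visibility of a single channel's fringe can be recovered by linear least squares, since $V\cos(\theta+\phi) = A\cos\theta + B\sin\theta$. This is only a minimal sketch: the fringe frequency along the slit is assumed known here, and the actual ET pipeline is more involved.

```python
import numpy as np

def fit_fringe(flux, n_periods=3.0):
    """Fit I(y) = F0 * (1 + V*cos(theta(y) + phi)) along the slit.

    theta(y) is assumed known from the interferometer geometry (here:
    n_periods fringes across the slit). Linearizing the cosine turns
    the problem into ordinary least squares on [1, cos, sin]."""
    y = np.arange(len(flux))
    theta = 2 * np.pi * n_periods * y / len(flux)
    basis = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    c0, ca, cb = np.linalg.lstsq(basis, flux, rcond=None)[0]
    visibility = np.hypot(ca, cb) / c0      # |V| = sqrt(A^2 + B^2) / F0
    phase = np.arctan2(-cb, ca)             # A = V cos(phi), B = -V sin(phi)
    return phase, visibility

# Synthetic noiseless fringe with visibility 0.4 and phase 0.7 rad:
y = np.arange(200)
theta = 2 * np.pi * 3.0 * y / 200
flux = 100.0 * (1 + 0.4 * np.cos(theta + 0.7))
phi, vis = fit_fringe(flux)
```

The returned (phase, visibility) pair is exactly the per-channel vector that, collected over all channels, forms a whirl.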
To understand what determines the values of the phase and visibility for a fringe, we can consider the contribution from each wavelength of light to a particular channel on the detector (remembering that although the channel is infinitesimally wide in its spatial extent in the dispersion direction, it still has a finite bandwidth). Each contributing wavelength has passed through the interferometer, and for an ideal interferometer, will contribute a sinusoid of 100% visibility like that in figure 1C. The flux of these sinusoids on the detector can each be described by $1 + \cos(2\pi d/\lambda) = 1 + \Re(e^{i2\pi d/\lambda})$, where the delay $d$ varies linearly with position along the length of the slit, and $\Re$ represents the real part of a complex expression. Since the spectrograph has finite resolution, a narrow band of such wavelengths will contribute to any given channel, owing to the overlap of line spread functions (LSFs) from neighboring wavelengths. The measured fringe along the slit direction is a continuous summation of those sinusoids, weighted by the flux of the spectrum contributing to that channel, $S_x(\lambda)$, where $S_x$ is given by the product of the power spectrum coming into the instrument and the spectrograph response function at that channel on the detector. We use the term 'spectrograph response function' throughout to refer to the light throughput as a function of wavelength at a given infinitesimal point on the detector, or equivalently, at a given channel in the image on the detector. (This is distinct from, though closely related to, the LSF – see appendix A.)
Switching from wavelength $\lambda$ to wavenumber $\sigma = 1/\lambda$, and dropping the subscript $x$ for simplicity, the summation of sinusoids can be expressed as:

$F(d) = \int_0^\infty S(\sigma)\,[1 + \cos(2\pi\sigma d)]\,d\sigma \qquad (1)$

$\phantom{F(d)} = \int_0^\infty S(\sigma)\,d\sigma + \Re\left[\int_0^\infty S(\sigma)\,e^{i2\pi\sigma d}\,d\sigma\right], \qquad (2)$

where $F(d)$ is the measured flux along the slit direction. The first term on the right hand side is simply the total integrated flux in the channel, which must be real valued. The second term can immediately be identified as the real part of a Fourier transform, $\mathcal{F}(S)(d)$, with delay $d$ as the conjugate variable to wavenumber $\sigma$, and shows the close relationship between DFDI instruments and Fourier transform spectroscopy (Jacquinot 1960).
Normalizing by dividing through by the total flux, $F_0 = \int_0^\infty S(\sigma)\,d\sigma$, we can define the complex quantity $\mathbf{V}$ such that

$F(d) = F_0\,[1 + \Re\,\mathbf{V}(d)], \qquad \mathbf{V}(d) \equiv \frac{\mathcal{F}(S)(d)}{\int_0^\infty S(\sigma)\,d\sigma} = V e^{i\phi}. \qquad (3)$
This is the fundamental equation for DFDI fringe formation: the quantity $\mathbf{V}$ is the 'complex degree of coherence' (Goodman 1985), and describes the phase, $\phi$, and amplitude, $V$, of the normalized fringes (i.e. the visibility), as a function of $d$ and the input spectrum. $\mathbf{V}$ is referred to here as the complex visibility. (The quantity is generally represented by the letter $\gamma$ in the literature cited. We use $\mathbf{V}$ here instead purely for clearer distinction between bold-faced vector and regular-faced amplitude representations.) More rigorous derivations of this can be found in Goodman (1985, ch. 5) and Lawson (2000), but this explanation is adequate for our purposes.
In order to understand the actual form of the fringes seen in a DFDI, it is important to realize that the portion of spectrum contributing to any given channel, $S(\sigma)$, has a very narrow passband (for the ET instruments, set by a spectrograph resolution element of width $\Delta\sigma \ll \sigma$). We imagine $S$ as being a shifted version of a function $S_0$, where $S_0$ has characteristic width $\Delta\sigma$ and is centered at zero wavenumber. We shift $S_0$ in wavenumber so that its center falls at wavenumber $\sigma_0$, and we have $S(\sigma) = S_0(\sigma - \sigma_0)$. By the Fourier shift theorem we can write:

$\mathcal{F}(S)(d) = e^{i2\pi\sigma_0 d}\,\mathcal{F}(S_0)(d). \qquad (4)$
The right hand side shows two components. The exponential term represents a linear phase variation with delay, varying on the scale of the period $1/\sigma_0$. The second term, the Fourier transform, represents a modulation of this signal. By the reciprocal scaling property of Fourier transforms, the second term can be expected to vary on minimum length scales of the order of the reciprocal of the width of $S_0$, that is, on scales of $1/\Delta\sigma$. Since $\Delta\sigma \ll \sigma_0$, equation 4 represents a sinusoidal fringe of frequency $\sigma_0$ modulated by a slow variation in both phase and amplitude. To see this more clearly, we can substitute equation 4 into the Fourier transform term on the right hand side of equation 2 and write:

$F(d) = \int_0^\infty S(\sigma)\,d\sigma + \Re\left[e^{i2\pi\sigma_0 d}\,\mathcal{F}(S_0)(d)\right]. \qquad (5)$
If we define

$E(d) \equiv \mathcal{F}(S_0)(d)\,/\,\mathcal{F}(S_0)(0) = |E(d)|\,e^{i\phi_E(d)}, \qquad (6)$

noting that $\mathcal{F}(S_0)(0) = \int_0^\infty S(\sigma)\,d\sigma = F_0$, we can rewrite equation 5 as:

$F(d) = F_0\left[1 + |E(d)|\cos\big(2\pi\sigma_0 d + \phi_E(d)\big)\right]. \qquad (7)$
This clearly shows the form of the fringe. Over large ranges of $d$, the fringe appears like a 'carrier wave,' given by the cosine term, that is slowly modulated in phase and amplitude by an envelope $E$ (the 'coherence envelope', Lawson 2000). Over the length of the slit direction on the detector, we sample only a very small range of delays around the fixed delay $d_0$, where $d_0$ is determined by the interferometer etalon, as before, and the sampled range $\Delta d$ is typically a few wavelengths. Over this range, the variation in $E$ is small as we show below, so we see only an approximately uniform sinusoid (see figure 2) along a single wavelength channel on the detector. In measuring the phase and visibility of this fringe, we essentially make a measurement of $\mathbf{V}$ at the fixed delay $d_0$. The phase offset of the sinusoid is determined by the argument of the cosine in equation 7, $2\pi\sigma_0 d_0 + \phi_E(d_0)$. The measured (absolute) fringe visibility is simply the amplitude of the normalized fringe, $|E(d_0)|$.
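The carrier-times-envelope form of the fringe can be checked numerically against the directly integrated flux $\int S(\sigma)[1+\cos(2\pi\sigma d)]\,d\sigma$: the two expressions should agree to machine precision. This is only a sanity-check sketch, with illustrative numbers.

```python
import numpy as np

# Toy Gaussian channel spectrum (illustrative wavenumbers, cm^-1)
sigma0, s = 1.8e4, 2.0
sigma = np.linspace(sigma0 - 8 * s, sigma0 + 8 * s, 20001)
S = np.exp(-0.5 * ((sigma - sigma0) / s) ** 2)
F0 = np.trapz(S, sigma)                      # total flux in the channel
d = 7e-3                                     # delay (cm), illustrative

# Direct summation of the weighted sinusoids:
F_direct = np.trapz(S * (1 + np.cos(2 * np.pi * sigma * d)), sigma)

# Carrier times coherence envelope E(d) = F{S0}(d) / F{S0}(0):
E = np.trapz(S * np.exp(2j * np.pi * (sigma - sigma0) * d), sigma) / F0
F_decomposed = F0 * (1 + np.abs(E) * np.cos(2 * np.pi * sigma0 * d + np.angle(E)))
```

The agreement holds for any channel spectrum, since the decomposition is just the shift theorem applied to the Fourier term of the fringe flux.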
In general, we can estimate a rough order of magnitude for the fractional change in the magnitude of the visibility between consecutive sinusoid peaks by comparing the variation length scales: to order of magnitude, we can expect $E$ to vary by of order itself on scales of $1/\Delta\sigma$, so that over one period of the sinusoid, $1/\sigma_0$, it will vary fractionally by $\sim\Delta\sigma/\sigma_0 = 1/R$, where $R$ is the spectrograph resolution. Since the width of the response function sets the fastest variation scale of the envelope for any input spectrum, this represents an upper limit. For the ET instruments $R$ is of order a few thousand, so that over the length of the slit (a few fringes) the variation is only of order 0.1% or less. In practice such a small variation will usually be significantly below the measurement errors in fringe phase and visibility due to photon noise for even the brightest sources, and would correspond to a negligible final velocity error for an instrument similar to the KPNO ET. (This follows on assuming independent channels and the phase-velocity scaling factor of section 2.3, together with the relationship between phase error and visibility error shown in section 3.1, equation 36.) Even in the event that it is desired to reach such extremely high signal-to-noise ratios (S/Ns), it is in principle a simple matter to fit extra parameters to allow for non-uniformity of the sinusoidal fringe, although this has not been attempted with the ET instruments.
In figure 2, the varying amplitude of the modulating coherence envelope, $|E(d)|$, is illustrated explicitly, and we see how measuring the fringe over a narrow range of delays around $d_0$ gives an approximately uniform sinusoid. This corresponds directly to the image seen along the length of the slit direction in a given channel on the detector. For illustration the very simple case is shown of white light through a rectangular bandpass with no absorption lines, so that $S$ (and therefore $S_0$) is a top-hat function. $E$, therefore, is the corresponding normalized Fourier transform, a sinc function, with zeros at $d = n/\Delta\sigma$ (for integer $n \neq 0$), which modulates a sinusoid of period $1/\sigma_0$. In practice the passband, $\Delta\sigma$, will be very narrow, so that the variation of $E$ will be much slower compared to the sinusoid than suggested in the figure, and the sinusoid itself will be highly uniform over $\Delta d$ (i.e. over the length of the slit).
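The sinc-shaped envelope of the top-hat case can be verified numerically: for a flat passband of width $\Delta\sigma$, $|E(d)| = |\mathrm{sinc}(\Delta\sigma\,d)|$, vanishing at $d = n/\Delta\sigma$. A minimal sketch, with a deliberately exaggerated (illustrative) passband width:

```python
import numpy as np

# Flat (white-light) top-hat passband of width dsig centered at sigma0
sigma0, dsig = 1.8e4, 2.0                     # cm^-1, illustrative
sigma = np.linspace(sigma0 - dsig / 2, sigma0 + dsig / 2, 40001)
S = np.ones_like(sigma)

def envelope(d):
    """|E(d)|: magnitude of the normalized FT of the centered passband."""
    num = np.trapz(S * np.exp(2j * np.pi * (sigma - sigma0) * d), sigma)
    return abs(num / np.trapz(S, sigma))

# envelope(d) should follow |sinc(dsig * d)| (numpy's normalized sinc),
# with its first zero at d = 1/dsig, where the fringes vanish entirely.
```

Choosing the delay to sit at such a zero is exactly the comb-nulling trick discussed in section 2.4.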
For a more complicated input spectrum, such as that from a star with its multitude of absorption lines, and for a more realistic LSF, the coherence envelope will generally also have a much more complicated shape, though the variations will still be slow in and therefore close to uniform along the slit (i.e. within the upper limit discussed above, since the width of the resolution element still determines the fastest variation scale). Each channel will have its own unique piece of spectrum contributing to it, and therefore each will have its own particular phase and visibility. It is this that gives rise to the varied patterns of fringes that are seen in the final fringing stellar spectra (e.g. figure 1F).
In practice the profile in the slit direction will also be modulated in amplitude by a slit illumination function, but this can be calibrated out or modeled during the fringe fitting, and has no effect on fringe visibility. Though this can present its own practical challenges for data reduction, the illumination function is neglected here for simplicity, and taken to be uniform and equal to unity.
As an aside we note that $\mathbf{V}$ and $E$ are very closely related: from their respective definitions in equations 6 and 3, $\mathbf{V}(d) = e^{i2\pi\sigma_0 d}E(d)$. The only difference is a phase offset, $2\pi\sigma_0 d$, which, for a given channel at wavenumber $\sigma_0$ and fixed delay $d_0$, is constant – that is to say, $|\mathbf{V}| = |E|$ and $\arg\mathbf{V} = \arg E + 2\pi\sigma_0 d_0$. Since the instrument is to be used purely for differential measurements, the zero point from which phases are measured is somewhat arbitrary and has no physical significance: we are concerned with changes in phase over time, which will affect both $\mathbf{V}$ and $E$ in the same way. For the analyses presented hereafter, the difference between $\mathbf{V}$ and $E$ is therefore not of great significance, and either can equally well be thought of as the complex visibility. However, for the sake of consistency, $\mathbf{V}$ is generally intended by the term.
2.3 From Phase to Velocity
For reference, the fundamental equation 3 can be written out explicitly in terms of the input spectrum and the response function:

$\mathbf{V}(d_0) = \frac{\mathcal{F}(B_\sigma R_\sigma)(d_0)}{\mathcal{F}(B_\sigma R_\sigma)(0)},$

where $\mathbf{V}$ is the complex visibility (or complex degree of coherence), a vector quantity whose phase represents the phase of the measured fringe, and whose magnitude (from 0 to 1) represents the absolute visibility of the measured fringe; $\mathcal{F}(\cdot)(d_0)$ represents a Fourier transform evaluated at interferometer path difference $d_0$, or time delay $\tau_0$, where $\tau_0 = d_0/c$ and $c$ is the speed of light; $B$ is the input spectrum; and $R$ is the response function for that particular channel on the detector, so that the spectrum contributing to the channel is given by $S = BR$ as before. We take $d$ to be fixed at a value $d_0$ (for the purposes of the calculations here, the small difference in $d$ across the length of a sinusoidal fringe is of no consequence). Subscripts are added to explicitly indicate functions of wavenumber, $\sigma$, or optical frequency, $\nu$: we note that the equation is completely equivalent in $\sigma$ space with $d$ as the conjugate variable, or in $\nu$ space with $\tau$ as the conjugate variable. In general the form being used will be implicit from the context, so we drop these subscripts. We have also replaced the integral over the flux in the denominator with the Fourier transform at zero delay, which is mathematically equivalent (this fact is made use of a number of times later on in this analysis). All the necessary mathematics for determining Doppler shifts and for dealing with the combination of the star and fiducial reference spectra (see section 2.6) derive from this formula.
The key to the DFDI RV technique is the fact that Doppler shifts of the spectrum result in directly proportionate phase shifts of the fringes. This is a direct consequence of the Fourier shift theorem (see e.g. Erskine 2003). If the spectrum shifts such that , and we correctly follow the shift in the dispersion direction (so that we now compare to the wavelength channel corresponding to – assuming that the spectrograph response function maintains the same form in nearby channels, and noting that is not necessarily an integer), then the shift theorem gives:
In other words, we have a phase shift of . By comparing the measured phase of the new fringes with the previously unshifted ones, , it is thus possible, in this simple case where there is no superposed reference spectrum and the instrument is perfectly stable, to derive the Doppler shift without any explicit knowledge of the underlying high resolution spectrum, or of the spectrograph LSF. Using the Doppler shift equation , where represents velocity, conventionally positive in the direction away from the observer, we can write:
where, , the phase-velocity scaling factor which gives the proportionality between phase shift and velocity shift, is defined as:
By combining the many measurements of the phase shift from each channel, , (allowing, if necessary, for the wavelength dependence of ), a very high precision measurement of the differential Doppler velocity shift, , can be made.
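This combination step can be sketched numerically. The per-channel phase-velocity scaling (a uniform 2 mrad per m/s) and the noise level below are invented for illustration only; a real instrument's scaling follows from the delay and channel wavelength:

```python
import numpy as np

# Sketch only: combine per-channel fringe phase shifts into a single
# differential velocity via an inverse-variance weighted mean.  The
# scaling (2 mrad per m/s) and noise level are invented for illustration.
def phase_to_velocity(dphi, phi_scale, sigma_phi):
    """Weighted mean velocity from per-channel phase shifts."""
    v = dphi / phi_scale                  # per-channel velocity estimates
    sigma_v = sigma_phi / phi_scale       # propagated per-channel errors
    w = 1.0 / sigma_v**2                  # inverse-variance weights
    v_mean = np.sum(w * v) / np.sum(w)
    v_err = 1.0 / np.sqrt(np.sum(w))      # standard error of weighted mean
    return v_mean, v_err

rng = np.random.default_rng(0)
phi_scale = np.full(100, 2.0e-3)          # assumed: 2 mrad per m/s, all channels
sigma_phi = np.full(100, 1.0e-2)          # assumed: 10 mrad phase noise
dphi = 10.0 * phi_scale + rng.normal(0.0, sigma_phi)  # true shift: 10 m/s
v_mean, v_err = phase_to_velocity(dphi, phi_scale, sigma_phi)
```

With these assumed values, each channel alone is good to 5 m/s, and averaging 100 channels brings the formal error down to 0.5 m/s.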
2.4 The Interferometer Comb
The interferometer comb, mentioned in figure 1 and the corresponding text, is really just a special case of the discussion in section 2.2, where the input spectrum to the instrument is purely white light continuum. In that case , the product of the input spectrum and the spectrograph response function, is itself equal to the spectrograph response function. The comb is therefore purely a consequence of the response function, arising naturally from equation 3. In fact, the example used of the top-hat function for is a reasonable first approximation for the LSF, and so also for the response function (see appendix A), for a spectrograph where the slit width dominates the resolution. The interferogram in figure 2 is thus a reasonable representation of the behavior of the interferometer comb at finite resolution.
We can see from this that by appropriately choosing the delay and spectrograph slit width we can null out the interferometer comb by finding a minimum in the envelope. Early experiments with ET prototypes, varying the slit width and delay, did indeed show this kind of sinc-like variation in the comb visibility. This becomes important when using a superimposed reference spectrum, as in section 2.6.1.
It is also instructive to consider an idealized infinite resolution spectrograph. In this case, the response function, , becomes a delta function, so that is also a delta function for all channels . By equation 6, given that is the delta function shifted to , the coherence envelope, , is the normalized Fourier transform of this delta function: at all delays. Equation 7 then gives the very simple form of the resulting interferogram:
where we have stopped representing as a mean value since the width of the channel is negligible.
This 100% visibility ‘infinite resolution’ comb is the underlying form for any DFDI comb. Lowering the resolution will reduce the visibility from 100% at the given fixed delay, as in the example of figure 2, with perhaps an overall phase offset depending on the symmetry of the spectrograph response function (and uniform to the extent that the response function and LSF are uniform across all channels).
The infinite-resolution comb can also be thought of as an interferometer transmission function. In introducing the instrument (section 2.1), we first described the formation of the DFDI spectrum as a multiplication of the stellar spectrum and the infinite-resolution interferometer comb (i.e. the interferometer transmission function), convolved with the LSF down to the spectrograph resolution. This is the approach adopted by Erskine (2003) and Mahadevan (2006), and the two views are entirely equivalent. Following the Fourier transform approach outlined here, however, we can proceed somewhat further, and obtain some important insights into the systematic errors arising from the use of a simultaneous fiducial reference spectrum (section 2.6). In principle the Fourier transform approach can also be used to create simulated DFDI spectra without having to assume a uniform LSF at all wavelengths, which is difficult to do in the alternative approach. A derivation relating the two methods is outlined in appendix B.
To show visually how the comb forms, it is depicted schematically in figure 3, plotting contours of flux from equation 12 as a function of wavelength and delay . Since maps linearly to position on the detector and delay maps linearly to position along the slit (at least for an ideal spectrograph and interferometer), this also represents the image that would be seen on the detector if the full ranges could be sampled down to zero wavelength and zero delay. The box in the figure schematically represents the segment of the interferogram that we actually observe with the instrument: a series of tilted parallel fringes (as shown in figure 1D), with a very slow wavelength dependency. For clarity, the figure is not to scale: in practice, the delay is fixed to a much larger value so that the fringes are observed at much higher order, , and the wavelengths observed are much longer, so that any real observed comb is much denser and more uniform, and the variations with wavelength much smaller.
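The white-light comb described above is easy to sketch numerically; the delay and wavelength range below are illustrative values only, not those of any ET instrument:

```python
import numpy as np

# Illustrative infinite-resolution white-light comb: transmitted flux
# proportional to 1 + cos(2*pi*d/lambda), i.e. 100% visibility.
# The delay and wavelength range are invented for illustration.
d = 11.0e-3                                   # assumed path difference: 11 mm
lam = np.linspace(500e-9, 570e-9, 200_000)    # wavelength grid (m)
flux = 1.0 + np.cos(2.0 * np.pi * d / lam)    # infinite-resolution comb

# At such delays the fringe order m = d/lambda is large, so the observed
# comb is dense and locally nearly uniform in wavelength.
m_mid = d / lam[len(lam) // 2]
```

The large fringe order at the grid midpoint (of order tens of thousands here) illustrates why the observed comb is so much denser and more uniform than the schematic of figure 3.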
2.5 Calculating the Interferometer Delay
The interferometer delay, , is determined by the etalon in the interferometer, and must be known precisely in order to be able to accurately translate from phase measurements to velocity measurements. The best precision that can be obtained in RV measurements is a trade-off between maximizing the phase-velocity scale (so that a large phase shift results from a small change in velocity) and maximizing the visibility of the fringes (since higher visibility means more accurate measurements of the fringe phases). Since the visibility of the fringes is determined by the match between and the typical spectral line widths to be observed, an optimal value of can be chosen to give the best precision for the expected typical targets for the survey (Ge 2002). This is set at design time, and remains fixed for the instrument.
Annual variations in the RV of a star can be as large as even for an RV-stable target, owing simply to the orbital motion of the Earth around the Sun (which dominates significantly over the Earth’s rotation). If we are to consider approaching precisions on the order of we therefore need to know to better than one part in 60,000. Since depends directly on the interferometer delay (equation 11), determining is synonymous with measuring the delay.
To a first approximation, the delay can be calculated from the properties of the delay in the interferometer. For example, for a monolithic interferometer with arm lengths and and refractive indices and respectively, this is given by (Mahadevan et al. 2008a):
This depends on the assumption that there is negligible dispersion in the etalon glass, i.e., that and are close to independent of wavelength over the wavelength range of interest. Dispersion can in fact be a significant effect, but the assumption should be good to a few percent (Barker & Schuler 1974; D. J. Erskine 2001, private communication), enough for an initial estimate. Accounting fully for the dispersion and allowing to become a function of wavelength, however, is essential where very high velocity precision is required from large bandwidth observations.
A more precise measure of the delay can be determined simply by counting fringes in the interferometer comb. We know from equation 12 that the phase of the comb varies as . Although this equation is for a comb at infinite resolution, the same variation will hold true at lower resolutions: a spectrograph response function broader than a delta function will only reduce the visibility of the interferogram, and possibly add an overall phase offset to the entire interferogram (provided that the shape of the response function is uniform across the detector). Differentiating with respect to wavelength:
where is the fringe order, giving:
In other words, by counting the fringe density over wavelength, we can immediately calculate , and hence . Since there is a dependence in itself, care needs to be taken to account for the dependence properly when determining the fringe density at a given wavelength. This may more easily be done in wavenumber space instead, since the fringe density is uniform with wavenumber, and .
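A numerical sketch of this fringe-counting procedure follows; the delay value is invented, and simple peak counting stands in for a real fringe-counting algorithm:

```python
import numpy as np

# Sketch: recover the delay by counting comb fringes in wavenumber space,
# where the fringe density per unit wavenumber is simply the path
# difference d.  The delay and grid are invented for illustration.
d_true = 7.0e-3                                   # assumed delay: 7 mm
kappa = np.linspace(1.8e6, 1.9e6, 2_000_000)      # wavenumber grid (1/m)
comb = 1.0 + np.cos(2.0 * np.pi * d_true * kappa)

# Count interior fringe maxima (samples larger than both neighbors).
mid = comb[1:-1]
n_fringes = np.count_nonzero((mid > comb[:-2]) & (mid > comb[2:]))

# Fringe count over the wavenumber span gives back the delay.
d_est = n_fringes / (kappa[-1] - kappa[0])
```

With ~700 fringes in the span, the miscount of at most a fringe or two at the edges limits the recovery to roughly a part in a thousand, consistent with the accuracy quoted below.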
In practice, counting fringes is often not easy, since the comb is often barely resolved (usually by design). As long as the comb is not under-sampled on the detector, this can be overcome by temporarily using a narrower slit in the spectrograph, since in principle the delay should only need to be determined once. Even so, it is usually possible in practice only to count over a range of a few hundred to one or two thousand fringes. Counting along one row in the dispersion direction of the comb therefore gives an accuracy on the order of one part in 1000. Over a variation, this is still only good to the level. Our method of choice in the past has been simply to observe known stable reference stars over the time baseline of interest and use their known apparent changes in velocity due to the Earth’s motion to calibrate . Provided the reference stars are genuinely stable, and they are positioned in the sky such that their barycentric motions are large, this technique will provide an accuracy in the determination of at least equal to the intrinsic RV stability of the stars.
Other methods are under investigation which should allow more precise measurement of the delay. By averaging fringe counts over many rows of a wide spectrum, and further averaging over many frames, it may be possible to achieve significantly sub-fringe counting accuracy. Other techniques in development, using a separate device to directly measure the interferometer delay, should provide a robust direct measurement that obviates the need for more laborious empirical delay determination.
2.6 Handling a Fiducial Reference Spectrum
2.6.1 Multiplied Reference
The extremely high sensitivity of the instrument means that numerous instrumental effects can masquerade as velocity shifts. Tiny changes in the interferometer delay due to thermal flexure, for example, will appear as phase shifts in the fringe pattern. The image itself can also shift as a whole on the detector in both the slit and the dispersion directions.
One way of accounting for these instrumental artifacts is to use a fiducial spectrum from some known zero-velocity reference. The simplest way to do this is to bracket the science data, either spatially, running the fiducial spectra along a separate optical path alongside the target spectrum; or temporally, alternating target exposures and reference spectrum exposures along the same optical path. Since the reference spectrum is stationary with respect to the instrument, it will track instrument shifts, which can then be subtracted from the measured stellar shift to reveal the star’s intrinsic motion. (Note that from equation 10, a change in conveniently has mathematically exactly the same effect as a change in velocity, .) These approaches, however, potentially suffer from errors due to their separation from the science data: in the first case, because of non-common path errors due to imperfect optics, and in the second case, because the fiducial exposures are not tracking instrument drift contemporaneously with the data.
An alternative approach is to insert an absorption reference into the optical path – in the case of the ET instruments in the past, a glass cell filled with iodine vapor maintained at a fixed temperature, the traditional reference of choice for RV planet searches. In this way the reference spectrum is multiplied with the stellar spectrum. To do this, for each target to be observed, two fringing ‘template’ spectra are taken, one being pure star with no reference in the beam path, and the other pure reference (for ET, a pure iodine spectrum taken by shining a tungsten continuum lamp through the cell). These templates are then used to separate out the stellar and reference components of the combined star/reference data (referred to here as ‘data’ or ‘measurement’ frames, as distinct from ‘template’ frames). A formalism is required to extract the reference and stellar spectra from the combined spectrum. In order to proceed, we define the following symbols:
— as before, the pixel number in the dispersion direction which identifies the column along which a fringe is measured in the slit direction, corresponding to a single channel. Strictly speaking, the channel is infinitesimally wide on the detector, so that need not necessarily be an integer. Since the spectrum is oversampled, however, it is often a reasonable simplification to think of the entire pixel column representing an infinitesimal sample in the dispersion direction (see appendix A).
— the complex visibility vector (i.e. phase and absolute visibility) for a fringe at channel in a single Doppler measurement frame of combined star/reference data, an ensemble of such values for a spectrum across all comprising a ‘whirl.’
— the measured complex visibility for the star template at channel .
— the measured complex visibility for the reference template at channel .
— the input spectrum for a combined star/reference data frame, where represents a normalization, such as the continuum function, and is the normalized spectral density. is assumed constant to a good approximation over the scale of the width of the response function (see below) and instrument LSF, and .
— the same for the star template spectrum.
— the same for the reference template spectrum.
— such that
— the response function at position on the detector, i.e., the spectrum that contributes to an infinitesimally wide channel at the detector plane if perfect continuum light is passed through the instrument. (Note that is very closely related to the instrument LSF — see appendix A)
— the interferometer delay, fixed to a value of , as usual.
— phase/velocity scaling constant, also as before.
— as before, Fourier transform evaluated at interferometer path difference .
— shorter notation for Fourier transform, for convenience.
— used to denote convolution, evaluated at a delay of .
We assume for now the case where there is neither intrinsic Doppler shift nor any instrument shift in either phase or the dispersion direction, for both star and reference components. We also assume no photon shot noise. Here the aim is simply to reconstruct the data whirl from the two template whirls. Once this is achieved, it is conceptually a relatively trivial step to allow for shifted and noisy data: the template whirls need only be shifted iteratively in phase and translated in the dispersion direction until a best-fit solution is found, allowing the intrinsic stellar Doppler shift to be directly calculated. This can be done using any standard least-squares method.
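The fitting step just described can be sketched as follows. Everything here (the whirls, the fixed scalings of 0.6 and 0.4, the grid search) is invented for illustration; a real pipeline would also float the scaling factors and dispersion-direction shifts and use a proper least-squares optimizer:

```python
import numpy as np

# Sketch of whirl fitting: whirls are arrays of complex visibilities; the
# data whirl is modeled as phase-rotated star and reference templates.
# The whirls, fixed scalings, and grid search are all invented for
# illustration and are not the ET pipeline's actual method.
rng = np.random.default_rng(1)
n = 200
star = rng.uniform(0.3, 0.8, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
ref = rng.uniform(0.3, 0.8, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))

theta_star_true, theta_ref_true = 0.12, -0.05   # true phase rotations (rad)
data = (0.6 * star * np.exp(1j * theta_star_true)
        + 0.4 * ref * np.exp(1j * theta_ref_true))

def chi2(ts, tr):
    """Sum of squared residuals between model and data whirls."""
    model = 0.6 * star * np.exp(1j * ts) + 0.4 * ref * np.exp(1j * tr)
    return float(np.sum(np.abs(data - model) ** 2))

grid = np.linspace(-0.2, 0.2, 81)               # 5 mrad steps
best = min((chi2(ts, tr), ts, tr) for ts in grid for tr in grid)
theta_star_fit, theta_ref_fit = best[1], best[2]
```

The difference between the fitted star and reference rotations is what would carry the differential Doppler information in a real measurement.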
Following equation 8, the complex visibility measured at detector channel for the two templates, and , and the combined star/reference data, , can be written exactly as:
The key lies in expressing equation 18 in terms of equations 16 and 17. This is made difficult by the convolutions, which appear to require knowledge of the template spectra at all possible values of the delay in order to be evaluated; the nature of the DFDI is such that we measure at only one value, . An approximation can be used to address this problem, which is described below.
It is possible to rewrite the input spectrum as:
where is a scaling constant to allow for differences in total flux level between the templates and data, and is a constant over the width of the response function. If we assume either or or both , then the ‘cross-talk’ term, , can be neglected. Since and essentially represent line depths, this means that we are assuming either very shallow lines, or no significant overlap between lines in the two different spectra. Keeping the cross-talk term in place for now for completeness, however, we can continue, substituting equation 19 into the first expression on the right-hand side of equation 18:
The factor has canceled because it is constant over the width of the response function, and therefore can be taken outside the Fourier transforms. The denominator of this equation represents a normalization, corresponding to the total flux in channel on the detector. The term in the numerator is due to the interferometer comb, since if white light is passed through the instrument, then , and the cross talk term vanishes. We are then left with:
which describes the interferometer comb. As expected, the properties of the comb are determined purely by the response function, as discussed in section 2.4. There the comb was described first for a delta-function response function, and then for a top hat; the equation here represents the generalization to any shape of response function.
where the scalar quantities and are given by:
Hence we see that we can now represent the combined star/reference data in terms of a linear combination of the measured star and reference templates, along with an error term.
The fraction on the right in equation 22 contains two terms in the numerator, the comb term, , and a cross talk term, . It is in principle possible to arrange the instrument such that at delay the interferometer comb has zero visibility, by choosing the delay and slit width so that is at a zero point of (see section 2.4). Alternatively, it is possible to low-pass Fourier filter the data image before measuring the whirls, essentially simulating a lower spectrograph resolution. In either case, we assume that . If we now also neglect all the cross talk terms following from equation 19, we finally have the whirl addition approximation, which we can write as
and represent scaling factors in the absolute visibilities of the two templates. In the case that we take our normalization functions (, and ) to be continuum normalization functions, then remembering that the evaluation of a Fourier transform at represents the total integrated area under the function, we can try to gain a handle on the expected sizes of these scaling factors. To the extent that the total area under and is not much less than that under (i.e. that the area in discrete absorption lines is small, or and , where is a representative width of the response function), equation 23 implies that . This can easily be seen by rewriting in terms of and alone: we can then assume all the terms , , — the last because both and are everywhere less than one by definition and so must always be even smaller than either — and we find we are then left with , and likewise for .
As far as the addition approximation holds good, and to the extent that and are approximately constant across all channels , it is then a simple matter to allow for Doppler and instrument drift by allowing the template whirls to rotate in phase and translate in the dispersion direction as a function of ; allowing and to vary as free parameters as well, we can minimize in the residuals to find the best fit solution compared to the measured data for the complete ensemble of wavelength channels. The difference between the phase rotation of the star and that of the iodine (remembering to account for wavelength dependence as necessary) yields the intrinsic differential stellar Doppler shift, while the shifts in the dispersion direction allow for Doppler shift of the stellar lines and any instrumental image drift on the detector.
By these definitions, however, there is in fact little reason to assume that and should be constant from channel to channel. Furthermore, an iodine cell reference typically absorbs a total of of the incident light, so that the assumption of small area within the absorption lines is not necessarily robust across the whole spectrum. Inspecting the terms in a little more detail, we can recast them, rewriting equation 23 as:
and likewise for so that we have:
The terms are now written in terms of measurable quantities, namely the total fluxes in each channel for the templates and the data. We also see that they are dependent on the definition of the functions , and . Continuum normalization functions could be determined by simply fitting a smooth continuum function to the measured fluxes. There is, however, nothing in the preceding analysis that requires that , and be continuum functions. Defining them as such allows for an intuitive way to visualize the effect of absorption lines, but they can in fact be any function, subject only to our requirement that the fractional deviations of the spectra from these functions (as represented by and ) remain small, so that the cross-talk term also remains small. It is arguably more appropriate to define the functions to represent the mean flux across each of their respective wavelength channels: in this case we see that and simplify immediately to exactly unity, independent of wavelength channel, so that they drop out of equations 22 and 24. The difference is absorbed in the cross-talk term through its dependence on and , which in turn are also dependent on and respectively. Written in this way, the whole of the addition approximation error is included in the single cross-talk term, , in equation 22.
We now have an approximate formalism for solving for stellar Doppler shifts from combined star/reference data, where the reference spectrum multiplies the stellar spectrum. The above analysis is only useful, however, insofar as the approximation that the cross-talk term, , is very small holds well. As it stands, this approximation is in fact not accurate enough for exoplanet searches: in section 3.3.3, we derive an estimate of the errors resulting from the approximation, and find that systematics as large as or more can arise. Clearly this cannot be neglected. Approaches to correcting or avoiding the error are discussed in section 4.
2.6.2 An Alternative: Combined-beam Reference
One possible solution to the problem of the addition approximation is to actually physically superpose a reference spectrum on top of the stellar target spectrum, for example by splicing two input fibers into one, one coming from the telescope and one from the reference lamp. In this case, the two spectra now combine additively instead of multiplicatively. We can then write:
where and are scaling factors to allow for flux differences between the templates and data (note that two such factors are now required). Once again, following equation 8 we can now write:
where we define:
We see that we now have an exact expression for , with the difference being that we now need to take into account the flux scaling factors and , where previously the flux scaling factor had canceled.
There is also a constraint on the visibility scaling constants and . Since the Fourier transforms at represent total fluxes within the channel, flux conservation means that we can write:
Dividing through by the flux in the combined star/iodine data, , and substituting the visibility scaling constants, we find:
As before, we can solve for phase rotation and dispersion shift by minimization, this time additionally solving for the two flux scaling constants. It is interesting to note that if we multiply both sides of equation 29 by the denominator, (which represents the total flux along the channel in the combined data), we essentially obtain an expression which is a summation of flux visibility terms. Since visibility is defined as , where and are the maximum and minimum fringe intensities, multiplying by the total flux in the channel gives a quantity equal to the amplitude of the fringe. Hence equation 29 is really just a summation of fringe amplitudes, exactly as we expect when the two input spectra are combined additively: the resulting image on the detector should simply be a direct flux summation of the respective images that would be obtained individually.
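This amplitude-summation picture is easy to verify numerically; the fluxes and visibilities below are arbitrary illustrative values:

```python
import numpy as np

# Numerical check of the amplitude-summation picture for an additively
# combined reference: total flux times complex visibility adds
# vectorially.  Fluxes and visibilities are arbitrary illustrative values.
F_star, F_ref = 3000.0, 1000.0           # component fluxes in one channel
g_star = 0.5 * np.exp(1j * 0.3)          # star complex visibility
g_ref = 0.7 * np.exp(1j * 1.1)           # reference complex visibility

F_data = F_star + F_ref                  # flux conservation in the channel
g_data = (F_star * g_star + F_ref * g_ref) / F_data

# Fringe amplitude of the combined data equals the vector sum of the
# component fringe amplitudes.
amp_data = F_data * abs(g_data)
amp_sum = abs(F_star * g_star + F_ref * g_ref)
```

Note that because the component fringes are not in phase, the combined visibility is lower than a flux-weighted mean of the component visibilities would suggest; only the complex (vector) sum is conserved.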
3 Sources of Error
Here we provide derivations of some useful formulae for estimating the errors from certain sources for which we have been able to find analytical approaches. These include photon errors; additive spectral contamination errors, such as moonlight background, crowded targets, etc.; and multiplicative fringe-visibility contamination errors, which include in particular the cross-talk error due to the whirl addition approximation for in-beam absorption reference sources, but which can also be applied to other effects such as residual interferometer comb (again, in the case of an in-beam reference). The latter formulae are potentially applicable to a number of different error sources, and all are likely to be useful for any implementation of a DFDI instrument.
Since this is primarily a theory paper, we do not attempt to provide a comprehensive accounting of error sources: many are instrument implementation specific, or data reduction pipeline specific, and better suited to empirical or semi-empirical assessment through simulations and experimentation. Such work is still ongoing with the ET project. For more complete discussion of specific errors in the ET project, we point the reader to upcoming MARVELS publications on the instrument (J. Ge et al. 2010, in preparation) and pipeline (B. Lee et al. 2010, in preparation); more detailed discussions of errors from earlier ET work can also be found in van Eyken (2007) and Mahadevan (2006). Table 1 provides a summary of the examples of applications of the error formulae provided in the text.
| Noise Source | Subsection | Approx. Magnitude (m/s) |
| Photon shot noise – multiplied ref.^a | 3.1.1 | 3.2 |
| Photon shot noise – added ref.^a | 3.1.2 | 3.6 |
| Photon shot noise – separate ref.^a | 3.1.3 | 2.9 |
| Residual interferometer comb^b | 3.3.2 | 9 |
| Addition approximation^b | 3.3.3 | 50 |

^a Assuming iodine reference – see text for improvements using ThAr in the added-reference case.
^b Applies only for multiplied reference spectrum.
Note. – Error magnitudes as calculated in the text are listed: these are examples for illustration only, and each is highly variable and dependent on specific circumstances. See the text for assumptions made in each case.
3.1 Photon Errors
The errors due to photon shot noise provide an important baseline for any instrument. They indicate the absolute limit to the achievable precision, and drive throughput and (for DFDI instruments) fringe visibility considerations in the optical design. It is entirely reasonable to conceive of a photon-limited DFDI-type instrument, and even where photon noise is dominated by other effects in the very high precision regime, it inevitably becomes significant at the faint end of the stellar target sample. For the MARVELS/Keck ET, geared toward moderate-precision surveys of fainter targets, other errors dominate the instrument requirements error budget at the brightest () end of the target range, but photon noise becomes a significant part of the error at fainter levels (down to , of a total – see Ge et al. 2009). In the high precision, high-flux regime (e.g., a planned –level cross-dispersed DFDI upgrade for the KPNO ET), the photon error is also important as it indicates the level below which other sources of systematic and random error must be driven.
The photon error in the phase measurement (and hence velocity measurement) from a single channel can be estimated following Ge (2002). This gives essentially
where is the error in velocity due to channel alone, is the speed of light, is the wavelength, is the optical delay, is the visibility of the fringe, is the total flux in the channel, and is the usual phase-velocity scaling factor (equation 11, ignoring the negative sign since we are interested only in the magnitude). (The small difference in the numerical factor in the denominator of equation 34 ( versus 4) is due to using the rms slope of the fringe, rather than the mean absolute slope used in Ge (2002); Monte Carlo simulations of sinusoid fits suggest that the rms slope gives more accurate results.) The terms following the represent the error in phase due to the photon noise, . Following a similar derivation, it is straightforward to show that the error in visibility due to photon noise, , is given by
and hence, assuming independent errors, there is a useful simple relationship between the errors in phase and visibility:
As a general rule, we can see from equation 34 that precision goes with the inverse root of flux, as one would expect, and also as the inverse of visibility: higher flux and/or higher visibility mean better precision. From this formula we can derive the photon errors in the final differential RV for different calibration scenarios.
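This scaling can be checked with a quick Monte Carlo in the spirit of the sinusoid-fit simulations mentioned above. The 32-pixel fringe sampling and the Fourier-component phase estimator are assumptions of this sketch, not the instrument's actual reduction:

```python
import numpy as np

# Monte Carlo sketch: measure the phase of a shot-noise-limited sinusoidal
# fringe many times and check that the phase scatter goes as
# 1/(V*sqrt(F)).  Fringe sampling, flux levels, and the phase estimator
# are invented for illustration.
rng = np.random.default_rng(2)

def phase_scatter(F_total, V, n_pix=32, n_trials=400):
    x = np.arange(n_pix)
    model = (F_total / n_pix) * (1.0 + V * np.cos(2 * np.pi * x / n_pix))
    phases = []
    for _ in range(n_trials):
        counts = rng.poisson(model)          # photon shot noise
        # Phase of the fundamental Fourier component of the fringe.
        coeff = np.sum(counts * np.exp(-2j * np.pi * x / n_pix))
        phases.append(np.angle(coeff))
    return float(np.std(phases))

s_base = phase_scatter(1.0e5, 0.5)
s_flux = phase_scatter(4.0e5, 0.5)   # 4x the flux: roughly half the error
s_vis = phase_scatter(1.0e5, 0.25)   # half the visibility: roughly double
```

Quadrupling the flux roughly halves the phase scatter, and halving the visibility roughly doubles it, as the inverse-root-flux and inverse-visibility dependence predicts.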
For simplicity in the following formulae, we take to be constant, taking the wavelength value at the center of the spectrum, since it varies by only from one end of the spectrum to the other in the current ET instruments. For an instrument with a very large bandwidth, however, it may be necessary to consider it properly as a function of channel, . This simply means it cannot be taken outside the brackets as in the following derivations, but otherwise the formalism is the same.
3.1.1 Photon Error for Multiplied Reference
To calculate the expected error in an RV measurement for a single data frame, assuming an instrument configuration where an iodine or other reference spectrum multiplies the input stellar spectrum, we consider the resulting data spectrum as consisting of two components, a star component, and an iodine component. The calculated phase shift due to intrinsic target Doppler shift, is given by:
where here represents a weighted mean over all , and represent the phases for the star and iodine components of the combined star/iodine data (‘measurement’) frame, and and are the phases measured in the separate pure star and iodine templates. For convenience, we immediately map these phases to corresponding ‘velocity’ measurements by multiplying both sides by to give a velocity shift, (though with the caveat that a velocity measurement of a single channel in a single spectrum has no physical meaning in itself until it is differenced with another spectrum):
Using with corresponding subscripts to represent the various errors in this equation, we can expect a total photon error in to be given by:
where represents the standard statistical error in a weighted mean:
In practice, the two template terms in equation 39 are neglected, for two reasons. The first is simply because in general the templates will have significantly higher flux than the data frame: the iodine template can be taken with arbitrarily high flux since it is obtained with a quartz lamp as a source; and the stellar template is usually deliberately taken with higher flux than the data so that it does not compromise the entire data set. The second reason is a little more subtle. All RV measurements with this kind of instrument are differential, measured relative to the two templates which effectively set the zero point of the measurements for the star and iodine, as seen in equation 38. Since this ‘zero point’ is the same for every RV measurement, any error in the zero point will not contribute to the rms scatter in a set of measurements which uses the same templates.
This last statement holds true to a point: accuracy in the templates is still needed in order to disentangle the stellar and iodine components of the combined data. From simulations of ET fringing spectra, we find, for example, that for a multiplied iodine reference, using a G0 or G2V stellar template in place of a G8V template yields an rms error of over large () differential velocity shifts. (Depending on the precision required, this points toward the interesting possibility of using templates of different stars from the target star: this could allow, for example, for higher S/N templates when observing very faint targets, or perhaps for disentangling the signals from double-lined spectroscopic binaries.)
Since photon errors go as , the remaining terms, and , can be estimated by scaling the respective template errors (which, unlike the measurement component errors, can be determined directly from equation 34) by the flux difference between the templates and data, giving:
where the error combination function, , is given by equation 40.
Hence we have a quadrature summation of the photon errors due to the star and reference components of the combined star/iodine data, each being the weighted expected error in velocity across the respective template spectra scaled to the flux level of the data. As one would expect, the error goes with the inverse root of the mean flux in the data spectrum, ; the error in each of the two components will also scale as the inverse of the visibility in the respective fringing spectra. Written in this form, the terms need only be calculated once, representing photon errors for each template: they then can be conveniently scaled and combined to give the error in each data frame for the source.
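As a minimal sketch of this scaling-and-quadrature combination (with invented template errors and fluxes):

```python
import numpy as np

# Sketch: photon errors go as 1/sqrt(flux), so each template velocity
# error is scaled to the data flux level and the star and reference
# components are then summed in quadrature.  All numbers are invented
# for illustration.
def data_photon_error(sig_star_tpl, F_star_tpl, sig_ref_tpl, F_ref_tpl, F_data):
    sig_star = sig_star_tpl * np.sqrt(F_star_tpl / F_data)  # star component
    sig_ref = sig_ref_tpl * np.sqrt(F_ref_tpl / F_data)     # reference component
    return float(np.hypot(sig_star, sig_ref))               # quadrature sum

# Hypothetical 2 m/s template errors, templates taken at twice the data flux:
sigma_total = data_photon_error(2.0, 2.0e6, 2.0, 2.0e6, 1.0e6)
```

As the formula implies, the per-template errors need only be computed once and can then be rescaled to each data frame's flux.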
We note that these formulae for the photon limit are for the values expected given the fringe visibility that was obtained. Various instrument effects – for example defocus, or a non-optimal delay for the stellar line width – can reduce the visibility from its optimum and hence degrade this photon-limited precision.
This has been the formalism employed in calculating the photon error for the Kitt Peak single-object ET for operations with an in-beam iodine cell. As an example, an observation of 36 UMa taken at very high flux during the May 2007 KPNO 2.1m ET run (stable, , exposure) gives mean signal/noise ratios (S/N) per pixel for star template, iodine template, and data frame of 222, 146 and 179 respectively. These values give photon errors for the star and iodine components of 2.8 and respectively, which when added in quadrature give a total photon error of . The KPNO instrument design is such that both output beams from the Michelson interferometer are recovered, and this result is for only one of the two beams. Averaging over the two beams therefore gives a further improvement of $\sqrt{2}$ in photon precision; for simplicity, however, and for comparison with the following sections, we consider only one beam here. It is interesting to note that the error due to the iodine reference is in fact comparable in magnitude to that due to the star, since the signal in the iodine component of the data frame is intrinsically limited by the magnitude of the target being observed. Figure 4 shows a comparison of the actual rms (on the very short term) with the calculated photon errors using this formalism, obtained with the KPNO ET on the bright stable star 36 UMa over a total of hrs, showing good agreement. The preceding example calculation is based on a data point at the high-flux end of this data set. (For the purposes of the plot, the two interferometer output beams are averaged.)
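The gain from averaging the two interferometer outputs follows from standard error propagation for the mean of two independent measurements; a minimal illustration (hypothetical numbers):

```python
import math

def two_beam_error(sigma1, sigma2):
    """Photon error of the average of two independent beam
    measurements: add in quadrature, divide by the number of beams."""
    return math.hypot(sigma1, sigma2) / 2.0

# For equal per-beam errors the combined error is sigma / sqrt(2),
# i.e. a sqrt(2) improvement in precision over a single beam.
```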
These calculations assume that the flux ratio terms remain the same from channel to channel, so that an overall mean scaling can be applied. This is not strictly accurate (e.g., if line depths are very deep and broad, or the pure star and pure iodine continuum functions are very different), but is taken to work to a reasonable approximation, and seems to correspond quite well with real results. In the event that a more accurate calculation is needed, however, it is a simple enough matter to introduce channel-dependent flux ratios for each element within the summations.
3.1.2 Photon Error for Added Reference Spectrum
In the case of the reference spectrum being combined additively, rather than multiplicatively, the photon errors must be calculated differently. However, we can follow a somewhat similar approach. Again, we consider the errors due to star and iodine components of the combined star/iodine data, and neglect the errors due to the templates, so that, as for equation 41:
where is again defined as in equation 40. The individual components and must be reevaluated, however, since the photon noise from the two separate sources will now combine additively (for example, if one of the sources is considerably brighter than the second, its photon noise will dominate over the signal in the second). We can think in terms of an effective visibility for each of the two components in the combined data. Remembering that fringe amplitude is given by the product of the visibility and the mean flux in the fringe, we can write
where and are the fringe visibilities for channel in the star and iodine templates respectively; and , , and are the mean fluxes across the channel for the star template, iodine template, and data (measurement) frame respectively. and are wavelength-independent scaling factors that allow for flux differences between the templates and the respective data components, as in section 2.6.2. Hence we find
where and are the total fluxes across the channel for the star and iodine templates, and is the total flux in the channel for the data frame. Substituting these effective visibilities in equation 34 gives:
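In code form, the effective visibility of each component follows from dividing that component's fringe amplitude (its template visibility times its scaled mean flux) by the total mean flux of the combined data; a sketch, with all names hypothetical:

```python
def effective_visibility(v_template, mean_flux_template, k_scale,
                         mean_flux_data):
    """Effective visibility of one component of an additively combined
    spectrum: its fringe amplitude (template visibility times scaled
    component mean flux) over the total mean flux of the data frame."""
    amplitude = v_template * k_scale * mean_flux_template
    return amplitude / mean_flux_data
```

For example, a component with template visibility 0.4 that contributes half the total flux of the combined frame has an effective visibility of 0.2 in the data.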
Using these we can now evaluate equation 44 to obtain an estimate of the photon limiting error, so that:
where, due to flux conservation, and are subject to the constraint:
Again we have found a quadrature summation of errors due to the star and reference components, scaled to match the respective component fluxes in the data, very similar to equation 43. However, in this case, the scaling factors, and must be determined as parameters during the velocity shift solution, and and represent the fluxes in the star and iodine components of the data, respectively.
This time, we do not attempt to assume channel-independent flux ratios. This is because for additively combined references it becomes possible to consider using emission spectra (e.g., a ThAr lamp) as the reference, rather than the usual iodine absorption spectrum. Clearly the flux ratio between data and reference template frames is very different for regions where there are no reference emission lines compared to those where emission lines are present. It is therefore not reasonable to take the flux terms outside the summation in the error combination function .
To gain a handle on the behavior of equation 48, consider only a single channel, and assume both that the source and reference visibilities are roughly equal (reasonable for star and iodine, to order of magnitude) and that the total flux remains constant. To minimize the total error, we then need only minimize a simple function of the two component fluxes. Given the constraint of equation 49, it is straightforward to show that this function is minimized when the component star and iodine fluxes are approximately equal. When either component has very small flux compared to the other, one or other of the terms in equation 48 becomes very large. Broadly speaking, then, we can see that the fluxes of star and reference need to be balanced in order to minimize photon error.
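Numerically, under the simplifying assumptions above (equal visibilities, fixed total flux, and per-channel errors going as the inverse of the effective visibility times the root of the flux), the squared error is proportional to $1/x^2 + 1/(1-x)^2$, where $x$ is the stellar fraction of the total flux; this proxy function is our assumption here, but a quick scan confirms the minimum sits at equal fluxes:

```python
def error_squared_proxy(x):
    """Proxy for the total squared photon error as a function of the
    stellar flux fraction x (the reference fraction is 1 - x)."""
    return 1.0 / x**2 + 1.0 / (1.0 - x)**2

# Scan the open interval (0, 1): the minimum lies at x = 0.5,
# i.e. equal star and reference fluxes; the proxy diverges as
# either component's flux goes to zero.
xs = [i / 1000.0 for i in range(1, 1000)]
best = min(xs, key=error_squared_proxy)
```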
In practice, rather than the total flux being constant, it is of more interest to hold the stellar component constant and vary the reference component to find the optimum; using real spectra, and allowing differing visibilities, the balance point becomes a little skewed from unity. The exact optimal balance point depends on the spectra in question. Allowing for the gain in optical throughput from losing the absorption in the gas cell reference, this equation at its balance point generally gives photon errors on a similar level to the photon errors for a multiplied iodine reference, if we use iodine spectra as references in both cases (i.e., tungsten-illuminated iodine in the added-reference scheme). Using the same observations as in section 3.1.1 to calculate error estimates as if the spectra had been added, and assuming that the flux level of the star in the template and the hypothetical combined observation is the same, we find an optimal ratio of iodine to star flux of 0.96 and a total photon error of , comprising iodine and star component errors of and , compared to the total error of for multiplied spectra. That the two are similar is not surprising: adding a reference spectrum to the stellar spectrum at a matching flux level will approximately halve the fringe visibility and hence double the error, but will also double the flux, reducing the error by $\sqrt{2}$, for a net increase of $\sqrt{2}$ in the error size. This coincidentally matches the increase in error size for in-beam-iodine calibration due to the fact that the iodine typically absorbs about half of the incident light. (The slight mismatch in the figures calculated is due to the fact that in the multiplicative case, the combined data frame actually had particularly high flux, probably because of better sky transparency at the time the frame was taken than when the template was taken.)
The above argument holds true for iodine, since its continuum shape and fringe visibilities are broadly similar to those of the stellar spectrum. If we instead use a ThAr emission spectrum for the added reference, the performance appears to be rather better than in the in-beam iodine case: the same calculations as above with a ThAr spectrum replacing the iodine spectrum yield a total photon error of , with star and ThAr components of and respectively (with an optimum ratio of mean fluxes of 0.26 – now substantially different because of the very different nature of an emission spectrum). ThAr also shows a weaker dependence on relative flux level, which gives it an advantage in terms of practical application, since less effort would need to be expended on matching its brightness to each target observation. This is likely because most of the Doppler information in the ThAr is concentrated in a few bright lines, whereas it is spread more broadly across the stellar spectrum. Where the ThAr lines are strong, the stellar Doppler information is largely lost to the added photon noise; however, since there are relatively few such lines, the impact on the total stellar Doppler information is small, and increasing the ThAr flux does not make as large a difference as it does for an iodine spectrum. We note, however, that at very high flux levels, saturation of the brightest ThAr emission lines is likely to complicate this analysis somewhat.
Added-beam reference calibration provides one possible solution to the reference addition approximation error discussed in section 3.3.3, and the discussion here should provide a formalism for calculating the photon errors. Such a calibration approach, however, has not yet been attempted within the ET program, although basic simulations bear out these calculations.
3.1.3 Photon Error with a Separate Reference
Finally we consider the simple case where there is no simultaneous common-path reference, but rather a reference separated either spatially or in time. Once again, we find a weighted mean velocity shift between a (now pure) star measurement and some reference star template, and likewise between a pure reference spectrum measurement and a corresponding template. (The reference need not be iodine, but we retain the ‘i’ subscript notation for consistency.) The results are differenced to obtain a corrected intrinsic stellar Doppler shift. If we neglect the template errors as before, then the photon errors for the data frame are found again similarly to equation 41:
The difference is that here we use subscripts “s” and “i,” rather than “sm” and “im,” to indicate that we are no longer looking at components of a combined reference/star measurement frame, but at pure star and pure reference measurements respectively. Again substituting the basic photon error equation, 34, we obtain:
The form of the equation is now much simpler, since no flux or visibility scaling is required. Errors again go as the inverse of the visibilities of star and iodine, and as the inverse root of the flux. It may also be the case (indeed, observations should be taken such that it is the case) that the reference spectrum has significantly higher S/N, and therefore its photon errors can be neglected, so that only the first error combination term in the square root remains.
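For the separate-reference case the bookkeeping is simplest of all; a sketch under the stated scalings (hypothetical names, with the per-channel error going as the inverse of the visibility and the inverse root of the flux, per equation 34):

```python
import math

def channel_error(visibility, flux, scale=1.0):
    """Per-channel photon-limited velocity error: inverse in the
    fringe visibility, inverse square root in the flux."""
    return scale / (visibility * math.sqrt(flux))

def separate_reference_error(sigma_star, sigma_ref=None):
    """Quadrature combination of pure-star and pure-reference errors;
    a sufficiently high-S/N reference may simply be neglected."""
    if sigma_ref is None:
        return sigma_star
    return math.hypot(sigma_star, sigma_ref)
```

Note that no flux or visibility cross-scaling between star and reference is needed here, since the two spectra are measured separately.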
Taking our same data and templates once again, we can calculate a hypothetical photon error for comparison: this time, using iodine as a separate reference yields a total error of comprising star and iodine components of and respectively (note that the iodine error level here is relatively high in comparison to the star component: this is purely because of the exceptionally high flux from the star in these particular observations); using ThAr instead yields a total error of , with star and ThAr components of and (though again we have not included the effects of saturation in the ThAr calculation, which may increase the ThAr errors somewhat).
This approach is appropriate to the MARVELS/Keck ET, where pure star science exposures are bracketed in time with pure iodine reference exposures, and the instrument is highly stabilized in both pressure and temperature. The baseline design requirements anticipate a photon error of at , and at (see section 4.4 and Ge et al. 2009).
3.2 Additive Contaminating Spectra
It is often useful to be able to calculate a rough estimate of the errors due to contaminating additive background spectra. We derive a formalism for doing so here. This formalism will enable us to calculate the effect of background moonlight contamination, contaminating background stars, or double-lined spectroscopic binaries, for example. In addition, we will then be able to extend the formalism to treat multiplicative (i.e., flux independent) contaminants, such as any residual unfiltered comb presence or the iodine/star cross-talk term that causes the reference addition approximation error, and try to assess their relative significance.
Figure 5 shows a fringe along one detector column (in the slit direction) due to the target source alone, with fringe amplitude , mean flux , and phase . For simplicity we assume no iodine fiducial