Prospects for Measuring Abundances of Elements with
Low-resolution Stellar Spectra
Understanding the evolution of the Milky Way calls for the precise abundance determination of many elements in many stars. A common perception is that deriving more than a few elemental abundances ([Fe/H], [α/Fe], perhaps [C/H], [N/H]) requires medium-to-high spectral resolution, R ≳ 10,000, mostly to overcome the effects of line blending. In recent work (rix16; tin16) we presented an efficient and practical way to model the full stellar spectrum, even when fitting a large number of stellar labels simultaneously. In this paper we quantify to what precision the abundances of many different elements can be recovered, as a function of spectroscopic resolution and wavelength range. In the limit of perfect spectral models and spectral normalization, we show that the precision of elemental abundances is nearly independent of resolution, for a fixed exposure time and number of detector pixels; low-resolution spectra simply afford much higher S/N per pixel and generally larger wavelength range in a single setting. We also show that estimates of most stellar labels are not strongly correlated with one another once R ≳ 1000. Modest errors in the line spread function, as well as small radial velocity errors, do not affect these conclusions, and data-driven models indicate that spectral (continuum) normalization can be achieved well enough in practice. These results, to be confirmed with an analysis of observed low-resolution data, open up new possibilities for the design of large spectroscopic stellar surveys and for the re-analysis of archival low-resolution datasets.
Subject headings: methods: data analysis — stars: abundances — stars: atmospheres — techniques: spectroscopic
1. Introduction
Massively multiplexed stellar spectroscopic surveys are a central part of the current astronomy landscape, aimed at understanding stellar physics, the genesis of elements in the cosmos and the chemical/dynamical evolution of the Milky Way. This field is currently undergoing a revolution in the quality and quantity of spectra (e.g., see review from rix13): current surveys aim to collect high quality spectra for millions of stars. But these extensive datasets bring new analysis and modeling challenges. Novel approaches are emerging (e.g., nes15a; cas16; rix16; tin16) for turning these massive datasets into precise stellar labels, encompassing stellar parameters and elemental abundances of stars.
Spectral resolution, R = λ/Δλ, is a key parameter characterizing spectroscopic surveys, and the goal of this paper is to determine the resolution required to measure stellar labels to a desired precision. Traditionally, stellar spectroscopy has parsed itself into three resolution regimes: low-resolution with R ≲ 10,000, high-resolution with 10,000 ≲ R ≲ 50,000, and ultra high-resolution with R ≳ 50,000. Low-resolution (R ≈ 2000–10,000) spectroscopic surveys, such as SEGUE (yan09), RAVE (ste06), the Gaia Radial Velocity Spectrometer (RVS) (rec16), and LAMOST (luo15), have aimed at deriving fundamental stellar parameters such as Teff, log g, [Fe/H], and radial velocity. However, because essentially all stellar spectral lines are blended at low-resolution, only measurements of [Fe/H] and a few other labels such as [α/Fe], [C/H], and [N/H] have been attempted systematically (e.g., kir10; lee11; lee13; yan13; fra16). But even with a limited number of stellar labels, these surveys are crucial because they can amass the statistical samples necessary to provide a global view of the Galaxy. For example, metallicity distribution functions can be used to infer star formation histories (e.g., cas11; hay15); metallicity gradients provide a window into the global dynamical history of stars (e.g., sch09; gra15; kaw16) and the inside-out formation of the Milky Way (e.g., min14; sch16); and stellar age-metallicity-kinematic dispersion relations identify the extent to which stars become kinematically dispersed over time due to dynamical heating (e.g., mar14; aum16; gra16).
High-resolution spectroscopic surveys such as the on-going APOGEE (maj15), GALAH (des15) and Gaia-ESO (smi14) surveys are collecting stellar spectra with R ≳ 20,000. These surveys are designed to overcome the perceived shortcomings of their low-resolution counterparts, and aim to measure detailed abundances for many elements. Precise abundances for many elements are key to understanding the chemical evolution of the Milky Way, as well as stellar nucleosynthesis. For example, core-collapse supernovae from massive stars produce relative overabundances of α-capture elements (e.g., woo95; lim03), whereas type Ia supernovae produce overabundances of iron-peak elements (e.g., iwa99; see also the review by nom13). Mass loss from AGB stars adds additional complexity to the chemical evolution of the Milky Way (e.g., kar14; ven15). Using principal component analysis, tin12a demonstrated that there are at least seven pathways for galaxies to collect their metals. One goal of deriving multi-elemental abundances for many stars is to unravel the contributions from these different channels at various evolutionary stages of the Milky Way.
Ultra high-resolution spectra, with R ≳ 50,000, are the gold standard for measuring precise and accurate stellar parameters and detailed abundances. At this resolution many of the strong stellar absorption lines are unblended. However, such spectra make high demands on instrumentation and exposure time. For this reason high-resolution surveys (e.g., fis05; ben14; jof14; jof15; bre15; hei15) contain far fewer stars than medium and low-resolution surveys.
An exciting application of precise abundance measurements for many elements is the concept of chemical tagging (fre02): if stars born from the same molecular cloud share the same or very similar elemental abundances (as suggested by recent observational work; bov15; liu16), then elemental abundances can serve as a birth-tag for each star. The goal of “strong” chemical tagging is to look for stellar siblings by searching for similarities in chemical space (fre02). “Strong” chemical tagging has proven to be challenging and is yet to be realized in practice, in part because it requires a vast sample size and very precise elemental abundances (lin13; tin15a). But a weaker form of chemical tagging has been demonstrated to be viable (e.g., qui15; hog16; mar16). “Weak” chemical tagging uses precise measurements of elemental abundances to separate various groups of stars. For example, dwarf galaxies in the Milky Way are separable both from each other and from the Milky Way stellar halo in [α/Fe]–[Fe/H] space (e.g., ven04); globular cluster stars have unique abundance patterns that allow their identification in the Milky Way bulge and stellar halo (e.g., mar10; schi16); and thick disk, thin disk and halo stars can be well separated by their α-enhancement measurements (e.g., haw15; hay15).
With strong scientific motivation for precisely measured elemental abundances of many elements in many stars, it is worth revisiting the optimal survey configuration to achieve these goals. In this paper, we will demonstrate that, at a given exposure time or survey speed and for a fixed number of detector pixels, low-resolution spectra can constrain comparably many elements, and at the same precision, as high-resolution spectra. Moreover, the estimates of elemental abundances show little correlation once R ≳ 1000, even though the spectral lines are severely blended at low-resolution. These conclusions apply in the limit of very high quality spectral models, although the influence of bad pixels is smaller than often assumed. These results suggest new strategies for designing future generations of spectroscopic surveys.
This paper is structured as follows – in Section 2, we provide an overview of basic concepts and intuition related to spectra as a function of resolution, and describe how to model spectra in high dimensional space. We explore the information content of low-resolution spectra in Section 3, and in Section 4 we perform spectral fitting on synthetic spectral data with characteristics similar to the APOGEE survey. In Section 5, we discuss some implications of these results and highlight several caveats. We conclude in Section 6.
2. Basic ideas
In this section we present the basic arguments for why there need not be a significant loss of abundance information when choosing a spectroscopic survey with R ∼ 1000, instead of R ∼ 100,000. The arguments presented in this section turn out to be fairly insensitive to the detailed wavelength range chosen for any spectroscopic survey. An important caveat throughout this work is the assumption of highly accurate spectral models. All commonly used ab initio stellar spectral models (e.g., kur96; kur03; kur05; hau99; gus08) have important limitations, e.g., arising from incomplete and/or inaccurate atomic and molecular line parameters and from the assumption of 1D LTE (see also smi14). Therefore some of the results we present speak, at present, to the in-principle information content of low-resolution spectra, and may require data-driven models (nes15a; cas16) and/or improved ab initio spectral models in order to be applied to real data. Moreover, in future work we will directly test many of our conclusions by fitting models to observations of stars with spectra obtained at a variety of spectral resolutions and wavelength ranges.
2.1. The advantage of high-resolution spectra
Photospheric elemental abundances are encoded in the strengths of atomic and molecular absorption lines. The most common classical methods of measuring elemental abundances rely on the measurement of the equivalent width of absorption lines with well-known atomic parameters. Equivalent width measurements can then be placed on a curve-of-growth, given a stellar model, in order to derive the abundance of the species giving rise to the observed feature. One must also know the effective temperature and surface gravity, as these parameters have large and complicated effects on the strengths of lines. Frequently the effective temperature is determined by independent (and not necessarily self-consistent) means, e.g., by color-temperature relations (e.g., cas11), by imposing the excitation equilibrium or by first fitting the full spectrum with a subset of main stellar parameters and elemental abundances (e.g. gar16).
Photospheric lines are broadened by various processes, including pressure broadening, rotation, and macro-/micro-turbulence. These sources of line broadening, combined, are typically of the order of a few km/s, which translates to an intrinsic spectral resolution of the star of R★ = c/v ∼ 10⁵. In order to resolve the spectral lines one would therefore want to obtain spectra at R ≳ 10⁵. At lower resolution the lines blend together, at least in cool and metal-rich stars (the most common stars in most large stellar spectroscopy surveys), and it can be difficult to measure elemental abundances through equivalent widths or line profiles of individual unblended lines. The most straightforward way to make progress in such cases, while preserving the full information content of the spectra, is to self-consistently fit entire spectral regions, which is the approach taken here.
Another advantage of operating at high-resolution is that one can isolate and focus on the spectral lines whose atomic parameters are well known from laboratory work, and discard spectral regions that are not well-modeled, as a way to mitigate systematic uncertainties in the models. This is a clear advantage of working at high-resolution (but see also cze15), although the extent to which this issue can also be mitigated at low-resolution has not been thoroughly explored.
Finally, there are very subtle effects in the spectra of stars that would be entirely invisible at low-resolution, such as isotopic ratios. Examples include the ²⁴Mg/²⁵Mg/²⁶Mg isotopic ratios, which induce small wavelength shifts in the MgH spectral lines, and the ⁶Li/⁷Li isotopic ratio (lind13), which requires 3D NLTE models to properly model the line shapes and derive reliable values.
2.2. Quantifying information content with gradient spectra
In order to understand why low-resolution spectroscopy could possibly perform comparably well, we need a metric for the theoretically achievable uncertainties for each stellar label. A compact but mathematically rigorous way to do this is the Cramér–Rao bound (cra45; rao45), which we introduced in this particular context in tin16. How well we can estimate a stellar label depends on two things: (a) how much a spectrum varies as we vary the stellar label, i.e., the response function of a spectrum, and (b) the flux uncertainties at each wavelength pixel and their covariances. Let C be the covariance matrix of the normalized flux f_λ. The Cramér–Rao bound predicts that the covariance matrix K of the stellar labels ℓ_j can be calculated via

K⁻¹_jk = Σ_{λ,λ'} (∂f_λ/∂ℓ_j) (C⁻¹)_{λλ'} (∂f_λ'/∂ℓ_k).    (Eq. 1)
For each label ℓ_j, the “gradient spectrum” ∂f_λ/∂ℓ_j measures the variation of the spectral flux at wavelength pixel λ with label ℓ_j. Eq. 1 then essentially takes the quadrature sum of the variations across different wavelength pixels, weighted by the uncertainties of the observed flux. If the spectral response to label changes is steep, we have large values for ∂f_λ/∂ℓ_j and hence small values for K_jj – more precise measurements. Similarly, if the observed spectra have a higher S/N, the entries of C will be smaller, which will also result in smaller values for K_jj. The sum extends over the available pixels in the spectrum. Throughout this paper we assume that the wavelength sampling is always a fixed number of pixels per resolution element (we adopt a factor of 3, following the sampling of the APOGEE survey spectra; note that for most high-resolution echelle spectrographs the wavelength sampling and spectral resolution are not necessarily directly connected in this way). We simplify the calculation in Eq. 1 by assuming that there are no correlations between adjacent wavelength points, i.e., that C is a diagonal matrix. If the wavelength points are correlated, this is analogous to having fewer uncorrelated wavelength points. The absolute information content will decrease, but as we will show with different survey wavelength coverages (and hence different numbers of uncorrelated wavelength points), the relative information content between high- and low-resolution will not change much. Our conclusions, which focus only on the relative information content, remain robust.
The covariance matrix K of the stellar labels and the gradient spectra are the quantities on which we base the majority of our results in this study. Not only is this a mathematically robust way to represent how much spectral information there is in the spectra, it also predicts which elements can be detected above a given significance threshold and the covariance between different stellar label estimates. Clearly, the calculation of K depends on the chosen resolution and wavelength range. As we vary the resolution, we will be summing contributions from different wavelength pixels, and the gradient spectra themselves will also change with resolution. In short, in order to evaluate how low-resolution spectroscopy performs compared to high-resolution spectroscopy, we will study how K varies as a function of resolution, spectral type, and wavelength range.
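For a diagonal flux covariance (the simplification adopted above), Eq. 1 reduces to a weighted sum over pixels. The following is a minimal numerical sketch of this calculation; the array shapes, names, and toy numbers are ours, not taken from any survey pipeline:

```python
import numpy as np

def cramer_rao_covariance(grad_spectra, flux_var):
    """Label covariance K from Eq. 1, assuming a diagonal flux covariance C.

    grad_spectra : (n_labels, n_pix) array of gradient spectra df/dl
    flux_var     : (n_pix,) per-pixel flux variances (diagonal of C)
    """
    G = np.asarray(grad_spectra)
    Cinv = 1.0 / np.asarray(flux_var)
    fisher = (G * Cinv) @ G.T          # K^-1, the Fisher information matrix
    return np.linalg.inv(fisher)       # K, covariance of the label estimates

# toy example: two labels, 1000 pixels, S/N = 100 per pixel
rng = np.random.default_rng(0)
G = rng.normal(scale=0.01, size=(2, 1000))
K = cramer_rao_covariance(G, np.full(1000, 0.01**2))
sigma = np.sqrt(np.diag(K))            # marginalized 1-sigma uncertainties
```

Halving the per-pixel flux errors (doubling S/N) halves sigma — the linear S/N scaling invoked repeatedly below.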
2.3. Many stellar labels from low-resolution spectra
We will now examine, at first qualitatively, how the uncertainties in stellar label estimates vary as a function of resolution in order to gain some basic intuition. It is qualitatively clear that for the same wavelength range and the same S/N per resolution element, a high-resolution spectrum must contain much more information than a low-resolution spectrum. But for spectroscopic surveys there are two important boundary conditions that need to be taken into account for a sensible comparison of spectra taken at different resolutions: the first is the exposure time per object, which sets the survey speed; the second is the number of available detector pixels onto which each spectrum can be mapped. As detector “real estate” is an important boundary condition in highly multiplexed spectroscopic surveys, higher resolution generally forces the choice of a proportionally smaller wavelength range (assuming a fixed number of pixels per resolution element). As a consequence, the spectra at lower resolution will have higher S/N per resolution element and a larger wavelength range (with a greater chance of enclosing key diagnostic lines of different elements). Both effects work in favor of the low-resolution spectra. We can now evaluate analytically how the label uncertainties change as we lower the resolution:
The rms depths of narrow spectral lines decrease inversely proportional to the width of the line spread function (LSF) kernel. As a result, the rms values of the gradient spectra scale as ∝ R. Seen another way, the equivalent width (the total integral) of a spectral line is constant at different resolutions, but the size of a resolution element is proportional to 1/R. Therefore, the rms depth per wavelength pixel sampled at each resolution element must scale as ∝ R so that the equivalent width (the sum of resolution element width × gradient) is constant.
On the other hand, for fixed exposure time and object flux, the S/N per pixel will improve as ∝ R^(−1/2), due to Poisson statistics.
Furthermore, for a given number of detector pixels, the wavelength range scales as ∝ 1/R. Assuming the spectral lines are evenly distributed, we will collect proportionally more spectral lines at low-resolution. As information adds in quadrature, having N times more lines improves the information content by a factor of N and the precision by √N; the precision therefore improves as ∝ R^(−1/2).
These simple arguments show that, to first order, low-resolution and high-resolution spectra should achieve the same uncertainty for stellar labels, given the sensible boundary condition of equal survey speed: the three scalings combine as R × R^(−1/2) × R^(−1/2) = constant. In fact, provided that we have robust models and the ability to fit all stellar labels simultaneously, the uncertainty should be entirely independent of resolution, at least so long as the assumptions above hold. We will show with simulations in Section 3 that this insensitivity to resolution holds over a perhaps surprisingly large range in R. However, in practice the label uncertainty is not entirely independent of resolution, especially at the highest and lowest resolutions:
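The cancellation of the three scalings can be made concrete with a toy calculation. The sketch below hard-codes the assumed scalings (gradient depth ∝ R, S/N per pixel ∝ R^(−1/2), wavelength range ∝ 1/R at fixed detector pixels) for a single hypothetical label; all normalizations and parameter names are illustrative:

```python
import numpy as np

def label_uncertainty(R, n_detector_pix=8000, pix_per_res=3,
                      line_density=5.0, depth_ref=0.5, snr_ref=20.0):
    """Toy Cramér–Rao uncertainty of one label versus resolution R,
    under the three idealized scalings argued in the text."""
    R0 = 300_000.0                                   # reference resolution
    # wavelength range (nm): fixed pixel budget, so range scales as 1/R
    wave_range = (n_detector_pix / pix_per_res) * (500.0 / R0) * (R0 / R)
    n_lines = line_density * wave_range              # lines in the range
    depth = depth_ref * (R / R0)                     # per-pixel gradient, prop. to R
    snr = snr_ref * np.sqrt(R0 / R)                  # per-pixel S/N, prop. to R^-1/2
    fisher = n_lines * pix_per_res * (depth * snr) ** 2
    return 1.0 / np.sqrt(fisher)

sigmas = [label_uncertainty(R) for R in (2000, 6000, 24000, 100000)]
```

Under these idealized assumptions the four values coincide exactly; the list of caveats that follows is precisely what this toy model leaves out.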
For elements that have only a few spectral lines, expanding the wavelength range does not necessarily generate more information: the newly included wavelength range might be devoid of spectral lines for those elements. So, for low-resolution spectra, we might lose up to the factor of R^(−1/2) improvement expected from the wider wavelength coverage. However, including a wider wavelength range also implies that low-resolution spectra can detect more elements, some of which might have no detectable lines in high-resolution spectra with their necessarily narrower wavelength range.
Once the LSF at very high-resolution becomes narrower than the intrinsic broadening of most lines, further increasing the resolution does not improve the gradient spectra. Therefore, for a fixed exposure time the information content of an observed spectrum will decrease at higher resolution. However, such high-resolution will in some cases be critically important for dealing with systematic issues, e.g., identifying and removing telluric features, which are intrinsically narrower than most stellar lines.
At very low-resolution (e.g., as for Gaia’s BP/RP spectra), estimates for stellar labels become more correlated. Mathematically, the covariance matrix becomes less diagonal. In other words, once the estimates of different stellar labels become highly degenerate, their individual estimates become less precise.
When modeling low-resolution spectra one is forced to fit the full spectrum and one must therefore have knowledge of the line spread function (LSF) across a wide wavelength range. This can introduce additional challenges to measuring precise elemental abundances that are not as severe when modeling equivalent widths or line profiles of individual features from high-resolution data.
2.4. Fitting multiple stellar labels simultaneously
We have argued that low-resolution spectra contain the same amount of information for a fixed number of pixels and at fixed exposure time, but we can extract this information only if we are able to fit all stellar labels simultaneously. Generating state-of-the-art model spectra over a wide wavelength range takes several CPU hours for a given set of stellar labels. In a parameter space of 20–60 labels, it is computationally prohibitive to search for the best-fitting stellar labels through brute-force χ²-minimization – each step in the minimization process would take several CPU hours. The standard approach to this problem is to create a synthetic library on an approximately rectilinear grid in the stellar label space, creating models at each grid point and then interpolating between them (e.g., gar16). However, in this method the number of models needed grows exponentially with the number of labels, implying insurmountable computational cost for fitting 20–60 labels. We tackled this problem in rix16; tin16 and devised a new algorithm – polynomial spectral models (PSM) – that can fit 20–60 labels simultaneously. In essence, PSM constructs a model for the predicted flux at each wavelength point that is a polynomial function of all labels. But it does so in a way that requires only ∼1000 ab initio models for 20–60 labels. The success of such modeling depends of course on whether the stellar spectra to be fit are well approximated by a PSM. In rix16, we found that a single second-order expansion captures almost all of the label space spanned by the APOGEE sample of giants with Teff ≳ 4000 K: the median deviation of the normalized flux between directly calculated ab initio models and the PSM models is only ∼0.001. Such an “interpolation error” is negligible, as it is an order of magnitude smaller than the typical flux uncertainty of an observed spectrum (S/N ≈ 100).
Furthermore, finding the best-fitting models with PSM is also extremely efficient because it regularizes the likelihood space in the χ²-minimization. For instance, we found that fitting 100,000 APOGEE spectra with PSM requires only a modest number of CPU hours. PSM therefore appears to provide a practical solution to the requirement, in low-resolution spectra, of fitting all labels simultaneously. We will demonstrate how PSM can be used to fit low-resolution spectra in Section 4.
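A minimal sketch of the PSM idea — a second-order polynomial in the labels, fit independently at each wavelength pixel — might look as follows. The function names and toy training setup are ours for illustration, not the rix16; tin16 implementation:

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(labels):
    """Second-order polynomial features of the stellar labels."""
    l = np.atleast_2d(labels)
    cols = [np.ones(len(l))]
    cols += [l[:, i] for i in range(l.shape[1])]
    cols += [l[:, i] * l[:, j]
             for i, j in combinations_with_replacement(range(l.shape[1]), 2)]
    return np.column_stack(cols)

def train_psm(train_labels, train_flux):
    """Least-squares fit of the polynomial coefficients, one set per pixel."""
    X = quad_features(train_labels)                  # (n_models, n_coeff)
    coeff, *_ = np.linalg.lstsq(X, train_flux, rcond=None)
    return coeff                                     # (n_coeff, n_pix)

def psm_flux(labels, coeff):
    """Predicted normalized flux at arbitrary labels."""
    return quad_features(labels) @ coeff
```

For n labels the number of coefficients per pixel is 1 + n + n(n+1)/2 (e.g., 231 for n = 20), which is why a few hundred to ∼1000 training models suffice even for tens of labels.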
Table 1
| Survey | Wavelength range (approximate, nm) | Survey resolution | Resolution adopted here | Survey status |
| GALAH | 470–490, 565–585, 650–675, 760–790 | 28000 | 24000 | On-going |
| 4MOST (high-resolution) | 390–435, 515–575, 605–675 | 24000 | 24000 | Planned |
3. The information content of low-resolution spectra
In Section 2.3 we discussed analytically, and qualitatively, why stellar parameter estimation should not depend strongly on spectral resolution under certain conditions. In this section we explore this issue in more detail by using synthetic model spectra and evaluating how the uncertainties on stellar labels, calculated with Eq. 1, vary as a function of spectral resolution. We use model spectra to calculate the gradient spectra, ∂f_λ/∂ℓ, in Eq. 1, and from them the label covariance matrices K that reflect the label uncertainties, under the assumption that the models are a good description of the data.
We compute 1D LTE model atmospheres with the atlas12 code maintained by R. Kurucz (kur70; kur81; kur93). We adopt the latest line list provided by R. Kurucz (http://kurucz.harvard.edu), which includes TiO, H2O, CH, CN, CO, OH, and MgH, among other molecules. We evaluate the atmospheric structure on 80 zones of Rosseland optical depth, τ, with a maximum depth of τ = 1000. We automate the inspection of numerical convergence for each calculated atmosphere and adopt the solar abundances from asp09. We adopt standard mixing length theory with a mixing length of 1.25 and no overshooting for convection. Spectra are evaluated with the radiative transfer code synthe at a nominal resolution of R = 300,000 and are subsequently convolved to lower resolutions assuming a Gaussian LSF with a FWHM of λ/R.
To calculate approximate gradient spectra, we consider the differences of two spectra that differ by a small step in a given label (e.g., in Teff, log g, or [X/H]) with respect to a chosen reference point; tin16 elaborated on why this is a sensible approximation. For any stellar label ℓ and reference point, we calculate the gradient spectra as:

∂f_λ/∂ℓ ≈ [ f_λ(ℓ + Δℓ) − f_λ(ℓ) ] / Δℓ ,
where f_λ is the normalized flux of a model spectrum. In this study, we always perform fully self-consistent calculations – we re-evaluate the atmospheric structure whenever we vary a stellar label, even though in many cases, e.g., for Eu, this is unnecessary (see tin16 for details). This is an important point because many elements have a significant effect on the atmospheric structure, which in turn can affect the emergent spectrum. For example, an enhancement in Na not only affects the atomic Na i lines but also, at a lower amplitude, large regions of the spectrum, owing to the change in the atmospheric structure (Na is a major electron donor in cool stars).
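The finite-difference construction of a gradient spectrum can be sketched as follows; the one-line toy "model" here stands in for a full atmosphere-plus-radiative-transfer calculation, and all names and numbers are illustrative:

```python
import numpy as np

wave = np.linspace(1500.0, 1700.0, 2001)                # toy grid, nm
profile = np.exp(-0.5 * ((wave - 1600.0) / 0.1) ** 2)   # one line profile

def toy_model(labels):
    """Stand-in for an ab initio spectrum: a continuum with a single
    line whose depth scales with labels[0] (an abundance-like label)."""
    return 1.0 - labels[0] * profile

def gradient_spectrum(model, labels, idx, step):
    """One-sided finite-difference gradient spectrum for label `idx`."""
    base = np.array(labels, dtype=float)
    perturbed = base.copy()
    perturbed[idx] += step
    return (model(perturbed) - model(base)) / step

grad = gradient_spectrum(toy_model, [0.3], 0, 0.2)      # d f / d label_0
```

For this linear toy model the finite difference is exact; for real spectra the step size must be chosen small enough that the response is approximately linear.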
To study how the results vary for different stellar types, we consider several reference points in this study, spanning a range of effective temperatures, surface gravities, and metallicities.
Note that since the reference points are set at solar or scaled-solar abundances, this study mainly investigates the spectral information content of typical stars, i.e., not, for example, carbon stars with strongly non-solar abundance ratios.
We adopt a relation between Teff and log g from hol15. (APOGEE calibrated this relation with giants, so it might not apply to the broad range of stellar types in this study. But the goal here is simply to have a wide variety of Teff and log g as reference points, so the exact relation between these two parameters does not impact our results.)
The top panels of Fig. 1 illustrate gradient spectra for three elements – C, Fe, and K – assuming a K-giant, solar-metallicity reference point. We consider three different resolutions – the nominal model resolution of R = 300,000, a high-resolution mode, R = 24,000, and a low-resolution mode, R = 6000. We also show the normalized spectra with and without enhancements in the abundances of these three elements in the lower panels. At R = 300,000 and R = 24,000, some of the spectral lines are resolved and unblended. These are the lines typically selected to derive elemental abundances. Carbon has many more lines due to molecular contributions; elements such as potassium have far fewer lines. However, at R = 6000, all lines are blended. If we wish to derive the elemental abundance of potassium, for example, we will need to model the other elements contributing to the blends at the same time. Therefore, to extract spectral information at R = 6000, we need to model the blended lines by fitting all relevant stellar labels simultaneously.
The top panels of Fig. 1 reveal a few interesting features. For example, at R = 300,000, the global depths of spectral lines are not exactly 300,000/6000 = 50 times greater than at R = 6000. There are three effects in play: (a) at R = 300,000, the intrinsic broadening is larger than the LSF broadening. As we have discussed in Section 2.3, over-resolving lines does not improve the gradients. (b) When there are many overlapping lines, such as the carbon and iron lines, the gradients do not degrade as much at low-resolution. One way to think of this is that overlapping/blended features have larger effective widths, so that convolution does not degrade the gradients in the same way as for isolated lines. (c) Since we are convolving a spectral profile instead of a delta function, although the rms depth is proportional to R, the minimum point of the convolved profile alone does not necessarily scale exactly with R. The last effect has no influence on our arguments in Section 2.3, but the first two effects work in favor of low-resolution spectroscopy. They imply that the gradients degrade linearly with R only under certain restricted conditions. For example, the potassium lines at R = 24,000 and R = 6000 are less affected by these two effects and show a close-to-linear gradient degradation. But going from R = 300,000 to R = 24,000, especially for the carbon and iron lines, the gradients do not degrade proportionally with R.
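Effect (b) is easy to verify with a toy calculation: convolve an isolated narrow line and a dense blend of identical lines with the same Gaussian LSF and compare how much depth each retains. All line parameters below are arbitrary illustrations:

```python
import numpy as np

x = np.arange(4000, dtype=float)                    # pixel grid

def gauss(center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def convolve_lsf(dip, lsf_sigma):
    """Convolve an absorption-depth profile with a Gaussian LSF."""
    half = int(6 * lsf_sigma) + 1
    k = np.exp(-0.5 * (np.arange(-half, half + 1) / lsf_sigma) ** 2)
    return np.convolve(dip, k / k.sum(), mode="same")

line_sigma, lsf_sigma = 2.0, 10.0
isolated = 0.5 * gauss(1000.0, line_sigma)          # one narrow line
blended = sum(0.5 * gauss(c, line_sigma)            # dense blend of lines
              for c in np.arange(2900.0, 3101.0, 4.0))

iso_ratio = convolve_lsf(isolated, lsf_sigma).max() / isolated.max()
blend_ratio = convolve_lsf(blended, lsf_sigma).max() / blended.max()
```

In this setup the isolated line loses most of its per-pixel depth while the blend, whose effective width already exceeds the LSF, retains nearly all of it.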
Besides studying how the results vary for different stellar types, we also consider different wavelength ranges in this study. We assume the wavelength ranges of the APOGEE, GALAH, Gaia-ESO (specifically the GIRAFFE HR10 and HR21 settings, with which most of the Gaia-ESO sample will be observed), 4MOST, WEAVE (specifically the low-resolution configuration; WEAVE also plans to observe stars in a high-resolution mode but with a reduced wavelength coverage), Gaia RVS, RAVE, SEGUE and LAMOST surveys. Their wavelength ranges and spectral resolutions are listed in Table 1 (and are visualized in Appendix A). Note that 4MOST plans to operate in two configurations: the low-resolution configuration will cover a larger wavelength range than the high-resolution configuration. We will show in the following subsections that, regardless of wavelength range and stellar type, low-resolution spectroscopy can measure equally many elements with the same precision as high-resolution spectroscopy.
3.2. Stellar label estimates as a function of spectral resolution
In this section we evaluate how the uncertainties of stellar labels vary as a function of spectral resolution, R, considering Teff, log g, and all elements with atomic numbers from 3 to 99 as stellar labels, taking into account the correlations of the main stellar parameters, such as Teff, log g, and [Fe/H], with the other elemental abundances. We calculate the theoretical uncertainties of these stellar labels using Eq. 1. The output covariance matrix K has a size of N_label × N_label, showing the covariances of all stellar labels. The diagonal entries of K give the variances (marginalized over the uncertainties of all other stellar labels) that one can achieve for each stellar label, and the square roots of these values give the theoretical uncertainties that we explore in this section. Clearly, the gradient spectra depend on stellar type, wavelength range and metallicity. Therefore, we calculate K for different wavelength ranges, different stellar types and two metallicities – solar and metal-poor.
We also verified Eq. 1 by numerical simulations: we modify a reference spectrum with linear combinations of the gradient spectra of all stellar labels and add noise to the spectrum. We then perform full spectral fitting (using PSM) via χ²-minimization and find that Eq. 1 gives an accurate estimate of the covariance matrix of the stellar labels. Finally, to study how the theoretical uncertainties vary with R, we convolve the gradient spectra to various resolutions and recalculate K for each R. We define an element to be detectable if its uncertainty is better than a fixed threshold (in dex) at R = 24,000 and a fiducial S/N.
Fig. 2 shows the uncertainties of all detectable stellar labels (including the main stellar parameters, such as Teff and log g) as a function of R. The figure overplots results from all stellar types, wavelength ranges and metallicities. We normalize the value of the y-axis to be unity at R = 6000. Since the uncertainty scales linearly with the inverse of the S/N (see Eq. 1), the ratio of uncertainties plotted in Fig. 2 is independent of S/N. The left panel shows that, assuming the same S/N per pixel and the same wavelength range, the uncertainty scales as 1/R, regardless of stellar label, stellar type, wavelength range and metallicity, as explained in Section 2.3: since the absolute values of the gradient spectra decrease proportionally to R, the uncertainties grow as 1/R.
However, given the same exposure time, low-resolution spectra will have a S/N per pixel that is higher than that of high-resolution spectra by a factor of √(R_high/R_low). In the middle panel, we take this into account and rescale the uncertainties in the left panel by this factor.
In the right panel, we also account for the larger wavelength range afforded by low-resolution spectra (given a fixed total number of detector pixels and a fixed number of pixels per resolution element) and further scale the uncertainties by another factor of √(R_high/R_low), assuming the spectral information is distributed uniformly over the entire wavelength range. As expected from the arguments in Section 2.3, this factor compensates for the lower resolution. Remarkably, regardless of stellar type, wavelength range and metallicity, the achievable stellar label uncertainties are indeed nearly independent of spectral resolution, provided we have robust models and can fit all stellar labels simultaneously (Fig. 2, right panel).
But Fig. 2 also quantifies how these simple trends are violated at the very low- and very high-resolution ends. At the low-resolution end (R of order 1000 or below), stellar label estimates become nearly degenerate, resulting in reduced precision, as discussed in Section 2.3. At the high-resolution end, spectral lines are eventually resolved, so further increasing the resolution does not improve the information content. By visual inspection, we found that most spectral lines in our model grid indeed have a finite intrinsic broadening; over-resolving lines beyond the corresponding resolution does not improve the gradients and causes the high-resolution end to deviate from the linear trend.
3.2.1 4MOST survey as a case study
One can adopt simple arguments to rescale the uncertainties in Fig. 2 for a particular survey design. For example, one can assume that spectral line information is uniformly distributed throughout all wavelengths and derive the improvement in uncertainty when going from the middle to the right panel of Fig. 2. This assumption might be a good approximation for elements that have many spectral lines, such as Fe and the α-capture elements. But for trace elements, such as Li and K, that have only a few spectral lines, expanding the wavelength range does not necessarily improve the information content. To work out a concrete example, we compare the two proposed resolution configurations of the 4MOST survey, and consider the tradeoffs between the low- and high-resolution setups for the same exposure time.
4MOST proposes a high-resolution configuration covering a shorter total wavelength range and a low-resolution configuration covering a wider wavelength range. These two configurations serve as a perfect case study for evaluating how the uncertainty of stellar labels changes when comparing low S/N high-resolution spectra spanning a narrow wavelength range to high S/N low-resolution spectra spanning a large wavelength range. In Fig. 3, we assume R = 8000 for the low-resolution configuration and R = 20000 for the high-resolution configuration. These resolutions are chosen such that both configurations consume an equal number of detector pixels once the difference in wavelength range is taken into account. Also, for the same exposure time, the low-resolution configuration will have a higher S/N – we assume the low-resolution configuration has a better S/N per pixel by a factor of √(20000/8000) ≈ 1.6.
Fig. 3 shows the ratio of uncertainties between the two configurations for all detectable elements and stellar parameters. Note that since we plot the ratio of uncertainties, the result is independent of the absolute values of S/N per pixel. On the x-axis, we sort elements by their uncertainties in the high-resolution configuration. If the two scaling relations assumed in Fig. 2 were exact – in particular, if spectral line information were uniformly distributed throughout all wavelengths, as is approximately the case for the stellar parameters – the ratio should be close to unity. However, if the information is concentrated in a small wavelength range, expanding the wavelength range does not collect more spectral information, and in this case the low-resolution configuration will have a worse uncertainty by a factor of √(20000/8000) ≈ 1.6. The upper dashed line shows this value as the upper limit. Fig. 3 shows a clear trend: stellar parameters and elemental abundances with better uncertainties, such as Fe and Mg, generally have more lines, so the uniform distribution of spectral information is a better approximation, resulting in ratios closer to unity. The elements that are less precisely measured are mostly those with only a small number of lines, all of which reside in the wavelength region of the high-resolution configuration; expanding the wavelength range at low-resolution does not help in this case. Some elements (e.g., Na, Ba, K, Zn, S) are better measured (ratio below unity) at low-resolution: expanding the wavelength range includes lines from these elements that would otherwise be missed by the high-resolution configuration. The last four elements in Fig. 3 (Pt, Ge, Rb, In) highlight the scenario in which the high-resolution configuration does not cover any transitions of these elements, rendering them unmeasurable in this particular high-resolution configuration.
Although some elements perform worse at low-resolution even with the same exposure time and number of detector pixels, note that the ratio of uncertainties is bounded by an upper limit of √(20000/8000) ≈ 1.6. We can compensate for this loss by spending 20000/8000 times more exposure time with the low-resolution configuration. Since low-resolution spectrographs are generally more accessible, and high-resolution spectrographs have other downsides, such as lower instrumental throughput and more restrictive read-noise limitations for faint targets (see discussion in Section 5), it would still seem that low-resolution stellar spectroscopy with R ≈ 6000 and a properly chosen wavelength range is the optimal strategy for designing large-scale stellar spectroscopic surveys.
3.3. Number of detectable elements for various surveys
In the previous section we discussed how the ratio of uncertainties varies as a function of spectral resolution. In this section, we study the absolute uncertainties – i.e., the square roots of the diagonal entries of the covariance matrix in Eq. 1 – given a fixed S/N per pixel, and determine how many elements we can, in principle, detect for various surveys. We assume a S/N per pixel of 100 throughout this section; we do not show other values of S/N because the uncertainty simply scales inversely with S/N (cf. Eq. 1). We emphasize again that these uncertainties can only be attained if the model spectra are perfect, or nearly so.
Fig. 4 shows the theoretically achievable uncertainties of detectable elements and stellar parameters for various surveys, assuming solar metallicity and K-giants. For each survey setup, we adopt the resolutions stated in Table 1. Optical surveys like 4MOST, WEAVE, GALAH, Gaia-ESO, SEGUE and LAMOST can each measure several tens of elements. Strikingly, even with low-resolution spectra like those of SEGUE and LAMOST, with only R ≈ 2000, we can in principle still measure as many elements as with high-resolution spectra, provided that we can fit all stellar labels simultaneously and have robust stellar models.
Infrared surveys such as APOGEE contain less information (see also Appendix A) and can “only” detect a smaller set of elements, consistent with the APOGEE pipeline (hol15; sds16). Not surprisingly, at the same resolution, surveys with larger wavelength ranges, such as 4MOST, have smaller uncertainties than surveys with more limited wavelength ranges, like GALAH and Gaia-ESO. But interestingly, even with the small wavelength ranges and low-resolution spectra of RAVE or Gaia, we can in principle detect about 15 elements, at least for K-giants. Measuring multi-elemental abundances with RAVE and Gaia RVS is an important application of PSM that we are currently exploring.
In particular, for Gaia RVS, we found that most of the spectral information for C and N comes from the CN features. The spectral information for O comes through CNO equilibrium: when there is more oxygen, more carbon is locked up in CO instead of CN, so the oxygen abundance changes the CN features as well. As a result, one can measure C, N and O either directly or indirectly from the CN features. We note, however, that C, N and O are not completely degenerate, because a few atomic lines from these elements in the RAVE and Gaia RVS wavelength range break the degeneracy – most notably a C I line at 872.713 nm, a few O I lines around 844.636 nm and an N I line at 868.028 nm (among other, weaker lines). The calculation of the Cramér–Rao bound already takes this partial degeneracy into account, which is why the uncertainties for C, N and O for Gaia RVS are not excellent despite the wealth of spectral information in the CN features. How well we can measure C, N and O in practice from the combination of molecular features and (blended) atomic lines remains to be tested, but it will be an important application of the Payne. Fig. 4 also suggests that high S/N spectra, such as stacked spectra from LAMOST and SEGUE, could yield many more elements than are currently being measured.
Instead of plotting the uncertainties of individual elemental abundances, we can compress this information and plot the cumulative distribution of uncertainties over all elemental abundances, as shown in Fig. 5. The y-axis of Fig. 5 shows the cumulative number of detectable elements with theoretical uncertainties smaller than the threshold shown on the x-axis. We consider two resolution configurations – a high-resolution configuration with R = 24000 and a low-resolution configuration with R = 6000. Each row in Fig. 5 has three separate panels, showcasing three different comparisons between the low- and high-resolution configurations, in the same way as Fig. 2. To recap, the panels on the left assume the same S/N per pixel and the same wavelength range; the middle panels assume the same exposure time; and the right panels further assume the same number of detector pixels. For the middle and right panels, we rescale the uncertainties of the low-resolution spectra in the left panels by factors of √(6000/24000) and 6000/24000, respectively, following Fig. 2.
The top panels of Fig. 5 show the number of detectable elements at R = 24000 and R = 6000 for various wavelength ranges, assuming solar metallicity and K-giants. For surveys that operate at these resolutions, such as 4MOST, WEAVE, GALAH and Gaia-ESO, these panels are just compact representations of Fig. 4. But we caution that for surveys operating at much lower resolution, such as LAMOST and SEGUE (R ≈ 2000), the results in the top panels might not be directly applicable – these panels only show the number of detectable elements if LAMOST and SEGUE were to operate at R = 24000 or R = 6000. Not surprisingly, at a given resolution and assuming the same S/N, the top panels show that a larger wavelength range, as for LAMOST and 4MOST, allows more elements to be detected. These panels also show that, generally speaking, optical wavelength ranges contain more information and allow more elements to be measured than the infrared. More importantly, as shown in the right panel, if we assume the same exposure time and the same number of detector pixels, the dashed lines coincide with the solid lines, showing that R = 6000 spectra can detect as many elements as R = 24000 spectra, echoing our earlier conclusions. This conclusion also holds for the other comparisons, which we discuss next.
Thus far, we have only discussed how the detectability of elements varies as a function of wavelength range. But detectability also depends on stellar type and metallicity. The middle panels of Fig. 5 show the number of detectable elements for different stellar types, assuming the wavelength range of the 4MOST (high-resolution) survey and solar metallicity. These panels show that more elements are detectable in cooler stars (e.g., M-giants) than in hotter stars (e.g., F-dwarfs). This result is not surprising, because cooler stars have more spectral lines, especially contributions from molecular lines. In fact, M-giants almost double the number of detectable elements compared to F-dwarfs. Since part of these cool-star features comes from molecular contributions, and noting the composite nature of molecular features, this demonstrates the importance of full spectral fitting over many stellar labels simultaneously, without which we would not be able to extract information from molecular lines.
Finally, the bottom panels show the number of detectable elements in two different metallicity regimes, solar and metal-poor, assuming the wavelength range of 4MOST (high-resolution) and K-giants. We calculate the covariance matrix using gradient spectra evaluated at reference points at these two metallicities. Metal-poor stars have smaller gradient spectra, which in turn predicts a smaller number of detectable elements. Nonetheless, for optical surveys like 4MOST, although the number of elements is more restricted in the metal-poor regime, the bottom panels show that we can still detect up to 30 elements. The tests in Appendix A also indicate that there is still sufficient spectral information at low metallicity in the optical. Spectral information is more limited in the infrared, however: although not shown, we found that we can detect far fewer elements at low metallicity with an APOGEE-like setup.
3.4. Stellar parameter correlation as a function of spectroscopic resolution
Thus far we have only considered the diagonal entries of the covariance matrix in Eq. 1 – i.e., the theoretical uncertainties of the stellar labels. However, the covariance matrix contains more information: it also encodes the correlations between stellar labels. More precisely, for each label pair (i, j), the 2×2 submatrix drawn from the i-th and j-th rows and columns of the covariance matrix K gives the covariance of the i-th and j-th stellar labels, from which we calculate their correlation via rho_ij = K_ij / sqrt(K_ii K_jj).
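In code, the correlation of a label pair follows directly from the covariance matrix; the 2×2 matrix below is an illustrative example, not a value from the paper.

```python
import numpy as np

def label_correlation(K, i, j):
    """Correlation of the i-th and j-th stellar labels, given the
    label covariance matrix K: rho_ij = K_ij / sqrt(K_ii * K_jj)."""
    return K[i, j] / np.sqrt(K[i, i] * K[j, j])

# illustrative 2-label covariance: variances 4 and 1, covariance 1
K = np.array([[4.0, 1.0],
              [1.0, 1.0]])
rho = label_correlation(K, 0, 1)   # -> 0.5
```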
Uncorrelated estimates of stellar labels are crucial for Galactic studies, because correlated estimates can complicate astrophysical interpretation when looking for trends among stellar labels or searching for structures in chemical space (tin15b). In this section we study how the correlations vary as a function of spectral resolution.
The left panel of Fig. 6 shows the cumulative distribution of correlations over all detectable stellar labels. The y-axis indicates the fraction of label pairs that have correlations smaller than the threshold value on the x-axis. We only consider the global distribution of correlations from all (detectable) label pairs in this section and refer interested readers to Appendix C for the correlations of each label pair. The left panel shows that many label pairs have large correlations at the lowest resolutions – in fact, more than half of the label pairs are strongly correlated there. Strong degeneracies between stellar labels are expected in this regime: at such low resolution, there are only a small number of wavelength pixels in the APOGEE wavelength range, so most stellar labels contribute to most of the pixels.
However, increasing the resolution to R ≈ 1000 already removes or strongly diminishes the correlations between labels: the large majority of label pairs then have correlations smaller than 0.15. In detail, as shown in Appendix C, only the prominent stellar labels that contribute to most pixels – the stellar parameters, Fe, C, N and O – remain strongly correlated beyond R ≈ 1000. The green and red lines in the left panel show that going to an even higher resolution, such as R = 100000, no longer decreases the correlations by much. In practice, the correlations at low-resolution should be even smaller than this comparison suggests: Fig. 6 assumes a fixed wavelength range, but as discussed earlier, for the same number of detector pixels, low-resolution spectra will have a much more extensive wavelength range, which further disentangles the contributions of the various stellar labels.
The right panel of Fig. 6 shows that this result is general, independent of stellar type and wavelength range. Instead of plotting cumulative distributions as in the left panel, the right panel plots the correlation value at a fixed percentile of the cumulative distribution as a function of spectral resolution. The solid, dashed, dash-dotted and dotted lines assume different stellar types – K-giants, M-giants, G-dwarfs and F-dwarfs, respectively – and the lines in different colors show results for various wavelength ranges. All these lines support the previous conclusion that stellar labels are not strongly correlated beyond R ≈ 1000, with the exception of the RAVE and Gaia RVS wavelength ranges. The RAVE and Gaia RVS labels only decorrelate beyond R ≈ 6000 because these surveys have a limited wavelength range; below R ≈ 6000, there are simply too few wavelength pixels to distinguish the contributions of the various stellar labels.
The lines in the right panel show correlations at various resolutions, but any given spectroscopic survey has a well-defined survey resolution. An important question, then, is whether stellar labels are correlated at the nominal survey resolutions. To answer this, we mark the survey resolutions with boxes in the right panel of Fig. 6. The black box shows the resolutions of RAVE and Gaia RVS; the green box shows the resolutions of APOGEE, GALAH, Gaia-ESO and 4MOST (high-resolution); the shaded red box shows the resolution of 4MOST/WEAVE in the low-resolution configuration; and the hollow red box shows the resolutions of LAMOST and SEGUE. All boxes lie in the region where the correlation curves have already plateaued, indicating that stellar labels will not be strongly correlated in these surveys.
4. Fitting mock spectra and deriving 18 stellar labels from R = 6000 spectra
Thus far we have shown that, for the same exposure time and the same number of detector pixels, the spectral information content is largely independent of resolution. But one crucial question remains: since most spectral lines at low-resolution are blended, can we model these blended features by fitting all stellar labels simultaneously? In other words, even though we know the spectral information is there, can we extract it? In this section we generate and fit mock spectra at R = 6000 and show that we can recover multi-elemental abundances at this resolution, even in the presence of bad pixels, imperfect LSF modeling, and flux uncertainties.
We choose the wavelength range of APOGEE (1500–1700 nm) as our test case. We generate flux-normalized synthetic models at R = 300000 and subsequently convolve them to R = 6000 with a Gaussian kernel, adopting the same wavelength sampling as APOGEE DR12/DR13. Here we study only flux-normalized models; we discuss the potential problem of continuum normalization at low-resolution in Section 5.2.
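The degradation in resolution can be sketched as a Gaussian convolution on a log-uniform wavelength grid, where the kernel width in pixels is constant. The grid, line profile and kernel truncation below are toy assumptions, not the paper's actual setup.

```python
import numpy as np

C_KMS = 299792.458

def convolve_to_resolution(wave, flux, R_out):
    """Convolve a spectrum on a log-uniform wavelength grid with a Gaussian
    LSF of resolving power R_out (edge pixels suffer from zero-padding)."""
    dv = C_KMS * np.median(np.diff(np.log(wave)))     # pixel size in km/s
    sigma_pix = (C_KMS / R_out) / 2.355 / dv          # LSF sigma in pixels
    half = int(5 * sigma_pix) + 1                     # truncate at 5 sigma
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()
    return np.convolve(flux, kernel, mode="same")

# toy H-band spectrum with one narrow line
wave = np.exp(np.linspace(np.log(15000.0), np.log(17000.0), 20000))
flux = 1.0 - 0.8 * np.exp(-0.5 * ((wave - 16000.0) / 0.5) ** 2)
flux_lo = convolve_to_resolution(wave, flux, 6000)    # shallower, broader line
```

The convolved line is shallower and broader but conserves equivalent width away from the grid edges, which is why the gradients (and hence the information content) dilute smoothly with decreasing R.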
We adopt the same PSM approach as in rix16 and perform full spectral fitting. Briefly, instead of interpolating spectra, PSM constrains explicit quadratic functions that describe how the flux at each wavelength varies as a function of the stellar labels. One can regard PSM as a second-order Taylor expansion of the spectrum, and its performance depends on the “convergence radius” of this Taylor sphere. The key to the success of PSM (and of related data-driven models such as The Cannon, cf. nes15a; cas16) is that the Taylor sphere encompasses most of the stellar label space that matters – i.e., the region of label space that stars typically occupy. In rix16 we showed that we can fit all 18 stellar labels (three stellar parameters and 15 elemental abundances) simultaneously and recover them at R = 24000. Our aim here is to extend that analysis to R = 6000.
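A minimal PSM-style sketch, under simplifying assumptions: a quadratic polynomial in the labels is fit independently at every wavelength pixel by linear least squares. The label count, pixel count and training-set size below are toy values; for 18 labels, the quadratic expansion has (18+1)(18+2)/2 = 190 coefficients per pixel.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design(labels):
    """Design matrix [1, l_i, l_i*l_j] for a quadratic in the labels."""
    n_star, n_lab = labels.shape
    cols = [np.ones(n_star)]
    cols += [labels[:, i] for i in range(n_lab)]
    cols += [labels[:, i] * labels[:, j]
             for i, j in combinations_with_replacement(range(n_lab), 2)]
    return np.vstack(cols).T

def train_psm(labels, fluxes):
    """Least-squares quadratic coefficients at every wavelength pixel.
    labels: (n_star, n_label); fluxes: (n_star, n_pix)."""
    X = quadratic_design(labels)
    coeffs, *_ = np.linalg.lstsq(X, fluxes, rcond=None)
    return coeffs                                  # (n_coeff, n_pix)

def psm_spectrum(coeffs, star_labels):
    """Predict the spectrum of one star from its labels."""
    return (quadratic_design(star_labels[None, :]) @ coeffs)[0]

# mock training set: 3 labels, 200 pixels, exactly quadratic flux
rng = np.random.default_rng(1)
train_labels = rng.normal(size=(50, 3))
true_coeffs = rng.normal(size=(10, 200))   # 10 = (3+1)(3+2)/2 coefficients
train_flux = quadratic_design(train_labels) @ true_coeffs
coeffs = train_psm(train_labels, train_flux)
```

Because the model is linear in its coefficients, training reduces to one least-squares solve per pixel, which is what makes overconstraining with a large training set computationally cheap.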
To test how well we can recover realistic stellar labels, we draw labels from the APOGEE DR12 catalog (hol15), restricting to objects with effective temperatures between 4000 and 5500 K and giant-like surface gravities. We demonstrated in rix16 that a single PSM region performs sufficiently well within this range for high-resolution spectra, and the results below show that the PSM model works well for low-resolution spectra in this range too. Going beyond this range might require multiple PSM spheres to cover the full relevant label space (see tin16), but would not alter our conclusions. We also remove objects that do not have measured abundances for all elements in the APOGEE DR12 catalog. We randomly choose 1000 stars and use their stellar labels to generate our training set and constrain the PSM functions, and randomly choose another 1000 stars to generate our testing set, which is used to determine how well we can recover the input labels. Note that to fully define a PSM for 18 stellar labels, we only need a minimal training set of 190 spectra – the number of coefficients of a quadratic polynomial in 18 labels. But rix16 found that overconstraining the PSM with a larger training set, whenever computationally feasible, produces better results; we therefore constrain the PSM with 1000 training models.
Fig. 7 shows the recovery of the input labels for the testing spectra. Gaussian random errors are included, assuming S/N per pixel of 200 and 500. We also assign random values, with correspondingly large uncertainties, to a small fraction of the testing spectra pixels. These “bad pixels” mimic pixels affected by skylines, cosmic rays or other contaminants, as well as pixels that are not well modeled in real-life applications and have to be clipped from the spectral fitting. The gray shaded regions in each panel demonstrate that, with robust models, we can recover 18 stellar labels with a precision of ∼0.1 dex at R = 6000. The recovery of the broadening label shows a non-monotonic systematic, suggesting that the PSM is not a perfect representation of the variation of flux as a function of this label; a more complicated function might be needed, but even with a simple quadratic function the systematic is small (at the km/s level). Similarly, we found that for elements with a limited number of lines or only very weak signatures, such as K, V or Na, systematics in the PSM label estimates can become important at lower S/N. For these elements, a non-parametric extension of the PSM model is needed to go beyond the simple quadratic assumption and improve the precision. This is work that we are currently pursuing, but this technical limitation does not alter our conclusion that low-resolution spectra can be (nearly) as information-rich as their high-resolution counterparts.
Another potential source of systematic uncertainty is our imperfect knowledge of the line spread function (LSF). To study how sensitive our results are to an imperfect adopted broadening kernel, we model mock spectra that are further convolved with additional broadening of up to tens of km/s. Fig. 8 shows the scatter between the best-fit and input stellar labels as a function of this additional broadening. As before, we include bad pixels and adopt S/N per pixel of 200 and 500. The figure shows that, at R = 6000, the estimates are not severely affected by a small additional broadening, but when the LSF error becomes a non-negligible fraction of the resolution element, the PSM estimates are biased: as spectral features become shallower and broader, we will overestimate Teff, which in turn generally causes overestimates of the abundances to compensate for the higher temperature. In contrast, although not shown, we checked that bad pixels (with large uncertainties assigned) and flux uncertainties, as studied in Fig. 7 and Fig. 10, do not bias the stellar label estimates at high S/N. On the flip side, the weak dependence on additional broadening shows that we cannot measure rotational broadening much smaller than the resolution element at R = 6000, because it is completely dominated by the instrumental LSF broadening at low-resolution. Parameters such as microturbulence, by contrast, can still be recovered at low-resolution because their effects on the spectrum are not simple convolutions with a kernel, and so can be distinguished from the broadening due to the LSF.
Similarly, in Fig. 9, we study the deviation of the stellar label estimates as a function of an additional (and erroneous) radial velocity. Since most features in low-resolution spectra are broadened and blended, it might be hard to measure radial velocities precisely through the Doppler shifts of spectral lines, so it is important to understand how much the stellar label estimates are affected by erroneous radial velocities. Fig. 9 shows that as long as the radial velocity error remains a small fraction of the resolution element, the stellar label estimates are largely unaffected. On the other hand, since the deviation (and the chi-squared of the spectral fit) is insensitive to such small radial velocity shifts, the figure also implies that at R = 6000 we cannot measure radial velocities to better than a small fraction of the resolution element, which is one of the limitations of low-resolution spectra. On the flip side, this study also illustrates that, at least for the S/N and wavelength range adopted in Fig. 9, we can still measure radial velocities to a useful precision with low-resolution spectra; the exact precision attainable depends on the particular S/N and wavelength coverage in question.
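The insensitivity of the fit to small radial velocity shifts can be demonstrated with a toy chi-squared experiment: shift a model spectrum in velocity and compare it to itself. The single Gaussian line, wavelength grid and S/N below are illustrative assumptions.

```python
import numpy as np

C_KMS = 299792.458

def chi2_vs_rv(wave, flux_obs, flux_err, model_flux, rv_kms):
    """Chi-squared of a model Doppler-shifted by rv_kms, interpolated
    back onto the observed wavelength grid."""
    shifted = np.interp(wave, wave * (1.0 + rv_kms / C_KMS), model_flux)
    return np.sum(((flux_obs - shifted) / flux_err) ** 2)

# one line broadened to the R = 6000 resolution element (~50 km/s FWHM)
wave = np.linspace(8480.0, 8560.0, 400)
sigma_A = 8520.0 / 6000.0 / 2.355          # Gaussian sigma of the LSF
model = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 8520.0) / sigma_A) ** 2)
err = np.full_like(wave, 0.01)             # S/N = 100

chi2_small = chi2_vs_rv(wave, model, err, model, 2.0)    # tiny shift
chi2_large = chi2_vs_rv(wave, model, err, model, 30.0)   # sizable shift
```

A shift that is small compared with the LSF width barely changes the chi-squared, so it can neither bias the labels nor be measured; a shift comparable to the resolution element produces a large chi-squared penalty and is easily detected.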
Fig. 10 shows the scatter in the stellar label recovery as a function of S/N and the fraction of assumed bad pixels. At high S/N per pixel, PSM can recover most stellar labels to better than 0.1 dex, even with a substantial fraction of bad pixels. The weak dependence on the fraction of bad pixels is not surprising: since information adds in quadrature, a single reliable line can carry a lot of weight. Thus, for elements with many spectral lines, such as Fe, Mg and Si, a high fraction of bad pixels does not substantially change the results. On the other hand, the measurement of stellar labels with only weak gradients at low-resolution, such as V, Na and K, can be seriously compromised by a large fraction of bad pixels and/or large flux uncertainties.
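The “information adds in quadrature” argument can be made concrete: masking bad pixels removes their Fisher information, so the abundance precision degrades only as the square root of the surviving pixel fraction. The equally informative pixels below are a deliberately simplified toy case.

```python
import numpy as np

def abundance_precision(gradients, flux_err, bad_mask):
    """1-sigma precision of a single abundance from many pixels,
    ignoring masked bad pixels: 1/sigma^2 = sum_i (g_i / err_i)^2."""
    g = np.asarray(gradients, dtype=float)[~bad_mask]
    e = np.asarray(flux_err, dtype=float)[~bad_mask]
    return 1.0 / np.sqrt(np.sum((g / e) ** 2))

# 100 equally informative pixels at S/N = 100
g = np.full(100, 0.01)
e = np.full(100, 0.01)
clean = np.zeros(100, dtype=bool)
bad30 = np.arange(100) < 30                # mask 30% of the pixels

sigma_all = abundance_precision(g, e, clean)
sigma_masked = abundance_precision(g, e, bad30)
```

Losing 30% of equally informative pixels degrades the precision only by a factor of 1/√0.7 ≈ 1.2, whereas a label carried by a handful of weak lines can lose most of its information if those few pixels happen to be bad.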
In this paper we have demonstrated that it is possible to derive many elemental abundances precisely from low-resolution spectra, provided one is not limited by systematic shortcomings of the spectral models. Perhaps more remarkably, we have shown that – for a given exposure time and number of detector pixels – low-resolution spectra can yield elemental abundances as precise as high-resolution spectra, and without strong correlations among stellar labels. In this section we discuss several important caveats to these conclusions and additional complications, for both high- and low-resolution analyses.
5.1. Some drawbacks to high-resolution spectroscopy
Practicalities aside, if there is little or no gain in label precision at a given survey speed between resolutions of R ≈ 1000 and R ≈ 100000, one might wonder what downsides, if any, exist to pursuing a survey at the upper end of this range. First, as discussed in previous sections, the near-independence of elemental abundance precision from resolution assumes that information is spread uniformly throughout the spectrum. This is generally not the case, especially for important classes of elements such as the r- and s-process neutron-capture elements. There is therefore some minimum wavelength range that must be covered in order to probe a given set of elements. This works against collecting high-resolution data, as multiple instrument configurations are then required, and hence the number of detector pixels required is not fixed but increases with resolution.
Spectrographs with very high spectral resolution are not suited for even moderately faint objects: they tend to have lower throughput than low-resolution spectrographs, and the exposure time to overcome the read-noise dominated regime for faint objects is often prohibitive.
5.2. Limitations of low-resolution stellar spectroscopy
The fundamental limitation in analyzing low-resolution spectra is the strong reliance on ab initio models being of uniformly high quality over a large wavelength range. At high-resolution one can focus on lines with very accurate atomic data that are known to form in relatively well-understood layers of the atmosphere. At low-resolution every “feature” is in reality a blend of many lines, so it is difficult to isolate the good from the bad regions of the model spectra. Moreover, the fraction of wavelength pixels affected by strong lines increases at low-resolution, and strong lines usually suffer more from model imperfections: they are especially sensitive to microturbulence, non-LTE effects, and the treatment of damping wings. Hence, the robust information content of low-resolution spectra can be adversely impacted by model imperfections.
Deriving a robust model is clearly beyond the scope of this paper; here we only aim to show that low-resolution spectra are remarkably information-rich and contain information on many elemental abundances. It is worth emphasizing, however, that a single bad pixel (or several) will not necessarily compromise the fits at low-resolution. In most cases there is a vast amount of redundant information in the spectrum, so even if, for example, one or more iron lines are in error, the many other good iron lines will dominate the determination of the final iron abundance. Also, data-driven approaches such as The Cannon (nes15a) can construct spectral models for low-resolution spectra (based on accurate and precise training labels, e.g., from high-resolution spectra) that are – almost by construction – free of substantive systematic errors.
There are a variety of ways to mitigate the effects of model imperfections when fitting low-resolution spectra. At a minimum, one can (and should) fit ultra-high-resolution spectral atlases of standard stars such as the Sun and Arcturus. The residuals of the fits to the standards can be convolved to low-resolution and used to down-weight spectral regions that are poorly described by the models. A more ambitious approach would be to collect a sample of ultra-high-resolution spectra and tune the models to fit those data (e.g., by astrophysically calibrating the atomic line parameters, which are often not known to high precision). Ideally, the sample with ultra-high-resolution spectra should span the full range of parameter space that one wishes to study at low-resolution. These tuned models should then, by design, provide excellent fits to low-resolution data.
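The down-weighting scheme sketched above amounts to inflating the per-pixel uncertainties with the resolution-matched residuals from fitting a standard-star atlas. The arrays below are placeholders for real error and residual vectors.

```python
import numpy as np

def downweight_with_residuals(flux_err, atlas_residuals):
    """Inflate per-pixel flux uncertainties with model-minus-atlas
    residuals (already convolved to the survey resolution), so poorly
    modeled regions carry less weight in the fit."""
    err = np.asarray(flux_err, dtype=float)
    res = np.asarray(atlas_residuals, dtype=float)
    return np.sqrt(err ** 2 + res ** 2)

err = np.full(5, 0.01)                             # nominal S/N = 100 errors
residuals = np.array([0.0, 0.0, 0.05, 0.0, 0.0])   # one badly modeled pixel
err_eff = downweight_with_residuals(err, residuals)
```

Pixels the models reproduce well keep their photon-noise errors, while the badly modeled pixel has its effective uncertainty dominated by the model residual and therefore contributes little to the chi-squared.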
There are other important aspects of fitting low-resolution spectra that relate to data quality and characteristics. In principle, with perfectly flux-calibrated data, one could fit the fluxed spectrum directly. In practice, spectra are often not flux-calibrated to the required precision, so some method of continuum normalization is adopted. At high-resolution one can either measure equivalent widths or fit polynomials to regions of the spectrum that are free of (strong) absorption lines. At low-resolution there are no wavelength ranges that probe only the continuum, so the normalization is more model-dependent. Nonetheless, cas16, ho16 and ho17 have shown that even with the relatively low-resolution spectra of RAVE and LAMOST, continuum normalization is an issue that can be overcome, at least with data-driven methods.
Accurate continuum normalization for the ab initio fitting of low-resolution spectra is an important aspect that we are currently pursuing with observed spectra, but it is beyond the scope of this paper. Here we propose three ways that might mitigate this problem, to be explored in future studies. (a) One can iterate between fits of the stellar labels (with PSM) and the continuum; this approach was adopted for low-resolution DEIMOS spectra and proved robust enough to measure multiple elements. (b) With potentially robust stellar parameter estimates from CMD fitting for billions of stars from Gaia, one might directly model the fluxed spectra (or at least the slope of the flux spectra) instead of normalized spectra. (c) Since PSM allows us to fit many parameters simultaneously, one can include the continuum polynomial coefficients as part of the PSM parameters and fit the stellar labels and the continuum simultaneously.
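Option (c) can be sketched with scipy.optimize.curve_fit: the parameter vector simply concatenates a few continuum polynomial coefficients with the stellar labels. The one-label “PSM” below is a toy stand-in for the real model, and the quadratic continuum is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

wave = np.linspace(5000.0, 5100.0, 500)
x = (wave - wave.mean()) / (wave.max() - wave.min())   # scaled wavelength
line = np.exp(-0.5 * ((wave - 5050.0) / 2.0) ** 2)

def psm_flux(label):
    """Toy one-label 'PSM': the label controls the depth of one line."""
    return 1.0 - label * line

def model(_, c0, c1, c2, label):
    """Continuum (quadratic in wavelength) times normalized PSM flux."""
    continuum = c0 + c1 * x + c2 * x ** 2
    return continuum * psm_flux(label)

# mock observed spectrum: tilted continuum, line depth 0.5
observed = (1.0 + 0.1 * x) * psm_flux(0.5)
popt, pcov = curve_fit(model, wave, observed, p0=[1.0, 0.0, 0.0, 0.3])
```

Because the line is spectrally localized while the continuum varies slowly, the continuum coefficients and the label are not degenerate and both are recovered in a single simultaneous fit.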
Another advantage of high-resolution data is that it is much easier to subtract bright sky lines, which are intrinsically very narrow. This is more of a concern in the NIR where there is a forest of bright sky lines. Yet another advantage of working at high-resolution is that equivalent widths are independent of the LSF and so the precise wavelength-dependent instrumental resolution need not be modeled. At low-resolution the LSF must be accurately modeled in order to derive reliable parameters. In practice this means that a parameterized LSF should become part of the model.
Subtle effects such as asymmetric line profiles (due, for example, to 3D effects or spot modulation) and small shifts in line centers (due, for example, to isotopic ratios, Zeeman splitting, gravitational redshift, or convective motions) are unlikely to be detectable at low resolution, even with perfect models; detecting them would also require perfect knowledge of the wavelength solution and LSF. Finally, although not directly related to deriving stellar parameters, another obvious advantage of high resolution is precision radial velocity measurement (cf. Fig. 9), which has been instrumental to studying exoplanet populations.
Large spectroscopic surveys such as APOGEE, GALAH and Gaia-ESO are now collecting several orders of magnitude more stellar spectra in the Milky Way than all previous surveys combined. But the key to unraveling the evolution of the Milky Way is how well we can turn stellar spectra into stellar labels – stellar parameters and many elemental abundances. At resolutions R ≲ 20,000 most spectral lines are blended. Deriving reliable stellar parameters therefore requires simultaneously fitting dozens of stellar labels in order to model the blended features. Fitting dozens of stellar labels simultaneously has only recently been demonstrated to be possible, with the aid of polynomial spectral models (PSM). In light of this new technique, in this paper we explored how the information content of stellar spectra varies as a function of resolution, and the possibility of deriving multi-element abundances from low-resolution spectra. We emphasize that our conclusions only speak to the theoretical information content of low-resolution spectra. In reality, low-resolution analyses will be further limited by systematics of the spectral models as well as by continuum normalization. Confirming our results with an analysis of observed low-resolution data will be an important next step. Our findings in this paper are summarized below:
We explore the information content in spectra covering 300–2400 nm, considering the wavelength ranges of past, on-going, and future spectroscopic surveys – APOGEE, GALAH, Gaia-ESO, 4MOST, WEAVE, Gaia RVS, RAVE, SEGUE and LAMOST – and different stellar types, from M-giants to F-dwarfs. Assuming that the underlying models (whether ab initio or data-driven) are without systematic errors, we find that optical surveys can measure 50–55 elements, and infrared surveys about 20 elements, even with low-resolution (R ≈ 6000), high-S/N spectra. Even the smaller wavelength ranges of the RAVE and Gaia RVS surveys can potentially yield up to 15 elements at high S/N.
Assuming the same exposure time per star and the same number of detector pixels (with a constant number of pixels per resolution element), the derived uncertainties on stellar labels are essentially independent of resolution for 1000 ≲ R ≲ 100,000.
Even though spectral lines are blended at low resolution, most stellar labels are not strongly correlated once R ≳ 1000. This holds generically for elements that produce detectable features at more than one location in the observed spectrum.
We demonstrate that it is possible to recover 18 labels from low-resolution (R ≈ 6000) APOGEE-like model spectra, even in the presence of a significant fraction of bad pixels, imperfections in modeling the LSF, and realistic observational uncertainties.
Deriving many precise elemental abundances from low-resolution spectra could open up new windows for Galactic archeology, in particular chemical tagging, because the latter requires a vast sample size, which is generally more challenging to obtain at high resolution. We suggest that, in order to optimize scientific returns, a strategy for future spectroscopic surveys would be to collect a small number of ultra-high-resolution (R ≈ 100,000) spectra for model calibration purposes, but to carry out the main survey at much lower resolution.
Appendix A Information content of stellar spectra
In this section, we explore the total spectral information content as a function of wavelength by adopting the idea of gradient spectra, i.e., how much a spectrum changes as we vary elemental abundances. We calculate gradient spectra for elements with atomic numbers from 3 to 99 (Li to Es), over 300–2400 nm, at R = 300,000. For the purpose of illustration, the gradient spectra are subsequently boxcar-smoothed. Despite exploring this exhaustive list of elements, we find that many elements have vanishing gradient spectra because they have no significant atomic lines. We compare the information content at two different metallicities and for four different stellar types – M-giants, K-giants, G-dwarfs, and F-dwarfs.
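The gradient-spectrum construction can be sketched as a simple central finite difference; here `synth` is a hypothetical stand-in for whatever spectrum synthesizer (ab initio or data-driven) is available, and is an assumption of this sketch rather than a function from this work.

```python
import numpy as np

def gradient_spectrum(synth, labels, elem, dX=0.1):
    """Finite-difference gradient spectrum d(flux)/d[X/H] for one element.

    `synth(labels)` is assumed to return a normalized spectrum on a fixed
    wavelength grid; `labels` is a dict of stellar labels.
    """
    up, dn = dict(labels), dict(labels)
    up[elem] = labels.get(elem, 0.0) + dX
    dn[elem] = labels.get(elem, 0.0) - dX
    return (synth(up) - synth(dn)) / (2.0 * dX)

def boxcar(y, width):
    """Boxcar-smooth a spectrum with a top-hat kernel of `width` pixels."""
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")
```

Elements with no significant lines simply return a gradient spectrum of zeros, which is how the "vanishing gradient" cases above manifest.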
Fig. 11 shows the sum of gradient spectra from all elements, illustrating the total spectral information. Since the resolution element is proportional to wavelength, a bluer wavelength has a smaller resolution element – in other words, we sample more wavelength pixels at bluer wavelengths. Taking that into account, we further divide the sum of gradients by the wavelength in Fig. 11; only the relative amount of information matters, so the absolute scale of the y-axis is unimportant. We note that the total information does not translate directly into the number of detectable elements. For example, molecules such as TiO and CN can contribute an enormous number of spectral lines, yet involve only a few elements. To overcome this shortcoming, Fig. 12 provides another view of the information content (also see bla04, for a similar analysis). We divide the wavelength range into portions of 10 nm and evaluate how many elements have detectable spectral signatures in each portion, deeming an element detectable if at least one of its spectral lines has a gradient exceeding a fixed threshold at R = 6000. The set of detectable elements can differ from portion to portion, so the total number of detectable elements is larger than the value shown on the y-axis for any single portion; we refer readers to Fig. 4 for the total number of detectable elements for each survey. The horizontal bars in these figures illustrate the wavelength ranges of various spectroscopic surveys. For 4MOST, the long bar shows the wavelength range of the low-resolution configuration, and the split short bars show the wavelength ranges of the high-resolution configuration.
The top two panels show the information content at the higher metallicity, and the bottom two panels at the lower metallicity. As expected, more information and more detectable elements are available in metal-rich stars than in metal-poor stars, because there are more spectral lines. Within each pair, the top panel includes molecular contributions and the bottom panel omits them. Each panel in Fig. 11 and Fig. 12 shows a similar monotonic increase of information toward cooler stars. The total information content of an M-giant and an F-dwarf can differ by up to two orders of magnitude, and the number of detectable elements by about 20, depending on the wavelength. It is not surprising that cooler stars carry much more information, because many spectral lines form at lower temperatures, especially through molecular contributions in the infrared. However, since most information in the infrared comes from a few important molecules, the number of elements with detectable spectral signatures is much lower there, despite the high information content: the number of detectable elements per 10 nm range is 10–30 in the optical but substantially fewer in the infrared.
Since much of the information in the infrared comes from molecules, and owing to the composite nature of molecules, this also vividly demonstrates the importance of methods like the PSM that fit all stellar labels simultaneously. Although the optical contains more information, extinction is also much more significant there, so optical surveys are typically limited to the solar neighborhood; infrared surveys like APOGEE are better able to cover a larger region of the Milky Way. Clearly, depending on the science goals, the wavelength range of a spectroscopic survey should be carefully chosen. Interestingly, at optical wavelengths there are about 10–30 detectable elements per 10 nm range, showing that surveys restricted to small wavelength ranges, such as GALAH, can still easily measure more than 30 elements. Even for surveys like RAVE or Gaia RVS that have very limited wavelength ranges, the information content suggests that, with robust models, we should be able to detect up to about 15 elements. Finally, at the bluest wavelengths, spectral lines in M-giants become so dense that they form an absorption trough with essentially zero stellar flux. The gradient spectra of most elements vanish in this trough, and as a result both the information content and the number of elements with detectable spectral signatures drop precipitously for M-giants at the bluest wavelengths.
Appendix B Stellar label uncertainty as a function of spectral resolution
We show in Section 3 and in Fig. 2 that, given the same exposure time and the same number of detector pixels, stellar label uncertainties are largely independent of spectral resolution beyond R ≈ 1000. The gain from higher S/N and a larger wavelength range at low resolution compensates for the trend of increasing uncertainty with decreasing resolution that holds at fixed S/N and wavelength range. In this appendix, we study the absolute uncertainties of a few stellar labels to demonstrate this result in more detail. As in Section 3, we adopt an anchor point at R = 6000, where we assume each survey's full wavelength range and a S/N of 100 per wavelength pixel. For other resolutions, we assume a wavelength range inversely proportional to the resolution and scale the per-pixel S/N according to photon noise, so that the same number of detector pixels and the same exposure time are consumed. We also assume that spectral information is uniformly distributed over the wavelength range, so that a larger/smaller wavelength range changes the uncertainty by the square root of the range ratio.
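The scaling argument can be checked numerically with a toy Fisher-information calculation: unresolved lines of fixed equivalent width, a fixed detector pixel budget (so wavelength coverage ∝ 1/R), and photon-limited S/N per pixel. All numbers below are illustrative, not survey parameters.

```python
import numpy as np

def sigma_label(R, n_pix=4000, ew=0.005, line_sep=0.5):
    """Cramér-Rao uncertainty of a single abundance label (toy model).

    Unresolved lines of fixed equivalent width `ew` (nm) every `line_sep` nm,
    sampled with `n_pix` detector pixels (2 per resolution element) starting
    at 500 nm; photon-limited noise for a fixed total exposure time.
    """
    lam0 = 500.0
    dpix = lam0 / R / 2.0                    # pixel width: coverage ∝ 1/R
    wave = lam0 + dpix * np.arange(n_pix)
    sig = (lam0 / R) / 2.355                 # instrumental Gaussian width
    depth = ew / (sig * np.sqrt(2 * np.pi))  # unresolved line: depth ∝ R
    grad = np.zeros(n_pix)                   # gradient spectrum d(flux)/d[X/H]
    for c in np.arange(wave[0], wave[-1], line_sep):
        grad -= depth * np.exp(-0.5 * ((wave - c) / sig) ** 2)
    snr = 100.0 * np.sqrt(dpix)              # photon noise: S/N ∝ sqrt(dpix)
    return 1.0 / np.sqrt(np.sum((grad * snr) ** 2))
```

In this toy model `sigma_label(2000)` and `sigma_label(20000)` agree to within a few percent: per line, the deeper profiles at high R gain a factor R² of information while the fainter per-pixel flux loses a factor R, and the 1/R narrower coverage removes the remaining factor R.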
Fig. 13 considers the wavelength ranges of the 4MOST survey (optical) and the APOGEE survey (infrared, 1500–1700 nm), adopting spectra of solar-metallicity K-giants. We show only a few stellar labels for the purpose of illustration; although not shown, the other stellar labels follow roughly the same trend. The green and red filled symbols show the survey resolutions of 4MOST (in its low-resolution configuration) and APOGEE. Since LAMOST and SEGUE share a similar wavelength range with 4MOST, we overplot their survey resolutions as red hollow symbols.
Besides the weak dependence of uncertainty on spectral resolution, Fig. 13 also shows that 4MOST attains smaller uncertainties than APOGEE. However, 4MOST also has a larger wavelength range than APOGEE and more wavelength pixels per unit wavelength, because resolution elements are narrower at bluer wavelengths. Furthermore, red giants are brighter in the infrared than in the optical; for example, the mean flux of a K-giant in the APOGEE wavelength range is about twice the mean flux in the 4MOST wavelength range, so for the same exposure time the S/N of the APOGEE survey is about √2 better. For a fairer comparison, the green dashed and dotted lines scale the APOGEE uncertainties to account for these differences. Since spectral information adds in quadrature, the dashed lines scale the APOGEE uncertainties shown by the green solid lines by the square root of the ratio of the number of pixels between 4MOST and APOGEE; the dotted lines further scale the uncertainties by a factor of √2 to account for the brighter infrared flux. Fig. 13 shows that, even compared to these scaled uncertainties, 4MOST still achieves better precision, demonstrating that the optical indeed carries more spectral information than the infrared, consistent with our assessment in Appendix A.
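The two rescalings amount to simple arithmetic; the numbers below are placeholders for illustration, not the surveys' actual pixel counts, fluxes, or uncertainties.

```python
import numpy as np

sigma_apogee = 0.05    # illustrative APOGEE label uncertainty, dex (placeholder)
n_pix_4most = 40000    # assumed number of wavelength pixels (placeholder)
n_pix_apogee = 8000    # assumed number of wavelength pixels (placeholder)
flux_ratio = 2.0       # K-giant roughly 2x brighter in the APOGEE band

# Information adds in quadrature, so uncertainty scales as 1/sqrt(N_pix)...
sigma_scaled = sigma_apogee * np.sqrt(n_pix_apogee / n_pix_4most)
# ...and as the inverse of the per-pixel S/N, which improves by sqrt(flux_ratio)
# for the same exposure time given the brighter infrared flux:
sigma_scaled /= np.sqrt(flux_ratio)
```

With these placeholder inputs the scaled uncertainty is roughly a third of the raw one; the comparison in Fig. 13 is whether 4MOST still beats even this scaled value.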
Fig. 14 shows similar results for the wavelength ranges of other spectroscopic surveys. Both figures show a weak dependence of uncertainty on resolution beyond R ≈ 1000, demonstrating that this trend is generic and independent of wavelength range. Although not shown in this appendix (cf. Fig. 2), we have verified that the trend also holds for other stellar types and metallicities. Fig. 14 further illustrates that surveys with narrower wavelength ranges, e.g., RAVE, Gaia RVS, and GALAH, tend to deviate more from this trend: with fewer wavelength pixels, the stellar labels become more degenerate, producing deviations from the flat trend seen at higher resolutions.
Throughout this study, we often assume a S/N of 100 per pixel. But we emphasize that the choice of S/N plays no role in most of our discussion, because the theoretical uncertainty scales exactly as the inverse of the S/N (cf. Eq. 1). Since we focus on relative uncertainties in this study, the contribution of the S/N cancels out, and our general conclusions are independent of S/N.
Appendix C Correlation of stellar labels as a function of spectral resolution
In Section 3.4 and Fig. 6, we studied the global distribution of correlations among all detectable stellar labels. Here we show more detail. Figs. 15–17 show each of the pairwise correlations that comprise the global distribution, assuming the APOGEE wavelength range, solar metallicity, and K-giants. We define an element to be detectable if its theoretical uncertainty at R = 24,000 is below a fixed threshold in dex; this definition yields a total of 23 stellar labels (20 detectable elements). Figs. 15–17 show the pairwise correlations assuming R = 100, 1000, and 24,000, respectively – each panel shows the correlation of one label pair. We shade each panel according to the correlation value to guide the eye, adopting a logarithmic color scheme to increase the contrast, since most label pairs have moderate correlations between 0.2–0.4. We also tested the other wavelength ranges from different surveys and found that the results remain qualitatively the same.
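The pairwise correlations can be computed directly from the inverse of the Fisher matrix built from gradient spectra, in the spirit of Eq. 1; the gradients in the test below are toy inputs, not real spectra.

```python
import numpy as np

def label_correlations(grad_matrix, snr=100.0):
    """Pairwise correlations of stellar labels from the Cramér-Rao bound.

    `grad_matrix` has shape (n_labels, n_pix): one gradient spectrum per
    label.  The Fisher matrix is F_ij = snr^2 * sum_pix (dF/dl_i)(dF/dl_j);
    its inverse is the label covariance, normalized here to correlations.
    """
    fisher = snr ** 2 * grad_matrix @ grad_matrix.T
    cov = np.linalg.inv(fisher)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```

Two labels with disjoint gradient spectra come out uncorrelated, while labels that share pixels pick up (anti-)correlations, which is exactly the behavior seen when going from R = 100 to R ≳ 1000.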
Fig. 15 shows that almost all stellar labels are degenerate at R = 100. At this resolution, there are only a small number of wavelength pixels in the APOGEE wavelength range, and most stellar labels contribute to the same set of pixels. However, upon increasing the resolution to R = 1000, as shown in Fig. 16, most labels are already not strongly correlated. Only stellar labels that contribute to most pixels – such as Fe, C, N, and O – retain strong correlations. Going to even higher resolution, e.g., R = 24,000 as shown in Fig. 17, no longer decreases the correlations significantly: labels with consistent contributions to all pixels continue to correlate in most cases even at the highest resolution. Although not shown, we find that the correlations at R = 100,000 remain practically the same as at R = 24,000. Finally, we note that the correlations evident at high resolution are often missed by stellar characterization methods that do not solve for all parameters at once, e.g., classical equivalent-width techniques, or fitting individual elements in selected windows of strong lines with the underlying stellar parameters held fixed.