Toward a New Kind of Asteroseismic Grid Fitting

M. Gruberbauer and D. B. Guenther
Institute for Computational Astrophysics, Department of Astronomy and Physics, Saint Mary's University, Halifax, NS B3H 3C3, Canada

T. Kallinger
Instituut voor Sterrenkunde, K.U. Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
Institute for Astronomy, University of Vienna, Türkenschanzstrasse 17, 1180 Vienna, Austria
Abstract

Recent developments in instrumentation (in particular the Kepler and CoRoT satellites) provide a new opportunity to improve the models of stellar pulsations. Surface layers, rotation, and magnetic fields imprint erratic frequency shifts, trends, and other non-random behavior on the frequency spectra. As our observational uncertainties become smaller, these effects become increasingly important and difficult to deal with using standard fitting techniques. To improve the models, new ways to compare their predictions with observations need to be conceived. In this paper we present a completely probabilistic (Bayesian) approach to asteroseismic model fitting. It allows for varying degrees of prior mode identification, corrections for the discrete nature of the grid, and, most importantly, implements a treatment of systematic errors, such as the "surface effects." It removes the need to apply semi-empirical corrections to the observations prior to fitting them to the models, and it results in a consistent set of probabilities with which the model physics can be probed and compared. As an example, we show a detailed asteroseismic analysis of the Sun. We find a most probable solar age, including the pre-main-sequence phase, of 4.591 billion years, and initial element mass fractions $X_0$, $Y_0$, and $Z_0$ consistent with recent asteroseismic and non-asteroseismic studies.

1. Introduction

The success of the recent space missions CoRoT and Kepler, designed for the discovery of exoplanets and the analysis of stellar pulsation, has produced a large number of high-quality light curves (Chaplin et al., 2010). With these data sets, obtained over long time bases of several months, we are able to detect variability with semi-amplitudes down to a few parts per million. These observations have now firmly established the existence of solar-type pulsation in a large number of solar-like and red-giant stars. Moreover, observations of an unprecedented number of $\delta$ Scuti stars and other types of pulsators have also revealed rich mode spectra.

These data are now causing a paradigm shift for many topics in stellar astrophysics. In particular, the determination of fundamental stellar parameters, and any inferences regarding the physics of stellar interiors, have for a long time been restricted to testing theoretical models using classic observables such as photometric indices or spectroscopic data. Even though these methods have become more advanced, for instance by applying complex Bayesian methods to determine stellar ages (Pont & Eyer, 2004; Jørgensen & Lindegren, 2005) and to evaluate competing models (Takeda et al., 2007; Bazot et al., 2008), the value of the additional information provided by pulsation modes is tremendous, as they directly probe the whole star. Already, the asteroseismic community is successful in extracting general characteristics of the mode spectra for many different types of stars (e.g., Mathur et al., 2010; Kallinger et al., 2010c) and also in devising promising tools for a comparative interpretation of the observations (e.g., Bedding & Kjeldsen, 2010). Average mode parameters, such as the large and small frequency separations and the frequency of maximum power, have been shown to successfully constrain stellar parameters, although certain correlations remain a source of uncertainty (see, e.g., Kallinger et al., 2010b; Huber et al., 2011; Gai et al., 2011). These have been incorporated into current advanced probabilistic pipelines to investigate stellar model grids (Quirion et al., 2010) and have already been applied to recent observations (Metcalfe et al., 2010). The next step in improving our knowledge about stellar interiors is to analyze individual pulsation modes in an equally rigorous way, to see where our models agree or disagree.

In the past, $\chi^2$-minimization techniques (Guenther & Brown, 2004), or equivalent Bayesian analyses (e.g., Kallinger et al., 2010a), have been introduced to find the pulsation model that most closely reproduces the observed frequencies within a large and dense grid of models. The Bayesian analysis, in this context, only provides an additional framework for constraining solutions to models that match our prior knowledge about the stars' fundamental parameters. Due to the rich information provided by the pulsation frequencies, these approaches should be successful in many cases, which is why they are also being applied to the most recent Kepler data sets. For instance, Metcalfe et al. (2010) test various approaches from different modelers with different methods that actually use the individual frequencies. However, there are currently (at least) three major problems when applying these techniques.

Stellar rotation, at all but the slowest rotation speeds, has been shown to produce rotational splittings which are incompatible with the traditional linear approximations. It even perturbs the values of the axisymmetric frequencies (e.g., see Deupree & Beslin, 2010, and references therein). In order to correctly take this into account, the rotation speed as a function of stellar depth needs to be known, and extensive computations would be necessary to do these effects justice. Given the large variety of possible rotation profile characteristics, this would greatly expand the dimensionality and size of the pulsation model grid. This implies currently insurmountable computational expenses for the types and sizes of grids that are necessary for a comprehensive asteroseismic analysis of many stars.

Figure 1.— Echelle diagram of solar p modes taken from Broomhall et al. (2009) (filled circles) and an appropriate solar model constructed using YREC. The higher-order model frequencies deviate increasingly from the observations due to deficiencies in modeling the upper stellar layers. The systematic errors of the models are much larger than the random observational uncertainties.

For stars with a convective envelope, model frequencies at high radial orders differ from observations due to problems in modeling the outer layers (see Figure 1). These so-called surface effects can be compensated by looking at ratios of frequency differences (Roxburgh, 2005), or by “correcting” the observed frequencies through calibration of the surface effects seen for the Sun as proposed by Kjeldsen et al. (2008). It is likely that the surface correction as calibrated for the Sun is not universally applicable, and evidence for this has been mounting (e.g., Bedding et al., 2010). Moreover, neglecting (or correcting for) the surface effects in the observed frequencies is only reasonable when studying properties of the star for which the outer layers are unimportant. However, if we want the theoretical models to more closely reflect reality, we need to include more and better physics to bring the computed frequencies closer to the (un-corrected) observed ones.

Furthermore, the fact that static asteroseismic grids can only have a finite resolution in parameter space is often neglected. If the error bars of the observed frequencies are small compared to the differences between calculated frequencies in adjacent grid points, the likelihood of having a model in the grid that corresponds to the best model one’s code could deliver decreases rapidly. The problem of finding the “true” model and the actual uncertainties with respect to the grid becomes apparent. Even grids with adaptive resolution have the same problem in principle, as the decision for further refining the resolution of a particular region in parameter space must always depend on a number of discrete grid points. This problem is much more severe if our aim is to calculate probabilities (or some summary statistics) to compare different model grids.

In this paper we present a new approach to asteroseismic model grid fitting. Our goal is to find a new way of putting our model physics to the test that can handle all of the aforementioned difficulties. Even restricted to models that are unable to produce all the details of the observations, we want to know which models are most “correct” (i.e., consistent with appropriate fundamental parameters and physics), and how well the solution is constrained. We show how to quantitatively assess our model grids as a function of the observational uncertainties, the uncertainties of the calculated frequencies, and our general prior knowledge about the star and possible shortcomings of our models.

2. Bayesian treatment of systematic errors

2.1. Basics of Bayesian inference

Bayes' theorem, applied to the problem of inference, states that the probability of a particular hypothesis after obtaining new data (i.e., the posterior) is proportional to the probability of the hypothesis prior to obtaining the new data (i.e., the prior) times the likelihood of obtaining the new data under the assumption that the hypothesis is true (i.e., the likelihood function). This approach to inference is derived from the product and sum rules of probability theory, which have been shown to be necessary and sufficient for consistent, quantitative logical reasoning (see Jaynes & Bretthorst, 2003). (Strictly speaking, Bayes' theorem is only one result that derives from these rules. Consistent use of Bayes' theorem, in particular the assignment of the various terms in Equation (1), also requires knowledge of its origin and consistent application of the product and sum rules. For the sake of brevity, however, we will simply call our approach "Bayesian" rather than "based on probability theory as extended logic".)

In this paper, we stay as close as possible to the general notation used in Jaynes & Bretthorst (2003) or Gregory (2005). We start with Bayes' theorem applied to the problem of comparing observations with the predictions of a model $M$. If the predictions of the model are governed by a set of parameters $\vec{\theta}$, and we define the observations to be represented by the symbol $D$ (for data), Bayes' theorem is commonly formulated by expressing the posterior probability as

$p(\vec{\theta} \,|\, D, M, I) = \dfrac{p(\vec{\theta} \,|\, M, I)\; p(D \,|\, \vec{\theta}, M, I)}{p(D \,|\, M, I)}$ (1)

The symbol $I$ stands for the prior information about the problem that is investigated. The first term in the numerator of Equation (1), $p(\vec{\theta} \,|\, M, I)$, is the prior probability of a particular set of parameter values $\vec{\theta}$, given the model and our prior information about the problem. It is independent of any new data that are to be analyzed. The second term in the numerator, $p(D \,|\, \vec{\theta}, M, I)$, is called the likelihood. It gives the likelihood of obtaining the observed data under the assumption that the predictions of model $M$ are correct, given the particular choice of its parameter values $\vec{\theta}$. The denominator in Equation (1), $p(D \,|\, M, I)$, is called the global likelihood, or evidence, and is the sum (or integral) of the numerator over the whole parameter space of model $M$. It therefore acts as a normalization constant. Most importantly, if the prior probabilities are adequately normalized, it also represents the likelihood of obtaining the data given the whole model $M$, independent of the particular choice of $\vec{\theta}$. Thus, it can be used as a likelihood for comparisons among different alternative models.

More details on the application of Bayes’ theorem, in particular with respect to data analysis in astronomy, can be found in Gregory (2005).
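As an illustration (this sketch is ours, not code from any published pipeline), Equation (1) can be evaluated directly on a discrete grid of parameter values; working in log space avoids numerical underflow:

import numpy as np

def posterior_and_evidence(log_prior, log_like):
    # Equation (1) on a discrete parameter grid.
    # log_prior: log p(theta | M, I) at each grid point
    # log_like:  log p(D | theta, M, I) at each grid point
    log_num = log_prior + log_like               # numerator of Eq. (1)
    log_evidence = np.logaddexp.reduce(log_num)  # denominator: sum over theta
    return np.exp(log_num - log_evidence), log_evidence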

2.2. Systematic errors in the Bayesian framework

One of the strengths of the Bayesian framework is that a parameter $\phi$, known to be necessary to describe a model $M$, can be marginalized by applying the sum rule. In the case of continuous parameters, the sum turns into an integral, and by integrating the full posterior over the parameter range of $\phi$, one obtains the marginal posterior

$p(\vec{\theta} \,|\, D, M, I) = \displaystyle\int p(\vec{\theta}, \phi \,|\, D, M, I)\; d\phi$ (2)

The marginal posterior retains the overall effects of including the parameter $\phi$ in the model, but is independent of any particular choice of its value. In other words, $\phi$ is "removed" from the detailed analysis. This is similar to what is done when calculating the evidence in the denominator of Equation (1). The only difference is that the evidence is the marginal likelihood over all parameters of the model, weighted by the prior.

The reason this is useful is that if the data and the model are known to show systematic differences, like shifts or trends, such "systematic errors" can simply be encoded by introducing additional parameters to the model $M$, so that $M$ is able to model these effects as well. By subsequently marginalizing over these "fudge parameters", one is then able to perform a standard Bayesian analysis without any need for knowing the exact value of the systematic error(s). However, even though the exact value is unknown, the presence of the error is being considered in the evaluation of the posterior probabilities. Furthermore, an increasing number of "fudge parameters" comes at a cost, because it potentially decreases the evidence for the model due to the increase in prior volume. As mentioned in Section 2.1, the evidence is used as the value for the likelihood of obtaining the data in Bayesian model comparison. It is therefore possible to compare models with and without "fudge parameters". Improved models that do not need them, but are able to explain the observations just as well, will be favored.

3. Toward a Bayesian solution to asteroseismic model fitting

3.1. Review and problems of the standard approach

The general problem of asteroseismic model fitting is to match observed frequencies $f_{\mathrm{obs},i}$ to frequencies $f_{\mathrm{mod},j}$ calculated from models. If the observed frequencies have individual uncertainties $\sigma_{\mathrm{obs},i}$, and the model frequencies have random uncertainties $\sigma_{\mathrm{mod},j}$, then a $\chi^2$-statistic can be calculated according to

$\chi^2 = \dfrac{1}{N} \displaystyle\sum_{i=1}^{N} \dfrac{\left(f_{\mathrm{obs},i} - f_{\mathrm{mod},i}\right)^2}{\sigma_{\mathrm{obs},i}^2 + \sigma_{\mathrm{mod},i}^2}$ (3)

Searching a large grid of stellar models with fundamental parameters close to those estimated for the observed star will produce a minimum in $\chi^2$ (= best-fit model). In addition, uncertainties can be estimated from the change in $\chi^2$ as the distance in parameter space to the best fit increases. If the frequencies are calculated with adequate stellar evolution and pulsation codes, it should be possible to infer details about the stellar interior and to obtain precise fundamental parameters.
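For reference, a minimal implementation of this $\chi^2$ grid search might look as follows (a sketch under the assumption of a fixed one-to-one mode assignment; all names are ours):

import numpy as np

def chi2(f_obs, sig_obs, f_mod, sig_mod):
    # Equation (3): mean squared frequency mismatch, weighted by the
    # observational and model uncertainties added in quadrature.
    return np.mean((f_obs - f_mod)**2 / (sig_obs**2 + sig_mod**2))

# Grid search: the model minimizing chi2 is declared the best fit, e.g.,
# best = min(grid, key=lambda m: chi2(f_obs, sig_obs, m.freqs, m.freq_err))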

In order to consistently encode prior information about the fundamental parameters and other model properties, and to make use of all the additional advantages that come with the Bayesian approach (all of which will become clear in the next section), it is much easier to perform the model fitting using probabilities. Assuming that the random uncertainties $\sigma_{\mathrm{obs},i}$ and $\sigma_{\mathrm{mod},j}$ are compatible with normal distributions, one can define

$\sigma_{ij}^2 = \sigma_{\mathrm{obs},i}^2 + \sigma_{\mathrm{mod},j}^2$ (4)

This leads to the likelihood of observing the data $D_i$ (= the specific value of $f_{\mathrm{obs},i}$), given a single observed and calculated frequency:

$p(D_i \,|\, S_{ij}, M, I) = \dfrac{1}{\sqrt{2\pi}\,\sigma_{ij}} \exp\!\left(-\dfrac{\left(f_{\mathrm{obs},i} - f_{\mathrm{mod},j}\right)^2}{2\,\sigma_{ij}^2}\right)$ (5)

Here, $S_{ij}$ stands for the proposition "The observed mode $i$ corresponds to the calculated mode $j$." (Although the explicit notation seems clumsy at first glance, it is actually one of the major assets of the Bayesian approach. It shows exactly which propositions we are evaluating, and under which conditions the probabilities are calculated. Slightly different propositions or conditions can yield vastly different results. If the notation is explicit, there are no hidden variables or assumptions.) Naturally, we want our models to reproduce all observed frequencies. Assuming that each observed frequency is a statistically independent data point, this leads to a product for the likelihood of obtaining all observed frequency values given that the model is correct:

$p(D \,|\, M, I) = \displaystyle\prod_{i=1}^{N} p(D_i \,|\, S_{ij}, M, I)$ (6)

Here, $D$ stands for the complete set of observed frequencies and their uncertainties. This can then be incorporated into the usual framework for Bayesian inference.
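In code, Equations (4)-(6) amount to a Gaussian log-likelihood summed over modes (again a sketch with our own naming, assuming a unique assignment of observed to model frequencies):

import numpy as np

def log_like_mode(f_obs, sig_obs, f_mod, sig_mod):
    # Equations (4)-(5): Gaussian likelihood for one observed/model pair.
    var = sig_obs**2 + sig_mod**2
    return -0.5 * ((f_obs - f_mod)**2 / var + np.log(2.0 * np.pi * var))

def log_like_model(f_obs, sig_obs, f_mod, sig_mod):
    # Equation (6): product over independent modes (sum in log space).
    return np.sum(log_like_mode(f_obs, sig_obs, f_mod, sig_mod))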

Alas, both of the straightforward approaches mentioned above suffer from the following problems:

  1. The most appropriate model is not necessarily the one that minimizes Equation (3) or maximizes Equation (6). There are many possible scenarios where this would be the case (e.g., due to surface effects, stellar activity, magnetic field effects, rotational effects). Straightforward application of the formalism above will then lead to wrong or nonsensical results for both the best fit and the derived uncertainties. Even worse, this would propagate into our assessment of the model physics that were used to produce the models.

  2. In the case of such systematic differences, we need to take into account that for each observed pulsation frequency, multiple model frequencies are possible candidates (not necessarily only the closest one). This is particularly problematic in cases where no prior mode identification is available.

  3. As the observational uncertainties decrease, the contrast in $\chi^2$ (and even more so the contrast in probabilities) between different models increases. If the model that minimizes/maximizes Equation (3)/Equation (6) is not the correct model due to missing physics, this increase in fitting contrast is misleading and unwarranted.

  4. In static grids the finite grid resolution increases the risk of missing the most adequate model that the code could produce. If there are systematic differences between even the most adequate model and the observations, the "contrast enhancement effect" will be magnified. For the same reason, adaptive grids run into the same problem and will miss the correct region of parameter space to resolve more finely in the first place.

As a consequence of all these shortcomings, it is clear that a method is needed that considers the possibility of systematic differences. It is also mandatory to consider the finite resolution of our model grids. Solutions to these problems are presented in the following sections.

3.2. The argument for probabilities

There are obvious benefits to quantifying the best fit and the uncertainties in terms of probabilities. With probabilities for each specific model, we automatically obtain probability distributions for each parameter of the model itself. We can furthermore consistently compare different grids and see which set of input physics is more probable, given all our current information and the data.

However, there are much stronger arguments for a probabilistic approach. Marginalization allows us to consistently treat nuisance parameters, while the sum and product rules allow us to clearly formulate the question we are asking. This question is "Given the observed frequencies, our knowledge about the star, and the model physics, which model(s) best represent the star in terms of its fundamental parameters and general physical properties as probed by the pulsation modes?" In reality, this general question has to be further refined as we encounter more complicated situations like: "We have model frequencies that could potentially show negative or positive systematic offsets, or no such offset at all, when compared to our observations. They could be influenced by rotation or actually be rotationally split frequencies themselves. They could be bumped modes. Given all of these possibilities, which model is the most adequate one, and how well is the solution constrained?" From the viewpoint of probability theory, the only way to treat such a set of possibilities and get meaningful answers is to use the sum rule and product rule, as we will show in the next section.

3.3. Ambiguous mode identification

As a first improvement to the general approach of asteroseismic model fitting, we can invoke the sum rule to consistently consider uncertainties (or even ignorance) in our mode identification. In essence, if there is no unique proposition $S_{ij}$, Equation (6) changes to

$p(D \,|\, M, I) = \displaystyle\prod_{i=1}^{N} p(D_i \,|\, M, I)$ (7)

with

$p(D_i \,|\, M, I) = \displaystyle\sum_{j} p(S_{ij} \,|\, M, I)\; p(D_i \,|\, S_{ij}, M, I)$ (8)

Here the sum over the index $j$ means that all possible and mutually exclusive assignments of one observed mode to a number of calculated frequencies have to be taken into account as an "or" proposition. (Hereafter, a possibly ambiguous frequency assignment will always be denoted as $S_{ij}$.) Note that due to the product rule of probability, the terms in each sum now include the conditional prior probabilities $p(S_{ij} \,|\, M, I)$. These have to be normalized so that $\sum_j p(S_{ij} \,|\, M, I) = 1$. The most conservative assignment is to assign equal probabilities to each possible scenario. However, if more information is available (e.g., a mode could be identified as having one of two possible spherical degrees, with specific probabilities for both cases, as found by some peak-bagging program), this can easily be encoded at this stage.

The end result is a product of weighted sums of probabilities, where the weights are given by the respective prior probabilities. (A common misconception is that these "priors" are only there to allow us to incorporate prior information. In reality, they are formally required by the product rule and ensure that the result of Equation (7) is always properly normalized.) This product is the correctly normalized likelihood for obtaining the data, given the proposition that any one of the proposed scenarios is correct. Note that if there is an unambiguous assignment for every observed frequency, each prior probability $p(S_{ij} \,|\, M, I) = 1$ and Equation (7) simplifies to Equation (6). Now that we have included our uncertainties concerning the assignment of model frequencies to observed frequencies, we will deal with uncertainties in the validity of the model frequencies themselves in the next section.
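A sketch of Equations (7) and (8) follows (our own code): for each observed mode, the Gaussian likelihoods of all candidate model frequencies are summed, weighted by normalized assignment priors; a uniform row corresponds to the most conservative assignment described above.

import numpy as np

def log_like_ambiguous(f_obs, sig_obs, f_mod, sig_mod, assign_prior):
    # Equations (7)-(8) for ambiguous mode identification.
    # assign_prior[i, j] = p(S_ij | M, I); each row must sum to 1 over the
    # candidate model modes j allowed for observed mode i (zero elsewhere).
    log_total = 0.0
    for i in range(len(f_obs)):
        var = sig_obs[i]**2 + sig_mod**2          # Eq. (4) per candidate j
        like_ij = np.exp(-0.5 * (f_obs[i] - f_mod)**2 / var) \
                  / np.sqrt(2.0 * np.pi * var)
        log_total += np.log(np.sum(assign_prior[i] * like_ij))  # Eq. (8)
    return log_total                               # log of Eq. (7)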

3.4. Treatment of systematic errors

As a next step, we now show how to treat the problem of imperfect models. As mentioned before, applying standard techniques that rely on minimizing the quadratic differences between the observations and the models will give incorrect results if systematic differences exist. The alternative of correcting for such imperfections prior to modeling is also undesirable if the correction is not known to be universally applicable.

To treat any systematic deviation of the model frequencies due to unmodeled physical effects, we simply expand the models by considering an additional systematic error parameter $\Delta_j$ for each tested frequency. The aim is to construct new values to compare with the observations according to

$f'_{\mathrm{mod},j} = f_{\mathrm{mod},j} + \beta\,\Delta_j$ (9)

Here, $\Delta_j$ is the absolute value of the systematic error. $\beta$ takes the value $+1$ or $-1$ and determines whether the model frequency is expected to be systematically lower or higher than the observed frequency. To keep our notation from occupying too much space, we will implicitly assume the value of $\beta$ to be constant throughout the following derivations, and attribute this to our prior information $I$. $\Delta_j$ is an unknown parameter, but as long as its lower and upper boundaries can be roughly estimated, it can be treated fully consistently in the probabilistic framework.

In the following, we will again work out an example of only one observed and calculated frequency. Therefore, for the derivation the assignment $S_{ij}$ is unique. We will then provide the extension to multiple frequencies and ambiguous mode identifications.

Using the new parameters, the equivalent of Equation (5) is

$p(D_i \,|\, \Delta_j, S_{ij}, M_\Delta, I) = \dfrac{1}{\sqrt{2\pi}\,\sigma_{ij}} \exp\!\left(-\dfrac{\left(f_{\mathrm{obs},i} - f_{\mathrm{mod},j} - \beta\,\Delta_j\right)^2}{2\,\sigma_{ij}^2}\right)$ (10)

Here the symbol $M_\Delta$ simply denotes the model $M$ augmented by the new parameter $\Delta_j$. Self-evidently, the product rule again requires that we introduce a prior probability $p(\Delta_j \,|\, M_\Delta, I)$. This can either encode prior information about the expected behavior of the error, or be assigned simply from considerations of symmetry. Again it is required that the prior integrates to unity, $\int p(\Delta_j \,|\, M_\Delta, I)\, d\Delta_j = 1$.

It would now be possible to try to find the $\Delta_j$ that maximizes Equation (10). However, this is completely irrelevant for our needs. In the case of multiple observed frequencies it would also quickly lead to a high-dimensional parameter space that we are not interested in navigating. Instead, we are interested in finding the probabilities of the models $M_\Delta$. To do this it is necessary to integrate out the parameter $\Delta_j$ which we have just introduced. We obtain the marginal likelihood

$p(D_i \,|\, S_{ij}, M_\Delta, I) = \displaystyle\int p(\Delta_j \,|\, M_\Delta, I)\; p(D_i \,|\, \Delta_j, S_{ij}, M_\Delta, I)\; d\Delta_j$ (11)

This integral naturally depends on the shape of the prior probability distribution for $\Delta_j$, and can easily be evaluated numerically. (For several simple shapes, such as the beta prior introduced in the next section, analytical solutions also exist.) It represents the likelihood of obtaining the value of the observed frequency, given that $M_\Delta$ predicts a frequency $f_{\mathrm{mod},j}$ but that there is the possibility of a systematic difference $\Delta_j$ between $f_{\mathrm{obs},i}$ and $f_{\mathrm{mod},j}$. Furthermore, it is fundamentally constrained and properly weighted by the prior we assigned. This result is now easily extended to multiple modes and ambiguous mode identifications. Equation (7) becomes

$p(D \,|\, M_\Delta, I) = \displaystyle\prod_{i=1}^{N} p(D_i \,|\, M_\Delta, I)$ (12)

and

$p(D_i \,|\, M_\Delta, I) = \displaystyle\sum_{j} p(S_{ij} \,|\, M_\Delta, I)\; p(D_i \,|\, S_{ij}, M_\Delta, I)$ (13)

In summary, we have to calculate a product of weighted sums of integrals of the form of Equation (11), where the summation is performed over every possible assignment $S_{ij}$.
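In practice, the integral of Equation (11) can be evaluated on a simple quadrature grid. The following sketch (our code, with assumed names) accepts any normalized prior density over $0 \le \Delta_j \le \Delta_{\max}$ and a fixed sign $\beta$:

import numpy as np

def marginal_like(f_obs, f_mod, sig, beta, prior_pdf, d_max, n=2000):
    # Equation (11): marginal likelihood for one mode, with the systematic
    # error Delta integrated out numerically over [0, d_max].
    # prior_pdf: callable returning the normalized prior density p(Delta|...)
    # beta: +1 or -1, the assumed sign of the systematic error
    d = np.linspace(0.0, d_max, n)
    like = np.exp(-0.5 * (f_obs - f_mod - beta * d)**2 / sig**2) \
           / np.sqrt(2.0 * np.pi * sig**2)        # Eq. (10) for each Delta
    return np.trapz(prior_pdf(d) * like, d)       # prior-weighted integral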

Note that even when we choose to consider systematic deviations, we usually do not expect them to be significant for all frequencies. For good models some frequencies should already match well “right out of the box”. In particular, this is true for all frequencies in the idealized case where we have (finally) found a way to correctly model all the effects that previously caused systematic deviations.

One might think that this is taken care of by setting $\Delta_j = 0$. However, unless the prior is a $\delta$ function at $\Delta_j = 0$, it is much more likely that $\Delta_j \neq 0$. This means that a priori a model will be preferred which shows at least a small deviation from the observations, depending on the observational uncertainties and the steepness of the prior. The limiting case, however, the $\delta$ function, corresponds to a different model altogether, which is simply the standard model without systematic deviations, $M$. Thanks to the sum rule, there is an elegant solution for taking this alternative into account.

For the mutually exclusive logical propositions $M_\Delta$ and $M$ (note that from a logical standpoint, even if $\Delta_j = 0$, $M_\Delta$ is still a different model than $M$ because the prior is not a $\delta$ function; therefore, they are always mutually exclusive) we can calculate

$p(D_i \,|\, M_\Delta + M, I) = p(M_\Delta \,|\, I)\; p(D_i \,|\, M_\Delta, I) + p(M \,|\, I)\; p(D_i \,|\, M, I)$ (14)

Note that here $M_\Delta + M$ means "$M_\Delta$ or $M$ is true". This is the likelihood of observing the frequency value $f_{\mathrm{obs},i}$, given that a systematic deviation either does or does not exist. The principle of indifference, as the most conservative approach for the prior probabilities, obviously demands $p(M_\Delta \,|\, I) = p(M \,|\, I) = 0.5$, but if more information is available, it can be encoded here. This result is also easily generalized to the case of multiple frequencies and ambiguous mode identification.
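In code, Equation (14) is simply a two-component mixture (a minimal sketch, with the indifference prior of 0.5 as the default weight):

def like_either(like_with_error, like_without_error, w_delta=0.5):
    # Equation (14): likelihood that a systematic deviation either does
    # (model M_Delta, weight w_delta) or does not (model M) exist.
    return w_delta * like_with_error + (1.0 - w_delta) * like_without_error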

3.5. The choice of the prior for $\Delta$

A very important detail to consider when extending the models with systematic error parameters is their prior probabilities $p(\Delta_j \,|\, M_\Delta, I)$. There is a basic choice between two possibilities. The first is to use uninformative (or ignorance) priors or, alternatively, maximum entropy priors. Uninformative priors can be derived from arguments of invariance under specific transformations, while maximum entropy priors satisfy the maximum entropy criterion for a given set of constraints. The other possibility is to use priors derived from heuristic or physical arguments.

The specific forms of the prior probabilities of $\Delta_j$ are part of the model that is evaluated, as indicated by the notation. (The prior is described by $p(\Delta_j \,|\, M_\Delta, I)$ rather than, e.g., $p(\Delta_j \,|\, I)$.) They are not necessarily "prior" in the sense of "before obtaining observations", but conditional probabilities required for the correct normalization, as demanded by the product rule of probabilities. They encode specific ways in which we expect $\Delta_j$ to behave, given our grid of frequencies and our information (which of course can be influenced by previous observations). For instance, if we expect our best model to minimize the systematic deviations, the prior should assign larger probability densities to smaller $\Delta_j$, so that models with smaller deviations will be more probable. On the other hand, if we expect our best model to show more erratic deviations, a flat uninformative prior is a better choice. After a complete evaluation of the probabilities and likelihoods, the Bayesian evidence will indicate whether the state of information encoded by the priors is supported by the data or not.

As a first important example of an uninformative prior, consider a uniform prior

$p(\Delta_j \,|\, M_\Delta, I) = \dfrac{1}{\Delta_{\max} - \Delta_{\min}}$ (15)

This means that all values of $\Delta_j$ between $\Delta_{\min}$ and $\Delta_{\max}$ are equally likely. With such a prior, every model that predicts a frequency at any value between $f_{\mathrm{obs},i} - \beta\,\Delta_{\max}$ and $f_{\mathrm{obs},i} - \beta\,\Delta_{\min}$ has the same maximum likelihood (i.e., the same maximum value of Equation (10)).

On the other hand, a Jeffreys prior

$p(\Delta_j \,|\, M_\Delta, I) = \dfrac{1}{\Delta_j\, \ln\left(\Delta_{\max} / \Delta_{\min}\right)}$ (16)

assigns equal probability per decade and, in terms of the probability density, favors smaller values of $\Delta_j$. This prior is obviously not defined for $\Delta_j = 0$; i.e., it requires $\Delta_{\min} > 0$. This is problematic for, e.g., surface effects that approach zero at low orders. However, when $\Delta_{\min} = 0$, one can use a modified version of this prior given by

$p(\Delta_j \,|\, M_\Delta, I) = \dfrac{1}{\left(\Delta_j + \Delta_{\mathrm{c}}\right)\, \ln\!\left(\dfrac{\Delta_{\mathrm{c}} + \Delta_{\max}}{\Delta_{\mathrm{c}}}\right)}$ (17)

where $\Delta_{\mathrm{c}}$ is a small constant. For values smaller than $\Delta_{\mathrm{c}}$, this prior acts more or less like a uniform prior, while for higher values it behaves like the usual Jeffreys prior. This prior is nowadays often used in "peak-bagging" algorithms (e.g., Gruberbauer et al., 2009; Benomar et al., 2009; Handberg & Campante, 2011). However, there is no objective criterion for how to set $\Delta_{\mathrm{c}}$, and various tests we conducted with our grid fitting code have shown that the choice of $\Delta_{\mathrm{c}}$ can have a large impact on the evidence values.

Consequently, we argue that any priors used for a systematic error parameter should be functions that are clearly defined by the parameter limits, without additional parameters that have large effects on the evidence. The uniform prior is such a prior (in fact, the uniform prior is consistent with a beta distribution with $a = b = 1$), as are priors derived from the beta distribution, given in the units of our problem by

$p(\Delta_j \,|\, M_\Delta, I) = \dfrac{\Gamma(a + b)}{\Gamma(a)\,\Gamma(b)}\; \dfrac{\Delta_j^{\,a-1}\left(\Delta_{\max} - \Delta_j\right)^{b-1}}{\Delta_{\max}^{\,a+b-1}}$ (18)

With $a = 1$ and $b = 2$ this simply leads to a linearly decreasing probability density

$p(\Delta_j \,|\, M_\Delta, I) = \dfrac{2\left(\Delta_{\max} - \Delta_j\right)}{\Delta_{\max}^2}$ (19)

It is the only prior that allows for a linearly decreasing probability density, is properly normalized, and reaches zero at $\Delta_j = \Delta_{\max}$. It also leads to an analytical solution for the integral in Equation (11). Thus, it satisfies all our requirements for a prior with which to minimize systematic errors.

We have compared the results obtained from Equation (19) (hereafter: beta prior) with several other plausible choices, such as an exponential distribution or a modified Jeffreys prior, and we find them to yield comparable results and evidence values. Due to the clarity of its definition and its lack of additional parameters, we therefore argue that the beta prior is an appropriate choice for a non-flat prior. We show how to use it in a worked example in Section 4.
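The beta prior of Equation (19) is a one-liner and can be passed directly to a marginalization routine such as the marginal_like sketch of Section 3.4 (names are ours):

def beta_prior(d, d_max):
    # Equation (19): linearly decreasing prior on [0, d_max];
    # normalized, with density 2/d_max at d = 0 and zero at d = d_max.
    return 2.0 * (d_max - d) / d_max**2

# e.g.: marginal_like(f_obs, f_mod, sig, beta=-1,
#                     prior_pdf=lambda d: beta_prior(d, d_max), d_max=d_max)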

Lastly, priors based on heuristic or physical arguments obviously vary strongly with the specific problem to which the fitting method is applied. As an example, when modeling surface effects on p-mode frequencies, the prior could be a Gaussian following the heuristic frequency correction proposed by Kjeldsen et al. (2008). It would be a function of frequency, expecting greater deviations toward higher-order modes. The width of the Gaussian, however, would again be a rather arbitrary choice, leading to potentially different evidence values. Such priors clearly need to have a strong basis either in theory or in prior observations.

3.6. Bumped modes and finite grid resolution

Equation (12) represents the final likelihood for obtaining the observed frequencies given our (extended) model $M_\Delta$. This model still represents only a single point in a discrete grid. (Note that this is also the case for approaches using an adaptive grid, since each iteration of an adaptive scheme is based on a discrete grid.) However, the probability is small that a single model in the grid corresponds to the "true best model" our code can produce. The problem becomes worse as the grid resolution is lowered, or as mode frequencies change quickly or unpredictably from one model to the next (e.g., avoided crossings, magnetic shifts). The probabilities (or $\chi^2$-values) we obtain will not be a fair assessment of the model physics, even at higher grid resolutions. Even worse, the overall evidence for the grid will be finely tuned to the positions of all models in the grid. This makes it difficult to compare different grids with different physics. We will now show how to improve on this.

In a sequence of models along a single evolutionary track, each model $m_k$ (except for the first and last) has two neighboring models $m_{k-1}$ and $m_{k+1}$. In most cases these adjacent models will contain the same modes, and their changing frequency values can be traced from $m_k$ to $m_{k-1}$ and $m_{k+1}$. Now we declare the difference between observed and calculated frequency to be a new free parameter

$\delta_{ij} = f_{\mathrm{obs},i} - f_{\mathrm{mod},j}$ (20)

This value is fixed if only a single grid point is considered. However, we can split the evolutionary track into segments between grid points, and define

$\delta^{-}_{ij} = f_{\mathrm{obs},i} - \tfrac{1}{2}\left[f_{\mathrm{mod},j}(m_k) + f_{\mathrm{mod},j}(m_{k+1})\right]$ (21)

and equivalently

$\delta^{+}_{ij} = f_{\mathrm{obs},i} - \tfrac{1}{2}\left[f_{\mathrm{mod},j}(m_k) + f_{\mathrm{mod},j}(m_{k-1})\right]$ (22)

Adding $\delta_{ij}$ as a new parameter to the equations derived in the earlier sections, we change our focus to evaluating probabilities of model track segments centered on the models $m_k$. To do this, we again use marginalization to integrate out both $\Delta_j$ and $\delta_{ij}$ to obtain the marginal likelihood. We obtain

$p(D_i \,|\, S_{ij}, M_{\Delta,\delta}, I) = \displaystyle\iint p(\Delta_j, \delta_{ij} \,|\, M_{\Delta,\delta}, I)\; p(D_i \,|\, \Delta_j, \delta_{ij}, S_{ij}, M_{\Delta,\delta}, I)\; d\Delta_j\, d\delta_{ij}$ (23)

If the priors for $\Delta_j$ and $\delta_{ij}$ do not vary greatly from one model to the next, $\Delta_j$ and $\delta_{ij}$ can be considered independent parameters. It is therefore possible to use the product rule to separate the conditional probabilities

$p(\Delta_j, \delta_{ij} \,|\, M_{\Delta,\delta}, I) = p(\Delta_j \,|\, M_{\Delta,\delta}, I)\; p(\delta_{ij} \,|\, M_{\Delta,\delta}, I)$ (24)

Furthermore, since we evaluate the complete evolutionary track segment, we can assume a uniform prior probability $p(\delta_{ij} \,|\, M_{\Delta,\delta}, I) = 1/(\delta^{+}_{ij} - \delta^{-}_{ij})$. With these definitions, the integral over $\delta_{ij}$ can easily be calculated analytically. The equivalent of Equation (10) becomes

$p(D_i \,|\, \Delta_j, S_{ij}, M_{\Delta,\delta}, I) = \dfrac{1}{2\left(\delta^{+}_{ij} - \delta^{-}_{ij}\right)} \left[\mathrm{erf}\!\left(\dfrac{\delta^{+}_{ij} - \beta\,\Delta_j}{\sqrt{2}\,\sigma_{ij}}\right) - \mathrm{erf}\!\left(\dfrac{\delta^{-}_{ij} - \beta\,\Delta_j}{\sqrt{2}\,\sigma_{ij}}\right)\right]$ (25)

where $\mathrm{erf}$ is the error function. The remaining integral over $\Delta_j$ again has to be carried out numerically. Figure 2 shows an example of the definitions introduced above, given three models in a solar evolutionary track.

Figure 2.— Example for the definition of $\delta^{-}_{ij}$ and $\delta^{+}_{ij}$ (see the text). Four radial orders of modes from three adjacent models in a high-resolution grid of solar models are shown. The triangles represent the central model. The frequencies of the adjacent models along the evolutionary track are depicted as squares and white circles. Black circles (and error bars) indicate observed frequencies published in Broomhall et al. (2009). $\delta^{-}_{ij}$ and $\delta^{+}_{ij}$ for a single mode are represented using arrows. The insert shows an unzoomed version of the ridge.
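Based on our reconstruction of Equation (25), the analytic integral over $\delta_{ij}$ reduces to a difference of error functions; a sketch follows (our code; the remaining marginalization over $\Delta_j$ is still performed numerically as before):

import numpy as np
from scipy.special import erf

def segment_like(delta_lo, delta_hi, d, beta, sig):
    # Equation (25): likelihood for a track segment, with the offset
    # delta integrated analytically over [delta_lo, delta_hi] under a
    # uniform prior; d is the (not yet marginalized) systematic error.
    z_hi = (delta_hi - beta * d) / (np.sqrt(2.0) * sig)
    z_lo = (delta_lo - beta * d) / (np.sqrt(2.0) * sig)
    return (erf(z_hi) - erf(z_lo)) / (2.0 * (delta_hi - delta_lo))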

We have now used the free parameter $\delta_{ij}$ to "trace" each mode through segments of the evolutionary track, and to compare it to the observed frequencies, retaining the possibility of systematic differences. Note that our only assumption here is that the mode frequencies change smoothly between the frequencies given by the constraining models. In principle, this approach can be carried out in multiple dimensions (e.g., not only along the evolutionary track in stellar age, but also between tracks in mass). As before, an extension to multiple frequencies and ambiguous mode identifications is straightforward.

We stress that this approach only locates the region of highest probability given the current grid, and given unspecified behavior of frequencies in between grid points. It is thus best used for frequencies whose behavior is difficult to capture, e.g., due to mode bumping, or for a first general assessment of a very coarse grid. Given a dense enough grid, regular frequencies that are expected to change approximately linearly from one grid point to the next need to be treated using interpolation, since the integration over the model gaps for individual modes, independently of all other modes, would allow for highly unphysical models.

Therefore, in order to obtain a final best model and uncertainties for the model parameters, the regions of substantial probability should be further refined after the track probabilities have been calculated. Eventually, the grid is resolved well enough that well-defined distributions arise. In dense enough grids, this can easily be accomplished by interpolating the frequencies between grid points without violating the condition of hydrostatic equilibrium. This can also be done at run-time with arbitrary precision using probabilities, by interpolating between grid points and using the sum rule to calculate a probability representative of the original grid resolution from the interpolated models. Naturally, modes that change erratically should be excluded from such an interpolation routine and treated as shown above instead.

3.7. Model probabilities

So far we have only shown how to calculate the likelihoods for standard pulsation models, models that contain systematic differences, and evolutionary track segments. In order to obtain the probabilities for individual models (or track segments), we want to use Bayes' theorem, assign model priors, and calculate the total evidence for each model grid. The simplest method to assign model priors, in the absence of any other prior information, is to use the principle of equipartition and assign a uniform prior

$p(M_k \,|\, I) = \dfrac{1}{N_{\mathrm{models}}}$ (26)

where $N_{\mathrm{models}}$ is the number of models (or, equivalently, the number of track segments) that are analyzed.

Although each model or track only predicts a number of frequencies, it implicitly represents values or ranges of fundamental parameters like $T_{\mathrm{eff}}$ or $L$, which can be compared to (and constrained by) different, non-seismic observations. For instance, assuming our prior photometric and spectroscopic observations of a pulsating star indicate $T_{\mathrm{eff}} = T_0 \pm \sigma_T$, then the prior probability density for the model temperature is

$p(T_{\mathrm{eff}} \,|\, I) = C \exp\!\left(-\dfrac{\left(T_{\mathrm{eff}} - T_0\right)^2}{2\,\sigma_T^2}\right)$ (27)

This example assumes that the uncertainty in $T_{\mathrm{eff}}$ follows a Gaussian distribution. $C$ is a normalization constant depending on the absolute lower and upper plausible limits of $T_{\mathrm{eff}}$.

All the different implicit parameters for which prior observations or other fundamental constraints are available, and hence for which prior knowledge exists, can then be used to form prior probabilities which combine into an overall prior for model $M_k$. As an example,

$p(M_k \,|\, I) \propto p(T_{\mathrm{eff}} \,|\, I)\; p(L \,|\, I)\; p(\log g \,|\, I)$ (28)

if we assume separable priors for simplicity. If probabilities of track segments are calculated, such a prior can be approximated by a product of separable integrals, which are easily evaluated analytically. Again, in order to obtain a proper prior, and therefore proper values for the evidence, the integral of the prior probability over all possible models/tracks in a grid should be 1.
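As an illustration of Equations (27) and (28), the combined model prior can be accumulated as a product of independent Gaussian constraints (a sketch with hypothetical names; the result must still be normalized over the whole grid):

import numpy as np

def gaussian_prior(x, x0, sig):
    # Equation (27): Gaussian prior density for one implicit parameter.
    return np.exp(-0.5 * ((x - x0) / sig)**2) / np.sqrt(2.0 * np.pi * sig**2)

def model_prior(model_params, constraints):
    # Equation (28): separable prior for one model.
    # model_params: dict of the model's implicit parameters, e.g. {'Teff': ...}
    # constraints:  dict mapping parameter name -> (adopted value, uncertainty)
    p = 1.0
    for name, (x0, sig) in constraints.items():
        p *= gaussian_prior(model_params[name], x0, sig)
    return p  # divide by the sum over all grid models to normalize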

By calculating, e.g.,

$p(M_k \,|\, D, I) = \dfrac{p(M_k \,|\, I)\; p(D \,|\, M_k, I)}{\sum_n p(M_n \,|\, I)\; p(D \,|\, M_n, I)}$ (29)

or, e.g., in the case of rapidly changing modes with or without systematic errors,

$p(M_{\Delta,\delta,k} \,|\, D, I) = \dfrac{p(M_{\Delta,\delta,k} \,|\, I)\; p(D \,|\, M_{\Delta,\delta,k}, I)}{\sum_n p(M_{\Delta,\delta,n} \,|\, I)\; p(D \,|\, M_{\Delta,\delta,n}, I)}$ (30)

we obtain the probability of $M_k$ or $M_{\Delta,\delta,k}$, respectively, given our prior knowledge (or lack thereof), our grid, and the set of observed frequencies. Note that the denominators of these equations represent the evidence, or likelihood, for the grid as a whole. We can therefore use these as likelihoods when we want to compare different grids with different input physics.

4. Application to surface effects

As mentioned in the introduction, shortcomings in modeling the outer stellar layers produce systematic deviations in comparison to the observations. These deviations are such that model frequencies tend to be higher than the observed frequencies, and therefore $\beta = -1$ (see Equation (9)). Kjeldsen et al. (2008) have proposed to calibrate a power-law description of the deviations by measuring the surface effects in the Sun, and then fitting this relation to frequencies of other stars. Their correction, expressed in terms of our definitions, has the general form

$\Delta(f_{\mathrm{obs},i}) = a \left(\dfrac{f_{\mathrm{obs},i}}{f_0}\right)^{b}$ (31)

where $f_0$ is some reference frequency, typically the frequency of maximum power $\nu_{\max}$, and $a$ and $b$ are parameters to be fitted. From their fits, Kjeldsen et al. determined $b = 4.9$ for the Sun, a value which has subsequently been used for other stars by a number of authors. A comprehensive implementation of this formalism into a $\chi^2$-fitting algorithm was presented in a recent study by Brandão et al. (2011). However, even in this more advanced approach, a choice of $b$ and $f_0$ is still required. Moreover, complications arise for modes of different spherical degree, and also for bumped modes, because they do not necessarily conform to this relation. The authors propose to alleviate these problems by introducing additional model-dependent parameters that approximately correct for some of these deviations. While this approach is a great improvement over applying a fixed surface-effect correction (or no correction at all), our approach is more powerful. It allows for much greater flexibility and leads to clearly defined probabilistic results.

Figure 3.— Behavior of the beta prior for systematic offsets in an echelle diagram. The squares represent model frequencies, while the shaded "trails" indicate the prior probability density for varying $\Delta_{\max}$. The left panel uses equal $\Delta_{\max}$ for all frequencies, whereas the right panel uses a power law with exponent $b = 4.9$ (see Section 4.1). Note that the uniform prior is not shown, since it simply assigns a constant probability density.

4.1. Priors for surface effects

As we want to treat systematic errors of more or less unknown magnitude, the most general approach is to use the flat uniform prior (Equation (15)). Imposing only minor additional constraints, as we argued in Section 3.5, the beta prior (Equation (19)) can also be used to give more weight to models which minimize these unknown errors. We can use both priors and compare the Bayesian evidence to tell us which interpretation of the surface effect is better supported by the data, given our model and everything we know. Moreover, irrespective of which prior is chosen, we also always allow for the possibility of no surface effects at all, as discussed in Section 3.4.

This now gives us enough flexibility to consider a possibly frequency-dependent behavior of the surface effects. However, instead of "predicting" the behavior of $\Delta_j$, as is done by modeling the surface deviations through a power law, we will prescribe the behavior of its upper limit $\Delta_{\max}$. In contrast, the lower limit should always remain 0, since our ultimate goal is to find models that correctly describe the surface layers (and therefore approach $\Delta_j = 0$).

The choice of the largest allowed $\Delta_{\max}$ is not unique, but it should be sensible and used consistently throughout the analysis. A reasonable strategy is to use $\Delta\nu$, the large frequency separation of each specific model, as a sensible upper limit. If the systematic differences between observations and models are larger than the average distance between modes of adjacent radial order, we no longer recognize this as a valid frequency assignment. (This condition may be relaxed at the highest radial orders.) With this upper limit defined, we now want to model different types of surface effects. If we have no preference for any frequency-dependent trend (i.e., all we know is that observed frequencies are lower than model frequencies), we require that all frequencies have equal $\Delta_{\max}$.

On the other hand, we can also use a more specific model, such as Equation (31), but retain the same flexibility. The surface effect as shown in Equation (31) depends on two parameters. The power-law exponent $b$ determines how quickly the surface effect increases as we move to higher frequencies, whereas $a$ is simply a scaling factor. We are not interested in the scaling parameter, since the scaling (i.e., the magnitude of the offset) is governed by our condition that $\Delta_{\max}$ reaches $\Delta\nu$ for each model; it is taken care of by the fact that we are marginalizing over $\Delta_j$ anyway. Since the largest surface effects are expected at the highest frequency in the model, it follows that for a specific $b$

$\Delta_{\max}(f_{\mathrm{mod},j}) = \Delta\nu \left(\dfrac{f_{\mathrm{mod},j}}{f_{\mathrm{mod,max}}}\right)^{b}$ (32)

Figure 3 shows how these definitions affect the prior probability density for both the constant and the power-law approach. With all the $\Delta_{\max}$ set, we can then use the priors as discussed above for all our calculations. Note that we can also easily evaluate new composite propositions at this stage and compute the probability for a hypothesis that allows for, e.g., a range of exponents $b$. This is done in the same way as explained earlier (see Equation (14)).
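A sketch of the frequency-dependent upper limit as reconstructed in Equation (32): $\Delta_{\max}$ equals the model's large separation at its highest frequency and falls off as a power law toward lower frequencies (our own code; $b = 4.9$ is the canonical exponent discussed above):

def delta_max(f_mod, f_highest, large_sep, b=4.9):
    # Equation (32): upper limit for the systematic error of a mode with
    # model frequency f_mod, scaled so that delta_max equals the large
    # frequency separation at the highest model frequency f_highest.
    return large_sep * (f_mod / f_highest)**b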

4.2. Detailed analysis of the Sun

As an example of how to implement the surface-effect treatment, we consider the solar $l = 0$, 1, 2, and 3 p modes obtained using BiSON data (Broomhall et al., 2009). For our models, we used a large and dense solar grid obtained with YREC (Demarque et al., 2008). The model grid spans masses from 0.95 to 1.05 $M_\odot$ in steps of 0.005 $M_\odot$, initial hydrogen mass fractions $X_0$ from 0.68 to 0.74 in steps of 0.01, initial metal mass fractions $Z_0$ from 0.016 to 0.022 in steps of 0.001, and mixing length parameters $\alpha$ from 1.8 to 2.4 in steps of 0.1. These parameters are also summarized in Table 1.

Our model tracks begin as completely convective Lane–Emden spheres (Lane, 1869; Chandrasekhar, 1957) and are evolved from the Hayashi track (Hayashi, 1961) through the zero-age main sequence (ZAMS) to 6 Gyr, with each track consisting of approximately 2500 models. Only models between 4.0 and 6.0 Gyr are included in the model grid. The constitutive physics include the OPAL98 (Iglesias & Rogers, 1996) and Alexander & Ferguson (1994) opacity tables using the GS98 mixture (Grevesse & Sauval, 1998), and the Lawrence Livermore 2005 equation of state tables (Rogers, 1986; Rogers et al., 1996). Convective energy transport was modeled using the Böhm-Vitense mixing-length theory (Böhm-Vitense, 1958). The atmosphere model follows the $T(\tau)$ relation of Krishna Swamy (1966). Nuclear reaction cross-sections are from Bahcall et al. (2001). The effects of helium and heavy element diffusion (Bahcall et al., 1995) were included. Note that our atmosphere models and diffusion effects have been shown to require a larger value of the mixing length parameter than standard Eddington atmospheres (Guenther et al., 1993).

The pulsation spectra were computed using the stellar pulsation code of Guenther (1994), which solves the linearized, non-radial, non-adiabatic pulsation equations using the Henyey relaxation method. The non-adiabatic solutions include radiative energy gains and losses but do not include the effects of convection. We estimate the random uncertainties of our model frequencies to be of the order of .

Table 1. Parameter Ranges for the Solar Grid

Parameter          Range          Step Size
Mass ($M_\odot$)   0.95–1.05      0.005
$X_0$              0.68–0.74      0.01
$Z_0$              0.016–0.022    0.001
$\alpha$           1.8–2.4        0.1

Notes. Masses are given in units of solar masses; $\alpha$ is the mixing length parameter.

We analyzed our grid using adiabatic and non-adiabatic frequencies, and employed three different surface-effect models:

  • M1: frequency-independent surface effects, with equal $\Delta_{\max}$ for all frequencies

  • M2: frequency-dependent, "canonical" surface effects, with $\Delta_{\max}$ following Equation (32) with $b = 4.9$

  • M3: same as M2, but with $b$ as a free parameter that is marginalized over

For each frequency evaluated throughout our model grid, irrespective of the surface-effect model, we also considered the possibility of no surface effect; i.e., we consistently calculated the combined likelihood of Equation (14). To take into account the discrete nature of the grid, we interpolated along the evolutionary tracks at run-time by a factor of 20, thereby increasing the effective "frequency resolution" of the grid to below the random uncertainties of the model frequencies. All models were evaluated with

  • (a) a uniform prior for all track segments

  • (b) a prior using normal distributions for the constraints on the mass, luminosity, and effective temperature, where YREC uses adopted values for $M_\odot$ (Cohen & Taylor, 1986) and $L_\odot$ (the average of the ERB-Nimbus and SMM/ACRIM measurements; Hickey & Alton, 1983)

  • (c) same as (b), but with an additional Gaussian constraint on the age, derived from the estimated age of the solar system found by Bouvier & Wadhwa (2010) and the average pre-main-sequence phase of our models.

For the systematic error parameters $\Delta_j$ we consistently used beta priors, as discussed in the previous section. Our calculations yield the most probable models and uncertainties for all these approaches, and they also give the Bayesian evidence for each approach. The results are summarized in Table 2. We also computed the probabilities using uniform priors, but found similar results with evidence values several orders of magnitude lower than for the corresponding beta prior analysis.

Table 2. Evidence for the Solar Grid Using the BiSON Data Set

Surface Model   HRD Prior   $\ln$(Evidence) (adiabatic)   $\ln$(Evidence) (non-adiabatic)
M1              a           …                             …
M2              a           …                             -186.7
M3              a           …                             …
M1              b           …                             …
M2              b           …                             -187.1
M3              b           …                             …
M1              c           …                             …
M2              c           …                             -190.7
M3              c           …                             …

Notes. See the text for the definitions of the surface models M1–M3 and the priors a, b, and c. Results for models M2a, M2b, and M2c, which are analyzed in more detail, are indicated in bold face. Note that small numbers are expected.
Table 3. Most Probable Models for the Complete BiSON Data Set Model Fitting

Model   Mass   $X_0$   $Z_0$   $Z_{\mathrm{env}}$   $\alpha$   Age   Probability
M2a     …      0.73    0.017   0.0153               2.2        …     0.54
        …      0.72    0.017   0.0153               2.2        …     0.21
M2b     …      0.72    0.018   0.0161               2.1        …     0.68
        …      0.71    0.019   0.0160               2.2        …     0.17
M2c     …      0.72    0.016   0.0144               2.2        …     0.95
        …      0.71    0.017   0.0153               2.3        …     0.05

Notes. Ages are given in billions of years and are computed from the pre-main-sequence birthline; the age measured from the ZAMS is smaller by the duration of the pre-main-sequence phase. $X_0$ and $Z_0$ are the initial hydrogen and metal mass fractions; $Z_{\mathrm{env}}$ is the metal mass fraction in the envelope. Probabilities are given with respect to the specific surface-effect model and prior combination.

The non-adiabatic frequencies consistently produce larger evidence values than the respective adiabatic cases. This is no surprise, as the non-adiabatic frequencies are in general better at reproducing the higher frequencies. Overall, model M2a shows the largest evidence, followed by M2b and M3a. Note that M1a, M1b, and M1c, which use frequency-independent priors for the surface effects and are therefore extremely flexible, fail compared to the other models. Also, M3a and M3b cannot beat their M2 counterparts. These are examples of how marginalization and the consistent normalization of probabilities work together to penalize more flexible models if they cannot produce considerably better results. Model M3c has a greater evidence than M2c, but the most probable stellar models are the same in both cases, suggesting that these models fit well but do not necessarily adhere in detail to the standard surface correction.

At first glance it might be unsettling that M2a has a slightly greater evidence than M2b (and a significantly greater evidence than M2c). This indicates that there are models in our grid which reproduce the pulsation spectrum very well but do not match the solar fundamental parameters. A correctly calibrated grid would produce higher evidences with a prior restricted to the true solution. However, regardless of whether or not we include the fundamental parameter constraints, we are still finding models that match the oscillation constraints reasonably well. Furthermore, recall that the evidence is only the likelihood of obtaining the data, given that the approach is correct. (In order to obtain correctly normalized probabilities for the different approaches themselves, we would have to introduce conditional probabilities for the approaches and use Bayes' theorem. Only comparing the evidence amounts to setting these conditional probabilities equal for all tested hypotheses.) We know that prior a misrepresents our state of information. The solar prior approach b more correctly encodes what we know about the Sun, and the age prior c puts even tighter constraints on the pulsation models. Ignoring this information (using prior a and setting equal conditional probabilities) is an interesting and necessary exercise to study the consistency of the results, and how the different models, approaches, and priors work. For an actual detailed study of the solar model physics, however, it is not appropriate. We can nonetheless compare the results, restricting ourselves to the non-adiabatic frequencies and the on average best surface model for each prior, M2. The resulting parameters are displayed in Table 3.

The results obtained without using our prior knowledge of the Sun (model M2a) are spread over several models in the parameter space that fit the observations quite well. However, for the most probable models, the mass and age are inconsistent with our prior knowledge. These models seem to produce smaller surface effects and are therefore preferred. For model M2b the situation is similar: although the mass is now constrained to the true solar value, we do not obtain models that are consistent with the solar age.

For M2c a single combination of physical parameters dominates the results and manages to fit all the constraints we impose (mass, luminosity, effective temperature, pulsation frequencies, and age). Loosening the conditions on the effective temperature and the luminosity does not significantly change the result. We have also tested slight variations of up to 20 million years in the age prior and do not find the result to be affected. In all cases, we recover the same tightly constrained most probable model and age. We therefore find a result similar to Houdek & Gough (2011). Given the time our models take to reach the main sequence, our result is also consistent with meteoritic age determinations of the solar system to within several million years (see, e.g., Bouvier & Wadhwa, 2010). However, we also recover a lower initial metal mass fraction, and hence a different initial helium mass fraction, than found by Houdek & Gough, more consistent with Asplund et al. (2009).

Fitting the observations to the adiabatic frequencies, including the age prior, we recover the exact same model. We also tested how sensitive the grid is to the prior constraints, in order to estimate the actual impact of the pulsation frequencies on the probabilities. If we only evaluate the combined priors, ignoring the frequencies but including the prior on the age, the most probable models shift to higher metallicities. This leads us to conclude that the frequencies have a decisive impact and actually select the low-metallicity models, no matter whether adiabatic or non-adiabatic model frequencies are used.

However, it has to be stressed again that the evidence drops by almost two orders of magnitude when we introduce the age prior. This can be understood from the fact that the solution is so tightly constrained, lies at the edge of our current parameter space in $Z_0$, and that many other models can also produce similar pulsation spectra. It could also suggest that we have not yet covered the true best model parameters in our current grid. Therefore, our next goal will be to extend the grid to lower metallicities and to include different abundance mixtures, but this is beyond the scope of this paper.

Figure 4.— Non-adiabatic (shaded symbols) and adiabatic (open symbols) frequencies of the most probable solar model from evaluating the BiSON frequencies (black circles + error bars) using approach M2c. Note that the frequencies have been shifted upward by 5 $\mu$Hz before calculating the x-axis values, in order to prevent the modes from wrapping around.
Figure 5.— Measured surface effects for non-adiabatic (filled circles) and adiabatic (open circles) frequencies of the most probable solar model from evaluating the BiSON frequencies using approach M2c. The uncertainties of the differences are smaller than the symbols. Least-squares power-law fits (see Equation (31)) to the surface effects for the adiabatic (solid line) and non-adiabatic (dashed line) frequencies are also shown.

Figure 4 compares the BiSON observations with our most probable model at the correct solar age. Even with non-adiabatic frequencies, significant surface effects remain. The measured surface effects themselves are shown in Figure 5, together with least-squares fits following the relation proposed by Kjeldsen et al. (2008). The magnitude of the surface deviations depends on whether the non-adiabatic or the adiabatic frequencies are used for the fit. Nonetheless, our method identifies the exact same model as the most probable one, even using the same surface-effect model, thanks to the power of marginalization. However, the non-adiabatic models are vastly preferred in terms of the Bayesian evidence. This is an example of how the presented approach can be used to iterate toward improved stellar model physics, while still recovering meaningful stellar parameters from current asteroseismic investigations.

We also determined surface-correction power-law exponents for every spherical degree via least-squares fits. For both the non-adiabatic and adiabatic frequencies, the best-fitting exponents are markedly different from $b = 4.9$, which was advocated by Kjeldsen et al. (2008) and also used as the basis for our probabilistic surface model M2. This is also the reason why the M3c models have a greater evidence than their M2c counterparts. The fitted values differ between the non-adiabatic and adiabatic frequencies and from one spherical degree to another. Moreover, the power-law fits do not match the deviations very well at intermediate radial orders. From our point of view, fixing the exponent to $b = 4.9$ for a least-squares fit, as for instance done by Brandão et al. (2011), is therefore a potential problem, since it does not even match the Sun very well, in particular when improved (e.g., non-adiabatic) physics are implemented. The probabilistic procedure has no problem with these deviations, even though it formally assumes an exponent of $b = 4.9$, since the magnitude of the surface effect is marginalized for every frequency.

4.3. Asteroseismic analysis of a Sun-like star

To investigate the applicability of our method to current asteroseismic investigations, we also performed an “asteroseismic” analysis of a Sun-like star, simulated by artificially “degrading” the set of observed BiSON frequencies to the precision and accuracy expected from current space-based missions for an average solar-type oscillator. We first multiplied the uncertainties of the BiSON observations by a factor of 20, and then added corresponding random errors to the frequency values. Furthermore, we did not assume any detailed prior information on the fundamental parameters. Instead, we fitted the “degraded” data set with a completely flat prior to the same grid as before, again using our surface effect model M2.
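The degradation step itself is simple; a minimal sketch (the input arrays are placeholders, not the actual BiSON values) could look like this:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical BiSON-like input: frequencies and 1-sigma
# uncertainties, both in muHz.
nu_bison = np.array([2100.0, 2235.0, 2370.0])
sig_bison = np.array([0.05, 0.04, 0.06])

# Inflate the uncertainties by a factor of 20 ...
sig_degraded = 20.0 * sig_bison
# ... and perturb each frequency by a matching random error.
nu_degraded = nu_bison + rng.normal(0.0, sig_degraded)
```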

Although a different most probable model is identified, the overall results are comparable to our findings for M2a; the model probabilities merely show a slightly larger spread across the grid. Summarizing the uncertainties for the main parameters by calculating the first and second central moments of the probability distribution in our grid, we approximately obtain , age = , , , , and .
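A minimal sketch of how such moments follow from the normalized grid probabilities (array names and values are placeholders, shown here for the mass only):

```python
import numpy as np

# Hypothetical grid posterior: one probability per grid model,
# with the corresponding model masses in solar units.
prob = np.array([0.1, 0.5, 0.3, 0.1])
mass = np.array([0.95, 1.00, 1.05, 1.10])

prob = prob / prob.sum()                 # normalize
mean = np.sum(prob * mass)               # first moment
var = np.sum(prob * (mass - mean) ** 2)  # second central moment
print(f"M = {mean:.3f} +/- {np.sqrt(var):.3f} Msun")
```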

However, these results deteriorate if we systematically remove the lower-order modes, which are crucial to “anchor” the surface-effect relation. To illustrate, we further degraded our data set by keeping only 13 modes from 1950 to 3580 μHz, 12 modes from 2020 to 3505 μHz, 10 modes between 2080 and 3300 μHz, and 8 modes from 2270 to 3220 μHz. Similar data sets from Kepler and CoRoT, with comparable uncertainties and numbers of modes, have recently been analyzed in the literature. The results for the model parameters become , age = , , , , and . Although the values are still within , we are almost at the border of our parameter space, and higher-mass models systematically outperform lower-mass models.

We know from investigating the BiSON data using our grid that we require to fit all solar observables. Therefore, in an analysis of a Sun-like star, we can constrain the fit to all models with this value or use a prior based on the marginal posterior probability for as determined from the fit to the Sun. In this case we obtain , age = , , , . This is an improvement, but still not comparable to the results obtained when using the full data set.

Thanks to the probabilistic method, however, we can also easily add new observables as further constraints, such as the frequency of maximum power, which can likewise be inferred from a power-spectrum analysis and which, for Sun-like stars, approximately scales as

$$\nu_{\mathrm{max}} = \frac{M/M_{\odot}}{(R/R_{\odot})^{2}\,\sqrt{T_{\mathrm{eff}}/T_{\mathrm{eff},\odot}}}\;\nu_{\mathrm{max},\odot} \qquad (33)$$

with the solar reference value $\nu_{\mathrm{max},\odot}$ (in μHz) taken from Kallinger et al. (2010b). Assuming an observed value of , and calculating $\nu_{\mathrm{max}}$ for each model according to Equation (33), we can then multiply the probability of each model by

$$P_{\nu_{\mathrm{max}}} = \exp\left[-\,\frac{\left(\nu_{\mathrm{max,obs}} - \nu_{\mathrm{max,mod}}\right)^{2}}{2\,\sigma^{2}}\right] \qquad (34)$$

where $\sigma$ denotes the uncertainty of the observed $\nu_{\mathrm{max}}$.

With $\nu_{\mathrm{max}} = $ μHz for our simulated Sun-like star, we then obtain , age = , , , . Finally, if we were able to determine $\nu_{\mathrm{max}}$ to about solar precision, the results would be , age = , , , . Therefore, if our observations provide precise additional information such as $\nu_{\mathrm{max}}$, it can easily be incorporated in our method. It then seems possible to obtain reasonably accurate results for Sun-like stars, even in the absence of low-order modes and without a fixed surface-effect correction.
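As an illustration of Equations (33) and (34), the following sketch computes the scaling-relation $\nu_{\mathrm{max}}$ for one grid model and the resulting probability weight. The solar reference values and all inputs are placeholder assumptions.

```python
import numpy as np

NU_MAX_SUN = 3120.0  # muHz; assumed solar reference (Kallinger et al. 2010b)
TEFF_SUN = 5777.0    # K; assumed solar effective temperature

def nu_max_model(mass, radius, teff):
    # Equation (33): nu_max scales as M * R**-2 * Teff**-0.5,
    # with mass and radius in solar units.
    return mass * radius ** -2 * (teff / TEFF_SUN) ** -0.5 * NU_MAX_SUN

def nu_max_weight(nu_obs, nu_mod, sigma):
    # Equation (34): Gaussian factor multiplied onto each model's
    # probability; sigma is the uncertainty of the observed nu_max.
    return np.exp(-0.5 * ((nu_obs - nu_mod) / sigma) ** 2)

# Hypothetical usage for one grid model:
nu_mod = nu_max_model(mass=1.02, radius=1.01, teff=5790.0)
weight = nu_max_weight(nu_obs=3090.0, nu_mod=nu_mod, sigma=100.0)
```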

5. Conclusions

In this paper, we have derived a new, completely probabilistic framework for asteroseismic grid fitting. We explicitly used marginalization and the formulation of combined propositions to allow for the quantitative evaluation of the model grid physics. While computationally more intensive than the standard evaluation, this approach has several benefits in that it

  1. allows for the treatment and analysis of systematic errors such as the surface effects, therefore removing the need to apply corrections prior to fitting,

  2. easily implements uncertainties in the mode identification,

  3. takes into account the fact that grids are discrete representations of a continuous parameter space, which is especially important for rapidly varying bumped modes,

  4. provides a consistent framework to use prior knowledge about stellar fundamental parameters or to evaluate additional observables such as , and

  5. produces correctly normalized probabilities and likelihoods (evidences), which can be used to assess the model grid physics and the calibration of the grids.

While the above was derived explicitly for the example of a static grid, the probabilistic approach is also well suited to an adaptive grid. The Bayesian evidence could serve as a natural criterion for deciding whether an adaptive grid needs to be refined further, as in the sketch below.
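A minimal sketch of such a refinement loop; all names are hypothetical, and `evidence` stands for whatever routine evaluates the evidence on a (sub)grid cell:

```python
def refine(cell, evidence, split, tol=0.01):
    # Hypothetical adaptive refinement: split a grid cell and keep
    # refining while doing so changes the local evidence by more
    # than a relative tolerance.
    parent_z = evidence(cell)
    children = split(cell)
    child_z = sum(evidence(c) for c in children)
    if abs(child_z - parent_z) > tol * parent_z:
        for c in children:
            refine(c, evidence, split, tol)
```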

We also showed how to apply our method to study the Sun. The analysis based on our current grid and our prior information matches well the findings of Houdek & Gough (2011), and in general fits the up-to-date picture of the Sun. The age of our best model (measured from the pre-main-sequence birth line) is consistent with the meteoritic solar age. The solar model arrives on the ZAMS approximately after appearing on the birth line. We found the same best model whether non-adiabatic or adiabatic frequencies were used. This shows that our method can adequately deal with different shapes of surface effects, even when using the same (flexible) surface-effect model. One requirement, however, is that there exist enough lower-order modes to “anchor” the fit.

To our knowledge, this work is also the first completely grid-based asteroseismic analysis of the Sun, using all the information provided by the frequencies and prior knowledge about the solar fundamental parameters, that results in the need for initial hydrogen, helium, and metal mass fractions more consistent with Asplund et al. (2009) than with the traditional higher-metallicity models. At least for our current grid, these values are required to produce a model that “looks” like the Sun, pulsates like the Sun, and has the correct solar age. We stress that a formal fit to the Sun’s oscillation frequencies (Guenther & Brown, 2004) or even a targeted nonlinear inversion of the oscillation frequencies (Marchenkov et al., 2000) will not necessarily yield the same model as our approach. With such fits it is difficult to provide an unbiased correction for surface effects that does not, at the same time, overly weight the deeper-penetrating modes. Some of the deeper-penetrating modes are sensitive to the base of the convection zone, where the effects of convective overshoot and turbulence, introduced by rotation shears, are not included in the standard models. Inversion methods, in which a standard base model is perturbed to fit the oscillations, are also distinct: even though the perturbed model obtained from an inversion reveals regions of the standard model that are inadequate, e.g., the base of the convection zone, the inversion model is not an actual standard model in the sense of being constrained and generated by the model physics.

We know our best-fit model is inaccurate at the surface, and we suspect it is inaccurate at the base of the convection zone (the latter suspicion based on the inadequate model physics for this region). Regardless, the model is probabilistically the best model from the current grid that matches all the known constraints. We speculate that preferring fits that match the oscillation frequencies at the expense of the other physical constraints may be the reason that helioseismologists have been unable to reconcile the observed solar p-mode frequencies with frequencies derived from standard models based on the Asplund mixture and metal abundance (Serenelli et al., 2009; Guzik & Mussack, 2010). We will pursue these matters in a future study in which we include model grids based on the Asplund mixture.

While the purpose of our analysis of the Sun is to test the details of our model physics, our method can also be used in general asteroseismic investigations. When applying our technique to stars other than the Sun, e.g., recent asteroseismic targets from the Kepler mission, tight prior constraints as in the solar case are generally not available. However, the probability formalism can simply assign uninformative (e.g., uniform) priors for the unknown parameters and still retain all the remaining benefits like treatment of missing mode identification and of finite grid resolution.

For current asteroseismology, however, the most important feature is the flexible treatment of surface effects, which differs from the usual approach of applying the empirical correction of Kjeldsen et al. (2008) to the frequencies. Instead of measuring the empirical correction for the Sun with the help of a reference model, we use a flexible probabilistic model that allows us to measure surface effects in any star given our current asteroseismic grids. We do not rely on the validity of the solar surface-effect correction, and we can test new surface-effect models that deviate from the solar power-law approach. By correctly treating the impact of the surface effects on the model probabilities, this approach also yields correctly propagated uncertainties, and therefore a less biased (but model-dependent) assessment of the stellar fundamental parameters.
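To illustrate the principle (though not our exact probabilistic surface model), a per-frequency marginalization over an unknown surface shift with a uniform prior might be sketched as follows; all inputs are placeholders:

```python
import numpy as np

def marginal_likelihood(nu_obs, sigma, nu_mod, delta_max, n=400):
    # Schematic: average the Gaussian likelihood over an unknown
    # downward surface shift d with a uniform prior on [0, delta_max],
    # i.e., marginalize the surface-effect magnitude for one mode.
    d = np.linspace(0.0, delta_max, n)      # candidate shifts (muHz)
    resid = nu_obs - (nu_mod - d)           # shifted model frequency
    like = np.exp(-0.5 * (resid / sigma) ** 2)
    like /= np.sqrt(2.0 * np.pi) * sigma
    return like.mean()                      # uniform-prior average

# Hypothetical usage for a single mode (muHz):
p = marginal_likelihood(nu_obs=2999.0, sigma=0.2,
                        nu_mod=3002.0, delta_max=10.0)
```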

The results presented in the previous section indicate that the accuracy of such current asteroseismic analyses is still an open question and depends heavily on the number of unaffected lower-order modes. If there are not enough lower-order modes, the surface effects will lead to systematic errors in the fundamental parameter determination. Even in such a case, however, our method can be used to iterate toward improved models by tracking how the evidence changes as better physics are included, hopefully solving the surface-effect problem eventually.

We are very grateful to Werner W. Weiss for his valuable input and fruitful discussions. We also thank the referee for improving the quality of the manuscript. MG and DG acknowledge funding from the Natural Sciences & Engineering Research Council (NSERC) Canada. TK is supported by the FWO-Flanders under project O6260-G.0728.11.

References

  • Alexander & Ferguson (1994) Alexander, D. R., & Ferguson, J. W. 1994, ApJ, 437, 879
  • Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
  • Bahcall et al. (2001) Bahcall, J. N., Pinsonneault, M. H., & Basu, S. 2001, ApJ, 555, 990
  • Bahcall et al. (1995) Bahcall, J. N., Pinsonneault, M. H., & Wasserburg, G. J. 1995, Reviews of Modern Physics, 67, 781
  • Bazot et al. (2008) Bazot, M., Bourguignon, S., & Christensen-Dalsgaard, J. 2008, Mem. Soc. Astron. Italiana, 79, 660
  • Bedding & Kjeldsen (2010) Bedding, T. R., & Kjeldsen, H. 2010, Communications in Asteroseismology, 161, 3
  • Bedding et al. (2010) Bedding, T. R., et al. 2010, ApJ, 713, 935
  • Benomar et al. (2009) Benomar, O., Appourchaux, T., & Baudin, F. 2009, A&A, 506, 15
  • Böhm-Vitense (1958) Böhm-Vitense, E. 1958, ZAp, 46, 108
  • Bouvier & Wadhwa (2010) Bouvier, A., & Wadhwa, M. 2010, Nature Geoscience, 3, 637
  • Brandão et al. (2011) Brandão, I. M., et al. 2011, A&A, 527, A37
  • Broomhall et al. (2009) Broomhall, A.-M., Chaplin, W. J., Davies, G. R., Elsworth, Y., Fletcher, S. T., Hale, S. J., Miller, B., & New, R. 2009, MNRAS, 396, L100
  • Chandrasekhar (1957) Chandrasekhar, S. 1957, An Introduction to the Study of Stellar Structure
  • Chaplin et al. (2010) Chaplin, W. J., et al. 2010, ApJ, 713, L169
  • Cohen & Taylor (1986) Cohen, E. R., & Taylor, B. N. 1986, CODATA Bulletin No. 63 (New York: Pergamon Press)
  • Demarque et al. (2008) Demarque, P., Guenther, D. B., Li, L. H., Mazumdar, A., & Straka, C. W. 2008, Ap&SS, 316, 31
  • Deupree & Beslin (2010) Deupree, R. G., & Beslin, W. 2010, ApJ, 721, 1900
  • Gai et al. (2011) Gai, N., Basu, S., Chaplin, W. J., & Elsworth, Y. 2011, ApJ, 730, 63
  • Gregory (2005) Gregory, P. C. 2005, Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with ‘Mathematica’ Support (Cambridge: Cambridge University Press)
  • Grevesse & Sauval (1998) Grevesse, N., & Sauval, A. J. 1998, Space Sci. Rev., 85, 161
  • Gruberbauer et al. (2009) Gruberbauer, M., Kallinger, T., Weiss, W. W., & Guenther, D. B. 2009, A&A, 506, 1043
  • Guenther (1994) Guenther, D. B. 1994, ApJ, 422, 400
  • Guenther & Brown (2004) Guenther, D. B., & Brown, K. I. T. 2004, ApJ, 600, 419
  • Guenther et al. (1993) Guenther, D. B., Pinsonneault, M. H., & Bahcall, J. N. 1993, ApJ, 418, 469
  • Guzik & Mussack (2010) Guzik, J. A., & Mussack, K. 2010, ApJ, 713, 1108
  • Handberg & Campante (2011) Handberg, R., & Campante, T. L. 2011, A&A, 527, A56
  • Hayashi (1961) Hayashi, C. 1961, PASJ, 13, 450
  • Hickey & Alton (1983) Hickey, J. R., & Alton, B. M. 1983, in Solar Irradiance Variations of Active Region Time Scales, NASA Conference Publication 2310, ed. B. .J. LaBonte, G. A. Chapman, H. S. Hudson, & R. C. Wilson, 43
  • Houdek & Gough (2011) Houdek, G., & Gough, D. O. 2011, MNRAS, 418, 1217
  • Huber et al. (2011) Huber, D., et al. 2011, ApJ, 743, 143
  • Iglesias & Rogers (1996) Iglesias, C. A., & Rogers, F. J. 1996, ApJ, 464, 943
  • Jaynes & Bretthorst (2003) Jaynes, E. T., & Bretthorst, G. L. 2003, Probability Theory (Cambridge: Cambridge University Press)
  • Jørgensen & Lindegren (2005) Jørgensen, B. R., & Lindegren, L. 2005, A&A, 436, 127
  • Kallinger et al. (2010a) Kallinger, T., Gruberbauer, M., Guenther, D. B., Fossati, L., & Weiss, W. W. 2010a, A&A, 510, A106
  • Kallinger et al. (2010b) Kallinger, T., et al. 2010b, A&A, 522, A1
  • Kallinger et al. (2010c) Kallinger, T., et al. 2010c, A&A, 509, A77
  • Kjeldsen et al. (2008) Kjeldsen, H., Bedding, T. R., & Christensen-Dalsgaard, J. 2008, ApJ, 683, L175
  • Krishna Swamy (1966) Krishna Swamy, K. S. 1966, ApJ, 145, 174
  • Lane (1869) Lane, J. H. 1869, Amer. J. Sci., 2nd ser., 50, 57
  • Marchenkov et al. (2000) Marchenkov, K., Roxburgh, I., & Vorontsov, S. 2000, MNRAS, 312, 39
  • Mathur et al. (2010) Mathur, S., et al. 2010, A&A, 511, A46
  • Metcalfe et al. (2010) Metcalfe, T. S., et al. 2010, ApJ, 723, 1583
  • Pont & Eyer (2004) Pont, F., & Eyer, L. 2004, MNRAS, 351, 487
  • Quirion et al. (2010) Quirion, P.-O., Christensen-Dalsgaard, J., & Arentoft, T. 2010, ApJ, 725, 2176
  • Rogers (1986) Rogers, F. J. 1986, ApJ, 310, 723
  • Rogers et al. (1996) Rogers, F. J., Swenson, F. J., & Iglesias, C. A. 1996, ApJ, 456, 902
  • Roxburgh (2005) Roxburgh, I. W. 2005, A&A, 434, 665
  • Serenelli et al. (2009) Serenelli, A. M., Basu, S., Ferguson, J. W., & Asplund, M. 2009, ApJ, 705, L123
  • Takeda et al. (2007) Takeda, G., Ford, E. B., Sills, A., Rasio, F. A., Fischer, D. A., & Valenti, J. A. 2007, ApJS, 168, 297