Constraining the recent star formation history of galaxies: an Approximate Bayesian Computation approach.


Key Words.:
Galaxies: evolution, fundamental parameters

Although galaxies are found to follow a tight relation between their star formation rate and stellar mass, they are expected to exhibit complex star formation histories (SFH), with short-term fluctuations. The goal of this pilot study is to present a method that identifies galaxies undergoing a strong variation of star formation activity in the last tens to hundreds of Myr. In other words, the proposed method determines whether a variation in the last few hundred Myr of the SFH is needed to properly model the SED, rather than a smooth normal SFH. To do so, we analyze a sample of COSMOS galaxies with redshifts between 0.5 and 1 and high signal-to-noise ratio broadband photometry. We apply Approximate Bayesian Computation, a state-of-the-art statistical method for model choice, associated with machine learning algorithms, to provide the probability that a flexible SFH is preferred based on the observed flux density ratios of galaxies. We present the method and test it on a sample of simulated SEDs. The input information fed to the algorithm is a set of broadband UV to NIR (rest-frame) flux ratios for each galaxy. The choice of using colors is made to remove any difficulty linked to normalization when using classification algorithms. The method has an error rate of 21% in recovering the right SFH and is sensitive to SFR variations larger than 1 dex. A more traditional SED fitting method using CIGALE is tested to achieve the same goal, based on the comparison of fits through the Bayesian Information Criterion, but the best error rate obtained is higher, 28%. We apply our new method to the COSMOS galaxy sample. The stellar mass distribution of galaxies with a strong to decisive evidence against the smooth delayed-τ SFH peaks at lower M* compared to galaxies where the smooth delayed-τ SFH is preferred. We discuss the fact that this result does not come from any bias due to our training. Finally, we argue that flexible SFHs are needed to be able to cover the largest SFR-M* parameter space possible.

1 Introduction

The tight relation linking the star formation rate (SFR) and stellar mass of star-forming galaxies, the so-called main sequence (MS), opened a new window in our understanding of galaxy evolution (Elbaz et al. 2007; Noeske et al. 2007). It implies that the majority of galaxies are likely to form the bulk of their stars through steady-state processes rather than violent episodes of star formation. However, this relation has a scatter of 0.3 dex (Schreiber et al. 2015) that is found to be relatively constant at all masses and over cosmic time (Guo et al. 2013; Ilbert et al. 2015; Schreiber et al. 2015). One possible explanation is that this scatter is artificially created by the accumulation of errors in the extraction of photometric measurements and/or in the determination of the SFR and stellar mass, in relation with model uncertainties. However, several studies have found a coherent variation of physical galaxy properties such as the gas fraction (Magdis et al. 2012), Sérsic index and effective radius (Wuyts et al. 2011), and U-V color (e.g., Salmi et al. 2012), suggesting that the scatter is more related to the physics than to measurement and model uncertainties. Furthermore, oscillations of the SFR resulting from a varying infall rate and compaction of star formation have been proposed to explain the MS scatter (Sargent et al. 2014; Scoville et al. 2016; Tacchella et al. 2016) and are even suggested by some simulations (e.g., Dekel and Burkert 2014).

To decipher whether the scatter is indeed due to star formation history (SFH) variations, one must be able to put constraints on the recent SFH of galaxies, to reconstruct their path along the MS. This information is embedded in the spectral energy distribution (SED) of galaxies. However, recovering it through SED modeling is complex and subject to many uncertainties and degeneracies. Indeed, galaxies are expected to exhibit complex SFHs, with short-term fluctuations, requiring sophisticated SFH parametrizations to model them (e.g., Lee et al. 2010; Pacifici et al. 2013; Behroozi et al. 2013; Pacifici et al. 2016; Leja et al. 2019). The implementation of these models is complex and large libraries are needed to model all galaxy properties. Numerous studies have, instead, used simple analytical forms to model galaxy SFHs (e.g., Papovich et al. 2001; Maraston et al. 2010; Pforr et al. 2012; Gladders et al. 2013; Simha et al. 2014; Buat et al. 2014; Boquien et al. 2014; Ciesla et al. 2015; Abramson et al. 2016; Ciesla et al. 2016, 2017). However, SFH parameters are known to be difficult to constrain from broadband SED modeling (e.g., Maraston et al. 2010; Pforr et al. 2012; Buat et al. 2014; Ciesla et al. 2015, 2017; Carnall et al. 2019).

Ciesla et al. (2016) and Boselli et al. (2016) have shown, on a sample of well-known local galaxies benefiting from a wealth of ancillary data, that a drastic and recent decrease of the star formation activity of galaxies can be probed as long as a good UV to NIR rest-frame coverage is available. They showed that the intensity of the variation of star formation activity can be relatively well constrained from broadband SED fitting. Spectroscopy is, however, needed to constrain the time when the change in star formation activity occurred (Boselli et al. 2016). These studies were made on well-known sources of the Virgo cluster, for which the quenching mechanism - ram pressure stripping - is known and HI observations allow a direct verification of the SED modeling results. To go a step further, Ciesla et al. (2018) blindly applied the method to the GOODS-South sample, aiming at identifying sources that underwent a recent and drastic decrease of their star formation activity. They compared the quality of the results from SED fitting using two different SFHs and obtained a sample of galaxies where a modeled recent and strong decrease of SFR produced significantly better fits of the broadband photometry. In this work, we aim at improving the method of Ciesla et al. (2018), gaining in power by applying to a subsample of COSMOS galaxies a state-of-the-art statistical method to perform the SFH choice: Approximate Bayesian Computation (ABC; see, e.g., Marin et al. 2012; Sisson et al. 2018). Based on the observed SED of a galaxy, we want to choose the most appropriate SFH from a finite set. The main idea behind ABC is to rely on many simulated SEDs generated from all the SFHs in competition, using parameters drawn from the prior.

The paper is organized as follows: Sect. 2 describes the astrophysical problem and presents the sample. In Sect. 3 we present the statistical approach as well as the results obtained from a catalog of simulated SEDs of COSMOS-like galaxies. In Sect. 4 we compare the results of this new approach with more traditional SED modeling methods, and apply it to real COSMOS galaxies in Sect. 5. Our results are discussed in Sect. 6.

2 Constraining the recent star formation history of galaxies using broad-band photometry

2.1 Building upon the method of Ciesla et al. (2018)

The main purpose of the study presented in Ciesla et al. (2018) was to probe variations of the SFH that occurred on very short timescales, i.e., a few hundred Myr. Therefore, large-number statistics were needed to be able to catch galaxies at the moment when these variations happen. They aimed at identifying galaxies that recently underwent a rapid (within the last 500 Myr) and drastic downfall of their SFR (more than 80%) from broadband SED modeling, since large photometric samples can provide the statistics needed to pinpoint these objects.

To perform their study, they took advantage of the versatility of the SED modeling code CIGALE1 (Boquien et al. 2019). CIGALE is a SED modeling software package that has two functions: a modeling function to create SEDs from a set of given parameters, and a SED fitting function to derive the physical properties of galaxies from observations. Galaxy SEDs are computed from the UV to the radio, taking into account the balance between the energy absorbed by dust in the UV-NIR and re-emitted in the IR. To build the SEDs, CIGALE uses a combination of modules including the star formation history assumption, either analytical, stochastic, or from the outputs of simulations (e.g., Boquien et al. 2014; Ciesla et al. 2015, 2017), the stellar emission from stellar population models (Bruzual and Charlot 2003; Maraston 2005), the nebular lines, and the attenuation by dust (e.g., Calzetti et al. 2000; Charlot and Fall 2000).

Ciesla et al. (2018) compared the results of SED fitting on a sample of GOODS-South galaxies using two different SFHs: a normal delayed-τ SFH and a flexible SFH modeling a truncation of the SFH. The normal delayed-τ SFH is given by the equation:

SFR(t) ∝ (t / τ²) × exp(−t / τ),    (1)

where SFR is the star formation rate, t the time, and τ the e-folding time. Examples of delayed-τ SFHs are shown in Fig. 1 for different values of τ. The flexible SFH is an extension of the delayed-τ model:

SFR(t) ∝ (t / τ²) × exp(−t / τ)   when t ≤ t_flex,
SFR(t) ∝ r_SFR × SFR(t = t_flex)   when t > t_flex,    (2)

where t_flex is the time at which the star formation is instantaneously affected, and r_SFR is the ratio between SFR(t > t_flex) and SFR(t = t_flex):

r_SFR = SFR(t > t_flex) / SFR(t = t_flex).    (3)

A representation of flexible SFHs is also shown in Fig. 1. The normal delayed-τ SFH is, at first order, a particular case of the flexible SFH for which r_SFR = 1.
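To make Eqs. (1)-(3) concrete, the short Python sketch below evaluates the two SFHs numerically. The normalization is arbitrary (only flux ratios are used later), and the function and variable names are illustrative; this is not the internal CIGALE implementation.

```python
import numpy as np

def delayed_tau_sfh(t, tau):
    """Smooth delayed-tau SFH of Eq. (1): SFR(t) proportional to (t/tau^2)*exp(-t/tau)."""
    return (t / tau**2) * np.exp(-t / tau)

def flexible_sfh(t, tau, t_flex, r_sfr):
    """Delayed-tau SFH with an instantaneous change at t_flex (Eqs. 2-3).

    For t > t_flex the SFR is constant, equal to r_sfr * SFR(t_flex):
    r_sfr > 1 models a burst, r_sfr < 1 a quenching.
    """
    sfr_at_flex = delayed_tau_sfh(t_flex, tau)
    return np.where(t <= t_flex, delayed_tau_sfh(t, tau), r_sfr * sfr_at_flex)

# Illustrative values: a galaxy with tau = 1 Gyr observed 5 Gyr after the
# onset of star formation, quenched (r_sfr = 0.2) 0.3 Gyr before observation.
t = np.linspace(1e-3, 5.0, 500)          # cosmic time since onset of SF, Gyr
sfr_smooth = delayed_tau_sfh(t, tau=1.0)
sfr_quench = flexible_sfh(t, tau=1.0, t_flex=4.7, r_sfr=0.2)
```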

To differentiate between the two models, Ciesla et al. (2018) estimated the Bayesian Information Criterion (BIC, see Sect. 3.2) associated with the two models and put conservative limits on the difference between the two BICs to select the most suitable model. They showed that a handful of sources were better fitted using the flexible SFH, which assumes a recent instantaneous break in the SFH, compared to the more commonly used delayed-τ SFH. They also showed that these galaxies indeed have physical properties that differ from those of the main population and are characteristic of sources in transition.

The limited number of sources identified in the study of Ciesla et al. (2018) (102 out of 6,680) was due to their choice to be conservative in their approach and find a clean sample of sources that underwent a rapid quenching of star formation. Indeed, they imposed that the instantaneous decrease of SFR was more than 80% and that the BIC difference was larger than 10. These criteria prevent a complete study of rapid variations in the SFH of galaxies, as many of them would be missed. Furthermore, only decreases of SFR were considered and not the opposite, that is, star formation bursts. Finally, their method is time consuming, as one has to run the CIGALE code twice, once per SFH model considered, to perform the analysis. To go beyond these drawbacks and improve the method of Ciesla et al. (2018), we consider in the present pilot study a statistical approach, Approximate Bayesian Computation, combined with classification algorithms, to improve both the accuracy and the efficiency of their method.

Figure 1: Examples of delayed-τ SFHs considered in this work (star formation rate as a function of cosmic time). Different SFHs using τ = 0.5, 1, 5, and 10 Gyr are shown to illustrate the impact of this parameter (light green and dark green solid lines). An example of delayed-τ SFH with flexibility is shown in solid dark green, with the flexibility in green dashed lines for (τ = 1 Gyr & r_SFR = 0.3) and (τ = 0.5 Gyr & r_SFR = 7).

2.2 The sample

In this pilot work, we use the wealth of data available on the COSMOS field. The choice of this field is driven by the good spectral coverage of the data and the large statistics of sources available.

We draw a sample from the COSMOS catalog of Laigle et al. (2016). A first cut is made to restrict ourselves to galaxies with a stellar mass (Laigle et al. 2016) higher than 10^8.5 M⊙. Then, we restrict the sample to a relatively narrow range of redshift to minimize its impact on the SED and focus our method on the effect of the SFH on the SED. We thus select galaxies with redshift between 0.5 and 1, ensuring sufficient statistics in our sample. We use the broad bands of the COSMOS catalog, listed in Table 1. For galaxies with redshifts between 0.5 and 1, Spitzer/IRAC3 probes the 2.9-3.9 μm rest-frame wavelength range and Spitzer/IRAC4 probes the 4-5.3 μm rest-frame range. These wavelength ranges correspond to the transition between stellar and dust emission. To keep this pilot study simple, we only consider the UV-to-NIR part of the spectrum, unaffected by dust emission.

Instrument    Band    λ (μm)
GALEX         FUV     0.153
GALEX         NUV     0.229
CFHT          u*      0.355
SUBARU        B       0.443
SUBARU        V       0.544
SUBARU        r       0.622
Suprime-Cam   i+      0.767
Suprime-Cam   z++     0.902
VISTA         Y       1.019
VISTA         J       1.250
VISTA         H       1.639
VISTA         Ks      2.142
Spitzer       IRAC1   3.6
Spitzer       IRAC2   4.5
Table 1: COSMOS broad bands used in this work.

One aspect of the ABC method that still has to be developed is how to handle missing data. In our astrophysical application, we identify several types of missing data. First, there is the impact of redshifting, that is, the fact that a galaxy is undetected at wavelengths shorter than the Lyman break at its redshift. Here, the absence of detection provides information on the galaxy encoded in its SED. Another type of missing data is linked to the definition of the photometric surveys: the spatial coverage is not exactly the same in every band, and the different sensitivity limits lead to undetected galaxies due to the faintness of their fluxes. To keep the statistical problem simple in this pilot study, we remove galaxies that are not detected in all bands. This strong choice is motivated by the fact that the ABC method that we use in this pilot study has not been tested and calibrated in the presence of missing data such as extragalactic field surveys produce. The impact of missing data on this method would require an important effort of statistical research, which is beyond the scope of this paper.

As an additional constraint, we select galaxies with an SNR equal to or greater than 10. However, given the importance of the NUV band (Ciesla et al. 2016, 2018) and the faintness of its fluxes compared to the other bands, we relax our criterion to an SNR of 5 for this band. The first motivation for this cut is again to keep our pilot study simple, but we show in Appendix A that this SNR cut is indeed relevant. In the following, we consider a final sample composed of 12,380 galaxies, for which the stellar mass distribution as a function of redshift is shown in Fig. 2 (top panel); the distribution of the rejected sources is shown in the bottom panel of the same figure.

The stellar mass distribution, from Laigle et al. (2016), of the final sample is shown in Fig. 3. As a sanity check, we verify that above the stellar mass completeness limit (Fig. 3), the stellar mass, star formation rate, and specific star formation rate distributions of the final sample are similar to those of the parent sample. Our selection criteria mostly affect low-mass galaxies, which is expected since we made SNR cuts.

Given the wide ranges of redshift, stellar masses, and SED shapes considered in our study, there is a normalization aspect that needs to be taken into account. Indeed, this diversity in galaxies' properties translates into a large distribution of fluxes in a given photometric band, spanning several orders of magnitude: 8 orders of magnitude in the FUV band and 6 in the Ks band, for instance. This parameter space is very challenging for classification algorithms. To avoid this problem, we compute flux ratios. First, we combine each flux with the closest one in terms of wavelength. This set of colors provides information on the shape of the SED, but effects of the SFH are also expected on wider wavelength scales. As discussed in Ciesla et al. (2018), a discrepancy between the UV and NIR emission, assuming a smooth delayed-τ SFH, is the signature that we are looking for, indicating a possible change in the recent SFH. To be able to probe these effects, we also normalize each photometric band to the Ks flux and add this set of colors to the previous one. Finally, we set the flux ratios FUV/NUV and FUV/Ks to a fixed value when the FUV flux is missing, to account for the missing FUV flux density due to the Lyman break at these redshifts.
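As an illustration of this construction, the following sketch builds the two sets of flux ratios from a dictionary of band fluxes. The band keys are hypothetical placeholders for the filters of Table 1, and the special handling of the missing FUV flux is only indicated in a comment.

```python
import numpy as np

# Hypothetical band keys standing in for the filters of Table 1.
BANDS = ["FUV", "NUV", "u", "B", "V", "r", "ip", "zpp",
         "Y", "J", "H", "Ks", "IRAC1", "IRAC2"]

def summary_statistics(fluxes):
    """Build the flux-ratio summary statistics fed to the classifier.

    `fluxes` maps band name -> flux density (any common unit).
    Two sets of colors are concatenated: ratios of adjacent bands, and every
    band normalized by Ks, so that the overall SED normalization drops out.
    A missing FUV flux (Lyman break) would be replaced by a fixed value,
    as described in the text; this is not implemented here.
    """
    adjacent = [fluxes[b1] / fluxes[b2]
                for b1, b2 in zip(BANDS[:-1], BANDS[1:])]
    to_ks = [fluxes[b] / fluxes["Ks"] for b in BANDS if b != "Ks"]
    return np.array(adjacent + to_ks)
```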

Figure 2: Stellar mass from Laigle et al. (2016) as a function of redshift for the final sample (top panel) and for the galaxies rejected by our criteria (bottom panel).
Figure 3: Distribution of stellar mass for the sample before the SNR cut (grey) and for the final sample (green). The red dotted line indicates the limit above which our final sample is considered complete. The stellar masses shown here are from Laigle et al. (2016).

3 Statistical approach

In this section, we present the statistical approach that we use to infer the most suitable SFH from photometric data. This new approach is applied to the sample described in Sect. 2.2 as a pilot study, but it can be applied to other datasets and used to test properties other than the SFH.

3.1 Statistical modeling

As explained in the previous section, we want to distinguish between two SFH models: the first one is the smooth delayed-τ SFH, or SFH model M0, and the second is the same with a flexibility in the last 500 Myr, or SFH model M1, as presented in Sect. 2.1. The smooth delayed-τ SFH is thus a specific case of the flexible SFH, obtained when there is no burst nor quenching (r_SFR = 1).

Let x_obs denote the broadband data collected for a given galaxy. The statistical issue of deciding which SFH better fits the data can be seen as the Bayesian testing procedure distinguishing between the two hypotheses

H0: r_SFR = 1    versus    H1: r_SFR ≠ 1.

The procedure decides in favor of a possible change in the recent history when r_SFR is significantly different from 1, based on the data x_obs. Conducting a Bayesian testing procedure on the data of a given galaxy is exactly the same as performing a Bayesian model choice between two nested statistical models (Robert 2007).

The first statistical model (M0), that is, the delayed-τ SFH, is composed as follows. Let θ0 denote the vector of all parameters necessary to compute the mock SED, denoted s(θ0). In particular, θ0 includes the parameters of the SFH. We denote π0 the prior distribution over the parameter space of this statistical model. Likewise for the second SFH model M1: let θ1 = (θ0, t_flex, r_SFR) be the vector of all parameters of the delayed-τ + flex SFH. This vector includes the same parameters as the previous SFH, plus the two added parameters t_flex and r_SFR. Let π1 be the prior distribution over the parameter space of the second model. We furthermore add prior probabilities on the SFH index, P(M0) and P(M1), which are both 1/2 if we want to remain noninformative.

Finally, we assume Gaussian noise. Thus, the likelihood of x_obs given θ under the statistical model Mk is a multivariate Gaussian distribution, centered on the mock SED s(θ) with a diagonal covariance matrix. The standard deviations σ_j are set according to the assumed value of the SNR of the observations. In particular, it means that, up to a constant, the log-likelihood is the negative χ²-distance between the observed SED x_obs and the mock s(θ):

log f_k(x_obs | θ) = −(1/2) Σ_j [x_obs,j − s_j(θ)]² / σ_j² + constant.    (4)
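A minimal numerical transcription of Eq. (4), assuming per-band Gaussian uncertainties; the arrays are placeholders for an observed SED and a CIGALE mock.

```python
import numpy as np

def log_likelihood(obs_flux, obs_err, model_flux):
    """Gaussian log-likelihood of Eq. (4), up to an additive constant.

    obs_flux, model_flux : observed and mock flux densities per band,
    obs_err              : per-band standard deviations set by the assumed SNR.
    """
    chi2 = np.sum(((obs_flux - model_flux) / obs_err) ** 2)
    return -0.5 * chi2
```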

3.2 Bayesian model choice

Bayesian model choice (Robert 2007) relies on the evaluation of the posterior probabilities of the models which, using Bayes' formula, are given by

P(Mk | x_obs) = P(Mk) m_k(x_obs) / [P(M0) m_0(x_obs) + P(M1) m_1(x_obs)],    (5)

where

m_k(x_obs) = ∫ f_k(x_obs | θ_k) π_k(θ_k) dθ_k    (6)

is the likelihood integrated over the prior distribution of the k-th statistical model. Seen as a function of x_obs, m_k(x_obs) is called the evidence or the integrated likelihood of the k-th model.

The Bayesian model choice procedure innately embodies Occam's razor. This principle consists in choosing the simplest model as long as it is sufficient to explain the observation². In this study, the two parametric SFHs are nested: when the parameter r_SFR of the SFH M1 (flexible delayed-τ) is set to 1, we obtain an SFH that also belongs to the model M0 (delayed-τ). Because of Occam's razor, if we choose the SFH with the highest posterior probability when analyzing an observed SED that can be explained by both SFHs, we choose the simplest model M0.

To analyze the dataset x_obs, it remains to compute the posterior probabilities. In our situation, the evidence m_k(x_obs) of each statistical model is intractable: it cannot be easily evaluated numerically. Indeed, the function that computes the mock SED s(θ) given θ and the model Mk is fundamentally a black-box numerical function.

There are two methods to solve this problem. First, we can use a Laplace approximation of the integrated likelihood. The resulting procedure is the one that chooses the SFH with the smallest Bayesian Information Criterion (BIC). Denoting θ̂_k the maximum likelihood estimate under the SFH Mk, χ²(θ̂_k) the non-reduced χ²-distance of the fit, d_k the number of degrees of freedom of model Mk, and n the number of observed photometric bands, the BIC of SFH Mk is given by

BIC_k = χ²(θ̂_k) + d_k ln(n).    (7)

Choosing the model with the smallest BIC is therefore an approximate method to find the model with the highest posterior probability. The results of Ciesla et al. (2018) based on the BIC are justified on this ground. But the Laplace approximation assumes that the number of observed photometric bands is large enough. Moreover, determining the number of degrees of freedom of a statistical model can be a complex question. For all these reasons, we expect to improve on the BIC-based method of Ciesla et al. (2018) in the present paper.
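For reference, Eq. (7) and the decision rule of Ciesla et al. (2018) can be written in a few lines; the variable names are illustrative.

```python
import numpy as np

def bic(chi2_min, n_free_params, n_bands):
    """Bayesian Information Criterion of Eq. (7).

    chi2_min      : non-reduced chi-square of the best fit,
    n_free_params : degrees of freedom of the SFH model,
    n_bands       : number of observed photometric bands.
    """
    return chi2_min + n_free_params * np.log(n_bands)

# Decision rule used by Ciesla et al. (2018): prefer the flexible SFH only
# when delta_bic = bic_delayed - bic_flex exceeds a conservative threshold (10).
```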

Clever Monte Carlo algorithms to compute the evidence, Eq. (6), of each statistical model give a much sharper approximation of the posterior probability of each SFH. We decided to rely on Approximate Bayesian Computation (ABC; see, e.g., Marin et al. 2012; Sisson et al. 2018) to compute P(Mk | x_obs). We could have considered other methods (Vehtari and Ojanen 2012) such as bridge sampling, reversible jump MCMC, or nested sampling. But these methods require separate runs of the algorithm to analyze each galaxy, and probably more than a few minutes per galaxy. We expect to design a faster method here with ABC.

Finally, to interpret the results, we rely on the Bayes factor of the delayed-τ + flex SFH (M1) against the delayed-τ SFH (M0), given by

BF = m_1(x_obs) / m_0(x_obs) = [P(M1 | x_obs) / P(M0 | x_obs)] × [P(M0) / P(M1)].

The computed value of the Bayes factor is compared to the standard thresholds established by Jeffreys (see, e.g., Robert 2007) in order to evaluate the strength of the evidence in favor of the delayed-τ + flex SFH when BF > 1. Depending on the value of the Bayes factor, Bayesian statisticians consider the evidence in favor of model M1 to be either barely worth mentioning (BF between 1 and 10^(1/2)), substantial (between 10^(1/2) and 10), strong (between 10 and 10^(3/2)), very strong (between 10^(3/2) and 100), or decisive (larger than 100).
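The mapping from a Bayes factor to the Jeffreys grades quoted above can be sketched as follows; the threshold values are the standard ones and are assumed here rather than quoted from this work.

```python
def jeffreys_grade(bayes_factor):
    """Map a Bayes factor BF (flexible vs. smooth SFH) onto the Jeffreys scale.

    The thresholds are the standard ones (see, e.g., Robert 2007) and are
    assumed here; BF <= 1 means the smooth delayed-tau SFH is favored.
    """
    if bayes_factor <= 1:
        return "smooth SFH favored"
    if bayes_factor < 10 ** 0.5:
        return "barely worth mentioning"
    if bayes_factor < 10:
        return "substantial"
    if bayes_factor < 10 ** 1.5:
        return "strong"
    if bayes_factor < 100:
        return "very strong"
    return "decisive"
```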

3.3 The Approximate Bayesian Computation method

To avoid the difficult computation of the evidence, Eq. (6), of each model and get a direct approximation of P(Mk | x_obs), we resort to the family of methods named ABC model choice (Marin et al. 2018).

  Input:

  • x_obs, the observed SED we want to analyse

  • P(Mk), prior probability of the k-th statistical model

  • π_k, prior distribution of the parameter θ_k of the k-th statistical model

  • f_k(x | θ_k), probability density of a SED x given the k-th statistical model and the parameter θ_k, see Eq. (4)

  • N, number of simulations from the prior

  • η, a function that computes the summary statistics of a SED

Output:
An approximation of the posterior probability P(Mk | x_obs) of the k-th statistical model given the observed data, for all k.

  1. For i = 1 to N

  2. Generate M^(i) from the prior (P(M0), P(M1))

  3. Generate θ^(i) from the prior π_{M^(i)}

  4. Generate x^(i) from the model f_{M^(i)}( · | θ^(i))

  5. Compute and store η(x^(i))

  6. End For

  7. Compute the approximation of P(Mk | x_obs) with Eq. (8) for all k

Table 2: Basic ABC model choice algorithm that aims at computing the posterior probabilities of statistical models in competition to explain the data.

The main idea behind the ABC framework is that we can avoid the evaluation of the likelihood and directly estimate a posterior probability by relying on random simulations (M^(i), θ^(i), x^(i)), i = 1, …, N, from the joint distribution. Here the simulated x^(i) are obtained as follows: first, we draw an SFH index M^(i) at random, with prior probability P(Mk); then we draw θ^(i) according to the prior π_{M^(i)}; finally, we compute the mock SED with CIGALE and add Gaussian noise to obtain x^(i). This last step is equivalent to sampling from the likelihood given in Eq. (4). Basically, the posterior probability P(Mk | x_obs) can be approximated by the frequency of SFH Mk among the simulations close enough to x_obs.

To measure how close x^(i) is to x_obs, we introduce a distance ρ between vectors of summary statistics η(·), and we set a threshold ε: simulations that satisfy ρ(η(x^(i)), η(x_obs)) ≤ ε are considered "close enough" to x_obs. The summary statistics are primarily introduced as a way to handle feature extraction, whether it is for dimensionality reduction or for data normalization. For the present study, the components of the vector η(x) are flux ratios from the SED x, chosen for normalization purposes. Mathematically speaking, P(Mk | x_obs) is thus approximated by

P̂(Mk | x_obs) = Σ_{i=1…N} 1{M^(i) = Mk} 1{ρ(η(x^(i)), η(x_obs)) ≤ ε}  /  Σ_{i=1…N} 1{ρ(η(x^(i)), η(x_obs)) ≤ ε}.    (8)

The resulting algorithm, named basic ABC model choice, is given in Table 2. Finally, note that, if K is the number of simulations close enough to x_obs, the last step of Table 2 can be seen as a K-nearest-neighbor (K-nn) method predicting the SFH index M based on the features (or covariates) η(x).
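A minimal implementation of the basic ABC model-choice estimator of Eq. (8), using a Euclidean distance between summary statistics; the array names and the choice of distance are assumptions of this sketch.

```python
import numpy as np

def abc_model_choice(eta_obs, eta_sim, model_sim, epsilon):
    """Basic ABC model-choice estimate of P(M1 | x_obs), Eq. (8).

    eta_obs   : (d,) summary statistics (flux ratios) of the observed SED,
    eta_sim   : (N, d) summary statistics of the N simulations,
    model_sim : (N,) array of 0/1 flags of the SFH used for each simulation,
    epsilon   : acceptance threshold on the Euclidean distance.
    """
    dist = np.linalg.norm(eta_sim - eta_obs, axis=1)
    accepted = dist <= epsilon
    if not accepted.any():
        raise ValueError("epsilon too small: no simulation accepted")
    # Frequency of the flexible SFH (M1) among the accepted simulations
    return model_sim[accepted].mean()
```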

The K-nn can be replaced by other machine learning algorithms to obtain sharper results. Indeed, the K-nn is known to perform poorly when the dimension of η(x) is large. For instance, Pudlo et al. (2016) decided to rely on the method called Random Forest (Breiman 2001). The machine-learning-based ABC algorithm is given in Table 3. All machine learning models given below are classification methods. In our context, they aim at separating the simulated datasets depending on the SFH (M0 or M1) that was used to generate them. The machine learning model is fitted on the catalog of simulations (M^(i), η(x^(i))), that is to say, it learns how to predict M based on the value of η(x). To this purpose, we fit a function p̂(η(x)) approximating P(M1 | η(x)) and perform the classification task on a new dataset by comparing the fitted p̂ to 1/2: if p̂(η(x)) ≥ 1/2, the dataset is classified as generated by SFH M1; otherwise, it is classified as generated by SFH M0. The function p̂ depends on some internal parameters (β, γ) not explicitly shown in the notation. For example, this function can be computed with the help of a neural network. A neuron here is a mathematical function that receives inputs and produces an output based on a weighted combination of the inputs; each neuron processes the received data and transmits its output downstream in the network. Generally, the internal parameters are of two kinds: the coordinates of γ are optimized on data with a specific algorithm, and the coordinates of β are called tuning parameters (or hyperparameters). For instance, with neural networks, β represents the architecture of the network and the amount of dropout, while γ represents the collection of the weights in the network.

The gold standard machine learning practice is to split the catalog of data into three parts: the training catalog and the validation catalog, which are both used to fit the machine learning models, and the test catalog, which is used to compare the algorithms fairly and get a measure of the error committed by the models. Each fit actually requires two catalogs (training and validation) because modern machine learning models are fitted to the data with a two-step procedure. We detail the procedure for a simple dense neural network and refer to Appendix B for the general case. The hyperparameters we consider are the number of hidden layers, the number of nodes in each layer, and the amount of dropout. We fix a range of possible values for each hyperparameter (see Table 5). We select a possible combination of hyperparameters β and train the corresponding neural network on the training catalog. Once the weights γ are optimized on the training catalog, we evaluate the resulting neural network on the validation catalog and associate the obtained classification error with the combination of hyperparameters used. We follow the same training and evaluation procedure for several hyperparameter combinations β, and we select the one yielding the lowest classification error. At the end of the process, we evaluate the classification error on the test catalog using the selected combination of hyperparameters.
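The train/validate/test loop described above can be sketched as follows with scikit-learn. Since scikit-learn's MLPClassifier has no dropout, the L2 penalty alpha stands in for it here; the grid values, array names, and layer sizes are illustrative, not the values of Table 5.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Candidate hyperparameter combinations (illustrative values only).
GRID = [{"hidden_layer_sizes": size, "alpha": alpha}
        for size in [(64,), (64, 64)]
        for alpha in [1e-4, 1e-3, 1e-2]]

def select_and_evaluate(eta_train, y_train, eta_val, y_val, eta_test, y_test):
    """Two-step fit: tune on the validation catalog, report the test error once."""
    best_err, best_params = np.inf, None
    for params in GRID:
        clf = MLPClassifier(max_iter=500, **params).fit(eta_train, y_train)
        err = 1.0 - clf.score(eta_val, y_val)          # validation error
        if err < best_err:
            best_err, best_params = err, params
    best = MLPClassifier(max_iter=500, **best_params).fit(eta_train, y_train)
    return best, 1.0 - best.score(eta_test, y_test)    # error on the test catalog
```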

The test catalog is deliberately left out during the training and the tuning of the machine learning methods. Indeed, comparing the accuracy of the approximation returned by each machine learning method on the test catalog ensures a fair comparison between the methods, on data unseen during the fit of p̂.

In this pilot study, we tried different machine learning methods and compared their accuracy:

  • logistic regression and linear discriminant analysis (Friedman et al. 2001), which are almost equivalent linear models and serve only as baseline methods,

  • neural networks with one or two hidden layers, the core of deep learning methods, which have proved to yield sharp results on various signal datasets (images, sounds),

  • classification tree boosting (with XGBoost, see Chen and Guestrin 2016), which is considered a state-of-the-art method in many applied situations and is often the most accurate algorithm when correctly calibrated on a large catalog.

We did not try Random Forest since it cannot be run on a simulation catalog as large as the one we rely on in this pilot study. Indeed, the motivation of the proposed methodology is to bypass the heavy computational burden of MCMC-based algorithms to perform statistical model choice. In this study, Random Forest was not able to fulfill this aim, unlike the classification methods given above.
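A hedged sketch of how the favored XGBoost classifier could be fitted on the simulated catalog: n_estimators and max_depth follow Table 5, while the learning rate (not quoted here) and the array names are placeholders.

```python
from xgboost import XGBClassifier

def fit_abc_classifier(eta_train, y_train):
    """Fit the XGBoost classifier approximating P(M1 | eta(x)).

    eta_train : (N, d) array of flux ratios of the simulated catalog,
    y_train   : (N,) array of 0/1 SFH flags (1 = flexible SFH).
    n_estimators and max_depth follow Table 5; learning_rate is a
    placeholder since its tuned value is not quoted here.
    """
    clf = XGBClassifier(n_estimators=400, max_depth=12, learning_rate=0.1,
                        n_jobs=-1)
    return clf.fit(eta_train, y_train)

# Usage (hypothetical arrays):
# p_hat = fit_abc_classifier(eta_train, y_train).predict_proba(eta_obs)[:, 1]
```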

  Input and output: same as Table 2

  1. Generate N simulations (M^(i), θ^(i), x^(i)) from the joint distribution

  2. Summarize all simulated datasets (photometric SEDs) with η and store all simulated (M^(i), η(x^(i))) into a large catalog

  3. Split the catalog into three parts: training, validation, and test catalogs

  4. Fit each machine learning method on the training and validation catalogs to approximate P(M1 | x_obs) with p̂(η(x_obs))

  5. Choose the best machine learning method by comparing their classification errors on the test catalog

  6. Return the approximation p̂(η(x_obs)) computed with the best method

 

Table 3: Machine learning based ABC model choice algorithm that aims at computing the posterior probability of two statistical models in competition to explain the data

3.4 Building synthetic photometric data

To compute or fit galaxies' SEDs with CIGALE, one has to provide a list of prior values for each model parameter. The comprehensive module selection in CIGALE allows one to fully specify the SFH and how the mock SED is computed. The list of prior values for each module's parameters specifies the prior distribution π_k. CIGALE uses this list of values or ranges to sample from the prior distribution by picking values on a regular grid. This has the inconvenience of being very sensitive to the number of parameters (if p is the number of parameters, and if we assume n different values for each parameter, the size of the grid is n^p), and of producing simulations that share identical parameter values. Instead, in this study, we advocate in favor of drawing the values of all parameters at random from the prior distribution, which is uniform over the specified ranges or lists of values. The ranges for each model parameter are chosen to be consistent with those used by Ciesla et al. (2018) and are given in Table 4. In particular, the catalog of simulations drawn at line 1 of Table 3 follows this rule. Each SFH (the simple delayed-τ or the delayed-τ + flex) is then convolved with the stellar population models of Bruzual and Charlot (2003). The attenuation law described in Charlot and Fall (2000) is then applied to the SED. Finally, CIGALE convolves each mock SED with the COSMOS-like set of filters described in Table 1.

Parameter Value
Delayed-τ SFH
age (Gyr) 0.5; 9
τ (Gyr) 0.1; 10
Flexible delayed-τ SFH
age (Gyr) 0.5; 9
τ (Gyr) 0.1; 10
t_flex (Myr) 10, 100, 450
log r_SFR -6; 6
Dust attenuation
0.1; 4
Table 4: Prior ranges of the parameters used to generate the simulated table of SEDs, with redshift between 0.5 and 1.
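A possible way to draw the simulation parameters at random from the priors of Table 4, as advocated above. The parameter names and the reading of the table (in particular which range is the age and which is τ, and the use of log r_SFR) are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def draw_prior(n, flexible):
    """Draw n parameter sets at random from the (reconstructed) priors of Table 4."""
    params = {
        "age_gyr": rng.uniform(0.5, 9.0, n),      # assumed to be the age range
        "tau_gyr": rng.uniform(0.1, 10.0, n),     # e-folding time
        "attenuation": rng.uniform(0.1, 4.0, n),  # dust attenuation parameter
    }
    if flexible:
        params["t_flex_myr"] = rng.choice([10, 100, 450], n)
        params["log_r_sfr"] = rng.uniform(-6.0, 6.0, n)
    return params

# Example: half of the catalog drawn from each SFH model (size illustrative)
prior_m0 = draw_prior(10_000, flexible=False)
prior_m1 = draw_prior(10_000, flexible=True)
```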

4 Application to synthetic photometric data

We first applied our methodology to simulated photometric data to evaluate its accuracy. The main interest of such synthetic data is that we control all parameters (flux densities, colors, physical parameters). The whole catalog was composed of N simulated datasets. We split this catalog at random into three parts, as explained in Sect. 3.3, and add an extra catalog for the comparison with CIGALE:

  • sources to compose the training catalog,

  • sources to compose the validation catalog,

  • sources to compose the test catalog,

  • 30,000 additional sources to compose the extra catalog for comparison with CIGALE.

The size of the extra catalog is much smaller to limit the amount of computation time required by CIGALE to run its own algorithm of SED fitting.

4.1 Calibration and evaluation of the machine learning methods on the simulated catalogs

Method Tuning parameter Explored range Best value Error rate (%)
Logistic regression
Linear Discriminant Analysis
K-nearest neighbors number of neighbors (K)
One-hidden-layer neural network dropout 0.2
nodes in the layer
Two-hidden-layer neural network dropout 0.2
nodes in each layer
Tree boosting (XGBoost) number of trees (nround) 400
depth of each tree (max_depth) 12
learning rate (eta)

The best value of each tuning parameter was found by comparing error rates on the validation catalog.
The error rate given in the last column is computed on the test catalog.

Table 5: Calibration and test of machine learning methods

In this section, we present the calibration of the machine learning techniques and their error rates on the test catalog. We then try to interpret the results given by our methodology.

As described in Sect. 3.3, we trained and calibrated the machine learning methods on the training and validation catalogs. The results are given in Table 5. Neither logistic regression nor Linear Discriminant Analysis has tuning parameters that need to be calibrated on the validation catalog; their error rates on the test catalog are the highest of the methods considered. The modern machine learning methods (K-nearest neighbors, neural networks, and tree boosting) have been calibrated on the validation catalog. The best values of the explored ranges of the tuning parameters were found by comparing error rates on the validation catalog and are given in Table 5. The error rates of these methods on the test catalog are significantly lower. Thus, it is clear that there is a significant gain in using non-linear methods. But we see no obvious benefit in training a more complex algorithm (such as a deeper neural network) for this problem, although it could become useful when increasing the number of photometric bands and the redshift range. Finally, we favor XGBoost for our study. Indeed, while neural networks could probably be tuned more precisely to match or exceed its performance, we find XGBoost easier to tune and to interpret.

Machine learning techniques that fit p̂ are often affected by some bias and may require a correction (Niculescu-Mizil and Caruana 2012). Such classification algorithms compare the estimated probabilities of each SFH given η(x) and return the most likely SFH. The output can be correct even if the probabilities are biased towards 0 for small probabilities or towards 1 for large probabilities. A standard reliability check shows no such problem for our XGBoost classifier. To this aim, the test catalog is divided into bins of predicted probability p̂ (e.g., a first bin composed of simulations with p̂ between 0 and 0.1, a second with p̂ between 0.1 and 0.2, and so on). The reliability check ensures that the frequency of the SFH M1 among each bin falls within the range of predicted probabilities defining that bin.
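The reliability check can be sketched as follows: the predicted probabilities are binned and, in each bin, the observed frequency of the flexible SFH is compared with the bin's range. The number of bins is an assumption of this sketch.

```python
import numpy as np

def reliability_check(p_hat, y_true, n_bins=10):
    """Compare predicted probabilities with observed frequencies in each bin.

    p_hat  : predicted probabilities of the flexible SFH on the test catalog,
    y_true : 0/1 flags of the SFH actually used to simulate each SED.
    For a well-calibrated classifier, the observed frequency in each bin
    falls within the bin's range; n_bins = 10 is an assumption of this sketch.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (p_hat >= lo) & (p_hat < hi)
        if in_bin.any():
            freq = y_true[in_bin].mean()
            print(f"p_hat in [{lo:.1f}, {hi:.1f}): observed frequency = {freq:.2f}")
```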

We studied the ability of our methodology to distinguish the SFH of the simulated sources of the test catalog. The top panel of Fig. 4 shows the joint distribution of p̂ and log r_SFR in the test catalog. Naively, a perfect result would have half of the sample with p̂ = 0 and the other half with p̂ = 1. In fact, when r_SFR is close to 1, the SFH M0 is also suitable since the models are nested. In this case, Occam's razor favors the model M0, and p̂ must be less than 1/2 (see Sect. 3.2). On the contrary, for the SEDs solely explained by the SFH model M1, p̂ is close to 1.

The distribution of p̂ (Fig. 4, bottom left panel) has two peaks, one at low values of p̂ and one close to 1. The peak at low p̂, and not at 0, is expected when one of the models proposed to the choice is included in the other. In the distribution of p̂, 20% of the sources have a value higher than 0.97 and 52% a value lower than 0.4. In the right panels of Fig. 4, we show the distributions of log r_SFR for galaxies split according to their p̂. With a perfect method, galaxies with a low p̂ should have r_SFR = 1. Here we see indeed a deficit of galaxies around log r_SFR = 0; however, the range of log r_SFR affected extends from roughly −1 to 1. Therefore, the method is not able to identify galaxies having a variability of their SFR if this variability is only between one tenth of and ten times the SFR before the variability began. In other words, the method is sensitive to SFR variations larger than 1 dex. This is confirmed by the distribution of log r_SFR for galaxies with a high p̂ (Fig. 4, bottom right panel). However, a few sources with a high p̂ are associated with low values of |log r_SFR|. The complete distribution of p̂ as a function of log r_SFR is shown in Fig. 4.

Figure 4: Study of the statistical power of p̂ to detect short-term variations with respect to the value of r_SFR. Top left panel: Joint distribution of p̂ and log r_SFR. Bottom left panel: Distribution of p̂ obtained from the test catalog. Right panels: Marginal distributions of log r_SFR for mock sources with low p̂ (top right panel) and high p̂ (bottom right panel).

4.2 Importance of particular flux ratios

We try to find which part of the dataset influences most the choice of SFH given by our method. The analysis of x_obs relies entirely on the summary statistics η(x_obs), the flux ratios. Hence, we tried to understand which flux ratios are the most discriminant for the model choice. We wanted to check that the method is not based on a bias of our simulations and to assess which part of the data could be removed without losing crucial information.

We use different usual metrics (e.g., Friedman et al. 2001; Chen and Guestrin 2016) to assess the importance of each flux ratio in the machine learning estimation of p̂. These metrics are used as indicators of the relevance of each flux ratio for the classification task. As expected, the most important flux ratios for our problem involve the bands at the shortest wavelengths (FUV at the lowest redshifts and NUV above, when FUV is no longer available), normalized by either Ks or u. This is expected as these bands are known to be sensitive to the SFH (e.g., Arnouts et al. 2013). We see no particular pattern in the estimated importance of the other flux ratios. They are all used for the classification, and removing any of them decreases the classification accuracy, except for IRAC1/Ks, whose importance is consistently negligible across every considered metric.
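For instance, with a fitted XGBoost model the flux ratios can be ranked by one of these importance metrics; the exact metric returned by feature_importances_ depends on the xgboost version.

```python
import numpy as np

def rank_flux_ratios(clf, feature_names):
    """Rank the flux ratios by importance in a fitted xgboost.XGBClassifier."""
    importances = clf.feature_importances_
    order = np.argsort(importances)[::-1]
    return [(feature_names[i], float(importances[i])) for i in order]
```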

We also test whether the UVJ selection used to classify galaxies according to their star formation activity (e.g., Wuyts et al. 2007; Williams et al. 2009) is able to probe the kind of rapid and recent SFH variations we are investigating in this study. We train an XGBoost classification model using only u/V and V/J in order to evaluate the benefits of using all available flux ratios. This results in a severe increase of the classification error compared to the 21% obtained when using every flux ratio.

4.3 Comparison with SED fitting methods based on BIC

In this section, we compare the results obtained with the ABC method to those obtained with a standard SED modeling approach. The goal of this test is to understand and quantify the improvement that the ABC method brings in terms of accuracy of the results. We use the extra catalog of 30,000 simulated sources, described at the beginning of this section, for which we control all parameters.

The ABC method is also applied to this extra catalog. This test is very similar to the one described in Sect. 4.1. Indeed, on this extra catalog, the ABC method has an error rate of 21.2%, compared to 21.0% on the previous test sample.

Parameter Value
Delayed-τ SFH
age (Gyr) 15 values linearly sampled
τ (Gyr) 15 values linearly sampled
Flexible delayed-τ SFH
age (Gyr) 15 values linearly sampled
τ (Gyr) 15 values linearly sampled
t_flex (Myr) 10, 100, 450
log r_SFR 12 values linearly sampled
Dust attenuation
10 values linearly sampled
Table 6: Input parameters used in the SED fitting procedures with CIGALE.

CIGALE is run on this catalog as well. The set of modules is the same as the one used to create the mock SEDs; however, the grid of parameters used for the fit does not include the exact input parameter values that were randomly drawn. This test is intentionally kept simple and represents an ideal-case scenario. The error rate obtained with CIGALE therefore represents the best result achievable.

To decide whether a flexible SFH is preferred to a normal delayed-τ SFH using CIGALE, we adopt the method of Ciesla et al. (2018) described in Sect. 2.1. The quality of the fit using each SFH is assessed through the Bayesian Information Criterion (BIC).

In detail, the method that we use is the following. First, we make a run with CIGALE using a simple delayed-τ SFH, whose parameters are presented in Table 6. A second run is then performed with the flexible SFH. We compare the results and the quality of the fits using one SFH or the other. The two models have different numbers of degrees of freedom. To take this into account, we compute the BIC presented in Sect. 3.2 for each SFH.

We then calculate the difference between the two BICs (ΔBIC) and use the threshold defined by Jeffreys (Sect. 3.2), valid for both the Bayes factor and the BIC, and also used in Ciesla et al. (2018): a ΔBIC larger than 10 is interpreted as a strong difference between the two fits (Kass and Raftery 1995), with the flexible SFH providing a better fit of the data than the delayed-τ SFH.

We apply this method to the sample containing 15,000 sources modeled with a delayed-τ SFH and 15,000 modeled using a delayed-τ + flexibility. With this criterion, we find that the error rate of CIGALE, in terms of identifying SEDs built with a delayed-τ + flex SFH, is 32.5%. This rate depends on the ΔBIC threshold chosen and increases with the value of the threshold, as shown in Fig. 5. The best value, 28.7%, is lower than the error rate obtained from a logistic regression or an LDA (see Table 5) but significantly higher than the error rate obtained from our procedure using XGBoost (21.0%). In this best-case scenario for CIGALE, a difference of 7.7% is substantial and implies that the ABC method tested in this study provides better results than a more traditional one using SED fitting. When considering sources with ΔBIC > 10, i.e., sources for which the method using CIGALE estimates that there is strong evidence for the flexible SFH, 95.4% are indeed SEDs simulated with the flexible SFH. Using our procedure with XGBoost and the corresponding Bayes factor threshold of 150 (Kass and Raftery 1995), we find that 99.7% of the sources' SFHs are correctly identified. The ABC method thus provides a cleaner sample than the CIGALE BIC-based method.

Figure 5: Error rate obtained with CIGALE as a function of the chosen ΔBIC threshold. For comparison, we show the error rates obtained by the classification methods tested in Sect. 3.

5 Application on COSMOS data

We now apply our method to the sample of galaxies drawn from the COSMOS catalog, whose selection is described in Sect. 2.2. We show the distribution of p̂ obtained for this sample of observed galaxies in Fig. 6. We recall that a value close to 0 indicates that the delayed-τ SFH is preferred, whereas a value close to 1 indicates that the flexible SFH is more adapted to fit the SED of the galaxy. As a guide, we indicate the different grades of the Jeffreys scale and provide the number of sources in each grade in Table 7. The flexible SFH models the observations better than the delayed-τ SFH for 16.4% of our sample. However, it also means that for most of the dataset (83.6%) there is no strong evidence for the necessity to increase the complexity of the SFH; a delayed-τ SFH is sufficient to model the SED of these sources.

Figure 6: Distribution of the predictions p̂ produced by our algorithm on the selected COSMOS data. Sources with p̂ close to 1 tend to prefer the delayed-τ + flex SFH, while sources with lower p̂ favor a simple delayed-τ SFH. The green regions numbered from 1 to 5 indicate the Jeffreys scale of the Bayes factor, 1: Barely worth mentioning, 2: Substantial, 3: Strong, 4: Very strong, and 5: Decisive (detailed at the end of Sect. 3.2). The percentage of sources in each grade is provided on the figure and in Table 7.
Grade Evidence against the delayed-τ SFH Number %
1 Barely worth mentioning 1,187 9.6
2 Substantial 466 3.8
3 Strong 209 1.7
4 Very strong 90 0.7
5 Decisive 77 0.6
Table 7: Jeffreys scale and statistics of our sample.
Figure 7: Top panel: Comparison of the stellar mass distributions, obtained with CIGALE, for the sample of galaxies with p̂ ≥ 0.75 and for galaxies with p̂ < 0.5 (green and grey histograms). Middle panel: Same comparison using the stellar masses obtained by Laigle et al. (2016). Bottom panel: Comparison of the sSFR distributions for the same two samples.

To investigate the possible differences in terms of physical properties of galaxies according to their Jeffreys grade, we divide the sample of galaxies into two groups. The first group corresponds to galaxies with p̂ < 0.5, galaxies for which there is no evidence for the need of a recent burst or quenching in the SFH: a delayed-τ SFH is sufficient to model the SED of these sources. We select the galaxies of the second group by imposing p̂ ≥ 0.75, i.e., Jeffreys scale grades of 3, 4, or 5: from strong to decisive evidence against the normal delayed-τ SFH. In Fig. 7 (top panel), we show the stellar mass distributions of both subsamples. Although the stellar masses obtained with either the smooth delayed-τ or the flexible SFH are consistent with each other, for each galaxy we use the most suitable stellar mass: if the delayed-τ SFH is preferred, the stellar mass obtained from it is used; otherwise, the stellar mass obtained with the flexible SFH is used. The stellar mass distribution of galaxies with a delayed-τ SFH is similar to the distribution of the whole sample. However, the stellar mass distribution of galaxies needing a flexibility in their recent SFH shows a deficit of galaxies at intermediate stellar masses compared to the distribution of the full sample. We note that at larger masses the distributions are identical, despite a small peak in the high-p̂ distribution. To check that this result is not due to our SED modeling procedure and the assumptions we adopted, we show in the middle panel of Fig. 7 the same stellar mass distributions, this time using the values published by Laigle et al. (2016). The two stellar mass distributions, with the one of galaxies with p̂ ≥ 0.75 peaking at a lower mass, are recovered. This implies that the differences between the distributions are independent of the SED fitting method employed to determine the stellar mass of the galaxies. We note that when the algorithm was trained, only flux ratios were provided, to remove the normalization factor from the method, and the mock SEDs from which the flux ratios were computed were all normalized to 1 M⊙. The stellar mass is at first order a normalization, through, for instance, the L-M* relation (e.g., Gavazzi et al. 1996). Using flux ratios, the algorithm had no information linked to the stellar mass of the mock galaxies. Nevertheless, applied to real galaxies, our procedure yields two different stellar mass distributions for galaxies identified as having a smooth SFH and galaxies undergoing a more drastic episode (star formation burst or quenching).

In the bottom panel of Fig. 7, we show the distribution of the specific star formation rate (sSFR ≡ SFR/M*) for the same two samples. The distribution of galaxies with p̂ < 0.5 is narrow and single-peaked, clearly showing the MS of star-forming galaxies. Galaxies with a high probability of having had a recent strong variation of their SFH form a double-peaked distribution: one peak above the MS, formed by galaxies with p̂ ≥ 0.75 and high sSFR, corresponding to galaxies having experienced a recent burst, and a second peak at lower sSFRs than the MS, corresponding to sources having undergone a recent decrease of their star formation activity. In the sample of galaxies with p̂ ≥ 0.75, 28% of the sources are in the peak of galaxies experiencing a burst of star formation activity and 72% seem to undergo a rapid and drastic decrease of their SFR. One possibility to explain this asymmetry could be a bias produced by the algorithm: as shown in Fig. 4, more sources with p̂ > 0.97 tend to be associated with low values of log r_SFR than with high values. However, in the case of the extra catalog, this disparity is only 47% and 53% for high and low r_SFR, respectively.

The distributions of the two samples in terms of sSFR indicate that, to be able to reach the sSFR of galaxies that are outside the MS, one has to take into account a flexibility in the SFH of galaxies when performing the SED modeling. This is needed to recover as much as possible of the SFR-M* parameter space.

6 Conclusions

In this pilot study, we have proposed to use a state-of-the-art statistical method relying on machine learning algorithms, Approximate Bayesian Computation, to determine the best-suited SFH to be used to measure the physical properties of a subsample of COSMOS galaxies. These galaxies have been selected in stellar mass (log(M*/M⊙) ≥ 8.5) and in redshift (0.5 < z < 1). Furthermore, we impose that the galaxies are detected in all UV-to-NIR bands with an SNR higher than 10 (5 in the NUV band). We verified that these criteria do not bias the sSFR distribution of the sample.

To model these galaxies, we considered a smooth delayed-τ SFH with or without a rapid and drastic change in the recent SFH, that is, in the last few hundred Myr. We have built a catalog of mock galaxy SEDs using the SED modeling code CIGALE. The mock SEDs have been integrated into the COSMOS set of broadband filters. To avoid the large dynamical range of fluxes, which is problematic for classification algorithms, we compute flux ratios.

Different classification algorithms have been tested, with XGBoost providing the best results with a classification error of 20.98%. As output, the algorithm provides the probability that a galaxy is better modeled using a flexibility in the recent SFH. The method is sensitive to variations of SFR larger than 1 dex.

We have compared the results of the new ABC method with SED fitting using CIGALE. Following the method proposed by Ciesla et al. (2018), we compare the results of two SED fits, one using the delayed-τ SFH and the other one adding a flexibility in the recent history of the galaxy. The Bayesian Information Criteria are computed and compared to determine which SFH provides the better fit. The BIC method yields a higher error rate, 28%, compared to the 21% obtained with the ABC method. Moreover, since the BIC method requires two SED fits per analyzed source, it is much slower than the proposed ABC method: we were not able to compare the two methods on the full test catalog and had to introduce a smaller simulated catalog of 30,000 sources to compute the BICs in a reasonable amount of time.

We use the result of the ABC method to determine the stellar masses and SFRs of the galaxies using the best-suited SFH for each of them. We compare two samples of galaxies: the first one is composed of galaxies with p̂ < 0.5, that is, galaxies for which the smooth delayed-τ SFH is preferred; the second one is composed of galaxies with p̂ ≥ 0.75, galaxies for which there is strong to decisive evidence against the smooth delayed-τ SFH. The stellar mass distributions of these two samples are different. The mass distribution of galaxies for which the delayed-τ SFH is preferred is similar to the distribution of the whole sample. However, the mass distribution of galaxies needing a flexible SFH shows a deficit at intermediate masses, while it is similar to the whole sample's distribution at higher masses. Furthermore, the results of this study also imply that a flexible SFH is needed to cover the largest possible parameter space in terms of stellar mass and SFR, as seen from the sSFR distributions of galaxies with p̂ ≥ 0.75.

Acknowledgements.
The authors thank Denis Burgarella and Yannick Roehlly for fruitful discussions and the referee for his valuable comments that helped improve the paper. The research leading to these results was partially financed via the PEPS Astro-Info program of the CNRS. P. Pudlo warmly thanks the Centre International de Rencontres Mathématiques (CIRM) of Aix-Marseille University for its support and the stimulating atmosphere during the Jean Morlet semester ’Bayesian modeling and Analysis of Big Data’ chaired by Kerrie Mengersen.

Appendix A Impact of the flux SNR on the distribution of p̂

Figure 8: Distribution of the predictions p̂ as a function of Ks-band SNR (top panel) and NUV SNR (bottom panel). The different colors correspond to different SNR selections in each panel.

In Fig. 8, we show the distribution of the estimated probability p̂ for the subsample of COSMOS sources described in Sect. 2.2, before applying any SNR cut. In this figure, all COSMOS sources with M* ≥ 10^8.5 M⊙ and redshift between 0.5 and 1 are used. A value close to 0 indicates that the delayed-τ SFH is preferred, whereas a value close to 1 indicates that the delayed-τ + flex SFH is more adapted to fit the SED of the galaxy. To understand what drives the shape of the distribution, we show in the same figure the distributions obtained for different Ks SNR bins (top panel) and NUV SNR bins (bottom panel). Galaxies with low SNR in either the NUV or the Ks photometric band show flatter distributions. This means that these low-SNR sources yield intermediate values of p̂, translating into a difficulty to choose between the delayed-τ and the delayed-τ + flex SFHs.

Appendix B Parameter tuning for Classification methods

The training catalog is used to optimize the value of γ with a specific algorithm given β, and the validation catalog is used to fit the tuning parameters β. To fit p̂ to a catalog of simulated datasets (M^(i), η(x^(i))), i = 1, …, N, the optimization algorithm specified with the machine learning model maximizes the likelihood of the catalog,

Σ_i [ 1{M^(i) = M1} log p̂(η(x^(i))) + 1{M^(i) = M0} log(1 − p̂(η(x^(i)))) ],

given the value of β. Generally, this optimization algorithm is run for several values of β. Then, the validation catalog is used to calibrate the tuning parameters based on data: the accuracy of p̂ for many possible values of β is computed on the validation catalog, and we select the value that leads to the best results on this catalog. The resulting output of this two-step procedure is the approximation p̂, which can be evaluated easily for a new dataset x. The accuracy of p̂ can be measured with various metrics. The most common metric, on which we rely, is the classification error rate on a catalog of simulations. It is defined as the frequency at which the datasets are not well classified, i.e., the fraction of simulations i for which the SFH predicted by comparing p̂(η(x^(i))) to 1/2 differs from the true M^(i).

Footnotes

  1. https://cigale.lam.fr/
  2. Indeed, the evidence m_k(x) is a normalized probability density that represents the distribution of datasets drawn from the k-th model, whatever the value of the parameter drawn from its prior distribution. If models M0 and M1 are nested, the region of the data space with non-negligible probability under model M0 also has non-negligible probability under model M1. Moreover, since model M1 can fit many more datasets, the probability density m_1(x) is much more diffuse than the density m_0(x). Hence, for datasets that can be explained by both models, we expect m_0(x) > m_1(x). If the prior probabilities P(M0) and P(M1) of both models are equal, it implies that, for datasets that can be explained by both models, P(M0 | x) > P(M1 | x).

References

  1. Return to [Log-]Normalcy: Rethinking Quenching, The Star Formation Main Sequence, and Perhaps Much More. ApJ 832, 7. arXiv:1604.00016. Cited by: §1.
  2. Encoding of the infrared excess in the NUVrK color diagram for star-forming galaxies. A&A 558, A67. arXiv:1309.0008. Cited by: §4.2.
  3. The Average Star Formation Histories of Galaxies in Dark Matter Halos from z = 0-8. ApJ 770, 57. arXiv:1207.6105. Cited by: §1.
  4. Impact of star formation history on the measurement of star formation rates. A&A 571, A72. arXiv:1409.5792. Cited by: §1, §2.1.
  5. CIGALE: a python Code Investigating GALaxy Emission. A&A 622, A103. arXiv:1811.03094. Cited by: §2.1.
  6. Quenching of the star formation activity in cluster galaxies. A&A 596, A11. arXiv:1609.00545. Cited by: §1.
  7. Random forests. Machine Learning 45 (1), 5-32. Cited by: §3.3.
  8. Stellar population synthesis at the resolution of 2003. MNRAS 344, 1000-1028. arXiv:astro-ph/0309134. Cited by: §2.1, §3.4.
  9. Ultraviolet to infrared emission of z > 1 galaxies: Can we derive reliable star formation rates and stellar masses?. A&A 561, A39. arXiv:1310.7712. Cited by: §1.
  10. The Dust Content and Opacity of Actively Star-forming Galaxies. ApJ 533, 682-695. arXiv:astro-ph/9911459. Cited by: §2.1.
  11. How to Measure Galaxy Star Formation Histories. I. Parametric Models. ApJ 873 (1), 44. arXiv:1811.03635. Cited by: §1.
  12. A Simple Model for the Absorption of Starlight by Dust in Galaxies. ApJ 539, 718-731. arXiv:astro-ph/0003128. Cited by: §2.1, §3.4.
  13. XGBoost: a scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794. Cited by: §3.3, §4.2.
  14. The imprint of rapid star formation quenching on the spectral energy distributions of galaxies. A&A 585, A43. arXiv:1510.07657. Cited by: §1, §2.2.
  15. Constraining the properties of AGN host galaxies with spectral energy distribution modelling. A&A 576, A10. arXiv:1501.03672. Cited by: §1, §2.1.
  16. The SFR-M* main sequence archetypal star-formation history and analytical models. A&A 608, A41. arXiv:1706.08531. Cited by: §1, §2.1.
  17. Identification of galaxies that experienced a recent major drop of star formation. A&A 615, A61. arXiv:1803.10239. Cited by: §1, §2, §3, §4, §6.
  18. Wet disc contraction to galactic blue nuggets and quenching to red nuggets. MNRAS 438, 1870-1879. arXiv:1310.1074. Cited by: §1.
  19. The reversal of the star formation-density relation in the distant universe. A&A 468, 33-48. arXiv:astro-ph/0703653. Cited by: §1.
  20. The elements of statistical learning. Vol. 1, Springer Series in Statistics, New York. Cited by: §3.3, §4.2.
  21. The phenomenology of disk galaxies. A&A 312, 397-408. Cited by: §5.
  22. The IMACS Cluster Building Survey. IV. The Log-normal Star Formation History of Galaxies. ApJ 770, 64. arXiv:1303.3917. Cited by: §1.
  23. The Intrinsic Scatter along the Main Sequence of Star-forming Galaxies at z ~ 0.7. ApJ 778, 23. arXiv:1309.4093. Cited by: §1.
  24. Evolution of the specific star formation rate function at z < 1.4: Dissecting the mass-SFR plane in COSMOS and GOODS. A&A 579, A2. arXiv:1410.4875. Cited by: §1.
  25. Bayes factors. Journal of the American Statistical Association 90 (430), 773-795. Cited by: §4.3.
  26. The COSMOS2015 Catalog: Exploring the 1 < z < 6 Universe with Half a Million Galaxies. ApJS 224, 24. Cited by: §2.2, §5.
  27. The Estimation of Star Formation Rates and Stellar Population Ages of High-redshift Galaxies from Broadband Photometry. ApJ 725, 1644-1651. arXiv:1010.1966. Cited by: §1.
  28. How to Measure Galaxy Star Formation Histories. II. Nonparametric Models. ApJ 876 (1), 3. arXiv:1811.03637. Cited by: §1.
  29. The Evolving Interstellar Medium of Star-forming Galaxies since z = 2 as Probed by Their Infrared Spectral Energy Distributions. ApJ 760, 6. arXiv:1210.1035. Cited by: §1.
  30. Star formation rates and masses of z ~ 2 galaxies from multicolour photometry. MNRAS 407, 830-845. arXiv:1004.4546. Cited by: §1.
  31. Evolutionary population synthesis: models, analysis of the ingredients and application to high-z galaxies. \mnras 362, pp. 799–825. External Links: arXiv:astro-ph/0410207, Document Cited by: §2.1.
  32. Likelihood-free model choice. In Handbook of Approximate Bayesian Computation, S. A. Sisson, Y. Fan and M. Beaumont (Eds.), Cited by: §3.3.
  33. Approximate bayesian computational methods. Statistics and Computing 22 (6), pp. 1167–1180. Cited by: §1, §3.2.
  34. Obtaining calibrated probabilities from boosting. pp. . External Links: Link Cited by: §4.1.
  35. Star Formation in AEGIS Field Galaxies since z=1.1: The Dominance of Gradually Declining Star Formation, and the Main Sequence of Star-forming Galaxies. \apjl 660 (1), pp. L43–L46. External Links: Document, astro-ph/0701924 Cited by: §1.
  36. The Rise and Fall of the Star Formation Histories of Blue Galaxies at Redshifts 0.2 z 1.4. \apjl 762, pp. L15. External Links: 1210.0543, Document Cited by: §1.
  37. Timing the Evolution of Quiescent and Star-forming Local Galaxies. \apj 824, pp. 45. External Links: 1604.02460, Document Cited by: §1.
  38. The Stellar Populations and Evolution of Lyman Break Galaxies. \apj 559, pp. 620–653. External Links: astro-ph/0105087, Document Cited by: §1.
  39. Recovering galaxy stellar population properties from broad-band spectral energy distribution fitting. \mnras 422, pp. 3285–3326. External Links: 1203.3548, Document Cited by: §1.
  40. Reliable abc model choice via random forests. Bioinformatics 32 (6), pp. 859–866. External Links: Document, Link, Cited by: §3.3.
  41. The bayesian choice: from decision theoretic foundations to computational implementation. Springer Science & Business Media. Cited by: §3.1, §3.2, §3.2.
  42. Dissecting the Stellar-mass-SFR Correlation in z = 1 Star-forming Disk Galaxies. \apjl 754, pp. L14. External Links: 1206.1704, Document Cited by: §1.
  43. Regularity Underlying Complexity: A Redshift-independent Description of the Continuous Variation of Galaxy-scale Molecular Gas Properties in the Mass-star Formation Rate Plane. \apj 793, pp. 19. External Links: 1303.4392, Document Cited by: §1.
  44. The Herschel view of the dominant mode of galaxy growth from z = 4 to the present day. \aap 575, pp. A74. External Links: 1409.5433, Document Cited by: §1.
  45. ISM Masses and the Star formation Law at Z = 1 to 6: ALMA Observations of Dust Continuum in 145 Galaxies in the COSMOS Survey Field. \apj 820, pp. 83. External Links: 1511.05149, Document Cited by: §1.
  46. Parametrising Star Formation Histories. ArXiv e-prints. External Links: 1404.0402 Cited by: §1.
  47. Handbook of approximate bayesian computation. Chapman and Hall/CRC. Cited by: §1, §3.2.
  48. Evolution of density profiles in high-z galaxies: compaction and quenching inside-out. \mnras 458 (1), pp. 242–263. External Links: Document, 1509.00017 Cited by: §1.
  49. A survey of bayesian predictive methods for model assessment, selection and comparison. Statistics Surveys 6, pp. 142–228. Cited by: §3.2.
  50. Detection of Quiescent Galaxies in a Bicolor Sequence from Z = 0-2. \apj 691 (2), pp. 1879–1895. External Links: Document, 0806.0625 Cited by: §4.2.
  51. Galaxy Structure and Mode of Star Formation in the SFR-Mass Plane from z ~ 2.5 to z ~ 0.1. \apj 742, pp. 96. External Links: 1107.0317, Document Cited by: §1.
  52. What Do We Learn from IRAC Observations of Galaxies at 2 < z < 3.5?. \apj 655 (1), pp. 51–65. External Links: Document, astro-ph/0609548 Cited by: §4.2.