Occurrence Rates of Planets orbiting FGK Stars: Combining Kepler DR25, Gaia DR2 and Bayesian Inference
Abstract
We characterize the occurrence rate of planets with sizes of 0.5–16 R⊕ orbiting FGK stars with orbital periods of 0.5–500 days. Our analysis is based on results from the “DR25” catalog of planet candidates produced by NASA’s Kepler mission and stellar radii from Gaia “DR2”. We incorporate additional Kepler data products to accurately characterize the efficiency with which planets are recognized as “threshold crossing events” (TCEs) by Kepler’s Transiting Planet Search pipeline and labeled as planet candidates by the robovetter. Using a hierarchical Bayesian model, we derive planet occurrence rates for a wide range of planet sizes and orbital periods. For planets with sizes of 1–1.75 R⊕ and orbital periods of 237–500 days, we find a rate of planets per FGK star of ( credible interval). While the true rate of such planets could be lower by a factor of (primarily due to potential contamination of planet candidates by false alarms), the upper limits on the occurrence rate of such planets are robust. We recommend that mission concepts aiming to characterize potentially rocky planets in or near the habitable zone of Sun-like stars prepare compelling science programs that would be robust for a true rate in this range for 1–1.75 R⊕ planets with orbital periods of 237–500 days, or the corresponding differential rate.
Department of Astronomy & Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA
Center for Exoplanets and Habitable Worlds, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA
Center for Astrostatistics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802, USA
Institute for CyberScience, The Pennsylvania State University
Department of Physics & Astronomy, N283 ESC, Brigham Young University, Provo, UT 84602, USA
1 Introduction
In 2009, NASA’s Kepler mission began its survey to characterize the abundance and diversity of exoplanets. Four years of precise photometry of 189,516 target stars (as part of its exoplanet survey) allowed the Kepler project to identify 4,034 strong planet candidates via their transits, over half of which have been confirmed by another method or validated based on detailed analyses of light curves and statistical considerations (TCH+2018). The Kepler mission was enormously successful, with an impressive array of discoveries, both anticipated and surprising. However, one of the primary goals of the mission, characterizing the frequency of Earth-size planets in or near the habitable zone of solar-like stars, has remained elusive. A combination of factors, from intrinsic stellar variability to the failure of Kepler’s reaction wheels, resulted in the Kepler data being sensitive to such planets around only a small fraction of the stars surveyed, with a detection efficiency that depends sensitively on the host star properties. As a result, considerable care is necessary when inferring the rate of small planets near the habitable zone from Kepler data.
Several previous studies have estimated the rate of planets as a function of planet size and orbital period based on Kepler data. Early studies had less data, but just as importantly, their accuracy was limited by incomplete knowledge of the Kepler pipeline’s detection efficiency and of the host star properties. Further complicating matters, HFR+2018 showed that one of the most common algorithms for estimating the planet occurrence rate is systematically biased for planets near Kepler’s detection threshold. While this bias is modest for much of the parameter space explored by Kepler, it can be of order unity for small planets near the habitable zone. Only a few recent studies have analyzed planet occurrence rates using a hierarchical Bayesian model, so as to avoid this bias (FMHM2014; BCM+2015; HFR+2018). FMHM2014 inferred planet occurrence rates using a hierarchical Bayesian model with a non-parametric model and a prior that favored a smoothly varying occurrence rate; however, that study analyzed a list of planet candidates from a non-standard pipeline that was restricted to finding a single planet around each star. BCM+2015 inferred planet occurrence rates using a parametric model (i.e., a power law in planet size and orbital period) and a hierarchical Bayesian model. HFR+2018 inferred planet occurrence rates using a non-parametric model, a hierarchical Bayesian model and Approximate Bayesian Computation (ABC), and performed extensive tests on simulated catalogs to validate the algorithm. However, none of these studies made use of Kepler’s final planet catalog (known as DR25; TCH+2018), additional DR25 data products characterizing the detection pipeline’s efficiency (C2017Pixel; BC2017Wf; BC2017Flux; BC2017DetCon; C2017Robo), or recent improvements in the knowledge of host star properties from ESA’s Gaia mission (GBV+2018).
Recent studies indicate that the updates in stellar properties thanks to Gaia can have a significant effect on the inferred planet occurrence rate (Shabram et al. 2019, submitted).
In this manuscript, we report occurrence rates inferred using a hierarchical Bayesian model and Approximate Bayesian Computation (ABC), incorporating several improvements relative to previous studies, including the DR25 catalog of planet candidates, updates to the host star properties from Gaia, and an improved model for Kepler’s efficiency at detecting and vetting planet candidates. We review the statistical methodology in §2.1 and describe improvements to our model since HFR+2018 in §2.2. In §3.1, we describe the selection of target stars, and we present the resulting planet occurrence rates in §3.2. We conclude with a summary of key findings, implications for the rate of Earth-size planets in or near the habitable zone of FGK stars, and the future of occurrence rate studies in §4.
2 Methodology
2.1 Approximate Bayesian Computation
Approximate Bayesian Computation (ABC) is a tool for performing Bayesian inference when the likelihood is not available. For example, ABC can be applied to perform inference using a hierarchical Bayesian model to infer the properties of a population based on observations from a survey. Real-world surveys can have many complexities (e.g., measurement uncertainties, target selection, instrumental effects, the data reduction pipeline, a detection probability that depends on unobserved properties) that make it impractical to write down the correct likelihood. Instead, ABC relies on: 1) the ability to generate samples from a forward model for the intrinsic population and survey, and 2) a physically-motivated distance function that quantifies the discrepancy between the actual survey results and the results of a simulated survey. For a sufficiently large number of simulated catalogs and a well-chosen distance function, the ABC posterior estimate converges to the true posterior. Thus, ABC is ideally suited to inferring planet occurrence rates based on transit surveys such as Kepler.
Here we provide a qualitative description of naive ABC and the sequential importance sampling method applied in this study. The occurrence rate of planets within a given range of sizes and orbital periods is characterized by a model and a set of hyperparameters. We apply a model that is a piecewise-constant rate per logarithmic bin in orbital period and planet size. The hyperparameters are simply the occurrence rates for each bin in the 2-d grid. For this study, we chose a period–radius grid with the following period and radius bins: {0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256, 500} days and {0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5, 3, 4, 6, 8, 12, 16} R⊕. We repeatedly draw planets from this model, simulate observations by the Kepler mission, compute a set of summary statistics (in this case the number of detected planets in each bin) and compare the number of planets (within a certain range of planet sizes and/or orbital periods) that would have been detected in the simulated dataset to the actual number of such planets detected by the Kepler mission. A naive application of ABC would involve drawing hyperparameters from their prior distribution and keeping only those values that resulted in the simulated catalog matching the observed number of planets in each bin exactly. While straightforward, this approach is computationally infeasible for our application.
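As a concrete illustration, the naive rejection scheme for a single bin can be sketched as follows (a minimal sketch, assuming a toy forward model in which the number of planets per target follows a Poisson distribution and each planet is detected with a fixed probability; all function names and numerical values are hypothetical, not those of SysSim):

```python
import numpy as np

def simulate_catalog(rate, n_targets, p_detect, rng):
    """Toy forward model: planets per target are Poisson-distributed with
    the given occurrence rate; each planet is detected with probability
    p_detect, so the detected count is a binomial thinning."""
    n_planets = rng.poisson(rate * n_targets)
    return rng.binomial(n_planets, p_detect)

def naive_abc(n_observed, n_targets, p_detect, n_draws=20000, seed=0):
    """Rejection ABC for one bin: draw the rate from a uniform prior and
    keep only draws whose simulated catalog matches the observed count
    exactly."""
    rng = np.random.default_rng(seed)
    rates = rng.uniform(0.0, 1.0, n_draws)   # uniform prior on the rate
    accepted = [r for r in rates
                if simulate_catalog(r, n_targets, p_detect, rng) == n_observed]
    return np.array(accepted)

posterior = naive_abc(n_observed=12, n_targets=1000, p_detect=0.05)
```

The accepted draws approximate the posterior for the occurrence rate; the acceptance fraction shrinks rapidly as the number of bins grows, which is why exact matching is infeasible for the full grid.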
The implementation of ABC used in this study is Population Monte Carlo, wherein multiple generations of simulated data are created. In each generation, a population of model parameters is used to generate and evaluate multiple simulated catalogs. Simulated catalogs with a distance less than some threshold (“ε”) are kept, and the values of their hyperparameters inform the values of the hyperparameters proposed for the following generation. Each set of hyperparameter values is assigned a weight, and sequential importance sampling is used to ensure that the weights properly account for the prior probability distribution at each generation. This sequential importance sampling continues until one of several stopping criteria is met, indicating that sufficient convergence has been achieved. For a full description of ABC and the particulars of the ABC-PMC algorithm we use, see HFR+2018.
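The generational scheme above can be sketched for a single occurrence rate as follows (a simplified sketch loosely following the population Monte Carlo idea, not the exact ABC-PMC algorithm of HFR+2018; the forward model, weighting scheme, and all numbers are illustrative assumptions):

```python
import numpy as np

def abc_pmc(n_observed, n_targets, p_detect, n_particles=100,
            n_generations=8, seed=0):
    rng = np.random.default_rng(seed)

    def simulate(rate):
        # Toy forward model: Poisson planet counts thinned by detection prob.
        return rng.binomial(rng.poisson(rate * n_targets), p_detect)

    def distance(n_sim):
        return abs(n_sim - n_observed) / n_targets

    # Generation 0: particles drawn directly from the uniform prior.
    particles = rng.uniform(0.0, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    for _ in range(n_generations):
        dists = np.array([distance(simulate(r)) for r in particles])
        eps = np.quantile(dists, 0.5)         # shrink the threshold each generation
        kernel_sd = 2.0 * np.std(particles) + 1e-6
        new_particles = np.empty(n_particles)
        new_weights = np.empty(n_particles)
        for i in range(n_particles):
            while True:
                parent = rng.choice(particles, p=weights)
                prop = parent + rng.normal(0.0, kernel_sd)
                if not 0.0 <= prop <= 1.0:
                    continue                  # outside the prior's support
                if distance(simulate(prop)) <= eps:
                    break
            new_particles[i] = prop
            # Importance weight: (uniform) prior density over kernel mixture.
            kern = np.exp(-0.5 * ((prop - particles) / kernel_sd) ** 2)
            new_weights[i] = 1.0 / np.sum(weights * kern)
        particles = new_particles
        weights = new_weights / new_weights.sum()
    return particles, weights

particles, weights = abc_pmc(n_observed=12, n_targets=1000, p_detect=0.05,
                             n_particles=50, n_generations=5)
```

Each generation tightens the distance threshold and re-weights the surviving particles, so the weighted particle cloud contracts toward the ABC posterior.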
While we generally apply the ABC procedure detailed in HFR+2018, we made a few modifications for this study. Here, each generation of ABC-PMC consists of 100 to 500 “particles” (increased from 50). We found an increased number of particles to be helpful when applying the algorithm to infer several occurrence rates simultaneously, particularly when some of the rates were weakly constrained by the data (see §2.4). Instead of always using the same number of target stars and the same width for the importance sampler’s proposal distribution, we split the ABC simulation into three steps. For the first 15 generations of each ABC simulation, we create simulated catalogs with a reduced number of targets and a broadened proposal distribution. This accelerates the ABC-PMC algorithm as it “zooms in” on the relevant region of parameter space. For subsequent generations, we narrow the proposal distribution and generate larger simulated catalogs, so as to provide a good importance sampling density for future generations. The ABC-PMC algorithm continues for more generations until satisfying the stopping criteria described in HFR+2018. In the third step, if the simulation was using fewer than 200 particles, we run one more generation of ABC using 200 particles drawn using the distance goal (ε) of the last generation, to provide a larger posterior sample for plotting and computing percentiles.
2.2 Model Improvements
The majority of the physical model in the Exoplanet System Simulator (known as “SysSim”) used in this study remains the same as in our previous application of ABC to estimate the occurrence rates of planet candidates around FGK stars based on the Q1–Q16 Kepler catalogs in HFR+2018 (see Section 3). Below we summarize several improvements made for this study.
2.2.1 Stellar Properties
The second Gaia data release (DR2; GBV+2018) provides significantly improved stellar parameters for the overwhelming majority of the Kepler target stars. Improvements in stellar radius determinations result in more accurate planet radii (and more accurate assignment of planet candidates to their appropriate radius bins). For this study, we adopt the planet–star radius ratio reported in Kepler DR25 (TCH+2018), but replace the stellar radius with results from Gaia DR2 (AFC+2018). An initial crossmatch was performed using https://github.com/megbedell/gaia-kepler.fun. We filter this target list further to remove poor planet search targets, as described in §3.1. Note that we do not use the stellar radii derived by BHG+2018, as that catalog makes use of additional observations (e.g., spectroscopy, asteroseismology) that are not available for all stars. While using additional observational constraints likely improves the precision for individual stars, using such an inhomogeneous set of stellar parameters risks biasing our results.
Additionally, Gaia’s precise parallax measurements allow for more accurate identification of main-sequence stars based on their position in the color–luminosity diagram. We use this information (in place of estimates based on surface gravity) to define a clean sample of FGK main-sequence target stars. Finally, we make use of Gaia’s astrometric information to identify targets likely to be multiple star systems with stars of similar masses/luminosities. The details of our target selection are described in §3.1. This results in a total sample of Kepler targets for which the Kepler DR25 pipeline and robovetter identified planet candidates with orbital periods of 0.5–500 d and radii of 0.5–16 R⊕.
2.2.2 Incorporating Kepler DR 25 data products
Planet Catalog: This study makes use of the Kepler DR25 planet catalog and its associated data products (TCH+2018). This represents a substantial improvement over the Q1–Q16 catalog used in HFR+2018. The DR25 catalog makes use of the light curves from the full Kepler prime mission and incorporates several improvements to the data reduction, planet search, and vetting algorithms. Even more importantly, DR25 was the first Kepler catalog to be generated using a uniform data reduction, planet search and vetting process. For planet occurrence studies, uniformity of processing is particularly important, as it enables statistical modeling of the detection process.
Planet Detection Efficiency: When computing occurrence rates, it is important to use an accurate detection efficiency model for the probability that a planet with known properties is included in the resulting catalog of planet candidates. HFR+2018 used a detection efficiency model from CCB+2015 that was calibrated to transit injection tests performed using a previous version of the pipeline. In this study, we update the planet detection efficiency model based on transit injection tests of the DR25 pipeline (C2017Pixel). Initially, we adopted the detection probability for a transiting planet as a function of the expected effective signal-to-noise ratio (SNR),
p_\mathrm{det}(\mathrm{SNR}) = \frac{c}{\Gamma(a)}\, \gamma\!\left(a, \frac{\mathrm{SNR}}{b}\right),    (1)
where γ(a, x) is the lower incomplete gamma function, Γ(a) is the gamma function, and the fitted constants a, b, and c are based on §5 of C2017Pixel. This expression models the probability that a transiting planet is detected by the pipeline as a threshold crossing event, but does not attempt to model the probability that a threshold crossing event is promoted to a Kepler Object of Interest or a planet candidate. Most previous studies have simply assumed that all true planets that are identified as threshold crossing events survive the vetting process. MPA+2018 presented a first attempt to estimate the impact of the vetting process. In this study, we consider three models for the combined planet detection process (i.e., being identified as a threshold crossing event by the pipeline and labeled as a planet candidate by the robovetter). First, we adopt the detection probability from Eqn. 1 and assume perfect vetting (i.e., a vetting efficiency of unity). Second, we adopt the product of the detection probability and the MPA+2018 model for vetting efficiency,
p_\mathrm{vet}(R_p, P) = c\, R_p^{a_R}\, P^{a_P},    (2)
where R_p is the planet radius (in R⊕), P is the orbital period (in days), and a_R, a_P, and c are fitted constants. Note that this model implies a vetting efficiency that does not explicitly depend on the effective transit SNR, but rather has only an indirect dependence via the planet size and orbital period. Since the power of the vetting process depends primarily on the ability of the robovetter to recognize features in the shape of the light curve, it is likely that the vetting efficiency indeed depends more directly on the effective measurement noise for the target star. Another concern with this model is that it assumes that the probability of a planet being detected by the pipeline and the probability of a planet passing vetting are uncorrelated. Since both the pipeline detection efficiency and the vetting efficiency depend on the transit light curve (and thus key properties such as the transit signal-to-noise ratio and the number of transits observed), assuming that these are independent seems unwise. These concerns (as well as results described in §2.5) motivated us to create a new model of the planet detection and vetting process.
For the third model, we derive a combined model for the probability of a transiting planet being detected and labeled as a planet candidate by the robovetter, using the pixel-level transit injection tests (C2017Pixel) and the associated robovetter results (C2017Robo),
p_\mathrm{det,vet}(\mathrm{SNR}, N_\mathrm{tr}) = \frac{c(N_\mathrm{tr})}{\Gamma(a(N_\mathrm{tr}))}\, \gamma\!\left(a(N_\mathrm{tr}), \frac{\mathrm{SNR}}{b(N_\mathrm{tr})}\right),    (3)
where N_tr is the number of “valid” transits observed by Kepler and the values for a, b, and c are given in Table 2.2.2. (The Kepler photometric reduction pipeline assigned weights to each flux measurement, so as to deweight observations that may be spurious for a variety of reasons, e.g., measurements near a data gap or anomaly. Following C2017Pixel, a transit is labeled as “valid” if the central flux measurement during the transit has a weight greater than 0.5.)
We follow the general procedure recommended in C2017Pixel for fitting the detection probability as a function of the effective SNR. However, we only consider an injected transit to have been detected if it is also labeled as a planet candidate by the robovetter. The choice of N_tr as the second model parameter was motivated by the observation that the vetting efficiency decreases for planets with longer orbital periods and thus smaller N_tr. Theoretically, we expect that the dispersion in the measured SNR will depend directly on the number of transits that contribute to the candidate and only indirectly on the orbital period. For example, if two planets have similar orbital periods, but fewer transits of one planet were observed (e.g., due to some transits falling on the CCD module that died early in the Kepler mission), then the distribution of effective SNR would be broader for the planet with fewer observed transits. We adopt this as our baseline model and compare the results using these three models for the combined planet detection process in §4.1.
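A gamma-CDF detection efficiency of the form in Eqn. 1 can be evaluated with a regularized lower incomplete gamma function; the sketch below uses SciPy, with purely illustrative constants (the fitted values should be taken from C2017Pixel; in the model of Eqn. 3, a, b, and c would instead be looked up as functions of the number of valid transits):

```python
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def p_detect(snr, a=30.0, b=0.25, c=0.95):
    """Gamma-CDF detection efficiency of the form in Eqn. 1:
    p = c * gamma(a, snr / b) / Gamma(a). The constants a, b, c here are
    illustrative placeholders, not the fitted DR25 values."""
    return c * gammainc(a, snr / b)
```

Because gammainc is the regularized incomplete gamma, the curve rises from 0 at SNR = 0 and saturates at the plateau value c for high-SNR transits.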
Depth Function: Each of the above planet detection models depends on computing the expected effective SNR. In HFR+2018, we used the product of the transit depth, the transit duration, and the square root of the number of transits for the “signal”. For the “noise”, we took the mission average of the 4.5-hour-duration combined differential photometric precision (CDPP) for each target star and assumed that the fractional noise scaled as one over the square root of the number of measurements.
In this study, we make use of a new data product provided by DR25, the “one-sigma depth function” (OSDF), which describes the transit depth that would result in an SNR of unity for a given target star, orbital period, and transit duration, after averaging over the epoch of transit (BC2017Wf). Thus, the expected effective SNR becomes simply the transit depth divided by the OSDF for the target star, interpolated to the orbital period and appropriate transit duration. The CDPP is effectively an averaged version of the OSDF that assumes a specific transit duration and averages over orbital period. Thus, we expect the OSDF to be a more accurate measure of the effective noise, as it accounts for how the photometric noise deviates from white noise. For planets with a modest number of transits, the SNR can be affected by whether the transits occur at times with more or less noise than typical. This is particularly significant for planets with orbital periods greater than 90 days, since there is a smaller number of transits and the number of transits observed with one CCD module may differ significantly from the number observed with another CCD module.
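The expected effective SNR can then be computed as the transit depth divided by a bilinearly interpolated OSDF, e.g. (a minimal sketch; the grid and all OSDF values below are made up for illustration):

```python
import numpy as np

# Hypothetical OSDF table for one target star: rows are orbital periods (days),
# columns are transit durations (hours); entries give the transit depth (ppm)
# that would produce SNR = 1. All values are made up for illustration.
periods = np.array([1.0, 10.0, 100.0, 500.0])
durations = np.array([1.5, 3.0, 6.0, 12.0])
osdf_ppm = np.array([[ 20.0,  15.0,  12.0,  10.0],
                     [ 60.0,  45.0,  35.0,  30.0],
                     [200.0, 150.0, 120.0, 100.0],
                     [450.0, 340.0, 270.0, 220.0]])

def bilinear(x, y, xs, ys, table):
    """Bilinear interpolation of table (indexed [x, y]) at the point (x, y)."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return (table[i, j] * (1 - tx) * (1 - ty) + table[i + 1, j] * tx * (1 - ty)
            + table[i, j + 1] * (1 - tx) * ty + table[i + 1, j + 1] * tx * ty)

def expected_snr(depth_ppm, period, duration):
    """Expected effective SNR = transit depth / OSDF interpolated to the
    planet's period and transit duration."""
    return depth_ppm / bilinear(period, duration, periods, durations, osdf_ppm)
```

By construction, a transit whose depth equals the OSDF value at its period and duration has an expected SNR of exactly one.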
The original Kepler DR25 OSDFs are defined for every star, for all 15 pulse durations that were searched by the pipeline, and for each of a large number of orbital periods (BC2017Wf). In order to reduce the memory requirements, we downsample these through linear interpolation to a common period grid for all stars of 1000 logarithmically-spaced periods from 0.49 to 700 days, which captures the vast majority of the information in these OSDFs.^{1}^{1}1The OSDFs for each star are available from the Kepler mission at https://exoplanetarchive.ipac.caltech.edu/docs/Kepler_completeness_reliability.html. Our downsampled versions are in a more convenient format and available on GitHub. Though the amount of memory required for SysSim to use OSDFs is much higher than before, its runtime did not increase significantly.
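The downsampling onto a common logarithmically-spaced period grid amounts to a one-dimensional linear interpolation per star and duration, e.g. (a sketch with a toy noise curve standing in for a real OSDF):

```python
import numpy as np

# Hypothetical original OSDF sampling for one star and one duration:
# a dense period grid with a toy sqrt(P) noise curve (ppm), for illustration.
orig_periods = np.linspace(0.49, 700.0, 20000)
orig_osdf = 10.0 * np.sqrt(orig_periods)

# Common downsampled grid: 1000 logarithmically spaced periods, 0.49-700 d.
grid = np.logspace(np.log10(0.49), np.log10(700.0), 1000)
osdf_down = np.interp(grid, orig_periods, orig_osdf)
```

The logarithmic spacing concentrates grid points at short periods, where the OSDF varies most rapidly from one period to the next.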
When evaluating the noise for a particular planet, we typically use bilinear interpolation in period and duration. However, OSDFs are not defined for all periods and durations, because the Kepler Transiting Planet Search pipeline did not search every combination of period and duration. When the simulated planet’s duration is shorter than the shortest duration searched (and available in the OSDF), we assume that the SNR is diluted by the ratio of the searched duration to the actual duration, since the search pipeline effectively adds noise to such short-duration transits.
When the simulated planet’s duration is longer than the longest duration searched at that period, we adopt the noise of the longest duration searched. The SNR is reduced because the signal only accumulates over the duration searched rather than the true duration.
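These two edge cases can be sketched as follows (a hypothetical helper, where osdf_at maps a searched transit duration to the OSDF value at the planet's period; the dilution convention follows the description above):

```python
def snr_with_duration_limits(depth, osdf_at, duration, searched_durations):
    """Expected SNR with edge-case handling for transit durations outside
    the searched range (hypothetical helper; osdf_at(d) returns the OSDF
    at the planet's period for searched duration d):
      - shorter than the shortest searched duration: the pipeline averages
        in out-of-transit flux, so dilute the SNR by true/searched duration;
      - longer than the longest searched duration: adopt the noise of the
        longest duration searched (signal only accumulates over it)."""
    d_min, d_max = min(searched_durations), max(searched_durations)
    if duration < d_min:
        return (depth / osdf_at(d_min)) * (duration / d_min)
    if duration > d_max:
        return depth / osdf_at(d_max)
    return depth / osdf_at(duration)
```

For in-range durations the helper reduces to the usual depth-over-OSDF ratio.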
The “timeout” corrections for which stars and durations were searched in DR25 have not been incorporated into our model. This affects only a small fraction of our target stars, as discussed in §4.3.
Window Function: For the Kepler pipeline to detect a planet candidate, there must be good data for at least three transits. This becomes non-trivial for long-period planets and/or stars not observed for the full mission duration. Following BCM+2015, HFR+2018 adopted a binomial model for the probability that at least three transits were observed, based on the expected number of transits if there were no data gaps and the target-specific duty cycle.
In this study, we make use of the DR25 Window Function data products, which tabulate the probability of observing at least three transits as a function of orbital period (BC2017Wf). As with the OSDFs, the provided Kepler DR25 Window Functions are a function of star, duration, and period. These were downsampled to 1000 linearly-spaced periods from 0.5 to 700 days using linear interpolation.^{2}^{2}2The original versions are also available at https://exoplanetarchive.ipac.caltech.edu/docs/Kepler_completeness_reliability.html and our downsampled versions are available on GitHub. SysSim uses bilinear interpolation on these window functions to determine the value of the window function for each simulated planet. As in HFR+2018, the SNR-based detection probability is multiplied by this value to give the final probability of detecting a planet.
Since window functions are identical for targets that were observed in the same sequence of quarters, we identify the 100 most common window functions for targets that were observed for at least 4 quarters as part of the Exoplanet target list (determined by requiring “EX” to be in the Investigation ID as recorded on MAST; archive.stsci.edu/kepler). These 100 window functions provide an exact match for practically all of our targets. Any targets for which the exact window function was not available were assigned a randomly drawn window function.
Minor improvements affecting detection efficiency: We also incorporate several minor model improvements when calculating the effective signal-to-noise ratio. For example, HFR+2018 ignored limb darkening. In this study, we account for limb darkening when computing the transit depth, using the limb darkening parameters from the Kepler DR25 stellar catalog, the same values as used for the Kepler pipeline run that produced the DR25 planet candidate catalog. Additionally, we account for dilution of the transit depth due to contamination as tabulated in the Kepler Input Catalog. Finally, we have improved the calculation of the transit duration by using Eqn. 15 of K2010, so as to avoid making the small-angle approximation.
2.3 Skyaveraged detection probabilities using CORBITS
In order to improve the wall-time efficiency of our ABC calculation, we have implemented a probabilistic simulated catalog approach. Previously, when simulating observations to determine the detection probability of each simulated planet, we assigned a geometric transit probability of either zero or unity, depending on whether the planet would appear to transit for a single observer orientation. In this study, we assign each potentially detectable planet a weight proportional to its transit detection probability marginalized over all possible observer orientations. The geometric transit probability for each planet is multiplied by a transit detection efficiency that is marginalized over all impact parameters (that result in a transit) to give the total detection probability. For a single planet, the sky-averaged geometric transit probability is p_geo = R_⋆ / [a (1 − e²)], where R_⋆ is the stellar radius, a is the semimajor axis and e is the orbital eccentricity. For stars with multiple planets, SysSim calculates the sky-averaged geometric transit probability for each planet, as well as for each pair of planets and for each number of transiting planets in the system. The CORBITS package (BR2016) makes this computationally practical by using semi-analytic methods to determine the transit probability of each combination of planets associated with a given planetary system.^{3}^{3}3CORBITS requires that orbits not “cross”, i.e., if one orbit lies inside another, the inner planet’s apocenter distance must be less than the outer planet’s pericenter distance. Physically, such configurations are unlikely due to long-term orbital stability. Therefore, when multiple planets are assigned to a single star, we reject configurations that result in crossing orbits and redraw until a configuration without crossing orbits is created (up to a maximum of 20 times). Using the sky-averaged detection probability instead of the detection probability for a single viewing geometry reduces the number of simulated planetary systems needed to accurately infer planet occurrence rates.
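For a single planet, the sky-averaged geometric transit probability can be computed directly (a minimal sketch; the unit-conversion constant is the only assumption):

```python
AU_IN_SOLAR_RADII = 215.032  # 1 au expressed in solar radii

def geometric_transit_prob(r_star, a, e):
    """Sky-averaged geometric transit probability for a single planet,
    p_geo = R_star / (a * (1 - e^2)), capped at unity.
    r_star in solar radii, a in au, e the orbital eccentricity."""
    p = r_star / (a * AU_IN_SOLAR_RADII * (1.0 - e ** 2))
    return min(p, 1.0)

p_earth = geometric_transit_prob(1.0, 1.0, 0.0)  # Earth analog, ~0.5%
```

Eccentricity increases the sky-averaged probability, since the planet spends part of its orbit closer to the star than the semimajor axis.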
As part of our model and algorithm verification, we determined that the number of planetary systems drawn could be reduced substantially relative to the number of target stars without significantly affecting the accuracy of the inferred occurrence rates. This reduction offsets the overhead from CORBITS, which increases the calculation time relative to the single-observer mode; the net effect is to reduce the overall calculation time by about one-third relative to the previous approach. While this provides only a modest performance improvement for this study, we anticipate that the sky-averaged observing geometry mode will be particularly valuable for future studies that investigate the occurrence rate of systems with multiple transiting planets.
In HFR+2018, the distance function used by ABC was simply the absolute value of the difference between the ratios of detected planets to stars surveyed for the simulated and observed planet catalogs. Since this study averages over viewing geometries, simply summing the detection probabilities would yield the expected number of planet detections, which is not directly comparable with the observed number of planets in the actual Kepler data. In particular, the expected number of planet detections has a smaller variance than the actual number of planet detections. Therefore, after calculating the detection probabilities for each planet in a simulated catalog, we label each planet as either detected or not detected based on a Bernoulli draw with success probability equal to that planet’s total detection probability.
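The Bernoulli labeling step can be sketched as (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def label_detections(detection_probs, rng):
    """Label each simulated planet as detected (True) or not (False) via an
    independent Bernoulli draw, so the simulated catalog carries the same
    counting noise as the real catalog (summing the probabilities would
    give only the expected number of detections)."""
    detection_probs = np.asarray(detection_probs)
    return rng.random(detection_probs.shape) < detection_probs

rng = np.random.default_rng(42)
detected = label_detections([0.0, 1.0, 0.3, 0.9], rng)
```

Planets with probability zero are never detected and planets with probability one always are, while intermediate probabilities contribute realistic Poisson-like scatter to the simulated counts.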
2.4 Inferring multiple occurrence rates simultaneously
In HFR+2018, we inferred the occurrence rate for each “bin” in period and radius independently of the other bins. Orbital periods are measured so precisely that there is effectively no ambiguity about which period bin a planet should be assigned to. In principle, three effects could cause a planet to be assigned to a different bin than it would be if its properties were known perfectly. First, measurement errors cause the observed planet–star radius ratio to differ from its true value. Fortunately, this happens for only a small fraction of planets. Indeed, HFR+2018 found that this effect did not result in significant correlations between neighboring planet radius bins. Second, uncertainties in the stellar properties cause the measured planet radius to differ from the true planet radius, even if the planet–star radius ratio were known precisely. Nearly all previously published studies, including HFR+2018, have neglected this effect. Shabram et al. (2019, submitted) showed that uncertainties in host star radii can have a significant effect on planet occurrence rates. In this study, we infer multiple occurrence rates simultaneously, so as to account for this effect. A third source of uncertainty (unrecognized contamination) is discussed in §4.3, but is not modeled in this study.
Once Gaia provided accurate parallaxes for most Kepler target stars (GBV+2018), accurate stellar radii and uncertainties were derived for those targets (AFC+2018). This makes it feasible to accurately model the effects of uncertainty in stellar radii on planet occurrence rates. Since planet radii are proportional to the host star radius, uncertainty in the host star radius translates directly into uncertainty in the planet radius. Even with Gaia DR2 parallaxes, the uncertainty in stellar radius can result in a planet being assigned to the wrong radius bin. For each planet we simulate, the true transit depth is based on the true planet radius and the stellar radius from Gaia (AFC+2018). We draw an assumed stellar radius from an equal mixture of two half-normal distributions with median equal to the Gaia best-fit radius, using Gaia’s upper and lower uncertainties for the standard deviations of the half-normals (AFC+2018). If the reported uncertainty in stellar radius (in either direction) is less than a minimum fraction of the radius, then we increase that uncertainty to that fraction of the stellar radius (based on the distribution of stellar radius uncertainties in BHG+2018). Then, we draw the observed transit depth centered on the true transit depth, with a width based on its SNR and the diagonal noise model of PR2014 that accounts for finite integration time. Next, we compute the “observed” planet radius from the observed planet–star radius ratio and the assumed stellar radius. In most cases, the observed planet radius results in the planet being assigned to the same bin as it would be if its radius were known precisely. However, sometimes the observed planet radius results in the planet being assigned to a neighboring bin containing slightly larger or smaller planets.
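The half-normal mixture draw for the assumed stellar radius can be sketched as (a minimal sketch; the radius and uncertainty values are made up for illustration):

```python
import numpy as np

def draw_assumed_radius(r_best, sigma_lo, sigma_hi, rng):
    """Draw an assumed stellar radius from an equal mixture of two
    half-normals: with probability 1/2 the draw falls below the best-fit
    radius (scale sigma_lo), otherwise above it (scale sigma_hi), so the
    median of the distribution is the best-fit radius."""
    magnitude = abs(rng.normal())
    if rng.random() < 0.5:
        return r_best - sigma_lo * magnitude
    return r_best + sigma_hi * magnitude

rng = np.random.default_rng(1)
samples = np.array([draw_assumed_radius(1.0, 0.03, 0.05, rng)
                    for _ in range(20000)])
```

When the upper uncertainty exceeds the lower one, the mixture is skewed: the median stays at the best-fit radius while the mean shifts upward.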
If one were to include the uncertainty in stellar properties, but only modeled planets drawn from a single bin, then the inferred planet occurrence rate would be biased to be larger than the true rate, since simulated planets can “leak” beyond the boundaries of the radius bin. The magnitude of the effect depends on the size of the bin in radius relative to the size of the uncertainties in stellar properties, and on the differences between occurrence rates in neighboring bins. In tests, we found that this effect could inflate occurrence rates by an appreciable fraction of the measured rate (for bins focusing on small planets with a width of 0.25 R⊕) if each bin were analyzed individually rather than in a group.
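A quick Monte Carlo experiment illustrates the leakage effect (a toy example assuming a hypothetical 5% fractional radius uncertainty; the numbers are illustrative, not the values used in this study):

```python
import numpy as np

rng = np.random.default_rng(0)

# True radii drawn only from a narrow bin [1.0, 1.25) R_earth, then "observed"
# with a hypothetical 5% fractional radius uncertainty.
true_r = rng.uniform(1.0, 1.25, 100000)
obs_r = true_r * (1.0 + 0.05 * rng.normal(size=true_r.size))

frac_retained = np.mean((obs_r >= 1.0) & (obs_r < 1.25))
# A single-bin analysis matching the observed in-bin count must inflate its
# rate by roughly 1 / frac_retained to make up for the planets that leaked out.
inflation = 1.0 / frac_retained
```

Modeling all radius bins simultaneously removes this bias, because planets that leak out of one bin are balanced by planets leaking in from its neighbors.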
2.4.1 Model Parameterization
Fortunately, our ABC approach naturally accounts for this effect, as long as we simultaneously model the occurrence rate of planets in each size bin. When performing inference with occurrence rates for multiple radius bins, we performed calculations using two sets of priors, each with its own model parameterization. First, we assign each bin in period and radius its own occurrence rate, with a uniform prior. This has the advantage that the prior for any individual bin is easy to understand. However, it has the disadvantage that the total number of planets within a range of orbital periods could be larger than is plausible if one considers long-term orbital stability. Second, we allow each period bin to have a total occurrence rate with a uniform prior, and assign a fraction of those planets to each radius bin. This prior has the advantage that the rate of planets summed over all sizes is constrained. However, the Dirichlet prior over the fractions in each radius bin makes it more difficult to visualize the prior. For most periods and sizes, the choice of prior did not have a significant impact on the posterior for occurrence rates. The differences were more noticeable for small, long-period planets. Therefore, we will report results for both choices of prior in that regime.
The individual occurrence rates are used for the entire period–radius grid considered. We adopt a uniform prior for each occurrence rate, from zero up to an upper limit chosen small enough that proposals with more than 3 planets per factor of 2 in period are extremely rare, consistent with expectations based on long-term orbital stability. Once computations were finished, we verified that the inferred posterior distributions were not being truncated by the prior's upper limit.
When deriving our best estimate of planet occurrence rates in this regime, we will adopt the second model parameterization. For each bin in orbital period (indexed by $i$), we infer: 1) $F_i$, the total occurrence rate for all planets within that range of orbital periods and the full range of planet sizes being considered, and 2) a vector of fractions $\alpha_{i,j}$, giving the fraction of planets in the $i$th orbital period bin that have sizes falling within the range of the $j$th radius bin. Thus, the occurrence rate for planets in the $j$th radius bin and $i$th period bin is given by $f_{i,j} = F_i\,\alpha_{i,j}$. We adopt a uniform prior for $F_i$. The upper limit is motivated by long-term orbital stability, as it is rare for 3 planets to have orbital periods within a factor of 2 of each other (though a few cases are known to exist for very small planets). For each $\boldsymbol{\alpha}_i$ (a vector with $i$ fixed), we adopt a Dirichlet prior whose concentration parameters are normalized so that the smallest is unity. The Dirichlet prior ensures that $\sum_j \alpha_{i,j} = 1$ to within numerical precision.
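A sketch of drawing one set of rates from this second parameterization's prior (illustrative values only; the concentration parameters and the cap on the total rate are placeholders, not the values used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one period bin split into 5 radius bins
F_max = 3.0  # illustrative cap on the total rate per period bin

# 1) total occurrence rate for the period bin, uniform on [0, F_max]
F_total = rng.uniform(0.0, F_max)

# 2) fractions per radius bin from a Dirichlet prior; concentration
#    parameters normalized so that the smallest equals unity
conc = np.array([2.0, 4.0, 4.0, 2.0, 1.0])
conc = conc / conc.min()
alpha = rng.dirichlet(conc)

# occurrence rate for each radius bin within this period bin
rates = F_total * alpha
```

By construction the fractions sum to unity, so the per-bin rates always sum to the total rate for the period bin.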
2.4.2 Distance Function
Now that we allow for uncertainty in stellar properties, we revisit the choice of distance function for use with ABC. We performed tests on simulated data to verify our algorithms and to compare the performance of multiple distance functions. Simply summing the absolute value of the difference in the ratios of detected planets to stars surveyed for the simulated and observed planet catalogs often resulted in an ABC posterior with highly disparate widths for different bins. Since bins with a greater number of observed planets have a greater contribution to the total distance, ABC would yield precise occurrence rates for well-populated bins, but much less precise estimates of occurrence rates for bins with low rates of planet candidates. For many scientific purposes, one would prefer a similar fractional error in occurrence rates for all bins, rather than a similar absolute error. Therefore, we choose a new distance function that divides the absolute value of the difference in the ratio of detected planets to targets for each bin by the square root of the sum of those ratios for the simulated and observed planet catalogs. This modified Canberra distance is given by
\begin{equation}
\rho(\mathbf{d}, \mathbf{s}) = \sum_i \frac{\left| d_i - s_i \right|}{\sqrt{d_i + s_i}}, \tag{4}
\end{equation}
where, for each $i$th period–radius bin, $d_i$ is the ratio of the number of planet candidates detected by Kepler to the number of target stars searched, and $s_i$ is the ratio of the number of planets detected to target stars in the simulated catalog. We found that this distance function allows ABC to converge more rapidly when inferring occurrence rates for multiple bins simultaneously. We tested that this algorithm accurately simulates both the number of planets detected and the dispersion in the number of planets detected for the simulated catalogs. We also tested that the modified Canberra distance function accurately estimates the true planet occurrence rate and its uncertainty (by comparing to simulations that do not make use of sky-averaged detection probabilities; see §2.5 and Figure 2.5).
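The modified Canberra distance can be implemented directly (a sketch; the handling of bins that are empty in both catalogs, which would otherwise divide by zero, is our own choice):

```python
import numpy as np

def modified_canberra(d, s):
    """Distance between observed (d) and simulated (s) per-bin ratios of
    detected planets per target star: |d_i - s_i| / sqrt(d_i + s_i),
    summed over bins. Bins empty in both catalogs contribute zero."""
    d = np.asarray(d, dtype=float)
    s = np.asarray(s, dtype=float)
    total = d + s
    terms = np.zeros_like(total)
    mask = total > 0.0
    terms[mask] = np.abs(d - s)[mask] / np.sqrt(total[mask])
    return terms.sum()
```

Identical catalogs give a distance of zero, and the square-root weighting makes a given absolute mismatch count for more in sparsely populated bins, pushing ABC toward comparable fractional precision across bins.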
2.4.3 Importance Sampler Proposal Distribution
Initially, we attempted to apply the sequential importance sampler in the ABC-PMC algorithm using a multivariate Gaussian for the proposal distribution, as described in BCM+2008 and HFR+2018. However, this proved computationally prohibitive. For example, a proposal of a negative value for any one bin would result in rejecting the entire proposal; when considering several bins that have small occurrence rates, such proposals are common. In order to make the ABC-PMC algorithm practical, we experimented with a variety of replacement proposal distributions. Eventually, we settled on using a Beta distribution for the proposal under both parameterizations. When using independent priors for each rate, the ABC-PMC algorithm uses a Beta proposal distribution for each rate. When using a Dirichlet prior for the relative occurrence rates within a given period bin, the ABC-PMC algorithm uses a Beta proposal for the total rate and for each relative rate. After each proposal, the initially proposed values for the relative rates were rescaled (each divided by their sum), so that the rescaled rates sum to unity. We validated the algorithm on simulated data sets and found that this dramatically improved the computational efficiency.
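A sketch of the Beta proposal step (the mean/concentration parameterization of the Beta proposal and the spread value are our own illustrative choices; the paper's exact proposal widths are not specified here):

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_proposal(current, spread, rng):
    """Propose a new value on (0, 1) from a Beta distribution whose mean
    equals the current value; `spread` controls the concentration."""
    a = current * spread
    b = (1.0 - current) * spread
    return rng.beta(a, b)

# Proposed relative rates for, e.g., 5 radius bins within one period bin,
# then rescaled so they sum to unity (as required for Dirichlet fractions)
current_alpha = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
proposed = np.array([beta_proposal(a, 50.0, rng) for a in current_alpha])
proposed /= proposed.sum()
```

Because each Beta draw is automatically confined to (0, 1), no proposal can be rejected merely for going negative, avoiding the wholesale rejections that made the Gaussian proposal impractical.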
2.5 Verification and Validation
Before applying our improved model and ABC algorithm to actual Kepler catalogs, we performed extensive tests to validate and verify the model, the algorithms, and our implementations.
Figure 2.5 shows a comparison of results using either the sky-averaging mode (described in §2.3) or the single observing geometry mode for one particular bin (orbital periods of 8–16 days and planet sizes of 1.25–1.5 R⊕), where the "true" catalog is generated with a known occurrence rate. The blue (orange) points show the median planet occurrence rate inferred for this bin using a single observing geometry (sky-averaged viewing geometry), and the vertical error bars indicate the 68.3% credible region. The horizontal axis indicates the number of occurrence rates inferred simultaneously. For any single simulated catalog, the number of observed planets is likely greater (or less) than the expected number of observed planets, leading to the posterior median rate being larger (or smaller) than the true rate. We performed several simulations to verify that the median posterior rate from multiple runs was distributed about the true value. Figure 2.5 shows the results of multiple analyses for a single catalog, demonstrating that we recover a posterior median close to the true value.
First, we will focus on results for three bins. Note that SysSim yields very similar results regardless of whether it uses a single observing geometry (blue) or averages over all viewing geometries (orange). As expected, including stellar radius uncertainties (green) results in an increased width of the posterior distribution relative to an analysis which assumes stellar radii are known perfectly (orange).
Next, we compare results as a function of the number of occurrence rates inferred simultaneously. Each method (single observing geometry or sky-averaged viewing geometries, excluding or including uncertainties in stellar properties) gives statistically indistinguishable results for the median occurrence rate, as long as the number of model parameters being inferred simultaneously does not exceed seven. When analyzing three or five radius bins simultaneously, the posterior widths are very similar. In either case, the posterior width increases modestly when the model accounts for uncertainties in stellar radii. When analyzing nine radius bins simultaneously, the posterior width increases noticeably. This behavior is typical of the sequential importance sampler used by the ABC-PMC algorithm. When performing inference on nine parameters simultaneously, it becomes less likely that the proposed values for all nine parameters will result in a precise match to the number of observed planets in all nine bins. As a result, the computational time required for ABC-PMC to achieve a given distance increases significantly. In practice, the final distance achieved by the ABC-PMC algorithm begins to increase substantially as the number of dimensions grows beyond seven, resulting in an increased width of the ABC posterior. Thus, we must balance the desire to minimize bias against the desire to avoid unnecessary increases in posterior width due to computational limitations. Minimizing bias leads us to perform inference on at least three bins simultaneously. Avoiding an unnecessary increase in posterior width limits us to performing inference on no more than seven bins simultaneously.
Next, we performed several tests to further fine-tune the choice of how many occurrence rates to infer simultaneously. When inferring occurrence rates for multiple radius bins simultaneously, we find that only the top and bottom bins display noticeable bias. This is expected, since the model effectively assumes that there are no detectable planets with radius greater than the upper limit of the top bin or less than the lower limit of the bottom bin. Since the bias is proportional to the difference between the true and assumed rate of such planets, it is maximized when the assumed rate is zero. When inferring occurrence rates for just three bins simultaneously, one could worry that the estimated rates for both edge bins will be overestimated and that this could have an indirect effect on the estimated rate for the central bin. In practice, the bias for the central bin is a second-order effect and significantly smaller than the other uncertainties. A related effect occurs when two bins (neighboring in radius) have a large difference in occurrence rate. If the "leaking" of planets out of and into the radius bin in question is not symmetric, then the impact of stellar uncertainties is amplified compared to other sets of bins where the occurrence rate is similar above and below the bin in question. This demonstrates that it is important to model the uncertainties in stellar radii and to infer occurrence rates simultaneously when estimating planet occurrence rates over a wide range of planet radii and orbital periods.
Based on the above tests, we settled on reporting results using simulations that simultaneously infer planet occurrence rates for 5 (or 7) radius bins, reporting the results for the interior 3 (or 5) radius bins. (One exception is that we also report the occurrence rate for our largest bin, 12–16 R⊕, as explained below.) In order to span the full range of planet radii and to avoid significantly overestimating the ABC posterior width, we perform multiple simulations, each using 5 to 7 radius bins. The specific bin boundaries chosen for the subsets were: {0.25, 0.5, 0.75, 1, 1.25, 1.5}, {0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5}, {1, 1.25, 1.5, 1.75, 2, 2.5}, {1, 2, 2.5, 3, 4, 6}, and {3, 4, 6, 8, 12, 16} R⊕ for each period range.
Formally, one should be cautious in using results from an "edge bin". Therefore, for each subset we discard the two edge bins and take estimates only for the interior bins of each subset. For bins which are interior to multiple subsets, we combine the final populations produced by the separate runs on each subset to produce the occurrence rate estimates. The one edge bin which we do not discard is the 12–16 R⊕ bin. Even though the 12–16 R⊕ bin is an edge bin, the rate of planets larger than 16 R⊕ is so small that there is no practical impact on the measured occurrence rates for 12–16 R⊕ planets. While we include occurrence rates for 0.25–0.5 R⊕ planets in our model calculations, we do so primarily to ensure that our estimates for 0.5–0.75 R⊕ planets are accurate. Thus, out of an abundance of caution, we do not include the inferred rates for 0.25–0.5 R⊕ planets in Figure 3.1.
3 Application to DR25
3.1 Catalog Selection
The second Gaia data release (DR2; GBV+2018) provides significantly improved stellar parameters for the overwhelming majority of the Kepler target stars. Improvements in stellar radii determinations result in more accurate planet radii and more accurate assignment of planet candidates to their appropriate radius bins. Additionally, Gaia's precise parallax measurements allow for more accurate identification of main-sequence stars based on their position in the color–luminosity diagram. We use this information (in place of previously available estimates) to define a clean sample of main-sequence FGK target stars. Finally, we make use of Gaia's astrometric information to identify targets likely to be multiple star systems with stars of similar masses/luminosities.
We construct a catalog of Kepler target stars starting from the Kepler DR25 stellar properties catalog, augmented with data from Gaia DR2 (GBV+2018). The selection criteria used to select Kepler targets likely to be main-sequence FGK stars are as follows (applied in order):

Require that the Kepler magnitude and the Gaia G magnitude are consistent. We compute the median and standard deviation of the difference between the Kepler magnitude and the Gaia G magnitude based on the initial position-based cross-matched catalog. If a target's Kepler magnitude minus Gaia G magnitude differs from the median by more than 1.5 times the standard deviation, then it is excluded from our cleaned catalog. This filter ensures that a Kepler target is not erroneously matched to a Gaia source corresponding to a background star.

Require that the Gaia GOF_AL statistic and the Gaia astrometric excess noise pass quality cuts. Large values of these statistics indicate a poor astrometric fit, often caused by an unresolved astrometric binary star (E2018).

Require Gaia Priam processing flags that indicate the parallax value is strictly positive and both colors are close to the standard locus for main-sequence stars. This rejects Kepler targets which are unlikely to be FGK main-sequence stars, or which are so distant that Gaia does not have a good parallax measurement and the stellar radius would be highly uncertain.

Require that the Gaia parallax error is less than the parallax value. This excludes some faint/distant Kepler targets for which accurate stellar radii would not be available. Of all our target selection criteria based on data quality, this results in the largest decrease in the number of target stars and planet candidates. In principle, future observations (e.g., improved Gaia data releases) could increase the number of stars with accurate stellar properties, perhaps allowing for more precise planet occurrence rate measurements. However, for FGK main-sequence stars, this criterion only excludes distant and hence faint targets for which Kepler does not have significant sensitivity to small planets in the habitable zone. Therefore, we expect that even future Gaia data releases and other follow-up observations to improve stellar characterization will not lead to significant improvements in the constraints on the occurrence rate of small planets in or near the habitable zone of main-sequence FGK stars.

Require that Kepler DR25 provides a valid Kepler stellar mass, data span, duty cycle, and limb darkening coefficients, and that Gaia's Apsis module provides a valid stellar radius based on Gaia data (AFC+2018).

Require that the Kepler target was observed for a sufficient number of quarters and was on the Exoplanet target list for at least one quarter. These criteria exclude targets that were observed by Kepler for purposes other than the exoplanet search, as these are unlikely to be FGK main-sequence stars. A small number of stars that were part of the exoplanet search will also be excluded if they were observed for too few quarters; Kepler does not have significant sensitivity to small planets in the habitable zone of such stars.

Require the target to have a Gaia color within the range adopted for FGK stars. This color cut selects FGK stars and is more precise than using the temperature from the Kepler Input Catalog, and more uniform than using temperatures from the DR25 stellar catalog.

Require the target star to have a luminosity close to $L_{\rm MS}$, where $L_{\rm MS}$ represents the luminosity of a main-sequence star for a given color. To compute $L_{\rm MS}$, we fit a quadratic function to the observed luminosities as a function of color for our Kepler targets. We reject those targets whose observed luminosity deviates from $L_{\rm MS}$ by more than 75% and refit the model to the remaining targets. We iterate this process six times before converging on our final model for the main-sequence luminosity $L_{\rm MS}$. This rejects targets that are significantly above the main sequence, either due to stellar evolution (primarily for F stars) or due to a multiple star system where stars other than the primary contribute >75% of the total flux. While this final criterion likely removes some viable targets for planet hunting (e.g., F stars starting to evolve off the main sequence), it is a small fraction of the total stars searched, and Kepler does not have significant sensitivity to small planets near their habitable zones.
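The final, iterative main-sequence fit can be sketched as follows (a toy illustration on synthetic data; fitting the quadratic in log-luminosity and all variable names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_main_sequence(color, lum, n_iter=6, frac=0.75):
    """Iteratively fit a quadratic to log10(luminosity) vs. color,
    rejecting stars whose luminosity deviates fractionally from the
    fit by more than `frac` before each refit."""
    coeffs = np.polyfit(color, np.log10(lum), 2)
    for _ in range(n_iter):
        l_ms = 10.0 ** np.polyval(coeffs, color)   # model luminosity
        keep = np.abs(lum / l_ms - 1.0) <= frac    # within 75% of model
        coeffs = np.polyfit(color[keep], np.log10(lum[keep]), 2)
    return coeffs, keep

# Synthetic example: a quadratic "main sequence" plus over-luminous
# contaminants (e.g., near-equal binaries or evolved stars)
color = rng.uniform(0.6, 1.6, 500)
lum = 10.0 ** (1.5 - 2.0 * color + 0.3 * color**2
               + rng.normal(0.0, 0.02, 500))
lum[:20] *= 5.0  # contaminants, a factor of 5 over-luminous
coeffs, keep = fit_main_sequence(color, lum)
```

In this synthetic case the factor-of-5 contaminants are rejected while the underlying main-sequence stars are retained, mirroring the intent of the cut above.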
While the number of FGK Kepler target stars remaining after these cuts is significantly less than the total number observed by Kepler, a clean sample of target stars is preferable for performing planet occurrence rate studies. In principle, a less restrictive set of cuts might provide a larger stellar sample and more precise estimates for occurrence rates of large planets. However, such a strategy is unlikely to be useful for measuring the occurrence rate of small planets near the habitable zone, since the vast majority of stars excluded were not FGK main-sequence stars for which Kepler had significant sensitivity to small planets in or near the habitable zone. Our selection criteria intentionally exclude M stars, which will be the subject of a future study.
The Kepler DR25 pipeline and robovetter identified planet candidates associated with these targets within the limits of our period–radius grid. One noticeable change from our previous study is the relatively few planet candidates in the long-period, small-radius regime. The primary cause of this change is the usage of updated stellar radii from Gaia, which are often larger than the previously determined Kepler stellar radii. This tends to boost the inferred planet radii. As a result, the majority of long-period, small-radius planet candidates shifted to larger radius bins within the period–radius grid once we incorporated Gaia stellar radii. For example, the inferred radius of KOI 7016.01 (JTB+2015; MTC+2018; also known as Kepler-452 b) increases to 1.51 R⊕ once the updated stellar radius from Gaia DR2 is incorporated (with uncertainties increasing proportionally). While the best estimate for the planetary radius no longer falls within its previous radius bin, the uncertainty in the radius of such planets results in it still contributing to the estimated occurrence rate.
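The shift of candidates toward larger radius bins follows directly from the fact that a transit fixes only the planet-to-star radius ratio, so the planet radius rescales in proportion to the stellar radius (the numbers below are illustrative, not catalog values):

```python
# Since the transit depth fixes only the planet-to-star radius ratio,
# an updated stellar radius rescales the planet radius proportionally:
#     r_p_new = r_p_old * (R_star_new / R_star_old)

def rescale_planet_radius(r_p_old, r_star_old, r_star_new):
    """Rescale a planet radius for an updated host-star radius."""
    return r_p_old * (r_star_new / r_star_old)

# Hypothetical example: a 1.3 R_earth candidate whose host-star radius
# grows by 20% with the Gaia DR2 value -> planet radius grows by 20%
r_new = rescale_planet_radius(1.3, 0.9, 1.08)
```

A 20% increase in stellar radius thus moves a candidate near a bin boundary into the next-larger radius bin, which is exactly the migration of long-period, small-radius candidates described above.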
In order to explore in which regimes planet candidates were removed due to our stellar cuts (as opposed to updated stellar radii), we created a separate target list in which we relax the two stellar cuts that most significantly reduced the number of stars in our sample: the FGK luminosity cut and the cuts designed to remove suspected binaries. This catalog contains a larger number of target stars and planet candidates within the limits of the period–radius grid. Most significantly, we recover two long-period, small-radius planet candidates (associated with targets KIC 5097856 and 5098334). After investigating the properties of these two planet candidates, we determined that the candidates were not in our final catalog because their associated target stars had a poor astrometric goodness-of-fit, which suggests that each host star is likely part of an unresolved binary. If true, then the unmodeled flux from the binary companion would dilute the transit depth, causing the true planet radius to be larger than currently estimated. We conclude that our process for identifying a clean sample of target stars worked as intended.
3.2 Baseline Planet Occurrence Rates
The values in each bin of Figure 3.1 give the median of the ABC posterior for the occurrence rates over the full period–radius grid. Results are tabulated in Table 1. The uncertainties shown are the differences between the median and either the 15.87th or 84.13th percentile (whichever has the larger absolute difference) of the ABC posterior for each occurrence rate. These rates make use of the combined detection and vetting efficiency (Eqn. 3). For period–radius bins with zero or one observed planet candidates, we report the 84.13th percentile as an upper limit instead of a median rate and uncertainty. For these bins, the posterior distribution is asymmetric, and summarizing the posterior with a point estimate runs a greater risk of misinterpretation. Figure 4.2.3 depicts the ABC posterior for the occurrence rate for several long-period, small-radius bins to demonstrate this asymmetry.
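The bin-summary convention described above can be sketched as follows (the function name and dictionary output are our own; the percentiles match the 68.3% interval quoted in the text):

```python
import numpy as np

def summarize_bin(posterior_samples, n_candidates):
    """Summarize an ABC posterior for one bin: for bins with <= 1
    observed candidate, report the 84.13th percentile as an upper
    limit; otherwise report the median and the larger of the two
    percentile differences as the uncertainty."""
    lo, med, hi = np.percentile(posterior_samples, [15.87, 50.0, 84.13])
    if n_candidates <= 1:
        return {"upper_limit": hi}
    return {"median": med, "uncertainty": max(med - lo, hi - med)}
```

For a sparsely populated bin this yields a single upper limit, avoiding the misleading point estimates that an asymmetric posterior would otherwise produce.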
The reported upper limits are conservative in the sense that a full calculation (i.e., inferring rates for all radius bins simultaneously within a period range, rather than just 5 at a time) would likely cause a modest reduction in the upper limit. This is because we set a maximum total rate of 3 per factor of 2 in period (as a stability criterion), so a full calculation which adds well-constrained bins at larger radii would reduce the remaining total rate that can be allotted to the bins with only upper limits. Given the occurrence rates inferred, the effect is negligible for most of the parameter space considered. While a few effects (e.g., unmodeled contaminating flux, detection efficiency decreasing for systems with multiple transiting planets, pipeline timeouts) could cause our model to overestimate the detection efficiency for individual planetary systems, each of these affects only a small fraction of stars (see §4.3). Therefore, we expect that any revisions to our upper limits due to these effects would be limited to a few percent. However, for bins with so little constraining data, the estimated posterior may be significantly affected by the choice of prior (see §4.2).
In Figure 3.1, we show the occurrence rate as a function of planet size, marginalized over two separate period ranges, 0.5–8 d and 16–128 d. The planet radius valley (FPH+2017; VAL+2017) is apparent for periods of 16–128 d, but not for periods less than 8 days. Our use of a cleaned target star sample is designed to reduce effects due to contamination from binary stars, similar to TeCH+2018. The depth and width of the valley are expected to depend on orbital period on theoretical grounds (LF2014; OW2017; GS2018). A narrow and deep valley whose location depends on period would appear broader and shallower when one marginalizes over a broad range of periods, as in this figure. Therefore, the depth of the valley shown in Figure 3.1 should be interpreted as a minimum depth when comparing to model predictions over a narrower range of periods. Comparing the rate of 1.25–1.5 R⊕ planets as a function of orbital period, we find no evidence that the occurrence rate decreases beyond 64 days, as predicted by some models.
\onecolumngrid
Period (days) | Radius (R⊕) | Combined Detection & Vetting Efficiency | No Vetting Efficiency