Microlensing optical depth and event rate toward the Galactic bulge from eight years of OGLE-IV observations
The number and properties of observed gravitational microlensing events depend on the distribution and kinematics of stars and other compact objects along the line of sight. In particular, precise measurements of the microlensing optical depth and event rate toward the Galactic bulge enable strict tests of competing models of the Milky Way. Previous estimates, based on samples of up to a few hundred events, gave larger values than expected from the Galactic models and were difficult to reconcile with other constraints on the Galactic structure.
Here, we used long-term photometric observations of the Galactic bulge by the Optical Gravitational Lensing Experiment (OGLE) to select a homogeneous sample of 8,000 gravitational microlensing events. We created the largest and most accurate microlensing optical depth and event rate maps of the Galactic bulge. The new maps ease the tension between the previous measurements and Galactic models. They are consistent with some earlier calculations based on bright stars and are systematically smaller than other estimates based on “all-source” samples of microlensing events. The difference stems from our careful estimation of the source-star population.
The new maps agree well with predictions based on the Besançon model of the Galaxy. Apart from testing Milky Way models, our maps may have numerous other applications, such as measuring the initial mass function or constraining the dark matter content in the Milky Way center. The new maps will also inform the planning of future space-based microlensing experiments by revising the expected number of events.
Przemek Mróz (ORCID 0000-0001-7016-1692), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Andrzej Udalski (ORCID 0000-0001-5207-5619), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Jan Skowron (ORCID 0000-0002-2335-1730), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Michał K. Szymański (ORCID 0000-0002-0548-8995), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Igor Soszyński (ORCID 0000-0002-7777-0842), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Łukasz Wyrzykowski (ORCID 0000-0002-9658-6151), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Paweł Pietrukowicz (ORCID 0000-0002-2339-5899), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Szymon Kozłowski (ORCID 0000-0003-4084-880X), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Radosław Poleski (ORCID 0000-0002-9245-6368), Department of Astronomy, Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA; Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Krzysztof Ulaczyk (ORCID 0000-0001-6364-408X), Department of Physics, University of Warwick, Coventry CV4 7AL, UK; Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Krzysztof Rybicki (ORCID 0000-0002-9326-9329), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Patryk Iwanek (ORCID 0000-0002-6212-7221), Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa, Poland
Gravitational microlensing is detectable when the angular separation between a lens and a source is of the order of, or smaller than, the angular Einstein ring radius:

$$\theta_{\rm E} = \sqrt{\kappa M \pi_{\rm rel}},$$

where $M$ is the mass of the lens, $\pi_{\rm rel} = \mathrm{au}\,(1/D_l - 1/D_s)$ is the relative lens–source parallax ($D_l$ and $D_s$ are distances to the lens and source, respectively), and $\kappa = 4G/(c^2\,\mathrm{au}) \approx 8.144\,\mathrm{mas}\,M_\odot^{-1}$. The microlensing optical depth toward a given source describes the probability that the source falls into the Einstein radius of some lensing foreground object.
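As a quick numerical illustration of this definition, the sketch below evaluates $\theta_{\rm E}$ for representative lens parameters. The constant $\kappa \approx 8.144\,\mathrm{mas}\,M_\odot^{-1}$ is the standard microlensing value of $4G/(c^2\,\mathrm{au})$; the particular lens mass and distances are illustrative and not taken from this work.

```python
import math

# kappa = 4G/(c^2 au) ~ 8.144 mas per solar mass (standard microlensing constant)
KAPPA = 8.144

def einstein_radius_mas(mass_msun, d_lens_kpc, d_source_kpc):
    """Angular Einstein radius in milliarcseconds.

    pi_rel = au * (1/D_l - 1/D_s); with distances in kpc the relative
    parallax comes out directly in mas (parallax[mas] = 1 / D[kpc]).
    """
    pi_rel_mas = 1.0 / d_lens_kpc - 1.0 / d_source_kpc
    return math.sqrt(KAPPA * mass_msun * pi_rel_mas)

# A 0.5 M_sun lens at 4 kpc lensing a bulge source at 8 kpc:
theta_e = einstein_radius_mas(0.5, 4.0, 8.0)  # about 0.71 mas
```

For typical bulge events, $\theta_{\rm E}$ is of order a milliarcsecond, far below the resolution of ground-based imaging, which is why the effect is observed photometrically.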
The microlensing optical depth toward one source at distance $D_s$ depends only on the distribution of matter along the line of sight:

$$\tau(D_s) = \frac{4\pi G}{c^2} \int_0^{D_s} \rho(D_l)\,\frac{D_l (D_s - D_l)}{D_s}\,\mathrm{d}D_l,$$

where $\rho(D_l)$ is the mass density of lenses. As the optical depth is independent of the mass function and kinematics of lenses, its measurements allow us to study the distribution of stars and other compact objects toward the Galactic bulge. In practice, however, it is only viable to observe the integrated optical depth, which is averaged over all detectable sources in a given patch of sky and so it may weakly depend on their mass function and the star formation history, as well as interstellar extinction:

$$\tau = \frac{1}{N_s}\int \frac{\mathrm{d}n}{\mathrm{d}D_s}\,\tau(D_s)\,\mathrm{d}D_s,$$

where $(\mathrm{d}n/\mathrm{d}D_s)\,\mathrm{d}D_s$ is the number of detectable sources in the range $(D_s, D_s + \mathrm{d}D_s)$ and $N_s = \int (\mathrm{d}n/\mathrm{d}D_s)\,\mathrm{d}D_s$ (kiraga1994).
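The line-of-sight integral for a single source can be checked numerically. The sketch below uses a toy constant-density model (illustrative numbers, not a Galactic model), for which the integral has the closed form $\tau = (4\pi G/c^2)\,\rho_0 D_s^2/6$, and compares it against simple trapezoidal integration.

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
C = 2.998e8     # m/s
MSUN = 1.989e30 # kg
PC = 3.086e16   # m

def optical_depth(rho_of_d, d_s, n=10000):
    """tau(D_s) = (4 pi G / c^2) * int_0^{D_s} rho(D_l) D_l (D_s - D_l)/D_s dD_l.
    Trapezoidal integration; rho in kg/m^3, distances in meters."""
    h = d_s / n
    total = 0.0
    for i in range(n + 1):
        d_l = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho_of_d(d_l) * d_l * (d_s - d_l) / d_s
    return 4 * math.pi * G / C**2 * total * h

# Toy model: constant density of 0.1 M_sun/pc^3 along an 8 kpc line of sight.
rho0 = 0.1 * MSUN / PC**3
d_s = 8000 * PC
tau = optical_depth(lambda d: rho0, d_s)
# Closed form for constant density:
tau_analytic = 4 * math.pi * G / C**2 * rho0 * d_s**2 / 6
```

Even this crude model yields $\tau$ of order $10^{-6}$, the characteristic scale of bulge optical depths discussed throughout this paper.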
The differential microlensing event rate toward a given source is:

$$\frac{\mathrm{d}^4\Gamma}{\mathrm{d}D_l\,\mathrm{d}M\,\mathrm{d}^2\boldsymbol{\mu}} = 2\,r_{\rm E}\,v_{\rm rel}\,n(D_l)\,f(\boldsymbol{\mu})\,g(M),$$

where $M$ is the lens mass, $r_{\rm E} = D_l\theta_{\rm E}$ is its Einstein radius, $n(D_l)$ is the local number density of lenses, $v_{\rm rel} = \mu D_l$ is the lens–source relative velocity, $f(\boldsymbol{\mu})$ is the two-dimensional probability density for a given lens–source relative proper motion $\boldsymbol{\mu}$, and $g(M)$ is the mass function of lenses (batista2011). Contrary to the optical depth, the event rate explicitly depends on the mass function of lenses and their kinematics.
From the observational point of view, the optical depth can be estimated using the following formula, derived by udalski1994c:

$$\tau = \frac{\pi}{2 N_s \Delta T} \sum_i \frac{t_{{\rm E},i}}{\varepsilon(t_{{\rm E},i})},$$

where $N_s$ is the total number of monitored source stars, $\Delta T$ is the duration of the survey, $t_{{\rm E},i}$ is the Einstein timescale of the $i$-th event (defined as $t_{\rm E} = \theta_{\rm E}/\mu_{\rm rel}$), and $\varepsilon(t_{{\rm E},i})$ is the detection efficiency (the probability of finding an event) at that timescale. The event rate is given by:

$$\Gamma = \frac{1}{N_s \Delta T} \sum_i \frac{1}{\varepsilon(t_{{\rm E},i})}.$$
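These estimators are straightforward to implement. The sketch below uses a hypothetical mini-sample (three events with made-up timescales and detection efficiencies) purely to illustrate the bookkeeping; real analyses sum over thousands of events per sight line.

```python
import math

def optical_depth_and_rate(t_e_days, eff, n_sources, delta_t_days):
    """Observational estimators (Udalski et al. 1994 form):
      tau   = pi / (2 N_s dT) * sum_i t_E,i / eps(t_E,i)
      Gamma = 1 / (N_s dT)    * sum_i 1     / eps(t_E,i)
    Timescales and survey duration in days; Gamma is per source per day."""
    tau = math.pi / (2 * n_sources * delta_t_days) * sum(
        t / e for t, e in zip(t_e_days, eff))
    gamma = sum(1.0 / e for e in eff) / (n_sources * delta_t_days)
    return tau, gamma

# Hypothetical mini-sample: 3 events over 2000 days among 1e6 monitored sources.
tau, gamma = optical_depth_and_rate([20.0, 35.0, 50.0], [0.2, 0.4, 0.5],
                                    1_000_000, 2000.0)
```

Note that each event is weighted by the inverse of its detection efficiency, so long-timescale events (easy to detect) contribute less corrective weight than short ones.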
Direct studies of the central regions of the Milky Way are difficult because of high interstellar extinction and crowding. Precise measurements of the microlensing optical depth and event rate toward the Galactic bulge, although difficult, provide strong constraints on theoretical models of the Galactic structure and kinematics (e.g., han_gould2003; wood2005; kerins2009; awiphan2016; wegg2016; binney2018).
Table 1. Previous measurements of the microlensing optical depth toward the Galactic bulge.
The first measurement of the microlensing optical depth toward the Galactic bulge was carried out by udalski1994c and was based on OGLE-I data from 1992–1993 (see Table 1 for a compilation of previous measurements). They found nine microlensing events in a systematic search of light curves, and the optical depth they calculated was greater than contemporary theoretical estimates (paczynski1991; griest1991; kiraga1994). A similar conclusion was reached by alcock1995 and alcock1997b based on MACHO project observations of the Galactic bulge. These seminal papers boosted the development of the field but, as we now know, the calculated optical depths were prone to systematic errors, especially due to miscalculation of the number of monitored sources. The early photometry was done using point-spread-function fitting, which in crowded fields faces more challenges than the difference image analysis that is normally used in modern microlensing surveys.
These first measurements of the optical depth led to the realization that most of the observed microlensing events are caused by lenses located in the Galactic bulge and that the inner regions of the Milky Way have a bar-like structure elongated along the line of sight (paczynski1994; zhao1995). The first measurements stimulated the development of improved models of the Galactic bulge (e.g., zhao1996; fux1997; nikolaev1997; peale1998; gyuk1999; sevenster1999; grenacher1999). Nonetheless, all of these models predicted optical depths toward the MACHO fields that were a factor of two to four lower than the reported values.
The implementation of the difference image analysis technique (alard1998) improved the quality of the photometry in very dense stellar fields toward the Galactic bulge. This enabled the surveys to detect more microlensing events and to precisely measure their parameters. The optical depth measurements based on MACHO (alcock2000) and MOA-I (sumi2003) data were still higher than the theoretical predictions. binney2000 and bissantz2002 argued that such high optical depths could not be easily reconciled with other constraints, such as the Galactic rotation curve and the mass density near the Sun. Nearly two decades later, sumi_penny2016 suggested that these measurements suffered from biased source-star counts and were overestimated.
In addition, popowski2001 and popowski2002 noticed that previous microlensing optical depth measurements underestimated (or completely ignored) the influence of blending on the estimation of event parameters from the light curves. The Galactic bulge fields are extremely crowded and there should be many faint unresolved stars within the seeing disk of any bright star. The omission of blending results in underestimated Einstein timescales. In highly blended events, as demonstrated by wozniak1997, the event timescale, impact parameter, and blending parameter may be severely correlated, which renders robust timescale measurements difficult.
popowski2001 proposed determining the microlensing optical depth using exclusively red clump giants as sources, because they are subject to little blending and it is easy to estimate their total number. Several measurements of the microlensing optical depth toward the Galactic bulge based on red clump giants were published by the EROS (afonso2003; hamadache2006), MACHO (popowski2005), and OGLE-II (sumi2006) groups. These estimates were lower than those based on all-star samples of events (alcock2000; sumi2003).
The current largest microlensing optical depth and event rate maps are based on two years (2006–2007) of observations of the Galactic bulge by the MOA-II survey (sumi2013). sumi2011 and sumi2013 found over 1,000 microlensing events in that data set, but only 474 events were used for the construction of event rate maps. All events are located in 22 bulge fields covering about 42 square degrees. Three years after the MOA-II publication, sumi_penny2016 realized that the sample of red clump giants, which was used to scale the number of observed sources and thus the optical depths and event rates, was incomplete, most likely owing to crowding and high interstellar extinction. The completeness increased with Galactic latitude, from about 70% near the Galactic plane to 100% in fields located far from it. This affected the measured optical depths and event rates, which were systematically overestimated at low Galactic latitudes. The revised all-source optical depth measurements were much lower than those published by sumi2013, which alleviated (but did not completely remove) the tension with the previous measurements based on red clump giant stars (popowski2005; hamadache2006; sumi2006). A similar bias may have affected the early MACHO and MOA measurements (alcock2000; sumi2003).
Large samples of microlensing events were also recently reported by wyrzykowski2015, wyrzykowski2016, navarro2017, navarro2018, kim2018, and kim2018_2, but these authors did not attempt to calculate optical depths and event rates.
The original MOA-II optical depth maps (sumi2013) were used by awiphan2016 to modify the Besançon Galactic model (robin2014). For example, they needed to include M dwarfs and brown dwarfs in the mass function of lenses to match the timescale distribution of microlensing events. awiphan2016 noticed that the predicted optical depths at low Galactic latitudes were about 50% lower than those reported by sumi2013. This discrepancy can only be partially explained by the sumi_penny2016 findings; the theoretical optical depth remains lower than the revised MOA-II measurements. The revised MOA-II data (sumi_penny2016) were also used by wegg2016 to constrain the dark matter fraction in the inner Galaxy.
Accurate microlensing event rates are also of interest to the astronomical community, for example, for the preparation of future space-based microlensing surveys such as the Wide Field Infrared Survey Telescope (WFIRST; spergel2015) or Euclid (penny2013). The current Galactic models seem not to be precise enough to predict reliable event rates, and they have had to be scaled to match the observations (penny2013; penny2019). For example, penny2019 had to multiply the predicted rates by a factor of 2.11 to match the sumi_penny2016 results.
All these model constraints and predictions are still based on a relatively small sample of microlensing events and many authors have raised the need for optical depths from the larger OGLE sample (e.g., wegg2016; penny2019). In this paper, we aim to address these needs.
The basic information about the OGLE-IV survey and the data set used in the analysis is included in Section 2. Section 3 presents the selection of microlensing events. In Section 4, we estimate the completeness of OGLE star catalogs and the number of observable sources. The calculations of the microlensing event detection efficiency are described in Sections 5–7. The main scientific results and their implications are discussed in Section 8.
The photometric data analyzed in this paper were collected as part of the Optical Gravitational Lensing Experiment (OGLE) sky survey, which is one of the largest long-term photometric sky surveys worldwide. All analyzed observations were collected during the fourth phase of the project (OGLE-IV; udalski2015), during the years 2010–2017. The survey uses the dedicated 1.3-m Warsaw Telescope, located at Las Campanas Observatory, Chile (the Observatory is operated by the Carnegie Institution for Science). The telescope is equipped with a mosaic camera consisting of 32 CCD detectors; the OGLE-IV camera covers a field of view of 1.4 square degrees.
We searched for microlensing events in 121 fields located toward the Galactic bulge that have been observed for at least two observing seasons (filled polygons in Figure 2). These fields cover an area of over 160 square degrees and contain over 400 million sources in the OGLE databases. Typical exposure times are 100–120 s, and the vast majority of observations are taken through the I-band filter, closely resembling that of the standard Cousins system. The limiting magnitude of the survey depends on the crowding of a given field (as shown in Section 4). Fields are grouped and scheduled for observations with one of several cadences. Some fields switch groups or are paused for the next season.
Nine fields that are observed with the highest cadence (BLG500, BLG501, BLG504, BLG505, BLG506, BLG511, BLG512, BLG534, BLG611) have already been analyzed by mroz2017 with the aim of measuring the frequency of free-floating planets in the Milky Way. Here, we use the sample of microlensing events presented in that paper to calculate optical depths and event rates in the subset of high-cadence fields. We also make use of image-level simulations that were carried out by mroz2017 to measure the detection efficiency of microlensing events. The data were collected between 2010 June 29 and 2015 November 8. Each light curve consists of 4,500–12,000 single photometric measurements, depending on the field.
For the remaining 112 fields, which are the main focus of this paper, we used data collected during a longer period, between 2010 June 29 and 2017 November 1, whenever available. Because of changes in the observing strategy of the survey, some of these fields were observed for a shorter period of time (from two to five Galactic bulge seasons). Most of these fields (76, i.e., 68%), however, were monitored for nearly eight years. The majority of light curves consist of from one hundred to two thousand data points.
The OGLE photometric pipeline is based on the Difference Image Analysis (DIA) method (alard1998; wozniak2000), which yields very accurate photometry in dense stellar fields. A reference image of each field is constructed by stacking the three to six highest-quality frames. This reference image is then subtracted from incoming frames, and the photometry is performed on the subtracted images. Variable and transient objects detected on the subtracted images are then assigned and stored in one of two databases. The “standard” database holds light curves of all stellar-like objects previously identified on the reference frame, while “new” objects (those not registered as stellar on the reference images) are stored separately. A detailed description of the image reductions, calibrations, and the OGLE photometric pipeline is included in wozniak2000, udalski2003, and udalski2015.
3 Selection of events in low-cadence fields
The algorithm for selecting microlensing events and the final selection cuts were similar to those used by mroz2017, although with some small differences. Because the contamination from instrumental artifacts (such as reflections within the telescope optics) in the analyzed fields is much less severe than in the high-cadence fields, we were able to relax the selection criteria compared to the earlier work (mroz2017). All criteria are summarized in Table 3.
It is known that photometric uncertainties returned by DIA are underestimated and do not reflect the actual observed scatter in the data. Thus, we began the analysis by correcting the reported uncertainties using the procedure proposed by skowron2016. For faint stars, the error bars were corrected using the formula $\delta m_{\rm new} = \sqrt{(\gamma\,\delta m)^2 + \varepsilon^2}$, where $\gamma$ and $\varepsilon$ are parameters determined for each field separately; they were measured based on the scatter of constant stars. For the brightest stars, there is an additional correction resulting from the non-linear response of the detector. The error-bar correction coefficients were not available for eleven fields, and we closely followed skowron2016 to calculate the missing values. Subsequently, we transformed magnitudes into fluxes. The search procedure consisted of three steps.
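The error-bar correction can be sketched as follows. The functional form follows the skowron2016 prescription; the $\gamma$ and $\varepsilon$ values used here are placeholders, since the real coefficients are measured per field, and the magnitude zero point in the flux conversion is arbitrary.

```python
import math

def correct_uncertainty(sigma_mag, gamma, epsilon):
    """Rescale a DIA magnitude uncertainty:
    sigma_new = sqrt((gamma * sigma)^2 + epsilon^2)."""
    return math.sqrt((gamma * sigma_mag) ** 2 + epsilon ** 2)

def mag_to_flux(mag, zero_point=28.0):
    """Convert magnitude to flux in arbitrary units (the zero point here
    is an arbitrary illustrative choice)."""
    return 10 ** (0.4 * (zero_point - mag))

# Illustrative coefficients; actual values are determined field by field:
sigma = correct_uncertainty(0.05, gamma=1.3, epsilon=0.004)
```

The additive term $\varepsilon$ sets a floor on the uncertainty of bright stars, while $\gamma$ inflates the formally reported scatter to match that of constant stars.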
Table 3. Selection criteria for high-quality microlensing events in low-cadence OGLE-IV fields.
| Cut | Number of events |
| --- | --- |
| All stars in databases | 353,789,948 |
| No variability outside the 720-day (or 360-day) window centered on the event | |
| Centroid of the additional flux coincides with the source star centroid | |
| Significance of the bump | 23,618 |
| Rejecting photometry artifacts | |
| Amplitude of at least 0.1 mag (rejecting low-amplitude variables) | |
| Rejecting objects with multiple bumps | 18,397 |
| Fit quality for all data | |
| Event peaked between 2010 June 29 and 2017 December 31 | |
| The maximum impact parameter | |
| The maximum timescale (in days) | |
| The maximum I-band source magnitude | |
| The maximum negative blend flux, corresponding to a 20.5-mag star | |
| Rejecting highly blended events | 5,790 |
Step 1: We began the analysis with over 350 million objects in the “standard” databases. First, we searched for any kind of brightening in the light curves: at least three consecutive data points lying significantly above the baseline flux. The baseline flux and its dispersion were calculated using data points outside a 720-day window centered on the event, after removing outliers (if the light curve was shorter than six years, we used a 360-day window instead). We required the light curve outside the window to be flat, which allowed us to remove the majority of variable stars and image artifacts. We also required at least three magnified data points to be detected on the subtracted images during the candidate event, meaning that the centroid of the additional flux coincided with the source star centroid. That selection cut enabled us to remove contamination from asteroids as well as from spurious events and photometric artifacts. For each candidate event, we computed a significance statistic for the bump, where the summation is performed over all consecutive data points significantly above the baseline, and required it to exceed a threshold. These simple selection criteria allowed us to reduce the number of candidate microlensing events to 23,618.
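The first cut can be prototyped as a run-length scan over the light curve. The requirement of three consecutive points follows the text; the $3\sigma$ threshold is an assumed placeholder for the significance cut, and the flux values below are made up for illustration.

```python
def find_bumps(flux, baseline, sigma, nsigma=3.0, min_consecutive=3):
    """Return (start, end) index ranges of candidate bumps: runs of at least
    `min_consecutive` consecutive points more than nsigma*sigma above baseline.
    (nsigma is an assumed placeholder for the survey's significance cut.)"""
    threshold = baseline + nsigma * sigma
    bumps, start = [], None
    for i, f in enumerate(flux):
        if f > threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_consecutive:
                bumps.append((start, i - 1))
            start = None
    if start is not None and len(flux) - start >= min_consecutive:
        bumps.append((start, len(flux) - 1))
    return bumps

# One genuine 4-point bump and one single-point outlier (rejected):
flux = [10, 10, 11, 25, 26, 27, 24, 10, 9, 30, 10]
bumps = find_bumps(flux, baseline=10.0, sigma=1.0)
```

The single bright outlier at the end is not counted, mirroring how isolated spikes (cosmic rays, asteroids) fail the consecutive-point requirement.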
Step 2: Subsequent cuts were devised to remove additional obvious non-microlensing light curves. We removed all objects with two or more brightenings in the light curve, mostly dwarf novae and other erupting variable stars. We discarded all candidate events with amplitudes smaller than 0.1 mag to minimize the contamination from pulsating red giants; real microlensing events with such small amplitudes typically yield inaccurate estimates of the event timescale and hence are not essential for the current analysis. As in mroz2017, we also removed all candidates that were located close to each other and were magnified in the same images; these are spurious detections caused by reflections within the telescope or non-uniform background. In this step, we removed 5,221 objects from the sample.
Step 3: Finally, we fitted the microlensing point-source point-lens model to the light curves of the remaining 18,397 candidates. The microlensing magnification depends on three parameters, the time $t_0$ and projected separation $u_0$ (in Einstein radius units) between the lens and the source during the closest approach, and the Einstein timescale $t_{\rm E}$, and is given by:

$$A(t) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}},$$

where $u(t) = \sqrt{u_0^2 + \left((t - t_0)/t_{\rm E}\right)^2}$. The observed flux is $F(t) = F_{\rm s} A(t) + F_{\rm b}$, where $F_{\rm s}$ and $F_{\rm b}$ describe the source flux and the unmagnified blended flux, respectively. As the observed flux depends linearly on $F_{\rm s}$ and $F_{\rm b}$, they were calculated analytically using the least-squares method for each set of $(t_0, u_0, t_{\rm E})$.
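Because the model is linear in $F_{\rm s}$ and $F_{\rm b}$ at fixed $(t_0, u_0, t_{\rm E})$, the least-squares solution reduces to a 2×2 linear system. A minimal sketch, with made-up epochs and fluxes used as a noise-free self-consistency check:

```python
import math

def magnification(t, t0, u0, te):
    """Point-source point-lens magnification A(u),
    with u(t) = sqrt(u0^2 + ((t - t0)/te)^2)."""
    u = math.sqrt(u0 ** 2 + ((t - t0) / te) ** 2)
    return (u ** 2 + 2) / (u * math.sqrt(u ** 2 + 4))

def fit_source_blend(times, fluxes, errors, t0, u0, te):
    """Solve the 2x2 weighted normal equations for (F_s, F_b) in
    F = F_s * A(t) + F_b, analytically."""
    saa = sa = s1 = sfa = sf = 0.0
    for t, f, e in zip(times, fluxes, errors):
        a = magnification(t, t0, u0, te)
        w = 1.0 / e ** 2
        saa += w * a * a
        sa += w * a
        s1 += w
        sfa += w * f * a
        sf += w * f
    det = saa * s1 - sa * sa
    f_s = (sfa * s1 - sf * sa) / det
    f_b = (saa * sf - sa * sfa) / det
    return f_s, f_b

# Recover F_s = 2, F_b = 1 from synthetic noise-free data:
ts = [0.0, 5.0, 10.0, 15.0, 20.0]
fl = [2 * magnification(t, 10.0, 0.3, 20.0) + 1 for t in ts]
f_s, f_b = fit_source_blend(ts, fl, [0.01] * 5, 10.0, 0.3, 20.0)
```

Handling the linear parameters analytically leaves only the three nonlinear parameters for the downhill optimizer, which greatly stabilizes the fits.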
The best-fit parameters were found by minimizing the $\chi^2$ function using the Nelder–Mead algorithm (we used the C implementation of the algorithm by John Burkardt, distributed under the GNU LGPL license: https://people.sc.fsu.edu/jburkardt/c_src/asa047/asa047.html). During the modeling, we iteratively removed outliers, provided that the adjacent data points were consistent with the model. To quantify the quality of the fit, we calculated $\chi^2/\mathrm{dof}$ for the entire data set and for data points near the maximum of the event, and required both to be small. We calculated five-parameter and four-parameter models (the latter with the blend flux set to zero). We allowed for some amount of negative blending in the five-parameter fits (the blend flux could not be more negative than the flux corresponding to a 20.5-mag star). If the four-parameter model was only marginally worse than the five-parameter model, we chose the former.
We were left with 5,790 objects, which constitute our final sample of microlensing events used for the construction of optical depth and event rate maps in low-cadence fields. The uncertainties of the model parameters were estimated using the Markov chain Monte Carlo technique with the Emcee sampler (foreman2013). To take into account our limits on negative blending, we added a prior on the blend flux that is flat for non-negative values and strongly penalizes blend fluxes more negative than that corresponding to a 20.5-mag star. The best-fit parameters and their uncertainties are reported in Table 3. The uncertainties represent the 68% confidence range of the marginalized posterior distributions.
Light curves of the selected events were inspected by a human expert, from which we estimate the purity of our sample of microlensing events to be very high. Figure 3 shows the distribution of fractional uncertainties of the Einstein timescales. The median uncertainty is 16%, and only 2% of events in the analyzed sample have large timescale uncertainties. There are two main factors influencing our measurements of $t_{\rm E}$: the source brightness and the impact parameter (which corresponds to the maximal magnification). The fainter the source and the larger the impact parameter, the larger the uncertainties.
Of the 5,790 events in our sample, 3,958 (68%) were announced in real time by the OGLE Early Warning System (EWS; udalski2003); the remaining 1,832 events (32%) are new discoveries. For comparison, during 2011–2017, 6,959 microlensing alerts in low-cadence fields were announced by the EWS, and about 10% of these are anomalous or binary. The EWS also contains some lower-amplitude events and events with sources fainter than our adopted limit.
We calculated more detailed statistics for the field BLG660 as an example. 180 candidate microlensing events were selected by our “step 1” criteria; visual inspection of the light curves showed that 138 were indeed microlensing events, while 125 were found by the EWS. Two objects reported in the EWS are not microlensing events (they are variable stars), and three were detected in the “new” databases. Nine genuine EWS events were not identified by our search algorithm (mostly because of variability in the baseline, the low significance of the event, or the small number of magnified data points), and 27 events were detected only by our search algorithm; 111 objects are common to both. Thus, our search algorithm was able to find about 94% (138/147) of the events in that field. However, only 94 events from the field BLG660 (i.e., 64%) satisfied all our selection criteria and were included in the final sample of events. Half of the rejected events have very faint sources and, as a consequence, their parameters are not well measured. The remaining rejected events are anomalous, do not fulfill the constraints on the impact parameter or timescale, or their light curves are noisy and thus do not meet the fit quality criterion.
4 Star counts
The number of monitored sources is an essential quantity in microlensing optical depth calculations. While it is usually presumed that star catalogs are nearly complete at the bright end of the luminosity function, this is not true for faint sources. (In fact, the incompleteness in red clump giant counts may have led to the discrepancy between optical depths based on bright and faint sources; sumi_penny2016.) The density of stars brighter than the survey limit in the most crowded regions of the Galactic bulge exceeds 4,000 stars per arcmin², which corresponds to about 0.7 unresolved blends within the typical seeing disk of a star. A faint star can be hidden in the glow of a bright neighbor, or two faint stars cannot be resolved and the total brightness of the blend is higher than the brightness of either of the sources. Star catalogs may therefore be highly incomplete, especially in crowded fields. We calculated the number of monitored sources using three independent methods, all of which yielded consistent results.
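The quoted figure of roughly 0.7 unresolved blends per seeing disk is consistent with a simple density-times-area estimate. The sketch below assumes a 0.9″ FWHM seeing disk, which is an illustrative value (the exact seeing is not stated here).

```python
import math

# Expected number of unresolved neighbors inside a seeing disk = density * area.
# 4000 stars per arcmin^2, and an assumed ~0.9" FWHM disk (radius 0.45").
density_per_arcsec2 = 4000 / 3600.0        # 1 arcmin^2 = 3600 arcsec^2
disk_area = math.pi * 0.45 ** 2            # arcsec^2
expected_blends = density_per_arcsec2 * disk_area  # ~0.7
```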
The most robust approach to counting the source stars is to use deep, high-resolution images of a given field taken with the Hubble Space Telescope (HST). This method is, however, impractical in our study, because sufficiently deep HST pointings are available for only a few sightlines toward the Galactic bulge (e.g., holtzman1998; holtzman2006). Moreover, the observed number density of stars may vary on small angular scales due to variable and patchy interstellar extinction.
We used several HST pointings as “ground truth” to test the accuracy of other methods of counting source stars. We used the database of stellar photometry of several Galactic bulge fields obtained with the Wide Field Planetary Camera 2 (WFPC2) onboard HST (holtzman2006). The WFPC2 camera has a field of view of 4.97 arcmin² and a pixel scale that depends on the detector (holtzman1995). The observations were taken through the F814W filter and transformed to Cousins magnitudes. holtzman2006 also provided information on the completeness of the photometry as a function of brightness, based on image-level simulations, which allowed us to correct the observed luminosity functions. Our results for six HST fields are reported in Table 3.
The most common approach to assessing the number of monitored sources is to use one deep luminosity function of a single field as a template (e.g., alcock2000; sumi2003). The template luminosity function is shifted in brightness and rescaled so that the brightness and number of red clump stars match the observed bright end of the luminosity function in a given direction. We used this method to calculate the number of monitored sources in 452 selected subfields. The template luminosity function was constructed using deep HST observations (holtzman1998) for faint stars and the OGLE-IV luminosity function of the field BLG513.12 for bright stars. While this method can work well for neighboring regions, it may fail for fields located at or above the Galactic equator (as well as in the Galactic plane, far from the bulge), where the shape of the luminosity function may be different.
We therefore tried a novel approach. The pixel size of the OGLE-IV camera and the typical PSF size of stars on the reference images are much better than in previous experiments (although still inferior to the HST images). We carried out a series of image-level simulations to estimate the completeness of our star catalogs. We injected artificial stars into random locations on real OGLE images, stacked the images into a deep reference image, and ran our star-detection pipeline (wozniak2000) exactly as when the real star catalogs were created. We injected 5,000 stars per frame, so the stellar density increased only marginally. We considered an artificial star as detected if (1) the measured centroid was consistent (within 1.5 pix) with the location where the star was placed and the closest star from the original catalog was at least 2.1 pix away (the artificial star fell in an “empty” field), or (2) the measured brightness of the artificial star was closer to the input brightness than to the brightness of the real neighboring star, if such a neighbor was detected within 2.1 pix on the original frame (in other words, the real star from the original catalog becomes a blend). The star-detection algorithm can separate neighboring objects as close as about 2 pix, but its effectiveness depends on the flux and flux ratio.
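The two acceptance conditions can be sketched as a single decision function. This is a simplification: the real pipeline also applies the 1.5-pix centroid test and full catalog cross-matching, and all names below are illustrative.

```python
def is_detected(inj_flux, measured_flux, dist_to_nearest_catalog_star,
                catalog_star_flux, neigh_r=2.1):
    """Acceptance rule for one recovered artificial star.
    `dist_to_nearest_catalog_star` is None if no pre-existing catalog star
    lies near the injection site."""
    if dist_to_nearest_catalog_star is None or dist_to_nearest_catalog_star > neigh_r:
        # Condition 1: "empty" field -- the centroid match alone suffices.
        return True
    # Condition 2: the real neighbor becomes a blend; accept only if the
    # measured flux is closer to the injected flux than to the neighbor's.
    return abs(measured_flux - inj_flux) < abs(measured_flux - catalog_star_flux)

# An injected star recovered 1 pix from a brighter catalog star:
blended_ok = is_detected(100.0, 120.0, 1.0, 300.0)  # closer to input flux
swallowed = is_detected(100.0, 180.0, 1.0, 200.0)   # closer to neighbor's flux
```

Condition 2 prevents counting an injection as "recovered" when the pipeline has merely re-detected the pre-existing neighbor.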
We calculated the completeness of our star catalogs in fourteen 0.5-mag-wide bins and corrected the observed luminosity functions for each subfield. This approach works well for all but the faintest sources. In a few of the most crowded fields, we needed to extrapolate the luminosity function for the faintest sources based on the two or three preceding bins. Our luminosity function of the field BLG513.12 is consistent with the HST results (holtzman1998) (Figure 3). Star catalogs are nearly complete down to a limiting magnitude that is brightest in the most crowded fields and considerably fainter in relatively empty fields (Figure 3). The overall completeness typically varies from 30% to 80%, depending on the field.
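The completeness correction and faint-end extrapolation can be sketched as follows. The log-linear extrapolation from the two preceding bins follows the procedure described above; the bin counts, completeness values, and the completeness floor are illustrative.

```python
import math

def correct_and_extrapolate(counts, completeness, min_completeness=0.1):
    """Divide observed star counts by the per-bin completeness; where
    completeness drops below `min_completeness` (an illustrative floor),
    extrapolate log10(counts) linearly from the two preceding bins."""
    out = []
    for n, c in zip(counts, completeness):
        if c >= min_completeness:
            out.append(n / c)
        else:
            slope = math.log10(out[-1]) - math.log10(out[-2])
            out.append(10 ** (math.log10(out[-1]) + slope))
    return out

# Three reliable bins and one unreliable faint bin:
lf = correct_and_extrapolate([100, 200, 400, 40], [1.0, 1.0, 0.8, 0.05])
```

The corrected luminosity function, summed over bins, gives the number of monitored sources entering the optical depth and event rate estimators.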
Figure 4 shows the comparison between the star surface density measured using image-level simulations and that estimated from matching the template luminosity function. On average, the two methods agree, although we noticed a small bias whose sign differs between fields located close to the Galactic plane and fields far from it. Similarly, the comparison of the measured star densities with those inferred directly from HST images (Table 3) indicates that both proposed methods (template matching and image-level simulations) are accurate to about 10–15%. These tests demonstrate that we are presently unable to measure the number of monitored sources with an accuracy better than 10%. In turn, the optical depths and event rates may suffer from systematic errors at the 10–15% level. Because our sample of microlensing events is large, the accuracy of the inferred optical depths and event rates will be limited mostly by the accuracy of the determination of the number of sources, not by the small number of events, as in previous studies.
5 Distribution of the blending parameter
Due to the high density of stars toward the Galactic bulge and the finite point-spread-function size of objects on the reference images, some sources cannot be resolved on OGLE template images (this phenomenon is called “blending”): a faint star can be hidden in the glow of a bright neighbor, or two faint stars are merged into a single object brighter than either source. We used the image-level simulations described in the previous Section to derive the distribution of the blending parameter as a function of the brightness of the baseline star. These distributions are necessary for catalog-level simulations of the detection efficiency of microlensing events in our experiment. The blending parameter is defined as the ratio between the source flux and the total flux of the detected object (i.e., the sum of the fluxes of the source and unrelated blends).
Previously, wyrzykowski2015 used archival HST observations of the OGLE-III field BLG206 to obtain the distribution of the blending parameter in that field. They matched OGLE stars to individual stars present on the HST image and calculated the ratio of their flux to the total brightness of the object detected on the OGLE template image. Then, they assumed that the distribution of blending is the same across all analyzed OGLE-III fields.
We used the image-level simulations to construct distributions of the blending parameter in all analyzed fields. We matched stars injected into images with those detected on the reference images (using a matching radius of 1.5 pix). The blending parameter is simply $f_{\rm s} = F_{\rm in}/F_{\rm out}$, where $F_{\rm in}$ is the input flux and $F_{\rm out}$ is the flux measured on the template image. Sources were drawn from the luminosity functions of the corresponding fields.
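A toy catalog-level version of this construction is sketched below: sources and (occasionally) unrelated blends are drawn from a luminosity function, and the resulting $f_{\rm s}$ values are recorded. The luminosity function, blend probability, and random seed are all illustrative; the actual distributions come from the image-level simulations.

```python
import random

def simulate_blending(lum_fn_mags, n=10000, blend_prob=0.4, seed=7):
    """Draw a source and, with probability `blend_prob`, an unrelated blend
    from the same (toy) luminosity function; record
    f_s = F_source / (F_source + F_blend)."""
    rng = random.Random(seed)
    fs = []
    for _ in range(n):
        f_src = 10 ** (-0.4 * rng.choice(lum_fn_mags))
        f_bl = 10 ** (-0.4 * rng.choice(lum_fn_mags)) if rng.random() < blend_prob else 0.0
        fs.append(f_src / (f_src + f_bl))
    return fs

fs = simulate_blending([16, 17, 18, 19, 20, 21])
```

Even this toy model reproduces the qualitative behavior described below: a spike at $f_{\rm s}=1$ (unblended objects) plus a broad tail of blended ones.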
Figure 5 presents the comparison between the distribution of the blending parameter obtained from our image-level simulations and that from the empirical study of wyrzykowski2015 based on HST images. Both distributions are very similar. The distribution of $f_{\rm s}$ for bright stars is bimodal: typically, either the entire flux comes from the source ($f_{\rm s}\approx 1$) or the source is much fainter than the blend ($f_{\rm s}\approx 0$). For fainter stars, the blending parameter is distributed more uniformly. There are small differences between the results of our simulations and the distributions of wyrzykowski2015, which are likely caused by the different template images (the OGLE-III reference image was slightly deeper and had better seeing than the OGLE-IV one). Figure 5 shows the distributions of $f_{\rm s}$ in three fields with different star densities.
6 Catalog-level simulations
In our previous work (mroz2017), image-level simulations provided us with robust measurements of the detection efficiency of microlensing events. Such calculations (injecting microlensing events into real images, performing image-subtraction photometry, and creating photometric databases) require a significant amount of computational resources; the simulations of the event detection efficiency in nine high-cadence fields (mroz2017) took nearly four months on over 800 modern CPUs. As we aimed to measure the detection efficiencies in the remaining 112 fields in a reasonable amount of time, we decided to carry out catalog-level simulations.
We injected microlensing events on top of the light curves of objects from the OGLE-IV photometric databases, with the source flux drawn from the derived blending distribution. Each data point and its error bar were rescaled by the expected magnification, which depends on the microlensing model and blending. Our method conserves the variability and noise of the original light curves, as well as the information on the quality of individual measurements. Let $F_{\rm s}$ be the flux of the source and $F_{\rm b}$ the unmagnified flux from possible blended stars and/or the lens itself. The flux of the baseline object ($F_{\rm base} = F_{\rm s} + F_{\rm b}$) is magnified during a microlensing event by a factor:

$$A_{\rm obs}(t) = \frac{A(t)\,F_{\rm s} + F_{\rm b}}{F_{\rm s} + F_{\rm b}} = f_{\rm s}\,A(t) + (1 - f_{\rm s}),$$

where $A(t)$ is the model magnification and $f_{\rm s} = F_{\rm s}/(F_{\rm s} + F_{\rm b})$ is the blending parameter. If there is no blending ($f_{\rm s} = 1$), then $A_{\rm obs} = A$; if the blending is very strong ($f_{\rm s} \to 0$), the observed magnification $A_{\rm obs} \to 1$.
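In code, the blending-diluted magnification is a one-liner, and the two limiting cases quoted above follow immediately:

```python
def observed_magnification(a_model, f_s):
    """A_obs = f_s * A + (1 - f_s): blending dilutes the model magnification
    toward unity as the blend contributes a larger share of the baseline flux."""
    return f_s * a_model + (1.0 - f_s)

# No blending: the full magnification is observed; heavy blending washes it out.
a_unblended = observed_magnification(3.0, 1.0)  # -> 3.0
a_blended = observed_magnification(3.0, 0.1)    # -> 1.2
```

This dilution is why ignoring blending biases Einstein timescales low: a heavily blended event looks shorter and weaker than the underlying model.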
To inject a microlensing event into a database light curve, we needed to transform the observed flux and its uncertainty into the corresponding magnified values. The naive transformation