
Time Delay Lens Modeling Challenge: I. Experimental Design

Xuheng Ding¹,², Tommaso Treu², Anowar J. Shajib², Dandan Xu³, Geoff C.-F. Chen⁴, Anupreeta More⁵, Giulia Despali⁶, Matteo Frigo⁶, Christopher D. Fassnacht⁴, Daniel Gilman², Stefan Hilbert⁷,⁸, Philip J. Marshall⁹, Dominique Sluse¹⁰, Simona Vegetti⁶

1. School of Physics and Technology, Wuhan University, Wuhan 430072, China
2. Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547, USA
3. Heidelberg Institute for Theoretical Studies, Schloss-Wolfsbrunnenweg 35, D-69118 Heidelberg, Germany
4. Department of Physics, University of California, Davis, CA 95616, USA
5. Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan
6. Max Planck Institute for Astrophysics, Karl-Schwarzschild-Strasse 1, D-85740 Garching, Germany
7. Exzellenzcluster Universe, Boltzmannstr. 2, 85748 Garching, Germany
8. Ludwig-Maximilians-Universität, Universitäts-Sternwarte, Scheinerstr. 1, 81679 München, Germany
9. Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94035, USA
10. STAR Institute, Quartier Agora, Allée du six Août 19c, B-4000 Liège, Belgium

Email: dxh@astro.ucla.edu
Abstract

Strong gravitational lenses with measured time delays are a powerful tool to measure cosmological parameters, especially the Hubble constant ($H_0$). Recent studies show that by combining just three multiply-imaged AGN systems, one can determine $H_0$ to 3.8% precision. Furthermore, the number of time-delay lens systems is growing rapidly, enabling, in principle, the determination of $H_0$ to sub-percent precision in the near future. However, as the precision increases it is important to ensure that systematic errors and biases remain subdominant. For this purpose, challenges with simulated datasets are a key component of this process. Following the experience of the past “Time Delay Challenge” (TDC), where it was shown that time delays can indeed be measured precisely and accurately at the sub-percent level, we now present the “Time Delay Lens Modeling Challenge” (TDLMC). The goal of this challenge is to assess the present capabilities of lens modeling codes and assumptions, and to test the level of accuracy of inferred cosmological parameters given realistic mock datasets. We invite scientists to model a set of simulated Hubble Space Telescope (HST) observations of 50 mock lens systems. The systems are organized in rungs, with the complexity and realism increasing up the ladder. The task is to infer $H_0$ for each rung, given the HST images, the time delays, and a stellar velocity dispersion of the deflector, for a fixed background cosmology. The TDLMC challenge will start with the mock data release on 2018 January 8th. The deadline for blind submission is different for each rung: the deadline for Rung0-1 is 2018 September 8; the deadline for Rung2 is 2019 April 8; and that for Rung3 is 2019 September 8. This first paper gives an overview of the challenge, including the data design, a set of metrics to quantify modeling performance, and the challenge details. After the deadline, the results of the challenge will be presented in a companion paper with all challenge participants as co-authors.

Subject headings:
cosmology: observations — gravitational lensing: strong — methods: data analysis

1. Introduction

During the past decade, the flat $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model has provided an accurate description of the geometry and dynamics of our Universe. This model, now referred to as the standard model, has demonstrated an excellent fit to a variety of independent cosmological observations, including the analysis of the cosmic microwave background (CMB) by the Planck and WMAP satellites (Planck Collaboration et al., 2014, 2016) and low-redshift cosmic probes such as type Ia supernovae (Riess et al., 2016; Betoule et al., 2014), baryon acoustic oscillation (BAO) surveys (Eisenstein et al., 2005; Alam et al., 2017), cosmic shear (Kilbinger, 2015), and the gas fraction in clusters of galaxies (Mantz et al., 2010a, b). Interestingly, however, in the flat $\Lambda$CDM model the Hubble constant ($H_0$) inferred from the extrapolation of the Planck measurements at high redshift is in tension with the local measurement from the traditional cosmic distance ladder (Riess et al., 2016). If this tension were confirmed at higher significance, it would be a major discovery, requiring deviations from the standard flat $\Lambda$CDM model and possibly new physics. Thus, improving the precision of the measurement of $H_0$ is a central goal of current cosmological efforts. On the one hand, it is important to improve the quality of each method. On the other hand, it is essential to develop independent methods, providing a check on potential systematic errors.

In the past few years, it has been shown that strongly lensed AGNs with measured time delays can constrain $H_0$ to 5% precision per system, given high-quality data and state-of-the-art modeling techniques (Suyu et al., 2010, 2013, 2014). With just three lenses, $H_0$ was measured to 3.8% precision, assuming a flat $\Lambda$CDM model, in the context of a blind analysis (Suyu et al., 2017; Sluse et al., 2017; Rusu et al., 2017; Wong et al., 2017; Bonvin et al., 2017). Going forward, this collaboration and future extensions aim to measure $H_0$ to the sub-percent level (Treu & Marshall, 2016).

Whereas analyzing increasingly large samples of strongly lensed AGNs is sufficient to meet the precision goal, it is also crucial to make sure that the measurement is accurate, i.e. that it does not suffer from systematic errors that may ultimately impose a noise floor or a bias. For time-delay cosmography, systematic errors include both the known unknowns (e.g., time delay measurement, residual uncertainties of the lens model and the line-of-sight structure; see Tie & Kochanek, 2018; Xu et al., 2016; Schneider & Sluse, 2013) and the unknown unknowns. While an effective strategy to uncover the latter is to perform blind analyses on real data and check their mutual consistency (Suyu et al., 2013; Bonvin et al., 2017), the former can be quantified, and hopefully corrected for, by means of a series of dedicated challenges.

Recently, the accuracy of time delay measurements has been estimated via a “Time Delay Challenge” (TDC), in which realistic mock ‘observed’ lensed AGN light curves were generated and then analyzed by invited modeling teams (Dobler et al., 2015; Liao et al., 2015). The mock light curves were modeled through a blind analysis, where the true values of the mock time delays were unknown to the participating teams. This strategy is crucial to avoid (unconscious) experimenter bias or reverse-engineering efforts. In the end, Liao et al. (2015) concluded that with light curves of sufficient quality, achievable with present-day technology, time delays can indeed be measured with sub-percent accuracy and precision. Liao et al. (2015) also estimated that under the most favorable assumptions the Large Synoptic Survey Telescope (LSST) should provide around 400 robust time-delay measurements with precision within 3% and accuracy within 1%. More work also needs to be done to address the effects of microlensing on time delay measurements (Tie & Kochanek, 2018). A new challenge, including multi-band data and fine microlensing-induced perturbations, is currently being designed to provide a new benchmark for time-delay measurements (Liao et al., in prep).

A second potentially limiting source of systematic errors is the inference of the lensing potential of the main deflector. Even though blind measurements of current samples demonstrate that the lens mass models are sufficiently well constrained at a level of a few percent given high signal-to-noise imaging and stellar velocity dispersion (Suyu et al., 2013, 2014; Wong et al., 2017), it is yet to be demonstrated that the current approaches are sufficient to reach 1% precision and accuracy.

Meeting this goal requires a dedicated effort, specific to the issue of the Fermat potential. This is the topic of this paper and its companion, presenting the “Time Delay Lens Modeling Challenge” (TDLMC).

In this challenge, we (hereafter the “Evil” Team) provide realistic simulated time-delay lens data, including i) HST-like lensed AGN images, ii) lens time delays, iii) line-of-sight velocity dispersions, and iv) the external convergence, to the participating modeling teams (hereafter “Good” Teams). (We stress here that we follow the tradition of the TDC and use the nicknames “Evil” and “Good” Teams. These nicknames do not denote any despicable intention or moral judgment, but were chosen to capture the desire of the challenge designers to produce realistic (and difficult) lens data, as well as an incentive for the outside teams to participate.) A blind analysis is employed to assess the accuracy of the lens modeling and the cosmological inference. We emphasize that the TDLMC is purely a lens modeling challenge. In order to isolate this component of the time-delay cosmography measurement, we assume here that the time delays are known precisely and accurately, and we only consider a single-plane deflector. Separate challenges have dealt and will deal with the other elements. For simplicity of analysis, we also keep all cosmological parameters fixed except for $H_0$.

This paper is structured as follows. Section 2 briefly reviews the lens theory and introduces the ingredients used for the simulations. Section 3 describes the simulated data sets and the layout of this challenge. Section 4 introduces four metrics aimed at evaluating the performance of the modeling results, and gives instructions for accessing the mock data and the timeline of the challenge. Section 5 concludes with a short summary.

2. Lens theory and Ingredients of the Simulations

We briefly review the relevant strong lensing theory in Section 2.1, and introduce the key ingredients for simulating the lens images, namely the deflector/source surface brightness and the deflector mass, in Sections 2.2 and 2.3, respectively.

2.1. Strong gravitational lensing

For a strong lens system, the scaled deflection angle of a light ray is $\vec{\alpha}(\vec{\theta}) = \vec{\nabla}\psi(\vec{\theta})$, and the deflection of light rays can be described by the lens equation $\vec{\beta} = \vec{\theta} - \vec{\alpha}(\vec{\theta})$, where $\psi(\vec{\theta})$ is the lens potential at position $\vec{\theta}$ on the plane of the sky (image plane) and $\vec{\beta}$ is the source position in the absence of a deflector (source plane).

The travel time from the source to the observer depends on both the path length of the light rays and the gravitational potential of the deflector. These two effects lead to a difference in arrival time between the multiple images. In theory, the time delay between two lensed images A and B is given as follows:

$\Delta t_{\rm AB} = \frac{D_{\Delta t}}{c}\left[\phi(\vec{\theta}_{\rm A}, \vec{\beta}) - \phi(\vec{\theta}_{\rm B}, \vec{\beta})\right]$   (1)

$\phi(\vec{\theta}, \vec{\beta}) = \frac{(\vec{\theta} - \vec{\beta})^2}{2} - \psi(\vec{\theta})$   (2)

where $\vec{\theta}_{\rm A}$ and $\vec{\theta}_{\rm B}$ are the coordinates of images A and B in the image plane, $\phi$ is the so-called Fermat potential, and $D_{\Delta t}$ is the so-called time-delay distance, defined as:

$D_{\Delta t} \equiv (1 + z_{\rm d})\,\frac{D_{\rm d} D_{\rm s}}{D_{\rm ds}}$   (3)

Here, $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$ are respectively the angular diameter distances from the observer to the deflector, from the observer to the source, and from the deflector to the source. Thus, the time-delay distance is proportional to the inverse of the Hubble constant, i.e. $D_{\Delta t} \propto H_0^{-1}$. By modeling the image of the time-delay lens, one can derive the Fermat potential, deduce $D_{\Delta t}$, and thus infer the value of $H_0$ (and other cosmological parameters).
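As a concrete illustration of Eqs. (1) and (3), the short Python sketch below computes a time-delay distance and the corresponding delay; the fiducial cosmology, redshifts, and Fermat potential difference are assumed values for illustration only, not challenge inputs.

```python
# Minimal sketch of Eqs. (1) and (3); all numerical values are assumed.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
import astropy.constants as const

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed fiducial cosmology
z_d, z_s = 0.5, 2.0                     # assumed deflector/source redshifts

D_d = cosmo.angular_diameter_distance(z_d)
D_s = cosmo.angular_diameter_distance(z_s)
D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)

D_dt = (1 + z_d) * D_d * D_s / D_ds     # Eq. (3): time-delay distance

dphi = 0.5 * u.arcsec**2                # assumed Fermat potential difference
dt = (D_dt / const.c * dphi.to(u.rad**2).value).to(u.day)   # Eq. (1)
print(D_dt.to(u.Mpc), dt)               # ~3000 Mpc and a delay of tens of days
```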

The projected dimensionless surface mass density is:

$\kappa(\vec{\theta}) = \frac{\Sigma(\vec{\theta})}{\Sigma_{\rm cr}}$   (4)

and

$\Sigma_{\rm cr} = \frac{c^2 D_{\rm s}}{4\pi G D_{\rm d} D_{\rm ds}}$   (5)

where $\Sigma(\vec{\theta})$ is the physical projected surface mass density of the deflector and $\Sigma_{\rm cr}$ is the critical surface density.
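For concreteness, the critical surface density of Eq. (5) can be evaluated in a few lines; the distances are assumed to come from the previous sketch, and the quoted magnitude is only indicative.

```python
# Sketch of Eq. (5); D_d, D_s, D_ds as in the previous snippet.
import numpy as np
import astropy.units as u
import astropy.constants as const

Sigma_cr = (const.c**2 / (4 * np.pi * const.G)
            * D_s / (D_d * D_ds)).to(u.Msun / u.pc**2)
print(Sigma_cr)  # of order 10^3 Msun/pc^2 for galaxy-scale lenses
```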

2.2. Surface brightness

To study how results change with increasing complexity, we adopt a variety of approaches to simulate the surface brightness profiles of the lens and source galaxies.

2.2.1 Sérsic model

As a matter of convenience, in the entry level of the challenge, we choose a common simply-parameterized description of the surface brightness for the lens and source galaxies. This choice is meant primarily for testing the codes, of both the “Evil” and “Good” Teams. In the literature, the Sérsic profile (Sersic, 1968) is one of the most commonly used models to describe the surface brightness of galaxies, ranging from exponential discs to de Vaucouleurs (1948) profiles.

The Sérsic profile is parameterized by:

$I(R) = A \exp\left[-k\left\{\left(\frac{R}{R_{\rm eff}}\right)^{1/n} - 1\right\}\right]$   (6)

$R = \sqrt{q\,x^2 + y^2/q}$   (7)

where $A$ is the amplitude and the Sérsic index $n$ controls the shape of the radial surface brightness profile; a larger $n$ corresponds to a steeper inner profile and a more extended outer wing. $k$ is a constant which depends on $n$ so as to ensure that the isophote at the effective radius $R_{\rm eff}$ encloses half of the total light (Ciotti & Bertin, 1999), and $q$ denotes the axis ratio.
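To make Eqs. (6)-(7) concrete, here is a minimal Python sketch; the function signature and default parameter values are illustrative, not the simulation codes' API.

```python
import numpy as np

def b_n(n):
    # Ciotti & Bertin (1999) asymptotic approximation to the constant k,
    # which ensures that R_eff encloses half of the total light
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n**2)

def sersic(x, y, amp=1.0, r_eff=1.0, n=4.0, q=0.8):
    # Elliptical Sersic surface brightness, Eqs. (6)-(7)
    r = np.sqrt(q * x**2 + y**2 / q)          # elliptical radius, Eq. (7)
    return amp * np.exp(-b_n(n) * ((r / r_eff)**(1.0 / n) - 1.0))
```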

2.2.2 Realistic source image as AGN host

For the bulk of the challenge, we use more realistic and complex surface brightness distributions for the host galaxy of the lensed AGN. In particular, we use real images of galaxies, appropriately smoothed and cleaned of foreground/background contaminants, as shown in Fig. 1.

Figure 1.— Illustration of a real HST galaxy image used as the lensed source host. The bright point source and foreground/background galaxies in the original image (left) are replaced by interpolation of nearby pixels to obtain a clean galaxy image (right).

2.3. Deflector mass

Likewise, we achieve the different levels of complexity of the challenge by increasing the realism of the deflector mass distribution while stepping up the ladder.

2.3.1 Elliptical power-law mass distribution

A common simply-parameterized description of the deflector mass density profile is given by elliptical power-law models, whose surface mass density is given by:

$\Sigma(x, y) = \Sigma_{\rm cr}\,\frac{3 - \gamma'}{2}\left(\frac{\theta_{\rm E}}{\sqrt{q\,x^2 + y^2/q}}\right)^{\gamma' - 1}$   (8)

where $q$ describes the projected axis ratio. The so-called Einstein radius $\theta_{\rm E}$ is chosen such that, when $q = 1$ (i.e. in the spherical limit), it encloses a mean surface density equal to $\Sigma_{\rm cr}$. The exponent $\gamma'$ is the slope of the power-law profile, with $\gamma' \approx 2$ for massive elliptical galaxies (Treu & Koopmans, 2002, 2004; Koopmans et al., 2009). We refer the reader to the reviews by Schneider (2006), Bartelmann (2010), and Treu (2010) for more details.
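A corresponding sketch of the dimensionless surface density $\kappa = \Sigma/\Sigma_{\rm cr}$ of Eq. (8) follows; parameter names are illustrative, and $\gamma' = 2$ recovers the isothermal case.

```python
import numpy as np

def kappa_powerlaw(x, y, theta_E=1.0, gamma=2.0, q=0.8):
    # Dimensionless surface density kappa = Sigma / Sigma_cr of Eq. (8);
    # gamma=2 gives kappa = theta_E / (2 r), the isothermal profile.
    r = np.sqrt(q * x**2 + y**2 / q)
    return (3.0 - gamma) / 2.0 * (theta_E / r)**(gamma - 1.0)
```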

2.3.2 Simulated realistic galaxy mass distribution

In order to achieve a more realistic deflector mass distribution, we also consider massive early-type lens galaxies produced by cosmological numerical simulations. We only consider a single deflector, and do not include the effects of the line of sight other than via the external convergence term (Section 3.1.3). We choose systems massive enough to yield Einstein radii of the order of an arcsecond for typical deflector and source redshifts.

3. Structure of the Challenge

In this section, we first describe the data sets that are made available to the “Good” Teams in Section 3.1. Then, a description of the layers (rungs) of the challenge is given in Section 3.2.

3.1. Data sets

The mock data available to the “Good” Teams consist of deep HST images, time delays, stellar velocity dispersions, and external convergences, as described below. The released mock data sets have been tested by analyzing a subset of them with two independent lens modeling software packages and verifying that the input cosmology (and the lens parameters, when applicable) could be recovered within the uncertainties.

3.1.1 HST Images

In order to mimic a typical observational setup in state-of-the-art observations, we choose to simulate high-resolution images obtained with the Hubble Space Telescope (HST), using the Wide Field Camera 3 (WFC3) IR channel in the F160W band. Even though this setup has lower resolution than optical images taken with WFC3-UVIS or ACS, we adopt it in order to minimize the effects of dust extinction and optimize the contrast between the (blue) AGN and the (red) host galaxy. We adopt a range of AGN-to-host flux ratios so as to produce a distribution similar to that observed in real systems (Ding et al., 2017a, b). For simplicity, we do not include any dust extinction, and we assume the AGN to be at the center of the host galaxy. Multi-band datasets and adaptive-optics-assisted ground-based images are left for future challenges.

In practice, the following steps are taken in order to simulate realistic lens configurations (a minimal code sketch follows the list).

  1. For every set of lens and source parameters, compute high-resolution images of the lensed host and of the deflector light.

  2. Convolve with the point spread function (PSF) appropriate for WFC3/F160W.

  3. Compute the image-plane positions and fluxes of the lensed AGN images and add them as appropriately scaled PSFs in the image plane.

  4. Rebin the oversampled images to the actual data resolution. Using different rebinning patterns, one can simulate eight dithered images in order to drizzle them in step (6). (MultiDrizzle is adopted for the drizzling; see http://www.stsci.edu/hst/wfpc2/analysis/drizzle.html for more information.)

  5. Add noise based on realistic observing conditions, including background, read noise, and Poisson noise from the source. The exposure time of each of the eight images is taken to be 1200 s, so the total exposure time is 9600 s.

  6. Drizzle the individual images to recover some of the resolution lost due to pixelization. Following common practice, we drizzle the eight images into one final image; the corresponding pixel sizes are 0.13″ and 0.08″ before and after drizzling, respectively. This step introduces correlated noise. In order to allow the “Good” Teams to model the original data, the eight non-drizzled images of each lens system are provided in addition to the final drizzled image.
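The following minimal sketch illustrates steps (2), (4), and (5); the noise parameters, oversampling factor, and function names are assumed for illustration, and the sketch simplifies the actual procedure (no dithering or drizzling).

```python
# Simplified stand-in for steps (2), (4), (5); not the "Evil" Team's pipeline.
import numpy as np
from scipy.signal import fftconvolve

def rebin(img, factor):
    # Sum blocks of factor x factor oversampled pixels into one data pixel
    ny, nx = img.shape[0] // factor, img.shape[1] // factor
    return img[:ny * factor, :nx * factor].reshape(ny, factor, nx, factor).sum(axis=(1, 3))

def observe(ideal_hires, psf_hires, factor=4, exptime=1200.0,
            sky=0.1, read_noise=5.0, rng=np.random.default_rng(0)):
    """ideal_hires: noiseless oversampled image in counts/s (assumed input)."""
    img = fftconvolve(ideal_hires, psf_hires, mode="same")   # step (2): PSF
    img = rebin(img, factor)                                 # step (4): rebin
    counts = (img + sky) * exptime                           # step (5): noise
    noisy = rng.poisson(counts) + rng.normal(0.0, read_noise, counts.shape)
    return noisy / exptime                                   # back to counts/s
```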

A detailed description of these steps is given by Ding et al. (2017a, Section 3). In order to control for numerical issues and for implicit bias in favor of any “Good” Team, we use two independent codes to generate the simulations (half the sample with each code). An example of mock images generated by the two independent codes with the same parameters is shown in Fig. 2. Noise maps containing the standard deviation of the noise in each pixel are provided with the images.

Figure 2.— The left and middle panels illustrate the simulated HST-like images based on two independent codes with the same lens parameters. The right panel shows the difference on the same scale. The pixel scale is 0.08″ after drizzling.

3.1.2 Time Delay

Once the values of the lens parameters are set, the difference of the Fermat potential between the AGN images can be calculated with Eq. (2). To calculate the corresponding time delay with Eq. (1), we need to assume a set of cosmological parameters. For simplicity, we draw the value of the Hubble constant randomly from a uniform distribution between 50 and 90 km s$^{-1}$ Mpc$^{-1}$, assuming a flat $\Lambda$CDM cosmological model with a fixed matter density parameter.

We then add measurement uncertainty to the time delay. As discussed in the introduction, the time delay is assumed to be known with sufficient precision and accuracy, so that we can test the precision and accuracy of the models. We thus assume zero bias and the smallest random errors that can be obtained with current monitoring strategies: we adopt as the random error the larger of 1% of the delay and 0.25 days.
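In code, this error model amounts to the following sketch (the function name and random seed are illustrative):

```python
import numpy as np

def noisy_delay(dt_true_days, rng=np.random.default_rng(1)):
    # Random error: the larger of 1% of the delay and 0.25 days, zero bias
    sigma = max(0.01 * abs(dt_true_days), 0.25)
    return dt_true_days + rng.normal(0.0, sigma), sigma
```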

3.1.3 External Convergence

In principle, all mass along the line of sight contributes to the deflection of light rays. In practice, however, it is often the case that the lensing configuration can be approximated by a single main deflector with the addition of an external shear and convergence ($\kappa_{\rm ext}$). The latter is particularly important because it does not change the image positions and relative fluxes, but it affects the relative Fermat potential, and hence the time delay, according to the following equation:

$\Delta t_{\rm AB} = (1 - \kappa_{\rm ext})\,\frac{D_{\Delta t}}{c}\,\Delta\phi_{\rm AB}$   (9)

If not accounted for, $\kappa_{\rm ext}$ can bias the inference of the time-delay distance and thus of $H_0$. A common practice to constrain $\kappa_{\rm ext}$ is to compare the distribution of mass along the line of sight with numerical simulations (Rusu et al., 2017; Greene et al., 2013; Colbert et al., 2013; Suyu et al., 2010; Collett et al., 2013; Hilbert et al., 2009).

Since the focus of this challenge is single-plane lens modeling, we include the effects of the line-of-sight contribution in the following simplified manner. We randomly generate $\kappa_{\rm ext}$ from a Gaussian distribution with mean 0 and standard deviation 0.025. Thus, from the point of view of modeling, one can adopt a Gaussian prior on $\kappa_{\rm ext}$ with mean 0 and standard deviation 0.025. The uncertainty is chosen to represent well-characterized lines of sight, and corresponds, to first approximation, to a random uncertainty of 2.5% on $H_0$.
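The sketch below draws one realization of $\kappa_{\rm ext}$ and shows how a modeler would undo the factor in Eq. (9); the helper function is hypothetical, not part of the challenge interface.

```python
import numpy as np

rng = np.random.default_rng(2)
kappa_ext = rng.normal(0.0, 0.025)   # one realization of the line-of-sight term

def correct_Ddt(Ddt_model, kappa_ext):
    # From Eq. (9): a single-plane model that neglects kappa_ext infers
    # Ddt_model = (1 - kappa_ext) * Ddt_true, so invert that factor;
    # equivalently, H0_true = (1 - kappa_ext) * H0_model.
    return Ddt_model / (1.0 - kappa_ext)
```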

3.1.4 Lens Velocity Dispersion

Stellar kinematic information is essential for breaking the mass-sheet degeneracy and constraining the lensing potential. Furthermore, the measurement of stellar kinematics provides extra cosmological information (Grillo et al., 2008; Jee et al., 2015, 2016; Shajib et al., 2018). Thus, we provide the model deflector velocity dispersion in addition to the HST images, time delays, and $\kappa_{\rm ext}$. An integrated line-of-sight velocity dispersion is computed by weighting the velocity field by the surface brightness in a square aperture 1″ on a side. Typical seeing conditions are rendered by convolving the surface-brightness-weighted line-of-sight velocity dispersion image with a Gaussian kernel with a full width at half maximum (FWHM) of 0.6″.

Following current practice (Shajib et al., 2018; Wong et al., 2017), random Gaussian noise with a 5% standard deviation is added to the model velocity dispersion to account for typical measurement errors.
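A minimal sketch of this procedure follows, assuming precomputed model grids for the squared line-of-sight velocity dispersion and the surface brightness; the grid spacing, names, and seed are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def aperture_dispersion(sigma2_map, sb_map, pix=0.05, fwhm=0.6,
                        aperture=1.0, rng=np.random.default_rng(3)):
    """sigma2_map: sigma_los^2 on a grid; sb_map: surface brightness.
    pix, fwhm, aperture in arcsec; all inputs are assumed model grids."""
    sig = fwhm / 2.355 / pix                         # FWHM -> Gaussian sigma, pixels
    num = gaussian_filter(sb_map * sigma2_map, sig)  # seeing-convolved I * sigma^2
    den = gaussian_filter(sb_map, sig)               # seeing-convolved I
    half = int(aperture / 2.0 / pix)
    c = sigma2_map.shape[0] // 2
    ap = slice(c - half, c + half)                   # square 1" x 1" aperture
    sigma = np.sqrt(num[ap, ap].sum() / den[ap, ap].sum())
    return sigma * (1.0 + rng.normal(0.0, 0.05))     # add 5% measurement error
```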

3.2. Rungs

Lens modeling is usually time-consuming in terms of both human and computer time. Thus, the size of the simulated sample is limited by practical considerations. Based on the experience of the “Evil” Team and consultations with members of the lensing community, a sample size of 50 was considered a good compromise between practicality and the need to explore different conditions with sufficient statistics to uncover potential biases. Thus, we construct the challenge in the following manner. Similar to the time delay challenge, we provide an entry-level zeroth rung for format checking and testing purposes. The zeroth rung consists of two simple lenses. If the teams can successfully recover $H_0$ from the zeroth rung, they are encouraged to participate and submit their results for the rungs constituting the real challenge. Each rung consists of 16 lenses, including cusp, fold, cross, and double configurations; four examples of each configuration are generated, using the two independent codes.

Considering the quality of the data simulated here, constraints on $H_0$ with a precision of 6% per system should be possible; thus 48 systems would deliver $H_0$ to sub-percent precision, which would be sufficient to uncover biases at this level. We set a global value of $H_0$ per rung. To ensure that the “Good” Teams cannot infer any information from the previous rung, we reset $H_0$ at each rung. The complexity and realism of the systems increase with rung level, thus allowing us to separate different aspects of the lens models and understand what needs improvement. Partial submissions for a subset of the rungs will be accepted.

Detailed information on each rung’s design, including the lens components and the data provided to the “Good” Teams, is given in the following subsections.

3.2.1 Rung0

Rung0 is a training exercise consisting of two lens systems, one two-image (double) and one four-image (quad) configuration. The goal of this rung is to ensure that “Good” Team members understand the format of the data and that no bugs or mistakes could potentially affect the results of the challenge for a specific method.

In view of this goal, parametric models are selected for both the surface brightness and the mass profile. We adopt a single Sérsic profile to describe the surface brightness of both the lens and source galaxies, and elliptical power-law models for the lens surface mass density. Also, an external shear, drawn randomly from a typical range, is added to the lens potential. The AGN images are added as point sources and the PSF is provided. The lens parameters and cosmological parameters used for the simulations are released with the data for the modeling teams to check.

3.2.2 Rung1

This rung is meant to be the easiest one of the actual challenge ladder. Thus, the mocks in Rung1 are generated in a similar way as in Rung0, except that we use the images of real galaxies to obtain realistic surface brightness distributions for the lensed AGN host, and the time delays are affected by external convergence (see Section 3.1.3). For Rung0-1, we also provide an oversampled PSF; the pixel size is 0.13″/4 = 0.0325″. This is to mimic the oversampling that is generally achieved by combining several stars in the science image.

3.2.3 Rung2

Rung2 is meant to test the PSF reconstruction features of lensing codes, in addition to the aspects tested in Rung1. For this purpose, we only provide a guess of the PSF, not the one actually used to generate the data.

3.2.4 Rung3

Rung3 is the highest level in this challenge, and thus the simulations are intended to be the most realistic. In addition to all the complexity we have adopted for Rung1 and Rung2, the observables are generated using massive early-type galaxies selected from numerical cosmological simulations.

4. Instructions for Participation and Evaluation Metrics

Access to the simulated lens data is through the following website:

The “Good” Teams are asked to submit their point estimates of $H_0$ ($\tilde{H}_0$) and the estimated 68% uncertainties ($\sigma$), in the format provided at the website. Multiple entries corresponding to different choices of point estimators and credibility intervals (e.g. maximum likelihood, mean of the posterior) are accepted.

4.1. Metrics

Following Dobler et al. (2015) and Liao et al. (2015), the “Evil” Team will compute four standard metrics to measure precision and accuracy. The metrics will be computed for each rung and for the three challenge rungs combined.

The first metric is the efficiency $f$, which quantifies the fraction of successfully modeled lenses in each rung:

$f = \frac{N_{\rm submitted}}{N}$   (10)

where $N$ is the total number of lenses in the rung and $N_{\rm submitted}$ is the number of systems with submitted models.

Defining this metric means the “Good” Teams do not have to submit a result for every system, but can choose to omit the ones they cannot confidently model. Note that a high efficiency does not necessarily map into a precise and accurate measurement of $H_0$, since the removal of outliers could be an effective way to avoid catastrophic errors.

The second metric, aimed at evaluating the goodness of the error estimate, is the standard reduced $\chi^2$:

$\chi^2 = \frac{1}{fN}\sum_i \left(\frac{\tilde{H}_{0,i} - H_0^{\rm true}}{\sigma_i}\right)^2$   (11)

where $H_0^{\rm true}$ is the true value adopted in each rung.

The third metric is the precision, defined as:

$P = \frac{1}{fN}\sum_i \frac{\sigma_i}{H_0^{\rm true}}$   (12)

measuring the average relative uncertainty.

Finally, the fourth metric is the accuracy, or bias, of the estimator, which we quantify with the fractional residual:

$A = \frac{1}{fN}\sum_i \frac{\tilde{H}_{0,i} - H_0^{\rm true}}{H_0^{\rm true}}$   (13)
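Assuming the averages run over the submitted systems, as in the equations reconstructed above, the four metrics can be computed with a short function (names are illustrative):

```python
import numpy as np

def metrics(H0_est, H0_err, H0_true, N_total):
    """Eqs. (10)-(13) for one rung; the arrays hold the submitted systems only."""
    H0_est, H0_err = np.asarray(H0_est, float), np.asarray(H0_err, float)
    f = len(H0_est) / N_total                           # Eq. (10): efficiency
    chi2 = np.mean(((H0_est - H0_true) / H0_err) ** 2)  # Eq. (11): goodness of errors
    P = np.mean(H0_err / H0_true)                       # Eq. (12): precision
    A = np.mean((H0_est - H0_true) / H0_true)           # Eq. (13): accuracy/bias
    return f, chi2, P, A
```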

We expect the metrics to meet the following ranges for the full challenge:

$0.59 \lesssim \chi^2 \lesssim 1.54$   (14)

$P < 6\%$   (15)

$|A| < 1\%$   (16)

where the $\chi^2$ range corresponds approximately to the 1st and 99th percentiles of the distribution for 48 degrees of freedom according to $\chi^2$ statistics (Rung1-3 have 48 systems in total). The target for precision is based on the best results obtained so far in the literature with data of comparable quality, while the target for $A$ is set by our goal of sub-percent accuracy. We do not set a target for efficiency, even though of course a low efficiency will be implicitly penalized by small-number statistics. For a single rung, the $\chi^2$ range and accuracy targets are expected to be less stringent (for a single rung of 16 systems, the statistical floor on the mean accuracy is approximately $6\%/\sqrt{16} = 1.5\%$):

$0.36 \lesssim \chi^2 \lesssim 2.0$   (17)

$P < 6\%$   (18)

$|A| < 2\%$   (19)

The metrics will be analyzed individually in each rung. We expect the performance to drop off climbing up the ladder.
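The quoted $\chi^2$ ranges can be reproduced with scipy; the values are approximate.

```python
from scipy.stats import chi2

# 1st/99th percentiles of the reduced chi^2 for 48 and 16 systems,
# matching the ranges quoted above
for dof in (48, 16):
    lo, hi = chi2.ppf([0.01, 0.99], dof) / dof
    print(dof, round(lo, 2), round(hi, 2))   # 48: ~0.59-1.54; 16: ~0.36-2.00
```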

4.2. Timeline and Publication of the Results

The challenge mock data will be released on January 8th, 2018. The deadline for blind submission of Rung1 is August 8th, 2018, seven months after the data release. The deadline for Rung2 is 2019 April 8 and that for Rung3 is 2019 September 8. In order to allow for the correction of bugs, and to test different algorithms, multiple submissions are accepted. The true input parameters will be published after the deadline of each rung to allow teams to study their submissions in more detail, and/or for non-blind submissions.

This paper is posted on the arXiv as a means to open the challenge. After the deadline, it will be submitted to the journal together with the second paper of this series, presenting the results and including all “Good” Team members as co-authors. The two papers will be submitted concurrently so as to allow the referee to evaluate the entire process. “Good” Teams are encouraged to publish papers on their own methods using the challenge data after the submission of Paper II. By participating in the challenge, the “Good” Teams agree not to publish the detailed results of their own methods before the collective Paper II is submitted to the journal.

5. Summary

We have presented the Time Delay Lens Modeling Challenge (TDLMC). The structure of the challenge is as follows. The “Evil” Team produced a set of mock lenses, meant to mimic state-of-the-art data quality. Anyone in the community is invited to participate as a “Good” Team, by modeling the data and submitting a blind estimate of the Hubble constant. The “Evil” Team will compute four metrics aimed at quantifying the accuracy and precision of the estimates. The overall goal of the challenge is to assess whether current lens modeling techniques are sufficient to ultimately reach a 1% measurement of $H_0$. The challenge is organized in rungs in order to help identify the aspects of the lens modeling effort that may represent bottlenecks and may require additional improvements.

Acknowledgments

We thank Vivien Bonvin, Simon Birrer, Matthew W. Auger, and Xiao-Lei Meng for useful suggestions and technical support. X.D. acknowledges support by the China Postdoctoral Science Foundation Funded Project; he is also grateful for Zong-Hong Zhu’s support and funding. T.T. acknowledges support by the Packard Foundation in the form of a Packard Research Fellowship. Xu D. acknowledges support of the Klaus Tschira Foundation at HITS. This work was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. G.C.-F. Chen acknowledges support from the Ministry of Education in Taiwan via a Government Scholarship to Study Abroad. C.D.F. acknowledges support from the US National Science Foundation grant AST-1312329. S.H. acknowledges support by the DFG cluster of excellence ‘Origin and Structure of the Universe’ (www.universe-cluster.de).

References

  • Alam et al. (2017) Alam, S., Ata, M., Bailey, S., et al. 2017, MNRAS, 470, 2617
  • Bartelmann (2010) Bartelmann, M. 2010, Classical and Quantum Gravity, 27, 233001
  • Betoule et al. (2014) Betoule, M., Kessler, R., Guy, J., et al. 2014, A&A, 568, A22
  • Bonvin et al. (2017) Bonvin, V., Courbin, F., Suyu, S. H., et al. 2017, MNRAS, 465, 4914
  • Ciotti & Bertin (1999) Ciotti, L., & Bertin, G. 1999, A&A, 352, 447
  • Colbert et al. (2013) Colbert, J. W., Teplitz, H., Atek, H., et al. 2013, ApJ, 779, 34
  • Collett et al. (2013) Collett, T. E., Marshall, P. J., Auger, M. W., et al. 2013, MNRAS, 432, 679
  • de Vaucouleurs (1948) de Vaucouleurs, G. 1948, Annales d’Astrophysique, 11, 247
  • Ding et al. (2017a) Ding, X., Liao, K., Treu, T., et al. 2017a, MNRAS, 465, 4634
  • Ding et al. (2017b) Ding, X., Treu, T., Suyu, S. H., et al. 2017b, MNRAS, 472, 90
  • Dobler et al. (2015) Dobler, G., Fassnacht, C. D., Treu, T., et al. 2015, ApJ, 799, 168
  • Eisenstein et al. (2005) Eisenstein, D. J., Zehavi, I., Hogg, D. W., et al. 2005, ApJ, 633, 560
  • Greene et al. (2013) Greene, Z. S., Suyu, S. H., Treu, T., et al. 2013, ApJ, 768, 39
  • Grillo et al. (2008) Grillo, C., Lombardi, M., & Bertin, G. 2008, A&A, 477, 397
  • Hilbert et al. (2009) Hilbert, S., Hartlap, J., White, S. D. M., & Schneider, P. 2009, A&A, 499, 31
  • Jee et al. (2015) Jee, I., Komatsu, E., & Suyu, S. H. 2015, Journal of Cosmology and Astroparticle Physics, 11, 033
  • Jee et al. (2016) Jee, I., Komatsu, E., Suyu, S. H., & Huterer, D. 2016, Journal of Cosmology and Astroparticle Physics, 4, 031
  • Kilbinger (2015) Kilbinger, M. 2015, Reports on Progress in Physics, 78, 086901
  • Koopmans et al. (2009) Koopmans, L. V. E., Bolton, A., Treu, T., et al. 2009, ApJ, 703, L51
  • Liao et al. (2015) Liao, K., Treu, T., Marshall, P., et al. 2015, ApJ, 800, 11
  • Mantz et al. (2010a) Mantz, A., Allen, S. W., Ebeling, H., Rapetti, D., & Drlica-Wagner, A. 2010a, MNRAS, 406, 1773
  • Mantz et al. (2010b) Mantz, A., Allen, S. W., Rapetti, D., & Ebeling, H. 2010b, MNRAS, 406, 1759
  • Planck Collaboration et al. (2014) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2014, A&A, 571, A16
  • Planck Collaboration et al. (2016) —. 2016, A&A, 594, A13
  • Riess et al. (2016) Riess, A. G., Macri, L. M., Hoffmann, S. L., et al. 2016, ApJ, 826, 56
  • Rusu et al. (2017) Rusu, C. E., Fassnacht, C. D., Sluse, D., et al. 2017, MNRAS, 467, 4220
  • Schneider (2006) Schneider, P. 2006, in Saas-Fee Advanced Course 33: Gravitational Lensing: Strong, Weak and Micro, ed. G. Meylan, P. Jetzer, P. North, P. Schneider, C. S. Kochanek, & J. Wambsganss, 1–89
  • Schneider & Sluse (2013) Schneider, P., & Sluse, D. 2013, A&A, 559, A37
  • Sersic (1968) Sersic, J. L. 1968, Atlas de galaxias australes (Cordoba, Argentina: Observatorio Astronomico)
  • Shajib et al. (2018) Shajib, A. J., Treu, T., & Agnello, A. 2018, MNRAS, 473, 210
  • Sluse et al. (2017) Sluse, D., Sonnenfeld, A., Rumbaugh, N., et al. 2017, MNRAS, 470, 4838
  • Suyu et al. (2010) Suyu, S. H., Marshall, P. J., Auger, M. W., et al. 2010, ApJ, 711, 201
  • Suyu et al. (2013) Suyu, S. H., Auger, M. W., Hilbert, S., et al. 2013, ApJ, 766, 70
  • Suyu et al. (2014) Suyu, S. H., Treu, T., Hilbert, S., et al. 2014, ApJ, 788, L35
  • Suyu et al. (2017) Suyu, S. H., Bonvin, V., Courbin, F., et al. 2017, MNRAS, 468, 2590
  • Tie & Kochanek (2018) Tie, S. S., & Kochanek, C. S. 2018, MNRAS, 473, 80
  • Treu (2010) Treu, T. 2010, ARA&A, 48, 87
  • Treu & Koopmans (2002) Treu, T., & Koopmans, L. V. E. 2002, ApJ, 575, 87
  • Treu & Koopmans (2004) —. 2004, ApJ, 611, 739
  • Treu & Marshall (2016) Treu, T., & Marshall, P. J. 2016, A&A Rev., 24, 11
  • Wong et al. (2017) Wong, K. C., Suyu, S. H., Auger, M. W., et al. 2017, MNRAS, 465, 4895
  • Xu et al. (2016) Xu, D., Sluse, D., Schneider, P., et al. 2016, MNRAS, 456, 739