Quasar Correlation and Bell’s Inequality
Viewing two sources at sufficient distance and angular separation can assure, by light-travel-time arguments, the causal independence of their emitted photons. Using these photons to set different apparatus parameters in a laboratory quantum-mechanical experiment could ensure those settings are independent too, allowing a decisive, loophole-free test of Bell's inequality. Quasars are a natural choice of source, as they are pointlike and visible to high redshift. Yet applying them at the ultimate limit of the technique requires flux measurements in opposite directions on the sky, which presents a challenge in proving randomness against either noise or an underlying signal. By means of a "virtual" experiment and simple signal-to-noise calculations, the bias imposed on ground-based optical photometry during an Earth-wide test by fluctuating sky conditions and instrumental errors, including photometric zeropoints, is explored. An analysis of one useful dataset from the Gemini 8-meter telescopes is presented, using over 14 years of archival images obtained with their Multi-Object Spectrograph (GMOS) instrument pair, serendipitously sampling thousands of quasars up to apart. These do show correlation: an average pairwise broadband optical flux difference intriguingly consistent with the form of Bell's inequality. That is interesting in itself, if not also a potential harm to experimental setting independence; some considerations for future observations are discussed.
Subject headings: quasars, cosmology, techniques, site testing
1. Introduction
That quantum mechanics (QM) must be incomplete, permitting otherwise inexplicable outcomes without resort to superluminal signals or "hidden" unmeasured variables, was famously contended in a paper by Einstein, Podolsky and Rosen (1935). Bell (1964) showed how QM theory can be rescued in practice: experiments must instead violate at least one of three foundational tenets: local reality (that each measurement corresponds to a single, real physical state); fair sampling (that sufficient measurements can capture all system states); or setting independence (that two concurrent state measurements cannot have influenced each other). Tests routinely find QM correct, and have tightly and simultaneously restricted the first two conditions (e.g. Rosenfeld et al. 2017 and references therein), but have not excluded the last by closing the so-called "freedom-of-choice" loophole and fully eliminating experimenter interaction in the result. One possible route is to set experimental parameters via photons from astronomical sources (Friedman, Kaiser and Gallicchio, 2013; Gallicchio, Friedman and Kaiser, 2014), requiring that any interference between two settings have been orchestrated between the distant sources and the Earth-based observer. Proof-of-concept QM tests using stars within the Milky Way were achieved by Handsteiner et al. (2017) and Li et al. (2018), forcing any "collusion" in the outcome back hundreds of years. A recent exciting development was the extension to quasars (Rauch et al., 2018): the combination of high redshift with large angular separation on the sky can place these entirely outside each other's light cones; for separations of this occurs when both sources have . The independence of settings triggered by those photons is then unspoiled by communication between the sources and, absent correlated errors corrupting that signal prior to detection, forces any unexplained coincidence to be the result of unexpected synchronization between sources.
Otherwise, the foundations of QM theory would indeed be in question.
The quasar-based QM experiment performed by Rauch et al. (2018) follows the methodology of Clauser et al. (1969), where an entangled pair of photons emitted from a central source is split between two optical arms and their polarizations are detected at receivers. While those entangled photons are in flight, a switching mechanism at each receiver (each co-located with a telescope) selects randomly between two potential polarization measurements. In these first quasar test runs that switch was set by the colour of the most recently detected photon, using bright sources with , visible with two 4-m class telescopes from La Palma (N, W, elev. 2396 m a.s.l.). Their fluxes were sufficient to allow large sampling losses: a relative polarization measurement was retained only if both quasar photons arrived during the microseconds while the entangled photons were in flight. Neither the colours of the two quasars nor the background noise against which they were detected (notably sky brightness) were seen to be correlated during the runs, which lasted between 12 and 17 minutes. This experiment strongly upheld QM, while safely ruling out collusion between polarization settings.
Can a QM experiment utilizing antipodal, and truly acausal, quasar pairs be performed? The first hurdle is that such sightlines are effectively impossible from any single location on the Earth. An idealized spaceborne loophole-free QM experiment with sufficient field of regard might do so, possibly even via direct photon-counting of γ-rays or X-rays. It is notable, however, that despite decades of optical variability studies (e.g. MacLeod et al. 2010) and extensive reverberation mapping having established the characteristic sizes of AGN disks as on the order of light days across (Mudd et al. 2018), no simultaneous monitoring campaign of such sources outside each other's horizons is so far reported in the literature, at any wavelength. The difficulty from the ground is, of course, hindrance by the Earth itself. Radio telescopes gain no benefit in this regard, as dish elevation limits are necessarily above the local horizon, regardless of Sun position. From the nightside, optical/near-infrared observatories are restricted to separations less than from any single site, hemmed below two airmasses. Dark skies reach only in the visible, and the mean optical brightness of quasars beyond also falls below about 21 mag. Thus, for seeing-limited observations, this escapes all except 8-m class telescopes for exposures less than a few minutes. This limit is relevant because unbiased switching demands simultaneous colour-discrimination of quasar photons, not yet demonstrated between two sites in opposite hemispheres. That does not obviate previous QM experiments, but it does set a bar for an ideal, irrefutable one.
Practical aspects of connecting the sites aside, at least one useful test dataset is available, from Gemini: on Maunakea in Hawaii (N, W, 4213 m) and on Cerro Pachon in Chile (S, W, 2722 m), which, when simultaneously viewing two targets near zenith, places those apart on the sky. These have operated near-identical optical imagers continuously for over 15 years, and a public archive eases aggregation of many serendipitous observations. Although these data do not, in themselves, constitute a QM experiment, they do provide a baseline from which to consider a future one: at over 10600 km apart across the Pacific Ocean, no collusion is possible on timescales less than this distance over the speed of light, or , which in the restframe at corresponds to seconds.
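The separation and one-way light-travel time between the two sites can be sketched with a short great-circle calculation. The site coordinates below are approximate values assumed here (they are not given numerically in the text):

```python
import math

C_KM_S = 299_792.458   # speed of light, km/s
R_EARTH_KM = 6371.0    # mean Earth radius, km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle (surface) distance between two sites, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_c = (math.sin(p1) * math.sin(p2)
             + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return R_EARTH_KM * math.acos(max(-1.0, min(1.0, cos_c)))

# Approximate coordinates for Gemini North (Maunakea) and
# Gemini South (Cerro Pachon); assumed, not from the text.
d_km = great_circle_km(19.82, -155.47, -30.24, -70.74)
light_ms = 1e3 * d_km / C_KM_S
print(round(d_km))      # roughly 10600 km
print(round(light_ms))  # roughly 35 ms one-way
```

This reproduces the quoted figure of over 10600 km, and a minimum signalling time of a few tens of milliseconds.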
The next section outlines how limitations on simultaneous photometry of quasars at large angular separation restrict their best expected relative signal-to-noise ratio, whether exploitable in a Bell-theorem test to mimic QM or not. The motivation is to determine if any underlying signal may remain hidden. To look for a potential one, a "virtual test" is suggested, with sources chosen in a randomized way to avoid bias and sky conditions sampled sufficiently to remove their influence. Following that, the available Gemini dataset is described, which consists of , , , photometry for thousands of quasars with redshifts . The total sample comprises roughly 2 million observational pairs, which in their aggregate (0.25-magnitude 1- uncertainty within 6-degree-wide sampling bins) is just sufficient to show a difference in brightness relative to object separation matching Bell's inequality. Discussion follows on the prospects in the era of 30-m telescopes, and on reaching the photometric accuracy necessary to exclude local noise sources in closing the last observational loophole.
2. Expected Signal-to-Noise Ratio
The intent is to quantify a correlated lack of randomness in external source fluxes relative to local noise, and so a general description of a QM experiment is sufficient to illustrate how this might be connected to setting independence in a Bell-theorem test, even if that test were ideal. Generically, quantum theory demands that entangled photons must be found in opposite polarization states; if one is found with horizontal polarization, the state of its entangled twin will always be found vertical. (In the original theoretic treatment, these were the spin states of an entangled electron pair: up or down.) Practical experiments are imperfect, and test this statistically, with many repeated samples. Importantly, any real experiment cannot measure both states in one direction simultaneously, as this requires a setting change. For example, polarimetry necessitates a discriminator, such as a polarizer or the rotation of a waveplate; a choice must be made as to which polarization angle (or arm) to sample. Detecting the state of one entangled photon instantly collapses the wavefunction of both, subject to shared uncertainty, demanding for the other a probability density which depends on the angle between waveplate settings. Those states must be anti-correlated when coincident (there are exactly two states), and thus, in order to preserve equal average probabilities of both states, this implies no net correlation at difference, and a functional form for normalized correlation of , where is in degrees (see Figure 1). But the experimenter's ability to freely choose settings - and so not be complicit in the outcome - cannot be assumed, and when it is not, the correlation takes on the form of the inequality by Bell (1964):
which exceeds unity for all lesser angles, although overall it exhibits less correlation. Finding that unequal or "excess" correlation, above equality and beyond what truly random sampling predicts, would reveal a fault in the strict QM interpretation.
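Since the exact expressions are elided above, the comparison can only be sketched with commonly assumed forms: a cosine law for the normalized QM correlation (zero at 90 degrees) and a linear fall-off for the hidden-variable bound, whose ratio exceeds unity between 0 and 90 degrees. Both functional forms here are assumptions for illustration, not a transcription of equation 1:

```python
import math

def corr_qm(theta_deg):
    """Assumed normalized QM correlation: cosine of the separation angle."""
    return math.cos(math.radians(theta_deg))

def corr_linear(theta_deg):
    """Assumed linear (hidden-variable) correlation: 1 - theta/90."""
    return 1.0 - theta_deg / 90.0

def excess(theta_deg):
    """Ratio of the cosine to linear forms; > 1 for 0 < theta < 90 deg."""
    return corr_qm(theta_deg) / corr_linear(theta_deg)
```

Under these assumed forms, both correlations are unity at coincidence and vanish at 90 degrees, while the ratio grows above unity in between, mirroring the "excess" described above.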
In the QM tests already described, the switching photons effectively operate the apparatus, automating setting changes photon-by-photon between two polarizer settings by pre-defined selection criteria. Any criterion can only ever be a relative flux measurement over some suitably defined passband and time period. So an issue would arise if the colour of the sensed switching photons at the telescope were dominated by local noise: either the sources or the switching mechanisms could retain hidden correlation. Admittedly, unless strong, that may not predict the colour of the next photon to arrive, and so is not exploitable to mimic QM behaviour. But even if weak, equation 1 is logically one functional form of correlation to look for, and it is immediately connected to the relative noise between telescopes. That correlation cannot be avoided from the ground: observational errors such as seeing, sky background and extinction are well known to depend on airmass, and if not removed perfectly they inject some cosine dependence with viewing angle . Imagine that, instead of angular polarizer difference , source correlation is exhibited by their angular separation (). Correlation might then be impossible to fully exclude.
The requirement on how uncorrelated sources must be to avoid this particular bias, if real, can still be found by working backwards from the needs of setting independence: finding sufficient flux difference, beyond observational noise, between any two randomly-selected sources to be confident that switching based on those was not random. Although quasars are known to fluctuate on timescales of days to many years, and likely do so on timescales as short as the QM experiment, as a group they have well-studied optical brightness distributions. So if any two quasars were sampled in a perfectly unbiased way from a distribution of width magnitudes, they should, over a long-term average, have a maximal difference between them, if amplified by equation 1, of
in magnitudes. The signal amounts to an excess relative to flatness with separation angle, and so the absolute value of this can be taken. And thus the problem becomes one of determining how many quasars to sample at random, for how long, and how accurately to overcome uncertainty in flux measurements, which at minimum will be restricted by the instrumental error in relative flux difference, and from the ground is likely further impacted by variation of sky brightness, seeing, and atmospheric extinction on similar timescales.
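The long-term average pairwise difference can be checked numerically. Assuming (as a simplification) that the two magnitudes are drawn independently from a Gaussian of width sigma, the mean absolute pairwise difference is 2σ/√π, about 1.13σ:

```python
import math
import random

def mean_abs_pair_diff(sigma, n_pairs=200_000, seed=1):
    """Monte Carlo estimate of the mean |m1 - m2| for magnitudes drawn
    independently from a Gaussian brightness distribution of width sigma."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        total += abs(rng.gauss(0.0, sigma) - rng.gauss(0.0, sigma))
    return total / n_pairs

sigma = 1.0  # assumed width of the quasar brightness distribution, mag
est = mean_abs_pair_diff(sigma)
exact = 2.0 * sigma / math.sqrt(math.pi)  # analytic value, ~1.128 mag
```

Any angular modulation of this baseline difference, amplified by equation 1, is the excess being searched for.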
Consider identical instruments at two sites with a geographic separation of 90 degrees; that is, two sources viewed simultaneously at zenith would be separated by 90 degrees. (This is essentially the case for Gemini North and South: 95.5 degrees apart.) Under clear skies, atmospheric extinction increases linearly with airmass, which grows inversely with the cosine of zenith distance (), so for any target under an airmass of 2, the difference between it and any other degrees away is at its extreme
where is half the mean extinction in magnitudes. A similar relationship can be found for sky brightness,
and image quality
where is in units of , and is in arcseconds. The functional form of has a zenith-distance dependence more like the standard expression of Kasten (1965), which is , and seeing () is better modeled with a power law. These extremes are shown in Figure 2, represented by a thin line and a dot-dashed line respectively. Later it will be shown that for Gemini , , and ; those values are adopted in Figure 2. Their averages are shown as thick curves.
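The extremal differences above follow from simple airmass scalings; a minimal sketch, assuming the plane-parallel approximation X = sec(z), extinction linear in airmass, and the standard X^0.6 seeing scaling, with a placeholder extinction coefficient:

```python
import math

def airmass(z_deg):
    """Plane-parallel approximation: X = sec(z)."""
    return 1.0 / math.cos(math.radians(z_deg))

def extinction_diff(theta_deg, k_mag=0.1, z_max_deg=60.0):
    """Extremal extinction difference (mag) between a target at zenith
    distance z_max (airmass 2 at z = 60 deg) and one theta degrees away,
    for the simple geometry where both lie on the same vertical circle.
    k_mag is an assumed mean extinction per unit airmass."""
    z_other = abs(z_max_deg - theta_deg)
    return k_mag * abs(airmass(z_max_deg) - airmass(z_other))

def seeing_ratio(z_deg, power=0.6):
    """Seeing degrades roughly as X**0.6 (standard Kolmogorov scaling)."""
    return airmass(z_deg) ** power
```

The sky-brightness term follows the same pattern with its own coefficient, with the caveat noted above that a Kasten-type airmass expression is more appropriate near the horizon.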
The signal-to-noise ratio of detectable enhancement in flux differences over observational noise for samples thus has the form
where is the photometric error and is the bandpass zeropoint error, both in magnitudes. Note that, as these are maximal errors, they are not added in quadrature, and they fall off as the square root of the number of samples, as does the uncertainty due to photometric error. Also, the maximum detectable effect for any randomly selected pair, even in a space-based observation, is limited by the peak of and the calibration error to be
The width of the distribution of quasar brightnesses in the optical is about magnitude, and photometric accuracy is limited by uncertainty in the zeropoint to about 2%, more typically 5%, which suggests this could reasonably be determined to . A detection minimum, for , is shown in Figure 2. The proportional relationship between those can be seen in Figure 3, with a peak that occurs near . For many samples taken over the long term, , which implies that it slowly grows again by (3.6 for 10000) with photometric error fixed at 0.25 mag. This is indicated by the grey curves, which suggest that either a single pair with minimum observational noise or a large sample across the full sky may present comparable ways to detect collusion. It is the latter method which is adopted here: obtaining good relative photometry for an unbiased sample of quasars over many years, and looking for a relative dependence with angular separation.
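The square-root-of-N averaging argument can be sketched numerically. The functional form below is an assumption (photometric scatter averaging down over N samples against a fixed zeropoint floor), not the paper's exact expression; the numerical values are the ones quoted for Gemini:

```python
import math

def snr(n_samples, signal=0.10, sigma_phot=0.25, sigma_zp=0.02):
    """Signal-to-noise for detecting a mean pairwise excess `signal` (mag).
    The photometric scatter averages down as sqrt(N); the zeropoint error
    is kept as a calibration floor (assumed form for illustration)."""
    noise = sigma_phot / math.sqrt(n_samples) + sigma_zp
    return signal / noise

# With ~2 million pairs the scatter term is negligible and the
# zeropoint floor dominates:
print(round(snr(2_000_000), 1))  # ~5
```

This illustrates why a single low-noise pair and a very large noisy sample can yield comparable constraints: beyond some N, only improved calibration helps.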
Archival Gemini Multi-Object Spectrograph (GMOS) images were searched for all instances of a known quasar falling into the field, starting from the beginning of regular GMOS operations to the beginning of 2016, spanning 14.5 years. A difficulty with a direct search is that the file header information includes a target name selected by the observer, which may not correspond to any catalog; it would also exclude cases where an object happened to fall on the detector during the observation of another, defined target. So instead the Million Quasars Catalog (MILLIQUAS), Version 4.8 of 22 June 2016, was cross-correlated with the full Canadian Astronomy Data Centre (CADC) archive of science frames obtained with GMOS North and South. MILLIQUAS is a compendium of published catalogs, primarily the Sloan Digital Sky Survey (SDSS), providing a redshift plus optical magnitudes (Blue: , or ; and Red: or ) for each object. No restriction on type was made, including gravitational lenses, but the likelihood of misidentifications is considered small.
The GMOS field of view (FoV) is approximately , with 0.075 arcsec pixels. Using the Common Archive Observation Model (CAOM) Table Access Protocol (TAP) web service and custom Python scripts, a search within a 5 arcmin radius of each object was conducted. This produced 28374 cases where an object was within a GMOS , , , or field, through the full range of right ascension and between and declination; deleting those corrupted or otherwise unusable yielded 20514 objects (: 4691, : 6709, : 6776, : 2338), with a typical exposure time of about 150 seconds. Gemini records each science frame within clear-sky-fraction bins from CC20 (best 20%-ile) to CCAny (all conditions). The sample represents 11483 unique (although potentially repeated) objects having a mean redshift of 1.48, no noticeable dependence on sky position, and obtained under all conditions under which the telescopes were operational.
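The archive query can be sketched as an ADQL cone search against a CAOM-style TAP service. The table and column names below (caom2.Plane, caom2.Observation, position_bounds, instrument_name) follow the CADC CAOM model but should be verified against the live schema; the coordinates in the usage line are arbitrary:

```python
def gmos_cone_query(ra_deg, dec_deg, radius_deg=5.0 / 60.0):
    """Build an ADQL query returning GMOS observations whose footprint
    intersects a cone of `radius_deg` around (ra_deg, dec_deg)."""
    return (
        "SELECT o.observationID, p.time_bounds_lower, p.time_bounds_upper "
        "FROM caom2.Plane AS p "
        "JOIN caom2.Observation AS o ON p.obsID = o.obsID "
        "WHERE o.instrument_name LIKE 'GMOS%' "
        f"AND INTERSECTS(CIRCLE('ICRS', {ra_deg}, {dec_deg}, {radius_deg}), "
        "p.position_bounds) = 1"
    )

# One query per MILLIQUAS position, submitted to the CADC TAP endpoint:
q = gmos_cone_query(214.1, 13.78)
```

Submitting such strings through a TAP client (e.g. pyvo) and looping over the catalog reproduces the per-object 5-arcmin search described above.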
Finally, a selection was made to ensure good data quality for each observation. Every object frame was searched for a comparison star from the Fourth U.S. Naval Observatory CCD Astrograph Survey (UCAC4), with a published SDSS magnitude, to serve as a photometric calibration. There were 16774 frames with such a suitable, unsaturated star. This also provided an image quality criterion for each frame: the stellar Point-Spread Function (PSF) Full-Width-at-Half-Maximum (FWHM) was extracted. The FWHM of the object (also confirmed to be unsaturated) was checked against this; in the few cases where it differed, the smaller of the two was taken. A lower FWHM limit of 0.20 arcsec was applied to ensure that only real images were obtained, without artifacts, so that it reflects seeing conditions. After culling for cloud-cover conditions better than CC80, or usable with less than 2 magnitudes of extinction, 9317 objects remained.
Synthetic aperture photometry was carried out on the full sample, both objects and comparison stars, using a 4 arcsec diameter aperture throughout. Roughly 20 postage-stamp sections were downloaded from the database. The positional accuracy of the frames was found by inspection to not always be better than 2 arcseconds, so during the FWHM-measuring step a centroiding algorithm found the central pixel position and then sub-pixel shifted the aperture prior to obtaining the flux. Median sky backgrounds for each frame were subtracted after applying the appropriate detector gains and filter zeropoints, as published on the Gemini webpages. The detectors and their configurations at the focal plane changed at certain times, with either 4 or 6 chips across the FoV, separate amplifiers, and gaps between them; each detector has a slightly different amplifier response. On average the zeropoints are , , and mag in , , and , which fluctuated over that time with deviations of 0.12, 0.02, 0.12 and 0.22 mag respectively.
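The measurement step can be sketched in numpy. This is a minimal illustration only: it uses a whole-pixel circular mask rather than the sub-pixel-shifted aperture described above, and the helper names are hypothetical:

```python
import numpy as np

def centroid(img):
    """Flux-weighted centroid (x, y) of a sky-dominated postage stamp,
    weighting by counts above the median (sky) level."""
    y, x = np.indices(img.shape)
    w = np.clip(img - np.median(img), 0.0, None)
    return (x * w).sum() / w.sum(), (y * w).sum() / w.sum()

def aperture_flux(img, r_pix):
    """Sum of counts within a circular aperture of radius r_pix centred
    on the source, after subtracting the median sky level of the stamp."""
    sky = np.median(img)
    x0, y0 = centroid(img)
    y, x = np.indices(img.shape)
    mask = (x - x0) ** 2 + (y - y0) ** 2 <= r_pix ** 2
    return (img - sky)[mask].sum()

# Synthetic check: flat sky of 10 counts with a 100-count point source.
stamp = np.full((51, 51), 10.0)
stamp[25, 25] += 100.0
flux = aperture_flux(stamp, r_pix=5.0)
```

At 0.075 arcsec per pixel, the 4 arcsec diameter aperture used above corresponds to a radius of roughly 27 pixels.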
All resultant photometry was corrected for atmospheric extinction using the calibration stars, each taken relative to its UCAC4 magnitude plus a mean filter correction. This was calculated using the 237 observations with complete photometry in MILLIQUAS-catalogued , , and magnitudes, which allows the calculation of a mean sample colour shift to , , , and of 0.84, 0.49, 0.34, and 0.19 mag; was interpolated as 0.66 mag and extrapolated to 0.18 mag. Of these, 124 were cases of objects included in the UCAC4 catalog, all brighter than 17.5 mag. Correcting stellar colours so that the average sky-background difference across the full sample is zero in yields corrections of 0.50, 0.00, -0.27, and -0.22 mag in the , , , and filters respectively; although less of a concern in the and data, the mean values of the resultant sample in all four filters are very close to neutral (see Figure 4). The objects were also shifted by the same mean filter correction to and uniformly taken as a differential from their catalog magnitudes. In this way, all photometry is relative to the sample average of mag; photometry is sky-background-limited, with some reaching the expected 5- point-source limit of about 23 mag: see Figure 5. A lower object cutoff at 21.5 mag excludes those fainter than the mean sky surface brightness, avoiding a colour bias.
The weak influence of sky brightness is illustrated in Figure 6. Only mild angle and sky-brightness restrictions were employed, with a uniform upper sky-brightness cutoff of . To meet this, objects must have been at least 30 degrees from the Sun or Moon, and Moon phase was restricted to 80% full (see Figure 6). That this agrees well with the expectations from the linear model can be seen in Figure 7. Linear least-squares fits to extinction (comparison magnitudes), sky brightness and seeing (image quality) are shown in green (averages: dashed), giving mag, , and arcsec.
5. Analysis and Results
Correlated observations were based on the sky position and UTC time recorded in each frame header. An Interactive Data Language (IDL) code was written to perform this step, with the following prescription. For each observation, all earlier ones were searched to find those whose exposure duration overlapped by some fraction with the current one. A positive fraction means that there is some overlap: perfect overlap, or perfect coincidence, would be a fraction of unity, and anything less - down to zero - indicates that, for equal fluxes, this fraction of photons in the exposure would overlap with those taken in the other; over 50% is considered "coincident." A negative fraction means that there is no overlap in the exposure durations; it is still useful, however, in characterizing the temporal aspects of the sample. Observations were considered "correlated" if they fell within a given temporal window. For example, an initial check was to find if another observation occurred within the same observational "day" for the combined Gemini telescopes, which varies during the year but is about 12 hours plus the timezone difference, or 17 hours. The largest possible window was to look back for frames within 14.5 years, to the beginning of records.
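One simple definition of the overlap fraction consistent with the prescription above (the scaling of the negative, non-overlapping branch is an assumption here) can be sketched as:

```python
def overlap_fraction(start1, end1, start2, end2):
    """Fractional overlap of two exposure intervals (times in seconds).
    Positive values give the shared time as a fraction of the shorter
    exposure, with 1.0 for perfect coincidence; negative values scale
    the gap between exposures the same way."""
    overlap = min(end1, end2) - max(start1, start2)
    shorter = min(end1 - start1, end2 - start2)
    return overlap / shorter

def is_coincident(start1, end1, start2, end2, threshold=0.5):
    """Apply the >50% criterion used for 'coincident' pairs."""
    return overlap_fraction(start1, end1, start2, end2) > threshold
```

Looping this comparison over all earlier frames within a chosen temporal window (same observational "day", or the full 14.5-year record) reproduces the matching step.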
Some fully coincident observations did occur, that is, object pairs with a fractional overlap of unity (310 times). One such case with multiple repeated observations is shown in Figure 8, for quasars SDSS J141624.14+134656.6 and SDSS J141632.99+135001.5, observed with GMOS-North 26 times together over the course of about a month (the MJD observation times are indicated), when those two happened to fall within the same frame. In most cases the comparison star was the same (UCAC4 520-056039), and so there is no difference in measured sky background (red circle), extinction (yellow) or seeing (blue dots). Those differ, however, in the few cases where a different calibration star was found (automatically) for the other. Spanning these observations, the photometry is remarkably stable: quasars near the mean sample brightness (catalog magnitudes of 19.7 mag and 19.8 mag) were measured repeatedly over month-long timescales within a limit of 0.10 mag, which is considered the limit of significant measurable differences for the rest of the sample.
Several cases of objects with repeat, correlated observations having a brightness difference larger than the significance limit are provided in Table 1, ordered by the span of observations. Only those with at least 10 occurrences are retained; uncertainties reported are standard deviations from the mean. These are, effectively, the most correlated objects found in the study. They were observed from both North and South, and through the full range of catalog brightness. There are notably cases of bright (up to 17.5 mag) quasars, although none is reported in the literature as a known variable. For example, 3C 186.0 () is a well-studied radio-loud source with a prominent jet (Chiaberge et al., 2017): observed 11 times beyond the significance limit, and found to be on average mag less than its catalog Red magnitude (and even brighter than its SDSS magnitude of 17.88). The largest discrepancies from the catalog are over a magnitude, occurring for fainter objects, as expected. This holds the prospect that monitoring of targets could reveal intrinsic, intra-day brightness fluctuations. There was, however, no case of significant object difference between the two telescopes obtained within a single-day window. Even so, those observations with one object in the FoV (either North or South, but not both) are shown in Figure 9. It can be seen from this that model expectations for intra-day variation in seeing, sky background, and extinction - for those cases that had a comparison star suffering less than 2 magnitudes of extinction - are consistent with the observations.
The most interesting result is that there is significant correlation at all, even if it is not certain how that might impact a real test. The full sample of all correlations shows no evident bias with object angular separation, either in sky conditions or sample selection. Figure 10 is the result of cross-correlating the full sample: the ratio of redshifts of any two objects as a function of angular separation. Although the visibility of objects from the two sites necessarily results in density variation of this distribution, it is fair: there is no average redshift difference across the sample (green curve). With only 274 observations of objects, there was not a sufficient sample to provide a meaningful comparison of just those: per 6-degree angular bin. The other results of cross-correlating the full dataset are shown in Figure 11. Only cases where objects were within 2 magnitudes of each other and had a relative Blue-Red colour difference of less than 0.20 mag were retained. These are displayed as the relative differences in the objects, averaged in 6-degree-wide bins. The error bars are 1- standard deviations within those bins. The results for just the coincident pairs are shown in light blue, that is, those cases where two objects fell within the FoV. Above are the differences for comparison stars; for reference, differences in seeing (blue), sky brightness (red), and the catalog magnitudes (green) are overplotted. These are all relatively flat (the standard deviation in comparison magnitude differences is 0.024 mag), which makes the comparison to Bell's inequality (thick black curve) remarkable. This is a better fit than either the quantum-mechanical result (thin black curve) or the flat (dashed) line, to within the deviation expected, . Results are similar, with expectedly more scatter, if only -band frames are used (not shown).
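The cross-correlation and binning step can be sketched as follows; the data layout (tuples of coordinates and magnitudes per pair) is hypothetical, and the separation uses the spherical law of cosines:

```python
import math
from collections import defaultdict

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation on the sky in degrees."""
    d1, d2 = math.radians(dec1), math.radians(dec2)
    dra = math.radians(ra2 - ra1)
    cos_s = (math.sin(d1) * math.sin(d2)
             + math.cos(d1) * math.cos(d2) * math.cos(dra))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_s))))

def bin_pair_diffs(pairs, bin_deg=6.0):
    """Average |m1 - m2| in fixed-width bins of angular separation.
    `pairs` holds (ra1, dec1, m1, ra2, dec2, m2) tuples."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ra1, dec1, m1, ra2, dec2, m2 in pairs:
        b = int(ang_sep_deg(ra1, dec1, ra2, dec2) // bin_deg)
        sums[b] += abs(m1 - m2)
        counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}

bins = bin_pair_diffs([(0.0, 0.0, 19.0, 10.0, 0.0, 19.5)])
```

Applying the magnitude and colour-difference cuts before binning, and taking bin standard deviations as error bars, yields the quantities plotted in Figure 11.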
These deviations of differences in object brightnesses are significant relative to the measured errors. A significance limit of 0.10 magnitudes is indicated by a horizontal dotted line in Figure 11. The distribution of these differences is displayed in Figure 12. In the top panel a vertical dotted line shows the limit of 0.10 magnitudes; a thin black curve is a Gaussian of width 0.25 magnitudes, consistent with purely photometric error; a dashed line is the expected difference for a uniform sample of width 1.25 magnitudes. This is intuitively the limit one would expect if the sample were randomly drawn from the same distribution, 1.00 magnitudes wide. Note the four or five instances near differences of about 0.50 magnitudes that are above this line, occurring slightly more often than one would expect from a randomly drawn sample.
With reversed seasons between the Northern and Southern hemispheres, one would expect some seasonal dependence on when objects were observed, and so although they may be selected on the chance they appear in either telescope FoV, this selection will not be uniform over the year. The bottom panel averages, in 6-month-long bins, those cases which were beyond the significance limit from the sample (black). The thin black curve shows the result if half the sample were selected, decaying geometrically back towards the beginning of the window. The yellow circles are the comparisons for the full sample; the green line is the same except sample times have been randomized across the entire 14.5 years. Note that this resampling has no effect on the results displayed in Figure 11; those are absolute values, so the order in which the differences were taken is not relevant, and those have already been cross-correlated for the full sample. It is interesting that there appear to be some periods when it was more probable than random that there would be significant differences between two objects. The long timescales of those suggest that conducting an experiment avoiding such a bias may require repeat samples spanning years.
6. Summary and Future Directions
A clean Bell-inequality test employing quasars at large angular separations requires that the measured fluxes of objects at both far-separated telescopes be above the local noise; if not, this could hide a correlated signal. Fair, random sampling across the sky might sense such a signal, if not photon-by-photon, at least on the timescale of minutes. By mining the full GMOS-North and GMOS-South archive, approximately 30,000 broadband , , and quasar images were found, many repeated; the vast majority are merely serendipitous, targeting an unrelated object in the field. Photometry of each frame, which includes a stellar calibration for uniform data-quality selection, allows a careful analysis of the impact of source colour, variable sky conditions and airmass (sky background, extinction, and image quality) without explicit target or observer bias. Almost 10000 sufficiently deep observations for Gemini GMOS North and South, complete with a nearby unsaturated UCAC4 star, allowed photometry with a global zeropoint uncertainty of about 2% over a sample spanning 14.5 years. A "virtual" test was performed on those data, comprising roughly 2 million observational pairs, which in their aggregate have 0.25-magnitude 1- uncertainty within 6-degree-wide bins. This is sufficient to show a lack of flatness in relative object flux-differences with angular separation, consistent with the form of Bell's inequality.
Although the Gemini sample covers the full sky and the range of possible angular separations between objects, there were not sufficient sources to provide a meaningful comparison restricted to those. A potentially confounding factor may be bandpasses shifting with redshift and the correction to a common colour; a better technique may be spectroscopic, focused on bright emission lines. There was also limited information on how those individual sources (or the calibration stars) may have varied during this time. A subset of the data with significant differences is one output of this work, and provides a baseline from which to compare. Repeat observations of these at higher photometric precision would provide a check on either real, intrinsic correlation between those source fluxes or false, spurious correlation due to unidentified source-selection or instrumental effects. Those were controlled here by the telescopes and instruments being essentially identical, and by blind selection from a prior independent catalog, but a wealth of archival photometry from other telescopes could be added to improve on this result too; multiple cross-calibration may actually serve to reduce zeropoint errors. Future facilities combined with long-term monitoring, such as the Large Synoptic Survey Telescope, will make false correlation via these potential error modes much more difficult to hide.
The current dataset provides only a small number of truly coincident observations (just those cases where two quasars were in the same GMOS FoV), yet that is the goal of this endeavour. It is important to emphasize that fair, unbiased switching in an Earth-wide QM experiment depends on simultaneous control of local noise sources, however the discriminators are set. An attractive aspect of conducting such a QM experiment with Gemini is that these are two premier sites over 10600 km apart, with a combined view stretching 180 degrees across the sky, and so represent a natural path to a definitive ground-based test. Ultimately, the coming era of 30-metre telescopes in both hemispheres is anticipated, with an aperture advantage of about , plus a smaller PSF (and sky-background error) increasing that to , bringing exposure times for quasars down to about a second, not minutes. Thus, low-noise, truly-synchronous photometry could sample timescales (in the quasar restframe) shorter than the round-trip light travel time between telescopes, and so unambiguously exclude any collusion between measurements due to local noise. As neither the emission processes at either source nor the switching decision at either telescope could have influenced the other, the results would have to be pre-determined before the photons left the sources. In short, the experimental outcome would imply a "cosmic conspiracy" dating back nearly of the look-back time for the visible Universe.
- Einstein, Podolsky and Rosen (1935) Einstein, A., Podolsky, B. & Rosen, N. 1935, Physical Review, 47, 777
- Bell (1964) Bell, J.S. 1964, Physics, Vol. 1, No. 3, 195
- Chiaberge et al. (2017) Chiaberge, M., Ely, J.C., Meyer, E.T., Georganopoulos, M., Marinucci, A., Bianchi, S., Tremblay, G.R., Hilbert, B., Kotyla, J.P., Capetti, A., Baum, S.A., Macchetto, F.D., Miley, G., O'Dea, C.P., Perlman, E.S., Sparks, W.B. & Norman, C. 2017, A&A, 600, A57
- Clauser et al. (1969) Clauser, J.F., Horne, M.A., Shimony, A. & Holt, R.A. 1969, Phys. Rev. Lett., 23, 880
- Friedman, Kaiser and Gallicchio (2013) Friedman, A.S., Kaiser, D.I. & Gallicchio, J. 2013, Phys. Rev. D, 88, 044038
- Gallicchio, Friedman and Kaiser (2014) Gallicchio, J., Friedman, A.S. & Kaiser, D.I. 2014, Phys. Rev. Lett., 112, 110405
- Hanbury Brown & Twiss (1956) Hanbury Brown, R. & Twiss, R.Q. 1956, Nature, 178, 1046
- Handsteiner et al. (2017) Handsteiner, J., Friedman, A.S., Rauch, D., Gallicchio, J., Liu, B., Hosp, H., Kofler, J., Bricher, D., Fink, M., Leung, C., Mark, A., Hien, T., Nguyen, H.T., Sanders, I., Steinlechner, F., Ursin, R., Wengerowsky, S., Guth, A.H., Kaiser, D., Scheidl, T. & Zeilinger, A. 2017, Phys. Rev. Lett., 118, 060401
- Kasten (1965) Kasten, F. 1965, A New Table and Approximation Formula for the Relative Optical Air Mass
- Li et al. (2018) Li, M.-H., Wu, C., Zhang, Y. et al. 2018, Phys. Rev. Lett., 121, 080404
- MacLeod et al. (2010) MacLeod, C.L., Ivezić, Ž., Kochanek, C.S., Kozłowski, S., Kelly, B., Bullock, E., Kimball, A., Sesar, B., Westman, D., Brooks, K., Gibson, R., Becker, A.C. & de Vries, W.H. 2010, ApJ, 721, 1014
- Mudd et al. (2018) Mudd, D. et al. 2018, ApJ, 862, 123
- Rauch et al. (2018) Rauch, D., Handsteiner, J., Hochrainer, A. et al. 2018, Phys. Rev. Lett., 121, 080403
- Rosenfeld et al. (2017) Rosenfeld, W., Burchardt, D., Garthoff, R., Redeker, K., Ortegel, N., Rau, M. & Weinfurter, H. 2017, Phys. Rev. Lett., 119, 010402