A self-testing quantum random number generator

Tommaso Lunghi,¹ Jonatan Bohr Brask,² Charles Ci Wen Lim,¹ Quentin Lavigne,¹ Joseph Bowles,² Anthony Martin,¹ Hugo Zbinden,¹ and Nicolas Brunner²
¹Group of Applied Physics, Université de Genève, 1211 Genève, Switzerland
²Département de Physique Théorique, Université de Genève, 1211 Genève, Switzerland

The generation of random numbers is a task of paramount importance in modern science. A central problem for both classical and quantum randomness generation is to estimate the entropy of the data generated by a given device. Here we present a protocol for self-testing quantum random number generation, in which the user can monitor the entropy in real-time. Based on a few general assumptions, our protocol guarantees continuous generation of high quality randomness, without the need for a detailed characterization of the devices. Using a fully optical setup, we implement our protocol and illustrate its self-testing capacity. Our work thus provides a practical approach to quantum randomness generation in a scenario of trusted but error-prone devices.

These authors contributed equally to this work.

Given the importance of randomness in modern science and beyond, e.g. for simulation algorithms and for cryptography, an intense research effort has been devoted to the problem of extracting randomness from quantum systems. Devices for quantum random number generation (QRNG) are now commercially available. All these schemes work essentially according to the same principle, exploiting the randomness of quantum measurements. A simple realization consists in sending a single photon on a 50/50 beam-splitter and detecting the output path Rarity et al. (1994); Stefanov et al. (2000); Jennewein et al. (2000). Other designs were developed, based on measuring the arrival time of single photons Dynes et al. (2008); Wahl et al. (2011); Nie et al. (2014); Stipčević and Rogina (2007), the phase noise of a laser Qi et al. (2010); Uchida et al. (2008); Abellán et al. (2014), vacuum fluctuations Gabriel et al. (2010); Symul et al. (2011), and even mobile phone cameras Sanguinetti et al. (2014).

A central issue in randomness generation is the problem of estimating the entropy of the bits that are generated by a device, i.e. how random the raw output data actually is. When a good estimate is available, appropriate post-processing can be applied to extract true random bits from the raw data (via a classical procedure termed a randomness extractor Nisan and Ta-Shma (1999)). However, poor entropy estimation is one of the main weaknesses of classical RNGs Dodis et al. (2013), and can have important consequences. In the context of QRNG, entropy estimates for specific setups were recently provided using sophisticated theoretical models Frauchiger et al. (2013); Ma et al. (2013). Nevertheless, this approach has several drawbacks. First, these techniques are relatively cumbersome, requiring estimates for numerous experimental parameters which may be difficult to assess precisely in practice. Second, each study applies to a specific experimental setup and cannot be reused for other implementations. Finally, this approach offers no real-time monitoring of the quality of the RNG process, hence no protection against unnoticed misalignment (or even failure) of the experimental setup.

It is therefore highly desirable to design QRNG techniques which can provide a real-time estimate of the output entropy. An elegant solution is provided by the concept of device-independent QRNG Colbeck (2006); Pironio et al. (2010), where randomness can be certified and quantified without relying on a detailed knowledge of the functioning of the devices used in the protocol. Nevertheless, the practical implementation of such protocols is extremely challenging, as it requires a genuine (loophole-free) violation of a Bell inequality Pironio et al. (2010); Christensen et al. (2013). Alternative approaches were proposed Li et al. (2011, 2012), but their experimental implementations suffer from loopholes Dall'Arno et al. (2012). More recently, an approach based on the uncertainty principle was proposed, but it requires a fully characterized measurement device Vallone et al. (2014).

Here, we present a simple and practical protocol for self-testing QRNG. Based on a prepare-and-measure setup, our protocol provides a continuous estimate of the output entropy. Our approach requires only a few general assumptions about the devices (such as quantum systems of bounded dimension), without relying on a detailed model of their functioning. This setting is relevant to real-world implementations of randomness generation, and is well adapted to a scenario of trusted but error-prone providers, i.e. a setting where the devices used in the protocol are not actively designed to fool the user, but where the implementation may be imperfect. The key idea behind our protocol is to certify randomness from a pair of incompatible quantum measurements. As the incompatibility of the measurements can be directly quantified from the experimental data, our protocol is self-testing. That is, the amount of genuine quantum randomness can be quantified directly from the data, and can be separated from other sources of randomness such as fluctuations due to technical imperfections. We implement this scheme with standard technology, using a single-photon source and fibered telecommunication components. We implement the complete QRNG protocol, achieving a rate of 23 certified random bits per second at the chosen confidence level.

Protocol. Our protocol, sketched in Fig. 1, uses two devices which respectively prepare and measure an uncharacterized qubit system. In each round of the protocol, the observer chooses settings among four possible preparations, $x \in \{1,2,3,4\}$, and two measurements, $y \in \{1,2\}$, resulting in a binary outcome $b = \pm 1$. To model imperfections, we represent the internal state of each device by a random variable—$\lambda$ for the preparation device and $\mu$ for the measurement device—which are unknown to the observer. As we work in a scenario where the devices are not maliciously conspiring against the user, we assume the devices to be independent, i.e. $q(\lambda,\mu) = q(\lambda)\,q(\mu)$, where $q(\lambda)$ and $q(\mu)$ denote the distributions of the internal states.

Figure 1: Sketch of the protocol. The self-testing QRNG protocol consists of 3 distinct steps. (1) First, an experiment is performed where, in each round, the user chooses a preparation $x$ and a measurement $y$, and obtains an outcome $b$. (2) From the raw data, the distribution $p(b|x,y)$ can be estimated, leading to an estimate for the value of the witness $W$, from which the entropy of the raw data can be quantified. (3) Based on the entropy bound, appropriate post-processing of the raw data is performed in order to extract the final random bit string.

In each round of the experiment, the preparation device emits a qubit state $\rho_{x,\lambda}$, which depends on the setting $x$ and on the internal state $\lambda$. Similarly, the measurement device performs a measurement $M_{y,\mu}$. Thus the distributions of $\lambda$ and $\mu$ determine the distributions of the prepared states and the measurements. As the observer has no access to the variables $\lambda$ and $\mu$, he will observe

$$p(b|x,y) = \sum_{\lambda,\mu} q(\lambda)\, q(\mu)\, \mathrm{tr}\!\left(\rho_{x,\lambda} M^{b}_{y,\mu}\right) = \frac{1}{2}\left(1 + b\, \vec{S}_x \cdot \vec{T}_y\right).$$

Here, $\vec{S}_x$ and $\vec{T}_y$ denote the Bloch vectors of the (average) states and measurements, i.e. $\sum_\lambda q(\lambda)\,\rho_{x,\lambda} = (\openone + \vec{S}_x \cdot \vec{\sigma})/2$ and similarly for the measurements, where $\vec{\sigma}$ is the vector of Pauli matrices.

The task of the observer is to estimate the amount of genuine quantum randomness generated in this setup, based only on the observed distribution $p(b|x,y)$. This is a nontrivial task, as the apparent randomness of the distribution $p(b|x,y)$ can have different origins. On the one hand, it could be genuine quantum randomness. That is, if in a given round of the experiment the prepared state is not an eigenstate of the measurement operator, then the outcome cannot be predicted with certainty, even if the internal states $\lambda$ and $\mu$ are known, i.e. $p(b|x,y,\lambda,\mu) < 1$. On the other hand, the apparent randomness may be due to technical imperfections, that is, to fluctuations of the internal states $\lambda$ and $\mu$. Consider the following example: the preparation device emits the states $|0\rangle$ and $|1\rangle$ with probability $1/2$ each. For a measurement of the observable $\sigma_z$, one obtains $p(b|x,y) = 1/2$. However, this data clearly contains no quantum randomness, since the outcome can be perfectly guessed if the internal state $\lambda$ is known.
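The classical example above can be made concrete with a short simulation (an illustrative sketch; the round model and function names are ours, not part of the experimental setup):

```python
import random

# Toy version of the example above: the preparation device emits |0> (lambda = 0)
# or |1> (lambda = 1) with probability 1/2, and a sigma_z measurement then returns
# b = +1 for |0> and b = -1 for |1>, deterministically.
def outcome(lam):
    return +1 if lam == 0 else -1

rng = random.Random(42)
lambdas = [rng.randint(0, 1) for _ in range(10_000)]
outcomes = [outcome(lam) for lam in lambdas]

# To an observer ignorant of lambda the outcomes look uniformly random...
freq_plus = outcomes.count(+1) / len(outcomes)

# ...but an observer who knows lambda predicts every outcome perfectly.
accuracy = sum(outcome(lam) == b for lam, b in zip(lambdas, outcomes)) / len(outcomes)
print(freq_plus)   # close to 0.5
print(accuracy)    # exactly 1.0
```

Although the observed frequency is indistinguishable from a fair coin, the guessing probability given the internal state is 1, so no quantum randomness is present.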

Our protocol allows the observer to separate quantum randomness from the randomness due to technical noise. The key technical tool of our protocol is a function recently presented in Bowles et al. (2014), which works as a 'dimension witness'. Given data $p(b|x,y)$, the quantity

$$W = \begin{vmatrix} p(1|1,1) - p(1|2,1) & p(1|3,1) - p(1|4,1) \\ p(1|1,2) - p(1|2,2) & p(1|3,2) - p(1|4,2) \end{vmatrix}$$

captures the quantumness of the preparations and measurements. Specifically, if the preparations are classical (i.e. there exists a basis in which all states are diagonal), one has that $W = 0$, while a generic qubit strategy achieves $W \leq 1$ Bowles et al. (2014). $W > 0$ guarantees that the measurements performed by Bob are incompatible (see the Supplementary Material), and since it is then impossible to simultaneously assign deterministic outcomes to them, this enables us to bound the guessing probability and certify randomness. Given $x$, $y$, and knowledge of the internal states $\lambda$, $\mu$, the best guess for $b$ is simply the most probable outcome. Assuming uniformly distributed $x$ and $y$, the average probability of guessing $b$ correctly fulfils the following inequality (see the Supplementary Material):

$$p_g(W) \leq \frac{1}{2}\left(1 + \sqrt{\frac{1 + \sqrt{1 - W^2}}{2}}\right).$$
Therefore the guessing probability can be upper-bounded by a function of $W$, which can be determined directly from the data $p(b|x,y)$. Finally, to extract random bits from the raw data, we use a randomness extraction procedure. The number of random bits that can be extracted per experimental run is given by the min-entropy $H_{\min} = -\log_2 p_g(W)$ Koenig et al. (2009). Hence $W$ is the relevant parameter for determining how the raw data must be post-processed. Note that randomness can be extracted for any $W > 0$, since $p_g(W) < 1$ in this case.
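The witness evaluation and the entropy bound can be sketched in a few lines (a minimal illustration following the determinant witness of Bowles et al. (2014) and the guessing-probability bound quoted above; function names are ours):

```python
import math

def witness(p):
    """Dimension witness of Bowles et al. (2014): |det| of the 2x2 matrix of
    probability differences, built from p[(x, y)] = p(b=+1|x,y).
    W = 0 for classical preparations; a qubit strategy can reach W = 1."""
    m = [[p[(1, y)] - p[(2, y)], p[(3, y)] - p[(4, y)]] for y in (1, 2)]
    return abs(m[0][0] * m[1][1] - m[0][1] * m[1][0])

def p_guess(w):
    """Upper bound on the average guessing probability as a function of W."""
    return 0.5 * (1 + math.sqrt((1 + math.sqrt(max(0.0, 1 - w ** 2))) / 2))

def h_min(w):
    """Certified min-entropy per round, in bits."""
    return -math.log2(p_guess(w))

# Ideal BB84-like statistics: preparations are the eigenstates of sigma_x and
# sigma_y, measured in the sigma_x and sigma_y bases.
ideal = {(1, 1): 1.0, (2, 1): 0.0, (3, 1): 0.5, (4, 1): 0.5,
         (1, 2): 0.5, (2, 2): 0.5, (3, 2): 1.0, (4, 2): 0.0}
print(witness(ideal))   # 1.0
print(h_min(1.0))       # ~0.2284 bits per round
```

For the ideal statistics the witness reaches its qubit maximum, and the certified min-entropy per round is about 0.2284 bits.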

The maximal value $W = 1$ can be reached using the following set of preparations and measurements: the eigenstates of $\sigma_x$ and $\sigma_y$ as preparations, and $\sigma_x$ and $\sigma_y$ as measurements, which correspond to those of the BB84 QKD protocol Bennett and Brassard (1984). In this case, we can certify randomness with min-entropy $H_{\min} \approx 0.2284$. Using other preparations and measurements, e.g. if the system is noisy or becomes misaligned, one will typically obtain $W < 1$. Nevertheless, for any value $W > 0$, randomness can be certified, and the corresponding min-entropy can be estimated from the bound on $p_g(W)$ above. Our protocol is therefore self-testing, since the evaluation of $W$ allows quantifying the amount of randomness in the data. In turn, this allows one to perform adapted post-processing in order to finally extract true random bits.

To conclude this section, we discuss the assumptions which are required in our protocol:

  1. Choice and distribution of settings. The devices make no use of any prior information about the choice of settings $x$ and $y$.

  2. Internal states of the devices are independent and identically distributed (i.i.d.). The distributions $q(\lambda)$ and $q(\mu)$ do not vary between experimental rounds.

  3. Independent devices. The preparation and measurement devices are independent, in the sense that $q(\lambda,\mu) = q(\lambda)\,q(\mu)$.

  4. Qubit channel capacity. The information about the choice of preparation retrieved by the measurement device (via a measurement on the mediating particle) is contained in a 2-dimensional quantum subspace (a qubit).

Assumptions (i) and (iii) are arguably rather natural in a setting where the devices are produced without malicious intent. They concern the independence of the devices used in the protocol, namely the preparation and measurement devices, and the choice of settings. When these are produced by trusted (or simply different) providers, it is reasonable to assume that there are no (built-in) pre-established correlations between the devices and that the settings can be generated independently, e.g. using a pseudo-RNG. Assumptions (ii) and (iv) are stronger, and have to be justified for the particular implementation at hand. The content of assumption (ii) is essentially that the devices are memoryless (internal states do not depend on previous events). We believe this assumption can likely be weakened, since randomness can in fact be guaranteed in the presence of certain memory effects, in particular the experimentally relevant afterpulsing effect (see the Supplementary Material). Finally, note that assumption (iv) restricts the amount of information about $x$ that is retrieved by the measuring device (via a measurement on the mediating particle), but not the information about $x$ contained in the mediating particle itself. In other words, it might be the case that information about $x$ leaks out from the preparation device via side-channels, but we assume that these side-channels are not maliciously exploited by the measurement device.
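Under these assumptions, the round-by-round statistics of the protocol can be sketched with a small Monte-Carlo simulation (illustrative only: outcomes are sampled from the Born-rule probabilities $p(b=+1|x,y) = (1 + \vec{S}_x \cdot \vec{T}_y)/2$, and the angle `theta` between Bob's two measurement bases is a hypothetical noise parameter; for this ideal qubit strategy the witness equals $\sin\theta$):

```python
import math
import random

def bloch(phi):
    """Unit Bloch vector at angle phi in the equatorial (x-y) plane."""
    return (math.cos(phi), math.sin(phi), 0.0)

def born(s, t):
    """Born rule for Bloch vectors: p(b=+1) = (1 + S.T)/2."""
    return 0.5 * (1 + sum(si * ti for si, ti in zip(s, t)))

def simulate(n_rounds, theta, seed=1):
    """Estimate the witness from simulated rounds; theta is the (hypothetical)
    angle between Bob's two measurement bases (ideally pi/2, giving W = 1)."""
    rng = random.Random(seed)
    preps = [bloch(a) for a in (0.0, math.pi, math.pi / 2, -math.pi / 2)]  # D, A, R, L
    meas = [bloch(0.0), bloch(theta)]
    counts = {(x, y): [0, 0] for x in range(4) for y in range(2)}
    for _ in range(n_rounds):
        x, y = rng.randrange(4), rng.randrange(2)
        b = 1 if rng.random() < born(preps[x], meas[y]) else 0
        counts[(x, y)][b] += 1
    freq = {k: v[1] / max(1, sum(v)) for k, v in counts.items()}
    rows = [[freq[(0, y)] - freq[(1, y)], freq[(2, y)] - freq[(3, y)]] for y in range(2)]
    return abs(rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0])

print(simulate(100_000, theta=math.pi / 2))  # close to the qubit maximum W = 1
print(simulate(100_000, theta=math.pi / 3))  # a smaller basis angle lowers W toward sin(theta)
```

Note that rotating all preparations and measurements together leaves the determinant unchanged, so only a relative misalignment (such as a reduced basis angle, modelled here) lowers the witness.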

Experiment. We implemented the above protocol using a fully guided optical setup (see Fig. 2(a)). The qubit preparations are encoded in the polarization state of single photons, generated via a heralded single-photon source based on a continuous-wave spontaneous parametric down-conversion (SPDC) process in a periodically poled lithium niobate (PPLN) waveguide Tanzilli et al. (2012). The idler photon is detected with an ID220 free-running InGaAs/InP single-photon detector (SPD) serving as herald, with 20% detection efficiency and 20 µs dead time. The polarization is rotated using a polarization controller (PC) and an electro-optical birefringence modulator (BM) based on a lithium niobate waveguide phase modulator. The four preparations correspond respectively to the diagonal (D), anti-diagonal (A), circular-right (R) and circular-left (L) polarization states. For the measurement device, polarization measurements are performed using a BM and a PC followed by a polarization beam splitter (PBS) and two ID210 InGaAs/InP SPDs (with a 1.5 ns gate and 25% detection efficiency), triggered by a detection at the heralding detector. The two measurements correspond respectively to the {D,A} basis and the {R,L} basis. The number of photon pairs generated by the SPDC source is set to obtain a heralding count rate in the kHz range, corresponding to a fixed probability of single-photon emission per gate and a small probability of two-photon emission per gate. A field-programmable gate array (FPGA) board continuously generates sequences of 3 pseudo-random bits. Upon successful heralding, these 3 bits are used to choose the settings $(x, y)$. Finally, the FPGA records the outcome $b$ (whether each ID210 detector has clicked or not).

Figure 2: Implementing the self-testing QRNG. (a) Experimental setup. (b) Real-time evolution of the witness value (blue) and randomness generation rate (bits extracted per second; red). After 3 hours, the air conditioning in the laboratory is switched off, which leads to misalignment of the optical components. In turn, this leads to a significant drop of the witness value and corresponding entropy.

We briefly discuss to what extent the assumptions of the protocol fit our implementation. First, the choices of preparation and measurement, $x$ and $y$, are made by the FPGA using a linear-feedback shift-register pseudo-RNG. This RNG provides a deterministic cyclic function sampled by the heralding detector. Since the sampling is asynchronous with respect to the RNG rate, the output is uniform and (i) is fulfilled. The BMs are separated spatially by 1 m, their temperatures are controlled independently, and the voltages are applied with independent electronic circuits. Any cross-talk between them, e.g. due to stray electric fields, can be safely neglected, hence (iii) is also fulfilled. Concerning assumption (ii), we evaluate the distribution $p(b|x,y)$ after every minute of acquisition. Therefore, we need to consider memory effects with time characteristics shorter than 1 minute. Two main effects should be considered: charge accumulation in the birefringence modulator, and afterpulsing in the detectors, which is a common issue in standard QRNG approaches Dynes et al. (2008); Frauchiger et al. (2013). Importantly, our protocol is robust to afterpulsing (see the Supplementary Material). Charge effects in the modulator are relevant only for modulation slower than 1 Hz Wooten et al. (2000). Finally, the qubit assumption (iv) is arguably the most delicate one. As the choice of preparation is encoded in the polarization of a single photon, (iv) seems justified. However, a small fraction of heralded events corresponds to multi-photon pulses, for which (iv) is not valid. To take these events into account, we extend our theoretical analysis (see the Supplementary Material). We show that quantum randomness can still be guaranteed even when (iv) is not fulfilled in all experimental events, provided that the fraction of events violating (iv) can be bounded and is small enough compared to the total number of successful events.
To verify this assumption, the probability of single- and multi-photon pulses must be properly calibrated. For our single-photon source, the ratio of multi-photon events to heralds is small, and our method can be applied.

We ran the experiment, estimating $W$ for the data accumulated each minute. As discussed in the Supplementary Material, the estimation of $W$ takes finite-size effects into account, and the size of the randomness extractor is determined based on the value of $W$ Troyer and Renner (2012); Frauchiger et al. (2013). In the best conditions, our setup generates about 402 bits/s of raw data (before the extractor), with a correspondingly high witness value. After extraction, we obtain final random bits at a rate of 23 bits/s at the chosen confidence level. Note that the confidence level is set when accounting for finite-size effects; a higher confidence can be chosen at the expense of a lower rate. Note also that this rate is limited by the slow repetition rate of the experiment (limited by the dead time of the heralding detector) and by the losses in the optical implementation (limited channel transmission and total detection efficiency). Fig. 2(b) shows the estimated value of $W$ over 3.5 hours and the rate at which the final random bits are generated. To demonstrate the self-testing capacity of our protocol, we switched off the air conditioning in the room after 3 hours, which impacts the alignment of the setup. As can be seen from Fig. 2(b), the witness value drops, reflecting the fact that the distributions of the internal states ($q(\lambda)$ and $q(\mu)$) changed. In turn, this forces us to perform more post-processing, resulting in a lower randomness generation rate. Nevertheless, the quality of the final random bits is still guaranteed. This shows that our setup can warrant the generation of high-quality randomness, without active stabilization or precise modelling of the impact of the temperature increase.
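As an illustration of the adaptive post-processing step, the sketch below sizes and applies a two-universal (Toeplitz-hashing) extractor from a witness estimate. This is a stand-in, not the exact extractor construction of Troyer and Renner (2012) used in the experiment, and it deliberately ignores the finite-size penalty discussed in the Supplementary Material; all parameter values are hypothetical:

```python
import math
import random

def h_min(w):
    """Certified min-entropy per raw bit from the witness value, using the
    guessing-probability bound quoted in the main text."""
    pg = 0.5 * (1 + math.sqrt((1 + math.sqrt(max(0.0, 1 - w ** 2))) / 2))
    return -math.log2(pg)

def toeplitz_extract(raw, m, seed):
    """Two-universal hashing with an m x n Toeplitz matrix T[i][j] = seed[i - j + n - 1],
    defined by m + n - 1 seed bits; output bit i is XOR_j of T[i][j] & raw[j]."""
    n = len(raw)
    assert len(seed) == m + n - 1
    out = []
    for i in range(m):
        bit = 0
        for j in range(n):
            bit ^= seed[i - j + n - 1] & raw[j]
        out.append(bit)
    return out

rng = random.Random(0)
raw = [rng.randint(0, 1) for _ in range(1000)]    # one block of raw outcomes
w_est = 0.8                                       # hypothetical witness estimate
m = int(len(raw) * h_min(w_est))                  # extractable bits (finite-size penalty ignored)
seed = [rng.randint(0, 1) for _ in range(m + len(raw) - 1)]
final = toeplitz_extract(raw, m, seed)
print(m, len(final))
```

A lower witness estimate shrinks `m`, i.e. more raw bits are consumed per final bit, which is exactly the behaviour seen in Fig. 2(b) when the setup drifts.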

The quality of the generated randomness can be assessed by checking for patterns and correlations in the extracted bits. We performed standard statistical tests, as defined by NIST, and although not all tests could be performed due to the small size of the sample, all performed tests were successful (see the Supplementary Material). We stress that these tests do not constitute a proof of randomness (which is impossible); however, failure to pass any of them would indicate the presence of correlations among the output bits.
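As an example of such a check, the NIST SP 800-22 frequency (monobit) test can be sketched in a few lines (a simplified illustration, not the full test suite applied to our data):

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits {0,1} -> {-1,+1}, sum,
    and normalise by sqrt(n); under the hypothesis of uniform randomness the
    p-value is erfc(s_obs / sqrt(2)). The conventional pass threshold is 0.01."""
    n = len(bits)
    s_obs = abs(sum(2 * b - 1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

print(monobit_pvalue([0, 1] * 5000))  # perfectly balanced string: p-value = 1.0
print(monobit_pvalue([1] * 10_000))   # heavily biased string: p-value ~ 0 (fails)
```

As the text notes, passing such a test is necessary but not sufficient: the alternating string above passes the monobit test while obviously failing others (e.g. the runs test).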

Finally, we comment on the influence of losses. In the above analysis, we discarded inconclusive events in which the photon was not detected at the measuring device, although the emission of a single photon was heralded by the source. Therefore, our analysis is subject to an additional assumption, namely that of fair sampling, which we believe is rather natural in the case of non-malicious devices. Note, however, that this is not strictly necessary, as our protocol is in principle robust to arbitrarily low detection efficiency Bowles et al. (2014). Performing the data analysis without the fair-sampling assumption (in which case the inconclusive events are attributed the outcome $b = -1$), we obtain a reduced witness value and correspondingly lower min-entropy. In this case, the rate for generating random bits drops considerably, but importantly does not vanish. Hence, our setup can be used to certify randomness without requiring the fair-sampling assumption. We note that even a small increase in efficiency would lead to a large improvement in rate, and a sufficiently high overall efficiency would match our current post-selected rate without any post-selection.

Conclusion. We have presented a protocol for self-testing QRNG, which allows for real-time monitoring of the entropy of the raw data. This allows adapting the randomness extraction procedure in order to continuously generate high quality random bits. Using a fully optical guided implementation, we have demonstrated that our protocol is practical and efficient, and illustrated its self-testing capacity. Our work thus provides an approach to QRNG, which can be viewed as intermediate between the standard (device-dependent) approach and the device-independent one.

Compared to the device-dependent approach, our protocol delivers a stronger form of security requiring less characterization of the physical implementation, at the price of a reduced rate compared to commercial QRNGs such as the ID Quantique QUANTIS, which reaches 4 Mbit/s. A fully device-independent approach Colbeck (2006); Pironio et al. (2010), on the other hand, offers even stronger security (in particular, assumptions (ii)-(iv) can be relaxed, hence offering robustness to side-channels and memory effects), but its practical implementation is extremely challenging. Proof-of-principle experiments required state-of-the-art setups and could achieve only very low rates Pironio et al. (2010); Christensen et al. (2013). Our approach arguably offers a weaker form of security, but can be implemented with standard technology. Our work considers a scenario of trusted but error-prone devices, which we believe to be relevant in practice.

Note added. After submission of this work, several related works have appeared Mitchell et al. (2015); Cañas et al. (2014); Haw et al. (2014).

Acknowledgements. We thank Antonio Acin, Stefano Pironio, Valerio Scarani, and Eric Woodhead for discussions; Raphael Houlmann and Claudio Barreiro for technical support; Battelle and ID Quantique for providing the PPLN waveguide. We acknowledge financial support from the Swiss National Science Foundation (grant PP00P2_138917 and QSIT), SEFRI (COST action MP1006) and the EU project SIQS.

Supplementary material

In this Supplementary material we provide a proof of randomness for our protocol along with the required assumptions in Sec. A. We show that our protocol is robust to detector afterpulsing in Sec. B. We show how to account for multi-photon events in Sec. C, and we account for finite-size effects in Sec. D. Finally, we discuss statistical tests applied to the output data.

Appendix A Proof of randomness

Here we provide a lower bound on the randomness in the observed output using the dimension witness of Ref. Bowles et al. (2014). The devices are assumed to be independent, but each device features an internal source of randomness, represented by the variable $\lambda$ for Alice and the variable $\mu$ for Bob. Our goal is to upper bound the probability of guessing the output that one would have if $\lambda$ and $\mu$ were known, averaged over all inputs and values of the local random variables. Before proceeding with the proof, we first establish the setting in which we will work and state the assumptions made.

a.1 Setting and assumptions

A priori, the probability of observing a certain output in a given round of the experiment could depend on everything that happened before, and later events could be correlated with the observation of a certain output. However, we will introduce several assumptions which ensure that we can speak about output probabilities without referring to specific rounds, as well as the independence of the devices. Let us associate random variables $B_i$, $X_i$, $Y_i$, $\Lambda_i$, $M_i$ with the output, the inputs, and the internal variables in round $i$, and let us write $\bar{B} = (B_1, \ldots, B_n)$ for the set of variables over all $n$ rounds, etc. Also, let us denote the probabilities for the random variables to take on specific values by lower-case symbols, e.g. $p(b_i)$ for $P(B_i = b_i)$.

Our first assumption is that all inputs are independent of each other and of the devices. Formally, $X_i$ is independent of $X_j$ for any $j \neq i$ and of $\bar{Y}$, $\bar{\Lambda}$, $\bar{M}$, and similarly for $Y_i$. Our second assumption is that the output in a given round depends only on the inputs in that round and the current state of the devices. Formally, $B_i$ is conditionally independent of all other variables given $X_i$, $Y_i$, $\Lambda_i$, and $M_i$. Our third assumption is that the devices do not record the outputs. Formally, $\bar{\Lambda}$ and $\bar{M}$ are independent of $\bar{B}$. Under these assumptions, the probability for a certain string of outputs to occur factorises:

$$p(\bar{b}|\bar{x},\bar{y},\bar{\lambda},\bar{\mu}) = \prod_{i=1}^{n} p(b_i|x_i, y_i, \lambda_i, \mu_i).$$
This can be seen by repeated application of Bayes' rule. The probability to correctly guess the output string knowing all the inputs and internal variables in an experiment with $n$ rounds is

$$p_g^{(n)} = \sum_{\bar{x},\bar{y},\bar{\lambda},\bar{\mu}} p(\bar{x},\bar{y},\bar{\lambda},\bar{\mu}) \prod_{i=1}^{n} \max_{b_i} p(b_i|x_i, y_i, \lambda_i, \mu_i),$$
and it follows that


We now assume that the distribution of the internal randomness is fixed for the duration of the experiment. Formally, the $\Lambda_i$ are identically distributed, and the $M_i$ as well. With this assumption, each factor in the product above is the same, and the sum in the last line above is equivalent to averaging over the inputs and internal variables, that is, it equals


With the final assumption that the devices are independent, formally that the $\Lambda_i$ are independent of the $M_i$, it follows from our proof below that this quantity is bounded by a function $f$ of the observed witness value $W$. This implies that in the limit of large $n$

$$\left(p_g^{(n)}\right)^{1/n} \leq f(W),$$
and hence the entropy per bit in the output string is bounded by

$$\frac{H_{\min}}{n} = -\frac{1}{n}\log_2 p_g^{(n)} \geq -\log_2 f(W).$$
We have assumed that the internal random variables are identically distributed in every round. On the physical level, the corresponding requirement is that any external parameters which influence the distributions $q(\lambda)$, $q(\mu)$, such as e.g. temperature, vary slowly on the time-scale of one experimental run, i.e. the time required to gather enough data to estimate the witness value $W$. In our experimental implementation this time-scale is about one minute. Between different experimental runs there is no requirement for $q(\lambda)$, $q(\mu)$ to stay unchanged. We have also assumed that the internal variables are independent of the outputs. Note, however, that we believe these assumptions can be relaxed. For example, detector afterpulsing breaks the second assumption, but randomness can nevertheless be certified in our protocol, as demonstrated in Sec. B.

a.2 Proof

Having established the above assumptions, we can now go ahead with our randomness proof without reference to any specific round of the experiment, i.e. we can work just with the distribution $p(b|x,y,\lambda,\mu)$. For given inputs $x$, $y$ and internal states $\lambda$, $\mu$, the guessing probability for this distribution is

$$p_g(x,y,\lambda,\mu) = \max_b p(b|x,y,\lambda,\mu).$$
The average guessing probability is the average of $p_g(x,y,\lambda,\mu)$ over the distribution of inputs and local randomness. To proceed, however, we will first derive an upper bound on $p_g(\lambda,\mu)$, defined to be the average over the inputs only.

Figure 3: Cut through the Bloch sphere showing the measurements of Bob and a state lying in the same plane. The probabilities of outcome, say, $b = +1$ are given by the projections of the state onto the measurement directions. The probabilities when the state makes an angle $\theta/2$ with each measurement are indicated. To maximise the average of these, one must choose the state lying in the middle between the two measurements. Note that choosing a state out of the plane of the measurements can only decrease the guessing probability.

We consider the witness $W$ of the main text. We thus have four preparations, $x \in \{1,2,3,4\}$, and two measurements, $y \in \{1,2\}$. Consider choices of preparations and measurements which are uniformly random (as explained in the main text, pseudorandomness is sufficient here), i.e. each combination $(x,y)$ occurs with probability $1/8$. We have that

$$p_g(\lambda,\mu) \leq \max_x \frac{1}{2} \sum_y \max_b p(b|x,y,\lambda,\mu) \leq \frac{1}{2}\left(1 + \cos\frac{\theta}{2}\right), \qquad (13)$$

where $\theta$ denotes the angle between Bob's two measurements. The reasoning of the derivation is as follows. The best guessing probability averaged over the inputs of Alice is bounded by the maximum over her inputs. This gives the first inequality and allows us to focus on the best possible state that Alice can send. Next, Bob has two measurements described by Bloch vectors $\vec{T}_1$, $\vec{T}_2$, and $\theta$ is the angle between them. The best guessing probability averaged over his inputs is obtained by sending a state which lies in the middle between his measurements on the Bloch sphere (see Fig. 3). For such a state, the outcome probabilities for the two values of $b$ are $\frac{1}{2}(1 + \cos(\theta/2))$ and $\frac{1}{2}(1 - \cos(\theta/2))$. Choosing the larger value and using the double-angle formula, one arrives at the second inequality.

Now we use the fact that a bound on the angle $\theta$ can be derived from the witness value for fixed local randomness $\lambda$, $\mu$. One has that (see Bowles et al. (2014))

$$W(\lambda,\mu) \leq \sin\theta. \qquad (14)$$

For maximally anti-commuting measurements, we get $\theta = \pi/2$. Combining (13) and (14), we get

$$p_g(\lambda,\mu) \leq \frac{1}{2}\left(1 + \sqrt{\frac{1 + \sqrt{1 - W(\lambda,\mu)^2}}{2}}\right) \equiv f\big(W(\lambda,\mu)\big). \qquad (15)$$

We note that the function $f$ is concave and decreasing.

Next, we establish the following convexity property of the witness (in a slight abuse of notation, $W$ denotes the observed value of the witness when $\lambda$, $\mu$ are not known):

$$W \leq \sum_{\lambda,\mu} q(\lambda)\, q(\mu)\, W(\lambda,\mu). \qquad (16)$$
To see that this holds, consider the entries of the matrix defining $W$. They are of the form $p(1|x,y) - p(1|x',y)$. When the devices have internal randomness, we can write


where $\rho_{x,\lambda}$ are the states produced by Alice's box, $M_{y,\mu}$ are the projection operators of Bob corresponding to outcome $+1$, $\vec{S}_{x,\lambda}$ is the Bloch vector corresponding to $\rho_{x,\lambda}$, and the relevant entry involves the difference of the Bloch vectors for $x$ and $x'$ (see Bowles et al. (2014)). Now, from Bowles et al. (2014) it follows that


where $\alpha$ denotes the angle between the corresponding Bloch vectors. Next we notice that, for fixed $\lambda$, there will be a value of $\mu$ for which the expression is maximal. If we label this value $\mu^*$ and set $q(\mu^*) = 1$, this can only increase the expression. We thus obtain:


Using a similar argument, we can eliminate $\lambda$:


We are now ready to bound the guessing probability $p_g$. Using the definition of $p_g$, (15), and (16), we have


where in the third line we have used Jensen's inequality and the concavity of $f$, and in the last line we have used that $f$ is decreasing. Hence, we finally get


which gives the desired upper bound on the guessing probability as a function of the observed value of the witness $W$. This bound is tight when maximal violation of the witness is achieved, i.e. $W = 1$. In Sec. D, we provide the calculation for the maximum number of extractable random bits.

Finally, we provide a proof of the relation between and the commutativity of the measurements. We write , and we have


where we have used (14) and (16).

Appendix B Certifying randomness in the presence of afterpulsing

In the following we show that although afterpulsing a priori violates the i.i.d. assumption (ii), the self-testing nature of our protocol captures the effect. When afterpulsing is present, the witness value is reduced correspondingly, and randomness can still be certified.

To see this, we first consider a hypothetical experiment in which the outputs are generated as follows: in a fraction $1-\delta$ of the events, the experiment follows an ideal quantum qubit implementation, while for the remaining fraction $\delta$ an outcome is generated at random by the measurement device, determined only by some internal random variable independent of the inputs. Let us denote by $W_D$ the witness value computed from the whole dataset $D$, and by $W_Q$ the value which would be obtained from only the quantum events $Q$. To an observer who does not know which events are which, the non-quantum events look just like uniform noise, and the witness values fulfil $W_D = (1-\delta)^2 W_Q$ Bowles et al. (2014). At the same time, this scenario meets all of the assumptions in the proof of randomness of Sec. A. Therefore, for an observer with perfect knowledge of which events are non-quantum, who can hence perfectly predict the output for those events, the guessing probability on the whole dataset is bounded by


We now show that the witness value is reduced in a similar way for afterpulsing, and hence even if the outputs from afterpulsing events can be perfectly predicted, our bound on the randomness still holds.

Consider an experiment generating a set $D$ of events. The first thing to notice is that afterpulsing is probabilistic: in any given event either there is an afterpulse or there is not. We can therefore think of $D$ as consisting of a set $Q$ of events with no afterpulse and additional afterpulsing events $A$. Let $N_D(b,x,y)$ denote the number of events in $D$ with outcome $b$ and inputs $x$, $y$, and $N_Q(b,x,y)$ the corresponding number of events in $Q$, and define $N_A(b,x,y)$ similarly. For simplicity, let us consider the limit of large $N$ such that finite-size effects can be neglected. Since the inputs are chosen uniformly, each combination of $x$, $y$ occurs equally often. We note that the probability for an afterpulse to occur in a given round of the experiment does not depend on the inputs $x$, $y$ in that round. The number of afterpulses is therefore the same for all combinations of $x$, $y$. In any afterpulsing event, the outcome is also uncorrelated with the inputs $x$, $y$ in that round (since the afterpulse was triggered by an earlier round). This means that the effect of afterpulsing when counting events can be written


where, importantly, is independent of (also of and indeed it may be independent of , but this is not important in the following).

The witness value W on the dataset S is computed from the frequencies p(b|x,y) = n(b,x,y)/n(x,y). Using the above, we can write

    p(b|x,y) = (8/N) [ n_Q(b,x,y) + a(b) ] = (N_Q/N) p_Q(b|x,y) + (8/N) a(b),

where p_Q(b|x,y) is the frequency one would have obtained considering only the set S_Q. Now, since the last term above is independent of x and y, and since the witness is computed solely from terms of the form p(1|x,y) - p(1|x',y), we have that

    W = (N_Q/N)² W_Q,

where W_Q is the witness value which one would obtain from the N_Q events without afterpulsing. Since the reduction in W when afterpulses are added is exactly the same as in the scenario above, where events with perfectly predictable outputs were added, it follows that even if afterpulse events were perfectly predictable, the bound (30) on the guessing probability still holds.
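The claim that extra events with input-independent outcomes rescale the witness can be checked numerically, assuming the determinant form of the dimension witness of Bowles et al. (2014); the probabilities, event numbers and afterpulse bias below are illustrative, not the experimental ones:

```python
import numpy as np

def witness(p):
    # Dimension witness of Bowles et al. (2014): determinant of the 2x2
    # matrix of probability differences p(1|x,y) - p(1|x',y).
    M = np.array([[p[0, 0] - p[1, 0], p[2, 0] - p[3, 0]],
                  [p[0, 1] - p[1, 1], p[2, 1] - p[3, 1]]])
    return np.linalg.det(M)

# Ideal BB84 probabilities p(1|x,y); rows x = 0..3 (D, A, R, L), columns y = 0, 1
pQ = np.array([[1.0, 0.5],
               [0.0, 0.5],
               [0.5, 1.0],
               [0.5, 0.0]])

NQ, NAP = 90000, 10000        # events without / with afterpulse (illustrative)
q = NQ / (NQ + NAP)
bias = 0.7                    # afterpulse outcome bias, independent of x and y
p_obs = q * pQ + (1 - q) * bias   # observed frequencies including afterpulses

print(witness(pQ))                         # 1.0 for ideal BB84
print(witness(p_obs), q**2 * witness(pQ))  # equal: witness reduced by q^2
```

Because the input-independent term drops out of every difference p(1|x,y) - p(1|x',y), only the overall rescaling of the difference matrix survives, and the determinant picks up the factor q² exactly.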

Appendix C Accounting for multi-photon events

For real-world sources it is challenging to guarantee that they are of qubit nature. In particular, single-photon sources based on the spontaneous parametric down-conversion (SPDC) process, as well as weak coherent sources, have a non-zero probability of emitting more than one photon, violating the qubit assumption.

Given an imperfect source which does not always satisfy the qubit assumption, we would like to say something about the witness violation corresponding to events that do satisfy the assumption. In particular, we would like a lower bound on this violation in terms of the observed, experimental probability distribution and some guarantee on the fraction of non-qubit events. Even without a detailed model of the source, it is possible to determine this fraction e.g. using knowledge of the photon statistics.

C.1 Bounding the violation for a given qubit fraction

To derive a bound on the quantum violation, we will assume that each experimental round either satisfies the qubit assumption or not. That is, the conditional probability distribution for the experiment can be modelled as

    p(b|x,y) = λ p_Q(b|x,y) + (1 - λ) p_NQ(b|x,y),     (34)

where λ is the fraction of qubit events, p_Q is the distribution corresponding to the qubit events, and p_NQ is an unrestricted distribution. The witness value is given in terms of the probabilities by W = det M, where

    M = ( p(1|0,0) - p(1|1,0)   p(1|2,0) - p(1|3,0) )
        ( p(1|0,1) - p(1|1,1)   p(1|2,1) - p(1|3,1) ).
From the model (34), it follows that the expected witness value must satisfy

    W = λ² W_Q + (1 - λ)² W_NQ + λ(1 - λ) W_X,     (36)

where W_Q and W_NQ are the determinants corresponding to the distributions p_Q and p_NQ respectively, and W_X is the cross term arising from expanding the determinant of the mixture,

    W_X = a_11 b_22 + a_22 b_11 - a_12 b_21 - a_21 b_12,

with a_ij and b_ij the entries of the difference matrices of p_Q and p_NQ.
To bound the qubit violation W_Q for a given expected observed violation W, we should minimise W_Q subject to the constraint (36). However, if a certain value of W can be attained for a fixed value of W_Q, then attaining all smaller values of W requires even less qubit violation. We may therefore just as well look for the maximal W for fixed W_Q. Any observed value above this maximum guarantees a qubit violation of at least W_Q. The maximum has a simple form. It is given by


The first thing we notice is that when the maximum in (37) is less than 1, it is always given by the first line. This is the relevant case for certifying randomness in practice. Solving for the qubit violation, given an observed violation W less than unity, we have the bound


Second, we note that below a certain qubit fraction the maximum (37) is always larger than 1. This means that, to be able to certify randomness in practice, we need a minimal fraction of events satisfying the qubit assumption of


Third, for a given value of λ there is a minimal observed violation below which the bound (38) becomes trivial and no randomness can be certified. We must have


C.2 Estimating the qubit fraction

For an implementation with a particular source, we need an estimate of, or a lower bound on, the fraction λ of qubit events. Source and detector inefficiency, as well as transmission losses, lead to inconclusive events, and our estimate of λ should be consistent with how these events are dealt with.

In the scenario of non-malicious, error-prone devices considered here, it is rather natural to discard inconclusive events (e.g. assuming fair sampling) and then compute the witness from the remaining data. To be able to evaluate (38) in this case, one needs to estimate λ when inconclusive events are discarded. It is also natural to assume that all events with at most one photon emitted obey the qubit assumption.

With these assumptions, let p₁ denote the probability for the source to emit at most one photon, and consider an experiment with N events and n conclusive events. Before post-selection, asymptotically the fraction of events that obey the qubit assumption is then p₁. For a finite number of events, we can put a conservative estimate, i.e., a lower bound, on the number N₁ of events that satisfy the qubit assumption, within a given confidence. In particular, under the assumption that we know p₁, the behaviour of the source is modelled by a family of Bernoulli trials parameterised by p₁, and thus the estimation problem can be solved by using the Chernoff-Hoeffding tail inequality. More formally, let ε_est be the failure probability of the estimation process and δ be the margin parameter; then

    Pr[ N₁ ≤ (p₁ - δ) N ] ≤ exp(-2δ²N) = ε_est,     (41)

which implies that N₁ > (p₁ - δ)N is true with probability at least 1 - ε_est. Equivalently, the fraction of qubit events without post-selection is at least p₁ - δ with probability at least 1 - ε_est. The margin parameter can be expressed in terms of ε_est and N as δ = sqrt( ln(1/ε_est) / (2N) ).

To account for post-selection, we conservatively assume that all multi-photon events are conclusive. Asymptotically, the fraction of non-qubit events after post-selection will be (1 - p₁)N/n, so λ = 1 - (1 - p₁)N/n. For finite N we have that after post-selection

    λ ≥ 1 - (1 - p₁ + δ) N/n

with probability at least 1 - ε_est, with δ and ε_est given by (41).
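Evaluating this bound is straightforward; a minimal sketch, where p₁, the event numbers and the failure probability are illustrative placeholders rather than the experimental values:

```python
import math

def hoeffding_margin(eps_est, N):
    # Margin delta such that Pr[N1 <= (p1 - delta) N] <= exp(-2 delta^2 N) = eps_est
    return math.sqrt(math.log(1 / eps_est) / (2 * N))

p1 = 0.999            # probability of emitting at most one photon (illustrative)
N, n = 10**8, 10**7   # total and conclusive events (illustrative)
delta = hoeffding_margin(1e-10, N)

# Conservative qubit fraction after post-selection, assuming all
# multi-photon events are conclusive
lam = 1 - (1 - p1 + delta) * N / n
print(delta, lam)
```

Note that post-selection amplifies the non-qubit fraction by the factor N/n, which is why a highly sub-Poissonian source (large p₁) is needed when losses are significant.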

Appendix D Security Analysis

In this section, we show that with the observed experimental statistics it is possible to provide a bound on the number of random bits that can be extracted from the raw data set Z, which takes values from the set of all binary strings of length N. Our approach essentially uses the (quantum) leftover hash lemma, which states that the amount of private randomness is approximately equal to the min-entropy of the raw data Z. More specifically, it says that the number ℓ of extractable random bits (that are independent of the variables X, Y and C) is roughly given by ℓ ≈ H_min(Z|X,Y,C). Here, we recall that the variables X and Y are the inputs of Alice and Bob, respectively, and C is the classical register capturing all information about the local variables of the devices. The min-entropy of Z given C has a clear operational meaning when cast in terms of the guessing probability p_g(Z|C), i.e., H_min(Z|C) = -log₂ p_g(Z|C): it measures the probability of correctly guessing Z when given access to the classical side-information C.

On a more concrete level, the leftover hash lemma employs a family of universal hash functions to convert Z into an output string K (of size ℓ) that is close to a uniform string conditioned on the side-information C. In particular, we say that the output string K is ε-close to uniform conditioned on C if

    (1/2) Σ_{k,c} | P_{KC}(k,c) - U_K(k) P_C(c) | ≤ ε,

where U_K is the uniform distribution over K. The quality ε of the output string is directly related to the number of extractable random bits, i.e.,

    ℓ = H_min(Z|C) - 2 log₂(1/ε).
Therefore, to bound ℓ, we only need to fix a security level ε and find a lower bound on the min-entropy term. Using the definition of the conditional min-entropy and the assumption that Z is generated from an iid process, we have

    H_min(Z|C) ≥ -N log₂ P(W),

where P(W) is the single-round bound on the guessing probability evaluated at the expected witness value W.
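Combining a per-round guessing-probability bound with the leftover hash lemma gives the extractable length directly; a minimal sketch, where the per-round guessing probability, block size and security level are hypothetical placeholders, not the paper's values:

```python
import math

def extractable_bits(N, p_guess, eps):
    # Leftover hash lemma: ell = H_min - 2 log2(1/eps),
    # with H_min = -N log2(p_guess) for N iid rounds
    return -N * math.log2(p_guess) - 2 * math.log2(1 / eps)

N, p_guess, eps = 10**9, 0.91, 1e-10   # illustrative values
ell = extractable_bits(N, p_guess, eps)
print(ell / N)   # extraction rate in bits per raw bit
```

For large N the 2 log₂(1/ε) penalty is negligible, so the rate is dominated by the min-entropy rate -log₂ p_guess per round.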
Accordingly, the rate of extraction is R = ℓ/N, and it converges to the min-entropy rate when N → ∞ (and therefore ℓ → H_min(Z|C)). At the moment, our bound on ℓ is written in terms of the expected value of W, which is not directly accessible in the experiment. In order to relate the expected W to the set of experimental statistics, we first use the Chernoff-Hoeffding tail inequality Hoeffding (1963), which provides an upper bound on the probability that the sum of random variables deviates from its expected value. We get


where μ is the corresponding deviation term. Here, relations decorated with ε above the (in)equality sign are probabilistically true, i.e., they hold except with probability ε. In the following, we introduce an estimate of the expected witness value W, i.e.,


where and


Next, we need to bound the maximum fraction of non-qubit events, 1 - λ. Following the discussion in Sec. C, with post-selection we expect this fraction to be p₂N/n, where p₂ and p₁ are the probabilities of the SPDC source to emit, respectively, a double pair or a single-photon pair. In the scenario where N preparations are made, by using the Chernoff-Hoeffding tail inequality, we have that


Plugging this into Eq. (C5), we get


Therefore, the effective violation is


Note that the effective violation is obtained by setting the violation due to the non-qubit contribution to zero. In other words, the effective violation measures the amount of randomness in Z originating from qubit events. That is, we have

Finally, by choosing ℓ according to the bound above and fixing the security level ε, the output string is ε-close to uniform conditioned on C. In the actual implementation, ε was fixed to a small constant value.
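The hash family is not specified in this excerpt; a standard choice for this kind of randomness extraction is a random Toeplitz matrix over GF(2), which forms a universal family and hence satisfies the leftover hash lemma. A minimal sketch with illustrative dimensions:

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, out_len):
    # Multiply the raw bit string by a random Toeplitz matrix over GF(2).
    # Toeplitz matrices are a universal hash family, so the leftover hash
    # lemma applies for a suitably chosen out_len.
    n = len(raw_bits)
    assert len(seed_bits) == n + out_len - 1
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]  # constant along diagonals
    return T.dot(raw_bits) % 2

rng = np.random.default_rng(1)
raw = rng.integers(0, 2, 64, dtype=np.uint8)            # raw bits (illustrative)
seed = rng.integers(0, 2, 64 + 16 - 1, dtype=np.uint8)  # public random seed
out = toeplitz_extract(raw, seed, 16)
print(out)
```

In practice the matrix-vector product is implemented with FFT-based convolution for large blocks, but the GF(2) arithmetic above is the same.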

Figure 4: (a) NIST tests of the data at the output of the extractor. (b) Binary image (500×500) of the extracted random bits.

Appendix E Output data analysis

We performed tests to assess the quality of the generated randomness, looking for patterns and correlations in the output data. We performed the standard statistical tests defined by NIST. For each test, the p-value is the result of a Kolmogorov-Smirnov test and must exceed the significance threshold for the test to be considered successful. Although not all tests could be performed due to the small size of the sample, all performed tests were successful (see Figure 4(a)). A more visual approach to detecting patterns is illustrated in Figure 4(b), where we display 250000 bits in a 500×500 matrix as a black-and-white image. Any repeated pattern or regular structure in the image would indicate correlations among the bits. No pattern appears.
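For reference, the simplest test in the NIST SP 800-22 suite, the frequency (monobit) test, checks that zeros and ones are statistically balanced and can be sketched in a few lines:

```python
import math

def monobit_pvalue(bits):
    # NIST SP 800-22 frequency (monobit) test: p-value computed from the
    # normalized absolute sum of the bits mapped to +/-1
    n = len(bits)
    s = abs(sum(2 * b - 1 for b in bits))
    return math.erfc(s / math.sqrt(2 * n))

print(monobit_pvalue([0, 1] * 5000))  # 1.0: perfectly balanced string
print(monobit_pvalue([1] * 10000))    # ~0: a constant string fails
```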

Appendix F Example of raw data

Here, for completeness, we present an extract of the raw data from our experiment; see Tab. 1. The data corresponds to one minute of integration, under good alignment conditions. We give the counts observed in each of the two detectors, for each measurement setting y and preparation setting x. As mentioned in the main text, the preparations x = 0, 1, 2, 3 correspond respectively to the diagonal (D), anti-diagonal (A), circular-right (R) and circular-left (L) polarization states. The measurements y = 0, 1 correspond respectively to the {D,A} basis and the {R,L} basis. In other words, we use the preparations and measurements of the BB84 protocol.

Based on the raw data, we evaluate the asymptotic probability distribution using the method presented in Section IV, and then evaluate the witness value W. While perfect BB84 preparations and measurements would give W = 1 in the asymptotic limit, the observed value is reduced. This is partly due to alignment errors, but especially to finite-size effects. To illustrate, we compute the value of W corresponding to the data in Tab. 1 both with and without accounting for finite-size effects, and give the corresponding visibilities with respect to the ideal BB84 preparations and measurements mixed with white noise. The value obtained without the finite-size correction is not far from the average observed under good conditions (see the main text).




Preparation       y = 0            y = 1
x = 0 (D)      5903    97       3515   2485
x = 1 (A)       172   5828      2950   3050
x = 2 (R)      2825   3175      5914     86
x = 3 (L)      3565   2435       199   5801

Table 1: Sample of raw data taken during one minute under good alignment conditions. For each preparation x and measurement setting y, the two entries give the counts observed in the two detectors.
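Assuming the determinant form of the dimension witness and that the two entries per setting in Tab. 1 are the counts of the two detectors (b = +1 listed first), the raw frequencies already give a witness value close to the ideal qubit bound of 1:

```python
import numpy as np

# Raw counts from Tab. 1; keys are (x, y), values are the two detector counts
counts = {
    (0, 0): (5903, 97),   (0, 1): (3515, 2485),
    (1, 0): (172, 5828),  (1, 1): (2950, 3050),
    (2, 0): (2825, 3175), (2, 1): (5914, 86),
    (3, 0): (3565, 2435), (3, 1): (199, 5801),
}

def p1(x, y):
    # Raw frequency of outcome b = +1 for preparation x and measurement y
    plus, minus = counts[(x, y)]
    return plus / (plus + minus)

# Dimension witness: 2x2 determinant of probability differences
M = np.array([[p1(0, 0) - p1(1, 0), p1(2, 0) - p1(3, 0)],
              [p1(0, 1) - p1(1, 1), p1(2, 1) - p1(3, 1)]])
print(np.linalg.det(M))  # ~0.92 from the raw frequencies
```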

