An Observed-Data-Consistent Approach to the Assignment of Bit Values in a Quantum Random Number Generator
Abstract
The majority of quantum random number generators (QRNGs) are designed as converters of a continuous quantum random variable into a discrete classical random bit value. For the resulting random bit sequence to be minimally biased, the conversion process demands that an experimenter fully characterize the underlying quantum system and implement parameter estimation routines. Here we show that conventional approaches to parameter estimation (such as maximum likelihood estimation), when used on a finite QRNG data sample without caution, may introduce binning bias and lead to overestimation of the randomness of the QRNG output. To bypass these complications, we develop an alternative conversion approach based on the Bayesian statistical inference method. We illustrate our approach using experimental data from a time-of-arrival QRNG and numerically simulated data from a vacuum homodyning QRNG. Side-by-side comparison with the conventional conversion technique shows that our method provides automatic online bias control and naturally bounds the best achievable QRNG bit rate for a given measurement record.
pacs:
03.67.Hk, 03.67.Dd, 05.40.-a
I Introduction
Random numbers are important for an array of applications, from encryption and authentication systems encryption () to Monte Carlo simulations for molecular dynamics, nuclear reactors, and others primality (). As a result, a variety of classical methods (computational pseudorandom number generators, sampling of stochastic physical processes, etc.) to generate random number sequences have been developed. An attendant host of tests to certify that a given data sequence is “random” has also been created nistsuite (); diehard (); ent (). While pseudorandom numbers are useful for many of these applications, including simulations and encryption with suitably high quality sources, their inherent determinism means that any encryption or authentication scheme based on them is in principle breakable with sufficient computational power. This applies to any deterministic system, including processes described by classical physics.
On the other hand, the only nondeterministic physical theory with experimentally accessible applications is quantum mechanics bustard (). The additional security provided by nondeterminism is a requirement for quantum key distribution, for instance, whose security proofs often rely on the concept of true, nondeterministic randomness in order to guarantee successful secret key sharing gisin (). Thus, a wide array of so-called quantum random number generators (QRNGs) have been developed. From radioactive decay schmidt () to quantum optical techniques jennewein (); stefanov (), a host of methods involving photon arrival time rogina (); wayne (); shields (); Wahl () and vacuum noise measurements lam (); Gabriel () have been demonstrated. Despite the prevalence of QRNGs and their acknowledged need, many implementations use extractors (such as hashes) to remove large amounts of bias computationally, exposing a potential weakness in their physical implementations. For instance, if an adversary is able to computationally reverse the extractor function that a given QRNG implements in order to achieve random number uniformity, and the underlying (“physical”) distribution is strongly biased, then he or she will have a best-guess strategy against the QRNG device. Therefore, one’s ability to detect and remove bias before applying an extractor function improves the QRNG’s security.
One of the major sources of bias in QRNGs, aside from environmental noise, is the lack of knowledge of the precise values of the QRNG’s physical parameters. The best one may do is to estimate the parameters statistically. But because the estimates are statistical, they are intrinsically noisy, and thus assigning a single value to a parameter can lead to errors and bias. Nevertheless, parameter estimation errors are usually ignored in QRNG design, and simple point estimators are used. Here, we show that using point estimators may introduce binning bias. We argue that using a Bayesian statistical inference method removes this type of bias, and we propose a binning scheme that extracts the optimal number of bits for a given entropy from a given physical random number distribution. When used as a diagnostic for QRNGs in combination with maximum likelihood estimators (MLE), uniform distributions can be generated from sources of quantum randomness. Using Bayesian hypothesis updating techniques, our scheme allows for a test of the quantum model that produced a given set of numbers, potentially allowing for a fast, online quantum test of randomness. This technique has applications to high bit rate QRNGs, which need testing and verification to ensure the device remains bias-free during use.
II Direct Binning from a Continuous Distribution and Bias
Let $X$ be a continuous random variable with probability density function (pdf) $f(x|\theta)$, where $\theta$ is a fixed (but unknown) parameter value. The particular form and parametric dependence of $f(x|\theta)$ is determined by the experimental setup at hand. Our goal in this section is to introduce a typical problem of physical random number generation that can be formulated as follows: Provided $N$ independent samples of $X$, $x_1,\dots,x_N$, are measured in an experiment, convert, if possible, each measurement outcome into a discrete random variable $K$ with probability mass function (pmf) $P(K=k)$ and corresponding domain $\{0,1,\dots,M-1\}$. A uniform distribution is often important in applications, and here we will also concentrate on the case $P(K=k)=1/M$. Then the problem essentially reduces to constructing a surjection from the set of possible measurement outcomes onto the set $\{0,1,\dots,M-1\}$.
Traditionally, the problem is solved by dividing the domain of $X$ into $M$ mutually nonintersecting bins $[x_k, x_{k+1})$, $k=0,\dots,M-1$, that cover the whole domain Qi (). When bins are selected such that the probability of the random variable $X$ to fall into the $k$th bin is

$$P_k = \int_{x_k}^{x_{k+1}} f(x|\theta)\,dx = \frac{1}{M}, \qquad (1)$$
then the surjection can be constructed by following a simple rule: If a measurement result $x \in [x_k, x_{k+1})$ for some $k$, then we assign $K=k$. Of course, this mapping works only if the value of the model parameter $\theta$ is known. Since this is not the case in the majority of experimental situations, the first order of business is to find a good estimate of the value of $\theta$. In many cases, the number of possible ways to construct an estimator that provides an unbiased estimate of $\theta$ is infinite CasellaBerger (). Moreover, it is not always possible to find an estimator that has minimal uncertainty, and often one is forced to choose one from a set of almost optimal candidates. In practice, the maximum likelihood estimator (MLE) is a common choice.
Given a set of $N$ independent samples of $X$, $x_1,\dots,x_N$, we can introduce the likelihood function,

$$\mathcal{L}(\theta|x_1,\dots,x_N) = \prod_{i=1}^{N} f(x_i|\theta). \qquad (2)$$

The likelihood indicates which values of $\theta$ are more likely given the measurement data $x_1,\dots,x_N$. We can also compute, at least numerically, the value of $\theta$ that maximizes $\mathcal{L}$, provided the likelihood function is convex. The resulting estimator is the MLE, i.e. $\hat{\theta}_{\mathrm{MLE}} = \arg\max_\theta \mathcal{L}(\theta|x_1,\dots,x_N)$.
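To make the estimation step concrete, here is a minimal sketch in Python (our illustration, not part of the original work) computing the MLE for an exponential model $f(t|R)=R e^{-Rt}$, the model that reappears for the time-of-arrival QRNG in Section V; for this model the likelihood maximum has the closed form $\hat{R} = N/\sum_i t_i$:

```python
import random

def exponential_mle(samples):
    """MLE of the rate R for the exponential model f(t|R) = R*exp(-R*t).

    Maximizing L(R) = prod_i R*exp(-R*t_i) gives the closed form
    R_hat = N / sum_i t_i, the reciprocal of the sample mean.
    """
    return len(samples) / sum(samples)

random.seed(1)
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(100_000)]
r_hat = exponential_mle(data)
print(r_hat)  # close to, but never exactly, the true rate 2.0
```

Even with $10^5$ samples the estimate fluctuates around the true value, which is precisely the point made below: a point estimate is intrinsically noisy.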
Using $\hat{\theta}_{\mathrm{MLE}}$ as the “true” parameter value for binning purposes in Eq. (1) might at first appear a reasonable choice, and this approach is a mainstay in QRNG design. But what happens if instead of $\hat{\theta}_{\mathrm{MLE}}$ one uses some other estimate $\theta'$ that differs from $\hat{\theta}_{\mathrm{MLE}}$ only slightly in the value of the likelihood, i.e. $\mathcal{L}(\theta') \approx \mathcal{L}(\hat{\theta}_{\mathrm{MLE}})$? Choosing $\theta'$ over $\hat{\theta}_{\mathrm{MLE}}$ will have an effect on the size of the bins generated via Eq. (1). We illustrate this situation in Fig. 1, where the random variable follows a gamma distribution and we are interested in converting each measurement outcome into a uniformly distributed discrete random variable $K$ that can take on values $\{0,1,2,3\}$. We fit the same measurement data using two slightly different values of the parameter, $\theta_1$ and $\theta_2$. The red solid line represents the fit with $\theta_1$ and the blue solid line the fit with $\theta_2$. The vertical dashed lines represent the bin edges calculated using Eq. (1) with $\theta_1$ and $\theta_2$, respectively. The green circle is a particular measurement outcome that we would like to assign a discrete value to. According to our previous discussion, this outcome is assigned to different bins depending on whether we use $\theta_1$ or $\theta_2$.
Now imagine that $\theta_1$ and $\theta_2$ are such that the likelihood function does not provide a reliable differentiation between them, i.e. $\mathcal{L}(\theta_1) \approx \mathcal{L}(\theta_2)$. Which value of $\theta$, if any, should we then adopt? There are four possible options:

Choose $\theta_1$ when $\theta_1$ is the true value.

Choose $\theta_2$ when $\theta_2$ is the true value.

Choose $\theta_1$ when $\theta_2$ is the true value.

Choose $\theta_2$ when $\theta_1$ is the true value.
The first two choices are trivial since they obviously result in a uniform pmf $P(K=k)=1/M$. The last two choices, however, generate a bias that distorts the uniformity of $P(K=k)$. To see this, we calculate the probability of occupying the $k$th bin provided that $\theta_1$ is chosen when $\theta_2$ is the true estimate,

$$P_k^{(12)} = \int_{x_k(\theta_1)}^{x_{k+1}(\theta_1)} f(x|\theta_2)\,dx, \qquad (3)$$

where $x_k(\theta_1)$ are the bin edges computed from Eq. (1) with $\theta = \theta_1$. We notice that, by definition, $\sum_k P_k^{(12)} = 1$. Similarly, if we choose $\theta_2$ when $\theta_1$ is the true estimate, then the $k$th bin probability reads,

$$P_k^{(21)} = \int_{x_k(\theta_2)}^{x_{k+1}(\theta_2)} f(x|\theta_1)\,dx, \qquad (4)$$
where $x_k(\theta_2)$ are the corresponding bin edges for $\theta = \theta_2$. Finally, the plot of the pmfs $P_k^{(12)}$ and $P_k^{(21)}$ in Fig. 2, calculated using Eqs. (3) and (4) respectively, illustrates the effect of parameter under(over)estimation on the uniformity of the random numbers generated using the continuous distribution binning method. The horizontal axis represents the bin number $k$ where a measurement outcome is placed as the result of binning. The vertical axis is the probability for different values of $k$ to occur. Ideally, if the value of $\theta$ were known exactly, the probability of each $k$ would be the same, $1/M$. This situation is represented by the solid blue line. When the value of the parameter is overestimated, the corresponding pmf in Eq. (4), depicted by green crosses, exhibits a bias towards placing measurement outcomes into the first two bins. In a similar fashion, the pmf in Eq. (3), represented by red circles, corresponds to the situation when the parameter is underestimated and demonstrates a bias towards the last bins. To quantify the amount of introduced bias we compute the Kullback-Leibler (KL) divergence between each biased bin pmf and the ideal uniform pmf. By definition, the KL divergence measures the information lost when the uniform pmf is used to approximate $P_k^{(12)}$ or $P_k^{(21)}$; both divergences are strictly positive for the distributions in Fig. 2.
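This binning bias is easy to reproduce numerically. The following sketch (ours; it uses an exponential rather than a gamma model purely because the equiprobable bin edges then have a closed form) builds bins from a slightly wrong rate estimate and computes the KL divergence of the resulting bin pmf from uniform:

```python
import math

M = 4  # number of bins, i.e. 2 random bits per sample

def bin_edges(rate, m):
    """Bin edges that are equiprobable (mass 1/m per bin) under the
    exponential pdf f(x|rate) = rate * exp(-rate * x), cf. Eq. (1)."""
    return [-math.log(1.0 - k / m) / rate for k in range(m)] + [math.inf]

def bin_pmf(edges, true_rate):
    """Probability that an Exp(true_rate) sample lands in each bin."""
    def cdf(x):
        return 1.0 - math.exp(-true_rate * x)
    return [cdf(b) - cdf(a) for a, b in zip(edges, edges[1:])]

def kl_to_uniform(pmf):
    """Kullback-Leibler divergence D(pmf || uniform), in bits."""
    u = 1.0 / len(pmf)
    return sum(p * math.log2(p / u) for p in pmf if p > 0)

true_rate = 1.0
kl_exact = kl_to_uniform(bin_pmf(bin_edges(1.00, M), true_rate))
kl_over = kl_to_uniform(bin_pmf(bin_edges(1.05, M), true_rate))  # 5% off

print(kl_exact)  # ~0: bins computed with the true parameter
print(kl_over)   # > 0: a slightly wrong estimate biases the pmf
```

A 5% parameter error already yields a measurably nonuniform bin pmf, in line with the discussion above.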
This example shows that discrete random number generation procedures relying on binning a continuous probability distribution with a parametric dependence potentially introduce bias. This happens because the point parameter estimation approach is prone to over(under)estimating the true value of the parameter. Hence, the question arises: Is there a binning method that does not introduce bias? The short answer is yes, and such a method will be introduced in Section IV. In the next section, a slightly different approach to binning is shown in order to motivate the discussion.
III Uniform Random Numbers via Integral Transform
A measurement outcome $x$ does not depend on the value of the pdf parameter $\theta$. However, the probability of the outcome does. As we have already seen, this means that the size of the bins also depends on $\theta$, which makes the binning procedure problematic. The reverse situation would be more practical, in which the bin size is fixed (independent of $\theta$) but the binned quantity depends on the pdf parameter. Of course, this does not by itself remedy the problem of bias discussed earlier, but it will be useful in formulating a solution in the next section.
For a given fixed value of $\theta$, the probability that the continuous random variable $X$ is less than $x$ reads,

$$y = F(x|\theta) = \int_{-\infty}^{x} f(x'|\theta)\,dx', \qquad (5)$$

where we have taken the domain of $X$ to be the real line. By definition $0 \le y \le 1$, and $Y = F(X|\theta)$ can be interpreted as a uniform continuous random variable on the interval $[0,1]$ provided $F$ is a continuous function of $x$. The proof is straightforward and can be found elsewhere CasellaBerger (). On the other hand, if the value of $x$ is fixed, e.g. by a measurement, and the value of $\theta$ is unknown, then $y = F(x|\theta)$ is clearly a function of $\theta$ with the range $[0,1]$.
If we divide the interval $[0,1]$ into $M$ uniform bins, each of size $1/M$, then for every measurement outcome $x_i$ a discrete random number can be generated by finding $k$ such that $k/M \le y_i < (k+1)/M$. This is exactly what we were looking for. By replacing the random variable $X$ with $Y$ using the integral transform in Eq. (5), we switched from having bins that explicitly depended on the model parameter to having a constant bin size. The parametric dependence is now shifted to the random variable that we bin, i.e. $y_i = F(x_i|\theta)$, and we now need to figure out a way to assign a value to $y_i$ that does not create bias.
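A small sketch of this fixed-bin scheme (our illustration, assuming an exponential model with a known rate, so that $F(x|R) = 1 - e^{-Rx}$):

```python
import math
import random

def to_bin(x, rate, m):
    """Map an Exp(rate) outcome x to one of m equal-width bins of
    y = F(x|rate) = 1 - exp(-rate * x), which is Uniform(0, 1)."""
    y = 1.0 - math.exp(-rate * x)
    return min(int(y * m), m - 1)  # guard against y == 1.0

random.seed(2)
rate, m = 3.0, 8  # 8 bins -> 3 bits per sample
counts = [0] * m
for _ in range(80_000):
    counts[to_bin(random.expovariate(rate), rate, m)] += 1

# With the rate known exactly, every bin receives ~10,000 counts.
print(max(counts) - min(counts) < 1000)
```

The bin edges here are constant ($k/m$); only the transformed variable $y$ carries the parameter dependence, exactly as described above.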
IV Bayesian Inference and Binning-Bias-Free Random Numbers
We could try to fix the value of $y_i$ by using an estimate of $\theta$ (e.g. the MLE) as was done previously in Section II. However, this approach is inherently flawed because any finite data sample estimator, though it can be very close to the true parameter value, will over(under)estimate the true parameter value. However, the concept of likelihood, or, more precisely, the concept of treating the distribution parameter as an unknown (but not random) variable given a set of measurements, can be inverted using Bayesian inference to compute the probability of occupying a given bin.
Indeed, the Bayesian approach treats $\theta$ as a quantity whose variation is described by a probability distribution $\pi(\theta)$ usually referred to as the prior. The prior is a subjective distribution determined by the experimenter’s personal beliefs and knowledge about the system of interest prior to any observations on the system. Once $\pi(\theta)$ is formulated, an observation on the system is made. The prior is then updated with the result of the observation using Bayes rule, and the next measurement is taken with the updated prior, often called the posterior, as the new prior. If the sampling distribution, i.e. the distribution we draw measurement outcomes from, is $f(x|\theta)$ (the pdf to observe $x$ as a result of our measurement, given the parameter value $\theta$) and the measurement result is $x$, then the posterior distribution is given by

$$\pi(\theta|x) = \frac{f(x|\theta)\,\pi(\theta)}{m(x)}, \qquad (6)$$

where $m(x)$ is the marginal distribution of $X$:

$$m(x) = \int f(x|\theta)\,\pi(\theta)\,d\theta. \qquad (7)$$
The posterior distribution can be subjectively interpreted (since it does depend on the choice of the prior) as a conditional distribution (conditioned on the observed sample) for the parameter $\theta$. On the other hand, we know that $y = F(x|\theta)$ is a function of $\theta$ given the measurement outcome $x$. Therefore, $y$ can also be interpreted as a random variable on $[0,1]$ with a distribution function that can be computed using $\pi(\theta|x)$,

$$g(y|x) = \pm\,\pi(\theta(y)|x)\,\frac{d\theta}{dy}, \qquad (8)$$

where the plus (minus) sign is taken when $\theta(y)$ is an increasing (decreasing) function of $y$, and $\theta(y)$ is assumed to be continuous with a continuous first derivative. Now we are fully equipped to calculate the probability that a measurement outcome $x$ converts into an integer $k$ ($k = 0,\dots,M-1$). It is equivalent to the probability that the random variable $y$ falls into the interval $[k/M,(k+1)/M)$, given by,

$$P_k(x) = \int_{k/M}^{(k+1)/M} g(y|x)\,dy. \qquad (9)$$
This means that we can now assign a bin to a measurement outcome using a simple acceptance/rejection test: We accept $x$ into the $k$th bin if $P_k(x) \ge p_a$ and reject it otherwise. Here $p_a$ is the user-defined acceptance probability. The binning bias can be completely eliminated by setting the value of $p_a$ high (e.g. $p_a = 0.95$). This means that only the measurement outcomes that have more than 95% of their distribution function localized within a certain bin will be accepted and converted into a discrete random number. All other measurements will be rejected. On the other hand, if $p_a$ is set too low, then fewer measurements will be rejected. However, this may lead to conflicting situations in which a measurement outcome could be placed into two or more different bins, which, in turn, may lead to binning bias.
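The acceptance/rejection test itself is a one-liner; the sketch below (our illustration) accepts a measurement only when a single bin carries at least the fraction $p_a$ of the posterior mass of $y$:

```python
def assign_bin(bin_probs, p_accept=0.95):
    """Acceptance/rejection test: return the bin index k if the posterior
    probability that y falls in bin k is at least p_accept, else None."""
    k = max(range(len(bin_probs)), key=bin_probs.__getitem__)
    return k if bin_probs[k] >= p_accept else None

# A measurement whose posterior mass is concentrated in one bin is accepted...
print(assign_bin([0.01, 0.97, 0.01, 0.01]))  # -> 1
# ...while an ambiguous one (mass split across two bins) is rejected.
print(assign_bin([0.48, 0.50, 0.01, 0.01]))  # -> None
```

The bin probabilities fed to this test are exactly the $P_k(x)$ of Eq. (9).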
Let us consider the example depicted in Figure 3, where the distribution functions $g(y|x_1)$ (red solid line) and $g(y|x_2)$ (green solid line) for two independent samples $x_1$ and $x_2$ are plotted. We are interested in converting each measurement outcome into an integer value. Using our acceptance/rejection test with a high acceptance probability $p_a$, we conclude that $x_1$ is an acceptable measurement that can be converted to an integer. On the contrary, $x_2$ will be rejected and no integer value will be assigned to it.
We finally summarize our approach to QRNG data processing as the following algorithm:

Run the QRNG and collect $N$ independent samples $x_1,\dots,x_N$ from the distribution defined by the QRNG.

Construct a prior $\pi(\theta)$ for all possible values of $\theta$.

Update the prior $N$ times using the Bayes rule Eq. (6). Compute the posterior $\pi(\theta|x_1,\dots,x_N)$.

Use the proposed acceptance/rejection test to convert the measured sequence into integer values.
It is worth mentioning that, alternatively, instead of waiting to collect a measurement record of size $N$, one could choose to update the prior online, i.e. after each measurement. In this case it is likely that the first few measurement results will be discarded as we accumulate information about the QRNG device at hand. However, after enough information is received to narrow down the parameter distribution, it will be possible to convert upcoming measurements into random bit values.
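The prior-updating steps of the algorithm can be sketched with a simple grid discretization of the parameter (our illustration; an exponential sampling model and a flat prior on a grid of candidate rates are assumed):

```python
import math
import random

def grid_posterior(samples, grid):
    """Numerical Bayes updating: start from a flat prior over a grid of
    candidate rates and apply Bayes rule once per sample (log scale for
    numerical stability); this is the 'online' form of the update."""
    log_post = [0.0] * len(grid)                 # flat prior
    for x in samples:
        for i, r in enumerate(grid):
            log_post[i] += math.log(r) - r * x   # log f(x|r) for Exp(r)
    m = max(log_post)
    w = [math.exp(v - m) for v in log_post]
    s = sum(w)
    return [v / s for v in w]

random.seed(3)
grid = [0.5 + 0.01 * i for i in range(301)]      # candidate rates 0.5..3.5
data = [random.expovariate(2.0) for _ in range(2000)]
post = grid_posterior(data, grid)
r_map = grid[max(range(len(grid)), key=post.__getitem__)]
print(abs(r_map - 2.0) < 0.3)  # the posterior peaks near the true rate
```

The posterior here is carried forward as a whole distribution rather than collapsed to a point estimate, which is what distinguishes this scheme from MLE-based binning.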
V Examples
To illustrate how our approach works in an experiment we consider two physical implementations of QRNGs. We first introduce mathematical models to describe the QRNGs of interest in Section V.1 and then proceed with the analysis of the experimental data and numerical simulation results in Section V.2.
V.1 Physical Models of QRNGs
V.1.1 Photon Time-of-Arrival QRNG
Let us first consider a QRNG based on measuring the time-of-arrival statistics of a coherent light source. Our experimental setup consists of a tapered amplifier, emitting spontaneously and subsequently attenuated to a coherent state, that continuously illuminates the surface of a free-running single photon counting module with 80 ns dead time pooser (). Using a gated FPGA, essentially acting as a time-to-digital converter, we measure the time interval $\tau$ between two consecutive photodetection events. The time interval $\tau$ plays the role of the physical random variable that we would like to convert into a discrete uniform random variable.
To determine the statistical properties of $\tau$, a quantum model of the photodetection process is needed. For this purpose we introduce a positive-operator valued measure (POVM) $\{\hat{E}_0, \hat{E}_1\}$, where $\hat{E}_0$ is a projection operator that corresponds to a “no-click” measurement of the detector and $\hat{E}_1$ represents a “click” detection event. Note that $\hat{E}_0 + \hat{E}_1 = \hat{1}$. Then the detector click rate (i.e. the click probability per unit time) reads,

$$R = \frac{\eta}{\Delta t}\,\mathrm{Tr}\big[\hat{\rho}\,\hat{E}_1\big], \qquad (10)$$
here $\hat{\rho}$ is the density operator of the laser field and $\eta$ describes the overall detection efficiency. Therefore, the probability to get a click in a short time interval $\Delta t$ is $R\,\Delta t$. On the other hand, the probability to detect no click in the same time interval is $1 - R\,\Delta t$. Next consider the time interval $\tau$ between two consecutive detector clicks. We can model the absence of clicks during time $\tau$ by a sequence of $n$ successful “no-click” measurements, each of duration $\delta t$. Hence, the probability to observe no clicks during time $\tau$ reads,

$$P_0(\tau) = (1 - R\,\delta t)^n, \qquad (11)$$

where we introduced $\delta t = \tau/n$. In the limit of large $n$ we obtain,

$$P_0(\tau) = e^{-R\tau}. \qquad (12)$$
We can now compute the conditional probability to detect a click at time $t_0 + \tau$ given a click was detected at $t_0$,

$$P(\tau) = 1 - P_0(\tau) = 1 - e^{-R\tau}. \qquad (13)$$

Finally, the probability density for the random variable $\tau$ can be obtained by taking a derivative of $P(\tau)$,

$$f(\tau|R) = \frac{dP(\tau)}{d\tau} = R\,e^{-R\tau}. \qquad (14)$$
Two main assumptions were made in the derivation of Eq. (14). First, the detection events are independent and identically distributed. This assumption is justifiable in the case of moderate laser powers. Second, we have assumed noiseless detection. The latter assumption is, unfortunately, not very realistic.
Avalanche photodiode detectors usually introduce two main sources of noise that affect the value of $\tau$: afterpulsing and timing jitter. Afterpulsing is a false detection event in which electrons that were trapped by quenching in a previous detector gate are re-released in subsequent detector gates, usually occurring after a true click due to a photon absorption event. The time interval between a true detection and an afterpulse event can be well characterized experimentally, and the raw data can be filtered to remove the afterpulsing events by only accepting measurements with $\tau > \tau_a$. The filtering procedure effectively results in a rescaling of the probability density in Eq. (14),

$$f(\tau|R) = R\,e^{-R(\tau - \tau_a)}, \qquad \tau > \tau_a, \qquad (15)$$

where $\tau_a$ is a characteristic afterpulsing time.
The time jitter is a small error in the measurement of $\tau$. The recorded time interval between two sequential clicks is a sum of two random variables, $\tau' = \tau + \xi$, where $\tau$ is the “true” time interval with the pdf given in Eq. (15) and $\xi$ is a zero-mean Gaussian time jitter random variable with variance $\sigma_j^2$. One can show that the probability density for $\tau'$ reads,

$$f(\tau'|R) = \frac{R}{2}\,\exp\!\left(\frac{R^2\sigma_j^2}{2} - R(\tau' - \tau_a)\right)\left[1 + \operatorname{erf}\!\left(\frac{\tau' - \tau_a - R\,\sigma_j^2}{\sqrt{2}\,\sigma_j}\right)\right], \qquad (16)$$

where $\operatorname{erf}$ denotes the error function. Notice that if the time jitter is small ($\sigma_j \to 0$), Eq. (16) coincides with Eq. (15). Since the observed time jitter is indeed small, we will model the time-of-arrival QRNG using the probability density in Eq. (15) with the single parameter $R$.
Let us also discuss how to implement the QRNG data processing algorithm described earlier for this model. The model pdf is given in Eq. (15). An obvious choice for the prior is a noninformative (uniform) prior that assigns constant weight to all values of the parameter $R$. It turns out that in this case the posterior distribution after $N$ measurements can even be calculated analytically (instead of by standard numerical updating) as,

$$\pi(R|\tau_1,\dots,\tau_N) = \frac{S^{N+1}}{\Gamma(N+1)}\,R^{N}\,e^{-RS}, \qquad (17)$$

where $S = \sum_{i=1}^{N}(\tau_i - \tau_a)$, we assume that the characteristic afterpulsing time $\tau_a$ is known (not a parameter), and the right-hand side is the gamma distribution $\mathrm{Gamma}(N+1, S)$. Using Eq. (5) we introduce the random variables $y_i = 1 - e^{-R(\tau_i - \tau_a)}$, $i = 1,\dots,N$, and compute their probability distributions using Eq. (8) and Eq. (17),

$$g(y_i|\tau_i) = \pi\big(R(y_i)\,\big|\,\tau_1,\dots,\tau_N\big)\,\frac{1}{(\tau_i - \tau_a)(1 - y_i)}, \qquad R(y_i) = -\frac{\ln(1 - y_i)}{\tau_i - \tau_a}. \qquad (18)$$
And finally we calculate the probability that $y_i$ falls into the $k$th bin ($k = 0,\dots,M-1$),

$$P_k(\tau_i) = \frac{1}{\Gamma(N+1)}\Big[\gamma\big(N+1,\,S\,R_{k+1}(\tau_i)\big) - \gamma\big(N+1,\,S\,R_k(\tau_i)\big)\Big], \qquad R_k(\tau_i) = -\frac{\ln(1 - k/M)}{\tau_i - \tau_a}, \qquad (19)$$

where $\gamma$ denotes the lower incomplete gamma function. Applying the acceptance/rejection test to $P_k(\tau_i)$ for all pairs $(k, i)$ will convert the measurement outcomes into a sequence of uniformly distributed integers on $\{0,\dots,M-1\}$.
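A sketch of the gamma-posterior bin probabilities of Eqs. (17) and (19) for this model (our illustration; it uses only the Python standard library, exploiting the fact that the regularized incomplete gamma function has a finite-sum form for the integer shape $N+1$, though `scipy.special.gammainc` would do the same job):

```python
import math
import random

def reg_gamma_p(a, x):
    """Regularized lower incomplete gamma P(a, x) for integer a >= 1:
    P(a, x) = 1 - exp(-x) * sum_{j=0}^{a-1} x^j / j!"""
    if x <= 0.0:
        return 0.0
    term = math.exp(-x)
    s = term
    for j in range(1, a):
        term *= x / j
        s += term
    return 1.0 - s

def bin_probs(tau, data_sum, n, m, tau_a=0.0):
    """Posterior probability that y = 1 - exp(-R*(tau - tau_a)) falls in
    each of m bins, with R ~ Gamma(n + 1, data_sum) as in Eq. (17)."""
    dt = tau - tau_a
    # The bin edge k/m of y maps to the rate edge R_k = -ln(1 - k/m)/dt.
    rate_edges = [-math.log(1.0 - k / m) / dt for k in range(m)]
    cdf = [reg_gamma_p(n + 1, data_sum * r) for r in rate_edges] + [1.0]
    return [b - a for a, b in zip(cdf, cdf[1:])]

random.seed(4)
n, m = 40, 4
data = [random.expovariate(1.0) for _ in range(n)]
probs = bin_probs(tau=0.7, data_sum=sum(data), n=n, m=m)
print(abs(sum(probs) - 1.0) < 1e-9)  # a proper pmf over the m bins
```

The resulting list of bin probabilities is what the acceptance/rejection test of Section IV is applied to.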
V.1.2 Vacuum Quadrature Measurement QRNG
The second system that we consider here is a popular QRNG implementation based on vacuum quadrature measurement. Quantum vacuum fluctuations of the electromagnetic field are measured routinely at optical wavelengths using homodyne detection techniques pironio (). A typical homodyne detector consists of a beam splitter with two input ports ($a$, $b$) and two output ports ($c$, $d$). Suppose that the input port $a$ carries a laser field described by a density operator $\hat{\rho}$ and the port $b$ carries the vacuum. By placing a photodetector in each of the output ports we measure the photon number difference operator between $c$ and $d$,

$$\hat{N}_{-} = \eta_1\,\hat{c}^{\dagger}\hat{c} - \eta_2\,\hat{d}^{\dagger}\hat{d}, \qquad \hat{c} = t\,\hat{a} + r\,\hat{b}, \quad \hat{d} = r\,\hat{a} - t\,\hat{b}, \qquad (20)$$

where $t$ ($r$) is the transmittance (reflectance) of the beam splitter, $\eta_{1,2}$ are the efficiencies of detectors 1 and 2, and $\hat{a},\hat{a}^{\dagger}$ ($\hat{b},\hat{b}^{\dagger}$) are the annihilation/creation operators for the input port $a$ ($b$). Therefore, in a general experimental situation, $\hat{N}_{-}$ will depend on three parameters (note that $t^2 + r^2 = 1$) and the laser field $\hat{\rho}$. But since we only perform a numerical simulation of an experiment here, and thus can “control” the parameters perfectly, we will assume that we have a 50/50 beam splitter ($t = r = 1/\sqrt{2}$) and 100 percent efficient detectors ($\eta_1 = \eta_2 = 1$). We will also assume that the laser field is in a coherent state, i.e. $\hat{\rho} = |\alpha\rangle\langle\alpha|$.
Therefore, the expectation value of $\hat{N}_{-}$,

$$\langle \hat{N}_{-} \rangle = \alpha^{*}\langle \hat{b} \rangle + \alpha\,\langle \hat{b}^{\dagger} \rangle = |\alpha|\,\langle \hat{X}_{\phi} \rangle, \qquad (21)$$

is proportional to the expectation value of the vacuum quadrature operator $\hat{X}_{\phi} = \hat{b}\,e^{-i\phi} + \hat{b}^{\dagger}\,e^{i\phi}$, where $\phi$ is the phase of $\alpha$. Setting $\phi = 0$, we conclude that by measuring the photon number difference in the output ports $c$ and $d$ we effectively measure the vacuum quadrature, and hence a particular measurement outcome is a normal random variable with a Gaussian pdf.
In reality, measurement results are always affected by electronic noise. The noise is usually modeled by a normal distribution, and thus the outcome of the quadrature measurement is a sum of the “true” quadrature random variable $q$ and the noise $n$, i.e. $x = q + n$. Since $q$ and $n$ are independent and normally distributed, their sum is also a normally distributed random variable. Therefore, we will model the output of the vacuum quadrature measurement based QRNG as a continuous random variable with the distribution function

$$f(x|\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-x^{2}/2\sigma^{2}}, \qquad (22)$$

where $\sigma$ is an unknown parameter.
With the QRNG model at hand we can now discuss how to apply the data processing algorithm developed in Section IV to the vacuum quadrature measurement QRNG. Once again we start by choosing a prior. We propose to use a noninformative prior, as in the previous example. The posterior distribution after $N$ measurements can then be calculated analytically and reads,

$$\pi(\sigma|x_1,\dots,x_N) = \frac{2}{\Gamma\!\left(\frac{N-1}{2}\right)}\left(\frac{S}{2}\right)^{\frac{N-1}{2}}\sigma^{-N}\,e^{-S/2\sigma^{2}}, \qquad (23)$$

where $S = \sum_{i=1}^{N} x_i^{2}$ and $\Gamma$ denotes the gamma function.
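As a consistency check of the analytic posterior in Eq. (23), the sketch below (ours; the sample size and sum of squares are made-up numbers) evaluates the posterior on a grid and verifies that it integrates to one:

```python
import math

def sigma_posterior(sigma, s, n):
    """Posterior pdf of the width sigma after n Gaussian samples with
    sum of squares s, assuming a flat prior on sigma, cf. Eq. (23):
    pi(sigma) = 2 (s/2)^((n-1)/2) / Gamma((n-1)/2) * sigma^-n * exp(-s/(2 sigma^2))."""
    log_c = math.log(2.0) + 0.5 * (n - 1) * math.log(0.5 * s) \
            - math.lgamma(0.5 * (n - 1))
    return math.exp(log_c - n * math.log(sigma) - 0.5 * s / sigma ** 2)

# Riemann-sum check that the pdf integrates to ~1 over a sigma grid.
s, n = 120.0, 100        # made-up data summary: 100 samples, sum x^2 = 120
dx = 0.002
total = sum(sigma_posterior(0.5 + dx * i, s, n) for i in range(1000)) * dx
print(abs(total - 1.0) < 1e-3)
```

For these values the posterior is sharply peaked near the MLE $\sqrt{S/N}$, so the grid $[0.5, 2.5)$ captures essentially all of its mass.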
The next step in our procedure is to introduce the random variables that will later be binned. Unlike the previous example, where Eq. (5) was used for that purpose, here we will rely on the Box-Muller transform CasellaBerger (). Recall that $u_1$ and $u_2$, two independent uniform(0,1) random variables, can be converted into two independent normal random variables $x_1$ and $x_2$ using the following transformation,

$$x_1 = \sigma\sqrt{-2\ln u_1}\,\cos(2\pi u_2), \qquad x_2 = \sigma\sqrt{-2\ln u_1}\,\sin(2\pi u_2). \qquad (24)$$
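A minimal implementation of the Box-Muller transform of Eq. (24) (our sketch), with a moment check on the generated normals:

```python
import math
import random

def box_muller(u1, u2, sigma=1.0):
    """Eq. (24): map two Uniform(0,1) variates to two independent
    N(0, sigma^2) variates."""
    r = sigma * math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

random.seed(5)
xs = []
for _ in range(50_000):
    u1 = 1.0 - random.random()   # in (0, 1], keeps log(u1) finite
    x1, x2 = box_muller(u1, random.random())
    xs.extend((x1, x2))

mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs)
print(abs(mean) < 0.02 and abs(var - 1.0) < 0.05)  # moments of N(0, 1)
```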
Conversely, a pair of measurement outcomes $x_1$ and $x_2$ can be converted into two random variables $u_1$ and $u_2$ on $[0,1]$:

$$u_1 = \exp\!\left(-\frac{x_1^{2} + x_2^{2}}{2\sigma^{2}}\right), \qquad u_2 = \frac{1}{2\pi}\,\arctan\!\left(\frac{x_2}{x_1}\right).$$

Since $u_2$ does not depend on the parameter $\sigma$ (it is constant for a given pair $x_1$, $x_2$), it can be immediately placed into the $k$th bin that satisfies $k/M \le u_2 < (k+1)/M$. As for $u_1$, which indeed is a function of $\sigma$, we can derive its probability distribution function using the posterior distribution in Eq. (23),

$$g(u_1|x_1,x_2) = \pi\big(\sigma(u_1)\,\big|\,x_1,\dots,x_N\big)\left|\frac{d\sigma}{du_1}\right|, \qquad \sigma(u_1) = \sqrt{\frac{x_1^{2}+x_2^{2}}{-2\ln u_1}}. \qquad (25)$$
Finally, the probability that $u_1$ falls into the $k$th bin ($k = 0,\dots,M-1$) is

$$P_k(x_1,x_2) = \frac{1}{\Gamma\!\left(\frac{N-1}{2}\right)}\left[\gamma\!\left(\frac{N-1}{2},\,z_k\right) - \gamma\!\left(\frac{N-1}{2},\,z_{k+1}\right)\right], \qquad z_k = -\frac{S\,\ln(k/M)}{x_1^{2}+x_2^{2}}, \qquad (26)$$

where $\gamma$ is the lower incomplete gamma function. Applying the acceptance/rejection test to $P_k(x_1,x_2)$ for all $k$ will convert $u_1$ into a uniformly distributed integer on $\{0,\dots,M-1\}$. Therefore, a pair of normally distributed outputs of the vacuum homodyne measurement, $x_1$ and $x_2$, converts into two uniformly distributed integer random numbers.
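The inverse transform that this section relies on can be sketched as follows (our illustration): $u_2$ is recovered from the polar angle of the pair and is manifestly independent of $\sigma$, so it can be binned directly:

```python
import math
import random

def inverse_box_muller(x1, x2, sigma=1.0):
    """Invert Eq. (24): recover the two Uniform(0,1) variates behind a
    pair of N(0, sigma^2) outcomes. u2 (the polar angle) is independent
    of sigma; u1 is not, and is the quantity treated with the posterior."""
    u1 = math.exp(-(x1 * x1 + x2 * x2) / (2.0 * sigma * sigma))
    u2 = (math.atan2(x2, x1) / (2.0 * math.pi)) % 1.0
    return u1, u2

random.seed(6)
m = 8
counts = [0] * m
for _ in range(40_000):
    _, u2 = inverse_box_muller(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0))
    counts[min(int(u2 * m), m - 1)] += 1

# u2 is uniform regardless of sigma, so its bins fill evenly.
print(max(counts) - min(counts) < 800)
```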
V.2 Experimental Results and Simulations
V.2.1 Photon Time-of-Arrival QRNG
We collected a sample containing 256,000 measurements of the time interval $\tau$ between two consecutive detection events pooser (). The raw data were filtered, and all entries below the characteristic afterpulsing time $\tau_a$ were removed from the sample to mitigate the effect of detector afterpulsing. The resulting filtered sample consisted of 221,890 measurements. We binned the filtered data into 100 bins of equal size and calculated the probability of each bin. The corresponding probability distribution is depicted in Fig. 4 with red circles. Based on the QRNG model discussed in Section V.1.1 we calculated $\hat{R}_{\mathrm{MLE}}$, the MLE for the rate parameter $R$. We used $\hat{R}_{\mathrm{MLE}}$ in conjunction with the probability density function in Eq. (14) to fit the experimental data. The result is depicted in Fig. 4 with the solid blue line. Not surprisingly, given the number of measurements, the ML curve fits the data well.
Next we applied our data processing algorithm to the filtered data. We set a high acceptance probability $p_a$ and proceeded to convert the data into a set of 4-bit random numbers (i.e. measurement results are binned among $M = 16$ bins). The number of measurements that passed the acceptance/rejection criterion, and were assigned a bin value, was 215,538 (out of 221,890). The resulting bin probability distribution is depicted in Fig. 5 using green triangles. The solid blue line corresponds to the ideal 4-bit uniform distribution, and the red crosses represent a 4-bit probability distribution obtained from the same data set using the conventional fixed-parameter binning technique with $R = \hat{R}_{\mathrm{MLE}}$. Both methods generate a visually uniform distribution. The uniformity is also confirmed by the Shannon entropy per bit of each distribution, which is close to the ideal value of 1 for both the conventional binning and our binning method.
In conventional bin assignment methods, once the distribution parameter value is estimated from a given set of measurements, the number of random bits that can be generated per single measurement is, in principle, only limited by the number of measurements NumericalNoise (). This is because the mean error (standard deviation) of the parameter estimator is ignored in conventional binning. However, if the parameter estimation error is greater than the width of the bin where the measurement result is placed, then such a bin assignment is erroneous and this measurement must be ignored and removed from the data. But this is exactly what our bin assignment method with the acceptance probability $p_a$ does. It effectively requires that the bin width be greater than about 4 standard deviations of the random variable $y_i$; if this requirement is not fulfilled, the $i$th measurement cannot be assigned a bin reliably and the measurement is discarded. Hence, in contrast to conventional binning, our approach reduces the overall number of accepted measurements. Therefore, for a given initial set of data, the number of random bits per measurement is naturally lower in our method. In other words, Bayesian updating provides a more conservative estimate of the randomness of a QRNG when compared to ad hoc binning. To illustrate this, we generated 7- and 8-bit random number distributions from the same filtered data that we used for the 4-bit distribution above, with the same acceptance probability $p_a$. The resulting distributions are depicted with green triangles in Fig. 6 and Fig. 7. As before, the solid blue line corresponds to the ideal 7(8)-bit uniform distribution and the red crosses represent the 7(8)-bit probability distribution obtained from the same size data sets using the conventional fixed-parameter binning technique with $R = \hat{R}_{\mathrm{MLE}}$. The number of measurements that passed the acceptance/rejection criterion is 172,736 (122,927) for the 7(8)-bit distribution.
We also calculated the Shannon entropies of the 7-bit distributions for the conventional binning and for our binning method, as well as the corresponding entropies in the 8-bit case. As previously suspected, we observe a drop in the entropy of the 8-bit distribution generated using our technique. This implies that the collected data can reliably be converted into random bit sequences of up to 7 bits. Note that the conventional binning method does not provide us with such a conclusion.
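The entropy-per-bit figure of merit used in this comparison can be computed as follows (our sketch; the counts are made-up):

```python
import math

def entropy_per_bit(counts):
    """Shannon entropy of an empirical bin distribution, normalized by
    the number of bits log2(m); 1.0 means perfectly uniform."""
    n = sum(counts)
    h = -sum(c / n * math.log2(c / n) for c in counts if c)
    return h / math.log2(len(counts))

print(entropy_per_bit([250, 250, 250, 250]))  # -> 1.0 (ideal 2-bit source)
print(entropy_per_bit([400, 300, 200, 100]) < 1.0)
```

A drop of this normalized entropy below 1, as observed for the 8-bit case, signals that the sample no longer supports that many bits per measurement.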
V.2.2 Vacuum Quadrature Measurement QRNG
We simulated vacuum homodyne measurements using a pseudorandom number generator. Two independent sets of 50,000 random numbers were created by sampling two zero-mean normal distributions. The first set, with variance $\sigma_q^{2}$, represents the noiseless vacuum quadrature measurement, whereas the second set, with variance $\sigma_n^{2}$, corresponds to the electronic noise. Thus, the sum of the sets simulates the vacuum homodyning based QRNG that we previously modeled using Eq. (22).
We used the data to produce sets of 6- and 7-bit random numbers implementing both the conventional (MLE based) and proposed (Bayesian) binning methods. The resulting distributions are depicted in Fig. 8 and Fig. 9. The green triangles correspond to the probability distributions generated using our technique, the red circles depict the results of the conventional MLE based binning, and the black crosses represent conventional binning with the “true” value of the parameter $\sigma$ used in the simulation.
Examining Fig. 8 and Fig. 9 visually, we observe that our method fails to produce a uniform 7-bit distribution, indicating that the maximum number of random bits per measurement outcome cannot exceed 6 for the simulated data sample. This is also confirmed by the values of the Shannon entropy in the 6- and 7-bit cases. Of course, generating a larger sample of measurements would allow a higher number of bits per measurement outcome, as was the case in the previous section. This illustrates the interplay between the number of measurements in a sample, the acceptance probability, and the number of random bits that can be extracted from the sample.
VI Summary
In this manuscript we have demonstrated a new binning technique for QRNGs, as well as a formalized approach to characterizing traditional binning methods. In particular, ad hoc binning approaches are shown to result in possible bias when the model of the physical QRNG system is not taken into account. Using Bayesian hypothesis updating, a physical model can be used to quickly characterize experimental data. This has implications for new types of quantum statistical tests for randomness, potentially more accessible than loophole-free Bell inequality violation tests.
Acknowledgements.
P. L. would like to thank Bing Qi, Ryan Bennink and Travis Humble for useful discussions. This work was performed at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725.
References
 (1) A. J. Menezes, P. C. van Oorschot and S. A. Vanstone, Handbook of Applied Cryptography, (CRC Press, London, 1997)
 (2) J. H. Davenport, in Papers from the International Symposium on Symbolic and Algebraic Computation (ISSAC), pp. 123–129 (1992).
 (3) “A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications”, NIST Special Publication 800-22.
 (4) see http://www.stat.fsu.edu/pub/diehard/
 (5) J. Walker, Ent tests, Fourmilab.ch. http://www.fourmilab.ch/random/
 (6) P. J. Bustard, D. Moffatt, R. Lausten, G. Wu, I. A. Walmsley, and B. J. Sussman, Opt. Exp. 19, 25173 (2011).
 (7) N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).
 (8) H. Schmidt, “Quantum mechanical random number generator,” J. Appl. Phys. 41, 462–468 (1970).
 (9) T. Jennewein, U. Achleitner, G. Weihs, H. Weinfurter, and A. Zeilinger, “A fast and compact quantum random number generator,” Rev. Sci. Instrum. 71, 1675–1680 (2000).
 (10) A. Stefanov, N. Gisin, L. Guinnard, and H. Zbinden, “Optical quantum random number generator,” J. Mod. Opt. 47, 595–598 (2000).
 (11) M. Stipčević and B. Medved Rogina, “Quantum random number generator based on photonic emission in semiconductors,” Rev. Sci. Instrum. 78, 045104 (2007).
 (12) M. A. Wayne and P. G. Kwiat, “Low-bias high-speed quantum random number generator via shaped optical pulses,” Optics Express 18, 9351 (2010).
 (13) J. F. Dynes, Z. L. Yuan, A. W. Sharpe, and A. J. Shields, Appl. Phys. Lett. 93, 031109 (2008)
 (14) M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H. J. Rahn, and O. Benson, Appl. Phys. Lett. 98, 171105 (2011)
 (15) T. Symul, S. M. Assad, and P. K. Lam, Appl. Phys. Lett. 98, 231103 (2011)
 (16) C. Gabriel, C. Wittmann, D. Sych, R. Dong, W. Mauerer, U. L. Andersen, C. Marquardt, and G. Leuchs, Nat. Phot. 4, 711 (2010)
 (17) S. Pironio, A. Acín, S. Massar, A. Boyer de la Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, Nature 464, 1021–1024 (2010)
 (18) Feihu Xu, Bing Qi, Xiongfeng Ma, He Xu, Haoxuan Zheng, and Hoi-Kwong Lo, Opt. Exp. 20, 12366 (2012)
 (19) G. Casella and R. L. Berger, Statistical inference (Thomson Learning, 2002)
 (20) R. C. Pooser, P. G. Evans, T. S. Humble, IEEE Photonics Society Summer Topical Meeting Series, 147148 (2013)
 (21) Increasing the number of bins for a fixed number of measurements also increases fluctuations in the probability of occurrence of a given bin, ultimately reducing the entropy of the distribution to zero.