Thermodynamics of Random Number Generation
We analyze the thermodynamic costs of the three main approaches to generating random numbers via the recently introduced Information Processing Second Law. Given access to a specified source of randomness, a random number generator (RNG) produces samples from a desired target probability distribution. This differs from pseudorandom number generators (PRNG) that use wholly deterministic algorithms and from true random number generators (TRNG) in which the randomness source is a physical system. For each class, we analyze the thermodynamics of generators based on algorithms implemented as finite-state machines, as these allow for direct bounds on the required physical resources. This establishes bounds on heat dissipation and work consumption during the operation of three main classes of RNG algorithms—including those of von Neumann, Knuth and Yao, and Roche and Hoshi—and for PRNG methods. We introduce a general TRNG and determine its thermodynamic costs exactly for arbitrary target distributions. The results highlight the significant differences between the three main approaches to random number generation: One is work producing, one is work consuming, and the other is potentially dissipation neutral. Notably, TRNGs can both generate random numbers and convert thermal energy to stored work. These thermodynamic costs on information creation complement Landauer’s limit on the irreducible costs of information destruction.
PACS numbers: 05.70.Ln, 89.70.-a, 05.20.-y, 02.50.-r
Random number generation is an essential tool these days in simulation and analysis. Applications range from statistical sampling, numerical simulation, cryptography, program validation, and numerical analysis to machine learning and decision making in games and in politics. More practically, a significant fraction of all the simulations done in physics employ random numbers to a greater or lesser extent.
Random number generation has a long history, full of deep design challenges and littered with pitfalls. Initially, printed tables of random digits were used for scientific work, first documented in 1927. A number of analog physical systems, such as reverse-biased Zener diodes or even Lava® Lamps, were also employed as sources of randomness; this is the class of so-called noise generators. One of the first digital machines that generated random numbers was built in 1939. With the advent of digital computers, analog methods fell out of favor, displaced by a growing concentration on arithmetical methods that, running on deterministic digital computers, offered flexibility and reproducibility. An early popular approach to digital generation was the linear congruential method introduced in 1950. Since then many new arithmetical methods have been introduced [15, 16, 17, 18, 19, 20].
The recurrent problem in all of these strategies is demonstrating that the numbers generated are, in fact, random. This concern eventually led to Chaitin's and Kolmogorov's attempts to find an algorithmic foundation for probability theory [21, 22, 23, 24, 25, 26]. Their answer was that an object is random if it cannot be compressed: random objects are their own minimal description. The theory exacts a heavy price, though: identifying randomness is uncomputable.
Despite the formal challenges, many physical systems appear to behave randomly. Unstable nuclear decay processes obey Poisson statistics, thermal noise obeys Gaussian statistics, cosmic background radiation exhibits a probabilistically fluctuating temperature field, quantum state measurement leads to stochastic outcomes [30, 31, 32], and fluid turbulence is governed by an underlying chaotic dynamic. When such physical systems are used to generate random numbers one speaks of true random number generation.
Generating random numbers without access to a source of randomness—that is, using arithmetical methods on a deterministic finite-state machine, whose logic is physically isolated—is referred to as pseudorandom number generation, since the numbers must eventually repeat and so, in principle, are not only not random, but are exactly predictable [35, 36]. John von Neumann was rather decided about the pseudorandom distinction: “Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin”. Nonetheless, these and related methods dominate today and perform well in many applications.
Sidestepping this concern by assuming a given source of randomness, random number generation (RNG) is a complementary problem about the transformation of randomness: Given a specific randomness source, whose statistics are somehow inadequate, how can we convert it to a source that meets our needs? And, relatedly, how efficiently can this be done?
Our interest is not algorithmic efficiency, but thermodynamic efficiency, since any practical generation of random numbers must be physically embedded. What are the energetic costs—energy dissipation and power inputs—to harvest a given amount of information? This is a question, at root, about a particular kind of information processing—viz., information creation—and the demands it makes on its physical substrate. In this light, it should be seen as exactly complementary to Landauer’s well known limit on the thermodynamic costs of information destruction (or erasure) [39, 40].
Fortunately, there has been tremendous progress bridging information processing and the nonequilibrium thermodynamics required to support it [41, 42]. This information thermodynamics addresses processes that range from the very small scale, such as the operation of nanoscale devices and molecular dynamics, to the cosmologically large, such as the character and evolution of black holes [44, 45]. Recent technological innovations have allowed many of the theoretical advances to be experimentally verified [46, 47]. The current state of knowledge in this rapidly evolving arena is reviewed in Refs. [48, 49, 50]. Here, we use information thermodynamics to describe the physical limits on random number generation. Though the latter is often treated as a purely abstract mathematical subject, practicing scientists and engineers know how essential random number generation is in their daily work. The following explores the underlying necessary thermodynamic resources.
First, Sec. II addresses random number generation, analyzing the thermodynamics of three algorithms, and discusses physical implementations. Second, removing the requirement of an input randomness source, Sec. III turns to analyze pseudorandom number generation and its costs. Third, Sec. IV analyzes the thermodynamics of true random number generation. Finally, the conclusion compares the RNG strategies and their costs and suggests future problems.
II. Random Number Generation
Take a fair coin as our source of randomness. (Experiments reveal this assumption is difficult if not impossible to satisfy. Worse, if one takes the full dynamics into account, a flipped physical coin is quite predictable.) Each flip results in a Head or a Tail with probability 1/2 each. However, we need a coin that 1/4 of the time generates Heads and 3/4 of the time Tails. Can the series of fair coin flips be transformed? One strategy is to flip the coin twice. If the result is Head-Head, we report Heads. Else, we report Tails. The reported sequence is equivalent to flipping a coin with a bias of 1/4 for Heads and 3/4 for Tails.
Each time we ask for a sample from the biased distribution we must flip the fair coin twice. Can we do better? The answer is yes. If the first flip results in a Tail then, independent of the second flip's result, we should report a Tail. We can take advantage of this by slightly modifying the original strategy: if the first flip results in a Tail, stop. Do not flip a second time; simply report a Tail and start over. With this modification, 1/2 of the time we need a single flip and 1/2 of the time we need two flips. And so, on average we need 3/2 flips to generate the distribution of interest. This strategy reduces the use of the fair-coin "resource" by 25%.
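The two strategies are simple enough to check empirically. The sketch below simulates both, modeling a fair flip as `rng.random() < 0.5`; the function and variable names are ours, introduced only for illustration:

```python
import random

def naive_sample(rng):
    """Always two flips: Head-Head -> Heads, anything else -> Tails."""
    first, second = rng.random() < 0.5, rng.random() < 0.5
    return ("H" if first and second else "T"), 2

def early_stop_sample(rng):
    """Stop after one flip whenever the first flip is a Tail."""
    if rng.random() < 0.5:                       # first flip: Tail
        return "T", 1
    return ("H" if rng.random() < 0.5 else "T"), 2

def stats(sampler, n=200_000, seed=0):
    """Return (fraction of Heads, average flips per sample)."""
    rng = random.Random(seed)
    outs, flips = zip(*(sampler(rng) for _ in range(n)))
    return outs.count("H") / n, sum(flips) / n

print(stats(naive_sample))       # ~(0.25, 2.0)
print(stats(early_stop_sample))  # ~(0.25, 1.5)
```

Both samplers produce the same 1/4–3/4 statistics, but the early-stopping version consumes 1.5 fair flips per output on average, the promised 25% saving.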
Let's generalize. Assume we have access to a source of randomness that generates samples of a given distribution over a discrete alphabet. We want an algorithm that generates another, target distribution from samples of the given source. (Generally, the source of randomness can be known or unknown to us.) In this, we ask for a single correct sample from the target distribution. This is the immediate random number generation problem: Find an algorithm that minimizes the expected number of samples of the given source needed to generate one sample of the target. (A companion is the batch random number generation problem: instead of a single sample, generate a large number of inputs and outputs. The challenge there is to find an algorithm minimizing the ratio of the number of inputs to the number of outputs [52, 53, 54].)
The goal in the following is to analyze the thermodynamic costs when these algorithmically efficient algorithms are implemented in a physical substrate. This question parallels that posed by Landauer [39, 40]: What is the minimum thermodynamic cost to erase a bit of information? That is, rather than destroying information, we analyze the costs of creating information with desired statistical properties given a source of randomness.
Bounding the Energetics:
The machine implementing the algorithm transforms symbols on an input string sampled from an information reservoir to an output symbol string and an exhaust string, using a finite-state machine that interacts with heat and work reservoirs; see Fig. 1. The input Randomness Reservoir is the given, specified source of randomness available to the RNG. The states and transition structure of the finite-state machine implement the RNG algorithm. The output string is then the samples of distribution of interest. The exhaust string is included to preserve state space.
Here, we assume inputs are independent, identically distributed (IID) samples from the randomness reservoir over a discrete alphabet. The output includes two strings: one with samples from the target distribution and another, the exhaust string. At each step one symbol enters the machine. After analyzing that symbol and, depending on its value and those of previous input symbols, the machine either writes a symbol to the output string or to the exhaust string. The machine's state updates at each step after an input symbol is read, and the number of output symbols written after a given number of inputs need not equal the number of inputs. The number of input symbols read by the machine does, however, equal the total number of symbols written to the output and exhaust strings combined. To guarantee that the exhaust makes no thermodynamic contribution, all symbols written to it are the same, fixed symbol. Without loss of generality we assume the input and output sample spaces coincide.
The machine also interacts with an environment consisting of a Thermal Reservoir at temperature T and a Work Reservoir. The thermal reservoir is that part of the environment which contributes or absorbs heat, exchanging thermodynamic entropy by changing its state. The work reservoir is that part which contributes or absorbs energy by changing its state, but without an exchange of entropy. All transformations are performed isothermally at temperature T. As in Fig. 1, we denote heat that flows to the thermal reservoir by Q. To emphasize, Q is positive if heat flows into the thermal reservoir. Similarly, W denotes the work done on the machine, not the work done by the machine. (Several recent works [55, 56, 57] use the same convention for Q but define W as the work done by the machine. This makes sense in those settings, since the machine there is intended to do work.)
After a number of steps the machine has read that many input symbols and written the same total number of output and exhaust symbols. The thermodynamic entropy change of the entire system is [57, App. A]:
By definition, a heat bath is not correlated with other subsystems, in particular, with the other portions of the environment. As a result, both mutual informations vanish. The remaining term is the heat bath's entropy change, which can be written in terms of the dissipated heat:
Since by assumption the entire system is closed, the Second Law of Thermodynamics says that the total entropy change is nonnegative. Using these relations gives:
To work with rates we divide both sides by the number of steps and decompose the first joint entropy:
Appealing to basic information identities, a number of the righthand terms vanish, simplifying the overall bound. First, since the Shannon entropy of a random variable is bounded by the logarithm of the size of its state space, we have for the ratchet's states:
Second, recalling that the two-variable mutual information is nonnegative and bounded above by the Shannon entropy of either individual random variable, in the limit of infinitely many steps we can write:
A similar argument applies to the remaining mutual-information term. As a result, we have:
We can also rewrite the joint entropy as:
Since the exhaust string consists of a single repeated symbol, its entropy vanishes; the mutual information involving it, being bounded above by that entropy, also vanishes. This leads to:
This simplifies the lower bound on the heat to:
Rewriting the righthand terms, we have:
These lead to:
Since the inputs are IID, the input chain's internal-correlation term vanishes. Finally, the remaining mutual information is bounded above by the corresponding Shannon entropy. Using these we have:
This can be written as:
In the limit of infinitely many steps, the per-symbol entropies converge to the randomness reservoir's Shannon entropy rate and to the output's entropy rate, respectively. The tapes' relative velocity also converges, and we denote its limit by L, the expected number of input symbols consumed per output symbol. As a result, we have the rate of heat flow from the RNG machine to the heat bath:
Since the machine is finite state, its energy is bounded. In turn, this means the average energy entering the machine, above and beyond the constant amount that can be stored, is dissipated as heat. In other words, the average work rate and average heat dissipation rate per input are equal: .
This already says something interesting. To generate one random number, the average work done on the machine and the average heat dissipated by the machine are directly related. More to the point, denoting the per-sample lower bound on dissipated heat leads immediately to a Second Law adapted to RNG thermodynamics:
It can be shown that the input-side entropy term is always larger than or equal to the output's entropy rate, and so the lower bound is nonnegative. (This is not generally true for the setup shown in Fig. 1 interpreted most broadly. For computational tasks more general than RNG, the bound need not be positive.) This tells us that RNG algorithms are always heat dissipative or, in other words, work-consuming processes. Random numbers generated by RNGs cost energy. This new RNG Second Law allows the machine to take whatever time it needs to respond to and process an input. The generalization moves the information ratchet architecture one step closer to that of general Turing machines, which also take arbitrary time to produce an output. We now apply this generalized Second Law to various physically embedded RNG algorithms.
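The bound's algebra can be sanity-checked numerically. The form used below, a dissipated heat of at least k_B T ln 2 times (L times the input entropy rate minus the output entropy rate) per sample, is our reading of the displayed equations; the numbers for the opening fair-to-biased example follow from it:

```python
from math import log

def H(ps):
    """Shannon entropy in bits."""
    return -sum(p * log(p, 2) for p in ps if p > 0)

kT_ln2 = 1.0  # work in units of k_B T ln 2

def q_min(L, h_in, h_out):
    # Assumed form of the RNG Second Law's per-sample lower bound.
    return kT_ln2 * (L * h_in - h_out)

# Fair coin -> (1/4, 3/4) biased coin via the early-stopping algorithm:
L = 1.5  # expected fair flips per output sample
bound = q_min(L, H([0.5, 0.5]), H([0.25, 0.75]))
print(round(bound, 3))  # ≈ 0.689 (in units of k_B T ln 2): strictly positive
```

The positive value illustrates the general claim: even this simple bias-conversion algorithm must dissipate heat.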
von Neumann RNG:
Consider the case where the randomness resource is a biased coin with unknown probability of Heads. How can we use this imperfect source to generate fair (unbiased) coin tosses using the minimum number of samples from the input? This problem was first posed by von Neumann. The answer is simple but clever. What we need is a symmetry to undo the source's bias asymmetry. The strategy is to flip the biased coin twice. If the result is Heads-Tails, we report a Head; if it is Tails-Heads, we report Tails. In either of the two other cases, we discard the flips and simply repeat from the beginning. A moment's reflection reveals that any source of randomness generating independent, identically distributed (IID) samples can be used in this way to produce a statistically uniform sample, even if we do not know the source's bias.
Note that we must flip the biased coin at least twice, and perhaps many more times, to generate an output. More troublesome, there is no bound on how many times we must flip to get a useful output.
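The procedure is short enough to state as code. A minimal sketch, with the convention (ours, for illustration) that Heads-Tails maps to 1 and Tails-Heads to 0:

```python
import random

def von_neumann(flip):
    """Draw one unbiased bit from a biased IID bit source `flip`.

    Keeps flipping in pairs; unequal pairs decide the output,
    equal pairs are discarded, so the flip count is unbounded."""
    while True:
        a, b = flip(), flip()
        if a != b:          # Heads-Tails -> 1, Tails-Heads -> 0
            return a

rng = random.Random(1)
biased = lambda: rng.random() < 0.8   # a coin of unknown bias (here 0.8)
n = 100_000
ones = sum(von_neumann(biased) for _ in range(n))
print(ones / n)   # ~0.5, regardless of the source bias
```

Because unequal pairs occur with equal probability in either order, the output is unbiased whatever the source's bias; the cost is the random, unbounded number of discarded pairs.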
So, what are the thermodynamic costs of this RNG scheme? Let p denote the source's Heads probability. With probability 2p(1-p) the first two flips lead to an output; with probability [1-2p(1-p)]·2p(1-p) the first two flips do not, but the next two flips do; and so on. The expected number of flips to generate a fair coin output is 1/[p(1-p)]. Using Eq. (2) this costs:
Figure 2 shows the bound versus source bias p. It is always positive, with a minimum at p = 1/2.
This minimum means that generating a fair coin from a fair coin has a heat cost of k_B T ln 2. At first glance, this seems wrong: simply pass the fair coin through. The reason it is correct is that the von Neumann RNG does not know the input bias and, in particular, does not know that it is fair. In turn, this means we may flip the coin many times, depending on the results of the flips, costing energy.
Notably, the bound diverges as the bias approaches zero or one, since the RNG must flip an increasingly large number of times. As with all RNG methods, the positive lower bound implies that generating an unbiased sample via the von Neumann method is a heat dissipative process. We must put energy in to get randomness out.
Consider the randomness extractor, a variation on the von Neumann RNG at extreme bias, that uses a weakly random physical source but still generates a highly random output. (Examples of weakly random sources include radioactive decay, thermal noise, shot noise, radio noise, avalanche noise in Zener diodes, and the like. We return to physical randomness sources shortly.) For a weakly random source, the bound in Eq. (3) simplifies, showing that the heat dissipation diverges at least logarithmically in the inverse bias in the limit of vanishing bias.
Knuth and Yao RNG:
Consider a scenario opposite to von Neumann's, where we have a fair coin and can flip it an unlimited number of times. How can we use it to generate samples from any desired distribution over a finite alphabet using the minimum number of samples from the input? Knuth and Yao were among the first to attempt an answer. They proposed the discrete distribution generating tree (DDG-tree) algorithm.
The algorithm operates as follows. Say the target distribution's probabilities are ordered from large to small. Their partial sums, starting from zero, partition the unit interval into subintervals whose lengths are the target probabilities. Now, start flipping the coin, reading the successive outcomes as the binary digits of a number in the unit interval. It can easily be shown that this number is uniformly distributed over the unit interval. At any step, after flipping the coin, the digits seen so far confine the number to a binary subinterval. We then check whether there is a target subinterval that contains it entirely, that is, whether the condition of Eq. (4) below holds:
If there is, the generated output is the corresponding symbol. If not, we flip the coin one or more further times until the condition in Eq. (4) is satisfied and report the corresponding symbol as the output.
This turns on realizing that once the condition is satisfied, the values of future flips do not matter, since the number always remains in the same target subinterval. Recalling that the number is uniformly distributed over the unit interval establishes that the algorithm generates the desired distribution. The algorithm can also be interpreted as walking a binary tree (for details see Ref. ), a view related to arithmetic coding. Noting that the fair-coin input has an entropy rate of one bit per flip and using Eq. (1), the heat dissipation is bounded by:
Now, let's determine the expected number of flips for the Knuth-Yao RNG. Ref.  showed that:
More modern proofs are found in Refs.  and . Generally, for a given target distribution, the Knuth-Yao RNG's expected number of flips can be estimated more accurately; however, it cannot be calculated in closed form, only bounded. Notably, there are distributions for which it can be calculated exactly. These include the dyadic distributions, whose probabilities are each one over an integer power of two. For these target distributions, the DDG-tree RNG's expected number of flips exactly equals the target distribution's Shannon entropy.
Equations (2) and (6) lead one to conclude that the heat dissipation bound for generating one random sample is strictly positive, except for the dyadic distributions, for which the bound vanishes and the actual dissipation may be zero or positive. Embedding the DDG-tree RNG into a physical machine, this means one must inject work to generate a random sample. The actual amount of work depends on the given target distribution.
Let us look at a particular example. Consider the case in which our source of randomness is a fair coin, with probability one-half on each of its two symbols, and we want to generate a target distribution over three symbols. Equation (6) tells us that the expected number of flips must exceed the target's Shannon entropy. The DDG-tree method leads to the most efficient RNG. Table 1 gives the mapping from binary inputs to three-symbol outputs, from which the expected number of flips can be calculated; it exceeds the entropy by less than two bits, consistent with Eq. (6). Now, using Eq. (5), we can bound the dissipated heat.
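The interval form of the algorithm described above translates directly into code. A minimal sketch, refining the binary interval `[x, x + 2**-n)` one fair flip at a time (the explicit DDG-tree walk is equivalent; probabilities and names here are illustrative):

```python
import random

def ky_sample(probs, flip):
    """Knuth-Yao-style sampling from `probs` (sorted large to small):
    flip until the binary interval fits inside one target subinterval."""
    betas = [0.0]
    for p in probs:
        betas.append(betas[-1] + p)
    betas[-1] = 1.0          # guard against float rounding at the top
    x, width = 0.0, 1.0
    while True:
        width /= 2
        if flip():
            x += width
        # emit symbol k once [x, x + width) lies inside subinterval k
        for k in range(len(probs)):
            if betas[k] <= x and x + width <= betas[k + 1]:
                return k

rng = random.Random(2)
probs = [0.5, 0.3, 0.2]
n = 100_000
counts = [0] * len(probs)
for _ in range(n):
    counts[ky_sample(probs, lambda: rng.random() < 0.5)] += 1
print([round(c / n, 2) for c in counts])   # ~[0.5, 0.3, 0.2]
```

For a dyadic target such as (1/2, 1/4, 1/4) the loop terminates after exactly as many flips, on average, as the target's Shannon entropy, matching the exact case noted above.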
Roche and Hoshi RNG:
A more sophisticated and more general RNG problem was posed by Roche in 1991: What if we have a so-called p-coin, which generates a known but arbitrary distribution, and we want to use it to generate a different target distribution? Roche's algorithm was probabilistic. And so, since we assume the only source of randomness to which we have access is the input samples themselves, Roche's approach will not be discussed here.
However, in 1995 Hoshi introduced a deterministic algorithm from which we can determine the thermodynamic cost of this general RNG problem. Assume both the source and target probabilities are ordered from large to small. Their partial sums, starting from zero, partition the unit interval into two sets of subintervals whose lengths are the source and target probabilities, respectively. Consider now the operator that takes two arguments—an interval and an integer—and outputs another interval:
Hoshi's algorithm works as follows. Start with the whole unit interval as the current interval and a counter at zero. Flip the p-coin and call the result k. Increase the counter by one and update the current interval by applying the operator to it and k. If the current interval now lies entirely within one of the target subintervals, report the corresponding symbol; else flip the p-coin again.
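A runnable sketch of this interval refinement follows. It assumes, as our reading of the operator above, that each p-coin outcome rescales the source's corresponding subinterval into the current interval, the refinement familiar from arithmetic coding; distributions and names are illustrative:

```python
import random

def cumsums(ps):
    out = [0.0]
    for p in ps:
        out.append(out[-1] + p)
    out[-1] = 1.0            # guard against float rounding at the top
    return out

def hoshi_sample(target, source, draw):
    """Refine the unit interval with p-coin flips until it fits
    inside one target subinterval; return that symbol's index."""
    alphas, gammas = cumsums(target), cumsums(source)
    lo, hi = 0.0, 1.0
    while True:
        k = draw()                       # one p-coin flip
        lo, hi = (lo + (hi - lo) * gammas[k],
                  lo + (hi - lo) * gammas[k + 1])
        for j in range(len(target)):     # inside a target cell yet?
            if alphas[j] <= lo and hi <= alphas[j + 1]:
                return j

rng = random.Random(3)
source = [2 / 3, 1 / 3]                  # the given p-coin
target = [0.6, 0.4]                      # the desired distribution
draw = lambda: 0 if rng.random() < source[0] else 1
n = 100_000
counts = [0, 0]
for _ in range(n):
    counts[hoshi_sample(target, source, draw)] += 1
print([round(c / n, 2) for c in counts])   # ~[0.6, 0.4]
```

Since each refinement picks a subinterval with probability equal to its relative length, the limiting point is uniform on the unit interval, which is why the reported symbol carries the target statistics.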
Han and Hoshi showed that the expected number of p-coin flips obeys:
Using this and Eq. (2), we see that the heat dissipation per sample is always positive, except for measure-zero cases in which the dissipation may or may not vanish. This means that, for essentially all input and output distributions, one must do work on the system to generate the target sample. Again, using this result and Eq. (2), one can exhibit input and output distributions whose heat dissipation is at least correspondingly large.
RNG Physical Implementations:
Recall the first RNG we described: the input distribution is a fair coin and the output target distribution is a coin with a 1/4–3/4 bias. Table 2 summarizes the optimal algorithm. Generally, optimal algorithms require the input and output lengths to differ, the input being longer than or equal to the output.
This is the main challenge in designing physical implementations. Note that for some inputs, after they are read, the machine should wait for additional inputs; only once it receives the right input does it deterministically produce the output. For example, in our problem, if a Tail is read first, the output is an immediate Tail. However, if a Head is read, the machine should wait for the next input and only then generate an output. How are these delays implemented? Let's explore a chemical implementation of the algorithm.
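Before turning to chemistry, the waiting behavior itself can be sketched as a streaming finite-state transducer. This follows the early-stopping strategy described earlier for the fair-to-(1/4, 3/4) example; the symbol names and encoding are ours and need not match Table 2 exactly:

```python
def make_rng_machine():
    """Finite-state transducer for the fair -> (1/4, 3/4) example.
    Feeding a symbol either emits an output now or returns None,
    signaling that the machine is waiting for more input."""
    state = "START"
    def step(symbol):
        nonlocal state
        if state == "START":
            if symbol == "T":        # first flip Tails: answer immediately
                return "T"
            state = "WAIT"           # first flip Heads: delay the output
            return None
        state = "START"              # second flip resolves the sample
        return "H" if symbol == "H" else "T"
    return step

machine = make_rng_machine()
outputs = [machine(s) for s in "THHHT"]
print(outputs)   # ['T', None, 'H', None, 'T']
```

The `None` entries are exactly the delays at issue: the machine has consumed an input but is not yet entitled to produce an output.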
Chemical reaction networks (CRNs) [64, 65] have been widely considered as substrates for physical information processing and as a programming model for engineering artificial systems [67, 68]. Moreover, chemical implementations of CRNs have been studied in detail [69, 70]. CRNs are also efficiently Turing-universal, a power that makes them appealing. One of their main applications is deterministic function computation [72, 73], which is what our RNGs need.
Consider five particle types—two that can be inputs to or outputs from the machine, one that can only be an output, and two "machine" particles—and a machine consisting of a box that can contain them. The machine particles always stay in the machine's box and are in contact with a thermal reservoir. Figure 3 shows that the left wall is designed so that only the input particles can enter, but no particles can exit; the right wall is designed so that only the output particles can exit.
To get started, assume there is only a single machine particle in the box. At regular intervals a new input particle enters from the left. Now, the particles react in the following way:

Each chemical reaction is taken to complete within one such interval. With this assumption it is not hard to show that if the input particles arrive with the fair-coin distribution, then the output particles leave with the desired 1/4 and 3/4 probabilities, respectively. Thus, this CRN gives a physical implementation of our original RNG.
Using Eq. (2) we can put a lower bound on the average heat dissipation per output. Since deriving the bound does not invoke any constraints on the input or output particles, it is a universal lower bound over all possible reaction energetics. That is, if we find any four particles (molecules) obeying the four reactions above, then the bound holds. Naturally, depending on the reactions' energetics, the CRN-RNG's dissipation can be close to or far from the bound. Since CRNs are Turing-universal, they can implement all of the RNGs studied up to this point. The details of designing CRNs for a given RNG algorithm can be gleaned from the general procedures given in Ref. .
III. Pseudorandom Number Generation
So far, we abstained from von Neumann's sin by assuming a source of randomness—a fair coin, a biased coin, or a general IID process. Nevertheless, modern digital computers generate random numbers using purely deterministic arithmetical methods. This is pseudorandom number generation (PRNG). Can these methods be implemented by finite-state machines? Most certainly. The effective memory in these machines is very large, with the algorithms typically allowing the user to specify the amount of state information used. Indeed, they encourage the use of large amounts of state information, promising better-quality random numbers in the sense that the recurrence time (generator period) is astronomically large. Our concern, though, is not analyzing their implementations; see Ref.  for a discussion of design methods. We can simply assume they can be implemented or, at least, that implementations exist, such as the Unix C-library random() function just cited.
The PRNG setting forces us to forgo access to a source of randomness. The input randomness reservoir is not random at all: it is simply a pulse that indicates an output should be generated. Thus, the input entropy rate vanishes, and one input pulse is consumed per output. In our analysis, we can take the outputs to be samples of any desired IID process.
Even though a PRNG is supposed to generate random numbers, in reality, after its seed is set [35, 36], it generates an exactly periodic sequence of outputs. Thus, as just noted, a good PRNG algorithm's period should be long relative to the sample size of interest. Also, the sample statistics should be close to those of the desired distribution. This means that the entropy rate estimated from a sample should be close to the Shannon entropy rate of the target distribution. In reality, however, the true entropy rate vanishes, since it is a measure over infinite-length samples, which here are completely nonrandom due to their periodicity.
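The periodicity at the heart of this argument is easy to exhibit with a minimal linear congruential generator. The parameters below are illustrative, chosen to satisfy the Hull-Dobell full-period conditions (increment odd, multiplier congruent to 1 mod 4, power-of-two modulus):

```python
def lcg(seed, a=101, c=12345, m=2**16):
    """Minimal linear congruential generator: x -> (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# The output must eventually repeat: find the period by cycle detection.
gen = lcg(0)
first = next(gen)
period = 1
for x in gen:
    if x == first:
        break
    period += 1
print(period)   # 65536: the full modulus, but still exactly periodic
```

Over any window shorter than the period the output looks statistically rich, yet over infinite length the sequence is a fixed cycle, which is exactly why the asymptotic entropy rate vanishes.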
This is a key point. When we use PRNGs we are concerned only with samples short compared to the PRNG period. However, when determining PRNG thermodynamics we average over asymptotically large samples. As a result, the output entropy rate vanishes and the lower bound on dissipated heat is zero. And so, PRNGs are potentially heat dissipative processes. Depending on the PRNG algorithm, it may or may not be possible to find machinery that achieves the (zero) lower bound. To date, no such PRNG implementations have been introduced.
Indeed, the relevant energetic-cost bounds are dominated by the number of logically irreversible computation steps in the PRNG algorithm, following Landauer. This number, judging from a perusal of open-source code for modern PRNGs, is quite high. However, pursuing this takes us far afield, given our focus on input-output thermodynamic processing costs.
IV. True Random Number Generation
Consider situations in which no random information source is explicitly given, as with RNGs, and none is approximated algorithmically, as with PRNGs. This places us in the domain of true random number generators (TRNGs): randomness is naturally embedded in their substrate physics. For example, a spin one-half quantum particle prepared along one spatial axis, but measured along an orthogonal axis, gives its two outcomes with equal probabilities. More sophisticated random stochastic process generators employing quantum physics have been introduced recently [75, 76, 77, 78, 79, 80]. TRNGs have also been based on chaotic lasers [81, 82], metastability in electronic circuits [83, 84], and electronic noise. What thermodynamic resources do these TRNGs require? We address this here via one general construction.
True General-Distribution Generator:
Consider the general case in which we want to generate samples from an arbitrary probability distribution. Each time we need a random sample, we feed the machine a fixed input symbol and the TRNG returns a random sample. The input is thus a long sequence of identical symbols and, as a consequence, its entropy rate vanishes, while one input is consumed per output and the output entropy rate is the target's Shannon entropy. Equation (2) then bounds the dissipated heat and input work from below by minus the target's Shannon entropy, in units of k_B T ln 2. Notice that this lower bound is a negative quantity. This is something that, as we showed above, can never happen for RNG algorithms, since they all are heat-dissipation positive. Of course, this is only a lower bound, and the actual heat may still be positive. However, a negative bound opens the door to producing work from heat instead of turning work into dissipated heat—a functioning not possible for RNG algorithms.
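Under this reading of the bound (zero input entropy rate, one input per output), the lower bound is simply minus the target's Shannon entropy in units of k_B T ln 2, easy to evaluate for a few target distributions:

```python
from math import log

def H(ps):
    """Shannon entropy in bits."""
    return -sum(p * log(p, 2) for p in ps if p > 0)

# TRNG: input entropy rate is zero and one input pulse is consumed per
# output, so the assumed bound reduces to -H(target) in k_B T ln 2 units.
for target in ([0.5, 0.5], [0.25, 0.75], [0.5, 0.25, 0.25]):
    print(round(-H(target), 3))   # -1.0, -0.811, -1.5
```

The more random the target, the more negative the bound, hinting that the most unpredictable targets offer the largest potential work harvest.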
Figure 4 shows one example of a physical implementation. The machine has a single state, and the inputs and outputs come from a common symbol set, all with zero energies. The system is designed so that one distinguished joint machine-symbol state has zero energy and each of the other joint states has an energy set by the corresponding target probability. Recall that every time we need a random sample we feed one input symbol to the TRNG machine. Feeding it has no energy cost, since the total energy is unchanged. Then, putting the system into contact with a thermal reservoir, we have stochastic transitions between the zero-energy joint state and the other joint states. Tuning the transition probabilities over a fixed time interval and assuming detailed balance, all the remaining transition probabilities are specified by the target probabilities.
The design has the system start in the zero-energy joint state; after the fixed time it has transitioned to each alternative joint state with the corresponding target probability. From this the average heat transferred from the system to the thermal reservoir follows. Now, independent of the current state, we decouple the machine state from the target state. The average work we must pump into the system for this to occur is:
This completes the TRNG specification. In summary, the average heat and the average work are the same.
Substituting, we have:
which is consistent with the lower bound given above. Though, as noted there, a negative lower bound does not by itself mean that we can actually construct a machine with negative dissipated heat, here, in fact, is one example of such a machine. Negative heat leads to an important physical consequence: the operation of a TRNG is a heat-consuming and work-producing process, in contrast to the operation of an RNG. This means not only are the random numbers we need being generated, but we also have an engine that absorbs heat from the thermal reservoir and converts it to work. Of course, the amount of work depends on the distribution of interest. Thus, TRNGs are a potential win-win strategy. Imagine that, at the end of charging a battery, one also had a fresh store of random numbers.
Let's pursue this further. For a given target distribution over a finite number of elements, we can operate as many such TRNG machines as there are elements, all generating the distribution of interest: any one of the elements can be assigned to the self-transition, which gives us that freedom in the design. After choosing one, all the others are uniquely assigned, from largest to smallest. Now, if our goal is to pump in less heat per sample, which of these machines is the most efficient? Looking closely at Eq. (7), we see that the amount of heat needed by a machine is set by the probability assigned to the self-transition. And so, over all the machines, the one assigning the largest probability is the minimum-heat consumer and the one assigning the smallest is the maximum-work producer.
Naturally, there are alternatives to the thermodynamic transformations used in Fig. 4. One can use a method based on spontaneous irreversible relaxation. Or, one can use the approach of changing the Hamiltonian instantaneously and then changing it back quasistatically and isothermally.
Let’s close with a challenge. Now that a machine with negative $\langle W \rangle$ has been identified, we can go further and ask whether there is a machine that actually achieves the lower bound $\langle W \rangle = -k_\text{B} T \ln 2 \, \text{H}[X]$. If the answer is yes, then what is that machine? We leave the answer for the future.
Historically, three major approaches have been employed for immediate random number generation: RNG, PRNG, and TRNG. RNG itself divides into three interesting problems. First, we have an IID source, but no knowledge of the source, and the goal is to design machinery that generates an unbiased random number—the von Neumann RNG. Second, we have a known IID source generating a uniform distribution, and the goal is to invent a machine that can generate any distribution of interest—the Knuth and Yao RNG. Third, we have the general case of the second, in which the randomness source is known but arbitrary and the goal is to devise a machine that generates another arbitrary distribution—the Roche and Hoshi RNG. For all these RNGs the overarching concern is to use the minimum number of samples from the input source. These approaches to random number generation may seem rather similar, differing only in mathematical strategy and cleverness. However, the thermodynamic analyses show that they make rather different demands on their physical substrates, that is, on the thermodynamic resources required.
We showed that all RNG algorithms are heat-producing, work-consuming processes. In contrast, we showed that TRNG algorithms are heat-consuming, work-producing processes. And PRNGs lie in between, dissipation neutral in general, so that the physical implementation determines the detailed thermodynamics. Depending on the available resources and on which costs we are willing to pay, a designer can choose among these three approaches.
The most thermodynamically efficient approach is the TRNG, since it both generates the random numbers of interest and converts heat from the thermal reservoir into work. Implementing a TRNG, however, requires a physical system with inherently stochastic dynamics which, on its own, can be inefficient depending on the resources needed. The PRNG is the most unreliable method, since it ultimately produces periodic sequences rather than truly random numbers, but thermodynamically it can potentially be efficient. The RNG approach can only be used given access to a randomness source; it is particularly useful given access to a nearly free randomness source. Thermodynamically, though, it is inefficient, since the work reservoir must do work to run the machine; the resulting random numbers, however, are reliable, in contrast to those generated via a PRNG.
To see how different the RNG and TRNG approaches can be, let’s examine a particular example: assume access to a weakly random IID source with bias $p$ and suppose we want to generate an unbiased sample. We can ignore the randomness source and instead use the TRNG method with the machine in Fig. 4. Using Eq. (7), to produce one sample the machine on average absorbs heat $\langle Q \rangle = 0$ from the heat reservoir and turns it into work $\langle W \rangle = 0$. Since no net energy is exchanged, this approach is resource neutral, meaning that there is no energy transfer between reservoir and machine. Now, consider the case when we use the RNG approach—the von Neumann algorithm. To run the machine and generate one symbol, on average the work reservoir must provide work energy to the machine. This thermodynamic cost can be arbitrarily large depending on how small $p(1-p)$ is. This comparison highlights how different the random number generation approaches can be and how their usefulness depends on the available resources.
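The divergence of the von Neumann cost is easy to see empirically. This Python sketch (our own; the accounting is in source symbols consumed, a proxy for the work that must be supplied per output) estimates the mean number of biased source bits consumed per unbiased output bit, which scales as $1/(p(1-p))$.

```python
import random

def von_neumann_bit(flip):
    """Draw source bits in pairs until the pair differs; output the first
    bit of that pair. Returns (bit, number of source bits consumed)."""
    used = 0
    while True:
        a, b = flip(), flip()
        used += 2
        if a != b:
            return a, used

def mean_inputs_per_bit(bias, trials=20_000, seed=1):
    rng = random.Random(seed)
    flip = lambda: rng.random() < bias
    return sum(von_neumann_bit(flip)[1] for _ in range(trials)) / trials

# Mean source bits per output is 1/(p(1-p)); it diverges as p -> 0 or 1.
for p in (0.5, 0.1, 0.01):
    expected = 1 / (p * (1 - p))
    assert abs(mean_inputs_per_bit(p) - expected) / expected < 0.1
```

At $p = 0.01$ the algorithm already consumes roughly one hundred source bits per output bit, while the TRNG above generates the same unbiased bit at zero net energetic cost.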
The thermodynamic analysis of the main RNG strategies suggests a number of
challenges. Let’s close with several brief questions that hint at several
future directions in the thermodynamics of random number generation. Given that
random number generation is such a critical and vital task in modern computing,
following up on these strikes us as quite important. First, is Szilard’s Engine
a TRNG? What are the thermodynamic costs in harvesting
randomness? A recent analysis appears to have provided the answers
and anticipates the TRNG’s win-win property. Second, the randomness
sources and target distributions considered were rather limited compared to the
wide range of stochastic processes that arise in contemporary experiment and
theory. For example, what about the thermodynamics of generating $1/f$
noise? Nominally, this and other complex distributions are associated
with infinite-memory processes. What are the associated
thermodynamic cost bounds? Suggestively, it was recently shown that
infinite-memory devices can actually achieve thermodynamic bounds
. Third, the random number generation strategies considered here
are not secure. However, cryptographically secure random number generators
have been developed. What types of physical systems can be used
for secure TRNG and which are thermodynamically the most efficient? One
suggestion could be superconducting nanowires and Josephson junctions near
the superconducting critical current. Fourth, what are the
additional thermodynamic costs of adding security to RNGs? Finally, there is a
substantial quantum advantage when compressing classical random processes
. What are the thermodynamic consequences of using such quantum
representations for RNGs?
We thank A. Aghamohammadi, M. Anvari, A. B. Boyd, R. G. James, M. Khorrami, J. R. Mahoney, and P. M. Riechers for helpful discussions. JPC thanks the Santa Fe Institute for its hospitality during visits as an External Faculty member. This material is based upon work supported by, or in part by, the John Templeton Foundation and U. S. Army Research Laboratory and the U. S. Army Research Office under contracts W911NF-13-1-0390 and W911NF-13-1-0340.
-  W. G. Cochran. Sampling Techniques. John Wiley & Sons, 2007.
-  J. Banks. Discrete-Event System Simulation. Pearson Education India, 1984.
-  D. R. Stinson. Cryptography: Theory and Practice. CRC press, 2005.
-  R. G. Sargent. Verification and validation of simulation models. In Proceedings of the 37th Conference on Winter Simulation, pages 130–143, 2005.
-  J. Stoer and R. Bulirsch. Introduction to Numerical Analysis, volume 12. Springer Science & Business Media, 2013.
-  E. Alpaydin. Introduction to Machine Learning. MIT press, 2014.
-  J. H. Conway. On Numbers and Games, volume 6. IMA, 1976.
-  O. Dowlen. The Political Potential of Sortition: A study of the random selection of citizens for public office, volume 4. Andrews UK Limited, 2015.
-  R. Y. Rubinstein and D. P. Kroese. Simulation and the Monte Carlo method, volume 707. John Wiley & Sons, 2011.
-  D. E. Knuth. The Art of Computer Programming: Semi-Numerical Algorithms, volume 2. Addison-Wesley, Reading, Massachusetts, second edition, 1981.
-  C. D. Motchenbacher and F. C. Fitchen. Low-Noise Electronic Design. John Wiley & Sons, New York, 1973.
-  B. Mende, L. C. Noll, and S. Sisodiya. SGI classic lavarand™. US Patent #5,732,138, 1996.
-  M. G. Kendall and B. B. Smith. Randomness and random sampling numbers. J. Roy. Stat. Soc., 101(1):147–166, 1938.
-  D. H. Lehmer. Mathematical methods in large-scale computing units. In Proc. 2nd Symp. on Large-Scale Digital Calculating Machinery, pages 141–146. Harvard University Press, Cambridge, MA, 1951.
-  B. A. Wichmann and I. D. Hill. Algorithm AS 183: An efficient and portable pseudo-random number generator. J. Roy. Stat. Soc. Series C (Applied Statistics), 31(2):188–190, 1982.
-  L. Blum, M. Blum, and M. Shub. A simple unpredictable pseudo-random number generator. SIAM J. Comput., 15(2):364–383, 1986.
-  M. Mascagni, S. A. Cuccaro, D. V. Pryor, and M. L. Robinson. A fast, high quality, and reproducible parallel lagged-Fibonacci pseudorandom number generator. J. Comput. Physics, 119(2):211–219, 1995.
-  J. Kelsey, B. Schneier, and N. Ferguson. Yarrow-160: Notes on the design and analysis of the yarrow cryptographic pseudorandom number generator. In International Workshop on Selected Areas in Cryptography, pages 13–33. Springer, 1999.
-  G. Marsaglia. Xorshift RNGs. J. Stat. Software, 8(14):1–6, 2003.
-  J. K. Salmon, M. A. Moraes, R. O. Dror, and D. E. Shaw. Parallel random numbers: As easy as 1, 2, 3. In 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC), pages 1–12. IEEE, 2011.
-  A. N. Kolmogorov. Three approaches to the concept of the amount of information. Prob. Info. Trans., 1:1, 1965.
-  G. Chaitin. On the length of programs for computing finite binary sequences. J. ACM, 13:145, 1966.
-  P. Martin-Löf. The definition of random sequences. Info. Control, 9:602–619, 1966.
-  L. A. Levin. Laws of information conservation (nongrowth) and aspects of the foundation of probability theory. Problemy Peredachi Informatsii, 10:30–35, 1974. Translation: Problems of Information Transmission 10 (1974) 206-210.
-  M. Li and P. M. B. Vitanyi. An Introduction to Kolmogorov Complexity and its Applications. Springer-Verlag, New York, 1993.
-  A. N. Kolmogorov. Combinatorial foundations of information theory and the calculus of probabilities. Russ. Math. Surveys, 38:29–40, 1983.
-  G. F. Knoll. Radiation Detection and Measurement. John Wiley & Sons, 2010.
-  W. B. Davenport and W. L. Root. Random Signals and Noise. McGraw-Hill New York, 1958.
-  N. Yoshida, R. K. Sheth, and A. Diaferio. Non-Gaussian cosmic microwave background temperature fluctuations from peculiar velocities of clusters. Monthly Notices Roy. Astro. Soc., 328(2):669–677, 2001.
-  T. Jennewein, U. Achleitner, G. Weihs, H. Weinfurter, and A. Zeilinger. A fast and compact quantum random number generator. Rev. Sci. Instr., 71(4):1675–1680, 2000.
-  A. Stefanov, N. Gisin, O. Guinnard, L. Guinnard, and H. Zbinden. Optical quantum random number generator. J. Mod. Optics, 47(4):595–598, 2000.
-  A. Acin and L. Masanes. Certified randomness in quantum physics. Nature, 540:213–219, 2016.
-  A. Brandstater, J. Swift, H. L. Swinney, A. Wolf, J. D. Farmer, E. Jen, and J. P. Crutchfield. Low-dimensional chaos in a hydrodynamic system. Phys. Rev. Lett., 51:1442, 1983.
-  M. Stipčević and K. Ç. Koç. True random number generators. In Open Problems in Mathematics and Computational Science, pages 275–315. Springer, 2014.
-  J. E. Gentle. Random Number Generation and Monte Carlo Methods. Springer Science & Business Media, New York, 2013.
-  R. Y. Rubinstein and B. Melamed. Modern Simulation and Modeling, volume 7. Wiley New York, 1998.
-  J. von Neumann. Various techniques used in connection with random digits. In Notes by G. E. Forsythe, volume 12, pages 36–38. National Bureau of Standards Applied Math Series, 1963.
-  L. Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th conference on Winter simulation, pages 260–265. ACM, 1986.
-  R. Landauer. Irreversibility and heat generation in the computing process. IBM J. Res. Develop., 5(3):183–191, 1961.
-  C. H. Bennett. Thermodynamics of computation - a review. Intl. J. Theo. Phys., 21:905, 1982.
-  C. Jarzynski. Equalities and inequalities: irreversibility and the second law of thermodynamics at the nanoscale. Ann. Rev. Cond. Matter Physics, 2(1):329–351, 2011.
-  J. M. R. Parrondo, J .M. Horowitz, and T. Sagawa. Thermodynamics of information. Nature Physics, 11(2):131–139, 2015.
-  T. Sagawa. Thermodynamics of information processing in small systems. Prog. Theo. Physics, 127:1–56, 2012.
-  T. M. Fiola, J. Preskill, A. Strominger, and S. P. Trivedi. Black hole thermodynamics and information loss in two dimensions. Phys. Rev. D, 50(6):3987, 1994.
-  S. Das. Black-hole thermodynamics: Entropy, information and beyond. Pramana, 63(4):797–815, 2004.
-  A. Berut, A. Arakelyan, A. Petrosyan, S. Ciliberto, R. Dillenschneider, and E. Lutz. Experimental verification of Landauer’s principle linking information and thermodynamics. Nature, 483:187, 2012.
-  S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki, and M. Sano. Experimental demonstration of information-to-energy conversion and validation of the generalized Jarzynski equality. Nat. Physics, 6:988–992, 2010.
-  K. Maruyama, F. Nori, and V. Vedral. Colloquium: The physics of Maxwell’s demon and information. Rev. Mod. Physics, 81:1, 2009.
-  K. Sekimoto. Stochastic Energetics, volume 799. Springer, New York, 2010.
-  U. Seifert. Stochastic thermodynamics, fluctuation theorems and molecular machines. Reports Prog. Physics, 75(12):126001, 2012.
-  P. Diaconis, S. Holmes, and R. Montgomery. Dynamical bias in the coin toss. SIAM Review, 49(2):211–235, 2007.
-  P. Elias. The efficient construction of an unbiased random sequence. Ann. Math. Stat., pages 865–870, 1972.
-  Y. Peres. Iterating von Neumann’s procedure for extracting random bits. Ann. Statistics, 20(1):590–597, 1992.
-  D. Romik. Sharp entropy bounds for discrete statistical simulation. Stat. Prob. Lett., 42(3):219–227, 1999.
-  D. Mandal and C. Jarzynski. Work and information processing in a solvable model of Maxwell’s demon. Proc. Natl. Acad. Sci. USA, 109(29):11641–11645, 2012.
-  A. C. Barato and U. Seifert. An autonomous and reversible Maxwell’s demon. Europhys. Lett., 101:60001, 2013.
-  A. B. Boyd, D. Mandal, and J. P. Crutchfield. Identifying functional thermodynamics in autonomous Maxwellian ratchets. New J. Physics, 18:023049, 2016.
-  T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, second edition, 2006.
-  H. R. Lewis and C. H. Papadimitriou. Elements of the Theory of Computation. Prentice-Hall, Englewood Cliffs, N.J., second edition, 1998.
-  L. Trevisan and S. Vadhan. Extracting randomness from samplable distributions. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium, pages 32–42. IEEE, 2000.
-  D. E. Knuth and A. C. Yao. The complexity of nonuniform random number generation. In J. F. Traub, editor, Algorithms and Complexity: New Directions and Recent Results, pages 357–428. Academic Press, New York, 1976.
-  J. R. Roche. Efficient generation of random variables from biased coins. In Information Theory, Proc. 1991 IEEE Intl. Symp., page 169, 1991.
-  M. Hoshi. Interval algorithm for random number generation. IEEE Trans. Info. Th., 43(2):599–611, 1997.
-  O. N. Temkin, A. V. Zeigarnik, and D. G. Bonchev. Chemical reaction networks: A graph-theoretical approach. CRC Press, 1996.
-  M. Cook, D. Soloveichik, E. Winfree, and J. Bruck. Programmability of chemical reaction networks. In Algorithmic Bioprocesses, pages 543–584. Springer, 2009.
-  H. Jiang, M. D. Riedel, and K. K. Parhi. Digital signal processing with molecular reactions. IEEE Design and Testing of Computers, 29(3):21–31, 2012.
-  M. O. Magnasco. Chemical kinetics is Turing universal. Phys. Rev. Lett., 78(6):1190, 1997.
-  A. Hjelmfelt, E. D. Weinberger, and J. Ross. Chemical implementation of neural networks and Turing machines. Proc. Natl. Acad. Sci. USA, 88(24):10983–10987, 1991.
-  L. Cardelli. Strand algebras for DNA computing. Natural Computing, 10(1):407–428, 2011.
-  D. Soloveichik, G. Seelig, and E. Winfree. DNA as a universal substrate for chemical kinetics. Proc. Natl. Acad. Sci., 107(12):5393–5398, 2010.
-  D. Soloveichik, M. Cook, E. Winfree, and J. Bruck. Computation with finite stochastic chemical reaction networks. Natural Computing, 7(4):615–633, 2008.
-  H. Chen, D. Doty, and D. Soloveichik. Deterministic function computation with chemical reaction networks. Natural computing, 13(4):517–534, 2014.
-  D. Doty and M. Hajiaghayi. Leaderless deterministic chemical reaction networks. Natural Computing, 14(2):213–223, 2015.
-  Unix Berkeley Software Distribution. Random(3). BSD Library Functions Manual, 2016.
-  J. R. Mahoney, C. Aghamohammadi, and J. P. Crutchfield. Occam’s quantum strop: Synchronizing and compressing classical cryptic processes via a quantum channel. Scientific Reports, 6:20495, 2016.
-  C. Aghamohammadi, J. R. Mahoney, and J. P. Crutchfield. Extreme quantum advantage when simulating strongly coupled classical systems. arXiv preprint arXiv:1609.03650, 2016.
-  M. Gu, K. Wiesner, E. Rieper, and V. Vedral. Quantum mechanics can reduce the complexity of classical models. Nature Comm., 3:762, 2012.
-  R. Tan, D. R. Terno, J. Thompson, V. Vedral, and M. Gu. Towards quantifying complexity with quantum mechanics. Euro. Phys. J. Plus, 129(9):1–12, 2014.
-  C. Aghamohammadi, J. R. Mahoney, and J. P. Crutchfield. The ambiguity of simplicity. Physics Lett. A, in press, 2016. arXiv preprint arXiv:1602.08646.
-  P. M. Riechers, J. R. Mahoney, C. Aghamohammadi, and J. P. Crutchfield. Minimized state complexity of quantum-encoded cryptic processes. Phys. Rev. A, 93(5):052317, 2016.
-  A. Uchida, K. Amano, M. Inoue, K. Hirano, S. Naito, H. Someya, I. Oowada, T. Kurashige, M. Shiki, and S. Yoshimori. Fast physical random bit generation with chaotic semiconductor lasers. Nature Photonics, 2(12):728–732, 2008.
-  I. Kanter, Y. Aviad, I. Reidler, E. Cohen, and M. Rosenbluh. An optical ultrafast random bit generator. Nature Photonics, 4(1):58–61, 2010.
-  D. J. Kinniment and E. G. Chester. Design of an on-chip random number generator using metastability. In Solid-State Circuits Conference, 2002. ESSCIRC 2002. Proceedings of the 28th European, pages 595–598. IEEE, 2002.
-  C. Tokunaga, D. Blaauw, and T. Mudge. True random number generator with a metastability-based quality control. IEEE J. Solid-State Circuits, 43(1):78–85, 2008.
-  M. Epstein, L. Hars, R. Krasinski, M. Rosner, and H. Zheng. Design and implementation of a true random number generator based on digital circuit artifacts. In International Workshop on Cryptographic Hardware and Embedded Systems, pages 152–165. Springer, 2003.
-  L. Szilard. On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings. Z. Phys., 53:840–856, 1929.
-  A. B. Boyd and J. P. Crutchfield. Demon dynamics: Deterministic chaos, the Szilard map, and the intelligence of thermodynamic systems. Phys. Rev. Lett., 116:190601, 2016.
-  W. H. Press. Flicker noises in astronomy and elsewhere. Comments on Astrophysics, 7(4):103–119, 1978.
-  S. Marzen and J. P. Crutchfield. Statistical signatures of structural organization: The case of long memory in renewal processes. Phys. Lett. A, 380(17):1517–1525, 2016.
-  A. B. Boyd, D. Mandal, and J. P. Crutchfield. Leveraging environmental correlations: The thermodynamics of requisite variety. arXiv:1609.05353, 2016.
-  C. Easttom. Modern Cryptography: Applied Mathematics for Encryption and Information Security. McGraw-Hill Education, New York, 2015.
-  M. Foltyn and M. Zgirski. Gambling with superconducting fluctuations. Phys. Rev. App., 4(2):024002, 2015.