Functions that preserve p-randomness^1

^1 An extended abstract of this paper appeared in FCT 2011 [7].
Abstract
We show that polynomial-time randomness (p-randomness) is preserved under a variety of familiar operations, including addition and multiplication by a nonzero polynomial-time computable real number. These results follow from a general theorem: If $I \subseteq \mathbb{R}$ is an open interval, $f : I \to \mathbb{R}$ is a function, and $t \in I$ is p-random, then $f(t)$ is p-random provided

1. $f$ is p-computable on the dyadic rational points in $I$, and

2. $f$ varies sufficiently at $t$, i.e., there exists a real constant $C > 0$ such that either
$$(\forall u \in I \setminus \{t\})\quad \frac{f(u) - f(t)}{u - t} \ge C,$$
or
$$(\forall u \in I \setminus \{t\})\quad \frac{f(u) - f(t)}{u - t} \le -C.$$
Our theorem implies in particular that any analytic function about a p-computable point whose power series has uniformly p-computable coefficients preserves p-randomness in its open interval of absolute convergence. Such functions include all the familiar functions from first-year calculus.
Keywords: Randomness, p-randomness, complexity, polynomial time, measure, martingale, real analysis
Subject Classification: Computational complexity
1 Introduction
Informally, we might call an infinite binary sequence “random” if we see no predictable patterns in it. Put another way, a sequence is random if it looks “typical,” that is, it enjoys no easily identifiable properties not shared by almost all other sequences. Here, the notion of “almost all” comes from Lebesgue measure on the unit interval $[0,1]$. What we mean by “easily identifiable,” on the other hand, can vary greatly with the situation. In statistics, random sequences are useful to avoid bias in sampling or in simulating processes (e.g., queueing systems) that are too complex for us to determine exactly. In this setting, desirable properties for random sequences include instances of the law of large numbers: a fixed string of length $k$ should occur in the sequence asymptotically a $2^{-k}$ fraction of the time, for example. Other examples include the law of the iterated logarithm. In cryptography and network security, “easily identifiable” must be strengthened to “unpredictable by an adversary.” In computer science generally, random sequences should produce successful results most of the time when used in various randomized algorithms.
There is always a tradeoff between the amount of randomness possessed by a sequence and the ease with which it can be produced. Random sequences that can be produced algorithmically (i.e., pseudorandom sequences) are of course desirable, provided they have enough randomness for the task at hand. The study of algorithmic randomness has a long and rich history (see, for example, [6, 5] for references to the literature). Complexity-theoretic notions of randomness were first suggested by Schnorr, and resource-bounded measure and randomness were developed more fully by Lutz (see [10]). For a survey on the subject, see [1].
A natural tradeoff in the context of polynomial-time computation is the notion of polynomial-time randomness, or p-randomness for short (see Definition 2.2, below), which is closely tied to the notion of p-measure introduced by Lutz [8, 9]. There are p-random sequences that can be computed in exponential time; in fact, almost all sequences in EXP (in a resource-bounded measure-theoretic sense) are p-random. Yet p-random sequences are still strong enough for many common tasks, both statistical and computational. For example, p-random sequences satisfy the laws of large numbers and the iterated logarithm (see [15]), and they provide adequate sources for BPP computations and have many other desirable computational properties (see [10]).
The current work addresses some geometric aspects of p-random sequences. Recently, connections between the geometry of Euclidean space and effective and resource-bounded measure and dimension have been found [11, 12]. The question of how the complexity- or measure-theoretic properties of a real number are altered when it is transformed via a real-valued function goes back at least to Wall [14], who showed that adding or multiplying a nonzero rational number to a real number whose base-$b$ expansion is normal^2 yields another real with a normal base-$b$ expansion. Doty, Lutz, & Nandakumar recently extended Wall’s result, showing that the finite-state dimension of the base-$b$ expansion of a real number is preserved under addition or multiplication by a nonzero rational number [4]. At the other extreme of the complexity spectrum, it is not hard to show that algorithmic randomness (Martin-Löf randomness [13]) is preserved under addition or multiplication by a nonzero computable real, regardless of the base of the expansion.

^2 An infinite sequence over a $b$-letter alphabet is normal iff for any finite string $w$, there are $n b^{-|w|} + o(n)$ occurrences of $w$ as a substring among the first $n$ letters of the sequence, as $n$ tends to infinity.
In this paper we take a middle ground, considering how polynomial-time computable functions mapping reals to reals preserve p-randomness. We show (Theorem 4.1, below) that such a function $f$ maps a p-random real $t$ to a p-random real $f(t)$ provided $f$ satisfies a kind of anti-Lipschitz condition in some neighborhood of $t$: $f$ varies from $f(t)$ at least linearly in its argument’s distance from $t$. (This result still holds even if $f$ is not monotone in any neighborhood of $t$, or if $f$ is only polynomial-time computable on dyadic rational inputs, or if $f$ enjoys no particular continuity properties.)
Our result has a number of corollaries: p-randomness is preserved under addition and multiplication by nonzero p-computable reals (complementing the results in [14, 4] and the folklore result about algorithmically random reals); it is also preserved by polynomial and rational functions (with p-computable coefficients) and all the familiar transcendental functions on the reals, e.g., exponential, logarithmic, and trigonometric functions.
The polynomial-time case presents some technical challenges not present with unbounded computational resources. Roughly speaking, given a polynomial-time approximable function $f$, our goal is to define a betting strategy (i.e., a martingale; see Section 2) that bets on the next bit of the binary expansion of a real number $x$, given previous bits. The strategy is based on the behavior of an assumed strategy $d$ that successfully bets on $f(x)$. If we had no resource bounds, then we could approximate $f$ at various points as closely as needed to obtain a good sample of $d$’s behavior on $f$ applied to those points, allowing us to mimic $d$ and thus succeed on $x$. Since we are polynomial-time-bounded, however, we have no such luxury, and we have to settle for rougher approximations of $f$. For example, $d$ could succeed on one point (where there is a long string of 1’s before the next 0 in the argument to $f$) but lose everything on another point close by. If we only have a poor approximation to $f$, then we cannot distinguish the two cases above, and so $d$ is no good at telling us how to bet on the first digit after the decimal point. Fortunately, we may assume that $d$ is conservative (in the sense that it does not bet drastically), so that $d$’s assets are relatively insensitive to slight variations in the real numbers corresponding to the sequences it bets on.
Section 2 has basic definitions, including martingales and p-randomness. Section 3 describes the conditions on real-valued functions sufficient to preserve p-randomness. Our main results are in Section 4, where we prove that these conditions indeed suffice; Theorem 4.1 is the main result of that section. In Section 5, we show that these conditions hold for a variety of familiar functions. In Section 6, we give evidence that the strongly varying hypothesis in Theorem 4.1 is tight. In Section 7, we provide a result about p-measure that is analogous to our main result about p-randomness. We suggest further research in Section 8.
2 Notation and basic definitions
We let $\mathbb{N} = \{0, 1, 2, \ldots\}$. We let $\mathbb{Q}$ be the set of rational numbers. A dyadic rational is a rational number expressible as $m/2^n$ for some integers $m$ and $n \ge 0$. We let $\mathbb{D}$ denote the set of dyadic rational numbers.
For real , we let denote .
In this paper, we only consider the binary expansions of real numbers. If need be, all our results can easily be modified to other bases.
Our basic notions and results about p-computability, martingales, and randomness in complexity theory are standard. See, for example, [9, 10, 1].
Let $w \in \{0,1\}^*$ be a string and $S \in \{0,1\}^\omega$ an infinite sequence. We let $|w|$ denote the length of $w$, and for any $0 \le i < |w|$ we let $w[i]$ be the $(i+1)$st bit of $w$. Similarly, for any $i \in \mathbb{N}$ we let $S[i]$ denote the $(i+1)$st bit of $S$. For any $i, j$ with $0 \le i \le j \le |w|$, we let $w[i..j]$ denote the substring consisting of the $(i+1)$st through the $j$th bit of $w$. We let $\{0,1\}^n$ denote the set of strings in $\{0,1\}^*$ of length $n$. If $u, w \in \{0,1\}^*$, we let $u \sqsubseteq w$ mean that $u$ is a prefix of $w$, and we let $u \sqsubset w$ mean that $u$ is a proper prefix of $w$. We denote the empty string by $\lambda$.
Recall that a martingale is a function $d : \{0,1\}^* \to [0, \infty)$ such that for every $w \in \{0,1\}^*$,
$$d(w) = \frac{d(w0) + d(w1)}{2}.$$
We will also assume without loss of generality that $d(\lambda) = 1$. We say that $d$ succeeds on a sequence $S \in \{0,1\}^\omega$ iff
$$\limsup_{n \to \infty} d(S[0..n]) = \infty.$$
We say that $d$ strongly succeeds on $S$ iff
$$\lim_{n \to \infty} d(S[0..n]) = \infty.$$
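As a concrete illustration (a sketch, not part of the paper), the fairness condition and a simple betting strategy can be rendered in Python. Both function names are hypothetical helpers, and a real martingale is of course defined on all of $\{0,1\}^*$, not just strings up to a fixed depth:

```python
def is_martingale(d, depth=8):
    """Check the fairness condition d(w) == (d(w0) + d(w1)) / 2
    on all strings of length < depth."""
    strings = [""]
    for _ in range(depth):
        for w in strings:
            if abs(d(w) - (d(w + "0") + d(w + "1")) / 2) > 1e-12:
                return False
        strings = [w + b for w in strings for b in "01"]
    return True

def bias_martingale(w, p=0.75):
    """A martingale that bets a fixed fraction of its capital on '1'
    at every step: capital is multiplied by 2p on a 1 and by 2(1-p)
    on a 0, so fairness holds since 2p + 2(1-p) = 2."""
    capital = 1.0
    for b in w:
        capital *= 2 * p if b == "1" else 2 * (1 - p)
    return capital
```

On the all-ones sequence the capital of `bias_martingale` grows like $1.5^n$, so this strategy strongly succeeds on it, while on sequences with too many 0’s its capital tends to 0.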
Definition 2.1.
Fix any $k \ge 1$. A function $f : \mathbb{D} \to \mathbb{R}$ is $n^k$-computable if there is a function $g : \mathbb{D} \times \mathbb{N} \to \mathbb{D}$ such that
$$|g(d, r) - f(d)| \le 2^{-r}$$
for every $d \in \mathbb{D}$ and $r \in \mathbb{N}$, and in addition, $g(d, r)$ is computable in time $n^k$, where $n$ is the length of the input. We say that $g$ is an $n^k$-approximator for $f$. We say that $f$ is p-computable if $f$ is $n^k$-computable for some $k$, and that $g$ is a p-approximator for $f$ if $g$ is an $n^k$-approximator for $f$, for some $k$. A real number $x$ is $n^k$-computable (respectively, p-computable) if the constant function $d \mapsto x$ is $n^k$-computable (respectively, p-computable), and we may suppress the first argument in a p-approximator for $x$.
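For intuition (an illustrative sketch, not from the paper), here is an approximator in Python for the p-computable real $\sqrt{2}$: given precision parameter $r$, it returns a dyadic rational within $2^{-r}$ of $\sqrt{2}$ using exact integer arithmetic. The function name and interface are hypothetical:

```python
from math import isqrt

def sqrt2_approx(r):
    """Return (num, r) encoding the dyadic rational num / 2**r,
    which lies within 2**-r of sqrt(2)."""
    # isqrt(2 * 4**r) == floor(sqrt(2) * 2**r); the floor is off by
    # less than 1, so dividing by 2**r gives error below 2**-r.
    num = isqrt(2 << (2 * r))
    return num, r
```

Since the constant function has no dependence on the point argument, this is exactly the shape of a p-approximator for a real number, with the point argument suppressed.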
Definition 2.2.
Let $S \in \{0,1\}^\omega$ be any sequence.

1. For any $k \ge 1$, $S$ is $n^k$-random if no $n^k$-computable martingale succeeds on $S$.

2. The sequence $S$ is p-random if $S$ is $n^k$-random for all $k \ge 1$, i.e., no p-computable martingale succeeds on $S$.
Definition 2.3.
We will say that a martingale $d$ is conservative iff

1. for any $w \in \{0,1\}^*$ and $b \in \{0,1\}$,
$$d(wb) \ge \frac{d(w)}{2},$$

and

2. for any $S \in \{0,1\}^\omega$, if $d$ succeeds on $S$, then $d$ strongly succeeds on $S$.
Note that if $d$ is conservative, then $d(w) > 0$ for all $w$. It is well-known (and easy to show) that if there is a p-computable martingale that succeeds on a sequence $S$, then there is a conservative p-computable martingale that succeeds on $S$. Moreover, there is a bound on the running time of the conservative martingale that depends only on the running time of the original martingale (and not on the martingale itself or on $S$). More precisely,
Proposition 2.4.
For any $k$, there exists $k'$ such that, for any $n^k$-computable martingale $d$, there exists a conservative $n^{k'}$-computable martingale that (strongly) succeeds on every sequence that $d$ succeeds on.
We identify a sequence $S \in \{0,1\}^\omega$ with a real number $x_S \in [0,1]$ via the usual binary expansion: $x_S = \sum_{i \ge 0} S[i]\, 2^{-(i+1)}$. This correspondence is one-to-one except on $\mathbb{D} \cap (0,1)$, where it is two-to-one. For every $w \in \{0,1\}^*$, we define $x_w = \sum_{0 \le i < |w|} w[i]\, 2^{-(i+1)}$, and we define the dyadic interval
$$I_w = \left[x_w,\ x_w + 2^{-|w|}\right].$$
For $x \in [0,1]$, we define $x$ to be p-random (respectively, $n^k$-random) iff the corresponding sequence is p-random (respectively, $n^k$-random). If $x \in \mathbb{R}$ is arbitrary, then we define $x$ to be p-computable (p-random) just as we do for $[0,1]$, and similarly for $n^k$-computability and $n^k$-randomness. It is well-known that no p-computable real number is p-random.
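The correspondence between strings and dyadic intervals can be made concrete with a small Python sketch (illustrative only; the function name is hypothetical, and exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction

def dyadic_interval(w):
    """Return the endpoints (x_w, x_w + 2**-|w|) of the dyadic
    interval I_w as exact fractions, where x_w = 0.w in binary."""
    x = Fraction(int(w, 2) if w else 0, 2 ** len(w))
    return x, x + Fraction(1, 2 ** len(w))
```

Note that $I_{w0}$ and $I_{w1}$ split $I_w$ exactly in half, sharing only the midpoint, which is the geometric counterpart of the martingale fairness condition.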
3 Functions of interest
We will restrict our attention to certain types of real-valued functions of a real variable. We are only interested in the behavior of these functions on p-random inputs. For simplicity, we will only consider functions with domain $[0,1]$, but this is in no way an essential restriction. Our functions will possess a certain p-computability property and a certain strong variation property. Both these properties are local in the sense that we only care about them in the vicinity of a p-random number.
Definition 3.1.
A function $f : [0,1] \to \mathbb{R}$ is weakly p-computable if there exists a polynomial-time computable function $g : (\mathbb{D} \cap [0,1]) \times \mathbb{N} \to \mathbb{D}$ such that for any $d \in \mathbb{D} \cap [0,1]$ and $r \in \mathbb{N}$,
$$|g(d, r) - f(d)| \le 2^{-r}.$$
Furthermore, for constant $k$, if $g$ is computable in time $n^k$, then we say that $f$ is weakly $n^k$-computable.
Note that a weakly p-computable function can behave arbitrarily on $[0,1] \setminus \mathbb{D}$.
Definition 3.2.
Let $f : [0,1] \to \mathbb{R}$ be a function and let $I_w$ be some dyadic interval with $w \in \{0,1\}^*$. We say that $f$ is weakly p-computable on $I_w$ iff there exists a p-time computable function $g : \{0,1\}^* \times \mathbb{N} \to \mathbb{D}$ such that for any $u \in \{0,1\}^*$ and $r \in \mathbb{N}$,
$$|g(u, r) - f(x_{wu})| \le 2^{-r}.$$
If $x \in [0,1]$, then we say that $f$ is weakly p-computable at $x$ iff $f$ is weakly p-computable on some dyadic interval containing $x$.
All these notions carry over in the obvious way when “pcomputable” is replaced with “computable.”
We say that $f$ is locally weakly p-computable if $f$ is weakly p-computable at all p-random points in $[0,1]$.
[Note that $x_{wu}$ is the dyadic rational number corresponding to the string $wu$ (the concatenation of $w$ and $u$).]
In other words, $f$ is weakly p-computable at $x$ iff we can approximate $f$ on the dyadic rationals in some dyadic interval containing $x$ in polynomial time. Notice that we are not insisting that $f$ have any continuity properties. This means in particular that the values of $f$ on the dyadic rationals may not uniquely determine $f$ elsewhere. Notice also that a function may be locally weakly p-computable but not “globally” p-computable, being patched together nonuniformly from various p-computable functions on different dyadic intervals.
We can extend Definition 3.2 to weak p-computability at an arbitrary point in the natural way.
Definition 3.3.
Let $I \subseteq \mathbb{R}$ be an interval, let $f : I \to \mathbb{R}$ be a function, and let $x \in I$ be some point. We say that $f$ strongly varies at $x$ on $I$ iff there is some real constant $C > 0$ such that either

1. for all $y \in I \setminus \{x\}$,
$$\frac{f(y) - f(x)}{y - x} \ge C,$$

or

2. for all $y \in I \setminus \{x\}$,
$$\frac{f(y) - f(x)}{y - x} \le -C.$$

In case (1) we say that $f$ strongly increases at $x$ on $I$, and in case (2) $f$ strongly decreases at $x$ on $I$.
We say that $f$ strongly varies at $x$ if $f$ strongly varies at $x$ on $I$ for some open interval $I$ containing $x$. We define strongly increasing/decreasing at $x$ analogously.
The notion of strong variation is illustrated in Figure 1.
Example 3.4.
If $f$ is continuously differentiable in a neighborhood of $x$ and $f'(x) \ne 0$, then $f$ strongly varies at $x$.
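As a quick numerical illustration (not from the paper), one can estimate a strong-variation constant by sampling difference quotients on both sides of a point; the helper below is hypothetical:

```python
import math

def min_difference_quotient(f, x, radius, samples=1000):
    """Sample difference quotients (f(y) - f(x)) / (y - x) for y on
    both sides of x within the given radius, and return the smallest.
    A positive return value witnesses strong increase on the sampled
    points, with the return value as a candidate constant C."""
    quotients = []
    for i in range(1, samples + 1):
        offset = radius * i / samples
        for y in (x - offset, x + offset):
            quotients.append((f(y) - f(x)) / (y - x))
    return min(quotients)
```

For $f = \exp$ at $x = 0.5$ with radius $0.1$, every sampled quotient exceeds $e^{0.4}$, matching the mean value theorem bound $C = \inf_y f'(y)$ over the interval.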
4 Main result
Here is our main technical theorem, from which most of the other results in the paper follow easily.
Theorem 4.1.
Let $I \subseteq \mathbb{R}$ be some interval and $f : I \to \mathbb{R}$ some function. Suppose $t$ is a p-random point in the interior of $I$. If $f$ is weakly p-computable at $t$ and strongly varies at $t$, then $f(t)$ is p-random.
4.1 Establishing Theorem 4.1
We start this section with two easy observations which we give without proof.
Observation 4.2.
Let and be integers with , and let . A number is random if and only if is random, if and only if is random, if and only if is random.
The same then obviously holds when “random” is replaced with “p-random.”
Observation 4.3.
Let be an interval, let be a function, let and be any integers with , and let . Define
Then strongly varies at some on (respectively, is computable at ) if and only if all of strongly vary (respectively, are computable) at on , if and only if strongly varies (respectively, is computable) at on , if and only if strongly varies (respectively, is computable) at on . The sense of variation (strongly increasing or strongly decreasing) of is the same as that of and opposite that of .
The same then obviously holds when “computable” is replaced with “p-computable.”
Theorem 4.1 is a corollary of the next lemma, which gives the theorem its essential technical content. We prove this lemma later in this section. For convenience, we will assume that our function is monotone ascending. We will show later that this is not an essential restriction.
Lemma 4.4.
For any $k$ there exists $k'$ such that, for any weakly $n^k$-computable, monotone ascending $f : [0,1] \to [0,1]$ and $x \in [0,1]$ such that $f$ strongly increases at $x$ on $[0,1]$, if $f(x)$ is not $n^k$-random, then $x$ is not $n^{k'}$-random.
The full strength of Lemma 4.4 will only be used in Section 7. For the rest of the paper, we can content ourselves with the following corollary:
Corollary 4.5.
Let $f : [0,1] \to [0,1]$ be weakly p-computable and monotone ascending on $[0,1]$. Suppose that $x \in [0,1]$ and that $f$ strongly increases at $x$ on $[0,1]$. Then if $f(x)$ is not p-random, $x$ is not p-random.
To prove Lemma 4.4, we need to construct a computable martingale that succeeds on $x$, given a computable one $d$ that succeeds on $f(x)$. If the martingale $d$ succeeds on $f(x)$, then we can define a new martingale that (for a given string $w$) samples the values of $d$ on points in $f(I_w)$. We do this by sandwiching the new value between a lower bound and an upper bound. We get the upper bound by overestimating $d$’s total contribution in an interval around $f(I_w)$ (Equation (1), below), and we get the lower bound by underestimating it (Equation (2)). These estimates become more refined as the approximation level increases, and, provided $d$ is conservative, they reach a common limit, yielding a well-defined martingale.
Definition 4.6.
Let be monotone ascending on and let be a martingale. For every , let denote the interval , and for every , define
(1) 
and define
(2) 
The only differences between the sums in Equations (1) and (2) are at most two terms, coming from dyadic intervals that straddle the boundary of $f(I_w)$. The assumption that $d$ is conservative is needed to ensure that these terms are not too large, and thus that the upper and lower estimates are close to each other. The following lemma is routine and easy to check.
Lemma 4.7.
Let and be as in Definition 4.6. For any , if is any dyadic interval contained in (that is, ), then letting ,
Proof.
The first inequality holds because , and hence is one of the terms in the sum defining . To see the other inequalities on the top line, notice that each term in the expression for (for some ) is equal to the sum of two terms occurring in the expression for . This follows from the fact that any contains both and , and so the latter two intervals are also subsets of .
Clearly, all terms in the sum for are included in the sum for , and so every quantity on the top line is less than or equal to the corresponding quantity on the bottom line.
Finally, the inequalities on the bottom line all hold: If we split each term in the expression for into the equivalent sum
then this accounts for all the terms in (and possibly more). ∎
Definition 4.8.
Let and be as in Definition 4.6. We define the upper shift of to be the function defined for all as
Similarly, we define the lower shift of to be
Since for any fixed , and are both monotone functions of (decreasing and increasing, respectively) by Lemma 4.7, the limits in the definition above clearly exist, and
for all .
For some martingales, the upper and lower shifts may differ, but they coincide for conservative martingales.
Lemma 4.9.
Fix and as in Definition 4.6. Suppose further that is conservative. For any and ,
(3) 
Proof.
Here we only use Property (1) of $d$ being conservative. All the terms in the two sums on the left-hand side of inequality (3) cancel except for those coming from at most two dyadic intervals: one containing the left endpoint of the interval in question and one containing its right endpoint. Thus we get
∎
Corollary 4.10.
Let $d$ and $f$ be as in Definition 4.6. If $d$ is conservative, then the upper and lower shifts of $d$ agree for all inputs.
Proof.
Immediate from Lemma 4.9. ∎
Definition 4.11.
If $d$ and $f$ are as in Definition 4.6 and $d$ is conservative, then we let $\hat{d}$ denote the common value of the upper and lower shifts of $d$, and we call $\hat{d}$ the pullback of $d$.
On input string $w$, the pullback merely samples $d$ over the interval $f(I_w)$.
Lemma 4.12.
If and are as in Definition 4.6 and is conservative, then its pullback is a martingale.
Proof.
To see that is a martingale, first we notice that
Next, by examining terms in the sums and using Lemma 4.7, we notice that for any and ,
Taking the limit of all sides as , we get
All these quantities are equal, since the two extremes are equal. Thus
∎
The next lemma is key. Here is where we make essential use of the strongly increasing property of $f$. (The hypothesis here is slightly weaker, though.)
Lemma 4.13.
Let and be as in Definition 4.6 with being conservative. Suppose that there exist and a real such that
(4) 
for all . If succeeds on and , then succeeds on .
Proof.
Note that Equation (4) implies if and if .
Set . We then have .
Since , has infinitely many ’s and infinitely many ’s. This implies that has infinitely many occurrences of “” as a substring, that is, there are infinitely many such that . Fix any real . Since succeeds on and is conservative, strongly succeeds on , and so there is some such that for all . Fix some such that . Let . We have and . Let be the first bits of , noting that . Here is the situation at :
We have . Also, since , we have , which implies , as noted above. It follows immediately that .
Claim 4.14.
.
Proof of Claim 4.14.
Finally, we need a lemma regarding the computation time of the pullback. The challenge in the proof is in finding an easy (i.e., polynomial-time) way to approximate the upper- and lower-shift values.
Lemma 4.15.
For any there exists such that, for any conservative computable martingale and computable function monotone ascending on with , the pullback martingale of is computable. (As a consequence, if is p-computable and is weakly p-computable on , then is p-computable.)
Proof.
The idea is that, given input of length and accuracy parameter , we will approximate some number between and for some sufficiently large (but still polynomial in and ). We have no hope of computing the sum of Equations (1) or (2) directly, as there are exponentially many terms. Fortunately, large blocks of the sum can be computed all at once by evaluating on shorter inputs. The condition that is only for technical convenience and is not necessary; it is only required that be computable in time .
Given conservative and monotone as above, fix approximators
computable in time and , respectively, such that for all and ,
Fix an input and let .
Fix an $r \in \mathbb{N}$. We will choose $m$ to be a sufficiently large integer (depending on $n$ and $r$) to be determined later. We prove the lemma by describing a procedure (with running time polynomial in $n + r$) to compute a number within $2^{-r}$ of the pullback value at the input.
Here is the procedure:

Compute dyadic rationals , both with denominator , so that approximates to within less than for each endpoint:

Compute and round to the nearest with denominator so that . Notice that
In other words,
(Note that is the left endpoint of .)

If , then let . Otherwise, let be the lexicographical successor of in , and compute . Let be the dyadic rational with denominator closest to . Similarly to , we have
(Note that is the right endpoint of .)

Without loss of generality, we can assume that : if necessary, reset then . These adjustments don’t affect the inequalities above.


Let be the set of all minimal strings such that .

Finally, compute
Notice that no string in is a proper prefix of any other string in ; hence the sets for are pairwise disjoint except for endpoints. Further, it is clear that if (otherwise, ).
We claim that , and hence , can be computed in time polynomial in , with a polynomial time bound exponent depending only on and . This follows from three facts:

and can be computed in time .

Every string in $C$ has length at most $m$.

There are at most two strings in $C$ of any given length.
Fact 1 is clear from the procedure description. For Fact 2, notice that, since the two endpoints have denominator $2^m$, if $u$ is any string such that $|u| > m$ and $I_u$ lies between the endpoints, then removing the last bit of $u$ yields a proper prefix $u'$ such that $I_{u'}$ lies between the endpoints as well, and so $u$ is not minimal, and thus $u \notin C$.
Similarly for Fact 3, if $u_1 < u_2 < u_3$ are any three distinct strings of the same length given in lexicographical order, and $u_1, u_3 \in C$, then $u_2 \notin C$. To see this, suppose $|u_1| = |u_2| = |u_3| = \ell$. Let $u'_2$ be the result of removing the last bit of $u_2$. Then $I_{u'_2}$ includes $I_{u_2}$ and another dyadic interval of length $2^{-\ell}$ immediately to the left or right of $I_{u_2}$. In either case, the left endpoint of $I_{u'_2}$ is not to the left of that of $I_{u_1}$, and the right endpoint of $I_{u'_2}$ is not to the right of that of $I_{u_3}$. So $I_{u'_2}$ lies between the two computed endpoints, which means that $u_2$ is not minimal, and thus $u_2 \notin C$.
Thus $C$ has at most $2m$ strings, each of length at most $m$, and so the following greedy algorithm for computing $C$ (given the two endpoints $a < b$) runs in time polynomial in $m$: WHILE $a < b$ DO let $u$ be the shortest string such that $x_u = a$ and $x_u + 2^{-|u|} \le b$; add $u$ to $C$; set $a := x_u + 2^{-|u|}$ ENDWHILE; return $C$. It then follows that the quantity in question can be computed from $C$ (using the values computed in Step 3, above) in time polynomial in $n + r$.
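The greedy computation of $C$ described above can be sketched in Python (an illustrative reconstruction under the assumption that the two endpoints are dyadic rationals with denominator $2^m$; the function name and interface are hypothetical):

```python
from fractions import Fraction

def greedy_dyadic_cover(a, b, m):
    """Greedily cover [a/2^m, b/2^m] (0 <= a < b <= 2^m) by maximal
    dyadic intervals I_u: repeatedly take the shortest string u whose
    interval starts at the current left endpoint and fits inside what
    remains.  Returns the strings in left-to-right order."""
    cover = []
    lo, hi = Fraction(a, 2 ** m), Fraction(b, 2 ** m)
    while lo < hi:
        n = 0
        # Shortest u with x_u == lo (so lo * 2**n must be an integer)
        # and x_u + 2**-n <= hi.
        while (lo * 2 ** n).denominator != 1 or lo + Fraction(1, 2 ** n) > hi:
            n += 1
        u = format(int(lo * 2 ** n), "b").zfill(n) if n else ""
        cover.append(u)
        lo += Fraction(1, 2 ** n)
    return cover
```

For example, on $[1/8, 3/4]$ (so $a = 1$, $b = 6$, $m = 3$) the cover consists of the strings 001, 01, 10, which indeed has at most two strings of each length and none longer than $m$, in accordance with Facts 2 and 3.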
It remains to show that can be chosen so that is sufficiently close to . First, note that, due to the closeness of our approximations to the endpoints of ,
(5) 
(The sum in the middle includes all the terms of the sum on the left, and the sum on the right includes all the terms of the sum in the middle.)
Proof of Lemma 4.4.
Let and be as in Lemma 4.4, and suppose is weakly computable for some . If , then it is clearly not random, and we are done. Otherwise, fix , and assume that is not random. Let , let , and let be the least natural number such that . For all , define
and define . Then is monotone ascending, weakly computable on , and strongly increasing at on by Observation 4.3. Further, since <