Functions that preserve p-randomness
We show that polynomial-time randomness (p-randomness) is preserved under a variety of familiar operations, including addition and multiplication by a nonzero polynomial-time computable real number. These results follow from a general theorem: If I ⊆ ℝ is an open interval, f : I → ℝ is a function, and x ∈ I is p-random, then f(x) is p-random provided
f is p-computable on the dyadic rational points in I, and
f varies sufficiently at x, i.e., there exists a real constant C > 0 such that either (f(u) − f(x))/(u − x) ≥ C for all u ∈ I with u ≠ x, or (f(u) − f(x))/(u − x) ≤ −C for all u ∈ I with u ≠ x.
Our theorem implies in particular that any function that is analytic about a p-computable point and whose power series has uniformly p-computable coefficients preserves p-randomness in its open interval of absolute convergence. Such functions include all the familiar functions from first-year calculus.
Keywords: Randomness, p-randomness, complexity, polynomial time, measure, martingale, real analysis
Subject Classification: Computational complexity
Informally, we might call an infinite binary sequence “random” if we see no predictable patterns in it. Put another way, a sequence is random if it looks “typical,” that is, it enjoys no easily identifiable properties not shared by almost all other sequences. Here, the notion of “almost all” comes from Lebesgue measure on the unit interval [0, 1]. What we mean by “easily identifiable,” on the other hand, can vary greatly with the situation. In statistics, random sequences are useful for avoiding bias in sampling or in simulating processes (e.g., queueing systems) that are too complex for us to determine exactly. Desirable statistical properties for random sequences include instances of the law of large numbers: a fixed string of length k should occur in the sequence asymptotically a fraction 2^{−k} of the time, for example. Other examples include the law of the iterated logarithm. In cryptography and network security, “easily identifiable” must be strengthened to “unpredictable by an adversary.” In computer science generally, random sequences should produce successful results most of the time when used in various randomized algorithms.
There is always a trade-off between the amount of randomness possessed by a sequence and the ease with which it can be produced. Random sequences that can be produced algorithmically (i.e., pseudorandom sequences) are of course desirable, provided they have enough randomness for the task at hand. The study of algorithmic randomness has a long and rich history (see, for example, [6, 5] for references to the literature). Complexity theoretic notions of randomness were first suggested by Schnorr, and resource-bounded measure and randomness were developed more fully by Lutz (see ). For a survey on the subject, see .
A natural trade-off in the context of polynomial-time computation is the notion of polynomial-time randomness, or p-randomness for short (see Definition 2.2, below), which is closely tied with the notion of p-measure introduced by Lutz [8, 9]. There are p-random sequences that can be computed in exponential time; in fact, almost all sequences in EXP (in a resource-bounded measure theoretic sense) are p-random. Yet p-random sequences are still strong enough for many common tasks, both statistical and computational. For example, p-random sequences satisfy the laws of large numbers and the iterated logarithm (see ), and they provide adequate sources for BPP computations and have many other desirable computational properties (see ).
The current work addresses some geometric aspects of p-random sequences. Recently, connections between the geometry of Euclidean space and effective and resource-bounded measure and dimension have been found [11, 12]. The question of how the complexity or measure theoretic properties of a real number are altered when it is transformed via a real-valued function goes back at least to Wall, who showed that adding a nonzero rational number to, or multiplying by a nonzero rational number, a real number whose base-r expansion is normal yields a real number whose base-r expansion is again normal.
In this paper we take a middle ground, considering how polynomial-time computable functions mapping reals to reals preserve p-randomness. We show (Theorem 4.1, below) that such a function f maps a p-random real x to a p-random real f(x) provided f satisfies a kind of anti-Lipschitz condition in some neighborhood of x: f(u) varies from f(x) at least linearly in u − x. (This result still holds even if f is not monotone in any neighborhood of x, or if f is only polynomial-time computable on dyadic rational inputs, or if f enjoys no particular continuity properties.)
Our result has a number of corollaries: p-randomness is preserved under addition and multiplication by nonzero p-computable reals (complementing the results in [14, 4] and the folklore result about algorithmically random reals); it is also preserved by polynomial and rational functions (with p-computable coefficients) and all the familiar transcendental functions on the reals, e.g., exponential, logarithmic, and trigonometric functions.
The polynomial-time case presents some technical challenges not present with unbounded computational resources. Roughly speaking, given a polynomial-time approximable function f, our goal is to define a betting strategy (i.e., a martingale; see Section 2) d′ that bets on the next bit of the binary expansion of a real number x, given previous bits. The strategy is based on the behavior of an assumed strategy d that successfully bets on f(x). If we had no resource bounds, then we could approximate f at various points as closely as needed to obtain a good sample of d's behavior on f applied to those points, allowing us to mimic d and thus succeed on x. Since we are polynomial-time-bounded, however, we have no such luxury, and we have to settle for rougher approximations of f. For example, d could succeed on f(0.0111⋯) (where there is a long string of 1's before the next 0 in the argument to f) but lose everything on f(0.1000⋯), which is close by. If we only have a poor approximation to f, then we cannot distinguish the two cases above, and so d is no good at telling us how to bet on the first digit after the decimal point. Fortunately, we may assume that d is conservative, in the sense that it does not bet drastically, so that d's assets are relatively insensitive to slight variations in the real numbers corresponding to the sequences it bets on.
Section 2 has basic definitions, including martingales and p-randomness. Section 3 describes the conditions on real-valued functions sufficient to preserve p-randomness. Our main results are in Section 4, where we prove that these conditions indeed suffice; Theorem 4.1 is the main result of that section. In Section 5, we show that these conditions hold for a variety of familiar functions. In Section 6, we give evidence that the strongly varying hypothesis in Theorem 4.1 is tight. In Section 7, we provide a result about p-measure that is analogous to our main result about p-randomness. We suggest further research in Section 8.
2 Notation and basic definitions
We let ℕ = {0, 1, 2, …} denote the set of natural numbers, and we let ℚ be the set of rational numbers. A dyadic rational is some q ∈ ℚ expressible as m/2^n for some integers m and n with n ≥ 0. We let 𝔻 denote the set of dyadic rational numbers.
For real , we let denote .
In this paper, we only consider the binary expansions of real numbers. If need be, all our results can easily be modified to other bases.
Let w ∈ {0,1}* and α ∈ {0,1}^ω. We let |w| denote the length of w, and for any 0 ≤ i < |w| we let w[i] be the (i + 1)st bit of w. Similarly, for any i ∈ ℕ we let α[i] denote the (i + 1)st bit of α. For any i, j with 0 ≤ i ≤ j < |w|, we let w[i..j] denote the substring consisting of the (i + 1)st through the (j + 1)st bits of w. We let {0,1}^n denote the set of strings in {0,1}* of length n. If u, w ∈ {0,1}*, we let u ⊑ w mean that u is a prefix of w, and we let u ⊏ w mean that u is a proper prefix of w. We denote the empty string by λ.
Recall that a martingale is a function d : {0,1}* → [0, ∞) such that for every w ∈ {0,1}*, 2 d(w) = d(w0) + d(w1).
We will also assume without loss of generality that d(λ) = 1. We say that d succeeds on a sequence α ∈ {0,1}^ω iff lim sup_{n→∞} d(α[0..n−1]) = ∞.
We say that d strongly succeeds on α iff lim_{n→∞} d(α[0..n−1]) = ∞.
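To make these definitions concrete, here is a small Python illustration (ours, not from the paper): a martingale that always stakes half its current capital on the next bit being 1. The function name and the particular strategy are our own.

```python
# Illustration (ours): a martingale that always stakes half its current
# capital on the next bit being 1.  It satisfies the fairness condition
# 2 * d(w) == d(w + "0") + d(w + "1") and starts with d("") == 1.

def d(w: str) -> float:
    capital = 1.0
    for bit in w:
        stake = capital / 2
        capital = capital + stake if bit == "1" else capital - stake
    return capital

# Fairness holds at every string:
for w in ["", "0", "01", "110"]:
    assert abs(2 * d(w) - (d(w + "0") + d(w + "1"))) < 1e-12

# This martingale strongly succeeds on the all-ones sequence: along 1^n its
# capital is (3/2)^n, which tends to infinity.
print(d("1" * 10))  # 1.5 ** 10
```

Note that this martingale also satisfies the "conservative" betting property discussed below: each bet changes the capital by at most a factor of 3/2.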
Fix any k ≥ 1. A function g : {0,1}* → ℝ is n^k-computable if there is a function ĝ : {0,1}* × ℕ → 𝔻 such that
|ĝ(w, i) − g(w)| ≤ 2^{−i} for every w ∈ {0,1}* and i ∈ ℕ, and in addition, ĝ(w, i) is computable in time (|w| + i)^k. We say that ĝ is an n^k-approximator for g. We say that g is p-computable if g is n^k-computable for some k ≥ 1, and that ĝ is a p-approximator for g if ĝ is an n^k-approximator for g, for some k ≥ 1. A real number x is n^k-computable (respectively, p-computable) if the constant function w ↦ x is n^k-computable (respectively, p-computable), and we may suppress the first argument in a p-approximator for x.
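As a toy illustration of an approximator (our own example, not from the paper): the real number 1/3 is p-computable, witnessed by a function that, on input i, outputs a dyadic rational within 2^{−i} of 1/3.

```python
from fractions import Fraction

# Toy p-approximator (ours): on input i, output a dyadic rational within
# 2**-i of the real number 1/3.  The denominator 2**i makes the output dyadic.

def approx_one_third(i: int) -> Fraction:
    m = (2**i + 1) // 3        # integer nearest to 2**i / 3
    return Fraction(m, 2**i)   # error is exactly 1 / (3 * 2**i) <= 2**-i

for i in range(1, 20):
    assert abs(approx_one_third(i) - Fraction(1, 3)) <= Fraction(1, 2**i)
```

The arithmetic here runs in time polynomial in i, as the definition requires.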
Let α ∈ {0,1}^ω be any sequence.
For any k ≥ 1, α is n^k-random if no n^k-computable martingale succeeds on α.
The sequence α is p-random if α is n^k-random for all k ≥ 1, i.e., no p-computable martingale succeeds on α.
We will say that a martingale d is conservative iff
d(wb) ≥ d(w)/2 (equivalently, d(wb) ≤ (3/2) d(w)) for any w ∈ {0,1}* and b ∈ {0,1}, and
for any α ∈ {0,1}^ω, if d succeeds on α, then d strongly succeeds on α.
Note that if d is conservative, then d(w) ≥ 2^{−|w|} > 0 for all w ∈ {0,1}*. It is well-known (and easy to show) that if there is a p-computable martingale that succeeds on α, then there is a conservative p-computable martingale that succeeds on α. Moreover, there is a bound on the running time of the conservative martingale that depends only on the running time of the original martingale (and not on the martingale itself or on α). More precisely,
For any k ≥ 1, there exists ℓ ≥ 1 such that, for any n^k-computable martingale d, there exists a conservative n^ℓ-computable martingale d′ that (strongly) succeeds on every sequence that d succeeds on.
We identify a sequence α ∈ {0,1}^ω with the real number x_α = Σ_{i≥0} α[i] 2^{−(i+1)} ∈ [0, 1] via the usual binary expansion. This correspondence is one-to-one except on the dyadic rationals in (0, 1), where it is two-to-one. For every w ∈ {0,1}*, we define x_w = Σ_{0≤i<|w|} w[i] 2^{−(i+1)}, and we define the dyadic interval γ_w = [x_w, x_w + 2^{−|w|}].
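The correspondence between a bit string and its dyadic interval (the set of all reals in [0, 1] whose binary expansion begins with that string) can be sketched as follows; the helper name is our own.

```python
from fractions import Fraction

# Helper (ours): the dyadic interval of all reals in [0, 1] whose binary
# expansion begins with the bit string w, namely [0.w, 0.w + 2**-len(w)].

def dyadic_interval(w: str):
    left = sum(Fraction(int(bit), 2**(i + 1)) for i, bit in enumerate(w))
    return left, left + Fraction(1, 2**len(w))

print(dyadic_interval("01"))  # (Fraction(1, 4), Fraction(1, 2))
```

For example, the string "01" codes exactly the reals in [1/4, 1/2], and the empty string codes all of [0, 1].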
For x ∈ [0, 1], we define x to be p-random (respectively, n^k-random) iff some binary expansion of x is p-random (respectively, n^k-random). If y ∈ ℝ is arbitrary, then we define y to be p-computable (p-random) just as we do for its fractional part y − ⌊y⌋, and similarly for n^k-computability and n^k-randomness. It is well-known that no p-computable real number is p-random.
3 Functions of interest
We will restrict our attention to certain types of real-valued functions of a real variable. We are only interested in the behavior of these functions on p-random inputs. For simplicity, we will only consider functions with domain [0, 1], but this is in no way an essential restriction. Our functions will possess a certain p-computability property and a certain strong variation property. Both these properties are local in the sense that we only care about them in the vicinity of a p-random number.
A function f : [0, 1] → ℝ is weakly p-computable if there exists a polynomial-time computable function f̂ : (𝔻 ∩ [0, 1]) × ℕ → 𝔻 such that for any q ∈ 𝔻 ∩ [0, 1] and i ∈ ℕ, |f̂(q, i) − f(q)| ≤ 2^{−i}.
Furthermore, for constant k ≥ 1, if f̂ is computable in time n^k (where n is the length of the input), then we say that f is weakly n^k-computable.
Note that a weakly p-computable function can behave arbitrarily on the non-dyadic points of [0, 1].
Let f : [0, 1] → ℝ be a function and let γ_w ⊆ [0, 1] be some dyadic interval, where w ∈ {0,1}*. We say that f is weakly p-computable on γ_w iff there exists a polynomial-time computable function f̂ : (𝔻 ∩ γ_w) × ℕ → 𝔻 such that for any q ∈ 𝔻 ∩ γ_w and i ∈ ℕ, |f̂(q, i) − f(q)| ≤ 2^{−i}.
If x ∈ [0, 1], then we say that f is weakly p-computable at x iff f is weakly p-computable on some dyadic interval containing x.
All these notions carry over in the obvious way when “p-computable” is replaced with “n^k-computable.”
We say that f is locally weakly p-computable if f is weakly p-computable at all p-random points in [0, 1].
[Note that x_{wu} is the dyadic rational number corresponding to the string wu (the concatenation of w and u).]
In other words, f is weakly p-computable at x iff we can approximate f on the dyadic rationals in some dyadic interval containing x in polynomial time. Notice that we are not insisting that f have any continuity properties. This means in particular that the values of f on 𝔻 may not uniquely determine f on [0, 1]. Notice also that a function may be locally weakly p-computable but not “globally” p-computable, being patched together nonuniformly from various p-computable functions on different dyadic intervals.
We can extend Definition 3.2 to weak p-computability at an arbitrary point in the natural way.
Let I be an interval, let f : I → ℝ be a function, and let x ∈ I be some point. We say that f strongly varies at x on I iff there is some real constant C > 0 such that either
(1) (f(u) − f(x))/(u − x) ≥ C for all u ∈ I with u ≠ x, or
(2) (f(u) − f(x))/(u − x) ≤ −C for all u ∈ I with u ≠ x.
In case (1) we say that f strongly increases at x on I, and in case (2) f strongly decreases at x on I.
We say that f strongly varies at x if f strongly varies at x on I for some open interval I containing x. We define strongly increasing/decreasing at x analogously.
The notion of strong variation is illustrated in Figure 1.
If f is differentiable in a neighborhood of x and f′(x) ≠ 0, then f strongly varies at x.
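As a quick numerical sanity check of strong variation (our own illustration, not from the paper): the exponential function strongly increases at 0 on [−1, 1] with constant C = 0.6, since its difference quotients there never drop below 1 − 1/e ≈ 0.632.

```python
import math

# Numerical sanity check (ours): exp strongly increases at x = 0 on [-1, 1]
# with constant C = 0.6, i.e. (exp(u) - exp(0)) / (u - 0) >= C on that
# interval.  The minimum quotient is attained at u = -1, where it equals
# 1 - 1/e (about 0.632).

def difference_quotient(f, x, u):
    return (f(u) - f(x)) / (u - x)

C = 0.6
ratios = [difference_quotient(math.exp, 0.0, u / 1000)
          for u in range(-1000, 1001) if u != 0]
print(min(ratios) >= C)  # True
```

This matches the proposition above: exp is differentiable with exp′(0) = 1 ≠ 0, so the difference quotients stay bounded away from 0 on a small enough interval.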
4 Main result
Here is our main technical theorem, from which most of the other results in the paper follow easily.
Let I ⊆ [0, 1] be some interval and f : I → ℝ some function. Suppose x is a p-random point in the interior of I. If f is weakly p-computable at x and strongly varies at x, then f(x) is p-random.
4.1 Establishing Theorem 4.1
We start this section with two easy observations which we give without proof.
Let a and b be integers with b ≥ 0, and let x ∈ ℝ. A number x is n^k-random if and only if x + a 2^{−b} is n^k-random, if and only if 2^b x is n^k-random, if and only if −x is n^k-random.
The same then obviously holds when “n^k-random” is replaced with “p-random.”
Let I be an interval, let f : I → ℝ be a function, let a and b be any integers with b ≥ 0, and let x ∈ I. Define g_1(y) = f(y) + a 2^{−b}, g_2(y) = 2^b f(y), and g_3(y) = −f(y), for all y ∈ I.
Then f strongly varies at x on I (respectively, is n^k-computable at x) if and only if all of g_1, g_2, g_3 strongly vary (respectively, are n^k-computable) at x on I. The sense of variation (strongly increasing or strongly decreasing) of g_1 and g_2 is the same as that of f, and that of g_3 is opposite that of f.
The same then obviously holds when “n^k-computable” is replaced with “p-computable.”
Theorem 4.1 is a corollary of the next lemma, which gives the theorem its essential technical content. We prove this lemma later in this section. For convenience, we will assume that our function f is monotone ascending. We will show later that this is not an essential restriction.
For any k ≥ 1 there exists ℓ ≥ 1 such that, for any weakly n^k-computable, monotone ascending f : [0, 1] → ℝ and any x ∈ (0, 1) such that f strongly increases at x on [0, 1], if f(x) is not n^k-random, then x is not n^ℓ-random.
Let f : [0, 1] → ℝ be weakly p-computable and monotone ascending on [0, 1]. Suppose that x ∈ (0, 1) and that f strongly increases at x on [0, 1]. Then if f(x) is not p-random, then x is not p-random.
To prove Lemma 4.4, we need to construct a suitably time-bounded martingale d′ that succeeds on x, given a time-bounded one, d, that succeeds on f(x). If martingale d succeeds on f(x), then we can define d′ (for a given string w) to sample the values of d on points in f(γ_w). We do this by sandwiching d′ between a lower bound L_k and an upper bound U_k. We get U_k by overestimating d's total contribution in an interval around f(γ_w) (Equation (1), below), and we get L_k by underestimating it (Equation (2)). These estimates become more refined as k increases, and, provided d is conservative, they reach a common limit as k goes to infinity, yielding a well-defined martingale d′.
Let f : [0, 1] → [0, 1] be monotone ascending and let d be a martingale. For every w ∈ {0,1}*, let J_w denote the interval f(γ_w), and for every k ≥ |w|, define

U_k(w) = Σ_{u ∈ {0,1}^k : γ_u ∩ J_w ≠ ∅} 2^{|w|−k} d(u),   (1)

L_k(w) = Σ_{u ∈ {0,1}^k : γ_u ⊆ J_w} 2^{|w|−k} d(u).   (2)
The only differences between the sums in Equations (1) and (2) are at most two terms, corresponding to dyadic intervals γ_u that straddle an endpoint of J_w. The assumption that d is conservative is needed to ensure that these terms are not too large, and thus that U_k(w) and L_k(w) are close to each other. The following lemma is routine and easy to check.
Let f and d be as in Definition 4.6. For any w ∈ {0,1}*, if γ_v is any dyadic interval contained in J_w (that is, γ_v ⊆ J_w), then letting j = |v|,

2^{|w|−j} d(v) ≤ L_j(w) ≤ L_{j+1}(w) ≤ L_{j+2}(w) ≤ ⋯
⋯ ≤ U_{j+2}(w) ≤ U_{j+1}(w) ≤ U_j(w).
The first inequality holds because γ_v ⊆ J_w, and hence 2^{|w|−j} d(v) is one of the terms in the sum defining L_j(w). To see the other inequalities on the top line, notice that each term 2^{|w|−k} d(u) in the expression for L_k(w) (for some u ∈ {0,1}^k) is equal to the sum of two terms occurring in the expression for L_{k+1}(w). This follows from the fact that any γ_u ⊆ J_w contains both γ_{u0} and γ_{u1}, and so the latter two intervals are also subsets of J_w.
Clearly, all terms in the sum for L_k(w) are included in the sum for U_k(w), and so every quantity on the top line is less than or equal to the corresponding quantity on the bottom line.
Finally, the inequalities on the bottom line all hold: If we split each term 2^{|w|−k} d(u) in the expression for U_k(w) into the equivalent sum
2^{|w|−(k+1)} d(u0) + 2^{|w|−(k+1)} d(u1),
then this accounts for all the terms in U_{k+1}(w) (and possibly more). ∎
Let f and d be as in Definition 4.6. We define the upper f-shift of d to be the function d⁺ defined for all w ∈ {0,1}* as
d⁺(w) = lim_{k→∞} U_k(w).
Similarly, we define the lower f-shift of d to be
d⁻(w) = lim_{k→∞} L_k(w).
Since for any fixed w, U_k(w) and L_k(w) are both monotone functions of k (decreasing and increasing, respectively) by Lemma 4.7, the limits in the definition above clearly exist, and
d⁻(w) ≤ d⁺(w)
for all w.
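The squeezing of the upper and lower sums can be illustrated numerically (our own sketch, not from the paper; the constant-1 martingale and the map x ↦ x²/3 are our assumptions, chosen so that the image interval has non-dyadic endpoints):

```python
from fractions import Fraction

# Numerical illustration (ours): for the monotone map f(x) = x**2 / 3 and the
# trivial conservative martingale d == 1, the outer sums U_k(w) and the inner
# sums L_k(w) over length-k dyadic intervals squeeze toward a common limit
# as k grows.

def f(x):
    return x * x / 3

def sums(w: str, k: int):
    n = len(w)
    left = Fraction(int(w, 2), 2**n)               # left endpoint of the interval coded by w
    lo, hi = f(left), f(left + Fraction(1, 2**n))  # image interval; f is monotone ascending
    scale = Fraction(1, 2**(k - n))                # the weight 2^(|w| - k), with d == 1
    U = L = Fraction(0)
    for m in range(2**k):                          # dyadic interval [m/2^k, (m+1)/2^k]
        a, b = Fraction(m, 2**k), Fraction(m + 1, 2**k)
        if a < hi and b > lo:                      # meets the interior of the image
            U += scale
        if lo <= a and b <= hi:                    # lies inside the image
            L += scale
    return L, U

for k in (4, 8, 12):
    L, U = sums("1", k)
    print(float(L), float(U))  # both approach 2 * |f([1/2, 1])| = 0.5
```

Only the handful of boundary intervals separate the two sums, so their difference shrinks like 2^{1−k} here, exactly the behavior the conservativeness argument below exploits.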
For some martingales, the upper and lower f-shifts may differ, but they coincide for conservative martingales.
Fix f and d as in Definition 4.6. Suppose further that d is conservative. For any w ∈ {0,1}* and k ≥ |w|,

U_k(w) − L_k(w) ≤ 2^{|w|+1} (3/4)^k.   (3)

Here we only use Property (1) of d being conservative. All the terms in the two sums on the left-hand side of the inequality (3) cancel except for those corresponding to at most two dyadic intervals γ_u and γ_v, the former containing the left endpoint of J_w and the latter containing the right endpoint. Thus we get

U_k(w) − L_k(w) ≤ 2^{|w|−k} (d(u) + d(v)) ≤ 2^{|w|−k} · 2 (3/2)^k = 2^{|w|+1} (3/4)^k,

since Property (1) implies d(u) ≤ (3/2)^{|u|} for every u. ∎
Let f and d be as in Definition 4.6. If d is conservative, then d⁺(w) = d⁻(w) for all w ∈ {0,1}*.
Immediate from Lemma 4.9. ∎
If f and d are as in Definition 4.6 and d is conservative, then we let d′ denote the common value d⁺ = d⁻, and we call d′ the f-pullback of d.
On input string w, d′ merely samples d over the interval J_w = f(γ_w).
If f and d are as in Definition 4.6 and d is conservative, then its f-pullback d′ is a martingale.
To see that d′ is a martingale, first we notice that
0 ≤ d′(w) ≤ U_{|w|}(w) < ∞
for every w, so d′ is well-defined and nonnegative. Next, by examining terms in the sums and using Lemma 4.7, we notice that for any w ∈ {0,1}* and k > |w|,
L_k(w0) + L_k(w1) ≤ 2 L_k(w) ≤ 2 U_k(w) ≤ U_k(w0) + U_k(w1).
Taking the limit of all sides as k → ∞, we get
d′(w0) + d′(w1) ≤ 2 d′(w) ≤ d′(w0) + d′(w1).
All these quantities are equal, since the two extremes are equal. Thus
2 d′(w) = d′(w0) + d′(w1),
and d′ is a martingale. ∎
The next lemma is key. Here is where we make essential use of the strongly increasing property of f. (The hypothesis here is slightly weaker, though.)
Let f and d be as in Definition 4.6 with d being conservative. Suppose that there exist x ∈ [0, 1] with binary expansion α and a real C > 0 such that

|J_w| ≥ C 2^{−|w|}   (4)

for all w ⊏ α. If d succeeds on f(x) and x ∉ 𝔻, then d′ succeeds on α.
Note that Equation (4) implies if and if .
Set . We then have .
Since x ∉ 𝔻, α has infinitely many 0's and infinitely many 1's. This implies that α has infinitely many occurrences of “01” as a substring, that is, there are infinitely many n such that α[n]α[n + 1] = 01. Fix any real M > 0. Since d succeeds on f(x) and d is conservative, d strongly succeeds on f(x), and so there is some n_0 such that d(β[0..m − 1]) ≥ M for all m ≥ n_0, where β is a binary expansion of f(x). Fix some n ≥ n_0 such that α[n]α[n + 1] = 01. Let w = α[0..n + 1]. We have w ⊏ α and x ∈ γ_w. Let v be a suitable prefix of β, noting that f(x) ∈ γ_v. Here is the situation at w:
We have . Also, since , we have , which implies , as noted above. It follows immediately that .
Proof of Claim 4.14.
Finally, we need a lemma regarding the computation time of d′. The challenge in the proof is in finding an easy (i.e., polynomial-time) way to approximate the U_k and L_k values.
For any k ≥ 1 there exists ℓ ≥ 1 such that, for any conservative n^k-computable martingale d and weakly n^k-computable function f monotone ascending on [0, 1] with f([0, 1]) ⊆ [0, 1], the f-pullback martingale d′ of d is n^ℓ-computable. (As a consequence, if d is p-computable and f is weakly p-computable on [0, 1], then d′ is p-computable.)
The idea is that, given input w of length n and accuracy parameter i, we will approximate some number between L_k(w) and U_k(w) for some sufficiently large k (but still polynomial in n and i). We have no hope of computing the sums of Equations (1) or (2) directly, as there are exponentially many terms. Fortunately, large blocks of the sum can be computed all at once by evaluating d on shorter inputs. The condition that f([0, 1]) ⊆ [0, 1] is only for technical convenience and is not necessary; it is only required that f̂ be computable in polynomial time.
Given conservative d and monotone f as above, fix approximators d̂ and f̂ for d and f,
computable in time n^k each, such that for all i ∈ ℕ, u ∈ {0,1}*, and q ∈ 𝔻 ∩ [0, 1],
|d̂(u, i) − d(u)| ≤ 2^{−i}  and  |f̂(q, i) − f(q)| ≤ 2^{−i}.
Fix an input w ∈ {0,1}* and let n = |w|.
Fix an accuracy parameter i ∈ ℕ. We will choose k to be a sufficiently large integer (depending on n and i) to be determined later. We prove the lemma by describing a procedure (running time polynomial in n + i) to compute a number D such that |D − d′(w)| ≤ 2^{−i}.
Here is the procedure:
1. Compute dyadic rationals a and b, both with denominator 2^k, so that they approximate f to within 2^{−k} at each endpoint of γ_w:
Compute f̂(x_w, k + 1) and round it to the nearest dyadic rational a with denominator 2^k, so that |a − f̂(x_w, k + 1)| ≤ 2^{−(k+1)}. Notice that
|a − f(x_w)| ≤ |a − f̂(x_w, k + 1)| + |f̂(x_w, k + 1) − f(x_w)| ≤ 2^{−(k+1)} + 2^{−(k+1)} = 2^{−k}.
In other words, a approximates f(x_w) to within 2^{−k}.
(Note that x_w is the left endpoint of γ_w.)
If w = 1^n, then let b be the dyadic rational with denominator 2^k closest to f̂(1, k + 1). Otherwise, let w′ be the lexicographical successor of w in {0,1}^n, and compute f̂(x_{w′}, k + 1). Let b be the dyadic rational with denominator 2^k closest to f̂(x_{w′}, k + 1). Similarly to a, we have |b − f(x_w + 2^{−n})| ≤ 2^{−k}.
(Note that x_w + 2^{−n} is the right endpoint of γ_w, and that x_{w′} = x_w + 2^{−n} when w ≠ 1^n.)
2. Without loss of generality, we can assume that a ≤ b: if necessary, reset a := b. These adjustments don't affect the inequalities above.
3. Let S be the set of all ⊑-minimal strings u ∈ {0,1}* such that γ_u ⊆ [a, b].
Notice that no string in S is a proper prefix of any other string in S; hence the sets γ_u for u ∈ S are pairwise disjoint except for endpoints. Further, it is clear that ⋃_{u∈S} γ_u = [a, b] if a < b (otherwise, S = ∅).
We claim that S, and hence D, can be computed in time polynomial in n + i + k, with a polynomial time bound exponent depending only on k (and not on w or i). This follows from three facts:
1. a and b can be computed in time polynomial in n + k.
2. Every string in S has length at most k.
3. There are at most two strings in S of any given length.
Fact 1 is clear from the procedure description. For Fact 2, notice that, since a and b both have denominator 2^k, if u is any string such that |u| > k and γ_u ⊆ [a, b], then removing the last bit of u yields a proper prefix u′ of u such that γ_{u′} ⊆ [a, b] as well, and so u is not ⊑-minimal, and thus u ∉ S.
Similarly for Fact 3, if u_1, u_2, u_3 are any three distinct strings of the same length given in lexicographical order, and γ_{u_1}, γ_{u_3} ⊆ [a, b], then u_2 ∉ S. To see this, suppose |u_1| = |u_2| = |u_3| = j. Let u′ be the result of removing the last bit of u_2. Then γ_{u′} includes γ_{u_2} and another dyadic interval of length 2^{−j} immediately to the left or right of γ_{u_2}. In either case, the left endpoint of γ_{u′} is not to the left of that of γ_{u_1}, and the right endpoint of γ_{u′} is not to the right of that of γ_{u_3}. So γ_{u′} ⊆ [a, b], which means that u_2 is not ⊑-minimal, and thus u_2 ∉ S.
Thus S has at most 2k strings, each of length at most k, and so the following greedy algorithm for computing S (given a and b) runs in time polynomial in k:

  S := ∅; c := a
  WHILE c < b DO
    let u be the shortest string such that x_u = c and x_u + 2^{−|u|} ≤ b;
    S := S ∪ {u};
    c := c + 2^{−|u|}
  END-WHILE
  return S

It then follows that D can be computed from S (in Step 3, above) in time polynomial in n + i + k.
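The greedy computation can be sketched in Python (our own rendering of the pseudocode, not the paper's code; each interval is represented by its left endpoint and its length exponent):

```python
from fractions import Fraction

# Sketch (ours) of the greedy cover step: compute the inclusion-minimal dyadic
# intervals that exactly tile [a, b], where a and b are dyadic rationals with
# denominator 2**k and a <= b.  Each interval [c, c + 2**-j] is returned as a
# pair (c, j); it corresponds to the length-j bit string with left endpoint c.

def minimal_dyadic_cover(a: Fraction, b: Fraction, k: int):
    S, c = [], a
    while c < b:
        # find the shortest interval starting at c that is aligned to the
        # dyadic grid at its own scale and still fits inside [a, b]
        for j in range(k + 1):
            width = Fraction(1, 2**j)
            if c % width == 0 and c + width <= b:
                break
        S.append((c, j))
        c += width
    return S

print(minimal_dyadic_cover(Fraction(1, 4), Fraction(1), 4))
# [(Fraction(1, 4), 2), (Fraction(1, 2), 1)], i.e. [1/4, 1/2] and [1/2, 1]
```

As Facts 2 and 3 predict, the cover has at most two intervals per length, so the loop body runs O(k) times.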
It remains to show that k can be chosen so that D is sufficiently close to d′(w). First, note that, due to the closeness of our approximations to the endpoints of J_w,
(The sum in the middle includes all the terms of the sum on the left, and the sum on the right includes all the terms of the sum in the middle.)
Since ⋃_{u∈S} γ_u = [a, b], and the intervals γ_u intersect only at endpoints, we can rewrite the sum in the middle of (5) as
Σ_{u∈S} 2^{n−|u|} d(u),
the first equality owing to the fact that d is a martingale. So Equation (5) becomes
Now we use the fact that d(u) ≤ (3/2)^{|u|} for all u ∈ {0,1}*. From (6) we get
We have L_k(w) ≤ d′(w) ≤ U_k(w), so the above inequality implies