Diagonally non-computable functions and fireworks

Laurent Bienvenu    Ludovic Patey
Abstract

A set of reals $\mathcal{C}$ is said to be negligible if there is no probabilistic algorithm which generates a member of $\mathcal{C}$ with positive probability. Various classes have been proven to be negligible, for example the Turing upper-cone of a non-computable real, the class of coherent completions of Peano Arithmetic or the class of reals of minimal Turing degree. One class of particular interest in the study of negligibility is the class of diagonally non-computable (DNC) functions, proven by Kučera to be non-negligible in a strong sense: every Martin-Löf random real computes a DNC function. Ambos-Spies et al. showed that the converse does not hold: there are DNC functions which compute no Martin-Löf random real. In this paper, we show that the set of such DNC functions is in fact non-negligible. More precisely, we prove that for every sufficiently fast-growing computable $h$, every 2-random real computes an $h$-bounded DNC function which computes no Martin-Löf random real. Further, we show that the same holds for the set of reals which compute a DNC function but no bounded DNC function. The proofs of these results use a combination of a technique due to Kautz (which, following a metaphor of Shen, we like to call a ‘fireworks argument’) and bushy tree forcing, which is the canonical forcing notion used in the study of DNC functions.

1 Background

1.1 Negligibility, Levin-V’yugin algebra and DNC functions

Let $\mathcal{C}$ be a class of infinite binary sequences (a.k.a. reals) and consider the set

$$\{X \in 2^\omega : X \text{ computes some member of } \mathcal{C}\}.$$

By Kolmogorov’s 0-1 law, its (Lebesgue) measure is either $0$ or $1$. If it is equal to $0$, the class $\mathcal{C}$ is said to be negligible, a terminology due to Levin and V’yugin [17]. Equivalently, this means that there is no (infinite) probabilistic algorithm which generates a member of $\mathcal{C}$ with positive probability.

Various classes have been proven to be negligible, for example the Turing upper-cone of a non-computable real $X$ [5] (or of a countable class of such reals), the class of coherent completions of Peano Arithmetic [10], the class of reals of minimal Turing degree [18], the class of shift-complex sequences [20, 12], etc. Likewise, many classes have been shown to be non-negligible (or ‘typical’). Obviously, classes of positive measure, such as the class of Martin-Löf random reals, are all non-negligible. Much more interesting examples were given by Kautz [11] (very similar ideas were used by V’yugin in [22, 23]). He showed that the following are non-negligible: the class of hyperimmune sets, the class of 1-generic reals, and the class of reals of CEA degree. All three results are variations of the same technique. However, the way Kautz presents his technique is quite abstract and in some sense hides its true spirit, namely that what is being used is a probabilistic algorithm. In the paper [19], Rumyantsev and Shen give a more explicit and intuitive explanation of the underlying algorithm, using the metaphor of a fireworks shop in which a customer is trying to either buy a box of good fireworks, or expose the vendor by opening a flawed one, and uses a probabilistic algorithm to maximize his chances of success. Following the tradition of colourful terminology in computability theory (Lerman’s pinball machine, Nies’ decanter argument…), we propose to refer to this method as a fireworks argument, the precise template of which will be recalled in the next section.

The dichotomy between negligibility and non-negligibility has received quite a lot of attention in recent years. We refer the reader to [2, 3, 21] for a panorama of the existing results in this direction.

One class of particular interest in the study of negligibility is the class of diagonally non-computable functions, or ‘DNC functions’ for short. It consists of the total functions $f : \mathbb{N} \rightarrow \mathbb{N}$ such that $f(n) \neq \varphi_n(n)$ for all $n$, where $(\varphi_n)_{n \in \mathbb{N}}$ is a standard enumeration of partial computable functions from $\mathbb{N}$ to $\mathbb{N}$. By definition, there is no computable DNC function. On the other hand, as shown by Kučera [15], the class of DNC functions is non-negligible. If we take the point of view of probabilistic algorithms, this is clear: for each $n$ there is only one value for $f(n)$ to avoid (namely $\varphi_n(n)$, if it is defined), so by picking $f(n)$ at random between $0$ and some large integer (e.g. $2^{n+1}$), we ensure a positive probability of success. The situation becomes more interesting when one restricts the class to the subclass

$$\mathrm{DNC}_h = \{f \in \mathrm{DNC} : f(n) < h(n) \text{ for all } n\}$$

where $h$ is a given function, typically a computable one. The faster $h$ grows, the easier it is to obtain an element of $\mathrm{DNC}_h$. And indeed, depending on the growth rate of $h$, the class $\mathrm{DNC}_h$ can be negligible or non-negligible (more specifically, for $h$ computable, $\mathrm{DNC}_h$ is negligible if and only if $\sum_n 1/h(n) = \infty$; this is an unpublished result due to J. Miller, see [3] for a proof).
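To give a quantitative feel for Miller’s criterion, the following small Python computation (ours, purely illustrative; the function names are not from the paper) estimates the success probability $\prod_n (1 - 1/h(n))$ of the naive probabilistic algorithm which picks each value $f(n)$ uniformly below $h(n)$:

```python
import math

def success_lower_bound(h, levels=500):
    """prod_{n < levels} (1 - 1/h(n)): the probability that independently
    picking f(n) uniformly in {0, ..., h(n)-1} avoids the (at most one)
    forbidden value phi_n(n) at every level n."""
    return math.exp(sum(math.log1p(-1.0 / h(n)) for n in range(levels)))

# Summable case: sum 1/2^(n+1) < infinity, the probability stays positive.
print(success_lower_bound(lambda n: 2 ** (n + 1)))  # ~0.2888
# Non-summable case: sum 1/(n+2) diverges, the probability tends to 0
# (here the product telescopes to 1/(levels + 1)).
print(success_lower_bound(lambda n: n + 2))         # ~0.002
```

The first product converges to a positive constant, matching the non-negligibility of $\mathrm{DNC}_h$ when $\sum_n 1/h(n) < \infty$; the second tends to $0$ as the number of levels grows.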

This notion relativizes to an arbitrary oracle $X$: a $\mathrm{DNC}^X$ function is a function $f$ such that $f(n) \neq \varphi^X_n(n)$ for all $n$. Likewise, we set

$$\mathrm{DNC}^X_h = \{f \in \mathrm{DNC}^X : f(n) < h(n) \text{ for all } n\}.$$

Of course, the stronger the oracle $X$, the harder it is to compute a $\mathrm{DNC}^X$ function.

In this paper, we study the role of DNC functions in the setting of the Levin-V’yugin algebra, which is the algebra of Turing-invariant Borel sets ‘modulo negligibility’. That is, for two Turing-invariant sets $\mathcal{A}$ and $\mathcal{B}$ we write $\mathcal{A} \leq \mathcal{B}$ if $\mathcal{A} \setminus \mathcal{B}$ is negligible. We say that $\mathcal{A}$ and $\mathcal{B}$ are equivalent if $\mathcal{A} \leq \mathcal{B}$ and $\mathcal{B} \leq \mathcal{A}$. We call an equivalence class for this equivalence relation a Levin-V’yugin degree.

Despite having been introduced some time ago and being a very natural notion, little work has been done on the Levin-V’yugin degrees, except for the seminal papers [17, 22, 23] and some ongoing work by Hölzl and Porter. It is during discussions with the authors of the latter that a question arose. Kučera’s result discussed above shows that reals of Martin-Löf random degree are also of DNC degree, so $\mathcal{R} \leq \mathcal{D}$, where $\mathcal{R}$ is the set of reals Turing-equivalent to a Martin-Löf random real, and $\mathcal{D}$ the set of reals Turing-equivalent to a DNC function. On the other hand, it is well-known that there are DNC functions which do not Turing-compute any Martin-Löf random real (see subsection 1.3 below). But does this result translate in the setting of the Levin-V’yugin algebra? More precisely, is it then the case that $\mathcal{D} \not\leq \mathcal{R}$ (i.e., that $\mathcal{D} \setminus \mathcal{R}$ is non-negligible)? In this paper, we answer this question in the affirmative. Namely, we prove:

Theorem 1 (Main theorem)

For every sufficiently fast-growing computable $h$, every 2-random (i.e., Martin-Löf random relative to $\emptyset'$) real $X$ computes some $f \in \mathrm{DNC}_h$ which does not compute any Martin-Löf random real.

Not only is this result interesting in its own right, but its proof is particularly instructive. It combines fireworks arguments with bushy tree forcing, a forcing notion used in many recent papers to study the properties of DNC functions [1, 4, 7, 9, 13]. To our knowledge, our proof is the most elaborate use of a fireworks argument to date. It illustrates quite convincingly the power of the technique, and is likely to yield further applications in the future.

Also interesting is the fact that if we want to study functions which are DNC relative to some oracle $Z$, we can state a stronger theorem than what we would get from a straightforward relativization of Theorem 1.

Theorem 2 (Main theorem, relativized)

For any real $Z$ and sufficiently fast-growing computable $h$, every real which is both $Z$-random and 2-random computes a function $f \in \mathrm{DNC}^Z_h$ which itself computes no Martin-Löf random real.

A straightforward relativization of Theorem 1 would require $X$ to be 2-random relative to $Z$, and would give a function $f \in \mathrm{DNC}^Z_h$ which does not compute any $Z$-Martin-Löf random real, but which could still compute a Martin-Löf random real. Note that taking $Z = \emptyset'$ gives us a stronger result than Theorem 1: every 2-random real $X$ computes some $f \in \mathrm{DNC}^{\emptyset'}_h$ which does not compute any Martin-Löf random real.

We finally remark en passant that ‘2-random’ in the hypothesis of Theorem 2 cannot be weakened to ‘Martin-Löf random’, as shown by the following easy proposition.

Proposition 3

There is a Martin-Löf random real which computes no member of $\mathcal{D} \setminus \mathcal{R}$.

Proof.

Take any hyperimmune-free Martin-Löf random real $X$. Because of hyperimmune-freeness, everything Turing-computed by $X$ is in fact tt-computed by $X$. Moreover, every non-computable real which is tt-computed by a Martin-Löf random real is also of Martin-Löf random degree [6]. In particular, every DNC function computed by $X$ is of Martin-Löf random degree. ∎

Remark.

Readers who are experts in algorithmic randomness may not be fully satisfied with this last proposition, and rightfully so. Indeed, there is a wealth of algorithmic randomness notions between Martin-Löf randomness and 2-randomness, and it would be interesting to know precisely which levels of randomness are sufficient for a real to compute a member of $\mathcal{D} \setminus \mathcal{R}$. A more precise answer is the following: weak-2-randomness is not sufficiently strong, but Demuth randomness is. For the first part of this assertion, observe that in the proof of Proposition 3, one can take $X$ to be weak-2-random (indeed there are hyperimmune-free weak-2-randoms). The second part is more involved and requires a fine analysis of fireworks arguments which will be done in a forthcoming paper by Christopher Porter and the first author. However, the crude ‘2-randomness’ bound we use in this paper is sufficient for our main goal, which is to prove the non-negligibility of $\mathcal{D} \setminus \mathcal{R}$.

1.2 Notation and terminology

Unless otherwise specified, a string is a finite sequence of integers. We denote the set of strings by $\omega^{<\omega}$, by $\varepsilon$ the empty string and by $|\sigma|$ the length of a string $\sigma$. Unless specified otherwise, a sequence is an infinite list of integers. The set of sequences is denoted by $\omega^\omega$. We will sometimes need to consider binary sequences (which we also call reals), the set of which we denote by $2^\omega$. The $n$-th element of a string or sequence $\sigma$ is denoted by $\sigma(n)$, and $\sigma \upharpoonright n$ denotes the finite sequence consisting of the first $n$ values of $\sigma$. A string $\sigma$ is a prefix of a string $\tau$ (we also say that $\tau$ extends $\sigma$), written $\sigma \preceq \tau$, if $|\sigma| \leq |\tau|$ and $\sigma(i) = \tau(i)$ for all $i < |\sigma|$.

A sequence $f$ is said to be a DNC function (resp. $\mathrm{DNC}^X$ function) if for all $n$, $f(n) \neq \varphi_n(n)$ (resp. $f(n) \neq \varphi^X_n(n)$), where $(\varphi^X_n)_{n \in \mathbb{N}}$ is a standard enumeration of partial computable functions from $\mathbb{N}$ to $\mathbb{N}$ with oracle $X$.

A tree is a set of strings closed downwards under the prefix relation, i.e., if $\sigma \in T$ and $\tau \preceq \sigma$ then $\tau \in T$. Members of a tree are often referred to as nodes. A node $\tau$ is a child (or immediate extension) of a node $\sigma$ in a tree $T$ if $\sigma$ and $\tau$ are nodes of $T$, $\sigma \preceq \tau$, and $|\tau| = |\sigma| + 1$. A leaf of a tree $T$ is a node with no immediate extension in $T$. A path in a tree $T$ is a sequence $X$ such that every initial segment of $X$ is in $T$. The set of paths of a tree $T$ forms a closed set denoted by $[T]$.

1.3 Bushy trees

In this section we present the notion of bushy tree and its main properties. Roughly speaking, bushiness is a purely combinatorial property, which states that a tree is ‘sufficiently fast branching’, in a way that guarantees the existence of a DNC path through the tree. The idea of bushy trees was invented by Kumabe, who used it to construct a DNC function of minimal Turing degree (see [16] for an exposition of this result), which in particular shows that there is a DNC function which computes no Martin-Löf random real (as no Martin-Löf random real can have minimal degree). Since then, bushy trees have been successfully applied to the study of DNC functions. We refer the reader to the excellent survey of Khan and Miller [13].

Definition 1 (Bushy tree)

Fix a function $g$ and a string $\sigma$. A tree $T$ is $g$-bushy (resp. exactly $g$-bushy) above $\sigma$ if every $\tau \in T$ is comparable with $\sigma$, and whenever $\tau \succeq \sigma$ is not a leaf of $T$, it has at least (resp. exactly) $g(|\tau|)$ immediate children. We call $\sigma$ the stem of $T$.

Definition 2 (Big set, small set)

Fix a function $g$ and some string $\sigma$. A set $B \subseteq \omega^{<\omega}$ is $g$-big above $\sigma$ if there exists a finite tree $T$ which is $g$-bushy above $\sigma$ and such that all leaves of $T$ are in $B$. If no such tree exists, $B$ is said to be $g$-small above $\sigma$.

Bushy tree forcing generally consists of building a decreasing sequence of infinite bushy trees $T_0 \supseteq T_1 \supseteq \dots$, where each $T_i$ is $g_i$-bushy above some string $\sigma_i$. Each stem $\sigma_i$ is an initial segment of the constructed sequence. During the construction we maintain a set $B$ of “bad” extensions, i.e., of strings to avoid. This set must remain $b_i$-small above $\sigma_i$ at any stage $i$, for appropriate functions $b_i$. Bushy tree forcing is especially convenient for building DNC functions. Let $B_{\mathrm{DNC}}$ be the set of strings which are not initial segments of any DNC function:

$$B_{\mathrm{DNC}} = \{\sigma \in \omega^{<\omega} : \sigma(n) = \varphi_n(n)\!\downarrow \text{ for some } n < |\sigma|\}.$$

One can easily see that $B_{\mathrm{DNC}}$ is $2$-small above $\varepsilon$ (here and below, a constant $c$ denotes the constant function with value $c$).

The following three lemmas are at the core of every bushy tree argument. We state them without proof and refer the reader to [13] for details.

Lemma 1 (Concatenation)

Fix a function $g$. Suppose that $A$ is $g$-big above a string $\sigma$. Let $(C_\tau)_{\tau \in A}$ be a family of subsets of $\omega^{<\omega}$. If $C_\tau$ is $g$-big above $\tau$ for every $\tau \in A$, then $\bigcup_{\tau \in A} C_\tau$ is $g$-big above $\sigma$.

The concatenation property is often used in the following contrapositive form: if we are given a finite tree $T$ which is $g$-bushy above some string $\sigma$ and a “bad” set $B$ of extensions to avoid which is $g$-small above $\sigma$, then there exists a leaf $\tau$ of $T$ such that $B$ is still $g$-small above $\tau$. In particular, if a set $A$ is $g$-big above $\sigma$, there exists an extension $\tau$ of $\sigma$ which is in $A$ and such that $B$ is still $g$-small above $\tau$.

Lemma 2 (Smallness additivity)

Suppose that $B_1, \dots, B_k$ are subsets of $\omega^{<\omega}$, $g_1$, $g_2$, …, $g_k$ are functions, and $\sigma$ is a string. If $B_i$ is $g_i$-small above $\sigma$ for all $i$, then $B_1 \cup \dots \cup B_k$ is $(g_1 + \dots + g_k)$-small above $\sigma$.

Lemma 3 (Small set closure)

We say that $B \subseteq \omega^{<\omega}$ is $g$-closed if whenever $B$ is $g$-big above a string $\tau$, then $\tau \in B$. Accordingly, the $g$-closure of any set $B$ is the set $\{\tau : B \text{ is } g\text{-big above } \tau\}$. If $B$ is $g$-small above a string $\sigma$, then its $g$-closure is also $g$-small above $\sigma$.

As explained in [13], Lemma 3 is very useful in our constructions. We are given a “bad” set $B$ of nodes, which is $g$-small above $\sigma$, where $\sigma$ is a partial approximation of the object we are constructing. We want to extend $\sigma$ while still avoiding $B$ and, in particular, while preserving the $g$-smallness of $B$. Lemma 3 enables us to assume w.l.o.g. that if $\tau$ is an extension of $\sigma$ which does not preserve the $g$-smallness of $B$, then $\tau$ is already in $B$.

The next lemma is very simple and yet central in this paper. It expresses the fact that if a set $B$ is sufficiently small in a $g$-bushy tree $T$, then there is only a small probability that a random path of the tree meets $B$ (i.e., has a member of $B$ as prefix). By “random path”, we mean the probability distribution over paths induced by a downward random walk where one starts at the root and at each step goes from a node to one of its children, all children being given the same probability of being picked.

Lemma 4

Fix two positive functions $g$ and $b$ with $b \leq g$. If $T$ is an infinite tree which is $g$-bushy above $\varepsilon$ and $B$ is $b$-small above $\varepsilon$, then the probability that a random path of $T$ avoids $B$ is greater than

$$\prod_{n \geq 0} \Bigl(1 - \frac{b(n)}{g(n)}\Bigr).$$

Note that this quantity is positive if and only if $\sum_n b(n)/g(n) < \infty$, due to the identity $\prod_n (1 - x_n) = \exp\bigl(\sum_n \ln(1 - x_n)\bigr)$ and the asymptotic estimate $\ln(1 - x) \sim -x$ as $x \to 0$.
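For completeness, here is the short computation behind this positivity claim (our rendering), under the harmless extra assumption that $b(n)/g(n) \leq 1/2$ for almost all $n$:

$$\prod_{n}\Bigl(1-\frac{b(n)}{g(n)}\Bigr) = \exp\Bigl(\sum_{n}\ln\Bigl(1-\frac{b(n)}{g(n)}\Bigr)\Bigr), \qquad -2x \leq \ln(1-x) \leq -x \quad \text{for } 0 \leq x \leq \tfrac{1}{2},$$

so the exponent is finite if and only if $\sum_n b(n)/g(n) < \infty$, i.e., the product is positive exactly in that case.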

Proof.

Without loss of generality, we can assume that $B$ is $b$-closed (otherwise, take its $b$-closure). We prove by induction over $n$ that the probability of having avoided $B$ by the time we reach depth $n$ is at least $\prod_{i < n} (1 - b(i)/g(i))$ (a quantity equal to $1$ for $n = 0$, by convention). The lemma immediately follows from this fact.

The base case is trivial, as $\varepsilon$ is the only node of depth $0$ and $B$ is $b$-small above $\varepsilon$. Assume the claim is true for some depth $n$. Suppose we have reached a node $\sigma$ of length $n$ such that $B$ is $b$-small above $\sigma$. By $g$-bushiness of $T$, $\sigma$ has at least $g(n)$ immediate extensions. If $B$ were $b$-big above $b(n)$-many immediate extensions of $\sigma$, then $B$ would also be $b$-big above $\sigma$, by $b$-closedness. Hence $B$ is $b$-small above all but at most $b(n) - 1$ immediate extensions of $\sigma$. It follows easily that, conditional on having reached $\sigma$, the probability of avoiding $B$ at the next level is at least $1 - b(n)/g(n)$. This finishes the induction. ∎

Remark.

Lemma 4 makes no computability assumption on $B$ and $T$. However, when $T$ is computable, taking a random path of $T$ can be performed by a probabilistic algorithm, which will then produce a path avoiding $B$ with probability at least $\prod_n (1 - b(n)/g(n))$ (and this still makes no computability assumption about $B$).

Now we see how randomness helps us compute DNC functions: take a computable function $h$ such that $\sum_n 2/h(n) < \infty$ (which is equivalent to $\sum_n 1/h(n) < \infty$; for example, $h(n) = 2^{n+2}$ will do) and take an exactly $h$-bushy tree $T$ above $\varepsilon$. Now, take a path of $T$ at random. Since, as we saw above, $B_{\mathrm{DNC}}$ is $2$-small above $\varepsilon$, the previous lemma tells us that the probability to get a path of the tree which avoids $B_{\mathrm{DNC}}$, and which is thus a DNC function, is at least $\prod_n (1 - 2/h(n))$, which is positive.
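As an illustration, here is a minimal Python sketch (ours) of this random-walk procedure. Since no real implementation can decide whether $\varphi_n(n)$ converges, the diagonal is mocked by a fixed pseudo-random stand-in; only the shape of the algorithm matters:

```python
import random

def mock_diagonal(n):
    """Stand-in for phi_n(n), which is only partially computable in reality:
    deterministically per n, either 'undefined' (None) or some small value."""
    rng = random.Random(n)
    return rng.randrange(2 ** (n + 1)) if rng.random() < 0.7 else None

h = lambda n: 2 ** (n + 2)   # satisfies sum_n 2/h(n) < infinity

def random_path(depth):
    """Downward random walk on the exactly h-bushy tree: each value f(n)
    is picked uniformly among the h(n) children."""
    return [random.randrange(h(n)) for n in range(depth)]

trials, depth, ok = 10_000, 30, 0
for _ in range(trials):
    f = random_path(depth)
    # f avoids B_DNC iff f(n) differs from phi_n(n) whenever the latter is defined
    if all(f[n] != mock_diagonal(n) for n in range(depth)):
        ok += 1
print(ok / trials)  # comfortably above the Lemma 4 bound prod_n (1 - 2/h(n)) ~ 0.289
```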

1.4 Fireworks arguments

A fireworks argument can be seen as “probabilistic forcing” for $\Sigma^0_1$ properties. It is best illustrated by the following theorem, due to Kautz: there exists a probabilistic algorithm which with positive probability produces a 1-generic real (more precisely, every 2-random real computes a 1-generic real). Let us present the argument in a more abstract way so as to better fit the setting of the next section, where we will (implicitly) force with bushy trees. Let $(\mathbb{P}, \leq)$ be a computable partial order, and let $W_0$, $W_1$, … be a list of uniformly c.e. subsets of $\mathbb{P}$. We want to get a probabilistic algorithm to generate an infinite list $p_0 \geq p_1 \geq p_2 \geq \dots$ such that for every set $W_i$, the requirement $\mathcal{R}_i$ holds, where $\mathcal{R}_i$ says that there exists a $j$ such that either $p_j \in W_i$ holds or for every $q \leq p_j$, $q \notin W_i$. (For example, to get Kautz’s result, one takes for $\mathbb{P}$ the set of finite binary strings, where $\sigma \leq \tau$ if $\sigma$ extends $\tau$, and the $W_i$ are all the c.e. sets of strings. In this paper, $\mathbb{P}$ will typically consist of a set of finitely represented bushy trees, and $T' \leq T$ will mean that $T'$ is a subtree of $T$.)

Suppose we have already built the sequence of $p_j$’s up to some $p_k$, and we want to satisfy the requirement $\mathcal{R}_i$. If we did not care about the effectivity of the construction, we could easily satisfy the requirement by distinguishing two cases:

  • Case 1 (the $\Pi^0_1$ case): there is no $q \leq p_k$ such that $q \in W_i$. In this case nothing needs to be done, $\mathcal{R}_i$ is already satisfied!

  • Case 2 (the $\Sigma^0_1$ case): there is some $q \leq p_k$ such that $q \in W_i$. Here it suffices to search for such a $q$ (which can even be done effectively since $W_i$ is c.e.), and set $p_{k+1} = q$, after which $\mathcal{R}_i$ is satisfied.

Although both cases only require effective actions (do nothing or effectively search for a $q$, respectively), the problem is that one cannot computably distinguish between them, as being in Case 2 is only a c.e. event (hence the name ‘$\Sigma^0_1$ case’). And indeed, in general there is no deterministic algorithm to build a sequence of $p_j$’s satisfying all requirements (otherwise one could, for example, computably build a 1-generic).

There is, however, a probabilistic algorithm which builds such a sequence of $p_j$’s with positive probability. It works as follows. We start with any $p_0$. Next, for each $i$, we pick an integer $n_i$ at random between $1$ and $N(i)$, where $N$ is a fixed computable function (the faster $N$ grows, the higher the probability of success of the algorithm will be). Moreover, for each $i$, we let $c_i$ be a counter, initialized at $0$, which will count how many wrong passive guesses we have made for requirement $\mathcal{R}_i$.

By ‘passive guess’, we mean that in the construction of the $p_j$’s, we assume at some step $k$ that we are in the $\Pi^0_1$ case, i.e., that no $q \leq p_k$ is in $W_i$. It is a passive guess because, as we saw, if it is indeed true, requirement $\mathcal{R}_i$ is already satisfied and no particular action is needed. Of course, this guess may be incorrect, but since $W_i$ is c.e., if it is incorrect we will discover this at some later stage of the algorithm. When this happens, we make a new assumption that there is no $q$ below the current condition in $W_i$, and so on. If at any point we make a correct passive guess, requirement $\mathcal{R}_i$ is satisfied. There is however a danger that all the passive guesses we make for requirement $\mathcal{R}_i$ turn out to be wrong. What we do is use the number $n_i$ as a cap on how many times we allow ourselves to make a wrong passive guess for $\mathcal{R}_i$. If for some $i$ the cap is reached at step $k$, we then make the opposite guess (“active guess”), i.e., that there is a $q \leq p_k$ such that $q \in W_i$, try to find such a $q$ and take it as our $p_{k+1}$, thus satisfying requirement $\mathcal{R}_i$. This guess is “active” because we need to find such a $q$ before doing anything else. But at least, as we said above, since $W_i$ is a c.e. set, such a $q$ can be effectively found if it exists. Then we take $p_{k+1} = q$.

This active/passive guessing strategy is still not guaranteed to work, as one bad case remains: if we make an incorrect active guess for some $i$, we then get stuck while waiting for a $q$ in $W_i$ which we will never find. However, this is the only bad case: if it does not happen for any $i$, then the algorithm succeeds in producing a sequence as wanted. Indeed, for every $i$, either it makes a good passive guess for $\mathcal{R}_i$ that never turns out to be wrong, meaning that for some $j$, no $q \leq p_j$ is in $W_i$, or it makes a good active guess that some $q$ below the current condition is in $W_i$, eventually finds such a $q$, and takes it as an extension.

Why can the probability of success of the algorithm be made arbitrarily close to 1? The reason is the following key observation: for all $i$, if all the other random choices are fixed, then there is at most one value of the random variable $n_i$ for which we get stuck in a loop while trying to satisfy requirement $\mathcal{R}_i$. Indeed, suppose we get stuck having chosen a value $n_i = v$. This means that we made $v$ incorrect passive guesses and then one incorrect active guess. Any smaller choice of $n_i$ would have been fine, because the active guess would then have been made at a moment where a $q$ in $W_i$ below the current condition exists, and such a $q$ would have been found. And any larger choice of $n_i$ would also have been fine, as in this case our $(v+1)$-th guess would have been a correct passive guess. Thus, the probability to get stuck because of requirement $\mathcal{R}_i$ is at most $1/N(i)$, giving a total probability of success of the algorithm of at least $1 - \sum_i 1/N(i)$, which can be made arbitrarily close to $1$ for a well-chosen $N$.
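The following self-contained Python sketch (ours, schematic) renders this passive/active template. The c.e. sets $W_i$ are mocked by search functions, and, for brevity, the refutation of a passive guess and the discovery of the corresponding witness are conflated into one event, whereas in the real argument they are separate enumeration events. The point to notice is that the only way to diverge is an unlucky active guess, which happens for at most one value of each cap:

```python
import random

def fireworks(searches, N, steps=10_000):
    """searches[i](p, budget) mocks the c.e. set W_i: it returns some q <= p
    in W_i if it can find one within the given budget, and None otherwise.
    We draw a cap in {1, ..., N(i)}; passive guesses for requirement i are
    abandoned once refuted cap-many times, after which one active guess is made."""
    caps = [random.randint(1, N(i)) for i in range(len(searches))]
    refuted = [0] * len(searches)
    p = ()                                   # conditions are tuples; q extends p
    for step in range(steps):
        i = step % len(searches)             # round-robin attention
        if refuted[i] >= caps[i]:
            continue                         # requirement i already handled
        q = searches[i](p, step)             # did the passive guess fail?
        if q is None:
            continue                         # passive guess still stands
        refuted[i] += 1
        if refuted[i] == caps[i]:
            p = q                            # active guess: extend into W_i
    return p                                 # in reality the active search may
                                             # diverge; that is the sole failure mode

# Toy usage: W_i = conditions ending with i, revealed after some delay.
make_W = lambda i: (lambda p, budget: p + (i,) if budget > 10 else None)
print(fireworks([make_W(0), make_W(1)], N=lambda i: 2 ** (i + 3)))
```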

Now the last thing we need to check is how much randomness (in the sense of algorithmic randomness) is needed for this probabilistic algorithm to work. Let us explain what we mean. A probabilistic algorithm is nothing but a Turing functional $\Gamma$ with access to a ‘random’ (in the classical sense) oracle $R \in 2^\omega$. This is the $R$ used by the algorithm to make its random decisions. What we have argued above is that the failure set of the algorithm

$$\{R \in 2^\omega : \Gamma^R \text{ is undefined}\}$$

has probability less than $1$ (by ‘undefined’ we mean that the algorithm does not produce an infinite sequence of $p_j$’s).

In fact, this probability can be made arbitrarily small; therefore fireworks arguments do not give us one algorithm but a uniform family of algorithms: for any given integer $k$, one can design, uniformly in $k$, a probabilistic algorithm which fails with probability at most $2^{-k}$ (it suffices to choose the function $N$ such that $\sum_i 1/N(i) \leq 2^{-k}$). Call $\Gamma_k$ the corresponding algorithm, and consider

$$\mathcal{U}_k = \{R \in 2^\omega : \Gamma_k^R \text{ is undefined}\},$$

which is of measure at most $2^{-k}$. The set $\bigcap_k \mathcal{U}_k$, which is the set of $R$’s on which all the algorithms fail, is a null set. This means that if $R$ is ‘sufficiently random’, in the sense of effective randomness, it does not belong to all the $\mathcal{U}_k$’s and thus some algorithm $\Gamma_k$ succeeds using $R$. Which level of algorithmic randomness is actually needed? One should observe that every $\mathcal{U}_k$ is in fact an effectively open set relative to $\emptyset'$. Indeed, as we have seen, the only case where the algorithm fails is when it waits in vain for a $q$ extending some condition $p$ and belonging to some $W_i$. If such a situation happens, it does so at some finite stage, i.e., having used only a finite initial segment of $R$, hence $\mathcal{U}_k$ is open. Moreover, testing whether the algorithm is stuck at a given stage can be done using $\emptyset'$: indeed, the predicate ‘there is no $q \leq p$ in $W_i$’ is a $\Pi^0_1$ predicate uniformly in $p$ and $i$.

Thus, $(\mathcal{U}_k)_{k \in \mathbb{N}}$ is a $\emptyset'$-Martin-Löf test, which shows that every 2-random real computes, via some functional $\Gamma_k$, an infinite sequence $p_0 \geq p_1 \geq \dots$ such that for every $i$ there is a $j$ such that either $p_j \in W_i$ holds or for every $q \leq p_j$, $q \notin W_i$, as wanted.

2 Main result

We shall now see how to combine fireworks arguments with bushy tree forcing to prove Theorem 2. We first provide an informal presentation of the proof. Full details will be given in the next section.

2.1 Proof overview

For this construction, we will need a hierarchy of very fast-growing computable functions

$$h_0 \ll h_1 \ll h_2 \ll \dots$$

($\ll$ is an informal notation: $g \ll g'$ means that $g'$ grows ‘much faster’ than $g$) and another fast-growing function $h$ (which is meant to grow faster than all the $h_i$’s, but with certain restrictions). At this point, we do not specify precisely what functions $h_i$ and $h$ we take. We will see during the construction which properties they need to have to make the argument work.

Contrary to most bushy tree arguments, the whole construction will happen within a single tree $T$, which is exactly $h$-bushy:

$$T = \{\sigma \in \omega^{<\omega} : \sigma(n) < h(n) \text{ for all } n < |\sigma|\}.$$

Typically, a bushy tree forcing argument constructs a sequence of bushy trees, and the path obtained by forcing is in the intersection of all of those. We will not need such a sequence in our argument. However, some steps of the construction can be understood as “locally” taking a subtree of $T$. What we keep from other bushy tree arguments is the idea of maintaining during the construction a small set of bad strings to be avoided. But again, there is a difference in our construction: to build a DNC function by forcing, one usually starts with the initial small bad set

$$B_{\mathrm{DNC}} = \{\sigma \in \omega^{<\omega} : \sigma(n) = \varphi_n(n)\!\downarrow \text{ for some } n < |\sigma|\}.$$

We will not do this, as we need our bad set of strings to be c.e. at all times. Our fireworks argument will (with high probability) build an $f \in [T]$ which does not compute any Martin-Löf random real. It is only an a posteriori analysis of the construction which will allow us to conclude that $f$ is also DNC with high probability. In the absence of any other requirement, we would just build $f$ value by value, picking for each $n$ the value of $f(n)$ at random between $0$ and $h(n) - 1$. This would give us a probability of avoiding $B_{\mathrm{DNC}}$ of at least $\prod_n (1 - 2/h(n))$, which is positive for $h$ fast-growing. But of course there are other requirements our construction needs to meet, namely all the requirements of the form:

$\mathcal{R}_{\Gamma, c}$: either $\Gamma^f$ is partial or there is an $n$ such that $K(\Gamma^f \upharpoonright n) < n - c$,

where $c$ is an integer, $\Gamma$ is a Turing functional from $\omega^\omega$ to $2^\omega$, and $K$ is the prefix-free Kolmogorov complexity function. (Here we use the Levin-Schnorr theorem: a real $Z$ is Martin-Löf random if and only if there is a constant $c$ such that $K(Z \upharpoonright n) \geq n - c$ for all $n$.)

Let us see how we would ideally like to satisfy such a (single) requirement. Suppose we have already built some string $\sigma$ of length $k$ and consider the set

$$B = \{\tau \in T : \tau \succeq \sigma \text{ and } \Gamma^\tau \upharpoonright m\!\downarrow\}$$

for some threshold $m$, where $\Gamma^\tau \upharpoonright m\!\downarrow$ means that the Turing reduction $\Gamma$ produces at least $m$ bits of output on input $\tau$, and distinguish two cases:

  • Case 1: $B$ is $h_i$-small above $\sigma$ (where $h_i$ is the function of our hierarchy currently in use for this requirement). In this case there is essentially nothing to worry about. We can just continue to build $f$ by making random choices. The probability that we hit the set $B$ at some point is small: namely, it is at most $1 - \prod_{n \geq k} (1 - h_i(n)/h(n))$ (by Lemma 4), and recall that $h_i \ll h$. And if we do not hit $B$, then $\Gamma^f$ will end up being partial, therefore satisfying $\mathcal{R}_{\Gamma, c}$.

  • Case 2: $B$ is $h_i$-big above $\sigma$. Each element $\tau \in B$ is such that $\Gamma^\tau \upharpoonright m\!\downarrow$, therefore we can decompose $B$ as

    $$B = \bigcup_{z \in 2^m} B_z, \quad \text{where } B_z = \{\tau \in B : \Gamma^\tau \upharpoonright m = z\}.$$

    There are $2^m$ strings of length $m$, therefore by Lemma 2, there must be a $z$ such that $B_z$ is $(h_i/2^m)$-big above $\sigma$ (note that in this expression, $h_i$ is a function while $2^m$ is just an integer). Since being big is a $\Sigma^0_1$-property, such a $z$ can be found effectively knowing $\sigma$, $m$ and $i$, and thus the first such $z$ found with this effective search must satisfy

    $$K(z) \leq K(\sigma) + K(m) + K(i) + d \leq \sum_{n < |\sigma|} \log h(n) + K(m) + K(i) + d' \qquad (1)$$

    for some fixed constants $d, d'$ (the last term is due to the fact that $\sigma$ is a list of integers such that the $n$-th integer is less than $h(n)$, therefore $\sigma$ has complexity at most $\sum_{n < |\sigma|} \log h(n)$ up to an additive term of order $O(\log |\sigma|)$). Since $K(m) + K(i) = O(\log m) + O(\log i)$, we have

    $$K(z) \leq \sum_{n < |\sigma|} \log h(n) + O(\log m) + O(\log i).$$

    If $m$ is sufficiently large with respect to $\sigma$ and $i$, this last inequality implies $K(z) < m - c$. Thus, any $f$ which passes through a node in $B_z$ satisfies requirement $\mathcal{R}_{\Gamma, c}$. Moreover, since $B_z$ is $(h_i/2^m)$-big above $\sigma$, this means by definition that there is a finite $(h_i/2^m)$-bushy tree $T'$ of stem $\sigma$ all of whose leaves are in $B_z$. Then, what we can do is effectively find the tree $T'$ and temporarily restrict our random walk to $T'$ (picking at each step the next value at random among the children in $T'$) until we reach a leaf of $T'$. This guarantees the satisfaction of $\mathcal{R}_{\Gamma, c}$, and the probability that we hit $B_{\mathrm{DNC}}$ during this temporary restriction is at most $1 - \prod_n (1 - 2 \cdot 2^m/h_i(n))$, which can be made small if $h_i$ is well-chosen.

Of course, the problem is again that, having built $\sigma$, we cannot effectively determine whether we are in Case 1 or in Case 2. This is where the fireworks argument comes into play. We are going to pick a number $n_i$ at random between $1$ and some large $N(i)$, and assume up to $n_i$ times that we are in Case 1 (the $\Pi^0_1$ case), and if proven wrong $n_i$ times, we will then wait until it is proven that we are in Case 2 (the $\Sigma^0_1$ case), and if so, implement the above strategy for Case 2. As with other fireworks arguments, the probability that we decide to wait at the wrong moment is at most $1/N(i)$, which we can thus make arbitrarily small.

These considerations are enough to give us a strategy which ensures, with arbitrarily high probability, the satisfaction of a single requirement while avoiding the set $B_{\mathrm{DNC}}$ with high probability. However, there is a subtle point to address when we try to satisfy several requirements in parallel. Indeed, what can happen is that the strategy for a first requirement has made the assumption that some set $B$ is $h_i$-small in $T$ (and thus has a small probability of being hit), while the strategy for a second requirement needs to make a temporary restriction to a tree $T'$. While the probability to hit $B$ was small for a random walk within $T$, it could happen that the random walk restricted to $T'$ has a much greater probability to hit $B$. This is what the assumption $h_i \ll h_j$ for $i < j$ takes care of: whenever a strategy needs to make such a restriction to an $(h_j/2^m)$-bushy subtree, we will have that $h_j$ grows much faster than the $h_i$’s previously considered in the proof, and thus, if a set $B$ is $h_i$-small, it will still be unlikely that we hit $B$ while choosing a path of an $(h_j/2^m)$-bushy tree at random, for any $i < j$.
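The following back-of-the-envelope Python computation (ours; the concrete bounds are illustrative, not the paper’s) shows the quantitative point: a set which is $h_i$-small is still unlikely to be hit during a temporary restriction to an $(h_j/2^m)$-bushy subtree, provided $h_j$ dominates $2^m h_i$ strongly enough:

```python
import math

def hit_bound(small, branching, levels=100):
    """Lemma 4 bound 1 - prod_n (1 - small(n)/branching(n)) on the probability
    that a downward random walk with branching(n) equally likely children per
    level hits a set that is small-small above the starting node."""
    return 1 - math.exp(sum(math.log1p(-small(n) / branching(n))
                            for n in range(levels)))

m = 20                                           # the restriction is (h_j / 2^m)-bushy
h_i = lambda n: 2 ** (n + 10)                    # bound of the standing smallness assumption
h_j = lambda n: 2 ** (2 * n + m + 20) * h_i(n)   # a 'much faster-growing' later function
restricted = lambda n: h_j(n) // 2 ** m          # children available during the restriction
print(hit_bound(h_i, restricted))                # about 1.3e-6: the assumption survives
```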

2.2 The full algorithm and formal proof of correctness

We now state the precise theorem regarding the probabilistic algorithm discussed above. The analysis of the level of effective randomness required will be done separately.

Theorem 4

Let $Z$ be a real and $h$ be a sufficiently fast-growing computable function. For every rational $\varepsilon > 0$, one can effectively design a probabilistic algorithm (= Turing machine with random oracle, here also given oracle access to $Z$) which, with probability at least $1 - \varepsilon$, produces an $f$ such that (1) $f$ is $\mathrm{DNC}^Z_h$ and (2) $f$ computes no Martin-Löf random real.

Let $N$ be the smallest integer such that $3 \cdot 2^{-N} \leq \varepsilon$. Let us first define the functions $h_i$ and $h$ we alluded to above. Set $h_0$ to be the function defined by $h_0(n) = 2^{n+N+3}$. Then, for all $i$, inductively define

$$h_{i+1}(n) = 2^{n+N+i+4} \cdot h_i(n).$$

Finally, define for all $n$

$$h(n) = 2^{2n} \cdot h_n(n).$$

(Only the growth rates matter here: any computable functions growing at least this fast, with each $h_{i+1}$ dominating $h_i$ in the above sense and $h$ dominating all the $h_i$’s, would do; this is the meaning of ‘sufficiently fast-growing’ in the statement of the theorem.)
Let us now give the details of the algorithm. First, number all the requirements $\mathcal{R}_0, \mathcal{R}_1, \dots$ and call $\mathcal{R}_i$ the $i$-th requirement. As usual, we organize them in a way that all requirements receive attention infinitely often, and only one requirement receives attention at any given stage. We will see during the verification that a small extra assumption should be added, namely that every requirement should be considered for the first time at some ‘late enough’ stage.

Stage 0: Initialization. The first thing we do is pick for all $i$ a number $n_i$ at random between $1$ and $2^{N+i+1}$. Then, we initialize $\sigma$ to be the empty string. For all $i$, set a counter $c_i$ originally equal to $0$.

Loop (to be repeated indefinitely). Suppose that the values of $\sigma(0), \dots, \sigma(s-1)$ are already defined (the last one being between $0$ and $h(s-1) - 1$). Assume some requirement $\mathcal{R}_i$ receives attention.

  • (a) If this requirement receives attention for the first time, we make for this requirement the assumption that the set

    $$B = \{\tau \in T : \tau \succeq \sigma \text{ and } \Gamma_i^\tau \upharpoonright m\!\downarrow\}$$

    is $h_j$-small above the current $\sigma$, where $\Gamma_i$ is the functional of requirement $\mathcal{R}_i$, the threshold $m$ is chosen large enough with respect to $\sigma$ as in Section 2.1, and $j$ is the least index of our hierarchy not used so far (again, note that this is a $\Pi^0_1$ assumption, so if it is false it will be discovered to be so at some finite stage).

  • (b) If it does not receive attention for the first time, we check whether the current assumption made for this requirement still appears to be true at stage $s$.

    • (b.1) If it does, we maintain this assumption and simply pick the value of $\sigma(s)$ at random between $0$ and $h(s) - 1$.

    • (b.2) If the assumption is discovered to be false, we increase our ‘error counter’ $c_i$ by $1$.

      • (b.2.i) If the new value of $c_i$ remains less than $n_i$, we forget our previous assumption for requirement $\mathcal{R}_i$ and make a new assumption: we now assume that the set $B$ (redefined with respect to the current $\sigma$ and a fresh index $j$) is $h_j$-small above (the current) $\sigma$.

      • (b.2.ii) If the new value of $c_i$ is equal to $n_i$, we then wait until we find, for some $z$ of length $m$, a set $B_z$ which is $(h_j/2^m)$-big above $\sigma$. When this happens, i.e., when we find a finite subtree $T'$ of $T$ of stem $\sigma$ which is $(h_j/2^m)$-bushy above $\sigma$ and all of whose leaves are in $B_z$, we choose the next values of $\sigma$ by a downward random walk restricted to $T'$ until we reach a leaf of $T'$ (note that we may never find such a tree, in which case our algorithm gets stuck at this stage and thus fails to even return an infinite sequence). When a leaf is reached, we mark the $i$-th requirement as satisfied.

The sequence $f$ returned by the algorithm is the union of the finite strings $\sigma$ built throughout the algorithm. We now turn to the verification of our algorithm, which we have already done for the most part in Section 2.1. We do this via a series of claims.
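Before the verification, here is a schematic, runnable Python rendering (ours) of the loop just described. All the genuinely non-computable ingredients are mocked: `refuted` plays the role of the stage-by-stage discovery that a $\Pi^0_1$ smallness assumption was false, and `find_big_tree` plays the role of the effective search of step (b.2.ii), which in reality may diverge (the failure mode bounded by Claim 1 below):

```python
import random

def run(h, caps, refuted, find_big_tree, stages=1000):
    """Schematic main loop: round-robin attention, passive smallness guesses
    capped by caps[i], then one active 'bigness' guess per requirement."""
    sigma = []                     # finite approximation of f
    guess_made_at = {}             # stage at which requirement i's current guess was made
    errors = {i: 0 for i in caps}  # error counters c_i
    satisfied = set()
    for s in range(stages):
        i = s % len(caps)                          # requirement receiving attention
        if i not in satisfied:
            if i not in guess_made_at:             # step (a): first passive guess
                guess_made_at[i] = s
            elif refuted(i, guess_made_at[i], s):  # step (b.2)
                errors[i] += 1
                if errors[i] < caps[i]:            # (b.2.i): fresh passive guess
                    guess_made_at[i] = s
                else:                              # (b.2.ii): active guess
                    walk = find_big_tree(i, sigma, s)
                    if walk is None:
                        return None                # stuck: the algorithm fails
                    sigma.extend(walk)             # random walk down the found subtree
                    satisfied.add(i)
                    continue
        sigma.append(random.randrange(h(len(sigma))))  # steps (a)/(b.1): extend at random
    return sigma

# Toy mocks: every guess is refuted after 5 stages, and big trees are always found.
h = lambda n: 2 ** (n + 2)
caps = {0: random.randint(1, 8), 1: random.randint(1, 16)}
refuted = lambda i, made_at, s: s - made_at > 5
find_big_tree = lambda i, sigma, s: [0, 1, 0]      # the walk's outcome, 3 levels down
print(run(h, caps, refuted, find_big_tree) is not None)
```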

Claim 1.

The probability that the algorithm gets stuck at some substage of type (b.2.ii) (waiting to find a big tree which does not exist) is at most $2^{-N}$.

Proof.

This is a standard fireworks calculation: all other randomly chosen values being fixed, there is at most one value of $n_i$ which causes the algorithm to get stuck at (b.2.ii) because of requirement $\mathcal{R}_i$. And since $n_i$ is chosen at random between $1$ and $2^{N+i+1}$, the probability that the algorithm gets stuck at (b.2.ii) because of requirement $\mathcal{R}_i$ is at most $2^{-N-i-1}$. Thus, over all requirements, this gives a probability of at most $\sum_i 2^{-N-i-1} = 2^{-N}$. ∎

Claim 2.

Conditionally on our algorithm returning an infinite sequence, the probability that all requirements are met is at least $1 - 2^{-N}$.

Proof.

Let us look at a given requirement $\mathcal{R}_i$. This requirement receives attention infinitely often until satisfied. This means that if the algorithm does not get stuck, one of the following happens:

  • at some point it makes for $\mathcal{R}_i$ a correct assumption during substep (a) or (b.2.i), or

  • the $i$-th requirement causes the algorithm to enter some substep (b.2.ii), but a tree $T'$ is found thereafter.

These two cases are mutually exclusive. In the first case, for some index $j$ a set

$$B = \{\tau \in T : \tau \succeq \sigma \text{ and } \Gamma_i^\tau \upharpoonright m\!\downarrow\}$$

is correctly assumed, at some stage $s$, to be $h_j$-small above the current $\sigma$. For any later stage of the algorithm (i.e., at stages $t \geq s$), the value of $\sigma(t)$ is chosen at random among $v_t$ values, where $v_t$ is either equal to $h(t)$ or to $h_{j'}(t)/2^m$ for some $j' > j$, in case some other strategy has caused a temporary restriction of the tree. The latter quantity is the smaller of the two, and by definition of the $h_j$’s, it follows that $\sum_{t \geq s} h_j(t)/v_t \leq 2^{-N-i-1}$.

By the calculations of the proof of Lemma 4, it follows that in this case, the probability of hitting a node of $B$ at some later stage is at most

$$1 - \prod_{t \geq s} \Bigl(1 - \frac{h_j(t)}{v_t}\Bigr) \leq \sum_{t \geq s} \frac{h_j(t)}{v_t} \leq 2^{-N-i-1}.$$

In the second case the requirement is always satisfied. Indeed, in this case, we find a string $z$ of length $m$ and a finite tree $T'$ whose leaves are all contained in $B_z$, and then make a random walk within $T'$ until we reach a leaf. As explained in the previous section, we then have

$$K(z) \leq K(\sigma) + K(m) + K(i) + d$$

for some fixed constant $d$. But $K(\sigma) \leq \sum_{n < |\sigma|} \log h(n) + O(\log |\sigma|)$, so

$$K(z) \leq \sum_{n < |\sigma|} \log h(n) + O(\log m) + O(\log i) \qquad (2)$$

for some fixed implied constants. Since the right-hand side of (2) grows only logarithmically in $m$ (for fixed $\sigma$ and $i$), the difference between $m - c$ and the right-hand side of (2) tends to $+\infty$ as $m \to \infty$; thus for $m$ large enough (and this ‘large enough’ can be found computably), the right-hand side of (2) is less than $m - c$, which means that the requirement is satisfied as soon as we reach a leaf of $T'$. We thus add the technical extra assumption that requirement $\mathcal{R}_i$ is only allowed to receive attention at stage $s$ if the threshold $m$ used at this stage is large enough in this sense. This essentially changes nothing, since it only prevents every requirement from receiving attention for finitely many stages.

Given a requirement $\mathcal{R}_i$, we say that stage $s$ is good for $\mathcal{R}_i$ when, after having built $\sigma = f \upharpoonright s$, in the next iteration of the loop, $\mathcal{R}_i$ receives attention and either a true assumption is made at step (a) or (b.2.i), or step (b.2.ii) is reached and a tree $T'$ is found. To every requirement corresponds exactly one good stage (and no two requirements have a good stage in common). As we have just argued, if $s$ is the good stage for a requirement $\mathcal{R}_i$, the probability that the requirement is satisfied, conditional on the algorithm returning an infinite sequence, is at least $1 - 2^{-N-i-1}$. Over all requirements, this gives a probability of at least $1 - \sum_i 2^{-N-i-1} = 1 - 2^{-N}$. ∎

Claim 3.

The probability that we hit a node of $B_{\mathrm{DNC}^Z}$ during the algorithm is at most $2^{-N}$ (where $B_{\mathrm{DNC}^Z}$ is defined like $B_{\mathrm{DNC}}$ above, with $\varphi^Z_n$ in place of $\varphi_n$).

Proof.

There is at most one ‘bad’ value of $f(s)$ the algorithm can choose (namely, $\varphi^Z_s(s)$, if it is defined). Whenever a value is chosen at random for some $f(s)$, it is either chosen at random among $h(s)$ values or, in case of a temporary restriction to a subtree, among at least $h_j(s)/2^m$ values for some $j$. Both quantities are at least $2^{N+s+2}$, by construction. This gives a total probability of at most $\sum_s 2^{-N-s-2} \leq 2^{-N}$ of hitting $B_{\mathrm{DNC}^Z}$. ∎

The theorem immediately follows from the three claims: the probability that an infinite sequence $f$ is returned and all requirements are satisfied is at least $1 - 3 \cdot 2^{-N} \geq 1 - \varepsilon$.

2.3 How much randomness do we need?

It remains to conduct, as in Section 1.4, an analysis of the level of algorithmic randomness needed to make the algorithm work. The attentive reader will have noticed that there are two uses of randomness in the construction: the first one to choose the caps $n_i$ which make the fireworks argument work (i.e., ensure that the algorithm does not get stuck), and the second one to choose a node of the tree at random during the construction. For a given $k$, consider the algorithm $A_k$ with probability of success at least $1 - 2^{-k}$. There are three ways in which it can fail:

  1. It could get stuck at some stage of type (b.2.ii).

  2. It could hit a node which belongs to $B_{\mathrm{DNC}^Z}$.

  3. It could make at some stage $s$ a true assumption that a set $B$ is $h_j$-small, but nonetheless hit a node of $B$ later on (when it hits a node of a set $B$ corresponding to a wrong assumption, this is not a problem, because the assumption will be discovered to be wrong later on and a new assumption will be made for the requirement).

Technically, the occurrence of the third case does not necessarily mean that the algorithm has failed, but if none of these three cases occurs, the algorithm succeeds, as explained above. The total probability of these events is at most $2^{-k}$. Moreover, if any event of the above three types happens, it does so at some finite stage, thus after having used only finitely many bits of the random oracle. The open set of oracles that cause events of type 1 and 3 to happen can be effectively enumerated relative to $\emptyset'$. Indeed, for the first type this is exactly what is explained in Section 1.4, namely that $\emptyset'$ can check at any given stage whether the algorithm is stuck at a stage of type (b.2.ii). The third type can also be checked using $\emptyset'$: indeed, the sets $B$ we assume to be $h_j$-small are c.e. sets, and since the $h_j$’s are computable uniformly in $j$, the smallness can be checked using $\emptyset'$ (being big is a $\Sigma^0_1$ property), and checking whether a node chosen at some stage of the algorithm is in $B$ can also be done effectively in $\emptyset'$ (since $B$ is c.e.). Thus we can design a $\emptyset'$-Martin-Löf test $(\mathcal{U}_k)_{k \in \mathbb{N}}$ such that $\mathcal{U}_k$ covers the set of oracles which make the algorithm $A_k$ fail because of cases 1 or 3.

The second type of failure is even easier to analyse. The set $B_{\mathrm{DNC}^Z}$ is $Z$-c.e., so the set of oracles which cause the algorithm to hit a node of $B_{\mathrm{DNC}^Z}$ is effectively open relative to $Z$. Thus we can design a $Z$-Martin-Löf test $(\mathcal{V}_k)_{k \in \mathbb{N}}$ such that $\mathcal{V}_k$ covers the set of oracles which make the algorithm $A_k$ fail because of case 2.

This finishes the proof of Theorem 2: if $R$ is $Z$-random and 2-random, then for $k$ large enough it will be outside $\mathcal{U}_k$ and outside $\mathcal{V}_k$, hence the algorithm $A_k$ will succeed on input $R$.

3 Further results

The proof of Theorem 2 can be adapted to prove more results on the class of DNC functions in the Levin-V’yugin algebra. For example, in this proof, we construct a real of DNC degree which computes no Martin-Löf random real, but we do so using Kolmogorov complexity in a rather liberal way. By being very slightly more precise we can get a stronger result.

Theorem 5

Let $Z$ be a real. For every fixed computable function $g$, for every sufficiently fast-growing computable $h$, every real $X$ which is both $Z$-Martin-Löf random and 2-random computes a $\mathrm{DNC}^Z_h$ function which computes no $\mathrm{DNC}_g$ function.

This is a stronger theorem than Theorem 2 because, as we saw in Section 1.1, when $g$ is sufficiently fast-growing, every Martin-Löf random real computes a $\mathrm{DNC}_g$ function. Again, the fact that the class of reals computing a $\mathrm{DNC}_g$ function is strictly contained in the class of reals computing a $\mathrm{DNC}_h$ function when $h$ grows sufficiently faster than $g$ is known (see [13]), but our theorem shows that this separation holds in the Levin-V’yugin algebra as well.

The first thing to do to adapt our previous proof is to use the relationship between Kolmogorov complexity and DNC functions discovered by Kjos-Hanssen et al. [14]. For a function $g$, call $g$-complex a real $X$ such that $K(X \upharpoonright n) \geq g(n)$ for all $n$. Call a real complex if it is $g$-complex for some computable order $g$ (i.e., some unbounded non-decreasing computable function). Kjos-Hanssen et al. proved that a real computes a DNC function with computable bound if and only if it computes a complex real. More precisely, the correspondence goes both ways with an effective loss in the bounds: for any computable $g$, one can compute from any $\mathrm{DNC}_g$ function a $g'$-complex real for a suitable computable order $g'$ (depending on $g$), and, conversely, from any $g'$-complex real one can compute a $\mathrm{DNC}_g$ function for a suitable computable $g$ (depending on $g'$).

Proof of Theorem 5.

By the correspondence between DNC functions and complex reals, it suffices to show the following: for every fixed computable order $g$, for every sufficiently fast-growing $h$, every real $X$ which is both $Z$-Martin-Löf random and 2-random computes a $\mathrm{DNC}^Z_h$ function which computes no $g$-complex real. We modify the proof of Theorem 2 as follows. The requirements now become:

$\mathcal{R}_\Gamma$: either $\Gamma^f$ is partial or there is an $n$ such that $K(\Gamma^f \upharpoonright n) < g(n)$.

The new functions $h_i$ are defined as before, except that the threshold $m$ used at a given stage is now chosen so large that $g(m)$ exceeds the right-hand side of the estimate (1). The sets $B$ considered in Step (a) of the algorithm are now

$$B = \{\tau \in T : \tau \succeq \sigma \text{ and } \Gamma^\tau \upharpoonright m\!\downarrow\}$$

for this larger threshold $m$. The rest of the construction remains the same. The estimate (1) is left unchanged by this modification, but we now have $g(m)$ larger than its right-hand side. Together with (1), for $m$ large enough this guarantees the satisfaction of $\mathcal{R}_\Gamma$ with $n = m$ (where $\Gamma$ is the reduction with respect to which $B$ is defined). ∎

Using another adaptation of the proof of Theorem 2, we can also transfer to the Levin-V’yugin algebra the following result, due to Miller (see [13, section 3]): there exists a DNC function which computes no complex real. Namely, the following holds.

Theorem 6

Let $Z$ be a real. Every real $X$ which is both $Z$-Martin-Löf random and 2-random computes a $\mathrm{DNC}^Z$ function which computes no $\mathrm{DNC}_g$ function for any computable $g$.

Although the general structure is similar, this second adaptation is not straightforward, and quite a number of important changes are needed. The first thing to notice is that there is obviously no hope to conduct the whole construction in an $h$-bushy tree for a computable $h$, since we want $f$ to compute no complex real, which is equivalent to computing no computably bounded DNC function, and $f$ itself would be such a function. Thus we will need to work in the full tree $\omega^{<\omega}$ and, at each level $n$, choose dynamically the interval for the random choice of $f(n)$.

For this construction, the requirements are of the form:

$\mathcal{R}_{\Gamma, g}$: $\Gamma^f$ is partial, or $g$ is partial, or there is an $n$ such that $K(\Gamma^f \upharpoonright n) < g(n)$

where the $\Gamma$’s are still Turing functionals from $\omega^\omega$ to $2^\omega$, and the $g$’s are partial computable functions from $\mathbb{N}$ to $\mathbb{N}$. How can we satisfy a single requirement $\mathcal{R}_{\Gamma, g}$? Again, suppose some string $\sigma$ of length $k$ has already been built, and consider the set

$$B = \{\tau \succeq \sigma : \Gamma^\tau \upharpoonright m\!\downarrow\}, \quad \text{where } m \text{ is least such that } g(m) > d,$$

and $d$ is an upper bound for the Kolmogorov complexity of the parameters defining this requirement at this stage (which can be found computably since there are computable upper bounds of prefix-free Kolmogorov complexity). By convention, this set is empty if $m$ is undefined.

  • Case 1: $m$ is undefined (in particular, $g$ is partial). Then the requirement is satisfied vacuously.

  • Case 2: $m$ is defined and $B$ is $h_i$-small above $\sigma$, where $h_i$ is a function of the kind used in the previous constructions. In this case, it suffices to choose a function $h \gg h_i$ and, for all $n \geq k$, pick the value of $f(n)$ at random between $0$ and $h(n) - 1$. By smallness of $B$, using Lemma 4 as usual, we will avoid the set $B$ with high probability, thus satisfying requirement $\mathcal{R}_{\Gamma, g}$ (as $\Gamma^f$ will then be partial).

  • Case 3: $m$ is defined and $B$ is $h_i$-big above $\sigma$. Each element $\tau \in B$ is such that $\Gamma^\tau \upharpoonright m\!\downarrow$, therefore we can decompose $B$ as

    $$B = \bigcup_{z \in 2^m} B_z, \quad \text{where } B_z = \{\tau \in B : \Gamma^\tau \upharpoonright m = z\},$$

    and since there are $2^m$ strings of length $m$, there must be a $z$ such that $B_z$ is $(h_i/2^m)$-big above $\sigma$, and such a $z$ can be found effectively knowing the parameters of the requirement, hence by the choice of $d$,

    $$K(z) \leq d.$$

    We have $g(m) > d$, so this guarantees $K(z) < g(m)$, thus satisfying requirement $\mathcal{R}_{\Gamma, g}$ with $n = m$ (any $f$ passing through a node of $B_z$ satisfies $K(\Gamma^f \upharpoonright m) < g(m)$).

We want to use a fireworks argument to help us choose between these three cases, but some care is needed since we no longer have a dichotomy. Indeed, Case 2 is neither $\Pi^0_1$ nor $\Sigma^0_1$. The solution is to introduce a priority ordering over passive guesses. We will first make a number of assumptions that $m$ is undefined, at different stages. If all of these assumptions turn out to be wrong (= the error counter reaches its cap), we will then make the assumption that $m$ is defined at the current stage, wait for the value to be found, and only then make one assumption that the current $B$ is $h_i$-small above the current $\sigma$. If proven wrong, we will begin another round of assumptions that $m$ is undefined (using a new error counter with a new cap) before making a new ‘Case 2’ assumption. Finally, when enough of these ‘Case 2’ assumptions have been proven to be wrong, we make one last assumption, a ‘Case 3’ assumption, and if everything goes well we will satisfy the requirement $\mathcal{R}_{\Gamma, g}$. Thus, for any given requirement, our sequence of assumptions will look like this:

C1, C1, …, C1, C2, C1, …, C1, C2, C1, …, C1, C2, ……….., C2, C1, C1, …, C1, C3

(where C$i$ stands for ‘Case $i$’) unless one of the C1/C2 assumptions is never proven to be wrong, in which case we succeed. This time there are two possible ways for the algorithm to get stuck: either wrongly assume in Case 2 that $m$ is defined, and then wait forever for the corresponding value to converge, or, as in the previous proofs, get stuck because of a wrong C3 assumption, waiting in vain to find a big subtree with leaves in a given c.e. set. The probability of either of these events happening can be made arbitrarily small by a fireworks argument.

The idea to handle several requirements at the same time is similar to what was done in our previous constructions, but this time it is dynamic: before making a C2/C3 assumption of type ‘the following set $B$ is small/big’, we need to dynamically decide what ‘big/small’ should mean. What we do is first look at what other smallness assumptions are currently being made for other requirements. If there are current assumptions of type ‘$B_l$ is $g_l$-small’ for $l = 1, \dots, r$, then we first choose a function $h$ much larger than all the $g_l$’s and evaluate the smallness of $B$ in terms of this new function. In case $B$ is then assumed to be small, it is just added to the list of current assumptions. In case it is correctly assumed to be big, since $h$ is much larger than the other $g_l$’s, the probability that we hit one of the $B_l$’s during the temporary restriction of the tree will be, as in the previous proofs, close to $0$.

While we hope that the reader is already convinced at this point, we provide the formal details for completeness.

Theorem 7

Let $Z$ be a real and $h_0$ be a sufficiently fast-growing computable function. For every rational $\varepsilon > 0$, one can effectively design a probabilistic algorithm which, with probability at least $1 - \varepsilon$, produces an $f$ such that (1) $f$ is $\mathrm{DNC}^Z$ and (2) $f$ computes no complex real.

Again, let us take $N$ to be the smallest integer such that $3 \cdot 2^{-N} \leq \varepsilon$. Our algorithm is the following.

Stage 0: Initialization. First, for each requirement $\mathcal{R}_i$, pick a number $n_i$ at random between $1$ and $2^{N+i+1}$. The number $n_i$ is meant to be a cap for the number of wrong C2 assumptions for requirement $\mathcal{R}_i$. Moreover, for each integer $j$ between $1$ and $n_i$, pick a number $n_{i,j}$ at random between $1$ and $2^{N+i+j+2}$. Each time C2 makes a wrong assumption, C1 starts a new series of assumptions with $n_{i,j}$ as a new cap, where $j$ is the number of wrong C2 assumptions made so far. Create two counters $c_i$ and $c'_i$, initialized at $0$ ($c_i$ counts the number of wrong C2 assumptions for requirement $\mathcal{R}_i$, and $c'_i$ counts the number of wrong C1 assumptions during the current run of such assumptions for requirement $\mathcal{R}_i$). Let $L$ be a list of assumptions (coded as integers, with at most one assumption per requirement), initially empty. Finally, initialize $\sigma$ to be the empty string.

Loop (to be repeated indefinitely). Suppose that the values of $\sigma(0), \dots, \sigma(s-1)$ are already defined and some requirement $\mathcal{R}_i$ receives attention. Let $d$ be an upper bound of the Kolmogorov complexity of the current tuple of parameters of the construction. Let $g_1, \dots, g_r$ be the computable functions such that an assumption ‘$B_l$ is $g_l$-small’ is currently in $L$. We (locally) define a function $h$ by $h(n) = 2^{n+N+s+2} \cdot \max(h_0(n), g_1(n), \dots, g_r(n))$.

  • If this requirement receives attention for the first time, we make for this requirement the assumption that $m$ is undefined (i.e., that there is no $m$ with $g(m)\!\downarrow\, > d$), and add this assumption to $L$.

  • If it does not receive attention for the first time, we check whether the current assumption made for this requirement still appears to be true at stage $s$.

    • If it does, we maintain this assumption and simply pick the value of $\sigma(s)$ at random between $0$ and $h(s) - 1$ (for the function $h$ defined above).

    • If the assumption is discovered to be false, and it was a C1 assumption ($g$ undefined on some value), we remove this assumption from $L$ and increase $c'_i$ by $1$.

      • If the new value of $c'_i$ remains less than the current cap $n_{i, c_i + 1}$, we make a new assumption: we now assume that $m$ is undefined (for the current, larger, value of $d$) and add this assumption to $L$.

      • If the new value of $c'_i$ reaches the cap, we wait for some $m$ with $g(m)\!\downarrow\, > d$ to be found. We then make the assumption that the corresponding set $B$ is $h$-small above $\sigma$ and add this assumption to $L$.

    • If the assumption is discovered to be false, and it was a C2 assumption (smallness of some set $B$), we remove this assumption from $L$, increase $c_i$ by $1$, and reset $c'_i$ to $0$.

      • If the new value of $c_i$ remains less than $n_i$, we make a new assumption: we now assume that $m$ is undefined (for the current value of $d$), thus starting a new run of C1 assumptions with cap $n_{i, c_i + 1}$.