An optimal quantum algorithm for the oracle identification problem

An optimal quantum algorithm for the oracle identification problem

Robin Kothari
David R. Cheriton School of Computer Science and
Institute for Quantum Computing, University of Waterloo
rkothari@uwaterloo.ca
Abstract

In the oracle identification problem, we are given oracle access to an unknown $N$-bit string $x$ promised to belong to a known set $\mathcal{C} \subseteq \{0,1\}^N$ of size $M$ and our task is to identify $x$. We present a quantum algorithm for the problem that is optimal in its dependence on $N$ and $M$. Our algorithm considerably simplifies and improves the previous best algorithm due to Ambainis et al. Our algorithm also has applications in quantum learning theory, where it improves the complexity of exact learning with membership queries, resolving a conjecture of Hunziker et al.

The algorithm is based on ideas from classical learning theory and a new composition theorem for solutions of the filtered $\gamma_2$-norm semidefinite program, which characterizes quantum query complexity. Our composition theorem is quite general and allows us to compose quantum algorithms with input-dependent query complexities without incurring a logarithmic overhead for error reduction. As an application of the composition theorem, we remove all log factors from the best known quantum algorithm for Boolean matrix multiplication.

1 Introduction

Query complexity is a model of computation where quantum computers are provably better than classical computers. Some of the great breakthroughs of quantum algorithms have been conceived in this model (e.g., Grover’s algorithm [Gro96]). Shor’s factoring algorithm [Sho97] also essentially solves a query problem exponentially faster than any classical algorithm. In this paper we study the query complexity of the oracle identification problem, the very basic problem of completely determining a string given oracle access to it.

In the oracle identification problem, we are given an oracle for an unknown $N$-bit string $x$, which is promised to belong to a known set $\mathcal{C} \subseteq \{0,1\}^N$, and our task is to identify $x$ while minimizing the number of oracle queries. For a set $\mathcal{C}$, we denote this problem $\mathrm{OIP}(\mathcal{C})$. As usual, classical algorithms are given access to an oracle that outputs $x_i$ on input $i$, while quantum algorithms have access to a unitary $O_x$ that maps $|i, b\rangle$ to $|i, b \oplus x_i\rangle$ for $b \in \{0,1\}$ and $i \in [N]$. For a function $f : D \to E$, where $D \subseteq \{0,1\}^N$, let $Q(f)$ denote the bounded-error quantum query complexity of computing $f$. The problem $\mathrm{OIP}(\mathcal{C})$ corresponds to computing the identity function $f(x) = x$ with $D = E = \mathcal{C}$.

For example, let $\mathcal{C} = \{0,1\}^N$. Then the classical query complexity of $\mathrm{OIP}(\mathcal{C})$ is $N$, since every bit needs to be queried to completely learn $x$, even with bounded error. A surprising result of van Dam shows that $Q(\mathrm{OIP}(\mathcal{C})) = N/2 + O(\sqrt{N})$ [vD98]. As another example, consider the set $\mathcal{C} = \{x : |x| = 1\}$, where $|x|$ denotes the Hamming weight of $x$. This corresponds to the search problem with 1 marked item and thus $Q(\mathrm{OIP}(\mathcal{C})) = \Theta(\sqrt{N})$ [BBBV97, Gro96].

Due to the generality of the problem, it has been studied in different contexts such as quantum query complexity [AIK04, AIK07], quantum machine learning [SG04, AS05, HMP10] and post-quantum cryptography [BZ13]. Several well-known problems are special cases of oracle identification, e.g., the search problem with one marked element [Gro96], the Bernstein-Vazirani problem [BV97], the oracle interrogation problem [vD98] and hidden shift problems [vDHI06]. For some applications, generic oracle identification algorithms are almost as good as algorithms tailored to the specific application [CKOR13]. Consequently, the main result of this paper improves some of the upper bounds stated in [CKOR13].

Ambainis et al. [AIK04, AIK07] studied the oracle identification problem in terms of $N$ and $M := |\mathcal{C}|$. They exhibited algorithms whose query complexity is close to optimal in its dependence on $N$ and $M$. For given $N$ and $M$, we say an oracle identification algorithm is optimal in terms of $N$ and $M$ if it solves all $N$-bit oracle identification problems with $|\mathcal{C}| = M$ making at most $Q$ queries and there exists some $N$-bit oracle identification problem with $|\mathcal{C}| = M$ that requires $\Omega(Q)$ queries. This does not, however, mean that the algorithm is optimal for each set $\mathcal{C}$ individually, since these two parameters do not completely determine the query complexity of the problem. For example, all oracle identification problems with $M = N$ can be solved with $O(\sqrt{N})$ queries, and this is optimal since this class includes the search problem with 1 marked item ($\mathcal{C} = \{x : |x| = 1\}$ above). However, there exists a set of size $N$ with query complexity $\Theta(\log N)$, such as the set of all strings with arbitrary entries in the first $\log N$ bits and zeroes elsewhere.

Let $\mathrm{OIP}(M, N)$ denote the set of oracle identification problems $\mathrm{OIP}(\mathcal{C})$ with $\mathcal{C} \subseteq \{0,1\}^N$ and $|\mathcal{C}| = M$. Let the query complexity of $\mathrm{OIP}(M, N)$ be the maximum query complexity of any problem in that set. Then the classical query complexity of $\mathrm{OIP}(M, N)$ is easy to characterize:

Proposition 1.

The classical (bounded-error) query complexity of $\mathrm{OIP}(M, N)$ is $\Theta(\min\{M, N\})$.

For $M \le N$, the upper bound follows from the observation that we can always eliminate at least one potential string in $\mathcal{C}$ with one query. For the lower bound, consider any subset of $\{x : |x| \le 1\}$ of size $M$. For $M > N$, the lower bound follows from any set containing the strings of Hamming weight at most 1, and the upper bound is trivial since any query problem can be solved with $N$ queries.
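To make the classical upper bound concrete, here is a minimal sketch (our own illustration; the representation of $\mathcal{C}$ as a collection of 0/1 tuples and the function names are assumptions of the sketch):

```python
def classical_identify(query, C, N):
    """Identify the hidden string with at most min(|C| - 1, N) queries.

    `query(i)` returns bit i of the hidden string, promised to lie in C,
    a collection of length-N tuples over {0, 1}.
    """
    S = list(C)
    while len(S) > 1:
        # A position where two surviving candidates differ; the answer
        # eliminates at least one candidate, and no position repeats.
        i = next(i for i in range(N) if len({y[i] for y in S}) == 2)
        b = query(i)
        S = [y for y in S if y[i] == b]
    return S[0]
```

Each query removes at least one string from contention and fixes one position for good, so the query count is bounded by both $M - 1$ and $N$.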

Now that the classical query complexity is settled, for the rest of the paper “query complexity” will always mean quantum query complexity. When quantum queries are permitted, the case $M \le N$ is fully understood. For a lower bound, we consider (as before) any subset of $\{x : |x| \le 1\}$ of size $M$, which is as hard as the search problem on $M - 1$ bits and requires $\Omega(\sqrt{M})$ queries. For an upper bound, we can reduce this to the case of $M = N$ by selecting $M$ bits such that the strings in $\mathcal{C}$ are distinct when restricted to these bits. (A proof of this fact appears in [CKOR13, Theorem 11].) Thus $Q(\mathrm{OIP}(M, N)) = O(Q(\mathrm{OIP}(M, M)))$, which is $O(\sqrt{M})$ [AIK04, Theorem 3]. In summary, we have the following.

Proposition 2.

For $M \le N$, $Q(\mathrm{OIP}(M, N)) = \Theta(\sqrt{M})$.

For the hard regime, where $N < M \le 2^N$, the best known lower and upper bounds are the following, from [AIK04, Theorem 2] and [AIK07, Theorem 2] respectively.

Theorem 1 ([AIK04, AIK07]).

If $M \le 2^{N^d}$ for some constant $d < 1$, then $Q(\mathrm{OIP}(M, N)) = O\bigl(\sqrt{N \log M / \log N}\bigr)$, and for all $N < M \le 2^N$, $Q(\mathrm{OIP}(M, N)) = \Omega\bigl(\sqrt{N \log M / \log N}\bigr)$.

When $M$ gets closer to $2^N$, their algorithm no longer gives nontrivial upper bounds. For example, if $M = 2^N$, their algorithm makes $O(N^{3/2})$ queries. While not stated explicitly, an improved algorithm follows from the techniques of [AIN09, Theorem 6], but the improved algorithm also does not yield a nontrivial upper bound when $M = 2^N$. Ambainis et al. [AIK07] left open two problems, in increasing order of difficulty: to determine whether it is always possible to solve the oracle identification problem for $M = 2^N$ using $O(N)$ queries, and to design a single algorithm that is optimal in the entire range of $M$.

In this paper we resolve both open problems by completely characterizing the quantum query complexity of the oracle identification problem in the full range $N < M \le 2^N$. Our main result is the following:

Theorem 2.

For $N < M \le 2^N$, $Q(\mathrm{OIP}(M, N)) = \Theta\left(\sqrt{\dfrac{N \log M}{\log(N/\log M) + 1}}\right)$.

The lower bound follows from the ideas in [AIK04], but needs additional calculation. We provide a proof in Appendix A. The lower bound also appears in an unpublished manuscript [AIN09, Remark 1]. The $+1$ term in the denominator is relevant only when $\log M$ gets close to $N$; it ensures that the complexity is $\Theta(N)$ in that regime.

Our main result is the algorithm, which is quite different from and simpler than that of [AIK07]. It is also optimal in the full range of $M$, as it makes $O\bigl(\sqrt{N \log M / \log(N/\log M)}\bigr)$ queries when $\log M = o(N)$ and $O(N)$ queries when $\log M = \Theta(N)$. Our algorithm has two main ingredients:

First, we use ideas from classical learning theory, where the oracle identification problem is studied as the problem of exact learning with membership queries [Ang88]. In particular, our quantum algorithm is based on Hegedűs’ implementation of the halving algorithm [Heg95]. Hegedűs characterizes the number of queries needed to solve the classical oracle identification problem in terms of the “extended teaching dimension” of $\mathcal{C}$. While we do not use that notion, we borrow some of the main ideas of the algorithm. This is further explained in Section 2.

We now present a high-level overview of the algorithm. Say we know that the string in the black box, $x$, belongs to a set $S \subseteq \{0,1\}^N$. We can construct from $S$ a string $s$, known as the “majority string,” which is 1 at position $i$ if at least half the strings in $S$ are 1 at position $i$. Importantly, for any $i$, the set of strings in $S$ that disagree with $s$ at position $i$ is at most half the size of $S$. Now we search for a disagreement between $x$ and $s$ using Grover’s algorithm. If the algorithm finds no disagreement, then $x = s$. If it does, we have reduced the size of $S$ by a factor of 2. This gives an algorithm with query complexity $O(\sqrt{N} \log M)$, which is suboptimal. We improve the algorithm by taking advantage of two facts: first, that Grover’s algorithm can find a disagreement faster if there are many disagreements to be found, and second, that there exists an order in which to find disagreements that reduces the size of $S$ as much as possible in each iteration. The existence of such an order was shown by Hegedűs [Heg95].

The second ingredient of our upper bound is a general composition theorem for solutions of the filtered $\gamma_2$-norm semidefinite program (SDP) introduced by Lee et al. [LMR11] that preserves input-dependent query complexities. We need such a result to resolve the following problem: Our algorithm consists of $k$ bounded-error quantum algorithms that must be run sequentially because each algorithm requires as input the output of the previous algorithm. Let the query complexities of the algorithms be $Q_1(x), Q_2(x), \ldots, Q_k(x)$ on input $x$. If these were exact algorithms, we could merely run them one after the other, giving one algorithm’s output to the next as input, to obtain an algorithm with worst-case query complexity $\max_x \sum_i Q_i(x)$. However, since these are bounded-error algorithms, we cannot guarantee that all $k$ algorithms will give the correct output with high probability. One option is to apply standard error reduction, but this would yield an algorithm that makes $O(\max_x \sum_i Q_i(x) \log k)$ queries. Instead, we prove a general composition theorem for the filtered $\gamma_2$-norm SDP that gives us an algorithm that makes $O(\max_x \sum_i Q_i(x))$ queries, as if the algorithms had no error. A similar result is known for worst-case query complexity, but that gives a suboptimal upper bound of $O(\sum_i \max_x Q_i(x))$ queries. We prove this result in Section 3.

The oracle identification problem was also studied by Atıcı and Servedio [AS05], who studied algorithms that are optimal for a given set $\mathcal{C}$. The query complexity of their algorithm depends on a combinatorial parameter of $\mathcal{C}$, $\hat{\gamma}^{\mathcal{C}}$, which satisfies $\hat{\gamma}^{\mathcal{C}} \ge 1/M$. They prove $Q(\mathrm{OIP}(\mathcal{C})) = O\bigl(\sqrt{1/\hat{\gamma}^{\mathcal{C}}}\,\log M \log\log M\bigr)$. Our algorithm for oracle identification, without modification, makes fewer queries than this bound. Our algorithm’s query complexity is $O\bigl(\sqrt{1/\hat{\gamma}^{\mathcal{C}}}\,\log M\bigr)$, which resolves a conjecture of Hunziker et al. [HMP10]. We prove this in Section 4.1.

Our composition theorem can also be used to remove unneeded log factors from existing quantum query algorithms. As an example, we show how to improve the almost optimal Boolean matrix multiplication algorithm that requires $O(n\sqrt{\ell}\,\mathrm{polylog}(n))$ queries [JKM12], where $n$ is the size of the matrices and $\ell$ is the sparsity of the output, to an algorithm with query complexity $O(n\sqrt{\ell})$. We show this in Section 4.2. We conclude with some discussion and open questions in Section 5.

2 Oracle identification algorithm

In this section we explain the ideas that go into our algorithm and prove its correctness. We also prove the query upper bound assuming we can compose bounded-error quantum algorithms without incurring log factors, which we justify in Section 3.

Throughout this section, let $x \in \mathcal{C}$ be the string we are trying to identify. For any set $S \subseteq \{0,1\}^N$, let $\mathrm{maj}(S)$ be an $N$-bit string such that $\mathrm{maj}(S)_i$ is 1 if at least half the strings in $S$ have bit $i$ equal to 1, and 0 otherwise. In words, $\mathrm{maj}(S)_i$ is $b$ if the majority of strings in $S$ have bit $i$ equal to $b$ (with ties broken toward 1). Note that the string $\mathrm{maj}(S)$ need not be a member of $S$. In this paper, all logarithms are base 2 and for any positive integer $k$, we define $[k] := \{1, 2, \ldots, k\}$.
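As a small illustration of this notation (a toy sketch; the tuple representation is ours):

```python
def maj(S, N):
    """Majority string of S: bit i is 1 iff at least half of S has a 1 there."""
    return tuple(int(2 * sum(y[i] for y in S) >= len(S)) for i in range(N))

S = {(0, 1, 1), (0, 1, 0), (1, 0, 1)}
m = maj(S, 3)
assert m == (0, 1, 1)
# At every position, at most half the strings of S disagree with maj(S):
assert all(sum(y[i] != m[i] for y in S) <= len(S) / 2 for i in range(3))
```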

2.1 Basic halving algorithm

We begin by describing a general learning strategy called the halving algorithm, attributed to Littlestone [Lit88]. Say we currently know that the oracle contains a string from a set $S \subseteq \mathcal{C}$. The halving algorithm tests if the oracle string $x$ is equal to $\mathrm{maj}(S)$. If it is equal, we have identified $x$; if not, we look for a bit at which they disagree. Having found such a bit $i$, we know that $x_i \ne \mathrm{maj}(S)_i$, and we may delete all strings in $S$ that are inconsistent with this. Since at most half the strings in $S$ disagree with $\mathrm{maj}(S)$ at any position, we have at least halved the number of potential strings.

To convert this into a quantum algorithm, we need a subroutine that tests if a given string is equal to the oracle string and finds a disagreement otherwise. This can be done by running Grover’s algorithm on the bitwise XOR of $x$ and $\mathrm{maj}(S)$. This gives us the following simple algorithm.

1: Input: an oracle $O_x$ with the promise that $x \in \mathcal{C}$
2: $S \leftarrow \mathcal{C}$
3: repeat
4:     Search for a disagreement between $x$ and $\mathrm{maj}(S)$. If we find a disagreement, delete all inconsistent strings from $S$. If not, let $x = \mathrm{maj}(S)$.
5: until $x$ is identified
Algorithm 1 Basic halving algorithm

This algorithm always finds the unknown string $x$, since $S$ always contains $x$. The loop can run at most $\log M$ times, since each iteration cuts down the size of $S$ by a factor of 2. Grover’s algorithm needs $O(\sqrt{N})$ queries, but it is a bounded-error algorithm. For this section, let us assume that bounded-error algorithms can be treated like exact algorithms and need no error reduction. Assuming this, Algorithm 1 makes $O(\sqrt{N} \log M)$ queries.
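The following classical simulation may help fix ideas (an illustration, not the quantum algorithm: the linear scan marked below stands in for Grover’s search, which would cost $O(\sqrt{N})$ queries per round):

```python
import math, random

def maj(S, N):
    return tuple(int(2 * sum(y[i] for y in S) >= len(S)) for i in range(N))

def halving_identify(x, C, N):
    """Simulate Algorithm 1; return the identified string and round count."""
    S, rounds = list(C), 0
    while True:
        rounds += 1
        s = maj(S, N)
        # Stand-in for Grover: find any disagreement between x and s.
        i = next((i for i in range(N) if x[i] != s[i]), None)
        if i is None:
            return s, rounds                    # no disagreement: x = maj(S)
        S = [y for y in S if y[i] == x[i]]      # at least halves |S|

random.seed(0)
N, M = 12, 16
C = set()
while len(C) < M:
    C.add(tuple(random.randint(0, 1) for _ in range(N)))
x = random.choice(sorted(C))
s, rounds = halving_identify(x, C, N)
assert s == x and rounds <= math.log2(M) + 1
```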

2.2 Improved halving algorithm

Even assuming free error reduction, Algorithm 1 is not optimal. Primarily, this is because Grover’s algorithm can find an index $i$ such that $x_i \ne \mathrm{maj}(S)_i$ faster if there are many such indices to be found, and Algorithm 1 does not exploit this fact. Given an $N$-bit binary string, we can find a 1 with $O(\sqrt{N/K})$ queries in expectation, where $K$ is the number of 1s in the string [BBHT98]. Alternately, there is a variant of Grover’s algorithm that finds the first 1 (from left to right, say) in the string in $O(\sqrt{p})$ queries in expectation, where $p$ is the position of the first 1. This follows from the known $O(\sqrt{N})$-query algorithm for finding the first 1 in a string of size $N$ [DHHM06], by running that algorithm on the first $2^k$ bits for increasing values of $k$, as sketched below. We can now modify the previous algorithm to look for the first disagreement between $x$ and $\mathrm{maj}(S)$ instead of any disagreement.
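A sketch of this doubling trick (classical stand-in; in the quantum version, each prefix inspection below is replaced by the $O(\sqrt{2^k})$-query first-one subroutine of [DHHM06], so the total expected cost forms a geometric series summing to $O(\sqrt{p})$):

```python
def first_one_doubling(bits):
    """Return the index of the first 1, examining prefixes of doubling length."""
    n, size = len(bits), 1
    while True:
        size = min(size, n)
        prefix = bits[:size]
        if any(prefix):        # quantum version: first-one search on the prefix
            return prefix.index(1)
        if size == n:
            return None        # the string contains no 1 at all
        size *= 2

# The first disagreement between x and maj(S) is the first 1 of their XOR:
x, s = [0, 1, 1, 0, 1], [0, 1, 0, 0, 1]
assert first_one_doubling([a ^ b for a, b in zip(x, s)]) == 2
```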

1: Input: an oracle $O_x$ with the promise that $x \in \mathcal{C}$
2: $S \leftarrow \mathcal{C}$
3: repeat
4:     Search for the first disagreement between $x$ and $\mathrm{maj}(S)$. If we find a disagreement, delete all inconsistent strings from $S$. If not, let $x = \mathrm{maj}(S)$.
5: until $x$ is identified
Algorithm 2 Improved halving algorithm

As before, the algorithm always finds the unknown string. To analyze the query complexity, let $r$ be the number of times the loop repeats and $p_1, p_2, \ldots, p_r$ be the positions of disagreement found. After the first run of the loop, since a disagreement is found at position $p_1$, we have learned the first $p_1$ bits of $x$; the first $p_1 - 1$ bits agree with $\mathrm{maj}(S)$, while bit $p_1$ disagrees with $\mathrm{maj}(S)$. Thus we are left with a set $S$ in which all strings agree on these $p_1$ bits. For convenience, we can treat $S$ as a set of strings of length $N - p_1$ (instead of length $N$). Each iteration reduces the effective length of strings in $S$ by $p_i$, which gives $\sum_{i=1}^{r} p_i \le N$, since there are at most $N$ bits to be learned. As before, the loop can run at most $\log M$ times, thus $r \le \log M$. Finally, let us assume again that these bounded-error search subroutines are exact. Then this algorithm requires $O\bigl(\sum_{i=1}^{r} \sqrt{p_i}\bigr)$ queries, which is $O(\sqrt{N \log M})$, by the Cauchy–Schwarz inequality.
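Written out, the last step applies Cauchy–Schwarz to the vectors $(1, \ldots, 1)$ and $(\sqrt{p_1}, \ldots, \sqrt{p_r})$:

\[
\sum_{i=1}^{r} \sqrt{p_i} \;\le\; \sqrt{r} \cdot \sqrt{\sum_{i=1}^{r} p_i} \;\le\; \sqrt{(\log M) \cdot N}.
\]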

2.3 Final algorithm

While Algorithm 2 is an improvement over Algorithm 1, it is still not optimal. One reason is that sometimes a disagreement between the majority string and $x$ may eliminate more than half the possible strings. This observation can be exploited by finding disagreements in such a way as to maximize the reduction in size when a disagreement is found. This idea is due to Hegedűs [Heg95].

To understand the basic idea, consider searching for a disagreement between $x$ and $\mathrm{maj}(S)$ classically. The most obvious strategy is to check if $x_1 = \mathrm{maj}(S)_1$, then if $x_2 = \mathrm{maj}(S)_2$, and so on until a disagreement is found. This strategy makes more queries if the disagreement is found at a later position. However, we could have chosen to examine the bits in any order. We would like the order to be such that if a disagreement is found at a later position, it cuts down the size of $S$ by a larger factor. Such an ordering would ensure that either we spend very few queries and achieve a factor-2 reduction right away, or we spend more queries but the size of $S$ goes down significantly. Hegedűs shows that there is always a reordering of the bits that achieves this. The following lemma is similar to [Heg95, Lemma 3.2], but we provide a proof for completeness.

Lemma 1.

For any $S \subseteq \{0,1\}^N$, there exists a string $s \in \{0,1\}^N$ and a permutation $\sigma$ on $[N]$, such that for any $p \in [N]$, $|S_p| \le |S| / \max\{2, p\}$, where $S_p := \{y \in S : y_{\sigma(p)} \ne s_{\sigma(p)} \text{ and } y_{\sigma(q)} = s_{\sigma(q)} \text{ for all } q < p\}$, the set of strings in $S$ that agree with $s$ at $\sigma(1), \ldots, \sigma(p-1)$ and disagree with it at $\sigma(p)$.

Proof.

We will construct the permutation $\sigma$ and string $s$ greedily, starting with the first position, $\sigma(1)$. We choose this bit to be one that intuitively contains the most information, i.e., a bit for which the fraction of strings that agree with the majority is closest to 1/2. This choice will make $|S_1|$ as large as possible. More precisely, we choose $\sigma(1)$ to be any $j \in [N]$ that maximizes $\min_{b \in \{0,1\}} |\{y \in S : y_j = b\}|$. Then let $s_{\sigma(1)}$ be $\mathrm{maj}(S)_{\sigma(1)}$.

In general, after having chosen $\sigma(1), \ldots, \sigma(p-1)$ and having defined $s$ on those bits, we choose $\sigma(p)$ to be the most informative bit assuming all previous bits have agreed with string $s$ on positions $\sigma(1), \ldots, \sigma(p-1)$. This choice makes $|S_p|$ as large as possible. More precisely, define $T_p := \{y \in S : y_{\sigma(q)} = s_{\sigma(q)} \text{ for all } q < p\}$. We choose $\sigma(p)$ to be any remaining bit $j$ that maximizes $\min_{b \in \{0,1\}} |\{y \in T_p : y_j = b\}|$. Then let $s_{\sigma(p)}$ be $\mathrm{maj}(T_p)_{\sigma(p)}$.

This construction ensures that $|S_p| = \min_b |\{y \in T_p : y_{\sigma(p)} = b\}| \le |T_p|/2 \le |S|/2$, since $s_{\sigma(p)}$ is the majority value of bit $\sigma(p)$ in $T_p$. Next we show that $|S_q| \ge |S_p|$ for every $q \le p$. Since $\sigma(q)$ was chosen to maximize $\min_b |\{y \in T_q : y_j = b\}|$ over the available bits $j$, we have $|S_q| = \min_b |\{y \in T_q : y_{\sigma(q)} = b\}| \ge \min_b |\{y \in T_q : y_{\sigma(p)} = b\}|$. The size of this set is at least $\min_b |\{y \in T_p : y_{\sigma(p)} = b\}|$, since $T_p \subseteq T_q$. We do not know which $b$ attains the minimum (e.g., it need not be equal to $1 - s_{\sigma(p)}$), but we do know that it is either 0 or 1. So this term is at least $\min_b |\{y \in T_p : y_{\sigma(p)} = b\}| = |S_p|$, where the last equality uses the choice of $s_{\sigma(p)}$ as the majority value of bit $\sigma(p)$ in $T_p$.

Finally, combining $|S_q| \ge |S_p|$ for all $q \le p$ with $\sum_{q=1}^{p} |S_q| \le |S|$ (the sets $S_q$ are disjoint subsets of $S$) gives us $|S_p| \le |S|/p$. Combining this with $|S_p| \le |S|/2$, which follows from the definition of $s$, yields the result. ∎
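The construction in the proof is entirely classical and cheap to run. The sketch below (our own illustration) builds $s$ and $\sigma$ greedily and checks the guarantee of Lemma 1 on a random instance:

```python
import random

def hegedus_order(S, N):
    """Greedy (s, sigma) from the proof of Lemma 1; S is a set of 0/1 tuples."""
    T, sigma, s = list(S), [], [None] * N
    free = set(range(N))
    while free:
        # Most informative free bit: the one whose minority count in T is largest.
        j = max(free, key=lambda j: min(sum(y[j] == b for y in T) for b in (0, 1)))
        b = int(2 * sum(y[j] for y in T) >= len(T))    # majority value in T
        sigma.append(j); s[j] = b; free.remove(j)
        T = [y for y in T if y[j] == b]                # strings agreeing so far
    return s, sigma

random.seed(1)
N = 8
S = {tuple(random.randint(0, 1) for _ in range(N)) for _ in range(40)}
s, sigma = hegedus_order(S, N)

# Verify |S_p| <= |S| / max(2, p) for every p:
T = list(S)
for p, j in enumerate(sigma, start=1):
    S_p = [y for y in T if y[j] != s[j]]
    assert len(S_p) <= len(S) / max(2, p)
    T = [y for y in T if y[j] == s[j]]
```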

We can now state our final oracle identification algorithm.

1: Input: an oracle $O_x$ with the promise that $x \in \mathcal{C}$
2: $S \leftarrow \mathcal{C}$
3: repeat
4:     Let $s$ and $\sigma$ be as in Lemma 1 for the current set $S$. Search for the first (according to $\sigma$) disagreement between $x$ and $s$. If we find a disagreement, delete all inconsistent strings from $S$. If not, let $x = s$.
5: until $x$ is identified
Algorithm 3 Final algorithm

As before, it is clear that this algorithm solves the problem. Let us analyze the query complexity: let $r$ be the number of times the loop repeats, and let $p_1, p_2, \ldots, p_r$ be the positions (in the order given by $\sigma$) of the disagreements found. We have $\sum_{i=1}^{r} p_i \le N$, as in Algorithm 2.

Unlike the previous analysis, the bound $r \le \log M$ can be loose, since the size of $S$ may reduce by a larger factor due to Lemma 1. Instead, we know that each iteration reduces the set by a factor of $\max\{2, p_i\}$, which gives us $\prod_{i=1}^{r} \max\{2, p_i\} \le M$. As before, we will assume the search subroutine is exact, which gives us a query upper bound of $O\bigl(\sum_{i=1}^{r} \sqrt{p_i}\bigr)$, subject to the constraints $\sum_{i=1}^{r} p_i \le N$ and $\prod_{i=1}^{r} \max\{2, p_i\} \le M$. We solve this optimization problem in Appendix B to obtain the following lemma.

Lemma 2.

Let $\mathrm{OPT}(M, N)$ be the maximum value attained by $\sum_{i=1}^{r} \sqrt{p_i}$, subject to the constraints $\sum_{i=1}^{r} p_i \le N$ and $\prod_{i=1}^{r} \max\{2, p_i\} \le M$, where $r$ and $p_i$ are positive integers for all $i \in [r]$. Then $\mathrm{OPT}(M, N) = O\left(\sqrt{\dfrac{N \log M}{\log(N/\log M) + 1}}\right)$, and in particular $\mathrm{OPT}(M, N) = O(N)$.
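A quick heuristic (not the proof in Appendix B, which handles unequal $p_i$) shows where this bound comes from. Take all $p_i$ equal to $p = N/r \ge 2$:

\[
\sum_{i=1}^{r} \sqrt{p_i} = r\sqrt{p} = \sqrt{rN},
\qquad
\prod_{i=1}^{r} \max\{2, p_i\} = p^r \le M \iff r \log(N/r) \le \log M.
\]

When $\log M \le N/2$, say, the largest admissible number of rounds is $r = \Theta\bigl(\log M / \log(N/\log M)\bigr)$, giving $\sqrt{rN} = O\bigl(\sqrt{N \log M / \log(N/\log M)}\bigr)$.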

Thus Algorithm 3 achieves the upper bound claimed in Theorem 2, under our assumption. We can now return to the assumption that the search subroutine is exact. Since it is not exact, we could reduce the error with logarithmic overhead. However, it is usually unnecessary to incur this loss in quantum query algorithms. In the next section we prove this and rigorously establish the query complexity of Algorithm 3.

3 Composition theorem for input-dependent query complexity

The primary aim of this section is to rigorously establish the query complexity of Algorithm 3. Along the way, we will develop techniques that can be used more generally. Let us begin by describing what we would like to prove. Algorithm 3 essentially consists of a loop repeated $r(x)$ times. We write $r(x)$ to make explicit its dependence on the input $x$. The loop itself consists of running a variant of Grover’s algorithm on $x$, based on the information we have collected thus far about $x$. Call these algorithms $A_1, A_2, \ldots, A_{r(x)}$. To be clear, $A_1$ is the algorithm that is run the first time the loop is executed, i.e., it looks for a disagreement under the assumption that $S = \mathcal{C}$. It produces an output $p_1(x)$, which is then used by $A_2$. $A_2$ looks for a disagreement assuming a modified set $S$, which is smaller than $\mathcal{C}$. Let us say that in addition to $p_2(x)$, $A_2$ also outputs $p_1(x)$. This ensures that the output of $A_i$ completely describes all the information we have collected about $x$. Thus algorithm $A_{i+1}$ now only needs the output of $A_i$ to work correctly.

We can now view Algorithm 3 as a composition of the algorithms $A_1, \ldots, A_{r(x)}$. It is a composition in the sense that the output of one algorithm is required as the input of the next algorithm. We know that the expected query complexity of $A_i$ is $O(\sqrt{p_i(x)})$. If these algorithms were exact, then running them one after the other would yield an algorithm with expected query complexity $O\bigl(\sum_i \sqrt{p_i(x)}\bigr)$. But since they are bounded error, this does not work. However, if we consider their worst-case complexities, we can achieve such a composition: if we have algorithms $A_1, \ldots, A_k$ with worst-case query complexities $Q_1, \ldots, Q_k$, then there is a quantum algorithm that solves the composed problem with $O\bigl(\sum_{i=1}^{k} Q_i\bigr)$ queries. This is a remarkable property of quantum algorithms, which follows from the work of Lee et al. [LMR11]. We first discuss this simpler result before moving on to input-dependent query complexities.

3.1 Composition theorem for worst-case query complexity

We now show a composition theorem for solutions of the filtered $\gamma_2$-norm SDP, which implies a similar result for worst-case quantum query complexity. This follows from the work of Lee et al. [LMR11], which we generalize in the next section.

As discussed in the introduction, let $D \subseteq \{0,1\}^N$, and consider functions that map $D$ to $E$. For any matrix $A$ indexed by elements of $D$, we define a quantity $\gamma(A)$ below. (To readers familiar with the notation of [LMR11], this is the same as their $\gamma_2(A|\Delta)$.)

Definition 1.

Let $A$ be a square matrix indexed by $D$. We define $\gamma(A)$ as the following program, where the minimization is over sets of vectors $\{u_{xj}\}$ and $\{v_{xj}\}$ for $x \in D$ and $j \in [N]$.

minimize: $\max_{x \in D} c(x)$ (1)
subject to: $c(x) = \max\bigl\{\sum_{j \in [N]} \|u_{xj}\|^2,\ \sum_{j \in [N]} \|v_{xj}\|^2\bigr\}$ for all $x \in D$ (2)
$\sum_{j : x_j \ne y_j} \langle u_{xj}, v_{yj} \rangle = A_{xy}$ for all $x, y \in D$ (3)

We use $\gamma(A)$ to refer to both the semidefinite program (SDP) above and its optimum value. For a function $f$, let $F$ be its Gram matrix, defined as $F_{xy} = 1$ if $f(x) = f(y)$ and $F_{xy} = 0$ otherwise. Lee et al. showed that $Q(f) = \Theta(\gamma(J - F))$, where $J$ is the all-ones matrix.

More generally, they showed that this SDP also upper bounds the quantum query complexity of state conversion. In the state conversion problem, we have to convert a given state $|\psi_x\rangle$ to $|\varphi_x\rangle$. An explicit description of the states $|\psi_x\rangle$ and $|\varphi_x\rangle$ is known for all $x \in D$, but we do not know the value of $x$. Since the query complexity of this task depends only on the Gram matrices of the starting and target states, define Gram matrices $\rho$ and $\sigma$ by $\rho_{xy} = \langle \psi_x | \psi_y \rangle$ and $\sigma_{xy} = \langle \varphi_x | \varphi_y \rangle$ for all $x, y \in D$. Let $\rho \mapsto \sigma$ denote the problem of converting states with Gram matrix $\rho$ to those with Gram matrix $\sigma$. If $F$ is the Gram matrix of a function $f$, then $J \mapsto F$ is the function evaluation problem. Lee et al. showed that $Q(\rho \mapsto \sigma) = O(\gamma(\rho - \sigma))$, which generalizes $Q(f) = O(\gamma(J - F))$.

We now have the tools to prove the composition theorem for the filtered $\gamma_2$-norm SDP.

Theorem 3 ([LMR11]).

Let $f_0, f_1, \ldots, f_k$ be functions with Gram matrices $F_0, F_1, \ldots, F_k$. Let $d_1, \ldots, d_k$ be the optimum values of the SDPs for the state conversion problems $F_{i-1} \mapsto F_i$, i.e., for $i \in [k]$, $d_i = \gamma(F_{i-1} - F_i)$. Then $\gamma(F_0 - F_k) \le \sum_{i=1}^{k} d_i$.

This does not appear explicitly in [LMR11], but simply follows from the triangle inequality $\gamma(A + B) \le \gamma(A) + \gamma(B)$ [LMR11, Lemma A.2]. From this we can also show an analogous theorem for quantum query complexity, which states that $Q(F_0 \mapsto F_k) = O\bigl(\sum_{i=1}^{k} Q(F_{i-1} \mapsto F_i)\bigr)$. We do not prove this claim as we do not need it in this paper.

For our application, we require a composition theorem similar to Theorem 3, but for input-dependent query complexity. However, it is not even clear what this means a priori, since the quantity $\gamma(A)$ does not contain information about input-dependent complexities. Indeed, the value of $\gamma(A)$ is a single number and cannot contain such information. However, the SDP does contain this information, and we modify this framework to be able to access it.

For example, let $f$ be the find-first-one function, which outputs the smallest $i$ such that $x_i = 1$ and outputs $N + 1$ if $x = 0^N$. There is a quantum algorithm that solves this with $O(\sqrt{f(x)})$ queries in expectation. Furthermore, there is a feasible solution for the SDP $\gamma(J - F)$ with $c(x) = O(\sqrt{f(x)})$, where $c(x)$ is the function that appears in (2). This suggests that $c(x)$ gives us information about the $x$-dependent query complexity. The same situation occurs when we consider the search problem with multiple marked items. There is a feasible solution with $c(x) = O(\sqrt{N/K})$ for inputs with $K$ ones. This function $c(x)$ will serve as our input-dependent cost measure.

3.2 Cost functions

Definition 2 (Cost function).

Let $A$ be a square matrix indexed by $D$. We say $c : D \to \mathbb{R}$ is a feasible cost function for $\gamma(A)$ if there is a feasible solution of the SDP $\gamma(A)$ whose values in eq. (2) are $c(x)$. Let the set of all feasible cost functions for $\gamma(A)$ be denoted $C(A)$.

Note that if $c$ is a feasible cost function for $\gamma(A)$, then $\max_{x \in D} c(x)$ is an upper bound on the worst-case cost, $\gamma(A)$, which is exactly what we expect from an input-dependent cost. We can now prove an input-dependent analogue of Theorem 3 with $C(\cdot)$ playing the role of $\gamma(\cdot)$.

Theorem 4.

Let $f_0, f_1, \ldots, f_k$ be functions with Gram matrices $F_0, F_1, \ldots, F_k$. Let $c_1, \ldots, c_k$ be feasible cost functions for the state conversion problems $F_{i-1} \mapsto F_i$, i.e., for $i \in [k]$, $c_i \in C(F_{i-1} - F_i)$. Then there is a $c \in C(F_0 - F_k)$ satisfying $c(x) \le \sum_{i=1}^{k} c_i(x)$ for all $x \in D$.

As in the case of Theorem 3, this follows from an analogous triangle inequality.

Lemma 3.

Let $A$ and $B$ be square matrices indexed by $D$. If $c_A \in C(A)$ and $c_B \in C(B)$, there exists a $c \in C(A + B)$ satisfying $c(x) \le c_A(x) + c_B(x)$ for all $x \in D$.

Proof.

Since $c_A \in C(A)$ and $c_B \in C(B)$, there exist vectors that satisfy the following constraints: $\{u_{xj}\}$ and $\{v_{xj}\}$ with $\sum_{j : x_j \ne y_j} \langle u_{xj}, v_{yj} \rangle = A_{xy}$ and $c_A(x) = \max\bigl\{\sum_j \|u_{xj}\|^2, \sum_j \|v_{xj}\|^2\bigr\}$, and $\{s_{xj}\}$ and $\{t_{xj}\}$ with $\sum_{j : x_j \ne y_j} \langle s_{xj}, t_{yj} \rangle = B_{xy}$ and $c_B(x) = \max\bigl\{\sum_j \|s_{xj}\|^2, \sum_j \|t_{xj}\|^2\bigr\}$.

Now define $\bar{u}_{xj} = u_{xj} \oplus s_{xj}$ and $\bar{v}_{xj} = v_{xj} \oplus t_{xj}$, where $\oplus$ denotes the direct sum. We claim that these vectors are feasible for $\gamma(A + B)$. The constraints are satisfied since $\langle \bar{u}_{xj}, \bar{v}_{yj} \rangle = \langle u_{xj}, v_{yj} \rangle + \langle s_{xj}, t_{yj} \rangle$. The cost function for this solution, $\max\bigl\{\sum_j \|\bar{u}_{xj}\|^2, \sum_j \|\bar{v}_{xj}\|^2\bigr\}$, is at most $c_A(x) + c_B(x)$, which gives $c(x) \le c_A(x) + c_B(x)$. ∎
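The direct-sum step is mechanical and can be checked numerically. In the sketch below (our own illustration), random vectors generate matrices $A$ and $B$ through constraint (3), and the concatenated vectors generate exactly $A + B$ while the costs add:

```python
import numpy as np

rng = np.random.default_rng(0)
D = [(0, 0), (0, 1), (1, 0), (1, 1)]   # domain: 2-bit strings
n, d = 2, 3                            # bits per input, vector dimension

def gram_from_vectors(U, V):
    """The matrix generated via constraint (3): sum over disagreeing bits."""
    A = np.zeros((len(D), len(D)))
    for a, x in enumerate(D):
        for b, y in enumerate(D):
            A[a, b] = sum(U[a, j] @ V[b, j] for j in range(n) if x[j] != y[j])
    return A

u, v, s, t = (rng.normal(size=(len(D), n, d)) for _ in range(4))
A, B = gram_from_vectors(u, v), gram_from_vectors(s, t)

ubar = np.concatenate([u, s], axis=2)  # direct sum, position by position
vbar = np.concatenate([v, t], axis=2)
assert np.allclose(gram_from_vectors(ubar, vbar), A + B)

cost = lambda U, V, a: max((U[a] ** 2).sum(), (V[a] ** 2).sum())
assert all(cost(ubar, vbar, a) <= cost(u, v, a) + cost(s, t, a)
           for a in range(len(D)))
```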

In our applications, we will encounter algorithms that also output their input, i.e., accept $f(x)$ as input and output $(f(x), g(x))$. Note that the Gram matrix of the function $h(x) = (f(x), g(x))$ is merely $F \circ G$, defined as $(F \circ G)_{xy} = F_{xy} G_{xy}$, the entrywise product of $F$ and $G$.

Such an algorithm can either be thought of as a single quantum algorithm $A$ that accepts $f(x)$ as input and outputs $(f(x), g(x))$, or as a collection of algorithms $A_s$ for each $s$ in the range of $f$, such that algorithm $A_s$ requires no input and outputs $(s, g(x))$ on oracle input $x$ with $f(x) = s$. These are equivalent viewpoints, since in one direction we can construct the algorithms $A_s$ from $A$ by hardcoding the value of $f(x)$, and in the other direction, we can read the input $s$ and call the appropriate $A_s$ as a subroutine and output $(s, g(x))$. Additionally, if the algorithm $A_s$ makes $T_s(x)$ queries on oracle input $x$, the algorithm we constructed accepts $f(x)$ as input, outputs $(f(x), g(x))$, and makes $T_{f(x)}(x)$ queries on oracle input $x$. While intuitive for quantum algorithms, we need to establish this rigorously for cost functions.

Theorem 5.

Let $f$ and $g$ be functions with Gram matrices $F$ and $G$. For any $s$ in the range of $f$, let $D_s = \{x \in D : f(x) = s\}$. For every $s$, let $c_s$ be a feasible cost function for $\gamma((J - G)|_{D_s})$, where $A|_{D_s}$ denotes the matrix $A$ restricted to those $x, y \in D$ that satisfy $f(x) = f(y) = s$. Then there exists a $c \in C(F - F \circ G)$, such that $c(x) = c_{f(x)}(x)$.

Proof.

We build a feasible solution for $\gamma(F - F \circ G)$ out of the feasible solutions for the $\gamma((J - G)|_{D_s})$. For each $s$ we have vectors $\{u^s_{xj}\}$ and $\{v^s_{xj}\}$ that satisfy $\sum_{j : x_j \ne y_j} \langle u^s_{xj}, v^s_{yj} \rangle = 1 - G_{xy}$ for all $x, y \in D_s$ and $c_s(x) = \max\bigl\{\sum_j \|u^s_{xj}\|^2, \sum_j \|v^s_{xj}\|^2\bigr\}$.

Let $\bar{u}_{xj} = e_{f(x)} \otimes u^{f(x)}_{xj}$ and $\bar{v}_{yj} = e_{f(y)} \otimes v^{f(y)}_{yj}$, where $\{e_s\}$ is an orthonormal set of vectors indexed by the range of $f$. This is a feasible solution for $\gamma(F - F \circ G)$, since $\sum_{j : x_j \ne y_j} \langle \bar{u}_{xj}, \bar{v}_{yj} \rangle = F_{xy} \sum_{j : x_j \ne y_j} \langle u^{f(x)}_{xj}, v^{f(y)}_{yj} \rangle = F_{xy}(1 - G_{xy}) = (F - F \circ G)_{xy}$. Note that when $f(x) \ne f(y)$, the value of $\sum_{j : x_j \ne y_j} \langle u^{f(x)}_{xj}, v^{f(y)}_{yj} \rangle$ is not known, but this only happens when $F_{xy} = 0$, which makes the term 0. Lastly, the cost function for this solution is $\max\bigl\{\sum_j \|\bar{u}_{xj}\|^2, \sum_j \|\bar{v}_{xj}\|^2\bigr\} = c_{f(x)}(x)$, which is what we claimed. ∎

3.3 Algorithm analysis

We can now return to computing the query complexity of Algorithm 3. Using the same notation as in the beginning of this section, for any $x \in \mathcal{C}$, we define $r(x)$ to be the number of times the repeat loop is run in Algorithm 3 for oracle input $x$, assuming all subroutines have no error. Similarly, let $p_1(x), \ldots, p_{r(x)}(x)$ be the first positions of disagreement found in each run of the loop. Note that $p_1(x), \ldots, p_{r(x)}(x)$ together uniquely specify $x$. Let $k := \max_{x \in \mathcal{C}} r(x)$.

We now define functions $f_1, \ldots, f_k$ as $f_i(x) := (p_1(x), \ldots, p_i(x))$, where we set $p_i(x) := 0$ if $i > r(x)$. Thus if $F_1, \ldots, F_k$ are the Gram matrices of the functions $f_1, \ldots, f_k$, then $J \mapsto F_k$ is the oracle identification problem, since $f_k(x)$ uniquely specifies $x$.

We will now construct a solution for $\gamma(J - F_k)$, using solutions for the intermediate functions $f_i$. From Theorem 4 we know that we only need to construct solutions for $\gamma(F_{i-1} - F_i)$, where we set $F_0 := J$. From Theorem 5 we know that instead of constructing a solution for $\gamma(F_{i-1} - F_i)$, which is $\gamma(F_{i-1} - F_{i-1} \circ G_i)$ if $G_i$ is the Gram matrix of the function $g_i(x) := p_i(x)$, we can construct several solutions, one for each value of $f_{i-1}(x)$. More precisely, let $D_s = \{x : f_{i-1}(x) = s\}$; then we can construct solutions for $\gamma((J - G_i)|_{D_s})$ for all $s$, where $(J - G_i)|_{D_s}$ is the matrix $J - G_i$ restricted to those $x, y$ that satisfy $f_{i-1}(x) = f_{i-1}(y) = s$.

For any $i$ and $s$, the problem corresponding to $\gamma((J - G_i)|_{D_s})$ is just the problem of finding the first disagreement between $x$ and a known string, which is essentially the find-first-one function. This has a solution with cost function $O(\sqrt{p})$, where $p$ is the position of the first disagreement, which in this case is $p_i(x)$.

Theorem 6.

Let $f : \{0,1\}^N \to [N+1]$ be the function that outputs the smallest $p$ such that $x_p = 1$ and outputs $N + 1$ if $x = 0^N$, and let $F$ be its Gram matrix. Then there is a $c \in C(J - F)$ such that $c(x) = O(\sqrt{f(x)})$ for all $x$.

Proof.

Let $p_x := f(x)$, $\alpha_j := j^{1/4}$, and $\beta_j := j^{-1/4}$, and let $e_1, e_2$ denote the standard basis of $\mathbb{R}^2$. Define $u_{xj}, v_{xj} \in \mathbb{R}^2$ as the following: $u_{xj} = \alpha_j e_1$ if $j = p_x$, $u_{xj} = \beta_j e_2$ if $j < p_x$, and $u_{xj} = 0$ if $j > p_x$; symmetrically, $v_{xj} = \alpha_j e_2$ if $j = p_x$, $v_{xj} = \beta_j e_1$ if $j < p_x$, and $v_{xj} = 0$ if $j > p_x$.

This is a feasible solution for $\gamma(J - F)$, i.e., $\sum_{j : x_j \ne y_j} \langle u_{xj}, v_{yj} \rangle = 1 - F_{xy}$. Since the constraints are symmetric in $x$ and $y$, there are two cases: either $f(x) < f(y)$ or $f(x) = f(y)$. For the first case, $\sum_{j : x_j \ne y_j} \langle u_{xj}, v_{yj} \rangle = \langle u_{x p_x}, v_{y p_x} \rangle = \alpha_{p_x} \beta_{p_x} = 1$, since $x$ and $y$ agree on all positions before $p_x$ and $u_{xj} = 0$ for $j > p_x$. For the second case, $\sum_{j : x_j \ne y_j} \langle u_{xj}, v_{yj} \rangle = 0$, since the only bits that $x$ and $y$ disagree on appear after position $p_x = p_y$. To compute the cost function, note that $\sum_j \|u_{xj}\|^2 = \sum_j \|v_{xj}\|^2 = \alpha_{p_x}^2 + \sum_{j < p_x} \beta_j^2 \le \sqrt{p_x} + 2\sqrt{p_x} = O(\sqrt{f(x)})$ (the term $\alpha_{p_x}^2$ is absent when $f(x) = N + 1$). For all other $j$, $u_{xj} = v_{xj} = 0$. ∎
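Because the vectors are explicit, both constraint (3) and the cost bound can be brute-force checked for small $N$ (a sanity check of the construction as written above, not part of the proof):

```python
import itertools, math
import numpy as np

N = 6

def f(x):  # position of the first 1 (1-indexed); N + 1 if x is all zeros
    return next((j + 1 for j, b in enumerate(x) if b), N + 1)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha = lambda j: j ** 0.25
beta = lambda j: j ** -0.25

def u(x, j):
    p = f(x)
    return alpha(j) * e1 if j == p else (beta(j) * e2 if j < p else 0.0 * e1)

def v(x, j):
    p = f(x)
    return alpha(j) * e2 if j == p else (beta(j) * e1 if j < p else 0.0 * e1)

for x in itertools.product((0, 1), repeat=N):
    for y in itertools.product((0, 1), repeat=N):
        lhs = sum(u(x, j) @ v(y, j)
                  for j in range(1, N + 1) if x[j - 1] != y[j - 1])
        assert abs(lhs - (0.0 if f(x) == f(y) else 1.0)) < 1e-9   # eq. (3)
    c = max(sum(u(x, j) @ u(x, j) for j in range(1, N + 1)),
            sum(v(x, j) @ v(x, j) for j in range(1, N + 1)))
    assert c <= 3 * math.sqrt(f(x))   # c(x) = O(sqrt(f(x)))
```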

Our function is different from this one in two ways. First, we wish to find the first disagreement with a fixed string instead of the first 1. This change does not affect the Gram matrix or the SDP, since we can simply relabel the input bits. Second, we are looking for a disagreement according to an order $\sigma$, not from left to right. This is easy to fix, since we can replace $j$ with $\sigma(j)$ in the definition of the vectors in the proof above.

This shows that for any $i$ and $s$, there is a feasible cost function for $\gamma((J - G_i)|_{D_s})$ with cost $O(\sqrt{p_i(x)})$ for any $x$ that satisfies $f_{i-1}(x) = s$. Using Theorem 5, we get that for any $i$ there is a $c_i \in C(F_{i-1} - F_i)$ with $c_i(x) = O(\sqrt{p_i(x)})$ for all $x$. Finally, using Theorem 4, we have a $c \in C(J - F_k)$ with cost $c(x) = O\bigl(\sum_{i=1}^{r(x)} \sqrt{p_i(x)}\bigr)$.

Since the function $f_k$ uniquely determines $x$, we have a feasible cost function for oracle identification with cost $O\bigl(\sum_{i=1}^{r(x)} \sqrt{p_i(x)}\bigr)$, subject to the constraints of Lemma 2, which we have already solved. Along with the lower bound proved in Appendix A, this yields the main result.

Theorem 2 (restated). For $N < M \le 2^N$, $Q(\mathrm{OIP}(M, N)) = \Theta\left(\sqrt{\dfrac{N \log M}{\log(N/\log M) + 1}}\right)$.

4 Other applications

4.1 Quantum learning theory

The oracle identification problem has also been studied in quantum learning theory with the aim of characterizing $Q(\mathrm{OIP}(\mathcal{C}))$. The algorithms and lower bounds studied apply to arbitrary sets $\mathcal{C}$, not just to the class of sets of a certain size, as in the rest of the paper. We show that Algorithm 3 also performs well for any set $\mathcal{C}$, outperforming the best known algorithm. The known upper and lower bounds for this problem are in terms of a combinatorial parameter $\hat{\gamma}^{\mathcal{C}}$, defined by Servedio and Gortler. They showed that for any $\mathcal{C}$, $Q(\mathrm{OIP}(\mathcal{C})) = \Omega\bigl(\sqrt{1/\hat{\gamma}^{\mathcal{C}}} + \log M / \log N\bigr)$ [SG04]. Later, Atıcı and Servedio showed that