# From Low-Distortion Norm Embeddings to Explicit Uncertainty Relations and Efficient Information Locking

## Abstract

The existence of quantum uncertainty relations is the essential reason that some classically unrealizable cryptographic primitives become realizable when quantum communication is allowed. One operational manifestation of these uncertainty relations is a purely quantum effect referred to as *information locking* [DiVincenzo et al. (2004)]. A locking scheme can be viewed as a cryptographic protocol in which a uniformly random $n$-bit message is encoded in a quantum system using a classical key of size much smaller than $n$. Without the key, no measurement of this quantum state can extract more than a negligible amount of information about the message, in which case the message is said to be "locked". Furthermore, knowing the key, it is possible to recover, that is "unlock", the message.

In this paper, we make the following contributions by exploiting a connection between uncertainty relations and low-distortion embeddings of Euclidean spaces into slightly larger spaces endowed with the $\ell_1$ norm.
We introduce the notion of a *metric uncertainty relation* and connect it to low-distortion embeddings of $\ell_2$ into $\ell_1$. A metric uncertainty relation also implies an entropic uncertainty relation.
We prove that random bases satisfy uncertainty relations with a stronger definition and better parameters than previously known. Our proof is also considerably simpler than earlier proofs. We then apply this result to show the existence of locking schemes with key size independent of the message length.
Moreover, we give *efficient* constructions of bases satisfying metric uncertainty relations. The bases defining these metric uncertainty relations are computable by quantum circuits of almost linear size. This leads to the first explicit construction of a strong information locking scheme. These constructions are obtained by adapting an explicit norm embedding due to \citeNInd07 and an extractor construction of \citeNGUV09.
We apply our metric uncertainty relations to exhibit communication protocols that perform equality testing of $n$-qubit states. We prove that this task can be performed by a single message protocol whose quantum communication is much smaller than $n$ qubits, supplemented by classical communication, where the computation of the sender is efficient.




A preliminary version of this paper appeared in the Proceedings of the 43rd ACM Symposium on Theory of Computing (STOC 2011), pp. 773–782 and was presented at the Workshop on Quantum Information Processing (QIP 2011).

This research was supported by the Canada Research Chairs program, the Perimeter Institute, CIFAR, FQRNT’s INTRIQ, MITACS, NSERC, ONR through grant N000140811249 and QuantumWorks.

Author’s addresses: O. Fawzi, (Current address) Institute for Theoretical Physics, ETH Zürich, Switzerland, e-mail: ofawzi@phys.ethz.ch; P. Hayden, School of Computer Science, McGill University, Montréal, Québec, Canada, e-mail: patrick@cs.mcgill.ca; P. Sen, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India, e-mail: pgdsen@tcs.tifr.res.in

## 1 Introduction

Uncertainty relations express the fundamental incompatibility of certain measurements in quantum mechanics [Heisenberg (1927), Robertson (1929)]. They quantify the fact that noncommuting quantum mechanical observables cannot simultaneously have definite values. Far from just being puzzling constraints on our ability to know the state of a quantum system, uncertainty relations are at the heart of why some classically unrealizable cryptographic primitives become realizable when quantum communication is allowed. For example, so-called *entropic* uncertainty relations, introduced in [Bialynicki-Birula and Mycielski (1975), Deutsch (1983)], are the main ingredients of modern security proofs for quantum key distribution [Tomamichel and Renner (2011), Tomamichel et al. (2012)] and for secure computation in the bounded and noisy quantum storage models [Damgård et al. (2005), Damgård et al. (2007), König et al. (2012)]. A simple example of an entropic uncertainty relation was given by \citeNMU88. Let $B_0 = \{|0\rangle, |1\rangle\}$ denote a "rectilinear" or computational basis of $\mathbb{C}^2$ and $B_1 = \{\tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\}$ be a "diagonal" or Hadamard basis, and let $B_0^{\otimes n}$ and $B_1^{\otimes n}$ be the corresponding bases obtained on the tensor product space $(\mathbb{C}^2)^{\otimes n}$. All vectors in the rectilinear basis have an inner product with all vectors in the diagonal basis upper bounded by $2^{-n/2}$ in absolute value. The uncertainty relation of \citeNMU88 states that for *any* quantum state on $n$ qubits described by a unit vector $|\psi\rangle \in (\mathbb{C}^2)^{\otimes n}$, the average measurement entropy satisfies

$$\frac{1}{2}\Big(H\big(p_{B_0^{\otimes n}}(|\psi\rangle)\big) + H\big(p_{B_1^{\otimes n}}(|\psi\rangle)\big)\Big) \;\geq\; \frac{n}{2}, \tag{1}$$

where $p_B(|\psi\rangle)$ denotes the outcome probability distribution when $|\psi\rangle$ is measured in basis $B$ and $H$ denotes the Shannon entropy. Equation (1) expresses the fact that measuring in a random basis $B_K^{\otimes n}$, where $K$ is uniformly chosen from the set $\{0, 1\}$, produces an outcome that has some uncertainty irrespective of the state being measured.
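As a quick numerical illustration of relation (1) in the single-qubit case $n = 1$ (a sanity check added here for the reader, not part of the original argument), the following Python sketch samples random pure states and verifies that the average of the two measurement entropies never falls below $1/2$:

```python
import math, random

def shannon(p):
    # Shannon entropy (base 2) of a probability vector
    return -sum(x * math.log2(x) for x in p if x > 1e-12)

def meas_probs(state, basis):
    # outcome probabilities |<b_i|psi>|^2 of measuring `state` in `basis`
    return [abs(sum(b[i].conjugate() * state[i] for i in range(len(state)))) ** 2
            for b in basis]

comp = [(1, 0), (0, 1)]               # computational ("rectilinear") basis of C^2
h = 1 / math.sqrt(2)
hada = [(h, h), (h, -h)]              # Hadamard ("diagonal") basis

random.seed(0)
for _ in range(1000):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    nrm = math.sqrt(sum(abs(x) ** 2 for x in v))
    psi = [x / nrm for x in v]
    avg = 0.5 * (shannon(meas_probs(psi, comp)) + shannon(meas_probs(psi, hada)))
    assert avg >= 0.5 - 1e-9   # Maassen-Uffink: average entropy >= n/2 = 1/2
```

The bound is attained with equality by the basis states themselves, for which one entropy is $0$ and the other is $1$.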

A surprising application of entropic uncertainty relations is the effect known as *information locking* [DiVincenzo et al. (2004)] (see also [Leung (2009)]). Suppose Alice holds a uniformly distributed random $n$-bit string $X$. She chooses a uniformly random bit $K$ and encodes $X$ in the basis $B_K^{\otimes n}$. This random quantum state is then given to Bob. How much information about $X$ can Bob, who does not know $K$, extract from this quantum system via a measurement? To better appreciate the quantum case, observe that if $X$ were encoded in a classical state, then the encoding would "hide" at most one bit about $X$; more precisely, the mutual information between $X$ and the encoding would be at least $n - 1$. For the quantum encoding, one can show that for *any measurement* that Bob applies, with outcome denoted $Y$, the mutual information between $X$ and $Y$ is at most $n/2$ [DiVincenzo et al. (2004)]. The missing $n/2$ bits of information about $X$ are said to be *locked* in the quantum state. If Bob had access to $K$, then $X$ could easily be obtained by measuring in the basis $B_K^{\otimes n}$: the one-bit key $K$ can be used to *unlock* $n/2$ bits about $X$.

A natural question is whether it is possible to lock more than $n/2$ bits in this way. In order to achieve this, the key has to be chosen from a larger set. In terms of uncertainty relations, this means that we need to consider more than two bases to achieve an average measurement entropy larger than $n/2$ (equation (1)). In this case, the natural candidate is a set of *mutually unbiased bases*, the defining property of which is a small inner product between any pair of vectors in different bases. Surprisingly, it was shown by \citeNBW07 and \citeNAmb09 that there are up to $2^{n/2}$ mutually unbiased bases that only satisfy an average measurement entropy of $n/2$, which is only as good as what can be achieved with two measurements (1). In other words, looking at the pairwise inner product between vectors in different bases is not enough to obtain uncertainty relations stronger than (1). It is for this reason that so little is understood about uncertainty relations for more than two measurements. (See [Wehner and Winter (2010)].)

To achieve an average measurement entropy close to the maximal value of $n$ while keeping the number of bases subexponential in $n$, the only known constructions are probabilistic and computationally inefficient. \citeNHLSW04 proved that random bases satisfy entropic uncertainty relations of the form (1) with an average measurement entropy of almost $n$. This leads to an encoding that locks almost all of the bits of the message using a short key. Recently, \citeNDup09 and \citeNFDHL10 proved that random encodings exhibit a locking behaviour in a stronger sense and that it is possible to lock all but an arbitrarily small constant fraction of the bits while still using a short key. To obtain an explicit construction, standard derandomization techniques are not known to work in this setting. For example, unitary designs [Dankert et al. (2009)] define an exponential number of bases. Moreover, using a biased subset of the set of Pauli matrices [Ambainis and Smith (2004), Desrosiers and Dupuis (2010)] fails to produce a locking scheme unless the subset is almost as large as the full set (see Appendix D).

### 1.1 Our results

In this paper, we study uncertainty relations in the light of a connection with low-distortion embeddings of $\ell_2$ into $\ell_1$. The intuition behind this connection is very simple. Consider the measurements defined by a set of orthonormal bases of a Hilbert space of dimension $d$. The bases satisfy an uncertainty relation if for every state $|\psi\rangle$ and "most" bases $B$ in the set, the vector representing $|\psi\rangle$ in the basis $B$ is "spread". One way of quantifying the spread of a vector is by its $\ell_1$ norm, i.e., the sum of the absolute values of its components. A vector of unit $\ell_2$ norm is well spread if its $\ell_1$ norm is close to its maximal value of $\sqrt{d}$.

Embeddings from a Euclidean space into $\ell_1$ (or more generally, any finite-dimensional normed space) that approximately preserve the norm up to a scaling factor are typically studied in the area of asymptotic geometric analysis [Dvoretzky (1961), Milman (1971), Figiel et al. (1977), Milman and Schechtman (1986)]. More recently, low-distortion embeddings, in particular from $\ell_2$ into $\ell_1$, started to gain interest in the computer science community for their applications to approximation algorithms and compressed sensing [Indyk (2006), Indyk (2007), Guruswami et al. (2008)]. For our applications in quantum information theory, the relevant norm is not the $\ell_1$ norm but rather a closely related norm called $\ell_1(\ell_2)$.

Motivated by these considerations, we measure the uncertainty of a distribution by taking a marginal and measuring its closeness to the uniform distribution. This is a stronger requirement than having large Shannon entropy and it leads to the definition of a *metric uncertainty relation* (Definition 2.2). Using standard techniques from asymptotic geometric analysis, we prove the existence of strong metric uncertainty relations (Theorem 2.3). This result can be seen as a strengthening of Dvoretzky's theorem [Dvoretzky (1961), Milman (1971)] for the special case of the $\ell_1$ norm. In addition to giving a stronger statement with better parameters, our analysis of the uncertainty relations satisfied by random bases is simpler than earlier proofs [Hayden et al. (2004), Dupuis et al. (2010)]. In particular, for large $n$, we prove the existence of entropic uncertainty relations with average measurement entropy strictly increasing with the number of measurements. This result leads to better results on the existence of locking schemes (Corollary 3.2). We also show in Theorem 3.3 how to use these locking schemes to build quantum hiding fingerprints as defined by \citeNGI10.

Moreover, adapting an explicit low-distortion embedding of $\ell_2$ into $\ell_1$ due to \citeNInd07, we obtain explicit bases of $(\mathbb{C}^2)^{\otimes n}$ that satisfy strong metric uncertainty relations using a small number of bases. Measuring in these bases can be performed by almost linear size quantum circuits. The use of a strong permutation extractor is the main new ingredient that makes our "quantization" of Indyk's construction satisfy stronger uncertainty relations than do general mutually unbiased bases. A strong permutation extractor (Definition 2.4) is a small family of permutations of bit strings with the property that for any probability distribution on input bit strings with high min-entropy, applying a typical permutation from the family to the input induces an almost uniform probability distribution on a prefix of the output bits. It is a special kind of randomness extractor, a combinatorial object with many applications to the theory of pseudorandomness and to cryptography; see [Shaltiel (2002), Vadhan (2007)]. Our construction of efficiently computable bases satisfying strong metric uncertainty relations involves an alternating application of approximately mutually unbiased bases and strong permutation extractors. Our approximately mutually unbiased bases consist of sets of single-qubit Hadamard gates. Moreover, we build efficiently computable and invertible permutations that define an extractor using the results of \citeNGUV09.

Even though the idea of combining mutually unbiased bases and extractors comes from [Indyk (2007)], in hindsight, it is very natural from the point of view of quantum cryptography. Measurements in (approximately) unbiased bases are used in almost all quantum cryptographic protocols. The objective of such a step is usually to bound the probability that an adversary can guess the outcome of the associated measurement. Once such a bound is guaranteed, one can distill the randomness produced into almost uniform random bits using a step of privacy amplification which makes use of a randomness extractor. Our quantization of Indyk’s construction can be seen as a repeated “coherent” application of these two steps.

We use these uncertainty relations to build explicit locking schemes whose encoding and decoding operations can be performed by quantum circuits of size almost linear in the length of the message (see Table 1.1). Moreover, we also obtain a locking scheme where both the encoding and decoding operations consist of a classical computation with polynomial runtime and a quantum computation using only a small number of single-qubit Hadamard gates (Corollary 3.2). Performing these quantum operations can in principle be done using the same technology as implementing the BB84 quantum key distribution protocol [Bennett and Brassard (1984)], but our idealized scheme must still be made robust to noise and imperfect devices. It should be noted that for this simple scheme, the message is encoded in a slightly larger quantum system. This locking scheme can be used to obtain string commitment protocols [Buhrman et al. (2008)] that are efficient in terms of computation and communication.

We also give an application of our uncertainty relations to a problem called quantum identification. Quantum identification is a communication task for two parties Alice and Bob, where Alice is given a pure quantum state $|\psi\rangle$ and Bob wants to simulate measurements of the form $\{|\varphi\rangle\langle\varphi|, \mathbb{1} - |\varphi\rangle\langle\varphi|\}$ on $|\psi\rangle$, where $|\varphi\rangle$ is a pure quantum state. This task can be seen as a quantum analogue of the problem of equality testing [Kushilevitz and Nisan (1997)], where Alice and Bob hold $n$-bit strings $x$ and $y$ and Bob wants to determine whether $x = y$ using a one-way classical channel from Alice to Bob. \citeNHW10 showed that classical communication alone is useless for quantum identification. However, having access to a negligible amount of quantum communication makes classical communication useful. Their proof is non-explicit. Here, we describe an efficient encoding circuit that also uses less quantum communication: it allows the identification of an $n$-qubit state by communicating a single message composed of a small number of qubits together with classical bits.

### 1.2 Related work

Aubrun, Szarek and Werner \shortciteASW10,ASW10b used a connection between low-distortion embeddings and quantum information. They show in [Aubrun et al. (2010)] that the existence of large subspaces of highly entangled states follows from Dvoretzky's theorem for the Schatten $1$-norm.

In a cryptographic setting, \citeNDPS04 used ideas related to locking to develop quantum ciphers that have the property that the key used for encryption can be recycled. In [Damgård et al. (2005)], they construct a quantum key recycling scheme (see also [Oppenheim and Horodecki (2005)]) with near optimal parameters by encoding the message together with its authentication tag using a full set of mutually unbiased bases.

### 1.3 Notation and basic facts

We use the following notation throughout the paper. For a positive integer $d$, we define $[d] = \{1, \ldots, d\}$.

#### Probability

Random variables are usually denoted by capital letters $X, Y$, while $p_X$ denotes the distribution of $X$, i.e., $p_X(x) = \Pr[X = x]$. The notation $X \sim p$ means that $X$ has distribution $p$. $\mathrm{unif}(S)$ is the uniform distribution on the set $S$. To measure the distance between probability distributions $p$ and $q$ on a finite set $S$, we use the total variation distance or trace distance $\Delta(p, q) = \frac{1}{2}\sum_{x \in S} |p(x) - q(x)|$. We will also write $\Delta(X, Y)$ for $\Delta(p_X, p_Y)$. When $\Delta(p, q) \leq \varepsilon$, we say that $p$ is $\varepsilon$-close to $q$. A useful characterization of the trace distance is $\Delta(p, q) = \min \Pr[X \neq Y]$, where the minimum is over all pairs of random variables $(X, Y)$ with $X \sim p$ and $Y \sim q$ (this equality is known as Doeblin's coupling lemma [Doeblin (1938)]). Another useful measure of closeness between distributions is the fidelity $F(p, q) = \sum_{x \in S} \sqrt{p(x) q(x)}$, also known as the Bhattacharyya coefficient and related to the Hellinger distance. We have the following relation between the fidelity and the trace distance

$$1 - F(p, q) \;\leq\; \Delta(p, q) \;\leq\; \sqrt{1 - F(p, q)^2}. \tag{2}$$
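The following short Python check (our addition, purely for illustration) verifies both sides of the fidelity/trace-distance relation (2) on random probability vectors:

```python
import math, random

def tv(p, q):
    # total variation (trace) distance between two pmfs
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def fidelity(p, q):
    # F(p, q) = sum_x sqrt(p(x) q(x))
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

random.seed(1)
for _ in range(1000):
    p = [random.random() for _ in range(5)]
    q = [random.random() for _ in range(5)]
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    F, D = fidelity(p, q), tv(p, q)
    # relation (2): 1 - F <= Delta <= sqrt(1 - F^2)
    assert 1 - F <= D + 1e-9
    assert D <= math.sqrt(max(0.0, 1 - F * F)) + 1e-9
```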

The Shannon entropy of a distribution $p$ on $S$ is defined as $H(p) = -\sum_{x \in S} p(x) \log p(x)$, where the $\log$ is taken here and throughout the paper to be base two. We will also write $H(X)$ for $H(p_X)$. The mutual information between two random variables $X$ and $Y$ is defined as $I(X; Y) = H(X) + H(Y) - H(XY)$. The min-entropy of a distribution $p$ is defined as $H_{\min}(p) = -\log \max_{x \in S} p(x)$. We say that a random variable $X$ is a $k$-source if $H_{\min}(p_X) \geq k$. To refer to the $i$-th component of a vector $v$, we usually write $v_i$, except when $v$ already has a subscript, in which case we use $v(i)$. The Hamming weight of a binary vector $v$ (number of ones) is denoted by $w(v)$ and the Hamming distance between two binary vectors $u$ and $v$ (number of components that are different) is written as $d_H(u, v)$.
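These entropic quantities are straightforward to compute; the Python sketch below (added for illustration) checks the definitions of Shannon entropy, mutual information and min-entropy on small examples:

```python
import math

def H(p):
    # Shannon entropy (base 2) of a probability vector
    return -sum(x * math.log2(x) for x in p if x > 0)

def mutual_information(joint):
    # I(X;Y) = H(X) + H(Y) - H(XY) for a joint pmf given as dict {(x, y): p}
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return H(px.values()) + H(py.values()) - H(joint.values())

def min_entropy(p):
    # H_min(p) = -log2 max_x p(x)
    return -math.log2(max(p))

# a fair coin perfectly copied: I(X;Y) = 1 bit
copy = {(0, 0): 0.5, (1, 1): 0.5}
assert abs(mutual_information(copy) - 1.0) < 1e-9
# two independent fair coins: I(X;Y) = 0
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
assert abs(mutual_information(indep)) < 1e-9
# the uniform distribution on 8 outcomes is a 3-source
assert abs(min_entropy([1 / 8] * 8) - 3.0) < 1e-9
```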

#### Quantum mechanics

The state of a pure quantum system is represented by a unit vector in a Hilbert space. Quantum systems are denoted $A, B, \ldots$ and are identified with their corresponding Hilbert spaces. The dimension of a Hilbert space $A$ is denoted by $\dim A$. Every Hilbert space comes with a preferred orthonormal basis that we call the computational basis. The elements of this basis are labeled by integers from $1$ to $\dim A$. For a Hilbert space of the form $(\mathbb{C}^2)^{\otimes n}$, this canonical basis will also be labeled by strings in $\{0, 1\}^n$. $A \cong B$ means that the Hilbert spaces $A$ and $B$ are isomorphic.

To describe a distribution $\{(p_i, |\psi_i\rangle)\}$ over quantum states (also called a mixed state), we use a density operator $\rho = \sum_i p_i |\psi_i\rangle\langle\psi_i|$, where $|\psi\rangle\langle\psi|$ refers to the projector on the line defined by $|\psi\rangle$. A density operator is a Hermitian positive semidefinite operator with unit trace. The density operator associated with a pure state $|\psi\rangle$ is abbreviated by $\psi$, omitting the ket and bra. $\mathcal{S}(A)$ is the set of density operators acting on $A$. The Hilbert space on which a density operator acts is sometimes denoted by a superscript, as in $\rho^{AB}$. This notation is also used for pure states $|\psi\rangle^{AB}$.

In order to describe the joint state of a system, we use the tensor product Hilbert space $A \otimes B$, which is sometimes simply denoted $AB$. If $\rho^{AB}$ describes the joint state on $AB$, the state on the system $A$ is described by the partial trace $\rho^A = \mathrm{tr}_B[\rho^{AB}]$. If $U$ is a unitary acting on $A$, and $|\psi\rangle$ a state in $A \otimes B$, we sometimes use $U|\psi\rangle$ to denote the state $(U \otimes \mathbb{1}^B)|\psi\rangle$, where the symbol $\mathbb{1}$ is reserved for the identity map on $B$.
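As a concrete illustration of the partial trace (our addition, not part of the paper's development), the following minimal Python sketch computes $\mathrm{tr}_B$ for a two-qubit density matrix, using the row-major index convention $(a, b) \mapsto a \cdot \dim B + b$; for a Bell state the reduced state is maximally mixed:

```python
import math

def partial_trace_B(rho, dA, dB):
    # reduced state on A: rho_A[i][j] = sum_b rho[(i, b)][(j, b)]
    return [[sum(rho[i * dB + b][j * dB + b] for b in range(dB))
             for j in range(dA)] for i in range(dA)]

# Bell state (|00> + |11>)/sqrt(2)
h = 1 / math.sqrt(2)
bell = [h, 0.0, 0.0, h]
rho = [[bell[i] * bell[j] for j in range(4)] for i in range(4)]

rhoA = partial_trace_B(rho, 2, 2)
assert abs(rhoA[0][0] - 0.5) < 1e-9 and abs(rhoA[1][1] - 0.5) < 1e-9
assert abs(rhoA[0][1]) < 1e-9   # no off-diagonal coherence survives
```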

The most general way to obtain classical information from a quantum state is by performing a measurement. A measurement is described by a positive operator-valued measure (POVM), which is a set $\{M_i\}$ of positive semidefinite operators that sum to the identity. If the state of the quantum system is represented by the density operator $\rho$, the probability of observing the outcome labeled $i$ is $\mathrm{tr}[M_i \rho]$ for all $i$. For a state $|\psi\rangle$, $p_B(|\psi\rangle)$ denotes the distribution of the outcomes of the measurement of $|\psi\rangle$ in the basis $B = \{|b_1\rangle, |b_2\rangle, \ldots\}$. We have $p_B(|\psi\rangle)(i) = |\langle b_i|\psi\rangle|^2$. Similarly, for a mixed state $\rho$, we define $p_B(\rho)(i) = \langle b_i|\rho|b_i\rangle$.

The trace distance between density operators $\rho$ and $\sigma$ acting on $A$ is defined by $\Delta(\rho, \sigma) = \frac{1}{2}\mathrm{tr}|\rho - \sigma|$. The von Neumann entropy of a quantum state $\rho$ is defined by $H(\rho) = -\mathrm{tr}[\rho \log \rho]$. It will also be denoted $H(A)_\rho$. For a bipartite state $\rho^{AB}$, the quantum mutual information is $I(A; B)_\rho = H(A)_\rho + H(B)_\rho - H(AB)_\rho$. We use Fannes' inequality [Fannes (1973)], or more precisely an improvement by \citeNAud07, which states that for any states $\rho$ and $\sigma$ on $A$,

$$|H(\rho) - H(\sigma)| \;\leq\; \Delta \log(\dim A - 1) + h(\Delta), \tag{3}$$

with $\Delta = \Delta(\rho, \sigma)$ and $h(x) = -x \log x - (1 - x)\log(1 - x)$ the binary entropy function.
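For commuting (diagonal) density operators the entropies and the trace distance reduce to their classical counterparts, so the Fannes-Audenaert bound (3) can be spot-checked numerically. The following sketch (our addition) does this for random spectra, restricting to the range $\Delta \leq 1 - 1/d$ in which the bound is usually stated:

```python
import math, random

def H(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def h2(x):
    # binary entropy function
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

random.seed(2)
d = 6
for _ in range(1000):
    p = [random.random() for _ in range(d)]; s = sum(p); p = [x / s for x in p]
    q = [random.random() for _ in range(d)]; s = sum(q); q = [x / s for x in q]
    T = 0.5 * sum(abs(a - b) for a, b in zip(p, q))   # trace distance of spectra
    if T <= 1 - 1 / d:
        # Fannes-Audenaert: |H(rho) - H(sigma)| <= T log(d-1) + h(T)
        assert abs(H(p) - H(q)) <= T * math.log2(d - 1) + h2(T) + 1e-9
```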

## 2 Uncertainty relations

**Outline of the section** In this section, we start by introducing uncertainty relations and setting up some notation (Section 2.1). Then, we define metric uncertainty relations in Section 2.2. In Section 2.3, we prove the existence of strong metric uncertainty relations. Explicit constructions are given in Section 2.4.

### 2.1 Background

Consider a set $\mathcal{B} = \{B_1, \ldots, B_t\}$ of orthonormal bases of the Hilbert space $A$. Each basis $B_k = \{|b^k_i\rangle\}_i$ defines a measurement on $A$. The outcomes of these measurements are indexed by $[\dim A]$. The outcome distribution when the measurement $B_k$ is performed on the state $|\psi\rangle$ is defined by
$p_{B_k}(|\psi\rangle)(i) = |\langle b^k_i|\psi\rangle|^2$
for all $i \in [\dim A]$.
An uncertainty relation for a set $\mathcal{B}$ of orthonormal bases expresses the property that for any state $|\psi\rangle$, there are some measurements in $\mathcal{B}$ whose outcomes given state $|\psi\rangle$ have some uncertainty. A common way of quantifying this uncertainty is by using the Shannon entropy. The set $\mathcal{B}$ of bases is said to satisfy an *entropic uncertainty relation* if there exists a positive number $h$ such that for all states $|\psi\rangle$,

$$\frac{1}{t}\sum_{k=1}^{t} H\big(p_{B_k}(|\psi\rangle)\big) \;\geq\; h.$$

Note that by choosing a state $|\psi\rangle$ from the basis $B_1$, we obtain $H(p_{B_1}(|\psi\rangle)) = 0$. As $H(p_{B_k}(|\psi\rangle)) \leq \log \dim A$ for every $k$, this implies that $h$ cannot be larger than $(1 - \frac{1}{t})\log \dim A$.

It is more convenient here to talk about uncertainty relations for a set of unitary transformations. Let $\{|i\rangle\}_{i \in [\dim A]}$ be the computational basis of $A$. We associate to the unitary transformation $U_k$ the basis $\{U_k|i\rangle\}_{i \in [\dim A]}$. On a state $|\psi\rangle$, the outcome distribution is described by

$$p_{U_k}(|\psi\rangle)(i) = |\langle i|U_k^\dagger|\psi\rangle|^2.$$

As can be seen from this equation, we can equivalently talk about measuring the state $U_k^\dagger|\psi\rangle$ in the computational basis. An entropic uncertainty relation for $\mathcal{U} = \{U_1, \ldots, U_t\}$ can be written as

$$\frac{1}{t}\sum_{k=1}^{t} H\big(p_{U_k}(|\psi\rangle)\big) \;\geq\; h \quad \text{for all states } |\psi\rangle. \tag{4}$$

Entropic uncertainty relations have been used to prove the security of quantum key distribution and of cryptographic protocols in the bounded and noisy quantum storage models [Tomamichel and Renner (2011), Damgård et al. (2007), Berta et al. (2012)]. Other applications of entropic uncertainty relations are given in Section 3. For more details on entropic uncertainty relations and their applications, see the recent survey [Wehner and Winter (2010)].

### 2.2 Metric uncertainty relations

Here, instead of using the entropy as a measure of uncertainty, we use closeness to the uniform distribution. In other words, we are interested in sets of unitary transformations $\{U_1, \ldots, U_t\}$ that for all states $|\psi\rangle$ satisfy

$$\frac{1}{t}\sum_{k=1}^{t} \Delta\big(p_{U_k}(|\psi\rangle),\ \mathrm{unif}([\dim A])\big) \;\leq\; \varepsilon$$

for some $\varepsilon \in (0, 1)$. $\Delta$ refers to the total variation distance between the distributions $p_{U_k}(|\psi\rangle)$ and $\mathrm{unif}([\dim A])$. This condition is very strong, in fact too strong for our purposes, and we will see that a weaker definition is sufficient to imply entropic uncertainty relations. Let $\mathcal{H} = A \otimes B$. (For example, if $\mathcal{H}$ consists of $n$ qubits, $A$ might represent the first $n_A$ qubits and $B$ the last $n - n_A$ qubits.) Moreover, let the computational basis for $\mathcal{H}$ be of the form $\{|a\rangle \otimes |b\rangle\}$ where $\{|a\rangle\}$ and $\{|b\rangle\}$ are the computational bases of $A$ and $B$. Instead of asking for the outcome of the measurement in the computational basis of the whole space to be uniform, we only require that the outcome of a measurement of the $A$ system in its computational basis be close to uniform. More precisely, we define for $a \in [\dim A]$,

$$p^A_{U}(|\psi\rangle)(a) = \sum_{b} p_{U}(|\psi\rangle)(a, b).$$

We can then define a metric uncertainty relation. Naturally, the larger the $A$ system, the stronger the uncertainty relation for a fixed total system. {definition}[Metric uncertainty relation] Let $A$ and $B$ be Hilbert spaces. We say that a set $\mathcal{U} = \{U_1, \ldots, U_t\}$ of unitary transformations on $A \otimes B$ satisfies an $\varepsilon$-*metric uncertainty relation* on $A$ if for all states $|\psi\rangle \in A \otimes B$,

$$\frac{1}{t}\sum_{k=1}^{t} \Delta\big(p^A_{U_k}(|\psi\rangle),\ \mathrm{unif}([\dim A])\big) \;\leq\; \varepsilon. \tag{5}$$

Observe that this implies, by convexity of the trace distance, that (5) also holds for mixed states: for any $\rho \in \mathcal{S}(A \otimes B)$,

$$\frac{1}{t}\sum_{k=1}^{t} \Delta\big(p^A_{U_k}(\rho),\ \mathrm{unif}([\dim A])\big) \;\leq\; \varepsilon.$$
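As a toy illustration of the quantities appearing in the metric uncertainty relation (5) (our addition, not a construction from the paper), the following Python sketch computes $\Delta(p^A_U(|\psi\rangle), \mathrm{unif})$ for the two-element set $\{\mathbb{1}, H \otimes H\}$ on two qubits, with $A$ the first qubit. For the state $|00\rangle$ the two distances are $1/2$ and $0$, so the average is $1/4$:

```python
import math

def kron(A, B):
    # Kronecker product of two 2x2 matrices
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def conj_apply(U, psi):
    # returns U^dagger |psi>
    n = len(psi)
    return [sum(U[i][j].conjugate() * psi[i] for i in range(n)) for j in range(n)]

def marginal_A(psi, dA, dB):
    # p^A(a) = sum_b |psi_{ab}|^2, index convention (a, b) -> a*dB + b
    return [sum(abs(psi[a * dB + b]) ** 2 for b in range(dB)) for a in range(dA)]

def tv_to_uniform(p):
    d = len(p)
    return 0.5 * sum(abs(x - 1 / d) for x in p)

h = 1 / math.sqrt(2)
H1 = [[h, h], [h, -h]]                 # single-qubit Hadamard
I2 = [[1.0, 0.0], [0.0, 1.0]]
bases = [kron(I2, I2), kron(H1, H1)]   # U_1 = identity, U_2 = H (x) H

psi = [1.0, 0.0, 0.0, 0.0]             # the state |00>
deltas = [tv_to_uniform(marginal_A(conj_apply(U, psi), 2, 2)) for U in bases]
avg = sum(deltas) / len(deltas)

assert abs(deltas[0] - 0.5) < 1e-9     # computational basis: marginal is deterministic
assert abs(deltas[1] - 0.0) < 1e-9     # Hadamard basis: marginal is exactly uniform
assert abs(avg - 0.25) < 1e-9
```

This tiny pair of bases is of course far from a strong metric uncertainty relation; it only shows the quantities being averaged.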

**Metric uncertainty relations imply entropic uncertainty relations** In the next proposition, we show that a metric uncertainty relation gives rise to an entropic uncertainty relation. It is worth stressing that there are no restrictions on measurements.
{proposition}
Let $\varepsilon \in (0, 1)$ and $\mathcal{U} = \{U_1, \ldots, U_t\}$ be a set of unitaries on $A \otimes B$ satisfying an $\varepsilon$-metric uncertainty relation on $A$:

$$\frac{1}{t}\sum_{k=1}^{t} \Delta\big(p^A_{U_k}(|\psi\rangle),\ \mathrm{unif}([\dim A])\big) \;\leq\; \varepsilon \quad \text{for all states } |\psi\rangle.$$

Then

$$\frac{1}{t}\sum_{k=1}^{t} H\big(p_{U_k}(|\psi\rangle)\big) \;\geq\; (1 - \varepsilon)\log \dim A - h(\varepsilon),$$

where $h$ is the binary entropy function. {proof} Recall that the distribution $p^A_{U_k}(|\psi\rangle)$ (see equation (5) for a definition) on $[\dim A]$ is a marginal of the distribution $p_{U_k}(|\psi\rangle)$. Thus $H(p_{U_k}(|\psi\rangle)) \geq H(p^A_{U_k}(|\psi\rangle))$. Using the Fannes-Audenaert inequality (3), we have for all $k$

$$H\big(p^A_{U_k}(|\psi\rangle)\big) \;\geq\; (1 - \Delta_k)\log \dim A - h(\Delta_k), \quad \text{where } \Delta_k = \Delta\big(p^A_{U_k}(|\psi\rangle),\ \mathrm{unif}([\dim A])\big).$$

By averaging over $k$ and using the concavity of $h$, we get the desired result.

**Explicit link to low-distortion embeddings** Even though we do not explicitly use the link to low-distortion embeddings, we describe the connection as it might have other applications. In the definition of metric uncertainty relations, the distance between distributions was computed using the trace distance. The connection to low-distortion metric embeddings is clearer when we measure closeness of distributions using the fidelity. We have

$$F\big(p^A_U(|\psi\rangle),\ \mathrm{unif}([\dim A])\big) \;=\; \frac{1}{\sqrt{\dim A}}\,\big\| U^\dagger|\psi\rangle \big\|_{\ell_1(\ell_2)}, \tag{6}$$

where the $\ell_1(\ell_2)$ norm is defined by {definition}[$\ell_1(\ell_2)$ norm] For a state $|\psi\rangle \in A \otimes B$,

$$\big\| |\psi\rangle \big\|_{\ell_1(\ell_2)(A,B)} \;=\; \sum_{a} \sqrt{\sum_{b} |\langle ab|\psi\rangle|^2}.$$

We use $\|\cdot\|_{\ell_1(\ell_2)}$ when the systems $A$ and $B$ are clear from the context. Observe that this definition of the $\ell_1(\ell_2)$ norm depends on the choice of the computational basis. The $\ell_1(\ell_2)$ norm will always be taken with respect to the computational bases.
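In code, the $\ell_1(\ell_2)$ norm of a vector on $A \otimes B$ is a one-liner (the sketch below is our addition, using the row-major index convention $(a, b) \mapsto a \cdot \dim B + b$); it checks the extreme values $1$ and $\sqrt{\dim A}$, attained by a computational basis state and the uniform superposition respectively:

```python
import math

def l1l2(psi, dA, dB):
    # || psi ||_{l1(l2)} = sum_a sqrt( sum_b |psi_{ab}|^2 )
    return sum(math.sqrt(sum(abs(psi[a * dB + b]) ** 2 for b in range(dB)))
               for a in range(dA))

dA, dB = 4, 2
# a computational basis state: smallest possible value, 1
e0 = [1.0] + [0.0] * (dA * dB - 1)
assert abs(l1l2(e0, dA, dB) - 1.0) < 1e-9
# the uniform superposition: largest possible value, sqrt(dA)
u = [1 / math.sqrt(dA * dB)] * (dA * dB)
assert abs(l1l2(u, dA, dB) - math.sqrt(dA)) < 1e-9
```

The upper bound $\sqrt{\dim A}$ for unit vectors is exactly the Cauchy-Schwarz inequality applied to the $\dim A$ outer terms.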

For $\mathcal{U} = \{U_1, \ldots, U_t\}$ to satisfy an uncertainty relation, we want the average of the fidelities, i.e., of the norms $\frac{1}{\sqrt{\dim A}}\big\|U_k^\dagger|\psi\rangle\big\|_{\ell_1(\ell_2)}$, to be close to one for all states $|\psi\rangle$.

This expression can be rewritten by introducing a new register $C$ of dimension $t$ that holds the index $k$. We get for all $|\psi\rangle$, by writing $V : |\psi\rangle \mapsto \frac{1}{\sqrt{t}}\sum_{k=1}^{t} |k\rangle^C \otimes U_k^\dagger|\psi\rangle$,

$$\big\| V|\psi\rangle \big\|_{\ell_1(\ell_2)(CA,B)} \;=\; \frac{1}{\sqrt{t}}\sum_{k=1}^{t} \big\| U_k^\dagger|\psi\rangle \big\|_{\ell_1(\ell_2)(A,B)}. \tag{7}$$

Using the Cauchy-Schwarz inequality, which in this context reads $\|x\|_{\ell_1(\ell_2)(CA,B)} \leq \sqrt{t \dim A}\,\|x\|_{\ell_2}$ for any $x$, we have that for all $|\psi\rangle$,

$$(1 - \varepsilon)\sqrt{t \dim A} \;\leq\; \big\| V|\psi\rangle \big\|_{\ell_1(\ell_2)(CA,B)} \;\leq\; \sqrt{t \dim A}; \tag{8}$$

we see that the image of $\mathcal{H}$ by the linear map $V$ is an almost Euclidean subspace of $\ell_1(\ell_2)$. In other words, as the map $V$ is an isometry (in the $\ell_2$ sense), it is an embedding of $\ell_2$ into $\ell_1(\ell_2)$ with distortion at most $(1 - \varepsilon)^{-1}$ [Matoušek (2002)].

Observe that a general low-distortion embedding of $\ell_2$ into $\ell_1(\ell_2)$ does not necessarily give a metric uncertainty relation as it need not be of the form $|\psi\rangle \mapsto \frac{1}{\sqrt{t}}\sum_k |k\rangle \otimes U_k^\dagger|\psi\rangle$. When $t = 2$ and $B$ is trivial, a metric uncertainty relation is related to the notion of Kashin decomposition [Kashin (1977)]; see also [Pisier (1989), Szarek (2006)].

**A remark on the composition of metric uncertainty relations** There is a natural way of building an uncertainty relation for a Hilbert space from uncertainty relations on smaller Hilbert spaces. This composition property is also important for the cryptographic applications of metric uncertainty relations presented in the second half of the paper, in which setting it ensures the security of parallel composition of locking-based encryption.
{proposition}
Consider Hilbert spaces $A_1$, $B_1$, $A_2$, $B_2$. For $i \in \{1, 2\}$, let $\mathcal{U}_i = \{U_{i,1}, \ldots, U_{i,t_i}\}$ be a set of unitary transformations of $A_i \otimes B_i$ satisfying an $\varepsilon_i$-metric uncertainty relation on $A_i$.
Then, $\mathcal{U} = \{U_{1,k_1} \otimes U_{2,k_2} : k_1 \in [t_1], k_2 \in [t_2]\}$ satisfies an $(\varepsilon_1 + \varepsilon_2)$-metric uncertainty relation on $A_1 \otimes A_2$.
{proof}
Let $|\psi\rangle \in A_1 \otimes B_1 \otimes A_2 \otimes B_2$ and let $p_{k_1,k_2}$ denote the distribution obtained by measuring $(U_{1,k_1} \otimes U_{2,k_2})^\dagger|\psi\rangle$ in the computational basis of $A_1 \otimes A_2$.
Our objective is to show that

$$\frac{1}{t_1 t_2}\sum_{k_1=1}^{t_1}\sum_{k_2=1}^{t_2} \Delta\big(p_{k_1,k_2},\ \mathrm{unif}([\dim A_1] \times [\dim A_2])\big) \;\leq\; \varepsilon_1 + \varepsilon_2. \tag{9}$$

We have, by the triangle inequality,

$$\Delta\big(p_{k_1,k_2},\ \mathrm{unif}\big) \;\leq\; \Delta\big(p_{k_1,k_2},\ q_{k_1} \times \mathrm{unif}([\dim A_2])\big) + \Delta\big(q_{k_1},\ \mathrm{unif}([\dim A_1])\big), \tag{10}$$

where $q_{k_1}$ is the outcome distribution of measuring the $A_1$ system of $(U_{1,k_1} \otimes \mathbb{1})^\dagger|\psi\rangle$. The distribution $q_{k_1}$ can also be seen as the outcome of measuring the mixed state

$$\rho_{k_1}^{A_1 B_1} = \mathrm{tr}_{A_2 B_2}\big[(U_{1,k_1}^\dagger \otimes \mathbb{1})\,\psi\,(U_{1,k_1} \otimes \mathbb{1})\big]$$

in the computational basis of $A_1$. Thus, as the metric uncertainty relation for $\mathcal{U}_1$ extends to mixed states, we have

$$\frac{1}{t_1}\sum_{k_1=1}^{t_1} \Delta\big(q_{k_1},\ \mathrm{unif}([\dim A_1])\big) \;\leq\; \varepsilon_1.$$

Moreover, for fixed $k_1$ and $a_1$, the distribution on $[\dim A_2]$ defined by $a_2 \mapsto p_{k_1,k_2}(a_1, a_2)/q_{k_1}(a_1)$ is the outcome distribution of measuring in the computational basis of $A_2$ the state $(U_{2,k_2}^\dagger \otimes \mathbb{1})\,\rho_{a_1}\,(U_{2,k_2} \otimes \mathbb{1})$, where $\rho_{a_1}$ is the density operator describing the state of the system $A_2 \otimes B_2$ given that the outcome of the measurement of the $A_1$ system is $a_1$. We can now use the fact that $\mathcal{U}_2$ satisfies a metric uncertainty relation to bound the average over $k_2$ of the first term of (10) by $\varepsilon_2$. Taking the average over $k_1$ and $k_2$ in equation (10), we get (9).

This observation is in the same spirit as [Indyk and Szarek (2010), Proposition 1], and can in fact be used to build large almost Euclidean subspaces of $\ell_1(\ell_2)$.

### 2.3 Metric uncertainty relations: existence

In this section, we prove the existence of families of unitary transformations satisfying strong uncertainty relations. The proof proceeds by showing that choosing random unitaries according to the Haar measure defines a metric uncertainty relation with positive probability. The techniques used are quite standard and date back to Milman's proof of Dvoretzky's theorem [Milman (1971), Figiel et al. (1977)]. In fact, using the connection to embeddings of $\ell_2$ into $\ell_1(\ell_2)$ presented in the previous section, this existential theorem can be viewed as a strengthening of Dvoretzky's theorem for the $\ell_1$ norm [Milman and Schechtman (1986)]. Explicit constructions of uncertainty relations are presented in the next section.

In order to use metric uncertainty relations to build quantum hiding fingerprints, we require an additional property for $\mathcal{U}$. A set $\{U_1, \ldots, U_t\}$ of unitary transformations of $\mathcal{H}$ is said to define $\gamma$-*approximately mutually unbiased bases* ($\gamma$-MUBs) if for all elements $|i\rangle$ and $|j\rangle$ of the computational basis and all $k \neq k'$, we have

$$\big|\langle i|U_k^\dagger U_{k'}|j\rangle\big| \;\leq\; \frac{1 + \gamma}{\sqrt{\dim \mathcal{H}}}. \tag{11}$$

$0$-MUBs correspond to the usual notion of mutually unbiased bases.
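The canonical example of an exactly unbiased pair is the computational basis together with the Hadamard basis on $n$ qubits, since $\langle a|H^{\otimes n}|b\rangle = (-1)^{a \cdot b}/2^{n/2}$ for bit strings $a, b$. The following sketch (our addition) verifies this for $n = 3$:

```python
import math
from itertools import product

n = 3
d = 2 ** n
h = 1 / math.sqrt(2)

def had_entry(a, b):
    # <a| H^{(x)n} |b> = (-1)^{a.b} / 2^{n/2} for bit strings a, b
    dot = sum(x * y for x, y in zip(a, b)) % 2
    return ((-1) ** dot) * h ** n

for a in product((0, 1), repeat=n):
    for b in product((0, 1), repeat=n):
        # every pair of vectors from the two bases has |inner product|^2 = 1/d
        assert abs(abs(had_entry(a, b)) ** 2 - 1 / d) < 1e-12
```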

[Existence of metric uncertainty relations] Let $\varepsilon > 0$, and let $A$ and $B$ be Hilbert spaces of suitable dimensions. Then, for $t$ large enough, there exists a set $\mathcal{U} = \{U_1, \ldots, U_t\}$ of unitary transformations of $A \otimes B$ satisfying an $\varepsilon$-metric uncertainty relation on $A$: for all states $|\psi\rangle$,

Moreover, for $\gamma$ and dimensions satisfying a mild additional condition, the unitaries can be chosen to also form $\gamma$-MUBs. {myremark}[] The proof proceeds by choosing a set of unitary transformations at random. See (14) and (15) for a precise bound on the probability that such a set does not form a metric uncertainty relation or a $\gamma$-MUB. {proof} The basic idea is to evaluate the expected value of $\Delta(p^A_U(|\psi\rangle), \mathrm{unif}([\dim A]))$ for a fixed state $|\psi\rangle$ when $U$ is a random unitary chosen according to the Haar measure. Then, we use a concentration argument to show that with high probability, this distance is close to its expected value. After this step, we show that the additional averaging over $t$ independent copies results in additional concentration at a rate that depends on $t$. We conclude by showing the existence of a family of unitaries that makes this expression small for all states using a union bound over a net of states. The main ingredients of the proof are stated here but only proved in Appendix A.

We start by computing the expected value of the fidelity $F(p^A_U(|\psi\rangle), \mathrm{unif}([\dim A]))$, which by equation (6) can be seen as an $\ell_1(\ell_2)$ norm.

[Expected value of the $\ell_1(\ell_2)$ norm over the sphere] Let $|\psi\rangle$ be a random pure state on $A \otimes B$. Then,

We then use relation (2) between the fidelity and the trace distance to get

By the concavity of the square root function on the interval $[0, 1]$,

The last inequality comes from the hypothesis of the theorem on the dimensions. In other words, for any fixed $|\psi\rangle$, the average over $U$ of the trace distance between $p^A_U(|\psi\rangle)$ and the uniform distribution is small. The next step is to evaluate the probability of a large deviation from this average. This is done using a concentration inequality on the product of spheres. For completeness, an approach that is more elementary is presented in the appendix. While this second approach is more elementary, it leads to slightly worse constants and additional technical constraints on the relation between the dimensions and the number of bases. However, these constraints do not substantially affect our conclusion.
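The phenomenon used in this step, namely that the $A$-marginal of measuring a Haar-random state is typically very close to uniform when $B$ is large, is easy to observe numerically. The following Monte Carlo sketch (our addition; the threshold $0.1$ is an illustrative choice, not a bound from the proof) estimates the average trace distance for $\dim A = 2$, $\dim B = 64$:

```python
import math, random

random.seed(3)
dA, dB = 2, 64
d = dA * dB

def random_state(d):
    # Haar-random pure state: normalized vector of complex Gaussians
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    nrm = math.sqrt(sum(abs(x) ** 2 for x in v))
    return [x / nrm for x in v]

total = 0.0
trials = 200
for _ in range(trials):
    psi = random_state(d)
    # marginal on A of the computational-basis measurement
    pA = [sum(abs(psi[a * dB + b]) ** 2 for b in range(dB)) for a in range(dA)]
    total += 0.5 * sum(abs(x - 1 / dA) for x in pA)

assert total / trials < 0.1   # marginal on A is typically very close to uniform
```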

[Concentration inequality on the product of spheres] Let and be such that for all pure states and in ,

Let be independent random pure states in dimension . Then for all ,

where $c$ is a universal constant. {proof} We start by applying Example 6.5.2 of [Milman and Schechtman (1986)] to obtain concentration around the median. Then to prove concentration around the mean, we can use the general Proposition V.4, also from [Milman and Schechtman (1986)]. We apply this concentration result to the function at hand. We start by finding an upper bound on the Lipschitz constant