Quantum Information Processing with Continuous Variables and Atomic Ensembles


January 2011

Quantum information theory promises many advances in science and technology. This thesis presents three different results in quantum information theory.

The first result addresses the theoretical foundations of quantum metrology. It is now well known that quantum-enhanced metrology promises improved sensitivity in parameter estimation over classical measurement procedures. The Heisenberg limit is considered to be the ultimate limit in quantum metrology imposed by the laws of quantum mechanics. It sets a lower bound on how precisely a physical quantity can be measured given a certain amount of resources in any possible measurement. Recently, however, several measurement procedures have been proposed in which the Heisenberg limit seemed to be surpassed. This led to an extensive debate over the question of how the sensitivity scales with the physical resources, such as the average photon number, and the computational resources, such as the number of queries, that are used in estimation procedures. Here, we reconcile the physical definition of the relevant resources used in parameter estimation with the information-theoretical scaling in terms of the query complexity of a quantum network. This leads to a novel and ultimate Heisenberg limit that applies to all conceivable measurement procedures. Our approach to quantum metrology not only resolves the aforementioned paradoxical situations, but also strengthens the connection between physics and computer science.

A clear connection between physics and computer science is also present in the other results. The second result reveals a close relationship between quantum metrology and the Deutsch-Jozsa algorithm over continuous-variable quantum systems. The Deutsch-Jozsa algorithm, being one of the first quantum algorithms, embodies the remarkable computational capabilities offered by quantum information processing. Here, we develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter estimation protocol or the Deutsch-Jozsa algorithm. The procedure estimates the value of an unknown parameter with Heisenberg-limited precision or solves the Deutsch-Jozsa problem in a single run without the use of any entanglement.

The third result illustrates how the physical principles that govern the interaction of light and matter can be efficiently employed to create a computational resource for a (one-way) quantum computer. More specifically, we theoretically demonstrate a scheme, based on atomic ensembles and the dipole blockade mechanism, for the generation of so-called cluster states in a single step. The entangling protocol requires nearly identical single-photon sources, one ultra-cold ensemble per physical qubit, and regular photodetectors. This procedure is significantly more efficient than any known robust probabilistic entangling operation.





I, MARCIN ZWIERZ, declare that the work presented in this thesis, except where otherwise stated, is based on my own research and has not been submitted previously for a degree at this or any other university. Parts of the work reported in this thesis have been published as follows:


  1. M. Zwierz, C. A. Pérez-Delgado, and P. Kok. General optimality of the Heisenberg limit for quantum metrology. Phys. Rev. Lett. 105, 180402 (2010)

  2. M. Zwierz, C. A. Pérez-Delgado, and P. Kok. Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables. Phys. Rev. A 82, 042320 (2010)

  3. M. Zwierz and P. Kok. Applications of atomic ensembles in distributed quantum computing. International Journal of Quantum Information 8, 181-218 (2010)

  4. M. Zwierz and P. Kok. High-efficiency cluster-state generation with atomic ensembles via the dipole-blockade mechanism. Phys. Rev. A 79, 022304 (2009)







I am truly grateful to my supervisors Pieter Kok and Stefan Weigert for all their patient help, guidance, encouragement and support. This gratitude is also warmly extended to my collaborator Carlos Pérez-Delgado for all his valuable suggestions. Many special thanks to the people who made my time in Sheffield particularly enjoyable: Frank Bello, Andrew Carter, Alexander Chalcraft, Christopher Duffy, Entesar Ganash, Domnic Hosler, Carlos Pérez-Delgado, Mark Pogson, Nusrat Rafique, Andrew Ramsay, Samantha Walker and friends from the Department of Geography. I would also like to express my deepest gratitude to my wife Agnieszka and my parents for their love and endless support. Finally, I would like to thank the White Rose Foundation for funding my programme of study.
List of Figures

List of Symbols and Abbreviations

CX – Controlled-X
CZ – Controlled-Z
EPR – Einstein-Podolsky-Rosen
POVM – Positive Operator-Valued Measure
SQL – Standard Quantum Limit
CVs – Continuous Variables
MOT – Magneto-Optical Trapping
BEC – Bose-Einstein Condensate
GLM – Giovannetti-Lloyd-Maccone
BFCG – Boixo-Flammia-Caves-Geremia
RB – Roy-Braunstein
DJ – Deutsch-Jozsa
EIT – Electromagnetically Induced Transparency
STIRAP – Stimulated Raman Adiabatic Passage
DLCZ – Duan-Lukin-Cirac-Zoller
DQC – Distributed Quantum Computing
GHZ – Greenberger-Horne-Zeilinger
HOM – Hong-Ou-Mandel
SPDC – Spontaneous Parametric Down-Conversion




To Agnieszka and my parents


Part I Introduction

Chapter 1 Quantum Information Processing

1 Introduction

Quantum information theory is a novel branch of science that exploits the remarkable features of quantum mechanics to store, manipulate and transfer information in ways that are unattainable to any classical device. It is arguably one of the most exciting branches of science and promises a huge impact on many other disciplines. Quantum information theory lies at the intersection of theoretical and experimental physics and computer science. Thus, the impact it may have on these disciplines is quite clear. Surprisingly, quantum effects also seem to play a large role in some phenomena in biology, such as the light-harvesting complexes that are capable of efficiently transmitting a single quantum of light over a relatively long distance, or the avian compass that birds use to navigate in the magnetic field of the Earth. Therefore, a fundamentally deeper understanding of some biological systems may be impossible without an “insight” from the field of quantum information. Also, molecular chemistry could be greatly influenced by quantum information science if we were able to build a quantum simulator that would allow us to study the behaviour of complex molecules. Quantum computation over discrete or continuous-variable quantum systems is the main field of quantum information theory. Quantum phenomena can also be harnessed to perform measurements of physical quantities with a precision inaccessible to any classical device.

The organisation of Chapter I reflects the order in which the different subjects are introduced in the remaining chapters. In Sec. 2, we recall basic notions of quantum computation such as the qubit and the quantum gate. In Sec. 3, we introduce a more practical and less abstract form of quantum computation, namely distributed quantum computation. Distributed quantum computation is closely related to quantum communication. This relation is so close that the two are often perceived as two sides of the same coin: if one can establish a quantum network and transfer information between its nodes, one can perform quantum computation. In the same section, we present the measurement-based model of quantum computation, which can be implemented in a distributed manner. In Sec. 4, we review the basic foundations of quantum metrology, an important discipline of quantum information theory that is concerned with high-precision measurements. In Sec. 5, we introduce an alternative to quantum computation based on discrete quantum systems (qubits), namely continuous-variable quantum computation. In this section, we review basic properties of continuous quantum systems and present some basic continuous-variable quantum gates. Finally, in Sec. 6, we introduce the concept of an atomic ensemble, a physical system that can be used in distributed quantum computation.

2 Quantum computation

The construction of a quantum computer is an important goal of modern science, which requires an effort from both experimental and theoretical physicists, and from quantum computer scientists. A quantum computer is a computing device whose operation is based on the principles of quantum mechanics. The quantum computer exploits the non-classical and counterintuitive phenomena of quantum mechanics, such as superposition, entanglement, quantum interference and quantum measurement, to perform some computations more efficiently than any classical computer [2]. The basic unit of information for a quantum computer is called a quantum bit or qubit. A qubit is an abstraction of a two-dimensional quantum system that consists of two addressable quantum states, the so-called basis states |0⟩ and |1⟩, which form the computational basis. A qubit is represented as a vector that lives in a two-dimensional Hilbert space. The |0⟩ and |1⟩ states are analogous to the 0 and 1 of a classical bit. In contrast with classical bits, qubits can exist in any superposition of the basis states, |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers called amplitudes that obey |α|² + |β|² = 1. This is the so-called superposition principle. A qubit can exist in a superposition of both basis states until we try to observe it by performing a measurement. By means of a measurement, we find the qubit in one of the basis states with a probability given by the modulus squared of the corresponding amplitude: |α|² for the state |0⟩ and |β|² for the state |1⟩. For the state (|0⟩ + |1⟩)/√2, the qubit has an equal probability of 50% of being found in the |0⟩ or |1⟩ state. Therefore, if we repeat the measurement in the computational basis many times, on average half of the outcomes will yield a classical value of either 0 or 1, and the state of the qubit will be collapsed to the basis state |0⟩ or |1⟩, respectively. The superposition principle applies not only to a single qubit but to many qubits as well. In the case of two qubits |ψ₁⟩ = α₁|0⟩ + β₁|1⟩ and |ψ₂⟩ = α₂|0⟩ + β₂|1⟩, the state of the composite system is given by the tensor product of the form

|ψ₁⟩ ⊗ |ψ₂⟩ = α₁α₂|00⟩ + α₁β₂|01⟩ + β₁α₂|10⟩ + β₁β₂|11⟩.
The two-qubit composite state is a vector in a 4-dimensional Hilbert space. Naturally, this reasoning generalises to any number of qubits. The most intriguing kind of composite states in quantum mechanics are the so-called entangled states. One of the entangled states of two qubits is given by

|Φ⁺⟩ = (|00⟩ + |11⟩)/√2.
This state, together with three other two-qubit entangled states, forms the so-called Bell basis. What is so special about entangled states? First of all, an entangled state cannot be factored into a tensor product: |Φ⁺⟩ ≠ |ψ₁⟩ ⊗ |ψ₂⟩ for any single-qubit states |ψ₁⟩ and |ψ₂⟩. Furthermore, one may notice that if the measurement of the first qubit yields 0, then the state of the second qubit is instantaneously collapsed to |0⟩. The same occurs for the outcome 1, which collapses the second qubit to |1⟩. For the entangled state |Φ⁺⟩, the measurement results are therefore perfectly correlated. Although the qubits may be separated by a large distance, their behaviour is in some sense synchronised, i.e., the measurement of one of them instantaneously affects the state of the other. This non-local character, or the so-called “spooky action at a distance”, of the Bell pair is called entanglement. In order to show that the qubits share nonclassical correlations, that is, that they are entangled, we also need perfect correlations in the {|+⟩, |−⟩} basis, where |±⟩ = (|0⟩ ± |1⟩)/√2. The true importance of entanglement is still unclear; however, it is considered essential for quantum computation [3, 4]. In fact, many “quantum tricks” such as quantum teleportation or superdense coding rely heavily on entangled states.
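As an illustrative aside (a numerical sketch in Python with NumPy, not part of the discussion above), the perfect correlations and non-factorizability of the Bell state can be checked directly: anti-correlated outcomes have zero probability, the correlations persist in the {|+⟩, |−⟩} basis, and the reduced state of either qubit is maximally mixed, which is impossible for any product state:

```python
import numpy as np

# Computational basis states |0> and |1>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state (|00> + |11>)/sqrt(2).
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Outcome probabilities in the computational basis: only 00 and 11 occur,
# so the measurement results on the two qubits are perfectly correlated.
probs = np.abs(bell) ** 2          # [p00, p01, p10, p11]

# The correlations survive in the |+>, |-> basis: (H ⊗ H)|Φ+> = |Φ+>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
bell_x = np.kron(H, H) @ bell

# Non-factorizability: the reduced density matrix of qubit 1 (partial
# trace over qubit 2) is maximally mixed, so its purity Tr(rho^2) is 1/2.
rho = np.outer(bell, bell).reshape(2, 2, 2, 2)
rho_1 = np.trace(rho, axis1=1, axis2=3)
purity = np.trace(rho_1 @ rho_1)
```

The purity Tr(ρ₁²) = 1/2 is the smallest possible value for a qubit, which is one way of confirming that the Bell pair is maximally entangled.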

The quantum computer processes information by applying some set of quantum operations to qubits according to a blueprint called a quantum algorithm [2]. These operations consist of linear, unitary evolutions U: single- and two-qubit operations (the so-called gates), and measurements (a measurement can also “process” information, as can readily be seen in section 3.2) [5]. The unitarity of the quantum gates, U†U = UU† = 𝟙, implies that the quantum computation is reversible. The single-qubit operations can be represented graphically on the Bloch sphere. The Bloch sphere is a geometrical representation of the state space of a qubit, and any unitary single-qubit gate can be described as a rotation of the Bloch sphere. The three most important single-qubit gates are the so-called Pauli operators X, Y and Z. In matrix notation, the Pauli operators have the following representations in the {|0⟩, |1⟩} basis:

X = [[0, 1], [1, 0]],   Y = [[0, −i], [i, 0]],   Z = [[1, 0], [0, −1]].
In the computational basis, the X operator is a bit flip, and the Z operator is a phase flip, that is, a phase rotation in the Bloch sphere. The Y operator can be constructed from the X and Z operators, since Y = iXZ [5]. Another extremely useful operation, essential for quantum computation, is the Hadamard gate H. In matrix notation, the Hadamard operation is given by

H = (1/√2) [[1, 1], [1, −1]].
The Hadamard gate applied to the basis states |0⟩ and |1⟩ returns the balanced superposition states |+⟩ = (|0⟩ + |1⟩)/√2 and |−⟩ = (|0⟩ − |1⟩)/√2, respectively. Therefore, the Hadamard gate gives rise to superposition states of a possibly large number of qubits. The last of the crucial single-qubit gates is the general phase shift operation, represented as

R(φ) = [[1, 0], [0, e^{iφ}]].
For φ = π, the phase shift gate takes the form of the Pauli Z operator. When φ = π/2 and φ = π/4, the phase shift operator corresponds to the π/2-phase gate and the π/8 gate, respectively.
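The gate algebra above is easy to verify numerically. The following Python/NumPy sketch (illustrative only) constructs the Pauli, Hadamard and phase-shift matrices; relations such as R(π) = Z, T² = S, Y = iXZ and HXH = Z follow directly:

```python
import numpy as np

# Pauli operators in the {|0>, |1>} basis.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Hadamard gate.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def R(phi):
    """General phase-shift gate R(phi) = diag(1, e^{i phi})."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

S = R(np.pi / 2)   # the pi/2-phase gate
T = R(np.pi / 4)   # the pi/8 gate
```

All of these gates are unitary, so, for instance, H @ H.conj().T equals the identity, reflecting the reversibility of quantum gates noted above.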

Two important two-qubit gates are the controlled-X (CX) and the controlled-Z (CZ), which are applied between the so-called control and target qubits. The matrix representation of these gates is the following:

CX = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]],   CZ = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, −1]].
The CX operation flips the state of the target qubit by applying the X operation only when the control qubit is in the basis state |1⟩ (the state of the control qubit is left unchanged). In other words, the CX stores the result of the addition modulo 2 of both qubit states in the state of the target qubit. In the case of the CZ gate, the Z operation is applied to the target qubit if the control qubit is in the basis state |1⟩ (again, the state of the control qubit is left unchanged); otherwise the states of both qubits are unchanged. The importance of the CX and CZ gates stems from the fact that, together with the Hadamard gate H, they allow us to create entangled states of any number of qubits initially prepared in one of the basis states. Furthermore, the CX or CZ gates together with single-qubit gates serve as the basic building blocks for any other two-qubit gate [2].
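As a brief illustration (a Python/NumPy sketch, not part of the original text), a Hadamard on the control followed by a CX turns the product state |00⟩ into the Bell state, and the CZ gate is obtained from the CX by conjugating the target qubit with Hadamards:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# H on the control, then CX: |00> -> (|00> + |11>)/sqrt(2).
ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = CX @ np.kron(H, I2) @ ket00

# CZ and CX differ only by Hadamards on the target qubit,
# because H X H = Z.
CZ_from_CX = np.kron(I2, H) @ CX @ np.kron(I2, H)
```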

The linearity of the quantum gates means that qubits in any superposition state of the computational basis can be manipulated by applying these gates. This suggests that a single quantum computer can process information in parallel, a phenomenon known as quantum parallelism. Therefore, by means of the superposition principle, linear quantum gates and quantum interference, the amplitudes of the favoured states that represent the correct answer to the computational problem can be enhanced. In a nutshell, this is why quantum computers are capable of solving some computational problems more efficiently than any classical computer. The phenomena described in this section may well constitute the foundation for the power of quantum computation. However, it is still unknown how large the class of computational problems that can be solved efficiently on a quantum computer is with respect to its classical counterpart [2]. Therefore, we are still not certain whether quantum computation is, in principle, more powerful than classical computation.

In the next section, we abandon the abstract way of thinking about quantum computation and introduce an architecture that can be used to physically build a quantum computer, the so-called distributed quantum computer.

3 Distributed quantum computation

There are many physical systems in which a qubit and a quantum computer as a whole can be realised. One can represent a qubit as a spin of an electron, a nucleus or even an atom [6, 7, 8, 9, 10, 11]. Other physical representations of qubits are based on Josephson junctions (so-called superconducting qubits) or quantum dots [12, 13, 14, 15, 6, 16, 17]. One of the prominent approaches to the physical implementation of a qubit and quantum computer is linear quantum optics [18, 19]. One can use coherent and squeezed states of light or even a single photon (Fock state or polarisation degree of freedom of a photon) to represent a quantum bit [18, 19]. The drawback of photonic systems for quantum computation is the fact that there is no direct interaction between photons. Nevertheless, photons are perfect carriers of quantum information and can be utilised in the distributed model of quantum computation as quantum communication channels [2, 20].

At the present time, a number of models of quantum computation exist, such as adiabatic quantum computing, or the most widely used standard circuit model of quantum computation. Regardless of the model of quantum computation, anyone trying to build a quantum computer faces two main challenges:

  1. the problem of decoherence, that is, how well we can suppress the unwanted influence of the environment on our quantum computer,

  2. the problem of scalability of basic modules of our quantum computer.

The difficulties associated with the fragility of quantum information (decoherence) and the scalability of a quantum computer architecture are among the most important motivations for the distributed version of quantum computation. Decoherence, i.e., the deterioration of the quantum state, affects each qubit and introduces errors into the computation. This has to be suppressed to the lowest level possible, and crucially below the fault-tolerance threshold [21]. As one would expect, any interesting, i.e., complicated, computational problem usually employs many qubits. The most well-known quantum algorithms, Shor's factoring algorithm, Grover's database search algorithm and the Deutsch-Jozsa algorithm, have been demonstrated experimentally, but only for a few qubits [22, 23, 24]. These experiments are proof-of-principle demonstrations of the power of quantum computation. All of this suggests that a truly useful and powerful quantum computer has to be a robust and scalable machine. In the case of many qubits, which may interact with the environment and with their neighbours, protection against decoherence becomes quite a challenging task. The scalability and decoherence issues are the main difficulties addressed by distributed quantum information processing. It may be much more feasible to build a number of small-scale, remotely distributed quantum computers (processors) and connect them together instead of one large machine. In the distributed model of quantum computation, a small number of stationary qubits are placed in the (distant) nodes of a large network. A distributed quantum computer may also be based on a model of quantum computation that is inherently distributed, such as the measurement-based model of quantum computation [25]. Here, the computation is done via single-qubit measurements and feed-forward operations on a large, multi-qubit, entangled graph state [26, 27].
The stationary qubits are usually encoded in the ground levels of trapped atoms, ions or quantum dots, and can therefore additionally serve as a good quantum memory [28]. This kind of qubit implementation allows for fast and reliable single-qubit operations and rather straightforward measurement techniques. In this setting, a possibly large collection of small-scale quantum processors can solve a single computational problem as long as they communicate the outputs of their computations with each other or with a central quantum processor. Robust communication between any two stationary nodes (qubits) is usually provided via flying qubits, i.e., single-photon qubits [29]. Computation with a distributed quantum network starts with the preparation of initial states, which may involve the exchange of classical and quantum information between nodes. Next, the computation at each node is performed, and then all the partial results from each node are sent to the central processor [30]. The central node gathers the results and returns the final answer to the computational problem. Since quantum computation is probabilistic in nature, one may have to repeat the distributed computation many times until the required result is obtained. The advantages of the distributed model of quantum computation, which result from the spatial separation of stationary qubits, are the following:

  1. each qubit is uniquely addressable. Therefore, control and measurement of an individual quantum processor is completely decoupled from the rest of the computational resources. Naturally, better protection against decoherence originating from the interaction with the environment is more feasible too.

  2. enhanced flexibility. By means of optical elements, qubits may interact with each other more easily. Entanglement can, in principle, be generated between any two stationary qubits. Moreover, the distributed character of the architecture allows for applications not only in quantum computation but also in quantum communication.

Even though each node of a quantum network consists of a small number of qubits, decoherence will still lead to errors and deterioration over time [31]. In order to avoid this scenario, one may encode logical qubits in many physical qubits and apply error-correcting procedures [31]. The main disadvantage of the distributed model of quantum computation is the lack of local interactions between the nodes, and therefore the need for entangling procedures. Naturally, distributed quantum computation has to operate on distributed versions of the known standard quantum algorithms. In other words, a centralised quantum algorithm has to be distributed over the nodes of a large quantum network too. This adds an additional communication cost to the overall cost of a computation [32]. Consequently, one has to decide how to partition a single problem between many remotely distributed quantum processors in an optimal way, and then how to communicate and collect the outputs of these processors, effectively finding the final solution to the computational problem [32]. This issue was first addressed by Eisert et al., who considered how to distribute the CX and a number of other important gates between two quantum processors [33]. Eisert et al. proved that the implementation of a distributed version of the CX gate requires one pre-shared EPR pair and the communication of two classical bits between the two individual quantum nodes. Since the CX gate is a basic building block of any other multi-qubit gate, and together with general single-qubit operations it constitutes a universal set of gates for universal quantum computation, the distributed model of quantum computation is universal [2, 34]. Apart from devising the non-local versions of gates, Eisert et al. addressed the problem of the minimal resources, both classical and quantum, and the optimal procedures that are required to implement these distributed gates [33].
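The resource count for the non-local CX (one EPR pair plus two classical bits) can be illustrated with a small state-vector simulation. The following Python/NumPy sketch follows a standard teleportation-based construction of a remote CX; the qubit ordering, helper functions and test states are my own illustrative choices, not taken from Ref. [33]. It checks that all four measurement branches, after the feed-forward corrections, reproduce the direct CX:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def embed(gate, pos, n=4):
    """Act with `gate` on adjacent qubits starting at index `pos`."""
    k = int(round(np.log2(gate.shape[0])))
    return np.kron(np.kron(np.eye(2 ** pos), gate),
                   np.eye(2 ** (n - pos - k)))

def project(state, qubit, vec):
    """Unnormalised post-measurement state after projecting `qubit` on |vec>."""
    return embed(np.outer(vec, vec.conj()), qubit) @ state

e0, e1 = np.eye(2, dtype=complex)
psi_A = np.array([0.6, 0.8], dtype=complex)    # Alice's control qubit (example)
chi_B = np.array([0.8, -0.6], dtype=complex)   # Bob's target qubit (example)
epr = (np.kron(e0, e0) + np.kron(e1, e1)) / np.sqrt(2)
expected = CX @ np.kron(psi_A, chi_B)          # direct CX for comparison

fids = []
for m1 in (0, 1):                              # Alice's Z outcome on ancilla a
    for m2 in (0, 1):                          # Bob's X outcome on ancilla b
        state = np.kron(psi_A, np.kron(epr, chi_B))   # register order: A, a, b, B
        state = embed(CX, 0) @ state           # local CX(A -> a) at Alice
        u = (e0, e1)[m1]
        state = project(state, 1, u)           # measure a; send bit m1 to Bob
        if m1 == 1:
            state = embed(X, 2) @ state        # Bob's conditional X on b
        state = embed(CX, 2) @ state           # local CX(b -> B) at Bob
        w = (e0 + (1 - 2 * m2) * e1) / np.sqrt(2)
        state = project(state, 2, w)           # measure b; send bit m2 to Alice
        if m2 == 1:
            state = embed(Z, 0) @ state        # Alice's conditional Z on A
        # Contract out the measured ancillas to obtain the (A, B) state.
        psi_out = np.einsum('iabj,a,b->ij', state.reshape(2, 2, 2, 2),
                            u.conj(), w.conj()).reshape(4)
        psi_out /= np.linalg.norm(psi_out)
        fids.append(abs(np.vdot(expected, psi_out)))
```

In every branch the fidelity with the directly applied CX equals one, so the protocol is deterministic once the two classical bits have been communicated.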

In most models for distributed quantum computing one assumes that all quantum processors work perfectly [34]. Moreover, one is able to transfer, store, manipulate, and retrieve quantum states from each of the nodes of an arbitrary quantum network. Concerning communication, there are a few possibilities. In some models, communication is done only with qubits or only with classical bits. Commonly, some amount of entanglement is prepared between the qubits when the quantum network is initialised. Often the various nodes share EPR pairs and communication is established with only classical bits (quantum teleportation) or classical bits and qubits (superdense coding) [35]. Obviously, the generation of pre-shared entanglement can be quite challenging, especially for large networks. In some cases the cost of entanglement preparation can render distributed quantum computation with pre-shared entanglement inefficient in comparison with other models of distributed quantum computing based on disentangled states. Nevertheless, the use of pre-shared entangled states under ideal conditions is usually advantageous over uncorrelated ones [30]. Furthermore, even for noisy communication channels one can employ purification procedures [30]. Naturally, the resources one exploits to solve a computational problem will depend on the problem at hand and the available methods for entanglement generation. On the other hand, in some models of distributed quantum computing, nodes communicate with each other without any pre-shared entanglement by means of flying qubits (single photons).

3.1 Quantum communication

In quantum communication protocols, photons serve as carriers of quantum information between the nodes of a communication network. In most quantum communication protocols, an important task for photons is to generate perfect entangled states between distant nodes. This is not a trivial task. Each photon that carries quantum information between the nodes of a quantum network is prone to losses. The probability that a photon is lost is given by

P = 1 − e^{−L/L_att},

where L is the communication distance and L_att is the characteristic channel attenuation distance [20, 1]. This implies an exponential attenuation that decreases the fidelity of quantum communication protocols. A solution that addresses these limitations was given in terms of quantum repeater and purification protocols. Some of the well-known quantum repeater and purification protocols are probabilistic. This imposes a requirement for a medium that can facilitate an interaction between photons and store photonic qubits, i.e., a stationary qubit. Hence, the concept of an optical quantum memory realised in an atomic vapour (atomic ensemble) was introduced. Consequently, an optical quantum memory is a necessary ingredient in many quantum communication protocols and an essential ingredient in many optical quantum information processing protocols.
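For concreteness, the exponential scaling of the loss can be evaluated numerically. The short Python sketch below assumes an illustrative attenuation length of 22 km, a typical order of magnitude for telecom fibre, not a figure taken from the text:

```python
import numpy as np

def loss_probability(L, L_att):
    """P = 1 - exp(-L / L_att): probability that a photon sent over a
    distance L is lost, for a channel attenuation length L_att."""
    return 1.0 - np.exp(-L / L_att)

L_att = 22.0                                    # km, illustrative value
distances = np.array([1.0, 10.0, 50.0, 100.0])  # km
losses = loss_probability(distances, L_att)
```

Already at 100 km the loss probability is close to unity, which is why direct transmission fails over long distances and quantum repeaters become necessary.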

In general, a quantum memory has to fulfil the following requirements: efficient mapping of a photon into the memory, long storage times and efficient retrieval of the photon back from the memory. Moreover, one has to be able to control the state of the quantum memory at all times. The storage time itself has to be much longer than the characteristic time scale of the application in which the quantum memory is used. Not all of these requirements have to be met for all quantum applications. In fact, for some applications such as quantum computation, the first and third requirements can be lifted and the quantum memory can serve as a qubit itself, the so-called stationary or matter qubit. Ideally, all operations that concern quantum memories should be highly efficient and deterministic. Unfortunately, this is never the case and all realistic quantum memories are imperfect. Hence, a question arises: how do we evaluate the performance of a quantum memory? The most commonly used measure of quantum memory performance is the average fidelity F, i.e., the state overlap between the input and output quantum states [2, 36]. A quantum memory characterised by unit average fidelity perfectly maps the input state, stores it for some time and returns it unchanged. Naturally, a truly quantum memory has to outperform any classical memory for quantum state storage [36]. The classical memory fidelity for quasi-classical bright coherent states is F = 1/2 [37], and for arbitrary qubit states the maximal classical fidelity is F = 2/3 [38]. Therefore, any truly quantum memory has to exceed these classical bounds. Fidelity is not the only measure for quantifying the performance of quantum memories. As with the requirements, the appropriate measure of quantum memory performance depends on the particular application [36].
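A minimal sketch of the fidelity benchmark is given below (Python/NumPy; the "memory error" model, a small spurious phase picked up during storage, is purely illustrative and not a model from the text):

```python
import numpy as np

def fidelity(psi_in, psi_out):
    """Pure-state fidelity F = |<psi_in|psi_out>|^2."""
    return abs(np.vdot(psi_in, psi_out)) ** 2

# Input qubit state and an imperfectly retrieved copy: here the memory is
# modelled as imprinting a small spurious phase eps (illustrative only).
psi_in = np.array([1.0, 1.0]) / np.sqrt(2)
eps = 0.1
psi_out = np.array([1.0, np.exp(1j * eps)]) / np.sqrt(2)

F = fidelity(psi_in, psi_out)          # F = cos^2(eps / 2)
beats_classical_bound = F > 2 / 3      # benchmark for arbitrary qubit states
```

For this small over-rotation the fidelity stays well above the classical bound of 2/3, so such a memory would qualify as genuinely quantum by this measure.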

3.2 The one-way model of quantum computation

A natural candidate for a distributed model of quantum computation is the so-called measurement-based or one-way model of quantum computation realised on graph states [5, 25, 39]. Although the very first experimental proposal for a one-way quantum computer was based on optical lattices (where cold atoms are locally trapped in a standing-wave potential created by counter-propagating laser fields [40, 41, 42]), this model of quantum computation is especially well suited for a distributed implementation. What is a graph state? Graph or cluster states are large entangled states that act as a universal resource for a one-way quantum computer [5, 26, 27]. The cluster states are represented in the form of a lattice or a graph. We associate with every node of a graph an isolated qubit in the state |+⟩ = (|0⟩ + |1⟩)/√2, subsequently connected, that is, entangled, with adjacent qubits via the CZ operations

CZ = |0⟩⟨0| ⊗ 𝟙 + |1⟩⟨1| ⊗ Z,

where |0⟩, |1⟩ are the computational basis states, Z is the Pauli operator and 𝟙 denotes the identity matrix. Commonly, graph states are described in terms of stabilizer operators. A set of commuting operators constitutes a stabilizer of a quantum state if the state is invariant under each of them. The stabilizer formalism allows us to describe multi-qubit quantum states and their evolution in terms of a few stabilizer operators, which usually consist of operators from the Pauli group on n qubits. The Pauli group on a single qubit is a group under matrix multiplication consisting of the identity matrix and the Pauli matrices, multiplied by ±1, ±i factors. The Pauli group on n qubits is the n-fold tensor product of the single-qubit Pauli group [2]. The state of a cluster consisting of N qubits is completely specified by the following set of eigenvalue equations:

K_a |G⟩ = |G⟩,   with   K_a = X_a ∏_{b ∈ N(a)} Z_b,   a = 1, …, N,

where N(a) is the set of all neighbours of qubit a [26]. The K_a are Hermitian stabilizer operators whose eigenstates, i.e., the graph states, are mutually orthogonal and form a basis in the Hilbert space of the cluster [26]. Cluster states and quantum algorithms implemented on them may be related to mathematical graphs [26, 27]. A graph G = (V, E) is a pair of a finite set of vertices V connected by edges from the set E [27]. A cluster is identified with the vertices of the graph [27]. Edges are realised by CZ operations and connect two vertices of the graph (Fig. 1). The well-known graph-theory notation is a very useful tool in analysing the properties of cluster states.
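These eigenvalue equations are easy to verify for the smallest non-trivial example, a two-vertex graph with a single CZ edge. The following Python/NumPy sketch (illustrative only) builds the two-qubit graph state and checks that both stabilizers K₁ = X ⊗ Z and K₂ = Z ⊗ X leave it invariant:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Two-qubit graph state: both qubits in |+>, linked by one CZ edge.
plus = H @ np.array([1, 0], dtype=complex)
G = CZ @ np.kron(plus, plus)

# Stabilizers K_a = X_a prod_{b in N(a)} Z_b for the two-vertex graph:
# each qubit has the other as its only neighbour.
K1 = np.kron(X, Z)
K2 = np.kron(Z, X)
```

Both stabilizers commute and satisfy K_a|G⟩ = |G⟩, in agreement with the eigenvalue equations above.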

Figure 1: A graph state. Nodes represent physical qubits which are connected via the CZ operations. Horizontal strings of physical qubits constitute logical qubits. The vertical links between logical qubits represent two-qubit CZ gates.

Let us now review some details of the one-way model of computation. In the measurement-based model of quantum computing, the entire resource for quantum computation is provided from the beginning as a graph state (Fig. 1). Quantum computation consists of single-qubit measurements on the graph state, and every quantum algorithm is encoded in a measurement blueprint. A measurement of a qubit in the Z eigenbasis, i.e., in the computational basis, removes the qubit from the graph, and all links to its neighbours are broken. Consequently, the cluster is reduced by one qubit, and possible corrective operations are applied to its neighbours depending on the measurement outcome (if the measurement result is 0 then nothing happens, but when the measurement outcome is 1 a phase flip Z is applied to all neighbours). By means of such measurements, any cluster can be carved out of a generic, fully connected cluster (Fig. 1).

Other single-qubit measurements are performed in the basis

B(α) = { (|0⟩ + e^{iα}|1⟩)/√2, (|0⟩ − e^{iα}|1⟩)/√2 }.

For α = 0, the measurement is realised in the X eigenbasis. An interesting feature of the X measurement is that two neighbouring X measurements in a linear cluster remove the measured qubits and connect their neighbours with each other, resulting in a shortened cluster. For α = π/2, a Y measurement is performed. In the case of a Y measurement, the measured qubit is removed from the cluster and its neighbours are connected (up to a corrective phase operation). Measurements in the X and Y eigenbases propagate quantum information through the cluster. In general, any quantum computation proceeds as a series of measurements governed by an appropriate blueprint. The choice of measurement basis for every physical qubit is encoded in this measurement blueprint. Moreover, the measurement bases depend on the outcomes of the preceding measurements. This implements the so-called feed-forward operation. Although the result of any single measurement is completely random, information processing is possible because of the feed-forward operations. The feed-forward operations ensure that the measurement bases are correlated, so that a deterministic computation can be realised. In this way quantum information propagates (due to the feed-forwarding, which implies a time ordering in one direction) through the cluster until the last column of qubits, which are then ready to be read out. Readouts are performed in the Z eigenbasis up to Pauli corrections, and the output of the computation is given as a classical bit string [26].
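The propagation of quantum information by a B(α) measurement can be made explicit for a two-qubit cluster: projecting the first qubit of CZ(|ψ⟩ ⊗ |+⟩) onto a B(α) basis state and applying the feed-forward correction Xˢ leaves the second qubit in the state H R(−α)|ψ⟩, independently of the random outcome s. A small Python/NumPy sketch (with my own illustrative phase conventions) checks this:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def R(phi):
    """Phase gate diag(1, e^{i phi})."""
    return np.diag([1.0, np.exp(1j * phi)])

def propagate(psi, alpha, s):
    """Measure qubit 1 of the cluster CZ(|psi> ⊗ |+>) in the basis
    B(alpha) = {(|0> ± e^{i alpha}|1>)/sqrt(2)}; outcome s in {0, 1}."""
    state = CZ @ np.kron(psi, plus)
    v = np.array([1.0, (-1) ** s * np.exp(1j * alpha)]) / np.sqrt(2)
    out = np.einsum('i,ij->j', v.conj(), state.reshape(2, 2))
    out /= np.linalg.norm(out)
    return np.linalg.matrix_power(X, s) @ out   # feed-forward correction X^s

psi = np.array([0.6, 0.8j])          # example input state on qubit 1
alpha = 0.7
out0 = propagate(psi, alpha, 0)
out1 = propagate(psi, alpha, 1)
target = H @ R(-alpha) @ psi         # expected output on qubit 2
```

Both outcome branches yield the same output state after the correction, which is exactly the deterministic behaviour that feed-forward provides.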

Figure 2: A linear 4-qubit cluster. Nodes represent physical qubits, which are connected via the CZ operations.

A simple example of a measurement-based computation can be presented on a linear 4-qubit cluster given by

$|\Phi\rangle = CZ_{34}\, CZ_{23}\, CZ_{12}\, |+\rangle^{\otimes 4},$

with $|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$. Although this is a very basic cluster, it allows us to perform an arbitrary single-qubit rotation in only three (measurement) steps:

  • measure qubit 1 in the basis $B(\theta_1)$,

  • measure qubit 2 in the basis $B(\pm\theta_2)$, with the sign depending on the outcome of the previous measurement,

  • measure qubit 3 in the basis $B(\pm\theta_3)$, with the sign depending on the outcome of the previous measurement.

Following these measurements, an arbitrary single-qubit rotation (up to corrective Hadamard and Pauli $X$, $Z$ operations) is applied to the fourth qubit in the linear cluster, according to the unitary transformation given by [43]

$U = R_x(\theta_3)\, R_z(\theta_2)\, R_x(\theta_1),$

where $R_x$ and $R_z$ denote rotations about the $x$ and $z$ axes of the Bloch sphere.
We again emphasize the importance of the feed-forward operations. The angles of the rotation and by implication the final corrective operations depend on the outcomes of previous measurements [43].

On the basis of cluster states a universal set of quantum gates can be implemented, e.g., single-qubit gates such as the Hadamard, the $\pi/2$-phase gate and the $\pi/8$ gate, and the two-qubit CX gate [2, 39]. Most importantly, the measurement-based model of quantum computation on cluster states is completely equivalent to the standard circuit model; thus the one-way model is capable of efficiently simulating any quantum circuit. Consequently, the measurement-based model of computation is a universal model of quantum computation [39].

Cluster states are a very promising resource for quantum information processing. One possible way of creating large networks of qubits is by trapping small atomic ensembles in optical lattices or placing them in the distributed nodes of a quantum network. Therefore, in Sec. 6, we introduce the concept of an atomic medium as a quantum memory for light. Since a cluster state consists of a large set of entangled qubits, efficient protocols for generating entanglement between the nodes of a network are required. In Chapter III, we review some of the well-known entangling procedures and present a new procedure based on manipulation techniques for atomic ensembles that are described there in detail.

In the next section, however, we present the foundations of another important discipline of quantum information theory, namely quantum metrology.

4 Quantum metrology

Quantum metrology, or quantum parameter estimation theory, is an important and relatively young branch of science that has received a lot of attention in recent years. It studies high-precision measurements of physical parameters, such as phase, based on systems and physical evolutions that are governed by the principles of quantum mechanics. The main theoretical objective of this field is to establish the ultimate physical limits on the information we can gain from a measurement. From an experimental perspective, quantum-enhanced metrology promises many advances in science and technology, since an optimally designed quantum measurement procedure outperforms any classical procedure. Furthermore, improved measurement techniques frequently lead not only to technological advancement but also to a fundamentally deeper understanding of Nature. The main figure of merit in the field of quantum metrology, for both theorists and experimentalists, is the precision with which the value of an unknown parameter can be estimated.

4.1 The quantum Cramér-Rao bound

In this section, we introduce the two most crucial concepts in quantum metrology, namely the Fisher information and the quantum Cramér-Rao bound. The Fisher information is a quantity that measures the amount of information about the parameter we wish to estimate revealed by the measurement procedure. Given the Fisher information, we can bound the minimal value of the uncertainty in the parameter with the quantum Cramér-Rao bound. Here, we consider the estimation of a single parameter $\theta$. The most general parameter estimation procedure, corresponding to any conceivable experimental setup, is shown in Fig. 3. This procedure consists of three elementary steps:

  1. prepare a probe system in an initial quantum state $|\psi\rangle$,

  2. evolve it to a state $|\psi(\theta)\rangle = U(\theta)|\psi\rangle$ by a unitary evolution $U(\theta) = \exp(-i\theta H)$, where the Hermitian operator $H$ is the generator of translations in the parameter $\theta$,

  3. subject the probe system to a generalised measurement, described by a Positive Operator Valued Measure (povm) that consists of elements $\hat{E}(x)$, where $x$ denotes the measurement outcome.

Figure 3: The general parameter estimation procedure involving state preparation $|\psi\rangle$, evolution $U(\theta)$, and generalised measurement $\hat{E}(x)$ with outcomes $x$, which produces a probability distribution $p(x|\theta)$.

The conditional probability of finding measurement outcome $x$ is given by the Born rule

$p(x|\theta) = \langle \psi | U^\dagger(\theta)\, \hat{E}(x)\, U(\theta) | \psi \rangle,$

with $\int dx\; p(x|\theta) = 1$. Given the probability distribution $p(x|\theta)$, we can derive the expression for the Fisher information and subsequently the quantum Cramér-Rao bound. The following derivation is due to Braunstein and Caves [44], and can also be found in Kok and Lovett [1]. We start the derivation by noting that the above measurement procedure returns the measurement outcome $x$ with probability $p(x|\theta)$, rather than the desired value of the parameter $\theta$ itself. Therefore, we need to relate these two quantities with the help of a special function called an estimator. The estimator for a parameter $\theta$ is a function $\hat\theta(x)$ that allows us to infer the value of the parameter given the measurement outcome $x$. For an estimator $\hat\theta(x)$, we define the expectation value

$\langle \hat\theta \rangle = \int dx\; p(x|\theta)\, \hat\theta(x).$
When $\langle\hat\theta\rangle = \theta$, the estimator is unbiased. Given $N$ independent measurement outcomes $x_1, \ldots, x_N$ we can write

$\int dx_1 \cdots dx_N\; p(x_1|\theta) \cdots p(x_N|\theta)\, \bigl[ \hat\theta(x_1,\ldots,x_N) - \langle\hat\theta\rangle \bigr] = 0. \qquad (12)$

Following the definition of $\langle\hat\theta\rangle$, we can easily verify that Eq. 12 holds for any estimator $\hat\theta$. Next, we take the derivative of Eq. 12 with respect to $\theta$ and rewrite it as

$\int dx_1 \cdots dx_N\; p(x_1|\theta) \cdots p(x_N|\theta) \left[ \sum_{j=1}^{N} \frac{\partial \ln p(x_j|\theta)}{\partial \theta} \right] \bigl( \hat\theta - \langle\hat\theta\rangle \bigr) = \frac{d\langle\hat\theta\rangle}{d\theta}.$
Now we apply the Cauchy-Schwarz inequality

$\left| \int d\boldsymbol{x}\; f g \right|^2 \le \int d\boldsymbol{x}\; f^2 \int d\boldsymbol{x}\; g^2,$

with $f$ and $g$ defined as

$f = \sqrt{p(x_1|\theta) \cdots p(x_N|\theta)}\; \sum_{j=1}^{N} \frac{\partial \ln p(x_j|\theta)}{\partial \theta}, \qquad g = \sqrt{p(x_1|\theta) \cdots p(x_N|\theta)}\; \bigl( \hat\theta - \langle\hat\theta\rangle \bigr).$

Hence, we obtain

$\left[ \int d\boldsymbol{x}\; p \left( \sum_{j=1}^{N} \frac{\partial \ln p(x_j|\theta)}{\partial \theta} \right)^{2} \right] \left[ \int d\boldsymbol{x}\; p\, \bigl( \hat\theta - \langle\hat\theta\rangle \bigr)^2 \right] \ge \left( \frac{d\langle\hat\theta\rangle}{d\theta} \right)^{2},$

where $p = p(x_1|\theta)\cdots p(x_N|\theta)$ and $d\boldsymbol{x} = dx_1 \cdots dx_N$.
We identify the first factor with $N$ times the Fisher information

$F(\theta) = \int dx\; p(x|\theta) \left( \frac{\partial \ln p(x|\theta)}{\partial \theta} \right)^{2} \qquad (18)$

(the cross terms vanish, since $\int dx\, p\, \partial_\theta \ln p = \partial_\theta \int dx\, p = 0$), and rewrite this inequality as

$\langle (\Delta\hat\theta)^2 \rangle \ge \frac{ \left( d\langle\hat\theta\rangle / d\theta \right)^2 }{ N F(\theta) },$

where $\langle (\Delta\hat\theta)^2 \rangle \equiv \int d\boldsymbol{x}\; p\, \bigl( \hat\theta - \langle\hat\theta\rangle \bigr)^2$ is the variance of the estimator.
The Fisher information measures the average squared rate of change of the conditional probability distribution $p(x|\theta)$ (derived from a measurement) with the parameter $\theta$. Therefore, a higher sensitivity of the probe system to the parameter in question implies a higher Fisher information. Strictly speaking, the Fisher information quantifies the amount of information about the parameter $\theta$ extracted from the probe system prepared in $|\psi\rangle$ by a generalised measurement described by the povm. The unit of the Fisher information is given by the inverse squared unit of the parameter in question, that is, $[\theta]^{-2}$. The above inequality relates the Fisher information and the average error in the estimator $\hat\theta$. However, we want to express it in terms of the average error in the actual value of $\theta$. Therefore, we use the following expression for the error $\delta\theta$:

$\delta\theta \equiv \frac{\hat\theta}{\bigl| d\langle\hat\theta\rangle / d\theta \bigr|} - \theta. \qquad (19)$
The derivative $d\langle\hat\theta\rangle/d\theta$ accounts for a possible change in the units between the average value of the estimator and the parameter $\theta$. In order to find a relationship between $\langle(\delta\theta)^2\rangle$ and $\langle(\Delta\hat\theta)^2\rangle$, we use Eq. 19 to calculate

$\langle \delta\theta \rangle = \frac{\langle\hat\theta\rangle}{\bigl| d\langle\hat\theta\rangle / d\theta \bigr|} - \theta,$

and we use Eq. 19 to further find

$\langle (\delta\theta)^2 \rangle = \frac{ \langle (\Delta\hat\theta)^2 \rangle }{ \left( d\langle\hat\theta\rangle / d\theta \right)^2 } + \langle \delta\theta \rangle^2.$

Given the above equations, we find

$\langle (\delta\theta)^2 \rangle \ge \frac{ \langle (\Delta\hat\theta)^2 \rangle }{ \left( d\langle\hat\theta\rangle / d\theta \right)^2 }.$

This relation, together with the bound on $\langle(\Delta\hat\theta)^2\rangle$ in terms of the Fisher information, leads to the quantum Cramér-Rao bound on the minimum value of the mean squared error in the parameter:

$\delta\theta \ge \frac{1}{\sqrt{N F(\theta)}}.$

The last inequality holds for unbiased estimators, $\langle\delta\theta\rangle = 0$. The minimal error in $\theta$ depends on the inverse of the number of times $N$ the measurement procedure is repeated and on the Fisher information. The Cramér-Rao bound is a theoretical limit, and in general it is not tight. In order to attain this bound, we have to prepare the probe system in an appropriate initial quantum state and then subject it to a suitable measurement. In other words, for a given measurement procedure we need to find an optimal initial quantum state and an optimal measurement observable.
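The bound can be checked numerically on the textbook example of single-qubit phase estimation, where the outcome probabilities are $p(0|\theta) = \cos^2(\theta/2)$ and $p(1|\theta) = \sin^2(\theta/2)$, for which $F(\theta) = 1$. The sketch below (the parameter value and sample sizes are arbitrary choices) compares the mean squared error of the maximum-likelihood estimator with $1/(N F)$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, N, trials = 0.7, 2000, 400

def p0(theta):
    # Born-rule probability of outcome 0 for this probe
    return np.cos(theta / 2) ** 2

def fisher(theta, eps=1e-6):
    # F(theta) = sum_x (dp/dtheta)^2 / p, by central differences
    F = 0.0
    for p in (lambda t: p0(t), lambda t: 1 - p0(t)):
        dp = (p(theta + eps) - p(theta - eps)) / (2 * eps)
        F += dp ** 2 / p(theta)
    return F

F = fisher(theta_true)          # analytically F = 1 for this probe

# Maximum-likelihood estimate from N two-outcome measurements:
# theta_hat = 2 arccos(sqrt(k/N)), with k the number of 0 outcomes
k = rng.binomial(N, p0(theta_true), size=trials)
theta_hat = 2 * np.arccos(np.sqrt(k / N))
mse = np.mean((theta_hat - theta_true) ** 2)

print(0.5 / (N * F) <= mse <= 2.0 / (N * F))   # the bound 1/(N F) is (nearly) saturated
```

The maximum-likelihood estimator asymptotically saturates the Cramér-Rao bound, which is why the simulated mean squared error sits close to $1/(NF)$.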

There exist two important regimes of the quantum Cramér-Rao bound, the so-called Standard Quantum Limit (sql) and the Heisenberg Limit. The sql or the shot noise limit is a classical limit for which each measurement reveals a constant amount of information about the parameter. The Heisenberg limit is imposed by the laws of quantum mechanics and for many years it was considered optimal and unbreakable. However, the optimality of the Heisenberg limit has been questioned recently. The Heisenberg limit and its optimality for the most general parameter estimation procedures will be the subject of Chapter II.

4.2 The statistical distance

The Fisher information defined in Eq. 18 is a function of the probability distribution $p(x|\theta)$. In this section, we introduce the concept of the statistical distance between two probability distributions and relate it to the Fisher information. From a conceptual perspective, this corresponds to a parameter estimation procedure producing two distinct probability distributions $p(x|\theta)$ and $p(x|\theta + d\theta)$, associated with two possible values of the parameter: $\theta$ and $\theta + d\theta$. The statistical distance measures how different these probability distributions are.

First, we define a space of probability distributions with a distance between two distributions defined on it [1]. Then, we introduce two infinitesimally close probability distributions $p_x$ and $p_x + dp_x$. The infinitesimal statistical distance for $p_x$ and $p_x + dp_x$ is given by

$ds^2 = \sum_x \frac{dp_x^2}{p_x}.$

We can divide both sides by $d\theta^2$, assuming that $p_x$ depends on a parameter $\theta$:

$\left( \frac{ds}{d\theta} \right)^2 = \sum_x \frac{1}{p_x} \left( \frac{dp_x}{d\theta} \right)^2 = F(\theta).$

This relates the Fisher information to the squared derivative of the statistical distance with respect to $\theta$, i.e., the rate of change of the statistical distance with the parameter.
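For a simple binary distribution $p = (\cos^2(\theta/2), \sin^2(\theta/2))$, this relation between the statistical distance and the Fisher information is easy to verify numerically (a sketch; the numbers are arbitrary choices):

```python
import numpy as np

def probs(theta):
    # binary distribution p = (cos^2(theta/2), sin^2(theta/2))
    return np.array([np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2])

theta, dtheta = 0.9, 1e-5
p, p_shifted = probs(theta), probs(theta + dtheta)

ds2 = np.sum((p_shifted - p) ** 2 / p)   # infinitesimal statistical distance ds^2
F = ds2 / dtheta ** 2                    # (ds/dtheta)^2
print(round(F, 3))                       # 1.0, matching F(theta) = 1 for this distribution
```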

Among the most widely used systems for quantum metrology are optical systems, such as interferometers fed with different states of light. A comprehensive description of various states of light can be given in terms of continuous variables. This approach applies not only to the field of quantum metrology but also to the field of optical quantum computation. Given their importance to many distinct subfields of quantum information, we introduce continuous variables in the next section.

5 Continuous variables

Continuous variables (CVs) may serve as a useful tool for describing various states of light. More importantly, in the context of quantum computation, CVs present an interesting alternative to discrete quantum systems, such as qubits. In this section, we introduce the notion of continuous variables and some basic operations that can be performed on them.

In general, continuous variables are eigenstates of an operator with a continuous spectrum [1]. There are a number of operators with continuous spectra, such as the position, momentum, and quadrature operators of the electromagnetic field, whose eigenstates can implement continuous variables. We are especially interested in the last of these, i.e., an optical representation of CVs.

We model a single mode of a free electromagnetic radiation field as a quantum harmonic oscillator. We write down the Hamiltonian of a harmonic oscillator in terms of creation and annihilation operators as

$H = \hbar\omega \left( \hat{a}^\dagger \hat{a} + \tfrac{1}{2} \right),$

where $\omega$ denotes the frequency of the harmonic oscillator. The creation and annihilation operators $\hat{a}^\dagger$ and $\hat{a}$ are field operators that create or annihilate single excitations (quanta) of the radiation field in a well-defined single mode. The annihilation operator is associated with a quantised amplitude of a single excitation [45]. We can rewrite the Hamiltonian of a harmonic oscillator in terms of the so-called quadrature operators,

$H = \tfrac{1}{2} \left( \hat{P}^2 + \omega^2 \hat{X}^2 \right),$

with $\hat{X}$ and $\hat{P}$ defined in terms of creation and annihilation operators by

$\hat{X} = \sqrt{\frac{\hbar}{2\omega}}\, (\hat{a} + \hat{a}^\dagger), \qquad \hat{P} = -i \sqrt{\frac{\hbar\omega}{2}}\, (\hat{a} - \hat{a}^\dagger).$

For simplicity, we can define dimensionless quadrature operators as

$\hat{q} = \frac{\hat{a} + \hat{a}^\dagger}{\sqrt{2}}, \qquad \hat{p} = \frac{-i\,(\hat{a} - \hat{a}^\dagger)}{\sqrt{2}}.$
Given the bosonic commutation relation ($[\hat{a}, \hat{a}^\dagger] = 1$), the dimensionless quadrature operators obey the following commutation relation

$[\hat{q}, \hat{p}] = i.$
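This commutation relation is easy to check numerically in a truncated Fock space, where it holds exactly everywhere except at the truncation boundary (a sketch; the truncation dimension is an arbitrary choice):

```python
import numpy as np

d = 40
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator, a|n> = sqrt(n)|n-1>
q = (a + a.T.conj()) / np.sqrt(2)            # dimensionless position quadrature
p = -1j * (a - a.T.conj()) / np.sqrt(2)      # dimensionless momentum quadrature

comm = q @ p - p @ q
# Equals i times the identity, except in the last Fock level (a truncation artefact)
print(np.allclose(comm[:-1, :-1], 1j * np.eye(d - 1)))   # True
```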
This commutation relation is reminiscent of the commutation relation for canonically conjugate position and momentum operators, $[\hat{x}, \hat{p}] = i\hbar$ [46]. Hence, the quadrature operators are traditionally regarded as the position and momentum of the electromagnetic harmonic oscillator. Naturally, the quadratures have nothing to do with the position and the momentum of a single quantum, since they are defined in the phase space of a harmonic oscillator [45]. Since we think of the quadratures as position- and momentum-like quantities, their spectrum is unbounded and, more importantly, continuous. Therefore, we may use their eigenstates as an implementation of the continuous variables. We introduce eigenstates $|q\rangle$ and $|p\rangle$ of the single-mode quadrature operators satisfying

$\hat{q}\, |q\rangle = q\, |q\rangle, \qquad \hat{p}\, |p\rangle = p\, |p\rangle.$


The eigenstates are orthogonal, $\langle q | q' \rangle = \delta(q - q')$ and $\langle p | p' \rangle = \delta(p - p')$, and complete:

$\int_{-\infty}^{\infty} dq\; |q\rangle\langle q| = \int_{-\infty}^{\infty} dp\; |p\rangle\langle p| = \hat{1}.$
According to the quantum-mechanical formalism, the eigenstates of canonically conjugate operators are related to each other by the Fourier transform, thus we may write

$|p\rangle = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dq\; e^{ipq}\, |q\rangle,$

with $\langle q | p \rangle = e^{ipq}/\sqrt{2\pi}$. At this point, we have introduced the continuous variables as the eigenstates of the quadrature operators (position and momentum) of the electromagnetic field. Now, in order to perform a continuous-variable quantum computation, we need to create an initial CV state, i.e., a register, then apply an appropriate interaction Hamiltonian to induce evolutions on the continuous variables, and perform a measurement that reveals the result of the computation [47]. Continuous-variable quantum computation was introduced by Lloyd and Braunstein [48]. In principle, there are two distinct types of operations associated with CVs [46]:

  1. Gaussian operations, which include linear phase-space displacements, interaction Hamiltonians at most quadratic in $\hat{q}$ and $\hat{p}$, and homodyne detections (measurements of the quadratures of the electromagnetic field),

  2. non-Gaussian operations, which include interaction Hamiltonians at least cubic (non-linear) in $\hat{q}$ and $\hat{p}$, or operations conditioned on non-Gaussian measurements such as photon counting.

First, we focus our attention on the Gaussian operations. We introduce linear (in the quadrature operators) Hamiltonians. The displacement operator that allows us to move between different eigenstates of the position operator can be written as

$X(s) = e^{-is\hat{p}}.$

A straightforward calculation verifies that $X(s)$ truly is a displacement operator. When applied to a position eigenstate it gives: $X(s)|q\rangle = |q + s\rangle$. For a momentum eigenstate $|p\rangle$, the effect of $X(s)$ is the following

$X(s)|p\rangle = e^{-isp}\, |p\rangle.$
It simply introduces a phase shift in front of a momentum eigenstate. Since we have two conjugate quadrature operators, the form of another linear operator is given by

$Z(t) = e^{it\hat{q}}.$

Its action on the eigenstates of the quadrature operators is the opposite to the action of $X(s)$ and reads

$Z(t)|p\rangle = |p + t\rangle, \qquad Z(t)|q\rangle = e^{itq}\, |q\rangle.$
In summary, the $X(s)$ and $Z(t)$ linear operators displace the continuous variables to another eigenstate or introduce a state-dependent phase shift [1]. These operators implement phase-space displacements and constitute the continuous-variable generalisation of the Pauli bit-flip and phase-flip operators. Naturally, this set of operations is too limited for a fully functional quantum computer; therefore, we introduce Hamiltonians quadratic in the quadrature operators. One of the most important unitary operators in the field of quantum computation is the Fourier transform, given by

$F = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dq\, dq'\; e^{iqq'}\, |q'\rangle\langle q|.$

When we apply the Fourier transform to a position eigenstate $|s\rangle_q$, we have

$F\, |s\rangle_q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} dq'\; e^{isq'}\, |q'\rangle_q = |s\rangle_p.$
The action of the Fourier transform on a position eigenstate yields a momentum eigenstate with the same numerical value $s$ (the subscript $p$ denotes the momentum domain). Furthermore, with the help of the Fourier transform, a momentum eigenstate can be written as a superposition of all possible position eigenstates. The application of the Fourier transform to a momentum eigenstate has an analogous effect, i.e., it gives a position eigenstate $|-s\rangle_q$. The Fourier transform is the continuous-variable version of the Hadamard gate for discrete quantum systems. Other useful quadratic Hamiltonians include the phase gate (a squeezing operator) applied on a single-mode system,

$P(\eta) = e^{i\eta \hat{q}^2 / 2},$
and continuous-variable versions of the CX and CZ gates applied on two CVs:

$CX = e^{-i \hat{q}_1 \otimes \hat{p}_2}, \qquad CZ = e^{i \hat{q}_1 \otimes \hat{q}_2}.$
A truly powerful quantum computer has to be able to perform a universal quantum computation. Are the above CV operations sufficient to implement any quantum computation? The generalised Gottesman-Knill theorem states that a CV quantum computer equipped with linear and quadratic Hamiltonians, i.e., the Gaussian operations, and allowing for classical feed-forward can be efficiently simulated on a classical computer. Interestingly, a number of CV protocols that rely heavily on entanglement, such as quantum teleportation, satisfy the conditions of the Gottesman-Knill theorem and may be simulated efficiently on a classical computer [49, 46]. However, to move beyond the classical domain and at the same time implement universal quantum computation, we require arbitrary Hamiltonians to induce arbitrary evolutions. Fortunately, we can generate any interaction Hamiltonian corresponding to an arbitrary Hermitian polynomial of $\hat{q}$ and $\hat{p}$ given a small set of elementary interaction Hamiltonians. Before presenting this universal set, let us show why the linear and quadratic operations can never give us the higher-order polynomials. We invoke the Baker-Campbell-Hausdorff relation

$e^{A} e^{B} = \exp\left( A + B + \tfrac{1}{2}[A, B] + \tfrac{1}{12}[A, [A, B]] - \tfrac{1}{12}[B, [A, B]] + \cdots \right).$

Here, the $A$ and $B$ operators are at most quadratic in $\hat{q}$ and $\hat{p}$. Therefore, the commutator $[A, B]$ and all repeated commutators can produce polynomials of order at most two. In conclusion, to generate an arbitrary polynomial we require interaction Hamiltonians at least cubic in the position and momentum operators $\hat{q}$ and $\hat{p}$. The most well known Hamiltonian of this type is the so-called Kerr Hamiltonian $(\hat{q}^2 + \hat{p}^2)^2$. The higher-order Hamiltonians belong to the class of non-Gaussian operations and, therefore, are much harder to generate. However, to perform universal quantum computation only one such higher-order Hamiltonian, e.g., the Kerr Hamiltonian, suffices [1].
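As an explicit instance of this closure, using $[\hat{q}, \hat{p}] = i$ one finds

```latex
\begin{align}
[\hat{q}^2, \hat{p}^2] &= \hat{q}\,[\hat{q}, \hat{p}^2] + [\hat{q}, \hat{p}^2]\,\hat{q}
                        = 2i\,(\hat{q}\hat{p} + \hat{p}\hat{q}), \\
[\hat{q}^3, \hat{p}^2] &= [\hat{q}^3, \hat{p}]\,\hat{p} + \hat{p}\,[\hat{q}^3, \hat{p}]
                        = 3i\,(\hat{q}^2\hat{p} + \hat{p}\hat{q}^2),
\end{align}
```

so the commutator of two quadratics is again quadratic, while commuting a cubic with a quadratic yields a cubic: repeated commutators of Gaussian generators never leave the quadratic polynomials, whereas a single cubic (or higher-order) generator lets the order grow without bound.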

The universal set of elementary operations for universal continuous-variable computation consists of

  1. linear operations, e.g., $X(s)$, $Z(t)$,

  2. quadratic operations, e.g., the Fourier transform $F$ and the phase gate $P(\eta)$,

  3. a single non-linear (non-Gaussian) operation of higher order, typically the Kerr Hamiltonian $(\hat{q}^2 + \hat{p}^2)^2$,

  4. a multi-mode interaction Hamiltonian applied on at least two modes, e.g., the CX, CZ operations or the beam splitter interaction,

  5. homodyne measurement.

This set of operations can generate any multi-mode Hermitian polynomial in the canonical position and momentum operators. For the continuous variables implemented as the quadratures of the electromagnetic field, the universal set of elementary operations can be generated using linear optical elements, such as a simple phase shift (Fourier transform), and the non-linear optical medium such as a Kerr nonlinearity.

The only basic ingredient (omitting error correction [50, 51]) of our continuous-variable quantum computer that is still missing is a physical input state that can be used as a register with which we encode our information. The position and momentum eigenstates represent an idealised implementation of the continuous variables. When one inspects the orthogonality conditions, one easily notices that these eigenstates are non-normalisable and, therefore, unphysical, i.e., they cannot be generated in the laboratory. The way to deal with this difficulty is by approximating the idealised eigenstates with normalised Gaussian states. The Gaussian position and momentum eigenstates centered around the position value $q_0$ and momentum value $p_0$ can be written as [1]

$|q_0\rangle_\Delta = \frac{1}{(\pi\Delta^2)^{1/4}} \int_{-\infty}^{\infty} dq\; e^{-(q - q_0)^2 / (2\Delta^2)}\, |q\rangle, \qquad |p_0\rangle_\Delta = \left( \frac{\Delta^2}{\pi} \right)^{1/4} \int_{-\infty}^{\infty} dp\; e^{-\Delta^2 (p - p_0)^2 / 2}\, |p\rangle,$

where $\Delta$ is the width of the Gaussian state, with $0 < \Delta < \infty$. Depending on the value of $\Delta$, the Gaussian state represents various quantum states of light. When $\Delta \to 0$, the Gaussian state corresponds to an infinitely squeezed (in the position domain) state, and $\Delta \to \infty$ represents an infinitely anti-squeezed state. For $\Delta = 1$, we associate the Gaussian states with coherent states of light. The Gaussian states of light can be generated unconditionally; however, their quality depends on the amount of squeezing applied. Naturally, the coherent states are free from these imperfections. As one expects, all Gaussian operations map Gaussian states onto Gaussian states.
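The approach to orthogonality as $\Delta \to 0$ can be seen from the overlap of two such Gaussian states. The sketch below assumes the wavefunction convention $\psi(x) \propto e^{-(x - q_0)^2/(2\Delta^2)}$, for which the overlap of states centred at $q_1$ and $q_2$ is $e^{-(q_1 - q_2)^2/(4\Delta^2)}$:

```python
import numpy as np

def gauss(x, q0, delta):
    # Normalised Gaussian approximation to |q0>, width delta
    return (np.pi * delta ** 2) ** -0.25 * np.exp(-(x - q0) ** 2 / (2 * delta ** 2))

x = np.linspace(-20.0, 20.0, 100001)
dx = x[1] - x[0]

overlaps = []
for delta in (1.0, 0.3, 0.1):
    ov = np.sum(gauss(x, 0.0, delta) * gauss(x, 1.0, delta)) * dx
    overlaps.append(ov)      # analytically exp(-1/(4 delta^2))

print([round(o, 4) for o in overlaps])   # shrinking width drives the overlap to 0
```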

Continuous variables are especially well suited for quantum communication protocols. Therefore, a number of applications have been generalised to CVs. These include quantum teleportation [49, 37, 52] and entanglement swapping, quantum super-dense coding, quantum error correction, quantum cryptography [53, 54, 55] and entanglement distillation [46]. On the other hand, continuous-variable quantum computing has received much less attention. In Chapter II, we present a comprehensive analysis of a parameter estimation protocol and the Deutsch-Jozsa algorithm in the setting of continuous-variable quantum systems. We devise a simple procedure that unifies quantum metrology and the Deutsch-Jozsa algorithm. We are not aware of a counterpart of this protocol existing in the setting of discrete quantum systems.

6 Atomic ensembles

An atomic ensemble or atomic vapour is a gas that consists of several hundred atoms of the same species, typically alkali atoms such as Cesium or Rubidium, trapped at room temperature or trapped and cooled to $\mu$K temperatures. An atomic ensemble may serve as a good quantum memory for light. As the preceding sections may suggest, quantum memories can often be viewed as interfaces for either continuous-variable states or discrete states [36]. The behaviour of continuous-variable memories is described in terms of quadrature operators and probed with homodyne measurements. The discrete memories are described with the help of the $\hat{a}$ and $\hat{a}^\dagger$ operators that annihilate or create single quanta of light, which are then measured with photon-counting detectors [36]. The remainder of this section and Chapter III are focused on discrete quantum memories, that is, single-photon memories.

Any good and efficient quantum memory has to meet the following requirements. The atoms have to possess a long-lived ground state that is easily populated by optical pumping techniques. Moreover, the macroscopic ensemble should have a large optical depth $d = n\sigma L$, where $n$ is the atom number density, $\sigma$ is the absorption cross section of an atom and $L$ denotes the length of the atomic medium. In other words, the atomic ensemble should easily, i.e., effectively, interact with light pulses. This is in fact one of the main advantages of atomic ensembles for interface purposes. A large number of atoms increases the coupling strength of an interaction between light and matter, and therefore allows us to coherently manipulate the quantum state of the ensemble with light and vice versa. Moreover, a large number of atoms helps to suppress the negative impact of decoherence on information stored in an atomic ensemble [20, 36, 56, 57, 58].

Figure 4: A picture of an atomic ensemble consisting of a cloud of atoms trapped in a glass cell (taken from the homepage of the Experimental Quantum Optics Group at the Niels Bohr Institute in Copenhagen).

The simplest way to prepare an atomic ensemble is to trap a cloud of alkali atoms in a glass cell (see Fig. 4). This is the so-called hot atomic vapour or room temperature atomic vapour. Room temperature atomic ensembles are used extensively because of their simplicity and large optical depth, which is the key figure of merit for quantum memory efficiency. These kinds of interfaces inherently suffer from thermal motion and therefore from Doppler broadening. Moreover, atoms moving in and out of the interaction region may limit the performance of a quantum memory. One of the widely used methods to overcome this problem is the utilisation of a buffer gas [59, 60]. A few torr of a noble gas, typically neon or helium, limits the thermal diffusion of atoms inside a vapour [60, 61]. Another advantage of a buffer gas is the suppression of decoherence from collisions between the alkali atoms and with the walls of the cell. By means of a buffer gas, the atoms can retain coherence over a very large number of collisions [59]. Although a buffer gas seems to be indispensable, too high a buffer gas pressure may also introduce some incoherent processes into the operation of a quantum memory [59]. One of the most recent techniques for suppressing collisional and motional decoherence involves a buffer gas cooled below 7 K. In an experiment by Hong et al. [62], Rubidium atoms are cooled by a buffer gas and the diffusion time is slowed down. Moreover, the optical depth of the medium in this experiment is very large ($\sim 70$). The mentioned setup combines the simplicity and large optical depth of a room temperature atomic vapour with the slow atomic motion that is characteristic of another technique for trapping alkali atoms, namely so-called magneto-optical trapping (MOT) [62].

The MOT technique combines laser cooling and trapping with magnetic fields. Atoms trapped with a MOT are cooled down to $\mu$K temperatures; therefore, the collisional and motional decoherence becomes negligible in comparison with a typical operational time scale of a quantum memory. The shortcoming of a cold atomic ensemble is its rather low optical depth. The very principle on which the MOT operates, i.e., the magnetic field, also introduces another difficulty. The magnetic field causes decoherence of the ground states, usually realised as magnetic Zeeman sublevels of a ground state. This problem can be overcome by switching off the MOT trap and only then performing operations on the quantum memory [36]. However, the lack of magnetic trapping allows the atoms to slowly diffuse and therefore limits the lifetime of the quantum memory. Nevertheless, by means of the MOT trap atomic vapours can be prepared in the form of a "frozen" gas with a lifetime much longer than in the case of room temperature vapours.

The last widely used method for confining large numbers of atoms to a small sample is called Bose-Einstein condensation. A Bose-Einstein condensate (BEC) has extremely large optical depth. However, the preparation of a BEC is an extremely challenging experiment.

There are a number of effects that influence the overall efficiency of quantum memories based on atomic ensembles. In spite of many efforts, the efficiency of quantum memories reaches at best 70% [36]. The main source of low fidelity is a low optical depth $d$. Only an optically thick medium, that is, a highly dense and/or large medium, can effectively interact with the light fields. The broadening of the optical transitions, both homogeneous and inhomogeneous, is another source of decoherence for quantum memories. The homogeneous broadening is mainly due to spontaneous emission and results in a storage inefficiency that decreases with increasing optical depth; this holds in particular for the storage of light pulses based on techniques such as electromagnetically induced transparency or the Raman interaction [36]. For atomic ensembles at room temperature, the inhomogeneous broadening is due to the thermal motion and the associated Doppler broadening of the atomic lines. The Doppler broadening shifts the energy-level structure of the atoms in a completely incoherent fashion and results in a further storage inefficiency that likewise decreases with the optical depth [36]. For a sufficiently dense and/or large medium, inhomogeneous broadening is less dominant than homogeneous broadening. Apart from the Doppler broadening, the thermal or atomic motion is responsible for atomic collisions, which are yet another factor that limits the fidelity of a quantum memory.

Part II Quantum Metrology, the Deutsch-Jozsa Algorithm and Continuous Variables

Chapter: General Optimality of the Heisenberg Limit for Quantum Metrology

7 Introduction

Parameter estimation is a fundamental pillar of science and technology, and improved measurement techniques for parameter estimation have often led to scientific breakthroughs and technological advancement. Caves [63] showed that quantum mechanical systems can in principle produce greater sensitivity over classical methods, and many quantum parameter estimation protocols have been proposed since [1]. The field of quantum metrology started with the work of Helstrom [64, 65], who derived the minimum value for the mean square error in a parameter in terms of the density matrix of the quantum system and a measurement procedure. This was a generalisation of a known result in classical parameter estimation, called the Cramér-Rao bound. Braunstein and Caves [44] showed how this bound can be formulated for the most general state preparation and measurement procedures. While it is generally a hard problem to show that the Cramér-Rao bound can be attained in a given setup, at least it gives an upper limit to the precision of quantum parameter estimation.

The quantum Cramér-Rao bound is typically formulated in terms of the Fisher information, an abstract quantity that measures the maximum information about a parameter that can be extracted from a given measurement procedure. One of the central questions in quantum metrology is how the Fisher information scales with the physical resources used in the measurement procedure. We usually consider two scaling regimes: First, in the standard quantum limit (sql) [66] or shot-noise limit the Fisher information is constant, and the error scales with the inverse square root of the number of times we make a measurement. Second, in the Heisenberg limit [67] the error is bounded by the inverse of the physical resources. Typically, these are expressed in terms of the size of the probe system, e.g., the (average) photon number. However, it has been clearly demonstrated that this form of the limit is not universally valid. For example, Beltrán and Luis [68] showed that the use of classical optical nonlinearities can lead to an error that scales with the average photon number $n$ as $n^{-3/2}$. Boixo et al. [69] devised a parameter estimation procedure in which the error scales as $n^{-k}$ with $k > 1$, and Roy and Braunstein [70] construct a procedure that achieves an error that scales exponentially with the number of probe systems. The central question is then: What is the real fundamental Heisenberg limit for quantum metrology? We could redefine this limit for each new measurement procedure, but in practice such a bound will never be tight and therefore of limited use.

In this chapter, we give a natural definition of the relevant physical resources for quantum metrology based on the general description of a parameter estimation procedure, and we prove an asymptotic bound on the mean squared error based on this resource count. We will show that the resource count is proportional to the size of the probe system only if the interaction between the object and the probe is non-entangling over the systems constituting the probe. In Sec. 8, we study the query complexity of quantum metrology networks, which leads to a resource count given by the expectation value of the generator of translations in the parameter $\theta$. In Sec. 9, we prove that the mean error in $\theta$ is asymptotically bounded by the inverse of this resource count. We argue that this is the fundamental Heisenberg limit for quantum metrology. Furthermore, in Sec. 10, we clarify the origin of the term "Heisenberg limit". Finally, we illustrate how this general principle can resolve paradoxical situations in which the Heisenberg limit seems to be surpassed.

Figure 5: a) General parameter estimation procedure involving state preparation, evolution $U(\theta)$ and generalised measurement with outcomes $x$, which produces a probability distribution $p(x|\theta)$. In terms of quantum networks, the evolution can be written as a number of queries of the parameter $\theta$. b) Example of the usual situation, where each system performs a single query and the number of queries equals the number of systems (the grey box represents a single query); c) the number of queries does not always equal the number of systems: when any two systems can jointly perform a single query, the number of queries scales quadratically with the number of systems; d) when all possible subsets of systems perform a single query, the number of queries scales exponentially with the number of systems.

8 Parameter estimation and resources

The most general parameter estimation procedure is shown in Fig. 5a). Consider a probe system prepared in an initial quantum state $|\psi\rangle$ that is evolved to a state $|\psi(\theta)\rangle$ by $U(\theta)$. This is a unitary evolution when we include the relevant environment into our description, and it includes feed-forward procedures. The Hermitian operator $H$ is the generator of translations in $\theta$, the parameter we wish to estimate. The system is subjected to a generalised measurement, described by a Positive Operator Valued Measure (povm) that consists of elements $\hat{E}(x)$, where $x$ denotes the measurement outcome. These can be discrete or continuous (or a mixture of both). The probability distribution that describes the measurement data is given by the Born rule $p(x|\theta) = \langle\psi| U^\dagger(\theta)\, \hat{E}(x)\, U(\theta) |\psi\rangle$, and the maximum amount of information about $\theta$ that can be extracted from this measurement is given by the Fisher information

$F(\theta) = \int dx\; p(x|\theta) \left( \frac{\partial \ln p(x|\theta)}{\partial \theta} \right)^{2}.$
This leads to the quantum Cramér-Rao bound [64, 44]

\[
\langle (\delta\varphi)^2 \rangle \geq \frac{1}{T\, F(\varphi)}\, ,
\]

where $\langle (\delta\varphi)^2 \rangle$ is the mean square error in the parameter $\varphi$, and $T$ is the number of times the procedure is repeated. The sql is obtained when the Fisher information is a constant with respect to the resource count $N$, and the Heisenberg limit is obtained in a single-shot experiment ($T = 1$) when the Fisher information scales quadratically with the resource count. The sql and the Heisenberg limit therefore relate to two fundamentally different quantities, $T$ and $N$, respectively. We need to reconcile the meaning of these two limits if we want to compare them in a meaningful way.
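To make the quadratic scaling of the Fisher information concrete, consider a minimal numerical sketch (illustrative, not part of the original analysis): a two-outcome distribution $p(0|\varphi) = \cos^2(N\varphi/2)$, as produced for instance by a NOON-state interferometer, yields $F(\varphi) = N^2$, the scaling associated with the Heisenberg limit.

```python
import numpy as np

def fisher_information(phi, N, dphi=1e-6):
    """Fisher information of the two-outcome distribution
    p(0|phi) = cos^2(N*phi/2), p(1|phi) = sin^2(N*phi/2),
    computed by numerical differentiation."""
    def p(x):
        p0 = np.cos(N * x / 2) ** 2
        return np.array([p0, 1 - p0])
    dp = (p(phi + dphi) - p(phi - dphi)) / (2 * dphi)
    return np.sum(dp ** 2 / p(phi))

phi = 0.3
for N in (1, 2, 4, 8):
    F = fisher_information(phi, N)
    # quantum Cramer-Rao bound for T repetitions: (delta phi)^2 >= 1/(T*F)
    print(N, F)   # F ≈ N^2: Heisenberg scaling
```

For this distribution the quadratic scaling holds at every $\varphi$ where both outcome probabilities are non-zero, so a single-shot experiment already saturates the $1/N$ error scaling.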

To solve this problem, we can define an unambiguous resource count for parameter estimation by recognising that a quantum parameter estimation protocol can be written as a quantum network acting on a set of quantum systems, with repeated “black-box” couplings of the network to the system we wish to probe for the parameter $\varphi$ [71]. Quantum networks arise naturally in the circuit model of quantum computation. A quantum network consists of wires that connect successive quantum gates. The wires represent the movement of quantum systems through space or time, and the gates perform simple computational tasks on the information carried by these quantum systems [2]. In general, a quantum network involves many quantum systems and many quantum gates. Traditionally, we represent a quantum gate as a function with a fixed number of input parameters and a fixed number of output parameters [2]. In the following analysis, we employ a special type of quantum gate called a black-box, or quantum oracle. A black-box is a unitary operator, defined by its action on quantum systems, whose internal workings are usually unknown. Like any other quantum gate, a black-box implements a function that can be univariate or multivariate. When the function is multivariate, a query to the black-box must take the form of multiple input parameters. Likewise, when the operator $h$ that describes the fundamental “atomic” interaction between the queried system and the probe is a two-body interaction on the probe, a query must consist of precisely two input bodies. The scaling of the error in $\varphi$ is then determined by the query complexity of the network. The number of queries is not always identical to the number of physical systems in the network.

In Fig. 5b-d) we consider three examples. The quantum network with univariate black-boxes in b) was analysed by Giovannetti, Lloyd, and Maccone [71]. Suppose that each grey box in Fig. 5 is a unitary gate $\exp(-i\varphi h_k)$, where $k$ denotes the system, and $h_k$ is a positive Hermitian operator. It is convenient to define the generator of the joint queries as

\[
H = \sum_{k=1}^{N} h_k\, ,
\]

because all $h_k$ commute with each other. The number of queries is then equal to the number of terms in $H$, or $Q = N$. In Fig. 5c) the black-box is bivariate. This is the type of Hamiltonian considered by Boixo, Flammia, Caves, and Geremia [69], and takes the form

\[
H = \sum_{j<k}^{N} h_j \otimes h_k\, .
\]

A physical query to a black-box characterised by $h_j \otimes h_k$ must consist of two systems, labeled $j$ and $k$. Since each pair interaction is a single query, the total number of queries is $Q = N(N-1)/2 = O(N^2)$. Finally, in Fig. 5d) we depict the network corresponding to the protocol by Roy and Braunstein [70]. It is easy to see that the number of terms in the corresponding generator is given by $2^N - 1$, and the number of queries is therefore $Q = O(2^N)$.
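The three query counts above are simple combinatorial statements: the generator in b) has one term per system, in c) one term per pair, and in d) one term per non-empty subset. A short sketch (illustrative only) that tabulates them:

```python
from itertools import combinations

def query_count(N, order):
    """Number of black-box terms in a generator built from all
    subsets of `order` systems out of N (order=1: Fig. 5b,
    order=2: Fig. 5c); order='all' counts every non-empty subset
    (Fig. 5d)."""
    if order == 'all':
        return 2 ** N - 1   # all non-empty subsets of N systems
    return len(list(combinations(range(N), order)))

for N in (2, 3, 4, 5):
    print(N, query_count(N, 1), query_count(N, 2), query_count(N, 'all'))
# univariate: Q = N; bivariate: Q = N(N-1)/2; all subsets: Q = 2^N - 1
```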

A similar argument can be made to find the correct number of queries for all types of networks. The key principle is that a physical query consists of the probe systems that together undergo a black-box operation, which can potentially entangle them. The entangling power of the black-box operation over multiple input systems accounts for the super-linear scaling of $Q$ with $N$. Only when the black-box does not have any entangling power across its input are we guaranteed to have $Q = N$. This is in agreement with Refs. [69] and [70], where the precision scales super-linearly in $N$, but is always linear in the number of queries $Q$, as defined here. Since we have a systematic method for increasing $Q$ (and $N$) given the atomic interaction $h$, this uniquely defines an asymptotic query complexity of the network. Since both limits can be expressed in terms of the number of queries, this allows us to meaningfully compare the sql with the Heisenberg limit.

Given that in Eq. (47) the error is bounded by the Fisher information, we have to find a general procedure that bounds $F(\varphi)$, based on the physical description of the estimation protocol in Fig. 5a). Previously, we showed that $Q$ is the number of black-box terms in the generator $H$, and a straightforward choice for the resource count is therefore the expectation value $\langle H \rangle$. An important subtlety occurs when $H$ corresponds to a proper Hamiltonian. The origin of the energy scale has no physical meaning, and the actual value of $\langle H \rangle$ can be changed arbitrarily. Hence, we must fix the scale such that the ground state has zero energy (equivalently, we may choose $H \to H - \lambda_{\min}\mathbb{1}$, where $\lambda_{\min}$ is the smallest eigenvalue of $H$, and $\mathbb{1}$ the identity operator). In most cases, this is an intuitive choice. For example, it is natural to associate zero energy with the vacuum state, and add the corresponding amount of energy for each added photon. Technically, this corresponds to the normal ordering of the Hamiltonian of the radiation field in order to remove the infinite vacuum energy. Slightly less intuitive is that the average energy of $N$ spins in a Greenberger-Horne-Zeilinger state is no longer taken to be zero, but rather $N/2$ times the energy splitting between $|0\rangle$ and $|1\rangle$.
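The GHZ example is easy to verify directly. In this small numerical sketch (illustrative, not from the thesis), the zero of energy is fixed at the ground state, so the generator of $N$ two-level systems with splitting $\Delta E$ is $H = \Delta E \sum_k |1\rangle\langle 1|_k$, and the GHZ state averages the energies of $|0\cdots 0\rangle$ and $|1\cdots 1\rangle$:

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

N, dE = 3, 1.0                       # three spins, unit energy splitting
I = np.eye(2)
n = np.array([[0., 0.], [0., 1.]])   # |1><1|: one excitation costs dE
# Generator with the ground-state energy fixed to zero: H = dE * sum_k n_k
H = sum(kron_all([n if j == k else I for j in range(N)]) * dE
        for k in range(N))

# GHZ state (|00...0> + |11...1>)/sqrt(2)
ghz = np.zeros(2 ** N)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

print(ghz @ H @ ghz)   # N*dE/2 = 1.5: halfway between 0 and N*dE
```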

While the expectation value of $H$ is easy to calculate, it is not the only way to obtain a bound on $F(\varphi)$ from $Q$. Other seemingly natural choices are the variance $(\Delta H)^2$ and the semi-norm of $H$. For example, if we write $H = a\mathbb{1} + A$, the variance is

\[
(\Delta H)^2 = (\Delta A)^2 \leq \langle A^2 \rangle \leq \lambda_{\max}(A)\, \langle A \rangle\, ,
\]

for some positive number $a$ and positive operator $A$ with largest eigenvalue $\lambda_{\max}(A)$. This gives $(\Delta H)^2 = O(\langle H \rangle^2)$, where e.g., in Ref. [69] $\lambda_{\max}(A) = O(\langle A \rangle)$. Similarly, $(\Delta H)^2 \leq \langle H^2 \rangle$, since all expectation values are positive and finite. In other words, in terms of the scaling behaviour with $N$, we can use either the variance or the expectation value. However, there are important classes of quantum systems for which the variance of the energy diverges, such as systems with a Breit-Wigner (or Lorentzian) spectrum [72, 73]. Furthermore, for the NOON states, written as $(|N,0\rangle + |0,N\rangle)/\sqrt{2}$, the variance of the energy is zero [1]. The variance of a Hermitian operator is upper bounded by the operator semi-norm

\[
(\Delta H)^2 \leq \tfrac{1}{4} \|H\|_{\mathrm{sn}}^2\, ,
\]

where the operator semi-norm is defined as $\|H\|_{\mathrm{sn}} = \lambda_{\max} - \lambda_{\min}$, with $\lambda_{\max}$ and $\lambda_{\min}$ the maximal and minimal eigenvalues of $H$, respectively. Again, the semi-norm does not exist for a large class of states, such as optical Gaussian states. In these cases the resource count, and by implication the scaling of the error, would be ill-defined.
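Both claims can be checked in a truncated two-mode Fock space. The following sketch (illustrative; it assumes the total photon number as the energy generator) verifies that the NOON state has $\langle H \rangle = N$ with zero variance, and checks the semi-norm (Popoviciu) bound $(\Delta H)^2 \leq \|H\|_{\mathrm{sn}}^2/4$ on a random state:

```python
import numpy as np

N = 4                                    # photon number of the NOON state
dim = N + 1                              # Fock-space cutoff per mode
num = np.diag(np.arange(dim))            # single-mode number operator
I = np.eye(dim)
# Total photon number of the two modes: H = n_a + n_b
H = np.kron(num, I) + np.kron(I, num)

def fock2(na, nb):
    v = np.zeros(dim * dim)
    v[na * dim + nb] = 1.0
    return v

noon = (fock2(N, 0) + fock2(0, N)) / np.sqrt(2)
mean = noon @ H @ noon
var = noon @ H @ H @ noon - mean ** 2
print(mean, var)                         # <H> = N, (Delta H)^2 = 0

# Popoviciu's inequality: (Delta H)^2 <= ||H||_sn^2 / 4,
# with the semi-norm ||H||_sn = lambda_max - lambda_min
rng = np.random.default_rng(0)
psi = rng.normal(size=dim * dim)
psi /= np.linalg.norm(psi)
v = psi @ H @ H @ psi - (psi @ H @ psi) ** 2
seminorm = H.max() - H.min()             # H is diagonal: eigenvalues on the diagonal
print(v <= seminorm ** 2 / 4)            # True
```

Both branches of the NOON state contain exactly $N$ photons in total, which is why the variance of the total energy vanishes even though the state is maximally useful for phase estimation.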

Also, from a physical perspective, the higher-order moments do not describe “amounts” in the same way the first moment does; they refer instead to the shape of the distribution. This is a further argument that $\langle H \rangle$ is the natural choice for the resource count. Sometimes it is unclear how the query complexity is defined, for example when the estimation procedure does not involve repeated applications of the gates $\exp(-i\varphi h_k)$, or when an indeterminate number of identical particles, such as photons, is involved. Nevertheless, the generator $H$ is always well-defined in any estimation procedure, and we can use its expectation value to define the relevant resource count.

The resource count in terms of $\langle H \rangle$ is completely general for all possible quantum networks. The most general quantum interaction acting on the probe system is represented by the unitary transformation

\[
U(\varphi) = V_Q\, e^{-i\varphi h}\, V_{Q-1}\, e^{-i\varphi h} \cdots V_1\, e^{-i\varphi h}\, V_0\, .
\]

This general interaction consists of $Q$ applications of $\exp(-i\varphi h)$, interspersed with arbitrary unitary gates $V_k$. The arbitrary unitary gates, together with ancillary systems, may be used to introduce adaptive (feed-forward) strategies into the estimation procedure. For a general interaction $U(\varphi)$, we can use an argument by Giovannetti et al. [71] to show that the generator of $U(\varphi)$ is given by

\[
H = \sum_{k=1}^{Q} R_k^\dagger\, h\, R_k\, , \qquad \text{with} \quad R_k = e^{-i\varphi h}\, V_{k-1} \cdots V_1\, e^{-i\varphi h}\, V_0\, ,
\]

such that $\partial_\varphi U(\varphi) = -i\, U(\varphi)\, H$. Since all the $R_k^\dagger h R_k$ have the same spectrum as $h$ (the spectrum of the generator of a query is unchanged by the $V_k$'s), the bound on the expectation value $\langle H \rangle$ is unaffected by the intermediate arbitrary unitary gates, and the scaling is therefore still determined by $Q$.
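The spectrum argument can be verified numerically. In this sketch (illustrative, with a hypothetical random network), the effective generator is assembled from unitary conjugations of the atomic generator $h$, so each term keeps the spectrum of $h$ and the eigenvalues of the sum are confined between $Q\lambda_{\min}(h)$ and $Q\lambda_{\max}(h)$, regardless of the interspersed gates:

```python
import numpy as np

rng = np.random.default_rng(1)
d, Q, phi = 4, 3, 0.7

def rand_unitary(d):
    """Haar-ish random unitary from the QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

lam = np.sort(rng.uniform(0, 1, d))      # spectrum of the atomic generator h
h = np.diag(lam)
D = np.diag(np.exp(-1j * phi * lam))     # one query: exp(-i*phi*h)
V = [rand_unitary(d) for _ in range(Q + 1)]

# U(phi) = V_Q D V_{Q-1} D ... V_1 D V_0, with effective generator
# H_eff = sum_k R_k^dagger h R_k, where R_k is the partial network
# up to and including the k-th query
U = V[0]
H_eff = np.zeros((d, d), complex)
for k in range(1, Q + 1):
    U = D @ U                            # current partial product is R_k
    H_eff += U.conj().T @ h @ U
    U = V[k] @ U

print(np.allclose(U.conj().T @ U, np.eye(d)))   # True: full network is unitary
# each conjugated term has the spectrum of h, so the eigenvalues of
# H_eff lie between Q*min(lam) and Q*max(lam)
evals = np.linalg.eigvalsh(H_eff)
print(evals.min() >= Q * lam.min() - 1e-9, evals.max() <= Q * lam.max() + 1e-9)
```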

9 Optimality proof of the Heisenberg limit

After establishing the appropriate resource count, we are finally in a position to prove the optimality of the Heisenberg limit for quantum parameter estimation in its most general form. The Fisher information can be related to a statistical distance on the probability simplex spanned by the distributions $p(x|\varphi)$. Consider two probability distributions $p(x)$ and $p(x) + dp(x)$. The infinitesimal statistical distance $ds$ between these distributions is given by [74, 75]

\[
ds^2 = \int dx\, \frac{[dp(x)]^2}{p(x)}\, .
\]
Dividing both sides by $d\varphi^2$ we obtain