Universal quantum simulation with prethreshold superconducting qubits: Single-excitation subspace method

Michael R. Geller mgeller@uga.edu Department of Physics and Astronomy, University of Georgia, Athens, Georgia 30602, USA    John M. Martinis Department of Physics and Google Inc., University of California, Santa Barbara, California 93106, USA    Andrew T. Sornborger Department of Mathematics, University of California, Davis, California 95616, USA    Phillip C. Stancil Department of Physics and Astronomy, University of Georgia, Athens, Georgia 30602, USA Center for Simulational Physics, University of Georgia, Athens, Georgia 30602, USA    Emily J. Pritchett HRL Laboratories LLC, 3011 Malibu Canyon Road, Malibu, California 90265, USA    Hao You Department of Physics and Astronomy, University of Georgia, Athens, Georgia 30602, USA Center for Simulational Physics, University of Georgia, Athens, Georgia 30602, USA Department of Chemistry and Physics, Georgia Regents University, Augusta, Georgia 30912, USA    Andrei Galiautdinov Department of Physics and Astronomy, University of Georgia, Athens, Georgia 30602, USA
Abstract

Current quantum computing architectures lack the size and fidelity required for universal fault-tolerant operation, limiting the practical implementation of key quantum algorithms to only the smallest problem sizes. In this work we propose an alternative method for general-purpose quantum computation that is ideally suited for such “prethreshold” superconducting hardware. Computations are performed in the N-dimensional single-excitation subspace (SES) of a system of N tunably coupled superconducting qubits. The approach is not scalable, but allows many operations in the unitary group SU(N) to be implemented by a single application of the Hamiltonian, bypassing the need to decompose a desired unitary into elementary gates. This feature makes large, nontrivial quantum computations possible within the available coherence time. We show how to use a programmable SES chip to perform fast amplitude amplification and phase estimation, two versatile quantum subalgorithms. We also show that an SES processor is well suited for Hamiltonian simulation, specifically simulation of the Schrödinger equation with a real but otherwise arbitrary Hamiltonian matrix. We discuss the utility and practicality of such a universal quantum simulator, and propose its application to the study of realistic atomic and molecular collisions.

pacs:
03.67.Lx, 85.25.Cp

Physical Review A 91, 062309 (2015)

I Introduction and Motivation

I.1 The promise of quantum computation

A universal quantum computer, if one could be built, would transform information technology by providing vastly increased computational power for certain specialized tasks, such as quantum simulation Feynman (1982); Lloyd (1996); Zalka (1998); Aspuru-Guzik et al. (2005); Georgescu et al. (2014) and prime factorization Shor (1997); Ekert and Jozsa (1996). Superconducting electrical circuits operating in the quantum regime Schoelkopf and Girvin (2008); Clarke and Wilhelm (2008) have emerged as an extremely promising platform for realizing a large-scale, practical machine. Yet the quantum algorithms actually demonstrated to date—with any architecture—have been limited to only tiny, few-qubit instances Chuang et al. (1998); Weinstein et al. (2001); Vandersypen et al. (2001); Gulde et al. (2003); Chiaverini et al. (2004); Peng et al. (2005); Chiaverini et al. (2005); Negrevergne et al. (2005); Brickman et al. (2005); Lu et al. (2007); Lanyon et al. (2007); DiCarlo et al. (2009); Lanyon et al. (2010); Barreiro et al. (2011); Schindler et al. (2011); Lu et al. (2011); Lanyon et al. (2011); Mariantoni et al. (2011); Reed et al. (2012); Lucero et al. (2012); Martin-Lopez et al. (2012); Feng et al. (2013); Barends et al. (2014); Chow et al. (2014). In superconducting circuit or circuit QED implementations, which benefit from the inherent scalability of modern solid-state electronics, this barrier in qubit number does not reflect any limitation of the underlying device fabrication or infrastructure requirements, but rather that larger problem sizes would also require longer computations and hence additional coherence. Quantum algorithms typically have (uncompiled) circuits that are spatially narrow but temporally very deep. In this work we propose an alternative approach to superconducting quantum information processing that allows one to circumvent this restriction and realize much larger computations within the available coherence time.

A general-purpose quantum computer that is useful for practical applications must, of course, be error corrected and scalable. The standard model of an error-corrected quantum computer is the gate-based fault-tolerant universal quantum computer, where “errors” acting on all device components and at any step during the computation can be corrected as long as they are weak enough—below an error threshold Aharonov and Ben-Or (1997); Kitaev (1997); Knill et al. (1998)—and not highly correlated in space or time Terhal and Burkard (2005); Aliferis et al. (2006); Aharonov et al. (2006); Novais et al. (2008). Scalability means that the number of physical qubits required to perform a particular computation—the physical volume of the quantum computer—scales as a polynomial function (preferably linear) of the problem size. It also means that it is actually possible, in practice, to add more qubits.

A realistic picture of an error-corrected superconducting quantum computer based on the surface code Bravyi and Kitaev (); Raussendorf and Harrington (2007) is beginning to emerge Fowler et al. (2012). The surface code is the most practical, best performing fault-tolerant approach known to date, and is especially amenable to implementation with superconducting circuit technology. However, the resources required for a practical machine are considerable: Fowler et al. [Fowler et al., 2012] estimated that factoring a 2000-bit number would require about physical qubits, using Beauregard’s modular exponentiation Beauregard (2003) and a surface code quantum computer operating at fidelity. If there were no decoherence or noise, and no errors of any kind to correct, then it would be possible to factor an n-bit number with the Beauregard algorithm using only 2n+3 ideal qubits, or 4003 ideal qubits in the case considered. Thus, error correction imposes a physical qubit overhead of . Note that in quantifying the error-correction overhead here we distinguish between ideal (error-free) qubits—the fictional entities usually appearing in quantum algorithms—and logical qubits, which must also include the many additional ancillas necessary for fault-tolerant gate implementation. Similarly, You et al. [You et al., 2013] estimated that it would take about physical qubits to calculate the ground state energy of a 100-spin transverse-field Ising model to accuracy using the same -fidelity surface code quantum computer. This well known statistical mechanics model maps especially well to a quantum computer, and for spins on a line would require only ideal qubits for a calculation of the ground state energy (using iterative phase estimation). So the physical/ideal ratio in this quantum simulation example is , the same as for factoring. Therefore we expect that, in practice, surface code error correction will impose an overhead of

(1)

where we have allowed for future optimization and other improvements. Crudely, a factor of about 10 in the overhead estimate comes from replacing ideal qubits with enough logical qubits to both encode those ideal qubits and to distill the auxiliary states needed to perform fault-tolerant operations on them, and a factor of about comes from replacing each logical qubit with enough physical qubits to enable a sufficiently long computation.

I.2 Prethreshold quantum computation

The complexity of building even a small fault-tolerant universal quantum computer suggests that this objective may take some time to achieve. In the meantime, experimental quantum information processing is limited to either the very small problem sizes discussed above, or to nonuniversal approaches such as analog quantum simulation Georgescu et al. (2014); Lewenstein et al. (2007); Bloch et al. (2008); Houck et al. (2012), quantum annealing Johnson et al. (2011); Boixo et al. (2014), or other special-purpose methods Sornborger (2012); Aaronson and Arkhipov (2013). In this work we call any quantum computation performed without an error-corrected universal quantum computer prethreshold, a reference to the threshold theorems of fault-tolerant quantum computation, because exceeding a fidelity threshold is a necessary condition for large-scale error correction.

Table 1 compares three broad approaches to quantum computation with prethreshold hardware: Small system refers to gate-based computations with a few qubits, which have been used to test fundamental concepts of quantum information processing, demonstrate hardware functionality, and assess qubit and gate performance. The SES method is also general purpose, and in addition should enable quantum speedup (this is discussed below). However, neither approach is scalable. Analog quantum simulation and other scalable, special-purpose approaches trade universality for a faster route to speedup.

The SES quantum computer described in this work is universal in the sense that it can implement any gate-based algorithm or quantum circuit. As a simulator it can directly emulate any (real) Hamiltonian, including time-dependent Hamiltonians. When we refer to a simulated Hamiltonian in this context we mean a Hamiltonian written in some basis—a real, symmetric matrix with no special structure. We assume that the Hamiltonian matrix to be simulated has been specified externally, as is typically the case when using a classical computer. The SES processor solves the Schrödinger equation defined by this Hamiltonian.

                      small system    SES method    analog QS / special purpose
scalable                   no              no                  yes
universal                  yes             yes                 no
speedup                    no              yes                 yes
arbitrary accuracy
arbitrary runtime
Table 1: Three approaches to prethreshold quantum computation and simulation. The left column lists the attributes achievable by an error-corrected universal quantum computer.

The superconducting SES method introduced here has features in common with the single-photon protocols of Reck et al. Reck et al. (1994) and Cerf et al. Cerf et al. (1998), as both use only one excitation and therefore do not utilize genuine entanglement. The optical realization uses a recursive algorithm to first decompose a given unitary of interest into a sequence of SU(2) beam-splitter transformations. This decomposition determines an arrangement of beam splitters, phase shifters, and mirrors that will unitarily transform input ports (optical modes) to output ports according to the desired unitary. However, the superconducting realization is better suited for quantum simulation than the optical approach because the Hamiltonian is directly programmed. In particular, to optically simulate Schrödinger evolution under a given Hamiltonian matrix H, one would have to first compute the evolution operator e^{-iHt} on a classical computer, and then decompose it into beam-splitter transformations, but avoiding the classical computation of e^{-iHt} (or the time-ordered exponential if H is time dependent) is the motivation for the quantum computation in the first place opt (). Neither the superconducting SES nor the single-photon optical approach is scalable—both require exponential physical resources—and neither should be considered a viable alternative to the standard paradigm of error-corrected universal quantum computation. But they are both suitable prethreshold methods.

II Quantum Computation in the SES

II.1 Hardware model: The programmable SES chip

Consider the following model of an array of N coupled superconducting qubits,

(2)

written in the basis of uncoupled-qubit eigenstates. Here and

(3)

The ε_i are qubit transition energies and the g_{ii'} are qubit-qubit interaction strengths; both are assumed to be tunable. (Factors of ℏ are suppressed throughout this paper.) The coupling matrix g is a real, symmetric matrix with vanishing diagonal elements. We also require microwave pulse control of at least one qubit, and simultaneous readout (projective measurement in the diagonal basis) of every qubit. The model (2) describes a fully connected network or complete graph of N qubits, which we refer to as an SES processor. This should be contrasted with local quantum computer models that have coupling only between nearby qubits (nearest neighbors, for example). The SES method can be applied with a wide variety of qubit-qubit interaction types (see Appendix A), but without loss of generality we restrict ourselves here to the simple coupling of (2). Alternatively, tunably coupled resonators (with tunable frequencies) can be used instead of qubits res (). Although we assume an architecture based on superconducting circuits (or circuit QED), the SES method might apply to other future architectures as well.

The quantum computer model (2) might be considered unscalable because of its N(N−1)/2 tunable coupling circuits and wires, a position that we also adopt here. In gate-based universal quantum computation, the fully connected and local quantum computer models are equivalent in the sense that any quantum circuit implemented by a fully connected quantum computer can be implemented by a local quantum computer after adding chains of SWAP gates, which only introduce polynomial overhead. However, this equivalence is restricted to the standard gate-based approach and does not apply here.

Superconducting qubits have been reviewed in Refs. Schoelkopf and Girvin (2008) and Clarke and Wilhelm (2008). Although the model (2) can be realized with several qubit designs, the transmon qubit Koch et al. (2007) currently has the best performance Paik et al. (2011); Rigetti et al. (2012); Barends et al. (2013); Chen et al. (2014). For concreteness we assume a qubit frequency in the range of to and a coupling strength in the range to . An N-qubit SES processor also requires N(N−1)/2 coupler circuits and the associated wires or waveguides. A variety of tunable couplers can be used for this purpose. Here we consider a modification of the tunable inductive coupler developed by Chen et al. Chen et al. (2014) for superconducting Xmon qubits; this design has been demonstrated to implement tunability without compromising high coherence. Our modification replaces the direct electrical connection of each qubit to a coupler circuit wire with an inductive coupling to the wire, which scales better. An SES chip layout that avoids excessive crossovers is illustrated in Fig. 1. The tunable interaction strength for this coupler design is derived in Appendix B.

Figure 1: (Color online) Circuit layout for SES processor, with the crosses representing Josephson junctions. Each horizontal circuit is an Xmon qubit with capacitance , tunable junction inductance , and additional coils (each with self-inductance and mutual inductance ) for coupling to other qubits. Dotted lines indicate dc and microwave control lines for each qubit, as well as readout circuits. Each coupler wire contains a Josephson junction with inductance tuned by a magnetic flux . Control lines for SES matrix elements are also indicated. This circuit is discussed further in Appendix B.

II.2 Single-excitation subspace

The idea we explore in this paper is to perform a quantum computation in the N-dimensional single-excitation subspace of the full 2^N-dimensional Hilbert space. This is the subspace spanned by the computational basis states

$|n\rangle \equiv |0_1 0_2 \cdots 0_{n-1}\, 1_n\, 0_{n+1} \cdots 0_N\rangle,$   (4)

with n = 1, 2, …, N. We call the set {|n⟩} the SES basis states. It is simple to prepare the quantum computer in an SES basis state from the ground state |0_1 0_2 ⋯ 0_N⟩, and it will remain there with high probability if the following conditions are satisfied:

  1. The coupling strengths |g_{ii'}| are much smaller than the qubit transition energies ε_i, which is usually well satisfied in superconducting circuits.

  2. Single-qubit operations such as rotations about the x or y axes are not used during the computation. However, z rotations are permitted and are very useful (these can be implemented by detuning the qubit frequency, which does not require microwaves). Rotations about x or y can be used to prepare SES basis states from the system ground state |0_1 0_2 ⋯ 0_N⟩.

  3. The quantum computation time is significantly shorter than the single-qubit population relaxation time T_1.

An SES pure state is of the form

$|\psi\rangle = \sum_{n=1}^{N} a_n\, |n\rangle.$   (5)

For example, the states (5) include the maximally entangled W-type state

$\frac{1}{\sqrt{N}} \sum_{n=1}^{N} |n\rangle = \frac{|10\cdots0\rangle + |010\cdots0\rangle + \cdots + |00\cdots01\rangle}{\sqrt{N}}.$   (6)

Although the state (6) is entangled, and could be used to violate Bell’s inequality, the entanglement is somewhat artificial Lloyd (1999) as there is only one “particle”.

II.3 SES Hamiltonian

The advantage of working in the SES can be understood from the following expression for the SES matrix elements of model (2), namely

$\langle n | \mathcal{H} | n' \rangle = \epsilon_n\, \delta_{nn'} + g_{nn'}.$   (7)

Because the diagonal and off-diagonal elements are directly and independently controlled by the qubit frequencies and coupling strengths, respectively, we have a high degree of programmability of the SES component of the quantum computer’s Hamiltonian. This property allows many N-dimensional unitary operations to be carried out in a single step, bypassing the need to decompose them into elementary gates, and also enables the direct quantum simulation of real but otherwise arbitrary time-dependent Hamiltonians. However, we have some restrictions:

  1. The SES Hamiltonian is always real, whereas the most general Hamiltonian matrix is complex Hermitian. The experimentally available control parameters, consisting of N qubit frequencies and N(N−1)/2 coupling strengths, are sufficient to control the N(N+1)/2 independent parameters of an N × N real symmetric matrix.

  2. There are experimental limitations on the range of values that the ε_i and g_{ii'} can take. We define g_max to be the magnitude of the largest coupling available in a particular experimental realization; a current realistic value is about .

We will leave the discussion of possible generalizations to complex SES Hamiltonians for future work. The limitations on the ranges of the ε_i and g_{ii'} do not, by themselves, limit the class of Hamiltonians that can be simulated, because a model Hamiltonian intended for simulation is first rescaled to conform to that of the SES chip (this is explained below).

It will be useful to refer to a “typical” SES Hamiltonian, which we assume to have the following properties: It is a real, symmetric N × N matrix with each element taking values in the range −g_max to +g_max. This form follows from (7) after removing an unimportant term proportional to the identity matrix,

(8)

where ω_ref is a convenient (possibly time-dependent) reference frequency. Then we assume that the qubit frequencies can be tuned to within ±g_max of ω_ref, and we assume that the couplers can be tuned between −g_max and +g_max. A possible choice for ω_ref is the mean qubit frequency.

Figure 2: (color online) Spectral bandwidth of random SES matrices h versus dimension N. Data (open circles) are averaged over 1000 random instances of h. The solid and dashed lines are the asymptotic form (11) and the approximation (13) discussed in the text.

It will also be useful to consider the statistical properties of an ensemble of typical SES matrices. We can always write a time-independent SES Hamiltonian in the standard form

$\mathcal{H}_{\rm SES} = \omega_{\rm ref}\, I + h,$   (9)

where h is an N × N real symmetric matrix with every element satisfying

$|h_{nn'}| \le g_{\rm max}.$   (10)

We define a real random matrix ensemble of dimension N as follows: The diagonal elements h_{nn} are independent random variables, each uniformly distributed between −g_max and +g_max. The elements h_{nn'} with n < n' are independent random variables, also uniformly distributed between −g_max and +g_max. The remaining elements are fixed by the symmetry requirement h_{n'n} = h_{nn'}. The standard deviation of each element is g_max/√3.

Figure 3: (color online) Spectral bandwidth of random SES matrices h for larger N. The solid and dashed lines are the asymptotic form (11) and the approximation (13) discussed in the text.

The first property we study is the mean spectral bandwidth of h, the difference between the largest and smallest eigenvalues. Let λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_N be the ordered eigenvalues of h. From the Wigner semicircle law Mehta (1991) we expect that, in the large-N limit,

$\overline{\lambda_N - \lambda_1} \rightarrow \frac{4}{\sqrt{3}}\, g_{\rm max} \sqrt{N},$   (11)

where the overbar denotes averaging over the ensemble defined above. The bandwidth of typical SES states in an N-qubit processor therefore scales at large N as

(12)

In Figs. 2 and 3 we plot the simulated bandwidth of h as a function of N, and compare the simulation data with the asymptotic form (11). From Fig. 2 we conclude that for modest SES matrix sizes,

(13)

The second property we study is the mean level spacing of h. Let

$\delta \equiv \frac{1}{N-1} \sum_{l=1}^{N-1} (\lambda_{l+1} - \lambda_l) = \frac{\lambda_N - \lambda_1}{N-1}$   (14)

be the mean spacing between adjacent eigenvalues. Averaging (14) over the ensemble defined above, we expect that in the large-N limit

$\bar{\delta} \rightarrow \frac{4}{\sqrt{3}}\, \frac{g_{\rm max}}{\sqrt{N}}.$   (15)

In Fig. 4 we plot the simulated average level spacing of h as a function of N, and compare the simulation data with the asymptotic form (15). From Fig. 4 we conclude that for modest SES matrix sizes,

(16)

The results (13) and (16) give two relevant energy scales present in a typical SES spectrum.
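The two energy scales above are easy to reproduce numerically. The short Python sketch below (an illustration, not the code used to produce Figs. 2–4) draws matrices from the random ensemble defined above and estimates the mean bandwidth and mean level spacing as functions of N; the sample size and the list of dimensions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
g_max = 1.0                                          # energies measured in units of g_max

def random_ses_matrix(N):
    """Real symmetric N x N matrix with every independent element uniform in [-g_max, g_max]."""
    m = rng.uniform(-g_max, g_max, size=(N, N))
    return np.triu(m) + np.triu(m, 1).T              # symmetrize; diagonal kept as drawn

for N in (10, 30, 100, 300):
    bw, sp = [], []
    for _ in range(200):
        lam = np.linalg.eigvalsh(random_ses_matrix(N))
        bw.append(lam[-1] - lam[0])                   # spectral bandwidth
        sp.append((lam[-1] - lam[0]) / (N - 1))       # mean adjacent level spacing, Eq. (14)
    print(f"N={N:4d}   bandwidth={np.mean(bw):7.2f}   level spacing={np.mean(sp):6.3f}")
```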

Any unitary quantum circuit or operation acting on n qubits can be mapped to and implemented on an SES chip with N = 2^n qubits (this exponential growth of N is what makes the SES method unscalable). We can say that the SES processor simulates the n-qubit system, with the advantage of being able to perform multi-qubit operations in a single step. This feature provides the computational advantage of the SES approach and is illustrated throughout this paper. It will be useful to specify an explicit one-to-one mapping between the bases of the associated Hilbert spaces, which we take to be

(17)

The left-hand sides are the standard computational basis states of the simulated n-qubit system (not to be confused with the computational basis states of the N-qubit SES processor). Similarly, any unitary quantum circuit or operation acting on m d-level qudits can be mapped to and implemented on an SES processor with d^m qubits; the natural mapping is a straightforward generalization of (17).
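One convenient explicit choice for the map (17) identifies the simulated n-qubit basis state with the SES basis state whose index (minus one) is the integer encoded by the bit string. Any fixed bijection would serve equally well, so the particular convention in the following sketch is an assumption rather than the paper's definition.

```python
def simulated_state_to_ses_index(bits):
    """Map an n-qubit computational basis state, e.g. (0, 1, 1) for |011>,
    to the index j = 1, ..., 2**n of an SES basis state."""
    j = 0
    for b in bits:
        j = (j << 1) | b
    return j + 1

def ses_index_to_simulated_state(j, n):
    """Inverse map: SES basis state index j = 1, ..., 2**n  ->  n-bit tuple."""
    x = j - 1
    return tuple((x >> k) & 1 for k in reversed(range(n)))

print(simulated_state_to_ses_index((0, 0)))    # |00> -> SES state |1>
print(simulated_state_to_ses_index((1, 1)))    # |11> -> SES state |4>
print(ses_index_to_simulated_state(3, 2))      # SES state |3> -> (1, 0)
```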

Figure 4: (color online) Level spacing of random SES matrices h versus dimension N. Data (open circles) are averaged over 1000 random instances of h. The solid and dashed lines are the asymptotic form (15) and the approximation (16) discussed in the text.

The operation of a real SES chip will be nonideal, and it is important to consider the effects of decoherence and other errors on its performance. This is discussed in detail in Sec. IV. The main conclusion is that although decoherence and unitary control errors do limit the accuracy of an SES computation or simulation, the effects of decoherence are much less restrictive here than with the standard gate-based approach (hence the ability to implement larger problem sizes). In practice, the complexity of fabricating a large programmable SES chip will likely limit its application before decoherence does.

III Applications of the SES Method

III.1 Uniform state preparation

Our first example will be to generate the entangled state (6) in a single step: Consider the real Hamiltonian

(18)

where

(19)

The Hamiltonian (18) describes a graph where qubit 1 is symmetrically coupled to all other qubits, which are themselves uncoupled (a star network). The case of qubits is shown in Fig. 5.

The SES chip is initially prepared in basis state . Only two eigenfunctions—let’s call them —have overlap with , so the evolution is effectively a two-channel problem. The spectrum is as follows: States have energy ; all other eigenfunctions are degenerate with . Evolution for half a period corresponding to the splitting , namely

(20)

leads to the desired operation

(21)

apart from a phase. This can be implemented in a few ns with superconducting circuits.

Figure 5: Star network for uniform state preparation.

We would like to make a few remarks that apply to this application, as well as to many others: First, the magnitude of the interaction strength used in (18) is arbitrary; any convenient interaction strength with magnitude no larger than g_max is sufficient. To make the operation as fast as possible, however, we have chosen the largest available value g_max. Second, it is not necessary to use a time-independent interaction strength. Any single-step “pulse sequence” of the form

(22)

with

(23)

and a constant (time-independent) matrix, satisfies an area theorem

(24)

where

(25)

and $\mathcal{T}$ is the time-ordering operator. The identity (24) implies that any time-dependent coupling satisfying (25) can be used, simplifying experimental implementation.
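The area theorem is straightforward to check numerically. In the sketch below (an illustration with an arbitrary pulse envelope f(t) and an arbitrary fixed coupling matrix A, both of our choosing), the time-ordered product of many short evolutions under a pulse of the form f(t) times A agrees with a single exponential determined only by the pulse area, because the Hamiltonian commutes with itself at different times.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N = 6
A = rng.uniform(-1, 1, (N, N)); A = (A + A.T) / 2            # fixed real symmetric coupling pattern

t_f, steps = 1.0, 4000
dt = t_f / steps
ts = (np.arange(steps) + 0.5) * dt
f = np.sin(np.pi * ts / t_f) ** 2                            # arbitrary smooth pulse envelope f(t)

U = np.eye(N, dtype=complex)
for fk in f:                                                 # time-ordered product of short steps
    U = expm(-1j * fk * A * dt) @ U

U_area = expm(-1j * A * np.sum(f) * dt)                      # single exponential with the pulse area
print("max deviation:", np.max(np.abs(U - U_area)))          # at the level of roundoff error
```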

III.2 Grover search algorithm

Next we show how to use a programmable SES chip to implement the Grover search algorithm Grover (1997), which introduced the powerful amplitude amplification technique that has led to speedup for many other algorithms. Grover’s procedure for a single marked state in a database of size N is

(26)

Here

(27)

is a unitary operator that performs an inversion about the average,

(28)

is the oracle, a diagonal matrix with the element corresponding to the marked state equal to −1 and the others equal to +1, and $|s\rangle$ is the uniform superposition (6).

The operator (27) can be implemented in a single step by using the SES Hamiltonian with

(29)

for a time

(30)

This leads to the desired operation

(31)

up to a phase factor.

The oracle (28) can be simply generated by a z rotation on the marked qubit. This rotation can be implemented by detuning the qubit frequency, so it does not require microwaves. Each iteration of the amplitude amplification can therefore be implemented in just two steps, for any N, allowing even small SES chips to perform computations that would otherwise require thousands of elementary gates.
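The following Python sketch emulates the two-step SES Grover iteration with dense linear algebra. The specific choices below (all SES matrix elements set equal to a coupling g for a time π/(Ng), and a sign flip of the marked basis state as the oracle) are our reading of the construction described above; the resulting single-step operator equals the inversion about the average up to an overall phase.

```python
import numpy as np
from scipy.linalg import expm

N, marked, g = 64, 17, 1.0
H_diff = g * np.ones((N, N))                       # all SES matrix elements equal to g
D = expm(-1j * H_diff * np.pi / (N * g))           # one step: -(2|s><s| - 1), inversion about average
oracle = np.ones(N); oracle[marked] = -1.0         # diagonal oracle, Eq. (28)

psi = np.ones(N, dtype=complex) / np.sqrt(N)       # uniform superposition, Eq. (6)
for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
    psi = D @ (oracle * psi)                       # one Grover iteration = two SES steps

print("success probability:", abs(psi[marked]) ** 2)   # close to 1
```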

III.3 Eigenvalue estimation

Next we show how to use a programmable SES processor to implement energy eigenvalue estimation, an application of the important phase estimation algorithm Kitaev (); Cleve et al. (1998); Abrams and Lloyd (1999) that is used in many other applications. This example also illustrates how to translate an algorithm expressed in quantum circuit language to an SES protocol.

The eigenvalue estimation procedure calculates an m-bit estimate of the phase accumulated by an eigenfunction under the action of e^{-iAt}. If the evolution time t is chosen to be short enough that the accumulated phase is less than 2π, the eigenvalue (assumed to be positive) can be calculated from the estimated phase. To reduce the number of required qubits we use the iterative phase estimation circuit Dobsicek et al. (2007) shown in Fig. 6, which uses only a single ancilla. As the number of desired bits of precision increases, one either performs a longer quantum computation—reusing the eigenfunction—or performs computations in series, each requiring an eigenfunction preparation step.

The algorithm measures the m bits of the phase one at a time, beginning with the least significant bit j_m and working backwards to the most significant bit j_1. Each step (except for the first) uses knowledge of the previously measured bits. We denote the bit being measured in a given step by j_k, with k = m, m−1, …, 1. The circuit for step k is shown in Fig. 6, where the rotation angle is

(32)

which depends on the values of the previously measured bits j_{k+1}, …, j_m.

Figure 6: Quantum circuit to compute the kth bit of the phase. Here H is the Hadamard gate, A is the model Hamiltonian, t is the evolution time, and the z rotation angle is given in (32). The lower register holds an eigenfunction of A. The last operation is measurement of the first qubit in the diagonal basis; the result is the bit j_k.
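To make the bit-by-bit structure of Fig. 6 explicit, the following classical Python sketch runs the iterative loop with ideal, noise-free arithmetic. The feedback rotation angle used here is the standard Kitaev/Dobsicek choice and is an assumption on our part; the explicit form of Eq. (32) may be written differently.

```python
import numpy as np

def iterative_phase_estimation(phase, m):
    """Measure an m-bit phase phi = 0.j1 j2 ... jm (given as a number in [0,1)),
    one bit per iteration, least significant bit first (cf. Fig. 6)."""
    bits = []                                        # previously measured, less significant bits
    for k in range(m, 0, -1):                        # iteration measuring bit j_k
        theta = 2 * np.pi * (2 ** (k - 1)) * phase   # ancilla phase after controlled-U^(2^(k-1))
        omega = -2 * np.pi * sum(b / 2 ** (l + 2) for l, b in enumerate(bits))  # feedback rotation
        amp1 = np.exp(1j * (theta + omega))          # relative amplitude of ancilla |1>
        p1 = abs((1 - amp1) / 2) ** 2                # probability of outcome 1 after the Hadamard
        bit = int(np.random.rand() < p1)
        bits.insert(0, bit)                          # bits is now [j_k, ..., j_m]
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

print(iterative_phase_estimation(0.8125, m=4))       # 0.8125 = 0.1101 in binary
```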

The main practical difficulty with prethreshold applications of phase estimation is implementation of the

(33)

operation, which typically requires a Trotter approximation (and, in addition, a sparse Hamiltonian). However, the SES method allows any controlled unitary

(34)

to be implemented in a single step when U can be written as e^{-iA} with A real (which is possible when U is symmetric). To see this, assume that U = e^{-iA}, where A is a real M × M matrix, and write the matrix (34) as

$\begin{pmatrix} I & 0 \\ 0 & U \end{pmatrix},$   (35)

where I is the M × M identity matrix and where we take the first qubit to be the control. Next map the 2M-dimensional Hilbert space to the SES processor according to

(36)

The operation (34) can therefore be written as

$\begin{pmatrix} I & 0 \\ 0 & U \end{pmatrix} = \exp\!\left[-i \begin{pmatrix} 0 & 0 \\ 0 & A \end{pmatrix}\right],$   (37)

which can be implemented by an SES processor in a single step. The elements A and 0 on the right-hand side of (37) are each M × M matrices, with 0 the zero (null) matrix.
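The identity (37) is easy to verify numerically. The following sketch (dense linear algebra standing in for the SES dynamics, with an arbitrary random generator of our choosing) constructs a real symmetric A, the corresponding symmetric unitary U = e^{-iA}, and checks that the controlled-U block matrix equals a single exponential of the block-diagonal Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm, block_diag

rng = np.random.default_rng(2)
M = 8
A = rng.uniform(-1, 1, (M, M)); A = (A + A.T) / 2            # real symmetric generator
U = expm(-1j * A)                                            # symmetric unitary, U = exp(-iA)

controlled_U = block_diag(np.eye(M), U)                      # Eq. (35): |0><0| x I + |1><1| x U
single_step  = expm(-1j * block_diag(np.zeros((M, M)), A))   # Eq. (37): one SES Hamiltonian step

print("max deviation:", np.max(np.abs(controlled_U - single_step)))   # ~1e-15
```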

We turn now to the SES eigenvalue estimation protocol: Let A be a real M × M model Hamiltonian on which we wish to perform phase estimation, and denote the basis of A by {|m⟩}, with m = 1, …, M. The SES implementation requires 2M qubits. The first objective in the protocol is to prepare the initial state

(38)

of Fig. 6, where the second register holds an eigenfunction of A. We will perform the state preparation adiabatically, which is restricted to states of minimum or maximum energy; here we prepare the ground state of A and estimate the ground state energy.

Adiabatic ground state preparation is usually implemented by programming a convenient initial Hamiltonian H_i that does not commute with A, relaxing into the ground state of H_i, and then slowly changing the system Hamiltonian from H_i to A. However, in the SES approach it is necessary to use nonequilibrium adiabatic evolution, because the physical ground state is outside the SES. The processor is initially prepared in the SES basis state |1⟩. The next step is to produce the SES state equivalent to

(39)

which, according to the map (36), is

$\frac{1}{\sqrt{M}} \sum_{n=1}^{M} |n\rangle.$   (40)

Note that (40) is a uniform superposition of the first half of SES basis states. To prepare this we use a variation of (21), namely

(41)

where

(42)

is a block-diagonal Hamiltonian. Here the upper block is an M × M matrix of the form (19), and the lower block is the M × M zero matrix. The operation time in (41) is

(43)

This completes the preparation of the input (40) to the adiabatic evolution stage.

At the beginning of the adiabatic evolution stage we program the SES Hamiltonian to be

(44)

where H_i is an M × M Hamiltonian with the following properties:

  1. H_i is real.

  2. .

  3. The ground state of H_i is the uniform superposition state (40).

  4. The ground state is separated from the other eigenstates by an energy gap that is a nondecreasing function of M.

A possible choice when M is a power of two is the “transverse field” Hamiltonian

(45)

However, the explicit matrix forms of (45) for large M are complicated and the tensor-product structure is somewhat artificial for our purposes. Instead we use

(46)

where the coupling matrix is an M × M matrix of the form (29). The initial Hamiltonian (46) has eigenvalues

(47)

where . The ground state is

(48)

with energy The remaining eigenfunctions are degenerate with energy .

At later times the SES Hamiltonian is varied as

(49)

Here

(50)

is a positive constant that ensures that the final Hamiltonian can be programmed into the SES processor. This stage of the protocol is standard: In the long-time adiabatic limit, the processor will be found at the end of the evolution in the desired state (38) with high probability.
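A minimal numerical sketch of this nonequilibrium adiabatic stage is given below. It assumes a linear interpolation between an initial Hamiltonian (taken here to be −g times the all-ones matrix, whose ground state is the uniform superposition) and a random stand-in for the target Hamiltonian; the precise initial Hamiltonian (46) and schedule (49) of the protocol are not reproduced, so the details here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, g, T, steps = 16, 1.0, 400.0, 40000
H_i = -g * np.ones((M, M))                                   # ground state = uniform superposition
H_f = rng.uniform(-1, 1, (M, M)); H_f = (H_f + H_f.T) / 2    # stand-in for the target Hamiltonian

def H(t):                                                    # assumed linear interpolation
    s = t / T
    return (1 - s) * H_i + s * H_f

psi = np.ones(M, dtype=complex) / np.sqrt(M)                 # start in the uniform superposition
dt = T / steps
for k in range(steps):                                       # 4th-order Runge-Kutta integration
    t = k * dt
    f = lambda tt, y: -1j * (H(tt) @ y)
    k1 = f(t, psi); k2 = f(t + dt / 2, psi + dt / 2 * k1)
    k3 = f(t + dt / 2, psi + dt / 2 * k2); k4 = f(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    psi /= np.linalg.norm(psi)

w, V = np.linalg.eigh(H_f)
print("ground-state fidelity:", abs(V[:, 0] @ psi) ** 2)     # approaches 1 for slow enough sweeps
```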

Next we implement the SES equivalent of the circuit given in Fig. 6, beginning with the Hadamard gate

(51)

which we write as

(52)

where

(53)

Then we have

(54)

where is the diagonal matrix

(55)
(56)

This leads to

(57)

where is the Hamiltonian , with

(58)

and

(59)

The controlled-evolution step has been discussed above in (34) through (37). Applying this result to the operation (33) leads to

(60)

Here the blocks are M × M matrices, with A the model Hamiltonian, which we assume to be real. Now let the constant be defined as in (50). Then

(61)

where

(62)

To perform the controlled-evolution operation, the Hamiltonian in (62) is to be programmed into the SES processor for a time .

Finally, we implement the rotation

(63)

where

(64)

This operation can be generated by applying the Hamiltonian for a time

The final stage of the eigenvalue estimation protocol is the SES equivalent of the ancilla measurement (see Fig. 6), resulting in the observed bit value. One way to do this is to perform a simultaneous projective measurement of every qubit in the SES processor. If the excitation at iteration k is observed to be in qubit n, we conclude that

(65)

This result follows from the correspondence (36). The disadvantage of this naive measurement protocol is that it fully collapses the SES wave function, so the eigenfunction needs to be re-prepared before the next iteration.

A simple variation of this protocol, however, avoids the state re-preparation step about half the time: Here we simultaneously measure only the first M qubits. In this case we might observe the excitation to be in one of these qubits, or we may not find it at all. Then we conclude that

(66)

If the excitation is found, the measurement fully collapses the state, and we must re-prepare the eigenfunction. But if it is not found, we have learned only that the excitation is in the subspace spanned by

$|M+1\rangle, |M+2\rangle, \dots, |2M\rangle,$   (67)

which yields no information about the state within that subspace; the eigenfunction is therefore left undisturbed and does not need to be re-prepared.

It is possible to avoid the eigenfunction re-preparation step altogether by using an example of the ancilla-assisted SES method: Here we couple an -qubit SES processor to an ancilla qubit, with a degree of connectivity that depends on the application. (The measurement application requires coupling to qubits.) Alternatively, we can regard one of the qubits in a fully connected array of qubits as an ancilla. The essential point is that the device now explores the single-excitation and double-excitation subspaces. The only disadvantage of this measurement protocol is that it requires steps per iteration.

The idea is to measure the multi-qubit operator

$\sigma^{z}_{1}\, \sigma^{z}_{2} \cdots \sigma^{z}_{M},$   (68)

which projects the first M qubits into a state of definite parity. If at iteration k the eigenvalue of (68) is observed to be −1, the single excitation is in the subspace spanned by |1⟩, …, |M⟩ and we conclude that j_k = 0. If the eigenvalue is +1, the excitation is in the space spanned by |M+1⟩, …, |2M⟩ and we conclude j_k = 1. The measurement of the operator (68) can be carried out with a single ancilla qubit using the circuit given in Fig. 7.

Figure 7: Quantum circuit to measure the parity operator (68). The first qubit is an ancilla and the others are the first M qubits of the 2M-qubit SES processor. The circuit uses CNOT gates.

It is useful to discuss the nonscalability of the SES method in the context of the eigenvalue estimation application. Typically the Hilbert space dimension M is exponentially large in the number of particles, making classical simulation impractical. An ideal (error-free) quantum computer would require only log2(M) + 1 qubits to run the phase estimation circuit of Fig. 6. However, the large circuit depths required for the controlled evolutions have limited prethreshold applications to very small examples. The SES implementation requires 2M qubits, but can perform the controlled evolutions in a single step.

III.4 Schrödinger equation solver for time-independent Hamiltonian matrices

Next we consider the problem of wave function propagation by a real but otherwise arbitrary time-independent Hamiltonian H,

(69)

This application, and especially its time-dependent extension discussed below, plays to the strengths of the SES chip and suggests a useful prethreshold computational tool. We assume that H is a real, symmetric N × N matrix, and we call H the model Hamiltonian. Here t_max is the length of simulated time (for example, the duration of some physical process). To map this problem to an SES processor we first find the smallest positive constant β such that every matrix element of

$\frac{H - \bar{\epsilon}\, I}{\beta}$   (70)

is between −g_max and +g_max. Here I is the identity matrix and $\bar{\epsilon}$ is the mean of the diagonal elements of H. When β > 1 we are “compressing” the model Hamiltonian down to that of the SES chip, whereas when β < 1 we are expanding it. Such a rescaling is required because the characteristic energy scales of the model and SES chip are usually different. With the SES processor we then perform the equivalent evolution

(71)

where

$t' = \beta\, t.$   (72)

The total time required to perform a single run of the quantum computation is therefore

(73)

where t_meas is the qubit measurement time. For superconducting qubits we can assume t_meas to be about Sete et al. (2013), which includes the time needed for classical post-processing. (Note that the shortest high-fidelity readout time demonstrated to date, including resonator ring-down time, is closer to Jeffrey et al. (2014). The faster “catch-disperse-release” protocol of Ref. Sete et al. (2013) has not yet been demonstrated.)

A single run of the quantum computer (with readout) simulates a single repetition of an experiment: Initialization, Schrödinger evolution, and measurement. It is important to emphasize that such a protocol implements a weak simulation, providing a single sample from the distribution of possible measurement outcomes, not the probability distributions themselves as is normally computed classically. (This limitation is not specific to the SES method and applies to state propagation with an error-corrected universal quantum computer as well.) For some applications the distinction between weak and strong simulation might be minor. However in other cases it is necessary to estimate the occupation probabilities accurately. We discuss the runtime overhead for strong SES simulation below in Sec. III.6.

How long does a classical simulation of (69) take? This of course depends on the model Hamiltonian (including its dimension and spectral norm), the value of t_max, and the classical processor and simulation algorithm used. To assess the possibility of quantum speedup, however, it is sufficient to find the minimum time required to classically simulate a given run of an ideal SES processor, with a “typical” SES Hamiltonian (a real random symmetric matrix with all entries between −g_max and +g_max), and an evolution time significantly less than the coherence time. For this analysis we consider the case

(74)

The total quantum computation time (73) in this example is therefore about

(75)

We have studied the classical simulation runtime for this problem, comparing, on a single core com (), three standard numerical algorithms:

  1. State propagation via Hamiltonian diagonalization. For a given H, the unitary matrix V of its eigenvectors and the diagonal matrix Λ of its eigenvalues are first computed. Then we numerically compute the product

     $|\psi(t)\rangle = V\, e^{-i \Lambda t}\, V^{\dagger} |\psi(0)\rangle,$   (76)

     where |ψ(0)⟩ is the initial state.

  2. Matrix exponentiation via Padé approximation with scaling and squaring Higham (2005). Here we directly compute e^{-iHt} and then multiply it by the initial state.

  3. Krylov subspace projection Sidje (1998). In this case the product e^{-iHt}|ψ(0)⟩ itself is directly calculated.

    Figure 8: (color online) Classical simulation runtime on a single core com () versus matrix dimension N, in seconds. Here the computational task is solution of the Schrödinger equation with a time-independent Hamiltonian by matrix diagonalization. Data (circles) were determined by averaging the runtimes over 1000 random instances of H. The solid line is the function ; the scaling becomes at larger N. The runtime for a Hamiltonian is about .

In all cases we assume an initial state of the form

$|\psi(0)\rangle = (1, 0, 0, \dots, 0)^{\rm T},$   (77)

which corresponds to a single SES basis state, and we average the computation times over 1000 random instances of H. Although the three methods have similar speed and accuracy for the particular problem simulated here, the matrix diagonalization method was the fastest, followed by matrix exponentiation. We also tested Runge-Kutta integration and matrix exponentiation via Chebyshev polynomial expansion Tal-Ezer and Kosloff (1984), which were not competitive with the above methods for the specific application considered. In Fig. 8 we plot the measured single-core runtimes for the optimal classical algorithm (Hamiltonian diagonalization) versus matrix dimension N. We observe that the quantum simulation time (75) is much shorter than all of the single-core runtimes considered.
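The diagonalization baseline of item 1 can be reproduced with a few lines of Python. This is only a sketch of the benchmark (single core, one random H, a single SES-like initial state), not the code used to generate Fig. 8, and the dimension and evolution time below are arbitrary choices.

```python
import time
import numpy as np

rng = np.random.default_rng(4)
N, t = 1000, 1.0
H = rng.uniform(-1, 1, (N, N)); H = (H + H.T) / 2          # random real symmetric Hamiltonian
psi0 = np.zeros(N, dtype=complex); psi0[0] = 1.0           # initial state of the form (77)

start = time.perf_counter()
w, V = np.linalg.eigh(H)                                   # H = V diag(w) V^T
psi_t = V @ (np.exp(-1j * w * t) * (V.T @ psi0))           # Eq. (76)
elapsed = time.perf_counter() - start
print(f"runtime: {elapsed:.3f} s   norm: {np.linalg.norm(psi_t):.6f}")
```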

Our objective is to achieve speedup relative to a state-of-the-art supercomputer, not a single core. The classical simulation runtime should then be evaluated on a supercomputer, using an optimally distributed parallel algorithm. However, we can bound the parallel performance by using the single-core result and assuming perfect parallelization efficiency: We approximate a petaflop supercomputer by 10^6 gigaflop cores, and conclude that the classical runtime can be no shorter than 10^{-6} times the single-core time. (This is a conservative estimate because high parallelization efficiency is not expected for problem sizes smaller than the number of cores.) We therefore conclude that, for this particular state propagation application, the classical simulation runtime is no shorter than

(78)

while the quantum simulation can be performed in a few hundred nanoseconds. The breakeven dimension according to (78) is about qubits. This is quite large given the full connectivity requirement, and it is not known whether such a device could be built in practice. However the breakeven dimension in the time-dependent case (discussed below) is considerably smaller.

In our estimate of the classical simulation runtime we have not included the time needed to store the Hamiltonian matrix in memory or perhaps compute it from a separate procedure. Similarly, for the quantum simulation time estimate we have not included the time required to send the controls to the qubits and couplers before the simulation. Furthermore, not every simulation will exhibit a speedup; this depends on the particular simulated Hamiltonian and the simulated time duration .

An interesting aspect of the Schrödinger equation solver is that the complexity is O(1): The quantum simulation time is independent of the matrix dimension N. This implies that the SES method yields an exponential speedup for this application. However, such complexity considerations are probably not meaningful given that the method is not scalable.

III.5 Schrödinger equation solver for time-dependent Hamiltonians: Simulation of molecular collisions

Finally, we discuss what is perhaps the most interesting application of the SES method known to date, the solution of the Schrödinger equation with a time-dependent Hamiltonian. This is a straightforward generalization of the time-independent case, but we expect the time-dependent case to be more useful in practice. In this section we provide a detailed example of time-dependent Hamiltonian simulation with a small SES chip.

Time-dependent Hamiltonian simulation is implemented by varying the SES matrix elements (7) according to some protocol, which can be done with nanosecond resolution. This does not require any additional runtime; the time complexity is still constant, and the total quantum simulation runtime for an evolution of duration t_max is again given by (73). Although the classical runtime is problem specific, we can again assess the possibility of speedup by estimating the time required to classically simulate an ideal SES processor, in this case with all matrix elements varying on a nanosecond timescale. There are two types of numerical simulation algorithms we consider:

  1. Runge-Kutta integration. Here we solve the system of coupled ordinary differential equations

     $i\,\dot{\psi}_n(t) = \sum_{n'} H_{nn'}(t)\, \psi_{n'}(t),$   (79)

     Although the Runge-Kutta runtime is slower than diagonalization for a time-independent Hamiltonian, it does not slow down significantly when H is time dependent.

  2. Time slicing combined with diagonalization. This algorithm is based on an approximate decomposition of the time-dependent problem into a sequence of constant-Hamiltonian intervals, each of width Δt. The time Δt must be significantly smaller than the characteristic timescale of matrix element variation, for example . Then

    (80)

    time slices are required, and the classical runtime using this approach will be approximately times longer than (78). For the evolution, .

We find that Runge-Kutta integration is the fastest approach for the specific problem considered here. In Fig. 9 we plot the measured single-core runtimes for the optimal classical algorithm (Runge-Kutta integration) versus matrix dimension N. Bounding the performance of this algorithm on a petaflop supercomputer by including a factor of 10^{-6} (recall the discussion in Sec. III.4), we conclude that for this application the classical simulation runtime is no shorter than

(81)

while the quantum simulation can be performed in a few hundred nanoseconds. The breakeven dimension according to (81) is around qubits. As expected, this value is much smaller than the breakeven in the time-independent case. An SES Schrödinger equation solver of modest size might be able to achieve quantum speedup relative to a petaflop supercomputer.
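For reference, a minimal single-core version of the Runge-Kutta baseline can be written as follows. This is a sketch with an arbitrary smoothly varying test Hamiltonian, not the benchmark code used for Fig. 9; the dimension, the evolution time, and the interpolation used for H(t) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(5)
N, T = 200, 10.0
H0 = rng.uniform(-1, 1, (N, N)); H0 = (H0 + H0.T) / 2      # two random symmetric matrices
H1 = rng.uniform(-1, 1, (N, N)); H1 = (H1 + H1.T) / 2

def H(t):
    """Test Hamiltonian whose elements all vary smoothly between H0 and H1."""
    s = 0.5 * (1 - np.cos(np.pi * t / T))
    return (1 - s) * H0 + s * H1

def rhs(t, psi):                                           # Eq. (79)
    return -1j * (H(t) @ psi)

psi0 = np.zeros(N, dtype=complex); psi0[0] = 1.0           # single SES basis state
sol = solve_ivp(rhs, (0.0, T), psi0, method="RK45", rtol=1e-6, atol=1e-9)
print("final norm:", np.linalg.norm(sol.y[:, -1]))         # ~1 for an accurate integration
```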

Figure 9: (color online) Classical simulation runtime on a single core com () versus matrix dimension N, in seconds. Here the computational task is solution of the Schrödinger equation with a time-dependent Hamiltonian by Runge-Kutta integration. Data (circles) were determined by averaging the runtimes over 1000 random instances of the Hamiltonian. The solid line is the function . The simulation time for a Hamiltonian is about .

We turn now to a detailed example of time-dependent Hamiltonian simulation with a small programmable SES chip. One particularly interesting application of the method is to the quantum simulation of atomic and molecular collisions. Collisions are especially well suited for SES simulation because they typically involve modest Hilbert spaces—tens to thousands of channels—and in the time-dependent formulation involve Hamiltonians that are naturally bounded in time. In particular, the initial and final asymptotic Hamiltonians for neutral scatterers are diagonal (in the adiabatic basis), whereas the off-diagonal elements rapidly turn on and then off during the collision itself, inducing transitions between the channels. Although the Born-Oppenheimer potential energy surfaces used here do require a classically inefficient electronic structure precomputation, the largest potential energy surface calculations Braams and Bowman (2009); Bowman et al. (2011) are far ahead of the largest classical collision simulations performed to date Wang (2006); Schiffel and Manthe (2010); Yang et al. (2015). SES implementation of the semiclassical Born-Oppenheimer problem therefore has the potential to push molecular collision simulations to new unexplored regimes. (We note that there are related chemical reaction simulation methods developed by Lidar et al. Lidar and Wang (1999) and by Kassal et al. Kassal et al. (2008) that do not require a precomputed potential surface, but these require an error-corrected quantum computer to implement and are not prethreshold methods.) Another useful feature of the scattering application is the convenient mapping of each molecular channel to a single SES basis state, which is possible because of the similar way initial states are prepared and final states measured in both the processor and a collision experiment. We also find that the atomic physics time and energy scales turn out to map nicely to that of superconducting qubits after optimal rescaling.

Figure 10: (color online) Time dependence of the matrix elements of the scattering Hamiltonian for a collision with and in atomic units. The diagonal matrix elements (solid curves) are similar in magnitude and cannot be resolved in this figure. The off-diagonal elements (dashed curves) are much smaller than the diagonal elements and also cannot be resolved here. In this example the collision energy is and the impact occurs at .

To illustrate this application we consider a three-channel Na-He collision (an unpublished preliminary account of this application is given in Ref. Pritchett et al. ()). The three channels included in our model and their correspondence with SES basis states are

(82)

and

(83)

The square brackets indicate the molecular structure of the channels. In this model, the helium atom remains in its electronic ground state during the collision (the excitation energies of its excited states are too high to be relevant here), whereas sodium can be excited from its ground state to either of two excited states. In the physical system, the channels (82) and (83) have additional degeneracies, including spin degeneracies, but they do not affect the collision probabilities calculated here. Precomputed Born-Oppenheimer energies and nonadiabatic couplings of the Na-He system Lin et al. (2008) are stored for fixed values of the internuclear distance R, and we make a standard semiclassical (high-energy) approximation and assume that the scatterers follow a straight-line trajectory. Then, for an impact occurring at a time t_c, the internuclear separation varies according to

$R(t) = \sqrt{b^2 + v^2 (t - t_c)^2},$   (84)

where v is the initial relative velocity and b the impact parameter of the collision. The relative velocity is related to the collision energy E in the center-of-mass frame through $E = \tfrac{1}{2}\mu v^2$, where μ is the reduced mass. The procedure outlined in Appendix C then leads to the scattering Hamiltonian shown in Fig. 10.
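As an illustration of Eq. (84) and of how stored potential curves enter the simulation, here is a minimal Python sketch. The tabulated arrays below are placeholders standing in for the precomputed Na-He data of Lin et al. (2008), not the actual values, and the numerical constants are illustrative only.

```python
import numpy as np
from scipy.interpolate import interp1d

def R_of_t(t, b, v, t_c):
    """Straight-line internuclear separation, Eq. (84)."""
    return np.sqrt(b**2 + v**2 * (t - t_c)**2)

# Placeholder tabulated channel Hamiltonian H(R): three channels, with
# off-diagonal couplings that decay with internuclear distance.
R_grid = np.linspace(2.0, 40.0, 400)
H_grid = np.zeros((len(R_grid), 3, 3))
H_grid[:, 1, 1] = H_grid[:, 2, 2] = 0.077                  # placeholder excitation energy (a.u.)
H_grid[:, 0, 1] = H_grid[:, 1, 0] = 0.02 * np.exp(-R_grid / 3.0)
H_grid[:, 0, 2] = H_grid[:, 2, 0] = 0.01 * np.exp(-R_grid / 3.0)
H_of_R = interp1d(R_grid, H_grid, axis=0)                  # interpolate each element in R

def H_of_t(t, b, v, t_c):
    """Time-dependent scattering Hamiltonian along the trajectory."""
    R = np.clip(R_of_t(t, b, v, t_c), R_grid[0], R_grid[-1])
    return H_of_R(R)

print(H_of_t(0.0, b=5.0, v=0.1, t_c=0.0))                  # 3x3 Hamiltonian at closest approach
```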

Figure 11: (color) Scattering probabilities for a Na-He collision with and in atomic units. The system is initially prepared in channel 1. The collision occurs at .

In Fig. 11 we plot the probabilities for the Na-He system to be found in channel c after being initially prepared in channel 1, the ground state. The final values are the probabilities for an elastic (c = 1) or inelastic (c = 2, 3) collision with the given b and collision energy, which we find to be

(85)

To simulate this process with a programmable SES chip we must first rescale the physical or model Hamiltonian so that it fits on the SES processor. Doing this optimally is critical to the utility of any time-dependent simulation, so we will discuss it in some detail: Suppose for the moment that our model Hamiltonian is given by a time-independent real symmetric matrix H. Dividing H by any positive constant β while rescaling the evolution time by the same factor obviously leaves the dynamics invariant. Because we want the quantum simulation to be as fast as possible, we choose the smallest value of β that makes H/β compatible with the SES processor (every matrix element of H/β is between −g_max and +g_max). As mentioned above, if β > 1 we are compressing the model's energy scales to fit on the SES chip, whereas if β < 1 we are expanding them. This naive approach to rescaling, however, does not take advantage of the fact that we can always shift H by a constant (which changes the corresponding states by a phase factor that we do not measure). Including this gauge transformation results in the rescaling used above in (70). The time required to simulate an evolution of duration t_max is simply given by (72), but now we will go further and regard (72) as giving the linear relationship between the simulated and physical times during a process. To generalize this construction to a time-dependent model Hamiltonian H(t) we first compute the mean of its diagonal elements,

$\bar{\epsilon}(t) \equiv \frac{1}{N} \sum_{n=1}^{N} H_{nn}(t),$   (86)

and then find, at each time t, the smallest positive β(t) such that every matrix element of

$\frac{H(t) - \bar{\epsilon}(t)\, I}{\beta(t)}$   (87)

is between −g_max and +g_max. The function β(t) defines the resulting nonlinear relation between the physical time t and the simulated (SES processor) time t′ according to

$t'(t) = \int_{0}^{t} \beta(\tau)\, d\tau.$   (88)

Equation (87) gives the simulated Hamiltonian as a function of the physical time t, and (88) is then inverted to find the simulated Hamiltonian as a function of t′, which in turn is programmed into the SES chip.
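To make the rescaling concrete, the following minimal Python sketch (not the paper's code) implements Eqs. (86)-(88) for an arbitrary callable model Hamiltonian. The placeholder H_model, the time grid, and the value of g_max are illustrative assumptions; in a real application H_model(t) would be the scattering Hamiltonian of Fig. 10 expressed in the chip's units.

```python
import numpy as np

g_max = 1.0   # chip coupling scale in the chip's own units (placeholder value)

def rescale(H):
    """Shift by the mean diagonal and divide by the smallest beta that brings
    every element into [-g_max, g_max]; returns (H_ses, beta), cf. Eqs. (86)-(87)."""
    eps_bar = np.mean(np.diag(H))                          # Eq. (86)
    shifted = H - eps_bar * np.eye(len(H))
    beta = max(np.max(np.abs(shifted)) / g_max, 1e-30)     # avoid beta = 0 when H ~ identity
    return shifted / beta, beta                            # Eq. (87)

def chip_schedule(H_model, t_grid):
    """Accumulate the simulated (chip) time t' = integral of beta(t) dt, Eq. (88)."""
    betas = np.array([rescale(H_model(t))[1] for t in t_grid])
    increments = 0.5 * (betas[1:] + betas[:-1]) * np.diff(t_grid)   # trapezoid rule
    return np.concatenate(([0.0], np.cumsum(increments))), betas

def H_model(t):
    """Placeholder 3-channel model: off-diagonal coupling turns on and off near t = 50."""
    c = 0.02 * np.exp(-((t - 50.0) / 10.0) ** 2)
    return np.array([[0.0, c, c], [c, 0.5, 0.0], [c, 0.0, 0.5]])

t_grid = np.linspace(0.0, 100.0, 1001)
t_chip, betas = chip_schedule(H_model, t_grid)
print("total chip evolution time:", t_chip[-1])
```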

Figure 12: (color online) Rescaling function β(t) for the Na-He collision simulation. Collision parameters are given in atomic units. We find that β(t) has an asymptotic value around and reaches during the collision.
Figure 13: (color online) Nonlinear time scaling for the