Simulating Chemistry using Quantum Computers


The difficulty of simulating quantum systems, well-known to quantum chemists, prompted the idea of quantum computation. One can avoid the steep scaling associated with the exact simulation of increasingly large quantum systems on conventional computers, by mapping the quantum system to another, more controllable one. In this review, we discuss to what extent the ideas in quantum computation, now a well-established field, have been applied to chemical problems. We describe algorithms that achieve significant advantages for the electronic-structure problem, the simulation of chemical dynamics, protein folding, and other tasks. Although theory is still ahead of experiment, we outline recent advances that have led to the first chemical calculations on small quantum information processors.

1 Introduction

One of the greatest challenges in quantum chemistry is to fully understand the complicated electronic structure of atoms and molecules. Over the last century, enormous progress has been made in describing the general behavior of relatively simple systems. In particular, combined with physical insights, elegant computational approaches, ranging from wavefunction methods to quantum Monte Carlo and density functional theory, have been developed. The challenge is that the Hilbert spaces of quantum systems grow exponentially with system size. Therefore, as these methods are extended to higher accuracy or to larger systems, the computational requirements become unreachable with current computers. This problem is not merely a consequence of technological limitations, but stems from the inherent difficulty of simulating quantum systems with computers based on classical mechanics. It is therefore important to know if the computational bottlenecks of classical computers can be solved by a computing model based on quantum mechanics—quantum computation—whose development has revolutionized our understanding of the connections between computer science and physics.

The idea of mapping the dynamics of a quantum system of interest onto the dynamics of a controllable quantum system was proposed in 1982 by Feynman [43] and developed in 1996 by Lloyd [84]. Such a quantum computer would be able to obtain information inaccessible with classical computers. Consequently, quantum simulation promises to be a powerful new tool for quantum chemistry. In this article, we review the recent applications of quantum simulation to chemical problems that have proven difficult on conventional computers. After introducing basic concepts in quantum computation, we describe quantum algorithms for the exact, non-adiabatic simulation of chemical dynamics as well as for the full-configuration-interaction treatment of electronic structure. We also discuss solving chemical optimization problems, such as lattice folding, using adiabatic quantum computation. Finally, we describe recent experimental implementations of these algorithms, including the first quantum simulations of chemical systems.

2 Quantum Computation

2.1 Differences between quantum and classical computation

There are fundamental differences between quantum and classical computers. Unlike the classical bit, which is always either a ‘0’ or a ‘1’, the basic unit of quantum information is the qubit (Fig. 1), which can be in a superposition of $|0\rangle$ and $|1\rangle$: $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, with $|\alpha|^2 + |\beta|^2 = 1$. States of $n$ qubits are elements of an exponentially large, $2^n$-dimensional, Hilbert space, spanned by a basis of the form $|x_1 x_2 \cdots x_n\rangle$, where each $x_i$ is 0 or 1. This enables entanglement, a feature necessary for the advantage of quantum computers. As an example of entanglement, the two-qubit state $(|00\rangle + |11\rangle)/\sqrt{2}$, one of the Bell states, cannot be written as a product state $|\psi_1\rangle \otimes |\psi_2\rangle$.
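The entanglement of the Bell state can be checked numerically: writing a two-qubit state as a matrix of amplitudes, a product state always has matrix (Schmidt) rank 1. A minimal numpy sketch:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), stored as a 2x2 coefficient matrix with
# c[i, j] = amplitude of |ij>. A product state a ⊗ b makes c rank-1, so the
# Schmidt rank (the matrix rank of c) detects entanglement.
c = np.array([[1.0, 0.0],
              [0.0, 1.0]]) / np.sqrt(2)

schmidt_rank = np.linalg.matrix_rank(c)
print(schmidt_rank)  # 2 -> entangled; any product state would give 1
```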

The linearity of quantum theory implies that a quantum computer can execute classical computations in superposition. For example, if the input state $\sum_x |x\rangle$ contains all possible input values of a function $f$, the function can be computed using a unitary operation $U_f$ as

$$U_f \sum_x |x\rangle|0\rangle = \sum_x |x\rangle|f(x)\rangle. \qquad (1)$$

With a single call to $U_f$, the quantum computer produces a state that contains information about all the possible outputs of $f$.

Nevertheless, quantum computation has several limitations. For example, the no-cloning theorem [94, 63] states that an unknown quantum state cannot be copied perfectly. More importantly, the information in a general quantum state cannot be read out with a single projective measurement, because measurement collapses a superposition into one of its components. Therefore, while the state in Eq. 1 contains information about all possible outputs, that information is not immediately accessible. Instead, a quantum algorithm has to be designed so that a global property of $f$ is easy to measure, without requiring the computation of all the individual values $f(x)$. Algorithms of this kind are discussed in the following sections.
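Both points can be illustrated classically. The sketch below (with an arbitrary illustrative function $f$, not from the original) builds $U_f|x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle$ as a permutation matrix, applies it once to a uniform superposition, and shows that a measurement nevertheless yields only a single pair $(x, f(x))$:

```python
import numpy as np

def f(x):                 # an illustrative 1-bit function on 2-bit inputs
    return 1 if x == 3 else 0

# U_f |x>|y> = |x>|y XOR f(x)> as a permutation matrix on 3 qubits (dim 8);
# basis index of |x>|y> is 2x + y
dim = 8
U = np.zeros((dim, dim))
for x in range(4):
    for y in range(2):
        U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1

# Uniform superposition over all inputs x, with the ancilla in |0>
state = np.zeros(dim)
for x in range(4):
    state[x << 1] = 0.5   # amplitude 1/2 on each |x>|0>

out = U @ state           # ONE call to U_f: every f(x) is now in superposition

# A projective measurement collapses to a single |x>|f(x)>, each seen with
# probability |amplitude|^2 = 1/4
probs = out**2
print(probs.nonzero()[0])  # -> [0 2 4 7], i.e. the basis states |x, f(x)>
```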

2.2 Approaches to quantum computing

There are several models, or ways of formulating, quantum computation. Most work in quantum simulation has been done in the circuit and adiabatic models. While the two are known to be computationally equivalent—any computation that can be performed in one model can be performed in the other in a comparable amount of time [6, 66, 89]—different problems are solved more naturally in different models. We discuss the two models in turn, but note that other models hold promise for the development of future simulation algorithms, including topological quantum computing [69, 90], one-way quantum computing [108, 109], and quantum walks [65].

The circuit model

The cornerstone of quantum computation is the generalization of the classical circuit model, composed of classical bits and logic gates, to a quantum circuit model [35, 13, 94]. A quantum circuit is a multi-qubit unitary transformation that maps a set of initial states to some final states. Usually, the overall unitary is decomposed into elementary gates, each of which involves only a few (one or two) qubits.

In classical computing, the nand gate is universal [95], meaning that any logical circuit can be constructed using nand gates only. Similarly, in quantum computing, there are sets of unitary operations that form universal gate sets. A quantum computer that can implement such a set is called universal, and can perform any unitary transformation to an arbitrary accuracy. It turns out that the set containing all single-qubit gates in addition to any two-qubit entangling gate, such as cnot, is universal [63] (Fig. 1). An entangling gate can be realized by any physical interaction that can generate entanglement between qubits. Examples of experimental implementations of quantum gates have been reviewed [77], and we will cover some of the experiments relevant to quantum simulation in Sec. 5.
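The standard textbook example of this universal gate set in action is the preparation of a Bell state from $|00\rangle$ using one Hadamard gate and one cnot; a minimal numpy sketch:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)    # Hadamard (single-qubit gate)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],          # two-qubit entangling gate,
                 [0, 1, 0, 0],          # control = first qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.kron(H, I) @ np.array([1.0, 0, 0, 0])  # H on qubit 1 of |00>
bell = CNOT @ psi                                # entangle the two qubits
print(bell)  # [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
```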

Besides the elementary gates, an important quantum transformation is the quantum Fourier transform (QFT). It transforms any quantum state $|\psi\rangle = \sum_{x=0}^{2^n-1} a_x |x\rangle$ into its Fourier representation,

$$\mathrm{QFT}\,|\psi\rangle = \sum_{k=0}^{2^n-1} \tilde{a}_k |k\rangle, \qquad \tilde{a}_k = \frac{1}{2^{n/2}} \sum_{x=0}^{2^n-1} e^{2\pi i k x / 2^n}\, a_x, \qquad (2)$$

where the $\tilde{a}_k$ are the discrete Fourier coefficients of the $a_x$. The QFT can be efficiently implemented using a quantum circuit [94]: for $n$ qubits, the number of elementary gates required is $O(n^2)$. For comparison, the classical fast Fourier transform requires $O(n 2^n)$ gates. We take advantage of the QFT in Sec. 3.2 for the simulation of quantum dynamics, and in Sec. 3.3 for the measurement of observables.
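As a sanity check of the convention above, the dense QFT matrix can be built directly and compared against the classical FFT (here the circuit's $O(n^2)$ gate decomposition is not reproduced; only the $2^n \times 2^n$ unitary is):

```python
import numpy as np

n = 3                       # qubits
N = 2**n
k, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
QFT = np.exp(2j * np.pi * k * x / N) / np.sqrt(N)   # N x N unitary matrix

assert np.allclose(QFT @ QFT.conj().T, np.eye(N))   # unitarity

rng = np.random.default_rng(1)
a = rng.normal(size=N) + 1j * rng.normal(size=N)
a /= np.linalg.norm(a)
# Same coefficients as numpy's inverse FFT, up to the 1/sqrt(N) normalization
print(np.allclose(QFT @ a, np.fft.ifft(a) * np.sqrt(N)))  # True
```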

Figure 1: Qubit, elementary gates, and the phase estimation algorithm: (a) The quantum state of a qubit can be represented on the Bloch sphere. (b) The action of the Hadamard gate on the qubit shown in (a). The cnot gate, together with single-qubit gates, forms a universal gate set. The quantum circuit for the phase estimation algorithm (PEA) is shown on the right, where QFT denotes the quantum Fourier transform. The eigenvalues in this case are obtained to 4-digit accuracy.

Adiabatic quantum computation

An alternative to the gate model is the adiabatic model of quantum computation [41]. In this model, the quantum computer remains in its ground state throughout the computation. The Hamiltonian of the computer is changed slowly from a simple initial Hamiltonian $H_i$ to a final Hamiltonian $H_f$ whose ground state encodes the solution to the computational problem. The adiabatic theorem states that if the variation of the Hamiltonian is sufficiently slow, the easy-to-prepare ground state of $H_i$ will be transformed continuously into the ground state of $H_f$. It is desirable to complete the evolution as quickly as possible; the maximum rate of change is mostly determined by the energy gap between the ground and first excited states during the evolution [86, 7, 126, 125]. The applications of adiabatic quantum computation to simulation include preparing quantum states of interest and solving optimization problems such as protein folding [102]. We discuss the details in Secs. 3.4.1 and 4, respectively.
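The role of the gap can be seen in a toy single-qubit interpolation (the Hamiltonians below are chosen purely for illustration): the gap of $H(s) = (1-s)H_i + sH_f$ is smallest midway through the sweep, and it is this minimum that limits the evolution speed.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

Hi = -sx     # easy initial Hamiltonian: ground state (|0> + |1>)/sqrt(2)
Hf = -sz     # final Hamiltonian: ground state |0> encodes the "answer"

# Track the gap along the interpolation H(s) = (1-s) Hi + s Hf
gaps = []
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * Hi + s * Hf)
    gaps.append(evals[1] - evals[0])
print(min(gaps))  # ~1.414: the gap is smallest at s = 0.5
```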

2.3 Quantum complexity theory

To understand the computational advantages of quantum algorithms for chemical simulation, we discuss some aspects of computational complexity theory, which defines quantum speed-up unambiguously. A proper measure of the complexity of an algorithm is how many operations (or how much time) it takes to solve problems of increasing size. Conventionally, a computational problem is described as easy or tractable if there exists an efficient algorithm for solving it, one that scales polynomially with input size (for an input of size $n$, as $O(n^k)$ for some constant $k$). Otherwise, the problem is hard. This distinction is admittedly a rough one: for reasonable problem sizes, an “inefficient” algorithm scaling exponentially as, say, $1.0001^n$ would be faster than an “efficient” algorithm scaling as $n^{100}$. Nevertheless, this convention has proven useful because, in practice, polynomially scaling algorithms generally outperform exponential ones.

The class of all problems that are easy for classical computers (classical Turing machines) is called p [9]. The class of all problems whose answers can be verified in polynomial time is np. For example, even though we do not know how to factor numbers efficiently, factoring is in np because we can check a proposed answer efficiently by multiplication. Note that p ⊆ np because any easy problem can be verified easily. Whether p = np is a famously open question; however, it is widely believed that the two classes are not equal, that is, that there are problems in np that cannot be solved easily [1]. The hardest among them belong to the class np-hard: if any np-hard problem can be solved efficiently, then so can any problem in np.

Likewise, bqp is the class of problems that are easy for a quantum computer [130]. The quantum analogue of np is called qma and contains those problems easy to check on a quantum computer. In analogy with np-hard problems, qma-hard contains the hardest problems in qma. Shor’s factoring algorithm [112] is significant because it provides an example of a problem in bqp which is widely thought (although not proven) to be outside of p; that is, a problem believed to be hard on classical computers that is easy for a quantum computer.

The relationships between the complexity classes mentioned above are illustrated in Fig. 2. In the remainder of this review, we explore the advantages of quantum simulation over its classical counterpart, in part, by situating various simulation tasks in the computational classes illustrated in Fig. 2.

Figure 2: Computational complexity classes. Shown are the conjectured relationships between the computational complexity classes discussed in this review. Simulating the time-evolution of chemical systems (denoted by the star) is in bqp but widely believed to be outside of p (assuming a constant error and simulated time). That is, it is easy on quantum computers, but probably hard—even in principle—on conventional ones.

3 Quantum Simulation

Quantum simulation schemes can be divided into two broad classes. The first is dedicated quantum simulation, where one quantum system is engineered to simulate another quantum system. For example, quantum gases in optical lattices can be used to simulate superfluidity [12]. The other, more general, approach is universal quantum simulation, simulating a quantum system using a universal quantum computer. Although we will focus on universal quantum simulation because most chemical proposals assume a universal quantum computer, we will mention dedicated simulators where appropriate.

One of the main goals of quantum simulation is to determine the physical properties of a particular quantum system. This problem can usually be conceptualized as involving three steps:

  1. Initialize the qubits in a state that can be prepared efficiently,

  2. Apply a unitary evolution to this initial state, and

  3. Read out the desired information from the final state.

We note at the outset that it is not possible to simulate an arbitrary unitary evolution on a quantum computer efficiently. An arbitrary unitary acting on a system of $n$ spins has $4^n - 1$ free parameters, and would require an exponential number of elementary quantum gates to implement. However, in quantum chemistry, it is usually not necessary to simulate arbitrary dynamics, since natural systems are not arbitrary [84]. Instead, the interactions between, say, molecular orbitals are local—featuring at most $k$-body interactions—and this crucial aspect of their structure can be exploited for their efficient simulation. That is, the Hamiltonian $H = \sum_{j=1}^{L} H_j$ generating the unitary evolution is a sum of polynomially many terms $H_j$, each of which acts on at most polynomially many degrees of freedom. A local Hamiltonian generates a time-evolution that can be decomposed into $m$ time-steps according to the Lie-Trotter formula,

$$e^{-iHt} \approx \left( e^{-iH_1 t/m}\, e^{-iH_2 t/m} \cdots e^{-iH_L t/m} \right)^m. \qquad (3)$$

The approximation can be improved by increasing the number of time steps or by using higher-order generalizations of this formula [54, 18]. Finally, since each factor acts on only a sub-region of the Hilbert space and can therefore be efficiently simulated, so can a product of polynomially many such factors. Hence, the time it takes to perform the simulation scales polynomially with the simulated time $t$. Most methods of quantum simulation make use of the Trotter decomposition, and we will describe in more detail their applications in chemistry. We will not discuss all the available methods, for which the reader is directed to comprehensive reviews [27, 110, 22].
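The convergence of the Trotter product can be checked numerically for two non-commuting terms (a single-qubit toy example, not from the original): the error falls off roughly as $1/m$ for the first-order formula.

```python
import numpy as np

def U(H, t):
    """Exact propagator exp(-i H t) for a Hermitian H via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_x
B = np.array([[1.0, 0.0], [0.0, -1.0]])   # sigma_z: does not commute with A
t = 1.0

exact = U(A + B, t)
errors = []
for m in (1, 10, 100):
    step = U(A, t / m) @ U(B, t / m)           # one Trotter time-step
    trotter = np.linalg.matrix_power(step, m)  # m steps
    errors.append(np.linalg.norm(trotter - exact))
print(errors)   # error shrinks roughly as 1/m
```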

In the following, we describe two ways in which chemical wavefunctions can be encoded on a quantum computer, second- and first-quantization approaches (see Table 1 for a comparison). For each approach, we outline the methods of preparing certain classes of initial states and propagating them in time. Afterward, we discuss the methods of measurement of observables and preparation of ground and thermal states, which do not depend essentially on the way the wavefunction is encoded.

                               Second-quantized                         First-quantized

Wavefunction encoding          Fock state in a given basis:             On a grid of $2^n$ sites
                               $|n_1 n_2 \cdots n_M\rangle$             per dimension

Qubits required to             One per basis state (spin-orbital)       $3n$ per particle
represent the wavefunction                                              (nuclei & electrons)

Molecular Hamiltonian          Coefficients pre-computed classically    Interaction calculated
                                                                        on the fly

Quantum gates required         $O(M^5)$, with $M$ the number of         $O(B^2)$ per time step, with
for simulation                 basis states                             $B$ the number of particles

Advantages                     • Compact wavefunction representation    • Better asymptotic scaling
                                 (requires fewer qubits)                  (requires fewer gates)
                               • Takes advantage of classical           • Treats dynamics better
                                 electronic-structure theory to         • Can be used for computing
                                 improve performance                      reaction rates or state-to-
                               • Already experimentally implemented       state transition amplitudes
Table 1: Comparison of second- and first-quantization approaches to quantum simulation.

3.1 Second quantization

We start by considering the purely electronic molecular problem, in which the Born-Oppenheimer approximation has been used to separate the electronic and nuclear motion. The wavefunction of the electrons can be expanded in an orthonormal basis of $M$ molecular spin-orbitals $\{\chi_p\}$. Corresponding to this basis are the fermionic creation and annihilation operators $a_p^\dagger$ and $a_p$. There is a very natural mapping between the electronic Fock space and the state of $M$ qubits: having qubit $p$ in the state $|0\rangle$ (or $|1\rangle$) indicates that spin-orbital $\chi_p$ is unoccupied (or occupied).

An important subtlety is that electrons in a molecule, unlike the individually addressable qubits, are indistinguishable. Put differently, while the operators $a_p$ and $a_q^\dagger$ obey the canonical fermionic anticommutation relations, $\{a_p, a_q^\dagger\} = \delta_{pq}$ and $\{a_p, a_q\} = 0$, the qubit operators that change $|0\rangle$ to $|1\rangle$ and vice versa do not. This problem can be solved by using the Jordan-Wigner transformation to enforce the correct commutation relations on the quantum computer [99, 118, 119, 10]. The Jordan-Wigner transformation for this case results in the following mapping between the fermionic operator algebra and the qubit spin algebra:

$$a_p^\dagger \leftrightarrow \left( \prod_{j=1}^{p-1} \sigma_j^z \right) \sigma_p^{-}, \qquad a_p \leftrightarrow \left( \prod_{j=1}^{p-1} \sigma_j^z \right) \sigma_p^{+}, \qquad (4)$$

where $\sigma^{\pm} = (\sigma^x \pm i\sigma^y)/2$.
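The transformation can be verified directly on a small register: building the mapped operators as Kronecker products of Pauli matrices, the fermionic anticommutation relations hold exactly. A sketch for three modes (the $\sigma^z$-string convention here is one standard choice):

```python
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

I = np.eye(2)
Z = np.diag([1.0, -1.0])
s = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # |0><1|: annihilates an occupied (|1>) mode

M = 3  # spin-orbitals / qubits

def a(p):
    """Jordan-Wigner annihilation operator for mode p (0-indexed)."""
    return kron_all([Z] * p + [s] + [I] * (M - p - 1))

# Verify the canonical fermionic anticommutation relations
for p in range(M):
    for q in range(M):
        anti = a(p) @ a(q).conj().T + a(q).conj().T @ a(p)
        expected = np.eye(2**M) if p == q else np.zeros((2**M, 2**M))
        assert np.allclose(anti, expected)          # {a_p, a_q^dag} = delta_pq
        assert np.allclose(a(p) @ a(q) + a(q) @ a(p), 0)  # {a_p, a_q} = 0
print("fermionic anticommutation relations hold")
```

Without the $\sigma^z$ strings, operators on different qubits would commute rather than anticommute, which is exactly the failure described above.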

The electronic Hamiltonian in the second-quantized form is

$$H = \sum_{pq} h_{pq}\, a_p^\dagger a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs}\, a_p^\dagger a_q^\dagger a_r a_s, \qquad (5)$$

where the spin-orbital indices $p, q, r, s$ each range from 1 to $M$. Here, the one-electron integrals $h_{pq}$ involve the electronic kinetic energy and the nuclear-electron interaction, and the two-electron integrals $h_{pqrs}$ contain the electron-electron interaction term. For simulation on a quantum computer, this Hamiltonian is recast into the spin algebra using the Jordan-Wigner transformation, Eq. 4, and the time-evolution it generates is implemented using the Trotter decomposition, Eq. 3. Note that $H$ contains $O(M^4)$ terms, and each of these terms generates a time evolution of the form

$$e^{-i h_{pq} a_p^\dagger a_q \delta t} \quad \text{or} \quad e^{-i h_{pqrs} a_p^\dagger a_q^\dagger a_r a_s \delta t}. \qquad (6)$$

Each of these operators requires $O(M)$ elementary quantum gates to implement because of the $\sigma^z$ strings introduced by the Jordan-Wigner transformation. Since there are altogether $O(M^4)$ terms that need to be implemented separately, the total cost of simulating $e^{-iHt}$ scales as $O(M^5)$ [131].

While any basis can be chosen to represent $H$, it is desirable to choose a basis as small as possible that adequately represents the system under study. Electronic-structure experience provides many good starting points, such as the Hartree-Fock basis or the natural orbitals [33]. No matter which basis is chosen, a lot of the computation can be carried out on classical computers as pre-processing. In particular, the coefficients $h_{pq}$ and $h_{pqrs}$ can be efficiently pre-computed on classical computers. That way, only the more computationally demanding tasks are left for the quantum computer.

Using the Hartree-Fock basis allows us to use the Hartree-Fock reference state as an input to the quantum computation [10]. A salient feature is that such states are Fock states, which are easy to prepare on the quantum computer: some qubits are initialized to $|1\rangle$ (occupied) and the others to $|0\rangle$ (unoccupied). In fact, any single-determinant state can be easily prepared in this way. Furthermore, it is possible to prepare superpositions of Fock basis states as inputs for the quantum computation. While an arbitrary state might be difficult to prepare, many states of interest, including those with only polynomially many determinant contributions, can be prepared efficiently [99, 118, 119, 127]. The problem of preparing an initial state that is close to the true molecular ground state is addressed in Sec. 3.4.

The chief advantage of the second-quantization method is that it is frugal with quantum resources: only one qubit per basis state is required, and the integrals can be pre-computed classically. For this reason, the first chemical quantum computation was carried out in second quantization (see Sec. 5). Nevertheless, there are processes, such as chemical reactions, which are difficult to describe in a small, fixed basis set, and for this we turn to discussing first-quantization methods.

3.2 First quantization

The first-quantization method, due to Zalka [138, 132, 62], simulates particles governed by the Schrödinger equation on a grid in real space. For a single particle in one dimension, space is discretized into $2^n$ points, which, when represented using $n$ qubits, range from $|0\rangle$ to $|2^n - 1\rangle$. The particle’s wavefunction can be expanded in this position representation as $|\psi\rangle = \sum_{x=0}^{2^n-1} a_x |x\rangle$. The Hamiltonian to be simulated is

$$H = T + V = \frac{\hat{p}^2}{2m} + V(\hat{x}), \qquad (7)$$

and the resulting unitary can be implemented using the quantum version of the split-operator method [42, 76]:

$$e^{-iH\delta t} \approx e^{-iV\delta t}\, e^{-iT\delta t} = e^{-iV\delta t}\, \mathrm{QFT}^{\dagger}\, e^{-i \hat{p}^2 \delta t / 2m}\, \mathrm{QFT}. \qquad (8)$$

The operators $V$ and $T$ are diagonal in the position and momentum representations, respectively. A diagonal operator can be easily implemented because it amounts to adding a phase $e^{-iV(x)\delta t}$ to each basis state $|x\rangle$. Furthermore, it is easy on a quantum computer to switch between the position and momentum representations of a wavefunction using the efficient quantum Fourier transform. Therefore, simulating a time evolution for time $t$ involves alternately applying $e^{-iV\delta t}$ and $e^{-iT\delta t}$, with the time steps $\delta t$ chosen sufficiently short to secure a desired accuracy. Finally, the scheme can be easily generalized to many particles in three dimensions: a system of $B$ particles requires $3Bn$ qubits, $n$ for each degree of freedom.
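The split-operator propagation of Eq. 8 has a direct classical analogue in which the classical FFT stands in for the QFT. A sketch for a Gaussian wavepacket in a harmonic well (units with $\hbar = m = 1$; grid and potential chosen for illustration):

```python
import numpy as np

# 1D split-operator propagation: alternate diagonal phases in position and
# momentum space, switching between them with the (classical) FFT
N = 256
x = np.linspace(-10, 10, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # momentum grid
V = 0.5 * x**2                            # potential: diagonal in position
T = 0.5 * p**2                            # kinetic term: diagonal in momentum

psi = np.exp(-(x - 2.0)**2)               # displaced Gaussian wavepacket
psi = psi.astype(complex) / np.linalg.norm(psi)

dt = 0.01
for _ in range(1000):                     # evolve to t = 10
    psi = np.exp(-1j * V * dt) * psi      # phase in the position basis
    psi = np.fft.fft(psi)                 # switch to the momentum basis
    psi = np.exp(-1j * T * dt) * psi      # phase in the momentum basis
    psi = np.fft.ifft(psi)                # back to the position basis

print(np.linalg.norm(psi))                # unitary evolution preserves norm
```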

The first-quantization method can be applied to many problems. The earliest applications established that as few as 10-15 qubits would be needed for a proof-of-principle demonstration of single-particle dynamics [120] (later improved to 6-10 [15]). The method could also be used to faithfully study the chaotic dynamics of the kicked rotor model [81]. The first chemical application was the proposal of a method for the calculation of the thermal rate constant [83] (see Sec. 3.3).

We investigated the applicability of the first-quantization method to the simulation of chemical dynamics [62]. The simplest approach is to consider all the nuclei and electrons explicitly, in which case the exact non-relativistic molecular Hamiltonian reads

$$H = -\sum_i \frac{\nabla_i^2}{2m_i} + \sum_{i<j} \frac{q_i q_j}{r_{ij}}, \qquad (9)$$

where $r_{ij}$ is the distance between particles $i$ and $j$, which carry charges $q_i$ and $q_j$, respectively. As before, the split-operator method can be used to separate the unitaries that are diagonal in the position and momentum bases. Note that a Jordan-Wigner transformation is not required; $H$ preserves permutational symmetry, meaning that if the initial state is properly (anti-)symmetrized (see below), it will remain so throughout the simulation.

Since the Born-Oppenheimer approximation (BOA) has been widely used in quantum chemistry, it might seem extravagant to explicitly simulate all the nuclei and electrons. Nevertheless, the exact simulation is, in fact, faster than using the BOA for reactions with more than about four atoms [62]. The reason for this is the need to evaluate the potential on the fly on the quantum computer. In the exact case, the potential is simply the pairwise Coulomb interaction; on the other hand, evaluating the complicated, many-body potential energy surfaces that are supplied by the BOA is a much more daunting task, even considering that one can use nuclear time-steps that are about a thousand times longer. That is, exact simulation minimizes arithmetic, which is the bottleneck of the quantum computation; by contrast, the bottleneck on classical computers is the prohibitive scaling of the Hilbert space size, which is alleviated by the BOA.

In order to carry out simulations, it is important to prepare suitable initial states. Zalka’s original paper [138] contained a very general state-preparation scheme, later rediscovered [48, 64, 71] and improved [114]. The scheme builds the state one qubit at a time by performing a rotation (dependent on the previous qubits) that redistributes the wavefunction amplitude as desired. For example, Gaussian wavepackets or molecular orbitals can be constructed efficiently. We discussed how to combine such single-particle wavefunctions into many-particle Slater determinants, superpositions of determinants, and mixed states in [129]. In particular, the (anti-)symmetrization algorithm of [2] was improved and used to prepare Slater determinants necessary for chemical simulation. Furthermore, we outlined a procedure for translating states that are prepared in second-quantization language into first-quantized wavefunctions, and vice versa. Techniques for preparing ground and thermal initial states are discussed in Sec. 3.4.

The first-quantization approach to quantum simulation suffers from the fact that even the simplest simulations might require dozens of qubits and millions of quantum gates [62]. Nevertheless, it has advantages that would make it useful if large quantum computers are built. Most importantly, because the Coulomb interaction is pairwise, simulating a system of $B$ particles requires $O(B^2)$ gates per time step, a significant asymptotic improvement over the second-quantized scaling of $O(M^5)$, where $M$ is the size of the basis set.

3.3 Measuring observables

We have discussed how to prepare and evolve quantum states on a quantum computer. Information about the resulting state must be extracted in the end; however, full characterization (quantum state tomography) generally requires resources that scale exponentially with the number of qubits. This is because a measurement projects a state into one consistent with the measurement outcome. Because only a limited amount of information can be extracted efficiently, one needs a specialized measurement scheme to extract the desired observables, such as dipole moments, correlation functions, etc.

In principle, an individual measurement can be carried out in any basis. However, since experimental measurement techniques usually address individual qubits, a method is needed to carry out more complicated measurements. In particular, in order to measure an observable $A$, one would like to carry out a measurement in its eigenbasis $\{|\phi_k\rangle\}$. This is achieved by the phase estimation algorithm (PEA) [68, 3]:

$$\left( \sum_k c_k |\phi_k\rangle \right) |0\rangle \longrightarrow \sum_k c_k |\phi_k\rangle |A_k\rangle, \qquad (10)$$

where the $A_k$ are the eigenvalues of $A$ corresponding to the eigenvectors $|\phi_k\rangle$; the pseudo-dynamics $e^{iAt}$ is the unitary controlled by the ancilla qubits, which are initialized in the state $|0\rangle$. When measuring the ancilla, the eigenvalue $A_k$ will be obtained with probability $|c_k|^2$ and, if the eigenstates are non-degenerate, the wavefunction will collapse to the eigenvector $|\phi_k\rangle$. For the PEA to be efficient, it must be possible to simulate the pseudo-dynamics $e^{iAt}$ efficiently. In particular, if we are interested in molecular energies, the observable is the Hamiltonian $H$, and we need to simulate the actual dynamics $e^{iHt}$ (see Sec. 3). Note that the PEA is closely related to classical algorithms for preparing eigenstates by Fourier analysis of a propagating system [34, 122]. As in classical Fourier analysis, the (pseudo-)dynamics must be simulated for longer in order to achieve a higher precision in the $A_k$. More precisely, for a final accuracy of $\epsilon$, the PEA must run for a time $t \sim O(1/\epsilon)$ [94, 23].
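The ancilla-measurement statistics of the PEA can be simulated directly from the standard interference formula: with $n$ ancilla qubits, the probability of reading out $k$ for an eigenphase $\varphi$ is $\left| \frac{1}{2^n} \sum_x e^{2\pi i x (\varphi - k/2^n)} \right|^2$. A sketch for a phase that is exactly representable in 4 bits:

```python
import numpy as np

n = 4                    # ancilla qubits -> an n-bit phase estimate
N = 2**n
phase = 5 / 16           # eigenphase of U|phi> = e^{2 pi i phase}|phi>

k = np.arange(N)
amps = np.array([np.sum(np.exp(2j * np.pi * np.arange(N) * (phase - kk / N))) / N
                 for kk in k])
probs = np.abs(amps)**2
print(k[np.argmax(probs)], probs.max())   # outcome 5 with probability ~1
```

For phases that are not exactly representable, the distribution instead peaks around the nearest $n$-bit value, which is why longer (pseudo-)dynamics, i.e. more ancilla bits, are needed for higher precision.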

Because quantum measurement is inherently random, repeating a measurement on multiple copies of the same system helps in determining expectation values of observables. The central limit theorem implies that measuring $m$ copies of a state results in a precision that scales as $1/\sqrt{m}$ (the standard quantum limit, SQL). For example, repeating the PEA gives an SQL estimate of the coefficients $|c_k|^2$; these can be used to calculate the expectation value $\langle A \rangle = \sum_k |c_k|^2 A_k$, also to the SQL. When entanglement is available, one can achieve precision scaling as $1/m$—this is the Heisenberg or quantum metrology limit [47]. An algorithm for the expectation values of observables has been proposed that can get arbitrarily close to the Heisenberg limit [72].

The first algorithm for measuring a chemical observable was Lidar and Wang’s calculation of the thermal rate constant by simulating a reaction in first quantization and using the PEA to obtain the energy spectrum and the eigenstates [83]. These values were used to calculate the rate constant on a classical computer by integrating the flux-flux correlation function. We improved on this method with a more direct approach to the rate constant [62]. We showed how to efficiently obtain the product branching ratios given different reactant states—if the initial state is a thermal state (see Sec. 3.4.2), this gives the rate constant directly. Furthermore, the method was used to obtain the entire state-to-state scattering matrix. A method for reaction rates using a dedicated quantum simulator, in which artificial molecules are experimentally manipulated, has also been proposed [113].

More generally, correlation functions provide information about a system’s transport and spectroscopic properties. On a quantum computer, the correlation function of any two observables can be estimated efficiently if their pseudo-dynamics can each be simulated efficiently [99, 119]. The method does not suffer from the dynamic sign problem that plagues classical Monte Carlo methods for sampling correlation functions. An alternative approach is the measurement of correlation functions using techniques of linear-response theory [124].

Molecular properties such as the dipole moment or the static polarizability are also of chemical interest. They are derivatives of the molecular energy with respect to an external parameter, such as the electric field. We showed how to calculate them [61] using the PEA and the quantum gradient algorithm [59]. The algorithm is insensitive to the dimensionality of the derivatives, an obstacle to classical computers. For example, the molecular gradient and Hessian can be computed—and used to optimize the geometry—with a number of energy evaluations independent of system size.

3.4 Preparing ground states and thermal states

In Secs. 3.1 and 3.2, we discussed the preparation of various initial states for quantum simulation. We postponed discussing the preparation of ground and thermal states because of subtleties to which we now turn.

Ground state preparation by phase estimation

A large part of quantum chemistry is concerned with the calculation of ground-state properties of molecules, making it desirable to prepare such states on a quantum computer. In the previous section, we described how the PEA can be used to measure a quantum state in the eigenbasis of a Hermitian operator. This suggests a method for preparing a ground state: measuring a state $|\psi\rangle$ in the eigenbasis of the Hamiltonian will project it to the ground state $|\phi_0\rangle$ with probability $|\langle \phi_0 | \psi \rangle|^2$.

The problem, therefore, is to prepare a state $|\psi\rangle$ close to the ground state, from which we can project out the ground-state component. Choosing a random state is bound to fail, since the overlap is expected to be exponentially small in the number of qubits $n$: $|\langle \phi_0 | \psi \rangle|^2 \sim 2^{-n}$. This means that one would have to repeat the PEA exponentially many times before chancing upon the ground state.
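The $2^{-n}$ overlap scaling is easy to see numerically by sampling random states (a classical illustration, not part of the original review):

```python
import numpy as np

# Mean squared overlap of a Haar-like random state with a fixed state |0...0>
rng = np.random.default_rng(0)
mean_overlap = {}
for n in (4, 8, 12):
    dim = 2**n
    samples = []
    for _ in range(200):
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        psi /= np.linalg.norm(psi)
        samples.append(abs(psi[0])**2)   # squared overlap with |0...0>
    mean_overlap[n] = np.mean(samples)
    print(n, mean_overlap[n], 1 / dim)   # the mean tracks 1/2^n
```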

Methods of quantum chemistry can be used to improve the overlap. We studied the ground-state preparation of H$_2$O and LiH in second quantization, based on the Hartree-Fock (HF) approximation [10]. The goal was to prepare the ground state of the full configuration interaction (FCI) Hamiltonian, so that its energy could be read out by the PEA, thus solving the electronic structure problem. Since these molecules were considered at equilibrium geometries, the HF guess was sufficient for the algorithm to estimate the ground-state energies of these molecules with high probability. The overlap can be further improved by choosing a more sophisticated approximation method such as a multi-configuration self-consistent field (MCSCF) wavefunction [128].

Alternatively, the overlap can be increased using adiabatic quantum computation (Sec. 2.2.2). We applied adiabatic state preparation (ASP) to the case of the hydrogen molecule H$_2$ in the STO-3G basis at various bond lengths [10]. As the bond length increases, the HF state has decreasing overlap with the exact state, reaching 0.5 at large separations. ASP works by preparing the ground state of the HF Hamiltonian and then slowly changing to the FCI Hamiltonian. The speed of the variation of the Hamiltonian is limited by the energy gap between the ground state and the first excited state. In the case of H$_2$, this method allowed the preparation of the FCI ground state with a high fidelity.

Procedures similar to ASP have been proposed to study low-energy states of some toy models in physics [97] and superconductivity [134]. It is also possible to encode a thermal state into the ground state of a Hamiltonian [5, 116], offering a way to prepare a thermal state, a problem further discussed in the next section.

Thermal state preparation

While not often a subject of quantum-chemical calculations, thermal states are of significance because they can be used to solve many problems, ranging from statistical mechanics to the calculation of thermal rate constants. Classical algorithms typically rely on Markov-chain Monte Carlo (MCMC) methods, which sample from the Gibbs density matrix, $\rho = e^{-\beta H}/Z$, where $Z = \mathrm{Tr}\, e^{-\beta H}$ is the partition function. The challenge is that sampling from the eigenstates of a quantum Hamiltonian is generally impossible unless the eigenstates are pre-determined, which is often even more difficult.

With a quantum computer, assuming the PEA can be efficiently implemented, we can prepare the thermal state of any classical or quantum Hamiltonian from a Markov chain constructed by repeatedly applying a completely positive map [124]. A limitation of that approach is that the Metropolis step can make too many transitions between states of very different energies, sometimes leading to a slow convergence rate of the resulting Markov chain. This issue was addressed by building up the Markov chain from random local unitary operations [123]. The resulting operation is a Metropolis-type sampling for quantum states; although the underlying Markov chain is classical in nature, performing it on a quantum computer provides the benefit of being able to use the PEA without explicitly solving the eigenvalue equations. Furthermore, quantum computers can implement Markov chains corresponding to thermal states of classical Hamiltonians with a quadratic speed-up [121, 117, 133, 106, 107, 26].

Zalka’s state preparation algorithm (see Sec. 3.2) is applicable to preparing the coherent encoding of thermal states (CETS),

|ψ_CETS⟩ = Σ_k √(e^(−βE_k)/Z) |k⟩|k⟩,

which is equivalent to the Gibbs density matrix, ρ = e^(−βH)/Z, if one register is traced out. If the eigenstates and eigenvalues are known, it is possible to construct the CETS directly [129]. On the other hand, combining ideas from belief propagation [87] and quantum amplitude amplification [63], we were able to construct the CETS of classical Hamiltonians with a quadratic quantum speedup [137].
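The equivalence between the CETS and the Gibbs state can be checked numerically on a toy example with known eigenvalues (the regime where direct construction [129] applies). The two-level spectrum below is illustrative; the sketch builds the amplitudes √(e^(−βE_k)/Z) on two registers and traces one out.

```python
import numpy as np

E = np.array([0.0, 1.0])     # illustrative eigenvalues of a toy Hamiltonian
beta = 2.0
p = np.exp(-beta * E)
p /= p.sum()                 # Boltzmann weights p_k = e^(-beta E_k) / Z

# CETS on two registers: |psi> = sum_k sqrt(p_k) |k>|k>
d = len(E)
psi = np.zeros((d, d))
for k in range(d):
    psi[k, k] = np.sqrt(p[k])

# Partial trace over the second register: rho[a, c] = sum_b psi[a, b] psi[c, b],
# which should reproduce the diagonal Gibbs density matrix.
rho_sys = psi @ psi.T
```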

Lastly, a thermal state can be prepared by modeling the physical interaction with a heat bath [138, 124]. However, the computational cost of these methods is not well understood.

QMA-hardness and future prospects

Unfortunately, the procedures for ground- and thermal-state preparation outlined above are not fully scalable to larger systems. A simple way to see this is to imagine a system composed of N identical, non-interacting molecules. Even if one molecule can be prepared with a ground-state overlap of f by any method, the fidelity of the N-molecule state will be exponentially small, f^N [73]. ASP would fail because the energy gap eventually becomes so small that the Hamiltonian would have to be varied exponentially slowly.

More generally, there are broad classes of Hamiltonians for which finding the ground-state energy (and therefore also a thermal state) is known to be QMA-hard, that is, most likely hard even on a quantum computer (see Sec. 2.3) [70, 4, 66, 98, 111]. Nevertheless, the scaling of the ground- and thermal-state energy problems for chemical systems on a quantum computer is an open question. It is possible that algorithms can be found that are not efficient for all QMA-hard Hamiltonians but nevertheless succeed for chemical problems.

4 Optimization With Adiabatic Quantum Simulation

Figure 3: Lattice folding using quantum adiabatic hardware: In vivo folding of proteins involves the assistance of molecular chaperone proteins, whose main function is to aid the folding of the newly synthesized polypeptide. (a) In vacuo, the four-amino-acid peptide can fold clockwise or counterclockwise, depending on whether the third amino acid moves downwards or upwards, respectively. (b) The chaperone molecule (represented as the pink region) obstructs the third amino acid from moving downwards, making the counterclockwise fold the only global minimum in the energy landscape. (c) Energy landscape (Eq. 14): Each overlap of an amino acid with the chaperone raises the free energy by four units, while an overlap among amino acids in the chain raises the free energy by two units. The four binary variables encode the directions of the second and third bonds of the peptide (see text). The quantum adiabatic hardware found the right solution 78% of the time [103].

We describe the use of quantum computers to solve classical optimization problems related to chemistry and biology. This class of problems plays an important role in fields such as drug design, molecular recognition, geometry optimization, and protein folding [53, 44].

Of all the models of quantum computation, adiabatic quantum computation (AQC) is perhaps the best suited for dealing with discrete optimization problems. As explained in Sec. 2.2.2, the essential idea behind AQC is to encode the solution to a computational problem in a (final) Hamiltonian ground state which is prepared adiabatically.

Although final Hamiltonians have been proposed for various problems related to computer science [41, 56, 93, 92, 29], only recently did we derive constructions [102] for problems of chemical interest, such as the lattice heteropolymer problem [100, 88, 75], an NP-hard problem [52]. It can be used as a model of protein folding [37], one of the cornerstones of biophysics. Note that the quantum-computational implementation of the protein folding problem does not assume that the protein is treated quantum mechanically. Instead, the quantum computer is used as a tool to solve a classical optimization problem. In the lattice folding problem, the sequence of amino acids is coarse-grained to a sequence of beads (amino acids) connected by strings (peptide bonds). This chain of beads occupies points on a two- or three-dimensional lattice; a valid configuration (fold) is a self-avoiding walk on the lattice, and its energy is determined by the interaction energies among amino acids that are non-bonded nearest neighbors on the lattice. The hydrophobic-polar (HP) model [80] is the simplest realization of this problem. The amino acids are divided into two groups, hydrophobic (H) and polar (P). Whenever two non-bonded H amino acids are nearest neighbors on the lattice, the free energy of the protein is reduced by one unit of energy; the remaining interactions do not contribute to the free energy. The lattice folding problem consists of finding one or more folds that minimize the free energy of the protein. By the thermodynamic hypothesis [39], such folds correspond to the native conformation(s) of the protein.

The theory behind the quantum-computational implementation of lattice folding is guided by the proposed quantum adiabatic platform based on superconducting qubits [60]. This scheme is designed to find solutions to the problem,

E(s_1, …, s_N) = Σ_i h_i s_i + Σ_{i<j} J_{ij} s_i s_j,    (12)

where s_i ∈ {−1, +1}, and the local fields h_i and the couplings J_{ij} are real. Given a set of h_i and the interaction matrix J_{ij}, the goal is to find the assignment s = (s_1, …, s_N) that minimizes E.
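For intuition about Eq. 12, a brute-force minimizer makes the problem statement concrete; the three-spin instance below is arbitrary, and exhaustive search over all 2^N assignments is exactly the exponential scaling that motivates the adiabatic approach.

```python
import itertools

def ising_energy(s, h, J):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j  (Eq. 12)."""
    n = len(s)
    E = sum(h[i] * s[i] for i in range(n))
    E += sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    return E

def brute_force_ground_state(h, J):
    """Exhaustive search over all 2^n assignments (tractable only for tiny n);
    this is the classical baseline the adiabatic algorithm competes with."""
    n = len(h)
    return min(itertools.product([-1, +1], repeat=n),
               key=lambda s: ising_energy(s, h, J))

# Illustrative 3-spin instance with arbitrary couplings.
h = [0.5, -0.2, 0.0]
J = [[0, 1.0, 0], [0, 0, -0.5], [0, 0, 0]]
s_best = brute_force_ground_state(h, J)
```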

The time-dependent Hamiltonian is chosen to be,

H(t) = A(t) H_B + B(t) H_P,    (13)

where H_B = −Σ_i σ_x^i has a simple-to-prepare ground state and H_P is the problem Hamiltonian obtained from Eq. 12 by promoting each s_i to σ_z^i, with σ_x^i denoting the Pauli x matrix acting on the ith qubit, and T is the running time. The time-dependent functions A(t) and B(t) are such that A(0) = B(T) = 1 and A(T) = B(0) = 0. Therefore, at the beginning (end) of the simulation, the ground state corresponds to the ground state of H_B (H_P). Note that, as desired, the state encoding the minimizing assignment is the ground state of H_P. Measurement of this final state provides the solution to our problem.
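The adiabatic sweep can be illustrated numerically on a single qubit. Everything below is an assumption made only for the sketch: a linear schedule, H_B = −σ_x (ground state |+⟩), H_P = σ_z (ground state |1⟩ standing in for the solution state), and an arbitrary runtime. The sketch integrates the Schrödinger equation with piecewise-constant Hamiltonians.

```python
import numpy as np

def expm_h(H, dt):
    """exp(-i H dt) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H_B, H_P = -sx, sz

T, steps = 50.0, 2000                        # total runtime and time steps
dt = T / steps
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # ground state of H_B
for n in range(steps):
    t = (n + 0.5) * dt
    A, B = 1 - t / T, t / T                  # linear schedule: A(0)=B(T)=1
    psi = expm_h(A * H_B + B * H_P, dt) @ psi

fidelity = abs(psi[1]) ** 2                  # overlap with the ground state of H_P
```

Shortening T (or closing the minimum gap) lowers the final fidelity, which is exactly the failure mode the adiabatic condition guards against.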

The theoretical challenge is to map the lattice-folding free energy function into the form of Eq. 12 [102, 104]. In two dimensions, we use two binary variables to determine the direction of each bond between two amino acids (beads). If a particular bond points upwards, we write “11”; if it points downwards, leftwards, or rightwards, we write “00”, “10”, or “01”, respectively. For an N-amino-acid protein, we need two binary variables for each of the N − 1 bonds. Fixing the direction of the first bond reduces the count to 2(N − 1) − 2 = 2N − 4 binary variables. Any possible N-bead fold can be represented by such a string of binary variables, where we set the direction of the first bond to be rightwards (“01”).
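A minimal sketch of this encoding, using the move table stated in the text (the function names are illustrative): each pair of bits is decoded into a lattice step, and a fold is valid only if the resulting walk is self-avoiding.

```python
# Two-bit direction codes from the text: "11"=up, "00"=down, "10"=left, "01"=right.
MOVES = {"11": (0, 1), "00": (0, -1), "10": (-1, 0), "01": (1, 0)}

def decode_fold(bits):
    """Turn a direction string (e.g. '010111') into lattice coordinates of
    the beads, starting from the origin."""
    pos = [(0, 0)]
    for i in range(0, len(bits), 2):
        dx, dy = MOVES[bits[i:i + 2]]
        x, y = pos[-1]
        pos.append((x + dx, y + dy))
    return pos

def is_self_avoiding(pos):
    """A valid fold visits each lattice site at most once."""
    return len(set(pos)) == len(pos)

# Four-amino-acid peptide: 3 bonds, first fixed to "01" (rightwards).
fold = decode_fold("01" + "01" + "11")   # right, right, up
```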

As an example, the free energy function [104] associated with the folding of a four-amino-acid peptide assisted by a “chaperone” protein (see Fig. 3) is


By substituting values for the four binary variables defining the directions of the second and third bonds, we can verify that the 16 assignments produce the desired energy spectrum (Fig. 3). Eq. 14 is not in the form of Eq. 12. We converted this energy function from its quartic form to a quadratic form using two extra ancilla binary variables [104]. After the substitution, the free energy function takes the form of Eq. 12. An early experimental realization is described in Sec. 5.
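One standard gadget for such quartic-to-quadratic reductions (not necessarily the exact substitution of [104]) introduces an ancilla a intended to equal a product b1 b2, enforced by a quadratic penalty that vanishes only on consistent assignments; applying it twice turns a quartic term b1 b2 b3 b4 into the quadratic a1 a2. The check below verifies the penalty exhaustively.

```python
import itertools

def penalty(b1, b2, a):
    """Quadratic penalty, zero iff a == b1*b2 and >= 1 otherwise; adding a
    sufficiently large multiple of it to the energy enforces the constraint."""
    return 3 * a + b1 * b2 - 2 * a * (b1 + b2)

# Exhaustive check over all 8 assignments of (b1, b2, a).
ok = all((penalty(b1, b2, a) == 0) == (a == b1 * b2)
         for b1, b2, a in itertools.product([0, 1], repeat=3))
```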

Note that solving the HP model is NP-hard [16, 32, 52]. AQC is equivalent to the circuit model, so it is unlikely to be able to solve NP-hard problems in polynomial time (see Sec. 2.3). However, real-world problems (and the instances corresponding to biologically relevant proteins) are not necessarily structureless. Taking advantage of the structure of, or information about, a particular problem instance is one of the ideas behind new algorithmic strategies [8, 40, 105]. An example is to introduce heuristic strategies for AQC by initializing the calculation with an educated guess [105].

5 Experimental Progress

Experimental quantum simulation has rapidly progressed [24, 22] since the early simulation of quantum oscillators using nuclear magnetic resonance (NMR) [115]. Here we review chemical applications of available quantum-computational devices.

Quantum optics

On an optical quantum computer, various degrees of freedom of single photons, such as polarization or path, are used to encode quantum information [96, 74]. This architecture was used for the first quantum simulation of a molecular system, a minimal-basis model of the hydrogen molecule H₂ [79]. Qubits were encoded in photon polarization, while two-qubit gates were implemented probabilistically using linear-optical elements and projective measurement. The minimal-basis description of H₂ used two spin-orbitals per atom. Since the FCI Hamiltonian is block-diagonal, with blocks no larger than 2 × 2, two qubits sufficed for the experiment: one for storing the system wavefunction and one for the readout of the PEA. The PEA was implemented iteratively, extracting one bit of the energy at a time. Twenty bits of the energy were obtained, and the answer was exact within the basis set. Fig. 4 describes the experiment and the potential energy surfaces that were obtained.
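The iterative readout can be sketched classically for a phase that is an exact binary fraction: the quantum circuit is replaced by its ideal measurement statistics, and the function name and feedback schedule are illustrative assumptions, not the exact experimental protocol.

```python
import math

def iterative_pea(phase, n_bits):
    """Classically simulate ideal iterative phase estimation: recover n_bits
    of a phase in [0, 1), least-significant bit first, from the measurement
    statistics of the standard one-ancilla circuit with phase feedback."""
    bits = []
    feedback = 0.0
    for k in range(n_bits, 0, -1):
        # Residual phase after applying U^(2^(k-1)) and the feedback rotation.
        theta = 2 ** (k - 1) * phase - feedback
        p1 = 0.5 * (1 - math.cos(2 * math.pi * theta))
        bit = 1 if p1 > 0.5 else 0          # ideal (noise-free) readout
        bits.append(bit)
        feedback = feedback / 2 + bit / 4   # rotate out the bit just found
    return bits[::-1]                        # most-significant bit first

bits = iterative_pea(0.8125, 4)              # 0.8125 = 0.1101 in binary
```

Each iteration measures one bit, least significant first, with previously found bits removed through the feedback phase; the H₂ experiment extracted 20 bits of the energy this way.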

Figure 4: Experimental simulation of the H₂ molecule on a linear-optical quantum computer [79]. a) Two-qubit iterative version of the phase estimation algorithm for evaluating molecular energies. b) Decomposition of the algorithm into gates. c) The layout of the optical elements used to implement the quantum gates on photonic polarization qubits. d) The computed potential energy surfaces of the H₂ molecule in a minimal basis set. The results are the exact (in the basis) full configuration interaction energies, to 20 bits of precision.

Nuclear magnetic resonance

Nuclear spins can serve as qubits, addressed and read out using an NMR spectrometer [14]. The first experimental quantum simulation, of a harmonic oscillator, was performed using NMR [115]. The platform has since been used to simulate a number of model systems [91, 23, 135, 101], leading up to the recent simulation of H₂ [38]. The H₂ experiment used ¹³C-labeled chloroform, in which the carbon and hydrogen nuclear spins form two qubits. The experiment achieved 45 bits of precision in the ground-state energy (15 iterations of the PEA, 3 bits per iteration). Adiabatic state preparation (Sec. 3.4) was implemented for various bond distances.

Superconducting systems

The circulating current (clockwise or counterclockwise) flowing in a micron-sized loop of a superconductor can be used as a qubit [136, 82]. Examples of applications based on superconducting qubits include the tailor-made generation of harmonic-oscillator states [55] and the implementation of the Deutsch-Jozsa and Grover quantum algorithms [36]. Recently, the free energy function discussed in Sec. 4 for the four-amino-acid peptide assisted by a chaperone protein (see Fig. 3) was experimentally realized [103]. A microprocessor consisting of an array of coupled superconducting qubits was used to implement the time-dependent Hamiltonian of Eq. 13, with a transverse-field Hamiltonian as the initial Hamiltonian [60, 58, 49]. The quantum hardware, operating at a temperature of 20 mK, found the correct solution with a probability of 78%. Characterization of this device is currently underway [51, 17, 50, 78].

Trapped ions

Qubits can also be encoded in the electronic states of cold trapped ions, offering one of the most controllable systems available today [30, 57, 20]. This platform has already produced sophisticated simulations of physical systems [45, 67, 46], but chemical applications are still to come.

6 Conclusions

We have outlined how a quantum computer could be employed for the simulation of chemical systems and their properties, including correlation functions and reaction rates. A method for lattice protein folding was also discussed. Although we focused on the adiabatic and circuit models, these are not the only universal models of quantum computation and it may be possible to make further algorithmic progress with models such as topological quantum computing [69, 90], one-way quantum computing [108, 109], and quantum walks [65, 28, 85].

We reported on the first experiments relevant to chemistry and we expect more to come in the near future. With recent technological advances, there are many prospects for the future of quantum simulation. However, as more qubits are added to experiments, more effort will be needed to control decoherence, since error correction procedures [94] might not be sufficient in practice due to the spatial and temporal overheads required [31]. Instead, it may be possible to build resilient quantum simulators or to incorporate the noise into the simulation.

Although practical quantum computers are not available yet, quantum information theory has already influenced the development of new methods for quantum chemistry. For instance, density matrix renormalization group has been extended using quantum information, and its applications to chemistry have been vigorously pursued [25]. By studying the simulation of chemical systems on quantum computers, we can also expect new insights into the complexity of computing their properties classically.

In analogy to classical electronics, one could say that, as of 2010, the implementation of quantum information processors is in the vacuum-tube era. A development parallel to that of the transistor would allow for rapid progress in the capabilities of quantum information processors. These larger devices would allow for routine execution of exact, non-adiabatic dynamics simulations, as well as full-configuration interaction calculations of molecular systems that are intractable with current classical computing technology.

Summary Points

  • A universal quantum computer can simulate chemical systems more efficiently (in some cases exponentially so) than a classical computer.

  • Preparing the ground state of an arbitrary Hamiltonian is a hard (QMA-complete) problem. However, the ground state of certain chemical Hamiltonians can be found efficiently using quantum algorithms.

  • Simulation of quantum dynamics of physical systems is in general efficient with a quantum computer.

  • Properties of quantum states can be obtained by various measurement methods.

  • Classical optimization problems, such as lattice protein folding, can be studied by means of the adiabatic quantum-computational model.

  • Quantum simulation for chemistry has been experimentally realized in quantum optics, nuclear magnetic resonance, and superconducting devices.

Future Issues

  • Developing quantum simulation methods based on alternative models of quantum computation is an open research direction.

  • Dedicated quantum simulators built so far are mostly for simulating condensed matter systems. It is desirable to make experimental progress on simulating chemical systems.

  • Decoherence is currently the major obstacle for scaling up the current experimental setups. Progress in theoretical and experimental work is needed to overcome decoherence.

  • We have not covered methods of quantum error correction, which will be important for large scale simulations.


  1. Strictly speaking, decision problems, those with a yes-or-no answer. However, other problems can be recast as decision problems; for example, instead of asking “What is the ground-state energy of molecule M?” we might ask “Is the ground-state energy of M less than E?”
  2. The terms “analog” and “digital” have also been used for dedicated and universal quantum simulation, respectively [24].
  3. Non-unitary open-system dynamics have been studied as well [11].
  4. We also note the method of [21], which in our terminology is a hybrid between second- and first-quantization methods. It associates a qubit to the occupation of each lattice site.
  5. Other methods for eigenvalue measurement include pairing adiabatic quantum evolution with Kitaev’s original scheme [19] and applications of the Hellmann-Feynman theorem [97].


  1. S. Aaronson. The limits of quantum computers. Sci. Am., March:62, 2008.
  2. D. S. Abrams and S. Lloyd. Simulation of many-body Fermi systems on a universal quantum computer. Phys. Rev. Lett., 79(13):2586–2589, 1997.
  3. D. S. Abrams and S. Lloyd. Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors. Phys. Rev. Lett., 83(24):5162–5165, 1999.
  4. D. Aharonov and T. Naveh. Quantum NP - A survey. arXiv:quant-ph/0210077, 2002.
  5. D. Aharonov and A. Ta-Shma. Adiabatic quantum state generation. SIAM J. Comput., 37(1):47–82, 2008.
  6. D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM J. Comput., 37(1):166–194, 2007.
  7. M. H. S. Amin. On the inconsistency of the adiabatic theorem. arXiv:0810.4335, 2008.
  8. M. H. S. Amin and V. Choi. First-order quantum phase transition in adiabatic quantum computation. Phys. Rev. A., 80(6):062326, 2009.
  9. S. Arora and B. Barak. Computational Complexity: A Modern Approach. Cambridge University Press, 2009.
  10. A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon. Simulated quantum computation of molecular energies. Science, 309(5741):1704–1707, 2005.
  11. D. Bacon, A. M. Childs, I. L. Chuang, J. Kempe, D. W. Leung, and X. Zhou. Universal simulation of Markovian quantum dynamics. Phys. Rev. A, 64(6):062302, 2001.
  12. W. S. Bakr, A. Peng, M. E. Tai, R. Ma, J. Simon, J. I. Gillen, S. Folling, L. Pollet, and M. Greiner. Probing the superfluid-to-Mott-insulator transition at the single-atom level. Science, DOI:10.1126/science.1192368, 2010.
  13. A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter. Elementary gates for quantum computation. Phys. Rev. A, 52:3457, 1995.
  14. J. Baugh, J. Chamilliard, C. M. Chandrashekar, M. Ditty, A. Hubbard, R. Laflamme, M. Laforest, D. Maslov, O. Moussa, C. Negrevergne, M. Silva, S. Simmons, C. A. Ryan, D. G. Cory, J. S. Hodges, and C. Ramanathan. Quantum information processing using nuclear and electron magnetic resonance: Review and prospects. Physics in Canada, Oct.-Dec., 2007.
  15. G. Benenti and G. Strini. Quantum simulation of the single-particle Schrödinger equation. Am. J. Phys., 76(7):657–662, 2008.
  16. B. Berger and T. Leighton. Protein folding in the hydrophobic-hydrophilic (HP) model is NP-complete. J. Comput. Biol., 5:27–40, SPR 1998.
  17. A. J. Berkley, M. W. Johnson, P. Bunyk, R. Harris, J. Johansson, T. Lanting, E. Ladizinsky, E. Tolkacheva, M. H. S. Amin, and G. Rose. A scalable readout system for a superconducting adiabatic quantum optimization system. arXiv:0905.0891, 2009.
  18. D. Berry, G. Ahokas, R. Cleve, and B. Sanders. Efficient quantum algorithms for simulating sparse Hamiltonians. Comm. Math. Phys., 270:359, 2007.
  19. J. D. Biamonte, V. Bergholm, J. D. Whitfield, J. Fitzsimons, and A. Aspuru-Guzik. Adiabatic quantum simulators, 2010.
  20. R. Blatt and D. Wineland. Entangled states of trapped atomic ions. Nature, 453(7198):1008–1015, 2008.
  21. B. M. Boghosian and W. Taylor. Simulating quantum mechanics on a quantum computer. Physica D, 120(1-2):30 – 42, 1998.
  22. K. L. Brown, W. J. Munro, and V. M. Kendon. Using quantum computers for quantum simulation. arXiv:1004.5528, 2010.
  23. K. R. Brown, R. J. Clark, and I. L. Chuang. Limitations of quantum simulation examined by simulating a pairing Hamiltonian using nuclear magnetic resonance. Phys. Rev. Lett., 97(5):050504, 2006.
  24. I. Buluta and F. Nori. Quantum simulators. Science, 326:108 – 111, 2009.
  25. G. K.-L. Chan, J. J. Dorando, D. Ghosh, J. Hachmann, E. Neuscamman, H. Wang, and T. Yanai. An introduction to the density matrix renormalization group ansatz in quantum chemistry. Prog. Theor. Chem. and Phys., 18:49, 2008.
  26. C. Chiang and P. Wocjan. Quantum algorithm for preparing thermal Gibbs states-detailed analysis. arXiv:1001.1130, 2010.
  27. A. M. Childs. Quantum information processing in continuous time. Ph.D., MIT, Cambridge, MA, 2004.
  28. A. M. Childs. Universal computation by quantum walk. Phys. Rev. Lett., 102:180501, 2009.
  29. V. Choi. Adiabatic quantum algorithms for the NP-complete maximum-weight independent set, exact cover and 3-SAT problems. arXiv:1004.2226, 2010.
  30. J. I. Cirac and P. Zoller. Quantum computations with cold trapped ions. Phys. Rev. Lett., 74(20):4091, 1995.
  31. C. R. Clark, T. S. Metodi, S. D. Gasster, and K. R. Brown. Resource requirements for fault-tolerant quantum simulation: The ground state of the transverse Ising model. Phys. Rev. A, 79:062314, 2009.
  32. P. Crescenzi, D. Goldman, C. Papadimitriou, A. Piccolboni, and M. Yannakakis. On the complexity of protein folding. J. Comput. Biol., 5:597–603, 1998.
  33. E. Davidson. Properties and uses of natural orbitals. Rev. Mod. Phys., 44(3):451–464, 1972.
  34. M. J. Davis and E. J. Heller. Multidimensional wave functions from classical trajectories. J. Chem. Phys., 75(8):3916, 1981.
  35. D. Deutsch. Quantum computational networks. Proc. R. Soc. Lond. A, 425:73, 1989.
  36. L. DiCarlo, J. M. Chow, J. M. Gambetta, L. S. Bishop, B. R. Johnson, D. I. Schuster, J. Majer, A. Blais, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf. Demonstration of two-qubit algorithms with a superconducting quantum processor. Nature, 460(7252):240–244, 2009.
  37. K. A. Dill, S. B. Ozkan, M. S. Shell, and T. R. Weikl. The protein folding problem. Ann. Rev. Biophys., 37:289–316, 2008. PMID: 18573083.
  38. J. Du, N. Xu, P. Wang, S. Wu, and D. Lu. NMR implementation of a molecular hydrogen quantum simulation with adiabatic state preparation. Phys. Rev. Lett., 104:030502, 2010.
  39. C. J. Epstein, R. F. Goldberger, and C. B. Anfinsen. Genetic control of tertiary protein structure - studies with model systems. Cold. Spring. Harb. Sym., 28:439, 1963.
  40. E. Farhi, J. Goldstone, D. Gosset, S. Gutmann, H. Meyer, and P. Shor. Quantum adiabatic algorithms, small gaps, and different paths. arXiv:0909.4766v2, 2009.
  41. E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106, 2000.
  42. M. Feit, J. Fleck Jr, and A. Steiger. Solution of the Schrödinger equation by a spectral method. J. Comput. Phys., 47(3):412–433, 1982.
  43. R. Feynman. Simulating physics with computers. Int. J. Theor. Phys., 21(6):467–488, 1982.
  44. C. A. Floudas and P. M. Pardalos. Optimization in Computational Chemistry and Molecular Biology - Local and Global Approaches. Springer, 1 edition, 2000.
  45. A. Friedenauer, H. Schmitz, J. T. Glueckert, D. Porras, and T. Schaetz. Simulating a quantum magnet with trapped ions. Nature Phys., 4(10):757–761, 2008.
  46. R. Gerritsma, G. Kirchmair, F. Zähringer, E. Solano, R. Blatt, and C. F. Roos. Quantum simulation of the Dirac equation. Nature, 463:68, 2010.
  47. V. Giovannetti, S. Lloyd, and L. Maccone. Quantum-enhanced measurements: Beating the standard quantum limit. Science, 306:1330, 2004.
  48. L. Grover and T. Rudolph. Creating superpositions that correspond to efficiently integrable probability distributions. arXiv:quant-ph/0208112, 2002.
  49. R. Harris, J. Johansson, A. J. Berkley, M. W. Johnson, T. Lanting, S. Han, P. Bunyk, E. Ladizinsky, T. Oh, I. Perminov, E. Tolkacheva, S. Uchaikin, E. M. Chapple, C. Enderud, C. Rich, M. Thom, J. Wang, B. Wilson, and G. Rose. Experimental demonstration of a robust and scalable flux qubit. Phys. Rev. B., 81(13):134510, 2010.
  50. R. Harris, M. W. Johnson, T. Lanting, A. J. Berkley, J. Johansson, P. Bunyk, E. Tolkacheva, E. Ladizinsky, N. Ladizinsky, T. Oh, F. Cioata, I. Perminov, P. Spear, C. Enderud, C. Rich, S. Uchaikin, M. C. Thom, E. M. Chapple, J. Wang, B. Wilson, M. H. S. Amin, N. Dickson, K. Karimi, B. Macready, C. J. S. Truncik, and G. Rose. Experimental investigation of an eight qubit unit cell in a superconducting optimization processor. arXiv:1004.1628, 2010.
  51. R. Harris, T. Lanting, A. J. Berkley, J. Johansson, M. W. Johnson, P. Bunyk, E. Ladizinsky, N. Ladizinsky, T. Oh, and S. Han. Compound Josephson-junction coupler for flux qubits with minimal crosstalk. Phys. Rev. B., 80(5):052506, 2009.
  52. W. E. Hart and S. Istrail. Robust proofs of NP-Hardness for protein folding: General lattices and energy potentials. J. Comput. Biol., 4(1):1–22, 1997.
  53. A. K. Hartmann and H. Rieger. New Optimization Algorithms in Physics. Wiley-VCH, 2004.
  54. N. Hatano and M. Suzuki. Finding exponential product formulas of higher orders. In A. Das and B. K. Chakrabarti, editors, Lecture Notes in Physics, volume 679, pages 37–68. Springer Berlin, 2005.
  55. M. Hofheinz, H. Wang, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, J. Wenner, J. M. Martinis, and A. N. Cleland. Synthesizing arbitrary quantum states in a superconducting resonator. Nature, 459(7246):546–549, 2009.
  56. T. Hogg. Adiabatic quantum computing for random satisfiability problems. Phys. Rev. A., 67(2):022314, 2003.
  57. M. Johanning, A. F. Varón, and C. Wunderlich. Quantum simulations with cold trapped ions. J. Phys. B., 42:154009, 2009.
  58. M. W. Johnson, P. Bunyk, F. Maibaum, E. Tolkacheva, A. J. Berkley, E. M. Chapple, R. Harris, J. Johansson, T. Lanting, I. Perminov, E. Ladizinsky, T. Oh, and G. Rose. A scalable control system for a superconducting adiabatic quantum optimization processor. Supercond. Sci. Tech., 23(6):065004, 2010.
  59. S. P. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 95:050501, 2005.
  60. W. M. Kaminsky, S. Lloyd, and T. P. Orlando. Scalable superconducting architecture for adiabatic quantum computation. arXiv:quant-ph/0403090, 2004.
  61. I. Kassal and A. Aspuru-Guzik. Quantum algorithm for molecular properties and geometry optimization. J. Chem. Phys., 131(22):224102, 2009.
  62. I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik. Polynomial-time quantum algorithm for the simulation of chemical dynamics. Proc. Natl. Acad. Sci., 105(48):18681–6, 2008.
  63. P. Kaye, R. Laflamme, and M. Mosca. An Introduction to Quantum Computing. Oxford University Press, 2007.
  64. P. Kaye and M. Mosca. Quantum networks for generating arbitrary quantum states. arXiv:quant-ph/0407102, 2004.
  65. J. Kempe. Quantum random walks - an introductory overview. Contemporary Physics, 44:307, 2003.
  66. J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM J. Comput., 35:1070–1097, 2006.
  67. K. Kim, M.-S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe. Quantum simulation of frustrated Ising spins with trapped ions. Nature, 465:560, 2010.
  68. A. Kitaev. Quantum measurements and the abelian stabilizer problem. arXiv:quant-ph/9511026, 1995.
  69. A. Kitaev. Fault-tolerant quantum computation by anyons. Ann. Phys., 303(1):2–30, 2003.
  70. A. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation. American Mathematical Society, 2002.
  71. A. Kitaev and W. A. Webb. Wavefunction preparation and resampling using a quantum computer. arXiv:0801.0342, 2008.
  72. E. Knill, G. Ortiz, and R. Somma. Optimal quantum measurements of expectation values of observables. Phys. Rev. A, 75:012328, 2007.
  73. W. Kohn. Nobel lecture: Electronic structure of matter—wave functions and density functionals. Rev. Mod. Phys., 71(5):1253–1266, 1999.
  74. P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn. Linear optical quantum computing with photonic qubits. Rev. Mod. Phys., 79(1):135–40, 2007.
  75. A. Kolinski and J. Skolnick. Lattice Models of Protein Folding, Dynamics and Thermodynamics. Chapman & Hall, 1996.
  76. D. Kosloff and R. Kosloff. A Fourier method solution for the time dependent Schrödinger equation as a tool in molecular dynamics. J. Comput. Phys., 51(1):35–53, 1983.
  77. T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O’Brien. Quantum computers. Nature, 464(7285):45–53, 2010.
  78. T. Lanting, R. Harris, J. Johansson, M. H. S. Amin, A. J. Berkley, S. Gildert, M. W. Johnson, P. Bunyk, E. Tolkacheva, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, E. M. Chapple, C. Enderud, C. Rich, B. Wilson, M. C. Thom, S. Uchaikin, and G. Rose. Observation of co-tunneling in pairs of coupled flux qubits. arXiv:1006.0028, 2010.
  79. B. P. Lanyon, J. D. Whitfield, G. G. Gillett, M. E. Goggin, M. P. Almeida, I. Kassal, J. D. Biamonte, M. Mohseni, B. J. Powell, M. Barbieri, A. Aspuru-Guzik, and A. G. White. Towards quantum chemistry on a quantum computer. Nature Chem., 2(2):106–111, 2010.
  80. K. F. Lau and K. A. Dill. A lattice statistical-mechanics model of the conformational and sequence-spaces of proteins. Macromolecules., 22(10):3986–3997, 1989.
  81. B. Lévi, B. Georgeot, and D. L. Shepelyansky. Quantum computing of quantum chaos in the kicked rotator model. Phys. Rev. E, 67(4):046220, 2003.
  82. B. G. Levi. Superconducting qubit systems come of age. Phys. Today., 62(7):14, 2009.
  83. D. A. Lidar and H. Wang. Calculating the thermal rate constant with exponential speedup on a quantum computer. Phys. Rev. E, 59(2):2429–2438, 1999.
  84. S. Lloyd. Universal quantum simulators. Science, 273(5278):1073–1078, 1996.
  85. N. B. Lovett, S. Cooper, M. Everitt, M. Trevers, and V. Kendon. Universal quantum computation using discrete time quantum walk. Phys. Rev. A, 81:042330, 2010.
  86. A. Messiah. Quantum Mechanics. Dover Publications, 1999.
  87. M. Mezard and A. Montanari. Information, Physics, and Computation. Oxford University Press, 2009.
  88. L. Mirny and E. Shakhnovich. Protein folding theory: from lattice to all-atom models. Annu. Rev. Biophys. Bio., 30:361–396, 2001.
  89. A. Mizel, D. A. Lidar, and M. Mitchell. Simple proof of equivalence between adiabatic quantum computation and the circuit model. Phys. Rev. Lett., 99(7):070502, 2007.
  90. C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. D. Sarma. Non-Abelian anyons and topological quantum computation. Rev. Mod. Phys., 80(3):1083, 2008.
  91. C. Negrevergne, R. Somma, G. Ortiz, E. Knill, and R. Laflamme. Liquid-state NMR simulations of quantum many-body problems. Phys. Rev. A, 71(3):032344, 2005.
  92. H. Neven, V. S. Denchev, G. Rose, and W. G. Macready. Training a large scale classifier with the quantum adiabatic algorithm. arXiv:0912.0779, 2009.
  93. H. Neven, G. Rose, and W. G. Macready. Image recognition with an adiabatic quantum computer. I: Mapping to quadratic unconstrained binary optimization. arXiv:0804.4457, 2008.
  94. M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
  95. L. Null and J. Lobur. The essentials of computer organization and architecture. Jones & Bartlett Pub, 2003.
  96. J. L. O’Brien. Optical quantum computing. Science, 318(5856):1567–1570, 2007.
  97. S. Oh. Quantum computational method of finding the ground-state energy and expectation values. Phys. Rev. A, 77(1):012326, 2008.
  98. R. Oliveira and B. M. Terhal. The complexity of quantum spin systems on a two-dimensional square lattice. Quant. Inf. Comp., 8:900–924, 2008.
  99. G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme. Quantum algorithms for fermionic simulations. Phys. Rev. A, 64(2):022319, 2001.
  100. V. S. Pande, A. Y. Grosberg, and T. Tanaka. Heteropolymer freezing and design: Towards physical models of protein folding. Rev. Mod. Phys., 72(1):259, 2000.
  101. X. Peng, J. Du, and D. Suter. Quantum phase transition of ground-state entanglement in a Heisenberg spin chain simulated in an NMR quantum computer. Phys. Rev. A, 71(1):012307, 2005.
  102. A. Perdomo, C. Truncik, I. Tubert-Brohman, G. Rose, and A. Aspuru-Guzik. Construction of model hamiltonians for adiabatic quantum computation and its application to finding low-energy conformations of lattice protein models. Phys. Rev. A, 78(1):012320, 2008.
  103. A. Perdomo-Ortiz, M. Drew-Brook, N. Dickson, G. Rose, and A. Aspuru-Guzik. Experimental realization of a 8-qubit quantum-adiabatic algorithm for a lattice protein model: Towards optimization on a quantum computer. In preparation, 2010.
  104. A. Perdomo-Ortiz, B. O’Gorman, and A. Aspuru-Guzik. Construction of energy functions for self-avoiding walks and the lattice heteropolymer model: resource efficient encoding for quantum optimization. In preparation, 2010.
  105. A. Perdomo-Ortiz, S. Venegas-Andraca, and A. Aspuru-Guzik. A study of heuristic guesses for adiabatic quantum computation. Quantum Inf. Process., 2010.
  106. D. Poulin and P. Wocjan. Preparing ground states of quantum many-body systems on a quantum computer. Phys. Rev. Lett., 102(13):130503, 2009.
  107. D. Poulin and P. Wocjan. Sampling from the thermal quantum Gibbs state and evaluating partition functions with a quantum computer. Phys. Rev. Lett., 103:220502, 2009.
  108. R. Raussendorf and H. J. Briegel. A one-way quantum computer. Phys. Rev. Lett., 86(22):5188, 2001.
  109. R. Raussendorf, D. E. Browne, and H. J. Briegel. Measurement-based quantum computation on cluster states. Phys. Rev. A, 68(2):022312, 2003.
  110. R. Schack. Simulation on a quantum computer. Informatik - Forschung und Entwicklung, 21:21–27, 2006.
  111. N. Schuch and F. Verstraete. Computational complexity of interacting electrons and fundamental limitations of density functional theory. Nature Phys., 5(10):732–735, 2009.
  112. P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput., 26(5):1484–1509, 1997.
  113. A. Smirnov, S. Savel’ev, L. Mourokh, and F. Nori. Modelling chemical reactions using semiconductor quantum dots. Europhys. Lett., 80:67008, 2007.
  114. A. N. Soklakov and R. Schack. Efficient state preparation for a register of quantum bits. Phys. Rev. A, 73(1):012307–13, 2006.
  115. S. Somaroo, C. H. Tseng, T. F. Havel, R. Laflamme, and D. G. Cory. Quantum simulations on a quantum computer. Phys. Rev. Lett., 82:5381–5384, 1999.
  116. R. Somma, C. Batista, and G. Ortiz. Quantum approach to classical statistical mechanics. Phys. Rev. Lett., 99(3):1–4, 2007.
  117. R. Somma, S. Boixo, H. Barnum, and E. Knill. Quantum simulations of classical annealing processes. Phys. Rev. Lett., 101(13):130504, 2008.
  118. R. Somma, G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme. Simulating physical phenomena by quantum networks. Phys. Rev. A, 65(4), 2002.
  119. R. Somma, G. Ortiz, E. Knill, and J. Gubernatis. Quantum simulations of physics problems. Int. J. Quantum Inf., 1:189, 2003.
  120. G. Strini. Error sensitivity of a quantum simulator. I: a first example. Fortschritte der Physik, 50(2):171–183, 2002.
  121. M. Szegedy. Quantum speed-up of Markov chain based algorithms. In FOCS ’04: Proc. 45th Ann. IEEE Symp. Found. Comput. Sci., pages 32–41, Washington, DC, USA, 2004. IEEE Computer Society.
  122. D. J. Tannor. Introduction to Quantum Mechanics: A Time-Dependent Perspective. University Science Books, 2006.
  123. K. Temme, T. Osborne, K. Vollbrecht, D. Poulin, and F. Verstraete. Quantum Metropolis sampling. arXiv:0911.3635, 2009.
  124. B. Terhal and D. DiVincenzo. Problem of equilibration and the computation of correlation functions on a quantum computer. Phys. Rev. A, 61(2):022301, 2000.
  125. D. M. Tong. Quantitative condition is necessary in guaranteeing the validity of the adiabatic approximation. Phys. Rev. Lett., 104(12):120401, 2010.
  126. D. M. Tong, K. Singh, L. C. Kwek, and C. H. Oh. Sufficiency criterion for the validity of the adiabatic approximation. Phys. Rev. Lett., 98(15):150402–4, 2007.
  127. H. Wang, S. Ashhab, and F. Nori. Efficient quantum algorithm for preparing molecular-system-like states on a quantum computer. Phys. Rev. A, 79(4):042335, 2009.
  128. H. Wang, S. Kais, A. Aspuru-Guzik, and M. R. Hoffmann. Quantum algorithm for obtaining the energy spectrum of molecular systems. Phys. Chem. Chem. Phys., 10(35):5388–93, 2008.
  129. N. J. Ward, I. Kassal, and A. Aspuru-Guzik. Preparation of many-body states for quantum simulation. J. Chem. Phys., 130(19):194105, 2009.
  130. J. Watrous. Quantum computational complexity. In Encyclopedia of Complexity and System Science. Springer Berlin, 2009.
  131. J. Whitfield, J. Biamonte, and A. Aspuru-Guzik. Quantum computing resource estimate of molecular energy simulation. arXiv:1001.3855, 2010.
  132. S. Wiesner. Simulations of many-body quantum systems by a quantum computer. arXiv:quant-ph/9603028, 1996.
  133. P. Wocjan and A. Abeyesinghe. Speedup via quantum sampling. Phys. Rev. A, 78(4):042336, 2008.
  134. L.-A. Wu, M. S. Byrd, and D. A. Lidar. Polynomial-time simulation of pairing models on a quantum computer. Phys. Rev. Lett., 89(5):057904, 2002.
  135. X. Yang, A. Wang, F. Xu, and J. Du. Experimental simulation of a pairing Hamiltonian on an NMR quantum computer. Chem. Phys. Lett., 422:20–24, 2006.
  136. J. Q. You and F. Nori. Superconducting circuits and quantum information. Phys. Today., 58(11):42–47, 2005.
  137. M.-H. Yung, D. Nagaj, J. D. Whitfield, and A. Aspuru-Guzik. Simulation of classical thermal states on a quantum computer: A renormalization group approach. arXiv:1005.0020, 2010.
  138. C. Zalka. Simulating quantum systems on a quantum computer. Proc. Roy. Soc. A, 454(1969):313–322, 1998.