Simulating Chemistry using Quantum Computers
Abstract
The difficulty of simulating quantum systems, well known to quantum chemists, prompted the idea of quantum computation. One can avoid the steep scaling associated with the exact simulation of increasingly large quantum systems on conventional computers, by mapping the quantum system to another, more controllable one. In this review, we discuss to what extent the ideas in quantum computation, now a well-established field, have been applied to chemical problems. We describe algorithms that achieve significant advantages for the electronic-structure problem, the simulation of chemical dynamics, protein folding, and other tasks. Although theory is still ahead of experiment, we outline recent advances that have led to the first chemical calculations on small quantum information processors.
1 Introduction
One of the greatest challenges in quantum chemistry is to fully understand the complicated electronic structure of atoms and molecules. Over the last century, enormous progress has been made in describing the general behavior of relatively simple systems. In particular, combined with physical insights, elegant computational approaches, ranging from wavefunction methods to quantum Monte Carlo and density functional theory, have been developed. The challenge is that the Hilbert spaces of quantum systems grow exponentially with system size. Therefore, as these methods are extended to higher accuracy or to larger systems, the computational requirements become unreachable with current computers. This problem is not merely a consequence of technological limitations, but stems from the inherent difficulty of simulating quantum systems with computers based on classical mechanics. It is therefore important to know if the computational bottlenecks of classical computers can be solved by a computing model based on quantum mechanics—quantum computation—whose development has revolutionized our understanding of the connections between computer science and physics.
The idea of mapping the dynamics of a quantum system of interest onto the dynamics of a controllable quantum system was proposed in 1982 by Feynman [43] and developed in 1996 by Lloyd [84]. Such a quantum computer would be able to obtain information inaccessible with classical computers. Consequently, quantum simulation promises to be a powerful new tool for quantum chemistry. In this article, we review the recent applications of quantum simulation to chemical problems that have proven difficult on conventional computers. After introducing basic concepts in quantum computation, we describe quantum algorithms for the exact, nonadiabatic simulation of chemical dynamics as well as for the full-configuration-interaction treatment of electronic structure. We also discuss solving chemical optimization problems, such as lattice folding, using adiabatic quantum computation. Finally, we describe recent experimental implementations of these algorithms, including the first quantum simulations of chemical systems.
2 Quantum Computation
2.1 Differences between quantum and classical computation
There are fundamental differences between quantum and classical computers. Unlike the classical bit, which is always either a ‘0’ or a ‘1’, the basic unit of quantum information is the qubit (Fig. 1), which can be in a superposition of $|0\rangle$ and $|1\rangle$: $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, with $|\alpha|^2 + |\beta|^2 = 1$. States of $n$ qubits are elements of an exponentially large, $2^n$-dimensional, Hilbert space, spanned by a basis of the form $|x_1\rangle \otimes |x_2\rangle \otimes \cdots \otimes |x_n\rangle$, where each $x_i$ is $0$ or $1$. This enables entanglement, a feature necessary for the advantage of quantum computers. As an example of entanglement, the two-qubit state $(|00\rangle + |11\rangle)/\sqrt{2}$, one of the Bell states, cannot be written as a product state $|\psi_1\rangle \otimes |\psi_2\rangle$.
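The non-factorability of the Bell state can be checked numerically. In this illustrative sketch (plain NumPy, added for concreteness and not part of the original argument), a two-qubit pure state with amplitude matrix $c_{ij} = \langle ij|\psi\rangle$ is a product state exactly when that matrix has rank one:

```python
import numpy as np

# Amplitudes of a two-qubit state arranged as a 2x2 matrix c[i, j] = <ij|psi>.
# The state factors as |psi1> (x) |psi2> iff this matrix has rank 1
# (its singular values are the Schmidt coefficients).
bell = np.array([[1.0, 0.0],
                 [0.0, 1.0]]) / np.sqrt(2)                # (|00> + |11>)/sqrt(2)
product = np.outer([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2)  # |0> (x) (|0>+|1>)/sqrt(2)

print(np.linalg.matrix_rank(bell))     # 2 -> entangled: no factorization exists
print(np.linalg.matrix_rank(product))  # 1 -> a product state
```

The rank test (Schmidt decomposition) is basis-independent, which is why it cleanly separates entangled from product states.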
The linearity of quantum theory implies that a quantum computer can execute classical computations in superposition. For example, if the input state contains all possible input values $x$ of a function $f$, the function can be computed using a unitary operation $U_f$ as
(1) $U_f \sum_x |x\rangle|0\rangle = \sum_x |x\rangle|f(x)\rangle.$
With a single call to $U_f$, the quantum computer produces a state that contains information about all the possible outputs of $f$.
Nevertheless, quantum computation has several limitations. For example, the no-cloning theorem [94, 63] states that an unknown quantum state cannot be copied perfectly. More importantly, the information of a general quantum state cannot be read out with a single projective measurement, because that would collapse a superposition into one of its components. Therefore, while the state in Eq. 1 contains information about all possible outputs, that information is not immediately accessible. Instead, a quantum algorithm has to be designed in a way that makes it easy to measure a global property of $f$, without making it necessary to compute all the individual values $f(x)$. Algorithms of this kind are discussed in the following sections.
2.2 Approaches to quantum computing
There are several models, or ways of formulating, quantum computation. Most work in quantum simulation has been done in the circuit and adiabatic models. While the two are known to be computationally equivalent—any computation that can be performed in one model can be performed in the other in a comparable amount of time [6, 66, 89]—different problems are solved more naturally in different models. We discuss the two models in turn, but note that other models hold promise for the development of future simulation algorithms, including topological quantum computing [69, 90], one-way quantum computing [108, 109], and quantum walks [65].
2.2.1 The circuit model
The cornerstone of quantum computation is the generalization of the classical circuit model, composed of classical bits and logical gates, to a quantum circuit model [35, 13, 94]. A quantum circuit is a multi-qubit unitary transformation which maps a set of initial states to some final states. Usually, a unitary gate is decomposed into elementary gates which involve a few (one or two) qubits each.
In classical computing, the NAND gate is universal [95], meaning that any logical circuit can be constructed using NAND gates only. Similarly, in quantum computing, there are sets of unitary operations that form universal gate sets. A quantum computer that can implement such a set is called universal, and can perform any unitary transformation to an arbitrary accuracy. It turns out that the set containing all single-qubit gates in addition to any two-qubit entangling gate, such as CNOT, is universal [63] (Fig. 1). An entangling gate can be realized by any physical interaction that can generate entanglement between qubits. Examples of experimental implementations of quantum gates have been reviewed [77], and we will cover some of the experiments relevant to quantum simulation in Sec. 5.
Besides the elementary gates, an important quantum transformation is the quantum Fourier transform (QFT). It transforms any quantum state into its Fourier representation,
(2) $\mathrm{QFT}: \sum_{x} f(x)\,|x\rangle \;\mapsto\; \sum_{k} \tilde{f}(k)\,|k\rangle,$
where $\tilde{f}(k)$ are the discrete Fourier coefficients of $f(x)$. The QFT can be efficiently implemented using a quantum circuit [94]: for $n$ qubits, the number of elementary gates required is $O(n^2)$. For comparison, the classical fast Fourier transform requires $O(n2^n)$ gates. We take advantage of the QFT in Sec. 3.2 for the simulation of quantum dynamics, and in Sec. 3.3 for the measurement of observables.
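The correspondence between the QFT and the classical discrete Fourier transform can be sketched numerically. The following NumPy illustration (an addition for concreteness; it uses one common sign convention, and conventions differ between references) builds the unitary the QFT implements and checks it against the classical FFT:

```python
import numpy as np

n = 3                       # number of qubits
N = 2 ** n                  # dimension of the state space
omega = np.exp(2j * np.pi / N)
# Unitary matrix implemented by the QFT (one common sign convention):
F = np.array([[omega ** (j * k) for k in range(N)]
              for j in range(N)]) / np.sqrt(N)

rng = np.random.default_rng(0)
state = rng.standard_normal(N) + 1j * rng.standard_normal(N)
state /= np.linalg.norm(state)

assert np.allclose(F.conj().T @ F, np.eye(N))    # the QFT is unitary
# Its action reproduces the classical DFT coefficients (up to normalization):
assert np.allclose(F @ state, np.fft.ifft(state) * np.sqrt(N))
print("QFT matrix matches the classical transform")
```

The quantum circuit needs only $O(n^2)$ gates to apply $F$ to the $2^n$ amplitudes, whereas the classical FFT manipulates all $2^n$ coefficients explicitly.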
2.2.2 Adiabatic quantum computation
An alternative to the gate model is the adiabatic model of quantum computation [41]. In this model, the quantum computer remains in its ground state throughout the computation. The Hamiltonian of the computer is changed slowly from a simple initial Hamiltonian $H_i$ to a final Hamiltonian $H_f$ whose ground state encodes the solution to the computational problem. The adiabatic theorem states that if the variation of the Hamiltonian is sufficiently slow, the easy-to-prepare ground state of $H_i$ will be transformed continuously into the ground state of $H_f$. It is desirable to complete the evolution as quickly as possible; the maximum rate of change is mostly determined by the energy gap between the ground and first excited states during the evolution [86, 7, 126, 125]. The applications of adiabatic quantum computation to simulation include preparing quantum states of interest and solving optimization problems such as protein folding [102]. We discuss the details in Secs. 3.4.1 and 4, respectively.
2.3 Quantum complexity theory
To understand the computational advantages of quantum algorithms for chemical simulation, we discuss some aspects of computational complexity theory, which defines quantum speedup unambiguously. A proper measure of the complexity of an algorithm is how many operations (or how much time) it takes to solve problems of increasing size. Conventionally, a computational problem is described as easy or tractable if there exists an efficient algorithm for solving it, one that scales polynomially with input size (for an input of size $n$, as $O(n^k)$ for some constant $k$). Otherwise, the problem is hard. This distinction is admittedly a rough one: for reasonable problem sizes, an “inefficient” algorithm with mildly exponential scaling could be faster than an “efficient” algorithm whose polynomial scaling has a very high degree. Nevertheless, this convention has proven useful because, in practice, polynomially scaling algorithms generally outperform exponential ones.
The class of all problems that are easy for classical computers is called P, while NP contains those problems whose solutions, once found, are easy to check on a classical computer; NP-hard problems are those at least as hard as any problem in NP.
Likewise, BQP contains those problems that are easy for a quantum computer [130]. The quantum analogue of NP is called QMA and contains those problems that are easy to check on a quantum computer. In analogy with NP-hard problems, QMA-hard contains the hardest problems in QMA. Shor’s factoring algorithm [112] is significant because it provides an example of a problem in BQP which is widely thought (although not proven) to be outside of P; that is, a problem believed to be hard on classical computers that is easy for a quantum computer.
The relationships between the complexity classes mentioned above are illustrated in Fig. 2. In the remainder of this review, we explore the advantages of quantum simulation over its classical counterpart, in part, by situating various simulation tasks in the computational classes illustrated in Fig. 2.
3 Quantum Simulation
Quantum simulation schemes can be divided into two broad classes. The first is dedicated quantum simulation, where one quantum system is engineered to simulate another quantum system. For example, quantum gases in optical lattices can be used to simulate superfluidity [12]. The other, more general, approach is universal quantum simulation: simulating a quantum system using a universal quantum computer.
One of the main goals of quantum simulation is to determine the physical properties of a particular quantum system. This problem can usually be conceptualized as involving three steps:

1. Initialize the qubits in a state that can be prepared efficiently,

2. Apply a unitary evolution $U$ to this initial state, and

3. Read out the desired information from the final state.
We note at the outset that it is not possible to simulate an arbitrary unitary evolution on a quantum computer efficiently. An arbitrary unitary acting on a system of $n$ spins has $4^n$ free parameters, and would require an exponential number of elementary quantum gates to implement. However, in quantum chemistry, it is usually not necessary to simulate arbitrary dynamics, since natural systems aren’t arbitrary [84]. Instead, the interactions between, say, molecular orbitals are local—featuring at most $k$-body interactions—and this crucial aspect of their structure can be exploited for their efficient simulation. That is, the Hamiltonian $H = \sum_j H_j$ generating the unitary evolution is a sum of polynomially many terms $H_j$, each of which acts on at most polynomially many degrees of freedom. A local Hamiltonian generates a time evolution that can be decomposed into $m$ time steps according to the Lie-Trotter formula,
(3) $e^{-iHt} = \left( e^{-iH_1 t/m}\, e^{-iH_2 t/m} \cdots e^{-iH_L t/m} \right)^{m} + O(t^2/m), \qquad H = \sum_{j=1}^{L} H_j.$
The approximation can be improved by increasing the number of time steps $m$ or by using higher-order generalizations of this formula [54, 18]. Finally, since each factor acts on only a small subsystem and can therefore be efficiently simulated, so can a product of polynomially many such factors. Hence, the time it takes to perform the simulation scales polynomially with the simulated time $t$. Most methods of quantum simulation make use of the Trotter decomposition, and we will describe in more detail their applications in chemistry. We will not discuss all the available methods, for which the reader is directed to comprehensive reviews [27, 110, 22].
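The error behavior of Eq. 3 can be sketched numerically. This NumPy illustration (small random Hermitian matrices, added for concreteness rather than taken from the original) shows that the first-order Trotter error shrinks as the number of steps $m$ grows:

```python
import numpy as np

def evolve(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

rng = np.random.default_rng(0)
def rand_herm(d):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (A + A.conj().T) / 2

H1, H2 = rand_herm(4), rand_herm(4)     # two "local" terms, H = H1 + H2
t = 1.0
exact = evolve(H1 + H2, t)

def trotter(m):
    """m first-order Trotter steps: (e^{-i H1 t/m} e^{-i H2 t/m})^m."""
    step = evolve(H1, t / m) @ evolve(H2, t / m)
    return np.linalg.matrix_power(step, m)

err = lambda m: np.linalg.norm(trotter(m) - exact, 2)
print(err(10), err(100))   # the error shrinks roughly as 1/m
```

The residual error stems from the non-commutativity of $H_1$ and $H_2$; if they commuted, a single step would already be exact.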
In the following, we describe two ways in which chemical wavefunctions can be encoded on a quantum computer, the second- and first-quantization approaches (see Table 1 for a comparison). For each approach, we outline the methods of preparing certain classes of initial states and propagating them in time. Afterward, we discuss the methods of measurement of observables and preparation of ground and thermal states, which do not depend essentially on the way the wavefunction is encoded.
Table 1: Comparison of the second- and first-quantized approaches to quantum simulation.

                                         Second-quantized                                 First-quantized
Wavefunction encoding                    Fock state in a given basis of spin-orbitals     Amplitudes on a grid of $2^n$ sites per dimension
Qubits required to represent             One per basis state (spin-orbital)               $3n$ per particle (nuclei & electrons)
the wavefunction
Molecular Hamiltonian                    Coefficients precomputed classically             Interaction calculated on the fly
Quantum gates required for simulation    $O(M^5)$, with $M$ the number of basis states    $O(B^2)$, with $B$ the number of particles
Advantages                               Fewer qubits; classical preprocessing            Exact treatment of dynamics; better asymptotic gate scaling
3.1 Second quantization
We start by considering the purely electronic molecular problem, in which the Born-Oppenheimer approximation has been used to separate the electronic and nuclear motion. The wavefunction of the electrons can be expanded in an orthonormal basis of $M$ molecular spin-orbitals $\{\varphi_p\}$. Corresponding to this basis are the fermionic creation and annihilation operators $a_p^{\dagger}$ and $a_p$. There is a very natural mapping between the electronic Fock space and the state of $M$ qubits: having qubit $p$ in the state $|0\rangle$ (or $|1\rangle$) indicates that spin-orbital $\varphi_p$ is unoccupied (or occupied).
An important subtlety is that electrons in a molecule, unlike the individually addressable qubits, are indistinguishable. Put differently, while the operators $a_p$ and $a_p^{\dagger}$ obey the canonical fermionic anticommutation relations, $\{a_p, a_q^{\dagger}\} = \delta_{pq}$, the qubit operators that change $|0\rangle$ to $|1\rangle$ and vice versa do not. This problem can be solved by using the Jordan-Wigner transformation to enforce the correct commutation relations on the quantum computer [99, 118, 119, 10]. The Jordan-Wigner transformation for this case results in the following mapping between the fermionic operator algebra and the qubit spin algebra:
(4a) $a_p^{\dagger} \;\mapsto\; \Big(\prod_{q<p} \sigma_q^{z}\Big)\, \sigma_p^{+},$
(4b) $a_p \;\mapsto\; \Big(\prod_{q<p} \sigma_q^{z}\Big)\, \sigma_p^{-},$
where $\sigma_p^{\pm} = (\sigma_p^{x} \mp i\sigma_p^{y})/2$ and $\sigma_p^{x,y,z}$ are the Pauli matrices acting on qubit $p$.
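These operator images are easy to verify numerically. The sketch below (plain NumPy, an illustration added here and not from the original text) builds the Jordan-Wigner strings on $M = 3$ qubits with the convention $\sigma^{+} = |1\rangle\langle 0|$ and checks the canonical anticommutation relations:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
Sp = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^+ = |1><0|: occupies an orbital
Sm = Sp.T                                 # sigma^- = |0><1|: empties it

def jw_create(p, M):
    """Jordan-Wigner image of a_p^dagger on M qubits (0-indexed p):
    a string of Z's on qubits q < p, then sigma^+ on qubit p."""
    return reduce(np.kron, [Z] * p + [Sp] + [I2] * (M - p - 1))

M = 3
a_dag = [jw_create(p, M) for p in range(M)]
a = [op.conj().T for op in a_dag]

# Canonical anticommutation relations: {a_p, a_q^dagger} = delta_pq.
for p in range(M):
    for q in range(M):
        anti = a[p] @ a_dag[q] + a_dag[q] @ a[p]
        expected = np.eye(2 ** M) if p == q else np.zeros((2 ** M, 2 ** M))
        assert np.allclose(anti, expected)
print("canonical anticommutation relations verified")
```

Without the $\sigma^z$ strings, operators on different qubits would commute instead of anticommute, which is precisely the failure mode the transformation repairs.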
The electronic Hamiltonian in the second-quantized form is
(5) $H = \sum_{pq} h_{pq}\, a_p^{\dagger} a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs}\, a_p^{\dagger} a_q^{\dagger} a_r a_s,$
where the spin-orbital indices $p, q, r, s$ each range from 1 to $M$. Here, the one-electron integrals $h_{pq}$ involve the electronic kinetic energy and the nuclear-electron interaction, and the two-electron integrals $h_{pqrs}$ contain the electron-electron interaction term. For simulation on a quantum computer, this Hamiltonian is recast into the spin algebra using the Jordan-Wigner transformation, Eq. 4, and the time evolution it generates is implemented using the Trotter decomposition, Eq. 3. Note that $H$ contains $O(M^4)$ terms, and each of these terms generates a time evolution of the form
(6) $e^{-i h_{pq}\, a_p^{\dagger} a_q\, \delta t} \quad \text{or} \quad e^{-i h_{pqrs}\, a_p^{\dagger} a_q^{\dagger} a_r a_s\, \delta t}.$
Each of these operators requires $O(M)$ elementary quantum gates to implement because of the Jordan-Wigner transformation. Since there are altogether $O(M^4)$ terms that need to be implemented separately, the total cost of simulating $H$ scales as $O(M^5)$ [131].
While any basis can be chosen to represent $H$, it is desirable to choose a basis as small as possible that adequately represents the system under study. Electronic-structure experience provides many good starting points, such as the Hartree-Fock basis or the natural orbitals [33]. No matter which basis is chosen, a lot of the computation can be carried out on classical computers as preprocessing. In particular, the coefficients $h_{pq}$ and $h_{pqrs}$ can be efficiently precomputed on classical computers. That way, only the more computationally demanding tasks are left for the quantum computer.
Using the Hartree-Fock basis allows us to use the Hartree-Fock reference state as an input to the quantum computation [10]. A salient feature is that such states are Fock states, which are easy to prepare on the quantum computer: some qubits are initialized to $|1\rangle$ and the others to $|0\rangle$. In fact, any single-determinant state can be easily prepared in this way. Furthermore, it is possible to prepare superpositions of Fock basis states as inputs for the quantum computation. While an arbitrary state might be difficult to prepare, many states of interest, including those with only polynomially many determinant contributions, can be prepared efficiently [99, 118, 119, 127]. The problem of preparing an initial state that is close to the true molecular ground state is addressed in Sec. 3.4.
The chief advantage of the second-quantization method is that it is frugal with quantum resources: only one qubit per basis state is required, and the integrals $h_{pq}$ and $h_{pqrs}$ can be precomputed classically. For this reason, the first chemical quantum computation was carried out in second quantization (see Sec. 5). Nevertheless, there are processes, such as chemical reactions, which are difficult to describe in a small, fixed basis set, and for this we turn to first-quantization methods.
3.2 First quantization
The first-quantization method, due to Zalka [138, 132, 62], simulates particles governed by the Schrödinger equation on a grid in real space (in atomic units),
(7) $i\frac{\partial}{\partial t}\psi(\mathbf{r}, t) = \left( -\frac{\nabla^2}{2m} + V(\mathbf{r}) \right) \psi(\mathbf{r}, t),$
and the resulting unitary propagator can be implemented using the quantum version of the split-operator method [42, 76]:
(8) $e^{-i(T+V)\delta t} \approx e^{-iV\delta t/2}\, e^{-iT\delta t}\, e^{-iV\delta t/2}.$
The operators $V$ and $T$ are diagonal in the position and momentum representations, respectively. A diagonal operator can be easily implemented because it amounts to adding a phase to each basis state, e.g., $e^{-iV\delta t}|x\rangle = e^{-iV(x)\delta t}|x\rangle$. Furthermore, it is easy on a quantum computer to switch between the position and momentum representations of a wavefunction using the efficient quantum Fourier transform. Therefore, simulating a time evolution for time $t$ involves alternately applying $e^{-iV\delta t}$ and $e^{-iT\delta t}$ with the time steps $\delta t$ chosen to be sufficiently short to secure a desired accuracy. Finally, the scheme can be easily generalized to many particles in three dimensions: a system of $B$ particles requires $3Bn$ qubits, $n$ for each degree of freedom.
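The classical analogue of this propagation scheme is easy to demonstrate. The NumPy sketch below (an illustration with arbitrary grid and potential choices, not from the original) evolves a Gaussian wavepacket in a harmonic well by Strang splitting, switching between position and momentum space with the FFT exactly where the quantum algorithm would use the QFT:

```python
import numpy as np

# 1-D grid of 2^n points, harmonic potential, atomic units (hbar = m = 1).
n = 8
N = 2 ** n
x = np.linspace(-10.0, 10.0, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # momentum grid
V = 0.5 * x ** 2                          # potential: diagonal in position
T = 0.5 * k ** 2                          # kinetic energy: diagonal in momentum

dt, steps = 0.01, 500                     # total time t = 5.0
psi = np.exp(-(x - 2.0) ** 2 / 2)         # coherent state displaced to x0 = 2
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

half_V = np.exp(-1j * V * dt / 2)         # e^{-iV dt/2}: a diagonal phase
kin = np.exp(-1j * T * dt)                # e^{-iT dt}, applied in k-space
for _ in range(steps):                    # Strang splitting, as in Eq. 8
    psi = half_V * np.fft.ifft(kin * np.fft.fft(half_V * psi))

norm = np.sum(np.abs(psi) ** 2) * dx
mean_x = np.sum(x * np.abs(psi) ** 2) * dx
print(norm, mean_x)   # norm stays 1; <x>(t) follows the classical 2*cos(t)
```

Because every factor is unitary, the norm is conserved to machine precision, and the packet's mean position tracks the classical oscillation.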
The first-quantization method can be applied to many problems. The earliest applications established that as few as 10–15 qubits would be needed for a proof-of-principle demonstration of single-particle dynamics [120] (later improved to 6–10 [15]). The method could also be used to faithfully study the chaotic dynamics of the kicked-rotor model [81]. The first chemical application was the proposal of a method for the calculation of the thermal rate constant [83] (see Sec. 3.3).
We investigated the applicability of the firstquantization method to the simulation of chemical dynamics [62]. The simplest approach is to consider all the nuclei and electrons explicitly, in which case the exact nonrelativistic molecular Hamiltonian reads
(9) $H = -\sum_i \frac{\nabla_i^2}{2 m_i} + \sum_{i<j} \frac{q_i q_j}{r_{ij}},$
where $r_{ij}$ is the distance between particles $i$ and $j$, which carry charges $q_i$ and $q_j$, respectively. As before, the split-operator method can be used to separate the unitaries that are diagonal in the position and momentum bases. Note that a Jordan-Wigner transformation is not required; $H$ preserves permutational symmetry, meaning that if the initial state is properly (anti)symmetrized (see below), it will stay so throughout the simulation.
Since the Born-Oppenheimer approximation (BOA) has been widely used in quantum chemistry, it might seem extravagant to explicitly simulate all the nuclei and electrons. Nevertheless, the exact simulation is, in fact, faster than using the BOA for reactions with more than about four atoms [62]. The reason for this is the need to evaluate the potential on the fly on the quantum computer. In the exact case, the potential is simply the pairwise Coulomb interaction; on the other hand, evaluating the complicated, many-body potential energy surfaces that are supplied by the BOA is a much more daunting task, even considering that one can use nuclear time steps that are about a thousand times longer. That is, exact simulation minimizes arithmetic, which is the bottleneck of the quantum computation; by contrast, the bottleneck on classical computers is the prohibitive scaling of the Hilbert-space size, which is alleviated by the BOA.
In order to carry out simulations, it is important to prepare suitable initial states. Zalka’s original paper [138] contained a very general state-preparation scheme, later rediscovered [48, 64, 71] and improved [114]. The scheme builds the state one qubit at a time by performing a rotation (dependent on the previous qubits) that redistributes the wavefunction amplitude as desired. For example, Gaussian wavepackets or molecular orbitals can be constructed efficiently. We discussed how to combine such single-particle wavefunctions into many-particle Slater determinants, superpositions of determinants, and mixed states in [129]. In particular, the (anti)symmetrization algorithm of [2] was improved and used to prepare the Slater determinants necessary for chemical simulation. Furthermore, we outlined a procedure for translating states that are prepared in second-quantization language into first-quantized wavefunctions, and vice versa. Techniques for preparing ground and thermal initial states are discussed in Sec. 3.4.
The first-quantization approach to quantum simulation suffers from the fact that even the simplest simulations might require dozens of qubits and millions of quantum gates [62]. Nevertheless, it has advantages that would make it useful if large quantum computers are built. Most importantly, because the Coulomb interaction is pairwise, simulating a system of $B$ particles requires $O(B^2)$ gates, a significant asymptotic improvement over the second-quantized scaling of $O(M^5)$, where $M$ is the size of the basis set.
3.3 Measuring observables
We have discussed how to prepare and evolve quantum states on a quantum computer. Information about the resulting state must be extracted in the end; however, full characterization (quantum state tomography) generally requires resources that scale exponentially with the number of qubits. This is because a measurement projects a state into one consistent with the measurement outcome. Because only a limited amount of information can be extracted efficiently, one needs a specialized measurement scheme to extract the desired observables, such as dipole moments, correlation functions, etc.
In principle, an individual measurement can be carried out in any basis. However, since experimental measurement techniques usually address individual qubits, a method is needed to carry out more complicated measurements. In particular, in order to measure an observable $A$, one would like to carry out a measurement in its eigenbasis $\{|\psi_k\rangle\}$. This is achieved by the phase estimation algorithm (PEA) [68, 3]:
(10) $\Big(\sum_k c_k |\psi_k\rangle\Big) \otimes |0\rangle \;\longrightarrow\; \sum_k c_k\, |\psi_k\rangle \otimes |\phi_k\rangle,$
where $|\psi_k\rangle$ and $\phi_k$ are the eigenvectors and eigenvalues of $A$; $U = e^{iA}$ is the unitary controlled by the ancilla qubits, which are initialized in the state $|0\rangle$. When measuring the ancilla, the eigenvalue $\phi_k$ will be measured with probability $|c_k|^2$ and, if the eigenstates are nondegenerate, the wavefunction will collapse to the eigenvector $|\psi_k\rangle$.
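The measurement statistics described above are easy to emulate classically for a small matrix. This NumPy check (an illustration added here, not part of the original text) confirms that the PEA outcome probabilities $|c_k|^2$ sum to one and that reconstructing $\langle A\rangle = \sum_k \phi_k |c_k|^2$ from them matches the direct expectation value:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = (A + A.conj().T) / 2                  # a Hermitian observable
phi, eigvecs = np.linalg.eigh(A)          # eigenvalues phi_k, eigenvectors |psi_k>

psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)                # the state fed into the PEA

c = eigvecs.conj().T @ psi                # c_k = <psi_k|psi>
p = np.abs(c) ** 2                        # probability of measuring phi_k

assert np.isclose(p.sum(), 1.0)
# <A> reconstructed from the PEA statistics equals <psi|A|psi>:
assert np.isclose(np.sum(phi * p), (psi.conj() @ A @ psi).real)
print("PEA outcome statistics reproduce <A>")
```

On a real device, each PEA run yields one sample $\phi_k$; the probabilities $p_k$ emerge only from repetition, which is why expectation values obey the sampling limits discussed next.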
Because quantum measurement is inherently random, repeating a measurement on multiple copies of the same system helps in determining expectation values of observables. The central limit theorem implies that measuring $N$ copies of a state results in a precision that scales as $N^{-1/2}$ (the standard quantum limit, SQL). For example, repeating the PEA gives an SQL estimate of the coefficients $|c_k|^2$; these can be used to calculate the expectation value $\langle A\rangle = \sum_k \phi_k |c_k|^2$, also to the SQL. When entanglement is available, one can achieve precision scaling as $N^{-1}$—this is the Heisenberg or quantum metrology limit [47]. An algorithm for the expectation values of observables has been proposed that can get arbitrarily close to the Heisenberg limit [72].
The first algorithm for measuring a chemical observable was Lidar and Wang’s calculation of the thermal rate constant by simulating a reaction in first quantization and using the PEA to obtain the energy spectrum and the eigenstates [83]. These values were used to calculate the rate constant on a classical computer by integrating the flux-flux correlation function. We improved on this method with a more direct approach to the rate constant [62]. We showed how to efficiently obtain the product branching ratios given different reactant states—if the initial state is a thermal state (see Sec. 3.4.2), this gives the rate constant directly. Furthermore, the method was used to obtain the entire state-to-state scattering matrix. A method for computing reaction rates using a dedicated quantum simulator, where artificial molecules are experimentally manipulated, has also been proposed [113].
More generally, correlation functions provide information about a system’s transport and spectroscopic properties. On a quantum computer, the correlation function of any two observables can be estimated efficiently if their pseudodynamics can each be simulated efficiently [99, 119]. The method does not suffer from the dynamic sign problem that plagues classical Monte Carlo methods for sampling correlation functions. An alternative approach is the measurement of correlation functions using techniques of linear-response theory [124].
Molecular properties such as the dipole moment or the static polarizability are also of chemical interest. They are derivatives of the molecular energy with respect to an external parameter, such as the electric field. We showed how to calculate them [61] using the PEA and the quantum gradient algorithm [59]. The algorithm is insensitive to the dimensionality of the derivatives, an obstacle to classical computers. For example, the molecular gradient and Hessian can be computed—and used to optimize the geometry—with a number of energy evaluations independent of system size.
3.4 Preparing ground states and thermal states
In Secs. 3.1 and 3.2, we discussed the preparation of various initial states for quantum simulation. We postponed discussing the preparation of ground and thermal states because of subtleties to which we now turn.
3.4.1 Ground state preparation by phase estimation
A large part of quantum chemistry is concerned with the calculation of ground-state properties of molecules, making it desirable to prepare such states on a quantum computer. In the previous section, we described how the PEA can be used to measure a quantum state in the eigenbasis of a Hermitian operator. This suggests a method for preparing a ground state: measuring a state $|\psi\rangle$ in the eigenbasis of the Hamiltonian will project it to the ground state $|\psi_0\rangle$ with probability $|\langle\psi|\psi_0\rangle|^2$.
The problem, therefore, is to prepare a state close to the ground state, from which we can project the ground-state component. Choosing a random state is bound to fail, since the overlap is expected to be exponentially small in the number of qubits $n$: $|\langle\psi|\psi_0\rangle|^2 \sim 2^{-n}$. This means that one would have to repeat the PEA exponentially many times before chancing upon the ground state.
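The exponentially vanishing overlap of a random trial state is straightforward to observe numerically. In this sketch (a NumPy illustration added for concreteness; the "ground state" is an arbitrary fixed vector), the average squared overlap of a Haar-random state with any fixed state in a $2^n$-dimensional space is $1/2^n$:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_overlap_sq(n, trials=200):
    """Average |<random state|fixed state>|^2 in a 2^n-dimensional space."""
    d = 2 ** n
    target = np.zeros(d)
    target[0] = 1.0                       # an arbitrary fixed "ground state"
    total = 0.0
    for _ in range(trials):
        # Normalized complex Gaussian vectors are Haar-distributed.
        psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        psi /= np.linalg.norm(psi)
        total += abs(psi @ target) ** 2
    return total / trials

# The Haar average is exactly 1/2^n: each extra qubit halves the overlap,
# so the expected number of PEA repetitions doubles.
print(mean_overlap_sq(4), mean_overlap_sq(8))
```

Since the PEA succeeds with probability equal to this overlap, the expected number of repetitions grows as $2^n$ for random inputs, which is why a chemically informed guess is essential.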
Methods of quantum chemistry can be used to improve the overlap. We studied the ground-state preparation of H₂O and LiH in second quantization, based on the Hartree-Fock (HF) approximation [10]. The goal was to prepare the ground state of the full configuration interaction (FCI) Hamiltonian, so that its energy could be read out by the PEA, thus solving the electronic-structure problem. Since these molecules were considered at equilibrium geometries, the HF guess was sufficient for the algorithm to estimate the ground-state energies of these molecules with high probability. The overlap can be further improved by choosing a more sophisticated approximation method, such as a multiconfigurational self-consistent field (MCSCF) wavefunction [128].
Alternatively, the overlap can be increased using adiabatic quantum computation (Sec. 2.2.2). We applied adiabatic state preparation (ASP) to the case of the hydrogen molecule H₂ in the STO-3G basis at various bond lengths [10]. As the bond length increases, the HF state has decreasing overlap with the exact state, reaching 0.5 at large separations. ASP works by preparing the ground state of the HF Hamiltonian and then slowly changing it to the FCI Hamiltonian. The speed of the variation of the Hamiltonian is limited by the energy gap between the ground state and the first excited state. In the case of H₂, this method allowed the preparation of the FCI ground state with high fidelity.
Procedures similar to ASP have been proposed to study low-energy states of some toy models in physics [97] and superconductivity [134]. It is also possible to encode a thermal state into the ground state of a Hamiltonian [5, 116], offering a way to prepare a thermal state, a problem further discussed in the next section.
3.4.2 Thermal state preparation
While not often a subject of quantum-chemical calculations, thermal states are of significance because they can be used to solve many problems, ranging from statistical mechanics to the calculation of thermal rate constants. Classical algorithms typically rely on Markov chain Monte Carlo (MCMC) methods, which sample from the Gibbs density matrix, $\rho = e^{-\beta H}/Z$, where $Z = \mathrm{Tr}\, e^{-\beta H}$ is the partition function. The challenge for quantum Hamiltonians is that such sampling requires the eigenstates, which are generally not known in advance and are often even harder to determine.
With a quantum computer, assuming the PEA can be efficiently implemented, we can prepare the thermal state of any classical or quantum Hamiltonian from a Markov chain constructed by repeating a completely positive map [124]. A limitation of that approach is that the Metropolis step can make too many transitions between states of very different energies, sometimes leading to a slow convergence rate of the resulting Markov chain. This issue was addressed by building up the Markov chain from random local unitary operations [123]. The resulting operation is a Metropolis-type sampling for quantum states; although the underlying Markov chain is classical in nature, performing it on a quantum computer provides the benefit of being able to use the PEA without explicitly solving the eigenvalue equations. Moreover, quantum computers can implement Markov chains corresponding to thermal states of classical Hamiltonians with a quadratic speedup [121, 117, 133, 106, 107, 26].
Zalka’s state preparation algorithm (see Sec. 3.2) is applicable to preparing the coherent encoding of thermal states (CETS),
(11) $|\psi_{\mathrm{CETS}}\rangle = \sum_k \sqrt{\frac{e^{-\beta E_k}}{Z}}\; |k\rangle \otimes |k\rangle,$
which is equivalent to the Gibbs density matrix, $\rho = e^{-\beta H}/Z$, if one register is traced out. If the eigenstates $|k\rangle$ and eigenvalues $E_k$ are known, it is possible to construct the CETS directly [129]. On the other hand, combining ideas from belief propagation [87] and quantum amplitude amplification [63], we were able to construct the CETS of classical Hamiltonians with a quadratic quantum speedup [137].
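The defining property of the CETS, that tracing out one register leaves the Gibbs state, can be verified directly for a small Hamiltonian. This NumPy sketch (an illustration assuming the eigendecomposition is available, as in the direct construction of [129]) builds the two-register state and performs the partial trace:

```python
import numpy as np

rng = np.random.default_rng(3)
d, beta = 4, 0.7
H = rng.standard_normal((d, d))
H = (H + H.T) / 2                          # a small real symmetric Hamiltonian
E, U = np.linalg.eigh(H)                   # eigenvalues E_k, eigenvectors |k>

Z = np.sum(np.exp(-beta * E))              # partition function
p = np.exp(-beta * E) / Z                  # Boltzmann weights
# CETS on two registers: sum_k sqrt(p_k) |k> (x) |k>
cets = sum(np.sqrt(p[k]) * np.kron(U[:, k], U[:, k]) for k in range(d))

# Partial trace over the second register: rho_A[a, c] = sum_b rho[ab, cb].
rho_full = np.outer(cets, cets.conj()).reshape(d, d, d, d)
rho_A = np.einsum('abcb->ac', rho_full)

gibbs = U @ np.diag(p) @ U.T               # e^{-beta H} / Z
assert np.allclose(rho_A, gibbs)
print("partial trace of the CETS recovers the Gibbs state")
```

The purification costs a second register of qubits, but it lets a unitary state-preparation routine stand in for an intrinsically mixed-state target.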
3.4.3 QMA-hardness and future prospects
Unfortunately, the procedures for ground- and thermal-state preparation outlined above are not fully scalable to larger systems. A simple way to see this is to imagine a system composed of $N$ identical, noninteracting molecules. Even if one molecule can be prepared with a ground-state overlap of $1-\varepsilon$ by any method, the fidelity of the $N$-molecule state will be exponentially small, $(1-\varepsilon)^N$ [73]. ASP would fail when the energy gap got so small that the Hamiltonian would have to be varied exponentially slowly.
More generally, there are broad classes of Hamiltonians for which finding the ground-state energy (and therefore also a thermal state) is known to be QMA-hard, that is, most likely hard even on a quantum computer (see Sec. 2.3) [70, 4, 66, 98, 111]. Nevertheless, the scaling of the ground- and thermal-state energy problems for chemical systems on a quantum computer is an open question. It is possible that algorithms can be found that are not efficient for all QMA-hard Hamiltonians, but nevertheless succeed for chemical problems.
4 Optimization With Adiabatic Quantum Simulation
We describe the use of quantum computers to solve classical optimization problems related to chemistry and biology. This class of problems plays an important role in drug design, molecular recognition, geometry optimization, and protein folding [53, 44].
Of all the models of quantum computation, adiabatic quantum computation (AQC) is perhaps the best suited for dealing with discrete optimization problems. As explained in Sec. 2.2.2, the essential idea behind AQC is to encode the solution to a computational problem in a (final) Hamiltonian ground state which is prepared adiabatically.
Although final Hamiltonians have been proposed for various problems related to computer science [41, 56, 93, 92, 29], only recently we derived constructions [102] for problems of chemical interest such as the lattice heteropolymer problem [100, 88, 75], an nphard problem [52]. It can be used as a model of protein folding [37], one of the cornerstones of biophysics. Note that the quantumcomputational implementation of the protein folding problem does not assume that the protein is treated quantum mechanically. Instead, the quantum computer is being used as a tool to solve the classical optimization problem. In the lattice folding problem, the sequence of amino acids is coarsegrained to a sequence of beads (amino acids) connected by strings (peptide bond). This chain of beads occupies points on a two or threedimensional lattice; a valid configuration (fold) is a selfavoiding walk on the lattice and its energy is determined by the interaction energies among amino acids that are nonbonded nearest neighbors in the lattice. The hydrophobicpolar (HP) model [80] is the simplest realization of this problem. The amino acids are broken into two groups, hydrophobic (H) and polar (P). Whenever two nonbonded H amino acids are nearest neighbors in the lattice, the freeenergy of the protein is reduced by one unit of energy, . The remaining interactions do not contribute to the free energy . The lattice folding problem consists in finding one of more folds that minimize the free energy of the protein. By the thermodynamic hypothesis [39], such fold(s) correspond to the conformation of the native conformation(s) of the protein.
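For short chains, the HP model can be solved by exhaustive enumeration, which makes the optimization problem concrete. This sketch, with an arbitrary four-residue sequence and the energy unit set to 1, scores each self-avoiding walk by its non-bonded H-H contacts:

```python
from itertools import product

MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

def fold_energy(seq, dirs):
    """HP-model energy of one fold: -1 per non-bonded H-H nearest-neighbor
    contact on the 2D lattice; None if the walk is not self-avoiding."""
    pos = [(0, 0)]
    for d in dirs:
        dx, dy = MOVES[d]
        pos.append((pos[-1][0] + dx, pos[-1][1] + dy))
    if len(set(pos)) != len(pos):
        return None
    e = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):          # skip bonded neighbors
            if seq[i] == seq[j] == 'H' and \
               abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1]) == 1:
                e -= 1
    return e

def ground_energy(seq):
    """Minimum energy over all folds, with the first bond fixed to 'R'."""
    best = 0
    for dirs in product('UDLR', repeat=len(seq) - 2):
        e = fold_energy(seq, ('R',) + dirs)
        if e is not None and e < best:
            best = e
    return best
```

For "HPPH", the U-shaped fold brings the two terminal H beads into contact, giving the minimum energy of one contact.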
The theory behind the quantumcomputational implementation of lattice folding is guided by the proposed quantum adiabatic platform on superconducting qubits [60]. This scheme is designed to find solutions to the problem,
$E(s_1, \ldots, s_N) = \sum_{i=1}^{N} h_i s_i + \sum_{i<j}^{N} J_{ij} s_i s_j,$   (12)

where $s_i \in \{+1, -1\}$, $h_i \in \mathbb{R}$, and $J_{ij} \in \mathbb{R}$. Given a set of local fields $\{h_i\}$ and the interaction matrix $\{J_{ij}\}$, the goal is to find the assignment $s_1^* s_2^* \cdots s_N^*$ that minimizes $E$.
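For a handful of spins, the minimization can be done by brute force; this sketch, with made-up fields and couplings, spells out the cost function the adiabatic hardware is asked to minimize:

```python
from itertools import product

def ising_energy(s, h, J):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j with s_i in {+1, -1}."""
    n = len(s)
    field = sum(h[i] * s[i] for i in range(n))
    pairs = sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    return field + pairs

def brute_force_ground(h, J):
    """Exhaustive search over all 2^N spin assignments (fine for small N)."""
    n = len(h)
    return min(product((1, -1), repeat=n), key=lambda s: ising_energy(s, h, J))

# Illustrative 3-spin instance (values chosen arbitrarily)
h = [1.0, -1.0, 0.5]
J = [[0, 2.0, 0], [0, 0, -1.0], [0, 0, 0]]
ground = brute_force_ground(h, J)
```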
The time-dependent Hamiltonian is chosen to be

$H(t) = A(t)\, H_i + B(t)\, H_f,$   (13)

where $H_i$ has a simple-to-prepare ground state and $H_f = \sum_i h_i \sigma_i^z + \sum_{i<j} J_{ij}\, \sigma_i^z \sigma_j^z$, with $\sigma_i^z$ denoting the Pauli $z$ matrix acting on the $i$th qubit, and $T$ is the running time. The time-dependent functions $A(t)$ and $B(t)$ satisfy $A(0) \gg B(0)$ and $A(T) \ll B(T)$. Therefore, at the beginning (end) of the simulation, the ground state of $H(t)$ corresponds to the ground state of $H_i$ ($H_f$). Note that, as desired, the minimizing assignment $|s_1^* s_2^* \cdots s_N^*\rangle$ is the ground state of $H_f$. Measurement of this final state provides the solution to our problem.
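The schedule can be checked numerically on a toy instance. The sketch below (two qubits, a linear schedule, and illustrative h and J values; the transverse-field choice of H_i is a common convention, not prescribed by the text) integrates the evolution stepwise and measures the final overlap with the ground state of H_f:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def on_qubit(op, i, n):
    """Embed a single-qubit operator op on qubit i of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

n = 2
Hi = -(on_qubit(sx, 0, n) + on_qubit(sx, 1, n))       # transverse field
# Illustrative final Hamiltonian: H_f = h1 sz1 + h2 sz2 + J sz1 sz2
Hf = 0.5 * on_qubit(sz, 0, n) - 1.0 * on_qubit(sz, 1, n) \
     + 0.8 * on_qubit(sz, 0, n) @ on_qubit(sz, 1, n)

def step(H, psi, dt):
    """One step of exact evolution exp(-i H dt) via eigendecomposition."""
    E, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * E * dt) * (V.conj().T @ psi))

T, steps = 50.0, 2000
psi = np.full(2 ** n, 0.5, dtype=complex)             # |++>, ground state of Hi
for k in range(steps):
    s = (k + 0.5) / steps                             # linear schedule: A=1-s, B=s
    psi = step((1 - s) * Hi + s * Hf, psi, T / steps)

E, V = np.linalg.eigh(Hf)
overlap = abs(V[:, 0].conj() @ psi) ** 2              # final ground-state population
```

With a slow enough schedule the final state has nearly unit overlap with the ground state of H_f, as the adiabatic theorem predicts.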
The theoretical challenge is to map the lattice folding free energy function into the form of Eq. 12 [102, 104]. In two dimensions, we use two binary variables to determine the direction of each bond between two amino acids (beads). If a particular bond points upwards, we write “11”; if it points downwards, leftwards or rightwards, we write “00”, “10”, or “01”, respectively. For an $N$-amino-acid protein, we need two binary variables for each of the $N-1$ bonds. Fixing the direction of the first bond reduces the count to $2(N-2)$ binary variables. Any possible fold of the $N$-bead chain can be represented by a string of binary variables of the form $01\,q_1 q_2 \cdots q_{2(N-2)}$, where we set the direction of the first bond to be right (“01”).
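The encoding is easy to exercise in code; this small decoder (written for illustration, not taken from [102, 104]) maps a bit string with the first bond fixed to "01" back to lattice coordinates and rejects walks that intersect themselves:

```python
# Direction code from the text: up = "11", down = "00", left = "10", right = "01"
DIR = {'01': (1, 0), '10': (-1, 0), '11': (0, 1), '00': (0, -1)}

def decode_fold(bits, n_amino):
    """Map '01' + bits (two bits per remaining bond) to lattice coordinates.
    Returns the bead positions, or None if the walk intersects itself."""
    code = '01' + bits
    assert len(code) == 2 * (n_amino - 1), "two bits per bond"
    pos = [(0, 0)]
    for k in range(0, len(code), 2):
        dx, dy = DIR[code[k:k + 2]]
        pos.append((pos[-1][0] + dx, pos[-1][1] + dy))
    return pos if len(set(pos)) == len(pos) else None
```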
As an example, the free energy function [104] associated with the folding of a four-amino-acid peptide assisted by a “chaperone” protein (see Fig. 3) is
(14) 
By substituting values for the four binary variables defining the directions of the second ($q_1 q_2$) and third ($q_3 q_4$) bonds, we can verify that the 16 assignments provide the desired energy spectrum (Fig. 3). Eq. 14 is not in the form of Eq. 12. We converted this energy function from its quartic form to a quadratic form, using two extra ancilla binary variables [104]. After these substitutions, the free energy function takes the form of Eq. 12. An early experimental realization is described in Sec. 5.
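One standard way to do such a reduction (not necessarily the exact construction of [104]) is the ancilla penalty gadget, which can be verified exhaustively:

```python
from itertools import product

def penalty(q1, q2, a):
    """Penalty that is >= 0 and == 0 exactly when the ancilla a equals q1*q2."""
    return q1 * q2 - 2 * a * (q1 + q2) + 3 * a

def quartic(q):
    """An example quartic term in four binary variables."""
    return q[0] * q[1] * q[2] * q[3]

def quadratic(q, a, b, M=10):
    """Quadratic form with ancillas a ~ q0*q1 and b ~ q2*q3; for M large
    enough, minimizing over (a, b) reproduces the quartic term."""
    return a * b + M * (penalty(q[0], q[1], a) + penalty(q[2], q[3], b))
```

Minimizing the quadratic form over the two ancillas recovers the quartic value for every assignment of the original variables.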
Note that solving the HP model is NP-hard [16, 32, 52]. AQC is equivalent to the circuit model, so it is unlikely to be able to solve NP-hard problems in polynomial time (see Sec. 2.3). However, real-world problems (and the instances defining biologically relevant proteins) are not necessarily structureless. Taking advantage of the structure of, or information about, a particular problem instance is one of the ideas behind new algorithmic strategies [8, 40, 105]. An example is to introduce heuristic strategies for AQC by initializing the calculation with an educated guess [105].
5 Experimental Progress
Experimental quantum simulation has progressed rapidly [24, 22] since the early simulation of quantum oscillators using nuclear magnetic resonance (NMR) [115]. Here we review chemical applications of available quantum-computational devices.
Quantum optics
On an optical quantum computer, various degrees of freedom of single photons, such as polarization or path, are used to encode quantum information [96, 74]. This architecture was used for the first quantum simulation of a molecular system, a minimal-basis model of the hydrogen molecule H$_2$ [79]. Qubits were encoded in photon polarization, while two-qubit gates were implemented probabilistically using linear-optical elements and projective measurement. The minimal-basis description of H$_2$ used two spin-orbitals per atom. Since the FCI Hamiltonian is block-diagonal, with blocks no larger than $2 \times 2$, two qubits sufficed for the experiment: one for storing the system wavefunction, and one for the readout of the PEA. The PEA was implemented iteratively, extracting one bit of the value of the energy at a time. Twenty bits of the energy were obtained, and the answer was exact within the basis set. Fig. 4 describes the experiment and the potential energy surfaces that were obtained.
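The iterative readout can be mimicked classically. Assuming the common feedback convention in which the already-measured low-order bits set a corrective rotation (implementations vary in the exact convention), a phase with an exact m-bit binary expansion is recovered bit by bit:

```python
import math
import random

def iterative_pea(phase, m, seed=0):
    """Iterative phase estimation for a phase with an exact m-bit binary
    expansion. Bits are measured least-significant first; already-measured
    bits set the feedback angle so each round reads out one new bit."""
    rng = random.Random(seed)
    bits = []                      # bits[j] = phi_{m-j} (LSB first)
    for k in range(m, 0, -1):
        # Feedback: subtract the contribution of the bits measured so far
        omega = sum(bits[j] / 2 ** (m - j - k + 1) for j in range(len(bits)))
        theta = 2 ** (k - 1) * phase - omega
        # After the final Hadamard, the control qubit reads 1 with
        # probability sin^2(pi * theta)
        p1 = math.sin(math.pi * theta) ** 2
        bits.append(1 if rng.random() < p1 else 0)
    return sum(bits[j] / 2 ** (m - j) for j in range(m))

estimate = iterative_pea(0.8125, 4)   # 0.8125 = 0.1101 in binary
```

Because the phase here has an exact 4-bit expansion, each round's measurement outcome is deterministic and the phase is recovered exactly.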
Nuclear magnetic resonance
Nuclear spins can serve as qubits, being addressed and read out using an NMR spectrometer [14]. The first experimental quantum simulation, of a harmonic oscillator, was performed using NMR [115]. The platform has since been used to simulate a number of model systems [91, 23, 135, 101], leading up to the recent simulation of H$_2$ [38]. The H$_2$ experiment used $^{13}$C-labeled chloroform, in which the carbon and hydrogen nuclear spins form two qubits. The experiment achieved 45 bits of precision (15 iterations of the PEA, 3 bits per iteration) in the ground-state energy. Adiabatic state preparation (Sec. 3.4) was implemented for various bond distances.
Superconducting systems
The circulating current (clockwise or counterclockwise) flowing in a micron-sized loop of a superconductor can be used as a qubit [136, 82]. Examples of applications based on superconducting qubits include the tailor-made generation of harmonic oscillator states [55] and the implementation of the Deutsch-Jozsa and Grover quantum algorithms [36]. Recently, the free energy function discussed in Sec. 4 for the four-amino-acid peptide assisted by a chaperone protein (see Fig. 3) was experimentally realized [103]. A microprocessor consisting of an array of coupled superconducting qubits was used to implement the time-dependent Hamiltonian in Eq. 13, with a transverse-field Hamiltonian as $H_i$ [60, 58, 49]. The quantum hardware, operating at a temperature of 20 mK, found the correct solution with a probability of 78%. Characterization of this device is currently underway [51, 17, 50, 78].
Trapped ions
6 Conclusions
We have outlined how a quantum computer could be employed for the simulation of chemical systems and their properties, including correlation functions and reaction rates. A method for lattice protein folding was also discussed. Although we focused on the adiabatic and circuit models, these are not the only universal models of quantum computation and it may be possible to make further algorithmic progress with models such as topological quantum computing [69, 90], oneway quantum computing [108, 109], and quantum walks [65, 28, 85].
We reported on the first experiments relevant to chemistry and we expect more to come in the near future. With recent technological advances, there are many prospects for the future of quantum simulation. However, as more qubits are added to experiments, more effort will be needed to control decoherence, since error correction procedures [94] might not be sufficient in practice due to the spatial and temporal overheads required [31]. Instead, it may be possible to build resilient quantum simulators or to incorporate the noise into the simulation.
Although practical quantum computers are not available yet, quantum information theory has already influenced the development of new methods for quantum chemistry. For instance, density matrix renormalization group has been extended using quantum information, and its applications to chemistry have been vigorously pursued [25]. By studying the simulation of chemical systems on quantum computers, we can also expect new insights into the complexity of computing their properties classically.
In analogy to classical electronics, one could say that, as of 2010, the implementation of quantum information processors is in the vacuum-tube era. A development parallel to that of the transistor would allow for rapid progress in the capabilities of quantum information processors. These larger devices would allow for routine execution of exact, nonadiabatic dynamics simulations, as well as full-configuration-interaction calculations of molecular systems that are intractable with current classical computing technology.
Summary Points
1. A universal quantum computer can simulate chemical systems more efficiently (in some cases exponentially so) than a classical computer.
2. Preparing the ground state of an arbitrary Hamiltonian is a hard (QMA-complete) problem. However, the ground states of certain chemical Hamiltonians can be found efficiently using quantum algorithms.
3. Simulation of the quantum dynamics of physical systems is in general efficient on a quantum computer.
4. Properties of quantum states can be obtained by various measurement methods.
5. Classical optimization problems, such as lattice protein folding, can be studied by means of the adiabatic quantum-computational model.
6. Quantum simulation for chemistry has been experimentally realized in quantum optics, nuclear magnetic resonance, and superconducting devices.
Future Issues
1. Developing quantum simulation methods based on alternative models of quantum computation is an open research direction.
2. Dedicated quantum simulators built so far mostly target condensed-matter systems; experimental progress on simulating chemical systems is desirable.
3. Decoherence is currently the major obstacle to scaling up existing experimental setups; progress in both theory and experiment is needed to overcome it.
4. We have not covered methods of quantum error correction, which will be important for large-scale simulations.
Footnotes
 Strictly speaking, decision problems: those with a yes-or-no answer. However, other problems can be recast as decision problems; for example, instead of asking “What is the ground-state energy of a molecule?” we might ask “Is its ground-state energy less than a given threshold?”
 The terms “analog” and “digital” have also been used for dedicated and universal quantum simulation, respectively [24].
 Non-unitary open-system dynamics have been studied as well [11].
 We also note the method of [21], which in our terminology is a hybrid between second- and first-quantization methods. It associates a qubit with the occupation of each lattice site.
 Other methods for eigenvalue measurement include pairing adiabatic quantum evolution with Kitaev’s original scheme [19] and applications of the Hellmann-Feynman theorem [97].
References
 S. Aaronson. The limits of quantum computers. Sci. Am., March:62, 2008.
 D. S. Abrams and S. Lloyd. Simulation of many-body Fermi systems on a universal quantum computer. Phys. Rev. Lett., 79(13):2586–2589, 1997.
 D. S. Abrams and S. Lloyd. Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors. Phys. Rev. Lett., 83(24):5162–5165, 1999.
 D. Aharonov and T. Naveh. Quantum NP: A survey. arXiv:quant-ph/0210077, 2002.
 D. Aharonov and A. Ta-Shma. Adiabatic quantum state generation. SIAM J. Comput., 37(1):47–82, 2008.
 D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM J. Comput., 37(1):166–194, 2007.
 M. H. S. Amin. On the inconsistency of the adiabatic theorem. arXiv:0810.4335, 2008.
 M. H. S. Amin and V. Choi. First-order quantum phase transition in adiabatic quantum computation. Phys. Rev. A., 80(6):062326, 2009.
 S. Arora and B. Barak. Computational Complexity: A Modern Approach. Cambridge University Press, 2009.
 A. Aspuru-Guzik, A. D. Dutoi, P. J. Love, and M. Head-Gordon. Simulated quantum computation of molecular energies. Science, 309(5741):1704–1707, 2005.
 D. Bacon, A. M. Childs, I. L. Chuang, J. Kempe, D. W. Leung, and X. Zhou. Universal simulation of Markovian quantum dynamics. Phys. Rev. A, 64(6):062302, 2001.
 W. S. Bakr, A. Peng, M. E. Tai, R. Ma, J. Simon, J. I. Gillen, S. Folling, L. Pollet, and M. Greiner. Probing the superfluid-to-Mott-insulator transition at the single-atom level. Science, DOI:10.1126/science.1192368, 2010.
 A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter. Elementary gates for quantum computation. Phys. Rev. A, 52:3457, 1995.
 J. Baugh, J. Chamilliard, C. M. Chandrashekar, M. Ditty, A. Hubbard, R. Laflamme, M. Laforest, D. Maslov, O. Moussa, C. Negrevergne, M. Silva, S. Simmons, C. A. Ryan, D. G. Cory, J. S. Hodges, and C. Ramanathan. Quantum information processing using nuclear and electron magnetic resonance: Review and prospects. Physics in Canada, Oct.–Dec., 2007.
 G. Benenti and G. Strini. Quantum simulation of the single-particle Schrödinger equation. Am. J. Phys., 76(7):657–662, 2008.
 B. Berger and T. Leighton. Protein folding in the hydrophobic-hydrophilic (HP) model is NP-complete. J. Comput. Biol., 5:27–40, 1998.
 A. J. Berkley, M. W. Johnson, P. Bunyk, R. Harris, J. Johansson, T. Lanting, E. Ladizinsky, E. Tolkacheva, M. H. S. Amin, and G. Rose. A scalable readout system for a superconducting adiabatic quantum optimization system. arXiv:0905.0891, 2009.
 D. Berry, G. Ahokas, R. Cleve, and B. Sanders. Efficient quantum algorithms for simulating sparse Hamiltonians. Comm. Math. Phys., 270:359, 2007.
 J. D. Biamonte, V. Bergholm, J. D. Whitfield, J. Fitzsimons, and A. Aspuru-Guzik. Adiabatic quantum simulators, 2010.
 R. Blatt and D. Wineland. Entangled states of trapped atomic ions. Nature, 453(7198):1008–1015, 2008.
 B. M. Boghosian and W. Taylor. Simulating quantum mechanics on a quantum computer. Physica D, 120(12):30 – 42, 1998.
 K. L. Brown, W. J. Munro, and V. M. Kendon. Using quantum computers for quantum simulation. arXiv:1004.5528, 2010.
 K. R. Brown, R. J. Clark, and I. L. Chuang. Limitations of quantum simulation examined by simulating a pairing Hamiltonian using nuclear magnetic resonance. Phys. Rev. Lett., 97(5):050504, 2006.
 I. Buluta and F. Nori. Quantum simulators. Science, 326:108 – 111, 2009.
 G. K.-L. Chan, J. J. Dorando, D. Ghosh, J. Hachmann, E. Neuscamman, H. Wang, and T. Yanai. An introduction to the density matrix renormalization group ansatz in quantum chemistry. Prog. Theor. Chem. and Phys., 18:49, 2008.
 C. Chiang and P. Wocjan. Quantum algorithm for preparing thermal Gibbs states – detailed analysis. arXiv:1001.1130, 2010.
 A. M. Childs. Quantum information processing in continuous time. Ph.D., MIT, Cambridge, MA, 2004.
 A. M. Childs. Universal computation by quantum walk. Phys. Rev. Lett., 102:180501, 2009.
 V. Choi. Adiabatic quantum algorithms for the NP-complete maximum-weight independent set, exact cover and 3SAT problems. arXiv:1004.2226, 2010.
 J. I. Cirac and P. Zoller. Quantum computations with cold trapped ions. Phys. Rev. Lett., 74(20):4091, 1995.
 C. R. Clark, T. S. Metodi, S. D. Gasster, and K. R. Brown. Resource requirements for fault-tolerant quantum simulation: The ground state of the transverse Ising model. Phys. Rev. A, 79:062314, 2009.
 P. Crescenzi, D. Goldman, C. Papadimitriou, A. Piccolboni, and M. Yannakakis. On the complexity of protein folding. J. Comput. Biol., 5:597–603, 1998.
 E. Davidson. Properties and uses of natural orbitals. Rev. Mod. Phys., 44(3):451–464, 1972.
 M. J. Davis and E. J. Heller. Multidimensional wave functions from classical trajectories. J. Chem. Phys., 75(8):3916, 1981.
 D. Deutsch. Quantum computational networks. Proc. R. Soc. Lond. A, 425:73, 1989.
 L. DiCarlo, J. M. Chow, J. M. Gambetta, L. S. Bishop, B. R. Johnson, D. I. Schuster, J. Majer, A. Blais, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf. Demonstration of two-qubit algorithms with a superconducting quantum processor. Nature, 460(7252):240–244, 2009.
 K. A. Dill, S. B. Ozkan, M. S. Shell, and T. R. Weikl. The protein folding problem. Ann. Rev. Biophys., 37:289–316, 2008. PMID: 18573083.
 J. Du, N. Xu, P. Wang, S. Wu, and D. Lu. NMR implementation of a molecular hydrogen quantum simulation with adiabatic state preparation. Phys. Rev. Lett., 104:030502, 2010.
 C. J. Epstein, R. F. Goldberger, and C. B. Anfinsen. Genetic control of tertiary protein structure: studies with model systems. Cold. Spring. Harb. Sym., 28:439, 1963.
 E. Farhi, J. Goldstone, D. Gosset, S. Gutmann, H. Meyer, and P. Shor. Quantum adiabatic algorithms, small gaps, and different paths. arXiv:0909.4766v2, 2009.
 E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106, 2000.
 M. Feit, J. Fleck Jr, and A. Steiger. Solution of the Schrödinger equation by a spectral method. J. Comput. Phys., 47(3):412–433, 1982.
 R. Feynman. Simulating physics with computers. Int. J. Theor. Phys., 21(6):467–488, 1982.
 C. A. Floudas and P. M. Pardalos. Optimization in Computational Chemistry and Molecular Biology: Local and Global Approaches. Springer, 1 edition, 2000.
 A. Friedenauer, H. Schmitz, J. T. Glueckert, D. Porras, and T. Schaetz. Simulating a quantum magnet with trapped ions. Nature Phys., 4(10):757–761, 2008.
 R. Gerritsma, G. Kirchmair, F. Zähringer, E. Solano, R. Blatt, and C. F. Roos. Quantum simulation of the Dirac equation. Nature, 463:68, 2010.
 V. Giovannetti, S. Lloyd, and L. Maccone. Quantum-enhanced measurements: Beating the standard quantum limit. Science, 306:1330, 2004.
 L. Grover and T. Rudolph. Creating superpositions that correspond to efficiently integrable probability distributions. arXiv:quant-ph/0208112, 2002.
 R. Harris, J. Johansson, A. J. Berkley, M. W. Johnson, T. Lanting, S. Han, P. Bunyk, E. Ladizinsky, T. Oh, I. Perminov, E. Tolkacheva, S. Uchaikin, E. M. Chapple, C. Enderud, C. Rich, M. Thom, J. Wang, B. Wilson, and G. Rose. Experimental demonstration of a robust and scalable flux qubit. Phys. Rev. B., 81(13):134510, 2010.
 R. Harris, M. W. Johnson, T. Lanting, A. J. Berkley, J. Johansson, P. Bunyk, E. Tolkacheva, E. Ladizinsky, N. Ladizinsky, T. Oh, F. Cioata, I. Perminov, P. Spear, C. Enderud, C. Rich, S. Uchaikin, M. C. Thom, E. M. Chapple, J. Wang, B. Wilson, M. H. S. Amin, N. Dickson, K. Karimi, B. Macready, C. J. S. Truncik, and G. Rose. Experimental investigation of an eight qubit unit cell in a superconducting optimization processor. arXiv:1004.1628, 2010.
 R. Harris, T. Lanting, A. J. Berkley, J. Johansson, M. W. Johnson, P. Bunyk, E. Ladizinsky, N. Ladizinsky, T. Oh, and S. Han. Compound Josephson-junction coupler for flux qubits with minimal crosstalk. Phys. Rev. B., 80(5):052506, 2009.
 W. E. Hart and S. Istrail. Robust proofs of NP-hardness for protein folding: General lattices and energy potentials. J. Comput. Biol., 4(1):1–22, 1997.
 A. K. Hartmann and H. Rieger. New Optimization Algorithms in Physics. WileyVCH, 2004.
 N. Hatano and M. Suzuki. Finding exponential product formulas of higher orders. In A. Das and B. K. Chakrabarti, editors, Lecture Notes in Physics, volume 679, pages 37–68. Springer Berlin, 2005.
 M. Hofheinz, H. Wang, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, J. Wenner, J. M. Martinis, and A. N. Cleland. Synthesizing arbitrary quantum states in a superconducting resonator. Nature, 459(7246):546–549, 2009.
 T. Hogg. Adiabatic quantum computing for random satisfiability problems. Phys. Rev. A., 67(2):022314, 2003.
 M. Johanning, A. F. Varón, and C. Wunderlich. Quantum simulations with cold trapped ions. J. Phys. B., 42:154009, 2009.
 M. W. Johnson, P. Bunyk, F. Maibaum, E. Tolkacheva, A. J. Berkley, E. M. Chapple, R. Harris, J. Johansson, T. Lanting, I. Perminov, E. Ladizinsky, T. Oh, and G. Rose. A scalable control system for a superconducting adiabatic quantum optimization processor. Supercond. Sci. Tech., 23(6):065004, 2010.
 S. P. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 95:050501, 2005.
 W. M. Kaminsky, S. Lloyd, and T. P. Orlando. Scalable superconducting architecture for adiabatic quantum computation. arXiv:quant-ph/0403090, 2004.
 I. Kassal and A. Aspuru-Guzik. Quantum algorithm for molecular properties and geometry optimization. J. Chem. Phys., 131(22):224102, 2009.
 I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik. Polynomial-time quantum algorithm for the simulation of chemical dynamics. Proc. Natl. Acad. Sci., 105(48):18681–6, 2008.
 P. Kaye, R. Laflamme, and M. Mosca. An Introduction to Quantum Computing. Oxford University Press, 2007.
 P. Kaye and M. Mosca. Quantum networks for generating arbitrary quantum states. arXiv:quant-ph/0407102, 2004.
 J. Kempe. Quantum random walks: an introductory overview. Contemporary Physics, 44:307, 2003.
 J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM J. Comput., 35:1070–1097, 2006.
 K. Kim, M.-S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe. Quantum simulation of frustrated Ising spins with trapped ions. Nature, 465:560, 2010.
 A. Kitaev. Quantum measurements and the abelian stabilizer problem. arXiv:quant-ph/9511026, 1995.
 A. Kitaev. Fault-tolerant quantum computation by anyons. Ann. Phys., 303(1):2–30, 2003.
 A. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation. American Mathematical Society, 2002.
 A. Kitaev and W. A. Webb. Wavefunction preparation and resampling using a quantum computer. arXiv:0801.0342, 2008.
 E. Knill, G. Ortiz, and R. Somma. Optimal quantum measurements of expectation values of observables. Phys. Rev. A, 75:012328, 2007.
 W. Kohn. Nobel lecture: Electronic structure of matter—wave functions and density functionals. Rev. Mod. Phys., 71(5):1253–1266, 1999.
 P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn. Linear optical quantum computing with photonic qubits. Rev. Mod. Phys., 79(1):135–40, 2007.
 A. Kolinski and J. Skolnick. Lattice Models of Protein Folding, Dynamics and Thermodynamics. Chapman & Hall, 1996.
 D. Kosloff and R. Kosloff. A Fourier method solution for the time dependent Schrödinger equation as a tool in molecular dynamics. J. Comput. Phys., 51(1):35–53, 1983.
 T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O’Brien. Quantum computers. Nature, 464(7285):45–53, 2010.
 T. Lanting, R. Harris, J. Johansson, M. H. S. Amin, A. J. Berkley, S. Gildert, M. W. Johnson, P. Bunyk, E. Tolkacheva, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, E. M. Chapple, C. Enderud, C. Rich, B. Wilson, M. C. Thom, S. Uchaikin, and G. Rose. Observation of cotunneling in pairs of coupled flux qubits. arXiv:1006.0028, 2010.
 B. P. Lanyon, J. D. Whitfield, G. G. Gillett, M. E. Goggin, M. P. Almeida, I. Kassal, J. D. Biamonte, M. Mohseni, B. J. Powell, M. Barbieri, A. Aspuru-Guzik, and A. G. White. Towards quantum chemistry on a quantum computer. Nature Chem., 2(2):106–111, 2010.
 K. F. Lau and K. A. Dill. A lattice statistical mechanics model of the conformational and sequence spaces of proteins. Macromolecules, 22(10):3986–3997, 1989.
 B. Lévi, B. Georgeot, and D. L. Shepelyansky. Quantum computing of quantum chaos in the kicked rotator model. Phys. Rev. E, 67(4):046220, 2003.
 B. G. Levi. Superconducting qubit systems come of age. Phys. Today., 62(7):14, 2009.
 D. A. Lidar and H. Wang. Calculating the thermal rate constant with exponential speedup on a quantum computer. Phys. Rev. E, 59(2):2429–2438, 1999.
 S. Lloyd. Universal quantum simulators. Science, 273(5278):1073–1078, 1996.
 N. B. Lovett, S. Cooper, M. Everitt, M. Trevers, and V. Kendon. Universal quantum computation using the discrete-time quantum walk. Phys. Rev. A, 81:042330, 2010.
 A. Messiah. Quantum Mechanics. Dover Publications, 1999.
 M. Mezard and A. Montanari. Information, Physics, and Computation. Oxford University Press, 2009.
 L. Mirny and E. Shakhnovich. Protein folding theory: from lattice to allatom models. Annu. Rev. Biophys. Bio., 30:361–396, 2001.
 A. Mizel, D. A. Lidar, and M. Mitchell. Simple proof of equivalence between adiabatic quantum computation and the circuit model. Phys. Rev. Lett., 99(7):070502, 2007.
 C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. D. Sarma. NonAbelian anyons and topological quantum computation. Rev. Mod. Phys., 80(3):1083, 2008.
 C. Negrevergne, R. Somma, G. Ortiz, E. Knill, and R. Laflamme. Liquid-state NMR simulations of quantum many-body problems. Phys. Rev. A, 71(3):032344, 2005.
 H. Neven, V. S. Denchev, G. Rose, and W. G. Macready. Training a large scale classifier with the quantum adiabatic algorithm. arXiv:0912.0779, 2009.
 H. Neven, G. Rose, and W. G. Macready. Image recognition with an adiabatic quantum computer. I: Mapping to quadratic unconstrained binary optimization. arXiv:0804.4457, 2008.
 M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
 L. Null and J. Lobur. The essentials of computer organization and architecture. Jones & Bartlett Pub, 2003.
 J. L. O’Brien. Optical quantum computing. Science, 318(5856):1567–1570, 2007.
 S. Oh. Quantum computational method of finding the ground-state energy and expectation values. Phys. Rev. A, 77(1):012326, 2008.
 R. Oliveira and B. M. Terhal. The complexity of quantum spin systems on a twodimensional square lattice. Quant. Inf. Comp., 8:900–924, 2008.
 G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme. Quantum algorithms for fermionic simulations. Phys. Rev. A, 64(2):022319, 2001.
 V. S. Pande, A. Y. Grosberg, and T. Tanaka. Heteropolymer freezing and design: Towards physical models of protein folding. Rev. Mod. Phys., 72(1):259, 2000.
 X. Peng, J. Du, and D. Suter. Quantum phase transition of ground-state entanglement in a Heisenberg spin chain simulated in an NMR quantum computer. Phys. Rev. A, 71(1):012307, 2005.
 A. Perdomo, C. Truncik, I. Tubert-Brohman, G. Rose, and A. Aspuru-Guzik. Construction of model Hamiltonians for adiabatic quantum computation and its application to finding low-energy conformations of lattice protein models. Phys. Rev. A, 78(1):012320, 2008.
 A. Perdomo-Ortiz, M. Drew-Brook, N. Dickson, G. Rose, and A. Aspuru-Guzik. Experimental realization of an 8-qubit quantum-adiabatic algorithm for a lattice protein model: Towards optimization on a quantum computer. In preparation, 2010.
 A. Perdomo-Ortiz, B. O’Gorman, and A. Aspuru-Guzik. Construction of energy functions for self-avoiding walks and the lattice heteropolymer model: resource efficient encoding for quantum optimization. In preparation, 2010.
 A. Perdomo-Ortiz, S. Venegas-Andraca, and A. Aspuru-Guzik. A study of heuristic guesses for adiabatic quantum computation. Quantum Inf. Process., 2010.
 D. Poulin and P. Wocjan. Preparing ground states of quantum many-body systems on a quantum computer. Phys. Rev. Lett., 102(13):130503, 2009.
 D. Poulin and P. Wocjan. Sampling from the thermal quantum Gibbs state and evaluating partition functions with a quantum computer. Phys. Rev. Lett., 103:220502, 2009.
 R. Raussendorf and H. J. Briegel. A one-way quantum computer. Phys. Rev. Lett., 86(22):5188, 2001.
 R. Raussendorf, D. E. Browne, and H. J. Briegel. Measurement-based quantum computation on cluster states. Phys. Rev. A, 68(2):022312, 2003.
 R. Schack. Simulation on a quantum computer. Informatik – Forschung und Entwicklung, 21:21–27, 2006.
 N. Schuch and F. Verstraete. Computational complexity of interacting electrons and fundamental limitations of density functional theory. Nature Phys., 5(10):732–735, 2009.
 P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput., 26(5):1484–1509, 1997.
 A. Smirnov, S. Savel’ev, L. Mourokh, and F. Nori. Modelling chemical reactions using semiconductor quantum dots. Europhys. Lett., 80:67008, 2007.
 A. N. Soklakov and R. Schack. Efficient state preparation for a register of quantum bits. Phys. Rev. A, 73(1):012307–13, 2006.
 S. Somaroo, C. H. Tseng, T. F. Havel, R. Laflamme, and D. G. Cory. Quantum simulations on a quantum computer. Phys. Rev. Lett., 82:5381–5384, 1999.
 R. Somma, C. Batista, and G. Ortiz. Quantum approach to classical statistical mechanics. Phys. Rev. Lett., 99(3):1–4, 2007.
 R. Somma, S. Boixo, H. Barnum, and E. Knill. Quantum simulations of classical annealing processes. Phys. Rev. Lett., 101(13):130504, 2008.
 R. Somma, G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme. Simulating physical phenomena by quantum networks. Phys. Rev. A, 65(4), 2002.
 R. Somma, G. Ortiz, E. Knill, and J. Gubernatis. Quantum simulations of physics problems. Int. J. Quantum Inf., 1:189, 2003.
 G. Strini. Error sensitivity of a quantum simulator. I: a first example. Fortschritte der Physik, 50(2):171–183, 2002.
 M. Szegedy. Quantum speedup of Markov chain based algorithms. In FOCS ’04: Proc. 45th Ann. IEEE Symp. Found. Comput. Sci., pages 32–41, Washington, DC, USA, 2004. IEEE Computer Society.
 D. J. Tannor. Introduction to Quantum Mechanics: A Time-Dependent Perspective. University Science Books, 2006.
 K. Temme, T. Osborne, K. Vollbrecht, D. Poulin, and F. Verstraete. Quantum Metropolis sampling. arXiv:0911.3635, 2009.
 B. Terhal and D. DiVincenzo. Problem of equilibration and the computation of correlation functions on a quantum computer. Phys. Rev. A, 61(2):022301, 2000.
 D. M. Tong. Quantitative condition is necessary in guaranteeing the validity of the adiabatic approximation. Phys. Rev. Lett., 104(12):120401, 2010.
 D. M. Tong, K. Singh, L. C. Kwek, and C. H. Oh. Sufficiency criterion for the validity of the adiabatic approximation. Phys. Rev. Lett., 98(15):150402–4, 2007.
 H. Wang, S. Ashhab, and F. Nori. Efficient quantum algorithm for preparing molecular-system-like states on a quantum computer. Phys. Rev. A, 79(4):042335, 2009.
 H. Wang, S. Kais, A. Aspuru-Guzik, and M. R. Hoffmann. Quantum algorithm for obtaining the energy spectrum of molecular systems. Phys. Chem. Chem. Phys., 10(35):5388–93, 2008.
 N. J. Ward, I. Kassal, and A. Aspuru-Guzik. Preparation of many-body states for quantum simulation. J. Chem. Phys., 130(19):194105, 2009.
 J. Watrous. Quantum computational complexity. In Encyclopedia of Complexity and System Science. Springer Berlin, 2009.
 J. Whitfield, J. Biamonte, and A. Aspuru-Guzik. Quantum computing resource estimate of molecular energy simulation. arXiv:1001.3855, 2010.
 S. Wiesner. Simulations of many-body quantum systems by a quantum computer. arXiv:quant-ph/9603028, 1996.
 P. Wocjan and A. Abeyesinghe. Speedup via quantum sampling. Phys. Rev. A, 78(4):042336, 2008.
 L.-A. Wu, M. S. Byrd, and D. A. Lidar. Polynomial-time simulation of pairing models on a quantum computer. Phys. Rev. Lett., 89(5):057904, 2002.
 X. Yang, A. Wang, F. Xu, and J. Du. Experimental simulation of a pairing Hamiltonian on an NMR quantum computer. Chem. Phys. Lett., 422:20–24, 2006.
 J. Q. You and F. Nori. Superconducting circuits and quantum information. Phys. Today., 58(11):42–47, 2005.
 M.-H. Yung, D. Nagaj, J. D. Whitfield, and A. Aspuru-Guzik. Simulation of classical thermal states on a quantum computer: A renormalization group approach. arXiv:1005.0020, 2010.
 C. Zalka. Simulating quantum systems on a quantum computer. Proc. Roy. Soc. A, 454(1969):313–322, 1998.