Universal quantum Hamiltonians
Quantum many-body systems exhibit an extremely diverse range of phases and physical phenomena. Here, we prove that the entire physics of any other quantum many-body system is replicated in certain simple, “universal” spin-lattice models. We first characterise precisely what it means for one quantum many-body system to replicate the entire physics of another. We then show that certain very simple spin-lattice models are universal in this very strong sense. Examples include the Heisenberg and XY models on a 2D square lattice (with non-uniform coupling strengths). We go on to fully classify all two-qubit interactions, determining which are universal and which can only simulate more restricted classes of models. Our results put the practical field of analogue Hamiltonian simulation on a rigorous footing and take a significant step towards justifying why error correction may not be required for this application of quantum information technology.
List of Theorems
- Definition (Analogue Hamiltonian simulation)
- Theorem (follows from [JR52], Theorem 4 and [Mar67], Theorem 2)
- Theorem (Encodings)
- Definition (Standard encoding)
- Definition (Local encoding)
- Lemma (Lemma 3.3 of [AE11])
- Corollary (Product-preserving encodings)
- Lemma (First-order simulation [BH17])
- Lemma (Second-order simulation [BH17])
- Lemma (Third-order simulation [BH17])
- Theorem (essentially [OT08])
- Definition (Spatial sparsity [OT08])
List of Figures
Part I Extended overview
The properties of any physical system are captured in its Hamiltonian, which describes all the possible energy configurations of the system. Amongst the workhorses of theoretical many-body and condensed matter physics are spin-lattice Hamiltonians, in which the degrees of freedom are quantum spins arranged on a lattice, and the overall Hamiltonian is built up from few-body interactions between these spins. Although these are idealised, toy models of real materials, different spin-lattice Hamiltonians are able to model a wide variety of different quantum phases and many-body phenomena: phase transitions [Sac07], frustration [Die13], spontaneous symmetry-breaking [ADZ12], gauge symmetries [Kog79], quantum magnetism [SRFB08], spin liquids [ZKN16], topological order [Kit03], and more. In this work, we prove that there exist particular, simple spin models that are universal: they can replicate to any desired accuracy the entire physics of any other quantum many-body system (including systems composed not only of spins, but also bosons and fermions). This implies, in particular, that the ground state, full energy spectrum and associated excited states, all observables, correlation functions, thermal properties, time-evolution, and also any local noise processes are reproduced by the universal model.
Note that this is a very different notion of “universality” from that of universality classes in condensed matter and statistical physics [Car96]. Universality classes capture the fact that, if we repeatedly “zoom out” or coarse-grain the microscopic degrees of freedom of a many-body system, models that are microscopically different become increasingly similar (converge to the same limit under this “renormalisation group flow”), and their macroscopic properties turn out to fall into one of a small handful of possible classes. The “universality” we are concerned with here [lCC16] has a completely different and unrelated meaning. It is closer to the notion of universality familiar from computing. A universal computer can carry out any possible computation, including simulating completely different types of computer. Universal models are able to produce any many-body physics phenomena, including reproducing the physics of completely different many-body models.
One might expect that universal models must be very complicated for their phase diagram to encompass all possible many-body physics. In fact, some of the models we show to be universal are amongst the simplest possible. In particular, we prove that allowing only the strengths of the local interactions to vary, the Heisenberg model on a 2D square lattice of spin-1/2 particles (qubits) with nearest-neighbour interactions and non-uniform coupling strengths is universal. This is an important and somewhat surprising example, as it is a 2D model with the simplest possible local degrees of freedom (qubits), short-range, two-body interactions, and the largest possible local symmetry (full SU(2) invariance). Yet our results prove that by varying only the coupling strengths, the 2D Heisenberg model can replicate in a complete and rigorous sense the entire physics of any other many-body model, including models with higher spatial dimensions, long-range interactions, other symmetries, higher-dimensional spins, and even bosons and fermions.
In addition to the new relationships this establishes between apparently very different quantum many-body models, with implications for our fundamental understanding of quantum many-body physics, there are also potential practical applications of our results in the field of analogue quantum simulation. There is substantial interest nowadays in using one quantum many-body system to simulate the physics of another, and one of the most important applications of quantum computers is anticipated to be the simulation of quantum systems [BMK10, GAN14, CZ12].
Two quite different notions of Hamiltonian simulation are studied in the literature. The first concerns simulating the time-dynamics of a Hamiltonian on a quantum computer using an algorithm originally proposed by Lloyd [Llo96], and refined and improved in the decades since [BACS07, BCC14, BCK15, LC16]. This is the quantum computing equivalent of running a numerical simulation on a classical computer. However, it requires a scalable, fault-tolerant, digital quantum computer. Except for small-scale proof-of-principle demonstrations, this is beyond the reach of current technology.
The second notion, called “physical” or “analogue” – in the sense of “analogous” – Hamiltonian simulation, involves directly engineering the Hamiltonian of interest and studying its properties experimentally. (Akin to building a model of an aerofoil and studying it in a wind tunnel.) This form of Hamiltonian simulation is already being performed in the laboratory using a variety of technologies, including optical lattices, ion traps, superconducting circuits and others [STH99, Nat12, GAN14]. Just as it is easier to study a scale model of an aerofoil in a wind tunnel than an entire aeroplane, the advantage of artificially engineering a Hamiltonian that models a material of interest, rather than studying that material directly, is that it is typically easier to measure and manipulate the artificially-engineered system. It is possible to measure the state of a single atom in an optical lattice [SWE10, BGP09, GZHC09]; it is substantially harder to measure e.g. the state of a single electron spin in a particular 2D layer within a cuprate superconductor.
Many important theoretical questions regarding analogue quantum simulation remain open, despite its practical significance and experimental success [STH99, Nat12, GAN14]. Which systems can simulate which others? How can we characterise the effect of errors on an analogue quantum simulator? (Highlighted in the 2012 review article [CZ12] as one of the key questions in this field.) On a basic level, what should the general definition of analogue quantum simulation itself be? The notion of universality we develop here enables us to answer all these questions.
1 Background and previous work
This computationally-inspired notion of physical universality has its origins in earlier work on “completeness” of the partition function of certain classical statistical mechanics models [VdNDB08, KZ12a, DlCDVdNB09, DlCDBMD09, KZ12b, XDlCD11]. Recent results by one of us and De las Cuevas built on those ideas to establish the more stringent notion of universality for classical spin systems [lCC16]. Related, more practically-focused notions have also been explored in recent work motivated by classical Hamiltonian engineering experiments [LHZ15]. Here we consider the richer and more complex setting of quantum Hamiltonians, which requires completely different techniques.
Pinning down precisely what it means for one many-body model to simulate the complete physics of another requires applying the mathematical theory of spectrum-preserving maps and Jordan algebras developed between the 1950s and 80s, which has to our knowledge not arisen before in quantum information theory. Using this, we are able to derive from basic operational considerations the precise conditions under which one quantum many-body Hamiltonian exhibits exactly the same physics as another. This is a necessary precursor to establishing our main result – showing for the first time that there exist universal quantum Hamiltonians – and allows us to prove these new physics results with full mathematical rigour (see part II for full technical details). Indeed, to our knowledge no general understanding – or even definition – of when one Hamiltonian simulates the complete physics of another previously existed in the literature. Thus this precursor to our main result may be of significance in itself.
For our explicit constructions that establish the existence of universal Hamiltonians, we are able to draw on a long literature in the field of Hamiltonian complexity [KKR06, OT08, AGIK09, BL08, CM16, BH17], studying the computational complexity of estimating ground state energies by mapping the problem to ever-simpler quantum many-body systems. The computational complexity of the ground state energy problem for the Heisenberg model with arbitrarily varying local fields was shown to be the maximum possible (QMA-complete [KSV02]) by Schuch and Verstraete [SV09]. The availability of local fields breaks the symmetry of the model, simplifying the analysis. The complexity of the pure Heisenberg model without local fields was not known until very recently, when it was shown by two of us [CM16] to also be QMA-complete.
These results per se only concern the ground state energy, and moreover only the computational complexity aspects of this single quantity; they do not need to address any of the physics of the resulting Hamiltonians beyond the ground state energy. Nonetheless, the “perturbative gadget” techniques developed to prove Hamiltonian complexity results [KKR06, OT08] turn out to be highly useful in constructing the full physical simulations required for our results. By combining our new and precise understanding of physical Hamiltonian simulation with these perturbative gadget techniques, we are able to design new “gadgets” that transform one many-body Hamiltonian into another whilst preserving its entire physics and local structure, as required to construct universal models.
In this way, we are able to show how certain quantum many-body models can be transformed step-by-step into any other many-body Hamiltonian, thereby establishing that these models are universal. On a high level, as discussed in [lCC16] for the classical case, this process can in some sense be viewed as the “opposite” of a renormalisation group flow: depending on the initial microscopic parameter settings, universal Hamiltonians can “flow” under this sequence of transformations to any other many-body Hamiltonian, which can have very different physical characteristics. Note that our result is constructive: it provides an efficiently computable algorithm that, given a description of any quantum many-body Hamiltonian, produces the parameter settings of the universal model that simulate this Hamiltonian.
In fact, we go beyond exhibiting individual examples, and completely classify the simulation power of all two-qubit interactions. From this, we see that essentially all many-body models in 2D (or higher) with two-qubit interactions and individually tunable coupling strengths are universal (see part II). The 2D Heisenberg model, and also the 2D XY model (which has local U(1) invariance), arise as important specific cases of this.
2 Hamiltonian simulation
We start by establishing precisely what it means for one quantum many-body system to simulate another. Any non-trivial simulation of one Hamiltonian $H$ with another Hamiltonian $H'$ will involve encoding the first within the second in some way. We want this encoding $\mathcal{E}$ to “replicate all the physics” of the original system. To reproduce all static, dynamic and thermodynamic properties of $H$, the encoding needs to fulfil a long list of operational requirements:
(i) Clearly $\mathcal{E}(H)$ should be a valid Hamiltonian, i.e. Hermitian: $\mathcal{E}(H) = \mathcal{E}(H)^\dagger$.
(ii) $\mathcal{E}$ should reproduce the complete energy spectrum of $H$: $\mathrm{spec}(\mathcal{E}(H)) = \mathrm{spec}(H)$. More generally, $\mathcal{E}$ should preserve the outcomes (eigenvalues) of any measurement $A$: $\mathrm{spec}(\mathcal{E}(A)) = \mathrm{spec}(A)$.
(iii) Individual interactions $h_j$ in the Hamiltonian $H = \sum_j h_j$ should be encoded separately: $\mathcal{E}(H) = \sum_j \mathcal{E}(h_j)$. Otherwise, one would have to solve the full many-body Hamiltonian in order to encode it, in which case there is little point simulating it in the first place.
(iv) There should exist a corresponding encoding of states, $\mathcal{E}_{\mathrm{state}}$, such that measurements on states are simulated correctly: for any observable $A$ and state $\rho$, $\mathrm{tr}\big(\mathcal{E}(A)\,\mathcal{E}_{\mathrm{state}}(\rho)\big) = \mathrm{tr}(A\rho)$.
(v) $\mathcal{E}$ should preserve the partition function (potentially up to a physically unimportant constant rescaling $c$): $\mathcal{Z}_{\mathcal{E}(H)}(\beta) = \mathrm{tr}\,e^{-\beta\mathcal{E}(H)} = c\,\mathcal{Z}_H(\beta)$.
(vi) Time-evolution according to $\mathcal{E}(H)$ should simulate time-evolution according to $H$: $e^{-i\mathcal{E}(H)t}\,\mathcal{E}_{\mathrm{state}}(\rho)\,e^{i\mathcal{E}(H)t} = \mathcal{E}_{\mathrm{state}}\big(e^{-iHt}\rho\,e^{iHt}\big)$.
(vii) Errors or noise on the $\mathcal{E}(H)$ system should correspond to errors or noise on the $H$ system.
We prove (see part II) that, remarkably, the very basic requirements (i), (ii) and (iii) already imply that all other operational requirements are satisfied too. Furthermore, any encoding map $\mathcal{E}$ that satisfies them must have a particularly simple mathematical form:
$\mathcal{E}(M) = U\big(M^{\oplus p} \oplus \bar{M}^{\oplus q}\big)U^\dagger$ for some unitary $U$ and non-negative integers $p, q$ such that $p + q \geq 1$. ($\bar{M}$ denotes the complex conjugate of $M$.)
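This characterisation is easy to check numerically for small matrices. The following sketch (plain NumPy; the choice $p = 2$, $q = 1$ and the random unitary are arbitrary, purely for illustration) builds an encoding of this form and confirms it is additive, Hermiticity-preserving and spectrum-preserving:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def direct_sum(blocks):
    """Block-diagonal direct sum of square matrices."""
    dim = sum(b.shape[0] for b in blocks)
    out = np.zeros((dim, dim), dtype=complex)
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i+k, i:i+k] = b
        i += k
    return out

n, p, q = 3, 2, 1
# a Haar-ish random unitary from the QR decomposition of a Gaussian matrix
U, _ = np.linalg.qr(rng.normal(size=((p+q)*n, (p+q)*n))
                    + 1j * rng.normal(size=((p+q)*n, (p+q)*n)))

def encode(M):
    """E(M) = U (M^{(+)p} (+) conj(M)^{(+)q}) U^dagger."""
    return U @ direct_sum([M] * p + [M.conj()] * q) @ U.conj().T

H, A, B = random_hermitian(n), random_hermitian(n), random_hermitian(n)

# additive, and maps Hermitian operators to Hermitian operators
assert np.allclose(encode(A + B), encode(A) + encode(B))
assert np.allclose(encode(H), encode(H).conj().T)
# spectrum-preserving: each eigenvalue of H appears with multiplicity p+q
assert np.allclose(np.linalg.eigvalsh(encode(H)),
                   np.repeat(np.linalg.eigvalsh(H), p + q))
```

The final check uses the fact that $\bar{M}$ has the same (real) eigenvalues as the Hermitian matrix $M$, so the encoded spectrum is just the original spectrum with multiplicity $p+q$.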
This characterisation of Hamiltonian encodings holds if the simulation is to exactly replicate all the physics of the original. But in practice no simulation will ever be exact. What if the simulator Hamiltonian $H'$ only replicates the physics of the original Hamiltonian $H$ up to some approximation? As long as this approximation can be made arbitrarily accurate, $H'$ will be able to replicate the entire physics of $H$ to any desired precision.
Furthermore, it is clearly sufficient if $H'$ replicates the physics of $H$ only for energies below some energy cut-off $\Delta$, if that cut-off can be made arbitrarily large (see Figure 1). Due to energy conservation, any initial state with energy less than the energy cut-off will not be affected by the high-energy sector. Indeed, as long as the cut-off is larger than the maximum energy eigenvalue of $H$, this means that $H'$ can simulate all possible states of $H$. This also holds for all thermodynamic properties; any error in the partition function due to the high-energy sector is exponentially suppressed as a function of the cut-off. In practice, one is often only interested in low-temperature properties of a quantum many-body Hamiltonian, as these are the properties relevant to quantum phases and quantum phase transitions. In that case, the energy cut-off does not even need to be large, merely sufficiently above the lowest excitation energy. We therefore want to simulate $H$ in the low-energy subspace of $H'$.
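The exponential suppression of the high-energy sector can be illustrated with a toy numerical example (a sketch, not one of our constructions: the “simulator” below is just a diagonal matrix whose low-energy block matches the target spectrum, with arbitrary junk levels placed above the cut-off $\Delta$):

```python
import numpy as np

rng = np.random.default_rng(1)

# target Hamiltonian with O(1) spectrum
n = 5
A = rng.normal(size=(n, n))
H = (A + A.T) / 2

# toy "simulator": same spectrum below the cut-off, junk levels above it
Delta = 20.0
junk = Delta + rng.uniform(1.0, 5.0, size=3)
Hp = np.diag(np.concatenate([np.linalg.eigvalsh(H), junk]))

beta = 1.0
Z  = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))    # partition function of H
Zp = np.sum(np.exp(-beta * np.linalg.eigvalsh(Hp)))   # ... of the toy simulator

# the error from the high-energy sector is exponentially small in Delta:
# each junk level contributes at most e^{-beta*Delta}
assert abs(Zp - Z) <= 3 * np.exp(-beta * Delta)
assert np.isclose(Zp, Z, rtol=1e-4)
```

Each of the three high-energy levels contributes at most $e^{-\beta\Delta}$ to the partition function, which is the exponential suppression referred to above.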
Finally, for a good simulation we would also like the encoding to be local, in the sense that each subsystem of the original Hamiltonian corresponds to a distinct subset of particles in the simulator Hamiltonian. This will enable us to map local observables on the original system to local observables on the simulator system, as well as to efficiently prepare states of the simulator system.
By making all the above mathematically precise, we show that this necessarily leads to the following rigorous notion of Hamiltonian simulation (also see Figure 1):
Definition (Analogue Hamiltonian simulation)
A many-body Hamiltonian $H'$ simulates a Hamiltonian $H$ to precision $(\eta, \epsilon)$ below an energy cut-off $\Delta$ if there exists a local encoding $\mathcal{E}(M) = V(M \otimes P + \bar{M} \otimes Q)V^\dagger$, where $V = \bigotimes_i V_i$ for some isometries $V_i$ acting on 0 or 1 qudits of the original system each, and $P$ and $Q$ are locally orthogonal projectors, such that:
(i) There exists an encoding $\widetilde{\mathcal{E}}(M) = \widetilde{V}(M \otimes P + \bar{M} \otimes Q)\widetilde{V}^\dagger$ such that $\widetilde{\mathcal{E}}(\mathbb{1}) = P_{\leq \Delta(H')}$ and $\|\widetilde{V} - V\| \leq \eta$;
(ii) $\|H'_{\leq \Delta} - \widetilde{\mathcal{E}}(H)\| \leq \epsilon$.
Here, $P_{\leq \Delta(H')}$ denotes the projector onto the subspace spanned by eigenvectors of $H'$ with eigenvalues below $\Delta$, and we write $H'_{\leq \Delta} = H' P_{\leq \Delta(H')}$.
The first requirement, (i), states that, to good approximation (within error $\eta$), the local encoding $\mathcal{E}$ approximates an encoding $\widetilde{\mathcal{E}}$ onto low-energy states of $H'$. The second requirement, (ii), says that the map $\widetilde{\mathcal{E}}$ gives a good simulation of $H$ to within error $\epsilon$.
This definition, which we show follows from physical requirements, turns out to be a refinement of a definition of simulation introduced in prior work [BH17] in the context of Hamiltonian complexity theory. There are two important differences. We allow the encoding map $\mathcal{E}$ to be anything that satisfies the physical requirements (i), (ii) and (iii) from the previous section, which can be more complicated than a single isometry. On the other hand, we restrict $\mathcal{E}$ to be local, since we require simulations to preserve locality. Note that if $\eta = \epsilon = 0$ and $\Delta \to \infty$, the simulation is exact. Increasing the accuracy of the simulation will typically require expending more “effort”, e.g. by increasing the energy of the interactions.
We are usually interested in simulating entire quantum many-body models, rather than individual Hamiltonians. By “model”, we mean very generally here any family of Hamiltonians. In the many-body models usually encountered in physics, these Hamiltonians are typically related to one another in some way. For example, the 2D Heisenberg model consists of all Hamiltonians with nearest-neighbour Heisenberg interactions on a 2D square lattice of some given size.
When we say that a model $\mathcal{A}$ can simulate another model $\mathcal{B}$, we mean it in the following strong sense: any Hamiltonian $H$ on $n$ qudits ($d$-dimensional spins) from model $\mathcal{B}$ can be simulated by some Hamiltonian $H'$ on $m$ qudits from model $\mathcal{A}$, and this simulation can be done to any precision $(\eta, \epsilon)$ with as large an energy cut-off $\Delta$ as desired. The simulation is efficient if each qudit of the original system is encoded into a constant number of qudits in the simulator (each $V_i$ in the Definition maps to $O(1)$ qudits), $H'$ is efficiently computable from $H$, and the energy overhead and number of qudits of the simulation scale at most polynomially ($\|H'\| = \mathrm{poly}(n, 1/\eta, 1/\epsilon, \Delta)$ and $m = \mathrm{poly}(n, 1/\eta, 1/\epsilon, \Delta)$).
3 Implications of simulation
We arrived at a rigorous notion of Hamiltonian simulation by requiring the simulation to approximate the entire physics to arbitrary accuracy. This is clearly very strong. Just as exact simulation preserves all physical properties perfectly, approximate simulation preserves all physical properties approximately. First, all energy levels are preserved up to any desired precision $\epsilon$. Second, by locality of the encoding $\mathcal{E}$, for any local observable $A$ on the original system there is a local observable $\mathcal{E}(A)$ on the simulator and a local map $\mathcal{E}_{\mathrm{state}}$ such that applying $\mathcal{E}(A)$ to $\mathcal{E}_{\mathrm{state}}(\rho)$ perfectly reproduces the effect of $A$ applied to $\rho$. This applies to all local observables, all order parameters (including topological order), and all correlation functions. Thus all these static properties of the original Hamiltonian are reproduced by the simulation.
Third, Gibbs states of the original system correspond to Gibbs states of the simulator, and the partition function $\mathcal{Z}_H(\beta)$ of $H$ is reproduced by $\mathcal{Z}_{H'}(\beta)$, up to a physically irrelevant constant rescaling and an error that can be exponentially suppressed by increasing the energy cut-off $\Delta$ and improving the precision $\epsilon$. More precisely, if the original and simulator Hamiltonians have local dimension $d$ and act on $n$ and $m$ qudits respectively, then
$\frac{\big|\mathcal{Z}_{H'}(\beta) - g\,\mathcal{Z}_{H}(\beta)\big|}{g\,\mathcal{Z}_{H}(\beta)} \leq \frac{d^{m} e^{-\beta\Delta}}{g\,\mathcal{Z}_{H}(\beta)} + \big(e^{\beta\epsilon} - 1\big),$
where the constant $g$ accounts for the rescaling due to the multiplicity of the encoding.
Since it is able to reproduce the partition function to any desired precision, all thermodynamic properties of the original Hamiltonian are reproduced by the simulation.
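For an exact encoding the constant rescaling is just the multiplicity of the encoding; a quick sketch with $p = q = 1$, i.e. $\mathcal{E}(H) = H \oplus \bar{H}$, confirms $\mathcal{Z}_{\mathcal{E}(H)}(\beta) = 2\,\mathcal{Z}_H(\beta)$ at every temperature:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2          # a random Hermitian "target" Hamiltonian

def Z(M, beta):
    """Partition function tr(e^{-beta M}) via the eigenvalues of M."""
    return np.sum(np.exp(-beta * np.linalg.eigvalsh(M)))

# exact encoding with p = q = 1: E(H) = H (+) conj(H)
EH = np.zeros((2 * n, 2 * n), dtype=complex)
EH[:n, :n] = H
EH[n:, n:] = H.conj()

# Z_{E(H)}(beta) = (p + q) Z_H(beta) for every inverse temperature beta
for beta in (0.1, 1.0, 5.0):
    assert np.isclose(Z(EH, beta), 2 * Z(H, beta))
```

This works because $\bar{H}$ has exactly the same eigenvalues as $H$, so every Boltzmann weight is simply counted twice.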
Finally, all dynamical properties are also reproduced to any desired precision. More precisely, the error in the simulated time-evolution grows only linearly in time $t$ (which is optimal without active error correction), and can be suppressed to any desired level by improving the approximation accuracy $\eta$ and $\epsilon$:
$\big\| e^{-iH't}\,\mathcal{E}_{\mathrm{state}}(\rho)\,e^{iH't} - \mathcal{E}_{\mathrm{state}}\big(e^{-iHt}\rho\,e^{iHt}\big) \big\|_1 \leq 2\epsilon t + 4\eta.$
We can also derive some important consequences for simulation errors and fault-tolerance. A recurring criticism of analogue Hamiltonian simulation is that, because it does not implement any error-correction, errors will accumulate over time and swamp the simulation. A common counter-argument is that any real physical system is itself always subject to noise and errors. If the properties of its Hamiltonian are sensitive to noise, the behaviour of the real physical system will also include the effects of the noise, and it is that which we wish to simulate. There is truth to both sides. In the absence of error-correction, errors will accumulate over time. It is also true that the same will happen in the original physical system, so this may not matter. But this is only the case if noise and errors in the simulation mimic the noise and errors experienced by the real physical system we are trying to simulate.
Fully justifying this would require modelling the noise and error processes in the physical system, and showing that the natural noise and error processes occurring in the particular simulator being used do indeed faithfully reproduce the same effects. Even then, the validity of this argument rests on the validity of the noise model. Ultimately, determining whether or not a simulation is accurate always comes down to testing its predictions in the laboratory. But with our precise definition of Hamiltonian simulation in hand, we can take a significant step towards justifying generally why lack of error correction may not be an issue. Most natural noise models are local: physical errors tend to act on neighbouring particles, not across the entire system. The definition of Hamiltonian simulation arrived at in the previous section immediately implies that local errors in the original system correspond to local errors in the simulator.
But we go further than this. We prove that, under a reasonable physical assumption, a local error affecting the simulator system approximates arbitrarily well the encoded version of some local error on the original system. More precisely, if we take the energy cut-off $\Delta$ to be large enough, errors on the simulator system are unlikely to take the simulated state out of the low-energy space of $H'$. Assume that this happens with probability at most $\delta$, for some small $\delta \geq 0$. Then for any noise operation $\mathcal{N}'$ acting on $k$ qudits of the simulator system, there is always some noise operation $\mathcal{N}$ on at most $k$ qudits of the original system (which we can easily write down) such that, for any state $\rho$, the effect of $\mathcal{N}'$ on the simulator approximates (again, to any desired precision) the effect of $\mathcal{N}$ on the system being simulated. Or, to state this mathematically:
$\big\| \mathcal{N}'\big(\mathcal{E}_{\mathrm{state}}(\rho)\big) - \mathcal{E}_{\mathrm{state}}\big(\mathcal{N}(\rho)\big) \big\|_1 \leq \epsilon',$ where $\mathcal{N}$ and $\mathcal{N}'$ are superoperators and the error $\epsilon'$ vanishes as $\delta$, $\eta$ and $\epsilon$ go to zero. The fact we can prove the result this way around is crucial: it shows that any local noise and errors that might occur in our simulator simply reproduce the effects of local noise and errors in the original physical system. This is much stronger than merely showing that errors on the original system can be simulated.
4 Universal Hamiltonians
The notion of Hamiltonian simulation we have arrived at is extremely demanding. It is not a priori clear whether any interesting simulations exist at all. In fact, not only do such simulations exist, we prove that there are even universal quantum simulators. A model is “universal” if it can simulate any Hamiltonian whatsoever, in the strong sense of simulation discussed above. Depending on the target Hamiltonian, this simulation may or may not be efficient. In general, the simulation will be efficient for target Hamiltonians with local interactions in the same (or lower) spatial dimension. Whilst universal models can also simulate Hamiltonians in higher spatial dimensions with only modest (polynomial) system-size overhead, this comes at an exponential cost in energy.
Remarkably, even certain simple 2D quantum spin-lattice models are universal. To show this, we in fact prove a still stronger result. We completely classify all two-qubit interactions (i.e. nontrivial interactions between two spin-1/2 particles) according to their simulation ability. This classification tells us which two-qubit interactions are universal. The universal class turns out to be identical to the class of QMA-complete two-qubit interactions from quantum complexity theory [CM16], where QMA is the quantum analogue of the complexity class NP [KSV02].
The classification also shows that there are two other classes of two-qubit interaction, with successively weaker simulation ability. Combining our Hamiltonian simulation results with previous work [BH17], we find that there is a class of two-qubit interactions that can simulate any stoquastic Hamiltonian, i.e. any Hamiltonian whose off-diagonal entries in the standard basis are non-positive. This is the class of Hamiltonians believed not to suffer from the sign-problem in numerical Monte-Carlo calculations. Another class is able, by previous work [lCC16], to simulate any classical Hamiltonian, i.e. any Hamiltonian that is diagonal in the standard basis.
The 2D Heisenberg- and XY-models (with non-uniform coupling strengths) are important examples which we show fall into the first category, hence are universal simulators. The 2D (quantum) Ising model with transverse fields falls into the second category, so can simulate any other stoquastic Hamiltonian [BH17]. The 2D classical Ising model with fields falls into the third category, so is an example of a universal classical Hamiltonian simulator [lCC16].
The universality proof involves chaining together a number of steps, some of which are shown in Figure 3. In fact, most of the technical difficulty lies in proving universality of the Heisenberg and XY interactions. Once these are shown to be universal, it is relatively straightforward to use recently developed techniques [CM16, PM17] to show that any other Hamiltonian from the universal category can simulate one of these two. Hence, by universality of the Heisenberg or XY interactions, such Hamiltonians can also simulate any other Hamiltonian. We now sketch the universality proof for these two interactions. (See part II for full technical details.)
The Heisenberg interaction $h = X \otimes X + Y \otimes Y + Z \otimes Z$ (where $X$, $Y$, $Z$ are the Pauli matrices) has full local rotational symmetry. Mathematically, this is equivalent to invariance under arbitrary simultaneous local unitary rotations: $(U \otimes U)\,h\,(U \otimes U)^\dagger = h$ for every single-qubit unitary $U$. The XY interaction $h = X \otimes X + Y \otimes Y$ is invariant only under rotations about the z-axis, i.e. under $U \otimes U$ with $U = e^{i\theta Z}$ for any angle $\theta$. Any Hamiltonian composed of just one of these types of interaction inherits the corresponding symmetry. Thus all its eigenspaces also necessarily have this symmetry. Yet if it is to be universal, it must simulate Hamiltonians that do not have this symmetry.
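These invariances are easy to confirm numerically. A small sketch (the random single-qubit unitary is generated by exponentiating a random Hermitian matrix):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

heis = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
xy = np.kron(X, X) + np.kron(Y, Y)

# random single-qubit unitary U = exp(iA), with A Hermitian
rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (A + A.conj().T) / 2
w, V = np.linalg.eigh(A)
U = V @ np.diag(np.exp(1j * w)) @ V.conj().T

UU = np.kron(U, U)
assert np.allclose(UU @ heis @ UU.conj().T, heis)   # full local symmetry
assert not np.allclose(UU @ xy @ UU.conj().T, xy)   # XY breaks under generic U

Uz = np.diag([np.exp(1j * 0.7), np.exp(-1j * 0.7)]) # U = e^{i theta Z}
UzUz = np.kron(Uz, Uz)
assert np.allclose(UzUz @ xy @ UzUz.conj().T, xy)   # z-axis rotation symmetry
```

The first assertion reflects the fact that the Heisenberg interaction is (up to a shift) the SWAP operator, which commutes with $U \otimes U$ for every unitary $U$.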
Before addressing symmetry, however, there is a more elementary obstacle to overcome. All matrix elements of the Heisenberg and XY interactions are real numbers (in the standard basis). Thus any Hamiltonian built out of these interactions is also real. Yet if it is to be universal, it must simulate Hamiltonians with complex matrix elements.
A simple encoding overcomes this restriction, by adding an additional ancilla qubit and encoding the real and imaginary parts of $H$ separately, controlled on the state of the ancilla qubit. The Hamiltonian $H' = \mathrm{Re}(H) \otimes \mathbb{1} + \mathrm{Im}(H) \otimes iY$ is clearly real and is easily seen to be an encoding of $H$, since $H'(|\psi\rangle|a\rangle) = (H|\psi\rangle)|a\rangle$, where $|a\rangle = (|0\rangle + i|1\rangle)/\sqrt{2}$ is the $+i$ eigenvector of $iY$. To make this encoding local, it can be adjusted to a simulation where there is an ancilla qubit for each qubit of the system, but these ancillas are forced by additional strong local interactions to be in the two-dimensional subspace spanned by $|a\rangle^{\otimes n}$ and its complex conjugate $|\bar{a}\rangle^{\otimes n}$.
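A minimal NumPy sketch of a real encoding of this type (one sign convention, before the localisation step), checking both the eigenvector identity and the doubled spectrum:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2               # complex Hermitian target Hamiltonian

iY = np.array([[0., 1.], [-1., 0.]])   # iY, written as a real matrix
Hp = np.kron(H.real, np.eye(2)) + np.kron(H.imag, iY)

# Hp is real and Hermitian
assert np.allclose(Hp, Hp.conj().T) and np.allclose(Hp.imag, 0)

# |a> = (|0> + i|1>)/sqrt(2) is the +i eigenvector of iY, so
# Hp (|psi> (x) |a>) = (H |psi>) (x) |a>
a = np.array([1, 1j]) / np.sqrt(2)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
assert np.allclose(Hp @ np.kron(psi, a), np.kron(H @ psi, a))

# spectrum of Hp = spectrum of H (+) conj(H): each eigenvalue doubled
assert np.allclose(np.linalg.eigvalsh(Hp),
                   np.repeat(np.linalg.eigvalsh(H), 2))
```

On the conjugate ancilla state $|\bar{a}\rangle$, the same Hamiltonian acts as $\bar{H}$, which is where the second copy of the spectrum comes from.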
To overcome the symmetry restriction, we develop more complicated simulations based around the use of “perturbative gadgets” (a technique originally introduced to prove QMA-completeness results in Hamiltonian complexity theory [KKR06, OT08]). In a perturbative gadget, a heavily weighted term $\Delta H_0$ (for some large constant $\Delta$) dominates the overall Hamiltonian $H' = \Delta H_0 + H_1$, such that the low-energy part of $H'$ is approximately just the ground space of $H_0$. Within this low-energy subspace, an effective Hamiltonian is generated by the perturbation $H_1$ and can be calculated using a rigorous version of perturbation theory [BH17]. The first-order term in the perturbative expansion is given by $H_1$ projected into the ground space of $H_0$, as one might expect. But if this term vanishes, then the more complicated form of higher-order terms may be exploited to generate more interesting effective interactions.
For most of our simulations, $\Delta H_0$ is used to project a system of ancilla qubits into a fixed state, such that the effective Hamiltonian that this generates couples the remaining qubits. This type of gadget is known in the Hamiltonian complexity literature as a mediator qubit gadget [OT08], because the ancilla qubits are seen to “mediate” an effective interaction between the other qubits in the system.
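To illustrate the mechanism (this is a generic toy example, not one of the actual Heisenberg/XY gadgets, which are more involved), here is a second-order mediator gadget in which a single ancilla qubit, coupled off-diagonally to two system qubits, generates an effective two-qubit interaction of strength $\lambda^2/\Delta$:

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

Delta, lam = 100.0, 1.0
# qubit order: system 1, system 2, mediator ancilla w
H0 = kron(I2, I2, (I2 - Z) / 2)                # |1><1| on the mediator
H1 = lam * (kron(X, I2, X) + kron(I2, Z, X))   # (X_1 + Z_2) coupled to X_w
Hsim = Delta * H0 + H1

# The first-order term vanishes (X_w is off-diagonal on the ground space
# of H0), so second-order perturbation theory gives, on the w = |0> space,
#   H_eff = -(lam^2 / Delta) (X_1 + Z_2)^2 ,
# which contains an effective X_1 Z_2 coupling.
B = np.kron(X, I2) + np.kron(I2, Z)
Heff = -(lam**2 / Delta) * (B @ B)

low = np.sort(np.linalg.eigvalsh(Hsim))[:4]
assert np.allclose(low, np.sort(np.linalg.eigvalsh(Heff)), atol=1e-4)
```

Since $(X_1 + Z_2)^2 = 2\mathbb{1} + 2 X_1 Z_2$, the gadget mediates an interaction between the two system qubits; the residual error is higher order in $\lambda/\Delta$.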
But in order to break the symmetry of the Heisenberg and XY interactions, it is necessary for the encoded Hamiltonian to act not on the physical qubits of the system, but on qubits encoded into a subspace of multiple physical qubits. To achieve this, we design a four-qubit gadget where the strong term $\Delta H_0$, consisting of equally weighted interactions across all pairs of the four qubits, has a two-fold degenerate ground space. This two-dimensional space can be used to encode a qubit. This gadget is used repeatedly to encode each qubit of the system separately, as illustrated in Figure 2. We then add less heavily weighted interactions acting between qubits in different gadgets, in order to generate effective interactions between the encoded qubits. These interactions are calculated using a precise version of second-order perturbation theory, which accounts rigorously for the approximation errors resulting from neglecting the higher-order terms [BH17]. Combined with a new mediator gadget, together with previously known gadgets [OT08] which allow many-body interactions to be simulated using two-body interactions, this suffices to show that the Heisenberg and XY interactions can simulate all real local Hamiltonians, and hence all local Hamiltonians using the complex-to-real encoding described above.
In order to show that Hamiltonians with arbitrary long-range interactions can be simulated with a 2D lattice model, there is a final step: embedding an arbitrary interaction pattern within a square lattice. This can be achieved by effectively drawing the long-range interactions as lines on the lattice, and using further perturbative gadgets to remove crossings between lines [OT08]. This step requires multiple rounds of perturbation theory, which can result in the final Hamiltonian containing local interaction strengths that scale exponentially in the number of particles. Thus the final simulation, whilst efficient in terms of the number of particles and interactions, is not necessarily efficient in terms of energy cost for arbitrary Hamiltonians. For example, we do not know how to construct an energy-efficient simulation of a 3D lattice Hamiltonian using a 2D lattice model, nor do we necessarily expect it to be possible. However, full efficiency is recovered when the original Hamiltonian is spatially sparse [OT08] (a class which encompasses all 2D lattice Hamiltonians). Finally, if we want to simulate indistinguishable particles, one can verify that standard techniques for mapping fermions or bosons to spin systems give the required simulations. (See part II for details.)
We close by highlighting some of the limitations of our results, and possible future directions. One should note that whilst our strong notion of simulation preserves locality in the sense that a few-particle observable in the original system will correspond to a few-particle observable in the simulator, simulating e.g. a 3D system in a 2D system necessarily means that the corresponding observables in the simulation will not always be on nearby particles. Also, to simulate higher-dimensional systems in 2D, our constructions require very large coupling strengths.
From the analogue Hamiltonian engineering perspective, our results show that surprisingly simple types of interactions suffice for building a universal Hamiltonian simulator. Together with the ability to prepare simple initial states, these would even suffice to construct a universal quantum computer, or to perform universal adiabatic quantum computation. (However, error correction and fault-tolerance, which are essential for scalable quantum computation, would require additional active control.) The converse point of view is that, as these apparently restrictive models turn out to be universal, simulating them on a quantum computer may be more difficult than previously thought. Furthermore, our mathematical constructions require extremely precise control over the strengths of individual local interactions across many orders of magnitude. Though some degree of control is possible in state-of-the-art experiments [Nat12, GAN14], the requirements of our current universal models are beyond what is currently feasible. On the other hand, it is already possible to engineer more complex interactions than those we have shown to be universal. Now that we have shown universal models exist, and need not be extremely complex, it may be possible to construct other universal models tailored to particular experimental setups.
From the fundamental physics perspective, an important limitation of our current results is that the models we show to be universal are not translationally invariant. (The same is also true of the earlier classical results [lCC16].) Although we show there are universal models in which all interactions have an identical form, our proofs rely heavily on the fact that the strengths of these interactions can differ from site to site. Classic results showing that local symmetries together with translational-invariance can restrict the possible physics [MW66, Hoh67] suggest breaking translational-invariance may be crucial for universality. On the other hand, much of the intuition behind our proofs comes from Hamiltonian complexity, where recent results have shown that translational-invariance is no obstacle [GI09, BCO16].
In light of our results, determining the precise boundary between simplicity and universality in quantum many-body physics is now an important open question for future research.
Part II Technical content
6 Notation and terminology
As usual, $\mathcal{L}(\mathcal{H})$ denotes the set of linear operators acting on a Hilbert space $\mathcal{H}$. For conciseness, we sometimes also use the notation $\mathcal{M}_n$ for the set of all $n \times n$ matrices with complex entries. $\mathrm{Herm}_n$ denotes the subset of all Hermitian $n \times n$ matrices. $\mathbb{1}$ denotes the identity matrix. For integer $n$, $[n]$ denotes the set $\{1, \dots, n\}$.
If $R$ and $R'$ are rings, a ring homomorphism is a map $\phi : R \rightarrow R'$ that is both additive and multiplicative: $\phi(a + b) = \phi(a) + \phi(b)$ and $\phi(ab) = \phi(a)\phi(b)$. Similarly, a ring anti-homomorphism is an additive map that is anti-multiplicative: $\phi(ab) = \phi(b)\phi(a)$. If $\phi(\mathbb{1}) = \mathbb{1}$, we say the map is unital.
For a ring $R$, the corresponding Jordan ring $R^J$ is the ring obtained from $R$ by replacing multiplication with Jordan multiplication $a \circ b = ab + ba$. A Jordan homomorphism on $R$ is an additive map $\phi$ such that $\phi(a^2) = \phi(a)^2$. If $R$ is not of characteristic 2, this is equivalent to the constraint that $\phi(ab + ba) = \phi(a)\phi(b) + \phi(b)\phi(a)$. Note that any ring homomorphism is a Jordan homomorphism, but the converse is not necessarily true.
$\mathrm{spec}(M)$ denotes the spectrum of $M$, i.e. the set of values $\lambda$ such that $M - \lambda\mathbb{1}$ is not invertible. (This of course coincides with the set of eigenvalues, ignoring multiplicities.) We say that a map $\mathcal{E} : \mathcal{M}_n \rightarrow \mathcal{M}_{n'}$ is invertibility-preserving if $\mathcal{E}(M)$ is invertible in $\mathcal{M}_{n'}$ for all invertible $M$. We say that $\mathcal{E}$ is spectrum-preserving if $\mathrm{spec}(\mathcal{E}(M)) = \mathrm{spec}(M)$ for all $M$.
For an arbitrary Hamiltonian $H$, we let $P_{\le \Delta}(H)$ denote the orthogonal projector onto the subspace $S_{\le \Delta}(H)$ spanned by the eigenvectors of $H$ with eigenvalue at most $\Delta$. We also let $H'|_{\le \Delta(H)}$ denote the restriction of some other arbitrary Hamiltonian $H'$ to the subspace $S_{\le \Delta}(H)$, and write analogously $P_{> \Delta}(H)$ and $H'|_{> \Delta(H)}$.
We say that a Hamiltonian $H$ acting on $n$ subsystems is $k$-local if it can be written as a sum of terms such that each term acts non-trivially on at most $k$ subsystems. That is, $H = \sum_i h_i \otimes \mathbb{1}$, where each $h_i$ acts on at most $k$ subsystems and the identity in each term in the sum acts on the subsystems where that $h_i$ does not. An operator on a composite Hilbert space “acts trivially” on the subsystems where it acts as identity, and “acts non-trivially” on the remaining subsystems. We will often employ a standard abuse of notation, and implicitly extend operators on subsystems to the full Hilbert space without explicitly writing the tensor product with identity, allowing us e.g. to write simply $H = \sum_i h_i$. We say that $H$ is local if it is $k$-local for some $k$ that does not depend on $n$.111Technically, this makes sense only for families of Hamiltonians $\{H_n\}$, where we consider $n$ to be growing.
We let $X$, $Y$, $Z$ denote the Pauli matrices and often follow the condensed-matter convention of writing $XX$ for $X \otimes X$ etc. For example, $XX + YY + ZZ$ is short for $X \otimes X + Y \otimes Y + Z \otimes Z$ and is known as the Heisenberg (exchange) interaction. The XY interaction is $XX + YY$.
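As a quick numerical illustration (not part of the formal development), one can build these interactions explicitly and inspect their spectra; the sketch below uses NumPy.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Heisenberg (exchange) interaction XX + YY + ZZ on two qubits
heis = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
# XY interaction XX + YY
xy = np.kron(X, X) + np.kron(Y, Y)

# Spectra: the singlet/triplet splitting of the Heisenberg interaction,
# and the eigenvalues of the XY interaction
print(np.linalg.eigvalsh(heis).round(6))  # [-3.  1.  1.  1.]
print(np.linalg.eigvalsh(xy).round(6))    # [-2.  0.  0.  2.]
```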
Let $M$ be a $k$-qudit Hermitian matrix. We say that a single-qudit unitary $U$ locally diagonalises $M$ if $U^{\otimes k} M (U^\dagger)^{\otimes k}$ is diagonal. We say that a set $\mathcal{S}$ of Hermitian matrices is simultaneously locally diagonalisable if there exists a $U$ such that $U$ locally diagonalises $M$ for all $M \in \mathcal{S}$. Note that matrices in $\mathcal{S}$ may act on different numbers of qudits, so can be of different sizes.
We will often be interested in families of Hamiltonians. For a subset $\mathcal{S}$ of interactions (Hermitian matrices on a fixed number of qudits), we define the family of $\mathcal{S}$-Hamiltonians to be the set of Hamiltonians which can be written as a sum of interaction terms, where each term is either picked from $\mathcal{S}$, with an arbitrary positive or negative real weight, or is an arbitrarily weighted identity term. For example, $H$ is a $\{XX + YY + ZZ\}$-Hamiltonian if it can be written in the form $H = \sum_{i < j} \alpha_{ij} (X_i X_j + Y_i Y_j + Z_i Z_j) + \beta\,\mathbb{1}$ for some $\alpha_{ij}, \beta \in \mathbb{R}$. A model is a (possibly infinite) family of Hamiltonians. Typically the Hamiltonians in a model will be related in some way, e.g. all Hamiltonians with nearest-neighbour Heisenberg interactions on an arbitrarily large 2D lattice (the “2D Heisenberg model”).
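To make the definition concrete, here is a minimal sketch (the chain length, coupling values, and the helper `embed` are our own illustrative choices) of a $\{XX+YY+ZZ\}$-Hamiltonian on a three-qubit chain, with site-dependent positive and negative weights plus an identity term.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

heis = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)  # the interaction in S

def embed(h, i, n):
    """Place a two-qubit interaction h on neighbouring qubits (i, i+1) of n qubits."""
    return np.kron(np.kron(np.eye(2**i), h), np.eye(2**(n - i - 2)))

n = 3
alpha = {0: 1.7, 1: -0.4}   # arbitrary real weights, one per edge of the chain
beta = 0.25                 # arbitrarily weighted identity term

H = sum(a * embed(heis, i, n) for i, a in alpha.items()) + beta * np.eye(2**n)

# An S-Hamiltonian is in particular Hermitian and 2-local by construction
assert np.allclose(H, H.conj().T)
```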
7 Hamiltonian encodings
Any non-trivial simulation of one Hamiltonian with another will involve encoding the first within the second in some way. Write $H' = \mathcal{E}(H)$ for some “encoding” map $\mathcal{E}$ that encodes a Hamiltonian $H$ into some Hamiltonian $H'$. Any such encoding should fulfil at least the following basic requirements. First, any observable on the original system should correspond to an observable on the simulator system. Second, the set of possible values of any encoded observable should be the same as for the corresponding original observable. In particular, the energy spectrum of the Hamiltonian should be preserved. Third, the encoding of a probabilistic mixture of observables should be the same as a probabilistic mixture of the encodings of the observables.
To see why this last requirement holds, imagine that we are asked to encode observable $A$ with probability $p$, and observable $B$ with probability $1 - p$. Then, for any state $\rho'$ on the simulator system, the expected value of the encoded observable acting on $\rho'$ should be the same as the corresponding probabilistic mixture of the expected values of the encoded observables $\mathcal{E}(A)$ and $\mathcal{E}(B)$ acting on $\rho'$. In order for this to hold for all states $\rho'$, we need the mixture of observables to be encoded as the corresponding probabilistic mixture of encodings of $A$ and $B$.
These operational requirements correspond to the following mathematical requirements on the encoding map $\mathcal{E}$:
$\mathcal{E}(A)^\dagger = \mathcal{E}(A)$ for all $A \in \mathrm{Herm}_n$.
$\mathrm{spec}(\mathcal{E}(A)) = \mathrm{spec}(A)$ for all $A \in \mathrm{Herm}_n$.
$\mathcal{E}(pA + (1-p)B) = p\,\mathcal{E}(A) + (1-p)\,\mathcal{E}(B)$ for all $A, B \in \mathrm{Herm}_n$ and all $p \in [0,1]$.
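These three requirements can be checked numerically for the simplest non-trivial example of an encoding, $\mathcal{E}(A) = A \oplus \bar{A}$ (in the language introduced below, $p = q = 1$ with trivial unitary); all names in this sketch are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def enc(A):
    """A simple encoding: one direct copy and one complex-conjugate copy,
    E(A) = A (+) conj(A), as a block-diagonal matrix."""
    n = A.shape[0]
    out = np.zeros((2 * n, 2 * n), dtype=complex)
    out[:n, :n] = A
    out[n:, n:] = A.conj()
    return out

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = rand_herm(4), rand_herm(4)
p = 0.3
ev = np.linalg.eigvalsh

# (1) observables map to observables (Hermiticity preserved)
assert np.allclose(enc(A), enc(A).conj().T)
# (2) spectrum preserved, up to each eigenvalue being duplicated here
assert np.allclose(ev(enc(A)), np.repeat(ev(A), 2))
# (3) convexity: mixtures of observables encode to mixtures of encodings
assert np.allclose(enc(p * A + (1 - p) * B), p * enc(A) + (1 - p) * enc(B))
print("all three encoding requirements satisfied")
```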
Of course, there are many other desiderata that we would like $\mathcal{E}$ to satisfy, such as preserving the partition function, measurement outcomes, time-evolution, local errors, and others. For the Hamiltonian itself, we almost certainly want $\mathcal{E}$ to not only be convex, but also real-linear: $\mathcal{E}(\alpha A + \beta B) = \alpha\,\mathcal{E}(A) + \beta\,\mathcal{E}(B)$ for all $\alpha, \beta \in \mathbb{R}$, so that a Hamiltonian expressed as a sum of terms can be encoded by encoding the terms separately. However, we will see later that meeting just the above three basic requirements necessarily implies also meeting all these other operational requirements (which we will make precise).
It turns out there is a simple and elegant characterisation of what such encodings have to look like. To prove this, we will need the following theorem concerning Jordan ring homomorphisms.
For any $n \geq 2$, any Jordan homomorphism of the Jordan ring $\mathrm{Herm}_n$ can be extended in one and only one way to a homomorphism of the matrix ring $\mathcal{M}_n$.
Any unital, invertibility-preserving, real-linear map $\mathcal{E} : \mathrm{Herm}_n \rightarrow \mathrm{Herm}_{n'}$ is a Jordan homomorphism.
The argument is standard (see e.g. [HŠ03]).
For any $A \in \mathrm{Herm}_n$ and $\lambda \notin \mathrm{spec}(A)$, $\mathcal{E}(A) - \lambda\mathbb{1} = \mathcal{E}(A - \lambda\mathbb{1})$, thus $\mathrm{spec}(\mathcal{E}(A)) \subseteq \mathrm{spec}(A)$ since $\mathcal{E}$ is invertibility-preserving. In particular, $\mathrm{spec}(\mathcal{E}(P)) \subseteq \{0, 1\}$ for every projector $P$. Since $\mathcal{E}(P)$ is also Hermitian, this implies $\mathcal{E}(P)$ is a projector.
By the spectral decomposition, any $A \in \mathrm{Herm}_n$ can be decomposed as $A = \sum_i \lambda_i P_i$ where the $P_i$ are mutually orthogonal projectors and $\lambda_i \in \mathbb{R}$. For $i \neq j$, $P_i + P_j$ is a projector, thus $\mathcal{E}(P_i) + \mathcal{E}(P_j)$ is a projector and $(\mathcal{E}(P_i) + \mathcal{E}(P_j))^2 = \mathcal{E}(P_i) + \mathcal{E}(P_j)$, so that $\mathcal{E}(P_i)\mathcal{E}(P_j) + \mathcal{E}(P_j)\mathcal{E}(P_i) = 0$. Therefore, $\mathcal{E}(A)^2 = \sum_{i,j} \lambda_i \lambda_j\,\mathcal{E}(P_i)\mathcal{E}(P_j) = \sum_i \lambda_i^2\,\mathcal{E}(P_i) = \mathcal{E}(A^2)$.
For any map $\mathcal{E} : \mathrm{Herm}_n \rightarrow \mathrm{Herm}_{n'}$, the following are equivalent:
(i) For all $A, B \in \mathrm{Herm}_n$, and all $p \in [0,1]$:
(a) $\mathcal{E}(A)^\dagger = \mathcal{E}(A)$;
(b) $\mathrm{spec}(\mathcal{E}(A)) = \mathrm{spec}(A)$;
(c) $\mathcal{E}(pA + (1-p)B) = p\,\mathcal{E}(A) + (1-p)\,\mathcal{E}(B)$.
(ii) There exists a unique extension $\tilde{\mathcal{E}} : \mathcal{M}_n \rightarrow \mathcal{M}_{n'}$ such that $\tilde{\mathcal{E}}(A) = \mathcal{E}(A)$ for all $A \in \mathrm{Herm}_n$ and, for all $M, N \in \mathcal{M}_n$ and $\alpha, \beta \in \mathbb{R}$:
(a) $\tilde{\mathcal{E}}(A)^\dagger = \tilde{\mathcal{E}}(A)$ and $\mathrm{spec}(\tilde{\mathcal{E}}(A)) = \mathrm{spec}(A)$ for all $A \in \mathrm{Herm}_n$;
(b) $\tilde{\mathcal{E}}(M^\dagger) = \tilde{\mathcal{E}}(M)^\dagger$;
(c) $\tilde{\mathcal{E}}(\alpha M + \beta N) = \alpha\,\tilde{\mathcal{E}}(M) + \beta\,\tilde{\mathcal{E}}(N)$;
(d) $\tilde{\mathcal{E}}(MN) = \tilde{\mathcal{E}}(M)\,\tilde{\mathcal{E}}(N)$;
(e) $\tilde{\mathcal{E}}(M)$ is invertible for all invertible $M \in \mathcal{M}_n$.
(iii) There exists a unique extension $\tilde{\mathcal{E}} : \mathcal{M}_n \rightarrow \mathcal{M}_{n'}$ such that $\tilde{\mathcal{E}}(A) = \mathcal{E}(A)$ for all $A \in \mathrm{Herm}_n$, with $\tilde{\mathcal{E}}$ of the form
$$\tilde{\mathcal{E}}(M) = U\big(M^{\oplus p} \oplus \bar{M}^{\oplus q}\big)U^\dagger$$
for some non-negative integers $p$, $q$, and unitary $U \in \mathcal{M}_{n'}$, where $(p + q)n = n'$ and $\bar{M}$ denotes complex conjugation.
Note that (iii) is basis-independent, despite the occurrence of complex conjugation; taking the complex conjugate with respect to a different basis is equivalent to modifying $U$, which just gives another encoding. Given that $\tilde{\mathcal{E}}$ is unique, for the remainder of the paper we simply identify $\mathcal{E}$ with $\tilde{\mathcal{E}}$. In particular, this allows us to assume that $\mathcal{E}$ is of the form specified in Item (iii). The characterisation (5) can equivalently be written as
$$\mathcal{E}(M) = U\big(M \otimes P + \bar{M} \otimes Q\big)U^\dagger$$
for some orthogonal projectors $P$ and $Q$ such that $P + Q = \mathbb{1}$; this alternative form will sometimes be useful below. We think of the system on which $P$ and $Q$ act as an ancilla, and often label this “extra” subsystem by the letter $E$.
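A hedged numerical sketch of this alternative form (taking the unitary to be trivial and an ancilla of dimension $p + q = 3$, our own illustrative choices) confirms the algebraic properties of the extension: real-linearity, multiplicativity, adjoint preservation, and spectrum preservation up to multiplicities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ancilla projectors P, Q with P + Q = identity (here p = 2, q = 1, U = identity)
P = np.diag([1., 1., 0.])
Q = np.diag([0., 0., 1.])

def enc(M):
    # E(M) = M (x) P + conj(M) (x) Q, in system (x) ancilla ordering
    return np.kron(M, P) + np.kron(M.conj(), Q)

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
N = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Real-linearity, multiplicativity and adjoint preservation of the extension
assert np.allclose(enc(2.0 * M - 0.5 * N), 2.0 * enc(M) - 0.5 * enc(N))
assert np.allclose(enc(M @ N), enc(M) @ enc(N))
assert np.allclose(enc(M.conj().T), enc(M).conj().T)

# For Hermitian A, the spectrum is that of A, each eigenvalue repeated p+q times
A = M + M.conj().T
assert np.allclose(np.linalg.eigvalsh(enc(A)), np.repeat(np.linalg.eigvalsh(A), 3))
print("ok")
```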
We first show that $\mathcal{E}$ is a Jordan homomorphism. Item (i)a states that $\mathcal{E}$ preserves Hermiticity, and Item (i)b implies that $\mathcal{E}$ is unital and invertibility-preserving on $\mathrm{Herm}_n$, with $\mathcal{E}(\mathbb{1}) = \mathbb{1}$. We next check that, given these, Item (i)c is equivalent to real-linearity of $\mathcal{E}$. For any $A \in \mathrm{Herm}_n$, setting $B = 0$ and using Item (i)c gives
$$\mathcal{E}(pA) = p\,\mathcal{E}(A) + (1-p)\,\mathcal{E}(0) = p\,\mathcal{E}(A),$$
since $\mathcal{E}(0)$ is Hermitian with $\mathrm{spec}(\mathcal{E}(0)) = \{0\}$, whence $\mathcal{E}(0) = 0$. Applying this to $A/p$ gives $\mathcal{E}(A) = p\,\mathcal{E}(A/p)$, showing that $\mathcal{E}$ is homogeneous for all real scalars (homogeneity for negative scalars follows below from additivity, since $\mathcal{E}(A) + \mathcal{E}(-A) = \mathcal{E}(0) = 0$). Additivity follows by combining Item (i)c and homogeneity: $\mathcal{E}(A + B) = 2\,\mathcal{E}\big(\tfrac{1}{2}A + \tfrac{1}{2}B\big) = \mathcal{E}(A) + \mathcal{E}(B)$. Therefore $\mathcal{E}$ is also real-linear, so by the Lemma above it is a Jordan homomorphism.
By the Theorem above, there exists a unique homomorphism $\tilde{\mathcal{E}} : \mathcal{M}_n \rightarrow \mathcal{M}_{n'}$ such that $\tilde{\mathcal{E}}(A) = \mathcal{E}(A)$ for all $A \in \mathrm{Herm}_n$. As $\tilde{\mathcal{E}}$ agrees with $\mathcal{E}$ on $\mathrm{Herm}_n$, it satisfies (ii)a. As $\tilde{\mathcal{E}}$ is a homomorphism, it satisfies (ii)d and (ii)c by definition; this also implies that $\tilde{\mathcal{E}}(M)\,\tilde{\mathcal{E}}(M^{-1}) = \tilde{\mathcal{E}}(\mathbb{1}) = \mathbb{1}$ for any invertible $M$, so (ii)e holds.
We finally prove (ii)b. It is sufficient to show that any $M \in \mathcal{M}_n$ can be written as a real-linear combination of products of pairs of Hermitian matrices. That this can be done is an immediate consequence of the fact that $\mathcal{M}_n$ is the enveloping associative ring of $\mathrm{Herm}_n$. However, it can also be seen explicitly by expanding $M$ in the basis of matrix units $|j\rangle\langle k|$ and noting that both $|j\rangle\langle k|$ and $i|j\rangle\langle k|$ are products of two Hermitian matrices, for example $i|j\rangle\langle k| = \big(i|j\rangle\langle k| - i|k\rangle\langle j|\big)\,|k\rangle\langle k|$. Thus we can write $M = \sum_l \alpha_l A_l B_l$ for Hermitian matrices $A_l$, $B_l$ and $\alpha_l \in \mathbb{R}$. By taking adjoints on both sides, it follows that $M^\dagger = \sum_l \alpha_l B_l A_l$. So we have
$$\tilde{\mathcal{E}}(M^\dagger) = \sum_l \alpha_l\,\tilde{\mathcal{E}}(B_l)\,\tilde{\mathcal{E}}(A_l) = \Big(\sum_l \alpha_l\,\tilde{\mathcal{E}}(A_l)\,\tilde{\mathcal{E}}(B_l)\Big)^\dagger = \tilde{\mathcal{E}}(M)^\dagger,$$
where we used the fact that $\tilde{\mathcal{E}}(A_l)$ and $\tilde{\mathcal{E}}(B_l)$ are Hermitian.
Existence and uniqueness of $\tilde{\mathcal{E}}$ were already shown in the previous part. In the proof of the remaining claim, for readability we just use $\mathcal{E}$ to denote this unique extension. First define the complex structure $J := \mathcal{E}(i\mathbb{1}) = \mathcal{E}(i)$ (where the latter notation is a convenient shorthand). We have
$$J^2 = \mathcal{E}(i\mathbb{1})\,\mathcal{E}(i\mathbb{1}) = \mathcal{E}(-\mathbb{1}) = -\mathbb{1},$$
thus $J$ has eigenvalues $\pm i$. Furthermore,
$$J^\dagger = \mathcal{E}\big((i\mathbb{1})^\dagger\big) = \mathcal{E}(-i\mathbb{1}) = -J,$$
so $J$ is anti-Hermitian, hence diagonalisable by a unitary transformation.
For any $A \in \mathrm{Herm}_n$, we have
$$J\,\mathcal{E}(A) = \mathcal{E}(i\mathbb{1})\,\mathcal{E}(A) = \mathcal{E}(iA) = \mathcal{E}(A)\,\mathcal{E}(i\mathbb{1}) = \mathcal{E}(A)\,J,$$
so that $[J, \mathcal{E}(A)] = 0$. Thus $J$ and $\mathcal{E}(A)$ are simultaneously diagonalisable for all $A$. The simulator Hilbert space therefore decomposes into a direct sum of the $+i$ and $-i$ eigenspaces of $J$, on which $\mathcal{E}$ acts invariantly.
Now, restricting to either of these invariant subspaces, we have for any $A, B \in \mathrm{Herm}_n$
$$\mathcal{E}(A + iB)\big|_{\pm} = \mathcal{E}(A)\big|_{\pm} \pm i\,\mathcal{E}(B)\big|_{\pm},$$
since $J$ acts as $\pm i\,\mathbb{1}$ there. Thus $\mathcal{E}$ decomposes into a direct sum of a *-representation and an anti-*-representation222By “anti-*-representation” we mean an anti-linear algebra homomorphism, not a *-antihomomorphism (which would be a linear map preserving adjoints that reverses the order of multiplication).. Since $\mathcal{E}(\mathbb{1})|\psi\rangle = |\psi\rangle \neq 0$ for any non-zero vector $|\psi\rangle$, these (anti-)*-representations are necessarily non-degenerate.
By a standard result on the representations of finite-dimensional C*-algebras [Dav91, Corollary III.1.2], any non-degenerate *-representation of $\mathcal{M}_n$ is unitarily equivalent to a direct sum of identity representations. If $\pi$ is an anti-*-homomorphism, let $\pi'(M) := \pi(\bar{M})$. Then $\pi'$ is additive, $\pi'(\lambda M) = \lambda\,\pi'(M)$, $\pi'(MN) = \pi'(M)\,\pi'(N)$, and $\pi'(M^\dagger) = \pi'(M)^\dagger$. Thus $\pi(M) = \pi'(\bar{M})$ where $\pi'$ is a *-homomorphism. Therefore, any non-degenerate anti-*-representation is unitarily equivalent to a direct sum of complex conjugates of identity representations, which completes the argument.
The above theorem characterises encodings of observables. This immediately tells us how to encode physical systems themselves, expressed as Hamiltonians: since the Hamiltonian itself is an observable, the encoding map must have the same characterisation.
It is easy to see from the characterisation in Item (iii) of the Theorem that any encoding preserves all interesting physical properties of the original Hamiltonian. For example, the set of eigenvalues is preserved, up to possibly duplicating each eigenvalue the same number of times, implying preservation of the partition function (up to an unimportant constant factor). It is also easy to see that any encoding properly encodes arbitrary quantum channels: if $K_j$ are the Kraus operators of the channel, then the encoded operators $\tilde{\mathcal{E}}(K_j)$ are Kraus operators of a corresponding channel on the simulator, since $\sum_j \tilde{\mathcal{E}}(K_j)^\dagger\,\tilde{\mathcal{E}}(K_j) = \tilde{\mathcal{E}}\big(\sum_j K_j^\dagger K_j\big) = \mathbb{1}$.
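As an illustration of the channel claim, assuming the block form $\mathcal{E}(M) = M \otimes P + \bar{M} \otimes Q$ with trivial unitary (the channel and state below are our own illustrative choices), one can check that the encoded amplitude-damping Kraus operators still form a valid channel and reproduce the original channel on encoded states.

```python
import numpy as np

g = 0.3
# Amplitude-damping Kraus operators
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

P = np.diag([1., 0.])
Q = np.diag([0., 1.])
def enc(M):
    return np.kron(M, P) + np.kron(M.conj(), Q)

# Encoded Kraus operators still satisfy the completeness relation
S = sum(enc(K).conj().T @ enc(K) for K in (K0, K1))
assert np.allclose(S, np.eye(4))

# Acting on an encoded state (ancilla supported on P), the encoded channel
# reproduces the original channel on the system
rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)  # illustrative state
sigmaP = P  # ancilla state with support in P (tr(P) = 1, so P itself is a state)
rho_enc = np.kron(rho, sigmaP)
out = sum(enc(K) @ rho_enc @ enc(K).conj().T for K in (K0, K1))
expected = np.kron(sum(K @ rho @ K.conj().T for K in (K0, K1)), sigmaP)
assert np.allclose(out, expected)
print("ok")
```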
7.1 A map on states, $\mathcal{E}_{\mathrm{state}}$
We now show that, for any encoding $\mathcal{E}$, there exists a corresponding map $\mathcal{E}_{\mathrm{state}}$ that encodes quantum states such that encoded observables applied to encoded states have correct expectation values.
First, note that for any observable $A \in \mathrm{Herm}_n$ and any state $\rho'$ on the simulator system, we have
$$\mathrm{tr}\big[\mathcal{E}(A)\,\rho'\big] = \mathrm{tr}\big[U\big(A \otimes P + \bar{A} \otimes Q\big)U^\dagger\,\rho'\big] = \mathrm{tr}\big[A\,\rho_P\big] + \mathrm{tr}\big[\bar{A}\,\rho_Q\big],$$
where $\rho_P = \mathrm{tr}_E\big[(\mathbb{1} \otimes P)\,U^\dagger \rho' U\big]$, $\rho_Q = \mathrm{tr}_E\big[(\mathbb{1} \otimes Q)\,U^\dagger \rho' U\big]$, and we label the second subsystem $E$ as discussed after (6). Note that $\rho_P$ and $\rho_Q$ are both positive but not necessarily normalised, but $\rho_P + \rho_Q$ is normalised.
Therefore any map $\mathcal{E}_{\mathrm{state}}$ on states such that $\rho_P + \bar{\rho}_Q = \rho$ will preserve measurement outcomes appropriately. One natural choice is
$$\mathcal{E}_{\mathrm{state}}(\rho) = U\big(\rho \otimes \sigma_P\big)U^\dagger \qquad \text{or} \qquad \mathcal{E}_{\mathrm{state}}(\rho) = U\big(\bar{\rho} \otimes \sigma_Q\big)U^\dagger,$$
where $\sigma_P$ and $\sigma_Q$ are arbitrary ancilla states with support contained in the ranges of $P$ and $Q$ respectively. Then in the former case $\rho_P = \rho$, $\rho_Q = 0$; and in the latter case the roles of $\rho_P$ and $\rho_Q$ are reversed, with $\rho_Q = \bar{\rho}$.
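Both choices of state mapping, system state attached to an ancilla supported on $P$, or its conjugate attached to an ancilla supported on $Q$, give the correct expectation values; a minimal numerical sketch (trivial unitary, one-dimensional $P$ and $Q$, our own illustrative setup):

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.diag([1., 0.])
Q = np.diag([0., 1.])
def enc(M):
    return np.kron(M, P) + np.kron(M.conj(), Q)

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = M + M.conj().T                            # a random observable
W = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = W @ W.conj().T
rho /= np.trace(rho)                          # a random state

# Former case: ancilla supported on P
state_P = np.kron(rho, P)
# Latter case: conjugated system state, ancilla supported on Q
state_Q = np.kron(rho.conj(), Q)

# Encoded observable on encoded state has the original expectation value
for s in (state_P, state_Q):
    assert np.isclose(np.trace(enc(A) @ s), np.trace(A @ rho))
print("ok")
```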
We now show that $\mathcal{E}_{\mathrm{state}}$ simulates time-evolution correctly too. We have
$$e^{-i\mathcal{E}(H)t} = U\Big(e^{-iHt} \otimes P + \overline{e^{iHt}} \otimes Q\Big)U^\dagger,$$
so the $P$ part evolves forwards in time while the $Q$ part evolves backwards in time (up to complex conjugation). Combining this with the preceding observations, we have proven the following result.
For any encoding $\mathcal{E}$, the corresponding map $\mathcal{E}_{\mathrm{state}}$ satisfies the following:
- $\mathcal{E}_{\mathrm{state}}(\rho)$ is a valid quantum state.
- $\mathrm{tr}\big[\mathcal{E}(A)\,\mathcal{E}_{\mathrm{state}}(\rho)\big] = \mathrm{tr}[A\rho]$ for any observable $A \in \mathrm{Herm}_n$.
- For any time $t$, $e^{-i\mathcal{E}(H)t}\,\mathcal{E}_{\mathrm{state}}(\rho)\,e^{i\mathcal{E}(H)t} = \mathcal{E}_{\mathrm{state}}\big(e^{-iHt}\rho\,e^{iHt}\big)$ in the former case of (27), and $\mathcal{E}_{\mathrm{state}}\big(e^{iHt}\rho\,e^{-iHt}\big)$ in the latter.
It is worth highlighting the last point. We see that in the former case of (27), evolving according to $\mathcal{E}(H)$ for time $t$ simulates evolving according to $H$ for time $t$, as we would expect; but that in the latter case, we simulate evolution according to $H$ for time $-t$. That is, if our encoding only includes copies of $\bar{H}$ (i.e. if $p = 0$), we simulate evolution backwards in time. To avoid this issue, we define the concept of a standard encoding as one with $p \geq 1$, and hence which is able to simulate evolution forward in time.
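The forwards/backwards behaviour can be checked numerically; the sketch below (again with trivial unitary and one-dimensional $P$ and $Q$, our own illustrative setup) verifies both cases, implementing the matrix exponential via eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.diag([1., 0.])
Q = np.diag([0., 1.])
def enc(M):
    return np.kron(M, P) + np.kron(M.conj(), Q)

def evolve(Hm, t):
    """Unitary exp(-i*Hm*t) for Hermitian Hm, via eigendecomposition."""
    w, v = np.linalg.eigh(Hm)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

W = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = W + W.conj().T                            # a random Hamiltonian
V = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = V @ V.conj().T
rho /= np.trace(rho)                          # a random state
t = 0.7

Ut, Uenc = evolve(H, t), evolve(enc(H), t)
forwards = Ut @ rho @ Ut.conj().T             # forwards evolution of the original

# P part: simulates evolution forwards in time
assert np.allclose(Uenc @ np.kron(rho, P) @ Uenc.conj().T, np.kron(forwards, P))

# Q part: simulates evolution backwards in time (up to conjugation)
backwards = Ut.conj().T @ rho @ Ut
assert np.allclose(Uenc @ np.kron(rho.conj(), Q) @ Uenc.conj().T,
                   np.kron(backwards.conj(), Q))
print("ok")
```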
Definition (Standard encoding)
An encoding $\mathcal{E}$ is a standard encoding if $p \geq 1$ in the characterisation of Item (iii) (equivalently, if $P \neq 0$).
7.1.1 Gibbs-preserving state mappings
The choice of $\mathcal{E}_{\mathrm{state}}$ in (27) is convenient, as it allows us to use the same mapping $\mathcal{E}$ for both the Hamiltonian and for observables. However, it does not map Gibbs states of the original system to Gibbs states of the simulator. If we have limited ability to manipulate or prepare states of the simulator, it may be difficult to prepare a state of the form (27). At equilibrium, the system will naturally be in a Gibbs state. From this perspective, it would be more natural if the state mapping identified Gibbs states of the original system with Gibbs states of the simulator.
An alternative choice of $\mathcal{E}_{\mathrm{state}}$ does map Gibbs states to Gibbs states. However, to obtain the correct measurement outcome probabilities, we then need to choose a slightly different mapping $\mathcal{E}'$ for observables.333The Hamiltonian is of course also an observable. With this choice of state mapping, to construct the simulator Hamiltonian we must still use the mapping $\mathcal{E}$. But if we want to measure the Hamiltonian – i.e. carry out the measurement on the simulator that corresponds to measuring the energy of the original system – we must measure $\mathcal{E}'(H)$.
For simplicity, in the remainder of the paper we will state and prove our results for the choice of state mapping from (27), so that both Hamiltonians and observables are encoded by $\mathcal{E}$. However, our results also go through with the appropriate minor modifications for the choice of Gibbs-preserving state mapping.