
# Hamiltonian quantum simulation with bounded-strength controls

## Abstract

We propose dynamical control schemes for Hamiltonian simulation in many-body quantum systems that avoid instantaneous control operations and rely solely on realistic bounded-strength control Hamiltonians. Each simulation protocol consists of periodic repetitions of a basic control block, constructed as a suitable modification of an “Eulerian decoupling cycle,” that would otherwise implement a trivial (zero) target Hamiltonian. For an open quantum system coupled to an uncontrollable environment, our approach may be employed to engineer an effective evolution that simulates a target Hamiltonian on the system, while suppressing unwanted decoherence to the leading order. We present illustrative applications to both closed- and open-system simulation settings, with emphasis on simulation of non-local (two-body) Hamiltonians using only local (one-body) controls. In particular, we provide simulation schemes applicable to Heisenberg-coupled spin chains exposed to general linear decoherence, and show how to simulate Kitaev’s honeycomb lattice Hamiltonian starting from Ising-coupled qubits, as potentially relevant to the dynamical generation of a topologically protected quantum memory. Additional implications for quantum information processing are discussed.

###### pacs:
03.67.Lx, 03.65.Fd, 03.67.-a

MIT-CTP 4504

## 1 Introduction

The ability to accurately engineer the Hamiltonian of complex quantum systems is both a fundamental control task and a prerequisite for quantum simulation, as originally envisioned by Feynman [1, 2, 3]. The basic idea underlying Hamiltonian simulation is to use an available quantum system, together with available (classical or quantum) control resources, to emulate the dynamical evolution that would have occurred under a different, desired Hamiltonian not directly accessible to implementation [4]. From a control-theory standpoint, the simplest setting is provided by open-loop Hamiltonian engineering in the time domain [5, 6], whereby coherent control over the system of interest is achieved solely based on suitably designed time-dependent modulation (most commonly sequences of control pulses), without access to ancillary quantum resources and/or measurement and feedback. While open-loop Hamiltonian engineering techniques have their origin and a long tradition in nuclear magnetic resonance (NMR) [8, 7], the underlying physical principles of “coherent averaging” have recently found widespread use in the context of quantum information processing (QIP), leading in particular to dynamical symmetrization and dynamical decoupling (DD) schemes for control and decoherence suppression in open quantum systems [9, 10, 11, 12, 13, 14].

As applications for quantum simulators continue to emerge across a vast array of problems in physics and chemistry, and implementations become closer to experimental reality [3, 15, 16], it becomes imperative to expand the repertoire of available Hamiltonian simulation procedures, while scrutinizing the validity of the relevant control assumptions. With a few exceptions (notably, the use of so-called “perturbation theory gadgets” [17]), open-loop Hamiltonian simulation schemes have largely relied thus far on the ability to implement sequences of effectively instantaneous, “bang-bang” (BB) control pulses [18, 19, 20, 21, 22, 23, 24, 25]. While this is a convenient and often reasonable first approximation, instantaneous pulses necessarily involve unbounded control amplitude and/or power, something which is out of reach for many control devices of interest and is fundamentally unphysical. In the context of DD, a general approach for achieving (to at least the leading order) the same dynamical symmetrization as in the BB limit was proposed in [26], based on the idea of continuously applying bounded-strength control Hamiltonians according to an Eulerian cycle, so-called Eulerian DD (EDD). From a Hamiltonian engineering perspective, EDD protocols translate directly into bounded-strength simulation schemes for specific effective Hamiltonians – most commonly, the trivial (zero) Hamiltonian in the case of “non-selective averaging” for quantum memory (or “time-suspension” in NMR terminology). More recently, EDD has also served as the starting point for bounded-strength gate simulation schemes in the presence of decoherence, so-called dynamically corrected gates (DCGs) for universal quantum computation [27, 28, 29, 30].

In this work, we show that the approach of Eulerian control can be further systematically exploited to construct bounded-strength Hamiltonian simulation schemes for a broad class of target evolutions on both closed and open (finite-dimensional) quantum systems. Our techniques are device-independent and broadly applicable, thus substantially expanding the control toolbox for programming complex Hamiltonians into existing or near-term quantum simulators subject to realistic control assumptions.

The content is organized as follows. We begin in Sect. II by introducing the appropriate control-theoretic framework and by reviewing the basic principles underlying open-loop simulation via average Hamiltonian theory, along with its application to Hamiltonian simulation in the BB setting. Sect. III is devoted to constructing and analyzing simulation schemes that employ bounded-strength controls: while Sec. III.A reviews required background material on EDD, Sec. III.B introduces our new Eulerian simulation protocols for a generic closed quantum system. In Sec. III.C we separately address the important problem of Hamiltonian simulation in the presence of slowly-correlated (non-Markovian) decoherence, identifying conditions under which a desired Hamiltonian may be enacted on the target system while simultaneously decoupling the latter from its environment, and making further contact with DCG protocols. Sect. IV presents a number of illustrative applications of our general simulation schemes in interacting multi-qubit networks. In particular, we provide explicit protocols to simulate a large family of two-body Hamiltonians in Heisenberg-coupled spin systems additionally exposed to depolarization or dephasing, as well as to achieve Kitaev’s honeycomb lattice Hamiltonian starting from Ising-coupled qubits. In all cases, only local (single-qubit, possibly collective) control Hamiltonians with bounded strength are employed. A brief summary and outlook conclude in Sec. V.

## 2 Principles of Hamiltonian simulation

### 2.1 Control-theoretic framework

We consider a quantum system, with associated Hilbert space $\mathcal{H}$, whose evolution is described by a time-independent Hamiltonian $H$. As mentioned, Hamiltonian simulation is the task of making the system evolve under some other time-independent target Hamiltonian, say, $\tilde{H}$. Without loss of generality, both the input and the target Hamiltonians may be taken to be traceless. Two related scenarios are worth distinguishing for QIP purposes:

Closed-system simulation, in which case the controlled system coincides with the quantum system $S$ of interest (also referred to as the “target” henceforth), which undergoes purely unitary (coherent) dynamics;

Open-system simulation, in which case the total system is bipartite on $\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_B$, where $B$ represents an uncontrollable environment (also referred to as the bath henceforth), and the reduced dynamics of the target system $S$ is non-unitary in general.

In both cases, we shall assume the target system to be a network of interacting qudits, hence $\mathcal{H}_S \simeq (\mathbb{C}^d)^{\otimes n}$, for finite $d$ and $n$. In the general open-system scenario, the joint Hamiltonian on $\mathcal{H}_S \otimes \mathcal{H}_B$ may be expressed in the following form,

$$H = H_S \otimes I_B + I_S \otimes H_B + \sum_\alpha S_\alpha \otimes B_\alpha, \qquad (1)$$

where the operators $H_S$ and $S_\alpha$ ($H_B$ and $B_\alpha$) act on $\mathcal{H}_S$ ($\mathcal{H}_B$) respectively, and all the bath operators are assumed to be norm-bounded, but otherwise unspecified (potentially unknown). The closed-system setting is recovered from Eq. (1) in the limit $S_\alpha, H_B \to 0$. Likewise, we may express the target Hamiltonian $\tilde{H}$ in a similar form, with two simulation tasks being of special relevance: $\tilde{S}_\alpha = 0$, in which case the objective is to realize a desired system Hamiltonian $\tilde{H}_S$ while dynamically decoupling $S$ from its bath $B$, thereby suppressing unwanted decoherence [11]; or, more generally, $\tilde{H}_S \neq 0$ and $\tilde{S}_\alpha \neq 0$, where the simulated, dynamically symmetrized error generators $\tilde{S}_\alpha$ may allow for decoherence-free subspaces or subsystems to exist [13, 31].

Formally, the dynamics is modified by an open-loop controller acting on the target system according to

$$H \mapsto H(t) = H + H_c(t), \qquad H_c(t) \equiv \sum_u h_u(t) = \sum_u f_u(t)\, X_u, \qquad (2)$$

where the operators $X_u$ and the (real) functions $f_u(t)$ represent the available control Hamiltonians and the corresponding, generally time-dependent, control inputs, respectively. Clearly, if the Hamiltonian $\tilde{H} - H$ is contained in the admissible control set, the corresponding control problem is trivial and the desired time-evolution,

$$\tilde{U}(t) = e^{-i\tilde{H}t}, \qquad t \ge 0,$$

can be exactly simulated continuously in time. However, this level of control need not be available in settings of interest, including open quantum systems, where control actions are necessarily restricted to the target system alone, $H_c(t) \mapsto H_c(t) \otimes I_B$ in Eq. (2). Following the general idea of “analog” quantum simulation [3], we shall assume in what follows a restricted set of control Hamiltonians (in a sense to be made more precise later) and focus on the task of approximately simulating the desired time evolution at a final time or, more generally, stroboscopically in time, that is, at instants $\tilde{t}_M$, where

$$\tilde{t}_M = M\tilde{T}, \qquad M \in \mathbb{N},$$

and $\tilde{T}$ is a fixed minimum time interval. Choosing $\tilde{T}$ sufficiently small allows in principle any desired accuracy in the approximation to be met, with the limit $\tilde{T} \to 0$ formally recovering the continuous limit.

Specifically, let $U(t)$ and $U_c(t)$ denote the unitary propagators associated to the total and the control Hamiltonians in Eq. (2), respectively:

$$U(t) = T\exp\Big\{-i\int_0^t [H + H_c(\tau)]\, d\tau\Big\}, \qquad (3)$$
$$U_c(t) = T\exp\Big\{-i\int_0^t H_c(\tau)\, d\tau\Big\}, \qquad (4)$$

where we have set $\hbar = 1$ and $T$ indicates time-ordering, as usual. Then, for a given pair $(H, \tilde{H})$, we shall provide sufficient conditions for $\tilde{H}$ to be “reachable” from $H$ and, if so, devise a cyclic control procedure such that the resulting controlled propagator obeys

$$U(t_M) \approx \tilde{U}(\tilde{t}_M), \qquad t_M = MT_c, \quad M \in \mathbb{N}, \qquad (5)$$

where $T_c$ is the cycle time of the controller, that is, $U_c(T_c) = U_c(0) = I$. In general, we shall allow for $t_M$ to differ from $\tilde{t}_M$, corresponding to an overall scale factor in the simulated time, as will become apparent later. If, for a fixed input Hamiltonian $H$, arbitrary target Hamiltonians $\tilde{H}$ are reachable for given control resources, the simulation scheme is referred to as universal. In this case, complete controllability must be ensured by the tunable control Hamiltonians in conjunction with the “drift” $H$ [6]. In contrast, we shall be especially interested in situations where control over $S$ is more limited.

Similar to DD protocols, Hamiltonian simulation protocols are most easily constructed and analyzed by effecting a transformation to the “toggling” frame associated to $U_c(t)$ in Eq. (4) [7, 11, 14]. That is, evolution in the toggling frame is generated by the time-dependent, control-modulated Hamiltonian

$$H'(t) = U_c^\dagger(t)\, H\, U_c(t), \qquad (6)$$

with the corresponding toggling-frame propagator $U'(t)$ being related to the physical propagator in Eq. (3) by $U(t) = U_c(t)U'(t)$. Since the control propagator is cyclic and $H$ is time-independent, it follows that $U(T_c) = U'(T_c)$ and, furthermore, $H'(t)$ acquires the periodicity of the controller, $H'(t + T_c) = H'(t)$. Thus, the stroboscopic controlled dynamics of the system is determined by

$$U(t_M) = [U'(T_c)]^M. \qquad (7)$$

Average Hamiltonian theory [7, 35] may then be invoked to associate an effective time-independent Hamiltonian to the evolution in the toggling frame:

$$U(T_c) = U'(T_c) \equiv \exp(-i\bar{H}T_c), \qquad (8)$$

where $\bar{H}$ is determined by the Magnus expansion [32], $\bar{H} = \sum_{\ell \ge 0} \bar{H}^{(\ell)}$. Explicitly, the leading-order term, determining evolution over a cycle up to the first order in time, is given by

$$\bar{H}^{(0)} = \frac{1}{T_c}\int_0^{T_c} H'(\tau)\, d\tau = \frac{1}{T_c}\int_0^{T_c} U_c^\dagger(\tau)\, H\, U_c(\tau)\, d\tau, \qquad (9)$$

with (absolute) convergence being ensured as long as $T_c \lVert H \rVert$ is sufficiently small [34]. Subject to this convergence condition, higher-order corrections for evolution over time $t$ can also be upper-bounded by (see Lemma 4 in [33])

$$\Big\lVert \sum_{\ell=\kappa}^{\infty} t\, \bar{H}^{(\ell)} \Big\rVert \le c_\kappa \big(t\lVert H \rVert\big)^{\kappa+1}, \qquad c_\kappa = O(1). \qquad (10)$$

Ideally, one would like to achieve $\bar{H}T_c = \tilde{H}\tilde{T}$, so that equality would hold in Eq. (5) for all $M$. In what follows, we shall primarily focus on achieving first-order simulation instead, by engineering the control propagator in such a way that

$$\bar{H}T_c \approx \bar{H}^{(0)}T_c = \tilde{H}\tilde{T}, \qquad (11)$$

whereby, using Eq. (10) with $\kappa = 1$,

$$U(t_M) = e^{-i\bar{H}MT_c} = e^{-i\bar{H}^{(0)}MT_c} + O\big[(MT_c\lVert H \rVert)^2\big] \approx \tilde{U}(\tilde{t}_M). \qquad (12)$$

In general, the accuracy of the approximation in Eq. (11) improves as the “fast control limit”, $T_c \to 0$, is approached. Physically, this corresponds to requiring that the shortest control time scale (pulse separation) involved in the control sequence be sufficiently small relative to the shortest correlation time of the dynamics induced by $H$ [35, 36]. While the problem of constructing general high-order Hamiltonian simulation schemes is of separate interest, we stress that second-order simulation can be readily achieved, in principle, by ensuring that the toggling-frame Hamiltonian is time-symmetric, namely, $H'(t) = H'(T_c - t)$ for $t \in [0, T_c]$. Since all odd-order Magnus corrections vanish in this case [35], it follows (by using again Eq. (10), with $\kappa = 2$) that the error in Eq. (12) is reduced to $O[(MT_c\lVert H \rVert)^3]$, correspondingly boosting the accuracy of the simulation.
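The first-order averaging argument can be checked numerically. The sketch below (Python/NumPy; the single-qubit drift and the two-segment frame sequence are illustrative choices, not taken from the text) builds a toggling-frame cycle whose first-order average is $\bar{H}^{(0)} = Z$ and verifies that the exact cycle propagator deviates from $e^{-i\bar{H}^{(0)}T_c}$ at $O(T_c^2)$, as predicted by the Magnus bound:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expmh(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H = X + Z                    # drift Hamiltonian (illustrative choice)
H1, H2 = H, Z @ H @ Z        # toggling-frame Hamiltonian on the two half-cycles
Hbar0 = (H1 + H2) / 2        # first-order average, Eq. (9): here equals Z

def cycle_error(Tc):
    """Deviation of the exact cycle propagator from exp(-i Hbar0 Tc)."""
    U_cycle = expmh(H2, Tc / 2) @ expmh(H1, Tc / 2)  # time-ordered product
    return np.linalg.norm(U_cycle - expmh(Hbar0, Tc), 2)

e1, e2 = cycle_error(0.2), cycle_error(0.1)
print(e1, e2)   # halving Tc reduces the error roughly fourfold, i.e., O(Tc^2)
```

The quadratic scaling of the residual is the signature of the uncancelled second-order Magnus term.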

### 2.2 Hamiltonian simulation with bang-bang controls

BB Hamiltonian simulation provides the simplest control setting for achieving the intended objective, given in Eq. (5). Two main assumptions are involved: (i) First, we must be able to express the target Hamiltonian as

$$\tilde{H} = \sum_{j=1}^{N} w_j\, U_j^\dagger H U_j, \qquad W \equiv \sum_j w_j, \qquad (13)$$

where the $U_j$ are unitary operators on $\mathcal{H}_S$ and the $w_j$ non-negative real numbers (not all zero). (ii) Second, the available control resources include a discrete set of instantaneous pulses on $S$, whose application results in a piecewise-constant control propagator $U_c(t)$ over the cycle, with corresponding toggling-frame propagators $U_j$, $j = 1, \ldots, N$ [9, 14]. Assumptions (i)-(ii) together allow for the time-average in Eq. (9) to be mapped to a convex (positive-weighted) sum. Eq. (13) may be interpreted as a sufficient condition for the target Hamiltonian $\tilde{H}$ to be reachable from $H$ given open-loop unitary control on $S$ alone. Reachable Hamiltonians must thus be at least as “disordered” as the input one in the sense of majorization [21, 14].

Specifically, Eq. (13) leads naturally to the following BB simulation scheme. Given simulation weights $\{w_j\}$, define the following simulation intervals and timing pattern:

$$\tau_j \equiv w_j\tilde{T}, \qquad t_j \equiv \sum_{k=1}^{j}\tau_k, \qquad t_0 = 0, \qquad t_N \equiv T_c = W\tilde{T}. \qquad (14)$$

A piecewise-constant control propagator for the basic simulation block to be repeated may then be constructed as follows:

$$U_c(t) = U_j, \qquad t \in [t_{j-1}, t_j), \quad j = 1, \ldots, N. \qquad (15)$$

By using Eq. (9), it is immediate to verify that

$$\bar{H}^{(0)} = \frac{1}{T_c}\sum_{j=1}^{N}\tau_j\, U_j^\dagger H U_j = \frac{\tilde{T}}{T_c}\tilde{H}, \qquad (16)$$

resulting in the desired controlled evolution, Eqs. (11)-(12), provided that the convergence conditions for first-order simulation under $H$ are obeyed. Since, in practice, technological limitations always constrain the cycle duration to a finite minimum value, such conditions ultimately determine the maximum simulated time up to which evolution under $\tilde{H}$ may be reliably simulated using the physical Hamiltonian $H$.

In analogy with BB DD schemes, realizing the prescription of Eq. (15) requires discontinuously changing the control propagator from $U_j$ to $U_{j+1}$, via an instantaneous BB pulse at the $j$th endpoint $t_j$. As a result, despite its conceptual simplicity, BB simulation is unrealistic whenever large control amplitudes are not an option, and the evolution induced by $H$ during the application of a control pulse must be considered from the outset. This demands redesigning the basic control block in such a way that the actions of $H$ and $H_c(t)$ are simultaneously accounted for.
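The BB prescription of Eqs. (14)-(16) is easy to verify numerically. The sketch below (Python/NumPy; the single-qubit input Hamiltonian, Hadamard frame, and weights are illustrative choices, not taken from the text) simulates $\tilde{H} = (X+Z)/2$ from $H = Z$ with two equal weights, checking both the first-order average of Eq. (16) and the stroboscopic approximation of Eq. (5):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Hd = (X + Z) / np.sqrt(2)                 # Hadamard frame: Hd Z Hd = X

def expmh(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H = Z                                     # input Hamiltonian
frames = [np.eye(2, dtype=complex), Hd]   # BB toggling frames U_j
weights = [0.5, 0.5]                      # simulation weights w_j (W = 1)
Ttilde = 0.05                             # simulated time per cycle
tau = [wj * Ttilde for wj in weights]     # Eq. (14): tau_j = w_j * Ttilde
Tc = sum(tau)                             # = W * Ttilde

# Eq. (16): first-order average equals (Ttilde/Tc) * Htilde
Hbar0 = sum(tj * U.conj().T @ H @ U for tj, U in zip(tau, frames)) / Tc
Htilde = 0.5 * Z + 0.5 * X

# One cycle in the toggling frame (BB pulses are instantaneous)
U_cycle = np.eye(2, dtype=complex)
for tj, U in zip(tau, frames):
    U_cycle = expmh(U.conj().T @ H @ U, tj) @ U_cycle

M = 20                                    # stroboscopic repetitions, Eq. (5)
err = np.linalg.norm(np.linalg.matrix_power(U_cycle, M) - expmh(Htilde, M * Ttilde), 2)
print(err)   # small residual: first-order simulation error, cf. Eq. (12)
```

The residual grows with the cycle time, consistent with the second-order Magnus correction left uncancelled by the first-order scheme.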

## 3 Hamiltonian simulation with bounded controls

### 3.1 Eulerian simulation of the trivial Hamiltonian

The key to overcoming the disadvantages of BB Hamiltonian simulation is to ensure that the control propagator varies smoothly (continuously) in time during each control cycle. We achieve this goal by relying on Eulerian control design [26]. To introduce the necessary group-theoretical background, we begin by revisiting how, for the special case of a target identity evolution (that is, $\tilde{H} = 0$, also corresponding to a “noop” gate in terms of the end-time simulated propagator), EDD can be naturally interpreted as a bounded-strength simulation scheme.

In the Eulerian approach, the available control resources include a discrete set of unitary operations on $S$, say, $U_\gamma$, $\gamma = 1, \ldots, L$, which are realized over a finite time interval $\Delta$ through application of bounded-strength control Hamiltonians $h_\gamma(t)$, with $t \in [0, \Delta]$. That is,

$$U_\gamma \equiv u_\gamma(\Delta), \qquad u_\gamma(\delta) = T\exp\Big\{-i\int_0^\delta h_\gamma(\tau)\, d\tau\Big\}. \qquad (17)$$

Note that the choice of the control Hamiltonians $h_\gamma(t)$ is not unique, which allows for implementation flexibility. The unitaries $U_\gamma$ are identified with the image of a generating set of a finite group under a faithful, unitary, projective representation $\rho$ [26]. That is, let $G$ be a finite group of order $|G|$, such that each element may be written as an ordered product of elements in a generating set $\Gamma$ of order $|\Gamma| = L$, let $\rho$ be the representation map [37], and $U_\gamma = \rho(\gamma)$. The Cayley graph $\mathcal{C}(G, \Gamma)$ of $G$ relative to $\Gamma$ can be thought of as pictorially representing all elements of $G$ as strings of generators in $\Gamma$. Each vertex represents a group element, and a vertex $g$ is connected to another vertex $g'$ by a directed edge “colored” (labeled) with generator $\gamma$ if and only if $g' = \gamma g$. The number of edges in $\mathcal{C}(G, \Gamma)$ is thus equal to $N = |G||\Gamma|$. Because a Cayley graph is regular, it always has an Eulerian cycle that visits each edge exactly once and starts (and ends) on the same vertex [38, 39]. Let us denote with $\mathcal{P} = (\gamma_1, \ldots, \gamma_N)$ the ordered list of generators defining an Eulerian cycle on $\mathcal{C}(G, \Gamma)$ which, without loss of generality, starts (and ends) at the identity element of $G$.
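Since the Cayley graph is regular, an Eulerian cycle can always be extracted constructively, e.g., with Hierholzer's algorithm. The sketch below (Python; the group $\mathbb{Z}_2\times\mathbb{Z}_2$ with two generators is an illustrative stand-in for $G$ and $\Gamma$) builds the graph and returns the ordered list of generator labels defining an Eulerian cycle:

```python
from itertools import product

# Cayley graph of G = Z2 x Z2 with generating set Gamma = {a, b}
G = list(product([0, 1], repeat=2))
gens = {'a': (1, 0), 'b': (0, 1)}
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# One outgoing edge per generator from each vertex: g --gamma--> gamma * g
edges = {g: [(name, mul(gam, g)) for name, gam in gens.items()] for g in G}

def eulerian_cycle(start):
    """Hierholzer's algorithm: traverse every edge exactly once; return edge labels."""
    stack, path = [(start, None)], []
    while stack:
        v, label = stack[-1]
        if edges[v]:                 # an unused outgoing edge remains: follow it
            name, w = edges[v].pop()
            stack.append((w, name))
        else:                        # dead end: retire the edge that led here
            stack.pop()
            if label is not None:
                path.append(label)
    return path[::-1]

cycle = eulerian_cycle((0, 0))
print(len(cycle))   # 8 = |G| * |Gamma| generator labels, one per edge
```

Replaying the returned labels from the identity vertex traverses each of the $|G||\Gamma|$ edges exactly once and returns to the starting vertex.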

Once a control Hamiltonian for implementing each generator as in Eq. (17) is chosen, an EDD protocol is constructed by assigning a cycle time as $T_c = N\Delta$ and by applying the control Hamiltonians sequentially in time, following the order determined by the Eulerian cycle $\mathcal{P}$. Thus,

$$U_c^{\rm EDD}(t_j) = U_{\gamma_j}\, U_c^{\rm EDD}(t_{j-1}), \qquad j = 1, \ldots, N, \qquad (18)$$

where $U_{\gamma_j}$ is the image of the generator labeling the $j$th edge in $\mathcal{P}$. As established in [26], the lowest-order average Hamiltonian associated to the above EDD cycle has the form $\bar{H}^{(0)} = \Pi_G[F_\Gamma(H)]$, where for any operator $A$ acting on $\mathcal{H}_S$, the map

$$\Pi_G(A) = \frac{1}{|G|}\sum_{g\in G} U_g^\dagger A\, U_g \qquad (19)$$

projects $A$ onto the centralizer of $\rho(G)$ (i.e., $\Pi_G(A)$ commutes with all $U_g$), and

$$F_\Gamma(H) = \frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}\frac{1}{\Delta}\int_0^\Delta u_\gamma^\dagger(\tau)\, H\, u_\gamma(\tau)\, d\tau \qquad (20)$$

implements an average of $H$ over both the control interval and the group generators. Accordingly, bounded-strength simulation of $\tilde{H} = 0$ is achieved provided that the following DD condition is obeyed:

$$\Pi_G[F_\Gamma(H)] = 0. \qquad (21)$$

By Schur’s lemma, this is automatically ensured if the group representation acts irreducibly on $\mathcal{H}_S$. Formally, the BB limit may be recovered by letting $\Delta \to 0$ [26], reflecting the ability to directly implement all the group elements (with no overhead, as if $\Gamma = G$) and corresponding to uniform simulation weights $w_g = 1/|G|$.
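The role of Schur's lemma can be made concrete: for the single-qubit projective representation of $\mathbb{Z}_2\times\mathbb{Z}_2$ by the Pauli matrices (an irreducible example used purely for illustration), the group average of Eq. (19) annihilates every traceless operator. A minimal NumPy check:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def Pi_G(A, reps):
    """Group average of Eq. (19) over the representation {U_g}."""
    return sum(U.conj().T @ A @ U for U in reps) / len(reps)

reps = [I2, X, Y, Z]                     # irreducible projective rep of Z2 x Z2
rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Herm = (M + M.conj().T) / 2              # random Hermitian operator
Herm = Herm - (np.trace(Herm).real / 2) * I2   # make it traceless
residual = np.linalg.norm(Pi_G(Herm, reps))
print(residual)   # ~0: the Pauli twirl projects any traceless operator to zero
```

More generally, for an irreducible representation $\Pi_G(A) = \mathrm{tr}(A)\, I/d$, which vanishes exactly when $A$ is traceless.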

### 3.2 Eulerian simulation protocols beyond noop: Construction

We now show how the Eulerian cycle method can be extended to bounded-strength simulation of a non-trivial class of target Hamiltonians. We assume that $\tilde{H}$ may be expressed as a convex unitary mixture of the group representatives $U_g$,

$$\tilde{H} = \sum_{g\in G} w_g\, U_g^\dagger H U_g, \qquad w_g \ge 0, \qquad W = \sum_g w_g. \qquad (22)$$

We construct the desired control protocol starting from an Eulerian cycle $\mathcal{P}$ on $\mathcal{C}(G, \Gamma)$. Specifically, the idea is to append to each of the $N$ control slots that define an EDD scheme a free-evolution (or “coasting”) period of suitable duration, in such a way that the net simulated Hamiltonian is modified from zero to $\tilde{H}$ as given in Eq. (22). A pictorial representation of the basic control block is given in Fig. 1. As in Eq. (17), let $\Delta$ denote the minimum time duration required to implement each generator, hence, to smoothly change the control propagator from a value $U_{g_{j-1}}$ to $U_{g_j}$ along the cycle. While such “ramping-up” control intervals all have the same length, each “coasting” interval is designed to keep the control propagator constant at $U_{g_j}$ for a duration determined by the corresponding weight $w_{g_j}$. Since the control is switched off during coasting, continuity of the overall control Hamiltonian may be ensured, if desired, by requiring that

$$h_\gamma(0) = 0 = h_\gamma(\Delta), \qquad \gamma = 1, \ldots, L, \qquad (23)$$

in addition to the bounded-strength constraint.

An Eulerian simulation protocol may be formally specified as follows. As before, let the $j$th time interval be denoted as $[t_{j-1}, t_j]$, $j = 1, \ldots, N$, with $t_0 = 0$ and $t_N$ defining the cycle time $T_c$. For each $g \in G$, let $\tau_g \equiv w_g\tilde{T}$ as in the BB case. The duration of the $j$th coasting period is then assigned as

$$\Theta_j \equiv \frac{\tau_{g_j}}{|\Gamma|}, \qquad (24)$$

resulting in the following timing pattern [compare to Eq. (14)]:

$$t_j = \sum_{k=1}^{j}(\Delta + \Theta_k) = j\Delta + \frac{1}{|\Gamma|}\sum_{k=1}^{j}\tau_{g_k}, \qquad t_N \equiv T_c = N\Delta + W\tilde{T}. \qquad (25)$$

As the expression for the cycle time makes clear, the resulting protocol may be equivalently interpreted in two ways: starting from an EDD cycle, corresponding to $\Theta_j = 0$ and $T_c = N\Delta$, we introduce the coasting periods to allow for non-trivial simulated dynamics to emerge; or, starting from a BB simulation scheme for $\tilde{H}$, corresponding to $\Delta = 0$, we introduce the ramping-up periods to allow for control Hamiltonians to be smoothly switched over $\Delta$. Either way, bounded-strength protocols imply a time overhead $N\Delta$ relative to the BB case, recovering the BB limit as $\Delta \to 0$, as expected. Explicitly, the control propagator for Eulerian simulation has the form:

$$U_c^{\rm EUS}(t_{j-1} + \delta) = u_{\gamma_j}(\delta)\, U_{g_{j-1}}, \qquad \delta \in [0, \Delta], \qquad (26)$$
$$U_c^{\rm EUS}(t_{j-1} + \Delta + \theta) = U_{g_j}, \qquad \theta \in [0, \Theta_j]. \qquad (27)$$

The resulting first-order Hamiltonian under Eulerian simulation is derived by evaluating the time-average in Eq. (9) with the control propagator given by Eqs. (26)-(27). Direct calculation along the lines of [26] yields:

$$\bar{H}^{(0)} = \frac{1}{T_c}\sum_{j=1}^{N}\Big[\int_0^\Delta U_c^\dagger(t_{j-1}+\delta)\, H\, U_c(t_{j-1}+\delta)\, d\delta + \int_0^{\Theta_j} U_c^\dagger(t_{j-1}+\Delta+\theta)\, H\, U_c(t_{j-1}+\Delta+\theta)\, d\theta\Big]$$
$$= \frac{1}{T_c}\sum_{j=1}^{N}\Big[\int_0^\Delta U_{g_{j-1}}^\dagger u_{\gamma_j}^\dagger(\delta)\, H\, u_{\gamma_j}(\delta)\, U_{g_{j-1}}\, d\delta + \int_0^{\Theta_j} U_{g_j}^\dagger H\, U_{g_j}\, d\theta\Big]$$
$$= \frac{1}{T_c}\sum_{g\in G}\Big[U_g^\dagger\Big(\sum_{\gamma\in\Gamma}\int_0^\Delta u_\gamma^\dagger(\delta)\, H\, u_\gamma(\delta)\, d\delta\Big)U_g + |\Gamma|\,\frac{\tau_g}{|\Gamma|}\, U_g^\dagger H\, U_g\Big],$$

where the last equality follows from two basic properties of Eulerian cycles: firstly, the list $(g_0, \ldots, g_{N-1})$ (and also $(g_1, \ldots, g_N)$) of the vertices that are being visited contains each element of $G$ precisely $|\Gamma|$ times; secondly, in traversing the Cayley graph, each group element is left exactly once by a $\gamma$-labeled edge for each generator $\gamma \in \Gamma$. Thus, by recalling the definitions given in Eqs. (19) and (20), we finally obtain

$$\bar{H}^{(0)} = \frac{N\Delta}{T_c}\,\Pi_G[F_\Gamma(H)] + \frac{\tilde{T}}{T_c}\sum_{g\in G} w_g\, U_g^\dagger H U_g = \frac{\tilde{T}}{T_c}\tilde{H}, \qquad (28)$$

which indeed achieves the intended first-order simulation goal, Eqs. (11)-(12), as long as convergence holds and the DD condition of Eq. (21) is obeyed.
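Equation (28) is an exact identity for $\bar{H}^{(0)}$, so it can be verified by direct numerical integration of Eq. (9) along a concrete Eulerian schedule. The sketch below (Python/NumPy; the single-qubit group, the constant ramp Hamiltonians, the Eulerian cycle, and the weights are all illustrative choices) implements Eqs. (26)-(27) for $G = \mathbb{Z}_2\times\mathbb{Z}_2$ represented projectively by $\{I, X, Z, Y\}$, for which the $\Pi_G$ term vanishes by irreducibility:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expmh(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Generators a -> X, b -> Z, realized by constant bounded Hamiltonians over a ramp Delta
Delta, Ttilde = 0.1, 1.0
hgen = {'a': (np.pi / (2 * Delta)) * X, 'b': (np.pi / (2 * Delta)) * Z}
Ug = {(0, 0): I2, (1, 0): X, (0, 1): Z, (1, 1): Y}
wts = {(0, 0): 0.4, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.1}   # weights w_g, W = 1
gval = {'a': (1, 0), 'b': (0, 1)}
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# A hand-checked Eulerian cycle on the Cayley graph (each vertex left once per generator)
cycle = ['a', 'a', 'b', 'a', 'b', 'b', 'a', 'b']

H = Z + 0.3 * X                     # input Hamiltonian (traceless, illustrative)
K = 500                             # integration steps per ramping interval
acc, Tc = np.zeros((2, 2), complex), 0.0
frame, g = I2.copy(), (0, 0)
for gamma in cycle:
    h = hgen[gamma]
    for k in range(K):              # ramping interval, Eq. (26), midpoint rule
        Uc = expmh(h, (k + 0.5) * Delta / K) @ frame
        acc += (Delta / K) * (Uc.conj().T @ H @ Uc)
    frame, g = expmh(h, Delta) @ frame, mul(gval[gamma], g)
    Theta = wts[g] * Ttilde / 2     # Eq. (24): Theta_j = tau_{g_j} / |Gamma|
    acc += Theta * (frame.conj().T @ H @ frame)   # coasting, Eq. (27)
    Tc += Delta + Theta
Hbar0 = acc / Tc                    # Eq. (9), evaluated along the whole cycle

Htilde = sum(wts[g] * Ug[g].conj().T @ H @ Ug[g] for g in Ug)
err = np.linalg.norm(Hbar0 - (Ttilde / Tc) * Htilde)
print(err)   # ~0 up to discretization: Eq. (28) with the Pi_G term vanishing
```

Note that the accumulated frame matrix agrees with the group representative $U_{g_j}$ only up to a phase, which drops out of every conjugation.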

The simulation accuracy may be improved by symmetrizing $U_c(t)$ in time. In analogy to symmetrized EDD protocols [9], this can easily be accomplished by running the protocol and then suitably running it again in reverse. Specifically, let the duration of the $j$th coasting interval be changed as $\Theta_j \mapsto \Theta_j/2$. Run the protocol as described above until time $T_c/2$. Then, from time $T_c/2$ until time $T_c$, modify Eqs. (26)-(27) as follows:

$$U_c^{\rm EUS}[T_c - (t_{j-1} + \Delta) + \delta] = u_{\gamma_j}(\Delta - \delta)\, U_{g_{j-1}}, \qquad \delta \in [0, \Delta],$$
$$U_c^{\rm EUS}[T_c - (t_{j-1} + \Delta + \Theta_j) + \theta] = U_{g_j}, \qquad \theta \in [0, \Theta_j],$$

for $j = N, \ldots, 1$. Provided that one is able to implement the time-reversed control Hamiltonians $h_\gamma(\Delta - t)$, we again obtain

$$\bar{H}^{(0)} = \frac{2N\Delta}{T_c}\,\Pi_G[F_\Gamma(H)] + \frac{\tilde{T}}{T_c}\sum_{g\in G} w_g\, U_g^\dagger H U_g,$$

while satisfying $H'(t) = H'(T_c - t)$ for $t \in [0, T_c]$, hence ensuring that $\bar{H}^{(1)} = 0$.

### 3.3 Eulerian simulation while decoupling from an environment

The ability to implement a desired Hamiltonian on the target system $S$, while switching off (at least to the leading order) the coupling to an uncontrollable environment $B$, is highly relevant to realistic applications. That is, with reference to Eq. (1), the objective is now to simultaneously engineer a desired $\tilde{H}_S \neq 0$ while setting all simulated error generators $\tilde{S}_\alpha = 0$, by unitary control operations acting on $S$ alone. Because the first-order Magnus term is additive [recall Eq. (9)], it is appropriate to treat each summand of $H$ individually, leading to a relevant average Hamiltonian of the form

$$\bar{H}^{(0)} = \bar{H}_S \otimes I_B + \sum_\alpha \bar{S}_\alpha \otimes B_\alpha + I_S \otimes H_B,$$

where for a generic operator $A$ on $\mathcal{H}_S$ we let

$$\bar{A} \equiv \frac{1}{T_c}\int_0^{T_c} U_c^\dagger(\tau)\, A\, U_c(\tau)\, d\tau.$$

We can then apply the analysis of Sec. 3.2 to the internal system Hamiltonian ($A = H_S$) and each error generator ($A = S_\alpha$) separately, to obtain in both cases a simulated operator of the form given in Eq. (28):

$$\bar{A} = \frac{N\Delta}{T_c}\,\Pi_G[F_\Gamma(A)] + \frac{\tilde{T}}{T_c}\sum_{g\in G} w_g\, U_g^\dagger A\, U_g.$$

Since the task is to decouple $S$ from $B$ while maintaining the non-trivial evolution due to $\tilde{H}_S$, the reachability condition of Eq. (22) must now ensure that

$$\tilde{H}_S = \sum_{g\in G} w_g\, U_g^\dagger H_S\, U_g, \qquad (29)$$
$$0 = \sum_{g\in G} w_g\, U_g^\dagger S_\alpha\, U_g, \qquad \forall\alpha. \qquad (30)$$

Accordingly, it is necessary to extend the DD assumption of Eq. (21) to become

$$\Pi_G[F_\Gamma(H_S)] = 0, \qquad (31)$$
$$\Pi_G[F_\Gamma(S_\alpha)] = 0, \qquad \forall\alpha, \qquad (32)$$

such that the first term in $\bar{A}$ vanishes for each of the summands in $H$. Altogether, we recover

$$\bar{H}^{(0)} = \frac{\tilde{T}}{T_c}\tilde{H}_S \otimes I_B + I_S \otimes H_B.$$

It is interesting in this context to highlight some similarities and differences with DCGs [27], which also use Eulerian control as their starting point and are specifically designed to achieve a desired unitary evolution on the target system while simultaneously removing decoherence to the leading [27, 28, 30] or, in principle, arbitrarily high order [29]. By construction, the open-system simulation procedure just described does provide a first-order DCG implementation for the target gate $e^{-i\tilde{H}_S\tilde{T}}$: in particular, the requirement that Eqs. (29)-(30) be obeyed together (for the same weights $w_g$) is effectively equivalent to evading the “no-go theorem” for black-box DCG constructions established in [28], with the coasting intervals and the resulting “augmented” Cayley graph playing a role similar in spirit to a (first-order) “balance-pair” implementation. Despite these formal similarities, a number of differences exist between the two approaches. First, an obvious yet important difference is that DCG constructions focus directly on synthesizing a desired unitary propagator, as opposed to a desired Hamiltonian generator. Second, while the internal system Hamiltonian, $H_S$, is a crucial input in a Hamiltonian simulation problem, it is effectively treated as an unwanted error contribution in analytical DCG constructions, in which case complete controllability over the target system must be supplied by the controls alone. Although in more general (optimal-control inspired) DCG constructions [30], limited external control is assumed and the drift may become essential for universality to be maintained, the emphasis remains, as noted above, on end-time synthesis of a target propagator. Finally, a main intended application of DCGs is realizing low-error single- and two-qubit gates for use within fault-tolerant quantum computing architectures, as opposed to robust Hamiltonian engineering for many-body quantum simulators, which is our focus here.

### 3.4 Eulerian simulation protocols: Requirements

Before presenting explicit illustrative applications, we summarize and critically assess the various requirements that should be obeyed for Eulerian simulation to achieve the intended control objective of Eq. (5) in a closed or, respectively, open-system setting:

1. Time independence. Both the internal Hamiltonian $H$ and the target Hamiltonian $\tilde{H}$ are taken to be time-independent (and, without loss of generality, traceless).

2. Reachability. The target Hamiltonian must be reachable from $H$, that is, there must be a control group $G$, with a faithful, unitary projective representation mapping $g \mapsto U_g$, such that Eq. (22) holds. For dynamically corrected Eulerian simulation in the presence of an environment, this requires, as noted, that for the same weights $w_g$, the desired system Hamiltonian $\tilde{H}_S$ is reachable from $H_S$ while the trivial (zero) Hamiltonian is reachable from each error generator $S_\alpha$ separately, such that Eqs. (29)-(30) both hold.

3. Bounded control. For each generator $\gamma$ of the chosen control group $G$, we need access to bounded control Hamiltonians $h_\gamma(t)$, such that application of $h_\gamma(t)$ over a time interval of duration $\Delta$ realizes the group representative $U_\gamma$, additionally subject (if desired) to the continuity condition of Eq. (23).

4. Decoupling conditions. Suitable DD conditions, Eq. (21) in a closed system or Eqs. (31)-(32) in the open-system error-corrected case, must be fulfilled, in order for undesired contributions to the simulated Hamiltonians to be averaged out by symmetry to the leading order.

5. Time efficiency. If the choice of $G$ is not unique for a given simulation task, the smallest group should be chosen, in order to keep the number of intervals per cycle, $N = |G||\Gamma|$, to a minimum. In particular, efficient Hamiltonian simulation requires that $|G|$ (hence also $N$) scale (at most) polynomially with the number of subsystems $n$.

The key simplification that the time-independence Assumption (1) introduces into the problem is that the periodicity of the control action is directly transferred to the toggling-frame Hamiltonian of Eq. (6), allowing one to simply focus on single-cycle evolution. Although this assumption is not strictly fundamental, general time-dependent Hamiltonians may need to be dealt with on a case-by-case basis (see also [40, 41, 42]). A situation of special practical relevance arises in this context for open systems exposed to classical noise, in which case the system-bath interaction in Eq. (1) is effectively replaced by a classical, time-dependent stochastic field. Similar to DD and DCG schemes, Eulerian simulation protocols remain applicable as long as the noise process is stationary and exhibits correlations over sufficiently long time scales [9, 43].

The reachability Assumption (2) is a prerequisite for Eulerian Hamiltonian simulation schemes. Although BB Hamiltonian simulation need not be group-based, most BB schemes follow this design principle as well. Assumption (3), restricting the admissible control resources to physical Hamiltonians with bounded amplitude (thus finite control durations, as opposed to instantaneous implementation of arbitrary group unitaries as in the BB case), is a basic assumption of the Eulerian control approach. As remarked, our premise is that the available Hamiltonian control is limited, restricted to only the target system if the latter is coupled to an environment, and typically non-universal on $S$; in particular, we cannot directly express and apply $\tilde{H} - H$, or else the problem would be trivial. In addition to error-corrected Hamiltonian simulation in open quantum systems, scenarios of great practical interest may arise when the control Hamiltonians are subject to more restrictive locality constraints than the system and target Hamiltonians are (e.g., two-body simulation with only local controls, see also Sec. 4.1).

The required decoupling conditions in Assumption (4) are automatically obeyed if the representation $\rho$ acts irreducibly on $\mathcal{H}_S$. This follows directly from Schur’s lemma, together with the fact that the map $F_\Gamma$ defined in Eq. (20) is trace-preserving, and both $H$ and $\tilde{H}$ can be taken to be traceless. While convenient, irreducibility is not, however, a requirement. When the representation is reducible, care must be taken in order to ensure that Assumption (4) is nevertheless obeyed. It should be stressed that this is possible independently of the target Hamiltonian $\tilde{H}$. Therefore, if the choice $(G, \rho)$ works for one Eulerian simulation scheme (whether $\rho$ is irreducible or not), then it can be used for Eulerian simulation with any target $\tilde{H}$ that belongs to the reachable set from $H$, that is, that can satisfy Eq. (22).

We close this discussion by recalling that it is always possible for a finite-dimensional target system to find a control group for which both Assumptions (2) and (4) are satisfied, by resorting to the concept of a transformer [22, 14]. A transformer is a pair $(G, \rho)$, where $G$ is a finite group and $\rho$ is a faithful, unitary, projective representation such that, for any traceless Hermitian operators $A$ and $B$ on $\mathcal{H}_S$ with $A \neq 0$, one may express

$$B = \sum_{g\in G} w_g\, U_g^\dagger A\, U_g, \qquad w_g \ge 0.$$

We illustrate this general idea in the simplest case of a single qubit, $d = 2$. Let $X$, $Y$, $Z$ denote the Pauli matrices and $R$ the unitary matrix defined by

$$R = \frac{i-1}{2}\begin{pmatrix} i & i \\ -1 & 1 \end{pmatrix}, \qquad (33)$$

which corresponds to a rotation by an angle $2\pi/3$ about the axis $\hat{n} = (1, 1, 1)/\sqrt{3}$. Direct calculation shows that $R^3 = I$ and that conjugation by $R$ cyclically shifts the Pauli matrices, i.e., $R^\dagger X R = Y$, $R^\dagger Y R = Z$, and $R^\dagger Z R = X$. Consider now the group given by the presentation

$$G = \langle x, y, z, r \mid x^2 = y^2 = z^2 = r^3 = 1,\ xz = y,\ r^{-1}xr = y,\ r^{-1}yr = z,\ r^{-1}zr = x \rangle.$$

Using the defining relations of this group, its elements can always be written as $g = p\,r^k$, where $p \in \{1, x, y, z\}$ and $k \in \{0, 1, 2\}$. Clearly, the assignment $x \mapsto X$, $y \mapsto Y$, $z \mapsto Z$, $r \mapsto R$ yields a faithful, unitary, irreducible projective representation, since the Pauli matrices satisfy the defining relations up to phase. It is shown in [22] that the pair $(G, \rho)$ defines a transformer in the sense given above, namely, any traceless matrix $B$ may be reached from any fixed traceless, nonzero matrix $A$, for suitable non-negative weights $w_g$. The irreducibility property for any transformer pair can easily be established by contradiction [44].
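The single-qubit transformer structure is easy to confirm numerically. The following check (Python/NumPy; the choice $A = Z$, $B = (X+Y+Z)/3$ is an illustrative instance, not from the text) verifies that $R$ in Eq. (33) is unitary, satisfies $R^3 = I$, cyclically permutes the Paulis under conjugation, and reaches $B$ from $A$ with uniform weights on $\{1, r, r^2\}$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
R = ((1j - 1) / 2) * np.array([[1j, 1j], [-1, 1]])   # Eq. (33)

ok_unitary = np.allclose(R @ R.conj().T, np.eye(2))
ok_order = np.allclose(R @ R @ R, np.eye(2))          # R^3 = I
ok_cycle = (np.allclose(R.conj().T @ X @ R, Y)
            and np.allclose(R.conj().T @ Y @ R, Z)
            and np.allclose(R.conj().T @ Z @ R, X))

# Transformer instance: B reached from A = Z with weights 1/3 on {1, r, r^2}
Rk = [np.eye(2, dtype=complex), R, R @ R]
B = sum((1 / 3) * U.conj().T @ Z @ U for U in Rk)
ok_reach = np.allclose(B, (X + Y + Z) / 3)
print(ok_unitary, ok_order, ok_cycle, ok_reach)   # all True
```

Reaching a general traceless $B$ requires weights spread over all twelve group elements, as established in [22].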

A drawback of the transformer formalism is that general transformer groups tend to be large, making purely transformer-based simulation schemes inefficient. In practice, given the native system Hamiltonian , the challenge is to find a group that grants a reasonably efficient scheme while satisfying Assumptions (2) and (4), and subject to the ability to implement the required control operations. As we shall see next, transformer-inspired ideas may still prove useful in devising simulation schemes in the presence, for instance, of additional symmetry conditions.

## 4 Illustrative applications

In this section, we explicitly analyze simple yet paradigmatic Hamiltonian simulation tasks motivated by QIP applications. While a number of other interesting examples and generalizations may be envisioned (as also further discussed in the Conclusions), our goal here is to give a concrete sense of the usefulness and versatility of our Eulerian simulation approach in physically realistic control settings. In particular, we focus on achieving non-local Hamiltonian simulation using only bounded-strength local (single-qubit) control, in both closed and open multi-qubit systems.

### 4.1 Eulerian simulation in closed Heisenberg-coupled qubit networks

Let us start from the simplest case of a system consisting of two qubits, interacting via an isotropic Heisenberg Hamiltonian of the form

 H = H_{\mathrm{iso}} = J(X \otimes X + Y \otimes Y + Z \otimes Z) \equiv J(X_1 X_2 + Y_1 Y_2 + Z_1 Z_2),

where $J$ has units of energy and the second equality defines an equivalent compact notation. We are interested in a class of target XYZ Hamiltonians of the form

 \tilde{H} = H_{XYZ} = J_x X_1 X_2 + J_y Y_1 Y_2 + J_z Z_1 Z_2, \qquad J_u \in \mathbb{R}. \qquad (34)

For instance, $J_x = J_y \equiv J_\perp$, $J_z = 0$ corresponds to an isotropic XX model, whereas if $J_z \neq 0$ with $J_x = J_y \equiv J_\perp$, an XXZ interaction is obtained, the special value $J_z = -2 J_\perp$ corresponding to the important case of a dipolar Hamiltonian. The construction of a simulation protocol starts from observing that Hamiltonians as in Eq. (34) are reachable from $H_{\mathrm{iso}}$, in the sense of Eq. (22), based on single-qubit control only.

Specifically, let $\mathcal{G} = \mathbb{Z}_2 \times \mathbb{Z}_2$, and let the representation map $\mathcal{G}$ to Pauli operators acting on the first qubit alone. That is, $\mathcal{G}$ is mapped to the following set of unitaries:

 \{U_g\} = \{I \otimes I,\; X \otimes I,\; Y \otimes I,\; Z \otimes I\} \equiv \mathcal{G}_1 = \{I, X_1, Y_1, Z_1\}. \qquad (35)

Choosing the generators of $\mathcal{G}$ to be the elements represented by $X_1$ and $Z_1$, we assume that we have access to the control Hamiltonians

 h_x(t) = f_x(t)\, X_1 \qquad \text{and} \qquad h_z(t) = f_z(t)\, Z_1,

where the control inputs $f_x(t)$ and $f_z(t)$ satisfy $\int_0^\delta f_u(\tau)\, d\tau = \pi/2$, for $u \in \{x, z\}$. Recalling Eq. (17), this yields the control propagators

 u_x(\delta) = \cos\!\Big[\int_0^\delta f_x(\tau)\, d\tau\Big] I - i \sin\!\Big[\int_0^\delta f_x(\tau)\, d\tau\Big] X_1,
 u_z(\delta) = \cos\!\Big[\int_0^\delta f_z(\tau)\, d\tau\Big] I - i \sin\!\Big[\int_0^\delta f_z(\tau)\, d\tau\Big] Z_1,

with $u_x(\delta) = X_1$ and $u_z(\delta) = Z_1$ (up to phase), as desired.

Note that, for any single-qubit Hamiltonians $A$ and $B$, averaging $A \otimes B$ over the unitary group in Eq. (35) results in the following projection super-operator:

 \Pi_{\mathcal{G}}(A \otimes B) = \frac{1}{4} \sum_{g \in \mathcal{G}} U_g^\dagger (A \otimes B)\, U_g = \frac{\mathrm{tr}(A)}{2}\, I \otimes B. \qquad (36)

In general, the map $\Pi_{\mathcal{G}}$ is trace-preserving and, in this case, it acts non-trivially only on the first qubit; thus, $\Pi_{\mathcal{G}}$ is trace-preserving on the first qubit. Since each term in $H_{\mathrm{iso}}$ is traceless on the first qubit, the decoupling condition $\Pi_{\mathcal{G}}(H_{\mathrm{iso}}) = 0$ follows directly from Eq. (36), even though the relevant representation is, manifestly, reducible.
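Both the projection identity of Eq. (36) and the decoupling of $H_{\mathrm{iso}}$ can be verified directly. A minimal numerical check, assuming standard Pauli conventions:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron = np.kron

group = [kron(P, I2) for P in (I2, X, Y, Z)]   # G_1 = {I, X_1, Y_1, Z_1}

def twirl(H):
    """Group average: (1/4) sum_g U_g† H U_g."""
    return sum(U.conj().T @ H @ U for U in group) / len(group)

J = 1.0
H_iso = J * (kron(X, X) + kron(Y, Y) + kron(Z, Z))
assert np.allclose(twirl(H_iso), 0)            # decoupling condition holds

# Eq. (36) for random single-qubit operators A, B:
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
assert np.allclose(twirl(kron(A, B)), (np.trace(A) / 2) * kron(I2, B))
```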

Having satisfied our main requirements for Eulerian simulation, reachability of XYZ Hamiltonians as in Eq. (34) is equivalent to the existence of a solution to the following set of conditions:

 J(w_I + w_{X_1} - w_{Y_1} - w_{Z_1}) = J_x,
 J(w_I - w_{X_1} + w_{Y_1} - w_{Z_1}) = J_y, \qquad (37)
 J(w_I - w_{X_1} - w_{Y_1} + w_{Z_1}) = J_z,

for non-negative weights $w_g$. While infinitely many choices exist in general, minimizing the total weight keeps the simulation time overhead to a minimum. For instance, it is easy to verify that a dipolar Hamiltonian of the form

 \tilde{H} = H_{\mathrm{dip}} = -J(X_1 X_2 + Y_1 Y_2 - 2 Z_1 Z_2)

may be simulated with minimum time overhead by choosing weights

 w_I = \tfrac{1}{2}, \qquad w_{X_1} = 0 = w_{Y_1}, \qquad w_{Z_1} = \tfrac{3}{2}.

The Cayley graph associated with the resulting Eulerian simulation protocol is depicted in Fig. 2, with the explicit timing structure of the control block as in Fig. 1 and the corresponding number of control segments per block. It is worth observing that, although the weights $w_{X_1}$ and $w_{Y_1}$ are zero in the particular case at hand, all group members of $\mathcal{G}_1$ are nonetheless required, and the unitaries $X_1$ and $Y_1$ still show up in the simulation scheme (during the ramping-up sub-intervals, as evident from Eq. (26)). This is crucial to guarantee that the unwanted term is projected out.
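The quoted weights can be checked against the linear system of Eq. (37) in a few lines. The sign matrix below encodes how conjugation by $I, X_1, Y_1, Z_1$ flips the sign of each two-body coupling:

```python
import numpy as np

# Rows: conditions for (J_x, J_y, J_z); columns: (w_I, w_X1, w_Y1, w_Z1).
# Entry = sign of U_g† (u_1 u_2) U_g under conjugation by each group element.
M = np.array([[1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]], dtype=float)

J = 1.0
target = np.array([-J, -J, 2 * J])        # dipolar target: -J(XX + YY - 2ZZ)
w = np.array([0.5, 0.0, 0.0, 1.5])        # quoted minimal-overhead weights

assert np.all(w >= 0)                     # weights must be non-negative
assert np.allclose(J * (M @ w), target)   # Eq. (37) is satisfied
```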

The above analysis and simulation protocols can be easily generalized to a chain of $n$ qubits (or spins), subject to nearest-neighbor (NN) homogeneous Heisenberg couplings, that is, a Hamiltonian of the form

 H = H^{(\mathrm{NN})}_{\mathrm{iso}} = \sum_{i=1}^{n-1} J (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}) \equiv \sum_{i=1}^{n-1} J\, \vec{\sigma}_i \cdot \vec{\sigma}_{i+1},

where for later reference we have introduced the standard compact notation $\vec{\sigma}_i \equiv (X_i, Y_i, Z_i)$, and we assume for concreteness that $n$ is even. In this case, we need only change the unitary representation of $\mathcal{G}$ to be defined by the two generators $X_1 X_3 \cdots X_{n-1}$ and $Z_1 Z_3 \cdots Z_{n-1}$, resulting in the set of unitaries [42]

 \{U_g\} = \{I,\; X_1 X_3 \cdots X_{n-1},\; Y_1 Y_3 \cdots Y_{n-1},\; Z_1 Z_3 \cdots Z_{n-1}\} \equiv \mathcal{G}_{\mathrm{odd}}.

Physically, the required generators $X_1 X_3 \cdots X_{n-1}$ and $Z_1 Z_3 \cdots Z_{n-1}$ correspond to control Hamiltonians that are still just sums of 1-local terms, and that act non-trivially on odd qubits only:

 h_x(t) = f_x(t)(X_1 + X_3 + \ldots + X_{n-1}), \qquad h_z(t) = f_z(t)(Z_1 + Z_3 + \ldots + Z_{n-1}).
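That averaging over $\mathcal{G}_{\mathrm{odd}}$ decouples the NN Heisenberg chain can be confirmed numerically for a small instance. The sketch below checks the $n = 4$ case (each NN term contains exactly one Pauli on an odd site, so the four-element twirl annihilates it):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(paulis, sites, n):
    """Tensor product placing the given Paulis at given 0-based sites of n qubits."""
    facs = [I2] * n
    for P, s in zip(paulis, sites):
        facs[s] = P
    return reduce(np.kron, facs)

n = 4
H = sum(op((P, P), (i, i + 1), n) for i in range(n - 1) for P in (X, Y, Z))

# G_odd acts with the same Pauli on qubits 1 and 3 (0-based sites 0 and 2):
G_odd = [op((P, P), (0, 2), n) for P in (I2, X, Y, Z)]
Pi_H = sum(U.conj().T @ H @ U for U in G_odd) / len(G_odd)

assert np.allclose(Pi_H, 0)   # leading-order decoupling of the NN chain
```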

We expect that the design of Eulerian simulation schemes for more general scenarios where both the input and the target are arbitrary two-body Hamiltonians (including, for instance, long-range couplings) will greatly benefit from the existence of combinatorial approaches for constructing efficient DD groups [45, 41]. A more in-depth analysis of this topic is, however, beyond our current scope.

### 4.2 Error-corrected Eulerian simulation in open Heisenberg-coupled qubit networks

Imagine now that the Heisenberg-coupled system considered in the previous section is coupled to an environment (bath) $B$, and the task is to achieve the desired XYZ Hamiltonian simulation while also removing arbitrary linear decoherence to the leading order. The total input Hamiltonian has the form

 H = H^{(\mathrm{NN})}_{\mathrm{iso}} \otimes I_B + I_S \otimes H_B + \sum_{i=1}^{n} \vec{\sigma}_i \otimes \vec{B}_i, \qquad \vec{B}_i \equiv (B_{x,i}, B_{y,i}, B_{z,i}), \qquad (38)

where $H_B$ and $B_{u,i}$, for each $u \in \{x, y, z\}$ and $i = 1, \ldots, n$, are operators acting on the bath, whose norm is sufficiently small to ensure convergence of the relevant Magnus series, similar to first-order DCG constructions [27, 28]. The target Hamiltonian then reads

 \tilde{H} = H_{XYZ} \otimes I_B + I_S \otimes H_B,

in terms of suitable coupling-strength parameters as in Eq. (34). As before, we start by analyzing the case of two qubits in full detail. Our strategy to synthesize a dynamically corrected simulation scheme involves two stages: (i) We will first decouple from