
Asymptotic quantum many-body localization from thermal disorder

Abstract

We consider a quantum lattice system with infinite-dimensional on-site Hilbert space, very similar to the Bose-Hubbard model. We investigate many-body localization in this model, induced by thermal fluctuations rather than disorder in the Hamiltonian. We provide evidence that the Green-Kubo conductivity , defined as the time-integrated current autocorrelation function, decays faster than any polynomial in the inverse temperature as . More precisely, we define approximations to by integrating the current-current autocorrelation function up to a large but finite time and we rigorously show that vanishes as , for any such that is sufficiently large.

1 Introduction

1.1 Localization and its characterization

The phenomenon of localization was introduced in the context of non-interacting electrons in random lattices in [1]. It is now widely accepted that in such systems, a delocalization-localization (or metal-insulator) transition occurs as the disorder strength is increased. This transition is often discussed by referring to the nature of the one-particle wavefunctions that are exponentially localized in space in the insulator, but delocalized in the metallic regime. The localized phase has been studied with mathematical rigor starting with [2], whereas for the delocalized regime, this has not been successful up to now.

The natural question of how interactions modify this transition has received renewed attention lately. Both theoretical [3, 4] and numerical [5, 6] work suggests that the localization-delocalization transition persists, at least for short-range interactions. When discussing models with interactions, most authors choose a model where localization is manifest in the absence of interaction (whereas, in the original model of [1], it was a highly nontrivial result). For example, a simple model from [5] is the random-field Ising chain

(1.1)

where are the Pauli matrices at site and are i.i.d. random variables with . We think of many-body localization as the property that a spatially local excess of energy does not spread into the rest of the system. However, before formalizing this intuition, we give another possible definition of many-body localization, used e.g. in [5, 7], in the model defined by (1.1). Let label eigenfunctions of ; then ‘many-body localization’ at infinite temperature ( is the inverse temperature) could be defined as the occurrence of the inequality

(1.2)

where on the right hand side refers to the thermal average and refers to disorder average. Of course, one can also ask whether this inequality holds at , in which case the average over eigenfunctions on the left-hand side should be restricted to those eigenfunctions with an energy density corresponding to the inverse temperature , and the right hand side does not automatically vanish. Depending on the disorder strength, the validity of (1.2) can then depend on the temperature as well. The appeal of the inequality (1.2) is that it violates the so-called Eigenstate Thermalization Hypothesis (ETH), which states that most eigenvectors of the Hamiltonian define an ensemble that is equivalent to the standard (micro)canonical ensemble; i.e. with the notation as in (1.2), it states that, for any , the bound

(1.3)

is satisfied for a fraction of eigenfunctions that approaches as . Even though the ETH has not been proven for any interesting non-integrable system (the difficulty of doing so is related to the difficulty of proving delocalization), it has nevertheless been accepted by the theoretical physics community, starting with the works [8, 9]. It is however important to point out that the ETH also fails for ballistic systems like the ideal crystal for which there is surely no localization in the sense of non-spreading of energy excess.

There is at present no mathematical proof of many-body localization. Some progress was made for the (one-particle) Anderson model on a Cayley tree in [10], which is often quoted as a toy model for many-body localization and, recently, an approach via iterative perturbation theory for the model (1.1) was initiated by [7] (see [11] for an outline of their strategy in the one-particle setting).

As already indicated, we prefer a characterization that stresses the dynamics of energy fluctuations, and therefore we consider the Green-Kubo formula for the heat conductivity

(1.4)

where are local energy currents at site . Many-body localization is then understood as the vanishing of . The picture underlying such a definition is that means that energy excitations do not spread diffusively (or faster than diffusively) through the system. Let us bypass the question of the relation between these two characterizations of many-body localization; in the few cases where there exists to date a convincing argument for many-body localization, those arguments would imply , as well. In any case, it seems to us that the characterization via the conductivity is clearly physically relevant.
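For orientation, a standard way to write the Green-Kubo formula (1.4) is the following; since the symbols are elided above, this is a hedged reconstruction (the $\beta^2$ prefactor and the normalization are conventions that may differ from the authors' by constants):

```latex
\kappa(\beta) \;=\; \beta^{2} \sum_{x \in \mathbb{Z}^{d}} \int_{0}^{\infty} \mathrm{d}t \;
  \big\langle\, J_{x}(t)\, J_{0}(0) \,\big\rangle_{\beta}
```

with $J_x$ the local energy current at site $x$ and $\langle\cdot\rangle_\beta$ the thermal average. Many-body localization in the sense of this paper then corresponds to $\kappa(\beta)=0$.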

In classical mechanics, one can consider models of the same flavour: one-particle localization occurs in a chain of harmonic oscillators with random masses. Adding anharmonicity to this setup yields a model that is a candidate for many-body localization, but the expectation seems to be that these models do not exhibit strict many-body localization. However, the phenomenology can still manifest itself through the dependence of on the anharmonicity . From the works [12, 13, 14, 15], one conjectures that

(1.5)

In other words, the conductivity has a non-perturbative origin for small . Below, we refer to this scenario as ’asymptotic localization’.
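A compact formulation of this asymptotic-localization scenario, as we read (1.5) (writing $g$ for the anharmonicity and $\kappa(g)$ for the conductivity; this notation is our illustrative choice):

```latex
\forall n \in \mathbb{N}: \qquad \lim_{g \to 0} \frac{\kappa(g)}{g^{\,n}} \;=\; 0
```

i.e. $\kappa(g)$ vanishes faster than any power of $g$, so no finite order of perturbation theory in $g$ can account for it.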

1.2 Thermal disorder instead of quenched disorder

Whereas all the models hinted at above have disorder in the Hamiltonian, this paper is concerned with the question of whether one can in principle replace the disorder by thermal fluctuations, i.e. disorder due to the thermal Gibbs state. As far as we can see, this question does not have a one-particle analogue, but it is natural in many-body systems. Indeed, whereas disorder can model defects, it is also sometimes used as a model for slow degrees of freedom that are, in principle, influenced by the rest of the system.

The fact that randomness in the strict sense of the word is not necessary for localization had up to now been investigated by replacing the random field in the Hamiltonian by a quasi-random field, which is quite different from what we do. In the one-particle setup, this led to the study of models like the Aubry-André model [16], and recently it was argued [17] that also in the many-body setting, quasi-randomness suffices for many-body localization. To explain our setup and question, we now introduce our model. We consider a variant of the Bose-Hubbard model:

(1.6)

where are annihilation/creation operators of a boson at site and . For , this model is exactly the Bose-Hubbard model. In fact, the model we study is slightly more general than (1.6), to avoid conceptual complications related to conserved quantities and nonequivalence of ensembles (see Section 2.2); this is however not relevant for the discussion here. W.r.t. the thermal state at , the occupations behave as i.i.d. random variables whose distribution is given by

(1.7)

We split our Hamiltonian as

(1.8)

and we treat as a perturbation of . Intuitively, a perturbative analysis is possible, if for a pair of eigenstates of , we have the non-resonance condition

(1.9)

where . Since the distance between consecutive eigenvalues (level spacing) of the operator grows roughly as and the matrix elements of , locally at site , grow as (since they are quadratic in the field operators), the condition (1.9) seems satisfied for most pairs if , that is, with high probability w.r.t. the probability measure (1.7) when is sufficiently small. This is the basic intuition for why this model should exhibit some localization effect at high temperature. However, because of the many-body setup, it is not straightforward that the above claims make sense. In particular, it is certainly false that one could apply perturbation theory directly to the eigenstates of . Indeed, since the number of eigenstates should be thought of as and the range of energies has width , the level spacing (difference between nearest levels) vanishes fast as . Therefore, the locality of the operators is a crucial issue that should be used in making the above heuristics precise. Instead of having resonant and non-resonant configurations , we will assign to any ’resonance spots’ (where a local version of (1.9) fails).
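To make the level-spacing heuristic concrete, here is a hedged sketch with an assumed on-site self-energy $\varepsilon(n)$ (the symbols $\varepsilon$, $\Delta$ and the exponent $\alpha$ are our illustrative notation, not necessarily the paper's):

```latex
\Delta(n) \;:=\; \varepsilon(n+1) - \varepsilon(n),
\qquad
\big|\langle \eta | g H_1 | \eta' \rangle\big| \;\sim\; g\,n
\quad\text{(since } a,\,a^{\dagger} \sim \sqrt{n}\,\text{)}
```

so the local non-resonance condition (1.9) is plausible as long as $g\,n/\Delta(n) \ll 1$. For instance, if $\varepsilon(n)=n^{\alpha}$ with $\alpha>2$, then $\Delta(n)\sim n^{\alpha-1}\gg g\,n$ for large $n$, and the large occupations typical at high temperature are generically non-resonant.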

Up to now, the heuristic reasoning is in fact no different from the one we would develop for the disordered Ising chain, except that we replaced the ’disorder distribution’ by the ’distribution in the uncoupled Gibbs state’. The difference kicks in when one realizes that the non-resonance condition is not static, but can change as the dynamics changes the occupations . Therefore, it is not sufficient to argue that resonant spots are sparse; one should investigate the dynamics of these resonance spots and exclude that this dynamics induces a current. The most intuitive part of this issue takes the form of a question in graph theory: the vertices of the graph are the configurations and the edges are pairs of configurations that satisfy some resonance condition. If the connected components of this graph are small, i.e. they typically consist of a few configurations, then this hints at localization. The main problem to be overcome in the present article is to show that, indeed, typical graphs decompose into many small disconnected components. Our analysis is however only valid in the limit , and for this reason, we do not know yet, even at a heuristic level, whether our model exhibits many-body localization in the strict sense (see also Section 3 and the recent papers [18, 19]), that is, whether the conductivity for with , or whether the localization is only asymptotic as in (1.5), i.e.
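The graph-theoretic picture can be illustrated with a toy simulation; this is a hedged stand-in for the paper's actual resonance graph (vertices play the role of configurations, and each candidate edge is declared ‘resonant’ independently with some probability `p`):

```python
import random

def largest_component(n, p, k=4, seed=0):
    """Union-find over n vertices; each vertex proposes k random edges,
    each kept ('resonant') independently with probability p. Returns the
    size of the largest connected component."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for _ in range(k):
            j = rng.randrange(n)
            if i != j and rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

# Sparse resonances: all components stay tiny (localization-like).
small = largest_component(20000, p=0.02)
# Dense resonances: a giant cluster appears (delocalization-like).
giant = largest_component(20000, p=0.8)
```

Below the percolation threshold (mean degree below one) the largest component stays logarithmically small; above it, a macroscopic cluster emerges. The main work of the paper amounts to showing that the true resonance graph stays in the subcritical-like regime.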

(1.10)

In this paper, we give a strong indication why at least (1.10) should hold, even in higher dimensions , see Theorem 2.1. This is done by approximating the current-current correlation function by truncation at times that grow like an arbitrary polynomial in and proving (1.10) for these approximations. We refer to Section 3 for a more detailed overview of the main ideas.

Similar reasoning was developed earlier in [15] for disordered classical systems, and in [20], for classical systems where the setup is analogous to the present paper, i.e. disorder is replaced by thermal fluctuations.

1.3 Outline of the paper

In Section 2, we introduce the model in precise terms and state our results; in Section 3, we outline the strategy and present a glossary of the most important symbols used throughout the proof. Section 4 deals with the iterative diagonalization of our Hamiltonian, excluding the resonant configurations (see the explanation above). The sum of all terms that were not treated by iterative diagonalization is called ’the resonant Hamiltonian’, indicated by the symbol . Sections 5 and 6 contain the analysis of the resonant Hamiltonian . As such, they are fully independent of Section 4 and they form the main part of our work. In Section 7, we finally combine the results of Section 4 with the analysis of Sections 5 and 6 to prove our results. In the appendix, we establish exponential decay of correlations at small for our model.

Acknowledgements. We benefited a lot from discussions with John Imbrie and David Huse and we also thank them for their encouragement regarding this work.

W.D.R thanks the DFG for financial support and the University of Helsinki for hospitality. F.H. thanks the University of Helsinki and Heidelberg University for hospitality, as well as the ERC MALADY and the ERC MPOES for financial support.

2 Model and result

2.1 Preliminaries

Let be a finite set. We define the Hilbert space

(2.1)

i.e. at each site there is an infinite-dimensional ’spin’-space. For an operator acting on we denote by (’support’ of ) the minimal set such that for some acting on , and the identity on for any . We do not distinguish between and , and we will denote them by the same symbol.

Let be the bosonic annihilation/creation operators on :

(2.2)

We write for the annihilation/creation operators acting on site , and, as announced above, we do not distinguish between and . We also define the number operators

(2.3)

The vectors diagonalizing the operators play a distinguished role in our analysis. For a finite set , we define the phase space with elements

(2.4)

such that and we often use as a label for the function i.e.  for .

2.2 Hamiltonian

We introduce the Hamiltonian of our model in finite volume and with free boundary conditions;

(2.5)

where , means that are nearest neighbours, and the exponent . By standard methods (e.g. Kato-Rellich), one checks that is self-adjoint on the domain of . The term destroys the conservation of the total occupation number . In the sequel, we will assume that , so that total energy is the only conserved quantity. Nonetheless, all our results remain valid when or vanish. The reason why we find it important to destroy the second conserved quantity is that similar models with two conserved quantities typically exhibit non-equivalence of ensembles. As explained in [21], one expects in a microcanonical ensemble equilibrium states where a macroscopic part of the particles (the total number of particles would correspond to in our model) is concentrated on a single lattice site. We want to stress that this type of ’statistical localization’ has nothing in common with the localization mechanism in the present paper.

To avoid constants later on, we demand that .

2.3 States

The thermal equilibrium state of the system at inverse temperature and in finite volume is defined as

(2.6)

We are interested in the high-temperature regime, where the finite-volume states have a unique infinite-volume limit (for, say, in the sense of Van Hove), independent of boundary conditions. Morally speaking, this result belongs to standard knowledge, but, literally, it does not, because of the infinite one-site Hilbert space. In principle, we deal with this issue in the appendix, but, since we in fact only need exponential decay of correlations, uniformly in , we will not explicitly address the construction of the infinite-volume state. We drop the volume and inverse temperature from the notation for the time being, writing simply . It is understood that sums over are always restricted to the volume .

2.4 Currents

We fix once and for all the vector and we study the current in this direction. First, we decompose the Hamiltonian as

(2.7)

where

(2.8)

We define local current operators by

(2.9)

Since the operators act on at most sites, all that contribute a nonzero term to the sum in (2.9) are nearest neighbours of . One way to convince oneself that this is a meaningful definition is to consider first the total current through the (restriction of a) hyperplane as the time-derivative of the total energy to the left of this hyperplane, i.e.

(2.10)

Then it follows that

(2.11)

Note that, by the time-invariance of the equilibrium state, , we have

(2.12)

2.5 Green-Kubo formula

To study the Green-Kubo formula, we introduce an empirical average of the local current over space and time:

(2.13)

where the scaling anticipates a central limit theorem, relying on the fact that the equilibrium expectation of vanishes:

(2.14)

This follows directly from (2.12) by using the decomposition . We introduce the finite-time conductivity

(2.15)

A basic intuition in transport theory states that in systems with normal (diffusive) transport, the current-current correlations decay in an integrable way, resulting in the convergence of the finite-time conductivity to the conductivity with . At present, this has however not been proven for any interacting Hamiltonian system. Instead, we study the behaviour of the approximants for arbitrarily large (polynomial in ) and we show that at all such times, the conductivity vanishes:
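In formulas, and hedged to our reading of the elided symbols, the empirical current (2.13) and the finite-time conductivity (2.15) can be written as follows (the central-limit scaling $1/\sqrt{T|\Lambda|}$ is stated in the text; the $\beta^2$ prefactor is a convention):

```latex
\bar{J}_{T} \;=\; \frac{1}{\sqrt{T\,|\Lambda|}} \int_{0}^{T} \mathrm{d}t \sum_{x \in \Lambda} J_{x}(t),
\qquad
\kappa(T,\beta) \;=\; \beta^{2}\, \big\langle\, \bar{J}_{T}^{\;2} \,\big\rangle_{\beta}
```

Normal transport would then correspond to the convergence of $\kappa(T,\beta)$ as $T\to\infty$.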

Theorem 2.1 (Conductivity in small limit).

There is a such that for any ,

(2.16)

As already explained in the introduction, we take this result as a strong indication that also

(2.17)

To make this precise, we should understand what type of processes dominates the dynamics after very long times, i.e. superpolynomial in . In [20], we argued, for models of classical mechanics, that if the dynamics becomes chaotic at such large times, then the conjecture (2.17) indeed holds. This was done by introducing an energy-conserving stochastic term in the dynamics, of arbitrarily small strength, and proving that the conductivity (which in that case can be shown to be finite) has the same order of magnitude as the stochastic term. This is not attempted in the present paper. On the other hand, without such a stochastic term, it remains an enormous task to prove that the conductivity is even finite and nonzero; see for example [22] for an exposition of this problem.

An alternative way to view our results is to compare them to Nekhoroshev estimates in classical systems. Such estimates typically establish results very reminiscent of ours, but they are restricted to a finite number of degrees of freedom. We refer to [20] for a more thorough discussion of this point and for relevant references.

2.6 Splitting of the current

From a technical point of view, the key result in this paper is a splitting of the current into an oscillatory part and a small part. To describe it, let us introduce a multi-dimensional strip (whose width is called ) containing the hyperplane ;

(2.18)

and we often drop the parameters by simply writing .

Theorem 2.2 (Splitting of current).

For any , and sufficiently small , depending on , the following holds uniformly in the volume and the choice of : There are collections of operators , such that

(2.19)

and

  1. The operators and are supported in , i.e. , and whenever is not connected.

  2. and have zero average:

  3. They are bounded as

    (2.20)

Here, denote constants with that depend only on the dimension , and the exponent . The parameters can additionally depend on .

The relevance of this theorem in establishing asymptotic energy localization is explained in more detail below.

3 Overview of the method

Before embarking on the proof of our results, let us informally describe the main steps leading to them. Let us first observe that Theorem 2.1 is readily deduced from Theorem 2.2, as detailed in Section 7.5. Indeed, to start with, the first sum in the right hand side of (2.19) just represents local energy oscillations; the contribution of such an oscillation to the current (2.13) is given by

Next, the terms in the second sum in the right hand side of (2.19) possibly contribute to the conductivity, but are very small in the Hilbert-Schmidt norm based on the thermal state . In fact, they are seen to decay as an arbitrarily large power of , if is taken large enough, thanks to the presence of the term ’’ in the exponent of the bound in (2.20). Finally, the terms in the exponents in (2.20) ensure that we can perform sums over the connected sets .

We can thus now focus on the derivation of Theorem 2.2. Let us start by explaining the origin of the oscillatory term in (2.19). For the sake of the argument, let us consider a strongly localized solid. So we imagine that the unitary change of basis that diagonalizes is written as , where the anti-hermitian matrix is a sum of almost local terms (see Section 4.2 for a precise definition of what almost local means). The diagonalized Hamiltonian then takes the form

(3.1)

where the terms quickly decay to as (we took here for simplicity). We could now say that defined in (2.10) was the naive left part of the total energy. We define

But then, from (2.10), we find

(3.2)

On the one hand, the locality properties of allow us to conclude that is localized near the hyperplane , so that the first term in this last equation may be identified with the first sum in the right hand side of (2.19). On the other hand, would here vanish. In reality, we will however not be able to fully diagonalize , so that a “rest term” appears in (2.19). Technically, this step, consisting in deriving Theorem 2.2 once the change of basis and the operator are known, is performed in Sections 7.2-7.4. This leads to heavy computations, as the operator that we manage to obtain is far less simple than (3.1); this stems from both conceptual questions (resonances) and technical ones (high energies).

We now need to find a change of basis that will remove as many oscillations as possible, and then analyze the Hamiltonian in the new basis. The construction of the change of basis is performed in Section 4 (the notation does not appear yet in Proposition 4.3; it only shows up in Section 4.5 when we restrict our attention to finite volumes). As already stressed in the introduction, the interaction between atoms can be treated as a perturbation at high temperature, thanks to the choice in the Hamiltonian (1.6): resonances are only met in some exceptional places in the solid (see figure 1). To make this a bit more transparent at this level of the discussion, we can rewrite given by (1.6) as

(3.3)

With these notations, both typical self-energy differences and terms in are of order , so that is indeed a perturbative parameter. We will however not explicitly make use of these notations in the proofs.

Figure 1: Resonances in first order in perturbation. For simplicity we assume . The situation on the left is typical at high temperature, and non-resonant, as the self-energy difference is much larger than the interaction energy: The situation on the right is rare and resonant: the self-energy difference even vanishes in this case.

We construct via an iterative KAM-like scheme, recently developed by Imbrie and Spencer [7] in the context of quenched disordered systems. Naively, the scheme works as follows. In a first step, we determine so that takes the form , for some new self-energy and some new perturbation . For this, we write and, assuming that is a sum of local terms of order , we expand in powers of :

(3.4)

Writing and , we get rid of the first order in by setting

(3.5)

( if by definition). The fact that resonances are rare precisely means that for most pairs of states and , the matrix element is well defined and of order . Let us ignore resonances for the moment. We would then conclude that the expansion (3.4) was justified. So, we would also have determined a renormalized Hamiltonian with a perturbation of order . This strategy could then be iterated, constructing a sequence of changes of variables , where is a sum of local terms of order . Doing so, we would readily conclude that is strongly localized in the sense of (3.1).
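The first-order step just described can be written out explicitly; the following is a standard sketch (signs and the diagonal/off-diagonal splitting follow our own conventions, and $E_\eta$ denotes the eigenvalue of $H_0$ on the state $\eta$):

```latex
e^{A} (H_0 + g H_1)\, e^{-A}
  \;=\; H_0 + g H_1 + [A, H_0] + g\,[A, H_1] + \tfrac{1}{2}\,[A,[A,H_0]] + \dots
% choosing A to cancel the first order, [A, H_0] = -\, g H_1^{\text{offdiag}}:
\langle \eta | A | \eta' \rangle
  \;=\; \frac{g\, \langle \eta | H_1 | \eta' \rangle}{E_{\eta} - E_{\eta'}}
  \qquad (\eta \neq \eta'), \qquad \langle \eta | A | \eta \rangle = 0
```

after which the remaining off-diagonal part is of order $g^2$, provided the denominators $E_\eta - E_{\eta'}$ are not anomalously small; this is exactly where the non-resonance condition enters.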

As an aside, let us observe that it is only possible to find so that (3.5) holds if the perturbation has no diagonal element. This explains why in principle we need to renormalize the self-energy ( as mentioned), in order to absorb the diagonal part of the new Hamiltonian. This is however not what we do in practice, as we simply treat these extra diagonal elements as resonances. However, in order to investigate possible true localization in translation-invariant models, there would be a deeper reason to take the self-energy renormalization into account. Indeed, if this phenomenon is ignored, it is readily seen that, for fixed , resonances would eventually become typical as one goes to higher orders in perturbation theory. Since, at higher orders, atoms may change level by more than one unit, the interaction could now just swap the levels of any two nearby atoms (whereas at first order this was only possible if the energies were nearly the same, as depicted on the right interaction of figure 1). So all atoms would be in resonance with their neighbours, allowing energy to travel into the solid. On the other hand, such a drastic conclusion could not be reached if the renormalization of the self-energy was taken into account. It has in fact been suggested in [18] that this effect could guarantee that resonances rarefy as one moves to higher orders. To support this view, we indeed observe that the perturbative splitting of the levels could and should be exploited to show localization in the one-body Anderson model when the disorder only takes a finite number of values, a model for which localization is clearly expected to hold.

Let us come back to the description of the scheme initiated in (3.4). It is clear that resonances, even if very rare, cannot just be ignored as we pretended up to now. We just do as much as we can: the perturbation is split into a resonant and a non-resonant term (see (4.30)), and (3.5) is only solved with replaced by the non-resonant part of . While the changes of variables are now well defined and enjoy good decay properties, this replacement comes with a price. A first, technical, consequence is that the iteration procedure is much slowed down. Indeed, in this version of the scheme, we just leave the resonant part as it is, so that at each step, resonant terms of order are present in the perturbation. Though they do not create any trouble as such, it is seen that, iterating the scheme once more, non-resonant terms are generated that would be too large for a superexponential bound like to survive. Instead, we can only obtain that is a sum of terms of order (so we do not progress faster than in usual perturbation theory).
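Ignoring resonances, the first-order rotation solving (3.5) can be sanity-checked numerically on a toy finite matrix. The sketch below is a generic illustration (a dense random matrix with well-spaced diagonal, not the paper's local potentials): the off-diagonal part of the rotated Hamiltonian drops from order g to order g².

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, g = 8, 1e-3

# Toy non-resonant H0: eigenvalue spacing (= 1) much larger than g
E = np.arange(N, dtype=float)
H0 = np.diag(E)

# Generic hermitian perturbation H1 with zero diagonal
H1 = rng.standard_normal((N, N))
H1 = (H1 + H1.T) / 2
np.fill_diagonal(H1, 0.0)

# First-order generator: A_{nm} = g H1_{nm} / (E_n - E_m), antisymmetric
denom = E[:, None] - E[None, :] + np.eye(N)  # eye() only avoids 0/0 on the diagonal
A = g * H1 / denom
np.fill_diagonal(A, 0.0)

H = H0 + g * H1
Hrot = expm(A) @ H @ expm(-A)

def offdiag_norm(M):
    """Frobenius norm of the off-diagonal part."""
    return np.linalg.norm(M - np.diag(np.diag(M)))

before = offdiag_norm(H)     # O(g)
after = offdiag_norm(Hrot)   # O(g^2): the first order has been rotated away
```

Shrinking g by a factor of 10 shrinks `after` by roughly a factor of 100, the hallmark of a successful first-order diagonalization; small denominators (resonances) would ruin this.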

The true problem is however that, after a large but finite number of iterations, we are left with a Hamiltonian still containing a perturbation of order (see the term in (4.31), and, later on, the resonant Hamiltonian defined in (5.1)). The resonant Hamiltonian is quite sparse, but not as much as needed to get our results: a look at figure 1 shows indeed that the probability for two atoms to be resonant is at best bounded by . Before indicating how we will get off the hook, let us stress here that the analysis of resonances reveals a fundamental difference between quenched and thermal disorder.

To see this, let us for example consider the first-order resonances in a quenched-disorder spin chain, as studied in [5, 7]. In this model, it is possible to determine bonds on the lattice such that resonances can only occur on these bonds. Moreover, if the disorder is strong enough, these potentially resonant bonds form small isolated islands. In this case, it is then in fact possible to completely get rid of the resonant Hamiltonian at each step of the procedure. Indeed, one can diagonalize the Hamiltonian “on the resonant islands”, meaning that we conjugate it with a change of basis that affects only the terms in that act inside the islands. This rotation is non-perturbative, but does not entail any delocalization, as the resonant spots do not percolate. By contrast, in the translation-invariant setup, it is no longer possible to visualize resonances on the physical lattice. Instead, we directly need to analyze a percolation problem in the full set of states (it should however be noted that the eigenstates of the resonant Hamiltonian could still be localized even in the presence of a giant percolation cluster, but we are not aware of any convincing argument supporting this view). This is a rather delicate problem, illustrated in figure 2.

Figure 2: In translation invariant chains, resonances do travel into the system. Let us assume that next to nearest neighbor level swapping is allowed (which anyway occurs in second order in perturbation). More precisely, this means that a configuration can be transformed into
  1. if

  2. if

  3. if

With a bit of trial and error, we discover that the left configuration can be transformed into the right configuration in a few steps. This means that the time evolution of the state on the left under the dynamics generated by the resonant Hamiltonian can have an overlap with the state on the right. We see that the rightmost atom can come into resonance with the other ones, though it was not initially so.

We will not attempt to diagonalize the resonant Hamiltonian. Instead, the total energy will be separated into a left and a right part, in a state-dependent way, by a surface close to that “slaloms” between the resonances. This is described in Section 6; see in particular Figure 3, where the spirals indicate the resonant spots. So, we will arrive at the situation described by (3.2): the second term in the right hand side of this equation will now be a sufficiently sparse current, while the first term still is just an oscillation.

To see how to define this surface, we need to analyze the motion of resonances (see Section 5). Let us first restrict the Hamiltonian to a large but fixed volume around a point on (a volume that will not be sent to infinity). We show the following. Let us pick a state in , and let us collect all the other states in that could have an overlap with the time evolution of under the dynamics generated by the resonant Hamiltonian. We show that for an overwhelming majority of states , there exist small isolated islands in such that any of the states that we have collected only differs from on these islands. The set of states for which this does not hold is small enough to be neglected. On the one hand, we can convince ourselves of the validity of this statement by looking at figure 1. To simplify, let us assume that resonances are first order, and only occur when two levels are swapped, as is the case for the interaction on the right. Then, on that example, it is seen that the only resonant island is located on the sites 5, 6, 7, assuming that atoms have been labeled from 1 to 8. On the other hand, a look at figure 2 hints that this statement could be violated if was sent to infinity for fixed . Indeed, as the volume gets larger and larger, configurations that are rare locally eventually occur. It is therefore conceivable that a big resonant spot starts invading the full space, connecting configurations that would have remained separated if the perturbation was confined to the volume .

So we have found a way to construct the surface close to in the volume , but this is not completely satisfactory, as we take the thermodynamic limit before sending . Two issues are raised. First, if the dimension is larger than one, we may take a volume around each point in and construct a piece of surface in each of these volumes, but we then have to glue them together. Second, even in one dimension, where the surface just reduces to a single point, we must analyze what extra current is produced if the Hamiltonian is now defined on the full space. Let us bypass here the first question, which leads to intricate constructions (see Section 6), as the second one appears to us as more fundamental. We actually observe that the set of states for which an extra current is produced when reintroducing the interaction at the border is extremely small. Indeed, a nonzero current could only be created if a small energy change at the border, induced by the perturbation, could completely modify the island picture up to the center of . However, in most cases, the configuration of the islands is far less fragile: a very atypical configuration would be required for a single change at the border to propagate into the bulk of (too few atoms appear on figure 1 to see this neatly, but one can readily become convinced by adding a few sites). We thus see that the current is indeed very sparse.

This summarizes most of the conceptual points addressed in this article.

Glossary

Here is an overview of symbols that appear in different parts of the article (excluding the appendix). The middle column gives the page where the symbol appears for the first time.
Potentials (script fonts: );

4.4 Potential of the model Hamiltonian (without interaction).
4.31 Renormalized potential: nonresonant, resonant, diagonal.
4.39 Finite-range approximations to renormalized potentials.

Operations on potentials (Calligraphic fonts);

4.1 Cutoff in occupation number.
4.30 Projection onto resonant, nonresonant parts.
4.60 Total renormalization transformation.
4.37 Restriction to diagonal.
4.67 Restriction to volume .
4.38 Restriction to range .

Notions from the analysis of the resonant Hamiltonian, for configurations and components ;

5.1 The set of moves, in volume .
5.2 Moves with support in that are active from , .
5.27 Moves with support in that are not too far from to be active.
5.37 Slight modification of .
5.2 Partition of phase space in volume , into components .
6.1 Left, right regions depending on component .
6.19 Left, right resonant Hamiltonian.
4.64 Unitary restriction of transf.  to volume .
6.2 Balls (within ) centered at .

Important parameters;

4.3 resonance threshold, set to in (5.2).
4.1 occupation cutoff, set to in Thm. 7.1.
5.2, 5.1 Exponents of .

Norms, with and a state (density matrix);

4.38 Euclidean norm.
4.8 operator norm.
4.8 (non standard) weighted operator norm.
4.10 weighted potential norm.
6.25 Hilbert-Schmidt norm from scalar product .

4 Perturbative diagonalization of

In this section, we introduce the formalism of interaction potentials and we implement an iterative diagonalization scheme, acting on interaction potentials.

4.1 Energy cutoff

In our analysis, we find it convenient to introduce a high-energy cutoff, even though, in principle, the main reasoning of the paper applies all the more readily at higher energies. Given a number and an operator with finite range , we set

(4.1)

and, analogously, we define by replacing by . Note that in general, . The cutoff will be chosen, at the end of the analysis, to be , for some small
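To fix ideas, a cutoff of this type is commonly realized by two-sided conjugation with a spectral projection onto bounded occupation numbers. The following is only a hedged sketch in placeholder notation ($\mathcal{Z}_u$ for the cutoff map, $n_x$ for the occupation number at site $x$, $S$ for the support of $A$, $u$ for the cutoff value), not necessarily the paper's exact definition in (4.1):

```latex
% Sketch (placeholder notation): P_u projects onto states where every
% site in the support S of A carries occupation number at most u.
\mathcal{Z}_u(A) \;=\; P_u \, A \, P_u,
\qquad
P_u \;=\; \prod_{x \in S} \mathbb{1}\bigl\{ n_x \le u \bigr\}.
```

Conjugating with the projection on both sides keeps the cutoff operator bounded and preserves Hermiticity of Hermitian $A$.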

4.2 Interaction potentials

The Hamiltonian is strictly local, i.e. it is a sum of terms each acting on at most two lattice sites. When performing an iterative diagonalization, this will no longer be the case, and hence we first formulate a weaker notion of locality by introducing interaction potentials.

Definition 4.1.

An interaction potential is a map from finite, connected sets to bounded operators on . A Hamiltonian in finite volume associated to a potential is defined by (4.2) For simplicity, we henceforth assume that, for any interaction potential , if is not connected, and we omit the restriction to connected sets from sums like (4.2).
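The finite-volume Hamiltonian built from an interaction potential, as in (4.2), has the standard form sketched below (placeholder notation: $\mathcal{V}$ for the potential, $\Lambda$ for the finite volume; the paper's own symbols may differ):

```latex
% Sketch: sum the local terms V(S) over all connected subsets of the volume.
H_\Lambda(\mathcal{V})
\;=\; \sum_{\substack{S \subset \Lambda, \\ S \text{ connected}}} \mathcal{V}(S).
```

With the convention that $\mathcal{V}(S) = 0$ for non-connected $S$, the connectedness restriction can indeed be dropped from the sum.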

In the literature, one almost always uses the notation but we have chosen to avoid confusion with the Hamiltonian defined in (2.5). Obviously, the denomination ‘Hamiltonian’ is a misnomer when the operators are not Hermitian. For a potential , we define the cutoff potential

(4.3)

and analogously for . An important example of a potential is the potential specifying our model Hamiltonian itself, with an energy cutoff. It is defined by

(4.4)

We also define the potential of the free Hamiltonian

(4.5)

so that indeed

(4.6)

Note however that other choices are possible for ; different potentials can define the same Hamiltonian.
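A natural reading of the cutoff potential in (4.3) is that the cutoff of (4.1) acts term by term on the potential; as a sketch in placeholder notation ($\mathcal{Z}_u$ for the cutoff map, $\mathcal{V}$ for the potential):

```latex
% Sketch (placeholder notation): the cutoff is applied to each local term.
(\mathcal{Z}_u \mathcal{V})(S) \;=\; \mathcal{Z}_u\bigl( \mathcal{V}(S) \bigr),
\qquad S \subset \mathbb{Z}^d \ \text{finite}.
```

Under this reading, cutting off the potential and then summing over sets agrees with cutting off each local term of the associated Hamiltonian.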

Norms

Note that interaction potentials form a linear space under the addition . We introduce a family of suitable norms on interaction potentials, based on the following weighted operator norms: For an operator on , we define an associated operator on by

(4.7)

such that, in particular, where is the standard operator norm. Further, for , we set

(4.8)

For , we define simply and we note that

(4.9)

Note that these definitions are independent of provided . For , the -norm penalizes off-diagonal elements in the number basis. The corresponding class of norms on interaction potentials is

(4.10)

There is no compelling reason to consider , but we often do so for reasons of simplicity.
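Weighted potential norms of the kind introduced in (4.8)-(4.10) typically combine a growth weight in the size of the support with the (possibly weighted) operator norm of each term. The following is an illustrative sketch; the parameter $\kappa$ and the weight $e^{\kappa |S|}$ are assumptions for exposition, not the paper's exact choice:

```latex
% Sketch (placeholder notation): a uniform-in-site sum over all sets
% containing x, with an exponential penalty in the set size |S|.
\| \mathcal{V} \|_{\kappa}
\;=\; \sup_{x} \; \sum_{S \ni x} e^{\kappa |S|} \, \bigl\| \mathcal{V}(S) \bigr\|.
```

Finiteness of such a norm expresses that terms acting on large sets are exponentially suppressed, which is the quantitative meaning of the weak locality discussed above.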

4.3 Operations on interaction potentials

Given two interaction potentials we define a new potential

(4.11)

and we note that every term in the sum on the right hand side vanishes unless . In particular, if assign zero to every non-connected set , then so does . The motivation for this definition is of course that, for any volume

(4.12)
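The commutator potential of (4.11) is standardly defined by summing commutators over pairs of sets whose union is the given set; schematically, in placeholder notation:

```latex
% Sketch (placeholder notation): terms with disjoint supports
% S_1, S_2 vanish, since such operators commute.
[\mathcal{V}, \mathcal{W}](S)
\;=\; \sum_{S_1 \cup S_2 = S}
\bigl[\, \mathcal{V}(S_1), \, \mathcal{W}(S_2) \,\bigr].
```

Each term with $S_1 \cap S_2 = \emptyset$ vanishes because operators with disjoint supports commute, which is why the resulting potential again assigns zero to non-connected sets.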

Often, we prefer to use the notation

(4.13)

If one imagines that is an anti-Hermitian operator and hence that it generates a time evolution, then one might ask how this time-evolution affects a potential . To address such questions, we define (for the moment as a formal series)

(4.14)

Provided this series converges (in one of the norms ), we can conclude that

(4.15)
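The formal series of (4.14) and the conclusion (4.15) have the familiar Lie-series (adjoint-exponential) form; schematically, writing $\mathrm{ad}_{\mathcal{A}}(\mathcal{W}) = [\mathcal{A}, \mathcal{W}]$ in placeholder notation:

```latex
% Sketch (placeholder notation): the adjoint exponential acting on a
% potential, and the induced conjugation of the associated Hamiltonian.
e^{\mathrm{ad}_{\mathcal{A}}} \, \mathcal{W}
\;=\; \sum_{k \ge 0} \frac{1}{k!} \, \mathrm{ad}_{\mathcal{A}}^{\,k}(\mathcal{W}),
\qquad
H_\Lambda\bigl( e^{\mathrm{ad}_{\mathcal{A}}} \, \mathcal{W} \bigr)
\;=\; e^{H_\Lambda(\mathcal{A})} \; H_\Lambda(\mathcal{W}) \; e^{-H_\Lambda(\mathcal{A})},
```

the latter identity holding whenever the series converges in one of the norms above.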

In particular, for any time , we can consider the time-evolution

(4.16)

The intuition that is still a bona fide interaction potential, though with a range growing with , is captured by the so-called Lieb-Robinson bounds, which have received a lot of attention lately [23]. In some sense, we rederive such bounds in the following lemma (in particular ), which helps us handle multiple commutators of potentials. We do not require Hermiticity, but we are restricted to small potentials, corresponding to small times in the setup above.

Lemma 4.1.

Let and , let be interaction potentials and let be bounded operators. In all inequalities below, both sides can be infinite.

  1. (4.17)
  2. (4.18)
  3. If , then, for any bounded sequence

    (4.19)

    In particular, by choosing , the potential on the left hand side equals .

Proof.

Point is trivial. To address points , we introduce some more structure. Let us first define, for a function on finite subsets of , the norm on potentials

(4.20)

The following class of functions will be of relevance:

(4.21)

We establish

Lemma 4.2.

For any and ,

(4.22)
Proof.
(4.23)

To deal with the first and the second term, we dominate, respectively,

(4.24)
(4.25)

and . The claim follows. ∎

In the same spirit, we now estimate, for ,