
# Synthetic lattices, flat bands and localization in Rydberg quantum simulators

## Abstract

The most recent manifestation of cold Rydberg atom quantum simulators that employs tailored optical tweezer arrays enables the study of many-body dynamics under so-called facilitation conditions. We show how the facilitation mechanism yields a Hilbert space structure in which the many-body states organize into synthetic lattices, which feature in general one or several flat bands and may support immobile localized states. We focus our discussion on the case of a ladder lattice geometry for which we analyze in particular the influence of disorder generated by the uncertainty of the atomic positions. The localization properties of this system are characterized through two localization lengths which are found to display anomalous scaling behavior at certain energies. Moreover, we discuss the experimental preparation of an immobile localized state, and analyze disorder-induced propagation effects.


Over the past few decades, advances in the manipulation of cold and ultra-cold atomic gases have rendered them a versatile quantum simulation platform Bloch et al. (2008, 2012). Indeed, several paradigmatic many-body models have been studied experimentally, including the Luttinger liquid Hofferberth et al. (2008), the Tonks-Girardeau gas Kinoshita et al. (2004) as well as Bose-Hubbard Greiner et al. (2002a, 2003) and Fermi-Hubbard Hamiltonians Köhl et al. (2005), permitting the direct observation of several predicted phenomena, such as quantum revivals Greiner et al. (2002b), Lieb-Robinson bounds Cheneau et al. (2012), and topological phase transitions Hadzibabic et al. (2006).

Among many different physical systems apt to act as quantum simulators, ensembles of Rydberg atoms Saffman et al. (2010); Löw et al. (2012); Gallagher (1994) stand out for their strong interactions, which are now known to give rise to an intricate phenomenology, including devil’s staircases Weimer and Büchler (2010); Levi et al. (2016); Lan et al. (2015), aggregate formation and melting Schempp et al. (2014); Lan et al. (2016), Rydberg crystals Schauß et al. (2015), optical bistability Carr et al. (2013); Šibalić et al. (2016) and phase transitions or universal scaling Löw et al. (2009); Marcuzzi et al. (2014); Gutiérrez et al. (2015). These systems are currently employed for a variety of tasks, such as quantum information processing Jaksch et al. (2000); Weimer et al. (2010); Saffman (2016) and the simulation of quantum spin systems Labuhn et al. (2016); Schauß et al. (2015). Several among these instances employ the so-called facilitation (or anti-blockade) mechanism (see e.g., Refs. Ates et al. (2007); Amthor et al. (2010); Gärttner et al. (2013); Schönleber et al. (2014); Lesanovsky and Garrahan (2014); Urvoy et al. (2015); Valado et al. (2016)) to actuate a form of quantum transport.

In quantum systems, it is well-established that transport can be heavily affected by the presence of quenched disorder, a phenomenon known as Anderson localization Anderson (1958). In the presence of randomly-distributed impurities in a metal, for example, different paths taken by an electron can interfere destructively, leading to localization. In one and two dimensions, this effect is so relevant that for arbitrarily small disorder all wavefunctions are localized and transport is effectively impossible Mott and Twose (1961); Ishii (1973). Since their first prediction, these effects have been experimentally observed in a range of systems, spanning electron gases Cutler and Mott (1969), cold atoms Billy et al. (2008); Roati et al. (2008); Semeghini et al. (2015), thin films Liao et al. (2015) and periodically-driven nitrogen molecules Bitter and Milner (2016).

Apart from the case of quenched disorder, localized states can also arise in tight-binding models from particular lattice geometries. In these cases, destructive interference leads to the emergence of flat bands. Models with flat bands typically allow the construction of localized eigenstates, and have been experimentally realized with cold atoms Shen et al. (2010), photonic lattices Mukherjee et al. (2015), and synthetic solid-state structures Slot et al. (2017); Drost et al. (2017). When disorder is introduced in such systems, these pre-existing localized states couple to the dispersive, system-spanning ones and start acting like scatterers, inducing a richer phenomenology, such as localization enhancement Leykam et al. (2017), Anderson transitions in lower-dimensional systems Bodyfelt et al. (2014), and disorder-induced delocalization Goda et al. (2006).

In this paper we demonstrate that Rydberg lattice quantum simulators Schauß et al. (2015); Labuhn et al. (2016); Bernien et al. (2017) permit the exploration of disorder phenomena in the presence of flat bands. We show that under facilitation conditions – when the system parameters are set such that Rydberg states can only be excited next to an already existing excitation – the Hilbert space acquires a regular lattice structure featuring flat bands. In this picture, the uncertainty of atomic positions translates into a disordered potential on the newly created synthetic lattice. Scenarios similar to these were previously theoretically analyzed in Leykam et al. (2017); Bodyfelt et al. (2014). Here we show that they emerge naturally in Rydberg quantum simulators employing optical tweezer arrays Labuhn et al. (2016); Jau et al. (2016); Bernien et al. (2017). We illustrate our findings for the case of a so-called “Lieb ladder”: we analyze the scaling of the localization length and discuss the spreading dynamics of a local flat-band eigenstate under the action of different disorder strengths.

Facilitation, Hilbert space structure and flat bands— We start by considering a regular lattice of optical tweezers, each loaded with a single Rydberg atom, and with nearest-neighbor distance $R_0$. A laser is shone with a frequency detuned by $\Delta$ with respect to the atomic transition between the electronic ground state $|g\rangle$ and a Rydberg level $|r\rangle$. We work here in natural units $\hbar = 1$. Atoms in the Rydberg state interact, at distance $d$, via an algebraically-decaying potential $V(d) = C_\alpha/d^\alpha$, with $\alpha = 3$ ($\alpha = 6$) for dipole-dipole (van-der-Waals) interactions (without loss of generality we choose $C_\alpha > 0$). Within the rotating wave approximation the Hamiltonian of this system reads

$$\hat{H} = \Omega\sum_{k=1}^{N}\hat{\sigma}^{(k)}_x + \Delta\sum_{k=1}^{N}\hat{n}_k + \frac{1}{2}\sum_{k=1}^{N}\sum_{m\neq k} V(d_{km})\,\hat{n}_m\hat{n}_k, \tag{1}$$

where $\Omega$ is the laser Rabi frequency, $k$ and $m$ are lattice indices, $d_{km}$ denotes the distance between atoms in sites $k$ and $m$, and $\hat{n}_k = |r\rangle_k\langle r|$. The facilitation condition is obtained by setting $\Delta = -V(R_0)$, so that an isolated excited atom makes the transitions of its neighbors resonant with the laser. In the following, we consider $|\Delta| \gg \Omega$, so that non-facilitated atoms are sufficiently off-resonant to neglect their excitation. Furthermore, we require $V(2R_0) \gg \Omega$, which still ensures that an isolated excitation can facilitate the production of another on a neighboring site, but suppresses the creation of additional excitations in the neighborhood. For example, in one dimension $V(2R_0) = 2^{-\alpha}V(R_0)$. In the following, we neglect these strongly suppressed transitions, effectively splitting the Hilbert space into subspaces separated by energy scales of order $V(2R_0)$ or larger. Each subspace comprises a set of quasi-resonant states separated by scales of order $\Omega$ (see Ref. Marcuzzi et al. (2017) for more details on this structure). Intuitively, this means that a single excitation can at most produce one more in the neighborhood, after which either the former facilitates the de-excitation of the latter, or vice versa.

Among these subspaces, the simplest non-trivial one consists of all states with either a single excitation or a single pair of excitations on neighboring sites Mattioli et al. (2015); Marcuzzi et al. (2017). Hence, as sketched in Fig. 1 for a few planar examples, a lattice structure emerges in the Hilbert space which closely resembles the real-space geometry of the tweezer arrays. These synthetic lattices are constructed via the following rules:

(i) in the original lattice structure, draw the links joining nearest neighbors; (ii) identify each site with the state having a single excitation on that site. This exhausts all “one-excitation” states in the subspace; (iii) each “pair” state can be straightforwardly associated to the link joining the two excited atoms; hence, place an additional site at the midpoint of each link and associate it with the corresponding “pair” state. The links in this new-found structure now effectively represent pairs of states connected by the Hamiltonian, which can therefore be seen as a tight-binding model on a generalized synthetic lattice. In the case of a square lattice, the new structure (see Fig. 1) is the Lieb lattice, which is known to feature one flat and two dispersive bands which meet with a linear dispersion at the edges of the first Brillouin zone. However, this construction is general and can be extended to any kind of regular lattice. Most of these structures will support flat bands as well: it can be shown SM () that, calling $n_s$ ($n_p$) the number of one-excitation (pair) states in a unit cell, the number of flat bands must be $n_p - n_s$. For the examples of Fig. 1, the square, triangular and honeycomb lattices have $n_p - n_s = 1$, $2$ and $1$, respectively. These flat bands constitute extensively-degenerate eigenspaces of the Hamiltonian; as such, it is often possible to recombine the usual (plane-wave-like) Bloch solutions to form a set of localized (or immobile) eigenstates.
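As an illustration of rules (i)–(iii), the following minimal sketch (our own construction, not code from the paper) builds the synthetic lattice for an open $n \times n$ square array: one-excitation states sit on the original sites, pair states on the midpoints of the nearest-neighbor links, and the tight-binding Hamiltonian couples each pair state to its two constituent sites (hopping amplitude set to 1, i.e. measured in units of $\Omega$):

```python
import numpy as np

def synthetic_lattice(n):
    """Synthetic (Lieb-like) lattice for an open n x n square tweezer array:
    one-excitation states on the original sites, pair states on the links."""
    sites = [(x, y) for x in range(n) for y in range(n)]   # one-excitation states
    links = []                                             # pair states
    for (x, y) in sites:
        if x + 1 < n:
            links.append(((x, y), (x + 1, y)))
        if y + 1 < n:
            links.append(((x, y), (x, y + 1)))
    idx = {s: i for i, s in enumerate(sites)}
    N = len(sites) + len(links)
    H = np.zeros((N, N))
    # each pair state is coupled (with amplitude Omega = 1) to its two sites
    for a, (s1, s2) in enumerate(links):
        p = len(sites) + a
        for s in (s1, s2):
            H[idx[s], p] = H[p, idx[s]] = 1.0
    return sites, links, H

sites, links, H = synthetic_lattice(4)
print(len(sites), len(links))   # 16 one-excitation states, 24 pair states
```

For a large array the densities per unit cell approach $n_s = 1$ and $n_p = 2$, consistent with the single flat band of the Lieb lattice.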

Disorder— Disorder enters the picture through the uncertainty in the atomic positions. Even small displacements from the center of the traps can significantly shift the atomic transitions off resonance from the laser frequency, thereby hindering the facilitation mechanism Marcuzzi et al. (2017). In fact, the interaction potential seen by an atom at a distance $d$ from an excitation will be $V(d) = V(R_0) + \delta V$. At small disorder ($\sigma \ll R_0$) the energy shifts $\delta V$ can be approximated as detailed in SM (). These random variables only affect pair states, creating a disordered potential landscape over the pair (red) sites in Fig. 1.

In order to characterize the disorder, we denote by $\omega$ the optical tweezer trapping frequency (assumed hereafter to be isotropic in space), by $m$ the atomic mass and by $T$ the temperature. The probability distribution of a trapped atom can then be approximately described as a Gaussian of width $\sigma = \sqrt{k_B T/(m\omega^2)}$ around the trap center. We require now that (I) $\hbar\omega \ll k_B T$: this implies that one can use this semiclassical estimate for $\sigma$ and moreover that the thermal de Broglie wavelength of the atom is much smaller than the distribution width. In other words, the atom can be approximately considered localized somewhere within the trap according to a classical probability distribution. (II) $t_{\rm exp} \ll 2\pi/\omega$, with $t_{\rm exp}$ the duration of an experiment: this ensures that the atoms will not appreciably move from their positions in this time frame and thus the disorder is quenched. (III) $\Omega \gg \omega$, or in other words the dynamics of the internal degrees of freedom is much faster than that of the kinetic ones, so that within an experiment one can probe the action of the disordered Hamiltonian on the system while keeping the specific realization of the disorder fixed. The properties of the probability distribution of energy shifts are discussed in SM (); here we just mention that the shifts over different pair sites are not independent, but correlated.

Disordered Lieb ladder— In the remainder of our discussion, we shall focus on a ladder configuration, i.e. a quasi-one-dimensional lattice formed by placing two linear chains parallel to each other at a lattice spacing $R_0$. For this example, the synthetic lattice (a “1D Lieb lattice”) in the Hilbert space is sketched in Fig. 2(a). The unit cell consists of five sites with $n_s = 2$ and $n_p = 3$, and the band structure features one zero-energy flat band and four dispersive bands [Fig. 2(d)].
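The five-band structure can be checked numerically. The sketch below (our own construction; the basis labels, gauge choice and unit coupling $\Omega = 1$ are assumptions) diagonalizes a Bloch Hamiltonian of the synthetic ladder and confirms a flat band pinned at $E = 0$:

```python
import numpy as np

def lieb_ladder_bloch(k, omega=1.0):
    """Bloch Hamiltonian of the synthetic '1D Lieb lattice' (Lieb ladder).
    Basis: [site A, site B, rung pair, top-link pair, bottom-link pair]."""
    H = np.zeros((5, 5), dtype=complex)
    t = omega * (1 + np.exp(-1j * k))   # site coupled to link pairs on both sides
    H[0, 2] = H[1, 2] = omega           # A, B <-> rung pair
    H[0, 3] = t                         # A    <-> top-link pairs
    H[1, 4] = t                         # B    <-> bottom-link pairs
    return H + H.conj().T

ks = np.linspace(-np.pi, np.pi, 101)
bands = np.array([np.linalg.eigvalsh(lieb_ladder_bloch(k)) for k in ks])
print(np.max(np.abs(bands[:, 2])))   # middle band: flat, pinned at E = 0
```

The flatness follows from the bipartite structure of the synthetic lattice: with 3 pair sites versus 2 one-excitation sites per cell, at least one zero mode exists at every $k$, in line with the $n_p - n_s$ counting.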

This Lieb ladder constitutes one of the simplest examples where flat bands produce a non-trivial interplay with the on-site disorder Leykam et al. (2017). In a Rydberg quantum simulator, however, the disorder only appears on pair states, i.e. all the blue sites of the synthetic lattice [Fig. 1(a)] are unaffected by it. To investigate the effect of this unusual disorder scenario we study in the following the scaling behavior of the localization length $\xi$ for small disorder strengths. This quantity encodes the localization properties of the energy eigenstates, whose amplitude is typically peaked in a specific area of the lattice and decays exponentially as $e^{-r/\xi}$ at large distances $r$.

For a ladder like the one under study, two different values of $\xi$ can be extracted at any given energy, which we denote by $\xi_1$ and $\xi_2$ and order according to $\xi_1 \geq \xi_2$. To elucidate the reason, one can perform an appropriate change of basis (“detangling transformation” Flach et al. (2014); Leykam et al. (2017)) through which the Lieb ladder is mapped onto two uncoupled one-dimensional lattices [see Fig. 2(b)], a chain (in green, supporting the two innermost dispersive bands) and a stub lattice (in orange, supporting the flat and two outermost dispersive bands) SM (). At small disorder, one can thus associate either localization length to one of the two detangled chains.

The localization lengths are found numerically via a transfer matrix formalism and are displayed in Fig. 3(a) as functions of the disorder strength and the energy.
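To illustrate the transfer-matrix idea in its simplest setting, the following sketch estimates the localization length of a plain one-dimensional Anderson chain (not the Lieb ladder itself; the model, uniform box disorder and parameters are our own illustrative choices). The inverse localization length is the smallest Lyapunov exponent of the product of transfer matrices:

```python
import numpy as np

def localization_length(E, W, n=100000, seed=0):
    """Localization length of a 1D Anderson chain via transfer matrices:
    psi_{n+1} = (E - eps_n) psi_n - psi_{n-1},  eps_n uniform in [-W/2, W/2].
    Returns xi = 1/gamma in units of the lattice spacing."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0          # (psi_n, psi_{n-1})
    log_norm = 0.0
    for eps in rng.uniform(-W / 2, W / 2, n):
        a, b = (E - eps) * a - b, a
        norm = np.hypot(a, b)     # renormalize to avoid overflow
        log_norm += np.log(norm)
        a, b = a / norm, b / norm
    return n / log_norm

# weak-disorder scaling at the band centre: xi grows as 1/W^2
print(localization_length(0.0, 1.0))
```

The same strategy, applied to the $5\times 5$ cell transfer matrices of the ladder with disorder only on the pair sites, yields the two Lyapunov exponents behind $\xi_1$ and $\xi_2$.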

In Fig. 3(b) we display log-log plots of the localization lengths at selected energies as functions of the disorder strength $W$, which illustrate algebraic scaling $\xi \sim W^{-\gamma}$ for sufficiently small $W$. Where possible, we connect our findings to those presented in Ref. Leykam et al. (2017), where the same geometry is studied with independent disorder on all sites. The usual scaling for Anderson localization corresponds to $\gamma = 0$ at energies outside a band (“out”), $\gamma = 2/3$ at a band edge (“edge”) and $\gamma = 2$ inside a band (“in”). The selected energies correspond to the cases (out/in), (edge/in), (in/in), (in/edge) and (edge/out), where the entries in the brackets refer to the two band structures depicted in Fig. 2(c,d): (orange/green). In Ref. Leykam et al. (2017) an “anomalous” scaling was found at two of these energies. This was attributed to the fact that disorder, in the detangled picture, is not merely on-site but couples the two chains. This in turn may produce resonances between states in the middle of a band and states at the edge of the other when the latter displays vanishing group velocity. Comparing these values with the ones obtained in our situation, we observe reasonable agreement at most of the selected energies, including the “edge” scalings. The anomalous “in” scaling seems instead to be “cured”, as we retrieve a result compatible with the usual Anderson one ($\gamma = 2$). As we show in SM (), this is likely due to the alternating structure of the disorder in the synthetic lattice, which in the detangled picture results in the absence of random couplings between sites [see Fig. 2(b)], present instead in Ref. Leykam et al. (2017).

We find, however, discrepancies at the remaining selected energy, where both localization lengths scale similarly and do not seem to match either of the expected values (edge) or (in, anomalous). An explanation for this behavior, which does not seem to be related simply to the alternating structure of the disorder SM (), is currently lacking and requires further investigation.

Localized flat band state dynamics—

Experimentally measuring the localization lengths studied above is challenging due to the required large system sizes and small disorder amplitudes. However, one can probe the influence of disorder by initializing the system in a specific state and tracking the subsequent dynamics by measuring the on-site excitation probabilities Schauß et al. (2015); Labuhn et al. (2016); Bernien et al. (2017). A particularly interesting choice of initial state is a localized eigenstate of the flat band; such a state does not propagate in the absence of disorder. We show in SM () that it can be chosen entirely localized around two adjacent rungs of the ladder [see Fig. 4(a)]. States of this form can be prepared experimentally via single site addressing SM ().

The time evolution of the excitation density is shown in Fig. 4(b). The effect of the disorder becomes apparent in the width SM () of the density packet, which quickly reaches a stationary value. It is interesting to observe that, as shown in Fig. 4(c), this stationary value shows a non-monotonic behavior as a function of the disorder strength. This can be understood as follows: at very small (but finite) disorder strength the initial state is almost a flat-band eigenstate and it therefore only minimally spreads (see e.g. Refs. Vicencio et al. (2015); Goda et al. (2006)). As the disorder is increased, this picture breaks down and the state hybridizes more and more strongly with other states, allowing transport over larger distances. At the same time, however, the localization lengths at the corresponding energies decrease. Hence, an interplay ensues: the spreading of the state increases with the disorder strength as long as the localization length remains larger than the width of the packet. Once the decrease in the localization scale catches up with the increase of the width, the behavior is dominated by localization and, as expected, the width decreases with increasing disorder strength.
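The qualitative picture can be reproduced with a small numerical sketch (our own construction: the couplings, the box-disorder model on pair sites only, and the compact flat-band state below are assumptions derived from the synthetic-lattice rules, not taken verbatim from the paper). The state with amplitudes $(+,-,-,+)/2$ on a rung pair, the two adjacent leg-link pairs, and the next rung pair is annihilated by the clean Hamiltonian; under disorder its width grows:

```python
import numpy as np

def lieb_ladder(L, W, omega=1.0, seed=1):
    """Real-space Lieb ladder, L unit cells of 5 sites (periodic boundaries);
    disorder of strength W acts on the pair (rung/link) sites only."""
    rng = np.random.default_rng(seed)
    A, B, R, T, Bt = (np.arange(L) * 5 + i for i in range(5))
    H = np.zeros((5 * L, 5 * L))
    for j in range(L):
        jp = (j + 1) % L
        for s, p in [(A[j], R[j]), (B[j], R[j]), (A[j], T[j]), (A[jp], T[j]),
                     (B[j], Bt[j]), (B[jp], Bt[j])]:
            H[s, p] = H[p, s] = omega
    for p in np.concatenate([R, T, Bt]):       # disorder on pair states only
        H[p, p] = rng.uniform(-W / 2, W / 2)
    return H, (A, B, R, T, Bt)

L, W = 60, 0.5
H, (A, B, R, T, Bt) = lieb_ladder(L, W)
psi = np.zeros(5 * L)
j = L // 2
psi[[R[j], T[j], Bt[j], R[j + 1]]] = [0.5, -0.5, -0.5, 0.5]  # compact flat-band state
assert np.allclose(lieb_ladder(L, 0.0)[0] @ psi, 0)          # exact zero mode at W = 0

E, V = np.linalg.eigh(H)
x = np.repeat(np.arange(L), 5)                # unit-cell index of each site

def width(t):
    """Width of the excitation-density packet at time t."""
    phi = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi))
    dens = np.abs(phi) ** 2
    m = dens @ x
    return np.sqrt(dens @ (x - m) ** 2)

print(width(0.0), width(15.0))   # disorder-induced spreading
```
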

Conclusions and Outlook— We have shown that Rydberg quantum simulators allow the exploration of localization phenomena in synthetic lattices with flat bands and unconventional types of disorder (correlated, alternating). The current study focuses on the Lieb ladder and on a particular excitation sector. Higher-dimensional lattices hosting more excitations are straightforwardly realizable in experiment. It is thus a future theoretical challenge to shed light on these intricate and unexplored scenarios.

Acknowledgments— The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement No. 335266 (ESCQUMA), the EPSRC Grant No. EP/M014266/1, and the H2020-FETPROACT-2014 Grant No. 640378 (RYSQ). I.L. gratefully acknowledges funding through the Royal Society Wolfson Research Merit Award.

Supplemental Material: Synthetic lattices, flat bands and localization in Rydberg quantum simulators

## Appendix A Approximate Gaussian distribution of the atoms

In order to show how the Gaussian distribution of the atomic positions arises, we consider here an atom of mass $m$ sitting in a one-dimensional optical trap of frequency $\omega$. The results will be straightforwardly generalizable to the three-dimensional case as the three Cartesian coordinates decouple and can be treated independently. We work in a regime of temperatures much lower than the trap depth, but larger than the trap frequency, i.e., $\hbar\omega \ll k_B T$, with $k_B T$ remaining far below the trap depth. The first assumption allows us to treat the trap as a harmonic potential, yielding a Hamiltonian

$$\hat{H}_{\mathrm{trap}} \approx \frac{\hat{p}^2}{2m} + \frac{m}{2}\,\omega^2\hat{x}^2, \tag{2}$$

where $\hat{x}$ and $\hat{p}$ are the quantum position and momentum operators, respectively. The thermal state of the system is described by the Gibbs form

$$\rho_{\mathrm{th}} = \frac{1}{Z}\, e^{-\beta\hat{H}_{\mathrm{trap}}}, \tag{3}$$

where $\beta = 1/(k_B T)$ and $Z$ is the partition function

$$Z = \mathrm{tr}\left\{e^{-\beta\hat{H}_{\mathrm{trap}}}\right\}. \tag{4}$$

Employing the standard mapping

$$\hat{p} = i\sqrt{\frac{\hbar m\omega}{2}}\left(\hat{b}^\dagger - \hat{b}\right), \qquad \hat{x} = \sqrt{\frac{\hbar}{2m\omega}}\left(\hat{b}^\dagger + \hat{b}\right) \tag{5}$$

in terms of bosonic creation ($\hat{b}^\dagger$) and annihilation ($\hat{b}$) operators, one readily obtains

$$H = \hbar\omega\left(\hat{b}^\dagger\hat{b} + \frac{1}{2}\right) \tag{6a}$$

and

$$Z = \sum_n e^{-\beta\hbar\omega(n+1/2)} = \frac{1}{2\sinh\left(\frac{\beta\hbar\omega}{2}\right)}. \tag{6b}$$

Calling $|x\rangle$ the position eigenvector, the probability density function of the atomic position is defined as

$$p_{\mathrm{pos}}(x) = \langle x|\hat{\rho}_{\mathrm{th}}|x\rangle. \tag{7}$$

Its analytical form can be extracted from the Feynman propagator for the harmonic oscillator $K(x,y,t)$, which, for a time interval $t$, reads (see, e.g., Holstein (1998); Barone et al. (2003) for detailed derivations)

$$K(x,y,t) = \sqrt{\frac{m\omega}{2\pi\hbar i\sin(\omega t)}}\;\exp\left\{\frac{im\omega}{2\hbar\sin(\omega t)}\left[(x^2+y^2)\cos(\omega t) - 2xy\right]\right\}. \tag{8}$$

Substituting $t = -i\beta\hbar$ and $y = x$ one finds

$$K(x,x,-i\beta\hbar) = \langle x|e^{-\beta H}|x\rangle = \sqrt{\frac{m\omega}{2\pi\hbar\sinh(\omega\beta\hbar)}}\;\exp\left\{-\frac{m\omega}{\hbar\sinh(\omega\beta\hbar)}\left(\cosh(\omega\beta\hbar)-1\right)x^2\right\}. \tag{9}$$

Dividing by the partition function (4) one finally finds the Gaussian

$$p_{\mathrm{pos}}(x) = \sqrt{\frac{m\omega\left(\cosh(\omega\beta\hbar)-1\right)}{\pi\hbar\sinh(\omega\beta\hbar)}}\;\exp\left\{-\frac{m\omega}{\hbar\sinh(\omega\beta\hbar)}\left(\cosh(\omega\beta\hbar)-1\right)x^2\right\}. \tag{10}$$

The variance can be read off directly and amounts to

$$\sigma^2 = \frac{\hbar\sinh(\omega\beta\hbar)}{2m\omega\left(\cosh(\omega\beta\hbar)-1\right)}. \tag{11}$$

Since we assumed $k_B T \gg \hbar\omega$, i.e., $\omega\beta\hbar \ll 1$, we can expand this expression to lowest order, which yields

$$\sigma^2 = \frac{1}{m\omega^2\beta} = \frac{k_B T}{m\omega^2}, \tag{12}$$

as reported in the main text.
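As a quick numerical sanity check of the expansion leading from Eq. (11) to Eq. (12), one can compare the two expressions for illustrative trap parameters (the mass, trap frequency and temperature below are our own choices, roughly a Rb-87 atom in a tight tweezer, and are not taken from the paper):

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23        # SI values
m, omega, T = 1.45e-25, 2 * np.pi * 50e3, 20e-6 # ~Rb-87, 50 kHz trap, 20 uK (illustrative)

beta = 1.0 / (kB * T)
x = omega * beta * hbar                          # small parameter hbar*omega/(kB*T)
sigma2_exact = hbar * np.sinh(x) / (2 * m * omega * (np.cosh(x) - 1))  # Eq. (11)
sigma2_classical = kB * T / (m * omega ** 2)                           # Eq. (12)
print(x, sigma2_exact / sigma2_classical)        # ratio -> 1 + x^2/12 for small x
```
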

The distribution (10) is straightforwardly generalized to three dimensions and to traps centered along a single linear chain at positions $(k-1)R_0$, with $k$ an integer:

$$p^{(k)}_{\mathrm{pos}}(\mathbf{r}) = \frac{1}{(2\pi)^{3/2}\sigma_1\sigma_2\sigma_3}\; e^{-\frac{r_1^2}{2\sigma_1^2}-\frac{r_2^2}{2\sigma_2^2}-\frac{\left(r_3-(k-1)R_0\right)^2}{2\sigma_3^2}}. \tag{13}$$

For clarity, we remark here that the indices in the expression above distinguish between Cartesian components only, e.g., $r_1$ and $r_2$ are the components of the same atom's position along two orthogonal directions. In the following, when necessary the trap index will always appear before the component one, e.g., $r_{k,3}$ is the third component of the $k$-th atom's position. For a ladder, a second set of position distributions would be added with the same Gaussian form, up to a shift of the trap centers in the transverse direction.

## Appendix B Distribution of the distances and interactions for a single chain.

Here we focus on a single one-dimensional chain, as most of the properties which affect the results in the main text are due to the presence of an extended longitudinal direction. Still, the considerations made for the marginal distributions for pairs of atoms directly apply to any regular lattice configuration as well. The distribution of the differences $\mathbf{d}_k = \mathbf{r}_{k+1} - \mathbf{r}_k$ can be found in the Supplemental Material of Ref. Marcuzzi et al. (2017) and, for isotropic traps, reads

$$p_{\mathrm{diff}}(\mathbf{d}_1,\dots,\mathbf{d}_L) = \frac{\sigma^{-3L}}{(2\pi)^{3L/2}(L+1)^{3/2}}\;\exp\left\{-\frac{1}{2\sigma^2}\sum_{k,q=1}^{L}\left(\mathbf{d}_k - \mathbf{R}_0\right)\cdot A_{kq}\left(\mathbf{d}_q - \mathbf{R}_0\right)\right\}, \tag{14}$$

where $\mathbf{R}_0 = (0,0,R_0)$ and $A$ is an $L\times L$ matrix. The correlations between different components can be worked out via its inverse da Fonseca and Petronilho (2001)

$$C = A^{-1} = \begin{pmatrix} 2 & -1 & 0 & 0 & \\ -1 & 2 & -1 & 0 & \\ 0 & -1 & 2 & -1 & \cdots \\ 0 & 0 & -1 & 2 & \\ & & \vdots & & \ddots \end{pmatrix}, \tag{15}$$

implying,

$$\langle d_{k,i}\, d_{q,j}\rangle - \langle d_{k,i}\rangle\langle d_{q,j}\rangle = \sigma^2\,\delta_{ij}\left(2\delta_{k,q} - \delta_{k,q+1} - \delta_{k,q-1}\right). \tag{16}$$

Subsequent distances are therefore (anti-)correlated, and these correlations pass onto any (non-trivial) function of the distances, and in particular onto the energy displacements $\delta V_k$.

As a consistency check, we remark that $C$ is an $L\times L$ matrix, whose determinant satisfies the recursion relation

$$\det C^{(L)} = 2\det C^{(L-1)} - \det C^{(L-2)} \tag{17}$$

with “seed” (or initial conditions) $\det C^{(1)} = 2$ and $\det C^{(2)} = 3$, which is solved by $\det C^{(L)} = L+1$. Consequently, the factor produced by the Gaussian integration over all variables exactly cancels the corresponding power of $\sqrt{L+1}$ appearing in the normalization factor, as expected.
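The recursion and its solution $\det C^{(L)} = L+1$ are easy to verify numerically for the tridiagonal matrix of Eq. (15):

```python
import numpy as np

def C(L):
    """Tridiagonal matrix of Eq. (15): 2 on the diagonal, -1 on the off-diagonals."""
    return 2 * np.eye(L) - np.eye(L, k=1) - np.eye(L, k=-1)

dets = [round(np.linalg.det(C(L))) for L in range(1, 9)]
print(dets)   # equals L + 1 for each L: [2, 3, 4, 5, 6, 7, 8, 9]
```
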

### b.1 Marginal distribution for a single pair of atoms

The $\mathbf{d}_k$s are identically distributed, so we can select any given one (and drop its index for brevity) and integrate over the other variables in equation (14). This yields

$$p_{\mathrm{diff}}(\mathbf{d}) = \frac{1}{(4\pi)^{3/2}\sigma^3}\; e^{-\frac{1}{4\sigma^2}\left[d_1^2+d_2^2+(d_3-R_0)^2\right]} = \frac{1}{(4\pi)^{3/2}\sigma^3}\; e^{-\frac{1}{4\sigma^2}\left[d^2+R_0^2-2d_3R_0\right]}, \tag{18}$$

where $d = |\mathbf{d}|$ denotes the distance between a pair of neighboring atoms. The distribution for this new variable can then be obtained via a solid angle integration and reads

$$p_{\mathrm{dist}}(d) = \frac{d^2}{4\sqrt{\pi}\,\sigma^3}\; e^{-\frac{1}{4\sigma^2}\left(d^2+R_0^2\right)} \int_0^\pi \mathrm{d}\theta\, \sin\theta\; e^{-\frac{dR_0\cos\theta}{2\sigma^2}} = \frac{d}{\sqrt{\pi}\,\sigma R_0}\; e^{-\frac{1}{4\sigma^2}\left(d^2+R_0^2\right)}\, \sinh\left(\frac{dR_0}{2\sigma^2}\right). \tag{19}$$
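Equation (19) can be checked against a direct Monte Carlo sampling of the distance between two Gaussian-distributed atoms (the values of $R_0$ and $\sigma$ below are illustrative choices of our own):

```python
import numpy as np

rng = np.random.default_rng(0)
R0, sigma, n = 1.0, 0.15, 400000
# positions of two atoms in neighbouring traps, each Gaussian of width sigma
r1 = rng.normal(0.0, sigma, (n, 3))
r2 = rng.normal([0.0, 0.0, R0], sigma, (n, 3))
d = np.linalg.norm(r2 - r1, axis=1)

def p_dist(d):
    """Distance distribution of Eq. (19)."""
    return (d / (np.sqrt(np.pi) * sigma * R0)
            * np.exp(-(d ** 2 + R0 ** 2) / (4 * sigma ** 2))
            * np.sinh(d * R0 / (2 * sigma ** 2)))

grid = np.linspace(0.5, 1.5, 201)
hist, edges = np.histogram(d, bins=grid, density=True)
mid = 0.5 * (edges[1:] + edges[:-1])
print(np.max(np.abs(hist - p_dist(mid))))   # small: histogram matches Eq. (19)
```
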

The distribution of an energy shift $\delta V$ is now just a change of variables away, according to $P(\delta V) = p_{\mathrm{dist}}(d(\delta V))\,|d'(\delta V)|$. For the sake of generality, we keep the exponent $\alpha$ generic in

$$d(\delta V) = \left(\frac{C_\alpha}{V_0 + \delta V}\right)^{\frac{1}{\alpha}}, \tag{20}$$

where $V_0 = V(R_0)$, which implies

$$d'(\delta V) = -\frac{1}{\alpha}\,\frac{C_\alpha^{1/\alpha}}{\left(V_0 + \delta V\right)^{1+\frac{1}{\alpha}}}. \tag{21}$$

Hence, the distribution of energy shifts for a pair is

$$P(\delta V|V_0,R_0,\sigma) = \frac{R_0}{\sigma\alpha\sqrt{\pi}\,V_0\left(1+\frac{\delta V}{V_0}\right)^{1+\frac{2}{\alpha}}}\;\exp\left\{-\frac{R_0^2}{4\sigma^2}\left[1+\left(1+\frac{\delta V}{V_0}\right)^{-\frac{2}{\alpha}}\right]\right\}\;\sinh\left[\frac{R_0^2}{2\sigma^2}\left(1+\frac{\delta V}{V_0}\right)^{-\frac{1}{\alpha}}\right]. \tag{22}$$

It is relatively simple to see that, if we define the dimensionless quantities $s = \sigma/R_0$ and $\delta v = \delta V/V_0$, we can simplify this expression further:

$$P(\delta v|s) = \frac{1}{\alpha\sqrt{\pi}\,s\left(1+\delta v\right)^{1+\frac{2}{\alpha}}}\; e^{-\frac{1}{4s^2}\left[1+(1+\delta v)^{-\frac{2}{\alpha}}\right]}\;\sinh\left[\frac{\left(1+\delta v\right)^{-\frac{1}{\alpha}}}{2s^2}\right]. \tag{23}$$

The probability distribution function in Eq. (23) is defined in the domain $\delta v \in (-1, +\infty)$; for $\delta v \to -1$, writing $\varepsilon = 1 + \delta v$, in the limit $\varepsilon \to 0^+$ it behaves as

$$P(\delta v|s) \propto \varepsilon^{-1-\frac{2}{\alpha}}\; e^{-\frac{1}{4s^2}\varepsilon^{-\frac{2}{\alpha}}}\;\sinh\left[\frac{\varepsilon^{-\frac{1}{\alpha}}}{2s^2}\right] \to 0, \tag{24}$$

as the (vanishing) exponential factor dominates. In the opposite limit $\delta v \to +\infty$, instead, the distribution behaves asymptotically as

$$P(\delta v|s) \approx \frac{1}{2\alpha\sqrt{\pi}\,s^3}\; e^{-\frac{1}{4s^2}}\;\delta v^{-1-\frac{3}{\alpha}}. \tag{25}$$

This shows that this distribution is fat-tailed. In particular, all the distribution moments $\langle \delta v^n\rangle$ with $n \geq 3/\alpha$ are not defined and, for both $\alpha = 3$ (dipole-dipole interactions) and $\alpha = 6$ (van der Waals), this includes all integer moments (e.g., the mean and variance). These fat tails are the consequence of the approximation of an atom’s position distribution as a Gaussian everywhere in space, i.e., including points much further away from the center of a trap than a few $\sigma$s. In other words, they appear to be an artifact of the description, rather than something occurring in a real experiment. The result of this approximation is to allow for an extremely small (but not vanishing) probability that two atoms come arbitrarily close, which, due to the algebraic scaling of the interactions, produces considerable energy shifts. Moments like the mean and variance are therefore dominated by the very rare events in which two atoms lie very close to each other. The rarity of such events is encoded in the exponential suppression in Eq. (25). In principle, these unphysical fat tails could affect our results, as it is known that, in the Anderson problem, the scaling of the localization length is modified when Cauchy-like distributions are chosen instead of more regular ones. However, as mentioned above, the fat tails in our case are strongly suppressed and one needs to assess how likely it is to actually probe them in a simulation or an experiment. For that purpose, let us first notice that the asymptotic behavior reported in (25) emerges when the argument of the $\sinh$ function in Eq. (23) is small, i.e., still assuming $s \ll 1$, for

$$\delta v \gg \left(2s^2\right)^{-\alpha}. \tag{26}$$

Let us calculate now the probability of generating an energy shift within the tails, i.e.,

$$P_s \equiv P\left(\delta v > (2s^2)^{-\alpha}\right) = \int_{(2s^2)^{-\alpha}}^{\infty} \mathrm{d}\delta v\; P(\delta v|s). \tag{27}$$

Employing now the asymptotic expression (25) we obtain

$$P_s = P\left(\delta v > (2s^2)^{-\alpha}\right) \approx \frac{4s^3}{3\sqrt{\pi}}\; e^{-\frac{1}{4s^2}}. \tag{28}$$

This result apparently does not depend on $\alpha$, but we need to remember that the derivation assumes $\delta v \gg 1$, and is therefore only consistent if $(2s^2)^{-\alpha} \gg 1$, i.e., $s \ll 1/\sqrt{2}$. For $\alpha = 3$ or $\alpha = 6$, though, this is satisfied already for rather large disorder amplitudes, which yield correspondingly small tail probabilities $P_s$. Due to the exponential factor, these probabilities decrease very fast with decreasing $s$; in the range spanned in the plots reported in the main text, $P_s$ becomes so small that probing the tails is clearly impossible in any reasonable experiment or numerical procedure. Hence, we can safely assume the unphysical fat tails to be completely irrelevant in the determination of our numerical results in the regime considered.
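The consistency between the asymptotic tail weight (28) and a direct numerical integration of Eq. (23) can be checked as follows (the value of $s$ is an illustrative choice of our own; the integration range is truncated, so only rough agreement is expected):

```python
import numpy as np

def P(dv, s, alpha=3):
    """Marginal energy-shift distribution of Eq. (23)."""
    u = (1.0 + dv) ** (-1.0 / alpha)          # u = (1 + dv)^(-1/alpha)
    return (u ** (alpha + 2) / (alpha * np.sqrt(np.pi) * s)
            * np.exp(-(1 + u ** 2) / (4 * s ** 2))
            * np.sinh(u / (2 * s ** 2)))

s, alpha = 0.25, 3
cut = (2 * s ** 2) ** (-alpha)                # tail threshold of Eq. (26)
dv = np.linspace(cut, 100 * cut, 200001)
y = P(dv, s, alpha)
Ps_numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(dv))   # Eq. (27), truncated
Ps_asympt = 4 * s ** 3 / (3 * np.sqrt(np.pi)) * np.exp(-1 / (4 * s ** 2))  # Eq. (28)
print(Ps_numeric, Ps_asympt)
```
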

### b.2 Distribution bias towards positive energy shifts

As can be observed from Fig. 3 in the main text, there appears to be a bias of the distribution towards positive energy shifts for small disorder, which makes the features present in the localization length plots bend towards higher energies. Considering the marginal discussed in the previous section, this seems counter-intuitive, since the distribution (23) seems to shift, for increasing $s$, towards negative values instead, eventually becoming peaked very close to $\delta v = -1$. It is also possible to provide an intuitive explanation for this, since, as two neighboring traps become wider and wider, it becomes much more likely for two atoms to lie at a larger distance than the one separating the two centers. The limiting case $\sigma \gg R_0$ is indicative, as one can imagine that the atoms’ positions can be picked uniformly in space on length scales $\sim \sigma$.

Although we do not hold at the moment a convincing explanation of why the shift seems to point in the opposite direction, we believe it is due to the correlated nature of the full distribution (which indeed would be consistent with not seeing the correct behavior in a marginal) and we provide a physical argument which partially supports this conjecture. For simplicity, we shall work here with a chain, rather than a ladder. We hypothesize that (A) the number of atoms $L+1$ is large, i.e., $L \gg 1$ (this choice is such that the number of distances is $L$); (B) the disorder is extremely small, $\sigma \ll R_0$. A useful simplification from (B) is that we can effectively reduce the dimensionality of the problem and only consider the position displacement along the chain direction: in fact, if we write $\mathbf{d} = (\delta x, \delta y, R_0 + \delta z)$, then

$$d = |\mathbf{d}| = R_0 + \delta z + O(\delta x^2, \delta y^2, \delta z^2) \approx R_0 + \delta z. \tag{29}$$

Hence, we can extract the probability of the distances directly from Eq. (14):

$$p_{\mathrm{dist}}(d_1,\dots,d_L) = \frac{\sigma^{-L}}{\sqrt{L+1}\left(\sqrt{2\pi}\right)^L}\; e^{-\frac{1}{2\sigma^2}\sum_{k,q}(d_k - R_0)\,A_{kq}\,(d_q - R_0)}, \tag{30}$$

with the same matrix $A$ as in Eq. (14). Clearly, the marginal for any given variable is also Gaussian and its mean and variance can be straightforwardly extracted from the expression above:

$$\langle d_k\rangle = R_0 \quad\text{and}\quad \sqrt{\langle d_k^2\rangle - R_0^2} = \sqrt{\sigma^2\left(A^{-1}\right)_{kk}} = \sqrt{2}\,\sigma. \tag{31}$$

These variables are therefore identically distributed and, due to the boundedness of the covariance (see matrix $C$ in Eq. (15)), satisfy a generalized weak law of large numbers, as we demonstrate below for this very special case: let us define $D = \frac{1}{L}\sum_k d_k$ and consider the probability of a fluctuation $\varepsilon$ around the mean value. By Chebyshev’s inequality,

$$P\left(|D - R_0| > \varepsilon\right) \leq \frac{\mathrm{Var}\,D}{\varepsilon^2}, \tag{32}$$

where

$$\mathrm{Var}\,D = \frac{1}{L^2}\left\langle\left(\sum_k (d_k - R_0)\right)^2\right\rangle = \frac{1}{L^2}\sum_{k,q}\left\langle(d_k - R_0)(d_q - R_0)\right\rangle = \frac{\sigma^2}{L^2}\sum_{k,q} C_{k,q}, \tag{33}$$

where $C_{k,q}$ are the elements of the matrix $C$ in Eq. (15). This sum is not difficult to calculate, since each row but the first and the last totals $0$, whereas the first and last contribute $1$ each, implying $\mathrm{Var}\,D = 2\sigma^2/L^2$. As a consequence, by choosing a sufficiently large $L$ the r.h.s. in Eq. (32) can be made arbitrarily small or, more precisely, for any $\varepsilon, \delta > 0$ there is an $L$ large enough that

$$P\left(|D - R_0| > \varepsilon\right) \leq \delta, \tag{34}$$

and therefore $D \to R_0$ in probability. Note that, had we been dealing with independent distances, we would have retrieved a result where $\mathrm{Var}\,D \sim 1/L$ instead of $1/L^2$. On a less formal level, this can be understood as follows: consider that, in this effective one-dimensional picture, the sum of all distances corresponds to the distance between the first and last atoms. When generating the positions independently, this variable will not be affected by the random nature of the positions of all the intermediate ones; instead, if one were to generate the distances as independent variables, these would effectuate a random walk (with drift $R_0$ per step) and thus the effective uncertainty in the position of the last atom, assuming knowledge of the first one, would be of order $\sqrt{L}\,\sigma$ and increase with the length of the chain.
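The two scalings, $\mathrm{Var}\,D \sim 1/L^2$ for distances derived from independent positions versus $\sim 1/L$ for independently generated distances, are simple to confirm by sampling (all parameters below are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
R0, sigma, trials = 1.0, 0.05, 4000

def var_D(L, independent):
    """Sample variance of D = (1/L) sum_k d_k for a 1D chain of L+1 traps."""
    if independent:
        # i.i.d. distances with the marginal std sqrt(2)*sigma of Eq. (31)
        d = R0 + np.sqrt(2) * sigma * rng.normal(size=(trials, L))
    else:
        # distances from independently placed atoms: anticorrelated, Eq. (16)
        z = np.arange(L + 1) * R0 + sigma * rng.normal(size=(trials, L + 1))
        d = np.diff(z, axis=1)
    return np.var(d.mean(axis=1))

for L in (10, 40):
    print(L, var_D(L, False), var_D(L, True))
# correlated case: Var D = 2 sigma^2 / L^2 ; independent case: 2 sigma^2 / L
```
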

Now, consider that, having chosen repulsive interactions ($C_\alpha > 0$), the interaction potential is a convex function of the distance. Hence, we can write down Jensen’s inequality as

$$\frac{1}{L}\sum_k V(d_k) \geq V\!\left(\frac{1}{L}\sum_k d_k\right). \tag{35}$$

By the weak law of large numbers, for large $L$ we can effectively replace the r.h.s. of the inequality above with $V(R_0)$, which leaves us with the approximate statement

$$\sum_k V(d_k) \gtrsim L\,V(R_0), \tag{36}$$

i.e.,

$$\sum_k \left(V(d_k) - V(R_0)\right) \gtrsim 0, \tag{37}$$

i.e.,

$$\sum_k \delta V_k \gtrsim 0. \tag{38}$$

Albeit not a rigorous proof, this argument provides a clear indication that, for sufficiently large system sizes, the correlations among the variables will make positive biases in the energy shifts preferable to negative ones, in agreement with the qualitative features observed in the main text. Accepting this claim, there must be at least a point where this bias is strictly positive, and it then follows that there exists a right neighborhood of $\sigma = 0$ in which the bias increases with $\sigma$.

## Appendix C Hilbert space reductions and restricted Hamiltonians

The Hamiltonian introduced in the main text reads

 ^H=ΩN∑k^σ(k)x^H1+ΔN∑k^nk+N∑k=1m≠k12V(dkm)^nm^nk^H0, (39)

where $d_{km}$ denotes the distance between the $k$-th and $m$-th atoms. In order to exploit the large energy separations present in the system, we switch to the interaction picture

$$\hat{H}_I(t) = e^{i\hat{H}_0 t}\,\hat{H}_1\, e^{-i\hat{H}_0 t} = \Omega\sum_k e^{i\hat{H}_0 t}\,\hat{\sigma}^{(k)}_x\, e^{-i\hat{H}_0 t}. \tag{40}$$

Recalling that $[\hat{n}_m, \hat{\sigma}^{(k)}_x] = 0$ for every $m \neq k$ and that $\hat{\sigma}^{(k)}_x f(\hat{n}_k) = f(1-\hat{n}_k)\,\hat{\sigma}^{(k)}_x$ for any function $f$, we can simplify the $k$-th addend in Eq. (40)

$$e^{i\hat{H}_0 t}\,\hat{\sigma}^{(k)}_x\, e^{-i\hat{H}_0 t} = e^{it\hat{n}_k\left(\Delta+\sum_{m\neq k}V(d_{km})\hat{n}_m\right)}\,\hat{\sigma}^{(k)}_x\, e^{-it\hat{n}_k\left(\Delta+\sum_{m\neq k}V(d_{km})\hat{n}_m\right)} = e^{it(2\hat{n}_k-1)\left(\Delta+\sum_{m\neq k}V(d_{km})\hat{n}_m\right)}\,\hat{\sigma}^{(k)}_x, \tag{41}$$

where in the first equality we singled out in the exponentials all the terms which depend upon $\hat{n}_k$; all the remaining ones cancel out. The Hamiltonian can then be written as

$$\hat{H}_I(t) = \Omega\sum_k e^{it(2\hat{n}_k-1)\left(\Delta+\sum_{m\neq k}V(d_{km})\hat{n}_m\right)}\,\hat{\sigma}^{(k)}_x. \tag{42}$$

We apply a rotating-wave approximation to discard all terms which oscillate fast in time. This implies that the oscillation frequency should be $\gg \Omega$ for a term to be neglected. Note that the frequency is however operator-valued:

$$\omega = (2\hat{n}_k - 1)\left(\Delta + \sum_{m\neq k} V(d_{km})\,\hat{n}_m\right). \tag{43}$$

Since the prefactor $(2\hat{n}_k - 1)$ has eigenvalues $\pm 1$, it is the second factor which is decisive for the selection. We introduce now for every site $k$ a projector $\hat{P}_k$ over all states where there is a single excitation among the neighbors of $k$ and no additional one within a radius $2R_0$. Its specific structure clearly depends on the structure of the lattice, but if we denote by $F_k$ the set of nearest-neighboring sites of $k$ and by $S_k$ the set of sites within a distance $2R_0$ from $k$ which are neither site $k$ itself nor one of the sites in $F_k$, then we can give an implicit definition according to

 \hat{P}_k = \sum_{q\in F_k}\hat{n}_q\prod_{q'\in F_k,\,q'\neq q}(1-\hat{n}_{q'})\prod_{q''\in S_k}(1-\hat{n}_{q''}). (44)

Checking that the expression above satisfies $\hat{P}_k^2 = \hat{P}_k$ is straightforward if one recalls that $\hat{n}_q^2 = \hat{n}_q$ and $\hat{n}_q(1-\hat{n}_q) = 0$. The relevance of the projector is that it precisely identifies the constraints – identified in the main text – under which a spin (or atom) is able to flip (or be excited/de-excited). Slightly more formally,

 \left(\Delta+\sum_{m\neq k}V(d_{km})\,\hat{n}_m\right)\hat{P}_k \approx \left(\Delta+V(R_0)\right)\hat{P}_k = 0, (45)

where we have neglected all contributions from excitations beyond a distance of $2R_0$. Furthermore, note that according to definition (44) $\hat{P}_k$ acts trivially on site $k$ and thus commutes with all local operators which instead exclusively act on that site; in particular, $[\hat{P}_k,\hat{\sigma}^{(k)}_x]=0$. Defining for brevity the projector onto the orthogonal subspace $\hat{Q}_k = \mathbb{1}-\hat{P}_k$ (so that $\hat{P}_k+\hat{Q}_k=\mathbb{1}$ and $\hat{P}_k\hat{Q}_k=0$), we thus have

 \hat{\sigma}^{(k)}_x = (\hat{P}_k+\hat{Q}_k)\,\hat{\sigma}^{(k)}_x\,(\hat{P}_k+\hat{Q}_k) = \hat{P}_k\hat{\sigma}^{(k)}_x\hat{P}_k + \underbrace{\hat{P}_k\hat{\sigma}^{(k)}_x\hat{Q}_k}_{=0} + \underbrace{\hat{Q}_k\hat{\sigma}^{(k)}_x\hat{P}_k}_{=0} + \hat{Q}_k\hat{\sigma}^{(k)}_x\hat{Q}_k = \hat{P}_k\hat{\sigma}^{(k)}_x + \hat{Q}_k\hat{\sigma}^{(k)}_x. (46)

Hence, we can separate the interaction Hamiltonian into two contributions:

 \hat{H}_I(t) \approx \Omega\sum_k\left[\hat{P}_k\hat{\sigma}^{(k)}_x + e^{it(2\hat{n}_k-1)\left(\Delta+\sum_{m\neq k}V(d_{km})\hat{n}_m\right)}\,\hat{Q}_k\hat{\sigma}^{(k)}_x\right]. (47)

The space of configurations onto which $\hat{Q}_k$ has support can be further split into three classes:

• (A) States where site $k$ has two or more excited nearest neighbors;

• (B) States where site $k$ has only one excited neighbor, but there is at least another excitation within a radius $2R_0$;

• (C) States where no neighbors of $k$ are excited.

In case (A) the interaction potential on site $k$ is $\geq 2V(R_0)$; accounting for the facilitation condition $\Delta = -V(R_0)$ we find $|\omega| \geq V(R_0) \gg \Omega$; these terms are thereby oscillating very fast and can be discarded. Terms of type (B) are facilitated by the single neighboring excitation, but the presence of an additional one within a distance $2R_0$ implies that

 \Delta+\sum_{m\neq k}V(d_{km})\,\hat{n}_m \geq V(2R_0) (48)

and therefore $|\omega| \geq V(2R_0) \gg \Omega$, which allows us to neglect all type-(B) contributions as well. Terms belonging to class (C) are instead more delicate, since an appropriate combination of the interactions with many excitations at different distances could approximately cancel out the detuning $\Delta$. For instance, for dipole-dipole interactions ($V(d)\propto d^{-3}$) the potential obeys $V(\lambda d) = \lambda^{-3}V(d)$; considering a honeycomb lattice with five excited next-nearest neighbors at distance $R_1 = \sqrt{3}\,R_0$ and a single excited next-next-next-next-nearest (or fourth-nearest for brevity) neighbor at distance $R_4 = 3R_0$ one finds

 \Delta+\sum_{m\notin F_k}V(d_{km}) \to -V(R_0)+5V(R_1)+V(R_4) = V(R_0)\left(-1+\frac{5}{3\sqrt{3}}+\frac{1}{3^3}\right) \approx -0.00071\,V(R_0). (49)
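The near-cancellation in Eq. (49) is a simple numerical statement about the dipole-dipole scaling, which can be checked directly (amplitudes in units of $V(R_0)$):

```python
# Dipole-dipole scaling V(d) ~ d^{-3} on a honeycomb lattice:
# R1 = sqrt(3) R0  =>  V(R1)/V(R0) = 3**(-3/2)
# R4 = 3 R0        =>  V(R4)/V(R0) = 1/27
# Detuning at the facilitation condition: Delta = -V(R0).
shift = -1.0 + 5.0 / 3**1.5 + 1.0 / 27.0
print(f"{shift:.5f}")   # near-cancellation of the detuning, ~ -0.00071
assert abs(shift + 0.00071) < 1e-5
```

The residual shift is roughly three orders of magnitude below $V(R_0)$, which is why such configurations could in principle spoil the rotating-wave argument were they not confined to high-energy subspaces.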

However, configurations such as the one described above always require a large local density of excitations, and hence can only affect Hilbert subspaces at higher energies than the ones considered in the main text, separated at least by some factors of $V(R_1)$. As long as we consider the low-energy Hilbert subspaces, it is thus fine to neglect terms of type (C) as well. Overall, in the subspaces we are interested in we can approximate

 \hat{H}_I(t) \approx \Omega\sum_k\hat{P}_k\hat{\sigma}^{(k)}_x. (50)

Going back to the original Schrödinger representation is now straightforward and yields

 \hat{H} \approx \Omega\sum_k\hat{P}_k\hat{\sigma}^{(k)}_x + \Delta\sum_k\hat{n}_k + \frac{1}{2}\sum_{k=1,\,m\neq k}^{N}V(d_{km})\,\hat{n}_m\hat{n}_k. (51)

Note that in the specific subspace (let us call it $\mathcal{H}_1$) considered in the main text, the one including all possible one-excitation states plus all possible pairs of neighboring ones, the diagonal part acts trivially (under the facilitation condition it is proportional to the identity on $\mathcal{H}_1$) and can thus be discarded, implying

 \left.\hat{H}\right|_{\mathcal{H}_1} = \Omega\sum_k\hat{P}_k\hat{\sigma}^{(k)}_x. (52)
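The restricted Hamiltonian (52) can be written down explicitly on a small example. The sketch below takes an open chain of tweezers (again chosen for compactness, rather than the ladder geometry of the main text) and builds the matrix of Eq. (52) in the basis of single-excitation states $|k\rangle$ and facilitated pairs $|k,k+1\rangle$, with $\Omega = 1$; each facilitated flip connects a pair state to the two singles obtained by de-exciting either member:

```python
import numpy as np

# Basis of H_1 for an open chain of N tweezers (units of Omega):
# single-excitation states |k>, k = 0..N-1, and nearest-neighbour
# pairs |k, k+1>, k = 0..N-2.
N = 6
singles = [("s", k) for k in range(N)]
pairs = [("p", k) for k in range(N - 1)]   # ("p", k) stands for |k, k+1>
basis = singles + pairs
index = {state: i for i, state in enumerate(basis)}

# Facilitated flips: |k> <-> |k-1, k> and |k> <-> |k, k+1>.
H = np.zeros((len(basis), len(basis)))
for k in range(N - 1):
    for s in (("s", k), ("s", k + 1)):
        H[index[("p", k)], index[s]] = 1.0
        H[index[s], index[("p", k)]] = 1.0

assert np.allclose(H, H.T)   # Hermitian on the subspace
# Each pair state couples to exactly two single-excitation states:
assert all(H[index[("p", k)]].sum() == 2.0 for k in range(N - 1))
```

The resulting coupling graph is the synthetic lattice of the following subsection: the many-body states play the role of lattice sites, and the facilitated flips play the role of hopping links.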

We remark that the same derivation can be followed in the presence of weak disorder by changing the definition of $\hat{H}_1$ in Eq. (39) to

 \hat{H}_1 = \Omega\sum_k\hat{\sigma}^{(k)}_x + \frac{1}{2}\sum_{k\neq q}\delta V(d_{kq})\,\hat{n}_k\hat{n}_q. (53)

Since the second term is diagonal and commutes with $\hat{H}_0$, the calculation of the interaction picture is straightforward:

 \hat{H}_I(t) = \Omega\sum_k e^{it(2\hat{n}_k-1)\left(\Delta+\sum_{m\neq k}V(d_{km})\hat{n}_m\right)}\,\hat{\sigma}^{(k)}_x + \frac{1}{2}\sum_{k\neq q}\delta V(d_{kq})\,\hat{n}_k\hat{n}_q (54)

and one can follow the same steps outlined above.

### c.1 Hilbert space lattice structure

Having derived the restricted Hamiltonian (52), we can now identify the geometric structure of the Hilbert space in the basis of eigenstates of the number operators $\hat{n}_k$. To start with, we introduce the following definitions for the basis itself: we call $|k\rangle$ the states with a single excitation present on site $k$, whereas we denote by $|k,q\rangle$ the states with a pair of excitations on sites $k$ and $q$. Fixing the number $N$ of tweezers, the Hilbert subspace we work in is therefore defined as

 \mathcal{H}_1 = \mathrm{span}\left\{\,|k\rangle,\ |k,q\rangle \,:\, k=1,\dots,N,\ q\in F_k\,\right\}, (55)

where we recall that $F_k$ is the set of nearest neighbors of site $k$. Note that, since $|k,q\rangle = |q,k\rangle$, the pair states are doubly counted; however, this clearly still leads to the generation of the same vector space. Alternatively, one can also define an equivalence relation