
# Condensed Matter Applications of Entanglement Theory


## 1 Introduction

The aim of this lecture is to show how the tools of quantum information theory, and in particular entanglement theory, can help us to understand the physics of condensed matter systems and other correlated quantum many-body systems. In a certain sense, many-body systems are all around in quantum information: The defining feature of these systems is that they consist of many subsystems of the same dimension, equipped with a natural tensor product structure, and such structures appear in quantum information as quantum registers in quantum computers, as parties in multipartite protocols, and so on.

Most condensed matter systems exhibit only weak quantum correlations (that is, entanglement) between the individual subsystems. These systems are well described by a mean field ansatz (i.e., a product state), and their behavior can be understood using Landau theory. However, some of the most exciting phenomena discovered in condensed matter physics in the last decades, such as the fractional quantum Hall effect or high-temperature superconductivity, are based on systems with strong interactions in which entanglement plays an essential role. Given the refined understanding of entanglement which has been developed in the field of quantum information theory, it is thus natural to apply quantum information concepts, and in particular the theory of entanglement, towards an improved understanding of quantum many-body systems. Indeed, an active field of research has grown during the last decade at the interface of quantum information theory and quantum many-body physics, and the aim of this lecture is to give an introduction to this area.

In this lecture, we will highlight two complementary topics at the interface between condensed matter and quantum information: In the first part (Secs. 2 and 3), we will show how we can use insights on the entanglement structure of condensed matter systems to develop powerful methods to both numerically simulate and analytically characterize correlated many-body systems; and in the second part (Sec. 4) we will show how the field of quantum complexity theory allows us to better understand the limitations to our (numerical) understanding of those systems.

### 1.1 Entanglement structure of ground states: The area law

For clarity of the presentation, we will in the following restrict our attention to quantum spin systems on a lattice (such as a line or a square lattice in 2D), with a corresponding Hilbert space H = (C^d)^⊗N (where each spin has d levels, and the lattice has N sites); generalizations to fermionic systems and beyond lattices will be discussed later. Also, we will for the moment focus on ground state problems, i.e., given some Hamiltonian H acting on our spin system, we will ask about properties of its ground state |Ψ0⟩. The approach we pursue will be variational – we will try to obtain a family of states which gives a good approximation of the ground state, for which quantities of interest can be evaluated efficiently, and for which the best approximation to the ground state can be found efficiently. For instance, mean-field theory is a variational theory based on the class of product states (for spin systems) or Slater determinants (for electronic systems).

So which states should we use for our variational description of quantum spin systems? Of course, one could simply try to parametrize the ground state as

 |Ψ⟩=∑i1,…,iNci1…iN|i1,…,iN⟩ , (1)

and use the c_{i1…iN} as variational parameters. Unfortunately, the number of parameters grows exponentially with N, making it impossible to have an efficient description of |Ψ⟩ for growing system sizes. On the other hand, we know that efficient descriptions exist for physical Hamiltonians: Since H is a sum of few-body terms (even if we don't restrict to lattice systems), a polynomial number of parameters, O(N^k) (with k the bodiness of the interaction), allows to specify H, and thus its ground state. That is, while a general N-body quantum state can occupy an exponentially large Hilbert space, all physical states live in a very small “corner” of this space. The difficulty, of course, is to find an efficient parametrization which captures the states in this corner of Hilbert space, while at the same time allowing for efficient simulation methods.

In order to have a guideline for constructing an ansatz class, we choose to look at the entanglement properties of ground states of interacting quantum systems. To this end, we consider a ground state on a lattice and cut a contiguous region A of length L (in one dimension) or area L×L (in two dimensions), cf. Fig. 1.

It is a well-known result from entanglement theory [1] that the von Neumann entropy of the reduced density matrix ρA of region A,

 S(ρA) = −tr[ρA log ρA] ,

quantifies the entanglement between region A and the rest of the system. For a random quantum state, we expect this entanglement to be almost maximal, i.e., on the order of |A| (where |A| is the number of spins in region A). Yet, if we study the behavior of S(ρA) for ground states of local Hamiltonians, it is found that S(ρA) essentially scales like the boundary ∂A of region A, S(ρA) ∝ |∂A|, with possible corrections for gapless Hamiltonians which are at most logarithmic in the volume, S(ρA) ∝ |∂A| log|A|. This behavior is known as the area law for the entanglement entropy and has been observed throughout for ground states of local Hamiltonians (see, e.g., Ref. [2] for a review); for gapped Hamiltonians in one dimension, this result has been recently proven rigorously [3].
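The contrast between random states and area-law states is easy to check numerically. The following minimal numpy sketch (system sizes and the tolerance in the comment are illustrative choices, not from the text) computes the block entropy of a Haar-random state and finds it close to the maximal value, i.e., a volume law:

```python
import numpy as np

def block_entropy(psi, dim_A):
    """Von Neumann entropy (in bits) of the reduced state of the first factor."""
    M = psi.reshape(dim_A, -1)                     # split into region A and the rest
    p = np.linalg.svd(M, compute_uv=False) ** 2    # Schmidt spectrum = eigenvalues of rho_A
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

# Haar-random state of 12 qubits; region A holds the first 4 qubits
rng = np.random.default_rng(0)
psi = rng.normal(size=2**12) + 1j * rng.normal(size=2**12)
psi /= np.linalg.norm(psi)

S = block_entropy(psi, dim_A=2**4)
# S comes out very close to the maximum of 4 bits (volume law), in stark
# contrast to the boundary-law scaling of ground states of local Hamiltonians.
```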

## 2 One-dimensional systems: Matrix Product States

### 2.1 Matrix Product States

#### Construction and Matrix Product notation

Since the entropy S(ρA) quantifies the entanglement between region A and its complement, the fact that S(ρA) scales like the boundary of A suggests that the entanglement between region A and the rest of the system is essentially located around the boundary between the two regions, as illustrated in Fig. 2.

We will now construct an ansatz for many-body quantum systems, starting from the insight that the entanglement is concentrated around the boundary of regions; for the moment, we will focus on one-dimensional systems. Clearly, since we want to have this property for any partitioning of the lattice, we cannot just place entangled pairs as in Fig. 2, but we have to choose a more subtle strategy. To this end, we consider the system at each site as being composed of two “virtual” subsystems of dimension D each, as illustrated in Fig. 3a. Then, each of the two subsystems is placed in a maximally entangled state

 |ωD⟩ = ∑_{i=1}^{D} |i,i⟩

with the corresponding subsystems of the adjacent sites, as shown in Fig. 3b. The maximally entangled states are called “bonds”, with D the “bond dimension”. This construction already satisfies the area law: For any region we cut, there are exactly two maximally entangled states crossing the cuts, bounding the entanglement by 2 log D. Finally, we apply linear maps Ps : C^D ⊗ C^D → C^d at each site s, which creates a description of a state on a chain of d-level systems, cf. Fig. 3c. (Note that the rank of the reduced density operator of any region cannot be increased by applying the linear maps Ps.) The construction can be carried out either with periodic boundary conditions, or with open boundary conditions by omitting the outermost virtual subsystems at the ends of the chain. The total state of the chain can be written as

 |Ψ⟩=(P1⊗⋯⊗PN)|ωD⟩⊗N , (2)

where the maps Ps act on the maximally entangled states as illustrated in Fig. 3c.

This class of states can be rewritten as follows: For each site s, define a three-index tensor A^[s]_{i,αβ}, i = 1,…,d, α,β = 1,…,D, where the A^[s]_i can be interpreted as D×D matrices, such that

 Ps=∑i,α,βA[s]i,αβ|i⟩⟨α,β| . (3)

Then, the state (2) can be rewritten as

 |Ψ⟩=∑i1,…,iNtr[A[1]i1A[2]i2⋯A[N]iN]|i1,…,iN⟩ , (4)

i.e., the coefficient c_{i1…iN} in (1) can be expressed as a product of matrices. For this reason, these states are called Matrix Product States (MPS). For systems with open boundary conditions, the matrices A^[1]_{i1} and A^[N]_{iN} are 1×D and D×1 matrices, respectively, so that the trace can be omitted. More generally, the bond dimension D can be chosen differently across each link.
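Eq. (4) is simple enough to implement directly. The following sketch (all dimensions and the random tensors are illustrative) computes the coefficients of an open-boundary MPS as matrix products and cross-checks them against a sequential tensor contraction:

```python
import numpy as np
from functools import reduce
from itertools import product

# Toy open-boundary MPS: A[s][i] is the matrix for physical index i at site s;
# the boundary tensors are 1 x D and D x 1, so the "trace" is just a scalar.
d, D, N = 2, 3, 5
rng = np.random.default_rng(0)
A = ([rng.normal(size=(d, 1, D))]
     + [rng.normal(size=(d, D, D)) for _ in range(N - 2)]
     + [rng.normal(size=(d, D, 1))])

def coefficient(indices):
    """c_{i1...iN} = A^[1]_{i1} A^[2]_{i2} ... A^[N]_{iN}, cf. Eq. (4)."""
    return reduce(np.matmul, (A[s][i] for s, i in enumerate(indices)))[0, 0]

psi = np.array([coefficient(idx) for idx in product(range(d), repeat=N)])

# Cross-check: contract the same network site by site with einsum
T = A[0][:, 0, :]                                  # shape (d, D)
for s in range(1, N):
    T = np.einsum('aL,iLR->aiR', T, A[s]).reshape(-1, A[s].shape[2])
assert np.allclose(psi, T[:, 0])
```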

#### Tensor network notation

The defining equation (4) for Matrix Product States is a special case of a so-called tensor network. Generally, tensor networks are given by a number of tensors A, B, etc., where each tensor usually only depends on a few of the indices. Then, one takes the product of the tensors and sums over a subset of the indices,

 ci1…ik=∑ik+1,…,iKAi1,i2,…,iKBi1,i2,…,iK⋯ .

For instance, in (4) the tensors are the A^[s], and we sum over the virtual indices α, β, γ, …, ζ, yielding

 ci1…iN=∑α,β,γ,…,ζA[1]i1,αβA[2]i2,βγ⋯A[N]iN,ζα=tr[A[1]i1A[2]i2⋯A[N]iN] .

Tensor networks are most conveniently expressed in a graphical language. Each tensor is denoted by a box with “legs” attached to it, where each leg corresponds to an index – a three-index tensor is then depicted as

 [Diagram: a box with three legs, depicting a three-index tensor.]

Summing over a joint index is denoted by connecting the corresponding legs, e.g.,

 [Diagram: two tensors with one pair of legs connected, depicting the contraction of the joint index.]

In this language, the expansion coefficient c_{i1…iN} [Eq. (1)] of an MPS (which we will further on use interchangeably with the state itself) is written

 [Diagram: a row of N three-index tensors with their virtual legs contracted along the chain. (5)]

We will make heavy use of this graphical language for tensor networks in the following.

#### Approximating states as Matrix Product States

As it turns out, MPS are very well suited to describe ground states of one-dimensional quantum systems. On the one hand, we have seen that by construction, these states all satisfy the area law. On the other hand, it can be shown that all states which satisfy an area law, such as ground states of gapped Hamiltonians [3], as well as states for which the entanglement of a block grows slowly (such as for critical 1D systems), can be well approximated by an MPS [4, 3]: Given a state |Φ⟩ on a chain of length N for which the entropy of any block is bounded by Smax, there exists an MPS |ΨD⟩ with bond dimension D which approximates |Φ⟩ up to error

 ‖|Φ⟩−|ΨD⟩‖ =: ϵ ≤ const × N e^{c Smax} / D^c . (6)

Note that even if Smax grows logarithmically with N, the numerator is still a polynomial in N. That is, in order to achieve a given accuracy ϵ, we need to choose a bond dimension D which scales polynomially in N and 1/ϵ, and thus, the total number of parameters (and, as we will see later, also the computation time) scales polynomially as long as the desired accuracy is at most 1/poly(N).
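The mechanism behind bounds of this kind is Schmidt-spectrum truncation: keeping the D largest Schmidt coefficients across a cut is the optimal rank-D approximation, with an error given by the discarded weight. A small sketch (the decay rate and dimensions are illustrative assumptions, mimicking a gapped ground state):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy state across a single cut with an exponentially decaying Schmidt spectrum;
# chi is the exact Schmidt rank.
chi = 64
schmidt = np.exp(-0.4 * np.arange(chi))
schmidt /= np.linalg.norm(schmidt)
U, _ = np.linalg.qr(rng.normal(size=(chi, chi)))
V, _ = np.linalg.qr(rng.normal(size=(chi, chi)))
C = (U * schmidt) @ V.T            # coefficient matrix c_{ab} of the state

# Keep only the D largest Schmidt values: the best approximation with
# bond dimension D across this cut
D = 8
u, s, vt = np.linalg.svd(C)
C_D = (u[:, :D] * s[:D]) @ vt[:D]

# The 2-norm error is exactly the root of the discarded Schmidt weight
error = np.linalg.norm(C - C_D)
assert np.isclose(error, np.sqrt(np.sum(s[D:] ** 2)))
```

Since the Schmidt spectrum decays exponentially, already D = 8 out of 64 Schmidt values gives an error of a few percent.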

#### States with exact Matrix Product form

Beyond the fact that ground states of gapped Hamiltonians can be efficiently approximated by MPS, there is a large class of states which can be expressed exactly as MPS with a small bond dimension D. First of all, any product state is trivially an MPS with D = 1. The GHZ state

 |GHZ⟩=|0⋯0⟩+|1⋯1⟩

is an MPS with D = 2, with a translationally invariant MPS representation A^[s]_i ≡ A_i, with A_0 = |0⟩⟨0| and A_1 = |1⟩⟨1|. The W state

 |W⟩=|100⋯00⟩+|010⋯00⟩+⋯+|000⋯01⟩

is an MPS with D = 2, with A^[s]_0 = 𝟙 and A^[s]_1 = σ⁺ for 1 < s < N, and boundary tensors A^[1]_i = ⟨0|A^[s]_i and A^[N]_i = A^[s]_i|1⟩, where 𝟙 is the identity and σ⁺ = |0⟩⟨1|. Other states which have an MPS representation with small D are the cluster state used in measurement-based computation [6, 7] (with D = 2) or the 1D resonating valence bond state which appears as the ground state of the so-called Majumdar-Ghosh model [8, 9] (with D = 3), and the AKLT state which we will get to know in Section 2.3.1.
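One concrete choice of tensors for the GHZ and W states (the specific matrices below are one possible representation, not uniquely fixed by the text) can be verified directly against the explicit state vectors:

```python
import numpy as np
from functools import reduce
from itertools import product

N = 6
# GHZ: translationally invariant D = 2 MPS with periodic boundaries
G = {0: np.diag([1.0, 0.0]), 1: np.diag([0.0, 1.0])}    # A_0 = |0><0|, A_1 = |1><1|
ghz = np.array([np.trace(reduce(np.matmul, (G[i] for i in idx)))
                for idx in product((0, 1), repeat=N)])
target = np.zeros(2**N); target[0] = target[-1] = 1.0   # |0...0> + |1...1>
assert np.allclose(ghz, target)

# W: D = 2 MPS with open boundaries, A_0 = identity, A_1 = sigma^+ = |0><1|
W = {0: np.eye(2), 1: np.array([[0.0, 1.0], [0.0, 0.0]])}
l, r = np.array([1.0, 0.0]), np.array([0.0, 1.0])       # boundary vectors
w = np.array([l @ reduce(np.matmul, (W[i] for i in idx)) @ r
              for idx in product((0, 1), repeat=N)])
target_w = np.zeros(2**N)
for k in range(N):                                      # a single |1> at site k
    target_w[1 << (N - 1 - k)] = 1.0
assert np.allclose(w, target_w)
```

The W representation works because two factors of σ⁺ multiply to zero, so only configurations with exactly one |1⟩ survive.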

### 2.2 Variational calculation with MPS: The DMRG method

As we have discussed at the end of Section 2.1.3, MPS approximate ground states of local Hamiltonians efficiently, as the effort needed for a good approximation scales only polynomially in the length of the chain and the desired accuracy. Thus, it seems appealing to use the class of MPS as a variational ansatz to simulate the properties of quantum many-body systems. However, to this end it is not sufficient to have an efficient description of relevant states – after all, the Hamiltonian itself forms an efficient description of its ground state, but it is hard to extract information from it! Rather, a good variational class also requires that we can efficiently extract quantities of interest such as energies, correlation functions, and the like, and that there is an efficient way to find the ground state (i.e., minimize the energy within the variational class of states) in the first place.

#### Evaluating expectation values for MPS

Let us start by discussing how to compute the expectation value ⟨Ψ|h|Ψ⟩ of a local operator h (such as a term in the Hamiltonian) for an MPS. To this end, note that

 ⟨Ψ|h|Ψ⟩ = ∑_{i1,…,iN; j1,…,jN} c_{i1…iN} c*_{j1…jN} δ_{i1,j1} ⋯ δ_{i(k−1),j(k−1)} h^{jk j(k+1)}_{ik i(k+1)} δ_{i(k+2),j(k+2)} ⋯ δ_{iN,jN}

where

 h = ∑_{ik,i(k+1); jk,j(k+1)} h^{jk j(k+1)}_{ik i(k+1)} |ik, i(k+1)⟩⟨jk, j(k+1)|

acts on sites k and k+1. Using the graphical tensor network notation, this can be written as

 [Diagram: the tensor network for ⟨Ψ|h|Ψ⟩ – the MPS, its complex conjugate, and h sandwiched between them at sites k and k+1. (7)]

In order to evaluate this quantity, we have to contract the whole diagram (7). In principle, contracting arbitrary tensor networks can become an extremely hard problem (strictly speaking, PP-hard [10]), as in some cases it essentially requires to determine exponentially big tensors (e.g., we might first have to compute c_{i1…iN} from the tensor network and from it determine the expectation value). Fortunately, it turns out that the tensor network of Eq. (7) can be contracted efficiently, i.e., with an effort polynomial in N and D. To this end, let us start from the very left of the tensor network in Eq. (7) and block the leftmost column (the tensors A^[1] and its conjugate Ā^[1]). Contracting the internal index, this gives a two-index tensor

 𝕃_{αα′} = ∑_i A^[1]_{i,α} Ā^[1]_{i,α′} ,

which we interpret as a (bra) vector with a “double index” (α,α′) of dimension D². Graphically, this can be denoted as

 [Diagram: the blocked tensors A^[1] and Ā^[1] forming the left boundary vector 𝕃.]

where we use a doubled line to denote the “doubled” index of dimension D². We can now continue this way, and define operators 𝔼^[s] (called transfer operators)

 (𝔼^[s])^{ββ′}_{αα′} = ∑_i A^[s]_{i,αβ} Ā^[s]_{i,α′β′} ,

which we interpret as mapping the double index (α,α′) to (β,β′), and graphically write as

 [Diagram: the transfer operator 𝔼^[s], built from A^[s] and Ā^[s]. (8)]

Similarly, we define operators

 [Diagram: the operator 𝔼_h, built from the tensors at sites k and k+1 with h sandwiched in between. (9)]

and

 [Diagram: the right boundary vector ℝ, built from A^[N] and Ā^[N].]

All of these operators can be computed efficiently (in the parameters d and D), as they are vectors/matrices of fixed dimension D² or D²×D², and can be obtained by contracting a constant number of indices.

Using the newly defined objects 𝕃, 𝔼^[s], 𝔼_h, and ℝ, the expectation value ⟨Ψ|h|Ψ⟩, Eq. (7), can be rewritten as

 ⟨Ψ|h|Ψ⟩ = 𝕃 𝔼^[2] ⋯ 𝔼^[k−1] 𝔼_h 𝔼^[k+2] ⋯ ℝ .

That is, ⟨Ψ|h|Ψ⟩ can be computed by multiplying a D²-dimensional vector about N times with D²×D² matrices. Each of these multiplications takes O(D⁴) operations, and thus, ⟨Ψ|h|Ψ⟩ can be evaluated in O(ND⁴) operations. There are N terms in the Hamiltonian, and thus, the energy can be evaluated in time O(N²D⁴), and thus efficiently; in fact, this method can be easily improved to scale as O(NdD³). Similarly, one can see that e.g. correlation functions or string order parameters can be reduced to matrix multiplications and thus evaluated efficiently as well. Exactly the same way, evaluating expectation values for MPS with periodic boundary conditions can be reduced to computing the trace of a product of matrices of size D²×D². Each multiplication scales like D⁶, and using the same tricks as before, one can show that for systems with periodic boundary conditions, expectation values can be evaluated in time O(NdD⁵).

In summary, we find that energies, correlation functions, etc. can be efficiently evaluated for MPS, with computation times scaling as O(NdD³) and O(NdD⁵) for open and periodic boundary conditions, respectively.
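The transfer-operator contraction is a few lines of numpy. This sketch (random tensors, small sizes, all illustrative) evaluates ⟨Ψ|Ψ⟩ by sweeping a boundary vector through the chain and checks it against a brute-force computation:

```python
import numpy as np
from functools import reduce
from itertools import product

rng = np.random.default_rng(2)
d, D, N = 2, 3, 6
A = ([rng.normal(size=(d, 1, D))]
     + [rng.normal(size=(d, D, D)) for _ in range(N - 2)]
     + [rng.normal(size=(d, D, 1))])

def E(As):
    """Transfer operator E^[s] = sum_i A_i (x) conj(A_i) on the doubled index."""
    return sum(np.kron(As[i], As[i].conj()) for i in range(As.shape[0]))

# <Psi|Psi> by multiplying the transfer operators along the chain: polynomial
# cost, instead of the exponential cost of building |Psi> explicitly first
norm_sq = reduce(np.matmul, (E(As) for As in A))[0, 0]

# Brute-force check, feasible only for this small chain
psi = np.array([reduce(np.matmul, (A[s][i] for s, i in enumerate(idx)))[0, 0]
                for idx in product(range(d), repeat=N)])
assert np.isclose(norm_sq, psi @ psi)
```

Replacing one transfer operator by a modified one [cf. Eq. (9)] gives expectation values of local operators in the same way.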

#### Variational optimization of MPS

As we have seen, we can efficiently compute the energy of an MPS with respect to a given local Hamiltonian H. In order to use MPS for numerical simulations, we still need to figure out an efficient way to find the MPS which minimizes the energy for a given H. To this end, let us first pick a site k, and try to minimize the energy as a function of the tensor A^[k], while keeping all other MPS tensors A^[s], s ≠ k, fixed. Now, since |Ψ⟩ is a linear function of A^[k], we have that

 ⟨Ψ|H|Ψ⟩ / ⟨Ψ|Ψ⟩ = (A⃗^[k]† X A⃗^[k]) / (A⃗^[k]† Y A⃗^[k])

is the ratio of two quadratic forms in A⃗^[k]. Here, A⃗^[k] denotes the vectorized version of A^[k]_{i,αβ}, where (i,α,β) is interpreted as a single index. The matrices X and Y can be obtained by contracting the full tensor network (7) except for the tensors A^[k] and Ā^[k], which can be done efficiently. The A⃗^[k] which minimizes this energy can be found by solving the generalized eigenvalue equation

 X A⃗^[k] = E Y A⃗^[k] ,

where E is the energy; again, this can be done efficiently. For MPS with open boundary conditions, we can choose a gauge for the tensors such that Y = 𝟙 [11, 12] – this reduces the problem to a usual eigenvalue problem, and avoids problems due to an ill-conditioned Y.

This shows that we can efficiently minimize the energy as a function of the tensor A^[k] at an individual site k. In order to minimize the overall energy, we start from a randomly chosen MPS, and then sweep through the sites, sequentially optimizing the tensor at each site. Iterating this a few times over the system (usually sweeping back and forth) quickly converges to a state with low energy. Although in principle, such an optimization can get stuck [14, 15], in practice it works extremely well and generally converges to the optimal MPS (though some care might have to be put into choosing the initial conditions).
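The linear-algebra core of each local step is the generalized eigenvalue problem X v = E Y v with Y positive definite. A minimal sketch with random stand-in matrices (X and Y here are generic placeholders, not obtained from an actual MPS environment), solved via a Cholesky factorization of Y:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 24                                   # stands in for the d*D^2 entries of vec(A^[k])
X = rng.normal(size=(n, n)); X = (X + X.T) / 2          # effective Hamiltonian matrix
B = rng.normal(size=(n, n)); Y = B @ B.T + np.eye(n)    # effective norm matrix, pos. def.

# Reduce X v = E Y v to an ordinary symmetric eigenproblem via Y = L L^T
L = np.linalg.cholesky(Y)
Linv = np.linalg.inv(L)
w, U = np.linalg.eigh(Linv @ X @ Linv.T)
E0, v = w[0], Linv.T @ U[:, 0]           # lowest energy and its generalized eigenvector

assert np.allclose(X @ v, E0 * (Y @ v))
# E0 is the minimum of the Rayleigh quotient over all trial vectors
for _ in range(100):
    u = rng.normal(size=n)
    assert (u @ X @ u) / (u @ Y @ u) >= E0 - 1e-9
```

In the gauge with Y = 𝟙 the Cholesky step becomes trivial and one is left with an ordinary eigenvalue problem, exactly as stated above.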

In summary, we find that we can use MPS to efficiently simulate ground state properties of one-dimensional quantum systems with both open and periodic boundary conditions. This simulation method can be understood as a reformulation of the Density Matrix Renormalization Group (DMRG) algorithm [16, 17], which is a renormalization algorithm based on keeping the states which are most relevant for the entanglement of the system, and which since its invention has been highly successful in simulating the physics of one-dimensional quantum systems (see Refs. [11, 18, 19] for a review of the DMRG algorithm and its relation to MPS, and a more detailed discussion of MPS algorithms).

### 2.3 MPS as a framework for solvable models

Up to now, we have seen that MPS form a powerful framework for the simulation of one-dimensional systems, due to their ability to efficiently approximate ground states of local Hamiltonians, and due to the fact that the necessary variational minimization can be carried out efficiently. As we will see in the following, MPS can also be used as a framework to construct solvable models whose properties can be understood analytically, which makes them a powerful tool for resolving various analytical problems regarding quantum many-body systems.

#### The AKLT model and parent Hamiltonians

The paradigmatic analytical MPS model is the so-called AKLT state, named after Affleck, Kennedy, Lieb, and Tasaki [20, 21]. The construction of the translationally invariant AKLT state is illustrated in Fig. 4: The auxiliary systems are spin-1/2 particles, which are put into a singlet (i.e., spin 0) state |ω⟩ = |01⟩ − |10⟩. The two virtual spin-1/2's at each site have jointly spin 0 or 1, and the map Ps projects onto the spin-1 subspace, thereby describing a translationally invariant chain of spin-1 particles.

Let us now see what we can say about the reduced density operator ρ2 of two consecutive sites in the AKLT model. The following argument is illustrated in Fig. 5: On the virtual level, we start from two “open” bonds, each of which has spin 1/2 (while we might have more information about their joint state, we are free to neglect it), as well as one “interior” bond which has spin 0; thus, the virtual state we start from has spin

 1/2 ⊗ 0 ⊗ 1/2 = 0 ⊕ 1 .

The projections onto the spin-1 subspace, on the other hand, do not change the total spin (they only affect the weight of the different spin sectors), which implies that ρ2 has spin 0 or 1 as well. On the other hand, ρ2 is a state of two physical sites, each of which has spin 1; that is, it could in principle have spin 0, 1, or 2. We therefore find that there is a non-trivial constraint on ρ2 arising from the AKLT construction: ρ2 cannot have spin 2.

We can now use this to construct a non-trivial Hamiltonian which has the AKLT state as its ground state. Namely, let h_{i,i+1} be a local Hamiltonian acting on sites i and i+1, projecting onto the spin-2 subspace of the two sites. Then, by the preceding argument, h_{i,i+1}|Ψ_AKLT⟩ = 0 with |Ψ_AKLT⟩ the AKLT state, and thus, with

 H = ∑_i h_{i,i+1} ,

we have that

 H|ΨAKLT⟩=0 .

On the other hand, H ≥ 0, and thus, the AKLT state is a ground state of the AKLT Hamiltonian H.

While the argument we have given right now seems specific to the AKLT model, it turns out that in the very same way, we can construct a so-called “parent Hamiltonian” for any MPS. To this end, note that the reduced density operator ρk of k consecutive sites lives in a d^k-dimensional space, yet it can have rank at most D² (since the only free parameters we have are the outermost “open” bonds). Thus, as soon as d^k > D², ρk does not have full rank, and we can build a local Hamiltonian acting on k consecutive sites as the projector onto the kernel of ρk, which has the given MPS as its ground state.
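The kernel-projector construction can be carried out explicitly for a simple example. The sketch below (the GHZ state is chosen only because its two-site reduced operator visibly fails to have full rank; chain length and tolerances are illustrative) builds the local term from the kernel of ρ2 and verifies that the resulting H annihilates the state:

```python
import numpy as np

# Parent-Hamiltonian construction for a simple MPS, the GHZ state (D = 2,
# d = 2): rho_2 on two consecutive sites has rank 2 < d^2 = 4, so the
# projector onto its kernel gives a nontrivial 2-local term.
N = 6
psi = np.zeros(2**N); psi[0] = psi[-1] = 1 / np.sqrt(2)   # (|0..0> + |1..1>)/sqrt(2)

# Reduced density operator of the first two sites
M = psi.reshape(4, -1)
rho2 = M @ M.T

# h = projector onto the kernel of rho_2
vals, vecs = np.linalg.eigh(rho2)
kernel = vecs[:, vals < 1e-12]
h = kernel @ kernel.T                   # here: |01><01| + |10><10|

# H = sum_i h_{i,i+1} annihilates the state, and H >= 0 by construction
Hpsi = np.zeros_like(psi)
for i in range(N - 1):
    T = psi.reshape(2**i, 4, 2**(N - i - 2))
    Hpsi += np.einsum('ab,ibj->iaj', h, T).reshape(-1)
assert np.linalg.norm(Hpsi) < 1e-12
```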

Of course, it is not sufficient to have a non-trivial Hamiltonian with the AKLT state (or any other MPS) as a ground state; we want the AKLT state to be the unique ground state of this Hamiltonian. Indeed, it turns out that there is a simple condition which is satisfied for almost any MPS (and in particular for the AKLT state) which allows to prove that it is the unique ground state of a suitably chosen parent Hamiltonian; for the case of the AKLT model, it turns out that this Hamiltonian is the two-body Hamiltonian which we defined originally. Secondly, it can be shown that the parent Hamiltonian of an MPS is always gapped, that is, there is a finite gap above the ground space which does not close even as N → ∞.

Together, this proves that the AKLT state is the unique ground state of a gapped local Hamiltonian. It is an easy exercise to show that the AKLT Hamiltonian equals

 h_{i,i+1} = (1/2) S_i·S_{i+1} + (1/6) (S_i·S_{i+1})² + 1/3 ,

and is thus close to the spin-1 Heisenberg model. Indeed, proving that the spin-1 Heisenberg model is gapped, as conjectured by Haldane, was one motivation for the construction of the AKLT model, showing that MPS allow for the construction of models of interest for which certain properties can be rigorously proven.
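The identification of h_{i,i+1} with the projector onto the spin-2 subspace of two spin-1's is a finite-dimensional identity, so it can be checked mechanically (the spin-1 matrices below are the standard ones in the S_z basis):

```python
import numpy as np

# Spin-1 operators in the S_z basis
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

SS = sum(np.kron(S, S) for S in (Sx, Sy, Sz))   # S_i . S_{i+1} on two sites
h = 0.5 * SS + (SS @ SS) / 6 + np.eye(9) / 3

# h should be the projector onto the spin-2 subspace of two spin-1's:
# Hermitian, idempotent, and of rank 2*2 + 1 = 5
assert np.allclose(h, h.conj().T)
assert np.allclose(h @ h, h)
assert np.isclose(np.trace(h).real, 5.0)
```

Equivalently, on the spin-J sector S_i·S_{i+1} takes the value [J(J+1)−4]/2, i.e. −2, −1, 1 for J = 0, 1, 2, and plugging these into the polynomial above gives 0, 0, 1.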

The fact that MPS naturally appear as ground states of local Hamiltonians makes them a very powerful tool in studying the properties of one-dimensional gapped systems. In particular, one can use them to classify phases of one-dimensional systems: On the one hand, we know that the ground state of any gapped one-dimensional Hamiltonian is well approximated by an MPS. This allows us to replace the original Hamiltonian by the parent Hamiltonian of the MPS while keeping a gap (i.e., without crossing a phase transition). Given two MPS characterized by tensors A_i and B_i, we can now build a smooth interpolation between the two models (i.e., Hamiltonians) by appropriately interpolating between A_i and B_i: due to the very way the parent Hamiltonian is constructed, this will give rise to a smooth path of parent Hamiltonians as well [24]. This is particularly interesting if one imposes symmetries on the Hamiltonian, which (for a system with a unique MPS ground state |Ψ⟩) implies that U_g^⊗N |Ψ⟩ ∝ |Ψ⟩ for the group elements g. For an MPS with such a symmetry, it can be shown [25, 26] that the symmetry is reflected in a local symmetry of the MPS tensors, ∑_j (U_g)_{ij} A_j = V_g† A_i V_g, and classifying the possible inequivalent types of the (generally projective) representation V_g allows to classify the different phases in one-dimensional systems in the presence of symmetries [27, 28, 24].

#### Correlation length and the transfer operator

As we have discussed in the preceding section, MPS form a powerful tool for the construction of solvable models. Let us now have a closer look at what we can say about the behavior of physical quantities. While we have discussed how to extract physically relevant quantities such as energies and correlation functions numerically in Sec. 2.2.1, the purpose of this section is different: Here, we focus on a translationally invariant exact MPS ansatz, and we are asking for simple (and ideally closed) expressions for quantities such as the correlation length.

In Sec. 2.2.1, the central object was the transfer operator,

 𝔼 = ∑_i A_i ⊗ Ā_i .

As compared to Eq. (8), we have omitted the site index s, since we consider a translationally invariant MPS. It is easy to see that – up to local unitaries – the MPS is completely determined by the transfer operator 𝔼: If two sets of tensors A_i and B_i give rise to the same transfer operator, ∑_i A_i ⊗ Ā_i = ∑_i B_i ⊗ B̄_i, then we must have B_i = ∑_j U_{ij} A_j, with U = (U_{ij}) an isometry. In the following, we would like to see how specific properties can be extracted from the transfer operator.

One property which is of particular interest are two-point correlations. To obtain those, let us first define, for an operator O,

 𝔼_O = ∑_{ij} ⟨j|O|i⟩ A_i ⊗ Ā_j .

Then, the correlation between an operator A at site 1 and an operator B at site j+2 on a chain of length N is

 ⟨A₁ B_{j+2}⟩ = tr[𝔼_A 𝔼^j 𝔼_B 𝔼^{N−j−2}] / tr[𝔼^N] . (10)

If we now assume that 𝔼 has a unique maximal eigenvalue λ₁, which we normalize to λ₁ = 1, with left and right eigenvectors ⟨l| and |r⟩ (normalized such that ⟨l|r⟩ = 1), then for large N, 𝔼^{N−j−2} → |r⟩⟨l|, and thus Eq. (10) converges to ⟨l|𝔼_A 𝔼^j 𝔼_B|r⟩. If we now expand

 𝔼^j = |r⟩⟨l| + ∑_{k≥2} λ_k^j |r_k⟩⟨l_k|

(where |λ_k| < 1 for k ≥ 2), we find that

 ⟨A₁ B_{j+2}⟩ = ⟨l|𝔼_A|r⟩⟨l|𝔼_B|r⟩ + ∑_{k≥2} λ_k^j ⟨l|𝔼_A|r_k⟩⟨l_k|𝔼_B|r⟩ ,

and thus the connected part of the correlation decays exponentially with j; in particular, the correlation length ξ is given by the ratio of the second largest to the largest eigenvalue of 𝔼,

 ξ = −1 / ln|λ₂/λ₁| ,

i.e., it is determined solely by simple spectral properties of the transfer operator. Let us note that the same quantity, i.e. the ratio between the two largest eigenvalues, also bounds the gap of the parent Hamiltonian [13, 30].
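For the AKLT state this spectral computation can be done in a few lines. The tensors below are one common normalization of the AKLT MPS (an illustrative choice; any gauge-equivalent set gives the same spectrum), and the resulting correlation length is ξ = 1/ln 3:

```python
import numpy as np

# AKLT tensors (one common normalization): A_{+1} ~ sigma^+, A_0 ~ sigma^z, A_{-1} ~ sigma^-
Ap = np.sqrt(2 / 3) * np.array([[0, 1], [0, 0]])
A0 = -np.sqrt(1 / 3) * np.array([[1, 0], [0, -1]])
Am = -np.sqrt(2 / 3) * np.array([[0, 0], [1, 0]])

# Transfer operator E = sum_i A_i (x) conj(A_i), a 4 x 4 matrix
E = sum(np.kron(A, A.conj()) for A in (Ap, A0, Am))

lam = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
assert np.allclose(lam, [1, 1/3, 1/3, 1/3])   # spectrum {1, -1/3, -1/3, -1/3}

# Correlation length xi = -1 / ln|lambda_2 / lambda_1| = 1 / ln(3)
xi = -1 / np.log(lam[1] / lam[0])
assert np.isclose(xi, 1 / np.log(3))
```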

The proof above that correlations in MPS decay exponentially is not restricted to the case of a unique maximal eigenvalue of the transfer operator: In case it is degenerate, the correlation function has a long-ranged part (i.e., one which is not decaying), such as in the GHZ state, while any other contribution still decays exponentially. In particular, MPS cannot exhibit algebraically decaying correlations, and in case MPS are used to simulate critical systems, one has to take into account that the critical decay of correlations is always mimicked by a sum of exponentials in the MPS simulation.

## 3 Tensor network states beyond one dimension and ground state problems

### 3.1 PEPS: Tensor networks in two dimensions

As we have seen, MPS are very well suited for simulating ground state properties of one-dimensional systems. But what if we want to go beyond one-dimensional systems, and, e.g., study interacting spin systems in two dimensions? Two-dimensional systems can exhibit a rich variety of phenomena, such as topologically ordered states [31, 32] which are states distinct from those in the trivial phase, yet which do not break any (local) symmetry. Moreover, two-dimensional spin systems can be highly frustrated due to the presence of large loops in the interaction graph, and even classical two-dimensional spin glasses can be hard to solve [33]. In the following, we will focus on the square lattice without loss of generality.

A first idea to simulate two-dimensional systems would be to simply use an MPS, by choosing a one-dimensional ordering of the spins in the two-dimensional lattice. While this approach has been applied successfully (see, e.g., Ref. [34]), it cannot reproduce the entanglement features of typical ground states in two dimensions as one increases the system size: As we have discussed in Section 1.1, two-dimensional systems also satisfy an area law, i.e., in the ground state we expect the entanglement of a region A with its complement to scale like its boundary, S(ρA) ∝ |∂A|. To obtain an ansatz with such an entanglement scaling, we follow the same route as in the construction of MPS: We consider each site as being composed of four D-dimensional virtual subsystems, place each of them in a maximally entangled state |ωD⟩ with the corresponding subsystem of each of the adjacent sites, and finally apply a linear map

 Ps:CD⊗CD⊗CD⊗CD→Cd

at each site to obtain a description of the physical state on a 2D lattice of d-level systems. The construction is illustrated in Fig. 6.

Due to the way they are constructed, these states are called Projected Entangled Pair States (PEPS). Again, we can define five-index tensors A^[s]_{i,αβγδ}, where now

 Ps=∑iαβγδA[s]i,αβγδ|i⟩⟨α,β,γ,δ| ,

and express the PEPS in Fig. 6 graphically as a tensor network

 [Diagram: the PEPS tensor network – a square grid of five-index tensors with contracted virtual legs and one open physical leg per site.]

(where we have omitted the tensor labels). Similar to the result in one dimension, one can show that PEPS approximate ground states of local Hamiltonians well as long as the density of states grows at most polynomially with the energy [35, 36], and thereby provide a good variational ansatz for two-dimensional systems. (Note, however, that it is not known whether all 2D states which obey an area law are approximated well by PEPS.)

#### Contraction of PEPS

Let us next consider what happens if we try to compute expectation values of local observables for PEPS. For simplicity, we first discuss the evaluation of the normalization ⟨Ψ|Ψ⟩, which is obtained by sandwiching the ket and bra tensor networks of |Ψ⟩,

 [Diagram: the two-layer tensor network for ⟨Ψ|Ψ⟩, with the bra and ket PEPS stacked on top of each other. (11)]

This can again be expressed using transfer operators 𝔼

 [Diagram: definition of the 2D transfer operator 𝔼, obtained by contracting the physical index of A^[s] with that of Ā^[s].]

(the 𝔼 should be thought of as being “viewed from the top”), leaving us with the task of contracting the network

 [Diagram: the square grid of transfer operators 𝔼 remaining after contracting the physical indices.]

[This easily generalizes to the computation of expectation values, where some of the 𝔼 have to be modified similarly to Eq. (9).] Different from the case of MPS, there is no one-dimensional structure which we can use to reduce this problem to matrix multiplication. In fact, it is easy to see that independent of the contraction order we choose, the cluster of tensors we get (such as a rectangle) will at some point have a boundary of a length comparable to the linear system size √N. That is, we need to store an object with a number of indices proportional to √N – and thus an exponential number of parameters – at some point during the contraction, making it impossible to contract such a network exactly in an efficient way. (Indeed, it can be proven that such a contraction is a computationally hard problem [10].)

This means that if we want to use PEPS for variational calculations in two dimensions, we have to make use of some approximate contraction scheme, which of course should have a small and ideally controlled error. To this end, we proceed as follows [37]: Consider the contraction of a two-dimensional PEPS with open boundary conditions,

 [Diagram: the grid of transfer operators to be contracted, with open boundary conditions. (12)]

Now consider the first two columns, and block the two tensors in each row into a new tensor (with vertical bond dimension D⁴):

 [Diagram: the first two columns blocked into a single column of tensors with squared bond dimension. (13)]

This way, we have reduced the number of columns in (12) by one. Of course, this came at the cost of squaring the bond dimension of the first column, so this doesn't help us yet. However, what we do now is to approximate the right hand side of (13) by an MPS with a (fixed) bond dimension D′. We can then iterate this procedure column by column, thereby contracting the whole network, and at any point, the size of our tensors stays bounded. It remains to be shown that the elementary step of approximating a given MPS |Ψ⟩ [such as the r.h.s. of (13)] by an MPS |Φ⟩ with smaller bond dimension can be done efficiently: To this end, it is sufficient to note that the overlap ⟨Φ|Ψ⟩ is linear in each tensor of |Φ⟩, and thus, maximizing the overlap

 |⟨Φ|Ψ⟩|² / ⟨Φ|Φ⟩

can again be reduced to solving a generalized eigenvalue problem, just as the energy minimization for MPS in the one-dimensional variational method. Differently speaking, the approximate contraction scheme succeeds by reducing the two-dimensional contraction problem to a sequence of one-dimensional contractions, i.e., it is based on a dimensional reduction of the problem.

This shows that PEPS can be contracted approximately in an efficient way. The scaling in the bond dimension D is naturally much less favorable than in one dimension: Even the simplest approach scales with a high power of D for open boundaries, and while several tricks can improve this scaling, the method remains limited to much smaller D as compared to the MPS ansatz. It should be noted that the approximate contraction method we just described has a controlled error, as we know the error made in each approximation step. Indeed, the approximation is very accurate as long as the system is short-range correlated, and the accuracy of the method is rather limited by the bond dimension D needed to obtain a good enough approximation of the ground state. Just as in one dimension, we can use this approximate contraction method to build a variational method for two-dimensional systems by successively optimizing over individual tensors [37].
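The elementary compression step can be illustrated with a minimal numpy sketch. For a single bond, the optimal truncation to bond dimension χ keeps the largest Schmidt coefficients; the full algorithm instead optimizes all tensors of |Φ⟩ variationally, so this is only the one-bond building block.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-site state psi[i, j] plays the role of an MPS with a single bond;
# truncating its Schmidt decomposition to chi terms is the elementary
# compression step (the full algorithm sweeps this over every bond).
d = 8
psi = rng.standard_normal((d, d))
psi /= np.linalg.norm(psi)

U, S, Vh = np.linalg.svd(psi)        # Schmidt decomposition
chi = 3                              # target bond dimension
trunc = U[:, :chi] @ np.diag(S[:chi]) @ Vh[:chi, :]

# The truncated state is the best rank-chi approximation; its (unnormalized)
# overlap with psi equals the kept Schmidt weight sum_{k<chi} S_k^2.
overlap = np.sum(psi * trunc)
kept_weight = np.sum(S[:chi] ** 2)
print(np.allclose(overlap, kept_weight))  # True
```

The discarded weight 1 − Σ_{k<χ} S_k² is exactly the controlled error referred to in the text.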

#### Alternative methods for PEPS calculations

The PEPS construction is not limited to square lattices, but can be adapted to other lattices, higher dimensions, and even arbitrary interaction graphs. Clearly, the approximate contraction scheme we just presented works for any two-dimensional lattice, and in fact for any planar graph. In order to approximately contract systems in more than two dimensions, note that the approximate contraction scheme is essentially a scheme for reducing the dimension of the problem by one; thus, in order to contract e.g. three-dimensional systems we can nest two layers of the scheme. In cases with a highly connected PEPS graph (e.g., when considering systems with highly connected interaction graphs such as orbitals in a molecule), one can of course still try to find a sequential contraction scheme, though other contraction methods might be more promising.

The contraction method described in Section 3.1.1 is not the only contraction scheme for PEPS. One alternative method is based on renormalization ideas [38, 39, 40]: There, one takes small blocks of tensors and tries to approximate each block by a tensor with lower bond dimension by appropriate truncation,

 [figure (figs/terg): a block of tensors truncated into a single tensor with lower bond dimension]

Finding the best truncation requires exact knowledge of the environment, i.e., the contraction of the remaining tensor network. Since this is as hard as the original problem, heuristic methods to approximate the environment (such as contracting only a small number of surrounding tensors exactly and imposing some boundary condition beyond that) have been introduced. While these approximations are in principle less accurate and the error is less controlled, their more favorable scaling allows for larger bond dimensions D and thus potentially better approximations of the ground state.

Another approach to speed up PEPS contraction is using Monte Carlo sampling [41, 42, 43]: We can always write

 ⟨Ψ|O|Ψ⟩/⟨Ψ|Ψ⟩ = ∑_i p(i) ⟨i|O|Ψ⟩/⟨i|Ψ⟩ ,  (14)

where the sum runs over an orthonormal basis {|i⟩}, and p(i) = |⟨i|Ψ⟩|²/⟨Ψ|Ψ⟩; in particular, we want to consider the local spin basis |i⟩ = |i₁,…,i_N⟩. If we can compute ⟨i|Ψ⟩ and ⟨i|O|Ψ⟩ (where the latter reduces to the former if O is a local operator), then we can use Monte Carlo sampling to approximate the expectation value of O. In particular, for PEPS ⟨i|Ψ⟩ can again be evaluated by contracting a two-dimensional tensor network; however, this network now has bond dimension D rather than D². Thus, we can apply any of the approximate contraction schemes described before, but we can go to much larger D with the same computational resources; it should be noted, however, that the number of operations needs to be multiplied with the number of sample points taken, and that the accuracy of Monte Carlo sampling improves only with the inverse square root of the number of samples.
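The estimator of Eq. (14) can be sketched on a toy system, with a dense random wavefunction standing in for the (approximately contracted) PEPS amplitudes; the choice O = σˣ on the first spin and the single-spin-flip Metropolis walk are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy wavefunction Psi(s) over n spins; in a real calculation this would be
# the (approximately contracted) PEPS amplitude <s|Psi>.
n = 6
amps = rng.standard_normal(2 ** n)

def psi(s):                  # s is a tuple of n bits
    return amps[int("".join(map(str, s)), 2)]

def flip(s, k):              # flip spin k (used by both O = X_0 and Metropolis)
    t = list(s); t[k] ^= 1; return tuple(t)

# Estimator of Eq. (14) for O = X on spin 0: <s|O|Psi>/<s|Psi> = Psi(flip s)/Psi(s)
s = tuple(rng.integers(0, 2, n))
samples = []
for step in range(200000):
    k = rng.integers(n)
    t = flip(s, k)
    if psi(t) ** 2 >= psi(s) ** 2 * rng.random():   # Metropolis accept, p ~ |Psi|^2
        s = t
    samples.append(psi(flip(s, 0)) / psi(s))

exact = sum(psi(flip(s2, 0)) * psi(s2)
            for s2 in np.ndindex(*([2] * n))) / np.sum(amps ** 2)
print(abs(np.mean(samples) - exact) < 0.05)
```

The sampled mean converges to the exact expectation value with the 1/√(number of samples) accuracy mentioned above.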

#### PEPS and exactly solvable models

Just as MPS, PEPS are also very well suited for the construction of solvable models. First of all, many relevant states have exact PEPS representations: For instance, the 2D cluster state underlying measurement based computation [6] is a PEPS with D = 2 [7], as are various topological models such as Kitaev’s toric code state [32, 9, 22] or Levin and Wen’s string-net models [44, 45, 46]. A particularly interesting class of exact PEPS ansatzes is obtained by considering classical spin models H(s₁,…,s_N), such as the 2D Ising model H = −∑_{⟨i,j⟩} s_i s_j, and defining a quantum state

 |ψ⟩ = ∑_{s₁,…,s_N} e^{−βH(s₁,…,s_N)/2} |s₁,…,s_N⟩ .

It is easy to see that this state has the same correlation functions in the z basis as the classical Gibbs state at inverse temperature β. On the other hand, this state has a PEPS representation (called the “Ising PEPS”) with D = 2 [9]. This implies that in two dimensions, PEPS (e.g., the Ising PEPS at the critical β) can exhibit algebraically decaying correlations, which implies that any corresponding Hamiltonian must be gapless.10 This should be contrasted with the one-dimensional case, where we have seen that any MPS has exponentially decaying correlations and is the ground state of a gapped parent Hamiltonian.
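The claim that this state reproduces the classical Gibbs correlations can be checked by brute force on a small system; the 1D Ising chain and the particular correlator below are illustrative stand-ins for the 2D case.

```python
import numpy as np
from itertools import product

# Classical Ising energy H(s) = -sum s_i s_{i+1} on a small chain; the
# construction in the text assigns amplitude exp(-beta*H(s)/2) to |s>.
n, beta = 5, 0.7
configs = list(product([-1, 1], repeat=n))
H = lambda s: -sum(s[i] * s[i + 1] for i in range(n - 1))
amps = np.array([np.exp(-beta * H(s) / 2) for s in configs])

# Quantum correlator <Z_0 Z_2> in the state with these amplitudes
zz = np.array([s[0] * s[2] for s in configs])
quantum = np.sum(zz * amps ** 2) / np.sum(amps ** 2)

# Classical thermal correlator at inverse temperature beta
weights = np.exp([-beta * H(s) for s in configs])
classical = np.sum(zz * weights) / np.sum(weights)
print(np.allclose(quantum, classical))  # True
```

The check is almost tautological (the squared amplitudes are exactly the Gibbs weights), which is precisely the point of the construction.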

Clearly, parent Hamiltonians can be defined for PEPS just the same way as for MPS: The reduced density operator ρ of an ℓ × ℓ region lives on a d^{ℓ²}-dimensional system, yet can have rank at most D^{4ℓ}, as set by the bonds crossing the boundary. Thus, by growing ℓ we eventually reach the point where ρ becomes rank deficient, giving rise to a non-trivial parent Hamiltonian H with local terms projecting onto the kernel of ρ. Again, just as in one dimension it is possible to devise conditions under which the ground state of H is unique [49], as well as modified conditions under which the ground space of H has a topological structure [22], which turns out to be related to “hidden” (i.e., virtual) symmetries of the PEPS tensors. Unlike in one dimension, these parent Hamiltonians are not always gapped (such as for the critical Ising PEPS discussed above); however, techniques similar to the ones used in one dimension can be used to prove a gap for PEPS parent Hamiltonians under certain conditions [24].

### 3.2 Simulating time evolution and thermal states

Up to now, our discussion has been focused on ground states of many-body systems. However, the techniques described here can also be adapted to simulate thermal states as well as time evolution of systems governed by local Hamiltonians. In the following, we will discuss the implementation for one-dimensional systems; the generalization to 2D and beyond is straightforward.

Let us start by discussing how to simulate time evolution. (This will also form the basis for the simulation of thermal states.) We want to study how an initial MPS |ψ⟩ changes under the evolution e^{iHt} with a local Hamiltonian H; w.l.o.g., we consider H to be nearest-neighbor. To this end, we perform a Trotter decomposition

 e^{iHt} ≈ (e^{iH_even t/M} e^{iH_odd t/M})^M

where we split H = H_even + H_odd into even and odd terms (acting between sites (1,2), (3,4), …, and (2,3), (4,5), …, respectively), such that both H_even and H_odd are sums of non-overlapping terms. For large M, the Trotter expansion becomes exact, with the error scaling like O(t²/M). We can now write

 e^{iH_even τ} = ⨂_{i=1,3,5,…} e^{i h_{i,i+1} τ} = [figure (figs/trotterstep): each exponential acts as a two-site gate]

(with τ = t/M), and similarly for e^{iH_odd τ}. Thus, after one time step the initial MPS is transformed into

 [figure (figs/timeevol): the initial MPS with rows of two-site gates applied].

Here, the lowest line is the initial MPS, and the next two lines the evolution by H_odd and H_even for a time τ, respectively. We can proceed this way and find that the state after a time t is described as the boundary of a two-dimensional tensor network. We can then use the same procedure as for the approximate contraction of PEPS (proceeding row by row) to obtain an MPS description of the state at time t [5]. A caveat of this method is that it only works well as long as the state has low entanglement at all times, since only then does a good MPS approximation of the state exist [4, 50]. While this holds for low-lying excited states with a small number of quasiparticles, it is not true after a quench, i.e., a sudden change of the overall Hamiltonian of the system [51, 52]. However, this does not necessarily rule out the possibility to simulate time evolution using tensor networks: In order to compute an expectation value at time t, one only needs to contract a two-dimensional tensor network with no boundary, which can not only be done along the time direction (row-wise) but also along the space direction (column-wise), where such bounds on the correlations do not necessarily hold; indeed, much longer simulation times have been obtained this way [53].
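The O(t²/M) Trotter error can be verified directly on a small chain by exact diagonalization; the toy couplings (ZZ bonds plus a field term) below are illustrative, the point being only the split into non-overlapping even and odd terms.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], complex)
Z = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def kron_all(ops):               # tensor product of a list of one-site operators
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

n = 4
def bond(i):                     # Z_i Z_{i+1} coupling plus a field X_i
    ops = [I2] * n
    ops[i], ops[i + 1] = Z, Z
    zz = kron_all(ops)
    ops[i], ops[i + 1] = X, I2
    return zz + kron_all(ops)

H_even = bond(0) + bond(2)       # non-overlapping terms
H_odd = bond(1)
H = H_even + H_odd

def U(A, t):                     # e^{iAt} for Hermitian A via eigendecomposition
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(1j * lam * t)) @ V.conj().T

t = 1.0
errors = []
for M in (10, 20, 40):
    step = U(H_even, t / M) @ U(H_odd, t / M)
    errors.append(np.linalg.norm(np.linalg.matrix_power(step, M) - U(H, t)))
print(errors[0] > errors[1] > errors[2])  # True: error shrinks with M
```

Doubling M twice reduces the error by roughly a factor of four, consistent with the first-order 1/M scaling.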

In the same way as real time evolution, we can also implement imaginary time evolution; and since e^{−τH} acting on a random initial state approximates the ground state for τ → ∞, this can be used as an alternative algorithm for obtaining MPS approximations of ground states.
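The projection onto the ground state can be seen in a dense toy model (a random rotation of a fixed spectrum stands in for the Hamiltonian; the step size and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Imaginary time evolution: repeatedly applying e^{-tau H} to a generic random
# state and normalizing projects onto the ground state.
dim = 16
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
lam = np.arange(dim, dtype=float)      # toy spectrum 0, 1, 2, ... with unit gap
# H = Q diag(lam) Q^T; its ground state is the first column of Q
ground = Q[:, 0]

tau = 0.1
decay = np.exp(-tau * lam)             # e^{-tau H} in the eigenbasis
psi = rng.standard_normal(dim)
for _ in range(500):
    psi = Q @ (decay * (Q.T @ psi))    # one imaginary time step
    psi /= np.linalg.norm(psi)

print(abs(abs(ground @ psi) - 1) < 1e-8)  # True
```

Excited components are suppressed by e^{−τΔ} per step, so convergence is exponential in the gap Δ; this is the mechanism the MPS algorithm exploits.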

In order to simulate thermal states, we use Matrix Product Density Operators (MPDOs) [54]

 ρ = ∑_{i₁,…,i_N, j₁,…,j_N} tr[A^[1]_{i₁,j₁} ⋯ A^[N]_{i_N,j_N}] |i₁,…,i_N⟩⟨j₁,…,j_N| ,

where each tensor now has two physical indices, one for the ket and one for the bra layer. We can then write the thermal state as

 e^{−βH} = e^{−βH/2} 𝟙 e^{−βH/2}

and use imaginary time evolution (starting from the maximally mixed state 𝟙, which has a trivial tensor network description) and the Trotter decomposition to obtain a tensor network for e^{−βH}, which can again be transformed into an MPDO with bounded bond dimension using approximate contraction [54].

### 3.3 Other tensor network ansatzes

There are a number of other entanglement based ansatzes beyond MPS and PEPS for interacting quantum systems, some of which we will briefly sketch in the following.

Firstly, there is the Multiscale Entanglement Renormalization Ansatz (MERA) [55], which is an ansatz for scale invariant systems (these are systems at a critical point where the Hamiltonian is gapless, and which have algebraically decaying correlation functions), and which incorporates the scale-invariance in the ansatz. A first step towards a scale-invariant ansatz would be to choose a tree-like tensor network. However, such an ansatz will not have sufficient entanglement between different blocks. Thus, one adds additional disentanglers which serve to remove the entanglement between different blocks, which gives rise to the tensor network shown in Fig. 7.

In order to obtain an efficiently contractible tensor network, one chooses the tensors to be unitaries/isometries in the vertical direction, such that each tensor cancels with its adjoint. It is easy to see that this way, for any local observable O, most tensors in the tensor network for ⟨ψ|O|ψ⟩ cancel, and one only has to evaluate a tensor network of the size of the depth of the MERA, which is logarithmic in its length [55]. The MERA ansatz is not restricted to one dimension and can also be used to simulate critical systems in 2D and beyond [56].

A different variational class is obtained by studying states for which expectation values can be computed efficiently using Monte Carlo sampling. Following Eq. (14), this requires (for local quantities O) that we can compute ⟨i|Ψ⟩ efficiently for all basis states |i⟩. One class of states for which this holds is formed by MPS, which implies that we can evaluate expectation values for MPS using Monte Carlo sampling [42, 41] (note that the scaling in D is more favorable, since for a fixed basis state, ⟨i|Ψ⟩ can be computed with matrix-vector rather than matrix-matrix multiplications). This can be extended to the case where ⟨i|Ψ⟩ is a product of efficiently computable objects, such as products of MPS coefficients defined on subsets of spins: We can arrange overlapping one-dimensional strings in a 2D geometry and associate to each of them an MPS, yielding a class known as string-bond states [41, 57], which combines a flexible geometry with the favorable scaling of MPS-based methods. We can also consider ⟨i|Ψ⟩ to be a product of coefficients each of which only depends on the spins supported on a small plaquette, where the lattice is covered with overlapping plaquettes, yielding a family of states known as Entangled Plaquette States (EPS) [58] or Correlator Product States (CPS) [59], which again yields an efficient algorithm with flexible geometries. In all of these ansatzes, the energy is minimized by using a gradient method, which is considerably facilitated by the fact that the gradient can be sampled directly without the need to first compute the energy landscape.

In order to simulate infinite lattices, it is possible to extend MPS and PEPS to work for infinite systems: iMPS and iPEPS. The underlying idea is to describe the system by an infinite MPS or PEPS with a periodic pattern of tensors such as ABABAB… (which allows the system to break translational symmetry and makes the optimization more well-behaved). Then, one fixes all tensors except for one and minimizes the energy as a function of that tensor until convergence is reached. For the optimization, one needs to determine the dependence of the energy on the selected tensor, which can be accomplished in various ways, such as using the fixed point of the transfer operator, renormalization methods (cf. Section 3.1.2), or the corner transfer matrix approach. For more information, see, e.g., [60, 61, 62].

### 3.4 Simulation of fermionic systems

Up to now, we have considered the simulation of spin systems using tensor networks. On the other hand, in many cases of interest, such as for the Hubbard model or the simulation of molecules, the underlying systems are fermionic in nature. In the following, we will discuss how tensor network methods such as MPS, PEPS, or MERA can be extended to the simulation of fermionic systems.

In order to obtain a natural description of fermionic systems, the idea is to replace each object (i.e., tensor) in the construction of PEPS or MERA by fermionic operators [63, 64, 65]. This is, in the construction of MPS and PEPS, Fig. 3 and Fig. 6, both the maximally entangled bonds and the projection maps are now built from fermionic operators and need to preserve parity; equally, in the MERA construction, Fig. 7, all unitaries and isometries are fermionic in nature. The resulting states are called fermionic PEPS (fPEPS) and fermionic MERA (fMERA).

Let us now have a closer look at a fermionic tensor network, and discuss how to compute expectation values for those states. E.g., the fPEPS construction yields a state

 (P₁ ⊗ P₂ ⊗ ⋯)(ω₁ ⊗ ω₂ ⊗ ⋯)|Ω⟩ ,

where |Ω⟩ is the vacuum state, the ω_i create entangled fermionic states between the corresponding auxiliary modes, and the P_i map the auxiliary fermionic modes to the physical fermionic modes at site i (leaving the auxiliary modes in the vacuum). While the product of the ω_i contains only auxiliary mode operators in a given order, the product of the P_i contains the physical and auxiliary operators for each site grouped together. To compute expectation values, on the other hand, we need to move all the physical operators to the left and the virtual operators to the right in the product of the P_i; additionally, the virtual operators have to be arranged such that they cancel with the ones arising from the product of the ω_i. Due to the fermionic anti-commutation relations, this reordering of fermionic operators results in an additional complication which was not present for spin systems. Fortunately, it turns out that there are various ways to take care of the ordering of fermionic operators at no extra computational cost: One can use a Jordan-Wigner transformation to transform the fermionic system to a spin system [63, 65]; one can map the fPEPS to a normal PEPS with one additional bond which takes care of the fermionic anticommutation relations [64]; or one can replace the fermionic tensor network by a planar spin tensor network with parity preserving tensors, where each crossing of lines [note that a planar embedding of a network such as the 2D expectation value in Eq. (11) gives rise to crossings of lines, which correspond to the reordering of fermionic operators] is replaced by a tensor which takes care of the anticommutation rules [66, 67].
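A minimal sketch of the Jordan-Wigner route: on a few modes, the fermionic operators are represented as a string of Z's followed by a local ladder operator, and one can verify the canonical anticommutation relations directly (the three-mode system is of course only illustrative).

```python
import numpy as np

# Jordan-Wigner transformation: fermionic mode operators on n sites are
# represented as Z x ... x Z x (ladder operator) x I x ... x I;
# check the canonical anticommutation relations {c_i, c_j^dag} = delta_ij.
Z = np.array([[1, 0], [0, -1]], complex)
Sm = np.array([[0, 0], [1, 0]], complex)   # local ladder operator
I2 = np.eye(2)

def jw(j, n):                              # mode operator c_j on n sites
    ops = [Z] * j + [Sm] + [I2] * (n - j - 1)
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

n = 3
c = [jw(j, n) for j in range(n)]
anti = lambda A, B: A @ B + B @ A

ok = all(np.allclose(anti(c[i], c[j].conj().T), np.eye(2 ** n) * (i == j))
         and np.allclose(anti(c[i], c[j]), 0)
         for i in range(n) for j in range(n))
print(ok)  # True
```

The Z-string is exactly the bookkeeping device for the operator reordering discussed above: it picks up the sign of all occupied modes to the left.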

## 4 Quantum Complexity: Understanding the limitations to our understanding

In the preceding sections, we have learned about different methods to simulate the physics of many-body systems governed by local interactions, using their specific entanglement structure. While these algorithms work very well in practice, the convergence of these methods can almost never be rigorously proven. But does this mean that we just haven’t found the right proof yet, or is there a fundamental obstacle to proving that these methods work – and if yes, why do they work after all?

### 4.1 A crash course in complexity classes

These questions can be answered using the tools of complexity theory, which deals with classifying the difficulty of problems. Difficulty is typically classified by the resources needed to solve the problem on a computer (such as computation time or memory) as a function of the problem size, this is, the length of the problem description.

Let us start by introducing some relevant complexity classes.11 To simplify the comparison of problems, complexity theorists usually focus on “decision problems”, i.e., problems with a yes/no answer. While this seems overly restrictive, being able to answer the right yes/no question typically enables one to also find an actual solution to the problem. (E.g., one can ask whether the solution is in a certain interval, and divide this interval by two in every step.) So let us get started with the complexity classes:
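The interval-halving argument can be made concrete: given only a yes/no oracle answering "is the minimum below x?", bisection recovers the value itself to any accuracy (the toy energy function below is purely illustrative).

```python
# Recovering a numerical answer from yes/no questions: given an oracle that
# answers "is the minimum below x?", bisection pins down the minimum itself.
def minimum_via_oracle(is_below, lo, hi, tol=1e-9):
    # Invariant: the true minimum lies in [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_below(mid):      # one decision-problem query per step
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Toy "ground state energy": minimum of a quartic evaluated on a grid.
energies = [(x / 100) ** 4 - (x / 100) ** 2 for x in range(-200, 201)]
oracle = lambda x: min(energies) < x   # stands in for the hard decision problem
est = minimum_via_oracle(oracle, -10.0, 10.0)
print(abs(est - min(energies)) < 1e-6)  # True
```

Only logarithmically many oracle queries are needed, which is why restricting to decision problems loses nothing for complexity classification.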

• P (“polynomial time”): The class of problems which can be solved in a time which scales at most as some polynomial in the size of the problem. For instance, multiplying two numbers is in P – the number of elementary operations scales like the product of the number of digits of the two numbers to be multiplied. Generally, we consider the problems in P to be those which are efficiently solvable.

• NP (“non-deterministic polynomial time”): The class of all decision problems where for “yes” instances, there exists a “proof” (i.e., the solution to the problem) whose correctness can be verified in polynomial time, while for “no” instances, no such proof exists. Typical problems are decomposing a number into its prime factors (given the prime factors, their product can easily be verified), or checking whether a given graph can be colored with a certain number of colors without coloring two adjacent vertices in the same color (for a given coloring, it can be efficiently checked whether all adjacent vertices have different colors).

• BQP (“bounded-error quantum polynomial time”), the quantum version of P: The class of problems which can be (probabilistically) solved by a quantum computer in polynomial time; a famous example of a problem in BQP which is not known to be in P is factoring, which is solved by Shor’s algorithm [70].

• QMA (quantum Merlin Arthur), one quantum version of NP: The class of decision problems where for a “yes” instance, there exists a quantum proof (i.e., a quantum state) which can be efficiently verified by a quantum computer, and no such proof exists for a “no” instance. For example, determining whether the ground state energy of a local Hamiltonian is below some threshold (up to a certain accuracy) is in QMA – in a “yes” case, the proof would be any state with energy below the threshold, and the energy can be efficiently estimated by a quantum computer, using e.g. phase estimation [71].

The relation of the classes to each other is illustrated in Fig. 8. All of the inclusions shown are generally believed to be strict, but none of them has up to now been rigorously proven. In particular, the question whether P = NP is probably the most famous open problem in theoretical computer science.

An important concept is that of complete problems: A problem is complete for a class if it is as hard as any problem in the class. Completeness is established by showing that any problem in the class can be reduced to the complete problem, this is, it can be reformulated as an instance of the complete problem in polynomial time. The relevance of complete problems lies in the fact that they are the hardest problems in a class: This is, if a problem is e.g. NP-complete, and we believe that P ≠ NP, then this problem cannot be solved in polynomial time; and on similar grounds, QMA-complete problems cannot be solved in polynomial time by either a classical or a quantum computer. We will discuss physically relevant complete problems relating to quantum many-body systems in the next section.

### 4.2 The complexity of problems in many-body physics

In the following, we will discuss the computational complexity of quantum many-body problems. The main problem we will consider is determining whether the ground state energy of a Hamiltonian which is a sum of few-body terms is below a certain threshold (up to a certain accuracy). As we have argued above, this problem is inside the class QMA. In the following, we will argue that the problem is in fact QMA-complete, which implies that it is hard not only for classical, but even for quantum computers; the following construction is due to Kitaev [71, 72]. To this end, we need to consider an arbitrary problem in QMA and show that we can rephrase it as computing the ground state energy of a certain Hamiltonian. A problem in QMA is described by specifying the quantum circuit which verifies the proof, and asking whether there exists an input which it accepts with high probability. The verifying circuit is illustrated in Fig. 9. Now let us assume that there is a valid proof, and run the verifier with the proof (plus ancillas initialized to |0⋯0⟩) as an input. At each time step t, the state of the register the verifier acts on is denoted by |ψ_t⟩, and the proof is accepted if the first qubit of the state at the final time T is in state |1⟩.

Our aim will now be to build a Hamiltonian which has the “time history state”

 |Ψ⟩ = ∑_t |ψ_t⟩_R |t⟩_T  (15)

as a ground state. To this end, we will use three types of terms: One term, H_init, will ensure that the ancillas are in the state |0⋯0⟩ at time t = 0, by increasing the energy of states with incorrectly initialized ancillas at t = 0. A second term, H_prop, will ensure correct propagation from time t to t + 1, i.e., |ψ_{t+1}⟩ = U_t|ψ_t⟩. Finally, a last term, H_final, will increase the energy if the proof is not accepted, i.e., if the first qubit of |ψ_T⟩ is not |1⟩. Together,

 H=Hinit+Hprop+Hfinal (16)

has a ground state with minimal energy if there exists a valid proof [which in that case is exactly the time history state of Eq. (15)]. On the other hand, in a “no” instance in which no valid proof exists, any ground state – which is still of the form (15) – must either start with incorrectly initialized ancillas (violating H_init), or propagate the states incorrectly (violating H_prop), or be rejected by H_final, thereby increasing the energy sufficiently to reliably distinguish “yes” from “no” instances. Together, this demonstrates that finding the ground state energy of the Hamiltonian (16) is a QMA-complete problem, showing that it is intractable even for a quantum computer (assuming, as is widely believed, that the inclusions of Fig. 8 are strict) [71, 72].
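The propagation term can be made explicit for a toy circuit. The sketch below builds H_prop for a one-qubit circuit of two (illustrative) gates and checks that the history state is a zero-energy ground state; H_init and H_final are omitted for brevity.

```python
import numpy as np

# Kitaev's propagation Hamiltonian for a toy one-qubit "verifier circuit":
# the history state sum_t |t>|psi_t> is its zero-energy ground state.
H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T_gate = np.diag([1, np.exp(1j * np.pi / 4)])
Us = [H_gate, T_gate]                  # circuit of T = 2 gates (illustrative)
T = len(Us)

def clock(t, s):                       # |t><s| on the (T+1)-dimensional clock
    m = np.zeros((T + 1, T + 1), complex)
    m[t, s] = 1
    return m

I2 = np.eye(2)
Hprop = sum(0.5 * (np.kron(clock(t, t) + clock(t + 1, t + 1), I2)
                   - np.kron(clock(t + 1, t), Us[t])
                   - np.kron(clock(t, t + 1), Us[t].conj().T))
            for t in range(T))

psi0 = np.array([1, 0], complex)       # initial register state
states = [psi0]
for Ug in Us:
    states.append(Ug @ states[-1])     # |psi_{t+1}> = U_t |psi_t>
history = sum(np.kron(np.eye(T + 1)[t], s) for t, s in enumerate(states))
history /= np.linalg.norm(history)

evals = np.linalg.eigvalsh(Hprop)
print(np.allclose(Hprop @ history, 0), evals[0] >= -1e-12)  # prints: True True
```

Each term penalizes any pair of clock-adjacent components that is not related by the correct gate, which is why only correctly propagated histories have zero energy.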

While the Hamiltonian (16) is not spatially local, there have been a number of works successively proving QMA-hardness for simpler and simpler Hamiltonians. In particular, nearest-neighbor Hamiltonians on a 2D square lattice of qubits are QMA-complete [73], and, most notably, so are nearest-neighbor Hamiltonians on one-dimensional chains [74], thus substantiating the difficulties encountered in searching for convergence proofs for simulation methods even in 1D.

Let us note that there is also a classical version of the above construction, showing NP-completeness of classical ground state problems: In that case, we can use a “history state”

 |ψ0⟩⊗|ψ1⟩⊗⋯⊗|ψT⟩

with no coherence between different times, where we think of the states as being arranged in columns. The Hamiltonian is similar to the one before, Eq. (16), just that the individual terms are no longer conditioned on the value of a “time” register, but simply act on the columns of the history state corresponding to the right time t. (Sloppily speaking, the reason that we can do so is that all terms in the Hamiltonian act in a classical basis, and classical information can be cloned.) The resulting Hamiltonian is at most four-local, acting across plaquettes of adjacent spins. Again, it has been shown using different techniques that even classical two-dimensional models with Ising-type couplings and local fields are NP-complete [33].

Given all these hardness results for determining ground state properties of classical and quantum spin systems, one might wonder why in practice, simulation methods often work very well. First, let us note that these hardness results tell us why in many cases, we cannot prove that numerical methods work: This would require identifying conditions which exclude all hard instances while still keeping the relevant problems, and finding a proof relying exactly on these conditions. Indeed, there are many properties which separate practical problems from the hard instances we discussed: For instance, in practice one usually considers translationally invariant (or nearly translationally invariant) systems (though there exist hardness results for translationally invariant systems even in one dimension [75]), and the systems considered will have various symmetries. Even more importantly, all the QMA-complete models known have a spectral gap which closes polynomially in the system size, as opposed to the typically constant gap in many problems of interest. On the other hand, the NP-complete Hamiltonian outlined above is completely classical and thus has a constant gap, while still being NP-complete. Overall, while there is no doubt that numerical methods are extremely successful in practice, computational complexity tells us that we have to carefully assess their applicability in every new scenario. It should also be pointed out that the fact that a family of problems (such as ground state problems) is complete for a certain complexity class addresses only the worst-case complexity, and in many cases, typical instances of the same problem can be solved much more efficiently.

## 5 Summary

In this lecture, we have given an overview over condensed matter applications of quantum information methods, and in particular of entanglement theory. The main focus was on entanglement-based ansatzes for the description and simulation of quantum many-body systems. We started by discussing the area law for the entanglement entropy which is obeyed by ground states of local interactions, and used this to derive the Matrix Product State (MPS) ansatz which is well suited to describe the physics of such systems. We showed that the one-dimensional structure of MPS allows for the efficient evaluation of expectation values, and that this can be used to build a variational algorithm for the simulation of one-dimensional systems. We have also shown how MPS appear as ground states of local “parent Hamiltonians”, which makes them suitable to construct solvable models, and thus for the study of specific models as well as the general structure of quantum phases.

We have then discussed Projected Entangled Pair States (PEPS), which naturally generalize MPS and are well suited for the description of two-dimensional systems, and we have shown how approximation methods can be used to implement efficient PEPS based simulations. Again, PEPS can also be used to construct solvable models very much the same way as for MPS. We have subsequently demonstrated that MPS and PEPS can be used to simulate the time evolution and thermal states of systems governed by local Hamiltonians. Moreover, we have discussed other tensor network based approaches, such as MERA for scale-invariant systems or iMPS and iPEPS for infinite systems, and concluded with a discussion on how to apply tensor network methods to fermionic systems.

Finally, we have introduced another field in which quantum information tools are useful to understand condensed matter systems, namely the field of computational complexity. Complexity tools allow us to understand which problems we can or cannot solve. In particular, we have discussed the class QMA which contains the problems which are hard even for quantum computers, and we have argued that many ground state problems, including 2D lattices of qubits and 1D chains, are complete for this class. While this does not invalidate the applicability of simulation methods to these problems, it tells us that special care has to be taken, and points out to us why we cannot hope for convergence proofs without imposing further conditions.

### Footnotes

1. footnotetext: Lecture Notes of the IFF Spring School “Quantum Information Processing”, edited by D. DiVincenzo (Forschungszentrum Jülich, 2013).
2. We ignore fermionic systems for the moment, but as it turns out, many of the results described in this lecture also hold for fermionic systems, see Sec. 3.4.
3. The equivalence of (2) and (4) can be proven straightforwardly by noting that for two maps and , and the bond between them, it holds that and iterating this argument through the chain.
4. Strictly speaking, this bound only follows from an area law for a Rényi entropy with α < 1, with the bond dimension D in (6) depending on the desired accuracy [4], which also holds for gapped Hamiltonians [3]. The proof uses the fact that a bound on the area law implies a fast decay of the Schmidt coefficients (i.e., the eigenvalues of the reduced density operator), and thus, one can construct an MPS by sequentially doing Schmidt decompositions of the state and discarding all but the largest Schmidt coefficients [5, 4].
5. Firstly, one uses that the partial contractions to the left and to the right of the current site need to be computed only once (this can be simplified even further by choosing the appropriate gauge [11, 12]), which improves the scaling in the system size. Secondly, one slightly changes the contraction order: Starting from the left, one contracts the tensors one at a time, which only involves multiplications of matrices whose sizes are bounded by the bond dimension (times the physical dimension), yielding a more favorable scaling in D.
6. MPS have a natural gauge degree of freedom, since for any X_s with a right inverse X_s⁻¹, we can always replace A^[s]_i ↔ A^[s]_i X_s and A^[s+1]_i ↔ X_s⁻¹ A^[s+1]_i without changing the state; this gauge degree of freedom can be used to obtain standard forms for MPS with particularly nice properties [13, 12].
7. This bond can be easily replaced by a normal bond by applying an appropriate transformation on one side, which can subsequently be absorbed in the adjacent projection map.
8. This condition is known as “injectivity” and states that there is an L₀ such that after blocking L₀ sites, the map from the “open bonds” to the physical system is injective, or, differently speaking, that we can infer the state of the bonds by looking at the physical system; the AKLT state acquires injectivity for L₀ = 2. It can then be shown that the parent Hamiltonian defined on L₀ + 1 sites has a unique ground state; in the case of the AKLT model, one can additionally show that the ground space of this three-body parent Hamiltonian and the original two-body one are identical [21, 13, 12, 22, 23].
9. This is nothing but the statement that two purifications of a mixed state are identical up to an isometry on the purifying system (this can be easily proven using the Schmidt decomposition, cf. Sec. 2.5 of Ref. [29]), where the additional index corresponds to the purifying system.
10. This follows from the so-called “exponential clustering theorem” [47, 48] which states that ground states of gapped Hamiltonians have exponentially decaying correlation functions.
11. This is going to be a very brief introduction. If you want to learn more, consult e.g. Refs. [68, 69].

### References

1. C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A 53, 2046 (1996), quant-ph/9511030.
2. J. Eisert, M. Cramer, and M. Plenio, Rev. Mod. Phys. 82, 277 (2010), arXiv:0808.3773.
3. M. Hastings, J. Stat. Mech. P08024 (2007), arXiv:0705.2024.
4. F. Verstraete and J. I. Cirac, Phys. Rev. B 73, 094423 (2006), cond-mat/0505140.
5. G. Vidal, Phys. Rev. Lett. 93, 040502 (2004), quant-ph/0310089.
6. R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001), quant-ph/0010033.
7. F. Verstraete and J. I. Cirac, Phys. Rev. A 70, 060302 (2004), quant-ph/0311130.
8. C. K. Majumdar and D. Ghosh, J. Math. Phys. 10, 1388 (1969).
9. F. Verstraete, M. M. Wolf, D. Perez-Garcia, and J. I. Cirac, Phys. Rev. Lett. 96, 220601 (2006), quant-ph/0601075.
10. N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 98, 140506 (2007), quant-ph/0611050.
11. U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005), cond-mat/0409292.
12. D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, Quant. Inf. Comput. 7, 401 (2007), quant-ph/0608197.
13. M. Fannes, B. Nachtergaele, and R. F. Werner, Commun. Math. Phys. 144, 443 (1992).
14. J. Eisert, Phys. Rev. Lett. 97, 260501 (2006), quant-ph/0609051.
15. N. Schuch, I. Cirac, and F. Verstraete, Phys. Rev. Lett. 100, 250501 (2008), arXiv:0802.3351.
16. S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
17. F. Verstraete, D. Porras, and J. I. Cirac, Phys. Rev. Lett. 93, 227205 (2004), cond-mat/0404706.
18. U. Schollwöck, Ann. Phys. 326, 96 (2011), arXiv:1008.3477.
19. F. Verstraete, V. Murg, and J. I. Cirac, Advances in Physics 57, 143 (2008).
20. I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Phys. Rev. Lett. 59, 799 (1987).
21. I. Affleck, T. Kennedy, E. H. Lieb, and H. Tasaki, Commun. Math. Phys. 115, 477 (1988).
22. N. Schuch, I. Cirac, and D. Pérez-García, Ann. Phys. 325, 2153 (2010), arXiv:1001.3807.
23. N. Schuch, D. Poilblanc, J. I. Cirac, and D. Pérez-García, Phys. Rev. B 86, 115108 (2012), arXiv:1203.4816.
24. N. Schuch, D. Perez-Garcia, and I. Cirac, Phys. Rev. B 84, 165139 (2011), arXiv:1010.3732.
25. D. Perez-Garcia, M. Wolf, M. Sanz, F. Verstraete, and J. Cirac, Phys. Rev. Lett. 100, 167202 (2008), arXiv:0802.0447.
26. M. Sanz, M. M. Wolf, D. Perez-Garcia, and J. I. Cirac, Phys. Rev. A 79, 042308 (2009), arXiv:0901.2223.
27. F. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, Phys. Rev. B 81, 064439 (2010), arXiv:0910.1811.
28. X. Chen, Z. Gu, and X. Wen, Phys. Rev. B 83, 035107 (2011), arXiv:1008.3745.
29. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000).
30. B. Nachtergaele, Commun. Math. Phys. 175, 565 (1996).
31. X.-G. Wen, Quantum Field Theory of Many Body Systems (Oxford University Press, 2004).
32. A. Kitaev, Ann. Phys. 303, 2 (2003), quant-ph/9707021.
33. F. Barahona, J. Phys. A 15, 3241 (1982).
34. S. Yan, D. A. Huse, and S. R. White, Science 332, 1173 (2011), arXiv:1011.6114.
35. M. B. Hastings, Phys. Rev. B 73, 085115 (2006), cond-mat/0508554.
36. M. B. Hastings, Phys. Rev. B 76, 035114 (2007), cond-mat/0701055.
37. F. Verstraete and J. I. Cirac, (2004), cond-mat/0407066.
38. H. C. Jiang, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett. 101, 090603 (2008), arXiv:0806.3719.
39. Z.-C. Gu, M. Levin, and X.-G. Wen, Phys. Rev. B 78, 205116 (2008), arXiv:0806.3509.
40. Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett. 103, 160601 (2009), arXiv:0809.0182.
41. N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 100, 040501 (2008), arXiv:0708.1567.
42. A. W. Sandvik and G. Vidal, Phys. Rev. Lett. 99, 220602 (2007), arXiv:0708.2232.
43. L. Wang, I. Pizorn, and F. Verstraete, Phys. Rev. B 83, 134421 (2011), arXiv:1010.5450.
44. M. A. Levin and X.-G. Wen, Phys. Rev. B 71, 045110 (2005), cond-mat/0404617.
45. O. Buerschaper, M. Aguado, and G. Vidal, Phys. Rev. B 79, 085119 (2009), arXiv:0809.2393.
46. Z.-C. Gu, M. Levin, B. Swingle, and X.-G. Wen, Phys. Rev. B 79, 085118 (2009), arXiv:0809.2821.
47. M. B. Hastings and T. Koma, Commun. Math. Phys. 265, 781 (2006), math-ph/0507008.
48. B. Nachtergaele and R. Sims, Commun. Math. Phys. 265, 119 (2006), math-ph/0506030.
49. D. Perez-Garcia, F. Verstraete, J. I. Cirac, and M. M. Wolf, Quant. Inf. Comput. 8, 0650 (2008), arXiv:0707.2260.
50. N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 100, 030504 (2008), arXiv:0705.0292.
51. P. Calabrese and J. Cardy, J. Stat. Mech. P04010 (2005), cond-mat/0503393.
52. N. Schuch, M. M. Wolf, K. G. H. Vollbrecht, and J. I. Cirac, New J. Phys. 10, 033032 (2008), arXiv:0801.2078.
53. M. C. Bañuls, M. B. Hastings, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 102, 240603 (2009), arXiv:0904.1926.
54. F. Verstraete, J. J. Garcia-Ripoll, and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004), cond-mat/0406426.
55. G. Vidal, Phys. Rev. Lett. 101, 110501 (2008), quant-ph/0610099.
56. G. Evenbly and G. Vidal, Phys. Rev. Lett. 102, 180406 (2009), arXiv:0811.0879.
57. A. Sfondrini, J. Cerrillo, N. Schuch, and J. I. Cirac, Phys. Rev. B 81, 214426 (2010), arXiv:0908.4036.
58. F. Mezzacapo, N. Schuch, M. Boninsegni, and J. I. Cirac, New J. Phys. 11, 083026 (2009), arXiv:0905.3898.
59. H. J. Changlani, J. M. Kinder, C. J. Umrigar, and G. K.-L. Chan, arXiv:0907.4646.
60. G. Vidal, Phys. Rev. Lett. 98, 070201 (2007), cond-mat/0605597.
61. J. Jordan, R. Orus, G. Vidal, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 101, 250602 (2008), cond-mat/0703788.
62. R. Orus and G. Vidal, Phys. Rev. B 80, 094403 (2009), arXiv:0905.3225.
63. P. Corboz, G. Evenbly, F. Verstraete, and G. Vidal, Phys. Rev. A 81, 010303(R) (2010), arXiv:0904.4151.
64. C. V. Kraus, N. Schuch, F. Verstraete, and J. I. Cirac, Phys. Rev. A 81, 052338 (2010), arXiv:0904.4667.
65. C. Pineda, T. Barthel, and J. Eisert, Phys. Rev. A 81, 050303(R) (2010), arXiv:0905.0669.
66. P. Corboz and G. Vidal, Phys. Rev. B 80, 165129 (2009), arXiv:0907.3184.
67. P. Corboz, R. Orus, B. Bauer, and G. Vidal, Phys. Rev. B 81, 165104 (2010), arXiv:0912.0646.
69. M. Sipser, Introduction to the Theory of Computation (Brooks Cole, 2006).
70. P. W. Shor, SIAM J. Comput. 26, 1484 (1997), quant-ph/9508027.
71. A. Y. Kitaev, A. H. Shen, and M. N. Vyalyi, Classical and quantum computation (American Mathematical Society, Providence, Rhode Island, 2002).
72. D. Aharonov and T. Naveh, (2002), quant-ph/0210077.
73. R. Oliveira and B. M. Terhal, Quant. Inf. Comput. 8, 900 (2009), quant-ph/0504050.
74. D. Aharonov, D. Gottesman, S. Irani, and J. Kempe, Commun. Math. Phys. 287, 41 (2009), arXiv:0705.4077.
75. D. Gottesman and S. Irani, Proc. 50th Annual Symp. on Foundations of Computer Science, 95 (2009), arXiv:0905.2419.