# Neural Networks Quantum States, String-Bond States and chiral topological states

## Abstract

Neural Networks Quantum States have been recently introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between Neural Networks Quantum States in the form of Restricted Boltzmann Machines and some classes of Tensor Network states in arbitrary dimension. In particular we demonstrate that short-range Restricted Boltzmann Machines are Entangled Plaquette States, while fully connected Restricted Boltzmann Machines are String-Bond States with a non-local geometry and low bond dimension. These results shed light on the underlying architecture of Restricted Boltzmann Machines and their efficiency at representing many-body quantum states. String-Bond States also provide a generic way of enhancing the power of Neural Networks Quantum States and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of Tensor Networks and the efficiency of Neural Network Quantum States into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional Tensor Networks, we show that due to their non-local geometry Neural Networks Quantum States and their String-Bond States extension can describe a lattice Fractional Quantum Hall state exactly. In addition, we provide numerical evidence that Neural Networks Quantum States can approximate a chiral spin liquid with better accuracy than Entangled Plaquette States and local String-Bond States. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of String-Bond States as a tool in more traditional machine learning applications.

## 1 Introduction

Recognizing complex patterns is a central problem which pervades all fields of science. The increased computational power of modern computers has allowed the application of advanced methods to the extraction of such patterns from humongous amounts of data, and we are witnessing an ever increasing effort to find novel applications in numerous disciplines. This has led to a line of research now called Quantum Machine Learning[1], which is divided into two main branches. The first tries to develop quantum algorithms capable of learning, i.e. to exploit speed-ups from quantum computers to make machines learn faster and better. The second, which we consider in this work, uses classical machine learning algorithms to extract insightful information about quantum systems.

The versatility of machine learning has allowed scientists to employ it in a number of problems ranging from quantum control[2] and error-correcting codes[5] to tomography[6]. In the last few years we have also seen interesting developments for some central problems in condensed matter physics, such as quantum phase classification and recognition[7], the improvement of dynamical mean-field theory[11], the enhancement of Quantum Monte Carlo methods[12] and the approximation of thermodynamic observables in statistical systems[14].

An idea which has received a lot of attention from the scientific community consists in using neural networks as variational wave functions to approximate ground states of many-body quantum systems[15]. These networks are trained/optimized by the standard Variational Monte Carlo (VMC) method and, while a few different neural-network architectures have been tested[15], the most promising results so far have been achieved with Boltzmann Machines[18]. In particular, state-of-the-art numerical results have been obtained on popular models with Restricted Boltzmann Machines (RBM), and recent work has demonstrated the power of Deep Boltzmann Machines to represent ground states of many-body Hamiltonians with polynomial-size gap and quantum states generated by polynomial-size quantum circuits[19].

Other seemingly unrelated classes of states that are widely used in condensed matter physics are Tensor Network States. In 1D, Matrix Product States (MPS) can approximate ground states of physical Hamiltonians efficiently[21], and their structure has led both to analytical insights into the entanglement properties of physical systems and to efficient variational algorithms for approximating them[23]. The natural extension of MPS to higher-dimensional systems are Projected Entangled Pair States (PEPS)[26], but their exact contraction is hard[27] and algorithms for optimizing them need to rely on approximations. Another approach to defining higher-dimensional Tensor Networks consists in first dividing the lattice into overlapping clusters of spins. The wave function of the spins in each cluster is then described by a simple Tensor Network, and the global wave function is taken to be the product of these Tensor Networks, which introduces correlations among the different clusters. This construction for local clusters parametrized by a full tensor gives rise to Entangled Plaquette States (EPS)[28], while taking one-dimensional clusters of spins, each described by a MPS, leads to a String-Bond States (SBS) Ansatz[31]. These states can be variationally optimized using the VMC method[33] and have been applied to 2D and 3D systems.

All these variational wave functions have been successful in describing strongly correlated quantum many-body systems, including topologically ordered states. The toric code[34] is a prototypical example which can be written exactly as a PEPS[35], an EPS[30], a SBS[31] or a short-range RBM[36]. This shows that in some cases Tensor Networks and Neural Networks Quantum States can be related. Indeed, it was recently shown that local Tensor Networks can be represented efficiently by Deep Boltzmann Machines[19]. Not every topological state, however, can easily be represented by local Tensor Networks. A class of states for which this is challenging are chiral topological states breaking time-reversal symmetry. Such states were first realized in the context of the Fractional Quantum Hall (FQH) effect[38], and significant progress has since been made towards the construction of lattice models displaying the same physics, either in Hamiltonians realizing fractional Chern insulators[39] or in quantum anti-ferromagnets on several lattices[45]. One approach to describing the wave function of these anti-ferromagnets is to use parton-constructed wave functions[48]. It has also been suggested to construct chiral lattice wave functions from the FQH continuum wave functions, the paradigmatic example being the Kalmeyer-Laughlin wave function[52]. Efforts to construct chiral topological states with PEPS have been undertaken recently[53], but the resulting states are critical. In the non-interacting case it has moreover been proven that the local parent Hamiltonian of a chiral fermionic Gaussian PEPS has to be gapless[54].

In this work we show that there is a strong relation between Restricted Boltzmann Machines and Tensor Network States in arbitrary dimension. We demonstrate that short-range RBM are a special subclass of EPS, while fully-connected RBM are a subclass of SBS with a flexible non-local geometry and low bond dimension. This relation provides additional insight into the geometric structure of RBM and their efficiency. We discuss the advantages and drawbacks of RBM and SBS and provide a way to combine them. This generalization, in the form of non-local String-Bond States, leverages both the entanglement structure of Tensor Networks and the efficiency of RBM. It allows for the description of states with larger local Hilbert space and has a flexible geometry. It can moreover be combined with more traditional Ansatz wave functions that serve as an initial approximation of the ground state.

We then apply these methods to the challenging problem of approximating chiral topological states. We prove that any Jastrow wave function, and thus the Kalmeyer-Laughlin wave function, can be written exactly as a RBM. We moreover show that a remarkable accuracy can be achieved numerically with far fewer parameters than are required for an exact construction. We numerically evaluate the power of EPS, SBS and RBM to approximate the ground state of a chiral spin liquid for which the Laughlin state is already a good approximation[45], and find that RBM and non-local SBS are able to achieve lower energy than the Laughlin wave function. By combining these classes of states with the Laughlin wave function, we are able to reach even lower energies and to characterize the properties of the ground state of the model.

The paper is organized as follows: in Section 2 we introduce the Variational Monte Carlo method and how it can be used to optimize both Tensor Networks and Neural Networks States. In Section 3 the mapping between RBM, EPS and SBS is derived and its geometric implications are discussed. Finally we apply these techniques to the approximation of chiral topological states in Section 4.

## 2 Variational Monte Carlo with Tensor Networks and Neural Network States

### 2.1 The Variational Monte Carlo method

Given a general Hamiltonian $H$, one of the main challenges of quantum many-body physics is to find its ground state $|\psi_0\rangle$ satisfying the Schrödinger equation $H|\psi_0\rangle = E_0|\psi_0\rangle$. This eigenvalue problem can be mapped to an optimization problem through the variational principle, which states that the energy of any quantum state is at least the energy of the ground state. A general pure quantum state on a lattice of $N$ spins can be expressed in the basis spanned by $|s_1,\dots,s_N\rangle$, where the $s_i$ are the projections of the spins on the z axis, as

$$|\psi\rangle = \sum_{s_1,\dots,s_N} \psi(s_1,\dots,s_N)\, |s_1,\dots,s_N\rangle.$$

Finding the ground state amounts to finding the exponentially many coefficients $\psi(s_1,\dots,s_N)$ minimizing the energy, which can only be done exactly for small system sizes. Instead of searching for the ground state in the full Hilbert space, one may restrict the search to an Ansatz class specified by a particular form of the function $\psi_{\theta}(s_1,\dots,s_N)$ depending on polynomially many variational parameters $\theta$. The Variational Monte Carlo method [58] (VMC) provides a general algorithm for optimizing the energy of such a wave function. One can compute the energy by expressing it as

$$E = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle} = \sum_{\mathbf{s}} p(\mathbf{s})\, E_{\mathrm{loc}}(\mathbf{s}), \qquad p(\mathbf{s}) = \frac{|\psi(\mathbf{s})|^2}{\sum_{\mathbf{s}'} |\psi(\mathbf{s}')|^2}, \qquad E_{\mathrm{loc}}(\mathbf{s}) = \sum_{\mathbf{s}'} H_{\mathbf{s}\mathbf{s}'}\, \frac{\psi(\mathbf{s}')}{\psi(\mathbf{s})},$$

where $\mathbf{s} = (s_1,\dots,s_N)$ is a spin configuration, $p(\mathbf{s})$ is a classical probability distribution and the local energy $E_{\mathrm{loc}}(\mathbf{s})$ can be evaluated efficiently for Hamiltonians involving few-body interactions. The energy is therefore an expectation value with respect to a probability distribution, which can be evaluated using Markov Chain Monte Carlo sampling techniques such as the Metropolis-Hastings algorithm [60]. The second ingredient required to minimize the energy with respect to the parameters is the gradient of the energy, which can be expressed in a similar form since

$$\partial_{\theta} E = 2\,\mathrm{Re}\Big[\big\langle E_{\mathrm{loc}}(\mathbf{s})\, \Delta_{\theta}^{*}(\mathbf{s})\big\rangle_{p} - \big\langle E_{\mathrm{loc}}(\mathbf{s})\big\rangle_{p}\, \big\langle \Delta_{\theta}^{*}(\mathbf{s})\big\rangle_{p}\Big],$$

where we have defined $\Delta_{\theta}(\mathbf{s}) = \partial_{\theta}\log\psi(\mathbf{s})$ as the log-derivative of the wave function with respect to some parameter $\theta$. This is also an expectation value with respect to the same probability distribution and can therefore be sampled at the same time, which allows for the use of gradient-based optimization methods. At each iteration, the energy and its gradient are computed with Monte Carlo, the parameters are updated by small steps in the direction of the negative energy gradient ($\theta \to \theta - \eta\,\partial_{\theta}E$) and the process is repeated until convergence of the energy. The VMC method, in its simplest form, only requires the efficient computation of the ratio $\psi(\mathbf{s}')/\psi(\mathbf{s})$ for two spin configurations $\mathbf{s}$ and $\mathbf{s}'$, as well as of the log-derivatives $\Delta_{\theta}(\mathbf{s})$ of the wave function. More efficient optimization methods can be used, such as conjugate-gradient descent, Stochastic Reconfiguration[62], the Newton method[64] or the linear method[65].
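The VMC loop above can be sketched in a few lines for a toy problem. The following is a minimal illustrative sketch, not code from this work: a Metropolis-Hastings estimate of the variational energy of an open transverse-field Ising chain $H = -J\sum_i \sigma^z_i\sigma^z_{i+1} - h\sum_i \sigma^x_i$ for an arbitrary wave function `psi` given as a function of the spin configuration; the model and all parameter values are assumptions made for the example.

```python
import numpy as np

def local_energy(s, psi, J=1.0, h=0.5):
    """E_loc(s) = sum_{s'} H_{ss'} psi(s')/psi(s) for an open TFI chain
    (illustrative model): diagonal zz part plus single-spin-flip terms."""
    diag = -J * np.sum(s[:-1] * s[1:])      # -J sum_i s_i s_{i+1}
    offdiag = 0.0
    for i in range(len(s)):                 # sigma^x flips one spin
        t = s.copy(); t[i] = -t[i]
        offdiag += psi(t) / psi(s)
    return diag - h * offdiag

def vmc_energy(psi, n, n_samples=30000, seed=0):
    """Metropolis-Hastings sampling of p(s) ~ |psi(s)|^2 via single spin flips."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    acc = 0.0
    for _ in range(n_samples):
        i = rng.integers(n)                 # propose a single spin flip
        t = s.copy(); t[i] = -t[i]
        if rng.random() < abs(psi(t) / psi(s)) ** 2:
            s = t
        acc += local_energy(s, psi)
    return acc / n_samples
```

For a small chain the stochastic estimate can be checked against exact enumeration of all $2^N$ configurations.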

At this point one has to choose a specific form for the wave function $\psi_{\theta}$. One of the traditional variational wave functions for a many-body quantum system is the Jastrow wave function[58], which in its most general form is a product of wave functions of all pairs of spins:

$$\psi(s_1,\dots,s_N) = \prod_{i<j} f_{ij}(s_i, s_j),$$

where each $f_{ij}$ is fully specified by its four values $f_{ij}(\uparrow\uparrow)$, $f_{ij}(\uparrow\downarrow)$, $f_{ij}(\downarrow\uparrow)$, $f_{ij}(\downarrow\downarrow)$. Such an Ansatz does not presuppose a particular local geometry of the many-body quantum state: in general it is non-local due to the correlations between all pairs of spins (Fig. ?). A local structure can be introduced by choosing a form of $f_{ij}$ which decays with the distance between positions $i$ and $j$.
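A Jastrow amplitude is cheap to evaluate directly. Below is a minimal sketch (an illustration, not code from this work) where each pair function $f_{ij}$ is stored as a 2×2 table indexed by the two spin values:

```python
import numpy as np

def jastrow_amplitude(s, f):
    """psi(s) = prod_{i<j} f_ij(s_i, s_j); s has entries in {-1, +1},
    f[(i, j)] is a 2x2 table indexed by (s_i+1)//2 and (s_j+1)//2."""
    n = len(s)
    amp = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            amp *= f[(i, j)][(s[i] + 1) // 2, (s[j] + 1) // 2]
    return amp
```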

### 2.2 Variational Monte Carlo method with Tensor Networks

In condensed matter physics, important assets for simplifying the problem are the geometric structure and locality of physical Hamiltonians. In 1D, it has been proven that ground states of gapped local Hamiltonians have an entanglement entropy of a subsystem which grows only like the boundary of the subsystem[21]. States satisfying such an area law can be efficiently approximated by Matrix Product States (MPS)[22]. Matrix Product States are one-dimensional Tensor Network States whose wave function for a spin configuration reads

$$\psi(s_1,\dots,s_N) = \mathrm{Tr}\left(A_1^{s_1} A_2^{s_2} \cdots A_N^{s_N}\right).$$

For each spin value and lattice site, the matrix $A_i^{s_i}$ of dimension $D \times D$, where $D$ is called the bond dimension, contains the variational parameters. Matrix Product States can be efficiently optimized using the Density Matrix Renormalization Group (DMRG)[69], but the previously described VMC method can also be applied[33] by observing that the ratio of the wave function on two configurations is straightforward to compute, and that the log-derivative with respect to some matrix $A_i^{s_i}$ is given by

$$\Delta_{A_i^{s_i}}(\mathbf{s}) = \frac{\left(A_{i+1}^{s_{i+1}} \cdots A_N^{s_N} A_1^{s_1} \cdots A_{i-1}^{s_{i-1}}\right)^{T}}{\mathrm{Tr}\left(A_1^{s_1} \cdots A_N^{s_N}\right)}.$$

In some cases, this method is less likely to get trapped in a local minimum than DMRG, since all coefficients can be updated at once. In addition, the cost scales only as $D^3$ in the bond dimension for periodic boundary conditions.
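Both ingredients — the MPS amplitude and its log-derivative — amount to a few lines of linear algebra. A minimal illustrative sketch (dimensions and random parameters are assumptions for the example), with the trace-derivative formula verifiable by finite differences:

```python
import numpy as np

def mps_amplitude(s, A):
    """psi(s) = Tr(A_1^{s_1} ... A_N^{s_N}); A[i] has shape (2, D, D)."""
    M = A[0][s[0]]
    for i in range(1, len(s)):
        M = M @ A[i][s[i]]
    return np.trace(M)

def mps_logderiv(s, A, i):
    """d log psi / d A_i^{s_i}: transpose of the product of the remaining
    matrices (taken cyclically), divided by the amplitude."""
    n, D = len(s), A[0].shape[-1]
    M = np.eye(D)
    for j in list(range(i + 1, n)) + list(range(i)):
        M = M @ A[j][s[j]]
    return M.T / mps_amplitude(s, A)
```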

In higher dimensions, Matrix Product States can still be defined by mapping the system to a line (Fig. ?). The problem with this construction is evident from Fig. ?: spins which sit close to each other may be separated by a long distance along the line, so the Ansatz fails to reproduce the local structure of the state, which leads to an exponential scaling of the required computing resources with the system size[70]. The natural extension of MPS to 2D systems are Projected Entangled Pair States (PEPS)[26], whose wave function can be written as a contraction of local tensors on the 2D lattice. While PEPS have been successful in describing strongly correlated quantum many-body systems, their exact contraction is hard[27] and their optimization cannot rely on the standard VMC method without approximations. In the following we instead consider other classes of Tensor Network states in more than one dimension for which the exact computation of the wave function is efficient, which allows for the direct use of the VMC method.

One approach consists in cutting the lattice into small clusters of spins, or plaquettes, and constructing the wave function exactly on each plaquette. The wave function of the full quantum system is then taken to be the product of the wave functions of the plaquettes, in a mean-field fashion. Choosing overlapping plaquettes allows one to go beyond mean field and include correlations between different plaquettes (Fig. ?). The wave function of such an Entangled Plaquette State (EPS, also called a Correlated Product State) is written as[28]

$$\psi(s_1,\dots,s_N) = \prod_{p=1}^{P} C^{p}_{\mathbf{s}_p},$$

where a coefficient $C^{p}_{\mathbf{s}_p}$ is assigned to each of the $2^{n_p}$ (for spin-$\frac{1}{2}$ particles) configurations $\mathbf{s}_p$ of the $n_p$ spins on the plaquette $p$. Each $C^{p}$ can be seen as the most general function on the Hilbert space corresponding to the spins in plaquette $p$. The accuracy can be improved by enlarging the plaquettes, and the Ansatz is exact once the size of the plaquettes reaches the size of the lattice (which can only be achieved on small lattices). Moreover, once the spin configuration is fixed, the log-derivative of the wave function with respect to the variational parameters is simply

$$\Delta_{C^{p}_{\mathbf{x}}}(\mathbf{s}) = \frac{\delta_{\mathbf{x},\mathbf{s}_p}}{C^{p}_{\mathbf{s}_p}},$$

which is efficient to compute.
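Concretely, the EPS amplitude and its sparse log-derivative can be sketched as follows (an illustration with hand-chosen plaquettes, not code from this work):

```python
import numpy as np

def eps_amplitude(s, plaquettes, C):
    """psi(s) = prod_p C^p[s_p]; C[p] has shape (2,)*len(plaquettes[p]),
    one coefficient per configuration of the plaquette spins."""
    amp = 1.0
    for p, sites in enumerate(plaquettes):
        idx = tuple((s[j] + 1) // 2 for j in sites)
        amp *= C[p][idx]
    return amp

def eps_logderiv(s, plaquettes, C, p):
    """Non-zero only for the entry matching the sampled configuration."""
    grad = np.zeros_like(C[p])
    idx = tuple((s[j] + 1) // 2 for j in plaquettes[p])
    grad[idx] = 1.0 / C[p][idx]
    return grad
```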

EPS are limited to small plaquettes, since the number of coefficients per plaquette scales exponentially with the plaquette size. One can however generalize this Ansatz by describing the state of each cluster of spins by a MPS, thereby avoiding the exponential number of coefficients. The lattice is now cut into overlapping 1D strings which can mediate correlations over longer distances than local plaquettes (Fig. ?). The resulting Ansatz is a String-Bond State (SBS)[31] defined by a set of strings (each string is an ordered subset of the set of spins) and a MPS for each string:

$$\psi(s_1,\dots,s_N) = \prod_{x} \mathrm{Tr}\left(\prod_{i \in x} A_{x,i}^{s_i}\right).$$

The descriptive power of this Ansatz depends strongly on the choice of strings: for example, with small strings covering small plaquettes and a large bond dimension it includes EPS, whereas a single long string in a snake pattern includes MPS in 2D. In 3D, it has been used with strings parallel to the axes of the lattice[32]. Since the wave function is a product of MPS, the log-derivative with respect to an element of one of the MPS is simply the log-derivative of the corresponding MPS given above. The VMC procedures for optimizing SBS and MPS thus have the same cost. In addition, the ratio of the wave function on two configurations which differ only by a few spins can be computed by considering only the strings containing these spins, which speeds up the computation considerably. Let us note that a SBS can be mapped analytically to a MPS, but the resulting MPS would have a bond dimension exponential in the number of strings.
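A product of small MPS is as easy to evaluate as a single one. A minimal sketch (strings, dimensions and random parameters chosen for illustration):

```python
import numpy as np

def sbs_amplitude(s, strings, A):
    """psi(s) = prod_x Tr(prod_{j in string x} A_{x,j}^{s_j});
    A[x][k] has shape (2, D, D) for the k-th site of string x."""
    amp = 1.0
    for x, sites in enumerate(strings):
        D = A[x][0].shape[-1]
        M = np.eye(D)
        for k, j in enumerate(sites):
            M = M @ A[x][k][(s[j] + 1) // 2]
        amp *= np.trace(M)
    return amp
```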

### 2.3 Variational Monte Carlo method with Neural Networks

Recently, it was realized that the VMC method can be viewed as a form of learning, which motivated the use of another, seemingly unrelated class of states for describing the ground state of many-body quantum systems: Neural Network Quantum States[15] are quantum states for which the wave function has the structure of an artificial neural network. While a few different networks have been investigated[15], the most promising results so far have been obtained with Boltzmann Machines[18]. Boltzmann Machines are generative stochastic artificial neural networks that can learn a probability distribution over the set of their inputs. In quantum many-body physics, the inputs are spin configurations and the wave function is interpreted as a (complex) probability distribution that the network tries to approximate. Boltzmann Machines consist of two sets of binary units (classical spins): the visible units $v_j$, corresponding to the configurations of the original spins in a chosen basis, and the hidden units $h_i$, which introduce correlations between the visible units. The whole system interacts through an Ising interaction which defines a joint probability distribution over the visible and hidden units as the Boltzmann weight of this Hamiltonian:

$$p(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z},$$

where the Hamiltonian is defined as

and $Z = \sum_{\mathbf{v},\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})}$ is the partition function. The marginal probability of a visible configuration is then given by summing over all possible hidden configurations:

$$p(\mathbf{v}) = \frac{1}{Z} \sum_{\mathbf{h}} e^{-E(\mathbf{v},\mathbf{h})},$$

and we take this quantity as Ansatz for the wave function: $\psi(\mathbf{s}) \propto \sum_{\mathbf{h}} e^{-E(\mathbf{s},\mathbf{h})}$. The variational parameters are the complex parameters of the Ising Hamiltonian. When there are interactions between the hidden units (Fig. ?), the Boltzmann Machine is called a Deep Boltzmann Machine.

It has been shown that Deep Boltzmann Machines can efficiently represent ground states of many-body Hamiltonians with polynomial-size gap, local Tensor Network states and quantum states generated by polynomial-size quantum circuits[19]. On the other hand, computing the wave function of a Deep Boltzmann Machine is in general intractable, due to the exponential sum over the hidden variables, so the VMC method cannot be applied to Deep Boltzmann Machines without approximations. We therefore turn to the investigation of Restricted Boltzmann Machines (RBM), which only include interactions between the visible and the hidden units (as well as the one-body interaction terms corresponding to biases). In this case, the sum over the hidden units can be performed analytically and the resulting wave function can be written as (here we take the hidden units to have values $\pm 1$):

$$\psi(s_1,\dots,s_N) = e^{\sum_{j} a_j s_j} \prod_{i=1}^{M} 2\cosh\!\Big(b_i + \sum_{j} W_{ij} s_j\Big).$$

RBM can represent many quantum states of interest, such as the toric code[36], any graph state, cluster states and coherent thermal states[19]; the very possibility of computing $\psi(\mathbf{s})$ efficiently, however, prevents RBM from approximating all PEPS and all ground states of local Hamiltonians[19]. On the other hand, since computing $\psi(\mathbf{s})$ and its derivatives is very efficient, RBM can be optimized numerically via the VMC method.
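The closed form above can be checked directly against the brute-force sum over hidden configurations. A minimal sketch (real parameters for simplicity; in the variational setting they are complex):

```python
import numpy as np
from itertools import product

def rbm_amplitude(s, a, b, W):
    """psi(s) = exp(sum_j a_j s_j) * prod_i 2 cosh(b_i + sum_j W_ij s_j)."""
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + W @ s))

def rbm_amplitude_bruteforce(s, a, b, W):
    """Explicit sum over all hidden configurations h in {-1, +1}^M."""
    total = 0.0
    for h in product([-1, 1], repeat=len(b)):
        h = np.array(h)
        total += np.exp(a @ s + b @ h + h @ W @ s)
    return total
```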

## 3 Relationship between Tensor Networks and Neural Networks states

While the machine-learning perspective which leads to the application of Boltzmann Machines to quantum many-body systems seems quite different from the information-theoretic approach to the structure of Tensor Network states, we will see that they are in fact intimately related. It was recently shown that while fully-connected RBM can exhibit volume-law entanglement, contrary to local Tensor Networks, all short-range RBM satisfy an area law[71]. Moreover, short-range and sufficiently sparse RBM can be written as MPS[37], but doing so for a fully-connected RBM would require a bond dimension scaling exponentially with the size of the system. In this section we show that there is a tighter connection between RBM and the previously introduced Tensor Networks in arbitrary dimension.

### 3.1 Jastrow wave functions, RBM and the Majumdar-Ghosh model

Before turning to Tensor Networks, let us first consider the simple case of the Jastrow wave function introduced above. Boltzmann Machines including only interactions between the visible units lead to a wave function

$$\psi(s_1,\dots,s_N) = e^{\sum_j a_j s_j} \prod_{j<k} e^{J_{jk}\, s_j s_k},$$

which has the form of a product between functions of pairs of spins, and is thus a Jastrow wave function. More generally, semi-restricted Boltzmann Machines including interactions between visible units as well as between hidden and visible units are a product of a RBM and a Jastrow factor.

Nevertheless, one may ask whether a RBM alone is enough to describe a Jastrow factor. We first rewrite the RBM wave function as

$$\psi(s_1,\dots,s_N) = \prod_j A_j^{s_j} \prod_{i=1}^{M} \left(B_i \prod_j \Gamma_{ij}^{s_j} + B_i^{-1} \prod_j \Gamma_{ij}^{-s_j}\right),$$

where we have redefined the parameters with uppercase letters as the exponentials of the original ones, thus removing the exponentials in the hyperbolic cosine. This form will be convenient for the numerical simulations presented later. Since Jastrow wave functions are a product of functions of all pairs of spins, let us show that a RBM with one hidden unit can represent any function of two spins. It then follows that a RBM with $N(N-1)/2$ hidden units, each representing a function of one pair of spins, can represent a Jastrow wave function with polynomial resources. We thus have to solve a system of four non-linear equations matching the one-hidden-unit RBM to the most general function $f$ of two spins, one equation for each of the four values $f(\uparrow\uparrow)$, $f(\uparrow\downarrow)$, $f(\downarrow\uparrow)$, $f(\downarrow\downarrow)$. This system is solved in Appendix A, providing an analytical solution for the parameters of the RBM that represents the Jastrow wave function exactly, or to arbitrary precision if $f$ vanishes for some spin configuration.

As an application, we use this result to write the ground state of the Majumdar-Ghosh model[72] exactly as a RBM. The Majumdar-Ghosh model is defined by the following spin-$\frac{1}{2}$ Hamiltonian:

$$H = \sum_{i} \left(\vec{S}_i \cdot \vec{S}_{i+1} + \frac{1}{2}\, \vec{S}_i \cdot \vec{S}_{i+2}\right).$$

The ground state wave function is a product of singlets formed by neighboring pairs of spins:

$$|\psi\rangle = \bigotimes_{i=1}^{N/2} \frac{1}{\sqrt{2}}\left(|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle\right)_{2i-1,\,2i}.$$

This wave function can also be expanded in the computational basis as

$$\psi(s_1,\dots,s_N) = \prod_{i=1}^{N/2} f(s_{2i-1}, s_{2i}), \qquad f(\uparrow\downarrow) = \frac{1}{\sqrt{2}}, \quad f(\downarrow\uparrow) = -\frac{1}{\sqrt{2}}, \quad f(\uparrow\uparrow) = f(\downarrow\downarrow) = 0.$$

Using the previous result, each function of two spins can be written as a RBM using one hidden unit, which leads to a RBM representation of the ground state with $N/2$ hidden units. We also find numerically on small systems that a RBM using fewer than $N/2$ hidden units has higher energy than the ground state, which suggests that $N/2$ could be optimal.
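The pair-product expansion above can be evaluated directly; the sketch below is illustrative, with the pairing of sites $(1,2),(3,4),\dots$ and the per-pair factor $\pm 1/\sqrt{2}$ taken as stated in the text:

```python
import numpy as np

def mg_amplitude(s):
    """Product of singlet amplitudes on the pairs (0,1), (2,3), ..."""
    f = {(1, -1): 1 / np.sqrt(2), (-1, 1): -1 / np.sqrt(2),
         (1, 1): 0.0, (-1, -1): 0.0}
    amp = 1.0
    for i in range(0, len(s), 2):
        amp *= f[(s[i], s[i + 1])]
    return amp
```

The amplitude vanishes whenever a pair carries equal spins, as it must for a product of singlets.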

### 3.2 Short-range RBM are EPS

Let us now turn to the specific case of RBM with short-range connections (sRBM). This encompasses all quantum states that have previously been written exactly as a RBM, such as the toric code or the 1D symmetry-protected topological cluster state[36]. Such states have weight connections between visible and hidden units that are local: each hidden unit is connected only to a local region of a few neighboring spins. If we divide the lattice into subsets $\mathbf{s}_p$ of spins connected to the same hidden units, the wave function can be rewritten as (we omit here the biases, which are local one-body terms):

$$\psi(s_1,\dots,s_N) = \prod_{p} \tilde{C}^{p}_{\mathbf{s}_p}, \qquad \tilde{C}^{p}_{\mathbf{s}_p} = \prod_{i \in p} 2\cosh\!\Big(b_i + \sum_{j \in p} W_{ij} s_j\Big),$$

where $\mathbf{s}_p$ is the spin configuration in the subset $p$. This is precisely the form of an EPS (Fig. ?). For translationally invariant systems, the short-range RBM becomes a convolutional RBM, which corresponds to a translationally invariant EPS. The main difference between a short-range RBM and an EPS is that the RBM uses a very specific function among all possible functions of the spins inside a plaquette, hence EPS are more general than short-range RBM. This directly implies that the entanglement of short-range RBM follows an area law. The main advantage of short-range RBM over EPS is that, due to the exponential scaling of the number of EPS parameters with the size of the plaquettes, larger plaquettes can be used in short-range RBM than in EPS. Since in practice for finite systems it is possible to work directly with fully-connected RBM, we argue that EPS or fully-connected RBM should be preferred to short-range RBM for numerical purposes.
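This identification can be verified numerically: tabulating each hidden unit's cosh factor over its plaquette produces EPS coefficients that reproduce the short-range RBM amplitude exactly. A sketch with nearest-neighbour plaquettes on a chain, biases omitted as in the text (all parameter choices are illustrative):

```python
import numpy as np

def srbm_amplitude(s, b, w):
    """Short-range RBM: hidden unit i is connected only to sites (i, i+1)."""
    n = len(s)
    return np.prod([2.0 * np.cosh(b[i] + w[i, 0] * s[i] + w[i, 1] * s[i + 1])
                    for i in range(n - 1)])

def srbm_to_eps(b, w, n):
    """Tabulate each hidden unit's factor over its plaquette -> EPS tensors."""
    return [np.array([[2.0 * np.cosh(b[i] + w[i, 0] * si + w[i, 1] * sj)
                       for sj in (-1, 1)] for si in (-1, 1)])
            for i in range(n - 1)]
```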

### 3.3 Fully-connected RBM are SBS

Fully-connected RBM, on the other hand, do not always satisfy an area law[71] and hence cannot always be approximated by local Tensor Networks. Nevertheless, one can express the RBM wave function as (here we also omit the biases $a_j$):

$$\psi(s_1,\dots,s_N) = \prod_{i=1}^{M} \mathrm{Tr}\left(\prod_{j=1}^{N} A_{ij}^{s_j}\right),$$

where

$$A_{ij}^{s_j} = \mathrm{diag}\left(e^{W_{ij} s_j + b_i/N},\; e^{-W_{ij} s_j - b_i/N}\right)$$

are diagonal matrices of bond dimension 2. This shows that RBM are String-Bond States: the wave function is a product of MPS over strings, where each hidden unit corresponds to one string. The only difference between the SBS as depicted in Fig. ? and the RBM is the geometry of the strings. In a fully-connected RBM, each string runs over the full lattice, while SBS have traditionally been used with smaller strings and with at most a few strings overlapping at each lattice site.
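The identity behind this rewriting — one hidden unit equals the trace of a product of diagonal 2×2 matrices — can be checked in a few lines (an illustrative sketch):

```python
import numpy as np

def hidden_unit_as_string(s, b, W):
    """Tr prod_j diag(exp(W_j s_j + b/N), exp(-W_j s_j - b/N)),
    which should equal 2 cosh(b + sum_j W_j s_j)."""
    n = len(s)
    M = np.eye(2)
    for j in range(n):
        M = M @ np.diag([np.exp(W[j] * s[j] + b / n),
                         np.exp(-W[j] * s[j] - b / n)])
    return np.trace(M)
```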

### 3.4 Generalizing RBM to non-local SBS

In the SBS language, a RBM consists of many strings overlapping on the full lattice. The matrices in each string of the RBM are diagonal, hence commute, so they can be moved within the string up to a reordering of the spins. This means that each string does not have a fixed geometry and can adapt to stronger correlations in different parts of the lattice, even over long distances. This motivates us to generalize RBM to SBS with diagonal matrices in which each string covers the full lattice (Fig. ?). In the following we denote these states as non-local dSBS. This amounts to relaxing the constraints on the RBM parameters to the most general diagonal matrix and enlarging the bond dimension of the matrices. For example, taking the matrices

$$A_{x,j}^{s_j} = \mathrm{diag}\left(e^{W^{1}_{x,j}(s_j)}, \dots, e^{W^{D}_{x,j}(s_j)}\right)$$

with different parameters for each string $x$, lattice site $j$ and spin value $s_j$, leads to the wave function

$$\psi(s_1,\dots,s_N) = \prod_{x} \sum_{k=1}^{D} \prod_{j} e^{W^{k}_{x,j}(s_j)}.$$

Generalizing such a wave function to spins larger than spin-$\frac{1}{2}$ is straightforward, since the spin value simply indexes the parameters. This provides a natural generalization of RBM which can handle systems with larger physical dimension. For instance, it can be applied to spin-1 systems, whereas a naive construction of a RBM with spin-1 visible and hidden units leads to additional constraints, and it can be used to approximate bosonic systems by truncating the local Hilbert space of the bosons.
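A minimal sketch of such a diagonal SBS amplitude for arbitrary local dimension (shapes and random parameters are illustrative assumptions), where each string contributes a sum over its $D$ diagonal entries:

```python
import numpy as np

def dsbs_amplitude(s, W):
    """psi(s) = prod_x sum_k prod_j exp(W[x, k, j, s_j]).
    W has shape (n_strings, D, n_sites, d); s holds local states in {0..d-1}."""
    n_strings, D, n_sites, d = W.shape
    amp = 1.0
    for x in range(n_strings):
        amp *= sum(np.exp(W[x, k, np.arange(n_sites), s].sum())
                   for k in range(D))
    return amp
```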

A further way to extend this class of states is to include non-commuting matrices. This fixes the geometry of each string by defining an order, and also enables the representation of more complicated interactions. In the following we will refer to SBS in such a geometry as non-local SBS. The advantage of this approach is that it can capture more complex correlations within each string, while introducing additional geometric information about the problem at hand. It comes however at a greater numerical cost than non-local dSBS or RBM due to the larger number of parameters. In practice, one can use an already optimized RBM or dSBS to initialize a non-local SBS.

In some cases, the SBS representation is more compact than the RBM/dSBS representation. Let us consider again the ground state of the Majumdar-Ghosh Hamiltonian, which we previously wrote as a RBM with $N/2$ hidden units. This state can also be written as a simple MPS with bond dimension 3, both for periodic and for open boundary conditions, with the matrices given in Ref. [24].

Since this state is a MPS, it is also a SBS with a single string, whereas the RBM representation of the same state requires $N/2$ strings. In practice the numbers of non-zero coefficients are comparable, since in both cases the representation is sparse, but for numerical purposes a fully-connected RBM needs of the order of $N^2$ parameters before finding the exact ground state, while a MPS or SBS with one string needs only $O(N)$ parameters for both open and periodic boundary conditions.

Another example is the AKLT model[73], defined by the following spin-1 Hamiltonian with periodic boundary conditions:

$$H = \sum_{i} \left[\vec{S}_i \cdot \vec{S}_{i+1} + \frac{1}{3}\left(\vec{S}_i \cdot \vec{S}_{i+1}\right)^2\right].$$

Its ground state has a simple form as a MPS of bond dimension 2. It can also be written as an exact RBM by mapping the system to a spin-$\frac{1}{2}$ chain, but the number of hidden units needed for an exact representation grows with the system size[74]. We have numerically optimized the spin-1 extension of a RBM in the dSBS form given above (see Appendix B for the details of the numerical optimization) and found that already for small chains a much larger number of parameters is required to approach the ground-state energy than for a SBS with non-commuting matrices, which is exact with one string of bond dimension 2 (Fig. 1). We will also show in Section 4 that in some other cases the RBM needs fewer parameters than a SBS to obtain a similar energy. This demonstrates that both RBM and SBS have advantages and that their efficiency depends on the particular model under investigation. It remains an open question whether there exist MPS or SBS which provably cannot be efficiently approximated by a RBM (for which the RBM would need exponentially many parameters).

To exploit both the advantages of RBM (efficiency, few parameters) and of SBS (more expressive representation, geometric interpretation), one can use the flexibility of SBS by including some strings that carry a full MPS over the whole lattice, some strings with only local connections, which ensure that the locality of the system is preserved, and some strings that have the form of a RBM and can easily capture large entanglement and long-range correlations. In many cases of interest, an initial approximation of the ground state can be obtained, either by optimizing simpler wave functions or by first applying DMRG to optimize a MPS. This initial approximation can then be used in conjunction with the previous Ansatz classes by multiplying an Ansatz wave function with the initial approximation. For the resulting wave function

$$\psi(\mathbf{s}) = \psi_{\mathrm{Ansatz}}(\mathbf{s})\, \psi_{0}(\mathbf{s}),$$

the ratio of the wave function on two configurations as well as the log-derivatives depend only on the respective ratios and log-derivatives of each separate wave function, making the application of the VMC method straightforward. This procedure has the advantage of reducing the number of parameters necessary for obtaining a good approximation of the ground state and of making the optimization more stable, since the initial state is not completely random. It provides a generic way to enhance the power of more specific Ansatz wave functions tailored to particular problems, as we will demonstrate in the next section. A similar technique has been used to construct tensor-product projected states with Tensor Networks in Ref. .
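For such a product form, the quantities VMC needs compose trivially: the ratio of the combined wave function is the product of the individual ratios, and the log-derivative with respect to a parameter of one factor is just that factor's log-derivative. A short illustrative sketch (the two factors below are arbitrary toy wave functions):

```python
import numpy as np

def combined_amplitude(s, psis):
    """psi(s) = prod_k psi_k(s): e.g. an initial approximation times an Ansatz."""
    return np.prod([psi(s) for psi in psis])

def combined_ratio(s_new, s_old, psis):
    """psi(s') / psi(s) is the product of the individual ratios."""
    return np.prod([psi(s_new) / psi(s_old) for psi in psis])
```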

## 4 Application to chiral topological states

In this section we turn to a practical application on a problem that is challenging for traditional Tensor Network methods, namely the approximation of a state with chiral topological order. While chiral topological PEPS have been constructed, the resulting states are critical. Moreover, the local parent Hamiltonian of a chiral fermionic Gaussian PEPS has to be gapless[54]. In the following we investigate whether this obstruction carries over to the Tensor Network and neural network states introduced previously.

### 4.1 RBM can describe a Laughlin state exactly

Let us consider a lattice version of the Laughlin wave function at filling factor , defined for a spin- system as

where fixes the total spin to , the are the complex coordinates of the lattice sites, and the phase factors are defined as , ensuring that the state is a singlet. This wave function is equivalent to the Kalmeyer-Laughlin wave function in the thermodynamic limit and has been shown to describe a lattice state sharing the topological properties of the continuum Laughlin states on several lattices[76]. In addition, it can be written as a correlator of conformal fields, which has enabled the exact derivation of parent Hamiltonians for this state on any finite lattice[79].

The Laughlin wave function has the structure of a Jastrow wave function, and we showed in Section 3.1 that any Jastrow wave function can be written as a RBM with one hidden unit per pair of spins. It follows that RBM and non-local SBS can represent a gapped chiral topological state exactly. This is in sharp contrast to local tensor network states, for which no exact description of a (non-critical) chiral topological state is known. The difference is due to the non-local connections in the RBM and Jastrow wave functions, which allow them to easily describe a Laughlin state. We note that a chiral p-wave superconductor is another example of a gapped chiral topological state; it has recently been written as a (fermionic) quasi-local Boltzmann Machine[20].
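The Jastrow-to-RBM mapping can be checked numerically. The Python sketch below, restricted to real couplings (the Laughlin case needs the complex construction of Appendix A) and with our own illustrative parametrization, builds one hidden unit with zero hidden bias per pair of spins and verifies that the resulting RBM reproduces a two-body Jastrow factor up to a global constant:

```python
import numpy as np
from itertools import product

def pair_hidden_unit(J):
    """Weights (w_i, w_j) of one hidden unit whose factor
    2*cosh(w_i*s_i + w_j*s_j) equals exp(J*s_i*s_j) up to a constant."""
    w = 0.5 * np.arccosh(np.exp(2 * abs(J)))
    return w, np.sign(J) * w

def rbm_amplitude(W, s):
    """RBM with zero visible biases: product over hidden units."""
    return np.prod(2 * np.cosh(W @ s))

n = 4
rng = np.random.default_rng(1)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
J = {p: rng.normal() for p in pairs}

# One hidden unit per pair of spins: N(N-1)/2 hidden units in total.
W = np.zeros((len(pairs), n))
for k, (i, j) in enumerate(pairs):
    W[k, i], W[k, j] = pair_hidden_unit(J[(i, j)])

# The RBM matches the Jastrow factor on every configuration up to one
# global constant, which drops out of any normalized wave function.
ratios = []
for s in product([-1, 1], repeat=n):
    s = np.array(s)
    jastrow = np.exp(sum(J[(i, j)] * s[i] * s[j] for (i, j) in pairs))
    ratios.append(rbm_amplitude(W, s) / jastrow)
assert np.allclose(ratios, ratios[0])
```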

The previous construction is however not fully satisfactory, in the sense that the RBM requires a number of hidden units scaling quadratically with the number of spins, which is too high for numerical purposes on lattices that are not extremely small. We thus turn to the approximate representation of the Laughlin wave function using a RBM.

### 4.2 Numerical approximation of a Laughlin state

The lattice Laughlin wave function we consider has an exact parent Hamiltonian on a finite lattice[79] defined as

where and is the spin operator at site . We specialize to the square lattice with open boundary conditions and minimize the energy of different wave functions with respect to this Hamiltonian by applying the VMC method presented in Section 2.2 with a Stochastic Reconfiguration optimization[62] (details of the numerical optimization can be found in Appendix B). Results are presented in Table 1.

We find that EPS with plaquettes of size up to have an energy difference with the Laughlin state of the order of , which is better than a short-range RBM (denoted sRBM) on plaquettes with up to hidden units per plaquette, while the energy of a fully connected RBM with hidden units is within of the ground-state energy. The resulting RBM uses far fewer hidden units than would be required for it to be exact, yet reaches an overlap of with the Laughlin wave function. This shows that the fully-connected structure of the RBM is an advantage for describing this state and that EPS can be used instead of short-range RBM. We have moreover found that EPS are easier to optimize numerically than short-range RBM: they are more stable, since each coefficient is treated separately, no exponentials or products that lead to unstable behavior are present, and the derivatives have a very simple form (Eq. ).

Table 1: Energy difference between the optimized Ansatz wave functions and the Laughlin state.

| Ansatz | Energy difference |
|---|---|
| EPS | \(4.3\times 10^{-2}\) |
| EPS | \(2.2\times 10^{-2}\) |
| sRBM | \(8.3\times 10^{-2}\) |
| sRBM | \(3.1\times 10^{-2}\) |
| sRBM | \(2.5\times 10^{-2}\) |
| RBM | \(5.8\times 10^{-4}\) |
| RBM | \(1.1\times 10^{-5}\) |

### 4.3 Numerical approximation of a chiral spin liquid

The previous results indicate that RBM might be useful for approximating chiral topological states numerically, but they are limited here to relatively small sizes due to the non-local nature of the parent Hamiltonian, which includes interactions between all triplets of spins on the lattice. In Ref. , a local Hamiltonian stabilizing a state in the same class as the Laughlin state was obtained by keeping only the local terms and setting the long-range interactions to zero. This leads to the Hamiltonian

where indicates indices of nearest neighbours on the lattice and indicates indices of all triangles of neighboring spins, with vertices labelled in the counter-clockwise direction. We focus on the case for which the ground state of has above overlap with the Laughlin wave function (Eq. ) on a lattice. We minimize the energy of different classes of states on and square lattices with open boundary conditions. For optimizing wave functions with tens of thousands of parameters, we use a batch version of Stochastic Reconfiguration which optimizes a random subset of the parameters at each iteration (see Appendix B). We consider several Ansatz wave functions: EPS with plaquettes of size , , and ; local SBS covering the lattice with horizontal, vertical and diagonal strings of increasing bond dimension; RBM with an increasing number of hidden units; and non-local SBS with diagonal matrices (denoted dSBS) or with non-commuting matrices of bond dimension and different numbers of strings covering the full lattice. We observe that while the optimization of EPS and SBS is particularly stable, the optimization of RBM can lead to numerical instabilities, which are resolved by writing the RBM in the form presented in Eq. . Since we use the same optimization procedure for all Ansatz wave functions, and since the time (and memory) required to perform the optimization is mainly a function of the number of parameters and of the accuracy, we can compare the Ansatz classes by comparing how many parameters are needed to obtain a similar energy.
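A batched Stochastic Reconfiguration update of this kind can be sketched as follows; the function name, defaults and regularization below are illustrative assumptions (the actual procedure is described in Appendix B), the point being that only a `batch_size` x `batch_size` covariance matrix has to be built and inverted per iteration:

```python
import numpy as np

def batch_sr_step(alpha, O, E_loc, p, batch_size, eta=0.05, eps=1e-3, rng=None):
    """One batched Stochastic Reconfiguration update over a random subset
    of the parameters.

    alpha: (n_params,) current parameters
    O:     (n_samples, n_params) log-derivatives O_k(s) of the sampled
           configurations
    E_loc: (n_samples,) local energies
    p:     (n_samples,) sample weights summing to one
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(alpha), size=batch_size, replace=False)
    Ob = O[:, idx]                          # restrict to the chosen subset
    Ob_mean = p @ Ob
    # covariance matrix S and force vector F on the subset only
    S = (Ob * p[:, None]).T @ Ob - np.outer(Ob_mean, Ob_mean)
    F = p @ (E_loc[:, None] * Ob) - (p @ E_loc) * Ob_mean
    alpha = alpha.copy()
    alpha[idx] -= eta * np.linalg.solve(S + eps * np.eye(batch_size), F)
    return alpha
```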

We first focus (Fig. ?) on a lattice for which the exact ground state can be obtained using exact diagonalization. Local SBS have an energy higher than the Laughlin state, and this energy saturates with increasing bond dimension, which means that the pattern of horizontal, vertical and diagonal strings is not enough to capture all correlations in the ground state. While a large plaquette would make EPS exact on this small lattice, it would require parameters. The energy of the Laughlin state is already reached for plaquettes. RBM with a number of hidden units larger than , and non-local SBS with a corresponding number of strings, have lower energy than the Laughlin state or the Jastrow wave function. When the number of strings grows, the energy decreases even further. On a larger lattice (Fig. ?) the exact ground-state energy is unknown, but we can compare the energies of the different Ansatz wave functions and observe similar results. Only the Jastrow wave function, non-local SBS and RBM have an energy comparable to the Laughlin state. Notice that non-local SBS have a constant factor more parameters than a RBM with the same number of strings. On the one hand, this allows SBS to achieve a better energy than RBM with the same number of strings. On the other hand, it comes with the drawback that fewer strings can be optimized, and on the large lattice we are numerically limited to non-local dSBS with up to N strings. We conclude that RBM are particularly efficient in this example, since they require significantly fewer parameters than SBS to attain the same energy. This has to be contrasted with the previous examples of the Majumdar-Ghosh and AKLT models, where the opposite was true. Each class of states therefore has advantages and drawbacks depending on the model under consideration.
We note in addition that a non-local SBS can be initialized with the result of a previous optimization of a RBM, which could provide a way of mitigating the difficulties of optimizing large numbers of parameters.

Table 2: Topological entanglement entropy (TEE) of some of the optimized states.

| Ansatz | TEE |
|---|---|
| Laughlin | |
| l-EPS | |
| RBM | |
| l-RBM | |
As noted previously, we can also use an initial approximation of the ground state in combination with the previous Ansatz classes. In the case of the Hamiltonian , the analytical Laughlin wave function can be used as the initial approximation in Equation 7. We denote by l-EPS (resp. l-SBS, l-RBM) a wave function that consists of a product of the Laughlin wave function and an EPS (resp. SBS, RBM), and we minimize the energy of the resulting states. This allows us to obtain lower energies for each Ansatz class (Fig. ?). Once the wave functions are optimized, their properties can be computed using Monte Carlo sampling. To check that the ground state is indeed in the same class as the Laughlin state, we compute the topological entanglement entropy of some of the optimized states by dividing the lattice into four regions (Fig. 2) and computing the Renyi entropy of each subregion using the Metropolis-Hastings Monte Carlo algorithm with two independent spin chains[81]. The topological entanglement entropy is then defined as[83]

and is expected to be equal to for the Laughlin state[85]. The results we obtain are presented in Table 2 and provide additional evidence that the ground state of has the same topological properties as the Laughlin state. The Hamiltonian was recently investigated on an infinite lattice using infinite-PEPS[86], and further evidence was provided that the ground state is chiral. The PEPS results suggest the presence of long-range algebraically decaying correlations, which may be a feature of the model or an artifact of the limitations of PEPS in describing chiral systems. The correlations at short distances agree with those we can compute on our finite system (Fig. ?), but our method does not allow us to make claims about the long-distance behavior of the correlation function. We also note that fully-connected RBM cannot be defined directly in the thermodynamic limit without truncating the range of the interactions between visible and hidden units, thus turning the RBM into a short-range RBM (albeit of longer range than an EPS).
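For reference, in the Kitaev-Preskill construction with three regions $A$, $B$, $C$ meeting at a point (assuming this is the convention of Ref. [83]), the combination of entropies reads

```latex
S_{\mathrm{topo}} = S_A + S_B + S_C - S_{AB} - S_{BC} - S_{AC} + S_{ABC},
```

where the boundary-law contributions cancel and $S_{\mathrm{topo}} = -\gamma$ with $\gamma = \ln D$, $D$ being the total quantum dimension; for a $\nu = 1/2$ Laughlin state $D = \sqrt{2}$, giving $\gamma = \frac{1}{2}\ln 2$.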

## 5 Conclusion

We have shown that there is a strong connection between Neural Network Quantum States in the form of Boltzmann Machines and some Tensor Network states that can be optimized using the Variational Monte Carlo method: while short-range Restricted Boltzmann Machines are a subclass of Entangled Plaquette States, fully connected Restricted Boltzmann Machines are a subclass of String-Bond States. These String-Bond States are however different from traditional String-Bond States due to their non-local structure, which connects every spin on the lattice to every string. This enabled us to generalize Restricted Boltzmann Machines by introducing non-local (diagonal or non-commuting) String-Bond States, which can be defined for larger local Hilbert spaces and with additional geometric flexibility. We compared the power of these different classes of states and showed that while there are cases where String-Bond States require fewer parameters than fully-connected Restricted Boltzmann Machines to describe the ground state of a many-body Hamiltonian, there are also cases where the additional parameters in each string make String-Bond States less efficient to optimize numerically. We applied these methods to the challenging problem of describing states with chiral topological order, which is hard for traditional Tensor Networks. We showed that every Jastrow wave function, and thus a Laughlin wave function, can be written as an exact Restricted Boltzmann Machine. In addition, we gave numerical evidence that a Restricted Boltzmann Machine with a much smaller number of hidden units can still provide a good approximation to the Laughlin state. Finally, we turned to the approximation of the ground state of a chiral spin liquid and showed that Restricted Boltzmann Machines achieve a lower energy than the Laughlin state while exhibiting the same topological entanglement entropy.
We argued that combining different classes of states makes it possible to take advantage of prior knowledge of the model and of the particularities of each class. This was demonstrated by combining a Jastrow wave function with Tensor Networks and Restricted Boltzmann Machines, which allowed us to obtain lower energies than the initial states and to characterize the ground state.

Our work sheds light on the representational power of Restricted Boltzmann Machines and establishes a bridge between their optimization and that of Tensor Network states. On the one hand, the methods developed in this work can be used to target the ground state of other Hamiltonians, and it would be interesting to know whether similar results can be achieved, for example, for non-Abelian chiral spin liquids[87], or generalized to fermionic systems of electrons in the continuum displaying the Fractional Quantum Hall effect. On the other hand, we also showed that some tools used in machine learning can be rephrased in Tensor Network language, thus providing additional physical insight into the systems they describe. Matrix Product States have already been used as a tool for supervised learning[89], and our work opens up the possibility of using not only Restricted Boltzmann Machines, but also String-Bond States, to represent a probability distribution over some data while encoding additional information about its geometric structure.

*Note added.* After the completion of this manuscript, related independent work came to our attention. Y. Nomura et al.[91] combine RBM with pair product wave functions and apply them to the Heisenberg and Hubbard models. S. R. Clark[92] constructs a mapping between RBM and EPS/Correlator Product States. R. Kaubruegger et al.[93] give further analytical and numerical evidence supporting the application of RBM to chiral topological states such as the Laughlin state.

## A Jastrow wave functions are Restricted Boltzmann Machines

Let us show that a RBM with one hidden unit can represent any function of two spins. It then follows that a RBM with one hidden unit per pair of spins can represent a Jastrow wave function. We parametrize the function by its four values on two spins and solve the resulting system of four non-linear equations:

where we have set . The RBM is well defined when all parameters are non-zero, and we change variables by defining , , , , obtaining a new set of equations:

We first suppose that the values are non-zero. These quadratic equations all have non-zero analytical solutions in the complex plane, which we denote , , , . The original parameters are then the solutions of

which is again a set of quadratic equations with non-zero analytical solutions. If (resp. ), the exact solution is given directly by (resp. ). In the remaining cases, where some values are zero, the equations do not always have an exact solution, but the function can still be approximated to arbitrary precision. This case corresponds to strong restrictions on the part of the Hilbert space used to write the wave function, and these constraints can also be imposed on the states directly by adding a delta function to the wave function, which is equal to only when the constraints on the spins are satisfied. A Markov Chain Monte Carlo sampling which does not visit these states then allows for more efficient sampling.
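When the four target values are positive, the fit can even be carried out in closed form without solving the quadratic equations numerically. The Python sketch below uses our own parametrization with zero hidden bias (complex or vanishing values require the general construction above) and fits one hidden unit to an arbitrary positive two-spin function:

```python
import numpy as np

def fit_rbm_two_spins(f):
    """Fit one hidden unit (zero hidden bias) so that
    exp(c + a1*s1 + a2*s2) * 2*cosh(w1*s1 + w2*s2) reproduces the
    positive target values f[(s1, s2)] on spins s = +/-1."""
    y = {s: np.log(f[s]) for s in f}
    # two-spin correlation part of log f, to be matched by the cosh factor
    K = y[(1, 1)] + y[(-1, -1)] - y[(1, -1)] - y[(-1, 1)]
    w = 0.5 * np.arccosh(np.exp(abs(K) / 2))
    w1, w2 = w, np.sign(K) * w
    h = lambda x: np.log(2 * np.cosh(x))
    # with this symmetric choice of weights the cosh factor does not
    # contribute to the single-spin parts, which fix a1 and a2 directly
    a1 = (y[(1, 1)] + y[(1, -1)] - y[(-1, 1)] - y[(-1, -1)]) / 4
    a2 = (y[(1, 1)] - y[(1, -1)] + y[(-1, 1)] - y[(-1, -1)]) / 4
    c = np.mean(list(y.values())) - (h(2 * w) + h(0)) / 2
    return c, a1, a2, w1, w2

# fit a random positive function of two spins and verify the four values
rng = np.random.default_rng(2)
f = {s: rng.uniform(0.1, 2.0) for s in [(1, 1), (1, -1), (-1, 1), (-1, -1)]}
c, a1, a2, w1, w2 = fit_rbm_two_spins(f)
for (s1, s2), target in f.items():
    rbm = np.exp(c + a1 * s1 + a2 * s2) * 2 * np.cosh(w1 * s1 + w2 * s2)
    assert np.isclose(rbm, target)
```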

## B Optimization procedure

The goal is to minimize the energy \(E(\alpha)\), which depends on a vector of variational parameters \(\alpha\). We define \(G(\alpha)\) to be the energy gradient vector at \(\alpha\). Expanding the energy to first order around \(\alpha\) leads to steepest gradient descent, where the variational parameters are updated at each iteration according to \(\alpha \rightarrow \alpha + \delta\alpha\), with a change of parameters given by \(\delta\alpha = -\eta\, G(\alpha)\). Here \(\eta\) is a small step size. Expanding the energy to second order instead would result in the Newton method, with a change of parameters given by:

\(\delta\alpha = -H^{-1} G(\alpha),\)

where \(H\) is the Hessian of the energy. Small changes of the variational parameters may however lead to large changes in the wave function, especially in the case of compact non-local representations like RBM, in which each parameter affects every part of the wave function. Taking into account the metric of changes of the wave function leads to the Stochastic Reconfiguration[62] method, which is equivalent to natural gradient descent[94] and replaces the Hessian in the equation above by the covariance matrix of the derivatives of the wave function, avoiding the computation of second-order derivatives of the energy.

The Stochastic Reconfiguration method can also be viewed as an approximate imaginary-time evolution in the tangent space of the wave function. Consider the normalized wave function and its derivatives

defining a non-orthogonal basis set . Expanding the wave function to linear order around some parameters leads to

To minimize the energy, one can apply the imaginary-time evolution operator , which expanded to first order for small is . The change of coefficients is found by applying this operator to the wave function and projecting onto the set , which leads to the equation

which can be rewritten as

\(\sum_{k'} S_{kk'}\, \delta\alpha_{k'} = -\tau F_k,\)

where \(S_{kk'} = \langle \psi_k | \psi_{k'} \rangle\) and \(F_k = \langle \psi_k | H | \psi \rangle\). If we expand these expressions as expectation values over the probability distribution \(|\psi_\alpha(s)|^2\), we obtain

\(S_{kk'} = \langle O_k^* O_{k'} \rangle - \langle O_k^* \rangle \langle O_{k'} \rangle, \qquad F_k = \langle O_k^* E_{loc} \rangle - \langle O_k^* \rangle \langle E_{loc} \rangle,\)

where the local energy is defined as \(E_{loc}(s) = \langle s | H | \psi_\alpha \rangle / \langle s | \psi_\alpha \rangle\) and the log-derivative of the wave function as \(O_k(s) = \partial \ln \psi_\alpha(s) / \partial \alpha_k\). Finally, the complete algorithm is as follows:

Using a Metropolis-Hastings algorithm, generate samples of the probability distribution \(|\psi_\alpha(s)|^2\) and compute stochastic estimates of the expectation values \(\langle O_k \rangle\), \(\langle E_{loc} \rangle\), \(\langle O_k^* O_{k'} \rangle\) and \(\langle O_k^* E_{loc} \rangle\),

Construct the vector \(F\) and the matrix \(S\),

Update the parameters according to \(\alpha \rightarrow \alpha - \tau S^{-1} F\),

Repeat the full procedure until convergence of the energy.

In practice we repeat the full procedure to times, until the energy has converged. To optimize a large number of parameters, we randomly select a subset of the parameters of size up to at each iteration of the algorithm and update only these parameters. This reduces the computational cost of the operations involving \(S\) and \(F\). Moreover, we can avoid forming the full matrix \(S\) by instead solving Eq. with a conjugate-gradient solver[80]. Numerical stability is achieved by adding a small constant to the diagonal elements of the matrix \(S\), which rotates the direction of change towards the steepest-descent direction. We find that a step size of the order , where is the iteration step, works well in conjunction with a large stabilization at the beginning, while a fixed step size can also be chosen in conjunction with a small stabilization of the order by performing several optimizations. At the later stages of the optimization, the step size is lowered to ensure that the energy is converged. Further improvements are achieved by projecting the wave function onto a subspace of fixed total spin when it is conserved by the Hamiltonian under consideration[95]. The spin-flip symmetry can be enforced in a RBM by choosing the bias .
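The full update can be condensed into a few lines. The toy Python sketch below runs Stochastic Reconfiguration on two Ising spins with the diagonal Hamiltonian \(H = -s_1 s_2\); expectation values are computed by exact enumeration over the four configurations rather than by Metropolis sampling, and the simple Jastrow-type ansatz is an illustrative choice of ours, not one of the Ansatz classes of the paper:

```python
import numpy as np
from itertools import product

def log_psi(alpha, s):
    # toy ansatz: log psi(s) = a1*s1 + a2*s2 + W*s1*s2
    a1, a2, W = alpha
    return a1 * s[0] + a2 * s[1] + W * s[0] * s[1]

def log_derivs(s):
    # O_k(s) = d log psi(s) / d alpha_k
    return np.array([s[0], s[1], s[0] * s[1]])

configs = [np.array(s) for s in product([-1, 1], repeat=2)]
E = np.array([-s[0] * s[1] for s in configs])  # H diagonal: E_loc(s) = -s1*s2

alpha = np.array([0.3, -0.2, 0.0])
eta, eps = 0.1, 1e-4  # step size and diagonal stabilization
for _ in range(200):
    p = np.array([np.exp(2 * log_psi(alpha, s)) for s in configs])
    p /= p.sum()                                  # |psi(s)|^2, normalized
    O = np.array([log_derivs(s) for s in configs])  # shape (4, 3)
    O_mean = p @ O
    S = (O * p[:, None]).T @ O - np.outer(O_mean, O_mean)  # covariance matrix
    F = p @ (E[:, None] * O) - (p @ E) * O_mean            # force vector
    alpha = alpha - eta * np.linalg.solve(S + eps * np.eye(3), F)

p = np.array([np.exp(2 * log_psi(alpha, s)) for s in configs])
p /= p.sum()
energy = p @ E
assert energy < -0.99  # exact ground-state energy of H is -1
```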
