Unsupervised machine learning account of magnetic transitions in the Hubbard model

Kelvin Ch’ng, Nick Vazquez, and Ehsan Khatami
Department of Physics and Astronomy, San José State University, San José, CA 95192, USA
Abstract

We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model reproduce the magnetization and the susceptibility as functions of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the “sign problem” in quantum Monte Carlo simulations is present.

I Introduction

Machine learning has emerged as an unconventional tool to gain insight into properties of many-body physics. In 2014, Arsenault et al. Arsenault et al. (2014) used kernel-based supervised learning to obtain the Green’s function of the Anderson impurity model. Supervised machine learning techniques based on artificial neural networks were used later in a groundbreaking work to classify phases of models in statistical mechanics and condensed matter physics Carrasquilla and Melko (2017). Shortly after, other groups expanded the application of these techniques to identify phase transitions, including to topological phases, in quantum many-body systems at zero or finite temperatures  van Nieuwenburg et al. (2017); Broecker et al. (2016); Ch’ng et al. (2017); Zhang and Kim (2017); Zhang et al. (2017).

Parallel efforts demonstrated the power of restricted Boltzmann machines, simple artificial neural networks with one visible layer corresponding to the physical system and one hidden layer, in learning thermodynamics of Ising models Torlai and Melko (2016), producing starting points for variational quantum Monte Carlo that are superior to those from conventional methods Carleo and Troyer (2017), performing tomography for many-body quantum states Torlai et al. (2017), and constructing topological states Deng et al. (2016). Interesting connections between artificial neural networks and more conventional methods in condensed matter physics have also been uncovered  Mehta and Schwab (2014); Stoudenmire and Schwab (2016).

Unsupervised machine learning techniques, on the other hand, have so far been mostly used to classify phases of classical models in many-body physics. For example, the t-distributed stochastic neighbor embedding (t-SNE) technique Maaten and Hinton (2008); tSN ; tSN (2017) was used in Ref. Carrasquilla and Melko (2017) to cluster spin configurations and visualize the phase transition of the two-dimensional (2D) Ising model. Later, Lei Wang Wang (2016) applied principal component analysis (PCA) Jolliffe (2002) to obtain low-dimensional representations of Ising spin configurations and make connections between principal components and physical observables such as the magnetization and the susceptibility, conventionally used to determine critical phenomena. His work was recently followed up by other groups who provided a more detailed examination of the PCA and other techniques applied to various classical models, including those on frustrated geometries Wetzel (2017); Hu et al. (2017); Wang and Zhai (2017). PCA has also been applied to quantum systems van Nieuwenburg et al. (2017); however, the 2D visualization of the spin configurations for the random-field Heisenberg model did not produce any useful features. A proposal for a different type of unsupervised machine learning for quantum many-body systems, which combines two-point function calculations with convolutional neural networks, has also recently been put forth Broecker et al. (2017).

Here we employ several nonlinear unsupervised machine learning methods, including fully-connected and convolutional autoencoders Bourlard and Kamp (1988); Hinton and Salakhutdinov (2006); Chollet (2016), random trees embedding Geurts et al. (2006); Moosmann et al. (2006); ran , and the t-SNE, to reduce the dimensionality of raw auxiliary spin (also known as auxiliary field) configurations generated during quantum Monte Carlo simulations of the two- and three-dimensional (3D) Fermi-Hubbard models Hubbard (1963). We focus on the finite-temperature magnetic phase transition of the model in three spatial dimensions and the corresponding crossover in two. Therefore, most of the data are generated at half filling, where there is on average one fermion per lattice site. We visualize the outcomes and look for features in the dimensionally-reduced configurations that may correlate with physical observables or signal phase transitions/crossovers. We work with unlabeled data during learning; we do not use any knowledge of the model parameters or temperature each configuration corresponds to in our analysis, nor do we use any information about the location of the phase transitions or crossovers during learning.

We start, however, with the classical Ising model on a 3D cubic lattice and benchmark the outcome of our convolutional autoencoder using spin configurations generated in a Monte Carlo simulation in a range of temperatures on both sides of the phase transition. We are able to define indicators that closely resemble the magnetization or the susceptibility. We then generalize the neural network to accommodate the additional imaginary-time axis in the auxiliary spin configurations of the Hubbard models and show that quantum fluctuations as well as the O(3) symmetry of the model in this case lead to low-dimensional visualizations that are fuzzier than their classical counterparts, although useful indicators signaling magnetic phase changes can still be defined. Next, we find that a fully-connected autoencoder, combined with random trees embedding, produces a more or less temperature-resolved image of the configurations in two dimensions.

t-SNE emerges as a clear winner among all the techniques, or combinations of techniques, we have used, producing low-dimensional representations of data with clearly distinguishable patterns above and below the Néel temperature for the 3D model or the crossover temperature for the 2D model. We define temperature-dependent indicators and show that, in the weak-coupling regime, they correlate perfectly with the antiferromagnetic (AFM) structure factors, and can capture a transition in the presence of a magnetic field. Finally, we apply t-SNE to configurations generated for the 3D Hubbard model away from half filling in the presence of the “sign problem” Loh et al. (1990); Iglovikov et al. (2015) in quantum Monte Carlo simulations and discuss the risks of using dimensional-reduction techniques in sign-problematic regions.

In the following section, we briefly discuss the models we have considered in this study. Then in Sec. III, we provide an overview of the various machine learning techniques we employ. The results are discussed in Sec. IV, followed by concluding remarks.

II Models

II.1 3D Ising Model

We first consider the classical Ising model on the 3D cubic lattice

H = -J \sum_{\langle i,j \rangle} \sigma_i \sigma_j,        (1)

where \sigma_i = \pm 1, \langle i,j \rangle denotes nearest neighbors, and J is the strength of the corresponding exchange interaction (we set J as the unit of energy whenever the Ising model is discussed). The system undergoes a second-order phase transition as the temperature is lowered below the critical value of T_c \approx 4.51 Barber et al. (1985).

We perform Monte Carlo simulations based on the Metropolis algorithm Hastings (1970) on a cubic lattice of N sites and generate spin configurations at different temperatures. Each configuration is an array of N spins with values \pm 1 as elements, and can be thought of as a point in an N-dimensional space.
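
For illustration, a minimal single-spin-flip Metropolis sweep for this model could look like the following sketch; the lattice size, temperature, and update schedule are placeholders and not the exact choices behind the data used in this work.

```python
# Minimal single-spin-flip Metropolis sweep for the 3D Ising model (illustrative sketch;
# the lattice size and parameters are placeholders, not those used for the data in this work).
import numpy as np

def metropolis_sweep(spins, T, J=1.0, rng=None):
    """One sweep over an L x L x L array of +/-1 spins at temperature T (k_B = 1)."""
    rng = np.random.default_rng() if rng is None else rng
    L = spins.shape[0]
    for _ in range(spins.size):
        i, j, k = rng.integers(0, L, size=3)
        # Sum of the six nearest neighbors with periodic boundary conditions
        nn = (spins[(i + 1) % L, j, k] + spins[(i - 1) % L, j, k]
              + spins[i, (j + 1) % L, k] + spins[i, (j - 1) % L, k]
              + spins[i, j, (k + 1) % L] + spins[i, j, (k - 1) % L])
        dE = 2.0 * J * spins[i, j, k] * nn        # energy cost of flipping spin (i, j, k)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j, k] *= -1
    return spins

# Example: equilibrate one configuration at T = 5.0 on an 8^3 lattice (illustrative sizes)
spins = np.random.default_rng(0).choice([-1, 1], size=(8, 8, 8))
for _ in range(100):
    spins = metropolis_sweep(spins, T=5.0)
```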

II.2 The Fermi-Hubbard Model

We are mainly interested in how quantum fluctuations affect our ability to locate phase transitions or crossovers with unsupervised machine learning techniques. Therefore, we also consider strongly-correlated fermions in 3D cubic and 2D square lattices, described by the Hubbard model,

H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{H.c.} \right) + U \sum_i \left( n_{i\uparrow} - \tfrac{1}{2} \right) \left( n_{i\downarrow} - \tfrac{1}{2} \right) - \mu \sum_{i,\sigma} n_{i\sigma} - h \sum_i \left( n_{i\uparrow} - n_{i\downarrow} \right),        (2)

where c^{\dagger}_{i\sigma} (c_{i\sigma}) creates (annihilates) a fermion with spin \sigma on site i, n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma} is the number operator, t is the nearest-neighbor hopping integral (we set t as the unit of energy whenever the Hubbard model is discussed), U is the onsite Coulomb interaction, \mu is the chemical potential, and h is the magnitude of the magnetic field. By symmetry, \mu = 0 leads to half filling (an average density of one fermion per site), and hole doping is achieved by decreasing \mu.

For any U > 0, the 3D model displays a second-order transition from a disordered phase at high temperatures to a long-range Néel ordered phase below a U-dependent critical temperature T_N. Theoretical and numerical analyses  Scalettar et al. (1989); Staudt et al. (2000); Kent et al. (2005); Paiva et al. (2011); Kozik et al. (2013); Hirschmeier et al. (2015); Khatami (2016) have shown that after an exponential increase from zero upon turning on U in the weak-coupling regime Scalettar et al. (1989), T_N peaks at intermediate coupling and eventually goes to zero as \sim t^2/U upon further increasing the interaction strength in the strong-coupling regime. The latter can be understood from the Néel transition of an antiferromagnetic spin-1/2 Heisenberg model Sandvik (1998), which provides the low-energy description of the half-filled Hubbard model in the strong-coupling regime, where double occupancy is largely suppressed and fermions interact predominantly through the spin exchange interaction J = 4t^2/U. Long-range antiferromagnetic correlations of the model have recently been observed in an experimental realization using ultracold fermionic atoms on optical lattices Hart et al. (2015).

We use the determinantal quantum Monte Carlo (DQMC) Blankenbecler et al. (1981); que method to simulate the model on a cubic cluster for three values of the interaction strength spanning the weak-, intermediate-, and strong-coupling regimes (the latter two being U = 9 and 14). We generate and save auxiliary spin configurations that can be thought of as points in an (N x L_\tau)-dimensional space, where N is the number of lattice sites and L_\tau is the number of imaginary time slices, for a range of temperatures (see Ref. Ch’ng et al., 2017 for details of our DQMC simulations).

We also generate auxiliary spin configurations for the model in two dimensions, in the weak-coupling regime, with the same number of imaginary time slices as in the 3D case. The 2D model does not have a finite-temperature phase transition to long-range order; rather, it exhibits a crossover to a region with strong antiferromagnetic correlations. The onset of this region, which is associated with the formation of a peak in the uniform susceptibility, has been estimated at this interaction strength in previous studies Paiva et al. (2010); Khatami and Rigol (2011).

III Methods

III.1 Autoencoders

Autoencoder refers to a particular set of architectures of artificial neural networks Wikipedia (2017) that can be trained to extract features or reduce the dimensionality of big data without specific prior knowledge of features or distinctions (in an unsupervised fashion). They are made up of multiple fully-connected and/or convolutional layers, similar to those used in supervised neural network machine learning algorithms. An example of the autoencoder architecture is shown in the top panel of Fig. 1. Similarly to supervised learning, such a feed-forward network can be trained by minimizing a cost function, which is defined based on the difference between the neuron outputs at the output (right-most) layer and a desired output. However, unlike in supervised learning, the autoencoder is supposed to reproduce the input data [what is fed to the network in the input (left-most) layer] at the output layer; the desired output is simply the same as the input.

The hidden layers on the left half of the autoencoder are called the encoding layers, where, for example, an image is gradually deconstructed and represented using a smaller and smaller number of pixels, whereas the hidden layers on the right half are called the decoding layers, where the network tries to reconstruct the image from the knowledge in the low-dimensional space. The middle layer, also known as the coding layer, provides the most dimensionally-reduced representation of the input data after the network has been trained, allowing other clustering or machine learning methods to extract meaningful information more easily. It is known that a single-layer autoencoder with linear activations and the PCA are very similar Baldi and Hornik (1989). Here, we have used deeper fully-connected and convolutional autoencoders and have avoided the PCA, as it is useful only for data with linear correlations. The latter has been shown to have a poor performance for quantum systems relative to classical ones in classifying phases van Nieuwenburg et al. (2017).

In our convolutional autoencoder, the network tries to extract features in the encoding layers by convolving a shared filter (or kernel), which sweeps the previous layer, with small cubic subsections of the data in that layer. To further reduce the dimensionality before the next convolutional layer, a process called maxpooling is typically performed after each convolutional layer, in which the resolution is reduced by taking the maximum value of a subsection of the data and passing only that value to the next layer. The decoding is done through a process called upsampling, which refers to resampling and interpolation that put together the extracted features and increase the resolution of the data.

In fully-connected autoencoders, however, each neuron in an encoding or decoding layer is connected to all the neurons in the neighboring layers, as opposed to convolutional autoencoders, where only subsections of the data from the previous layer are connected to their corresponding neurons in the following layer via an adjustable filter, resulting in fewer parameters that need to be trained. We use both fully-connected and convolutional autoencoders in our study, employing convolutional autoencoders specifically to allow spatially correlated features to be extracted more efficiently.
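
As an illustration, a small convolutional autoencoder of this type can be assembled in Keras Chollet (2016); the sketch below is schematic, with the number of layers, feature maps, filter sizes, and activations chosen for readability rather than matching the architectures used for the results below.

```python
# Schematic 3D convolutional autoencoder in Keras (illustrative layer sizes, not the
# exact architecture used in this work); input is an L x L x L configuration with one channel.
from tensorflow import keras
from tensorflow.keras import layers

L = 8  # linear lattice size (placeholder)

inp = keras.Input(shape=(L, L, L, 1))
x = layers.Conv3D(8, 2, activation="relu", padding="same")(inp)   # encoding layers
x = layers.MaxPooling3D(2, padding="same")(x)
x = layers.Conv3D(4, 2, activation="relu", padding="same")(x)
x = layers.MaxPooling3D(2, padding="same")(x)
x = layers.Flatten()(x)
code = layers.Dense(2, name="coding_layer")(x)                     # two latent variables

x = layers.Dense((L // 4) ** 3 * 4, activation="relu")(code)       # decoding layers
x = layers.Reshape((L // 4, L // 4, L // 4, 4))(x)
x = layers.Conv3D(4, 2, activation="relu", padding="same")(x)
x = layers.UpSampling3D(2)(x)
x = layers.Conv3D(8, 2, activation="relu", padding="same")(x)
x = layers.UpSampling3D(2)(x)
out = layers.Conv3D(1, 2, activation="tanh", padding="same")(x)    # reconstruct +/-1 spins

autoencoder = keras.Model(inp, out)
encoder = keras.Model(inp, code)   # used to extract the latent variables after training
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(configs, configs, epochs=50, validation_split=0.3)
```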

III.2 Random Trees Embedding

Random trees embedding transforms data in an unsupervised fashion to a high-dimensional space using tree graphs, resulting in a sparser representation, for which the principal features can be extracted and mapped to a lower dimension. Here, a tree refers to a graph with nodes repeatedly branching unidirectionally. The parent node, or the root of the tree, contains all the data, and a node on a branch contains a subset of the data. A node is associated with a global feature of the subset, and so, smaller branches, which contain smaller subsets farther away from the root, reveal more localized structures. Representing data in a metric space on a tree graph introduces distortions. This issue is overcome by embedding the data using an ensemble of randomized trees instead. That is, the metric space is divided into random sections, with overlap between these sections permitted. A given section is then further divided up into smaller subsets on a tree. An ensemble of such trees can make independent observations. The maximum branching depth of a tree and the number of trees are tunable parameters of the algorithm.

Once these random trees are grown, the pruning for features begins. The ensemble of trees votes for prominent features based on the density of the overlap between nodes from different trees at a given depth. Features with overlapping density lower than a certain threshold are discarded. This voting process reduces the bias and variance of those selected features.

Here, we use random trees embedding with 100 randomized trees and a maximum depth of 10 for the number of branchings. We apply the algorithm to a low-dimensional representation of the auxiliary field data for the Hubbard model obtained via a fully-connected autoencoder, as outlined in Sec. IV.
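
A sketch of this step with scikit-learn is given below; the projection of the sparse tree encoding down to two dimensions via truncated SVD is one common choice and is included only to make the example complete.

```python
# Random trees embedding of the autoencoder's latent variables, followed by a truncated
# SVD that projects the sparse, high-dimensional encoding to 2D (the SVD step is an
# illustrative choice for the final projection).
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding
from sklearn.decomposition import TruncatedSVD

latent = np.random.rand(1000, 4)   # stand-in for the coding-layer output (1000 samples, 4 latent variables)

embedder = RandomTreesEmbedding(n_estimators=100, max_depth=10, random_state=0)
sparse_codes = embedder.fit_transform(latent)                           # sparse high-dimensional representation

output_2d = TruncatedSVD(n_components=2).fit_transform(sparse_codes)    # two-dimensional output
```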

III.3 t-SNE

PCA has been the go-to dimensionality reduction technique in condensed matter physics so far and has been successfully applied to classical models like the Ising model to extract measures that closely resemble the order parameter or the susceptibility. However, PCA performs a linear projection of the data from the high-dimensional space to a low-dimensional space by maximizing the variance of the projection, and so, local structures with nonlinear correlations are not preserved. A simple measure such as the magnetization is evidence of this linearity for the Ising model, as each configuration can be projected onto a point in one dimension merely by summing its spins. The same linearity cannot be assumed for meaningful observables extracted from the auxiliary field configurations.

t-SNE is a powerful algorithm developed to preserve both global, and more importantly, local structures of data in a low-dimensional space when projecting them from a high-dimensional space. Prior to t-SNE, SNE Hinton and Roweis (2003) was one of several attempts at achieving that. Although not very successful, it was the foundation for t-SNE. SNE employs stochastic gradient descent to minimize Kullback-Leibler divergences between pairwise conditional probability distributions, representing similarities of points, in the high- and low-dimensional spaces Maaten and Hinton (2008). The distributions are obtained from Gaussian functions centered around each point. The effective number of neighbors, also known as the perplexity, which is provided by the user, is kept fixed by adjusting the width of the Gaussian distributions in regions of the configuration space with different densities of points. t-SNE uses a slightly different cost function and a Student-t distribution, as opposed to a Gaussian, in the low-dimensional space to mitigate some issues in the original SNE and provide a better performance Maaten and Hinton (2008).

Values between 5 and 50 are suggested for the perplexity Maaten and Hinton (2008); tSN . We use a perplexity of about 27, which leads to the most physically interesting features in the low-dimensional representations of the field variables across the various interaction strengths. The method is rather slow; its cost grows quadratically with the number of data points N in the exact algorithm Yang et al. (2013).
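
In practice, obtaining the two-dimensional representation amounts to a single call to an off-the-shelf implementation; a minimal scikit-learn sketch with stand-in data is:

```python
# Minimal t-SNE projection of flattened auxiliary-field configurations to two dimensions
# (stand-in data; the perplexity follows the value quoted in Sec. IV).
import numpy as np
from sklearn.manifold import TSNE

fields = np.random.choice([-1.0, 1.0], size=(2000, 6400))                  # stand-in configurations
embedding = TSNE(n_components=2, perplexity=27.07).fit_transform(fields)   # shape (2000, 2)
```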

III.4 k-means

k-means is an unsupervised clustering algorithm that locates the centroids of k clusters in the data, where k is provided by the user. The algorithm starts with k centroids distributed randomly. The next step is to calculate the Euclidean distance (L2 norm) of each data point to each centroid and assign each point to a cluster based on which centroid it is closest to. The new centroids of these clusters are then calculated, and these two steps are repeated until one converges to a stable set of centroids. This type of clustering also allows new points to be classified based on which cluster they fall closest to. It is, of course, most effective in cases where data points form well-separated clusters.

Here, we apply the k-means algorithm to the 2D output of our autoencoders or t-SNE to quantify the spread of the data at various temperatures. We ask k-means to identify three centroids at each temperature. Then we find the center of the three centroids and define a temperature-dependent indicator as the mean distance of the three centroids from their center. We note that our indicator is not unique; one may be able to come up with measures that more accurately capture the evolution of features in the outputs as the temperature is varied.
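
A sketch of this construction, applied to the two-dimensional output at a single temperature, is given below; the function name and the stand-in input are illustrative.

```python
# Spread indicator from k-means: locate three centroids in the 2D output at one temperature,
# then average their distances from the center of the centroids.
import numpy as np
from sklearn.cluster import KMeans

def spread_indicator(points_2d, n_clusters=3):
    """points_2d: array of shape (n_samples, 2), the low-dimensional output at one temperature."""
    centroids = KMeans(n_clusters=n_clusters, n_init=10).fit(points_2d).cluster_centers_
    center = centroids.mean(axis=0)                        # center of the three centroids
    return np.linalg.norm(centroids - center, axis=1).mean()

# Example with stand-in data for one temperature
value = spread_indicator(np.random.randn(500, 2))
```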

IV Results

We start with spin configurations generated in a Monte Carlo simulation of the Ising model on a cubic cluster. We use a fine temperature grid, in increments of 0.01, spanning both sides of the transition, and work with a total of about 23,000 configurations across all temperatures. Similarly to what is done in Refs. Wang (2016); Hu et al. (2017), we treat each configuration as a point in the N-dimensional configuration space and try to deduce any features corresponding to the phase transition by reducing the dimensionality of the configuration space and visualizing it in one or two dimensions.

We use a 3D convolutional autoencoder with four encoding/decoding layers and one fully-connected coding layer with either one or two neurons. The architecture is shown in Fig. 1. The input and output layers have the same structure as the physical system. A small cubic filter is used in the convolutions. A maxpooling layer is used after each convolutional layer. We shuffle the configurations among all temperatures, then use 70% of them to train the neural network until the accuracy, defined as one minus the mean square error between the input and the output, saturates to a value around 75%. We then feed the remaining 30% of the configurations, which the autoencoder has not seen during training, to the network and plot the values of the neurons in the coding layer (also known as the latent variables).

Figure 1: Values of neurons in the coding layer of a trained autoencoder that takes Monte Carlo spin configurations of the 3D Ising model as the input. The architecture of our 3D convolutional autoencoder is depicted on top with neurons in the coding layer as filled (orange) circles. The encoder (decoder) part consists of four convolutional (upsampling) layers with 32, 8, 4, and 4 feature maps. Different symbol colors in the scatter plot correspond to different temperatures. Top inset: The output of a similar autoencoder, in which the coding layer consists of only a single neuron, as a function of temperature. The dashed line is a fit to a(T_c - T)^\beta + c, with a, \beta, and T_c as fit parameters and c kept fixed at -0.25, which is the average output over all temperatures. Bottom inset: Temperature dependence of the spread of the data as measured through k-means. The vertical dashed line marks the location of T_c.

The main panel in Fig. 1 shows the output of the coding layer with two neurons. The color gradient of points represents the temperature gradient. There is a clear distinction between how data points cluster at high and low temperatures. At temperatures above the transition (red dots), we detect only one cluster. However, below the critical temperature T_c (blue dots), two clusters, separated along both axes, clearly emerge. This duality is a result of the spin inversion symmetry in the model that manifests itself in the ordered phase.

At the lowest temperatures, the two clusters have the largest separation, which seems to reach a saturation value, analogous to the behavior of the order parameter (magnetization). However, we cannot properly quantify this separation as a function of temperature, as the broken time-reversal symmetry at a given temperature forces all configurations to fall in one or the other low-temperature cluster, but not both. The fact that predominantly one cluster of points forms at very high temperatures as well suggests that the points are most spread out in the critical temperature region. We quantify the spread of data by applying the k-means clustering technique and requiring it to identify three clusters and their centroids (shown as white circles in Fig. 1 for one of the temperatures). We plot the resulting indicator as a function of temperature in the bottom inset; it bears a striking resemblance to the magnetic susceptibility of the Ising model. Interestingly, the thermodynamics of the system, encoded in the distribution of configurations in the importance sampling, is preserved during the dimension reduction.

In the top inset of Fig. 1, we also plot the autoencoder output in the case where we have only one neuron in the coding layer as a function of temperature. We observe a bifurcation of the neuron output as we decrease the temperature below T_c. Similar results were shown in Ref. Hu et al. (2017) for the 2D Ising model. The neuron output looks almost exactly like the magnetization of the model as a function of temperature, except for a seemingly arbitrary shift. Therefore, we fit the neuron outputs in the top branch at T < T_c to a function proportional to (T_c - T)^\beta after a shift, where the proportionality constant, \beta, and T_c are fit parameters (the fitted function is shown as a black dashed line in the inset of Fig. 1). We obtain a critical temperature that agrees well with the value estimated for a system of the same size Barber et al. (1985), and an exponent \beta close to 0.33, the critical exponent of the 3D Ising model.
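
A sketch of this fit with a generic least-squares routine is shown below; the initial guesses are illustrative, and the shift is passed in as the fixed high-temperature average rather than fitted.

```python
# Fit of the single-neuron output below the transition to y(T) = a*(T_c - T)**beta + c,
# with c held fixed (illustrative initial guesses; not the exact fitting script used here).
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, a, beta, Tc, c):
    return a * np.clip(Tc - T, 0.0, None) ** beta + c

def fit_branch(T_low, y_low, c_fixed, Tc_guess=4.5):
    """T_low, y_low: temperatures and neuron outputs on one branch below the transition."""
    popt, _ = curve_fit(lambda T, a, beta, Tc: order_parameter(T, a, beta, Tc, c_fixed),
                        T_low, y_low, p0=[1.0, 0.33, Tc_guess])
    return dict(zip(["a", "beta", "Tc"], popt))
```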

Inspired by these findings, we ask whether one can use a similar dimension-reduction recipe to deduce critical temperatures in quantum mechanical systems. In this work, we focus on magnetic phase transitions. We know that a perfect antiferromagnetic alignment of spins along a single direction can no longer describe the Néel state, and so, quantum fluctuations will likely blur the clear image we observe in reduced dimensions for the classical Ising model. However, to what extent is the information still useful?

We set out to answer this question by using the auxiliary fields and modifying our convolutional autoencoder such that the imaginary time slices are treated as different “color” channels in the input/output layers, each containing one neuron per lattice site, in the same way as the single channel we used for the Ising model. The architecture of the remaining network is also modified to suit the smaller cluster we use for the Hubbard model. We use one less hidden layer and a different number of feature maps in the hidden layers than in the Ising autoencoder. The output in this case is sensitive to the number of feature maps, and we have chosen a set that results in the largest accuracy.
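
As a concrete illustration of this input layout, the sketch below rearranges a set of auxiliary-field configurations so that the imaginary-time slices occupy the channel axis expected by a channels-last 3D convolutional network; all array sizes are placeholders.

```python
# Reshaping auxiliary-field configurations so that imaginary-time slices act as "color"
# channels for a channels-last 3D convolutional network (illustrative sizes).
import numpy as np

n_samples, L_tau, Lx = 1000, 100, 4   # placeholders: samples, time slices, linear cluster size
fields = np.random.choice([-1.0, 1.0], size=(n_samples, L_tau, Lx, Lx, Lx))

# Keras Conv3D expects (batch, depth, height, width, channels), so move the time-slice axis last:
inputs = np.moveaxis(fields, 1, -1)   # shape (n_samples, Lx, Lx, Lx, L_tau)
```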

Figure 2: The output of the 3D convolutional autoencoder for the 3D Hubbard model at half filling for (a) the weak-, (b) the intermediate- (U = 9), and (c) the strong-coupling (U = 14) interaction strengths. Each of the imaginary time slices is treated as a different “color” channel. The input for each configuration is a four-dimensional array, with three spatial dimensions plus the imaginary time axis. Similarly to the output for the Ising model, the character of the output in the space of the neuron outputs of the coding layer changes as a function of temperature. However, we do not find the formation of distinct clusters below the expected Néel temperatures. In (d), we show the spread indicator for the data in (a) and (b), normalized to fit the antiferromagnetic structure factors (shown as red lines) in each case, as a function of temperature. For the strong-coupling case in (c), this indicator is dominated by noise and is therefore not shown.

The results are shown in Fig. 2 for the three interaction strengths. The first three main panels show the outcome in the case where the coding layer has two neurons. Unlike in the classical case, lowering the temperature does not lead to the formation and contraction of two distinguishable clusters. Instead, the data points seem to spread out mostly along one of the two dimensions, at least for the weak and intermediate couplings. Quantum fluctuations and the fact that our auxiliary field is represented along a particular direction in the spin vector space seem to play a significant role in blurring the picture. As a result, in this case, our measure of the spread of data obtained from k-means behaves more similarly to an order parameter than to the susceptibility. In Fig. 2(d) we show this quantity, along with the AFM structure factor AFM , calculated in the DQMC, as a function of temperature for the weak coupling and U = 9. The indicator has been multiplied by a constant that minimizes the mean square distance of its values from the structure factor data (weighted by the error bars in the latter) over all temperatures. We do not find a good agreement between the two at low temperatures; however, the indicator is a relatively smooth function for the weak coupling and displays its fastest rise around the expected critical temperature for this cluster. For U = 9, we observe large fluctuations in the indicator, which appear to grow significantly larger below the expected Néel temperature. For U = 14, the indicator is dominated by noise and is not shown. Our approach seems to be most efficient in locating the critical behavior in the weak-coupling regime of the Hubbard model.
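
The single normalization constant has a closed form for this weighted least-squares problem; a short sketch with hypothetical array names is:

```python
# Scale factor alpha minimizing sum_i w_i * (alpha*C_i - S_i)^2 with weights w_i = 1/err_i^2,
# used to overlay the indicator C(T) on the structure factor S(T).
import numpy as np

def best_scale(C, S, S_err):
    C, S, w = np.asarray(C), np.asarray(S), 1.0 / np.asarray(S_err) ** 2
    return np.sum(w * C * S) / np.sum(w * C ** 2)
```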

The same conclusion can be drawn considering the output of an autoencoder that has a single neuron in the coding layer. Unlike for the Ising model, we do not find a bifurcation as a function of temperature when projecting the data to one dimension. However, we find that the fluctuations in the data increase rapidly as the temperature decreases. So, in the insets of Figs. 2(a), 2(b) and 2(c), we show the standard deviation of the latent variable as a function of temperature. For the weak and intermediate couplings, it behaves similarly to the spread indicator, e.g., rising rapidly around the expected critical temperature in the weak-coupling case. For U = 14, the fluctuations dominate.

Figure 3: The output of the random trees embedding algorithm trained on the four latent variables of a fully-connected autoencoder for the 3D Hubbard model at half filling for (a) the weak-, (b) the intermediate-, and (c) the strong-coupling interaction strengths. The input layer of the autoencoder (shown on top) is a 1D array containing the flattened auxiliary-field configuration. The hidden layers in the encoder/decoder parts have 80, 30, and 10 neurons. The random trees embedding algorithm further reduces the dimension of the data from four to two. In this case, the outputs clearly separate data points from different temperatures. The dashed lines are line fits to data at the estimated Néel temperature for each interaction strength. (d)-(f) Same as in (a)-(c), except that the inputs are the latent variables of the convolutional autoencoder used for Fig. 2, modified to have four neurons in the coding layer. In this case, the temperature gradient is larger along output 1, and the clustering of points at low temperatures is visible for the weak and intermediate couplings.

Next, we explore the possibility of having more than two neurons in the coding layer (more than two latent variables) and further reducing the dimensionality of the data using another technique such as random trees embedding. We use the same convolutional autoencoder architecture as for Fig. 2, except that we choose to have four neurons in the coding layer. Then, we feed the output of the autoencoder to the random trees embedding algorithm and obtain a representation in two dimensions. The results are summarized in Figs. 3(d)-(f). The emerging fan shape not only creates an approximate temperature resolution, but for the weak coupling and U = 9 also exhibits two low-temperature clusters near small values along the first dimension, separated along the second dimension, reminiscent of the Ising picture. For U = 14, the latter feature is mostly washed away.

The convolutions in our autoencoder appear to be crucial for the low-temperature clustering in the output of the random trees embedding, but not for the temperature resolution. We try the same combination of autoencoders and random trees embedding, except that this time, we use a fully-connected autoencoder, shown in the top part of Fig. 3. The results for the latter are shown in Figs. 3(a)-(c). We find that despite the better temperature resolution than with the convolutional autoencoder, which extends even to the strongest coupling, there is no feature that points to different phases at high and low temperatures. Interestingly, data points corresponding to a given temperature spread along straight lines in this reduced space, with the slope of the line correlating with the temperature. We have also explored applying random trees embedding to data extracted not from the coding layer, but from various hidden layers of our fully-connected autoencoder. We find that the coding layer yields the best picture in terms of temperature resolution.

Figure 4: Output of the t-SNE algorithm in two dimensions for the 3D Hubbard model at half filling for (a) the weak-, (b) the intermediate-, and (c) the strong-coupling interaction strengths. We use the raw auxiliary field configurations from DQMC simulations of the model at 51 temperatures on a uniform grid as input. Different symbol colors in (a)-(c) correspond to different temperatures. (d) Same indicator as in Fig. 2(d), calculated for the t-SNE outputs, as a function of temperature. For the weak coupling, the indicator follows the AFM structure factor very closely.

Techniques like random trees embedding do not scale well with the dimension of the original configuration space and so cannot easily be directly applied to the raw auxiliary field configurations for dimension reduction. Other clustering techniques, such as the PCA or the t-SNE, can better handle larger dimensions. As we discuss below, we find that the direct application of t-SNE to the raw data also yields a superior distinction between clustering patterns at different temperatures.

We apply the t-SNE algorithm tSN (2017) with a perplexity of 27.07 to the half-filled 3D Hubbard configurations at our three interaction strengths. We first reduce the dimensionality to 37 by preprocessing the data with PCA, for greater speed and to filter out some noise without inducing severe distortions Maaten and Hinton (2008), and then pass the data to the t-SNE algorithm in batches of 500 at a time. These are the same configurations as the ones we used for the autoencoder. The two-dimensional visualizations are shown in Fig. 4. For the weak coupling in Fig. 4(a), not only do the data points spread out as the temperature decreases, similarly to the autoencoder outcome in Fig. 2, but two distinct clusters also emerge, analogous to what we observed in Fig. 3 from random trees embedding, or for the 3D Ising model. We point out, however, that unlike in the Ising case, there is a significant number of points scattered between the two centers at the lowest temperatures.
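
A sketch of this preprocessing pipeline with scikit-learn is shown below (stand-in data; the batching of the input within t-SNE is specific to the implementation we use and is not reproduced here).

```python
# PCA preprocessing to 37 components followed by t-SNE with perplexity 27.07 (stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

fields = np.random.choice([-1.0, 1.0], size=(5000, 6400))       # stand-in raw auxiliary fields
reduced = PCA(n_components=37).fit_transform(fields)            # speeds up t-SNE, filters noise
embedding = TSNE(n_components=2, perplexity=27.07).fit_transform(reduced)
```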

A similar picture emerges for U = 9 in Fig. 4(b); however, for U = 14, we find that the data points stick together and form worm-like figures. As can be seen in Fig. 4(c), they mostly gather around two centers at low temperatures. Interestingly, we find that they are formed, to a great extent, by data points that belong to the same temperature. We attribute their formation to Mott physics. In the Mott region at large U, the freezing of charge degrees of freedom manifests itself in a significant increase in the autocorrelation time in the single spin flip scheme of our DQMC algorithm, and the simulations become less ergodic than in the weak- or intermediate-coupling regimes. We observe this behavior despite the fact that we have attempted to mitigate the problem by performing 10 different simulations of the model for U = 14 using different random number seeds and shuffling the configurations from those simulations before applying t-SNE.

We find a remarkable correlation between the indicator calculated for the 2D t-SNE outputs and the AFM structure factor of the model in the weak-coupling regime. The two are plotted in Fig. 4(d) for the weak coupling after normalizing the indicator to best fit the structure factor (red solid curve). They show a very good agreement across the entire range of temperatures shown. The development of long-range correlations and the growing dissimilarities in the configurations due to the breaking of the time-reversal symmetry as we lower the temperature can be directly mapped to the increase in the structure factor.

We have also applied t-SNE to the auxiliary spin configurations for visualizations in 1D and 3D (not shown). We do not find any meaningful features in the 1D visualization. Projection of the configurations to 3D produces outputs that resemble volumetric versions of the 2D scatter plots.

Figure 5: (a) The convolutional autoencoder and (b) t-SNE outputs for the 3D Hubbard model at half filling in the presence of a strong magnetic field. (c) The corresponding indicators as a function of temperature. (d) The AFM structure factors of the model for the longitudinal (along the field) and in-plane components of the spin as a function of temperature.

Breaking the SU(2) symmetry of the Hubbard model changes the low-temperature physics and the picture obtained from the unsupervised machine learning algorithms. We explore this by including a strong uniform magnetic field, which mostly aligns the spins with the field axis and pushes the remnant AFM correlations to the plane perpendicular to it. The canted AFM physics of the 3D Hubbard model is evidenced by the longitudinal and in-plane components of the AFM structure factor plotted in Fig. 5(d). The ferromagnetic correlations along the field direction (not shown) are more than a factor of two larger than the in-plane component of the AFM correlations.

We study the effect of the magnetic field on the results from the convolutional autoencoder and the t-SNE. The output of the autoencoder with two latent variables, shown in Fig. 5(a), displays a temperature resolution; however, unlike for the original model with SU(2) symmetry, all of the low-temperature points appear to belong to a single cluster. As shown in Fig. 5(c), the spread indicator in this case is a flat function of temperature, and it misses the important phase changes in the plane perpendicular to the field. One can in principle define other indicators. For example, the distance between the high-temperature cluster and clusters at lower temperatures can presumably serve as a measure of the ferromagnetic correlations along the field. The t-SNE, on the other hand, in addition to displaying the temperature resolution, assigns low-temperature points to a region that shrinks rapidly as the temperature is lowered below the transition, signaling a phase change similar to the case of zero magnetic field. This is more clearly captured by the corresponding indicator in Fig. 5(c), which now simply measures the concentration of points.

Figure 6: Output of the t-SNE algorithm for the 2D Hubbard model at half filling in the weak-coupling regime. We use 800 raw auxiliary field configurations for each temperature on a uniform grid between T = 0.1 and 0.60. We use the same t-SNE parameters as in the training for the 3D Hubbard model in Fig. 4. (b) The indicator (green symbols) and the AFM structure factor (red line) as a function of temperature.

Motivated by the ability of t-SNE to distinguish high-temperature configurations from the low-temperature ones below the critical temperatures of the 3D Hubbard model, at least for weak coupling, we examine the 2D Hubbard model in the weak-coupling regime, also at half filling, using t-SNE with the same parameters as used for the 3D model. As shown in Fig. 6, the outcome of t-SNE in this case exhibits a similar spreading of points as the temperature is lowered as for the model in 3D. We do not find any specific features that point to a crossover, as opposed to the phase transition expected in 3D. In fact, the indicator, which closely follows the AFM structure factor in this case too, exhibits an even sharper rise just below the crossover temperature than for the model in 3D.

Figure 7: (a)-(h) The t-SNE output for the 3D Hubbard model away from half filling. We have used 1,500 auxiliary field configurations per chemical potential, generated deep in the AFM phase, for μ ranging from -0.2 to -4.0 in steps of 0.2. We show snapshots of the output at select values of μ, separated into configurations with a negative sign in the top row and those with a positive sign in the bottom row. We use the same t-SNE parameters as in the training for the 3D Hubbard model at half filling. (i) The expectation value of the AFM structure factor obtained using configurations with positive or negative signs separately as a function of the chemical potential. (j) The average sign as a function of μ.

One may be tempted to apply these techniques to configurations generated for the Fermi-Hubbard model away from half filling to, for example, study the fate of the antiferromagnetic phase of the model in 3D, or the pseudogap and superconducting properties in 2D, as the system is doped. However, any phase transition or crossover deduced from the dimensionally-reduced data in that case, for example, through an indicator similar to , would be a transition or crossover not for the Fermi-Hubbard model, but for an alternative model whose statistics is described by the absolute value of the probability amplitudes in the DQMC simulation of the Fermi-Hubbard model. Therefore, unless it so happens that the phase boundaries of the two models are the same for the transition/crossover of interest, the results will not be of much use for the Fermi-Hubbard model.

Here, we demonstrate this concept using DQMC simulations of the 3D Hubbard model at nonzero hole doping (an average density below one fermion per site). We generate 2000 auxiliary spin configurations per chemical potential, with μ ranging from -0.2 to -4.0 in increments of 0.2, at a temperature deep in the AFM phase at half filling. We first ignore the signs of the configurations and run t-SNE on 3/4 of the entire set. Then, in our visualization of the 2D output, we separate configurations with a negative sign from those with a positive sign at different values of μ. The results are shown in Figs. 7(a)-(d) and Figs. 7(e)-(h), respectively. We have seen that an indicator can be defined based on the t-SNE output that behaves very similarly to a physical observable (e.g., the structure factor), although this was not exactly the case for our indicator at this interaction strength. If we assume this is valid away from half filling and separately for the positive and negative configurations, the indicators are expected to correlate with the corresponding structure factors calculated using positive-only or negative-only configurations [see Fig. 7(i)]. However, the physical structure factor in the presence of the sign problem in the DQMC is obtained from the difference between the above two structure factors divided by the average sign [the latter is shown in Fig. 7(j) as a function of μ]. Therefore, one cannot expect an indicator, when calculated using a mixture of negative and positive configurations at every μ (effectively ignoring the sign), to yield any physically meaningful quantity. This is especially evident at the values of μ where the sign problem is severe and the scatter plots for positive and negative configurations look essentially alike.
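
For reference, this reweighting can be written out explicitly. For an observable O, such as the AFM structure factor, the standard identity (a textbook DQMC relation, stated here for completeness rather than quoted from this work) reads

\langle O \rangle = \frac{\langle s\,O \rangle_{|p|}}{\langle s \rangle_{|p|}} = \frac{P_+ \langle O \rangle_+ - P_- \langle O \rangle_-}{P_+ - P_-},

where s = \pm 1 is the sign of a configuration’s weight, \langle \cdots \rangle_{|p|} denotes an average taken with respect to the absolute value of the weights, P_\pm are the fractions of configurations with positive and negative signs, \langle O \rangle_\pm are the averages within each population, and \langle s \rangle = P_+ - P_- is the average sign.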

Finally, we examine the application of t-SNE separately on configurations with positive or negative signs to make sure that the results in Fig. 7 are not biased due to the dominance of the configurations near half filling. Figure 8 shows the same trend in the similarity of the scatter plots between the configurations with different signs when the average sign drops to zero.

Figure 8: Same as Fig. 7, except that here, the t-SNE calculations have been performed separately on configurations with the same sign. We also use 2,000 auxiliary field configurations per chemical potential.

In summary, we have applied various unsupervised machine learning techniques, such as autoencoders, random trees embedding, k-means, and t-SNE, to obtain low-dimensional representations for the auxiliary spin configurations of the Fermi-Hubbard models in different interaction regimes. We show that one can extract features from the data in reduced dimensions that resemble physical observables related to the magnetic correlations in the physical system. The configurations are sampled during DQMC simulations at finite temperatures at and away from half filling.

As a benchmark, we first train a convolutional autoencoder using spin configurations of a 3D classical Ising model and obtain indicators that closely resemble magnetization and susceptibility. The low-dimensional representations of autoencoders, or a combination of them with random trees embedding techniques, however, are largely affected by quantum fluctuations in the Hubbard model, preventing us from mapping the results to physical observables despite the existence of distinct features that can point to a phase transition at low temperatures at least in the weak-coupling regime.

On the other hand, we find that the t-SNE algorithm combined with k-means provides results that perfectly correlate with the AFM structure factor of the model in two or three spatial dimensions as a function of temperature. We also explore the effect of a symmetry-breaking magnetic field on the outcome of the unsupervised techniques and show that t-SNE is also capable of capturing the transition to a different type of phase at low temperatures. We employ t-SNE to demonstrate that the measures we extract from low-dimensional representations of the auxiliary fields in order to describe the physics can no longer serve that purpose away from half filling in the presence of the sign problem in the DQMC simulations.

We thank Demetrius Almada for providing the Monte Carlo data for the 3D Ising model. We acknowledge support from the National Science Foundation (NSF) under the Grant No. DMR-1609560. The computations were performed in part on the Teal computer cluster of the Department of Physics and Astronomy of San Jose State University and in part on the Spartan high-performance computing facility at San Jose State University supported by the NSF under the Grant No. OAC-1626645.

References
