# Deep Learning Beyond Lefschetz Thimbles

###### Abstract

The generalized thimble method to treat field theories with sign problems requires repeatedly solving the computationally expensive holomorphic flow equations. We present a machine learning technique to bypass this problem. The central idea is to obtain a few field configurations via the flow equations to train a feed-forward neural network. The trained network defines a new manifold of integration which reduces the sign problem and can be rapidly sampled. We present results for the 1+1 dimensional Thirring model with Wilson fermions on sizable lattices. In addition to the gain in speed, the parameterization of the integration manifold we use avoids the “trapping” of Monte Carlo chains which plagues large-flow calculations, a considerable shortcoming of previous attempts.

## I Introduction

Monte Carlo methods are widely used in the study of field theoretical and many-body systems. They can be understood as a way of computing a high-dimensional integral encoding the physics of the system through importance sampling of the integrand. In field theories, this integral is a discretized version of the Feynman path integral. The Monte Carlo method is essentially the only general purpose method capable of dealing with strongly interacting field theories. Unfortunately, many systems of great interest are defined by path integrals whose integrand oscillates wildly, making a direct stochastic estimation impossible in practice. This situation is referred to as the “sign problem”. The class of systems with a severe sign problem includes most finite-density systems, among them QCD, condensed matter models (e.g. the Hubbard model away from half-filling), and all real-time calculations. In the context of QCD, many ideas aiming at solving the sign problem have been developed through the years, among them the complex Langevin method Aarts and Stamatescu (2008), the density of states method Langfeld and Lucini (2016), canonical methods Alexandru et al. (2005); de Forcrand and Kratochvila (2006), reweighting methods Fodor and Katz (2002), series expansions in the chemical potential Allton et al. (2002), fermion bags Chandrasekharan (2013) and analytic continuation from imaginary chemical potentials de Forcrand and Philipsen (2007).

In the last few years the thimble approach Cristoforetti et al. (2012, 2014a) has received a lot of attention. The main idea of this method is to deform the region of integration of the path integral from the original real fields to some other manifold, $\mathcal{M}$, embedded in the space of complexified fields. This deformation, if care is exercised, does not change the value of the integral, thanks to a multidimensional generalization of the Cauchy theorem of complex analysis. If $\mathcal{M}$ is properly chosen, the sign problem can be solved or, at least, substantially reduced.

In the first attempts, $\mathcal{M}$ was chosen to be the set of Lefschetz thimbles, which are the multidimensional generalization of the “steepest descent” or “constant phase” paths of complex analysis Cristoforetti et al. (2014b); Di Renzo and Eruzzi (2015); Mukherjee et al. (2013); Fujii et al. (2015); Fukushima and Tanizaki (2015); Alexandru et al. (2016a). The phase of the (Euclidean) action, and consequently of the Boltzmann factor $e^{-S}$, is constant over one thimble (an additional contribution to the integral from the curvature of the thimble, the “residual phase”, can be nonzero, but it is typically found to be small), so the sign problem is ameliorated. Two complications immediately appear. The first is that the thimbles cannot be found analytically in non-trivial models, so an algorithm to compute them is necessary to perform the integral. This is a non-trivial task, but a few proposals have been put forward Cristoforetti et al. (2014b); Fukushima and Tanizaki (2015); Alexandru et al. (2016a). The second complication is that it is very hard to determine which combination of thimbles equals the original integral over real fields. It has been conjectured that one thimble dominates the path integral in the thermodynamic and/or continuum limits Cristoforetti et al. (2012) while, at the same time, the importance of multiple thimbles has been demonstrated in finite volume systems Alexandru et al. (2016b); Tanizaki et al. (2016).

A new method, inspired by the thimble approach, was suggested in Alexandru et al. (2016b). The deformation of the manifold of integration is chosen to be the result of evolving every point of the original integration region (the set of real fields, identified with $\mathbb{R}^N$) through the holomorphic flow equations by a flow “time” $t$:

$$\frac{d\tilde\phi_i}{dt} = \overline{\frac{\partial S}{\partial \tilde\phi_i}}\,, \qquad \tilde\phi_i(0) = \phi_i \qquad (1)$$

where $S$ is the (Euclidean) action of the model and the bar denotes complex conjugation. The point in $\mathcal{M}_t$ corresponding to $\phi$ is given by $\tilde\phi(t)$. As the flow time increases, the set of all points thus obtained ($\mathcal{M}_t$) approaches the right combination of thimbles which equals the original integral over $\mathbb{R}^N$. At intermediate values of the flow time $t$, $\mathcal{M}_t$ differs from the thimbles and the sign problem is not completely solved, but it is ameliorated. For this reason, the method introduced in Alexandru et al. (2016b) is sometimes called the “generalized thimble” approach Nishimura and Shimasaki (2017). This approach has the advantage that neither a priori information about the correct combination of thimbles equivalent to the integration over real fields, nor their location and shape, is required. The method was demonstrated in two dimensional models both in Euclidean time Alexandru et al. (2017a) and in real time (Minkowski space) Alexandru et al. (2017a). However, as the method was applied to larger systems, two complications became apparent. They are a consequence of the fact that, in some models and for some parameter values, the sign problem is alleviated only after considerable flow. The first problem is that large flow times are computationally expensive. The second is that the probability distribution over $\mathcal{M}_t$, induced by the probability distribution over $\mathbb{R}^N$ and the parameterization, typically becomes strongly multimodal, with wide barriers of low probability separating isolated regions of high probability. Such a distribution is difficult to sample properly through local updates. Although methods to deal with the multimodality have been proposed Fukuma and Umeda (2017); Alexandru et al. (2017b), they significantly increase the computational cost of the calculation.
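To make Eq. (1) concrete, the flow can be integrated numerically. The sketch below is our own toy construction (not the code used in the paper): a simple Euler integrator applied to the one-component Gaussian action $S(\tilde\phi)=\tilde\phi^2/2$, for which the exact flow is $\tilde\phi(t)=\phi\,e^{t}$.

```python
import numpy as np

def holomorphic_flow(phi, grad_S, t_flow, n_steps=1000):
    """Integrate d(phi~)/dt = conj(dS/dphi~) with a simple Euler scheme.

    phi    : real starting configuration (a point of R^N)
    grad_S : function returning dS/dphi~ at a complex configuration
    t_flow : total flow time t
    """
    phi_t = phi.astype(complex)
    dt = t_flow / n_steps
    for _ in range(n_steps):
        phi_t = phi_t + dt * np.conj(grad_S(phi_t))
    return phi_t

# Gaussian toy action S = phi~^2/2, so dS/dphi~ = phi~ and phi~(t) = phi e^t.
phi0 = np.array([1.0, -0.5])
flowed = holomorphic_flow(phi0, lambda p: p, t_flow=0.1)
```

In a real application an adaptive or higher-order integrator would be used, and `grad_S` would be the gradient of the full lattice action.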

This paper introduces a substantial add-on to the generalized thimble method which addresses the two shortcomings described above. The main idea is to use a parameterized form of the manifold obtained by interpolating between some (complex) fields obtained from evolving real configurations by Eq. (1). This multidimensional interpolation is a complex non-linear regression problem, a very non-trivial task. We approach it using machine learning techniques. More precisely, we will use a feed-forward network which inputs a real configuration and outputs the corresponding imaginary part on the integration manifold:

$$\tilde\phi(\phi) = \phi + i\,\tilde g(\phi) \qquad (2)$$

where the function $\tilde g$ is implemented using the feed-forward network. The network is “trained” in such a way that the set of complex fields obtained by running over all real $\phi$, the “learnifold” $\mathcal{L}$, approximates the flowed manifold $\mathcal{M}_t$. The advantage of using the network to generate the configurations is that it bypasses the computationally expensive, repeated solution of Eq. (1) (and the even more expensive computation of a Jacobian, see below). In addition, this parameterization has better properties regarding the multimodality problem than the one used previously, as explained below.

We will review the generalized thimble method in Sec. II and the use of feed-forward networks in Sec. III. The specifics of the learnifold will be covered in Sec. IV. In Sec. V we discuss the 1+1 dimensional Thirring model, which we will then use to test and demonstrate our method. The details of the simulations are presented in Sec. VI and results are presented in Sec. VII. Conclusions are summarized in Sec. VIII.

## II Generalized thimble method

The expectation value of an observable $\mathcal{O}$ in field theory can be cast in the form of a path integral

$$\langle \mathcal{O} \rangle = \frac{\int \mathcal{D}\phi\; \mathcal{O}(\phi)\, e^{-S[\phi]}}{\int \mathcal{D}\phi\; e^{-S[\phi]}} \qquad (3)$$

where $S[\phi]$ is the (Euclidean) action. The stochastic evaluation of this ratio is accomplished by approximating it by

$$\langle \mathcal{O} \rangle \approx \frac{1}{N} \sum_{k=1}^{N} \mathcal{O}\big(\phi^{(k)}\big) \qquad (4)$$

where the configurations $\phi^{(k)}$ are sampled randomly from the distribution $p(\phi) \propto e^{-S[\phi]}$. The exact value is approached as the number $N$ of configurations grows.

This method works if $S$ is real; otherwise the sampling can be done with respect to $e^{-S_R}$, with the phase of the integrand included when computing observables by reweighting:

$$\langle \mathcal{O} \rangle = \frac{\big\langle \mathcal{O}\, e^{-i S_I} \big\rangle_{S_R}}{\big\langle e^{-i S_I} \big\rangle_{S_R}} \qquad (5)$$

with $S = S_R + i S_I$. This procedure is practical only if the average phase $\langle e^{-i S_I}\rangle_{S_R}$ is not too small. Otherwise the calculated expectation value will result from a ratio of small numbers, each arising from delicate cancellations among configurations. In most theories the average sign is expected to vanish exponentially as the volume increases and/or the temperature decreases. This is the sign problem.
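The reweighting of Eq. (5) amounts to a ratio of two phase-weighted averages. A minimal sketch with synthetic data (all numbers here are illustrative, not from any simulation):

```python
import numpy as np

def reweighted_average(O_vals, S_imag):
    """Estimate <O> = <O e^{-i S_I}>_{S_R} / <e^{-i S_I}>_{S_R} from samples
    drawn with weight exp(-S_R)."""
    phases = np.exp(-1j * np.asarray(S_imag))
    return np.mean(np.asarray(O_vals) * phases) / np.mean(phases)

# Synthetic ensemble: random imaginary actions and the trivial observable
# O = 1, whose exact reweighted average is 1 for any ensemble.
rng = np.random.default_rng(0)
S_I = rng.normal(0.0, 0.3, size=10000)
est = reweighted_average(np.ones(10000), S_I)
```

As the spread of $S_I$ grows, both numerator and denominator shrink exponentially, which is precisely the sign problem described above.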

The idea of the generalized thimble method is to deform the domain of integration from $\mathbb{R}^N$ to $\mathcal{M}$, a submanifold of $\mathbb{C}^N$ of (real) dimension $N$:

$$\langle \mathcal{O} \rangle = \frac{\int_{\mathbb{R}^N} d^N\zeta\; \det J(\zeta)\; \mathcal{O}\big(\tilde\phi(\zeta)\big)\, e^{-S[\tilde\phi(\zeta)]}}{\int_{\mathbb{R}^N} d^N\zeta\; \det J(\zeta)\; e^{-S[\tilde\phi(\zeta)]}} \qquad (6)$$

where $S[\tilde\phi]$ is the analytic continuation of the action to complex values of the field, $\tilde\phi(\zeta)$ is a parameterization of the manifold by the real variables $\zeta$, and $J_{ij} = \partial\tilde\phi_i/\partial\zeta_j$ is the Jacobian relating the measure on $\mathcal{M}$ to $d^N\zeta$.

For our purposes, there are three conditions sufficient to ensure the equality of Eqs. (3) and (6). First, the integrand should be holomorphic (i.e., complex analytic with no poles in the complexified domain), which is the case for most observables in quantum field theories. Second, the original and final manifolds must be homotopic, that is, there should be a continuous family of manifolds connecting them. Lastly, the domain of integration should be compact (for non-periodic variables this condition can be relaxed if the deformation is such that the asymptotic behavior of the integrand guarantees the existence of the integral at all intermediate steps of the deformation). In the particular case of the Thirring model, all degrees of freedom are periodic, so the original domain of integration is not $\mathbb{R}^N$ but an $N$-torus which, upon complexification, becomes a product of $N$ cylinders (each angle acquires an imaginary part).

The calculation of the integral over $\mathcal{M}_t$ requires a parameterization of the manifold. Previously, the point $\tilde\phi \in \mathcal{M}_t$ was parameterized by its pre-image $\phi$ under the flow Alexandru et al. (2017a). In this case, the corresponding Jacobian can be calculated by solving

$$\frac{dJ_{ij}}{dt} = \overline{H_{ik}\,J_{kj}}\,, \qquad H_{ik} = \frac{\partial^2 S}{\partial\tilde\phi_i\,\partial\tilde\phi_k}\,, \qquad J_{ij}(0) = \delta_{ij} \qquad (7)$$

Evolving $J$ according to Eq. (7) is the most computationally expensive part of the whole method. Since $S$ and $\det J$ over $\mathcal{M}_t$ are complex, reweighting is done using:

$$\langle \mathcal{O} \rangle = \frac{\big\langle \mathcal{O}\, e^{i\theta} \big\rangle_{S_{\rm eff}}}{\big\langle e^{i\theta} \big\rangle_{S_{\rm eff}}} \qquad (8)$$

where $S_{\rm eff} = S_R - \mathrm{Re}\ln\det J$ and $\theta = -S_I + \mathrm{Im}\ln\det J$. The phase $S_I$ is constant over each thimble, so it typically fluctuates little over $\mathcal{M}_t$ for large enough flow time. Experience shows that $\mathrm{Im}\ln\det J$ also fluctuates little for problems of interest.

The parameterization of a point of $\mathcal{M}_t$ by its pre-image under the flow is problematic. This is because the regions in $\phi$-space with large probability shrink with increasing flow time while the distances between their centers stay fixed. The resulting probability distribution is strongly multimodal and difficult to sample via a Monte Carlo chain with small updates. The difficulty arises because a single Metropolis proposal is unlikely to “tunnel” between such regions and the chain is therefore “trapped”.

## III Feed-forward networks

In this section we summarize the use of feed-forward networks for interpolation purposes; in the next we detail their application to constructing the “learnifold”. Feed-forward neural networks provide a family of functions particularly amenable to non-linear regression, and we use them to represent an approximation to the flowed manifold $\mathcal{M}_t$. A feed-forward network may be thought of as a directed graph organized into several layers, which we sketch in Fig. 1. All edges point from a node in one layer to a node in the next layer. The first layer, termed the “input layer”, has exactly as many nodes as the function has inputs. Similarly, each node of the last layer, termed the “output layer”, corresponds to a different degree of freedom in the output of the function. There are no restrictions on the number of nodes in the intermediate (“hidden”) layers, although for simplicity the networks we use in this paper have the same number of nodes in each hidden layer. To the edge from node $b$ to node $a$ is assigned the weight $w_{ba}$, and to each non-input node $a$ is assigned a bias $b_a$. For a fixed topology (number of layers and nodes) the weights and biases parameterize a family of functions. It is these weights and biases that will be adjusted when performing the non-linear regression.

Given a fixed assignment of weights and biases, the network represents a function $F(x)$, where the input $x$ has as many components as the network has input nodes, and the output $F(x)$ has as many components as the network has output nodes. This function is evaluated in the following manner. The input values are fed into the network at the input nodes. Each node in the first hidden layer computes a linear combination of these values, weighted by the edge weights and shifted by the node's bias, and then applies a nonlinear function $\sigma$. The result becomes the output of that particular node, which is then taken as input by the nodes of the next layer. This process is repeated for each hidden layer and then the output layer, as values are “fed forward” through the network. Thus the value $n_a$ at node $a$ is given by

$$n_a = \sigma\Big(\sum_b w_{ba}\, n_b + b_a\Big) \qquad (9)$$

where the sum is taken over all nodes $b$ that have an edge leading to node $a$ (that is, all nodes in the previous layer). The values at the output nodes are taken to be the outputs of the function $F$. The computation time required to evaluate $F$ is linear in the number of nodes in the network.

There is considerable freedom in the choice of the nonlinear function $\sigma$. We adopt the common choice of the “SoftPlus” function:

$$\sigma(x) = \ln\big(1 + e^x\big) \qquad (10)$$

which asymptotically behaves like the integral of the step function, but is smoothly differentiable (making training easier). Because this function is bounded from below, but the function we wish to approximate may not be, we do not apply any nonlinear function at the output layer. The output values are a simple (shifted) linear combination of the values at the last hidden layer.
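Eqs. (9) and (10) translate into a short forward pass. The following is an illustrative toy implementation (our own, with arbitrary random weights), using SoftPlus on the hidden layers and a purely linear output layer as described above:

```python
import numpy as np

def softplus(x):
    """SoftPlus nonlinearity sigma(x) = ln(1 + e^x) of Eq. (10)."""
    return np.log1p(np.exp(x))

def feed_forward(x, weights, biases):
    """Evaluate the network of Eq. (9): affine map followed by SoftPlus at
    each hidden layer, affine map only at the output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = softplus(W @ a + b)
    return weights[-1] @ a + biases[-1]

# Tiny network: 2 inputs -> 3 hidden nodes -> 1 output, random weights.
rng = np.random.default_rng(1)
Ws = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
bs = [np.zeros(3), np.zeros(1)]
y = feed_forward(np.array([0.5, -1.0]), Ws, bs)
```

The cost of one evaluation is dominated by the matrix-vector products, one per layer, consistent with the linear scaling in the number of nodes noted below.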

Our goal is to use the feed-forward network to interpolate a “training set”, that is, a set of input vectors $x^{(i)}$ and their corresponding outputs $y^{(i)}$ that are assumed to be known. For that we minimize a “cost function”

$$C = \frac{1}{2N_t}\sum_{i=1}^{N_t} \big| F(x^{(i)}) - y^{(i)} \big|^2 \qquad (11)$$

with respect to the biases and weights ($N_t$ is the size of the training set). The minimization procedure is a simple gradient descent algorithm. For a network with $n_i$ inputs, $\ell$ hidden layers, $n_h$ nodes in each hidden layer, and $n_o$ outputs, there are $n_h\big(n_i + (\ell-1)n_h + n_o\big)$ weights and $\ell n_h + n_o$ biases to be adjusted, and therefore that many dimensions to be explored during gradient descent. The gradient of the cost function is efficiently calculated through repeated use of the chain rule. Starting at the output end of the network, we compute the gradient of the cost function with respect to the output of each node; this step is known as backpropagation:

$$\frac{\partial C}{\partial n_b} = \sum_a \frac{\partial C}{\partial n_a}\,\sigma'(z_a)\, w_{ba}\,, \qquad z_a = \sum_c w_{ca}\, n_c + b_a \qquad (12)$$

Once backpropagation is complete, the gradient of the cost function with respect to the weights and biases is immediately determined:

$$\frac{\partial C}{\partial w_{ba}} = \frac{\partial C}{\partial n_a}\,\sigma'(z_a)\, n_b\,, \qquad \frac{\partial C}{\partial b_a} = \frac{\partial C}{\partial n_a}\,\sigma'(z_a) \qquad (13)$$

Like the evaluation of the function itself, the determination of the gradient of the cost function (once the gradient with respect to the values of the output nodes is known) is linear in the number of nodes in the network.
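For a single hidden layer and the quadratic cost of Eq. (11), backpropagation takes only a few lines. The sketch below (our own toy code, with hypothetical layer sizes) computes the cost for one training pair together with all the gradients of Eqs. (12) and (13); it can be validated against a finite difference of the cost:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    # derivative of SoftPlus: sigma'(x) = e^x / (1 + e^x)
    return 1.0 / (1.0 + np.exp(-x))

def cost_and_grads(x, y, W1, b1, W2, b2):
    """Quadratic cost for one training pair and its gradients, obtained by
    backpropagation through one SoftPlus hidden layer."""
    z1 = W1 @ x + b1
    a1 = softplus(z1)
    out = W2 @ a1 + b2                 # linear output layer
    r = out - y
    C = 0.5 * np.dot(r, r)
    # backpropagate: output layer first, then through the nonlinearity
    dW2 = np.outer(r, a1)
    db2 = r
    d1 = (W2.T @ r) * sigmoid(z1)      # gradient w.r.t. hidden pre-activation
    dW1 = np.outer(d1, x)
    db1 = d1
    return C, dW1, db1, dW2, db2

rng = np.random.default_rng(2)
x, y = rng.normal(size=2), rng.normal(size=1)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
C, dW1, db1, dW2, db2 = cost_and_grads(x, y, W1, b1, W2, b2)
```

Deeper networks simply repeat the hidden-layer step of the backward pass once per layer.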

The minimization of $C$ is a tricky problem due to the existence of many local minima, and an extensive literature is dedicated to it (a good review of modern gradient-descent methods is given in Ruder (2016)). We use the Adaptive Moment Estimation algorithm (Adam) Kingma and Ba (2014), which was found to perform best among the methods we tried. In Adam, the weights and biases are repeatedly updated according to the descent rule

$$u_{t+1} = u_t - \alpha_t\,\frac{m_t}{1-\beta_1^t} \qquad (14)$$

where $u_t$ collectively denotes the weights and biases at iteration $t$ of the algorithm, $m_t$ is a modified gradient of the cost function with respect to $u$, and $\alpha_t$ is the dynamical learning rate that determines how far along the gradient to progress. The difference from plain gradient descent is the inclusion of “momentum” through a decaying average of previous gradients, which decreases the number of steps needed to reach a minimum by encouraging the descent to follow the largest gradient:

$$m_t = \beta_1\, m_{t-1} + (1-\beta_1)\,\nabla_u C(u_t) \qquad (15)$$

We set the weighting between the current and previous gradients to $\beta_1 = 0.9$, following the suggestion of Ruder (2016). The prefactor $1/(1-\beta_1^t)$ in Eq. (14) corrects for a bias: since $m_0$ is initialized to 0, $m_t$ is biased towards remaining there. Once $t$ is sufficiently large, this factor goes to 1 and has negligible effect. At long times, stochastic gradient methods can oscillate around a minimum, so it is useful to decrease the learning rate as the minimum is approached. The dynamical learning rate is defined as

$$\alpha_t = \frac{\alpha}{\sqrt{\hat v_t} + \epsilon} \qquad (16)$$

where $\alpha$ is a base learning rate and $\epsilon$ is a regulator to prevent numerical instability. Further dynamical improvement comes from using $\hat v_t$, an estimate of the (uncentered) variance of the gradient, again corrected for initialization bias and including momentum:

$$v_t = \beta_2\, v_{t-1} + (1-\beta_2)\,\big[\nabla_u C(u_t)\big]^2\,, \qquad \hat v_t = \frac{v_t}{1-\beta_2^t} \qquad (17)$$

where the weighting between current and previous terms is $\beta_2 = 0.999$ and we have again introduced into Eq. (16) a bias-correction factor $1/(1-\beta_2^t)$. Thus, at iteration $t$ of Adam, we compute the gradient of the cost function, update the estimates $m_t$ and $v_t$, and finally update the weights and biases. We perform a fixed number of such iterations to train a learnifold.
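Eqs. (14)-(17) combine into a single update rule. The sketch below is illustrative (ours, using the standard default hyperparameters) and drives a toy quadratic cost $C(u)=u^2/2$ towards its minimum:

```python
import numpy as np

def adam_step(u, grad, m, v, t, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam iteration: momentum (Eq. 15) and variance (Eq. 17)
    accumulators with bias correction, then the update of Eq. (14) with the
    dynamical learning rate of Eq. (16)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)            # bias-corrected first moment
    v_hat = v / (1 - b2**t)            # bias-corrected second moment
    u = u - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return u, m, v

# Minimize C(u) = u^2/2, whose gradient is simply u, starting from u = 1.
u, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    u, m, v = adam_step(u, u.copy(), m, v, t, alpha=0.01)
```

In the actual training, `grad` would be the (stochastic) gradient of the cost in Eq. (11) with respect to all weights and biases.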

Computing the full gradient of the cost function is computationally expensive due to the size of the training set: each flowed configuration generates by translation as many training points as there are lattice sites (see the discussion below). Instead, we use stochastic gradient descent: we approximate the gradient at each step by a sum over a small, randomly selected batch of the configurations. We begin the gradient descent with a small batch size and switch to a larger one partway through. This increase in batch size decreases the amount of stochastic noise, and the second half of the gradient descent is able to perform more fine-tuned optimization.

## IV The learnifold

In order to avoid having to solve Eq. (1) and Eq. (7) at every step of the Monte Carlo chain, we will find another manifold $\mathcal{L}$ (the “learnifold”) that approximates $\mathcal{M}_t$ but can be more readily computed (using a feed-forward network). Points on the learnifold are parameterized by points $\phi$ on the real plane:

$$\tilde\phi(\phi) = \phi + i\,\tilde g(\phi) \qquad (18)$$

where the function $\tilde g$ will be constructed from a kernel function $g$ represented by a feed-forward network (see below).

Besides the gain in speed from the use of a feed-forward network in place of evolution of Eq. (1), our method differs from the one in Alexandru et al. (2016a, c, b, 2017a, 2017c, 2017b) by the use of a different manifold parameterization: a point in the manifold of integration is parameterized by the real part of its coordinates instead of its pre-image under the flow in Eq. (1). This new parameterization is portrayed in Fig. 2: the left-hand panel shows the parameterization arrived at from a pure-flow algorithm, and the right panel shows the parameterization from the learnifold.

This parameterization choice suffers from one drawback. Since a point on the learnifold is parameterized by its real part $\phi$, the learnifold will necessarily have exactly one point with any given real part. In other words, the function $\tilde g$ defining the imaginary part is single-valued. This is a restriction on the set of manifolds that can be represented by this scheme: if the flowed manifold is such that multiple points share the same real coordinates, the class of learnifolds described here may not contain a good approximation. In practice, we find that the parts of $\mathcal{M}_t$ that are of interest (those with comparatively low action) do not behave in this way.

Despite this mild caveat, the advantages are substantial. Firstly, the parameterization of $\mathcal{M}_t$ by the pre-image of the flow causes a small region of parameter space (shaded blue in the figure) to map to a large region of the manifold. This results in large fluctuations of the Jacobian $\det J$, which is expensive to compute. The parameterization by the real coordinates should not lead to such large stretchings. In fact, we find that $\det J$ fluctuates little in practice, so that this contribution may be accounted for after the fact in reweighting. The second advantage is that this parameterization reduces the multimodality problem. In $\phi$-space, regions of large probability do not shrink with the flow, and so no large gaps are created between regions that contribute significantly to the integral. A Monte Carlo chain can therefore more easily explore the relevant regions of the integration domain.

Field theoretic models of interest often have a discrete group of translational symmetries on the lattice. These symmetries are respected by the flowed manifold, and therefore should be imposed on the learnifold which approximates it. Translation symmetry can be implemented in our setup in a simple way. Let $T_x$ be the lattice translation that places lattice site $x$ at the origin. We want $\tilde g_{x+y}(\phi) = \tilde g_x(T_y\phi)$, which requires that $\tilde g_x$ depend on $\phi$ only through the translated configuration $T_x\phi$. A kernel function $g$ can therefore be used to define a translationally invariant function by:

$$\tilde g_x(\phi) = g\big(T_x\phi\big) \qquad (19)$$

When multiple degrees of freedom are associated with each lattice site (for the model of interest to us, there are two, one per spacetime direction), $g$ will have that many components. For our case we train the kernel function to match the imaginary components of the flowed configurations at the origin, that is $g(\phi^{(i)}) \approx \mathrm{Im}\,\tilde\phi^{(i)}_0$, where $\tilde\phi^{(i)}$ are the configurations in the training set generated by flowing from $\phi^{(i)}$. The procedure to obtain a point on the learnifold is to start with a configuration $\phi$ in the real plane and evaluate $g(\phi)$ to determine the imaginary part of the degrees of freedom associated with the lattice site at the origin. Then translate the lattice so that site $x$ is moved to the origin, and evaluate $g(T_x\phi)$ to get the imaginary part of the degrees of freedom associated with lattice site $x$. This procedure is repeated for all possible translations of the lattice.
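On a one-dimensional periodic lattice this construction can be sketched as follows (a toy example of ours; `np.roll` implements the translation $T_x$, and an arbitrary function stands in for the trained kernel):

```python
import numpy as np

def imag_parts(phi, kernel):
    """Build the full imaginary part of a configuration by applying the
    kernel network to every translation of the lattice (Eq. 19).

    phi    : real field on a 1d lattice, shape (V,)
    kernel : function of a configuration with site x shifted to the origin,
             returning the imaginary part at site x
    """
    V = phi.shape[0]
    return np.array([kernel(np.roll(phi, -x)) for x in range(V)])

# Toy kernel depending on the site at the origin and its neighbor.
kern = lambda p: np.tanh(p[0] + 0.5 * p[1])
phi = np.array([0.3, -0.7, 1.1, 0.2])
g = imag_parts(phi, kern)
g_shift = imag_parts(np.roll(phi, -1), kern)
```

By construction, translating the input configuration translates the output imaginary parts, which is the symmetry requirement stated above.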

The inputs to the network represent the real degrees of freedom at each lattice site; however, in our model these degrees of freedom are periodic. We impose this periodicity by passing to the network not $\phi$, but $\cos\phi$ and $\sin\phi$ separately; for a model with $N$ degrees of freedom, the network then takes $2N$ inputs. This is not simply an optimization: if the learnifold lacks this periodicity, it will generically belong to a different homology class from $\mathcal{M}_t$, that is, it will describe a manifold of integration that is not equivalent to the original domain.
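The periodic encoding can be sketched in two lines (illustrative only):

```python
import numpy as np

def periodic_encode(phi):
    """Encode periodic degrees of freedom as (cos, sin) pairs so that the
    network input is automatically 2*pi-periodic in each component."""
    phi = np.asarray(phi)
    return np.concatenate([np.cos(phi), np.sin(phi)])

# Shifting any angle by 2*pi leaves the encoded input unchanged.
x = periodic_encode([0.3, 1.2])
x_shift = periodic_encode([0.3 + 2 * np.pi, 1.2])
```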

Implementing translation invariance as described above, a point on the learnifold, parameterized by its real coordinates $\phi$, is given by

$$\tilde\phi_x(\phi) = \phi_x + i\, g\big(T_x\phi\big) \qquad (20)$$

where $g$, computed by a feed-forward network, outputs the imaginary parts of the degrees of freedom associated with a single lattice site.

## V Thirring Model

The model we use to illustrate our method, the 1+1 dimensional Thirring model at finite chemical potential, has been studied before by the generalized thimble method Alexandru et al. (2017a). It is defined in the continuum by the Euclidean action

$$S = \int d^2x\,\left[\,\bar\psi^a\big(\gamma_\nu\partial_\nu + m + \mu\gamma_0\big)\psi^a + \frac{g^2}{2N_F}\big(\bar\psi^a\gamma_\nu\psi^a\big)^2\right] \qquad (21)$$

where the flavor indices take values $a = 1,\ldots,N_F$, $\mu$ is the chemical potential, and the Dirac spinors $\psi^a$ have two components. It is convenient to treat the four-fermion interaction by introducing an auxiliary vector field $A_\nu$. We use the Wilson discretization given by

$$S = \frac{N_F}{g^2}\sum_{x,\nu}\big(1-\cos A_\nu(x)\big) + \sum_{a}\bar\psi^a\, D(A)\,\psi^a \qquad (22)$$

with

$$D(A)_{xy} = \delta_{xy} - \kappa\sum_{\nu=0,1}\Big[(1-\gamma_\nu)\,e^{\,iA_\nu(x)+\mu\,\delta_{\nu 0}}\,\delta_{x+\hat\nu,\,y} + (1+\gamma_\nu)\,e^{-iA_\nu(y)-\mu\,\delta_{\nu 0}}\,\delta_{x,\,y+\hat\nu}\Big] \qquad (23)$$

and $\kappa$ is the Wilson hopping parameter, fixed by the bare fermion mass.

The integration over the fermion fields leads to

$$\langle \mathcal{O} \rangle = \frac{1}{Z}\int \mathcal{D}A\;\, \mathcal{O}(A)\,\big[\det D(A)\big]^{N_F}\, e^{-\frac{N_F}{g^2}\sum_{x,\nu}(1-\cos A_\nu(x))} \qquad (24)$$

For $\mu \neq 0$ the determinant is not real, so this model has a sign problem. In this work we use $N_F = 2$. Notice that the variables $A_\nu(x)$ are periodic, so the original (real) domain of integration of the path integral is a torus of dimension $2 L_t L_x$, with $L_t$, $L_x$ being the temporal and spatial sizes of the lattice, respectively.

## VI Simulation details

Our procedure begins by using the flow-based algorithm of Alexandru et al. (2016b) to generate a training set, that is, a set of points on $\mathcal{M}_t$. A quality training set need not sample the probability distribution $\propto e^{-S_{\rm eff}}$, but it should provide information about all of $\mathcal{M}_t$, or at least the region likely to be sampled by a long Monte Carlo run. A small set of training configurations sampled with weight $e^{-S_{\rm eff}}$ turns out to be insufficient: the learnifold generated this way develops “pockets”, that is, the distribution on it is multimodal and the Metropolis process becomes trapped. It is then crucial to provide additional information about $\mathcal{M}_t$.

Much freedom exists in generating this additional set: configurations normally thrown away during thermalization can be kept, and they can be generated in parallel. To cure the multimodality problem we need to include configurations on $\mathcal{M}_t$ that have large action. For that we include in the training set configurations from an ensemble sampled from the broadened distribution $e^{-S_{\rm eff}/T}$, with $T > 1$. Sets with $T > 1$ sample higher-action regions of $\mathcal{M}_t$ than $T = 1$ does. We use $T = 2$. It should be emphasized that the ensembles generated for training are not sufficient for a proper evaluation of any observable: they are not thermalized, they are highly correlated, and the ones obtained with $T > 1$ are not distributed correctly.

Since, as discussed above, the action is translationally invariant, a single flowed configuration can be used as a total of $V$ training points, where $V$ is the space-time volume of the lattice. Thus each flowed configuration sampled is translated into $V-1$ additional configurations. Since, in practice, the most time-consuming step of the algorithm is the generation of configurations on $\mathcal{M}_t$, this multiplication of the training set is critical in making the algorithm practical.

Once the training configurations are obtained, the feed-forward network is trained by minimizing the cost function. We use a network with a few hidden layers, each consisting of the same number of nodes. This choice is somewhat arbitrary, and we have not yet fully investigated the behavior of the algorithm as the number of nodes and layers is changed. The training is accomplished by performing stochastic gradient descent on the weights and biases to minimize the cost function. Specializing Eq. (11) to our case:

$$C = \frac{1}{2N_t}\sum_{i=1}^{N_t}\sum_{\nu}\Big|\, g_\nu\big(\phi^{(i)}\big) - \mathrm{Im}\,\tilde\phi^{(i)}_{0,\nu}\Big|^2 \qquad (25)$$

where the sum over $i$ is taken over all training points and the sum over $\nu$ is over the Lorentz indices. By minimizing $C$, we minimize the distance between the learnifold and the flowed manifold at the training points.

Once the gradient descent is complete, $g$ is used to define $\mathcal{L}$ through Eq. (20). This manifold is parameterized by the real plane, and so we can perform importance sampling on it in the same manner as for a flowed manifold.

The previous parameterization, based on deforming the domain of integration via flow, required computing the Jacobian when performing importance sampling. The present parameterization is found to result in Jacobians with small fluctuations, as can be seen from Fig. 3. Computing the Jacobian is expensive, but since its fluctuations are small it is preferable to ignore it when performing importance sampling and include it in observables via reweighting. The Jacobian may be computed by direct application of the chain rule through the network. In practice, it is sufficient to compute the Jacobian via finite differencing, that is, computing $\partial\tilde\phi_i/\partial\phi_j \approx \big[\tilde\phi_i(\phi + \epsilon e_j) - \tilde\phi_i(\phi - \epsilon e_j)\big]/(2\epsilon)$ for small $\epsilon$ by feeding the shifted configurations through the network. We take this approach here.
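The finite-difference Jacobian can be sketched as follows (our own toy map $g(\phi)=0.1\,\phi^2$ stands in for the trained network; for this map $J = \mathbb{1} + 0.2\,i\,\mathrm{diag}(\phi)$ exactly):

```python
import numpy as np

def jacobian_fd(phi, g, eps=1e-5):
    """Central finite-difference Jacobian J_ij = d(phi~_i)/d(phi_j) of the
    learnifold map phi -> phi + i g(phi) of Eq. (18)."""
    n = phi.size
    J = np.zeros((n, n), dtype=complex)
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = ((phi + e + 1j * g(phi + e)) -
                   (phi - e + 1j * g(phi - e))) / (2 * eps)
    return J

# Toy componentwise map standing in for the network output.
g = lambda p: 0.1 * p**2
phi = np.array([0.5, -1.0])
J = jacobian_fd(phi, g)
```

In a simulation, each column costs two extra network evaluations, which is still far cheaper than integrating Eq. (7) along the flow.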

After the network is trained and the manifold defined by it is specified, we use the Metropolis algorithm applied to the (real) parameterizing variables $\phi$ with the effective action $S_R$, the real part of the Euclidean action. The Jacobian and the phase of the Euclidean action are included through reweighting:

$$\langle \mathcal{O} \rangle = \frac{\big\langle \mathcal{O}\,\det J\; e^{-i S_I} \big\rangle_{S_R}}{\big\langle \det J\; e^{-i S_I} \big\rangle_{S_R}} \qquad (26)$$

where, in contrast to Eq. (8), the real part of $\ln\det J$ is also included in the reweighting rather than in the sampling weight. The minimum number of configurations from $\mathcal{M}_t$ required for training must be determined empirically, and we find it to be roughly set by the number of degrees of freedom in the fit to be performed: if the network has more degrees of freedom than the number of training points available, a long training process will overfit the data, and the final product will be unusable.

## VII Results

Although our calculations are not done particularly close to the continuum limit, we choose the bare parameters of the action so that the (renormalized) particle masses are somewhat below the lattice scale. Two particle masses have been measured: a fermion and a boson. The dimensionless masses of these particles, $a m_f$ and $a m_b$ ($a$ is the lattice spacing) respectively, are determined by fitting the large-time behavior of the appropriate two-point correlators. In a free theory $m_b = 2 m_f$, and we use the ratio $m_b/2m_f$ to gauge the strength of interactions, where $m_b/2m_f \ll 1$ implies a strongly interacting theory. The parameters used for the simulations in this paper lead to the masses reported in Alexandru et al. (2017a), with $m_b/2m_f$ well below one. Therefore we are studying a strongly coupled theory.

The flow time used to generate training points and a range for the size of each partition of the training set is given for each lattice in Table 1. During the generation of the training set we use an estimator of the Jacobian that has been shown to track the full Jacobian accurately Alexandru et al. (2016d). Even with the estimator, the generation of the training set is computationally expensive. For example, for the ensemble used to train the learnifold on the largest lattice, we use training configurations from both the $T=1$ and $T=2$ distributions. To generate these configurations we use no thermalization and save a configuration every 200 Metropolis steps. This takes about 90 CPU-hours. The training of the network takes 24 CPU-hours. Generating the ensemble of 7220 measurements, with 800 Metropolis steps between measurements on the learnifold, takes about 140 CPU-hours. Taking the training-set generation time as a proxy (which neglects computing the Jacobian), the same ensemble would have taken about 4100 CPU-hours using flow. More details about the measurements can be found in Table 1. It should be noted that for larger lattices at larger $\mu$, more configurations are needed to achieve the same statistics due to the smallness of the average sign, which is reflected in the range reported for the number of measurements.

| Lattice | Training set ($T=1$) | Training set ($T=2$) | Training iterations | Metropolis steps between measurements | Measurements |
| --- | --- | --- | --- | --- | --- |
| | 1000 | 1000 | 15000 | 800 | 2000 |
| | 300-600 | 100-700 | 15000 | 800 | 2000-8000 |
| | 50-170 | 30-150 | 20000 | 800 | 800-1000 |

In Fig. 4 we show, for three lattice sizes, the average sign in the left column and the average fermion density (per flavor) $\langle n\rangle$ in the right column. The results obtained by standard reweighting techniques on the real domain are shown with black circles. As may be anticipated, the average sign drops to 0 as $\mu$ increases and reweighting becomes unfeasible. We further plot with red triangles the results obtained by choosing as manifold of integration the tangent plane to the main thimble (which is computationally as cheap as integration over the real domain). For this manifold the average sign drops to zero more slowly, but it is still inadequate for sufficiently large lattices. The final set of results, the blue squares, are obtained from the learnifold. We find that the average sign decreases even more slowly, extending the reach in $\mu$. As a check, we include in these figures, with a dashed line, the analytic result for the free fermion gas (with the renormalized fermion mass). We find that at larger values of $\mu$ the free gas approximation becomes a poor description of the Thirring model.

As explained in Alexandru et al. (2017a), the sign problem is greatly improved by simply shifting the domain of integration into the complex plane, $\phi \to \phi + i\phi_0$, for a certain real value of $\phi_0$. This simple shift is enough, in the model studied there, to allow for calculations on lattices of considerable size if a staggered fermion action is used. However, the Wilson fermion action has a worse sign problem, and systems of these sizes require longer flow times, which make the calculation more expensive. More importantly, the large flow times lead to the trapping of the Monte Carlo chain discussed above. For these reasons, Ref. Alexandru et al. (2017a) only contains simulations of the Wilson action on small lattices. The use of the Wilson fermion model here demonstrates the utility of machine learning in applying the generalized thimble method to larger lattices.

On the $20\times 10$ lattice, the sign problem at smaller $\mu$ is sufficiently improved at the flow time used that reliable results can be obtained. For larger $\mu$, by simply flowing longer we were able to raise the average sign to viable levels again.

At the colder temperatures, we demonstrate that the learnifold method is capable of reproducing the independence of observables on $\mu$ below the threshold $\mu_c$, the so-called “Silver Blaze” phenomenon Cohen (2003). This is important because other treatments of the sign problem can fail here and wash out the plateau. In particular, we note that Lefschetz-thimble based approaches in which only the main thimble is sampled, or generalized thimble methods where trapping occurs, are likely to fail to reproduce the correct features and instead produce straight lines Tanizaki et al. (2016). We believe this is evidence that, while our training sets are trapped for large flows, the learnifold parameterization reduces trapping to manageable levels in the Metropolis sampling, and the higher-$T$ training configurations give some information about the phase on additional thimbles, keeping the sign problem manageable.

## VIII Discussion and Prospects

We have presented a method, based on machine learning techniques, to bypass the computationally expensive steps of the generalized thimble attack on the sign problem. The idea is that a small number of field configurations obtained by solving the flow equations can be used to train a feed-forward neural network that roughly approximates the full manifold defined by the flow. The trained network defines a new manifold, the learnifold, which is equivalent to the real space for the computation of the path integral but on which the sign problem is ameliorated. The manifold defined by the network can be sampled very quickly, and a large number of measurements can easily be made, enough to bypass the sign problem in models where that was not previously possible.
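A minimal sketch of this training step, using a toy one-hidden-layer network and a made-up target function in place of actual flowed configurations (the architecture, target, and hyperparameters here are illustrative assumptions, not the ones used in our calculations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "flow" data: pretend the holomorphic flow maps a real field x
# to x + i*g(x); g here is a hypothetical target, NOT the actual flow.
x = rng.uniform(-3.0, 3.0, size=(256, 1))
y = 0.5 * np.tanh(x)

# One-hidden-layer feed-forward network: y_hat = tanh(x W1 + b1) W2 + b2
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

lr, losses = 0.1, []
for step in range(5000):
    yhat, h = forward(x)
    err = yhat - y
    losses.append(float(np.mean(err**2)))
    # Plain gradient descent with backpropagation through both layers
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def learnifold(x):
    """The trained net parameterizes the manifold z(x) = x + i*net(x)."""
    return x + 1j * forward(x)[0]
```

Once trained, evaluating the network (and its Jacobian) is cheap compared with integrating the flow equations, which is the source of the speedup.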

This paper represents a first, exploratory study of the possibility of coupling the generalized thimble method and machine learning to solve the sign problem. Results were shown for the Wilson fermion version of the dimensional Thirring model. As a bonus feature of the method, the parameterization we used avoids the “trapping” of Monte Carlo chains which plagues some calculations with large flow times. This method is general and should be applicable to other theories of interest. The large freedom in flow time, size and bias of the training set, and number of layers and nodes in the network together suggest that this method can be further optimized for efficiency which would extend the practical range of applicability.

###### Acknowledgements.

A.A. is supported in part by the National Science Foundation CAREER grant PHY-1151648 and by U.S. Department of Energy grant DE-FG02-95ER40907. A.A. gratefully acknowledges the hospitality of the Physics Departments at the Universities of Maryland and Kentucky, and the Albert Einstein Center at the University of Bern where part of this work was carried out.

## References

- Aarts and Stamatescu (2008) G. Aarts and I.-O. Stamatescu, JHEP 09, 018 (2008), arXiv:0807.1597 [hep-lat] .
- Langfeld and Lucini (2016) K. Langfeld and B. Lucini, Proceedings, International Meeting Excited QCD 2016: Costa da Caparica, Portugal, March 6-12, 2016, Acta Phys. Polon. Supp. 9, 503 (2016), arXiv:1606.03879 [hep-lat] .
- Alexandru et al. (2005) A. Alexandru, M. Faber, I. Horvath, and K.-F. Liu, Phys. Rev. D72, 114513 (2005), arXiv:hep-lat/0507020 [hep-lat] .
- de Forcrand and Kratochvila (2006) P. de Forcrand and S. Kratochvila, Hadron physics, proceedings of the Workshop on Computational Hadron Physics, University of Cyprus, Nicosia, Cyprus, 14-17 September 2005, Nucl. Phys. Proc. Suppl. 153, 62 (2006), [,62(2006)], arXiv:hep-lat/0602024 [hep-lat] .
- Fodor and Katz (2002) Z. Fodor and S. D. Katz, Phys. Lett. B534, 87 (2002), arXiv:hep-lat/0104001 [hep-lat] .
- Allton et al. (2002) C. R. Allton, S. Ejiri, S. J. Hands, O. Kaczmarek, F. Karsch, E. Laermann, C. Schmidt, and L. Scorzato, Phys. Rev. D66, 074507 (2002), arXiv:hep-lat/0204010 [hep-lat] .
- Chandrasekharan (2013) S. Chandrasekharan, Eur. Phys. J. A49, 90 (2013), arXiv:1304.4900 [hep-lat] .
- de Forcrand and Philipsen (2007) P. de Forcrand and O. Philipsen, JHEP 01, 077 (2007), arXiv:hep-lat/0607017 [hep-lat] .
- Cristoforetti et al. (2012) M. Cristoforetti, F. Di Renzo, and L. Scorzato (AuroraScience), Phys. Rev. D86, 074506 (2012), arXiv:1205.3996 [hep-lat] .
- Cristoforetti et al. (2014a) M. Cristoforetti, F. Di Renzo, A. Mukherjee, and L. Scorzato, Proceedings, 31st International Symposium on Lattice Field Theory (Lattice 2013): Mainz, Germany, July 29-August 3, 2013, PoS LATTICE2013, 197 (2014a), arXiv:1312.1052 [hep-lat] .
- Cristoforetti et al. (2014b) M. Cristoforetti, F. Di Renzo, G. Eruzzi, A. Mukherjee, C. Schmidt, L. Scorzato, and C. Torrero, Phys. Rev. D89, 114505 (2014b), arXiv:1403.5637 [hep-lat] .
- Di Renzo and Eruzzi (2015) F. Di Renzo and G. Eruzzi, Phys. Rev. D92, 085030 (2015), arXiv:1507.03858 [hep-lat] .
- Mukherjee et al. (2013) A. Mukherjee, M. Cristoforetti, and L. Scorzato, Phys. Rev. D88, 051502 (2013), arXiv:1308.0233 [physics.comp-ph] .
- Fujii et al. (2015) H. Fujii, S. Kamata, and Y. Kikukawa, JHEP 12, 125 (2015), [Erratum: JHEP09,172(2016)], arXiv:1509.09141 [hep-lat] .
- Fukushima and Tanizaki (2015) K. Fukushima and Y. Tanizaki, PTEP 2015, 111A01 (2015), arXiv:1507.07351 [hep-th] .
- Alexandru et al. (2016a) A. Alexandru, G. Basar, and P. Bedaque, Phys. Rev. D93, 014504 (2016a), arXiv:1510.03258 [hep-lat] .
- Alexandru et al. (2016b) A. Alexandru, G. Basar, P. F. Bedaque, G. W. Ridgway, and N. C. Warrington, JHEP 05, 053 (2016b), arXiv:1512.08764 [hep-lat] .
- Tanizaki et al. (2016) Y. Tanizaki, Y. Hidaka, and T. Hayata, New J. Phys. 18, 033002 (2016), arXiv:1509.07146 [hep-th] .
- Nishimura and Shimasaki (2017) J. Nishimura and S. Shimasaki, JHEP 06, 023 (2017), arXiv:1703.09409 [hep-lat] .
- Alexandru et al. (2017a) A. Alexandru, G. Basar, P. F. Bedaque, G. W. Ridgway, and N. C. Warrington, Phys. Rev. D95, 014502 (2017a), arXiv:1609.01730 [hep-lat] .
- Fukuma and Umeda (2017) M. Fukuma and N. Umeda, (2017), arXiv:1703.00861 [hep-lat] .
- Alexandru et al. (2017b) A. Alexandru, G. Basar, P. F. Bedaque, and N. C. Warrington, (2017b), arXiv:1703.02414 [hep-lat] .
- Ruder (2016) S. Ruder, ArXiv e-prints (2016), arXiv:1609.04747 [cs.LG] .
- Kingma and Ba (2014) D. P. Kingma and J. Ba, ArXiv e-prints (2014), arXiv:1412.6980 [cs.LG] .
- Alexandru et al. (2016c) A. Alexandru, G. Basar, P. F. Bedaque, S. Vartak, and N. C. Warrington, Phys. Rev. Lett. 117, 081602 (2016c), arXiv:1605.08040 [hep-lat] .
- Alexandru et al. (2017c) A. Alexandru, G. Basar, P. F. Bedaque, and G. W. Ridgway, Phys. Rev. D95, 114501 (2017c), arXiv:1704.06404 [hep-lat] .
- Alexandru et al. (2016d) A. Alexandru, G. Basar, P. F. Bedaque, G. W. Ridgway, and N. C. Warrington, Phys. Rev. D93, 094514 (2016d), arXiv:1604.00956 [hep-lat] .
- Cohen (2003) T. D. Cohen, Phys. Rev. Lett. 91, 222001 (2003), arXiv:hep-ph/0307089 [hep-ph] .