Mutual Information, Neural Networks and the Renormalization Group

# Mutual Information, Neural Networks and the Renormalization Group

Maciej Koch-Janusz Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland    Zohar Ringel Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
###### Abstract

Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains “slow” degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, performing this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.


Machine learning has been captivating public attention lately due to groundbreaking advances in automated translation, image and speech recognition LeCun et al. (2015), game-playing Silver and et al. (2016), and achieving super-human performance in tasks in which humans excelled while more traditional algorithmic approaches struggled Hershey et al. (2010). The applications of those techniques in physics are very recent, initially leveraging the trademark prowess of machine learning in classification and pattern recognition and applying them to classify phases of matter Carrasquilla and Melko (2017); Torlai and Melko (2016); van Nieuwenburg et al. (2017); Wang (2016); Ohtsuki and Ohtsuki (2017), study amorphous materials Ronhovde et al. (2011, 2012), or exploiting the neural networks’ potential as efficient non-linear approximators of arbitrary functions Hinton and Salakhutdinov (2006); Lin and Tegmark (2017) to introduce a new numerical simulation method for quantum systems Carleo and Troyer (2017); Deng et al. (2017). However, the exciting possibility of employing machine learning not as a numerical simulator, or a hypothesis tester, but as an integral part of the physical reasoning process is still largely unexplored and, given the staggering pace of progress in the field of artificial intelligence, of fundamental importance and promise.

The renormalization group (RG) approach has been one of the conceptually most profound tools of theoretical physics since its inception. It underlies the seminal work on critical phenomena Wilson (1975), the discovery of asymptotic freedom in quantum chromodynamics Politzer (1973), and of the Kosterlitz-Thouless phase transition Berezinskii (1971); Kosterlitz and Thouless (1973). The RG is not a monolith, but rather a conceptual framework comprising different techniques: real-space RG Kadanoff (1966), functional RG Wetterich (1993), density matrix renormalization group (DMRG) White (1992), among others. While all those schemes differ quite substantially in details, style and applicability, there is an underlying physical intuition which encompasses all of them – the essence of RG lies in identifying the “relevant” degrees of freedom and integrating out the “irrelevant” ones iteratively, thereby arriving at a universal, low-energy effective theory. However potent the RG idea, those relevant degrees of freedom need to be identified first Ma et al. (1979); Corboz and Mila (2013). This is often a challenging conceptual step, particularly for strongly interacting systems, and may involve a sequence of mathematical mappings to models whose behaviour is better understood Capponi et al. (2013); Auerbach (1994).

Here we introduce an artificial neural network algorithm iteratively identifying the physically relevant degrees of freedom in a spatial region and performing an RG coarse-graining step. The input data are samples of the system configurations drawn from a Boltzmann distribution; no further knowledge about the microscopic details of the system is provided. The internal parameters of the network, which ultimately encode the degrees of freedom of interest at each step, are optimized (“learned”, in neural networks parlance) by a training algorithm based on evaluating real-space mutual information (RSMI) between spatially separated regions. We validate our approach by studying the Ising and dimer models of classical statistical physics in two dimensions. We obtain the RG flow and extract the Ising critical exponent. The robustness of the RSMI algorithm to physically irrelevant noise is demonstrated.

The identification of the important degrees of freedom, and the ability to execute a real-space RG procedure Kadanoff (1966), has significance not only quantitative but also conceptual: it allows one to gain insight into the correct way of thinking about the problem at hand, raising the prospect that machine learning techniques may augment scientific inquiry in a fundamental fashion.

## The Real Space Mutual Information algorithm

Before going into more detail, let us provide a bird’s eye view of our method and results. We begin by phrasing the problem in probabilistic/information-theoretic terms, a language also used in Refs. Gaite and O’Connor (1996); Preskill (2000); Apenko (2012); Machta et al. (2013); Beny and Osborne (2015). To this end, we consider a small “visible” spatial area $\mathcal{V}$, which together with its environment $\mathcal{E}$ forms the system $\mathcal{X}$, and we define a particular conditional probability distribution $P_\Lambda(\mathcal{H}|\mathcal{V})$, which describes how the relevant degrees of freedom $\mathcal{H}$ (dubbed “hiddens”) in $\mathcal{V}$ depend on both $\mathcal{V}$ and $\mathcal{E}$. We then show that the sought-after conditional probability distribution is found by an algorithm maximizing an information-theoretic quantity, the mutual information (MI), and that this algorithm lends itself to a natural implementation using artificial neural networks. We describe how RG is practically performed by coarse-graining with respect to $\mathcal{H}$ and iterating the procedure. Finally, we provide a verification of our claims by considering two paradigmatic models of statistical physics: the Ising model – for which the RG procedure yields the famous Kadanoff block spins – and the dimer model, whose relevant degrees of freedom are much less trivial. We reconstruct the RG flow of the Ising model and extract the critical exponent.

Consider then a classical system of local degrees of freedom $\mathcal{X}$, defined by a Hamiltonian energy function $H(\mathcal{X})$ and associated statistical probabilities $P(\mathcal{X}) \propto e^{-\beta H(\mathcal{X})}$, where $\beta$ is the inverse temperature. Alternatively (and sufficiently for our purposes), the system is given by Monte Carlo (MC) samples of the equilibrium distribution $P(\mathcal{X})$. We denote a small spatial region of interest by $\mathcal{V}$ and the remainder of the system by $\mathcal{E}$, so that $\mathcal{X} = (\mathcal{V}, \mathcal{E})$. We adopt a probabilistic point of view, and treat $\mathcal{V}$, $\mathcal{E}$ etc. as random variables. Our goal is to extract the relevant degrees of freedom $\mathcal{H}$ from $\mathcal{V}$.
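For concreteness, equilibrium samples of this kind can be generated with a textbook single-spin-flip Metropolis sampler. The sketch below is an illustration only (the production runs described later use loop and cluster updates), and it assumes the ferromagnetic convention $H = -\sum_{\langle ij\rangle} s_i s_j$:

```python
import numpy as np

def metropolis_ising(L=16, beta=0.6, n_sweeps=200, rng=None):
    """Draw one approximate equilibrium sample of the 2D Ising model
    on an L x L periodic lattice via single-spin-flip Metropolis."""
    rng = rng or np.random.default_rng(0)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            # Energy change of flipping s[i, j] for H = -sum_<ij> s_i s_j
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1
    return s
```

Repeated calls with independent random streams yield the set of MC samples the algorithm takes as input.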

“Relevance” is understood here in the following way: the degrees of freedom RG captures govern the long distance behaviour of the theory, and therefore the experimentally measurable physical properties; they carry the most information about the system at large, as opposed to local fluctuations. We thus formally define the random variable $\mathcal{H}$ as a composite function of the degrees of freedom in $\mathcal{V}$ maximizing the mutual information (MI) between $\mathcal{H}$ and the environment $\mathcal{E}$. This definition, as we discuss in the Supplementary Materials, is related to the requirement that the effective coarse-grained Hamiltonian be compact and short-ranged, which is a condition any successful standard RG scheme should satisfy. As we also show, it is supported by numerical results.

Mutual information measures the total amount of information about one random variable contained in the other Stephan et al. (2014); Ronhovde et al. (2011, 2012) (thus it is more general than correlation coefficients). It is given in our setting by:

$$I_\Lambda(\mathcal{H}:\mathcal{E}) = \sum_{\mathcal{H},\mathcal{E}} P_\Lambda(\mathcal{E},\mathcal{H}) \log\left(\frac{P_\Lambda(\mathcal{E},\mathcal{H})}{P_\Lambda(\mathcal{H})\, P(\mathcal{E})}\right). \tag{1}$$

The unknown distribution $P_\Lambda(\mathcal{E},\mathcal{H})$ and its marginalization $P_\Lambda(\mathcal{H})$, depending on a set of parameters $\Lambda$ (which we keep generic at this point), are functions of the data distribution $P(\mathcal{V},\mathcal{E})$ and of the conditional distribution $P_\Lambda(\mathcal{H}|\mathcal{V})$, which is the central object of interest.
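As a concrete illustration of Eq. (1), the MI of a discrete joint distribution can be computed directly from its probability table. The helper below is a minimal sketch (the function name is ours):

```python
import numpy as np

def mutual_information(p_joint):
    """I(X:Y) in nats from a joint probability table p_joint[x, y]."""
    p_joint = np.asarray(p_joint, dtype=float)
    px = p_joint.sum(axis=1, keepdims=True)   # marginal P(X)
    py = p_joint.sum(axis=0, keepdims=True)   # marginal P(Y)
    mask = p_joint > 0                        # skip zero-probability terms
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (px * py)[mask])))

# Independent coin flips carry zero mutual information...
indep = np.full((2, 2), 0.25)
# ...while perfectly correlated bits carry log(2) nats.
corr = np.array([[0.5, 0.0], [0.0, 0.5]])
```

The same formula, with the joint table replaced by $P_\Lambda(\mathcal{E},\mathcal{H})$, is what the algorithm maximizes over $\Lambda$.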

Finding $P_\Lambda(\mathcal{H}|\mathcal{V})$ which maximizes $I_\Lambda(\mathcal{H}:\mathcal{E})$ under certain constraints is a well-posed mathematical question and has a formal solution Tishby et al. (2001). Since, however, the space of probability distributions grows exponentially with the number of local degrees of freedom, it is in practice impossible to use without further assumptions for any but the smallest physical systems. Our approach is to exploit the remarkable dimensionality reduction properties of artificial neural networks Hinton and Salakhutdinov (2006). We use restricted Boltzmann machines (RBM), a class of probabilistic networks well adapted to approximating arbitrary data probability distributions. An RBM is composed of two layers of nodes, the “visible” layer, corresponding to local degrees of freedom in our setting, and a “hidden” layer. The interactions between the layers are defined by an energy function $E_{a,b,\theta}(\mathcal{V},\mathcal{H})$, such that the joint probability distribution for a particular configuration of visible and hidden degrees of freedom is given by a Boltzmann weight:

$$P_\Theta(\mathcal{V},\mathcal{H}) = \frac{1}{Z} e^{-E_{a,b,\theta}(\mathcal{V},\mathcal{H})}, \tag{2}$$

with $Z$ the normalization. The goal of the network training is to find the parameters (“weights” or “filters”) $a$, $b$ and $\theta$ optimizing a chosen objective function.
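A minimal illustration of Eq. (2): for an RBM small enough to enumerate, the Boltzmann weights can be normalized by brute force. The function names and the $\{0,1\}$ unit convention are our own assumptions:

```python
import numpy as np
from itertools import product

def rbm_energy(v, h, theta, a, b):
    """E(v, h) = -v.theta.h - a.v - b.h for binary unit vectors v, h."""
    return -(v @ theta @ h) - a @ v - b @ h

def rbm_joint(theta, a, b):
    """Exact joint P(v, h) by brute-force normalization (tiny RBMs only)."""
    nv, nh = theta.shape
    states = [(np.array(v), np.array(h))
              for v in product([0, 1], repeat=nv)
              for h in product([0, 1], repeat=nh)]
    w = np.array([np.exp(-rbm_energy(v, h, theta, a, b)) for v, h in states])
    return w / w.sum()   # Boltzmann weights divided by Z
```

In practice the normalization $Z$ is of course never computed explicitly; the exhaustive sum here only serves to make the definition concrete.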

Three distinct RBMs are used: two are trained as efficient approximators of the probability distributions $P(\mathcal{V})$ and $P(\mathcal{V},\mathcal{E})$, using the celebrated contrastive divergence (CD) algorithm G.E. (2002). Their trained parameters are used by the third network [see Fig. 1(B)], which has a different objective: to find $\Lambda$ maximizing $I_\Lambda(\mathcal{H}:\mathcal{E})$, we introduce the real space mutual information (RSMI) network, whose architecture is shown in Fig. 1(A). The hidden units of RSMI correspond to the coarse-grained variables $\mathcal{H}$.

The parameters $\Lambda$ of the RSMI network are trained by an iterative procedure. At each iteration a Monte Carlo estimate of the mutual information and its gradients is performed for the current values of the parameters. The gradients are then used to improve the values of the weights in the next step, using a stochastic gradient descent procedure.

The trained weights define the probability distribution $P_\Lambda(\mathcal{H}|\mathcal{V})$ of a Boltzmann form, which is used to generate MC samples of the coarse-grained system. Those, in turn, become the input to the next iteration of the RSMI algorithm. The estimates of the mutual information, the weights of the trained RBMs and the sets of generated MC samples at every RG step can be used to extract quantitative information about the system in the form of correlation functions, critical exponents etc., as we show below and in the supplementary materials. We also emphasize that the parameters identifying the relevant degrees of freedom are re-computed at every RG step. This potentially allows RSMI to capture the evolution of the degrees of freedom along the RG flow Ludwig and Cardy (1987).

## Validation

To validate our approach we consider two important classical models of statistical physics: the Ising model, whose coarse-grained degrees of freedom resemble the original ones, and the fully-packed dimer model, where they are entirely different.

The Ising Hamiltonian on a two-dimensional square lattice is:

$$H_I = \sum_{\langle ij \rangle} s_i s_j, \tag{3}$$

with $s_i = \pm 1$ and the summation running over nearest neighbours $\langle ij \rangle$. Real-space RG of the Ising model proceeds by the block-spin construction Kadanoff (1966), whereby each block of spins is coarse-grained into a single effective spin, whose orientation is decided by a “majority rule”.
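The majority rule is simple to state in code. The sketch below coarse-grains a $\pm 1$ spin lattice over square blocks; the block size and the random tie-breaking are illustrative choices, not taken from the text:

```python
import numpy as np

def block_spin(s, b=2, rng=None):
    """Kadanoff majority-rule coarse-graining of a square spin lattice s
    (values +/-1) into b x b blocks; ties are broken at random."""
    rng = rng or np.random.default_rng(0)
    L = s.shape[0]
    # Sum the spins inside each b x b block...
    blocks = s.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    # ...and let the sign of the block magnetisation decide the new spin.
    coarse = np.sign(blocks)
    ties = coarse == 0
    coarse[ties] = rng.choice([-1, 1], size=ties.sum())
    return coarse
```

Each application halves the linear dimension (for `b=2`), exactly as in the iterated RG procedure described above.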

The results of the RSMI algorithm trained on Ising model samples are shown in Fig. 2. We vary the number of both the hidden neurons and the visible units, the latter arranged in a square 2D area [see Fig. 1(A)]. For the smallest block the network indeed rediscovers the famous Kadanoff block spin: Fig. 2(A) shows a single hidden unit coupling uniformly to the visible spins, i.e. the orientation of the hidden unit is decided by the average magnetisation in the area. Fig. 2(B) is a trivial but important sanity check: given as many hidden units as there are spins in the area, the network couples each hidden unit to a different spin, as expected. In the supplementary materials we also compare the weights for areas of different size, which yield generalizations of the Kadanoff procedure to larger blocks.

We next study the dimer model, given by an entropy-only partition function, which counts the number of dimer coverings of the lattice, i.e. subsets of edges such that every vertex is the endpoint of exactly one edge. Fig. 3(A) shows sample dimer configurations (and additional spin degrees of freedom added to generate noise). This deceptively simple description hides nontrivial physics Fisher and Stephenson (1963) and correspondingly, the RG procedure for the dimer model is more subtle, since – contrary to the Ising case – the correct degrees of freedom to perform RG on are not dimers, but rather look like effective local electric fields. This is revealed by a mathematical mapping to a “height field” (see Figs. 3(A,B) and Ref. Fradkin (2013)), whose gradients behave like electric fields. The continuum limit of the dimer model is given by the following action:

$$S_{\mathrm{dim}}[h] = \int d^2x\, \left(\nabla h(\vec{x})\right)^2 \equiv \int d^2x\, \vec{E}^2(\vec{x}), \tag{4}$$

and therefore the coarse-grained degrees of freedom are low-momentum (Fourier) components of the electric fields in the $x$ and $y$ directions. They correspond to the “staggered” dimer configurations shown in Fig. 3(A).

Remarkably, the RSMI algorithm extracts the local electric fields from the dimer model samples without any knowledge of those mappings. In Fig. 4 the weights for increasing numbers of hidden neurons, for an area similar to Fig. 3(A), are shown: the pattern of large negative (blue) weights couples strongly to a dimer pattern corresponding to a local uniform field [see left panels of Figs. 3(A,B)]. The large positive (yellow) weights select an identical pattern, translated by one link. The remaining neurons extract linear superpositions of the fields.

To demonstrate the robustness of the RSMI algorithm, we added physically irrelevant noise, forming nevertheless a pronounced pattern, which we model by additional spin degrees of freedom, strongly coupled (ferromagnetically) in pairs [wiggly lines in Fig. 3(A)]. Decoupled from the dimers, and from other pairs, they form a trivial system, whose fluctuations are short-range noise on top of the dimer model. Vanishing weights [green in Figs. 4(A,B)] on the sites where the pairs of spins reside prove that RSMI discards their fluctuations as irrelevant for the long-range physics, despite their regular pattern.

Notably, the filters obtained using our approach for the dimer model, which match the analytical expectation, are orthogonal to those obtained using Kullback-Leibler (KL) divergence. As expanded upon in the supplementary materials, this shows that standard RBMs minimizing the KL-divergence do not generally perform RG, thereby contradicting prior claims Mehta and Schwab (2014).

Finally, we demonstrate that by iterating the RSMI algorithm the qualitative insights about the nature of the relevant degrees of freedom give rise to quantitative results. To this end we revisit the 2D Ising model, which (contrary to the dimer model) exhibits a nontrivial critical point at the temperature $T_c$, separating the paramagnetic and ferromagnetic phases. We generate MC samples of the system at temperatures around the critical point, and for each one we perform up to four RG steps, by computing the filters using RSMI, coarse-graining the system with respect to those filters (effectively halving the linear dimensions) and re-iterating the procedure. In addition to the set of MC configurations for the coarse-grained system, estimates of the mutual information, as well as the filters of the CD-trained RBMs, are generated and stored. The effective temperature of the system at each RG step can be evaluated entirely intrinsically, either from correlations or from the mutual information, as discussed in the supplement. Using the RBM filters, spin-spin correlations (next-nearest-neighbour, for instance) can be computed. By comparing these with known analytical results McCoy and Wu (1973) an additional cross-check of the effective temperature can be obtained.

In Fig. 5 the effective temperature is plotted against $\log_2(\xi/\xi')$, where $\xi'$, $\xi$ are the current and original system correlation lengths, respectively (this has the meaning of an RG step for integer values). The RG flow of the 2D Ising model is recovered: systems starting below $T_c$ flow towards ever decreasing temperature, i.e. an ordered state, while the ones starting above $T_c$ flow towards a paramagnet. In fact, the position of the critical point can be estimated with good accuracy just from the divergent flow. Furthermore, we evaluate the correlation length exponent $\nu$, defined by $\xi \propto |T - T_c|^{-\nu}$. Using the finite-size data collapse [see Fig. 4 in Supplemental Materials] its value, equal to the negative slope, is estimated to be consistent with the exact analytical result $\nu = 1$.
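To illustrate the definition $\xi \propto |T - T_c|^{-\nu}$ used here (on synthetic data, not the paper's finite-size collapse), the exponent can be read off as the negative slope of a log-log fit:

```python
import numpy as np

Tc, nu_true = 2.269, 1.0            # 2D Ising critical values
T = np.linspace(2.30, 2.60, 20)      # temperatures above Tc
xi = np.abs(T - Tc) ** (-nu_true)    # synthetic correlation lengths

# Slope of log(xi) vs log|T - Tc| is -nu by definition.
slope, _ = np.polyfit(np.log(np.abs(T - Tc)), np.log(xi), 1)
nu_est = -slope
```

With real MC data the scatter is of course larger and a finite-size scaling collapse, as used in the text, is the more robust estimator.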

## Future directions

Artificial neural networks based on real-space mutual information optimization have proven capable of extracting complex information about physically relevant degrees of freedom and using it to perform a real-space RG procedure. The RSMI algorithm we propose allows for the study of the existence and location of critical points, and the RG flow in their vicinity, as well as the estimation of correlation functions, critical exponents etc. This approach is an example of a new paradigm in applying machine learning in physics: the internal data representations discovered by suitably designed algorithms are not just technical means to an end, but instead are a clear reflection of the underlying structure of the physical system (see also Schoenholz et al. (2016)). Thus, in spite of their “black box” reputation, the innards of such architectures may teach us fundamental lessons. This raises the prospect of employing machine learning in science in a collaborative fashion, exploiting the machines’ power to distill subtle information from vast data, and human creativity and background knowledge Jordan and Mitchell (2015).

Numerous further research directions can be pursued. Most directly, equilibrium systems with less understood relevant degrees of freedom – e.g. disordered and glassy systems – can be investigated Ronhovde et al. (2011, 2012). The ability of the RSMI algorithm to re-compute the relevant degrees of freedom at every RG step potentially allows one to study their evolution along the (more complicated) RG flow Ludwig and Cardy (1987). Furthermore, though we studied classical systems, the extension to the quantum domain is possible via the quantum-to-classical mapping of the Euclidean path integral formalism. A more detailed analysis of the mutual-information based RG procedure may prove fruitful from a theory perspective. Finally, applications of RSMI beyond physics are possible, since it offers a neural network implementation of a variant of the Information Bottleneck method Tishby et al. (2001), successful in compression and clustering analyses Slonim and Tishby (2000); it can also be used as a local-noise filtering pre-training stage for other machine learning algorithms.

#### Acknowledgements –

We thank Profs. S. Huber and P. Fendley for discussions. M.K-J. gratefully acknowledges the support of Swiss National Science Foundation (SNSF). Z.R. was supported by the European Unions Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 657111.

#### Authorship –

Both authors (MKJ, ZR) contributed equally to this work.

#### Data availability statement –

The data that support the plots within this paper and other findings of this study are available from the corresponding author upon request.

## References

• LeCun et al. (2015) Y. LeCun, Y. Bengio,  and Hinton G.E., “Deep learning,” Nature 521, 436–444 (2015).
• Silver and et al. (2016) D. Silver and et al., “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 584–589 (2016).
• Hershey et al. (2010) John R. Hershey, Steven J. Rennie, Peder A. Olsen,  and Trausti T. Kristjansson, “Super-human multi-talker speech recognition: A graphical modeling approach,” Comput. Speech Lang. 24, 45–66 (2010).
• Carrasquilla and Melko (2017) J. Carrasquilla and R. G. Melko, “Machine learning phases of matter,” Nature Physics 13, 431–434 (2017).
• Torlai and Melko (2016) G. Torlai and R. G. Melko, “Learning thermodynamics with Boltzmann machines,” Phys. Rev. B 94, 165134 (2016).
• van Nieuwenburg et al. (2017) E. P. L. van Nieuwenburg, Y.-H. Liu,  and S. D. Huber, “Learning phase transitions by confusion,” Nature Physics 13, 435–439 (2017).
• Wang (2016) L. Wang, “Discovering phase transitions with unsupervised learning,” Phys. Rev. B 94, 195105 (2016).
• Ohtsuki and Ohtsuki (2017) T. Ohtsuki and T. Ohtsuki, “Deep Learning the Quantum Phase Transitions in Random Electron Systems: Applications to Three Dimensions,” Journal of the Physical Society of Japan 86, 044708 (2017).
• Ronhovde et al. (2011) P. Ronhovde, S. Chakrabarty, D. Hu, M. Sahu, K. K. Sahu, K. F. Kelton, N. A. Mauro,  and Z. Nussinov, ‘‘Detecting hidden spatial and spatio-temporal structures in glasses and complex physical systems by multiresolution network clustering,” The European Physical Journal E 34, 105 (2011).
• Ronhovde et al. (2012) P. Ronhovde, S. Chakrabarty, D. Hu, M. Sahu, K. K. Sahu, K. F. Kelton, N. A. Mauro,  and Z. Nussinov, “Detection of hidden structures for arbitrary scales in complex physical systems.” Scientific Reports 2, 329 (2012).
• Hinton and Salakhutdinov (2006) G.E. Hinton and R.R. Salakhutdinov, “Reducing the Dimensionality of Data with Neural Networks,” Science 313, 504–507 (2006).
• Lin and Tegmark (2017) H. W. Lin and M. Tegmark, “Why does deep and cheap learning work so well?” Journal of Statistical Physics 168, 1223–1247 (2017).
• Carleo and Troyer (2017) G. Carleo and M. Troyer, “Solving the Quantum Many-Body Problem with Artificial Neural Networks,” Science 355, 602–606 (2017).
• Deng et al. (2017) Dong-Ling Deng, Xiaopeng Li,  and S. Das Sarma, “Machine learning topological states,” Phys. Rev. B 96, 195145 (2017).
• Wilson (1975) Kenneth G. Wilson, “The renormalization group: Critical phenomena and the kondo problem,” Rev. Mod. Phys. 47, 773–840 (1975).
• Politzer (1973) H. David Politzer, “Reliable perturbative results for strong interactions?” Phys. Rev. Lett. 30, 1346–1349 (1973).
• Berezinskii (1971) V. L. Berezinskii, “Destruction of Long-range Order in One-dimensional and Two-dimensional Systems having a Continuous Symmetry Group I. Classical Systems,” Soviet Journal of Experimental and Theoretical Physics 32, 493 (1971).
• Kosterlitz and Thouless (1973) J.M. Kosterlitz and D. Thouless, “Ordering, metastability and phase transitions in two-dimensional systems,” Journal of Physics C: Solid State Physics 6, 1181 (1973).
• Kadanoff (1966) L. P. Kadanoff, “Scaling laws for Ising models near T(c),” Physics 2, 263–272 (1966).
• Wetterich (1993) Christof Wetterich, “Exact evolution equation for the effective potential,” Physics Letters B 301, 90 – 94 (1993).
• White (1992) Steven R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett. 69, 2863–2866 (1992).
• Ma et al. (1979) Shang-keng Ma, Chandan Dasgupta,  and Chin-kun Hu, “Random antiferromagnetic chain,” Phys. Rev. Lett. 43, 1434–1437 (1979).
• Corboz and Mila (2013) Philippe Corboz and Frederic Mila, “Tensor network study of the shastry-sutherland model in zero magnetic field,” Phys. Rev. B 87, 115144 (2013).
• Capponi et al. (2013) Sylvain Capponi, V. Ravi Chandra, Assa Auerbach,  and Marvin Weinstein, ‘‘ chiral resonating valence bonds in the kagome antiferromagnet,” Phys. Rev. B 87, 161118 (2013).
• Auerbach (1994) A. Auerbach, Interacting electrons and quantum magnetism (Springer, 1994).
• Gaite and O’Connor (1996) Jose Gaite and Denjoe O’Connor, “Field theory entropy, the theorem, and the renormalization group,” Phys. Rev. D 54, 5163–5173 (1996).
• Preskill (2000) J. Preskill, “Quantum information and physics: some future directions,” J. Mod. Opt. 47, 127–137 (2000).
• Apenko (2012) S.M. Apenko, “Information theory and renormalization group flows,” Physica A 391, 62–77 (2012).
• Machta et al. (2013) B.B. Machta, R. Chachra, M.K. Transtrum,  and J.P. Sethna, “Parameter space compression underlies emergent theories and predictive models,” Science 342, 604–607 (2013).
• Beny and Osborne (2015) Cedric Beny and Tobias J Osborne, “The renormalization group via statistical inference,” New Journal of Physics 17 (2015).
• Stephan et al. (2014) Jean-Marie Stephan, Stephen Inglis, Paul Fendley,  and Roger G. Melko, “Geometric mutual information at classical critical points,” Phys. Rev. Lett. 112, 127204 (2014).
• Tishby et al. (2001) N. Tishby, F. C. Pereira,  and W. Bialek, “The information bottleneck method,” Proceedings of the 37th Allerton Conference on Communication, Control and Computation,  49 (2001).
• G.E. (2002) Hinton G.E., “Training Products of Experts by Minimizing Contrastive Divergence,” Neural Computation 14, 1771–1800 (2002).
• Ludwig and Cardy (1987) Andreas W.W. Ludwig and John L. Cardy, “Perturbative evaluation of the conformal anomaly at new critical points with applications to random systems,” Nuclear Physics B 285, 687 – 718 (1987).
• Fradkin (2013) E. Fradkin, Field theories of Condensed Matter Physics (Cambridge University Press, 2013).
• Fisher and Stephenson (1963) Michael E. Fisher and John Stephenson, “Statistical mechanics of dimers on a plane lattice. ii. dimer correlations and monomers,” Phys. Rev. 132, 1411–1431 (1963).
• Mehta and Schwab (2014) P. Mehta and D. J. Schwab, ‘‘An exact mapping between the Variational Renormalization Group and Deep Learning,” ArXiv e-prints abs/1410.3831 (2014).
• McCoy and Wu (1973) B.M. McCoy and T.T. Wu, The two-dimensional Ising model (Harvard University Press, 1973).
• Schoenholz et al. (2016) S.S. Schoenholz, E.D. Cubuk, D.M. Sussman, E. Kaxiras,  and A.J. Liu, “A structural approach to relaxation in glassy liquids,” Nature Physics 12, 469–471 (2016).
• Jordan and Mitchell (2015) M.I. Jordan and T.M. Mitchell, “Machine learning: Trends, perspectives, and prospects,” Science 349, 255–260 (2015).
• Slonim and Tishby (2000) Noam Slonim and Naftali Tishby, “Document clustering using word clusters via the information bottleneck method,” in Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’00 (ACM, 2000) pp. 208–215.
• Haykin (2009) S. Haykin, Neural Networks and Learning Machines (Pearson, 2009).
• Theano Development Team (2016) Theano Development Team, “Theano: A Python framework for fast computation of mathematical expressions,” arXiv e-prints abs/1605.02688 (2016).
• Cardy (1988) J.L. Cardy, Finite-Size Scaling, Current physics (North-Holland, 1988).

Supplemental Materials for “Mutual Information, Neural Networks and the Renormalization Group” – Methods

## Estimating Mutual Information

Here we gather the notations and define formally the quantities appearing in the main text. We derive in detail the expression for the approximate mutual information measure, which is evaluated numerically by the RSMI algorithm. This measure is given in terms of a number of probability distributions, accessible via Monte Carlo samples and approximated by contrastive divergence (CD) trained RBMs, or directly defined by (different) RBMs Haykin (2009).

We consider a statistical system of classical Ising variables $x_i = \pm 1$, which can equally well describe the presence or absence of a dimer. The system is divided into a small “visible” area $\mathcal{V}$, an “environment” $\mathcal{E}$, and a “buffer” $\mathcal{B}$ separating the degrees of freedom in $\mathcal{V}$ and $\mathcal{E}$ spatially (for clarity of exposition the buffer was subsumed into the environment in the main text). We additionally consider a set of “hidden” Ising variables $\mathcal{H}$. For the main RSMI network described in the text, $\mathcal{H}$ has the interpretation of the coarse-grained degrees of freedom extracted from $\mathcal{V}$.

We assume the data distribution $P(\mathcal{X})$ – formally a Boltzmann equilibrium distribution defined by a Hamiltonian – is given to us indirectly via random (Monte Carlo) samples of $\mathcal{X}$. The distributions $P(\mathcal{V})$ and $P(\mathcal{V},\mathcal{E})$ are defined as marginalizations of $P(\mathcal{X})$. Performing the marginalizations explicitly is computationally costly, and therefore it is much more efficient to approximate $P(\mathcal{V})$ and $P(\mathcal{V},\mathcal{E})$ using two RBMs of the type defined in and above Eq. (2), trained using the CD-algorithm G.E. (2002) on the restrictions of the samples. The trained networks, with their respective parameter sets (which we refer to as $\Theta$-RBMs), define probability distributions $P_\Theta(\mathcal{V})$ and $P_\Theta(\mathcal{V},\mathcal{E})$, respectively [see also Fig. 1(B)]. From a mathematical standpoint, contrastive divergence is based on minimizing a proxy to the Kullback-Leibler divergence between $P_\Theta(\mathcal{V})$ and $P_\Theta(\mathcal{V},\mathcal{E})$ and the data probability distributions $P(\mathcal{V})$ and $P(\mathcal{V},\mathcal{E})$, respectively, i.e. the training produces RBMs which model the data well G.E. (2002).

The conditional probability distribution $P_\Lambda(\mathcal{H}|\mathcal{V})$ is defined by another RBM, denoted henceforth the $\Lambda$-RBM, with tunable parameters $\Lambda$:

$$P_\Lambda(\mathcal{H}|\mathcal{V}) = \frac{e^{-E_\Lambda(\mathcal{V},\mathcal{H})}}{\sum_{\mathcal{H}} e^{-E_\Lambda(\mathcal{V},\mathcal{H})}}, \qquad E_\Lambda(\mathcal{V},\mathcal{H}) = -\sum_{ij} v_i \lambda_{ji} h_j - \sum_i a_i v_i - \sum_j b_j h_j \tag{5}$$

In contrast to the $\Theta$-RBMs, it will not be trained using the CD-algorithm, since its objective is not to approximate the data probability distribution. Instead, the parameters $\Lambda$ will be chosen so as to maximize a measure of mutual information between $\mathcal{H}$ and $\mathcal{E}$. The reason for the exclusion of a buffer $\mathcal{B}$, generally of linear extent comparable to $\mathcal{V}$, is that otherwise the MI would take into account correlations of $\mathcal{V}$ with its immediate vicinity, which are equivalent to short-ranged correlations within $\mathcal{V}$ itself. We now derive the MI expression explicitly.
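For binary hidden units the conditional distribution of Eq. (5) factorizes over the hiddens, so sampling $\mathcal{H}$ given $\mathcal{V}$ reduces to independent sigmoid draws. A sketch, assuming $\{0,1\}$-valued hidden units and our own function names:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, lam, b, rng=None):
    """Sample H ~ P_Lambda(H|V) for the RBM energy of Eq. (5).
    The hidden units are conditionally independent, with
    P(h_j = 1 | v) = sigmoid(sum_i lambda_ji v_i + b_j)."""
    rng = rng or np.random.default_rng(0)
    p = sigmoid(lam @ v + b)   # lam has shape (n_hidden, n_visible)
    return (rng.random(p.shape) < p).astype(int), p
```

This conditional sampling step is all that is needed to propagate MC samples through one coarse-graining layer.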

Using $P_\Lambda(\mathcal{H}|\mathcal{V})$ and $P_\Theta(\mathcal{V},\mathcal{E})$ we can define the joint probability distribution $P_\Lambda(\mathcal{V},\mathcal{E},\mathcal{H})$ and marginalize over $\mathcal{V}$ to obtain $P_\Lambda(\mathcal{E},\mathcal{H})$. We can then define the mutual information (MI) between $\mathcal{H}$ and $\mathcal{E}$ in the standard fashion:

$$I_\Lambda(\mathcal{H}:\mathcal{E}) = \sum_{\mathcal{H},\mathcal{E}} P_\Lambda(\mathcal{E},\mathcal{H}) \log\left(\frac{P_\Lambda(\mathcal{E},\mathcal{H})}{P_\Lambda(\mathcal{H})\, P(\mathcal{E})}\right) \tag{6}$$

The main task is to find the set of parameters $\Lambda$ which maximizes $I_\Lambda(\mathcal{H}:\mathcal{E})$ given the samples of $\mathcal{X}$. Since $P(\mathcal{E})$ is not a function of $\Lambda$, one can optimize a simpler quantity:

$$A_\Lambda(\mathcal{H}:\mathcal{E}) = \sum_{\mathcal{H},\mathcal{E}} P_\Lambda(\mathcal{E},\mathcal{H}) \log\left(\frac{P_\Lambda(\mathcal{E},\mathcal{H})}{P_\Lambda(\mathcal{H})}\right) \tag{7}$$

Using the $\Theta$-RBM approximations of the data probability distributions, as well as the definition of the $\Lambda$-RBM, one can further rewrite this as:

$$A_\Lambda(\mathcal{H}:\mathcal{E}) = \sum_{\mathcal{H},\mathcal{E}} P_\Lambda(\mathcal{E},\mathcal{H}) \log\left(\frac{\sum_{\mathcal{V}} P_\Lambda(\mathcal{V},\mathcal{H})\, P_\Theta(\mathcal{V},\mathcal{E})\, /\, P_\Lambda(\mathcal{V})}{\sum_{\mathcal{V}'} P_\Lambda(\mathcal{V}',\mathcal{H})\, P_\Theta(\mathcal{V}')\, /\, P_\Lambda(\mathcal{V}')}\right) \tag{8}$$

The daunting-looking argument of the logarithm can in fact be cast in a simple form, using the fact that all the probability distributions involved are either of Boltzmann form, or marginalizations thereof over the hidden variables, which can be performed explicitly:

$$A_\Lambda(\mathcal{H}:\mathcal{E}) \equiv \sum_{\mathcal{H},\mathcal{E}} P_\Lambda(\mathcal{E},\mathcal{H}) \log\left(\frac{\sum_{\mathcal{V}} e^{-E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H})}}{\sum_{\mathcal{V}'} e^{-E_{\Lambda,\Theta}(\mathcal{V}',\mathcal{H})}}\right), \tag{9}$$

where

$$E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H}) = E_\Lambda(\mathcal{V},\mathcal{H}) + E_\Theta(\mathcal{V},\mathcal{E}) + \sum_j \log\left[1 + \exp\left(\sum_i v_i \lambda_{ji} + b_j\right)\right] \tag{10}$$
$$E_{\Lambda,\Theta}(\mathcal{V},\mathcal{H}) = E_\Lambda(\mathcal{V},\mathcal{H}) + E_\Theta(\mathcal{V}) + \sum_j \log\left[1 + \exp\left(\sum_i v_i \lambda_{ji} + b_j\right)\right],$$

and where $E_\Theta(\mathcal{V})$ and $E_\Theta(\mathcal{V},\mathcal{E})$ are defined by the parameter sets of the trained $\Theta$-RBMs:

$$P_\Theta(\mathcal{V}) \propto e^{-E_\Theta(\mathcal{V})}, \qquad P_\Theta(\mathcal{V},\mathcal{E}) \propto e^{-E_\Theta(\mathcal{V},\mathcal{E})} \tag{11}$$

Note that since in $E_{\Lambda,\Theta}$ the dependence on the parameters $a$ cancels out [and consequently also in $A_\Lambda$], the quantity does not depend on $a$. Hence, without loss of generality, we put $a = 0$ in our numerical simulations, i.e. the $\Lambda$-RBM is specified by the parameters $\lambda$ and $b$ only.

$A_\Lambda$ is an average over the distribution $P_\Lambda(\mathcal{E},\mathcal{H})$ of a logarithmic expression [see Eq. (9)], which itself can be further rewritten as a statistical expectation value for a system with energy $E_{\Lambda,\Theta}(\mathcal{V},\mathcal{H})$, with the variables $\mathcal{E}$, $\mathcal{H}$ held fixed:

$$\log\left(\frac{\sum_{\mathcal{V}} e^{-E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H})}}{\sum_{\mathcal{V}'} e^{-E_{\Lambda,\Theta}(\mathcal{V}',\mathcal{H})}}\right) = \log\left(\frac{\sum_{\mathcal{V}} e^{-\Delta E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H})}\, e^{-E_{\Lambda,\Theta}(\mathcal{V},\mathcal{H})}}{\sum_{\mathcal{V}'} e^{-E_{\Lambda,\Theta}(\mathcal{V}',\mathcal{H})}}\right) \tag{12}$$
$$\equiv \log\left(\left\langle e^{-\Delta E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H})} \right\rangle_{\mathcal{H}}\right) \approx \left\langle -\Delta E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H}) \right\rangle_{\mathcal{H}} \tag{13}$$

with $\Delta E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H}) = E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H}) - E_{\Lambda,\Theta}(\mathcal{V},\mathcal{H})$. Thus finally, we arrive at a simple expression for $A_\Lambda$:

$$A_\Lambda(\mathcal{H}:\mathcal{E}) \approx \sum_{\mathcal{H},\mathcal{E}} P_\Lambda(\mathcal{E},\mathcal{H}) \left\langle -\Delta E_{\Lambda,\Theta}(\mathcal{V},\mathcal{E},\mathcal{H}) \right\rangle_{\mathcal{H}}. \tag{14}$$

This expression can be numerically evaluated: using the fact that $P_\Lambda(\mathcal{E},\mathcal{H}) = \sum_{\mathcal{V}'} P_\Lambda(\mathcal{H}|\mathcal{V}')\, P(\mathcal{V}',\mathcal{E})$, we replace the sums over $\mathcal{E}$ and $\mathcal{H}$ with a Monte Carlo (MC) average over samples $(\mathcal{V}',\mathcal{E})$. Furthermore, given a $\Lambda$-RBM (at the current stage of training) and a sample of $\mathcal{V}'$, one can easily draw a sample $\mathcal{H}(\mathcal{V}')$ according to the probability distribution $P_\Lambda(\mathcal{H}|\mathcal{V}')$. Hence we have a MC estimate:

$$A_\Lambda(H:E)\approx\frac{1}{N_{(V,E)}}\sum_{(V',E,H(V'))_i}\big\langle-\Delta E_{\Lambda,\Theta}(V,E,H)\big\rangle_H.\qquad(15)$$

The expectation value in the summand is itself also evaluated by MC averaging, this time with respect to the Boltzmann probability distribution with energy $E_{\Lambda,\Theta}(V,H)$.

## Numerical evaluation

### 1. One step of RSMI

Numerical results in the paper were obtained using a purpose-written (Python/Theano Theano Development Team (2016)) implementation of the main RSMI network and a standard implementation of CD-trained RBMs. For the RSMI network, the maximization of $A_\Lambda(H:E)$ with respect to the parameters $\Lambda$ is performed using a stochastic gradient descent procedure Haykin (2009). To this end we estimate the derivative of $A_\Lambda$ over the samples. More accurately, we divide the samples into mini-batches, obtain an averaged estimate of the gradient, and use it to update the parameters of the $\Lambda$-RBM (and similarly for the remaining parameters) with a given learning rate. This is then repeated for the next mini-batch. A run through all mini-batches constitutes one epoch of training. For the final data from the RSMI network 2000 epochs were used, with a mini-batch size of 800; a regulator was used. The internal Monte Carlo rough estimate of the expectation value in Eq. (15) used two samples after a burn-in period of 126 steps. For numerical efficiency we restricted the environment-buffer-visible setup to a window of total size three times the linear extent of the visible area to be coarse-grained. The contrastive divergence RBMs were trained for 300 epochs in the case of the Ising model and 2000 epochs for the dimers, with a mini-batch size of 25; a regulator was used. Fig. 5 shows the convergence of the estimation and the development of the weight matrices for an example training run in the case of the Ising system.
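The mini-batch training loop described above can be sketched as follows. This is a schematic of standard stochastic gradient ascent, not the paper's code; `grad_A` stands in for the explicit analytic gradient of $A_\Lambda$, whose form depends on the RBM energies, and all names and default values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_train(samples, lam, grad_A, lr=0.01, batch_size=800, epochs=2000):
    """Maximize the MI proxy A over filter parameters lam by mini-batch ascent.

    grad_A(lam, batch) is a placeholder for the analytic gradient of A_Lambda
    estimated on one mini-batch of (V, E) samples.
    """
    n = len(samples)
    for _ in range(epochs):
        order = rng.permutation(n)              # reshuffle samples each epoch
        for start in range(0, n, batch_size):
            batch = samples[order[start:start + batch_size]]
            lam = lam + lr * grad_A(lam, batch)  # gradient *ascent* step on A
    return lam
```

One run through all mini-batches corresponds to one epoch, exactly as in the paragraph above.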

The initial Ising/dimer data were generated by Monte Carlo simulations. For the dimer data we used a $64\times64$ lattice ($128\times128$ in terms of spins) and performed MC with loop updates. The number of MC steps between recorded samples was tuned so that the correlation between consecutive samples was at the level of noise. For the Ising system we used a $128\times128$ lattice and a cluster-update MC.

We remark that the gradients of $A_\Lambda$ are best computed explicitly (a simple, if tedious, computation) prior to numerical evaluation; one should not use the automated gradient computation capability provided by e.g. the Theano package Theano Development Team (2016). The reason is that some of the dependence on the parameters is stochastic. Specifically, it enters via the threshold values of the Monte Carlo acceptance, and this dependence results in a piece-wise constant function (albeit with very fine steps), which is not handled correctly by automated gradient computation (the numerical gradients would be equal to zero in most cases).

### 2. Multiple steps

Here we describe the iterative procedure for performing multiple RG steps in sequence and extracting the numerical quantities characterizing the RG flow. The structure of the algorithm is shown in Fig. 7. The trained filters of the $\Lambda$-RBM are used to construct a Monte Carlo sampler: the configurations are tiled with a window the size of the visible area, and the new coarse-grained variables are assigned with acceptance probability given by $P_\Lambda(H|V)$. For the 2D Ising model we used a single coarse-grained variable per visible area. The new, smaller set of coarse-grained configurations is used to train the $\Lambda$- and $\Theta$-RBMs at this scale, iterating the procedure.
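The tiling-and-sampling step can be sketched as follows (a minimal illustration, ours; a $2\times2$ block, the sigmoid form of $P_\Lambda(h|V)$ for a binary RBM unit, and all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def coarse_grain(config, lam, b=0.0, block=2):
    """Tile a 2D +/-1 spin configuration with block x block visible areas and
    sample one coarse-grained spin per area from P(h=+1|V) = sigmoid(lam.V + b)."""
    L = config.shape[0]
    out = np.empty((L // block, L // block), dtype=int)
    for i in range(0, L, block):
        for j in range(0, L, block):
            v = config[i:i + block, j:j + block].ravel()  # flatten visible area
            p = 1.0 / (1.0 + np.exp(-(lam @ v + b)))      # P(h = +1 | V)
            out[i // block, j // block] = 1 if rng.random() < p else -1
    return out
```

Applying `coarse_grain` to every configuration in the sample set produces the input for the next RG step, as described above.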

In addition to the filters, whose significance in identifying the relevant degrees of freedom we discussed in the main text, the multiple stages of the RSMI algorithm generate a wealth of other numerical data of physical importance:

• The sets of MC configurations at successive length-scales. Those can be used to compute correlation functions, evaluate expectation values, etc.

• The $\Theta$-RBM filters at successive length-scales. Since the trained $\Theta$-RBM is an efficient approximator of the Boltzmann probability distribution generating the samples of the system at a given length-scale, it can also be used to compute the properties of the system, such as correlations (without using the coarse-grained MC samples; in fact the RBM can be used to generate new samples). Below we compute the next-nearest-neighbour correlations in the Ising model as an example. They can also be used to intrinsically evaluate the effective temperature of the system.

• The Mutual Information (or the proxy $A_\Lambda$) captured by the $\Lambda$-RBM. It can also be used to evaluate the effective temperature at successive RG steps intrinsically. Below we show how a practical MI “thermometer” is constructed.

These data allow the RSMI algorithm to make quantitative predictions. We show below, using the example of the 2D Ising model, how they are sufficient to characterize the RG flow: the position of the critical point, the flow around it (stable/unstable), as well as the critical exponents, can all be evaluated.

We remark here that using coarse-graining schemes with more than one hidden degree of freedom per visible area requires certain care. This is well illustrated by Fig. 2B in the main text, where one of the hidden spins couples anti-ferromagnetically to the visible spin it is tracking. The immediate reason is that MI is maximized when the hidden and visible spins are either perfectly correlated or perfectly anti-correlated, as in both cases knowing one spin of the pair fully determines the other. Thus, for every hidden there is a local $\mathbb{Z}_2$ symmetry, which is generically not broken, since RBMs have the conditional independence property, i.e. $P(H|V)=\prod_j P(h_j|V)$. Thus every one of the 4 hiddens independently “decides” whether to align or anti-align with the single visible spin it is tracking.

One may wonder if this is not a problem from the point of view of using the filters to generate coarse-grained configurations (for which the “sanity check” filters were never intended to be used; their only purpose is to demonstrate full information capture), as independently “misaligned” filters could, for instance, introduce anti-correlations between coarse-grained areas where previously the spins were correlated, or vice versa. The answer is that this pitfall is in practice avoidable. First, in the most common case of coarse-graining an area to a single hidden degree of freedom there is only one filter, which is used for the whole system. For a coarse-graining with more hiddens the relative phase between the filters needs to be fixed, which can be done, for instance, by comparing the signs of a correlator computed using the coarse-grained variables to the correlator of the original (composite) variables they couple to, which is fast for small areas.

## RG flow

In this subsection we provide more details on how the RG flow results in the paper were obtained. We demonstrate the ability of the RSMI algorithm to characterize the RG flow both qualitatively and quantitatively on the canonical example of the 2D Ising model. This allows us to benchmark against exact analytical results for the Ising model. The model exhibits two phases: a high-temperature disordered (paramagnetic) phase, and a low-temperature ordered (ferromagnetic) phase, separated by a critical point, which is an unstable RG fixed point. The critical exponents are known exactly.

In order to test our predictions we generated sets of MC samples of the system at several values of the inverse temperature $\beta$ (in units of the critical $\beta_c$). For each of the sets we first performed a single RG step, i.e. we trained the $\Lambda$- and $\Theta$-RBMs. The next task is to establish a measure of the effective temperature of the system, or a “thermometer”. This is important, as it can be used to estimate the effective temperature in subsequent RG steps completely intrinsically, i.e. without any need to rely on knowledge of the temperature dependence of some quantity obtained by other analytical or numerical methods (for a generic model, such knowledge may simply not be available). This can be done in a number of ways. As we mentioned in the previous section, the $\Theta$-RBMs can be used to compute the spin-spin correlations. Since the correlations in the system have a clear temperature dependence, we can use the samples in the first step to establish an empirical curve $C(\beta)$, where $C$ is the chosen measure of correlations. In subsequent steps of the RG procedure $C$ can be evaluated either from the MC samples or from the $\Theta$-RBMs, and the effective temperature can be obtained by inverting the empirical curve. Alternatively, the value of the mutual information (or, more precisely, of the proxy $A_\Lambda$) captured by the trained $\Lambda$-RBM can be used for this purpose. At $\beta=0$ this value should be identically zero – the system is totally uncorrelated – while at large $\beta$ it is bounded from above by the total entropy of the variables in the visible area. In general, MI is a monotonic function of $\beta$, since any form of correlations decreases with temperature. Both methods give similar results. In Fig. 8 we plot the value of $A_\Lambda$ as a function of $\beta$ for the 2D Ising model. In the parameter regime we considered we fit a linear relation. The validity of using this simple fit is, of course, restricted to the range of values of $\beta$ for which it was performed. In general, as discussed above, the dependence is not linear, but a non-linear fit can be used in exactly the same way.
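The calibrate-then-invert logic of the thermometer can be sketched under the linear-fit assumption (the numbers below are mock calibration data, not the paper's measurements; all names are illustrative):

```python
import numpy as np

# Mock calibration data: the MI proxy measured on the original samples at
# several known inverse temperatures (illustrative values only).
betas = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
A_vals = np.array([0.30, 0.45, 0.60, 0.75, 0.90])

# Linear fit A(beta) ~ slope * beta + intercept, as in the text:
slope, intercept = np.polyfit(betas, A_vals, 1)

def effective_beta(A_measured):
    """Invert the calibration curve; valid only inside the fitted beta range."""
    return (A_measured - intercept) / slope
```

In subsequent RG steps one measures $A_\Lambda$ on the coarse-grained configurations and reads off the effective $\beta$ from `effective_beta`; a non-linear fit would be inverted in exactly the same way.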

Having constructed the thermometer we are now equipped to perform subsequent RG steps, as described in the previous section. In Fig. 5 (in the main text) we show the effective temperature of the system as a function of the RG step, for several initial values of $\beta$. We performed up to four RG steps (from the original $128\times128$ system down to successively smaller ones). There are a number of important observations to be made:

• The effective $\beta$ of systems with initial $\beta<\beta_c$ consistently decreases with successive RG steps, while it increases for initial $\beta>\beta_c$. For initial $\beta=\beta_c$ we see that the subsequent values of $\beta$ are constant within accuracy. This is a signature of a divergent flow around the unstable fixed point $\beta_c$.

• Examining the directions of the flow allows us to establish the existence and nature (stable/unstable) of the critical point, and to find its position. In fact, with the numerical data for our system, $\beta_c$ is recovered within accuracy.

• The flow data can be used to extract the values of the critical exponents (see the discussion below).

The two “thermometers” we used, based on mutual information and on correlations, provide us with two inverse-temperature estimates, which depend on the scale $l$ at which the measurement was carried out and on the microscopic inverse temperature $\beta$. In an infinite system these scale-dependent estimates will reflect the renormalization group flow of the temperature, namely $\beta^*(l)-\beta_c\propto(\beta-\beta_c)(l/a)^{1/\nu}$, with $\beta$ being the microscopic inverse temperature and $\nu$ the correlation-length exponent. In a finite system this behaviour is augmented by some scaling function $f$, such that $\beta^*(l;L)-\beta_c\propto(\beta-\beta_c)(l/a)^{1/\nu}f(l/L)$, with $f$ to be determined and $a$ being the lattice spacing. The finite-size scaling hypothesis (see for instance Cardy (1988)) allows us to simplify by taking $f$ to be a function of $(L/l)^{1/\nu}$ only. We next define $\tilde f_0$, absorbing the prefactors, and obtain

$$\frac{\beta^*(l;L)-\beta_c}{\beta-\beta_c}=\tilde f_0\big((L/l)^{1/\nu}\big).\qquad(16)$$

Following this normalization prescription, all the data points obtained for different values of $\beta$ and $l$ should collapse onto a single curve defined by $\tilde f_0$. The result of this exercise is shown in Fig. 9, where we plot the data for four RG steps (for initial $\beta<\beta_c$, i.e. on the paramagnetic side of the transition). The slope of the curves is consistent with the exact analytical result for the exponent, i.e. $\nu=1$.
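The slope extraction can be illustrated on synthetic data generated to obey the scaling form exactly (mock values, ours, with $\nu=1$ built in; not the paper's data):

```python
import numpy as np

# Synthetic flow data obeying y = C * (L/l)^(1/nu) exactly, with nu = 1
# (the exact 2D Ising value); purely illustrative.
L = 128.0
ls = np.array([2.0, 4.0, 8.0, 16.0])   # coarse-graining scales l
y = 0.05 * (L / ls) ** 1.0             # mock (beta* - beta_c)/(beta - beta_c)

# On a log-log plot the slope of the collapsed curve is 1/nu:
slope, _ = np.polyfit(np.log(L / ls), np.log(y), 1)
nu_est = 1.0 / slope
```

With real data the points carry MC noise and `nu_est` is only consistent with the exact exponent within error bars, as stated above.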

We note here that the errors can be reduced by increasing the number of MC samples (5000 per value of $\beta$ in our simulations) and especially by considering a larger initial system size ($128\times128$ is the limit under reasonable time constraints for our Python code).

## Relation of Mutual Information RG procedure to conventional RG schemes

Here we provide an intuitive theoretical argument elucidating the connection of our mutual information based approach to more standard treatments of real-space RG. Various information-theoretic approaches to RG were also advocated or investigated in Refs. Gaite and O’Connor (1996); Preskill (2000); Apenko (2012); Machta et al. (2013); Beny and Osborne (2015). Before defining our explicit criteria for identifying relevant degrees of freedom, let us first briefly rephrase the conventional RG procedure in probabilistic terms.

Consider then a physical system, represented by a set of local degrees of freedom (or random variables) $X=\{x_i\}$ and governed by a Hamiltonian energy function $H(X)$. The equilibrium probability distribution is of Boltzmann form: $P(X)\propto e^{-\beta H}$, with $\beta$ the inverse temperature. Next we consider a new and smaller set of degrees of freedom $\{h_i\}$, i.e. the coarse-grained variables, whose dependence on $X$ is given by a conditional probability $P_\Lambda(h_i|X)$, where $\Lambda$ are variational internal parameters to be specified, and each $h_i$ depends on some localized set of the $x_j$. The RG transformation in this language consists of finding the effective Hamiltonian of the coarse-grained degrees of freedom by marginalizing over (or integrating out, in physical terms) the degrees of freedom $X$ in the joint probability distribution of $X$ and $\{h_i\}$:

$$\tilde H=-\log\big(Z[\{h_i\}]\big),\qquad Z[\{h_i\}]=\sum_X\prod_i P_\Lambda(h_i|X)\,e^{-\beta H}.\qquad(17)$$

Using the fact that all conditional probabilities are normalized, we have that $\sum_{\{h_i\}}Z[\{h_i\}]=Z$, i.e. the new partition function has the exact same free energy as the original one. Notably, it also contains all the information required to evaluate certain expectation values in an exact fashion. Consider for instance the average of $h_jh_k$ under $\tilde Z$. It can be re-expressed as an average over the original degrees of freedom as follows:

$$\langle h_jh_k\rangle_{\tilde Z}=\tilde Z^{-1}\sum_{\{h_i\}}\sum_X h_jh_k\prod_i P_\Lambda(h_i|X)\,e^{-\beta H}=Z^{-1}\sum_X\Big[\sum_{h_j}h_jP_\Lambda(h_j|X)\Big]\Big[\sum_{h_k}h_kP_\Lambda(h_k|X)\Big]e^{-\beta H},\qquad(18)$$

where we used the fact that the overall conditional probability of $\{h_i\}$ given $X$ is a product of the $P_\Lambda(h_i|X)$, i.e. conditional independence, and in the second line we explicitly performed the trivial summations over all the other hiddens. The summations over the remaining hiddens $h_j$ and $h_k$ can now be carried out, yielding expectation values of $h_j$ and $h_k$ with $X$ held fixed, which we denote by $\bar h_j(X)$ and $\bar h_k(X)$. These quantities are local functions of $X$, whose expectation value can be calculated exactly given $P(X)$. The real-space RG procedure thus performed is therefore an exact technique, which, in particular, always preserves critical behaviour, regardless of what $P_\Lambda$ is.
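The normalization statement $\sum_{\{h_i\}}Z[\{h_i\}]=Z$ can be checked on a tiny example (our illustrative sketch; a 4-spin periodic chain with a deterministic decimation rule $h_i=x_{2i}$, which is a valid, normalized conditional probability):

```python
import itertools
import numpy as np

K, N = 0.4, 4

def boltzmann(x):
    """Unnormalized Boltzmann weight of a periodic 4-spin Ising chain."""
    return np.exp(K * sum(x[i] * x[(i + 1) % N] for i in range(N)))

# Original partition function by exact enumeration:
Z = sum(boltzmann(x) for x in itertools.product([-1, 1], repeat=N))

# Z[H] for each configuration of the two hiddens (h0, h1) = (x0, x2):
Z_H = {h: sum(boltzmann(x) for x in itertools.product([-1, 1], repeat=N)
              if (x[0], x[2]) == h)
       for h in itertools.product([-1, 1], repeat=2)}
```

Since the decimation rule partitions the original configurations, the coarse-grained partition function sums back to $Z$, and the free energy is preserved exactly.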

Notwithstanding, the usefulness (and practicality) of the RG procedure depends on choosing $P_\Lambda$ (or equivalently the relevant degrees of freedom) such that the effective Hamiltonian $\tilde H$ remains as short-range as possible and (if $h$ is continuous) the fluctuations of $h$ are as small as possible, so that high powers of $h$ are not needed. More formally, we demand that the Taylor expansion of $\tilde H$ in the $h_i$:

$$\tilde H=\sum_i f_ih_i+\sum_{\langle ij\rangle}f_{ij}h_ih_j+\sum_{\langle\langle ij\rangle\rangle}f_{ij}h_ih_j+\sum_{\langle ijk\rangle}f_{ijk}h_ih_jh_k+\sum_{\langle\langle ijk\rangle\rangle}\dots\qquad(19)$$

is as compact as possible, i.e. it contains only short-ranged and few-body terms (the coefficients decay exponentially with distance). If the above requirements are satisfied, all the terms in $\tilde H$ beyond some finite distance can be removed while making only minor changes to the statistical properties of the system. The procedure can then be repeated recursively, granting access to increasingly long-ranged features without keeping track of all the degrees of freedom.

What constitutes a good RG scheme in the above language is intuitively clear, but hard to formalise, especially with an algorithmic goal in mind. The mutual information maximization (MIM) prescription, on the other hand, has a very precise formulation, and lends itself naturally to computational implementation. We argue that the two approaches are equivalent. Intuitively, this is because maximizing the mutual information encourages the hiddens to couple to those combinations of the visibles which are most strongly correlated with the environment. In field-theory terms these combinations are the most relevant operators in the theory, which are nothing else than the basic fields appearing in the low-energy theory. Clearly, when performing an RG scheme one needs to keep track of precisely those fields. The numerical results reported in the main text provide strong empirical evidence for this equivalence. Below we provide two additional analytical insights which reinforce this assertion: (1) for the one-dimensional Ising model our approach coincides with the standard “decimation” RG approach, which is known to be optimal; (2) for 1D (and quasi-1D) systems with nearest-neighbour interactions, saturating the mutual information implies an effective Hamiltonian with nearest-neighbour interactions.

Lastly, we point out that mutual information is invariant under any homeomorphism of the degrees of freedom in $V$ or $E$. Consequently, scrambling the degrees of freedom in $V$, or taking complicated non-linear functions of them, does not affect this procedure (i.e. it is representation-invariant, as it should be). More precisely, a filter maximizing the mutual information for the original variables and a filter maximizing it for the transformed ones define the same coarse-grained variables.

### 1. Ideal filters for the 1D Ising model

The 1D Ising model is one of the simplest statistical mechanical models; it can be solved exactly. In particular, the decimation RG prescription is known to be optimal for this model. We shall determine what the ideal filters – in terms of mutual information – are, and show that they indeed correspond to decimation.

The partition function on $N$ sites (periodic boundary conditions) is given in the transfer matrix formalism by:

$$Z=\operatorname{Tr}\big[T^N\big],\qquad T=\begin{pmatrix}e^{K}&e^{-K}\\e^{-K}&e^{K}\end{pmatrix},\qquad\text{with:}\qquad(20)$$
$$T^m=2^{m-1}\big[\cosh^{2m}(K)-\sinh^{2m}(K)\big]^{1/2}\begin{pmatrix}r^{1/2}&r^{-1/2}\\r^{-1/2}&r^{1/2}\end{pmatrix},\qquad r=\frac{\cosh^m(K)+\sinh^m(K)}{\cosh^m(K)-\sinh^m(K)}.\qquad(21)$$

The dominant eigenvector of $T$ is $|g\rangle=(1,1)^T/\sqrt{2}$, with eigenvalue $2\cosh(K)$. Note that $T^m$ is proportional to a transfer matrix with an effective coupling $K_m=\operatorname{artanh}\!\big(\tanh^m(K)\big)$, which always tends to zero at large $m$. This implies that the model is always short-range correlated.
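This statement is easy to verify numerically (our illustrative check; the coupling value is arbitrary):

```python
import numpy as np

K = 0.7
T = np.array([[np.exp(K), np.exp(-K)],
              [np.exp(-K), np.exp(K)]])

m = 5
Tm = np.linalg.matrix_power(T, m)
# Read off the effective coupling of T^m from the ratio of its entries,
# since Tm is proportional to [[exp(K_m), exp(-K_m)], ...]:
K_m = 0.5 * np.log(Tm[0, 0] / Tm[0, 1])
K_m_analytic = np.arctanh(np.tanh(K) ** m)
```

The overall prefactor of $T^m$ drops out of the ratio, so only the effective coupling survives, and it shrinks monotonically with $m$.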

Let us now consider the following geometry: a line region of size $L_V$ consisting of the spins $V$, with two buffer zones of size $L_B$ to its left and right, surrounded by an environment of size $L_E$, and then by the rest of the system, which is assumed to be much larger than the correlation length. The probabilities involved in maximizing the mutual information can now be written down explicitly:

$$P_\Theta(V)=\frac{\langle g|T|v_1\rangle\,T_{v_1,v_2}T_{v_2,v_3}\cdots T_{v_{L_V-1},v_{L_V}}\,\langle v_{L_V}|T|g\rangle}{[2\cosh(K)]^{L_V+1}},\qquad(22)$$
$$P_\Theta(V,E)=\frac{\langle g|T|e_{-L_E}\rangle\,T_{e_{-L_E},e_{-L_E+1}}\cdots T^{L_B+1}_{e_{-1},v_1}T_{v_1,v_2}\cdots T^{L_B+1}_{v_{L_V},e_1}\cdots T_{e_{L_E-1},e_{L_E}}\,\langle e_{L_E}|T|g\rangle}{[2\cosh(K)]^{L_V+2L_B+2L_E+1}},\qquad(23)$$

Using the definitions of $E_\Theta(V)$ and $E_\Theta(E,V)$ we find:

$$E_\Theta(V)=-\sum_{v_i\in V}Kv_iv_{i+1}+\text{const},\qquad(24)$$
$$E_\Theta(E,V)=-\sum_{e_i\in E}Ke_ie_{i+1}-K_{L_B}(e_{-1}v_1+v_{L_V}e_1)+E_\Theta(V)+\text{const}',\qquad(25)$$
$$\Delta E_\Theta=-\sum_{e_i\in E}Ke_ie_{i+1}-K_{L_B}(e_{-1}v_1+v_{L_V}e_1)+\text{const}'',\qquad(26)$$

where the energies are defined via $P_\Theta\propto e^{-E_\Theta}$. We now have everything needed to evaluate the MI proxy $A_\Lambda(H:E)$:

$$A_\Lambda(H:E)\approx\sum_{H,E}P_\Lambda(E,H)\,\Big\langle\sum_{e_i\in E}Ke_ie_{i+1}+K_{L_B}(e_{-1}v_1+v_{L_V}e_1)-\text{const}''\Big\rangle_H\qquad(27)$$

Let us examine this expression: any term in $\Delta E_\Theta$ which does not involve $V$ can be taken out of the average, for instance the environment coupling terms $Ke_ie_{i+1}$. Furthermore, the summation over $H$ can then be performed trivially, yielding a factor of unity. Thus the $V$-independent terms are also $H$-independent, and therefore they can be ignored, as they are irrelevant for the maximization w.r.t. $\Lambda$. We may therefore redefine $\Delta E_\Theta$ as:

$$\Delta E_\Theta=-K_{L_B}(e_{-1}v_1+v_{L_V}e_1).\qquad(28)$$

Similar arguments imply that one may redefine $E_\Theta(E,V)$ as:

$$E_\Theta(E,V)=-K_{L_B}(e_{-1}v_1+v_{L_V}e_1)+E_\Theta(V).\qquad(29)$$

The exact solution to the problem of maximization of $A_\Lambda(H:E)$ is difficult to find analytically, even for the case of a single hidden variable $h$. We may, however, evaluate and compare the three most likely scenarios:

Decimation filter: It is defined by coupling $h$ to a single visible spin only, say $v_1$. Here we assume, without loss of generality, that $h=v_1$ with probability one. Consequently:

$$\langle-\Delta E\rangle_H=K_{L_B}e_{-1}h+K_{L_B}e_1\langle v_{L_V}\rangle_{E_\Theta(V),\,v_1=h}\qquad(30)$$

where the last average is taken with respect to $E_\Theta(V)$ with the constraint $v_1=h$. For large $L_V$ this last term would be exponentially small in $L_V$ and can be neglected. We therefore obtain:

$$A_\Lambda(H:E)\approx\sum_{V,E}P(V,E)\,K_{L_B}e_{-1}v_1=K_{L_B}\langle e_{-1}v_1\rangle,\qquad(31)$$

where the last average is taken with respect to the partition function of the 1D Ising model.
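Averages of the type $\langle e_{-1}v_1\rangle$ reduce, for the open 1D Ising chain, to $\tanh^{d}(K)$ for two spins a distance $d$ apart. A small exact-enumeration sketch confirming this (ours, illustrative; the chain length and coupling are arbitrary):

```python
import itertools
import numpy as np

K, N = 0.5, 10

# All 2^N spin configurations of an open chain, with Boltzmann weights:
configs = list(itertools.product([-1, 1], repeat=N))
weights = np.array([np.exp(K * sum(s[i] * s[i + 1] for i in range(N - 1)))
                    for s in configs])
Z = weights.sum()

i, j = 2, 6  # two sites a distance 4 apart
corr = sum(w * s[i] * s[j] for s, w in zip(configs, weights)) / Z
analytic = np.tanh(K) ** abs(j - i)
```

The exponential decay $\tanh^{d}(K)$ is what makes the buffer-suppressed terms in the derivation above negligible for large separations.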

Boundary filter: It is defined by equal coupling to the two boundary spins, i.e. the coarse-grained variable $h$ is determined by majority rule from the spins $v_1$ and $v_{L_V}$ on the boundary:

$$P(h|v_1,v_{L_V})=\frac{1+h(v_1+v_{L_V})/2}{2}.\qquad(32)$$

First, we evaluate the quantity:

$$\langle-\Delta E_\Theta\rangle_H=K_{L_B}\big(e_{-1}\langle v_1\rangle_H+e_1\langle v_{L_V}\rangle_H\big).\qquad(33)$$

Neglecting the correlations between $v_1$ and $v_{L_V}$, which vanish for large $L_V$, we have $\langle v_1\rangle_H=\langle v_{L_V}\rangle_H\propto h$. Consequently we obtain:

$$\langle-\Delta E_\Theta\rangle_H=K_{L_B}\frac{(e_{-1}+e_1)h}{4}.\qquad(34)$$

Finally, we have for $A_\Lambda(H:E)$:

$$A_\Lambda(H:E)=\frac{2K_{L_B}}{4}\sum_{V,h,E}P(h|v_1,v_{L_V})P(V,E)\,e_{-1}h=\frac{2K_{L_B}}{4}\sum_{V,h,E}\frac{1+h(v_1+v_{L_V})/2}{2}\,P(V,E)\,e_{-1}h,\qquad(35)$$

yielding:

$$A_\Lambda(H:E)=\frac{K_{L_B}}{2}\langle e_{-1}v_1\rangle\qquad(36)$$

Uniform filter: It is defined by uniform coupling to all visible spins, or equivalently, $h$ is determined by majority rule on all of $V$:

$$P(h|V)=\frac{1+h\tanh\!\big(\Omega\sum_i v_i\big)}{2}\qquad(37)$$

with $\Omega\to\infty$ enforcing the majority rule. We again need to evaluate the quantity:

$$\langle-\Delta E_\Theta\rangle_H=K_{L_B}\big(e_{-1}\langle v_1\rangle_H+e_1\langle v_{L_V}\rangle_H\big),\qquad(38)$$

where the averages on the r.h.s. are now taken using $P_\Theta(V)$, but constrained to have the majority of the spins in $V$ aligned with $h$. To simplify the computations we exchange this hard constraint for an equivalent soft one: to this end we introduce a fictitious magnetic field $B$ and demand that it generates the same average magnetization as the hard constraint does. As such approximations are only valid in the thermodynamic limit, the important quantity to track here is the scaling of all quantities with $L_V$. The hard constraint induces an average magnetization proportional to the square root of the variance of the magnetization in the absence of the constraint, and so proportional to $\sqrt{L_V}$. On the other hand, $B$ induces a magnetization proportional to $BL_V$, as it couples to all spins directly. As a result, we need $B\propto 1/\sqrt{L_V}$ to reproduce the hard constraint, and we can then estimate:

$$\langle-\Delta E_\Theta\rangle_H\propto\frac{K_{L_B}}{\sqrt{L_V}}\,e_{-1}h,\qquad(39)$$

and thus:

$$A_\Lambda(H:E)\propto\frac{K_{L_B}}{\sqrt{L_V}}\sum_{V,h,E}P(h|V)P(V,E)\,e_{-1}h.\qquad(40)$$

Since the quantities being averaged are bounded by a constant, $A_\Lambda(H:E)$ vanishes in the limit of large $L_V$.

Comparing the results in the three cases we conclude that the decimation filters are favored, as they yield twice the mutual information of the boundary filters (both are superior to the uniform filter). These results easily generalize to the anti-ferromagnetic Ising model. We also confirmed them by exact numerical evaluation for small systems. Interestingly, the decimation filters are known to be optimal from the analytical point of view, as they result in an effective Hamiltonian with nearest-neighbour interactions only. Hence, the mutual information RG scheme coincides with the standard result.

### 2. MI saturation implies a nearest-neighbour Hamiltonian

Consider a 1D, or quasi-1D, system with a spatial subset $V$ of degrees of freedom separating two parts of the environment, $E_1$ and $E_2$, and the hiddens $H$ coupled to $V$ via parameters $\Lambda$ defined by the MIM prescription. Let us assume that adding further hiddens does not cause the mutual information to grow, i.e. the MI has saturated. We argue that if this is the case, the effective Hamiltonian contains no interaction between any $h_i$ and $h_j$ which couple exclusively to distinct parts of the environment, $E_1$ and $E_2$, respectively.

To this end, consider the conditional probability distribution $P(H|V|_{r_n})$, where $V|_{r_n}$ is a box-shaped area to be coarse-grained, centered around the site $r_n$ (similar to the one depicted in Fig. 7):

$$P(H|V|_{r_n})=\frac{e^{\sum_i\lambda_ih_i(r_n)O_i(V|_{r_n})}}{\prod_i 2\cosh\!\big[\lambda_iO_i(V|_{r_n})\big]},\qquad(41)$$

and where the $O_i$ are functions of the visibles in that area. The coefficients in the effective Hamiltonian, Eq. (19), now appear as cumulants with respect to $H_{\mathrm{massive}}$, given by:

$$H_{\mathrm{massive}}=H-\sum_n\sum_i\log\cosh\big(\lambda_iO_i(V|_{r_n})\big).\qquad(42)$$

These additional terms can be expanded assuming small fluctuations, yielding:

$$H_{\mathrm{massive}}\approx H+\sum_{n,i}\big(\lambda_iO_i(V|_{r_n})\big)^2,\qquad(43)$$

whilst for stronger fluctuations the mass term has a different, linear asymptotic behaviour. Crucially, both of these have the property that when $\lambda_i$ tends to infinity, the fluctuations of $O_i$ are completely suppressed. The interaction in the effective Hamiltonian between two hiddens $h_i$ and $h_j$, coupling to visible areas centered around $r_n$ and $r_m$, can now be expressed as:

 f(ni),(mi)=⟨Oi(V|rn)Oj(