# Deep Learning for Topological Invariants

Ning Sun (Institute for Advanced Study, Tsinghua University, Beijing, 100084, China)
Jinmin Yi (Institute for Advanced Study, Tsinghua University, Beijing, 100084, China; Department of Physics, Peking University, Beijing, 100871, China)
Pengfei Zhang (Institute for Advanced Study, Tsinghua University, Beijing, 100084, China)
Huitao Shen (Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA)
Hui Zhai (Institute for Advanced Study, Tsinghua University, Beijing, 100084, China; Collaborative Innovation Center of Quantum Matter, Beijing, 100084, China)
July 24, 2019
###### Abstract

In this work we design and train deep neural networks to predict topological invariants for one-dimensional four-band insulators in AIII class whose topological invariant is the winding number, and two-dimensional two-band insulators in A class whose topological invariant is the Chern number. Given Hamiltonians in the momentum space as the input, neural networks can predict topological invariants for both classes with accuracy close to or higher than 90%, even for Hamiltonians whose invariants are beyond the training data set. Despite the complexity of the neural network, we find that the output of certain intermediate hidden layers resembles either the winding angle for models in AIII class or the solid angle (Berry curvature) for models in A class, indicating that neural networks essentially capture the mathematical formula of topological invariants. Our work demonstrates the ability of neural networks to predict topological invariants for complicated models with local Hamiltonians as the only input, and offers an example that even a deep neural network is understandable.

These authors contributed equally to this work.

## I Introduction

Machine learning has achieved huge success recently in industrial applications. In particular, deep learning prevails for its performance in several different fields including image recognition and speech transcription [1; 2; 3; 4; 5; 6; 7; 8]. In terms of applications in assisting academic research, aside from analyzing experimental data in high-energy physics [10; 9] and astrophysics [14; 13; 12; 11], progress has also been made on recognizing phases of matter [40; 19; 18; 17; 44; 43; 42; 41; 25; 24; 23; 26; 22; 21; 20; 15; 16; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], accelerating Monte Carlo simulations [47; 45; 46; 50; 48; 51; 49], and extracting relations between many-body wavefunctions, entanglement and neural networks [53; 52; 56; 54; 57; 55]. Among these advances, one challenging and interesting problem is to extract global topological features from local inputs, for instance by training a neural network with supervision, and to understand how the neural network works.

In Ref. [15], a convolutional neural network is trained to predict the topological invariant for band insulators with high accuracy. The highlights of that work are two-fold. First, only local Hamiltonians are used as the input and no human knowledge is used as a prior. Second, by analyzing the neural network after training, it is found that the formula fitted by the neural network is precisely the same as the mathematical formula for the winding number. However, the limitations of Ref. [15] are also two-fold. First, only one-dimensional models in AIII class, whose topological invariants are the winding numbers, are considered. Second, only two-band models are considered.

In this work, we extend the realm of the previous work to more sophisticated scenarios, including (i) one-dimensional models in AIII class with more than two bands and (ii) two-dimensional two-band models in A class. We find that in both cases the neural network can predict topological invariants with high accuracy, even for testing Hamiltonians whose topological numbers are beyond those in the training set. As in Ref. [15], we use local Hamiltonians as the input and do not feature engineer the input data with any human knowledge. Also, the design of the neural network architecture follows general principles, without specifically making use of prior understanding of topological invariants. The only knowledge we explicitly exploit about these models is the translational symmetry, as we choose convolutional layers as the building blocks of our neural networks. Convolutional layers respect the translational symmetry by construction and reduce the redundancy in the parameterization [58].

Learning topological invariants of these two models is significantly harder than that in Ref. [15], as the mathematical formulas of the topological invariants in these models are intrinsically more complicated (see Eq. (2) and Eq. (7)) and the sizes of the input data are much larger. Consequently, to guarantee a good performance, the neural networks used in this work are much deeper than the one used in Ref. [15]. As shown in Fig. 1, there are more than nine hidden layers in each neural network. Because the neural network becomes more complicated, it also becomes more difficult to analyze how it works. Nevertheless, we show that the intermediate output of certain hidden layers is, for case (i), the local winding angle, and for case (ii), the local Berry curvature — both are the integrands in the mathematical formula of the corresponding topological invariant. In this way, we demonstrate that the complicated function fitted by the neural network is essentially the same as the mathematical formula for the topological invariant.

The paper is organized as follows. In Section II we train a neural network to learn the winding number of one-dimensional four-band models in AIII class. After introducing the model Hamiltonian and the mathematical formula of the winding number, we present our neural network in detail and report its performance. We then analyze the mechanism of why the neural network works. We follow this routine in Section III and show the result for two-dimensional two-band models in A class.

## II Winding Number with Multiple Bands

### II.1 Model

Consider a $2n$-band model in one dimension and introduce $\Psi_k = (c_{k,1}, \ldots, c_{k,2n})^T$, where $c^\dagger_{k,a}$ is the creation operator for a fermion on orbital $a$ with momentum $k$. A general one-dimensional Hamiltonian in AIII class can be written as $\mathcal{H} = \sum_k \Psi^\dagger_k H(k) \Psi_k$, where

$$H(k) = \begin{pmatrix} 0 & D(k) \\ D^\dagger(k) & 0 \end{pmatrix}. \qquad (1)$$

Without loss of generality, here $D(k)$ is an $n$-dimensional unitary matrix [59] and $D(k+2\pi) = D(k)$. The topological classification of band Hamiltonians in AIII class is the group $\mathbb{Z}$ [60]. When the model is half-filled, the topological invariant is computed by

$$w = \frac{1}{2\pi}\int_{-\pi}^{\pi} dk\, \mathrm{Tr}\left[D^{-1}(k)\, i\partial_k D(k)\right]. \qquad (2)$$

Since $D(k)$ is unitary, it can be diagonalized as $D(k) = U(k)\Lambda(k)U^\dagger(k)$, where $\Lambda(k)$ is an $n$-dimensional diagonal matrix with diagonal elements $e^{i\theta_a(k)}$. Formally, $D(k)$ can also be uniquely decomposed as $D(k) = e^{i\alpha(k)}V(k)$, where $V(k)$ is an $n$-dimensional unitary matrix with determinant 1 and $\alpha(k)$ is the winding angle at momentum $k$.

To be concrete, we restrict our discussion to $n = 2$, which corresponds to four-band models. The winding number formula of Eq. (2) can then be reduced to

$$w = \frac{1}{\pi}\int_{-\pi}^{\pi} dk\, \partial_k \alpha(k), \qquad (3)$$

where $e^{2i\alpha(k)} = \det D(k)$, so that $\alpha(k)$ is determined modulo $\pi$. The discretized version of the winding number formula is

$$w = \frac{1}{\pi}\sum_{l=1}^{L}\Delta\alpha(k_l) = \frac{1}{\pi}\sum_{l=1}^{L}\left\{\left[\alpha(k_{l+1}) - \alpha(k_l)\right] \bmod \pi\right\}, \qquad (4)$$

where $k_l = 2\pi l/L$ with $l = 1, \ldots, L$ are distributed uniformly in the Brillouin zone, $\alpha(k_{L+1}) = \alpha(k_1)$, and the mod operation folds each difference into $(-\pi/2, \pi/2]$.
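Labels of this kind can be generated by evaluating Eq. (4) directly. Below is a minimal pure-Python sketch (the function names and the sampling convention are ours, not the paper's): since the decomposition in Sec. II.1 gives $\det D(k) = e^{2i\alpha(k)}$ for $n = 2$, summing the phase increments of $\det D(k)$ folded into $(-\pi, \pi]$ realizes the mod-$\pi$ prescription on $\alpha$.

```python
import cmath
import math

def det2(D):
    # determinant of a 2x2 complex matrix stored as nested lists
    return D[0][0] * D[1][1] - D[0][1] * D[1][0]

def winding_number(D_samples):
    """Winding number from D(k) sampled at k_l = 2*pi*l/L, l = 0..L-1.

    For n = 2, det D(k) = exp(2i*alpha(k)), so the winding of det D(k)
    around the unit circle equals w of Eq. (4).
    """
    L = len(D_samples)
    total = 0.0
    for l in range(L):
        z1 = det2(D_samples[l])
        z2 = det2(D_samples[(l + 1) % L])  # periodic Brillouin zone
        total += cmath.phase(z2 / z1)      # phase increment in (-pi, pi]
    return round(total / (2 * math.pi))

# example: D(k) = diag(e^{ik}, 1) should give winding number 1
L = 32
samples = [[[cmath.exp(2j * math.pi * l / L), 0], [0, 1]] for l in range(L)]
print(winding_number(samples))  # -> 1
```

Labels generated this way agree with Eq. (3) as long as $L$ is large enough that $\alpha(k)$ changes by less than $\pi/2$ between neighboring momenta.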

### II.2 Neural Network Performance

Since the neural network can only take discrete input, we first discretize the entire Brillouin zone uniformly by choosing $k_l = 2\pi l/L$. At each point, since the Hamiltonian is determined by the matrix $D(k)$, we denote its four elements as $D_{11}(k)$, $D_{12}(k)$, $D_{21}(k)$ and $D_{22}(k)$. The input data is therefore an $8\times(L+1)$-dimensional matrix of the following form

$$\begin{pmatrix}
\mathrm{Re}[D_{11}(0)] & \mathrm{Re}[D_{11}(2\pi/L)] & \cdots & \mathrm{Re}[D_{11}(2\pi)] \\
\mathrm{Im}[D_{11}(0)] & \mathrm{Im}[D_{11}(2\pi/L)] & \cdots & \mathrm{Im}[D_{11}(2\pi)] \\
\mathrm{Re}[D_{12}(0)] & \mathrm{Re}[D_{12}(2\pi/L)] & \cdots & \mathrm{Re}[D_{12}(2\pi)] \\
\mathrm{Im}[D_{12}(0)] & \mathrm{Im}[D_{12}(2\pi/L)] & \cdots & \mathrm{Im}[D_{12}(2\pi)] \\
\mathrm{Re}[D_{21}(0)] & \mathrm{Re}[D_{21}(2\pi/L)] & \cdots & \mathrm{Re}[D_{21}(2\pi)] \\
\mathrm{Im}[D_{21}(0)] & \mathrm{Im}[D_{21}(2\pi/L)] & \cdots & \mathrm{Im}[D_{21}(2\pi)] \\
\mathrm{Re}[D_{22}(0)] & \mathrm{Re}[D_{22}(2\pi/L)] & \cdots & \mathrm{Re}[D_{22}(2\pi)] \\
\mathrm{Im}[D_{22}(0)] & \mathrm{Im}[D_{22}(2\pi/L)] & \cdots & \mathrm{Im}[D_{22}(2\pi)]
\end{pmatrix} \qquad (5)$$

In the following, the value of $L$ is fixed.
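The flattening of Eq. (5) is mechanical. A small sketch (assuming, as above, that $D(k)$ is stored as a list of $2\times 2$ nested complex lists; the helper name is ours):

```python
def to_input_matrix(D_samples):
    """Stack discretized D(k) into the 8-row real matrix of Eq. (5).

    Row order follows Eq. (5): Re and Im of D11, D12, D21, D22;
    column l holds the values at the l-th momentum point.
    """
    rows = []
    for a in range(2):
        for b in range(2):
            rows.append([D[a][b].real for D in D_samples])
            rows.append([D[a][b].imag for D in D_samples])
    return rows

# example: four samples of the identity matrix
mat = to_input_matrix([[[1 + 0j, 0j], [0j, 1 + 0j]] for _ in range(4)])
print(len(mat), len(mat[0]))  # -> 8 4
```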

The structure of the deep neural network is shown in Fig. 1(a). It first contains several convolutional layers with kernel sizes marked in the figure, which are followed by two fully-connected layers leading to the final output. In each layer, a linear mapping is followed by a nonlinear ReLU function. We feed the neural network with a set of discretized training Hamiltonians with known winding numbers for supervised training.

To compute the accuracy, the final winding number is taken as the closest integer to the numerical value predicted by the network. A prediction is considered correct if the rounded integer matches the value computed by Eq. (4). The accuracy of this neural network is shown in TABLE 1. After training, the neural network achieves a prediction accuracy of 96% on a separate test data set of Hamiltonians whose winding numbers also appear in the training set, and an accuracy of more than 90% on Hamiltonians with winding numbers beyond the training set. The numerical values of the winding number predicted for each Hamiltonian in the test set are shown in Fig. 2.

### II.3 Neural Network Analysis

To see why the neural network excels at predicting the topological winding number, it is illuminating to check whether the complicated function fitted by the neural network is consistent with the mathematical formula Eq. (4) introduced above. We open up the neural network at the layers H1 and H2 marked in Fig. 1 by feeding test Hamiltonians into the neural network and plotting the intermediate outputs at H1 and H2 separately. Each row of the output at H1 can be interpreted as a vector $\alpha'(k_l)$, and each row of the output at H2 can be interpreted as a vector $\Delta\alpha'(k_l)$; they respectively have the same dimension as the discretized $\alpha(k)$ and $\Delta\alpha(k)$ defined in Sec. II.1. On the other hand, the exact values of $\alpha(k)$ and $\Delta\alpha(k)$ for the corresponding Hamiltonian can also be obtained directly from the definitions in Sec. II.1. In Fig. 3(a) we plot $\alpha'_l$ against the exact $\alpha(k_l)$, where $\alpha'_l$ is the $l$-th component of a selected row of H1, for various $l$ and input Hamiltonians. The plot for H2 in Fig. 3(b) is similar, where $\Delta\alpha'_l$ is plotted against the exact $\Delta\alpha(k_l)$.

As can be seen in Fig. 3(a), the intermediate output at H1 is approximately piecewise linear in the exact $\alpha(k)$, implying that this row of neurons successfully extracts the winding angle within some range. Other rows of neurons extract winding angles in different ranges. In Fig. 3(b), the intermediate output at H2 is approximately linear in the exact $\Delta\alpha(k)$ within some range, and each row of neurons functions as an extractor for a different range of $\Delta\alpha$. Although these ranges may overlap with each other and the slopes of the linear relations may differ, a linear combination of these extractors with correct coefficients in the following fully-connected layer can easily produce a function proportional to $\Delta\alpha$ over the full range. In this way, the winding number is calculated in essentially the same way as with the mathematical formula Eq. (4).

As emphasized in Sec. II.1, it is important to notice that the input Hamiltonian can be written as the product of a phase factor $e^{i\alpha(k)}$ and an SU(2) matrix $V(k)$. The SU(2) matrix does not play any role in determining the winding number; only the phase factor matters. It is quite impressive that the neural network successfully distills the phase factor from the irrelevant part.

## III Chern Number in Two Dimensions

### III.1 Model

Consider a two-band model in two dimensions and introduce $\Psi_{\mathbf{k}} = (c_{\mathbf{k},1}, c_{\mathbf{k},2})^T$, where $c^\dagger_{\mathbf{k},a}$ is the creation operator for a fermion on orbital $a$ with momentum $\mathbf{k}$. A general two-dimensional two-band Hamiltonian in A class can be written as $\mathcal{H} = \sum_{\mathbf{k}} \Psi^\dagger_{\mathbf{k}} H(\mathbf{k}) \Psi_{\mathbf{k}}$, where

$$H(\mathbf{k}) = \mathbf{h}(\mathbf{k})\cdot\boldsymbol{\sigma} = h_x(\mathbf{k})\sigma_x + h_y(\mathbf{k})\sigma_y + h_z(\mathbf{k})\sigma_z. \qquad (6)$$

Here $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ is the vector of Pauli matrices. Without loss of generality, we can take $|\mathbf{h}(\mathbf{k})| = 1$ as the normalization. (This is similar to taking $D(k)$ as a unitary matrix in the previous case of the winding number: we can always take the flat-band approximation for an insulator without changing its band topology.) In two dimensions, the Chern number can be computed as

$$C = \frac{1}{2\pi}\int_{T^2} d^2k\, F_{xy}(\mathbf{k}), \qquad (7)$$

where $T^2$ is the torus of the Brillouin zone and

$$A_\mu(\mathbf{k}) = i\langle u(\mathbf{k})|\partial_\mu u(\mathbf{k})\rangle, \qquad F_{\mu\nu}(\mathbf{k}) = \partial_\mu A_\nu - \partial_\nu A_\mu. \qquad (8)$$

Here we assume the model is half-filled, so that $|u(\mathbf{k})\rangle$ is the energy eigenstate with the lower energy $-|\mathbf{h}(\mathbf{k})|$. The integrand in Eq. (7) is then the Berry curvature of the lower band. For discretized lattices, the Berry curvature and the Chern number can be defined through the Wilson-loop approach, as elaborated in the Appendix.

### III.2 Neural Network Performance

The input data are Hamiltonians in the discretized Brillouin zone, i.e., three $(L+1)\times(L+1)$ matrices $H_\mu$ with $\mu = x, y, z$, stacked into a tensor, with

$$H_\mu = \begin{pmatrix}
h_\mu(0,0) & h_\mu(0, 2\pi/L) & \cdots & h_\mu(0, 2\pi) \\
h_\mu(2\pi/L, 0) & h_\mu(2\pi/L, 2\pi/L) & \cdots & h_\mu(2\pi/L, 2\pi) \\
\vdots & \vdots & \ddots & \vdots \\
h_\mu(2\pi, 0) & h_\mu(2\pi, 2\pi/L) & \cdots & h_\mu(2\pi, 2\pi)
\end{pmatrix}. \qquad (9)$$

The corresponding Chern numbers are calculated using the method presented in the Appendix. In the following, the value of $L$ is fixed.

The structure of the neural network is shown in Fig. 1(b); it is similar to that used for the winding number. We feed the neural network with randomly generated Hamiltonians whose Chern numbers are limited to a small range. The accuracy here is computed as before, by rounding the final output of the network to the closest integer. After training, the neural network achieves high accuracy on test Hamiltonians with Chern numbers inside the training range, and maintains good accuracy on Hamiltonians with Chern numbers beyond it. These results are shown in Fig. 4 and are summarized in TABLE 2.

### III.3 Neural Network Analysis

We feed the neural network with a Hamiltonian in the test data set and plot the intermediate output of the last convolutional layer (marked by H3 in Fig. 1(b)) in Fig. 5(b-d). The output consists of three layers of matrices, shown respectively in Fig. 5(b), (c) and (d). They should be compared with the exact Berry curvature for the corresponding Hamiltonian shown in Fig. 5(a). Since the intermediate output is non-negative due to the nature of the ReLU function, while the Berry curvature is generally positive in some regions and negative in others, the intermediate output reproduces the positive part of the Berry curvature in one layer (Fig. 5(b)) and the negative part in another (Fig. 5(c)). The remaining third layer is almost unresponsive (Fig. 5(d)). This result shows that the neural network computes the topological invariant by first computing local Berry curvatures in momentum space and then summing them, which is essentially the same as Eq. (7).

## IV Summary

In summary, we have trained deep neural networks to predict the winding number of one-dimensional four-band models in AIII class and the Chern number of two-dimensional two-band models in A class. In addition to the high prediction accuracy after training, it is understood that the deep neural networks essentially fit the mathematical formulas for both topological invariants. In the first case, the network successfully distills the phase factors of the Hamiltonians between two successive momenta and discards the degrees of freedom that are redundant in determining the topology. In the second case, the network successfully extracts the Berry curvature in momentum space. Our work provides an explicit example that even a complicated deep neural network can be understood. Our work can be further combined with ab initio calculations, and paves the way to the direct prediction of topological properties of real materials using machine learning.

## Appendix A Chern number in discrete spaces

The continuous versions of the Chern number and the Berry curvature are defined in Eq. (8) of the main text. To introduce the discrete version of the Chern number, it is convenient to first define the Berry curvature in discrete spaces [61]. The Chern number is then the summation of the Berry curvature over the discretized space.

The definition of the Berry curvature and the Chern number in discrete spaces, and the procedure for computing them are outlined as follows.

1. Discretize the two-dimensional parameter space into $L\times L$ sites. With periodic boundary conditions identifying sites across the boundary, there are $L\times L$ plaquettes in total. In our setting, sites are labeled as $\mathbf{k}_{ij} = (2\pi i/L, 2\pi j/L)$. For a uniform discretization, the area of each plaquette is $s = \Delta k_x \Delta k_y$, where $\Delta k_x$ and $\Delta k_y$ are the distances between neighboring sites along $k_x$ and $k_y$ respectively.

2. At each site $\mathbf{k}$ in the discretized two-dimensional parameter space, diagonalize the Hamiltonian as $H(\mathbf{k}) = V(\mathbf{k})E(\mathbf{k})V^\dagger(\mathbf{k})$; the $n$-th column of $V(\mathbf{k})$ is the eigenstate of the $n$-th band, and $E(\mathbf{k})$ is a diagonal matrix with its diagonal elements the eigenenergies of each band.

3. The four vertices of each plaquette form an ordered loop, called the Wilson loop.

(a). Compute the ordered inner products of the eigenstates along the ordered loop of each plaquette. Labeling the four vertices of the plaquette $\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3, \mathbf{k}_4$ in order, define

$$U_{12} = V^\dagger(\mathbf{k}_2)V(\mathbf{k}_1), \quad U_{23} = V^\dagger(\mathbf{k}_3)V(\mathbf{k}_2), \quad U_{34} = V^\dagger(\mathbf{k}_4)V(\mathbf{k}_3), \quad U_{41} = V^\dagger(\mathbf{k}_1)V(\mathbf{k}_4).$$

(b). Define $\tilde{U}_{ab} = \mathcal{D}[U_{ab}]$, where $\mathcal{D}[\cdot]$ means to extract the diagonal elements and construct a diagonal matrix. That is, $(\mathcal{D}[U])_{mn} = U_{mn}\,\delta_{mn}$.

(c). Define $T_{\mathrm{loop}}(\mathbf{k}) = \tilde{U}_{41}\tilde{U}_{34}\tilde{U}_{23}\tilde{U}_{12}$, which encodes the (non-Abelian) Berry curvature at the plaquette labeled $\mathbf{k}$. Define $\theta_n(\mathbf{k}) = -i\log T^{(nn)}_{\mathrm{loop}}(\mathbf{k})$ and the Berry curvature of the $n$-th band

$$F^{(n)}_{xy}(\mathbf{k}) = \theta_n(\mathbf{k})/s(\mathbf{k}). \qquad (10)$$

4. The Chern number is the summation of the Berry flux through all plaquettes. Define $c_n$ as the Chern number of the $n$-th band:

$$c_n = \frac{1}{2\pi}\sum_{i=1}^{L}\sum_{j=1}^{L}\theta_n(\mathbf{k}_{ij}) = \frac{1}{2\pi}\sum_{i=1}^{L}\sum_{j=1}^{L}\left[-i\log T^{(nn)}_{\mathrm{loop}}(\mathbf{k}_{ij})\right]. \qquad (11)$$

It can be verified that the Chern number defined above is quantized and gauge invariant. For a model defined in continuous space whose Chern number is computed only on a discrete set of points, Eq. (11) gives the same result as Eq. (7) provided the discretization is dense enough. Hence Eqs. (10) and (11) can be seen as generalizations of the Berry curvature and the Chern number to discrete spaces.
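For the two-band model of Eq. (6), only the lower band enters, and each $U_{ab}$ reduces to a single complex overlap, so the four steps above fit in a few lines. Below is a pure-Python sketch (the function names and grid size are ours; the `qwz` test Hamiltonian is the standard Qi-Wu-Zhang model, used here only as an illustration and not necessarily the model used to generate the paper's data):

```python
import cmath
import math

def lower_eigvec(hx, hy, hz):
    """Normalized eigenvector of h.sigma with eigenvalue -|h| (lower band)."""
    h = math.sqrt(hx * hx + hy * hy + hz * hz)
    norm2 = 2.0 * h * (h - hz)
    if norm2 < 1e-24:            # north pole hz = +|h|: eigenvector is (0, 1)
        return (0j, 1 + 0j)
    n = math.sqrt(norm2)
    return ((hz - h) / n + 0j, (hx + 1j * hy) / n)

def chern_number(hfunc, L=30):
    """Wilson-loop Chern number of the lower band on an L x L grid (steps 1-4)."""
    ks = [2 * math.pi * i / L for i in range(L)]
    V = [[lower_eigvec(*hfunc(kx, ky)) for ky in ks] for kx in ks]

    def link(u, v):              # overlap <u|v>, the one-band version of U_ab
        return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]

    total = 0.0
    for i in range(L):
        for j in range(L):
            v1 = V[i][j]
            v2 = V[(i + 1) % L][j]
            v3 = V[(i + 1) % L][(j + 1) % L]
            v4 = V[i][(j + 1) % L]
            # theta(k) = -i log of the Wilson loop, on the principal branch
            loop = link(v1, v2) * link(v2, v3) * link(v3, v4) * link(v4, v1)
            total += cmath.phase(loop)
    return round(total / (2 * math.pi))

def qwz(m):
    """Qi-Wu-Zhang model: |C| = 1 for 0 < |m| < 2, C = 0 for |m| > 2."""
    return lambda kx, ky: (math.sin(kx), math.sin(ky),
                           m + math.cos(kx) + math.cos(ky))

print(abs(chern_number(qwz(1.0))), chern_number(qwz(3.0)))  # -> 1 0
```

The Wilson-loop construction is gauge invariant, so the particular eigenvector gauge chosen in `lower_eigvec` (singular only at the north pole, where the guard returns the exact eigenvector) does not affect the result.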

## References
