# An update on the BQCD Hybrid Monte Carlo program

Taylor Ryan Haar¹, Yoshifumi Nakamura², and Hinnerk Stüben³ (speaker; e-mail: hinnerk.stueben@uni-hamburg.de)
###### Abstract

We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modifications for computing matrix elements via the Feynman-Hellmann theorem, additional trace measurements (e.g. for chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance-critical parts employing SIMD.

¹CSSM, Department of Physics, The University of Adelaide, Adelaide, SA 5005, Australia

²RIKEN Advanced Institute for Computational Science, Kobe, Hyogo 650-0047, Japan

³Universität Hamburg, Regionales Rechenzentrum, 20146 Hamburg, Germany

## 1 Introduction

BQCD is a Hybrid Monte Carlo program for simulating lattice QCD with dynamical Wilson fermions. It was first published at Lattice 2010 [1] and has been used by several groups: the QCDSF-UKQCD collaboration [2, 3, 4, 5, 6, 7], CSSM [6, 7, 8, 9], Japanese finite density [10, 11] and finite temperature [12] projects, and the RQCD collaboration [13].

Here we report on extensions and optimizations that have been made since then and give an update on compute performance. The code and a manual can be downloaded from [14]. New features of the program are:

• Actions: hopping term with chemical potential, clover-improved Wilson action plus a CPT-breaking term, QCD+QED, QCD+Axion. See section 2.

• Algorithms: polynomial filtering, a generalized multiscale integration scheme, truncated RHMC, Zolotarev approximation. See section 4.

• Compute performance optimizations: explicit vectorization with SIMD intrinsics, improvement of MPI communication. See section 5.

## 2 Actions

### 2.1 Gauge actions

Implemented are the Wilson gauge action and an improved gauge action, see [1].

### 2.2 Fermion actions

At the time of [1] the program could simulate the standard Wilson fermion action, clover-improved Wilson fermions, and the SLiNC fermion action [4]. The new version can also simulate:

• the hopping term with chemical potential [10, 11]

$$S_F = \sum_x \Big\{ \bar\psi(x)\psi(x) - \kappa \sum_{i=1}^{3} \big[ \bar\psi(x)\, U_i^\dagger(x-\hat{i})\,(1+\gamma_i)\,\psi(x-\hat{i}) + \bar\psi(x)\, U_i(x)\,(1-\gamma_i)\,\psi(x+\hat{i}) \big] - \kappa \big[ \bar\psi(x)\, U_4^\dagger(x-\hat{4})\,(1+\gamma_4)\, e^{-\mu}\,\psi(x-\hat{4}) + \bar\psi(x)\, U_4(x)\,(1-\gamma_4)\, e^{\mu}\,\psi(x+\hat{4}) \big] \Big\} \qquad (1)$$
• the clover-improved Wilson action plus a CPT-breaking term with coefficient $\lambda$ and a matrix $\Gamma$ [6, 7]

$$S_F = S_F^{\rm clover} + \lambda \sum_x \bar\psi(x)\,\Gamma\,\psi(x) \qquad (2)$$

### 2.3 QCD+QED

The program can simulate QCD+QED [5] using the action

$$S = S_G + S_A + \sum_q S_F^q. \qquad (3)$$

$S_G$ is an SU(3) gauge action, $S_A$ is the non-compact U(1) gauge action

$$S_A = \frac{\beta_{\rm QED}}{2} \sum_{x,\,\mu<\nu} \big[ A_\mu(x) + A_\nu(x+\hat\mu) - A_\mu(x+\hat\nu) - A_\nu(x) \big]^2, \qquad (4)$$

and the fermion action for flavour $q$ is

$$S_F^q = \sum_x \Big\{ \kappa_q \sum_\mu \big[ \bar q(x)(\gamma_\mu - 1)\, e^{-iQ_q A_\mu(x)}\, \tilde U_\mu(x)\, q(x+\hat\mu) - \bar q(x)(\gamma_\mu + 1)\, e^{iQ_q A_\mu(x-\hat\mu)}\, \tilde U_\mu^\dagger(x-\hat\mu)\, q(x-\hat\mu) \big] + \bar q(x) q(x) - \frac{1}{2}\kappa_q c_{\rm SW} \sum_{\mu,\nu} \bar q(x)\, \sigma_{\mu\nu} F_{\mu\nu}(x)\, q(x) \Big\}, \qquad (5)$$

where $Q_q$ is the electric charge of quark flavour $q$, and $\tilde U_\mu$ is a singly iterated stout link.

### 2.4 QCD+Axion

The program can simulate QCD+Axion [15] using the action

$$S = S_G + S_a + \sum_q S_F^q. \qquad (6)$$

$S_a$ is the scalar action for the axion field $\phi_a$

$$S_a = \kappa_a \sum_x \sum_\mu \big( \phi_a(x) - \phi_a(x+\hat\mu) \big)\, \phi_a(x), \qquad (7)$$

and the fermion action for flavour $q$ in the case of Wilson fermions is

$$S_F^q = \sum_x \bar q(x) \big[ 1 + (\kappa_q \lambda_q + f_{\rm inv}\,\phi_a)\, \gamma_5 \big] q(x) - \kappa_q \sum_{x,\mu} \big[ \bar q(x)(1-\gamma_\mu)\, U_\mu(x)\, q(x+a\hat\mu) + \bar q(x-a\hat\mu)(1+\gamma_\mu)\, U_\mu^\dagger(x-a\hat\mu)\, q(x) \big], \qquad (8)$$

where

$$\kappa_q = \frac{1}{2 a m_q + 8}, \qquad \lambda_q = i\,\frac{2 a m_q\, \theta}{N_f}, \qquad f_{\rm inv} = i\,\frac{2 \kappa_q m_q \sqrt{\kappa_a}}{f_a N_f}. \qquad (9)$$

## 3 Measurements

The following quantities can be measured with BQCD: plaquettes (quadratic and rectangular), topological charge (cooling method), Polyakov loop, Wilson flow, traces of the fermion matrix, quark determinant with chemical potential, smallest and largest eigenvalues of the Dirac matrix, and meson and baryon propagators.

## 4 Algorithms

In addition to nested integrators for multiple time scales, a generalized integration scheme [8] has been implemented, in which a separate integration scheme for each action term is superimposed onto a single time-step evolution. This allows the integration step-sizes of the action terms to be completely independent of one another.
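The idea can be illustrated with a toy sketch (hypothetical code, not taken from BQCD): two action terms are integrated over the same unit-length trajectory, each with its own, independently chosen number of leapfrog steps, by merging the momentum kicks of both terms into one time-ordered event list.

```python
# Toy sketch of a generalized multiscale integrator (illustration only, not
# BQCD's actual implementation).  Two action terms act on a single variable q;
# each term gets its own leapfrog step size over the same trajectory.

# hypothetical toy action terms: S1 = q^2/2 and S2 = q^4/4
FORCES = {"S1": lambda q: q, "S2": lambda q: q**3}

def generalized_leapfrog(q, p, n_steps, tau=1.0):
    """n_steps maps each action term to its own number of leapfrog steps."""
    # each term's kicks sit at the midpoints of that term's own sub-steps
    events = sorted(
        (tau * (k + 0.5) / n, name)
        for name, n in n_steps.items()
        for k in range(n)
    )
    t = 0.0
    for t_kick, name in events:
        q += (t_kick - t) * p                       # drift to the next kick
        p -= tau / n_steps[name] * FORCES[name](q)  # full-step kick of this term
        t = t_kick
    q += (tau - t) * p                              # final drift to trajectory end
    return q, p
```

Because every kick is the exact flow of one action term and every drift is the exact kinetic flow, the superimposed scheme remains reversible and area-preserving for any combination of step counts.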

Rational Hybrid Monte Carlo (RHMC) is implemented with rational approximations from the Remez algorithm. As an alternative, the Zolotarev optimal rational approximation is available (see e.g. [16] for an explanation).
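In practice such a rational approximation is applied to a vector in partial-fraction form, $R(K) = \sum_k \alpha_k (K+\beta_k)^{-1}$, through a set of shifted inversions. The following sketch (hypothetical code; the coefficients come from a simple quadrature rule rather than from Remez or Zolotarev) shows the mechanism for $K^{-1/2}v$:

```python
import numpy as np

# Sketch of applying a rational approximation R(K) ~ K^(-1/2) in partial-
# fraction form.  BQCD takes its coefficients from the Remez algorithm or the
# Zolotarev formulas; here we derive some valid (alpha_k, beta_k) from the
# midpoint-rule quadrature of
#     1/sqrt(x) = (2/pi) * integral_0^inf dt / (t^2 + x),
# purely for illustration.

def partial_fraction_coeffs(n):
    theta = (np.arange(n) + 0.5) * (np.pi / 2) / n  # midpoint nodes on [0, pi/2]
    weight = (np.pi / 2) / n
    beta = np.tan(theta) ** 2                       # pole shifts
    alpha = (2 / np.pi) * weight / np.cos(theta) ** 2
    return alpha, beta

def apply_inv_sqrt(K, v, n=64):
    """Approximate K^(-1/2) v.  In production each shifted system
    (K + beta_k) x = v is solved with a multi-shift CG; here: dense solves."""
    alpha, beta = partial_fraction_coeffs(n)
    identity = np.eye(K.shape[0])
    return sum(a * np.linalg.solve(K + b * identity, v)
               for a, b in zip(alpha, beta))
```

All shifted systems share the same Krylov space, which is why the multi-shift solvers listed below are the natural companions of RHMC.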

Alongside Hasenbusch filtering, there are two new filtering methods, one of which applies exclusively to RHMC:

• Polynomial filtering applies to both RHMC and standard HMC fermions; a polynomial filter $P$ splits the fermion action into several terms:

$$S_{\rm PF} = \bar\psi_1\, P(W^\dagger W)\, \psi_1 + \bar\psi_2\, P(W^\dagger W)^{-1} (W^\dagger W)^{-1}\, \psi_2. \qquad (10)$$
• Term-splitting for RHMC splits the product in the rational approximation into several terms, giving the action

$$S_{\rm tRHMC} = \bar\psi_1\, R_{1,t}(W^\dagger W)\, \psi_1 + \bar\psi_2\, R_{t+1,N}(W^\dagger W)\, \psi_2, \qquad (11)$$

where

$$R_{i,j}(K) = c^{\,\delta_{i1}} \prod_{k=i}^{j} \frac{K + a_k}{K + b_k} \qquad (12)$$

and the shifts $a_k$, $b_k$ are ordered decreasing.
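A quick numerical sanity check (toy matrices and a hypothetical filter polynomial, not BQCD's) confirms that the split in Eq. (10) leaves the fermion determinant unchanged: the $\psi_1$ term contributes $\det P(K)^{-1}$, the $\psi_2$ term $\det[P(K)\,K]$, and their product is $\det K$.

```python
import numpy as np

# Toy check that polynomial filtering preserves the fermion determinant:
# det P(K)^(-1) * det[P(K) K] = det K  (hypothetical filter, random matrix).

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
K = A @ A.T + np.eye(5)     # stand-in for W^dagger W: Hermitian pos. definite
P = 2 * np.eye(5) + K       # a simple filter polynomial P(K) = 2 + K

combined = np.linalg.det(P @ K) / np.linalg.det(P)
assert np.isclose(combined, np.linalg.det(K))
```

The point of the split is not the determinant (which is exact) but the forces: the cheap filter term can be integrated on a fine time scale and the expensive corrector on a coarse one.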

BQCD has a wide range of iterative solvers: CG, BiCGStab, GMRES, GCRO-DR, multi-shift CG, and block multi-shift CG.
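Of these, CG is the workhorse for the Hermitian positive definite normal operator $W^\dagger W$. A minimal dense-matrix sketch (illustration only; the production solvers of course apply the lattice Dirac operator matrix-free):

```python
import numpy as np

# Minimal conjugate gradient sketch for a Hermitian positive definite matrix
# (illustration only, not BQCD's Fortran implementation).

def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # initial search direction
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)      # step length minimizing the A-norm error
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p  # A-conjugate update of the search direction
        rr = rr_new
    return x
```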

## 5 Optimization of compute performance

### 5.1 SIMD vectorization and MPI

In addition to parallelization with MPI and OpenMP, a third level of parallelism was introduced for the solvers: vectorization with SIMD intrinsic functions. The SIMD implementation is generic, i.e. it works for any SIMD vector length. In order to achieve this, the data layout of the arrays had to be changed. All arrays (for gauge, spin-colour and clover fields) now have SIMD vectors as their innermost structure. In Fortran notation the gauge field is defined in the following way

old: complex(8) :: u(3, 3, volume/2)
new: real(8)    :: u(SIMDsize, re:im, 3, 3, volume/2 / SIMDsize)

and the new layout of the spin-color field is

old: complex(8) :: a(4, 3, volume/2)
new: real(8)    :: a(SIMDsize, re:im, 2, 3, volume/2 / SIMDsize, 2)

where the four spin components of the spin-colour field are split into two halves, which optimizes MPI communication in the 4-direction. The clover arrays, for which a packed format is used, were changed accordingly.
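The layout change itself can be illustrated in NumPy (a sketch of the index transformation only; BQCD does this in Fortran): sites are grouped into blocks of SIMDsize, the complex numbers are split into real and imaginary parts, and the SIMD lane becomes the fastest-running index.

```python
import numpy as np

SIMDSIZE = 4      # e.g. 4 doubles for AVX2, 8 for AVX-512

def to_simd_layout(u_old, simdsize=SIMDSIZE):
    """Sketch of the gauge-field layout change.  u_old is complex with shape
    (volume/2, 3, 3), site index slowest in memory (mimicking Fortran
    u(3,3,volume/2)).  The result is real with the SIMD lane innermost,
    mirroring real(8) :: u(SIMDsize, re:im, 3, 3, volume/2 / SIMDsize)."""
    vol = u_old.shape[0]
    assert vol % simdsize == 0, "local volume must be divisible by SIMDsize"
    blocks = u_old.reshape(vol // simdsize, simdsize, 3, 3)     # group sites
    reim = np.stack([blocks.real, blocks.imag], axis=-1)        # split re/im
    return np.ascontiguousarray(reim.transpose(0, 2, 3, 4, 1))  # lane innermost

# SIMDSIZE consecutive (even/odd) sites now share one SIMD vector per
# real/imaginary component of each colour-matrix entry
u_old = np.arange(16 * 9).reshape(16, 3, 3) + 1j
u_new = to_simd_layout(u_old)
assert u_new.shape == (16 // SIMDSIZE, 3, 3, 2, SIMDSIZE)
```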

At the single-core level the SIMD code is about 2 times faster than the corresponding plain Fortran code. With this speed-up, computations are increasingly dominated by communication, and improving MPI communication becomes important. Hence, the following MPI optimizations were made:

• The overhead introduced by the reduction to two-component spinors was minimized. Previously the projection was done for the whole local volume; now it is only done for boundary sites, and no projection is needed in the 4-direction any more.

• All MPI buffers are consecutive in memory and aligned to SIMD vector boundaries.

• Communication can overlap with computation. This is implemented with MPI plus OpenMP, where the master thread communicates while the other threads compute.
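The overlap pattern can be sketched as follows (hypothetical Python, with a list append standing in for the MPI halo exchange): one thread "communicates" while a worker updates the interior, and the boundary is updated only once the halo has arrived.

```python
import threading

# Toy sketch of overlapping communication with computation (illustration only;
# BQCD does this with MPI plus OpenMP).  The interior update needs no remote
# data, so it can run while the halo exchange is in flight; the boundary
# update waits for the halo.

def halo_exchange(neighbour_data, halo):
    halo.extend(neighbour_data)          # stands in for MPI_Isend/Irecv + Wait

def sweep(interior, boundary, neighbour_data):
    halo, result = [], {}

    def compute_interior():              # runs on a worker thread
        result["interior"] = [2 * x for x in interior]

    worker = threading.Thread(target=compute_interior)
    worker.start()                       # worker computes the interior ...
    halo_exchange(neighbour_data, halo)  # ... while the master communicates
    worker.join()
    # boundary sites need the freshly received halo values
    result["boundary"] = [x + h for x, h in zip(boundary, halo)]
    return result["interior"] + result["boundary"]
```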

Tables 1 and 2 list performance figures for machines and lattices that are currently used in production. The optimized code runs between 1.3 and 1.7 times faster.

### 5.2 QUDA

BQCD can run on GPUs by employing the QUDA library [17], which provides a BQCD interface to its CG and multi-shift CG solvers.

## Acknowledgements

We would like to thank Gerrit Schierholz, Roger Horsley, Waseem Kamleh, Paul Rakow and James Zanotti for support, stimulating discussions and bug reports. The computations were performed on a Cray XC40 of the North-German Supercomputing Alliance (HLRN) and the IBM BlueGene/Q at Jülich Supercomputer Centre (JSC).

## References

• [1] Y. Nakamura, H. Stüben, PoS LATTICE2010, 040 (2010), 1011.0199
• [2] H. Stüben (UKQCD, QCDSF), Nucl. Phys. Proc. Suppl. 94, 273 (2001), hep-lat/0011045
• [3] M. Göckeler et al. (QCDSF), PoS LAT2007, 041 (2007), 0712.3525
• [4] N. Cundy et al., Phys. Rev. D79, 094507 (2009), 0901.3302
• [5] R. Horsley et al., JHEP 04, 093 (2016), 1509.00799
• [6] A.J. Chambers et al. (QCDSF/UKQCD, CSSM), Phys. Rev. D90, 014510 (2014), 1405.3019
• [7] A.J. Chambers et al., Phys. Rev. D92, 114517 (2015), 1508.06856
• [8] T. Haar, W. Kamleh, J. Zanotti, Y. Nakamura, Comput. Phys. Commun. 215, 113 (2017), 1609.02652
• [9] S. Hollitt, P. Jackson, R. Young, J. Zanotti, PoS INPC2016, 272 (2017)
• [10] X.Y. Jin, Y. Kuramashi, Y. Nakamura, S. Takeda, A. Ukawa, Phys. Rev. D88, 094508 (2013), 1307.7205
• [11] X.Y. Jin, Y. Kuramashi, Y. Nakamura, S. Takeda, A. Ukawa, Phys. Rev. D92, 114511 (2015), 1504.00113
• [12] X.Y. Jin, Y. Kuramashi, Y. Nakamura, S. Takeda, A. Ukawa, Phys. Rev. D96, 034523 (2017), 1706.01178
• [13] G.S. Bali, S. Collins, A. Cox, A. Schäfer, Phys. Rev. D96, 074501 (2017), 1706.01247
• [14]
• [15] G. Schierholz, Y. Nakamura, Dynamical QCD+Axion simulation: First results, in Proceedings, 35th International Symposium on Lattice Field Theory (Lattice2017): Granada, Spain, to appear in EPJ Web Conf.
• [16] T.W. Chiu, T.H. Hsieh, C.H. Huang, T.R. Huang, Phys. Rev. D66, 114502 (2002), hep-lat/0206007
• [17] M.A. Clark, R. Babich, K. Barros, R.C. Brower, C. Rebbi, Comput. Phys. Commun. 181, 1517 (2010), 0911.3191