OpenDDA: A Novel High-Performance Computational Framework for the Discrete Dipole Approximation
This work presents a highly-optimised computational framework for the Discrete Dipole Approximation, a numerical method for calculating the optical properties associated with a target of arbitrary geometry that is widely used in atmospheric, astrophysical and industrial simulations. Core optimisations include the bit-fielding of integer data and iterative methods that complement a new Discrete Fourier Transform (DFT) kernel, which efficiently calculates the matrix-vector products required by these iterative solution schemes. The new kernel performs the requisite 3D DFTs as ensembles of 1D transforms, and by doing so, significantly reduces both the number of constituent 1D transforms and the associated memory requirements. The optimisations also facilitate the use of parallel techniques to further enhance performance. Complete OpenMP-based shared-memory and MPI-based distributed-memory implementations have been created to take full advantage of the various architectures. Several benchmarks of the new framework indicate extremely favourable performance and scalability.
PROPOSED RUNNING HEAD: OPENDDA: HIGH-PERFORMANCE DDA
James Mc Donald (corresponding author: James Mc Donald, firstname.lastname@example.org)
Computational Astrophysics Laboratory, Department of Information Technology,
National University of Ireland, Galway
Email: email@example.com Phone: 353 91 495632 Fax: 353 91 494584
Computational Astrophysics Laboratory, Department of Information Technology,
National University of Ireland, Galway
Email: firstname.lastname@example.org Phone: 353 91 493549 Fax: 353 91 494501
S. Gerard Jennings
School of Physics & Environmental Change Institute,
National University of Ireland, Galway
Email: email@example.com Phone: 353 91 492704 Fax: 353 91 495515
Keywords: Discrete Dipole Approximation, optimisation, matrix-vector product, CG-FFT, parallel algorithm
The Discrete Dipole Approximation (DDA) is an intuitive and extremely flexible numerical method for calculating the optical properties associated with a target of arbitrary geometry, whose dimensions are comparable to or moderately larger than the incident wavelength. The basic concept was first introduced in 1964 to study the optical properties of molecular aggregates (devoe1964; devoe1965). Subsequently, prompted by the polarisation of light by interstellar dust and the desire to calculate the approximate extinction, absorption and scattering cross sections for wavelength-sized dielectric grains of arbitrary shape, a comparable scheme was formulated (purcell1973). Since then, the DDA has undergone major development. A recent erudite review describes this development in detail (yurkin2007).
There are currently two main publicly available DDA implementations, DDSCAT (http://www.astro.princeton.edu/draine) and ADDA (http://www.science.uva.nl/research/scs/Software/adda/index.html), the development of which has provided seminal contributions to the advancement of the DDA. DDSCAT (draine2004) is written in FORTRAN 77 and is, at present, the most widely used DDA implementation. It has been developed by Bruce Draine of the Princeton University Observatory and Piotr Flatau of the Scripps Institution of Oceanography, University of California, San Diego. Its development provided several advancements including, among others, an updated expression for the dipole polarisability that incorporates a radiative reaction correction (draine1988), the application of DFT techniques to the iterative core (goodman1991), and the introduction of the Lattice Dispersion Relations (LDR) for the prescription of the dipole polarisability (draine1993). A review paper by the authors concisely summarises these contributions (draine1994). ADDA (yurkin2007b) is an extremely advanced C code that has been developed, over a period of more than 10 years, by Maxim Yurkin and Alfons Hoekstra at the University of Amsterdam. It was the first parallel implementation of the DDA and from its initial incarnation has been able to perform single DDA computations across clusters of autonomous computers (hoekstra1994; hoekstra1995; hoekstra1998). This enables the code to overcome the restrictions on the number of dipoles that can be used due to the memory limitations of a single computer. The code also incorporates several integrated iterative solvers, several prescriptions for the dipole polarisabilities, and a host of other functionality (yurkin2008).
Other DDA implementations, which are not publicly available (see (yurkin2007b) for further details), include SIRRI (rahola1996; lumme1998), a FORTRAN 90 code developed by Simulintu Ltd in Finland, and ZDD (zubko2003), a C++ code developed by Evgenij Zubko and Yuriy Shkuratov of the Institute of Astronomy, Kharkov National University in Ukraine. For interested parties, a recent paper performs an in-depth comparison of these four codes (penttila2007).
Put succinctly, the DDA is an extremely flexible and powerful numerical scheme which replaces a continuum scatterer by an ensemble of dipoles situated on a cubic lattice. Each of the dipoles is driven by the incident electric field and by contributions from the other dipoles. Eq. 1 outlines the kernel of this scheme, which is a very large and fully populated set of complex linear equations:

$$\vec{E}_{\mathrm{inc},j}=\alpha_{j}^{-1}\vec{P}_{j}+\frac{1}{4\pi\epsilon_{0}}\sum_{\substack{k=1\\k\neq j}}^{N}\frac{e^{ikr_{jk}}}{r_{jk}}\left[k^{2}\left(\hat{r}_{jk}\otimes\hat{r}_{jk}-\mathbb{I}\right)+\frac{ikr_{jk}-1}{r_{jk}^{2}}\left(3\,\hat{r}_{jk}\otimes\hat{r}_{jk}-\mathbb{I}\right)\right]\vec{P}_{k}\qquad(1)$$

Eq. 1 must be solved for the unknown polarisations, $\vec{P}_{j}$, from which the scattering properties can be calculated. Here, $k$ is the wavenumber, $\epsilon_{0}$ is the permittivity of free space, $\alpha_{j}$ is the polarisability of dipole $j$, $\vec{E}_{\mathrm{inc},j}$ is the incident electric field at dipole $j$, $\mathbb{I}$ is the identity matrix, $r_{jk}=\lvert\vec{r}_{j}-\vec{r}_{k}\rvert$ is the distance between dipoles $j$ and $k$ with $\hat{r}_{jk}$ the associated unit vector, and $\otimes$ is the dyadic product, which is a special case of the Kronecker product for vectors of equal order.
For systems based on three-dimensional lattices, the computational requirements are extremely sensitive to the size of the lattice used. Even a small increase in the dimensions of the lattice significantly increases the computational overheads. As a result, the true potential of the host algorithm is often encumbered by the severe and often prohibitive burdens imposed by the computational scale required to produce meaningful results. In these cases, it is crucial that every possible algorithmic or computational enhancement be exploited in order to augment their applicability. OpenDDA is a novel computational framework for the DDA which has been custom-built from scratch in the C language. The new framework incorporates a variety of algorithmic and computational optimisations, enabling researchers to take full advantage of the different architectures that exist in the current generation of computational resources and beyond. Apart from the core optimisations, the new framework also incorporates an extremely intuitive and user-friendly interface for specifying the simulation parameters, including a sturdy and forgiving parser and a straightforward human-readable input control file.
The paper is organised as follows. Section 2 discusses the application of bit-fielding to streamline the storage of the target occupation and composition data. Section 3 deals with the design and development of a new highly-optimised DFT kernel to calculate the matrix-vector products within the iterative solution schemes. Section 4 discusses the custom-built iterative schemes that have been included to support the new DFT kernel. Section 5 describes the extension of this formulation to both shared-memory and distributed-memory parallel environments. Section 6 provides various serial and parallel benchmarks to corroborate the assertions and illustrate the implications of the new optimisations. Finally, Section 7 presents concluding remarks.
2 Bit-fielding of target occupation and composition data
In many applications, the range provided by the standard integer data types far exceeds the maximum range associated with the task at hand, and as a result, for large datasets, most of the bits are superfluous, constituting a serious waste of resources. Bit-fielding is a way to overcome this issue by storing data within the bits rather than the integers themselves. The entries can be set & tested by taking the appropriately shifted bit-mask and applying the relevant bitwise operator, OR for setting & AND for testing. The simplicity and speed of bitwise operations makes bit-fielding an extremely efficient method of storing non-negative integer-based data.
The DDA’s theoretical strength, which is also its computational weakness due to the imposed computational overheads, is its volumetric nature. Its ability to treat arbitrarily shaped particles with complex internal structures derives from the fact that each constituent site in the computational domain has separate integer flags describing its properties. Rather than storing the spatial extent of the target as the Cartesian coordinates of each lattice site in the computational domain plus a Boolean occupation flag to indicate whether or not the site is occupied, requiring four integers, OpenDDA simply stores a single bit-fielded occupation flag per lattice site. The Cartesian coordinates need not be stored explicitly as, if required, they are readily available as the loop control variables or via direct extraction from the array index itself. The optimisation of this integer data storage reduces the memory required for the spatial description of the target by a factor of 128.
The DDA also requires the specification of the material properties of the target on a dipole-by-dipole basis. This is done by associating each complex refractive index value with a distinct integer flag, and for each dipole, storing three integers to define the material properties in the $x$, $y$ & $z$ directions. To compress this composition data, OpenDDA employs an extension of the bit-fielding theory to non-Boolean data sets. The new framework automatically calculates the requisite bit-mask, i.e., for four disparate materials, flags $0\rightarrow 3$, two bits are all that is required, and the bit-mask is $11_{2}=3$. For this example, the memory required for the integer data associated with the compositional description of the target is reduced by a factor of 16.
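The extension to non-Boolean data can be sketched the same way (again a hedged Python illustration with hypothetical helper names, not OpenDDA's actual C routines): 2-bit material flags are packed sixteen per 32-bit word, with each entry cleared and rewritten via the shifted bit-mask:

```python
# Sketch of non-Boolean bit-fielding: four materials (flags 0-3) need only
# two bits each, so sixteen entries fit in one 32-bit word.
BITS = 2                 # two bits suffice for four distinct materials
MASK = 0b11              # the corresponding bit-mask
PER_WORD = 32 // BITS    # entries per 32-bit word

def set_material(field, index, flag):
    """Store a 2-bit material flag at `index` (clear the slot, then OR in)."""
    word, slot = divmod(index, PER_WORD)
    field[word] &= ~(MASK << (slot * BITS))
    field[word] |= (flag & MASK) << (slot * BITS)

def get_material(field, index):
    """Extract the 2-bit material flag stored at `index`."""
    word, slot = divmod(index, PER_WORD)
    return (field[word] >> (slot * BITS)) & MASK

entries = 100
field = [0] * ((entries + PER_WORD - 1) // PER_WORD)  # 100 entries -> 7 words
set_material(field, 0, 3)
set_material(field, 42, 1)
print(get_material(field, 0), get_material(field, 42), get_material(field, 1))
# prints: 3 1 0
```

Packing 2-bit entries into 32-bit words is exactly the factor-of-16 reduction quoted above.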
To put the optimisations in perspective, for $K=J=P=500$, where $K$, $J$ & $P$ are the $x$, $y$ & $z$ dimensions of the computational domain, respectively, with four materials, the memory requirements for the integer data associated with the spatial and compositional description of the target are reduced from 3.26 GB to just over 0.1 GB. Furthermore, the computational overhead incurred by the application of this bit-fielding optimisation within OpenDDA is negligible. Firstly, OpenDDA does not require the actual Cartesian coordinates within the iterative core. They are only needed at the initialisation stage for the calculation of the incident electric field at each of the lattice sites. Secondly, due to the nature of the iterative methods and the fact that the iterative solution vectors only include entries corresponding to occupied sites (discussed in §4), the Boolean occupation flags are only tested at the initialisation stage, during the calculation of the dipole polarisabilities, and once at the start and end of the DFT matrix-vector product routine. Although the integer data constitutes only a fraction of the total memory footprint, with most deriving from the requisite complex vectors, these optimisations are not without merit and fundamentally underpin the efficiency of the current distributed-memory implementation of OpenDDA. To facilitate the elimination of zero storage within all iterative solution vectors, each autonomous node requires local knowledge of the entire spatial and compositional specification of the target and thus, unlike the complex vectors within the iterative core, the target specification is not distributed. Bit-fielding allows the full description of the target to be known globally without a prohibitive overhead.
With each node having typically a few gigabytes to work with, the original memory requirements associated with the description of the target would have been comparable to the maximum possible allocation per node, negating the possibility of a viable distributed implementation.
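As a back-of-the-envelope check of this saving, assuming purely for illustration a $500^{3}$ computational domain, four materials and 32-bit integers:

```python
# Illustrative memory arithmetic for an assumed 500^3 domain, four materials,
# 32-bit (4-byte) integers: 4 spatial + 3 composition integers per site
# before bit-fielding, versus 1 occupation bit + three 2-bit flags after.
sites = 500 ** 3
before = sites * (4 + 3) * 4          # bytes: seven 4-byte integers per site
after = sites * (1 + 3 * 2) / 8       # bytes: seven bits per site
print(before / 2**30, after / 2**30)  # roughly 3.26 GB versus just over 0.1 GB
```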
3 Kernel optimisation
Due to the sheer size of the system produced by Eq. 1, iterative methods are employed, with a computational cost of $\mathcal{O}(N_{i}N^{2})$, where $N_{i}\ll N$ and $N_{i}$ is the number of iterations required to reach convergence; these are much faster than direct $\mathcal{O}(N^{3})$ methods for large $N$. The cardinal computational hurdle, and the key to the acceleration of these iterative methods, such as the conjugate gradient method (barrett1994; golub1996), is the calculation of the matrix-vector products required by these schemes. A major breakthrough in this respect was the realisation that by embedding the arbitrarily shaped target in a computational domain and neglecting the diagonal terms, the matrix-vector products can be reformulated as convolutions (goodman1991). Once the convolutions have been performed, it is then a trivial matter to include the contributions from the previously neglected diagonal terms.
The solution method is predicated upon Toeplitz and circulant matrix structure (davis1994; golub1996). A Toeplitz matrix of order $N$, denoted $T_{N}$, is simply a matrix with constant diagonals, whereas a circulant matrix, $C_{N}$, is a special case of Toeplitz structure in which the last element of each row is the same as the first element of the next row. Any Toeplitz matrix, $T_{N}$, can be converted to its circulant counterpart, $C_{M}$, where $M=2N-1$, by taking the first column, appending the entries of the first row, excluding the initial entry, in reverse order, and completing the diagonals. The key is that any circulant matrix is diagonalised by the corresponding Fourier matrix, $C_{M}=F_{M}^{-1}\,\mathrm{diag}(F_{M}\vec{c}\,)\,F_{M}$ (vanloan1992), where $\vec{c}$ is the first column of $C_{M}$.
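The T-to-c construction and the diagonalisation property can be verified numerically; the following NumPy sketch (illustrative values only) builds the circulant counterpart of a small Toeplitz matrix and checks that the Fourier matrix diagonalises it, with eigenvalues given by the DFT of the first column:

```python
import numpy as np

# Toeplitz-to-circulant conversion: the first column of C_M is the first
# column of T_N followed by the first row of T_N (minus its initial entry)
# in reverse order, giving order M = 2N - 1.
N = 4
col = np.array([1.0, 2.0, 3.0, 4.0])      # first column of T_N (illustrative)
row = np.array([1.0, 5.0, 6.0, 7.0])      # first row of T_N (illustrative)
c = np.concatenate([col, row[1:][::-1]])  # first column of C_M
M = len(c)                                # M = 2N - 1 = 7

# Circulant matrix: column j is the first column cyclically shifted by j.
C = np.column_stack([np.roll(c, j) for j in range(M)])

# Diagonalisation by the Fourier matrix: F_M C F_M^{-1} = diag(DFT(c)).
F = np.fft.fft(np.eye(M))                 # the DFT matrix F_M
D = F @ C @ np.linalg.inv(F)
print(np.allclose(D, np.diag(np.fft.fft(c))))  # prints: True
```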
Using the fact that the Fourier matrix is unitary, $F_{M}^{-1}=\frac{1}{M}F_{M}^{H}$, the multiplication of an arbitrary vector with a circulant matrix can be performed as an element-wise multiplication in the Fourier Domain (FD) via the Convolution Theorem, $\vec{c}\ast\vec{x}\Leftrightarrow\vec{C}\odot\vec{X}$. The symbol ‘$\ast$’ denotes convolution, ‘$\Leftrightarrow$’ denotes a Fourier transform pair and the symbol ‘$\odot$’ implies element-wise multiplication.
Not only does this reduce the computational complexity of the matrix-vector product from $\mathcal{O}(N^{2})$ to $\mathcal{O}(N\log N)$, but since the circulant eigenvalues are a scalar multiple of the DFT of the first column of the circulant matrix, see Eq. 3, the memory is reduced from $\mathcal{O}(N^{2})$ to $\mathcal{O}(N)$. To expedite the calculation of a Toeplitz matrix-vector product, $T_{N}\vec{x}$, the matrix must undergo Toeplitz-to-circulant (T-to-c) conversion, $T_{N}\rightarrow C_{2N-1}$, and the vector must be zero-padded to the dimension of the circulant matrix by appending $N-1$ zeroes. Eq. 4 can then be applied. The first $N$ elements of the resultant are the required answer for the original Toeplitz matrix-vector product, with the remaining $N-1$ entries being redundant. Please note that this methodology and the following discussion are only applicable to systems exhibiting Toeplitz structure. Non-Toeplitz structure negates the transformation to circulant structure and hence, the use of DFTs to calculate the matrix-vector products efficiently. In terms of the specific geometry of the target in question, the method is indifferent. For sparse systems, from targets with a low volume filling fraction (VFF), such as dendritic or highly porous structures, the only difference is the amount of memory required by the iterative solution vectors. The solution vectors within the iterative core do not store entries corresponding to vacant lattice sites, a fact that will be discussed in §4. However, the DFT kernel that efficiently calculates the matrix-vector products required by the iterative solvers must account for the entire rectangular computational domain, including the vacant sites, and thus performs the same amount of calculations, irrespective of the VFF.
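Putting the pieces together, the full Toeplitz matrix-vector product via circulant embedding can be sketched in NumPy (illustrative sizes and random data; not OpenDDA's C implementation):

```python
import numpy as np

# Toeplitz matrix-vector product via circulant embedding and the FFT:
# embed T_N in C_{2N-1}, zero-pad the vector with N-1 zeroes, multiply
# element-wise in the Fourier domain, and keep the first N entries.
rng = np.random.default_rng(0)
N = 5
col = rng.standard_normal(N)                                  # first column of T_N
row = np.concatenate([[col[0]], rng.standard_normal(N - 1)])  # first row of T_N

# Direct O(N^2) reference: T[i, j] = col[i-j] for i >= j, row[j-i] otherwise.
T = np.empty((N, N))
for i in range(N):
    for j in range(N):
        T[i, j] = col[i - j] if i >= j else row[j - i]
x = rng.standard_normal(N)

c = np.concatenate([col, row[1:][::-1]])      # T-to-c conversion, M = 2N-1
x_pad = np.concatenate([x, np.zeros(N - 1)])  # zero-pad to length M
y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x_pad))[:N].real

print(np.allclose(y, T @ x))  # prints: True
```

The element-wise Fourier-domain product replaces the dense multiplication, and the trailing $N-1$ entries of the inverse transform are simply discarded.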
For the DDA, by embedding the arbitrary target in a rectangular lattice and neglecting the diagonal terms, the requisite matrix-vector products take on the form of a complex-symmetric level-three tensor-Toeplitz matrix of order $N$, denoted $\tilde{T}_{N}$, in which the constituent tensor elements are complex-symmetric but not necessarily Toeplitz, multiplied by an arbitrary vector of length $N$, denoted $\vec{x}$, whose constituent elements are Cartesian vectors. Computationally, $\tilde{T}_{N}$ can be considered as an assemblage of nine complex-symmetric level-three single-element Toeplitz matrices, and since the tensor elements are symmetric, only six of the nine tensor components are independent and need to be considered explicitly: $xx$, $xy$, $xz$, $yy$, $yz$ & $zz$. Analogously, the arbitrary vector can be split into three separate component vectors: $x$, $y$ & $z$. Consequently, this means that the components can be treated separately and the full problem is reduced to an ensemble of smaller single-element problems whose contributions will ultimately be manifested through FD products.
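The component-wise treatment can be illustrated with a short NumPy sketch (hypothetical array names, random stand-in data): storing only the six independent components of a complex-symmetric 3×3 tensor per site, and forming the product component by component, exactly as the FD multiplications do:

```python
import numpy as np

# Per-site product of a complex-symmetric 3x3 tensor, stored as its six
# independent components (xx, xy, xz, yy, yz, zz), with a Cartesian vector
# (vx, vy, vz). The full 3x3 tensor is never stored or multiplied explicitly.
rng = np.random.default_rng(1)
n = 10  # number of lattice sites (illustrative)
xx, xy, xz, yy, yz, zz = (
    rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(6)
)
vx, vy, vz = (
    rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3)
)

# Six components suffice because the tensor is symmetric: e.g. the yx entry
# equals the stored xy entry.
rx = xx * vx + xy * vy + xz * vz
ry = xy * vx + yy * vy + yz * vz
rz = xz * vx + yz * vy + zz * vz

# Cross-check one site against an explicit symmetric 3x3 matrix product.
s = 0
T3 = np.array([[xx[s], xy[s], xz[s]],
               [xy[s], yy[s], yz[s]],
               [xz[s], yz[s], zz[s]]])
print(np.allclose(T3 @ np.array([vx[s], vy[s], vz[s]]),
                  np.array([rx[s], ry[s], rz[s]])))  # prints: True
```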
At this point, before attempting to decipher the ensuing optimisations of this DFT technique, it would be prudent for readers who are not familiar with the specific algorithmic details associated with the application of DFTs to the solution of such matrix-vector systems to read Appendix A. This concisely summarises the conventional application of DFTs to these complex-symmetric level-three tensor-Toeplitz systems in the context of the DDA, as prescribed by goodman1991.
The complete standard algorithm, which encapsulates the conventional implementation, can be summarised as follows:
1. T-to-c conversion and 3D DFT of the six independent tensor components
2. Zero-padding and 3D DFT of the three vector components
3. FD element-wise multiplication of the tensor and vector components
4. 3D iDFT and normalisation of the components of the resultant
5. Addition of the previously ignored diagonal contributions
Due to the T-to-c conversions that are required for DFT applicability, the six tensor-component arrays contain a high degree of symmetry. Since a 3D DFT can be carried out as an ensemble of 1D DFTs, first in the $x$, then in the $y$, and finally in the $z$ direction, a new algorithm which exploits these symmetries can be constructed. Rather than expanding the six tensor-component arrays from a Toeplitz structure to a circulant structure and applying 3D DFTs, as delineated in Fig. 6, the T-to-c conversion and subsequent DFT can be amalgamated into a single computationally efficient operation. Each of the lines in the $x$ direction, in the $y$ direction, and in the $z$ direction of the original Toeplitz data in Fig. 6 can be taken in turn and copied into an external 1D array of length $K^{\prime}\geq 2K-1$, $J^{\prime}\geq 2J-1$, and $P^{\prime}\geq 2P-1$, respectively. Each can be treated as the first column of a level-one Toeplitz matrix and the T-to-c conversion performed. The DFTs of these external vectors can be computed, and subsequently, the required leading elements of the resultants can be copied back into the 3D tensor-component array, overwriting the original data. This significantly reduces the number of 1D DFTs that have to be performed. Taking into account the zero-padding to ensure that $K^{\prime}$, $J^{\prime}$ and $P^{\prime}$ permit the use of fast DFT algorithms, the number of transforms is substantially reduced in total:
$x$ direction, see Fig. 1(a)
$y$ direction, see Fig. 1(b)
$z$ direction, see Fig. 1(c)
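The decomposition on which this line-by-line scheme relies, a 3D DFT performed as successive ensembles of 1D transforms along each axis, can be verified with a short NumPy sketch (NumPy here is an illustrative stand-in for the FFT library an actual implementation would use):

```python
import numpy as np

# A 3D DFT equals ensembles of 1D DFTs applied along each axis in turn:
# J*P transforms of length K, then K*P of length J, then K*J of length P.
rng = np.random.default_rng(2)
K, J, P = 4, 5, 6  # illustrative dimensions
a = rng.standard_normal((K, J, P)) + 1j * rng.standard_normal((K, J, P))

step = np.fft.fft(a, axis=0)     # all lines in the x direction
step = np.fft.fft(step, axis=1)  # all lines in the y direction
step = np.fft.fft(step, axis=2)  # all lines in the z direction

print(np.allclose(step, np.fft.fftn(a)))  # prints: True
```

It is this factorisation that lets the T-to-c conversion be fused with each 1D transform, so symmetric lines need never be expanded or transformed explicitly.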