An efficient multi-core implementation of a novel HSS-structured multifrontal solver using randomized sampling

Pieter Ghysels, Xiaoye S. Li, François-Henry Rouet, Samuel Williams
Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA. ({pghysels,xsli,fhrouet,swwilliams}@lbl.gov)
Artem Napov
Université Libre de Bruxelles, B-1050 Brussels, Belgium.
Abstract

We present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination, and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in significant speedups for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel® Xeon Phi (MIC). The code is part of a software package called STRUMPACK – STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.

1 Introduction

Solving large linear systems efficiently on modern hardware is an important requirement for many engineering high performance computing codes. For a wide range of applications, like those using finite element, finite difference or finite volume discretizations of partial differential equations (PDEs), the resulting linear system is extremely sparse. Fast solution methods exploit this sparsity, but also arrange the computations in such a way that most of the computational work is done on smaller dense submatrices. The reason for this is that operations on dense matrices can be implemented very efficiently on modern hardware. The multifrontal method [23, 38] is an example of a sparse direct solver where most of the work is done on dense, so-called frontal matrices. Unfortunately, dense linear algebra operations, for instance LU decomposition, require O(n³) operations, where n is the matrix dimension. In a multifrontal solver these dense operations end up being the bottleneck. However, it has been observed that for many applications the dense frontal matrices have some kind of low-rank structure [17].

In [55], a rank-structured multifrontal method is presented in which the larger frontal matrices are approximated by hierarchically semiseparable (HSS) [50] matrices. For certain model problems, this leads to a solver, or preconditioner, with linear or almost linear complexity in the total number of degrees of freedom in the sparse linear system. Here, we present an efficient implementation of a slightly modified version of the algorithm presented in [55]. The algorithm in [55] handles only symmetric positive definite systems, while the code presented here targets general non-symmetric non-singular matrices. For HSS compression, a randomized sampling algorithm from [39] is used. Earlier HSS construction methods, see [56], cost at least O(n²) for an n × n dense matrix, whereas the randomized method in combination with a fast matrix-vector product has a linear or almost linear complexity, depending on the rank-structure of the frontal matrix.

An important concept used in the randomized compression algorithm is the interpolative or skeleton decomposition [19]. Use of this decomposition leads to a special structure of the HSS generator matrices (see Eq. (12)). The HSS factorization used in [55] for symmetric matrices, and in [57] for non-symmetric matrices, exploits this special structure in a so-called ULV-like decomposition. In this paper, the ULV decomposition from [57] is used.

The HSS format is a subclass of a more general type of hierarchical rank-structured matrices called H-matrices [13]. HSS matrices are similar to H²-matrices, another subclass of H-matrices, in the sense that both formats have the special property that the generators are hierarchically nested (see Eq. (3) for what this means for HSS). This is typically not the case in the more general H-matrix format, the block low-rank (BLR) [4], the sequentially semi-separable (SSS) [50] or the hierarchically off-diagonal low-rank (HODLR) [3] formats (all of which are H-matrices). In HSS and HODLR only off-diagonal blocks are approximated as low-rank, whereas H, H² and BLR allow more freedom in the partitioning. In [4], BLR is used to approximate dense frontal matrices in the multifrontal solver MUMPS [6], while in other recent work [8] HODLR has also been proposed to accelerate a multifrontal solver. Both HSS and HODLR use a similar hierarchical off-diagonal partitioning, but HSS further exploits the hierarchically nested bases structure, which can lead to asymptotically faster factorization algorithms for some matrices.

Furthermore, thanks to the randomized HSS construction, our solver is also fully structured (compared to partially structured approaches where only part of the frontal matrix is compressed, see [51]) and the larger frontal matrices are never explicitly formed as dense matrices.

Achieving high performance on multi/many-core architectures can be challenging, but it has been demonstrated by many authors now that dynamic scheduling of fine-grained tasks represented by a Directed Acyclic Graph (DAG) can lead to good performance for a range of codes. This approach was used successfully in the dense linear algebra libraries PLASMA and MAGMA [2] and more recently it has become clear that it is also a convenient and efficient strategy for sparse direct solvers. For instance, in [35] the PaStiX solver [30] is modified to use two different generic DAG schedulers (PaRSEC [14] and StarPU [9]). In [1] StarPU is used in a multifrontal QR solver. In [33], OpenMP tasks are submitted recursively for work on different frontal matrices, while parallelism inside the frontal matrices is also exploited with OpenMP tasks but with a manual resolution of inter-task dependencies. The sparse Cholesky solver HSL_MA87 [31] uses a custom DAG scheduler implemented in OpenMP. Just as sparse direct solvers, hierarchical matrix algorithms also benefit from task parallelism: Kriemann [34] uses a DAG to schedule fine-grained tasks to perform H-matrix LU factorization. Our implementation uses OpenMP task scheduling, but since most of the tasks are generated recursively, a DAG is never explicitly constructed.

The main contribution of this work is the development of a robust and efficient code for the solution of general sparse linear systems, with a specific emphasis on systems from the discretization of PDEs. Our work addresses various implementation issues, the most important being the use of an adaptive HSS construction scheme (Section 2.2.1), based on the randomized sampling method [39]. Rather than assuming that the maximum rank in the HSS submatrices is known a priori, it is computed in an adaptive way during the HSS compression. Other implementation techniques such as fast extraction of elements from an HSS structure (Section 4.5) are also indispensable to make the code robust and usable as a black-box solver. The code achieves high performance and good scalability on a range of modern multi/many-core architectures like Intel® Xeon and Intel® Xeon Phi (MIC), due to runtime scheduled task parallelism, using OpenMP (http://openmp.org/wp/openmp-specifications/). The exclusive use of task parallelism avoids expensive inter-thread synchronization and leads to a very scalable code. This is the first parallel algebraic sparse solver with fully structured HSS low-rank approximation. The code is made publicly available with a BSD license as part of a package called STRUMPACK – STRUctured Matrices PACKage (http://portal.nersc.gov/project/sparse/strumpack/). STRUMPACK also has a dense distributed memory component, see [47].

This work advances the field significantly on several fronts.

  • Wang et al. [52] presented the first parallel multifrontal code with HSS embedding, called Hsolver. However, two shortcomings prevent it from being widely adopted: it is based on discretization on regular meshes and is only partially structured due to the hurdle of the extend-add of HSS update matrices (see Section 4). Our new code, on the other hand, is a purely algebraic solver for general sparse linear systems and is fully structured, mitigating the HSS extend-add obstacle thanks to the randomized sampling technique (see Section 4.3).

  • Napov and Li developed a purely algebraic sparse solver with HSS embedding [42], but it is only sequential and their experiments did not include the randomized sampling HSS compression. Our new code is parallel and we show detailed results with randomized sampling.

In future work, building on the current paper and on the distributed HSS code developed in [47], we intend to develop a distributed memory algebraic sparse solver with HSS compression.

The rest of the paper is outlined as follows. Some required background on HSS is briefly presented in Section 2. First, in Section 2.1, the HSS rank-structured format is described. Next, the fast randomized sampling HSS construction [39] and the ULV decomposition [57] are discussed in Sections 2.2 and 2.3 respectively. Section 3 describes multifrontal LU decomposition. Then, in Section 4 we discuss how HSS matrices can be incorporated into a multifrontal solver. Section 5 explains various aspects of the actual implementation. In Section 6 we present experimental results that illustrate numerical and performance aspects of the code. Finally, Section 7 has some concluding remarks and an outlook to planned future work.

2 HSS: Hierarchically Semi-Separable matrices

This section briefly introduces hierarchically semi-separable (HSS) matrices, mostly following the notation from [39]. HSS is a data-sparse matrix representation which is part of the more general class of H-matrices, and more specifically of H²-matrices.

2.1 Overview of the HSS matrix format

The following notation is used: ‘:’ is MATLAB-like notation for all indices in a range, * denotes complex conjugation (transpose), #I_τ is the number of elements in the index set I_τ, and A(I, :) is the matrix consisting of only the rows I of matrix A.

Consider a square matrix A ∈ C^{N×N} with an index set I_A = {1, …, N} associated with it. Let T be a postordered binary tree, meaning that children in the tree are numbered before their parent. Each node τ of the tree is associated with a contiguous subset I_τ ⊂ I_A. For two siblings ν1 and ν2 in the tree, children of τ, it holds that I_{ν1} ∪ I_{ν2} = I_τ and I_{ν1} ∩ I_{ν2} = ∅. Furthermore, the root node is associated with the full index set I_A. The same tree is used for the rows and the columns of A and only diagonal blocks are partitioned. An example of the resulting matrix partitioning is given in Figure 1(a) and the corresponding tree is shown in Figure 1(b).

(a) HSS partitioning of a square matrix.
(b) Tree corresponding to the HSS partitioning.
Figure 1: Illustration of an HSS partitioning of a square matrix. Diagonal blocks are partitioned recursively. Figure (b) shows the tree, using postordering, corresponding to the partitioning in (a) and it illustrates the basis matrices stored in the nodes of the tree.

The diagonal blocks of A, denoted D_τ, are stored as dense matrices in the leaves of the tree:

    D_τ = A(I_τ, I_τ).    (1)

The off-diagonal blocks A_{ν1,ν2} = A(I_{ν1}, I_{ν2}), where ν1 and ν2 denote two siblings in the tree, are factored (approximately) as

    A_{ν1,ν2} ≈ U^big_{ν1} B_{ν1,ν2} (V^big_{ν2})^*.    (2)

The matrices U^big_{ν1} and V^big_{ν2}, which form bases for the column and row spaces of A_{ν1,ν2}, are typically tall and skinny, with U^big_{ν1} having #I_{ν1} rows and r^r_{ν1} (column-rank) columns, V^big_{ν2} having #I_{ν2} rows and r^c_{ν2} (row-rank) columns, and hence B_{ν1,ν2} is r^r_{ν1} × r^c_{ν2}. The HSS-rank r of matrix A is defined as the maximum of r^r_τ and r^c_τ over all off-diagonal blocks, where typically r ≪ N. The matrices B_{ν1,ν2} and B_{ν2,ν1} are stored in the parent of ν1 and ν2. For a non-leaf node τ with children ν1 and ν2, the basis matrices U^big_τ and V^big_τ are not stored directly, since they can be represented hierarchically as

    U^big_τ = [ U^big_{ν1}  0 ; 0  U^big_{ν2} ] U_τ,    V^big_τ = [ V^big_{ν1}  0 ; 0  V^big_{ν2} ] V_τ.    (3)

Note that for a leaf node U^big_τ = U_τ and V^big_τ = V_τ. Hence, every node τ with children ν1 and ν2, except for the root node, keeps matrices U_τ, V_τ, B_{ν1,ν2} and B_{ν2,ν1}. The example from Figure 1 can be written out explicitly as

    A = [ [ D_1, U_1 B_{1,2} V_2^* ; U_2 B_{2,1} V_1^*, D_2 ]    U^big_3 B_{3,6} (V^big_6)^* ;
          U^big_6 B_{6,3} (V^big_3)^*    [ D_4, U_4 B_{4,5} V_5^* ; U_5 B_{5,4} V_4^*, D_5 ] ].    (4)

The storage requirements for an HSS matrix are O(rN). Construction of the HSS generators will be discussed in the next section. Once an HSS representation of a matrix is available, it can be used to perform matrix-vector multiplication in O(rN) operations, compared to O(N²) for classical dense matrix-vector multiplication, see [39, 47].
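To make this storage layout concrete, the following C++ sketch shows one possible in-memory representation of the generators of an HSS tree node; the class and member names are illustrative and do not correspond to the actual STRUMPACK data structures.

  #include <memory>
  #include <vector>

  // Minimal dense matrix placeholder; a real code would use proper matrix classes.
  struct DenseMatrix {
    int rows = 0, cols = 0;
    std::vector<double> data;                 // column-major storage
  };

  // One node of the HSS tree. Leaves store the dense diagonal block D;
  // non-leaf nodes store B12/B21 for the off-diagonal blocks of their two
  // children; every node except the root stores U and V (in interpolative form).
  struct HSSNode {
    std::unique_ptr<HSSNode> child0, child1;  // empty for leaves
    DenseMatrix D;                            // leaf only: A(I_tau, I_tau)
    DenseMatrix B12, B21;                     // non-leaf only
    DenseMatrix U, V;                         // all nodes except the root
    bool is_leaf() const { return !child0 && !child1; }
  };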

2.2 Fast HSS construction through randomized sampling

In [39] Martinsson presents a randomized sampling algorithm for the efficient construction of an HSS representation of a matrix A. Note that the same technique was also used by Xia et al. in [57, 55] for HSS compression in a multifrontal solver. The main advantage of this approach is that it does not need explicit access to all elements of A, but only needs a fast matrix-vector routine and selected elements from A. The matrix A never needs to be formed explicitly as a dense matrix, which saves memory. The overall complexity of the algorithm is O(r²N), with r the HSS-rank of A, provided that a fast (linear or near-linear complexity) matrix-vector product is available. By comparison, other approaches based on direct low-rank compression of the matrix off-diagonal blocks typically require O(rN²) operations. This section briefly presents the randomized compression algorithm; for a more in-depth discussion see [47, 39].

Suppose the HSS-rank r is known a priori and let R^r and R^c be two tall and skinny N × d random matrices with d = r + p columns, where p is a small oversampling parameter. Let S^r = A R^r and S^c = A^* R^c be samples for the row (superscript r) and column (superscript c) bases of A, respectively. Algorithm 1, with d columns in the random and sample matrices, computes the HSS representation of A using the information available in the samples S^r and S^c and by hierarchically compressing (using interpolative decompositions, see below) the off-diagonal blocks of A, starting from the leaves.

Let D_τ for a non-leaf node τ with children ν1 and ν2 be defined as

    D_τ = [ D_{ν1}  A(I_{ν1}, I_{ν2}) ; A(I_{ν2}, I_{ν1})  D_{ν2} ].    (5)

If τ1, τ2, …, τq are all the nodes on level ℓ of the HSS tree, then

    D^(ℓ) = diag(D_{τ1}, D_{τ2}, …, D_{τq})    (6)

is an N × N block diagonal matrix. The main idea of the randomized sampling Algorithm 1 is to construct, for each level ℓ of the tree, a sample matrix

    S^{r,(ℓ)} = A R^r − D^(ℓ) R^r = (A − D^(ℓ)) R^r    (7)

(and similarly for A^*). This sample matrix captures the action of the block off-diagonal part of A on a set of random vectors R^r. It is exactly this block off-diagonal part that needs to be compressed using low-rank approximation.

Another crucial component of the randomized sampling algorithm is the interpolative decomposition (ID) [19]. The ID computes a factorization of a rank-k matrix A ∈ C^{m×n} by expressing A as a linear combination of a set J of k selected columns of A:

    A = A(:, J) X,    (8)

or it can be modified to take a compression tolerance ε, such that

    A ≈ A(:, J) X + O(ε),    (9)

where k is the numerical ε-rank. The ID can be computed from a rank-revealing or column-pivoted QR decomposition [16, 45]

    A Π = Q [ R_1  R_2 ],    (10)

where R_1 is upper-triangular, followed by a triangular solve, such that

    A = (Q R_1) [ I  R_1^{-1} R_2 ] Π^* = A(:, J) X.    (11)
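As an illustration of how an ID can be obtained in practice from a column-pivoted QR factorization, the following sketch uses LAPACK's dgeqp3 followed by a triangular solve with R_1. For simplicity the truncation rank k is given as an input (no adaptive rank detection), and this is a standalone example rather than the STRUMPACK routine.

  #include <algorithm>
  #include <cstddef>
  #include <vector>
  #include <lapacke.h>
  #include <cblas.h>

  // Compute an interpolative decomposition A ~= A(:,J) * X of the m x n
  // column-major matrix A, truncated at rank k. On return, J holds the k
  // selected (0-based) column indices and X is k x n, column-major.
  // A is overwritten by its QR factors, as is usual for dgeqp3.
  void interpolative_decomposition(int m, int n, std::vector<double>& A, int k,
                                   std::vector<int>& J, std::vector<double>& X) {
    std::vector<lapack_int> jpvt(n, 0);        // 0: column is free to be pivoted
    std::vector<double> tau(std::min(m, n));
    // Column-pivoted QR: A * P = Q * R, pivots returned (1-based) in jpvt.
    LAPACKE_dgeqp3(LAPACK_COL_MAJOR, m, n, A.data(), m, jpvt.data(), tau.data());
    J.assign(jpvt.begin(), jpvt.begin() + k);
    for (auto& j : J) --j;                     // convert to 0-based indices
    // X = [ I  R1^{-1} R2 ] * P^T, with R1 the leading k x k block of R.
    X.assign((std::size_t)k * n, 0.0);
    std::vector<double> R2((std::size_t)k * (n - k));
    for (int j = 0; j < n - k; ++j)            // copy R2 = R(1:k, k+1:n)
      for (int i = 0; i < k; ++i)
        R2[i + (std::size_t)j * k] = A[i + (std::size_t)(k + j) * m];
    cblas_dtrsm(CblasColMajor, CblasLeft, CblasUpper, CblasNoTrans, CblasNonUnit,
                k, n - k, 1.0, A.data(), m, R2.data(), k);  // R2 <- R1^{-1} R2
    for (int i = 0; i < k; ++i) {              // scatter [I R1^{-1}R2] through P^T
      X[i + (std::size_t)(jpvt[i] - 1) * k] = 1.0;
      for (int j = 0; j < n - k; ++j)
        X[i + (std::size_t)(jpvt[k + j] - 1) * k] = R2[i + (std::size_t)j * k];
    }
  }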
Function HSSCompress(R^r, R^c, S^r, S^c, τ, ε)
       Data: random matrices R^r, R^c and the samples S^r = A R^r, S^c = A^* R^c
       Result: D_τ (leaves), B_{ν1,ν2}, B_{ν2,ν1} (non-leaves), U_τ, V_τ (all nodes except the root)
       foreach child ν of τ do HSSCompress(R^r, R^c, S^r, S^c, ν, ε)
       if τ is a leaf then
             D_τ ← A(I_τ, I_τ)
             S^r_τ ← S^r(I_τ, :) − D_τ R^r(I_τ, :);   S^c_τ ← S^c(I_τ, :) − D_τ^* R^c(I_τ, :)
       else // ν1 and ν2 are the children of node τ
             extract B_{ν1,ν2} and B_{ν2,ν1} as submatrices of A
             merge the reduced samples and random matrices of ν1 and ν2 and subtract the
             contributions of B_{ν1,ν2} and B_{ν2,ν1} to obtain the local samples S^r_τ, S^c_τ
       if τ is not the root then
             compress the local samples with interpolative decompositions to tolerance ε:
             S^r_τ ≈ U_τ S^r_τ(J^r_τ, :),   S^c_τ ≈ V_τ S^c_τ(J^c_τ, :)
             restrict the index sets and the reduced random and sample matrices to the
             selected rows J^r_τ, J^c_τ and pass them to the parent
Algorithm 1 Computing the HSS factorization of a nonsymmetric matrix.

A consequence of using the ID in Algorithm 1 is that B_{ν1,ν2} is a submatrix of the original matrix A. Furthermore, it also leads to a special structure for the U and V generators:

    U_τ = Π^r_τ [ I ; E_τ ],    V_τ = Π^c_τ [ I ; F_τ ],    (12)

referred to as interpolative bases, which can be exploited in the computations. Note that these interpolative bases are not orthonormal. Although creating orthonormal bases might slightly improve stability, the interpolative structure improves the performance of the compression algorithm and of the ULV decomposition, see Section 2.3.

2.2.1 Adaptive scheme to determine the HSS-rank

In practice, however, the HSS-rank of the matrix is not known in advance. In this case, Algorithm 1 can be called repeatedly while increasing the number of columns d of R^r, R^c, S^r and S^c. As long as d < r, the ID in Algorithm 1 will fail to reach the requested accuracy. Suppose the ID fails at node τ, i.e., the required accuracy is not reached, but the descendants of node τ are successfully compressed. In that case, during the next iteration of Algorithm 1 with a larger d, it is not necessary to redo the compression (ID) or the extraction of the generators for the descendants of node τ. However, those descendants do have to process the new columns of their local samples and of the reduced random matrices passed to their parents. In [47], this adaptive rank scheme is presented in more detail.
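A minimal sketch of this adaptive loop is given below; the types and routines (HSSTree, hss_compress, add_random_columns, update_samples) are hypothetical placeholders for the corresponding STRUMPACK functionality.

  // Hypothetical interfaces for the routines described in the text.
  struct HSSTree;  struct Samples;  struct RandomMatrices;
  bool hss_compress(HSSTree& tree, RandomMatrices& R, Samples& S, double eps);
  void add_random_columns(RandomMatrices& R, int d_inc);
  void update_samples(Samples& S, const RandomMatrices& R, int d_old, int d_inc);

  // Adaptive-rank driver (sketch): keep adding random vectors until the
  // interpolative decompositions at all nodes reach the tolerance eps.
  // Already-compressed nodes are not recompressed; they only process the
  // newly added sample columns.
  int adaptive_hss_compression(HSSTree& tree, RandomMatrices& R, Samples& S,
                               int d0, int d_inc, double eps) {
    int d = d0;
    while (!hss_compress(tree, R, S, eps)) {
      add_random_columns(R, d_inc);      // append d_inc columns to R^r and R^c
      update_samples(S, R, d, d_inc);    // compute only the new columns of S^r, S^c
      d += d_inc;
    }
    return d;                            // total number of random vectors used
  }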

2.2.2 Implementation issues

The random matrices R^r and R^c are filled element by element using a pseudo-random number generator. Our implementation offers the minstd_rand and mt19937 generators from the C++11 standard, while the distribution can be either uniform or standard normal (Gaussian). By default, the linear congruential engine minstd_rand is selected in combination with the Gaussian distribution; this choice is motivated further in Section 4.
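Element-wise generation maps directly onto the C++11 <random> facilities; the snippet below is a simplified illustration (not the actual STRUMPACK code) of filling one column with the default engine/distribution combination.

  #include <random>
  #include <vector>

  // Fill a vector with pseudo-random samples. Both engines and both
  // distributions mentioned in the text are available in <random>.
  std::vector<double> random_column(std::size_t n, unsigned seed) {
    std::minstd_rand engine(seed);                  // or std::mt19937
    std::normal_distribution<double> gauss(0., 1.); // or std::uniform_real_distribution<double>
    std::vector<double> col(n);
    for (auto& x : col) x = gauss(engine);
    return col;
  }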

The rank-revealing QR factorization, used in the ID, could be replaced by a strong rank-revealing QR factorization [28], with possibly greater accuracy and smaller HSS-rank. This is left as future work. Two interesting alternative approaches to the randomized compression routine discussed in this section should be mentioned, namely adaptive cross approximation [10] and a matrix-free approach presented in [37].

2.3 ULV-like factorization and solve

Solving a linear system with an HSS matrix can be done by first computing a so-called ULV decomposition [18], A = ULV^*, where U and V are unitary matrices and L is lower triangular. However, in [55] and [57], the ULV decomposition is modified to take advantage of the special structure of the U and V generators, see (12). The resulting algorithm is referred to as ULV-like, since it is no longer based on unitary transformations.

In the first step of a ULV factorization, zeros are introduced in the HSS block rows. This step can be done using, for instance, a full QL factorization

    U_τ = Q_τ [ 0 ; Ũ_τ ].    (13)

However, thanks to the special structure (12) of U_τ, a multiplication from the left with a carefully chosen Ω_τ is much cheaper and has a similar effect:

    Ω_τ U_τ = [ −E_τ  I ; I  0 ] (Π^r_τ)^* Π^r_τ [ I ; E_τ ] = [ 0 ; I ].    (14)

We refer to [47] for a detailed description of the ULV factorization and the corresponding solve.

3 Multifrontal sparse LU factorization

This section briefly recalls the main ingredients of the multifrontal method for the LU factorization of general invertible sparse matrices. For a more detailed discussion of multifrontal methods, see [23, 38]. The method casts the factorization of a sparse matrix into a series of partial factorizations of many smaller dense matrices and Schur complement updates.

3.1 Matrix reordering

As a preprocessing step, A is first scaled and permuted for numerical stability: A ← D_r A D_c Q_c, where D_r and D_c are diagonal matrices that scale the rows and columns of A and Q_c is a column permutation that places large entries on the diagonal. We use the MC64 code by Duff and Koster [22] to perform the scaling and column permutation. Popular alternative scaling algorithms can be found in [48, 7, 20]. After that, a fill-reducing permutation P is applied symmetrically in order to reduce the number of nonzero elements in the LU factors. The permutation matrix P is computed using nested dissection applied to the (symmetrized) adjacency graph of A, using one of the graph partitioning tools SCOTCH [44] or METIS [32]. Instead of nested dissection, other heuristics like AMD [5] can be used.

The multifrontal method relies on a structure called the elimination tree. The elimination tree serves as a task and data-dependency graph for both the factorization and the solution process. A few equivalent definitions of the elimination tree are available. We use the following one, and we recommend the survey by Liu [38] for more detail on the method and the survey by L’Excellent [36] for more detail about implementation issues like parallelism, memory usage and numerical aspects.

Definition 1.

Assume A = LU, where A is an N × N sparse, structurally symmetric matrix. The elimination tree of A is a tree with N nodes, where the i-th node corresponds to the i-th column of L, and with the parent relations defined by parent(j) = min{ i > j : ℓ_{ij} ≠ 0 }, for j = 1, …, N − 1.
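For illustration, the parent pointers of the elimination tree can be computed with the classical algorithm of Liu [38] using path compression; the following sketch assumes the pattern of the upper triangular part of A (or the full symmetric pattern) in compressed sparse column format and is not the STRUMPACK implementation.

  #include <vector>

  // Compute the elimination tree of a structurally symmetric N x N sparse
  // matrix, given the pattern of its upper triangular part in compressed
  // sparse column format (ptr, row_index). Returns parent[j] for each column
  // j, or -1 for roots. Path compression ("virtual ancestors") keeps the
  // traversals short.
  std::vector<int> elimination_tree(int N, const std::vector<int>& ptr,
                                    const std::vector<int>& row_index) {
    std::vector<int> parent(N, -1), ancestor(N, -1);
    for (int j = 0; j < N; ++j) {
      for (int p = ptr[j]; p < ptr[j + 1]; ++p) {
        int i = row_index[p];
        // follow the path from row i towards the root of its current subtree,
        // compressing the path so that it points to j
        while (i != -1 && i < j) {
          int next = ancestor[i];
          ancestor[i] = j;                // path compression
          if (next == -1) parent[i] = j;  // i had no parent yet: j becomes it
          i = next;
        }
      }
    }
    return parent;
  }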

In practice, nodes are amalgamated: nodes that represent columns and rows of the factors with similar structures are grouped together in a single node. For instance, when using a nested dissection reordering, all vertices from the same graph separator can be grouped in one elimination tree node. In the end, each node corresponds to a square dense matrix, referred to as a frontal matrix, with the following block structure:

    F_τ = [ F_11  F_12 ; F_21  F_22 ].    (15)

3.2 Numerical factorization

Multifrontal factorization of the matrix consists in a bottom-up traversal of the elimination tree, following a topological order (a node is processed before its parent). Processing a node τ means first forming (or assembling) the frontal matrix F_τ, followed by elimination of the fully-summed variables in the F_11 block, and finally a Schur complement update step. The frontal matrix is formed by summing the rows and columns of A corresponding to the variables in the F_11, F_12 and F_21 blocks with the temporary data – the extended update matrices – that have been produced by the children of τ after their elimination step, i.e.,

    F_τ = A(I_τ, I_τ) ⊕ U_{ν1} ⊕ U_{ν2},    (16)

where I_τ is the set of row and column indices of F_τ w.r.t. the global matrix A, after reordering, and ν1 and ν2 are the children of τ. Eliminating the fully-summed variables in the F_11 block is done through a partial factorization of F_τ, typically via a standard dense LU factorization of the F_11 block. Next, the Schur complement (contribution block or update matrix) U_τ = F_22 − F_21 F_11^{-1} F_12 is computed and stored in temporary memory. In contrast to the elimination step, which uses straightforward dense matrix operations (high performance LAPACK/BLAS3 codes), the assembly step (16) requires index manipulation and indirect addressing while summing up the update matrices. For example, if two children's update matrices have subscript sets {1, 2} and {1, 3}, respectively, then those update matrices can only be added after aligning the index sets of the two matrices by padding with zero entries:

    [ u_11  u_12 ; u_21  u_22 ] ⊕ [ w_11  w_13 ; w_31  w_33 ]
      = [ u_11  u_12  0 ; u_21  u_22  0 ; 0  0  0 ] + [ w_11  0  w_13 ; 0  0  0 ; w_31  0  w_33 ].    (17)

This summation operation is called extend-add, denoted by ⊕. The relationship between frontal matrices and update matrices can be written as F_τ = A(I_τ, I_τ) ⊕ U_{ν1} ⊕ ⋯ ⊕ U_{νq}, where nodes ν1, …, νq are the children of τ.
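A sketch of the extend-add operation for one child, assuming sorted global index sets and dense column-major storage, is shown below; the actual code additionally distinguishes the F_11, F_12, F_21 and F_22 blocks of the front. The function name extend_add is illustrative.

  #include <cstddef>
  #include <vector>

  // Scatter-add ("extend-add") a child's update matrix Uc, defined on the
  // sorted global index set Ic, into the parent frontal matrix F, defined on
  // the sorted global index set Ip (with Ic a subset of Ip).
  void extend_add(std::vector<double>& F, const std::vector<int>& Ip,
                  const std::vector<double>& Uc, const std::vector<int>& Ic) {
    int np = (int)Ip.size(), nc = (int)Ic.size();
    // relative position of each child index inside the parent index set
    std::vector<int> rel(nc);
    for (int i = 0, j = 0; i < nc; ++i) {
      while (Ip[j] < Ic[i]) ++j;   // both index sets are sorted
      rel[i] = j;
    }
    for (int c = 0; c < nc; ++c)
      for (int r = 0; r < nc; ++r)
        F[rel[r] + (std::size_t)rel[c] * np] += Uc[r + (std::size_t)c * nc];
  }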

Each partial factorization might involve pivoting within the frontal matrix. It can also happen that no suitable pivot can be found during a step of partial factorization. In this situation, the corresponding row and column remain unfactored and are sent to the parent node. This strategy is used for instance in the MUMPS [6] code. Currently our code does not perform any such delayed pivoting, but instead relies on static pivoting (using MC64) and partial pivoting during the LU decomposition of the blocks.

3.3 Solution

Once the factors are computed, the solution of Ax = b is computed in two steps: forward solution, i.e., a triangular solve with the L factor, followed by backward substitution, i.e., a triangular solve with the U factor. The forward solution step is a bottom-up topological traversal of the elimination tree, while the backward substitution is a top-down traversal.

4 Multifrontal solver with HSS frontal matrices

This section explains how a multifrontal solver, see Section 3, can be used in combination with the HSS data-structures and algorithms from Section 2 to improve the computational complexity and storage requirements. This section closely follows [55].

4.1 Selection of HSS frontal matrices

Note that the largest frontal matrices, those that determine the computational complexity of the solver, typically correspond to nodes closer to the root of the elimination tree. Let the top of the tree, i.e., the root node, be at level ℓ = 0 of the tree. Then, define a switch-level ℓ_s such that the frontal matrices at levels ℓ ≥ ℓ_s of the elimination tree are stored as regular dense matrices, whereas those at levels ℓ < ℓ_s are compressed using the HSS format. According to the analysis in [55], ℓ_s should be chosen such that the factorization cost above and below the switch-level are equal. However, this rule is not very practical and experiments show that performance depends crucially on the choice of ℓ_s.

4.2 Separator reordering

Apart from the scaling and permutation of A for stability, and the nested dissection reordering to reduce fill-in, an additional reordering is applied to the index set of each separator. This reordering is needed to obtain a favorable HSS rank structure in the corresponding frontal matrices. It is computed by recursively bisecting the graph of the separator into subgraphs of a prescribed approximate size (a tunable parameter), using a graph partitioning tool (SCOTCH or METIS). Each partition then corresponds to a leaf in the HSS tree of the corresponding frontal matrix. However, since a separator graph can be disconnected, it is enriched with length-two connections from the connectivity graph before it is passed to the partitioner, see also the discussion in [42]. Note that other reorderings can be used instead of nested dissection. The influence of the reordering on the ranks of off-diagonal blocks is studied in [53].

4.3 Skinny extend-add

From here on, we assume that a binary elimination tree is used. The steps followed for each HSS frontal matrix F_τ are as follows. First, a random matrix R_τ is constructed. If the children ν1 and ν2 of τ are also HSS, then R_τ is constructed as

    R_τ = R_{ν1} ⊕_r R_{ν2},    (18)

where ⊕_r denotes an extend-merge operation on the rows only: rows of R_τ corresponding to indices that also appear in a child are copied from that child, while the remaining rows are newly generated.

Figure 2: Illustration of the extend-merge procedure for the random vectors at a parent node (node 3) with two children (nodes 1 and 2). Rows of the parent's random matrix that also appear in a child are copied from that child; rows that do not appear in any child are generated with the properly seeded pseudo-random number generator. When the adaptive HSS compression scheme decides that an extra column has to be added, those elements are also generated.

The random matrices of the children are merged in the parent and any elements not present in either of the children's random matrices are generated. This extend-merge procedure is illustrated in Figure 2. If node τ has no (HSS) children, R_τ is generated directly. However, it is important that corresponding “random” entries in R_{ν1} and R_{ν2} are equal, since that allows efficient evaluation of the row sample S^r_τ = F_τ R_τ (similarly for the column sample S^c_τ) based on

    S^r_τ = A(I_τ, I_τ) R_τ ⊕_r (U_{ν1} R_τ|_{I_{ν1}}) ⊕_r (U_{ν2} R_τ|_{I_{ν2}}),    (19)

where U_{ν1} and U_{ν2} are the children's update matrices, R_τ|_{I_ν} denotes the subset of rows of R_τ which are also in I_ν, and ⊕_r denotes an extend-add operation where the extend is only done for the rows, not the columns. By the construction (18) of R_τ, the rows of R_τ needed in (19) are already available at node τ, which is convenient for its evaluation. Evaluation of the products with the children's update matrices is discussed in more detail in Section 4.4. When generating rows of R_τ, the random generator is seeded for each row using the global row index, to ensure that R_τ is consistent with its sibling. This frequent seeding is the reason the linear congruential pseudo-random engine minstd_rand was chosen as default over, for instance, the mt19937 Mersenne-Twister, which has a much bigger internal state.
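The per-row seeding can be sketched as follows; re-seeding a lightweight engine such as std::minstd_rand for every row is cheap, which is exactly why it was preferred over std::mt19937 with its large internal state. Names and calling convention here are illustrative.

  #include <cstddef>
  #include <random>
  #include <vector>

  // Generate row i_global of a random matrix R with d columns in a
  // reproducible way: any node that needs this row obtains exactly the same
  // values, because the engine is re-seeded with the global row index.
  // R is column-major with leading dimension ld; i_local is the row position
  // within this node's block. (The adaptive scheme would additionally have to
  // skip the columns generated in earlier passes.)
  void fill_random_row(std::vector<double>& R, int ld, int d,
                       int i_local, unsigned i_global) {
    std::minstd_rand engine(i_global);            // cheap to (re)seed per row
    std::normal_distribution<double> gauss(0., 1.);
    for (int j = 0; j < d; ++j)
      R[i_local + (std::size_t)j * ld] = gauss(engine);
  }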

The frontal matrices at the top ℓ_s levels of the elimination tree are completely approximated by HSS and are never explicitly formed as dense matrices. This is in contrast to earlier, so-called partially structured approaches where, for instance, only the F_11 block, or the F_11, F_12 and F_21 blocks, are compressed [52]. Partially structured approaches typically at one point or another form a dense representation of the F_22 block, perform the Schur complement update on it, and then use this dense update matrix in the extend-add procedure. This is done to avoid having to perform an overly complicated extend-add operation on HSS matrices. However, the approach followed here does not require first assembling a dense frontal matrix before doing HSS compression. This is due to the use of the randomized HSS compression, Algorithm 1, which only requires matrix-vector multiplication and extraction of selected elements from the frontal matrix.

When R_τ, S^r_τ and S^c_τ have been constructed, HSS compression using Algorithm 1 can be performed. However, when the number of random vectors d is less than the HSS-rank of F_τ, Algorithm 1 will fail. In that case, extra columns are added to R_τ, i.e., d ← d + Δd, the new columns of S^r_τ and S^c_τ are computed, and Algorithm 1 is called again, this time only updating the new columns of R_τ, S^r_τ and S^c_τ. Due to the use of the ID in Algorithm 1, the HSS generators B_{ν1,ν2} and B_{ν2,ν1} are submatrices of F_τ. Hence, a routine to extract specific elements from F_τ is required. This routine will be described in Section 4.5.

4.4 ULV factorization and low-rank Schur complement update

After HSS compression, a factorization of F_τ is performed: classical row-pivoted LU if F_τ is dense, ULV if it is HSS. For a dense frontal matrix, the Schur complement U_τ = F_22 − F_21 F_11^{-1} F_12 is computed explicitly. In the HSS case, F_22 is kept in HSS form and the Schur complement update is stored as a low-rank product. Expressions for this low-rank update are derived and presented in detail in [55] for symmetric and in [57] for non-symmetric matrices. Given this representation, the multiplication with the update matrix U_τ in (19) can be performed efficiently using an HSS matrix-vector multiplication for F_22 and two dense (rectangular) matrix products for the low-rank factors of the update.

4.5 Extracting elements from an HSS matrix

Finally, extracting elements from F_τ requires extracting elements from an HSS matrix. In [55] a routine is presented for extracting multiple elements from an HSS matrix while trying to minimize the number of traversals through the HSS tree. We use a conceptually simpler algorithm based on the HSS matrix-vector multiplication. By multiplying an HSS matrix with unit vectors, selected columns can be extracted. At the leaf nodes, instead of multiplying with a unit vector, one can simply select the proper columns or rows of the generators. Unlike for matrix-vector multiplication, during element extraction parts of the tree traversal can be pruned.

4.6 Preconditioning versus iterative refinement

Direct solvers often use a few steps of iterative refinement to improve the solution quality [54]. However, the multifrontal method with HSS compression as presented in this paper is used as a preconditioner for GMRES instead. For the same number of multifrontal solve steps (preconditioner applications), a Krylov solver typically leads to smaller residuals than iterative refinement. This is particularly useful when the HSS compression tolerance is increased, since in that case the HSS-multifrontal method is no longer an exact direct solver and the number of outer iterations increases.

4.7 Solver complexity

The computational complexity of a standard multifrontal solver is typically dominated by the dense linear algebra corresponding to the few largest frontal matrices. For instance, a nested dissection reordering on a d-dimensional mesh with n^d vertices has a top separator with O(n^{d−1}) vertices, leading to an overall complexity of O(n^{3(d−1)}), i.e., O(N^{3/2}) and O(N²) for 2D and 3D meshes with N unknowns, respectively.
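To make the operation count explicit, the following back-of-the-envelope derivation (an illustration, not quoted from the original analysis) shows how the dense factorization of the top separator dominates:

  % Dense LU of the top separator dominates the multifrontal cost.
  % A d-dimensional mesh with N = n^d vertices has a top separator of O(n^{d-1}) vertices.
  \mathrm{flops}(\mathrm{MF}) = O\big((n^{d-1})^3\big) = O\big(n^{3(d-1)}\big)
    = \begin{cases} O(n^3) = O(N^{3/2}), & d = 2,\\ O(n^6) = O(N^{2}), & d = 3. \end{cases}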

For the HSS-embedded multifrontal solver, the complexity is dominated by the HSS compression of the dense frontal matrices, which in turn depends on the rank pattern. Earlier works by Chandrasekaran et al. [17] and Engquist & Ying [25] showed the rank patterns of the elliptic and the Helmholtz operators respectively. Xia showed complexities for the randomized HSS multifrontal solver assuming different rank patterns [55]. Combining the above results, we summarize the solver complexities for two types of PDEs and two sparse solvers, see Table 1.

 problem                  HSS rank    MF factor flops    MF memory      MF-HSS-RS factor flops    MF-HSS-RS memory
 2D (N = n²)  elliptic                O(N^{3/2})         O(N log N)
              Helmholtz               O(N^{3/2})         O(N log N)
 3D (N = n³)  elliptic                O(N²)              O(N^{4/3})
              Helmholtz               O(N²)              O(N^{4/3})
Table 1: Summary of the complexities of the standard multifrontal solver (MF) and the randomized HSS-embedded multifrontal solver (MF-HSS-RS) applied to two important classes of problems. The mesh size per side is n and the matrix dimensions are N = n² in 2D and N = n³ in 3D.

5 Shared memory parallel implementation

The algorithm presented in Section 4 has been implemented using C++ and OpenMP, targeting shared memory platforms. The code relies on BLAS, LAPACK, METIS and/or SCOTCH and a recent C++11 compliant compiler with support for OpenMP 3.1 or higher. The code makes heavy use of the OpenMP task construct. OpenMP was chosen because it is easy to use, performs well and is well documented and supported. However, alternatives like Intel® Threading Building Blocks [46] or Cilk(+) [11] offer conceptually similar task parallelism. Switching to one of those should not be hard. While other runtime systems like QUARK [58], DAGuE/PaRSEC [14], StarPU [9] (which supports distributed memory task scheduling) and OmpSs [24] might have certain specific advantages over the OpenMP runtime, many of those innovations, for instance explicit modeling of task dependencies or task-offloading, are eventually incorporated in the OpenMP standard as well.

OpenMP tasks are created and scheduled at runtime by the scheduler. Task schedulers typically use a work stealing [12] or task stealing strategy to balance load between threads. Each thread/core has its own local queue of tasks. When a thread runs out of work it can steal a task from one of the other thread’s task queues.

5.1 Task based tree parallelism

Traversals of both the elimination tree and the HSS hierarchy allow for tree parallelism, i.e., independent subtrees can be processed concurrently. For instance, multifrontal factorization requires a bottom-up topological traversal of the elimination tree, just like HSS compression requires a bottom-up traversal of the HSS hierarchy. The code in Listing 1 shows how to do a parallel bottom-up tree traversal using the OpenMP task construct. The tree is stored as objects of a class Tree with two members left_child and right_child, both pointers to subtrees, also objects of type Tree. In Listing 1, the variable depth keeps track of the recursion depth and no more tasks are generated after a certain depth d_max, to avoid the overhead of creating too many fine-grained tasks. Experiments show that a modest value of d_max leads to a good task granularity: it bounds the number of tasks that exist at any given point in time, which is enough to ensure good load balance while avoiding excessive task creation overhead. The OpenMP task construct supports an if clause, so the check if(depth < d_max) could have been put in the OpenMP pragma. However, optimizing the code to perform this check outside the directive completely avoids all task creation and synchronization overhead when it evaluates to false. The final(condition) clause informs the OpenMP runtime that the generated task will not generate more tasks if condition evaluates to true. Finally, the untied clause informs the runtime that this task can be moved to a different thread when it encounters a scheduling point. For instance, when a task spawns a new task, the spawning task may be moved to another thread. Untied tasks allow for better load balance, whereas tied tasks (the default) typically lead to better data locality. The taskwait pragma ensures processing of the children is finished before continuing with the parent.

  void Tree::postorder(int depth) { // initially called with depth = 0
    if (depth < d_max) {
      if (left_child)
#pragma omp task untied default(shared) final(depth >= d_max-1) mergeable
        left_child->postorder(depth+1);
      if (right_child)
#pragma omp task untied default(shared) final(depth >= d_max-1) mergeable
        right_child->postorder(depth+1);
#pragma omp taskwait
    } else {
      if (left_child) left_child->postorder(depth+1);
      if (right_child) right_child->postorder(depth+1);
    }
    do_stuff(depth); // factor/compress..., can generate more tasks
  }
Listing 1: Bottom-up topological parallel tree traversal implemented with recursion and the OpenMP (3.1) task construct.
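Note that OpenMP tasks only run in parallel when they are created inside an active parallel region; a typical driver therefore starts the recursion from a single thread, as in the sketch below (the member function name parallel_postorder is illustrative).

  void Tree::parallel_postorder() {
#pragma omp parallel
#pragma omp single nowait
    this->postorder(0); // root call; tasks are generated recursively
  }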

5.2 Hybrid node and tree parallelism

Exploiting tree parallelism alone as in Listing 1 does not scale well due to the limited degree of parallelism near the root. Although the HSS-multifrontal algorithm can exploit two nested levels of tree parallelism (elimination tree and HSS hierarchy), the scaling bottleneck remains. To overcome this, one needs to exploit parallelism in the computational work inside the tree nodes, which are mostly dense linear algebra operations. However, work sharing constructs like OpenMP parallel for loops are not allowed within OpenMP tasks. Moreover, calling multithreaded BLAS or LAPACK routines from multiple tasks/threads leads to over-subscription and generally poor performance. This is because existing multithreaded BLAS/LAPACK libraries are optimized to use the entire machine. One possible strategy is to exploit tree parallelism only for the lower levels of the tree and switch to a sequential processing of the nodes higher up in the tree while switching to multithreaded linear algebra. However, this leads to many synchronization points and does not scale with increasing number of threads. Our approach on the other hand is to use task parallelism within the tree nodes as well to allow for a seamless transition between tree and node parallelism, since scheduling of tasks is left to the runtime system. When getting closer to the root node there is a shift from tree to node parallelism. This is illustrated in Figure 3. Even in the case of highly unbalanced trees, the runtime can assign work evenly to the available cores. We chose not to use an existing library for the task based dense linear algebra, for instance PLASMA (based on the QUARK runtime), since we wished to exploit the same threading mechanism (OpenMP) already used for the tree parallelism.

(a) Three nested levels of parallelism.
(b) Constant overall concurrency.
Figure 3: Schematic illustration of the different types of concurrency in the code and the gradual shift from tree parallelism to in-node parallelism. (a) Tasks for dense kernels are nested in nodes of the HSS trees, which are in turn nested in the elimination tree (e-tree) nodes. (b) Left-to-right, top-to-bottom: (1) Elimination tree concurrency decreases when getting closer to the root node. (2) Closer to the root of the elimination tree, more HSS tree concurrency is exploited as it becomes available, i.e., while moving down the HSS tree away from its root. (3) Towards the root of the HSS tree and the root of the elimination tree, more in-node concurrency (task-parallel dense linear algebra) is exploited. (4) The product of the three types of concurrency, i.e., the overall concurrency, remains constant throughout both the elimination and HSS trees.

5.3 Parallel BLAS and LAPACK

One of the most time consuming operations of the algorithm is dense matrix-matrix multiplication, C ← AB + C. This can be implemented easily with recursion and task parallelism [41], by splitting the problem into smaller matrix-matrix multiplications; this strategy is referred to as divide-and-conquer and is often used in so-called cache-oblivious algorithms [26]. How the matrices are split depends on their shapes. Let A be m × k and B be k × n; then

    (1) if m, n and k are all at most a cut-off t:  call a sequential gemm;
    (2) if m is the largest dimension:  [A_1 ; A_2] B = [A_1 B ; A_2 B];
    (3) if n is the largest dimension:  A [B_1  B_2] = [A B_1  A B_2];
    (4) if k is the largest dimension:  [A_1  A_2] [B_1 ; B_2] = A_1 B_1 + A_2 B_2.    (20)

The last case in (20), a short fat A times a tall skinny B, uses two consecutive recursive matrix-matrix multiplication calls. Cases 2 and 3 start two multiplications in parallel, spawning two tasks. The recursion ends when reaching case 1, with the tuning parameter t controlling the granularity, where a sequential vendor optimized BLAS3 *gemm routine is called. Depending on the scalar type, one of four inlined template specialization functions for gemm<scalar> is executed to pick the correct version: sgemm, dgemm, cgemm or zgemm. For the other BLAS2/3 routines that are required, for instance triangular matrix multiplication and solve, a similar recursive approach is used. This recursive task generation is also stopped when the recursion depth becomes too large, with the same depth parameter being passed through the entire code and incremented each time it enters a new task.
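The splitting strategy can be sketched as follows (simplified to double precision without scaling factors; cblas_dgemm stands in for the vendor gemm call, and t, d_max are the cut-off and recursion-depth parameters).

  #include <cstddef>
  #include <cblas.h>

  // C (m x n) += A (m x k) * B (k x n), all column-major with leading
  // dimensions lda, ldb, ldc. Recursively split the largest dimension and
  // spawn OpenMP tasks; fall back to sequential BLAS below the cut-off t.
  void gemm_rec(int m, int n, int k, const double* A, int lda,
                const double* B, int ldb, double* C, int ldc,
                int t, int depth, int d_max) {
    if ((m <= t && n <= t && k <= t) || depth >= d_max) {
      cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans, m, n, k,
                  1.0, A, lda, B, ldb, 1.0, C, ldc);
    } else if (m >= n && m >= k) {          // split A into two block rows
#pragma omp task default(shared) untied
      gemm_rec(m/2, n, k, A, lda, B, ldb, C, ldc, t, depth+1, d_max);
#pragma omp task default(shared) untied
      gemm_rec(m-m/2, n, k, A+m/2, lda, B, ldb, C+m/2, ldc, t, depth+1, d_max);
#pragma omp taskwait
    } else if (n >= m && n >= k) {          // split B into two block columns
#pragma omp task default(shared) untied
      gemm_rec(m, n/2, k, A, lda, B, ldb, C, ldc, t, depth+1, d_max);
#pragma omp task default(shared) untied
      gemm_rec(m, n-n/2, k, A, lda, B+(std::size_t)ldb*(n/2), ldb,
               C+(std::size_t)ldc*(n/2), ldc, t, depth+1, d_max);
#pragma omp taskwait
    } else {                                // split k: two consecutive calls
      gemm_rec(m, n, k/2, A, lda, B, ldb, C, ldc, t, depth+1, d_max);
      gemm_rec(m, n, k-k/2, A+(std::size_t)lda*(k/2), lda, B+k/2, ldb,
               C, ldc, t, depth+1, d_max);
    }
  }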

The code also requires some LAPACK functionality, namely LQ, LU and RRQR decompositions. For those, we modify the reference Fortran LAPACK implementation to make use of our parallel (tasked) BLAS routines. Some vendor optimized LAPACK libraries do not just use the LAPACK reference code on top of multithreaded BLAS calls, but add additional optimizations to the LAPACK routines. Unfortunately, in our approach we cannot take advantage of these optimized multithreaded codes.

Figure 4: Speedups over sequential getrf from Intel® MKL for a range of matrix sizes. Left: Recursive LU decomposition using our OpenMP tasked BLAS code. Middle: Reference LAPACK getrf using our OpenMP tasked BLAS code. Right: MKL optimized multithreaded getrf. Our recursive implementation scales better than the reference netlib getrf with parallel BLAS, but worse than the MKL optimized code. However, calling MKL multithreaded getrf from multiple threads simultaneously would lead to over-subscription and a performance penalty. This is not a problem with the recursive LU, because it uses the OpenMP task runtime, just like the rest of the code.

Consider the partially pivoted LU decomposition used for the F_11 block of a dense frontal matrix. Apart from the LAPACK *getrf routine using our OpenMP tasked BLAS routines, we also implemented a recursive LU factorization algorithm [21]. The parallelism in this algorithm has to come from the BLAS routines: triangular solve, row permutation and matrix-matrix multiply. Figure 4 compares the performance and scalability of the two LU decomposition approaches with the MKL optimized implementation and shows that our implementation of LU scales nearly as well as MKL without sacrificing the ability to exploit subtree concurrency. A more scalable approach [15], based on so-called tiled algorithms instead of recursion, partitions the matrices in tiles of fixed sizes and assigns tasks to each of the tiles while explicitly modeling the data dependencies between the tasks. A DAG (directed acyclic graph) scheduler then executes the tasks while respecting their dependencies. OpenMP supports explicit task dependencies since version 4.0 (not all compilers currently support the latest OpenMP 4.0 standard). We intend to exploit this feature in the future to achieve more scalable dense linear algebra operations. For the rank-revealing QR decomposition we use a modified version of the LAPACK *geqp3 code [45], a BLAS3 version of column pivoted QR. The routine is modified to call our parallel tasked BLAS and an extra tolerance parameter is added to stop the rank-revealing process as soon as the ε-rank has been found, instead of computing the full decomposition. More precisely, the numerical rank k is detected when |R_{k+1,k+1}| < ε |R_{1,1}|, where R is the upper-triangular factor.

5.4 Scaling bottlenecks

Before the actual numerical factorization step, but after matrix scaling and nested dissection reordering, a symbolic factorization step is performed. During this step some memory is allocated and the index sets I_τ are assembled. The symbolic factorization is a bottom-up tree traversal which is done in parallel, as in Listing 1. In a multithreaded setting, memory allocation can become a serious scaling bottleneck. We have found that the use of a scalable memory allocator, like TCMalloc [27] or the TBB scalable memory allocator [46], greatly improves the performance over, for instance, the default malloc in glibc (http://www.gnu.org/software/libc/). For instance, running on a 60-core Intel® Xeon Phi, the symbolic factorization phase runs considerably faster when using TBBMalloc instead of the default allocator.

6 Numerical experiments

This section presents various numerical results. Section 6.1 first focuses on some PDE problems on regular grids, as this allows us to easily change the problem size. The following sections consider other matrices from various applications. Unless otherwise stated, the experiments are performed on a single 12-core socket of a single node of the NERSC Edison machine (https://www.nersc.gov/users/computational-systems/edison/). A compute node has two 12-core Intel® Ivy Bridge processors at 2.4 GHz. Double precision peak performance is 19.2 Gflop/s per core, 230.4 Gflop/s per socket or 460.8 Gflop/s per node. Each socket has 32 GB DDR3 1866 MHz memory, hence 64 GB per node; the achievable memory bandwidth was measured with the STREAM benchmark [40]. We use the Intel® 15.0.1 compiler with sequential MKL.

6.1 PDEs on a regular grid

We start with a number of benchmarks of well-known PDEs on regular 2D and 3D grids to study scaling of time-to-solution, number of floating point operations, memory usage, HSS-ranks etc., with respect to problem size. For these regular grids, a geometric nested dissection code is used instead of the default METIS graph partitioner. The following benchmark problems are considered:

  • Poisson equation on a 2D grid using the standard 5-point finite difference stencil with homogeneous Dirichlet boundary conditions.

  • Poisson equation on a 3D grid using the standard 7-point stencil with homogeneous Dirichlet boundary conditions.

  • Convection diffusion equation [43] on a 2D grid using an upwind finite difference stencil, with viscosity ν and velocity field given by

    (21)
  • Convection diffusion, similar as above, on a 3D grid with velocity field given by

    (22)
  • Helmholtz equation

        ( −Δ − ω² / v(x)² ) u(x, ω) = s(x, ω)    (23)

    on a 2D grid, with ω the angular frequency, v(x) the seismic velocity and u(x, ω) the time-harmonic wavefield solution to the forcing term s(x, ω). The discretization uses a compact finite difference stencil at a fixed frequency, with a sampling rate of several points per wavelength and PML boundary conditions. This example uses complex arithmetic.

  • Same as H2D (the 2D Helmholtz problem above), but on a 3D grid using a compact 3D stencil.

A crucial parameter for performance is the number of levels ℓ_s of the elimination tree for which HSS compression is performed. Note that ℓ_s = 0 corresponds to a pure multifrontal solver. Unfortunately, the optimal ℓ_s is impossible to predict a priori, so it is determined experimentally and will always be mentioned with each result. The same applies to the compression tolerance ε. When ℓ_s > 0, i.e., with HSS compression, the multifrontal solver is used as a preconditioner for restarted GMRES with modified Gram-Schmidt and a zero initial guess. Without HSS compression, iterative refinement with the direct solver is used. All experiments are performed in double precision, with relative or absolute stopping criteria ‖r_i‖/‖r_0‖ ≤ rtol or ‖r_i‖ ≤ atol, where r_i = M^{-1}(b − A x_i), with M the approximate multifrontal factorization of A, is the preconditioned residual. The same right-hand side b is used in all experiments.

(a) Numerical factorization time, 2D.
(b) Total solve time, 2D.
(c) Numerical factorization time, 3D.
(d) Total solve time, 3D.
Figure 5: Times for factorization (left) and solve (right) of the 2D (top) and 3D (bottom) Poisson equation as a function of the number of levels ℓ_s of the elimination tree for which HSS compression is applied. Different curves correspond to different HSS compression tolerances ε. For 3D, much more aggressive HSS compression can be used.

Figure 5 shows timing results for the 2D (top) and 3D (bottom) Poisson equation. Figure 5(a) shows the numerical factorization time as a function of the number of levels of the elimination tree for which HSS compression was used. The HSS levels always correspond to the top levels of the elimination tree. This shows that applying HSS compression speeds up the numerical factorization. Different lines correspond to different HSS compression tolerances ε. Somewhat larger factorization speedups are possible with a more aggressive (larger) compression tolerance. However, this does not lead to a faster time-to-solution. Figure 5(b) shows the cumulative time for nested dissection reordering, symbolic factorization, numerical factorization and GMRES solve. For too aggressive compression tolerances, the number of GMRES iterations, and thus the number of applications of the multifrontal solve, increases too much to get an overall speedup. The best results for 2D were obtained with a moderate compression tolerance and number of HSS levels, requiring only a few GMRES iterations (multifrontal solves). Figures 5(c) and 5(d) show the timings for the 3D Poisson problem. For the 3D problem, much more aggressive HSS compression can be used while keeping the number of GMRES iterations small. For the Poisson problem it seems that in 2D the direct solver is very efficient, with a modest speedup from HSS, while in 3D the HSS-enabled factorization leads to a good preconditioner.

(a) 2D Poisson.
(b) 3D Poisson.
Figure 6: Scaling of the number of floating point operations required to factor and solve a 2D (a) or 3D (b) Poisson equation. (a) The theory predicts O(N^{3/2}) complexity for the multifrontal (MF) solver and optimal O(N) [55] complexity with HSS compression. (b) O(N²) complexity for the multifrontal solver and a slightly lower complexity with HSS compression. The fits (black lines) are very sensitive to the data and not very reliable. However, note the smaller exponents and the larger constants for the new solver.

Figure 6(a) shows the total number of flops (numerical factorization and GMRES solve) for solving a 2D Poisson equation as a function of the number of degrees of freedom, again for different compression tolerances. For the pure multifrontal method (no compression), the number of flops scales as O(N^{3/2}), as predicted by the theory