Parallel Sparse Matrix-Matrix Multiplication and Indexing: Implementation and Experiments

This work was supported in part by NSF grant CNS-0709385, and by grants from Intel Corporation and Microsoft Corporation. All authors from LBNL were supported by the ASCR Office in the DOE Office of Science under contract number DE-AC02-05CH11231.

Aydın Buluç, Computational Research Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (abuluc@lbl.gov).    John R. Gilbert, Computer Science Department, University of California, Santa Barbara, CA 93106-5110 (gilbert@cs.ucsb.edu).
Abstract

Generalized sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. Here we show that SpGEMM also yields efficient algorithms for general sparse-matrix indexing in distributed memory, provided that the underlying SpGEMM implementation is sufficiently flexible and scalable. We demonstrate that our parallel SpGEMM methods, which use two-dimensional block data distributions with serial hypersparse kernels, are indeed highly flexible, scalable, and memory-efficient in the general case. This algorithm is the first to yield increasing speedup on an unbounded number of processors; our experiments show scaling up to thousands of processors in a variety of test scenarios.

Keywords: parallel computing, numerical linear algebra, sparse matrix-matrix multiplication, SpGEMM, sparse matrix indexing, sparse matrix assignment, 2D data decomposition, hypersparsity, graph algorithms, sparse SUMMA, subgraph extraction, graph contraction, graph batch update.

AMS subject classifications: 05C50, 05C85, 65F50, 68W10

1 Introduction

We describe scalable parallel implementations of two sparse matrix kernels. The first, SpGEMM, computes the product of two sparse matrices over a general semiring. The second, SpRef, performs generalized indexing into a sparse matrix: given vectors I and J of row and column indices, SpRef extracts the submatrix A(I, J). Our novel approach to SpRef uses SpGEMM as its key subroutine, which regularizes the computation and data access patterns; conversely, applying SpGEMM to SpRef emphasizes the importance of an SpGEMM implementation that handles arbitrary matrix shapes and sparsity patterns, and a complexity analysis that applies to the general case.

Our main contributions in this paper are: first, we show that SpGEMM leads to a simple and efficient implementation of SpRef; second, we describe a distributed-memory implementation of SpGEMM that is more general in application and more flexible in processor layout than before; and, third, we report on extensive experiments with the performance of SpGEMM and SpRef. We also describe an algorithm for sparse matrix assignment (SpAsgn), and report its parallel performance. The SpAsgn operation, formally A(I, J) = B, assigns a sparse matrix B to a submatrix A(I, J) of another sparse matrix A. It can be used to perform streaming batch updates to a graph.

Parallel algorithms for SpGEMM and SpRef, as well as their theoretical performance, are described in Sections 3 and 4. We present the general SpGEMM algorithm and its parallel complexity before SpRef since the latter uses SpGEMM as a subroutine and its analysis uses results from the SpGEMM analysis. Section 3.1 summarizes our earlier results on the complexity of various SpGEMM algorithms on distributed memory. Section 3.3 presents our algorithm of choice, Sparse SUMMA, in a more formal way than before, including a pseudocode general enough to handle different blocking parameters, rectangular matrices, and rectangular processor grids. The reader interested only in parallel SpRef can skip these sections and go directly to Section 4, where we describe our SpRef algorithm, its novel parallelization and its analysis. Section 5 gives an extensive performance evaluation of these two primitives using large scale parallel experiments, including a performance comparison with similar primitives from the Trilinos package. Various implementation decisions and their effects on performance are also detailed.

2 Notation

Let A be a sparse rectangular matrix of elements from a semiring S. We use nnz(A) to denote the number of nonzero elements in A. When the matrix is clear from context, we drop the parenthesis and simply use nnz. For sparse matrix indexing, we use the convenient Matlab colon notation, where A(:, i) denotes the ith column, A(i, :) denotes the ith row, and A(i, j) denotes the element at the (i, j)th position of matrix A. Array and vector indices are 1-based throughout this paper. The length of an array I, denoted by len(I), is equal to its number of elements. For one-dimensional arrays, I(i) denotes the ith component of the array. We use flops(A · B), pronounced “flops”, to denote the number of nonzero arithmetic operations required when computing the product of matrices A and B. Since the flops required to form the matrix triple product R · A · Q differ depending on the order of multiplication, flops((RA) · Q) and flops(R · (AQ)) mean different things. The former is the flops needed to multiply the product RA with Q, whereas the latter is the flops needed to multiply R with the product AQ. When the operation and the operands are clear from context, we simply use flops. The Matlab sparse(i,j,v,m,n) function, which is used in some of the pseudocode, creates an m-by-n sparse matrix with nonzeros A(i(k), j(k)) = v(k).

In our analyses of parallel running time, the latency of sending a message over the communication interconnect is α, and the inverse bandwidth is β, both expressed in terms of time for a floating-point operation (also accounting for the cost of cache misses and memory indirections associated with that floating-point operation). f(x) = Θ(g(x)) means that f(x) is bounded asymptotically by g(x) both above and below.
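For concreteness, under this notation the time to send a single message of w words between two processors follows the standard α-β form (our restatement; the original text states the model only in words):

    T(w) = α + β · w,

so α is charged once per message and β once per word transferred.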

3 Sparse matrix-matrix multiplication

Figure 1: Multiplication of sparse matrices stored by columns [11]. Columns of A are accumulated as specified by the non-zero entries in a column of B using a sparse accumulator or SPA [24]. The contents of the SPA are stored into a column of C once all required columns are accumulated.

SpGEMM is a building block for many high-performance graph algorithms, including graph contraction [25], breadth-first search from multiple source vertices [10], peer pressure clustering [34], recursive all-pairs shortest-paths [19], matching [33], and cycle detection [38]. It is a subroutine in more traditional scientific computing applications such as multigrid interpolation and restriction [5] and Schur complement methods in hybrid linear solvers [37]. It also has applications in general computing, including parsing context-free languages [32] and colored intersection searching [29].

The classical serial SpGEMM algorithm for general sparse matrices was first described by Gustavson [26], and was subsequently used in Matlab [24] and CSparse [20]. That algorithm, shown in Figure 1, runs in O(flops + nnz + n) time, which is optimal when flops is at least as large as nnz and n. It uses the popular compressed sparse column (CSC) format for representing its sparse matrices. Algorithm 1 gives the pseudocode for this column-wise serial algorithm for SpGEMM.

1:procedure Columnwise-SpGEMM(A, B, C)
2:    for j ← 1 to n do
3:        for k where B(k, j) ≠ 0 do
4:            C(:, j) ← C(:, j) + A(:, k) · B(k, j)
Algorithm 1 Column-wise formulation of serial matrix multiplication
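To make the column-wise formulation and the sparse accumulator concrete, the following self-contained C++ sketch multiplies two CSC matrices exactly as in Algorithm 1. It is our own illustration of the technique, not code from the authors' implementation; the struct layout and names are assumptions.

#include <vector>

// Minimal CSC container: column j holds entries rowidx[colptr[j] .. colptr[j+1]).
struct CSC {
    int m = 0, n = 0;            // dimensions
    std::vector<int> colptr;     // size n+1
    std::vector<int> rowidx;     // size nnz
    std::vector<double> val;     // size nnz
};

// Column-wise SpGEMM (C = A*B) using a sparse accumulator (SPA):
// for each column j of B, the referenced columns of A are scaled and
// accumulated into a dense work array, then gathered into column j of C.
// Row indices within a column of C are left unsorted, as in Gustavson's algorithm.
CSC spgemm_csc(const CSC& A, const CSC& B) {
    CSC C;
    C.m = A.m; C.n = B.n;
    C.colptr.assign(B.n + 1, 0);

    std::vector<double> spa(A.m, 0.0);     // dense accumulator values
    std::vector<char>   occupied(A.m, 0);  // nonzero pattern flags
    std::vector<int>    pattern;           // row indices touched in this column

    for (int j = 0; j < B.n; ++j) {
        pattern.clear();
        for (int t = B.colptr[j]; t < B.colptr[j + 1]; ++t) {
            int k = B.rowidx[t];           // B(k, j) is nonzero
            double bkj = B.val[t];
            for (int s = A.colptr[k]; s < A.colptr[k + 1]; ++s) {
                int i = A.rowidx[s];       // accumulate A(i, k) * B(k, j)
                if (!occupied[i]) { occupied[i] = 1; pattern.push_back(i); }
                spa[i] += A.val[s] * bkj;
            }
        }
        for (int i : pattern) {            // gather the SPA into column j of C and reset
            C.rowidx.push_back(i);
            C.val.push_back(spa[i]);
            spa[i] = 0.0;
            occupied[i] = 0;
        }
        C.colptr[j + 1] = static_cast<int>(C.rowidx.size());
    }
    return C;
}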

3.1 Distributed memory SpGEMM

The first question for distributed-memory algorithms is “where is the data?”. In parallel SpGEMM, we consider two ways of distributing data to processors. In 1D algorithms, each processor stores a block of m/p rows of an m-by-n sparse matrix. In 2D algorithms, processors are logically organized as a rectangular p = p_r × p_c grid, so that a typical processor is named P(i, j). Submatrices are assigned to processors according to a 2D block decomposition: processor P(i, j) stores the submatrix A_ij of dimensions (m/p_r)-by-(n/p_c) in its local memory. We extend the colon notation to slices of submatrices: A(i, :) denotes the slice of A collectively owned by all the processors along the ith processor row, and A(:, j) denotes the slice of A collectively owned by all the processors along the jth processor column.
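As a concrete illustration of the 2D block decomposition (our own C++ sketch; the function and field names are hypothetical), the owner of a global entry (i, j) and its local coordinates can be computed as follows, assuming the grid dimensions divide the matrix dimensions:

#include <cstdio>

// 2D block decomposition of an m-by-n matrix on a pr-by-pc processor grid.
// Returns the processor row/column owning global entry (i, j) and the entry's
// local coordinates inside that block (0-based indices).
struct Owner { int prow, pcol, li, lj; };

Owner owner2d(int i, int j, int m, int n, int pr, int pc) {
    int br = m / pr;            // block height
    int bc = n / pc;            // block width
    Owner o;
    o.prow = i / br;            // processor row index
    o.pcol = j / bc;            // processor column index
    o.li   = i % br;            // local row within the block
    o.lj   = j % bc;            // local column within the block
    return o;
}

int main() {
    Owner o = owner2d(5, 7, 8, 8, 2, 2);   // 8x8 matrix on a 2x2 grid
    std::printf("entry (5,7) lives on P(%d,%d) at local (%d,%d)\n",
                o.prow, o.pcol, o.li, o.lj);
}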

We have previously shown that known 1D SpGEMM algorithms are not scalable to thousands of processors [7], while 2D algorithms can potentially speed up indefinitely, albeit with decreasing efficiency. There are two reasons that the 1D algorithms do not scale. First, their auxiliary data structures cannot be loaded and unloaded fast enough to amortize their costs. This loading and unloading is necessary because the 1D algorithms proceed in stages in which only one processor broadcasts its submatrix to the others, in order to avoid running out of memory. Second, and more fundamentally, the communication costs of 1D algorithms are not scalable regardless of data structures. Each processor receives nnz words of data (of either A or B) in the worst case, which implies that communication cost is on the same order as computation, prohibiting speedup beyond a fixed number of processors. This leaves us with 2D algorithms for a scalable solution.

Our previous work [8] shows that the standard compressed sparse column or row (CSC or CSR) data structures are too wasteful for storing the local submatrices arising from a 2D decomposition. This is because the local submatrices are hypersparse, meaning that the ratio of nonzeros to dimension is asymptotically zero. The total memory across all processors for CSC format would be O(n√p + nnz), as opposed to the O(n + nnz) memory needed to store the whole matrix in CSC on a single processor. Thus a scalable parallel 2D data structure must respect hypersparsity.

Similarly, any algorithm whose complexity depends on the matrix dimension, such as Gustavson's serial SpGEMM algorithm, is asymptotically too wasteful to be used as a computational kernel for multiplying the hypersparse submatrices. Our HyperSparseGEMM [6, 8], on the other hand, operates on the doubly compressed sparse column (DCSC) data structure, whose size is strictly O(nnz), and its time complexity does not depend on the matrix dimension. Section 3.2 gives a succinct summary of DCSC.

Our HyperSparseGEMM uses an outer-product formulation whose time complexity is O(nzc(A) + nzr(B) + flops · lg ni), where nzc(A) is the number of columns of A that are not entirely zero, nzr(B) is the number of rows of B that are not entirely zero, and ni is the number of indices i for which A(:, i) ≠ ∅ and B(i, :) ≠ ∅. The extra lg ni factor in the time complexity originates from the priority queue that is used to merge the ni outer products on the fly. The overall memory requirement of this algorithm is the asymptotically optimal O(nnz(A) + nnz(B) + nnz(C)), independent of either matrix dimensions or flops.

3.2 DCSC Data Structure

Figure 2: Matrix A in CSC format

DCSC [8] is a further compressed version of CSC where repetitions in the column pointers array, which arise from empty columns, are not allowed. Only columns that have at least one nonzero are represented, together with their column indices.

Figure 3: Matrix A in triples format
Figure 4: Matrix A in DCSC format

For example, consider the 9-by-9 matrix with 4 nonzeros shown in Figure 3. Figure 2 shows its CSC storage, which includes repetitions and redundancies in the column pointers array. Our new data structure compresses this column pointers array to avoid repetitions, giving the compressed column pointers of DCSC as in Figure 4. DCSC is essentially a sparse array of sparse columns, whereas CSC is a dense array of sparse columns.

After removing repetitions, the ith entry of the column pointers array no longer refers to the ith column. A new array of column indices, parallel to the column pointers, gives us the column numbers. Although our HyperSparseGEMM algorithm does not need column indexing, DCSC can support fast column indexing by building, in linear time, an auxiliary array that contains pointers to the nonzero columns (columns that have at least one nonzero element).
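A minimal DCSC container might look like the following C++ sketch. This is our own illustration, not the Combinatorial BLAS implementation; the array names (jc for column indices, cp for column pointers, ir for row indices, num for values) are labels we chose for clarity, and the auxiliary index for fast column lookup is omitted.

#include <vector>

// Doubly compressed sparse column (DCSC) storage: only nonempty columns
// are represented, so all arrays are O(nnz) regardless of the dimension n.
struct DCSC {
    int nzc = 0;                  // number of columns with at least one nonzero
    std::vector<int> jc;          // column indices of the nonempty columns (size nzc)
    std::vector<int> cp;          // cp[k]..cp[k+1] delimits the k-th nonempty column (size nzc+1)
    std::vector<int> ir;          // row indices of nonzeros (size nnz)
    std::vector<double> num;      // numerical values (size nnz)
};

// Example: iterate over all nonzeros of a DCSC matrix.
template <class F>
void for_each_nonzero(const DCSC& A, F visit) {
    for (int k = 0; k < A.nzc; ++k) {
        int j = A.jc[k];                      // actual column index
        for (int t = A.cp[k]; t < A.cp[k + 1]; ++t)
            visit(A.ir[t], j, A.num[t]);      // (row, column, value)
    }
}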

3.3 Sparse SUMMA algorithm

Our parallel algorithm is inspired by the dense matrix-matrix multiplication algorithm SUMMA [23], used in parallel BLAS [16]. SUMMA is memory efficient and easy to generalize to non-square matrices and processor grids.

The pseudocode of our 2D algorithm, SparseSUMMA [7], is shown in Algorithm 2 in its most general form. The coarseness of the algorithm can be adjusted by changing the block size b. For the first time, we present the algorithm in a form general enough to handle rectangular processor grids and a wide range of blocking parameter choices. The pseudocode, however, requires b to evenly divide the local block dimensions along the inner (shared) dimension, for ease of presentation. This requirement can be dropped at the expense of having potentially multiple broadcasters along a given processor row and column during one iteration of the loop starting at line 6. The “for all ... in parallel do” construct indicates that the enclosed code blocks execute in parallel on all the processors. The execution of the algorithm on a rectangular grid with rectangular sparse matrices is illustrated in Figure 5. We refer to the Combinatorial BLAS source code [2] for additional details.

1: Input: A, B: sparse matrices distributed on a p_r-by-p_c processor grid
2: Output: C: the product A · B, similarly distributed.
3: procedure SparseSUMMA(A, B, C, b)
4:    for all processors P(i, j) in parallel do
5:        B_ij ← (B_ij)^T
6:        for q ← 1 to k/b do      (the blocking parameter b evenly divides the local block dimensions along the inner dimension k)
7:            c ← the broadcasting processor column for stage q
8:            r ← the broadcasting processor row for stage q
9:            lcols ← the local column range of A to be sent in stage q
10:           lrows ← the local row range of B to be sent in stage q
11:           A^rem ← Broadcast(A_ic(:, lcols), P(i, :))
12:           B^rem ← Broadcast(B^T_rj(:, lrows), P(:, j))
13:           C_ij ← C_ij + HyperSparseGEMM(A^rem, B^rem)
14:        B_ij ← (B_ij)^T      (restore the original B)
Algorithm 2 Operation C ← A · B using Sparse SUMMA
Figure 5: Execution of the Sparse SUMMA algorithm for sparse matrix-matrix multiplication C = A · B. The example shows the first stage of the algorithm execution (the broadcast and the local update by processor P(i, j)). The two rectangular sparse operands A and B are of sizes m-by-k and k-by-n, distributed on a p_r-by-p_c processor grid with blocking parameter b.

The Broadcast(A_ic(:, lcols), P(i, :)) syntax means that the owner of A_ic(:, lcols) becomes the root and broadcasts its submatrix to all the processors on the ith processor row. Similarly for Broadcast(B^T_rj(:, lrows), P(:, j)), the owner broadcasts its submatrix to all the processors on the jth processor column. In lines 9-10, we find the local column (for A) and row (for B) ranges of the matrices that are to be broadcast during that iteration. They are significant only at the broadcasting processors, which can be determined implicitly from the first parameter of Broadcast. We index B by columns as opposed to rows because it has already been locally transposed in line 5. This makes indexing faster since local submatrices are stored in the column-based DCSC sparse data structure. Using DCSC, the expected cost of fetching b consecutive columns of a matrix A is b plus the size (number of nonzeros) of the output. Therefore, the algorithm asymptotically has the same computation cost for all values of b.
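One consistent way to compute which processor broadcasts at each stage, and which local index range it sends (the bookkeeping in lines 6-10 of Algorithm 2), is sketched below in C++. This is our own illustration under the stated assumption that b divides the local block dimensions along the inner dimension; it is not taken from the Combinatorial BLAS code.

#include <cstdio>

// For a pr-by-pc grid, inner dimension k, and blocking parameter b,
// stage q covers global inner indices [q*b, (q+1)*b). The struct records
// which processor column/row owns that slice and its local index range.
struct Stage {
    int bcast_pcol;          // processor column broadcasting a piece of A
    int a_lo, a_hi;          // local column range [a_lo, a_hi) of A to send
    int bcast_prow;          // processor row broadcasting a piece of B
    int b_lo, b_hi;          // local row range [b_lo, b_hi) of B to send
};

Stage summa_stage(int q, int k, int b, int pr, int pc) {
    int acols = k / pc;                    // width of each local A block
    int brows = k / pr;                    // height of each local B block
    int g = q * b;                         // first global inner index of this stage
    Stage s;
    s.bcast_pcol = g / acols;              // owner along the processor row
    s.a_lo = g - s.bcast_pcol * acols;     // local offset inside the owner's block
    s.a_hi = s.a_lo + b;
    s.bcast_prow = g / brows;              // owner along the processor column
    s.b_lo = g - s.bcast_prow * brows;
    s.b_hi = s.b_lo + b;
    return s;
}

int main() {
    // Example: k = 8, 2x2 grid, b = 2 -> 4 stages.
    for (int q = 0; q < 8 / 2; ++q) {
        Stage s = summa_stage(q, 8, 2, 2, 2);
        std::printf("stage %d: A from pcol %d cols [%d,%d), B from prow %d rows [%d,%d)\n",
                    q, s.bcast_pcol, s.a_lo, s.a_hi, s.bcast_prow, s.b_lo, s.b_hi);
    }
}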

For our complexity analysis, we assume that the nonzeros of the input sparse matrices are independently and identically distributed, and that the input matrices are n-by-n with d nonzeros per row and column on average. The sparsity parameter d simplifies our analysis by making different terms in the complexity comparable to each other. For example, if A and B both have sparsity d, then nnz(A) = d·n and flops(A · B) = d²·n.

The communication cost of the Sparse SUMMA algorithm, for the case of blocking parameter b = n/√p, is

(1)

and its computation cost is

(2)

We see that although scalability is not perfect and efficiency deteriorates as p increases, the achievable speedup is not bounded. Since the latency term becomes negligible as the problem size increases, the bottlenecks for scalability are the bandwidth term of the communication cost and the flops term of the computation cost, both of which scale with 1/√p. Consequently, two different scaling regimes are likely to be present: a close to linear scaling regime until those terms start to dominate, and a √p-scaling regime afterwards.

4 Sparse matrix indexing and subgraph selection

Given a sparse matrix A and two vectors I and J of indices, SpRef extracts a submatrix and stores it as another sparse matrix, B = A(I, J). Matrix B contains the elements in rows I(i) and columns J(j) of A, for i = 1, ..., len(I) and j = 1, ..., len(J), respecting the order of indices. If A is the adjacency matrix of a graph, SpRef(A, I, I) selects an induced subgraph. SpRef can also be used to randomly permute the rows and columns of a sparse matrix, a primitive in parallel matrix computations commonly used for load balancing [31].

Simple cases such as row (A(i, :)), column (A(:, j)), and element (A(i, j)) indexing are often handled by special-purpose subroutines [11]. A parallel algorithm for the general case, where I and J are arbitrary vectors of indices, does not exist in the literature. We propose an algorithm that uses parallel SpGEMM. Our algorithm is amenable to performance analysis for the general case.

A related kernel is SpAsgn, or sparse matrix assignment. This operation assigns a sparse matrix to a submatrix of another sparse matrix, A(I, J) = B. A variation of SpAsgn is A(I, J) = A(I, J) + B, which is similar to Liu's extend-add operation [30] in finite element matrix assembly. Here we describe the sequential SpAsgn algorithm and its analysis, and report large-scale performance results in Section 5.2.

Figure 6: Sparse matrix indexing (SpRef) using mixed-mode SpGEMM. On an m-by-n matrix A, the SpRef operation B = A(I, J) extracts a len(I)-by-len(J) submatrix, where I is a vector of row indices and J is a vector of column indices. The example illustrates the triple product B = R · A · Q. It performs two SpGEMM operations between a boolean matrix and a general-type matrix.

4.1 Sequential algorithms for SpRef and SpAsgn

Performing SpRef by a triple sparse-matrix product is illustrated in Figure 6. The algorithm can be described concisely in Matlab notation as follows:

function B = spref(A,I,J)

[m,n] = size(A);
R = sparse(1:len(I),I,1,len(I),m);
Q = sparse(J,1:len(J),1,n,len(J));
B = R*A*Q;

The sequential complexity of this algorithm is O(nnz(A)). Due to the special structure of the permutation-like boolean matrices R and Q, the number of nonzero operations required to form the product R · A is equal to the number of nonzero elements in the product. That is, flops(R · A) = nnz(RA) ≤ nnz(A). Similarly, flops((RA) · Q) = nnz(RAQ) ≤ nnz(A), making the overall complexity O(nnz(A)) for any I and J. This is optimal in general, since just writing down the result of a matrix permutation requires Ω(nnz(A)) operations.

Performing SpAsgn by two triple sparse-matrix products and additions is illustrated in Figure 7. We create two temporary sparse matrices of the same dimensions as A. These matrices contain nonzeros only for the A(I, J) part, and zeros elsewhere. The first triple product embeds B into a bigger sparse matrix that we add to A. The second triple product embeds A(I, J) into an identically sized sparse matrix so that we can zero out the A(I, J) portion by subtracting it from A. Since general semiring axioms do not require additive inverses to exist, we implement this piece of the algorithm slightly differently than stated in the pseudocode. We still form the product S · A · T, but instead of using subtraction, we use the generalized sparse elementwise multiplication function of the Combinatorial BLAS [10] to zero out the A(I, J) portion. In particular, we first perform an elementwise multiplication of A with the negation of S · A · T without explicitly forming the negated matrix, which can be dense. Thanks to this direct support for the implicit negation operation, the complexity bounds are identical to the version that uses subtraction. The negation does not assume additive inverses: it sets all zero entries to one and all nonzero entries to zero. The algorithm can be described concisely in Matlab notation as follows:

function C = spasgn(A,I,J,B)
% C = spasgn(A,I,J,B) performs A(I,J) = B

[ma,na] = size(A);
[mb,nb] = size(B);
R = sparse(I,1:mb,1,ma,mb);
Q = sparse(1:nb,J,1,nb,na);
S = sparse(I,I,1,ma,ma);
T = sparse(J,J,1,na,na);
C = A + R*B*Q - S*A*T;

Liu's extend-add operation is similar to SpAsgn but simpler; it just omits subtracting the S*A*T term.
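To make the “implicit negation” concrete, the following C++ sketch (our own illustration, using a simple triples representation rather than the Combinatorial BLAS classes) removes from A every entry whose position is nonzero in the mask M = S·A·T, without ever materializing the possibly dense negation of M:

#include <set>
#include <utility>
#include <vector>

struct Triple { int i, j; double v; };

// Keep only the entries of A whose (row, column) position is *not* present
// in the nonzero pattern of the mask M. This emulates A .* (negation of M)
// and requires no additive inverses in the underlying semiring.
std::vector<Triple> ewise_mult_with_negated_mask(const std::vector<Triple>& A,
                                                 const std::vector<Triple>& M) {
    std::set<std::pair<int, int>> mask;           // nonzero positions of M
    for (const Triple& t : M) mask.insert({t.i, t.j});

    std::vector<Triple> out;
    for (const Triple& t : A)
        if (mask.find({t.i, t.j}) == mask.end())  // position not masked out
            out.push_back(t);
    return out;
}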

Figure 7: Illustration of SpAsgn (A(I, J) = B) for rings where additive inverses are defined. For simplicity, the vector indices I and J are shown as contiguous, but they need not be.

Let us analyze the complexity of SpAsgn. Given A of size m-by-n and B of size len(I)-by-len(J), the intermediate boolean matrices have the following properties:

  • R is m-by-len(I) rectangular with len(I) nonzeros, one in each column.
  • Q is len(J)-by-n rectangular with len(J) nonzeros, one in each row.
  • S is m-by-m symmetric with len(I) nonzeros, all located along the diagonal.
  • T is n-by-n symmetric with len(J) nonzeros, all located along the diagonal.

Theorem. The sequential SpAsgn algorithm takes O(nnz(A) + nnz(B) + len(I) + len(J)) time using an optimal SpGEMM subroutine.

Proof.

The product R · B requires O(nnz(B)) operations because there is a one-to-one relationship between nonzeros in the output and flops performed. Similarly, flops((RB) · Q) = nnz(RBQ) = nnz(B), yielding O(nnz(B)) complexity for the first triple product. The product S · A requires only O(nnz(A(I, :))) operations, since it does not need to touch nonzeros of A that do not contribute to A(I, :). Similarly, (SA) · T requires only O(nnz(A(I, J))) operations. The number of nonzeros in the second triple product is nnz(A(I, J)). The final pointwise addition and subtraction (or generalized elementwise multiplication in the absence of additive inverses) operations take time on the order of the total number of nonzeros in all operands [11], which is O(nnz(A) + nnz(B)).

4.2 SpRef in parallel

The parallelization of SpRef poses several challenges. The boolean matrices have only one nonzero per row or column. For the parallel 2D algorithm to scale well with an increasing number of processors, data structures and algorithms should respect hypersparsity [8]. Communication should ideally take place along a single processor dimension, to save a factor of √p in communication volume. As before, we assume a uniform distribution of nonzeros to processors in our analysis.

Figure 8: Parallel formation of the left-hand-side boolean matrix R from the index vector I on 9 processors in a logical 3-by-3 grid. R will subsequently be multiplied with A to extract 6 rows out of 9 from A and order them as specified by I.

The communication cost of forming the R matrix in parallel is the cost of a Scatter along the processor column. For the case of a vector I distributed to the diagonal processors, scattering can be implemented with the average communication cost analyzed by Chan et al. [14]. This process is illustrated in Figure 8. The Q matrix can be constructed identically from J, followed by a Transpose operation in which each processor P(i, j) receives the nonzeros of its block from its diagonal neighbor P(j, i). Note that the communication cost of the transposition is dominated by the cost of forming Q via Scatter.

While the analysis of our parallel SpRef algorithm assumes that the index vectors are distributed only on diagonal processors, the asymptotic costs are identical in the 2D case where the vectors are distributed across all the processors [12]. This is because the number of elements (the amount of data) received by a given processor stays the same; the only difference in the algorithm is the use of an Alltoall operation instead of Scatter during the formation of the R and Q matrices.

The parallel performance of SpGEMM is a complicated function of the matrix nonzero structures [7, 9]. For SpRef, however, the special structure makes our analysis more precise. Suppose that the triple product is evaluated from left to right, B = (R · A) · Q; a similar analysis can be applied to the reverse evaluation. A conservative estimate of ni, the number of indices i for which R(:, i) ≠ ∅ and A(i, :) ≠ ∅, is len(I).

Using our HyperSparseGEMM [6, 8] as the computational kernel, the time to compute the product R · A (excluding the communication costs) is the maximum, over all processor pairs, of the per-block multiplication cost,

where the maximum over all pairs is equal to the average, due to the uniform nonzero distribution assumption.

Recall from the sequential analysis that flops(R · A) = nnz(RA) ≤ nnz(A), since each nonzero in A contributes at most once to the overall flop count. We also know that nnz(R) = len(I) and nnz(RA) ≤ nnz(A). Together with the uniformity assumption, these identities yield the following results:

In addition to the multiplication costs, adding the intermediate triples accumulated over the √p stages costs extra operations per processor. Thus, we have the following estimates of computation and communication costs for computing the product R · A:

Given that nnz(RA) ≤ nnz(A), the analysis of multiplying the intermediate product RA with Q is similar. Combined with the cost of forming the auxiliary matrices R and Q and the cost of the transposition, the total cost of the parallel SpRef algorithm becomes

We see that the SpGEMM costs dominate the cost of SpRef. The asymptotic speedup is limited to O(√p), as in the case of SpGEMM.

5 Experimental Results

We ran experiments on NERSC’s Franklin system [1], a 9660-node Cray XT4. Each XT4 node contains a quad-core 2.3 GHz AMD Opteron processor, attached to the XT4 interconnect via a Cray SeaStar2 ASIC using a HyperTransport 2 interface capable of 6.4 GB/s. The SeaStar2 routing ASICs are connected in a 3D torus topology, and each link is capable of 7.6 GB/s peak bidirectional bandwidth. Our algorithms perform similarly well on a fat tree topology, as evidenced by our experimental results on the Ranger platform that are included in an earlier technical report [9].

We used the GNU C/C++ compilers (version 4.5), and Cray’s MPI implementation, which is based on MPICH2. We incorporated our code into the Combinatorial BLAS framework [10]. We experimented with core counts that are perfect squares, because the Combinatorial BLAS currently uses a square processor grid. We compared performance with the Trilinos package (version 10.6.2.0) [28], which uses a 1D decomposition for its sparse matrices.

In the majority of our experiments, we used synthetically generated R-MAT matrices rather than Erdős-Rényi [22] “flat” random matrices, as these are more realistic for many graph analysis applications. R-MAT [13], the Recursive MATrix generator, generates graphs with skewed degree distributions that approximate a power law. A scale-n R-MAT matrix is 2^n-by-2^n. Our R-MAT matrices have an average of approximately 8 nonzeros per row and column. We applied a random symmetric permutation to the input matrices to balance the memory and the computational load. In other words, instead of storing and computing A · B, we compute (P A Pᵀ)(P B Pᵀ) for a random permutation matrix P. All of our experiments are performed on double-precision floating-point inputs.
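For readers unfamiliar with R-MAT, the following C++ sketch generates the nonzero positions of an R-MAT matrix by recursively choosing one of four quadrants per edge. It is our own illustration; the default quadrant probabilities below are illustrative assumptions, not necessarily the seed parameters used in the paper's experiments.

#include <random>
#include <utility>
#include <vector>

// Generate nedges nonzero positions of a scale-`scale` R-MAT matrix
// (dimension 2^scale). Each edge descends `scale` levels, picking a quadrant
// with probabilities (a, b, c, 1-a-b-c); duplicates are not removed here.
std::vector<std::pair<long, long>>
rmat_edges(int scale, long nedges, double a = 0.6, double b = 0.4 / 3,
           double c = 0.4 / 3, unsigned seed = 1) {
    std::mt19937_64 rng(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    long n = 1L << scale;                    // matrix dimension 2^scale
    std::vector<std::pair<long, long>> edges;
    edges.reserve(static_cast<size_t>(nedges));
    for (long e = 0; e < nedges; ++e) {
        long rlo = 0, clo = 0, half = n / 2;
        while (half >= 1) {                  // descend one level per iteration
            double r = unif(rng);
            if (r < a) { /* top-left quadrant: no offset */ }
            else if (r < a + b) { clo += half; }          // top-right
            else if (r < a + b + c) { rlo += half; }      // bottom-left
            else { rlo += half; clo += half; }            // bottom-right
            half /= 2;
        }
        edges.emplace_back(rlo, clo);        // one nonzero position (row, col)
    }
    return edges;
}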

Since algebraic multigrid on graphs coming from physical problems is an important use case, we include two more matrices from the University of Florida Sparse Matrix Collection [21] in our experimental analysis, in Section 5.3.2, where we benchmark the restriction operation used in algebraic multigrid. The first such matrix is a large circuit problem (Freescale1) with 17.0 million nonzeros and 3.4 million rows and columns. The second matrix comes from a structural problem (GHS_psdef/ldoor), and has 42.5 million nonzeros and roughly one million rows and columns.

5.1 Parallel Scaling of SpRef

Figure 9: Performance and parallel scaling of applying a random symmetric permutation to an R-MAT matrix of scale 22. The x-axis uses a log scale.
Figure 10: Performance and parallel scaling of extracting 10 induced subgraphs from an R-MAT matrix of scale 22. The x-axis uses a log scale.

Our first set of experiments randomly permutes the rows and columns of A, as an example case study for matrix reordering and partitioning. This operation corresponds to relabeling the vertices of a graph. Our second set of experiments explores subgraph extraction by generating a random permutation of 1:n and dividing it into 10 equal-sized chunks. We then performed 10 SpRef operations of the form A(I_k, I_k), one after another (with a barrier in between). In both cases, the sequential reference is our algorithm itself.

The performance and parallel scaling of the symmetric random permutation is shown in Figure 9. The input is an R-MAT matrix of scale 22 with approximately 32 million nonzeros in a square matrix of dimension 2^22. Speedup and runtime are plotted on different vertical axes. We see that scaling is close to linear up to about 64 processors, and proportional to √p afterwards, agreeing with our analysis.

The performance of subgraph extraction for 10 induced subgraphs, each with randomly chosen vertices, is shown in Figure 10. The algorithm performs well in this case too. The observed scaling is slightly less than in the case of applying a single big permutation, which is to be expected since the multiple small subgraph extractions increase the span and decrease the available parallelism.

5.2 Parallel Scaling of SpAsgn

We benchmarked our parallel SpAsgn code by replacing a portion of the input matrix (A) with a structurally similar right-hand-side matrix (B). This operation is akin to replacing a portion of the graph due to a streaming update. The subset of vertices (row and column indices I) to be updated is chosen randomly. In all the tests, the original graph is an R-MAT matrix of scale 22 with 32 million nonzeros. The right-hand-side (replacement) matrix is also an R-MAT matrix, of scales 21, 20, and 19 in three subsequent experiments, replacing 50%, 25%, and 12.5% of the original graph, respectively. The average number of nonzeros per row and column is also adjusted for the right-hand-side matrices to match the nonzero density of the subgraphs they are replacing.

Figure 11: Observed scaling of the SpAsgn operation A(I, I) = B, where A is an R-MAT matrix of scale 22 and B is another R-MAT matrix whose scale is shown in the figure legend. I is a duplicate-free sequence with entries randomly selected from the range 1 to 2^22; its length matches the dimensions of B. Both axes are log scale.

The performance of this sparse matrix assignment operation is shown in Figure 11. Our implementation uses a small number of Combinatorial BLAS routines: A sparse matrix constructor from distributed vectors, essentially a parallel version of Matlab’s sparse routine, the generalized elementwise multiplication with direct support for negation, and parallel SpGEMM implemented using Sparse SUMMA.

5.3 Parallel Scaling of Sparse SUMMA

We implemented two versions of the 2D parallel SpGEMM algorithm in C++ using MPI. The first is directly based on Sparse SUMMA and is synchronous in nature, using blocking broadcasts. The second is asynchronous and uses one-sided communication in MPI-2. We found the asynchronous implementation to be consistently slower than the broadcast-based synchronous implementation due to the inefficient implementation of one-sided communication routines in MPI. Therefore, we only report the performance of the synchronous implementation. The motivation behind the asynchronous approach, performance comparisons, and implementation details can be found in our technical report [9, Section 7]. On Franklin, the synchronous implementation consistently outperformed the asynchronous implementation by 38-57%.

Our sequential HyperSparseGEMM routines return a set of intermediate triples that are kept in memory up to a certain threshold without being merged immediately. This permits more balanced merging, eliminating some unnecessary scans that degraded performance in a preliminary implementation [7].

5.3.1 Square Sparse Matrix Multiplication

Figure 12: Observed scaling of synchronous Sparse SUMMA for the R-MAT × R-MAT product on matrices with dimensions ranging from 2^21 to 2^24. Both axes are log scale.

In the first set of experiments, we multiply two structurally similar R-MAT matrices. This square multiplication is representative of the expansion operation used in the Markov clustering algorithm [36]. It is also a challenging case for our implementation due to the highly skewed nonzero distribution. We performed strong scaling experiments for matrix dimensions ranging from 2^21 to 2^24.

Figure 12 shows the speedup we achieved. The graph shows linear speedup until around 100 processors; afterwards the speedup is proportional to the square root of the number of processors. Both results agree with the theoretical analysis. To illustrate how the scaling transitions from linear to √p, we drew trend lines on the scale 21 results. As shown in Figure 14, the slope of the log-log curve is close to linear until 121 cores, and close to √p afterwards. Figure 15 zooms in on the linear speedup regime and shows the performance of our algorithm at lower concurrencies. The speedup and timings are plotted on different y-axes of the same graph.

Our implementation of Sparse SUMMA achieves over 2 billion “useful flops” (in double precision) per second on 8100 cores when multiplying scale 24 R-MAT matrices. Since useful flops are highly dependent on the matrix structure and sparsity, we provide additional statistics for this operation in Figure 13. Using matrices with more nonzeros per row and column would certainly yield higher performance rates (in useful flops). The gains from sparsity are clear if one considers the dense flops that would be needed if these matrices were stored in a dense format. For example, multiplying two dense scale 24 matrices requires 9444 exaflops.

Scale    nnz(A)    nnz(B)    nnz(C)    flops
21       16.3      16.3      123.9     253.2
22       32.8      32.8      257.1     523.7
23       65.8      65.8      504.3     1021.3
24       132.1     132.1     1056.9    2137.4
Figure 13: Statistics about the R-MAT product C = A · B. All numbers (except scale) are in millions.
Figure 14: Demonstration of two scaling regimes for scale 21 R-MAT product.

Figure 16 breaks down the time spent in communication and computation when multiplying two R-MAT graphs of scale 24. We see that computation scales much better than communication when going from 36 to 8100 cores, implying that SpGEMM is communication bound at large concurrencies. For example, on 8100 cores, 83% of the time is spent in communication. Communication times include the overheads due to synchronization and load imbalance.

Figure 16 also shows the effect of different blocking sizes. Remember that each processor owns a submatrix of size (n/√p)-by-(n/√p). On the left, the algorithm completes in √p stages, each time broadcasting a processor's whole local matrix. On the right, the algorithm completes in 2√p stages, each time broadcasting half of a processor's local matrix. We see that while communication costs are not affected, the computation slows down by 1-6% when doubling the number of stages. This difference is due to the costs of splitting the input matrices before the multiplication and reassembling them afterwards, which is small because splitting and reassembling are simple scans over the data whose costs are dominated by the cost of the multiplication itself.

Figure 15: Performance and scaling of Sparse SUMMA at lower concurrencies (scale 21 inputs). The x-axis uses a log scale.
Figure 16: Communication and computation breakdown, at various concurrencies and two blocking sizes (scale 24 inputs).

5.3.2 Multiplication with the Restriction Operator

Multilevel methods are widely used in the solution of numerical and combinatorial problems [35]. Such methods construct smaller problems by successive coarsening. The simplest coarsening is graph contraction: a contraction step chooses two or more vertices in the original graph G to become a single aggregate vertex in the contracted graph G'. The edges of G that used to be incident to any of the vertices forming the aggregate become incident to the new aggregate vertex in G'.

Constructing a coarser grid during the V-cycle of algebraic multigrid [5] or graph partitioning [27] is a generalized graph contraction operation. Different algorithms need different coarsening operators. For example, a weighted aggregation [15] might be preferred for partitioning problems. In general, coarsening can be represented as multiplication of the matrix representing the original fine domain (grid, graph, or hypergraph) by the restriction operator.

In these experiments, we use a simple restriction operation to perform graph contraction. Gilbert et al. [25] describe how to perform contraction using SpGEMM. Their algorithm creates a special sparse matrix S with n nonzeros. The triple product S · A · Sᵀ contracts the whole graph at once. Making S smaller in the first dimension while keeping the number of nonzeros the same changes the restriction order. For example, we contract the graph to half its size by using an S of dimensions (n/2)-by-n, which is said to be of order 2.
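One simple way to build such an order-k restriction operator is sketched below in C++ (our own illustration using a triples representation; the paper's S may group vertices differently, for example according to a matching):

#include <vector>

struct Entry { long i, j; double v; };

// Build an order-k restriction operator S of size (n/k)-by-n: fine vertex j is
// aggregated into coarse vertex j/k, giving exactly one nonzero per fine vertex.
// The triple product S*A*S^T then contracts the graph by a factor of k.
std::vector<Entry> restriction_operator(long n, long k) {
    std::vector<Entry> S;
    S.reserve(static_cast<size_t>(n));
    for (long j = 0; j < n; ++j)
        S.push_back({j / k, j, 1.0});   // coarse row, fine column, unit weight
    return S;
}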

Figure 17: Strong scaling of multiplying scale 23 R-MAT matrices with the restriction operator on the right. The x-axis uses a log scale.
Figure 18: Normalized communication and computation breakdown for multiplying scale 23 R-MAT matrices with the restriction operator of order 4.

Figure 17 shows the strong scaling of this operation for R-MAT graphs of scale 23. We used restrictions of order 2, 4, and 8. Changing the interpolation order results in minor (less than 5%) changes in performance, as shown by the overlapping curves. This is further evidence that our algorithm's complexity is independent of the matrix dimension, because the interpolation order has a profound effect on the dimension of the right-hand-side matrix, but it does not change the expected flops and numbers of nonzeros in the inputs (it may slightly decrease the number of nonzeros in the output). The experiment shows scaling up to 4096 processors. Figure 18 shows the breakdown of time (as percentages) spent on remote communication and local computation steps.

Figures 19(a) and 19(b) show the strong scaling of the full restriction operation of order 8, using different parenthesizations for the triple product. The results show that our code continues to gain speedup through 1024-way and 4096-way concurrency, and that the performance is not affected by the different parenthesizations.

(a) Left to right evaluation: (S · A) · Sᵀ
(b) Right to left evaluation: S · (A · Sᵀ)
Figure 19: The full restriction operation of order 8 applied to a scale 23 R-MAT matrix.

Figure 20 shows the performance of the full restriction operation on real matrices from physical problems. Both matrices have a full diagonal that remains full after the symmetric permutation. Due to the 2D decomposition, the processors responsible for the diagonal blocks typically have more work to do. For load-balancing and performance reasons, we split these matrices into two pieces, A = A_diag + A_offdiag, where A_diag is the diagonal piece and A_offdiag is the off-diagonal piece. The restriction of rows, S · A, becomes S · A_diag + S · A_offdiag. Scaling the columns of S with the diagonal of A performs the former multiplication, and the latter multiplication uses the Sparse SUMMA algorithm described in our paper. This splitting approach especially improved the scalability of restriction on the Freescale1 matrix, because it is much sparser than GHS_psdef/ldoor, which does not face severe load-balancing issues. Order 2 restriction shrinks the number of nonzeros from 17.0 to 15.3 million for Freescale1, and from 42.5 to 42.0 million for GHS_psdef/ldoor.
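In equation form, the split just described evaluates the row restriction as (our restatement of the preceding sentences):

    S · A = S · (A_diag + A_offdiag) = S · A_diag + S · A_offdiag,

where the first term is computed by scaling the columns of S with the diagonal of A, and the second term is computed with Sparse SUMMA.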

(a) Freescale1 matrix.
(b) GHS_psdef/ldoor matrix.
Figure 20: Performance and strong scaling of our Sparse SUMMA implementation for the full restriction of order 2 (S · A · Sᵀ) on real matrices from physical problems.

5.3.3 Tall Skinny Right Hand Side Matrix

The last set of experiments multiplies R-MAT matrices by tall skinny matrices of varying sparsity. This computation is representative of the parallel breadth-first search that lies at the heart of our distributed-memory betweenness centrality implementation [10]. This set indirectly examines the sensitivity to sparsity as well, because we vary the number of nonzeros per column of the right-hand-side matrix over several orders of magnitude, in powers of 10. In this way, we imitate the patterns of a level-synchronous breadth-first search from multiple source vertices, where the current frontier can range from a few vertices to hundreds of thousands [12].

For our experiments, the R-MAT matrices on the left-hand side have the same average number of nonzeros per column as before, and their dimensions grow with the number of processors. The right-hand side is a tall skinny Erdős-Rényi matrix whose number of nonzeros per column is varied in powers of 10. The right-hand matrix's width grows proportionally to its length, keeping the matrix aspect ratio constant. Except for the densest case, the R-MAT matrix has more nonzeros than the right-hand matrix. In this computation, the total work, the total memory consumption, and the total bandwidth requirement all depend on the sparsity of the right-hand-side matrix.

We performed weak scaling experiments in which the memory consumption per processor is constant. Since memory consumption is dominated by the nonzero counts of the inputs and the output, this is achieved by keeping both the number of nonzeros per processor and the nonzeros per column of the right-hand side constant. Work per processor is also constant. However, the per-processor bandwidth requirement of this algorithm increases by a factor of √p.

Figure 21 shows the performance graph in three dimensions. The timings for each slice along the XZ-plane (i.e., for every contour) are normalized to the running time on 64 processors. We do not cross-compare the absolute performance for different nonzero densities, as our focus in this section is parallel scaling. In line with the theory, we observe the expected slowdown due to communication costs.

The performance we achieved for these large-scale experiments at high processor counts is remarkable. It also shows that our implementation does not incur any significant overheads, since the measured times do not deviate from the expected scaling curve.

Figure 21: Weak scaling of R-MAT times a tall skinny Erdős-Rényi matrix. X (processors) and Y (nonzeros per column on fringe) axes are logarithmic, whereas Z (normalized time) axis is linear.

5.4 Comparison with Trilinos

The EpetraExt package of Trilinos can multiply two distributed sparse matrices in parallel. Trilinos can also permute matrices and extract submatrices through its Epetra_Import and Epetra_Export classes. These packages of Trilinos use a 1D data layout.

For SpGEMM, we compared the performance of Trilinos’s EpetraExt package with ours on two scenarios. In the first scenario, we multiplied two R-MAT matrices as described in Section 5.3.1, and in the second scenario, we multiplied an R-MAT matrix with the restriction operator of order 8 on the right as described in Section 5.3.2.

Trilinos ran out of memory when multiplying R-MAT matrices of scale larger than 21, or when using more than 256 processors. Figure 22(a) shows SpGEMM timings for up to 256 processors on scale 21 data. Sparse SUMMA is consistently faster than Trilinos's implementation, with the gap widening as the processor count increases through 256-way concurrency. Sparse SUMMA is also more memory efficient, as Trilinos's matrix multiplication ran out of memory at higher core counts. The sweet spot for Trilinos seems to be around 120 cores, after which its performance degrades significantly.

(a) R-MAT × R-MAT product (scale 21).
(b) Multiplication of an R-MAT matrix of scale 23 with the restriction operator of order 8.
Figure 22: Comparison of SpGEMM implementation of Trilinos’s EpetraExt package with our Sparse SUMMA implementation using synthetically generated matrices. The data labels on the plots show the speedup of Sparse SUMMA over EpetraExt.
(a) Freescale matrix.
(b) GHS_psdef/ldoor matrix.
Figure 23: Comparison of Trilinos’s EpetraExt package with our Sparse SUMMA implementation for the full restriction of order 2 () on real matrices. The data labels on the plots show the speedup of Sparse SUMMA over EpetraExt.

In the case of multiplying with the restriction operator, the speed and scalability advantage of our implementation over EpetraExt is even more pronounced. This is shown in Figure 22(b), where our code is 65X faster even on just 121 processors. Remarkably, our code scales up to 4096 cores on this problem, as shown in Section 5.3.2, while EpetraExt starts to slow down beyond 16 cores. We also compared Sparse SUMMA with EpetraExt on matrices coming from physical problems, and the results for the full restriction operation (S · A · Sᵀ) are shown in Figure 23.

In order to benchmark Trilinos's sparse matrix indexing capabilities, we used EpetraExt's permutation class, which can permute the rows or columns of an Epetra_CrsMatrix by creating a map defined by the permutation, followed by an Epetra_Export operation to move data from the input object into the permuted object. We applied a random symmetric permutation to an R-MAT matrix, as done in Section 5.1. Trilinos shows good scaling up to 121 cores, but then it starts slowing down as concurrency increases, eventually becoming substantially slower than our SpRef implementation at 169 cores.

6 Conclusions and Future Work

We presented a flexible parallel sparse matrix-matrix multiplication (SpGEMM) algorithm, Sparse SUMMA, which scales to thousands of processors in distributed memory. We used Sparse SUMMA as a building block to design and implement scalable parallel routines for sparse matrix indexing (SpRef) and assignment (SpAsgn). These operations are important in the context of graph operations. They yield elegant algorithms for coarsening graphs by edge contraction as in Figure 24, extracting subgraphs, performing parallel breadth-first search from multiple source vertices, and performing batch updates to a graph.

We performed parallel complexity analyses of our primitives. In particular, using SpGEMM as a building block enabled the most general analysis of SpRef. Our extensive experiments confirmed that our implementation achieves the performance predicted by our analyses.

Figure 24: Example of graph coarsening using edge contraction, which can be implemented via a triple sparse matrix product S · A · Sᵀ, where S is the restriction operator.

Our SpGEMM routine might be extended to handle matrix chain products. In particular, the sparse matrix triple product is used in the coarsening phase of algebraic multigrid [3]. Sparse matrix indexing and parallel graph contraction also require sparse matrix triple products [25]. Providing a first-class primitive for sparse matrix chain products would eliminate temporary intermediate products and allow more optimization, such as performing structure prediction [17] and determining the best order of multiplication based on the sparsity structure of the matrices.

As we show in Section 5.3, our implementation spends more than 75% of its time in inter-node communication beyond about 2000 processors. Scaling to higher concurrencies requires asymptotic reductions in communication volume. We are working on developing practical communication-avoiding algorithms [4] for sparse matrix-matrix multiplication (and consequently for sparse matrix indexing and assignment), which might require inventing efficient novel sparse data structures to support such algorithms.

Our preliminary experiments suggest that synchronous algorithms for SpGEMM cause considerably higher load imbalance than asynchronous ones [9, Section 7]. In particular, a truly one-sided implementation can perform up to 46% faster when multiplying two R-MAT matrices of scale 20 using 4000 processors. We will experiment with partitioned global address space (PGAS) languages, such as UPC [18], because the current implementations of one-sided MPI-2 were not able to deliver satisfactory performance when used to implement asynchronous versions of our algorithms.

As the number of cores per node increases due to multicore scaling, so does the contention on the network interface card. Without hierarchical parallelism that exploits the faster on-chip network, the flat MPI parallelism will be unscalable because more processes will be competing for the same network link. Therefore, designing hierarchically parallel SpGEMM and SpRef algorithms is an important future direction.

References

  • [1] Franklin, Nersc’s Cray XT4 System. http://www.nersc.gov/users/computational-systems/franklin/.
  • [2] Combinatorial BLAS Library (MPI reference implementation). http://gauss.cs.ucsb.edu/~aydin/CombBLAS/html/index.html, 2012.
  • [3] Mark Adams and James W. Demmel. Parallel multigrid solver for 3d unstructured finite element problems. In Supercomputing ’99: Proceedings of the 1999 ACM/IEEE conference on Supercomputing, page 27, New York, NY, USA, 1999. ACM.
  • [4] Grey Ballard, James Demmel, Olga Holtz, and Oded Schwartz. Minimizing communication in numerical linear algebra. SIAM J. Matrix Anal. Appl., 32:866–901, 2011.
  • [5] William L. Briggs, Van Emden Henson, and Steve F. McCormick. A multigrid tutorial: second edition. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.
  • [6] Aydın Buluç and John R. Gilbert. New ideas in sparse matrix-matrix multiplication. In J. Kepner and J. Gilbert, editors, Graph Algorithms in the Language of Linear Algebra. SIAM, Philadelphia. 2011.
  • [7] Aydın Buluç and John R. Gilbert. Challenges and advances in parallel sparse matrix-matrix multiplication. In ICPP’08: Proc. of the Intl. Conf. on Parallel Processing, pages 503–510, Portland, Oregon, USA, 2008. IEEE Computer Society.
  • [8] Aydın Buluç and John R. Gilbert. On the representation and multiplication of hypersparse matrices. In IPDPS’08: Proceedings of the 2008 IEEE International Symposium on Parallel&Distributed Processing, pages 1–11. IEEE Computer Society, 2008.
  • [9] Aydın Buluç and John R. Gilbert. Highly parallel sparse matrix-matrix multiplication. Technical Report UCSB-CS-2010-10, Computer Science Department, University of California, Santa Barbara, 2010.
  • [10] Aydın Buluç and John R. Gilbert. The Combinatorial BLAS: Design, implementation, and applications. International Journal of High Performance Computing Applications (IJHPCA), 25(4):496–509, 2011.
  • [11] Aydın Buluç, John R. Gilbert, and Viral B. Shah. Implementing sparse matrices for graph algorithms. In J. Kepner and J. Gilbert, editors, Graph Algorithms in the Language of Linear Algebra. SIAM, Philadelphia. 2011.
  • [12] Aydın Buluç and Kamesh Madduri. Parallel breadth-first search on distributed memory systems. In Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’11, New York, NY, USA, 2011. ACM.
  • [13] Deepayan Chakrabarti, Yiping Zhan, and Christos Faloutsos. R-MAT: A recursive model for graph mining. In Michael W. Berry, Umeshwar Dayal, Chandrika Kamath, and David B. Skillicorn, editors, SDM. SIAM, 2004.
  • [14] Ernie Chan, Marcel Heimlich, Avi Purkayastha, and Robert A. van de Geijn. Collective communication: theory, practice, and experience. Concurrency and Computation: Practice and Experience, 19(13):1749–1783, 2007.
  • [15] Cédric Chevalier and Ilya Safro. Comparison of coarsening schemes for multilevel graph partitioning. In Learning and Intelligent Optimization: Third International Conference, LION 3. Selected Papers, pages 191–205, Berlin, Heidelberg, 2009. Springer-Verlag.
  • [16] Almadena Chtchelkanova, John Gunnels, Greg Morrow, James Overfelt, and Robert A. van de Geijn. Parallel implementation of BLAS: General techniques for Level 3 BLAS. Concurrency: Practice and Experience, 9(9):837–857, 1997.
  • [17] Edith Cohen. Structure prediction and computation of sparse matrix products. Journal of Combinatorial Optimization, 2(4):307–332, 1998.
  • [18] UPC Consortium. UPC language specifications, v1.2. Technical Report LBNL-59208, Lawrence Berkeley National Laboratory, 2005.
  • [19] Paolo D’Alberto and Alexandru Nicolau. R-Kleene: A high-performance divide-and-conquer algorithm for the all-pair shortest path for densely connected networks. Algorithmica, 47(2):203–213, 2007.
  • [20] Timothy A. Davis. Direct Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2006.
  • [21] Timothy A. Davis and Yifan Hu. The University of Florida Sparse Matrix Collection. ACM Trans. Math. Softw., 38(1):1, 2011.
  • [22] Paul Erdős and Alfréd Rényi. On random graphs. Publicationes Mathematicae, 6(1):290–297, 1959.
  • [23] R. A. Van De Geijn and J. Watts. SUMMA: Scalable universal matrix multiplication algorithm. Concurrency: Practice and Experience, 9(4):255–274, 1997.
  • [24] John R. Gilbert, Cleve Moler, and Robert Schreiber. Sparse matrices in Matlab: Design and implementation. SIAM Journal of Matrix Analysis and Applications, 13(1):333–356, 1992.
  • [25] John R. Gilbert, Steve Reinhardt, and Viral B. Shah. A unified framework for numerical and combinatorial computing. Computing in Science and Engineering, 10(2):20–25, 2008.
  • [26] Fred G. Gustavson. Two fast algorithms for sparse matrices: Multiplication and permuted transposition. ACM Transactions on Mathematical Software, 4(3):250–269, 1978.
  • [27] Bruce Hendrickson and Robert Leland. A multilevel algorithm for partitioning graphs. In Supercomputing ’95: Proceedings of the 1995 ACM/IEEE conference on Supercomputing, page 28, New York, NY, USA, 1995. ACM.
  • [28] Michael A. Heroux, Roscoe A. Bartlett, Vicki E. Howle, Robert J. Hoekstra, Jonathan J. Hu, Tamara G. Kolda, Richard B. Lehoucq, Kevin R. Long, Roger P. Pawlowski, Eric T. Phipps, Andrew G. Salinger, Heidi K. Thornquist, Ray S. Tuminaro, James M. Willenbring, Alan Williams, and Kendall S. Stanley. An overview of the Trilinos project. ACM Trans. Math. Softw., 31(3):397–423, 2005.
  • [29] Haim Kaplan, Micha Sharir, and Elad Verbin. Colored intersection searching via sparse rectangular matrix multiplication. In Proceedings of the twenty-second annual symposium on Computational geometry, SCG ’06, pages 52–60, New York, NY, USA, 2006. ACM.
  • [30] Joseph W. H. Liu. The multifrontal method for sparse matrix solution: Theory and practice. SIAM Review, 34(1):82–109, 1992.
  • [31] Andrew T. Ogielski and William Aiello. Sparse matrix computations on parallel processor arrays. SIAM Journal on Scientific Computing, 14(3):519–530, 1993.
  • [32] Gerald Penn. Efficient transitive closure of sparse matrices over closed semirings. Theoretical Computer Science, 354(1):72–81, 2006.
  • [33] M. O. Rabin and V. V. Vazirani. Maximum matchings in general graphs through randomization. Journal of Algorithms, 10(4):557–567, 1989.
  • [34] Viral B. Shah. An Interactive System for Combinatorial Scientific Computing with an Emphasis on Programmer Productivity. PhD thesis, University of California, Santa Barbara, June 2007.
  • [35] Shang-Hua Teng. Coarsening, sampling, and smoothing: Elements of the multilevel method. In Parallel Processing, number 105 in The IMA Volumes in Mathematics and its Applications, pages 247–276, Germany, 1999. Springer-Verlag.
  • [36] Stijn Van Dongen. Graph clustering via a discrete uncoupling process. SIAM Journal on Matrix Analysis and Applications, 30(1):121–141, 2008.
  • [37] Ichitaro Yamazaki and Xiaoye Li. On techniques to improve robustness and scalability of a parallel hybrid linear solver. In High Performance Computing for Computational Science – VECPAR 2010, pages 421–434.
  • [38] Raphael Yuster and Uri Zwick. Detecting short directed cycles using rectangular matrix multiplication and dynamic programming. In SODA ’04: Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, pages 254–260, 2004.