Parallel Louvain Community Detection Optimized for GPUs


Richard Forster (forceuse@inf.elte.hu)
CERN, Geneva, Switzerland (www.home.cern)
Eötvös Loránd University, Budapest, Hungary (www.inf.elte.hu)
Wigner RCP (www.wigner.mta.hu)
Abstract

Community detection is now an important operation in numerous graph-based applications. It is used to reveal groups that exist within real-world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential, support for parallel computers is rather limited, largely because of the irregularity of the algorithm and the sequential nature implied by the underlying heuristics. In this paper a GPU-based parallelized version of the Louvain method is presented. The Louvain method is a multi-phase, iterative heuristic for modularity optimization. Originally developed by Blondel et al. (2008), the method has become increasingly popular owing to its ability to detect high-modularity community partitions in a fast and memory-efficient manner. The parallel heuristics used here were first introduced by Hao Lu et al. (2015). As the Louvain method is inherently sequential, it limits the possibility of scalable usage; thanks to the proposed parallel heuristics, I observe how this method behaves on GPUs. For evaluation I implemented the heuristics using CUDA on a GeForce GTX 980 GPU, and for testing I used organization landscapes from the CERN-developed Collaboration Spotting project, which involves patents and publications to visualize the connections in technologies among its collaborators. Compared to the parallel Louvain implementation running on 8 threads on the same machine that hosts the GPU, the CUDA implementation is able to produce community outputs comparable to the CPU-generated results, while providing absolute speedups of up to 30 using the GeForce GTX 980 consumer-grade GPU.

Keywords: CUDA, GPU, Louvain, Modularity, Community Detection, parallel

Journal: Journal of Parallel and Distributed Computing


1 Introduction

Applications working with huge datasets require optimizations to utilize the available system resources efficiently. To leverage the power of special architectures and their parallel capabilities, additional heuristics may be needed that can drive the otherwise sequential algorithm to run on multiple threads or even on a number of machines at the same time. Community detection is such an application: by default it is a sequential problem involving multiple phases and iterations to produce the desired community partition.

For community detection, the Louvain algorithm will be explored and its implementation detailed, including the parallel heuristics required to achieve the aforementioned efficiency. Furthermore, the goal of this paper is to show the algorithm running on a GPU, thus delivering much higher performance than is possible on the CPU. The algorithm will be explained in detail, followed by implementation techniques involving the CUDA platform, a GPU-dedicated programming toolkit that can be used to run general-purpose computational applications on NVIDIA GPUs.

The performance of the algorithm will be explored in both experimental and theoretical ways, as the networks used for illustration could not all fit on the test system. The experimental study shows a substantial speedup over the CPU-based parallel implementation, while the theoretical results point to even higher speedups.

2 Problem statement and notation

Let $G(V, E, \omega)$ be an undirected weighted graph, with $V$ representing the set of vertices, $E$ the set of edges and $\omega$ a weighting function that assigns a positive weight to every edge from $E$. If the input graph is unweighted, then the weight of every edge is considered to be $1$. The graph is allowed to have loops, so edges like $(i, i)$ are valid, while multiple edges between the same nodes should not be present. The following will be the adjacency list of vertex $i$: $\Gamma(i) = \{ j \mid (i, j) \in E \}$. Let $k_i$ denote the weighted degree of vertex $i$, such that $k_i = \sum_{j \in \Gamma(i)} \omega(i, j)$. Let $N$ denote the number of vertices in graph $G$, $M$ the number of edges, and $m$ the sum of all edge weights, such that $m = \frac{1}{2} \sum_{i \in V} k_i$. By computing the communities, the vertex set will be partitioned into an arbitrary number of disjoint subsets, each with size between $1$ and $N$. $C(i)$ will denote the community containing vertex $i$. $E_{i \to C}$ is the set of all edges connecting vertex $i$ into community $C$. Consequently let $e_{i \to C}$ hold the sum of the edge weights in $E_{i \to C}$:

$$e_{i \to C} = \sum_{(i, j) \in E_{i \to C}} \omega(i, j) \qquad (1)$$

The sum of the degrees of all the vertices in community $C$ shall be denoted by $a_C$, which will represent the degree of the whole community:

$$a_C = \sum_{i \in C} k_i \qquad (2)$$

2.1 Modularity

Let $P = \{ C_1, C_2, \ldots, C_k \}$ be the set of every community in a given partitioning of $V$, where $1 \le k \le N$. The modularity $Q$ of the partitioning $P$ is given by the following equation [findcom]:

$$Q = \frac{1}{2m} \sum_{i \in V} e_{i \to C(i)} - \sum_{C \in P} \left( \frac{a_C}{2m} \right)^2 \qquad (3)$$

Modularity is widely used in different community detection algorithms, although it also has issues, such as the resolution limit [comdet, rescom]. The definition itself also has multiple variants [rescom, modgraph, comdetres]. However, the version defined in Eq. 3 is still the most commonly used, and it is the definition used in the Louvain method [louvainalg].

2.2 Community detection

Given a graph $G$, the goal of community detection is to compute the partitioning $P$ of communities that produces the highest modularity. This problem has been shown to be NP-complete [clustering]. The main difference between this problem and traditional graph partitioning is that in graph partitioning the number and size of the clusters and their distribution are known at the beginning [graphpart], whereas for community detection both quantities are unknown.

3 The Louvain algorithm

In 2008 Blondel et al. presented an algorithm for community detection [louvainalg]. The Louvain method is a multi-phase, iterative, greedy algorithm used to produce the community hierarchy. The main idea is the following: the algorithm has multiple phases, each phase with multiple iterations that run until a convergence criterion is met. At the beginning of the first phase each vertex is assigned to a separate community. From there, the algorithm progresses from one iteration to another until the modularity gain becomes lower than a predefined threshold. In every iteration, the algorithm checks all the vertices in an arbitrary but predefined order. For every vertex $i$, all its neighboring communities (those in which its neighbors can be found) are explored and the modularity gain is computed for every possible community transfer. Once this calculation is done, the neighboring community yielding the highest modularity gain is selected and vertex $i$ is moved there. If no suitable community can be found, the vertex stays in its own group. An iteration ends once all vertices are checked. As a result, the modularity is a monotonically increasing function that spreads across the multiple iterations of a phase. When the algorithm converges within a phase, it moves to the next one by reducing all vertices of a community to a single "meta-vertex" [parlouvain]. An edge on such a vertex can be a loop, in which case its weight is the sum of the weights of all the edges whose start and end vertices are within that same community. If the edge points into another community, its weight is the sum of the weights of all the edges between the corresponding two communities. The result is a graph $G'$, which then becomes the input for the consecutive phase. Multiple phases are processed until the modularity converges. At any given iteration, let $\Delta Q_{i \to C(j)}$ represent the modularity gain resulting from moving vertex $i$ from its current community $C(i)$ to a neighboring community $C(j)$. This is given by:

$$\Delta Q_{i \to C(j)} = \frac{e_{i \to C(j)} - e_{i \to C(i) \setminus \{i\}}}{m} + \frac{k_i \cdot \left( a_{C(i) \setminus \{i\}} - a_{C(j)} \right)}{2m^2} \qquad (4)$$

The new community assignment for vertex $i$ will be determined as follows. For $j \in \Gamma(i) \cup \{i\}$:

$$C(i) \leftarrow \underset{C(j)}{\arg\max}\ \Delta Q_{i \to C(j)}, \quad \text{applied only if the maximum gain is positive} \qquad (5)$$

In principle the iterations and phases could run indefinitely, but because the modularity is monotonically increasing and bounded, the algorithm is guaranteed to terminate at some point. In practice the method needs only tens of iterations and fewer phases to terminate on real-world datasets.

3.1 Parallel heuristics

The challenges of parallelizing the Louvain method were explored in [parlouvain]. To solve those issues, multiple heuristics were introduced that make it possible to leverage the performance of parallel systems in a basically sequential algorithm. From the proposed heuristics, two are detailed in this paper. Let us assume the communities at any given stage are labeled numerically. The notation $\ell(C)$ will return the label of community $C$.

3.1.1 Singlet minimum label heuristic

In the parallel algorithm, if at any given iteration a vertex $i$ which is in a community by itself ($C(i) = \{i\}$, a singlet community [parlouvain]) decides, in hope of modularity gain, to move into another singlet community that holds only vertex $j$, this transition will only be applied if $\ell(C(j)) < \ell(C(i))$.

3.1.2 Generalized minimum label heuristic

In the parallel algorithm, if at any given iteration a vertex has multiple neighboring communities providing the maximum modularity gain, the community with the minimum label will be selected. Swap situations might occur, when two vertices transition into each other's communities in the same iteration. This can delay the convergence, but can never lead to nontermination, as the minimum required modularity gain threshold guarantees a successful termination.
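To make the tie-breaking concrete, the following small sketch (illustrative C++/CUDA, not the paper's actual code; all names are hypothetical) shows how a candidate community could be compared against the best one found so far:

struct BestCommunity {
    float gain;  // best modularity gain found so far
    int   label; // label of the community providing that gain
};

// Generalized minimum label rule: a candidate replaces the current best
// if it offers a strictly higher gain, or an equal gain with a lower label.
__device__ void considerCandidate(BestCommunity& best, float gain, int label)
{
    if (gain > best.gain || (gain == best.gain && label < best.label)) {
        best.gain  = gain;
        best.label = label;
    }
}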

Theorem 1. The memory requirement does not increase between phases.

Proof of Theorem 1.

Because of the generalized minimum label heuristic, nodes always move to communities with lower labels. As the computation progresses, clusters either merge into other clusters with lower labels, or keep being filled up by nodes coming from clusters with higher labels. This means the nodes move only among the clusters that were defined at the beginning with the first initialization; no new clusters are created, which keeps the memory footprint the same.

4 Compute Unified Device Architecture

Thanks to modern GPUs, very big datasets can be efficiently computed in parallel [cudaposter, cudaarticle, cudaarticle2]. This is supported by the underlying hardware architecture, which allows us to create general-purpose algorithms running on the graphical hardware. There is no need to introduce any middle abstraction to use these processors, as they have evolved into more general processing units (Figure 1). The Compute Unified Device Architecture (CUDA) divides the GPU into smaller logical parts to give a deeper understanding of the platform [cuda]. With the current device architecture the GPUs work like coprocessors in the system; the instructions are issued through the CPU. In earlier generations the required data had to be copied manually over to the device memory. Furthermore, the different parts of memory (global, shared, constant) used different address spaces.

Figure 1: CUDA Processing Flow

With the second generation of Compute Capability devices it became possible to issue direct memory transactions thanks to Unified Virtual Addressing [fermi]. This made it possible to use pointers and memory allocations not only on the host side but on the device as well, enabling C/C++-style pointer usage.

4.1 Memory access strategies

For the early CUDA-capable generations of GPUs with Compute Capability (CC) 1.x, it is important to use the right access patterns when operating on the device's global memory. In a case where the same value is required by thousands of threads, the operations have to be serialized on these GPUs. The newer GPUs since CC 2.0 also have caching functionality that greatly improves the performance of such applications, increasing the effective throughput of the memory. To fetch and store data in the most optimal way, the map access strategy should be used, where all threads manipulate their own individual values; more specifically, thread $i$ accesses the $i$-th position of the input or output array. The values about memory transactions presented in this subsection are taken from [cuda].
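As a minimal illustration of the map access strategy (an example sketch, not code from the paper), the kernel below lets thread $i$ read and write only position $i$, which yields fully coalesced accesses:

#include <cuda_runtime.h>

// Map access pattern: thread i touches exactly element i, so consecutive
// threads in a warp access consecutive addresses and the hardware can
// coalesce the loads and stores into the fewest possible transactions.
__global__ void scaleKernel(const float* in, float* out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = in[i] * factor; // coalesced read and write
    }
}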

If the data stored in memory can be accessed in a sequential order and the values are aligned to a multiple of the transaction size, then the most optimal memory utilization will be achieved even on devices with CC 1.x (Table 1). The GPU can issue 32-, 64- or 128-byte transactions based on the actual usage of the hardware.

CC:            1.0 and 1.1      1.2 and 1.3      2.x and 3.0
Transactions:  Uncached         Uncached         Cached
               1 x 64B at 128   1 x 64B at 128   1 x 128B at 128
               1 x 64B at 192   1 x 64B at 192
Table 1: Aligned and sequential memory access

If the data is accessed in a non-sequential order (Table 2), then the GPU will need additional memory transactions to process all the available values. We have to mention here that with non-sequential ordering, the transactions may fetch not only the actually required data, but also additional values that are stored on consecutive memory addresses. This can be quite a wasteful approach.

CC:            1.0 and 1.1      1.2 and 1.3      2.x and 3.0
Transactions:  Uncached         Uncached         Cached
               8 x 32B at 128   1 x 64B at 128   1 x 128B at 128
               8 x 32B at 160   1 x 64B at 192
               8 x 32B at 192
               8 x 32B at 224
Table 2: Aligned and non-sequential memory access

If the data is completely misaligned (Table 3), then the hardware is forced to issue more, smaller transactions to reach all the values needed for processing. This case is expensive even for devices with caching.

CC:            1.0 and 1.1      1.2 and 1.3      2.x and 3.0
Transactions:  Uncached         Uncached         Cached
               7 x 32B at 128   1 x 128B at 128  1 x 128B at 128
               8 x 32B at 160   1 x 64B at 192   1 x 128B at 256
               8 x 32B at 192   1 x 32B at 256
               8 x 32B at 224
               1 x 32B at 256
Table 3: Misaligned and sequential memory access

Overall, when the data structures are designed, it is important to consider these basic properties of the GPU's memory subsystem, so that no restrictions are hit and the memory works at maximum throughput.
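One common way to honor these constraints, and a layout this implementation also uses for its edge lists (separate source, target and weight arrays, see Section 5.1), is a structure-of-arrays design. The sketch below contrasts it with an array-of-structures layout (illustrative code, not taken from the paper):

// Array-of-structures: reading edge[i].weight makes a warp stride over
// 12-byte records, scattering its loads across several memory segments.
struct EdgeAoS { int source; int target; float weight; };

// Structure-of-arrays: consecutive threads read consecutive floats,
// which the hardware can coalesce into full-width transactions.
struct EdgesSoA {
    int*   source; // source vertex of edge i
    int*   target; // target vertex of edge i
    float* weight; // weight of edge i
};

__global__ void sumWeights(EdgesSoA edges, float* total, int m)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < m) {
        atomicAdd(total, edges.weight[i]); // coalesced read of weight[i]
    }
}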

4.2 CUDA algorithm

The CUDA algorithm uses the same parallel heuristics that were introduced in Section 3.1. The algorithm (Figure 2) itself follows the same execution path as the CPU-based implementation. Thanks to the characteristics of GPUs, they can run many more threads in parallel compared to CPUs, thus doing more work at the same time.

1:procedure Parallel Community Detection CUDA($G(V, E, \omega)$)
2:     copy $G(V, E, \omega)$ to the device memory
3:     compute the neighbor lists of $G$
4:     assign every vertex to its own community
5:     $Q_{prev} \leftarrow$ modularity of the initial partitioning
6:     while true do
7:         run one phase of the parallel Louvain algorithm (Figure 3)
8:         $Q_{cur} \leftarrow$ Compute new modularity
9:         $gain \leftarrow Q_{cur} - Q_{prev}$
10:         if $gain < \theta$ then
11:              break
12:         end if
13:         rebuild the graph from the computed communities
14:         $Q_{prev} \leftarrow Q_{cur}$
15:     end while
16:end procedure
Figure 2: The community detection algorithm on CUDA. Input is graph $G(V, E, \omega)$; $\theta$ is a predefined threshold

5 The parallel Louvain algorithm

The parallel algorithm (Figure 3) has the following major steps:

(1) Phases: Execute phases one at a time. Within each phase, multiple iterations are run, and each iteration performs a parallel evaluation of the vertices without any locking mechanism, using only the community information from the previous iteration. This continues as long as the modularity gain between iterations stays above the threshold.

(2) Graph rebuilding: After each completed phase, the new community assignment is used to construct a new input graph for the next phase. This is done by introducing the communities as the vertices of the new graph; the edges are added based on their connections to these communities.

1:procedure Parallel Louvain($G(V, E, \omega)$, Status)
2:     $Q_{prev} \leftarrow$ current modularity
3:     initialize the community of every vertex from Status
4:     while true do
5:         for each vertex $i \in V$ in parallel do
6:              $N \leftarrow$ the neighboring communities of $i$
7:              for each $C \in N$ do
8:                  compute $\Delta Q_{i \to C}$
9:              end for
10:              $C_{best} \leftarrow$ the community maximizing $\Delta Q$ (minimum label on ties)
11:              if $\Delta Q_{i \to C_{best}} > 0$ then
12:                  move $i$ into $C_{best}$
13:              end if
14:         end for
15:         $Q_{cur} \leftarrow$ Compute new modularity
16:         if $Q_{cur} - Q_{prev} < \theta$ then
17:              break
18:         else
19:              $Q_{prev} \leftarrow Q_{cur}$
20:         end if
21:     end while
22:end procedure
Figure 3: The parallel Louvain algorithm for a single phase. Inputs are the graph $G(V, E, \omega)$ and a Status object containing the initial community information; $\theta$ is a user specified threshold

5.1 Implementation

The implementation focuses on optimizations to provide the maximum possible parallelization of the algorithm. The code is written in C++; for parallel primitives the NVIDIA-provided Thrust library is used. Storing some intermediate values on the GPU is handled by CUDPP hash tables. In this subsection the data structures and procedures used are detailed.

5.1.1 Data Structures

To generate the dendrogram, the algorithm needs to know the nodes and edges building up the input graph. Looking at it in more detail, it becomes clear that not all nodes need to be evaluated during the computation: nodes whose degree is 0 will never be able to move to any community other than their default initialized group. Because of this, the computation uses only the edges as its source, as their source-target pairs identify all nodes with degree greater than 0. All values required by the processing are stored in a StatusCUDA object. The input graph of each dendrogram level is stored in a CUDAGraph object.

To allocate device memory efficiently, integer, unsigned integer and float values are requested from the runtime driver as blocks. The memory pointers for the individual variables are calculated on the host after the allocation is done; a sketch of this technique follows the lists below. To save memory on intermediate storage variables, the following fields use the same memory space per block, as they do not interfere with each other during the computation.

Integer:

  • temp_source ↔ neighbourCommunities

  • CUDAGraph.edgeSource_temp ↔ StatusCUDA.neighbourSource_local

Unsigned integer:

  • hash_idx_target ↔ hash_idx

  • CUDAGraph.node ↔ StatusCUDA.node

Float:

  • CUDAGraph.edgeWeight_temp ↔ StatusCUDA.neighbourWeights_local
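A minimal sketch of the block allocation technique mentioned above (hypothetical names, simplified to two variables): one device allocation is made per type, and the host derives the individual sub-array pointers from it; aliased variables simply receive the same sub-pointer.

#include <cuda_runtime.h>

// Allocate one integer block and carve out the per-variable pointers on
// the host. n is the number of nodes, m the number of edge slots.
void allocateIntBlock(int n, int m, int** temp_source, int** node_buf)
{
    int* block = nullptr;
    cudaMalloc(&block, ((size_t)m + (size_t)n) * sizeof(int));

    *temp_source = block;     // first sub-array: m entries
    *node_buf    = block + m; // second sub-array: n entries
}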

5.1.2 Processes

Computation of the dendrogram involves the following processes: calculating the neighbors of the input graph, initialization for the actual level, generation of new community assignments in the current level, renumbering the nodes based on the new community numbers, and generating the new input graph for the next level.

Neighbor computation

As Subsection 5.1.1 shows, the neighbors are stored in source-target arrays according to the nodes connected to each edge. The computation starts by first collecting all the neighbors. To achieve maximum parallelization on the neighbors, target-source pairs are copied to the end of the source-target pairs. The reason behind this is the undirected nature of the graph: nodes can move to a new community in either direction along an edge. There is no restriction on the order of the edges, which means at this stage the neighbors of a node may not be stored consecutively. This leads to ill-formed memory access patterns that reduce the achievable memory bandwidth of the GPU. To fix this, the edges are sorted based on the source-target pairs using the Thrust library's sort_by_key function with the edge weight as value. Thanks to this sorting, the number of neighbors can now be calculated: the reduce_by_key function gives back, for every neighbourSource value, the number of targets. The algorithm also needs to know the index where the neighbors of a given node start; calling the exclusive_scan function on the neighbor counts gives back the starting index of the targets for every source.
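The pipeline described above can be sketched with Thrust as follows (a simplified sketch; the variable names are assumptions and the output vectors are presumed to be pre-sized):

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <thrust/scan.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/iterator/constant_iterator.h>

// src/tgt already contain every undirected edge in both directions.
void buildNeighborIndex(thrust::device_vector<int>& src,
                        thrust::device_vector<int>& tgt,
                        thrust::device_vector<float>& w,
                        thrust::device_vector<int>& uniqueSrc,
                        thrust::device_vector<int>& degree,
                        thrust::device_vector<int>& firstNeighbor)
{
    // Sort edges by (source, target) so each node's neighbors are contiguous.
    auto keys = thrust::make_zip_iterator(
        thrust::make_tuple(src.begin(), tgt.begin()));
    thrust::sort_by_key(keys, keys + src.size(), w.begin());

    // Count the neighbors of every source by reducing a stream of ones.
    auto ends = thrust::reduce_by_key(src.begin(), src.end(),
                                      thrust::make_constant_iterator(1),
                                      uniqueSrc.begin(), degree.begin());

    // Exclusive scan of the counts yields each node's first-neighbor index.
    thrust::exclusive_scan(degree.begin(), ends.second, firstNeighbor.begin());
}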

Initializing status

For every level of the dendrogram, the nodes are assigned to their default cluster, which is a group made of one node each. The variables storing intermediate results from the previous cycle are also initialized, and the degree of each node is calculated, including the possible loops in the graph. During the initial step the hash_idx variable is set to contain each node's index in the storing array; this is used to store the initial positions of the nodes in the hash table assigned to the hash_table_handle handler. For the degree computation it is necessary to know the index of the actual node where the result will be stored. As the degree computation goes through the weights of all neighbors, the first neighbor is taken and the source of that link is used to get the index from the hash table. This index helps to set the degree on the correct node.

Generating one level

Each level in the dendrogram contains, for each node, the community where it finds a better modularity. This generation is iterative, as it moves the nodes to new communities as long as the resulting modularity gain is bigger than a predefined threshold. Before the iterations, the variables (comSize, comSizeUpd, degreeUpd, internalUpd) used for updating the communities are initialized. comSize is always set to 1, as at this point each node represents its own community. comSizeUpd, degreeUpd and internalUpd are all set to 0 and contain the change of a given community's size, degree and internal degree, respectively. Also, before the communities are processed, the actual state of the neighbourWeights variable is copied into neighbourWeights_store.

In every iteration, first the indexes of the individual target values of each edge are taken from the hash table stored under hash_table_handle into the hash_idx variable. The next step is to compute the actual community of the given targets. This is important because the computation works on communities, and they can change in every iteration for all the edges. This is done by the neighbourCommunitySetKernel device kernel, which takes the current nodesToCom values for the actual communities based on the indexes stored in hash_idx. Also, for those edges that represent loops, the weight is set to 0. This is done because the computation tries to find a better community among a given node's neighbors; if a node belongs to a false community with many different nodes, there is a good chance it would otherwise never find the correct cluster. Even if the original graph had no loops, the computation can lead to clusters where multiple edges stay within the same community.

After having the communities, a reordering is required first. Using sort_by_key with neighbourSource and neighbourCommunities as keys, the neighbourWeights values are reordered so that the edges are in consecutive order based on the communities. This is used to calculate the weight towards each of the given communities with the reduce_by_key function. The reduction's input keys are the same as for the sorting, with neighbourSource_local and neighbourCommunities_local being the result keys and neighbourWeights_local storing the final reduced values.

In the final stage the number of neighbors and their starting coordinates are calculated as it was detailed before for the neighbor computation.

With the current communities set and their values calculated, the next move of the nodes can be processed. The SetNewCommunityWarpShuffle device kernel is responsible for calculating each node's next community. To make this efficient, every CUDA block computes one node. The threads within the block each take one neighboring community from the list of all clusters, if the node has any neighbors. The first thread fetches the current community of the node being evaluated. What needs to be done is effectively a maximum search based on multiple criteria: the neighbor providing the best weight increase is selected. If the community of this selected neighbor is the same as where the processed node already is, nothing happens. If multiple neighbors provide the same increase, then the community with the lower index is selected, based on what is detailed in Subsection 3.1.2.

As the name of the kernel implies, the implementation uses warp shuffle operations. With these, specific values can be shared among the threads within a warp without requiring any additional global memory read or write operations, further increasing the access performance of the algorithm.
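A condensed sketch of such a warp level maximum search (illustrative only; the real kernel additionally handles multiple warps per block and the surrounding bookkeeping):

// Warp-wide argmax of (gain, community) pairs via shuffle operations:
// every lane starts with its own candidate, and ties are broken towards
// the lower community label as required by the minimum label heuristic.
__device__ void warpBestCommunity(float& gain, int& community)
{
    for (int offset = 16; offset > 0; offset >>= 1) {
        float otherGain = __shfl_down_sync(0xffffffff, gain, offset);
        int   otherCom  = __shfl_down_sync(0xffffffff, community, offset);
        if (otherGain > gain ||
            (otherGain == gain && otherCom < community)) {
            gain      = otherGain;
            community = otherCom;
        }
    }
    // Lane 0 now holds the winner; broadcast it to the whole warp.
    gain      = __shfl_sync(0xffffffff, gain, 0);
    community = __shfl_sync(0xffffffff, community, 0);
}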

As the neighbors are not used anymore in the current iteration, the weight difference and the selected community are stored in the node's first neighbor's weight and community value, respectively.

The changes are registered by the oneLevelKernel device kernel. If the computed next community is different from the one already occupied by the node, the remove device function removes the node's weight from its old community and the insert device function adds it to the new cluster. Additionally, comSizeUpd is changed as needed with atomicSub and atomicAdd to avoid issues from multiple concurrent updates. Finally, nodesToComNext will contain, for each node, the community it will belong to in the next iteration.
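The remove/insert bookkeeping can be pictured as in the sketch below (hypothetical array names; only the size and degree updates are shown):

// Moving a node between communities updates the pending per-community
// changes atomically, since many nodes may touch the same community
// within one iteration.
__device__ void removeNode(int* comSizeUpd, float* degreeUpd,
                           int from, float k_i)
{
    atomicSub(&comSizeUpd[from], 1);   // one node fewer in the old cluster
    atomicAdd(&degreeUpd[from], -k_i); // its degree leaves with it
}

__device__ void insertNode(int* comSizeUpd, float* degreeUpd,
                           int to, float k_i)
{
    atomicAdd(&comSizeUpd[to], 1);     // one node more in the new cluster
    atomicAdd(&degreeUpd[to], k_i);    // its degree joins the community
}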

After the computations are done, the modularity is recomputed to check whether additional iterations should be taken. If yes, the communitySetKernel device kernel applies the changes on each cluster that were computed throughout the iteration. To prepare for the next pass, the weights calculated for the graph are copied back from neighbourWeights_store into neighbourWeights. Finally, the buffers are rotated: nodesToComPrev is set to nodesToCom, nodesToCom to nodesToComNext, and nodesToComNext receives what was computed in the previous step.

When all iterations are done, one final step is applied to those nodes that have only one neighboring community: the mergeInitKernel, mergeIsolatedNodeKernel and mergeSetKernel device kernels push them into those neighbors to increase the quality of the end result. Finally, the modularity is recomputed for the later decision whether additional levels of the dendrogram are necessary.

Renumbering nodes

Finishing the generation of a new level in the dendrogram may leave some clusters removed, as all their nodes moved to other communities. To keep the data organized, each level of the dendrogram uses consecutive cluster numbering, which means that after one level is computed, the IDs of the groups are mapped into a continuous range.

To do this, first the nodesToCom array containing the actual cluster assignments is copied into nodesToComPrev for preservation. The values in nodesToComPrev need to be ordered, so sort_by_key sorts the cluster IDs, with node and hash_idx used as values, where hash_idx contains the indexes of the nodes. The resulting nodesToComPrev is first copied into nodesToCom to have a copy of the sorted values.

Following this, using the unique function from Thrust on nodesToComPrev with hash_idx as keys, only the distinct cluster IDs remain together with their indexes. Now only nodesToComPrev needs to be ordered based on hash_idx as key, resulting in the unique clusters, ordered by their indexes, which point to their locations in nodesToCom.

After computing the unique clusters, they are pushed with their indexes into a hash table assigned to the commMap_hash_table_handle handler. From this table the indexes of the clusters stored in nodesToCom can be retrieved. Finally, this index will be the new community for the respective nodes; at this point hash_idx is the community and node is the node, and these need to be transferred to the host, so each level of the dendrogram can be stored there. This node-community pair is also stored in another hash table assigned to nodeCommMap_hash_table_handle, which is used in the last process.

Inducing new graph

Each level of the dendrogram needs an input graph to compute on. At the first level this is the original graph, but for the subsequent levels this graph is generated from the new cluster assignments. At the end of the renumbering process each node has a new ID associated with it; this represents the new cluster ID in the next level's graph.

First the algorithm has to compute the unique IDs, so it knows how many different communities there are, which tells how many nodes the graph of the next level will have. As these IDs are already stored in a hash table, the original storage can be manipulated. At the beginning the hash_idx array is sorted. After that the unique_copy function collects and copies the unique clusters into cuGraph.node. This way the nodes for the next level are generated.

To compute the new edges, it is again exploited that the links are stored in source-target pairs. First the indexes of the neighbourSource values are retrieved into the hash_idx_source array from the hash table pointed to by nodeCommMap_hash_table_handle. Then the same is done for the targets, in which case the indexes are stored in hash_idx_target. Because the original graph will not be used again, the new edges are computed at the same memory address. The two hash_idx arrays retrieved before give the new source and target IDs for the edges.

To finish the computation, only the weights are missing at this point. As detailed in Section 2, the graph can be unweighted, in which case the weights are considered to be 1 for all links. In every case where a given weight is 0, the algorithm increases that value, so during computation all edges have a weight greater than 0. Here the neighbourSource, neighbourTarget IDs are used as keys to sort the neighbourWeights values. This way a call to reduce_by_key on these variables results in the new edges, stored in edgeSource_temp, edgeTarget_temp and edgeWeight_temp. Copying these values back to neighbourSource, neighbourTarget and neighbourWeights, respectively, concludes the computation of one level.
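The merging of parallel edges between the same pair of communities can be sketched with Thrust zip iterators (simplified; buffer names are assumptions and outputs are presumed pre-sized):

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <thrust/iterator/zip_iterator.h>

// After relabeling the endpoints with community IDs, all edges between
// the same community pair become duplicates; reduce_by_key sums their
// weights into a single edge of the induced graph.
void induceEdges(thrust::device_vector<int>& src,
                 thrust::device_vector<int>& tgt,
                 thrust::device_vector<float>& w,
                 thrust::device_vector<int>& srcOut,
                 thrust::device_vector<int>& tgtOut,
                 thrust::device_vector<float>& wOut)
{
    auto keys = thrust::make_zip_iterator(
        thrust::make_tuple(src.begin(), tgt.begin()));
    thrust::sort_by_key(keys, keys + src.size(), w.begin());

    auto keysOut = thrust::make_zip_iterator(
        thrust::make_tuple(srcOut.begin(), tgtOut.begin()));
    thrust::reduce_by_key(keys, keys + src.size(), w.begin(),
                          keysOut, wOut.begin());
}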

6 Evaluation

Input graph Num. vertices (n) Num. edges (m) Max. deg. Avg. deg. RSD
CNR 325557 2738970 18236 16,826 13,024
coPapersDBLP 540486 15245729 3299 56,414 1,174
Channel 4802000 42681372 18 17,776 0,061
Europe-osm 50912018 54054660 13 2,123 0,225
Soc-LiveJournal1 4847571 68475391 22887 28,251 2,553
MG1 1280000 102268735 148155 159,794 2,311
Rgg_n_2_24_s0 16777216 132557200 40 15,802 0,251
uk-2002 18520486 261787258 194955 28,270 5,124
NLPKKT240 27993600 373239376 27 26,666 0,083
MG2 11005829 674142381 5466 122,506 2,370
friendster 51952104 1801014245 8603554 69,333 17,354
Table 4: The test data

Testing was done on multiple real-world networks (Table 4), of which the first two were used for the experimental study; the remaining graphs are for the theoretical evaluation. The limit was the available resources, as only the first two graphs could fit in the system's memory. In this section the runtime of the parallel algorithm is compared between the CPU and GPU implementations.

Evaluation was done using the following system (Table 5):

CPU:       Intel Core i7-4710HQ
GPU:       GeForce GTX 980
OS:        Ubuntu 16.04
Compiler:  GCC 4.8.5
CUDA:      9.0
Table 5: The test system

On the CPU, parallelization ran on 8 threads. The GTX 980 has 16 multiprocessors and each of them can hold a maximum of 2048 resident threads, overall giving 32768 threads that can run in parallel at the same time.

6.1 Experimental results

Experimental results, as mentioned before, are provided for the CNR and coPapersDBLP graphs. The whole community detection has multiple steps, as described in Section 3.

Figure 4: Detailed runtime of the CNR graph

The overall computation times on the CPU and the GPU, together with the detailed runtimes, are presented in Figure 4.

The breakdown of the runtimes of the different processes for the coPapersDBLP graph is presented in Figure 5.

Figure 5: Detailed runtime of the coPapersDBLP graph

The overall CPU and GPU computation times for this graph are likewise shown in Figure 5.

6.2 Theoretical results

The problem with the following graphs is that their size extends beyond the available system resources, but the average degree of the nodes is known for every network, so it was used together with the number of nodes and edges to compute the theoretical performance (Table 6 and Table 7) of the parallel implementations. The runtimes for the neighbour calculation and for generating one dendrogram level are calculated by dividing the measured execution time of one of the previous graphs (CNR is used here) by its average degree, and then multiplying this number by the average degree of the bigger dataset. The runtime of the remaining processes scales with the number of nodes and edges: the execution times of init and renumber are computed by dividing by the node count and multiplying by the bigger network's vertex count, while for the induce graph function the same is done with the edge count. It has to be mentioned that when dividing to compute the runtime of one unit, the result also has to be multiplied by the number of threads running in parallel, because during computation one unit is processed by up to that many concurrent threads; this transforms the runtime back to its sequential equivalent. On the multiplication side, the result is correspondingly divided by this thread number to get the final estimate.
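In formula form, with $P$ denoting the number of concurrently running threads and $\bar{d}$, $n$, $m$ the average degree, node count and edge count of a graph (the thread factor cancels out, but it is kept here to mirror the description above):

$$t_{big} = \frac{t_{CNR} \cdot P}{x_{CNR}} \cdot \frac{x_{big}}{P}, \qquad x \in \{\bar{d},\ n,\ m\}$$

where $x = \bar{d}$ is used for the neighbour and one-level estimates, $x = n$ for init and renumber, and $x = m$ for the graph induction.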

Input graph Neighbour Init Onelevel Renumber Induce Full
Channel 0,067 14,38 0,11 2,8 0,55 3,88
Europe-osm 0,008 152,47 0,05 29,71 0,69 34,21
Soc-LiveJournal1 0,11 14,52 0,19 2,83 0,88 4,36
MG1 0,61 3,83 0,48 0,75 1,31 3,24
Rgg_n_2_24_s0 0,06 50,25 0,18 9,79 1,69 12,96
uk-2002 0,11 55,47 0,81 10,81 3,35 16,43
NLPKKT240 0,1 83,84 0,28 16,34 4,77 23,55
MG2 0,47 32,96 5,86 6,42 8,61 22,18
friendster 0,26 155,59 1,16 30,32 23,01 58,59
Table 6: Theoretical runtimes on the GPU
Input graph Neighbour Init Onelevel Renumber Induce Full
Channel 1,14 0,37 7,43 1,93 9,24 20,11
Europe-osm 0,14 3,91 0,89 20,49 11,7 37,12
Soc-LiveJournal1 1,81 0,37 11,82 1,95 14,83 30,77
MG1 10,21 0,09 66,83 0,52 22,14 99,79
Rgg_n_2_24_s0 1,01 1,29 6,61 6,75 28,69 44,36
uk-2002 1,81 1,42 11,82 7,45 56,68 79,18
NLPKKT240 1,7 2,15 11,15 11,26 80,81 107,08
MG2 7,83 0,85 51,23 4,43 145,96 210,29
friendster 4,43 3,99 28,99 20,9 389,93 448,25
Table 7: Theoretical runtimes on the CPU

6.3 Analysis

Starting with the experimental results, the detailed runtimes show that the computation of one phase, as described in Figure 3, takes most of the time when running on the CPU.

For the "CNR" graph, the times to generate one level of the dendrogram, to compute the new graph input for the consecutive phase and to process the neighbours are given for both the CPU and the GPU in Figure 4. The GPU is additionally hit with a memory transfer overhead, as the input and output values have to be copied between the host and the device. Overall, the GPU is several times faster than the CPU implementation.

The runtimes for the "coPapersDBLP" graph follow the same pattern: the times for generating the dendrogram's level, computing the graph induction and computing the neighbours, together with the overall time of the device memory copy operations, are shown for both implementations in Figure 5. Overall, the GPU again achieves several times better performance.

The theoretical results also show similar performance; the speedups for the individual networks can be derived from the full runtimes in Table 6 and Table 7. For the Europe-osm graph the results show that there was practically no speedup. The GPU-based theoretical runtimes (Table 6) show that for this graph the renumber part is considerably slower than for any of the other graphs. While this encourages further evaluation of this process within the whole clustering, it already shows that not just the size of the graph is the deciding factor for performance, but the structure of the network as well.

One aspect of the algorithm that can reduce the efficiency of the GPU is the diverging execution path of the different threads, caused by the varying number of neighbors of the nodes. This might be further optimized, for example by reordering the nodes so that nodes with equal or nearly equal numbers of neighbors are consecutive in memory [msc].

7 Future work

Implementing the other heuristics from [parlouvain] might further increase the performance of the GPU implementation. Optimizations should also be explored on the memory management level to make the algorithm able to process bigger graphs on a given system. As seen in Section 6.3, while the GPU version greatly outperforms the CPU overall, some phases run much slower than their CPU counterparts; these should be further explored to reduce their computation time as well.

8 Conclusion

By implementing the heuristics described in Section 3.1 using the CUDA platform (Section 4) for the GPU version (Section 4.2), the experimental results show a several times faster computation time, while the theoretical study points to speedups of up to 30 over the CPU-based parallel version of the Louvain algorithm. According to the results detailed in Section 6.3, GPUs can be used more effectively if the underlying data is big enough, but size is not the only important factor: for this algorithm, the structure of the data is also significant in achieving higher performance. The test cases show that the time to generate the new level of the dendrogram was always significantly lower than the time to generate the new graph for the next pass, and the CPU produced similar results. The runtimes lead to the conclusion that the parallel Louvain modularity computation (Figure 3) profits from running on GPUs, as this holds in all the test cases, and the same applies to the other computational phases.

9 Acknowledgments

The GPU used for the computations was provided by the Wigner RCP's GPU Laboratory. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. A preliminary version of this paper appeared in [ines].

References
