Theoretically Efficient and Practical Parallel DBSCAN
Abstract.
The DBSCAN method for spatial clustering has received significant attention due to its applicability in a variety of data analysis tasks. There are fast sequential algorithms for DBSCAN in Euclidean space that take O(n log n) work for two dimensions, sub-quadratic work for three or more dimensions, and can be computed approximately in linear expected work for any constant number of dimensions. However, existing parallel DBSCAN algorithms require quadratic work in the worst case, making them inefficient for large datasets. This paper bridges the gap between theory and practice of parallel DBSCAN by presenting new parallel algorithms for Euclidean exact DBSCAN and approximate DBSCAN that match the work bounds of their sequential counterparts, and are highly parallel (polylogarithmic depth). We present implementations of our algorithms along with optimizations that improve their practical performance. We perform a comprehensive experimental evaluation of our algorithms on a variety of datasets and parameter settings. Our experiments on a 36-core machine with hyperthreading show that we outperform existing parallel DBSCAN implementations by up to several orders of magnitude, and achieve speedups of up to 33x over the best sequential algorithms.
1. Introduction
Spatial clustering methods are frequently used to group together similar objects. Density-based spatial clustering of applications with noise (DBSCAN) is a popular method developed by Ester et al. (Ester et al., 1996) that is able to find good clusters of different shapes in the presence of noise without requiring prior knowledge of the number of clusters. The DBSCAN algorithm has been applied successfully to clustering in spatial databases, with applications in various domains including transportation, biology, and astronomy.
The traditional DBSCAN algorithm (Ester et al., 1996) and its variants require work quadratic in the input size in the worst case, which can be prohibitive for the large datasets that need to be analyzed today. To address this computational bottleneck, there has been recent work on designing parallel algorithms for DBSCAN and its variants (Brecheisen et al., 2006; Xu et al., 1999; Arlia and Coppola, 2001; Januzaj et al., 2004b, a; Böhm et al., 2009; Xiang Fu et al., 2011; Huang et al., 2017; Coppola and Vanneschi, 2002; Patwary et al., 2012, 2013, 2014, 2015; Welton et al., 2013; Lulli et al., 2016; Fu et al., 2014; Dai and Lin, 2012; Andrade et al., 2013; Han et al., 2016; Kim et al., 2014; He et al., 2014; Fu et al., 2014; Chen and Chen, 2015; Götz et al., 2015; Cordova and Moh, 2015; Luo et al., 2016; Yu et al., 2015; Hu et al., 2017; Araujo Neto et al., 2015; Hu et al., 2018; Song and Lee, 2018). However, even though these solutions achieve scalability and speedup over sequential algorithms, in the worst case their number of operations still scales quadratically with the input size. Therefore, a natural question is whether there exist DBSCAN algorithms that are fast both in theory and practice, in both the sequential and parallel settings.
Given the ubiquity of datasets in Euclidean space, there has been work on better DBSCAN algorithms in this setting. Gunawan (Gunawan, 2013) and de Berg et al. (de Berg et al., 2017) have shown that Euclidean DBSCAN in 2D can be solved sequentially in O(n log n) work. Gan and Tao (Gan and Tao, 2017) provide alternative sequential Euclidean DBSCAN algorithms for two dimensions that take O(n log n) work. For higher dimensions, Chen et al. (Chen et al., 2005b) provide an algorithm that takes O(n^{2−2/(d+2)} polylog n) work for d dimensions, and Gan and Tao (Gan and Tao, 2017) improve the result with an algorithm that takes O(n^{2−2/(⌈d/2⌉+1)+δ}) expected work for any constant δ > 0. To further reduce the work complexity, there have been approximate DBSCAN algorithms proposed. Chen et al. (Chen et al., 2005b) provide an approximate DBSCAN algorithm that takes O(n log n) work for any constant number of dimensions, and Gan and Tao (Gan and Tao, 2017) provide a similar algorithm taking O(n) expected work. However, none of the algorithms described above have been parallelized.
In this paper, we bridge the gap between theory and practice in parallel Euclidean DBSCAN algorithms by providing new parallel algorithms for exact and approximate DBSCAN with work complexity matching that of the best sequential algorithms (Gunawan, 2013; de Berg et al., 2017; Gan and Tao, 2017), and with low depth (parallel time complexity), which is the gold standard in parallel algorithm design. For exact 2D DBSCAN, we design several parallel algorithms that use either the box or the grid construction for partitioning points as described in (Gunawan, 2013; de Berg et al., 2017) and one of the following three procedures for determining connectivity among core points: Delaunay triangulation (Gan and Tao, 2017), unit-spherical emptiness checking with line separation (Gan and Tao, 2017), and bichromatic closest pairs. For higher-dimensional exact DBSCAN, we provide an algorithm based on solving the higher-dimensional bichromatic closest pairs problem in parallel. Unlike many existing parallel algorithms, our exact algorithms produce the same result as the standard definition of DBSCAN, and so we do not sacrifice clustering quality. For approximate DBSCAN, we design a parallel algorithm that uses parallel quadtree construction and querying. Our approximate algorithm returns the same result as the sequential approximate algorithm by Gan and Tao (Gan and Tao, 2017).
We perform a comprehensive set of experiments on both synthetic and real-world datasets using varying parameters, comparing our performance to optimized sequential implementations as well as existing parallel DBSCAN algorithms. On a 36-core machine with two-way hyperthreading, our exact DBSCAN implementations achieve 2–89x (24x on average) self-relative speedup and 5–33x (16x on average) speedup over the fastest sequential implementations. Our approximate DBSCAN implementations achieve 14–44x (24x on average) self-relative speedup. Compared to existing parallel algorithms, which are scalable but have high overheads compared to the serial implementations, our fastest exact algorithms are faster by up to orders of magnitude (16–6102x) under correctly chosen parameters. Our algorithm is able to process the largest dataset that has been used in the DBSCAN literature for exact DBSCAN, and outperforms the reported numbers of the state-of-the-art distributed RP-DBSCAN algorithm (Song and Lee, 2018) by 18–577x. We advance the state-of-the-art for DBSCAN by providing the fastest parallel algorithms for this problem, while also proving strong theoretical guarantees for our algorithms.
We summarize our contributions below.

New parallel algorithms for 2D exact DBSCAN, and higher-dimensional exact and approximate DBSCAN with work bounds matching that of the best existing sequential algorithms, and polylogarithmic depth.

Highly-optimized implementations of our parallel DBSCAN algorithms.

A comprehensive experimental evaluation showing that our algorithms achieve excellent parallel speedups over the best sequential algorithms and outperform existing parallel algorithms by up to orders of magnitude.
2. Preliminaries
DBSCAN Definition. The DBSCAN (density-based spatial clustering of applications with noise) problem takes as input n points P = {p_0, …, p_{n−1}}, a distance function d, and two parameters ε and minPts (Ester et al., 1996). A point p is a core point if and only if |{p_i | p_i ∈ P, d(p, p_i) ≤ ε}| ≥ minPts. We denote the set of core points as C. DBSCAN computes and outputs subsets of P, referred to as clusters. Each point in C is in exactly one cluster, and two points p, q ∈ C are in the same cluster if and only if there exists a list of points p̄_1 = p, p̄_2, …, p̄_{k−1}, p̄_k = q in C such that d(p̄_{i−1}, p̄_i) ≤ ε for all 1 < i ≤ k. For all non-core points p, p belongs to a cluster C_j if d(p, q) ≤ ε for at least one core point q ∈ C_j. Note that a non-core point can belong to multiple clusters. A non-core point belonging to at least one cluster is called a border point and a non-core point belonging to no clusters is called a noise point. For a given set of points and parameters ε and minPts, the clusters returned are unique. Similar to many previous papers on parallel DBSCAN, we focus on the Euclidean distance metric in this paper. See Figure 1(a) for an illustration of the DBSCAN problem.
Gan and Tao (Gan and Tao, 2017) define the approximate DBSCAN problem, which in addition to the DBSCAN inputs, takes a parameter ρ > 0. The definition is the same as DBSCAN, except for the connectivity rule among core points. In particular, core points within a distance of ε are still connected, but core points with distance between ε and ε(1+ρ) may or may not be connected. Core points with distance greater than ε(1+ρ) are still not connected. Due to this relaxation, multiple valid clusterings can be returned. The relaxation is what enables an asymptotically faster algorithm to be designed. A variation of this problem was described by Chen et al. (Chen et al., 2005a).
Existing algorithms as well as some of our new algorithms use subroutines for solving the bichromatic closest pair (BCP) problem, which takes as input two sets of points P_1 and P_2, finds the closest pair of points p_1 and p_2 such that p_1 ∈ P_1 and p_2 ∈ P_2, and returns the pair and their distance.
Computational Model. We use the work-depth model (Jaja, 1992; Cormen et al., 2009) to analyze the theoretical efficiency of parallel algorithms. The work W of an algorithm is the number of operations used, similar to the time complexity in the sequential RAM model. The depth D is the length of the longest sequence of dependences. By Brent's scheduling theorem (Brent, 1974), an algorithm with work W and depth D has overall running time W/P + D, where P is the number of processors available. In practice, the Cilk work-stealing scheduler (Blumofe and Leiserson, 1999) can be used to obtain an expected running time of W/P + O(D). A parallel algorithm is work-efficient if its work asymptotically matches that of the best sequential algorithm for the problem, which is important since in practice the W/P term in the running time often dominates.
Primitive                Work            Depth              Reference
Prefix sum, Filter       O(n)            O(log n)           (Jaja, 1992)
Comparison sort          O(n log n)      O(log n)           (Jaja, 1992; Cole, 1988)
Integer sort             O(n)            O(log n)           (Vishkin, 2010)
Semisort                 O(n) expected   O(log n) w.h.p.    (Gu et al., 2015)
Merge                    O(n)            O(log n)           (Jaja, 1992)
Hash table (n ops)       O(n)            O(log n) w.h.p.    (Gil et al., 1991)
Delaunay triangulation   O(n log n)      O(log n)           (Reif and Sen, 1992)
Parallel Primitives. We give an overview of the primitives used in our new parallel algorithms. We use the implementation of these primitives in the Problem Based Benchmark Suite (PBBS) (Shun et al., 2012), an open-source library.
Prefix sum takes as input an array A of length n, and returns the array (0, A[0], A[0] + A[1], …, sum_{i=0}^{n−2} A[i]) as well as the overall sum, sum_{i=0}^{n−1} A[i]. Prefix sum can be implemented by first adding the odd-indexed elements to the even-indexed elements in parallel, recursively computing the prefix sum for the even-indexed elements, and finally using the results on the even-indexed elements to update the odd-indexed elements in parallel. This algorithm takes O(n) work and O(log n) depth (Jaja, 1992).
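As an illustration, the odd/even recursion above can be sketched as follows (a toy sequential Python sketch of our own, not the PBBS implementation; the list comprehensions are the steps that run in parallel in the real algorithm):

```python
def prefix_sum(a):
    """Exclusive prefix sum via the odd/even pairing recursion.

    Returns (sums, total), where sums[i] = a[0] + ... + a[i-1].
    The pairing and expansion steps are each fully parallel.
    """
    n = len(a)
    if n == 0:
        return [], 0
    if n == 1:
        return [0], a[0]
    # Pairing step: add each odd-indexed element into its even-indexed neighbor.
    b = [a[2 * i] + a[2 * i + 1] for i in range(n // 2)]
    sums_b, total_b = prefix_sum(b)
    ext = sums_b + [total_b]  # ext[k] = a[0] + ... + a[2k-1]
    # Expansion step: recover the full prefix sums from the half-length result.
    out = [ext[i // 2] if i % 2 == 0 else ext[i // 2] + a[i - 1] for i in range(n)]
    total = total_b + (a[-1] if n % 2 else 0)
    return out, total
```

Each level halves the array, giving the O(log n) depth, and the total work over all levels is geometric, giving O(n) work.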
Filter takes an array A of size n and a predicate f, and returns a new array B containing the elements A[i] for which f(A[i]) is true, in the same order as in A. We first construct an array F of size n with F[i] = 1 if f(A[i]) is true and F[i] = 0 otherwise. Then we compute the prefix sum S of F. Finally, for each element A[i] where F[i] = 1, we write it to the output array at index S[i] (i.e., B[S[i]] = A[i]). This algorithm also takes O(n) work and O(log n) depth (Jaja, 1992).
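The three filter steps can be sketched directly (our own sequential Python sketch; `accumulate` stands in for the parallel prefix sum, and each loop is a parallel map or scatter in the real algorithm):

```python
from itertools import accumulate

def parallel_filter(a, pred):
    """Filter via flags + exclusive prefix sum + scattered writes."""
    flags = [1 if pred(x) else 0 for x in a]       # parallel predicate map
    sums = list(accumulate(flags, initial=0))      # exclusive prefix sum of flags
    out = [None] * sums[-1]                        # sums[-1] = number of kept elements
    for i, x in enumerate(a):                      # parallel scatter to unique slots
        if flags[i]:
            out[sums[i]] = x
    return out
```

The prefix sum guarantees every kept element receives a distinct output index, so all writes can proceed concurrently.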
Comparison sorting sorts n elements based on a comparison function. Parallel comparison sorting can be done in O(n log n) work and O(log n) depth (Jaja, 1992; Cole, 1988). We use a cache-efficient samplesort (Blelloch et al., 2010) from PBBS, which samples pivots on each level of recursion, partitions the keys based on the pivots, and recurses on each partition in parallel.
We also use integer sorting, which sorts n integer keys from a polylogarithmic range in O(n) work and O(log n) depth (Vishkin, 2010). The algorithm partitions the keys into subarrays and, in parallel across all partitions, builds a histogram on each partition serially. It then uses a prefix sum on the counts of each key per partition to determine unique offsets into a global array for each partition. Finally, all partitions write their keys into unique positions in the global array in parallel.
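The partition/histogram/offset structure can be sketched as follows (a sequential Python sketch of our own; the per-partition histogram loops and the final writes are the parallel phases, and a parallel prefix sum would replace the serial offset accumulation):

```python
def integer_sort(keys, num_buckets, parts=4):
    """Stable integer sort via per-partition histograms and global offsets."""
    n = len(keys)
    chunk = -(-n // parts)  # ceiling division: partition size
    # Phase 1 (parallel over partitions): build a histogram per partition.
    hists = []
    for p in range(parts):
        h = [0] * num_buckets
        for k in keys[p * chunk:(p + 1) * chunk]:
            h[k] += 1
        hists.append(h)
    # Phase 2: prefix sum over (key, partition) counts, key-major, gives each
    # partition a unique output range per key.
    offsets, acc = {}, 0
    for k in range(num_buckets):
        for p in range(parts):
            offsets[(k, p)] = acc
            acc += hists[p][k]
    # Phase 3 (parallel over partitions): write keys to unique positions.
    out = [None] * n
    for p in range(parts):
        cursor = dict.fromkeys(range(num_buckets), 0)
        for k in keys[p * chunk:(p + 1) * chunk]:
            out[offsets[(k, p)] + cursor[k]] = k
            cursor[k] += 1
    return out
```

Because every (key, partition) pair owns a disjoint output range, the writes in the last phase never conflict.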
Semisort takes as input n key-value pairs, and groups pairs with the same key together, but with no guarantee on the relative ordering among pairs with different keys. Semisort also returns the number of distinct groups. We use the implementation from (Gu et al., 2015), which is available in PBBS. The algorithm first hashes the keys, and then selects a sample of the keys to predict the frequency of each key. Based on the frequency of keys in the sample, we classify them into "heavy keys" and "light keys", and assign appropriately-sized arrays for each heavy key and each range of light keys. Finally, we insert all keys into random locations in the appropriate array and sort within the array. This algorithm takes O(n) expected work and O(log n) depth with high probability.
Merge takes two sorted arrays and merges them into a single sorted array. If the sum of the lengths of the inputs is n, this can be done in O(n) work and O(log n) depth (Jaja, 1992). The algorithm takes equally spaced pivots from one of the input arrays A and does a binary search for each pivot in the other array B. Each subarray between pivots in A has a corresponding subarray between the binary search results in B. Each pair of subarrays is then recursively merged, and the result can be written to a unique range of indices in the final output. All pairs can be processed in parallel.
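The pivot-based divide step can be sketched as follows (our own sequential Python sketch, assuming roughly sqrt(n) pivots for concreteness; in the parallel algorithm all subarray pairs are merged concurrently, and `sorted` stands in for the base-case merge):

```python
import bisect
import math

def parallel_merge(a, b):
    """Merge two sorted lists by cutting both at pivots from `a`."""
    if not a:
        return list(b)
    step = max(1, math.isqrt(len(a)))  # ~sqrt(n) equally spaced pivots
    cuts_a = list(range(0, len(a), step)) + [len(a)]
    # Binary-search each interior pivot of `a` in `b` to get matching cuts.
    cuts_b = [0] + [bisect.bisect_left(b, a[i]) for i in cuts_a[1:-1]] + [len(b)]
    out = []
    for k in range(len(cuts_a) - 1):
        # Pair k: a[cuts_a[k]:cuts_a[k+1]] merges with b[cuts_b[k]:cuts_b[k+1]];
        # each pair writes to a unique output range, so pairs are independent.
        out.extend(sorted(a[cuts_a[k]:cuts_a[k + 1]] + b[cuts_b[k]:cuts_b[k + 1]]))
    return out
```

The binary searches guarantee that every element of `b` smaller than a pivot lands in an earlier pair, so concatenating the per-pair merges yields a globally sorted result.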
For parallel hash tables, we can perform n hash table insertions or queries in O(n) work and O(log n) depth w.h.p. (Gil et al., 1991). We use the nondeterministic concurrent linear probing hash table from (Shun and Blelloch, 2014), which is similar to a sequential linear probing hash table, but when inserting, it attempts an atomic update to an empty location in the probe sequence, and continues probing if the update fails.
The Delaunay triangulation on a set of points in 2D contains triangles among every triple of points p_1, p_2, and p_3 such that there are no other points inside the circumcircle defined by p_1, p_2, and p_3 (de Berg et al., 2008). Delaunay triangulation can be computed in parallel in O(n log n) work and O(log n) depth (Reif and Sen, 1992). We use the randomized incremental algorithm from PBBS, which inserts points in parallel into the triangulation in rounds, such that the updates to the triangulation in each round by different points do not conflict (Blelloch et al., 2012).
3. DBSCAN Algorithm Overview
This section reviews the high-level structure of existing sequential DBSCAN algorithms (Gunawan, 2013; Gan and Tao, 2017; de Berg et al., 2017) as well as our new parallel algorithms. The high-level structure is shown in Algorithm 1, and an illustration of the key concepts is shown in Figure 1(b)-(d).
We place the points into disjoint d-dimensional cells with side length ε/√d based on their coordinates (Line 2 and Figure 1(b)). The cells have the property that all points inside a cell are within a distance of ε from each other, and will belong to the same cluster in the end. Then on Line 3 and Figure 1(c), we mark the core points. On Line 4, we generate the clusters for core points as follows. We create a graph containing one vertex per core cell (a cell containing at least one core point), and connect two vertices if the closest pair of core points from the two cells is within a distance of ε. We refer to this graph as the cell graph. This step is illustrated in Figure 1(d). We then find the connected components of the cell graph to assign cluster IDs to points in core cells. On Line 5, we assign cluster IDs for border points. Finally, we return the cluster labels on Line 6.
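To make the skeleton concrete, here is a compact sequential Python sketch of the 2D case (our own toy code, not the paper's implementation: brute-force O(n^2) distance checks stand in for the optimized parallel subroutines, and all names are ours):

```python
import math
from collections import defaultdict

def dbscan(points, eps, min_pts):
    """Toy sequential sketch of the grid-based DBSCAN skeleton (2D)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Step 1: cells of side eps/sqrt(2), so same-cell points are within eps.
    side = eps / math.sqrt(2)
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // side), int(y // side))].append(i)
    cell_of = {i: c for c, ids in cells.items() for i in ids}

    # Step 2: mark core points (brute-force stand-in for MarkCore).
    core = [sum(dist(i, j) <= eps for j in range(n)) >= min_pts for i in range(n)]

    # Step 3: cell graph over core cells + connected components (union-find).
    parent = {c: c for c in cells}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    core_cells = [c for c in cells if any(core[i] for i in cells[c])]
    for c in core_cells:
        for d in core_cells:
            if c < d and any(core[i] and core[j] and dist(i, j) <= eps
                             for i in cells[c] for j in cells[d]):
                parent[find(c)] = find(d)
    comp_id = {}
    for c in core_cells:
        comp_id.setdefault(find(c), len(comp_id))

    # Steps 4-5: label core points, then border points (a border point may
    # join the clusters of several nearby core points).
    labels = [set() for _ in range(n)]
    for i in range(n):
        if core[i]:
            labels[i].add(comp_id[find(cell_of[i])])
        else:
            for j in range(n):
                if core[j] and dist(i, j) <= eps:
                    labels[i].add(comp_id[find(cell_of[j])])
    return labels, core
```

Points with an empty label set are noise. The rest of the paper replaces each brute-force step here with a work-efficient, low-depth parallel procedure.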
4. 2D DBSCAN Algorithms
This section presents our parallel algorithms for implementing each line of Algorithm 1 in two dimensions. The cells can be constructed either using a gridbased method or a boxbased method, which we describe in Sections 4.1 and 4.2, respectively. Section 4.3 presents our algorithm for marking core points. We present several methods for constructing the cell graph in Section 4.4. Finally, Section 4.5 describes our algorithm for clustering border points.
4.1. Grid Computation
In the grid-based method, the points are placed into disjoint cells with side length ε/√2 based on their coordinates, as done in the sequential algorithms by Gunawan (Gunawan, 2013) and de Berg et al. (de Berg et al., 2017). A hash table is used to store only the non-empty cells, and a serial algorithm simply inserts each point into the cell corresponding to its coordinates.
Parallelization. The challenge in parallelization is in distributing the points to the cells in parallel while maintaining work-efficiency. While a comparison sort could be used to sort points by their cell IDs, this approach requires O(n log n) work and is not work-efficient. We observe that semisort (see Section 2) can be used to solve this problem work-efficiently. The key insight here is that we only need to group together points in the same cell, and do not care about the relative ordering of points within a cell or between different cells. We apply a semisort on an array of length n of key-value pairs, where each key is the cell ID of a point and the value is the ID of the point. This also returns the number of distinct groups (non-empty cells).
We then create a parallel hash table of size equal to the number of non-empty cells, where each entry stores the bounding box of a cell as the key, and the number of points in the cell and a pointer to the start of its points in the semisorted array as the value. We can determine the neighboring cells of a cell q with arithmetic computation based on q's bounding box, and then look up each neighboring cell in the hash table, which returns the information for that cell if it is non-empty.
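The grouping and lookup structure can be sketched as follows (a sequential Python sketch of our own: a dict of lists stands in for the semisort, a plain dict for the parallel hash table, and the ±2 neighborhood offset is specific to 2D cells of side ε/√2):

```python
import math
from collections import defaultdict

def build_grid(points, eps):
    """Group point indices by cell ID; map each non-empty cell to
    (start, count) into the reordered (semisorted) point-index array."""
    side = eps / math.sqrt(2)  # 2D cell side length
    cell_id = lambda p: (int(p[0] // side), int(p[1] // side))
    groups = defaultdict(list)  # semisort stand-in: same-key grouping, order free
    for i, p in enumerate(points):
        groups[cell_id(p)].append(i)
    order, table, start = [], {}, 0
    for cid, idxs in groups.items():
        table[cid] = (start, len(idxs))  # pointer into semisorted array + count
        order.extend(idxs)
        start += len(idxs)
    return order, table

def neighbor_cells(table, cid):
    """Non-empty cells whose points may lie within eps (index offset <= 2
    when the side length is eps/sqrt(2))."""
    cx, cy = cid
    return [(cx + dx, cy + dy) for dx in range(-2, 3) for dy in range(-2, 3)
            if (dx, dy) != (0, 0) and (cx + dx, cy + dy) in table]
```

Each hash-table lookup either returns the cell's slice of the semisorted array or reports that the cell is empty.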
4.2. Box Computation
In the box-based method, we place the points into disjoint two-dimensional bounding boxes with side length at most ε/√2, which are the cells.
Existing sequential solutions (Gunawan, 2013; de Berg et al., 2017) first sort all points by x-coordinate, then scan through the points, grouping them into strips of width at most ε/√2 and starting a new strip when a scanned point is further than distance ε/√2 from the beginning of the strip. It then repeats this process per strip in the y-dimension to create cells of side length at most ε/√2. This step is shown in Figure 2(a). Pointers to neighboring cells are stored per cell. This is computed for all cells in each strip s_i by merging s_i with each of strips s_{i−2}, s_{i−1}, s_{i+1}, and s_{i+2}, as these are the only strips that can contain cells with points within distance ε. For each merge, we compare the bounding boxes of the cells in increasing y-direction, linking any two cells that may possibly have points within distance ε.
Parallelization. We now describe the method for assigning points to strips, which is illustrated in Figure 2(b). Let x_p be the x-coordinate of point p. We create a linked list where each point is a node. The node for point p stores a pointer to the node for point q (we call q the parent of p), where q is the point with the smallest x-coordinate such that x_q − x_p > ε/√2. Each point can determine its parent in O(log n) work and O(log n) depth by binary searching the sorted list of points.
We then assign a value of 1 to the node with the smallest x-coordinate, and 0 to all other nodes. We run a pointer jumping routine on the linked list where on each round, every node passes its value to its parent and updates its pointer to point to the parent of its parent (Jaja, 1992). The procedure terminates when no more pointers change in a round. In the end, every node with a value of 1 will correspond to the point at the beginning of a strip, and all nodes with a value of 0 will belong to the strip of the closest node to their left with a value of 1. This gives the same strips as the sequential algorithm, since all nodes marked 1 will correspond to the closest point farther than ε/√2 from the point of the previously marked node. For merging to determine cells within distance ε, we use the parallel merging algorithm described in Section 2.
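The parent-pointer formulation can be sketched sequentially as follows (our own Python sketch: the parent array is exactly the linked list above, and walking the parent chain from the leftmost point plays the role of the pointer-jumping rounds):

```python
import bisect

def assign_strips(xs, width):
    """Assign x-sorted coordinates to strips of the given width.

    parent[i] = index of the first point more than `width` to the right of
    point i (the linked-list parent). Strip heads are the nodes reached by
    following parents from the leftmost point; every other point joins the
    closest head to its left.
    """
    n = len(xs)
    parent = [bisect.bisect_right(xs, xs[i] + width) for i in range(n)]
    heads, i = [], 0
    while i < n:              # pointer-jumping stand-in: walk the parent chain
        heads.append(i)
        i = parent[i]
    strip = [0] * n
    for s, h in enumerate(heads):
        end = heads[s + 1] if s + 1 < len(heads) else n
        for j in range(h, end):
            strip[j] = s
    return strip, heads
```

In the parallel version, each pointer-jumping round halves the remaining chain length, so the chain walk above finishes in O(log n) rounds of parallel work.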
4.3. Mark Core
Illustrated in Figure 1(c), the high-level idea in marking the core points is as follows: first, if a cell contains at least minPts points, then all points in the cell are core points, as it is guaranteed that all the points inside a cell will be within ε of any other point in the same cell; otherwise, each point p computes the number of points within a distance of ε by checking its distance to points in all neighboring cells (defined as cells that could possibly contain points within a distance of ε to the current cell), and p is marked as a core point if the number of such points is at least minPts. For a constant dimension, there are only a constant number of neighboring cells that need to be checked.
Parallelization. Our parallel algorithm for marking core points is shown in Algorithm 2. We create an array of length n that marks which points are core points. The array is initialized to all 0's (Line 2). We then loop through all cells in parallel (Line 3). If a cell contains at least minPts points, we mark all points in the cell as core points in parallel (Lines 4–6). Otherwise, we loop through all points p in the cell in parallel, and for each neighboring cell c we count the number of points within a distance of ε to p, obtained using a RangeCount(p, c, ε) query (Lines 8–11) that reports the number of points in c that are no more than distance ε from p. The RangeCount(p, c, ε) can be implemented by comparing p to all points in neighboring cell c in parallel, followed by a parallel prefix sum to obtain the number of points in the radius. If the total count is at least minPts, then p is marked as a core point (Lines 12–13).
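The two cases of Algorithm 2 can be sketched as follows (a sequential Python sketch of our own; the function signature and the `neighbors` map are our assumptions, with each cell's neighbor list including the cell itself, and every loop here is parallel in the real algorithm):

```python
import math

def mark_core(points, cells, eps, min_pts, neighbors):
    """Sketch of MarkCore. `cells`: cell id -> list of point indices;
    `neighbors`: cell id -> nearby cell ids (including itself here)."""
    core = [False] * len(points)
    for cid, idxs in cells.items():
        if len(idxs) >= min_pts:
            for i in idxs:              # dense cell: every point is core
                core[i] = True
        else:
            for i in idxs:
                count = 0
                for ncid in neighbors[cid]:
                    # RangeCount(points[i], cell ncid, eps): brute-force
                    # comparison + count, a stand-in for the prefix sum.
                    count += sum(math.dist(points[i], points[j]) <= eps
                                 for j in cells[ncid])
                if count >= min_pts:
                    core[i] = True
    return core
```

The dense-cell shortcut avoids all distance computations for cells that are already known to be entirely core.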
4.4. Cluster Core
We present three approaches for determining the connectivity between cells in the cell graph. After obtaining the cell graph, we run a parallel connected components algorithm to cluster the core points. For the BCP-based approach, we describe an optimization that merges the BCP computation with the connected components computation using a lock-free union-find data structure.
BCP-based Cell Graph. The problem of determining cell connectivity can be solved by computing the BCP of core points between two cells (recall the definition in Section 2), and checking whether the distance is at most ε.
Each cell runs a BCP computation with each of its neighboring cells to check if they should be connected in the cell graph. We execute all BCP calls in parallel, and furthermore each BCP call can be implemented naively in parallel by computing all pairwise distances in parallel, writing them into an array containing point pairs and their distances, and applying a prefix sum on the array to obtain the BCP. We apply two optimizations to speed up individual BCP calls: (1) we first filter out points further than ε from the other cell beforehand, as done by Gan and Tao (Gan and Tao, 2017), and (2) we iterate only until finding a pair of points with distance at most ε, at which point we abort the rest of the BCP computation and connect the two cells. Filtering points can be done using a parallel filter. To parallelize the early termination optimization, it is not efficient to simply parallelize across all the point comparisons, as this will lead to a significant amount of wasted work. Instead, we divide the points in each cell into fixed-sized blocks, and iterate over all pairs of blocks. For each pair of blocks, we compute the distances of all pairs of points between the two blocks in parallel by writing their distances into an array. We then take the minimum distance in the array using a prefix sum, and return if the minimum is at most ε. This approach reduces the wasted work over the naive parallelization, while still providing ample parallelism within each pair of blocks.
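The block-structured early-termination check can be sketched as follows (our own sequential Python sketch; the filtering optimization is omitted, the block size is an arbitrary assumption, and within a block pair all distances would be computed in parallel before taking the minimum):

```python
import math

def cells_connected(pts_a, pts_b, eps, block=4):
    """Return True as soon as some cross pair is within eps (early abort)."""
    for i0 in range(0, len(pts_a), block):
        for j0 in range(0, len(pts_b), block):
            # One block pair: all pairwise distances "in parallel",
            # then a minimum (via prefix sum in the real algorithm).
            dists = [math.dist(p, q)
                     for p in pts_a[i0:i0 + block]
                     for q in pts_b[j0:j0 + block]]
            if min(dists) <= eps:
                return True   # abort the remaining BCP work
    return False
```

Iterating block pair by block pair bounds the wasted work after a qualifying pair is found to a single block pair, rather than the whole cross product.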
Triangulation-based Cell Graph. In two dimensions, Gunawan (Gunawan, 2013) describes a special approach using Voronoi diagrams. In particular, we can efficiently determine whether a core cell should be connected to a neighboring cell by finding the nearest core point from the neighboring cell to each of the core cell's core points. Gan and Tao (Gan and Tao, 2017) and de Berg et al. (de Berg et al., 2017) show that a Delaunay triangulation can also be used to determine connectivity in the cell graph. In particular, if there is an edge in the Delaunay triangulation between core points in two different core cells with distance at most ε, then those two cells are connected. This process is illustrated in Figure 3. The proof of correctness is described in (Gan and Tao, 2017; de Berg et al., 2017).
To compute the Delaunay triangulation or Voronoi diagram in parallel, Reif and Sen (Reif and Sen, 1992) present a parallel algorithm for constructing Voronoi diagrams and Delaunay triangulations in two dimensions. We use the parallel Delaunay triangulation implementation from PBBS (Blelloch et al., 2012; Shun et al., 2012), as described in Section 2.
Unit-spherical emptiness checking-based (USEC) Cell Graph. Gan and Tao (Gan and Tao, 2017) (who attribute the idea to Bose et al. (Bose et al., 2007)) describe an algorithm for solving the unit-spherical emptiness checking (USEC) with line separation problem when comparing two core cells to determine whether they should be connected in the cell graph. This is illustrated in Figure 4.
First, the points in each core cell are sorted both by x-coordinate and by y-coordinate (two copies are stored). Suppose we are comparing two cells where one is below the other; thus the separating line ℓ is a horizontal line. We would like to generate the boundary formed by the union of ε-radius circles centered at the sorted points of the cell on one side of ℓ, called the wavefront. We can then scan the points of the other cell in sorted order and check if any of them intersect with the wavefront of the first cell.
To generate the wavefront in parallel, we use divideandconquer by constructing the wavefront for the left half of the points and the right half of the points (in sorted order) recursively in parallel. Merging two wavefronts is more challenging. The wavefronts are represented as balanced binary trees supporting split and join operations in logarithmic work and depth. We merge two wavefronts by taking the top part of each wavefront and joining them together. The top part of each wavefront can be obtained by having the rightmost arc on the left wavefront check where it would intersect on the right wavefront via binary search, and vice versa.
Reducing Cell Connectivity Queries. We now present an optimization that merges the cell graph construction with the connected components computation using a parallel lock-free union-find data structure to maintain the connected components on-the-fly. This technique is used in both the BCP approach and the USEC approach for cell graph construction. The pseudocode is shown in Algorithm 3. The main idea is to only run a cell connectivity query between two cells if they are not yet in the same component (Line 6), which can reduce the total number of connectivity queries. For example, assume that cells c_1, c_2, and c_3 belong to the same component. After connecting c_1 with c_2 and c_2 with c_3, we can avoid the connectivity check between c_1 and c_3 by checking their respective components in the union-find structure beforehand. This optimization was used by Gan and Tao (Gan and Tao, 2017) in the sequential setting, and we extend it to the parallel setting. We also only check connectivity between two cells once by having the cell with the higher ID responsible for checking connectivity with the cell with a lower ID (Line 6).
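The pruning idea can be sketched sequentially as follows (our own Python sketch of Algorithm 3's structure; `connected` stands in for the expensive BCP/USEC query, and the real structure is lock-free and concurrent):

```python
class UnionFind:
    """Minimal union-find with path halving (sequential stand-in for the
    parallel lock-free structure)."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def cluster_core_cells(num_cells, neighbors, connected):
    """Run connectivity queries only between cells in different components,
    with the higher-ID cell responsible for each pair. Returns the
    union-find structure and the number of queries actually issued."""
    uf, queries = UnionFind(num_cells), 0
    for c in range(num_cells):
        for d in neighbors[c]:
            if c > d and uf.find(c) != uf.find(d):  # prune resolved pairs
                queries += 1
                if connected(c, d):
                    uf.union(c, d)
    return uf, queries
```

With three mutually adjacent, mutually connected cells, only two of the three possible queries are issued; the third is pruned because its endpoints are already in the same component.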
When constructing the cell graph and checking connectivity, we also use a heuristic to prioritize the cells based on the number of core points in the cells, and start from the cells with more points, as shown on Line 3. This is because cells with more points are more likely to have higher connectivity, hence connecting the nearby cells together and pruning their connectivity queries. This optimization can be less efficient in parallel, since a connectivity query could be executed before the corresponding query that would have pruned it in the sequential execution. To overcome this, we group the cells into batches, and process each batch in parallel before moving to the next batch. We refer to this new approach as bucketing, and show experimental results for it in Section 7.
4.5. Cluster Border
To assign cluster IDs to border points, we check all points not yet assigned a cluster ID, and for each such point p, check all of its neighboring cells and add p to the clusters of all neighboring cells with a core point within distance ε of p.
Parallelization. Our algorithm is shown in Algorithm 4. We loop through all cells with fewer than minPts points in parallel, and for each such cell we loop over all of its non-core points p in parallel (Lines 2–3). On Lines 4–7, we check all core points in the current cell and all neighboring cells, and if any are within distance ε of p, then we add their clusters to p's set of clusters (recall that border points can belong to multiple clusters).
5. Higher-dimensional Exact and Approximate DBSCAN
The efficient exact and approximate algorithms for higher-dimensional DBSCAN are also based on the high-level structure of Algorithm 1, and are extensions of some of the techniques for two-dimensional DBSCAN described in Section 4. They use the grid-based method for assigning points to cells (Section 4.1). Algorithms 2, 3, and 4 are used for marking core points, clustering core points, and clustering border points, respectively. However, we use two major optimizations on top of the 2D algorithms: a k-d tree for finding neighboring cells and a quadtree for answering range counting queries.
5.1. Finding Neighboring Cells
The number of possible neighboring cells grows exponentially with the dimension d, and so enumerating all possible neighboring cells can be inefficient in practice for higher dimensions (although still constant work in theory). Therefore, instead of implementing NeighborCells by enumerating all possible neighboring cells, we first insert all cells into a k-d tree (Bentley, 1975), which enables us to perform range queries to obtain just the non-empty neighboring cells. The construction of our k-d tree is done recursively, and all recursive calls for children nodes are executed in parallel. We also sort the points at each level in parallel and pass them to the appropriate child. Queries do not modify the k-d tree, and can all be performed in parallel. Since a cell needs to find its neighboring cells multiple times throughout the algorithm, we cache the result on its first query to avoid repeated computation.
5.2. Range Counting
While RangeCount queries can be implemented theoretically-efficiently in DBSCAN by checking all points in the target cell, there is a large overhead for doing so in practice. In higher-dimensional DBSCAN, we construct a quadtree data structure for each cell to answer RangeCount queries. The structure of a quadtree is illustrated in Figure 5. A cell of side length ε/√d is recursively divided into 2^d subcells of the same size until the subcell becomes empty. This forms a tree where each subcell is a node and its children are the up to 2^d non-empty subcells that it divides into. Each node of the tree stores the number of points contained in its corresponding subcell. Queries do not modify the quadtrees and are therefore all executed in parallel. We now describe how to construct the quadtrees in parallel.
Parallel Quadtree Construction. The construction procedure recursively divides each cell into 2^d subcells. Each node of the tree has access to the points contained in its subcell in a contiguous subarray that is part of a global array (e.g., by storing a pointer to the start of its points in the global array as well as the number of points that it represents). We use an integer sort on keys from the range [0, 2^d) to sort the points in the subarray based on which of the 2^d subcells each belongs to. Now the points belonging to each of the child nodes are contiguous, and we can recursively construct the up to 2^d non-empty child nodes independently in parallel by passing in the appropriate subarray.
To reduce construction time, we set a threshold for the number of points in a subcell, below which the node becomes a leaf node. This reduces the height of the tree but makes leaf nodes larger. In addition, we avoid unnecessary tree node traversal by ensuring that each tree node has at least two nonempty children: when processing a cell, we repeatedly divide the points until they fall into at least two different subcells.
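The construction can be sketched for the 2D case (our own sequential Python sketch using dict nodes; the child-grouping loop is the integer-sort step, the recursive calls run in parallel in the real algorithm, and the leaf threshold is the optimization described above):

```python
def build_quadtree(points, box, leaf_size=4):
    """2D quadtree: each node stores the count of points in its subcell;
    recursion stops when at most `leaf_size` points remain."""
    x0, y0, x1, y1 = box
    node = {"box": box, "count": len(points), "children": [], "points": []}
    if len(points) <= leaf_size:
        node["points"] = points
        return node
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    # Integer-sort stand-in: group points by child subcell index in [0, 4).
    kids = [[], [], [], []]
    for p in points:
        kids[(p[0] >= mx) * 2 + (p[1] >= my)].append(p)
    boxes = [(x0, y0, mx, my), (x0, my, mx, y1),
             (mx, y0, x1, my), (mx, my, x1, y1)]
    # Build only the non-empty children (recursively, in parallel).
    node["children"] = [build_quadtree(k, b, leaf_size)
                        for k, b in zip(kids, boxes) if k]
    return node
```

Every internal node's count equals the sum of its children's counts, which is what lets a range query add a whole subtree's count without descending further.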
Range Counting in MarkCore. The RangeCount queries used in marking core points are shown in Algorithm 2. For marking core points, a quadtree containing the points of each cell is constructed in parallel. Then the RangeCount(c, p, ε) query reports the number of points in cell c that are no more than distance ε from point p. Instead of naively looping through all points in c, it initiates a traversal of the quadtree starting from cell c, and recursively searches all children whose bounding boxes intersect the ε-radius ball of p. When reaching a leaf node on a query, we explicitly count the number of points contained within the radius of the query point.
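As a concrete sequential sketch of the count-augmented quadtree and the exact RangeCount traversal just described (hypothetical names, and it assumes at most leaf_size duplicate points so the recursion terminates):

```python
import math

def build_tree(points, lo, side, leaf_size=2):
    """Count-augmented quadtree: each node stores its point count; an
    internal node splits its box [lo, lo+side)^d into 2^d halves and
    keeps the non-empty ones as children. leaf_size is the stopping
    threshold described in the text."""
    d = len(lo)
    node = {"count": len(points), "lo": lo, "side": side, "children": []}
    if len(points) <= leaf_size:
        node["points"] = points
        return node
    half = side / 2.0
    buckets = {}
    for p in points:  # the integer-sort step: bucket by subcell index
        idx = tuple(int(p[i] >= lo[i] + half) for i in range(d))
        buckets.setdefault(idx, []).append(p)
    for idx, pts in buckets.items():
        child_lo = tuple(lo[i] + half * idx[i] for i in range(d))
        node["children"].append(build_tree(pts, child_lo, half, leaf_size))
    return node

def range_count(node, q, eps):
    """Number of points in the tree within distance eps of q: prune
    children whose bounding boxes miss the eps-ball, and count points
    explicitly at the leaves."""
    d = len(q)
    lo, side = node["lo"], node["side"]
    # squared minimum distance from q to the node's bounding box
    gap2 = sum(max(lo[i] - q[i], q[i] - lo[i] - side, 0.0) ** 2
               for i in range(d))
    if gap2 > eps * eps:
        return 0
    if not node["children"]:  # leaf: check points explicitly
        return sum(1 for p in node["points"] if math.dist(p, q) <= eps)
    return sum(range_count(c, q, eps) for c in node["children"])
```

In our actual implementation the per-level bucketing is done with a parallel integer sort and the children are built in parallel; the traversal logic is the same.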
Exact DBSCAN. For higher-dimensional exact DBSCAN, one of our implementations uses RangeCount queries when computing BCPs in Algorithm 3. For each core cell, we build a quadtree on its core points in parallel. Then for each core point p in each core cell c, we issue a RangeCount query to each of its neighboring core cells c', and connect c and c' in the cell graph if the range query returns a non-zero count of core points. We combine this with the optimization of reducing cell connectivity queries described in Section 4.4.
Approximate DBSCAN. For approximate DBSCAN, the sequential algorithm of Gan and Tao (Gan and Tao, 2017) follows the high-level structure of Algorithm 1 using the grid-based cell structure. The only difference is in the cell graph construction, which is done using approximate RangeCount queries.
In the quadtree for approximate RangeCount, each cell of side-length ε/√d is still recursively divided into 2^d subcells of the same size, but only until either the subcell becomes empty or has side-length at most ρε/√d. The tree has maximum depth O(log(1/ρ)). We use our parallel quadtree construction method on this modified quadtree to parallelize approximate DBSCAN.
An approximate RangeCount(c, p, ε, ρ) query takes as input any point p, and returns an integer that is between the number of points of c in the ε-radius ball of p and the number of points of c in the (1+ρ)ε-radius ball of p (when using approximate RangeCount, all relevant methods take an additional parameter ρ). Our query implementation starts a traversal of the quadtree from c, and recursively searches all children whose bounding box intersects the ε-radius ball of p. Once either a leaf node is reached or a node's bounding box is completely contained in the (1+ρ)ε-radius ball of p, the search on that path terminates, adding the node's count to the total count. Once all searches terminate, the total count is returned. Queries do not modify the quadtree and can all be executed in parallel.
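The approximate quadtree and query can be sketched sequentially as follows (hypothetical names; min_side plays the role of ρε/√d, so a leaf's diameter is at most ρε and its whole count can be added safely):

```python
import math

def build_approx_tree(points, lo, side, min_side):
    """Quadtree for approximate RangeCount: split a box until a subcell
    is empty or its side-length is at most min_side (= rho*eps/sqrt(d)),
    giving O(log(1/rho)) levels."""
    d = len(lo)
    node = {"count": len(points), "lo": lo, "side": side, "children": []}
    if side <= min_side:
        return node  # leaf of the bounded-depth tree
    half = side / 2.0
    buckets = {}
    for p in points:
        idx = tuple(int(p[i] >= lo[i] + half) for i in range(d))
        buckets.setdefault(idx, []).append(p)
    for idx, pts in buckets.items():
        child_lo = tuple(lo[i] + half * idx[i] for i in range(d))
        node["children"].append(build_approx_tree(pts, child_lo, half, min_side))
    return node

def approx_range_count(node, q, eps, rho):
    """Returns a count between |ball(q, eps)| and |ball(q, (1+rho)eps)|:
    prune boxes that miss the eps-ball; add a node's full count when its
    box lies inside the (1+rho)eps-ball or when it is a leaf."""
    d = len(q)
    lo, side = node["lo"], node["side"]
    gap2 = sum(max(lo[i] - q[i], q[i] - lo[i] - side, 0.0) ** 2
               for i in range(d))
    if gap2 > eps * eps:
        return 0  # box entirely outside the eps-ball
    far2 = sum(max(abs(q[i] - lo[i]), abs(q[i] - lo[i] - side)) ** 2
               for i in range(d))
    r = (1 + rho) * eps
    if far2 <= r * r or not node["children"]:
        return node["count"]  # fully inside the (1+rho)eps-ball, or leaf
    return sum(approx_range_count(c, q, eps, rho) for c in node["children"])
```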
6. Analysis
This section analyzes the theoretical complexity of our algorithms, showing that they are work-efficient and have polylogarithmic depth.
6.1. 2D Algorithms
Grid Computation. In our parallel algorithm shown in Section 4.1, creating key-value pairs can be done in O(n) work and O(1) depth in a data-parallel fashion. Semisorting takes O(n) expected work and O(log n) depth w.h.p. Constructing the hash table and inserting non-empty cells into it takes O(n) work and O(log n) depth w.h.p. The overall cost of the parallel grid computation is therefore O(n) work in expectation and O(log n) depth w.h.p.
Box Computation. The serial algorithm (Gunawan, 2013; de Berg et al., 2017) uses O(n log n) work, including sorting, scanning the points to assign them to strips and cells, and merging strips. However, its span is linear, since in the worst case we can have up to n strips.
Parallel comparison sorting takes O(n log n) work and O(log n) depth. Therefore, sorting the points by x-coordinate, and the points within each strip by y-coordinate, can be done in O(n log n) work and O(log n) depth overall. Parent finding using binary search for all points takes O(n log n) work and O(log n) depth. The longest path in the linked list halves on each round, and so the algorithm terminates after O(log n) rounds. We do O(n) work per round, leading to an overall work of O(n log n). The depth is O(1) per round, for a total of O(log n) overall. We repeat this process for the points in each strip, but in the y-dimension, and the work and depth bounds are the same. For assigning pointers to neighboring cells for each cell, we use a parallel merging algorithm, which takes O(n) work and O(log n) depth. The pointers are stored in an array, accessible in constant work and depth.
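The pointer-jumping step that halves the longest path on each round can be illustrated with a sequential simulation of the synchronous rounds (hypothetical names; in the parallel algorithm, all assignments within a round execute in parallel):

```python
def pointer_jump(parent):
    """Pointer doubling: on each round every node's parent pointer is
    replaced by its grandparent's pointer, halving the longest path, so
    all nodes point at their root after O(log n) rounds."""
    parent = list(parent)
    rounds = 0
    while any(parent[parent[i]] != parent[i] for i in range(len(parent))):
        # one synchronous round: all reads happen before all writes
        parent = [parent[parent[i]] for i in range(len(parent))]
        rounds += 1
    return parent, rounds

# a chain 0 <- 1 <- 2 <- ... <- 7 (node 0 is the root, pointing to itself)
roots, rounds = pointer_jump([0, 0, 1, 2, 3, 4, 5, 6])
```

For this length-8 chain the pointers converge in 3 rounds, matching the logarithmic round bound.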
MarkCore. For cells with at least minPts points, we spend O(n) work overall marking their points as core points (Lines 4–6 of Algorithm 2). All cells are processed in parallel, and all points can be marked in parallel, giving O(1) depth.
For all cells with fewer than minPts points, each point only needs to execute a range count query on a constant number of neighboring cells (Gunawan, 2013; Gan and Tao, 2017). RangeCount(c, p, ε) compares p to all points in neighboring cell c in parallel. Across all queries, each cell will only be checked by O(minPts) points, and so the overall work for range counting is O(n · minPts), which is O(n) for constant minPts. Therefore, Lines 8–13 of Algorithm 2 take O(n) work. All points are processed in parallel, and there are a constant number of RangeCount calls per point, each of which takes O(log n) depth for a parallel prefix sum to obtain the number of points in the radius. Therefore, the depth for range counting is O(log n).
The work for looking up the neighboring cells is O(n) and the depth is O(log n) w.h.p. using the parallel hash table that stores the non-empty cells. Therefore, our parallel MarkCore takes O(n) work and O(log n) depth w.h.p.
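Putting the two MarkCore cases together (large cells marked directly, small cells answered by range-count queries on neighboring cells), a sequential sketch with hypothetical names might look like this:

```python
import math
from itertools import product

def mark_core(points, eps, min_pts):
    """Mark core points using the grid: points in a cell with at least
    min_pts points are core immediately; points in smaller cells compare
    only against points in neighboring cells (a constant number of cells
    for fixed dimension d)."""
    d = len(points[0])
    side = eps / math.sqrt(d)
    cells = {}
    for j, p in enumerate(points):
        key = tuple(int(math.floor(c / side)) for c in p)
        cells.setdefault(key, []).append(j)
    r = math.ceil(math.sqrt(d))
    core = [False] * len(points)
    for key, members in cells.items():
        if len(members) >= min_pts:
            for j in members:            # the Lines 4-6 case
                core[j] = True
            continue
        for j in members:                # the Lines 8-13 case
            count = 0
            for off in product(range(-r, r + 1), repeat=d):
                nbr = tuple(key[i] + off[i] for i in range(d))
                for k in cells.get(nbr, []):
                    if math.dist(points[j], points[k]) <= eps:
                        count += 1       # the count includes j itself
            core[j] = count >= min_pts
    return core
```

In the parallel algorithm, the per-cell and per-point loops run in parallel and the inner count is a parallel prefix sum.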
Cell Graph Construction. Reif and Sen present a parallel algorithm for constructing Voronoi diagrams and Delaunay triangulations in two dimensions in O(n log n) work and O(log n) depth (Reif and Sen, 1992). For the Voronoi diagram approach, each nearest neighbor query can be answered in O(log n) work, which is used to check whether two cells should be connected, and the queries can be applied in parallel. Each cell will only be queried a constant number of times per point, and so the overall complexity is O(n log n) work and O(log n) depth. For the Delaunay triangulation approach we can simply apply a parallel filter over all of the edges in the triangulation, keeping the edges between different cells with distance at most ε. The cost of the filter is dominated by the cost of constructing the Delaunay triangulation.
For the USEC method, the sorted order of points for each cell can be generated in O(n log n) work and O(log n) depth overall. To generate a wavefront on n points, the binary searches on each level of the recursion take O(log n) work and depth. Thus, for the work we obtain the recurrence W(n) = 2W(n/2) + O(log n), which solves to O(n), and for the depth we obtain the recurrence D(n) = D(n/2) + O(log n), which solves to O(log^2 n). Checking whether a set of points from one cell intersects with a wavefront from another cell can be done using a parallel merging algorithm in linear work and O(log n) depth. Since the sequential algorithms for wavefront generation and determining cell connectivity take linear work, our algorithm is work-efficient. Including the preprocessing step of sorting, our parallel USEC with line separation approach for determining the connectivity of core cells takes O(n log n) work and O(log^2 n) depth.
Connected Components. After the cell graph, which contains O(n) vertices and O(n) edges, is constructed, we run connected components on the cell graph. This step can be done in parallel in O(n) expected work and O(log n) depth w.h.p. using parallel connectivity algorithms (Gazit, 1991; Halperin and Zwick, 1994; Cole et al., 1996; Halperin and Zwick, 2001; Pettie and Ramachandran, 2002).
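A sequential union-find sketch of this step (standing in for the parallel connectivity algorithms cited above; names hypothetical):

```python
def connected_components(num_cells, edges):
    """Union-find over the cell graph: each core cell receives the
    component id of its root, and each component becomes one cluster."""
    parent = list(range(num_cells))

    def find(x):
        while parent[x] != x:            # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv              # union the two components
    return [find(x) for x in range(num_cells)]

labels = connected_components(5, [(0, 1), (1, 2), (3, 4)])
```

Here cells 0–2 end up in one component and cells 3–4 in another; our implementation uses a lock-free concurrent union-find instead.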
ClusterBorder. Using a similar analysis as for marking core points, it can be shown that assigning cluster IDs to border points takes O(n · minPts) work sequentially (Gunawan, 2013; de Berg et al., 2017). In parallel, since there is a constant number of neighboring cells for each non-core point, and all points in neighboring cells as well as all non-core points are checked in parallel, the depth is O(1) for the distance comparisons. Looking up the neighboring cells can be done in O(n) work and O(log n) depth w.h.p. using our parallel hash table. Adding cluster IDs to each border point's set of clusters, while removing duplicates at the same time, can be done using parallel hashing in linear work and O(log n) depth w.h.p. The work is O(n · minPts) since we do not introduce any asymptotic work overhead compared to the sequential algorithm.
The overall bounds of the 2D algorithms are summarized in the following theorem.
Theorem 6.1 ().
For a constant value of minPts, 2D Euclidean DBSCAN using either the box or grid method, and using Voronoi diagrams or Delaunay triangulation for determining core cell connectivity, can be solved in O(n log n) expected work and O(log n) depth with high probability. If USEC with line separation is used for determining core cell connectivity, the algorithm takes O(n log n) expected work and O(log^2 n) depth with high probability.
6.2. Higher-dimensional Algorithm
For d ≥ 3 dimensions, the BCP problem can be solved either using brute-force checking, which takes quadratic work, or using more theoretically-efficient algorithms (Agarwal et al., 1991; Chazelle, 1985; Clarkson, 1988), which take O((n log n)^{4/3}) expected work for d = 3 and O(n^{2−2/(⌈d/2⌉+1)+δ}) expected work for d ≥ 4, where δ > 0 is any constant (Gan and Tao, 2017). The theoretically-efficient algorithms seem too complicated to perform well in practice, and the actual implementation of (Gan and Tao, 2017) does not use them. However, it is still of theoretical interest to parallelize them, and so we show how to do so in this section.
All of the theoretically-efficient algorithms are based on constructing Delaunay triangulations (DT) in d dimensions per cell, which can be used for nearest neighbor search. However, we cannot afford to construct a DT on all of the points, since a d-dimensional DT can contain Θ(n^{⌈d/2⌉}) simplices, which is at least quadratic in the worst case for d ≥ 3.
The idea in the algorithm by Agarwal et al. (Agarwal et al., 1991) is to construct multiple DTs, each for a subset of the points, and a nearest neighbor query then takes the closest neighbor among queries to all of the data structures. The data structure for nearest neighbor queries used by Agarwal et al. is based on the RPO-tree by Clarkson (Clarkson, 1988). The RPO-tree contains O(log n) levels, where each node in the RPO-tree corresponds to the DT of a random subset of the points. A parallel incremental DT for a constant dimension can be computed work-efficiently in expectation and in polylogarithmic depth w.h.p. (Blelloch et al., 2016, 2018). The children of each node can be determined by traversing the history graph of the DT, which takes work proportional to the DT construction and polylogarithmic depth. The RPO-tree is constructed recursively for O(log n) levels, and so the overall depth is polylogarithmic w.h.p. A query traverses down a path in the RPO-tree, querying each DT along the path, which takes polylogarithmic work and depth overall. Using this data structure to solve BCP gives a DBSCAN algorithm with O(n^{2−2/(⌈d/2⌉+1)+δ}) expected work and polylogarithmic depth w.h.p. For d = 3, an improved data structure by Agarwal et al. (Agarwal et al., 1990) can be used to improve the expected work to O((n log n)^{4/3}). The data structure is also based on DT, and so similarly to before, we can parallelize the DT construction and obtain the same depth bound as before.
The overall bounds are summarized in the following theorem.
Theorem 6.2 ().
For a constant value of minPts, Euclidean DBSCAN can be solved in O((n log n)^{4/3}) expected work for d = 3 and O(n^{2−2/(⌈d/2⌉+1)+δ}) expected work for any constant δ > 0 for d ≥ 4, and polylogarithmic depth with high probability.
6.3. Approximate Algorithm
The algorithms for grid construction, core marking, connected components, and cluster border assignment are the same as in the exact algorithms, and so we only analyze the approximate cell graph construction, which is based on the quadtree introduced in Section 5.2. A quadtree has O(log(1/ρ)) levels and can be constructed in O(n_c log(1/ρ)) work sequentially for a cell with n_c points. A hash table is used to map non-empty cells to their quadtrees, which takes O(n) work w.h.p. to construct. Using a fact from (Arya and Mount, 2000), Gan and Tao show that the number of nodes visited by a query depends only on ρ and d. Therefore, for constant d and ρ, all of the quadtrees can be constructed in a total of O(n) work w.h.p., and queries can be answered in O(1) expected work.
All of the quadtrees can be constructed in parallel. To parallelize the construction of the quadtree for a cell with n_c points, we sort the points on each level in O(n_c) work and O(log n_c) depth using parallel integer sorting (Vishkin, 2010), since the keys are integers in a constant range. In total, this gives O(n_c log(1/ρ)) work and O(log n_c log(1/ρ)) depth per quadtree. We use a parallel hash table to map non-empty cells to their quadtrees, which takes O(n) work and O(log n) depth w.h.p. to construct. To construct the cell graph, all core points issue a constant number of queries to neighboring cells in parallel. Since the number of queries issued is O(n), constructing the cell graph takes O(n) expected work and O(log n) depth w.h.p.
The bounds for approximate DBSCAN are summarized in the following theorem.
Theorem 6.3 ().
For constant values of minPts and ρ, our approximate Euclidean DBSCAN algorithm takes O(n) expected work and O(log n) depth with high probability.
7. Experiments
This section presents experiments comparing our exact and approximate algorithms as well as existing algorithms.
Datasets. We use the synthetic seed spreader (SS) datasets produced by Gan and Tao's generator (Gan and Tao, 2017). The generator produces points via a random walk in a local neighborhood, jumping to a random location with some probability. SS-simden and SS-varden refer to the datasets with similar-density and variable-density clusters, respectively. We also use a synthetic dataset called UniformFill that contains points distributed uniformly at random inside a bounding hypergrid whose side length is a function of the total number of points n. The points have double-precision floating-point values, but we scaled them to integers when testing Gan and Tao's implementation. We generated the synthetic datasets with 10 million points (unless specified otherwise) for a range of dimensions d.
In addition, we use the following real-world datasets, which contain points with double-precision floating-point values.

Household (83) is a 7-dimensional dataset, where we exclude the date-time information.

GeoLife (Zheng et al., 2008) is a 3-dimensional dataset containing user location data (longitude, latitude, altitude). It is extremely skewed in terms of distribution, as the majority of the data was collected inside the city of Beijing.

Cosmo50 (Kwon et al., 2010) is a 3-dimensional dataset. We extracted the x-, y-, and z-coordinate information to construct the 3-dimensional dataset.

OpenStreetMap (Haklay and Weber, 2008) is a 2-dimensional dataset containing GPS location data.

TeraClickLog (81) is a 13-dimensional dataset containing feature values and click feedback of online advertisements. As far as we know, TeraClickLog is larger than any dataset used in the literature for exact DBSCAN.
We performed a search on ε and minPts for the synthetic datasets and chose the default parameters to be those that output a correct clustering. For the SS datasets, the default parameters we use are similar to those found by Gan and Tao (Gan and Tao, 2017). For ease of comparison, the default parameters for Household are the same as those of Gan and Tao (Gan and Tao, 2017), and the default parameters for GeoLife, Cosmo50, OpenStreetMap, and TeraClickLog are the same as those of RP-DBSCAN (Song and Lee, 2018). For approximate DBSCAN, we use a fixed default value of ρ unless specified otherwise.
Testing Environment. We perform all of our experiments on Amazon EC2 machines. We use a c5.18xlarge machine for testing all datasets other than Cosmo50, OpenStreetMap, and TeraClickLog. The c5.18xlarge machine has 2 Intel Xeon Platinum 8124M (3.00GHz) CPUs, for a total of 36 two-way hyper-threaded cores, and 144 GB of RAM. We use an r5.24xlarge machine for the three larger datasets just mentioned. The r5.24xlarge machine has 2 Intel Xeon Platinum 8175M (2.50GHz) CPUs, for a total of 48 two-way hyper-threaded cores, and 768 GB of RAM. By default, we use all of the cores with hyper-threading on each machine.
To compile the programs, we use the g++ compiler (version 7.4) with the -O3 flag. We use Cilk Plus, which is supported by g++, for parallelism in our code (Leiserson, 2010).
7.1. Algorithms Tested
We implement the different methods for marking core points and BCP computation in exact and approximate DBSCAN for d ≥ 3, and present results for the fastest versions, which are described below.

our-approx: This approximate implementation answers the RangeCount query in marking core points by scanning through all points in the neighboring cell in parallel, and uses the quadtree for approximate RangeCount queries in cell graph construction described in Section 5.2.

our-approx-qt: This approximate implementation is the same as our-approx except that it uses the RangeCount query supported by the quadtree described in Section 5.2 for marking core points.
We append the suffix -bucketing to the names of these implementations when using the bucketing optimization described in Section 4.4.
For d = 2, we have six implementations which differ in whether they use the grid or the box method to construct cells, and whether they use BCP, Delaunay triangulation, or USEC with line separation to construct the cell graph. We refer to these as our-2d-grid-bcp, our-2d-grid-usec, our-2d-grid-delaunay, our-2d-box-bcp, our-2d-box-usec, and our-2d-box-delaunay.
We note that our exact algorithms return the same answer as the standard DBSCAN definition, and our approximate algorithms return answers that satisfy Gan and Tao’s approximate DBSCAN definition (see Section 2).
We compare with the following implementations:

Gan&Tao-v2 (Gan and Tao, 2017) is the state-of-the-art serial implementation for both exact and approximate DBSCAN. Gan&Tao-v2 only accepts integer coordinates in a bounded range, and so when running their code we scaled the datasets into this integer range and scaled up the ε value accordingly, to achieve a clustering output consistent with the other methods.

pdsdbscan (Patwary et al., 2013) is the implementation of the parallel disjoint-set exact DBSCAN by Patwary et al., compiled with OpenMP.

hpdbscan (Götz et al., 2015) is the implementation of parallel exact DBSCAN by Götz et al., compiled with OpenMP. We modified the source code to remove the file-output code.

rpdbscan (Song and Lee, 2018) is the state-of-the-art distributed implementation of DBSCAN, using Apache Spark. We note that their variant does not return the same result as DBSCAN. We tested rpdbscan on the same machines that we used, by running a Spark environment locally. We also cite the timings reported in (Song and Lee, 2018), which were obtained using at least as many cores as our largest machine.
7.2. Experiments for d ≥ 3
We first evaluate the performance of the different algorithms for d ≥ 3. In the following plots, data points that did not finish within an hour are not shown.
Influence of ε on Parallel Running Time. In this experiment, we fix minPts at the default value corresponding to the correct clustering, and vary ε within a range centered around its default value. Figure 6 shows the parallel running time vs. ε for the different implementations. In general, both pdsdbscan and hpdbscan become slower with increasing ε. This is because they use point-wise range queries, which get more expensive with larger ε. Our methods tend to improve with increasing ε because there are fewer cells, leading to a smaller cell graph, which speeds up computations on the graph. Our implementations significantly outperform pdsdbscan and hpdbscan on all of the data points.
We observe a spike in Figure 6(f) at one value of ε: the implementations that mark core points by scanning through all points in neighboring cells spend a significant amount of time in that phase; in comparison, the quadtree versions perform better because of their more optimized range counting. There is also a spike in Figure 6(j), where our exact implementation spends a significant amount of time in cell graph construction. Because the GeoLife dataset is heavily skewed, certain cells can contain significantly more points than others. When many cell connectivity queries involve these cells, the quadratic nature of the BCP approach in our-exact makes these queries expensive. In contrast, the methods that use the quadtree for cell graph construction (our-exact-qt, our-approx-qt, and our-approx) tend to have consistent performance across the ε values. It is interesting to see that the bucketing implementations our-exact-qt-bucketing and our-exact-bucketing improve significantly over our-exact-qt and our-exact, because many of the expensive connectivity queries are pruned.
Influence of minPts on Parallel Running Time. In this experiment, we fix ε at its default value for each dataset and vary minPts over a range of values. Figure 7 shows that our implementations have an increasing trend in running time as minPts increases in most cases. This is consistent with our analysis in Section 6.1 that the overall work for marking core points is O(n · minPts). In contrast, minPts does not have much impact on the performance of hpdbscan and pdsdbscan because their range queries, which dominate the total running times, do not depend on minPts. Our implementations outperform hpdbscan and pdsdbscan for almost all values of minPts. Figures 7(d) and 7(g) suggest that hpdbscan can surpass our performance on certain datasets when minPts is very large. However, as suggested by Schubert et al. (Schubert et al., 2017), the minPts value used in practice is usually much smaller, and based on our observations, a minPts value of at most 100 usually gives the correct clusters.
Parallel Speedup. To the best of our knowledge, Gan&Tao-v2 is the fastest existing serial implementation for both exact and approximate DBSCAN. However, we find that across all of our datasets, our serial implementations are faster than theirs by an average of 5.18x and 1.52x for exact DBSCAN and approximate DBSCAN, respectively. In Figure 8, we compare the speedup of the parallel implementations under different thread counts over the best serial baselines for each dataset and choice of parameters. For this experiment, we use parameters that generate the correct clusters. Our implementations get very good speedups on most datasets, achieving speedups of 5–33x (16x on average) over the best serial baselines. Additionally, the self-relative speedups of our exact and approximate methods are 2–89x (24x on average) and 14–44x (24x on average), respectively. Although hpdbscan and pdsdbscan achieve good self-relative speedups (22–31x and 7–20x, respectively; one example is shown in Figure 9), they fail to outperform the serial implementations on most of the datasets due to work-inefficiency. Compared to hpdbscan and pdsdbscan, we are faster by up to orders of magnitude (16–6102x).
Our speedup on the GeoLife dataset (Figure 8(j)) is low due to the high skewness of cell connectivity queries caused by the skewed point distribution; however, the parallel running time is reasonable (less than 1 second). In contrast, hpdbscan and pdsdbscan did not terminate within an hour.
The bucketing heuristic achieved the best parallel performance for several of the datasets (Figures 6(f), (g), and (j); Figures 7(c) and (j); and Figures 8(c), (f), (g), and (j)). In general, the bucketing heuristic greatly reduces the number of connectivity queries during cell graph construction, but in some cases it can reduce parallelism and/or increase overhead due to sorting. We also observe a similar trend on all methods where bucketing is applied.
We also implemented our own parallel baseline based on the original DBSCAN algorithm (Ester et al., 1996). We use a parallel k-d tree, and each point performs a query to find all neighbors within its ε-radius to determine whether it should be a core point. However, the baseline was more than 10x slower than our fastest parallel implementation on datasets with the correct parameters, and hence we do not show it in the plots.
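For reference, such a baseline can be sketched sequentially as follows (brute-force neighborhood queries stand in for the parallel k-d tree; the names are hypothetical, and noise is labeled -1):

```python
import math
from collections import deque

def dbscan_baseline(points, eps, min_pts):
    """Original-style DBSCAN: every point queries its eps-neighborhood
    (brute force here, in place of the k-d tree), points with at least
    min_pts neighbors (including themselves) are core, and clusters are
    grown from core points by BFS. Returns -1 for noise points."""
    n = len(points)
    nbrs = [[j for j in range(n) if math.dist(points[i], points[j]) <= eps]
            for i in range(n)]
    core = [len(nbrs[i]) >= min_pts for i in range(n)]
    label = [-1] * n
    cid = 0
    for s in range(n):
        if not core[s] or label[s] != -1:
            continue
        label[s] = cid
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in nbrs[u]:
                if label[v] == -1:
                    label[v] = cid       # border or core point joins
                    if core[v]:
                        queue.append(v)  # only core points expand further
        cid += 1
    return label
```

The quadratic neighbor computation is exactly why this baseline is slow; the cell-based algorithms above avoid it.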
Table 2. Running times (in seconds) of our-exact and rpdbscan on the large-scale datasets for varying ε ("–" indicates a configuration that we did not run).

GeoLife (ε = 20 / 40 / 80 / 160):
our-exact: 0.541 / 0.617 / 0.535 / 0.482
rpdbscan (our machine): 29.13 / 27.92 / 32.04 / 27.81
rpdbscan ((Song and Lee, 2018)): 36 / 33 / 28 / 27

Cosmo50 (ε = 0.01 / 0.02 / 0.04 / 0.08):
our-exact: 41.8 / 5.51 / 4.69 / 3.03
rpdbscan (our machine): 3750 / 562.0 / 576.9 / 672.6
rpdbscan ((Song and Lee, 2018)): 960 / 504 / 438 / 432

OpenStreetMap (ε = 0.01 / 0.02 / 0.04 / 0.08):
our-exact: 41.4 / 43.2 / 40.0 / 44.5
rpdbscan (our machine): – / – / – / –
rpdbscan ((Song and Lee, 2018)): 3000 / 1720 / 1200 / 840

TeraClickLog (ε = 1500 / 3000 / 6000 / 12000):
our-exact: 26.8 / 26.9 / 27.0 / 27.6
rpdbscan (our machine): – / – / – / –
rpdbscan ((Song and Lee, 2018)): 15480 / 7200 / 3540 / 1680
Influence of ρ on Parallel Running Time. Figure 10 shows the effect of varying ρ on our two approximate DBSCAN implementations. We also show our best exact method as a baseline. We only show plots for two datasets, as the trend was similar on the other datasets. We observe a small decrease in running time as ρ increases, but find that the approximate methods are still mostly slower than the best exact method. On average, for the parameters corresponding to correct clusters, we find that our best exact method is 1.24x and 1.53x faster than our best approximate method when running in parallel and serially, respectively; this can also be seen in Figure 8. Schubert et al. (Schubert et al., 2017) also found exact DBSCAN to be faster than approximate DBSCAN for appropriately-chosen parameters, which is consistent with our observations.
Large-scale Datasets. In this section, we evaluate the our-exact implementation on large-scale datasets and compare against the reported numbers for the state-of-the-art distributed implementation rpdbscan, which uses 48 cores distributed across 12 machines (Song and Lee, 2018), as well as against rpdbscan run on our machines. The purpose of this experiment is to show that we are able to efficiently process large datasets using just a multicore machine. GeoLife was run on the 36-core machine, whereas the others were run on the 48-core machine due to their larger memory footprints. The running times are shown in Table 2. We see that our-exact achieves an 18–577x speedup over rpdbscan using the same or fewer cores. We believe that this speedup is due to lower communication costs in shared memory as well as a better algorithm. Note that even though TeraClickLog is significantly larger than the other datasets, our running times are not proportionally larger. This is because for the parameters chosen by (Song and Lee, 2018), all points fall into one cell. Therefore, in our implementation all points are core points and are trivially placed into the only cluster. In contrast, rpdbscan has to incur communication when partitioning the points across machines and merging the clusters from different machines together.
7.3. Experiments for d = 2
In Figure 11, we show the performance of our six 2D algorithms as well as hpdbscan and pdsdbscan on the synthetic datasets. We show the running time while varying ε, minPts, the number of points, and the number of threads. We first note that all of our implementations are significantly faster than hpdbscan and pdsdbscan. In general, we found the grid-based implementations to be faster than the box-based implementations due to the higher cell construction time of the box-based implementations. We also found the Delaunay triangulation-based implementations to be significantly slower than the BCP- and USEC-based methods due to the high overhead of computing the Delaunay triangulation. The fastest implementation overall was our-2d-grid-bcp.
8. Related Work
There has been a significant amount of work on parallelizing DBSCAN, which we discuss in this section. Some of these solutions solve the standard DBSCAN problem while others are approximate (however, unlike the approximate DBSCAN algorithm, there are no theoretical bounds on the approximation guarantees for these algorithms). We then discuss variants of DBSCAN that have been proposed.
Xu et al. (Xu et al., 1999) provide the first parallel DBSCAN algorithm, called PDBSCAN, which is based on a distributed R*-tree and always produces the same answer as DBSCAN. Arlia and Coppola (Arlia and Coppola, 2001) present a parallel DBSCAN implementation that replicates a sequential R*-tree across machines so that points can be processed in parallel. Coppola and Vanneschi (Coppola and Vanneschi, 2002) design a parallel algorithm using a queue to store core points, where each core point is processed one at a time but its neighbors are checked in parallel to see whether they should be placed at the end of the queue. Januzaj et al. (Januzaj et al., 2004a, b) design an approximate DBSCAN algorithm based on determining representative points on different local processors, and then running a sequential DBSCAN on the representatives. Brecheisen et al. (Brecheisen et al., 2006) parallelize a version of DBSCAN optimized for complex distance functions (Brecheisen et al., 2004).
Patwary et al. (Patwary et al., 2012) present PDSDBSCAN, a multicore and distributed algorithm for DBSCAN using a unionfind data structure for connecting points. Their unionfind data structure is lockbased whereas ours is lockfree. Patwary et al. (Patwary et al., 2014, 2015) also present distributed DBSCAN algorithms that are approximate but more scalable than PDSDBSCAN. Hu et al. (Hu et al., 2017) design PSDBSCAN, an implementation of DBSCAN using a parameter server framework. Gotz et al. (Götz et al., 2015) present HPDBSCAN, an algorithm for both sharedmemory and distributedmemory based on partitioning the data among processors, running DBSCAN locally on each partition, and then merging the clusters together.
Exact and approximate distributed DBSCAN algorithms have been designed using the MapReduce (Xiang Fu et al., 2011; Dai and Lin, 2012; He et al., 2014; Fu et al., 2014; Kim et al., 2014; Yu et al., 2015; Araujo Neto et al., 2015; Hu et al., 2018) and Spark (Cordova and Moh, 2015; Han et al., 2016; Luo et al., 2016; Huang et al., 2017; Song and Lee, 2018; Lulli et al., 2016) paradigms. RPDBSCAN (Song and Lee, 2018), which is an approximate DBSCAN algorithm, has been shown to be the stateoftheart for MapReduce and Spark. GPU implementations of DBSCAN have also been developed (Böhm et al., 2009; Andrade et al., 2013; Welton et al., 2013; Chen and Chen, 2015).
In addition to parallel solutions, there have been optimizations to speed up sequential DBSCAN (Brecheisen et al., 2004; Kryszkiewicz and Lasek, 2010; Mahesh Kumar and Rama Mohan Reddy, 2016). DBSCAN has also been generalized to other definitions of neighborhoods (Sander et al., 1998). Furthermore, there have been variants of DBSCAN proposed in the literature that do not return the same result as standard DBSCAN. IDBSCAN (Borah and Bhattacharyya, 2004), FDBSCAN (Liu, 2006), GF-DBSCAN (Tsai and Wu, 2009), l-DBSCAN (Viswanath and Pinkesh, 2006), GNDBSCAN (Huang and Bian, 2009), Rough-DBSCAN (Viswanath and Babu, 2009), and DBSCAN++ (Jang and Jiang, 2019) use sampling to reduce the number of range queries needed. El-Sonbaty et al. (El-Sonbaty et al., 2004) present a variation that partitions the dataset, runs DBSCAN within each partition, and merges together dense regions. GriDBSCAN (Mahran and Mahar, 2008) uses a similar idea with an improved scheme for partitioning and merging. Other partitioning-based algorithms include PACA-DBSCAN (Jiang et al., 2011), APSCAN (Chen et al., 2011), and AA-DBSCAN (Kim et al., 2019). DBSCAN* and HDBSCAN* are variants of DBSCAN where only core points are included in clusters (Campello et al., 2015). Other variants use approximate neighbor queries to speed up DBSCAN (Wu et al., 2007; He et al., 2017).
OPTICS (Ankerst et al., 1999), SUBCLU (Kailing et al., 2004), and GRIDBSCAN (Uncu et al., 2006) are hierarchical versions of DBSCAN that compute DBSCAN clusters under different parameter settings, enabling clusters of different densities to be found more easily. POPTICS (Patwary et al., 2013) is a parallel version of OPTICS based on concurrent union-find. Our parallel DBSCAN solutions could be used as a subroutine to potentially speed up hierarchical DBSCAN algorithms.
9. Conclusion
We have presented new parallel algorithms for exact and approximate Euclidean DBSCAN that are both theoreticallyefficient and practical. Our algorithms are workefficient, matching the work complexity of the sequential algorithms, and polylogarithmic depth, making them highly parallel. Our experiments demonstrate that our solutions achieve excellent parallel speedup and significantly outperform existing parallel DBSCAN solutions. Future work includes designing theoreticallyefficient and practical parallel algorithms for variants of DBSCAN and hierarchical versions of DBSCAN.
Acknowledgements. This research was supported by DOE Early Career Award #DE-SC0018947, NSF CAREER Award #CCF-1845763, a Google Faculty Research Award, DARPA SDH Award #HR0011-18-3-0007, and the Applications Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA.
Footnotes
 We say that a bound holds with high probability (w.h.p.) on an input of size n if it holds with probability at least 1 − 1/n^c for a constant c > 0.
References
 Euclidean minimum spanning trees and bichromatic closest pairs. In Annual Symposium on Computational Geometry, pp. 203–210. Cited by: §6.2.
 Euclidean minimum spanning trees and bichromatic closest pairs. Discrete & Computational Geometry 6 (3), pp. 407–422. Cited by: §6.2.
 G-DBSCAN: a GPU accelerated algorithm for density-based clustering. Procedia Computer Science 18, pp. 369–378. Cited by: §1, §8.
 OPTICS: ordering points to identify the clustering structure. In ACM International Conference on Management of Data (SIGMOD), pp. 49–60. Cited by: §8.
 G2P: a partitioning approach for processing DBSCAN with MapReduce. In Web and Wireless Geographical Information Systems, pp. 191–202. Cited by: §1, §8.
 Experiments in parallel clustering with DBSCAN. In European Conference on Parallel Processing (Euro-Par), pp. 326–331. Cited by: §1, §8.
 Approximate range searching. Computational Geometry 17 (3), pp. 135–152. Cited by: §6.3.
 Multidimensional binary search trees used for associative searching. Commun. ACM 18 (9), pp. 509–517. Cited by: §5.1.
 Internally deterministic parallel algorithms can be fast. In ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), pp. 181–192. Cited by: §2, §4.4.
 Parallelism in randomized incremental algorithms. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 467–478. Cited by: §6.2.
 Parallel write-efficient algorithms and data structures for computational geometry. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 235–246. Cited by: §6.2.
 Low-depth cache-oblivious algorithms. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 189–199. Cited by: §2.
 Scheduling multithreaded computations by work stealing. J. ACM 46 (5), pp. 720–748. Cited by: §2.
 Density-based clustering using graphics processors. In ACM Conference on Information and Knowledge Management, pp. 661–670. Cited by: §1, §8.
 An improved sampling-based DBSCAN for large spatial databases. In International Conference on Intelligent Sensing and Information Processing, pp. 92–96. Cited by: §8.
 Space-efficient geometric divide-and-conquer algorithms. Computational Geometry 37 (3), pp. 209–227. Cited by: §4.4.
 Efficient density-based clustering of complex objects. In IEEE International Conference on Data Mining (ICDM), pp. 43–50. Cited by: §8.
 Parallel density-based clustering of complex objects. In Advances in Knowledge Discovery and Data Mining (PAKDD), pp. 179–188. Cited by: §1, §8.
 The parallel evaluation of general arithmetic expressions. J. ACM 21 (2), pp. 201–206. External Links: ISSN 0004-5411 Cited by: §2.
 Hierarchical density estimates for data clustering, visualization, and outlier detection. ACM Trans. Knowl. Discov. Data 10 (1), pp. 5:1–5:51. Cited by: §8.
 How to search in history. Information and Control 64 (1-3), pp. 77–99. Cited by: §6.2.
 HiClus: highly scalable density-based clustering with heterogeneous cloud. Procedia Computer Science 53, pp. 149–157. Cited by: §1, §8.
 Geometric algorithms for density-based data clustering. International Journal of Computational Geometry & Applications 15 (03), pp. 239–260. Cited by: §1, §2.
 APSCAN: a parameter-free algorithm for clustering. Pattern Recognition Letters 32 (7), pp. 973–986. Cited by: §8.
 A randomized algorithm for closest-point queries. SIAM Journal on Computing 17 (4), pp. 830–847. Cited by: §6.2.
 Finding minimum spanning forests in logarithmic time and linear work using random sampling. In ACM Symposium on Parallel Algorithms and Architectures, pp. 243–250. Cited by: §6.1.
 Parallel merge sort. SIAM J. Comput. 17 (4), pp. 770–785. Cited by: Table 1, §2.
 High-performance data mining with skeleton-based structured parallel programming. Parallel Comput. 28 (5), pp. 793–813. Cited by: §1, §8.
 DBSCAN on resilient distributed datasets. In International Conference on High Performance Computing Simulation (HPCS), pp. 531–540. Cited by: §1, §8.
 Introduction to algorithms (3rd ed.). MIT Press. Cited by: §2.
 Efficient map/reduce-based DBSCAN algorithm with optimized data partition. In IEEE International Conference on Cloud Computing, pp. 59–66. Cited by: §1, §8.
 Computational geometry: algorithms and applications. Springer-Verlag. Cited by: §2.
 Faster DBSCAN and HDBSCAN in low-dimensional Euclidean spaces. In International Symposium on Algorithms and Computation (ISAAC), pp. 25:1–25:13. Cited by: §1, §3, §4.1, §4.2, §4.4, §6.1.
 An efficient density-based clustering algorithm for large databases. In IEEE International Conference on Tools with Artificial Intelligence, pp. 673–677. Cited by: §8.
 A density-based algorithm for discovering clusters in large spatial databases with noise. In International Conference on Knowledge Discovery and Data Mining (KDD), pp. 226–231. Cited by: §1, §2, §7.2.
 Research and application of DBSCAN algorithm based on Hadoop platform. In Pervasive Computing and the Networked World, pp. 73–87. Cited by: §1, §8.
 On the hardness and approximation of Euclidean DBSCAN. ACM Trans. Database Syst. 42 (3), pp. 14:1–14:45. Cited by: §1, §2, §3, §4.4, §5.2, §6.1, §6.2, 1st item, §7.
 An optimal randomized parallel algorithm for finding connected components in a graph. SIAM J. Comput. 20 (6), pp. 1046–1067. Cited by: §6.1.
 Towards a theory of nearly constant time parallel algorithms. In IEEE Symposium on Foundations of Computer Science (FOCS), pp. 698–710. Cited by: Table 1, §2.
 HPDBSCAN: highly parallel DBSCAN. In Workshop on Machine Learning in HighPerformance Computing Environments, pp. 2:1–2:10. Cited by: §1, 3rd item, §8.
 A top-down parallel semisort. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 24–34. Cited by: Table 1, §2.
 A faster algorithm for DBSCAN. Note: Master's thesis, Eindhoven University of Technology. Cited by: §1, §3, §4.1, §4.2, §4.4, §6.1.
 OpenStreetMap: user-generated street maps. IEEE Pervasive Computing 7 (4), pp. 12–18. Cited by: item 4.
 An optimal randomized logarithmic time connectivity algorithm for the EREW PRAM (extended abstract). In ACM Symposium on Parallel Algorithms and Architectures, pp. 1–10. Cited by: §6.1.
 Optimal randomized EREW PRAM algorithms for finding spanning forests. Journal of Algorithms 39 (1), pp. 1 – 46. Cited by: §6.1.
 A novel scalable DBSCAN algorithm with Spark. In IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 1393–1402. Cited by: §1, §8.
 A novel DBSCAN based on binary local sensitive hashing and binary-KNN representation. Adv. in MM 2017, pp. 3695323:1–3695323:9. Cited by: §8.
 MR-DBSCAN: a scalable MapReduce-based DBSCAN algorithm for heavily skewed data. Frontiers of Computer Science 8 (1), pp. 83–99. Cited by: §1, §8.
 A MapReducebased improvement algorithm for DBSCAN. Journal of Algorithms & Computational Technology 12 (1), pp. 53–61. Cited by: §1, §8.
 A communication efficient parallel DBSCAN algorithm based on parameter server. In ACM on Conference on Information and Knowledge Management (CIKM), pp. 2107–2110. Cited by: §1, §8.
 Research on the parallelization of the DBSCAN clustering algorithm for spatial data mining based on the Spark platform. Remote Sensing 9 (12). Cited by: §1, §8.
 A grid- and density-based fast spatial clustering algorithm. In International Conference on Artificial Intelligence and Computational Intelligence, Vol. 4, pp. 260–263. Cited by: §8.
 Introduction to parallel algorithms. Addison-Wesley Professional. Cited by: Table 1, §2, §4.2.
 DBSCAN++: towards fast and scalable density clustering. In International Conference on Machine Learning (ICML), Vol. 97, pp. 3019–3029. Cited by: §8.
 DBDC: density-based distributed clustering. In International Conference on Extending Database Technology (EDBT), pp. 88–105. Cited by: §1, §8.
 Scalable density-based distributed clustering. In European Conference on Principles and Practice of Knowledge Discovery in Databases, pp. 231–244. Cited by: §1, §8.
 A new hybrid method based on partitioning-based DBSCAN and ant clustering. Expert Systems with Applications 38 (8), pp. 9373–9381. Cited by: §8.
 Density-connected subspace clustering for high-dimensional data. In SIAM International Conference on Data Mining, pp. 246–256. Cited by: §8.
 AA-DBSCAN: an approximate adaptive DBSCAN for finding clusters with varying densities. The Journal of Supercomputing 75 (1), pp. 142–169. Cited by: §8.
 DBCURE-MR: an efficient density-based clustering algorithm for large data using MapReduce. Information Systems 42, pp. 15–35. Cited by: §1, §8.
 TI-DBSCAN: clustering with DBSCAN by means of the triangle inequality. In Rough Sets and Current Trends in Computing, pp. 60–69. Cited by: §8.
 Scalable clustering algorithm for N-body simulations in a shared-nothing cluster. In Scientific and Statistical Database Management, pp. 132–150. Cited by: item 3.
 The Cilk++ concurrency platform. J. Supercomputing 51 (3). Cited by: §7.
 A fast density-based clustering algorithm for large databases. In International Conference on Machine Learning and Cybernetics, pp. 996–1000. Cited by: §8.
 NG-DBSCAN: scalable density-based clustering for arbitrary data. Proc. VLDB Endow. 10 (3), pp. 157–168. Cited by: §1, §8.
 A parallel DBSCAN algorithm based on Spark. In IEEE International Conferences on Big Data and Cloud Computing, pp. 548–553. Cited by: §1, §8.
 A fast DBSCAN clustering algorithm by accelerating neighbor searching using groups method. Pattern Recogn. 58 (C), pp. 39–48. Cited by: §8.
 Using grid for accelerating density-based clustering. In IEEE International Conference on Computer and Information Technology, pp. 35–40. Cited by: §8.
 A new scalable parallel DBSCAN algorithm using the disjoint-set data structure. In ACM/IEEE International Conference on High Performance Computing, Networking, Storage and Analysis (SC), pp. 62:1–62:11. Cited by: §1, §8.
 Scalable parallel OPTICS data clustering using graph algorithmic techniques. In ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC), pp. 49:1–49:12. Cited by: §1, 2nd item, §8.
 BD-CATS: big data clustering at trillion particle scale. In ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC), pp. 6:1–6:12. Cited by: §1, §8.
 PARDICLE: parallel approximate density-based clustering. In ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC), pp. 560–571. Cited by: §1, §8.
 A randomized time-work optimal parallel algorithm for finding a minimum spanning forest. SIAM J. Comput. 31 (6), pp. 1879–1895. Cited by: §6.1.
 Optimal randomized parallel algorithms for computational geometry. Algorithmica 7 (1), pp. 91–117. Cited by: Table 1, §2, §6.1.
 Density-based clustering in spatial databases: the algorithm GDBSCAN and its applications. Data Mining and Knowledge Discovery 2 (2), pp. 169–194. Cited by: §8.
 DBSCAN revisited, revisited: why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 42 (3), pp. 19:1–19:21. Cited by: §7.2, §7.2.
 Phase-concurrent hash tables for determinism. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 96–107. Cited by: §2.
 Brief announcement: the Problem Based Benchmark Suite. In ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), pp. 68–70. Cited by: §2, §4.4.
 RP-DBSCAN: a super-fast parallel DBSCAN algorithm based on random partitioning. In ACM SIGMOD International Conference on Management of Data, pp. 1173–1187. Cited by: §1, 4th item, §7.2, Table 2, §7, §8.
 Terabyte click logs. Note: http://labs.criteo.com/downloads/download-terabyte-click-logs/ Cited by: item 5.
 GF-DBSCAN: a new efficient and effective data clustering technique for large databases. In WSEAS International Conference on Multimedia Systems & Signal Processing, pp. 231–236. Cited by: §8.
 UCI machine learning repository. Note: http://archive.ics.uci.edu/ml Cited by: item 1.
 GRIDBSCAN: grid densitybased spatial clustering of applications with noise. In IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, pp. 2976–2981. Cited by: §8.
 Thinking in parallel: some basic dataparallel algorithms and techniques. Cited by: Table 1, §2, §6.3.
 Rough-DBSCAN: a fast hybrid density-based clustering method for large data sets. Pattern Recognition Letters 30 (16), pp. 1477–1488. Cited by: §8.
 l-DBSCAN: a fast hybrid density-based clustering method. In International Conference on Pattern Recognition (ICPR), Vol. 1, pp. 912–915. Cited by: §8.
 Mr. Scan: extreme scale density-based clustering using a tree-based network of GPGPU nodes. In ACM/IEEE International Conference on High Performance Computing, Networking, Storage and Analysis (SC), pp. 84:1–84:11. Cited by: §1, §8.
 A linear DBSCAN algorithm based on LSH. In International Conference on Machine Learning and Cybernetics, Vol. 5, pp. 2608–2614. Cited by: §8.
 Research on parallel DBSCAN algorithm design based on MapReduce. Advanced Materials Research 301-303, pp. 1133–1138. Cited by: §1, §8.
 A fast parallel clustering algorithm for large spatial databases. Data Mining and Knowledge Discovery 3 (3), pp. 263–290. Cited by: §1, §8.
 Cludoop: an efficient distributed densitybased clustering for big data using Hadoop. Int. J. Distrib. Sen. Netw. 2015, pp. 2:2–2:2. Cited by: §1, §8.
 Learning transportation mode from raw GPS data for geographic applications on the web. In International Conference on World Wide Web, pp. 247–256. Cited by: item 2.