On The Equivalence of Tries and Dendrograms - Efficient Hierarchical Clustering of Traffic Data
The widespread use of GPS-enabled devices generates voluminous and continuous amounts of traffic data, but analyzing such data for interpretable and actionable insights poses challenges. A hierarchical clustering of the trips has many uses such as discovering shortest paths, common routes and often-traversed areas. However, hierarchical clustering typically has time complexity of O(n²), where n is the number of instances, and is difficult to scale to the large data sets associated with GPS data. Furthermore, incremental hierarchical clustering is still a developing area. Prefix trees (also called tries) can be efficiently constructed and updated in linear time (in n). We show how a specially constructed trie can compactly store the trips and further show this trie is equivalent to a dendrogram that would have been built by classic agglomerative hierarchical algorithms using a specific distance metric. This allows creating hierarchical clusterings of GPS trip data and updating this hierarchy in linear time. We demonstrate the usefulness of our proposed approach on a real world data set of taxis' GPS traces comprising nearly half a million trips, well beyond the capabilities of agglomerative clustering methods. Our work is not limited to trip data and can be used with other data with a string representation.
Location tracking devices have become widely popular over the last decade and this has enabled the collection of large amounts of spatial temporal trip data. Given a collection of such trip data, clustering is often a natural start to explore the general properties of the data and, among clustering methods, hierarchical clustering is well suited as it provides a set of groupings at different levels where each grouping at one level is a refinement of the groupings at the previous levels. This dendrogram structure can also naturally represent the evolutionary/temporal nature of trip data where each level in the dendrogram corresponds to a particular time. However, one significant drawback of standard agglomerative hierarchical clustering is its run time, which is prohibitive when the number of instances, n, is large. For example, in our experiments our data set has nearly half a million trips. Classic agglomerative algorithms would take days or even weeks to build a dendrogram on such a large data set.
In this paper we consider an alternative way to efficiently construct a dendrogram of the trip data without the long computation time. We achieve this by first converting each trip to a trajectory string and then building a prefix tree from the trip-strings. We then formally show the equivalence between a prefix tree and a dendrogram by showing that the prefix tree we create would have been built by a classic agglomerative method using a specific distance metric on the strings. This result is not trivial as the prefix tree is created top-down and the dendrogram bottom-up.
Creating Trajectory Strings from GPS Data. We discretize both the spatial and temporal dimensions with respective pre-defined resolutions, effectively converting each spatial location to a unique symbol. For example, in our experiments we discretize the San Francisco Bay area into a 100 × 100 grid so our alphabet contains 10,000 symbols. We can then naturally represent a trip as a sequence (i.e. string) of the discretized regions (symbols). The symbol at position i in the string represents the location of the trip at time step i, as shown in Figure 1.
Creating Trip Tries. A prefix tree is a tree structure built from strings where each path from the root to any node corresponds to a unique string prefix and is commonly used for indexing and retrieval of text and symbolic data. A prefix tree (trie) can be constructed in time linear in both the number of trips, n, and the maximum number of discretized time steps, L. An example of such a tree is shown in Figure 2.
Uses of Trip Tries. A trip trie is not only a hierarchical clustering (as we shall see) but has other uses. For example, easy to understand visualizations of a collection of trips such as heat maps (see Figures 4 and 6) can be created from a trip trie; and trip tries constructed from different collections of trips can be compared (i.e. Table 1). Tries have many useful properties such as the ability to efficiently compute Levenshtein distances and we describe uses such as creating more robust clusters using these properties. Though tries are commonly used in the database literature for tasks such as retrieval and indexing, to our knowledge they have not been used for the purposes we outlined in this paper.
Uses Beyond GPS Trip Data. In this paper we have focused on GPS trip data as the application domain is important and has readily available public data. However, our work is applicable in other domains where the data represents behavior over time such as settings where some categorical event (a symbol) occurs over time (the position of the symbol). In our earlier work (Davidson et al., 2012) we modeled behavioral data as these event strings but other applications exist in areas such as computer network traffic where each location is an IP address.
Our contributions can be summarized as follows.
We provide a novel way to organize trip data into symbolic data and then a prefix tree/trie (see section 2). Tries can be built and updated in time linear in the number of instances and the alphabet size.
We discuss extensions of our work including uses beyond hierarchical clustering such as outlier detection (see section 4).
We demonstrate the usefulness of our dendrogram in illustrating interesting insights of trip data on a real data set of GPS traces of taxis (see section 5).
Our paper is organized as follows. We describe the steps to create string representations and a trip trie in section 2, which is then followed by a proof of its equivalence to standard agglomerative hierarchical clustering in section 3. We further discuss other different ways this trip trie can be used in section 4. In section 5 we evaluate our approach on a real world dataset of taxis’ GPS traces and demonstrate the usefulness of our approach in obtaining insights on the traffic dynamics. We discuss related work in section 6 and conclude our paper in section 7.
2. Creating Trip Trie from GPS Data
Here we describe the setting upon which our algorithm is built and detail the steps of constructing the trie structure. We assume our data is composed of n trips where each trip consists of a sequence of GPS-located spatial temporal points (x, y, t), where x and y are the longitude and latitude, respectively, and t is the time stamp when this sample is recorded. Note here we assume the time stamps are synchronized/identical for each trip. In reality GPS devices record at irregular time intervals, and later in the experiments we will use interpolation/extrapolation. Our approach consists of three major steps, shown in corresponding order in Figures 3(a), 3(b) and 3(c).
Discretization of the geographic space: This step breaks the modeled space into a finite set of distinct non-overlapping regions which we use as symbols in an alphabet Σ (see Figure 1). This preprocessing is carried out before the main algorithm is applied, as in most work in trajectory mining (Giannotti et al., 2007) (Chattopadhyay et al., 2013). In our experiments we use equal-sized rectangular grid cells over the modeled space; this allows straightforward mapping from actual spatial coordinates to symbols; however, our method can be used with regions of any shape. It is worth noting that the actual number of regions with activity is typically much smaller than the possible number of grid cells due to the physical presence of roads. This is a desirable property allowing large geographic areas to be efficiently represented. For example, in our experiments, though we discretize the San Francisco Bay area into 10,000 cells, less than 20% of them see any activity.
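The coordinate-to-symbol mapping described above can be sketched as follows. The helper names (make_discretizer, to_symbol) and the exact bounding box are illustrative assumptions, not the paper's implementation; only the 100 × 100 resolution follows the experimental setup.

```python
# Sketch of the spatial discretization step: map (longitude, latitude)
# pairs onto cells of an equal-sized rectangular grid, with each cell
# acting as one symbol in the alphabet.

GRID = 100  # 100 x 100 grid -> alphabet of 10,000 symbols

def make_discretizer(lon_min, lon_max, lat_min, lat_max, grid=GRID):
    """Return a function mapping a (lon, lat) point to a single integer symbol."""
    def to_symbol(lon, lat):
        # Clamp to the modeled space, then compute the cell row/column.
        col = min(grid - 1, max(0, int((lon - lon_min) / (lon_max - lon_min) * grid)))
        row = min(grid - 1, max(0, int((lat - lat_min) / (lat_max - lat_min) * grid)))
        return row * grid + col  # unique symbol id in [0, grid * grid)
    return to_symbol

# Illustrative bounding box roughly covering the San Francisco Bay area.
to_symbol = make_discretizer(-122.6, -121.6, 37.2, 38.2)
```

In practice only cells that see activity need to be stored, which is why sparse areas do not inflate the effective alphabet.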
Build trajectory strings: In this step we build a trajectory string for each trip using symbols in Σ. Note that we regard the beginning of each trip as time 0 and each position of the string indicates the location of the object at a particular time. For example, if the temporal resolution is Δt, then a trajectory string "abc" records the information that this trip starts (i.e. at its time 0) at region a, goes through region b at time Δt and ends at region c at time 2Δt. The temporal resolution is typically given by the devices' capture rate but, as mentioned before, we use a single resolution and construct strings accordingly through interpolation/extrapolation. This encoding would produce a forest of tries as the initial starting locations may differ. However, we can instead create a symbol for an artificial common starting point that occurs before the trips start. This allows a single trie to be created as there is a common root. Figure 1 gives an illustrative example of the result of such construction.
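A minimal sketch of this step, assuming linear interpolation between the irregular GPS samples and a hypothetical to_symbol mapping; the ROOT sentinel plays the role of the artificial common starting point described above.

```python
# Resample a trip's irregular GPS samples to a fixed temporal resolution
# by linear interpolation, then map each resampled point to a symbol.

ROOT = -1  # artificial common start symbol shared by all trips

def trip_to_string(samples, resolution, to_symbol):
    """samples: list of (lon, lat, t) sorted by t; resolution: seconds per step."""
    t0 = samples[0][2]
    duration = samples[-1][2] - t0
    string = [ROOT]
    n_steps = int(duration // resolution) + 1
    for step in range(n_steps):
        t = t0 + step * resolution
        # Find the bracketing samples and linearly interpolate between them.
        for (lon1, lat1, t1), (lon2, lat2, t2) in zip(samples, samples[1:]):
            if t1 <= t <= t2:
                w = 0.0 if t2 == t1 else (t - t1) / (t2 - t1)
                lon = lon1 + w * (lon2 - lon1)
                lat = lat1 + w * (lat2 - lat1)
                string.append(to_symbol(lon, lat))
                break
        else:
            # Past the last sample: extrapolate by holding the last position.
            string.append(to_symbol(samples[-1][0], samples[-1][1]))
    return tuple(string)

# Demo with a toy symbol function (hypothetical): symbol = integer longitude.
symbols = trip_to_string([(0.0, 0.0, 0), (10.0, 0.0, 10)], 5, lambda lon, lat: int(lon))
# -> (ROOT, 0, 5, 10)
```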
Construct trie: Once each trip is converted into a string as above (i.e. a sequence of regions), we can construct a trie of these strings. It helps to note that position i in a string indicates the location at time step i and hence the nodes at level i (from the top down) in the trie represent trips whose first i positions are the same. Figure 2 shows an example of the resulting trie and what the nodes represent at each level. In our experiments we implement simpler tries in MATLAB since it provides simple built-in functions and thus an easy way to reproduce the results. In a more sophisticated implementation the user can store useful relevant information about the trips at each node, such as the distribution of the trip durations shown in Figure 2. We present our pseudo-code in Figure 3.
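A dict-of-dicts trie along these lines can be sketched as follows (the layout and names are illustrative; the paper's experiments use a simpler MATLAB implementation). Each node records the trips passing through it, so any node directly identifies one cluster: all trips sharing that prefix.

```python
# Minimal trie over trajectory strings. Each node maps a symbol to a child
# node and keeps the indices of trips whose strings pass through it.

def build_trie(strings):
    root = {"trips": list(range(len(strings))), "children": {}}
    for idx, s in enumerate(strings):
        node = root
        for symbol in s:
            child = node["children"].setdefault(
                symbol, {"trips": [], "children": {}})
            child["trips"].append(idx)
            node = child
    return root

strings = [("a", "b", "c"), ("a", "b", "d"), ("a", "e")]
trie = build_trie(strings)
# Trips sharing the prefix ("a", "b"):
ab = trie["children"]["a"]["children"]["b"]["trips"]  # -> [0, 1]
```

Insertion touches each symbol of each string once, so construction is O(nL) as claimed.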
3. On the Equivalence Between Trip Tries and Hierarchical Clustering
Here we make the claim that our top-down trip trie can be efficiently constructed in linear time and that it is identical to the result of standard bottom-up agglomerative hierarchical clustering with the following metric on pairs of strings s and t of length L:

d(s, t) = Σ_{i=1}^{L} 2^{-i} · 1[s_i ≠ t_i]        (1)

where 1[·] is the 0/1 indicator function. This metric can be viewed as a weighted Hamming distance with the weight diminishing exponentially with the symbol position.
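The metric, as reconstructed above, is easy to compute directly. A sketch (function name hypothetical), which also pads shorter strings with a null sentinel as done later in the paper:

```python
# Weighted Hamming distance with position weight 2^-i: a mismatch at
# position i always outweighs mismatches at every later position combined,
# since sum_{j>i} 2^-j < 2^-i.

def trip_distance(s, t):
    L = max(len(s), len(t))
    null = object()  # pad the shorter string with a null symbol
    s = tuple(s) + (null,) * (L - len(s))
    t = tuple(t) + (null,) * (L - len(t))
    return sum(2.0 ** -(i + 1) for i in range(L) if s[i] != t[i])
```

For example, two strings sharing only their first symbol are still closer (distance at most 2^-2 + 2^-3 + … < 1/2) than two strings differing only in their first symbol (distance exactly 1/2).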
We divide this section into several parts. Firstly, we discuss why such a distance metric is useful, its interpretation and implication. We then prove that our method of constructing trie will generate this clustering and finally we present a brief complexity analysis.
3.1. Interpretation of Dendrogram and Uses of Clusters
Our string representation of trips effectively takes spatially and temporally irregular data and converts them into a string. The strings have a natural interpretation: the symbol at position i is where the trip was at time step i. Our string distance function above then effectively says two trips are more similar if they are initially in the same locations. This string representation also allows our dendrogram’s levels to naturally explain the evolution of a trip. We can then interpret the clusters via the following observations:
The first level of the dendrogram contains a clustering of the trips based on their starting locations, the next level a refined clustering based on their starting locations and locations at time step 1, and so on.
Each path from the root to any node represents a cluster of common trips.
Understanding what these clusters represent is critical to understanding how they can be used. Whereas in our earlier work (Kuo et al., 2015) a cluster of trips represented trips which started and ended in the same location/time, here we consider the trajectory and the entire duration taken to complete the trip. Therefore it is possible (and can be desirable in some contexts) that if two types of trips have the same start and end locations but move at different paces or take slightly different routes (i.e. because some are completed during rush hour and others at night) they will appear in different clusters. Such a clustering can be used in a variety of settings as follows:
Next Movement Prediction. A current trip can be quickly mapped to a node in the trie and the children of the node determine the likely next locations in the time step.
Route Diversity Understanding. Consider trips between a start and an end location (symbols). Such trips could appear multiple times in the tree (i.e. at different nodes) due to different routes and different travel times. Counting how frequently this combination occurs gives a measure of diversity for the pair of start/end locations.
Common Ancestors. Consider two nodes. Their common ancestor represents a bifurcation point for these trips. Locations which appear frequently in the tree are therefore naturally hub locations.
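The next-movement prediction use above can be sketched as a walk down a trie that keeps per-node trip counts; the data layout and names here are illustrative assumptions, not the paper's implementation.

```python
# Walk the trie along a partial trip's symbols, then read the empirical
# distribution of next locations from the children of the node reached.

def build_counted_trie(strings):
    root = {"count": len(strings), "children": {}}
    for s in strings:
        node = root
        for sym in s:
            child = node["children"].setdefault(sym, {"count": 0, "children": {}})
            child["count"] += 1
            node = child
    return root

def next_location_distribution(trie, partial_trip):
    node = trie
    for sym in partial_trip:
        node = node["children"][sym]  # KeyError -> unseen trip type
    total = sum(c["count"] for c in node["children"].values())
    return {sym: c["count"] / total for sym, c in node["children"].items()}

trips = [("a", "b", "c"), ("a", "b", "d"), ("a", "b", "c"), ("a", "e")]
dist = next_location_distribution(build_counted_trie(trips), ("a", "b"))
# dist -> {"c": 2/3, "d": 1/3}
```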
If these nuances of trip speeds and trajectories producing different clusters are undesirable, then we propose a method to post-process the dendrogram (see section 4). In that section we describe how efficient calculation of the Levenshtein distance between strings using a prefix tree allows us to combine clusters in the dendrogram to alleviate these nuances.
3.2. A Proof of Equivalence Between Dendrograms & Tries
Here we show that our trie can be naturally defined by a (threshold) dendrogram (we use a threshold dendrogram to maintain only the ordering of the merges; a proximity dendrogram also records the smallest distance for which a merge occurs, see (Jain et al., 1988)) that maps natural numbers to partitions of our finite set X of distinct trajectory strings (the assumption of a distinct set makes the following definitions and explanation cleaner; in our experiments we do not need to discard duplicate trips as we record at each node, i.e. prefix, which trips have this prefix). We append null symbols, λ, to shorter strings so that all strings will have equal length L. Accordingly a string s can be written in its symbols as s = s₁s₂…s_L. Our aim here is to claim that our trip trie is identical to the result of single linkage hierarchical clustering on the trajectory strings with the specific metric in equation 1, which would require O(n²) computation time if done directly. We will verify this claim empirically in the experimental section. We achieve this aim with 3 steps.
Define equivalence relations from the prefixes such that an equivalence class forms a single node in one level of the trie and all equivalence classes form all clusters at one level in the trie.
Define our trip trie as a dendrogram; that is, a function θ mapping natural numbers (i.e. levels) to the set partitions of the strings X.
Define a metric on the strings so that single linkage hierarchical clustering outputs our exact dendrogram above.
We will follow the notations and definitions used in (Carlsson and Mémoli, 2010) throughout the discussion.
Equivalence Classes. We define equivalence relations ∼_i on X for each integer i with 0 ≤ i ≤ L: s ∼_i t if and only if s₁…s_i = t₁…t_i; in other words, strings s and t are equivalent if they share a common prefix of length i. It should be clear from the construction of a trie that all trips going through the same node at level i are equivalent under relation ∼_i. It is straightforward to verify that these are indeed equivalence relations.
Dendrogram. We formally define our dendrogram θ, a mapping from the natural numbers to the partitions of our set of strings X, as follows.
θ(0) = {{s} : s ∈ X} (i.e. the bottom level contains all singletons).
For each positive integer i ≤ L, θ(i) contains the equivalence classes of X under ∼_{L−i}.
For i > L, θ(i) = θ(L).
Simply put, in partition θ(i) each block contains strings that agree in the first L−i positions. To check that θ is indeed a dendrogram, we need to make sure θ(i) is a refinement of θ(i+1) for each i. This can be verified straightforwardly from the definition of θ: for any two strings s and t such that s ∼_{L−i} t, s and t must share the same first L−i symbols by definition. Hence they must also share the same first L−(i+1) symbols (since L−i−1 ≤ L−i) and it follows that s ∼_{L−(i+1)} t. Our trip trie is in fact identical to this dendrogram; each level in our trie is a partition of all trips where each node is a block that contains all trips with the particular prefix denoted by the node. The leaf nodes correspond to θ(0), the level right above the leaves corresponds to θ(1), etc., and the root is θ(L).
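The construction of θ can be checked mechanically on toy strings. This sketch materializes each θ(i) as the blocks of strings sharing their first L−i symbols and asserts the refinement property (helper names are illustrative):

```python
# theta(i): partition of the strings into blocks sharing the first L-i symbols.

def partition(strings, prefix_len):
    blocks = {}
    for s in strings:
        blocks.setdefault(s[:prefix_len], []).append(s)
    return [frozenset(b) for b in blocks.values()]

def refines(finer, coarser):
    # Every block of the finer partition sits inside some block of the coarser.
    return all(any(block <= big for big in coarser) for block in finer)

strings = [("a", "b", "c"), ("a", "b", "d"), ("a", "e", "c"), ("z", "b", "c")]
L = 3
levels = [partition(strings, L - i) for i in range(L + 1)]  # theta(0)..theta(L)
assert all(refines(levels[i], levels[i + 1]) for i in range(L))
```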
Metric on strings. Instead of constructing the trie as described, we could define a metric on the strings such that standard single linkage agglomerative hierarchical clustering (Carlsson and Mémoli, 2010; Jain et al., 1988) using this metric would result in the same dendrogram described above, that is, identical to our trip trie. Intuitively we are measuring the distance between two strings by the number of positions at which they differ, where positions towards the start are weighted more. We want to weight those positions to guarantee that two strings sharing only the first position are still closer than another pair of strings that share all but the first position. From this we define the metric of equation 1,

d(s, t) = Σ_{i=1}^{L} 2^{-i} · 1[s_i ≠ t_i],

where 1[·] is the 0/1 indicator function.
Theorem 3.1. d is a metric on the space X.

Proof. It is straightforward to verify that d(s, t) = d(t, s) and that d(s, t) = 0 if and only if s = t. To prove that d is indeed a metric on X, we show the triangle inequality: for any strings s, t, u ∈ X, d(s, u) ≤ d(s, t) + d(t, u). Note that by the definition of d and the distributive property of multiplication, it suffices to show that for each i,

1[s_i ≠ u_i] ≤ 1[s_i ≠ t_i] + 1[t_i ≠ u_i].
We consider each of the two possible cases. If s_i = u_i, then the left side is 0 and the right side is either 0 (if t_i = s_i = u_i too) or 2 (if t_i differs from both). Thus the inequality is satisfied. On the other hand, if s_i ≠ u_i, then the left side is 1. Since s_i ≠ u_i, it is impossible that s_i = t_i and t_i = u_i at the same time. Thus at least one of the two indicators on the right will output 1 and the right side will be at least 1; the inequality holds in general. ∎
Remark To have agglomerative hierarchical clustering result in exactly the same dendrogram as our trie, we must allow more than two clusters to merge at each step (if the linkage distance between them is the same).
3.3. Run Time Complexity
Here we give a brief analysis of the run time of our proposed algorithm. The first step of spatial discretization defines a simple mapping from longitude/latitude coordinates to a finite symbol space. With our equal-sized rectangular grid, this mapping can be computed in constant time (for each coordinate) and thus the overall complexity of the discretization and the construction of the string representation for all trips is O(nL), where n is the total number of trips and L is the length of the longest string, which is determined by the duration of the trip and the chosen temporal resolution. From the pseudo-code in Figure 3, we can easily see that construction of the trie requires invoking unique L times. Since unique can be implemented in O(n) time (e.g. with hashing), the overall complexity of building the tree is O(nL) too. This is the same as the standard trie implementation using linked nodes. It is worth noting that distributed implementations of prefix trees already exist in popular software (e.g. Apache HBase) and thus our approach could scale to huge data sets. In comparison, standard hierarchical clustering algorithms require O(n²) time, which becomes prohibitively demanding even for moderately sized n. Later in our experiments, we build a trip trie of nearly 500K trips (strings) in 100's of seconds.
4. Other Uses of Trip Tries
The literature on tries and trees is significant and many useful properties have been derived. Here we describe how to use two such properties: i) The ability to efficiently compute Levenshtein distances from tries and ii) Outlier score computation from trees.
4.1. From Micro to Macro Clusters With Relaxed Distance Calculations
The distance function that our method effectively uses (see equation 1) is quite strict. Two trips with overwhelmingly similar trajectories and speeds may be placed in different clusters if the differences between the trips occur towards the beginning of the trip. This may be desirable in some settings but in others more tolerance of these slight differences may be required. Here we discuss how this can be achieved by exploiting the fact that a prefix tree can be used to efficiently calculate the Levenshtein distance (Chou and Juang, 2003) between strings. Suppose we choose a level of our dendrogram; we term all clusters at that level micro-clusters. In our experiments there are upwards of 10,000 such micro-clusters at level 20, each representing a unique path (i.e. trip type) through the space. We can group together these micro-clusters based on their Levenshtein distance.
The Levenshtein distance between two strings is the minimum number of operations (insertion, deletion, substitution) needed to make the two strings the same (Levenshtein, 1966). Therefore two strings differing only in their first symbol will be in different clusters in our hierarchical clustering but could be grouped together as their Levenshtein distance is just 1 (a single substitution at position 1). The Levenshtein distance is suitable for forming these clusters from the micro-clusters as slight variations in trajectories can be overcome via the substitution operation and slight variations in speed can be overcome with the insertion and deletion operations. Using clustering objectives such as minimizing maximum cluster diameter will produce clusters with a strong semantic interpretation: a diameter of k means all trips have at most k differences in speed and route.
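A sketch of this micro-to-macro grouping, using the standard dynamic-programming Levenshtein distance. The greedy grouping below is an illustrative stand-in for the clustering objectives mentioned above (e.g. minimizing maximum diameter), not the paper's procedure.

```python
# Standard two-row DP Levenshtein distance.
def levenshtein(s, t):
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

# Greedily merge micro-cluster representative strings whose pairwise
# distance stays within k (a bounded-diameter heuristic).
def group_micro_clusters(reps, k):
    groups = []
    for s in reps:
        for g in groups:
            if all(levenshtein(s, t) <= k for t in g):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

reps = ["abcd", "zbcd", "abce", "wxyz"]
groups = group_micro_clusters(reps, k=2)
# "abcd", "zbcd", "abce" differ pairwise by at most 2 edits; "wxyz" stands alone
```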
4.2. Outlier Scores
In this paper we mainly explore the interpretation and use of our trip tries as a clustering tool. However we introduce a different view on the trip trie here. A key to understanding this interpretation is that our discretization of space into a grid defines a state space where each state is represented by a symbol in the alphabet. Every root-to-leaf path is then a unique trip type and all root-leaf combinations represent all unique trips through the state space that occurred in our data set. Note this definition of unique trips (due to our distance metric) implies that trips starting and ending at the same location but using different routes and/or paces will be considered different.
The notion of Isolation Forest (Liu et al., 2008, 2012) provides a method of identifying outliers as those entities isolated close to the root of the tree. We can use our trip tries for a similar purpose. The frequency of a symbol (which represents a location) in the trie is an indication of its outlier score with respect to how many unique trip types involve it. If one location occurs twice as often as another, then the former is involved in twice as many unique trips as the latter. This is visualized in Figure 6(b) for trips starting at San Francisco airport. The depth of each symbol in the tree is an indication of how often it is used. If some measure of depth of one location is twice as large as another's, then the former is much less prevalent at the beginnings of trips than the latter. This is visualized in Figure 6 where we consider first appearance in the tree, though the mean level of occurrence could also have been used.
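Both signals, the number of unique trip types touching a location and the level at which a location first appears, can be computed in one pass over the distinct trip strings. A sketch (function name hypothetical):

```python
from collections import Counter

def symbol_stats(strings):
    type_count = Counter()   # unique trip types touching each location
    first_level = {}         # earliest string position of each location
    for s in set(strings):   # distinct strings = unique trip types
        for level, sym in enumerate(s):
            first_level[sym] = min(first_level.get(sym, level), level)
        for sym in set(s):
            type_count[sym] += 1
    return type_count, first_level

trips = [("a", "b", "c"), ("a", "b", "d"), ("a", "b", "c")]
counts, first = symbol_stats(trips)
# "b" touches 2 unique trip types and first appears at level 1
```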
5. Experiments

We have chosen experiments to demonstrate the usefulness of our proposed approach on a real world data set of freely available GPS traces of taxis (Piorkowski et al., 2009) (the data set can be downloaded, after registration, at http://crawdad.org/epfl/mobility/). In particular we focus on the following questions:
Do results from a MATLAB agglomerative hierarchical clustering algorithm and our trie actually agree with each other? (verifying Theorem 3.1).
Is the distance metric we implicitly use useful? We address this by investigating properties of hierarchical clusterings built from different time periods to see if the insights found make sense (see Table 1).
Can we explain clusters using our dendrogram’s spatial and temporal interpretation? For example, where are the most common trip trajectories? Where do trips starting at a particular region go?
We start with the description of our data set and the processing steps we use to extract trip information for reproducibility (all our code will be made publicly available for replication of our results upon acceptance).
5.1. Data Description and Preparation
The raw data set contains GPS traces of 536 taxis from Yellow Cab in the San Francisco Bay area during a 24-day period in 2008. Each trace file corresponds to one taxi and consists of recordings of the latitude, longitude, a customer on/off flag and the time of recording (e.g. 37.75134, -122.39488, 0, 1213084687). We generate a trip by searching for contiguous values of '1' for the customer-on flag for each taxi. Overall a total of nearly half a million trips are extracted. Since the vast majority of the taxi trips are short in duration and we are more interested in analyzing local traffic, we restrict our study to short trips, which account for the overwhelming majority of all trips extracted from the data set. In addition, we pick our temporal resolution to be 1 minute. That is, in our string representation, consecutive symbols are the regions recorded 1 minute apart. Assuming a typical average driving speed in the city, a taxi moves roughly half a mile every minute. Accordingly we partition our modeled space into a 100 × 100 grid where each grid cell has a geographic dimension of the same order, which means adjacent symbols in a string are more likely to be different.
5.2. Question 1: Equivalence of MATLAB Hierarchical Clustering and Our Approach
Here we verify our claim (in section 3) that the trie constructed by our approach is the same as the dendrogram output by standard agglomerative hierarchical clustering with the metric defined in equation 1. We construct our trie, compare it with the results of the built-in single linkage clustering from MATLAB (i.e. linkage followed by dendrogram), and check their equivalence. For any given level we can verify that the two clusterings from our trie and the MATLAB dendrogram are identical up to reordering/relabeling with Algorithm 1. If this is true for all levels, then we conclude the equivalence between our trie and the MATLAB dendrogram. Note standard agglomerative clustering needs to compute and update distances between each pair of instances, which is prohibitive when the number of instances is large as in our case (in fact, standard agglomerative clustering cannot even handle moderate-sized data sets; see (Gilpin et al., 2013b)). Therefore, we draw a random subset of 1000 trips and compare our trip trie built on them with the dendrogram output by agglomerative hierarchical clustering with the string metric in equation 1. We repeat this for 10 random samples and in each case the dendrograms produced by MATLAB and our method are identical.
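The level-wise check of Algorithm 1 amounts to comparing two clusterings of the same items up to relabeling, which can be sketched as follows (a stand-in, since Algorithm 1 itself is not reproduced here):

```python
# Two label vectors over the same items describe the same partition
# (up to relabeling) iff they induce the same set of blocks.

def same_partition(labels_a, labels_b):
    def blocks(labels):
        groups = {}
        for item, lab in enumerate(labels):
            groups.setdefault(lab, set()).add(item)
        return {frozenset(g) for g in groups.values()}
    return blocks(labels_a) == blocks(labels_b)

assert same_partition([1, 1, 2, 3], ["x", "x", "y", "z"])   # relabeled, same blocks
assert not same_partition([1, 1, 2, 3], [1, 2, 2, 3])       # different blocks
```

Running this check at every level of the two dendrograms establishes their equivalence.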
5.3. Question 2: Usefulness of Hierarchies - Comparing Different Clusterings
Here we demonstrate the usefulness of the hierarchies we build by exploring the properties of different dendrograms built from different groups of data. We categorize trips into distinct groups and construct a dendrogram for each group. We are interested in whether hierarchical clusterings (dendrograms) built from distinct subsets of the trips exhibit any differences in their properties and whether these differences make sense. We use the start times of trips to extract four subsets of the data: day peak (5 AM - noon), night peak (3 PM - 10 PM), weekdays and weekends. Note that these subsets are not the same size and are not mutually disjoint either; hence we report averages of these properties.
Measures of Diversity. One interesting characteristic of the trips is their diversity, which can be measured in two ways. The branching factor of the dendrogram at each level provides a measure of dispersion of the trips: if a dendrogram has large branching factors on average, then trips have more varied trajectories in general. In Table 1 (lines 2 and 3) we see that the weekend trip dendrogram has a slightly higher average branching factor than the peak-period dendrograms. A second measure is the number of times regions appear in the dendrogram divided by the number of regions, i.e. the average number of clusters per region. If this number is significantly higher for one dendrogram than another then there are more diverse routes in the former dendrogram. Table 1 (line 4) shows night time has more diverse routes than day time and weekdays more than weekends. Finally, Table 1 (lines 6-7) report the average number of trips per cluster and we find, as expected, there are bigger clusters for the nighttime and weekday trips.
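The branching-factor statistic can be computed from the trip strings without materializing the trie, since the level-i clusters are exactly the distinct length-i prefixes. A sketch (function name hypothetical):

```python
# Average branching factor at level i: number of distinct (i+1)-prefixes
# divided by the number of distinct i-prefixes.

def avg_branching_factors(strings, max_level):
    factors = []
    for i in range(max_level):
        parents = {s[:i] for s in strings if len(s) > i}
        children = {s[:i + 1] for s in strings if len(s) > i}
        if parents:
            factors.append(len(children) / len(parents))
    return factors

trips = [("a", "b", "c"), ("a", "b", "d"), ("a", "e", "c"), ("z", "b", "c")]
bf = avg_branching_factors(trips, 3)
# level 0: 2 distinct first symbols under 1 root prefix -> 2.0
```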
Table 1. Properties of the dendrograms built from different subsets of the trips.

| | Day peak | Night peak | Weekdays | Weekends |
| --- | --- | --- | --- | --- |
| Total number of trips | 96582 | 161355 | 282545 | 148159 |
| Level-wise average branching factor | 1.3234 | 1.3352 | 1.3531 | 1.3361 |
| Level-wise average branching factor (first 11 levels) | 1.8802 | 1.9108 | 1.9598 | 1.9145 |
| Average number of clusters per region | 191.9070 | 251.0469 | 304.4685 | 238.3682 |
| Average number of clusters per region (first 11 levels) | 89.0493 | 113.2751 | 151.1392 | 113.0555 |
| Average number of trips per cluster | 12.5902 | 19.1384 | 23.6316 | 16.8952 |
| Average number of trips per cluster (first 11 levels) | 33.3085 | 51.5930 | 64.1925 | 45.3436 |
5.4. Question 3: Temporal Interpretations of our Hierarchies
Each dendrogram has a natural temporal interpretation: the clustering at level i is where the trips are at time step i. Here we explore finding high level movement patterns of the taxis using our hierarchy. Some natural questions regarding such movements are “where do the trips start?” and “what paths are taken by the trips?”.
Visualizing Frequencies of Regions/Symbols at Different Levels. At each level of our dendrogram we have a certain number of clusters, each with a differing number of trips. We can count the total number of trips through each distinct region/symbol across all clusters. These counts can be represented as a heat map for a given time/level and these heat maps provide information on how taxis in general move about in the city. Figure 4 shows 3 selected heat maps constructed from our dendrogram: the densities of taxis’ locations at the start of the trips, and at the 10th and 20th minutes after the start, respectively. From these figures, we can see that the San Francisco downtown area always has traffic but there are general movement patterns radiating outwards from the area too. The most obvious interpretation is that these taxis carry customers from the more central area towards the north, east and south.
Visualizing Clusters as Trajectories. In our dendrogram each cluster at level i can be naturally interpreted as a partial trip/trajectory up to time i. Hence for any level we can easily visualize the largest clusters as trajectories. Figure 5 shows the top 3 largest clusters (partial trajectories) of our dendrogram at depth 11 (i.e. the 10th minute after the start of the trips). This is an interesting contrast to Figure 4, which showed most trips originating in the downtown area. We find that after 10 minutes those trips have dissipated in so many directions that the most frequent trips do not include any originating downtown.
Visualizing Regions across Clusterings. Often we are interested in the order in which regions appear in the trajectories. Are some regions more likely to be at the beginning of a trip or the end? For example, the regions where people often start their trips will appear at the first level of our dendrogram (i.e. the crudest clustering) and the places that only appear as dropoff points will appear at the leaf level (i.e. the finest clustering). Accordingly we can construct a map of all regions and associate each region with the depth of the dendrogram at which it first appears in any cluster. This can be efficiently computed from our dendrogram by finding the level at which a symbol/region first appears. Figure 6 shows a heat map of this “order of occurrence” map. The map can be interpreted as follows: those areas in red appear often at the start of a trip and those in blue appear towards the end of a trip. We can see the regions in Downtown/Berkeley/Oakland appear in the crudest clusterings whereas regions farther away from road segments often occur only in much finer clusterings.
Visualizing Refinement of a Clustering. Our dendrogram models how a clustering (at a level) is refined. Some clusters may split into many new clusters at the next level while others split into fewer clusters or even stay the same. In our dendrogram, the split of a cluster in a refinement (i.e. the next level) depends on the next regions the trajectories go through. Often we are interested in the subset of trips that start from a particular region and where they go. We can obtain such information by first choosing the cluster associated with the given start region (at level 1), and then following all the new clusters that split from it in the refinements. This corresponds to the sub-tree (also a dendrogram) rooted at that particular cluster (at level 1) in the dendrogram. Figure 6(b) shows the most frequently appearing regions at the 10th minute for those trips that start at the San Francisco airport. This could potentially be used as a predictive tool providing the distribution of end regions for all past trips starting at particular regions.
Table 2. Run times for each experiment:
- Constructing string representations
- Calculating trie statistics (section 5.3)
- Generating movement (or clusters) heat maps (section 5.4)
- Generating region occurrence heat map (section 5.4)
5.5. Run Time
In earlier sections we showed, by complexity analysis, that our trip trie is much faster to construct than standard agglomerative hierarchical clustering. Here we report the actual run time of each experiment described above in Table 2. All experiments were performed using MATLAB version 7 on an Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz (no parallel computation was implemented, so the presence of multiple cores is of little relevance). The run times in Table 2 are very reasonable for a data set of roughly half a million instances. Given that our method scales linearly and that modern parallel implementations of tries exist, we expect it to be straightforward to apply our methods to much larger data sets of tens of millions of trips with relatively little effort.
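The linear-scaling claim can be made concrete: inserting a string into a trie visits exactly one node per symbol, so building the trie over n trips costs time proportional to the combined length of the strings, independent of how many distinct prefixes exist. A minimal Python sketch (illustrative, not the paper's MATLAB implementation) that counts the node visits:

```python
def build_trie(strings):
    """Insert each string into a nested-dict trie, counting node visits.
    Total work is one visit per symbol, i.e. linear in the combined
    length of the input strings."""
    root = {}
    visits = 0
    for s in strings:
        node = root
        for sym in s:
            visits += 1  # exactly one visit per symbol inserted
            node = node.setdefault(sym, {})
    return root, visits

root, visits = build_trie(["abc", "abd", "xy"])
print(visits)  # 8, the combined length of the three strings
```

Adding a new trip later touches only that trip's own symbols, which is why incremental updates are also linear in the length of the new record.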
6. Related Work
Our work touches upon several areas of related work: hierarchical clustering, spatial temporal mining, trajectory mining, and tree structures for trajectory data. To our knowledge, the idea of building prefix trees to efficiently compute dendrogram structures is novel. The hierarchical clustering of large-scale trajectory data sets is also novel, as hierarchical clustering methods do not readily scale.
Hierarchical Clustering: As we mentioned in the introduction, standard hierarchical clustering outputs a dendrogram, and there is a well-known result that a dendrogram can be equivalently represented as an ultrametric through a canonical mapping (Hartigan, 1985; Carlsson and Mémoli, 2010). Some work (Murtagh et al., 2008) also pointed out links between prefixes and ultrametrics when trying to increase the “ultrametricity” of data through data recoding/coarsening. Our current work makes the equivalence between a prefix tree and a dendrogram more formal, both analytically and empirically, and provides a metric between pairs of strings under which standard hierarchical clustering outputs a dendrogram identical to our prefix tree.
Spatial temporal mining: Spatial temporal data mining has received growing attention recently, partly due to the emergence of cheap sensors that can easily collect vast amounts of data. The spatial temporal nature of the data adds multiple challenges not handled by many classical data mining algorithms, such as discretization of continuous dimensions, non-independence of samples, topological constraints, visualization of the discovered results, and many more (Rao et al., 2012; Andrienko et al., 2006; Davidson, 2009; Qian and Davidson, 2010; Gilpin et al., 2013a; Wang et al., 2013). Our paper analyzes a particular form of spatial temporal data with a specialized data structure, and we discuss some related work along this line.
Trajectory data indexing and retrieval: There exists a body of literature that deals with the storing, indexing, and retrieval of large trajectory data sets (Dittrich et al., 2009; Chakka et al., 2003; Cudre-Mauroux et al., 2010). Due to the spatial temporal nature of the data, different static and dynamic data structures have been proposed and explored, such as the quadtree (Cudre-Mauroux et al., 2010) and the 3D R-tree (Chakka et al., 2003; Guttman, 1984). This line of work primarily focused on efficiently performing tasks common in databases, such as retrieving and updating data records. Although data storage and updating are necessities of almost all data structures, our current work focuses instead on insightful and actionable discovery and summarization from a large collection of GPS trip data.
Tree structures in spatial temporal data mining: Probabilistic suffix trees were used to mine outliers in sequential temporal databases (Sun et al., 2006). Tree-based structures were also studied and employed to optimize queries and computations in large spatial temporal databases (Mouratidis et al., 2008; Tao et al., 2003). Another work (Monreale et al., 2009) attempted to predict the next locations of given (incomplete) trajectories by first extracting frequent trajectory patterns (called T-patterns (Giannotti et al., 2006)) and building a specialized prefix tree whose nodes are the (pre-determined) frequent regions and whose edges are annotated with travel times; next locations are then predicted using association rules. Our current work is meant to provide an exploratory analysis of the trips (i.e. clustering), rather than prediction, through a prefix tree structure that could potentially offer users new insights into the data.
7. Conclusion
In this paper we propose a novel way to efficiently organize GPS trip data into a hierarchy to gain high-level actionable insights from such data. We represent each trip symbolically as a string that encodes its spatial temporal information and create a trie from these strings. The trie partitions the trips at multiple granularities at its different levels and can be shown to be equivalent to the output of standard agglomerative hierarchical clustering under a specific metric. We discuss several uses of the trie, including discovering traffic dynamics and characterizing outliers, and an empirical evaluation of our proposed approach on a real world data set of taxis’ GPS traces demonstrates its usefulness.
One direction for future work is to allow flexible dynamic changes to our trie as new trip records are collected. Examples include merging or splitting nodes (i.e. regions) as new records arrive, or adapting the trie when information about trip densities or road infrastructure is to be incorporated.
References
- Andrienko et al. (2006) Gennady Andrienko, Donato Malerba, Michael May, and Maguelonne Teisseire. 2006. Mining spatio-temporal data. Journal of Intelligent Information Systems 27, 3 (2006), 187–190.
- Carlsson and Mémoli (2010) Gunnar Carlsson and Facundo Mémoli. 2010. Characterization, stability and convergence of hierarchical clustering methods. JMLR 11 (2010), 1425–1470.
- Chakka et al. (2003) V Prasad Chakka, Adam C Everspaugh, and Jignesh M Patel. 2003. Indexing large trajectory data sets with SETI. Ann Arbor 1001 (2003), 48109–2122.
- Chattopadhyay et al. (2013) Rita Chattopadhyay, Wei Fan, Ian Davidson, Sethuraman Panchanathan, and Jieping Ye. 2013. Joint transfer and batch-mode active learning. In International Conference on Machine Learning. 253–261.
- Chou and Juang (2003) Wu Chou and Biing-Hwang Juang. 2003. Pattern recognition in speech and language processing. CRC Press.
- Cudre-Mauroux et al. (2010) Philippe Cudre-Mauroux, Eugene Wu, and Samuel Madden. 2010. Trajstore: An adaptive storage system for very large trajectory data sets. In ICDE 2010. IEEE, 109–120.
- Davidson (2009) Ian Davidson. 2009. Knowledge Driven Dimension Reduction for Clustering.. In IJCAI. 1034–1039.
- Davidson et al. (2012) Ian Davidson, Sean Gilpin, and Peter B Walker. 2012. Behavioral event data and their analysis. Data Mining and Knowledge Discovery (2012), 1–19.
- Dittrich et al. (2009) Jens Dittrich, Lukas Blunschi, and Marcos Antonio Vaz Salles. 2009. Indexing moving objects using short-lived throwaway indexes. In Advances in Spatial and Temporal Databases. Springer, 189–207.
- Giannotti et al. (2006) Fosca Giannotti, Mirco Nanni, and Dino Pedreschi. 2006. Efficient mining of temporally annotated sequences. In Proc. SDM.
- Giannotti et al. (2007) Fosca Giannotti, Mirco Nanni, Fabio Pinelli, and Dino Pedreschi. 2007. Trajectory pattern mining. In ACM SIGKDD 2007. ACM, 330–339.
- Gilpin et al. (2013a) Sean Gilpin, Tina Eliassi-Rad, and Ian Davidson. 2013a. Guided learning for role discovery (glrd): framework, algorithms, and applications. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 113–121.
- Gilpin et al. (2013b) Sean Gilpin, Buyue Qian, and Ian Davidson. 2013b. Efficient hierarchical clustering of large high dimensional datasets. In ACM CIKM 2013. ACM, 1371–1380.
- Guttman (1984) Antonin Guttman. 1984. R-trees: a dynamic index structure for spatial searching. Vol. 14. ACM.
- Hartigan (1985) John A Hartigan. 1985. Statistical theory in clustering. Journal of classification 2, 1 (1985), 63–76.
- Jain et al. (1988) Anil K Jain, Richard C Dubes, and others. 1988. Algorithms for clustering data. Vol. 6. Prentice Hall, Englewood Cliffs.
- Kuo et al. (2015) Chia-Tung Kuo, James Bailey, and Ian Davidson. 2015. A Framework for Simplifying Trip Data into Networks via Coupled Matrix Factorization. In Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, 739–747.
- Levenshtein (1966) Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, Vol. 10. 707–710.
- Liu et al. (2008) Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. 2008. Isolation forest. In Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on. IEEE, 413–422.
- Liu et al. (2012) Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. 2012. Isolation-based anomaly detection. ACM Transactions on Knowledge Discovery from Data (TKDD) 6, 1 (2012), 3.
- Monreale et al. (2009) Anna Monreale, Fabio Pinelli, Roberto Trasarti, and Fosca Giannotti. 2009. Wherenext: a location predictor on trajectory pattern mining. In ACM SIGKDD. ACM, 637–646.
- Mouratidis et al. (2008) Kyriakos Mouratidis, Dimitris Papadias, and Spiros Papadimitriou. 2008. Tree-based partition querying: a methodology for computing medoids in large spatial datasets. VLDB 17, 4 (2008), 923–945.
- Murtagh et al. (2008) Fionn Murtagh, Geoff Downs, and Pedro Contreras. 2008. Hierarchical clustering of massive, high dimensional data sets by exploiting ultrametric embedding. SIAM Journal on Scientific Computing 30, 2 (2008), 707–730.
- Piorkowski et al. (2009) Michal Piorkowski, Natasa Sarafijanovic-Djukic, and Matthias Grossglauser. 2009. CRAWDAD data set epfl/mobility (v. 2009-02-24). (Feb. 2009).
- Qian and Davidson (2010) Buyue Qian and Ian Davidson. 2010. Semi-Supervised Dimension Reduction for Multi-Label Classification.. In AAAI, Vol. 10. 569–574.
- Rao et al. (2012) K Venkateswara Rao, A Govardhan, and KV Chalapati Rao. 2012. Spatiotemporal Data Mining: Issues, Tasks and Applications. Int. J. Computer Science Eng. Survey 3, 1 (2012), 39–52.
- Sun et al. (2006) Pei Sun, Sanjay Chawla, and Bavani Arunasalam. 2006. Mining for Outliers in Sequential Databases.. In SDM. SIAM, 94–105.
- Tao et al. (2003) Yufei Tao, Dimitris Papadias, and Jimeng Sun. 2003. The TPR*-tree: an optimized spatio-temporal access method for predictive queries. In VLDB, Vol. 29. 790–801.
- Wang et al. (2013) Xiang Wang, Buyue Qian, Jieping Ye, and Ian Davidson. 2013. Multi-objective multi-view spectral clustering via pareto optimization. In Proceedings of the 2013 SIAM International Conference on Data Mining. SIAM, 234–242.