# Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

## Abstract

This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters. The algorithm is derived via a low-variance asymptotic analysis of the Gibbs sampling algorithm for the DDPMM, and provides a hard clustering with convergence guarantees similar to those of the k-means algorithm. Empirical results from a synthetic test with moving Gaussian clusters and a test with real ADS-B aircraft trajectory data demonstrate that the algorithm requires orders of magnitude less computational time than contemporary probabilistic and hard clustering algorithms, while providing higher accuracy on the examined datasets.

## 1 Introduction

The Dirichlet process mixture model (DPMM) is a powerful tool for clustering data that enables the inference of an unbounded number of mixture components, and has been widely studied in the machine learning and statistics communities [?]. Despite its flexibility, it assumes the observations are exchangeable, and therefore that the data points have no inherent ordering that influences their labeling. This assumption is invalid for modeling temporally/spatially evolving phenomena, in which the order of the data points plays a principal role in creating meaningful clusters. The dependent Dirichlet process (DDP), originally formulated by MacEachern [?], provides a prior over such evolving mixture models, and is a promising tool for incrementally monitoring the dynamic evolution of the cluster structure within a dataset. More recently, a construction of the DDP built upon completely random measures [?] led to the development of the dependent Dirichlet process mixture model (DDPMM) and a corresponding approximate posterior inference Gibbs sampling algorithm. This model generalizes the DPMM by including birth, death, and transition processes for the clusters in the model.

The DDPMM is a Bayesian nonparametric (BNP) model, part of an ever-growing class of probabilistic models for which inference captures uncertainty in both the number of parameters and their values. While these models are powerful in their capability to capture complex structures in data without requiring explicit model selection, they suffer from some practical shortcomings. Inference techniques for BNPs typically fall into two classes: sampling methods (e.g., Gibbs sampling [?] or particle learning [?]) and optimization methods (e.g., variational inference [?] or stochastic variational inference [?]). Current methods based on sampling do not scale well with the size of the dataset [?]. Most optimization methods require analytic derivatives and the selection of an upper bound on the number of clusters a priori, where the computational complexity increases with that upper bound [?]. State-of-the-art techniques in both classes are not ideal for use in contexts where performing inference quickly and reliably on large volumes of streaming data is crucial for timely decision-making, such as autonomous robotic systems [?]. On the other hand, many classical clustering methods [?] scale well with the size of the dataset and are easy to implement, and advances have recently been made to capture the flexibility of Bayesian nonparametrics in such approaches [?]. However, as of yet there is no classical algorithm that captures dynamic cluster structure with the same representational power as the DDP mixture model.

This paper discusses the Dynamic Means algorithm, a novel hard clustering algorithm for spatio-temporal data derived from the low-variance asymptotic limit of the Gibbs sampling algorithm for the dependent Dirichlet process Gaussian mixture model. This algorithm captures the scalability and ease of implementation of classical clustering methods, along with the representational power of the DDP prior, and is guaranteed to converge to a local minimum of a k-means-like cost function. The algorithm is significantly more computationally tractable than Gibbs sampling, particle learning, and variational inference for the DDP mixture model in practice, while providing equivalent or better clustering accuracy on the examples presented. The performance and characteristics of the algorithm are demonstrated in a test on synthetic data, with a comparison to those of Gibbs sampling, particle learning and variational inference. Finally, the applicability of the algorithm to real data is presented through an example of clustering a spatio-temporal dataset of aircraft trajectories recorded across the United States.

## 2 Background

The Dirichlet process (DP) is a prior over mixture models in which the number of mixture components is not known a priori [?]. In general, we denote $D \sim \mathrm{DP}(\alpha, H)$, where $\alpha$ and $H$ are the *concentration parameter* and *base measure* of the DP, respectively. If $D \sim \mathrm{DP}(\alpha, H)$, then $D = \sum_{k=1}^{\infty} \pi_k \delta_{\theta_k}$, where the atoms $\theta_k \sim H$ and the weights $\pi_k$ are generated by a stick-breaking process with parameter $\alpha$ [?]. The reader is directed to [?] for a more thorough coverage of Dirichlet processes.

The dependent Dirichlet process (DDP) [?], an extension to the DP, is a prior over evolving mixture models. Given a Poisson process construction [?], the DDP essentially forms a Markov chain of DPs $(D_1, D_2, \dots)$, where the transitions are governed by a set of three stochastic operations: points may be added, removed, or moved during each step of the chain. Thus, the DPs become parameterized by time, denoted $D_t$. In slightly more detail, if $D_{t-1}$ is the DP at time step $t-1$, then the following procedure defines the generative model of $D_t$ conditioned on $D_{t-1}$:

**Subsampling**: Define a function $q : \Theta \to [0, 1]$. Then for each point $\theta$ in the support of $D_{t-1}$, sample a Bernoulli random variable $b_\theta \sim \mathrm{Be}(q(\theta))$. Set $D'$ to be the collection of points $\theta$ such that $b_\theta = 1$, and renormalize the weights; the result is again a Dirichlet process whose concentration parameter and base measure are rescaled by $q$.

**Transition**: Define a distribution $T(\theta' \mid \theta)$ on $\Theta$. For each point $\theta$ in the support of $D'$, sample $\theta' \sim T(\theta' \mid \theta)$, and set $D''$ to be the collection of transitioned points $\theta'$; the result is again a Dirichlet process, with base measure transformed by $T$.

**Superposition**: Sample a new, independent Dirichlet process $F$ from the innovation measure, and set $D_t$ to be the union of the points of $D''$ and $F$. Thus, $D_t$ is a random convex combination of $D''$ and $F$.
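The three operations can be illustrated on a finitely truncated, discrete approximation of a DP. The sketch below (all names, the truncation level, and the fixed mixing weight standing in for the random convex-combination weight are illustrative assumptions, not part of the construction in the cited work) applies subsampling, a Gaussian transition, and superposition to a weighted set of atoms:

```python
import random

def ddp_step(atoms, weights, q, transition_sd, innov_weight, innov_sampler, rng):
    """One Markov step of a finitely truncated dependent Dirichlet process:
    subsample the atoms, transition the survivors, then superpose new atoms.
    This is an illustrative discrete approximation, not the full
    random-measure construction."""
    # Subsampling: keep each atom with probability q(theta), renormalize weights.
    kept = [(a, w) for a, w in zip(atoms, weights) if rng.random() < q(a)]
    total = sum(w for _, w in kept)
    kept = [(a, w / total) for a, w in kept] if total > 0 else []
    # Transition: move each surviving atom under a Gaussian random walk.
    moved = [(rng.gauss(a, transition_sd), w) for a, w in kept]
    # Superposition: mix in newly sampled atoms; a fixed mixing weight stands
    # in for the random convex-combination weight of the true construction.
    n_new = 3  # truncation level for the innovation process (illustrative)
    new = [(innov_sampler(rng), 1.0 / n_new) for _ in range(n_new)]
    combined = ([(a, (1.0 - innov_weight) * w) for a, w in moved]
                + [(a, innov_weight * w) for a, w in new])
    return [a for a, _ in combined], [w for _, w in combined]
```

Note that the output weights still sum to one whenever at least one atom survives subsampling, mirroring the renormalization in the construction.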

If the DDP is used as a prior over a mixture model, these three operations allow new mixture components to arise over time, and old mixture components to exhibit dynamics and perhaps disappear over time. As this is covered thoroughly in [?], the mathematics of the underlying Poisson point process construction are not discussed in more depth in this work. However, an important result of using such a construction is the development of an explicit posterior for $D_t$ given observations of the points at time step $t$. For each point $\theta_k$ that was observed in $D_{t'}$ for some $t' \le t$, define: $n_{kt}$, the number of observations of point $\theta_k$ in time step $t$; $c_{kt}$, the number of past observations of point $\theta_k$ prior to time step $t$, i.e. $c_{kt} = \sum_{t' < t} n_{kt'}$; $q_{kt}$, the subsampling weight on point $\theta_k$ at time step $t$; and $\Delta t_k$, the number of time steps that have elapsed since point $\theta_k$ was last observed. Further, let $\nu_t$ be the measure for unobserved points at time step $t$. Then the posterior for $D_t$ is again a Dirichlet process, with a base measure that combines $\nu_t$ and the previously observed atoms $\delta_{\theta_k}$, each reweighted according to its subsampling weight and observation counts,

where $c_{kt} = 0$ for any point that was first observed during time step $t$. This posterior leads directly to the development of a Gibbs sampling algorithm for the DDP, whose low-variance asymptotics are discussed further below.

## 3 Asymptotic Analysis of the DDP Mixture

The dependent Dirichlet process Gaussian mixture model (DDP-GMM) serves as the foundation upon which the present work is built. The generative model of a DDP-GMM at time step $t$ is

$$\begin{aligned}
\{(\theta_{kt}, \pi_{kt})\}_{k=1}^{\infty} &\sim D_t \\
z_{it} &\sim \mathrm{Categorical}\left(\{\pi_{kt}\}_{k=1}^{\infty}\right) \\
y_{it} &\sim \mathcal{N}(\theta_{z_{it}t}, \sigma I)
\end{aligned}$$

where $\theta_{kt}$ is the mean of cluster $k$, $\pi_{kt}$ is the categorical weight for class $k$, $y_{it}$ is a $d$-dimensional observation vector, $z_{it}$ is a cluster label for observation $i$, and $\mu$ is the base measure from equation (1). Throughout the rest of this paper, the subscript $kt$ refers to quantities related to cluster $k$ at time step $t$, and the subscript $it$ refers to quantities related to observation $i$ at time step $t$.

The Gibbs sampling algorithm for the DDP-GMM iterates between sampling labels $z_{it}$ for datapoints given the set of parameters $\{\theta_{kt}\}$, and sampling parameters $\theta_{kt}$ given each group of data $\{y_{it} : z_{it} = k\}$. Assuming the transition model $T$ is Gaussian and the subsampling function $q$ is constant, the functions and distributions used in the Gibbs sampling algorithm are: the prior over cluster parameters, $\theta \sim \mathcal{N}(\phi, \rho I)$; the likelihood of an observation given its cluster parameter, $y_{it} \sim \mathcal{N}(\theta_{kt}, \sigma I)$; the distribution over the transitioned cluster parameter given its last known location after $\Delta t_k$ time steps, $\theta_{kt} \sim \mathcal{N}(\theta_{k(t - \Delta t_k)}, \xi \Delta t_k I)$; and the constant subsampling function $q(\theta) = q \in (0, 1)$. Given these functions and distributions, the low-variance asymptotic limits (i.e. $\sigma \to 0$) of these two steps are discussed in the following sections.

### 3.1 Setting Labels Given Parameters

In the label sampling step, a datapoint can either create a new cluster, join a current cluster, or revive an old, transitioned cluster. Using the distributions defined previously, the label assignment probabilities are

where the factor $q^{\Delta t_k}$ appears due to the fact that $q$ is constant over $\Theta$, and where $\alpha_\nu$ is the concentration parameter for the innovation process, $\nu_t$. The low-variance asymptotic limit of this label assignment step yields meaningful assignments as long as $\alpha_\nu$, $\xi$, and $q$ vary appropriately with $\sigma$; thus, setting $\alpha_\nu$, $\xi$, and $q$ as follows (where $\lambda$, $\tau$, and $Q$ are positive constants):

$$\alpha_\nu = \left(1 + \frac{\rho}{\sigma}\right)^{d/2} \exp\left(-\frac{\lambda}{2\sigma}\right), \qquad \xi = \tau\sigma, \qquad q = \exp\left(-\frac{Q}{2\sigma}\right)$$

yields the following assignments in the limit as $\sigma \to 0$:

$$z_{it} = \operatorname*{argmin}_{k}
\begin{cases}
\|y_{it} - \theta_{kt}\|^2 & \text{if cluster } k \text{ is active in this time step} \\
Q\Delta t_k + \dfrac{\gamma_{kt}}{\gamma_{kt} + 1}\,\|y_{it} - \theta_{kt}\|^2 & \text{if cluster } k \text{ is old, with } \gamma_{kt} = \left(w_k^{-1} + \tau\Delta t_k\right)^{-1} \\
\lambda & \text{if } k \text{ is a new cluster}
\end{cases}$$

In this assignment step, $Q\Delta t_k$ acts as a cost penalty for reviving old clusters that increases with the time since the cluster was last seen, $\tau$ (through the factor $\gamma_{kt}/(\gamma_{kt}+1) < 1$) acts as a cost reduction to account for the possible motion of clusters since they were last instantiated, and $\lambda$ acts as a cost penalty for introducing a new cluster.
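As a concrete illustration, the assignment step can be sketched as a minimum-cost choice over three options. The helper below is a schematic reading of the update for scalar observations; the exact form of the revival weighting (the deflated old-cluster weight `gamma`) follows the Kalman-filter-style interpretation developed in Section 3.2 and should be treated as an assumption of this sketch:

```python
def assign_label(y, active, old, lam, Q, tau):
    """Choose the minimum-cost action for a single scalar observation y:
    join an active cluster, revive an old one, or start a new cluster.
    `active` maps cluster id -> mean; `old` maps cluster id ->
    (mean, weight w, time steps unobserved dt)."""
    best_cost, best_label = lam, "new"            # fixed new-cluster penalty
    for k, mean in active.items():
        cost = (y - mean) ** 2                    # plain squared distance
        if cost < best_cost:
            best_cost, best_label = cost, k
    for k, (mean, w, dt) in old.items():
        gamma = 1.0 / (1.0 / w + tau * dt)        # old weight deflated by elapsed time
        cost = Q * dt + (gamma / (gamma + 1.0)) * (y - mean) ** 2
        if cost < best_cost:
            best_cost, best_label = cost, k
    return best_label, best_cost
```

A nearby active cluster wins over a marginal revival, a far-away observation pays the fixed penalty and starts a new cluster, and an old cluster is revived only when its accumulated penalty is still below that fixed cost.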

### 3.2 Setting Parameters Given Labels

In the parameter sampling step, the parameter $\theta_{kt}$ of each currently active cluster is sampled from its conditional posterior given the observations assigned to that cluster.

There are two cases to consider when setting a parameter $\theta_{kt}$. Either the cluster is new in the current time step and has no past observations, or the cluster was previously created, disappeared for some amount of time, and then was revived in the current time step.

**New Cluster** Suppose cluster $k$ is being newly created. In this case, there is no past information, so the prior over $\theta_{kt}$ is the base measure. Using the fact that a normal prior is conjugate to a normal likelihood, the closed-form posterior for $\theta_{kt}$ is

Then letting $\sigma \to 0$, the posterior concentrates at $\theta_{kt} = \bar y_{kt}$, where $\bar y_{kt}$ is the mean of the observations assigned to cluster $k$ in the current time step.

**Revived Cluster** Suppose there are $\Delta t_k$ time steps during which cluster $k$ was not observed, but there are now $n_{kt}$ data points with mean $\bar y_{kt}$ assigned to it in this time step. In this case, the prior over $\theta_{kt}$ is the posterior from when the cluster was last observed, propagated through $\Delta t_k$ steps of the Gaussian transition model.

Again using conjugacy of normal likelihoods and priors,

Similarly to the label assignment step, let $\gamma_{kt} = \left(w_k^{-1} + \tau\Delta t_k\right)^{-1}$. Then, as long as $w_k$ tracks the effective number of observations underlying the previous estimate of the cluster parameter (which holds if equation (3) is used to recursively keep track of the parameter posterior), taking the asymptotic limit of this as $\sigma \to 0$ yields:

that is to say, the revived $\theta_{kt}$ is a weighted average of estimates using current time step data and previous time step data. The parameter $\tau$ controls how much the current data is favored: as $\tau$ increases, the weight on current data increases, which is explained by the fact that our uncertainty in where the old cluster center transitioned to increases with $\tau$. It is also noted that if $\tau = 0$, this reduces to a simple weighted average using the amount of data collected as the weights.

**Combined Update** Combining the updates for new cluster parameters and old transitioned cluster parameters yields a recursive update scheme:

$$\theta_{kt} = \frac{\gamma_{kt}\,\theta_{kt'} + n_{kt}\,\bar y_{kt}}{\gamma_{kt} + n_{kt}}, \qquad w_{kt} = \gamma_{kt} + n_{kt},$$

with $\gamma_{kt} = 0$ for a newly created cluster and $t'$ the time step at which cluster $k$ was last observed,

where the recursion for each cluster is initialized at the time step when that cluster is first created. An interesting interpretation of this update is that it behaves like a standard Kalman filter, in which $w_{kt}^{-1}$ serves as the current estimate variance, $\tau$ serves as the process noise variance, and $n_{kt}$ serves as the inverse of the measurement variance.
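Under this interpretation, the combined update can be sketched as a scalar Kalman-style recursion. The function below is illustrative (the names and the scalar simplification are assumptions of this sketch); with `tau = 0` it reduces to a count-weighted average, and with `w_old = 0` it reduces to the new-cluster case:

```python
def update_cluster(theta_old, w_old, dt, n, ybar, tau):
    """Recursive parameter update for a cluster revived after dt unobserved
    steps, given n new observations with mean ybar. Behaves like a scalar
    Kalman filter: 1/w is the estimate variance, tau the per-step process
    noise, and n the inverse measurement variance. For a brand-new cluster,
    pass w_old = 0 (no prior information), which yields theta = ybar."""
    if w_old == 0:
        gamma = 0.0                               # new cluster: no prior weight
    else:
        gamma = 1.0 / (1.0 / w_old + tau * dt)    # deflate old weight by elapsed time
    theta = (gamma * theta_old + n * ybar) / (gamma + n)
    w = gamma + n                                 # new weight = deflated old + new data
    return theta, w
```

Increasing `tau` shrinks `gamma`, so the estimate leans further toward the current batch mean, matching the discussion above.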

## 4 The Dynamic Means Algorithm

In this section, some further notation is required for brevity:

where $Y_t$ and $Z_t$ are the sets of observations and labels at time step $t$, the set of currently active clusters contains both clusters that are new in this time step and clusters that have been revived after some number of unobserved steps, and the set of old cluster information (means, weights, and elapsed times) is carried forward from past time steps.

### 4.1 Algorithm Description

As shown in the previous section, the low-variance asymptotic limit of the DDP Gibbs sampling algorithm is a deterministic observation label update (2) followed by a deterministic, weighted least-squares parameter update (4). Inspired by the original k-means algorithm, applying these two updates iteratively yields an algorithm which clusters a set of observations at a single time step, given cluster means and weights from past time steps (Algorithm ?). Applying Algorithm ? to a sequence of batches of data yields a clustering procedure that is able to track a set of dynamically evolving clusters (Algorithm ?), and allows new clusters to emerge and old clusters to be forgotten. While this is the primary application of Algorithm ?, the sequence of batches need not be a temporal sequence. For example, Algorithm ? may be used as an any-time clustering algorithm for large datasets, where the sequence of batches is generated by selecting random subsets of the full dataset.

The AssignParams function is exactly the update from equation (4), applied to each currently active cluster. Similarly, the AssignLabels function applies the update from equation (2) to each observation; however, in the case that a new cluster is created or an old one is revived by an observation, AssignLabels also creates a parameter for that new cluster based on the parameter update equation (4) with that single observation. Note that the performance of the algorithm depends on the order in which AssignLabels assigns labels. Multiple random restarts of the algorithm with different assignment orders may be used to mitigate this dependence. The UpdateC function is run after clustering the observations from each time step, and constructs the old cluster information for the next time step by resetting the elapsed time $\Delta t_k$ for any new or revived cluster, and by incrementing $\Delta t_k$ for any old cluster that was not revived:
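A minimal sketch of this bookkeeping step follows (the dictionary layout and function name are assumptions of this sketch, not the paper's notation):

```python
def update_old_clusters(active, old):
    """After clustering one batch: clusters used this time step become 'old'
    with an elapsed time of one step; old clusters that were not revived have
    their elapsed-time counters incremented. `active` maps cluster id ->
    (theta, w); `old` maps cluster id -> (theta, w, dt)."""
    new_old = {}
    for k, (theta, w, dt) in old.items():
        if k not in active:                  # not revived: age it by one step
            new_old[k] = (theta, w, dt + 1)
    for k, (theta, w) in active.items():     # new or revived: reset the clock
        new_old[k] = (theta, w, 1)
    return new_old
```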

An important question is whether this algorithm is guaranteed to converge while clustering data in each time step. Indeed, it is; Theorem ? shows that a particular cost function monotonically decreases under the label and parameter updates (2) and (4) at each time step. Since the cost function is bounded below by zero, and it is monotonically decreased by Algorithm ?, the algorithm converges. Note that Dynamic Means is only guaranteed to converge to a local optimum, similarly to the k-means [?] and DP-Means [?] algorithms.

The cost function comprises a number of components for each currently active cluster: a penalty for new clusters based on $\lambda$, a penalty for old clusters based on $Q$ and $\Delta t_k$, and finally a prior-weighted sum of squared distance costs for all the observations in cluster $k$. It is noted that for new clusters, $\gamma_{kt} = 0$ since there are no past observations, so the least-squares cost is unweighted. The AssignParams function calculates this cost function in each iteration of Algorithm ?, and the algorithm terminates once the cost function does not decrease during an iteration.

### 4.2 Reparameterizing the Algorithm

In order to use the Dynamic Means algorithm, there are three free parameters to select: $\lambda$, $Q$, and $\tau$. While $\lambda$ represents how far an observation can be from a cluster before it is placed in a new cluster, and thus can be tuned intuitively, $Q$ and $\tau$ are not so straightforward. The parameter $Q$ represents a conceptual added distance from any data point to a cluster for every time step that the cluster is not observed. The parameter $\tau$ represents a conceptual reduction of distance from any data point to a cluster for every time step that the cluster is not observed. How these two quantities affect the algorithm, and how they interact with the setting of $\lambda$, is hard to judge directly.

Instead of picking $Q$ and $\tau$ directly, the algorithm may be reparameterized by picking two more interpretable parameters, $T_Q$ and $k_\tau$, and, given a choice of $\lambda$, setting $Q = \lambda / T_Q$, with $\tau$ determined by $k_\tau$ and $T_Q$.

If $Q$ and $\tau$ are set in this manner, $T_Q$ represents the number (possibly fractional) of time steps a cluster can be unobserved before the label update (2) will never revive that cluster, and $k_\tau$ represents the maximum squared distance away from a cluster center such that after a single time step, the label update (2) will revive that cluster. As $T_Q$ and $k_\tau$ are specified in terms of concrete algorithmic behavior, they are intuitively easier to set than $Q$ and $\tau$.
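The first of these interpretations follows directly from the cost structure: the revival penalty grows linearly as $Q\Delta t_k$ while the new-cluster cost is fixed at $\lambda$, so revival becomes impossible once $Q\Delta t_k$ reaches $\lambda$. A small sketch of this relationship (function names are illustrative):

```python
def reparameterize_Q(lam, T_Q):
    """Given the new-cluster penalty lam and a desired revival horizon T_Q
    (time steps a cluster may go unobserved and still be revivable), return
    the per-step revival penalty Q = lam / T_Q."""
    return lam / T_Q

def revival_possible(lam, Q, dt):
    # The revival cost is at least Q * dt (plus a nonnegative distance term),
    # so revival can only be chosen over a new cluster if Q * dt < lam.
    return Q * dt < lam
```

With this choice, a cluster unobserved for fewer than `T_Q` steps can still be revived, and one unobserved for `T_Q` steps or more never is.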

## 5 Related Work

Prior k-means clustering algorithms that determine the number of clusters present in the data have primarily involved a method for iteratively modifying k using various statistical criteria [?]. In contrast, this work derives this capability from a Bayesian nonparametric model, similarly to the DP-Means algorithm [?]. In this sense, the relationship between the Dynamic Means algorithm and the dependent Dirichlet process [?] is exactly that between the DP-Means algorithm and the Dirichlet process [?], where the Dynamic Means algorithm may be seen as an extension to the DP-Means that handles sequential data with time-varying cluster parameters. MONIC [?] and MC3 [?] have the capability to monitor time-varying clusters; however, these methods require datapoints to be identifiable across timesteps, and determine cluster similarity across timesteps via the commonalities between label assignments. The Dynamic Means algorithm does not require such information, and tracks clusters essentially based on similarity of the parameters across timesteps. Evolutionary clustering [?], similar to Dynamic Means, minimizes an objective consisting of a cost for clustering the present data set and a cost related to the comparison between the current clustering and past clusterings. The present work can be seen as a theoretically founded extension of this class of algorithm that provides methods for automatic and adaptive prior weight selection, forming correspondences between old and current clusters, and deciding when to introduce new clusters. Finally, some sequential Monte Carlo methods (e.g. particle learning [?] or multi-target tracking [?]) can be adapted for use in the present context, but suffer the drawbacks typical of particle filtering methods.

## 6 Applications

### 6.1 Synthetic Gaussian Motion Data

In this experiment, moving Gaussian clusters were generated synthetically over a period of 100 time steps. In each step, there was some number of clusters, each having 15 data points. The data points were sampled from a symmetric Gaussian distribution with a standard deviation of 0.05. Between time steps, the cluster centers moved randomly, with displacements sampled from the same distribution. At each time step, each cluster had a 0.05 probability of being destroyed.
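A data-generation process matching this description can be sketched as follows (the replacement of dead clusters to keep the count constant is a simplifying assumption of this sketch, since the actual experiment varies the number of clusters):

```python
import random

def generate_moving_clusters(n_steps=100, n_clusters=5, pts_per_cluster=15,
                             sd=0.05, death_prob=0.05, seed=0):
    """Generate batches of (true_label, (x, y)) points from Gaussian clusters
    whose centers take Gaussian random-walk steps between batches; each
    cluster dies with probability death_prob per step and is replaced by a
    fresh one so the cluster count stays constant (a simplification)."""
    rng = random.Random(seed)
    centers = {k: (rng.random(), rng.random()) for k in range(n_clusters)}
    next_id = n_clusters
    batches = []
    for _ in range(n_steps):
        batch = []
        for k, (cx, cy) in centers.items():
            for _ in range(pts_per_cluster):
                batch.append((k, (rng.gauss(cx, sd), rng.gauss(cy, sd))))
        batches.append(batch)
        # random-walk the centers with the same standard deviation
        centers = {k: (rng.gauss(cx, sd), rng.gauss(cy, sd))
                   for k, (cx, cy) in centers.items()}
        # destroy clusters with probability death_prob, replacing each
        for k in list(centers):
            if rng.random() < death_prob:
                del centers[k]
                centers[next_id] = (rng.random(), rng.random())
                next_id += 1
    return batches
```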

This data was clustered with Dynamic Means (with 3 random assignment ordering restarts), DDP-GMM Gibbs sampling [?], variational inference [?], and particle learning [?] on a computer with an Intel i7 processor and 16GB of memory. First, the number of clusters was fixed, and the parameter space of each algorithm was searched for the best possible cluster label accuracy (taking into account correct cluster tracking across time steps). The results of this parameter sweep for the Dynamic Means algorithm, with multiple trials at each parameter setting, are shown in Figures 1–3. Figures 1 and 2 show how the average clustering accuracy varies with the parameters after fixing either of the remaining two parameters to its value at the maximum-accuracy parameter setting over the full space. The Dynamic Means algorithm had a similar robustness with respect to variations in its parameters as the comparison algorithms. The histogram in Figure 3 demonstrates that the clustering speed is robust to the setting of parameters. The speed of Dynamic Means, coupled with the smoothness of its performance with respect to its parameters, makes it well suited for automatic tuning [?].

Using the best parameter setting for each algorithm, the data as described above were clustered in 50 trials with a varying number of clusters present in the data. For the Dynamic Means algorithm, the best parameter values from the sweep were used, and the algorithm was again given 3 attempts with random labeling assignment orders, where the lowest-cost solution of the 3 was picked to proceed to the next time step. For the other algorithms, the best parameter values found were likewise used, with a Gaussian transition distribution variance of 0.05. The number of samples for the Gibbs sampling algorithm was 5000, with one sample recorded for every 5; the number of particles for the particle learning algorithm was 100; and the variational inference algorithm was run to a fixed convergence tolerance with the maximum number of iterations set to 5000.

In Figures 4 and 5, the labeling accuracy and clustering time (respectively) for the algorithms are shown. The sampling algorithms were handicapped to generate Figure 4: the best posterior sample in terms of labeling accuracy was selected at each time step, which required knowledge of the true labeling. Further, the accuracy computation included enforcing consistency across time steps, to allow tracking of individual cluster trajectories. If this is not enforced (i.e. accuracy considers each time step independently), the other algorithms provide accuracies more comparable to those of the Dynamic Means algorithm. This effect is demonstrated in Figure 6, which shows the time/accuracy tradeoff for Gibbs sampling (varying the number of samples) and Dynamic Means (varying the number of restarts). These examples illustrate that Dynamic Means outperforms standard inference algorithms in both label accuracy and computation time for cluster tracking problems.

### 6.2 Aircraft Trajectory Clustering

In this experiment, the Dynamic Means algorithm was used to find the typical spatial and temporal patterns in the motions of commercial aircraft. Automatic dependent surveillance-broadcast (ADS-B) data, including plane identification, timestamp, latitude, longitude, heading, and speed, was collected from all transmitting planes across the United States during the week from 2013-3-22 1:30:0 to 2013-3-28 12:0:0 UTC. Individual ADS-B messages were then connected together based on their plane identification and timestamp to form trajectories, and erroneous trajectories were filtered out based on reasonable spatial/temporal bounds, yielding 17,895 unique trajectories. For each trajectory, a Gaussian process was trained using the latitude and longitude of each ADS-B point along the trajectory as the inputs and the north and east components of plane velocity at those points as the outputs. Next, the mean latitudinal and longitudinal velocities from the Gaussian process were queried for each point on a regular lattice across the USA (10 latitudes and 20 longitudes), and used to create a 400-dimensional feature vector for each trajectory. Of the resulting 17,895 feature vectors, 600 were hand-labeled (each label including a confidence weight). The feature vectors were clustered using the DP-Means algorithm on the entire dataset in a single batch, and using Dynamic Means / DDPGMM Gibbs sampling (with 50 samples) with half-hour takeoff-window batches.
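The feature construction can be illustrated schematically. The sketch below substitutes a simple inverse-distance-weighted interpolation for the Gaussian-process mean (a deliberate simplification; the lattice bounds are also assumptions), but shows how a 10 × 20 lattice of two-component velocity queries yields a 400-dimensional vector:

```python
def trajectory_features(points, n_lat=10, n_lon=20):
    """Build a fixed-length feature vector for one trajectory. `points` is a
    list of (lat, lon, v_north, v_east) samples along the trajectory.
    Inverse-distance weighting stands in for the Gaussian-process posterior
    mean used in the paper; the lattice bounds are rough continental-US
    values chosen for illustration."""
    lat_min, lat_max = 24.0, 50.0      # assumed latitude bounds
    lon_min, lon_max = -125.0, -66.0   # assumed longitude bounds
    feats = []
    for i in range(n_lat):
        lat = lat_min + (lat_max - lat_min) * i / (n_lat - 1)
        for j in range(n_lon):
            lon = lon_min + (lon_max - lon_min) * j / (n_lon - 1)
            num_n = num_e = den = 0.0
            for (plat, plon, vn, ve) in points:
                w = 1.0 / ((plat - lat) ** 2 + (plon - lon) ** 2 + 1e-6)
                num_n += w * vn
                num_e += w * ve
                den += w
            feats.extend([num_n / den, num_e / den])
    return feats  # length 2 * n_lat * n_lon = 400 by default
```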

The results of this exercise are provided in Figure 8 and Table ?. Figure 8 shows the spatial and temporal properties of the 12 most popular clusters discovered by Dynamic Means, demonstrating that the algorithm successfully identified major flows of commercial aircraft across the US. Table ? corroborates these qualitative results with a quantitative comparison of the computation time and accuracy for the three algorithms tested over 20 trials. The confidence-weighted accuracy was computed by taking the ratio between the sum of the weights for correctly labeled points and the sum of all weights. The DDPGMM Gibbs sampling algorithm was handicapped as described in the synthetic experiment section. Of the three algorithms, Dynamic Means provided the highest labeling accuracy, while requiring orders of magnitude less computation time than both DP-Means and DDPGMM Gibbs sampling.

## 7 Conclusion

This work developed a clustering algorithm for batch-sequential data containing temporally evolving clusters, derived from a low-variance asymptotic analysis of the Gibbs sampling algorithm for the dependent Dirichlet process mixture model. Synthetic and real data experiments demonstrated that the algorithm requires orders of magnitude less computational time than contemporary probabilistic and hard clustering algorithms, while providing higher accuracy on the examined datasets. The speed of inference coupled with the convergence guarantees provided yield an algorithm which is suitable for use in time-critical applications, such as online model-based autonomous planning systems.

#### Acknowledgments

This work was supported by NSF award IIS-1217433 and ONR MURI grant N000141110688.