Optimal Transport Based Change Point Detection and Time Series Segment Clustering



Two common problems in time series analysis are the decomposition of the data stream into disjoint segments that are each in some sense “homogeneous” - a problem known as Change Point Detection (CPD) - and the grouping of similar nonadjacent segments, a problem that we call Time Series Segment Clustering (TSSC). Building upon recent theoretical advances characterizing the limiting distribution-free behavior of the Wasserstein two-sample test (Ramdas et al. 2015), we propose a novel algorithm for unsupervised, distribution-free CPD which is amenable to both offline and online settings. We also introduce a method to mitigate false positives in CPD and address TSSC by using the Wasserstein distance between the detected segments to build an affinity matrix to which we apply spectral clustering. Results on both synthetic and real data sets show the benefits of the approach.


Kevin C. Cheng, Shuchin Aeron, Michael C. Hughes, Erika Hussey, Eric L. Miller
Tufts University, CCDC-Soldier Center
{first name}.{last name}@tufts.edu


Index terms: change point detection, time series segment clustering, Wasserstein two-sample test, optimal transport.

1 Introduction

Change point detection (CPD) is a fundamental problem in data analysis with implications in many real-world applications including the analysis of financial [3], electrocardiogram (ECG) [8], and human activity data [15]. Given a collection of change points, time series segment clustering (TSSC) seeks to group nonadjacent periods of activity which are, in some sense, “similar,” in an unsupervised manner. Applications here overlap with those of CPD [1].

In this paper, we focus on the use of statistical methods for CPD which are broadly classified as either parametric (model-based) or non-parametric [25, 2]. The problem formulation employed by the majority of methods takes the observations as a sequence of random variables whose distribution changes abruptly at unknown points in time. The processing goal for CPD is to determine when the switches occur and, in those instances where TSSC is required, use a similarity measure to cluster like segments.

Parametric methods employ a specific model for the dynamics of the time series (either assumed [4] or learned from data [14]) and then make use of decision theory to identify change points. Classically, ARMA-type models and their state-space generalizations were the basis for parametric efforts starting in [26] with recent work focusing on hierarchical models such as switching linear-dynamical systems (SLDS) [17]. Generally, parametric methods are effective when the modelling assumptions hold. For example, SLDS assumes geometric state duration distributions and Gaussian observation models. When these assumptions are not applicable, performance will likely suffer.

When the dynamics or observations cannot be easily modeled, we can consider distribution-free methods that do not assume any particular parametric family of distributions. Change points are then estimated from sample distributions using density-ratio estimates [16, 13] or through two-sample tests like maximum mean discrepancy (MMD) [10], which was recently used for non-parametric CPD [15].

Similar to CPD, parametric TSSC methods have been explored using ARMA based models [6] or HMMs [18]. Non-parametric TSSC methods generally use alternate representations of time series such as frequency-based wavelet decompositions [11] or distribution-based methods [7].

In this work, we contribute a new set of non-parametric CPD and TSSC methods based on recent statistical results in the theory of Optimal Transport (OT). Assuming independent and identically distributed (IID) data, Ramdas et al. [21] provide a theoretical analysis of the asymptotic distribution of an OT-based two-sample test under the null hypothesis for deciding whether two empirical distributions arise from the same underlying distribution. We use this result as the basis for a sliding window test for identifying change points in a scalar time series. Another novel aspect of our method is the development of a statistically-derived “matched filter” for post-processing our OT statistic to reduce false positives. Given the identified change points, we develop an OT-based spectral clustering scheme for TSSC.

Figure 1: The time series can be decomposed at change points into 4 segments represented by 3 actions. Sample windows of time series data (left) are represented as point clouds (center), each of which corresponds to a single point in the space of all probability measures (right), an estimate of the process distribution. The proposed method uses the Wasserstein two-sample test between adjacent windows on each dimension independently as the change point test statistic. When both windows lie within a single segment, they belong to similar distributions and no change is detected. However, when the windows span a change point, the distributions of the adjacent windows fall in different clusters, resulting in a high CPD statistic.

To organize this paper, we start with an overview of optimal transport concepts followed by problem formulations for CPD and TSSC. We then detail our proposed method and evaluate our techniques on toy and real-world data sets. We show improved precision and recall for CPD (as summarized in F1 scores) compared to the state of the art. We also show improved label accuracy in TSSC for human activity data.

2 Optimal Transport Background

Given two probability distributions $\mu, \nu \in \mathcal{P}(\mathbb{R}^d)$, the 2-Wasserstein distance, or earth mover's distance, is defined as the minimum expected squared Euclidean cost required to transport $\mu$ to $\nu$. Formally,

$$W_2^2(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \mathbb{E}_{(x, y) \sim \pi}\left[\|x - y\|_2^2\right],$$

where $\Pi(\mu, \nu)$ denotes the set of all joint distributions with marginals $\mu$ and $\nu$. For discrete measures, it is well-known that this minimization is a linear program. Further, $W_2$ is a metric on the set of probability distributions [20] and metrizes weak convergence of probability measures.
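In one dimension, the optimal coupling simply matches sorted samples (i.e., quantiles), so the empirical $W_2$ can be computed by sorting rather than by solving a linear program. A minimal sketch (the function name `w2_empirical_1d` is ours, and equal sample sizes are assumed):

```python
import numpy as np

def w2_empirical_1d(x, y):
    """2-Wasserstein distance between two 1-D empirical distributions.

    The optimal transport plan in 1-D matches the i-th smallest sample of x
    to the i-th smallest sample of y (equal sample sizes assumed here).
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal sample sizes assumed for this sketch"
    return float(np.sqrt(np.mean((x - y) ** 2)))

# Shifting an empirical distribution by a constant c shifts W2 by |c|.
x = np.arange(10.0)
d = w2_empirical_1d(x, x + 3.0)
```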

We employ a distribution-free, non-parametric Wasserstein two-sample test (W2T) as a discrepancy measure between two sets of points. To this end, we note the following:

Theorem 2.1

(From [21]) Under the null hypothesis $P = Q$, given empirical CDFs $F_n$, $G_m$ consisting of $n$, $m$ IID samples from scalar distributions $P$, $Q$, the W2T statistic, scaled by $\frac{nm}{n+m}$, converges in distribution to $\int_0^1 \mathbb{B}^2(t)\,dt$,

where $\mathbb{B}(t)$ denotes the standard Brownian bridge. From [22], this limiting random variable has mean $1/6$ and variance $1/45$, and we reject the null with confidence $1-\alpha$ using a threshold given by the corresponding upper quantile of the limiting distribution.
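As an illustration only, a scaled two-sample statistic in the spirit of Theorem 2.1 can be sketched as follows; the pooled-rank (CDF) transform and the $\frac{nm}{n+m}$ scaling are our illustrative choices here, not necessarily the exact construction analyzed in [21]:

```python
import numpy as np

def w2t_statistic(x, y):
    """Sketch of a scaled, rank-based Wasserstein two-sample statistic.

    Pooled ranks map both samples onto [0, 1] (a distribution-free
    transform); the squared 1-D W2 between the rank-transformed samples
    is then scaled by n*m/(n+m). Equal sample sizes are assumed.
    """
    n, m = len(x), len(y)
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() / (n + m - 1)  # ranks scaled to [0, 1]
    rx, ry = np.sort(ranks[:n]), np.sort(ranks[n:])
    w2_sq = np.mean((rx - ry) ** 2)                   # quantile coupling
    return n * m / (n + m) * w2_sq

# Under the null (same distribution) the statistic stays small; under a
# mean shift it grows.
rng = np.random.default_rng(0)
same = w2t_statistic(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
diff = w2t_statistic(rng.normal(0, 1, 500), rng.normal(3, 1, 500))
```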

3 Problem Formulation

As detailed in Fig. 1 and throughout, we consider a time series $\{x_t\}_{t=1}^{T}$, $x_t \in \mathbb{R}^d$, where the data consists of $N$ distinct time segments $[T_i, T_{i+1})$, with $T_1 = 1 < T_2 < \dots < T_{N+1} = T + 1$, such that within each time segment, the samples $x_t$, $t \in [T_i, T_{i+1})$, are IID samples from one of $K$ unknown distributions, where we assume here that $K$ is known a priori. The problem of change point detection (CPD) is to estimate the change points $\{T_i\}_{i=2}^{N}$, and the problem of time series segment clustering (TSSC) is to cluster the $N$ segments into $K$ classes.
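For concreteness, the generative setting above can be simulated with a toy sketch; `simulate_piecewise_iid` and the two-distribution dictionary below are illustrative names, not part of the proposed method:

```python
import numpy as np

def simulate_piecewise_iid(segment_labels, segment_len, dists, seed=0):
    """Toy generator for the CPD/TSSC setting: each segment draws IID
    samples from one of K distributions, and change points sit at the
    segment boundaries."""
    rng = np.random.default_rng(seed)
    x = np.concatenate([dists[k](rng, segment_len) for k in segment_labels])
    true_cps = [i * segment_len for i in range(1, len(segment_labels))]
    return x, true_cps

# K = 2 distributions with the same mean but different scales, so a
# mean-based detector would miss the changes while a distributional one
# should not.
dists = {0: lambda rng, n: rng.normal(0.0, 1.0, n),
         1: lambda rng, n: rng.normal(0.0, 3.0, n)}
x, cps = simulate_piecewise_iid([0, 1, 0], 200, dists)
```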

4 Proposed Method

4.1 Change Point Detection

Figure 2: CPD and TSSC results from HASC2016-PAC data (black, left axis). For CPD, we plot both unfiltered (thin purple) and match filtered (thick purple) change point statistics (right axis). The left subplot shows how the matched filter removes false positives and improves localization of the change point. For TSSC, the bottom row shows our method’s assigned cluster labels, which make only one mistake relative to ground truth (GT) by grouping the stair-down segment with the last walk segment.

Given time series $\{x_t\}$, we define two empirical probability density functions (PDFs) at each time $t$, generated from the sum of Dirac delta functions supported on the windows of $n$ samples collected before and after $t$. After transforming each PDF into a cumulative distribution function (CDF), we compute a change point statistic from the Wasserstein two-sample test (W2T) between the CDFs of the two windows:

$$W(t) = W_2\left(\hat{F}_{[t-n,\,t)},\ \hat{F}_{[t,\,t+n)}\right),$$

where $\hat{F}_{[a, b)}$ denotes the empirical CDF of the samples with indices in $[a, b)$.
The nominal, offline approach to CPD is to label local maxima of the W2T statistic that exceed some threshold parameter as change points [15]. Empirical analysis on both simulated and real data shows that this is problematic: Fig. 2 indicates the presence of spurious local maxima leading to a large number of false alarms and ambiguity in the change point locations. Moreover, the sliding-window nature of the processing causes a change point at time $\tau$ to create an extended signature in the statistic over the interval $[\tau - n, \tau + n]$, where $n$ is the window size.

These observations suggest the benefit of a matched filtering approach to reduce the spurious maxima and better localize true changes. In Fig. 3 we estimate the shape of this signature empirically by averaging over ensembles of simulated IID data with known change points separating samples from different pairs of distributions. From this plot, we observe that the structure of this function across a number of distributional changes is remarkably consistent. The theoretical analysis and discussion of this filter is left to future work. We derive the filter by removing the bias and normalizing by a constant such that the peaks of the signal are preserved. Change points are then taken as the set of local maxima at which the convolution of the matched filter with the raw W2T statistic exceeds a threshold.
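The sliding-window statistic and the matched-filter post-processing can be sketched as follows; the triangular (Bartlett) kernel is only a stand-in for the empirically estimated filter of Fig. 3, and all names here are ours:

```python
import numpy as np

def sliding_w2_stat(x, n):
    """1-D W2 between the n samples before and after each time t."""
    T = len(x)
    stat = np.zeros(T)
    for t in range(n, T - n):
        a, b = np.sort(x[t - n:t]), np.sort(x[t:t + n])
        stat[t] = np.sqrt(np.mean((a - b) ** 2))
    return stat

def detect_change_points(stat, n, thresh):
    """Matched-filter the raw statistic, then keep local maxima over a
    threshold. Bias removal (zero-mean kernel) suppresses the roughly
    constant noise floor of the raw statistic."""
    h = np.bartlett(2 * n + 1)
    h = h / h.sum()          # unit area
    h = h - h.mean()         # remove bias
    f = np.convolve(stat, h, mode="same")
    peaks = [t for t in range(1, len(f) - 1)
             if f[t] > thresh and f[t] >= f[t - 1] and f[t] >= f[t + 1]]
    return f, peaks

# Single simulated change point at t = 300 (mean shift).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
stat = sliding_w2_stat(x, 50)
f, peaks = detect_change_points(stat, 50, 0.3)
```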

Figure 3: Un-normalized empirically estimated matched filter. Given a change point at time $\tau$ and window size $n$, the effects of the change point are reflected in the W2T statistic on the interval $[\tau - n, \tau + n]$. Thin traces represent ensemble averages over 200 IID sequences for each of several simulated distribution changes. The matched filter is normalized by removing the bias and scaling to have unit area.

4.2 Time Series Segment Clustering

Given change points and time segments, the process distribution within each time segment is estimated by a weighted empirical distribution: a weighted point cloud generated from the data points over the time interval. Samples are weighted by a windowing function that down-weights samples around transition boundaries, mitigating the effects of segmentation errors and non-instantaneous transitions. To this effect, we apply a half Hamming window taper to samples within a fixed distance of either boundary; samples outside this range have unit weight.
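A sketch of this boundary down-weighting, assuming a taper length `m` within which the half Hamming window is applied (the function and parameter names are ours):

```python
import numpy as np

def segment_weights(seg_len, m):
    """Per-sample weights for one segment: half Hamming-window tapers of
    length m at each boundary, unit weight in the interior."""
    weights = np.ones(seg_len)
    if seg_len >= 2 * m:
        ramp = np.hamming(2 * m)[:m]   # rising half of a Hamming window
        weights[:m] = ramp
        weights[-m:] = ramp[::-1]
    return weights

w = segment_weights(100, 10)
```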

The similarity matrix between time segments uses the 2-Wasserstein distance between their respective empirical distributions as the distance measure. Given the number of action clusters $K$, we exploit the similarity graph structure under the Wasserstein metric by clustering time segments via spectral clustering [24] into $K$ action clusters.
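The clustering step can be sketched as follows; turning the pairwise $W_2$ distances into an affinity via a Gaussian kernel, and the bandwidth `sigma`, are our illustrative choices:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_segments(segments, K, sigma=1.0):
    """TSSC sketch: pairwise 1-D W2 distances between segment samples
    (equal segment lengths assumed), converted to a Gaussian affinity
    and fed to spectral clustering."""
    S = len(segments)
    D = np.zeros((S, S))
    for i in range(S):
        for j in range(i + 1, S):
            a, b = np.sort(segments[i]), np.sort(segments[j])
            D[i, j] = D[j, i] = np.sqrt(np.mean((a - b) ** 2))
    A = np.exp(-(D / sigma) ** 2)   # affinity from Wasserstein distances
    return SpectralClustering(n_clusters=K, affinity="precomputed",
                              random_state=0).fit_predict(A)

# Four segments drawn from two distributions: segments 0/2 and 1/3 should
# land in the same clusters.
rng = np.random.default_rng(0)
segs = [rng.normal(0, 1, 200), rng.normal(8, 1, 200),
        rng.normal(0, 1, 200), rng.normal(8, 1, 200)]
labels = cluster_segments(segs, K=2, sigma=4.0)
```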

Algorithm 1: Wasserstein Change Point Detection and Time Series Segment Clustering
Input: time series, window size, matched filter, detection threshold, number of clusters
Output: change points, segment cluster labels
for all t do (CPD)
    compute the W2T statistic between the windows before and after t
end for
apply the matched filter and threshold the local maxima to obtain change points
for all pairs of detected segments do (TSSC)
    compute the 2-Wasserstein distance between their weighted empirical distributions
end for
apply spectral clustering to the resulting affinity matrix to obtain the cluster labels

5 Evaluation

Data           K    n    δ    CP-AUC        CP-F1         Label Acc
                              W2T    MStat  W2T    MStat  GT     W2T    MStat
Beedance       3    14   14   0.527  0.549  0.647  0.625  0.705  0.651  0.646
HASC-PAC2016   6    500  250  0.689  0.658  0.748  0.713  0.789  0.658  0.675
HASC2011       6    500  250  0.576  0.585  0.824  0.770  0.565  0.498  0.382
ECG200         2    100  50   0.585  0.584  0.637  0.582  0.864  0.708  0.716
Table 1: CPD evaluation using AUC and F1 for the proposed W2T method and the MStat, for the given number of labels K, window size n, and detection delay δ. TSSC is evaluated with label Hamming accuracy using ground truth (GT), W2T, and MStat change points.

5.1 Evaluation Criteria

We use the area under the ROC curve (CP-AUC) to evaluate change point performance, following previous work [15, 5, 16]. We also report the F1 score (CP-F1) for offline multiple CPD [23], using a margin of error (the detection delay in Tab. 1) for the acceptable offset to the true change point.
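A minimal sketch of margin-based CP-F1 scoring; the greedy one-to-one matching rule is our assumption:

```python
def cpd_f1(detected, truth, margin):
    """Precision/recall/F1 with a margin: each true change point can be
    matched by at most one detection within `margin` samples (greedy)."""
    detected, unmatched = sorted(detected), set(truth)
    tp = 0
    for d in detected:
        hit = next((t for t in sorted(unmatched) if abs(d - t) <= margin), None)
        if hit is not None:
            unmatched.discard(hit)
            tp += 1
    prec = tp / len(detected) if detected else 0.0
    rec = tp / len(truth) if truth else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

perfect = cpd_f1([100, 305], [100, 300], margin=10)  # both within margin
partial = cpd_f1([100], [100, 300], margin=10)       # one true CP missed
```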

For TSSC, cluster labels are mapped onto the ground truth labels using the standard Munkres algorithm and evaluated using the Hamming distance. Performance is reported in Tab. 1 separately using ground truth change points (GT) and learned change points (W2T or MStat).
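The label mapping and Hamming accuracy can be sketched with SciPy's Hungarian (Munkres) solver; `label_accuracy` is our name for this helper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def label_accuracy(pred, truth, K):
    """Map predicted cluster ids onto ground-truth labels with the
    Munkres (Hungarian) algorithm, then score Hamming accuracy."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    # confusion[i, j] = how often predicted cluster i coincides with label j
    confusion = np.zeros((K, K))
    for p, t in zip(pred, truth):
        confusion[p, t] += 1
    rows, cols = linear_sum_assignment(-confusion)  # maximize agreement
    mapping = dict(zip(rows, cols))
    return float(np.mean([mapping[p] == t for p, t in zip(pred, truth)]))

acc = label_accuracy([0, 0, 1, 1, 2], [1, 1, 0, 0, 2], K=3)
```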

5.2 Experimental Setup

We compare the performance of our algorithm to the M-Statistic (MStat) [15] with matched window settings. For fair comparison, we employ an MStat matched filter using a method analogous to that outlined in Sec. 4.1. The only hyperparameters of the CPD model are the window size $n$ and the detection threshold. Since the window size controls the width of the matched filter, we utilize domain knowledge to set $n$ based on the expected frequency of changes. We also set the threshold parameter empirically, as the distribution of the MStat under the null is not known. The hyperparameters, along with the true positive detection window used for F1, can be found in Tab. 1. For vector-valued time series, we compute the W2T over each dimension and average the result. We evaluate on the following datasets:

HASC-PAC2016: [12] consists of over 700 three-axis accelerometer sequences of subjects performing six actions: 'stay', 'walk', 'jog', 'skip', 'stairs up', and 'stairs down'. We evaluate on the 92 longest sequences.

HASC-2011: three-axis accelerometer data from 6 actions: 'stay', 'walk', 'escalator up', 'elevator up', 'stairs up', and 'stairs down'.

Beedance: [19] movements of dancing honeybees, which communicate through three actions: 'turn left', 'turn right', and 'waggle'. We use the gradient of the data as our input.

ECG200: [8] detection of abnormal heartbeats in ECG recordings.

5.3 Results

Figure 4: t-SNE embeddings of simulated data: 200 windows of 100 IID samples each, drawn from 4 simulated distributions, embedded using the Wasserstein metric and the two-sample tests MMD, W2T, and Kolmogorov-Smirnov.

The proposed algorithm demonstrates robust results for CPD and TSSC. Fig. 2 shows clear detection of change points on HASC-PAC2016, strong efficacy of the matched filter in reducing false positives, and a single label mis-classification.

The CPD performance of the W2T and MStat is comparable under the AUC metric; however, under the F1 metric, W2T consistently performs better. We note that the computational complexity of the W2T (O(n log n), dominated by sorting the windows) improves on that of the MStat (O(n²) pairwise kernel evaluations), and that the OT measures show tighter clustering in the low-dimensional embedding of various simulated measures (Fig. 4).

Comparing to results reported in [5], our unsupervised method shows competitive results with an AUC of 0.527 on Beedance compared to supervised parametric models such as ARMA (0.537) and ARGP (0.583). We observe that since the matched filter smooths the test statistic, its inclusion trades a small decrease in AUC for a better F1 score, which we see as a positive tradeoff. For example, when including the matched filter for HASC2011, our AUC drops from 0.630 to 0.576 while F1 improves from 0.720 to 0.824.

In terms of TSSC, using our unsupervised, distribution-free approach, we are able to achieve a 65% label accuracy on the Beedance data. For comparison, a state of the art supervised parametric model [19] achieves an 87.7% label accuracy, and a parametric unsupervised model using switching vector autoregressive HMMs [9] achieves a label accuracy of 66.8%. HASC also shows strong performance given that a total of six possible assignments were available.

6 Discussion

We propose a distribution-free, unsupervised approach to CPD and TSSC for time-series data. In our experiments, we run the CPD in an offline manner. Applied in an online setting, the minimum detection delay would be the window size $n$.

We approach CPD and TSSC with a weak set of assumptions: that change points occur when the process distribution changes, and that actions can be clustered based on their respective empirical distributions. In practice, of course, time series data is rarely IID. In future work, we will extend these methods for CPD and TSSC beyond IID assumptions.


  1. This research was supported by funding from Army Research Center Natick via Tufts Center for Applied Brain and Cognitive Sciences (CABCS) under ARM994. Shuchin Aeron was supported in part by an NSF CAREER award.
  2. Here, ∗ denotes the convolution operation.


  1. S. Aghabozorgi, A. Seyed Shirkhorshidi and T. Ying Wah (2015) Time-series clustering – a decade review. Information Systems 53.
  2. M. Basseville and I. V. Nikiforov (1993) Detection of Abrupt Changes: Theory and Application. Prentice Hall.
  3. C. M. Carvalho and H. F. Lopes (2007) Simulation-based sequential analysis of Markov switching stochastic volatility models. Computational Statistics & Data Analysis 51(9).
  4. F. Chamroukhi, S. Mohammed, D. Trabelsi, L. Oukhellou and Y. Amirat (2013) Joint segmentation of multivariate time series with hidden process regression for human activity recognition. Neurocomputing 120.
  5. W. Chang, C. Li, Y. Yang and B. Póczos (2019) Kernel change-point detection with auxiliary deep generative models. arXiv:1901.06077.
  6. M. Corduas and D. Piccolo (2008) Time series clustering and classification by the autoregressive metric. Computational Statistics & Data Analysis 52(4).
  7. R. Dahlhaus (1996) On the Kullback-Leibler information divergence of locally stationary processes. Stochastic Processes and their Applications 62(1).
  8. H. A. Dau (2018) The UCR time series classification archive. https://www.cs.ucr.edu/~eamonn/time_series_data_2018.
  9. E. Fox, E. B. Sudderth, M. I. Jordan and A. S. Willsky (2009) Nonparametric Bayesian learning of switching linear dynamical systems. In Advances in Neural Information Processing Systems 21.
  10. A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf and A. Smola (2012) A kernel two-sample test. J. Mach. Learn. Res. 13(1), pp. 723–773.
  11. Y. Huhtala, J. Kärkkäinen and H. Toivonen (1999) Mining for similarities in aligned time series using wavelets.
  12. H. Ichino (2016) HASC-PAC2016: large scale human pedestrian activity corpus and its baseline recognition. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany.
  13. T. Kanamori, S. Hido and M. Sugiyama (2009) A least-squares approach to direct importance estimation. J. Mach. Learn. Res. 10, pp. 1391–1445.
  14. W. Lee, J. Ortiz, B. Ko and R. Lee (2018) Time series segmentation through automatic feature learning. arXiv:1801.05394.
  15. S. Li, Y. Xie, H. Dai and L. Song (2015) M-statistic for kernel change-point detection. In Advances in Neural Information Processing Systems 28.
  16. S. Liu, M. Yamada, N. Collier and M. Sugiyama (2013) Change-point detection in time-series data by relative density-ratio estimation. Neural Networks 43.
  17. K. P. Murphy (1998) Switching Kalman filters. Technical report.
  18. T. Oates, L. Firoiu and P. R. Cohen (1999) Clustering time series with hidden Markov models and dynamic time warping.
  19. S. M. Oh, J. M. Rehg, T. Balch and F. Dellaert (2008) Learning and inferring motion patterns using parametric segmental switching linear dynamic systems. International Journal of Computer Vision 77(1-3).
  20. G. Peyré and M. Cuturi (2018) Computational optimal transport. arXiv:1803.00567.
  21. A. Ramdas, N. G. Trillos and M. Cuturi (2015) On Wasserstein two-sample testing and related families of nonparametric tests. Entropy 19.
  22. L. Tolmatz (2002) On the distribution of the square integral of the Brownian bridge. The Annals of Probability 30(1).
  23. C. Truong, L. Oudre and N. Vayatis (2018) Selective review of offline change point detection methods. arXiv:1801.00718.
  24. U. Von Luxburg (2007) A tutorial on spectral clustering. Statistics and Computing 17(4).
  25. A. Wald (1947) Sequential Analysis. J. Wiley & Sons; Chapman & Hall.
  26. A. Willsky and H. Jones (1976) A generalized likelihood ratio approach to the detection and estimation of jumps in linear systems. IEEE Transactions on Automatic Control 21(1).