A time-resolved clustering method revealing long-term structures and their short-term internal dynamics
Abstract
The last decades have not only been characterized by an explosive growth of data, but also by an increasing appreciation of data as a valuable resource. Its value lies in the ability to extract meaningful patterns that are of economic, societal or scientific relevance. A particular challenge is to identify patterns across time, including patterns that might only become apparent when the temporal dimension is taken into account. Here, we present a novel method that aims to achieve this by detecting dynamic clusters, i.e. structural elements that can be present over prolonged durations. It is based on an adaptive identification of majority overlaps between groups at different time points and accommodates transient decompositions in otherwise persistent dynamic clusters. As such, our method enables the detection of persistent structural elements with internal dynamics and can be applied to any classifiable data, ranging from social contact networks to arbitrary sets of time-stamped feature vectors. It provides a unique tool to study systems with nontrivial temporal dynamics, with broad applicability to scientific, societal and economic data.
1 Introduction
With digitalization penetrating all aspects of life, we are witnessing an explosive growth of data. Data clustering (Kaufman and Rousseeuw, 2009), i.e. a categorization of data sources into different groups, is one of the most popular approaches to harvest knowledge from this deluge of data. In countless applications it has proven to reveal latent yet meaningful structures. Clustering can be applied to both nonrelational data (information about individual data sources) and relational data (information about the relations between data sources). In nonrelational data, clustering aims to group data sources based on some measure of similarity. In relational data, clustering, also called community detection, focuses on identifying sets of data sources that are more densely connected within, as compared to between, sets. A clustering method applies either to relational or to nonrelational data; in both cases, however, its result is a clustering.
The bulk of methods for cluster detection, both in nonrelational (Jain et al., 1999) and in relational data (Fortunato, 2010; Fortunato and Hric, 2016), has been developed for static datasets. However, one particular aspect of the ever-growing amount of available data is the temporal dimension. In temporal data, each data source might contribute several data points to the dataset, each with a different time stamp. Including this temporal information makes it possible to delineate the evolution of a system. The temporal information gained in this way can be crucial for the understanding of observed patterns, as many systems are intrinsically dynamic; any observed state can only be explained in light of the history of the system. Pertinent examples highlighting the importance of temporal dynamics are social media (Chakrabarti et al., 2006), mobile subscriber networks (Palla et al., 2007) or co-authorship relations (Backstrom et al., 2006; Rosvall and Bergstrom, 2010).
In the last decade and a half, considerable efforts were made to extend static methods to time-stamped data and to develop new ones capable of coping with temporal data. Such methods are commonly referred to as evolutionary clustering, a term coined by Chakrabarti et al. (2006), or dynamic community detection in the context of social network analysis. A topical overview can be found in the review by Dakiche et al. (2019).
A common representation of time-stamped data is a sequence of snapshots, with each snapshot being an aggregation of data points over a certain amount of time, e.g. per day, per month or per year. In a single snapshot each data source is represented by at most a single data point; this data point is the result of an aggregation if a data source contributes several data points to a single snapshot. For relational data such a representation is also called time-window graphs (Holme, 2015). The advantage of this approach lies in the representation of temporal data as a series of static datasets that can be analyzed using the rich toolset of traditional clustering analysis. The drawback is the loss of all temporal information about the data sources within the aggregation windows. Several approaches to include temporal information adapt either the snapshot representation, like the creation of joint graphs from two snapshots (Palla et al., 2007), or the measures from static clustering (Dinh et al., 2009; Sun et al., 2007), or both (Mucha et al., 2010). Another option is to define rules to combine a sequence of clusterings resulting from static methods applied to each snapshot. We will refer to this approach as ad hoc evolutionary clustering. It can be considered more general, as it does not depend on the clustering method used. Ad hoc evolutionary clustering methods are by definition applicable both to nonrelational and to relational data.
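As a minimal sketch of this aggregation step, time-stamped records could be binned into snapshots as follows. The record layout (source, timestamp, value) and the averaging rule are our own illustration, not part of the method; any aggregation rule could be substituted.

```python
from collections import defaultdict

def bin_into_snapshots(records, window):
    """Aggregate time-stamped records into a sequence of snapshots.

    records: iterable of (source_id, timestamp, value) tuples.
    window:  width of the aggregation window, in timestamp units.
    Within a snapshot each data source contributes at most one data
    point; multiple values from the same source are averaged here.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for source, t, value in records:
        buckets[int(t // window)][source].append(value)
    return {
        win: {src: sum(vals) / len(vals) for src, vals in per_src.items()}
        for win, per_src in sorted(buckets.items())
    }
```

Each resulting snapshot is an ordinary static dataset, so any traditional clustering method can be applied to it afterwards.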
Evolutionary clustering methods define dynamic clusters (DCs), i.e. clusters that might persist over several snapshots, based on rules that relate clusters between time points. Careful thought and consideration should be given to the definition of those rules and their underlying principles. Ideally, the concept of a DC is defined a priori, such that the set of rules is an implementation of the concept and not the other way around.
Here, we propose an ad hoc evolutionary clustering method to detect DCs in temporal data. Our method offers the advantage of flexibility, as the only requirement for its application is a time-series of clusterings, which can be generated by any clustering method applied to nonrelational or relational data, including overlapping community detection methods for relational data, such as the one by Palla et al. (2005). Our framework uses majority-based association as the basis for detecting DCs. It features a ruleset that allows one to adapt the temporal scale at which processes are deemed relevant for the dynamic cluster structure. As a result, the framework makes it possible to capture and study short-lived changes, e.g. natural fluctuations, small perturbations or small-scale processes within the clusters, such as fission-fusion dynamics (Aureli et al., 2008), without losing track of the dynamics at longer time scales. Finally, it is applicable to “live” datasets, i.e. datasets with continuously generated data.
In the following, we dissect the lifecycle events of a DC in the context of a sequence of snapshots. We specify a set of properties that explicitly define what we consider to be a robust DC. Then, we present a set of rules, along with an algorithmic procedure, to detect DCs. In addition, the functioning of the novel framework is illustrated by means of synthetic examples.
2 The lifecycle of a dynamic cluster
In a sequence-of-snapshots representation of temporal data, DCs consist of a time-series of sets of data sources. We will refer to these data sources as the members of the DC. Changes in this time-series of members determine the lifecycle of the DC. Such changes can be classified into six elementary events: birth, death, growth, shrinkage, split and merge, illustrated in Fig. 1. The lifecycle of a DC can be described in general terms, in the absence of explicit rules (see LABEL:lifecycleSI). However, robustly linking observed patterns to these lifecycle events requires a set of explicitly defined rules. In combination with an algorithmic application procedure, they define an ad hoc evolutionary clustering method. Before presenting our novel method, we specify the properties of a dynamic cluster that define what we consider a robust DC and that serve as the underlying principles for the ruleset of our method.
3 Definition of a robust dynamic cluster
A simple and intuitive principle to link clusters over time, in order to build a DC, is a majority-based association, where the biggest fraction of the members of a cluster is followed to some other point in the past or future. We use bijective majority-based relations, i.e. clusters from neighbouring time points reciprocally hold each other's biggest fraction of members, as a first criterion to identify DCs.
Beyond establishing linking relations between neighbouring snapshots, an additional challenge lies in the concepts of persistence and continuity. Intuitively, we tend to identify DCs as sets of data sources that appear as related clusters over several consecutive time points. If they only appear together in a cluster every other time point, identifying them as members of a DC becomes more dubious. While in both cases the DC shows persistence over time, the latter lacks continuity. Many real systems are prone to produce discontinuous but persistent structures. We will follow the nomenclature of Rosenberg and Hirschberg (2007) and use the term homogeneous discontinuities to describe cases where a DC transiently decomposes into subunits. A variety of dynamic patterns fall into this classification, most notably the fission-fusion patterns well studied in social systems (Aureli et al., 2008). We identify two elementary patterns in DCs with homogeneous discontinuities: splintering and transitioning. As splintering, we denote events where members of a DC are temporarily split into several subclusters. A transitioning event is defined by a series of time points during which the members gradually attach to a splinter subcluster, until all members have transitioned and the growing splinter cluster effectively becomes the initial DC. Both events must be transient, as otherwise the initial DC cannot be considered to persist. An adequate timescale limit below which splintering and transitioning events are considered transient must be chosen on a case-by-case basis. Fig. 2 illustrates how two extreme choices of the timescale limit affect the detection of DCs. We add to the list of desirable features the capability to track clusters over time in spite of homogeneous discontinuities.
With these clarifications at hand, we postulate that the following set of features should define a DC and serve as a basis for the implementation of a detection algorithm (see Fig. 3 for a visual representation):
- Majority based: DCs must be identified using a bijective majority-based association criterion between clusterings from different points in the time-series. Clusters that reciprocally hold each other's biggest fractions of members should belong to the same DC.
- Progressive: A dynamic clustering must be based on existing data and be capable of incorporating newly generated data into the existing structure, i.e. DCs must be progressively detectable on a live dataset. This feature is equivalent to the condition that the dynamic clusters, at any point in time, can only depend on data from that and, potentially, any previous time point.
- Robust against high turnover: The dynamic structure should show minimal dependency on the introduction and disappearance of data sources in the dataset. This assures that the DC structure only follows structural changes and is not dominated by the turnover of data sources in the dataset.
- Structurally consistent: The most recent DC structure should always coincide with the clustering detected in the corresponding snapshot. Structural consistency is of particular importance in a live dataset, because the last (or current) snapshot continuously changes with incoming data. However, this requirement does not exclude the possibility that the clustering deviates from that of the DCs at points previous to the last point of observation.
- Resilient against homogeneous discontinuities: Persistent structures showing homogeneous discontinuities should still be identified as a DC. Homogeneous discontinuities are transient decompositions of two different types:
  - Splintering: decomposition into subclusters.
  - Transitioning: emergence of a subcluster that gradually absorbs the majority of members from the DC.
- Sensitive to the timescale of discontinuity: The timescale at which discontinuities are considered transient is context-specific, and thus needs to be adaptable.
4 A novel ad hoc evolutionary clustering method
For an in-depth description and definition of each term introduced in this section, refer to LABEL:detailsSI.
4.1 Relating neighbouring snapshots
The implementation of a majority-based identification of clusters from different time points determines the mechanistic definition of a DC. Practically, we first assess the similarity of two clusters from consecutive snapshots. To do so, we use the fraction of shared members, only considering resident members, i.e. data sources that contribute a data point to both snapshots. We divide the size of the intersection of two clusters by the number of resident members in the one and the other cluster, respectively. This yields two similarity measures that are unaffected by member turnover. These measures represent the fraction of members from one cluster present in the other, and vice versa. They can be considered as nonsymmetric variations of the well-known Jaccard index (Jaccard, 1901). Based on these similarity measures, we identify the clusters in the neighbouring snapshots that are most similar. Since our similarity measure is not symmetric, we use the term mapping relation whenever we follow the majority forward in time, and tracing relation for the backward direction in time. We will use a bijective majority match, i.e. a cluster is the tracing cluster of its own mapping cluster, as a condition to associate two clusters from consecutive snapshots with each other and, ultimately, with the same DC. Note that a posterior cluster could trace back to more than one prior cluster, and the same holds true in the other direction. Therefore, we consider sets of clusters when identifying bijective majority matches.
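A sketch of the two asymmetric similarity measures and of the bijective majority match between two consecutive clusterings might look as follows. For clarity the sketch picks a single best match per cluster rather than the sets of clusters the full method considers, and all function names are our own.

```python
def resident_fraction(cluster_a, cluster_b, residents):
    """Fraction of cluster_a's resident members also found in cluster_b."""
    a_res = cluster_a & residents
    if not a_res:
        return 0.0
    return len(a_res & cluster_b) / len(a_res)

def majority_match(clusters_t, clusters_t1):
    """Return pairs (i, j) of cluster indices forming a bijective
    majority match between snapshot t and snapshot t+1: cluster j
    holds the largest resident fraction of cluster i (mapping) and
    cluster i holds the largest resident fraction of cluster j
    (tracing), simultaneously."""
    # residents: data sources present in both snapshots
    residents = set().union(*clusters_t) & set().union(*clusters_t1)
    # mapping relation: follow each earlier cluster's majority forward
    mapping = {
        i: max(range(len(clusters_t1)),
               key=lambda j: resident_fraction(c, clusters_t1[j], residents))
        for i, c in enumerate(clusters_t)
    }
    # tracing relation: follow each later cluster's majority backward
    tracing = {
        j: max(range(len(clusters_t)),
               key=lambda i: resident_fraction(c, clusters_t[i], residents))
        for j, c in enumerate(clusters_t1)
    }
    return [(i, j) for i, j in mapping.items() if tracing[j] == i]
```

Because only resident members enter the denominators, the two fractions are unaffected by data sources appearing in or disappearing from the dataset between the two snapshots.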
4.2 Generalisation to relations between distant time points
What renders this approach nontrivial, and will allow us to implement the remaining required features, is a generalisation of mapping and tracing relations to a measure between snapshots from distant time points. Concretely, for each cluster we identify the earliest set of clusters with which it forms a bijective majority match and try to associate the cluster to the same DC as the clusters in this set. To generalise, we apply the matching between consecutive time points iteratively. Following tracing relations back over several snapshots, we can construct what we call a tracing path, and a mapping path for the inverted direction. A cluster forms a bijective majority match with a set of clusters from an earlier time point if, at some depth of recursion, the tracing path of the cluster equates to this set and the mapping path from the set yields a set with the later cluster as its unique member. There is, in principle, no restriction on the number of time points over which a bijective majority match can occur. We include the possibility of such a restriction in the form of a parameter determining the maximal recursion depth of the tracing and mapping paths. This parameter allows, on the one hand, setting a limit to the duration of within-DC processes and, on the other hand, studying the types of transient decompositions present in the data by comparing DC structures generated under different restrictions of this maximal length.
We consider a cluster to hold its own majority, hence a cluster always forms a bijective majority match at least with itself. If a cluster forms bijective majority matches with sets of clusters from earlier snapshots, we associate the cluster to the DC of the earliest source set, i.e. the earliest set that forms a bijective majority match with the target cluster and contains exclusively clusters associated to the same DC. The combination of all clusters in the tracing and mapping paths connecting the target cluster to this source set can be seen as the sequence of sets of clusters along which the identity of a DC propagates through time, and will be referred to as the identity flow of the target cluster.
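The search for the earliest bijective majority match can be sketched as follows. This is a simplified illustration with single-cluster tracing and mapping relations (the full method operates on sets of clusters); the data layout and function name are our own assumptions.

```python
def earliest_match(t, i, tracing, mapping, h):
    """Follow the tracing path of cluster i at snapshot t back by up
    to h steps and return the earliest (snapshot, cluster) forming a
    bijective majority match with (t, i), i.e. whose mapping path
    leads forward to (t, i) again.

    tracing[s] maps cluster indices at snapshot s to their majority
    match at s-1; mapping[s] maps indices at s to their match at s+1.
    """
    best = (t, i)           # a cluster always matches itself
    cur = i
    for step in range(1, h + 1):
        past_t = t - step
        back = tracing.get(past_t + 1, {})
        if past_t < 0 or cur not in back:
            break
        cur = back[cur]     # one step further along the tracing path
        # follow the mapping path forward again to test bijectivity
        fwd = cur
        for s in range(past_t, t):
            fwd = mapping.get(s, {}).get(fwd)
            if fwd is None:
                break
        if fwd == i:
            best = (past_t, cur)
    return best
```

The recursion-depth parameter h corresponds to the restriction on the maximal length of tracing and mapping paths discussed above.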
An identity flow spanning more than two snapshots can enclose sequences of cluster sets (see blue clusters in Fig. 4). These embedded sequences are necessarily shorter than the maximal length of an identity flow and satisfy our conditions for transient homogeneous decompositions. We thus identify clusters belonging to embedded sequences as marginal parts of the embedding DC.
4.3 Definition of a dynamic cluster
We define a DC as a sequence of sets of clusters, where each cluster either has an identity flow with a source set contained in the DC, is contained in an identity flow of a cluster belonging to the DC, or is part of the clusters marginalised by the identity flow of other clusters belonging to the DC.
The identification of the source set of a cluster is conditioned on the maximal length of a bijective majority match, which must be determined a priori through a parametrisation of the method. According to this definition, a minimal configuration of a DC might consist of a single set containing a single cluster.
4.4 Algorithmic procedure
Refer to Fig. 4 for a visual illustration of the algorithmic procedure, or to LABEL:methodSI:tbl:algoDesc of the supplementary material for an in-depth step-by-step description.
The algorithmic procedure defining our ad hoc evolutionary clustering method consists in passing through the sequence of snapshots, starting at the earliest time point, and performing two distinct tasks for each cluster in the current snapshot:
1. Associate the cluster and all clusters in its identity flow to the DC of its source set.
2. Correct existing DC-clustering associations from previous time steps based on the marginalisations induced by the identity flow.
How far the procedure can reach back in time to determine the source set of a target cluster needs to be determined a priori through a parametrisation of the method. This parameter can be considered as the method's history horizon and must be given in a number of time steps. Henceforth, we will refer to a particular configuration of the method as an $h$-step history, where $h$ specifies the number of snapshots the algorithm reaches back in time.
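The first of the two tasks above can be sketched as an outer loop over the snapshots. This is a hedged simplification: `source_of` stands for the search for the earliest bijective majority match (here reduced to a single source cluster), and the marginalisation corrections of the second task are omitted.

```python
def assign_dc_labels(n_snapshots, clusters_per_t, source_of):
    """Walk through the snapshots from the earliest time point and
    let every cluster inherit the DC label of its source.

    clusters_per_t: number of clusters in each snapshot.
    source_of(t, i): assumed to return the (snapshot, index) of the
    earliest bijective majority match of cluster i at snapshot t,
    or (t, i) itself if no earlier source exists.
    """
    label = {}
    next_id = 0
    for t in range(n_snapshots):
        for i in range(clusters_per_t[t]):
            src = source_of(t, i)
            if src == (t, i):
                label[(t, i)] = next_id  # no earlier source: a DC is born
                next_id += 1
            else:
                label[(t, i)] = label[src]
    return label
```

Because each snapshot is processed only once and labels depend solely on current and past data, this structure is compatible with the progressive detection on live datasets required above.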
5 Consistency in evolutionary clustering
So far, we have focused on the features we consider desirable to define and, ultimately, to detect DCs. The presented method is a direct implementation of these features and thereby delivers qualitatively satisfying DC structures when our custom features are used as quality criteria. A more objective approach to quality assessment consists in examining the autocorrelation of the members between time points. Note that we consider a DC to be a sequence of sets of clusters; the members of a DC at a given time point, however, are all data sources that belong to the clusters within the set of clusters from this time point. The autocorrelation is given by
(1) $a_D(t) = \dfrac{\left|M_D^t \cap M_D^{t+1}\right|}{\left|M_D^t \cup M_D^{t+1}\right|}$

where $M_D^t$ and $M_D^{t+1}$ are the sets of members of the DC, $D$, at time points $t$, respectively $t+1$, and $|\cdot|$ denotes the cardinality of a set.
$a_D(t)$, also known as the Jaccard index (Jaccard, 1901), indicates the fraction of identical members among all members present. As such, it can be understood as an assessment of membership consistency between neighbouring snapshots. To alleviate the notation, we consider a DC, $D$, that is not present at snapshot $t$ to have a member set $M_D^t = \emptyset$. We define the total consistency of a dynamic clustering as the average autocorrelation, with the average taken over all DCs and, for each DC, over all pairs of neighbouring time points at which the DC exists:
(2) $\bar{a} = \dfrac{1}{|\mathcal{D}|} \sum_{D \in \mathcal{D}} \dfrac{1}{T_D - 1} \sum_{t=1}^{T-1} a_D(t)$

with $T$ the total number of snapshots, $\mathcal{D}$ the set of all dynamic clusters and $T_D$ the number of snapshots the DC $D$ exists.
This definition excludes creation and destruction events, i.e. pairs of neighbouring snapshots in which the DC is present in only one of the two. It thus indicates the overall consistency of existing structures.
If no external criteria exist that allow the determination of a suitable step-history parameter, we argue that choosing the value that maximises the total consistency score is a sensible choice. Doing so yields the most consistent temporal structure within the range of DC structures that result from all possible parametrisations.
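A sketch of the consistency score, under the assumption (our own) that each DC is given as a dictionary from snapshot index to member set:

```python
def jaccard(a, b):
    """Jaccard index of two member sets (defined as 0 for two empty sets)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def total_consistency(dynamic_clusters):
    """Average member autocorrelation over all DCs and over all pairs
    of neighbouring snapshots at which the DC exists.

    dynamic_clusters: dict mapping a DC id to {snapshot: member set}.
    Pairs where the DC exists at only one of the two snapshots
    (creation/destruction events) contribute nothing, matching the
    empty-member-set convention in the text.
    """
    scores = []
    for members in dynamic_clusters.values():
        times = sorted(members)
        pair_scores = [jaccard(members[t], members[u])
                       for t, u in zip(times, times[1:]) if u == t + 1]
        if pair_scores:
            scores.append(sum(pair_scores) / len(pair_scores))
    return sum(scores) / len(scores) if scores else 0.0
```

One could then run the detection once per candidate step-history value, evaluate `total_consistency` for each resulting dynamic clustering, and keep the parametrisation with the highest score.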
6 Discussion
We have presented an algorithm that detects dynamic clusters (DCs) in a sequence of snapshots of time-stamped data. It is based on a precisely defined set of features that allows new DCs to form, old ones to disappear, as well as existing DCs to shrink, grow or transiently split and merge. The algorithm only depends on a time-series of cluster associations and is, therefore, compatible with any clustering method for nonrelational and relational data. Thus, the user can choose the most suitable clustering method for the particular study and/or dataset at hand. Furthermore, this minimal input leads to efficient scalability of our method with the size of the dataset, i.e. the number of data sources present. It scales linearly with the number of data sources if the number of clusters detected does not depend on the size of the dataset. Consequently, our method is unlikely to ever constitute the computational bottleneck in the analysis of temporal data. Only in the limit case where the number of clusters per snapshot scales linearly with the number of data sources is its scalability comparable to that of a typical clustering method (see LABEL:scalabilitySI for further details).
Identifying clusters in a dataset requires an algorithmic procedure that classifies data sources. Such an algorithm does not, in principle, need to stem from an explicit definition of what a cluster should be. It can be based on relations between single data points and thus only implicitly define the concept of a cluster, e.g. two feature vectors need to be closer than a certain distance in order for their respective data sources to belong to the same cluster. As a result, clustering methods implement a variety of, at least partially, ill-defined concepts of a cluster. This is not only a challenge for traditional clustering, both in relational and nonrelational data, but perhaps even more so in data with a temporal resolution. Therefore, emphasis should be put on a clear outline of the features that define dynamic clusters, be it in the presentation of a new algorithmic procedure, as we do here, or in the application of an existing one.
Finally, this diversity in concepts calls for objective measures to quantify the structures deduced by evolutionary clustering methods. With the total consistency measure defined in this study, we present a measure that not only allows an objective parametrisation of the novel method but also permits a quantitative comparison of different dynamic clusterings.
Our method detects transient homogeneous decompositions in dynamic clusters that are present at longer time scales. Here it is important to clarify that the detected dynamic clusters differ from the clustering that would result from simply expanding the aggregation period. This is, in part, because transient decompositions that are not homogeneous, i.e. where a dynamic cluster decomposes and some of its parts recombine with (parts of) a different dynamic cluster, might lead to a single cluster including all involved data sources when aggregating over the entire duration of the decomposition. Another difference is the information that our method retains about transient subclusters. Even if the algorithm determines that, for a given snapshot, several clusters belong to the same DC, the composition of this DC in terms of clusters remains available. This information on the temporal dynamics within a DC is lost if one simply expands the aggregation period. Gaining access to the temporal within-DC dynamics is the distinguishing feature of this novel method and allows insights into within-DC processes, such as the presence and course of subclusters or fission-fusion processes (Aureli et al., 2008).
Supplementary Materials:
A time-resolved clustering method revealing long-term structures and their short-term internal dynamics
SI 1 Snapshot representation of data with a temporal resolution
The presented method for the detection of dynamic clusters requires a time-series of clusterings of data sources. Time-series data present the particularity that a data source might contribute several data points, each of which is associated to a discrete time stamp, thus forming a snapshot of the system associated to this particular time stamp. For some data this time stamp is given naturally through the process of data collection; for other data the individual data points require binning onto a series of discrete time stamps before a classification in the form of a clustering can be performed. Binning consists of aggregating multiple data points from the same data source such that for each snapshot a data source contributes at most a single data point.
Formally, the raw data for a sequence of snapshots must be present in the form of a series of clusterings $C_1, \ldots, C_T$ of individual data sources, associated to time points $t_1, \ldots, t_T$, respectively. At any time point $t_i$ the clustering $C_i$ consists of a set of clusters. For short, we will refer to the time point $t_i$ simply by its index $i$.
Consider a DC, $D$, of length $l$ with a mapping $\sigma$ of the index from the time-series of length $l$ of member sets of $D$ to the time-series index of the sequence of snapshots. The function $\sigma$ is to be understood as the mapping that yields, for an index $k$ in the time-series of the DC, the corresponding index $\sigma(k)$ of the time-series of snapshots. Thus $\sigma$ allows one to place an event within the lifespan of $D$ onto the time-series representing the dynamic dataset. Let $M_D^k$ be the set of members of $D$ at the time point $\sigma(k)$ of the sequence of snapshots, respectively, at the time point $k$, with $k \in \{1, \ldots, l\}$, of the time-series of its member sets.
References
- Aureli, F., C. M. Schaffner, C. Boesch et al. 2008. Fission-fusion dynamics: new research frameworks. Current Anthropology 49(4):627–654.
- Backstrom, L., D. Huttenlocher, J. Kleinberg et al. 2006. Group formation in large social networks: membership, growth, and evolution. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 44–54. ACM.
- Chakrabarti, D., R. Kumar and A. Tomkins. 2006. Evolutionary clustering. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 554–560. ACM.
- Dakiche, N., F. B.-S. Tayeb, Y. Slimani et al. 2019. Tracking community evolution in social networks: A survey. Information Processing & Management 56(3):1084–1102.
- Dinh, T. N., Y. Xuan and M. T. Thai. 2009. Towards social-aware routing in dynamic communication networks. In: 2009 IEEE 28th International Performance Computing and Communications Conference, pages 161–168. IEEE.
- Fortunato, S. 2010. Community detection in graphs. Physics Reports 486(3-5):75–174.
- Fortunato, S. and D. Hric. 2016. Community detection in networks: A user guide. Physics Reports 659:1–44.
- Holme, P. 2015. Modern temporal network theory: a colloquium. The European Physical Journal B 88(9):234.
- Jaccard, P. 1901. Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bulletin de la Société Vaudoise des Sciences Naturelles 37:241–272.
- Jain, A. K., M. N. Murty and P. J. Flynn. 1999. Data clustering: a review. ACM Computing Surveys (CSUR) 31(3):264–323.
- Kaufman, L. and P. J. Rousseeuw. 2009. Finding Groups in Data: An Introduction to Cluster Analysis, volume 344. John Wiley & Sons.
- Mucha, P. J., T. Richardson, K. Macon et al. 2010. Community structure in time-dependent, multiscale, and multiplex networks. Science 328(5980):876–878.
- Palla, G., A.-L. Barabási and T. Vicsek. 2007. Quantifying social group evolution. Nature 446(7136):664.
- Palla, G., I. Derényi, I. Farkas et al. 2005. Uncovering the overlapping community structure of complex networks in nature and society. Nature 435(7043):814.
- Rosenberg, A. and J. Hirschberg. 2007. V-measure: A conditional entropy-based external cluster evaluation measure. In: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
- Rosvall, M. and C. T. Bergstrom. 2010. Mapping change in large networks. PLoS ONE 5(1):e8694.
- Sun, J., C. Faloutsos, S. Papadimitriou et al. 2007. GraphScope: parameter-free mining of large time-evolving graphs. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 687–696. ACM.