A time resolved clustering method revealing longterm structures and their short-term internal dynamics

Abstract

The last decades have been characterized not only by an explosive growth of data, but also by an increasing appreciation of data as a valuable resource. Its value comes from the ability to extract meaningful patterns that are of economic, societal or scientific relevance. A particular challenge is to identify patterns across time, including patterns that might only become apparent when the temporal dimension is taken into account. Here, we present a novel method that aims to achieve this by detecting dynamic clusters, i.e. structural elements that can be present over prolonged durations. It is based on an adaptive identification of majority overlaps between groups at different time points and allows the accommodation of transient decompositions in otherwise persistent dynamic clusters. As such, our method enables the detection of persistent structural elements with internal dynamics and can be applied to any classifiable data, ranging from social contact networks to arbitrary sets of time stamped feature vectors. It provides a unique tool to study systems with non-trivial temporal dynamics, with broad applicability to scientific, societal and economic data.

1 Introduction

With digitalization penetrating all aspects of life we are witnessing an explosive growth of data. Data clustering (Kaufman and Rousseeuw, 2009), i.e. a categorization of data sources into different groups, is one of the most popular approaches to harvest knowledge from this deluge of data. In countless applications it has proven to reveal latent yet meaningful structures. Clustering can be applied to both non-relational data (information about individual data sources) and relational data (information about the relation between data sources). In non-relational data, clustering aims to group data sources based on some measure of similarity. In relational data, clustering - also called community detection - focuses on identifying sets of data sources that are more densely connected within, as compared to between, sets. Clustering methods apply either to relational or to non-relational data; in both cases, however, the result is a clustering.

The bulk of methods for cluster detection both in non-relational (Jain et al., 1999) and in relational data (Fortunato, 2010; Fortunato and Hric, 2016) has been developed for static datasets. However, one particular aspect of the ever growing amount of available data is the temporal dimension. In temporal data, each data source might contribute several data points to the dataset, each with a different time stamp. Including this temporal information makes it possible to delineate the evolution of a system. The temporal information thus gained can be crucial for the understanding of observed patterns, as many systems are intrinsically dynamic; any observed state can only be explained in light of the history of the system. Pertinent examples highlighting the importance of temporal dynamics include social media (Chakrabarti et al., 2006), mobile subscriber networks (Palla et al., 2007) and co-authorship relations (Backstrom et al., 2006; Rosvall and Bergstrom, 2010).

In the last decade and a half, considerable efforts were made to extend static methods to time-stamped data and to develop new ones capable of coping with temporal data. Such methods are commonly referred to as evolutionary clustering, a term coined by Chakrabarti et al. (2006), or dynamic community detection in the context of social network analysis. A topical overview can be found in the review by Dakiche et al. (2019).

A common representation of time stamped data is a sequence of snapshots, with each snapshot being an aggregation of data points over a certain amount of time, e.g. per day, per month or per year. In a single snapshot, each data source is present with at most a single data point. This data point is the result of an aggregation if a data source contributes several data points to a single snapshot. For relational data, such a representation is also called time-window graphs (Holme, 2015). The advantage of this approach lies in the representation of temporal data as a series of static datasets that can be analyzed using the rich tool-set of traditional clustering analysis. The drawback is the loss of all temporal information about the data sources within the aggregation windows. Several approaches to include temporal information adapt either the snapshot representation, like the creation of joint graphs from two snapshots (Palla et al., 2007), or the measures from static clustering (Dinh et al., 2009; Sun et al., 2007), or both (Mucha et al., 2010). Another option is to define rules to combine a sequence of clusterings resulting from static methods applied to each snapshot. We will refer to this approach as ad hoc evolutionary clustering. It can be considered the more general approach, as it is independent of the clustering method used. Ad hoc evolutionary clustering methods are by definition applicable both to non-relational and to relational data.

Evolutionary clustering methods define dynamic clusters (DCs), i.e. clusters that might persist over several snapshots, based on rules that relate clusters between time points. Careful thought and consideration should be given to the definition of those rules and their underlying principles. Ideally, the concept of a DC is defined a priori, such that the set of rules is an implementation of the concept and not the other way around.

Here, we propose an ad hoc evolutionary clustering method to detect DCs in temporal data. Our method offers the advantage of flexibility, as the only requirement for its application is a time-series of clusterings, which can be generated by any clustering method applied to non-relational or relational data, including overlapping community detection methods for relational data, such as the one by Palla et al. (2005). Our framework uses majority overlap as the basis for detecting DCs. It features a rule-set that makes it possible to adapt the temporal scale at which processes are deemed relevant for the dynamic cluster structure. As a result, the framework captures short-lived changes, e.g. natural fluctuations, small perturbations or small-scale processes within the clusters, such as fission-fusion dynamics (Aureli et al., 2008), without losing track of the dynamics at longer time scales. Finally, it is applicable to “live” datasets, i.e. with continuously generated data.

In the following, we dissect the life-cycle events of a DC in the context of a sequence of snapshots. We specify a set of properties that explicitly define what we consider to be a robust DC. Then, we present a set of rules, along with an algorithmic procedure, to detect DCs. In addition, the functioning of the novel framework is illustrated by means of synthetic examples.

2 The life-cycle of a dynamic cluster

In a sequence of snapshots representation of temporal data, DCs consist of a time-series of sets of data sources. We will refer to these data sources as the members of the DC. Changes in this time-series of members will determine the life-cycle of the DC. Such changes can be classified into six elementary events: birth, death, growth, shrinkage, split and merge, illustrated in Fig. 1. The life-cycle of a DC can be described in general terms, in the absence of explicit rules (see LABEL:lifecycleSI). However, robustly linking observed patterns to these life-cycle events requires a set of explicitly defined rules. In combination with an algorithmic application procedure they define an ad hoc evolutionary clustering method. Before we proceed to the presentation of our novel method, we present the properties that define what we consider a robust DC and that will serve as the underlying principles for our method's set of rules.

Fig. 1: Illustration of events in the life-cycle of a dynamic cluster (DC). Upper part: DC with individual members in a sequence of snapshots of a time stamped dataset. In non-relational data, clusters originate from similarity measures between data points, as highlighted by the position of each data point within a snapshot. In relational data, the position is irrelevant and clusters correspond to (more densely) connected data sources (nodes). Relations between data sources are illustrated by lines between them. Enclosed sets of nodes correspond to clusters, with the colour indicating the associated DC. The dashed lines are visual guidelines to track the present DCs through time. Lower part: Cluster based representation of a DC using an Alluvial diagram. Each block corresponds to a cluster. The height of a block represents the cluster size, the width has no particular meaning. The flows between blocks illustrate how the data sources redistribute between time points. The difference between block height and summed height of in- and out-flows corresponds to the number of introduced and removed data sources, respectively.

3 Definition of a robust dynamic cluster

A simple and intuitive principle to link clusters over time, in order to build a DC, is a majority based association, where the biggest fraction of the members of a cluster is followed to some other point in the past or future. We use bijective majority based relations, i.e. clusters from neighbouring time points reciprocally hold each other's biggest fraction of members, as a first criterion to identify DCs.

An additional challenge, beyond the establishment of linking relations between neighbouring snapshots, lies in the concepts of persistence and continuity. Intuitively, we tend to identify DCs as sets of data sources that appear as related clusters over several consecutive time points. If they only appear together in a cluster every other time, identifying them as members of a DC becomes more dubious. While in both cases the DC shows persistence over time, the latter lacks continuity. Many real systems are prone to produce discontinuous but persistent structures. We will follow the nomenclature of Rosenberg and Hirschberg (2007) and use the term homogeneous discontinuities to describe cases where a DC transiently decomposes into sub-units. A variety of dynamic patterns fall into this classification, most notably the fission-fusion patterns well-studied in social systems (Aureli et al., 2008). We identify two elementary patterns in DCs with homogeneous discontinuities: splintering and transitioning. As splintering, we denote events where members of a DC are temporarily split into several sub-clusters. A transitioning event is defined by a series of time points, during which the members gradually attach to a splinter sub-cluster, until all members have transitioned and the growing splinter cluster effectively becomes the initial DC. Both events must be transient, as otherwise the initial DC cannot be considered to persist. An adequate time-scale limit, below which splintering and transitioning events are considered transient, must be defined on a case-by-case basis. Fig. 2 illustrates how two extreme choices in the time-scale limit affect the detection of DCs. We add to the list of desirable features the capability to track clusters over time in spite of homogeneous discontinuities.

Fig. 2: Illustration of how continuity in expression of a persistent dynamic cluster (DC) affects the observed structural dynamics. Center top: Alluvial diagram showing the time-series of clusters in a sequence of snapshots from a synthetic dataset. Left: The same alluvial diagram but with each colour representing a DC. The DCs are formed based on the condition of continuous presence throughout their life-span. The association criterion for clusters is based on a simple bijective majority rule between each neighbouring time point, i.e. two clusters from neighbouring time points are associated to the same DC if they contain the majority of each other’s members. Right: The same alluvial diagram, but with each colour representing a DC. In this case, the condition on continuity is relaxed. A particular DC must be detectable within a limited number of time points - here set to five. The association criterion is still based on a bijective majority rule, but generalized to distant time points. For a detailed description of the method see Section 4 or refer to LABEL:methodSI:tbl:algoDesc from the supplementary material.

With these clarifications at hand, we postulate that the following set of features should define a DC and figure as a basis for the implementation of a detection algorithm (see Fig. 3 for a visual representation):

Majority based

DCs must be identified using a bijective majority based association criterion between clusterings from different points in the time-series. Clusters that reciprocally hold each other's biggest fractions of members should belong to the same DC.

Progressive

A dynamic clustering must be based on existing data and be capable of incorporating newly generated data into the existing structure, i.e. DCs must be progressively detectable on a live dataset. This feature is equivalent to the condition that the dynamic clusters, at any point in time, can only depend on data from that and, potentially, any previous time point.

Robust against high turnovers

The dynamic structure should show minimal dependency on the introduction and disappearance of data sources in the dataset. This is to ensure that the DC structure only follows structural changes and is not dominated by the turnover of data sources in the dataset.

Structurally consistent

The most recent DC structure should always coincide with the clustering detected in the most recent snapshot. Structural consistency is of particular importance in a live dataset, because the last (or current) snapshot continuously changes with incoming data. However, this requirement does not exclude the possibility that the clustering deviates from that of the DCs at points previous to the last point of observation.

Resilient against homogeneous discontinuities

Persistent structures showing homogeneous discontinuities should still be identified as a DC. Homogeneous discontinuities are transient decompositions of two different types:

Splintering:

Decomposition into sub-clusters.

Transitioning:

Emergence of a sub-cluster that gradually absorbs the majority of members from the DC.

Sensitivity to time-scale of discontinuity

The time-scale at which discontinuities are considered to be transient is context specific, and thus, needs to be adaptable.

Fig. 3: Illustration of different stages of the progressive detection algorithm. The status of the progressive algorithm at the time of observation is indicated using an orange arrowed box and specified by the time index . The requested features are illustrated as follows: Majority based: Associations of clusters at to existing DCs are based on a bijective majority match with clusters from earlier time points (see Section 4 for details). Progressiveness: Gray elements correspond to future data and are, therefore, irrelevant to the DC detection algorithm. Robustness against turnover: Strong size fluctuations of a DC, (e.g. the growth from to by over and shrinkage between and ) should not affect the identification of a DC. Structural consistency: The algorithm should generate a DC structure that coincides with the present clustering. Each cluster at the current stage of the algorithm has an individual colour, as illustrated by step . Splinter-Resilience: From the three DCs identified at , the topmost cluster is retrospectively identified at as a splinter sub-cluster, and thus, is re-integrated into the blue DC. Transition-Resilience: At stage the blue DC splinters. By stage the splinter has completely absorbed the rest of the DC, effectively recombining it. Transient discontinuity: Among the two smaller DCs at , only the upper (pink) classifies as a splinter. The orange one remains separated from the blue DC considerably longer, until , and is therefore counted as a DC of its own.

4 A novel ad hoc evolutionary clustering method

For an in-depth description and definition of each term introduced in this section refer to LABEL:detailsSI.

4.1 Relating neighbouring snapshots

The implementation of a majority based identification of clusters from different time points will determine the mechanistic definition of a DC. Practically, we first assess the similarity of two clusters from consecutive snapshots. To do so, we use the fraction of shared members, only considering resident members, i.e. data sources that contribute a data point to both snapshots. We divide the size of the intersection of two clusters by the number of resident members in one and the other cluster, respectively. This yields two similarity measures that are unaffected by member turnover. These measures represent the fraction of members from one cluster present in the other, and vice-versa. They can be considered as non-symmetric variations of the well known Jaccard index (Jaccard, 1901). Based on these similarity measures, we identify the clusters in the neighbouring snapshots that are most similar. Since our similarity measure is not symmetric, we use the term mapping relation whenever we follow the majority forward in time, and tracing relation for the backward direction in time. We will use a bijective majority match, i.e. a cluster is the tracing cluster of its own mapping cluster, as a condition to associate two clusters from consecutive snapshots to each other and, ultimately, to the same DC. Note that a posterior cluster could trace back to more than one prior cluster, and the same holds true in the other direction. Therefore, we consider sets of clusters when identifying bijective majority matches.
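To make these relations concrete, the following is a minimal Python sketch (not the authors' reference implementation; all function names and the dict-of-sets data layout are illustrative assumptions) of the two asymmetric similarity measures and the resulting mapping, tracing and bijective majority relations between two consecutive snapshots:

```python
def overlap_fraction(a, b, residents):
    """Fraction of cluster a's resident members that are also in cluster b."""
    a_res = a & residents
    return len(a_res & b) / len(a_res) if a_res else 0.0

def majority_relations(clusters_t, clusters_t1):
    """Majority relations between two consecutive snapshots.

    clusters_t, clusters_t1: dicts, cluster id -> set of member ids.
    Only resident members (present in both snapshots) are considered,
    so the relations are unaffected by member turnover.
    """
    residents = (set().union(*clusters_t.values())
                 & set().union(*clusters_t1.values()))
    # mapping relation: follow each cluster's majority forward in time
    mapping = {i: max(clusters_t1,
                      key=lambda j: overlap_fraction(c, clusters_t1[j], residents))
               for i, c in clusters_t.items()}
    # tracing relation: follow each cluster's majority backward in time
    tracing = {j: max(clusters_t,
                      key=lambda i: overlap_fraction(c, clusters_t[i], residents))
               for j, c in clusters_t1.items()}
    # bijective majority match: a cluster is the tracing cluster of its own
    # mapping cluster
    matches = [(i, j) for j, i in tracing.items() if mapping[i] == j]
    return mapping, tracing, matches
```

Because only resident members enter the denominators, clusters that gain or lose transient members between the two snapshots can still match each other.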

4.2 Generalisation to relations between distant time points

What renders this approach non-trivial, and will allow us to implement the remaining required features, is a generalisation of mapping and tracing relations to a measure between snapshots from distant time points. Concretely, for each cluster we will identify the earliest set of clusters with which it forms a bijective majority match and try to associate the cluster to the same DC as the clusters in this set. To generalise, we apply the matching between consecutive time points iteratively. Following tracing relations back over several snapshots we can construct what we call a tracing path, and a mapping path for the inverted direction. A cluster forms a bijective majority match with a set of clusters from an earlier time point if, at some depth of recursion, the tracing path of the cluster equates to this set and the mapping path from the set equates to a set with the later cluster as its unique member. There is, in principle, no restriction on the number of time points over which a bijective majority match occurs. We include the possibility of such a restriction in the form of a parameter, determining the maximal recursion depth of the tracing and mapping paths. This parameter allows, on the one hand, to set a limit to the duration of within-DC processes and, on the other hand, to study the types of transient decompositions present in the data by comparing DC structures generated under different restrictions for this maximal length.

We consider a cluster to hold its own majority, hence a cluster always forms a bijective majority match at least with itself. If a cluster forms a bijective majority match with sets of clusters from earlier snapshots, we associate the cluster to the DC of the earliest source set, i.e. the earliest set that forms a bijective majority match with the target cluster and contains exclusively clusters associated to the same DC. The combination of all clusters in the tracing and mapping paths connecting the target cluster to this source set can be seen as the sequence of sets of clusters along which the identity of a DC can propagate through time, and will be referred to as the identity flow of the target cluster.
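The search for the earliest source set can be sketched as below, assuming the per-step relations have been condensed into dictionaries `trace[s]` and `mapping[s]`; the function name and the history parameter `h` are illustrative assumptions, not notation from the paper:

```python
def earliest_source_set(target, t, trace, mapping, h):
    """Earliest set of clusters (up to h snapshots back) forming a
    bijective majority match with `target` at snapshot t.

    trace[s]:   cluster id at snapshot s -> id of the cluster at s-1
                holding its majority (tracing relation).
    mapping[s]: cluster id at snapshot s -> id of the cluster at s+1
                holding its majority (mapping relation).
    A cluster always holds its own majority, so it matches at least itself.
    """
    best = (t, frozenset([target]))
    current = {target}                      # tracing path, extended stepwise
    for back in range(1, h + 1):
        s = t - back
        if s < 0:
            break
        # extend the tracing path one snapshot further into the past
        current = {trace[s + 1][c] for c in current}
        # follow the mapping path from the candidate set forward to t
        forward = set(current)
        for step in range(s, t):
            forward = {mapping[step][c] for c in forward}
        # bijective majority match: the mapping path returns to the
        # target cluster alone
        if forward == {target}:
            best = (s, frozenset(current))
    return best
```

In a transient splinter scenario (one cluster splitting at one snapshot and recombining at the next), the splinter's clusters end up enclosed by the identity flow and can be marginalised into the embedding DC, as described above.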

An identity flow spanning more than two snapshots can enclose sequences of cluster sets (see blue clusters in Fig. 4). These embedded sequences are necessarily shorter than the maximal length of an identity flow and satisfy our conditions for transient homogeneous decompositions. We thus identify clusters belonging to embedded sequences as marginal parts of the embedding DC.

4.3 Definition of a dynamic cluster

We define a DC as a sequence of sets of clusters, where each cluster either has an identity flow with a source set contained in the DC, is contained in an identity flow of a cluster belonging to the DC, or is part of the clusters marginalised by the identity flows of other clusters belonging to the DC.

The identification of the source set of a cluster is conditioned on the maximal length of a bijective majority match which must be determined a priori through a parametrization of the method. According to this definition, a minimal configuration of a DC might consist of a set containing a single cluster only.

4.4 Algorithmic procedure

Refer to Fig. 4 for a visual illustration of the algorithmic procedure, or to LABEL:methodSI:tbl:algoDesc of the supplementary material for an in depth step-by-step description.

The algorithmic procedure defining our ad hoc evolutionary clustering method consists in passing through the sequence of snapshots, starting at the earliest time point and performing two distinct tasks for each cluster in the current snapshot:

  1. Associate the cluster and all clusters in its identity flow to the DC of its source set.

  2. Correct existing DC-clustering associations from previous time steps based on the marginalisations induced by the identity flow.

How far the procedure can reach back in time to determine a source set of a target cluster needs to be determined a priori through a parametrisation of the method. This parameter can be considered as the method’s history horizon and must be given in number of time steps. Henceforth, we will refer to a particular configuration of the method as -step history, where specifies the number of snapshots the algorithm reaches back in time.
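The overall pass through the snapshots can be sketched as follows, here deliberately simplified to a 1-step history (the full method generalises the bijective match to an h-step history and retrospectively corrects earlier associations via marginalisation; all names are illustrative assumptions):

```python
def assign_dcs(snapshots):
    """Assign a dynamic-cluster (DC) label to every cluster, using a 1-step
    bijective majority rule only (simplified sketch of the full procedure).

    snapshots: list of dicts, cluster id -> set of member ids.
    Returns:   list of dicts, cluster id -> integer DC label.
    """
    labels = []
    next_dc = 0
    for t, clusters in enumerate(snapshots):
        lab = {}
        prev = snapshots[t - 1] if t > 0 else {}
        if prev:
            # resident members contribute a data point to both snapshots
            residents = (set().union(*prev.values())
                         & set().union(*clusters.values()))

        def frac(a, b):
            ar = a & residents
            return len(ar & b) / len(ar) if ar else 0.0

        for cid, members in clusters.items():
            source = None
            if prev:
                # tracing: previous cluster holding this cluster's majority
                p = max(prev, key=lambda i: frac(members, prev[i]))
                # bijective condition: that cluster also maps back here
                if max(clusters, key=lambda j: frac(prev[p], clusters[j])) == cid:
                    source = p
            if source is not None:
                lab[cid] = labels[t - 1][source]   # identity propagates
            else:
                lab[cid] = next_dc                 # birth of a new DC
                next_dc += 1
        labels.append(lab)
    return labels
```

The second task of the full procedure, correcting earlier DC-cluster associations when an identity flow marginalises a splinter, is omitted here for brevity.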

Fig. 4: Illustration of a DC-cluster association with a 3-step history. Framed in orange is the target cluster at that will be associated to the DC of the source set of clusters at , colored in purple. The source set is defined as the earliest candidate set for a bijective match, given the history parameter, and consists of a single cluster in the current example. The association is based on a bijective majority match between the target cluster and the source set. The fluxes, highlighted in orange, show the tracing path of the target cluster, while those highlighted in purple describe the mapping path from the source set. Together, the purple and orange fluxes are called the identity flow (see LABEL:methodSI:tbl:algoDesc for further details). Fluxes coloured in blue designate tracing or mapping paths that are attached exclusively to clusters from the identity flow. By construction, these fluxes do not contribute to the identity propagation of a DC and are thus called marginal flows. The ensemble of involved clusters, indicated by coloured frames, will be associated to the purple DC as a result from the illustrated bijective match.

5 Consistency in evolutionary clustering

So far, we have focused on the features we consider desirable to define and, ultimately, to detect DCs. The presented method is a direct implementation of these features, thereby delivering qualitatively satisfying DC structures when using our custom features as quality criteria. A more objective approach to quality assessment consists in exploring the auto-correlation of the members between time points. Note that we consider a DC to be a sequence of sets of clusters; the members of a DC at a given time point, however, are all data sources that belong to the clusters within the set of clusters from this time point. The auto-correlation is given by

\[ \gamma_i(t) = \frac{|M_i(t) \cap M_i(t+1)|}{|M_i(t) \cup M_i(t+1)|} \tag{1} \]

where $M_i(t)$ and $M_i(t+1)$ are the sets of members of the DC $i$ at time points $t$ and $t+1$, respectively, and $|\cdot|$ denotes the cardinality of a set.

$\gamma_i(t)$, also known as the Jaccard index (Jaccard, 1901), indicates the fraction of identical members among all members present. As such, it can be understood as an assessment of membership consistency between neighbouring snapshots. To alleviate the notation, we consider a DC $i$ that is not present at snapshot $t$ to have a member set $M_i(t) = \emptyset$. We define the total consistency of a dynamic clustering as the average auto-correlation, with the average taken over all DCs and, for each DC, over all pairs of neighbouring time points at which the DC exists:

\[ \Gamma = \frac{1}{|\mathcal{D}|} \sum_{i \in \mathcal{D}} \frac{1}{T_i - 1} \sum_{t=1}^{T-1} \gamma_i(t) \tag{2} \]

with $T$ the total number of snapshots, $\mathcal{D}$ the set of all dynamic clusters and $T_i$ the number of snapshots the DC $i$ exists.

This definition excludes creation and destruction events, i.e. pairs in which the DC is present in only one of the two neighbouring snapshots. It thus indicates the overall consistency of existing structures.
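Under the stated conventions (a DC absent from a snapshot has an empty member set), the total consistency can be computed as in the following sketch; the function name and the data layout are assumptions:

```python
def total_consistency(dc_members):
    """Total consistency: the Jaccard auto-correlation of each DC's member
    set between neighbouring snapshots, averaged per DC over the pairs of
    snapshots where it exists, then averaged over all DCs. Pairs spanning a
    DC's creation or destruction contribute nothing (empty set convention).

    dc_members: dict, DC label -> dict {time point -> set of member ids}.
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a or b) else 0.0

    scores = []
    for series in dc_members.values():
        t_exist = sorted(series)            # snapshots where the DC exists
        if len(t_exist) < 2:
            continue                        # no neighbouring pairs to score
        pair_sum = sum(
            jaccard(series.get(t, set()), series.get(t + 1, set()))
            for t in range(min(t_exist), max(t_exist))
        )
        scores.append(pair_sum / (len(t_exist) - 1))
    return sum(scores) / len(scores) if scores else 0.0
```

If no external criterion fixes the history parameter, one can compute this score under each candidate parametrisation and keep the value that maximises it.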

If no external criteria exist that allow the determination of a suitable -step history parameter, we argue that choosing a value that maximises the total consistency score is a sensible choice. Doing so leads to the most consistent temporal structure within the range of DC structures that result from all possible parametrizations.

6 Discussion

We have presented an algorithm that detects dynamic clusters (DCs) in a sequence of snapshots of time stamped data. It is based on a precisely defined set of features that allow new DCs to form, old ones to disappear, and existing DCs to shrink, grow or transiently split and merge. The algorithm only depends on a time-series of cluster associations and is, therefore, compatible with any clustering method for non-relational and relational data. Thus the user can choose the most suitable clustering method for the particular study and/or dataset at hand. Furthermore, this minimal input means that our method scales efficiently with the size of the dataset, i.e. the number of data sources present. It scales linearly with the number of data sources, provided the number of clusters detected does not depend on the size of the dataset. Consequently, our method is unlikely to ever become the computational bottleneck in the analysis of temporal data. Only in the limit case where the number of clusters per snapshot scales linearly with the number of data sources is its scalability comparable to that of a typical clustering method (see LABEL:scalabilitySI for further details).

Identifying clusters in a dataset requires an algorithmic procedure that classifies data sources. Such an algorithm does not, in principle, need to stem from an explicit definition of what a cluster should be. It can be based on relations between single data points and thus only implicitly define the concept of a cluster - e.g. two feature vectors need to be closer than a certain distance in order for their respective data sources to belong to the same cluster. As a result, clustering methods implement a variety of, at least partially, ill-defined concepts of a cluster. This is a challenge not only for traditional clustering, both in relational and non-relational data, but perhaps even more so in data with a temporal resolution. Therefore, emphasis should be put on a clear outline of the features that define dynamic clusters, be it in the presentation of a new algorithmic procedure, as we do here, or in the application of an existing one.

Finally, this diversity in concepts calls for objective measures to quantify structures deduced by evolutionary clustering methods. With the total consistency defined in this study, we present a measure that not only allows an objective parametrisation of the novel method but also permits a quantitative comparison of different dynamic clusterings.

Our method detects transient homogeneous decompositions in dynamic clusters that are present at longer time scales. Here it is important to clarify that the detected dynamic clusters differ from the clustering that would result from simply expanding the aggregation period. This is, in part, because transient decompositions that are not homogeneous, i.e. where a dynamic cluster decomposes and some of its parts recombine with (parts of) a different dynamic cluster, might lead to a single cluster including all involved data sources when aggregating over the entire duration of the decomposition. Another difference is the information that our method retains about transient sub-clusters. Even if the algorithm determines that, for a given snapshot, several clusters belong to the same DC, the composition of this DC in terms of clusters remains available. This information on the temporal dynamics within a DC is lost if one simply expands the aggregation period. Access to the temporal within-DC dynamics is the stand-alone feature of this novel method and provides insights into within-DC processes, such as the presence and course of sub-clusters or fission-fusion processes (Aureli et al., 2008).

Supplementary Materials:

A time resolved clustering method revealing longterm structures and their short-term internal dynamics

SI 1 Snapshot representation of data with a temporal resolution

The presented method for the detection of dynamic clusters requires a time-series of clusterings of data sources. Time-series data present the particularity that a data source might contribute several data points, each associated with a discrete time stamp, thus forming a snapshot of the system associated to this particular time stamp. For some data this time stamp is given naturally through the process of data collection; for other data the individual data points require binning onto a series of discrete time stamps before a classification in the form of a clustering can be performed. Binning consists of aggregating multiple data points from the same data source such that for each snapshot a data source contributes at most a single data point.
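For non-relational data, such a binning step could look like the following sketch (aggregation by mean is only one possible choice; all names are illustrative assumptions):

```python
from collections import defaultdict

def bin_into_snapshots(records, window):
    """Bin time-stamped records into snapshots so that each data source
    contributes at most one data point per snapshot (here: the mean of its
    values within each window; any other aggregation could be substituted).

    records: iterable of (source_id, timestamp, value) tuples.
    window:  width of the aggregation window, in the timestamp's unit.
    Returns: dict, snapshot index -> {source_id: aggregated value}.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for source, ts, value in records:
        buckets[int(ts // window)][source].append(value)
    return {t: {src: sum(vals) / len(vals) for src, vals in per_src.items()}
            for t, per_src in buckets.items()}
```

Each resulting snapshot can then be clustered with any static method to produce the required time-series of clusterings.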

Formally, the raw data for a sequence of snapshots must be present in the form of a series of clusterings of individual data sources, each clustering associated to a time point. At any time point, the clustering consists of a set of clusters. For short, we will refer to a time point simply by its index.

Consider a DC of length $T_i$, with a mapping $\sigma$ from the index of the time-series of member sets of the DC to the time-series index of the sequence of snapshots. The function $\sigma$ is to be understood as the mapping that yields, for an index $k$ in the time-series of the DC, the corresponding index $\sigma(k)$ of the time-series of snapshots. Thus $\sigma$ allows an event within the life-span of the DC to be placed onto the time-series representing the dynamic dataset. Let $M(k)$ be the set of members of the DC at the time point $k$, with $1 \le k \le T_i$, of the time-series of its member set, respectively at the time point $\sigma(k)$ of the sequence of snapshots.

References

  1. Aureli, F., C. M. Schaffner, C. Boesch et al. 2008. Fission-fusion dynamics: new research frameworks. Current Anthropology 49(4):627–654.
  2. Backstrom, L., D. Huttenlocher, J. Kleinberg et al. 2006. Group formation in large social networks: membership, growth, and evolution. In: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 44–54. ACM.
  3. Chakrabarti, D., R. Kumar and A. Tomkins. 2006. Evolutionary clustering. In: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 554–560. ACM.
  4. Dakiche, N., F. B.-S. Tayeb, Y. Slimani et al. 2019. Tracking community evolution in social networks: A survey. Information Processing & Management 56(3):1084–1102.
  5. Dinh, T. N., Y. Xuan and M. T. Thai. 2009. Towards social-aware routing in dynamic communication networks. In: 2009 IEEE 28th International Performance Computing and Communications Conference, pages 161–168. IEEE.
  6. Fortunato, S. 2010. Community detection in graphs. Physics Reports 486(3-5):75–174.
  7. Fortunato, S. and D. Hric. 2016. Community detection in networks: A user guide. Physics Reports 659:1–44.
  8. Holme, P. 2015. Modern temporal network theory: a colloquium. The European Physical Journal B 88(9):234.
  9. Jaccard, P. 1901. Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bull Soc Vaudoise Sci Nat 37:241–272.
  10. Jain, A. K., M. N. Murty and P. J. Flynn. 1999. Data clustering: a review. ACM computing surveys (CSUR) 31(3):264–323.
  11. Kaufman, L. and P. J. Rousseeuw. 2009. Finding groups in data: an introduction to cluster analysis, volume 344. John Wiley & Sons.
  12. Mucha, P. J., T. Richardson, K. Macon et al. 2010. Community structure in time-dependent, multiscale, and multiplex networks. Science 328(5980):876–878.
  13. Palla, G., A.-L. Barabási and T. Vicsek. 2007. Quantifying social group evolution. Nature 446(7136):664.
  14. Palla, G., I. Derényi, I. Farkas et al. 2005. Uncovering the overlapping community structure of complex networks in nature and society. Nature 435(7043):814.
  15. Rosenberg, A. and J. Hirschberg. 2007. V-measure: A conditional entropy-based external cluster evaluation measure. In: Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL).
  16. Rosvall, M. and C. T. Bergstrom. 2010. Mapping change in large networks. PloS one 5(1):e8694.
  17. Sun, J., C. Faloutsos, S. Papadimitriou et al. 2007. GraphScope: parameter-free mining of large time-evolving graphs. In: Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 687–696. ACM.