Scaffoldings and Spines: Organizing High-Dimensional Data Using Cover Trees, Local Principal Component Analysis, and Persistent Homology


Paul Bendich (Department of Mathematics, Duke University, and Geometric Data Analytics, Inc.), Ellen Gasparovic (Department of Mathematics, Union College), Christopher J. Tralie (Department of Electrical and Computer Engineering, Duke University), and John Harer (Departments of Mathematics and of Electrical and Computer Engineering, Duke University, and Geometric Data Analytics, Inc.)

We propose a flexible and multi-scale method for organizing, visualizing, and understanding datasets sampled from or near stratified spaces. The first part of the algorithm produces a cover tree using adaptive thresholds based on a combination of multi-scale local principal component analysis and topological data analysis. The resulting cover tree nodes consist of points within or near the same stratum of the stratified space. They are then connected to form a scaffolding graph, which is then simplified and collapsed down into a spine graph. From this latter graph the stratified structure becomes apparent. We demonstrate our technique on several synthetic point cloud examples and we use it to understand song structure in musical audio data.

1 Introduction

We consider point cloud data, modeled as a set X of n points in a Euclidean space ℝ^D. Such clouds are hard to analyze directly when n and/or D is large. Subsampling techniques are often used in the former situation, and dimension reduction in the latter. In both cases, much care has to be taken to ensure that the reduction in the number of points and number of dimensions does not destroy essential features of the data. While theorems exist in both contexts, they tend to make assumptions about parameters (like intrinsic dimension) that may be unknown or that may vary widely across X. The latter problem often occurs when X is not sampled from a manifold, but rather from a stratified space.

A stratified space is a topological space that can be decomposed into manifold pieces (called strata), of possibly different dimension, all of which fit together in some uniform fashion. A key distinction is between maximal and non-maximal (also called singular) strata: briefly, a non-maximal stratum occurs where two or more maximal strata meet. See Figure 2 for an example.

This paper proposes a novel, fast, and flexible technique for organizing, visualizing, analyzing, and understanding a point cloud that has been sampled from or near a stratified space. The technique uses a data structure called the cover tree [6] and employs techniques derived from multi-scale local principal component analysis (MLPCA, [18]) and topological data analysis (TDA, [12]). Our method summarizes the strata that make up the underlying stratified space, exhibits how the different pieces fit together, and reflects the local geometric and topological properties of the space.

Instead of subsampling or reducing dimensions, we derive from X a multi-scale set of graphs, called the scaffoldings and the spine of X. At each fixed scale, a point in X belongs to a unique node in these graphs. A subset of points belongs to a common node only if the local geometry at each of those points is similar enough; that is, only if they belong to a common stratum. We also determine whether a node corresponds to a maximal or non-maximal stratum. In the former case, this suggests that a single dimension reduction technique might be employed on the points in that node. The edges of these graphs give information about the possible transition regions between different zones of local similarity.

1.1 Outline of method and paper

After a brief discussion of relevant graph theory (Section 2.1), we explain how to represent a point cloud X by a cover tree T, which is a rooted tree structure with an associated collection of subsets of X called “levels.” The points in each level, called “centers” or “nodes,” are evenly distributed and representative of the point cloud at that scale. See Section 2.2 for more details.

For the next step, described in detail in Section 3, we use MLPCA (Section 2.4) to select a subset of nodes N from T. Each node in N represents a collection of points for which the local structural information (as measured by the eigenvalues of covariance matrices) remains relatively constant. Other uniformity criteria, such as demanding that the points in a node form a single topologically simple cluster, can also be employed. The nodes in N form the vertices of the scaffolding graph G, whose edge set E is computed via the geometry of the ambient space.

Then we label each scaffolding node v with an estimated local dimension dim(v), using insights from MLPCA and the theory of stratified spaces; this is described in Section 4. We note that any other dimension-estimation technique can be used here instead. Formally, the scaffolding of X is the triple (N, E, dim), with dim : N → ℤ≥0. See Figures 6-(d) and 8-(b) for scaffolding examples.

At this juncture, the collection of points in each fixed node of G can be fed into a variety of dimension-reduction, topological, or otherwise analytical techniques. But one can also (Section 5) use graph-collapsing techniques, again informed by insights from the theory of stratified spaces, to produce a much smaller graph Sp(X), called the spine of X. Here a subset of points belongs to a fixed node in Sp(X) only if those points belong to the same stratum. See Figure 8-(c) for a simple example of a spine.

To illustrate this framework, we first run several experiments (Section 6) on low-dimensional synthetic datasets, including two that have already been studied in the literature. We also give a high-dimensional example (Section 7) with real data, constructing scaffoldings and spines for point clouds created by sliding window embeddings on musical audio.

In all of our synthetic examples, the actual structure of the original stratified space is apparent from the spine graph. We do not yet have a theorem that guarantees that this will always happen, but we close the paper (Section 8) with a precise conjecture, as well as proposals for future applications to real data.

1.2 Related work

Our methods do not compete with dimension-reduction techniques such as principal component analysis [16] or manifold learning. Instead, these techniques can be employed at will on the output nodes of our scaffoldings and spines.

MLPCA has been used before ([3], [2], [5]) to analyze data, but not, to our knowledge, in combination with the cover tree.

Many papers ([14], for example) have analyzed stratified spaces as mixtures of general manifolds, and one [24] as a union of flats. Using techniques derived mostly from Gaussian Mixture Models in the former, and Grassmannian methods in the latter, they prove theoretical guarantees about when a point belongs to a maximal stratum, but they do not say much about the singularities. Other work [4] uses topological data analysis to make theoretically-backed inferences about the singularities themselves; however, the algorithms are too slow to be practical.

The cover tree was first introduced in [6] as a way of performing approximate nearest neighbor queries in O(log n) time for a point cloud with n points in arbitrary metric spaces of a fixed dimension (though constants depend exponentially on the dimension). More recent work has shown that cover trees can be used for analysis of high dimensional point clouds with a low dimensional intrinsic structure [10]. Similarly to [10], we use the cover tree to decompose the point cloud into geometrically simpler parts, but we encode more explicitly the stratified space structure in our representation, and we also autotune which nodes at which levels represent each part, as explained in Section 3.

2 Background

2.1 Graph Theory

We give the basic vocabulary about graphs and rooted trees that we will need, the former for defining the spine graph and the latter for the cover tree. For the expert, we note that our graphs are undirected and simple.

A graph G = (V, E) consists of a set of vertices (or nodes) V and a collection of edges E, where each edge e = {u, v} is an unordered pair of vertices. Given such an edge, we say that u and v are adjacent, that e connects u and v, and that u and v are both incident to e. The link of a vertex v, written Lk(v), consists of all vertices adjacent to v, and the link of e = {u, v} is Lk(e) = Lk(u) ∩ Lk(v). Suppose f is a real-valued function defined on V. The upper link of a vertex v is Lk⁺(v) = {u ∈ Lk(v) : f(u) > f(v)}, and the upper link of the edge e = {u, v} is Lk⁺(e) = {w ∈ Lk(e) : f(w) > max(f(u), f(v))}.

Whenever S is a subset of V, we can form a new graph from G by deleting S. Here the new vertex set is V ∖ S, and two surviving vertices u and v are joined by an edge whenever they were adjacent in G or were both adjacent to some vertex in S; all edges from E that connected two non-S vertices are retained.

Alternatively, we can form a different graph from G by collapsing S. In this case, the new vertex set is obtained from V by removing S and then adding in a dummy vertex s. The edges are obtained from E by deleting all edges that connect pairs of vertices from S, and also drawing a new edge from s to each v ∈ V ∖ S such that v was connected to some vertex from S in G.
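The two operations above can be sketched directly in Python. This is a minimal sketch, not code from the paper; in particular, we read “both adjacent to some vertex in S” in the deletion rule as “sharing a common deleted neighbor,” which is our assumption.

```python
def neighbors(edges, w):
    """Vertices adjacent to w in a list of 2-tuple edges."""
    nbrs = set()
    for u, v in edges:
        if u == w:
            nbrs.add(v)
        elif v == w:
            nbrs.add(u)
    return nbrs

def delete_vertices(vertices, edges, S):
    """Delete subset S: survivors keep their edges, and two survivors
    become adjacent if they shared a deleted neighbor (our reading of
    'both adjacent to some vertex in S')."""
    S = set(S)
    new_edges = {frozenset((u, v)) for u, v in edges if u not in S and v not in S}
    for w in S:
        nbrs = sorted(neighbors(edges, w) - S)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                new_edges.add(frozenset((nbrs[i], nbrs[j])))
    return set(vertices) - S, new_edges

def collapse_vertices(vertices, edges, S, dummy):
    """Collapse S to a single dummy vertex: edges internal to S vanish,
    and the dummy inherits every edge leaving S."""
    S = set(S)
    new_edges = set()
    for u, v in edges:
        if u in S and v in S:
            continue
        if u in S:
            new_edges.add(frozenset((dummy, v)))
        elif v in S:
            new_edges.add(frozenset((dummy, u)))
        else:
            new_edges.add(frozenset((u, v)))
    return (set(vertices) - S) | {dummy}, new_edges
```

On the path 1–2–3–4, deleting {2} rewires 1 to 3, while collapsing {2, 3} to a dummy vertex leaves the path 1–s–4.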

A path of length k between u and v in G is a sequence of distinct vertices u = v_0, v_1, …, v_k = v such that each v_i and v_{i+1} are adjacent. A cycle is a path from a vertex back to itself. If there exists a path in G between every pair of vertices, then G is connected. Otherwise, G can be decomposed into two or more connected components.

A tree T is a connected graph with no cycles. If an arbitrary vertex is distinguished as a root, then T becomes a rooted tree. Each non-root node has a unique path to the root. The ℓth level of T consists of all vertices for which the length of this path is ℓ. We define a partial ordering on the vertices by declaring u ≼ v if and only if the path from u to the root passes through v, in which case we say that u is a descendant of v and that v is an ancestor of u. If u and v are one step away from each other, then we use child and parent instead.

2.2 Cover Tree

(a) Level 3
(b) Level 4
(c) Level 5
(d) Full Tree Rendering
Figure 1: An example of a cover tree on 250 points in ℝ². In (a)-(c), the centers at level ℓ are circled, and disks with radius 2^{-ℓ} are drawn in blue around each center. Points within the same subtree rooted at a center at that level are drawn with the same color. Centers at level ℓ+1 are marked with an X to illustrate the covering condition. The tree is drawn in (d), with points in the first five levels numbered with the level in which they first occur.
Definition 2.1

A cover tree on a point cloud X with a metric d is a leveled tree on X where each node is associated to one of the points in X (called “centers”), and where the collections C_ℓ ⊂ X (the set of points in X that are level ℓ centers) satisfy the following conditions:

  • (Nesting): C_ℓ ⊂ C_{ℓ+1}.

  • (Covering): For each p_{ℓ+1,i} ∈ C_{ℓ+1}, there exists a node p_{ℓ,j} ∈ C_ℓ so that d(p_{ℓ+1,i}, p_{ℓ,j}) < r_ℓ, where r_ℓ = 2^{-ℓ}.

  • (Packing): For all i ≠ j, d(p_{ℓ,i}, p_{ℓ,j}) > r_ℓ, where r_ℓ = 2^{-ℓ}.

Here, p_{ℓ,i} indicates the ith center at level ℓ.

Another way of phrasing the covering condition is to say that balls of radius 2^{-ℓ+1} centered at the points in C_ℓ will cover the whole point cloud. Exactly one such p_{ℓ,j} is designated as the parent of p_{ℓ+1,i} in the tree structure. The packing condition promotes levels which contain equally spaced centers. Note that in this definition, it is possible for a node at one level to be its own parent one level up. There is a minimum ℓ for which the size of C_ℓ is 1, and the node at that value of ℓ is designated as the root node. Similarly, there is a maximum ℓ for which C_ℓ = X. Figure 1 shows an example cover tree on a point cloud in ℝ².
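The three conditions of Definition 2.1 can be checked directly on a candidate family of levels. The sketch below is ours, and the radius convention r_ℓ = 2^{-ℓ} is an assumption matching the definition as reconstructed above.

```python
def check_cover_tree_levels(levels, dist):
    """Verify nesting, covering, and packing for a dict {level: set of centers},
    using the (assumed) radius convention r_l = 2**(-l) from Definition 2.1."""
    ls = sorted(levels)
    for l in ls:
        r = 2.0 ** (-l)
        pts = sorted(levels[l])
        # Packing: centers at level l must be pairwise further apart than r_l.
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if dist(pts[i], pts[j]) <= r:
                    return False
    for a, b in zip(ls, ls[1:]):
        # Nesting: C_a must be a subset of C_b.
        if not levels[a] <= levels[b]:
            return False
        # Covering: every level-b center needs a level-a center within r_a.
        r = 2.0 ** (-a)
        for p in levels[b]:
            if not any(dist(p, q) < r for q in levels[a]):
                return False
    return True
```

For instance, on the line, the levels {0: {0.0}, 1: {0.0, 0.6}} pass all three conditions, while {0: {0.0}, 1: {0.0, 0.3}} violate packing at level 1 (0.3 ≤ 2^{-1}).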

2.3 Stratified Spaces

Recall that a space M is called a manifold of dimension m if every point in M has a neighborhood that is homeomorphic to ℝ^m. The space on the left of Figure 2, consisting of a sphere intersecting with a horizontal plane at its equator and with a line piercing it at the north and south poles, is not a manifold: for example, every small enough neighborhood of the north pole looks like a plane pierced by a line.

Figure 2: Far left: A simple stratified space . Rest of figure: a stratification of into strata; through are maximal, and through are non-maximal.

A stratification of S is a decomposition of S into connected manifolds called strata. Each stratum is required to fit nicely within the whole of S; see [15] for rigorous versions of this condition and this entire discussion. The rest of the figure shows a stratification of S.

A stratum is called maximal if it is disjoint from the closure of any other stratum. The other strata are called non-maximal. If p is a point on a non-maximal stratum, then every small neighborhood of p within S looks like the intersection of two or more strata. In our example, the maximal strata are the labeled pieces in Figure 2; note that there is no reason for the maximal strata to all have the same dimension. Each of the non-maximal strata belongs to the closure of two or more of the maximal strata.

2.4 Local Principal Component Analysis

We assume the reader is already familiar with principal component analysis (PCA, [16]). Here we regard PCA simply as a machine that takes in a point cloud, computes a covariance matrix, and returns the eigenvectors along with the corresponding eigenvalues given in non-increasing order. This “eigeninformation” may then be used for dimensionality reduction.

The technique of multi-scale local principal component analysis (MLPCA, [18]) takes as input a point cloud X, a particular point p ∈ X, and a radius r, and returns as output the results of PCA run only on the points in X that lie within the Euclidean r-ball around p. This is typically done at multiple radius scales and at many center points; see Figure 3.
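A single local PCA computation, for one center point and one radius, can be sketched as follows (the function name `local_pca` is ours):

```python
import numpy as np

def local_pca(X, center, r):
    """PCA on the points of X (an n x d array) inside the Euclidean r-ball
    around `center`; returns eigenvalues in non-increasing order together
    with the matching eigenvectors of the local covariance matrix."""
    nbrs = X[np.linalg.norm(X - center, axis=1) <= r]
    if len(nbrs) < 2:                       # not enough points for covariance
        return np.zeros(X.shape[1]), np.eye(X.shape[1])
    C = np.cov(nbrs, rowvar=False)          # rows are observations
    w, V = np.linalg.eigh(C)                # eigh returns ascending order
    order = np.argsort(w)[::-1]
    return w[order], V[:, order]
```

For points sampled from a line segment in the plane, the second local eigenvalue vanishes, signaling one dominant local direction.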

Figure 3: Illustration of the technique of multi-scale local principal component analysis (MLPCA). Directions of largest variance are drawn as black arrows, and directions of smallest variance are drawn as orange arrows in this 2D example.

We close with an observation that will be crucial later. Suppose the point cloud X has been sampled from a stratified space S. After computing MLPCA at a few radii around a specific point p, one might consistently see k dominant eigenvectors, and thus try to conclude that the “local dimension” around p is k; in other words, that X looks locally like a k-flat near p. This is certainly possible, but need not be the case. Instead, p might be near a non-maximal stratum of S, and the local picture might be more like the intersection of a j-flat and a (k − j)-flat, for some non-zero value of j.

2.5 Persistent Homology

Algebraic topology associates to a topological space X a series of abelian groups H_k(X), called the homology groups of X, which measure its higher-order connectivity. The ranks of these groups are called the Betti numbers β_k: β_0 counts the number of connected components of X, β_1 (resp. β_2) counts the number of independent non-bounding loops (resp., closed surfaces) in the space, and so forth. See [21], for example, for more precise definitions.

Persistent homology transforms these algebraic notions into a measurement tool relevant to high-dimensional and noisy data. The key idea is that we are rarely interested in the Betti numbers of a particular fixed space; rather, we are concerned with the homological features that persist across a wide interval in a one-parameter filtration of a space, and we represent these features in a compact visual way as a persistence diagram in the plane. For precise definitions and statements, see for example [12] or [9].

Multi-scale clustering

The simplest homological invariant of a space is β_0, its number of components. Let X be the point cloud seen on the left of Figure 4. The most literal interpretation of β_0 here would be β_0 = n, where n is the number of points in X. On the other hand, most off-the-shelf clustering algorithms would report that X has three components, since that seems to be the most natural way to group the points. Some algorithms might report back two clusters (grouping the top two into one), depending on how parameters are chosen. Persistent homology in dimension zero gives a useful description of all of these answers along with a report of the scales at which they are valid.

More precisely, let us define, over a set of scales α ≥ 0, the space X_α to be the union of closed balls of radius α centered at each point in the cloud. As α increases, we imagine the point cloud gradually thickening into the ambient space, and more and more components begin to merge into one another. An α-value at which a merger occurs will be called a death, and we record the multi-set of these deaths as dots in the zero-dimensional persistence diagram Dgm_0(X), as shown on the right side of Figure 4. One way to translate between this diagram and a traditional number-of-clusters answer is to cut the diagram by a horizontal line, and then report the number of dots above that line. This shows that the answers “two clusters” and “three clusters” are both quite reasonable, since each could be obtained via a wide range of threshold parameters. On the other hand, the answer “nine clusters” is in no way stable, since there is only a very narrow range of threshold parameters that would report that number.
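For a finite point cloud, the deaths in Dgm_0 are exactly half the minimum-spanning-tree edge lengths, since two balls of radius α touch when 2α equals the distance between their centers. A Prim-style sketch (our own, not the paper's implementation):

```python
import numpy as np

def zero_dim_deaths(X):
    """Death values for the zero-dimensional persistence of the
    union-of-balls filtration of X (an n x d array): components merge at
    alpha equal to half the length of each minimum-spanning-tree edge."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = D[0].copy()              # cheapest known link into the tree
    deaths = []
    for _ in range(n - 1):
        best[visited] = np.inf      # never re-select a tree vertex
        j = int(np.argmin(best))
        deaths.append(best[j] / 2.0)  # merger occurs at alpha = dist / 2
        visited[j] = True
        best = np.minimum(best, D[j])
    return sorted(deaths)
```

Cutting the diagram by a horizontal line at height h then reports one cluster per death above h, plus the one component that never dies.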

Figure 4: A point cloud (left) and the zero-dimensional persistence diagram (right). The values α₁ and α₂ are half the distance between the two top clusters and between the right and bottom clusters, respectively.
Higher homology of point clouds.

In general, the k-dimensional persistence diagram Dgm_k(X) of a point cloud X will summarize the evolution of the k-dimensional homology group H_k(X_α) while X_α thickens as described above.

Consider the left side of Figure 5, which shows a point cloud X sampled, with some noise, from a closed curve in the plane. As X_α starts to thicken, all of the components very quickly collapse into one. However, there is also a new phenomenon that occurs: namely, the formation of a closed loop. We call this the birth of a one-cycle. When α increases enough to build a bridge across the neck of the shape, a new one-cycle is born. Each of these one-cycles is said to die at the α-value for which it is possible to contract the loop to a point within X_α. The (birth, death) pairs corresponding to the lifetime of each cycle are stored in Dgm_1(X), shown on the right side of the same figure.

Figure 5: A point cloud sampled from a closed curve in the plane (left) and the persistence diagram in dimension one (right).

The persistence of a dot (b, d) is d − b.

Suppose that X is a good sampling from a compact space S. There are then a series of “homology inference” theorems ([11], [9]) which prove that one can read off the “ground truth” homology of S by examining the dots in the persistence diagrams of X which sit, very roughly speaking, near the vertical axis (early birth) and are of high-enough persistence.

3 The Scaffolding

Given a point cloud X, one can let d be the Euclidean metric and build the full cover tree as above. Instead, at each node in the cover tree, we use a criterion to decide whether we will continue to construct the subtree rooted at that node, or whether all of the points contained in that subtree are sufficiently “uniform.” In the latter case, the entire subtree is compressed to a single node. At the end of this process, we output the leaves as N, the nodes of our scaffolding graph.

3.1 Eigenmetric threshold

Our adaptive tree building construction is based on MLPCA, as we now explain. First, we choose some increasing set of radii r_1 < r_2 < … < r_s. For each point p ∈ X and each r_i, let λ(p, r_i) be the vector of eigenvalues, in non-increasing order, that result from performing MLPCA on X with radius r_i and center point p. Then for any pair of points p, q ∈ X, we define

    d_E(p, q) = Σ_{i=1}^{s} ‖λ(p, r_i) − λ(q, r_i)‖,

where ‖·‖ is the usual Euclidean norm. We shall refer to d_E as the eigenmetric on X. Points with similar “eigenprofiles” across multiple scales will have small eigenmetric distance. Note that d_E is in fact only a pseudometric, as d_E(p, q) may be 0 for p ≠ q (e.g., consider two copies of the same point cloud laid out side by side).
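A direct implementation of the eigenmetric is short; the helper `local_eigs` and its handling of balls with too few points are our choices, not prescribed by the paper.

```python
import numpy as np

def local_eigs(X, center, r):
    """Eigenvalues (non-increasing) of the covariance of the points of X
    within the Euclidean r-ball around `center`; zeros if the ball is
    too sparse (an implementation choice)."""
    nbrs = X[np.linalg.norm(X - center, axis=1) <= r]
    if len(nbrs) < 2:
        return np.zeros(X.shape[1])
    return np.sort(np.linalg.eigvalsh(np.cov(nbrs, rowvar=False)))[::-1]

def eigenmetric(X, p, q, radii):
    """Sum over scales of the Euclidean distance between the local PCA
    eigenvalue vectors at p and q (a pseudometric on X)."""
    return sum(np.linalg.norm(local_eigs(X, p, r) - local_eigs(X, q, r))
               for r in radii)
```

Points deep inside a one-dimensional arm have eigenprofiles close to (λ, 0) at every scale, so they are eigenmetric-close to each other and eigenmetric-far from points in a genuinely two-dimensional region.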

The eigenmetric is used to construct N as follows. Fix a level ℓ of T. At each node p on this level, we perform an eigenmetric threshold test on the points from X associated with p that would end up in its subtree upon further construction. To do this, we look at the points associated with each child of p, and we only further subdivide a node when the eigenmetric between a pair of children exceeds an eigenthreshold ε. Otherwise, we stop constructing the subtree below p and instead add p to N. At the end of this process, every point in X belongs to a unique node in N. We refer to the set of points belonging to a given node v as the cluster of v.

Parts (a)-(c) of Figure 6 show this process. Note that one must choose the eigenthreshold ε. To reap the multi-scale benefits, one should construct the scaffolding nodes for a range of eigenthreshold values and witness how they change.


Figure 6: Results of the adaptive cover tree construction corresponding to three different eigenthreshold values (a)-(c) for a point cloud sampled from a sphere with an adjoining curve. Points belonging to a single node of N are given the same color. Also shown is (d): a scaffolding for the last cover tree with automatically determined distance threshold δ.

Additional subdivision criteria

It may be desirable to have the set of points belonging to a specific node in N satisfy a variety of “simplicity” properties: for example, one might want them to look like one cluster rather than many. This can be achieved by introducing a second criterion based on a choice of multi-scale clustering technique. For example, one might compute the zero-dimensional persistence diagram on the set of points in a node to quantify how much thickening is needed before these points become one component. If this value exceeds a user-specified threshold, the node is further subdivided. We follow this practice in some of our examples below.

One could go even further, and demand that the points in a node look genuinely like a contractible clump, rather than having any large voids or holes. This can be done by computing the higher-dimensional persistence diagrams on the points belonging to the node, and then subdividing that node if any dots of persistence higher than a user-specified threshold appear in any of the diagrams. To make this computationally feasible, one can leverage recent approximation schemes [23] for persistent homology that involve the cover tree.

3.2 Building the scaffolding

Now we describe how to build the scaffolding graph G on the nodes N. A pair of nodes u, v ∈ N forms an edge in G if and only if the Euclidean distance between u and v is below some threshold δ. The value of this threshold can be chosen manually, or one may use a variety of automatic procedures based on prior assumptions. For example, if one feels that X comes from an underlying connected space, then one might desire that G be connected. To enforce this, one can use zero-dimensional persistent homology on N and let δ be the smallest thickening value that creates one component from N. To avoid choosing a distance threshold, one may select a range of values for δ and watch how the scaffolding changes as the distance threshold varies. See part (d) of Figure 6 and part (b) of Figure 8 for examples of scaffolding graphs.
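The edge rule can be sketched as below. The text leaves open what “the Euclidean distance between u and v” means for clusters of points; we assume minimum inter-cluster distance here, though distance between node centers would be another reasonable reading.

```python
import numpy as np

def scaffolding_edges(clusters, delta):
    """clusters: dict node_id -> (m x d) array of member points.
    Connect two nodes when the minimum distance between their clusters
    falls below delta (our interpretation of inter-node distance)."""
    ids = sorted(clusters)
    edges = set()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            A, B = clusters[a], clusters[b]
            d = np.min(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2))
            if d < delta:
                edges.add((a, b))
    return edges
```

With three singleton clusters at x = 0, 1, and 10 and δ = 2, only the first pair is joined, so choosing δ large enough to connect everything plays the role of the persistence-based automatic threshold.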

4 Local Dimension and other Data

4.1 Local Dimension Estimation

Once the scaffolding is constructed, we compute a nonnegative-integer-valued function dim on N. If the cluster of a node v comes from well within a maximal stratum, then dim(v) estimates the local dimension of that stratum. We describe our choice of dim here, but point out that any number of other intrinsic dimension estimation techniques (for example, the box-counting method of [14]) can be substituted into our essential framework.

For each node v ∈ N, we begin by performing PCA on a point set made up of the union of the cluster of v and all clusters corresponding to neighbors of v in G. We compute the square roots of the eigenvalues from PCA, normalizing so that their values range between 0 and 1. We then compute the differences between successive eigenvalues, including the difference between the smallest eigenvalue and 0, and use the location of the largest “eigengap” as our initial estimate of local dimension near that cluster.
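The eigengap rule above can be sketched as follows (a minimal version of the initial estimate, before refinement):

```python
import numpy as np

def eigengap_dimension(points):
    """Initial local-dimension estimate: largest gap between successive
    normalized square-rooted PCA eigenvalues, including a final gap
    between the smallest value and 0."""
    w = np.linalg.eigvalsh(np.cov(points, rowvar=False))[::-1]  # non-increasing
    s = np.sqrt(np.clip(w, 0.0, None))       # guard tiny negative eigenvalues
    if s[0] <= 0:
        return 0                              # degenerate cluster
    s = s / s[0]                              # normalize into [0, 1]
    padded = np.concatenate([s, [0.0]])       # append 0 for the final gap
    gaps = padded[:-1] - padded[1:]
    return int(np.argmax(gaps)) + 1           # gap after the k-th value -> dim k
```

A planar grid in ℝ³ yields spectra close to (1, 1, 0) after normalization, so the largest gap sits after the second value and the estimate is 2; a line yields 1.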

Refining dimension estimates

Next, we refine these initial estimates by building in knowledge of maximal vs. non-maximal strata in generic stratified spaces. Specifically, we consider the nodes u which have at least one neighbor v such that dim(v) < dim(u). If u belongs to the set of such nodes, then it is possible that the cluster of u was sampled from a non-maximal stratum. This is due to the fact that, when two or more maximal strata come together, PCA will see the dimension near the intersection as the sum of the dimensions of the individual maximal strata.

Now if u truly represents a non-maximal stratum where two or more maximal strata come together to create a singularity, then (1) its link in the scaffolding should be disconnected (with the connected components corresponding to the nearby maximal strata) and (2) the link should include nodes whose dimensions (dim-values) sum to at least its estimated dimension; i.e., taking one representative v_j from each connected component of the link,

    dim(v_1) + dim(v_2) + … + dim(v_k) ≥ dim(u).

If either of these conditions fails to hold, we conclude that the dim-values were miscalculated, and we relabel the nodes accordingly.
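The two refinement conditions can be checked on a labeled scaffolding graph. This sketch is ours; in particular, taking the maximum dim-value in each link component as its representative is an implementation choice.

```python
def plausible_nonmaximal(adj, dim, u):
    """Test conditions (1) and (2) for a candidate singular node u, given
    adjacency sets `adj` and integer labels `dim`: the link of u must be
    disconnected, and one representative per link component (here, the
    max-dim node) must have dims summing to at least dim[u]."""
    link = set(adj[u])
    comps, seen = [], set()
    for v in sorted(link):                    # components of the induced link
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend((adj[w] & link) - comp)
        comps.append(comp)
        seen |= comp
    if len(comps) < 2:                        # condition (1): disconnected link
        return False
    return sum(max(dim[v] for v in c) for c in comps) >= dim[u]
```

For a plane pierced by a line, a 3-dimensional node at the singularity has a link with a plane component (dim 2) and two line components (dim 1 each), and 2 + 1 + 1 ≥ 3, so it survives as a non-maximal candidate.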

Figure 7 shows an example, with points sampled from a plane intersecting a line in ℝ³. The nodes of the scaffolding are shown as black asterisks, and the local dimension estimation is illustrated by color in the figure. Red points belong to nodes labeled 3-dimensional, and since such nodes have neighbors in the scaffolding with smaller dimensions, they are candidates for non-maximal strata. Green points scattered on the plane belong to nodes labeled 1-dimensional, so the blue 2-dimensional nodes that are connected to such 1-dimensional nodes are also candidates for being non-maximal strata. After refining the estimates using the theory above, the 1-dimensional nodes scattered on the plane are determined to be errors and are relabeled with improved dimension estimates. Parts (a) and (b) show the estimates before and after refinement, respectively.


Figure 7: Results of local dimension estimation prior to refining the estimates (a) and after the refinement process (b).

The local dimension estimation process is also a multi-scale process. It is possible that performing PCA on a node and its neighbors in the scaffolding at a small distance threshold does not give an accurate representation of the local geometric structure near the node. Perhaps there are too few points to capture the local picture, or perhaps the shape of the clusters is such that the dimension estimation from PCA would result in an error. To improve one’s understanding of the intrinsic local dimension, it is best to employ a generalized version of MLPCA in the local dimension estimation process. That is, one may preserve the value of δ used for the scaffolding but gradually relax the threshold for including more clusters in the PCA computations for each node. For example, for the point cloud in the top of Figure 8, although the chosen distance threshold δ yields a connected scaffolding graph, performing PCA only on a cluster and those clusters connected to it in the scaffolding yields a too-local understanding of the dataset for this choice of δ. The results of the local dimension estimation shown in Figure 8 were obtained by computing, for each cluster, PCA on the clusters within a distance threshold range of approximately 0.12 to 0.23.

Topology of strata

As an option to understand the topology of the strata, one can perform persistence on the points — or better, on only the node centers — corresponding to each dimension-based connected component in the scaffolding. One can then decorate the nodes with vectors of persistent Betti numbers; see Figure 10 in Section 6. We point out that due to the flexible nature of our basic framework, one could also apply a different manifold learning technique of one’s choice to obtain information regarding the topology of the strata.

5 The Spine

Local dimension estimation works best when there are many cover tree nodes, which is a result of choosing a small eigenthreshold. Consequently, the scaffolding G has many vertices. We now describe a further procedure that radically simplifies G to produce a new graph on a much smaller vertex set. This graph, called the spine of X, provides a streamlined and efficient summary of the strata that make up the stratified space which produced X.


Figure 8: Local dimension estimation on a point cloud (a), as well as the scaffolding (b) and spine (c), for a fixed eigenthreshold, a persistence threshold of 0.05, and distance threshold δ. Asterisks indicate node centers and green, blue, and red points indicate 1-, 2-, and 3-dimensional areas, respectively.

Using the procedure above, N is partitioned into maximal nodes N_max and non-maximal ones N_non. We then further partition N_non by dim-value and locate the connected components therein. These connected components are then collapsed into single nodes (as described in Section 2.1). Note this retains all edges that previously connected these components to nodes of other dimensions.

Now we process the maximal nodes N_max, starting by partitioning them by dim-value and then taking connected components within this partition. Let K be one such component. We divide K further into boundary nodes and interior nodes, where a node is in the boundary if it is connected to at least one node of a different dimension. The interior nodes are deleted (as described in Section 2.1). Then, for any edge {u, v} that connects two boundary nodes, we check whether Lk⁺(u) = Lk⁺(v); in other words, we check whether u and v are connected to the same set of non-maximal nodes. If so, then the edge is collapsed. Running through this process for each component of N_max produces the vertices and edges of the spine Sp(X). We note that vertices with different dim-values never get collapsed together. Therefore, it makes sense to label each node in Sp(X) with the dim-value of any node that was collapsed into it. This labeling produces the function dim on the spine, and we have our spine Sp(X).
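A much-simplified version of this collapse — merging every connected component of equal-dim nodes into one spine node, and omitting the boundary/interior bookkeeping within maximal strata — can be sketched with a union-find; this is our simplification, and it agrees with the full procedure only when the boundary structure is simple.

```python
def simplified_spine(adj, dim):
    """Collapse each connected component of equal-dim scaffolding nodes
    to a single spine node (a simplification of Section 5)."""
    parent = {v: v for v in adj}
    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u in adj:                      # merge across equal-dim edges
        for v in adj[u]:
            if dim[u] == dim[v]:
                parent[find(u)] = find(v)
    spine_nodes = {find(v) for v in adj}
    spine_edges = set()
    for u in adj:                      # keep edges between distinct groups
        for v in adj[u]:
            a, b = find(u), find(v)
            if a != b:
                spine_edges.add(frozenset((a, b)))
    spine_dim = {r: dim[r] for r in spine_nodes}
    return spine_nodes, spine_edges, spine_dim
```

For a scaffolding of a plane pierced by a line (three 2-dim plane nodes, one 3-dim singular node, two 1-dim line nodes), this yields four spine nodes joined through the singular node.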

For example, the scaffolding G for the point cloud in Figure 8 (a) is in part (b) of the same figure. If we run this process on G, we obtain the spine in part (c). Note that this spine is in fact the Hasse diagram for the partial ordering on the strata of a plane pierced by two lines (see Section 8).

6 Synthetic Examples

This section demonstrates our algorithms on several synthetic examples.[1]

[1] Code to run these examples appears at

We begin by considering two synthetic examples that appeared in previous related work. First, as in [14] and [10], we sampled 300 points from a spiral and 800 points from a plane; the spiral and plane intersect transversely. We constructed the adaptive cover tree and scaffolding for suitable choices of eigenthreshold and distance threshold. Part (a) of Figure 9 shows the refined local dimension estimation, and part (b) displays the spine. The red points, labelled as 3-dimensional, indicate the presence of a non-maximal stratum in the underlying stratified space: namely, the singular stratum where the two 1-dimensional spiral pieces intersect the 2-dimensional plane. The spine captures how the strata are situated relative to one another.

Our second example, a sphere with an attached curve-fragment, also appeared in [14]. Part (c) of Figure 9 shows the refined local dimension estimation and part (d) shows the spine. Again our method identifies the important location where two maximal strata of different dimensions meet. We point out that the same spine would have been produced by a point cloud sampled from a 2-plane with a ray sticking out of it. Persistent homology, computed on each node, distinguishes these two cases. In part (d), the spine nodes are decorated with vectors of persistent Betti numbers.


Figure 9: (a) Local dimension estimation and (b) spine for the spiral-plane example. (c) Local dimension estimation and (d) spine for sphere-curve example.

For another example that incorporates topological information, we constructed a point cloud sampled from three lollipop-like structures (6000 points in total) belonging to three distinct, mutually orthogonal planes. Each lollipop-like structure is made up of a circle and a line segment emanating out from a point on the circle, and the three line segments meet at a single point at each of their bases. See Figure 10. Note that the three-dimensional non-maximal node corresponds to the point of contact of the three line segments, and the two-dimensional non-maximal nodes occur at the intersections of the circles and line segments. Every node consists of a single persistent connected component. Some nodes have been decorated with vectors of persistent Betti numbers; these are the nodes with non-trivial 1-dimensional homology. A non-decorated node can be assumed to have trivial higher persistent homology, i.e., a single connected component and no loops.



Figure 10: (a) The scaffolding and (b) spine for a point cloud sampled from three lollipop-like structures in ( and ).

Note that our rendering preserves an essential topological feature: the actual loops. A linear technique such as PCA would often destroy such features entirely.

For our final synthetic example, we consider a 2-plane and a 3-plane intersecting along a 1-dimensional subspace in . There are two nodes of dimension 2, one 3-dimensional node, and one 4-dimensional node in the spine. It is evident from the spine, seen in Figure 11, that it is impossible to move from one of the 2-dimensional maximal nodes to the 3-dimensional maximal node without passing through the 4-dimensional non-maximal node. Here the cover tree and scaffolding were constructed using eigenthreshold and distance threshold .

Figure 11: The spine for a point cloud sampled from a -plane and -plane intersecting along a line in .

7 Music Structure Visualization

We turn to the domain of music analysis to test our geometric models on real data. Specifically, we address the problem of music structure analysis by representing a song as a point cloud in a feature space and using our methods to automatically group distinct musical segments, such as the chorus and verses, into strata. The problem of automatically recognizing song structure and detecting boundaries between song segments is long-standing in the music information retrieval community [22]. Of particular note is recent work in [20], which applies spectral clustering to nearest-neighbor graphs of a point cloud in a feature space related to ours, with additional edges added between points adjacent in time. The eigenvectors corresponding to the smaller eigenvalues can then be used to encode membership in different song segments, and taking increasing numbers of eigenvectors yields a multi-scale notion of structure. This matters because the notion of a song “segment” is otherwise ill-posed [19]. Compared to this and other previously published techniques, we are less interested in precise segmentation and more interested in the potential for visualization using the spines from our geometric models; we note, however, that our framework can also naturally handle multi-scale information by varying eigenthresholds or persistence cutoffs in the cover tree construction.
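To make the comparison concrete, here is a minimal NumPy sketch of the spectral idea in [20]: embed a point cloud via the eigenvectors of the smallest graph-Laplacian eigenvalues of a symmetric k-nearest-neighbor graph. The temporal edges and normalization choices of [20] are omitted, and all parameters are illustrative.

```python
import numpy as np

def laplacian_embedding(X, k=5, n_vecs=3):
    """Embed each point of X by the eigenvectors of the n_vecs smallest
    eigenvalues of the unnormalized Laplacian of a symmetric kNN graph.
    Points in well-separated clusters get near-identical coordinates."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:  # skip self at index 0
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(1)) - W                 # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    return vecs[:, :n_vecs]
```

On a cloud made of two distant clusters, the first two columns are constant on each cluster, so they act as (soft) segment-membership indicators; adding more eigenvectors refines the partition, which is the multi-scale behavior described above.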

7.1 Music Feature Space

Before showing examples of our pipeline on music data, we first give a high-level overview of our scheme for turning music into point clouds, which are then fed into our pipeline. Feature extraction on the audio time series is essential: music data typically consists of very noisy, high-frequency signals sampled at 22050 or 44100 samples per second, which are difficult to analyze in raw form. The first group of features we use are “timbral features,” simple shape descriptors computed on top of each Short-Time Fourier Transform (STFT) frame that summarize relative information across the spectrum [25]. We also use the popular Mel-Frequency Cepstral Coefficients (MFCC) [8], which are perceptually motivated coarse descriptors of the shape of the spectral envelope of an STFT window. Absolute pitch information is blurred in the above features, so we also use a feature set known as “chroma” to capture complementary information about notes and chords (see [1], [13]).
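As a concrete example of a timbral feature, the sketch below computes the spectral centroid of each STFT frame, i.e., the “center of mass” of the magnitude spectrum. This is one representative feature of the timbral group, not our full feature set, and the window and hop sizes are illustrative choices.

```python
import numpy as np

def spectral_centroid(signal, sr=44100, win=2048, hop=1024):
    """Spectral centroid per STFT window: the magnitude-weighted mean
    frequency of each Hann-windowed frame."""
    freqs = np.fft.rfftfreq(win, d=1.0 / sr)
    centroids = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hanning(win)
        mag = np.abs(np.fft.rfft(frame))
        total = mag.sum()
        centroids.append((freqs * mag).sum() / total if total > 0 else 0.0)
    return np.array(centroids)

# Sanity check: a pure 440 Hz tone should have its centroid near 440 Hz.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
c = spectral_centroid(tone, sr)
```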

All of the features described so far are typically computed over a very small stretch of audio, which we call a “window,” chosen so that each spectrogram frame is nearly stationary ([25] calls this an “analysis window”). In our application, we typically take this window size to be 2048 samples for audio sampled at 44100 Hz, or approximately 50 milliseconds. With 5 timbral features, 12 MFCC coefficients, and 12 chroma features, this gives a total of 29 features per window. The features in these windows alone are not appropriate for structure modeling, however, because the windows are simply too short in time, causing rapid variations from one window to the next and preventing the capture of higher-level information. To make each feature vector more distinct and the resulting point cloud smoother, we aggregate sets of windows into larger time scales called “blocks” ([25] calls these “texture windows”). In each block, we take the mean and standard deviation of each feature over the windows contained in that block, for a total of 29 × 2 = 58 features, and add an additional “low energy measure” over the block (as recommended in [25]). This leaves a total of 59 feature dimensions per block. Since the features are on different scales, we also normalize each dimension by its standard deviation. In our applications, we typically take the set of all blocks consisting of 150 consecutive windows, spanning roughly 7 seconds per block. In sum, we transform each song into a 59-dimensional point cloud, where each point represents perceptually motivated statistics over a 7-second window.
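The block-aggregation step can be sketched as follows. The low-energy measure and per-dimension normalization are omitted for brevity, and the sliding-block arrangement (one block starting at every window) is our illustrative reading of “all blocks of 150 consecutive windows.”

```python
import numpy as np

def blocks_from_windows(window_feats, block_len=150):
    """Aggregate per-window features (n_windows x n_feats) into blocks:
    the mean and standard deviation of each feature over block_len
    consecutive windows, concatenated per block."""
    n, d = window_feats.shape
    out = []
    for start in range(0, n - block_len + 1):
        blk = window_feats[start:start + block_len]
        out.append(np.concatenate([blk.mean(axis=0), blk.std(axis=0)]))
    return np.array(out)  # shape: (n - block_len + 1, 2 * d)

# Example: 600 windows of 29 features -> 451 blocks of 58 features each.
feats = np.random.default_rng(1).normal(size=(600, 29))
X = blocks_from_windows(feats)
```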

7.2 Spines on Music Data

We now apply our pipeline to two real songs: “Bad” by Michael Jackson (Figure 12 (a)) and “Lucy in the Sky with Diamonds” by The Beatles (Figure 13 (a)). (Interactive versions of these examples, which allow one to play the song and explore the spine at the same time, are available online.) We visualize the point clouds by projecting onto the first three principal components (in both cases, approximately of the variance is explained by these components). The spine graphs (Figure 12 (b) and Figure 13 (b)) visibly capture high-level structural elements of the music. Areas of greater musical complexity (e.g., verses, chorus) tend to be characterized by higher-dimensional strata, whereas transitions between distinct parts of the song are represented by one-dimensional strata. In “Bad,” an additional distinction appears between the instrumental-only part of the verse and the part with the same vamp (repeating) instrumentals with vocals on top; these are acoustically similar but different enough to be clustered into adjacent strata in our scheme. Overall, our spines provide musically relevant visuals for acoustic clustering and transitions.

8 Discussion

This paper proposes a flexible and fast new technique for high-dimensional data organization, visualization, and analysis. The most interesting datasets to apply this method to are those sampled from stratified spaces. We have demonstrated the utility of the method with some experiments on synthetic data, and our main application in this work was to music structure analysis. Going forward, we plan to build on our methods to perform classification by genre or by artist, and we could also apply our framework to a time series corpus with a metric between different time series, in addition to applying it to sliding windows of a single time series as we did in Section 7.2.

Our framework is quite general and is well-suited for many kinds of data, not just music. For instance, in the domain of natural language processing, we could analyze a collection of documents using some metric between them, with the aim of separating out different topics into different strata. With a simple bag-of-words approach, there is also a natural interpretation of the cover tree centers and eigenvectors within each stratum: the centers capture the words that make up the main topic or conversation, while the eigenvectors capture groups of words that differentiate documents within the same conversation. In a different application, we could apply our pipeline to group EKG measurements for different types of heart disease patients. Finally, we could use our pipeline to help users efficiently navigate large collections of images or even of 3D shapes (building off of the work of [17]).

We end by outlining some conjectures that we plan to pursue in future work. First, the scaffolding and the spine are both representations of the point cloud . It would be desirable to prove a stability result relating these to and/or for some point cloud that is near in, say, Hausdorff distance. Second, suppose that is a partially ordered set [7]. Recall that the Hasse diagram of this partial ordering is a directed graph , with an arc from to if and only if . If is a stratification of a space , then there is a partial ordering on the set of strata, where if , the local neighborhood of a point in looks like the intersection of with one or more other strata. In each of the examples we have shown, the spine of our point cloud is exactly the Hasse diagram of the partial ordering on the stratification of the space that was sampled from. Compare, for example, the spine in part (c) of Figure 8 with the space underlying the point cloud in part (a); arrows are not drawn, but imagine that the lower index always points to the higher. Of course, parameters had to be chosen to make this true. We conjecture that this correspondence should always hold under appropriate sampling conditions.
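The Hasse diagram of a finite poset is just the transitive reduction of its order relation, which is simple to compute. The sketch below illustrates this on a hypothetical three-stratum poset, modeled loosely on the sphere-with-curve example (the stratum names and closure relation are our assumptions).

```python
def hasse_edges(elements, leq):
    """Edges of the Hasse diagram of a finite poset: keep x -> y exactly
    when x < y and no z lies strictly between them (transitive reduction)."""
    edges = []
    for x in elements:
        for y in elements:
            if x != y and leq(x, y):
                if not any(z not in (x, y) and leq(x, z) and leq(z, y)
                           for z in elements):
                    edges.append((x, y))
    return edges

# Hypothetical strata ordered by "lies in the closure of": the junction
# point sits below both the curve and the sphere.
closure_pairs = {("point", "curve"), ("point", "sphere")}
leq = lambda a, b: a == b or (a, b) in closure_pairs
edges = hasse_edges(["point", "curve", "sphere"], leq)
# edges == [('point', 'curve'), ('point', 'sphere')]
```

Under the conjecture above, a spine computed from a well-sampled point cloud would match the output of this reduction applied to the stratification's closure order.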


Figure 12: (a) The point cloud corresponding to Michael Jackson’s song “Bad,” colored by dimension within each node and with the scaffolding edges included. (b) The spine graph for “Bad.”


Figure 13: The point cloud (a) and spine graph (b) corresponding to The Beatles’ song “Lucy in the Sky with Diamonds.”


Acknowledgments

The first and third authors were partially supported by NSF award BIGDATA 1444791, and the first was also partially supported by NSF award WBSE 3331753. The fourth author was partially supported by Research Training Grant NSF-DMS award 1045133 and an NSF Graduate Fellowship. The second author thanks the Department of Mathematics at Duke University for hosting her during the fall of .


  • [1] Mark A. Bartsch and Gregory H. Wakefield. To catch a chorus: Using chroma-based representations for audio thumbnailing. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 15–18. IEEE, 2001.
  • [2] D. Bassu, R. Izmailov, A. McIntosh, L. Ness, and D. Shallcross. Application of multi-scale singular vector decomposition to vessel classification in overhead satellite imagery. In 7th Annual International Conference on Digital Image Processing, to appear.
  • [3] D. Bassu, R. Izmailov, A. McIntosh, L. Ness, and D. Shallcross. Centralized multi-scale singular value decomposition for feature construction in LIDAR image classification problems. In Applied Imagery Pattern Recognition Workshop (AIPR), 2012 IEEE, pages 1–6, Oct 2012.
  • [4] P. Bendich, B. Wang, and S. Mukherjee. Local homology transfer and stratification learning. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1355–1370. SIAM, 2012.
  • [5] Paul Bendich, Ellen Gasparovic, John Harer, Rauf Ismailov, and Linda Ness. Multi-scale local shape analysis and feature selection for machine learning applications. Proc. Int. Joint Conf. Neur. Net., 2015.
  • [6] Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd international conference on Machine learning, pages 97–104. ACM, 2006.
  • [7] Garrett Birkhoff. Lattice Theory. Amer. Math. Soc., 1948.
  • [8] Bruce P Bogert, Michael JR Healy, and John W Tukey. The quefrency alanysis of time series for echoes: Cepstrum, pseudo-autocovariance, cross-cepstrum and saphe cracking. In Proceedings of the symposium on time series analysis, volume 15, pages 209–243. chapter, 1963.
  • [9] F. Chazal, D. Cohen-Steiner, M. Glisse, L. Guibas, and S. Oudot. Proximity of persistence modules and their diagrams. In Proceedings of the 25th annual symposium on Computational geometry, SCG ’09, pages 237–246, New York, NY, USA, 2009. ACM.
  • [10] Guangliang Chen, Anna V Little, and Mauro Maggioni. Multi-resolution geometric analysis for data in high dimensions. In Excursions in Harmonic Analysis, Volume 1, pages 259–285. Springer, 2013.
  • [11] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. Discrete Comput. Geom., 37(1):103–120, January 2007.
  • [12] H. Edelsbrunner and J. Harer. Computational Topology: An Introduction. American Mathematical Society, 2010.
  • [13] Takuya Fujishima. Realtime chord recognition of musical sound: A system using common lisp music. In Proc. ICMC, volume 1999, pages 464–467, 1999.
  • [14] Gloria Haro, Gregory Randall, and Guillermo Sapiro. Translated poisson mixture model for stratification learning. International Journal of Computer Vision, 80(3):358–374, 2008.
  • [15] Bruce Hughes and Shmuel Weinberger. Surgery and stratified spaces. Surveys on Surgery Theory, 2:311–342, 2000.
  • [16] Karl Pearson. LIII. On lines and planes of closest fit to systems of points in space. Philosophical Magazine Series 6, 2(11):559–572, 1901.
  • [17] Vladimir Georgievich Kim. Understanding the structure of large, diverse collections of shapes. 2013.
  • [18] G. Lerman. Quantifying curvelike structures of measures by using Jones quantities. Communications on Pure and Applied Mathematics, 56(9):1294–1365, 2003.
  • [19] B. McFee, O. Nieto, and J.P. Bello. Hierarchical evaluation of segment boundary detection. In 16th International Society for Music Information Retrieval Conference, ISMIR, 2015.
  • [20] Brian McFee and Daniel PW Ellis. Analyzing song structure with spectral clustering. In ISMIR 2014: Proceedings of the 15th International Conference on Music Information Retrieval: October 27-31, 2014, Taipei, Taiwan, 2014.
  • [21] J. R. Munkres. Elements of Algebraic Topology. Addison Wesley, 1993.
  • [22] Jouni Paulus, Meinard Müller, and Anssi Klapuri. State of the art report: Audio-based music structure analysis. In ISMIR, pages 625–636, 2010.
  • [23] Donald R. Sheehy. Linear-size approximations to the Vietoris-Rips filtration. Discrete & Computational Geometry, 49(4):778–796, 2013.
  • [24] B. St. Thomas, L. Lin, L.-H. Lim, and S. Mukherjee. Learning Subspaces of Different Dimension. arXiv e-prints, April 2014.
  • [25] George Tzanetakis and Perry Cook. Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5):293–302, 2002.