Stabilizing the unstable output of persistent homology computations

Paul Bendich (Department of Mathematics, Duke University, and Geometric Data Analytics, Inc.; bendich@math.duke.edu), Peter Bubenik (Department of Mathematics, University of Florida; peter.bubenik@ufl.edu), and Alexander Wagner (Department of Mathematics, University of Florida; wagnera@ufl.edu)
Abstract.

We propose a general technique for extracting a larger set of stable information from persistent homology computations than is currently done. The persistent homology algorithm is generally seen as a procedure which starts with a filtered complex and ends with a persistence diagram. This procedure is stable (at least to certain types of perturbations of the input). This justifies the use of the diagram as a signature of the input, and the use of features derived from it in machine learning. These computations also produce other information of great interest to practitioners that is unfortunately unstable. For example, each point in the diagram corresponds to a simplex whose addition in the filtration results in the birth of the corresponding persistent homology class, but this correspondence is unstable. In addition, the persistence diagram is not stable with respect to other procedures that are employed in practice, such as thresholding a point cloud by density. We recast these problems as real-valued functions which are discontinuous but measurable, and then observe that convolving such a function with a suitable function produces a Lipschitz function. The resulting stable function can be estimated by perturbing the input and averaging the output. We illustrate this approach with a number of examples.

1. Introduction

Persistence diagrams, also called barcodes, are one of the main tools in topological data analysis [8, 27, 25, 29]. In combination with machine-learning and statistical techniques, they have been used in a wide variety of real-world applications, including the assessment of road network reconstruction [3], neuroscience [17], [5], vehicle tracking [4], object recognition [31], protein compressibility [28], and protein structure [30].

Put briefly, these persistence diagrams are multi-sets of points in the extended plane, and they compactly describe some of the multi-scale topological and geometric information present in a high-dimensional point cloud, or carried by a real-valued function on a domain. Several theorems [18, 12, 20] state that persistence diagrams are stable with respect to certain variations in the point-cloud or functional input, and so the conclusions drawn from them can be taken with some confidence.

On the other hand, there is additional potentially very useful but unstable information produced during the computation of persistence diagrams. For example, a point far from the diagonal in the degree-zero persistence diagram represents a connected component with high persistence. This component first appears somewhere and the computation that produces the persistence diagram can be used to find its location. However this location is not stable: as we describe below, a small change in the input will cause only a small change in the persistence of this connected component, but it can radically alter the location of its birth.

Also, persistent homology computations may rely on parameters, and the output persistence diagram need not be stable with respect to changes in these parameters.

1.1. Our Contribution

This paper introduces a method for stabilizing desirable but unstable outputs of persistent homology computations. The main idea is the following. On the front end, we think of a persistent homology computation as being parametrized by a vector $x \in \mathbb{R}^n$ of real numbers. These parameters could specify the input to the computation (e.g. the values on the vertices of a simplicial complex) or they could specify other values used in the computation (e.g. threshold parameters used in de-noising or bandwidths for smoothing). For a given choice of $x$, we get a persistence diagram. On the back end, we consider a function that extracts a real-number summary from a persistence diagram; for example, it might extract the persistence of a homology class created by the addition of a specific edge in a filtered simplicial complex. The composite function $f$ that maps the parameter vector $x$ to this real number need not be continuous, but it will in many cases be measurable. We convolve $f$ with a Gaussian kernel $K$ (or indeed any Lipschitz function) to produce a new Lipschitz function $f * K$ that carries the persistence-based information we desire.

Our main theoretical results (Theorems 4.2, 4.3 and 4.4) give conditions on functions $f$ and $g$ (where $g$ will usually be a kernel) that guarantee that the convolution $f * g$ is Lipschitz with a specified Lipschitz constant. From these we obtain the following, where more precise statements are given as Corollaries 4.5, 4.6 and 4.8.

Theorem 1.1.

If $f$ is locally essentially bounded, then for the triangular and Epanechnikov kernels $K$, the convolution $f * K$ is locally Lipschitz. If $f$ is essentially bounded, then for the Gaussian kernel $K$, the convolution $f * K$ is Lipschitz.

In practice, this can be translated to a simple procedure for stabilizing unstable persistent homology computations: perturb the input by adding, for example, Gaussian noise, and redo the computation; repeat and average. By the law of large numbers, the result converges to the desired stable value.
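In code, this procedure is only a few lines. The sketch below is our own generic illustration (the name `unstable_summary`, the Gaussian noise model, and the toy example are ours, not the authors'): the user supplies a function that runs the persistent homology computation on a parameter vector and returns the real-valued summary of interest, and Monte Carlo averaging over Gaussian perturbations approximates the convolved, stable value.

```python
import numpy as np

def stabilized_summary(unstable_summary, x, sigma, n_draws=1000, rng=None):
    """Monte Carlo estimate of the convolution (f * K_sigma)(x), where
    f = unstable_summary and K_sigma is an isotropic Gaussian kernel.

    By the law of large numbers, averaging f over Gaussian perturbations
    of the input x converges to the smoothed (Lipschitz) value."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    draws = rng.normal(loc=x, scale=sigma, size=(n_draws, x.size))
    values = np.array([unstable_summary(xi) for xi in draws])
    # Return the estimate together with its Monte Carlo standard error.
    return values.mean(), values.std(ddof=1) / np.sqrt(n_draws)

# Toy illustration: a discontinuous summary (which of two values is larger).
if __name__ == "__main__":
    f = lambda x: max(x[0] - x[1], 0.0)   # unstable: jumps as x crosses x0 = x1
    est, err = stabilized_summary(f, x=[1.0, 1.0], sigma=0.1)
    print(f"smoothed value ~ {est:.4f} +/- {err:.4f}")
```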

Theorem 1.2.

Let $x \in \mathbb{R}^n$, and let $X_1, \dots, X_N$ be drawn independently from the probability distribution given by the kernel $K$. Then
$$\frac{1}{N}\sum_{i=1}^{N} f(x + X_i) \longrightarrow (f * K)(x) \quad \text{almost surely as } N \to \infty.$$

For the reader familiar with persistent homology who wants to see how this works in practice, we provide the following example. Consider the function on the unit square shown in Figure 1. Because its values on opposite sides of the square agree, it induces a function on the torus. Suppose we are only given a finite sample of this induced function and we are interested in the presence of long-lived bars which are born in the region of the torus corresponding to the second quadrant of the square.

Figure 1. The graph of the function on the square. It induces a function on the torus with two global minima, one global maximum, one local maximum, and four saddle points.

To be concrete, we start with a sample of points from the graph of the induced function, obtained by sampling points independently from the uniform distribution on the torus and evaluating the induced function at them. Note that this sample is a random variable. We use it to construct a filtered simplicial complex approximating the unknown function as follows. From the sampled points we construct a Delaunay triangulation of the torus. We then filter this triangulation by assigning each vertex its function value and assigning each edge and triangle the maximum value of its vertices.

Figure 2. A sample of 1000 points from the graph of the function, with function values indicated using the same color scale as in Figure 1. The points on the torus are used to construct a Delaunay triangulation, which is filtered using the function values. On the right we indicate the filtration values by moving the points in the normal direction.

We compute the degree-zero extended persistence diagram of this filtered simplicial complex. (Extended persistent homology [19] pairs the global minimum with the global maximum.) Let the summary value be the length of the longest bar if that bar was born in the region corresponding to the second quadrant, and zero otherwise.

This process defines a real-valued function of the sample, but this function is unstable. Consider the sample in Figure 2. Its summary value is zero, since the global minimum, highlighted in red, is born outside the region corresponding to the second quadrant. Because of the symmetry of the induced function, the summary is zero approximately half the time and approximately the length of the longest bar the other half of the time.

To stabilize this summary, we perturb the input: let $K$ denote a multivariate Gaussian kernel with mean zero and a fixed bandwidth. We sample perturbations independently from $K$, add them to the input, compute the summary for each perturbed input, and average the results. See Figure 3. As the number of perturbations increases, this average converges to the value of the stabilized (convolved) summary at the original input.

Figure 3. Locations and sizes of 100 longest bars from the trials. Averaging the lengths of the blue bars over 1000 trials gives a value consistent with the fact that the summary is either zero or approximately the length of the longest bar, each with probability about one half. We should not expect the average to converge to exactly half of that length because, unlike the underlying function on the torus, a particular sample is not symmetric with respect to the second and fourth quadrants.

1.2. Related Work

Partial inspiration for the main idea of our work (when faced with an instability caused by a near-interchange of values, perturb the values many times and take some sort of average) comes from the trembling-hand equilibrium solution [32] to the non-uniqueness problem for Fréchet means of persistence diagrams. Our approach should also be compared with the topological reconstruction results of Niyogi, Smale and Weinberger [34].

Several recent papers have advocated principled approaches for extracting features from persistence diagrams, including persistence landscapes [6], the stable multi-scale kernel [35], intensity functionals [15], persistence images [2], the stable topological signature [11], and the cover-tree entropy reduction [36]. Our result complements these ideas: once one identifies some specific parts of the persistence diagram as having good classification power, one can then attempt to locate, in a robust way, the portions of the domain responsible for these parts. Other papers (e.g. [13, 7, 1]) have developed sophisticated schemes for data-cleaning before persistent homology computation. These techniques are generally fragile to certain initial parameter choices, such as the mass parameter in [13]. Again, we provide a complementary role: any of these schemes can be run many times for several perturbations of an initial parameter choice, and the output can then be taken with confidence.

Dey and Wenger [23] have shown that the critical points of interval persistent homology are stable in the sense that they remain within some path-connected component.

Finally, Zomorodian and Carlsson [37] use Mayer-Vietoris as inspiration in their technique for localizing (relative to a cover) homology classes within a given simplicial complex. However, this works only for a fixed simplicial complex, not a simplicial complex endowed with a filtration, and the results are certainly fragile to changes in this fixed complex.

1.3. Outline

Persistent homology computations and stability theorems are reviewed in Section 2, although we assume the reader is already somewhat familiar with them. Several examples of important but unstable persistence-based information are given in Section 3, and we then describe a general approach that stabilizes them in Section 4. In Section 5, we show how to apply these results to persistent homology computations. Computational experiments, and some suggested interpretations of the results, are presented in Section 6. Potential future directions are discussed in Section 7.

2. Persistent Homology and Stability

The treatment of persistence diagrams here is adapted from [25]. For a more general discussion, see [12]. We assume the reader is familiar with the basics of homology groups: the textbook [33] is a good introduction. All homology groups are assumed to be computed over some fixed field. For concreteness, we restrict our attention to simplicial complexes, but our results also apply to more general complexes.

2.1. Persistent Homology

2.1.1. Filtered simplicial complexes

Persistent homology is computed from a finite filtered abstract simplicial complex. That is, we have a finite abstract simplicial complex: a collection, $K$, of nonempty subsets of a fixed finite set satisfying the condition that if $\sigma \in K$ and $\tau$ is a nonempty subset of $\sigma$, then $\tau \in K$. In addition, we have a filtration: a function $f : K \to \mathbb{R}$ such that if $\tau \subseteq \sigma$ then $f(\tau) \le f(\sigma)$. That is, $f$ is order preserving.
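As a minimal illustration of this definition (ours, not taken from the paper), a finite filtered abstract simplicial complex can be stored as a map from simplices, represented as frozensets of vertex labels, to filtration values, and the order-preserving condition can be checked directly.

```python
from itertools import combinations

def is_filtration(f):
    """Check that f : simplices -> values is order preserving: every nonempty
    proper face of a simplex appears in f and has a value no larger than the
    simplex itself.  Simplices are frozensets of vertex labels."""
    for sigma, value in f.items():
        for k in range(1, len(sigma)):
            for tau in combinations(sigma, k):
                tau = frozenset(tau)
                if tau not in f or f[tau] > value:
                    return False
    return True

# A filtered triangle: vertices enter first, then edges, then the 2-simplex.
f = {frozenset({v}): 0.0 for v in "abc"}
f.update({frozenset(e): 1.0 for e in ("ab", "bc", "ac")})
f[frozenset("abc")] = 2.0
assert is_filtration(f)
```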

2.1.2. Persistence Diagrams

Fix a homological dimension $p \ge 0$. Suppose the distinct values of $f$ are $a_1 < a_2 < \cdots < a_m$. For each $i$, define $K_i = f^{-1}((-\infty, a_i])$. Since $f$ is order preserving, each $K_i$ is a subcomplex. Whenever $i \le j$, there is an inclusion $K_i \hookrightarrow K_j$, which induces a homomorphism
$$f_p^{i,j} : H_p(K_i) \longrightarrow H_p(K_j).$$

A homology class $\alpha \in H_p(K_i)$ is a persistent homology class that is born at level $a_i$ if $\alpha$ is not in the image of $f_p^{i-1,i}$, and that dies entering level $a_j$ if $f_p^{i,j-1}(\alpha)$ is not in the image of $f_p^{i-1,j-1}$ but $f_p^{i,j}(\alpha)$ is in the image of $f_p^{i-1,j}$. If $\alpha$ never dies, we say that it dies entering level $\infty$. The persistence of $\alpha$ is defined to be $a_j - a_i$. The classes which are born at level $a_i$ and die entering level $a_j$ form a vector space, whose rank we denote $\beta_p^{i,j}$. The degree-$p$ persistence diagram of $f$, denoted $\mathrm{Dgm}_p(f)$, encodes these ranks. It is a multiset of points in the extended plane, with a point of multiplicity $\beta_p^{i,j}$ at each point $(a_i, a_j)$.

2.1.3. Persistent homology computations

In practice, one constructs a filtered abstract simplicial complex from some other starting data. In addition, more information can be extracted from the persistent homology algorithm than just the persistence diagram.

We define a persistent homology computation, $P$, to be a function whose input consists of real numbers $x_1, \dots, x_n$. These may include input values and also parameter values for the computation. Using this input, $P$ constructs an abstract simplicial complex $K$ together with a filtration $f$. The output of $P$ consists of a degree-$p$ persistence diagram together with, for each point $(a_i, a_j)$ in the persistence diagram (counted with multiplicity), a $p$-simplex $\sigma$ with $f(\sigma) = a_i$, a $p$-cycle in $K_i$ containing $\sigma$, a $(p+1)$-simplex $\tau$ with $f(\tau) = a_j$, and a $(p+1)$-chain in $K_j$ containing $\tau$ whose boundary contains the cycle.

2.1.4. Examples

Example 2.1.

Functions on simplicial complexes. A filtered abstract simplicial complex, $(K, f)$, may be obtained from a real-valued function, $h$, on the vertices of a finite simplicial complex. As a set, $K$ consists of the vertex sets of the simplices. A filtration, $f$, on $K$ is defined by $f(\sigma) = \max_{v \in \sigma} h(v)$.

Figure 4. Left: the graph of a function on a simplicial complex. Right: the degree-zero persistence diagram for the corresponding abstract simplicial complex and filtration. The labeled points are marked with their birth and death coordinates; the point at the very top has infinite death coordinate.

For example, let $G$ be the geometric line graph (i.e., an embedding of a line graph, consisting of vertices and edges, in the plane) shown on the bottom of the left side of Figure 4. Above it, we have the graph of a function on the points of $G$. From this, we obtain the corresponding abstract simplicial complex $K$ and filtration $f$. The persistence diagram is on the right. The input to the persistent homology computation $P$ consists of the function values at the vertices, read from left to right.
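To make the line-graph case concrete, the following sketch (our own illustration, not code from the paper or from TDATools) computes the degree-zero persistence pairs of such a filtration with a union-find, together with the vertex whose addition gives birth to each component; the example vertex values at the end are hypothetical.

```python
import numpy as np

def degree0_pairs(values):
    """Degree-0 persistence of the filtration of a line graph whose vertices
    carry `values` (edges get the max of their endpoints).

    Returns a list of (birth, death, birth_vertex); the component born at the
    global minimum never dies (death = inf).  The elder rule is used: when an
    edge merges two components, the younger one dies."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Simplices sorted by filtration value; vertices come before edges at ties.
    simplices = [((values[i],), ('v', i)) for i in range(n)]
    simplices += [((max(values[i], values[i + 1]), 1), ('e', i)) for i in range(n - 1)]
    simplices.sort()

    pairs = []
    for _, (kind, i) in simplices:
        if kind == 'e':
            a, b = find(i), find(i + 1)
            if a == b:
                continue
            # The root of each component is its minimum-value (oldest) vertex.
            young, old = (a, b) if values[a] > values[b] else (b, a)
            pairs.append((values[young], max(values[i], values[i + 1]), young))
            parent[young] = old
    survivor = find(0)
    pairs.append((values[survivor], float('inf'), survivor))
    return pairs

print(degree0_pairs([4, 2, 5, 1, 6, 3, 7]))  # hypothetical vertex values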

Example 2.2.

Distance to a PL-Curve. Consider the piecewise-linear curve on the left side of Figure 5. Moving clockwise, we order its vertices $v_1, \dots, v_n$. Let $K$ be the full simplex on these vertices. For each vertex $v_i$, define $f(v_i) = 0$. For each edge of the form $[v_i, v_{i+1}]$ (indices taken cyclically), define $f([v_i, v_{i+1}]) = 0$, and for any other edge $[v_i, v_j]$ we set $f([v_i, v_j])$ to be the Euclidean distance between $v_i$ and $v_j$. Finally, for any higher-dimensional simplex $\sigma$, set $f(\sigma) = \max_{e \subseteq \sigma} f(e)$, where we take the maximum over the set of edges contained in $\sigma$. The degree-one persistence diagram appears on the right of Figure 5.

Figure 5. Left: a piecewise-linear curve in the plane. The distance between and is slightly smaller than the distance between and . Right: where is as defined in the text. The points and correspond to one-cycles that are created by the additions of edges and , respectively.

Here the input to $P$ consists of the coordinates of the vertices. We note that this paradigm can be extended to curves in higher-dimensional ambient spaces, or even to higher-dimensional complexes.

Example 2.3.

Point cloud – Vietoris-Rips. Suppose that $X = \{x_1, \dots, x_N\}$ is a set of points in some metric space $(M, d)$. We let $K$ be the full simplex on these vertices. Define $f(x_i) = 0$ for each vertex and $f([x_i, x_j]) = d(x_i, x_j)$ for each edge. As above, we set $f(\sigma) = \max_{e \subseteq \sigma} f(e)$ for all higher-dimensional simplices. This is called the Vietoris-Rips filtration. The input to $P$ consists of the coordinates of the points in $X$ in some parametrization of $M$. Alternatively, it consists of the entries of the distance matrix $(d(x_i, x_j))_{i,j}$.

For example, let $X$ be the circular point cloud on the top-left of Figure 7. The corresponding degree-one persistence diagram appears on the top-right of the same figure.
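For small point clouds, the Vietoris-Rips filtration values can be computed directly from the distance matrix. The sketch below is ours and only builds the filtered complex; in practice one would hand the resulting (simplex, value) list to a persistent homology package.

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform

def rips_filtration(points, max_dim=2):
    """Vietoris-Rips filtration values: vertices at 0, edges at pairwise
    distance, higher simplices at the maximum value over their edges.
    Returns (simplex, value) pairs up to dimension max_dim."""
    d = squareform(pdist(np.asarray(points, dtype=float)))
    n = len(d)
    simplices = [((i,), 0.0) for i in range(n)]
    for k in range(2, max_dim + 2):               # k vertices -> (k-1)-simplex
        for sigma in combinations(range(n), k):
            value = max(d[i][j] for i, j in combinations(sigma, 2))
            simplices.append((sigma, value))
    return sorted(simplices, key=lambda s: (s[1], len(s[0])))

# Example: a noisy circle (cf. Figure 7).
theta = np.random.default_rng(0).uniform(0, 2 * np.pi, 40)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(len(rips_filtration(circle, max_dim=2)))
```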

Example 2.4.

Point cloud – geometry. Often, the simplicial complex in the previous example is too large to work with. Instead one applies some geometric ideas to construct a smaller filtered simplicial complex. Examples include witness complexes [22], the graph-induced complex [24], and the use of nudged elastic bands [1]. These constructions typically include one or more parameters, which we append to the input to $P$.

Example 2.5.

Point cloud – statistics. Instead of using geometric ideas to construct a smaller point cloud we can use statistical ideas. For example, one can use a kernel to smooth the point cloud to obtain a density estimator on the underlying space and use this to filter a triangulation of that space [17, 7, 16]. Or one may use the local density to threshold the point cloud [9]; we consider this in more detail in Example 5.4. Again, these constructions include one or more parameters, which we append to the input to $P$.

Example 2.6.

Regression. Here we present a variant of Example 2.1 in which we are not given the simplicial complex. Instead, we sample points $x_1, \dots, x_N$ from some probability distribution on $\mathbb{R}^d$. We also sample corresponding perturbed function values $y_1, \dots, y_N$. For example, we may have $y_i = r(x_i) + \varepsilon_i$, where $\varepsilon_i$ is sampled from a univariate Gaussian. We use the points $x_1, \dots, x_N$ to construct a Delaunay triangulation $D$. We then use the values $y_i$ to filter $D$ as follows: each vertex $x_i$ is assigned the value $y_i$, and each higher-dimensional simplex is assigned the maximum value of its vertices. This is called the lower star filtration.

Instead of the sample points lying in $\mathbb{R}^d$, they may lie on some compact Riemannian manifold. See the torus example in Section 1.

Instead of filtering directly using the sampled values $y_i$, one can use them to construct an estimator $\hat r$ of the unknown regression function $r$. We can then use $\hat r$ to filter $D$ [7].
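A sketch of the lower star filtration of a planar Delaunay triangulation, using SciPy's Delaunay triangulation; the regression function and noise level in the example at the end are illustrative choices of ours, not the ones used in the paper.

```python
import numpy as np
from scipy.spatial import Delaunay
from itertools import combinations

def lower_star_delaunay(points, values):
    """Lower star filtration of the Delaunay triangulation of `points` in the
    plane: each simplex gets the maximum value of its vertices.
    Returns (simplex, filtration value) pairs sorted by value."""
    tri = Delaunay(np.asarray(points, dtype=float))
    values = np.asarray(values, dtype=float)
    simplices = set()
    for t in tri.simplices:                   # triangles and all their faces
        for k in (1, 2, 3):
            simplices.update(map(tuple, (sorted(face) for face in combinations(t, k))))
    return sorted(((s, values[list(s)].max()) for s in simplices),
                  key=lambda sv: (sv[1], len(sv[0])))

# Noisy regression sample (illustrative): y_i = r(x_i) + noise.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, size=(200, 2))
y = np.sin(2 * np.pi * xy[:, 0]) * np.sin(2 * np.pi * xy[:, 1]) + rng.normal(0, 0.05, 200)
filtration = lower_star_delaunay(xy, y)
```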

2.2. Stability

The persistence diagram is a summary of the filtration $f$, and it turns out to be a stable one. The discussion here is adapted from [18]. For a broader description, see [12, 14].

For convenience, to each persistence diagram, we add every point on the major diagonal, each with infinite multiplicity.

Now suppose that $\gamma$ is some bijection between two persistence diagrams; bijections exist because of the infinite-multiplicity points along the diagonal. The cost of $\gamma$ is defined to be $\max_u \|u - \gamma(u)\|_\infty$; that is, the largest box-norm distance between matched points. The bottleneck distance $d_B$ is defined to be the minimum cost amongst all such bijections. For example, if $D_1$ and $D_2$ are the black and red diagrams, respectively, on the right side of Figure 6, then the best bijection would pair each of the two prominent points of $D_1$ with the nearby point of $D_2$, the two infinite-persistence points with each other, and the other two points with the closest diagonal points. The bottleneck distance would be the cost of this bijection.

Figure 6. Left: the graphs of two functions, one in black and one in red, both on the same domain. Right: their degree-zero persistence diagrams, using the same color scheme.

The Diagram Stability Theorem [18] guarantees that persistence diagrams of nearby functions are close to one another. More precisely, for functions $f$ and $g$ on the same complex we have $d_B(\mathrm{Dgm}_p(f), \mathrm{Dgm}_p(g)) \le \|f - g\|_\infty$. This is illustrated by Figure 6.

Note that the difference between $f$ and $g$ is measured in the $L^\infty$ norm. In the point cloud context (Ex. 2.3), this translates into requiring that the two point cloud inputs be Hausdorff-close. However, the persistence diagram is not stable with respect to the addition of outliers. We discuss this problem in more detail in Section 3.2 and propose a solution in Section 4.

3. Instability

The Diagram Stability Theorem tells us that the persistence diagram obtained in the output of a persistent homology computation is stable with respect to certain perturbations of the input used to construct a filtered abstract simplicial complex. However, other outputs of persistent homology computations are not stable. This includes the simplices and cycles that generate persistent homology classes. These are of great interest to practitioners hoping to interpret persistence calculations more directly. In addition, many persistence computations rely on choices of parameters and the resulting persistence diagrams may be unstable with respect to these choices.

3.1. Instability of Generating Cycles/Simplices

Persistence diagrams are useful and robust measures of the size of topological features. What they are less good at, on the other hand, is robustly pinpointing the location of important topological features. We use Figure 6 to illustrate this problem. Suppose that we have a fixed domain and we observe the black function of Figure 6. One of the most prominent points in its degree-zero persistence diagram corresponds to a pair of critical values, and the associated high-persistence component is born at a particular local minimum. We might thus be tempted to say that the function has an important feature, a component of high persistence, at that local minimum. But consider the nearby red function instead. Its diagram has a point that is very close to the prominent point of the black diagram, but this point corresponds to a different pair of critical values, and the corresponding component is born at a different location. There is still a component born at the original local minimum, but it corresponds to a much smaller persistence point. And so while the persistence of the prominent point is a stable summary of the function, the actual location of the topological feature it corresponds to is not.

This is unfortunate. Several recent works ([4], [5], among others) have shown that the presence of points in certain regions of the persistence diagram has strong correlation with covariates under study. For example, each diagram in the second cited work came from a filtration of the brain artery tree in a specific patient’s brain, and it was found that the density of points in a certain middle-persistence range gave strong correlations with patient age. It would of course be tempting to hold specific locations in the brain responsible for these points with high distinguishing power.

Unsurprisingly, this problem remains for persistent homology in higher degrees. Consider Figure 5 again. It is easy to see which edge creates the large loop corresponding to the high-persistence point. However, a slight perturbation of the vertex configuration could render a different edge responsible for this loop instead, and so we cannot robustly locate the persistence of this loop at any single edge.

In Section 4, we both rigorously define this non-robustness and suggest a method for addressing it.

3.2. Outliers and Instability of Parameter Choices

The Diagram Stability Theorem guarantees that the persistence diagrams associated to two Hausdorff-close point clouds will themselves be close. However, it says nothing about the outlier problem. For example, consider again the point cloud $X$ (Figure 7, top-left) from Example 2.3, to which we apply the Vietoris-Rips construction. Its persistence diagram (top-right of the same figure) has one high-persistence point, which corresponds to the "circle" that we qualitatively see when looking at the points. On the other hand, consider the point cloud on the bottom-left, which consists of $X$ together with three "outlier" points spread across the interior of the circle. Its diagram (bottom-right) is not close to that of $X$: there is still one point of fairly high persistence, but it is much closer to the diagonal than before.

In practice, this problem is often addressed by first de-noising the point cloud in some way. For example, Carlsson et al. [10] first thresholded by density before computing Vietoris-Rips filtrations when they discovered a Klein bottle in the space of natural images. There are no guarantees that a different, nearby choice of density threshold parameter would not give a qualitatively different persistence diagram. Section 4 addresses this by introducing a general method for handling parameter choice in persistence computations.

Figure 7. Illustration of the outlier problem for the persistent homology of the Vietoris-Rips complex of a point cloud. All figures produced in MATLAB. Top left: a point cloud $X$ sampled with bounded noise from a circle. Top right: its degree-one persistence diagram. Bottom left: $X$ plus three outlier points. Bottom right: the corresponding degree-one persistence diagram.

4. Theory: Stability from convolutions

In this section we show how functions may be stabilized by convolving them with a kernel. First, we give three general results with various assumptions on the function and the kernel. Next, we apply them in three particular cases: the simple triangular kernel and the commonly used Epanechnikov and Gaussian kernels. We then outline a few specific examples, some of which will be explored via experiment in the next section.

4.1. Lipschitz functions and convolution

Let us start by recalling a few definitions. For $c \ge 0$, a function $f : \mathbb{R}^d \to \mathbb{R}$ is said to be $c$-Lipschitz if $|f(x) - f(y)| \le c\,\|x - y\|$ for all $x, y \in \mathbb{R}^d$, where $\|\cdot\|$ denotes the Euclidean norm. We will call a function Lipschitz if it is $c$-Lipschitz for some $c$. The support of $f$, denoted $\operatorname{supp}(f)$, is the closure of the subset of $\mathbb{R}^d$ where $f$ is non-zero.

Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be (Lebesgue) measurable functions that are defined almost everywhere. The 1-norm of $f$, denoted $\|f\|_1$, is given by $\int_{\mathbb{R}^d} |f(x)|\,dx$, if it exists. The essential supremum of $f$, denoted by $\|f\|_\infty$, is the smallest number $a$ such that the set $\{x : |f(x)| > a\}$ has measure zero. If it exists, the convolution product of $f$ and $g$ is given by
$$(f * g)(x) = \int_{\mathbb{R}^d} f(y)\,g(x - y)\,dy.$$

It exists everywhere, for example, if one function is essentially bounded and the other is integrable; or if one function is bounded and compactly supported and the other is locally integrable [26, Section 473D].

Assumption 4.1.

Throughout this section we assume that $f$ and $g$ are defined almost everywhere, and that the convolution product $f * g$ exists almost everywhere.

4.2. Stability theorems

We now give several conditions on a pair of functions which imply that their convolution product is (locally) Lipschitz.

The first result appears in [26, 473D(d)], but the proof is included here for completeness.

Theorem 4.2.

If $\|f\|_1 < \infty$ and $g$ is $c$-Lipschitz, then $f * g$ is $c\|f\|_1$-Lipschitz.

Proof.

Let $x, x' \in \mathbb{R}^d$. First we have
$$(f*g)(x) - (f*g)(x') = \int f(y)\bigl[g(x - y) - g(x' - y)\bigr]\,dy.$$
Then,
$$|(f*g)(x) - (f*g)(x')| \le \int |f(y)|\,\bigl|g(x - y) - g(x' - y)\bigr|\,dy \le c\,\|x - x'\|\,\|f\|_1. \qquad \blacksquare$$

Let $\bar{B}(x, r)$ denote the closed ball of radius $r$ centered at $x$, and let $V_d(r)$ denote the volume of the $d$-dimensional ball of radius $r$.

Theorem 4.3.

Let $x_0 \in \mathbb{R}^d$ and let $r, s > 0$. If $|f| \le A$ almost everywhere on $\bar{B}(x_0, r + s)$, $g$ is $c$-Lipschitz and $\operatorname{supp}(g) \subseteq \bar{B}(0, s)$, then $f * g$ is $2AcV_d(s)$-Lipschitz in $\bar{B}(x_0, r)$.

Proof.

Let $x, x' \in \bar{B}(x_0, r)$. Let $E = \bar{B}(x, s) \cup \bar{B}(x', s) \subseteq \bar{B}(x_0, r + s)$. As in the previous proof, $|(f*g)(x) - (f*g)(x')| \le \int_E |f(y)|\,|g(x - y) - g(x' - y)|\,dy \le A\,c\,\|x - x'\|\,\operatorname{vol}(E) \le 2AcV_d(s)\,\|x - x'\|$. ∎

Theorem 4.4.

If $\|f\|_\infty \le B$ and $\int_{\mathbb{R}^d} |g(x + v) - g(x)|\,dx \le c\,\|v\|$ for all $v \in \mathbb{R}^d$, then $f * g$ is $Bc$-Lipschitz.

Proof.

Let $x, x' \in \mathbb{R}^d$. Again, $|(f*g)(x) - (f*g)(x')| \le \|f\|_\infty \int |g(x - y) - g(x' - y)|\,dy \le B\,c\,\|x - x'\|$. ∎

4.3. Application to kernels

We now apply the above theorems to smooth a function $f$, obtaining a Lipschitz function. That is, we will take $g$ to be a kernel: a non-negative integrable real-valued function $K$ on $\mathbb{R}^d$ satisfying $\int_{\mathbb{R}^d} K(x)\,dx = 1$ and $K(-x) = K(x)$. For example, we can choose $K$ to be the triangular kernel, $K(x) = c\,(1 - \|x\|)$ for $\|x\| \le 1$ and $K(x) = 0$ otherwise, for the appropriate normalization constant $c$. The most common choices are the Gaussian kernel and the Epanechnikov kernel, which are described below. Notice that if $K$ is a kernel, then so is $K_h(x) = h^{-d} K(x/h)$. (More generally, the bandwidth may be taken to be a symmetric positive definite matrix.) The parameter $h > 0$ is called the bandwidth and allows one to control the amount of smoothing. We have $\int_{\mathbb{R}^d} K_h(x)\,dx = 1$.
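The following sketch (ours) shows how the three kernels and the bandwidth rescaling $K_h(x) = h^{-d}K(x/h)$ look in code; the normalizing constants for the triangular and Epanechnikov kernels are omitted here, since in the sampling-based use below one only needs to sample from the Gaussian.

```python
import numpy as np

def triangular(x):
    """Triangular kernel on R^d, up to its normalizing constant."""
    return max(1.0 - np.linalg.norm(x), 0.0)

def epanechnikov(x):
    """Epanechnikov kernel on R^d, up to its normalizing constant."""
    return max(1.0 - float(np.dot(x, x)), 0.0)

def gaussian(x):
    """Standard Gaussian kernel on R^d (normalized)."""
    x = np.asarray(x, dtype=float)
    return np.exp(-0.5 * np.dot(x, x)) / (2 * np.pi) ** (x.size / 2)

def rescale(K, h):
    """Bandwidth rescaling K_h(x) = h^{-d} K(x / h); if K is a kernel, so is K_h."""
    def K_h(x):
        x = np.asarray(x, dtype=float)
        return K(x / h) / h ** x.size
    return K_h

K_h = rescale(gaussian, 0.5)
print(K_h(np.zeros(3)))   # equals (2*pi*0.25)**(-3/2)
```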

4.3.1. The triangular kernel

Let $h > 0$. Recall that $V_d(h)$ denotes the volume of the $d$-dimensional ball of radius $h$. For $A \subseteq \mathbb{R}^d$, let $\mathbf{1}_A$ denote the indicator function on $A$. That is, $\mathbf{1}_A(x) = 1$ if $x \in A$ and $\mathbf{1}_A(x) = 0$ otherwise. The triangular kernel is given by
$$K_h(x) = \frac{d+1}{V_d(h)}\left(1 - \frac{\|x\|}{h}\right)\mathbf{1}_{\bar{B}(0,h)}(x).$$

Note that $\operatorname{supp}(K_h) = \bar{B}(0, h)$ and $K_h$ is $\frac{d+1}{h V_d(h)}$-Lipschitz. Applying Theorem 4.3, we have the following.

Corollary 4.5.

Let $h > 0$. If $|f| \le A$ almost everywhere on $\bar{B}(x_0, r + h)$, then $f * K_h$ is $\frac{2A(d+1)}{h}$-Lipschitz in $\bar{B}(x_0, r)$.

Note that it follows that if the bound on $f$ is global then so is the Lipschitz bound.

4.3.2. The Epanechnikov kernel

Let $h > 0$. The Epanechnikov kernel is given by
$$K_h(x) = \frac{d+2}{2V_d(h)}\left(1 - \frac{\|x\|^2}{h^2}\right)\mathbf{1}_{\bar{B}(0,h)}(x).$$

Now $\operatorname{supp}(K_h) = \bar{B}(0, h)$ and $K_h$ is $\frac{d+2}{h V_d(h)}$-Lipschitz. Applying Theorem 4.3, we have the following.

Corollary 4.6.

Let $h > 0$. If $|f| \le A$ almost everywhere on $\bar{B}(x_0, r + h)$, then $f * K_h$ is $\frac{2A(d+2)}{h}$-Lipschitz in $\bar{B}(x_0, r)$.

4.3.3. The Gaussian kernel

Let $\sigma > 0$. The Gaussian kernel is given by
$$K_\sigma(x) = \frac{1}{(2\pi\sigma^2)^{d/2}} \exp\!\left(-\frac{\|x\|^2}{2\sigma^2}\right).$$

Lemma 4.7.

For the Gaussian kernel $K_\sigma$, let $c = \sqrt{2/\pi}/\sigma$. Then $\int_{\mathbb{R}^d} |K_\sigma(x + v) - K_\sigma(x)|\,dx \le c\,\|v\|$ for all $v \in \mathbb{R}^d$.

Proof.

Change coordinates so that and . Then by symmetry

It follows that . ∎
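For concreteness, the following computation (our own derivation, under the statement of Lemma 4.7 as given above) reproduces the estimate: after a rotation taking $v$ to $(\|v\|, 0, \dots, 0)$, the $d$-dimensional integral reduces to a one-dimensional one, where $\phi_\sigma$ and $\Phi_\sigma$ denote the density and distribution function of a one-dimensional Gaussian with standard deviation $\sigma$.

\[
\int_{\mathbb{R}^d} \bigl|K_\sigma(x+v)-K_\sigma(x)\bigr|\,dx
= \int_{\mathbb{R}} \bigl|\phi_\sigma(t+\|v\|)-\phi_\sigma(t)\bigr|\,dt
= 2\Bigl[\Phi_\sigma\!\bigl(\tfrac{\|v\|}{2}\bigr)-\Phi_\sigma\!\bigl(-\tfrac{\|v\|}{2}\bigr)\Bigr]
\le 2\,\phi_\sigma(0)\,\|v\|
= \sqrt{\tfrac{2}{\pi}}\,\frac{\|v\|}{\sigma}.
\]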

Thus by Theorem 4.4 we have the following.

Corollary 4.8.

If $\|f\|_\infty \le B$, then $f * K_\sigma$ is $\sqrt{2/\pi}\,\frac{B}{\sigma}$-Lipschitz.

In practice, the function $f$ will rarely be essentially bounded, but this can be arranged by setting it to be zero outside a closed ball centered at a specified configuration.

4.3.4. Sharpness of the Lipschitz constants

Assume that $g$ is symmetric in the first variable. Let $f$ be defined by $f(x) = B$ if $x_1 \ge 0$ and $f(x) = -B$ otherwise. Let $t > 0$. Then we calculate
$$(f*g)(t e_1) - (f*g)(0) = 2B \int_0^t \int_{\mathbb{R}^{d-1}} g(s, y)\,dy\,ds.$$

So
$$\frac{(f*g)(t e_1) - (f*g)(0)}{t} = \frac{2B}{t} \int_0^t \int_{\mathbb{R}^{d-1}} g(s, y)\,dy\,ds.$$

Let $g = K_\sigma$ be the Gaussian kernel. Then
$$\int_{\mathbb{R}^{d-1}} K_\sigma(s, y)\,dy = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-s^2/(2\sigma^2)}.$$

It follows that $\frac{(f*K_\sigma)(t e_1) - (f*K_\sigma)(0)}{t}$ converges to $\sqrt{2/\pi}\,\frac{B}{\sigma}$ as $t$ approaches $0$ by the first fundamental theorem of calculus. Hence, the Lipschitz constant given in Corollary 4.8 is optimal.

When and is the triangular kernel, as . So the Lipschitz constant of is at least . Hence, the Lipschitz constant given in Corollary 4.5 is optimal up to at most a factor of .

When and is the Epanechnikov kernel, as . So the Lipschitz constant of is at least . Hence, the Lipschitz constant given in Corollary 4.6 is optimal up to at most a factor of .

4.4. Stable Computations in Practice

Suppose that we can compute $f(x)$ for values of $x$ for which it is defined, that we can sample from $K$, and that for a fixed $x$ we want to compute $(f * K)(x)$. In practice, we will not be able to evaluate this integral analytically. We approximate it as follows. Let $X$ be a random variable with probability distribution given by the kernel $K$ (one writes $X \sim K$). Let $Y$ be the random variable given by $Y = f(x + X)$. Then, since $K$ is symmetric, the expected value of $Y$ is given by $E[Y] = (f * K)(x)$. We approximate $E[Y]$ by drawing a sample $Y_1, \dots, Y_N$, where $Y_i = f(x + X_i)$ and $X_1, \dots, X_N$ are independent. Then $(f * K)(x)$ can be approximated by $\frac{1}{N}\sum_{i=1}^N Y_i$. By the law of large numbers, $\frac{1}{N}\sum_{i=1}^N Y_i \to (f * K)(x)$, where the convergence may be taken to be in probability (the weak law) or almost surely (the strong law). This is the justification for the computations in Section 6. Let us record this result.

Theorem 4.9.

Let $x \in \mathbb{R}^d$ and let $X_1, \dots, X_N$ be drawn independently from $K$. Then
$$\frac{1}{N} \sum_{i=1}^{N} f(x + X_i) \longrightarrow (f * K)(x) \quad \text{almost surely as } N \to \infty.$$

If $N$ is large, the cost of computing $f$ $N$ times may be considerable. In the case that $f$ involves a persistent homology calculation we suggest the following. Consider the set of filtered complexes produced by the computation. If two perturbed inputs are sufficiently close, then the corresponding filtered complexes are isomorphic up to a change of filtration values and one does not need to repeat the persistence computation. More generally, one may obtain the persistent homology of similar filtered complexes using vineyard updates [21].

4.5. Stability of the choice of kernel

As should be clear, and as borne out by the experiments in Section 6, the value of $(f * K)(x)$, for fixed $f$ and $x$, will certainly depend on the kernel $K$. However, there is no fragility of output with respect to this choice, as shown by the following fact.

Theorem 4.10.

Let $f$ be an essentially bounded function. Then the map $K \mapsto f * K$ is Lipschitz, from $(L^1(\mathbb{R}^d), \|\cdot\|_1)$ to $(L^\infty(\mathbb{R}^d), \|\cdot\|_\infty)$.

Proof.

Let $\Phi$ be given by $\Phi(K) = f * K$. For kernels $K, K'$, $\|\Phi(K) - \Phi(K')\|_\infty = \|f * (K - K')\|_\infty \le \|f\|_\infty\,\|K - K'\|_1$. ∎

4.6. Choice of bandwidth

After choosing a family of kernels, such as the Gaussian kernels described in Section 4.3.3, the most important choice in implementing the method described here is the choice of bandwidth .

Choosing the amount of smoothing is a well-studied problem in nonparametric regression, where increasing the bandwidth decreases the estimation variance, but increases the squared bias. Both of these terms contribute to the error. A bandwidth which optimizes this trade-off may be estimated using cross-validation.

Our situation is somewhat different and a proper understanding of this problem requires analysis that goes beyond the scope of the present paper.

However, we offer some suggestions for the choice of bandwidth. First, it may be chosen to obtain a desired amount of smoothness of $f * K_h$. For example, we may want $f * K_h$ to be $c$-Lipschitz for a specified $c$. Second, it seems reasonable to choose the bandwidth to (at least) equal the level of estimated noise of the input data. One may combine these two to find the minimum bandwidth that satisfies both requirements.
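As a concrete illustration of combining the two suggestions (our sketch, using the Gaussian bound of Corollary 4.8 in the form stated above, namely Lipschitz constant $\sqrt{2/\pi}\,B/\sigma$ when $\|f\|_\infty \le B$), the smallest admissible bandwidth can be computed directly.

```python
import math

def min_bandwidth(f_sup, target_lipschitz, noise_level=0.0):
    """Smallest Gaussian bandwidth sigma such that (i) f * K_sigma is
    target_lipschitz-Lipschitz, assuming |f| <= f_sup and the bound
    sqrt(2/pi) * f_sup / sigma of Corollary 4.8, and (ii) sigma is at
    least the estimated noise level of the input data."""
    sigma_lip = math.sqrt(2.0 / math.pi) * f_sup / target_lipschitz
    return max(sigma_lip, noise_level)

print(min_bandwidth(f_sup=5.0, target_lipschitz=1.0, noise_level=0.25))
```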

5. Application to persistent homology computations

Now let us apply the results of the previous section to persistent homology. Assume we have a persistent homology computation, $P$, with input the real numbers $x_1, \dots, x_n$. Let $S$ be the set of outputs of this computation. Let $D \subseteq \mathbb{R}^n$ be the set of all inputs for which $P$ is defined. If $D \neq \mathbb{R}^n$, then add a state $*$ to $S$ and say that the computation sends all points not in $D$ to $*$. Thus we encode this computation as a function $P : \mathbb{R}^n \to S$. Let $g$ be a real-valued function on $S$ with $g(*) = 0$. Let $f = g \circ P : \mathbb{R}^n \to \mathbb{R}$. We will need $f$ to be (Lebesgue) measurable.

To make this less abstract, we show how the instabilities described in Sections 3.1 and 3.2 can be addressed by this method.

Example 5.1.

Stable persistence located at a point. We return to Example 2.1, where we have a geometric line graph with vertices $v_1, \dots, v_n$ and edges $[v_i, v_{i+1}]$ for $1 \le i < n$. To produce a filtration of the type used in this example, we just need to know the function values. More precisely, our persistence computation takes as input a vector $x = (x_1, \dots, x_n)$, from which we obtain a piecewise linear function on the line graph by assigning the value $x_i$ to the vertex $v_i$. Next we consider the corresponding abstract simplicial complex and filtration. Then we compute the degree-zero persistence diagram. This defines a function $P$ from $\mathbb{R}^n$ to the set of persistence diagrams.

Now fix a specific vertex $v$ in the line graph. A given diagram either contains a point that represents a persistent connected component born at $v$ in the filtration, or it does not. In the former case, we define $g$ of the diagram to be the persistence of that point, and in the latter we define it to be zero; that is, we map the diagram to the persistence of the connected component created by the addition of this specific vertex.

The discontinuity of the function $f = g \circ P$ expresses the instability of localizing the persistence of a connected component. Referring to Figure 6, suppose that two vectors of function values produce the black and red functions, respectively, and that the vertex $v$ is the one at which the prominent component of the black function is born. Then $f$ evaluated at the first vector is the persistence of a high-persistence point, while $f$ evaluated at the second is the persistence of a much less prominent point. Corollaries 4.5 and 4.6 guarantee that smoothing by the triangular or Epanechnikov kernel will result in a locally Lipschitz function.

To be able to convolve with the Gaussian kernel and apply Corollary 4.8, we need $f$ to be essentially bounded. We can arrange this by specifying that the domain of $f$ be compact and that $g$ be bounded. This requires that all of the persistence pairs in the output of $P$ be finite, which can be arranged by truncating at some value or by applying extended persistence [19]. The resulting $f * K_\sigma$ is Lipschitz.

The experiments in Section 6.1 show how this works in practice.
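Combining the union-find sketch from Section 2.1.4 with the Monte Carlo sketch from Section 1.1 gives one possible implementation of this example (ours, not the authors'); the `cap` argument plays the role of the truncation mentioned above, and the commented usage values are hypothetical.

```python
def persistence_at_vertex(values, v, cap=None):
    """Unstable summary of Example 5.1: the persistence of the connected
    component born at vertex v in the line-graph filtration with vertex
    `values`, or 0 if no component is born there.  Infinite persistence is
    truncated at `cap` (if given) so that the summary is bounded.

    Reuses degree0_pairs from the sketch in Section 2.1.4."""
    for birth, death, birth_vertex in degree0_pairs(values):
        if birth_vertex == v:
            if death == float('inf') and cap is not None:
                death = cap
            return death - birth
    return 0.0

# Smoothing with the stabilized_summary sketch from Section 1.1:
# est, err = stabilized_summary(
#     lambda x: persistence_at_vertex(x, v=3, cap=10.0),
#     x=[4, 2, 5, 1, 6, 3, 7], sigma=0.5)
```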

Example 5.2.

Stable persistence located at an edge. We return to Example 2.2. In this case, $K$ is the full complex on $n$ vertices, and we start with $n$ ordered points in the plane which lead to a piecewise-linear curve. That is, the computation takes as input a vector of vertex coordinates and places a vertex at each specified location, thus creating a curve. This leads to a filtration of $K$ as in Example 2.2, and finally we produce the degree-one persistence diagram. As before, this defines a function $P$ from the space of inputs to the set of persistence diagrams.

If we fix a specific edge $e$, we can proceed as in Example 5.1 by defining a function $g$ on diagrams, and thus $f = g \circ P$: we map a diagram to the persistence of the one-cycle created by the addition of $e$, or to zero if no such cycle exists. For example, taking $e$ to be the edge responsible for the large loop in Figure 5 and letting $x$ be the vector which led to that specific point configuration, $f(x)$ equals the persistence of the corresponding high-persistence point. As above, $f * K$ is (locally) Lipschitz.

Example 5.3.

Stable persistence of generating cycles. Instead of tracking which $p$-simplex creates a persistent homology class, a persistent homology algorithm may record a $p$-cycle, $z$, that represents the persistent class. In this case, we can define $g$ of the output to be the persistence of the pair represented by $z$ if $z$ represents a persistence pair, and zero otherwise. Let $f = g \circ P$; then $f * K$ is (locally) Lipschitz.

Example 5.4.

Stability in density-thresholding choice. Let $Y$ be the point cloud on the bottom-left of Figure 7, which we recall was created from the point cloud on the top-left by adding three outlier points. Consider any de-noising process parametrized by some real numbers. For a specific example, one may estimate the local density of $Y$ at each of its points and, for a threshold parameter $\theta$, define the de-noised point cloud to consist of those points whose estimated density is at least $\theta$.

One then applies the Vietoris-Rips construction to obtain a filtered abstract simplicial complex from the thresholded point cloud, and then computes its degree-one persistence diagram. We may consider the input of our persistent homology computation to be the coordinates of the vertices together with the parameter values. However, we may also take the coordinates to be fixed and only consider the parameters to be our input. Doing this, we obtain a computation $P$ whose input is the vector of parameter values. In this case, define $g$ of any degree-one diagram to be the persistence of its most prominent point. Then the discontinuity of the function $f = g \circ P$ expresses the instability of the threshold-parameter choice referred to in Section 3.2. If the threshold is chosen so that the three outlier points are removed, then $f$ will be the persistence of the most prominent point on the top-right of Figure 7. On the other hand, a very nearby choice of threshold might fail to remove these points, and we would get the persistence of the most prominent point on the bottom-right of Figure 7. As above, $f * K$ is (locally) Lipschitz.
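One common density-thresholding rule is sketched below (our illustration of the general idea, not necessarily the specific rule used in [9, 10]): rank points by the distance to their $k$-th nearest neighbour and keep the densest fraction of them. Both `k` and `keep_fraction` are parameters of the kind whose instability is discussed above; the resulting point cloud would then be fed to the Vietoris-Rips construction.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def density_threshold(points, k, keep_fraction):
    """De-noising by k-nearest-neighbour density: rank points by the distance
    to their k-th nearest neighbour (smaller = denser) and keep the densest
    `keep_fraction` of them."""
    points = np.asarray(points, dtype=float)
    d = squareform(pdist(points))
    kth = np.sort(d, axis=1)[:, k]          # distance to k-th nearest neighbour
    n_keep = max(1, int(round(keep_fraction * len(points))))
    keep = np.argsort(kth)[:n_keep]
    return points[keep]
```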

6. Experiments and Interpretations

This section more deeply investigates some of the examples above, both via a few proof-of-principle experiments with synthetic data and via some suggested interpretations of the results. All experiments were run in MATLAB, using TDATools for the persistent homology computations.

6.1. Experiments

6.1.1. First line-graph experiment.

First we explore Example 2.1 (though with a different case than the one in Figure 4), where the input to a persistent homology computation is a choice of function values on the vertices of a simplicial complex. Specifically, we consider a line graph with seven vertices and an initial choice $x \in \mathbb{R}^7$ of function values. The left side of Figure 8 shows the graph of the resulting PL-function, and the persistence diagram is in the middle. Note that the high-persistence dot and the medium-persistence one are created by the additions of two particular vertices; following Example 5.1, we associate to each of these vertices the persistence of the component it creates. These values are of course unstable to perturbations of $x$: for instance, if we switch the first and fifth entries of $x$, the reader can check that different vertices become responsible for these dots.

Figure 8. Input and results of the first line-graph experiment. Left: a graph of the function defined on a line graph with seven vertices. Middle: the persistence diagram. We follow the extended persistence convention and pair the global min with the global max. Right: results of the experiment. The middle graph shows the value of the convolved summary for one of the two vertices versus the bandwidth; the bottom graph shows the same for the other vertex; the top graph shows their sum.

Let $K_\sigma$ be a seven-dimensional Gaussian kernel with mean at the origin and bandwidth $\sigma$. For each of the two vertices we consider the convolution of its summary with $K_\sigma$. The right side of Figure 8 shows graphs of the approximate values of these two convolutions at $x$, plotted against $\sigma$, as well as a graph of their sum. We used evenly spaced values of $\sigma$ over a fixed range. To make these graphs, we followed the approximation procedure suggested by Theorem 4.9: for each fixed $\sigma$, we took $N$ independent draws $X_1, \dots, X_N$ from $K_\sigma$ and computed
$$\frac{1}{N} \sum_{i=1}^{N} f(x + X_i),$$
with an identical procedure for the other summary.

6.1.2. Second line-graph experiment.

Again we explore Example 2.1, this time with the input to a persistent homology computation that builds a filtration on a line graph with five vertices. The function, whose graph is on the left of Figure 9, has its global minimum at a single vertex. From the diagram in the middle, we read off the persistence of the component born at that vertex. Note that the summary associated to every other vertex is zero, since only one component is created during the entire filtration. On the right, we see convolved values of these functions, with notation and computation procedure exactly as in 6.1.1 above.

Figure 9. Input and results of the second line-graph experiment. Left: a graph of the function defined on a line graph with five vertices. Middle: the persistence diagram. We follow the extended persistence convention and pair the global min with the global max. Right: results of the experiment. Moving from bottom to top: the values of the convolved vertex summaries and their sum, all plotted against the bandwidth.

6.1.3. Distance-to-a-curve experiment.

Next we reconsider Example 2.2. Let $C$ be the PL-curve with nine vertices on the left side of Figure 10. In our language, the input vector specifies the coordinates of the nine vertices. Following the vocabulary of Example 2.2, this curve placement leads to an order-preserving function on the abstract full complex on nine vertices.

Its degree-one persistence diagram, in the middle of the same figure, has only two off-diagonal points. The first is created by the positive edge between one pair of vertices, while the second comes from the edge between another pair. Thus the summaries associated to these two edges equal the persistences of the corresponding points. As usual, these values are highly unstable to small perturbations in the vertex positions.

Figure 10. Input and results of the distance-to-curve experiment. Left: the PL plane curve whose nine vertices are defined in the text. Middle: its degree-one persistence diagram. Right: results of the experiment. The top graph shows the value of the convolved summary for one of the two edges versus the bandwidth; the bottom graph shows the same for the other edge.

In very similar fashion to the last experiment, we then computed approximate values, at the original configuration, of the convolved summaries for the two edges, where the kernel was a nine-dimensional Gaussian with bandwidth $\sigma$. This time we again used evenly spaced values of $\sigma$ over a fixed range. The results appear on the right side of Figure 10.

6.2. Interpretations

We now offer some possible interpretations one can draw from these results, and also suggest some potential uses of this technique in practice.

6.2.1. Locating a point in the domain.

Consider one of the high-persistence points in the diagram for our first line-graph experiment. It is accurate to say that this point was created, for this specific persistent homology computation, by the addition of a particular vertex to the filtration. However, it is also a potentially misleading thing to say.

We propose that the difference between the persistence of this point and the values of the convolved summary might be seen as an indicator of how confidently one should locate the point at that vertex. The graphs on the right side of Figure 8 tell us that this confidence should be low. On the other hand, the other high-persistence point is created by the addition of a different vertex, and it turns out that the corresponding convolved summary remains very close to the persistence of that point for all bandwidths within a reasonable range.

6.2.2. Spreading out a point in the domain.

Alternatively, one might choose to give the point a fuzzier location. A reasonable idea would be to spread its location between the two vertices in question, since each is responsible for creating the same component in a nearby filtration. The graphs in Figure 8 bear this out: note that the sum of the two convolved values is always very close to the sum of the persistences of the components created by these two vertices. Similarly, in the second line-graph experiment, it would be reasonable to smear the location of the only point throughout the immediate neighborhood of the global minimum.

6.2.3. Convolved values as features.

One could also use the values of the convolved summaries as features in a machine-learning scheme. That is, the vector of convolved values, one for each vertex or simplex of interest, could be used as a summary feature of both the filtration created by the input and the noise model given by the kernel. The stabilities offered by Corollary 4.8 and Theorem 4.10 make this an appealing option.

7. Discussion

Persistence diagrams have already been used to produce features for machine-learning and statistical methods. This paper takes a first step towards the extraction of stable features that describe much of the other information produced during a persistent homology computation. We plan to demonstrate the utility of these new features in applications: for example, by exploring the locations of the distinguishing persistence points in the brain artery dataset from [5].

It would also be nice to build a set of visualization tools. For example, one might want to compute a persistence diagram, click on a point, and have the possible location candidates shown on the domain, perhaps with some sort of heat map of likelihood.

Finally, we also hope to enrich the theory whose development has started here. It would be nice to work instead in the category of topological spaces, to define versions of the functions $f$ and $f * K$ in that setting, and to prove Lipschitz continuity of the latter. We believe that the direction suggested by Example 2.6 may be the right one.

Acknowledgments

The authors would like to thank Justin Curry, Francis Motta, Chris Tralie, and Ulrich Bauer for helpful conversations. The first author would like to thank the University of Florida for hosting him during the initial research phase. The first author was partially supported by the NSF awards BIGDATA 1444791 and WBSE 3331753 and the Air Force Research Laboratory, under STTR # FA8750-16-C-0220. The second author was partially supported by AFOSR award FA9550-13-1-0115.

References

  • [1] Henry Adams, Atanas Atanasov, and Gunnar Carlsson. Nudged elastic band in topological data analysis. Topol. Methods Nonlinear Anal., 45(1):247–272, 2015.
  • [2] Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick Shipman, Sofya Chepushtanova, Eric Hanson, Francis Motta, and Lori Ziegelmeier. Persistence images: A stable vector representation of persistent homology. Journal of Machine Learning Research, 18(8):1–35, 2017.
  • [3] Mahmuda Ahmed, Brittany Terese Fasy, and Carola Wenk. Local persistent homology based distance between maps. In SIGSPATIAL. ACM, Nov. 2014.
  • [4] P. Bendich, S. P. Chin, J. Clark, J. Desena, J. Harer, E. Munch, A. Newman, D. Porter, D. Rouse, N. Strawn, and A. Watkins. Topological and statistical behavior classifiers for tracking applications. IEEE Transactions on Aerospace and Electronic Systems, 52(6):2644–2661, December 2016.
  • [5] Paul Bendich, J. S. Marron, Ezra Miller, Alex Pieloch, and Sean Skwerer. Persistent homology analysis of brain artery trees. Ann. Appl. Stat., 10(1):198–218, 03 2016.
  • [6] Peter Bubenik. Statistical topological data analysis using persistence landscapes. Journal of Machine Learning Research, 16:77–102, 2015.
  • [7] Peter Bubenik, Gunnar Carlsson, Peter T. Kim, and Zhi-Ming Luo. Statistical topology via Morse theory persistence and nonparametric estimation. In Algebraic Methods in Statistics and Probability II, volume 516 of Contemp. Math., pages 75–92. Amer. Math. Soc., Providence, RI, 2010.
  • [8] Gunnar Carlsson. Topology and data. Bulletin of the American Mathematical Society, 46(2):255–308, January 2009.
  • [9] Gunnar Carlsson, Tigran Ishkhanov, Vin de Silva, and Afra Zomorodian. On the local behavior of spaces of natural images. Int. J. Comput. Vision, 76(1):1–12, 2008.
  • [10] Gunnar Carlsson, Tigran Ishkhanov, Vin de Silva, and Afra Zomorodian. On the local behavior of spaces of natural images. International Journal of Computer Vision, 76(1):1–12, 2008.
  • [11] M. Carriere, S. Y. Oudot, and M. Ovsjanikov. Stable topological signatures for points on 3d shapes. In Proc. Sympos. on Geometry Processing, 2015.
  • [12] Frédéric Chazal, David Cohen-Steiner, Marc Glisse, Leonidas J. Guibas, and Steve Y. Oudot. Proximity of persistence modules and their diagrams. In Proceedings of the 25th annual symposium on Computational geometry, SCG ’09, pages 237–246, New York, NY, USA, 2009. ACM.
  • [13] Frédéric Chazal, David Cohen-Steiner, and Quentin Mérigot. Geometric Inference for Measures based on Distance Functions. Foundations of Computational Mathematics, 11(6):733–751, 2011. RR-6930 RR-6930.
  • [14] Frédéric Chazal, Vin de Silva, Marc Glisse, and Steve Oudot. The structure and stability of persistence modules. SpringerBriefs in Mathematics. Springer, [Cham], 2016.
  • [15] Y.-C. Chen, D. Wang, A. Rinaldo, and L. Wasserman. Statistical analysis of persistence intensity functions. ArXiv e-prints, 2015.
  • [16] Yen-Chi Chen, Christopher R. Genovese, and Larry Wasserman. Statistical inference using the Morse-Smale complex, 2015.
  • [17] Moo K. Chung, Peter Bubenik, and Peter T. Kim. Persistence diagrams in cortical surface data. In Information Processing in Medical Imaging (IPMI) 2009, volume 5636 of Lecture Notes in Computer Science, pages 386–397, 2009.
  • [18] David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. Stability of persistence diagrams. Discrete Comput. Geom., 37(1):103–120, January 2007.
  • [19] David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. Extending persistence using Poincaré and Lefschetz duality. Found. Comput. Math., 9(1):79–103, 2009.
  • [20] David Cohen-Steiner, Herbert Edelsbrunner, John Harer, and Yuriy Mileyko. Lipschitz functions have $L_p$-stable persistence. Found. Comput. Math., 10(2):127–139, February 2010.
  • [21] David Cohen-Steiner, Herbert Edelsbrunner, and Dmitriy Morozov. Vines and vineyards by updating persistence in linear time. In Computational geometry (SCG’06), pages 119–126. ACM, New York, 2006.
  • [22] Vin de Silva and Gunnar Carlsson. Topological estimation using witness complexes. Eurographics Symposium on Point-Based Graphics, 2004.
  • [23] Tamal K. Dey and Rephael Wenger. Stability of critical points with interval persistence. Discrete Comput. Geom., 38(3):479–512, 2007.
  • [24] Tamal Krishna Dey, Fengtao Fan, and Yusu Wang. Graph induced complex on point data. In Proceedings of the Twenty-ninth Annual Symposium on Computational Geometry, SoCG ’13, pages 107–116, New York, NY, USA, 2013. ACM.
  • [25] Herbert Edelsbrunner and John Harer. Computational Topology: An Introduction. American Mathematical Society, 2010.
  • [26] D.H. Fremlin. Measure Theory, Volume 4. Torres Fremlin, 2000.
  • [27] P. Frosini and B. Landi. Size theory as a topological tool for computer vision. Pattern Recognition and Image Analysis, 9:596–603, 1999.
  • [28] Marcio Gameiro, Yasuaki Hiraoka, Shunsuke Izumi, Miroslav Kramar, Konstantin Mischaikow, and Vidit Nanda. A topological measurement of protein compressibility. Jpn. J. Ind. Appl. Math., 32(1):1–17, 2015.
  • [29] Robert Ghrist. Barcodes: the persistent topology of data. Bull. Amer. Math. Soc. (N.S.), 45(1):61–75, 2008.
  • [30] Violeta Kovacev-Nikolic, Peter Bubenik, Dragan Nikolić, and Giseon Heo. Using persistent homology and dynamical distances to analyze protein binding. Stat. Appl. Genet. Mol. Biol., 15(1):19–38, 2016.
  • [31] Chunyuan Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 2003–2010, June 2014.
  • [32] Elizabeth Munch, Katharine Turner, Paul Bendich, Sayan Mukherjee, Jonathan Mattingly, John Harer, et al. Probabilistic Fréchet means for time varying persistence diagrams. Electronic Journal of Statistics, 9:1173–1204, 2015.
  • [33] James R. Munkres. Elements of Algebraic Topology. Addison Wesley, 1993.
  • [34] P. Niyogi, S. Smale, and S. Weinberger. A topological view of unsupervised learning from noisy data. SIAM J. Comput., 40(3):646–663, 2011.
  • [35] Jan Reininghaus, Stefan Huber, Ulrich Bauer, and Roland Kwitt. A stable multi-scale kernel for topological machine learning. Proc. CVPR, pages 4741–4748, 2015.
  • [36] Abraham Smith, Paul Bendich, John Harer, and Jay Hineman. Supervised learning of labeled pointcloud differences via cover-tree entropy reduction. 02 2017.
  • [37] Afra Zomorodian and Gunnar Carlsson. Localized homology. Comput. Geom., 41(3):126–148, 2008.