Topology of deep neural networks


Abstract.

We study how the topology of a data set $M = M_a \cup M_b \subseteq \mathbb{R}^d$, representing two classes $a$ and $b$ in a binary classification problem, changes as it passes through the layers of a well-trained neural network, i.e., one with perfect accuracy on its training set and a near-zero generalization error. The goal is to shed light on two well-known mysteries in deep neural networks: (i) a nonsmooth activation function like ReLU outperforms a smooth one like hyperbolic tangent; (ii) successful neural network architectures rely on having many layers, despite the fact that a shallow network is able to approximate any function arbitrarily well. We performed extensive experiments on the persistent homology of a wide range of point cloud data sets, both real and simulated. The results consistently demonstrate the following: (1) Neural networks operate by changing topology, transforming a topologically complicated data set into a topologically simple one as it passes through the layers. No matter how complicated the topology of $M$ we begin with, when passed through a well-trained neural network $\nu$, there is invariably a vast reduction in the Betti numbers of both components $M_a$ and $M_b$; in fact they nearly always reduce to their lowest possible values: $\beta_k(\nu(M_i)) = 0$ for $k \ge 1$ and $\beta_0(\nu(M_i)) = 1$, $i = a, b$. Furthermore, (2) the reduction in Betti numbers is significantly faster for ReLU activation than for hyperbolic tangent activation, as the former defines nonhomeomorphic maps that change topology, whereas the latter defines homeomorphic maps that preserve topology. Lastly, (3) shallow and deep networks transform the same data set somewhat differently — a shallow network operates mainly through changing geometry and changes topology only in its final layers, whereas a deep one spreads topological changes more evenly across all layers.

Key words and phrases:
Neural networks, topology change, topological data analysis, Betti numbers, topological complexity, persistent homology

1. Overview

A key insight of topological data analysis is that “data has shape” [6, 7]. That data sets often have nontrivial topologies, which may be exploited in their analysis, is now a widely accepted principle with abundant examples across multiple disciplines: dynamical systems [24], medicine [30, 42], genomics [47], neuroscience [18], time series [48], etc. An early striking example came from computer vision, where [8] showed that naturally occurring image patches reside on a low-dimensional manifold that has the topology of a Klein bottle.

We will study how modern deep neural networks transform topologies of data sets, with the goal of shedding light on their breathtaking yet somewhat mysterious effectiveness. Indeed, we seek to show that neural networks operate by changing the topology (i.e., shape) of data. The relative efficacy of ReLU activation over traditional sigmoidal activations can be explained by the different speeds with which they change topology — a ReLU-activated neural network (which is not a homeomorphism) is able to sharply reduce Betti numbers but not a sigmoidal-activated one (which is a homeomorphism). Also, the higher the topological complexity of the data, the greater the depth of the network required to reduce it, explaining the need to have an adequate number of layers.

We would like to point out that the idea of changing the topology of space to facilitate a machine learning goal is not as esoteric as one might imagine. For example, it is implicit in kernel methods [51] — a data set with two components inseparable by a hyperplane is embedded in a higher-dimensional space where the embedded images of the components are separable by a hyperplane. Note that dimension is a topological invariant, changing dimension is changing topology. We will see that a ReLU-activated neural network with many layers effects topological changes primarily through changing Betti numbers, another topological invariant.

Our study differs from current approaches in two important ways. Many existing studies either (i) analyze neural networks in an asymptotic or extreme regime, where the number of neurons in each layer or the number of layers becomes unbounded or infinite, leading to conclusions that really pertain to neural networks of somewhat unrealistic architectures; or (ii) they focus on what a neural network does to a single object, e.g., an image of a cat, and examine how that object changes as it passes through the layers. While we do not dispute the value of such approaches, we would like to contrast them with ours: We study what a neural network with a realistic architecture does to an entire class of objects. It is common to find expositions (especially in the mass media) of deep neural networks that purport to show their workings by showing how an image of a cat is transformed as it passes through the layers. We think this is misguided — one should be looking at the entire manifold of cat images, not a single point on that manifold (i.e., a single cat image). This is the approach we undertake in our article.

Figure 1 illustrates what we mean by ‘changing topology’. The two subfigures are caricatures of real results (see Figures 2, 11, 12, 13, for the true versions obtained via actual Betti numbers computations and projections to the principal components.)

  

Figure 1. Progression of the Betti numbers $\beta = (\beta_0, \beta_1, \beta_2)$ of the green and red components as they pass through the layers, for the two caricature examples described in the text. Left: green: two interlocked solid tori; red: a solid figure-eight. Right: green: a solid ball with three voids; red: three balls, one inside each void.

In both subfigures, we begin with a three-dimensional manifold $M = M_a \cup M_b$, comprising two disjoint submanifolds $M_a$ (green) and $M_b$ (red) entangled in a topologically nontrivial manner, and track its progressive transformation into a topologically simple manifold comprising a green ball and a red ball. In the left box, $M_a$ is initially the union of the two green solid tori, interlocked with the red solid figure-eight $M_b$. In the right box, $M$ is initially the union of $M_a$, the green solid ball with three voids inside, and $M_b$, three red balls each placed within one of the three voids of $M_a$. The topological simplification in both boxes is achieved via a reduction in the Betti numbers of both $M_a$ and $M_b$ so that eventually we have $\beta_k(M_i) = 0$ for $k \ge 1$ and $\beta_0(M_i) = 1$, $i = a, b$. Our main goal is to provide (what we hope is) incontrovertible evidence that this picture captures the actual workings of a well-trained neural network in a binary classification problem where $M_a$ and $M_b$ represent the two classes.

In reality, the manifold $M$ would have to be replaced by a point cloud data set, i.e., a finite set of points sampled, possibly with noise, from $M$. The notion of persistent homology allows us to give meaning to the topology of point cloud data and estimate the Betti numbers of its underlying manifold.

1.1. Key findings.

This work is primarily an empirical study — we performed more than 10,000 experiments on real and simulated data sets of varying topological complexities and have made our code available for readers’ further experimentation. We summarize our most salient observations and discuss their implications:

  (i) For a fixed data set and fixed network architecture, the topological changes effected by a well-trained network are robust across different training instances and follow a similar profile.

  (ii) Smooth activations like the hyperbolic tangent slow down topological simplification compared to nonsmooth activations like ReLU or leaky ReLU.

  (iii) The initial layers mostly induce only geometric changes; it is in the deeper layers that topological changes take place. Moreover, as we reduce network depth, the burden of producing topological simplification is not spread uniformly across layers but remains concentrated in the last layers. The last layers see a greater reduction in topological complexity than the initial layers.

Observation (ii) provides a plausible answer to a widely asked question [39, 34, 19]: What makes rectified activations such as ReLU and its variants perform better than smooth sigmoidal activations? We posit that it is not a matter of smooth versus nonsmooth but that a neural network with sigmoid activation is a homeomorphic map that preserves topology whereas one with ReLU activation is a nonhomeomorphic map that can change topology. It is much harder to change topology with homeomorphisms; in fact, mathematically it is impossible; but maps like the hyperbolic tangent achieve it in practice via rounding errors. Note that in IEEE finite-precision arithmetic, the hyperbolic tangent is effectively a piecewise linear step function:

$$\widehat{\tanh}(x) = \begin{cases} -1 & \text{if } x \le -t, \\ \operatorname{fl}(\tanh(x)) & \text{if } -t < x < t, \\ +1 & \text{if } x \ge t, \end{cases}$$

where $\operatorname{fl}(x)$ denotes the floating point representation of $x$, $t = \tanh^{-1}(1-u)$ is the threshold beyond which $\tanh$ rounds to $\pm 1$, and $u$ is the unit roundoff, i.e., $u = \epsilon/2$ with $\epsilon$ the machine epsilon [45]. Applied coordinatewise to a vector, $\tanh$ is a homeomorphism of $\mathbb{R}^n$ onto $(-1,1)^n$ and necessarily preserves topology; but $\widehat{\tanh}$ is not a homeomorphism and thus has the ability to change topology. We also observe that lowering the floating point precision increases the value of $u$ (e.g., $u = 2^{-53} \approx 1.1 \times 10^{-16}$ for double precision, $u = 2^{-11} \approx 4.9 \times 10^{-4}$ for half precision), which has the effect of coarsening $\widehat{\tanh}$, making it even further from a homeomorphism and thus more effective at changing topology. We suspect that this may account for the paradoxical superior performance of lower precision arithmetic in deep neural networks [11, 20, 23].
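The saturation is easy to observe numerically. The snippet below is our own illustration (not from the paper's code release); it scans a grid of inputs and reports, for each IEEE precision, the smallest tested value at which $\tanh$ rounds to exactly $1$:

```python
import numpy as np

# For each IEEE precision, find the smallest tested x with fl(tanh(x)) == 1.0.
# Beyond that threshold the activation acts as a clamp and is no longer injective.
for dtype in (np.float16, np.float32, np.float64):
    x = np.arange(0.0, 25.0, 0.25, dtype=dtype)
    y = np.tanh(x)
    saturated = x[y >= dtype(1.0)]          # inputs whose image is exactly 1.0
    print(dtype.__name__, "tanh(x) == 1.0 for all tested x >=", saturated.min())
```

The threshold shrinks as the precision is lowered, which is the coarsening effect described above.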

The ReLU activation, on the other hand, is far from a homeomorphism (for starters, it is not injective) even in exact arithmetic. Indeed, if changing topology is the goal, then a composition of an affine map $\rho(x) = Ax + b$ with the ReLU activation $\sigma(x) = \max(x, 0)$ applied coordinatewise, i.e., $\sigma \circ \rho$, is a quintessential tool for achieving it — any topologically complicated part of $M$ can be affinely moved outside the nonnegative orthant and collapsed to a single point by the rectifier. We see this in action in Figure 2, which unlike Figure 1, is a genuine example of a ReLU neural network trained to perfect accuracy on a two-dimensional manifold data set, where one class comprises five red disks in a square and the other is the remaining green portion with the five disks removed. The ‘folding’ transformations in Figure 2 clearly require many-to-one maps and can never be achieved by any homeomorphism.
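As a concrete illustration of this mechanism (a sketch of ours, not the paper's code), the following composes an affine shift into the negative orthant with ReLU and collapses an entire circle, a set with $\beta_1 = 1$, to a single point, something no homeomorphism can do:

```python
import numpy as np

# Affine map followed by ReLU collapses a whole circle to the origin.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # points on S^1

A, b = np.eye(2), np.array([-2.0, -2.0])    # shift the circle into the negative orthant
relu = lambda z: np.maximum(z, 0.0)
image = relu(circle @ A.T + b)

print(np.unique(image, axis=0))             # a single point: [[0. 0.]]
```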

Figure 2. How the data set is transformed after passing through the layers of a ReLU network with three neurons in each layer, well-trained to detect five disks in a square (red: the five disks; green: the complement within the square).

The effectiveness of ReLU activation over sigmoidal activations is often attributed to the former’s avoidance of the vanishing/exploding gradient problem. Our results in Section 6 indicate that this does not give the full explanation. Leaky ReLU and ReLU both avoid vanishing/exploding gradients, yet they transform data sets in markedly different manners — for one, ReLU reduces topology faster than Leaky ReLU. The sharpness of the gradients is clearly not what matters most; the topological perspective, on the other hand, readily explains the difference.

Observation (iii) addresses another perennial paradox [27, 16, 52]: Why does a neural network with more layers work better, despite the well-known universal approximation property that any function can be approximated arbitrarily well by a two-layer one? We posit that the traditional approximation-theoretic view of neural networks is insufficient here; instead, the proper perspective is that of a topologically complicated input getting progressively simplified as it passes through the layers of a neural network. Observation (iii) accounts for the role of the additional layers — topological changes are minor in the first few layers and occur mainly in the later layers, thus a complicated data set requires many more layers to simplify.

We emphasize that our goal is to explain the mechanics of what happens from one layer to the next, and to see what role each attribute of the network’s architecture — depth, width, activation — serves in changing topology. Note that we are not merely stating that a neural network is a black box that collapses each class to a component, but showing how that is achieved, i.e., what goes on inside the black box.

1.2. Relations with and departures from prior works.

As in topological data analysis, we make use of persistent homology and quantify topological complexity in terms of Betti numbers; we track how these numbers change as a point cloud data set passes through the layers of a neural network. But that is the full extent of any similarity with topological data analysis. In fact, from our perspective, topological data analysis and neural networks have opposite goals — the former is largely concerned with reading the shape of data, whereas the latter is about writing the shape of data; not unlike the relation between computer vision and computer graphics, wherein each is interested in the inverse problem of the other. Incidentally, this shows that a well-trained neural network applied in reverse can be used as a tool for labeling components of a complex data set and their interrelation, serving a role similar to mapper [53] in topological data analysis. This idea has been explored in [46, 40].

To the best of our knowledge, our approach towards elucidating the inner workings of a neural network by studying how the topology, as quantified by persistent Betti numbers, of a point cloud data set changes as it passes through the layers has never been done before. The key conclusion of these studies, namely, that the role of a neural network is primarily as a topology-changing map, is also novel as far as we know. Nevertheless, we would like to acknowledge a Google Brain blog post [44] that inspired our work — it speculated on how neural networks may act as homeomorphisms that distort geometry, but stopped short of making the leap to topology-changing maps.

There are other works that employ Betti numbers in the analysis of neural networks. [3] did a purely theoretical study of upper bounds on the topological complexity (i.e., sum of Betti numbers) of the decision boundaries of neural networks with smooth sigmoidal activations; [49] did a similar study with a different measure of topological complexity. [21] studied the empirical relation between the topological complexity of a data set and the minimal network capacity required to classify it. [50] used persistent homology to monitor changes in the weights of a neural network during training and proposed an early stopping criterion based on persistent homology.

1.3. Outline.

In Section 2 we introduce, in an informal way, the main topological concepts used throughout this article. This is supplemented by a more careful and detailed treatment in Section 3, which provides a self-contained exposition of simplicial homology and persistent homology tailored to our needs. Section 4 contains a precise formulation of the problem we study, specifies what is tracked empirically, and addresses some caveats. Section 5 introduces our methodology for tracking topological changes and implementation details. We present the results from our empirical studies with discussions in Section 6, verify our findings on real-world data in Section 7, and conclude with some speculative discussions in Section 8.

2. Quantifying topology

In this article, we rely entirely on Betti numbers to quantify topology as they are the simplest topological invariants that capture the shape of a space $M$, have intuitive interpretations, and are readily computable within the framework of persistent homology for a point cloud data set sampled from $M$. The zeroth Betti number, $\beta_0(M)$, counts the number of connected components in $M$; the $k$th Betti number, $\beta_k(M)$, $k \ge 1$, is informally the number of $k$-dimensional holes in $M$. In particular, $\beta_k(M) = 0$ when $k \ge d$ for $M \subseteq \mathbb{R}^d$, as there are no $k$-dimensional holes in $d$-dimensional space. So for $M \subseteq \mathbb{R}^d$, we write $\beta(M) = (\beta_0(M), \beta_1(M), \dots, \beta_{d-1}(M))$ — these numbers capture the shape or topology of $M$, as one can surmise from Figure 3. So whenever we refer to ‘topology’ in this article, we implicitly mean $\beta(M)$.

Figure 3. Manifolds in $\mathbb{R}^3$ and their Betti numbers: (a) single contractible manifold, (b) five contractible manifolds, (c) sphere, (d) solid torus (filled), (e) surface of torus (hollow), (f) genus-two surface (hollow), (g) torso surface (hollow).

If $M$ has no holes and can be continuously (i.e., without tearing) deformed to a point, then $\beta_0(M) = 1$ and $\beta_k(M) = 0$ for all $k \ge 1$; such a space is called contractible. The simplest noncontractible space is a circle $S^1$, which has a single connected component and a single one-dimensional hole, so $\beta_0(S^1) = \beta_1(S^1) = 1$ and $\beta_k(S^1) = 0$ for all $k \ge 2$. Figure 3 has a few more examples.

Intuitively, the more holes a space has, the more complex its topology. In other words, the larger the numbers in $\beta(M)$, the more complicated the topology of $M$. As such, we define its topological complexity by

(2.1)    $\omega(M) := \beta_0(M) + \beta_1(M) + \cdots + \beta_{d-1}(M).$

While not as well-known as the Euler characteristic (which is an alternating signed sum of the Betti numbers), the topological complexity is also a classical notion in topology, appearing most notably in Morse theory [35]; one of its best-known results is that the topological complexity of $M$ gives a lower bound for the number of stationary points of a function on $M$ whose Hessians at those stationary points are nondegenerate. It also appears in many other contexts [36, 1], including neural networks. We highlight in particular the work of [3] that we mentioned earlier, which studies the topological complexity of the decision boundary of neural networks with activations that are Pfaffian functions [55, 17]. These include sigmoidal activations but not the ReLU nor leaky ReLU activations studied in this article. For piecewise linear activations like ReLU and leaky ReLU, the most appropriate theoretical upper bounds for the topological complexity of decision boundaries are likely given by the number of linear regions [38, 56].

The goal of our article is different: we are interested not in the shape of the decision boundary of an $L$-layer neural network $\nu$ but in the shapes of the input $M$, the output $\nu(M)$, and all the intermediate outputs $\nu_l(M)$, $l = 1, \dots, L$. By so doing, we may observe how the shape of $M$ is transformed as it passes through the layers of a well-trained neural network, thereby elucidating its workings. In other words, we would like to track the Betti numbers

$\beta(M),\ \beta(\nu_1(M)),\ \beta(\nu_2(M)),\ \dots,\ \beta(\nu_L(M)).$

To do this in reality, we will have to estimate $\beta(M)$ from a point cloud data set, i.e., a finite set of points $X$ sampled from $M$, possibly with noise. The next section will describe the procedure to do this via persistent homology, which is by now a standard tool in topological data analysis. Readers who do not want to be bothered with the details just need to know that one may reliably estimate $\beta(M)$ by sampling points from $M$; those who would like to know the details may consult the next section. The main idea is that the Betti numbers of $M$ may be estimated by constructing a simplicial complex from $X$ in one of several ways that depend on a ‘persistence parameter’, and then using simplicial homology to compute the Betti numbers of this simplicial complex. Roughly speaking, the ‘persistence parameter’ allows one to pick the right scale at which the point cloud should be sampled so as to give a faithful estimation of $\beta(M)$. Henceforth whenever we speak of $\beta(M)$, we mean the Betti numbers estimated in this fashion.

For simplicity of the preceding discussion, we have used $M$ as a placeholder for any manifold. Take, say, a handwritten digit classification problem (see Section 7); then $M = M_0 \cup M_1 \cup \cdots \cup M_9$ has ten components, with $M_i$ the manifold of all possible handwritten digits $i$. Here we are not interested in $\beta(M)$ per se but in $\beta(\nu_l(M_i))$ for all $i = 0, 1, \dots, 9$ and $l = 1, \dots, L$ — so that we may see how each component is transformed as $M$ passes through the layers, i.e., we will need to sample points from each $\nu_l(M_i)$ to estimate its Betti numbers, for each component $i$ and at each layer $l$.

3. Algebraic topology and persistent homology background

This section may be skipped by readers who are already familiar with persistent homology or are willing to take on faith what we wrote in the last two paragraphs of the last section. Here we will introduce background knowledge in algebraic topology — simplicial complex, homology, simplicial homology — and provide a brief exposition on selected aspects of topological data analysis — Vietoris–Rips complex, persistent homology, practical homology computations — that we need for our purposes.

3.1. Simplicial complexes

A $k$-dimensional simplex, or $k$-simplex, in $\mathbb{R}^n$, is the convex hull of $k+1$ affinely independent points $v_0, v_1, \dots, v_k \in \mathbb{R}^n$. A $0$-simplex is a point, a $1$-simplex is a line segment, a $2$-simplex is a triangle, and a $3$-simplex is a tetrahedron. A $k$-simplex is represented by listing the set of its vertices and denoted $[v_0, v_1, \dots, v_k]$. The faces of a $k$-simplex are the simplices of dimensions $0$ to $k-1$ formed by convex hulls of proper subsets of its vertex set $\{v_0, v_1, \dots, v_k\}$. For example, the faces of a line segment/$1$-simplex $[v_0, v_1]$ are its end points $[v_0]$ and $[v_1]$, which are $0$-simplices; the faces of a triangle/$2$-simplex $[v_0, v_1, v_2]$ are its three sides $[v_0, v_1]$, $[v_0, v_2]$, $[v_1, v_2]$, which are $1$-simplices, and its three vertices $[v_0]$, $[v_1]$, $[v_2]$, which are $0$-simplices.

An $m$-dimensional geometrical simplicial complex $K$ in $\mathbb{R}^n$ is a finite collection of simplices in $\mathbb{R}^n$ of dimensions at most $m$ that (i) are glued together along faces, i.e., any intersection between two simplices in $K$ is necessarily a face of both of them; and (ii) includes all faces of all its simplices, e.g., if the simplex $[v_0, v_1, v_2]$ is in $K$, then the simplices $[v_0, v_1]$, $[v_0, v_2]$, $[v_1, v_2]$, $[v_0]$, $[v_1]$, $[v_2]$ must all also belong to $K$. Behind each geometrical simplicial complex is an abstract simplicial complex — a list $K$ of simplices with the property that if $\tau \subseteq \sigma$ and $\sigma \in K$, then $\tau \in K$. This combinatorial description of an abstract simplicial complex is exactly how we describe a graph, i.e., a $1$-dimensional simplicial complex, as an abstract collection of edges, i.e., $1$-simplices, comprising pairs of vertices. Conversely, any abstract simplicial complex can be realized geometrically as a geometrical simplicial complex like the one in Figure 4. The abstract description of a simplicial complex allows us to treat its simplices as elements in a vector space, a key step in defining simplicial homology, as we will see in Section 3.3.

Figure 4. A geometrical simplicial complex in $\mathbb{R}^3$ that is a geometrical realization of an abstract simplicial complex comprising a single $3$-simplex, five $2$-simplices, eighteen $1$-simplices, and fourteen $0$-simplices. Note that in the geometrical simplicial complex, the simplices intersect along faces.
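For readers who prefer code, here is one minimal way (our own sketch, not tied to any particular TDA library) to represent an abstract simplicial complex as a set of vertex subsets closed under taking faces; the helper name `closure` is ours:

```python
from itertools import combinations

def closure(top_simplices):
    """Return the abstract simplicial complex generated by the given simplices,
    i.e., the smallest collection containing them that is closed under faces."""
    K = set()
    for s in top_simplices:
        s = tuple(s)
        for k in range(1, len(s) + 1):
            K.update(frozenset(f) for f in combinations(s, k))
    return K

K = closure([("v0", "v1", "v2"), ("v2", "v3")])   # a triangle glued to an edge
print(sorted(sorted(s) for s in K))               # every face of every simplex is present
```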

3.2. Homology and Betti numbers

Homology is an abstract way to encode the topology of a space by means of a chain of vector spaces and linear maps. We refer readers to [31] for an elementary treatment requiring nothing more than linear algebra and graph theory. Here we will give an even simpler treatment restricted to $\mathbb{F}_2$, the field of two elements with arithmetic performed modulo $2$, which is enough for this article.

Let $C_0, C_1, C_2, \dots$ be vector spaces over $\mathbb{F}_2$. Let $\partial_k : C_k \to C_{k-1}$ be linear maps called boundary operators that satisfy the condition that “a boundary of a boundary is trivial,” i.e.,

(3.1)    $\partial_k \circ \partial_{k+1} = 0$

for all $k \ge 0$. A chain complex refers to the sequence

$\cdots \xrightarrow{\partial_{k+1}} C_k \xrightarrow{\partial_k} C_{k-1} \xrightarrow{\partial_{k-1}} \cdots \xrightarrow{\partial_1} C_0 \xrightarrow{\partial_0} C_{-1},$

where we set $C_{-1} := \{0\}$, the trivial subspace. The elements in the image of $\partial_{k+1}$ are called boundaries and the elements in the kernel of $\partial_k$ are called cycles. Clearly $\operatorname{im} \partial_{k+1}$ and $\ker \partial_k$ are both subspaces of $C_k$ and by (3.1),

$\operatorname{im} \partial_{k+1} \subseteq \ker \partial_k.$

We may form the quotient vector space

$H_k := \ker \partial_k / \operatorname{im} \partial_{k+1}$

and we will call it the $k$th homology group — the ‘group’ here refers to the structure of $H_k$ as an abelian group under addition. The elements of $H_k$ are called homology classes; note that these are cosets or equivalence classes of the form

(3.2)    $[c] := c + \operatorname{im} \partial_{k+1}, \qquad c \in \ker \partial_k.$

In particular $[c] = [c + b]$ for any $b \in \operatorname{im} \partial_{k+1}$. The dimension of $H_k$ as a vector space is denoted

$\beta_k := \dim H_k.$

This has special topological significance when $H_k$ is the homology group of a topological space like a simplicial complex and is called the $k$th Betti number of that space. Intuitively $\beta_k$ counts the number of $k$-dimensional holes. Note that by definition, $H_k$ has a basis comprising homology classes $[c_1], \dots, [c_{\beta_k}]$ for some cycles $c_1, \dots, c_{\beta_k} \in \ker \partial_k$.

3.3. Simplicial homology

We present a very simple exposition of simplicial homology tailored to our purposes. The simplification stems partly from our working over the field of two elements $\mathbb{F}_2 = \{0, 1\}$. In particular $-1 = 1$ and we do not need to concern ourselves with signs.

Given an abstract simplicial complex $K$, we define an $\mathbb{F}_2$-vector space $C_k(K)$ in the following way: Let $K_k = \{\sigma_1, \dots, \sigma_m\}$ be the set of all $k$-dimensional simplices in $K$. Then an element of $C_k(K)$ is a formal linear combination

$c = a_1 \sigma_1 + a_2 \sigma_2 + \cdots + a_m \sigma_m, \qquad a_1, \dots, a_m \in \mathbb{F}_2.$

In other words, $C_k(K)$ is a vector space over $\mathbb{F}_2$ with $K_k$ as a basis.

The boundary operators $\partial_k : C_k(K) \to C_{k-1}(K)$ are defined on a $k$-simplex $\sigma = [v_0, v_1, \dots, v_k]$ by

(3.3)    $\partial_k \sigma = \sum_{i=0}^{k} [v_0, \dots, \widehat{v_i}, \dots, v_k],$

where $\widehat{v_i}$ indicates that $v_i$ is omitted from $\sigma$, and extended linearly to all of $C_k(K)$, i.e.,

$\partial_k(a_1 \sigma_1 + \cdots + a_m \sigma_m) = a_1 \partial_k \sigma_1 + \cdots + a_m \partial_k \sigma_m.$

For example, $\partial_1 [v_0, v_1] = [v_0] + [v_1]$, $\partial_2 [v_0, v_1, v_2] = [v_0, v_1] + [v_0, v_2] + [v_1, v_2]$, $\partial_3 [v_0, v_1, v_2, v_3] = [v_0, v_1, v_2] + [v_0, v_1, v_3] + [v_0, v_2, v_3] + [v_1, v_2, v_3]$.

Working over $\mathbb{F}_2$ simplifies calculations enormously. In particular, it is easy to check that $\partial_k \circ \partial_{k+1} = 0$ for all $k$, as each $(k-1)$-simplex appears twice in the resulting sum and $1 + 1 = 0$ in $\mathbb{F}_2$. Thus (3.1) holds and $C_k(K)$, $\partial_k$ form a chain complex. The $k$th homology of the simplicial complex $K$ is then $H_k(K) = \ker \partial_k / \operatorname{im} \partial_{k+1}$ with $\partial_k$ as defined in (3.3). Working over $\mathbb{F}_2$ also guarantees that $H_k(K)$ takes the simple form $\mathbb{F}_2^{\beta_k}$ where $\beta_k$ is the $k$th Betti number, i.e.,

(3.4)    $\beta_k = \dim \ker \partial_k - \operatorname{rank} \partial_{k+1}$

for $k = 0, 1, 2, \dots$. Let $m_k := |K_k|$, the number of $k$-simplices in $K$. To compute (3.4), note that with the simplices in $K_k$ and $K_{k-1}$ as bases, $\partial_k$ is an $m_{k-1} \times m_k$ matrix with entries in $\mathbb{F}_2$ and the problem in (3.4) reduces to linear algebra over $\mathbb{F}_2$, i.e., with modulo-$2$ arithmetic. While this seems innocuous, the cost of computing Betti numbers becomes prohibitive when the size of the simplicial complex is large. The number of simplices in a $d$-dimensional simplicial complex $K$ is bounded above by

(3.5)    $|K| \le \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{d+1},$

where $n$ is the size of the vertex set, i.e., $n = m_0$, and the bound is obtained by summing over the maximal number of simplices of each dimension. The cost of computing $\beta_k$ by Gaussian elimination grows cubically with the number of simplices [54].
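The linear algebra in (3.4) is simple enough to carry out directly for small complexes. The sketch below is ours (the helper names `rank_gf2` and `boundary_matrix` are hypothetical, not from the paper); it computes Betti numbers over $\mathbb{F}_2$ by Gaussian elimination on the boundary matrices, illustrated on a hollow triangle:

```python
import numpy as np
from itertools import combinations

def rank_gf2(A):
    """Rank of a 0/1 matrix over F_2, by Gaussian elimination modulo 2."""
    A = A.copy() % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]             # row reduction mod 2 via XOR
        rank += 1
    return rank

def boundary_matrix(faces, simplices):
    """Matrix of d_k, rows indexed by (k-1)-simplices, columns by k-simplices."""
    D = np.zeros((len(faces), len(simplices)), dtype=np.int64)
    for j, s in enumerate(simplices):
        for f in combinations(s, len(s) - 1):
            D[faces.index(frozenset(f)), j] = 1
    return D

# Hollow triangle (a circle): three vertices, three edges, no 2-simplices.
V = [frozenset({v}) for v in "abc"]
E = [frozenset(e) for e in [("a", "b"), ("b", "c"), ("a", "c")]]
d1 = boundary_matrix(V, E)
beta0 = len(V) - rank_gf2(d1)         # dim ker d_0 = |V|, minus rank d_1
beta1 = len(E) - rank_gf2(d1) - 0     # dim ker d_1, minus rank d_2 (no triangles)
print(beta0, beta1)                    # 1 1: one component, one loop
```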

We conclude with a discussion of simplicial maps, which we will need in persistent homology. Let $K$ and $L$ be two abstract simplicial complexes. A simplicial map $\varphi : K \to L$ is a map defined on their vertex sets so that for each simplex $[v_0, \dots, v_k] \in K$, we have that $[\varphi(v_0), \dots, \varphi(v_k)]$ is a simplex in $L$. Such a map induces a map between chain complexes that we will also denote by $\varphi$, slightly abusing notation, defined by

$\varphi(a_1 \sigma_1 + \cdots + a_m \sigma_m) = a_1 \varphi(\sigma_1) + \cdots + a_m \varphi(\sigma_m),$

that in turn induces a map between homologies

(3.6)    $\varphi_* : H_k(K) \to H_k(L), \qquad \varphi_*[c] = [\varphi(c)],$

for all $k$. Recall that $[c]$ is a shorthand for the homology class (3.2).

The composition of two simplicial maps $\varphi : K \to L$ and $\psi : L \to M$ is also a simplicial map $\psi \circ \varphi : K \to M$ and thus induces a map between homologies $(\psi \circ \varphi)_* : H_k(K) \to H_k(M)$ for any $k$. For the type of simplicial complexes (Vietoris–Rips) and simplicial maps (inclusions) we consider in this article, we have that $(\psi \circ \varphi)_* = \psi_* \circ \varphi_*$, a property known as functoriality.

3.4. Vietoris–Rips complex

There are several ways to obtain a simplicial complex from a point cloud data set but one stands out for its simplicity and widespread adoption in topological data analysis. Note that a point cloud data set is simply a finite set of points $X = \{x_1, \dots, x_n\} \subseteq \mathbb{R}^d$. We will build an abstract simplicial complex with vertex set $X$.

Let $\delta$ be a metric on $X$. The Vietoris–Rips complex at scale $\varepsilon \ge 0$ on $X$ is denoted by $\operatorname{VR}_\varepsilon(X)$ and defined to be the simplicial complex whose vertex set is $X$ and whose $k$-simplices comprise all $[x_{i_0}, \dots, x_{i_k}]$ satisfying $\delta(x_{i_j}, x_{i_l}) \le \varepsilon$ for all $0 \le j, l \le k$. In other words,

$\operatorname{VR}_\varepsilon(X) = \bigl\{ [x_{i_0}, \dots, x_{i_k}] : \delta(x_{i_j}, x_{i_l}) \le \varepsilon \text{ for all } j, l;\ k = 0, 1, \dots, n-1 \bigr\}.$

It follows immediately from the definition that $\operatorname{VR}_\varepsilon(X)$ is an abstract simplicial complex. Note that it depends on two things — the scale $\varepsilon$ and the choice of metric $\delta$. Figure 5 shows an example of a Vietoris–Rips complex constructed from a point cloud data set of ten points in $\mathbb{R}^2$ at three different scales, with $\delta$ given by the Euclidean norm.
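A brute-force construction of $\operatorname{VR}_\varepsilon(X)$ up to dimension two, workable only for small point clouds, might look as follows (a sketch of ours; production TDA software uses far more efficient algorithms):

```python
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist, squareform

def vietoris_rips(X, eps, max_dim=2, metric="euclidean"):
    """All simplices of dimension <= max_dim whose pairwise distances are <= eps."""
    D = squareform(pdist(X, metric=metric))
    n = len(X)
    complex_ = [frozenset({i}) for i in range(n)]             # 0-simplices
    for k in range(1, max_dim + 1):                           # k-simplices
        for idx in combinations(range(n), k + 1):
            if all(D[i, j] <= eps for i, j in combinations(idx, 2)):
                complex_.append(frozenset(idx))
    return complex_

theta = np.linspace(0, 2 * np.pi, 10, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)          # ten points on a circle
print(len(vietoris_rips(X, eps=0.7)))                         # number of simplices at this scale
```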

For a point cloud $X$ sampled from a manifold $M$ embedded in $\mathbb{R}^d$, the most appropriate metric is the geodesic distance on $M$ and not the Euclidean norm on $\mathbb{R}^d$. This is usually estimated from $X$ using a graph geodesic distance, as we will see in Section 5.3.

When $X$ is sampled from a manifold $M$, then for a dense enough sample, and at a sufficiently small scale, the topology of $X$ recovers the true topology of $M$ in an appropriate sense, made precise in the following result in [43]:

Proposition 3.1 (Niyogi–Smale–Weinberger).

Let $X = \{x_1, \dots, x_n\}$ be $\varepsilon/2$-dense in a compact Riemannian manifold $M \subseteq \mathbb{R}^d$, i.e., for every $p \in M$, there exists $x_i$ such that $\|p - x_i\| < \varepsilon/2$. Let $\tau$ be the condition number of $M$. Then for any $\varepsilon < \sqrt{3/5}\,\tau$, the union of balls $\bigcup_{i=1}^{n} B_\varepsilon(x_i)$ deformation retracts to $M$. In particular, the homology of $\bigcup_{i=1}^{n} B_\varepsilon(x_i)$ equals the homology of $M$.

Roughly speaking, the condition number $\tau$ of a manifold $M$ embedded in $\mathbb{R}^d$ encodes its local and global curvature properties, but the details are too technical for us to go into here.

Figure 5. Left: Vietoris–Rips complex on ten points in $\mathbb{R}^2$ at three increasing scales. Right: Persistence barcode diagram obtained from a filtration of the Vietoris–Rips complex over a range of scales. The barcodes show the two most prominent topological features of the point cloud, the long black line at the bottom ($H_0$) and the long red line near the top ($H_1$), revealing the topology of a circle, i.e., $\beta_0 = \beta_1 = 1$. The short $0$-homology bars die as the points merge into a single component; a $1$-homology class is born once the points form a loop.

3.5. Persistent homology

The Vietoris–Rips complex of a point cloud data set involves a parameter . Here we will discuss how this may be determined.

Homology classes are very sensitive to small changes. For example, punching a small hole in a sphere has little effect on its geometry but has large consequences for its topology — even a very small hole would kill the two-dimensional homology class, turning a sphere into a topological disk. This also affects the estimation of the Betti numbers of a manifold from a sampled point cloud data set: there are many scenarios where moving a single point can significantly change the homology estimates. Persistent homology [15] addresses this problem by blending geometry and topology. It allows one to reliably estimate the Betti numbers of a manifold from a point cloud data set, and to a large extent avoids the problem of the extreme sensitivity of topology to perturbations. In machine learning lingo, Betti numbers are features associated with the point cloud, and persistent homology enables one to identify the features that are robust to noise.

Informally, the idea of persistent homology is to introduce a geometric scale $\varepsilon$ that varies from $0$ to $\infty$ into homology calculations. At scale $\varepsilon = 0$, $\operatorname{VR}_0(X)$ is a collection of $0$-dimensional simplices with $\beta_0 = |X|$ and all other Betti numbers zero. In machine learning lingo, the simplicial complex ‘overfits’ the data $X$, giving us a discrete topological space. As $\varepsilon$ increases, more and more distant points come together to form higher and higher-dimensional simplices in $\operatorname{VR}_\varepsilon(X)$ and its topology becomes richer. But as $\varepsilon \to \infty$, eventually all points in $X$ become vertices of a single $(|X|-1)$-dimensional simplex, giving us a contractible topological space. So at the extreme ends $\varepsilon = 0$ and $\varepsilon = \infty$, we have trivial (discrete and indiscrete) topologies and the true answer we seek lies somewhere in between — to obtain a ‘right’ scale $\varepsilon$, we use the so-called persistence barcodes. Figure 5 shows an example of a persistence barcode diagram. This is the standard output of persistent homology calculations and it provides a summary of the evolution of topology across all scales. Generally speaking, a persistence barcode is an interval whose left endpoint is the scale at which a new feature appears, or is born, and whose right endpoint is the scale at which that feature disappears, or dies. The length of the interval is the persistence of that feature. Features that are not robust to perturbations produce short intervals; conversely, features that persist long enough, i.e., produce long intervals, are thought to be prominent features of the underlying manifold. For our purpose, the feature in question will always be a homology class in the $k$th homology group. The collection of all persistence barcodes over all $k$ then gives us our persistence barcode diagram. If we sample a point cloud satisfying Proposition 3.1 from a sphere with a small punctured hole, we expect to see a single prominent interval corresponding to the sphere’s two-dimensional homology class, and only a short interval corresponding to the small hole. The persistence barcodes would allow us to identify a scale $\varepsilon$ at which all prominent topological features of $X$ are represented, assuming that such a scale exists. In the following we will assume that we are interested in selecting $\varepsilon$ from a list of finitely many scales $\varepsilon_1 < \varepsilon_2 < \cdots < \varepsilon_m$, but that the list could go as fine as we want. For our purpose, the simplicial complexes below are taken to be $K_i = \operatorname{VR}_{\varepsilon_i}(X)$, $i = 1, \dots, m$, but the following discussion holds more generally.
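In practice one rarely implements this from scratch; a library such as ripser computes the barcodes directly. A minimal sketch, assuming the ripser Python package is available (this is our illustration, not the paper's code):

```python
import numpy as np
from ripser import ripser   # assumes `pip install ripser`

# Persistence barcodes for a noisy circle: one long H0 bar and one long H1 bar
# should dominate, matching beta = (1, 1) for S^1.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 400)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1) + 0.05 * rng.normal(size=(400, 2))

dgms = ripser(X, maxdim=1)["dgms"]          # persistence diagrams for H0 and H1
for k, dgm in enumerate(dgms):
    lifetimes = dgm[:, 1] - dgm[:, 0]       # bar lengths (the H0 class that never dies is inf)
    print(f"H{k}: three most persistent bars:", np.sort(lifetimes)[-3:])
```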

We provide the details for computing persistence barcodes for homology groups, or persistent homology for short. This essentially tracks the evolution of homology in a filtration of simplicial complexes, which is a chain of nested simplicial complexes

(3.7)    $K_1 \subseteq K_2 \subseteq \cdots \subseteq K_m.$

We let $\iota_i : K_i \to K_{i+1}$, $i = 1, \dots, m-1$, denote the inclusion maps where each simplex of $K_i$ is sent to the same simplex in $K_{i+1}$ and regarded as a simplex in $K_{i+1}$. As $\iota_i$ is obviously a simplicial map and induces a linear map between the homologies of $K_i$ and $K_{i+1}$ as discussed in Section 3.3, composing inclusions gives us a linear map $H_k(K_i) \to H_k(K_j)$ between any two complexes $K_i$ and $K_j$ in a filtration, $i \le j$. The index $i$ is often referred to as ‘time’ in this context. As such, for any $i \le j$, one can tell whether two cycles belonging to two different homology classes in $H_k(K_i)$ are mapped to the same homology class in $H_k(K_j)$ — if this happens, one of the homology classes is said to have died while the other has persisted from time $i$ to $j$. If a homology class in $H_k(K_j)$ is not in the image of this map, we say that the homology class is born at time $j$. The persistence barcodes simply keep track of the birth and death times of the homology classes.

To be completely formal, we have the two-dimensional complex, called a persistence complex, shown in Figure 6, with horizontal maps given by the boundary maps $\partial_k$ and vertical maps given by the simplicial (inclusion) maps $\iota_i$. Thanks to a well-known structure theorem [57], which guarantees that a barcode diagram completely describes the structure of a persistence complex in an appropriate sense, we may avoid persistence complexes like Figure 6 and work entirely with persistence barcode diagrams like the one on the right of Figure 5.

Figure 6. Persistence complex of the filtration $K_1 \subseteq K_2 \subseteq \cdots \subseteq K_m$.

Henceforth we let $K_i = \operatorname{VR}_{\varepsilon_i}(X)$, $i = 1, \dots, m$, be the Vietoris–Rips complexes of our point cloud data $X$ at scales $\varepsilon_1 < \varepsilon_2 < \cdots < \varepsilon_m$. An important fact to note is that persistence barcodes may be computed without having to compute homology at every scale $\varepsilon_i$, or, equivalently, at every time $i$. To identify the homology classes in $H_k(K_i)$ that persist from time $i$ to time $i + p$, there is no need to compute $H_k(K_i), H_k(K_{i+1}), \dots, H_k(K_{i+p})$ individually as one might think. Rather, one considers the $p$-persistent $k$th homology group

$H_k^{i,p} := Z_k^i / \bigl( B_k^{i+p} \cap Z_k^i \bigr),$

where $Z_k^i$ denotes the cycles $\ker \partial_k$ in $K_i$ and $B_k^{i+p}$ denotes the boundaries $\operatorname{im} \partial_{k+1}$ in $K_{i+p}$. This captures the cycles in $K_i$ that still contribute to homology in $K_{i+p}$. One may consistently choose a basis for each $H_k^{i,p}$ so that the basis elements are compatible for homologies across the filtration for all possible values of $i$ and $p$. This allows one to track the persistence of each homology class throughout the filtration (3.7) and thus obtain the persistence barcodes: roughly speaking, with the right basis, we may simultaneously represent the boundary maps on $K_1, \dots, K_m$ as matrices in column-echelon form and read off the dimension of $H_k^{i,p}$, known as the $p$-persistent $k$th Betti number $\beta_k^{i,p}$, from the pivot entries in these matrices. For details we refer readers to [15, 57].

3.6. Homology computations in practice

Actual computation of homology from a point cloud data set is more involved than what one might surmise from the description in the last few sections. We will briefly discuss some of the issues involved.

Before we even begin to compute the homology of the point cloud data $X$, we will need to perform a few preprocessing steps, as depicted in Figure 7. These steps are standard practice in topological data analysis: (i) We smooth out $X$ and discard outliers to reduce noise. (ii) We then select the scale $\varepsilon$ and construct the corresponding Vietoris–Rips complex $\operatorname{VR}_\varepsilon(X)$. (iii) We simplify $\operatorname{VR}_\varepsilon(X)$ in a way that reduces its size but leaves its topology unchanged. All processing operations that can have an effect on the homology are in steps (i) and (ii), and the homology of the resulting simplicial complex is assumed to closely approximate that of the underlying manifold $M$. The simplification in step (iii) is done to accelerate computations without altering homology.

Figure 7. Pipeline for computation of homology from a point cloud data.

Note that the size of the final simplicial complex on which we perform homology calculations is the most important factor in computational cost. While increasing the number of points sampled from a manifold, i.e., the size of $X$, up to the density required in Proposition 3.1 improves the accuracy of our homology estimates, it also results in a simplicial complex that is prohibitively large for carrying out computations, as we saw in (3.5). But since we are not concerned with the geometry of the underlying manifold, only its topology, it is desirable to construct a small simplicial complex with minimal topology distortion. A well-known simplification is the witness complex [12], which gives results of nearly the same quality with a simplicial complex of a smaller size constructed from so-called landmark points. Many other methods have been proposed for this [4, 14, 37], and while we will take advantage of these techniques in our calculations, we will not discuss them here.
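For concreteness, one common way to pick landmark points is maxmin (farthest-point) sampling; the sketch below is our own illustration of that idea and not necessarily the simplification used in the paper:

```python
import numpy as np
from scipy.spatial.distance import cdist

def maxmin_landmarks(X, n_landmarks, seed=0):
    """Greedily pick well-spread landmark points: each new landmark is the point
    farthest from the set already chosen."""
    rng = np.random.default_rng(seed)
    landmarks = [int(rng.integers(len(X)))]               # start from a random point
    dist_to_set = cdist(X, X[landmarks]).min(axis=1)
    for _ in range(n_landmarks - 1):
        nxt = int(np.argmax(dist_to_set))
        landmarks.append(nxt)
        dist_to_set = np.minimum(dist_to_set, cdist(X, X[[nxt]]).ravel())
    return X[landmarks]

X = np.random.default_rng(1).normal(size=(2000, 3))
L = maxmin_landmarks(X, 100)                               # 100 well-spread landmarks
print(L.shape)
```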

The takeaway is that persistence barcodes are considerably more expensive to compute than homology at a single fixed scale $\varepsilon$. Therefore, running full persistent homology in the context of modern deep neural networks poses significant challenges: modern deep neural networks operate on very high-dimensional, big data sets, a setting in which persistent homology cannot be used directly due to its computational and memory complexity. This situation is exacerbated by the fact that neural networks are randomly trained (with potentially big variation in the learned decision boundaries); therefore one needs to run many computations to obtain reliable results. Furthermore, automated statistical analysis of persistent homology is still an active area of research and often requires additional large computational effort. It therefore seems largely beyond the reach of current technology to analyze the topology of many of the standard deep learning data sets (such as SVHN, CIFAR-10, ImageNet, see [41, 26, 13]). We will return to this point later when we introduce our methodology for monitoring topology transformations in a neural network. In particular, we will see in Section 5.3 that our experiments are designed in such a way that although we will compute homology at every layer, we only need to compute persistence barcodes once, before the data set is passed through the layers.

4. Overview of problem and methodology

We will use binary classification, the most basic and fundamental problem in supervised learning, as our platform for studying how neural networks change topology. More precisely, we seek to classify two different probability distributions supported on two disjoint manifolds $M_a, M_b \subseteq \mathbb{R}^d$. The distance $\operatorname{dist}(M_a, M_b) = \min\{\|x - y\| : x \in M_a,\ y \in M_b\}$ can be arbitrarily small but not zero. So there exists an ideal classifier with zero prediction error. Here and henceforth, $\|\cdot\|$ will denote the Euclidean norm in $\mathbb{R}^d$.

We sample a large but finite set of points $X = X_a \cup X_b$ uniformly and densely from $M = M_a \cup M_b$, so that the Betti numbers of $M_a$ and $M_b$ can be faithfully obtained from the point cloud data sets $X_a$ and $X_b$ as described in Section 3. Our training set is a labeled point cloud data set, i.e., each $x \in X$ is labeled to indicate whether $x \in X_a$ or $x \in X_b$. We will use $X_a$ and $X_b$, or rather, their Vietoris–Rips complexes as described in Section 3.4, as finite proxies for $M_a$ and $M_b$.

Our feedforward neural network $\nu$ is given by the usual composition

(4.1)    $\nu = s \circ f_L \circ f_{L-1} \circ \cdots \circ f_1,$

where each layer of the network $f_l : \mathbb{R}^{n_{l-1}} \to \mathbb{R}^{n_l}$, $l = 1, \dots, L$, is the composition of an affine map $\rho_l$, $\rho_l(x) = W_l x + b_l$, with an activation function $\sigma_l$, i.e., $f_l = \sigma_l \circ \rho_l$; and $s$ is the score function. The width $n_l$ is the number of nodes in the $l$th layer and we set $n_0 = d$. For $l = 1, \dots, L$, the composition of the first through $l$th layers is denoted

$\nu_l := f_l \circ f_{l-1} \circ \cdots \circ f_1 : \mathbb{R}^d \to \mathbb{R}^{n_l}.$

We assume that $s$ is a linear classifier and thus the decision boundary of $s$ is a hyperplane in $\mathbb{R}^{n_L}$. For notational convenience later, we define the ‘$(L+1)$th layer’ to be the score function and the ‘$0$th layer’ to be the identity function on $\mathbb{R}^d$.

We train an $L$-layer neural network $\nu$ on a training set $X = X_a \cup X_b$ to classify samples into class $a$ or $b$. As usual, the network’s output $\nu(x)$ for a sample $x$ is interpreted to be the probability that $x$ belongs to class $a$. In all our experiments, we train $\nu$ until it correctly classifies all of $X$ — we will call such a network well-trained. In fact, we sampled $X$ so densely that in reality $\nu$ also has near-zero misclassification error on any test set; and we trained $\nu$ so thoroughly that its output is concentrated near $0$ and $1$. For all intents and purposes, we may treat $\nu$ as an ideal classifier.
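To make the layer-by-layer bookkeeping concrete, here is a minimal numpy sketch of the composition in (4.1) that retains every intermediate output $\nu_l(X)$ for later homology computations; it is our own illustration (the helper name `forward_all_layers` is hypothetical), not the paper's code:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward_all_layers(X, weights, biases, activation=relu):
    """Return [nu_1(X), nu_2(X), ..., nu_L(X)] for a point cloud X (rows = points)."""
    outputs, h = [], X
    for W, b in zip(weights, biases):
        h = activation(h @ W + b)      # f_l = sigma o rho_l
        outputs.append(h)
    return outputs

# toy example: widths 2 -> 15 -> 15 -> 2 with random (untrained) weights
rng = np.random.default_rng(0)
widths = [2, 15, 15, 2]
Ws = [rng.normal(size=(m, n)) for m, n in zip(widths[:-1], widths[1:])]
bs = [np.zeros(n) for n in widths[1:]]
layer_outputs = forward_all_layers(rng.normal(size=(100, 2)), Ws, bs)
print([o.shape for o in layer_outputs])   # [(100, 15), (100, 15), (100, 2)]
```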

We deliberately choose $M_a$ and $M_b$ to have doubly complicated topologies in the following sense:

  1. For each $i = a, b$, the component $M_i$ itself will have a complicated topology, with multiple connected components, i.e., large $\beta_0(M_i)$, as well as multiple $k$-dimensional holes, i.e., large $\beta_k(M_i)$ for $k \ge 1$.

  2. In addition, $M_a$ and $M_b$ will be entangled in a topologically complicated manner; see Figures 8 and 9 for examples. Not only can they not be separated by a hyperplane, but any decision boundary that separates them will necessarily have complicated topology itself.

In terms of the topological complexity in (2.1), $\omega(M_a)$, $\omega(M_b)$, and the topological complexity of any separating decision boundary are all large.

Our experiments are intended to show how the topologies of $\nu_l(M_a)$ and $\nu_l(M_b)$ evolve as $l$ runs from $1$ through $L$, for different manifolds entangled in different ways, for different numbers of layers $L$ and choices of widths $n_l$, and for different activations $\sigma$. Getting ahead of ourselves, the results will show that a well-trained neural network reduces the topological complexity of $M_a$ and $M_b$ on a layer-by-layer basis until, at the output, we see a simple disentangled arrangement where the point cloud gets mapped into two well-separated clusters of points $\nu(X_a)$ and $\nu(X_b)$. This indicates that an initial decision boundary of complicated topology ultimately gets transformed into a hyperplane in $\mathbb{R}^{n_L}$ by the time it reaches the final layer. We measure and track the topologies of $M_a$ and $M_b$ directly, but our approach only permits us to indirectly observe the topology of the decision boundary separating them.

4.1. Real versus simulated data

We perform our experiments on a range of both real-world and simulated data sets to validate our premise that a neural network operates by simplifying topology. We explain why each is indispensable to our goal.

Unlike real-world data, simulated data may be generated in a controlled manner with well-defined topological features that are known in advance (crucial for finding a single scale $\varepsilon$ for all homology computations). Moreover, with simulated data we have clean samples and may skip the denoising step mentioned in the previous section. We can generate samples that are uniformly distributed on the underlying manifold, and ensure that the assumptions of Section 4 are satisfied. In addition, we may always simulate a data set with a perfect classifier, whereas such a classifier for a real-world data set may not exist when the probability distributions of different categories overlap. For convincing results, we train our neural network to perfect accuracy on the training set and near-zero generalization error — this may be impossible for real-world data. Evidently, if there is no complete separation of one category from the other, i.e., $\operatorname{dist}(M_a, M_b) = 0$, the manifold will be impossible to disentangle. Such is often the case with real-world data sets, which means that they may not fit our required setup in Section 4.

Nevertheless, the biggest issue with real-world data sets is that they have vastly more complicated topologies that are nearly impossible to determine in advance. Even something as basic as the Mumford data set [29], a mere collection of $3 \times 3$-pixel high-contrast patches of natural images, took many years to have its topology determined [8], and whether the conclusion (that it has the topology type of a Klein bottle) is correct is still a matter of debate. Figuring out, say, the topology of the manifold of cat images within the space of all possible images is well beyond our capabilities for the foreseeable future.

Since our experiments on simulated data allow us to pick the right scale to compute homology, we only need to compute homology at one single scale. On the other hand, for real data we will need to find the persistence barcodes, i.e., determine homology over a full range of scales. Consequently, our experiments on simulated data are extensive — we repeat our experiments for each simulated data set over a large number of neural networks of different architectures to examine their effects on topology changes. In all, we ran more than 10,000 homology computations on our simulated data sets, since we can do them quickly and accurately. In comparison, our experiments on real-world data are more limited in scope as it is significantly more expensive to compute persistence barcodes than to compute homology at a single scale. As such, we use simulated data to fully explore and investigate the effects of depth, width, shapes, activation functions, and various combinations of these factors on the topology-changing power of neural networks. Thereafter we use real-world data to validate the findings we draw from the simulated data sets.

5. Methodology

In this section, we will describe the full details of our methodology for (i) simulating topologically nontrivial data sets in a binary classification problem; (ii) training a variety of neural networks to near-perfect accuracy for such a problem; (iii) determining the homology of the data set as it passes through the layers of such a neural network. For real data sets, step (i) is of course irrelevant, but steps (ii) and (iii) will apply with minor modifications; these discussions will be deferred to Section 7.

The key underlying reason for designing our experiments in the way we did is relative computational costs:

  • multidimensional persistent homology is much more costly than persistent homology;

  • persistent homology is much more costly than homology;

  • homology is much more costly than training neural networks.

As such, we train tens of thousands of neural networks to near zero generalization error; for each neural network, we compute homology at every layer but we compute persistent homology only once; and we avoid multidimensional persistent homology altogether.

5.1. Generating data sets

We generate three point cloud data sets D-I, D-II, D-III in a controlled manner to have complicated but manageable topologies that we know in advance.

Figure 8. The manifolds underlying data sets D-I, D-II, D-III (left to right). The green represents category $a$, the red represents category $b$.
Figure 9. Left: D-II comprises nine pairs of such interlocking rings. Right: D-III comprises nine units of such doubly concentric spheres.

D-I is sampled from a two-dimensional manifold consisting of $M_a$, nine green disks, positioned inside $M_b$, a larger disk with nine holes, as on the left in Figure 8. We clearly have $\beta(M_a) = (9, 0)$ and $\beta(M_b) = (1, 9)$ (one connected component, nine holes). D-II is sampled from a three-dimensional manifold comprising nine disjoint pairs of a red solid torus interlocked with a green solid torus (a single pair is shown in Figure 9). $M_a$ (resp. $M_b$) is the union of all nine green (resp. red) tori. So $\beta(M_a) = \beta(M_b) = (9, 9, 0)$. D-III is sampled from a three-dimensional manifold comprising nine disjoint units of the following — a large red sphere enclosing a smaller green sphere enclosing a red ball; the green sphere is trapped between the red sphere and the red ball. $M_a$ is the union of all nine green spheres and $M_b$ is the union of the nine red spheres and nine red balls. So we have $\beta(M_a) = (9, 0, 9)$ and $\beta(M_b) = (18, 0, 9)$ (see Figures 8 and 9 for more details, but note that in Figure 9 the spheres are shown with portions omitted). In all cases, the two categories $M_a$ and $M_b$ are entangled in such a way that any decision boundary separating the two categories necessarily has highly complex topology.

The point cloud data sets D-I and D-III are sampled on a grid whereas D-II is sampled uniformly from the solid tori. The difference in sampling schemes is inconsequential for all intents and purposes in this article, as the samples are sufficiently dense that there is no difference in training and testing behaviors.
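To give a flavor of how such data can be generated, the following sketch draws an approximately uniform sample from a solid torus and assembles one interlocked pair in the spirit of a single D-II unit; the radii, offsets, and helper name are our own choices, not the paper's:

```python
import numpy as np

def sample_solid_torus(n, R=2.0, r=0.6, seed=0):
    """Approximately uniform sample from a solid torus with center-circle radius R
    and tube radius r, lying in the xy-plane and centered at the origin."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 2 * np.pi, n)            # angle around the center circle
    v = rng.uniform(0, 2 * np.pi, n)            # angle around the tube
    s = r * np.sqrt(rng.uniform(0, 1, n))       # radial coordinate inside the tube
    x = (R + s * np.cos(v)) * np.cos(u)
    y = (R + s * np.cos(v)) * np.sin(u)
    z = s * np.sin(v)
    return np.stack([x, y, z], axis=1)

green = sample_solid_torus(2000, seed=1)
red = sample_solid_torus(2000, seed=2)
# rotate the red torus into the xz-plane and shift it so the two tori interlock
red = red @ np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]]).T + np.array([2.0, 0.0, 0.0])
X = np.vstack([green, red])                     # labels: first 2000 green, rest red
print(X.shape)
```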

5.2. Training neural networks

Our goal is to examine the topology-changing effects of (i) different activations: hyperbolic tangent, leaky ReLU (with the leak parameter set to a fixed value), and ReLU; (ii) different depths of four to ten layers; (iii) different widths of six to fifty neurons. So for any given data set (D-I, D-II, D-III) and any given architecture (depth, width, activation), we tracked the Betti numbers through all layers for at least 30 well-trained neural networks. The repetition is necessary — given that neural network training involves a fair amount of randomization in initialization, batching, optimization, etc. — to ensure that what we observe is not a fluke.

To train these neural networks to our requirements — recall that this means zero training error and a near-zero generalization error — we relied on TensorFlow (version 1.12.0 on Ubuntu 16.04.1). Training is done on categorical cross-entropy loss with the standard Adam optimizer [25] for up to 18,000 training epochs. The learning rate is decayed exponentially across training epochs; for the ‘bottleneck architectures’ where the widths narrow down in the middle layers (see Table 1), a slower decay is used. We use the softmax function as the score function in all of our networks, i.e., the function whose $j$th coordinate is

$s(z)_j = \frac{e^{z_j}}{\sum_{i=1}^{c} e^{z_i}}, \qquad j = 1, \dots, c,$

where $c$ is the number of categories. In our case, $c = 2$.
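A minimal training sketch in this spirit is given below; it uses the modern tf.keras API rather than the authors' original TensorFlow 1.12 code, the learning-rate constants are placeholders rather than the paper's values, and the helper name `build_network` is ours:

```python
import numpy as np
import tensorflow as tf

def build_network(widths, activation="relu"):
    model = tf.keras.Sequential()
    for w in widths:
        model.add(tf.keras.layers.Dense(w, activation=activation))
    model.add(tf.keras.layers.Dense(2, activation="softmax"))   # two categories, softmax score
    return model

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.96)
model = build_network([15] * 10, activation="relu")             # e.g. one of the D-I architectures
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# X: point cloud in R^2, y: 0/1 class labels (in practice generated as in Section 5.1)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2)).astype("float32")
y = (np.linalg.norm(X, axis=1) < 0.5).astype("int64")           # toy stand-in labels
model.fit(X, y, epochs=10, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))
```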

Table 1 summarizes our results: the data set used, the activation type, the widths of each layer, and the number of successfully trained neural networks of that architecture obtained. The first number in the sequence of the third column gives the dimension of the input, which is two for the two-dimensional D-I and three for the three-dimensional D-II and D-III. The last number in that sequence is always two since they are all binary classification problems. To give readers an idea, training any one of these neural networks to near zero generalization error takes at most 10 minutes, often much less.

data set activation neurons in each layer #
D-I tanh 2-15-15-15-15-15-15-15-15-15-15-2 30
D-I leaky ReLU 2-15-15-15-15-15-15-15-15-15-15-2 30
D-I leaky ReLU 2-05-05-05-05-03-05-05-05-05-05-2 30
D-I leaky ReLU 2-15-15-15-15-03-15-15-15-15-2 30
D-I leaky ReLU 2-50-50-50-50-50-50-50-50-50-50-2 30
D-I ReLU 2-15-15-15-15-15-15-15-15-15-15-2 30
D-II tanh 3-15-15-15-15-15-15-15-15-15-15-2 32
D-II leaky ReLU 3-15-15-15-15-15-15-15-15-15-15-2 36
D-II ReLU 3-15-15-15-15-15-15-15-15-15-15-2 31
D-II tanh 3-25-25-25-25-25-25-25-25-25-25-2 30
D-II leaky ReLU 3-25-25-25-25-25-25-25-25-25-25-2 30
D-II ReLU 3-25-25-25-25-25-25-25-25-25-25-2 30
D-III tanh 3-15-15-15-15-15-15-15-15-15-15-2 30
D-III leaky ReLU 3-15-15-15-15-15-15-15-15-15-15-2 46
D-III ReLU 3-15-15-15-15-15-15-15-15-15-15-2 30
D-III tanh 3-50-50-50-50-50-50-50-50-50-50-2 30
D-III leaky ReLU 3-50-50-50-50-50-50-50-50-50-50-2 30
D-III ReLU 3-50-50-50-50-50-50-50-50-50-50-2 34
D-I tanh 2-15-15-15-15-2 30
D-I tanh 2-15-15-15-15-15-15-15-15-2 30
D-I leaky ReLU 2-15-15-15-15-2 30
D-I leaky ReLU 2-15-15-15-15-15-15-15-15-2 30
D-I ReLU 2-15-15-15-15-2 30
D-I ReLU 2-15-15-15-15-15-15-15-15-2 30
D-II tanh 3-15-15-15-15-2 31
D-II tanh 3-15-15-15-15-15-2 31
D-II tanh 3-15-15-15-15-15-15-15-2 30
D-II leaky ReLU 3-15-15-15-15-2 31
D-II leaky ReLU 3-15-15-15-15-15-2 30
D-II leaky ReLU 3-15-15-15-15-15-15-2 30
D-II leaky ReLU 3-15-15-15-15-15-15-15-2 31
D-II leaky ReLU 3-15-15-15-15-15-15-15-15-2 42
D-II ReLU 3-15-15-15-15-2 32
D-II ReLU 3-15-15-15-15-15-2 32
D-II ReLU 3-15-15-15-15-15-15-15-15-2 31
D-III tanh 3-15-15-15-15-15-15-2 30
D-III tanh 3-15-15-15-15-15-15-15-15-2 31
D-III leaky ReLU 3-15-15-15-15-15-15-2 30
D-III leaky ReLU 3-15-15-15-15-15-15-15-15-2 30
D-III ReLU 3-15-15-15-15-15-15-2 33
D-III ReLU 3-15-15-15-15-15-15-15-15-2 32

Table 1. First column specifies the data set on which we train the networks. Next two columns give the activation used and a sequence giving the number of neurons in each layer. Last column gives the number of well-trained networks obtained.

5.3. Computing homology

For each of the neural networks obtained in Section 5.2, we track how the topology of the respective point cloud data set changes as it passes through the layers. This represents the bulk of the computational effort, well beyond that required for training the neural networks in Section 5.2. With simulated data, we are essentially assured of a perfectly clean data set and the preprocessing step in Figure 7 of Section 3.6 may be omitted. We describe the rest of the work involved below.

The metric used to form our Vietoris–Rips complex is given by the graph geodesic distance on the $k$-nearest neighbors graph determined by the point cloud $X$. As this depends on $k$, a positive integer specifying the number of neighbors used in the graph construction, we denote the metric by $d_k$. In other words, the ambient Euclidean distance is used only to form the $k$-nearest neighbors graph and does not play a role thereafter. For any $x, y \in X$, the distance $d_k(x, y)$ is given by the minimal number of edges between them in the $k$-nearest neighbors graph. Each edge, regardless of its Euclidean length, has the same length of one when measured in $d_k$.

The metric $d_k$ has the effect of normalizing distances across layers of a neural network while preserving the connectivity of nearest neighbors. This is important for us as the local densities of a point cloud can vary enormously as it passes through a layer of a well-trained neural network — each layer stretches and shrinks different regions, dramatically altering geometry, as one can see in the bottom halves of Figures 11, 12, and 13. Using an intrinsic metric like $d_k$ ameliorates this variation in densities; it is robust to geometric changes and yet reveals topological ones. Furthermore, our choice of $d_k$ allows for comparison across layers with different numbers of neurons. Note that if $n_l \ne n_{l'}$, the Euclidean norms on $\mathbb{R}^{n_l}$ and $\mathbb{R}^{n_{l'}}$ are two different metrics on two different spaces with no basis for comparison. Had we used Euclidean norms, two Vietoris–Rips complexes of the same scale in two different layers could not be directly compared — the scale needs to be calibrated somehow to reflect that they live in different spaces. Using $d_k$ avoids this problem.
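Concretely, the metric $d_k$ can be computed as hop-count shortest paths on a symmetrized $k$-nearest-neighbors graph; the sketch below is our own and the implementation details in the paper may differ:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def graph_geodesic_metric(X, k=10):
    """Pairwise d_k distances: number of edges on shortest paths in the kNN graph."""
    G = kneighbors_graph(X, n_neighbors=k, mode="connectivity")   # unweighted kNN graph
    G = G.maximum(G.T)                                            # symmetrize
    return shortest_path(G, method="D", unweighted=True)          # hop-count distances

X = np.random.default_rng(0).normal(size=(300, 3))
D = graph_geodesic_metric(X, k=10)
print(D.shape, D.max())    # an inf entry would indicate a disconnected kNN graph
```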

This leaves us with two parameters to set: $k$, the number of neighbors in the nearest neighbors graph, and $\varepsilon$, the scale at which to build our Vietoris–Rips complex. This is where persistent homology, described at length in Section 3.5, comes into play. Informed readers may think that we should be using multidimensional persistence since there are two parameters, but this is prohibitively expensive — the problem is EXPSPACE-complete [9] — and its results are not quite what we need; for one, there is no multidimensional analogue of persistence barcodes [10]. To choose an appropriate pair $(k, \varepsilon)$ for a point cloud $X$, we construct a filtered complex over the two parameters: Let $\operatorname{VR}_\varepsilon^k(X)$ be the Vietoris–Rips complex of $X$ with respect to the metric $d_k$ at scale $\varepsilon$. In our case, we know the topology of the underlying manifold $M$ completely as we generated it in Section 5.1 as part of our data sets. Thus we may ascertain whether our chosen values give a Vietoris–Rips complex with the same homology as $M$.

With $k$ fixed, we first determine a value of $\varepsilon$ with persistent homology on the $\varepsilon$-filtered complex in the metric $d_k$ with correct zeroth homology, i.e., $\varepsilon$ is chosen so that

$\beta_0\bigl(\operatorname{VR}_\varepsilon^k(X_i)\bigr) = \beta_0(M_i), \qquad i = a, b.$

We then determine a value of $\varepsilon$ with persistent homology on the $\varepsilon$-filtered complex in the metric $d_k$ with correct first and second homologies, i.e., $\varepsilon$ is chosen so that

$\beta_1\bigl(\operatorname{VR}_\varepsilon^k(X_i)\bigr) = \beta_1(M_i) \quad\text{and}\quad \beta_2\bigl(\operatorname{VR}_\varepsilon^k(X_i)\bigr) = \beta_2(M_i), \qquad i = a, b.$
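The scale selection just described can be sketched as follows: compute the persistence diagrams once in the metric $d_k$ (using, e.g., the hop-count distance matrix from the earlier snippet), then scan for a scale at which the Betti numbers match those of the known generating manifold. This is our own illustration of the idea, assuming the ripser package; it is not the authors' exact procedure, and the helper names are hypothetical:

```python
import numpy as np
from ripser import ripser   # assumes `pip install ripser`

def betti_at_scale(dgms, eps):
    """Number of bars alive at scale eps in each dimension."""
    return [int(np.sum((dgm[:, 0] <= eps) & (dgm[:, 1] > eps))) for dgm in dgms]

def choose_scale(D, true_betti, max_scale=20):
    """Smallest integer scale (d_k is integer-valued) whose Betti numbers match."""
    dgms = ripser(D, maxdim=len(true_betti) - 1, distance_matrix=True)["dgms"]
    for eps in range(1, max_scale + 1):
        if betti_at_scale(dgms, eps) == list(true_betti):
            return eps
    return None

# e.g. for one green solid torus of D-II we would require true_betti = (1, 1, 0):
# eps = choose_scale(D, (1, 1, 0))
```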