Cliques and Cavities in the Human Connectome


Ann Sizemore, Chad Giusti, Ari Kahn, Richard F. Betzel, and Danielle S. Bassett
Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19041 USA


Encoding brain regions and their connections as a network of nodes and edges captures many of the possible paths along which information can be transmitted as humans process and perform complex behaviors. Because cognitive processes involve large and distributed networks of brain areas, principled examinations of multi-node routes within larger connection patterns can offer fundamental insights into the complexities of brain function. Here, we investigate both densely connected groups of nodes that could perform local computations as well as larger patterns of interactions that would allow for parallel processing. Finding such structures necessitates that we move from considering exclusively pairwise interactions to capturing higher order relations, concepts naturally expressed in the language of algebraic topology. These tools can be used to study mesoscale network structures that arise from the arrangement of densely connected substructures called cliques in otherwise sparsely connected brain networks. We detect cliques (all-to-all connected sets of brain regions) in the average structural connectomes of 8 healthy adults scanned in triplicate and discover the presence of more large cliques than expected in null networks constructed via wiring minimization, providing an architecture through which the brain network can perform rapid, local processing. We then locate topological cavities of different dimensions, around which information may flow in either diverging or converging patterns. These cavities exist consistently across subjects, differ from those observed in null model networks, and, importantly, link regions of early and late evolutionary origin in long loops, underscoring their unique role in controlling brain function. These results offer a first demonstration that techniques from algebraic topology offer a novel perspective on structural connectomics, highlighting loop-like paths as crucial features in the human brain's structural architecture.


Macroscopic computation and cognition in the human brain emerge from an intricately interconnected collection of neurophysical mechanisms [1, 2]. Unlike modern parallel computers, which operate through vast numbers of programs running in tandem and in isolation from one another, it is understood that many of these processes are supported by anatomically specialized brain regions that constantly share information among themselves through a network of white matter tracts [3]. One approach for understanding the function of such a system begins with studying the organization of this white matter substrate using the language of networks [4, 5, 6]. Collections of regions that are pairwise tightly interconnected by large tracts, variously known as communities [7], modules [8], and rich clubs [9, 10], have been the subject of substantial prior study. Moreover, they have given critical insights into the large-scale structural units of the brain that give rise to many common cognitive functions [11, 12]. Such communities easily and rapidly transmit information among their members, facilitating local integration of information [13].

Often left implicit in such investigations of the white matter network is the understanding that just as important as the strong internal connections in communities is the relative weakness of connections to external regions. This tendency to focus on strongly connected local regions arises naturally because standard network analyses are based on local properties of the network at individual vertices, where local edge strength is the primary feature [14, 15, 16]; the particular choice of quantitative language serves as a filter that diverts attention toward certain facets of the system. However, if one takes a more macro-scale view of the network, the small or absent white matter tracts intuitively serve to isolate processes carried on the strong white matter tracts from one another. Such structure facilitates more traditional conceptual models of parallel processing, wherein data is copied or divided into multiple pieces in order to rapidly perform distinct computations, and then recombined [17]. Together, the two notions of dense cliques and isolating cavities provide a picture of a system that performs complex computations by decomposing information into coherent pieces to be disseminated to local processing centers, and then aggregating the results.

In order to quantitatively characterize this macroscale structure, we employ an enrichment of networks that comes from the field of algebraic topology [18], developed precisely to understand the interplay between these weak and strong connections in systems [19]. Beginning with a structural white matter network, we first extract the collection of all-to-all connected subgraphs, called cliques, which represent sets of brain regions that may possess a similar function, operate in unison, or share information rapidly [20]. Attaching these cliques to one another following a map given by the network creates a topological object called a clique complex, from which we can extract certain patterns of strongly connected regions called cycles [21]. Chordless cycles correspond to extended paths of potential information transmission along which computations can be performed serially to effect cognition in either a divergent or convergent manner, and we refer to these "enclosed spaces" as topological cavities in the network. (In the mathematical literature, these are called non-trivial homology classes. However, due to the extensive and potentially confusing collision with the use of the word "homology" in the study of brain function, here we elect to use this new terminology outside of references and necessary mathematical discussion in the Methods and Supplementary Information. Throughout, the word "homology" refers to the mathematical, rather than the biological, notion.) We hypothesize that cliques and cavities will differ in their anatomical locations, corresponding to their differential putative roles in neural computations.

To address these hypotheses, we construct structural brain networks from diffusion spectrum imaging (DSI) data acquired from eight volunteers in triplicate. We measure node participation in cliques and compare these values with a minimally wired null model [22]. We also demonstrate the correspondence between the anatomical location of cliques and the anatomical location of the brain's structural rich club: a group of hubs that are densely connected to one another. Next, we study topological cavities using a recently developed method from algebraic topology, which detects the presence and robustness (summarized by a quantity called persistence) of shell-like motifs of cliques called cycles in the network architecture. Specifically, we recover all minimal-length cycles corresponding to four persistent topological cavities in the consensus structure, and show that these features are robustly present across subjects through multiple scans. Our results demonstrate that while cliques are observed in the structural core, cycles enclosing topological cavities are observed to link regions of subcortex, frontal cortex, and parietal cortex in long loops, underscoring their unique role in controlling brain function [23, 24, 25].


To extract relevant architectural features of the human structural connectome, we first encoded diffusion spectrum imaging (DSI) data acquired from eight subjects in triplicate as undirected, weighted networks. In these networks, nodes correspond to 83 brain regions defined by the Lausanne parcellation [26] and edges correspond to the density of white matter tracts between node pairs (Fig. 1a). We initially study a group-averaged network, and then demonstrate that our results are consistently observed across individuals in the group as well as across multiple scans of the same individual.

Figure 1: Cliques are features of local neighborhoods in structural brain networks. (a) Diffusion spectrum imaging (DSI) data can be summarized as a network of nodes corresponding to brain regions, and weighted edges corresponding to the density of white matter streamlines reconstructed between them. Here we present a group-averaged network, where each edge corresponds to the mean density of white matter streamlines across eight subjects scanned in triplicate. We show the network at a fixed edge density, and display its topology on the brain (top), and on a circle plot (bottom). This and all brain networks are drawn with BrainNetViewer [27]. (b) All-to-all connected subgraphs on k nodes are called k-cliques. For example, 2-, 3-, and 4-cliques are shown both as schematics and as features of a structural brain network. (c) A maximal 4-clique has 3-, 2-, and 1-cliques as faces. (d) For statistical validation, we construct a minimally wired null model by linking brain regions by edge weights equal to the inverse of the Euclidean distance between nodes corresponding to brain region centers. Here we show an example of this scheme on 15 randomly chosen brain regions.

Cliques in the Human Structural Connectome

Here, we use the group-averaged network thresholded at an edge density (ρ) of 0.25 for computational purposes and for consistency with prior studies [20]. Results at other densities are similar, and details can be found in the Supplementary Information. As a null model, we use minimally wired networks (Fig. 1d), created by assigning edge weights equal to the inverse Euclidean distance between brain region centers (see Methods) observed in each of the 24 scans. This model mimics the tendency of the brain to conserve wiring cost by giving edges connecting physically close nodes higher weight than edges between distant nodes.
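For concreteness, the null-model construction can be sketched in a few lines of Python. The region centers below are hypothetical coordinates for illustration only, not Lausanne parcellation data.

```python
import math

def minimally_wired_weights(centers):
    """Assign each node pair a weight equal to the inverse of the
    Euclidean distance between their region centers, so that
    physically close regions receive strong edges."""
    weights = {}
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            d = math.dist(centers[i], centers[j])
            weights[(i, j)] = 1.0 / d
    return weights

# Three hypothetical region centers (coordinates in mm).
centers = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (0.0, 0.0, 10.0)]
w = minimally_wired_weights(centers)
# Closer pairs receive larger weights, mimicking a wiring-cost penalty.
```

Thresholding these weights at the same edge density as the empirical network then yields a spatially embedded binary null graph for comparison.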

For each network, we now enumerate all maximal k-cliques. Recall that a k-clique is a set of k nodes having all pairwise connections (see Fig. 1b for 2-, 3-, and 4-cliques representing edges, triangles, and tetrahedra, respectively). By definition, a subgraph of a clique is itself a clique of lower dimension, called a face. A maximal clique is one that is not a face of any other (see Fig. 1c for a maximal 4-clique, which contains 3-, 2-, and 1-cliques as faces).

To understand the anatomical distribution of maximal cliques in both real and null model networks, we count the number of maximal k-cliques in which a node is a member, and refer to this value as the node participation, P(v, k) (see Methods). Summing over all k gives the total participation, P(v). We observe that the distribution of maximal clique degrees is unimodal in the minimally wired null model and bimodal in the empirical data (see Fig. 2a). Anatomically, we observe a general progression of maximal clique participation from anterior to posterior regions of cortex as we move to higher degrees (Fig. 8). Indeed, maximal cliques of 12-16 nodes contain nearly all of the visual cortex. This spatial distribution suggests that large interacting groups of brain regions are required for early information processing, while areas of frontal cortex driving higher-order cognition utilize smaller working clusters. We also observe that the human brain displays smaller maximal cliques than the minimally wired null model, a fact that might support its distributed processing.
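The enumeration of maximal cliques and the node participation P(v, k) described above can be sketched with the classic Bron-Kerbosch algorithm. This minimal Python version (no pivoting, and a toy adjacency structure of our own devising) is illustrative only, not the pipeline used in the study.

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques via the Bron-Kerbosch recursion.
    `adj` maps each node to the set of its neighbours."""
    found = []
    def expand(R, P, X):
        # R: current clique; P: candidate extensions; X: already-tried nodes.
        if not P and not X:
            found.append(frozenset(R))   # R cannot be extended: maximal
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.discard(v)
            X.add(v)
    expand(set(), set(adj), set())
    return found

def participation(adj, v, k):
    """P(v, k): number of maximal k-cliques containing node v."""
    return sum(1 for c in maximal_cliques(adj) if v in c and len(c) == k)

# Toy network: a 4-clique {0,1,2,3} plus a pendant node 4 attached to 3.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
# Node 3 sits in both maximal cliques, so its total participation P(3) = 2.
```

Summing participation(adj, v, k) over k gives the total participation P(v) used in Fig. 2.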

Figure 2: Spatial distribution of maximal cliques varies between average DSI and minimally wired null model. (a) Distribution of maximal cliques in the average DSI (black) and individual minimally wired (gray) networks, thresholded at an edge density of 0.25. Heat maps of node participation on the brain for a range of clique degrees equal to 4-6 (left), 8-10 (middle), and 12-16 (right). (b) Node participation in maximal cliques, sorted by the putative cognitive system to which the node is affiliated in functional imaging studies [28]. We show individual node values (top) as well as the difference between real and null model (bottom) according to the colormap (right). Individual node labels are listed in Fig. 9.

The anterior-posterior gradient of maximal clique size can be complemented by analyzing regional variation in the cognitive computations being performed. Specifically, we ask whether node participation in maximal cliques differs in specific cognitive systems [28] (Fig. 2b). We observe that the largest maximal cliques are formed by nodes located almost exclusively in the subcortical, dorsal attention, visual, and default mode systems, suggesting that these systems are tightly interconnected and might utilize robust topologically-local communication. Critically, this spatial distribution of the participation in maximal cliques differs significantly from the minimally wired null model, particularly in the cingulo-opercular and subcortical systems. We hypothesized that these differences may be driven by the excess of maximal 8-cliques in the minimally wired network (Fig. 2a). Expanding on the difference in node participation, we see indeed that the large discrepancies between empirical and null model networks in cingulo-opercular and subcortical systems are caused by a difference in maximal cliques of approximately eight nodes (Fig. 2b, bottom).

A node with high participation must in turn be well connected locally. (Note that the converse is not necessarily true: consider, for example, a node that participates in only one maximal 16-clique.) Therefore, we expect the participation of a node to act similarly to other measures of connectivity. To test this expectation, we examine the correlation of node participation with node strength, the summed edge weight of connections emanating from a node, as well as with node communicability, a measure of the strength of long-distance walks emanating from a node (Fig. 3a). We find that both strength and communicability exhibit a strong positive linear correlation (Pearson correlation) with the participation of a node in maximal cliques.
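The two reference measures can be computed directly from a weight matrix. The sketch below uses the common matrix-exponential definition of communicability, in which walks of length n are discounted by 1/n!; the study's exact normalization may differ, and the plain Taylor series used here is adequate only for small matrices.

```python
def node_strength(W, v):
    """Summed edge weight of connections emanating from node v."""
    return sum(W[v])

def total_communicability(W, v, terms=30):
    """Row sum of the matrix exponential exp(W): a weighted count of all
    walks leaving node v, with longer walks discounted by 1/n!.
    Computed with a truncated Taylor series (fine for tiny matrices)."""
    n = len(W)
    # P holds W^k / k!; E accumulates the partial sums of the series.
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    E = [row[:] for row in P]
    for k in range(1, terms):
        P = [[sum(P[i][m] * W[m][j] for m in range(n)) / k
              for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                E[i][j] += P[i][j]
    return sum(E[v])

# Toy symmetric weight matrix for a 3-node path graph.
W = [[0.0, 1.0, 0.5],
     [1.0, 0.0, 0.0],
     [0.5, 0.0, 0.0]]
```

For real connectomes one would use a linear-algebra library's matrix exponential rather than this series, but the quantity being correlated with P(v) is the same.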

Figure 3: Maximal clique participation tracks with network measures. (a) Scatter plot of node participation P(v) against node strength (top) or communicability (bottom). (b) Calculated k-core (top) and s-core (bottom) decompositions in relation to participation in maximal cliques, with rich club nodes (see Methods and Fig. 10) indicated in orange. Size indicates the maximum k-core or s-core level attained by the node, while color indicates the participation P(v).

These results indicate that regions that are strongly connected to the rest of the brain by both direct paths and indirect walks also participate in many maximal cliques. Such an observation suggests the possibility that brain hubs, which are known to be strongly connected with one another in a so-called rich club, play a key role in maximal cliques. To test this possibility, we measure the association of brain regions to the rich club using notions of coreness. A k-core of a graph G is a maximal connected subgraph of G in which all vertices have degree at least k, and an s-core is the equivalent notion for weighted graphs (see Methods). Using these notions, we consider how the k-core and s-core decompositions align with high participation (Fig. 3b). In both cases, nodes with higher participation often achieve higher levels in the k- and s-core decompositions. Moreover, we also observe the frequent existence of rich club connections between nodes with high participation (Fig. 3b, bottom). Together, these results suggest that rich club regions of the human brain tend to participate in local computational units in the form of cliques.
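The k-core numbers used in Fig. 3b can be obtained with the standard peeling procedure, sketched below on a toy adjacency structure of our own devising. The s-core variant would replace degree with node strength and peel against an edge-weight threshold.

```python
def core_numbers(adj):
    """Repeatedly remove a node of minimum remaining degree; the running
    maximum of degrees at removal is each node's k-core number
    (the Batagelj-Zaversnik peeling scheme)."""
    deg = {v: len(adj[v]) for v in adj}
    nbrs = {v: set(adj[v]) for v in adj}
    core, c = {}, 0
    while deg:
        v = min(deg, key=deg.get)      # node of minimum remaining degree
        c = max(c, deg[v])             # core level cannot decrease
        core[v] = c
        for u in nbrs[v]:              # delete v from the graph
            nbrs[u].discard(v)
            deg[u] -= 1
        del deg[v], nbrs[v]
    return core

# Triangle {0,1,2} with a pendant node 3: the triangle forms a 2-core,
# while the pendant node only reaches the 1-core.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
```

A node's maximum core level can then be compared against its participation P(v), as in Fig. 3b.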

Cavities in the Structural Connectome

Whereas cliques in the DSI network act as neighborhood-scale building blocks for the computational structure of the brain, the relationships between these blocks can be investigated by studying the unexpected absence of strong connections, which can be detected as topological cavities in the structure of the brain network. Because connections are treated as communication channels along which brain regions can signal one another and participate in shared neural function, the absence of such connections implies a decreased capacity for communication, which serves to enhance the segregation of different functions.

To identify topological cavities in a weighted network, we construct a sequence of binary graphs, each included in the next (Fig. 4a), known as a filtration. Beginning with the empty graph, we add unweighted edges one at a time in order of decreasing edge weight, and we index each graph by its edge density ρ, given by the number of edges in the graph divided by the number of possible edges. After each edge addition, we extract "shell-like" motifs of (k+1)-cliques called (non-trivial) k-cycles, each of which encloses a topological cavity in the structure. (This shift in index is due to geometry: a 2-clique is a 1-dimensional line segment, a 3-clique is a 2-dimensional triangle, and so on.) When k is clear or not pertinent, we will suppress it from the notation, and refer simply to "cycles" and "cavities". While any cavity is surrounded by at least one cycle, often multiple cycles surround the same cavity. However, any two cycles that detect the same cavity will necessarily differ from one another by the boundaries of some collection of cliques (see Supporting Information and Fig. 15). Any two such cycles are called topologically equivalent, so each topological cavity is detected by a non-trivial equivalence class of cycles. (The equivalence class containing the cycle consisting of a single vertex is called trivial and bounds the "empty" cavity.) We can represent a topological cavity using any of the cycles within the corresponding equivalence class, but for purposes of studying computational architectures it is reasonable to assume information will travel along paths of minimal length; thus, in this analysis we will consider the collection of cycles in an equivalence class with the minimal number of nodes, and call these the minimal cycles representing the cavity. (In the absence of a filtration, there are serious computational issues involved in locating minimal-size representatives of equivalence classes; in this setting, however, the computation is easily performed using standard algorithms; see Methods.)

As we move through the filtration by adding edges, the structure of the cycles, and thus of the cavities they represent, will evolve. We consider an example in Fig. 4a, showing a green minimal cycle surrounding a 2D cavity which first appears (is born) in the graph sequence at density ρ_birth (cyan). As edges completing 3-cliques are added, the minimal cycle representative shrinks to four nodes in size; finally, the cavity is tessellated by 3-cliques (dies) at ρ_death (orange). We record ρ_birth and ρ_death for all topological cavities (i.e., non-trivial equivalence classes of cycles) found within the filtration, and display them on a persistence diagram (Fig. 4b). Cavities that survive many edge additions have a long lifetime, defined as ρ_death − ρ_birth, or a large death-to-birth ratio, ρ_death/ρ_birth. Such cavities are commonly referred to as persistent, and in many applications are considered the "topological features" of the system.
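Computing the 1D and 2D persistent homology described here requires specialized software (packages such as Ripser or Eirene). As a minimal illustration of the same birth/death bookkeeping, the sketch below tracks only dimension-0 features (connected components) through the edge-density filtration using a union-find structure; the function name and toy edge list are ours, not the study's pipeline.

```python
def component_persistence(n, weighted_edges):
    """Add edges in order of decreasing weight (the filtration described
    above) and track connected components with union-find.  Each merge
    'kills' one component; we record its (birth, death) pair in units of
    edge density rho.  All components are born at rho = 0, when the n
    isolated nodes appear.  Dimension 0 only: cavities of dimension 1
    and 2, as in the text, need a persistent-homology library."""
    edges = sorted(weighted_edges, key=lambda e: e[2], reverse=True)
    total = n * (n - 1) // 2           # number of possible edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    intervals = []
    for count, (u, v, w) in enumerate(edges, start=1):
        ru, rv = find(u), find(v)
        if ru != rv:                   # this edge merges two components
            parent[ru] = rv
            intervals.append((0.0, count / total))
    return intervals

# Toy 4-node network: a strong triangle plus a weakly attached node.
edges = [(0, 1, 0.9), (1, 2, 0.8), (0, 2, 0.7), (2, 3, 0.2)]
bars = component_persistence(4, edges)
```

The same sorting-by-weight loop defines the filtration for the higher-dimensional computation; only the event being tracked (cycle birth and tessellation rather than component merging) changes.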

Figure 4: Tracking clique patterns through a network filtration reveals key topological cavities in the structural brain network. (a) Example filtration of a network on 15 nodes shown in the brain across edge density ρ. The blue line on the axis indicates the density of birth (ρ_birth) of the 2D cavity surrounded by the green minimal cycle. As edges are added, 3-cliques (cyan) form and shrink the cavity, and consequently the minimal green cycle is now four nodes in size. Finally, the orange line marks the density of death (ρ_death), when the cavity is filled by 3-cliques. (b) Persistence diagram for the cavity surrounded by the green cycle from panel (a). (c) Persistence diagrams for the group-averaged DSI (teal) and minimally wired null (gray) networks in dimensions one (left) and two (right). Cavities in the group-averaged DSI network with a long lifetime or a high death-to-birth ratio are shown in unique colors and will be studied in more detail. (d) Box plots of the death-to-birth ratio for cavities of two and three dimensions in the group-averaged DSI and minimally wired null networks. Colored dots correspond to those highlighted in panel (c). The difference between values for 3D topological cavities in the average DSI data versus the minimally wired null model was not found to be significant. (e) Minimal cycles representing each persistent cavity at ρ_birth, as noted in panels (c) and (d), shown in the brain (top) and as a schematic (bottom).

We investigate the persistence of 2D and 3D cavities (respectively represented by equivalence classes of 1- and 2-cycles) in the group-averaged DSI network and minimally wired null networks (see Fig. 4c). There are substantially fewer persistent cavities in the group-averaged DSI network than in the null models. To illustrate the structure of these cavities, we select four representative cavities with exceedingly long lifetimes or a high death-to-birth ratio (Fig. 4c,d) in the empirical data, and for each we find the minimal-length representative cycles at ρ_birth (Fig. 4e); such cycles for all of the persistent cavities found in the empirical data are illustrated in the Supporting Information. The first persistent cavity appears early in the filtration and is minimally enclosed by the unique blue cycle composed of the thalamus and caudate nucleus of both hemispheres. The green cycle connecting the medial and lateral orbitofrontal, rostral anterior cingulate, putamen, and superior frontal cortex is the only minimal cycle surrounding a long-lived cavity in the left hemisphere. The final persistent 2D cavity in the average DSI data is found in the right hemisphere between the medial orbitofrontal cortex, the accumbens nucleus, any of the subcortical regions hippocampus, caudate nucleus, putamen, thalamus, and amygdala, and any of the rostral middle frontal, lateral orbitofrontal, and medial orbitofrontal cortex of the left hemisphere and the rostral anterior cingulate of both hemispheres (see Fig. 5e for all 12 minimal representatives). Finally, the purple octahedral cycle made from 3-cliques contains the inferior and middle temporal, lateral occipital, inferior parietal, supramarginal, and superior parietal cortex, and either the superior temporal cortex or insula of the left hemisphere, and encloses the longest-lived 3D cavity in the structural brain network.

Test-Retest Reliability and Other Methodological Considerations

It is important to ask whether the architectural features that we observe in the group-averaged DSI network can also be consistently observed across multiple individuals, and across multiple scans of the same individual, to ensure that these cavities are not artifacts driven by a few outliers. Comparison of persistent cavities arising from two different networks is complicated by our notion of equivalence of cavities and by our desire to work with particular representative cycles. To capture the extent to which the cavities and their minimal representatives in the average DSI data are present in the individual scans, we record the collection of cliques that compose each minimal cycle representing the equivalence class (as seen in Fig. 4e), and check both for the existence of one of those collections of cliques, corresponding to the existence of the same strong fiber tracts, and, more stringently, for the presence of a topological cavity represented by that cycle in each individual's DSI network (see Supporting Information for more details). We observed that the subcortical cycle (Fig. 4e, blue) exists, and its nodes (the thalamus and caudate nucleus of both hemispheres) surround an equivalent 2D cavity, in at least one scan of every individual, while the late-developing subcortical-frontal cycle (Fig. 4e, red) surrounds a cavity found in seven of the eight individuals in at least one of three scans (Fig. 5b,f). The earlier-arriving subcortical-frontal cycle (Fig. 4e, green) is present in all individuals, and a similar cavity is seen at least once in all individuals (Fig. 5d). Finally, we observe that the octahedral connection pattern in posterior parietal and occipital cortex (Fig. 4e, purple) is present at least once in seven of eight individuals, and these regions enclose a similar cavity at least once in six of these individuals (Fig. 5h). In the opposite hemisphere, the cyclic connection patterns and similar cavities appear, though not as regularly (Fig. 24).
Finally, we check for the existence of similar cavities within the minimally wired null models, and find that cavities like those represented by the green and purple cycles are never seen (Fig. 24). However, cavities similar to those represented by the red and blue minimal cycles appear frequently in the null model, though with different birth/death densities and lifetimes. In summary, we find that the topological cavities observed in the group-averaged DSI network appear consistently across individuals, suggesting their potential role as conserved wiring motifs in the human brain.

Figure 5: Cycles and similar cavities in the average DSI network are consistently seen across individuals. (a),(c),(e),(g) Normalized summed edge weights of the individual scan data connecting nodes seen in the minimal cycle(s) recovered from the average DSI network. (b),(d),(f),(h) Connections between indicated cycle nodes in each scan, shown thresholded at the minimal weight of expected cycle edges. A gray background indicates a similar cavity found in this scan.

In addition to consistency across subjects and scans, it is important to determine whether the known high connectivity from subcortical nodes to the rest of the brain may be artificially obscuring non-trivial cortico-cortical cavities important for brain function. To address this question, we examined the 66-node group-averaged DSI network composed only of cortical regions, after removing subcortical regions, the insula, and the brainstem. We recovered a long-lived topological cavity surrounded by four minimal-length cycles, each composed of nine nodes connecting temporal, parietal, and frontal regions (Fig. 6). Note that in the schematic of Fig. 6a, two 2D cavities are clearly visible. The birth edge here was between the lateral orbitofrontal and superior temporal regions, which prevents us from determining whether the exact minimal cycle surrounding this cavity follows the superior frontal (LH)/posterior cingulate branch or the superior frontal (RH)/caudal middle frontal branch of the top loop. Following either of these two branches (and then either the banks of the superior temporal sulcus or the middle temporal route) gives four cycles, of which two are equivalent to each other but not to either cycle in the other pair. We accept all four as minimal maroon cycles, since any of the four could be a minimal representative. Moreover, at least one of these minimal cycles and its corresponding cavity was observed in each scan of every individual (Fig. 27c), and often in the opposite hemisphere as well (Fig. 27d). These results reveal that cortico-cortical cycles are indeed present, and suggest their potential utility in segregating function across the brain.

Figure 6: Removal of subcortical nodes allows for detection of a nine-node cortical cycle enclosing a large 2D cavity. (a) Minimal cycles shown in the brain (left) and as a schematic (right). (b) Persistence diagrams in dimensions one and two. The persistent feature corresponding to the minimal cycles in (a) is indicated with a maroon dot.


In this study, we describe a principled examination of multi-node routes within larger connection patterns that are not accessible to network analysis methods that exclusively consider pairwise interactions between nodes. Our approach draws on concepts from a discipline of mathematics known as algebraic topology to define sets of all-to-all connected nodes as structural units, called cliques, and then to use the clique architecture of the network to detect structural topological cavities, identified by the existence of non-trivial representative cycles. Using this approach, we show that node participation in maximal cliques varies spatially and by cognitive system, suggesting a global organization of these neighborhood-scale features. These cliques form shell-like patterns of connectivity in the human structural connectome, which separate relatively early-evolving regions of the subcortex from higher-order association areas in frontal, parietal, and temporal cortex that evolved on more recent time scales. We found that the recovered topological cavities exist consistently across individuals and are not expected in a spatially embedded null model, emphasizing their importance in neural wiring and function. These results offer a first demonstration that techniques from algebraic topology offer a novel perspective on structural connectomics, highlighting cavernous spaces as crucial features in the human brain's structural architecture.

Algebro-topological Tools for Neural Data Analysis

Algebraic topology is a relatively young field of pure mathematics that has only recently been applied to the study of real-world data. However, the power of these techniques to measure structures that are inaccessible to common graph metrics has gained them immediate traction in the neuroscience community. Here, we highlight a few notable examples from the growing literature; a more comprehensive recent account can be found in [29]. At the neuron level, persistent homology has been used to detect intrinsic structure in correlations between neural spike trains [21], expanding our understanding of the formation of spatial maps in the hippocampus [30]. Moreover, at the level of large-scale brain regions, these tools have been exercised to characterize the global architecture of fMRI data [31]. Based on their unique sensitivity, we expect these algebro-topological methods to provide novel contributions to our understanding of the structure and function of neural circuitry across all scales at which combinatorial components act together for a common goal: from firing patterns coding for memory [32, 33] to brain regions interacting to enable cognition.

Our study uses algebraic topology in its classical form to obtain a global understanding of the structure and, in conjunction, investigates particular topological features themselves, relating these features to cognitive function. Cycle representatives have previously been considered in biology [34, 35, 36], but to our knowledge this is the first attempt to compare topological features across multiple brains.

Cliques and Cavities for Computations

Cliques and minimal cycles representing cavities are structurally positioned to play distinct roles in neural computations. Cliques represent sets of brain regions that may possess a similar function, operate in unison, or share information rapidly [20]. Conversely, minimal cycles correspond to extended paths of potential information transmission along which computations can be performed serially to effect cognition in either a divergent or convergent manner. Indeed, the shell-like or chain-like nature of cycles is a structural motif that has previously been described, at least qualitatively, in neuroanatomical studies of cellular circuitry. In this context, such motifs are known to play a key role in learning [37], memory [32], and behavioral control [38, 39]. The presence of minimal cycles suggests a possible role for polysynaptic connections and their importance to neural computations, consistent with evidence from the field of computational neuroscience highlighting the role of highly structured circuits in sequence generation and memory [32, 37]. Indeed, in computational models at the neuron level, architectures reminiscent of chains [38, 39] and rings are particularly conducive to the generation of sequential behavioral responses. It is interesting to speculate that the presence of these structures at the much larger scale of white matter tracts could support diverse neural dynamics and a broader repertoire of cognitive computations than possible in simpler and more integrated network architectures [40].

Another consideration concerns the apparent asymmetry of our results with respect to the left and right cerebral hemispheres. While unanticipated, these asymmetries in some cases have intuitive mathematical underpinnings. For example, in Fig. 3 we explicitly count maximal cliques, so a single edge difference between a region in the left hemisphere and its right-hemisphere counterpart could produce a large difference in the number of observed maximal cliques. Interestingly, despite this fact we still observe a strong correlation between node strength and node participation, instilling confidence in these results. From a neuroscience point of view, brain asymmetries are not wholly unexpected. There is a storied and ever-growing literature describing the lateralization (i.e., asymmetries) of brain function [41]. While speech generation [42] and language processing [43, 44] are among the most commonly cited functions to exhibit lateralization [45, 46], such effects have also been linked to a diverse group of other cognitive domains, including emotion [47], processing of visual input [48], and even working memory [49]. In addition, a number of studies have reported the emergence of pathological lateralization or the disruption of asymmetries in neurocognitive disorders including ADHD [50]. Our study does not offer a conclusive demonstration that the observed asymmetries arise from the lateralization of any specific brain function; we merely wish to note that there is a precedent for such observations.

Evolutionary and Developmental Drivers

Network filtration revealed several persistent cavities in the macroscale human connectome. While each minimal cycle surrounding these cavities involved brain regions interacting in a distinct configuration, we also observed commonalities across these structures. One such commonality was that these minimal cycles tended to link evolutionarily old structures with more recently developed neocortical regions [51]. For example, the green cycle depicted in Fig. 4e linked the putamen, an area involved in motor behavior [52], with the rostral anterior cingulate cortex, associated with higher-order cognitive functions such as error-monitoring [53] and reward processing [54]. This observation led us to speculate that the emergence of these cavities may reflect the disparate timescales over which brain regions and their circuitry have evolved [55], through the relative paucity of direct connections between regions that evolved to perform different functions. This hypothesis can be investigated in future work comparing the clique and cavity structure of the human connectome with that of non-human connectomes from organisms with less developed neocortices.

Towards a Global Understanding of Network Organization

Though we have highlighted minimal cycles in the brain, persistent homology by nature describes the global organization of the network. Brain regions often wire minimally so as to conserve wiring cost [1, 56, 57, 58], though there are exceptions that give the brain its characteristic topological properties, such as its small-world architecture [14, 59, 60, 61, 62]. Following this idea, we can interpret the difference in the number of persistent cavities between the minimally wired and DSI networks as a consequence of the non-minimally wired edges, which tessellate cavities in the brain itself. Yet when the subcortical regions are removed, the persistent cavities of the minimally wired and DSI networks are much more similar (Fig. 27b). This suggests that the wiring of cortical regions may be more heavily influenced by energy conservation than the wiring of subcortical regions. Additionally, the drop in the number and lifetime of persistent cavities when subcortical regions are included indicates that these subcortical regions may prematurely collapse topological cavities. The often high participation of subcortical regions in maximal cliques suggests that these well-connected nodes may have hub-like projections to regions involved in cortical cycles, thus tessellating the cortical cavity with higher dimensional cliques (topologically, these subcortical nodes act as cone points). Previous studies have found that networks with “star-like” configurations are optimally efficient in terms of shortest-path efficiency, and also efficient in terms of a random walk-based measure of efficiency [63]; that is, networks optimized for either type of efficiency tend to contain stars. Thus, stars appear to be useful configurations for fast communication, both along shortest paths and in an unguided sense along random walks. The star-like projections we observe from subcortical regions to cycles may therefore support efficient communication.

Methodological Considerations

An important consideration relates to the data from which we construct the human structural connectome. DSI and tractography, non-invasive tools for mapping the brain’s white-matter connectivity, have known limitations. Tractography algorithms trade off specificity and sensitivity, making it challenging to detect true connections while simultaneously avoiding false ones [64]; they can fail to detect superficial connections (i.e., those that do not pass through deep white matter) [65]; and they have difficulty tracking “crossing fibers”, connections with different orientations that pass through the same voxel [66]. Nonetheless, DSI and tractography remain the only techniques for non-invasive imaging and reconstruction of the human connectome. While these shortcomings limit the applicability of DSI and tractography, they may prove addressable with the development of improved tractography algorithms and imaging techniques [67].


In conclusion, here we offer a unique perspective on the structural substrates of distinct types of neural computations. While traditional notions from graph theory and network science preferentially focus on local properties of the network at individual vertices or edges [14, 68, 15, 16], here we utilize an enriched network formalism that comes from the field of algebraic topology [18]. These tools are tuned to the interplay between weak and strong connections [69], and therefore reveal architectural features that serve to isolate information transmission processes [29]. It will be interesting in future work to compare human and non-human connectomes across a range of spatial scales [70] to further elucidate the evolutionary development of these features, and to link them to their functional [71] and behavioral [72] consequences.

Materials and Methods

Data Acquisition, Preprocessing, and Network Construction

Diffusion spectrum imaging (DSI) data and T1-weighted anatomical scans were acquired from eight healthy adult volunteers on 3 separate days (mean age 27 ± 5 years, two female, and two left-handed) [23]. All participants provided informed consent in writing according to the Institutional Review Board at the University of California, Santa Barbara. Whole-brain images were parcellated into 83 regions (network nodes) using the Lausanne atlas [3], and connections between regions (network edges) were weighted by the number of streamlines identified using a deterministic fiber tracking algorithm. We represent this network as a weighted graph on 83 nodes, encoded in a weighted symmetric adjacency matrix A. For calculations in the main text, the network was thresholded at an edge density of ρ = 0.25 for consistency with previous work [20]. See the Supporting Information and Refs. [73, 23] for detailed descriptions of acquisition parameters, data preprocessing, and fiber tracking. In the supplement, we provide additional results for the case in which we correct edge weight definitions for the effect of region size (Fig. 24).
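The density-based thresholding described above can be sketched as follows. This is an illustrative implementation on a toy matrix, not the paper’s actual processing pipeline; the example weights are hypothetical.

```python
import numpy as np

def threshold_to_density(A, rho):
    """Keep only the strongest rho fraction of possible edges.

    A: symmetric weighted adjacency matrix with zero diagonal.
    Returns a copy of A with all but the strongest edges zeroed out.
    """
    n = A.shape[0]
    n_keep = int(round(rho * n * (n - 1) / 2))      # number of edges to retain
    iu = np.triu_indices(n, k=1)                    # upper-triangular edge slots
    order = np.argsort(A[iu])[::-1]                 # edges by decreasing weight
    mask = np.zeros_like(A, dtype=bool)
    keep = (iu[0][order[:n_keep]], iu[1][order[:n_keep]])
    mask[keep] = True
    mask = mask | mask.T                            # symmetrize the edge mask
    return np.where(mask, A, 0.0)

# toy 4-node weighted network
A = np.array([[0, 5, 1, 2],
              [5, 0, 4, 0.5],
              [1, 4, 0, 3],
              [2, 0.5, 3, 0]], dtype=float)
T = threshold_to_density(A, 0.5)  # keeps the 3 strongest of 6 possible edges
```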

Cliques versus Cycles

In a graph, a k-clique is a set of k all-to-all connected nodes. It follows that any subset of a k-clique is itself a clique of smaller degree, called a face. Any clique that is not a face of a larger clique we call maximal. To assess how individual nodes contribute to these structures, we define the node participation P_k(v) as the number of maximal k-cliques containing node v, and we record the total participation of a node as P(v) = Σ_k P_k(v).
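For intuition, cliques, faces, maximal cliques, and node participation can be computed by brute force on a toy graph. This is a sketch only: exhaustive subset enumeration is exponential and unsuitable for a real 83-node connectome, and the toy adjacency below is ours.

```python
from itertools import combinations

def cliques(adj):
    """All cliques (as frozensets) of a graph given as {node: set(neighbors)}."""
    nodes = sorted(adj)
    found = set()
    # brute force: every subset whose members are pairwise adjacent is a clique
    for k in range(1, len(nodes) + 1):
        for subset in combinations(nodes, k):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                found.add(frozenset(subset))
    return found

def maximal_cliques(adj):
    """A clique is maximal if it is not a face (proper subset) of a larger clique."""
    all_cliques = cliques(adj)
    return {c for c in all_cliques if not any(c < d for d in all_cliques)}

def participation(adj):
    """Total participation P(v): number of maximal cliques containing node v."""
    part = {v: 0 for v in adj}
    for c in maximal_cliques(adj):
        for v in c:
            part[v] += 1
    return part

# toy graph: a triangle {0, 1, 2} plus a pendant edge (2, 3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(maximal_cliques(adj))  # the 3-clique {0,1,2} and the 2-clique {2,3}
print(participation(adj))    # node 2 participates in both maximal cliques
```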

To detect cycles that enclose topological cavities, we computed the persistent homology in dimensions 1 and 2 using [74]. This process translates the edge weight information into a sequence of binary graphs called a filtration,

G_0 ⊆ G_1 ⊆ ⋯ ⊆ G_{|E|},

beginning with the empty graph G_0 and adding back one edge at a time in order of decreasing edge weight. To ensure all edge weights are unique, we added a small amount of uniformly sampled random noise before ordering. However, this has essentially no effect on the final results, as stability theorems ensure that a small perturbation of the filtration leads to only a small perturbation of the persistent homology [75, 76].
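A minimal sketch of building such a filtration from a toy weighted network (the weights here are hypothetical; in real data, ties are first broken by adding small random noise as described above):

```python
def filtration(weights, n):
    """Build the filtration of binary graphs G_1 subset G_2 subset ...

    weights: {(i, j): weight} on n nodes. Edges are added one at a time in
    decreasing weight order; returns a list of (edge density rho, edge set).
    """
    order = sorted(weights, key=weights.get, reverse=True)
    max_edges = n * (n - 1) // 2
    graphs, edges = [], set()
    for e in order:
        edges.add(frozenset(e))
        graphs.append((len(edges) / max_edges, set(edges)))
    return graphs

# toy weighted network on 4 nodes
w = {(0, 1): 0.9, (1, 2): 0.7, (2, 3): 0.8, (0, 3): 0.6, (0, 2): 0.2}
steps = filtration(w, n=4)
# the first binary graph contains only the strongest edge (0, 1), at density 1/6
```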

Within each binary graph of this filtration, we extract the collection of all cycles of dimension k: families of (k+1)-cliques which, when considered as a geometric object, form a closed shell with no boundary. Formally, these are collections of (k+1)-cliques such that every k-subclique of a clique in the collection (called a boundary face) appears as a subclique in the collection an even number of times. Two k-dimensional cycles are equivalent if they differ by the boundary of a collection of (k+2)-cliques. This relation forms equivalence classes of cycles, with each non-trivial equivalence class representing a unique topological cavity.

Constructing the sequence of binary graphs allows us to follow equivalence classes of cycles as a function of the edge density ρ. Important points along this sequence are the edge density at which the equivalence class first appears (called the birth density, ρ_birth) and the edge density at which the enclosed void is first triangulated into higher dimensional cliques (called the death density, ρ_death). One potential marker of the relative importance of a persistent cavity to the weighted network architecture is its lifetime, ρ_death − ρ_birth: a long lifetime indicates a topological cavity that persists over many edge additions. An alternative measure is the death-to-birth ratio ρ_death/ρ_birth, which highlights topological cavities that survive exceptionally long in spite of being born early, a feature of particular interest in geometric random graphs (see [77] and Supporting Information).
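Given birth and death densities, ranking cavities by lifetime versus death-to-birth ratio is straightforward; the bar values below are hypothetical:

```python
def most_persistent(bars, measure="lifetime"):
    """Pick the persistence bar (birth, death) that maximizes either the
    lifetime (death - birth) or the death-to-birth ratio (death / birth)."""
    key = (lambda bd: bd[1] - bd[0]) if measure == "lifetime" \
        else (lambda bd: bd[1] / bd[0])
    return max(bars, key=key)

# hypothetical (birth density, death density) pairs for three cavities
bars = [(0.05, 0.30), (0.20, 0.60), (0.02, 0.10)]
print(most_persistent(bars))           # (0.20, 0.60): longest lifetime, 0.40
print(most_persistent(bars, "ratio"))  # (0.05, 0.30): ratio 6.0, born early
```

Note that the two measures can disagree: the ratio favors a cavity that is born early and survives long relative to its birth, even if its raw lifetime is shorter.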

To study the role of each topological cavity in cognitive function, we extract minimal representatives of each non-trivial equivalence class at its birth density. For unfiltered complexes, the problem of finding a minimal generator for a given homology class is well known to be intractable [78, 79]. However, leveraging the filtration, we are able to answer the corresponding question in this context with relative ease. The persistent homology software [74] returns the starting edge and birth density of each homology class. To recover the minimal cycle, we threshold the network at the density immediately preceding the birth density, then perform a breadth-first search for a path between the two endpoints of the starting edge, taking all minimum-length paths as solutions. If these minimum-length paths arise from different equivalence classes (cycles 3, 7, and 10 in the Supporting Information), we record and analyze each individually, since each could be the generator of the homology class. For higher dimensional cycles we perform a similar process by hand, but we note that they could be identified algorithmically using appropriate generalizations of this graph search method.
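The breadth-first search for all minimum-length paths between the starting edge’s endpoints can be sketched as follows (the toy adjacency is ours; the real computation runs on the thresholded connectome):

```python
from collections import deque

def all_shortest_paths(adj, src, dst):
    """Breadth-first search collecting every minimum-length path src -> dst.

    adj: {node: set(neighbors)}. Each returned path, closed by the starting
    edge (src, dst), is a candidate minimal cycle representative.
    """
    dist = {src: 0}
    parents = {src: []}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:                 # first time v is reached
                dist[v] = dist[u] + 1
                parents[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:      # another equally short route
                parents[v].append(u)

    def unwind(v):
        if v == src:
            return [[src]]
        return [p + [v] for u in parents[v] for p in unwind(u)]

    return unwind(dst) if dst in dist else []

# graph thresholded just before the birth edge (0, 3) is added: a 4-cycle,
# so there are two shortest paths that (0, 3) could close into a cycle
adj = {0: {1, 4}, 1: {0, 3}, 3: {1, 4}, 4: {0, 3}}
paths = all_shortest_paths(adj, 0, 3)
```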

Standard Graph Statistics

In addition to the notions of cliques and cavities from algebraic topology, we also examined corresponding notions from traditional graph theory including communicability and rich-club architecture, which are directly related to node participation in maximal cliques.

We first considered nodes that participated in many maximal cliques, and we assessed their putative role in brain communication using the notion of network communicability. The weighted communicability between nodes i and j is

G_{ij} = (exp(D^{−1/2} A D^{−1/2}))_{ij},

where D is the diagonal matrix of node strengths, so that each edge weight A_{ij} is divided by √(s_i s_j), with s_i the strength of node i in the adjacency matrix A [80, 81]. This statistic accounts for all walks between node pairs and scales each walk’s contribution according to the product of its component edge weights; the strength normalization prevents high-strength nodes from skewing the walk contributions. We refer to the sum of a node’s communicability with all other nodes as its node communicability, C(v) = Σ_u G_{vu}.
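Because the strength-normalized adjacency matrix is symmetric, its matrix exponential can be computed by eigendecomposition. A sketch with numpy on a hypothetical 3-node network (assuming the Crofts–Higham normalization described above):

```python
import numpy as np

def weighted_communicability(A):
    """G = exp(D^{-1/2} A D^{-1/2}): strength-normalized communicability.

    Computed via eigendecomposition, valid because the normalized
    adjacency matrix is symmetric.
    """
    s = A.sum(axis=1)                      # node strengths
    d = 1.0 / np.sqrt(s)
    M = d[:, None] * A * d[None, :]        # divide A_ij by sqrt(s_i * s_j)
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(vals)) @ vecs.T  # matrix exponential of M

# hypothetical 3-node weighted network: node 0 is connected to 1 and 2
A = np.array([[0., 2., 1.],
              [2., 0., 0.],
              [1., 0., 0.]])
G = weighted_communicability(A)
node_comm = G.sum(axis=1)                  # node communicability C(v)
```

Even though nodes 1 and 2 share no direct edge, their communicability is positive because walks through node 0 contribute.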

Intuitively, nodes that participate in many maximal cliques may also play a critical role in the well-known rich-club organization of the brain, in which highly connected nodes are more densely connected to each other than expected in a random graph. For each degree k, we compute the weighted rich-club coefficient

φ^w(k) = W_{>k} / Σ_{l=1}^{E_{>k}} w_l,

where W_{>k} is the summed weight of edges in the subgraph composed of nodes with degree greater than k, E_{>k} is the number of edges in this subgraph, and w_l is the l-th greatest edge weight in the whole network. Rich-club nodes are those that remain in this subgraph at a degree k for which φ^w(k) is significantly greater (one-sided t-test) than φ^w_rand(k), the rich-club coefficient calculated from 1000 networks constructed by randomly rewiring the graph while preserving node strength [82].
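The weighted rich-club coefficient described above can be sketched in a few lines (toy weighted matrix; the significance test against strength-preserving rewired networks is omitted):

```python
def rich_club_w(A, k):
    """Weighted rich-club coefficient: summed weight of edges among nodes of
    degree > k, divided by the sum of the same number of the globally
    strongest edge weights. A: symmetric weighted matrix as list of lists."""
    n = len(A)
    deg = [sum(1 for j in range(n) if A[i][j] > 0) for i in range(n)]
    rich = [i for i in range(n) if deg[i] > k]
    # summed weight W and edge count E of the subgraph on high-degree nodes
    W = sum(A[i][j] for i in rich for j in rich if i < j and A[i][j] > 0)
    E = sum(1 for i in rich for j in rich if i < j and A[i][j] > 0)
    ranked = sorted((A[i][j] for i in range(n) for j in range(i + 1, n)
                     if A[i][j] > 0), reverse=True)
    return W / sum(ranked[:E]) if E else 0.0

# toy network: nodes 0, 1, 2 form a strongly weighted triangle; node 3 hangs off
A = [[0, 5, 4, 0],
     [5, 0, 3, 0],
     [4, 3, 0, 1],
     [0, 0, 1, 0]]
print(rich_club_w(A, 1))  # 1.0: the strongest edges all sit among high-degree nodes
```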

Furthermore, highly participating nodes may also contribute to a hierarchical organization of the network. To evaluate this contribution, we compute the k-core and s-core decompositions of the graph [3, 83]. The k-core is the maximal connected subgraph in which every node has degree at least k within the subgraph; the s-core is defined analogously, requiring each node’s summed edge weight within the subgraph to be at least s.
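The k-core can be computed by iteratively peeling away low-degree nodes; a minimal sketch (this returns the full k-core rather than its largest connected component, and the toy graph is ours):

```python
def k_core(adj, k):
    """Iteratively remove nodes of degree < k; return the surviving node set.

    adj: {node: set(neighbors)}. Removing one node can drop its neighbors
    below degree k, so we repeat until no node changes.
    """
    alive = {v: set(nbrs) for v, nbrs in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if len(alive[v]) < k:
                for u in alive[v]:
                    alive[u].discard(v)   # update surviving neighbors
                del alive[v]
                changed = True
    return set(alive)

# triangle {0, 1, 2} with a tail 2-3-4
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(k_core(adj, 2))  # {0, 1, 2}: the tail peels away node by node
```

An s-core works the same way, peeling nodes whose summed edge weight falls below s instead of their degree.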

Null Model Construction

We sought to compare the empirically observed network architecture to that expected in an appropriate null model. Due to the well-known spatial constraints on structural brain connectivity [57, 58, 56, 22], as well as the similarity of its mesoscale homological features to those of the random geometric network [20], we considered a minimally wired null model in which nodes are placed at the center of mass of anatomical brain regions. Each pair of nodes i, j is then linked by an edge with weight w_{ij} = 1/d_{ij}, where d_{ij} is the Euclidean distance between nodes i and j. Region-center locations were collected in each of the 24 scans, so we considered a population of 24 model networks. This null model allows us to assess which topological properties are driven by the precise spatial locations of brain regions combined with a stringent penalty on wiring length.
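A sketch of the minimally wired null construction, assuming an inverse-distance weighting w_ij = 1/d_ij (the region-center coordinates below are hypothetical, not the Lausanne atlas centers):

```python
import math

def minimally_wired(coords):
    """Weight each node pair by inverse Euclidean distance, so nearby
    regions receive the strongest edges (assumed 1/d weighting)."""
    n = len(coords)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])  # Euclidean distance
            W[i][j] = W[j][i] = 1.0 / d
    return W

# three hypothetical region centers (in mm)
W = minimally_wired([(0, 0, 0), (10, 0, 0), (0, 30, 0)])
```

In the paper’s setting, one such network would be built from the region centers of each of the 24 scans.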


This work was supported by the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the Army Research Laboratory and the Army Research Office through contract numbers W911NF-10-2-0022 and W911NF-14-1-0679, the National Institute of Mental Health (2-R01-DC-009209-11), the National Institute of Child Health and Human Development (1R01HD086888-01), the Office of Naval Research, and the National Science Foundation (CRCNS BCS-1441502 and CAREER PHY-1554488). We thank Scott T. Grafton for access to the DSI data.


  • [1] Bassett DS, Greenfield DL, Meyer-Lindenberg A, Weinberger DR, Moore SW, Bullmore ET. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits. PLoS Comput Biol. 2010;6(4):e1000748.
  • [2] Sporns O, Tononi G, Kotter R. The human connectome: A structural description of the human brain. PLoS Comput Biol. 2005;1(4):e42.
  • [3] Hagmann P, Cammoun L, Gigandet X, Meuli R, Honey CJ, Wedeen VJ, et al. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008;6(7):e159.
  • [4] Sporns O. Cerebral cartography and connectomics. Philos Trans R Soc Lond B Biol Sci. 2015;370:1668.
  • [5] Bassett DS, Brown JA, Deshpande V, Carlson JM, Grafton ST. Conserved and variable architecture of human white matter connectivity. Neuroimage. 2011;54(2):1262–1279.
  • [6] Sporns O. The human connectome: origins and challenges. Neuroimage. 2013;80:53–61.
  • [7] Porter MA, Onnela JP, Mucha PJ. Communities in Networks. Notices of the American Mathematical Society. 2009;56(9):1082–1097, 1164–1166.
  • [8] Meunier D, Achard S, Morcom A, Bullmore E. Age-related changes in modular organization of human brain functional networks. Neuroimage. 2009;44(3):715–723.
  • [9] van den Heuvel MP, Sporns O. Rich-club organization of the human connectome. J Neurosci. 2011;31(44):15775–15786.
  • [10] Senden M, Deco G, de Reus MA, Goebel R, van den Heuvel MP. Rich club organization supports a diverse set of functional network configurations. Neuroimage. 2014;96:174–182.
  • [11] Chen ZJ, He Y, Rosa-Neto P, Germann J, Evans AC. Revealing modular architecture of human brain structural networks by using cortical thickness from MRI. Cereb Cortex. 2008;18(10):2374–2381.
  • [12] Medaglia JD, Lynall ME, Bassett DS. Cognitive network neuroscience. Journal of cognitive neuroscience. 2015;.
  • [13] Sporns O, Betzel RF. Modular Brain Networks. Annu Rev Psychol. 2016;67:613–640.
  • [14] Bassett DS, Bullmore E. Small-world brain networks. Neuroscientist. 2006;12(6):512–523.
  • [15] Bullmore E, Sporns O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci. 2009;10(3):186–198.
  • [16] Bullmore ET, Bassett DS. Brain graphs: graphical models of the human brain connectome. Annu Rev Clin Psychol. 2011;7:113–140.
  • [17] Graham D, Rockmore D. The packet switching brain. J Cogn Neurosci. 2011;23(2):267–276.
  • [18] Ghrist R. Elementary Applied Topology. CreateSpace Independent Publishing Platform; 2014.
  • [19] Ghrist R. Barcodes: the persistent topology of data. Bull Am Math Soc. 2008;45(1):61–75.
  • [20] Sizemore A, Giusti C, Bassett DS. Classification of weighted networks through mesoscale homological features. Journal of Complex Networks. 2016;In Press.
  • [21] Giusti C, Pastalkova E, Curto C, Itskov V. Clique topology reveals intrinsic geometric structure in neural correlations. Proceedings of the National Academy of Sciences. 2015;112(44):13455–13460.
  • [22] Betzel RF, Medaglia JD, Papadopoulos L, Baum G, Gur RE, Gur RC, et al. The modular organization of human anatomical brain networks: Accounting for the cost of wiring. Network Neuroscience. 2016;In Press.
  • [23] Gu S, Pasqualetti F, Cieslak M, Telesford QK, Alfred BY, Kahn AE, et al. Controllability of structural brain networks. Nat Commun. 2015;6.
  • [24] Betzel RF, Gu S, Medaglia JD, Pasqualetti F, Bassett DS. Optimally controlling the human connectome: the role of network topology. Sci Rep. 2016;6:30770.
  • [25] Muldoon SF, Pasqualetti F, Gu S, Cieslak M, Grafton ST, Vettel JM, et al. Stimulation-Based Control of Dynamic Brain Networks. PLoS Comput Biol. 2016;12(9):e1005076.
  • [26] Cammoun L, Gigandet X, Meskaldji D, Thiran JP, Sporns O, Do KQ, et al. Mapping the human connectome at multiple scales with diffusion spectrum MRI. J Neurosci Methods. 2012;203(2):386–397.
  • [27] Xia M, Wang J, He Y. BrainNet Viewer: a network visualization tool for human brain connectomics. PloS one. 2013;8(7):e68910.
  • [28] Power JD, Cohen AL, Nelson SM, Wig GS, Barnes KA, Church JA, et al. Functional network organization of the human brain. Neuron. 2011;72(4):665–678.
  • [29] Giusti C, Ghrist R, Bassett DS. Two’s company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data. Journal of Complex Networks. 2016;In Press.
  • [30] Dabaghian Y, Mémoli F, Frank L, Carlsson G. A topological paradigm for hippocampal spatial map formation using persistent homology. PLoS Comput Biol. 2012;8(8):e1002581.
  • [31] Stolz B. Computational Topology in Neuroscience. University of Oxford; 2014.
  • [32] Rajan K, Harvey CD, Tank DW. Recurrent network models of sequence generation and memory. Neuron. 2016;.
  • [33] Leen DA, Shea-Brown E. A Simple Mechanism for Beyond-Pairwise Correlations in Integrate-and-Fire Neurons. The Journal of Mathematical Neuroscience (JMN). 2015;5(1):1–13.
  • [34] Chan JM, Carlsson G, Rabadan R. Topology of viral evolution. Proceedings of the National Academy of Sciences. 2013;110(46):18566–18571.
  • [35] Petri G, Expert P, Turkheimer F, Carhart-Harris R, Nutt D, Hellyer P, et al. Homological scaffolds of brain functional networks. Journal of The Royal Society Interface. 2014;11(101):20140873.
  • [36] Lord LD, Expert P, Fernandes H, Petri G, Van Hartevelt T, Vaccarino F, et al. Insights into brain architectures from the homological scaffolds of functional connectivity networks. Frontiers in Systems Neuroscience. 2016;10:85. doi:10.3389/fnsys.2016.00085.
  • [37] Hermundstad AM, Brown KS, Bassett DS, Carlson JM. Learning, memory, and the role of neural network architecture. PLoS Comput Biol. 2011;7:e1002063.
  • [38] Levy N, Horn D, Meilijson I, Ruppin E. Distributed synchrony in a cell assembly of spiking neurons. Neural Netw. 2001;14:815–824.
  • [39] Fiete IR, Senn W, Wang CZ, Hahnloser RHR. Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity. Neuron. 2010;65:563–576.
  • [40] Tang E, Giusti C, Baum G, Gu S, Kahn AE, Roalf D, et al. Structural drivers of diverse neural dynamics and their evolution across development. arXiv preprint arXiv:160701010. 2016;.
  • [41] Galaburda AM, LeMay M, Kemper TL, Geschwind N. Right-left asymmetrics in the brain. Science. 1978;199(4331):852–856.
  • [42] Rasmussen T, Milner B. The role of early left-brain injury in determining lateralization of cerebral speech functions. Annals of the New York Academy of Sciences. 1977;299(1):355–369.
  • [43] Desmond JE, Sum J, Wagner A, Demb J, Shear P, Glover G, et al. Functional MRI measurement of language lateralization in Wada-tested patients. Brain. 1995;118(6):1411–1419.
  • [44] Thulborn KR, Carpenter PA, Just MA. Plasticity of language-related brain function during recovery from stroke. Stroke. 1999;30(4):749–754.
  • [45] Doron KW, Bassett DS, Gazzaniga MS. Dynamic network structure of interhemispheric coordination. Proc Natl Acad Sci U S A. 2012;109(46):18661–18668.
  • [46] Chai LR, Mattar MG, Blank IA, Fedorenko E, Bassett DS. Functional Network Dynamics of the Language System. Cereb Cortex. 2016;Epub ahead of print.
  • [47] Wager TD, Phan KL, Liberzon I, Taylor SF. Valence, gender, and lateralization of functional brain anatomy in emotion: a meta-analysis of findings from neuroimaging. Neuroimage. 2003;19(3):513–531.
  • [48] Sandi C, Patterson TA, Rose S. Visual input and lateralization of brain function in learning in the chick. Neuroscience. 1993;52(2):393–401.
  • [49] Carpenter PA, Just MA, Reichle ED. Working memory and executive function: evidence from neuroimaging. Current opinion in neurobiology. 2000;10(2):195–199.
  • [50] Oades RD. Frontal, temporal and lateralized brain function in children with attention-deficit hyperactivity disorder: a psychophysiological and neuropsychological viewpoint on development. Behavioural Brain Research. 1998;94(1):83–95.
  • [51] Rakic P. Evolution of the neocortex: a perspective from developmental biology. Nat Rev Neurosci. 2009;10(10):724–735.
  • [52] Middleton FA, Strick PL. Basal ganglia and cerebellar loops: motor and cognitive circuits. Brain Res Brain Res Rev. 2000;31(2–3):236–250.
  • [53] Braver TS, Barch DM, Gray JR, Molfese DL, Snyder A. Anterior cingulate cortex and response conflict: effects of frequency, inhibition and errors. Cereb Cortex. 2001;11(9):825–836.
  • [54] Kringelbach ML, Rolls ET. The functional neuroanatomy of the human orbitofrontal cortex: evidence from neuroimaging and neuropsychology. Prog Neurobiol. 2004;72(5):341–372.
  • [55] Gu S, Satterthwaite TD, Medaglia JD, Yang M, Gur RE, Gur RC, et al. Emergence of system roles in normative neurodevelopment. Proc Natl Acad Sci U S A. 2015;112(44):13681–13686.
  • [56] Bullmore E, Sporns O. The economy of brain network organization. Nature Rev Neurosci. 2012;13(5):336–349.
  • [57] Klimm F, Bassett DS, Carlson JM, Mucha PJ. Resolving structural variability in network models and the brain. PLOS Comput Biol. 2014;10(3):e1003491.
  • [58] Lohse C, Bassett DS, Lim KO, Carlson JM. Resolving anatomical and functional structure in human brain organization: identifying mesoscale organization in weighted network representations. PLoS Comput Biol. 2014;10(10):e1003712.
  • [59] Pessoa L. Understanding brain networks and brain organization. Phys Life Rev. 2014;11(3):400–435.
  • [60] Hilgetag CC, Goulas A. Is the brain really a small-world network? Brain Struct Funct. 2016;221(4):2361–2366.
  • [61] Muldoon SF, Bridgeford EW, Bassett DS. Small-World Propensity and Weighted Brain Networks. Sci Rep. 2016;6:22057.
  • [62] Bassett DS, Bullmore ET. Small-World Brain Networks Revisited. Neuroscientist. 2016;Epub ahead of print:1073858416667720.
  • [63] Goni J, Avena-Koenigsberger A, Velez de Mendizabal N, van den Heuvel MP, Betzel RF, Sporns O. Exploring the morphospace of communication efficiency in complex networks. PLoS One. 2013;8(3):e58070.
  • [64] Thomas C, Ye FQ, Irfanoglu MO, Modi P, Saleem KS, Leopold DA, et al. Anatomical accuracy of brain connections derived from diffusion MRI tractography is inherently limited. Proc Natl Acad Sci U S A. 2014;111(46):16574–16579.
  • [65] Reveley C, Seth AK, Pierpaoli C, Silva AC, Yu D, Saunders RC, et al. Superficial white matter fiber systems impede detection of long-range cortical connections in diffusion MR tractography. Proc Natl Acad Sci U S A. 2015;112(21):E2820–E2828.
  • [66] Wedeen VJ, Wang RP, Schmahmann JD, Benner T, Tseng WY, Dai G, et al. Diffusion spectrum magnetic resonance imaging (DSI) tractography of crossing fibers. Neuroimage. 2008;41(4):1267–1277.
  • [67] Pestilli F, Yeatman JD, Rokem A, Kay KN, Wandell BA. Evaluation and statistical inference for human connectomes. Nat Methods. 2014;11(10):1058–1063.
  • [68] Bassett DS, Bullmore ET. Human brain networks in health and disease. Curr Opin Neurol. 2009;22(4):340–347.
  • [69] Bassett DS, Nelson BG, Mueller BA, Camchong J, Lim KO. Altered resting state complexity in schizophrenia. Neuroimage. 2012;59(3):2196–2207.
  • [70] Betzel RF, Bassett DS. Multi-scale brain networks. Neuroimage. 2016;S1053-8119(16):30615–2.
  • [71] Hermundstad AM, Bassett DS, Brown KS, Aminoff EM, Clewett D, Freeman S, et al. Structural foundations of resting-state and task-based functional connectivity in the human brain. Proc Natl Acad Sci U S A. 2013;110(15):6169–6174.
  • [72] Hermundstad AM, Brown KS, Bassett DS, Aminoff EM, Frithsen A, Johnson A, et al. Structurally-constrained relationships between cognitive states in the human brain. PLoS Comput Biol. 2014;10(5):e1003591.
  • [73] Cieslak M, Grafton S. Local termination pattern analysis: A tool for comparing white matter morphology. Brain Imaging Behav. 2014;8(2):292–299.
  • [74] Henselman G, Ghrist R. A Novel Algorithm for Topological Persistence, with Application to Neuroscience; 2016.
  • [75] Chowdhury S, Mémoli F. Persistent Homology of Asymmetric Networks: An Approach based on Dowker Filtrations. arXiv preprint arXiv:160805432. 2016;.
  • [76] Cohen-Steiner D, Edelsbrunner H, Harer J. Stability of persistence diagrams. DCG. 2007;37(1):103–120.
  • [77] Bobrowski O, Kahle M, Skraba P. Maximally Persistent Cycles in Random Geometric Complexes. arXiv preprint arXiv:150904347. 2015;.
  • [78] Chen C, Freedman D. Hardness results for homology localization. Discrete & Computational Geometry. 2011;45(3):425–448.
  • [79] Dey TK, Hirani AN, Krishnamoorthy B. Optimal homologous cycles, total unimodularity, and linear programming. SIAM Journal on Computing. 2011;40(4):1026–1044.
  • [80] Crofts JJ, Higham DJ. A weighted communicability measure applied to complex brain networks. Journal of the Royal Society Interface. 2009; p. rsif–2008.
  • [81] Estrada E, Hatano N. Communicability in complex networks. Physical Review E. 2008;77(3):036111.
  • [82] Rubinov M, Sporns O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 2010;52(3):1059–1069.
  • [83] Chatterjee N, Sinha S. Understanding the mind of a worm: hierarchical network structure underlying nervous system function in C. elegans. Progress in brain research. 2007;168:145–153.
  • [84] Yeh FC, Tseng WYI. NTU-90: a high angular resolution brain atlas constructed by q-space diffeomorphic reconstruction. Neuroimage. 2011;58(1):91–99.
  • [85] Dale AM, Fischl B, Sereno MI. Cortical surface-based analysis: I. Segmentation and surface reconstruction. Neuroimage. 1999;9(2):179–194.
  • [86] Carlsson G. Topology and data. Bull Amer Math Soc. 2009;46(2):255–308.
  • [87] Zomorodian A, Carlsson G. Computing persistent homology. DCG. 2005;33(2):249–274.
  • [88] Hatcher A. Algebraic topology. Cambridge University Press; 2002.
  • [89] Tucker A. Chapter 2: Covering Circuits and Graph Colorings. Applied Combinatorics. 2006; p. 49.
  • [90] Johnson DB. Finding all the elementary circuits of a directed graph. SIAM Journal on Computing. 1975;4(1):77–84.
  • [91] Carlsson G, De Silva V. Zigzag persistence. Foundations of computational mathematics. 2010;10(4):367–405.
  • [92] Dey TK, Fan F, Wang Y. Computing topological persistence for simplicial maps. In: Proceedings of the thirtieth annual symposium on Computational geometry. ACM; 2014. p. 345.
  • [93] Kahle M. Random geometric complexes. DCG. 2011;45(3):553–573.

Supporting Information

Appendix A Data Acquisition

All participants volunteered with informed consent in writing in accordance with the Institutional Review Board/Human Subjects Committee of the University of California, Santa Barbara. Diffusion spectrum imaging (DSI) scans were acquired from eight subjects (mean age 27 ± 5 years, two female, two left-handed) on 3 separate days, for a total of 24 scans [73]. DSI scans sampled 257 directions using a Q5 half-shell acquisition scheme with a maximum b-value of 5000 and an isotropic voxel size of 2.4 mm. We utilized an axial acquisition with the following parameters: repetition time (TR) = 11.4 s, echo time (TE) = 138 ms, 51 slices, field of view (FoV) 231 × 231 × 123 mm.

DSI data were reconstructed in DSI Studio using q-space diffeomorphic reconstruction (QSDR) [84]. QSDR first reconstructs diffusion-weighted images in native space and computes the quantitative anisotropy (QA) in each voxel. These QA values are used to warp the brain to a template QA volume in Montreal Neurological Institute (MNI) space using the statistical parametric mapping (SPM) nonlinear registration algorithm. Once in MNI space, spin density functions were again reconstructed with a mean diffusion distance of 1.25 mm using three fiber orientations per voxel. Fiber tracking was performed in DSI Studio with an angular cutoff of 55 degrees, step size of 1.0 mm, minimum length of 10 mm, spin density function smoothing of 0.0, maximum length of 400 mm, and a QA threshold determined by the DWI signal in the cerebrospinal fluid. Deterministic fibre tracking using a modified FACT algorithm was performed until 100,000 streamlines were reconstructed for each individual.

In addition to diffusion scans, a three-dimensional high-resolution T1-weighted sagittal sequence image of the whole brain was obtained at each scanning session by a magnetization-prepared rapid acquisition gradient-echo sequence with the following parameters: TR=15.0 ms; TE=4.2 ms; flip angle=9 degrees, 3D acquisition, FOV=256 mm; slice thickness=0.89 mm, matrix=256 × 256. Anatomical scans were segmented using FreeSurfer [85] and parcellated according to the Lausanne 2008 atlas included in the connectome mapping toolkit [3]. A parcellation scheme including 83 regions was registered to the B0 volume from each subject’s DSI data. The B0 to MNI voxel mapping produced via QSDR was used to map region labels from native space to MNI coordinates. To extend region labels through the grey–white matter interface, the atlas was dilated by 4 mm. Dilation was accomplished by filling non-labelled voxels with the statistical mode of their neighbours’ labels. In the event of a tie, one of the modes was arbitrarily selected. Each streamline was labelled according to its terminal region pair.

Appendix B Additional neighborhood-scale computations

In the main text we count maximal cliques at an edge density of 0.25 (Fig. 2). To ensure that our interpretation does not depend on this choice of edge density, we also show the maximal clique distributions at two additional edge densities (Fig. 7a, b).

Figure 7: Maximal clique distribution and spatial organization at multiple edge densities. (a) Distribution of maximal cliques in the DSI and minimally wired null model networks (left). Heat maps of node participation on the brain for maximal clique degrees 4-6 (left), 8-10 (middle), 12-16 (right). (b) Plots as in (a) at the second edge density.

To address the extent to which an anterior-posterior gradient of maximal cliques exists, we calculated the correlation coefficient of node participation with the position of the node along this axis. Fig. 8a shows that, generally, the maximal clique participation of a node is more highly correlated with anterior-posterior position for higher clique degrees. To complement this calculation, Fig. 8b shows the normalized participation of each node for each maximal clique degree k.

Figure 8: Maximal clique correlation with anterior-posterior gradient. (a) Pearson correlation coefficient of node participation with the coordinate along the anterior-posterior axis. (b) Spatial distribution of node participation for each maximal clique degree k. The color of each node corresponds to its participation value, between zero and the maximum participation of any node for the given degree k.

We then asked if node participation varies by cognitive system, perhaps reflecting each system's unique function. Results are shown in Fig. 2. The specific ordering of nodes for this figure is shown below (Fig. 9). For each (right, left) hemisphere pair, the brain region in the right hemisphere is listed first, immediately followed by that in the left hemisphere.

Figure 9: Order of brain regions for Fig. 2b.

Additionally, we are interested in comparing node participation to other measures of connectedness, as we expect they should generally agree. One such measure is the rich club. Following the procedure of van den Heuvel and Sporns [9], we calculated the rich club coefficient φ(k), its randomized counterpart φ_rand(k), and the normalized coefficient φ_norm(k) for each degree k (Fig. 10).

Figure 10: Defining the rich club of the DSI network. The rich club coefficient of the DSI network, φ(k), is shown in black, the average rich club coefficient of randomized networks, φ_rand(k), in gray, and the normalized rich club coefficient, φ_norm(k), in blue. Shaded regions indicate values of k for which φ(k) significantly exceeded φ_rand(k).
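The rich club coefficient above can be sketched in a few lines. The following is a minimal illustration on a toy graph (the node labels and edges are invented, not the DSI data), and the normalized coefficient φ_norm(k) would additionally require degree-preserving randomized networks, which we omit here.

```python
def rich_club_coefficient(edges, k):
    """Rich club coefficient phi(k): density of the subgraph induced
    by the nodes whose degree exceeds k (van den Heuvel & Sporns style)."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    rich = {n for n, d in degree.items() if d > k}
    if len(rich) < 2:
        return None  # phi(k) is undefined for fewer than two rich nodes
    rich_edges = sum(1 for u, v in edges if u in rich and v in rich)
    possible = len(rich) * (len(rich) - 1) // 2
    return rich_edges / possible

# Toy example: a 4-clique "hub core" plus two low-degree leaf nodes.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 4), (1, 5)]
print(rich_club_coefficient(edges, 3))  # 1.0: the two degree-4 hubs share an edge
```

The shaded significance regions in Fig. 10 come from comparing φ(k) against the distribution of φ_rand(k) over many randomized networks, which this sketch does not reproduce.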

Appendix C Persistent Homology

We are interested in finding mesoscale structural features, specifically non-trivial minimal cycles within our weighted network. Persistent homology strings these features together across network snapshots in a filtration, offering a global picture of network architecture. We include a brief description of the method here and advise the interested reader to consult [86, 18, 87] for additional details.

c.1 Complexes

Cliques First, we will transform our network of interest into an algebraic object so that we can use powerful computational tools from linear algebra to compute intuitive geometric features. We begin by selecting building blocks from which to assemble larger, mesoscale structures. Drawing on classical graph theory and our intuition about the type of structures we are looking for, we are led to a natural (and well studied) choice of such blocks: sets of all-to-all connected nodes called cliques. In the context of brain networks, cliques are groups of brain regions that are able to rapidly and effectively share information. Formally, a k-clique of a graph G is a set of k nodes for which all pairwise edges are in G. Thus, a single node is a 1-clique, an edge a 2-clique, a triangle a 3-clique, and so on. Any subgraph of a clique must itself be a clique of lower degree, called a face. A maximal clique is thus any clique that is not a face of a larger clique. Intuitively, we will think of cliques as "filled in" regions, rather than hollow collections of edges (Fig. 11a).

Figure 11: From cliques to a clique complex. (a) Cliques are all-to-all connected sets of nodes, which we use as "filled in" building blocks. (b) The clique complex is created by inserting these building blocks into the completely connected subgraphs of G.
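As a concrete illustration of cliques and maximal cliques, the sketch below enumerates the maximal cliques of a toy graph using the basic Bron-Kerbosch recursion. The graph is invented for illustration; the brain networks in this paper are far larger, and the actual analysis pipeline is not shown here.

```python
def maximal_cliques(adj):
    """Enumerate maximal cliques with the basic Bron-Kerbosch recursion.
    `adj` maps each node to the set of its neighbours."""
    cliques = []

    def expand(R, P, X):
        # R: current clique; P: candidate extensions; X: already-explored nodes
        if not P and not X:
            cliques.append(sorted(R))  # R cannot be extended, so it is maximal
            return
        for v in sorted(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}

    expand(set(), set(adj), set())
    return cliques

# Toy graph: a triangle {0,1,2} sharing node 2 with the edge (2,3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(maximal_cliques(adj)))  # [[0, 1, 2], [2, 3]]
```

Note that the 3-clique {0, 1, 2} is maximal, while its faces (the edges and nodes it contains) are not reported, matching the definition in the text.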

Clique Complex We study the structure formed by all cliques induced by the graph G, a combinatorial object called the clique complex (more specifically, we build the abstract simplicial complex formed from the correspondence of k-simplices and (k+1)-cliques; see [86, 88, 18] for more details) (Fig. 11b). The clique complex X(G) of a graph G is the collection of all the cliques in G; formally, X(G) is the union of the sets X_k(G), where X_k(G) is the set of all (k+1)-cliques in G. Historically, the index k is chosen to correspond to the dimension of the enclosed region, and we adopt this index shift here for consistency. The clique complex is an object which allows us to formally manipulate certain important geometric properties (as we explore in more detail in the following sections), and, through these computations, discover mesoscale features of interest.

Chain Group In order to perform computations, we move from sets of cliques to vector spaces. We define the chain group C_k(X(G)) (abbreviated to C_k when the underlying clique complex is understood) as the vector space with basis X_k(G). We denote by [v_0, v_1, …, v_k] the basis element corresponding to a (k+1)-clique on nodes v_0, v_1, …, v_k. Though this definition can be made for any scalar field, we use vector spaces over the field with two elements, F_2 = {0, 1}, as is standard in topological data analysis. Elements of C_k are linear combinations of basis elements, called k-chains, which correspond to collections of (k+1)-cliques.

For example, consider the clique complex shown in Fig. 12. Elements of C_1 are linear combinations of edges, or 2-cliques. One such element is the path shown in blue in Fig. 12, intuitively an undirected path between two nodes of the complex. We could also take the purple path, which begins at a node, follows four edges in turn, and returns to its starting node. (Because we work over F_2, this algebraic encoding is not sensitive to clique direction, only to the parity of the number of times a clique appears in a chain.) In C_2, an element is a linear combination of 3-cliques; one such element is highlighted in Fig. 12 (right). Because we are working in F_2, if we took this path twice, the two copies would cancel and we would be left with the empty chain 0.

Figure 12: Chain group elements are linear combinations of cliques. See appendix text for a complete description of these graphs.
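The parity behavior described above (over F_2, a clique taken an even number of times cancels) can be seen directly if a 1-chain is represented as a set of edges, so that chain addition is symmetric difference. The node labels below are illustrative only, not taken from Fig. 12.

```python
# A 1-chain over F2 as a set of edges (each edge a frozenset of nodes);
# adding chains is symmetric difference, so duplicate edges cancel.
path = {frozenset({0, 1}), frozenset({1, 2})}
other = {frozenset({1, 2}), frozenset({2, 3})}

print(path ^ path)   # set(): taking a path twice yields the empty chain
print(path ^ other)  # only edges appearing an odd number of times survive
```

This set-with-symmetric-difference view is exactly arithmetic in the vector space C_1 over F_2.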

Boundary Operator Recall that our goal is to detect topological cavities (the structure of cycles is subtle and not necessarily indicative of physical cavities in a general sense; however, in the case of these relatively sparse 3D graphs this is usually the case) in our algebraic object. Cavities exist when cliques are arranged in a loop or shell, but there are no higher dimensional cliques that "fill in" the enclosed space – that is, the shell is not the "boundary" of some collection of higher dimensional cliques. To detect this computationally, we use the boundary operator ∂_k, which takes a collection of (k+1)-cliques (an element of C_k) and sends it to its boundary (an element of C_{k−1}).

Geometrically, the boundary of a (k+1)-clique is the family of k-cliques obtained by removing each vertex in succession. The boundary of a contiguous collection of (one or more) (k+1)-cliques is a "shell" of k-cliques surrounding the original collection, inside of which the boundaries of neighboring (k+1)-cliques overlap. Detecting this pattern of overlaps computationally is accomplished when chains corresponding to the shared faces cancel. In Fig. 13 the boundary of a pair of adjacent 3-cliques is the chain corresponding to the surrounding four edges (2-cliques), as the shared interior edge cancels. Formalizing this intuition, we define the boundary operator (with coefficients in F_2) on the basis to be

∂_k [v_0, v_1, …, v_k] = Σ_{i=0}^{k} [v_0, …, v_{i−1}, v_{i+1}, …, v_k],

where the i-th summand omits vertex v_i from the set of vertices that form the clique, and we extend this map linearly to all of C_k. Then, for example, in Fig. 13, applying ∂_2 to the sum of the two 3-cliques returns the sum of their six edges, in which the two copies of the shared interior edge cancel, leaving the four surrounding edges.

Figure 13: Example of the boundary operator in . See appendix text for a complete description of these graphs.

Because the boundary of a chain in C_k is itself an element of C_{k−1}, we can apply ∂_{k−1} to it in turn. As illustrated in Fig. 13, applying the boundary operator a second time yields the empty chain: each endpoint of the surrounding edges appears exactly twice, so all terms cancel. This example illustrates a crucial property of the boundary operator: ∂_{k−1} ∘ ∂_k = 0, which will be more thoroughly discussed in the Homology section below.
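The cancellation behavior of the boundary operator over F_2 can be checked directly in code: representing a chain as a set of cliques, the boundary is the symmetric difference of all faces. The node labels below are illustrative, not taken from Fig. 13.

```python
from itertools import combinations

def boundary(chain):
    """Boundary of a k-chain over F2. A chain is a set of cliques,
    each clique a frozenset of nodes; faces that appear an even
    number of times cancel (symmetric difference)."""
    out = set()
    for clique in chain:
        for face in combinations(sorted(clique), len(clique) - 1):
            out ^= {frozenset(face)}
    return out

# Two triangles glued along a shared edge, in the spirit of Fig. 13
# (node labels here are invented for illustration).
chain2 = {frozenset({0, 1, 2}), frozenset({1, 2, 3})}
b = boundary(chain2)                 # the shared edge {1, 2} cancels
print(sorted(sorted(e) for e in b))  # [[0, 1], [0, 2], [1, 3], [2, 3]]
print(boundary(b))                   # set(): the boundary of a boundary is empty
```

The final line verifies the crucial identity ∂ ∘ ∂ = 0 on this example.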

Chain Complex We now have a boundary operator that lets us move from k-chains to (k−1)-chains for every k ≥ 1. (The boundary of a 0-chain is defined to be 0, since a node is a single point with no geometric boundary.) These operators link together the chain groups into a sequence

⋯ → C_k → C_{k−1} → ⋯ → C_1 → C_0 → 0,

in which each arrow is the boundary operator of the appropriate dimension; this sequence is called the chain complex for X(G). This is our fundamental algebraic tool for studying the structure of the network.

In summary, we have taken an unweighted, undirected graph G and, from an enumeration of its cliques, formed the clique complex X(G) (Fig. 14, left). We then used the cliques of each dimension as basis elements of the chain groups (Fig. 14, middle). Finally, we defined the boundary operator ∂_k that finds the boundary of a k-chain (which represents a collection of (k+1)-cliques), itself a (possibly empty) (k−1)-chain representing a collection of k-cliques, and we used this function to string together the chain groups into the chain complex (Fig. 14, right).

Figure 14: Illustration of creating the clique complex X(G) from the graph G. Also shown are the induced chain complex and an example of boundary calculations on an element of C_2.

c.2 Homology

We turn now to the definitions and concepts needed to compute homology. Homology discovers features of interest in the clique complex by separating the cycles (mesoscale patterns constructed from cliques) that surround a cavity from those that are the boundary of a collection of cliques.

Cycles Though we have seen examples of cliques strung together as paths, we are particularly interested in paths that form closed structures called cycles, the 1-dimensional analogues of which are graph-theoretic circuits. Consider the three closed circuits in Fig. 15; each can be thought of as a linear combination of elements in C_1. If we begin at any 1-clique (node) on a cycle and traverse each 2-clique in the cycle in order, we will end at our starting 1-clique. Since the boundary of any path is the sum of its two endpoint nodes, the boundary of any closed cycle must be 0.


Though we have thus far focused on the familiar notion of cycles built of 2-cliques, the intuitive notion that boundaries should cancel allows us to construct cycles in any dimension. We define a k-cycle to be any element z of C_k with ∂_k z = 0. Since the cycles are exactly the elements that are sent to 0 by the boundary operator, the subspace of k-cycles is precisely the kernel (or null space) of ∂_k, denoted ker ∂_k.

As noted above, cycles can either surround cavities or surround a collection of cliques, and since we are strictly interested in cycles of the first type, we must determine how to differentiate between these two options. Fig. 15 depicts three 1-cycles found in the clique complex shown on the left. Looking strictly at the cycles themselves, we cannot distinguish which of the three belongs to which category.

Figure 15: Cycles. Examples of a cycle that is also a boundary and two equivalent, non-boundary cycles.

However, if we include information about 3-cliques, the separation becomes apparent, in the same way that looking at the full depiction of the clique complex in Fig. 15 (left) makes it apparent that this object surrounds one cavity. We need consider only the image of the boundary map from C_2: if a 1-cycle surrounds a collection of higher dimensional cliques, it must in particular surround a collection of 2-cliques (2-faces of these larger cliques). In our example in Fig. 15, this means the boundary cycle is the boundary of some element of C_2.

We can repeat such an argument for any k-cycle that surrounds a collection of higher dimensional cliques, which allows us to define k-boundaries as elements of the image of ∂_{k+1}, denoted im ∂_{k+1}. Furthermore, every k-boundary must itself be a k-cycle, per our previous observation that ∂_k ∘ ∂_{k+1} = 0.

However, not all cycles are necessarily boundaries: the two non-boundary cycles in Fig. 15 lie in ker ∂_1, but neither is an element of im ∂_2. The k-cycles that surround cavities are thus those that are in ker ∂_k but not in im ∂_{k+1}. However, enumerating these cycles is not enough to produce a proper list of cavities in our clique complex, because we will suffer from redundancy. For example, knowing either of the two non-boundary cycles tells us that the cavity they both enclose exists. Certainly the two cycles differ as chains, but we should consider them equivalent since they both reveal the same feature of our complex. So we need a way to count more carefully.

Equivalence The solution to our enumeration problem will depend on what we regard as "the same". Above we mentioned that we should consider the two non-boundary cycles in Fig. 15 to be equivalent because they surround the same cavity. How is it that we understood this? We see that both enclose this cavity, while one of them also surrounds a 3-clique. But this 3-clique does not change the cavity or add a new one, so we decided this difference of a higher dimensional clique should be insubstantial, and thus the two cycles are equivalent. Generalizing this example provides a method for correctly enumerating the cavities in the complex.

Two k-cycles z_1 and z_2 are called equivalent if their sum z_1 + z_2 (working over F_2) is the boundary of a (k+1)-chain, i.e., if z_1 + z_2 is an element of im ∂_{k+1}. In Fig. 15, the sum of the two non-boundary cycles is precisely the boundary of the 3-clique that separates them, so indeed we see that the two cycles are equivalent.

This, finally, provides us with a proper count: if we count only one cycle from each set of (non-trivial) equivalent cycles, then we will have precisely the number of topological cavities of a given dimension within the clique complex. The clique complex in Fig. 14 by eye has only one cavity surrounded by 1-cycles, and our computations come to the same conclusion. Notice that any closed loop of 2-cliques either is equivalent to the cycle surrounding this cavity or is strictly a boundary of higher dimensional cliques and thus trivial. So, as desired, we have a sole 2-dimensional cavity.

The equivalence class of a k-cycle z is the set [z] = {z + ∂_{k+1}(c) : c in C_{k+1}}. Note that the equivalence class of boundary cycles contains the empty chain 0, since every boundary differs from 0 by a boundary. This means that for any cycle z and boundary b, we have [z + b] = [z], confirming our requirement that cycles differing by boundaries are equivalent. By abuse of notation, it is common to refer to an equivalence class of k-cycles as a k-cycle, and we will continue with this convention.

Homology Groups The heavy lifting is now complete and we are left with only the formal definition of homology to conclude the section. Recalling the equivalence classes we have discussed above, we define the homology group of dimension k as

H_k(X(G)) = ker ∂_k / im ∂_{k+1},

which is simply the vector space spanned by equivalence classes of k-cycles. The dimension of H_k is the number of linearly independent nontrivial k-cycles and thus the number of k-dimensional topological cavities of our clique complex. In summary, we can now take a graph of nodes and edges, convert it to an algebraic object called the clique complex, then use the boundary operator to find equivalence classes of cycles that describe essential mesoscale architecture of our network in the form of topological cavities.
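Over F_2, the dimension of H_k reduces to rank arithmetic: dim H_k = dim ker ∂_k − rank ∂_{k+1} = n_k − rank ∂_k − rank ∂_{k+1}, where n_k is the number of (k+1)-cliques. The sketch below computes this for a toy complex (a hollow square) using bitmask Gaussian elimination; it is a minimal illustration under invented data, not the software used in the paper.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over F2 of a matrix whose rows (or columns) are int bitmasks."""
    rank, rows = 0, list(rows)
    while rows:
        row = rows.pop()
        if row == 0:
            continue
        rank += 1
        pivot = row & -row                        # lowest set bit as pivot
        rows = [r ^ row if r & pivot else r for r in rows]
    return rank

def betti(cliques, k):
    """k-th Betti number over F2 of a complex given as a collection of
    cliques (frozensets, including single nodes): dim ker d_k - rank d_{k+1}."""
    def simplices(d):                             # the (d+1)-cliques, indexed stably
        return sorted(tuple(sorted(c)) for c in cliques if len(c) == d + 1)

    def boundary_rank(d):
        faces = {f: i for i, f in enumerate(simplices(d - 1))}
        cols = []
        for s in simplices(d):
            mask = 0
            for f in combinations(s, d):          # drop one vertex at a time
                mask |= 1 << faces[f]
            cols.append(mask)
        return gf2_rank(cols)

    n_k = len(simplices(k))
    rank_k = boundary_rank(k) if k > 0 else 0     # d_0 is the zero map
    return n_k - rank_k - boundary_rank(k + 1)

# A hollow square: four nodes, four edges, and no triangle filling the loop.
nodes = [frozenset({i}) for i in range(4)]
edges = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (0, 3)]]
cx = nodes + edges
print(betti(cx, 0), betti(cx, 1))  # 1 1: one component, one 1-D cavity
```

Adding the diagonal edge and the two triangles it creates fills the cavity and drops the first Betti number back to zero, matching the boundary-equivalence discussion above.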

c.3 Homology for Weighted Networks: Persistent Homology

While homology detects cavities in binary graphs, the DSI data (like many other data sources in biology) give a weighted network. Persistent homology was developed [86, 87] to address this situation. The method uses the edge weights to unravel the weighted network into a sequence of binary networks on which we can then compute homology, in a manner related to, but more principled than, standard thresholding techniques. Overall, persistent homology tracks how the features seen with homology evolve across this sequence of networks.

Filtrations Given a weighted network, we first construct a sequence of binary graphs that will allow us to use homology on each graph in the sequence. The edge weights induce a natural ordering on the edges from highest to lowest weight. Then, beginning with the empty graph, we place edges following this ordering. This process creates a filtration

G_0 ⊆ G_1 ⊆ ⋯ ⊆ G_{|E|},

where each G_i contains one more edge than G_{i−1}. Since G_i contains G_{i−1} (plus one more edge), we obtain an inclusion map which describes how G_{i−1} maps into G_i. In our case this map is quite natural: each node and edge of G_{i−1} is sent to itself, now a subgraph of G_i (Fig. 16, top row).

Figure 16: Filtrations and inclusion maps. Edge weights, indicated by line thickness, induce a filtration on the weighted graph G. The inclusion maps between the binary graphs G_i induce inclusion maps on the corresponding clique complexes X(G_i).
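Building such a filtration from a weighted edge list can be sketched as follows; `filtration` is a hypothetical helper with invented toy weights, tagging each binary graph with its edge density as in the paper's ρ parameter.

```python
def filtration(weighted_edges):
    """Binary graph sequence obtained by adding edges in order of
    decreasing weight; returns (edge density, edge list) pairs."""
    nodes = {n for edge, w in weighted_edges for n in edge}
    possible = len(nodes) * (len(nodes) - 1) / 2   # possible edges on these nodes
    ordered = sorted(weighted_edges, key=lambda ew: -ew[1])
    graphs, current = [], []
    for edge, weight in ordered:
        current = current + [edge]                 # G_i = G_{i-1} plus one edge
        graphs.append((len(current) / possible, list(current)))
    return graphs

# Toy weighted graph (invented weights and labels).
weighted = [((0, 1), 0.9), ((1, 2), 0.7), ((0, 2), 0.4), ((2, 3), 0.2)]
for rho, g in filtration(weighted):
    print(round(rho, 2), g)                        # densities increase stepwise
```

Each graph in the output contains the previous one, which is exactly the inclusion G_{i−1} ⊆ G_i described in the text.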

Having an inclusion of G_{i−1} into G_i means we can also get an inclusion of X(G_{i−1}) into X(G_i) in a similar fashion, where cliques in X(G_{i−1}) map to their corresponding selves in X(G_i) (Fig. 16, bottom row).

But now knowing how one clique complex maps into the next means we get maps between the chain groups as well. For example, in Fig. 17 we look only at the inclusion of X(G_{i−1}) into X(G_i). This inclusion map tells us how to take cliques from X(G_{i−1}) and fit them into X(G_i), which means we can figure out how to take any combination of cliques, that is, any chain, and fit it into the chain groups of X(G_i) as well. The functions that perform this task are the induced chain maps

C_k(X(G_{i−1})) → C_k(X(G_i)),

one for each dimension k. We show the first three with examples in Fig. 17. If we have a 0-chain, it is mapped by the dimension-zero chain map to the corresponding 0-chain in C_0(X(G_i)).

Figure 17: Inclusion maps between clique complexes induce maps between the corresponding chain complexes. See appendix text for a complete description of these graphs.

We can do this in the higher dimensions as well. Figure 17 also shows a green 1-chain and how it maps into X(G_i). It is interesting to note here that this 1-chain is a non-trivial 1-cycle in one of the two complexes but not in the other. Again we can move to the 2-chains and observe how a 2-chain is sent to its image in X(G_i).

Generally filtrations are a powerful way to understand weighted networks. Here, we will use these chain maps to track particular chains throughout the filtration to see how they may change as new edges (and thus cliques) are added.

Persistent Homology As we are interested in cycles, we now turn to tracking cycles specifically throughout the filtration. A k-loop is a k-chain, so it can be tracked horizontally from clique complex to clique complex in the filtration. Additionally, we have vertical boundary maps that tell us whether the k-loop in question is a cycle or a boundary loop within a particular clique complex. More generally, we are combining the information from the filtration and its between-complex induced maps (Figs. 16, 17) with the boundary information from the within-complex boundary operators (Fig. 14) to observe how cycles change as we add edges of decreasing weight.

Formally, these maps and complexes form the persistence complex of our weighted graph (Fig. 18). Armed with inclusion and boundary maps between chain groups, we can compute the homology of each graph in the filtration and therefore obtain maps that describe how equivalence classes of cycles in X(G_{i−1}) change (map directly, shrink in length, become a boundary loop) in X(G_i).

Figure 18: Illustration of the persistence complex of the weighted graph G. The green 1-cycle is first seen early in the filtration, is mapped through subsequent complexes, and finally becomes the boundary of a collection of 3-cliques (pink) in a later complex.

For example, in Fig. 18 we see that the green 1-cycle first appears at a particular stage of the filtration. We say the cycle is born at this edge density, denoted ρ_birth. The green cycle continues to exist until it maps to a cycle that is the boundary of the pink 2-chain. Since this cycle is now a boundary, it is equivalent to the trivial cycle in that complex. We say the cycle dies at this edge density, denoted ρ_death.

Cycles that exist over many edge additions must evade becoming triangulated by cliques and thus becoming a boundary. Therefore we consider such cycles more essential if they persist for many edge additions. We measure cycle persistence in two ways. First, we record the cycle lifetime ρ_death − ρ_birth, which is commonly used in persistent homology calculations [86] and is displayed on a persistence diagram; Fig. 19 shows an example persistence diagram for a cycle born at ρ_birth and dying at ρ_death. However, recent work [77] suggests alternatively considering the ratio ρ_death/ρ_birth, which allows for comparison of cycle persistence at different scales and underscores the importance of cycles forming at low edge densities.

Figure 19: Example persistence diagram for the green cycle shown in Fig. 18. See appendix text for a complete description of these graphs.

To summarize, persistent homology tracks interesting connection patterns (cycles) through network frames induced by edge weights, recovering a parameter-free perspective on essential structural features in a weighted network.
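A naive version of this tracking can be sketched by recomputing the first Betti number after each edge addition in a toy filtration; the weights below are invented, and practical persistent homology software uses far more efficient matrix reductions than this restart-from-scratch approach.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over F2 of a matrix whose columns are int bitmasks."""
    rank, rows = 0, list(rows)
    while rows:
        row = rows.pop()
        if row:
            rank += 1
            pivot = row & -row
            rows = [r ^ row if r & pivot else r for r in rows]
    return rank

def beta1(nodes, edge_list):
    """First Betti number over F2 of a graph's clique complex; in
    dimension one, only edges (d_1) and triangles (d_2) matter."""
    edges = [tuple(sorted(e)) for e in edge_list]
    nidx = {n: i for i, n in enumerate(sorted(nodes))}
    eidx = {e: i for i, e in enumerate(edges)}
    d1 = [(1 << nidx[u]) | (1 << nidx[v]) for u, v in edges]
    d2 = []
    for tri in combinations(sorted(nodes), 3):
        sides = list(combinations(tri, 2))
        if all(s in eidx for s in sides):      # all three edges present: 3-clique
            mask = 0
            for s in sides:
                mask |= 1 << eidx[s]
            d2.append(mask)
    return len(edges) - gf2_rank(d1) - gf2_rank(d2)

# Toy filtration (invented weights): a square loop forms, then is filled in.
weighted = [((0, 1), .9), ((1, 2), .8), ((2, 3), .7), ((0, 3), .6),
            ((0, 2), .5), ((1, 3), .4)]
nodes, added = {0, 1, 2, 3}, []
for edge, w in sorted(weighted, key=lambda ew: -ew[1]):
    added.append(edge)
    print(len(added), beta1(nodes, added))     # beta_1 is 1 only at step 4
```

The cycle is born when the fourth edge closes the square and dies one step later when the diagonal edge creates two triangles that fill it, mirroring the birth and death densities described above.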

c.4 Comparison with alternative loop-finding algorithms

One may ask how our method compares with other loop-finding algorithms. While such programs can be powerful, two fundamental differences exist. The first is in the definition of the cycles identified. Recall that we extract equivalence classes of cycles, so we will find only cycles that enclose a structural cavity, while loop-finding algorithms will extract all loops, including those that are boundaries of higher cliques [89]. Additionally, persistent homology detects cycles in multiple dimensions with much less computational effort than loop algorithms [90].

Additionally, one might ask how small changes in edge weights or edge ordering may affect these findings. Cohen-Steiner et al. showed that, generally, small changes in the edge ordering result in small changes in the persistence diagram [76]. This makes persistent homology relatively robust to noise and consequently a powerful tool in neuroscience [29].

Appendix D Cycles in the Average DSI Data

To understand the function non-boundary cycles may have in the structural brain network, we recover the minimal generators at the birth density for each persistent homology class found in the averaged DSI data (Fig. 4c). These cycles, for all 20 of the 2D cavities and the two 3D cavities, are shown below in Figs. 20 and 21, respectively. To summarize this information, we plot all minimal representatives with edges weighted by their participation in minimal representatives (Fig. 22). This summarization is similar to the frequency scaffold [36, 35], though here we are unable to assign one minimal representative to each persistent equivalence class, so if an edge is part of any of the minimal representatives of one equivalence class it gets an added weight of one. Cycles reach most areas of the brain, and as seen in Fig. 20, many follow the cortical to subcortical theme. The edge involved in the highest number of dimension-one minimal generators in the average DSI data links the left and right thalamus. For dimension two, we see that each edge exists within only one minimal generator.

Figure 20: Minimal representatives at the birth density of all 2D cavities found in the average DSI data, listed in order of increasing birth density. For each topological cavity, the lifespan (from ρ_birth to ρ_death), location in the brain, and a schematic are shown. For the third, seventh, and tenth appearing cavities, we could not isolate a single equivalence class.
Figure 21: Minimal representatives at the birth density of all 3D cavities found in the average DSI data, listed in order of increasing birth density. For each, the lifespan (from ρ_birth to ρ_death), location in the brain, and a schematic are shown.
Figure 22: Spatial distribution of minimal generators at the birth density of 2D (top) and 3D (bottom) persistent cavities. Edges are sized by the number of minimal generators in which they participate.

Appendix E Cycles in Individuals

Though we detected persistent cavities in the group-averaged DSI network using persistent homology, we also ask whether these cyclical patterns of connectivity and the corresponding cavities exist in multiple individuals and in multiple scans acquired from the same individual. To address this question, we asked whether a similar geometric loop and a similar topological cavity are present in each scan.

e.1 Considerations in per scan cycle validation

Persistent homology is a powerful tool with which to understand the mesoscale homological features of a weighted network. Determining all minimal generators for each of the long-lived topological cavities gives a finer resolution view of such features, which can have biological implications, as is the case with our DSI data. Isolating all minimal generators for each homology class additionally gives a geometric interpretation to these cavities. Each cavity can then be viewed from a biological, a topological, and, to a lesser extent, a geometric perspective.

This presents a challenge when looking for the "same" persistent homology classes in another clique complex. From the neuroscience perspective, two minimal cycles may be similar if they include the same brain regions, or if the group of regions forming one cycle performs the same function as the group forming the other. Geometrically, we would perhaps require two cycles to have the same rigid shape to call them similar. Finally, through the lens of topology, we might call two minimal cycles in two different complexes similar if we can find a map between the complexes which takes one cycle to the other. This is quite complicated, so perhaps we could instead ask whether the minimal cycle of a homology class in the first clique complex exists in the second as a cycle in a non-trivial homology class, though not necessarily as the minimal generator. This is the least stringent notion and is an area of active research [91, 92]. Yet persistent homology is fundamentally a tool for understanding topological features, so it is important that we weigh the topological considerations and ideas of similarity heavily when we compare cycles found in the average data versus individual scans.

With these three perspectives in mind, we present the set of rules used in this paper to define whether a persistent homology class found in an individual scan is the "same" as a persistent homology class in the average network. Though somewhat convoluted, these requirements for similarity, we argue, adequately capture some flexibility of topological similarity while being conservative enough to generally preserve the biological function of the cycle as well.

We consider each persistent homology class in turn. For a given persistent homology class found in the average DSI connectome, we consider the set of minimal generators of the homology class at its birth density; each minimal generator has an associated set of nodes, namely the nodes contained within that representative. We require both that a loop formed by connections between the nodes of at least one minimal generator exists and that a similar topological cavity exists.

  1. Nodes connected in a loop. We first take the subgraph induced on the nodes of the minimal generators and ask if there is precisely one non-trivial homology class at any edge density. We then show the connection pattern at the edge density at which this class first appears; for example, in Fig. 5d we find a non-trivial cycle equivalence class in every scan, though this is not seen in panel (h). This first test allows us to ask if these nodes ever form a non-trivial cycle throughout the filtration, which is possibly of interest from a geometric and neuroscience perspective. We also use this first test as a filter to see in which scans these nodes could surround a topological cavity. If we find a non-trivial cycle formed by any of the minimal generator node sets, the scan passes to the next stage.

  2. Similar topological cavity. We then ask if a similar topological cavity exists. The algorithm from [74] returns the birth density (and thus the birth edge) of each persistent homology class. In order of increasing birth density, we ask if any of the nodes of the minimal generators are in the birth edge. If so, we call this a similar cavity in an individual scan if any of the following hold:

    1. Consider the minimal generators of this homology class in the individual scan at its birth density. If any of these are the same as one of the minimal generators from the average network, or are in the same equivalence class, then we call this a similar topological cavity and we are done. This is the most straightforward criterion and was most frequently observed within the unnormalized data. See Fig. 23a for an example.

    2. If there is some cycle within this non-trivial homology class at the birth density formed by at least all but one node of some minimal generator from the average network, along with no more than two additional nodes, and the original nodes appear in their original order along the cycle, we call this similar. See Fig. 23b for an example.

    3. If either (a) or (b) holds at any edge density between the birth and death of the class, we call this a similar topological cavity. See Fig. 23c for an example where the cycle is validated after its birth. At birth, a minimal cycle contains seven nodes, four of which are the thalamus and caudate nucleus from both hemispheres. Following the minimal cycles throughout the lifetime of this persistent cavity, we find that at some edge density before its death, a minimal representative consists exclusively of the thalamus and caudate nucleus regions from both hemispheres.

    Figure 23: Determining similar topological features in individual scans. (a) Example of an individual scan in the unnormalized dataset in which one of the minimal generators of the purple cycle from the average DSI data (left) appears as the minimal generator of a persistent homology class in the scan (right). (b) Cycles in individual scans may be similar in shape to those seen in the average DSI network. Example of a cycle we call similar (right) to the green minimal cycle in the average network (left). The insula and rostral middle frontal regions in the cycle on the right are needed here for the minimal cycle to exist, but a slightly larger loop contains all of the original nodes in the same order. Finally, this cycle still connects subcortical to cortical regions. This example was found in the unnormalized dataset. (c) Example of a minimal generator that begins as a large loop but collapses onto our original cyclic connection pattern at some edge density between birth and death. This example is from an individual scan in the unnormalized dataset.

    The first test covers the possibility of the same biological and geometric feature occurring in the individual scan. The second is perhaps the most important, however, because it allows for matching the topological cavity itself. It is important to remember that the topological cavities are the features of interest, not the precise cycles themselves, though the two are clearly intertwined. With the focus on the topological holes, the rationale for the three subrules 2a, 2b, and 2c is clearer. Though labor intensive, this lets us keep the topological perspective when determining cycle similarity. Moreover, the rationale for focusing on cavities and not specific connections is similar to the reason that large-scale organization such as communities [22], cores [3], and rich-club organization [9] is studied with increasing intensity. Composed of a plurality of interacting brain regions, these types of structures, and not the individual brain regions or connections, form computational units that theoretically act to help segregate and integrate information flow across the brain.

    One clear drawback of this method is the possibility of false negatives. For example, a persistent homology class may have been born which is similar to the cycle in the average data, yet the birth edge did not include any of the cycle nodes, and thus we would not detect it following the above procedure. This is a first attempt to identify similar topological cavities across subjects, and we expect more robust algorithms to be a topic of future research.

e.2 Confirming topological cavities in contralateral hemisphere

In the main text we show validation of the four highlighted cycles in individual scans. Following the procedure above, we next ask if these cycles are seen in the contralateral hemisphere to assess the symmetry of these features. Figure 24 shows these features are seen in the contralateral hemisphere, though with less frequency than in the original.

Figure 24: Validation of similar topological cavities in additional data. For each of the four minimal generators highlighted in the main text, bars indicate cavity lifetime for all collected data. Dotted lines indicate the average birth or death edge density.

e.3 Cavities in the normalized dataset

When studying the network formed from DSI data, it is important to consider any potential bias created by the different sizes of the 83 brain regions. To account for this potential bias, we normalized the original network of streamline counts by the geometric mean of the end point region sizes and checked to see which cycles were still present [3]. More precisely, the normalized edge weight between nodes i and j is w_ij/√(s_i s_j), where w_ij is the streamline count between the regions and s_i, s_j are the region sizes [5].
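The normalization is a one-line transform; the sketch below uses invented streamline counts and region sizes, not the actual parcellation values.

```python
import math

def normalize(streamlines, sizes):
    """Divide each streamline count by the geometric mean of the two
    end-point region sizes (a sketch of the normalization in the text)."""
    return {(i, j): w / math.sqrt(sizes[i] * sizes[j])
            for (i, j), w in streamlines.items()}

# Toy inputs: counts on two edges and the sizes of three regions.
counts = {(0, 1): 120.0, (1, 2): 30.0}
sizes = {0: 400, 1: 900, 2: 100}
print(normalize(counts, sizes))  # {(0, 1): 0.2, (1, 2): 0.1}
```

Because only relative edge weights matter for the filtration, this rescaling can reorder edges (and hence change which cavities appear) without altering the binary topology at full density.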

After this normalization, we asked whether the cycles found in the streamline count data are present in the normalized networks. Figure 24 (DSI Norm, DSI Norm cont) shows that the cycles are found to a similar extent across scans in both the original and contralateral hemispheres.

e.4 Locating all cavities from the group-averaged DSI in the minimally wired networks

Noting that many persistent cycles appear likely to have been sampled from the minimally wired distribution of persistent cycles, we asked whether the 20 cycles observed in the average data can be detected in the null model. Figures 25 and 26 show the lifespan of each of these persistent cycles within the individual scans (black) and the minimally wired null model (gray). Each vertical bar represents a persistent cavity within a scan, and scans in which the cavity was not validated are removed. Average birth and death densities are indicated with horizontal dashed lines. Surprisingly, very few of the persistent homology classes of the DSI data have counterparts in the minimally wired null model. Of those that do, the average birth and death times are often quite different, underscoring the importance of the filtration in this method.

Figure 25: Lifetimes of all 20 persistent 2D cavities in the individuals (black bars) and minimally wired models (gray bars). Dashed lines indicate the average birth and death densities of each class.
Figure 26: Lifetimes of both persistent 3D cavities in the individuals (black bars) and minimally wired models (gray bars). Dashed lines indicate the average birth and death densities of each class.
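For reference, the birth and death values reported in these figures are edge densities: edges enter the filtration in order of decreasing weight, and an edge's filtration value is the fraction of possible edges present once it is added. A minimal sketch of this conversion (function and variable names are illustrative, not from the analysis pipeline):

```python
import numpy as np

def edge_filtration_densities(W):
    """Rank the edges of a weighted network by decreasing weight and
    assign each edge the edge density at which it enters the filtration:
    density = (rank of edge) / (total number of possible edges)."""
    n = W.shape[0]
    rows, cols = np.triu_indices(n, k=1)  # upper triangle = distinct edges
    weights = W[rows, cols]
    m = len(weights)                      # n * (n - 1) / 2 possible edges
    order = np.argsort(-weights)          # heaviest (earliest) edges first
    densities = np.empty(m)
    densities[order] = np.arange(1, m + 1) / m
    return list(zip(rows, cols)), densities

# Toy 4-node weighted network (symmetric, zero diagonal)
W = np.array([[0., 9., 1., 4.],
              [9., 0., 7., 0.],
              [1., 7., 0., 5.],
              [4., 0., 5., 0.]])
edges, dens = edge_filtration_densities(W)
```

Because the parameter is a density rather than a raw weight, birth and death values are directly comparable across scans and across empirical and null networks.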

e.5 Cortical cavities

Densely connected subcortical nodes may prevent the longevity of nonzero homology classes by forming cross-cycle edges or cliques that tessellate the cycle completely. We therefore asked which cavities could be found when these subcortical nodes are removed, forming the cortical-only network as described in the main text. Fig. 27 shows a 1-cycle on nine nodes recovered from this network, displayed within the brain and as a schematic (panel (a)). The persistence diagram for 2D cavities in Fig. 27b shows the four minimal cycles marked in maroon. Importantly, because of the connection patterns between nodes at the density of cycle formation, we will refer to any of these four cycles as the minimal cycle. Two of these cycles are equivalent loops that involve the superior frontal (RH) and the caudal middle frontal regions. The other two are equivalent to each other but not to the first two loops, and involve the superior frontal (LH) and posterior cingulate (LH). The edge whose addition gives birth to the cycle connects the lateral orbitofrontal to the superior temporal. The cycle formed by the superior frontal (RH, LH), caudal middle frontal, precentral, and posterior cingulate (LH) is itself a minimal cycle surrounding a separate topological cavity. This information, along with the connection patterns at the birth density, means we cannot claim that either pair constitutes the two minimal generators; instead it is either one pair or the other. The smaller, five-node cycle was already in existence, so either of these possible paths (but not both simultaneously) completes the larger maroon cycle.

Although this exact pattern of connectivity is not seen in all individuals, the large 2-dimensional cavity it encloses is present in every scan (Fig. 27c) in the original hemisphere, and often in the opposite hemisphere (Fig. 27d), suggesting its importance in neural structure.

Figure 27: Recovered 1-cycle on nine nodes. (a) Minimal representatives shown in the brain (left) and as a schematic (right). (b) Persistence diagram, with the topological cavity in (a) circled in maroon. (c, d) Patterns of connectivity between maroon loop nodes found for the original (c) and contralateral (d) hemispheres in each scan. If the exact pattern is not found, the pattern at the edge density when all cycle edges first exist is shown. For each scan, the connection pattern of nodes in the minimal generator with the fewest cross-edges is shown.

The number and pattern of persistent cycles in Fig. 27b matches that of the minimally wired null model much more closely than does that of the full DSI network. This suggests, first, that the cortical wiring of the brain is globally arranged as if it were wired minimally. Yet the difference between the cortical-only and full DSI persistence diagrams also implies that the subcortical regions drive the reduction of homology. Knowing that the subcortical regions are highly connected and participate in many high-dimensional cliques (Fig. 2), we conclude that the subcortical regions act as cone points in the brain network (Fig. 28, left). Finally, this adds detail to our understanding of the global wiring of the brain, in which many cortical loops are coned by sets of subcortical regions (Fig. 28, right).

Figure 28: Subcortical regions as cone points in the brain network. A loop (maroon, left) may be the base of a cone, where the cone point (gray) triangulates the loop interior thus making the loop a boundary loop. In the brain, the greater number and longevity of topological cavities seen after removing subcortical nodes indicates these subcortical regions (gray, right) may act as cone points for many cortical cycles.
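The coning effect sketched in Fig. 28 can be made concrete with a small computation. The sketch below (plain numpy over rational coefficients; not the software pipeline used in this paper) builds boundary matrices for a 4-node loop and for the same loop coned over an extra vertex, and verifies that the cone's triangles fill in the 1-dimensional cavity:

```python
import numpy as np

def betti_1(n_vertices, edges, triangles):
    """First Betti number: dim ker(d1) - rank(d2), over the rationals."""
    eidx = {e: i for i, e in enumerate(edges)}
    d1 = np.zeros((n_vertices, len(edges)))      # boundary: edges -> vertices
    for (u, v), j in eidx.items():
        d1[u, j], d1[v, j] = -1.0, 1.0
    d2 = np.zeros((len(edges), len(triangles)))  # boundary: triangles -> edges
    for j, (a, b, c) in enumerate(triangles):
        d2[eidx[(b, c)], j] += 1.0               # d(a,b,c) = (b,c)-(a,c)+(a,b)
        d2[eidx[(a, c)], j] -= 1.0
        d2[eidx[(a, b)], j] += 1.0
    dim_ker_d1 = len(edges) - np.linalg.matrix_rank(d1)
    rank_d2 = np.linalg.matrix_rank(d2) if triangles else 0
    return dim_ker_d1 - rank_d2

# A 4-node loop encloses one 1-dimensional cavity (betti_1 = 1)
loop = [(0, 1), (1, 2), (2, 3), (0, 3)]
b_loop = betti_1(4, loop, [])

# Cone the loop over vertex 4 (a "subcortical" hub): the four triangles
# tessellate the loop interior and the cavity disappears (betti_1 = 0)
cone_edges = loop + [(0, 4), (1, 4), (2, 4), (3, 4)]
cone_tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (0, 3, 4)]
b_cone = betti_1(5, cone_edges, cone_tris)
```

The cone vertex turns the loop into a boundary: each triangle boundary cancels the spoke edges shared with its neighbors, so the sum of the four triangles has the original loop as its boundary, exactly the mechanism proposed for the subcortical hubs.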

e.6 Persistent homology of an alternative parcellation

Throughout this investigation we have exclusively used the 83-node Lausanne parcellation [26] for consistency. Yet it is important to consider how the choice of parcellation might impact the observed architectures. Currently, there is no single agreed-upon parcellation scheme for either structural or functional imaging data. Moreover, it is not yet agreed whether separate parcellations should be used for the two sorts of imaging data. In this work, we have used a parcellation scheme derived from the subject-level reconstruction of the gray matter surface, based on the state-of-the-art application of FreeSurfer software. Thus, our parcellation is one that should maximize sensitivity to individual differences in surface morphology, and thereby maintain optimal identification of boundaries between morphological features. Other parcellation techniques are available, such as the registration of an averaged template to the subject volume. However, these techniques have the potential disadvantage of enforcing a group-level solution for parcels onto a subject-specific image. Nevertheless, for completeness, we also examined such an alternative parcellation scheme by applying the 90-region AAL atlas to the subject-level images. As shown in Fig. 29, we observed more persistent features than in the original 83-node parcellation, in both dimensions one and two, consistent with the fact that in networks with geometric constraints, we expect the number of features to increase with the number of nodes [21]. Additionally, we note that this persistent homology signature is more geometric [93] than that of the original network (Fig. 4). Importantly, exact comparisons between the cycles in one parcellation and those in the other are intractable because the two parcellations contain neither the same number of regions nor regions at the same anatomical locations.

Figure 29: Persistence diagram of the structural connectome with the AAL 90 parcellation.