Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity

Danielle S. Bassett, Ankit N. Khambhati, Scott T. Grafton
Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104
UCSB Brain Imaging Center and Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106 USA
Institute for Collaborative Biotechnologies, University of California, Santa Barbara, CA 93106 USA
July 4, 2019
Abstract

Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales, and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems, by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems, from micro- to macroscales. We present examples of how human brain imaging data is being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers, and emphasize their utility in informing diagnosis and monitoring, brain-machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights critical to the neuroengineer’s toolkit.


Could we graft new connections into the brain, to give someone back the abilities they had pre-injury chen2016neural ()? Could we decode the thoughts of someone who is caged inside their own body haynes2006decoding (); christophel2012decoding ()? Could we develop adaptive brain-computer interfaces that evolve and adapt to remain effective for a child whose brain is continuously developing putze2014adaptive (); krusienski2011critical ()? Answering these and many other seemingly over-ambitious questions is the fundamental aim of neuroengineering dilorenzo2007neuroengineering (), a relatively new domain of biomedical engineering that develops and uses computational and empirical techniques to understand and modulate the properties of neural systems. Particularly exciting frontiers of neuroengineering include neuroimaging, neural interfaces, neural prosthetics and robotics, and more general techniques for regeneration, enhancement, and refinement of neural systems johnson2013neuromodulation ().

In the era of big data, neural systems are no exception to the rule of ever-increasing petabytes streaming in to servers around the world glaser2016development (). However, in many other arenas, the amount of data being gathered has not posed an insurmountable obstacle. What is the fundamental difference that causes neuroscientific data to be so challenging? Is it a lack of a mechanistic understanding of how the brain works valiant2014what (); craver2005beyond ()? Or an inability to physically construct the hardware required to liaise with neural systems for effective interventions krusienski2011critical ()? We argue that fundamental to both of these problems is the challenge of dealing with complex relational data bullmore2009generic (). In developing a data science to meet these rising demands stevenson2011how (), we must acknowledge that these data are far from independent: instead, data from neural systems are inherently relational data bassett2016network ().

Relational data can be defined as any data that codifies relationships between elements long2010relational (). The nervous system is composed of units across many spatial scales (genes, neurons, columns, areas) that are related to one another in many different ways (anatomical connections, functional relationships, material similarities) conaco2012functionalization (); zhang2016stretch () (Fig. 1). These relationships form intricate patterns – of synaptic connections, gene co-expression, connectome fingerprints – that may differ across species van2016comparative (); shih2015connectomics (), or across cohorts within a single species (e.g., in health versus disease) bassett2009human (); fornito2015connectomics (). From these patterns stem the very complicated phenomena of development, behavior, and cognition medaglia2015cognitive (); misic2016from ().

Biological patterns like these are particularly difficult to study for several reasons. First, the governing principles of pattern formation are often difficult to infer vertes2012simple (); vertes2014generative (); betzel2016generative (), and thus mechanistic insights are difficult to come by. Second, it is difficult to simplify patterns using coarse-graining or other dimensionality reduction approaches while still maintaining the richness of the neurophysiologically relevant information craver2005beyond (). Just as retaining necessary information while simplifying the patterns is difficult, so is studying each element in the pattern: with thousands and sometimes millions of elements, the set of interactions between them – particularly if they evolve in time – quickly becomes enormous and complicated. Indeed, as many fields have now come to realize, the intricacies of relational data call for a new conceptual and mathematical framework pilosof2015multilayer (); proulx2005network ().

Figure 1: Relational data in biological systems. Repeating genotypic and phenotypic patterns emerge frequently in the study of biological systems. These biological patterns are expressed across multiple scales of granularity. Illustrated here are three different scales of biological elements (behavioral, structural, genetic) in different animal species, with lines representing conceptual relationships between elements. At the macro-scale, we observe behavioral similarities across different species, such as the ability to fly in birds and fruit flies. However, a closer lens on the neurological substrate of this behavior may tell a different story: that meso-scale structural brain architecture differs significantly between birds and fruit flies, and is more similar between insects (e.g. fruit flies and ants) and between mammals (e.g. mice and cats). Despite differences in structural brain architecture, we might find that animals of different species share commonalities in genetic code that manifest similarly in physical attributes. While differences in each element yield unique qualities to each individual animal species, examining relational data can provide a more comprehensive view on the functional role of each element ecologically.

The Peculiar Appropriateness of Network Science

Network science is an emerging interdisciplinary field that combines theories from statistical mechanics, computational techniques from computer science, statistics, applied mathematics, and visualization approaches to probe, perturb, and predict the behavior of complex systems in technology, biology, and sociology newman2010networks (). While historically developed to understand social interactions or friendship webs similar to those supported (or elicited) by Facebook or Twitter, network science is a peculiarly appropriate framework in which to tackle the challenges of neural data sciences to better engineer artificial and natural neural systems.

In particular, rather than reducing complex relational data to a list of independent parts, network science provides tools to explicitly characterize the pattern of interactions between neural elements rubinov2010complex (). In addition to these descriptive tools, it provides benchmark graphical models for statistical comparison and inference pavlovic2014stochastic (); simpson2011exponential (); lindquist2014evaluating (), mathematical models to quantify and predict the flow of information misic2015cooperative () or communication misic2014communication () through neural circuits, and predictive tools to forecast how networks might change in response to injury patel2014single () or therapeutic interventions gratton2013effect ().

How does one go about translating neural systems into the language of network science kaiser2011tutorial (); fornito2013graph (); vandiessen2015opportunities (); reijneveld2007application (); bullmore2009complex (); bassett2009human (); bassett2006small (); bullmore2011brain ()? The first critical step is to determine which constituent elements are the fundamental unit of interest that is measurable in the particular experiment under consideration butts2009revisiting (). These elements – which might be single neurons, neuronal ensembles, genes, or large-scale brain areas – will be treated as nodes or vertices in the network zalesky2010whole (). Then, one must define the connections, interactions, or relationships of interest between network nodes. These links – which might be white matter tracts between large-scale brain areas, chemical or electrical synapses between neurons, or co-expression patterns among genes – will be treated as network edges. Once nodes and edges have been defined, the network itself – the pattern of edges linking nodes – can be studied from the point of view of a graph in mathematics bollobas1985random (); bollobas1979graph ().

The Mathematics of Network Science in Neural Systems

In the field of mathematics, a graph is composed of a node set and an edge set bollobas1979graph (); bollobas1985random (). We store this information in an adjacency matrix A, whose elements A_ij indicate the strength of the edge between node i and node j. The representation of data in a graph enables the investigator to characterize the patterns of connectivity locally, surrounding a single node, or globally, taking into account all edges. In addition to local and global structure, tools are available to probe so-called mesoscale structure in the graph, which can be defined as structure that is present at intermediate length scales in the system (Fig. 2).
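As a minimal sketch of this representation, a weighted adjacency matrix can be stored as a nested list, with local and global summaries read directly off it. The graph below is an invented toy example, not real brain data:

```python
# A toy brain graph stored as an adjacency matrix A, where A[i][j]
# holds the edge weight between nodes i and j (0 means no edge).
A = [
    [0.0, 0.8, 0.5, 0.0],
    [0.8, 0.0, 0.3, 0.0],
    [0.5, 0.3, 0.0, 0.9],
    [0.0, 0.0, 0.9, 0.0],
]

def node_strength(A, i):
    """Local view: summed edge weights incident to node i."""
    return sum(A[i])

def total_edges(A):
    """Global view: number of edges in an undirected graph."""
    n = len(A)
    return sum(1 for i in range(n) for j in range(i + 1, n) if A[i][j] > 0)

print(node_strength(A, 2))  # 1.7
print(total_edges(A))       # 4
```

In practice, such matrices hold hundreds of regions (for imaging) or thousands of neurons (for microscale data), but the data structure is the same.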

To give the reader some simple intuitions, we briefly describe examples of local, meso-scale, and global statistics that can be computed from graphs of neural systems. First, a common local statistic that has proven particularly effective in characterizing neural systems is the clustering coefficient of a node, which can be defined as the fraction of a node’s neighbors that are also connected to one another watts1998collective (). In essence, this statistic is sensitive to the density of triangles in the graph, and is thought to play a non-trivial role in local information processing in neural systems kitzbichler2011cognitive () (although for caveats see rubinov2011emerging (); bassett2015cognitive ()). A common global statistic that has proven useful in characterizing neural systems is the characteristic path length, which is defined as the average shortest path between all possible node pairs newman2010networks (). This statistic is sensitive to long-distance connections that provide short cuts from one side of the network to another, and is thought to play a role in the swift transmission of information across the system bullmore2011brain (). Interestingly, early work demonstrated that humans displaying brain wiring patterns with shorter characteristic path length also had higher IQ than those with longer characteristic path length li2009brain (), suggesting the sensitivity of network statistics to architectures that support healthy human cognitive function. However, it is worth noting that short characteristic path lengths do not appear to be the full story bassett2006small (); bassett2016small (), and measures of segregated information processing also play an important role in brain function sporns2016modular ().
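The two statistics just described can be computed directly from an adjacency structure. The sketch below implements the Watts-Strogatz clustering coefficient and the characteristic path length for a small binary graph; the graph itself is illustrative, not neural data:

```python
from collections import deque

# Toy undirected binary graph stored as adjacency sets.
adj = {
    0: {1, 2},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {1, 2, 4},
    4: {3},
}

def clustering(adj, v):
    """Fraction of v's neighbor pairs that are themselves connected
    (the Watts-Strogatz clustering coefficient)."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def char_path_length(adj):
    """Average shortest-path length over all node pairs,
    computed by breadth-first search from each node."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs
```

For the toy graph above, node 1's neighbors {0, 2, 3} share two of three possible links, so its clustering coefficient is 2/3; the characteristic path length of the whole graph is 1.5.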

Figure 2: Multi-scale topology in brain networks. Brain networks express fundamental organizing principles across multiple spatial scales. Brain networks are modeled as a collection of nodes – representing regions of interest with presumably coherent functional responsibilities – and edges – structural connections or functional interactions between brain regions. (A) Node centrality describes the importance of individual nodes in terms of their connectivity relative to other nodes in the network. Nodes with more connections or stronger edges tend to be hubs (red), while nodes with fewer connections tend to be isolated (blue). (B) The clustering coefficient, a measure of connectivity between the neighbors of a node, is another local measure of network topology. Unlike network topologies with strong hubness qualities, as in A, networks with high clustering coefficients demonstrate a high density of triangles that is believed to facilitate local information processing. (C) Modularity is a meso-scale topological property that captures communities of nodes that are tightly connected to one another and weakly connected to nodes in other communities. Modular organization underlies a rich functional specialization within individual communities. Here, nodes of different communities are colored red, blue, or pink. (D) Networks with core-periphery structure exhibit a set of tightly-connected nodes (core; red) sparsely connected to a set of isolated nodes (periphery; blue). This organization is in stark contrast to the modular organization in C. The core-periphery architecture is characteristic of networks that integrate information from isolated regions in a central area.

In addition to local and global structure, meso-scale organization provides a window into the properties of groups of nodes. Two common mesoscale structures are modularity and core-periphery structure. A network with modular structure is one that contains groups of nodes also known as modules; the nodes in a module are more densely connected to other nodes in the same module than to nodes in other modules porter2009communities (); fortunato2010community (). This modular architecture is thought to support specialization of function, with each module performing a different role in support of neurophysiological processes from synchronization to cognition sporns2016modular (). In contrast, a network with core-periphery structure contains a set of core nodes that are densely interconnected with all other nodes in the network, and a set of periphery nodes that are sparsely interconnected with all other nodes in the network everett1999peripheries (); borgatti1999models (). This organization is thought to support the integration of information across neuronal assemblies, neural circuits, or large-scale functional modules bassett2013task (), in which the densely interconnected core is often referred to as a rich club van2011rich ().
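Modular structure is commonly quantified with the Newman modularity Q, which compares the within-community edge weight against what a degree-preserving random graph would predict. The sketch below assumes a weighted undirected adjacency matrix and a given node-to-community assignment; finding the assignment itself (community detection) is a separate algorithmic step:

```python
def modularity(A, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)
    for a weighted undirected adjacency matrix A and a node->community list."""
    n = len(A)
    k = [sum(row) for row in A]   # node strengths
    two_m = sum(k)                # twice the total edge weight
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += A[i][j] - k[i] * k[j] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge (toy example).
A = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
labels = [0, 0, 0, 1, 1, 1]
print(modularity(A, labels))  # 5/14 ~ 0.357
```

A partition that cuts through the triangles instead would yield a lower Q, which is the intuition behind modularity-maximizing community detection.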

The multiscale nature of these network tools makes them particularly useful for neural systems, which are thought to perform inherently different computations at different levels of the network hierarchy bassett2013multiscale (). For example, information is thought to be processed in local cortical areas before being passed across modules along rich-club edges (in a so-called small-world architecture bassett2006small (); bassett2016small ()), allowing integrative computations and coherent behavioral responses bassett2010efficient (); bassett2011conserved (). Understanding this multiscale architecture and its functional role in neural system dynamics is critical for developing effective interventions that capitalize on existing structural and dynamic properties rather than fighting against them.

How do we build brain networks?

Using the mathematical tools of network science to understand neural data requires one to explicitly build network models. How does one go about doing so? This topic fully warrants a review of its own: methods exist to build brain networks from spiking data muldoon2013spatially (), calcium transients and microelectrode arrays bettencourt2007functional (), mesoscale tract tracing bassett2016small (), genetic expression fulcher2016transcriptional (), and large-scale neuroimaging bullmore2011brain (), and across species from cat hilgetag2004clustered () and macaque markov2013cortical () to C. elegans bassett2010efficient (), mouse rubinov2015wiring (), rat heuvel2016topological (), drosophila kaiser2015neuroanatomy (); shih2015connectomics (), and human hagmann2008mapping () (to offer a sparse list!). Because we cannot do justice to the full richness of this question here, we focus our presentation on human brain imaging data, which has historically provided the largest source of data for testing the utility of network science in characterizing complex neural systems. Thousands of healthy subjects and patient populations have been scanned, primarily by magnetic resonance imaging, to identify both structural and functional properties of the nervous system. Given the dominant role of imaging, here we familiarize the reader with the strategies used to transform brain scans into data structures that are amenable to network analysis. However, we emphasize that the network tools we describe are fully translatable to other neural systems, and are commonly applied in EEG deng2015brain (); toppi2015graph (); zhang2013prediction (), MEG bassett2009cognitive (), ECoG khambhati2015dynamic (); khambhati2016virtual (), and fNIRS niu2012revealing (); zhang2016mapping () as well (for a more thorough review of these application areas, see bassett2016network ()).

The most common measurements of human brain connectivity, whether they are functional or structural networks, rely on scans obtained by magnetic resonance imaging (MRI). There are three basic types of brain scans that are typically used in network construction. The first is an “anatomic” scan. This is a T1 weighted, high resolution (1mm isotropic) sequence that can distinguish gray matter from underlying white matter. Many software tools are available for segmenting these two types of tissues and for partitioning the gray matter into a set of local regions that form the network nodes, as shown in Fig. 3. There are many atlases available for partitioning the gray matter, varying from 50-1000 separate regions. The second type of scan is a “functional” MRI. This is a series of T2* weighted scans acquired at sampling rates as fast as 2.21 Hz (although more typically at 0.5 Hz) and at a lower spatial resolution (3mm non-isotropic). This tissue contrast is sensitive to changes in blood oxygen level dependent (BOLD) signals, which vary as a function of cortical activity (whether neuronal or synaptic). These scans can be acquired with the subject at rest, the so-called “resting state” MRI or while performing a particular task, the so-called “task based” fMRI RN13458 (); RN15502 (). The time series of brain activity, averaged across all voxels in each local region can then be extracted. To create an adjacency matrix reflecting functional connectivity, pairs of time series are related by correlation, partial correlation, wavelet filtered correlation or coherence within a particular frequency band.
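The last step described above – relating pairs of regional time series to form a functional adjacency matrix – can be sketched with plain Pearson correlation; partial correlation, wavelet-filtered correlation, or coherence would simply replace the `pearson` function below:

```python
import math

def pearson(x, y):
    """Pearson correlation between two regional BOLD time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def functional_connectivity(ts):
    """Region-by-region adjacency matrix from a list of ROI time series,
    with unit self-connectivity on the diagonal."""
    n = len(ts)
    return [[1.0 if i == j else pearson(ts[i], ts[j])
             for j in range(n)] for i in range(n)]
```

Each row of `ts` would be the mean BOLD signal across the voxels of one gray matter region; the resulting matrix is the functional analogue of the structural matrices described next.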

The third method of imaging is based on diffusion imaging. This involves the acquisition of a set of scans, each of which is sensitive to the magnitude of water diffusion in a particular direction in three dimensions. The set of oriented diffusion scans is then combined to estimate voxel-wise distributions of water diffusion RN9963 (). The brain is then seeded uniformly at subvoxel resolution, and a probable path of diffusion through the full volume is calculated, resulting in a virtual tract of diffusion referred to as a “streamline”. These streamlines are virtual estimates of possible water diffusion that can correspond to true white matter fascicles or tracts. The white matter fascicles and tracts are thought to be the primary means for information sharing between distinct gray matter regions, analogous to wiring that connects distinct computer modules hagmann2008mapping (); bassett2011conserved (). To create an adjacency matrix reflecting this structural connectivity, the strength of connectivity between pairs of regions can be estimated by the number of streamlines, the density of streamlines, or the number of streamlines normalized by the length they traverse. The set of all real connections in the brain is referred to as the human connectome. While it is not yet possible to characterize this full set of connections, it can be approximated by the streamlines reconstructed with diffusion imaging. This lower dimensional connectome can then be characterized with the tools of network science. Given that it is an approximation, it is valuable to test the robustness of any particular network property across a range of atlases or spatial resolutions.
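The streamline-based edge weights described above reduce to a simple aggregation once each streamline has been summarized by its two endpoint regions and its length; the normalization options below mirror those in the text (count versus length-normalized count), and the input format is an illustrative assumption:

```python
from collections import defaultdict

def structural_connectivity(streamlines, n_regions, normalize_by_length=True):
    """Build a structural adjacency matrix from tractography output.
    Each streamline is a tuple (region_a, region_b, length_mm); the edge
    weight is the streamline count, optionally divided by the mean
    streamline length between that pair of regions."""
    counts = defaultdict(int)
    lengths = defaultdict(float)
    for a, b, length in streamlines:
        key = (min(a, b), max(a, b))   # undirected edge
        counts[key] += 1
        lengths[key] += length
    A = [[0.0] * n_regions for _ in range(n_regions)]
    for (a, b), c in counts.items():
        mean_len = lengths[(a, b)] / c
        w = c / mean_len if normalize_by_length else float(c)
        A[a][b] = A[b][a] = w
    return A
```

Length normalization is one way to reduce the bias toward long tracts, which accumulate more seed points; whether to apply it is one of the robustness choices the text recommends testing.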

Figure 3: Constructing Connectomes from MRI Data. To generate human connectomes with magnetic resonance imaging, an anatomic scan delineating gray matter is partitioned into a set of nodes. This is combined with either diffusion scans of white matter structural connections or time series of brain activity measured by functional MRI, resulting in a weighted connectivity matrix.

What do brain networks offer neuroengineering?

After building networks from imaging data, one can then use these networks to address pressing questions in neuroengineering. While we cannot exhaustively cover all possible uses of these tools currently in the literature, here we highlight their utility in neural mapping and connectivity estimation, diagnosis and monitoring, and rehabilitation and treatment.

Diagnosis and monitoring

Accurately diagnosing disorders of the human connectome and monitoring their progression are particularly critical applications of network-based tools to neural systems. Diseases thought to be accompanied by connectome abnormalities or alterations include schizophrenia bassett2008hierarchical (); bassett2009cognitive (); lynall2010functional (), autism nomi2015developmental (); menon2011large (), epilepsy burns2014network (); khambhati2016virtual (), and Alzheimer’s disease he2009neuronal (); poza2013characterization (); tijms2013alzheimers (), among others stam2014modern (); bassett2009human (). The pattern of alterations in a given condition can be described in the form of a graph, as can the patterns that are similar or different between a pair of conditions. In some cases, these network changes occur early in a disease, offering potential as diagnostic biomarkers zhu2016changes (). Bolstering this possibility, several studies have demonstrated that by incorporating network statistics as features in machine learning algorithms, it is possible to classify groups of individuals with and without a particular condition, from aging petti2013aged () to major depression sacchet2014elucidating (). While diagnosis and classification are binary decisions, one can also continuously monitor brain networks within a single individual toppi2014investigating () either during drastic changes such as those accompanying disease progression, or during minor changes in mental state such as those induced by driving fatigue zhao2016reorganization ().
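The classification approach mentioned above – network statistics as features in a machine learning algorithm – can be caricatured with a toy feature extractor and a nearest-centroid classifier. Real studies use far richer feature sets and cross-validated classifiers (e.g., support vector machines); everything below is an illustrative sketch:

```python
def network_features(A):
    """Summary statistics of a weighted adjacency matrix, used here as a
    minimal stand-in for the richer feature sets in published studies."""
    n = len(A)
    strengths = [sum(row) for row in A]
    mean_strength = sum(strengths) / n
    density = sum(1 for i in range(n) for j in range(i + 1, n)
                  if A[i][j] > 0) / (n * (n - 1) / 2)
    return [mean_strength, density]

def nearest_centroid(train, labels, x):
    """Classify feature vector x by its closest class centroid."""
    cents = {}
    for lab in set(labels):
        rows = [f for f, l in zip(train, labels) if l == lab]
        cents[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(cents, key=lambda lab: d2(cents[lab], x))

# Hypothetical feature vectors for two groups ("ctrl" vs "pat").
train = [[1.0, 0.2], [1.1, 0.25], [3.0, 0.8], [3.2, 0.9]]
labels = ["ctrl", "ctrl", "pat", "pat"]
print(nearest_centroid(train, labels, [3.1, 0.85]))  # pat
```

The pipeline shape is the important part: per-subject connectomes are reduced to graph statistics, which then feed a supervised classifier trained on labeled cohorts.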

Rehabilitation and treatment

The sensitivity of network measures to brain state offers the generalizable potential for graph statistics to be used as indicators of the efficacy of rehabilitation and treatment. Initial studies support this potential efficacy by demonstrating appreciable changes in network organization induced by memory rehabilitation treatment (a broad intervention useful across multiple clinical conditions toppi2014time ()), seizure therapy (an intervention for severe depression deng2015brain ()), and motor imagery (a frequent intervention for stroke ge2015motor ()). Common techniques used to effect these interventions include neurofeedback, in which humans learn to control the activity or connectivity in certain parts of their brain to enhance mental function stoeckel2014optimizing (); banca2015visual (). Notably, graph statistics of functional network architecture have proven sensitive to cognitive workload during these interventions, offering task-independent markers for monitoring and matching participants’ ability and task difficulty during neurofeedback training fels2015predicting (). Neurofeedback approaches often utilize exquisitely calibrated brain-computer interfaces andersen2014toward (), systems that can also double as neural prosthetics. When applying these techniques to clinical populations, a pressing question arises due to limited resources: Who will benefit most? Can we choose the intervention that best fits a given individual? Interestingly, emerging data suggest that organizational characteristics of a person’s functional network architecture as measured by EEG can be used to predict who will be receptive to motor imagery treatment zhang2015efficient (); zhang2013prediction (). These initial studies underscore the potential of network representations of neural data to provide sensitive and specific markers of the receptiveness of neural circuits to induced network structure change sporns2013human ().

Neural mapping and neural connectivity estimation

While neuroengineering is often thought of as a field of clinical translation, basic science plays a fundamental role in giving the investigator the knowledge and understanding necessary to intervene in a way that benefits the system. A particularly exciting current frontier in neuroengineering lies in mapping neural systems using a variety of imaging techniques christopoulos2012network (), and in estimating the connectivity between neural elements using sophisticated statistical algorithms kafashan2015optimal (). In these contexts, network science offers explicit tools to characterize the maps, and to use empirical estimates of connectivity to inform the design of new networks. Indeed, the concept of network design is a relatively new one in biological systems. When applied to the neural domain, network design includes the building of computational models of neural dynamics, as well as physical models kanagasabapathi2012functional () via micropatterning, microfabricated multielectrode arrays, and low-density neuronal culture techniques chang2006neuronal (). Together, these algorithmic and empirical approaches provide exciting avenues to map the neural connectome across scales and species bassett2016network (), and to better understand the dynamics that produce cognition and behavior medaglia2015cognitive ().

Constructing and Using Brain Networks for Neuroengineering

This brief survey of the literature demonstrates that brain networks offer exciting capabilities for addressing pressing questions in neuroengineering. In this section, we describe important considerations in constructing and using brain networks in the context of human imaging. While we focus on human neuroimaging, these (or similar) considerations are likely to be important in the collection and analysis of other types of data (multi-unit recording, optical imaging) as they become available for network analysis.

Image Acquisition

The rapid growth of imaging-based network science has been accompanied by a parallel recognition that functional and structural MRI data can be corrupted by a broad array of technical, physiologic, and anatomic factors that, if not handled properly, lead to major errors in network modeling. The good news is that MRI is a mature technology, and it is rare for data to be corrupted by artifacts secondary to unreliable hardware or poor pulse sequence designs. The bad news is that brain imaging is commonly corrupted by more subtle physical-anatomic properties that can be difficult to surmount with conventional hardware and pulse sequences RN15485 (). Both diffusion weighted imaging (used for structural connectomics) and T2* weighted imaging (used to detect changes in BOLD signals in functional connectomics) are highly sensitive to susceptibility artifacts. The most troublesome cause is an air-tissue interface, which leads to very localized non-linear image distortion and signal irregularity. For example, the medial and inferior temporal and orbital frontal cortex of the brain are adjacent to the air-filled petrous and ethmoid sinuses. The resulting artifacts lead to missing streamlines projecting into these areas or unreliable estimates of functional activity within the distorted gray matter regions. These distortions are difficult to correct post hoc. The degree of signal dropout and missing data varies enormously between individual subjects. Thus, network analyses that are aggregated over a population need to carefully evaluate the influence of missing data on the underlying connectivity matrices. A second challenge in brain MRI, particularly for diffusion imaging, is the effect of eddy currents. Eddy currents are loops of electrical current induced within the brain tissue by the changing magnetic field required to generate images. They cause spatial distortion within each image slice and can be particularly impactful in diffusion imaging.
A third challenge, also leading to geometric distortion, arises from magnetic field inhomogeneity and phase encoding errors. There are numerous software tools available for correcting both types of distortion post hoc.

Pitfalls unique to functional imaging networks

Ideally, all of the functional connections, whether for a resting state RN15491 (); RN15490 (); RN15489 () or task-based network would be determined by patterned brain activity reflecting inherent cognitive processes. However, there are numerous other sources of noise that can contribute to spurious increases of functional connectivity RN15470 (); RN15457 (); RN15464 (). One of the most important influences on functional connectivity is variations in amplitude or rate in the cardiac and respiration cycles. Respiration rate (0.2 Hz) and depth of breathing can clearly influence local BOLD signal RN13210 (). Cardiac rate (1 Hz) also influences BOLD signal RN15492 (). These effects are regionally complex, with respiration effects more apparent near the ventricles, and cardiac effects more apparent near the largest arteries. Higher order effects of the cardiac and respiratory cycle may also be present in the tissue beyond a simple linear projection of the pulse and bellows signals. For example, the chest wall expansion will influence global magnetic field inhomogeneity, while CSF pulsatility (via the cardiac pulse) may be periodic with the chest expansion. Thus, the influence of both on functional connectivity analyses will be dependent on an individual subject’s physiologic state and unique anatomy.

There are many retrospective strategies for removing the effects of cardiac and respiratory cycle variation on the BOLD time series from each voxel. If heart rate and respiration have been independently measured, then software tools such as “RETROICOR” can be used RN15493 (). It uses a Fourier expansion of the non-brain physiologic signals with 8-20 regressors. While this method works well for both linear and higher order artifacts, there is a trade-off in that increasing the number of regressors during RETROICOR correction will remove a greater amount of relevant brain signal RN15461 ().
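The Fourier expansion at the heart of RETROICOR-style correction can be sketched as follows, assuming the cardiac or respiratory phase at each imaging volume has already been estimated from the independent physiologic recordings; each harmonic contributes a sine and a cosine column, so cardiac and respiratory terms at a few harmonics together account for the 8-20 regressors mentioned above:

```python
import math

def retroicor_regressors(phases, order=2):
    """Fourier-expansion regressors of a physiologic phase signal.
    `phases` gives the cardiac (or respiratory) phase in radians at each
    imaging volume; harmonic m contributes sin(m*phase) and cos(m*phase),
    so `order` harmonics yield 2*order regressor columns per signal."""
    return [[f(m * p)
             for m in range(1, order + 1)
             for f in (math.sin, math.cos)]
            for p in phases]
```

The resulting design matrix would be regressed out of each voxel's time series; raising `order` captures higher-order artifacts at the cost of removing more genuine brain signal, which is the trade-off noted above.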

For many experimental situations, independent measures of heart rate and respiration are not available, and methods besides RETROICOR are needed. Here we mention techniques based on independent component analysis (ICA) of the rsMRI data. ICA decomposes the functional time series for all voxels into patterns of activity consisting of a set of spatial maps, each of which has a corresponding time series; when added linearly, these sum to the original voxel-wise time series. A set of ICA components will represent both brain activity and “noise” components. Ideally, these sources of brain and non-brain activity are independent. If so, then the noise components can be removed and a new, noise-free time series can be reconstructed. The challenge, then, is to find an unbiased, efficient method for identifying those components reflecting noise. Manual classification of ICA components is very difficult and requires expert knowledge. One semi-automated ICA-based X-noiseifier called “FIX” RN15421 () uses a machine learning approach to aid with this process. For each ICA component, a large number of distinct spatial and temporal features are generated, such as the proportion of temporal fluctuations observed at high frequencies. These features are fed into a multi-level classifier. After training by hand-classification across a sufficient number of datasets, the classifier can then be used with new datasets.

An alternative approach is to estimate pulse and respiratory variability for a subject directly from an independent set of fMRI data, utilizing temporal independent component analysis RN15460 (). The method assumes that non-brain physiologic noise is spatially stationary. For example, noise associated with the carotid arteries will be in the same location across different rsMRI scans from the same subject. Once the underlying and independently derived spatial weighting matrix is identified by ICA in one dataset, it can be applied to a separate rsMRI time series from the same subject to produce the temporal pattern of noise. The resulting cardiac and respiratory estimators can then be used with RETROICOR or similar correction methods. While this method works well, it requires an independent sample of functional data.

It has traditionally been assumed that global BOLD activity, measured over the whole brain, should remain constant across a time series, such that any fluctuation would be due to instrumentation issues or non-brain physiologic effects. However, recent studies have examined the effect of removing the global mean signal from the time course on subsequent connectivity analyses. Interestingly, multiple studies show that a significant portion of the global mean signal is in fact related to the average signal within particular resting state brain networks RN15494 (); RN15497 (), and that removing the global signal can result in spurious negative correlations RN13032 () and reduce the reproducibility of many network metrics RN15471 (). Despite these disadvantages, global signal regression can be helpful in developmental and clinical cohorts to correct for motion-related artifact RN15500 (); ciric2016benchmarking ().
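Global signal regression itself is straightforward: regress each voxel's time series on the global mean (plus an intercept) and keep the residuals. The function name and toy data in this minimal sketch are illustrative.

```python
import numpy as np

def regress_global_signal(data):
    """Remove the global mean signal from each voxel's time series.
    `data`: array of shape (time, voxels). Returns the least-squares
    residuals after regressing every voxel on [intercept, global mean]."""
    gs = data.mean(axis=1, keepdims=True)         # global signal, (time, 1)
    X = np.column_stack([np.ones(len(gs)), gs])   # design: intercept + GS
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta                        # residual time series

rng = np.random.default_rng(1)
data = rng.standard_normal((120, 30))             # toy (time, voxel) data
cleaned = regress_global_signal(data)
```

After this step the mean across voxels of the residual data is zero at every time point, which is precisely why spurious negative correlations can appear downstream.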

Indeed, it is almost impossible for a person to remain motionless in an MRI scanner. Breathing, swallowing, and volitional movements can all create motion that propagates to the head. A brain placed in the MRI field becomes magnetized over approximately 6 seconds. If the brain moves, then it will no longer be magnetized in the same direction, and there can be a massive increase in the signal until the brain has remagnetized to the new orientation. To account for the effects of this motion-induced noise, a variety of retrospective methods have been proposed. Most assume that the change in signal intensity will be global, occurring within a single time sample of the rsMRI time series. One of the most common methods is to use rigid-body transformations to realign each time sample to a single reference time point. The resulting transformation parameters (translations and rotations) can be used to adjust global signal intensity or be included in a regression model as covariates of non-interest RN15499 (); RN15498 (). However, this “filtering” of time series data with motion parameters is problematic because the parameters do not model continuous motion directly. Rather, they capture net displacement at the temporal resolution of the sampling frequency (here, 0.5 Hz). If the head is displaced rapidly and returns to the same position within a single sampling period, then there is no net displacement, but there is a large signal spike in the data. This spike will profoundly alter the strength of connectivity between areas with a common motion-induced signal change; indeed, this type of signal change has been described as the “predominant effect of motion” in a sample ranging from 8 to 23 years old RN15500 (). In recognition of potential artifacts from rapid motion (or RF spikes), software has been developed to address them: rather than using the realignment information, these methods search for global spikes in signal intensity. There is one final challenge with head motion artifacts.
Within each volumetric acquisition, a stack of slices is acquired sequentially, typically by first sampling the odd slices and then the even slices. It is not uncommon for a brief head movement to demagnetize a subset of slices, causing artifacts in every other slice. This is not detected in the transformation matrix and may not alter global signal intensity. New tools are emerging to detect unexpected signal spikes within single slices RN15472 (). For a useful study benchmarking confound regression strategies for the control of motion artifact in studies of functional connectivity, see ciric2016benchmarking ().
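One common way to flag the rapid displacements described above is to compute a framewise displacement (FD) trace from the realignment parameters and censor high-FD frames. The sketch below follows a widely used convention of summing the absolute backward differences of the six parameters, converting rotations to millimeters of arc on a 50 mm sphere; the 0.5 mm threshold and all names are illustrative assumptions.

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """Framewise displacement from realignment parameters.
    `motion`: (time, 6) array of 3 translations (mm) + 3 rotations (rad).
    Rotations are converted to mm of arc on a sphere of `radius` mm; FD
    is the sum of absolute backward differences, with FD[0] set to 0."""
    d = np.abs(np.diff(motion, axis=0))
    d[:, 3:] *= radius                       # radians -> mm of arc
    return np.concatenate([[0.0], d.sum(axis=1)])

# Toy example: a brief 1.2 mm jolt at frame 40 that returns at frame 41
motion = np.zeros((100, 6))
motion[40, 0] = 1.2
fd = framewise_displacement(motion)
spikes = np.where(fd > 0.5)[0]               # frames to censor ("scrub")
```

Frames 40 and 41 are both flagged here, since the head both leaves and returns to its original position; note, as in the text, that a jolt occurring entirely between two samples would leave no trace in the parameters at all.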

Pitfalls unique to structural imaging networks

There are many sampling schemes for acquiring a set of oriented diffusion scans. These include diffusion tensor imaging (DTI), which samples the object at a uniformly spaced set of angles and at a constant magnetic gradient strength. When the gradient strength or number of directions is increased, the angular resolution improves (Q-ball and high angular resolution diffusion imaging, “HARDI”), but with the tradeoff of reduced signal to noise in the scans. Multiple shells of gradient strengths can be applied (multishell diffusion imaging), or a uniform distribution across gradient strength and direction can be applied (diffusion spectrum imaging, “DSI”) RN15488 (). Critically, each of these methods requires a different mathematical technique for converting diffusion images to probabilistic estimates of local water diffusion, resulting in varying success at modeling the connectivity in brain areas where water diffuses in multiple directions (the crossing fiber problem). Methods using lower angular resolution such as DTI consistently underestimate the number of possible streamlines by an order of magnitude compared to multishell and DSI methods. Missing connections can also arise because an insufficient number of seeds are introduced to generate the underlying streamline set. Whatever the cause, missing data can significantly alter graph metrics RN13458 (). On the other hand, commonly used algorithms for generating streamlines can create noisy, anatomically implausible connections that must be removed by length and/or angle thresholds. Most current algorithms for generating streamline connections also suffer from a length bias RN15501 (); RN15503 (): the shorter a connection, the easier it is to reconstruct. Thus, a structural network will be more likely to represent short connections than long ones.
Similarly, a streamline is more likely to be reconstructed if it lies in a thick white matter fascicle with many fibers oriented in a common direction than if it lies in a thin fascicle. To address the length bias, some authors normalize the streamline count between two gray matter regions by the physical size of those regions. Clearly, standardization of these acquisition, reconstruction, and counting procedures is essential for reproducibility and generalizability.

Frontiers in Computational Science and Systems Engineering

With these empirical considerations in mind, it is nevertheless clear that brain imaging has provided a fertile test bed for developing and testing novel tools from network science. Yet, it is likewise clear that this is only a first wave of innovation. Indeed, network neuroscience offers to the field of neuroengineering two distinct sets of frontiers: one in the development of computational and systems engineering approaches, and the other in translating current and future advances directly to clinical populations. In this section, we briefly review new directions in algorithmic development, computational architectures, signal processing techniques, and statistics that support the extraction, representation, and characterization of meaningful relational patterns in neural data. We also discuss the nascent application of control theory to these networks, and highlight their potential utility in guiding clinical interventions.

Dynamic and multilayer networks

A commonly faced challenge in applying network analyses to neural data is that the processes we often wish to understand are inherently dynamic processes hutchison2013dynamic (); calhoun2014chronnectome (); kopell2014beyond (). Rehabilitation, response to treatment, monitoring disease progression, and tracking BCI learning are all evolving processes that occur over a range of time scales. Yet, networks in their simplest forms are static: a fixed set of network nodes are connected by a single estimate of connectivity. In extending these static descriptions to incorporate time, the applied mathematics community has defined so-called multilayer networks kivela2014multilayer (). Colloquially, a multilayer network is a network that contains different layers, and in which the edges in a given layer represent a different type of relationship than the edges in another layer. Perhaps the simplest type of multilayer network is a temporal network, where each layer is a time window and the edges within that layer represent relationships that are true in that time window holme2012temporal (). By tying each layer to the next using identity links (edges between node i in layer t and the same node i in layer t+1), the static graph representation as an adjacency matrix can be expanded to a dynamic graph representation as an adjacency tensor mucha2010community (); bassett2013robust (), providing important mathematical advantages in addressing common statistical challenges present in these data.
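Concretely, a temporal network can be flattened into a supra-adjacency matrix by placing each layer's adjacency matrix on the block diagonal and adding identity links between adjacent layers. A minimal sketch, with illustrative names and toy data:

```python
import numpy as np

def supra_adjacency(layers, omega=1.0):
    """Stack T layer adjacency matrices (each N x N) into an NT x NT
    supra-adjacency matrix, coupling each node to itself in the next
    layer ("identity links") with weight `omega`."""
    T, N = len(layers), layers[0].shape[0]
    S = np.zeros((N * T, N * T))
    for t, A in enumerate(layers):
        S[t*N:(t+1)*N, t*N:(t+1)*N] = A          # within-layer edges
    idx = np.arange(N)
    for t in range(T - 1):                       # identity links t <-> t+1
        S[t*N + idx, (t+1)*N + idx] = omega
        S[(t+1)*N + idx, t*N + idx] = omega
    return S

# Toy temporal network: 3 time windows of a 4-node functional network
rng = np.random.default_rng(2)
layers = [(A + A.T) / 2 for A in (rng.random((4, 4)) for _ in range(3))]
S = supra_adjacency(layers, omega=0.5)           # shape (12, 12)
```

Algorithms such as multilayer modularity maximization operate directly on this supra-adjacency structure, with `omega` tuning how strongly community assignments cohere across time.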

The tools of temporal networks are particularly useful in modeling plasticity and learning with the aim of predicting recovery rienkensmeyer2016computational (). In initial efforts, temporal networks have been used to reveal common patterns of network reconfiguration that occur as healthy adult individuals learn a new motor-visual skill over the course of days to weeks bassett2011dynamic (); bassett2013task (); mantzaris2013dynamic (); bassett2014cross (); bassett2015learning (). Interestingly, individuals that displayed greater network flexibility, particularly in areas of the brain critical for cognitive control bassett2015learning (), learned more quickly than individuals with less flexible brains bassett2011dynamic (). While these studies initially applied temporal network techniques to motor skill learning with the aim of informing rehabilitation after stroke heitger2012motor (), there are many open questions about how sensitive these techniques might be to neural or cognitive plasticity underlying other types of learning mattar2016network (); karuza2016local (), or to other dynamic processes that are of importance to neuroengineering in other clinical contexts.

Beyond temporal networks, one can extend the multilayer network construct to represent relationships across different imaging modalities muldoon2016network (): for example, calcium transients and local field potentials, structural MRI and EEG, or diffusion imaging and functional MRI nicosia2014spontaneous (). Alternatively, one could let each layer represent a different frequency band brookes2016multi (); domenico2016mapping () or a different patient in a clinical cohort. Indeed, the potential applications of these multilayer representations across neuroimaging contexts are surprisingly broad, and future efforts will likely include a careful assessment of their utility in uncovering conserved and variable properties of networked neural systems.

Statistical tools, frameworks, and null models

An important burgeoning area of work lies in building, testing, and validating appropriate statistical methods and models for network inference. Because networks are not simple mathematical objects, the tools required to capture and compare them extend beyond what traditional statistics offers kolaczyk2009statistical (). Many efforts have focused on developing sophisticated permutation-based methods for network comparison simpson2013permutation (); winkler2015multi (), and some have extended these methods to assess differences in network functions (rather than univariate statistics) ginestet2011statistical (); bassett2012altered (); betzel2016modular (), for example by building on tools developed in the field of functional data analysis ramsay2005functional (); ramsay2002applied (); ramsay2006functional (). In addition to comparing networks, one often wishes to understand whether the network structure or dynamics that one observes in empirical data is expected or unexpected. Answering these questions depends on the development of appropriate static and dynamic network null models (see betzel2016modular (); papadopoulos2016evolution (); bassett2015extraction () and bassett2013robust (), respectively). Statistical considerations also extend to estimating the connectivity itself lindquist2014evaluating (); lepage2013inferring (), assessing its significance ginestet2011statistical (), and measuring its relationship to behavior or symptomatology shehzad2014multivariate (). Finally, a nascent area of inquiry lies in building statistical models of networks simpson2011exponential (); simpson2012exponential (); klimm2014resolving () in order to understand their generative principles betzel2016generative (); vertes2012simple (); vertes2014generative (); pavlovic2014stochastic ().
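As one small, concrete example of the permutation-based methods mentioned above, a two-sample permutation test on a scalar graph metric (say, global efficiency computed per subject) can be sketched as follows; the group data are synthetic and all names are illustrative.

```python
import numpy as np

def permutation_test(metric_a, metric_b, n_perm=5000, seed=0):
    """Two-sample permutation test on a per-subject scalar graph metric.
    Returns a two-sided p-value for the difference in group means under
    random reassignment of group labels."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([metric_a, metric_b])
    n_a = len(metric_a)
    observed = np.mean(metric_a) - np.mean(metric_b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # shuffle group labels
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)             # add-one correction

# Synthetic per-subject global efficiency values for two groups
rng = np.random.default_rng(3)
group_a = rng.normal(0.60, 0.05, 20)              # e.g., patients
group_b = rng.normal(0.55, 0.05, 20)              # e.g., controls
p = permutation_test(group_a, group_b)
```

The same shuffling scheme generalizes to entire metric curves (e.g., efficiency as a function of network density) by replacing the mean difference with a functional test statistic.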

Algebraic topology

While extremely powerful, network science is largely built on the tools of graph theory, which inherently treat the dyad (a single connection between two nodes) as the fundamental unit of interest. Recent evidence, however, points to the fact that sensor networks, technological networks, and even neural networks display higher-order interactions that simply cannot be reduced to pairwise relationships ganmor2011architecture (); ganmor2011sparse (). To address this growing realization, we can turn to recent advances in applied algebraic topology ghrist2014elementary (), which reframes the problem of relational data in terms of simplices or collections – rather than pairs – of vertices giusti2016twos () (Fig. 4). This added sensitivity enables algebro-topological tools to offer mechanisms for neural coding giusti2015clique (); curto2016what (), distinguish disparate classes of graph models sizemore2016classification (), and separate healthy from clinical populations kim2014morphological (). The framework also offers useful tools to consider the evolution of simplices over time drawing on the notion of a filtration, and tools to identify and track hollow cavities in networks – structures that are otherwise invisible to common graph metrics giusti2016twos (). We anticipate that the next few years will see an increasing interest in better understanding the role of these higher order interactions in healthy cognition versus disease, and their sensitivity as biomarkers for tracking effects of training and rehabilitation.
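The distinction between cliques and cycles can be made concrete with a toy graph. The sketch below uses networkx's `find_cliques` to enumerate maximal cliques in a graph containing one all-to-all 4-clique and one hollow 4-cycle; detecting and tracking the cavity itself would require a persistent-homology package, which we omit here.

```python
import networkx as nx

# Toy graph: a 4-clique (integration motif) plus a hollow 4-cycle
# (segregation motif) -- the two structures contrasted in the text.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (0, 3),
                  (1, 2), (1, 3), (2, 3)])   # all-to-all on {0,1,2,3}
G.add_edges_from([(4, 5), (5, 6),
                  (6, 7), (7, 4)])           # cycle enclosing a cavity

maximal = list(nx.find_cliques(G))           # maximal cliques of G
largest = max(maximal, key=len)              # the 4-clique
```

Note that graph metrics alone see the cycle only as four edges; it is the algebro-topological machinery (boundaries, homology) that identifies the enclosed cavity as a distinct feature.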

Figure 4: Tools for Higher-Order Interactions from Algebraic Topology. (a) The human connectome is a complex network architecture that contains both dyadic and higher-order interactions. Graph representations of the human connectome only encode dyadic relationships, and leave the higher-order interactions unaccounted for. A natural way in which to encode higher-order interactions is in the language of algebraic topology, which defines building blocks called simplices giusti2016twos (): a 0-simplex is a node, a 1-simplex is an edge between two nodes, a 2-simplex is a filled triangle, etc. (b) These building blocks enable the description of two distinct structural motifs that are thought to play very different roles in neural computations curto2016what (): (i) cliques, which are all-to-all connected subgraphs, are thought to facilitate integrated codes and computations, and (ii) cycles or cavities, which are collections of k-simplices arranged to have an empty geometric boundary, are thought to facilitate segregated codes and computations. (c) Additional tools available to the investigator include filtrations and persistent homology. Filtrations represent weighted simplicial complexes as a series of unweighted simplicial complexes, and can be used to study networks that change over time, or that display hierarchical structure across edge weights. Filtrations allow one to follow cycles from one complex to another and quantify how long they live (via the number of complexes in which they are consecutively present). Because this is a study of the persistence of a cycle, it is referred to as the persistent homology of the weighted simplicial complex.

Network control theory

A final exciting frontier that we will mention – one that bridges both computational science and systems engineering – is the development and application of explicitly network-based control theory liu2011controllability (); pasqualetti2014controllability () to neural systems (Fig. 5). Indeed, neural control engineering schiff2011neural () is slowly evolving into neuro-network control engineering, as control problems become tuned to the underlying graph architecture of the dynamical processes motter2015network (). In general, these applications take one of two forms: either seeking to understand how neural systems control themselves, or seeking to understand how one can exogenously control a neural system, steering it away from pathological dynamics and towards healthy dynamics.

In the first case, we seek to understand how neural systems control themselves. To address this question, we can write down a model of brain dynamics where the current brain state depends on (i) the previous brain state, (ii) the wiring pattern that structurally connects network nodes (brain regions), and (iii) the control input. Assuming this is a linear, time-invariant, noise-free, and discrete-time model, we can infer which brain regions are predisposed to affect the system, and in what ways. Early efforts along these lines revealed that regions in the brain’s executive system are well-poised to push the brain into difficult-to-reach states, far away on an energy landscape gu2015controllability (). Moreover, the brain’s densely interconnected rich-club is poised to form the ground state of the system, being the least energetically costly target state betzel2016optimally (). Interestingly, these control principles of the brain, built on the organization of white matter tracts, are significantly altered in individuals who have experienced traumatic brain injury gu2016optimal (), suggesting their utility in clinical applications. However, it is also important to be cautious; these predictions are based on linear network control while the brain is a nonlinear dynamical system, and therefore interpretations should be validated in additional studies cornelius2013realistic (). For example, demonstrating that individual differences in cognitive control function are correlated with individual differences in network control statistics will be an important first step medaglia2016cognitive (), as will demonstrating that these statistics change over developmental time scales in which cognitive control emerges in children tang2016structural (). Moreover, exploring the applicability of nonlinear control strategies, including linearization of nonlinear systems, will be an important avenue of inquiry for future work cornelius2013realistic ().
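The average controllability notion used in these studies can be sketched for the linear, discrete-time, noise-free model x(t+1) = A x(t) + B u(t) by computing the trace of a finite-horizon controllability Gramian with input restricted to one node at a time. The stabilizing normalization of A and the horizon length below are illustrative choices, not the exact procedure of the cited papers.

```python
import numpy as np

def average_controllability(A, horizon=50):
    """For x(t+1) = A x(t) + B u(t), return trace of the finite-horizon
    controllability Gramian with B = e_i for each node i. A is first
    scaled so its spectral radius falls below 1 (a common convention
    for ensuring stability of the linear model)."""
    A = A / (1 + np.max(np.abs(np.linalg.eigvals(A))))
    N = A.shape[0]
    ac = np.zeros(N)
    for i in range(N):
        B = np.zeros((N, 1)); B[i] = 1.0         # control input at node i
        W, Ak = np.zeros((N, N)), np.eye(N)
        for _ in range(horizon):                 # W = sum_k A^k B B' (A^k)'
            W += Ak @ B @ B.T @ Ak.T
            Ak = Ak @ A
        ac[i] = np.trace(W)
    return ac

# Toy symmetric "structural connectivity" matrix for 10 regions
rng = np.random.default_rng(4)
A = rng.random((10, 10)); A = (A + A.T) / 2
ac = average_controllability(A)                  # one score per region
```

Regions with high `ac` are, in this framework, well positioned to push the system into many easily reachable states; modal controllability is computed analogously from the eigenvectors of A.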

The second context in which network control theory offers a powerful toolset for neuroengineers is in addressing the question of how to exogenously control a neural system and accurately predict the outcome on neurophysiological dynamics – and, by extension, cognition and behavior. Indeed, how to target, tune, and optimize stimulation interventions is one of the most pressing challenges in the treatment of Parkinson’s disease and epilepsy, to name just two johnson2013neuromodulation (). More broadly, this question directly impacts the targeting of optogenetic stimulation in animals ching2013control () and the use of invasive and non-invasive stimulation in humans muldoon2016stimulation () (e.g., deep brain, grid, transcranial magnetic, transcranial direct current, and transcranial alternating current stimulation). As a case study, consider medically refractory epilepsy, where network techniques can be used to identify seizure onset zones hao2014computing (); khambhati2015dynamic (); burns2014network (), and where network control theory can be used to detect drug-resistant seizures santaniello2011quickest (), inform the development of a distributed control algorithm to quiet seizures using grid stimulation ching2012distributed (), and identify areas of the network to target during resective surgery khambhati2016virtual ().

Figure 5: Brain network regulation and control can help navigate dynamical states. To accomplish behavioral and cognitive goals, brain networks internally navigate a complex space of dynamical states. Putative brain states may be situated in various peaks and troughs of an energy landscape – requiring the brain to expend metabolic energy to move from the current state to the next state. Within the space of possible dynamical states, there are easily accessible states and harder-to-reach states; in some cases, the accessible states are healthy while in other cases they may contribute to dysfunction, and similarly for the harder-to-reach states. Two commonly observed control strategies used by brain networks are average control and modal control. In average control, highly central nodes navigate the brain towards easy-to-reach states. In contrast, modal control nodes tend to be isolated brain regions that navigate the brain towards hard-to-reach states that may require additional energy expenditure gu2015controllability (). As a self-regulation mechanism for preventing transitions towards damaging states, the brain may employ cooperative and antagonistic, push-pull, strategies khambhati2016virtual (). In such a framework, the propensity for the brain to transition towards a damaging state might be competitively limited by opposing modal and average controllers whose goal would be to pull the brain towards less damaging states.

Towards Clinical Translation

Together, these exciting computational frontiers have the potential to directly inform clinical practice. Indeed, several of the main translational challenges of neuroengineering are ripe for the incorporation of network data. These opportunities begin at the earliest stages of clinical diagnosis and monitoring, where variation between individuals – and indeed even variation within a single individual – stymies progress in tuning medication, stimulation, brain-machine interfaces, neuroprosthetics, and physical or cognitive-behavioral therapy to offer individuals a better quality of life (Fig. 6). Concerted efforts in mapping relational architectures in neural and behavioral data in the form of graphs and networks will be critical to obtaining a more holistic understanding of mental health, as well as greater insights into optimizing interventions. Such mappings could occur in the traditional sense using empirical measurements performed in research or hospital settings; but perhaps the most tantalizing possibilities currently being discussed include the use of digital data from smart phones to accurately phenotype individuals, and the health of their nervous system, with the goal of better guiding intervention strategies for the clinically unwell onnela2016harnessing (); torous2016new ().

Figure 6: Clinical translation of network neuroscience tools. Network neuroscience offers a natural framework for improving tools to diagnose and treat brain network disorders. (A) For drug-resistant epilepsy patients, invasive monitoring of brain activity to localize brain tissue where seizures originate and plan resective surgery is challenging, because the neural processes generating seizures are poorly understood. Epileptic brain signals, electrical fields produced by the firing of neuron populations, are sensed by electrodes that rest on the surface of the brain, beneath the dura, and are recorded by a digital acquisition system. A three-dimensional reconstruction of a patient’s brain (red) with electrodes co-localized (green) to anatomical features is shown here. (B) Recorded brain signals are studied by clinical practitioners to characterize spatial and temporal behavior of the patient’s seizure activity. In the plot, each line represents the time-varying voltage fluctuation from one electrode sensor. (C) Functional connections inferred from a single time-slice during the patient’s recorded seizure demonstrate rich relationships in neural dynamics between brain regions that are not visually evident from B (blue circles are nodes, red links are strong connections, yellow links are weak connections). Functional connectivity patterns demonstrate strong interactions around brain regions in which seizures begin and weak projections to brain regions where seizures spread. Objective tools in network neuroscience can usher in an era of personalized algorithms capable of mapping epileptic network architecture from neural signals and of guiding implantable neurostimulation devices to specific brain regions for intervention muldoon2016stimulation (); khambhati2015dynamic (); khambhati2016virtual ().

Extensions Beyond Neuroengineering

Before concluding, it is important to point out that the mathematical methods and conceptual frameworks that we have been discussing in this review are applicable well beyond the specific realm of brain connectivity. From genes beagan2016local () to the musculo-skeletal system, from central to peripheral nervous systems chen2016neural (), and from injured neural tissue in brains causing cognitive deficits to neural tissue in muscles causing pain zhang2016stretch (), network science offers an approach that spurns reductionism in favor of holistic maps and models of complex interconnected systems. Indeed, future work may benefit from considering the nervous system as embedded or embodied in a broader context, as only one part of an interconnected web of networks supporting human life baldassano2016topological (); steinway2015inference ().

Conclusion and Future Outlook

In this review, we have sought to introduce an exciting and emerging frontier in neuroengineering: a network science of brain connectivity. In addition to outlining the mathematical underpinnings of the field, we have briefly described some marked initial successes in which the tools of network neuroscience have been brought to bear on neural mapping and connectivity estimation, diagnosis and monitoring, and rehabilitation and treatment. However, we have also been careful to describe common pitfalls and associated limitations, in an effort to offer a balanced guide in incorporating these techniques into one’s own research practices. We have taken the liberty of speculating in the later sections about some important frontiers that we believe will become increasingly critical to the questions posed by neuroengineering in the near future, both from a computational point of view and from a view towards clinical impact. In closing, we underscore yet again that the strength and novelty of network neuroscience lies in its brazen grasp on the full complexities of relational data, facilitating transformative approaches to understanding, fixing, and building brains.  
 
Acknowledgements. We thank Chad Giusti, Jason Kim, and Matthew Hemphill for helpful comments on an earlier version of this manuscript. DSB would also like to acknowledge support from the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the Army Research Laboratory and the Army Research Office through contract numbers W911NF-10-2-0022 and W911NF-14-1-0679, the National Institute of Mental Health (2-R01-DC-009209-11), the National Institute of Child Health and Human Development (1R01HD086888-01), the Office of Naval Research, and the National Science Foundation (BCS-1441502, BCS-1430087, BCS-1631550, and CAREER PHY-1554488). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.

References

  • (1) Chen, H. I. et al. Neural substrate expansion for the restoration of brain function. Front Syst Neurosci 10, 1 (2016).
  • (2) Haynes, J. D. & Rees, G. Decoding mental states from brain activity in humans. Nat Rev Neurosci 7, 523–534 (2006).
  • (3) Christophel, T. B., Hebart, M. N. & Haynes, J. D. Decoding the contents of visual short-term memory from human visual and parietal cortex. J Neurosci 32, 12983–12989 (2012).
  • (4) Putze, F. & Schultz, T. Adaptive cognitive technical systems. J Neurosci Methods 234, 108–115 (2014).
  • (5) Krusienski, D. J. et al. Critical issues in state-of-the-art brain-computer interface signal processing. J Neural Eng 8, 025002 (2011).
  • (6) DiLorenzo, D. J. & Bronzino, J. D. Neuroengineering (CRC Press, 2007).
  • (7) Johnson, M. D. et al. Neuromodulation for brain disorders: challenges and opportunities. IEEE Trans Biomed Eng 60, 610–624 (2013).
  • (8) Glaser, J. I. & Kording, K. P. The development and analysis of integrated neuroscience data. Front Comput Neurosci 10, 11 (2016).
  • (9) Valiant, L. G. What must a global theory of cortex explain? Curr Opin Neurobiol 25, 15–19 (2014).
  • (10) Craver, C. F. Beyond reduction: mechanisms, multifield integration and the unity of neuroscience. Stud Hist Philos Biol Biomed Sci 36, 373–395 (2005).
  • (11) Bullmore, E. et al. Generic aspects of complexity in brain imaging data and other biological systems. Neuroimage 47, 1125–1134 (2009).
  • (12) Stevenson, I. H. & Kording, K. P. How advances in neural recording affect data analysis. Nat Neurosci 14, 139–142 (2011).
  • (13) Bassett, D. S. & Sporns, O. Network neuroscience. Nature Neuroscience In Press (2016).
  • (14) Long, B., Zhang, Z. & Yu, P. S. Relational Data Clustering: Models, Algorithms, and Applications (CRC Press, 2010).
  • (15) Conaco, C. et al. Functionalization of a protosynaptic gene expression network. Proc Natl Acad Sci U S A 109, 10612–10618 (2012).
  • (16) Zhang, S., Bassett, D. S. & Winkelstein, B. A. Stretch-induced network reconfiguration of collagen fibres in the human facet capsular ligament. J R Soc Interface 13, 20150883 (2016).
  • (17) van den Heuvel, M. P., Bullmore, E. T. & Sporns, O. Comparative connectomics. Trends in Cognitive Sciences 20, 345–361 (2016).
  • (18) Shih, C. T. et al. Connectomics-based analysis of information flow in the Drosophila brain. Curr Biol 25, 1249–1258 (2015).
  • (19) Bassett, D. S. & Bullmore, E. T. Human brain networks in health and disease. Curr Opin Neurol 22, 340–347 (2009).
  • (20) Fornito, A. & Bullmore, E. T. Connectomics: a new paradigm for understanding brain disease. Eur Neuropsychopharmacol 25, 733–748 (2015).
  • (21) Medaglia, J. D., Lynall, M. E. & Bassett, D. S. Cognitive network neuroscience. J Cogn Neurosci 27, 1471–1491 (2015).
  • (22) Misic, B. & Sporns, O. From regions to connections and networks: new bridges between brain and behavior. Curr Opin Neurobiol 40, 1–7 (2016).
  • (23) Vertes, P. E. et al. Simple models of human brain functional networks. Proc Natl Acad Sci U S A 109, 5868–5873 (2012).
  • (24) Vertes, P. E., Alexander-Bloch, A. & Bullmore, E. T. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks. Philos Trans R Soc Lond B Biol Sci 369, 1653 (2014).
  • (25) Betzel, R. F. et al. Generative models of the human connectome. Neuroimage 124, 1054–1064 (2016).
  • (26) Pilosof, S., Porter, M. A., Pascual, M. & Kefi, S. The multilayer nature of ecological networks. arXiv 1511, 04453 (2016).
  • (27) Proulx, S. R., Promislow, D. E. & Phillips, P. C. Network thinking in ecology and evolution. Trends Ecol Evol 20, 345–353 (2005).
  • (28) Newman, M. E. J. Networks: An Introduction (MIT Press, 2010).
  • (29) Rubinov, M. & Sporns, O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage 52, 1059–1069 (2010).
  • (30) Pavlovic, D. M., Vértes, P. E., Bullmore, E. T., Schafer, W. R. & Nichols, T. E. Stochastic blockmodeling of the modules and core of the caenorhabditis elegans connectome. PloS one 9, e97584 (2014).
  • (31) Simpson, S. L., Hayasaka, S. & Laurienti, P. J. Exponential random graph modeling for complex brain networks. PLoS One 6, e20039 (2011).
  • (32) Lindquist, M. A., Xu, Y., Nebel, M. B. & Caffo, B. S. Evaluating dynamic bivariate correlations in resting-state fMRI: a comparison study and a new approach. Neuroimage 101, 531–546 (2014).
  • (33) Misic, B. et al. Cooperative and competitive spreading dynamics on the human connectome. Neuron 86, 1518–1529 (2015).
  • (34) Misic, B., Sporns, O. & McIntosh, A. R. Communication efficiency and congestion of signal traffic in large-scale brain networks. PLoS Comput Biol 10, e1003427 (2014).
  • (35) Patel, T. P., Ventre, S. C., Geddes-Klein, D., Singh, P. K. & Meaney, D. F. Single-neuron NMDA receptor phenotype influences neuronal rewiring and reintegration following traumatic injury. J Neurosci 34, 4200–4213 (2014).
  • (36) Gratton, C., Lee, T. G., Nomura, E. M. & D’Esposito, M. The effect of theta-burst TMS on cognitive control networks measured with resting state fMRI. Front Syst Neurosci 7, 124 (2013).
  • (37) Kaiser, M. A tutorial in connectome analysis: topological and spatial features of brain networks. Neuroimage 57, 892–907 (2011).
  • (38) Fornito, A., Zalesky, A. & Breakspear, M. Graph analysis of the human connectome: promise, progress, and pitfalls. Neuroimage 80, 426–444 (2013).
  • (39) van Diessen, E. et al. Opportunities and methodological challenges in EEG and MEG resting state functional brain network research. Clin Neurophysiol 126, 1468–1481 (2015).
  • (40) Reijneveld, J. C., Ponten, S. C., Berendse, H. W. & Stam, C. J. The application of graph theoretical analysis to complex networks in the brain. Clin Neurophysiol 118, 2317–2331 (2007).
  • (41) Bullmore, E. & Sporns, O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci 10, 186–198 (2009).
  • (42) Bassett, D. S. & Bullmore, E. Small-world brain networks. Neuroscientist 12, 512–523 (2006).
  • (43) Bullmore, E. T. & Bassett, D. S. Brain graphs: graphical models of the human brain connectome. Annu Rev Clin Psychol 7, 113–140 (2011).
  • (44) Butts, C. T. Revisiting the foundations of network analysis. Science 325, 414–416 (2009).
  • (45) Zalesky, A. et al. Whole-brain anatomical networks: does the choice of nodes matter? Neuroimage 50, 970–983 (2010).
  • (46) Bollobas, B. Random Graphs (Academic Press, 1985).
  • (47) Bollobas, B. Graph Theory: An Introductory Course (Springer-Verlag, 1979).
  • (48) Watts, D. J. & Strogatz, S. H. Collective dynamics of ’small-world’ networks. Nature 393, 440–442 (1998).
  • (49) Kitzbichler, M. G., Henson, R. N., Smith, M. L., Nathan, P. J. & Bullmore, E. T. Cognitive effort drives workspace configuration of human brain functional networks. J Neurosci 31, 8259–8570 (2011).
  • (50) Rubinov, M. & Bassett, D. S. Emerging evidence of connectomic abnormalities in schizophrenia. J Neurosci 31, 6263–6265 (2011).
  • (51) Bassett, D. S. & Lynall, M.-E. Cognitive Neurosciences: The Biology of the Mind, chap. Network methods to characterize brain structure and function (MIT Press, 2015).
  • (52) Li, Y. et al. Brain anatomical network and intelligence. PLoS Comput Biol 5, e1000395 (2009).
  • (53) Bassett, D. S. & Bullmore, E. T. Small-world brain networks revisited. The Neuroscientist, commissioned (2016).
  • (54) Sporns, O. & Betzel, R. F. Modular brain networks. Annu Rev Psychol 67 (2016).
  • (55) Porter, M. A., Onnela, J.-P. & Mucha, P. J. Communities in networks. Notices of the AMS 56, 1082–1097 (2009).
  • (56) Fortunato, S. Community detection in graphs. Physics reports 486, 75–174 (2010).
  • (57) Everett, M. G. & Borgatti, S. P. Peripheries of cohesive subsets. Social Networks 21, 397–407 (1999).
  • (58) Borgatti, S. P. & Everett, M. G. Models of core/periphery structures. Social Networks 21, 375–395 (1999).
  • (59) Bassett, D. S. et al. Task-based core-periphery organization of human brain dynamics. PLoS Comput Biol 9, e1003171 (2013).
  • (60) van den Heuvel, M. P. & Sporns, O. Rich-club organization of the human connectome. The Journal of neuroscience 31, 15775–15786 (2011).
  • (61) Bassett, D. S. & Siebenhuhner, F. Multiscale Analysis and Nonlinear Dynamics: From Genes to the Brain, chap. Multiscale network organization in the human brain (Wiley, 2013).
  • (62) Bassett, D. S. et al. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits. PLoS Comput Biol 6, e1000748 (2010).
  • (63) Bassett, D. S., Brown, J. A., Deshpande, V., Carlson, J. M. & Grafton, S. T. Conserved and variable architecture of human white matter connectivity. Neuroimage 54, 1262–1279 (2011).
  • (64) Muldoon, S. F., Soltesz, I. & Cossart, R. Spatially clustered neuronal assemblies comprise the microstructure of synchrony in chronically epileptic networks. Proc Natl Acad Sci USA 110, 3567–3572 (2013).
  • (65) Bettencourt, L. M., Stephens, G. J., Ham, M. I. & Gross, G. W. Functional structure of cortical neuronal networks grown in vitro. Phys Rev E Stat Nonlin Soft Matter Phys 75, 021915 (2007).
  • (66) Fulcher, B. D. & Fornito, A. A transcriptional signature of hub connectivity in the mouse connectome. Proc Natl Acad Sci U S A 113, 1435–1440 (2016).
  • (67) Hilgetag, C. C. & Kaiser, M. Clustered organization of cortical connectivity. Neuroinformatics 2, 353–360 (2004).
  • (68) Markov, N. T. et al. Cortical high-density counterstream architectures. Science 342, 1238406 (2013).
  • (69) Rubinov, M., Ypma, R., Watson, C. & Bullmore, E. Wiring cost and topological participation of the mouse brain connectome. Proc Natl Acad Sci U S A, doi:10.1073/pnas.1420315112 (2015).
  • (70) van den Heuvel, M. P., Scholtens, L. H. & de Reus, M. A. Topological organization of connectivity strength in the rat connectome. Brain Struct Funct 221, 1719–1736 (2016).
  • (71) Kaiser, M. Neuroanatomy: connectome connects fly and mammalian brain networks. Curr Biol 25, R416–R418 (2015).
  • (72) Hagmann, P. et al. Mapping the structural core of human cerebral cortex. PLoS Biol 6, e159 (2008).
  • (73) Deng, Z. D., McClintock, S. M. & Lisanby, S. H. Brain network properties in depressed patients receiving seizure therapy: A graph theoretical analysis of peri-treatment resting EEG. Conf Proc IEEE Eng Med Biol Soc 2015, 2203–2206 (2015).
  • (74) Toppi, J. et al. Graph theory in brain-to-brain connectivity: A simulation study and an application to an EEG hyperscanning experiment. Conf Proc IEEE Eng Med Biol Soc 2015, 2211–2214 (2015).
  • (75) Zhang, Y., Xu, P., Guo, D. & Yao, D. Prediction of SSVEP-based BCI performance by the resting-state EEG network. J Neural Eng 10, 066017 (2013).
  • (76) Bassett, D. S. et al. Cognitive fitness of cost-efficient brain functional networks. Proc Natl Acad Sci U S A 106, 11747–11752 (2009).
  • (77) Khambhati, A. N. et al. Dynamic network drivers of seizure generation, propagation and termination in human neocortical epilepsy. PLoS Comput Biol 11, e1004608 (2015).
  • (78) Khambhati, A., Davis, K., Lucas, T., Litt, B. & Bassett, D. S. Virtual cortical resection reveals push-pull network control preceding seizure evolution. Neuron In Press (2016).
  • (79) Niu, H., Wang, J., Zhao, T., Shu, N. & He, Y. Revealing topological organization of human brain functional networks with resting-state functional near infrared spectroscopy. PLoS One 7, e45771 (2012).
  • (80) Zhang, J. et al. Mapping the small-world properties of brain networks in deception with functional near-infrared spectroscopy. Sci Rep 6, 25297 (2016).
  • (81) Bassett, D., Brown, J., Deshpande, V., Carlson, J. & Grafton, S. Conserved and variable architecture of human white matter connectivity. Neuroimage 54, 1262–1279 (2011).
  • (82) Zalesky, A. et al. Whole-brain anatomical networks: does the choice of nodes matter? Neuroimage 50, 970–83 (2010). URL http://www.ncbi.nlm.nih.gov/pubmed/20035887.
  • (83) Jones, D. K. Studying connections in the living human brain with diffusion MRI. Cortex 44, 936–952 (2008). URL http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&dopt=AbstractPlus&list_uids=18635164.
  • (84) Bassett, D. S. et al. Hierarchical organization of human cortical networks in health and schizophrenia. J Neurosci 28, 9239–9248 (2008).
  • (85) Lynall, M. E. et al. Functional connectivity and brain networks in schizophrenia. J Neurosci 30, 9477–9487 (2010).
  • (86) Nomi, J. S. & Uddin, L. Q. Developmental changes in large-scale network connectivity in autism. Neuroimage Clin 7, 732–741 (2015).
  • (87) Menon, V. Large-scale brain networks and psychopathology: a unifying triple network model. Trends Cogn Sci 15, 483–506 (2011).
  • (88) Burns, S. P. et al. Network dynamics of the brain and influence of the epileptic seizure onset zone. Proc Natl Acad Sci U S A 111, E5321–E5330 (2014).
  • (89) He, Y., Chen, Z., Gong, G. & Evans, A. Neuronal networks in Alzheimer’s disease. Neuroscientist 15, 333–350 (2009).
  • (90) Poza, J. et al. Characterization of the spontaneous electroencephalographic activity in Alzheimer’s disease using disequilibria and graph theory. Conf Proc IEEE Eng Med Biol Soc 2013, 5990–5993 (2013).
  • (91) Tijms, B. M. et al. Alzheimer’s disease: connecting findings from graph theoretical studies of brain networks. Neurobiol Aging 34, 2023–2036 (2013).
  • (92) Stam, C. J. Modern network science of neurological disorders. Nat Rev Neurosci 15, 683–695 (2014).
  • (93) Zhu, H. et al. Changes of intranetwork and internetwork functional connectivity in Alzheimer’s disease and mild cognitive impairment. J Neural Eng 13, 046008 (2016).
  • (94) Petti, M. et al. Aged-related changes in brain activity classification with respect to age by means of graph indexes. Conf Proc IEEE Eng Med Biol Soc 2013, 4350–4353 (2013).
  • (95) Sacchet, M. D., Prasad, G., Foland-Ross, L. C., Thompson, P. M. & Gotlib, I. H. Elucidating brain connectivity networks in major depressive disorder using classification-based scoring. Proc IEEE Int Symp Biomed Imaging 2014, 246–249 (2014).
  • (96) Toppi, J. et al. Investigating statistical differences in connectivity patterns properties at single subject level: a new resampling approach. Conf Proc IEEE Eng Med Biol Soc 2014, 6357–6360 (2014).
  • (97) Zhao, C. et al. The reorganization of human brain networks modulated by driving mental fatigue. IEEE J Biomed Health Inform Epub ahead of print (2016).
  • (98) Toppi, J. et al. Time varying effective connectivity for describing brain network changes induced by a memory rehabilitation treatment. Conf Proc IEEE Eng Med Biol Soc 2014, 6786–6789 (2014).
  • (99) Ge, R., Zhang, H., Yao, L. & Long, Z. Motor imagery learning induced changes in functional connectivity of the default mode network. IEEE Trans Neural Syst Rehabil Eng 23, 138–148 (2015).
  • (100) Stoeckel, L. E. et al. Optimizing real time fMRI neurofeedback for therapeutic discovery and development. Neuroimage Clin 5, 245–255 (2014).
  • (101) Banca, P., Sousa, T., Duarte, I. C. & Castelo-Branco, M. Visual motion imagery neurofeedback based on the hMT+/V5 complex: evidence for a feedback-specific neural circuit involving neocortical and cerebellar regions. J Neural Eng 12, 066003 (2015).
  • (102) Fels, M., Bauer, R. & Gharabaghi, A. Predicting workload profiles of brain-robot interface and electromygraphic neurofeedback with cortical resting-state networks: personal trait or task-specific challenge? J Neural Eng 12, 046029 (2015).
  • (103) Andersen, R. A., Kellis, S., Klaes, C. & Aflalo, T. Toward more versatile and intuitive cortical brain-machine interfaces. Curr Biol 24, R885–R897 (2014).
  • (104) Zhang, R. et al. Efficient resting-state EEG network facilitates motor imagery performance. J Neural Eng 12, 066024 (2015).
  • (105) Sporns, O. The human connectome: origins and challenges. Neuroimage 80, 53–61 (2013).
  • (106) Christopoulos, V. N. et al. A network analysis of developing brain cultures. J Neural Eng 9 (2012).
  • (107) Kafashan, M. & Ching, S. Optimal stimulus scheduling for active estimation of evoked brain networks. J Neural Eng 12, 066011 (2015).
  • (108) Kanagasabapathi, T. T. et al. Functional connectivity and dynamics of cortical-thalamic networks co-cultured in a dual compartment device. J Neural Eng 9, 036010 (2012).
  • (109) Chang, J. C., Brewer, G. J. & Wheeler, B. C. Neuronal network structuring induces greater neuronal activity through enhanced astroglial development. J Neural Eng 3, 217–226 (2006).
  • (110) Viallon, M. et al. State-of-the-art MRI techniques in neuroradiology: principles, pitfalls, and clinical applications. Neuroradiology 57, 441–467 (2015). URL http://link.springer.com/article/10.1007/s00234-015-1500-1.
  • (111) Beckmann, C. F., DeLuca, M., Devlin, J. T. & Smith, S. M. Investigations into resting-state connectivity using independent component analysis. Philos Trans R Soc Lond B Biol Sci 360, 1001–1013 (2005). URL http://www.ncbi.nlm.nih.gov/pubmed/16087444.
  • (112) De Luca, M., Beckmann, C. F., De Stefano, N., Matthews, P. M. & Smith, S. M. fMRI resting state networks define distinct modes of long-distance interactions in the human brain. Neuroimage 29, 1359–1367 (2006). URL http://www.ncbi.nlm.nih.gov/pubmed/16260155.
  • (113) Greicius, M. D., Supekar, K., Menon, V. & Dougherty, R. F. Resting-state functional connectivity reflects structural connectivity in the default mode network. Cereb Cortex 19, 72–78 (2009). URL http://www.ncbi.nlm.nih.gov/pubmed/18403396.
  • (114) Aurich, N. K. & Filho, J. A. Evaluating the reliability of different preprocessing steps to estimate graph theoretical measures in resting state fMRI data. Frontiers in Neuroscience 9, 1–10 (2015). URL http://journal.frontiersin.org/article/10.3389/fnins.2015.00048/full.
  • (115) Brooks, J. C. W., Faull, O. K., Pattinson, K. T. S. & Jenkinson, M. Physiological noise in brainstem fMRI. Frontiers in Human Neuroscience 7, 623 (2013). URL http://journal.frontiersin.org/article/10.3389/fnhum.2013.00623/abstract.
  • (116) Marchitelli, R. et al. Test-retest reliability of the default mode network in a multi-centric fMRI study of healthy elderly: Effects of data-driven physiological noise correction techniques. Human Brain Mapping 37, 2114–2132 (2016). URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=26990928&retmode=ref&cmd=prlinks.
  • (117) Birn, R. M., Diamond, J. B., Smith, M. A. & Bandettini, P. A. Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI. NeuroImage 31, 1536–1548 (2006).
  • (118) Shmueli, K. et al. Low-frequency fluctuations in the cardiac rate as a source of variance in the resting-state fMRI BOLD signal. Neuroimage 38, 306–320 (2007). URL http://www.ncbi.nlm.nih.gov/pubmed/17869543.
  • (119) Glover, G. H., Li, T. Q. & Ress, D. Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magn Reson Med 44, 162–167 (2000). URL http://www.ncbi.nlm.nih.gov/pubmed/10893535.
  • (120) Beall, E. B. & Lowe, M. J. The non-separability of physiologic noise in functional connectivity MRI with spatial ICA at 3T. Journal of Neuroscience Methods 191, 263–276 (2010). URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=20600313&retmode=ref&cmd=prlinks.
  • (121) Salimi-Khorshidi, G. et al. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers. NeuroImage 90, 449–468 (2014). URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=24389422&retmode=ref&cmd=prlinks.
  • (122) Beall, E. B. & Lowe, M. J. Isolating physiologic noise sources with independently determined spatial measures. NeuroImage 37, 1286–1300 (2007). URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=17689982&retmode=ref&cmd=prlinks.
  • (123) Chang, C. & Glover, G. H. Time-frequency dynamics of resting-state brain connectivity measured with fMRI. Neuroimage 50, 81–98 (2010). URL http://www.ncbi.nlm.nih.gov/pubmed/20006716.
  • (124) Fox, M. D., Zhang, D., Snyder, A. Z. & Raichle, M. E. The global signal and observed anticorrelated resting state brain networks. J Neurophysiol 101, 3270–3283 (2009). URL http://www.ncbi.nlm.nih.gov/pubmed/19339462.
  • (125) Murphy, K., Birn, R. M., Handwerker, D. A., Jones, T. B. & Bandettini, P. A. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced? Neuroimage 44, 893–905 (2009). URL http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=18976716.
  • (126) Andellini, M., Cannatà, V., Gazzellini, S., Bernardi, B. & Napolitano, A. Test-retest reliability of graph metrics of resting state MRI functional brain networks: A review. Journal of Neuroscience Methods 253, 183–192 (2015). URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=26072249&retmode=ref&cmd=prlinks.
  • (127) Satterthwaite, T. D. et al. An improved framework for confound regression and filtering for control of motion artifact in the preprocessing of resting-state functional connectivity data. Neuroimage 64, 240–256 (2013). URL http://www.ncbi.nlm.nih.gov/pubmed/22926292.
  • (128) Ciric, R. et al. Benchmarking confound regression strategies for the control of motion artifact in studies of functional connectivity. Neuroimage In Submission (2016).
  • (129) Friston, K. J., Williams, S., Howard, R., Frackowiak, R. S. & Turner, R. Movement-related effects in fMRI time-series. Magn Reson Med 35, 346–355 (1996). URL http://www.ncbi.nlm.nih.gov/pubmed/8699946.
  • (130) Lemieux, L., Salek-Haddadi, A., Lund, T. E., Laufs, H. & Carmichael, D. Modelling large motion events in fMRI studies of patients with epilepsy. Magn Reson Imaging 25, 894–901 (2007). URL http://www.ncbi.nlm.nih.gov/pubmed/17490845.
  • (131) Tierney, T. M. et al. FIACH: A biophysical model for automatic retrospective noise control in fMRI. NeuroImage 124, 1009–1020 (2016). URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&id=26416652&retmode=ref&cmd=prlinks.
  • (132) Hagmann, P. et al. Understanding diffusion MR imaging techniques: from scalar diffusion-weighted imaging to diffusion tensor imaging and beyond. Radiographics 26 Suppl 1, S205–S223 (2006). URL http://pubs.rsna.org/doi/abs/10.1148/rg.26si065510.
  • (133) Pestilli, F., Yeatman, J. D., Rokem, A., Kay, K. N. & Wandell, B. A. Evaluation and statistical inference for human connectomes. Nat Methods 11, 1058–1063 (2014). URL http://www.ncbi.nlm.nih.gov/pubmed/25194848.
  • (134) Yeh, C. H., Smith, R. E., Liang, X., Calamante, F. & Connelly, A. Correction for diffusion MRI fibre tracking biases: The consequences for structural connectomic metrics. Neuroimage (2016). URL http://www.ncbi.nlm.nih.gov/pubmed/27211472.
  • (135) Hutchison, R. M. et al. Dynamic functional connectivity: promise, issues, and interpretations. Neuroimage 80, 360–378 (2013).
  • (136) Calhoun, V. D., Miller, R., Pearlson, G. & Adalı, T. The chronnectome: time-varying connectivity networks as the next frontier in fMRI data discovery. Neuron 84, 262–274 (2014).
  • (137) Kopell, N. J., Gritton, H. J., Whittington, M. A. & Kramer, M. A. Beyond the connectome: the dynome. Neuron 83, 1319–1328 (2014).
  • (138) Kivelä, M. et al. Multilayer networks. J. Complex Netw. 2, 203–271 (2014).
  • (139) Holme, P. & Saramaki, J. Temporal networks. Phys. Rep. 519, 97–125 (2012).
  • (140) Mucha, P. J., Richardson, T., Macon, K., Porter, M. A. & Onnela, J.-P. Community structure in time-dependent, multiscale, and multiplex networks. Science 328, 876–878 (2010).
  • (141) Bassett, D. S. et al. Robust detection of dynamic community structure in networks. Chaos 23, 013142 (2013).
  • (142) Reinkensmeyer, D. J. et al. Computational neurorehabilitation: modeling plasticity and learning to predict recovery. J Neuroeng Rehabil 13, 42 (2016).
  • (143) Bassett, D. S. et al. Dynamic reconfiguration of human brain networks during learning. Proc Natl Acad Sci U S A 108, 7641–7646 (2011).
  • (144) Mantzaris, A. V. et al. Dynamic network centrality summarizes learning in the human brain. Journal of Complex Networks 1, 83–92 (2013).
  • (145) Bassett, D. S., Wymbs, N. F., Porter, M. A., Mucha, P. J. & Grafton, S. T. Cross-linked structure of network evolution. Chaos 24, 013112 (2014).
  • (146) Bassett, D. S., Yang, M., Wymbs, N. F. & Grafton, S. T. Learning-induced autonomy of sensorimotor systems. Nat Neurosci 18, 744–751 (2015).
  • (147) Heitger, M. H. et al. Motor learning-induced changes in functional brain connectivity as revealed by means of graph-theoretical network analysis. Neuroimage 61, 633–650 (2012).
  • (148) Mattar, M. G., Thompson-Schill, S. L. & Bassett, D. S. The network architecture of value learning. arXiv 1607, 04169 (2016).
  • (149) Karuza, E. A., Thompson-Schill, S. L. & Bassett, D. S. Local patterns to global architectures: Influences of network topology on human learning. Trends Cogn Sci S1364-6613, 30071–30077 (2016).
  • (150) Muldoon, S. F. & Bassett, D. S. Network and multilayer network approaches to understanding human brain dynamics. Philosophy of Science In Press (2016).
  • (151) Nicosia, V., Skardal, P. S., Latora, V. & Arenas, A. Spontaneous synchronization driven by energy transport in interconnected networks. arXiv 1405, 5855v2 (2014).
  • (152) Brookes, M. J. et al. A multi-layer network approach to MEG connectivity analysis. Neuroimage 132, 425–438 (2016).
  • (153) De Domenico, M., Sasai, S. & Arenas, A. Mapping multiplex hubs in human functional brain network. Front. Neurosci. 10, 326 (2016).
  • (154) Kolaczyk, E. D. Statistical Analysis of Network Data: Methods and Models (Springer, 2009).
  • (155) Simpson, S. L., Lyday, R. G., Hayasaka, S., Marsh, A. P. & Laurienti, P. J. A permutation testing framework to compare groups of brain networks. Front Comput Neurosci 7, 171 (2013).
  • (156) Winkler, A. M., Webster, M. A., Vidaurre, D., Nichols, T. E. & Smith, S. M. Multi-level block permutation. Neuroimage 123, 253–268 (2015).
  • (157) Ginestet, C. E. & Simmons, A. Statistical parametric network analysis of functional connectivity dynamics during a working memory task. Neuroimage 55, 688–704 (2011).
  • (158) Bassett, D. S., Nelson, B. G., Mueller, B. A., Camchong, J. & Lim, K. O. Altered resting state complexity in schizophrenia. Neuroimage 59, 2196–2207 (2012).
  • (159) Betzel, R. F. et al. The modular organization of human anatomical brain networks: Accounting for the cost of wiring. arXiv 1608, 01161 (2016).
  • (160) Ramsay, J. & Silverman, B. W. Functional Data Analysis (Springer, 2005).
  • (161) Ramsay, J. O. & Silverman, B. W. Applied Functional Data Analysis: Methods and Case Studies (Springer, 2002).
  • (162) Ramsay, J. O. Functional data analysis (Wiley Online Library, 2006).
  • (163) Papadopoulos, L., Puckett, J., Daniels, K. E. & Bassett, D. S. Evolution of network architecture in a granular material under compression. arXiv 1603, 08159 (2016).
  • (164) Bassett, D. S., Owens, E. T., Porter, M. A., Manning, M. L. & Daniels, K. E. Extraction of force-chain network architecture in granular materials using community detection. Soft Matter 11, 2731–2744 (2015).
  • (165) Lepage, K. Q., Ching, S. & Kramer, M. A. Inferring evoked brain connectivity through adaptive perturbation. J Comput Neurosci 34, 303–318 (2013).
  • (166) Shehzad, Z. et al. A multivariate distance-based analytic framework for connectome-wide association studies. Neuroimage 93, 74–94 (2014).
  • (167) Simpson, S. L., Moussa, M. N. & Laurienti, P. J. An exponential random graph modeling approach to creating group-based representative whole-brain connectivity networks. Neuroimage 60, 1117–1126 (2012).
  • (168) Klimm, F., Bassett, D. S., Carlson, J. M. & Mucha, P. J. Resolving structural variability in network models and the brain. PLoS Computational Biology 10, e1003491 (2014).
  • (169) Ganmor, E., Segev, R. & Schneidman, E. The architecture of functional interaction networks in the retina. J Neurosci 31, 3044–3054 (2011).
  • (170) Ganmor, E., Segev, R. & Schneidman, E. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proc Natl Acad Sci U S A 108, 9679–9684 (2011).
  • (171) Ghrist, R. Elementary applied topology (Createspace, 2014).
  • (172) Giusti, C., Ghrist, R. & Bassett, D. S. Two’s company, three (or more) is a simplex : Algebraic-topological tools for understanding higher-order structure in neural data. J Comput Neurosci 41, 1–14 (2016).
  • (173) Giusti, C., Pastalkova, E., Curto, C. & Itskov, V. Clique topology reveals intrinsic geometric structure in neural correlations. Proc Natl Acad Sci U S A 112, 13455–13460 (2015).
  • (174) Curto, C. What can topology tell us about the neural code? arXiv 1605, 01905 (2016).
  • (175) Sizemore, A., Giusti, C. & Bassett, D. S. Classification of weighted networks through mesoscale homological features. Journal of Complex Networks In Press (2016).
  • (176) Kim, E. et al. Morphological brain network assessed using graph theory and network filtration in deaf adults. Hearing Research 315, 88–98 (2014).
  • (177) Liu, Y.-Y., Slotine, J.-J. & Barabási, A.-L. Controllability of complex networks. Nature 473, 167–173 (2011).
  • (178) Pasqualetti, F., Zampieri, S. & Bullo, F. Controllability metrics, limitations and algorithms for complex networks. IEEE Transactions on Control of Network Systems 1, 40–52 (2014).
  • (179) Schiff, S. J. Neural Control Engineering: The Emerging Intersection between Control Theory and Neuroscience (MIT Press, 2011).
  • (180) Motter, A. E. Networkcontrology. Chaos 25, 097621 (2015).
  • (181) Gu, S. et al. Controllability of structural brain networks. Nat Commun 6, 8414 (2015).
  • (182) Betzel, R. F., Gu, S., Medaglia, J. D., Pasqualetti, F. & Bassett, D. S. Optimally controlling the human connectome: the role of network topology. Scientific Reports In Press (2016).
  • (183) Gu, S. et al. Optimal trajectories of brain state transitions. arXiv 1607, 01706 (2016).
  • (184) Cornelius, S. P., Kath, W. L. & Motter, A. E. Realistic control of network dynamics. Nat Commun 4, 1942 (2013).
  • (185) Medaglia, J. D. et al. Cognitive control in the controllable connectome. arXiv 1606, 09185 (2016).
  • (186) Tang, E. et al. Structural drivers of diverse neural dynamics and their evolution across development. arXiv 1607 (2016).
  • (187) Ching, S. & Ritt, J. T. Control strategies for underactuated neural ensembles driven by optogenetic stimulation. Front Neural Circuits 7, 54 (2013).
  • (188) Muldoon, S. F. et al. Stimulation-based control of dynamic brain networks. PLoS Comp Biol In Press (2016).
  • (189) Hao, S. et al. Computing network-based features from intracranial EEG time series data: Application to seizure focus localization. Conf Proc IEEE Eng Med Biol Soc 2014, 5812–5815 (2014).
  • (190) Santaniello, S. et al. Quickest detection of drug-resistant seizures: an optimal control approach. Epilepsy Behav 22, S49–S60 (2011).
  • (191) Ching, S., Brown, E. N. & Kramer, M. A. Distributed control in a mean-field cortical network model: implications for seizure suppression. Phys Rev E Stat Nonlin Soft Matter Phys 86, 021920 (2012).
  • (192) Onnela, J. P. & Rauch, S. L. Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health. Neuropsychopharmacology 41, 1691–1696 (2016).
  • (193) Torous, J., Kiang, M. V., Lorme, J. & Onnela, J. P. New tools for new research in psychiatry: A scalable and customizable platform to empower data driven smartphone research. JMIR Ment Health 3, e16 (2016).
  • (194) Beagan, J. A. et al. Local genome topology can exhibit an incompletely rewired 3d-folding state during somatic cell reprogramming. Cell Stem Cell 18, 611–624 (2016).
  • (195) Baldassano, S. N. & Bassett, D. S. Topological distortion and reorganized modular structure of gut microbial co-occurrence networks in inflammatory bowel disease. Sci Rep 6, 26087 (2016).
  • (196) Steinway, S. N., Biggs, M. B., Loughran, T. P. J., Papin, J. A. & Albert, R. Inference of network dynamics and metabolic interactions in the gut microbiome. PLoS Comput Biol 11, e1004338 (2015).