Graph Filters and the Z-Laplacian


Xiaoran Yan, Indiana University Network Science Institute, Bloomington, IN 47408
Brian M. Sadler, Robert J. Drost, Paul L. Yu, Army Research Laboratory, Adelphi, MD 20783
Kristina Lerman, Information Sciences Institute, University of Southern California, Marina Del Rey, CA 90292

In network science, the interplay between dynamical processes and the underlying topologies of complex systems has led to a diverse family of models with different interpretations. In graph signal processing, this is manifested in the form of different graph shifts and their induced algebraic systems. In this paper, we propose the unifying Z-Laplacian framework, whose instances can act as graph shift operators. As a generalization of the traditional graph Laplacian, the Z-Laplacian spans the space of all possible Z-matrices, i.e., real square matrices with nonpositive off-diagonal entries. We show that the Z-Laplacian can model general continuous-time dynamical processes, including information flows and epidemic spreading on a given graph. It is also closely related to general nonnegative graph filters in the discrete time domain. We showcase its flexibility by considering two applications. First, we consider a wireless communications networking problem modeled with a graph, where the framework can be applied to model the effects of the underlying communications protocol and traffic. Second, we examine a structural brain network from the perspective of low- to high-frequency connectivity.

I Introduction

As a powerful representation for many complex systems, a network models entities and their interactions via vertices and edges. In the field of network science, studies of topological structures, including those of vertex centrality and community structure, have led to fundamental insights into the organization and function of social, biological, and technological systems [1, 2, 3]. To model dynamical properties on a given network, different dynamical processes can be defined over fixed topologies. Recent studies have demonstrated the fundamental interplay between dynamical operators and centrality and community structure measures [4, 5, 6].

In signal processing, we see the parallel development of graph signal processing (GSP). Starting with simple consensus problems on networks, Olfati-Saber, Fax, and Murray [7] developed methods that can be used to design and analyze distributed control of sensors, unmanned vehicles, and communications systems. Capable of dealing with directed networks, switching topologies, and time delays, their models are closely related to random walks on graphs. Sandryhaila and Moura [8] proposed a more general framework for defining linear invariant filters based on graph shift operators. By generalizing the classical discrete signal processing framework to graph topologies, well-developed theories and techniques can be extended to analyze more-complex problems involving interconnected systems and relational data sets [9].

By connecting these ideas from signal processing and network science, we investigate the mathematical duality between random walk and consensus processes on networks. We propose a class of discrete graph shift operators that are capable of modeling dynamical processes, including information flows and epidemic spreading. We demonstrate that, in this framework, these shifts span the space of all nonnegative matrices. We then adapt the discrete filters to continuous-time settings, introducing the Z-Laplacian with heterogeneous time delays. This theoretical generalization of the parameterized Laplacian framework [10] opens the door to signal processing analysis of continuous dynamical systems on networks.

The cross-disciplinary connection also allows us to apply the idea of dynamics modeling to signal processing problems. Under the parameterized Laplacian framework [10], we demonstrated that different parameterizations of random walk and consensus processes can lead to different perceived network structures on the same topology. In this paper, we make the following novel contributions:

  • We propose the general continuous-time Z-Laplacian framework. The associated shifts span the space of Z-matrices, which are defined as real square matrices with nonpositive off-diagonal entries [11]. The framework enables the modeling of epidemic and information diffusion (Theorems 1, 2, 4), unifying many existing linear operators in the literature.

  • We connect discrete- and continuous-time dynamical processes to the GSP framework. In particular, we propose the Z-Laplacian operators as graph shifts, leading to induced signal processing techniques with corresponding dynamical process interpretations.

  • We provide two signal processing examples of how different graph shift choices can lead to different conclusions in real applications.

    • For a wireless communications network with a fixed topology, we use the framework to model the traffic patterns under different communications protocols, coupling GSP with underlying protocol strategies and enabling the study of their interplay. This example illustrates how a dynamical process on a graph can be altered depending on the underlying assumptions about the traffic, communications rate, and protocol.

    • For structural brain networks, we use the framework to conduct frequency analysis from different dynamical perspectives, including information diffusion models made possible by the Z-Laplacian framework.

II Background

Classical discrete signal processing provides a wide range of tools to analyze data on regular structures, including filtering, transformation, and compression. GSP applies these tools to signals on graphs with arbitrary topologies [9].

Consider a directed graph G = (V, E), where V is the set of vertices, representing elements in the system, and E is the set of edges that represent the pairwise interactions between the vertices. The topological structure of the system is captured by the weighted adjacency matrix W. (Throughout, we will refer to weighted adjacency matrices simply as adjacency matrices.) The diagonal in- and out-degree matrices are respectively defined by D_in(i,i) = Σ_j W(j,i) and D_out(i,i) = Σ_j W(i,j), where W(i,j) is the (i,j) element of matrix W. For undirected graphs, D_in = D_out = D.

We define a graph signal as a mapping from the vertex set V to the real numbers. We represent the graph signal at time step t (or at continuous time t) as a row vector x_t,111In this paper, we adopt the Markov process convention, i.e., using row vertex signal vectors and right-multiplying them by matrix operators, which contrasts with the algebraic convention we used in [6, 10]. We will also use W(i,j) to represent entry (i,j) of a matrix W, and T(v,v) to represent entry (v,v) of a diagonal matrix T. making the space of graph signals identical to R^|V|. A graph filter is a mapping from one graph signal to another,

x̃ = x H,
where the filter is represented by an n × n matrix H, with n = |V|. Moreover, just like in classical signal processing, any linear shift-invariant filter can be expressed as

H = h_0 S^0 + h_1 S^1 + ⋯ + h_K S^K,
where S is the graph shift operator corresponding to G, the h_k, k = 0, …, K, are real coefficients, and K is the order of the filter. The graph shift operator is not only the building block of shift-invariant filters, it is also closely related to the notions of frequency response, convolution, and Fourier transforms on graphs [12]. In [8], Sandryhaila and Moura derived a formal algebra based on S, generalizing corresponding signal processing concepts to graph topologies.
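As a concrete sketch of these definitions (using NumPy; the small graph, the filter coefficients, and the choice of the random walk matrix as the shift are illustrative assumptions, not fixed by the text), a shift-invariant filter is simply a polynomial in the chosen shift operator:

```python
import numpy as np

# Hypothetical 4-vertex undirected graph (adjacency matrix W).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))          # diagonal degree matrix
S = np.linalg.inv(D) @ W            # random walk matrix used as the graph shift

# A shift-invariant filter is a polynomial in S with real coefficients h_k.
h = [0.5, 0.3, 0.2]                 # assumed example coefficients
H = sum(hk * np.linalg.matrix_power(S, k) for k, hk in enumerate(h))

x = np.array([1.0, 0.0, 0.0, 0.0])  # graph signal as a row vector
y = x @ H                           # filtering under the right-multiplication convention

# Shift invariance: filtering commutes with the shift, H S == S H.
assert np.allclose(H @ S, S @ H)
```

Because the coefficients sum to one and the shift is row stochastic here, this particular filter also conserves the total signal, but that is a property of the example, not of shift-invariant filters in general.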

There are at least two major definitions of the shift operator, based on the graph adjacency matrix W or the (unnormalized) Laplacian matrix D − W, and alternatives with other properties have also been proposed [13]. Because different shift operator definitions lead to divergent tools and algorithms, practitioners face difficult choices when applying GSP techniques. It is thus crucial to develop a basic understanding of how graph shift operators differ and relate to each other. For this purpose, we connect to ideas from the parameterized Laplacian framework [10], where different operators can be interpreted as variants of random walk and consensus processes. We first reintroduce the framework in signal processing notation, described below and listed in Table I.

Term — Description
Nonnegative matrix — A real matrix with all nonnegative entries
Z-matrix — A real matrix with all nonpositive off-diagonal entries
x_t, x(t) — Graph signal at discrete step t or continuous time t
x^c(t) — Graph signal under the consensus basis
P — Random walk operator
C — Consensus operator
W — Adjacency matrix of graph G
D — Diagonal degree matrix of G
W̃ — Transformed adjacency matrix of G
D̃ — Diagonal degree matrix of W̃
Φ — Diagonal matrix with the dominating eigenvector of P
R — Diagonal replicating factor matrix
T — Diagonal delay factor matrix
L — General Laplacian operator (examples follow)
Random walk Laplacian — L = I − P, given P = D^{-1}W
Parameterized Laplacian — L = T^{-1}(I − D̃^{-1}W̃)
Replicator — Parameterized Laplacian with bias factors from the dominating eigenvector of W
Z-Laplacian — L_Z = T^{-1}(I − PR)
TABLE I: Glossary of terms and notation

We start by representing a discrete-time random walk as a signal on a directed graph:

x_{t+1} = x_t P,  P = D_out^{-1} W.
Here the update filter P is a row (right) stochastic matrix. The graph signal x_t represents the probability density of the random walk on each vertex at step t.

A consensus process on a graph can be viewed as the dual of a random walk, given by [10],

x^c_{t+1} = x^c_t C,  C = W D_in^{-1}.
Here, the filter C is a column (left) stochastic matrix. Assuming x^c_0 is the initial signal, at every time step each vertex updates its signal with the weighted average of its neighbors via multiplication by C. Unlike the graph signal in a random walk, the entries of x^c_t can be arbitrary (negative or positive) real numbers, without normalization constraints.
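The duality can be sketched numerically. In the snippet below (an assumed small undirected graph; we write P for the row-stochastic random walk matrix and C for the column-stochastic consensus matrix, following the conventions above), the random walk conserves total probability mass while consensus signals carry no such constraint:

```python
import numpy as np

# Assumed undirected example graph.
W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
Dinv = np.linalg.inv(D)

P = Dinv @ W        # random walk operator: rows sum to 1 (row stochastic)
C = W @ Dinv        # consensus operator: columns sum to 1 (column stochastic)

assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(C.sum(axis=0), 1.0)

# Random walk: total probability mass is conserved at every step.
x = np.array([0.7, 0.2, 0.1])
for _ in range(5):
    x = x @ P
assert np.isclose(x.sum(), 1.0)

# Consensus: entries may be arbitrary reals; each vertex averages its neighbors.
y = np.array([3.0, -1.0, 10.0])
y = y @ C
```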

The parameterized Laplacian can represent a continuous-time random walk, as in the differential equation

dx(t)/dt = −x(t) T^{-1} (I − D̃^{-1} W̃),
where W̃ is a transformed adjacency matrix and D̃ is the diagonal matrix with D̃(i,i) = Σ_j W̃(i,j); the matrix parameters B and T are discussed below. We also use G̃ to refer to the corresponding transformed graph itself, so that D̃ is the degree matrix of G̃ (see Table I).

Compared with the random walk Laplacian L = I − P, (5) has two additional parameter sets, B and T. The diagonal matrix B consists of vertex bias factors that alter the random walk trajectory by giving neighbors additional weights. In a biased random walk, the transition probability from vertex i to vertex j, denoted P(i,j), is multiplied by a target bias factor B(j,j). In the parameterized Laplacian framework [10, 14], we introduced the idea of the bias transformation to relate the biased random walk to an unbiased version.

Lemma 1 (Bias transformation).

Any biased random walk on W, with the diagonal matrix B specifying vertex bias factors, is equivalent to an unbiased random walk on the transformed graph W̃ = WB. If W is undirected, then we instead consider the transformed graph W̃ = BWB to maintain edges having equal weight in both directions.


See appendix. ∎

The other diagonal matrix parameter, T, effects time delays for the continuous-time random walk,222The bias transformation (also called the "reweighing transformation" in [10]) applies to both discrete- and continuous-time dynamical processes [15]. providing inverse clock rates that control how long the walk stays at each vertex. (Without loss of generality, we constrain all the diagonal entries of T with T(v,v) ≥ 1.) In Section IV, we will justify this intuition by connecting continuous-time processes to their discrete counterparts. Delayed continuous-time random walks can be captured using the delay transformation:

Lemma 2 (Delay transformation).

Any unbiased continuous-time random walk on W, with the diagonal matrix T specifying vertex delay factors, is equivalent to a continuous-time random walk without delays on the transformed graph W̃ = W + (T − I)D, where I is the identity matrix.


See appendix. ∎

The delay transformation enables us to view delay factors as self-loops, which can be absorbed into the transformed graph. A special case is when T is a scalar matrix, which can be understood as rescaling the global clock rate, so that all delays are identical.
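Lemma 2 is easy to check numerically. The sketch below assumes that the transformed graph adds a self-loop of weight (T(v,v) − 1) · deg(v) at each vertex v, with T the diagonal delay matrix; under that assumption, the delayed random walk Laplacian on the original graph matches the ordinary one on the self-looped graph:

```python
import numpy as np

# Assumed example: W is the adjacency matrix, T holds vertex delay factors >= 1.
W = np.array([[0, 2, 1],
              [2, 0, 1],
              [1, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
T = np.diag([1.0, 2.0, 4.0])          # hypothetical delay factors
I = np.eye(3)

# Delayed continuous-time random walk Laplacian on the original graph.
L_delayed = np.linalg.inv(T) @ (I - np.linalg.inv(D) @ W)

# Delay transformation: absorb delays as self-loops of weight (T - I) D.
W_tilde = W + (T - I) @ D
D_tilde = np.diag(W_tilde.sum(axis=1))
L_plain = I - np.linalg.inv(D_tilde) @ W_tilde

# The two Laplacians coincide, so delays are equivalent to self-loops.
assert np.allclose(L_delayed, L_plain)
```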

Beyond the bias and delay transformations, the full parameterized Laplacian framework also has a similarity transformation that unifies the random walk and consensus processes on undirected graphs.

Lemma 3 (Similarity transformation).

Any continuous-time random walk on an undirected graph, captured by the parameterized Laplacian L, with the diagonal matrices B and T specifying vertex bias factors and vertex delay factors, is equivalent, up to a change of basis, to a continuous-time dynamical process captured by the parameterized Laplacian L^(c) = Φ^c L Φ^{-c}, where c is the basis parameter and Φ is a diagonal matrix described below.


See the “similarity transformation” in [10]. ∎


In particular, we recover the random walk basis by setting c = 0, and the consensus basis with c = 1. Another relevant case is the symmetric basis with c = 1/2, which leads to a Laplacian operator represented by a symmetric matrix. In linear algebra, similarity is an equivalence relation for square matrices [11]. Similar matrices share many key properties, including their rank, determinant, and eigenvalues. Eigenvectors are also equivalent under a change of basis. Given the same initial signal on an undirected graph, the random walk (3) and consensus process (4) become identical at every time step, up to a change of basis. This follows from


where we used the fact that W is symmetric and C = DPD^{-1} for undirected graphs.

Using the bias, delay, and similarity transformations, the parameterized Laplacian framework unifies various linear operators and their associated centrality and community structure measures from the network science literature [10]. In particular, here we introduce one special operator called the replicator, which is related to epidemic models.

Lemma 4 (Replicator operator).

If W is undirected and v is the eigenvector of the adjacency matrix associated with the largest eigenvalue λ, so that Wv = λv, then a biased random walk with the diagonal matrix B = V = diag(v), whose bias factors are the components of v,333The replicator operator, not to be confused with a "replicating factor", also defines the maximum-entropy random walk under the random walk basis [16]. is represented by the matrix R_λ = W/λ under the symmetric basis.


By Lemma 1, the stochastic matrix of a biased random walk with B = V is

P_V = D̃^{-1} V W V,

where D̃ is the diagonal degree matrix of the transformed graph VWV. Because Wv = λv, we have D̃ = λV², and thus P_V = (1/λ) V^{-1} W V. The continuous-time counterpart of P_V is represented by the random walk Laplacian I − P_V. By setting c = 1/2 and Φ = V², according to Lemma 3 we have Φ^{1/2} P_V Φ^{-1/2} = W/λ, and therefore the replicator is R_λ = W/λ. ∎


In the sequel, we repeatedly use these transformations to design flexible operators that yield intuitive insight. While the similarity transformation becomes obsolete as we generalize to the Z-Laplacian, bias and delay transformations remain essential in practice for interpreting and comparing models on the same topology, as we will show in Sections V and VI.

Following the graph filter framework [8], we consider both P and C as potential graph shift operators with interpretable dynamical parameters. Because both shifts and their continuous-time counterparts follow the aforementioned transformations, they form an infinite family of graph shifts on a given graph. Note that these operators all have a dominating eigenvalue of 1, leading to an asymptotically stationary signal. During this process, the total signal (i.e., the sum of the components of x_t) is always conserved under the random walk basis, preventing these shifts from modeling non-conservative processes that grow or shrink over time.

III Epidemic model and nonnegative filters

In this section, we generalize the random walk and consensus processes to a broader class of operators, in the process unifying nonnegative linear graph filters. We begin by recalling a classic epidemic model, the susceptible-infected-susceptible (SIS) model.

III-A Epidemic model on a graph

To generalize beyond conservative dynamical processes, we first redefine the classic SIS epidemic model on the graph [17] using graph signals. The graph signal x_t now represents the probability that each vertex is infected at step t, given by

x_{t+1} = x_t ((1 − δ) I + β W).
Here, each vertex has two states, susceptible or infected. When a vertex is susceptible, each of its infected neighbors will cause it to transition to the infected state with virus infecting probability β. Once a vertex is infected, it will return to the susceptible state with virus curing probability δ.

An important theorem about the SIS model is that its asymptotic behavior depends on the ratio β/δ, the effective transmissibility of the virus. If the effective transmissibility is above the epidemic threshold, namely the inverse of the largest eigenvalue λ of the adjacency matrix W, the epidemic will spread to a significant portion of the network. Otherwise, it will eventually die out.
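The threshold statement can be illustrated with the linearized SIS update (a sketch; the star graph and the parameter values are arbitrary assumptions): the linearized epidemic grows precisely when the effective transmissibility exceeds the inverse of the largest adjacency eigenvalue:

```python
import numpy as np

# Assumed example graph (a star on 5 vertices) and SIS parameters.
n = 5
W = np.zeros((n, n))
W[0, 1:] = W[1:, 0] = 1.0              # hub-and-spoke adjacency

lam = max(np.linalg.eigvalsh(W))       # largest adjacency eigenvalue (here 2)
beta, delta = 0.3, 0.5                 # infecting / curing probabilities

# Linearized SIS update matrix: x_{t+1} = x_t ((1 - delta) I + beta W).
M = (1 - delta) * np.eye(n) + beta * W
growth = max(abs(np.linalg.eigvals(M)))

# The epidemic spreads iff the effective transmissibility exceeds 1 / lam.
spreads = beta / delta > 1 / lam
assert spreads == (growth > 1)
```

Both conditions reduce to βλ > δ, so the spectral check and the threshold rule always agree for the linearized model.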

III-B Epidemic model filters

To generalize (3), we introduce a uniform self-replicating factor γ after each random walk step, resulting in the following update rule and corresponding difference equation:

x_{t+1} = γ x_t P,  or equivalently  x_{t+1} − x_t = x_t (γ P − I).
Compared with (3), the signal vector in (III-B) does not necessarily sum to 1, so it is more general and capable of modeling dynamical processes like information and epidemic spreading.


The difference equation in (III-B) provides intuition as to the corresponding dynamics. For the uniform self-replicating factor γ, the corresponding growth rate is γ − 1 for all vertices in the network. With γ = 1, we have no replications and recover the conservative random walk process; with γ > 1, we have an expanding process in which the incoming probability flow is scaled by γ while the outgoing flow remains unchanged; and with γ < 1, we have a shrinking process. Also note that in all cases γ is the dominating eigenvalue of γP, corresponding to the all-ones eigenvector. In practice we often restrict γ > 0 so that the signal vector always has a positive sum, which converges to 0 in shrinking processes.
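A short simulation makes the three regimes concrete (the graph is an assumed example, and gamma plays the role of the uniform self-replicating factor): because each walk step conserves the total signal, replication scales the total by gamma per step:

```python
import numpy as np

# Assumed small undirected graph and its row-stochastic walk matrix.
W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)

def run(gamma, steps=10):
    x = np.array([1.0, 0.0, 0.0])      # one unit of initial signal
    for _ in range(steps):
        x = gamma * (x @ P)            # walk step followed by replication
    return x.sum()

# gamma = 1 conserves total signal; gamma > 1 expands it; gamma < 1 shrinks it.
assert np.isclose(run(1.0), 1.0)
assert np.isclose(run(1.2), 1.2 ** 10)
assert np.isclose(run(0.8), 0.8 ** 10)
```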

For undirected graphs, we can rewrite (III-B) as


where we have substituted in the replicator operator R_λ = W/λ under the symmetric basis (see Lemma 4), and reparameterized to match the SIS epidemic model defined on the adjacency matrix W, as in (III-A). Here, the virus infecting probability from neighbors corresponds to β, and the virus curing probability at an infected vertex corresponds to δ. Their ratio, the effective transmissibility β/δ, determines how the epidemic will spread on the adjacency matrix W.

With R_λ = W/λ, the inverse of the classic epidemic threshold, i.e., the dominating eigenvalue of the adjacency matrix W, is λ. If the effective transmissibility β/δ is greater than the threshold 1/λ, or simply γ > 1, we recover the definition of an expanding process. Similarly, conservative and shrinking processes are recovered with γ = 1 and γ < 1, respectively.

If we apply the duality between the random walk and consensus models to the non-conservative process in (III-B), we have

x^c_{t+1} = γ x^c_t C.
Here, the dual is a graph filter where each vertex first updates its signal using the weighted neighbor average, and then the average signal is amplified by a factor γ, leading to an expanding process with γ > 1 or a shrinking process with γ < 1.

III-C General nonnegative graph filters

To further generalize beyond epidemics with uniform self-replications, consider the following update and difference equations:

x_{t+1} = x_t P R,  or equivalently  x_{t+1} − x_t = x_t (P R − I),
where we have used a positive diagonal matrix R,444The replicating factors play the same role in the continuous-time Z-Laplacian, thus the notation here. whose diagonal elements R(v,v) model a shrinking or expanding replicating factor for each vertex v. The random walk step is now followed by a vertex-specific replicating process, with a generally non-uniform replicating factor specified by R.

Applying the same duality between consensus and random walk processes to this more general dynamical process, we have\deleted[remark=rjd: modified equation]


To interpret the operator PR, we focus on the dynamics of a specific vertex v, given by


where the transition probabilities P(u,v) form a weighted probability distribution over all incoming neighbors u of vertex v. Notice that the replicating factor of the incoming signals depends only on the target vertex. Compared with the uniform replicating factor in (III-B), the order of matrix multiplication now matters. Under the consensus model, with vertex v averaging the signals of its neighbors u, all signals are multiplied by the same factor R(v,v), whereas this factor is neighbor-dependent under the random walk basis.

Unlike the less general filters we have discussed previously, the two operators in (III-C) span the same vector space. This equivalence is easiest to show if we have an undirected graph and choose the parameters appropriately,555Mathematically, one has the liberty to manipulate the parameters in R and the underlying graph. In practice, however, we suggest setting them based on domain knowledge for intuition and interpretation. leading to


where we used the symmetry of the adjacency matrix for undirected graphs.

In fact, for general directed graphs, the vector space spanned by both operators contains all possible nonnegative matrices, and we call them general nonnegative filters. To prepare for this theoretical unification, we first consider the following lemma regarding the adjacency matrix and random walks.

Lemma 5 (Adjacency mapping).

For every directed weighted graph W, there is a unique transition matrix P = D_out^{-1}W that captures an unbiased random walk on W. Conversely, given a stochastic matrix P, there is an infinite family of adjacency matrices whose random walk process is consistent with P, denoted as


Since P is uniquely determined by a given W, every directed network uniquely defines a random walk process. However, given a transition matrix P, there remain degrees of freedom to specify the underlying network. Intuitively, the degrees of freedom can be interpreted as row scalings that proportionally multiply all outgoing edges of a vertex, thus preserving the random walk distribution leaving the given vertex. Here we represent these degrees of freedom using the entries of a positive diagonal matrix. ∎

Now we are ready to prove the unification theorem for nonnegative filters expressed with an arbitrary basis.

Theorem 1 (Basis Unification).

For any general nonnegative filter defined as PR on a graph, there is an equivalent dual filter defined on a dual graph, under any given basis parameter c.


Assume we have two equivalent filters defined on two different graphs. Then


Setting the two filters equal constrains the underlying adjacency matrices. Applying Lemma 5, the dual graph is a row-scaled version of the original graph. ∎

By Theorem 1, the duality in (III-C) essentially leads to another filter on a different graph. When the replicating factors are trivial (R = I), we recover the equivalence between the consensus and random walk processes. As a result, we will no longer need to separately specify dynamical processes in the random walk, consensus, or any other basis. We will thus suppress the superscript notations on filters and signals in the following sections.

Next, we prove that general nonnegative filters span all possible nonnegative matrices:

Theorem 2 (Generality Theorem).

Given an arbitrary nonnegative matrix F, we can associate it with a general nonnegative filter that models a general dynamical process on a graph.


Let W = F. For any nonnegative matrix F, W represents a graph adjacency matrix. By setting the replicating factors equal to the corresponding vertex degrees, the general nonnegative filter reproduces F. ∎

This simple proof essentially states that any nonnegative matrix can be interpreted as an epidemic model based on a random walk with non-uniform replicating factors that are proportional to the vertex degrees. From a degrees-of-freedom perspective, the non-uniform replicating factors perfectly match the row normalization constraint.
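One way to make this construction concrete (a sketch assuming the replicate-then-walk ordering of the two factors; the matrix is hypothetical) decomposes an arbitrary nonnegative matrix into a diagonal matrix of vertex degrees and a row-stochastic walk matrix:

```python
import numpy as np

# Hypothetical nonnegative matrix with positive row sums.
F = np.array([[0.2, 1.0, 0.0],
              [0.5, 0.0, 2.0],
              [1.0, 1.0, 1.0]])

# Interpret F itself as an adjacency matrix; its row sums are the degrees.
degrees = F.sum(axis=1)
R = np.diag(degrees)                  # replicating factors = vertex degrees
P = F / degrees[:, None]              # row-stochastic random walk matrix

assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(R @ P, F)          # the filter reproduces F exactly
```

The degree-proportional replicating factors absorb exactly the row normalization removed when forming the stochastic matrix, mirroring the degrees-of-freedom argument above.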

Combining Theorems 1 and 2, we observe the full generality of the proposed nonnegative graph filters. Based on the algebraic framework introduced in [8], these nonnegative shift operators also define a shift-invariant vector space, which can be generally expressed as polynomials in the shift. This unification opens the door to discrete signal processing methodologies while connecting the concepts of random walks, epidemics, and information diffusion on networks.

IV Continuous-time filters and the Z-Laplacian

Based on the discrete nonnegative graph filters in (III-C), we can define continuous dynamical processes by replacing the difference equations with differential equations, yielding666The variable t is a real number representing continuous time.

dx(t)/dt = −x(t) T^{-1} (I − P R),
where we define the operator L_Z = T^{-1}(I − PR) as the Z-Laplacian.

Here the positive diagonal matrix T represents the time delay, or inverse clock rate, at each vertex. When R = I, we have L_Z = T^{-1}(I − P), which corresponds to a special case of the parameterized Laplacian we introduced in [6]. Compared with discrete-time filters with uniform time steps, (16) allows asynchronous updates by modeling the movements of random walkers as Poisson processes whose waiting times between jumps are exponentially distributed with means specified by T [15].

To connect the continuous-time Z-Laplacian to discrete filters, we first prove the following lemma over a short time interval Δt.

Lemma 6 (Discrete approximation).

Given the graph signal x(t) and the continuous-time Z-Laplacian L_Z = T^{-1}(I − PR), the graph signal at time t + Δt can be approximated by

x(t + Δt) ≈ x(t) (I − Δt L_Z),

where I − Δt L_Z represents a discrete-time filter.


See appendix. ∎


Letting q denote a uniformization rate with Δt = 1/q, we rewrite the discrete-time filter as


Substituting this form, we can write the solution of (16) as


Using the "uniformization technique" [18], we can write the matrix exponential in terms of an infinite sum of matrix powers to obtain


where the coefficient of each matrix power is given by the Poisson probability mass function: the probability that a discrete-time random walk with the given intensity takes that number of steps during the time interval.

As (IV) demonstrates, the continuous process can be interpreted as a weighted infinite sum of powers of the discrete filter, representing discrete-time random walks of different lengths. The following theorem gives us intuition for the delay factors in T.
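The uniformization identity behind this interpretation can be verified directly. The sketch below assumes the trivial parameterization (unit delays and no replication), so the Z-Laplacian reduces to I − P, and compares the matrix exponential with a truncated Poisson-weighted sum of walk powers:

```python
import numpy as np
from math import exp, factorial

# Assumed small graph; with T = R = I the Z-Laplacian is L = I - P.
W = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)
I = np.eye(3)
t = 1.5

# Direct propagator exp(-t (I - P)) of dx/dt = -x L,
# computed via eigendecomposition to keep the sketch dependency-free.
vals, vecs = np.linalg.eig(-(I - P) * t)
propagator = (vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)).real

# Uniformization: exp(-t (I - P)) = sum_k Poisson(k; t) P^k.
approx = np.zeros((3, 3))
Pk = np.eye(3)
for k in range(60):
    approx += exp(-t) * t**k / factorial(k) * Pk
    Pk = Pk @ P

assert np.allclose(propagator, approx, atol=1e-10)
```

Each term weights the k-step discrete walk P^k by the Poisson probability of taking exactly k jumps in time t, matching the interpretation in the text.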

Theorem 3 (Delay interpretation).

Given the continuous-time Z-Laplacian L_Z = T^{-1}(I − PR) and its discrete-time approximation, the delay factor T(v,v) of vertex v is proportional to the expected "waiting steps" on vertex v for the approximated discrete-time random walk.


See appendix. ∎

Lemma 6 and Theorem 3 provide an interpretation of the delay factors in T and connect the continuous- and discrete-time models. In the next section, we give an example, applying the continuous-time model to a communications network.

Next, we prove the central theorem of the Z-Laplacian framework, which extends the generality theorem (Theorem 2) to the continuous domain.

Theorem 4 (Continuous-Time Generality Theorem).

For an arbitrary Z-matrix Z, we can associate it with a Z-Laplacian L_Z that models a general continuous-time dynamical process on a graph (with self-loops).


Without loss of generality, we set T = (1/τ)I for a scalar τ > 0, and thus

L_Z = τ (I − P R).

Let Z be an arbitrary Z-matrix, i.e., a real square matrix with nonpositive off-diagonal entries [11]. Then I − Z/τ has only nonnegative off-diagonal entries. If there are negative diagonal entries, we choose τ larger than the largest diagonal entry of Z, making I − Z/τ a nonnegative matrix.

By setting PR = I − Z/τ, we have L_Z = Z, where by Theorem 2 the nonnegative matrix PR corresponds to a graph adjacency matrix (with self-loops), given any Z-matrix Z. ∎

Theorem 4 justifies the generalized continuous-time operator as the Z-Laplacian, which is a unifying framework for all potential shift operators. The Z-Laplacian is closely related to the generalized Laplacian [19, 20], the latter being the symmetric subset of the Z-Laplacian.
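To make the construction in Theorem 4 tangible, the sketch below starts from a hypothetical Z-matrix, chooses a uniform delay scalar tau (an assumed parameterization) large enough that I − Z/tau is nonnegative, and then recovers the original Z-matrix as the corresponding Z-Laplacian:

```python
import numpy as np

# Hypothetical Z-matrix: nonpositive off-diagonal entries, arbitrary diagonal.
Z = np.array([[ 2.0, -1.0, -0.5],
              [-0.3,  0.4, -0.1],
              [ 0.0, -2.0,  3.0]])
n = Z.shape[0]
I = np.eye(n)

# Choose a uniform delay scalar tau large enough that I - Z / tau >= 0.
tau = max(1.0, Z.diagonal().max())
F = I - Z / tau                        # nonnegative matrix, interpretable as P R
assert (F >= 0).all()

# The corresponding Z-Laplacian, with the uniform delay out front,
# reproduces the original Z-matrix.
L_Z = tau * (I - F)
assert np.allclose(L_Z, Z)
```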

V Communications Network Analysis

It is common practice to use a graph to model a communications network, with edges modeling one-hop connectivity between nodes. For example, convergence analysis of a consensus process as in (4) is linked with the properties of the adjacency matrix of the graph [7]. This assumes an underlying communications protocol based on the connectivity, where each iteration requires every node to have a message exchange with its one-hop neighbors. This is a useful abstraction, but does not model the impact of channel variation, rate, multi-user interference, or delay, as occur in wireless communications applications such as sensor and mobile ad-hoc networks. Employing the Z-Laplacian framework enables the modeling of delays and multi-user collisions, which are a function of the communications protocol and the graph topology. To illustrate this, we consider analysis of a communications network to include delay effects and their linkage with the topology and the medium access control (MAC) protocol. We study the isolation of network bottlenecks, including the impact of MAC choices, as well as topology alteration by inserting nodes and changing bandwidth allocation to improve network connectivity.

V-a Networking Bottlenecks and the Z-Laplacian

Given a network graph, consider the problem of identifying bottlenecks where traffic may funnel through a small subset of nodes and create excess delays in the overall network performance. We build on prior work for bottleneck identification based solely on the network topology [21, 22], briefly reviewed next. We use a running example throughout this section, illustrated in Figures 1, 2, and 3. In the figures, colors indicate graph subset membership, colored disks around each vertex indicate self-loops whose weight is proportional to communications delay at that vertex (with larger area implying more delay), and graphical edge thickness is proportional to link rate between the two vertices (with a thicker edge implying higher bandwidth).

In the simplest setting [21, 22], we assume that all communications are orthogonal, with no multi-user interference or additional delay, and that each vertex always has a packet to transmit (a fully saturated network traffic model). Recalling our notation from Section II, the network graph is , and elements of are denoted . We begin with the graph network model shown in Fig. 1.

(a) Traffic = , Conductance =
(b) Traffic = , Conductance =
Fig. 1: Communications network example graph (see text). Colors indicate graph subset membership after bottleneck discovery. (a): An ideal network graph with uniform and orthogonal connectivity and no delays. (b): To model a random access MAC protocol, self-loops are introduced to model traffic delay, indicated graphically as colored circles around the vertices, the size of the circles being proportional to delay. In both subfigures, the primary communications bottleneck lies at the boundary of the two vertex communities, indicated by their different colors. The total traffic is measured by the total weighted degree, and the global efficiency is measured by the conductance.

Assuming each undirected edge carries one unit of traffic in both directions, the overall network traffic equals the total weighted degree, or for our example. Under the Z-Laplacian framework, as in (16), this simple traffic pattern can be captured by setting both the vertex delays and replicating factors to the identity matrix, so , and

L = D - A,
where is the (diagonal) degree matrix (see Table I). This corresponds to the idealized case in Fig. 1(a). By setting , we assume each vertex spends one unit of delay to process the packet, whereas additional delays will represent interference or collisions, as described below. Here we assume that packets only flow along graph edges and that each packet is only sent once (there is no duplication or broadcasting of packets), as indicated by .
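As a concrete illustration of this base case, the sketch below builds the Laplacian of a small toy graph (a 4-vertex stand-in of our choosing, not the network of Fig. 1) and checks the defining zero row-sum property.

```python
import numpy as np

# Toy undirected topology standing in for the example network of Fig. 1.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # diagonal degree matrix
L = D - A                    # base-case (combinatorial) Laplacian

total_traffic = A.sum()      # each edge carries one unit in both directions
assert np.allclose(L.sum(axis=1), 0)   # rows of a Laplacian sum to zero
print(total_traffic)         # 8.0: twice the number of undirected edges
```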

To find and quantify the primary network bottleneck under different MAC protocols, we use the more flexible transformed graph for definitions (see Table I). For the base-case Laplacian in (21), we simply set .

We split the vertices into two subsets, S and its complement. Let denote the total weight of all edges between and . Let denote the total (undirected, weighted) degree over all vertices in S. The ratio of these two quantities measures the balanced separation strength of the transformed graph, given by

\phi(S) = \frac{w(S,\bar{S})}{\min\{d(S),\, d(\bar{S})\}}.
The quantity is called the conductance, and we will use its minimum

\phi^{*} = \min_{S}\, \phi(S)
to measure the network bottleneck and its minimizer to locate the bottleneck at its boundary. (We use the optimization algorithm introduced in [10] to find both.) The base-case Laplacian leads to a minimum conductance of .
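For small graphs, the minimum conductance and its minimizer can be found by exhaustive search, as in the sketch below (our own toy example of two triangles joined by a bridge; the experiments in this section instead use the spectral algorithm of [10]).

```python
import numpy as np
from itertools import combinations

def conductance(A, S):
    """Conductance of vertex subset S: cut weight over the smaller side's volume."""
    n = A.shape[0]
    S = np.asarray(sorted(S))
    Sc = np.setdiff1d(np.arange(n), S)
    cut = A[np.ix_(S, Sc)].sum()          # total weight crossing the cut
    vol = min(A[S].sum(), A[Sc].sum())    # smaller total weighted degree
    return cut / vol

def min_conductance_brute(A):
    """Exhaustive minimizer -- fine for toy graphs only."""
    n = A.shape[0]
    best = (np.inf, None)
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            best = min(best, (conductance(A, S), S))
    return best

# Two triangles joined by a single edge: the bridge is the obvious bottleneck.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
phi, S = min_conductance_brute(A)
print(S, phi)   # the minimizer is one of the two triangles
```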

V-B MAC Protocols and the Z-Laplacian

The baseline analysis above assumes ideal and orthogonal communications, without any consideration of delay. Next, we adjust parameters of the Z-Laplacian to model different dynamical processes (protocols) on the same topology. Note that each protocol will induce a potentially different primary bottleneck position and strength according to (23), without changing the topology.

Generally, depending on the protocol and topology, the vertex delays will be non-uniform. Consider a random access MAC, which will result in transmission collisions and packet delays due to the need for backoff and retransmission. As one modeling approach, let the vertex delays be proportional to the vertex degree, implying that higher-degree vertices will have more communications delay because collisions are more likely. Specifically, let so the delays are equal to the degree for each vertex. (More complicated super-linear scaling can be implemented by other choices for the delay factors.) The resulting Z-Laplacian is


By appealing to the delay transformation in Lemma 2, delays are modeled by introducing vertex self-loops, and we have the corresponding transformed graph . Applying this transformation to our example, again assuming a saturated traffic model, we obtain Fig. 1(b). Now the more realistically modeled network is considerably less efficient than its idealized version, with total traffic and a minimum conductance of . Here, we have repeated the above bottleneck discovery algorithm, with the two resulting subsets shown in Fig. 1(b). Although not evident from this example, we emphasize that changing the network protocol may result in a different bottleneck position and strength.
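One concrete way to realize this delay transformation is the "lazy walk" construction sketched below, where a vertex delay factor becomes a self-loop sized so that the walker leaves the vertex with probability inversely proportional to the delay. The specific self-loop weights and the toy graph are our assumptions for illustration.

```python
import numpy as np

# Sketch of the delay transformation (Lemma 2), under a "lazy walk" reading:
# a vertex delay factor tau_i is realized as a self-loop of weight
# (tau_i - 1) * d_i, so a walker leaves vertex i with probability 1 / tau_i.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
tau = d.copy()                           # random-access MAC: delay ~ degree

A_hat = A + np.diag((tau - 1.0) * d)     # transformed graph with self-loops
P_hat = A_hat / A_hat.sum(axis=1, keepdims=True)

# The expected sojourn time at vertex i is geometric with mean tau_i.
leave_prob = 1.0 - np.diag(P_hat)
assert np.allclose(1.0 / leave_prob, tau)
```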


Suppose now that we employ a time-division multiple access (TDMA) protocol to minimize collisions and enhance overall network throughput. Fig. 2(a) models this case as a graph filter, where each vertex evenly divides its transmission time among its neighbors. As a result, edges between high-degree vertices have a reduced effective bandwidth due to their smaller share of the time-division allocation. This is readily captured under the Z-Laplacian framework using the bias transformation (Lemma 1):


where we have applied the undirected version of the bias transformation on both sides of , and is the diagonal degree matrix of the transformed graph . For comparison, we started with the same traffic as in Fig. 1(a), with each edge carrying one unit of traffic in both directions. We first saturated the time-divided effective bandwidth and then visualized the remaining traffic as self-loops or, effectively, vertex delay factors . The resulting transformed graph is , as visualized in Fig. 2(a).
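A minimal numerical sketch of this TDMA model, assuming bias factors equal to the reciprocal degrees applied on both sides of the adjacency (our reading of the undirected bias transformation; the paper's exact scaling may differ):

```python
import numpy as np

# TDMA via the bias transformation (Lemma 1), assuming bias factors 1/d_i
# applied symmetrically: each vertex splits its airtime evenly among its
# neighbors, so edges between high-degree vertices get a smaller share.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
B = np.diag(1.0 / d)
A_tdma = B @ A @ B                 # effective bandwidth A_ij / (d_i * d_j)

# The edge between the two highest-degree vertices is throttled the most.
i, j = np.argsort(d)[-2:]
assert np.isclose(A_tdma[i, j], A[i, j] / (d[i] * d[j]))
```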

(a) Traffic = , Conductance =
(b) Traffic = , Conductance =
Fig. 2: Communications network example, continued (see text). (a): A TDMA protocol is modeled with self-loops modeling the delay time at each vertex due to time slot allocation, assuming a saturated traffic load. (b): The same TDMA protocol is modeled, now with the traffic load perfectly matched to the available time slots, resulting in no delays.

It is a more efficient system compared to that of Fig. 1(b), with increased minimum conductance (now equal to ) and no redundant traffic (equal to ).

When the network traffic load becomes lighter, TDMA protocols are less efficient in terms of utilizing the full bandwidth. Fig. 2(b) models a perfectly loaded TDMA system, where the traffic is precisely matched to the capacity of each edge and each message hits its time slot without any delay. Compared with the perfectly loaded random access system in Fig. 1(a), the network carries much less total traffic. These examples illustrate how the traffic load and protocol-induced delays can be modeled together to study the overall network performance, including the discovery of bottlenecks.

V-C Healing Networks by Vertex Insertion and Bandwidth Allocation

Once a bottleneck has been identified, we can consider network healing by augmenting the network. There are various ways to do this, such as bandwidth reallocation to reduce delays. Here, we consider augmenting the network topology by introducing a (perhaps higher bandwidth) link across the bottleneck. We limit our study to the introduction of a new edge in the existing graph, although the general framework allows for more elaborate combinations of new vertex introduction and optimized resource allocation, which can correspond to an adaptive enhancement at the network physical layer and/or the MAC. There are many possibilities, and this is an interesting topic for future study.

We continue our running example in this section, beginning with the random access protocol model in Fig. 1(b). Figs. 3(a) and 3(b) depict two cases where a new edge is introduced.

(a) Traffic = , Conductance =
(b) Traffic = , Conductance =
(c) Traffic = , Conductance =
Fig. 3: Communications network example, continued (see text). Two examples of bottleneck alleviation, using the results from Fig. 1(b), by introducing a new edge, shown as the thick blue line in (a) and the thick red line in (b). While the total traffic measure is the same for both cases, (b) has higher conductance and leads to a more globally efficient network. In (c), we include the additional delays caused by the introduction of the new edge.

The new edges have a bandwidth that is four times greater than that of the preexisting edges (and are hence graphically thicker). We further assume the new edges do not introduce new interference to the preexisting edges, and so no new delays are incurred. The first case, shown in Fig. 3(a), connects two peripheral vertices at the corners of the two subsets, whereas the second case, shown in Fig. 3(b), connects two central vertices on opposite sides of the bottleneck.

In line with our intuition, the cross-link between central vertices in Fig. 3(b) leads to an overall more efficient system as measured by conductance. The central vertices are more available to the entire network on both sides of the bottleneck. For both cases we also repeated the bottleneck discovery after the new edge was introduced, as shown in Fig. 3 with the new color groupings. Comparing with Fig. 1(b), the new edge in Fig. 3(a) has less of an impact on the bottleneck subsets than the more effective new edge in Fig. 3(b).

To demonstrate that changing the network protocol on the same topology may result in different bottlenecks, we also repeated the bottleneck discovery with the assumption that the new edge does introduce delays proportional to the increased vertex degrees. As shown in Fig. 3(c), the additional delays naturally lead to a less efficient system. Compared with Fig. 1(b), the bottleneck location remains the same with the added edge but now with lower minimum conductance.

This analysis can be expanded to discover bottlenecks and the corresponding optimal choices for new edges in an iterative manner, providing network enhancement options and adding robustness.

Vi Exploring graph frequency analysis for brain connectivity

To illustrate how Z-Laplacian based graph filters can be used in frequency analysis, we consider structural brain networks built from diffusion weighted imaging MRI scans of 40 experiment participants [23]. (Diffusion weighted imaging captures bundles of white matter fibers, revealing the anatomical connections between different parts of the brain.) Frequency-specific brain activity is well known and is associated with different brain states. Graph frequency analysis is thus a useful signal processing tool for studying functional brain networks [24, 25]. Recently, evidence has also emerged that structural brain networks may organize by graph spectrum [26, 27, 28]. However, previous work is mostly limited to the standard shift operators based on graph adjacency or (unnormalized) Laplacian matrices. Here we demonstrate the flexibility of the Z-Laplacian in graph frequency analysis of given structural networks. In the future, we plan to investigate direct signal analysis of functional networks.

To demonstrate frequency analysis based on different candidate shift operators, we construct multiple Z-Laplacian operators using the same average network over all 40 samples. Fig. 4 depicts the undirected weighted adjacency structure of the average structural network and a corresponding visualization. (A visualization displays the edges with the greatest weight, up to a given percentile; the percentile measure includes zero-weight edges or non-edges.) In Fig. 4, the visualization contains the top 1,096 edges in terms of weight, whereas the visualizations in Fig. 5 each contain 548 edges.

Fig. 4: The adjacency matrix and visualization of the average structural brain network

The visualization uses a circular layout similar to a connectogram [29]. We have also labeled the four major regions of the cerebral cortex in the human brain, namely the frontal lobe, parietal lobe, occipital lobe, and temporal lobe. For a better cross-hemisphere visualization, we adopted the circular symmetry and concatenated the left and right hemispheres head to tail. We color-coded the brain regions from blue to red in each hemisphere, and the stem is colored black. There are other brain regions in the network that are not labeled (dark red). Some vertices from the limbic lobe are merged into the frontal (blue) and parietal (green) lobes. Note that the averaged network is quite dense, with 27,540 total non-zero edges from all 40 samples.

Consider the following three Z-Laplacians as candidate shift operators:


These symmetric versions of the Z-Laplacian (generalized Laplacians) are obtained through the similarity transformation (Lemma 3). The first shift operator, , is the symmetrized unbiased random walk Laplacian on the original adjacency. It is also the symmetric normalized Laplacian, which has been suggested as a shift operator [9]. The second shift operator, , represents the biased random walk in (25) with an increased tendency towards lower-degree vertices. The last shift operator, , is the Z-Laplacian representing a self-replicating process based on the unbiased random walk but with set to for vertices in the frontal lobe (blue) and for other vertices. Thus, emphasizes the hypothetical information flow associated with the frontal lobe. All three Z-Laplacian operators share the same uniform vertex time delays with .
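For the first of these operators, the similarity relation is easy to verify numerically. The sketch below (our toy graph, not the brain network) checks that the random-walk Laplacian and its symmetrized, degree-normalized form share the same spectrum.

```python
import numpy as np

# Toy undirected graph of our choosing, used only to illustrate the operators.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

L_rw = np.eye(4) - np.diag(1.0 / d) @ A              # random-walk Laplacian
L_sym = D_inv_sqrt @ (np.diag(d) - A) @ D_inv_sqrt   # symmetric normalized form

# Similarity transformation (Lemma 3): the two operators are similar,
# hence they share the same real spectrum.
assert np.allclose(np.sort(np.linalg.eigvals(L_rw).real),
                   np.linalg.eigvalsh(L_sym))
```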

Following the approach of graph frequency analysis [9, 12], we consider the eigendecomposition

L = V \boldsymbol{\Lambda} V^{T}
of the Laplacians, where the columns of are eigenvectors of and is a diagonal matrix comprising the corresponding eigenvalues. By replacing all but the four smallest eigenvalues with zero, we build simple low-pass filters based on the three different Z-Laplacian shift operators. The definition of high- and low-frequency bands is application specific. In this study, we choose the smallest four eigenpairs to be the low frequencies because we are interested in analyzing the structure of four major brain regions. We can demonstrate the effect of these filters by reconstructing the adjacency structures based on the original and matrices:


where contains the four smallest eigenvalues on its diagonal, with all remaining elements set to zero.
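The low-pass construction can be sketched as follows. The helper function, the random test graph, and the mapping of the filtered Laplacian back to an adjacency-like structure are our illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def lowpass_reconstruct(A, k=4):
    """Keep the k smallest-eigenvalue modes of the symmetric normalized
    Laplacian of A and map the result back to an adjacency-like matrix."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ (np.diag(d) - A) @ D_inv_sqrt
    lam, V = np.linalg.eigh(L)            # eigenvalues in ascending order
    lam[k:] = 0.0                         # zero all but the k smallest
    L_lp = V @ np.diag(lam) @ V.T
    return np.diag(np.diag(L_lp)) - L_lp  # off-diagonal part, sign-flipped

rng = np.random.default_rng(0)
n = 8
A = rng.random((n, n))
A = np.triu(A, 1) + np.triu(A, 1).T       # random weighted undirected graph
A_lp = lowpass_reconstruct(A, k=4)
assert np.allclose(A_lp, A_lp.T)          # filtering preserves symmetry
```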

The results are shown in Fig. 5, using visualizations.

(a) Low-pass filtered
(b) Low-pass filtered
(c) Low-pass filtered
(d) High-pass filtered
Fig. 5: Depictions ( visualizations) of the reconstructed adjacency structures after filtering.

Comparing Fig. 5(a) to Fig. 4, the high-frequency signals from are filtered out, leaving mostly stronger cross-hemisphere connections. In Fig. 5(b), the filtered network highlights distinctly different structures, with low-degree vertices from Fig. 4 now dominating the low-frequency spectrum. Strong cross-hemisphere connections associated with hub vertices are filtered out, revealing within-hemisphere connection patterns, especially those between the parietal, occipital, and temporal lobes. In Fig. 5(c), the matrix is carefully designed to emphasize vertices in the frontal lobe. As expected, the result properly highlights the frontal lobe region and its internal connections, both within and across the hemispheres.

Finally, Fig. 5(d) depicts the effect of applying a high-pass filter obtained by replacing the four smallest eigenvalues of with zero and leaving the rest unchanged. Comparing Figs. 5(a)–(d), we see that the different shift operators each produce unique patterns. Manipulating the Z-Laplacian thus allows us to explore a variety of shift operators for brain connectivity analysis. More importantly, the dynamical process associated with each Z-Laplacian provides meaningful intuition and interpretations of the resulting shift operator and the induced family of linear invariant filters.

Vii Conclusions and future work

In this paper, we proposed the Z-Laplacian framework, which is capable of modeling different discrete- and continuous-time dynamical processes on graphs, including diffusion and epidemic processes. We proved that the Z-Laplacian spans the space of Z-matrices, leading to a general framework that unifies existing linear operators in the literature. When used as graph shift operators in applications, Z-Laplacian operators and their induced signal processing analysis have intuitive connections to the dynamical processes they model. This is especially useful for relating and comparing different aspects of the same topological structures.

The Z-Laplacian framework also naturally connects to concepts in network science, enabling graph theoretical methods to be used in signal processing problems. In particular, we showed a novel analysis that coupled network bottleneck discovery with the underlying wireless network protocol, including the impact of delay and collisions. We demonstrated how conductance can be used to find primary bottlenecks, whose location and effect may change with the choice of MAC protocol, and we considered topology modifications to alleviate the bottleneck and heal the network. This leads to more general questions about the effects of protocols on network dynamical processes, such as consensus, as well as the study of resource allocation within the network, which are important issues for further study. We also showed how a variety of graph shift operators can be applied to the problem of structural brain connectivity frequency analysis. In the future, we plan to apply GSP tools to functional brain signals. We will also investigate the mathematical properties of Z-Laplacians, unifying vertex centrality and community structure under the framework as we did previously for the parameterized Laplacians [10].


  • [1] M. Newman, Networks: An Introduction. New York: Oxford, 2010.
  • [2] P. Bonacich, “Power and centrality: a family of measures,” American J. Sociology, vol. 92, pp. 1170–1182, March 1987.
  • [3] S. Fortunato, “Community detection in graphs,” Physics Reports, vol. 486, pp. 75–174, Jan. 2010.
  • [4] S. P. Borgatti, “Centrality and network flow,” Social Networks, vol. 27, pp. 55–71, Jan. 2005.
  • [5] R. Ghosh and K. Lerman, “Rethinking centrality: the role of dynamical processes in social network analysis,” Discrete Continuous Dynamical Syst. Series B, vol. 19, pp. 1355–1372, July 2014.
  • [6] R. Ghosh, S.-H. Teng, K. Lerman, and X. Yan, “The interplay between dynamics and networks: centrality, communities, and Cheeger inequality,” in Proc. 20th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, 2014, pp. 1406–1415.
  • [7] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. IEEE, vol. 95, pp. 215–233, Jan. 2007.
  • [8] A. Sandryhaila and J. M. F. Moura, “Discrete signal processing on graphs,” IEEE Trans. Signal Process., vol. 61, pp. 1644–1656, Apr. 2013.
  • [9] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Process. Mag., vol. 30, pp. 83–98, May 2013.
  • [10] X. Yan, S.-h. Teng, K. Lerman, and R. Ghosh, “Capturing the interplay of dynamics and networks through parameterizations of Laplacian operators,” PeerJ Computer Science, vol. 2, e57, May 2016.
  • [11] M. Fiedler, Special Matrices and Their Applications in Numerical Mathematics. New York: Dover, 2008.
  • [12] A. Sandryhaila and J. M. Moura, “Discrete signal processing on graphs: frequency analysis,” IEEE Trans. Signal Process., vol. 62, pp. 3042–3054, Apr. 2014.
  • [13] A. Gavili and X.-P. Zhang, “On the shift operator, graph frequency and optimal filtering in graph signal processing,” ArXiv e-prints, July 2017 [Online]. Available:
  • [14] X. Yan, S.-H. Teng, and K. Lerman, “Multi-layer network composition under a unified dynamical process,” Lecture Notes in Computer Science. vol. 10354, Springer, 2017.
  • [15] R. Lambiotte, J.-C. Delvenne, and M. Barahona, “Random walks, Markov processes and the multiscale modular organization of complex networks,” IEEE Trans. Netw. Sci. Eng., vol. 1, pp. 76–90, July–Dec. 2014.
  • [16] J. Gómez-Gardeñes and V. Latora, “Entropy rate of diffusion processes on complex networks,” Physical Review E, vol. 78, 065102, Dec. 2008.
  • [17] Y. Wang, D. Chakrabarti, C. Wang, and C. Faloutsos, “Epidemic spreading in real networks: an eigenvalue viewpoint,” in Proc. 22nd Int. Symp. Reliable Distributed Systems, 2003, pp. 25–34.
  • [18] A. Reibman and K. Trivedi, “Numerical transient analysis of Markov models,” Comput. Operations Research, vol. 15, pp. 19–36, 1988.
  • [19] D. Taylor, S. A. Myers, A. Clauset, M. A. Porter, and P. J. Mucha, “Eigenvector-based centrality measures for temporal networks,” Multiscale Modeling Simulation, vol. 15, pp. 537–574, March 2017.
  • [20] E. Pavez and A. Ortega, “Generalized Laplacian precision matrix estimation for graph signal processing,” in 2016 IEEE Int. Conf. Acoustics, Speech Signal Process., 2016, pp. 6350–6354.
  • [21] M. X. Cheng, Y. Ling, and B. M. Sadler, “Wireless ad hoc networks connectivity assessment and relay node deployment,” in 2014 IEEE Global Commun. Conf., pp. 399–404.
  • [22] M. X. Cheng, Y. Ling, and B. M. Sadler, “Network connectivity assessment and improvement through relay node deployment,” Theoretical Comput. Sci., vol. 660, pp. 86–101, Jan. 2017.
  • [23] R. F. Betzel, A. Griffa, A. Avena-Koenigsberger, J. Goñi, J.-P. Thiran, P. Hagmann, and O. Sporns, “Multi-scale community organization of the human structural connectome and its relationship with resting-state functional connectivity,” Netw. Sci., vol. 1, pp. 353–373, Dec. 2013.
  • [24] W. Huang, L. Goldsberry, N. F. Wymbs, S. T. Grafton, D. S. Bassett, and A. Ribeiro, “Graph frequency analysis of brain signals,” IEEE J. Selected Topics Signal Process., vol. 10, pp. 1189–1203, Oct. 2016.
  • [25] S. Mowlaei, A. Singh, and A. Ghuman, “Frequency bands are an organizational force of intrinsic brain networks,” Soc. Neuroscience, 2016.
  • [26] M. Daianu, A. Mezher, N. Jahanshad, D. P. Hibar, T. M. Nir, C. R. Jack, M. W. Weiner, M. A. Bernstein, and P. M. Thompson, “Spectral graph theory and graph energy metrics show evidence for the Alzheimer’s disease disconnection syndrome in APOE-4 risk gene carriers,” 2015 IEEE 12th Int. Symp. Biomedical Imaging, 2015, pp. 458–461.
  • [27] J. D. Medaglia, W. Huang, E. A. Karuza, S. L. Thompson-Schill, A. Ribeiro, and D. S. Bassett, “Functional alignment with anatomical networks is associated with cognitive flexibility,” ArXiv e-prints, Nov. 2016 [Online]. Available:
  • [28] M. Daianu, G. Ver Steeg, A. Mezher, N. Jahanshad, T. M. Nir, X. Yan, G. Prasad, K. Lerman, A. Galstyan, and P. M. Thompson, “Information-theoretic clustering of neuroimaging metrics related to cognitive decline in the elderly,” Lecture Notes in Computer Science. vol. 9601, Springer, 2016, pp. 13–23.
  • [29] A. Irimia, M. C. Chambers, C. M. Torgerson, and J. D. Van, “Circular representation of human cortical networks for subject and population-level connectomic visualization,” Neuroimage, vol. 60, pp. 1340–1351, April 2012.

Xiaoran Yan (M’17) received the B.S. degree from Zhejiang University, China, in 2007 and the Ph.D. degree from the University of New Mexico, Albuquerque, NM, USA, in 2013, both in computer science. From 2013 to 2015, he was a Postdoctoral Research Associate at the University of Southern California Information Sciences Institute. He joined the Indiana University Network Science Institute in 2015 as an Assistant Research Scientist. His research interests include network science, statistical machine learning and their trans-disciplinary applications in social networks, communications networks and neuroscience. He has authored multiple journal and conference papers, and has participated as a reviewer or program committee member for major organizations such as IEEE and AAAI.

Brian M. Sadler (S’81–M’81–SM’02–F’07) received the B.S. and M.S. degrees from the University of Maryland, College Park, MD, USA, and the Ph.D. degree from the University of Virginia, Charlottesville, VA, USA, all in electrical engineering. He is the US Army Senior Scientist for Intelligent Systems and a Fellow of the Army Research Laboratory (ARL) in Adelphi, MD, USA. His research interests include information science and networked and autonomous intelligent systems. He received Best Paper Awards from the Signal Processing Society in 2006 and 2010 and was general co-chair of the 2016 IEEE Global Conference on Signal and Information Processing. He was an associate editor for IEEE Transactions on Signal Processing and IEEE Signal Processing Letters. He has been a guest editor for several journals, including IEEE JSTSP, IEEE JSAC, the IEEE SP Magazine, and the International Journal of Robotics Research.

Robert J. Drost (M’10–SM’14) received the B.S. degree in electrical engineering from the University of Arkansas, Fayetteville, AR, USA, in 2000 and the M.S. degree in electrical engineering, the M.S. degree in mathematics, and the Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign, Champaign, IL, USA, in 2002, 2005, and 2007, respectively.

From 2007 to 2009, he was a Digital Signal Processing Design Engineer with Finisar Corporation. He joined the Army Research Laboratory (ARL) in 2010 as a Postdoctoral Fellow and, subsequently, an Electronics Engineer. He has authored several journal and conference papers, receiving the 2013 ARL Honorary Award for Publication. He has also authored four patents or patent applications and has participated as a reviewer and panelist for major organizations such as NSF and IEEE. His research interests include optical communications, signal processing, and graphical models.

Paul Yu (Member) received the Ph.D. degree in Electrical Engineering from the University of Maryland, College Park. Since 2006, he has been with the U.S. Army Research Laboratory (ARL) where his work is in the area of signal processing for wireless networking and autonomy. His most recent work focuses on the exploitation of mobility for improved wireless network connectivity in complex propagation environments. He received the Outstanding Invention of the Year award in 2008 and the Jimmy Lin Award for Innovation and Invention in 2009, both from the University of Maryland, and a Best Paper award at the 2008 Army Science Conference.

Kristina Lerman received the A.B. degree from Princeton University, Princeton, NJ, and the Ph.D. degree from the University of California at Santa Barbara, Santa Barbara, CA, both in physics. She works as a Research Team Lead at the University of Southern California Information Sciences Institute and holds a joint appointment as a Research Associate Professor in the USC Computer Science Department. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing, social network and social media analysis. Her recent work on modeling and understanding cognitive biases in social networks has been covered by the Washington Post, Wall Street Journal, and MIT Tech Review.


-a Proof of Lemma 1

Lemma (Bias transformation).

Any biased random walk on , with the diagonal matrix specifying vertex bias factors , is equivalent to an unbiased random walk on the transformed graph . If is undirected, then we instead consider the transformed graph to maintain edges having equal weight in both directions.


Given the transformed graph , the transition probability of an unbiased random walk going from vertex to vertex is defined as


which is equivalent to the transition probability of the biased random walk on the original adjacency matrix .

For an undirected graph, we need to guarantee that the transition probabilities are preserved in both directions. Given , we have


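For the undirected case, the claimed equivalence is easy to verify numerically. In the sketch below (our own construction), the symmetric transform multiplies each edge weight by the bias factors of both endpoints, and the unbiased walk on the transformed graph reproduces the biased transition probabilities exactly.

```python
import numpy as np

# Numerical check of the bias transformation: a random walk biased by vertex
# factors beta equals the unbiased walk on the transformed undirected graph
# with edge weights beta_i * A_ij * beta_j.
rng = np.random.default_rng(1)
n = 5
A = rng.random((n, n))
A = np.triu(A, 1) + np.triu(A, 1).T        # random undirected weighted graph
beta = rng.random(n) + 0.5                 # positive vertex bias factors

# Biased walk: pick neighbor j with probability proportional to A_ij * beta_j.
P_biased = (A * beta) / (A * beta).sum(axis=1, keepdims=True)

A_hat = beta[:, None] * A * beta[None, :]  # symmetric transform of the graph
P_hat = A_hat / A_hat.sum(axis=1, keepdims=True)

assert np.allclose(P_biased, P_hat)        # transition probabilities coincide
```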
-B Proof of Lemma 2

Lemma (Delay transformation).

Any unbiased continuous-time random walk on , with the diagonal matrix specifying vertex delay factors , is equivalent to a continuous-time random walk with delay factors on the transformed graph .


Starting from the unbiased random walk Laplacian with the delay factors , we have: