Local Mixing Time: Distributed Computation and Applications


Anisur Rahaman Molla
School of Computer Sciences
NISER Bhubaneswar
Odisha 752050, India
anisurpm@gmail.com
Supported by DST Inspire Faculty research grant DST/INSPIRE/04/2015/002801.
   Gopal Pandurangan
Department of Computer Science
University of Houston
Houston, Texas 77204, USA
gopalpandurangan@gmail.com
Supported, in part, by NSF grants CCF-1527867, CCF-1540512, IIS-1633720, CCF-1717075 and BSF award 2016419.
Abstract

The mixing time of a graph is an important metric, which is not only useful in analyzing connectivity and expansion properties of the network, but also serves as a key parameter in designing efficient algorithms. We introduce a new notion of mixing of a random walk on an (undirected) graph, called local mixing. Informally, local mixing with respect to a given node s is the mixing of a random walk probability distribution restricted to a large enough subset of nodes containing s — say, a subset of size at least n/β for a given parameter β. The time to mix over such a subset by a random walk starting from a source node s is called the local mixing time with respect to s. The local mixing time captures the local connectivity and expansion properties around a given source node and is a useful parameter that determines the running time of algorithms for partial information spreading, gossip, etc.

Our first contribution is formally defining the notion of local mixing time in an undirected graph. We then present an efficient distributed algorithm which computes a constant factor approximation to the local mixing time with respect to a source node s in Õ(τ^s_β) rounds (the Õ notation hides a polylog(n) factor), where τ^s_β is the local mixing time w.r.t. s in an n-node regular graph. This bound holds under a mild technical condition relating τ^s_β to the conductance of the local mixing set (i.e., the set where the walk mixes locally); this is typically the interesting case, where the local mixing time is significantly smaller than the mixing time (with respect to s). We also present a distributed algorithm that computes the exact local mixing time, at the cost of a larger round complexity that depends also on the diameter D of the graph (this bound holds unconditionally, without any assumptions on the graph). Our algorithms work in the CONGEST model of distributed computing. Since the local mixing time can be significantly smaller than the mixing time (or even the diameter) in many graphs, it serves as a tighter measure of distributed complexity in certain algorithmic applications. In particular, we show that local mixing time tightly characterizes the complexity of partial information spreading, which in turn is useful in solving other problems such as the maximum coverage problem, full information spreading, leader election, etc.

Keywords: distributed algorithm, random walk, mixing time, conductance, weak-conductance, information spreading

1 Introduction

Mixing time of a random walk in a graph is the time taken by a random walk to converge to the stationary distribution of the underlying graph. It is an important parameter which is closely related to various key graph properties such as graph expansion, spectral gap, conductance, etc. The mixing time (denoted by τ_mix) is related to the conductance Φ and the spectral gap 1 − λ₂ of an n-node graph via the known relations ([14]) that Ω(1/(1 − λ₂)) ≤ τ_mix ≤ O(log n/(1 − λ₂)) and Θ(1 − λ₂) ≤ Φ ≤ Θ(√(1 − λ₂)), where λ₂ is the second largest eigenvalue of the (normalized) adjacency matrix of the graph. A small mixing time means the graph has high expansion and a large spectral gap. Such a network supports fast random sampling (which has many applications [10]) and low-congestion routing [13]. Moreover, the spectral properties tell a great deal about the network structure [9]. Mixing time is also useful in designing efficient randomized algorithms in communication networks [1, 2, 8, 9, 16, 19].
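As a concrete illustration of these relations, the following sketch (our own, assuming NumPy; not from the paper) computes the spectral gap 1 − λ₂ of the lazy walk on a cycle together with an empirical ε-mixing time; the latter falls between the 1/(1 − λ₂) and (log n)/(1 − λ₂) bounds up to constants.

```python
import numpy as np

def lazy_walk_matrix(adj):
    """Transition matrix of the lazy random walk (stay put w.p. 1/2)."""
    deg = adj.sum(axis=1)
    return 0.5 * np.eye(len(adj)) + 0.5 * (adj / deg[:, None])

def spectral_gap(P):
    """Spectral gap 1 - lambda_2 (P is symmetric for a regular graph)."""
    eig = np.sort(np.linalg.eigvalsh(P))[::-1]
    return 1.0 - eig[1]

def mixing_time(P, pi, eps=0.25):
    """Smallest t with ||p_t - pi||_1 < eps, maximized over all start nodes."""
    n = len(pi)
    worst = 0
    for s in range(n):
        p = np.zeros(n); p[s] = 1.0
        t = 0
        while np.abs(p - pi).sum() >= eps:
            p = P.T @ p
            t += 1
        worst = max(worst, t)
    return worst

# Cycle on n nodes: 2-regular, so the stationary distribution is uniform.
n = 16
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = 1.0
P = lazy_walk_matrix(adj)
gap = spectral_gap(P)
tmix = mixing_time(P, np.full(n, 1.0 / n))
print(gap, tmix)
```

On the cycle the gap is Θ(1/n²), so the computed mixing time is quadratic in n, consistent with the bounds above.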

There has been some previous work on distributed algorithms to compute the mixing time. The work of Kempe and McSherry [15] can be used to estimate the mixing time. Their approach uses Orthogonal Iteration, i.e., a heavy matrix-vector multiplication process, where each node needs to perform complex calculations and do memory-intensive computations. This may not be suitable in a lightweight environment. It is mentioned in their paper that it would be interesting to see whether a simpler and more direct approach based on eigenvalues/eigenvectors can be used to compute the mixing time. Das Sarma et al. [10] presented a distributed algorithm based on sampling nodes by performing sub-linear time random walks and then comparing the resulting distribution with the stationary distribution. The work of Molla and Pandurangan [18] presented an efficient and simple distributed algorithm for computing the mixing time of undirected graphs. Their algorithm estimates the mixing time τ^s_mix (with respect to a source node s) of any n-node undirected graph in Õ(τ^s_mix) rounds and achieves high accuracy of estimation. This algorithm is based on random walks, requires very little memory, uses lightweight local computations, and works in the CONGEST model. The algorithm of Das Sarma et al. can sometimes be faster than the algorithm of Molla and Pandurangan; however, there is a grey area (in the comparison between the two distributions) for which the former algorithm fails to estimate the mixing time with any good accuracy (captured by the accuracy parameter defined in Section 2). The latter algorithm is sometimes faster (when the mixing time is small) and estimates the mixing time with high accuracy [18].

In this paper, we introduce a new notion of mixing (time) of a random walk on an (undirected) graph, called local mixing (time). Local mixing time (precisely defined in Definition 2) captures the local connectivity and expansion properties around a given source node s and is a useful parameter that determines the run time of algorithms for information spreading, gossip, etc. Informally, the local mixing time is the time for a random walk starting from a node s to reach (essentially) the stationary distribution restricted to a large enough subset of nodes (say of size at least n/β, for a given parameter β) containing s (here, the stationary distribution is computed with respect to that subset). (It is important to note that the set is not known a priori; it just needs to exist.) Local mixing time is a finer notion than mixing time and is always upper bounded by the mixing time (trivially), but can be significantly smaller than the mixing time (and even the diameter) in many graphs (cf. Section 2.3). For example, the mixing time of a k-barbell graph (cf. Section 2.3) is Ω((n/k)²) (and its diameter is Θ(k)), whereas its local mixing time is only O(log n); hence partial information spreading (cf. Section 4) is significantly faster in such graphs.

Our main contribution is an efficient distributed algorithm for computing the local mixing time in undirected regular graphs. We show that we can compute a constant factor approximation [we actually compute a 2-factor approximation, but it can be easily modified to compute a (1+ε)-factor approximation, for any constant ε > 0] of the local mixing time in Õ(τ^s_β) rounds in undirected regular graphs, where τ^s_β is the local mixing time with respect to s [please see Section 2.2 for the formal definition and notation; we formally denote the local mixing time by τ^s_{β,ε}, parameterized by β (which determines the size of the set where the walk locally mixes) and by ε, an accuracy parameter (which measures how closely the walk mixes)]. This bound holds under a mild technical condition relating τ^s_β to the conductance of the local mixing set (i.e., the set where the walk mixes locally); this is typically the interesting case, where the local mixing time is significantly smaller than the mixing time of s. We also present a distributed algorithm that computes the exact local mixing time, at the cost of a larger round complexity (cf. Section 3.2); this bound holds unconditionally, without any assumptions on the graph. The local mixing time of the graph is the maximum of the local mixing times with respect to every node in the graph. We note that one can compute the local mixing time with respect to the entire graph by taking the maximum of all the local mixing times starting from each vertex. This (in general) will incur an n-factor additional overhead on the number of rounds (by running the distributed algorithm with respect to every node). However, depending on the input graph, one may be able to compute (or approximate) it significantly faster by sampling only a few source nodes and running the algorithm only from those source nodes (e.g., in a graph where the local mixing times are more or less the same with respect to any node).

Our definition of local mixing time is inspired by the notion of weak conductance [4], which similarly tries to capture the conductance around a given source vertex. It was shown in [4] that weak conductance captures the performance of partial information spreading. In partial information spreading, given an n-node graph with each node having a (distinct) message, the goal is to disseminate each message to a large fraction of the nodes — say n/β of them, for some β ≥ 1 — and to ensure that each node receives at least n/β messages. It was shown that graphs which have large weak conductance (say, a constant) admit efficient information spreading, despite having a poor (small) conductance [4]; hence weak conductance better captures the performance of partial information spreading. While it is not clear how to compute the weak conductance efficiently, we show that the local mixing time also captures partial information spreading. In Section 4, we show that the well-studied “push-pull” mechanism achieves partial information spreading in Õ(τ_β) rounds, where τ_β is the local mixing time with respect to the entire graph, i.e., τ_β = max_{s∈V} τ^s_β. As shown in [4], an application of partial information spreading is to the maximum coverage problem, which naturally arises in circuit layout, job scheduling and facility location, as well as in distributed resource allocation with a global budget constraint.

Our algorithms work in the CONGEST model of distributed computation, where only small-sized messages (O(log n)-sized messages) are allowed in every communication round between nodes. Moreover, our algorithms are simple, lightweight (low-cost computations within a node) and easy to implement. We note that our bounds are non-trivial in the CONGEST model. [In the LOCAL model, all problems can be trivially solved in O(D) rounds by collecting all the topological information at one node, whereas in the CONGEST model, the same will take Ω(m) rounds, where m is the number of edges in the graph.] In particular, we point out that one cannot obtain these bounds by simply extending the algorithm of [18] that computes the mixing time (with respect to a source node s) of any n-node undirected graph in Õ(τ^s_mix) rounds. Informally, the main difficulty in computing (or estimating) the local mixing time is that one does not know (a priori) the set where the walk locally mixes (there can be an exponential number of such sets). This calls for a more sophisticated approach, yet we obtain a bound that is comparable to the bound obtained for computing the mixing time in [18].

1.1 Distributed Network Model

We model the communication network as an undirected, unweighted, connected graph G = (V, E), where |V| = n and |E| = m. Every node has limited initial knowledge. Specifically, we assume that each node is associated with a distinct identity number (e.g., its IP address). At the beginning of the computation, each node v accepts as input its own identity number and the identity numbers of its neighbors in G. We also assume that the number of nodes and edges, i.e., n and m (respectively), are given as inputs. (In any case, nodes can compute them easily through broadcast in O(D) rounds, where D is the network diameter.) The nodes are only allowed to communicate through the edges of the graph G. We assume that the communication occurs in synchronous rounds. We will use only small-sized messages. In particular, in each round, each node v is allowed to send a message of size O(log n) bits through each edge that is adjacent to v. The message will arrive at the other endpoint by the end of the current round. This is a widely used standard model known as the CONGEST model to study distributed algorithms (e.g., see [21, 20]) and captures the bandwidth constraints inherent in real-world computer networks.

We focus on minimizing the running time, i.e., the number of rounds of distributed communication. Note that the computation that is performed by the nodes locally is “free”, i.e., it does not affect the number of rounds; however, we will only perform polynomial cost computation locally (in particular, very simple computations) at any node.

For any node v, d(v) and N(v) denote the degree of v and the set of neighbors of v in G, respectively.

1.2 Related Work

We briefly discuss prior work on the related problem of computing the mixing time of a graph. It is important to note that these algorithms do not give (and cannot be easily adapted to give) efficient algorithms for computing the local mixing time.

Das Sarma et al. [10] presented a fast decentralized algorithm for estimating the mixing time, conductance and spectral gap of the network. In particular, they show that given a starting node s, the mixing time with respect to s, i.e., τ^s_mix, can be estimated in a number of rounds that is sub-linear in n. This gives an alternative to the only previously known approach, by Kempe and McSherry [15], that can be used to estimate τ^s_mix. In fact, the work of [15] does more and gives a decentralized algorithm for computing the top k eigenvectors of a weighted adjacency matrix that runs in O(τ_mix log² n) rounds if two adjacent nodes are allowed to exchange O(k³) messages per round, where τ_mix is the mixing time and n is the size of the network.

Molla and Pandurangan [18] presented an algorithm that estimates the mixing time τ^s_mix for the source node s in Õ(τ^s_mix) rounds in an undirected graph and achieves high accuracy of estimation. This algorithm is based on random walks. Their approach, at a high level, is based on efficiently performing many random walks from a particular node and computing the fraction of random walks that terminate at each node. They show that this fraction estimates the random walk probability distribution. This approach achieves very high accuracy, which is a requirement in some applications [6, 8, 16, 22]. As mentioned earlier, this approach does not extend to computing the local mixing time efficiently.

The algorithm of Das Sarma et al. [10] is based on sampling nodes by performing sub-linear time random walks of a certain length and comparing the resulting distribution with the stationary distribution. In particular, if the mixing time is small, then the algorithm of Molla and Pandurangan is faster. Also, there is a grey area of the accuracy parameter for which the algorithm of Das Sarma et al. cannot estimate the mixing time: for a given accuracy parameter ε and source node s, the estimated value is only guaranteed to lie between the true value for accuracy ε and the true value for a weaker (polynomially related) accuracy, with no guarantee in between.

The notion of weak conductance was defined in the work of Censor-Hillel and Shachnai [4], who then use it as a parameter to capture partial information spreading. They also showed that partial information spreading is useful in solving several other important problems, e.g., maximum coverage, full information spreading, leader election, etc. [4, 5].

There are some notions proposed in the literature that are alternatives to the standard notion of mixing time and stationary distribution. These notions are different from the notion of local mixing time studied in this paper. The work of [3] introduces the concept of a “metastable” distribution and the pseudo-mixing time of Markov chains. Informally, a distribution μ is (ε, T)-metastable for a Markov chain if, starting from μ, the Markov chain stays at distance at most ε from μ for at least T steps. The pseudo-mixing time of μ starting from a state x is the number of steps needed by the Markov chain to get ε-close to μ when started from x. Another notion that has been studied in the literature is “quasi-stationarity”, which has been used to model the long-term behaviour of stochastic systems that appear to be stationary over a reasonable time period; see, e.g., [11] for more details.

2 Local Mixing

We define the notion of local mixing and local mixing time. Before we do that, we first recall some preliminaries on random walks.

2.1 Random Walk Preliminaries

Given an undirected graph G and a starting point, a simple random walk is defined as follows: in each step, the walk goes from the current node to a random neighbor, i.e., from the current node u, the probability of moving to node v is 1/d(u) if v ∈ N(u), and 0 otherwise, where d(u) is the degree of u.

Suppose a random walk starts at vertex s. Let p₀ be the initial distribution, with probability 1 at the node s and zero at all other nodes. Then the probability distribution at time t starting from the initial distribution p₀ can be seen as the matrix-vector multiplication p_t = (A^T)^t p₀, where A^T is the transpose of the transition probability matrix A of G. We denote the probability distribution vector at time t by p^s_t and the probability of a co-ordinate, i.e., the probability at a node v, by p^s_t(v). Sometimes we omit the source node s from the notation when it is clear from the context — so the notations would be p_t and p_t(v) respectively. The stationary distribution (a.k.a. steady-state distribution) is the distribution π such that π = A^T π, i.e., the distribution doesn’t change (it has converged). The stationary distribution of an undirected connected graph is a well-defined quantity, namely π(v) = d(v)/2m, where d(v) is the degree of node v. We denote the stationary distribution vector by π, i.e., π(v) = d(v)/2m for each node v. The stationary distribution of a graph is fixed irrespective of the starting node of a random walk; however, the number of steps (i.e., time) to reach the stationary distribution could be different for different starting nodes. The time to reach the stationary distribution is called the mixing time of a random walk with respect to the source node s. The mixing time corresponding to the source node s is denoted by τ^s_mix. The mixing time of the graph, denoted by τ_mix, is the maximum mixing time among all (starting) nodes in the graph. Mixing time exists and is well-defined for non-bipartite graphs; throughout we assume non-bipartite graphs. [Bipartiteness or not is rather a technical issue, since if we consider a lazy random walk (i.e., a random walk where at each step, with probability 1/2 the walk stays at the same node and with probability 1/2 it goes to a random neighbor), then it is well-defined for all graphs.] The formal definitions are given below.
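These definitions can be sketched concretely (a minimal NumPy illustration of p_t = (A^T)^t p₀ and π(v) = d(v)/2m; the small test graph is our own choice, not from the paper):

```python
import numpy as np

# A small undirected, connected, non-bipartite graph (adjacency matrix).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)

deg = adj.sum(axis=1)        # degrees d(v)
m = adj.sum() / 2            # number of edges
A = adj / deg[:, None]       # transition matrix: A[u, v] = 1/d(u) if v ~ u
pi = deg / (2 * m)           # stationary distribution pi(v) = d(v)/2m

# p_t = (A^T)^t p_0, starting from source s = 0.
p = np.zeros(4); p[0] = 1.0
for t in range(100):
    p = A.T @ p

print(np.abs(p - pi).sum())  # L1 distance to stationarity (tiny after 100 steps)
```

The graph contains a triangle, so it is non-bipartite and the walk indeed converges to π.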

Definition 1.

(ε-mixing time for source s and ε-mixing time of the graph)
Define τ^s_mix(ε) = min{t : ‖p_t − π‖₁ < ε}, where ‖·‖₁ is the L₁ norm. Then τ^s_mix(ε) is called the ε-near mixing time for any ε in (0, 1). The mixing time of the graph is denoted by τ_mix(ε) and is defined by τ_mix(ε) = max_{s∈V} τ^s_mix(ε). It is clear that τ^s_mix(ε) ≤ τ_mix(ε).

We sometimes omit ε from the notation when it is understood from the context. The definition of τ^s_mix(ε) is consistent due to the following standard monotonicity property of distributions. We note that a similar monotonicity property does not hold for τ^s_β, the local mixing time with respect to source node s; this is one reason why computing the local mixing time is more non-trivial compared to the mixing time.

Lemma 1. For every t ≥ 0, ‖p_{t+1} − π‖₁ ≤ ‖p_t − π‖₁; in particular, if t is an ε-near mixing time, then so is t + 1.

Proof.

(adapted from Exercise 4.3 in [17]) The monotonicity follows from the fact that ‖A^T q‖₁ ≤ ‖q‖₁, where A^T is the transpose of the transition probability matrix of the graph and q is any vector. Here A^T(u, v) denotes the probability of transitioning from the node v to the node u. The claim in turn follows from the fact that the sum of the entries of any column of A^T is 1.

We know that π is the stationary distribution of the transition matrix A, i.e., A^T π = π. This implies that if t is an ε-near mixing time, then ‖p_t − π‖₁ ≤ ε, by the definition of ε-near mixing time. Now consider ‖p_{t+1} − π‖₁. This is equal to ‖A^T p_t − A^T π‖₁, since p_{t+1} = A^T p_t and A^T π = π. However, this reduces to ‖A^T (p_t − π)‖₁ ≤ ‖p_t − π‖₁ ≤ ε (from the fact that ‖A^T q‖₁ ≤ ‖q‖₁). Hence, it follows that t + 1 is also an ε-near mixing time. ∎
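Written out fully, the chain of (in)equalities in the proof is:

```latex
\|p_{t+1}-\pi\|_1
   = \|A^{T}p_t - A^{T}\pi\|_1       % since p_{t+1} = A^{T}p_t and A^{T}\pi = \pi
   = \|A^{T}(p_t-\pi)\|_1
   \le \|p_t-\pi\|_1 \le \varepsilon,
\qquad\text{using}\qquad
\|A^{T}q\|_1
   = \sum_{u}\Big|\sum_{v}A^{T}(u,v)\,q(v)\Big|
   \le \sum_{v}|q(v)|\sum_{u}A^{T}(u,v)
   = \|q\|_1 ,
```

where the last equality holds because every column of A^T sums to 1.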

2.2 Definition of Local Mixing and Local Mixing Time

For any set S ⊆ V, we define vol(S) as the volume of S, i.e., vol(S) = Σ_{v∈S} d(v). Therefore, vol(V) = 2m is the volume of the vertex set. The conductance of the set S is denoted by Φ(S) and defined by

Φ(S) = |E(S, V∖S)| / vol(S),

where E(S, V∖S) is the set of edges between S and V∖S.

Let us define a vector π_S over the set of vertices as follows:

π_S(v) = d(v)/vol(S) if v ∈ S, and π_S(v) = 0 otherwise.

Notice that π is the stationary distribution of a random walk over the graph G, and π_S is its analogue restricted to (and normalized over) the set S. Recall that we defined p^s_t as the probability distribution over V of a random walk of length t, starting from some source vertex s. Let us denote the restriction of the distribution p^s_t to a subset S by p^s_{t,S} and define it as:

p^s_{t,S}(v) = p^s_t(v) if v ∈ S, and p^s_{t,S}(v) = 0 otherwise.

It is clear that p^s_{t,S} is not, in general, a probability distribution over the set S, as the sum of its entries could be less than 1.
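A minimal sketch of these restricted vectors (NumPy; the example graph and the set S are our own choices): π_S renormalizes the degrees inside S, while p_{t,S} simply zeroes out the walk distribution outside S and therefore need not sum to 1.

```python
import numpy as np

def restricted_stationary(deg, S):
    """pi_S(v) = d(v)/vol(S) for v in S, and 0 otherwise."""
    pi_S = np.zeros(len(deg))
    pi_S[S] = deg[S] / deg[S].sum()
    return pi_S

def restrict(p, S):
    """p_{t,S}: zero out the distribution outside S (may sum to < 1)."""
    q = np.zeros(len(p))
    q[S] = p[S]
    return q

# 6-cycle; walk started at node 0, S = the three nodes around the source.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = 1.0
deg = adj.sum(axis=1)
A = adj / deg[:, None]
S = np.array([5, 0, 1])

p = np.zeros(n); p[0] = 1.0
for _ in range(4):
    p = A.T @ p

p_S = restrict(p, S)
mass = p_S.sum()                                    # < 1: mass leaked out of S
gap = np.abs(p_S - restricted_stationary(deg, S)).sum()  # L1 gap to pi_S
print(mass, gap)
```

Here mass = 3/8 (only node 0 of S carries probability after an even number of steps on the cycle), so p_{t,S} is visibly sub-stochastic.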

Informally, local mixing with respect to a source node s means that there exists some (large-enough) subset of nodes S containing s such that the random walk probability distribution quickly becomes close to the stationary distribution restricted to S (as defined above). We would like to quantify how fast the walk mixes locally around a source vertex. This is called the local mixing time, which is formally defined below.

Definition 2.

(Local Mixing and Local Mixing Time)
Consider a vertex s ∈ V. Let ε > 0 be a positive constant and β ≥ 1 be a fixed parameter. We first define the notion of local mixing in a set S. Let S ⊆ V be a fixed subset containing s of size at least n/β. Let p^s_{t,S} be the restricted probability distribution over S after t steps of a random walk starting from s, and let π_S be as defined above. Define the mixing time with respect to the set S as τ^s_S = min{t : ‖p^s_{t,S} − π_S‖₁ < ε}. We say that the random walk locally mixes in S if τ^s_S exists and is well-defined. (Note that a walk may not locally mix in a given set S, i.e., there may exist no time t such that ‖p^s_{t,S} − π_S‖₁ < ε; in this case we take the local mixing time with respect to S to be ∞.)

The local mixing time with respect to source node s is defined as τ^s_β = min_S τ^s_S, where the minimum is taken over all subsets S (containing s) of size at least n/β in which the random walk starting from s locally mixes. A set S where the minimum is attained (there may be more than one) is called the local mixing set. The local mixing time of the graph (for given parameters β and ε) is τ_β = max_{s∈V} τ^s_β.

From the above definition, it is clear that τ^s_β always exists (and is well-defined) for every fixed β ≥ 1, since in the worst case it equals the mixing time of the graph; this happens when the walk locally mixes only in S = V. We note that, crucially, in the above definition of local mixing time, the minimum is taken over subsets of size at least n/β, and thus, in many graphs, the local mixing time can be substantially smaller than the mixing time when β > 1 (i.e., the local mixing can happen much earlier in some set of size n/β than the mixing time). It is important to note that the set where the local mixing time is attained is not fixed a priori; the definition only requires that a set of size at least n/β exists. (Since the set is not known a priori, the computation of the local mixing time is more complicated than that of the mixing time; in our algorithms we do not explicitly compute the local mixing set, but only compute an approximation of the local mixing time.)

It also follows from the definition that the local mixing time depends on the parameter β, i.e., on the size of the subset — in general, the smaller the size of the subset (the larger the β), the smaller the local mixing time. In particular, if β = 1, then τ^s_1 = τ^s_mix, the mixing time for source s (cf. Definition 1), and in general, τ^s_β ≤ τ^s_mix for any β ≥ 1.

Intuitively, a small local mixing time implies that the random walk starting from a vertex mixes fast over a (large enough) subset (parameterized by β) around that vertex. Therefore, given an undirected graph G, a source node s and a parameter β, the goal is to compute the local mixing time τ^s_β with respect to s. [Similar to the case of the mixing time, one can compute the local mixing time with respect to the entire graph by taking the maximum of all the local mixing times starting from each vertex. This (in general) will incur an n-factor additional overhead on the number of rounds.] In our algorithm in Section 3, we compute a constant factor approximation to the local mixing time (we do not explicitly compute the set where the walk locally mixes). In Section 3.2, we give an algorithm to compute the exact local mixing time.

2.3 Local Mixing Time and Mixing Time in Some Graphs

The local mixing time w.r.t. a source node s, τ^s_β (and also τ_β), is a monotonically non-increasing function of β. That is, if β₁ ≥ β₂ then τ^s_{β₁} ≤ τ^s_{β₂} (and also τ_{β₁} ≤ τ_{β₂}). This follows directly from the definition, since any set of size at least n/β₂ also has size at least n/β₁.

Let us now compare the local mixing time and the mixing time in some well-known graph classes. This will clarify why the local mixing time is a more refined measure than the mixing time of a random walk in a graph. Consider the following graphs:

  1. Complete graph: Both the local mixing time and the mixing time are constant. This is because, in one step of the random walk, the probability distribution becomes essentially uniform, which is ε-close to the uniform distribution (which is the stationary distribution). Thus the mixing time of the complete graph is O(1), and hence the local mixing time is equal to the mixing time.

  2. d-regular expander: It is known that the mixing time of an expander graph is O(log n) [17]. The proof follows from the expansion property of the graph: the rate of convergence of a probability distribution to the stationary distribution is bounded in terms of the second largest eigenvalue of the transition matrix, and the spectral gap of an expander graph is constant. It can be shown that mixing in a set of size at least n/β will also take Ω(log n) time (for constant β and ε). Thus the local mixing time is Θ(log n). Therefore, there is no substantial difference between mixing time and local mixing time in expander graphs.

  3. Path: It is known that the mixing time of a path of n nodes is Θ(n²) [17]. The local mixing time is Θ((n/β)²), since it requires that much time to mix in a sub-path of size n/β. This can be substantially smaller than the mixing time when β is large.

  4. k-barbell graph: This is a generalization of the barbell graph. The k-barbell graph consists of a path of k equal-sized cliques, i.e., the size of each clique is n/k (see Figure 1). The local mixing time is O(log n) (the walk mixes almost immediately inside the clique containing the source), but it is easy to show that the mixing time is Ω((n/k)²), since the walk needs at least that long even to escape the source clique. In this graph, there is a significant difference between mixing time and local mixing time; e.g., for constant k, the gap between the two is nearly quadratic in n. Similar graph structures (e.g., classes of graphs with equal-sized connected components that have very small mixing time, such as expanders, connected via a path or ring) have a large gap between mixing time and local mixing time.

Figure 1: The k-barbell graph: a path of k cliques of equal size.
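The gap can be observed numerically with a small simulation (our own sketch, assuming NumPy; the 0.25 threshold plays the role of ε and is illustrative): on a k-barbell, the walk distribution restricted to the source's clique reaches the restricted stationary distribution within a few steps, while the global distribution is still far from π.

```python
import numpy as np

def k_barbell(k, c):
    """Adjacency matrix of k cliques of size c in a path, joined by single edges."""
    n = k * c
    adj = np.zeros((n, n))
    for i in range(k):
        lo, hi = i * c, (i + 1) * c
        adj[lo:hi, lo:hi] = 1 - np.eye(c)         # clique i on nodes lo..hi-1
        if hi < n:
            adj[hi - 1, hi] = adj[hi, hi - 1] = 1  # bridge to the next clique
    return adj

k, c = 4, 8
adj = k_barbell(k, c)
n = k * c
deg = adj.sum(axis=1)
A = adj / deg[:, None]
pi = deg / adj.sum()                 # global stationary distribution
S = np.arange(c)                     # the clique containing the source s = 0
pi_S = deg[S] / deg[S].sum()         # restricted stationary distribution on S

p = np.zeros(n); p[0] = 1.0
t_local, global_gap_then = None, None
for t in range(1, 201):
    p = A.T @ p
    if np.abs(p[S] - pi_S).sum() < 0.25:          # locally mixed in S
        t_local, global_gap_then = t, np.abs(p - pi).sum()
        break
print(t_local, global_gap_then)      # local mixing is fast; global gap still large
```

At the moment the walk has locally mixed, almost all of its mass is still inside the first clique, so the L1 distance to the global stationary distribution remains close to its maximum.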

We next present a deterministic approach to compute the probability distribution of a random walk of any length ℓ. The idea is adapted from [16] and is used in this paper to compute the local mixing time.

2.4 Computation of Random Walk Probability Distribution

Let us compute the probability distribution p^s_t starting from a given source node s in the graph G. We present an algorithm (Algorithm 1) which approximates p^s_ℓ in O(ℓ) rounds in the CONGEST model. The algorithm essentially simulates the probability distribution of each step of the random walk starting from the source node s by a deterministic flooding technique. At the beginning of any round t, each node u sends p̃_{t−1}(u)/d(u) to its neighbors, and at the end of the round t, each node v computes p̃_t(v) as the (rounded) sum of the received values. After ℓ rounds, each node v will output its (estimated) probability p̃_ℓ(v). The estimated probabilities can be made as close as desired to the exact values p_ℓ(v), i.e., |p̃_ℓ(v) − p_ℓ(v)| ≤ δ for any small δ > 0. In fact, this deterministic approach could compute the exact probability distribution in principle. However, since in the CONGEST model only O(log n) bits are allowed to be exchanged per edge per round, it is not possible to send a real number through an edge; instead an approximate (rounded-off) value of size O(log n) bits can be sent. Thus, it is possible to compute a close approximation to the probability distribution of a random walk of any length ℓ.

Input: A graph G, a source node s and the length ℓ.
Output: Each node v outputs p̃_ℓ(v).

1:  Initialization: at the source node s, p̃₀(s) = 1, and at all other nodes v, p̃₀(v) = 0.
2:  for each round t = 1, 2, …, ℓ  do
3:     Each node u whose p̃_{t−1}(u) > 0 does the following in parallel: (i) send p̃_{t−1}(u)/d(u) to all the neighbors v ∈ N(u). (ii) Compute the sum (say, α) of the received values from all neighbors and round it to the closest integer multiple of 1/n^c, i.e., to ⌊α n^c⌉/n^c for a suitable integer constant c, where ⌊·⌉ is the nearest integer function. Store this rounded value as p̃_t(u).
4:  end for
5:  Each node v outputs p̃_ℓ(v).
Algorithm 1 Estimate-RW-Probability

Note that at each step the value at a node is rounded to the closest integer multiple of 1/n^c. Intuitively, the error of estimation introduced at each node is at most 1/n^c per step. Thus the following error bound (Lemma 2) on the approximation holds. The proof can be easily adapted from Lemma 8 in [16].

Lemma 2.

At any time t, |p̃_t(v) − p_t(v)| ≤ t/n^{c−1}, for all nodes v.

Therefore, the algorithm finishes in O(ℓ) rounds and computes a close approximation of the probabilities p_ℓ(v). Since the mixing time (and hence the local mixing time) is at most polynomial in n for any graph, choosing a sufficiently large constant c suffices to get a very small approximation error. It is to be noted that a randomized algorithm presented in [18] does the same job with high probability in O(ℓ) rounds as well.
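A centralized simulation of Algorithm 1's rounding (our own sketch; the constant c = 4 and the test graph are illustrative) confirms that keeping only multiples of 1/n^c — i.e., O(log n)-bit values — stays close to the exact distribution:

```python
import numpy as np

def estimate_rw_probability(adj, s, ell, c=4):
    """Centralized simulation of the flooding algorithm, rounding each node's
    value to the nearest multiple of 1/n^c after every step."""
    n = len(adj)
    deg = adj.sum(axis=1)
    unit = 1.0 / n**c
    p = np.zeros(n); p[s] = 1.0
    for _ in range(ell):
        # Each node u sends p(u)/d(u) to its neighbors; each v sums what arrives.
        received = adj.T @ (p / deg)
        p = np.round(received / unit) * unit   # keep an O(log n)-bit value
    return p

def exact_rw_probability(adj, s, ell):
    deg = adj.sum(axis=1)
    A = adj / deg[:, None]
    p = np.zeros(len(adj)); p[s] = 1.0
    for _ in range(ell):
        p = A.T @ p
    return p

# Small test graph: a triangle (nodes 0, 1, 2) plus a pendant node 3.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
est = estimate_rw_probability(adj, s=0, ell=10)
exact = exact_rw_probability(adj, s=0, ell=10)
print(np.abs(est - exact).max())   # well within the ell * n / n^c error bound
```

With n = 4, c = 4 and ℓ = 10 the pointwise error stays below 10·4/4⁴ ≈ 0.156, and in practice is far smaller.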

3 Local Mixing Time Computation

Let us assume the graph G is regular and the degree of each node is d. Then the volume of any set S is d·|S|, and the non-zero entries of the restricted stationary distribution π_S are all equal to 1/|S|. Let s be the given source node from which the local mixing time needs to be computed. We take the error of estimation ε in Definition 2 to be an arbitrarily small (but fixed) positive constant. Further, we assume that the graph satisfies a technical condition relating the random walk probabilities to the conductance Φ(S) of the local mixing set S, i.e., the set where the random walk locally mixes (cf. Definition 2). (Note that we do not know S a priori.) We make this assumption so that our algorithm can compute a 2-approximation of the local mixing time efficiently; this is typically the interesting case, when the local mixing time is much smaller than the mixing time. We also show an easy extension of the algorithm to compute the (exact) local mixing time in general regular graphs (without any conditions), but that takes slightly longer. Therefore, the goal is to compute the minimum time t such that ‖p^s_{t,S} − π_S‖₁ < ε on a set S that is as small as possible, but of size at least n/β. Recall that π_S(v) = 1/|S| for every v ∈ S.

Input: A graph G, a source node s, a positive constant β and a fixed accuracy parameter ε (an arbitrarily small positive constant).
Output: An approximate local mixing time.

1:  for ℓ = 1, 2, 4, 8, …  do
2:     (the walk length doubles in each iteration)
3:     The node s computes a BFS tree of depth ℓ via flooding.
4:     Run Algorithm 1 with s as the source node and ℓ as the length. Each node v will have p̃_ℓ(v) in the end.
5:     for k = n/β, 2n/β, 4n/β, …, n  do
6:         Each node v computes the difference Δ(v) = |p̃_ℓ(v) − 1/k|.
7:         Node s computes the sum Σ_k of the k smallest Δ(v) values using the binary search method discussed below in Section 3.1.
8:         Node s checks the following locally:
9:         if Σ_k < 2ε  then
10:            Output ℓ and STOP.
11:         end if
12:     end for
13:  end for
Algorithm 2 Local-Mixing-Time

The algorithm starts with the random walk length and the computation proceeds in iterations. After each iteration, the value of is incremented by a factor i.e., doubled. In an iteration, the algorithm first computes the probability distribution of a random walk of length starting from the given source node . For this, it uses Algorithm 1 from the previous section. Then every node locally computes the difference (the algorithm first looks for the minimum size mixing set , i.e., of size ). The source node then collects smallest of those s and checks if their sum is less than (note that our algorithm will check for instead of for technical reasons that will be explained later). If ‘yes’, then algorithm stops and outputs the length as the local mixing time. Otherwise, if the sum is greater than , the algorithm checks for the mixing set of size . That is the source node collects smallest of the differences and checks if their sum satisfies the -condition.777It is shown in the analysis that we compute local mixing time with the accuracy parameter . In general, if the sum of the values in a set did not satisfy the condition, the algorithm extends the search space by incrementing the size of the set by a factor of . The algorithm starts with as the size of the local mixing set is at least (by the definition). This way the algorithm checks if there exists a set of size larger than where the random walk mixes locally. If such a set exists, the algorithm stops and outputs the length as the local mixing time. Else, the algorithm goes to the next iteration and does the same computation by doubling the random walk length to . The output of the algorithm is correct because it gives the existence of a set of size where the local mixing time condition satisfies (cf. Definition 2). The algorithm only computes the local mixing time and not the set where the random walk probability mixes. Hence, finding an satisfying the local mixing time condition is sufficient. 
The pseudocode is given in Algorithm 2.
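To make the control flow concrete, here is a centralized Python sketch of this doubling search. It is an illustration, not the distributed algorithm: the node-wise difference is assumed here to be |p(v) − 1/n| (the uniform stationary value on a regular graph), and the function names `walk_step` and `local_mixing_time` are our own.

```python
def walk_step(p, adj, d):
    # One step of a simple random walk on a d-regular graph: each node
    # forwards a 1/d fraction of its probability mass to each neighbor.
    q = [0.0] * len(p)
    for u, mass in enumerate(p):
        share = mass / d
        for v in adj[u]:
            q[v] += share
    return q

def local_mixing_time(adj, d, source, c, eps, max_len=1 << 20):
    """Doubling search for the smallest checked length at which some set of
    size >= n/c satisfies the (relaxed) 2*eps sum condition."""
    n = len(adj)
    length = 1
    while length <= max_len:
        # Recompute the distribution at the current length from scratch
        # (the distributed algorithm uses Algorithm 1 for this step).
        p = [0.0] * n
        p[source] = 1.0
        for _ in range(length):
            p = walk_step(p, adj, d)
        # Assumed difference value at each node: |p(v) - 1/n|.
        diffs = sorted(abs(x - 1.0 / n) for x in p)
        size = n // c
        while size <= n:                      # sizes n/c, 2n/c, 4n/c, ...
            if sum(diffs[:size]) < 2 * eps:   # relaxed condition (Lemma 3)
                return length
            size *= 2
        length *= 2                           # double the walk length
    return None                               # no local mixing detected
```

For instance, on the complete graph K4 the walk is already nearly mixed after a single step, so the search stops at length 1; on an even cycle the walk is periodic and the condition is never met.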

3.1 Description and Analysis

Let us now discuss the details of the computation in each iteration of Algorithm 2, where the random walk length varies, starting from its initial value and doubling in each iteration.

Compute a BFS tree from the source: The source node computes a Breadth First Search (BFS) tree of depth min(ℓ, D) via flooding (see e.g., [20]), where ℓ is the current walk length and D is the diameter of the graph. Each node knows its parent in the BFS tree. The BFS tree construction takes time linear in the depth of the tree [20]. (Instead of computing a BFS tree in each iteration, one can simplify the algorithm by computing a BFS tree of depth D just once in the beginning, i.e., before the for-loop over the length; however, this incurs an additional D term in the running time of the algorithm.)

Compute the probability distribution of a random walk of the current length starting from the source: The source node runs Algorithm 1 with the current length as input. At the end, each node holds its probability value under the walk's distribution (some of these values could be zero). This takes time linear in the walk length; see Section 2.4.

We next discuss the details of each iteration of the for-loop (Steps 5-12 of Algorithm 2), where the size of the candidate set varies, starting from n/c and increasing by a factor of 2 in each iteration.

Every node computes its difference value: Since each node knows both its probability value and the corresponding stationary value, it can compute the difference locally.

The source node collects the smallest difference values and checks if their sum is below the threshold: Each node sends its difference value to the source node. A naive way of doing this is to upcast (see e.g., [20]) all the values through the BFS tree edges in a pipelined manner. Then the source node can take the smallest of them and check locally if their sum is below the threshold. However, the upcast may take time linear in the number of nodes in the worst case, due to congestion in the BFS tree.
To overcome the congestion, we use the following efficient approach. Instead of collecting all the difference values at the source, the sum of the smallest of them can be found by doing a binary search on the values. All the nodes in the BFS tree send the minimum and the maximum among all the values to the root through a convergecast process (e.g., see [20]). This takes time proportional to the depth of the BFS tree. Then the source can count the number of nodes whose value is below a candidate threshold via a couple of broadcasts and convergecasts. In fact, the source broadcasts the threshold value to all the nodes via the BFS tree, and then the nodes whose value is below the threshold (say, the qualified nodes) reply through the convergecast. Depending on whether the number of qualified nodes is less than or greater than the target set size, the root updates the threshold (by again collecting the minimum or maximum within the reduced range) and iterates the process until the count is exactly the target size. Then the source can determine the sum of the qualified nodes' values (by a convergecast) and check locally if the sum is below the threshold. In summary, this amounts to finding the smallest values through a binary search over all the difference values. Each broadcast and convergecast takes time proportional to the depth of the BFS tree and is done a constant number of times per probe; a further logarithmic factor is incurred for the binary search over the values, which gives an overall time of the BFS depth times a logarithmic factor.
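The threshold search can be mimicked centrally. The sketch below (our own illustration; the function name and the probe cap are assumptions) performs the same binary search over the value range, counting how many values fall below a candidate threshold until exactly k "qualified" values remain; each probe of the loop corresponds to one broadcast-plus-convergecast.

```python
def sum_k_smallest_by_threshold(values, k, probes=200):
    # Invariant: thresholds <= lo qualify fewer than k values,
    # thresholds >= hi qualify at least k values.
    lo, hi = min(values) - 1.0, max(values)
    for _ in range(probes):
        mid = (lo + hi) / 2.0
        # One broadcast of `mid` plus one convergecast of the count.
        count = sum(1 for v in values if v <= mid)
        if count >= k:
            hi = mid
        else:
            lo = mid
    # Final convergecast: the qualified nodes report their sum.
    qualified = [v for v in values if v <= hi]
    return sum(qualified)
```

With distinct, well-separated values, `hi` converges to the k-th smallest value, so exactly k values qualify at the end.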

There might be multiple nodes with the same difference value. To handle this, each node chooses a very small random number and adds it to its value in the beginning. Then it can be shown that, with high probability, all the values are distinct, while the addition does not affect the sum significantly (which has to stay below the threshold). For example, if all the nodes choose a random number from a sufficiently small interval, then the total sum increases by a negligible amount, much smaller than the threshold. Further, it can be easily shown that with high probability the resulting values are all distinct, since the added random numbers are distinct with high probability.
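A minimal sketch of this tie-breaking step (the interval width `gamma` is a hypothetical parameter; the argument above only needs it small enough that the total added mass is negligible compared to the threshold):

```python
import random

def perturb(values, gamma):
    # Add an independent tiny offset in [0, gamma) to each value. The values
    # become pairwise distinct with high probability, while the total sum
    # grows by less than len(values) * gamma.
    return [v + random.uniform(0.0, gamma) for v in values]
```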

Incrementing the size of the local mixing set by a factor of 2: In the first iteration, the algorithm checks for local mixing on a set of size n/c. More specifically, the source node collects the n/c smallest difference values and checks if their sum is less than 2ε. If so, the algorithm stops and outputs the current length as the local mixing time. If not, the algorithm looks for a larger set in the next iteration, i.e., of size 2n/c. The source node collects the 2n/c smallest difference values and checks if their sum is less than 2ε. If so, it outputs the current length; if not, it checks the incremented set of size 4n/c, and so on. Below we discuss why we check the sum condition with the value 2ε (cf. Lemma 3). The main idea behind the slightly relaxed condition (i.e., 2ε instead of ε) is that it indirectly checks whether the ε sum condition is satisfied for all set sizes lying between the sizes that are actually checked, i.e., the sizes 2^i n/c for i = 0, 1, 2, …. In this way, for a particular length, the source checks if there exists a local mixing set of size at least n/c. If the check succeeds on some set, the algorithm stops and outputs the current length as the local mixing time. Otherwise, if there is no such local mixing set (i.e., the current length is not the local mixing time), the algorithm goes to the next iteration by doubling the length of the random walk. The output is correct because it certifies the existence of a set where the local mixing condition is satisfied; hence, finding any such set is sufficient. The following lemma shows the correctness of the above incrementation approach.

Lemma 3.

Let A be any set of smallest difference values whose size lies between two consecutive checked sizes, i.e., between 2^i n/c and 2^{i+1} n/c. Let A′ be the checked set of the next size, 2^{i+1} n/c (this is a set considered by the algorithm). Further assume that the sum of the difference values in A is less than ε. Then the sum of the difference values in A′ is less than 2ε.

Proof.

We have, . First note that:

Therefore,

(1)

Also note that:

Then,

(2)

Furthermore, since the algorithm compares the sum of the smallest differences in all the sets, we get,

The above lemma says that if there is a set, of size lying between two consecutive checked sizes, such that the sum of differences over it is less than ε, then the sum of differences over the incremented (checked) set is less than 2ε. Equivalently, if the sum over the incremented set is at least 2ε, then the sum over any such intermediate set is automatically at least ε. Hence, it is sufficient to check the 2ε-condition for the checked set sizes 2^i n/c only.

Doubling the length after each iteration: Finally, we show that doubling the random walk length in each iteration gives a 2-approximation of the local mixing time. We remark that the monotonicity property of the distribution does not hold over a restricted set in general. Thus the local mixing time is not monotonic, unlike the mixing time of a graph; see Lemma 1. Hence, in general, a binary search on the length will not work. However, the idea of doubling the length in each iteration does work, as we show that the amount of probability that leaves a set where the walk mixes locally during the next steps of the random walk is very small. As we discussed in Section 2.3, the notion of local mixing time is interesting and effective on graphs where the local mixing time is very small compared to the mixing time. Also, the mixing time estimates the conductance of the graph. This intuitively justifies our assumption that the local mixing time is small compared to the inverse of the conductance of the set S where the random walk mixes locally. Recall that the conductance of a set S in a d-regular graph is defined as Φ(S) = |E(S, V∖S)| / (d|S|), where E(S, V∖S) is the set of edges crossing between S and its complement.

Suppose ℓ is the local mixing time and S is the set where the random walk locally mixes. Then we show that, starting from the stationary distribution in S, the amount of probability that leaves the set after another ℓ steps of the walk is small, proportional to ℓ · Φ(S).

Lemma 4.

Let S be a set of size at least n/c where the random walk probability distribution locally mixes in τ steps when started from a source node s, where τ is the local mixing time. Let p_t be the probability distribution of the walk at time t. Assume that τ is sufficiently small compared to ε/Φ(S). Then the local mixing condition in S is satisfied (with parameter 2ε) at length 2τ.

Proof.

Since τ is the local mixing time, the restricted probability distribution at time τ is ε-close to the stationary distribution in S. Let E(S, V∖S) be the set of edges between S and its complement. The amount of probability that goes out of S in one step is proportional to the number of crossing edges, since each crossing edge carries a small fraction of the probability (the graph being d-regular and the distribution close to stationary). Note that some amount of probability may also come into S, but that only helps our upper bound. We know that the conductance of S is Φ(S). Therefore, the total amount of probability that goes out of S in the next τ steps (i.e., by time 2τ) is at most proportional to τΦ(S). Hence, it follows from the assumption that the amount of probability that goes out of the set is at most ε. Therefore, at length 2τ, the restricted distribution can differ from the stationary distribution in S by at most an additional ε, so the local mixing condition in S is satisfied with parameter 2ε. ∎
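The leakage bound at the heart of this proof can be checked numerically. The sketch below is our own construction: a small 3-regular "dumbbell" of two near-cliques joined by two bridge edges, with S one side, so that Φ(S) = 2/(3·4) = 1/6. Starting the walk uniform on S, the mass that has left S after ℓ steps stays below ℓ·Φ(S) in this example.

```python
def walk_step(p, adj, d):
    # One step of a simple random walk on a d-regular graph.
    q = [0.0] * len(p)
    for u, mass in enumerate(p):
        share = mass / d
        for v in adj[u]:
            q[v] += share
    return q

# Two copies of K4-minus-an-edge, joined by two bridges: 3-regular, 8 nodes.
adj = [
    [2, 3, 4],  # a1 (bridge to b1)
    [2, 3, 5],  # a2 (bridge to b2)
    [0, 1, 3],  # a3
    [0, 1, 2],  # a4
    [6, 7, 0],  # b1
    [6, 7, 1],  # b2
    [4, 5, 7],  # b3
    [4, 5, 6],  # b4
]
d, S = 3, {0, 1, 2, 3}
phi = 2 / (d * len(S))  # two crossing edges: Phi(S) = 1/6

p = [1.0 / len(S) if v in S else 0.0 for v in range(8)]  # uniform on S
for ell in range(1, 7):
    p = walk_step(p, adj, d)
    leaked = 1.0 - sum(p[v] for v in S)
    # Mass outside S grows at most like ell * Phi(S) in this example.
    assert leaked <= ell * phi + 1e-12
```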

From the above lemma, it follows that the Local-Mixing-Time algorithm must stop by the time the length reaches twice the local mixing time, even if the algorithm misses the exact local mixing time when doubling the length in an iteration. (Recall that the algorithm checks the norm condition with the accuracy parameter 2ε; hence, it subsumes the case when the length is doubled.) Therefore, the output length is at most a 2-approximation of the local mixing time.

The running time of the above algorithm to compute the local mixing time is given in the following theorem.

Theorem 1.

Given an undirected regular graph G, a source node s and a positive integer c, the algorithm Local-Mixing-Time computes a 2-approximation of the local mixing time τ_s with high probability and finishes in Õ(τ_s) rounds, provided τ_s is sufficiently small compared to ε/Φ(S), where S is the local mixing set.

Proof.

The correctness of the algorithm is described above. We now calculate the running time. The algorithm iterates O(log τ_s) times, doubling the walk length each time. In each iteration:

  1. the source node computes a BFS tree, which takes time proportional to the depth of the tree.

  2. the algorithm runs Algorithm 1 as a subroutine, which takes time linear in the current walk length.

  3. the source collects the sum of the smallest difference values through the BFS tree, which takes time proportional to the BFS depth times a logarithmic factor (for the binary search). This is done for each candidate set size 2^i n/c; there are at most O(log n) such sizes. Hence the time taken is the BFS depth up to polylogarithmic factors.

  4. checking whether the sum of differences is less than ε or 2ε can be done locally at the source.

Since the walk length doubles in each iteration and the depth of the BFS tree never exceeds the walk length, the total time is dominated by the last iteration and is bounded by Õ(τ_s). ∎

3.2 Algorithm for Computing Exact Local Mixing Time

The above algorithm finds a 2-approximation of the local mixing time. It can be extended to compute the exact local mixing time corresponding to the given parameters c and ε. Moreover, the extended algorithm works for any regular graph. The running time of the extended algorithm increases by a multiplicative factor over the running time of the previous 2-approximation algorithm (Algorithm 2). The extended algorithm follows the same internal steps as Algorithm 2, except for the number of iterations. Instead of doubling the length in each iteration, the algorithm iterates over every length value, incrementing it by one. The algorithm starts with length 1 and the computation proceeds in iterations. In each iteration, the algorithm runs Steps 3-12 of Algorithm 2.

Now we explain how to compute the probability distribution of a random walk of length ℓ+1 from the distribution of length ℓ in one round. We resume the deterministic flooding technique from the last step with the length-ℓ distribution and compute the length-(ℓ+1) distribution in one step by flooding. In particular, starting from the current distribution, the algorithm runs Step 3 of Algorithm 1, which essentially computes the next distribution from the current one in one round.
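A centralized sketch of this exact variant (again our own illustration, under the same assumed difference values |p(v) − 1/n| as before): the distribution is advanced by exactly one step per iteration, mirroring the single flooding round, and the set-size check runs at every length.

```python
def walk_step(p, adj, d):
    # One flooding round: each node splits its probability among neighbors.
    q = [0.0] * len(p)
    for u, mass in enumerate(p):
        share = mass / d
        for v in adj[u]:
            q[v] += share
    return q

def exact_local_mixing_time(adj, d, source, c, eps, max_len=10**4):
    n = len(adj)
    p = [0.0] * n
    p[source] = 1.0
    for length in range(1, max_len + 1):
        p = walk_step(p, adj, d)     # reuse the previous distribution
        diffs = sorted(abs(x - 1.0 / n) for x in p)
        size = n // c
        while size <= n:             # candidate sizes n/c, 2n/c, ...
            if sum(diffs[:size]) < 2 * eps:
                return length        # first length passing the check is exact
            size *= 2
    return None
```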

Since the source node checks, for each length, if there exists a set where the probability distribution mixes, the algorithm finds the exact local mixing time. At the same time, the algorithm works for an arbitrary regular graph without any conductance assumption (since we are not doubling the length). The running time of the algorithm to compute the exact local mixing time is given in the following theorem.

Theorem 2.

Suppose τ_s is the local mixing time w.r.t. the vertex s. There is an algorithm which computes τ_s exactly with high probability and finishes in Õ(τ_s · min(τ_s, D)) time, where D is the diameter of the graph.

Proof.

The algorithm iterates over each length 1, 2, …, τ_s. Inside each iteration, the source node first computes a BFS tree, which takes time linear in its depth; it then runs Step 3 of Algorithm 1 and all the other steps of Algorithm 2. Therefore, the total time inside one iteration is bounded by Õ(min(τ_s, D)) (cf. Theorem 1). Since the number of iterations is τ_s, the time complexity of the algorithm is Õ(τ_s · min(τ_s, D)). Recall that min(τ_s, D), which is bounded above by D, could be much smaller than D. ∎

4 Application to Partial Information Spreading

A main application of local mixing is that the local mixing time characterizes partial information spreading. As mentioned in Section 1, partial information spreading has many applications, including the maximum coverage problem [4] and full information spreading [5].

The partial information spreading problem, defined in [4], can be considered a relaxed version of the well-studied (full) information spreading problem (see e.g., [12, 7]). Initially each node has a token; unlike full information spreading (which requires sending each token to all the nodes), the requirement of partial information spreading is to send each token to only about n/c nodes, while every node should receive about n/c different tokens. A formal definition is:

Definition 3.

Each node has a token. For a given constant c and for any fixed failure probability δ, partial information spreading (with parameters c and δ) means that, with probability at least 1 − δ, each token disseminates to at least n/c nodes and every node receives at least n/c different tokens.

To study partial information spreading, we use the well-studied (synchronous) push/pull model of communication, where in each round every node chooses a random neighbor to exchange information with. Note that this assumes the LOCAL model, i.e., in each round there is no limit on the number of messages (tokens) that can be exchanged over an edge. In this setting, information spreading and partial information spreading algorithms under the push-pull model have been extensively studied (see, e.g., [4, 5] and the references therein).
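A toy simulation of this synchronous push-pull process in the LOCAL model (our own illustration; token sets merge without any size limit, and all merges use the sets held at the start of the round):

```python
import random

def push_pull_round(adj, tokens):
    """One synchronous round: every node u picks a uniformly random neighbor v;
    u pushes its tokens to v and pulls v's tokens (both based on the token
    sets held at the start of the round)."""
    new = [set(s) for s in tokens]
    for u in range(len(adj)):
        v = random.choice(adj[u])
        new[u] |= tokens[v]   # pull
        new[v] |= tokens[u]   # push
    return new

def spread(adj, rounds, seed=None):
    if seed is not None:
        random.seed(seed)
    tokens = [{u} for u in range(len(adj))]  # node u starts with token u
    for _ in range(rounds):
        tokens = push_pull_round(adj, tokens)
    return tokens
```

After one round every node holds at least two tokens, and since merges use start-of-round sets, a token travels at most one hop per round; e.g., on a path graph, token 0 cannot reach a node at distance five within two rounds.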

We show that partial information spreading in regular graphs can be accomplished in a number of rounds proportional, up to polylogarithmic factors, to the local mixing time, with high probability. We show that this holds in the LOCAL model (where there is no congestion); as mentioned earlier, the LOCAL model is typically used in prior literature to analyze the push-pull mechanism [4, 5]. (In the CONGEST model, the bound will be larger; note that a node of degree d needs at least n/(cd) rounds to receive n/c different tokens, which gives a time lower bound in general.) We compare our bound with the previous bound of [4] for partial information spreading, which is stated in terms of the weak conductance of the graph. (Note that this bound is also for the LOCAL model.) That bound is for the "push-pull" algorithm: in every round, each node chooses a random neighbor and exchanges information (all their respective tokens) with it. Note that the algorithm does not specify any termination condition (i.e., how long it should run). To specify that, one would need a bound on the weak conductance (which is not known a priori). In contrast, we show that the local mixing time (also) characterizes the running time of partial information spreading, and our distributed computation of the local mixing time (in the previous section) allows us to specify a termination condition for the push-pull mechanism.

We note that our bounds based on the local mixing time are comparable to the bound based on weak conductance in many graphs (in fact, we conjecture a tight relationship between the local mixing time and the weak conductance, similar to the relationship between mixing time and conductance). However, the analysis of our bound is quite different from the one that uses weak conductance; it is simpler, using random walks. We show our bound in the LOCAL model; it can be easily extended to the CONGEST model.

Theorem 3.

Partial information spreading in any regular graph can be accomplished by running the "push-pull" algorithm for a number of rounds proportional, up to polylogarithmic factors, to the local mixing time, with high probability (whp), i.e., with probability at least 1 − 1/n^a for some constant a > 0.

Proof.

(sketch) First, we show that every token is disseminated to at least n/c nodes whp. To analyze the performance of push-pull, it is enough to focus on a single message (token) and bound the time taken for the message to reach at least n/c nodes whp. Then, by a union bound over all tokens, it follows that every token reaches at least n/c nodes whp.

Fix a token and let it initially reside at some node. The analysis proceeds in phases, with each phase consisting of a number of rounds equal to the local mixing time. Since the token