Distributed Selfish Load Balancing with Weights and Speeds

Clemens Adolphs and Petra Berenbrink, Simon Fraser University, Burnaby, Canada
Abstract

In this paper we consider neighborhood load balancing in the context of selfish clients. We assume that a network of processors and tasks is given. The processors may have different speeds and the tasks may have different weights. Every task is controlled by a selfish user. The objective of the user is to allocate his/her task to a processor with minimum load.

We revisit the concurrent probabilistic protocol introduced in [6], which works in sequential rounds. In each round every task is allowed to query the load of one randomly chosen neighboring processor. If that load is smaller the task will migrate to that processor with a suitably chosen probability. Using techniques from spectral graph theory we obtain upper bounds on the expected convergence time towards approximate and exact Nash equilibria that are significantly better than the previous results in [6]. We show results for uniform tasks on non-uniform processors and the general case where the tasks have different weights and the machines have speeds. To the best of our knowledge, these are the first results for this general setting.

Keywords: Load balancing, reallocation, equilibrium, convergence
Footnote 1: The research was carried out during a visit to SFU.
Subject: Distributed Algorithms

1 Introduction

Load Balancing is an important aspect of massively parallel computations as it must be ensured that resources are used to their full efficiency. Quite often the major constraint on balancing schemes for large networks is the requirement of locality in the sense that processors have to decide if and how to balance their load with local load information only. Global information is often unavailable and global coordination usually very expensive and impractical. Protocols for load balancing should respect this locality and still guarantee fast convergence to balanced states where every processor has more or less the same load.

In this paper we consider neighborhood load balancing in a selfish setting. We assume that a network of processors and tasks is given. The processors can have different speeds and the tasks can have different weights. Initially, each processor stores some number of tasks. The total number of tasks is time-invariant, i.e., neither do new tasks appear, nor do existing ones disappear. The load of a node at time $t$ is the total weight of all tasks assigned to that node at that time.

Every task is assumed to belong to a selfish user. The goal of the user is to allocate the task to a processor with minimum load. We assume neighborhood load balancing, meaning that task movements are restricted by the network. Users that are assigned to the processor represented by node $u$ of the network are only allowed to migrate their tasks to processors that are represented by neighboring nodes of $u$. Hence, the network models load balancing restrictions. Our model can be regarded as the selfish version of diffusion load balancing.

In this paper we revisit the concurrent probabilistic protocol introduced in [6]. The load balancing process works in sequential rounds. In each round every task is allowed to check the load of one randomly chosen neighboring processor. If that load is smaller the task will migrate to that processor with a suitably chosen probability. Note that, if the probability is too large (for example all tasks move to a neighbor with smaller load) the system would never be able to reach a balanced state. Here, we chose the migration probability as a function of the load difference of the two processors. No global information is necessary.

Using techniques from spectral graph theory similar to those used in [11], we can calculate upper bounds on the expected convergence time towards approximate and exact Nash equilibria that are significantly better than the previous results in [6]. We show results for uniform tasks on non-uniform processors and the general case where the tasks have different weights and the machines have speeds. To our best knowledge these are the first results for this general setting. For weighted tasks we deviate from the protocol for weighted tasks given in [6]. In our protocol, a player will move from one node to another only if the player with the largest weight would also do so. It is also straightforward to apply our techniques to discrete diffusive load balancing where each node sends the rounded expected flow of the randomized protocol to its neighbors ([2]).

1.1 Model and New Results

The computing network is represented by an undirected graph $G = (V, E)$ with vertices representing the processors and edges representing the direct communication links between them. The number of processors is $n$ and the number of tasks is $m$. The degree of a vertex $u$ is $d(u)$. The maximum degree of the network is denoted by $\Delta$, and for two nodes $u$ and $v$ the maximum of $d(u)$ and $d(v)$ is $d(u,v)$.

$s_i$ is the speed of processor $i$. We assume that the speeds are scaled so that the smallest speed, called $s_{\min}$, is $1$. If all speeds are the same we say the speeds are uniform. Let $\sum_{i=1}^{n} s_i$ be the total capacity of the processors. Define $s_{\max}$ as the maximum speed and $s_{\min}$ as the minimum speed of the processors. In the case of weighted tasks, task $j$ has a weight $w_j$. In the case of uniform tasks we assume the weight of all tasks is one. Let $W$ denote the total sum of all weights, $W = \sum_j w_j$.

A state of the system is defined by the distribution of tasks among the processors. For the case of uniform indivisible tasks, we denote by $x_i$ the number of tasks on processor $i$ in state $x$. For the case of weighted tasks, $x_i$ denotes the total weight on processor $i$, whereas $w_j$ denotes the weight of task $j$. The load of processor $i$ is defined as $\ell_i = x_i / s_i$, where $x_i$ is the number of tasks in the case of uniform tasks and the total weight in the case of weighted tasks. The goal is to reach a state in which no task can benefit from migrating to a neighboring processor. Such a state is called a Nash equilibrium.

1.1.1 Uniform Tasks on Machines with Speeds

For uniform tasks, one round of the protocol goes as follows. Every task selects a neighboring node uniformly at random. If migrating to that node would lower the load experienced by the task, the task migrates to that node with a probability that depends on the load difference and the speeds of the processors. For a detailed description of the protocol see Algorithm 1 in Section 3.

The first result concerns convergence to an approximate Nash equilibrium if the number of tasks, $m$, is large enough. For a detailed definition of the Laplacian matrix see Section 2. {theorem} Let $\lambda$ denote the second smallest eigenvalue of the network’s Laplacian matrix. Then, Algorithm 1 (p. 1) reaches a state with $\Psi \le \Psi_c$ in expected time

If $m$ is sufficiently large for some $\epsilon > 0$, this state is an $\epsilon$-approximate Nash equilibrium.

From the state reached in Theorem 1.1.1, we then go on to prove the following bound for convergence to a Nash equilibrium. {theorem} Let $\lambda$ be defined as in Theorem 1.1.1, and let $T$ be the first time step in which the system is in a Nash equilibrium. Under the condition that the speeds are integer multiples of a common factor $\tilde{s}$, it holds

These theorems are proven in Section 3. Our bound of Theorem 1.1.1 is smaller by at least a factor of than the bound found in [6] (see Observation 3.2).

Table 1: Comparison with existing results. For each of the graph classes complete graph, ring and path, mesh and torus, and hypercube, the table lists the asymptotic bounds of this paper and of [6] on the expected time to reach an $\epsilon$-approximate NE and an exact Nash equilibrium.

We summarize the results for the most important graph classes in Table 1. The table gives an overview of asymptotic bounds on the expected runtime to reach an approximate or an exact Nash equilibrium. We omit the speeds from this table because they are independent of the graph structure and, therefore, the same for each column. We compare the results of this paper to the bounds obtained from [6]. The table shows that for the graph classes at hand, our new bounds are superior to those in [6].
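The graph classes in Table 1 differ mainly in their algebraic connectivity $\lambda$, the second smallest Laplacian eigenvalue that appears in our bounds. As a purely illustrative check (not part of the analysis), the following sketch computes $\lambda$ for small instances of three of the classes with numpy; the well-known values are $n$ for the complete graph, $2 - 2\cos(2\pi/n)$ for the ring, and $2$ for any hypercube.

```python
import numpy as np

def laplacian(n, edges):
    """Build the n x n Laplacian L = D - A of an undirected graph."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def lambda2(L):
    """Second smallest eigenvalue (algebraic connectivity)."""
    return np.sort(np.linalg.eigvalsh(L))[1]

n = 8
complete = [(u, v) for u in range(n) for v in range(u + 1, n)]
ring     = [(u, (u + 1) % n) for u in range(n)]
# 3-dimensional hypercube: vertices are bit strings, edges flip one bit
cube     = [(u, u ^ (1 << b)) for u in range(8) for b in range(3) if u < u ^ (1 << b)]

print(lambda2(laplacian(n, complete)))  # n for the complete graph K_n
print(lambda2(laplacian(n, ring)))      # 2 - 2*cos(2*pi/n) for the ring C_n
print(lambda2(laplacian(8, cube)))      # 2 for any hypercube
```

The smaller $\lambda$ is (e.g., $\Theta(1/n^2)$ for the ring), the slower the convergence our bounds predict, which matches the ordering of the rows in Table 1.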

1.1.2 Weighted Tasks on Machines with Speeds

In Section 4, we study a slightly modified protocol (see Algorithm 2) that allows a task to migrate to a neighboring processor only if doing so decreases its experienced load by a threshold depending on the speed of the processors. This protocol only allows the tasks to reach an approximate Nash equilibrium.

{theorem}

Let $\lambda$ denote the second smallest eigenvalue of the network’s Laplacian matrix. Then, Algorithm 2 (p. 2) reaches a state with $\Psi \le \Psi_c$ in time

Under a suitable condition on $m$ and $\epsilon$, this state is an $\epsilon$-approximate Nash equilibrium. For the case of uniform speeds the theorem yields a simplified bound for the convergence time.

Outline.

After presenting the notation and preliminaries in Section 2, we treat the case of machines with speeds in Section 3. Section 4 treats the case of weighted tasks. Proofs are found in the appendix.

1.2 Related Work

The work closest to ours is in [4, 5, 6]. [4] considers the case of identical machines in a complete graph. The authors introduce a protocol similar to ours that reaches a Nash equilibrium (NE). Note that for complete graphs the NE and the optima (where the load discrepancy is zero or one) are identical. An extension of this model to weighted tasks is studied in [5]. Their protocol converges to a NE in time polynomial in $n$, $m$, and the largest weight. In [6] the authors consider a model similar to ours, meaning general graphs with processors that have speeds and weighted tasks. They use a potential function similar to ours for the analysis. The potential drop is linked to the maximum load deviation. The authors show that an edge must exist over which the load difference is large. As long as the potential is large enough, it can then be shown that there is a multiplicative drop. This is then used to prove convergence to an approximate Nash equilibrium. Subsequently, a constant drop in the potential is used to finally converge to a Nash equilibrium. The two main results of [6] for machines with speeds are presented in Table 1.

Our paper relates to a general stream of works for selfish load balancing on a complete graph. There is a variety of issues that have been considered, starting with seminal papers on algorithms and dynamics to reach NE [13, 15]. More directly related are concurrent protocols for selfish load balancing in different contexts that allow convergence results similar to ours. Whereas some papers consider protocols that use some form of global information [14] or coordinated migration [19], others consider infinitesimal or splittable tasks [18, 3] or work without rationality assumptions [17, 1]. The machine models in these cases range from identical, uniformly related (linear with speeds) to unrelated machines. The latter also contains the case when there are access restrictions of certain agents to certain machines. For an overview of work on selfish load balancing see, e.g., [27].

Our protocol is also related to a vast amount of literature on (non-selfish) load balancing over networks, where results usually concern the case of identical machines and unweighted tasks. In expectation, our protocols mimic continuous diffusion, which has been studied initially in [10, 8] and later, e.g., in [25]. This work established the connection between convergence, discrepancy, and eigenvalues of graph matrices. Closer to our paper are discrete diffusion processes – prominently studied in [26], where the authors introduce a general technique to bound the load deviations between an idealized and the actual processes. Recently, randomized extensions of the algorithm in [26] have been considered, e.g., [12, 20].

2 Notation and Preliminaries

In this section we will give the more technical definitions.

A state of the system is defined by the distribution of tasks among the processors. For the case of uniform indivisible tasks, we denote by $x_i$ the number of tasks on processor $i$ in state $x$. For the case of weighted tasks, $x_i$ denotes the total weight on processor $i$, whereas $w_j$ denotes the weight of task $j$.

The task vector is defined as $x = (x_1, \dots, x_n)$. We define the load of processor $i$ in state $x$ as $\ell_i = x_i / s_i$. In analogy to the task vector, we define the load vector $\ell = (\ell_1, \dots, \ell_n)$. For the processor speeds, we define the speed vector $s = (s_1, \dots, s_n)$ and the speed matrix $S = \operatorname{diag}(s_1, \dots, s_n)$. Let $s_{\max}$ denote the maximum speed. The task vector and the load vector are related by the speed matrix via $x = S \ell$. The average load of the network is $\bar{\ell} = W / \sum_i s_i$. In the completely balanced state, each node has exactly this load. The corresponding task vector is $\bar{x}$ with $\bar{x}_i = s_i \bar{\ell}$, and we define the deviation of the actual task vector from the balanced task vector as $e = x - \bar{x}$. It is clear that $\sum_i e_i = 0$.

A state of the system is called a Nash equilibrium (NE) if no single task can improve its perceived load by migrating to a neighboring node while all other tasks remain where they are, i.e., $(x_v + 1)/s_v \ge x_u / s_u$ for all edges $\{u,v\} \in E$ in the uniform case. A state of the system is called an $\epsilon$-approximate Nash equilibrium ($\epsilon$-approximate NE) if no single task can improve its perceived load by more than a factor of $(1+\epsilon)$ by migrating.

The Laplacian $L$ is a matrix widely used in graph theory. It is the $n \times n$ matrix whose diagonal elements are $L_{uu} = d(u)$, and whose off-diagonal elements are $L_{uv} = -1$ if $\{u,v\} \in E$ and $0$ otherwise. The generalized Laplacian $S^{-1} L$, where $S$ is the diagonal matrix containing the speeds [11], is used to analyze the behavior of migration in heterogeneous networks.
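The generalized Laplacian can be handled numerically through a symmetric similar matrix: $S^{-1}L$ has the same eigenvalues as $S^{-1/2} L S^{-1/2}$, which numpy's symmetric eigensolver accepts. The sketch below, with hypothetical speeds on a 4-node path, is an illustration of this trick, not of the analysis itself.

```python
import numpy as np

# Path on 4 nodes; Laplacian L = D - A as defined above.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

s = np.array([1.0, 2.0, 2.0, 4.0])   # hypothetical speeds, s_min = 1
S = np.diag(s)

# Eigenvalues of S^{-1} L via the similar symmetric matrix S^{-1/2} L S^{-1/2}
Si = np.diag(1.0 / np.sqrt(s))
gen = Si @ L @ Si
eig = np.sort(np.linalg.eigvalsh(gen))
print(eig)  # smallest eigenvalue is 0, as for the ordinary Laplacian
```

As with the ordinary Laplacian, the all-ones direction is balanced, so the smallest generalized eigenvalue is $0$ and the second smallest plays the role of $\lambda$ in the heterogeneous setting.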

3 Uniform Tasks on Machines with Speeds

The pseudo-code of our protocol is given in Algorithm 1. Recall that $d(u,v)$ is defined as $\max\{d(u), d(v)\}$.

begin
      foreach task $i$ in parallel do
            Let $u$ be the current machine of task $i$.
            Choose a neighboring machine $v$ uniformly at random.
            if $x_u / s_u > (x_v + 1) / s_v$ then
                  Move task $i$ from node $u$ to node $v$ with probability $p_{u,v}$.
             end if
       end foreach
end
Algorithm 1 Distributed Selfish Load Balancing
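The round structure of Algorithm 1 can be sketched in a few lines of Python. The migration probability below (a load-difference ratio damped by a factor $\alpha = 4 \Delta s_{\max}$) is a placeholder of our choosing, not the exact probability of the analyzed protocol; the simulation only illustrates the synchronous, per-task nature of the process.

```python
import random

def one_round(x, speeds, adj, rng):
    """One synchronous round of the selfish protocol (illustrative sketch).

    Every task picks a random neighbor of its current machine and migrates
    with a probability that grows with the observed load difference.  All
    decisions use the loads at the start of the round.
    """
    n = len(x)
    moves = [0] * n
    d_max = max(len(a) for a in adj)
    alpha = 4.0 * d_max * max(speeds)    # placeholder damping factor
    for u in range(n):
        for _ in range(x[u]):            # every task on u acts independently
            v = rng.choice(adj[u])
            load_u, load_v = x[u] / speeds[u], x[v] / speeds[v]
            if load_u > (x[v] + 1) / speeds[v]:
                p = (load_u - load_v) / (alpha * load_u)
                if rng.random() < p:
                    moves[u] -= 1
                    moves[v] += 1
    return [x[i] + moves[i] for i in range(n)]

rng = random.Random(0)
n = 6
adj = [[(u - 1) % n, (u + 1) % n] for u in range(n)]   # ring topology
speeds = [1.0] * n
x = [30, 0, 0, 0, 0, 0]
for _ in range(300):
    x = one_round(x, speeds, adj, rng)
print(sum(x))  # the total number of tasks is conserved: 30
```

Because each task moves itself at most once per round, the task count is conserved and no node's load can become negative, mirroring the invariants of the model.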

The analysis of this protocol initially follows the steps of [6] up to Lemma 3.3 (restated as Lemma D in the appendix). Before we outline the remainder of our proof, we introduce some more notation. {definition} For a given state $x$, we define $f_{u,v}(x)$ as the expected flow over edge $\{u,v\}$.

The following two potential functions will be used in the analysis. {definition} For a state $x$, define the potential $\Phi(x)$ as

The potential $\Phi$ is minimized for the balanced task vector $\bar{x}$. We define the corresponding normalized potential $\Psi$. {definition} The normalized potential is defined as $\Psi(x) = \Phi(x) - \Phi(\bar{x})$.

We want to relate this potential function to the load imbalance in the system. To this end, we define a new quantity. {definition} We define the maximum load difference as

{definition}

Let $t$ be some time step during the execution of our protocol and let $x(t)$ denote the state of the system at that time step. We define $\Delta\Phi(t)$ as the drop in potential in time step $t$. The sign convention for $\Delta\Phi$ is such that a drop in $\Phi$ from time step $t$ to $t+1$ gets a positive sign. This emphasizes that a large drop in $\Phi$ is a desirable outcome of our process. $\Delta\Psi$ is defined analogously. {lemma} The normalized potential $\Psi$ has the following properties.

  1. The change in $\Psi$ due to migrating tasks is the same as the change in $\Phi$, i.e., $\Delta\Psi(t) = \Delta\Phi(t)$.

  2. The potential $\Psi$ can also be written using the generalized dot-product introduced in Section A.2.

{definition}

With $f_{u,v}(x)$ the expected flow over edge $\{u,v\}$ in state $x$, we define the set of non-Nash edges as the set of edges with $f_{u,v}(x) > 0$.

This is the set of edges for which tasks have an incentive to migrate. Edges with $f_{u,v}(x) = 0$ are called Nash edges or balanced edges. {definition} As an auxiliary quantity, we define

Our improved bound builds upon results in [6]. In that paper, the randomized process is analyzed by first lower-bounding the potential drop in the case that exactly the expected number of tasks is moved, and then by upper-bounding the variance of that process. This leads to Lemma D. Based on this lemma, we now prove a stronger bound on the expected drop in the potential $\Psi$. Let us briefly outline the necessary steps. The lower bound on the drop in the potential in Lemma D is a sum over the non-Nash edges involving the load differences over those edges, whereas the potential itself is a sum over the nodes. We will use the graph’s Laplacian matrix to establish a connection between the two and thereby bound the expected drop in $\Psi$. This will allow us to prove fast convergence to a state where $\Psi$ is below a certain critical value $\Psi_c$. If $m$ is sufficiently large, this state is also an $\epsilon$-approximate Nash equilibrium. In the next stage of our approach, we use a constant drop in $\hat{\Phi}$, a shifted version of $\Phi$, to prove convergence to an exact Nash equilibrium. The techniques from probability theory used in this section are similar to the ones used in [4].

3.1 Convergence Towards an Approximate Nash Equilibrium

To make the connection with the Laplacian, we first have to rewrite the bound in Lemma D in the following way. {lemma} Under the condition that the system is in state $x(t)$, the expected drop in the potentials $\Phi$ and $\Psi$ is bounded by

Next, we use various technical results from spectral graph theory to prove the following bound. {lemma} Let $L$ be the Laplacian of the network and let $\lambda$ be its second smallest eigenvalue. Then

In a first step, we get rid of the conditioning of the potential drop on the previous state. {lemma} Let be defined such that .

Then, the expected value of the potential in time step is at most

As long as the expected value of the potential is sufficiently large, we can rewrite the potential drop as a multiplicative drop. {definition} Let $\lambda$ be the second smallest eigenvalue of the Laplacian of the network. We define the critical value $\Psi_c$. {lemma} Let $t$ be a time step for which the expected value of the potential satisfies $\mathbb{E}[\Psi(t)] \ge \Psi_c$. Let $\delta$ be defined as in Lemma 3.1. Then, the expected potential in time step $t+1$ is bounded by

This immediately allows us to prove the following. {lemma} For a given time step $t$, there either is a $t' \le t$ so that $\mathbb{E}[\Psi(t')] \le \Psi_c$, or

Thus, as long as $\mathbb{E}[\Psi(t)] \ge \Psi_c$ holds, the expected potential drops by a constant factor. This allows us to derive a bound on the time it takes until $\Psi$ is small. {lemma} It holds

  1. There is a such that .

  2. There is a such that the probability that is at least

This is similar to a result in [6], but our factor is different. This is reflected in a different expected time needed to reach an -approximate Nash equilibrium, as we have pointed out in the introduction.
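The mechanism behind such bounds is elementary: a quantity that shrinks by a constant factor per round falls below a threshold after a logarithmic number of rounds. The toy check below uses arbitrary numbers of our choosing (not constants from the analysis) and compares the loop count with the closed form $\lceil \log(\Psi_0/\Psi_c) / (-\log(1-\delta)) \rceil$.

```python
import math

def rounds_until(phi0, phi_c, drop):
    """Rounds until a quantity shrinking by factor (1 - drop) falls below phi_c."""
    t, phi = 0, phi0
    while phi > phi_c:
        phi *= (1 - drop)
        t += 1
    return t

phi0, phi_c, drop = 1e6, 1.0, 0.25     # illustrative values only
t = rounds_until(phi0, phi_c, drop)
t_closed = math.ceil(math.log(phi0 / phi_c) / -math.log(1 - drop))
print(t, t_closed)  # both 49: logarithmic in phi0 / phi_c
```

This is why the expected convergence times above depend only logarithmically on the initial potential.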

Next, we show that states with are indeed -approximate Nash equilibria if the number of tasks exceeds a certain threshold. This requires one further observation.

Observation \thetheorem.

For any state , we have .

{lemma}

Let $m$ be sufficiently large for some $\epsilon > 0$. Then a state with $\Psi \le \Psi_c$ is an $\epsilon$-approximate Nash equilibrium. {remark} If $m$ is small, it still holds that we reach a state with $\Psi \le \Psi_c$, which is all we need to prove convergence to an exact Nash equilibrium in the next section. It is just that this intermediate state is then not an $\epsilon$-approximate Nash equilibrium.

Now we are ready to show Theorem 1.1.1.

Theorem 1.1.1.

Lemma 3.1 ensures that after steps the probability for not having reached a state with is at most . Hence, the expected number of times we have to repeat steps is less than

The expected time needed to reach such a state is therefore at most with from Lemma 3.1. ∎

If we let the algorithm iterate until a state with $\Psi \le \Psi_c$ is obtained, Theorem 1.1.1 bounds the expected number of time steps we have to perform. However, by repeating a sufficient number of blocks of steps, we can obtain arbitrarily high probability. {corollary} After sufficiently many blocks of the given size, a state with $\Psi \le \Psi_c$ is reached with high probability.

Corollary 3.1.

The probability for not reaching a state with after steps is at most . We are interested in the complementary event, so its probability is at least . For the statement follows immediately. ∎

3.2 Convergence Towards a Nash Equilibrium

We now prove the upper bound for the expected time necessary to reach an exact Nash equilibrium (Theorem 1.1.1, p. 1.1.1). To show this result, we have to impose a certain condition on the speeds. If the speeds are arbitrary non-integers, convergence can become arbitrarily slow. Therefore, we assume that there exists a common factor $\tilde{s}$ so that for every speed $s_i$ there exists an integer $k_i$ so that $s_i = k_i \tilde{s}$. We call $\tilde{s}$ the granularity of the speed distribution. The convergence factor $\alpha$ of the original protocol must be adapted to this granularity. For non-integer speeds we have $\tilde{s} < 1$, so this effectively increases $\alpha$.

To show convergence towards an exact Nash equilibrium we cannot rely solely on the potential $\Psi$, because when the system is close to a Nash equilibrium it is possible that the potential function increases even when a task makes a move that improves its perceived load. Therefore, we now look at a shifted potential $\hat{\Phi}$. {definition} We define the shifted potential function $\hat{\Phi}$.

Let $\bar{s}$ and $\bar{s}_H$ denote the arithmetic mean and the harmonic mean of the speeds, i.e., $\bar{s} = \frac{1}{n} \sum_{i=1}^{n} s_i$ and $\bar{s}_H = n \big/ \sum_{i=1}^{n} s_i^{-1}$.

Then, we can write

Observation \thetheorem.

The shifted potential has the following properties.

  1. Let be the task deviation vector. Then

  2. .

Before we can lower-bound the expected drop in $\hat{\Phi}$, we need a technical lemma giving a lower bound on the load difference across an edge. It is similar to [6, Lemma 3.7], which concerned integer speeds; the result here is more general. {lemma} Every edge $\{u,v\}$ with $f_{u,v}(x) > 0$ also satisfies

Potential $\hat{\Phi}$ differs from the potential defined in [6] by a constant only. Therefore, potential differences are the same for both potentials and we can apply the results of [6] to $\hat{\Phi}$.

{lemma}

If the system is in a state that is not a Nash equilibrium, then

Since the results of the previous section apply to $\Psi$ whereas now we work with $\hat{\Phi}$, we add this technical lemma relating the two. {lemma} For any state $x$ it holds

To obtain a bound on the expected time the system needs to reach the NE, we use a standard argument from martingale theory. We introduce a new random variable based on the shifted potential $\hat{\Phi}$. {lemma} Let $T$ be the first time step for which the system is in a Nash equilibrium. Then, for all times $t < T$ we have

{corollary}

Let $T$ be the first time step for which the system is in a Nash equilibrium. Then the random variable defined above is a super-martingale. {corollary} Let $T$ be the first time step for which the system is in a Nash equilibrium. Then $\mathbb{E}[T]$ is finite.

Now we are ready to show Theorem 1.1.1.

Theorem 1.1.1.

First, we assume that at time $t = 0$ the system is in a state with $\Psi \le \Psi_c$. Using the non-negativity of $\hat{\Phi}$ (Observation 3.2) allows us to state

Inserting the definition of and dividing by yields

where we have used that (Lemma A.1) to pull that expression outside of the square root in the first line.

This bound was derived under the assumption that at $t = 0$ we had a state with $\Psi \le \Psi_c$. If this is not the case, let $T_1$ denote the number of time steps to reach such a state, and let $T_2$ denote the additional number of time steps needed to reach a NE from there. Combining the result from above with Theorem 1.1.1 allows us to write

{corollary}

Similarly to Corollary 3.1, after blocks of steps we have reached a Nash Equilibrium with probability at least .

Observation \thetheorem.

Our bound in Theorem 1.1.1 is asymptotically lower than the corresponding bound in [6] by at least a factor of .

Proof.

Lemma A.1 yields the required bound on the ratio of the mean speeds. Additionally, we have $s_{\max} \le \sum_i s_i$, since $s_{\max}$ occurs (at least once) in the sum of all speeds. Hence, the asymptotic bound from [6] is larger than

The first part of this expression is the bound of Theorem 1.1.1, so the expression in the square brackets is the additional factor of the bound from [6]. ∎

4 Weighted Tasks

The set of tasks assigned to node $u$ is called $T_u$. The weight of node $u$ becomes $x_u = \sum_{j \in T_u} w_j$, whereas the corresponding load is defined as $\ell_u = x_u / s_u$.

We present a protocol for weighted tasks that differs from the one described in [6]. It is presented in Algorithm 2.

begin
      foreach task $j$ in parallel do
            Let $u$ be the current machine of task $j$.
            Choose a neighboring machine $v$ uniformly at random.
            if $\ell_u - \ell_v \ge w_{\max} / s_v$ then
                  Move task $j$ from node $u$ to node $v$ with probability $p_{u,v}$.
             end if
       end foreach
end
Algorithm 2 Distributed Selfish Load Balancing for weighted tasks

The notable difference to the scheme in [6] is that in our case, the decision of a task to migrate or not does not depend on that task’s weight. In the original protocol, a load difference of more than $w_j / s_v$ would suffice for task $j$ to have an incentive to migrate. In the modified protocol, a task will only move if the load difference is at least $w_{\max} / s_v$. The advantage of this approach is that for an edge $\{u,v\}$, either all or none of the tasks on node $u$ have an incentive to migrate. This greatly simplifies the analysis. We will show that the system rapidly converges to a state where $\ell_u - \ell_v < w_{\max} / s_v$ for all edges $\{u,v\}$. Such a state is not necessarily a Nash equilibrium, as $w_{\max}$ might still be larger than the weight of a given task $j$, so task $j$ may still benefit from migrating. We will show, however, that such a state is an $\epsilon$-approximate NE.
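The all-or-none property can be seen directly in code. The sketch below contrasts the per-task test of the original scheme with the weight-independent test of the modified protocol; the threshold $w_{\max}/s_v$ and the sample numbers are illustrative assumptions, as above.

```python
def wants_to_move(load_u, load_v, w_max, s_v):
    """Migration test of the modified protocol: the answer depends only on the
    loads, the maximum weight and the target speed -- not on the migrating
    task's own weight, so all tasks on u agree."""
    return load_u - load_v >= w_max / s_v

def gains_individually(load_u, load_v, w_j, s_v):
    """Original-style test: task j with weight w_j gains by moving iff
    load_u > load_v + w_j / s_v, so light and heavy tasks can disagree."""
    return load_u > load_v + w_j / s_v

load_u, load_v, s_v, w_max = 10.0, 8.0, 1.0, 3.0
print(wants_to_move(load_u, load_v, w_max, s_v))      # False: same answer for every task on u
print(gains_individually(load_u, load_v, 1.0, s_v))   # True: a weight-1 task would gain
print(gains_individually(load_u, load_v, 3.0, s_v))   # False: the heaviest task would not
```

The example shows exactly the situation the modified protocol rules out: under the original test a light task moves while a heavy one stays, whereas the modified test gives one answer for the whole node.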

{definition}

In analogy to the unweighted case, we define the expected flow $f_{u,v}(x)$ as the expected weight of the tasks migrating from $u$ to $v$ in state $x$.

The potentials $\Phi$ and $\Psi$ are defined analogously to the unweighted case. Here, we concentrate on $\Psi$ alone. The average weight per node is $\bar{x}_u = s_u W / \sum_i s_i$ and the task deviation is defined as $e = x - \bar{x}$. We define $\Psi$ in analogy to the unweighted case as the normalized version of $\Phi$.

The auxiliary quantity from Section 3 is defined analogously to the unweighted case.

4.1 Convergence Towards an Approximate Nash Equilibrium

In close analogy to [6, Lemma 3.1], we first bound the drop of the potential when the flow is exactly the expected flow. {lemma} The drop in potential $\Phi$ if the system is in state $x$ and the flow is exactly the expected flow is bounded by

The proof is formally equivalent to the one in [6] and therefore omitted here. Next, we bound the variance of the process. {lemma} The variances of the weights on the nodes are bounded by

This allows us to formulate a bound on the expected potential drop in analogy to [6, Lemma 3.3] by combining the two preceding lemmas. {lemma} The expected drop in potential $\Phi$ if the system is in state $x$ is at least

The proof is analogous to the corresponding lemma in [6].

Theorem 1.1.2.

The rest of the proof is the same as the proof for the unweighted case. One may verify that, indeed, Lemma 3.1 and all subsequent results do not rely on the specific form of or the underlying nature of the tasks. Using the same eigenvalue techniques as in the unweighted case, this allows us to obtain a bound involving the second smallest eigenvalue of the graph’s Laplacian matrix. Following the steps of the unweighted case allows us to prove the main result of this section. ∎

Acknowledgements

The authors thank Thomas Sauerwald for helpful discussions.

References

  • [1] Heiner Ackermann, Simon Fischer, Martin Hoefer, and Marcel Schöngens. Distributed algorithms for qos load balancing. In Proceedings of the twenty-first annual symposium on Parallelism in algorithms and architectures, SPAA’09, pages 197–203, New York, NY, USA, 2009. ACM.
  • [2] Clemens P. J. Adolphs and Petra Berenbrink. Improved Bounds for Discrete Diffusive Load Balancing. Manuscript, 2012.
  • [3] Baruch Awerbuch, Yossi Azar, and Rohit Khandekar. Fast load balancing via bounded best response. In Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, SODA ’08, pages 314–322, Philadelphia, PA, USA, 2008. Society for Industrial and Applied Mathematics.
  • [4] Petra Berenbrink, Tom Friedetzky, Leslie Ann Goldberg, Paul W. Goldberg, Zengjian Hu, and Russell Martin. Distributed Selfish Load Balancing. SIAM Journal on Computing, 37(4):1163, 2007.
  • [5] Petra Berenbrink, Tom Friedetzky, Iman Hajirasouliha, and Zengjian Hu. Convergence to equilibria in distributed, selfish reallocation processes with weighted tasks. In Lars Arge, Michael Hoffmann, and Emo Welzl, editors, Proceedings of the 15th Annual European Symposium on Algorithms (ESA 2007), volume 4698/2007 of Lecture Notes in Computer Science, pages 41–52. Springer, Springer, October 2007.
  • [6] Petra Berenbrink, Martin Hoefer, and Thomas Sauerwald. Distributed selfish load balancing on networks. In Proceedings of 22nd Symposium on Discrete Algorithms (SODA’11), pages 1487–1497, 2011.
  • [7] Rajendra Bhatia. Linear Algebra to Quantum Cohomology: The Story of Alfred Horn’s Inequalities. The American Mathematical Monthly, 108(4):289 – 318, 2001.
  • [8] J. E. Boillat. Load balancing and Poisson equation in a graph. Concurrency: Practice and Experience, 2(4):289–313, December 1990.
  • [9] Fan R. K. Chung. Spectral graph theory. AMS Bookstore, 1997.
  • [10] G. Cybenko. Dynamic load balancing for distributed memory multiprocessors. Journal of Parallel and Distributed Computing, 7(2):279–301, October 1989.
  • [11] Robert Elsässer, Burkhard Monien, and Robert Preis. Diffusion Schemes for Load Balancing on Heterogeneous Networks. Theory of Computing Systems, 35(3):305–320, May 2002.
  • [12] Robert Elsässer, Burkhard Monien, and Stefan Schamberger. Distributing unit size workload packages in heterogeneous networks. Journal of Graph Algorithms and Applications, 10(1):51–68, 2006.
  • [13] Eyal Even-Dar, Alex Kesselman, and Yishay Mansour. Convergence time to Nash equilibrium in load balancing. ACM Transactions on Algorithms, 3(3):32–es, August 2007.
  • [14] Eyal Even-Dar and Yishay Mansour. Fast convergence of selfish rerouting. In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms (SODA’05), pages 772–781, 2005.
  • [15] Rainer Feldmann, Martin Gairing, Thomas Lücking, Burkhard Monien, and Manuel Rode. Nashification and the coordination ratio for a selfish routing game. In Jos Baeten, Jan Lenstra, Joachim Parrow, and Gerhard Woeginger, editors, Automata, Languages and Programming, volume 2719 of Lecture Notes in Computer Science, pages 190–190. Springer Berlin / Heidelberg, 2003.
  • [16] Miroslav Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(2):298–305, 1973.
  • [17] S. Fischer, P. Mähönen, M. Schöngens, and B. Vöcking. Load balancing for dynamic spectrum assignment with local information for secondary users. In Proceedings of the 3rd IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN 2008), pages 1–9, October 2008.
  • [18] Simon Fischer and Berthold Vöcking. Adaptive routing with stale information. In Proceedings of the twenty-fourth annual ACM symposium on Principles of distributed computing (PODC’05), pages 276–283, New York, NY, USA, 2005. ACM.
  • [19] D. Fotakis, A. Kaporis, and P. Spirakis. Atomic congestion games: Fast, myopic and concurrent. Theory of Computing Systems, 47:38–59, 2010. 10.1007/s00224-009-9198-2.
  • [20] Tobias Friedrich and Thomas Sauerwald. Near-perfect load balancing by randomized rounding. In Proceedings of the 41st annual ACM symposium on Symposium on theory of computing - STOC ’09, page 121, New York, New York, USA, May 2009. ACM Press.
  • [21] A. A. Klyachko. Random walks on symmetric spaces and inequalities for matrix spectra. Linear Algebra and its Applications, 319(1-3):37–59, November 2000.
  • [22] Bojan Mohar. Isoperimetric numbers of graphs. Journal of Combinatorial Theory, Series B, 47(3):274–291, December 1989.
  • [23] Bojan Mohar. Eigenvalues, diameter, and mean distance in graphs. Graphs and Combinatorics, 7(1):53–64, March 1991.
  • [24] Bojan Mohar. The Laplacian Spectrum of Graphs. In Y. Alavi, editor, Graph theory, combinatorics, and applications, volume 2, pages 871–898. Wiley, 1991.
  • [25] S. Muthukrishnan, B. Ghosh, and M. H. Schultz. First- and Second-Order Diffusive Methods for Rapid, Coarse, Distributed Load Balancing. Theory of Computing Systems, 31(4):331–354, July 1998.
  • [26] Y. Rabani, A. Sinclair, and R. Wanka. Local divergence of Markov chains and the analysis of iterative load-balancing schemes. In Proceedings 39th Annual Symposium on Foundations of Computer Science (FOCS’98), pages 694–703. IEEE Comput. Soc, 1998.
  • [27] Berthold Vöcking. Selfish Load Balancing. In Noam Nisan, Eva Tardos, Tim Roughgarden, and Vijay Vazirani, editors, Algorithmic Game Theory, chapter 20. Cambridge University Press, 2007.
  • [28] Hermann Weyl. Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Mathematische Annalen, 71(4):441–479, December 1912.

Appendix A Spectral Graph Theory

In this appendix, we will briefly summarize some important theorems of spectral graph theory. For an excellent introduction, we recommend the book by Fan Chung [9]. Many important results are collected in an overview article by Mohar [24].

Results in this section are, unless indicated otherwise, taken from these sources. Let us begin by defining the matrix we are interested in. {definition} Let $G = (V, E)$ be an undirected graph with $n$ vertices $V$ and edges $E$.

The Laplacian of $G$ is defined as
\[ L(G) = D(G) - A(G), \]
where $A(G)$ is the adjacency matrix of $G$ and $D(G)$ is the diagonal matrix of vertex degrees.
The following lemma summarizes some basic properties of $L(G)$. These properties are found in every introduction to spectral graph theory. {lemma} Let $L$ be the Laplacian of a graph $G$. For brevity, we omit the argument $G$ in the following. Then, $L$ satisfies the following.

  • For every vector $x \in \mathbb{R}^n$ we have
\[ x^T L x = \sum_{\{u,v\} \in E} (x_u - x_v)^2 . \]

  • $L$ is symmetric positive semi-definite, i.e., $L = L^T$ and $x^T L x \ge 0$ for every vector $x \in \mathbb{R}^n$.

  • Each column (row) of $L$ sums to $0$.
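The three properties above are easy to verify numerically. The following is a minimal sanity check on a small example graph (a 4-cycle); numpy is assumed to be available, and the graph and test vector are illustrative choices, not from the paper.

```python
# Numerical sanity check of the basic Laplacian properties, using a 4-cycle.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # cycle C_4
n = 4

# L = D - A: degree matrix minus adjacency matrix.
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, -2.0, 0.5, 3.0])

assert np.allclose(L, L.T)                     # symmetric
assert np.all(np.linalg.eigvalsh(L) >= -1e-9)  # positive semi-definite
assert np.allclose(L.sum(axis=0), 0)           # each column sums to 0
# Quadratic form: x^T L x equals the sum over edges of (x_u - x_v)^2.
assert np.isclose(x @ L @ x, sum((x[u] - x[v]) ** 2 for u, v in edges))
```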

A.1 Spectral Analysis

We now turn our attention to the spectrum of the Laplacian. {definition} Let $L$ be the Laplacian of a graph $G$. Lemma A.1 and the spectral theorem of linear algebra ensure that $L$ has an orthogonal eigenbasis, i.e., there are $n$ (not necessarily distinct) eigenvalues $\lambda_1, \dots, \lambda_n$ with linearly independent eigenvectors which can be chosen to be mutually orthogonal.

We call the eigenvalues of $L$ the Laplacian spectrum of $G$ and write
\[ \lambda_1 \le \lambda_2 \le \dots \le \lambda_n , \]
where the $\lambda_i$ are the eigenvalues of $L$.

The corresponding eigenvectors are denoted $v_1, \dots, v_n$. The Laplacian spectrum of $G$ contains valuable information about $G$. Some very basic results are given in the next lemma. {lemma} Let $G$ be a graph with Laplacian spectrum $\lambda_1 \le \dots \le \lambda_n$. The following holds for both the unweighted and the weighted spectrum.

  • The all-ones vector $\mathbf{1} = (1, \dots, 1)^T$ is an eigenvector of $L$ with eigenvalue $0$. Hence, $0$ is always the smallest eigenvalue of any Laplacian.

  • The multiplicity of the eigenvalue $0$ is equal to the number of connected components of $G$. In particular, a connected graph has $\lambda_1 = 0$ and $\lambda_2 > 0$.
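Both facts can be observed directly on a small disconnected example. The following sketch, assuming numpy is available, uses two disjoint triangles (two connected components), so the eigenvalue $0$ should appear with multiplicity exactly two.

```python
# Check of the basic spectral facts on a graph with two connected components.
import numpy as np

def laplacian(n, edges):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return np.diag(A.sum(axis=1)) - A

# Two disjoint triangles: 2 connected components.
L = laplacian(6, [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
lam = np.linalg.eigvalsh(L)  # eigenvalues, sorted ascending

assert np.isclose(lam[0], 0) and np.isclose(lam[1], 0)  # multiplicity >= 2
assert lam[2] > 1e-9                                    # ... and exactly 2
assert np.allclose(L @ np.ones(6), 0)  # all-ones vector has eigenvalue 0
```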

The second-smallest eigenvalue $\lambda_2$ is closely related to the connectivity properties of $G$. It was therefore called the algebraic connectivity when it was first intensively studied by Fiedler [16]. The eigenvector corresponding to $\lambda_2$ is also called the Fiedler vector. A first, albeit weak, result is the preceding lemma. A stronger result with a corollary useful for simple estimates is given in the next lemma. {lemma}[[23]] Let $\lambda_2$ be the second-smallest eigenvalue of the unweighted Laplacian of a graph $G$ with $n$ nodes, and let $\operatorname{diam}(G)$ be the diameter of $G$. Then
\[ \lambda_2 \ge \frac{4}{n \cdot \operatorname{diam}(G)} . \]
{corollary}

Using $\operatorname{diam}(G) \le n$, we get
\[ \lambda_2 \ge \frac{4}{n^2} . \]

Another useful result is due to Fiedler [16]. {lemma} Let $\lambda_2$ be the second-smallest eigenvalue of $L(G)$. Then,
\[ \lambda_2 \le \frac{n}{n-1} \cdot \min_{v \in V} \deg(v) . \]

For the maximum degree $\Delta$ of graph $G$, it immediately follows that
\[ \lambda_2 \le \frac{n}{n-1} \cdot \Delta \le 2\Delta . \]
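Both eigenvalue bounds, Mohar's lower bound $\lambda_2 \ge 4/(n \cdot \operatorname{diam}(G))$ and Fiedler's upper bound $\lambda_2 \le \frac{n}{n-1} \min_v \deg(v)$, can be checked numerically. A minimal sketch on the path graph $P_4$ (diameter $3$, minimum degree $1$), assuming numpy is available:

```python
# Numerical check of the diameter and degree bounds on lambda_2 for P_4.
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3)]  # path graph P_4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
deg = A.sum(axis=1)
L = np.diag(deg) - A

lam2 = np.linalg.eigvalsh(L)[1]  # second-smallest eigenvalue (= 2 - sqrt(2))
diam = 3                         # diameter of P_4

assert lam2 >= 4 / (n * diam)            # Mohar's diameter bound
assert lam2 <= n / (n - 1) * deg.min()   # Fiedler's degree bound
```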
A stronger relationship between $\lambda_2$ and the network’s connectivity properties is provided via the graph’s Cheeger constant. {definition} Let $G = (V, E)$ be a graph and $S \subseteq V$ a subset of the nodes. The boundary of $S$ is defined as the set of edges having exactly one endpoint in $S$, i.e.,
\[ \partial S = \{ \{u, v\} \in E : u \in S,\; v \notin S \} . \]
{definition}

Let $G$ be a graph. The isoperimetric number of $G$ is defined as
\[ i(G) = \min_{\emptyset \ne S \subset V,\; |S| \le n/2} \frac{|\partial S|}{|S|} . \]

It is also called the Cheeger constant of the graph. The isoperimetric number of a graph is a measure of how well any subset of the graph is connected to the rest of the graph. Graphs with a high Cheeger constant are also called expanders. The following was proven by Mohar. {lemma}[[22]] Let $\lambda_2$ be the second-smallest eigenvalue of $L(G)$, and let $i(G)$ be the isoperimetric number of $G$. Then,
\[ \frac{\lambda_2}{2} \le i(G) \le \sqrt{\lambda_2 \left( 2\Delta - \lambda_2 \right)} . \]
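For small graphs, the isoperimetric number can be computed by brute force over all subsets, which lets us verify the Cheeger-type relation $\lambda_2/2 \le i(G) \le \sqrt{\lambda_2(2\Delta - \lambda_2)}$ directly. A sketch on the 6-cycle (where $i(C_6) = 2/3$ and $\lambda_2 = 1$), assuming numpy is available:

```python
# Brute-force check of the Cheeger-type bounds on the 6-cycle.
from itertools import combinations
import numpy as np

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]  # cycle C_6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

def isoperimetric(n, edges):
    """i(G): minimum of |boundary(S)| / |S| over all S with 0 < |S| <= n/2."""
    best = float('inf')
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = set(S)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, cut / len(S))
    return best

i_G = isoperimetric(n, edges)    # 2/3 for C_6 (three consecutive nodes)
lam2 = np.linalg.eigvalsh(L)[1]  # 1 for C_6
Delta = 2                        # maximum degree of a cycle

assert lam2 / 2 <= i_G + 1e-9
assert i_G <= np.sqrt(lam2 * (2 * Delta - lam2)) + 1e-9
```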
This concludes our introduction to spectral graph theory, which suffices for the analysis of identical machines. For machines with speeds, it turns out that a generalized Laplacian is a more expressive quantity.

A.2 Generalized Laplacian Analysis

Recall the speed-matrix $S = \operatorname{diag}(s_1, \dots, s_n)$ from the introduction. Instead of analyzing the Laplacian $L$, we are now interested in the generalized Laplacian, defined as $L_S = L S^{-1}$. This definition is also used by Elsässer et al. in [11] in the analysis of continuous diffusive load balancing in heterogeneous networks. In this reference, the authors prove a variety of results for the generalized Laplacian, which we restate here in a slightly different language.
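A quick numerical illustration, assuming the generalized Laplacian has the form $L_S = L S^{-1}$ for a positive diagonal speed-matrix $S$: the speed vector $s$ then satisfies $L_S s = L S^{-1} s = L \mathbf{1} = 0$, while the symmetry of $L$ is lost. The graph and speeds below are illustrative choices; numpy is assumed to be available.

```python
# The speed vector lies in the kernel of the generalized Laplacian L S^{-1}.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # small example graph
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
L = np.diag(A.sum(axis=1)) - A

s = np.array([1.0, 2.0, 3.0, 0.5])  # arbitrary positive speeds
S = np.diag(s)
L_S = L @ np.linalg.inv(S)          # generalized Laplacian L S^{-1}

assert np.allclose(L_S @ s, 0)      # s is a right-eigenvector with eigenvalue 0
assert not np.allclose(L_S, L_S.T)  # L_S is no longer symmetric
```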

It turns out that in the discussion of the properties of this generalized Laplacian, many results carry over from the analysis of the normal Laplacian. The similarity is made manifest by the introduction of a generalized dot product. {definition} For vectors $x, y \in \mathbb{R}^n$, we define the generalized dot product with respect to $S$ as
\[ \langle x, y \rangle_S = x^T S^{-1} y = \sum_{i=1}^n \frac{x_i y_i}{s_i} . \]
{lemma}

The vector space $\mathbb{R}^n$ together with $\langle \cdot, \cdot \rangle_S$ forms an inner product space. This means that

  • $\langle x, y \rangle_S = \langle y, x \rangle_S$, i.e., $\langle \cdot, \cdot \rangle_S$ is symmetric,

  • $\langle \alpha x + \beta y, z \rangle_S = \alpha \langle x, z \rangle_S + \beta \langle y, z \rangle_S$ for any scalars $\alpha$ and $\beta$, i.e., $\langle \cdot, \cdot \rangle_S$ is linear in its first argument,

  • $\langle x, x \rangle_S \ge 0$, with equality if and only if $x = 0$, i.e., $\langle \cdot, \cdot \rangle_S$ is positive definite.

Proof.

All three properties follow immediately from Definition A.2, provided the $s_i$ are positive, which is true in our case. ∎

{remark}

The fact that $\langle \cdot, \cdot \rangle_S$ is an inner product allows us to directly apply many results of linear algebra to it. For example, all inner products satisfy the Cauchy–Schwarz inequality, i.e.,
\[ \langle x, y \rangle_S^2 \le \langle x, x \rangle_S \cdot \langle y, y \rangle_S . \]
A proof of this important inequality can be found in every introductory book on Linear Algebra.
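Taking the generalized dot product to be $\langle x, y \rangle_S = \sum_i x_i y_i / s_i$, its inner-product properties and the Cauchy–Schwarz inequality can be spot-checked numerically. The speeds and vectors below are illustrative; numpy is assumed to be available.

```python
# Spot-check of the S-inner product and its Cauchy-Schwarz inequality.
import numpy as np

s = np.array([1.0, 2.0, 4.0])  # machine speeds (must be positive)

def dot_S(x, y):
    """Generalized dot product <x, y>_S = sum_i x_i * y_i / s_i."""
    return np.sum(x * y / s)

x = np.array([1.0, -1.0, 2.0])
y = np.array([0.5, 3.0, -1.0])

assert np.isclose(dot_S(x, y), dot_S(y, x))           # symmetric
assert dot_S(x, x) > 0                                # positive definite
assert dot_S(x, y) ** 2 <= dot_S(x, x) * dot_S(y, y)  # Cauchy-Schwarz
```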

Another concept is that of orthogonality. Two vectors $x$ and $y$ are called orthogonal to each other, $x \perp y$, if $x^T y = 0$. Analogously, we call $x$ and $y$ orthogonal with respect to $S$ if $\langle x, y \rangle_S = 0$. Let us now collect some of the properties of $L_S$. These properties have also been used in [11]. We restate them here using the notation of the generalized dot product. {lemma} (Compare Lemma 1 in [11]) Let $L$ be the Laplacian of a graph, and let $S = \operatorname{diag}(s_1, \dots, s_n)$ be the speed-matrix. Then the following holds true for the generalized Laplacian $L_S = L S^{-1}$.

  • The speed-vector $s = (s_1, \dots, s_n)^T$ is a (right-)eigenvector of $L_S$ with eigenvalue $0$.

  • $L_S$ is not symmetric any more. It is, however, still positive semi-definite.

  • Since $L_S$ is not symmetric, we have to distinguish left- and right-eigenvectors. Similar to the spectral theorem of linear algebra, we can find a basis of right-eigenvectors of $L_S$ that are orthogonal with respect to $\langle \cdot, \cdot \rangle_S$.

Proof.

(1) We have
\[ L_S s = L S^{-1} s = L \mathbf{1} = 0 \]
via Lemma A.1. For (2) and (3), suppose that $x$ is a right-eigenvector of $L_S$ with eigenvalue $\lambda$. If we define $y = S^{-1/2} x$, then we have
\[ S^{-1/2} L S^{-1/2} y = S^{-1/2} L S^{-1} x = \lambda S^{-1/2} x = \lambda y . \]
This proves that $x$ is a right-eigenvector of $L_S$ with eigenvalue $\lambda$ if and only if $y$ is an eigenvector of $S^{-1/2} L S^{-1/2}$ with eigenvalue $\lambda$. The latter matrix is positive semi-definite, because for every vector $z$, we have
\[ z^T \left( S^{-1/2} L S^{-1/2} \right) z = (S^{-1/2} z)^T L \, (S^{-1/2} z) \ge 0 , \]
since $L$ itself is positive semi-definite. Now, since $S^{-1/2} L S^{-1/2}$ is symmetric positive semi-definite, all its eigenvalues are real and non-negative and it possesses an orthogonal eigenbasis. Let us denote the vectors of the eigenbasis by $y_i$, $i = 1, \dots, n$. As we have just shown, this implies that the vectors