Load Balancing for Skewed Streams on Heterogeneous Clusters


Muhammad Anis Uddin Nasir (Royal Institute of Technology, Sweden), Hiroshi Horii (IBM Research Tokyo, Japan), Marco Serafini (Qatar Computing Research Institute), Nicolas Kourtellis (Telefonica Research), Rudy Raymond (IBM Research Tokyo, Japan), Sarunas Girdzijauskas (Royal Institute of Technology, Sweden), Takayuki Osogami (IBM Research Tokyo, Japan)
anisu@kth.se, horii@jp.ibm.com, mserafini@qf.org.qa, nicolas.kourtellis@telefonica.com, rudyhar@jp.ibm.com, sarunasg@kth.se, osogami@jp.ibm.com
Abstract

Streaming applications frequently encounter skewed workloads and execute on heterogeneous clusters. Optimal resource utilization in such adverse conditions becomes a challenge, as it requires inferring the resource capacities and the input distribution at run time. In this paper, we tackle these challenges by modeling them as a load balancing problem. We propose a novel partitioning strategy called Consistent Grouping (cg), which enables each processing element instance (pei) to process the workload according to its capacity. The main idea behind cg is the notion of small, equal-sized virtual workers at the sources, which are assigned to workers based on their capacities. We provide a theoretical analysis of the proposed algorithm and show via extensive empirical evaluation that our proposed scheme outperforms state-of-the-art approaches like key grouping. In particular, cg achieves 3.44x better performance in terms of latency compared to key grouping.

I Introduction

Distributed stream processing engines (dspes) have recently gained much attention due to their ability to process huge volumes of data with very low latency on clusters of commodity hardware. dspes enable processing information that is produced at a very fast rate in a variety of contexts, such as IoT applications, software logs, and social networks. For example, Twitter users generate hundreds of millions of tweets per day (http://www.internetlivestats.com/twitter-statistics/) and Facebook users upload more than 350 million photos per day (http://www.businessinsider.com/facebook-350-million-photos-each-day-2013-9).

Streaming applications are represented by directed acyclic graphs (dags), where vertices are called processing elements (pes) and represent operators, and edges are called streams and represent the data flowing from one pe to the next. For scalability, streams are partitioned into sub-streams and processed in parallel on replicas of pes called processing element instances (peis).

Fig. 1: Example showing that key grouping generates imbalance in the presence of a heterogeneous cluster. The capacity and the resource utilization of the $i$-th worker are represented by $c_i$ and $\rho_i$, respectively. Each key ($k_j$) is represented by a box of a different color. Imbalance is the difference between the maximum and the average resource utilization (see Section IV for details).

Applications of dspes, especially in data mining and machine learning, typically require accumulating state across the stream by grouping the data on common fields [ben-haim2010spdt, berinde2010heavyhitters]. Akin to MapReduce, this grouping in dspes is usually called key grouping (kg) and is implemented using hashing [nasir2015power]. kg allows each source pei to route each message solely via its key, without needing to keep any state or to coordinate among peis. However, kg is unaware of the underlying skewness in the input streams [lin2009curse], which causes a few peis to sustain a significantly higher load than others, as demonstrated in Figure 1 with a toy example. This sub-optimal load balancing leads to poor resource utilization and inefficiency.

The problem is further complicated when the underlying resources are heterogeneous [koliousis2016saber, schneider2016dynamic] or changing over time [zaharia2008improving, suresh2015c3]. For various commercial enterprises, the resources available for stream mining consist of dedicated machines, private clouds, bare metal, virtualized data centers and commodity hardware. For streaming applications, the heterogeneity is often invisible to the upstream peis and requires inferring the resource capacities in order to generate a fair assignment of the tasks to the downstream peis. However, gathering statistics and finding optimal placement often leads to bottlenecks, while at the same time microsecond latencies are desired [kalyvianaki2016themis].

Alternatively, stateless streaming applications, such as interaction with external data sources, employ shuffle grouping (sg) to spread the stream load equally across the peis, i.e., by sending each message to a new pei in cyclic order, irrespective of its key. sg allows each source pei to send an equal number of messages to each downstream pei, without the need to keep any state or to coordinate among peis. However, similarly to kg, sg is unaware of the heterogeneity in the cluster, which can cause some peis to sustain unpredictably higher load than others. Further, sg typically requires more memory to express stateful computations [nasir2015power, katsipoulakis2017holistic].

In the present work, we study the load balancing problem for a streaming engine running on a heterogeneous cluster and processing a non-uniform workload. To the best of our knowledge, we are the first to address both challenges together. We envision a light-weight and fair key grouping strategy for both stateless and stateful streaming applications. Moreover, this strategy must limit the number of workers processing each key, which is analogous to reducing the memory footprint and aggregation cost for stateful computation [nasir2015power, katsipoulakis2017holistic]. Towards this goal, we propose a novel grouping strategy called Consistent Grouping (cg), which handles both the potential skewness in the input data distribution and the heterogeneity of resources in dspes. cg borrows the concept of virtual workers from traditional consistent hashing [godfrey2004load, godfrey2005heterogeneity] and employs rebalancing to achieve fair assignment, similar to [shah2003flux, gedik2014partitioning, balkesen2013adaptive, castro2013integrating, suresh2015c3]. In summary, our work makes the following contributions:

  • We propose a novel grouping scheme called Consistent Grouping to improve the scalability for dspes running on heterogeneous clusters and processing skewed workload.

  • We provide a theoretical analysis of the proposed scheme and show the effectiveness of the proposed scheme via extensive empirical evaluation on synthetic and real-world datasets. In particular, cg achieves bounded imbalance and generates almost optimal memory footprint.

  • We measure the impact of cg on a real deployment on Apache Storm. Compared to key grouping, it improves the throughput of an example application on real-world datasets and reduces the latency by 3.44x.

II Overview of the Approach

Consistent grouping relies on the concept of virtual workers and allows a variable number of virtual workers for each pei. The main idea behind cg is to assign the input stream to the virtual workers in a way that each virtual worker receives approximately the same number of messages. Later, these virtual workers are assigned to the actual workers based on their capacity. We refer to downstream peis as workers and to upstream peis as sources throughout the paper. Similar approaches have been considered in the past in the context of distributed hash tables [godfrey2004load, godfrey2005heterogeneity]. cg assigns tasks to peis based on the capacity of the peis; thus, more powerful peis are assigned more work than less powerful ones. Next, we provide an overview of cg's components.

Fig. 2: Example showing that consistent grouping improves the imbalance in the presence of a heterogeneous cluster, compared to key grouping. The capacity and the resource utilization of the $i$-th worker are represented by $c_i$ and $\rho_i$, respectively. Each key ($k_j$) is represented by a box of a different color. Imbalance is the difference between the maximum and the average resource utilization.

First, we propose a novel strategy called power of random choices (porc), which assigns the incoming messages to a set of equal-sized virtual workers. The basic idea behind this scheme is to introduce the notion of capacity for the virtual workers. In particular, we set the capacity of each virtual worker to $(1+\varepsilon)$ times the average load, for some parameter $\varepsilon$. Note that the capacity is calculated at run time using the average load. Given a sequence of virtual workers for each key, porc maps a key to the first virtual worker with spare capacity. porc allows the heavy keys to spread across the other virtual workers, thus reducing the memory footprint and the aggregation cost. The parameter $\varepsilon$ in the algorithm provides the trade-off between imbalance and memory footprint.

Second, cg takes a radically new approach towards load balancing and allows peis to decide their workload based on their capacities. We call this component worker delegation. Each worker monitors its workload and sends a binary signal (increase or decrease workload) to the sources in case it experiences excessive workload. This simple modification turns the distributed load balancing problem into a local decision problem, where each pei can choose its share of the workload based on its current capacity. Moreover, worker delegation provides the flexibility to implement various application-specific requirements at each pei. The sources react to the signals by moving virtual workers from one pei to another. Note that it is required that the sources receive the signal and operate in a consistent manner, performing the same routing of messages. Such an operation might negatively impact the performance of a streaming application, as it requires one-to-many (from one worker to all sources) broadcast messages across the network. To overcome this challenge, we relax the consistency constraint in the dag and allow sources to be eventually consistent. Specifically, we propose piggybacking, which encodes the binary signals along with the acknowledgment messages to avoid extra communication overhead.

Lastly, cg ensures that each message is processed in a consistent manner by discarding the message migration phase. When a source receives a request to change (increase or decrease) the workload, cg relocates virtual workers assigned to the overloaded worker, thus, only affecting the future routing of the messages. cg follows the same programming primitive as Partial Key Grouping (pkg) [nasir2015power] for stream partitioning, supporting both stateless and stateful map-reduce like applications. We propose periodic aggregation to support map-reduce like stateful applications, which leverages the existing dag and imposes a very low-overhead in the stream application. Figure 2 provides an example using cg for the dag in Figure 1.

III Background on Stream Partitioning

Load balancing is one of the most well-studied problems in distributed systems, and it has also been studied extensively in theoretical computer science [mitzenmacher2001potc-survey]. Next, we discuss the various ways load balancing has been addressed in distributed systems, as well as state-of-the-art partitioning strategies used to assign load to workers in such systems.

III-A Load Balancing in Distributed Systems

In graph processing systems, load balancing is often addressed via balanced graph partitioning, where the goal is to minimize the edge-cut between different partitions [gonzalez2012powergraph, malewicz2010pregel]. Further, several systems have been proposed specifically to solve the load balancing problem, e.g., Mizan [khayyat2013mizan], GPS [salihoglu2013gps], and others. Most of these systems perform dynamic load rebalancing at runtime via vertex migration [yan2015effective].

Load balancing and scheduling often appear in a similar context in map-reduce-like systems, where the goal is to schedule jobs onto a set of machines in order to maximize resource utilization [hindman2011mesos, vavilapalli2013apache]. Sparrow [ousterhout2013sparrow] is a stateless distributed job scheduler that exploits a variant of the power of two choices [park2011multiplechoices]. Tarazu [ahmad2012tarazu] improves the load balance for map-reduce in heterogeneous environments by monitoring and scheduling jobs based on their communication patterns.

Dynamic load balancing in database systems is often implemented using rebalancing, similar to the systems above [rahm1995dynamic]. Also, online load migration is effective for elasticity in database systems [suresh2015c3, 2014marcoestore]. Lastly, dynamic load balancing has been considered in the context of web servers [cardellini1999dynamic], GPUs [chen2010dynamic], and many other settings.

III-B Existing Stream Partitioning Functions

peis communicate by exchanging messages over the network. dspes offer several primitives for sources to partition the stream, i.e., to route messages to different workers.

Key Grouping (kg). This partitioning ensures the messages with the same key are handled by the same pei (analogous to MapReduce). It is usually implemented through hashing. kg is the perfect choice for stateful operators. It allows each source pei to route each message solely via its key, without the need to keep any state or to coordinate among peis. However, kg does not take into account the underlying skewness in the input distribution, which causes a few peis to sustain a significantly higher load than others. This suboptimal load balancing leads to poor resource utilization and inefficiency.
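To make the routing concrete, here is a minimal Python sketch of hash-based key grouping (we use md5 as a stand-in for the engine's hash function; the function name is ours):

```python
import hashlib

def key_grouping(key: str, num_workers: int) -> int:
    """Route a message to a worker based solely on its key (kg)."""
    # A stable hash guarantees that every source maps the same key to
    # the same worker, without shared state or coordination.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_workers
```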

Partial Key Grouping (pkg). pkg [nasir2015power, nasir2016two, nasir2015partial] adapts the traditional power of two choices to load balancing in map-reduce-like streaming operators. pkg guarantees nearly perfect load balance in the presence of bounded skew using two novel schemes: key splitting and local load estimation. Local load estimation enables each source to predict the load of the workers leveraging past history. However, similar to kg, pkg assumes that each worker has the same resources and that the service time of the messages follows a uniform distribution, which is a strong assumption for many real-world use cases.
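The following is a minimal sketch of the pkg idea under our assumptions (two salted md5 hashes stand in for the two hash functions, and a per-source counter vector implements local load estimation; names are illustrative):

```python
import hashlib

def _choice(key: str, seed: int, num_workers: int) -> int:
    digest = hashlib.md5(f"{seed}:{key}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_workers

def partial_key_grouping(key: str, load: list) -> int:
    """pkg: each key has two candidate workers (key splitting); pick
    the one the source locally estimates as least loaded."""
    n = len(load)
    w1, w2 = _choice(key, 1, n), _choice(key, 2, n)
    chosen = w1 if load[w1] <= load[w2] else w2
    load[chosen] += 1  # local load estimation: messages sent so far
    return chosen
```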

Shuffle Grouping (sg). This partitioning forwards messages typically in a round-robin fashion. It provides excellent load balance by assigning an almost equal number of messages to each pei. However, no guarantee is made on the partitioning of the key space, as each occurrence of a key can be assigned to any pei. It is the perfect choice for stateless operators. However, with stateful operators one has to handle, store and aggregate multiple partial results for the same key, thus incurring additional memory and communication costs.
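A round-robin dispatcher is all sg needs; a minimal sketch (class name ours):

```python
from itertools import count

class ShuffleGrouping:
    """sg: route messages in cyclic order, ignoring the key."""
    def __init__(self, num_workers: int):
        self.num_workers = num_workers
        self._counter = count()  # per-source cyclic counter

    def route(self, _key: str) -> int:
        return next(self._counter) % self.num_workers
```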

III-C Consistent Hashing

Consistent Hashing is a special form of hash function that requires minimal changes as the range of the function changes [karger1997consistent]. This strategy solves the assignment problem by systematically producing a random allocation. It relies on a standard hash function that maps both messages and workers onto a unit-size circular ID space, i.e., $[0, 1)$. Further, each task is assigned to the first worker that is encountered when moving in the clockwise direction on the unit circle. Consistent hashing provides load balancing guarantees across the set of workers. Assuming $n$ available workers, and given that the load on a worker is proportional to the size of the interval it owns, no worker owns more than $O(\frac{\log n}{n})$ of the interval (to which each task is mapped), with high probability [karger1997consistent].

One common solution to improve the load balance is to introduce virtual workers, which are copies of workers, corresponding to multiple points on the circle. Whenever a new worker is added, a fixed number of virtual workers is also created on the circle. As each worker is responsible for an interval of the unit circle, creating virtual workers spreads the workload of each worker across the unit circle.
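A minimal sketch of consistent hashing with virtual workers follows (md5 maps labels to the unit circle; the class and helper names are ours):

```python
import bisect
import hashlib

def point(label: str) -> float:
    """Hash a label to a point on the unit circle [0, 1)."""
    digest = hashlib.md5(label.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

class ConsistentHashRing:
    def __init__(self, workers, virtual_per_worker=4):
        # Each worker contributes several points (virtual workers),
        # spreading its share of the circle more evenly.
        pairs = sorted(
            (point(f"{w}#{i}"), w)
            for w in workers for i in range(virtual_per_worker)
        )
        self.points = [p for p, _ in pairs]
        self.owners = [w for _, w in pairs]

    def route(self, key: str) -> str:
        # First virtual worker clockwise from the key's point.
        idx = bisect.bisect(self.points, point(key)) % len(self.points)
        return self.owners[idx]
```

For example, `ConsistentHashRing(["w1", "w2", "w3"]).route("page-42")` returns the owner of the first virtual worker clockwise from the key's point; adding or removing a worker only remaps the keys in the intervals that worker owns.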

Similar to other stream partitioning functions, consistent hashing takes into account neither the heterogeneity in the cluster nor the skewness in the input stream, which restricts its immediate applicability in the streaming context. A way to deal with both heterogeneity and skewness is to employ hash space adjustment for consistent hashing [hwang2013adaptive]. Such schemes require global knowledge of the tasks assigned to each worker to adjust the hash space for the workers, i.e., movement of tasks from the most overloaded worker to the least loaded worker. Even though such schemes provide efficient results in terms of load balance, their applicability in the streaming context incurs additional overhead due to many-to-many communication across workers. On the other hand, if implemented without global information, these schemes may produce unpredictable imbalance due to random task movement across workers.

Consistent Hashing with Bounded Load (ch). Independently from our work, Mirrokni et al. [mirrokni2016consistent] proposed a novel version of the consistent hashing scheme that provides a constant bound on the load of the maximum loaded worker. The basic idea behind their scheme is to introduce the notion of capacity for each worker. In particular, the capacity of each bin is set to $(1+\varepsilon)$ times the average load, for some parameter $\varepsilon$. Further, each task is assigned to the first worker in the clockwise direction with spare capacity.
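Reusing the `ConsistentHashRing` and `point` helpers from the previous sketch, the bounded-load variant can be sketched as follows (the capacity check against $(1+\varepsilon)$ times the average load is our reading of the scheme; names are ours):

```python
import bisect

def bounded_load_route(ring, key, load, epsilon):
    """ch: walk clockwise from the key's point until a virtual worker
    whose owner is below (1 + epsilon) times the average load."""
    total = sum(load.values())
    cap = (1 + epsilon) * (total + 1) / len(load)
    start = bisect.bisect(ring.points, point(key))
    for step in range(len(ring.owners)):
        owner = ring.owners[(start + step) % len(ring.owners)]
        if load[owner] < cap:  # spare capacity: accept the message
            load[owner] += 1
            return owner
    # Unreachable: some worker is always below (1+eps) * average.
    raise RuntimeError("no spare capacity")
```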

III-D Other Approaches

Power of Two Choices (potc). potc achieves near-perfect load balance by first selecting two bins uniformly at random and then assigning the message to the least loaded of the two. For potc, the load of each bin is solely based on the number of messages. Using potc, each key might be assigned to any of the workers. Therefore, the memory requirement in the worst case is proportional to the number of workers, i.e., every key appearing on all the workers.
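A minimal sketch of potc (per-source message counts stand in for the bin loads; names are ours):

```python
import random

def power_of_two_choices(load: list) -> int:
    """potc: sample two bins uniformly at random and send the message
    to the less loaded one, irrespective of its key."""
    a = random.randrange(len(load))
    b = random.randrange(len(load))
    chosen = a if load[a] <= load[b] else b
    load[chosen] += 1
    return chosen
```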

Rebalancing. Another way to achieve fair assignment is to leverage rebalancing [shah2003flux, balkesen2013adaptive, castro2013integrating, suresh2015c3, das2014adaptive]. Once load imbalance is detected, the system activates a rebalancing routine that moves some of the messages and the state associated with them, away from an overloaded worker. While this solution is easy to understand, its applicability in the streaming context requires answering challenging questions: How to identify the imbalance and how to plan the migration. The answers to these questions are often application-specific, as they involve a trade-off between imbalance and rebalancing cost that depends on the size of the state to migrate. For these reasons, rebalancing creates a difficult engineering challenge, which we address in our paper.

IV Preliminaries & Problem Definition

This section introduces the preliminaries that are used in the rest of the paper.

We consider a dspe running on a cluster of machines that communicate by exchanging messages following the flow of a dag. For scalability, streams are partitioned into sub-streams and processed in parallel on replicas of a pe called processing element instances (peis). Load balancing across the whole dag is achieved by balancing along each edge independently. Each edge represents a single stream of data, along with its partitioning scheme. Given a stream under consideration, let $S$ be the set of sources, $W$ the set of workers, and their sizes $|S| = s$ and $|W| = n$.

Each pei is deployed on a machine with a limited capacity $c_i$. For simplicity, we assume that there is a single important resource on which machines are constrained, such as storage or processing. Moreover, each pei $w_i$ has an unbounded input queue $q_i$.

The input to the dag is a sequence of messages $\langle m, k, v, t \rangle$, where $m$ is the identifier, $k$ is the message key, $v$ is the value, and $t$ is the timestamp at which the message is received. The messages are presented to the engine in ascending order by timestamp. Upon receiving a message with key $k$, we need to decide its placement among the workers. We assume one message arrives per unit of time.

We employ queuing theory as the cost model to define the delay and the overhead at each worker. In the model, a sequence of messages arrives at a worker $w_i$. If the worker is occupied, the new message remains in the queue until it can be served. After the message is processed, it leaves the system. We represent the finish time of a message by $f$. The difference between the arrival time $t$ and the finish time $f$ represents the latency of executing the message.

We define a partitioning function $P_t : K \rightarrow W$, which maps each message to one of the peis. This function identifies the pei responsible for processing the message. Each pei is associated with one or more keys. The goal of the partitioning function is to generate an assignment of messages to the set of workers such that the average waiting time is minimized.

Fig. 3: Naïve Bayes implemented via key grouping (kg).

We define the load of a worker $w_i$ as the number of messages assigned to it up to time $t$: $L_i(t) = |\{\langle m, k, v, t'\rangle : P_{t'}(k) = w_i,\ t' \le t\}|$.

Also, we define the normalized load at time $t$ as the ratio between the load and the capacity of the worker: $\rho_i(t) = L_i(t) / c_i$.

We use a definition of imbalance similar to others in the literature (e.g., Flux [shah2003flux] and pkg [nasir2015power]). We define the imbalance at time $t$ as the difference between the maximum and the average normalized load: $I(t) = \max_{i} \rho_i(t) - \frac{1}{n}\sum_{i=1}^{n} \rho_i(t)$.

Further, the memory footprint of a worker is the number of unique keys assigned to it: $M_i(t) = |\{k \in K : P_{t'}(k) = w_i \text{ for some } t' \le t\}|$.

Problem. Given the definition of imbalance, we consider the following problem in this paper.

Problem IV.1

Given a stream of messages drawn from a heavy-tailed distribution $\mathcal{D}$ and a set of $n$ workers with capacities $c_1, \ldots, c_n$, find a partitioning function $P_t$ that minimizes the memory footprint $\sum_i M_i(t)$ while keeping the imbalance $I(t)$ bounded by a constant factor at any time instance $t$.

Memory Cost. One simple solution to Problem IV.1 is to employ round-robin assignment as in sg, which provides an imbalance of at most one in the case of a homogeneous cluster. This load balance comes at the cost of memory, as messages with the same key might end up on all the workers. Also, round-robin assignment produces a higher aggregation cost [nasir2015power, nasir2016two, katsipoulakis2017holistic], which represents the communication cost of accumulating the partial results from the set of workers.

Example. To make the discussion more concrete, we introduce a simple application that will be our running example: the naïve Bayes classifier. A naïve Bayes classifier is a probabilistic model that assumes independence of features in the data (hence the naïve). It estimates the probability of a class $c$ given a feature vector $x = (x_1, \ldots, x_d)$ by using Bayes' theorem:

$P(c \mid x) = \frac{P(c)\, P(x \mid c)}{P(x)}.$

The answer given by the classifier is then the class with maximum likelihood:

$c^\star = \arg\max_{c} P(c \mid x).$

Given that features are assumed independent, the joint probability of the features is the product of the probabilities of each feature. Also, we are only interested in the class that maximizes the likelihood, so we can omit $P(x)$ from the maximization as it is constant. The class probability is proportional to the product

$P(c) \prod_{j=1}^{d} P(x_j \mid c),$

which reduces the problem to estimating the probability $P(x_j \mid c)$ of each feature value $x_j$ given a class $c$, and a prior $P(c)$ for each class. In practice, the classifier estimates these probabilities by counting the frequency of co-occurrence of each feature and class value. Therefore, it can be implemented by a set of counters, one for each pair of feature value and class value.
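As an illustration, here is a minimal Python sketch of such a counter-based classifier (class and method names are ours, and we add simple Laplace smoothing, which the text does not prescribe):

```python
from collections import Counter

class StreamingNaiveBayes:
    """Naive Bayes over a stream: one counter per class and one per
    (feature value, class) pair, updated as labeled messages arrive."""
    def __init__(self):
        self.class_counts = Counter()
        self.pair_counts = Counter()

    def update(self, features, cls):
        self.class_counts[cls] += 1
        for f in features:
            self.pair_counts[(f, cls)] += 1

    def predict(self, features):
        total = sum(self.class_counts.values())

        def score(cls):
            s = self.class_counts[cls] / total          # prior P(c)
            for f in features:                           # P(x_j | c)
                s *= ((self.pair_counts[(f, cls)] + 1)
                      / (self.class_counts[cls] + 2))    # Laplace smoothing
            return s

        return max(self.class_counts, key=score)
```

In a dspe, the counters for each (feature value, class) pair are exactly the per-key state that the partitioning function spreads across workers.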

V Solution Primitives

In this section, we discuss our solution and its various components. Given a set of sources and a set of workers, the goal is to design a grouping strategy that is capable of assigning the messages to the workers proportionally to their capacity, while dealing with the messages’ embedded skew.

Overview. In our work, we propose a novel grouping scheme called consistent grouping (cg). Our scheme borrows the concept of virtual workers from traditional consistent hashing [godfrey2004load, godfrey2005heterogeneity] and employs rebalancing to achieve fair assignment, similar to [shah2003flux, gedik2014partitioning, balkesen2013adaptive, castro2013integrating, suresh2015c3]. cg allows a variable number of virtual workers for each pei. The main idea behind cg is to assign the input stream to the virtual workers in a way that each virtual worker receives approximately the same number of messages. Later, these virtual workers are assigned to the actual workers based on their capacity. One of the challenges is to bound the load of each virtual worker, as this determines by how much moving a virtual worker from one worker to another increases the receiving worker's load. For this, we propose a novel grouping strategy called power of random choices (porc) that is capable of providing bounded imbalance while keeping the memory cost low. Further, we propose three efficient schemes within cg: worker delegation, piggybacking and periodic aggregation, which enable efficient integration of our proposed scheme into standard dspes. Consistent grouping follows the same programming primitive as pkg for stream partitioning. We refer to [nasir2015power] for examples of common data mining algorithms that benefit from cg.

V-A Power of Random Choices

porc assigns the incoming messages to the set of virtual workers in a way that the imbalance is bounded and the overall memory footprint of the keys on the virtual workers is low. The basic idea behind porc is to introduce the notion of continuous capacity, which is a function of the average load. In particular, we set the capacity of each virtual worker to $(1+\varepsilon)$ times the average load, for some parameter $\varepsilon$. Note that the definition of capacity is based on the average load, rather than a hard constraint. Given a sequence of virtual workers for a key, porc maps the key to the first virtual worker with spare capacity. The sequence of virtual workers for a key is produced by using a single hash function and concatenating a salt with the key to produce a new assignment (https://datarus.wordpress.com/2015/05/04/fighting-the-skew-in-spark/). We refer to the first virtual worker in the sequence as the principal virtual worker. The rationale behind this approach is that the heavy keys in a skewed input distribution overload their principal worker. Therefore, we allow the heavy keys to spread across the other virtual workers, which reduces the memory footprint compared to other schemes, e.g., round robin. The parameter $\varepsilon$ in porc provides the trade-off between imbalance and memory footprint. Algorithm 1 provides the pseudocode. porc provides an efficient and generalized solution to the fundamental problem of load balancing for skewed streams in streaming settings, while minimizing the memory footprint. In our work, we adapt porc for fair load balancing in streaming applications, which shows its effectiveness and applicability.

Input: key k, hash function h, number of messages m, virtual workers V, load vector L, imbalance factor ε
procedure getWorker(k, h, m, V, L, ε)
   i ← 0
   w ← h(k)                            ▷ principal virtual worker
   while L[w] ≥ (1 + ε) · m / |V| do   ▷ w has no spare capacity
      i ← i + 1
      w ← h(k ⊕ i)                     ▷ re-hash the key with salt i
   L[w] ← L[w] + 1
   return w
Algorithm 1: Pseudocode for Power of Random Choices.
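For concreteness, here is a minimal executable Python version of Algorithm 1 (md5 with a string salt stands in for the hash function; names are ours):

```python
import hashlib

def porc_route(key: str, load: list, epsilon: float) -> int:
    """porc: try the principal virtual worker first, then salted
    re-hashes, until one is below (1 + eps) times the average load."""
    v = len(load)
    m = sum(load) + 1            # messages seen so far, incl. this one
    cap = (1 + epsilon) * m / v  # continuous capacity per virtual worker
    salt = 0
    while True:
        digest = hashlib.md5(f"{key}/{salt}".encode()).digest()
        w = int.from_bytes(digest[:8], "big") % v
        if load[w] < cap:        # spare capacity found
            load[w] += 1
            return w
        salt += 1                # heavy key spills to its next choice
```

Since the total load is strictly below `v * cap`, some virtual worker is always under capacity, so the loop terminates (assuming the salted hashes eventually reach every bin).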
Fig. 4: Normalized imbalance and memory overhead for different schemes using zipf distribution with different skew and number of virtual workers.

Discussion. To show the effectiveness of porc, we compare its performance with kg, pkg, potc, sg, and ch [mirrokni2016consistent] in terms of imbalance and memory footprint (see Section III for descriptions of these schemes). We leverage a zipf-based dataset with different skews for this experiment (see Section VII for a description of the dataset). The top row of Figure 4 reports the imbalance of the different schemes for different numbers of virtual workers. Results show that both key grouping and partial key grouping generate high imbalance as the skew and the number of virtual workers increase. However, the other schemes perform fairly well in terms of imbalance. Additionally, we report the memory overhead for all the schemes in Figure 4. The memory cost is calculated using the total number of unique keys that appear at each virtual worker. The results verify our claim that load balance is achieved at the cost of memory.

V-B Consistent Grouping

We propose a novel grouping strategy called Consistent Grouping (cg), inspired by consistent hashing. cg borrows the concept of virtual workers from traditional consistent hashing and allows a variable number of virtual workers for each pei [godfrey2004load, godfrey2005heterogeneity]. It is a dynamic grouping strategy that is capable of handling both the heterogeneity in the resources and the variability in the input stream at runtime. cg achieves its goal by allowing the powerful workers to acquire additional virtual workers, which amounts to 'stealing' work from the other workers. Moreover, it allows overloaded workers to gracefully revoke some of their existing virtual workers, which is equivalent to giving up some of the allocated work.

cg is a lightweight and distributed scheme that allows assignment of messages to the workers in a streaming fashion. Moreover, it leverages porc for assignment of keys to each virtual worker in a balanced manner, which allows it to bound the load of each virtual worker. cg is able to balance the load across workers based on their capacities, which allows the dspe to operate under realistic scenarios like heterogeneous clusters and variable workloads.

Time Slot. We introduce the notion of a time slot ($\Delta$), which represents the minimum monitoring time period for a pei. $\Delta$ is an administrative preference that can be determined based on workload traffic patterns. If workloads are expected to change on an hourly basis, setting $\Delta$ on the order of minutes will typically suffice. For slower-changing workloads, $\Delta$ can be set to an hour. The time slot guarantees that workers observe a large enough sample of the input stream to predict their workload.

Similar to consistent hashing, cg initializes each worker with the same number of virtual workers, i.e., $v$ virtual workers each. cg manages a unit-size circular ID space, i.e., $[0, 1)$, and maps the virtual workers and keys onto this ID space. We would like a scheme that is capable of monitoring the load at each worker throughout the lifetime of a streaming application and adjusting the load according to the available capacity of the workers. In doing so, we introduce a novel scheme called pairing virtual workers.

Pairing virtual workers. The load of a worker equals the sum of the loads of its assigned virtual workers. Further, the load of each virtual worker equals the load induced by the messages mapped to it. Ideally, we would like to reassign one of the virtual workers of an overloaded worker to one of the idle workers. However, it is not obvious how one can achieve such an assignment. To enable it, we propose to maintain two first-come-first-serve (FCFS) queues: idle and busy. These queues maintain the lists of idle and busy workers in the dspe and allow cg to pair any removal with an addition (and vice versa) so as to balance the number of virtual workers throughout the execution. For instance, when a worker is overloaded, it sends a message to the sources. The source then removes a virtual worker from the corresponding worker only if it is able to pair the removal with an addition on an idle worker. This simple scheme ensures that the number of virtual workers in the system is balanced throughout the execution and the load of each virtual worker is bounded, which enables cg to perform fair assignment. Note that mapping virtual workers with similar keys to the same worker might reduce the memory footprint. However, this requires maintaining all the unique keys in each virtual worker and each worker. Therefore, we opt for FCFS mapping of virtual workers to workers.
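A minimal sketch of the source-side pairing logic under our assumptions (dictionary-based assignment; names are ours):

```python
from collections import deque

class VirtualWorkerPairing:
    """Pair each removal of a virtual worker from a busy worker with
    an addition on an idle worker, using two FCFS queues."""
    def __init__(self, assignment):
        self.assignment = assignment  # virtual worker id -> worker id
        self.idle = deque()           # workers that signaled 'increase'
        self.busy = deque()           # workers that signaled 'decrease'

    def on_signal(self, worker, overloaded: bool):
        (self.busy if overloaded else self.idle).append(worker)
        self._try_pair()

    def _try_pair(self):
        # Only act when a removal can be paired with an addition, so
        # the total number of virtual workers stays balanced.
        while self.idle and self.busy:
            src, dst = self.busy.popleft(), self.idle.popleft()
            for vw, owner in self.assignment.items():
                if owner == src:
                    self.assignment[vw] = dst  # affects future routing only
                    break
```

A worker that is still overloaded after losing one virtual worker simply signals again in a later time slot.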

V-C Integration in a dspe

While consistent grouping is easy to understand, its applicability to real-world stream processing engines is not trivial. We package cg with a few efficient strategies that enable its applicability in a variety of dspes.

Worker Delegation. First, we propose an efficient scheme called worker delegation, which pushes the load balancing problem to the workers and allows them to decide their workload based on their capacity. Each worker monitors its workload and takes decisions based on its current workload and available capacity. The decision can either be to increase the workload or to decrease the workload. The intuition behind this approach is that the cluster often consists of a large number of workers, and collecting statistics periodically from the workers creates an additional overhead for a streaming application.

The worker delegation scheme allows the workers to interact with sources by sending binary signals: (1) increase workload and (2) decrease workload. Each worker monitors its workload and tries to keep it between two thresholds: if the workload exceeds the upper threshold, the worker sends a decrease signal to the sources, and if the workload falls below the lower threshold, the worker sends an increase signal to the sources. This simple modification gives workers the flexibility to easily adapt to complex application-specific requirements, e.g., processing, storage, service time, and queue length.
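The worker-side check reduces to a few lines; a sketch under our assumptions (queue length as the monitored resource, thresholds as parameters; names are ours):

```python
def delegation_signal(queue_length: int, capacity: float,
                      theta_low: float, theta_high: float):
    """Run once per time slot on each worker: compare the normalized
    workload against two thresholds and emit a binary signal, if any."""
    utilization = queue_length / capacity
    if utilization > theta_high:
        return "decrease"  # piggybacked on the next acknowledgment
    if utilization < theta_low:
        return "increase"
    return None            # within bounds: stay silent
```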

Piggybacking. Each worker needs to update all the sources when it experiences an undesirable (low or high) workload. Note that the sources are required to receive the signal and operate in a consistent manner, performing the same routing of messages. Such a deployment might negatively impact the performance of a streaming application, as it requires one-to-many broadcast messages across the network. To overcome this challenge, we propose to relax the consistency constraint in the dag and allow operators to be eventually consistent. We propose to encode the binary signals from the workers along with the acknowledgment messages. During the execution, a source only receives the signal from a worker as a response to its messages. This means that the worker might continue receiving messages with the same key even after triggering the decision.

Periodic Aggregation. When the sources receive a request to increase the workload, they move one of the virtual workers from an overloaded worker to the idle worker. During the change of routing, we need to ensure that the messages that are pending in the queues of the workers are processed in a consistent manner.

cg ensures that each message is processed in a consistent manner by discarding the message migration phase. Concretely, each worker processes the messages that are assigned to it, and any change in the routing only affects the messages that arrive later.

As a message with the same key might be forwarded to different workers, cg performs periodic aggregation of partial results from the workers to ensure that the state per key is consistent. Periodic aggregation leverages the same dag and imposes a very low overhead in the stream application. Particularly, cg follows the same programming primitive as pkg for stream partitioning, supporting both stateless and stateful map-reduce like applications.

VI Analysis

We proceed to analyze the conditions under which cg achieves good load balance. Recall from Section IV that we have a set of $n$ workers at our disposal. Each worker has a limited capacity, which is represented by $c_i$. Capacities are normalized so that the average capacity is $1$, that is, $\frac{1}{n}\sum_{i=1}^{n} c_i = 1$. We assume that the workers are ordered in decreasing order of capacity, i.e., $c_1 \ge c_2 \ge \cdots \ge c_n$.

The input to the engine is a sequence of messages $\langle m, k, v, t \rangle$, where $m$ is the identifier, $k$ is the message key, $v$ is the value, and $t$ is the timestamp at which the message is received. Upon receiving a message with key $k$, we need to decide its placement among the workers. We assume one message arrives per unit of time. The messages arrive in ascending order by timestamp.

Key distribution. We assume the existence of an underlying discrete distribution $\mathcal{D}$ supported on a key set $K$, from which keys are drawn, i.e., the key sequence is a sequence of independent samples from $\mathcal{D}$. We represent the average arrival rate of messages as $\lambda$ and the cardinality of the key set as $N$, i.e., $|K| = N$. We assume that keys are ordered in decreasing order of average arrival rate, $p_1 \ge p_2 \ge \cdots \ge p_N$, with $\sum_{r=1}^{N} p_r = 1$. We model the load distribution as a Zipf distribution with exponent $z$. The probability mass function of the Zipf distribution with exponent $z$ is

$p(r) = \frac{r^{-z}}{\sum_{j=1}^{N} j^{-z}},$

where $r$ is the rank of each key and $N$ is the total number of keys.
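For the simulations, such keys can be generated with a few lines of Python (a simple weighted sampler; names are ours):

```python
import random

def zipf_keys(num_keys: int, z: float):
    """Yield key ranks drawn from a Zipf distribution with exponent z:
    p(r) proportional to r ** -z for r = 1..num_keys."""
    weights = [r ** -z for r in range(1, num_keys + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    ranks = list(range(1, num_keys + 1))
    while True:
        yield random.choices(ranks, probs)[0]
```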

Our goal is to design an algorithm to solve Problem IV.1. In the analysis of cg, we assume that $\Delta$ represents the time slot, which corresponds to the minimum time period that each worker waits after sending a signal to the sources. Also, as we are not considering elasticity, we assume that the system is well provisioned, i.e., the total capacity is sufficient to sustain the arrival rate ($\lambda \le n$).

VI-A Imbalance with Consistent Grouping

For simplification, we divide the analysis of cg into two parts: dividing the workload into small, equal-sized virtual workers, and assigning the virtual workers to workers based on their capacities. Assume that $v$ represents the number of virtual workers assigned to each worker at the initial time. Then, for $n$ heterogeneous workers, we have $vn$ homogeneous virtual workers. Each virtual worker has the same capacity (hence, homogeneous), and this capacity is guaranteed to be at most the capacity of the worker with the lowest capacity. The sources do not know the capacity of each worker. However, since all virtual workers are homogeneous, the sources can balance the load of each worker by assigning an equal number of messages to each virtual worker, and by keeping the number of virtual workers assigned to each worker proportional to its capacity.

VI-A1 Chromatic Balls and Bins

We model the first problem using the framework of balls-and-bins processes, where keys correspond to colors, messages to colored balls, and virtual workers to bins. Choose $d$ independent hash functions $h_1, \ldots, h_d$ uniformly at random. Define the Greedy-$d$ scheme as follows: at time $t$, the $t$-th ball (whose color is $k_t$) is placed in the bin with minimum current load among $h_1(k_t), \ldots, h_d(k_t)$. We define the imbalance as the difference between the maximum and the average load across the bins at time $t$.

Observe that when $d = 1$, each ball color is assigned to a unique bin, so no choice has to be made; this models hash-based key grouping. At the other extreme, when $d$ equals the number of bins, all bins are valid choices, and we obtain shuffle grouping.

pkg [nasir2015power] considers the case of $d = 2$, which is the same as having two hash functions $h_1$ and $h_2$. The algorithm maps each key to the sub-stream assigned to the least loaded worker between the two possible choices, that is: $P_t(k) = \arg\min_{i \in \{h_1(k), h_2(k)\}} L_i(t)$.

Lemma VI.1

Suppose we use $n$ bins and let $d \ge 2$. Assume a key distribution $\mathcal{D}$ with maximum probability $p_1 \le 1/(5n)$. Then, the imbalance after $m$ steps of the Greedy-$d$ process is $O(\log n)$, with high probability [nasir2015power].

Observe that the imbalance in the case of pkg is only guaranteed for the case when $p_1 \le 1/(5n)$. However, when $p_1 > 2/n$, the imbalance grows proportionally with the frequency of the most frequent key and the number of workers.

Power of Two Choices (potc) [azar1999balanced-allocations] leverages two random numbers $r_1$ and $r_2$, drawn uniformly from the set of workers. The algorithm maps each message to the sub-stream assigned to the least loaded worker between the two possible choices, that is: $P_t(m) = \arg\min_{i \in \{r_1, r_2\}} L_i(t)$. The above random numbers can be generated by using hash functions with messages as arguments. In this case, note that potc differs from pkg in the sense that the two hashes are applied to the messages rather than to the keys. The procedure is identical to the standard Greedy-$d$ process of Azar et al. [azar1999balanced-allocations], therefore the following bounds hold.

Lemma VI.2

Suppose we use $n$ bins and let $d \ge 2$. Then, the imbalance after $m$ steps of the Greedy-$d$ process is $\frac{\ln \ln n}{\ln d} + O(1)$, with high probability [azar1999balanced-allocations].

Note that these bounds can be generalized to the infinite process in which balls leave the system in each time unit (one from each worker) and the number of balls entering the system is less than $n$. In such cases, the relative load remains the same, therefore the bound holds. porc generates an imbalance that is bounded by the factor $\varepsilon$, i.e., $I(t) = O(\varepsilon)$.

VI-A2 Fair Bin Assignment

Given that messages are assigned to the set of virtual workers using porc, our goal is to show that consistent grouping is able to perform a fair assignment of messages to the workers over time. We achieve our goal by showing that consistent grouping reduces the imbalance (if it exists) over time. To make the discussion more concrete, we define the notion of a busy worker using a threshold $\theta_b$: we say that a worker is busy if its normalized load exceeds $(1+\theta_b)$ times the average normalized load. Similarly, we define the notion of an idle worker using a threshold $\theta_i$: we say that a worker is idle if its normalized load is below $(1-\theta_i)$ times the average normalized load.

Assume that $v$ represents the average number of virtual workers per worker, i.e., the total number of virtual workers equals $vn$. Also, assume that $v_i^\star$ represents the optimal number of virtual workers for the $i$-th worker, namely, $v_i^\star = v\, c_i$. Clearly, $\sum_{i=1}^{n} v_i^\star = vn$.

Thanks to load balancing mechanisms such as pkg or potc, each virtual worker is guaranteed to have load at most $(1+\delta)\frac{L(t)}{vn}$ with high probability, where $\delta$ denotes the imbalance factor of the load balancing mechanism used and $L(t)$ is the total load. For pkg and potc, the value of $\delta$ is small, as implied by Lemmas VI.1 and VI.2 (notice that the denominator $vn$ is due to the normalization of the capacity in this paper). Therefore, the normalized load of a worker $w_i$ having $v_i$ virtual workers is bounded above, in expectation, by

$\rho_i(t) \le \frac{v_i (1+\delta)\, L(t)}{v n\, c_i}.$

Now, consider that the worker is overloaded, i.e., $\rho_i(t) \ge (1+\theta_b)\frac{L(t)}{n}$, where $\frac{L(t)}{n}$ is the average normalized load. This implies:

$(1+\theta_b)\frac{L(t)}{n} \le \frac{v_i (1+\delta)\, L(t)}{v n\, c_i}.$

We can rearrange the above inequality to obtain:

$\delta \ge \frac{(1+\theta_b)\, v\, c_i}{v_i} - 1,$

which implies that when the worker is overloaded, it must have an imbalance factor that is lower bounded by the above expression. However, such an imbalance is guaranteed to be small by the load balancing mechanism used, i.e., for potc and pkg when $p_1 \le 1/(5n)$.

Therefore, we know that for an overloaded worker, it must hold that:

$\frac{(1+\theta_b)\, v\, c_i}{v_i} \le 1 + \delta.$

Now, by solving for $v_i$, we get:

$v_i \ge \frac{1+\theta_b}{1+\delta}\, v\, c_i \ge (1+\theta_b)(1-\delta)\, v\, c_i,$

where we use Bernoulli's inequality to obtain the above second inequality.

Notice that the above inequality gives a lower bound on the number of virtual workers assigned to an overloaded worker. Since its optimal number of virtual workers is $v_i^\star = v\, c_i$, we can see that $v_i \ge (1-\delta)\, v_i^\star$, which is close to $v_i^\star$ since $\delta$ is small. This gives an interesting property: once we know a worker is overloaded, we can be sure that its number of virtual workers is close to the optimal allocation. Thus, the sources can probe the capacity of workers by assigning virtual workers (taken from overloaded workers) to workers that have not reported becoming overloaded, or if there is no such worker, to those that reported becoming overloaded least recently. Also notice that by letting $\theta_b \ge \delta/(1-\delta)$, we can guarantee that the overloaded workers have at least the optimal number of virtual workers they should have. However, when $\delta$ is large (due to bad load balancing mechanisms), or when $vn$ is large (due to having many small virtual workers), $\theta_b$ will become large. This will burden the overloaded workers, because they can only broadcast the overloaded cases when the threshold is surpassed. This illustrates the tradeoff between load balancing mechanisms with a small imbalance factor $\delta$ and the right number of virtual workers (too many is not good) in our consistent grouping strategy.

VI-B Memory with Consistent Grouping

kg generates the optimal memory footprint by forwarding each key to exactly one worker. Similarly, pkg produces nearly optimal memory overhead by allowing at most two workers per key. On the other end, potc and sg might assign each key to all the workers in the worst case, producing a memory footprint proportional to the number of workers. Assume that $X_r$ is a random variable representing the minimum number of bins required for balls with color $r$. Further, assume a random variable $X$ representing the sum of the number of bins required for all the balls, i.e., $X = \sum_{r=1}^{N} X_r$. A trivial upper bound for $X$ in the case of shuffle grouping is given by:

$X \le \sum_{r=1}^{N} \min(n, m\, p_r).$   (1)

porc allows a tradeoff between imbalance and memory using the parameter $\varepsilon$. To analyze the memory footprint of porc, we answer a very simple question: what is the probability that a key is replicated on all the workers? For this to happen, the load of the workers should exceed the average load by a factor of $(1+\varepsilon)$; only then is a key replicated on all the workers. However, for a sufficiently large value of $\varepsilon$, this cannot happen. A trivial lower bound on the number of bins required for balls with color $r$ is $\frac{p_r\, v n}{1+\varepsilon}$. Then,

$X \ge \sum_{r=1}^{N} \max\!\left(1, \frac{p_r\, v n}{1+\varepsilon}\right).$   (2)

This discussion provides the basic intuition for why the memory overhead of porc is lower than that of sg and potc. We leave a detailed analysis to future work.

VII Evaluation

We assess the performance of our proposal by using both simulations and a real deployment. In so doing, we answer the following questions:

  • What is a good set of values for the parameters of cg?

  • How does cg perform compared to other schemes?

  • How does cg adapt to changes in input stream and resources?

  • What is the overall effect of cg on applications deployed on a real dspe?

VII-A Experimental Setup

Stream Symbol Messages Keys (%)
Wikipedia WP M M
Twitter TW G M
Zipf ZF M k
TABLE I: Summary of the datasets used in the experiments. Note: the last column reports the percentage of messages having the most frequent key ($p_1$).
Fig. 5: Number of messages per hour for WP and TW datasets.
Symbol Algorithm
kg Key Grouping
pkg Partial Key Grouping
potc Power of Two Choices
porc Power of Random Choices
ch Consistent Hashing with Bounded Load
sg Shuffle Grouping
cg Consistent Grouping
TABLE II: Notation for the algorithms tested.

Datasets. Table I summarizes the datasets used. In particular, our goal is to be able to produce skewness in the input stream. We use two main real data streams, one from Wikipedia and one from Twitter. These datasets were chosen for their large size, different degrees of skewness, and different sets of applications in the Web and online social network domains. The Wikipedia dataset (WP) (http://www.wikibench.eu/?page_id=60) is a log of the pages visited during a day in January 2008. Each visit is a message and the page's URL represents its key. The Twitter dataset (TW) is a sample of tweets crawled during July 2012. We split each tweet into words, which are used as the keys for the messages. Figure 5 reports the ingestion rate of the streams in terms of the number of messages per hour. Lastly, we generate synthetic datasets with keys following Zipf distributions with varying exponent and 100 unique keys.

Simulation and Real Deployment. We process the datasets by simulating the dag presented in Figure 3. The stream is composed of timestamped keys that are read by multiple independent sources ($s$) via shuffle grouping, unless otherwise specified. The sources forward the received keys to the workers ($n$) downstream. In our simulations we assume that the sources perform data extraction and transformation, while the workers perform data aggregation, which is the most computationally expensive part of the dag. Thus, the workers are the bottleneck in the dag and the focus of the load balancing. Note that in the simulations we ignore network latency. The selected workloads represent a variety of streaming applications. In particular, any application that performs a reduce-by-key or group-by operation follows a similar pattern.

Algorithms. Table II defines the notation used for the different algorithms. We implement kg using a Murmur hash function to minimize the probability of collisions. Unlike the algorithms in Table II, other related load balancing algorithms [shah2003flux, cherniack2003scalable, xing2005dynamic, balkesen2013adaptive, castro2013integrating] require the dspe to support operator migration. Many popular dspes, such as Apache Storm, do not support migration. Thus, we omit these algorithms from the evaluation.

Metrics. Table III defines the metrics used for the evaluation of the performance of different algorithms.

Metric                 Description
Memory Cost            Replication cost of the keys
Queue Length           Number of messages in the queue
Resource Utilization   Ratio between the number of messages and the capacity of the worker
Imbalance              Difference between the maximum and the average resource utilization
Execution Latency      Difference between arrival and finish time
Throughput             Number of messages processed per second
TABLE III: Metrics used for the evaluation of the algorithms.

Monitoring Performance. For cg, each worker monitors its resource utilization, which enables the fair message assignment. In the simulations, we define the resource utilization as the ratio between the number of assigned messages and the capacity of a worker. We define the notion of idle and busy workers using the $\theta_i$ and $\theta_b$ thresholds, respectively. For the real-world experiments, we suggest using the queue length as a proxy for the resource utilization. In particular, the resource utilization is defined as the ratio between the queue length and the capacity of the worker, i.e., $u_i = q_i / c_i$.

The choice of queue length as the monitored parameter was motivated by its availability in the standard Apache Storm distribution (version 1.0.2).

VII-B Experimental Results

Q1: In the first experiment, we simulate the cg scheme by varying the value of $\varepsilon$, with fixed numbers of sources and workers. Each worker is homogeneous and is assigned a fixed number of virtual workers. We select the WP dataset and simulate cg for different values of $\varepsilon$. We leverage kg, sg, and porc for task assignment to the virtual workers. Figure 6 reports the imbalance and the memory overhead for the experiment. The results verify our claim that $\varepsilon$ provides a trade-off between imbalance and memory: cg generates low imbalance at lower values of $\varepsilon$ and produces a low memory footprint at higher values of $\varepsilon$. Also, the experiment shows that cg is able to interpolate well between the kg and sg schemes. Based on this experiment, we henceforth use an intermediate value of $\varepsilon$, as it provides a middle ground between memory and imbalance.

Fig. 6: Experiment reporting the imbalance and the memory overhead for different values of $\varepsilon$. The setup includes homogeneous workers, each with the same number of virtual workers mapped to it.
Fig. 7: Normalized imbalance and memory overhead comparing several assignment strategies along with consistent grouping on homogeneous clusters of increasing size using the WP dataset. Each worker spawns the same number of virtual workers, and the same $\varepsilon$ is used for ch and porc.

Next, we analyze the allocation strategies, i.e., kg, pkg, potc, porc, ch, and sg. We simulate an experiment on homogeneous clusters of increasing size using the WP dataset, with a fixed number of virtual workers per worker (equivalent to splitting the keys into a proportional number of bins). For ch and porc, we set the same value of $\varepsilon$. Figure 7 shows the imbalance after the assignment of the streams. Results show that kg and pkg generate high imbalance, whereas potc and sg generate nearly perfect load balance. Both ch and porc bound the imbalance close to a constant factor determined by the value of $\varepsilon$. The imbalance in the case of kg and pkg grows linearly with the increase in the number of workers. This behavior is due to the fact that both these schemes restrict a single key to a constant number of workers. ch and porc bound the imbalance up to a constant factor for each bin. potc and sg achieve near-perfect balance by exploiting all the possible workers. Interestingly, porc achieves bounded imbalance while keeping the memory footprint as low as pkg, as shown in Figure 7. In particular, porc generates a nearly optimal memory footprint and operates very close to kg. The gain in memory footprint depends on the distribution of the workload and the size of the deployment, and achieving gains of orders of magnitude is not always possible. Henceforth, we leverage porc for the next experiments and analyze consistent grouping.

Q2: To answer this question, we compare the imbalance and the memory overhead of cg with kg, pkg, potc, ch, and sg. We simulate the dag using the WP dataset and report the value of imbalance measured at the end of the simulation. The cluster consists of different numbers of workers, and each experiment considers a cluster of homogeneous machines. For cg and ch, we set the same value of $\varepsilon$. Figure 8 reports the imbalance and the memory overhead for the different schemes (note the log scale). Results show that kg performs the worst in terms of imbalance while generating the optimal memory footprint. pkg, on the other hand, provides nearly perfect imbalance and optimal memory footprint for smaller deployments; however, the imbalance grows as the number of workers increases. potc and sg provide very similar performance, i.e., nearly perfect imbalance and a higher memory footprint. ch provides bounded imbalance and reduces the memory footprint compared to potc and sg. cg provides bounded imbalance and improves the memory footprint compared to ch. This behavior is due to the fact that cg leverages randomness to redistribute the messages once the principal worker reaches its capacity, whereas ch always chooses the next worker in the ring.

Fig. 8: Normalized imbalance and memory overhead comparing different grouping strategies on homogeneous clusters of increasing size using the WP dataset. Each worker spawns the same number of virtual workers, and the same $\varepsilon$ is used for ch and porc.
Fig. 9: Effect on queue length, execution latency and resource utilization on a homogeneous cluster with 10 workers for kg and cg using the WP dataset. Each worker spawns the same number of virtual workers, with a fixed $\varepsilon$ for cg.

Additionally, we report the queue length, execution latency, and resource utilization among the workers, setting the capacity of the workers in such a way that each worker operates at a fixed fraction of its capacity under shuffle grouping. We report each metric as the difference between the maximum and minimum value. Note that the difference between the maximum and minimum resource utilization represents the imbalance. Due to space restrictions, we only report the results for 10 workers. For comparison, we also simulate and report kg and cg. Note that as pkg, potc and sg provide nearly perfect load balance, we do not report their results (the difference between maximum and average queue length, execution latency, and resource utilization equals 0). We simulate the WP dataset using the same $\varepsilon$ and number of virtual workers per worker for cg as before. Figure 9 shows the results of the experiment over time. Results show that the difference between the maximum and minimum queue length and execution latency increases over time using kg, whereas cg keeps both the queue length and the execution latency very low. Also, the imbalance is high for kg, whereas cg keeps the imbalance close to zero.

Next, we mimic heterogeneity in the cluster by assuming a cluster in which some machines are several times more powerful than the rest, varying both the number of powerful machines and their power factor. For instance, in one configuration, a machine in a 10-machine cluster has twice the capacity of the other nine machines. We simulate kg, sg, and cg for comparison, using the same value of $\varepsilon$ as before. In the case of cg, each worker is initialized with the same number of virtual workers. We observe similar behavior in all the configurations and report only a single one. Figure 10 reports the queue length, execution latency, and resource utilization for the three approaches. Results show that the queue length and the execution latency grow for kg and sg; similarly, the imbalance is quite high for these approaches. On the other hand, cg provides lower queue length and execution latency, and it keeps the imbalance close to zero. Note that there is a spike in queue length after a few hours, which is due to the fact that we leverage the resource utilization as the metric to segregate idle from busy workers.

Fig. 10: Effect on queue length, execution latency and resource utilization due to heterogeneity in the cluster for kg, cg and sg.
Fig. 11: Normalized imbalance and memory overhead on homogeneous clusters of increasing size with increasing numbers of sources using the WP dataset. Each worker spawns the same number of virtual workers, with a fixed $\varepsilon$ for cg.

Q3: Further, we evaluate the performance of cg when increasing the number of sources. In particular, we compare the performance of deployments with increasing numbers of sources. For the assignment of messages to sources, we use sg. Figure 11 reports the performance of cg in terms of imbalance and memory overhead. Results show that both imbalance and memory footprint remain almost the same on a log scale as both the number of workers and the number of sources increase. Therefore, we can conclude that cg provides similar performance even with higher numbers of sources and workers.

Further, we study the behavior of cg as a function of the number of virtual workers. We reuse the configuration of the experiment reported in Figure 10 and report the queue length, execution latency, and resource utilization for cg, performing the experiment with increasing numbers of virtual workers. Figure 12 shows that setting the number of virtual workers too low does not provide the desired results, as there are not enough virtual workers to move between the workers. Conversely, when the number of virtual workers is too high, the system takes a longer time to converge, which also impacts performance. Intermediate numbers of virtual workers provide similar performance, with one of the intermediate settings generating the best results.

Fig. 12: Queue length, execution latency and resource utilization for different number of virtual workers.

Next, we study the performance of cg when the resources change dynamically over time. To initialize the resources, we reuse the configuration from the previous experiment and change the capacity of the resources twice during the execution, after fixed numbers of processed messages, in such a way that the sum of the resources remains the same. Also, we report the results of kg and sg for comparison. Figure 13 reports the queue length, execution latency, and resource utilization of the experiment. Results show that cg adapts very efficiently to the change in resources.

Fig. 13: Queue length, execution latency and resource utilization when resources are changing over time. The resources change twice during the execution.

Q4: Lastly, we study the effect of cg on streaming applications deployed on an Apache Storm cluster running in a private cloud. The Storm cluster consists of medium-sized machines, each with a few virtual CPUs and a few GB of memory. Moreover, a partitioned Kafka cluster is used as the data source. We perform experiments to compare cg, pkg, kg, and sg on the TW dataset. The parameters are selected in such a way that the number of sources and workers matches the number of executors in the Storm cluster. We report the overall throughput, end-to-end latency, and memory footprint.

Fig. 14: Throughput and latency for the TW dataset on a homogeneous Storm cluster.
Fig. 15: Throughput and latency for the TW dataset on a heterogeneous Storm cluster.

In the first experiment, we evaluate the performance of the algorithms in a homogeneous cluster. We emulate different levels of CPU consumption per key by adding a fixed delay to the processing. We prefer this solution over implementing a specific application in order to better control the load on the workers. We choose a range that can bring our configuration to a saturation point, although the raw numbers would vary for different setups. Even though real deployments rarely operate at the saturation point, cg allows better resource utilization, therefore supporting the same workload on a smaller number of machines, each working at a higher overall load point. In this case, the minimum delay corresponds approximately to reading 400kB sequentially from memory, while the maximum delay corresponds to a fraction of a disk seek (http://brenocon.com/dean_perf.html). Nevertheless, even more expensive tasks exist: parsing a sentence with NLP tools can take milliseconds (http://nlp.stanford.edu/software/parser-faq.shtml#n).

Figure 14 reports the throughput and end-to-end latency for the TW dataset on the homogeneous cluster. During the experiment, kg was consuming the least memory in the cluster, compared to pkg, cg, and sg. Results show that kg provides low memory overhead, but coupled with low throughput and high execution latency. Alternatively, pkg, sg, and cg provide superior performance in terms of throughput, latency, and memory consumption.

Further, we evaluate the performance of cg in the presence of heterogeneity in the cluster. We use the cpulimit tool to change the resource capacity over time and monitor the behavior of the different approaches in terms of throughput and end-to-end latency. In particular, we limit the CPU resources of two of the executors to a fraction of the available CPU resources to mimic heterogeneity in the cluster. During the experiment, we give the system a grace period of a few minutes to reach a stable state before collecting the statistics. Figure 15 reports the throughput and the end-to-end latency of the experiment. Results show that cg outperforms the other approaches both in terms of throughput and end-to-end latency. In particular, compared to kg, it provides up to 3.44x better end-to-end latency as well as better throughput.

Overall, we observe that cg is a very competitive solution with respect to kg, pkg and sg, performing much better with respect to throughput and end-to-end latency and imposing a small memory footprint, while at the same time tackling the problem of heterogeneity of available resources at the workers in the cluster.

VIII Conclusion

We studied the load balancing problem for streaming engines running on heterogeneous clusters and processing varying workloads. We proposed a novel partitioning strategy called Consistent Grouping. cg leverages two very simple but extremely powerful ideas: power of random choices and fair virtual worker assignment. It efficiently achieves fair load balancing for streaming applications processing skewed workloads. We provided a theoretical analysis of the proposed algorithm and showed via extensive empirical evaluation that cg outperforms state-of-the-art approaches. In particular, cg achieves 3.44x better performance in terms of latency compared to key grouping.

References
