On Throughput-Delay Optimal Access to Storage Clouds via Load Adaptive Coding and Chunking
Recent literature, including our past work, provides analysis and solutions for using (i) erasure coding, (ii) parallelism, or (iii) variable slicing/chunking (i.e., dividing an object of a specific size into a variable number of smaller chunks) to speed up the I/O performance of storage clouds. However, a comprehensive approach that considers all three dimensions together to achieve the best throughput-delay trade-off curve has been lacking. This paper presents the first set of solutions that can pick the best combination of coding rate and object chunking/slicing options as the load dynamically changes. Our specific contributions are as follows: (1) We establish via measurements that combining variable coding rate and chunking is mostly feasible over a popular public cloud. (2) We relate the delay-optimal values for chunking level and code rate to the queue backlogs via an approximate queuing analysis. (3) Based on this analysis, we propose TOFEC, which adapts the chunking level and coding rate to the queue backlogs. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal throughput-delay trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC scales to support substantially more requests. (4) We propose a simpler greedy solution that performs on a par with TOFEC in average delay, but exhibits significantly larger performance variations.
Cloud storage has gained wide adoption as an economic, scalable, and reliable means of providing a data storage tier for applications and services. Typical cloud storage systems are implemented as key-value stores in which data objects are stored and retrieved via their unique keys. To provide a high degree of availability, scalability, and data durability, each object is replicated several times within the internal distributed file system and sometimes also further protected by erasure codes to use the storage capacity more efficiently while attaining very high durability guarantees.
Cloud storage providers usually implement a variety of optimization mechanisms internally, such as load balancing and caching/prefetching, to improve performance. Despite all such efforts, evaluations of large-scale systems still indicate a high degree of randomness in delay performance. Thus, services that require more robust and predictable Quality of Service (QoS) must deploy their own external solutions, such as sending multiple/redundant requests (in parallel or sequentially), chunking large objects into smaller ones and reading/writing each chunk through parallel connections, or replicating the same object under multiple distinct keys in a coded or uncoded fashion.
In this paper, we present black-box solutions (they use only the API provided by storage clouds and require no modification to, or knowledge of, the internal implementation of the storage cloud) that provide much better throughput-delay performance for reading and writing files on cloud storage by utilizing (i) parallelism, (ii) erasure coding, and (iii) chunking. To the best of our knowledge, our work is the first to adaptively pick the best erasure coding rate and chunk size to minimize the expected latency without sacrificing the supportable rate region (i.e., maximum requests per second) of the storage tier. The presented solutions can be deployed on a proxy tier external to the cloud storage tier, or can be utilized internally by the cloud provider to improve the performance of its storage services for all tenants or a subset of higher-priority tenants.
I-A State of the Art
Among the vast body of research on improving the delay performance of cloud storage systems that has emerged in the past few years, two groups of work in particular are closely related to this paper:
Erasure Coding with Redundant Requests: As proposed by the authors of [3, 4, 5], files (or objects) are divided into a pre-determined number k of chunks, each of which is 1/k the size of the original file, and encoded into n "coded chunks" using an (n, k) Maximum Distance Separable (MDS) code, or more generally a Forward Error Correction (FEC) code. Downloading/uploading of the original file is accomplished by downloading/uploading the n coded chunks using n parallel connections simultaneously, and the request is deemed served when the download/upload of any k coded chunks completes. Such mechanisms significantly improve the delay performance under light workload. However, as shown in our previous work and later reconfirmed by others, system capacity is reduced due to the overhead of using smaller chunks and redundant requests. This phenomenon is illustrated in Fig. 1, where we plot the throughput-delay trade-off for different MDS codes from our simulations using delay traces collected on Amazon S3. Codes with different k are grouped in different colors. A code with a high level of chunking and redundancy, although delivering a delay gain at light workload, reduces system capacity to a fraction of that of the original basic strategy without chunking or redundancy, i.e., the (1, 1) code!
This problem is partially addressed in our prior work, where we present strategies that adjust n according to the workload level so as to achieve a near-optimal throughput-delay trade-off for a predetermined k. For a given k, those strategies achieve the lower envelope of the curves of the corresponding color in Fig. 1 (e.g., the red curves). Yet this still suffers from an almost 60% loss in system capacity.
Dynamic Job Sizing: It has been observed in [2, 6] that in key-value storage systems such as Amazon S3 and Microsoft's Azure Storage, throughput is dramatically higher when the store receives a small number of access requests for large jobs (or objects) than when it receives a large number of requests for small jobs (or objects), because each storage request incurs overheads such as networking delay, protocol processing, lock acquisitions, transaction log commits, etc. Stout is a system in which requests are dynamically batched to improve the throughput-delay trade-off of key-value storage systems: based on the observed congestion, Stout increases or reduces the batching size. Thus, at high congestion a larger batch size is used to improve throughput, while at low congestion a smaller batch size is adopted to reduce delay.
I-B Main Contributions
Our work unifies the ideas of redundant requests with erasure coding and dynamic job sizing in one solution framework. Our major contributions are as follows.
Providing dynamic job sizing while maintaining parallelism and erasure coding gains is a non-trivial undertaking. Key-value stores map an object key to one or more physical storage nodes (if replication is used). Depending on the implementation, a request for a key might always go to the same physical node or be load-balanced across all replicas. As detailed in Section III, one has the option of using a unique key for each chunk of an object, or sharing the same key across chunks while assigning each a different byte range. The former wastes significant storage capacity, whereas the latter will likely exhibit higher correlation across parallel reads/writes of distinct chunks of the same object. Nonetheless, our measurements in different regions of a popular public cloud establish that sharing the same key in fact yields sufficiently weak correlation to enable parallelism and coding gains. However, our measurements also indicate that universally good performance is not guaranteed, as one region fails to deliver this weak correlation.
The primary novelty of TOFEC is its backlog-based adaptive algorithm for dynamically adjusting the chunk size as well as the number of redundant requests issued to fulfill storage access requests. This algorithm of variable chunk sizing can be viewed as a novel integration of prior observations from the two bodies of work discussed above. Based on the observed backlog level as an indicator of the workload, TOFEC increases or decreases the chunk size, as well as the number of redundant requests. In our trace-driven evaluations, we demonstrate that: (1) TOFEC successfully adapts to the full range of workloads, delivering lower average delay than the basic static strategy without chunking under light workloads, and higher throughput under heavy workloads than a static strategy with high chunking and redundancy levels optimized for service delay; and (2) TOFEC provides good QoS guarantees as it delivers low delay variations.
Although TOFEC does not need any explicit information about the internal operations of the storage cloud, it needs to log latency performance and model the cumulative distribution of the storage cloud's delays. We also propose a greedy heuristic that does not need to build such a model; via trace-driven simulations we show that its average latency is on a par with that of TOFEC, but it exhibits significantly higher performance variations.
II System Models
II-A Basic Architecture and Functionality
The basic system architecture captures how Internet services today utilize public or private storage clouds. The architecture consists of proxy servers in the front-end and a key-value store, referred to as storage cloud, in the back-end. Users interact with the proxy through a high-level API and/or user interfaces. The proxy translates every high-level user request (to read or write a file) into a set of tasks. Each task is essentially a basic storage access operation such as put, get, delete, etc. that will be accomplished using low-level APIs provided by the storage cloud. The proxy maintains a certain number of parallel connections to the storage cloud and each task is executed over one of these connections. After a certain number of tasks are completed successfully, the user request is considered accomplished and the proxy responds to the user with an acknowledgment. The solutions we present are deployed on the proxy server side transparent to the storage cloud.
For a read request, we assume that the file is pre-coded into n coded chunks with an (n, k) MDS code and stored on the cloud. Completion of downloading any k coded chunks provides sufficient data to reconstruct the requested file. Thus, the proxy decodes the requested file from the k downloaded chunks and replies to the client. The unfinished and/or not-yet-started tasks are then canceled and removed from the system.
For a write request, the file to be uploaded is divided and encoded into n coded chunks using an (n, k) MDS code, and hence completion of uploading any k coded chunks means sufficient data has been stored onto the cloud. Thus, upon receiving k successful responses from the storage cloud, the proxy sends a speculative success response to the client without waiting for the remaining tasks to finish. Such speculative execution is a commonly practiced optimization technique for reducing client-perceived delay in many computer systems, such as databases and replicated state machines. Depending on the subsequent read profile of the same file, the proxy can (1) continue serving the remaining tasks until all n tasks finish, (2) demote them to low-priority jobs that are served only when system utilization is low, (3) cancel them preemptively, or even (4) run a daemon program in the background that regenerates all coded chunks from the already uploaded chunks when the system is not busy.
Accordingly, we model the proxy by the queueing system shown in Fig. 2. There are two FIFO (first-in-first-out) queues: (i) the request queue that buffers all incoming user requests, and (ii) the task queue, a multi-server queue that holds all tasks waiting to be executed. L threads (we avoid the term "server", which is commonly used in the queueing theory literature, to prevent confusion), representing the set of parallel connections to the storage cloud, are attached to the task queue. The adaptation module of TOFEC monitors the state of the queues and the threads, and decides what coding parameters to use for each request. Without loss of generality, we assume that the head-of-line (HoL) request leaves the request queue only when there is at least one idle thread and the task queue is empty. A batch of n tasks is then created for that request and injected into the task queue. As soon as any k tasks complete successfully, the request is considered completed. Such a queueing system is work conserving since no thread is left idle as long as any request or task is pending.
II-B Basics of Erasure Codes
An (n, k) MDS code (e.g., a Reed-Solomon code) encodes k data chunks, each of B bits, into a codeword consisting of n B-bit coded chunks. The n coded chunks can sustain up to (n − k) erasures, such that the original k data chunks can be efficiently reconstructed from any subset of k coded chunks. n and k are called the length and dimension of the MDS code. We also define r = n/k as the redundancy ratio of an (n, k) MDS code. The erasure-resistant property of MDS codes has been utilized in prior works [3, 4, 5], as well as in this paper, to improve the delay of cloud storage systems. Essentially, a coded chunk experiencing long delay is treated as an erasure.
In this paper, we make use of another interesting property of MDS codes to implement the variable chunk sizing of TOFEC in a storage-efficient manner: an MDS code of high length and dimension for a small chunk size can be reused as an MDS code of smaller length and dimension for a larger chunk size. To be more specific, consider any (N, K) MDS code for chunks of B bits. To avoid confusion, we will refer to these B-bit chunks as strips. A different MDS code of length n = N/m, dimension k = K/m, and chunk size mB, for any m that divides both N and K, can be constructed by simply batching every m data/coded strips into one data/coded chunk. The resulting code is an (n, k) MDS code for mB-bit chunks, because any k coded chunks cover km = K coded strips, which is sufficient to reconstruct the original file of KB bits. This property is illustrated with an example in Fig. 3. In this example, a 3MB file is divided into 6 strips of 0.5MB and encoded into 12 coded strips of total size 6MB, using a (12, 6) MDS code. This code can then be used as a (2, 1) code for 3MB chunks, a (4, 2) code for 1.5MB chunks, and a (6, 3) code for 1MB chunks simultaneously, by batching 6, 3, and 2 strips into a chunk respectively.
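As a minimal sketch of the batching arithmetic (the helper name is ours; the sizes follow the Fig. 3 example), one can check that each batching level m yields a valid smaller MDS code:

```python
# Sketch: reusing one (N, K) = (12, 6) MDS code over 0.5MB strips as
# several codes over larger chunks, by batching m strips per chunk.
# No actual encoding is performed; only the parameter arithmetic is shown.

N, K = 12, 6               # code length and dimension over strips
STRIP_MB = 0.5             # strip size
FILE_MB = K * STRIP_MB     # the 3MB original file

def batched_code(m):
    """Batch every m strips into one chunk -> an (N/m, K/m) MDS code."""
    assert N % m == 0 and K % m == 0
    n, k = N // m, K // m
    chunk_mb = m * STRIP_MB
    # any k chunks cover k*m = K strips, enough to rebuild the file
    assert k * m == K
    return n, k, chunk_mb

for m in (6, 3, 2):
    n, k, chunk_mb = batched_code(m)
    print(f"m={m}: ({n},{k}) code with {chunk_mb}MB chunks")
```

Running this reproduces the three codes named in the text: (2, 1) with 3MB chunks, (4, 2) with 1.5MB chunks, and (6, 3) with 1MB chunks.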
II-C Definitions of Different Delays
The delay experienced by a user request consists of two components: queueing delay (Dq) and service delay (Ds). Both are defined with respect to the request queue: (i) the queueing delay is the amount of time a request spends waiting in the request queue, and (ii) the service delay is the period between when the request leaves the request queue (i.e., is admitted into the task queue and starts being served by at least one thread) and when it finally leaves the system (i.e., the first time any k of the corresponding tasks complete). In addition, we also consider the task delays (Xi), where Xi is the time it takes for a thread to serve the i-th task assuming it is not terminated or canceled preemptively. To clarify these definitions, consider a request served with an (n, k) MDS code, with t0 being its arrival time and t1, …, tn the starting times of the corresponding n tasks (we assume ti = +∞ if the i-th task is never started). Then the queueing delay is Dq = mini ti − t0. Suppose X1, …, Xn are the corresponding task delays; then the completion times of these tasks will be ti + Xi if none is canceled. So the request leaves the system at time t(k), which denotes the k-th smallest value in {t1 + X1, …, tn + Xn}, i.e., the time when k tasks complete. The service delay of this request is then Ds = t(k) − mini ti.
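These definitions can be sketched directly in code (the helper and the numeric values are illustrative, not measurements):

```python
# Sketch of the delay definitions for one request served with an (n, k)
# MDS code: queueing delay D_q and service delay D_s from task start
# times and task delays X_i.

def request_delays(t0, starts, task_delays, k):
    """t0: arrival time; starts[i]: start time of task i (float('inf')
    if never started); task_delays[i]: X_i. Returns (D_q, D_s)."""
    first_start = min(starts)
    d_q = first_start - t0                    # waiting in the request queue
    finishes = sorted(s + x for s, x in zip(starts, task_delays))
    kth_completion = finishes[k - 1]          # k tasks done -> request done
    d_s = kth_completion - first_start
    return d_q, d_s

# a (4, 2) request arriving at t = 0; its four tasks start at 1.0..1.3
d_q, d_s = request_delays(0.0, [1.0, 1.0, 1.2, 1.3], [0.9, 2.5, 0.5, 3.0], k=2)
print(d_q, d_s)
```

Here the request waits 1.0 time units in the request queue, and its service delay is the gap between the first task start and the second-earliest task completion.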
III Variable Chunk Sizing
In this section, we discuss implementation issues as well as the pros and cons of two potential approaches, namely Unique Key and Shared Key, for supporting erasure-code-based access to files on the storage cloud with a variety of chunk sizes. Suppose the maximum desired redundancy ratio is rmax; the two approaches then implement variable chunk sizing as follows:
Unique Key: For every choice of chunk size (or equivalently k), a separate batch of rmax·k coded chunks is created, and each coded chunk is stored as an individual object with its own unique key on the storage cloud. Access to different chunks is implemented through the basic get and put storage cloud APIs.
Shared Key: A coded file is first obtained by stacking together the coded strips obtained by applying a high-dimension MDS code to the original file, as described in Section II-B and illustrated in Fig. 3. For reads, the coded file is stored on the cloud as one object. Access to chunks of variable size is realized by downloading the segments of the coded file corresponding to batches of the appropriate number of strips, using the same key with more advanced "partial read" storage cloud APIs. Similarly, for writes, the file is uploaded in parts using "partial write" APIs and later merged into one object in the cloud.
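Because the coded file is a simple concatenation of strips, the i-th coded chunk at batching level m maps to a contiguous byte range of the single stored object. A sketch of this mapping (the helper name is ours):

```python
# Sketch: with Shared Key, coded chunk i at batching level m corresponds
# to m consecutive strips, i.e., one contiguous byte range of the object.

def chunk_byte_range(i, m, strip_bytes):
    """Inclusive (start, end) byte range of the i-th coded chunk when
    every m consecutive strips form one chunk."""
    start = i * m * strip_bytes
    end = (i + 1) * m * strip_bytes - 1
    return start, end

# 0.5MB strips, m = 2 (1MB chunks): chunk 1 covers strips 2 and 3
print(chunk_byte_range(1, 2, 512 * 1024))
```

Each chunk download is then a single "partial read" of this range against the one shared key, rather than a get of a separate object.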
III-A Implementation and Comparison of the Two Approaches
III-A1 Storage cost
When the user request is to write a file, the storage costs of Unique Key and Shared Key are not very different. However, to support variable chunk sizing for read requests, Shared Key is significantly more cost-efficient than Unique Key. With Shared Key, a single coded file stored on the cloud can be reused to support essentially an arbitrary number of different chunk sizes, as long as the strip size is small enough. On the other hand, it seems impossible to achieve similar reuse with the Unique Key approach, where different chunks of the same file are treated as individual objects. So with Unique Key, every additional chunk size to be supported requires an extra storage cost of rmax times the file size. Such linear growth of storage cost easily makes it prohibitively expensive to support even a small number of chunk sizes.
III-A2 Diversity in delays
The success of TOFEC and other proposals to use redundant requests (either with erasure coding or replication) for delay improvement relies on diversity in cloud storage access delays. In particular, TOFEC, as well as [3, 4, 5], requires access delays for different chunks of the same file to be weakly correlated.
With Unique Key, since different chunks are treated as individual objects, there is no inherent connection among them from the storage cloud system's perspective. So depending on the internal implementation of the storage cloud's object placement policy, chunks of a file can be stored on different storage units (disks or servers) on the same rack, in different racks in the same data center, or even in different data centers at distant geographical locations. Hence it is quite likely that delays for accessing different chunks of the same file show very weak correlation.
On the other hand, with Shared Key, since coded chunks are combined into one coded file and stored as one object in the cloud, it is very likely that the whole coded file, and hence all coded chunks/strips, is stored in the same storage unit, unless the storage cloud system internally divides the coded file into pieces and distributes them to different units. Although many distributed storage systems do divide files into parts and store them separately, this is normally only done for larger files. For example, the popular Hadoop distributed file system by default does not divide files smaller than 64MB. When different chunks are stored on the same storage unit, we can expect higher correlation in their access delays. It then remains to be verified that the correlation between different chunks with the Shared Key approach is still weak enough for our coding solution to be beneficial.
III-A3 Universal support
Unique Key is the approach adopted in our previous work to support erasure-code-based file access with one predetermined chunk size. A benefit of Unique Key is that it only requires the basic get and put APIs that all storage cloud systems must provide. So it is readily supported by all storage cloud systems and can be implemented on top of any of them.
On the other hand, Shared Key requires more advanced APIs that allow the proxy to download or upload only a targeted segment of an object. Such advanced APIs are not currently supported by all storage cloud systems. For example, to the best of our knowledge, Microsoft's Azure Storage currently provides only methods for "partial read" (e.g., DownloadRangeToStream(target, offset, length) downloads a segment of length bytes starting from the offset-th byte of the target object, or "blob" in Azure's jargon) but none for "partial write". On the contrary, Amazon S3 provides partial access for both read and write: the proxy can download a specific inclusive byte range within an object stored on S3 by calling getObject(request, destination), with the byte range set by calling request.setRange(start, end); for uploading, an uploadPart method to upload segments of an object and a completeMultipartUpload method to merge the uploaded segments are provided. We expect more service providers to introduce both partial read and write APIs in the near future.
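As a sketch of how a "partial read" is expressed on the wire, S3 ranged gets take an inclusive HTTP Range header. Only the header construction runs below; the boto3 call shown in the comment is our assumption of a typical SDK surface and is not executed:

```python
# Sketch: building the inclusive byte-range header used by S3 partial
# reads. S3 ranges are inclusive on both ends: "bytes=start-end".

def range_header(start, end):
    return f"bytes={start}-{end}"

hdr = range_header(1048576, 2097151)   # the second 1MB chunk
print(hdr)

# With boto3 (assumed SDK usage, not executed here):
#   s3.get_object(Bucket="my-bucket", Key="coded-file", Range=hdr)
# For partial write, S3 instead offers create_multipart_upload /
# upload_part / complete_multipart_upload, since objects are not
# byte-addressable for writes.
```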
III-B Measurements on Amazon S3
To understand the trade-offs between Unique Key and Shared Key, we ran measurements over Amazon EC2 and S3. An EC2 instance served as the proxy in our system model. We instantiated an extra-large EC2 instance with high I/O capability in the same availability region as the S3 bucket that stores our objects. We conducted experiments on different weekdays from May to July 2013, with chunk sizes varying between 0.5MB and 3MB and varying numbers of coded chunks per file. For each value of n, we allowed n simultaneously active threads, with the i-th thread responsible for downloading the i-th coded chunk of each file. Each experiment lasted longer than 24 hours. We alternated between different settings to capture similar time-of-day characteristics across all settings.
The experiments were conducted within all 8 availability regions of Amazon S3. Except for the "US Standard" availability region, all other 7 regions demonstrate similar performance statistics that are consistent over different times and days. On the other hand, the performance of "US Standard" demonstrated significant variation even at different times of the same day, as illustrated in Fig. 4(a) and Fig. 4(b). We conjecture that the different and inconsistent behavior of "US Standard" might be due to the fact that it targets a slightly different usage pattern and may employ a different implementation for that reason (see http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region). We exclude "US Standard" from subsequent discussions. For conciseness, we only show a limited subset of findings for the availability region "North California" that are representative of the regions other than "US Standard":
(1) In both Unique Key and Shared Key, the task delay distributions observed by different threads are almost identical. The two approaches are indistinguishable even beyond the 99.9th percentile. Fig. 4(c) shows the complementary cumulative distribution function (CCDF) of task delays observed by individual threads for 1MB chunks. Both approaches demonstrate a large delay spread in all regions.
(2) Task delays for different threads in Unique Key show close to zero correlation, while they demonstrate slightly higher correlation in Shared Key, as expected. Across all settings, the cross-correlation coefficient between different threads stays below 0.05 in Unique Key and ranges from 0.11 to 0.17 in Shared Key. Both approaches achieve significant service delay improvements. Fig. 5 plots the CCDF of service delays for downloading 3MB files with 1MB chunks (k = 3) and n = 4, 5, 6, assuming all tasks in a batch start at the same time. In this setting, both approaches reduce the 99th percentile delay by roughly 50%, 65%, and 80% by downloading 1, 2, and 3 extra coded chunks, respectively. Although Shared Key demonstrates up to 3 times higher cross-correlation coefficients, there is no meaningful statistical distinction in service delay between the two approaches until beyond the 99th percentile. All availability regions experience different degrees of degradation at high percentiles with Shared Key due to the higher correlation. Significant degradation emerges around the 99.9th percentile and beyond in all regions except "Sao Paulo", in which degradation appears around the 99th percentile.
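The qualitative effect in observation (2) can be reproduced with a small Monte Carlo sketch under the (constant + exponential) task delay model introduced later; the parameter values here are illustrative, not fitted to S3:

```python
# Simulation sketch: tail service delay of a k-of-n download when task
# delays are (constant floor + exponential), all tasks starting together.
import random

random.seed(1)
DELTA, GAMMA = 0.1, 0.2      # constant floor and exponential mean (seconds)
k, trials = 3, 20000

def service_delay(n):
    xs = sorted(DELTA + random.expovariate(1 / GAMMA) for _ in range(n))
    return xs[k - 1]          # request done when k tasks complete

p99 = {}
for n in (3, 4, 5, 6):
    samples = sorted(service_delay(n) for _ in range(trials))
    p99[n] = samples[int(0.99 * trials)]
    print(f"n={n}: 99th percentile service delay {p99[n]:.3f}s")
```

Each extra coded chunk (n = 4, 5, 6 vs. the baseline n = k = 3) sharply cuts the 99th percentile, mirroring the measured 50%/65%/80% reductions in shape if not in exact magnitude.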
(3) Task delays are always lower bounded by some constant that grows roughly linearly as the chunk size increases. This constant part of the delay cannot be reduced by using more threads: see the flat segment at the beginning of the CCDF curves in Fig. 4 and Fig. 5. Since this constant portion of task delays is unavoidable, it leads to a negative effect of using larger n: there is a minimum cost of system resources, measured in (time × thread), that grows linearly in n. This cost leads to a reduced capacity region when using more redundant tasks, as illustrated in the example of Fig. 1. We observe that the two approaches deliver almost identical total delays (queueing + service) for all arrival rates, in spite of the degraded service delay of Shared Key at very high percentiles. So we only plot the results for Shared Key in Fig. 1.
(4) Both the mean and the standard deviation of task delays grow roughly linearly as the chunk size increases. Fig. 6 plots the measured mean and standard deviation of task delays for both approaches at different chunk sizes, together with least-squares fitted lines. As the figures show, the performance of Unique Key and Shared Key is also comparable in terms of how the delay statistics scale as functions of the chunk size. Notice that the extrapolations at chunk size = 0 are all greater than zero. We believe this observation reflects the costs of non-I/O-related operations in the storage cloud that do not scale proportionally to object size, for example, the cost of locating the requested object. We also believe such costs contribute partially to the minimum task delay constant.
SUMMARY: Our measurement study shows that dynamic chunking with weak correlation across different chunks is realizable through both Unique Key and Shared Key. We believe Shared Key is a reasonable choice for implementing dynamic chunking, given that it delivers delay performance comparable to Unique Key at a much lower storage cost. We turn our attention to how to pick the best choices of chunking level and FEC rate in the remaining parts of the paper.
III-C Model of Task Delays
For the analysis presented in the next section, we model the task delays as independently distributed random variables whose mean and standard deviation grow linearly as the chunk size increases. More specifically, we assume the task delay for chunk size l follows a distribution of the form
D(l) = Δ(l) + E(l),
where Δ(l) = Δ0 + Δ1·l captures the lower bound of task delays as in observation (3), and E(l) is an exponential random variable that models the tail of the CCDF, whose mean and standard deviation both equal Γ(l) = Γ0 + Γ1·l. With this model, the constants Δ0 and Γ0 together capture the non-zero extrapolations of the mean and standard deviation of task delays at chunk size 0, and similarly, the constants Δ1 and Γ1 together capture the rates at which the mean and standard deviation grow as the chunk size increases, as in observation (4).
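A small sampler for this model (parameter values are made up for illustration) confirms the affine scaling of the mean and the non-zero extrapolation at chunk size 0:

```python
# Sketch of the task delay model D(l) = Delta(l) + Exp with mean
# Gamma(l), where both Delta and Gamma are affine in chunk size l.
import random

D0, D1 = 0.05, 0.02     # Delta(l) = D0 + D1*l   (seconds, l in MB)
G0, G1 = 0.06, 0.04     # Gamma(l) = G0 + G1*l

def sample_task_delay(l, rng):
    delta = D0 + D1 * l
    gamma = G0 + G1 * l
    return delta + rng.expovariate(1 / gamma)   # expovariate takes a rate

rng = random.Random(7)
l = 1.5
samples = [sample_task_delay(l, rng) for _ in range(50000)]
mean = sum(samples) / len(samples)
# model prediction: E[D(l)] = Delta(l) + Gamma(l); at l = 0 it is D0 + G0 > 0
print(mean, (D0 + D1 * l) + (G0 + G1 * l))
```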
IV Design of TOFEC
For the analysis in this section, we group requests into classes according to the tuple (type, size). Here type can be read or write, and could potentially be another type of operation supported by the cloud storage. Each type of operation has its own set of delay parameters (Δ0, Δ1, Γ0, Γ1). Subscript i will be used to indicate variables associated with class i. We use ni, ki, and ri to denote the code length, dimension, and redundancy ratio of the code used to serve class i requests. Also let αi denote the fraction of total arrivals contributed by class i. We use boldface vectors n, k, r, and α to denote the collections of the corresponding variables over all classes.
The system throughput is defined as the average number of successfully served requests per unit time. The static code capacity is defined as the maximum deliverable throughput when α and the codes (n, k) are fixed. The full capacity is then defined as the maximum static code capacity over all possible choices of (n, k) with α fixed. For a given request arrival rate λ, the system throughput equals the smaller of λ and the (static or full) capacity.
IV-A Problem Formulation and Main Result for Static Strategy
Given the total arrival rate λ and the composition of requests α, we want to find the best choice of FEC code (ni, ki) for each class i such that the average expected total delay is minimized. Relaxing the requirement that ni and ki be integers, this is formulated as the following minimization problem (notice that all classes share the same queueing delay; we also impose strict inequality on the redundancy ratio as a technicality to simplify the proof of the uniqueness of the optimal solution, and require the arrival rate to be below capacity for queue stability):
In the above formulation, we use ki and ri as the optimizing variables, instead of the more intuitive choice of ki and ni. This choice helps simplify the analysis because ki and ri can be treated as independent variables, while ki and ni are coupled by the constraint ni ≥ ki. In subsequent sections, we first introduce approximations for the expected queueing and service delays assuming that the FEC code used to serve requests of each class is predetermined and fixed (Section IV-B). Then we show that optimal solutions to the above non-convex optimization problem exhibit the following property (Section IV-C):
The optimal values of ki, ri, and ni can all be expressed as functions solely determined by the expected length of the request queue: ki, ri, and ni are all strictly decreasing functions of the expected request queue length.
This finding is then used as the guideline in the design of our backlog-driven adaptive strategy TOFEC (Section IV-D).
IV-B Approximated Analysis of Static Strategies
Denote by Fi the file size of class i. Consider a request of class i served with an (ni, ki) MDS code, so each of its tasks handles a chunk of size Fi/ki. First suppose all ni tasks start at the same time. In this case, given our model for task delays, it is straightforward to show that the expected service delay equals
E[Ds,i] = Δ(Fi/ki) + Γ(Fi/ki)·(Hni − Hni−ki),
where Hj = Σ_{m=1}^{j} 1/m denotes the j-th harmonic number.
For the analysis, we approximate the summation Hni − Hni−ki with its integral upper bound ln(ni/(ni − ki)). The gap of this approximation is always upper bounded by the Euler–Mascheroni constant (≈ 0.5772) for any integers ni > ki ≥ 1, and it quickly diminishes to 0 as ni − ki grows, as illustrated in Fig. 7. Although the gap diverges as ni − ki → 0 under the integer relaxation, this does not really matter for the purpose of this paper, since an optimal solution with ri close to 1 only means we should set ni = ki.
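A quick numeric check of this approximation (all values computed, none taken from Fig. 7):

```python
# Numeric check: H_n - H_{n-k} vs. its integral upper bound ln(n/(n-k)).
# The gap is non-negative and stays below the Euler-Mascheroni constant.
import math

def harmonic(m):
    return sum(1.0 / j for j in range(1, m + 1))

for n, k in [(4, 3), (6, 3), (12, 6), (100, 50)]:
    exact = harmonic(n) - harmonic(n - k)
    approx = math.log(n / (n - k))
    gap = approx - exact
    print(f"n={n}, k={k}: exact {exact:.4f}, approx {approx:.4f}, gap {gap:.4f}")
    assert 0 <= gap < 0.5772  # Euler-Mascheroni constant ~ 0.57722
```

The gap shrinks rapidly as n − k grows (e.g., it is already below 0.01 for n − k = 50), which is what makes the log approximation usable in the optimization.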
Also define the system usage (or simply "cost") of a request as the sum of the amounts of time each of its tasks is served by a thread (the time a task is served is its task delay if it completes successfully, the elapsed time before termination if it starts but is terminated preemptively, and 0 if it is canceled while waiting in the task queue). When all tasks start at the same time, the expected system usage of a class i request is (see Section IV of our prior work for a detailed derivation)
E[Si] = ni·Δ(Fi/ki) + ki·Γ(Fi/ki).
Given that class i contributes a fraction αi of the total arrivals, the average cost per request is E[S] = Σi αi E[Si]. With L simultaneously active threads, the departure rate of the system, and hence of the request queue, is L/E[S] (requests/unit time). In light of this observation, we approximate the request queue with an M/M/1 queue with service rate L/E[S]. (This approximation is a special case of the approximation used in prior work; our findings in this paper readily generalize to accommodate that approximation.) In other words, the static code capacity for a given α and a fixed code choice is approximated by
λ̄(n, k) = L / Σi αi E[Si].
Let ρ = λ·Σi αi E[Si] represent the arrival rate of system usage imposed by the request arrivals. Then the last inequality constraint of the optimization problem ( ‣ IV-A) becomes ρ < L.
With this M/M/1 approximation, the queueing delay in the original system at total arrival rate λ is approximated by the M/M/1 waiting time
E[Dq] = λ / (λ̄(λ̄ − λ)),
where λ̄ is the approximated static code capacity defined above.
Noticing that, for a given α, the (approximated) static code capacity is maximized when ki = 1 and ni = 1 for every class i, we approximate the full capacity by λ* = λ̄(1, 1), where 1 denotes the all-one vector. We acknowledge that the above approximations are quite coarse, especially because tasks of the same batch do not in general start at the same time. However, recall that the main objective of this paper is to develop a practical solution that achieves the optimal throughput-delay trade-off. According to our simulation results, these approximations are sufficiently good for this purpose.
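The pieces of this approximate analysis can be tied together in a short calculator for a single class. The formulas follow the service delay, cost, capacity, and M/M/1 waiting-time expressions above; all parameter values are illustrative assumptions:

```python
# Sketch: for a single request class, compute the approximate expected
# service delay, per-request cost, capacity, and M/M/1 queueing delay.
import math

D0, D1 = 0.05, 0.02      # Delta(l) = D0 + D1*l
G0, G1 = 0.06, 0.04      # Gamma(l) = G0 + G1*l
F = 3.0                  # file size (MB)
THREADS = 16             # parallel connections L at the proxy

def stats(n, k, lam):
    l = F / k                                  # chunk size
    delta, gamma = D0 + D1 * l, G0 + G1 * l
    if n > k:                                  # log approximation of H_n - H_{n-k}
        ds = delta + gamma * math.log(n / (n - k))
    else:                                      # n = k: exact harmonic sum H_n
        ds = delta + gamma * sum(1 / j for j in range(1, n + 1))
    cost = n * delta + k * gamma               # expected thread-time per request
    mu = THREADS / cost                        # approximate capacity (req/s)
    dq = lam / (mu * (mu - lam)) if lam < mu else float("inf")
    return ds, mu, dq

for n, k in [(1, 1), (4, 3), (8, 6)]:
    ds, mu, dq = stats(n, k, lam=10.0)
    print(f"({n},{k}): E[Ds]={ds:.3f}s  capacity={mu:.1f}req/s  E[Dq]={dq:.4f}s")
```

Even with made-up parameters, the trade-off the paper exploits is visible: more chunking and redundancy lowers service delay but shrinks capacity and inflates queueing delay at the same arrival rate.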
IV-C Optimal Static Strategy
Even with the above approximations, the minimization problem (IV-A) is not a convex optimization problem: the feasible region is not a convex set. Non-convex optimization problems are in general difficult to solve. Fortunately, we are able to prove the following theorem, according to which this non-convex problem can be solved numerically with great efficiency.
The optimal solutions to (IV-A) must satisfy the following equations, regardless of the total arrival rate and the popularity distribution.
Moreover, when the total arrival rate and the popularity distribution are given, the optimal solution is the unique solution to the above equations together with the one below:
The importance of Theorem 1 is two-fold:
With multiple classes of requests, the seemingly high-dimensional optimization problem is in fact one-dimensional: according to Eq.8, the optimal code length is fully determined by the optimal code dimension (and vice versa). Moreover, according to Eq.9, the optimal choice for one class further fully determines the optimal code lengths and dimensions for all other classes. In other words, knowledge of the optimal choice for any single class is sufficient to derive the complete optimal solution.
The optimal solution is fully determined by the per-class arrival rates, and hence is virtually independent of the particular total arrival rate and popularity distribution: these two quantities appear in the above equations only through their product in Eq.10. So any two workloads with the same per-class arrival rates share the same optimal choice of codes! An implication is that the multi-class optimization problem can be solved as a set of independent single-class subproblems: the subproblem for each class solves for its optimal code under arrivals of that class alone at the corresponding per-class rate, which is equivalent to the multi-class problem restricted to that class.
The second observation above is of particular interest for the purpose of this paper. It suggests that adaptation for different classes can be done separately, as if the only arrivals were those of the class under consideration. This significantly simplifies the design of our adaptive strategy TOFEC, resulting in great computational efficiency and flexibility.
IV-D Adaptive Strategy TOFEC
Despite being the mathematical foundation for the design of TOFEC, Theorem 1 in its current formulation is not very useful in practice: code adaptation based on it requires knowledge of the total workload and the popularity distribution of the different classes. In practice, both quantities usually exhibit a high degree of volatility, making accurate on-the-fly estimation difficult and unreliable. To achieve effective code adaptation, a more robust system metric that is easy to measure with high accuracy is desirable.
Observe that the expected length of the request queue is
which can be rewritten as
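The two displays above did not survive formatting; under the single-server approximation of the previous subsection, they are presumably the standard mean queue length and its inversion. A hedged reconstruction, writing $\lambda$ for the total arrival rate and $\mu$ for the approximated capacity of the chosen codes:

```latex
\bar{q} \;=\; \frac{\lambda}{\mu - \lambda},
\qquad\text{equivalently}\qquad
\lambda \;=\; \mu\,\frac{\bar{q}}{1+\bar{q}} .
```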
It is intuitive that the expected queue length is a strictly increasing function of the arrival rate, and vice versa. On the other hand, it is not hard to verify from Theorem 1 that the optimal code parameters are all strictly decreasing functions of the arrival rate. Combining these observations with Eq.12, we can conclude the following corollary:
The optimal code parameters can all be expressed as strictly decreasing functions of the expected queue length:
The findings of Corollary 1 conform to the following intuition:
At light workload (small arrival rate), there should be little backlog in the request queue (short queue) and the service delay dominates the total delay. In this case, the system is not operating in the capacity-limited regime, so it is beneficial to increase the levels of chunking and redundancy to reduce delay.
At heavy workload (large arrival rate), there will be a large backlog in the request queue (long queue) and the queueing delay dominates the total delay. In this case, the system operates in the capacity-limited regime, so it is better to reduce the levels of chunking and redundancy to support higher throughput.
More importantly, it suggests that it suffices to choose the FEC code solely based on the length of the request queue, a very robust and easily obtained system metric, instead of less reliable estimates of the arrival rate and popularity distribution. As will be discussed later, queue length has other advantages over arrival rate in a dynamic setting.
The basic idea of TOFEC is to choose the code length and dimension for each request as functions of the queue length upon its arrival. When this is done for all request arrivals, the average code lengths (dimensions) and the expected queue length can be expected to satisfy Eq.13, and hence the optimal delay is achieved. In TOFEC, this is implemented with a threshold-based algorithm that can be executed very efficiently. For each class, we first compute the expected queue length at which a given code length is optimal by
Here the upper limit is the maximum number of tasks allowed for a request of the class. Since the mapping is strictly decreasing, its inverse is a well-defined strictly decreasing function. Our goal is to use a given code length when the queue length is around the corresponding expected value, so we want a set of thresholds such that
and will use the code length whose threshold interval contains the current queue length. In our current implementation of TOFEC, each threshold is placed between the expected queue lengths of two consecutive code lengths. A set of thresholds for adapting the code dimension is found in a similar fashion.
The adaptation mechanism of TOFEC is summarized in pseudo-code as Algorithm 1. Note that in Step 1 we reduce the code length if the redundancy ratio of the code chosen in the previous steps exceeds the maximum allowed redundancy ratio for the class. Also, instead of comparing the queue length directly with the thresholds, we compare its exponential moving average, computed with a configurable memory factor, against the thresholds to determine the code length and dimension. The moving average mitigates transient variation in queue length so that the code choice does not change too frequently; setting the memory factor to its extreme value makes the moving average equal the instantaneous queue length.
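A compact sketch of this mechanism (the threshold values and memory factor below are hypothetical; real thresholds come from inverting the expected-queue-length function as described above):

```python
class TofecAdapter:
    """Pick a code dimension by comparing a smoothed queue length
    against precomputed thresholds (sketch of Algorithm 1)."""

    def __init__(self, thresholds, memory_factor=0.05):
        # thresholds are increasing queue-length boundaries; crossing
        # more of them means a longer backlog and hence a smaller k.
        self.thresholds = sorted(thresholds)
        self.beta = memory_factor
        self.q_avg = 0.0

    def choose_dimension(self, queue_length, k_max):
        # Exponential moving average smooths transient queue variation;
        # beta = 1 falls back to the instantaneous queue length.
        self.q_avg = (1 - self.beta) * self.q_avg + self.beta * queue_length
        crossed = sum(1 for t in self.thresholds if self.q_avg > t)
        return max(1, k_max - crossed)

# Hypothetical thresholds for a maximum dimension of 6.
adapter = TofecAdapter([2.0, 5.0, 10.0, 20.0, 40.0], memory_factor=1.0)
```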
It is worth pointing out that TOFEC’s threshold-based adaptation is
Independent of the workload: the thresholds for each class are computed a priori without any knowledge of or assumption about the arrival process. Once computed, the thresholds can be reused for all realizations of the workload, even if it is time-varying;
Independent across classes: for any class, computing its thresholds requires knowledge of neither the number nor the delay parameters of the other classes. The adaptation of each class is likewise independent of the other classes.
These two independence properties are a direct result of the implications of Theorem 1 discussed before. Thanks to them, it is very easy in TOFEC to add support for a new class incrementally: simply compute the thresholds for the new class, leaving the thresholds for the existing classes untouched. The old and new thresholds together then produce the optimal choice of codes for the enlarged set of classes.
We now demonstrate the benefits of TOFEC’s adaptation mechanism. We evaluate TOFEC’s adaptation strategy and show that it outperforms static strategies under both constant and changing workloads, as well as a simple greedy heuristic that will be introduced later.
V-A Simulation Setup
We conducted trace-driven simulations for both single-class and multi-class scenarios, with both read and write requests of different file sizes. Due to lack of space, we only show results for the single-class scenario (read, 3MB); we emphasize that it is representative, and the findings discussed in this section remain valid for other settings (different file sizes, write requests, and multiple classes). We assume the system supports a fixed maximum number of simultaneously active threads, and we cap the code dimension and redundancy ratio at the levels beyond which our measurements show negligible gain in service delay from additional chunking or redundancy. We use traces collected in May and June 2013 in availability region “North California”. To compute the thresholds for TOFEC, we need estimates of the delay parameters. For this, we first filter out the worst 10% of task delays in the traces, then compute the delay parameters from least-squares linear fits of the mean and standard deviation of the remaining task delays. We use a fixed memory factor in TOFEC.
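The parameter-fitting step can be sketched as follows (the trace format and function names are assumptions; the real traces hold per-task delays for each chunk size):

```python
from statistics import mean, pstdev

def linear_fit(xs, ys):
    """Ordinary least-squares fit y ~ a*x + b; returns (a, b)."""
    mx, my = mean(xs), mean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fit_delay_parameters(task_delays_by_size):
    """task_delays_by_size: chunk size -> list of measured task delays.
    Drops the worst 10% of delays per size, then fits the mean and the
    standard deviation of the remainder linearly against chunk size."""
    sizes, means, stds = [], [], []
    for size, delays in sorted(task_delays_by_size.items()):
        kept = sorted(delays)[: int(0.9 * len(delays))]  # filter worst 10%
        sizes.append(size)
        means.append(mean(kept))
        stds.append(pstdev(kept))
    return linear_fit(sizes, means), linear_fit(sizes, stds)
```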
In addition to the static strategies, we develop a simple Greedy heuristic strategy for comparison. Unlike the adaptive strategy in TOFEC, Greedy does not require prior knowledge of the distribution of task delays, yet, as the results will reveal, it achieves competitive mean delay performance. In Greedy, the code used to serve a request is determined by the number of idle threads upon its arrival: given the number of idle threads,
The idea of Greedy is to first maximize the level of chunking with the available idle threads, then increase the redundancy ratio as long as idle threads remain.
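A plausible reading of Greedy as code (the exact caps and tie-breaking are assumptions; the maximum dimension and redundancy ratio mirror the setup of Section V-A):

```python
def greedy_code(idle_threads, k_max=6, r_max=1.0):
    """Greedy heuristic: spend idle threads first on chunking
    (larger code dimension k), then on redundancy (larger n - k),
    subject to assumed per-class caps k_max and r_max."""
    idle = max(1, idle_threads)       # always issue at least one task
    k = min(idle, k_max)              # maximize chunking level first
    n_cap = int(k * (1 + r_max))      # redundancy cap: n <= k * (1 + r_max)
    n = min(idle, n_cap)              # then add redundant tasks
    return n, k
```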
V-B Throughput-Delay Trade-Off
Fig.8 shows the mean, median, 90th percentile, and 99th percentile delays of TOFEC and Greedy with Poisson arrivals at different arrival rates. We also run simulations with static strategies for all possible code choices at every arrival rate; in a brute-force fashion, we find the best mean, median, 90th, and 99th percentile delays achieved by static strategies and use them as the baseline. Fig.8(a) and Fig.8(b) also plot the mean and median delay performance of the basic static strategy with no chunking and no replication; the simple replication static strategy; and the backlog-based adaptive strategy from [3] with fixed code dimension.
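The brute-force baseline can be sketched as follows (the `simulate` callback and the code-parameter ranges are placeholders standing in for the trace-driven simulator):

```python
from itertools import product

def best_static_baseline(simulate, arrival_rate, n_max=12, k_max=6):
    """Exhaustively simulate every static (n, k) code and keep the best
    value of each delay statistic across all codes. `simulate(n, k, rate)`
    is assumed to return a dict like {'mean': ..., 'median': ...}."""
    best = {}
    for k, n in product(range(1, k_max + 1), range(1, n_max + 1)):
        if n < k:
            continue                  # need at least k tasks to decode
        stats = simulate(n, k, arrival_rate)
        for metric, value in stats.items():
            if metric not in best or value < best[metric]:
                best[metric] = value
    return best
```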
As we can see, both TOFEC and Greedy successfully support the full capacity region (the one supported by basic static) while achieving almost optimal mean and median delays throughout. At light workload, TOFEC reduces the mean delay from 205ms (basic static) and 151ms (simple replication) to 84ms, and the median delay from 156ms and 138ms to 74ms. Meanwhile, Greedy achieves comparable improvements over basic in both mean (89ms) and median (79ms) delays.
With heavier workload, both TOFEC and Greedy successfully adapt their codes to keep pace with the best static strategies in terms of mean and median delays. It is clear from the figures that both achieve our primary goal of retaining the full system capacity supported by the basic static strategy. In contrast, although simple replication has slightly better mean and median delays than basic under light workload, it fails to support arrival rates beyond 70% of basic’s capacity. Meanwhile, the adaptive strategy from [3] with fixed code dimension can support less than 30% of the original capacity region, although it achieves the best delay at very light workload.
While the two adaptive strategies have similar mean and median performance, TOFEC outperforms Greedy significantly at high percentiles. As Fig.8(c) and Fig.8(d) demonstrate, TOFEC is on a par with the best static strategies at the 90th and 99th percentile delays throughout the whole capacity region, whereas Greedy fails to track the best static performance at lower arrival rates. At light workload, TOFEC is substantially better than Greedy at both the 90th and 99th percentiles. The heavy-workload case is less interesting: there the system is capacity-limited, and both strategies converge to the basic static strategy, which is optimal in this regime.
V-C Delay Variation and Choice of Codes
We further compare the standard deviation (STD) of delay under TOFEC, Greedy, and the best static strategy. STD is an important performance metric because it directly relates to whether customers receive consistent QoS; in certain applications, such as video streaming, maintaining low delay STD can be even more critical than achieving low mean delay. As we can see in Fig.9, in the region of interest with light to medium workload, TOFEC delivers substantially lower STD than Greedy. Moreover, in spite of its dynamic adaptive nature, TOFEC in fact matches the best static strategy very well throughout the full capacity region. This suggests that the code choice in TOFEC indeed converges to the optimal.
The convergence to the optimal becomes more obvious when we look at the fraction of requests served by each choice of code. In Fig.10 we plot the composition of requests served by different code dimensions. At each arrival rate, the two bars represent TOFEC and Greedy; within each bar, blocks in different colors represent the fractions of requests served with code dimensions 1 through 6, from bottom to top. TOFEC’s choice of code dimension demonstrates a high concentration around the optimal value: at all arrival rates, over 80% of requests are served by the 2 neighboring dimensions around the optimal, and the fraction quickly diminishes to 0 for codes further from the optimal. Moreover, as the arrival rate varies from low to high, TOFEC’s choice of dimension transitions quite smoothly and eventually converges to a single value as the workload approaches the system capacity.
In contrast, Greedy tends to round-robin across all possible dimensions, and the majority of requests are served by either the smallest or the largest dimension. Greedy is thus effectively alternating between the two extremes of no chunking and very high chunking, instead of staying around the optimal. This “all or nothing” behavior accounts for the much worse STD shown in Fig.9; TOFEC therefore provides a much better QoS guarantee.
V-D Adapting to Changing Workload
We further examine how well the two adaptive strategies adjust to changes in workload. In Fig.11 we plot the total delay experienced by requests arriving at different times within a 600-second period, as well as the choice of code over the same period. The 600 seconds are divided into 3 phases, each lasting 200 seconds. The arrival rate is 10 requests/second in phases 1 and 3, and 80 requests/second (slightly below the system capacity) in phase 2, with correspondingly different optimal code choices in the light and heavy phases. For comparison, we also implement an “Ideal” rate-driven strategy that has perfect knowledge of the arrival rate of each phase and picks the optimal code accordingly as the baseline. We can see that both TOFEC and Greedy are quite agile to changes in arrival rate and quickly converge to a composition of codes that delivers optimal mean delay within each phase, comparable to that of Ideal.
From Fig.11(b) we can further observe that TOFEC is especially responsive in the face of a workload surge (from phase 1 to 2): the suddenly increased arrival rate immediately builds up a large backlog, which in turn forces TOFEC to pick a code with the smallest dimension. When the arrival rate drops (from phase 2 to 3), instead of immediately switching back to high-dimension codes, TOFEC gradually transitions to the optimal value. Such “smoothing” behavior when workload decreases is actually beneficial: the request queue has been built up during the preceding period of heavy workload, so if the code dimension were set to 5 right after the arrival rate drops, the resulting throughput would be so low that it would take much longer to reduce the queue length to the desired level, and requests arriving during this period would suffer long queueing delays even while being served with the optimal code. TOFEC’s queue-length-driven adaptation instead sticks with smaller dimensions, which deliver higher throughput, to drain the queue to the desired level much faster. As we can see in Fig.11(c), which plots the delay traces for requests arriving in the first 10 seconds of phase 3, both TOFEC and Greedy reduce their delay to the optimal noticeably faster than Ideal does after the workload decreases. This is another advantage of using queue length instead of arrival rate to drive code adaptation.
We can also see that TOFEC’s choice of code is much more stable than Greedy’s. While TOFEC shows little variation around the optimal in each phase, Greedy keeps oscillating between the two extreme dimensions even when the optimal is 1! This is consistent with the “all or nothing” behavior of Greedy observed in Fig.10.
VI Related Work
FEC in connection with multiple paths and/or multiple servers is a well-investigated topic in the literature [8, 9, 10, 11]; however, very little attention has been devoted to queueing delays. FEC in the context of network coding or coded scheduling has also been popular, from the perspectives of throughput (or network utility) maximization and throughput vs. service delay trade-offs [12, 13, 14, 15]; although some of these works incorporate queueing delay analysis, the treatment is largely for broadcast wireless channels with quite different system characteristics and constraints. FEC has also been extensively studied in the context of distributed storage, from the perspective of attaining high durability and availability with high storage efficiency [16, 17, 18].
The authors of [4] conducted a theoretical study of cloud storage systems using FEC in a fashion similar to our work [3]. Given that exact mathematical analysis of the general case is very difficult, [4] considered a very simple case with a fixed code. Shah et al. [5] generalize the results of [4] to more general codes. Both works rely on the assumption of exponential task delays, which hardly captures reality, and therefore some of their theoretical results cannot be applied in practice. For example, under the assumption of exponential task delays, Shah et al. prove that it is optimal to always use the largest possible code length throughout the full capacity region, contradicting simulation results based on real-world measurements in [3] and in this paper.
This paper presents the first set of solutions for achieving the optimal throughput-delay trade-off for scalable key-value storage access using erasure codes with variable chunk sizing and rate adaptation. We establish the viability of this approach through extensive measurement study over the popular public cloud storage service Amazon S3. We develop two adaptation strategies: TOFEC and Greedy. TOFEC monitors the local backlog and compares it against a set of thresholds to dynamically determine the optimal code length and dimension. Our trace-driven simulation shows that TOFEC is on a par with the best static strategy in terms of mean, median, 90th, and 99th percentile delays, as well as delay variation. To compute the thresholds, TOFEC requires knowledge of the mean and variance of cloud storage access delays, which is usually obtained by maintaining a log of delay traces. On the other hand, Greedy does not require any knowledge of the delay profile or logging but is able to achieve mean and median delays comparable to those of TOFEC. However, it falls short in important QoS metrics such as higher percentile delays and variation. It is part of our ongoing work to develop a strategy that matches TOFEC’s high percentile delay performance without prior knowledge and logging.
[1] C. Huang, H. Simitci, Y. Xu, A. Ogus, B. Calder, P. Gopalan, J. Li, and S. Yekhanin, “Erasure Coding in Windows Azure Storage,” in USENIX ATC, 2012.
[2] S. L. Garfinkel, “An Evaluation of Amazon’s Grid Computing Services: EC2, S3 and SQS,” Harvard University, Tech. Rep., 2007.
[3] G. Liang and U. C. Kozat, “FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage with Coding,” IEEE/ACM Trans. Networking, preprint, 13 Nov. 2013, doi: 10.1109/TNET.2013.2289382.
[4] L. Huang, S. Pawar, H. Zhang, and K. Ramchandran, “Codes Can Reduce Queueing Delay in Data Centers,” in IEEE ISIT, 2012.
[5] N. B. Shah, K. Lee, and K. Ramchandran, “The MDS Queue: Analysing Latency Performance of Codes and Redundant Requests,” arXiv:1211.5405, Apr. 2013.
[6] J. C. McCullough, J. Dunagan, A. Wolman, and A. C. Snoeren, “Stout: an Adaptive Interface to Scalable Cloud Storage,” in USENIX ATC, 2010.
[7] R. Kotla, L. Alvisi, M. Dahlin, A. Clement, and E. Wong, “Zyzzyva: Speculative Byzantine Fault Tolerance,” ACM Transactions on Computer Systems, vol. 27, pp. 7:1–7:39, 2010.
[8] V. Sharma, S. Kalyanaraman, K. Kar, K. K. Ramakrishnan, and V. Subramanian, “MPLOT: A Transport Protocol Exploiting Multipath Diversity Using Erasure Codes,” in IEEE INFOCOM, 2008.
[9] E. Gabrielyan, “Fault-Tolerant Real-Time Streaming with FEC thanks to Capillary Multi-Path Routing,” Computing Research Repository, 2006.
[10] J. W. Byers, M. Luby, and M. Mitzenmacher, “Accessing Multiple Mirror Sites in Parallel: Using Tornado Codes to Speed Up Downloads,” in IEEE INFOCOM, 1999.
[11] R. Saad, A. Serhrouchni, Y. Begliche, and K. Chen, “Evaluating Forward Error Correction Performance in BitTorrent Protocol,” in IEEE LCN, 2010.
[12] A. Eryilmaz, A. Ozdaglar, M. Medard, and E. Ahmed, “On the Delay and Throughput Gains of Coding in Unreliable Networks,” IEEE Trans. Inf. Theory, 2008.
[13] W.-L. Yeow, A. T. Hoang, and C.-K. Tham, “Minimizing Delay for Multicast-Streaming in Wireless Networks with Network Coding,” in IEEE INFOCOM, 2009.
[14] T. K. Dikaliotis, A. G. Dimakis, T. Ho, and M. Effros, “On the Delay of Network Coding over Line Networks,” Computing Research Repository, 2009.
[15] U. C. Kozat, “On the Throughput Capacity of Opportunistic Multicasting with Erasure Codes,” in IEEE INFOCOM, 2008.
[16] A. G. Dimakis, P. B. Godfrey, Y. Wu, M. J. Wainwright, and K. Ramchandran, “Network Coding for Distributed Storage Systems,” IEEE Trans. Inf. Theory, 2010.
[17] R. Rodrigues and B. Liskov, “High Availability in DHTs: Erasure Coding vs. Replication,” in 4th International Workshop, IPTPS, 2005.
[18] J. Li, S. Yang, X. Wang, and B. Li, “Tree-Structured Data Regeneration in Distributed Storage Systems with Regenerating Codes,” in IEEE INFOCOM, 2010.
Appendix: Proof of Theorem 1
It is easy to verify that the objective function of (IV-A) is continuous and differentiable everywhere within the feasible region, with partial derivatives
Notice that over the whole feasible region, including the boundary, the objective of (IV-A) is lower bounded by 0, so there must exist at least one global optimal solution. Moreover, the objective goes to infinity if and only if the operating point approaches the boundary of the feasible region. Since the objective is infinite on the whole boundary, the global optimum must reside strictly within the feasible region. As a result, both Eq.15 and Eq.16 must evaluate to 0 at the optimum. In the subsequent discussion, we prove that the system Eq.15 = 0, Eq.16 = 0 has a unique solution within the feasible region; consequently, the global optimum equals this solution and is also unique.
We do not consider the other solution to Eq.18 because it always lies outside the feasible region. It is easy to verify that the retained solution is a strictly increasing function of its argument within the feasible region.
After this substitution, the right-hand side of Eq.17 can be written as a function that can be shown to be strictly decreasing. Notice that Eq.17 must be satisfied for every class while its left-hand side remains unchanged. Then
Note that the two functions above are strictly decreasing in their respective arguments. This means that, at the optimal solutions, there is a one-to-one mapping between the variables of any two classes, given by a strictly increasing function.
With these substitutions, Eq.17 becomes an equation in a single variable. It is then not hard to show that, for any given parameters, the left-hand side of Eq.17 is a strictly increasing function of this variable while the right-hand side is strictly decreasing. As a result, the two sides can be equal for at most one value; in other words, Eq.17 and Eq.18 have at most one solution. Existence of a solution is guaranteed by the existence of the global optimum, so the solution must be unique. This completes the proof.
Guanfeng Liang (S’06-M’12) received his B.E. degree from University of Science and Technology of China, Hefei, Anhui, China, in 2004, M.A.Sc. degree in Electrical and Computer Engineering from University of Toronto, Canada, in 2007, and Ph.D. degree in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign, in 2012. He currently works with DOCOMO Innovations (formerly DOCOMO USA Labs), Palo Alto, CA, as a Research Engineer.
Ulaş C. Kozat (S’97-M’04-SM’10) received his B.Sc. degree in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey, in 1997, M.Sc. degree in Electrical Engineering from the George Washington University, Washington, DC, in 1999, and Ph.D. degree in Electrical and Computer Engineering from the University of Maryland, College Park, in 2004. He currently works at DOCOMO Innovations (formerly DOCOMO USA Labs), Palo Alto, CA, as a Principal Researcher.