Optimized Video Streaming over Cloud: A Stall-Quality Trade-off

FastTrack: Minimizing Stalls for CDN-based Over-the-top Video Streaming Systems

Keywords: Video Streaming over Cloud, Erasure Codes, Mean Stall Duration, Video Quality, Two-stage Probabilistic Scheduling

1. Implementation and Evaluation

In this section, we evaluate our proposed algorithm for minimizing the weighted stall duration tail probability (SDTP).
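Throughout this section, the metric being optimized is a weighted sum of per-file stall tail probabilities. As a sketch of its form (the exact normalization follows the problem formulation; here $\Gamma^{(i)}$ denotes the stall duration of file $i$, $w_i$ its stall-sensitivity weight, $\lambda_i$ its arrival rate, and $\sigma$ the stall threshold):

$$\mathrm{WSDTP}(\sigma) \;=\; \sum_{i} \frac{\lambda_i}{\overline{\lambda}}\, w_i \,\Pr\!\big(\Gamma^{(i)} > \sigma\big), \qquad \overline{\lambda} \;=\; \sum_i \lambda_i .$$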

Node 1  Node 2  Node 3  Node 4  Node 5  Node 6
Node 7  Node 8  Node 9  Node 10  Node 11  Node 12
Table 1. The values of the service rates $\alpha_j$ used in the evaluation, in units of 1/ms, for each of the 12 storage nodes. The shift of the shifted-exponential service time is fixed (in ms).

1.1. Parameter Setup

We simulate our algorithm in a distributed storage cache system of 12 distributed nodes, where some segments of each video file (the first $L_i$ segments of file $i$) are stored in the storage cache nodes and are thus served from the cache nodes; the non-cached segments are served from the data center. Without loss of generality, we fix the erasure-code parameters (unless otherwise explicitly stated) and consider a catalog of $r$ files, whose sizes are generated based on a Pareto distribution (arnold2015pareto), as it is a commonly used distribution for file sizes (Vaphase), with the chosen shape factor and scale. While we stick to these parameters in the simulation, our analysis and results remain applicable to any setting, provided the system remains stable under the chosen parameters. Since we assume that the video file sizes are not heavy-tailed, the first $r$ generated file sizes that are less than 60 minutes are chosen. We also assume that the segment service time follows a shifted-exponential distribution whose parameters are depicted in Table 1, which summarizes the different server rates $\alpha_j$. These values are extracted from our testbed (explained below), where the largest value of $\alpha_j$ corresponds to a bandwidth of 110 Mbps and the smallest value corresponds to a bandwidth of 25 Mbps. Unless explicitly stated otherwise, the files are split into groups with different base arrival rates. The segment duration is fixed (in seconds), and the cache servers are assumed to store only a subset of each video file's segments. When generating video files, the size of each video file is rounded up to a multiple of the segment duration. To initialize our algorithm, we assume uniform scheduling probabilities and equal bandwidth allocation weights. However, these initial parameter choices may not be feasible; thus, we replace the initialization with the closest (in norm) feasible solution.
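As an illustration of the workload generation described above, the following sketch draws Pareto-distributed video lengths, discards the heavy tail beyond 60 minutes, and rounds each file up to a whole number of segments. The concrete constants (catalog size, shape, scale, segment duration) are placeholder assumptions, not the paper's exact values:

```python
import math
import random

# Placeholder parameters, for illustration only.
NUM_FILES = 1000          # assumed catalog size r
PARETO_SHAPE = 2.0        # assumed Pareto shape factor
PARETO_SCALE = 300.0      # assumed Pareto scale (seconds)
SEGMENT_SEC = 4.0         # assumed segment duration (seconds)
MAX_LEN_SEC = 60 * 60     # keep only files shorter than 60 minutes

def pareto_length():
    """Draw a video length (in seconds) from a Pareto distribution."""
    u = random.random()
    return PARETO_SCALE / (u ** (1.0 / PARETO_SHAPE))

lengths = []
while len(lengths) < NUM_FILES:
    x = pareto_length()
    if x < MAX_LEN_SEC:  # discard heavy-tail outliers beyond 60 minutes
        # round up to a whole number of segments
        lengths.append(SEGMENT_SEC * math.ceil(x / SEGMENT_SEC))

num_segments = [int(l / SEGMENT_SEC) for l in lengths]
```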

1.2. Baselines

We compare our proposed approach with six strategies, which are described as follows (all of them share the alternating-minimization skeleton sketched after this list).

  1. Projected Equal Server-PSs Scheduling, Optimized Auxiliary Variables, Cache Placement, and Bandwidth Weights (PEA): Starting with the initial solution mentioned above, the joint optimization problem is solved over the choice of the auxiliary variables $\boldsymbol{t}$, the bandwidth allocation weights $\boldsymbol{w}$, and the cache placement $\boldsymbol{L}$ (using the corresponding subproblem algorithms) via alternating minimization. Thus, the server and PS scheduling probabilities remain approximately uniform for all files.

  2. Projected Equal Bandwidth, Optimized Access Server and PS Scheduling Probabilities, Auxiliary Variables, and Cache Placement (PEB): Starting with the initial solution mentioned above, the joint optimization problem is solved over the choice of the scheduling probabilities $\boldsymbol{\pi}$, the auxiliary variables $\boldsymbol{t}$, and the cache placement $\boldsymbol{L}$ (using the corresponding subproblem algorithms) via alternating minimization. Thus, the bandwidth allocation weights $\boldsymbol{w}$ remain approximately equal across the parallel streams.

  3. Projected Proportional Service-Rate, Optimized Auxiliary Variables, Bandwidth Weights, and Cache Placement (PSP): In the initialization, the access probabilities among the servers are set proportional to the service rates, i.e., $\pi_{ij} \propto \alpha_j$. The choice of all parameters is then projected onto the closest (in norm) feasible solution. Using this initialization, the joint optimization problem is solved over the choice of $\boldsymbol{t}$, $\boldsymbol{w}$, and $\boldsymbol{L}$ (using the corresponding subproblem algorithms) via alternating minimization.

  4. Projected Equal Caching, Optimized Scheduling Probabilities, Auxiliary Variables, and Bandwidth Allocation Weights (PEC): In this strategy, we divide the cache size equally among the video files. Using this initialization, the joint optimization problem is solved over the choice of $\boldsymbol{\pi}$, $\boldsymbol{t}$, and $\boldsymbol{w}$ (using the corresponding subproblem algorithms) via alternating minimization.

  5. Projected Caching-Hottest Files, Optimized Scheduling Probabilities, Auxiliary Variables, and Bandwidth Allocation Weights (CHF): In this strategy, we cache the video files with the highest arrival rates (i.e., the hottest files) at the distributed storage caches. Using this initialization, the joint optimization problem is solved over the choice of $\boldsymbol{\pi}$, $\boldsymbol{t}$, and $\boldsymbol{w}$ (using the corresponding subproblem algorithms) via alternating minimization.

  6. Fixed-t Algorithm: In this strategy, we optimize all optimization variables except the auxiliary variables $\boldsymbol{t}$, which are held at a fixed value.
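All of the baselines above share the same alternating-minimization skeleton and differ only in which blocks of variables are projected to a fixed value versus optimized. A minimal sketch of that skeleton follows; the subproblem solvers (`opt_t`, `opt_w`, `opt_L`, etc.) and the objective are assumptions standing in for the paper's subproblem algorithms, not their actual implementations:

```python
def alternating_minimization(x0, blocks, objective, max_iters=300, tol=1e-6):
    """Cycle over variable blocks, optimizing one block at a time.

    x0        -- dict of initial variable blocks, e.g. {'pi': ..., 't': ...,
                 'w': ..., 'L': ...}, already projected to a feasible point
    blocks    -- list of (name, solver) pairs; each solver updates one block
                 while all the others are held fixed
    objective -- callable evaluating the weighted SDTP bound at x
    """
    x, prev = dict(x0), float('inf')
    for _ in range(max_iters):
        for name, solver in blocks:
            x[name] = solver(x)          # optimize this block, others fixed
        cur = objective(x)
        if prev - cur < tol:             # stop when improvement stalls
            break
        prev = cur
    return x

# e.g., PEA keeps 'pi' fixed (uniform) and cycles over t, w, L:
# x = alternating_minimization(x0, [('t', opt_t), ('w', opt_w), ('L', opt_L)],
#                              weighted_sdtp_bound)
```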

Figure 1. Convergence of weighted stall-duration tail probability.
Figure 2. Cumulative distribution function of the weighted stall-duration tail probability.
Figure 3. Weighted stall-duration tail probability versus arrival rate of video files.

1.3. Numerical Results

Figure 4. Weighted stall-duration tail probability for different numbers of files. We vary the number of files per group from 500 to 1100 in increments of 200; for each set, the base file arrival rate is scaled proportionally.
Figure 5. Weighted stall-duration tail probability for different scalings of the server bandwidth.
Figure 6. Weighted stall-duration tail probability for different numbers of parallel streams at the cache servers and the datacenter.

Convergence of the Proposed Algorithm

Figure 1 shows the convergence of our proposed algorithm, which alternately optimizes the weighted stall duration tail probability of all files over the scheduling probabilities $\boldsymbol{\pi}$, auxiliary variables $\boldsymbol{t}$, bandwidth allocation weights $\boldsymbol{w}$, and cache placement $\boldsymbol{L}$. We see that, for the considered setup with 12 cache storage nodes, the weighted stall duration tail probability converges to the optimal value in fewer than 300 iterations.

Weighted SDTP

In Figure 2, we plot the cumulative distribution function (CDF) of the weighted stall duration tail probability (with the stall threshold measured in seconds) for different strategies, including our proposed algorithm and the PSP, PEA, PEB, PEC, and Fixed-t algorithms. We note that our proposed algorithm, which jointly optimizes $\boldsymbol{\pi}$, $\boldsymbol{t}$, $\boldsymbol{w}$, and $\boldsymbol{L}$, provides a significant improvement over the considered strategies, reducing the weighted stall duration tail probability by orders of magnitude. For example, the weighted stall-duration tail probability achieved by our proposed algorithm saturates at a much lower value than those of the other considered strategies. Further, uniformly accessing servers and equally allocating bandwidth and cache cannot adapt the request scheduler to factors such as cache placement, request arrival rates, and different stall weights, thus leading to a much higher stall duration tail probability. Since the Fixed-t policy performs significantly worse than the other considered policies, we do not include it in the rest of the paper.
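For concreteness, the following sketch shows how an empirical weighted SDTP and a CDF like the one in Figure 2 could be computed from simulation samples; the array shapes and the arrival-rate-normalized weighting are assumptions for illustration:

```python
import numpy as np

def weighted_sdtp(stall_durations, weights, arrival_rates, sigma):
    """Empirical weighted SDTP at threshold sigma.

    stall_durations -- array of shape (num_files, num_runs), in seconds
    weights, arrival_rates -- per-file arrays of length num_files
    """
    tail = (stall_durations > sigma).mean(axis=1)      # per-file tail prob.
    w = weights * arrival_rates / arrival_rates.sum()  # normalized weights
    return float((w * tail).sum())

def empirical_cdf(samples):
    """Return x, F(x) pairs for plotting a CDF like Figure 2."""
    x = np.sort(samples)
    return x, np.arange(1, len(x) + 1) / len(x)
```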

Effect of Arrival Rates

Figure 3 shows the effect of increasing the system workload, obtained by scaling the arrival rates of the video files as multiples of the base arrival rate $\lambda$, on the stall duration tail probability for video lengths generated from the Pareto distribution defined above. We notice a significant improvement in the QoE metric with the proposed strategy as compared to the baselines. For instance, at the highest considered arrival rate, the proposed strategy substantially reduces the weighted stall duration tail probability as compared to the nearest strategy, i.e., the PSP algorithm.

Effect of Video File Weights on the Weighted SDTP

While the weighted stall duration tail probability increases as the arrival rate increases, our algorithm assigns differentiated latency to the different video file groups, as captured in Figure 4, to keep the QoE metric low. Group 3, which has the highest weight (i.e., is the most tail-stall sensitive), always receives the minimum stall duration tail probability, even though its files have the highest arrival rate. This efficiently reduces the stall tail probability of the high-arrival-rate files, which in turn reduces the overall weighted stall duration tail probability. In addition, we note that optimized server/PS access probabilities help differentiate file latencies, achieving a lower weighted tail latency probability than a strategy that simply selects the minimum-queue-length servers to access the content.

Effect of Scaling up the Bandwidth of the Cache Servers and Datacenter

We show the effect of increasing the server bandwidth on the weighted stall duration tail probability in Figure 5. Intuitively, increasing the storage node bandwidth increases the service rate of the storage nodes, thus reducing the weighted stall duration tail probability.

Effect of the Number of Parallel Connections

To study the effect of the number of parallel connections, we plot in Figure 6 the weighted stall duration tail probability of our proposed algorithm as the numbers of parallel streams (PSs) at the cache servers and the datacenter are varied. We can see that increasing the number of parallel streams improves performance: since some of the bandwidth splits can be zero, any solution achievable with fewer streams remains feasible, so the optimum can only improve. Increasing the number of PSs also decreases stall durations, since more video files can be streamed concurrently. We note that for sufficiently large numbers of parallel streams, the weighted stall duration tail probability is almost zero. However, streaming servers may only be able to handle a limited number of parallel connections, which limits the number of PSs in real systems.

1.4. Testbed Configuration

Cluster Information
Control Plane OpenStack Kilo
VM flavor 1 VCPU, 2 GB RAM, 20 GB storage (HDD)
Software Configuration
Operating System Ubuntu Server 16.04 LTS
Origin Server(s) Apache Web Server (apacheweb): Apache/2.4.18 (Ubuntu)
Cache Server(s) Apache Traffic Server (trafficserv) 6.2.0 (build # 100621)
Client Apache JMeter (jmeter) with HLS plugin (hlsplugin)
Table 2. Testbed Configuration
Figure 7. Testbed in the cloud


We constructed an experimental testbed in a virtualized cloud environment managed by OpenStack (openstack).

We allocated one VM for an origin server and 5 VMs for cache servers, intended to simulate two locations (i.e., different states). The schematic of our testbed is illustrated in Figure 7. One VM per location is used for generating client workloads. Table 2 summarizes the detailed configuration used for the experiments. For the client workload, we use a popular HTTP-traffic generator, Apache JMeter, with a plug-in that can generate traffic using the HTTP Live Streaming (HLS) protocol. We assume the amount of available bandwidth is 200 Mbps between the origin server and each cache server, 500 Mbps between cache servers 1/2 and edge router 1, and 300 Mbps between cache servers 3/4/5 and edge router 2. In these experiments, to allocate bandwidth to the clients, we throttle the client (i.e., JMeter) traffic according to the plan generated by our algorithm. We consider a fixed number of threads (i.e., users). Based on a one-week trace from our production system, we estimate the aggregate arrival rates at edge routers 1 and 2, and HLS samplers (i.e., requests) are sent at the corresponding rates. We assume a fraction of the segments is stored in the cache, and the remaining segments are served from the origin server. The video length and the segment length are fixed across the experiments. For each segment, we use JMeter's built-in reports to estimate the download time of each segment and then plug these times into our model to obtain the SDTP.
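The last step above, turning per-segment download times into a stall duration, can be sketched as follows. The playback model (a startup delay, after which each segment must be available when its playback deadline arrives) is standard; the startup-delay value and variable names are assumptions for illustration:

```python
def stall_duration(download_times, seg_len=4.0, startup_delay=5.0):
    """Total stall time for one video, given per-segment download times.

    download_times -- list of download durations (seconds), in order;
                      segments are fetched sequentially in this sketch
    seg_len        -- playback duration of one segment (assumed)
    startup_delay  -- initial buffering time before playback (assumed)
    """
    finish = 0.0           # when each segment finishes downloading
    play = startup_delay   # playback deadline of the next segment
    stalls = 0.0
    for d in download_times:
        finish += d
        if finish > play:           # segment not ready: player stalls
            stalls += finish - play
            play = finish           # playback resumes on arrival
        play += seg_len             # advance the playback deadline
    return stalls
```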

Service Time Distribution: We first run experiments to measure the actual service time distribution in our cloud environment. Figure 8 depicts the cumulative distribution function (CDF) of the chunk service time for different bandwidths. Using these results, we show that the service time of a chunk can be well approximated by a shifted-exponential distribution, with rates of 24.60 and 29.75 (in 1/s) for bandwidths of 25 Mbps and 30 Mbps, respectively. These results also verify that the actual service time does not follow a pure exponential distribution; this observation was also made earlier in (Yu-TON16). Further, the rate parameter of the exponential is almost proportional to the bandwidth, while the shift is nearly constant, which validates the model.
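A simple way to obtain such a fit from measured chunk service times is method-of-moments for a shifted exponential: take the shift as (roughly) the minimum observed sample and the rate as the reciprocal of the mean excess over that shift. A minimal sketch under those assumptions:

```python
import numpy as np

def fit_shifted_exponential(service_times):
    """Fit X = beta + Exp(alpha) to measured chunk service times (seconds).

    Returns (alpha, beta): rate in 1/s and shift in s. The shift is taken
    as the minimum observed sample, and the rate as 1 / mean excess over
    the shift (a simple moment-matching estimate).
    """
    t = np.asarray(service_times, dtype=float)
    beta = t.min()
    alpha = 1.0 / (t.mean() - beta)
    return alpha, beta

# The fitted rate should scale roughly linearly with the link bandwidth
# (e.g., ~24.60 1/s at 25 Mbps vs ~29.75 1/s at 30 Mbps), while the
# shift stays nearly constant.
```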

SDTP Comparisons: Figure 9 compares four different policies: the actual SDTP measured on the testbed, the analytical SDTP, and the PSP- and PEA-based SDTP algorithms. We see that the analytical SDTP is very close to the actual SDTP measured on our testbed. To the best of our knowledge, this is the first work to jointly consider all key design degrees of freedom, including bandwidth allocation among different parallel streams, cache content placement, request scheduling, and the modeling variables associated with the SDTP bound.

Arrival Rates Comparisons: Figure 10 shows the effect of increasing the system workload, obtained by varying the arrival rates of the video files over the considered range, on the stall duration tail probability. We notice a significant improvement in the QoE metric with the proposed strategy as compared to the baselines. Further, the gap between the analytical bound and the actual SDTP is small, which validates the tightness of our proposed SDTP bound.

Mean Stall Duration Comparisons: We further plot the weighted mean stall duration (WMSD) in Figure 11. As expected, the proposed approach achieves the lowest stall durations, and the gap between the analytical and experimental results is small; thus, the proposed bound is tight. Also, caching the hottest files does not help much, since caching later segments is unnecessary: they can be downloaded while the earlier segments are being played. Thus, prioritizing earlier segments over later ones for caching is more helpful in reducing stalls than caching complete video files.

Figure 8. Comparison of actual chunk service time distribution and shifted-exponential distribution with the corresponding mean and shift. It verifies that the actual service time of a chunk can be well approximated by a shifted exponential distribution.
Figure 9. Comparison of implementation results of our SDTP algorithm to the analytical SDTP and PEA-based SDTP.
Figure 10. Comparison of implementation results of our SDTP algorithm to the analytical SDTP and PEA-based SDTP as the arrival rate is varied.
Figure 11. Weighted mean stall duration versus the arrival rate for different policies.

References

Footnotes

  1. Conference: June 2018, Irvine, California, USA.
  2. Journal year: 2017.