FastTrack: Minimizing Stalls for CDN-based Over-the-top Video Streaming Systems
1. Implementation and Evaluation
In this section, we evaluate our proposed algorithm for weighted stall duration tail probability.
1.1. Parameter Setup
We simulate our algorithm in a distributed storage cache system of distributed nodes, where some segments, i.e., , of each video file are stored in the storage cache nodes and thus served from the cache nodes. The non-cached segments are served from the datacenter. Without loss of generality, we assume , (unless explicitly stated otherwise) and files, whose sizes are generated from a Pareto distribution (arnold2015pareto) (a commonly used distribution for file sizes (Vaphase)) with shape factor and scale , respectively. While we restrict the simulation to these parameters, our analysis and results remain applicable to any setting, provided the system remains stable under the chosen parameters. Since we assume that the video file sizes are not heavy-tailed, the first file sizes that are less than 60 minutes are chosen. We further assume that the segment service time follows a shifted-exponential distribution whose parameters are given in Table 1, which summarizes the different values of the server rates , and . These values are extracted from our testbed (described below), where the largest value of corresponds to a bandwidth of 110 Mbps and the smallest value corresponds to a bandwidth of 25 Mbps. Unless explicitly stated, the arrival rate for the first files is , while for the next files it is set to . The segment size is set to seconds, and the cache servers are assumed to store only out of the total number of video file segments. When generating video files, the size of each video file is rounded up to a multiple of seconds. To initialize our algorithm, we assume uniform scheduling, , , . Further, we choose , , and . However, these choices of the initial parameters may not be feasible. Thus, we modify the parameter initialization to the closest-norm feasible solution.
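The closest-norm feasibility repair mentioned above can be sketched as a Euclidean projection onto the probability simplex (a standard technique; the function below is our illustration, not the paper's exact projection, which must also respect the paper's other constraints):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean (closest-norm) projection of v onto the probability simplex.

    Illustrates how an infeasible initialization (e.g., scheduling
    probabilities that violate the sum-to-one constraint) can be mapped to
    the nearest feasible point in the l2 sense.
    """
    v = np.asarray(v, dtype=float)
    n = v.size
    u = np.sort(v)[::-1]                      # sort descending
    cssv = np.cumsum(u) - 1.0                 # cumulative sums minus budget
    rho = np.nonzero(u * np.arange(1, n + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)           # threshold to subtract
    return np.maximum(v - theta, 0.0)
```

For example, projecting an initialization whose entries sum to more than one simply shifts and clips the entries so they form a valid probability vector.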
1.2. Baselines
We compare our proposed approach with six strategies, described as follows.

Projected Equal Server/PSs Scheduling, Optimized Auxiliary Variables, Cache Placement and Bandwidth Weights (PEA): Starting with the initial solution mentioned above, the problem in (LABEL:eq:optfun) is optimized over the choice of , , and (using Algorithms LABEL:alg:NOVA_Alg1, LABEL:alg:NOVA_Alg3, and LABEL:alg:NOVA_Alg5, respectively) using alternating minimization. Thus, the values of , , will be close to , and , respectively, for all .

Projected Equal Bandwidth, Optimized Access Server and PS Scheduling Probabilities, Auxiliary Variables and Cache Placement (PEB): Starting with the initial solution mentioned above, the problem in (LABEL:eq:optfun) is optimized over the choice of , , and (using Algorithms LABEL:alg:NOVA_Alg1Pi, LABEL:alg:NOVA_Alg1, and LABEL:alg:NOVA_Alg5, respectively) using alternating minimization. Thus, the bandwidth allocation weights, , , will be approximately , , and , respectively.

Projected Proportional Service-Rate, Optimized Auxiliary Variables, Bandwidth Weights, and Cache Placement (PSP): In the initialization, the access probabilities among the servers are given as . This policy assigns servers proportionally to their service rates. All parameters are then modified to the closest-norm feasible solution. Using this initialization, the problem in (LABEL:eq:optfun) is optimized over the choice of , , and (using Algorithms LABEL:alg:NOVA_Alg1, LABEL:alg:NOVA_Alg3, and LABEL:alg:NOVA_Alg5, respectively) using alternating minimization.

Projected Equal Caching, Optimized Scheduling Probabilities, Auxiliary Variables and Bandwidth Allocation Weights (PEC): In this strategy, we divide the cache size equally among the video files. Using this initialization, the problem in (LABEL:eq:optfun) is optimized over the choice of , , and (using Algorithms LABEL:alg:NOVA_Alg1Pi, LABEL:alg:NOVA_Alg1, and LABEL:alg:NOVA_Alg3, respectively) using alternating minimization.

Projected Caching-Hottest Files, Optimized Scheduling Probabilities, Auxiliary Variables and Bandwidth Allocation Weights (CHF): In this strategy, we cache the video files that have the highest arrival rates (i.e., the hottest files) at the distributed storage caches. Using this initialization, the problem in (LABEL:eq:optfun) is optimized over the choice of , , and (using Algorithms LABEL:alg:NOVA_Alg1Pi, LABEL:alg:NOVA_Alg1, and LABEL:alg:NOVA_Alg3, respectively) using alternating minimization.

Fixed-t Algorithm: In this strategy, we optimize all optimization variables except the auxiliary variable , which is assigned a fixed value of .
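All of the baselines above share the same alternating (block-coordinate) minimization pattern: one block of variables is optimized while the others are held fixed, cycling until the objective stops improving. A minimal sketch of that loop follows; the objective and per-block updates are placeholders of our own, not the paper's sub-algorithms:

```python
def alternating_minimization(objective, updates, x0, tol=1e-6, max_iter=300):
    """Cycle through block updates until the objective improvement < tol.

    objective: maps a dict of variable blocks to a scalar cost.
    updates: dict of name -> function that re-optimizes that block
             given the current values of all blocks.
    """
    x = dict(x0)
    prev = objective(x)
    for _ in range(max_iter):
        for name, update in updates.items():
            x[name] = update(x)          # optimize one block, others fixed
        cur = objective(x)
        if prev - cur < tol:             # converged: no meaningful progress
            break
        prev = cur
    return x
```

In the paper's setting the blocks would be the scheduling probabilities, auxiliary variables, bandwidth weights, and cache placement; each baseline simply freezes one block at its (projected) initialization.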
1.3. Numerical Results
Convergence of the Proposed Algorithm
Figure 1 shows the convergence of our proposed algorithm, which alternately optimizes the weighted stall duration tail probability of all files over the scheduling probabilities , auxiliary variables , bandwidth allocation weights , and cache placement . We see that for video files of size s with cache storage nodes, the weighted stall duration tail probability converges to the optimal value in fewer than 300 iterations.
Weighted SDTP
In Figure 2, we plot the cumulative distribution function (CDF) of the weighted stall duration tail probability with (in seconds) for different strategies, including our proposed algorithm and the PSP, PEA, PEB, PEC, and Fixed-t algorithms. We note that our proposed algorithm, which jointly optimizes , , and , provides significant improvement over the considered strategies, reducing the weighted stall duration tail probability by orders of magnitude. For example, our proposed algorithm shows that the weighted stall-duration tail probability will not exceed s, which is much lower compared with the other considered strategies. Further, uniformly accessing servers and equally allocating bandwidth and cache cannot adapt the request scheduler to factors like cache placement, request arrival rates, and different stall weights, leading to a much higher stall duration tail probability. Since the Fixed-t policy performs significantly worse than the other considered policies, we do not include it in the rest of the paper.
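As a minimal sketch (not the paper's exact analytical expression), the weighted SDTP metric plotted here can be estimated empirically as a weighted sum, over files, of each file's probability that its stall duration exceeds the threshold sigma; the function and variable names below are our own:

```python
def weighted_sdtp(stall_durations_per_file, weights, sigma):
    """Empirical weighted stall-duration tail probability.

    stall_durations_per_file: list of lists of observed stall durations (s),
    one inner list per video file.
    weights: per-file weights (assumed to sum to 1).
    sigma: stall-duration threshold in seconds.
    """
    total = 0.0
    for durations, w in zip(stall_durations_per_file, weights):
        # fraction of requests for this file whose stall exceeded sigma
        tail = sum(1 for d in durations if d > sigma) / len(durations)
        total += w * tail
    return total
```

Files with larger weights (the more tail-stall-sensitive groups discussed below) contribute proportionally more to the metric, which is why the optimizer drives their tail probabilities down first.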
Effect of Arrival Rates
Figure 3 shows the effect of increasing the system workload, obtained by varying the arrival rates of the video files from to , where is the base arrival rate, on the stall duration tail probability for video lengths generated from the Pareto distribution defined above. We notice a significant improvement in the QoE metric with the proposed strategy compared with the baselines. For instance, at an arrival rate of , the proposed strategy reduces the weighted stall duration tail probability by about compared with the nearest strategy, i.e., the PSP algorithm.
Effect of Video File Weights on the Weighted SDTP
While the weighted stall duration tail probability increases as the arrival rate increases, our algorithm assigns differentiated latency to different video file groups, as captured in Figure 4, to maintain the QoE at a lower value. Group 3, which has the highest weight (i.e., is the most tail-stall sensitive), always receives the minimum stall duration tail probability even though its files have the highest arrival rate. This efficiently reduces the stall tail probability of the high-arrival-rate files, which in turn reduces the overall weighted stall duration tail probability. In addition, we note that efficient server/PS access probabilities help differentiate file latencies, compared with the strategy in which minimum-queue-length servers are selected to access the content, thus obtaining a lower weighted tail latency probability.
Effect of Scaling up bandwidth of the Cache Servers and Datacenter
We show the effect of increasing the server bandwidth on the weighted stall duration tail probability in Figure 1.4. Intuitively, increasing the storage node bandwidth increases the service rate of the storage nodes, thus reducing the weighted stall duration tail probability.
Effect of the Parallel Connections and
To study the effect of the number of parallel connections, we plot in Figure 6 the weighted stall duration tail probability of our proposed algorithm for varying numbers of parallel streams, and . We vary the number of PSs from to , with increment steps of and , respectively. We can see that increasing and improves the performance, since some of the bandwidth splits can be zero, making the lower- solution one of the possible feasible solutions. Increasing the number of PSs decreases the stall durations, since more video files can be streamed concurrently. We note that for and , the weighted stall duration tail probability is almost zero. However, the streaming servers may only be able to handle a limited number of parallel connections, which limits the choice of both and in real systems.
1.4. Testbed Configuration
Cluster Information
Control Plane: Openstack Kilo
VM flavor: 1 VCPU, 2 GB RAM, 20 GB storage (HDD)

Software Configuration
Operating System: Ubuntu Server 16.04 LTS
Origin Server(s): Apache Web Server (apacheweb), Apache/2.4.18 (Ubuntu)
Cache Server(s): Apache Traffic Server (trafficserv) 6.2.0 (build #100621)
Client: Apache JMeter (jmeter) with HLS plugin (hlsplugin)
We constructed an experimental environment in a virtualized cloud environment managed by Openstack (openstack).
We allocated one VM for an origin server and 5 VMs for cache servers, intended to simulate two locations (i.e., different states). The schematic of our testbed is illustrated in Figure 1.4. One VM per location is used for generating client workloads. Table 2 summarizes the detailed configuration used for the experiments. For the client workload, we use a popular HTTP-traffic generator, Apache JMeter, with a plugin that can generate traffic using the HTTP Live Streaming (HLS) protocol. We assume the available bandwidth is 200 Mbps between the origin server and each cache server, 500 Mbps between cache servers 1/2 and edge router 1, and 300 Mbps between cache servers 3/4/5 and edge router 2. In these experiments, to allocate bandwidth to the clients, we throttle the client (i.e., JMeter) traffic according to the plan generated by our algorithm. We consider threads (i.e., users) and set , . Based on a one-week trace from our production system, we estimate the aggregate arrival rates at edge routers 1 and 2 to be and , respectively. An HLS sampler (i.e., request) is then sent every s. We assume of the segments are stored in the cache, and the remaining segments are served from the origin server. The video files are s in length, and the segment length is set to s. For each segment, we used JMeter's built-in reports to estimate the download time of each segment and then plugged these times into our model to obtain the SDTP.
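The last step above, turning per-segment download times into stall durations, can be sketched as follows under sequential playback (the function and its names are our own illustration, not JMeter's API or the paper's exact model):

```python
def total_stall(download_done, startup_delay, seg_len):
    """Total stall duration for one video under sequential playback.

    download_done: wall-clock time (s) each segment finished downloading,
    in playback order.
    startup_delay: initial buffering time before segment 0 should play.
    seg_len: playback duration of each segment (s).
    """
    play_start = startup_delay            # scheduled play time of next segment
    stall = 0.0
    for done in download_done:
        if done > play_start:
            stall += done - play_start    # player waits for the segment
            play_start = done             # playback resumes when it arrives
        play_start += seg_len             # next segment's deadline
    return stall
```

If every segment arrives before its playback deadline the stall is zero; each late segment pushes all later deadlines back, which is why stalls accumulate at the tail.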
Service Time Distribution: We first run experiments to measure the actual service time distribution in our cloud environment. Figure 8 depicts the cumulative distribution function (CDF) of the chunk service time for different bandwidths. Using these results, we show that the chunk service time can be well approximated by a shifted-exponential distribution with rates of 24.60/s and 29.75/s for bandwidths of 25 Mbps and 30 Mbps, respectively. These results also verify that the actual service time does not follow an exponential distribution, an observation also made earlier in (YuTON16). Further, the rate parameter of the exponential part is almost proportional to the bandwidth while the shift is nearly constant, which validates the model.
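A shifted exponential can be fit to measured chunk service times with a simple moment/maximum-likelihood estimate: the shift is (approximately) the sample minimum, and the rate is the inverse of the mean excess over that shift. The sketch below is our illustration of that standard procedure, not the paper's fitting code:

```python
import numpy as np

def fit_shifted_exponential(samples):
    """Fit service-time samples to a shifted-exponential distribution.

    Returns (shift, rate): shift is estimated by the sample minimum; the
    rate is 1 / (mean excess over the shift).
    """
    s = np.asarray(samples, dtype=float)
    shift = s.min()
    excess = s.mean() - shift
    rate = 1.0 / excess if excess > 0 else float("inf")
    return shift, rate
```

Repeating this fit per bandwidth setting is what exposes the pattern noted above: the rate scales with bandwidth while the shift stays roughly constant.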
SDTP Comparisons: Figure 9 compares four different policies: the actual SDTP, the analytical SDTP, and the PSP- and PEA-based SDTP algorithms. We see that the analytical SDTP is very close to the actual SDTP measured on our testbed. To the best of our knowledge, this is the first work to jointly consider all key design degrees of freedom, including bandwidth allocation among different parallel streams, cache content placement, request scheduling, and the modeling variables associated with the SDTP bound.
Arrival Rate Comparisons: Figure 10 shows the effect of increasing the system workload, obtained by varying the arrival rates of the video files from to with an increment step of , on the stall duration tail probability. We notice a significant improvement in the QoE metric with the proposed strategy compared with the baselines. Further, the gap between the analytical bound and the actual SDTP is small, which validates the tightness of our proposed SDTP bound.
Mean Stall Duration Comparisons: We further plot the weighted mean stall duration (WMSD) in Figure 11. As expected, the proposed approach achieves the lowest stall durations, and the gap between the analytical and experimental results is small; thus the proposed bound is tight. Also, caching the hottest files does not help much, since caching later segments is unnecessary: they can be downloaded while the earlier segments are being played. Thus, prioritizing earlier segments over later ones for caching is more helpful in reducing stalls than caching complete video files.
Footnotes
 conference: ; June 2018; Irvine, California, USA
 journalyear: 2017