Joint Optimization of QoE and Fairness Through Network Assisted Adaptive Mobile Video Streaming


Abstract

MPEG has recently proposed Server and Network Assisted Dynamic Adaptive Streaming over HTTP (SAND-DASH) for video streaming over the Internet. In contrast to purely client-based video streaming, in which each client adjusts its bitrate independently, SAND-DASH enables a group of simultaneous clients to select their bitrates in a coordinated fashion in order to improve resource utilization and quality of experience. In this paper, we study the performance of such an adaptation strategy, compared to the traditional approach, with a large number of clients having mobile Internet access. We propose a multi-servers multi-coordinators (MSs-MCs) framework to model groups of remote clients accessing video content replicated to spatially distributed edge servers. We then formulate an optimization problem to jointly maximize the QoE of individual clients and proportional fairness in allocating the limited resources of base stations, while balancing the utilized resources among multiple servers. We present an efficient heuristic-based solution to the problem and perform simulations in order to explore the parameter space of the scheme and to compare its performance to purely client-based DASH.

1 Introduction

According to traffic statistics, the majority of Internet traffic is video from services such as Netflix and YouTube [23], [24]. Network conditions, such as high fluctuation in the available bandwidth when multiple clients simultaneously compete for a shared bottleneck link, can significantly affect the users’ quality of experience (QoE) in mobile video streaming applications [15], [9]. Mobile and wireless access further complicate the situation. In order to avoid playback interruptions and rebuffering events due to changes in available bandwidth during a streaming session, most media players nowadays use adaptive streaming, such as the non-standard HTTP Live Streaming (HLS) or protocols based on the Dynamic Adaptive Streaming over HTTP (DASH) standard. In adaptive streaming, the whole video is divided into chunks and encoded at different quality levels on the server [5]. The client adapts dynamically to bandwidth fluctuations by downloading chunks at an appropriate bitrate, thereby improving the QoE of end users [11].

Numerous research efforts, both theoretical and experimental, have been carried out in recent years on designing efficient adaptation mechanisms for mobile video streaming [18]. Almost all of them focus on improving the client-side adaptation strategy. However, recent innovations in mobile network architectures and cloud computing, namely Mobile Edge Computing (MEC) [21] and Fog Computing [25], provide an opportunity to further optimize content delivery and adaptation through in-network and edge computing mechanisms. The recently proposed Server and Network Assisted DASH (SAND-DASH) standard specifies mechanisms and message types so that clients, network, and servers can exchange information and collaborate in video quality adaptation in order to improve QoE and fairness [16]. However, the standard deliberately specifies no adaptation logic, leaving it open for innovation, and some preliminary work has already emerged studying different network-assisted adaptation mechanisms [13].

Our overarching goal is to understand how much, in which way, and at what cost (especially computational complexity) the QoE of and fairness between mobile video streaming clients can be improved through network- and server-assisted adaptation mechanisms compared to purely client-based mechanisms. To this end, we present in this paper a general framework for studying network-assisted quality adaptation of a large number of mobile DASH clients streaming replicated video content from mobile edge servers. We first formulate an optimization problem for jointly maximizing the QoE of individual clients and proportionally fair (PF) resource allocation at the base stations, while balancing the utilized resources among multiple servers. We then design an efficient solution to this problem, compare its performance to purely client-based adaptation heuristics, namely rate-based and buffer-based adaptation, and examine the parameter space of the proposed solution using simulations.

The rest of the paper is organized as follows: we discuss related work in Section 2 and describe the proposed framework and its components in Section 3. The optimization problem is laid out in Section 4 and its solution is detailed in Section 5. We present a simulation-based evaluation in Section 6 before concluding the paper and pointing out avenues for future work.

2 Related Work

Several approaches for adaptive video streaming have been proposed during the past years [7], [8], [11], [14], [18], [19]. Seufert et al. [5] provide a comprehensive study of video quality adaptation and the major QoE-related factors that both client and network need to consider. While the first generation of adaptive streaming protocols, such as HLS, select the video bitrate merely based on the measured download rate of previous chunks, Huang et al. [7] proposed an adaptation strategy that selects the video bitrate purely based on the current playback buffer occupancy level. Spiteri et al. [12] designed an online bitrate adaptation algorithm, BOLA, that is also based on the buffer occupancy level only, and proved its performance guarantee. Since purely buffer-based adaptation mechanisms may be sub-optimal, especially under high throughput fluctuation, some techniques combine the buffer occupancy level with throughput prediction [11].

Several recent papers have investigated quality adaptation considering multiple clients associated with either a single or multiple video servers [1]. Petrangeli et al. [1] examined fair bandwidth utilization when multiple clients compete on a shared bottleneck link, but their proposed objective function and adaptation heuristic fail to capture the trade-off between client-perceived QoE and fair network resource utilization. The objective of the bitrate selection by Bethanabhotla et al. [2] is to maximize video quality subject to the stability of the servers’ queues, without considering the instantaneous and dynamic nature of bandwidth fluctuation, which according to [15] has a direct impact on the QoE. Bouten et al. [3] propose in-network optimization of clients’ bitrates according to the available bandwidth on multiple bottleneck links. However, in this work, only one bitrate is allocated to each client and each client is associated with a specific server that is known in advance. Furthermore, the proposed objective function neglects the initial buffering delay and the number of stalling events, which have a significant impact on the QoE of individual clients [5].

Concerning fair resource allocation in cellular networks, schedulers usually aim for proportional fairness (PF) when allocating radio resources to multiple competing clients in order to balance cell throughput with fairness. Chen et al. [4] propose a scheduling framework called AVIS that strives for proportional fairness while controlling the amount of bitrate switching when scheduling multiple simultaneous clients. However, the competition for the available resources is considered only on the shared bottleneck of a single base station, and the work mostly focuses on fair resource scheduling without considering QoE-related parameters such as the initial buffering delay or buffer stalling.

The recently proposed Server and Network Assisted DASH (SAND-DASH) standard specifies means for clients, network elements, and servers to exchange information in order to optimize video delivery and quality adaptation. It does not specify any adaptation logic, but some work already exists that tries to understand the effectiveness of this approach [10]. Our work contributes to these efforts and, to the best of our knowledge, is the first to quantify the benefits of network-assisted quality adaptation in mobile video streaming with edge caching.

3 Network-Assisted Mobile Video Streaming

3.1 Multi-Servers Multi-Coordinators (MSs-MCs) Framework

Figure 1 illustrates the proposed multi-servers multi-coordinators (MSs-MCs) framework for network-assisted adaptive video streaming. The DASH servers at the top store the replicated videos. We assume that a discrete set of videos is divided into multiple chunks of fixed size (in seconds) and replicated on mobile edge servers, each of which is associated with a base station. The base station allocates the available radio resources in a proportionally fair manner to the clients [4]. Each server stores video chunks at multiple bitrate resolutions such that

Figure 1: Multi-servers multi-coordinators (MSs-MCs) framework for dynamic adaptive video streaming at large scale.

denotes the discrete set of bitrate resolutions for every video chunk offered by a given server. We partition the potentially large mobile network and its clients into subnets and groups. Considering groups (subnets) of DASH clients distributed over a potentially large geographical area, clients join the network dynamically, and we track the number of currently active clients in each subnet. From a practical point of view, clients in close vicinity are managed as one group, and their information (arrival/departure times, physical location, buffer occupancy) is exchanged with the central scheduler located in the cloud through the local coordination proxy at the edge of the network. Following the discrete time-slotted DASH scheduling of [3], at each time slot the data transmission between the base station associated with a video server and the different clients goes through a shared bottleneck link with limited capacity. Please note that this capacity refers to the available resource blocks, i.e., the number of subcarriers in the frequency domain, at that time slot on that base station. We also note that the clients are assumed to be stationary throughout this work; we leave clients’ mobility and the impact of handovers for future work.

Let the arrival and departure times of a client in a given subnet denote, respectively, the time the client sends its request for the first chunk and the time it either abandons the streaming session or finishes downloading the last chunk. In the ideal case when no stalling happens during the session and the network delay is negligible, the difference between them is obviously equal to the watching duration of the requested video, and the number of streamed chunks follows directly from it. The media player of each client maintains a playback buffer for which the client determines a fixed target filling level (in Kb), and the buffer level denotes the amount of data in the client’s buffer at each time slot. The coordination proxy performs the client-to-server mapping based on client information (buffer occupancy, radio link conditions), the QoE metrics considered, proportionally fair bitrate allocation at the base stations, and load balancing between servers. For the client-to-server mapping, we define a binary variable that equals one if the client is allocated to a given server for downloading the current chunk at a given time slot, and zero otherwise. Furthermore, an integer decision variable denotes the bitrate allocated to the chunk that the client downloads from that server.

Before we formulate the optimization problem in Section 4, we discuss next the different optimization criteria related to QoE, fairness, and server load balancing.

3.2 Quality of Experience

A recent comprehensive study on QoE in dynamic adaptive video streaming [5] shows that four major factors can significantly affect the quality of experience perceived by DASH clients: video quality, startup delay, stalling ratio, and quality switching.

Video quality

is dependent on the video bitrate, but the relationship is not necessarily linear [14]. There is a trade-off between video quality and stalling: streaming high-quality video increases the probability of experiencing a stall event, because the download throughput has a higher chance of dropping below the video bitrate when the available bandwidth on the bottleneck link is low. Streaming at low quality reduces the possibility of stalling but also significantly degrades the client’s quality of experience. Moreover, bitrate does not directly express video quality, so we need a function that maps a bitrate to a quality value. In Section 6, we use the Structural Similarity (SSIM) index adopted from [14] as this mapping function.
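To make the mapping concrete, the following minimal sketch (in Python) shows one possible concave bitrate-to-quality function; the logarithmic shape and the constants are illustrative assumptions, not the SSIM-based curve of [14]:

```python
import math

def quality(bitrate_kbps, max_bitrate_kbps=4500.0):
    """Hypothetical concave bitrate-to-quality mapping q(r).

    Models diminishing returns: quality grows logarithmically with bitrate
    and saturates at 1.0 for the highest available bitrate. The shape and
    the reference bitrate are assumptions for illustration only.
    """
    return math.log(1.0 + bitrate_kbps) / math.log(1.0 + max_bitrate_kbps)

# Example: the quality gap between adjacent bitrates shrinks at the high end.
for r in (350, 700, 1500, 2500, 4500):
    print(r, round(quality(r), 3))
```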

Startup delay

refers to the time needed to reach the client’s target buffer filling level upon its arrival. It corresponds to the client’s waiting time from click to the start of playback. According to [6], the startup delay has a clearly smaller impact on viewer dissatisfaction than stall events.

Stalling ratio

is the amount of time during which video playback is stalled divided by the total duration of the session. Stall events occur when the playback buffer runs empty because the download throughput is too low compared to the video bitrate. Avoiding stall events is critically important because of their prominent role in determining QoE. Therefore, we design the optimization problem with constraints such that stall events are avoided whenever possible, i.e., whenever the total amount of resources suffices to support the lowest available video bitrates for all clients.
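With notation assumed here only for illustration (the paper’s own symbols were lost in extraction), the stalling ratio of a client can be written as

```latex
\mathrm{StallRatio}_u \;=\; \frac{\sum_{j} d^{\mathrm{stall}}_{u,j}}{t^{\mathrm{dep}}_{u} - t^{\mathrm{arr}}_{u}}
```

where \(d^{\mathrm{stall}}_{u,j}\) is the duration of the \(j\)-th stall event experienced by client \(u\), and \(t^{\mathrm{arr}}_{u}\), \(t^{\mathrm{dep}}_{u}\) are its arrival and departure times.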

Figure 2: Theoretical received data throughput at client’s buffer.

Frequent quality switching is also considered harmful for QoE [5]. To express it as a QoE metric, we use the difference between the quality levels of consecutive chunks downloaded by the client.
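Under the same assumed notation, the accumulated switching metric of a client can be expressed as

```latex
\mathrm{SW}_u \;=\; \sum_{k=2}^{K_u} \left|\, q(r_{u,k}) - q(r_{u,k-1}) \,\right|
```

where \(r_{u,k}\) is the bitrate of the \(k\)-th chunk downloaded by client \(u\), \(q(\cdot)\) is the bitrate-to-quality mapping discussed above, and \(K_u\) is the number of chunks in the session.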

3.3 Proportional Fairness

In cellular networks such as LTE, the base stations usually schedule radio resources to multiple competing clients at each time slot according to a proportional fairness (PF) policy [4]. Specifically, the amount of resources allocated to a client is proportional to its link quality (data rate). Note that, unlike [20], which optimizes bandwidth allocation on the base station side, we consider QoE-aware optimal bitrate allocation while taking into account the PF resource allocation policy of the base stations.
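A minimal sketch of this policy, under the simplifying reading that each client’s resource share in a slot is proportional to its instantaneous link rate (real LTE PF schedulers additionally normalize by each client’s historical average rate), could look as follows:

```python
def pf_share(link_rates_kbps, total_resource_blocks):
    """Split a base station's resource blocks among clients in proportion
    to their instantaneous link rates (a simplified reading of the PF
    policy described above; not a full LTE scheduler model).
    """
    total_rate = sum(link_rates_kbps.values())
    return {u: total_resource_blocks * r / total_rate
            for u, r in link_rates_kbps.items()}

# Example: three clients with different link qualities sharing 200 blocks.
print(pf_share({"u1": 8000, "u2": 4000, "u3": 2000}, 200))
```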

3.4 Server Load Balancing

We also consider load balancing between the servers as an optimization criterion in order to avoid situations where a particular server’s computational capacity becomes a bottleneck. As the metric of resource utilization for a server, we consider at each time instant the percentage of resources used out of the total resources available on that server [4].

4 Joint Optimization Problem

In formulating the optimization problem, we consider the four QoE metrics, fairness, and load balancing discussed in the previous section. Please note that the optimization framework is independent of the way the metrics are computed. In order to balance the impact of these main factors, we define three weighting parameters that control the relative importance of the resulting QoE metrics, proportional fairness in bitrate allocation, and server load balancing. We further define three adjustable weighting parameters to control the individual QoE metrics, namely video quality, initial playback delay, and accumulated quality switching. In addition, we include a constraint in the optimization problem in order to avoid stalls whenever possible.

The problem formulation follows a discrete time-slotted scheduling operation with a fixed duration for each time slot. We define the problem as a utility maximization problem over all clients using an integer non-linear programming (INLP) formulation. The client-to-server allocation and the bitrate selection are the binary and integer decision variables, respectively, while the values of the other parameters are known in advance. We next explain how we obtain each individual parameter and constraint.

Subject to:

In the objective, we obtain the average video quality for each client by averaging the quality values of the chunks it downloads.

We denote the startup delay as the time needed to reach the target buffer filling level (in Kb) of the client; this yields a constraint involving the effective data throughput (in Kbps) received by the client from its server at each time slot. For the theoretical throughput over the wireless link, we employ a simple path attenuation model parameterized by the maximum transmission power of the base station, the physical distance between the client and the base station at each time slot, and the path loss exponent, which normally lies between 2 and 5. The effective throughput share of a client is then computed by a relation whose denominator sums over all clients that have been assigned to the same base station at that time slot.
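The sketch below illustrates these two quantities; the power, distance, and exponent values, as well as the proportional split of the slot capacity, are assumptions for illustration (our simulator is implemented in MATLAB, and this is not its code):

```python
def theoretical_rate(p_max_watts, distance_m, alpha):
    """Simple path attenuation model: the rate proxy decays as
    P_max * d^(-alpha). A sketch of the model described in the text;
    units and scaling are illustrative, not a calibrated link budget.
    """
    return p_max_watts * distance_m ** (-alpha)

def effective_throughput(client, assigned_clients, capacity_kbps,
                         p_max_watts=3.6, alpha=3.0):
    """Effective share of one client on a base station at a slot: the slot
    capacity split in proportion to each assigned client's theoretical
    rate (an assumed form of the relation referred to in the text).
    """
    rates = {u: theoretical_rate(p_max_watts, d, alpha)
             for u, d in assigned_clients.items()}
    return capacity_kbps * rates[client] / sum(rates.values())

# Example: two clients at 50 m and 200 m sharing a 20 Mbps bottleneck.
print(effective_throughput("near", {"near": 50, "far": 200}, 20000))
```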

The accumulated quality switching for a client during the streaming session is obtained by summing the quality differences between consecutive downloaded chunks.

As for avoiding stall events, we assume that the player starts to play the video after the startup phase. Given the received throughput, the buffer level (in Kb) of a client at each time slot is then given by a recursive update.

In this update, the bitrate of the currently played-out chunk drains the buffer; accounting for the arrival time of the client and the initial playback delay, the index of the chunk played out at a given time slot can be computed directly. We thus obtain a constraint on buffer occupancy, which simply states that for each client the buffer level must be non-negative and kept at or below the target filling level; this ensures that no stall events happen provided that sufficient resources exist to sustain the smallest available video bitrates for all clients.
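Written out with assumed symbols (the original notation was not preserved), the buffer dynamics and the corresponding constraint take the form

```latex
B_u(t+1) \;=\; B_u(t) \;+\; \tau \, R_u(t) \;-\; \tau \, r_{u,k(t)},
\qquad 0 \;\le\; B_u(t) \;\le\; B^{\mathrm{target}}_u
```

where \(\tau\) is the slot duration, \(R_u(t)\) the effective download throughput of client \(u\) at slot \(t\), \(r_{u,k(t)}\) the bitrate of the chunk \(k(t)\) being played out at slot \(t\) (zero during the startup phase), and \(B^{\mathrm{target}}_u\) the target filling level.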

Proportional fair (PF) bitrate allocation for client during a streaming session is defined in .

The maximization of is subject to the available resources at the base stations at each time slot, i.e. constraint .

We compute the standard deviation of the resource utilization on the servers from the average utilization as a criterion for measuring server load balance. We denote the ratio of the resources occupied by a client to the total resources available on the bottleneck link of a base station as follows:

Note that the bitrate allocated to the client is divided by its actual physical (theoretical) transmission rate in order to convert it into the amount of resources allocated by the base station to that client. We further define the average utilization efficiency of a client over the bottleneck links. For each client, the objective of load balancing is to choose the server and the allocated bitrate, depending on the client’s physical location and channel quality, in such a way that the standard deviation of the loads incurred by the client is minimized.
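With assumed symbols, one way to write the per-server utilization ratio and the load-balancing criterion described above is

```latex
\rho_{u,m}(t) \;=\; \frac{r_{u,m}(t) \,/\, \tilde{R}_{u,m}(t)}{C_m(t)},
\qquad
\sigma_u \;=\; \sqrt{\frac{1}{M} \sum_{m=1}^{M} \left( \bar{\rho}_{u,m} - \bar{\rho}_{u} \right)^{2}}
```

where \(r_{u,m}(t)\) is the bitrate allocated to client \(u\) from server \(m\), \(\tilde{R}_{u,m}(t)\) its theoretical transmission rate, \(C_m(t)\) the total resources on the bottleneck link of base station \(m\), \(\bar{\rho}_{u,m}\) the time-averaged utilization that client \(u\) incurs on server \(m\), \(M\) the number of servers, and \(\bar{\rho}_{u}\) the mean of these averages across servers; the exact form in the paper may differ.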

As for the remaining constraints, one states that at any time instant the DASH client is allocated to only one server for downloading its current video chunk, and another enforces that the client receives one complete chunk of video upon its access to the allocated server. Finally, the last constraint specifies that the discrete bitrate allocated for a requested chunk residing on a given server belongs to the set of bitrate resolutions offered by that server, and that the allocation of the client to a server at each time slot is binary.

Subroutine 1:

Startup Phase

5 Centralized Scheduling Algorithm

The joint optimization problem formulated above belongs to the class of NP-hard problems because it contains integer decision variables. A brute-force strategy can be applied in the offline case when all clients’ information is available in advance; however, its computational complexity, which depends on the number of active clients per time slot, the number of servers, the number of time slots, the chunk size, and the number of available video bitrates, grows dramatically with the number of servers or clients, making it impractical for large-scale deployments. Therefore, we devise an efficient online and centralized greedy algorithm, which we name GreedyMSMC; its pseudocode is organized into a startup phase (Subroutine 1) and a steady phase (Subroutine 2). The high computational complexity of the proposed centralized algorithm, especially in large-scale deployments, can be reduced with a decentralized implementation; however, although a decentralized algorithm improves the computation time, it slightly sacrifices the performance gain.

With dynamic arrival and departure of clients, for each client that is active in the current slot, the algorithm first sorts the set of available base stations by their proximity to the client’s location. It then checks the feasibility of allocating the client to each base station and greedily selects the target base station and a sustainable bitrate from its associated server such that the constraints are satisfied and the objective function achieves a locally maximal utility. We note that when selecting the bitrate for the current chunk, the algorithm takes into account the client’s instantaneous buffer occupancy in order to avoid stalling. In the startup phase, Subroutine 1 is run in order to quickly fill up the buffer; afterwards, the algorithm runs the steady phase shown in Subroutine 2, in which both average quality and bitrate switching are accounted for when selecting bitrates. For a given number of active clients and servers, the worst-case complexity of the greedy algorithm is significantly lower than that of the exhaustive search.
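To illustrate the flow of one scheduling slot, the sketch below (Python, heavily simplified) captures the greedy choice of base station and bitrate per client; the utility and the stall-avoidance check are placeholders standing in for the paper’s objective terms and constraints, and this is not the MATLAB implementation used in Section 6:

```python
import math

def greedy_msmc_slot(clients, base_stations, bitrate_sets, utility, sustainable):
    """One scheduling slot of a GreedyMSMC-style allocation (sketch).

    For every active client, candidate base stations are scanned in order of
    increasing distance; for each station, the highest bitrate that fits into
    the station's remaining resource budget and passes the stall-avoidance
    check is scored, and the best (station, bitrate) pair is kept.
    """
    allocation = {}
    remaining = {m: bs["capacity"] for m, bs in base_stations.items()}
    for u, c in clients.items():
        best = None
        # Base stations sorted by proximity to the client's location.
        for m in sorted(base_stations, key=lambda s: c["dist"][s]):
            for r in sorted(bitrate_sets[m], reverse=True):
                # Resource cost: bitrate divided by the theoretical link rate.
                cost = r / base_stations[m]["rate"](c["dist"][m])
                if cost > remaining[m] or not sustainable(c, r):
                    continue
                score = utility(c, m, r)
                if best is None or score > best[0]:
                    best = (score, m, r, cost)
                break  # highest feasible bitrate at this station was found
        if best is not None:
            _, m, r, cost = best
            allocation[u] = (m, r)
            remaining[m] -= cost
    return allocation

if __name__ == "__main__":
    stations = {"bs1": {"capacity": 100.0, "rate": lambda d: 8000.0 / d},
                "bs2": {"capacity": 100.0, "rate": lambda d: 8000.0 / d}}
    ladder = [350, 700, 1500, 2500]
    rates = {"bs1": ladder, "bs2": ladder}
    clients = {"u1": {"dist": {"bs1": 1.0, "bs2": 3.0}, "buffer": 0.2},
               "u2": {"dist": {"bs1": 2.0, "bs2": 1.0}, "buffer": 0.8}}
    # Toy utility: prefer high bitrates, discounted when the buffer is low.
    util = lambda c, m, r: math.log(r) * (0.5 + c["buffer"])
    ok = lambda c, r: c["buffer"] > 0.5 or r <= 1500  # crude stall guard
    print(greedy_msmc_slot(clients, stations, rates, util, ok))
```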

Subroutine 2:

Steady State

6 Evaluation

In this section, we evaluate the performance of GreedyMSMC through simulations. We compare the results of GreedyMSMC to those obtained with two client-based adaptation heuristics, namely buffer-based adaptation (BBA) [7] and rate-based adaptation (RBA) [8]. The simulator and all the algorithms are implemented in MATLAB.

6.1 Simulation Setup

We consider the scheduling of DASH clients during one hour with a fixed time slot duration. For the network setup, we assume a rectangular area in which the base stations are placed at equal distances and the clients are randomly distributed around them. Clients’ arrival times are uniformly distributed within the initial part of the experiment, and they depart after an active session whose duration is drawn from a uniform distribution. All clients are stationary. The video is divided into chunks, and each chunk is available in six different bitrates, replicated identically on each edge server. In the simulations, we consider 12 servers unless otherwise stated, and the number of clients varies from 100 to 500. The maximum transmission power of each base station associated with a video server is fixed at 3.6, and a fixed path loss exponent is used in the path attenuation model. Given the time slot duration and the total per-slot bandwidth, the total number of LTE resource blocks per slot at each base station follows a uniform distribution [22]. The tuning parameters of the objective function are fixed in the simulations. These values were chosen after simulations studying the impact of varying the tuning parameters on the performance gain, although we do not report those results here due to space limitations. We compare our network-assisted method to the following two client-based adaptation strategies, both of which assign each client to the closest base station for the client’s whole streaming session.

Buffer Based Adaptation (BBA)

[7] means that each client independently selects the bitrate for the next chunk to download based on its instantaneous buffer occupancy level, i.e., the amount of video data in the playback buffer of the client

Figure 3: Pattern of utilized resources
Figure 4: Achievable throughput and allocated bitrate

at each time slot. The heuristic allocates the highest bitrate to the first chunk and then looks at the client’s current buffer level to decide on the bitrate of the next video chunk to be downloaded. The heuristic uses five thresholds, defined as fractions of the maximum buffer filling level, and depending on the buffer level it chooses the closest available bitrate from the server.
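A minimal sketch of this heuristic is shown below; the five threshold fractions are placeholders, since the exact values used in the paper were not preserved in the text above:

```python
def bba_select(buffer_level, target_level, bitrates,
               thresholds=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Buffer-based adaptation (BBA) sketch following the description above:
    the buffer occupancy relative to the target level is mapped through a set
    of thresholds to one of the available bitrates. Threshold values are
    illustrative placeholders.
    """
    rates = sorted(bitrates)
    fill = buffer_level / target_level
    for i, th in enumerate(thresholds):
        if fill <= th:
            # Map the i-th threshold band to a proportional rung of the
            # bitrate ladder (the closest available rate, as in the text).
            idx = round(i * (len(rates) - 1) / (len(thresholds) - 1))
            return rates[idx]
    return rates[-1]

# Example: six-rate ladder, buffer at 30% of target -> a low-to-mid bitrate.
print(bba_select(3000, 10000, [350, 700, 1500, 2500, 3500, 4500]))
```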

Rate Based Adaptation (RBA)

[8] works so that each client chooses the highest sustainable bitrate among the available ones based on the throughput observed when downloading previous chunks. In particular, RBA computes a moving average of the download rates of the last few consecutive chunks to determine the bitrate for the next video chunk to be downloaded.
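A minimal sketch of RBA, with the window length chosen arbitrarily for illustration:

```python
from collections import deque

def rba_select(recent_rates_kbps, bitrates, window=5):
    """Rate-based adaptation (RBA) sketch: the next chunk's bitrate is the
    highest available rate not exceeding the moving average of the download
    rates of the last `window` chunks. The window length is an assumption.
    """
    recent = list(recent_rates_kbps)[-window:]
    estimate = sum(recent) / len(recent)
    feasible = [r for r in sorted(bitrates) if r <= estimate]
    return feasible[-1] if feasible else min(bitrates)

# Example: throughput samples from the last chunks drive the next choice.
history = deque([2800, 3100, 2500, 2700, 2900], maxlen=5)
print(rba_select(history, [350, 700, 1500, 2500, 3500, 4500]))
```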

6.2 Resource Utilization and Bitrate Allocation

As the first result, Figure 3 shows the pattern of utilized resource blocks of one randomly chosen base station during 2500 time slots under the GreedyMSMC algorithm. The number of resource blocks the base station uses at each time slot varies depending on the instantaneous number of allocated clients. As we can see from the pattern, the utilization level can reach up to 200, which is the maximum number of resource blocks available on the base station at each time slot.

Figure 4 shows the pattern of achievable throughput and the allocated bitrates under the GreedyMSMC algorithm for one randomly chosen client during 1300 time slots. As we can see, the client’s throughput is lower during the intermediate time slots, where most of the clients are active within their streaming sessions. We can also see that the proposed algorithm performs well in determining the best sustainable bitrate from the discrete set by observing the effective obtainable throughput at each time slot.

6.3 Comparison to Client-based Adaptation Approaches

In this section, we compare the GreedyMSMC algorithm with the two client-based DASH heuristics, BBA and RBA, in terms of the average achievable throughput of clients and the deviation of resource utilization

Figure 5: Average achievable throughput
Figure 6: Resource utilization deviation

among the base stations. Note that in the implementation of the RBA heuristic, a fixed number of previously observed chunks is used when estimating the achievable throughput for each current chunk.

Figure 5 compares the GreedyMSMC algorithm and the two client-based adaptation heuristics in terms of the average achievable throughput per time slot for different numbers of DASH clients. As we can see, the clients achieve significantly higher effective throughput with the proposed algorithm than with the purely client-based heuristics. The reason is that allocating each client merely to the closest base station during its whole active session lowers the average throughput, especially under highly dynamic arrival and departure of clients. In contrast, the GreedyMSMC algorithm takes into account the current load of the base stations and seeks the most suitable base station for the client, from which a higher throughput can be obtained. We also see that the average throughput drops as the number of clients increases, due to increased competition for the shared resources of the base stations.

Figure 6 shows the comparison in terms of the deviation of the utilized resource blocks among the base stations. To measure utilization efficiency, we employ the root mean square deviation (RMSD) of the utilized resources among the base stations over the whole streaming session of the clients. As we can see from the result, the proposed algorithm yields a lower utilization deviation, since GreedyMSMC allocates clients to appropriate base stations in order to minimize it.

With the same dataset as in the previous part, we have also compared the GreedyMSMC algorithm with BBA and RBA in terms of the QoE metrics. Figures 7-10 show that GreedyMSMC outperforms both heuristics in terms of average video bitrate and initial buffer delay per client, as well as the magnitude and frequency

Figure 7: Average video bitrates
Figure 8: Initial buffer delay
Figure 9: Bitrate switching frequency
Figure 10: Bitrate switching magnitude

of bitrate switching per chunk duration. As observed from Figure 7, the improvement in average bitrate achieved by the proposed algorithm comes from clients obtaining a higher share of throughput, since the instantaneous load of the base stations is taken into account. Since in the startup phase GreedyMSMC chooses the bitrates that minimize the gap between the instantaneous and the maximum buffer level, about a 50% reduction in initial buffer delay per client is achieved compared to the client-based heuristics (Figure 8).

Figure 9 and Figure 10 show, respectively, the frequency and the magnitude (in Kbps) of bitrate switching per chunk duration over the whole active sessions of all clients. We have excluded from the charts the switching values for BBA, which were around 10 times larger than those of RBA in both frequency and magnitude. As an example of interpreting the values on the y-axis of Figure 9: with 100 clients and RBA, switching happens for around 1.6% of all chunks, and the magnitude of each switch can be read from Figure 10. GreedyMSMC and RBA are both effective in significantly reducing the frequency and magnitude of bitrate switching compared to BBA. The reason is that the buffer occupancy level can fluctuate strongly, especially under highly dynamic arrival and departure of clients, which results in a larger number of bitrate switches per client. We also observe that although RBA exhibits a lower switching frequency for larger numbers of clients, it has a bigger switching magnitude per chunk, as we can see from Figure 10. We should also acknowledge that the authors of [7] have proposed a variation of BBA that can reduce bitrate switching to some extent by estimating the throughput variation of future chunks.

In Figure 11, we compare the three adaptation approaches in terms of fairness in the average bitrate that each client perceives during its active session. We employ Jain’s fairness index [3], computed over the clients’ average bitrates during their streaming sessions. As an example, we consider a scenario in which 30% of the clients are located far from the base stations, while the remaining 70% are closer to the base stations and are therefore prone to receive higher average bitrates. As we can see from the result, GreedyMSMC yields a better fairness index for different numbers of clients. This is because, in contrast to the client-based heuristics, the proposed algorithm strives to improve the average bitrate of far-away clients by exploring the best possible base stations to which they can be allocated.
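For reference, Jain’s index over the clients’ average bitrates can be computed as follows (a straightforward sketch of the standard definition):

```python
def jains_index(avg_bitrates):
    """Jain's fairness index: J = (sum x_u)^2 / (N * sum x_u^2).
    J = 1 means all clients receive the same average bitrate; J approaches
    1/N as a single client dominates the allocation.
    """
    n = len(avg_bitrates)
    total = sum(avg_bitrates)
    return total * total / (n * sum(x * x for x in avg_bitrates))

# Example: a far-away client with a low average bitrate pulls the index down.
print(jains_index([2500, 2400, 800]))
```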

Figure 11: Comparison in term of Jain’s fairness index.

We also see that the fairness value drops as the number of clients increases, due to the increasing degree of competition among them for the shared bandwidth.

Finally, it is worth noting that although the fairness values of the three approaches are closer for large numbers of clients, our algorithm still yields a higher average bitrate (Figure 7), which also confirms the trade-off between fairness and achievable bitrate.

7 Conclusion and Future Work

In this work, we studied the use of network-assisted adaptive video streaming to mobile clients from mobile edge servers. We proposed an optimization model to jointly maximize the QoE of individual clients, enforce proportionally fair video bitrate selection between the clients, and balance load among the video servers. We then designed an efficient centralized scheduling algorithm to tackle the large-scale optimization problem. Our simulation-based evaluation results suggest that network assistance indeed helps to achieve better QoE and fairness.

As future work, we intend to study the impact of client mobility and of varying the video chunk and buffer sizes on the performance of the proposed framework. We also plan to design a decentralized solution to the optimization problem and compare its performance to the centralized one.

Acknowledgment

This work has been financially supported by the Academy of Finland (grant numbers 278207 and 297892), Tekes - the Finnish Funding Agency for Innovation, and the Nokia Center for Advanced Research.

References

  1. S. Petrangeli, J. Famaey, M. Claeys, S. Latre, and F. De Turck, “QoE driven rate adaptation heuristic for fair adaptive video streaming,” ACM Trans. Multimedia Comput. Commun. Appl., vol. 12, no. 2, pp. 1-15, Oct. 2015.
  2. D. Bethanabhotla, G. Caire, and M. J. Neely, “Adaptive video streaming for wireless networks with multiple users and helpers,” IEEE Trans. Commun., vol. 63, no. 1, pp. 268-285, Jan. 2015.
  3. N. Bouten, S. Latre, J. Famaey, W. V. Leekwijck, and F. De Turck, “In-network quality optimization for adaptive video streaming services,” IEEE Trans. Multimedia, vol. 16, no. 8, pp. 2281-2293, Dec. 2014.
  4. J. Chen, R. Mahindra, M. A. Khojastepour, S. Rangarajan, and M. Chiang, “A scheduling framework for adaptive video delivery over cellular networks,” in Proc. ACM MobiCom’13, pp. 389-400, Sep. 2013.
  5. M. Seufert, S. Egger, M. Slanina, T. Zinner, T. Hossfeld, and P. Tran-Gia, “A survey on quality of experience of HTTP adaptive streaming,” IEEE Commun. Surveys & Tutorials, vol. 17, no. 1, First Quarter 2015.
  6. T. Hossfeld, S. Egger, R. Schatz, M. Fiedler, K. Masuch, and C. Lorentzen, “Initial delays vs. interruptions: Between the devil and the deep blue sea,” in Proc. IEEE International Workshop on Quality of Multimedia Experience (QoMEX), pp. 1-6, Aug. 2012.
  7. T-Y. Huang, R. Johari, N. McKeown, M. Trunnell, and M. Watson, “A buffer-based approach to rate adaptation: Evidence from a large video streaming service,” in Proc. 2014 ACM Conference on SIGCOMM (SIGCOMM’14), pp. 187-198, Aug. 2014.
  8. T. Mangla, N. Theera-Ampornpunt, M. Ammar, E. Zegura, and S. Bagchi, “Video through a crystal ball: Effect of bandwidth prediction quality on adaptive streaming in mobile environments,” in Proc. 8th ACM International Workshop on Mobile Video, pp. 1-6, May 2016.
  9. H. Riiser, T. Endestad, P. Vigmostad, C. Griwodz, and P. Halvorsen, “Video streaming using a location-based bandwidth-lookup service for bitrate planning,” ACM Trans. Multimedia Comput. Commun. Appl., vol. 8, no. 3, Article 24, pp. 1-19, Jul. 2012.
  10. Z. Li, S. Zhao, D. Medhi, and I. Bouazizi, “Wireless video traffic bottleneck coordination with a DASH SAND framework,” in Proc. IEEE Visual Communications and Image Processing, pp. 1-4, Nov. 2016.
  11. Y. Sun, X. Yin, J. Jiang, V. Sekar, F. Lin, N. Wang, T. Liu, and B. Sinopoli, “CS2P: Improving Video Bitrate Selection and Adaptation with Data-Driven Throughput Prediction,” in Proc. 2016 ACM Conference on SIGCOMM (SIGCOMM’16), pp. 272-285, Aug. 2016.
  12. K. Spiteri, R. Urgaonkar, and R. K. Sitaraman, “BOLA: Near-Optimal Adaptation for Online Videos,” in Proc. 35th Annual IEEE International Conference on Computer Communications (INFOCOM), pp. 1-9, Apr. 2016.
  13. G. Cofano, L. D. Cicco, T. Zinner, A. Nguyen-Ngoc, P. Tran-Gia, and S. Mascolo, “Design and experimental evaluation of network-assisted strategies for HTTP adaptive streaming,” in Proc. 7th ACM International Conference on Multimedia Systems (MMSys’16), pp. 1-12, May 2016.
  14. A. Bentaleb, A. C. Begen, and R. Zimmermann, “SDNDASH: Improving QoE of HTTP adaptive streaming using software defined networking,” in Proc. 2016 ACM Conference on Multimedia (MM’16), pp. 1296-1305, Oct. 2016.
  15. J. Yao, S. Kanhere, I. Hossain, and M. Hassan, “Empirical evaluation of HTTP adaptive streaming under vehicular mobility,” in Proc. International Federation for Information Processing, Springer, pp. 92-105, 2011.
  16. E. Thomas, M. O. v. Deventer, T. Stockhammer, Ali. C. Begen, M-L Champel, and O. Oyman, “Applications and deployments of server and network assisted DASH (SAND),” International Broadcasting Convention (IBC) Conference, pp. 1-8, 2016.
  17. R. Margolies, A. Sridharan, V. Aggarwal, R. Jana, N. K. Shankaranarayanan, V. A. Vaishampayan, and G. Zussman, “Exploiting mobility in proportional fair cellular scheduling: Measurements and algorithms,” IEEE/ACM Trans. Netw., vol. 24, no. 1, Feb. 2016.
  18. J. Jiang, V. Sekar, and H. Zhang, “Improving fairness, efficiency and stability in HTTP-based adaptive video streaming with FESTIVE”, in Proc. 8th ACM International on Emerging Networking Experiments and Technologies (CoNEXT’12), pp. 97-108, Dec. 2012.
  19. C. Wang, A. Rizk, and M. Zink, “SQUAD: A spectrum-based quality adaptation for dynamic adaptive streaming over HTTP,” in Proc. 7th ACM International Conference on Multimedia Systems (MMSys’16), pp. 1-12, May 2016.
  20. S. Colonnese, F. Cuomo, T. Melodia, and I. Rubin, “A cross layer bandwidth allocation schema for HTTP-based video streaming in LTE cellular networks”, IEEE Commun. Lett., vol. 21, no. 2, pp. 386-389, Feb. 2017.
  21. T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, “Collaborative mobile edge computing in 5G networks: New paradigms, challenges and scenarios”, IEEE Commun. Mag., vol. 55, no. 4, pp. 54-61, Apr. 2017.
  22. S. Sesia, I. Toufik, and M. Baker, “LTE - The UMTS Long Term Evolution: From Theory to Practice,” New York, NY, USA: Wiley, 2009.
  23. Sandvine: Global Internet Phenomena Report 2012 Q2. http://tinyurl.com/nyqyarq
  24. Sandvine: Global Internet Phenomena Report 2013 H2. http://tinyurl.com/nt5k5qw
  25. Fog computing, Wikipedia. https://en.wikipedia.org/wiki/Fog_computing