Joint Task Offloading and Resource Allocation for Multi-Server Mobile-Edge Computing Networks The authors are with the Department of Electrical and Computer Engineering, Rutgers University–New Brunswick, NJ, USA (e-mail: tuyen.tran@rutgers.edu, pompili@cac.rutgers.edu). This work was supported by the National Science Foundation (NSF) Grant No. CNS-1319945.

1 Introduction

Motivation:

The rapid growth of mobile applications and the Internet of Things (IoT) has placed severe demands on cloud infrastructure and wireless access networks in terms of ultra-low latency, user-experience continuity, and high reliability. These stringent requirements are driving the need for highly localized services at the network edge, in close proximity to the end users. In light of this, the Mobile-Edge Computing (MEC) [1] concept has emerged, which aims at uniting telco, IT, and cloud computing to deliver cloud services directly from the network edge. Differently from traditional cloud computing systems, where remote public clouds are utilized, MEC servers are owned by the network operator and are implemented directly at the cellular Base Stations (BSs) or at local wireless Access Points (APs) using a generic computing platform. With this position, MEC allows for the execution of applications in close proximity to end users, substantially reducing end-to-end (e2e) delay and relieving the burden on backhaul networks [2].

With the emergence of MEC, the ability of resource-constrained mobile devices to offload computation tasks to the MEC servers is expected to support a myriad of new services and applications such as augmented reality, IoT, autonomous vehicles, and image processing. For example, the face detection and recognition application for airport security and surveillance can benefit greatly from the collaboration between mobile devices and the MEC platform [3]. In this scenario, a central authority such as the FBI could extend its Amber alerts such that all available cell phones in the area where a missing child was last seen that opt in to the alert would actively capture images. Due to the significant amount of processing required and the need for a large database of images, the captured images are then forwarded to the MEC layer to perform face recognition.

Task offloading, however, incurs extra overhead in terms of delay and energy consumption due to the communication required between the devices and the MEC server over the uplink wireless channels. Additionally, in a system with a large number of offloading users, the finite computing resources at the MEC servers considerably affect the task execution delay [4]. Therefore, making offloading decisions and allocating resources becomes a critical problem toward enabling efficient computation offloading. Previously, this problem has been partially addressed by optimizing either the offloading decision [4], the communication resources [6], or the computing resources [8]. Recently, Sardellitti et al. [10] addressed the joint allocation of radio and computing resources, while the authors in [11] considered joint task offloading and resource optimization in a multi-user system. Both of these works, however, concentrate only on a system with a single MEC server.

Our Vision:

Unlike the traditional approaches mentioned above, our objective is to design a holistic solution for joint task offloading and resource allocation in a multi-server MEC-assisted network so as to maximize the users’ offloading gains. Specifically, we consider a multi-cell ultra-dense network where each BS is equipped with a MEC server that provides computation offloading services to the mobile users. The distributed deployment of MEC servers, along with the densification of (small cell) BSs as foreseen in the 5G standardization roadmap [12], will pave the way for real proximity, ultra-low latency access to cloud functionalities. Additionally, the benefits brought by a multi-server MEC system over a single-server MEC (aka single-cloud) system are multi-fold: (i) firstly, as each MEC server may become overloaded when serving a large number of offloading users, one can relieve the burden on that server by directing some users to offload to neighboring servers at nearby BSs, thus preventing the limited resources on each MEC server from becoming the bottleneck; (ii) secondly, each user can choose to offload its task to the BS with the most favorable uplink channel condition, thus saving transmission energy; (iii) finally, coordinating resource allocation to offloading users across multiple neighboring BSs can help mitigate the effects of interference and resource contention among the users and, hence, improve the offloading gains when multiple users offload their tasks simultaneously.

Challenges and Contributions:

To exploit in full the benefits of computation offloading in the considered multi-cell, multi-server MEC network, several key challenges need to be addressed. Firstly, the radio resource allocation is much more challenging than in the special cases studied in the literature (cf. [11]) due to the presence of inter-cell interference, which introduces coupling among the achievable data rates of different users and makes the problem nonconvex. Secondly, the complexity of the task-offloading decision is high as, for each user, one needs to decide not only whether it should offload its computation task but also which BS/server to offload the task to. Thirdly, the optimization model should take into account the inherent heterogeneity in terms of the mobile devices’ computing capabilities, the computation task requirements, and the availability of computing resources at different MEC servers.

In this context, the main contributions of this article are summarized as follows.

  • We model the offloading utility of each user as the weighted-sum of the improvement in task-completion time and device energy consumption; we formulate the problem of Joint Task Offloading and Resource Allocation (JTORA) as a Mixed Integer Non-linear Program (MINLP) that jointly optimizes the task offloading decisions, users’ uplink transmit power, and computing resource allocation to offloaded users at the MEC servers, so as to maximize the system offloading utility.

  • Given the NP-hardness of the JTORA problem, we propose to decompose the problem into (i) a Resource Allocation (RA) problem with fixed task offloading decision and (ii) a Task Offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem.

  • We further show that the RA problem can be decoupled into two independent problems, namely the Uplink Power Allocation (UPA) problem and the Computing Resource Allocation (CRA) problem; the resulting UPA and CRA problems are addressed using quasi-convex and convex optimization techniques, respectively.

  • We propose a novel low-complexity heuristic algorithm to tackle the TO problem and show that it achieves a suboptimal solution in polynomial time.

  • We carry out extensive numerical simulations to evaluate the performance of the proposed solution, which is shown to be near-optimal and to improve significantly the users’ offloading utility over traditional approaches.

Article Organization:

The remainder of this article is organized as follows. In Section 2, we review the related works. In Section 3, we present the system model. The joint task offloading and resource allocation problem is formulated in Section 4, followed by the NP-hardness proof and the decomposition of the problem. We present our proposed solution in Section 5 and numerical results in Section 6. Finally, in Section 7, we conclude the article.

2 Related Works

The MEC paradigm has attracted considerable attention in both academia and industry over the past several years. In 2013, Nokia Networks introduced the first real-world MEC platform [13], in which the computing platform, Radio Applications Cloud Servers (RACS), is fully integrated with the Flexi Multiradio BS. Saguna also introduced a fully virtualized MEC platform, the so-called Open-RAN [14], which provides an open environment for running third-party MEC applications. Recently, a MEC Industry Specifications Group (ISG) was formed to standardize and moderate the adoption of MEC within the RAN [1].

A number of solutions have also been proposed to exploit the potential benefits of MEC in the context of the IoT and 5G. For instance, our previous work in [2] proposed to explore the synergies among the connected entities in the MEC network and presented three representative use cases to illustrate the benefits of MEC collaboration in 5G networks. In [15], we proposed a collaborative caching and processing framework in a MEC network whereby the MEC servers can perform both caching and transcoding so as to facilitate Adaptive Bit-Rate (ABR) video streaming. A similar approach was also considered in [16], which combined the traditional client-driven dynamic adaptation scheme, DASH, with network-assisted adaptation capabilities. In addition, MEC is also seen as a key enabling technique for connected vehicles, adding computation and geo-distributed services to the roadside BSs so as to analyze the data from proximate vehicles and roadside sensors and to propagate messages to the drivers with very low latency [17].

Recently, several works have focused on exploiting the benefits of computation offloading in MEC networks [18]. Note that similar problems have been investigated in conventional Mobile Cloud Computing (MCC) systems [19]. However, a large body of existing works on MCC assumed an infinite amount of computing resources available in a cloudlet, where the offloaded tasks can be executed with negligible delay [20]. The problem of offloading scheduling was then reduced to radio resource allocation in [6], where the competition for radio resources is modeled as a congestion game among selfish mobile users. In the context of MEC, the problem of joint task offloading and resource allocation was studied in a single-user system with energy harvesting devices [23], and in a multi-cell, multi-user system [10]; however, the congestion of computing resources at the MEC server was omitted. A similar problem is studied in [11], considering the limited edge computing resources in a single-server MEC system.

In summary, most of the existing works did not consider a holistic approach that jointly determines the task offloading decision and the radio and computing resource allocation in a multi-cell, multi-server system as considered in this article.

3 System Model

Figure 1: Example of a cellular system with MEC servers deployed at the BSs.

We consider a multi-cell, multi-server MEC system as illustrated in Figure 1, in which each BS is equipped with a MEC server to provide computation offloading services to resource-constrained mobile devices such as smartphones, tablets, and wearable devices. In general, each MEC server can be either a physical server or a virtual machine with moderate computing capabilities provisioned by the network operator, and can communicate with the mobile devices through the wireless channels provided by the corresponding BS. Each mobile user can choose to offload computation tasks to a MEC server at one of the nearby BSs it can connect to. We denote the set of users and MEC servers in the mobile system as and , respectively. For ease of presentation, we will refer to the MEC server , server , and BS interchangeably. The modeling of the user computation tasks, task-uploading transmissions, MEC computing resources, and offloading utility is presented below.

3.1 User Computation Tasks

We consider that each user has one computation task at a time, denoted as , that is atomic and cannot be divided into subtasks. Each computation task is characterized by a tuple of two parameters, , in which specifies the amount of input data necessary to transfer the program execution (including system settings, program codes, and input parameters) from the local device to the MEC server, and specifies the workload, i.e., the amount of computation needed to accomplish the task. The values of and can be obtained through careful profiling of the task execution [24]. Each task can be performed locally on the user device or offloaded to a MEC server. By offloading the computation task to the MEC server, the mobile user saves energy for task execution; however, it consumes additional time and energy for sending the task input in the uplink.

Let denote the local computing capability of user in terms of CPU . Hence, if user executes its task locally, the task completion time is . To calculate the energy consumption of a user device when executing its task locally, we use the widely adopted model of the energy consumption per computing cycle as [6], where is the energy coefficient depending on the chip architecture and is the CPU frequency. Thus, the energy consumption, , of user when executing its task locally, is calculated as,
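The local-execution model above can be sketched in a few lines of code: completion time is workload over CPU frequency, and energy is the per-cycle cost (energy coefficient times frequency squared) times the workload. All names and the numeric values below are illustrative assumptions, not the paper's parameters.

```python
def local_execution_cost(workload_cycles, cpu_hz, kappa):
    """Return (completion time [s], energy [J]) for on-device execution.

    workload_cycles: task workload in CPU cycles
    cpu_hz:          device CPU frequency in cycles per second
    kappa:           chip-dependent energy coefficient
    """
    t_loc = workload_cycles / cpu_hz                  # time = cycles / (cycles/s)
    e_loc = kappa * (cpu_hz ** 2) * workload_cycles   # energy = kappa * f^2 per cycle
    return t_loc, e_loc

# Hypothetical example: 1e9-cycle task on a 1 GHz device
t, e = local_execution_cost(workload_cycles=1e9, cpu_hz=1e9, kappa=1e-27)
```

Note how the energy term grows quadratically with the CPU frequency, which is why offloading can pay off even when the radio transmission itself costs energy.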

3.2 Task Uploading

In case user offloads its task to one of the MEC servers, the incurred delay comprises: (i) the time to transmit the input to the MEC server on the uplink, (ii) the time to execute the task at the MEC server, and (iii) the time to transmit the output from the MEC server back to the user on the downlink. Since the size of the output is generally much smaller than that of the input, and the downlink data rate is much higher than that of the uplink, we omit the delay of transferring the output in our computation, as also considered in [11].

In this work, we consider a system with OFDMA as the multiple access scheme in the uplink [26], in which the operational frequency band is divided into equal sub-bands of size . To ensure the orthogonality of uplink transmissions among users associated with the same BS, each user is assigned one sub-band. Thus, each BS can serve at most users at the same time. Let be the set of available sub-bands at each BS. We define the task offloading variables, which also incorporate the uplink sub-band scheduling, as , where indicates that the task from user is offloaded to BS on sub-band , and otherwise. We define the ground set that contains all the task offloading variables as , and the task offloading policy is expressed as . As each task can be either executed locally or offloaded to at most one MEC server, a feasible offloading policy must satisfy the constraint below,

Additionally, we denote as the set of users offloading their tasks to server , and as the set of users that offload their tasks.
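The two exclusivity constraints on the offloading variables (each user offloads to at most one server/sub-band pair; each BS serves at most one user per sub-band) can be sketched as a simple feasibility check. The nested-list layout `x[u][s][k]` is an assumed encoding for illustration, not the paper's notation.

```python
def is_feasible_offloading(x):
    """Check the exclusivity constraints on binary offloading variables.

    x[u][s][k] = 1 if user u offloads its task to BS s on sub-band k, else 0.
    """
    n_users = len(x)
    n_bs = len(x[0])
    n_bands = len(x[0][0])
    # Each user: at most one (server, sub-band) pair.
    for u in range(n_users):
        if sum(x[u][s][k] for s in range(n_bs) for k in range(n_bands)) > 1:
            return False
    # Each (BS, sub-band): at most one user.
    for s in range(n_bs):
        for k in range(n_bands):
            if sum(x[u][s][k] for u in range(n_users)) > 1:
                return False
    return True
```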

Furthermore, we consider that each user and each BS have a single antenna for uplink transmissions (as also considered in [27]). Extension to the case where each BS uses multiple antennas for receiving uplink signals will be addressed in future work. Denote as the uplink channel gain between user and BS on sub-band , which captures the effects of path-loss, shadowing, and antenna gain. Note that the user-BS association usually takes place on a large time scale (the duration of an offloading session) that is much longer than the time scale of small-scale fading. Hence, similar to [28], we consider that the effect of fast fading is averaged out during the association. Let denote the users’ transmission powers, where is the transmission power of user when uploading its task’s input to the BS, subject to a maximum budget . Note that . As the users transmitting to the same BS use different sub-bands, uplink intra-cell interference is well mitigated; still, these users suffer from inter-cell interference. In this case, the Signal-to-Interference-plus-Noise Ratio (SINR) from user to BS on sub-band is given by,

where is the background noise variance and the first term in the denominator is the accumulated inter-cell interference from all the users associated with other BSs on the same sub-band . Since each user only transmits on one sub-band, the achievable rate of user when sending data to BS is given as,

where . Moreover, let . Hence, the transmission time of user when sending its task input in the uplink can be calculated as,
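The uplink model above, a Shannon rate over one sub-band with the upload time equal to the input size divided by the rate, can be sketched as follows. Function names and the example values are illustrative only.

```python
import math

def uplink_rate(p, g, interference, noise, bandwidth_hz):
    """Shannon rate [bit/s] on one sub-band of width bandwidth_hz.

    p:            user transmit power
    g:            uplink channel gain to the serving BS on this sub-band
    interference: accumulated inter-cell interference on this sub-band
    noise:        background noise power
    """
    sinr = p * g / (interference + noise)
    return bandwidth_hz * math.log2(1.0 + sinr)

def upload_time(input_bits, rate_bps):
    """Time [s] to upload the task input over the uplink."""
    return input_bits / rate_bps

# Hypothetical example: SINR of 3 over a 1 MHz sub-band -> 2 Mbit/s
r = uplink_rate(p=1.0, g=3e-9, interference=0.0, noise=1e-9, bandwidth_hz=1e6)
```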

3.3 MEC Computing Resources

The MEC server at each BS is able to provide computation offloading services to multiple users concurrently. The computing resources made available by each MEC server, to be shared among the associated users, are quantified by the computational rate , expressed in terms of the number of CPU . After receiving the offloaded task from a user, the server will execute the task on behalf of the user and, upon completion, will return the output back to the user. We define the computing resource allocation policy as , in which is the amount of computing resources that BS allocates to the task offloaded from user . Hence, clearly . In addition, a feasible computing resource allocation policy must satisfy the computing resource constraint, expressed as,

Given the computing resource assignment , the execution time of task at the MEC servers is,

3.4 User Offloading Utility

Given the offloading policy , the transmission power , and the computing resource allocation ’s, the total delay experienced by user when offloading its task is given by,

The energy consumption of user , , due to uploading transmission is calculated as , where is the power amplifier efficiency of user . Without loss of generality, we assume that . Thus, the uplink energy consumption of user simplifies to,

In a mobile cloud computing system, the users’ QoE is mainly characterized by their task completion time and energy consumption. In the considered scenario, the relative improvement in task completion time and energy consumption are characterized by and , respectively [11]. Therefore, we define the offloading utility of user as,

in which , with , specify user ’s preferences for task completion time and energy consumption, respectively. For example, a user with a short battery life can increase and decrease so as to save more energy at the expense of a longer task completion time. Note that offloading too many tasks to the MEC servers will cause excessive delay, due to the limited bandwidth and computing resources at the MEC servers, and consequently degrade some users’ QoE compared to executing their tasks locally. Hence, user clearly should not offload its task to the MEC servers if .
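A minimal sketch of the per-user offloading utility defined above, the weighted sum of the relative savings in completion time and energy versus local execution, with the two preference weights summing to one. Function and argument names are assumptions.

```python
def offloading_utility(t_local, e_local, t_off, e_off, beta_t, beta_e):
    """Weighted relative improvement of offloading over local execution.

    beta_t, beta_e: user preferences for time and energy, beta_t + beta_e = 1.
    A negative value means offloading is worse than local execution.
    """
    assert abs(beta_t + beta_e - 1.0) < 1e-9
    time_gain = (t_local - t_off) / t_local
    energy_gain = (e_local - e_off) / e_local
    return beta_t * time_gain + beta_e * energy_gain
```

With `t_local=10, e_local=2, t_off=5, e_off=1` and equal weights, the utility is 0.5, i.e., offloading halves both cost components.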

The expressions of the task completion time and energy consumption in clearly show the interplay between the radio access and computational aspects, which motivates a joint optimization of the offloading scheduling, radio, and computing resources so as to optimize the users’ offloading utility.

4 Problem Formulation

We formulate here the problem of joint task offloading and resource allocation, followed by the outline of our decomposition approach.

4.1 Joint Task Offloading and Resource Allocation Problem

For a given offloading decision , uplink power allocation , and computing resource allocation , we define the system utility as the weighted-sum of all the users’ offloading utilities,

with given in and specifying the resource provider’s preference towards user , . For instance, depending on the payments offered by the users, the resource provider could prioritize users with higher revenues for offloading by increasing their corresponding preferences. With this position, we formulate the Joint Task Offloading and Resource Allocation (JTORA) problem as a system utility maximization problem, i.e.,

The constraints in the formulation above can be explained as follows: constraints and imply that each task can be either executed locally or offloaded to at most one server on one sub-band; constraint implies that each BS can serve at most one user per sub-band; constraint specifies the transmission power budget of each user; finally, constraints and state that each MEC server must allocate positive computing resources to each user associated with it and that the total computing resources allocated to all the associated users must not exceed the server’s computing capacity. The JTORA problem in is a Mixed Integer Nonlinear Program (MINLP), which can be shown to be NP-hard; hence, finding the optimal solution usually requires exponential time complexity [29]. Given the large number of variables, which scales linearly with the number of users, MEC servers, and sub-bands, our goal is to design a low-complexity, suboptimal solution that achieves competitive performance while being practical to implement.
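To see why exhaustive search over the binary offloading variables is impractical, note that each user picks either local execution or one of the server/sub-band pairs; even ignoring the per-sub-band exclusivity constraints, a quick count of the decision space grows exponentially in the number of users. The sketch below is only a back-of-the-envelope illustration.

```python
def decision_space(n_users, n_servers, n_subbands):
    """Upper bound on offloading decisions: each user picks one of
    n_servers * n_subbands offloading targets or local execution."""
    return (n_servers * n_subbands + 1) ** n_users

# Even a tiny network of 10 users, 3 servers, 4 sub-bands already yields
# over 10^11 candidate offloading decisions.
count = decision_space(10, 3, 4)
```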

4.2 Problem Decomposition

Given the high complexity of the JTORA problem, due to the combinatorial nature of the task offloading decision, our approach in this article is to temporarily fix the task offloading decision and address the resulting problem, referred to as the Resource Allocation (RA) problem. The solution of the RA problem will then be used to derive the solution of the original JTORA problem. The decomposition process is described as follows. Firstly, we rewrite the JTORA problem in as,

Note that the constraints on the offloading decision, , in , , , and the RA policies, , in , , , are decoupled from each other; therefore, solving the problem in is equivalent to solving the following Task Offloading (TO) problem,

in which is the optimal-value function corresponding to the RA problem, written as,

In the next section, we will present our solutions to both the RA problem and the TO problem so as to finally obtain the solution to the original JTORA problem.

5 Low-complexity Algorithm for Joint Task Offloading and Resource Allocation

We present now our low-complexity approach to solve the JTORA problem by solving first the RA problem in and then using its solution to derive the solution of the TO problem in .

Firstly, given a feasible task offloading decision that satisfies constraints , , and , and using the expression of in , the objective function in can be rewritten as,

We observe that the first term on the right hand side (RHS) of is constant for a particular offloading decision, while can be seen as the total offloading overheads of all offloaded users. Hence, we can recast as the problem of minimizing the total offloading overheads, i.e.,

Furthermore, from , , and , we have,

in which, for simplicity, , , and . Notice from and that the problem in has a separable structure, i.e., the objectives and constraints corresponding to the power allocation ’s and computing resource allocation ’s can be decoupled from each other. Leveraging this property, we can decouple problem into two independent problems, namely the Uplink Power Allocation (UPA) and the Computing Resource Allocation (CRA), and address them separately, as described in the following sections.

5.1 Uplink Power Allocation (UPA)

The UPA problem is decoupled from problem by considering the first term on the RHS of as the objective function. Specifically, the UPA problem is expressed as,

Problem is non-convex and difficult to solve because the uplink SINR of user depends on the transmit powers of the users associated with other BSs on the same sub-band through the inter-cell interference , as seen in . Our approach is to find an approximation for , and thus for , such that problem can be decomposed into sub-problems that, in turn, can be efficiently solved, while the resulting uplink power allocation still yields a small objective value for . Suppose each BS calculates its uplink power allocation independently, i.e., without mutual cooperation, and informs its associated users about the uplink transmit power; then, an achievable upper bound for is given by,

Similar to [30], we argue that is a good estimate of , since our offloading decision is geared towards choosing appropriate user-BS associations so that is small in the first place. This means that a small error in should not lead to a large bias in [30].

By replacing with , we get the approximation for the uplink SINR for user uploading to BS on sub-band as,

Let and . The objective function in can now be approximated by . With this position, it can be seen that the objective function and the constraint corresponding to each user’s transmit power are now decoupled from those of the other users. Therefore, the UPA problem in can be approximated by sub-problems, each optimizing the transmit power of a user , which can be written as,

Problem is still non-convex as the second-order derivative of the objective function with respect to (w.r.t) , i.e., , is not always positive. However, we can employ quasi-convex optimization technique to address problem based on the following lemma.

See Appendix.

In general, a quasi-convex problem can be solved using the bisection method, which solves a convex feasibility problem in each iteration [31]. However, the popular interior cutting plane method for solving a convex feasibility problem requires iterations, where is the dimension of the problem. We now propose to further reduce the complexity of the bisection method.

Firstly, notice that a quasi-convex function attains a local optimum at the point where its first-order derivative vanishes, and that any local optimum of a strictly quasi-convex function is the global optimum [32]. Therefore, based on Lemma ?, we can confirm that the optimal solution of problem either lies at the constraint border, i.e., , or satisfies . It can be verified that when,

Moreover, we have, , and . This implies that is a monotonically increasing function and is negative at the starting point . Therefore, we can design a low-complexity bisection method that evaluates in each iteration instead of solving a convex feasibility problem, so as to obtain the optimal solution , as presented in Algorithm ?.
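The low-complexity bisection described above tests the sign of the (monotonically increasing) derivative in each iteration instead of solving a convex feasibility problem. A sketch follows; `h` is a hypothetical stand-in for the actual derivative of the UPA objective, which is negative near zero power.

```python
def bisect_power(h, p_max, tol=1e-6):
    """Minimize a strictly quasi-convex objective over (0, p_max].

    h: derivative of the objective w.r.t. transmit power; assumed monotonically
       increasing and negative as p -> 0. The optimum is either the constraint
       border p_max (if h never crosses zero) or the root of h.
    """
    if h(p_max) <= 0:          # derivative never vanishes: optimum at the border
        return p_max
    lo, hi = 0.0, p_max        # sign change bracketed in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0:
            lo = mid           # still descending: root lies to the right
        else:
            hi = mid           # ascending: root lies to the left
    return 0.5 * (lo + hi)
```

With a power budget of 1 and a toy derivative `h(p) = p - 0.3`, the routine returns the interior stationary point near 0.3; with `h(p) = p - 2` it returns the border value 1.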

In Algorithm ?, if , the algorithm will terminate in exactly iterations. Let denote the optimal uplink transmit power policy for a given task offloading policy . Denote now as the objective value of problem corresponding to .

5.2 Computing Resource Allocation (CRA)

The CRA problem optimizes the second term on the RHS of and is expressed as follows,

Notice that the constraint in is convex. Denote the objective function in as ; by calculating the second-order derivatives of w.r.t. , we have,

It can be seen that the Hessian matrix of the objective function in is diagonal with strictly positive elements; thus, it is positive-definite. Hence, is a convex optimization problem and can be solved using the Karush-Kuhn-Tucker (KKT) conditions. In particular, the optimal computing resource allocation is obtained as,

and the optimal objective function is calculated as,
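The structure of this KKT solution can be illustrated with a generic closed form: minimizing a sum of terms `a_u / f_u` subject to a total capacity constraint yields an allocation proportional to the square roots of the weights. The weights `a_u` below stand in for the paper's exact per-user coefficients, which are not reproduced here.

```python
import math

def cra_closed_form(weights, capacity):
    """Allocate a shared computing capacity among users via KKT conditions.

    Minimizes sum_u weights[u] / f[u] subject to sum_u f[u] <= capacity, f[u] > 0.
    Stationarity gives f[u] proportional to sqrt(weights[u]); the full capacity
    is used since the objective is decreasing in each f[u].
    """
    roots = [math.sqrt(a) for a in weights]
    total = sum(roots)
    return [capacity * r / total for r in roots]
```

For example, two users with weights 1 and 4 sharing a capacity of 3 receive allocations 1 and 2: the heavier user gets twice, not four times, the share, because the cost decreases with the inverse of the allocation.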

5.3 Joint Task Offloading Scheduling and Resource Allocation

In the previous sections, for a given task offloading decision , we obtained the solutions for the radio and computing resources allocation. In particular, according to , , , and , we have,

where can be obtained through Algorithm ? and can be calculated using the closed-form expression in . Now, using , we can rewrite the TO problem in as,

Problem consists in maximizing a set function w.r.t. over the ground set defined by , and the constraints in and define two matroids over . Due to the NP-hardness of such a problem [33], designing efficient algorithms that guarantee the optimal solution remains an open issue. In general, a brute-force method using exhaustive search would require evaluating possible task offloading scheduling decisions, where , which is clearly not a practical approach.

To overcome this drawback, we propose a low-complexity heuristic algorithm that can find a local optimum of problem in polynomial time. Specifically, our algorithm starts with an empty set and repeatedly performs one of two local operations, namely the remove operation or the exchange operation, as described in Routine ?, whenever it improves the set value . As we are dealing with two matroid constraints, the exchange operation involves adding one element from outside the current set and dropping up to elements from the set, so as to comply with the constraints. In summary, our proposed heuristic algorithm for task offloading scheduling is presented in Algorithm ?.
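A simplified sketch of the remove/exchange local search is given below. It omits the improvement-factor threshold and the matroid feasibility bookkeeping of the actual algorithm, and simply iterates until no strictly improving move exists; `value` plays the role of the set function being maximized.

```python
from itertools import combinations

def local_search(ground, value, max_drop=2):
    """Local search over subsets of `ground` maximizing the set function `value`.

    Moves: remove one element, or exchange (add one outside element and drop
    up to `max_drop` current elements, mirroring the two-matroid setting).
    Returns a locally optimal set: no single move improves the value.
    """
    current = frozenset()
    while True:
        best, best_val = None, value(current)
        # Remove operation: drop one element.
        for e in current:
            cand = current - {e}
            if value(cand) > best_val:
                best, best_val = cand, value(cand)
        # Exchange operation: add one element, drop up to max_drop elements.
        for e in ground - current:
            for d in range(max_drop + 1):
                for drop in combinations(current, d):
                    cand = (current - set(drop)) | {e}
                    if value(cand) > best_val:
                        best, best_val = cand, value(cand)
        if best is None:
            return current          # no improving move: local optimum
        current = best
```

On the toy problem of picking a subset of {1, 2, 3} whose sum is closest to 3, the search adds the single element 3 and then stops, since no remove or exchange move improves the value further.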

: (Complexity Analysis of Algorithm ?) Parameter in Algorithm ? is any value such that is at most a polynomial in . Let be the optimal value of problem over the ground set . It is easy to see that where is the element with the maximum over all elements of . Let be the number of iterations for Algorithm ?. Since after each iteration the value of the function increases by a factor of at least , we have , and thus . Note that the number of queries needed to calculate the value of the objective function in each iteration is at most . Therefore, the running time of Algorithm ? is , which is polynomial in .

: (Solution of JTORA) Let be the output of Algorithm ?. The corresponding solutions for the uplink power allocation and for computing resource sharing can be obtained using Algorithm ? and the closed-form expression in , respectively, by setting . Thus, the local optimal solution for the JTORA problem is . While characterizing the degree of suboptimality of the proposed solution is a non-trivial task—mostly due to the combinatorial nature of the task offloading decision and the nonconvexity of the original UPA problem—in the next section we will show via numerical results that our heuristic algorithm performs closely to the optimal solution using exhaustive search method.

6 Performance Evaluation

In this section, simulation results are presented to evaluate the performance of our proposed heuristic joint task offloading scheduling and resource allocation strategy, referred to as hJTORA. We consider a multi-cell cellular system consisting of multiple hexagonal cells with a BS at the center of each cell. The neighboring BSs are set apart from each other. We assume that both the users and BSs use a single antenna for uplink transmissions. The channel gains are generated using a distance-dependent path-loss model given as , and the log-normal shadowing variance is set to . In most simulations, unless stated otherwise, we consider cells and the users’ maximum transmit power is set to . In addition, the system bandwidth is set to and the background noise power is assumed to be .

Table 1: Runtime Comparison Among Competing Schemes
IOJRA GOJRA DORA hJTORA Exhaustive

Runtime [ms]

In terms of computing resources, we assume the CPU capability of each MEC server and of each user to be and , respectively. According to the realistic measurements in [24], we set the energy coefficient as . For the computation task, we consider the face detection and recognition application for airport security and surveillance [3], which can benefit greatly from the collaboration between mobile devices and the MEC platform. Unless otherwise stated, we choose the default setting values as , (following [3]), , , and , . In addition, the users are placed at random locations, with uniform distribution, within the coverage area of the network, and the number of sub-bands is set equal to the number of users per cell. We compare the system utility performance of our proposed hJTORA strategy against the following approaches.

  • Exhaustive

    : This is a brute-force method that finds the optimal offloading scheduling solution via exhaustive search over possible decisions; since the computational complexity of this method is very high, we only evaluate its performance in a small network setting.

  • Greedy Offloading and Joint Resource Allocation (GOJRA)

    : All tasks (up to the maximum number that can be admitted by the BSs) are offloaded, as in [10]. In each cell, offloading users are greedily assigned to the sub-bands with the highest channel gains until all users are admitted or all sub-bands are occupied; we then apply joint resource allocation across the BSs as proposed in Sections 5.1 and 5.2.

  • Independent Offloading and Joint Resource Allocation (IOJRA)

    : Each user is randomly assigned a sub-band from its home BS, then the users independently make offloading decision [21]; joint resource allocation is employed.

  • Distributed Offloading and Resource Allocation (DORA)

    : Each BS independently makes joint task offloading decisions and resource allocation for users within its cell [11].
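The greedy sub-band assignment step used by the GOJRA baseline can be sketched as follows. This is our own minimal illustration; the function and variable names are assumptions, not taken from the paper.

```python
import numpy as np

def greedy_subband_assignment(gains):
    # gains: (num_users, num_subbands) matrix of uplink channel gains
    # within one cell. Visit (user, sub-band) pairs in decreasing gain
    # order; each user keeps the best sub-band still available.
    assigned, used = {}, set()
    rows, cols = np.unravel_index(np.argsort(gains, axis=None)[::-1], gains.shape)
    for u, b in zip(rows, cols):
        if u not in assigned and b not in used:
            assigned[int(u)] = int(b)
            used.add(b)
        if len(assigned) == gains.shape[0] or len(used) == gains.shape[1]:
            break  # all users admitted or all sub-bands occupied
    return assigned

gains = np.array([[0.9, 0.1],
                  [0.8, 0.7]])
assignment = greedy_subband_assignment(gains)  # {0: 0, 1: 1}
```

Here user 0 claims sub-band 0 (gain 0.9); user 1's best remaining option is sub-band 1 (gain 0.7), matching the greedy order described above.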

6.1 Suboptimality of the Algorithm

Figure 2: Comparison of average system utility with 95% confidence intervals.

Firstly, to characterize the suboptimality of our proposed hJTORA solution, we compare its performance with the optimal solution obtained by the Exhaustive method, and then with the three other baselines described above. Since the Exhaustive method searches over all possible offloading scheduling decisions, its runtime is extremely long for a large number of variables; hence, we carry out the comparison in a small network setting with users uniformly placed in the area covered by cells, each having sub-bands. We randomly generate large-scale fading (shadowing) realizations, and the average system utilities (with confidence intervals) of the different schemes are reported in Figure 2(a,b) when we set and , respectively. It can be seen that the proposed hJTORA performs very closely to the optimal Exhaustive method, while significantly outperforming the other baselines. In both cases, the hJTORA algorithm achieves an average system utility within that of the Exhaustive algorithm, while providing up to , , and gains over the DORA, GOJRA, and IOJRA schemes, respectively. Additionally, in Table 1, we report the average runtime per simulation drop of the different algorithms, running on a Windows 7 desktop with CPU and RAM. It can be seen that the Exhaustive method takes a very long time, about longer than the hJTORA algorithm, even for such a small network. The DORA algorithm runs slightly faster than hJTORA, while IOJRA and GOJRA require the lowest runtimes.

6.2 Effect of the Number of Users

Figure 3: Comparison of average system utility against different numbers of users, evaluated with two task workload distributions: (a) uniform, c_u = 1000 Megacycles for all u ∈ U; (b) non-uniform, c_u = 500 Megacycles for users in cells {1, 3, 5, 7} and c_u = 2000 Megacycles for users in cells {2, 4, 6}.

We now evaluate the system utility performance against different numbers of users wishing to offload their tasks, as shown in Figure 3(a,b). In particular, we vary the number of users per cell from to and perform the comparison in two scenarios with different task workload distributions: (a) uniform, , and (b) non-uniform, in cells and . Note that the number of sub-bands is set equal to the number of users per cell; thus, the bandwidth allocated to each user decreases when there are more users in the system. Observe from Figure 3(a,b) that hJTORA always performs best, and that the performance of all schemes increases significantly when the tasks' workload increases. This is because, when the tasks require more computation resources, the users benefit more from offloading them to the MEC servers. We also observe in both scenarios that, when the number of users is small, the system utility increases with the number of users; however, when the number of users exceeds a certain threshold, the system utility starts to decrease. This is because, when many users compete for the radio and computing resources needed to offload their tasks, the overhead of sending the tasks and executing them at the MEC servers becomes higher, thus degrading the offloading utility.

6.3 Effect of Task Profile

Here, we evaluate the system utility performance w.r.t. the computation tasks' profiles in terms of input size 's and workload 's. In Figure 4(a,b), we plot the average system utility of the four competing schemes at different values of and , respectively. It can be seen that the average system utilities of all schemes increase with the task workload and decrease with the task input size. This implies that tasks with small input sizes and high workloads benefit more from offloading than those with large input sizes and low workloads. Moreover, we observe that the performance gains of the proposed hJTORA scheme over the baselines follow a similar trend, i.e., they increase with the task workload and decrease with the task input size.

6.4 Effect of Users' Preferences

Figure 5 shows the average time and energy consumption of all the users when we increase the users' preference for time, 's, between and , while at the same time decreasing the users' preference for energy as . It can be seen that the average time consumption decreases when increases, at the cost of higher energy consumption. In addition, when , the users experience larger average time and energy consumption than in the case when . This is because, when there are more users competing for the limited resources, the probability that a user can benefit from offloading its task is lower.
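The time/energy trade-off above follows from the form of the per-user offloading utility, which in this class of models is a weighted sum of the relative time and energy savings, with the two preference weights summing to one. The sketch below is a hedged illustration of that structure; the paper's exact utility definition may differ.

```python
def offloading_utility(t_local, e_local, t_off, e_off, lam_t):
    # lam_t: user's preference weight for time; the energy weight is
    # 1 - lam_t, so the two preferences always sum to one.
    # Positive utility means offloading is beneficial for this user.
    lam_e = 1.0 - lam_t
    time_saving = (t_local - t_off) / t_local
    energy_saving = (e_local - e_off) / e_local
    return lam_t * time_saving + lam_e * energy_saving

# Raising lam_t rewards time savings more, mirroring the trend in Fig. 5
# (all numeric inputs here are hypothetical):
u_time_centric = offloading_utility(2.0, 1.0, 0.5, 0.4, lam_t=0.9)
u_energy_centric = offloading_utility(2.0, 1.0, 0.5, 0.4, lam_t=0.1)
```

With these hypothetical numbers the time saving (0.75) exceeds the energy saving (0.6), so a time-centric weighting yields the higher utility.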

Figure 4: Comparison of average system utility against (a) different task workloads, and (b) different sizes of task input; with U = 28.
Figure 5: Average time and energy consumption of all users obtained using hJTORA, with the number of users being U = 14 and 21.

6.5 Effect of Inter-cell Interference Approximation

To test the effect of the approximation used to model the inter-cell interference in Section 5-A, we compare the results of the hJTORA solution when the system utility is calculated using the approximated expression versus the exact expression of the inter-cell interference. Figure 6 shows the system utility when the users' maximum transmit power 's varies between and . It can be seen that the performance obtained using the approximation is almost identical to that of the exact expression when is below , while an increasing gap appears when . However, as specified in the LTE standard, 3GPP TS36.101, Section 6.2.3 (see Footnote 1), the maximum UE transmit power is ; hence, we can argue that the proposed approximation works well in practical systems.
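To see why a cap on the transmit power keeps such an approximation tight, consider a simple uplink SINR computation. The specific approximation shown below, treating all interferers as transmitting at maximum power, is only an assumed stand-in for the paper's expression; all numeric values are hypothetical.

```python
import numpy as np

def uplink_sinr(p_user, g_serving, p_interferers, g_interferers, noise):
    # Exact inter-cell interference: sum of actual interferer transmit
    # powers weighted by their cross-cell channel gains.
    interference = float(np.dot(p_interferers, g_interferers))
    return p_user * g_serving / (interference + noise)

# Hypothetical numbers: two co-channel interferers in neighboring cells.
p_max = 0.2                                  # ~23 dBm, the LTE UE power cap
g_srv, g_int = 1e-6, np.array([1e-8, 2e-8])
p_int_actual = np.array([0.10, 0.05])
noise = 1e-13

exact = uplink_sinr(0.1, g_srv, p_int_actual, g_int, noise)
worst_case = uplink_sinr(0.1, g_srv, np.full(2, p_max), g_int, noise)
# Overestimating the interference can only lower the SINR estimate, and
# the gap between the two shrinks as the power cap tightens.
```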

Figure 6: Average system utility obtained by the hJTORA solution with the exact expression and the approximation of the inter-cell interference.

7 Conclusions

We proposed a holistic strategy for joint task offloading and resource allocation in a multi-cell Mobile-Edge Computing (MEC) network. The underlying optimization problem was formulated as a Mixed-Integer Non-linear Program (MINLP), which is NP-hard. Our approach decomposes the original problem into a Resource Allocation (RA) problem with a fixed task offloading decision and a Task Offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We further decoupled the RA problem into two independent subproblems, namely uplink power allocation and computing resource allocation, and addressed them using quasi-convex and convex optimization techniques, respectively. Finally, we proposed a novel heuristic algorithm that achieves a suboptimal solution to the TO problem in polynomial time. Simulation results showed that our heuristic algorithm performs close to the optimal solution and significantly improves the average system offloading utility over traditional approaches.

Appendix

Firstly, it is straightforward to verify that is twice differentiable on . We now check the second-order condition for a strictly quasi-convex function, which requires that any point satisfying also satisfies [31].
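For reference, the second-order condition invoked here can be written explicitly; this is the standard sufficient condition for strict quasi-convexity of a twice-differentiable function on an interval (cf. [31]):

```latex
% Second-order sufficient condition for strict quasi-convexity on an
% interval I: every stationary point must be a strict local minimum,
f'(x) = 0 \;\Longrightarrow\; f''(x) > 0, \qquad \forall x \in I .
```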

The first-order and second-order derivatives of can be calculated, respectively, as,

and

in which,

Suppose that ; to satisfy , it must hold that,

By substituting into , we obtain,

It can be easily verified that both and are strictly positive . Hence, , which confirms that is a strictly quasi-convex function in .

Footnotes

  1. Refer to: 3GPP TS36.101, V14.3.0, Mar. 2017

References

  1. Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, “Mobile Edge Computing – A Key Technology Towards 5G,” ETSI White Paper, vol. 11, 2015.
  2. T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, “Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges,” IEEE Communications Magazine, vol. 55, no. 4, pp. 54–61, 2017.
  3. T. Soyata, R. Muraleedharan, C. Funai, M. Kwon, and W. Heinzelman, “Cloud-vision: Real-time face recognition using a mobile-cloudlet-cloud acceleration architecture,” in Proc. IEEE Symposium on Computers and Communications (ISCC), pp. 59–66, 2012.
  4. L. Yang, J. Cao, H. Cheng, and Y. Ji, “Multi-user computation partitioning for latency sensitive mobile cloud applications,” IEEE Trans. Comput., vol. 64, no. 8, pp. 2253–2266, 2015.
  5. V. Cardellini, V. D. N. Personé, V. Di Valerio, F. Facchinei, V. Grassi, F. L. Presti, and V. Piccialli, “A game-theoretic approach to computation offloading in mobile cloud computing,” Mathematical Programming, vol. 157, no. 2, pp. 421–449, 2016.
  6. X. Chen, “Decentralized computation offloading game for mobile cloud computing,” IEEE Trans. Parallel Distrib. Syst., vol. 26, no. 4, pp. 974–983, 2015.
  7. X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795–2808, 2016.
  8. L. Yang, J. Cao, Y. Yuan, T. Li, A. Han, and A. Chan, “A framework for partitioning and execution of data stream applications in mobile cloud computing,” ACM SIGMETRICS Performance Evaluation Review, vol. 40, no. 4, pp. 23–32, 2013.
  9. M. R. Rahimi, N. Venkatasubramanian, and A. V. Vasilakos, “Music: Mobility-aware optimal service allocation in mobile cloud computing,” in Proc. IEEE Int. Conf. on Cloud Computing, pp. 75–82, 2013.
  10. S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio and computational resources for multicell mobile-edge computing,” IEEE Trans. Signal Inf. Process. Over Netw., vol. 1, no. 2, pp. 89–103, 2015.
  11. X. Lyu, H. Tian, P. Zhang, and C. Sengul, “Multi-user joint task offloading and resources optimization in proximate clouds,” IEEE Trans. Veh. Technol., vol. PP, no. 99, 2016.
  12. X. Ge, S. Tu, G. Mao, C.-X. Wang, and T. Han, “5G ultra-dense cellular networks,” IEEE Wireless Commun., vol. 23, no. 1, pp. 72–79, 2016.
  13. Intel and Nokia Siemens Networks, “Increasing mobile operators’ value proposition with edge computing,” Technical Brief, 2013.
  14. Saguna and Intel, “Using mobile edge computing to improve mobile network performance and profitability,” White paper, 2016.
  15. T. X. Tran, P. Pandey, A. Hajisami, and D. Pompili, “Collaborative Multi-bitrate Video Caching and Processing in Mobile-Edge Computing Networks,” in Proc. IEEE/IFIP Conf. on Wireless On-demand Network Systems and Services (WONS), pp. 165–172, 2017.
  16. J. O. Fajardo, I. Taboada, and F. Liberal, “Improving content delivery efficiency through multi-layer mobile edge adaptation,” IEEE Network, vol. 29, no. 6, pp. 40–46, 2015.
  17. Nokia, “LTE and Car2x: Connected cars on the way to 5G.” [Online]: http://www.cambridgewireless.co.uk/Presentation/MB06.04.16-Nokia-Uwe Putzschler.pdf.
  18. Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “Mobile edge computing: Survey and research outlook,” arXiv preprint arXiv:1701.01090, 2017.
  19. Z. Sanaei, S. Abolfazli, A. Gani, and R. Buyya, “Heterogeneity in mobile cloud computing: taxonomy and open challenges,” IEEE Commun. Surveys Tuts., vol. 16, no. 1, pp. 369–392, 2014.
  20. W. Zhang, Y. Wen, and D. O. Wu, “Energy-efficient scheduling policy for collaborative execution in mobile cloud computing,” in Proc. IEEE Int. Conf. on Comput. Commun. (INFOCOM), pp. 190–194, 2013.
  21. W. Zhang, Y. Wen, and D. O. Wu, “Collaborative task execution in mobile cloud computing under a stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 14, no. 1, pp. 81–93, 2015.
  22. Z. Cheng, P. Li, J. Wang, and S. Guo, “Just-in-time code offloading for wearable computing,” IEEE Trans. Emerg. Topics Comput., vol. 3, no. 1, pp. 74–83, 2015.
  23. Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” IEEE J. Sel. Areas in Commun., vol. 34, no. 12, pp. 3590–3605, 2016.
  24. A. P. Miettinen and J. K. Nurminen, “Energy efficiency of mobile clients in cloud computing,” in Proc. USENIX Conf. Hot Topics Cloud Comput. (HotCloud), June 2010.
  25. Y. Wen, W. Zhang, and H. Luo, “Energy-optimal mobile application execution: Taming resource-poor mobile devices with cloud clones,” in Proc. IEEE INFOCOM, pp. 2716–2720, 2012.
  26. E. Dahlman, S. Parkvall, and J. Skold, 4G: LTE/LTE-Advanced for Mobile Broadband. Academic Press, 2013.
  27. W. Saad, Z. Han, R. Zheng, M. Debbah, and H. V. Poor, “A college admissions game for uplink user association in wireless small cell networks,” in Proc. IEEE INFOCOM, pp. 1096–1104, 2014.
  28. Q. Ye, B. Rong, Y. Chen, M. Al-Shalash, C. Caramanis, and J. G. Andrews, “User association for load balancing in heterogeneous cellular networks,” IEEE Trans. Wireless Commun., vol. 12, no. 6, pp. 2706–2716, 2013.
  29. Y. Pochet and L. A. Wolsey, Production Planning by Mixed Integer Programming. Springer Science & Business Media, 2006.
  30. Y. Du and G. De Veciana, ““Wireless networks without edges”: Dynamic radio resource clustering and user scheduling,” in Proc. IEEE Int. Conf. on Comput. Commun. (INFOCOM), pp. 1321–1329, 2014.
  31. S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
  32. B. Bereanu, “Quasi-convexity, strictly quasi-convexity and pseudo-convexity of composite objective functions,” Revue française d’automatique, informatique, recherche opérationnelle. Mathématique, vol. 6, no. 1, pp. 15–26, 1972.
  33. J. Lee, V. S. Mirrokni, V. Nagarajan, and M. Sviridenko, “Non-monotone submodular maximization under matroid and knapsack constraints,” in Proc. Annual ACM Symp. Theory of Comput., pp. 323–332, 2009.