ENGINE: Cost-Effective Offloading in Mobile Edge Computing with Fog-Cloud Cooperation
Abstract
Mobile Edge Computing (MEC), an emerging paradigm that utilizes cloudlet or fog nodes to extend remote cloud computing to the edge of the network, is foreseen as a key technology for next-generation wireless networks. By offloading computation-intensive tasks from resource-constrained mobile devices to fog nodes or the remote cloud, the energy of mobile devices can be saved and their computation capability can be enhanced. Fog nodes, in turn, can rent the resource-rich remote cloud to help them process incoming tasks from mobile devices. In this architecture, the benefit of short computation delay for mobile devices can be fully exploited. However, existing studies mostly assume that fog nodes possess unlimited computing capacity, which is not practical, especially when fog nodes are themselves energy-constrained mobile devices. To provide incentives for fog nodes and reduce the computation cost of mobile devices, we propose a cost-effective offloading scheme in mobile edge computing with cooperation between fog nodes and the remote cloud under a task dependency constraint. The mobile devices have limited budgets and must determine which tasks should be computed locally and which should be sent to the fog. To address this issue, we first formulate the offloading problem as a task finish time minimization problem under given budgets of mobile devices, which is NP-hard. We then devise a greedy algorithm along with two other algorithms to study the network performance. Simulation results show that the proposed greedy algorithm achieves near-optimal performance. On average, the brute force method and the greedy algorithm outperform the simulated annealing algorithm on the application finish time.
I Introduction
Mobile devices such as smartphones, tablets and laptops are gaining enormous popularity thanks to their mobility and portability. As expected, they are playing the leading role in supporting various computation-intensive applications such as mobile gaming and augmented reality [1]. However, such applications are usually delay-sensitive and require computing resources, such as processing power, memory and battery life, that frequently exceed what mobile devices can bear. Due to their small physical size, mobile devices are usually constrained by limited computing power [2], which has become one of the most challenging issues [3, 4, 5].
With the growing data traffic in wireless communication networks such as WiFi, 3G/4G and the emerging 5G, mobile cloud computing (MCC) is designated as a promising solution to address this challenge. By offloading computation-intensive tasks to the cloud, which can be viewed as a self-managing data center with ample resources, the computing capabilities of mobile devices can be extended [6]. To offload the tasks, data have to be transmitted from devices to the cloud through wireless communication channels with techniques like network virtualization [7]. A mobile application can be partitioned into multiple subtasks with task dependency. The subtasks can be executed either locally on the mobile device itself or on the remote cloud. With this setting, by carefully selecting tasks for remote execution, the lifetime of mobile devices can be prolonged and the user experience can be enhanced.
Although MCC enables convenient access to a pool of computation resources in the cloud, moving all the tasks on mobile devices to the remote cloud would result in large transmission latencies that degrade the Quality of Experience (QoE) of mobile users. Mobile Edge Computing (MEC), or fog computing [8], has recently emerged as a remedy to the above limitations. By deploying fog or cloudlet nodes closer to mobile users at the edge of the network, mobile users can enjoy the same services as the remote cloud, while the transmission delay is reduced and the computation resource demands of mobile devices are still met. For example, in a heterogeneous wireless network, small cell base stations can be deployed with fog nodes to serve local mobile users [9]. The fog nodes can be any devices with storage, computing capabilities and network interfaces. In a local community, fog nodes can be deployed at shopping centers, hotels or even bus stops with WiFi access and deliver computing results back to their mobile users. Although fog computing demonstrates its potential to improve the QoE of mobile users by bringing services close to them, fog nodes themselves can be resource-constrained. When bursty traffic arrives, fog nodes on their own may not be able to serve users. Therefore, remote cloud resources can be borrowed by fog nodes via fog-cloud cooperation.
Although MEC with fog-cloud cooperation promises enormous benefits, designing energy-efficient schemes for computation offloading must answer the following questions. (i) Which subtask should be executed locally on the mobile device and which should be offloaded? (ii) How much monetary compensation should be paid by mobile users to stimulate offloading by fog nodes and the remote cloud? (iii) Which tasks should be migrated by the fog to which remote cloud server such that the total cost stays within the budget of the mobile devices?
To answer the above questions, in this paper, we concentrate on the cost Effective offloadiNg in mobile edGe computINg with fog-cloud coopEration (ENGINE) problem, in which the following issues are addressed. Firstly, for an application with multiple subtasks that follow task dependency, the offloading strategy adopted by a predecessor task can affect its successor's action. Secondly, the remote cloud is abundant in storage and computation resources but incurs long delay, while the fog nodes are resource-constrained but offer short latencies. Therefore, the coordination between fog and remote cloud servers should be carefully designed to meet the QoE demands of mobile users. Thirdly, to guarantee the QoE of mobile users, the cost constraint of mobile devices should be taken into account in designing offloading strategies.
In this paper, the objective is to design a cost-effective computation offloading and resource scheduling scheme given the cost budget of the mobile device running an application. Compared to existing work [10, 11, 12, 13, 14], the main contributions of this paper are summarized as follows.

Taking task dependency into consideration, we determine the detailed task execution procedure, i.e., which tasks should be executed on the mobile device, which should be offloaded, and which should be further offloaded to which remote cloud server.

The ENGINE problem is formulated as a response time minimization problem under the constraints of cost budgets and task-precedence requirements.

To solve the optimization problem, we propose a distributed algorithm for the joint optimal offloading task selection and fogcloud cooperation. Extensive experiments demonstrate the effectiveness of proposed schemes.

To the best of the authors' knowledge, this is the first work to consider joint device-fog-cloud scheduling and offloading that minimizes the execution delay of the application.
The remainder of this paper is organized as follows. Related work on offloading in MEC is presented in Section II. Section III presents the system model and computational model. The formulation of the ENGINE problem is described in Section IV. Section V presents the distributed algorithm for ENGINE. The performance evaluation is presented in Section VII, and Section VIII concludes this paper along with the future work.
II Related Work
Computation offloading in MCC has been extensively studied in the literature with a variety of architectures and offloading policies [11, 15, 16]. However, such offloading incurs extra communication delay due to the long distance to remote cloud servers. Instead of conventional MCC, MEC as defined by the European Telecommunications Standards Institute (ETSI) [8] is widely recognized as a key technology for the next generation network.
In MEC, computation offloading can be basically classified into two types, i.e., full offloading [17, 18, 19] and partial offloading [20, 21, 22, 23]. For full offloading, the whole computation task is offloaded and processed by the MEC. In [17], based on a Markov decision process, Kamoun et al. proposed both online learning and offline schemes to minimize the energy consumption of mobile devices by offloading all packets to edge cellular base stations under delay constraints. Chen et al. in [18] proposed a game-theoretic offloading scheme in the multi-channel wireless contention environment. Souza et al. in [19] studied the service allocation problem with the objective of minimizing the total delay of resource allocation. For partial offloading, part of the computation tasks are processed locally on the mobile devices while the rest are offloaded to the MEC. In [20], by using convex optimization, You et al. presented multiuser offloading algorithms to reduce the energy consumption of mobile devices under a delay constraint. In [21], Wang et al. provided a dynamic voltage scaling based partial offloading scheme. Similar to [17], Guo et al. [22] presented a discrete-time Markov decision process to achieve an optimal power-delay tradeoff. Moreover, Farris et al. [23, 10] proposed QoE-guaranteed service replication for delay-sensitive applications in 5G edge networks. However, they all focus on how much workload should be distributed to the MEC without considering task dependency within an application.
Recently, there have been some works on computation offloading with task dependency in MEC [24, 25]. In [24], Tziritas et al. proposed a data replication based virtual machine migration scheme in the edge network. In [25], under the multi-tenant cloud computing environment, Rimal et al. designed several algorithms to schedule workflows under flow delay constraints. However, neither [24] nor [25] is suitable for the scenario investigated by [14] in MCC, where an application contains subtasks that must be executed on the mobile device. In this paper, for an application, we investigate a partial offloading scheme with joint consideration of the computation cost of mobile devices and the fog nodes.
To the best of the authors' knowledge, little work has addressed the computation offloading problem in MEC while taking into account the cost of edge servers, such as energy consumption and the related communication. In [26], Deng et al. investigated the power consumption and delay tradeoff with the objective of minimizing the total system power consumption of fog nodes and remote cloud servers. Compared to our work, however, they did not consider task dependency or the cost budgets of mobile devices. In this paper, we conduct the offloading study from the perspective of mobile users.
III System Model and Computational Model
This section first describes the system model and then formulates the ENGINE problem with local computing, fog computing and fog-cloud cooperation.
III-A System Model
As shown in Fig. 1, we assume a group of mobile devices located in the vicinity of their corresponding wireless access points. Each access point can be a WiFi access point or a small cell base station in a HetNet, and fog nodes are connected with these access points. The access points are connected with each other via wired links, through which the remote cloud servers are also connected. A mobile application in MEC is partitioned into a set of subtasks. We denote the set of cloud servers and the set of fog servers respectively, and assume there are more remote cloud servers than fog nodes.
The application is modeled as a weighted Directed Acyclic Graph (DAG), where the set of vertices denotes the subtasks of the application and the set of communication edges represents the precedence relations, such that a task must complete its execution before its successor task starts.
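As a concrete sketch, the application DAG above can be represented by its edge list, and any valid execution sequence of the subtasks is a topological order of that graph. The following is a minimal illustration (the function name and representation are our own, not the paper's):

```python
from collections import defaultdict

def topological_order(num_tasks, edges):
    """Return one valid execution order of the subtasks.

    `edges` holds pairs (i, j) meaning subtask i must finish before
    subtask j starts; a cycle would make the application unschedulable.
    """
    succ = defaultdict(list)
    indeg = [0] * num_tasks
    for i, j in edges:
        succ[i].append(j)
        indeg[j] += 1
    ready = [v for v in range(num_tasks) if indeg[v] == 0]
    order = []
    while ready:
        v = ready.pop()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    if len(order) != num_tasks:
        raise ValueError("precedence graph contains a cycle")
    return order
```

For a diamond-shaped application with entry task 0 and exit task 3, `topological_order(4, [(0, 1), (0, 2), (1, 3), (2, 3)])` always starts with task 0 and ends with task 3.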
Next, we introduce the communication and computation models for mobile devices, fog nodes and between fog nodes and remote cloud servers in detail.
III-B Wireless Communication Model
We first present the wireless communication model and then provide the wired communication model.
The channel from a mobile device to its access point follows quasi-static block fading. We let binary indicators denote the computation offloading strategy made by the mobile device: one indicator specifies whether the mobile device offloads the computation task to a fog node in MEC or executes the task locally on its own device, and a second indicator specifies whether the subtask is further offloaded to a remote cloud server or not. We can compute the uplink data rate for wireless communication with the access point as
(1) 
where the transmission power of the mobile device used to offload its task to the access point, the channel gain from the device to the access point when transmitting the task, the channel bandwidth, and the surrounding noise power at the receiver of the transmission link are defined accordingly. From (1), we can see that the transmission rate grows with the transmission power of the mobile device and is inversely related to the interference power of neighbouring devices.
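The rate in (1) follows the standard Shannon-capacity form; a small sketch of that relationship (our own helper, with placeholder parameter names, since the paper's symbols were lost in extraction):

```python
import math

def uplink_rate(bandwidth_hz, tx_power_w, channel_gain, noise_w,
                interference_w=0.0):
    """Shannon-style uplink rate in bits/s: r = B * log2(1 + p*g / (N + I))."""
    sinr = tx_power_w * channel_gain / (noise_w + interference_w)
    return bandwidth_hz * math.log2(1.0 + sinr)
```

As the text observes, the rate increases with transmission power and decreases as the interference from neighbouring devices grows.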
III-C Computation Model
Let be the completion time of local execution of task on device . Let be the transmission time between the mobile device and its corresponding access point, and be the execution time on fog node. Denote as the transmission time between the fog nodes and the remote cloud server . Denote as the execution time of task of device on remote cloud server . Next, we present the computation overhead on energy consumption, task completion time as well as the coordination between fog nodes and remote cloud.
III-C1 Local Computing
Let the computation capability of a mobile device be its CPU clock speed (cycles/second). We note that different mobile devices may have different CPU clock speeds. The computation execution time of a task on the mobile device is then calculated as
(2) 
and the energy consumption of mobile device for the corresponding task is given by
(3) 
where the energy coefficient is set according to [27] and the remaining term is the workload of the task of the mobile device.
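Equations (2) and (3) follow the common model in which execution time is workload divided by CPU speed and energy grows with the square of the CPU frequency. A sketch under that assumption (the coefficient value `kappa` is our placeholder, since the constant cited from [27] was lost in extraction):

```python
def local_exec_time(workload_cycles, cpu_freq_hz):
    # Eq. (2)-style: execution time = workload / CPU clock speed.
    return workload_cycles / cpu_freq_hz

def local_energy(workload_cycles, cpu_freq_hz, kappa=1e-27):
    # Eq. (3)-style: energy = kappa * f^2 * workload, where kappa is an
    # effective switched-capacitance coefficient (assumed value here).
    return kappa * cpu_freq_hz ** 2 * workload_cycles
```

Note the tradeoff this model captures: a faster CPU shortens the execution time linearly but raises the per-cycle energy quadratically.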
III-C2 Fog and Cloud Computing
Similar to [27], we ignore the time and energy consumption of the cloud returning the computation outcome back to the user, because for many applications the size of the outcome is usually small. Let the computation capability of a fog node be its machine CPU frequency. Then the computation execution time is given by
(4) 
and the energy consumption of the fog node is given by
(5) 
where the coefficients are positive constants which can be obtained by offline power fitting, with values in the range reported in [28]. Similarly, the energy consumption of the remote cloud server for the task of the mobile device is given by
(6) 
where the remote execution time is calculated as
(7) 
Similarly, the CPU frequency of the remote cloud server is defined, and its coefficients are also positive constants.
III-C3 Data Transfer Cost
The size of the task of the mobile device, including its input data, can be expressed as
(8) 
Then the energy cost when transferring the data to the access point is given by
(9) 
Furthermore, we can obtain the data transfer delay when the remote server is employed by the corresponding fog node as
(10) 
where the average transfer rate, or bandwidth, between the fog node and the corresponding remote server is given. The energy cost of the fog node during offloading to the remote server is given by
(11) 
where the additional power consumed per unit time when performing data transfer from the fog node to the remote server is given.
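Equations (8)-(11) reduce to two simple relations: transfer delay is data size divided by link rate, and transfer energy is the transmit (or additional) power multiplied by that delay. A minimal sketch under that reading (helper names are ours):

```python
def transfer_delay(data_bits, rate_bps):
    # Eq. (10)-style delay: time to ship the task data over the link.
    return data_bits / rate_bps

def transfer_energy(data_bits, rate_bps, power_w):
    # Eq. (9)/(11)-style energy: power drawn during the transfer times
    # the transfer duration.
    return power_w * transfer_delay(data_bits, rate_bps)
```

For example, shipping 8 Mb over a 2 Mb/s link takes 4 s, and at 0.5 W of transmit power costs 2 J.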
III-C4 Basic Constraints
Before we formulate the ENGINE problem, we present some definitions as well as the QoS and user budget constraints. Note that a particular task of a mobile user cannot be executed unless all its predecessor tasks have already been processed. We name this the precedence constraint, following [11] and [14]. Let the ready time denote the time when a task of a mobile device is ready to be processed. Then we have
(12) 
where the receiving delay is neglected following [27], and (12) can be rewritten as
(13) 
where the set of predecessors of the task is considered; (13) means the local computing of a task can be executed only after its predecessor tasks have been executed. Therefore, the local task completion time of the mobile device is given by
(14) 
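The ready-time recursion in (12)-(14) says each task may start only when its latest-finishing predecessor is done; the finish time is then the ready time plus the execution time. A sketch of that recursion (our own function; transmission delays are omitted here for the local-only case):

```python
def finish_times(order, preds, exec_time):
    """Compute per-task finish times on one device, Eq. (12)-(14) style.

    order     : subtask ids in a topological order of the application DAG
    preds     : dict mapping a task to the list of its predecessor tasks
    exec_time : dict mapping a task to its execution time
    """
    finish = {}
    for v in order:
        # Ready time: the latest finish time among all predecessors
        # (zero for entry tasks with no predecessors).
        ready = max((finish[p] for p in preds.get(v, [])), default=0.0)
        finish[v] = ready + exec_time[v]
    return finish
```

For a two-task chain with times 1 s and 2 s, the second task finishes at 3 s; the total application response time of (27) is then simply the maximum over all finish times.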
Similarly, let , be the time when task of mobile device is ready to be processed on the fog node and the corresponding remote cloud server, given by
(15) 
and
(16) 
In (15), the transmission finish time from the mobile device to the corresponding fog node is given by
(17) 
the task finish time on the fog node is given by
(18) 
and the task finish time on the remote cloud server is given by
(19) 
In (16), the completion time of the transmission between the fog and the remote cloud server, defined similarly to (17), is given by
(20) 
In (19), is the time when the task is ready for processing at the remote cloud server.
It can be observed from (15) that if the predecessor task is executed locally, the corresponding offloading indicators are zero. The first term indicates that all the predecessors of the task that are offloaded to the fog node have finished execution, and the second means that all the predecessors of the task on the remote cloud server have finished execution as well. Therefore, the precedence constraints in (15) and (16) can be rewritten as
(21) 
and
(22) 
Next, we derive the utility constraints of the fog nodes and remote cloud servers and the cost constraints of the mobile devices. The utility of a fog node can be derived as
(23) 
where the charging price of the fog node covers the transmission or execution cost per unit data in the network. It can be observed from (23) that the transmission cost and the execution cost cannot coexist for a particular task. Similarly, the utility of the remote cloud server can be expressed as
(24) 
where the charging price of the remote cloud server is defined analogously. It should be noted that, to motivate computation offloading, the utility of both the fog nodes and the remote cloud server should not be negative; therefore we have
(25) 
For the mobile device processing a task, its cost consists of the local execution cost, the payment to the corresponding fog node, and the payment to the remote cloud server, which can be expressed as
(26) 
where, for a particular task of a mobile device, the placement indicators cannot be one at the same time.
Finally, we derive the runtime expression of the whole application for the mobile device. The total application response time for the mobile device is the time when all the tasks in the application are finished, given by
(27) 
We can observe from (27) that the total application delay is the time when the final task of mobile device has been finished on the mobile device.
IV Problem Formulation
In this section, we will formulate the ENGINE problem.
To solve the ENGINE problem, taking the mobile device budget and the cost of the cloud into consideration, we design an effective computation offloading strategy with fog and remote cloud cooperation. The aim is to minimize the total application response delay. Therefore, according to constraints (13), (21), (22) and (25), the ENGINE problem for all mobile devices can be formulated as a constrained minimization problem as follows
(28) 
subject to , , , ,
where the sum budget for each mobile device is given. The first constraint is the local task dependency constraint, which ensures a task can start to execute only after all its predecessor tasks have finished. The next constraints are the fog and remote cloud task dependency constraints, which imply that a task can be executed on the fog or the remote cloud server only after the task has been completely offloaded to the fog or the remote cloud accordingly. The utility constraints for the fog nodes and the remote cloud server follow. A further constraint ensures that a task can only be executed in one of the three places, i.e., the local mobile device, the fog node, or the remote cloud server. The formulation is completed by the binary constraints and the budget constraint for each mobile device.
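For intuition, problem (28) on a single device can be solved exactly by enumerating every placement of each subtask in {local, fog, cloud}, which is what makes it combinatorial. A deliberately simplified sketch (our own code; transmission delays are assumed folded into the fog/cloud execution times, and resource contention is ignored):

```python
from itertools import product

def brute_force(order, preds, t_local, t_fog, t_cloud, cost, budget):
    """Enumerate all placements and return (best makespan, best placement).

    order/preds describe the application DAG (order is topological);
    t_local/t_fog/t_cloud give per-task completion times at each location;
    cost maps 'l'/'f'/'c' to per-task monetary costs; budget is the sum
    cost budget of the mobile device.
    """
    exec_t = {'l': t_local, 'f': t_fog, 'c': t_cloud}
    best = (float('inf'), None)
    for placement in product('lfc', repeat=len(order)):
        total_cost = sum(cost[p][v] for v, p in zip(order, placement))
        if total_cost > budget:          # budget constraint
            continue
        finish = {}
        for v, p in zip(order, placement):
            ready = max((finish[u] for u in preds.get(v, [])), default=0.0)
            finish[v] = ready + exec_t[p][v]   # precedence constraint
        makespan = max(finish.values())
        if makespan < best[0]:
            best = (makespan, placement)
    return best
```

The search space grows as 3^n in the number of subtasks, which motivates the heuristics developed later in the paper.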
V Analysis of the ENGINE Problem
In this section, we solve with the relaxation of constraint and give the analysis on ENGINE problem.
V-A Lagrangian Relaxation
The challenge in solving the problem lies in the binary constraints, which make it a mixed integer programming problem that is non-convex and NP-hard. Therefore, as mentioned in [29], we first relax the binary constraints to real numbers in the unit interval. Obviously, the problem with the relaxed optimization variables is still not convex because constraint C1 is not convex. Therefore, to solve the problem, we turn to the Lagrange dual problem, which is convex and provides a lower bound on the optimal value [29]. By relaxing the task ready time constraints with non-negative Lagrangian multipliers, we first derive the Lagrangian Relaxed (LR) function of the primal problem. It is worth noting that certain product terms make constraint C1 and the objective function (28) non-convex; to deal with this issue, we introduce an auxiliary variable, and the Lagrangian relaxed function of (28) is formulated as (29).
(29) 
It should be noted that constraints C14 and C15 in (29) are relaxed to ensure convexity, and
(30) 
V-B Dual Problem Formulation
The corresponding Lagrangian Dual (LD) problem is formulated as follows:
(31) 
subject to constraints C11-C15 and (30). The dual problem decomposes into two layers, i.e., the inner-layer minimization in (31), whose subproblems can be executed in parallel across the mobile users, and the outer-layer maximization problem. In the following, we give the distributed solution for computation offloading selection and transmission power allocation.
V-C Computation Offloading Decision
In the computation offloading decision procedure, the system determines which subtasks should be executed on the mobile device, which should be offloaded to the fog node, and which should be further transmitted to the remote cloud server by the fog node. The objective is to determine the strategy that minimizes the execution delay under the given budget constraints. Meanwhile, the task dependency constraints should be preserved. The optimal computation offloading decision subproblem can be obtained by solving the following minimization problem:
(32) 
subject to constraints C2-C4, C7 and C11-C18.
It can be observed from (32) that if both offloading indicators of a subtask are zero, the subtask of the mobile user will be executed locally on the mobile phone, and the objective function achieves its minimum value accordingly. If the fog indicator is one and the cloud indicator is zero, the task will be offloaded to the fog node. If the cloud indicator is one, the remote cloud will be chosen. Note that the remaining terms will be zero when the minimum value of (32) is obtained.
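Since the binary indicators were relaxed to real values in V-A, a concrete decision must be recovered from the relaxed solution. One common heuristic after Lagrangian relaxation, shown here as our own sketch rather than the paper's exact recovery rule, is to pick the location with the largest relaxed indicator:

```python
def round_decision(x_fog, x_cloud):
    """Recover a discrete placement from relaxed indicators in [0, 1].

    x_fog / x_cloud are the relaxed offloading variables for one subtask;
    the local 'indicator' is whatever mass is left over. This rounding
    heuristic is an assumption, not the paper's stated procedure.
    """
    scores = {'local': 1.0 - x_fog - x_cloud, 'fog': x_fog, 'cloud': x_cloud}
    return max(scores, key=scores.get)
```

For instance, a relaxed solution with small fog and cloud components maps back to local execution, matching the case analysis of (32).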
V-D Optimal Power Allocation
The optimal power allocation strategy allocates the transmission power of the mobile devices to minimize the total task completion delay under budget constraints. It is obvious that the strategy is relevant only when the task is offloaded to the fog or to the remote cloud. Therefore, the power allocation scheme can be obtained by solving the following minimization subproblem:
(33) 
subject to constraints C2-C4, C7 and C11-C18, when fog computing is adopted. When remote cloud computing happens, the objective becomes
(34) 
subject to constraints C2-C4, C7 and C11-C18. In (33), there are three cases, listed as follows.
Case I: If and , we have . Therefore, the objective function can be rewritten as:
(35) 
Case II: If and , we have , hence the objective function can be rewritten as:
(36) 
Case III: If and , we have . The objective function can be rewritten as:
(37) 
It is easy to verify that the objective is non-convex w.r.t. the transmission power; however, the optimal transmission power can be determined by adopting the maximum transmission power, which also holds for Case II and Case III of (33). In (34), there are also three cases, as follows.
Case I’: If and , we have . The objective function of (34) can be rewritten as:
(38) 
where the minimum value is achieved at the maximum transmission power.
Case II’: If and , we have . The objective function of (34) can be rewritten as:
(39) 
which is also minimized at the maximum transmission power.
Case III’: If and , we have . The objective function of (34) can be rewritten as:
(40) 
and similarly the device should transmit with its maximum power.
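The reason the maximum transmission power is optimal in every case above is that the transmission delay is strictly decreasing in the transmit power, since a higher power raises the uplink rate. A small sketch of that monotonicity (helper and parameter names are our own):

```python
import math

def tx_delay(data_bits, bandwidth_hz, power_w, channel_gain, noise_w):
    """Transmission delay d(p) = L / (B * log2(1 + p*g/N)).

    Strictly decreasing in power_w, so within a feasible power range the
    delay-minimizing choice is the maximum allowed transmission power.
    """
    rate = bandwidth_hz * math.log2(1.0 + power_w * channel_gain / noise_w)
    return data_bits / rate
```

Doubling the power always shortens the delay here, which is why each case of (33) and (34) selects the maximum power.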
VI Algorithms for the ENGINE Problem
VI-A Greedy Implementation
Based on the above analysis, we first design a greedy offloading policy to minimize the task completion time. To acquire the minimum finish time of all subtasks of the application on the mobile device, the minimum completion time of each subtask is selected from the local, fog and cloud candidates. To meet the utility constraint of the remote cloud server, the corresponding payment should be large enough according to (6). By determining the offloading policy for each task iteratively, we can obtain the initial offloading solution for the mobile device. This subprocedure is shown between Line and Line .
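The initial greedy pass can be sketched as follows: walk the tasks in topological order and place each one where it finishes earliest. This is our own simplified rendering (transmission delays are assumed folded into the fog/cloud times, and fog/cloud contention is ignored):

```python
def greedy_initial(order, preds, t_local, t_fog, t_cloud):
    """Greedily place each subtask at its earliest-finishing location.

    Returns (place, finish): per-task placement in {'l', 'f', 'c'} and
    per-task finish times under the precedence constraint.
    """
    finish, place = {}, {}
    for v in order:  # 'order' must be a topological order of the DAG
        ready = max((finish[u] for u in preds.get(v, [])), default=0.0)
        options = {'l': t_local[v], 'f': t_fog[v], 'c': t_cloud[v]}
        best = min(options, key=options.get)
        place[v] = best
        finish[v] = ready + options[best]
    return place, finish
```

This ignores the budget entirely; the budget is restored by the adjustment step described next in the text.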
Then, we iteratively adjust the initial offloading policy to satisfy the sum cost budget of the mobile device. In real scenarios, the price of the remote cloud service is usually much higher than the fog node service price. Therefore, the cost of a task on the cloud is higher than that on the fog node, and the cost of a task on the fog node is higher than that on the mobile device with respect to (26). If some tasks have been deployed on the cloud, we can move them to the fog node one by one in ascending order of their costs. Similarly, when no task is deployed on the cloud, we can move tasks from the fog node to the mobile device one by one in ascending order of their costs.
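The budget adjustment above can be sketched as a demotion loop: while the total cost exceeds the budget, demote the cheapest cloud tasks to the fog, then the cheapest fog tasks to the device. A sketch under the paper's cost ordering (cloud > fog > local); the exact demotion key is an assumption, since the ordering metric was lost in extraction:

```python
def enforce_budget(place, cost, budget):
    """Demote tasks (cloud -> fog, then fog -> local) until the budget holds.

    place  : dict task -> 'l'/'f'/'c' from the initial greedy policy
    cost   : dict location -> per-task monetary cost list
    budget : sum cost budget of the mobile device
    """
    def total():
        return sum(cost[p][v] for v, p in place.items())
    for src, dst in (('c', 'f'), ('f', 'l')):
        if total() <= budget:
            break
        # Demote cheapest-first, per the ascending-order rule in the text.
        movable = sorted((v for v, p in place.items() if p == src),
                         key=lambda v: cost[src][v])
        for v in movable:
            if total() <= budget:
                break
            place[v] = dst
    return place
```

Because local cost is the lowest, the loop always terminates with a feasible placement whenever an all-local schedule fits the budget.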
Finally, we adjust the offloading policy to satisfy the utility constraint of the fog node. In order to improve the utility of the fog node, we move tasks from the cloud server to the fog node in descending order of