Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading

Abstract

Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation.

1Introduction

The realization of the Internet of Things (IoT) [1] will connect tens of billions of resource-limited mobiles, e.g., mobile devices, sensors and wearable computing devices, to the Internet via cellular networks. The finite battery lives and limited computation capacities of mobiles pose significant challenges for designing IoT. One promising solution is to leverage mobile-edge computing [2] and offload intensive mobile computation to nearby clouds at the edges of cellular networks, called edge clouds, with short latency, referred to as mobile-edge computation offloading (MECO). In this paper, we consider a MECO system with a single edge cloud serving multiple users and investigate the energy-efficient resource allocation.

1.1Prior Work

Mobile computation offloading (MCO) [3] (or mobile cloud computing) has been extensively studied in computer science, including system architectures (e.g., MAUI [4]), virtual machine migration [5] and power management [6]. It is commonly assumed that the implementation of MCO relies on a network architecture with a central cloud (e.g., a data center). This architecture has the drawbacks of high overhead and long backhaul latency [7], and will soon encounter the performance bottleneck of finite backhaul capacity in view of exponential mobile traffic growth. These issues can be overcome by MECO based on a network architecture supporting distributed mobile-edge computing. Among others, designing energy-efficient control policies is a key challenge for the MECO system.

Energy-efficient MECO requires the joint design of MCO and wireless communication techniques. Recent years have seen research progress on this topic for both single-user [8] and multiuser [12] MECO systems. For a single-user MECO system, the optimal offloading decision policy was derived in [8] by comparing the energy consumption of optimized local computing (with variable CPU cycles) and offloading (with variable transmission rates). This framework was further developed in [9] and [10] to enable adaptive offloading powered by wireless energy transfer and energy harvesting, respectively. Moreover, dynamic offloading was integrated with adaptive LTE/WiFi link selection in [11] to achieve higher energy efficiency. For multiuser MECO systems, the control policies for energy savings are more complicated. In [12], distributed computation offloading for multiuser MECO at a single cloud was designed using game theory for both energy and latency minimization at mobiles. A multi-cell MECO system was considered in [13], where the radio and computation resources were jointly allocated to minimize the mobile energy consumption under offloading latency constraints. With the coexistence of central and edge clouds, the optimal user scheduling for offloading to different clouds was studied in [14]. In addition to total mobile energy consumption, cloud energy consumption for computation was also minimized in [15] by designing the mapping between clouds and mobiles for offloading using game theory. The cooperation among clouds was further investigated in [16] to maximize the revenues of clouds and meet mobiles’ demands via resource pool sharing. Prior work on MECO resource allocation focuses on complex algorithmic designs and yields little insight into the optimal policy structures. In contrast, for a multiuser MECO system based on time-division multiple access (TDMA), the optimal resource-allocation policy is shown in the current work to have a simple threshold-based structure with respect to a derived offloading priority function. This insight is used for designing the low-complexity resource-allocation policy for an orthogonal frequency-division multiple access (OFDMA) MECO system.

Resource allocation for traditional multiple-access communication systems has been widely studied, including TDMA (see e.g., [17]), OFDMA (see e.g., [18]) and code-division multiple access (CDMA) (see e.g., [19]). Moreover, it has been designed for existing networks such as cognitive radio [20] and heterogeneous networks [21]. Note that all of these works focus only on radio resource allocation. In contrast, for the newly proposed MECO systems, both the computation and radio resource allocation at the edge cloud are jointly optimized for the maximum mobile energy savings, making the algorithmic design more complex.

1.2Contribution and Organization

This paper considers resource allocation in a multiuser MECO system based on TDMA and OFDMA. Multiple mobiles are required to compute different computation loads with the same latency constraint. Assuming that computation data can be split for separate computing, each mobile can simultaneously perform local computing and offloading. Moreover, the edge cloud is assumed to have perfect knowledge of local computing energy consumption, channel gains and fairness factors at all users, which is used for designing centralized resource allocation to achieve the minimum weighted sum mobile energy consumption. In the TDMA MECO system, the optimal threshold-based policy is derived for both the cases of infinite and finite cloud capacities. For the OFDMA MECO system, a low-complexity sub-optimal algorithm is proposed to solve the mixed-integer resource allocation problem.

The contributions of the current work are as follows.

  • TDMA MECO with infinite cloud capacity: For TDMA MECO with infinite (computation) capacity, a convex optimization problem is formulated to minimize the weighted sum mobile energy consumption under the time-sharing constraint. To solve it, an offloading priority function is derived that yields priorities for users and depends on their channel gains and local computing energy consumption. Based on this, the optimal policy is proved to have a threshold-based structure that determines complete and minimum offloading for users with priorities above and below a given threshold, respectively.

  • TDMA MECO with finite cloud capacity: The above results are extended to the case of finite capacity. Specifically, the optimal resource allocation policy is derived by defining an effective offloading priority function and modifying the threshold-based policy as derived for the infinite-capacity cloud. To reduce the complexity arising from a two-dimensional search for Lagrange multipliers, a simple and low-complexity algorithm is proposed based on the approximated offloading priority order. This reduces the said search to a one-dimensional search, shown by simulation to have close-to-optimal performance.

  • OFDMA MECO: For an infinite-capacity cloud based on OFDMA, the insight of the priority-based policy structure of TDMA is used for optimizing its resource allocation. Specifically, to solve the corresponding mixed-integer optimization problem, a low-complexity sub-optimal algorithm is proposed. Using average sub-channel gains, the OFDMA resource allocation problem is transformed into its TDMA counterpart. Based on this, the initial resource allocation and offloaded data allocation can be determined by defining an average offloading priority function. Moreover, the integer sub-channel assignment is performed according to the offloading priority order, followed by adjustments of offloaded data allocation over assigned sub-channels. The proposed algorithm is shown to have close-to-optimal performance by simulation and can be extended to the finite-capacity cloud case.

The remainder of this paper is organized as follows. Section II introduces the system model. Section III presents the problem formulation for multiuser MECO based on TDMA. The corresponding resource allocation policies are characterized in Section IV and Section V for the cases of infinite and finite cloud capacities, respectively. The above results are extended in Section VI for the OFDMA system. Simulation results and discussion are given in Section VII, followed by the conclusion in Section VIII.

2System Model

Consider a multiuser MECO system shown in Figure 1 with single-antenna mobiles, denoted by a set , and one single-antenna base station (BS) that is the gateway of an edge cloud. These mobiles are required to compute different computation loads under the same latency constraint. 1 Assume that the BS has perfect knowledge of the multiuser channel gains, local computing energy per bit and sizes of input data at all users, which can be obtained by feedback. Using this information, the BS selects offloading users, determines the offloaded data sizes and allocates radio resources to offloading users with the criterion of minimum weighted sum mobile energy consumption.

2.1Multiple-Access Model

Both the TDMA and OFDMA systems are considered as follows. For the TDMA system, time is divided into slots each with a duration of seconds where is chosen to meet the user-latency requirement. As shown in Figure 1, each time slot comprises two sequential phases for 1) mobile offloading or local computing and 2) cloud computing and downloading of computation results from the edge cloud to mobiles. Cloud computing has small latency; the downloading consumes negligible mobile energy and furthermore is much faster than offloading due to the relatively smaller sizes of computation results. For these reasons, the second phase is assumed to have a negligible duration compared to the first phase and is not considered in resource allocation. For the OFDMA system, the total bandwidth is divided into multiple orthogonal sub-channels and each sub-channel can be assigned to at most one user. The offloading mobiles will be allocated one or more sub-channels.

Considering an arbitrary slot in TDMA/OFDMA, the BS schedules a subset of users for complete/partial offloading. The user with partial or no offloading computes a fraction of or all input data, respectively, using a local CPU.

2.2Local-Computing Model

Assume that the CPU frequency is fixed at each user and may vary over users. Consider an arbitrary time slot. Following the model in [12], let denote the number of CPU cycles required for computing -bit of input data at the -th mobile, and the energy consumption per cycle for local computing at this user. Then the product gives computing energy per bit. As shown in Figure 2, mobile is required to compute -bit input data within the time slot, out of which -bit is offloaded and -bit is computed locally. Then the total energy consumption for local computing at mobile , denoted as , is given by . Let denote the computation capacity of mobile that is measured by the number of CPU cycles per second. Under the computation latency constraint, it has As a result, the offloaded data at mobile has the minimum size of with , where
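To make the local-computing model concrete, the following minimal Python sketch computes the local-computing energy and the latency-mandated minimum offloaded size. All variable names (cycles per bit, energy per cycle, CPU frequency, slot duration) are hypothetical stand-ins for the paper's notation, which is not reproduced here.

```python
def local_computing_energy(cycles_per_bit, energy_per_cycle, input_bits, offloaded_bits):
    """Energy spent computing the bits that remain on the mobile (Section 2.2 model)."""
    local_bits = input_bits - offloaded_bits
    return cycles_per_bit * energy_per_cycle * local_bits


def min_offloaded_bits(cycles_per_bit, cpu_freq_hz, input_bits, slot_sec):
    """Smallest offloaded size such that the remaining bits can be computed within the slot."""
    max_local_bits = cpu_freq_hz * slot_sec / cycles_per_bit  # latency constraint on the local CPU
    return max(0.0, input_bits - max_local_bits)
```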

2.3Computation-Offloading Model

First, consider the TDMA system for an arbitrary time slot. Let denote the channel gain for mobile , which is constant during the offloading duration, and its transmission power. Then the achievable rate (in bits/s), denoted by , is:

where and are the bandwidth and the variance of complex white Gaussian channel noise, respectively. The fraction of slot allocated to mobile for offloading is denoted as with , where corresponds to no offloading. For the case of offloading (), under the assumption of negligible cloud computing and result downloading time (see Section 2.1), the transmission rate is fixed as since this is the most energy-efficient transmission policy under a deadline constraint [22]. Define a function . It follows from that the energy consumption for offloading at mobile is

Note that if either or , is equal to zero.
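As a rough illustration of this offloading energy model, the sketch below inverts the achievable-rate expression to obtain the transmission power needed to send a given amount of data at the fixed rate within the allocated time fraction. The exact normalization of the noise term is an assumption, and all names are hypothetical.

```python
def tdma_offload_energy(bits, alloc_time_sec, bandwidth_hz, channel_gain, noise_var):
    """Offloading energy for one mobile in a TDMA slot (Section 2.3 sketch).

    The rate is fixed at bits/alloc_time_sec, the most energy-efficient policy under
    a deadline; the energy is zero if nothing is offloaded or no time is allocated.
    """
    if bits <= 0.0 or alloc_time_sec <= 0.0:
        return 0.0
    rate = bits / alloc_time_sec
    power = (noise_var / channel_gain) * (2.0 ** (rate / bandwidth_hz) - 1.0)
    return power * alloc_time_sec
```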

Next, consider an OFDMA system with sub-channels, denoted by a set . Let and denote the transmission power and channel gain of mobile on the -th sub-channel. Define as the sub-channel assignment indicator variable where indicates that sub-channel is assigned to mobile , and vice versa. Then the achievable rate (in bits/s) follows:

where and are the bandwidth and noise power for each sub-channel, respectively. Let denote the offloaded data size over the offloading duration, which can be set as the OFDMA symbol duration. The corresponding offloading energy consumption can be expressed as below, which is similar to that in [18], namely,

where and .
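Analogously, a user's OFDMA offloading energy can be sketched as the sum of per-sub-channel energies over its assigned sub-channels. The inverted-rate form mirrors the TDMA sketch above, and the constants and names are again assumptions.

```python
def ofdma_offload_energy(bits_per_subchannel, gains, subch_bw_hz, noise_var, symbol_sec):
    """Sum of per-sub-channel offloading energies for one user (Section 2.3, OFDMA sketch)."""
    total = 0.0
    for bits, gain in zip(bits_per_subchannel, gains):
        if bits > 0.0:
            rate = bits / symbol_sec
            total += symbol_sec * (noise_var / gain) * (2.0 ** (rate / subch_bw_hz) - 1.0)
    return total
```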

Figure 1: Multiuser MECO systems based on TDMA and OFDMA.
Figure 2: Mobile computation offloading.

2.4Cloud-Computing Model

Considering an edge cloud with finite (computation) capacity, for simplicity, the finite capacity is reflected in one of the following two constraints. 2 The first one upper-bounds the total number of CPU cycles of the offloaded data that can be handled by the cloud in each time slot. Let represent the cloud computation capacity measured by CPU cycles per time slot. Then it follows: . This constraint ensures negligible cloud computing latency. The other one considers non-negligible computing time at the cloud, which performs load balancing as in [23], given as where is the cloud computation capacity measured by CPU cycles per second. Note that is factored into the latency constraint in the sequel.
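The first finite-capacity constraint can be checked directly, as in the hypothetical helper below: the CPU cycles required by all offloaded data must fit the cloud's per-slot cycle budget.

```python
def cloud_capacity_satisfied(offloaded_bits, cycles_per_bit, cloud_cycles_per_slot):
    """True if the total offloaded computation fits the cloud's per-slot CPU-cycle budget."""
    required_cycles = sum(d * c for d, c in zip(offloaded_bits, cycles_per_bit))
    return required_cycles <= cloud_cycles_per_slot
```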

3Multiuser MECO for TDMA: Problem Formulation

In this section, resource allocation for multiuser MECO based on TDMA is formulated as an optimization problem. The objective is to minimize the weighted sum mobile energy consumption: , where the positive weight factors account for fairness among mobiles. Under the constraints on time-sharing, cloud computation capacity and computation latency, the resource allocation problem is formulated as follows:

First, it is easy to observe that the feasibility condition for Problem P1 is: . It shows that whether the cloud capacity constraint is satisfied determines the feasibility of this optimization problem, while the time-sharing constraint can always be satisfied and only affects the mobile energy consumption. Next, one basic characteristic of Problem P1 is given in the following lemma, proved in Appendix @.1.

Assume that Problem P1 is feasible. The direct solution for Problem P1 using the dual-decomposition approach (the Lagrange method) requires iterative computation and yields little insight into the structure of the optimal policy. To address these issues, we adopt a two-stage solution approach that requires first solving Problem P2 below, which follows from Problem P1 by relaxing the constraint on cloud capacity:

If the solution for Problem P2 violates the constraint on cloud capacity, Problem P1 is then incrementally solved building on the solution for Problem P2. This approach allows the optimal policy to be shown to have the said threshold-based structure and also facilitates the design of a low-complexity, close-to-optimal algorithm. It is interesting to note that Problem P2 corresponds to the case where the edge cloud has infinite capacity. The detailed procedures for solving Problems P1 and P2 are presented in the two subsequent sections.

4Multiuser MECO for TDMA: Infinite Cloud Capacity

In this section, by solving Problem P2 using the Lagrange method, we derive a threshold-based policy for the optimal resource allocation.

To solve Problem P2, the partial Lagrange function is defined as

where is the Lagrange multiplier associated with the time-sharing constraint. For ease of notation, define a function . Let denote the optimal solution for Problem P2, which always exists under the feasibility condition.

Then applying KKT conditions leads to the following necessary and sufficient conditions:

Note that for and , it can be derived from and that

Based on these conditions, the optimal policy for resource allocation is characterized in the following sub-sections.

4.1Offloading Priority Function

Define a (mobile) offloading priority function, which is essential for the optimal resource allocation, as follows:

with the constant defined as

This function is derived by solving a useful equation as shown in the following lemma.

Lemma ? is proved in Appendix @.2. The function generates an offloading priority value, , for mobile , depending on the corresponding variables quantifying fairness, local computing and the channel. The amount of data offloaded by a mobile grows with an increasing offloading priority, as shown in the next sub-section. It is useful to understand the effects of the parameters on the offloading priority, which are characterized as follows.

Lemma ? is proved in Appendix @.3, by deriving the first derivatives of with respect to each parameter. This lemma is consistent with the intuition that, to reduce energy consumption by offloading, the BS should schedule those mobiles having high computing energy consumption per bit (i.e., large and ) or good channels (i.e., large ).

4.2Optimal Resource-Allocation Policy

Based on conditions in - and Lemma ?, the main result of this section is derived, given in the following theorem.

See Appendix @.4.

Theorem ? reveals that the optimal resource-allocation policy has a threshold-based structure when offloading saves energy. In other words, since the exact case of rarely occurs in practice, the optimal policy makes a binary offloading decision for each mobile. Specifically, if the corresponding offloading priority exceeds a given threshold, namely , the mobile should offload all input data to the edge cloud; otherwise, the mobile should offload only the minimum amount of data under the computation latency constraint. This result is consistent with the intuition that the greedy method can lead to the optimal resource allocation. Note that there are two groups of users selected to perform the minimum offloading. One is the group of users that have a positive minimum offloaded data size, i.e., , but for which offloading cannot save energy since they have bad channels or small local computing energy such that and . The second group is the set of users for which offloading is energy-efficient, i.e., , but which have relatively small offloading priorities, i.e., ; they cannot perform complete offloading due to the limited radio resource.
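The threshold-based structure can be summarized by the small sketch below, which takes precomputed offloading priorities and a threshold (obtained from the Lagrange-multiplier search) and returns each user's offloaded data size. The helper and its inputs are hypothetical illustrations of the policy structure, not the paper's algorithm itself.

```python
def threshold_offloading_decision(priorities, input_bits, min_bits, threshold):
    """Threshold-based decision: complete offloading above the threshold, minimum below.

    priorities[k], input_bits[k] and min_bits[k] are per-user values; the threshold
    itself must come from enforcing the time-sharing constraint.
    """
    return {k: (input_bits[k] if priorities[k] > threshold else min_bits[k])
            for k in priorities}
```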

Furthermore, with the assumption of infinite cloud capacity, the effects of finite radio resource (i.e., the TDMA time-slot duration) are characterized in the following two propositions in terms of the number of offloading users, which can be easily derived using Theorem ?.

It indicates that a short time slot limits the number of offloading users. From another perspective, it means that if the highest-priority user has excessive data, it will take up all the radio resource.

Proposition ? reveals that when exceeds a given threshold, the offloading-desired mobiles for which offloading brings energy savings, will offload all computation to the cloud.

4.3Special Cases

The optimal resource-allocation policies for several special cases considering equal fairness factors are discussed as follows.

Uniform Channels and Local Computing

Consider the simplest case where are identical for all . Then all mobiles have uniform offloading priorities. In this case, for the optimal resource allocation, all mobiles can offload arbitrary data sizes so long as the sum offloaded data size satisfies the following constraint:

Uniform Channels

Consider the case of . The offloading priority for each mobile, say mobile , is only affected by the corresponding local-computing parameters and . Without loss of generality, assume that . Then the optimal resource-allocation policy is given in the following corollary of Theorem ?.

The result shows that the optimal resource-allocation policy follows a greedy approach that selects mobiles in descending order of energy consumption per bit for complete offloading until the time-sharing duration is fully utilized.

Uniform Local Computing

Consider the case of . Similar to the previous case, the optimal resource-allocation policy can be shown to follow the greedy approach that selects mobiles for complete offloading in descending order of channel gains.

5Multiuser MECO for TDMA: Finite Cloud Capacity

In this section, we consider the case of finite cloud capacity and analyze the optimal resource-allocation policy for solving Problem P1. The policy is shown to also have a threshold-based structure, like its infinite-capacity counterpart derived in the preceding section. Both optimal and sub-optimal algorithms are presented for policy computation. The results are then extended to the finite-capacity cloud with non-negligible computing time.

5.1Optimal Resource-Allocation Policy

To solve the convex Problem P1, the corresponding partial Lagrange function is written as

where is the Lagrange multiplier associated with the cloud capacity constraint. Using the above Lagrange function, it is straightforward to show that the corresponding KKT conditions can be modified from their infinite-capacity counterparts in - by replacing with , called the effective computation energy per cycle. The resultant effective offloading priority function, denoted as , can be modified accordingly from that in as

where

Moreover, it can be easily derived that a cloud with smaller capacity leads to a larger Lagrange multiplier . It indicates that, compared with in for the case of an infinite-capacity cloud, the effective offloading priority function here is also determined by the cloud capacity. Based on the above discussion, the main result of this section follows.

Computing the threshold for the optimal resource-allocation policy requires a two-dimensional search over the Lagrange multipliers , described in Algorithm ?. For an efficient search, it is useful to limit the ranges of and as shown below, which can be easily proved.

Note that corresponds to the case of infinite-capacity cloud and to the case where offloading yields no energy savings for any mobile.

5.2Sub-Optimal Resource-Allocation Policy

To reduce the computation complexity of Algorithm ? due to the two-dimensional search, one simple sub-optimal policy is proposed, as shown in Algorithm ?. The key idea is to decouple the computation and radio resource allocation. In Step , based on the approximated offloading priority in for the case of an infinite-capacity cloud, we allocate the computation resource to mobiles with high offloading priorities. Step optimizes the corresponding fractions of the slot given the offloaded data.

This sub-optimal algorithm has low computation complexity. Specifically, given a solution accuracy , the iteration complexity for one-dimensional search can be given as . For each iteration, the resource-allocation complexity is . Thus, the total computation complexity for the sub-optimal algorithm is . Moreover, its performance is shown by simulation to be close-to-optimal in the sequel.
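The one-dimensional search in the sub-optimal algorithm can be realized with a standard bisection, sketched below under the assumption that the supplied function returns the slot-time surplus (allocated offloading time minus slot duration) and is monotonically decreasing in the multiplier; the function and variable names are hypothetical.

```python
def bisect_lagrange_multiplier(time_excess, lam_lo, lam_hi, tol=1e-6):
    """Generic bisection for the one-dimensional multiplier search in Section 5.2.

    time_excess(lam) is assumed monotonically decreasing: a positive value means the
    time-sharing constraint is violated, so the multiplier must be increased.
    """
    while lam_hi - lam_lo > tol:
        lam_mid = 0.5 * (lam_lo + lam_hi)
        if time_excess(lam_mid) > 0.0:
            lam_lo = lam_mid   # slot over-used: tighten the offloading threshold
        else:
            lam_hi = lam_mid
    return 0.5 * (lam_lo + lam_hi)
```

The number of iterations grows as the logarithm of the inverse accuracy, consistent with the complexity stated above.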

5.3Extension: MECO with Non-Negligible Computing Time

Consider another finite-capacity cloud for which the computing time is non-negligible. Surprisingly, the resultant optimal policy is also threshold based, with respect to a different offloading priority function.

Assume that the edge cloud performs load balancing for the uploaded computation as in [23]. In other words, the CPU cycles are proportionally allocated to each user such that all users experience the same computing time: (see Section 2.4). Then the latency constraint is reformulated as , accounting for both the data transmission and the cloud computing time. The resultant optimization problem for minimizing the weighted sum mobile energy consumption is re-written as

The key challenge of Problem P3 is that the offloaded data size of each user affects the offloading energy consumption, the offloading duration and the cloud computing time, which makes the problem more complicated.

The feasibility condition for Problem P3 can be easily obtained as: Note that the case makes Problem P3 infeasible since the resultant offloading time () cannot enable computation offloading.

Similarly, to solve Problem P3, the partial Lagrange function is written as

Define two sets of important constants: and for all . Using KKT conditions, we can obtain the following offloading priority function

where

This function is derived by solving an equation in the following lemma, proved in Appendix @.5.

Recall that for a cloud that upper-bounds the offloaded computation, its offloading priority (i.e., in ) is a function of a Lagrange multiplier, which is determined by . However, for the current cloud with non-negligible computing time, the offloading priority function in is directly affected by the finite cloud capacity via .

In the following, the properties of , which is the key component of , are characterized.

It is proved in Appendix @.6 and indicates that the condition under which offloading saves energy consumption for this kind of finite-capacity cloud is the same as that for the infinite-capacity cloud.

Similar to Lemma ?, Lemma ? can be proved by deriving the first derivatives of with respect to each parameter. It shows that enhancing the cloud capacity will increase the offloading priorities of all users, which is the same as the result for a cloud with upper-bounded offloaded computation.

Based on the above discussion, the main result of this section is presented in the following theorem.

The optimal policy can be computed with a one-dimensional search for , following a procedure similar to Algorithm ?.

6Multiuser MECO for OFDMA

In this section, we consider resource allocation for OFDMA MECO. Both the OFDM sub-channels and the offloaded data sizes are optimized for energy-efficient multiuser MECO. To solve the formulated mixed-integer optimization problem, a sub-optimal algorithm is proposed by defining an average offloading priority function from its TDMA counterpart; it is shown to have close-to-optimal performance in simulation.

6.1Multiuser MECO for OFDMA: Infinite Cloud Capacity

Consider an OFDMA system (see Section 2) with mobiles and sub-channels. The cloud is assumed to have infinite computation capacity. Given the time-slot duration , the latency constraint for local computing is rewritten as . Moreover, the time-sharing constraint is replaced by sub-channel constraints, expressed as for all . Then the corresponding optimization problem for the minimum weighted sum mobile energy consumption based on OFDMA is readily formulated as:

Observe that Problem P4 is a mixed-integer programming problem that is difficult to solve. It involves the joint optimization of both continuous variables and integer variables . One common solution method is relaxation-and-rounding, which first relaxes the integer constraint to a real-valued constraint [18] and then determines the integer solution using rounding techniques. Note that the integer-relaxed problem is a convex problem which can be solved by powerful convex optimization techniques. An alternative method is dual decomposition as in [24], which has been proved to be optimal when the number of sub-channels goes to infinity. However, both algorithms perform extensive iterations and shed little insight on the policy structure.

To reduce the computation complexity and characterize the policy structure, a low-complexity sub-optimal algorithm is proposed below using a decomposition method, motivated by the following existing results and observations. First, for traditional OFDMA systems, a low-complexity sub-channel allocation policy was designed in [25] by defining average channel gains, which was shown to achieve close-to-optimal performance in simulation. Next, for the integer-relaxed resource allocation problem, applying the KKT conditions directly can lead to its optimal solution. It can be observed that, for each sub-channel, users with higher offloading priorities should be allocated more radio resources. Therefore, in the proposed algorithm, the initial resource and offloaded data allocation is first determined by defining average channel gains and an average offloading priority function. Then, the integer sub-channel assignment is performed according to the offloading priority order, followed by the adjustment of offloaded data allocation over assigned sub-channels for each user. The main procedures of this sequential algorithm are as follows.

  1. Phase 1 [Sub-Channel Reservation for Offloading-Required Users]: Consider the offloading-required users that have . The offloading priorities of these users are sorted in descending order. Based on this, the available sub-channels with high priorities are assigned to the corresponding users sequentially, and each such user is allocated one sub-channel.

  2. Phase 2 [Initial Resource and Offloaded Data Allocation]: For the unassigned sub-channels, using the average channel gain over these sub-channels for each user, the OFDMA MECO problem is transformed into its TDMA counterpart. Then, by defining an average offloading priority function, the optimal total sub-channel number and offloaded data size for each user are derived. Note that the resultant sub-channel numbers may not be integers.

  3. Phase 3 [Integer Sub-Channel Assignment]: Given constraints on the rounded total sub-channel numbers for each user derived in Phase , the specific integer sub-channel assignment is determined by the offloading priority order. Specifically, each sub-channel is assigned to the user that requires sub-channel assignment and has a higher offloading priority than the others.

  4. Phase 4 [Adjustment of Offloaded Data Allocation]: For each user, based on the sub-channel assignment in Phase , the specific offloaded data allocation is optimized.

Before stating the algorithm, let denote the offloading priority function for user at sub-channel . It can be modified from the TDMA counterpart in by replacing , and with , and , respectively. Let reflect the offloading priority order, which is constituted by , arranged in descending order, e.g., . The set of offloading-required users is denoted by , given as . The sets of assigned and unassigned sub-channels are denoted by and , initialized as and . For each user, say user , the assigned sub-channel set is represented by , initialized as . In addition, the sub-channel assignment indicators are set as at the beginning.

Using these definitions, the detailed control policies are elaborated as follows.

Sub-Channel Reservation for Offloading-Required Users

The purpose of this phase is to guarantee that the computation latency constraints are satisfied for all users. This can be achieved by reserving one sub-channel for each offloading-required user as presented in Algorithm ?.

Observe that Step in the loop searches for the highest offloading priority over unassigned sub-channels for the remaining offloading-required users ; and then allocates sub-channel to user . This sequential sub-channel assignment follows the descending offloading priority order. Moreover, the condition for the loop ensures that all offloading-required users will be allocated with one sub-channel. This phase only has a complexity of since it just performs the operation for at most iterations.
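A compact sketch of this reservation phase is given below. The per-(user, sub-channel) priority values are assumed to be precomputed, and all names are hypothetical.

```python
def reserve_subchannels(required_users, subchannels, priority):
    """Phase 1 sketch: each offloading-required user gets one sub-channel, assigned in
    descending order of the per-(user, sub-channel) offloading priority."""
    remaining = set(required_users)
    unassigned = set(subchannels)
    reservation = {}
    while remaining and unassigned:
        user, chan = max(((u, c) for u in remaining for c in unassigned),
                         key=lambda pair: priority[pair])
        reservation[user] = chan
        remaining.discard(user)
        unassigned.discard(chan)
    return reservation
```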

Initial Resource and Offloaded Data Allocation

This phase determines the total allocated sub-channel number and offloaded data size for each user. Note that the integer constraint on sub-channel allocation makes Problem P4 challenging, as it would require an exhaustive search. To reduce the computation complexity, we first derive the non-integer total number of sub-channels for each user as below.

Using a method similar to that in [26], for each user, say user , let denote its average sub-channel gain, given by where gives the cardinality of the unassigned sub-channel set resulting from Phase . Then, the MECO OFDMA resource allocation Problem P4 is transformed into its TDMA counterpart, Problem P5, as:

where are the allocated total sub-channel numbers and offloaded data sizes.

Define an average offloading priority function as in by replacing with . The optimal control policy, denoted by , can be directly obtained following the same method as for Theorem ?. Note that this phase only invokes the bisection search. Similar to Section 5.2, the computation complexity can be represented by .
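The averaging step that maps the OFDMA problem to its TDMA counterpart is simple to sketch; the gain table keyed by user and sub-channel is a hypothetical input.

```python
def average_subchannel_gains(gains, unassigned_subchannels):
    """Phase 2 sketch: per-user average channel gain over the still-unassigned sub-channels."""
    count = len(unassigned_subchannels)
    return {user: sum(per_chan[n] for n in unassigned_subchannels) / count
            for user, per_chan in gains.items()}
```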

Integer Sub-Channel Assignment

Given the non-integer total sub-channel number allocation obtained in Phase , in this phase, users are assigned with specific integer sub-channels based on offloading priority order. Specifically, it includes the following two steps as in Algorithm ?.

In the first step, to guarantee that there are enough sub-channels for allocation, each user is allocated sub-channels. However, allocating specific sub-channels to users given the rounded numbers is still hard; the optimal solution can be obtained using the Hungarian Algorithm [27], which has a complexity of . To further reduce the complexity, a priority-based sub-channel assignment is proposed as follows. Let denote the set of users that require sub-channel assignment, which is initialized as and will be updated as in Step by deleting any user that has been allocated its maximum number of sub-channels. During the loop, for users in set and available sub-channels , we search for the highest offloading priority, indexed as , and assign sub-channel to user .

In the second step, all users compete for the remaining sub-channels, since is obtained by rounding down in the first step. In particular, each unassigned sub-channel in is assigned to the user with the highest offloading priority. In total, the computation complexity of this phase is .
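The two-step, priority-based assignment of Phase 3 is sketched below: users first receive their rounded-down sub-channel counts in priority order, and each leftover sub-channel then goes to the user with the highest priority on it. All inputs and names are hypothetical.

```python
import math

def integer_subchannel_assignment(target_counts, subchannels, priority):
    """Phase 3 sketch: rounded-down quotas first, then competition for leftovers."""
    quota = {user: math.floor(count) for user, count in target_counts.items()}
    assigned = {user: [] for user in target_counts}
    free = set(subchannels)
    active = {user for user, q in quota.items() if q > 0}
    # Step 1: fill each user's quota following the descending priority order.
    while active and free:
        user, chan = max(((u, c) for u in active for c in free),
                         key=lambda pair: priority[pair])
        assigned[user].append(chan)
        free.discard(chan)
        if len(assigned[user]) == quota[user]:
            active.discard(user)
    # Step 2: every remaining sub-channel goes to the user with the highest priority on it.
    for chan in list(free):
        best_user = max(target_counts, key=lambda u: priority[(u, chan)])
        assigned[best_user].append(chan)
    return assigned
```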

Adjustment of Offloaded Data Allocation

Based on the results from Phase , for each user, say , this phase allocates the total offloaded data over the assigned sub-channels to minimize the individual mobile energy consumption. The corresponding optimization problem is formulated as below, with the solution given in Proposition ?.

Note that it is possible that some sub-channels are allocated to user but receive no offloaded data due to their poor sub-channel gains. For each user, the optimal solution is obtained by performing a one-dimensional search for , whose computation complexity is since . Thus, the total complexity of this phase is , considering offloaded data allocation for all users.
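Phase 4 can be sketched as a water-filling-style split of a user's total offloaded bits over its assigned sub-channels, found by a one-dimensional search over a single multiplier. The per-sub-channel energy form and the constants are assumptions carried over from the offloading-model sketches above, not the paper's exact solution; consistent with the remark above, poor sub-channels may end up with zero bits.

```python
import math

def allocate_bits_over_subchannels(total_bits, gains, subch_bw_hz, noise_var,
                                   symbol_sec, tol=1e-9):
    """Phase 4 sketch: split total_bits over assigned sub-channels to reduce offloading energy."""
    def bits_at_level(level):
        # Energy-optimal bits per sub-channel for a given multiplier, clipped at zero.
        return [max(0.0, symbol_sec * subch_bw_hz *
                    math.log2(level * g * subch_bw_hz / (noise_var * math.log(2))))
                for g in gains]

    lo, hi = 1e-12, 1e12
    while hi - lo > tol * hi:
        mid = math.sqrt(lo * hi)          # bisection on a logarithmic scale
        if sum(bits_at_level(mid)) < total_bits:
            lo = mid
        else:
            hi = mid
    return bits_at_level(hi)
```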

Figure 3: Total mobile energy consumption vs. cloud computation capacity for a TDMA system.

6.2Multiuser MECO for OFDMA: Finite Cloud Capacity

For the case of a finite-capacity cloud based on OFDMA, the corresponding sub-optimal low-complexity algorithm can be derived by modifying that of the infinite-capacity cloud as follows.

Recall that for TDMA MECO, modifying the offloading priority function of the infinite-capacity cloud leads to the optimal resource allocation for the finite-capacity cloud. Therefore, by a similar method, modifying Phase to account for the finite computation capacity will give the new optimal initial resource and offloaded data allocation for all users. The other phases in Section 6.1 can be straightforwardly extended to the current case and are omitted for simplicity.

7Simulation Results

In this section, the performance of the proposed resource-allocation algorithms for both TDMA and OFDMA systems is evaluated by simulation based on channel realizations. The simulation settings are as follows unless specified otherwise. There are users with equal fairness factors, i.e., for all , such that the weighted sum mobile energy consumption represents the total mobile energy consumption. The time slot ms. Both the channels in TDMA and the sub-channels in OFDMA are modeled as independent Rayleigh fading with average power loss set as . The variance of the complex white Gaussian channel noise is W. Consider mobile . The computation capacity is uniformly selected from the set GHz and the local computing energy per cycle follows a uniform distribution in the range J/cycle, similar to [12]. For the computing task, both the data size and the required number of CPU cycles per bit follow uniform distributions with KB and cycles/bit. All random variables are independent for different mobiles, modeling heterogeneous mobile computing capabilities. Last, the finite-capacity cloud is modeled as the one with upper-bounded offloaded computation, set as cycles per slot. 3

7.1Multiuser MECO for TDMA

Consider a MECO system where the bandwidth MHz. For performance comparison, a baseline equal resource-allocation policy is considered, which allocates equal offloading time durations to the mobiles that satisfy and, based on this, optimizes the offloaded data sizes.

The curves of total mobile energy consumption versus the cloud computation capacity are displayed in Figure 3. It can be observed that the performance of the sub-optimal policy approaches that of the optimal one as the cloud computation capacity increases, and achieves substantial energy-savings gains over the equal resource-allocation policy. Furthermore, the total mobile energy consumption is invariant after the cloud computation capacity exceeds some threshold (about ). This suggests that there exists some critical value of the cloud computation capacity, above which increasing the capacity yields no reduction in the total mobile energy consumption.

Fig. ? shows the curves of total mobile energy consumption versus the time-slot duration . Several observations can be made. First, the total mobile energy consumption decreases as the time-slot duration grows. Next, the sub-optimal policy computed using Algorithm ? is found to have close-to-optimal performance and yields a total mobile energy consumption less than half of that for the equal resource-allocation policy. The energy reduction is more significant for a shorter time-slot duration. The reason is that, without the optimization of the time fractions, the offloading energy of the baseline policy grows exponentially as the allocated time fractions decrease.

Next, Fig. ? plots the curves of total energy consumption versus the number of mobiles given a fixed cloud computation capacity set as cycles per slot. It shows that the total energy consumption of the proposed policy grows with the number of mobiles at a much slower rate than that of the equal-allocation policy. Again, the designed sub-optimal policy is observed to be close-to-optimal.

7.2Multiuser MECO for OFDMA

Consider an OFDMA system where cycles per slot (modeling large cloud capacity), MHz and W. The proposed low-complexity sub-optimal resource allocation policy is compared with two baseline policies. One is the relaxation-and-rounding resource-allocation policy, for which the integer-relaxed convex problem is computed by a convex problem solver, CVX in Matlab, and then the integer solution is determined by a rounding technique. The other one is a greedy resource-allocation policy. It assigns each sub-channel to the user that has the highest offloading priority on this sub-channel, followed by the optimal data allocation over assigned sub-channels for each user. However, this policy does not consider the effect of heterogeneous computation loads.

Fig. ? depicts the curves of total mobile energy consumption versus the number of sub-channels in an OFDMA MECO system with users. It can be observed that the performance of the proposed sub-optimal resource allocation is close to that of the relaxation-and-rounding policy, especially when the number of sub-channels is large (e.g., ). However, the proposed sub-optimal policy has a much smaller computation complexity, as discussed in Remark ?. In addition, the proposed policy has a significant energy-savings gain over the greedy policy. The reason is that it considers the varying computation loads over users and allocates more sub-channels to heavily loaded users, while the greedy policy only offloads computation from users with high priorities. It also suggests that increasing the number of sub-channels has little effect on the total energy savings if this number is above a threshold (about ), but otherwise it decreases the total mobile energy consumption significantly.

Fig. ? gives the curves of total mobile energy consumption versus the number of users for an OFDMA system with sub-channels. It shows that the energy consumption of the three policies increases with the number of users following the same, almost linear trend. However, the proposed policy has a much smaller growth rate than the greedy one and approaches the performance of the relaxation-and-rounding policy.

8Conclusion

This work studies resource allocation for a multiuser MECO system based on TDMA/OFDMA, accounting for both the cases of infinite and finite cloud computation capacities. For the TDMA MECO system, it shows that, to achieve the minimum weighted sum mobile energy consumption, the optimal resource allocation policy should have a threshold-based structure. Specifically, we derive an offloading priority function that depends on the local computing energy and channel gains. Based on this, the BS makes a binary offloading decision for each mobile, where users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Furthermore, a simple sub-optimal algorithm is proposed to reduce the complexity of computing the threshold for the finite-capacity cloud. Then, we extend this threshold-based policy structure to the OFDMA system and design a low-complexity algorithm to solve the formulated mixed-integer optimization problem, which is shown to have close-to-optimal performance in simulation.


@.1Proof of Lemma

Since is a convex function, its perspective function [29], i.e., , is still convex. Using the same technique as in [17], and jointly considering the cases and , is still convex. Thus, the objective function, being the summation of a set of convex functions, preserves convexity. Combining this with the linear convex constraints leads to the result.

@.2Proof of Lemma

First, we derive a general result that is the root of the equation with respect to , as follows. According to the definitions of and , we have

Therefore, the solution for the general equation is

Note that to ensure in Problem P1, it requires from . Combining this with , it leads to where is defined in . Then, substituting and into and performing arithmetic operations gives the desired result as in .

@.3Proof of Lemma

First, the monotone increasing property in terms of is straightforward, since the offloading priority function in is linear in . Next, by rewriting as

it is easy to conclude that is monotone increasing with respect to and . Last, the first derivative of for can be derived as:

For , we have , leading to the desired results.

@.4Proof of Theorem

First, to prove this theorem, we need the following two lemmas, which can be easily proved using the definition of the Lambert function and its properties.

Then, consider case 1) in Theorem ?. Note that for mobile , if and , it results in derived from . Thus, if these two conditions are satisfied for all , it leads to .

For case 2), if there exists a mobile such that or , it leads to . Moreover, the time-sharing constraint should be active, since any remaining time could be used to extend the offloading duration so as to reduce the transmission energy. Next, consider each user, say user . If , then from and , should satisfy the following:

Using Lemma ? and Lemma ?, we have the following:

  1. If , it follows that . Then, from , it gives . From , and , it follows that .

  2. If , it follows that .

  3. If , it follows that . Combining this with leads to . From , and , it follows that .

Furthermore, if , it follows that . Note that this case can be included in the scenario of with the definition of in . Last, from , it follows that

where is obtained using Lemma ?, which completes the proof.

@.5Proof of Lemma

First, by arithmetic operations with the Lambert function, it can be proved that the solution for a general equation is

Next, to solve equation , let and, using the derivation method in Lemma ?, we have

Defining , can be rewritten as

where and are defined in Lemma ?. Using the Lambert function, the solution for can be obtained as , where is defined in . Then, it follows that

where comes from the relationship among and ; follows the definition of and ; is derived from . This leads to the desired result.

@.6Proof of Lemma

It is equivalent to prove, as below, that when , we have . According to the definition of the Lambert function, we have . Then, it leads to

Using the monotone increasing property of the Lambert function, is equivalent to .

Footnotes

  1. For asynchronous computation offloading among users, the maximum additional latency for each user is one time slot. Moreover, this framework can be extended to predictive computing by designing control policies for the incoming data.
  2. For simplicity, we consider either a computation-load or a computation-time constraint at one time but not both simultaneously. However, note that the two constraints can be considered equivalent. Specifically, limiting the cloud computation load allows the computation to be completed within the required time and vice versa. The current resource-allocation policies can be extended to account for more elaborate constraints, which are outside the scope of the paper.
  3. The performance of finite-capacity cloud with non-negligible computing time has similar observations and is omitted due to limited space.

References

  1. M. Swan, “Sensor mania! The Internet of Things, wearable computing, objective metrics, and the quantified self 2.0,” J. Sens. Actuator Netw., vol. 1, pp. 217–253, 2012.
  2. M. Patel, B. Naughton, C. Chan, N. Sprecher, S. Abeta, A. Neal, et al., “Mobile-edge computing introductory technical white paper,” White Paper, Mobile-edge Computing (MEC) industry initiative, 2014.
  3. K. Kumar and Y.-H. Lu, “Cloud computing for mobile users: Can offloading computation save energy?,” IEEE Computer, no. 4, pp. 51–56, 2010.
  4. E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, and P. Bahl, “MAUI: Making smartphones last longer with code offload,” in Proc. ACM MobiSys, pp. 49–62, Jun. 2010.
  5. Z. Xiao, W. Song, and Q. Chen, “Dynamic resource allocation using virtual machines for cloud computing environment,” IEEE Trans. Parallel Distrib. Syst., vol. 24, pp. 1107–1117, Sep. 2013.
  6. H. N. Van, F. D. Tran, and J.-M. Menaud, “Performance and power management for cloud infrastructures,” in Proc. IEEE Cloud Computing, pp. 329–336, 2010.
  7. A. Ahmed and E. Ahmed, “A survey on mobile edge computing,” in Proc. IEEE ISCO, 2016.
  8. W. Zhang, Y. Wen, K. Guan, D. Kilper, H. Luo, and D. O. Wu, “Energy-optimal mobile cloud computing under stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4569–4581, 2013.
  9. C. You, K. Huang, and H. Chae, “Energy efficient mobile cloud computing powered by wireless energy transfer,” IEEE J. Select. Areas Commun., vol. 34, no. 5, pp. 1757–1771, 2016.
  10. Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” to appear in IEEE J. Select. Areas Commun. (Available: http://arxiv.org/pdf/1605.05488v1.pdf).
  11. X. Xiang, C. Lin, and X. Chen, “Energy-efficient link selection and transmission scheduling in mobile cloud computing,” IEEE Wireless Commun. Lett., vol. 3, pp. 153–156, Jan. 2014.
  12. X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE Trans. Netw., vol. 24, pp. 2795–2808, Oct. 2016.
  13. S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio and computational resources for multicell mobile-edge computing,” IEEE Trans. Signal Inf. Process. Netw., vol. 1, pp. 89–103, Jun. 2015.
  14. T. Zhao, S. Zhou, X. Guo, Y. Zhao, and Z. Niu, “A cooperative scheduling scheme of local cloud and Internet cloud for delay-aware mobile cloud computing,” Proc. IEEE Globecom, 2015.
  15. Y. Ge, Y. Zhang, Q. Qiu, and Y.-H. Lu, “A game theoretic resource allocation for overall energy minimization in mobile cloud computing system,” in Proc. IEEE ISLPED, pp. 279–284, 2012.
  16. R. Kaewpuang, D. Niyato, P. Wang, and E. Hossain, “A framework for cooperative resource management in mobile cloud computing,” IEEE J. Select. Areas Commun., vol. 31, no. 12, pp. 2685–2700, 2013.
  17. X. Wang and G. B. Giannakis, “Power-efficient resource allocation for time-division multiple access over fading channels,” IEEE Trans. Inf. Theory, vol. 54, pp. 1225–1240, Mar. 2008.
  18. C. Y. Wong, R. S. Cheng, K. B. Lataief, and R. D. Murch, “Multiuser OFDM with adaptive subcarrier, bit, and power allocation,” IEEE J. Select. Areas Commun., vol. 17, pp. 1747–1758, Oct. 1999.
  19. S.-J. Oh, D. Zhang, and K. M. Wasserman, “Optimal resource allocation in multiservice CDMA networks,” IEEE Trans. Wireless Commun., vol. 2, pp. 811–821, Jul. 2003.
  20. L. B. Le and E. Hossain, “Resource allocation for spectrum underlay in cognitive radio networks,” IEEE Trans. Wireless Commun., vol. 7, pp. 5306–5315, Dec. 2008.
  21. Y. Choi, H. Kim, S.-w. Han, and Y. Han, “Joint resource allocation for parallel multi-radio access in heterogeneous wireless networks,” IEEE Trans. Wireless Commun., vol. 9, pp. 3324–3329, Nov. 2010.
  22. B. Prabhakar, E. Uysal Biyikoglu, and A. El Gamal, “Energy-efficient transmission over a wireless link via lazy packet scheduling,” in Proc. IEEE INFOCOM, vol. 1, pp. 386–394, 2001.
  23. S.-C. Wang, K.-Q. Yan, W.-P. Liao, and S.-S. Wang, “Towards a load balancing in a three-level cloud computing network,” in Proc. IEEE Int. Conf. Comput. Sci. Inf. Tech., vol. 1, pp. 108–113, 2010.
  24. M. Tao, Y.-C. Liang, and F. Zhang, “Resource allocation for delay differentiated traffic in multiuser OFDM systems,” IEEE Trans. Wireless Commun., vol. 7, no. 6, pp. 2190–2201, 2008.
  25. J. Huang, V. G. Subramanian, R. Agrawal, and R. Berry, “Joint scheduling and resource allocation in uplink OFDM systems for broadband wireless access networks,” IEEE J. Select. Areas Commun., vol. 27, no. 2, pp. 226–234, 2009.
  26. D. Kivanc, G. Li, and H. Liu, “Computationally efficient bandwidth allocation and power control for OFDMA,” IEEE Trans. Wireless Commun., vol. 2, no. 6, pp. 1150–1158, 2003.
  27. H. W. Kuhn, “The hungarian method for the assignment problem,” Naval research logistics quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.
  28. A. Ben-Tal and A. Nemirovski, Lectures on modern convex optimization: analysis, algorithms, and engineering applications, vol. 2. SIAM, 2001.
  29. S. Boyd and L. Vandenberghe, Convex optimization. Cambridge University Press, 2004.