Multiuser Resource Allocation for Mobile-Edge Computation Offloading

Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. MECO is thereby envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we consider resource allocation in a MECO system comprising multiple users that time-share a single edge cloud and have different computation loads. The optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted-sum mobile energy consumption under a computation-latency constraint, for both the cases of infinite and finite edge-cloud computation capacities. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which assigns priorities to users according to their channel gains and local-computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Computing the threshold requires an iterative procedure. To reduce the complexity, a sub-optimal resource-allocation algorithm is proposed and shown by simulation to have close-to-optimal performance.




The realization of the Internet of Things (IoT) will connect tens of billions of resource-limited mobiles, e.g., mobile devices, sensors and wearable computing devices, to the Internet via cellular networks. The finite battery lives and limited computation capacities of mobiles pose challenges for designing the IoT. One promising solution is to leverage mobile-edge computing [1] and offload intensive mobile computation, with short latency, to nearby clouds at the edges of cellular networks, called edge clouds; this is referred to as mobile-edge computation offloading (MECO). In this paper, we consider a MECO system with a single edge cloud serving multiple users and investigate energy-efficient resource allocation.

Mobile computation offloading (MCO) (or mobile cloud computing) has been extensively studied in computer science, including system architectures [2], virtual machine migration [3] and server consolidation [4]. It is commonly assumed that the implementation of MCO relies on a network architecture with a central cloud (e.g., a data center). This architecture has the drawbacks of high overhead and long backhaul latency [5] and will soon encounter the performance bottleneck of finite backhaul capacity in view of exponential mobile traffic growth. These issues can be overcome by MECO based on a network architecture supporting distributed mobile-edge computing.

Energy-efficient MECO requires the joint design of MCO and wireless communication techniques. Recent years have seen research progress on this topic. For a single-user MECO system, the optimal offloading decision policy was derived in [6] by comparing the energy consumption of optimized local computing (with variable CPU cycles) and offloading (with variable transmission rates). This framework was further developed in [7] and [8] to enable adaptive offloading powered by wireless energy transfer and energy harvesting, respectively. In [9], also for a single-user MECO system, dynamic offloading was integrated with adaptive LTE/WiFi link selection. Moreover, resource allocation for MECO has been studied for various types of multiuser systems [10]. In [10], considering a multi-cell MECO system, the radio and computation resources were jointly allocated to minimize the mobile energy consumption under offloading latency constraints. With the coexistence of central and edge clouds, the optimal user scheduling for offloading to different clouds was studied in [11]. In addition, distributed offloading for multiuser MECO was designed in [12] using game theory for both energy and latency minimization. Prior work on MECO resource allocation focuses on complex algorithmic designs and yields little insight into the optimal policy structures. In contrast, for a multiuser MECO system based on time-division multiple access (TDMA), the optimal resource-allocation policy is shown in the current work to have a simple threshold-based structure with respect to a derived offloading priority function.

Resource allocation has been widely studied for various types of multiuser communication systems, e.g., TDMA (see e.g., [13]), orthogonal frequency-division multiple access (OFDMA) (see e.g., [14]) and code-division multiple access (CDMA) (see e.g., [15]). Note that all of these works focus only on radio resource allocation. In contrast, for the newly proposed MECO systems, the computation and radio resources at edge clouds need to be jointly allocated for maximum mobile energy savings, which makes the algorithmic design more complex.

This paper considers a multiuser MECO system based on TDMA, for both the cases of infinite and finite cloud computation capacities. The optimal resource-allocation policy is derived by solving a convex optimization problem that minimizes the weighted-sum mobile energy consumption. Note that the consideration of MECO simplifies the problem formulation, since the long backhaul latency and heavy overhead of central clouds can be neglected. To solve the problem, an offloading priority function is derived that assigns priorities to users depending on their channel gains and local-computing energy consumption. Based on this, the optimal policy is proved to have an insightful threshold-based structure that prescribes complete or minimum offloading for users with priorities above or below a given threshold, respectively. Moreover, to reduce the complexity of computing the threshold, a simple sub-optimal resource-allocation algorithm is designed and shown by simulation to have close-to-optimal performance.

2 System Model

Consider a multiuser MECO system shown in Fig. ?(a) that comprises single-antenna mobiles, indexed as , and one single-antenna base station (BS) that is the gateway of an edge cloud. Time is divided into slots each with a duration of seconds. As shown in Fig. ?(a), each slot comprises two sequential phases for 1) mobile offloading or local computing and 2) cloud computing and downloading of computation results from the edge cloud to mobiles. Cloud computing has small latency; the downloading does not consume mobile energy and, furthermore, is much faster than offloading due to the relatively smaller sizes of computation results. For these reasons, the second phase is assumed to have a negligible duration compared with the first phase and is not considered in resource allocation. Considering an arbitrary slot, the BS schedules a subset of users for complete/partial offloading based on TDMA. A user with partial or no offloading computes a fraction of, or all, its input data, respectively, using a local CPU. Moreover, the BS is assumed to have perfect knowledge of the multiuser channel gains, local-computing energy per bit and sizes of input data at all users. Using this information, the BS selects offloading users, determines the offloaded data sizes and allocates fractions of the slot to offloading users with the criterion of minimum weighted-sum mobile energy consumption. In addition, channels are assumed to remain constant within each slot.

The model of local computing is described as follows. Assume that the CPU frequency is fixed at each user and may vary over users. Consider an arbitrary time slot. Following the model in [12], let denote the number of CPU cycles required for computing -bit of input data at the -th mobile, and the energy consumption per cycle for local computing at this user. Then the product gives computing energy per bit. As shown in Fig. ?(b), mobile is required to compute -bit input data within the slot, out of which -bit is offloaded and -bit is computed locally. Then the total energy consumption for local computing at mobile , denoted as , is given by . Let denote the computation capacity of mobile that is measured by the number of CPU cycles per second. Under the computation latency constraint, . As a result, the offloaded data at mobile has the minimum size of with , where the function
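Since the extraction lost the paper's inline symbols, the local-computing model above can be sketched in code with assumed stand-in names for the quantities described in the text (cycles per bit, energy per cycle, local CPU frequency, slot duration):

```python
def local_energy(bits_local, cycles_per_bit, energy_per_cycle):
    """Energy for computing bits_local bits on the local CPU:
    bits x (cycles per bit) x (energy per cycle)."""
    return bits_local * cycles_per_bit * energy_per_cycle

def min_offload_bits(total_bits, cycles_per_bit, cpu_freq, T):
    """Under the computation latency constraint, the local CPU can
    process at most cpu_freq * T / cycles_per_bit bits within the
    slot; any remaining bits must be offloaded."""
    local_cap = cpu_freq * T / cycles_per_bit
    return max(0.0, total_bits - local_cap)
```

For example, a 1-Mbit task on a 1-GHz local CPU requiring 1000 cycles/bit within a 0.1-s slot must offload at least 0.9 Mbit.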

Next, the energy consumption for offloading is modeled. Let denote the channel gain and the transmission power for mobile . Then the achievable rate, denoted by , is given as:

where is the variance of complex white Gaussian channel noise. The fraction of slot allocated to mobile for offloading is denoted as with , where corresponds to no offloading. For the case of offloading (), the transmission rate is fixed as since this is the most energy-efficient transmission policy under a deadline constraint. Define a function . It follows from that the energy consumption for offloading at mobile is
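The paper's exact rate expression was lost in extraction; assuming the standard Shannon form (rate B*log2(1 + h2*p/N0), inverted to obtain the required power), the offloading energy for a fixed-rate transmission can be sketched as:

```python
def offload_energy(bits, t_frac, T, h2, N0, B):
    """Transmission energy for offloading `bits` bits within the
    allocated fraction t_frac of a slot of duration T, over a channel
    with power gain h2. Sketch only: the power is obtained by
    inverting an assumed Shannon rate B*log2(1 + h2*p/N0)."""
    if bits == 0 or t_frac == 0:
        return 0.0
    rate = bits / (t_frac * T)                     # fixed, most energy-efficient rate
    power = (N0 / h2) * (2.0 ** (rate / B) - 1.0)  # invert the rate for the power
    return power * t_frac * T
```

The resulting energy is the perspective of a convex function of the rate, which underlies the convexity of the resource-allocation problems formulated below.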

Note that if either or , is equal to zero.

Last, consider the edge cloud. It is assumed that the edge cloud has finite computation capacity, denoted as , measured as the maximum CPU cycles allowed for computing the sum offloaded data in each slot: . This constraint ensures low latency for cloud computing.
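With assumed per-user cycle counts (symbol names are stand-ins, since the inline math was lost), the cloud capacity constraint can be checked as follows:

```python
def cloud_feasible(offload_bits, cycles_per_bit, capacity_F):
    """Cloud computation-capacity constraint (sketch): the total CPU
    cycles required for all offloaded data in a slot must not exceed
    the cloud capacity F."""
    total_cycles = sum(c * b for c, b in zip(cycles_per_bit, offload_bits))
    return total_cycles <= capacity_F
```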

3 Multiuser MECO: Problem Formulation

In this section, resource allocation for multiuser MECO is formulated as an optimization problem. The objective is to minimize the weighted sum mobile energy consumption: , where the positive weight factors account for fairness among mobiles. Under the constraints on time-sharing, cloud computation capacity and computation latency, the resource allocation problem is formulated as follows:

Several basic characteristics of Problem P1 are given in the following two lemmas.

See Appendix Section 7.1.

See Appendix Section 7.2.

Lemma ? shows that whether the cloud computation capacity constraint is satisfied determines the feasibility of this optimization problem, while the time-sharing constraint can always be satisfied and only affects the mobile energy consumption.

Assume that Problem P1 is feasible. The direct solution of Problem P1 using the dual-decomposition approach (the Lagrange method) requires iterative computation and yields no insight into the structure of the optimal policy. To address these issues, we adopt a two-stage solution approach that requires first solving Problem P2 below that relaxes Problem P1 by removing the constraint on the cloud computation capacity:

If the solution for Problem P2 violates the constraint on cloud computation capacity, Problem P1 is then incrementally solved building on the solution for Problem P2. This approach allows the optimal policy to be shown to have the said threshold-based structure and also facilitates the design of a low-complexity, close-to-optimal resource-allocation algorithm. It is interesting to note that Problem P2 corresponds to the case where the edge cloud has infinite computation capacity. The detailed procedures for solving Problems P1 and P2 are presented in the subsequent two sections.

4 Multiuser MECO: Infinite Cloud Capacity

In this section, by solving Problem P2 using the Lagrange method, we derive a threshold-based policy for the optimal resource allocation. Moreover, the policy is simplified for several special cases.

To solve Problem P2, the Lagrange function is defined as

where is the Lagrange multiplier associated with the time-sharing constraint. For ease of notation, define a function . Let denote the solution for Problem P2, which always exists according to Lemma ?. Then applying the KKT conditions leads to the following necessary and sufficient conditions:

Based on these conditions, the optimal policy for resource allocation is characterized in the following sub-sections.

4.1 Offloading Priority Function

Define an (mobile) offloading priority function, which is essential for the optimal resource allocation, as follows:

with the constant defined as

This function is derived by solving a useful equation as shown in the following lemma.

See Appendix Section 7.3.

The function generates an (mobile) offloading priority value, , for mobile depending on the corresponding variables quantifying fairness, local computing and the channel. The amount of data offloaded by a mobile grows with increasing offloading priority, as shown in the next sub-section. It is useful to understand the effects of the parameters on the offloading priority, which are characterized as follows.

Lemma ? can be easily proved by taking the first derivatives of with respect to each parameter. Moreover, it is consistent with the intuition that, to reduce energy consumption by offloading, the BS should schedule those mobiles that have high computing energy consumption per bit (i.e., large and ) or good channels (i.e., large ).

4.2 Optimal Resource-Allocation Policy

Based on conditions in - and Lemma ?, the main result of this section is derived, given in the following theorem.

See Appendix Section 7.4.

Theorem ? reveals that the optimal resource-allocation policy has a threshold-based structure when offloading saves energy. In other words, since the exact case of rarely occurs in practice, the optimal policy makes a binary offloading decision for each mobile. Specifically, if the corresponding offloading priority exceeds a given threshold, the mobile should offload all input data to the edge cloud; otherwise, the mobile should offload only the minimum amount of data under the computation latency constraint. This result is consistent with the intuition that the greedy method can lead to the optimal resource allocation.
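The threshold structure revealed by Theorem ? can be sketched as follows; the priority values, thresholds and data sizes below are hypothetical inputs for illustration:

```python
def threshold_policy(priorities, min_bits, full_bits, threshold):
    """Threshold-based structure of the optimal policy (sketch): a
    mobile whose offloading priority exceeds the threshold offloads
    all of its input data; otherwise it offloads only the
    latency-mandated minimum."""
    return [full if p > threshold else mini
            for p, mini, full in zip(priorities, min_bits, full_bits)]
```

For instance, with priorities (3, 1, 5) and threshold 2, the first and third mobiles perform complete offloading while the second offloads only its minimum.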

4.3 Special Cases

The optimal resource-allocation policies for several special cases considering equal weight factors are discussed as follows.

Uniform channels and local computing

Consider the simplest case where are identical for all . Then all mobiles have uniform offloading priorities. In this case, for optimal resource allocation, different mobiles can offload arbitrary data sizes so long as the sum offloaded data size satisfies the following constraint:

Uniform channels

Consider the case of . The offloading priority for each mobile, say mobile , is only affected by the corresponding local-computing parameters and . Without loss of generality, assume that . Then the optimal resource-allocation policy is given in the following corollary of Theorem ?.

The result shows that the optimal resource-allocation policy follows a greedy approach that selects mobiles in a descending order of energy consumption per bit for complete offloading until the time-sharing duration is fully utilized.
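A minimal sketch of this greedy order, assuming each selected mobile consumes a known offloading duration (in the paper these slot fractions are jointly optimized, so `time_needed` is a simplifying assumption):

```python
def greedy_schedule(energy_per_bit, time_needed, T):
    """Greedy corollary for uniform channels (sketch): select mobiles
    for complete offloading in descending order of local energy per
    bit until the shared slot budget T is exhausted."""
    order = sorted(range(len(energy_per_bit)),
                   key=lambda k: -energy_per_bit[k])
    chosen, used = [], 0.0
    for k in order:
        if used + time_needed[k] <= T:
            chosen.append(k)
            used += time_needed[k]
    return chosen
```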

Uniform local computing

Consider the case of . Similar to the previous case, the optimal resource-allocation policy can be shown to follow the greedy approach that selects mobiles for complete offloading in the descending order of channel gain.

5 Multiuser MECO: Finite Cloud Capacity

In this section, we consider the case of finite cloud computation capacity and analyze the optimal resource-allocation policy solving Problem P1. The policy is shown to also have a threshold-based structure, like its infinite-capacity counterpart derived in the preceding section. Both optimal and sub-optimal algorithms are presented for policy computation.

5.1 Optimal Resource-Allocation Policy

To solve the convex Problem P1, the corresponding Lagrange function can be written as

where is the Lagrange multiplier corresponding to the cloud computation capacity constraint. Using the above Lagrange function, it is straightforward to show that the corresponding KKT conditions can be modified from their infinite-capacity counterparts in - by replacing with , called the effective computation energy per cycle. The resultant effective offloading priority function, denoted as , can be modified accordingly from that in as

where . Based on the above discussion, the main result of this section follows, as shown below.

Computing the threshold for the optimal resource-allocation policy requires a two-dimensional search over the Lagrange multipliers , using Algorithm ?. For an efficient search, it is useful to limit the ranges of and as follows.

See Appendix Section 7.5.

Note that corresponds to the case of infinite cloud computation capacity and to the case where offloading yields no energy savings for any mobile.
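Each dimension of this search can be sketched as a bisection, assuming the constraint residual is monotone in its multiplier (which holds here, since raising a multiplier's "price" reduces the corresponding resource usage):

```python
def bisect_multiplier(excess, lo, hi, tol=1e-9):
    """One dimension of the multiplier search (sketch). `excess(lam)`
    is assumed monotone decreasing in the multiplier lam, e.g.
    allocated time minus the slot budget; bisection finds the
    multiplier at which the constraint is met with equality."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid   # constraint still violated: raise the price
        else:
            hi = mid
    return 0.5 * (lo + hi)
```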

5.2 Sub-Optimal Resource-Allocation Policy

To reduce the computation complexity of Algorithm ? caused by the two-dimensional search, a simple sub-optimal policy is designed in Algorithm ?. The key idea is to decouple the computation and radio resource allocation. In Step , based on the approximate offloading priority in for the case of infinite cloud computation capacity, the computation resource is allocated to mobiles with high offloading priorities. Step then optimizes the corresponding fractions of the slot given the offloaded data. This sub-optimal algorithm has low complexity, requiring only a one-dimensional search, and its performance is shown by simulation in the sequel to be close to optimal.
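The first step of this sub-optimal policy might be sketched as follows; the priority values, data sizes and cycle counts are hypothetical inputs:

```python
def suboptimal_cloud_allocation(priorities, data_bits, cycles_per_bit, F):
    """Step 1 of the sub-optimal policy (sketch): grant cloud cycles
    to mobiles in descending order of the approximate
    (infinite-capacity) offloading priority until the cloud capacity
    F is exhausted. Step 2, optimizing the slot fractions given these
    offloaded sizes, is a separate one-dimensional search."""
    order = sorted(range(len(priorities)), key=lambda k: -priorities[k])
    offload = [0.0] * len(priorities)
    remaining = F
    for k in order:
        need = data_bits[k] * cycles_per_bit[k]  # cycles to offload everything
        grant = min(need, remaining)
        offload[k] = grant / cycles_per_bit[k]   # bits actually offloaded
        remaining -= grant
    return offload
```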

6 Simulation Results

The simulation settings are as follows unless specified otherwise. The MECO system comprises mobiles with equal fairness weight factors, so that the weighted-sum mobile energy consumption represents the total mobile energy consumption. The time slot is ms and channels are modeled as independent Rayleigh fading with average power loss set as . In addition, the variance of the complex white Gaussian channel noise is W and the bandwidth is MHz. Consider mobile . The CPU computation capacity is uniformly selected from the set GHz and the local-computing energy per cycle follows a uniform distribution in the range J/cycle. For the computing task, both the data size and the required number of CPU cycles per bit follow uniform distributions with KB and cycles/bit. All random variables are independent across mobiles, modeling heterogeneous mobile computing capabilities. Last, the cloud computation capacity is set as cycles per slot.

For performance comparison, a baseline equal resource-allocation policy is considered, which allocates an equal offloading duration to each mobile satisfying and, based on this, optimizes the offloaded data sizes.
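A sketch of the baseline's time allocation, assuming (since the inline condition was lost in extraction) that the qualifying mobiles are those with a positive minimum offload size:

```python
def equal_time_fractions(min_offload_bits):
    """Baseline policy (sketch): split the offloading slot equally
    among mobiles that must offload (minimum offload size > 0); the
    offloaded data sizes are then optimized given these fixed
    fractions."""
    active = {i for i, m in enumerate(min_offload_bits) if m > 0}
    frac = 1.0 / len(active) if active else 0.0
    return [frac if i in active else 0.0
            for i in range(len(min_offload_bits))]
```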

Figure 1 shows the curves of total mobile energy consumption versus the time slot duration . Several observations can be made. First, the total mobile energy consumption decreases as the slot duration grows. Next, the sub-optimal policy computed by Algorithm ? is found to have close-to-optimal performance and yields a total mobile energy consumption less than half of that for the equal resource-allocation policy. The energy reduction is more significant for shorter slot durations since, without optimization of the slot fractions, the offloading energy of the baseline policy grows exponentially as the allocated time fractions shrink.

The curves of total mobile energy consumption versus the cloud computation capacity are displayed in Figure 2. It can be observed that the performance of the sub-optimal policy approaches that of the optimal one as the cloud computation capacity increases, and achieves substantial energy savings over the equal resource-allocation policy. Furthermore, the total mobile energy consumption is invariant once the cloud computation capacity exceeds some threshold (about ). This suggests that there exists a critical value of the cloud computation capacity, above which increasing the capacity yields no reduction in the total mobile energy consumption.

Figure 1: Total mobile energy consumption vs. time slot duration.
Figure 2: Total mobile energy consumption vs. cloud computation capacity.


Conclusion

This work has considered a multiuser MECO system based on TDMA and shown that the optimal energy-efficient resource-allocation policy, for clouds with either infinite or finite computation capacities, features a threshold-based structure. Specifically, the BS makes a binary offloading decision for each mobile: users with priorities above or below a given threshold perform complete or minimum offloading, respectively. Moreover, a simple sub-optimal algorithm is proposed to reduce the complexity of computing the threshold.

7.1 Proof of Lemma

Since is a convex function, its perspective function, defined as , is also convex. Thus, the objective function, being a summation of convex functions, preserves convexity. Combining this with the linear constraints leads to the desired result.

7.2 Proof of Lemma

Whether Problem P1 is feasible depends on the following two key constraints: and . Assume the first is satisfied. Then we have

Thus, Problem P1 is feasible only when the above condition holds.

7.3 Proof of Lemma

First, we derive a general result giving the root of the equation with respect to , as follows.

According to the definitions of and , we have

Thus, the solution for the general equation is

Note that to ensure in Problem P1, we need , which is equivalent to as derived from . Then, substituting and into and performing some algebraic manipulation gives the desired result.
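The derivation inverts equations of the form x*e^x = c, whose solution is the Lambert W function mentioned below. A pure-Python sketch of the principal branch via Newton iteration:

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function, i.e. the w solving
    w * exp(w) = z, for z >= 0, via Newton iteration. Equations of
    the form x*e^x = c, as arise in the priority derivation, are then
    solved as x = W(c)."""
    w = math.log1p(z)  # rough initial guess, adequate for z >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            return w
    return w
```

The principal branch is monotonically increasing on its domain, the property used in the monotonicity arguments of the next proof.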

7.4 Proof of Theorem

First, to prove this theorem, we need the following lemmas.

Since denotes the root of the equation for , we have

Thus, based on the definition of the Lambert W function, we have . Then the desired result follows.

From , for , we have . Since the single-valued (principal) branch of the Lambert W function is monotonically increasing for , we can easily obtain the desired result.

Then, consider case 1) in Theorem ?. Note that for mobile , if and , then follows from . Thus, if these two conditions are satisfied for all mobiles, it leads to . For case 2), if there exists a mobile such that or , then is ensured. Moreover, the time-sharing constraint must be active, since any remaining time could be used for offloading so as to reduce the transmission energy. Next, consider each user . If , then from and , must satisfy the following:

Using Lemma ? and Lemma ?, we have the following:

  1. If , we have . Then, from , this gives From , and , it follows that .

  2. If , we have .

  3. If , we have . Combining this with leads to From , and , it follows that .

Furthermore, if , we have . Note that this case can be included in the scenario of with the definition of in .

Last, from , it follows that

where is derived using Lemma ?, completing the proof.

7.5 Proof of Lemma

If there exists an offloading mobile , it must satisfy and . Thus, considering all mobiles, it follows that and . The latter condition is equivalent to , completing the proof.


  1. M. Patel, B. Naughton, C. Chan, N. Sprecher, S. Abeta, A. Neal, et al., “Mobile-edge computing introductory technical white paper,” White Paper, Mobile-edge Computing (MEC) industry initiative, 2014.
  2. H. T. Dinh, C. Lee, D. Niyato, and P. Wang, “A survey of mobile cloud computing: architecture, applications, and approaches,” J. Wireless Commun. and Mobile Computing, vol. 13, no. 18, pp. 1587–1611, 2013.
  3. Z. Xiao, W. Song, and Q. Chen, “Dynamic resource allocation using virtual machines for cloud computing environment,” IEEE Trans. Parallel and Distributed Systems, vol. 24, pp. 1107–1117, Sep. 2013.
  4. S. Srikantaiah, A. Kansal, and F. Zhao, “Energy aware consolidation for cloud computing,” in Proc. HotPower, vol. 10, pp. 1–5, 2008.
  5. A. Ahmed and E. Ahmed, “A survey on mobile edge computing,” in Proc. IEEE Intl. Conf. Intel. Sys and Cont, 2016.
  6. W. Zhang, Y. Wen, K. Guan, D. Kilper, H. Luo, and D. O. Wu, “Energy-optimal mobile cloud computing under stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4569–4581, 2013.
  7. C. You, K. Huang, and H. Chae, “Energy efficient mobile cloud computing powered by wireless energy transfer (extended version),” [Online]. Available:
  8. Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” submitted to IEEE J. Select. Areas Commun., 2016.
  9. X. Xiang, C. Lin, and X. Chen, “Energy-efficient link selection and transmission scheduling in mobile cloud computing,” IEEE Wireless Commun. Letters, vol. 3, pp. 153–156, Jan. 2014.
  10. S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio and computational resources for multicell mobile-edge computing,” IEEE Trans. Signal and Info. Processing over Networks, Jun. 2015.
  11. T. Zhao, S. Zhou, X. Guo, Y. Zhao, and Z. Niu, “A cooperative scheduling scheme of local cloud and internet cloud for delay-aware mobile cloud computing,” Proc. IEEE Globecom, pp. 1–6, 2015.
  12. X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE Trans. Networking, vol. PP, pp. 1–1, Oct. 2015.
  13. X. Wang and G. B. Giannakis, “Power-efficient resource allocation for time-division multiple access over fading channels,” IEEE Trans. Info. Theory, vol. 54, pp. 1225–1240, Mar. 2008.
  14. C. Y. Wong, R. S. Cheng, K. B. Lataief, and R. D. Murch, “Multiuser OFDM with adaptive subcarrier, bit, and power allocation,” IEEE J. Select. Areas Commun., vol. 17, pp. 1747–1758, Oct. 1999.
  15. S.-J. Oh, D. Zhang, and K. M. Wasserman, “Optimal resource allocation in multiservice CDMA networks,” IEEE Trans. Wireless Commun., vol. 2, pp. 811–821, Jul. 2003.