Minimizing Total Busy Time with Application to Energy-Efficient Scheduling of Virtual Machines in IaaS Clouds
Abstract
Infrastructure-as-a-Service (IaaS) clouds have become popular, enabling users to run applications in virtual machines. Energy efficiency for IaaS clouds is still a challenge. This paper investigates the energy-efficient scheduling of virtual machines (VMs) onto physical machines (PMs) in IaaS clouds along three characteristics: multiple resources, fixed intervals, and non-preemption of virtual machines. These scheduling problems are NP-hard. Most existing works on VM placement reduce the total energy consumption by using the minimum number of active physical machines. There are, however, cases where using the minimum number of physical machines results in a longer total busy time of the physical machines. For these scheduling problems, minimizing the total energy consumption of all physical machines is equivalent to minimizing their total busy time. In this paper, we propose a scheduling algorithm, denoted as EMinTRE-LFT, for minimizing the total energy consumption of physical machines in these scheduling problems. Our extensive simulations using parallel workload models from the Parallel Workloads Archive show that the proposed algorithm has the lowest total energy consumption compared with state-of-the-art algorithms.
Keywords: energy efficiency; energy-aware; power-aware; VM placement; IaaS; total busy time; fixed interval; fixed starting time; scheduling
1 Introduction
Infrastructure-as-a-Service (IaaS) clouds [1] provision users with computing resources in the form of virtual machines (VMs) to run their applications [2, 3, 4]. These IaaS cloud systems are often built from virtualized data centers. Power consumption in a large-scale data center amounts to multiple megawatts [5, 3]. Le et al. [3] estimate that the energy cost of a single data center is more than $15M per year. As these data centers host more physical servers, they consume more energy. Therefore, advanced scheduling techniques for reducing the energy consumption of these cloud systems are of great concern to any cloud provider wishing to reduce energy cost. Energy efficiency is an active research topic for cloud systems, and energy-aware scheduling of VMs in IaaS clouds remains challenging [2, 3, 6, 7].
Many previous works [8, 9] have proved that scheduling problems with fixed interval times are NP-hard. Prior studies [4, 10] present techniques for consolidating virtual machines in cloud data centers using bin-packing heuristics (such as First-Fit Decreasing [10] and/or Best-Fit Decreasing [4]). They attempt to minimize the number of running physical machines and to turn off as many idle physical machines as possible. Consider a multi-dimensional resource allocation in which each user requests a set of virtual machines (VMs), and each VM requires multiple resources (such as CPU, memory, and I/O) in a fixed quantity over a certain time interval. Under this scenario, using the minimum number of physical machines can increase the total busy time of the active physical machines [11][9]. In a homogeneous environment where all physical servers are identical, the power consumption of each physical machine is linear in its CPU utilization [4], i.e., a schedule with a longer working time will consume more energy than a schedule with a shorter working time.
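A toy illustration of this effect (not an example from the paper): with fixed intervals, a packing produced by a resource-only bin-packing heuristic can double the total busy time compared with a placement that groups overlapping intervals, even when both placements use the same number of machines.

```python
def union_length(intervals):
    """Busy (ON) time of one machine: length of the union of its [s, f) intervals."""
    total, end = 0, None
    for s, f in sorted(intervals):
        if end is None or s > end:
            total, end = total + (f - s), f
        elif f > end:
            total, end = total + (f - end), f
    return total

# Four 1-core VMs on 2-core machines; two placements, both using 2 machines.
A, B, C, D = (0, 10), (10, 20), (0, 10), (10, 20)
time_oblivious = [[A, B], [C, D]]   # e.g. a CPU-only bin-packing outcome
time_aware     = [[A, C], [B, D]]   # groups overlapping intervals together

print(sum(union_length(m) for m in time_oblivious))  # 40
print(sum(union_length(m) for m in time_aware))      # 20
```

The time-aware grouping keeps each machine's ON-span short by co-locating VMs that overlap in time, which is exactly the property the interval-aware algorithms in this paper exploit.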
This paper presents a proposed heuristic, denoted as EMinTRE-LFT, that allocates VMs requesting multiple resources with fixed intervals and non-preemption onto physical machines to minimize the total energy consumption of the physical machines while meeting all resource requirements. Using numerical simulations, we compare EMinTRE-LFT with state-of-the-art algorithms, including Power-Aware Best-Fit Decreasing (PABFD) [4], vector bin-packing norm-based greedy (VBPNorm-L2) [10], and Modified First-Fit Decreasing Earliest (Tian-MFFDE) [9]. Using three parallel workload models [12], [13] and [14] from Feitelson's Parallel Workloads Archive [15], the simulation results show that the proposed EMinTRE-LFT reduces the total energy consumption of the physical servers by 23.7% on average compared with Tian-MFFDE [9]. In addition, EMinTRE-LFT reduces the total energy consumption of the physical servers by 51.5% and 51.2% on average compared with PABFD [4] and VBPNorm-L2 [10], respectively. Moreover, EMinTRE-LFT also consumes less total energy than MinDFT-LDTF [11] in the simulation results.
The rest of this paper is structured as follows. Section 2 discusses related works. Section 3 describes the energy-aware VM allocation problem with multiple requested resources, fixed starting times, and durations; we also formulate the scheduling objective and present our theorems. The proposed EMinTRE-LFT algorithm is presented in Section 4. Section 5 discusses our performance evaluation using simulations. Section 6 concludes this paper and introduces future work.
2 Related Works
Interval scheduling problems have been studied for many years with the objective of minimizing total busy time. In 2007, Kovalyov et al. [16] presented a work describing the characteristics of a fixed interval scheduling problem in which each job has a fixed starting time and a fixed processing time, and is only processed during that fixed interval on an available machine. The scheduling problem can be applied in other domains. Angelelli et al. [17] considered interval scheduling with a resource constraint on parallel identical machines. The authors proved that the decision problem is NP-complete if the number of constrained resources in each parallel machine is a fixed number greater than two. Flammini et al. [8] studied a new approach that minimizes total busy time, with application to optical networks. Tian et al. [9] proposed a Modified First-Fit Decreasing Earliest algorithm, denoted as Tian-MFFDE, for energy-efficient placement of VMs. Tian-MFFDE sorts the queued list of VMs by longest running time first and places each VM (in the sorted list) on the first available physical machine that has enough of the VM's requested resources. Our VM placement problem differs from these interval scheduling problems [16][17][9] in that each VM requires multiple resources (e.g., computing power, physical memory, network bandwidth, etc.), whereas all jobs in those interval scheduling problems demand equal computing resources (i.e., each physical machine can process at most a fixed number of jobs concurrently).
Energy-aware resource management in cloud virtualized data centers is critical. Many previous studies [4, 18, 7, 19] proposed algorithms that consolidate VMs onto a small set of physical machines (PMs) in virtualized data centers to minimize the energy/power consumption of the PMs. A group at Microsoft Research [10] studied first-fit decreasing (FFD) based heuristics for vector bin-packing to minimize the number of physical servers in the VM allocation problem. Some other works also proposed metaheuristic algorithms to minimize the number of physical machines. Beloglazov et al. [4] presented a modified best-fit decreasing bin-packing heuristic, denoted as PABFD, to place a new VM on a host. PABFD sorts all VMs in decreasing order of CPU utilization and tends to allocate each VM to the active physical server that would take the minimum increase of power consumption. Knauth et al. [18] proposed the OptSched scheduling algorithm, which reduces cumulative machine uptime (CMU) by 60.1% and 16.7% in comparison to round-robin and first-fit. OptSched uses a minimum number of active servers to process a given workload. With heterogeneous physical machines, OptSched maps a VM to the first available and most powerful machine that has enough of the VM's requested resources; otherwise, the VM is allocated to a new, unused machine. In the VM allocation problem, however, minimizing the number of used physical machines is not equal to minimizing the total energy consumption of all physical machines. These previous works do not consider multiple resources, fixed starting times, and non-preemptive durations of the VMs; therefore, they are unsuitable for the power-aware VM allocation considered in this paper, i.e., these previous solutions cannot minimize the total energy consumption for the VM placement problem with fixed interval times while still fulfilling the quality of service.
Chen et al. [19] observed that VM resource utilization patterns exist. The authors presented a VM allocation algorithm to consolidate complementary VMs with spatial- and temporal-awareness on physical machines. They introduce resource efficiency and use a norm-based greedy algorithm, similar to that in [10], to measure the distance between each used resource's utilization and the maximum capacity of that resource on a host. Their VM allocation algorithm selects the host that minimizes the value of this distance metric for allocating a new VM. Our proposed EMinTRE-LFT uses a different metric that unifies both the increasing time and the norm of the diagonal vector representing the available resources. In our proposed TRE metric, the increasing time is the difference between the total busy time of a PM after and before allocating a VM.
Our proposed EMinTRE-LFT algorithm differs from these previous works: it uses each VM's fixed starting time and duration to minimize the total busy time of the physical machines, and consequently minimizes the total energy consumption of all physical servers. To the best of our knowledge, no existing works surveyed in [20, 21, 22, 23] have thoroughly considered these aspects in addressing the VM placement problem.
3 Problem Description
3.1 Notations
We use the following notations in this paper:
v_j: The j-th virtual machine to be scheduled.
M_i: The i-th physical machine.
S: A feasible schedule.
P_min: The minimum power consumed when a physical machine is at 0% CPU utilization.
P_max: The maximum power consumed when a physical machine is at 100% CPU utilization.
P_i(t): Power consumption of M_i at a time point t.
ts_j: Fixed starting time of v_j.
dur_j: Duration time of v_j.
T: The maximum schedule length, which is the time at which the last virtual machine finishes.
S_i: Set of virtual machines that are allocated to M_i in the whole schedule.
T_i: The total busy time (ON time) of M_i.
e_j: Energy consumption for running v_j on the physical machine to which it is allocated.
g: The maximum number of virtual machines that can be assigned to any physical machine.
3.2 Power consumption model
Notations:
U_i(t) is the CPU utilization of M_i at time t.
n_i is the total number of cores of M_i.
mips_{j,c} is the MIPS of the c-th processing element allocated to v_j by M_i.
MIPS_i is the maximum computing power (in MIPS) of a core on M_i.
In this paper, we use the energy consumption model proposed in [5][4] for a physical machine. Let λ = P_min / P_max be the fraction of the minimum power consumed when M_i is idle (0% CPU utilization) over the maximum power consumed when the physical machine is fully utilized (100% CPU utilization). The power consumption of M_i, denoted as P_i(t) with U_i(t) ∈ [0, 1], is formulated as follows:

P_i(t) = λ · P_max + (1 − λ) · P_max · U_i(t)   (1)

We assume that all cores in a CPU are homogeneous, i.e., MIPS_{i,1} = MIPS_{i,2} = … = MIPS_{i,n_i} = MIPS_i. The CPU utilization is formulated as follows:

U_i(t) = ( Σ_{v_j ∈ S_i(t)} Σ_{c=1}^{n_i} mips_{j,c}(t) ) / (n_i · MIPS_i)   (2)

The energy consumption of M_i in the time period [t_0, t_1], denoted as E_i, with CPU utilization U_i(t) is formulated as follows:

E_i = ∫_{t_0}^{t_1} P_i(t) dt   (3)

where:
T_i: The busy time of M_i, defined as T_i = t_1 − t_0.
Assume that a virtual machine v_j raises the CPU utilization of M_i by u_j during [ts_j, ts_j + dur_j], and that v_j uses the full utilization of its requested resources in the worst case. The energy consumed by v_j, denoted as e_j, is formulated as:

e_j = (1 − λ) · P_max · u_j · dur_j   (4)

Let T_i be the total busy time of M_i, let E_i be the energy consumed by M_i, and let S_i be the set of virtual machines (v_j) that are allocated to M_i in the whole schedule. The total energy consumed by M_i is the sum of the energy consumption during its total busy time, formulated as:

E_i = λ · P_max · T_i + Σ_{v_j ∈ S_i} e_j   (5)

where λ · P_max · T_i = P_min · T_i is called the base (ON) energy consumption of M_i during its total busy time, and Σ_{v_j ∈ S_i} e_j is the increasing energy consumed by the VMs scheduled to M_i. The total energy consumption of the whole schedule is:

E = Σ_i E_i   (6)
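The linear power model and the base-plus-increment energy decomposition can be sketched numerically; the power values match the simulation setup in Section 5, while the utilizations and durations below are purely illustrative:

```python
P_MIN, P_MAX = 175.0, 250.0   # idle and full-load power (Watts)
LAM = P_MIN / P_MAX           # lambda = 0.7, fraction of idle power

def power(u):
    """Linear power model: P(u) = lam*Pmax + (1 - lam)*Pmax*u."""
    return LAM * P_MAX + (1.0 - LAM) * P_MAX * u

def host_energy(busy_time_h, vm_loads):
    """Base (ON) energy plus each VM's increment.
    vm_loads: list of (utilization u_j, duration dur_j in hours)."""
    base = P_MIN * busy_time_h                            # lam*Pmax * T_i
    inc = sum((1.0 - LAM) * P_MAX * u * d for u, d in vm_loads)
    return base + inc                                     # Watt-hours

print(power(0.0))   # 175.0 W at idle
print(power(1.0))   # 250.0 W fully loaded
# Host ON for 8 hours running two VMs: (50% CPU, 8 h) and (25% CPU, 4 h)
print(host_energy(8.0, [(0.5, 8.0), (0.25, 4.0)]))  # 1775.0 Wh
```

Note how the base term depends only on the busy time T_i, while each VM's increment is fixed by its own demand and duration; this is the observation behind Theorem 1.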
3.3 Problem formulation
Consider the following scheduling problem. We are given a set of n virtual machines to be scheduled on a set of m identical physical servers, where each server can host a maximum number g of virtual machines. Each VM requests multiple (d-dimensional) resources in a fixed interval, without migration. Each v_j starts at a fixed starting time (ts_j) and is non-preemptive during its duration time (dur_j). The types of resource considered in the problem include computing power (i.e., the total Million Instructions Per Second (MIPS) of all cores in a physical machine), physical memory (i.e., the total MBytes of RAM in a physical machine), network bandwidth (i.e., the total Kb/s of network bandwidth in a physical machine), storage (i.e., the total free GBytes of file system in a physical machine), etc.
The objective is to find a feasible schedule S that minimizes the total energy consumption in equation (8), with i = 1, …, m and j = 1, …, n, as follows:

Minimize E = Σ_{i=1}^{m} E_i = Σ_{i=1}^{m} ( λ_i · P_{max,i} · T_i + Σ_{v_j ∈ S_i} e_j )   (7)

where:
λ_i is the fraction of idle power over maximum power consumption of physical machine M_i. T_i is the total busy time of M_i.
In homogeneous physical machines (PMs), all PMs have the same idle power and the same maximum power consumption. Therefore, λ_i = λ is the same for all PMs, and we rewrite the scheduling objective as follows:

Minimize E = λ · P_max · Σ_{i=1}^{m} T_i + Σ_{j=1}^{n} e_j   (8)
The scheduling problem has the following hard constraints, which are described in our previous work [11]:

Constraint 1: Each VM is processed by exactly one physical server at any time, without migration and without preemption.

Constraint 2: No VM requests any resource larger than the maximum total capacity of that resource on any physical server.

Constraint 3: At any time, the sum of the resources demanded by the VMs allocated to a physical machine M_i is less than or equal to the total capacity of the resources of M_i.
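Since each VM's demand is constant over its fixed interval, the aggregate demand on a host is piecewise constant and changes only at VM start times, so Constraint 3 can be verified by checking only those event points. A minimal sketch with a hypothetical single-resource capacity:

```python
CAPACITY = 16.0   # hypothetical single-resource host capacity

def satisfies_capacity(vms, capacity=CAPACITY):
    """Check Constraint 3 at every start event; each VM is (start, duration, demand)."""
    events = sorted({s for s, _, _ in vms})
    for t in events:
        load = sum(d for s, dur, d in vms if s <= t < s + dur)
        if load > capacity:
            return False
    return True

print(satisfies_capacity([(0, 10, 8.0), (5, 10, 8.0)]))             # True
print(satisfies_capacity([(0, 10, 8.0), (5, 10, 8.0), (6, 2, 4)]))  # False
```

In the multi-resource setting of this paper the same check is repeated per resource type (CPU, RAM, network bandwidth, storage).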
3.4 Preliminaries
Definition 1 (Length of intervals)
Given a time interval I = [s, f], the length of I is len(I) = f − s. Extended to a set 𝓘 of intervals, the length of 𝓘 is len(𝓘) = Σ_{I ∈ 𝓘} len(I).
Definition 2 (Span of intervals)
For a set 𝓘 of intervals, we define the span of 𝓘 as span(𝓘) = len(∪_{I ∈ 𝓘} I).
Definition 3 (Optimal schedule)
An optimal schedule is a schedule that minimizes the total busy time of the physical machines. For any instance 𝓘 and capacity parameter g, OPT(𝓘) denotes the cost (total busy time) of an optimal schedule.
In this paper, we denote by 𝓘 the set of time intervals derived from the given set of all requested VMs. In general, we use the term instance interchangeably with a given set of all requested VMs in the context of this paper.
Observations: cost, capacity, and span bounds.
For any instance 𝓘, which is the set of time intervals derived from a given set of all requested VMs, and capacity parameter g, which is the maximum number of VMs that can be allocated on any physical machine, the following bounds hold:

The optimal cost bound: OPT(𝓘) ≤ len(𝓘).
The capacity bound: OPT(𝓘) ≥ len(𝓘)/g.
The span bound: OPT(𝓘) ≥ span(𝓘).

For any feasible schedule on a given set of virtual machines, the total busy time of all physical machines used in the schedule is bounded by the total length of all time intervals in the instance 𝓘; therefore the optimal cost bound holds, with OPT(𝓘) = len(𝓘) iff all intervals are non-overlapping.
Intuitively, the capacity bound holds because OPT(𝓘) = len(𝓘)/g iff, on each physical server, exactly g VMs are neatly scheduled throughout its busy time. The span bound holds because at any time in span(𝓘) at least one machine is working.
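These bounds can be checked on a concrete instance (the intervals below are hypothetical; the feasible schedule shown groups overlapping intervals onto two machines with g = 2):

```python
def length(intervals):
    """Total interval length: len summed over the set."""
    return sum(f - s for s, f in intervals)

def span(intervals):
    """Length of the union of the intervals."""
    total, end = 0, None
    for s, f in sorted(intervals):
        if end is None or s > end:
            total, end = total + (f - s), f
        elif f > end:
            total, end = total + (f - end), f
    return total

g = 2                                      # capacity per machine
inst = [(0, 4), (1, 5), (3, 9), (6, 9)]    # hypothetical instance
# Busy time of one feasible schedule: M1 = {(0,4),(1,5)}, M2 = {(3,9),(6,9)}
t_feasible = span([(0, 4), (1, 5)]) + span([(3, 9), (6, 9)])

print(length(inst), span(inst), t_feasible)   # 17 9 11
assert length(inst) / g <= t_feasible         # capacity bound
assert span(inst) <= t_feasible               # span bound
assert t_feasible <= length(inst)             # optimal cost bound
```

The optimal cost cannot exceed the cost of any feasible schedule, so the same three inequalities bracket OPT(𝓘) as well.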
3.5 Theorems
In the following theorems, all physical machines are homogeneous. Let P_min and P_max be the minimum (idle) power and the maximum power consumption of a physical machine, respectively. We have λ = P_min / P_max.
Theorem 1
Minimizing the total energy consumption in (8) is equivalent to minimizing the sum of the total busy times of all physical machines (Σ_{i=1}^{m} T_i):

Minimize Σ_{i=1}^{m} T_i   (9)
A detailed proof of this theorem is given in [11].
Based on this theorem, we propose our energy-aware algorithm, denoted as EMinTRE-LFT, which is presented in the next section.
Definition 4
For any schedule S, we denote by S_i the set of virtual machines allocated to physical machine M_i by the schedule. The total busy time T_i of M_i is the span of S_i, i.e., T_i = span(S_i).
Definition 5
For any instance 𝓘, the total busy time of the entire schedule of 𝓘 computed by an algorithm A, denoted as T(A), is defined as T(A) = Σ_{i=1}^{m_A} T_i, where m_A is the number of physical machines used by the algorithm A.
Definition 6
For any instance 𝓘 and parameter g, OPT_E(𝓘), which denotes the minimized total energy consumption of all physical machines in an optimal schedule for 𝓘, is formulated as:
OPT_E(𝓘) = λ · P_max · OPT(𝓘) + Σ_{j=1}^{n} e_j.
Theorem 2
For any instance 𝓘, the lower and upper bounds of the total energy consumption of an optimal schedule are:
λ · P_max · len(𝓘)/g ≤ OPT_E(𝓘) ≤ P_max · len(𝓘).
For any instance 𝓘, let OPT(𝓘) be the total busy time of the optimal schedule for 𝓘, and let OPT_E(𝓘) be the total energy consumption of the optimal schedule for 𝓘.
The total energy consumption of an optimal schedule accounts for all physical machines running during their busy times. From Definition 6, we have OPT_E(𝓘) = λ · P_max · OPT(𝓘) + Σ_{j=1}^{n} e_j.
Applying the capacity bound in Section 3.4, we have OPT(𝓘) ≥ len(𝓘)/g. Thus, λ · P_max · OPT(𝓘) ≥ λ · P_max · len(𝓘)/g.
Recall that the energy consumption of each virtual machine is non-negative; thus Σ_{j=1}^{n} e_j ≥ 0. Therefore:

OPT_E(𝓘) ≥ λ · P_max · len(𝓘)/g   (10)

We prove the upper bound of the minimized total energy consumption as follows. Applying the optimal cost bound in Section 3.4, we have OPT(𝓘) ≤ len(𝓘). Thus:

λ · P_max · OPT(𝓘) ≤ λ · P_max · len(𝓘)   (11)

Applying the linear power consumption model in Equation (1) and Equation (3), the energy consumption of the j-th virtual machine during the time period [ts_j, ts_j + dur_j], denoted as e_j, is:
e_j = (1 − λ) · P_max · u_j · dur_j,
where u_j is the percentage of CPU usage of the j-th virtual machine on its physical machine.
Because any virtual machine always requests CPU usage less than or equal to the maximum total CPU capacity of every physical machine, we have u_j ≤ 1.
Note that in this proof all physical machines are identical with the same power consumption model; thus P_max and P_min are the maximum and idle power consumption of each physical machine. Thus:
e_j ≤ (1 − λ) · P_max · dur_j.
Let I_j = [ts_j, ts_j + dur_j] be the interval of the j-th virtual machine. By definition, the length of the interval I_j is len(I_j) = dur_j, the duration time of the j-th virtual machine. Thus, the total energy consumption of the n virtual machines is bounded by:

Σ_{j=1}^{n} e_j ≤ (1 − λ) · P_max · Σ_{j=1}^{n} dur_j = (1 − λ) · P_max · len(𝓘)   (12)

From Equation (11), we have:

OPT_E(𝓘) ≤ λ · P_max · len(𝓘) + Σ_{j=1}^{n} e_j   (13)

By definition, the base (ON) energy of a physical machine per unit time equals the idle power consumption, i.e., λ · P_max = P_min. From Equation (13) and Equation (12):

OPT_E(𝓘) ≤ λ · P_max · len(𝓘) + (1 − λ) · P_max · len(𝓘)   (14)
OPT_E(𝓘) ≤ (λ + 1 − λ) · P_max · len(𝓘)   (15)
OPT_E(𝓘) ≤ P_max · len(𝓘)   (16)

We prove the theorem.
4 Scheduling Algorithms
4.1 EMinTRELFT scheduling algorithm
In this section, we present the proposed energy-aware scheduling algorithm, denoted as EMinTRE-LFT; its pseudocode is given in Algorithm 1. EMinTRE-LFT has two (2) steps: first, it sorts the list of virtual machines by latest finishing time first (i.e., in decreasing order of finishing time). Next, EMinTRE-LFT allocates each virtual machine in the sorted list to the first physical machine M_i that has enough resources to provision the virtual machine and whose TRE metric, denoted as TRE_i, is minimum. TRE_i is formulated in Equation (19) below. EMinTRE-LFT solves these scheduling problems in time complexity O(n · m · q), where n is the number of VMs to be scheduled, m is the number of physical machines, and q is the maximum number of VMs allocated to a physical machine.
Based on Equation (2), the utilization U_i^r of a resource r (where r can be cores, computing power, physical memory, network bandwidth, storage, etc.) on M_i is formulated as:

U_i^r = ( Σ_{v_j ∈ S_i} r_j ) / C_i^r   (17)

where S_i is the list of virtual machines assigned to M_i, r_j is the amount of resource r requested by virtual machine v_j (note that in our study the value of r_j is fixed for each user request), and C_i^r is the maximum capacity of resource r on M_i.
The available resources are represented by a diagonal vector, whose norm (denoted as ||F_i||) is formulated as:

||F_i|| = sqrt( Σ_{r ∈ R} w_r · (1 − U_i^r)^2 )   (18)

where R is the set of resource types in a host (R = {core, mips, ram, netbw, io, storage}) and w_r is the weight of resource r in a physical machine.
In this paper, we propose the TRE metric, unifying the increase in total busy time and the norm of the diagonal vector (||F_i||) of the i-th physical machine, calculated as:

TRE_i = w_time · (T_i' − T_i) + ||F_i||   (19)

where T_i' and T_i are the total busy time of M_i after and before allocating the virtual machine, and w_time is the weight of the increasing time.
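The two steps of EMinTRE-LFT (sort by latest finishing time, then pick the host with minimum TRE) can be sketched as follows. This is a simplified sketch, not the paper's implementation: it tracks a single CPU resource, and the additive form of the TRE metric and all names are assumptions.

```python
import math

CAP = 10.0                 # hypothetical single-resource (CPU) capacity per host
W_TIME, W_CPU = 1.0, 1.0   # weights for increasing time and for the CPU resource

def union_length(intervals):
    """Busy (ON) time of one host: length of the union of its intervals."""
    total, end = 0.0, None
    for s, f in sorted(intervals):
        if end is None or s > end:
            total, end = total + (f - s), f
        elif f > end:
            total, end = total + (f - end), f
    return total

def busy_time(host):
    return union_length([(s, f) for s, f, _ in host])

def fits(host, vm):
    """Capacity check at start events (demand is piecewise constant)."""
    s, f, c = vm
    points = {s} | {vs for vs, _, _ in host}
    return all(
        sum(vc for vs, vf, vc in host + [vm] if vs <= t < vf) <= CAP
        for t in points)

def tre(host, vm):
    """Assumed additive TRE: weighted busy-time increase plus the norm of the
    remaining-resource ('diagonal') vector after the placement."""
    dt = busy_time(host + [vm]) - busy_time(host)
    used = sum(vc for _, _, vc in host + [vm]) / CAP
    return W_TIME * dt + math.sqrt(W_CPU * (1.0 - used) ** 2)

def emintre_lft(vms, n_hosts):
    hosts = [[] for _ in range(n_hosts)]
    # Step 1: sort by latest finishing time first.
    for vm in sorted(vms, key=lambda v: v[1], reverse=True):
        # Step 2: among hosts with enough capacity, pick the minimum-TRE host.
        feasible = [h for h in hosts if fits(h, vm)]
        min(feasible, key=lambda h: tre(h, vm)).append(vm)
    return hosts

# Four VMs: (start, finish, cpu_demand) on three 10-CPU hosts.
placement = emintre_lft(
    [(0, 8, 4.0), (0, 8, 4.0), (8, 16, 4.0), (2, 16, 4.0)], n_hosts=3)
print([len(h) for h in placement])   # [3, 1, 0]
```

The busy-time term steers overlapping VMs onto the same host, while the norm term prefers hosts whose remaining resources are small, i.e., hosts that are already well packed.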
5 Performance Evaluation
5.1 Algorithms
In this section, we study the following VM allocation algorithms:

PABFD, a power-aware modified best-fit decreasing heuristic [4]. PABFD sorts the list of VMs v_j (j = 1, 2, …, n) in decreasing order of their total requested CPU utilization, and assigns each new VM to the host that gives the minimum increase in power consumption.

VBPNorm-L2, a vector bin-packing heuristic presented as Norm-based Greedy with degree 2 [10]. The weights of this Norm-based Greedy heuristic are set with FFDAvgSum, i.e., exp(x), the value of the exponential function at the point x, where x is the average of the sum of demanded resources (e.g., CPU, memory, storage, network bandwidth). VBPNorm-L2 assigns each new VM to the host that has the minimum of these norm values.

MinDFT-LDTF: the algorithm sorts the list of VMs v_j (j = 1, 2, …, n) by their starting time (ts_j) and then by their finishing time, and allocates each VM (in the sorted list) to the host that gives the minimum increase in the total completion times of hosts, as in the algorithm MinDFT [11].

EMinTRE-LFT, the algorithm proposed in Section 4.
5.2 Methodology
VM Type   MIPS   Cores   Memory (MBytes)   Network (Mbits/s)   Storage (GBytes)
Type 1    2500   8       6800              100                 1000
Type 2    2500   2       1700              100                 422.5
Type 3    3250   8       68400             100                 1000
Type 4    3250   4       34200             100                 845
Type 5    3250   2       17100             100                 422.5
Type 6    2000   4       15000             100                 1690
Type 7    2000   2       7500              100                 845
Type 8    1000   1       1875              100                 211.25
Type   MIPS   Cores   Memory (MBytes)   Network (Mbits/s)   Storage (GBytes)   P_min (Watts)   P_max (Watts)
M1     3250   16      140084            10000               10000              175             250
Algorithm             Energy (kWh)   Norm. Energy   Energy Saving (+: better; -: worse)
PABFD                 1,055.42       1.598          -60%
VBPNorm-L2            1,054.69       1.597          -60%
MinDFT-LDTF           603.90         0.915          +9%
Tian-MFFDE            660.30         1.000          0%
EMinTRE-LFT wt1       503.43         0.762          +24%
EMinTRE-LFT wt0.01    503.43         0.762          +24%
EMinTRE-LFT wt0.001   503.43         0.762          +24%
Algorithm             Energy (kWh)   Norm. Energy   Energy Saving (+: better; -: worse)
PABFD                 878.01         1.523          -52.3%
VBPNorm-L2            876.49         1.520          -52.0%
Tian-MFFDE            576.55         1.000          0.0%
MinDFT-LDTF           502.61         0.872          +12.8%
EMinTRE-LFT wt1       416.35         0.722          +27.8%
EMinTRE-LFT wt0.01    416.35         0.722          +27.8%
EMinTRE-LFT wt0.001   416.35         0.722          +27.8%
Algorithm             Energy (kWh)   Norm. Energy   Energy Saving (+: better; -: worse)
PABFD                 460.66         1.601          -60.1%
VBPNorm-L2            453.23         1.575          -57.5%
Tian-MFFDE            287.78         1.000          0.0%
MinDFT-LDTF           263.86         0.917          +8.3%
EMinTRE-LFT wt0.001   232.29         0.807          +19.3%
EMinTRE-LFT wt0.01    232.29         0.807          +19.3%
EMinTRE-LFT wt1       232.29         0.807          +19.3%
We evaluate these algorithms by simulation using CloudSim [24] to create simulated cloud data centers that have identical physical machines, heterogeneous VMs, and thousands of CloudSim cloudlets [24] (we assume that each HPC job's task is modeled as a cloudlet that runs on a single VM). The information on VMs (and also cloudlets) in these simulated workloads is extracted from three parallel job models: Feitelson's parallel workload model [12], Downey98's parallel workload model [13], and Lublin99's parallel workload model [14] in the Parallel Workloads Archive (PWA) [15]. When converting from the generated log-trace files, each cloudlet's length is the product of the system's processing time and the CPU rating (we set the CPU rating equal to the VM's MIPS). We convert each job's submission time, start time (if the start time is missing, the start time equals the sum of the job's submission time and waiting time), requested runtime, and number of processors from the log-trace in PWA [15] into each VM's submission time, starting time, duration time, and the number of VMs (each VM is created round-robin over the VM types in Table 1, one per requested processor). The eight (08) types of VMs presented in Table 1 are used as in [9] and are similar to categories of Amazon EC2's VM instances: high-CPU VM, high-memory VM, small VM, micro VM, etc. All physical machines are identical; each is a typical physical machine (host) with a 16-core CPU (3250 MIPS/core), 136.8 GBytes of physical memory, 10 Gb/s of network bandwidth, and 10 TBytes of storage. The minimum and maximum power consumption of each physical machine are 175 W and 250 W, respectively (the idle power is 175/250 = 70% of the maximum power consumption, as in [5][4]).
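The log-to-VM conversion described above can be sketched as follows. Field positions follow the Standard Workload Format used by the PWA (job id, submit time, wait time, run time, number of processors, …); the VM-type catalogue and the round-robin cycling over it are illustrative assumptions:

```python
# Convert one SWF-style job record into per-processor VM requests.
VM_TYPES = ["Type 1", "Type 2", "Type 3", "Type 4"]  # subset of Table 1 (assumed)

def job_to_vms(record, type_offset=0):
    f = record.split()
    submit, wait, runtime = float(f[1]), float(f[2]), float(f[3])
    n_procs = int(f[4])
    # Missing fields are -1 in SWF: fall back to the submission time.
    start = submit + wait if wait >= 0 else submit
    return [{
        "submit": submit,
        "start": start,                 # start = submit + wait
        "duration": runtime,
        "type": VM_TYPES[(type_offset + k) % len(VM_TYPES)],  # round-robin
    } for k in range(n_procs)]

vms = job_to_vms("1 100 20 3600 4")
print(len(vms), vms[0]["start"], vms[0]["duration"])  # 4 120.0 3600.0
```

One VM is created per requested processor, so a 4-processor job becomes four VMs sharing the same fixed starting time and duration.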
In the simulations, we use the following weights: (i) the weight of increasing time for mapping a VM to a PM: {0.001, 0.01, 1}; (ii) the weights of computing resources such as MIPS per CPU core, physical memory (RAM), network bandwidth, and storage are all set to 1. We denote by EMinTRE-LFT wt0.001, EMinTRE-LFT wt0.01 and EMinTRE-LFT wt1 the total energy consumption of the EMinTRE-LFT algorithm when the weight of increasing time for mapping a VM to a PM is 0.001, 0.01 and 1, respectively.
We choose Modified First-Fit Decreasing Earliest (denoted as Tian-MFFDE) [9] as the baseline because Tian-MFFDE is the best known energy-aware scheduling algorithm for time-interval scheduling. We also compare our proposed VM allocation algorithm with PABFD [4], a well-known power-aware best-fit decreasing heuristic in the energy-aware scheduling research community, and with a vector bin-packing algorithm (VBPNorm-L2), to show the importance of considering (or not) VMs' starting and finishing times in reducing the total energy consumption of the VM placement problem.
5.3 Results and Discussions
The simulation results are shown in three tables (Table 3, Table 4 and Table 5) and in three (03) figures (Fig. 1, Fig. 2 and Fig. 3), which show bar charts comparing the energy consumption of the VM allocation algorithms, normalized to Tian-MFFDE. None of the scheduling algorithms uses VM migration techniques, and all of them satisfy the quality of service (i.e., each scheduling algorithm provisions the maximum of each user VM's requested resources). We use total energy consumption as the performance metric for evaluating these VM allocation algorithms.
Using the three parallel workload models [12], [13] and [14] in Feitelson's Parallel Workloads Archive [15], the simulation results show that the proposed EMinTRE-LFT reduces the total energy consumption of the physical servers by 23.7% on average compared with Tian-MFFDE [9]. In addition, EMinTRE-LFT reduces the total energy consumption of the physical servers by 51.5% and 51.2% on average compared with PABFD [4] and VBPNorm-L2 [10], respectively. Moreover, EMinTRE-LFT also consumes less total energy than MinDFT-LDTF [11] in the simulation results.
6 Conclusions and Future Work
In this paper, we formulated an energy-aware VM allocation problem with multiple resources, fixed intervals, and non-preemption constraints. We also discussed our key observation on the VM allocation problem, i.e., minimizing the total energy consumption is equivalent to minimizing the sum of the total busy times of all physical machines (PMs). Our proposed algorithm EMinTRE-LFT reduces the total energy consumption of the physical servers compared with the state-of-the-art algorithms in simulation results on the three parallel workload models of Feitelson [12], Downey98 [13], and Lublin99 [14].
We are integrating the EMinTRE-LFT algorithm into cloud resource management software (e.g., the OpenStack Nova scheduler). In future work, we would like to evaluate further the weights of increasing time and of the norm of the diagonal vector of available resources. Additionally, we are working on IaaS cloud systems with heterogeneous physical servers and job requests consisting of multiple VMs using EPOBF [6]. We are also studying how to choose the right weights of time and resources (e.g., computing power, physical memory, network bandwidth) using machine learning techniques.
Acknowledgment
References
 Q. Zhang, L. Cheng, and R. Boutaba, “Cloud computing: state-of-the-art and research challenges,” Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7–18, Apr. 2010.
 S. K. Garg, C. S. Yeo, A. Anandasivam, and R. Buyya, “EnergyEfficient Scheduling of HPC Applications in Cloud Computing Environments,” CoRR, vol. abs/0909.1146, 2009.
 K. Le, R. Bianchini, J. Zhang, Y. Jaluria, J. Meng, and T. D. Nguyen, “Reducing electricity cost through virtual machine placement in high performance computing clouds,” in SC, 2011, p. 22.
 A. Beloglazov, J. Abawajy, and R. Buyya, “Energyaware resource allocation heuristics for efficient management of data centers for cloud computing,” Future Generation Comp. Syst., vol. 28, no. 5, pp. 755–768, 2012.
 X. Fan, W.D. Weber, and L. Barroso, “Power provisioning for a warehousesized computer,” in ISCA, 2007, pp. 13–23.
 N. QuangHung, N. Thoai, and N. T. Son, “EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud,” TLDKS XVI, vol. LNCS 8960, pp. 71–86, 2014.
 I. Takouna, W. Dawoud, and C. Meinel, “Energy Efficient Scheduling of HPCjobs on Virtualize Clusters using Host and VM Dynamic Configuration,” Operating Systems Review, vol. 46, no. 2, pp. 19–27, 2012.
 M. Flammini, G. Monaco, L. Moscardelli, H. Shachnai, M. Shalom, T. Tamir, and S. Zaks, “Minimizing total busy time in parallel scheduling with application to optical networks,” Theoretical Computer Science, vol. 411, no. 4042, pp. 3553–3562, Sep. 2010.
 W. Tian and C. S. Yeo, “Minimizing total busy time in offline parallel scheduling with application to energy efficiency in cloud computing,” Concurrency and Computation: Practice and Experience, vol. 27, no. 9, pp. 2470–2488, Jun. 2013.
 R. Panigrahy, K. Talwar, L. Uyeda, and U. Wieder, “Heuristics for Vector Bin Packing,” Microsoft Research, Tech. Rep., 2011.
 N. QuangHung, D.K. Le, N. Thoai, and N. T. Son, “Heuristics for EnergyAware VM Allocation in HPC Clouds,” Future Data and Security Engineering (FDSE 2014), vol. 8860, pp. 248–261, 2014.
 D. G. Feitelson, “Packing schemes for gang scheduling,” in Job Scheduling Strategies for Parallel Processing. Springer, 1996, pp. 89–110.
 A. B. Downey, “A parallel workload model and its implications for processor allocation,” Cluster Computing, vol. 1, no. 1, pp. 133–145, 1998.
 U. Lublin and D. G. Feitelson, “The workload on parallel supercomputers: modeling the characteristics of rigid jobs,” Journal of Parallel and Distributed Computing, vol. 63, no. 11, pp. 1105–1122, 2003.
 D. G. Feitelson, “Parallel Workloads Archive,” (retrieved on 31 January 2014), http://www.cs.huji.ac.il/labs/parallel/workload/.
 M. Y. Kovalyov, C. Ng, and T. E. Cheng, “Fixed interval scheduling: Models, applications, computational complexity and algorithms,” European Journal of Operational Research, vol. 178, no. 2, pp. 331–342, 2007.
 E. Angelelli and C. Filippi, “On the complexity of interval scheduling with a resource constraint,” Theoretical Computer Science, vol. 412, no. 29, pp. 3650–3657, 2011.
 T. Knauth and C. Fetzer, “Energyaware scheduling for infrastructure clouds,” in 4th IEEE International Conference on Cloud Computing Technology and Science Proceedings. IEEE, Dec. 2012, pp. 58–65.
 L. Chen and H. Shen, “Consolidating complementary VMs with spatial/temporalawareness in cloud datacenters,” in IEEE INFOCOM 2014  IEEE Conference on Computer Communications. IEEE, Apr. 2014, pp. 1033–1041.
 A. Beloglazov, R. Buyya, Y. C. Lee, and A. Zomaya, “A Taxonomy and Survey of EnergyEfficient Data Centers and Cloud Computing Systems,” Advances in Computers, vol. 82, pp. 1–51, 2011.
 A.C. Orgerie, M. D. de Assuncao, and L. Lefevre, “A survey on techniques for improving the energy efficiency of largescale distributed systems,” ACM Computing Surveys, vol. 46, no. 4, pp. 1–31, Mar. 2014.
 A. Hameed, A. Khoshkbarforoushha, R. Ranjan, P. P. Jayaraman, J. Kolodziej, P. Balaji, S. Zeadally, Q. M. Malluhi, N. Tziritas, A. Vishnu, S. U. Khan, and A. Zomaya, “A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems,” pp. 1–24, Jun. 2014.
 T. Mastelic, A. Oleksiak, H. Claussen, I. Brandic, J.M. Pierson, and A. V. Vasilakos, “Cloud computing: Survey on energy efficiency,” ACM Comput. Surv., vol. 47, no. 2, pp. 33:1–33:36, Dec. 2014.
 R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose, and R. Buyya, “Cloudsim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms,” Softw., Pract. Exper., vol. 41, no. 1, pp. 23–50, 2011.