EdgeFlow: Open-Source Multi-layer Data Flow Processing in Edge Computing for 5G and Beyond

Chao Yao, Xiaoyang Wang, Zijie Zheng, Guangyu Sun, and Lingyang Song
School of Electronics Engineering and Computer Science
Peking University, Beijing, China
Email: {chao.yao, yaoer, zijie.zheng, gsun, lingyang.song}@pku.edu.cn
Abstract

Edge computing has emerged as a promising avenue for enhancing system computing capability by offloading processing tasks from the cloud to edge devices. In this paper, we propose a multi-layer edge computing framework called EdgeFlow. In this framework, nodes ranging from edge devices to cloud data centers are categorized into corresponding layers and cooperate in data processing. With the help of EdgeFlow, one can balance the trade-off between computing and communication capability so that tasks are assigned to each layer optimally. At the same time, resources are carefully allocated throughout the whole network to mitigate performance fluctuation. The proposed open-source data flow processing framework is implemented on a platform that can emulate various computing nodes in multiple layers and the corresponding network connections. Evaluated on a mobile sensing scenario, EdgeFlow significantly reduces task finish time and is more tolerant to run-time variation than traditional cloud computing and the pure edge computing approach. Potential applications of EdgeFlow, including network function virtualization, the Internet of Things, and vehicular networks, are discussed at the end of this work.

I Introduction

As we move toward the 5G communication era, modern applications such as the Internet of Things (IoT), vehicular networks, mobile caching, and E-health have been generating tremendous amounts of data every day. This data explosion creates new challenges and requirements, both for equipment upgrades on each device and for the evolution of the computing framework throughout the whole network. Besides the deployment of more powerful servers in cloud data centers (CCs), the computation capabilities of wireless access points (APs), such as macro-cell base stations (MBSs), small-cell base stations (SBSs), and WiFi APs, have improved continuously. In addition, most APs now run Linux operating systems [1] and can therefore execute complex computing programs. At the same time, the processing power of edge devices (EDs), such as Internet protocol cameras, mobile phones, personal laptops, and smart cars, has also increased rapidly thanks to improvements in System-on-Chip (SoC) platforms.

Along with the development of computing capabilities in the CCs, APs, and EDs, the computing framework has evolved as well. Traditionally, the EDs and APs were responsible only for data collection and task submission to the CCs. Such a computing model has several limitations. The sustained, colossal computation load places an enormous burden on the CCs, in terms of computing resources, energy supply, and cooling systems. In addition, the remote geographic location of the CCs results in long transmission latency from the EDs through the APs to the CCs, especially when the network includes a great number of EDs and APs but only limited communication resources. Therefore, a promising solution, namely edge computing, has been proposed to leverage the idle computing resources at the edge of the network, i.e., the EDs and the APs, and to save communication resources as well [2].

I-A Existing Edge Computing Platforms

In an edge computing scenario, part of the computing tasks can be offloaded from the cloud end (e.g., the CC) to the edge end of the network, such as the EDs and APs. When the data have already been processed at the edge, only a small amount of results, rather than a huge quantity of raw data, needs to be transmitted to the CCs. Thus, the transmission pressure is reduced as well [3].

Beyond the concept, a number of practical edge computing platforms have already been designed; some typical ones are listed as follows.

  • Cloudlet: Cloudlet reduces transmission delay by letting the data generators (usually the EDs) send computing tasks to nearby deployed servers rather than to the remote CCs; WiFi APs collect the data from the EDs and relay it to the nearby servers [4].

  • Femto Cloud: Femto Cloud is a fog computing platform that leverages nearby underutilized EDs to serve computing tasks at the network edge, and uses a greedy heuristic optimization model to schedule incoming tasks [5].

  • Paradrop: Paradrop is an edge computing platform deployed on smart WiFi routers [6]. With a full-fledged computer inside the router, Paradrop enables new applications involving video, e.g., augmented reality, sensor-actuator coordination, and educational applications, without the assistance of remote CCs.

  • IOx: IOx is a fog device product from Cisco [7]. Similar to Paradrop, IOx works by hosting applications in a guest operating system running directly on a smart router. It is mainly developed to support ubiquitous IoT business applications.

Although existing platforms demonstrate the potential of implementing edge computing in practical networks, they share some common limitations. First, most of them exploit computing resources only at the edge end. For example, Cloudlet [4], Paradrop [6], and IOx [7] leverage computing power on the APs, while Femto Cloud utilizes the processing power of the EDs. However, the coordination of the computing resources throughout the CCs, APs, and EDs is still not well exploited. Second, existing platforms only show that processing time can be reduced when the EDs and APs process more data. Although this alleviates the transmission pressure, it may aggravate the computing pressure on the EDs and APs at the edge [8]. How to balance the computing and communication trade-off remains an open problem. Third, existing solutions are normally based on the assumption that the run-time environment is stable. In real scenarios, however, run-time variations, such as a data burst at some ED, can impact processing efficiency and cause significant performance fluctuation.

I-B EdgeFlow

Targeting the issues mentioned above, we propose the EdgeFlow framework to coordinate task partitioning among all data processing devices and to handle the computing and communication trade-off through optimal resource allocation.

EdgeFlow is composed of multiple layers. In this work, we categorize all devices into three layers. The bottom layer contains the various EDs; the data in EdgeFlow are continuously generated by each ED as a flow. The middle layer includes different types of APs. The top layer is normally a CC. Note that the system can be extended to more layers as required in real scenarios; we focus on the three-layer case to simplify the discussion.

Each device in EdgeFlow possesses some computing resources, e.g., CPUs. When a user submits a data processing task (usually notified directly to the CC), EdgeFlow can assign part of the task to the EDs, part to the APs, and the rest to the CC. The task offloading directly determines how much computing resource each device needs. When data have been fully processed at a lower layer, only the results need to be transmitted to the upper layer; otherwise, the raw data must be transmitted. The limited communication resources, especially the wireless resources (e.g., time slots) that support transmission from the lower layer to the upper layer, can then also be optimally allocated in EdgeFlow. The algorithms for task division and for computing and communication resource allocation are summarized in a time-aligned task offloading (TATO) scheme. For implementation, the demo platform is deployed on Intel Next Units of Computing (Intel NUCs) and Universal Software Radio Peripherals (USRPs) [9]; the source code is available at [10]. The platform can emulate various computing nodes in multiple layers and the corresponding network connections. Evaluated on a mobile sensing scenario, EdgeFlow significantly reduces task finish time and is more tolerant to run-time variation than traditional cloud computing and the pure edge computing approach.

The rest of the article is organized as follows. The system architecture is given in Section II, the system schedule in Section III, and the TATO scheme in Section IV. The implementation is presented in Section V. Potential applications of EdgeFlow are discussed in Section VI. Finally, the conclusion is given in Section VII.

II System Architecture

The system architecture of EdgeFlow is shown in Fig. 1. The bottom layer includes a large number of EDs, such as wireless sensors, mobile phones, and personal laptops. The middle layer includes the APs, such as SBSs, MBSs, and WiFi APs. The top layer is a CC comprising multiple servers. Each ED is connected to at most one AP by a wireless link, while each AP is connected to the CC by a wired link. Every node in this architecture is assumed to have a certain amount of computing and communication capability. An online user can query information from the system by initiating a task and assigning it to the CC. The CC completes the task with the help of the APs and EDs by utilizing their computing and communication capabilities. In the following subsections, the functions of the three layers are described in detail.

Fig. 1: The architecture of the EdgeFlow framework.

II-A Edge Devices

Generally, the data flow related to the users' tasks is generated at the EDs [11]. The EDs are responsible for sensing, collecting, and generating raw data. With a certain amount of computing ability, each ED can process some of the raw data and submit the processed results together with the unprocessed data to its corresponding AP.

II-B Access Points

Each AP receives the raw data from the EDs it controls. Correspondingly, to facilitate transmission between the AP and its EDs, the allocation of wireless transmission resources among the EDs is also scheduled by the AP. Besides, similar to each ED, each AP can process part of the raw data from the EDs and then use the wired link to submit data to the CC on the top layer. These data include the results processed by the EDs and APs as well as the rest of the raw data.

II-C Cloud Center

The CC collects the data from the APs through the wired links and processes the rest of the raw data. Then, the CC forwards the final result to the user that generated the task. Furthermore, the CC gathers the global information and carries out the task offloading strategy, e.g., deciding the amount of data processed by each device in each layer and computing the optimal computing and communication resource allocation.

III System Schedule

In this section, the system schedule of EdgeFlow is presented, as shown in Fig. 2. It consists of four procedures, namely task notification, system registration, task offloading, and data processing, which are described in detail below.

Fig. 2: The system schedule of EdgeFlow.

III-A Task Notification

The user submits a task to the CC. Then, the CC broadcasts the task to the APs. Finally, each AP broadcasts the task to the EDs it controls.

III-B System Registration

After receiving the task notification, each device estimates its computing capacity and participates in the task if it has idle computing and communication resources. Then, the EDs and APs upload their registration information, including their available computing and communication resources, to the CC. With this information, the CC creates a logical graph of the involved nodes.
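As a concrete illustration, a registration message from an ED might carry the fields sketched below in Python. The schema and field names are our assumptions for illustration, not the actual EdgeFlow message format.

# Hypothetical registration message uploaded by an ED to the CC.
# All field names are illustrative assumptions, not the EdgeFlow schema.
registration = {
    "node_id": "ed-03",
    "layer": "ED",             # "ED", "AP", or "CC"
    "parent": "ap-01",         # the AP this ED is attached to
    "compute_mbps": 40.0,      # idle computing speed (data processed per second)
    "uplink_mbps": 10.0,       # spare transmission rate toward the parent node
}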

III-C Task Offloading

After the CC receives the information from the available EDs and APs, it determines a task offloading strategy, TATO, which is introduced in detail in Section IV. Based on the strategy, the CC distributes task division files and resource allocation configurations to the EDs and APs. The task division file tells each device how much data it will process. The resource allocation configuration tells each device how much computing resource it will use. Besides, the schedule configuration tells each AP how to allocate the wireless communication resources among the EDs it controls, and how much wired bandwidth it can use for data submission to the CC. A hypothetical configuration is sketched below.
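Continuing the illustration, the configuration pushed to one AP might look as follows; again, the structure and field names are our assumptions, not EdgeFlow's actual file format.

# Hypothetical offloading configuration the CC might push to one AP,
# combining the task division file and the resource allocation/schedule
# configuration described above. Names and values are assumptions.
ap_config = {
    "node_id": "ap-01",
    "task_share": 0.30,          # fraction of the data flow this AP processes
    "cpu_quota": 0.80,           # fraction of local CPU granted to the task
    "wired_uplink_mbps": 100.0,  # wired bandwidth granted toward the CC
    "ed_slots": {"ed-03": 4,     # TDMA slots per controlled ED
                 "ed-04": 6},
}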

III-D Data Processing

After the CC completes the task offloading scheme, the system starts the processing procedure, which consists of the five steps below; a code sketch of the ED-side step follows the list.

  • Data Processing at Each ED: Each ED collects the raw data and processes the part of the data decided by the TATO.

  • Data Submission to Each AP: Each ED sends the processed results and the rest of the raw data to the corresponding AP through the wireless link.

  • Data Processing at Each AP: Each AP processes the part of the data decided by the TATO.

  • Data Submission to the CC: Each AP delivers its own processed data, the processed data from its controlled EDs, and the rest of the raw data to the CC through the wired link.

  • Data Processing at the CC: The CC processes the rest of the raw data. Finally, the CC summarizes the results and submits them to the user.
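As an illustration of how one round of this pipeline might look at an ED, consider the sketch below. The function names, the message format, and the stand-in process() are our assumptions, not EdgeFlow's actual API.

# Hypothetical one-round pipeline step at an ED: process its assigned share
# of the raw data and forward both the results and the remaining raw data
# to the AP. 'send_to_ap' and the message format are illustrative assumptions.

def ed_round(raw_data: bytes, task_share: float, send_to_ap):
    cut = int(len(raw_data) * task_share)
    to_process, remainder = raw_data[:cut], raw_data[cut:]
    results = process(to_process)       # local computation, e.g. feature extraction
    send_to_ap({"results": results,     # compressed outputs of local processing
                "raw": remainder})      # unprocessed data for the upper layers

def process(chunk: bytes) -> bytes:
    # Stand-in for the real task; here, a trivial 10:1 "compression" placeholder.
    return chunk[: max(1, len(chunk) // 10)]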

IV Time-Aligned Task Offloading

Before presenting the details of TATO, we formulate task offloading as a mathematical problem. Then, we explain TATO for the case with one ED and one AP as well as for the case with multiple EDs and multiple APs. Finally, we analyze the properties of TATO from the perspective of the generation speed of the data flow.

IV-A Analytical Model

After a user submits a task, the data generated by each ED arrive at a speed $v$, i.e., as a data flow. Then, in any given time span $T$, each ED generates $vT$ data. We use the task division percentages $\alpha_E$, $\alpha_A$, and $\alpha_C$ (with $\alpha_E + \alpha_A + \alpha_C = 1$) to describe the fractions of data that each ED, each AP, and the CC need to process, respectively. Then, as described in Section III-C, to cope with the data flow at speed $v$, five times are consumed to process or transmit the data:

  • Data Processing Time at Each ED, $t_1$: The amount of data that each ED needs to process is determined by $\alpha_E$, $v$, and $T$. Then, $t_1$ can be calculated as $t_1 = \alpha_E vT / f_E$, where $f_E$ is the computing speed of each ED [12].

  • Data Submission Time to Each AP, $t_2$: The data that each ED needs to transmit to the AP include the results processed by the ED and the rest of the raw data. Then, $t_2$ can be calculated as $t_2 = (\rho\alpha_E + \alpha_A + \alpha_C)vT / r_E$, where $\rho$ is the compression ratio after data processing, and $r_E$ is the communication speed, which depends on the wireless communication resources allocated to the ED [12].

  • Data Processing Time at Each AP, $t_3$: The raw data arriving at each AP amount to $(\alpha_A + \alpha_C)vT$, and the arriving time is the data submission time to each AP, $t_2$. Then, the raw data arriving speed, $v_A$, can be calculated as $v_A = (\alpha_A + \alpha_C)vT / t_2$. Finally, $t_3$ can be calculated as $t_3 = \alpha_A vT / f_A$, where $f_A$ is the computing speed of each AP.

  • Data Submission Time to the CC, $t_4$: The data that each AP needs to submit include the results processed by the EDs, the results processed by the AP itself, and the rest of the raw data. Similar to the analysis for each ED, the data submission time, $t_4$, can be calculated as $t_4 = (\rho(\alpha_E + \alpha_A) + \alpha_C)vT / r_A$, where $r_A$ is the communication speed, which depends on the wired bandwidth allocated to the AP.

  • Data Processing Time at the CC, $t_5$: The raw data arriving at the CC amount to $\alpha_C vT$, and the arriving time is the data submission time to the CC, $t_4$. Then, the raw data arriving speed, $v_C$, can be calculated as $v_C = \alpha_C vT / t_4$. Finally, $t_5$ can be calculated as $t_5 = \alpha_C vT / f_C$, where $f_C$ is the computing speed of the CC.

The whole data processing and transmission on the data flow can be regarded as a pipeline. The task finish time then depends on the longest of the times above, $t_{\max} = \max\{t_1, t_2, t_3, t_4, t_5\}$. When any data processing time ($t_1$, $t_3$, or $t_5$) equals $t_{\max}$, the computing speed is the system's bottleneck. When any data transmission time ($t_2$ or $t_4$) equals $t_{\max}$, the communication speed is the bottleneck that limits the task finish time. The objective is to minimize the longest time in the data processing and transmission:

\[
\min_{\boldsymbol{\alpha},\, \mathbf{f},\, \mathbf{r}} \;\; \max\{t_1, t_2, t_3, t_4, t_5\}, \tag{1}
\]

where the vector $\boldsymbol{\alpha} = (\alpha_E, \alpha_A, \alpha_C)$ represents the task division, the vector $\mathbf{f}$ represents the computing speeds of all devices, which is determined by the computing resource allocation, and the vector $\mathbf{r}$ represents the transmission speeds between pairs of devices, which is determined by the transmission resource allocation.
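To make the model concrete, the following Python sketch computes the five stage times for the one-ED, one-AP case, using the notation above. It is our illustration of the formulas, not code from the EdgeFlow repository.

# Pipeline-time model of Section IV-A (one ED, one AP, the CC).
def pipeline_times(alpha_e, alpha_a, alpha_c, v, T, rho, f_e, f_a, f_c, r_e, r_a):
    """Return (t1, ..., t5) for task shares alpha_* summing to 1, data rate v,
    time span T, compression ratio rho, computing speeds f_*, and
    transmission speeds r_e (wireless, ED->AP) and r_a (wired, AP->CC)."""
    data = v * T
    t1 = alpha_e * data / f_e                                # processing at the ED
    t2 = (rho * alpha_e + alpha_a + alpha_c) * data / r_e    # submission ED -> AP
    t3 = alpha_a * data / f_a                                # processing at the AP
    t4 = (rho * (alpha_e + alpha_a) + alpha_c) * data / r_a  # submission AP -> CC
    t5 = alpha_c * data / f_c                                # processing at the CC
    return t1, t2, t3, t4, t5

def finish_time(*args):
    return max(pipeline_times(*args))  # t_max: the pipeline bottleneck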

IV-B TATO with One ED and One AP

TATO is proposed to divide the task and to allocate the computing and communication resources. The network with one ED, one AP, and the CC is used to demonstrate the computing and communication tradeoff, the time-aligned principle, and the specific steps of TATO.

IV-B1 Computing and Communication Tradeoff

The computing and communication tradeoff exists on each device. For example, when the ED processes more data, it consumes more computing resources; however, since more data have been compressed into results, the ED transmits less data to the AP. A similar tradeoff can be observed at the AP. Thus, TATO first balances the computing and communication tradeoff based on the times calculated in Section IV-A.

IV-B2 Time-Aligned Principle

Since the whole data processing and transmission can be regarded as a pipeline, the ideal case is that every part of EdgeFlow keeps working, i.e., all data processing and transmission times remain equal; we call this the time-aligned principle. However, this ideal case can hardly occur when we analytically solve the time minimization problem in Section IV-A: some times cannot reach the longest time, which indicates that those stages work faster than the slowest one and part of their computing (or communication) resources is wasted. Fortunately, the time-aligned principle still helps even when the ideal case is unreachable: the minimum of the longest time is provably attained when as many times as possible are kept equal to it. This principle guides the design of the specific steps of TATO.
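Under the notation of Section IV-A, the ideal fully aligned case (all five stage times equal) can be written out explicitly; this restatement is ours, derived from the formulas above:

\[
\frac{\alpha_E vT}{f_E}
= \frac{(\rho\alpha_E + \alpha_A + \alpha_C)vT}{r_E}
= \frac{\alpha_A vT}{f_A}
= \frac{(\rho(\alpha_E + \alpha_A) + \alpha_C)vT}{r_A}
= \frac{\alpha_C vT}{f_C}.
\]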

IV-B3 TATO Scheme

TATO is divided into three steps, stated as follows and illustrated in Fig. 3.

Fig. 3: TATO for the network with one ED, one AP, and the CC.
  • Step 1. Task Division at the ED: When $t_1 > t_2$, it takes more time for the ED to process the data than to transmit them, which indicates that the ED uses too many computing resources and wastes some transmission resources. Therefore, the data processed by the ED are reduced and the ED transmits more raw data. If $t_1 < t_2$, it takes more time to transmit the data to the AP, so the computing resources on the ED are not fully used; TATO then lets the ED process more data. In this way, the algorithm reaches the optimal point where $t_1 = t_2$. (A special case is that $t_1 < t_2$ still holds even when all data are processed by the ED. This indicates that the transmission is too slow, and the optimal solution is to let all data be processed by the ED.)

  • Step 2. Task Division at the AP: To fully use the computing resources at the AP, we initialize the task division to maximize $\alpha_A$ under the constraint that $t_3 \le t_2$. Then, when $t_4 \le t_3$, the transmission speed is not the bottleneck, and the algorithm achieves an optimal solution. When $t_4 > t_3$, it takes more time to transmit the data to the CC, which makes the data wait for transmission. The system then allocates more data to the ED for processing and returns to Step 1. Through iterations (or by solving analytically), the algorithm reaches the optimal solution, where $t_4 \le t_3$.

  • Step 3. Task Division at the CC: At the CC, all the remaining data should be processed. When $t_5 \le t_4$, EdgeFlow reaches an optimal solution. When $t_5 > t_4$, it takes more time to process the data at the CC. TATO then lets more data be processed at the ED and the AP, repeating Steps 1 and 2, to reduce the processing time $t_5$. Through iterations (or by solving analytically), the algorithm reaches the optimal solution, where $t_5 \le t_4$.

Through the three steps above, the system achieves the optimal solution that minimizes the task finish time; a numerical sketch of this objective is given below.
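The brute-force sketch below illustrates the objective these steps optimize: it searches the task shares on a grid and keeps the division with the smallest bottleneck time. The real TATO follows the three aligned steps (or solves the problem analytically); this numerical version, which reuses pipeline_times() from the Section IV-A sketch, is only for illustration, and the example speeds are assumptions.

def tato_grid_search(v, T, rho, f_e, f_a, f_c, r_e, r_a, steps=100):
    # Exhaustively try task divisions (alpha_e, alpha_a, alpha_c) on a
    # simplex grid; reuses pipeline_times() from the Section IV-A sketch.
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            a_e, a_a = i / steps, j / steps
            a_c = 1.0 - a_e - a_a
            t = pipeline_times(a_e, a_a, a_c, v, T, rho, f_e, f_a, f_c, r_e, r_a)
            if best is None or max(t) < best[0]:
                best = (max(t), (a_e, a_a, a_c))
    return best  # (minimal t_max, corresponding task division)

# Example with assumed speeds: fast CC, slower AP, modest ED, 10:1 compression.
print(tato_grid_search(v=1.0, T=1.0, rho=0.1,
                       f_e=0.5, f_a=2.0, f_c=10.0, r_e=1.0, r_a=5.0))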

IV-C TATO with Multiple EDs and Multiple APs

The network with multiple EDs and multiple APs is further considered to demonstrate the resource allocation among devices. Based on the observations from the case with one ED and one AP, the following corollaries of TATO can be obtained intuitively.

IV-C1 Computing Resources Allocation

Since computing resources are held independently by each device, each device can independently adjust its computing speed and the computing resources provided for the task. As observed in the case with one ED and one AP, the optimal point of TATO is achieved when all devices make full use of their computing resources; this can also be proved for the case with multiple EDs and multiple APs. TATO therefore divides the tasks so that devices on the same layer have the same data processing time while all devices make full use of their computing resources.

IV-C2 Communication Resources Allocation

For the wireless communication resources, each AP allocates them among the multiple EDs it controls. The time-aligned principle also works in this case: TATO tries to keep each transmission time less than or equal to the data processing time on the same device, and then lets as many transmission times as possible reach this bound. For the wired transmission bandwidth, we assume it is independent among different APs; it is then intuitive for each AP to make full use of its wired bandwidth. A slot allocation sketch following this principle is given below.
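As a minimal sketch of the time-aligned allocation, one might divide the TDMA frame among an AP's EDs in proportion to the volume each must upload, so that all uploads finish together; the names and numbers below are assumptions.

def align_slots(upload_bits, total_slots):
    """upload_bits: {ed_id: bits to send}; return {ed_id: allocated slots}
    so that transmission times are (approximately) aligned across EDs."""
    total = sum(upload_bits.values())
    return {ed: max(1, round(total_slots * bits / total))
            for ed, bits in upload_bits.items()}

print(align_slots({"ed-03": 4e6, "ed-04": 6e6}, total_slots=10))
# -> {'ed-03': 4, 'ed-04': 6}: both EDs finish transmitting at the same time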

Similar to the case with one ED and one AP, TATO for multiple devices can also be divided into three steps. Due to the page limit, the similar statements are omitted.

IV-D Performance Analysis on the Data Generation Speed

Besides the optimality of the computing and communication tradeoff balancing, the task division, and the computing and transmission resource allocation, we discuss the properties of TATO under various data generation speeds.

IV-D1 Tasks with Light Data

Light data means that the task finish time $t_{\max}$ is shorter than the data arriving period $T$. Thus, each device in EdgeFlow has at least $T - t_{\max}$ time to deal with other tasks. When multiple tasks exist in the network, TATO therefore has the potential to support them as long as the sum of their task finish times is less than $T$. In this paper, we only consider one task for implementation; the multi-task case is left as a future direction.

IV-D2 Tasks with Heavy Data

Heavy data means that the task finish time $t_{\max}$ is longer than the data arriving period $T$; in other words, a data burst happens, and the raw data accumulate on each device. TATO tends to let the excess computing times on the various devices be equal, which spreads the overloaded data uniformly across devices. The advantage is that when the burst vanishes, EdgeFlow processes the accumulated data quickly in parallel and recovers for new tasks. A toy backlog model illustrating this behavior is sketched below.
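The toy model below illustrates this behavior under assumed numbers: while the per-period finish time exceeds the arrival period, unprocessed work accumulates; once the burst ends, the backlog drains at the spare rate.

def backlog_trace(T, t_max_normal, t_max_burst, burst_periods, total_periods):
    # Backlog (in seconds of pending work) after each arrival period.
    backlog, trace = 0.0, []
    for k in range(total_periods):
        t_max = t_max_burst if k < burst_periods else t_max_normal
        backlog = max(0.0, backlog + (t_max - T))  # overload accumulates, spare time drains
        trace.append(round(backlog, 2))
    return trace

print(backlog_trace(T=1.0, t_max_normal=0.6, t_max_burst=1.5,
                    burst_periods=5, total_periods=12))
# -> backlog grows by 0.5 per period during the burst, then drains by 0.4 per period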

In the following section, the implementation evaluates the efficiency of TATO in balancing the computing and communication tradeoff, dividing the task, and allocating resources. Moreover, the implementation reflects the properties of TATO under various data generation speeds.

V Implementation and Evaluation

In this section, we evaluate the EdgeFlow framework on a platform consisting of four EDs, two APs, and one CC, and compare the TATO scheme with two heuristic solutions.

Fig. 4: The implementation for the EdgeFlow framework.

V-A Experimental Setup

As shown in Fig. 4, a single server stands for the CC layer, and two NUC nodes that communicate with it perform as the two APs. The bandwidth of the wired link between each AP and the CC is set to a fixed value in the Mbps range, which is reasonable for the scale of wireless mesh backbones [13]. Each AP node connects with two ED nodes over USRP devices, which run at a bandwidth of 5 MHz and emulate a wireless time-division multiple access network with a transmission power of 20 dBm.

The CPU frequencies of each ED, AP, and CC are capped at fixed, layer-specific values. By default, the packet arrival rate is one packet per second, and each processed packet is reduced to 10% of the raw data size.

V-B Performance Analysis

We evaluate the performance of two heuristic solutions besides our TATO scheme. Pure cloud computing forwards the input stream directly to the CC, where all the processing work is accomplished centrally. Pure edge computing lets each ED process all of its input tasks and deliver only the results toward the cloud. To support the claim that TATO outperforms the other two schemes, we conduct experiments in two scenarios.

Fig. 5: Comparison of TATO and the heuristic methods (pure cloud computing and pure edge computing) in terms of the packet data size and system robustness.

The first experiment varies the amount of data in each packet and compares the average task finish time. Since more data require more computation and transmission resources, this experiment allows us to study the efficiency of the three schemes under different task burdens. As shown in Fig. 5(a), TATO performs the best in most cases. Moreover, the two heuristic solutions hit their bottlenecks earlier: some resource is exhausted, data start to accumulate over time, and the task finish time surges abruptly. For the pure edge computing scheme, the increase in packet size drains the computation resources at the EDs. For the pure cloud computing scheme, a traffic jam arises as the amount of data exceeds the capacity of the bandwidth between the ED and the AP. In contrast, the TATO scheme schedules the data processing work across all layers and consequently enhances the system throughput.

The other experiment injects two data bursts and observes how the task finish time changes over time. A data burst is a typical issue resulting from various causes, such as an increase in the amount of data, a reduction in available computing resources, or abrupt network congestion, as mentioned in Section IV-D. In this case, we analyze the system robustness for tasks with heavy data, i.e., how fast the system recovers to the stable state after the burst. During the burst, data accumulate somewhere and thus affect the task finish time. As shown in Fig. 5(b), the first burst causes data accumulation for the pure edge computing scheme, while the other two are hardly affected. After the pure edge scheme recovers from the data jam, a bigger burst arises and affects both heuristic schemes. By contrast, with the help of the TATO scheme, EdgeFlow is the most robust for tasks with heavy data.

VI Potential Applications

In the following, we introduce three potential applications of the EdgeFlow framework in 5G communication networks and beyond.

VI-A Network Function Virtualization

Network function virtualization (NFV) is a network architecture that virtualizes network functions onto general-purpose, high-volume servers [14]. Thanks to virtualization, there are abundant free computing and communication resources on the APs that can carry out extra jobs; how to efficiently utilize these resources, however, is a significant challenge. EdgeFlow provides an ideal way to leverage the benefits of function virtualization: with TATO, it can coordinate the task division among all the virtualized APs. In addition, EdgeFlow is built on open-source Linux operating systems and can be deployed directly in an NFV architecture without adding any new type of equipment.

VI-B Internet of Things

There are large numbers of IoT sensors monitoring the environment, whose data can be mined and analyzed. It is evident that the excessive demand from IoT sensors will quickly overwhelm the processing speed of the traditional cloud computing architecture. With the involvement of the EDs directly connected to the sensors and of the APs, EdgeFlow can enhance the system computing capability to handle the explosive data flows in IoT scenarios. Processing data before they reach the CC shrinks the data traffic, which relieves the communication pressure caused by limited wireless communication resources.

VI-C Vehicular Networks

Vehicular network technology senses vehicles' behaviors and thus enhances traffic safety [15]. Researchers estimate that each car generates more than one gigabyte of data every second. The data generated by a vehicle require real-time processing to make the right decisions, which directly affects traffic safety; thus, the computing tasks must be offloaded to the vehicles and the roadside units. EdgeFlow is an excellent candidate for vehicular networks, as it can reduce the response time with limited bandwidth consumption. Besides, its robustness allows it to handle traffic congestion scenarios efficiently.

VII Conclusion

In this paper, we proposed an open-source multi-layer data flow processing framework, EdgeFlow, which realizes a task offloading scheme, TATO. In EdgeFlow, the TATO scheme balances the tradeoff between the computing and communication capabilities of each device, optimally divides the tasks among the layers, and allocates the available resources throughout the whole network. The proposed framework was implemented on a platform that can emulate various computing nodes in multiple layers and the corresponding network connections. Evaluated on a mobile sensing scenario, the implementation showed its effectiveness by significantly reducing the task finish time and being more tolerant to run-time variation, compared with traditional cloud computing and the pure edge computing approach. The framework has also shown potential benefits for 5G communication networks and beyond in typical applications such as NFV, IoT, and vehicular networks.

References

  • [1] A. Checko, L. H. Christiansen, Y. Yan, and L. Scolari, “Cloud RAN for Mobile Networks: A Technology Overview,” IEEE Commun. Surveys & Tutorials, vol. 17, no. 1, pp. 405-426, Sept. 2014.
  • [2] W. Shi, J. Cao, Q. Zhang, Y. Li and L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet of Things J., vol. 3, no. 5, pp. 637-646, Oct. 2016.
  • [3] P. Mach and Z. Becvar, “Mobile Edge Computing: A Survey on Architecture and Computation Offloading,” IEEE Commun. Surveys & Tutorials, vol. 19, no. 3, pp. 1628-1656, Mar. 2017.
  • [4] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, “The Case for VM-Based Cloudlets in Mobile Computing,” IEEE Pervasive Computing, vol. 8, no. 4, pp. 14-23, Oct. 2009.
  • [5] K. Habak, M. Ammar, K. A. Harras, and E. Zegura, “Femto Clouds: Leveraging Mobile Devices to Provide Cloud Service at the Edge,” in Proc. IEEE 8th Int. Conf. Cloud Computing, New York, NY, Aug. 2015, pp. 9-16.
  • [6] D. F. Willis, A. Dasgupta, and S. Banerjee, “Paradrop: a multi-tenant platform for dynamically installed third party services on home gateways,” in Proc. ACM SIGCOMM Wksp. Distributed Cloud Computing, Maui, Hawaii, Sept. 2014, pp. 43-44.
  • [7] S. Yi, C. Li, and Q. Li, “A Survey of Fog Computing: Concepts, Applications and Issues,” in Proc. ACM MobiHoc Wksp. Mobile Big Data, New York, NY, Jun. 2015, pp. 37-42.
  • [8] T. G. Rodrigues, K. Suto, H. Nishiyama, and N. Kato, “Hybrid Method for Minimizing Service Delay in Edge Cloud Computing Through VM Migration and Transmission Power Control,” IEEE Trans. on Comput., vol. 66, no. 5, pp. 810-819, Oct. 2017.
  • [9] R. Dhar, G. George, A. Malani, and P. Steenkiste, “Supporting Integrated MAC and PHY Software Development for the USRP SDR,” in Proc. 1st IEEE Wksp. Netw. Technol. Softw. Defined Radio Netw., Reston, VA, Sept. 2006, pp. 68-77.
  • [10] The EdgeFlow framework is available at https://github.com/pyyaoer/EdgeFlow.
  • [11] I. Vilajosana, J. Llosa, B. Martinez, and M. Domingo-Prieto, “Bootstrapping smart cities through a self-sustainable model based on big data flows,” IEEE Commun. Mag., vol. 51, no. 6, pp. 128-134, Jun. 2013.
  • [12] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and Y. Zhang, “Energy-Efficient Offloading for Mobile Edge Computing in 5G Heterogeneous Networks,” IEEE Access, vol. 4, pp. 5896-5907, Aug. 2016.
  • [13] N. Kato, Z. M. Fadlullah, B. Mao, F. Tang, O. Akashi, T. Inoue, and K. Mizutani, “The Deep Learning Vision for Heterogeneous Network Traffic Control: Proposal, Challenges, and Future Perspective,” IEEE Wireless Commun., vol. 24, no. 3, pp. 146-153, Dec. 2016.
  • [14] R. Vilalta, A. Mayoral, D. Pubill, R. Casellas, R. Martínez, J. Serra, C. Verikoukis, and R. Munoz, “End-to-End SDN Orchestration of IoT Services Using an SDN/NFV-Enabled Edge Node,” in Proc. Optical Fiber Conf., Anaheim, CA, 2016.
  • [15] H. Wu, R. Fujimoto, and G. Riley, “Analytical Models for Information Propagation in Vehicle-to-Vehicle Networks,” in Proc. 60th IEEE Vehic. Tech. Conf., Los Angeles, CA, Apr. 2004, pp. 4548-4552.