CONDENSE: A Reconfigurable Knowledge Acquisition Architecture for Future 5G IoT

Dejan Vukobratovic, Dusan Jakovetic, Vitaly Skachek, Dragana Bajovic, Dino Sejdinovic, Gunes Karabulut Kurt, Camilla Hollanti, and Ingo Fischer

D. Vukobratovic is with the Department of Power, Electronics and Communications Engineering, University of Novi Sad, Serbia, e-mail: dejanv@uns.ac.rs. D. Jakovetic is with the BioSense Institute, Novi Sad, Serbia, and with the Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Serbia, e-mail: djakovet@uns.ac.rs. V. Skachek is with the Institute of Computer Science, University of Tartu, Estonia, e-mail: vitaly.skachek@ut.ee. D. Bajovic is with the BioSense Institute, Novi Sad, Serbia, and with the Department of Power, Electronics and Communication Engineering, University of Novi Sad, Serbia, e-mail: dbajovic@uns.ac.rs. D. Sejdinovic is with the Department of Statistics, University of Oxford, UK, e-mail: dino.sejdinovic@stats.ox.ac.uk. G. Karabulut Kurt is with the Department of Electronics and Communication Engineering, Istanbul Technical University, Turkey, e-mail: gkurt@itu.edu.tr. C. Hollanti is with the Department of Mathematics and Systems Analysis, Aalto University, Finland, e-mail: camilla.hollanti@aalto.fi. I. Fischer is with the Institute for Cross-Disciplinary Physics and Complex Systems (UIB-CSIC), Spain, e-mail: ingo@ifisc.uib-csic.es. D. Vukobratovic is financially supported by Rep. of Serbia TR III 44003 grant. V. Skachek is supported in part by the grant PUT405 from the Estonian Research Council. G. Karabulut Kurt is supported by TUBITAK Grant 113E294. C. Hollanti is financially supported by the Academy of Finland grants #276031, #282938 and #283262.
Abstract

In forthcoming years, the Internet of Things (IoT) will connect billions of smart devices generating and uploading a deluge of data to the cloud. If successfully extracted, the knowledge buried in the data can significantly improve the quality of life and foster economic growth. However, a critical bottleneck for realising the efficient IoT is the pressure it puts on the existing communication infrastructures, requiring transfer of enormous data volumes. Aiming at addressing this problem, we propose a novel architecture dubbed Condense (reconfigurable knowledge acquisition systems), which integrates the IoT-communication infrastructure into data analysis. This is achieved via the generic concept of network function computation: Instead of merely transferring data from the IoT sources to the cloud, the communication infrastructure should actively participate in the data analysis by carefully designed en-route processing. We define the Condense architecture, its basic layers, and the interactions among its constituent modules. Further, from the implementation side, we describe how Condense can be integrated into the 3rd Generation Partnership Project (3GPP) Machine Type Communications (MTC) architecture, as well as the prospects of making it a practically viable technology in a short time frame, relying on Network Function Virtualization (NFV) and Software Defined Networking (SDN). Finally, from the theoretical side, we survey the relevant literature on computing “atomic” functions in both analog and digital domains, as well as on function decomposition over networks, highlighting challenges, insights, and future directions for exploiting these techniques within practical 3GPP MTC architecture.

Internet of Things (IoT), Big Data, Network Coding, Network Function Computation, Machine Learning, Wireless Communications.

I Introduction

A deluge of data is being generated by an ever-increasing number of devices that indiscriminately collect, process and upload data to the cloud. An estimated 20 to 40 billion devices will be connected to the Internet by 2020 as part of the Internet of Things (IoT) [1]. IoT has the ambition to interconnect smart devices across cities, vehicles, appliances, connecting industries, retail and healthcare domains, thus becoming a dominant fuel for the emerging Big Data revolution [2]. IoT is considered as one of the key technologies to globally improve the quality of life, economic growth, and employment, with the European Union market value expected to exceed one trillion Euros in 2020 [3]. However, a critical bottleneck for the IoT vision is the pressure it puts on the existing communication infrastructures, by requiring transfer of enormous amounts of data. By 2020 IoT data will exceed 4.4 ZB (zettabytes), amounting to 10% of the global “digital universe” (compared to 2% in 2013) [4]. Therefore, a sustainable solution for IoT and cloud integration is one of the main challenges for contemporary communications technologies.

The state-of-the-art in IoT/cloud integration assumes uploading and storing all the raw data generated by IoT devices to the cloud. The IoT data is subsequently processed by cloud-based data analysis that aims to extract useful knowledge [5]. For the majority of applications, this approach is inefficient since there is typically a large amount of redundancy in the collected data. As a preprocessing step prior to data analysis, projections to a much lower-dimensional space are often employed, essentially discarding large portions of data. With the growth of IoT traffic, the approach where communications and data analysis are separated will become unsustainable, necessitating a fundamental redesign of IoT communications.

In this work, we propose a generic and reconfigurable IoT architecture capable of adapting the IoT data transfer to the subsequent data analysis. We refer to the proposed architecture as Condense (reconfigurable knowledge acquisition systems). Instead of merely transferring data, the proposed architecture provides an active and reconfigurable service leveraged by the data analysis process. We identify a common generic interface between data communication and data analysis: the function computation, and we distinguish it as a core Condense technology. Instead of communicating a stream of data units from the IoT devices to the cloud, the proposed IoT architecture processes the data units en-route through a carefully designed process to deliver a stream of network function evaluations stored in the cloud. In other words, the Condense architecture does not transfer all the raw data across the communications infrastructure, but only what is needed from the perspective of the current application at hand.

To illustrate the idea with a toy example, consider a number of sensors which constitute a fire alarm system, e.g., [6][7]. Therein, we might only be interested in the maximal temperature across the sensed field, and not in the full sensors readings vector. Therefore, it suffices to deliver to the relevant cloud application only an evaluation of the maximum function applied over the sensors readings vector; Condense realizes this maximum function as a composition of “atomic” functions implemented across the communications infrastructure.
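The decomposition in this toy example can be made concrete in a few lines of Python (gateway names and temperature readings below are hypothetical, purely for illustration): the global maximum decomposes into per-gateway atomic maxima, so each hop forwards a single value rather than the full readings vector.

```python
# Toy sketch: the global maximum decomposes into atomic maxima
# computed en-route, so each hop forwards one value instead of
# the full vector of sensor readings.

def atomic_max(readings):
    """Atomic function run at a gateway over its local inputs."""
    return max(readings)

# Hypothetical deployment: three gateways, each serving a few sensors.
gateway_readings = {
    "gw1": [21.5, 22.0, 23.1],
    "gw2": [20.9, 47.3],        # one sensor near a fire
    "gw3": [19.8, 21.2, 20.5],
}

# Each gateway condenses its readings to one value (en-route AFC)...
per_gateway = [atomic_max(r) for r in gateway_readings.values()]
# ...and the cloud application composes them into the global answer.
global_max = atomic_max(per_gateway)
assert global_max == 47.3
```

Only three values cross the backhaul instead of eight, and the cloud still receives exactly the quantity the application asked for.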

We describe how to implement the proposed approach within the concrete 3rd Generation Partnership Project (3GPP) Machine Type Communications (MTC) architecture [8]. The 3GPP MTC service is expected to contribute a dominant share of the IoT traffic via the upcoming fifth generation (5G) mobile cellular systems, thus providing an ideal setup for the demonstration of Condense concepts. We enhance the 3GPP MTC architecture with the network function computation (NFC) – a novel envisioned MTC-NFC service. We define the layered Condense architecture comprised of three layers: i) atomic function computation layer, ii) network function computation layer, and iii) application layer, and we map these layers onto the 3GPP MTC architecture. In the lowermost atomic function computation (AFC) layer, carefully selected atomic modules perform local function computations over the input data. The network function computation layer orchestrates the collection of AFC modules into the global network-wide NFC functionality, thus evaluating non-trivial functions of the input data as a coordinated composition of AFCs. Furthermore, the NFC layer provides a flexible and reconfigurable MTC-NFC service to the topmost application layer, where cloud-based data analysis applications directly exploit the outputs of the NFC layer. Throughout the system description, we provide a review of the theoretical foundations that justify the proposed architecture and point to the tools for the system design and analysis. Finally, we detail the practical viability of incorporating NFC services within the 3GPP MTC service, relying on the emerging concepts of Network Function Virtualization (NFV) [9][10] and Software Defined Networking (SDN) [11][12]; this upgrade is, thanks to the current uptake of the SDN/NFV concepts, achievable within a short time frame.

This paper is complementary to other works that consider architectures for 5G IoT communications. For example, reference [13] focuses on machine-type multicast services to ensure end-to-end reliability, low latency and low energy consumption of MTC traffic (in both uplink and downlink). Reference [14] provides a detailed analysis of integration of 5G technologies for the future global IoT, both from technological and standardization aspects. However, while existing works consider making communication of the MTC-generated raw data efficient, here we aim to improve the overall system efficiency by communicating over the network infrastructure only the application-requested functions of the data. In other words, this paper describes how we can potentially exploit decades of research on function computation and function decomposition over networks within a concrete, practical and realizable knowledge acquisition system for the IoT-generated data. In particular, we review the main results on realizing (atomic) function computation in the analog (wireless and optical) and digital domains, as well as on function evaluation and decomposition over networks, including the work on sensor fusion, e.g., [6, 7], network coding for computing [15]–[19], and neural networks [20]–[24]. While this paper does not provide novel contributions to these fields, it identifies and discusses the main challenges in applying them within the practical 3GPP MTC architecture, and it points to interesting future research directions.

Paper organization. The rest of the paper is organized as follows. In Sec. II, we review the state-of-the-art 3GPP MTC architecture, briefly present SDN/NFV concepts, and give notational conventions. In Sec. III, we introduce the novel layered Condense architecture that, through the rest of the paper, we integrate into the 3GPP MTC architecture. In Sec. IV, we describe the atomic function computation layer that defines the basic building block of the architecture, distinguishing between the analog (or in-channel) AFC and digital (in-node) AFC modules. The theoretical fundamentals and practical aspects of the NFC layer are presented in Sec. V. In Sec. VI, the interaction between the application layer and the NFC layer is discussed, where several application layer examples are presented in detail. Further implementation issues are discussed in Sec. VII, and the paper is concluded in Sec. VIII.

II Background and Preliminaries

Subsection II-A reviews the current 3GPP MTC architecture, Subsection II-B gives background on software defined networking (SDN) and network function virtualization (NFV), while Subsection II-C defines notation used throughout the rest of the paper.

II-A The 3GPP MTC Architecture

Machine Type Communications (MTC) is a European Telecommunications Standards Institute (ETSI)-defined architecture that enables participating devices to send data to each other or to a set of servers [8]. While ETSI is responsible for defining the generic MTC architecture, specific issues related to mobile cellular networks are addressed in 3GPP standardization [25]. 3GPP MTC was first included in Release 10 and will evolve beyond the current 3GPP Long Term Evolution (LTE)/LTE-Advanced Releases into the 5G system [26].

Fig. 1: The 3GPP MTC architecture.

Fig. 1 illustrates the 3GPP MTC architecture. It consists of: i) the MTC device domain containing MTC devices that access MTC service to send and/or receive data, ii) the network domain containing network elements that transfer the MTC device data, and iii) the MTC application domain containing MTC applications running on MTC servers. MTC devices access the network via Radio Access Network (RAN) elements: base stations (eNB: eNodeB) and small cells (HeNB: Home-eNodeB). Packet data flows follow the Evolved Packet Core (EPC) elements: HeNB Gateway (HeNB-GW), Service Gateway (S-GW) and Packet Gateway (P-GW), until they reach either a mobile operator MTC server or a third party MTC server via the Internet. In this work, we address MTC device data processing and focus on the data plane while ignoring the control plane of the 3GPP architecture.

Abstracted to its essence, the current 3GPP MTC approach in the context of IoT/cloud integration is represented by three layers (Fig. 1). The MTC device domain, or data layer, contains billions of devices that generate data while interacting with the environment. The network domain, or communication layer, provides mere data transfer services to the data layer by essentially uploading the generated data to the cloud in its entirety. The application domain, or application layer, contains data centres running MTC servers which provide storage and processing capabilities. MTC applications running in data centres enable, e.g., machine learning algorithms to extract knowledge from the collected data. In this paper, we challenge this 3GPP MTC layered structure and propose a novel Condense layered architecture described in Sec. III.

II-B Software Defined Networking (SDN) and Network Function Virtualization (NFV)

SDN and NFV are novel concepts in networking research that increase network flexibility and enable fast implementation of new services and architectures. Both SDN and NFV are currently under consideration for future integration in the 3GPP cellular architecture [27][28]. Although not yet part of the 3GPP architecture, in Fig. 1, we present the main NFV/SDN management entities: the NFV manager and the SDN controller, as they will be useful for the description of the Condense architecture.

SDN is a novel network architecture that decouples the control plane from the data plane [11][29]. This is achieved by centralizing the traffic flow control where a central entity, called the SDN controller, manages the physical data forwarding process in SDN-enabled network nodes. The SDN controller remotely manages network nodes and flexibly controls traffic flows at various flow granularities. In other words, the SDN controller can easily (re)define forwarding rules for data flows passing through an SDN network node. Using SDN, various network services are able to quickly re-route their data flows, adapting the resulting (virtual) network topology to their needs.

NFV is another recent trend in networking where, instead of running various network functions (e.g., firewalls, NAT servers, load balancers, etc.) on dedicated network nodes, the network hardware is virtualized to support software-based implementations of network functions [10]. This makes network functions easy to instantiate anywhere across the network when needed. Multiple instances of network functions are jointly administered and orchestrated by the centralized NFV management.

NFV and SDN are complementary concepts that jointly provide flexible and efficient service chaining: a sequence of data processing tasks performed at different network nodes [30]. The NFV manager has the capability to actually instantiate the targeted (atomic) function computations at each node in the network. Similarly, SDN has the power to steer data flows and hence establish a desired (virtual) network topology which supports the desired network-wide computation. This feature will be fundamental for a fast implementation and deployment of the Condense architecture, as detailed in the rest of the paper. For more details about SDN/NFV concepts in 3GPP networks, we refer the interested reader to [27].
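The interplay between the two planes can be sketched in a few lines of Python (the `instantiate` and `set_path` calls below are illustrative stand-ins, not a real SDN/NFV API): the NFV manager places functions at nodes, the SDN controller pins a flow to a path, and a packet traversing the path is processed by the resulting service chain.

```python
# Illustrative sketch of service chaining (hypothetical API, not a
# real controller): the "NFV manager" instantiates named functions
# at nodes, the "SDN controller" assigns a flow its path, and the
# packet is processed in sequence along that path.

node_functions = {}          # node id -> processing function (NFV view)

def instantiate(node, fn):   # NFV manager action
    node_functions[node] = fn

def set_path(flow, path):    # SDN controller action
    flow["path"] = path

instantiate("A", lambda pkt: pkt + "|firewall")   # e.g., a firewall tag
instantiate("B", lambda pkt: pkt.upper())         # e.g., a transcoder

flow = {}
set_path(flow, ["A", "B"])   # virtual topology chosen by the controller

def forward(pkt, flow):
    for node in flow["path"]:
        pkt = node_functions[node](pkt)
    return pkt

assert forward("data", flow) == "DATA|FIREWALL"
```

Reconfiguring the service then amounts to re-calling `instantiate` or `set_path`, which mirrors how SDN/NFV would let the Condense layers be redeployed on demand.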

II-C Notational Preliminaries

Throughout, we use bold symbols to denote vectors, where the $i$-th entry of a vector $\mathbf{x}$ of length $n$ is denoted by $x_i$, $i = 1, \dots, n$. We denote by $\mathbb{R}$ the set of real numbers, and by $\mathbb{R}^n$ the $n$-dimensional real coordinate space. A finite field is denoted by $\mathbb{F}$, a finite alphabet (finite discrete set) by $\mathcal{A}$, and by $\mathcal{A}^n$ the set of $n$-dimensional vectors with entries from $\mathcal{A}$. Symbol $|\cdot|$ denotes the cardinality of a set. We deal with vectors $\mathbf{x} \in \mathbb{R}^n$, $\mathbf{x} \in \mathbb{F}^n$, and also $\mathbf{x} \in \mathcal{A}^n$, and it is clear from context which of the three cases is in force. Also, addition and multiplication over both $\mathbb{R}$ and $\mathbb{F}$ are denoted in a standard way – respectively as $+$ and $\cdot$ (or the multiplication symbol is simply omitted), and again the context clarifies which operation is actually applied.

We frequently consider a directed acyclic graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the set of nodes, and $\mathcal{E}$ the set of directed edges (arcs). An arc from node $u$ to node $v$ is denoted by $(u, v)$. Set $\mathcal{V} = \mathcal{V}_S \cup \mathcal{V}_A \cup \mathcal{V}_D$, where $\mathcal{V}_S$, $\mathcal{V}_A$, and $\mathcal{V}_D$ denote, respectively, the set of source nodes, atomic nodes, and destination nodes. We let $N = |\mathcal{V}|$. We also introduce $S = |\mathcal{V}_S|$, $A = |\mathcal{V}_A|$, and $D = |\mathcal{V}_D|$. As we will see further ahead, source nodes correspond to MTC devices ($S$ data generators), atomic nodes correspond to the 3GPP communication infrastructure nodes which implement atomic functions ($A$ atomic nodes), and destination nodes are MTC servers in data centers which are to receive the desired function computation results ($D$ destination nodes). We index an arbitrary node in $\mathcal{V}_S$ by $s$, and similarly we write $a \in \mathcal{V}_A$ and $d \in \mathcal{V}_D$. When we do not intend to make a distinction among $s$, $a$, and $d$, we index an arbitrary node by $v$. For each node $v$, we denote by $\mathcal{N}_{\mathrm{in}}(v)$ its in-neighborhood, i.e., the set of nodes $u$ in $\mathcal{V}$ such that the arc $(u, v)$ exists. Analogously, $\mathcal{N}_{\mathrm{out}}(v)$ denotes the node $v$'s out-neighborhood. As we frequently deal with in-neighborhoods, we will simply write $\mathcal{N}(v) := \mathcal{N}_{\mathrm{in}}(v)$. We call the in-degree of $v$ the cardinality of $\mathcal{N}(v)$, and we analogously define the out-degree. Although not required by the theory considered ahead, just for simplicity of notation and presentation, all sections except Section V consider the special case where $\mathcal{G}$ is a directed rooted tree with $S$ sources and a single destination. Pictorially, we visualize $\mathcal{G}$ as having the source nodes at the bottom, and the destination node at the top (see Sec. V, Fig. 6, right-hand side). In the case of a directed rooted tree graph $\mathcal{G}$, the set of child nodes of a node coincides with its in-neighborhood $\mathcal{N}(v)$, all nodes except the destination node have out-degree one, and the destination node has out-degree zero.

We index (vector) quantities associated with sources through subscripts, i.e., $\mathbf{x}_s$ is the source $s$'s vector. When considering a generic directed acyclic graph $\mathcal{G}$ (Section V), we associate to each arc $(u, v)$ a vector quantity $\mathbf{x}_{(u,v)}$. With directed rooted trees (Sections III, IV, and VI), each node (except the destination node) has out-degree one; hence, for simplicity, we then use node-wise (as opposed to edge-wise) notation, i.e., we index quantity $\mathbf{x}_{(u,v)}$ as $\mathbf{x}_u$. Note that this notation is sufficient as, with directed rooted trees, there is only a single arc outgoing from a (non-destination) node. When needed, time instances are denoted by $t$; a vector associated with source $s$ and time $t$ is denoted by $\mathbf{x}_s(t)$; similarly, we use $\mathbf{x}_v(t)$ for non-source nodes.

III CONDENSE Architecture: IoT/Cloud Integration for 5G

In this section, we present the Condense architecture that upgrades the 3GPP MTC architecture with the concept of network function computation (NFC). NFC creates a novel role that the 3GPP MTC service should offer: instead of communicating raw data, it should deliver function computations over the data, providing for a novel MTC-NFC service. The NFC design should be generic, flexible and reconfigurable to meet the needs of an increasing number of MTC applications that extract knowledge from MTC data. For most applications, indiscriminate collection of MTC data is extremely wasteful, and the MTC-NFC service may dramatically reduce MTC traffic while preserving the operational efficiency of MTC applications.

The Condense architecture challenges the conventional division into data, communications and application layers (Sec. II-A). Instead, we propose a novel architecture consisting of: i) the atomic function computation (AFC) layer, ii) the network function computation (NFC) layer, and iii) the application layer. In this section, we provide a high-level modular description of the architecture by carefully defining its basic building blocks (modules). In the following three sections, we delve into details of each layer and provide both the theoretical justifications and the implementation discussion that motivated this work.

III-A CONDENSE Architecture: Modules and Layers

The Condense architecture is presented in Fig. 2. It consists of an interconnected collection of basic building blocks called AFC modules. Each AFC module evaluates an (atomic) function over the input data packets and delivers an output data packet representing the atomic function evaluation. A generic AFC module may have multiple input and multiple output interfaces, each output interface representing a different AFC over the input data. The collection of interconnected and jointly orchestrated AFC modules delivers a network function computation over the source data packets. The resulting NFC evaluations are the input to the application-layer MTC server applications.

Fig. 2: Condense MTC-NFC architecture.

Let us assume that an MTC network contains $S$ MTC devices representing the set of source modules (or source nodes) $\mathcal{V}_S$. Source node $s \in \mathcal{V}_S$ produces a message $\mathbf{x}_s \in \mathcal{A}^n$ containing $n$ symbols from a given alphabet $\mathcal{A}$. The message is transmitted at an output interface of the source module $s$. For simplicity, we assume that every source module has a single output interface.

In addition to the source nodes, the MTC network contains $A$ AFC modules (or AFC nodes) representing the set $\mathcal{V}_A$. An arbitrary AFC node $a \in \mathcal{V}_A$ has $K_a$ input and $L_a$ output interfaces. For simplicity, unless otherwise stated, we will assume single-output AFC modules, i.e., $L_a = 1$. At its $K_a$ input interfaces, the AFC node receives the set of input data packets $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{K_a}$, while at the output interface, it delivers the output data packet $\mathbf{y}_a$. AFC node $a$ associates an atomic function $g_a$ to the output interface, where $\mathbf{y}_a = g_a(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{K_a})$. Finally, the MTC network contains $D$ MTC servers (or destination nodes) representing the set of destination nodes $\mathcal{V}_D$.

The source nodes $\mathcal{V}_S$, AFC nodes $\mathcal{V}_A$ and destination nodes $\mathcal{V}_D$ are interconnected into an NFC graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes (modules) and $\mathcal{E}$ is the set of edges, i.e., connections between modules. For simplicity, unless otherwise stated, we restrict our attention to directed rooted trees (also called in-trees), where each edge is oriented towards the root node (we note that this restriction is for simplicity of presentation only; the extension to directed acyclic graphs is straightforward and will be required in Sec. V). Source nodes represent leaves of $\mathcal{G}$. The set of all edges in the graph is completely determined by the set of child nodes of all AFC and destination nodes. We let $\mathcal{C}(v)$ denote the set of child nodes of an arbitrary node $v$. The collection of sets $\{\mathcal{C}(v)\}$ fully describes the set of connections between modules.

Finally, we introduce the control elements: the topology processor and the function processor, which organize AFC modules into a global NFC evaluator. Based on the MTC server application requirements, the topology and function processors reconfigure the AFC modules to provide a requested MTC-NFC service. In particular, the function processor decomposes a required global NFC into a composition of local AFCs and configures each AFC module accordingly. In other words, based on the requested global network function $f$, the function processor defines a set of atomic functions $\{g_a\}$ and configures the respective AFC modules. Similarly, by defining the graph $\mathcal{G}$ via the sets $\{\mathcal{C}(v)\}$ and by configuring each AFC node accordingly, the topology processor interconnects AFC modules into a directed graph of MTC data flows. The topology and function processors are key NFC layer entities. They manage, connect and orchestrate the AFC layer entities, i.e., source modules and AFC modules.
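The roles of the two processors can be sketched in Python (node names, functions and data below are hypothetical, purely illustrative): the topology processor supplies the child sets of each node, the function processor assigns an atomic function to each AFC node, and the requested global function is then evaluated bottom-up over the rooted tree.

```python
# Sketch of NFC-layer reconfiguration (hypothetical names): the
# topology processor outputs the child sets, the function processor
# outputs the atomic function of each node, and evaluation proceeds
# bottom-up from the source leaves to the destination root.

children = {                 # topology processor output: child sets
    "dest": ["afc1", "afc2"],
    "afc1": ["s1", "s2"],
    "afc2": ["s3", "s4"],
}
atomic = {                   # function processor output: atomic functions
    "afc1": sum,
    "afc2": sum,
    "dest": sum,             # the root composes the final result
}
source_data = {"s1": 1, "s2": 2, "s3": 3, "s4": 4}

def evaluate(node):
    if node in source_data:                       # leaf: MTC device
        return source_data[node]
    return atomic[node]([evaluate(c) for c in children[node]])

assert evaluate("dest") == 10    # global sum of all source values
```

Replacing the entries of `atomic` and `children` reconfigures the global function and the virtual topology, respectively, without touching the sources.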

III-B CONDENSE Architecture: Implementation

The abstract Condense architecture described above can be mapped onto the 3GPP MTC architecture. We present initial insights here, while details are left for the following sections.

The AFC layer is composed of AFC modules that evaluate atomic functions. Examples of atomic functions suitable for AFC implementations are the addition, modulo addition, maximum/minimum, norm, histogram, linear combination, threshold functions, etc. Atomic functions can be evaluated straightforwardly in the digital domain using digital processing in network nodes. In addition to that, atomic functions could be realized by exploiting superposition of signals in the analog domain. Thus, we consider two types of AFC modules: i) Analog-domain AFC (A-AFC), and ii) Digital-domain AFC (D-AFC) modules.
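A D-AFC implementation might expose the atomic examples above as a small registry of packet-level functions. The sketch below is illustrative only (plain Python callables over lists of numeric inputs; the registry, names, and defaults are assumptions, not a specified interface):

```python
from collections import Counter

# Hypothetical D-AFC atomic-function registry covering the examples
# named in the text; each function maps a list of inputs to an output.

ATOMIC = {
    "sum":       lambda xs: sum(xs),
    "mod_sum":   lambda xs, q=256: sum(xs) % q,     # modulo addition
    "max":       lambda xs: max(xs),
    "min":       lambda xs: min(xs),
    "histogram": lambda xs: dict(Counter(xs)),
    "threshold": lambda xs, t=10: [x > t for x in xs],
}

xs = [3, 7, 7, 12]
assert ATOMIC["sum"](xs) == 29
assert ATOMIC["max"](xs) == 12
assert ATOMIC["histogram"](xs) == {3: 1, 7: 2, 12: 1}
assert ATOMIC["threshold"](xs) == [False, False, False, True]
```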

An A-AFC, also referred to as an in-channel AFC, harnesses interference in a wireless channel or signal combining in an optical channel to perform atomic function evaluations. An example of the technology that can be easily integrated as an A-AFC module is the Physical Layer Network Coding (PLNC) [31][32], where the corresponding atomic function is finite field addition, e.g., bit-wise modulo 2 sum in the case of the binary field.

A D-AFC, also referred to as an in-node AFC, evaluates atomic functions in the digital domain using, e.g., reconfigurable hardware-based modules in the context of an SDN-based implementation [29]. Alternatively, it can be implemented using software-based virtual network functions in the context of an NFV-based implementation [10]. An example of the technology that can be easily integrated as a D-AFC module is packet-level Random Linear Network Coding (RLNC) [33][34]. RLNC is a mature technology in terms of optimized software implementations (see, e.g., [35]), and it evaluates linear combinations over finite fields as atomic functions. We note that it has recently been proposed and demonstrated within the SDN/NFV framework [36][37].
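A minimal RLNC-style atomic function can be sketched over GF(2), where field addition is bitwise XOR (this is an illustration of the principle only, not the optimized implementations of [35]; packet contents and the coefficient-drawing scheme are arbitrary choices):

```python
import random

# Minimal sketch of an RLNC-style D-AFC over GF(2): the atomic
# function is a random linear combination of the incoming packets,
# with addition realized as bitwise XOR and coefficients drawn as
# random bits.

def rlnc_combine(packets, rng):
    """Return (coefficients, coded packet) over GF(2)."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    out = bytes(len(packets[0]))                  # all-zero packet
    for c, pkt in zip(coeffs, packets):
        if c:                                     # coefficient 0 or 1
            out = bytes(a ^ b for a, b in zip(out, pkt))
    return coeffs, out

rng = random.Random(7)
p1, p2 = b"\x0f\xf0", b"\x33\x33"
coeffs, combo = rlnc_combine([p1, p2], rng)

# Linearity check: the coded packet equals the same combination
# recomputed coefficient by coefficient.
expected = bytes(
    ((p1[i] if coeffs[0] else 0) ^ (p2[i] if coeffs[1] else 0))
    for i in range(len(p1))
)
assert combo == expected
```

A destination that gathers enough linearly independent coded packets (tracked via the coefficient vectors) can invert the system and recover the sources, which is what makes such linear atomic functions composable across the network.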

The NFC layer can be naturally implemented within the SDN/NFV architecture. In particular, the topology processor naturally fits as an SDN application running on top of the SDN controller within the SDN architecture. In addition, the function processor role may be set within an NFV manager entity, e.g., taking the role of the NFV orchestrator. Using the SDN/NFV framework, MTC-NFC service can be quickly set and flexibly reconfigured according to requests arriving from a diverse set of MTC applications.

IV Atomic Function Computation Layer

In this section, we discuss theoretical and implementation aspects of realizing atomic functions within AFC modules. Subsection IV-A discusses the AFC modules operating in the analog domain, while Subsection IV-B considers digital-domain AFCs.

IV-A Analog-domain Atomic Function Computation (A-AFC)

Wireless-domain A-AFC: Theory. An A-AFC module’s functionality of computing functions over the incoming packets is based on harnessing interference, i.e., the superposition property of wireless channels. We survey the relevant literature on such function computation over wireless channels, concluding the subsection with a summary of current theoretical and technological capabilities.

The idea of harnessing interference for computation is investigated in terms of a joint source-channel communication scheme in [38], aiming to exploit multiple access channel characteristics to obtain an optimal estimate of a target parameter from noisy sensor readings. Extensions of the analog joint source-channel communication are further investigated in the literature, see e.g., [39, 40, 41, 42]. Following the impact of network coding ideas across networking research, reference [31] proposes the concept of PLNC to increase the throughput of wireless channels; PLNC essentially performs specific A-AFC computations (finite field arithmetic) in a simple two-way relay channel scenario. Computation of linear functions or, more precisely, random linear combinations of the transmitted messages over multiple access channels (MAC) has been considered in [43] and extended in [44]; therein, the authors propose the compute-and-forward (CF) transmission scheme for computing linear functions at the relays, which attempt to decode the received random message combinations (the randomness is induced by the fading channel coefficients) to integer combinations, which hence become lattice points in the original code lattice. After this, the relays forward the lattice points to the destination, which can then solve for the original messages provided that the received system of equations is invertible.
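The algebraic core of the CF idea can be illustrated without lattices (a toy sketch: the field F_13, the messages, and the integer coefficient vectors below are arbitrary choices): each relay forwards an integer combination of the source messages, and the destination recovers the messages whenever the coefficient matrix is invertible over the field.

```python
# Toy illustration of the compute-and-forward principle over F_13
# (no lattice coding): relays decode integer combinations of the
# messages; the destination solves the resulting 2x2 system mod p.

p = 13
x1, x2 = 5, 9                                # source messages in F_p
A = [[1, 2], [3, 1]]                         # relays' integer coefficients
r = [(1 * x1 + 2 * x2) % p,                  # relay 1's computation
     (3 * x1 + 1 * x2) % p]                  # relay 2's computation

det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
det_inv = pow(det, -1, p)                    # invertible => recoverable
y1 = (A[1][1] * r[0] - A[0][1] * r[1]) % p * det_inv % p
y2 = (-A[1][0] * r[0] + A[0][0] * r[1]) % p * det_inv % p
assert (y1, y2) == (x1, x2)                  # messages recovered
```

The lattice machinery of [43, 44] is what lets noisy real-valued superpositions be reliably mapped to such integer combinations in the first place; the sketch above only shows the final linear-algebra step.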

Finally, reference [45] addresses non-linear function computation over wireless channels (see also [43]). While it is intuitive that a linear combination of packets (signals) from multiple sources can be obtained through a direct exploitation of interference, more general, non-linear functions can also be computed through introducing a non-linear (pre-)processing of packets prior to entering the wireless medium, and their (post-)processing after the pre-processed signals have been superimposed in the wireless channel.

Fig. 3: Wireless-domain A-AFC module: representation via 3GPP elements (left), modules (middle) and NFC graph nodes (right).

Following [45], we now describe in more detail how this non-linear function computation works – and hence how the A-AFC modules (in principle) operate. Assume that length-$n$ source node data packets $\mathbf{x}_1, \dots, \mathbf{x}_K$ arrive at the input interfaces of an AFC node $a$. Each packet $\mathbf{x}_k$ is first pre-processed by the source node (MTC device) $k$ through a pre-processing function $\varphi_k$. The result is the transmitted symbol sequence $\mathbf{u}_k$, where $u_{k,i} = \varphi_k(x_{k,i})$, $i = 1, \dots, n$. Assuming a block-fading wireless channel model for narrowband signals, the received symbol sequence $\mathbf{y}$ can be modelled symbol-wise as:

$$y_i = \sum_{k=1}^{K} h_k\, u_{k,i} + z_i, \qquad i = 1, \dots, n, \qquad (1)$$

where $h_k$ is the (block-fading) channel coefficient of device $k$ and $z_i$ is additive channel noise.

At the destination, a post-processing function $\psi$ is used to obtain the estimate $\hat{\mathbf{f}}$, where $\hat{f}_i = \psi(y_i)$, $i = 1, \dots, n$. Therefore, symbol-wise, the A-AFC module $a$ realizes computation of the following (possibly non-linear) function:

$$f(x_{1,i}, \dots, x_{K,i}) = \psi\!\left(\sum_{k=1}^{K} \varphi_k(x_{k,i})\right), \qquad i = 1, \dots, n, \qquad (2)$$

assuming the channel coefficients $h_k$ in (1) are compensated for, e.g., by transmit-side channel inversion.

The class of functions computable via A-AFC modules, i.e., which are of form (2), are called nomographic, and they include important functions such as the arithmetic mean and the Euclidean norm [45].
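The geometric mean is another nomographic example and makes the roles of $\varphi_k$ and $\psi$ concrete: pre-process each reading with a logarithm, let the channel superimpose (sum) the transmitted values, and post-process with $\exp(\cdot/K)$. The sketch below simulates an ideal noiseless, unit-gain channel in software, purely to illustrate the decomposition:

```python
import math

# Sketch of a nomographic computation in the sense of Eq. (2):
# geometric mean = psi(sum_k phi_k(x_k)) with phi_k = log and
# psi = exp(./K); the channel sum is simulated in software.

def a_afc_geometric_mean(readings):
    K = len(readings)
    pre = [math.log(x) for x in readings]     # phi_k at each device
    superposed = sum(pre)                     # ideal noiseless MAC sum
    return math.exp(superposed / K)           # psi at the receiver

result = a_afc_geometric_mean([1.0, 2.0, 4.0])
assert abs(result - 2.0) < 1e-9               # geometric mean of 1, 2, 4
```

Note that no device ever transmits its raw reading: only log-domain values enter the channel, and a single superposed symbol reaches the receiver per reading index.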

Fig. 3 illustrates an A-AFC: its position in the real-world system (left), its representation as an A-AFC module (central), and as part of the NFC graph (right).

Wireless-domain A-AFC: Implementation. The above framework implies that an A-AFC module is physically spread across all input devices and the output device connected to the A-AFC module, as illustrated in Fig. 4. At the input devices (e.g., MTC devices), an appropriate input A-AFC digital interface needs to be defined that accepts input data packets and implements the pre-processing function $\varphi_k$ before the signal is transmitted into the channel. Similarly, at the output device, e.g., a small base station (HeNB), an appropriate output A-AFC digital interface needs to be defined that delivers output data packets after the signal received from the channel is post-processed using $\psi$. We also note that, although above we assume the input nodes of an A-AFC module are source nodes (MTC devices), a wireless-domain A-AFC module can also be part of the wireless backhaul network, e.g., connecting several HeNBs to the eNB.

Fig. 4: Wireless-domain A-AFC module: Functional diagram.

Challenges and Future Directions. In the current state-of-the-art, A-AFC is investigated in the context of joint computation and communication in wireless sensor networks. Current research is limited in terms of the computed functions, such as addition, multiplication, norms, and arithmetic/geometric means, and is also limited in scope, as it targets only wireless – and not optical – communication links. The design and implementation of a generic A-AFC in the wireless setting that adapts to channel conditions remains an open problem. Note that any such design should take practical implementation aspects into account, including channel estimation errors, timing and frequency offsets, and quantization issues. Furthermore, the link qualities between network nodes need to be considered to improve the reliability of the computed function outputs, including adaptive schemes that select the computation nodes according to the robustness of the corresponding communication links.

Optical-domain A-AFC: Discussion. The PLNC example shows that wireless-domain A-AFC modules are close to becoming a commercially available technology (see, e.g., [32]). The question that naturally arises is whether A-AFC modules can also be implemented over optical channels within optical access networks such as passive optical networks (PONs). This would further increase the richness of the AFC layer and bring novel AFC modules into the MTC-NFC network. Here, we briefly comment on the status of optical-domain function computation.

Recent works analyzed the applicability of network coding of data packets within PONs in some simple scenarios [46][47]. However, in contrast to the above vision of A-AFC modules, in these works the signals are not combined "in-channel"; rather, network coding is performed at the end nodes, in the digital domain.

Information processing in the photonic domain was envisioned as early as the 1970s, but implementations of digital optical computing could not keep pace with the development of electronic computing. Nevertheless, with advances in technology, the role of optics in advanced computing has been receiving renewed interest [48]. Moreover, unconventional computing techniques, in particular reservoir computing (RC), are attracting growing interest and are being implemented in different photonic hardware. RC is a neuro-inspired concept for designing, learning, and analysing recurrent neural networks: neural networks in which, unlike the most popular feed-forward architectures, the interconnection network of neurons possesses cycles (feedback loops). A consequence of the presence of loops is, as pointed out in [49], that recurrent neural networks can process and account for temporal information at their input. A recent breakthrough was a drastic simplification of the RC concept in terms of hardware requirements [50]. The appeal of RC therefore resides not only in its simple learning, but moreover in the fact that it enables simple hardware implementations: complex networks can be replaced by a single photonic hardware node, or a few of them, with delayed feedback loops [51], [52], [53]. Different tasks, including spoken digit recognition, nonlinear time series prediction, and channel equalization, have been performed with excellent accuracy, speed, and high energy efficiency [53], [54]. Beyond these first successes, learning approaches including RC, extreme learning machines, and back-propagation learning of recurrent neural networks have meanwhile been demonstrated using simple hardware [55], illustrating the flexibility and potential of this approach.
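As a purely illustrative software sketch of the RC principle (not of a photonic implementation), the following minimal echo state network trains only a linear readout over a fixed random reservoir; all sizes, scalings, and the toy prediction task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo state network: a fixed random recurrent "reservoir" whose
# states are linearly read out; only the readout is trained.
N = 100
W_res = rng.uniform(-1.0, 1.0, (N, N))
W_res *= 0.8 / max(abs(np.linalg.eigvals(W_res)))  # set spectral radius to 0.8
W_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    """Drive the reservoir with the input sequence u; return the state matrix."""
    state = np.zeros(N)
    states = []
    for x in u:
        state = np.tanh(W_res @ state + W_in * x)  # recurrent (cyclic) update
        states.append(state)
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave via ridge regression.
u = np.sin(0.2 * np.arange(400))
X = run_reservoir(u[:-1])[50:]        # drop the initial transient ("washout")
y = u[1:][50:]
W_out = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
```

In hardware RC, the random recurrent network is replaced by physical dynamics, e.g., a single nonlinear photonic node with a delayed feedback loop, while the trained part remains the simple linear readout.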

IV-B Digital-domain Atomic Function Computation (D-AFC)

Fig. 5: D-AFC module: representation via 3GPP elements (left), modules (middle) and NFC graph nodes (right).

D-AFC modules evaluate atomic functions in the digital domain, within the network nodes such as base stations (eNB or HeNB) and core network gateways (HeNB-GW, S-GW, P-GW). Although digital-domain in-node processing offers many possibilities for D-AFC implementation, here we address two possible options suitable for the SDN and NFV architectures.

The first option for D-AFC is reconfigurable hardware based on Field Programmable Gate Array (FPGA) platforms. FPGA platforms are frequently used in combination with high-speed networking equipment to perform various work-intensive, high-throughput functions over data packets, such as packet filtering [56]. FPGAs are either integrated in network nodes as co-processing units, or can easily be attached to network nodes as external units via high-speed network interfaces. FPGAs offer flexible and reconfigurable high-throughput implementations of various linear or non-linear atomic functions. For example, implementing random linear combinations over input data packets in network nodes, as part of RLNC, is considered in several recent works [57][58]. D-AFC implementations via FPGA platforms integrate seamlessly with SDN concepts, because SDN data flows can easily be filtered and fed into either internal or external FPGA units. Depending on the application, FPGAs achieve speed-ups over general-purpose processing units by a factor of tens to hundreds. Note also that FPGAs can be reprogrammed and reconfigured within short time intervals (on the order of minutes).
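The atomic function such an FPGA (or software) module would evaluate can be sketched in a few lines. The sketch below works over GF(2), where a random linear combination reduces to XOR-ing the packets whose coefficient is 1; the packet contents and the seeded generator are illustrative.

```python
import random

def rlnc_combine(packets, rng=random.Random(42)):
    """Atomic D-AFC function: random linear combination of equal-length packets.

    Over GF(2), a linear combination is the XOR of the packets whose random
    coefficient is 1. Returns (coefficients, combined_packet).
    """
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[0] = 1                      # avoid the useless all-zero combination
    combined = bytes(len(packets[0]))
    for c, pkt in zip(coeffs, packets):
        if c:
            combined = bytes(a ^ b for a, b in zip(combined, pkt))
    return coeffs, combined

pkts = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xff\x00\xff"]
coeffs, out = rlnc_combine(pkts)           # one coded packet plus its coefficients
```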

The second option for efficient D-AFC is to use software-based implementations in high-level programming languages that run on general-purpose processing units, either in network nodes or externally on dedicated servers [35]. This approach offers full flexibility for atomic function evaluation, at the price of lower data processing throughput compared with the FPGA approach. An example of a D-AFC implementation of random linear combinations over incoming data packets in the context of RLNC is given in [36][37]. Software-based D-AFC implementations can be easily and remotely instantiated across network nodes in a virtualized environment following NFV concepts.

Fig. 5 illustrates a D-AFC: its position in the real-world system (left), its representation as a D-AFC module (center), and as part of the NFC graph (right).

V Network Function Computation Layer

The NFC layer is responsible for configuring the Condense topology and assigning the appropriate atomic functions across the AFC modules, such that a desired network-wide function computation is realized. Subsection V-A discusses theoretical aspects (capabilities and limitations) of computing functions over networks, surveying the relevant literature on sensor fusion and network coding for computing. Subsection V-B describes a possible implementation of NFC functionalities within the 3GPP MTC system, through a more detailed view of SDN/NFV modules, i.e., the function and topology processors.

V-A Theoretical Aspects of NFC Layer

The need for a mathematical theory of function computation in networks is advocated in [6][7]. The authors discuss various challenges in sensor networks, and argue that computation of functions within a sensor network could lead to lower data overhead and reduced data traffic. In our toy example of a fire-alarm sensor network, we are only interested in the highest temperature measured across the set of sensors. Alternatively, when monitoring the temperature range in a greenhouse, we might only be interested in the average temperature across the set of sensors. Therefore, for various practical applications, it would be beneficial if network nodes were able to perform basic (atomic) computations, which in the context of the whole network could lead to the computation of more sophisticated functions at the destination nodes.
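The fire-alarm example can be sketched as follows; the tree, node names, and readings are hypothetical, and the atomic function is simply the maximum applied at every node.

```python
# Hypothetical tree: each atomic node forwards only the max of its children,
# so the destination learns the network-wide maximum, not the raw readings.
tree = {                                     # node -> children (leaves are sensors)
    "eNB":    ["HeNB-1", "HeNB-2"],
    "HeNB-1": ["s1", "s2"],
    "HeNB-2": ["s3", "s4"],
}
readings = {"s1": 21.5, "s2": 47.0, "s3": 19.8, "s4": 22.3}

def aggregate(node, atomic=max):
    if node in readings:                     # leaf: an MTC device
        return readings[node]
    return atomic(aggregate(c, atomic) for c in tree[node])

highest = aggregate("eNB")                   # fire-alarm example: network-wide max
# For the greenhouse example, the atomic function would compute a mean instead.
```

Only one value per link travels upwards, instead of every raw reading.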

This subsection elaborates on the mathematical tools behind the realization of the Condense NFC layer. A number of works in the literature study function computation over a network, in the contexts of sensor fusion, network coding for computing, and neural networks. The former two threads are discussed here, while the latter is discussed in Subsection VI-C. Hereafter, we mostly follow the framework defined in [15, 16], adapting the notation to our needs.

Mathematical settings. Consider a finite directed acyclic graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, consisting of a set of AFC nodes $\mathcal{N}$, a set of sources (MTC devices) $\mathcal{S}$, and a set of destinations $\mathcal{D}$, such that $\mathcal{V} = \mathcal{N} \cup \mathcal{S} \cup \mathcal{D}$.

The network uses a finite alphabet $\mathcal{A}$, called the network alphabet. Each source $s \in \mathcal{S}$ generates random symbols $x_s(t) \in \mathcal{A}$, $t = 1, 2, \ldots$ Here, we say that the source symbol $x_s(t)$ belongs to the $t$-th generation of the source symbols.

We assume that each packet sent over a network link is a vector of length $\ell$ over $\mathcal{A}$. Suppose that each of the destination nodes requests the computation of a (vector-valued) function of the incoming MTC device vectors $x_s = (x_s(1), \ldots, x_s(\ell))$, $s \in \mathcal{S}$. The target vector function is of the form $F: \mathcal{A}^{|\mathcal{S}| \cdot \ell} \to \mathcal{B}^{\ell}$, where $\mathcal{B}$ is a function alphabet, and each component of $F$ is the same function $f: \mathcal{A}^{|\mathcal{S}|} \to \mathcal{B}$, applied to each source's $t$-th symbol. More precisely, we wish to compute $f(x_1(t), \ldots, x_{|\mathcal{S}|}(t))$, for $t = 1, \ldots, \ell$.

With each arc $e$ outgoing from an AFC node $v \in \mathcal{N}$, we associate the atomic function $\phi_e$, which takes the $N_v$ length-$\ell$ incoming vectors $y_{e'}$, $e' \in \mathrm{In}(v)$, and produces the length-$\ell$ outgoing vector $y_e$, i.e.:

$$y_e = \phi_e\big(y_{e'},\ e' \in \mathrm{In}(v)\big).$$

Similarly, with each arc $e$ outgoing from a source node $s \in \mathcal{S}$, the atomic function $\psi_e$ takes the $N_s$ length-$\ell$ incoming vectors $y_{e'}$, $e' \in \mathrm{In}(s)$, as well as the generated symbols $x_s = (x_s(1), \ldots, x_s(\ell))$, and produces the length-$\ell$ outgoing vector $y_e$, i.e.:

$$y_e = \psi_e\big(x_s;\ y_{e'},\ e' \in \mathrm{In}(s)\big).$$

(Note that we consider here the most general case, in which a source node does not have to lie on the "bottom-most" level of the network, i.e., it can also have incoming edges.) We refer to both the $\phi_e$'s and the $\psi_e$'s as encoding functions.

Fig. 6: Mapping between MTC network and NFC graph.

Finally, a destination node $d \in \mathcal{D}$ takes its $N_d$ incoming length-$\ell$ messages and performs decoding, i.e., it produces the vector of function evaluation estimates $\widehat{f}_d = \big(\widehat{f}_d(1), \ldots, \widehat{f}_d(\ell)\big)$, as follows:

$$\widehat{f}_d = \rho_d\big(y_{e'},\ e' \in \mathrm{In}(d)\big),$$

where $\rho_d$ is the destination node $d$'s function. Note that $\rho_d$ recovers the $\ell$-dimensional vector $\widehat{f}_d$ over $\mathcal{B}$ from the $N_d \cdot \ell$ incoming symbols over $\mathcal{A}$, and it is therefore referred to as a decoding function.

We say that the destination $d$ computes the function $f$ if, for every generation $t$, it holds that:

$$\widehat{f}_d(t) = f\big(x_1(t), \ldots, x_{|\mathcal{S}|}(t)\big).$$

Further, we say that the problem of computing $f$ is solvable if there exist atomic functions $\phi_e$, $\psi_e$ across all arcs in $\mathcal{G}$ and decoding functions $\rho_d$, $d \in \mathcal{D}$, such that $f$ is computed at all destinations (that is, their corresponding composition computes $f$ at all destinations).
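A minimal instance of these definitions, with an illustrative modular-sum target function over a three-source rooted tree, can be sketched as:

```python
# Toy instance of the framework: target f(x1, x2, x3) = (x1 + x2 + x3) mod q
# over network alphabet Z_q. Node names and the choice of f are illustrative.
q = 16
sources = {"s1": 5, "s2": 9, "s3": 13}       # one generation of source symbols

def psi(x):                                  # encoding on a source's outgoing arc
    return x % q

def phi(msgs):                               # atomic function at an AFC node
    return sum(msgs) % q

def rho(msgs):                               # decoding at the destination
    return sum(msgs) % q

# Topology: s1 and s2 feed AFC node v; v's output and s3 feed destination d.
y_v = phi([psi(sources["s1"]), psi(sources["s2"])])
f_hat = rho([y_v, psi(sources["s3"])])       # destination's estimate of f
solvable = f_hat == (sources["s1"] + sources["s2"] + sources["s3"]) % q
```

Here the composition of the encoding and decoding functions reproduces the target function exactly, so this particular computation problem is solvable.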

Connection to network coding and beyond. The reader can observe that the problem of network coding [17] is a special case of the function computation problem with $\mathcal{B} = \mathcal{A}^{|\mathcal{S}|}$, where the target function is the identity function $f(x_1(t), \ldots, x_{|\mathcal{S}|}(t)) = (x_1(t), \ldots, x_{|\mathcal{S}|}(t))$. In particular, in linear network coding, the alphabet $\mathcal{A}$ is taken to be a finite field $\mathbb{F}_q$, the function alphabet is $\mathcal{B} = \mathbb{F}_q^{|\mathcal{S}|}$, and all encoding functions $\phi_e$, $\psi_e$ and decoding functions $\rho_d$ are linear mappings over $\mathbb{F}_q$. The case of linear network coding is relatively well understood. In particular, it is known that the problem of computing $f$ is solvable if and only if each of the minimum cuts between all the sources and any destination has capacity of at least $|\mathcal{S}|$ [60]. In Subsection VI-A, we provide further details on this special case.

For non-linear network coding, universal criteria for the solvability of the network coding problem are not fully understood. It is known, for example, that in the case where each sink requests a subset of the original messages, there exist networks that are not solvable using linear encoding and decoding functions, yet can be solved using non-linear functions (see, for example, [19]).

In order to understand the fundamental limits on the solvability of the general function computation problem, the authors of [15] define what they term the computing capacity of a network as follows:

$$\mathcal{C}(\mathcal{G}, f) = \sup \left\{ \frac{\ell}{n} \,:\, f \text{ is computed at all destinations over } \ell \text{ generations using packets of length } n \right\}. \qquad (3)$$

They derive a general min-cut type upper bound on the computing capacity, as well as a number of more specific lower bounds. In particular, special classes of functions, such as symmetric functions, divisible functions, and exponential functions, are considered therein (see [19] for more details). It should be mentioned that the considered classes of functions are rather restricted, possessing various symmetry properties. For more general, i.e., less restricted, classes of functions, the problem turns out to be very difficult.

Another related work is [18], which considers a set-up with linear encoding and decoding functions and a general linear target function. The authors characterize some classes of functions for which the cut-set bound gives a sufficient condition for solvability, and some for which it does not.

Other results. In [6], the function computation rate is defined, and lower bounds on this rate are obtained for various simple classes of functions. Recently, in [65], information-theoretic bounds on the function computation rate were obtained for a special case in which the network is a directed rooted tree, the set of sinks contains only the root of the tree, and the source symbols satisfy a certain Markov criterion.

A number of works study the computation of sums in the network. It is shown in [63] that if each sink requests the sum of the source symbols, linear coding may not be sufficient in networks where non-linear coding is sufficient. Other related works include [64, 59, 61, 66, 62].

There is a significant number of works in the literature on secure function computation. However, the main focus of these works is usually different from ours, and we leave that topic outside the scope of this paper.

Challenges and research directions. Research on network function computation is still in its infancy, and general theoretical foundations are yet to be developed. Here, we identify several challenges of network function computation relevant for the Condense architecture. First, it is important to consider the issue of solvability when the encoding and decoding functions $\phi_e$, $\psi_e$, and $\rho_d$ are restricted to certain classes dictated by the underlying physical domain. For instance, A-AFC modules operating in the wireless domain are currently restricted to a certain class of functions (see Subsection IV-A), while D-AFCs operating in the digital domain clearly have significantly more powerful capabilities. Second, it is interesting to study NFC in simpler cases, where the network topology is restricted to special classes of graphs, for example rooted trees, directed forests, or rings. Third, under the above constraints on the AFC capabilities, the question arises of how well we can approximate a desired function even when exact solvability is impossible. Finally, practical and efficient constructions of the encoding and decoding functions, as opposed to existence-type results, are fundamental for the Condense implementation.

V-B Implementation Aspects of NFC Layer

In practical terms, the NFC layer should deal with the control and management tasks of establishing and maintaining an NFC graph of AFC modules for a given service request, as sketched in Fig. 6. In our vision, the main control modules that define the NFC layer functionality are the function processor (FP) and the topology processor (TP) (see Fig. 2). Both modules can be seamlessly integrated in the SDN/NFV architecture.

The TP module organizes the MTC data flows and sends configuration instructions via the SDN control plane. In abstract terms, for every node in a directed rooted tree (or directed acyclic graph), the TP needs to provide the set of child nodes from which to accept MTC data flows and, if there are multiple output flows, to identify the exact MTC data flows to be filtered into each output flow.

Based on the MTC server application requests and the configured topology, the FP processes the global function request and, from the available library of AFC modules, generates the set of atomic functions to be used. Note that, as described before, AFC modules may be: i) A-AFC modules, ii) hardware-based D-AFC modules, and iii) software-based D-AFC modules. A-AFC modules (e.g., a PLNC module) need more complex instantiation control, as they spread over several physical nodes and involve the configuration of input and output interfaces and pre/post-processing functions (Subsection IV-A). Hardware-based D-AFC modules (e.g., internal or external FPGA modules within or attached to network elements) require SDN-based control of MTC data flows to filter selected flows and direct them through the D-AFC module. Finally, the most flexible case of software-based D-AFC modules (e.g., software modules in virtual machines running over virtualized hardware in network elements or external servers) relies on a library of AFC implementations, where each atomic function from the library can be remotely instantiated via NFV control. Overall, the FP needs to know the list of available AFC resources in the entire NFC network in order to optimize the set of instantiated AFC modules.

Fig. 7: NFC layer modules within SDN/NFV architecture.

In terms of realization, we follow the standard proposal for complementary NFV/SDN coexistence [30], where the FP module can be implemented as an NFV architecture block called the NFV orchestrator, and the TP module can be implemented as an SDN application. Besides communicating directly, both the FP and TP modules, observed as SDN applications, access the SDN control plane via the SDN northbound interface. Based on the FP/TP inputs, the SDN control plane configures the physical devices (network nodes) via the southbound interface. This NFV/SDN-based control of Condense is illustrated in Fig. 7.

VI Application Layer

In this section, we present three examples of applications at the application layer of the Condense architecture: data recovery through network coding, minimizing population risk via a stochastic gradient method, and binary classification via neural networks. (Strictly speaking, learning a neural network can be considered a special case of a stochastic gradient method with a non-convex loss function; we present it as a distinct subsection because we consider an implementation where the neural network weight parameters are distributed across the Condense network.) The purpose of these examples is two-fold. First, they demonstrate that a wide range of applications can be handled via the Condense architecture. Second, they show that Condense is compatible with widely-adopted concepts in learning and communications, such as random linear network coding, stochastic gradient methods, and neural networks.

The three examples are also complementary from the perspective of the workload required of the FP module. In the first example (data recovery via network coding), the function of interest is decomposed into mutually uncoordinated atomic (random) linear functions; hence, no central intervention by the function processor is required, nor is coordination among AFC modules needed as far as the atomic functions are concerned. In the third example (binary classification via neural networks), the desired network-wide function is realized through distributed coordination of the involved AFC modules. Finally, the second example (minimizing population risk via a stochastic gradient method) is the most generic case and requires the intervention of the central FP module in order for the desired network-wide function to be decomposed and computed.

VI-A Data recovery through network coding

A special case of an application task within the Condense architecture is to deliver the raw data to the data center of interest. This corresponds to a trivial, identity function over the input data as the goal of the overall network function computation. However, this is not achieved by simply forwarding the raw data to the data center, but through the use of network coding. In other words, the atomic functions are not identity functions but random linear combinations over the input data. While such a solution may not reduce the total communication cost with respect to the conventional (forwarding) solution, it is significantly more flexible, robust, and reliable, e.g., [33][34]. As recently noted, it can be flexibly implemented within the context of network coded cloud storage [67].

We follow the standard presentation of linear network coding, e.g., [68], adapting it to our setting. For ease of presentation, we assume here that the graph $\mathcal{G}$ is a directed rooted tree, whose root is the destination node $d$. Suppose that each of the $S$ available MTC devices has a packet $x_s$, $s = 1, \ldots, S$, consisting of $r$ symbols, each symbol belonging to a finite field $\mathbb{F}_q$. We adopt the finite field framework as is typical with network coding. The goal is to deliver the whole packet vector $x = (x_1, \ldots, x_S)$ to the data center (the destination node $d$). With Condense, this is achieved as follows. Each atomic node $v$ generates the message pair $(y_v, c_v)$ (to be sent to its parent node) based on the messages received from its child nodes $u \in \mathcal{C}_v$, where we recall that $\mathcal{C}_v$ is the set of child nodes of node $v$. As we will see, the quantity $y_v$ is by construction a linear combination of (a subset of) the MTC devices' packets $x_s$, $s = 1, \ldots, S$. The quantity $c_v = (c_v(1), \ldots, c_v(S))$ stacks the corresponding weighting (or coding) coefficients; that is:

$$y_v = \sum_{s=1}^{S} c_v(s)\, x_s. \qquad (4)$$

Now, having received $(y_u, c_u)$, $u \in \mathcal{C}_v$, node $v$ computes $(y_v, c_v)$ using the random linear network coding approach. It first generates new random (local) coding coefficients $\beta_{vu}$, $u \in \mathcal{C}_v$, uniformly from $\mathbb{F}_q$ and independently of the received messages. Then, it forms $y_v$ as:

$$y_v = \sum_{u \in \mathcal{C}_v} \beta_{vu}\, y_u.$$

Once $y_v$ has been computed, node $v$ also has to compute the global coding coefficients $c_v$ with respect to the MTC packets $x_s$, $s = 1, \ldots, S$, as per (4). It can be shown that:

$$c_v = \sum_{u \in \mathcal{C}_v} \beta_{vu}\, c_u.$$

For the end-leaf (MTC device) nodes, we clearly have that $y_s = x_s$ and $c_s = e_s$, the $s$-th canonical basis vector, $s = 1, \ldots, S$. Once the destination (root) node $d$ receives all its incoming messages, it has available a random linear combination of the MTC packets:

$$y_d = \sum_{s=1}^{S} c_d(s)\, x_s,$$

and the corresponding global coding coefficients vector $c_d$. Afterwards, the whole process described above is repeated sequentially $K$ times, such that the data center obtains $K$ additional pairs $(y_d^{(k)}, c_d^{(k)})$, $k = 1, \ldots, K$. It can be shown that, as long as $K$ is slightly larger than $S$, the MTC data vector $x = (x_1, \ldots, x_S)$ can be recovered with high probability by solving the resulting linear system of equations in the unknowns $x_1, \ldots, x_S$:

$$y_d^{(k)} = \sum_{s=1}^{S} c_d^{(k)}(s)\, x_s, \qquad k = 1, \ldots, K.$$

Note that, for this application example, each atomic function is linear. Moreover, there is no requirement to coordinate the atomic functions of different atomic nodes, as they are generated randomly and mutually independently [69]. Hence, this application does not require centralized control by the FP module.
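An end-to-end sketch of this scheme, under the simplifying assumptions of a prime field GF(p) and a flat single-hop topology (so each round directly yields one global combination at the destination), might look as follows; the field size and packet dimensions are illustrative.

```python
import random

p = 101                                      # prime field size (illustrative)
S, r = 4, 3                                  # S MTC packets of r symbols each
rng = random.Random(7)
packets = [[rng.randrange(p) for _ in range(r)] for _ in range(S)]

def rlnc_round():
    """One pass up the tree: destination gets one (coefficients, combination) pair."""
    c = [rng.randrange(p) for _ in range(S)]
    y = [sum(ci * pkt[i] for ci, pkt in zip(c, packets)) % p for i in range(r)]
    return c, y

def solve_mod_p(A, b):
    """Gauss-Jordan elimination over GF(p); raises StopIteration if singular."""
    n = len(A)
    M = [row[:] + brow[:] for row, brow in zip(A, b)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)     # inverse via Fermat's little theorem
        M[col] = [(v * inv) % p for v in M[col]]
        for i in range(n):
            if i != col and M[i][col]:
                f = M[i][col]
                M[i] = [(vi - f * vc) % p for vi, vc in zip(M[i], M[col])]
    return [row[n:] for row in M]

# Collect combinations until S of them are linearly independent; with high
# probability only slightly more than S rounds are needed.
A, B, recovered = [], [], None
while recovered is None:
    c, y = rlnc_round()
    A.append(c); B.append(y)
    if len(A) >= S:
        try:
            recovered = solve_mod_p(A[-S:], B[-S:])
        except StopIteration:
            pass                             # still rank-deficient; keep collecting
```

In a deeper tree, each atomic node would combine its children's pairs exactly as in the equations above; the destination-side recovery step is unchanged.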

Finally, when certain a priori knowledge on $x$ is available (e.g., sparsity, i.e., many of the packets $x_s$ are the all-zero $r$-tuples of symbols from $\mathbb{F}_q$), then a recovery probability close to one can be achieved even when the number of linear combinations at the MTC server is significantly smaller than $S$. Omitting details, this can in principle be achieved using the theories of compressed sensing and sparse recovery, e.g., [70][71].

VI-B Statistical estimation and learning

A dominant trend in current machine learning research is algorithms that scale to large datasets and are amenable to modern distributed processing systems. Machine learning systems are widely deployed in architectures with a large number of processing units at different physical locations, and communication is becoming a resource that takes center stage in algorithm design considerations [72, 73, 74, 75, 76].

Typically, the task of interest (parameter estimation, prediction, etc.) is performed by solving an optimization problem of minimizing a risk function [77]. In the most widely used models of interest, which include logistic regression and neural networks, this optimization needs to be performed numerically using gradient descent methods, based on successive computation of the gradient of the loss function of interest. For large datasets (of size $n$), obtaining the full gradient comes at a prohibitive computational cost (a cost linear in $n$ per iteration of gradient descent cannot be afforded) as well as a prohibitive communication cost (due to the need to access all training examples even though they may be, and typically are, stored at different physical locations). For these reasons, stochastic gradient methods are the norm: they typically access only a small number of data points at a time, giving an unbiased estimate of the gradient of the loss function needed to update the parameter values, and they have enjoyed tremendous popularity and success in practice [78, 79, 80, 81].

Most existing works assume that the data has already been collected and transmitted through the communication architecture and is available at request. That is, typically the data is first transmitted in its raw form from the MTC devices to the data center, and only afterwards a learning (optimization) algorithm is executed.

In contrast, the Condense architecture integrates the learning task into the communication infrastructure. That is, a learning task is seen as a sequence of oracle calls to a certain network function computation; the role of the NFC layer is to provide these function computations at the data center's processing unit (destination node), so that only the computed value (e.g., of the gradient of the loss function) is communicated. In this way, Condense generically embeds various learning algorithms into the actual 3GPP MTC communication infrastructure.

We now dive into more details and exemplify learning over the proposed Condense system with the estimation of an unknown parameter vector through the minimization of population risk. Specifically, we consider stochastic gradient-type methods to minimize the risk.

To begin, consider a directed rooted tree NFC graph $\mathcal{G}$, and assume that there are $S$ MTC devices which generate samples $z(t) = (z_1(t), \ldots, z_S(t))$ over time instants $t = 1, 2, \ldots$, drawn i.i.d. in time from a distribution $P$ defined over $\mathbb{R}^{S \cdot m}$. Here, $z_s(t) \in \mathbb{R}^m$ is the sample of $z(t)$ generated by the MTC device $s$. The goal is to learn the parameter vector $\theta \in \mathbb{R}^p$ that minimizes the population risk:

$$\min_{\theta \in \mathbb{R}^p}\ \mathbb{E}_{z \sim P}\big[\, \phi(\theta;\, z) \,\big], \qquad (5)$$

where the expectation is well-defined for each $\theta \in \mathbb{R}^p$, and, for each $z$, the function $\phi(\cdot\,; z)$ is differentiable. In the rest of this subsection, we specialize the approach to a single but illustrative example, Consensus; more elaborate examples such as logistic regression are relevant but not included here for brevity.

Example: Consensus – computing the global average; e.g., [82, 83, 84]. When $m = p = 1$ (each MTC device generates scalar data), and $\phi(\theta; z) = \frac{1}{2S}\sum_{s=1}^{S}(\theta - z_s)^2$, then solving (5) corresponds to finding $\theta^\star = \mathbb{E}\big[\frac{1}{S}\sum_{s=1}^{S} z_s\big]$. E.g., when the MTC devices are pollution sensors at different locations in a city, this corresponds to finding the city-wide average pollution.

Conventional 3GPP MTC solution. Consider first the conventional 3GPP MTC system, where the samples $z(t)$, $t = 1, 2, \ldots$, arrive (through the communication layer) at a processing unit in the data center in their raw form. (We ignore communication delays here.) Upon reception of each new sample $z(t)$, the processing unit (destination node $d$) performs a stochastic gradient update to improve its estimate $\theta(t)$ of $\theta^\star$:

$$\theta(t+1) = \theta(t) - \alpha(t)\, \nabla \phi\big(\theta(t);\, z(t)\big), \qquad (6)$$

where $\nabla \phi\big(\theta(t); z(t)\big)$ is the gradient of $\phi(\cdot\,; z(t))$ at $\theta(t)$, and $\alpha(t)$ is the step-size (learning rate). For the consensus example with learning rate $\alpha(t) = 1/(t+1)$, it can be shown that update (6) takes the particularly simple form:

$$\theta(t+1) = \frac{t}{t+1}\,\theta(t) + \frac{1}{t+1} \cdot \frac{1}{S}\sum_{s=1}^{S} z_s(t). \qquad (7)$$

Note that, with the conventional 3GPP MTC architecture, the data samples $z(t)$ are transmitted to the destination node $d$ for processing in their entirety, i.e., the communication infrastructure acts only as a routing network (forwarder) of the data.

Condense solution. In contrast with the conventional solution, with the Condense architecture the raw data sample $z(t)$ is not transmitted to the data center and is hence not available at the corresponding processing unit. Instead, update (6) is implemented as follows. Given the current estimate $\theta(t)$, the processing unit (destination node $d$) defines the function $F_t(z) := \nabla \phi(\theta(t); z)$. Subsequently, it sends a request to the function processor to perform the decomposition of $F_t$ over the NFC layer. The function processor performs the required decomposition of $F_t$ into atomic functions and remotely installs the corresponding atomic function at each atomic node (eNB, HeNB, etc.) of the topology. (We assume that the decomposition of $F_t$ and the installation of the atomic functions across the AFC layer are completed prior to the initiation of the flow of sample $z(t)$ "onwards" through the NFC topology; in other words, the time required for the latter process is sufficiently smaller than the time interval between generations of data samples.) Once the required atomic functions are ready, the sample $z(t)$ starts travelling up the graph $\mathcal{G}$, and upon the completion of evaluation of all intermediate atomic functions, the value $F_t(z(t))$ becomes available at the data center. This in turn means that the processing unit can finalize update (6). Specifically, for the consensus example in (7), the network function to be computed takes the particularly simple form of the average, $\frac{1}{S}\sum_{s=1}^{S} z_s(t)$, and it is independent of $\theta(t)$ and of $t$. (Strictly speaking, for the consensus example the relevant network function is not the gradient $\nabla \phi(\theta(t); z(t)) = \theta(t) - \frac{1}{S}\sum_{s=1}^{S} z_s(t)$ itself, but its additive term that depends on $z(t)$, i.e., the average; substituting this gradient into (6) yields (7).) There exist many simple and efficient methods to decompose the computation of the average, e.g., [85], and hence algorithm (7) can be implemented very efficiently within the Condense architecture.
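The consensus instance of this scheme can be sketched as below, where the network-delivered average is modelled by a plain function call, and the data distribution (noisy readings of an unknown mean, here 10.0) is an illustrative assumption.

```python
import random

rng = random.Random(0)
S = 5                                        # number of MTC devices

def network_average():
    """What the NFC layer delivers: the average of the S device samples."""
    z = [10.0 + rng.gauss(0.0, 1.0) for _ in range(S)]
    return sum(z) / S

theta = 0.0
averages = []
for t in range(2000):
    g = network_average()                    # only this value reaches the center
    alpha = 1.0 / (t + 1)
    theta = (1.0 - alpha) * theta + alpha * g    # update (7)
    averages.append(g)
# theta is the running mean of the delivered averages and converges to 10.0
```

The destination never sees the raw samples; with the step-size 1/(t+1), the estimate is exactly the running mean of the delivered network-function values.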

Challenges, insights and research directions. We close this subsection by discussing several challenges which arise when embedding learning algorithms in the Condense architecture. Such challenges are manifold but are nonetheless already a reality in machine learning practice. First, data arrives in an asynchronous, delayed, and irregular fashion, and it is often noisy. Condense embraces this reality and puts the learning task at the center stage: the desired function of the data is of interest, not the data itself. Second, it is often the case that, depending on the infrastructure, interface, and functionality constraints of the network computation layer, approximations of the desired function computations (as opposed to exact computations) will need to be employed. For instance, the function $F_t$ in the example above may in general only be computable approximately. The quality of such an approximation trades off the statistical efficiency of the learning procedure against the accuracy of the network function computation, and the analysis of such trade-offs will be an important research topic. Finally, from a more practical perspective, an important issue is ensuring interoperability with existing distributed processing paradigms (e.g., GraphLab [86] and Hadoop [87]).

VI-C Neural networks

With modern large-scale applications of neural networks, the number of parameters to be learned (the neurons' weights) can be excessively large, as with, e.g., deep neural networks [21]. As such, storage of the parameters themselves should be distributed, and their updates also incur a large communication cost that needs to be managed [20]. However, neural networks can be naturally embedded into the Condense architecture, somewhat similarly to the related work on distributed training for deep learning [23], as detailed next.

Specifically, consider the example of binary classification of the MTC devices' generated data. At each time instant $t$, the MTC devices generate a data vector $z(t) = (z_1(t), \ldots, z_S(t))$, where each device $s$ generates an $m$-dimensional vector $z_s(t)$. Each data vector $z(t)$ is associated with a class label $b(t) \in \{0, 1\}$. A binary classifier $\mathcal{C}$ takes a data sample $z(t)$ and generates an estimate $\widehat{b}(t)$ of its class label. Classifier $\mathcal{C}$ is "learned" from the available training data $(z(t), b(t))$, $t = 1, \ldots, T$, where $T$ is the learning period. In other words, once the learning period is completed and $\mathcal{C}$ is learned, the prediction period is initiated, and for each new data sample $z(t)$, $t > T$, classifier $\mathcal{C}$ generates an estimate $\widehat{b}(t)$ of $b(t)$. For example, $z(t)$ can correspond to measurements of pressure, temperature, vibration, acoustic, and other sensors in a large industrial plant within a time period $t$; $b(t) = 0$ can correspond to the "nominal" plant operation, while $b(t) = 1$ corresponds to the "non-nominal" operation, defined for example as the operation where energy efficiency or greenness standards are not fully satisfied.

We consider neural network-based classifiers $\mathcal{C}$ embedded in the Condense architecture. Therein, the classifier function $\mathcal{C}$ is a composition of the neuron functions associated with each AFC module (node). We consider a Condense rooted tree graph $\mathcal{G}$ with $S$ sources and one destination node $d$; however, here we assume $\mathcal{G}$ is undirected, as we will need to pass messages both upwards and downwards. For convenience, as is common with neural networks, we organize all nodes in $\mathcal{G}$ (source, atomic, and the destination node) into levels $l = 1, \ldots, L$, such that the leaves, the nodes at the first level ($l = 1$), are the MTC devices (sources in $\mathcal{G}$), while the data center's processing unit (the destination node $d$) corresponds to $l = L$. Then, all nodes (sources, atomic nodes, and the destination node) are indexed through the index pair $(l, j)$, where $l$ is the level number and $j$ is the order number of a node within its own level, $j = 1, \ldots, J_l$. Here, $J_l$ denotes the number of nodes at the $l$-th level.

Prediction. We first consider the prediction period $t > T$, assuming that the learning period is completed. This corresponds to actually executing the application task of classification, through evaluating the network function $\mathcal{C}$ at a data sample $x(t)$. (As we will see ahead, the learning period corresponds to learning the function $\mathcal{C}$, which essentially parallels the task of how a desired network function is decomposed across the AFC modules into the appropriate atomic functions.) Each node $(l, j)$ is assigned a weight vector $w_{l,j}$ (obtained within the learning period), whose length equals the number of its lower-level neighbours. Denote by $a_{l,j}(t)$ the output (also referred to as the activity) of node $(l, j)$ associated with the data sample $x(t)$, $t > T$, to be computed based on the incoming activities from the adjacent lower-level nodes. Also, denote by $a_{l-1}(t)$ the vector that stacks the activities at level $l-1$ that feed node $(l, j)$. Then, $a_{l,j}(t)$ is calculated by:

$a_{l,j}(t) = \sigma\left( w_{l,j}^{\top} \, a_{l-1}(t) \right)$,     (8)

where $\sigma(u) = 1/(1 + e^{-u})$ is the logistic unit function. Therefore, with neural networks, the atomic function associated with each AFC module (node) $(l, j)$ is a composition of 1) the linear map parameterized by its weight vector $w_{l,j}$; and 2) the logistic unit function.
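As a concrete illustration, the per-node atomic function in (8) and the level-by-level evaluation of the classifier over the rooted tree can be sketched as follows. This is a minimal Python sketch; the names `node_activity` and `forward`, and the tree/weight encodings, are ours and not part of the architecture:

```python
import math

def logistic(u):
    # Logistic unit: sigma(u) = 1 / (1 + e^{-u}).
    return 1.0 / (1.0 + math.exp(-u))

def node_activity(weights, incoming):
    # Atomic function of one AFC node: linear map of the incoming
    # activities followed by the logistic nonlinearity, as in Eq. (8).
    return logistic(sum(w * a for w, a in zip(weights, incoming)))

def forward(tree, weights, source_values):
    # Level-by-level evaluation over a rooted tree organized in levels:
    # activities[0] holds the MTC devices' (scalar) source values, and
    # tree[l][j] lists the lower-level nodes feeding node (l, j).
    activities = [list(source_values)]
    for l in range(1, len(tree)):
        level = [node_activity(weights[l][j],
                               [activities[l - 1][i] for i in tree[l][j]])
                 for j in range(len(tree[l]))]
        activities.append(level)
    return activities  # activities[-1][0] is the classifier output
```

For instance, a two-level tree with two MTC sources feeding a single AFC node evaluates the whole classifier with one call to `forward`; with all-zero weights the output is `logistic(0) = 0.5`.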

Learning. The learning period corresponds to learning the function $\mathcal{C}$, i.e., learning the weight vectors of each AFC module. Unlike the example of minimizing a generic population risk in Subsection VI-B, here learning $\mathcal{C}$ (learning the atomic functions of the AFC modules) can be done in a distributed way, without the involvement of the FP module. The learning is distributed in the sense that it involves passing messages in the "upward" direction (from the MTC devices towards the data center) and the "downward" direction (from the data center towards the MTC devices) along the Condense architecture (graph $\mathcal{G}$).

Specifically, we assume that the weight vectors $w_{l,j}$ are learned by minimizing the log-loss $\mathcal{L}$ via a stochastic gradient descent (back-propagation) algorithm, wherein one upward/downward pass corresponds to a single training data sample $(x(t), y(t))$, $t = 1, \ldots, T$. We now proceed with detailing both the upward and the downward pass [22]. We assume that, before initiating the pass, $x(t)$ is available at the bottom-most layer (the MTC devices), while the label $y(t)$ is available at the data center (destination node $d$). This is reasonable to assume, as the label's data size per sample is insignificant (here it is just one bit) and can be delivered to the data center, e.g., by conventional forwarding through the 3GPP MTC system.

Upward pass. Each node $(l, j)$ computes its activity $a_{l,j}$ via (8) and the gradients of its activity with respect to its weights as well as with respect to the incoming activities:

$\partial a_{l,j} / \partial w_{l,j} = a_{l,j} (1 - a_{l,j}) \, a_{l-1}$,  and  $\partial a_{l,j} / \partial a_{l-1,i} = a_{l,j} (1 - a_{l,j}) \, w_{l,j,i}$.

At this point, node $(l, j)$ stores the tuple $(\partial a_{l,j}/\partial w_{l,j}, \, \{\partial a_{l,j}/\partial a_{l-1,i}\}_i)$; these are the local gradients needed for the weight update in the downward pass.

Downward pass. The label $y(t)$ has been received at the data center's processing node, and the gradients of the loss function $\mathcal{L}$ are now backpropagated. Having obtained $\delta_{l,j} = \partial \mathcal{L} / \partial a_{l,j}$, each node $(l, j)$ sends to its lower-layer neighbour $(l-1, i)$ the message $g_{(l,j) \to (l-1,i)} = \delta_{l,j} \, \partial a_{l,j} / \partial a_{l-1,i}$, which we refer to here as the gradient contribution.

Node $(l-1, i)$ can now compute

$\delta_{l-1,i} = \sum_{j} g_{(l,j) \to (l-1,i)}$.

This is instantiated at the top layer: with the log-loss $\mathcal{L} = -y \log \hat{y} - (1-y) \log(1-\hat{y})$ and $\hat{y} = a_{L,1}$, we have $\delta_{L,1} = (\hat{y} - y) / (\hat{y}(1-\hat{y}))$.

Moreover, after sending its gradient contributions, node $(l, j)$ updates its weights with the stochastic gradient update with step-size $\eta$:

$w_{l,j} \leftarrow w_{l,j} - \eta \, \delta_{l,j} \, \partial a_{l,j} / \partial w_{l,j}$,

and removes the "local gradients" from its memory.
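Assuming the logistic atomic function of (8) and the log-loss, the upward/downward message passing can be sketched as below. This is an illustrative, centrally-simulated Python sketch with hypothetical names (`upward`, `downward`); a real deployment would distribute these loops across the AFC modules:

```python
import math

def logistic(u):
    return 1.0 / (1.0 + math.exp(-u))

def upward(tree, weights, x):
    # tree[l][j] lists the lower-level inputs of node (l, j); 'weights'
    # mirrors that shape. Returns activities and per-node local gradients.
    acts = [list(x)]
    grads = [None]  # local gradients stored per node for the downward pass
    for l in range(1, len(tree)):
        level_a, level_g = [], []
        for j, inputs in enumerate(tree[l]):
            inc = [acts[l - 1][i] for i in inputs]
            a = logistic(sum(w * v for w, v in zip(weights[l][j], inc)))
            s = a * (1.0 - a)
            # d a / d w_k = s * inc[k];  d a / d inc[k] = s * w_k
            level_a.append(a)
            level_g.append(([s * v for v in inc],
                            [s * w for w in weights[l][j]]))
        acts.append(level_a)
        grads.append(level_g)
    return acts, grads

def downward(tree, weights, acts, grads, y, step):
    # Backpropagate deltas of the log-loss and apply one SGD step in place.
    y_hat = acts[-1][0]
    delta = [[0.0] * len(level) for level in acts]
    delta[-1][0] = (y_hat - y) / (y_hat * (1.0 - y_hat))
    for l in range(len(tree) - 1, 0, -1):
        for j, inputs in enumerate(tree[l]):
            dw, da = grads[l][j]
            for k, i in enumerate(inputs):
                delta[l - 1][i] += delta[l][j] * da[k]  # gradient contribution
            weights[l][j] = [w - step * delta[l][j] * g
                             for w, g in zip(weights[l][j], dw)]
```

One call to `upward` followed by one call to `downward` corresponds to a single training sample's pass; the stored tuples in `grads` play the role of the "local gradients" purged after the weight update.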

Challenges, insights and research directions. We close this subsection with several challenges and practical considerations which arise when embedding neural networks into the Condense architecture.

The first challenge concerns implementing the required AFC modules (atomic functions) in the analog domain; here, the modules are typically linear combinations followed by nonlinearities (sigmoids, rectified linear units). Secondly, even when the modules are implemented in the digital domain, an interesting question is to study how quantization error propagates across the Condense architecture.

Next, network topology and busy nodes will dictate that not all nodes see the activities corresponding to the $t$-th example. This resembles the dropout method [24], which deliberately "switches off" neurons at random during each learning stage. Dropout is a hugely successful regularization method, as it prevents overfitting by discouraging weight co-adaptation, and it demonstrates that the learning process can be inherently robust to node failures; this echoes the overall case against the separation of the learning and network layers, which presumes all data to be available on request at all times. Moreover, many activities will not be sent to all the nodes in the upper layer. In this case, the missing activity is treated as zero, so the corresponding weights will not be affected; the corresponding gradients then do not need to be stored, nor does the downward pass need to happen. More problematic is the situation in which the upward pass has happened but the downward pass fails at some point, i.e., some of the gradient contributions are not received at node $(l, j)$. This injects additional noise into the gradient; studying the effect of this noise is an interesting research topic.
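The dropout analogy can be made concrete: treating an activity that fails to arrive as zero is the same operation as applying a random dropout-style mask to the incoming activities (a minimal sketch; the function name is ours):

```python
import random

def apply_dropout(activities, keep_prob, rng=random):
    # Treat an activity that fails to arrive as zero. With the logistic
    # node of Eq. (8), a zero input contributes nothing to the linear map,
    # so the corresponding weight receives a zero gradient -- the same
    # effect dropout [24] achieves deliberately during training.
    return [a if rng.random() < keep_prob else 0.0 for a in activities]
```

With `keep_prob = 1.0` all activities survive (an ideal network), while lower values model busier or failing nodes.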

We finally provide some insights on the communication and computational costs. In the upward pass, each AFC module (node) broadcasts one real number per training data example: its activity (together with the data example index $t$); in the downward pass, it sends one real number per example, per receiver: the gradient contribution (together with the index $t$). Thus, each node broadcasts to the upper layers and sends specific messages to specific nodes in the lower layers. The upward pass happens whenever a new input is obtained, while the downward pass happens whenever a new output is obtained. Due to this asynchrony, gradient updates might be out of date; therefore, each node could purge the local gradients of outdated examples.

VII Other implementation aspects

This section briefly discusses some aspects of the Condense architecture not considered in earlier sections.

Size of the NFC graph. We first discuss a typical size of an NFC graph. Referring to Figure 6 and assuming a directed rooted tree graph, it can typically have a depth (number of layers) of around . Regarding the number of source nodes (MTC devices, the lower-most layer), it is estimated that the number of MTC devices per macro-cell eNB will be in the range of . The number of small cells per macro cell is in the range of , which makes the number of MTC devices per small cell approximately . Assuming a city area and a coverage of a small cell, we can have a total of small cells within a city-wide Condense network. Therefore, in a city-wide Condense network, we may have MTC devices (the number of nodes at the lower-most layer), and on the order of nodes at the (base station) layer above. The number of nodes at the upper layers going further upwards is lower, typically a few tens or less. In summary, a typical city-wide Condense rooted tree network may have a total of nodes, with a large "width" and a moderate "depth". This aligns relatively well with the supporting theory; e.g., neural networks are considered deep with depths of order or so, while arbitrary functions can be well-approximated even with shallow neural networks. Of course, the graph size can be virtually adjusted according to current application needs, both horizontally (to adjust width) and vertically (to adjust depth), through implementing multiple (virtual) nodes within a single physical device.

Synchronization. We initially assess that synchronization may not be a major issue in realizing Condense. This is because synchronization is critical only when implementing analog atomic functions, e.g., within a single HeNB module. Network-wide orchestration of atomic functions may be successfully achieved through the control mechanisms of SDN and NFV, as well as through buffering at the upper Condense layers (HeNB-GW, S-GW, and P-GW), to compensate for delays and asynchrony.

TABLE I: Summary of CONDENSE architecture.

Features and pros:
- Reconfigurable architecture;
- Novel service of computing functions over MTC data;
- Three layers: atomic, network and application;
- Two control elements: topology and function processor;
- Can be integrated within the 3GPP MTC architecture;
- Can exploit theories of sensor fusion, network coding and computation, and neural networks;
- Can be customized for a variety of MTC applications.

Theory and implementation:
- AFC layer: analog (wireless and optical domains) and digital (FPGA/software) implementations; theory: nomographic functions (analog wireless) and reservoir computing (analog optical);
- NFC layer: topology processor as NFV orchestrator; function processor as SDN application; theory: network coding for computing, sensor fusion and neural networks;
- Application layer: implementation examples: RLNC, neural networks and stochastic gradient descent; theory: neural networks, statistical learning and prediction.

Future directions:
- Implementation challenges: channel estimation, timing and frequency offsets, and quantization issues;
- Development of standardized A-AFC and D-AFC modules;
- Actual constructions of function decompositions;
- Decomposability (solvability) under restricted function classes and network topologies;
- Development of function and topology processor SDN/NFV modules;
- Asynchronous, delayed and irregular arrival of data;
- Inexact network function computation;
- Interoperability with existing data analytics platforms.

Communication, computational, and storage costs. We now discuss the reductions in communication costs (per application task) of Condense with respect to the conventional (forwarding) 3GPP MTC solution. How much communication is saved depends largely on the application at hand. For very simple tasks (functions), like, e.g., computing the maximum or the global average, it is easy to see that the savings can be very high. In contrast, for forwarding (computing the identity function), the savings may not be achieved (but reliability is improved through random linear network coding). Also, the overall communication savings depend on the overhead incurred by the signalling from the topology and function processors to the AFC modules (in order to orchestrate the topology, perform function decomposition, install the appropriate atomic functions, etc.). However, this overhead is projected to eventually become small, as, upon a significant development of the technology, atomic function libraries and NFC decompositions will be pre-installed. Further, it is clear that Condense requires additional storage and computational functionalities at network nodes, when compared with current 3GPP MTC systems. However, this is a reasonable requirement for the modules (eNBs, GWs, etc.) of 3GPP MTC systems, given the upcoming trends in mobile edge computing [88].
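To make the savings concrete, the following sketch (a hypothetical helper of ours, not part of the architecture) counts the per-task messages on a rooted tree for plain forwarding versus en-route aggregation of a single-valued function such as a sum, maximum, or average:

```python
def count_messages(children, root, forwarding):
    # Count scalar messages carried over the tree edges when (i) every
    # raw sample is forwarded to the root ('forwarding' = True) versus
    # (ii) each node forwards a single partial aggregate (False).
    # 'children[v]' lists the children of node v; the leaves hold MTC data.
    def subtree_leaves(v):
        kids = children.get(v)
        if not kids:
            return 1
        return sum(subtree_leaves(c) for c in kids)

    total = 0

    def walk(v):
        nonlocal total
        for c in children.get(v, []):
            # Edge c -> v carries all raw samples below c, or one aggregate.
            total += subtree_leaves(c) if forwarding else 1
            walk(c)

    walk(root)
    return total
```

On a two-level tree with four MTC devices under two intermediate nodes, forwarding carries 8 raw samples across the edges while aggregation carries only 6 partial results (one per edge); the gap widens rapidly with the number of devices per aggregation point, while for the identity function no such reduction is possible.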

Data privacy and data loss. Condense naturally improves upon the privacy of IoT systems, as the data center (except when computing the identity function) does not receive the data in its raw form. Finally, if certain IoT-generated data has to be stored in a cloud data center in its raw form so as to ensure its long lifetime, Condense supports this functionality through identity functions. However, it is natural to expect that this request is, on average across all IoT data sources and all applications, only occasionally imposed, rendering significant communication savings overall.

Finally, Table I provides a summary of the proposed architecture. The table briefly indicates main points presented in this paper in terms of the advantages of the proposed architecture, relevant theoretical and implementation aspects, and main future research directions.

VIII Conclusions

In this paper, we proposed a novel architecture for knowledge acquisition of IoT-generated data within the 3GPP MTC (machine type communications) systems, which we refer to as Condense. The Condense architecture introduces a novel service within 3GPP MTC systems – computing linear and non-linear functions over the data generated by MTC devices. This service brings about the possibility that the underlying communication infrastructure communicates only the desired function of the MTC-generated data (as required by the given application at hand), and not the raw data in its entirety. This transformational approach has the potential to dramatically reduce the pressure on the 3GPP MTC communication infrastructure.

The paper provides contributions along two main directions. First, from the architectural side, we describe in detail how the function computation service can be realized within 3GPP MTC systems. Second, from the theoretical side, we survey the relevant literature on the possibilities of realizing “atomic” functions in both analog and digital domains, as well as on the theories and techniques for function decomposition over networks, including the literature on sensor fusion, network coding for computing, and neural networks. The paper discusses challenges, provides insights, and identifies future research directions for implementing function computation and function decomposition within practical 3GPP MTC systems.

Acknowledgment

The authors thank the following researchers for valuable help in developing the Condense concept: J. Coon, R. Vicente, M. Greferath, O. Gnilke, R. Freij-Hollanti, A. Vazquez Castro, V. Crnojevic, G. Chatzikostas, C. Mirasso, P. Colet, and M. C. Soriano. The authors would also like to acknowledge valuable support for collaboration through the EU COST IC 1104 Action.

References

  • [1] G. Rohling, “Facts and forecasts: Billions of things, trillions of dollars, pictures of the future,” available online: http://www.siemens.com/innovation/en/home/pictures-of-the-future/ digitalization-and-software/internet-of-things-facts-and-forecasts.html (last accessed: 15.06.2016.)
  • [2] T. Bajarin, “The Next Big Thing for Tech: The Internet of Everything, Time Magazine,” available online: http://time.com/539/the-next-big-thing-for-tech-the-internet-of-everything/ (last accessed: 15.06.2016.)
  • [3] http://ec.europa.eu/digital-agenda/en/internet-things (last accessed: 15.06.2016.)
  • [4] http://www.zdnet.com/topic/the-power-of-iot-and-big-data/ (last accessed: 15.06.2016.)
  • [5] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, “Internet of Things (IoT): A vision, architectural elements, and future directions,” Future Generation Computer Systems, vol. 29, no. 7, pp. 1645-1660, 2013.
  • [6] A. Giridhar, and P. R. Kumar, “Computing and communicating functions over sensor networks,” IEEE Journal on Selected Areas in Communications, vol. 23, no. 4, pp. 755-764, 2005.
  • [7] A. Giridhar and P. R. Kumar, “Toward a theory of in-network computation in wireless sensor networks,” IEEE Commun. Mag., vol. 44, no. 4, pp. 98–107, Apr. 2006.
  • [8] T. Taleb and A. Kunz, “Machine type communications in 3GPP networks: Potential, challenges, and solutions,” IEEE Communications Magazine, vol. 50, no. 3, pp. 178-184, 2012.
  • [9] The European Telecommunications Standards Institute. Network Functions Virtualisation (NFV); Architectural Framework. GS NFV 002 (V1.1.1), Oct. 2013.
  • [10] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, “Network function virtualization: Challenges and opportunities for innovations,” IEEE Communications Magazine, vol. 53, no. 2, pp. 90-97, 2015.
  • [11] B. Lantz, B. Heller, and M. McKeown, “A network in a laptop: rapid prototyping for software-defined networks,” 9th ACM SIGCOMM Workshop on Hot Topics in Networks, pp. 19, ACM, 2010.
  • [12] D. Kreutz, F. M. Ramos, P. Esteves Verissimo, C. Esteve Rothenberg, S. Azodolmolky, and S. Uhlig, “Software-defined networking: A comprehensive survey,” Proc. IEEE, vol. 103, no. 1, pp. 14-76, 2015.
  • [13] M. Condoluci, G. Araniti, T. Mahmoodi, M. Dohler, “Enabling the IoT Machine Age with 5G: Machine-Type Multicast Services for Innovative Real-Time Applications,” IEEE Access (transactions), Special Issue on IoT, in press, 2016.
  • [14] M.R. Palattella, M. Dohler, A. Grieco, G. Rizzo, J. Torsner, T. Engel, L. Ladid, “Internet of Things in the 5G Era: Enablers, Architecture and Business Models,” IEEE Journal on Selected Areas in Communications, in press, 2016.
  • [15] R. Appuswamy, M. Franceschetti, N. Karamchandani, and K. Zeger, “Network coding for computing: cut-set bounds,” IEEE Transactions on Information Theory, vol. 57, no. 2, pp. 1015–1030, Feb. 2011.
  • [16] H. Kowshik and P.R. Kumar, “Optimal function computation in directed and undirected graphs,” IEEE Transactions on Information Theory, 58(6), pp.3407-3418, June 2012.
  • [17] R. Ahlswede, N. Cai, S.-Y.R. Li, and R.W. Yeung, “Network information flow,” IEEE Transactions on Information Theory, vol. 46, no. 4, pp. 1204-1216, Apr. 2000.
  • [18] R. Appuswamy and M. Franceschetti, “Computing linear functions by linear coding over networks,” IEEE Transactions on Information Theory, vol. 60, no. 1, pp. 422–431, Jan. 2014.
  • [19] R. Dougherty, C. Freiling, and K. Zeger, “Insufficiency of linear coding in network information flow,” IEEE Transactions on Information Theory, vol. 51, no. 8, pp. 2745–2759, Aug. 2005.
  • [20] M. Li, D.G. Andersen, J.W. Park, A.J. Smola, A. Ahmed, V. Josifovski, J. Long, E.J. Shekita, and B.-Y. Su, “Scaling distributed machine learning with the parameter server,” 11th USENIX Symposium on Operating Systems Design and Implementation, 2014.
  • [21] Y. Bengio, Y. LeCun, and G. Hinton, “Deep learning,” Nature 521, pp. 436–444, 2015.
  • [22] B. D. Ripley, “Pattern recognition and neural networks,” Cambridge University Press, 1996.
  • [23] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, A. Ng, “Large scale distributed deep networks,” Advances in Neural Inf. Proc. Systems 25, 2012.
  • [24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journ. Mach. Learn. Research, vol. 15 (Jun), pp. 1929-1958, 2014.
  • [25] 3GPP TS 22.368, “Service requirements for Machine-Type Communications (MTC),” V13.1.0, Dec. 2014.
  • [26] H. Shariatmadari, R. Ratasuk, S. Iraji, A. Laya, T. Taleb, R. Jäntti, and A. Ghosh, “Machine-type communications: current status and future perspectives toward 5G systems,” IEEE Communications Magazine, vol. 53, no. 9, pp. 10-17, 2015.
  • [27] L. E. Li, Z. M. Mao and J. Rexford, “Toward software-defined cellular networks,” European Workshop on Software Defined Networking (EWSDN), pp. 7-12, October 2012.
  • [28] http://www.3gpp.org/DynaReport/32842.htm
  • [29] N. Zilberman, P. M. Watts, C. Rotsos, and A. W. Moore, “Reconfigurable network systems and software-defined networking,” Proceedings of the IEEE, vol. 103, no. 7, pp. 1102-1124, 2015.
  • [30] Y. Li, and M. Chen, “Software-defined network function virtualization: A survey,” IEEE Access, 3, pp. 2542-2553, 2015.
  • [31] S. Zhang, S. C. Liew, and P. P. Lam, “Hot topic: Physical-layer network coding,” ACM Annual Conference on Mobile Computing and Networking, pp. 358-365, Sep. 2006.
  • [32] http://www.inc.cuhk.edu.hk/research/projectsphysical-layer-network-coding-pnc
  • [33] C. Fragouli, J. Y. Le Boudec, and J. Widmer, “Network coding: An instant primer,” ACM SIGCOMM Computer Communication Review, vol. 36, no. 1, pp. 63-68, 2006.
  • [34] P. A. Chou, and Y. Wu, “Network coding for the internet and wireless networks,” IEEE Signal Processing Magazine, vol. 24, no. 5, 2007.
  • [35] http://steinwurf.com/tag/kodo/
  • [36] J. Hansen, D. E. Lucani, J. Krigslund, M. Médard, F. H. P. Fitzek, “Network coded software defined networking: enabling 5G transmission and storage networks,” IEEE Comm. Mag., vol. 53, no. 9, pp. 100–107, 2015.
  • [37] D. Szabo, A. Csoma, P. Megyesi, A. Gulyas, and F. H. Fitzek, “Network coding as a service,” arXiv preprint arXiv:1601.03201, 2016.
  • [38] M. Gastpar and M. Vetterli, Information Processing in Sensor Networks: Second International Workshop, IPSN 2003, Palo Alto, CA, USA, April 22–23, 2003 Proceedings, 2003, ch. Source-Channel Communication in Sensor Networks, pp. 162–177.
  • [39] G. Mergen and L. Tong, “Type based estimation over multiaccess channels,” IEEE Transactions on Signal Processing, vol. 54, no. 2, pp. 613–626, Feb 2006.
  • [40] W. U. Bajwa, J. D. Haupt, A. M. Sayeed, and R. D. Nowak, “Joint source-channel communication for distributed estimation in sensor networks,” IEEE Transactions on Information Theory, vol. 53, no. 10, pp. 3629–3653, Oct. 2007.
  • [41] S. Stańczak, M. Wiczanowski, and H. Boche, “Distributed utility-based power control: Objectives and algorithms,” IEEE Transactions on Signal Processing, vol. 55, no. 10, pp. 5058-5068, Oct. 2007.
  • [42] M. K. Banavar, C. Tepedelenlioglu, and A. Spanias, “Distributed SNR estimation with power constrained signaling over Gaussian multiple-access channels,” IEEE Transactions on Signal Processing, vol. 60, no. 6, pp. 3289-3294, June 2012.
  • [43] B. Nazer and M. Gastpar, “Computation over multiple-access channels,” IEEE Trans. Info. Theory, vol. 53, no. 10, pp. 3498-3516, Oct. 2007.
  • [44] B. Nazer, and M. Gastpar, “Compute-and-forward: Harnessing interference through structured codes,” IEEE Trans. Info. Theory, vol. 57, no. 10, pp. 6463-6486, Oct. 2011.
  • [45] M. Goldenbaum, H. Boche, and S. Stańczak, “Harnessing interference for analog function computation in wireless sensor networks,” IEEE Trans. Signal Processing, vol. 61, no. 20, pp. 4893–4906, Oct. 2013.
  • [46] K. Miller, T. Biermann, H. Woesner, and H. Karl, “Network coding in passive optical networks,” IEEE International Symp. Network Coding, Toronto, Ontario, Canada, June 2010, pp. 1–6.
  • [47] K. Fouli, M. Maier, and M. Médard, “Network coding in next-generation passive optical networks,” IEEE Communications Magazine, vol. 49, no. 9, pp. 38-46, Sept. 2011.
  • [48] H. J. Caulfield and S. Dolev, “Why future supercomputing requires optics,” Nature Photonics, vol. 4, pp. 261-263, 2010.
  • [49] M. Lukosevicius, H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Computer Science Review, vol. 3, no. 3, pp. 127-149, Aug. 2009.
  • [50] L. Appeltant, M. C. Soriano, G. Van der Sande, J. Danckaert, S. Massar, J. B. Dambre, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as complex system,” Nature Commun. 2, 468, 2011.
  • [51] L. Larger, M.C. Soriano, D. Brunner, L. Appeltant, J.M. Gutierrez, L. Pesquera, C.R. Mirasso, I. Fischer, “Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing,” Optics Express 20(3), 3241-3249, 2012.
  • [52] Y. Paquot, F. Duport, A. Smerieri, J. Dambre, B. Schrauwen, M. Haelterman, S. Massar, “Optoelectronic reservoir computing,” Scientific Reports 2, 287, 2012.
  • [53] D. Brunner, M. C. Soriano, C. R. Mirasso, and I. Fischer, “Parallel photonic information processing at Gbps data rates using transient states,” Nature Communications 4, 1364, 2013.
  • [54] K. Vandoorne, S. Member, J. Dambre, D. Verstraeten, B. Schrauwen, and P. Bienstman, “Parallel RC using optical amplifiers,” IEEE Trans. Neural Nets, vol. 22, no. 9, 2011.
  • [55] M. Hermans, M.C. Soriano, J. Dambre, P. Bienstman, I. Fischer, “Photonic delay systems as machine learning implementations”, J. Mach. Learn. Res. 16, 2081, 2015.
  • [56] J. W. Lockwood, N. McKeown, G. Watson, G. Gibb, P. Hartke, J. Naous, R. Raghuraman, and J. Luo, “NetFPGA–an open platform for gigabit-rate network switching and routing,” IEEE International Conference on Microelectronic Systems Education, pp. 160-161, 2007.
  • [57] S. Kim, W. S. Jeong, W. W. Ro, and J. L. Gaudiot, “Design and evaluation of random linear network coding Accelerators on FPGAs,” ACM Transactions on Embedded Computing Systems (TECS), vol. 13, no. 1, pp. 13, 2013.
  • [58] S. M. Choi, K. Lee, and J. Park, “Massive parallelization for random linear network coding,” Appl. Math. Inf. Sci., vol. 9, no. 2L, pp. 571-578, 2013.
  • [59] S. Kannan, “Layering principles for wireless networks,” Ph.D. dissertation, University of Illinois at Urbana-Champaign, 2012.
  • [60] R. Kötter and M. Médard, “An algebraic approach to network coding,” IEEE/ACM Trans. Networking, vol. 11, no. 5, pp. 782–795, Oct. 2003.
  • [61] H. Kowshik and P.R. Kumar, “Optimal computation of symmetric boolean functions in collocated networks,” IEEE J. Selected Areas in Communications, vol. 31, no. 4, pp. 639–654, Apr. 2013.
  • [62] V. Lalitha, N. Prakash, K. Vinodh, P. Vijay Kumar, and S. Sandeep Pradhan, “Linear coding schemes for the distributed computation of subspaces,” IEEE J. Selected Areas in Communications, vol. 31, no. 4, pp. 678–690, Apr. 2013.
  • [63] B.K. Rai and B.K. Dey, “On network coding for sum-networks,” IEEE Transactions on Information Theory, vol. 58, no. 1, pp. 50–63, Jan. 2012.
  • [64] A. Ramamoorthy and M. Langberg, “Communicating the sum of sources over a network,” IEEE Journ. Sel. Areas in Comm’s, vol. 31, no. 4, pp. 655–665, Apr. 2013.
  • [65] M. Sefidgaran and A. Tchamkerten, “Distributed function computation over a rooted directed tree,” to appear in IEEE Trans. Info. Theory.
  • [66] V. Shah, B.K. Dey, and D. Manjunath, “Network flows for function computation,” IEEE Journ. Sel. Areas in Comm’s, vol. 31, no. 4, pp. 714–730, Apr. 2013.
  • [67] F. H.Fitzek, T. Toth, A. Szabados, M. V. Pedersen, D. E. Lucani, M. Sipos, H. Charaf and M. Médard, “ Implementation and performance evaluation of distributed cloud storage solutions using random linear network coding,” IEEE ICC Workshops 2014, pp. 249–254, Sydney, Australia.
  • [68] C. Fragouli, and E. Soljanin, “Network coding: Fundamentals and applications,” Foundations and Trends in Networking, vol. 2, no. 1, 2007.
  • [69] T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, “A random linear network coding approach to multicast,” IEEE Trans. Info. Theory, vol. 52, no. 10, pp. 4413-4430, Oct. 2006.
  • [70] E. Candes and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Processing Magazine, vol. 25, pp. 21-30, March 2008.
  • [71] K. Hayashi, M. Nagahara, and T. Tanaka, “A user’s guide to compressed sensing for communications systems,” IEICE Trans. on Communications, vol. E96-B, no. 3, pp. 685-712, Mar. 2013.
  • [72] M. I. Jordan, and T. Mitchell, “Machine learning: Trends, perspectives, and prospects,” Science, 349, pp. 255-260, 2015.
  • [73] Y. Zhang, J. C. Duchi, and M. J. Wainwright, “Communication-efficient algorithms for statistical optimization,” Journal of Machine Learning Research, vol. 14, pp. 3321-3363, Nov. 2013.
  • [74] A. Agarwal, J. Duchi, “Distributed delayed stochastic optimization,” proc. Advances in Neural Information Processing Systems, 2011.
  • [75] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends in Machine Learning, vol. 3, no. 1, 2011.
  • [76] O. Shamir, N. Srebro, T. Zhang, “Communication efficient distributed optimization using an approximate Newton-type method,” 31st International Conference on Machine Learning, ICML, 2014.
  • [77] L. Wasserman, “All of statistics,” Springer, 2004.
  • [78] L. Bottou and O. Bousquet, “The tradeoffs of large scale learning,” in Optimization for Machine Learning, 351-368, MIT Press, 2011.
  • [79] F. Yousefian, A. Nedic, U. V. Shanbhag, “On stochastic gradient and subgradient methods with adaptive steplength sequences,” Automatica, vol. 48, no. 1, pp. 56-67, 2012.
  • [80] F. Niu, B. Recht, C. Re, S. J. Wright, “HOGWILD!: A lockFree approach to parallelizing stochastic gradient descent,” available at: http://arxiv.org/abs/1106.5730.
  • [81] L. Bottou, “Large-scale machine learning with stochastic gradient descent,” 19th Int’l Conf. on Comp. Statistics (COMPSTAT 2010), Paris, France, pp. 177–187, Aug. 2010.
  • [82] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. of the IEEE, vol. 95, no. 1, pp. 215-233, Jan. 2007.
  • [83] M. H. DeGroot, “Reaching a consensus,” Journal of the American Statistical Association, vol. 69, no. 345, March 1974.
  • [84] S. Kar and J. M. F. Moura, “Distributed consensus algorithms in sensor networks: link failures and channel noise,” IEEE Trans. Signal Processing, vol. 57, no. 1, pp. 355-369, Jan. 2009.
  • [85] E. Fasolo, M. Rossi, J. Widmer, and M. Zorzi, “In-network aggregation techniques for wireless sensor networks: a survey,” IEEE Trans. Wireless Comm’s, vol. 14, no. 2, pp. 70–87, Apr. 2007
  • [86] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein, “Graphlab: A new framework for parallel machine learning,” Uncertainty in Artificial Intelligence, 2010.
  • [87] M. Bhandarkar, “MapReduce programming with apache Hadoop,” IEEE Parallel and Dist. Processing (IPDPS), 2010.
  • [88] http://www.etsi.org/technologies-clusters/technologies/mobile-edge-computing