Optimizing Pipelined Computation and Communication for Latency-Constrained Edge Learning
Abstract
Consider a device that is connected to an edge processor via a communication channel. The device holds local data that is to be offloaded to the edge processor so as to train a machine learning model, e.g., for regression or classification. Transmission of the data to the learning processor, as well as training based on Stochastic Gradient Descent (SGD), must both be completed within a time limit. Assuming that communication and computation can be pipelined, this letter investigates the optimal choice for the packet payload size, given the overhead of each data packet transmission and the ratio between the computation and the communication rates. This amounts to a tradeoff between bias and variance, since communicating the entire data set first reduces the bias of the training process but may not leave sufficient time for learning. Analytical bounds on the expected optimality gap are derived so as to enable an effective optimization, which is validated in numerical results.
I Introduction
Edge learning refers to the training of machine learning models on devices that are close to the end users [1]. The proximity to the user is instrumental in facilitating a low-latency response, in enhancing privacy, and in reducing backhaul congestion. Edge learning processors include smart phones and other user-owned devices, as well as edge nodes of a wireless network that provide wireless access and computational resources [1]. As illustrated in Fig. 1, the latter case hinges on the offloading of data from the data-bearing device to the edge processor, and can be seen as an instance of mobile edge computing [2].
Research on edge learning has so far focused mostly on scenarios in which training occurs locally at the data-bearing devices. In these setups, devices can communicate either through a parameter server [3] or in a device-to-device manner [4]. The goal is either to learn a global model without directly exchanging the local data [5] or to train separate models while leveraging the correlation among the local data sets [6]. Devices can exchange either information about the local model parameters, as in federated learning [7], or gradient information, as in distributed Stochastic Gradient Descent (SGD) methods [8, 9].
In this work, we consider an edge learning scenario in which training takes place at an edge node of a wireless system, as illustrated in Fig. 1. The data is held by a device and has to be offloaded through a communication channel to the edge node. The learning task has to be executed within a time limit, which might be insufficient to transmit the complete dataset. Transmission of data blocks from device to edge node, and training at the edge node, can be carried out simultaneously (see Fig. 2). Each transmitted packet contains a fixed overhead, accounting, e.g., for metadata and pilots. Given the overhead of each data packet transmission, what is the optimal size of a communication block? Communicating the entire data set first reduces the bias of the training process but may not leave sufficient time for learning. We investigate a more general strategy that communicates in blocks and pipelines communication and computation with an optimized block size, which is shown to be generally preferable. Analysis and simulation results provide insights into the optimal duration of the communication block and into the performance gains attainable with an optimized communication and computation policy.
The rest of this letter is organized as follows. In Sec. II, we provide an overview of the model and the associated notations. In Sec. III, we examine the technical assumptions necessary for our work. In Sec. IV, we provide our main result and discuss its implications. Finally, in Sec. V, we consider numerical experiments in the light of our result.
II System model
As seen in Fig. 1, we study an edge learning system in which a device communicates with an edge node, and associated server, over an error-free communication channel. The device has access to a local training dataset $\mathcal{D}$ of $N$ data points, and training of a machine learning model is carried out at the edge node based on data received from the device. As illustrated in Fig. 2, communication and learning must be completed within a time limit $T$. To this end, the transmissions are organized into blocks, and transmission and computing at the edge node can be performed in parallel.
Training at the edge node aims at identifying a model parametrized by a vector $\boldsymbol{\theta}$ within a given hypothesis class. Training is carried out by (approximately) solving the Empirical Risk Minimization (ERM) problem (see, e.g., [10]). This amounts to the minimization with respect to the vector $\boldsymbol{\theta}$ of the empirical average of a loss function $f(\boldsymbol{\theta}, z)$ over all the data points in the training dataset, i.e.,

$F(\boldsymbol{\theta}) = \frac{1}{N} \sum_{n=1}^{N} f(\boldsymbol{\theta}, z_n).$ (1)
As detailed below, the minimization of the function $F(\boldsymbol{\theta})$ is carried out at the edge node using SGD, based on the data points received from the device.
In order to elaborate on the communication and computation protocol illustrated in Fig. 2, we normalize all time measures to the time required to transmit one data sample from the device to the edge node. With this convention, we denote as $\Delta$ the time required to make one SGD update at the edge node.
As seen in Fig. 2, transmission from the device to the edge node is organized into blocks. In this study, we ignore the effect of channel errors, which is briefly discussed in Sec. VI. In the $k$th block, the device transmits a subset $\mathcal{B}_k$ of $b$ new samples from its local dataset. At the end of the block, the edge node adds these samples to the subset $\mathcal{S}_{k+1}$ of samples it has available for training in the $(k+1)$th block, i.e., $\mathcal{S}_{k+1} = \mathcal{S}_k \cup \mathcal{B}_k$, with $\mathcal{S}_1 = \emptyset$. The samples in $\mathcal{B}_k$ are randomly and uniformly selected from the set of samples not yet transmitted to the edge node. A packet sent in any block contains an overhead, e.g., for pilots and metadata, of duration $T_O$, irrespective of the number of transmitted samples. It follows that the duration of a transmission block is $T_O + b$.
There are at most $B = \lceil N/b \rceil$ transmission blocks, since this many blocks are sufficient to deliver the entire dataset of $N$ samples to the edge node. Therefore, we need to distinguish two cases. As seen in Fig. 2(a), when $T < B(T_O + b)$, the device is only able to deliver a fraction of the samples. In particular, denoting as $K = \lfloor T/(T_O + b) \rfloor$ the number of blocks, the fraction of data points delivered at the edge node at time $T$ equals $Kb/N$. In contrast, if $T \ge B(T_O + b)$, as illustrated in Fig. 2(b), the edge node has the entire dataset available after $B$ blocks, that is, for a duration equal to $T - B(T_O + b)$. Henceforth, we refer to this last period as block $B+1$.
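The block accounting described above can be sketched numerically. The function below is an illustrative sketch only: the names `b` (per-block payload), `T_O` (per-packet overhead), and `T` (normalized time budget), and the floor/ceiling arithmetic, are assumptions reconstructed from the description in the text.

```python
import math

def schedule(N, b, T_O, T):
    """Sketch of the block schedule under the normalized time model:
    b samples per block, per-packet overhead T_O, time budget T,
    and one time unit per transmitted sample (names are assumptions)."""
    block_len = T_O + b                  # duration of one transmission block
    B = math.ceil(N / b)                 # blocks sufficient to send all N samples
    if T < B * block_len:
        K = int(T // block_len)          # complete blocks that fit in the budget
        return {"full_dataset": False, "blocks": K,
                "fraction_delivered": min(K * b, N) / N, "residual_time": 0.0}
    return {"full_dataset": True, "blocks": B,
            "fraction_delivered": 1.0, "residual_time": T - B * block_len}
```

For example, with `N=1000`, `b=100`, `T_O=5`, a budget `T=500` fits only four blocks, so 40% of the data is delivered, while `T=2000` leaves a residual training period after full delivery.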
During each block $k$, the edge node computes $\lfloor (T_O + b)/\Delta \rfloor$ local SGD updates (2). During block $B+1$, the edge node computes $\lfloor (T - B(T_O + b))/\Delta \rfloor$ SGD updates. The $i$th local update at block $k$, with $i = 1, 2, \dots$, is given as
$\boldsymbol{\theta}^k_i = \boldsymbol{\theta}^k_{i-1} - \gamma \nabla f(\boldsymbol{\theta}^k_{i-1}, z^k_i),$ (2)
where $\gamma > 0$ is the learning rate, and $z^k_i$ is a data point sampled i.i.d. uniformly from the subset $\mathcal{S}_k$ of samples currently available at the edge node. Note that the first iterate of each block coincides with the last iterate of the previous block.
The goal of this work is to optimize the number $b$ of samples sent in each block with the aim of minimizing the empirical loss (1) at the edge node at the end of time $T$. In the next sections, we present an analysis of the empirical loss obtained at time $T$ that allows us to gain insights into the optimal choice of $b$.
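To make the pipelined protocol concrete, the following sketch simulates block-wise delivery with SGD running on the samples already received, for a squared loss. The loss choice, the parameter names, and the scheduling details are illustrative assumptions, not the letter's exact protocol; constant factors of the gradient are absorbed into the stepsize.

```python
import numpy as np

def pipelined_sgd(X, y, b, T_O, T, delta, gamma, seed=0):
    """Illustrative pipelined training: while each block of b samples is in
    transit (duration T_O + b), the edge node runs SGD on the samples already
    delivered; delta is the (normalized) time per SGD update."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    theta = np.zeros(d)
    received, elapsed = 0, 0.0

    def train(duration):
        nonlocal theta
        for _ in range(int(duration // delta)):
            if received == 0:
                return                       # nothing delivered yet, idle
            i = rng.integers(received)       # uniform over delivered samples
            # gradient step for the squared loss (constants absorbed in gamma)
            theta = theta - gamma * (X[i] @ theta - y[i]) * X[i]

    while received < N and elapsed + T_O + b <= T:
        train(T_O + b)                       # compute while the block is in transit
        received = min(received + b, N)
        elapsed += T_O + b
    train(T - elapsed)                       # remaining time after the last block
    return theta
```

On a small synthetic regression problem, the returned iterate attains a much lower empirical loss than the all-zeros initialization, illustrating the overlap of communication and computation.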
III Technical assumptions
In order to study the training loss achieved at the edge node at the end of the training process, we make the following standard assumptions, which apply, for instance, to linear models with quadratic or cross-entropy losses under suitable constraints (see the comprehensive review paper [9]):

the sequence of iterates in (2) is contained in a bounded open set of radius $R$, over which the empirical loss is bounded below by a scalar $F_{\inf}$ for all blocks $k$;

the function $f(\boldsymbol{\theta}, z)$ is continuously differentiable in $\boldsymbol{\theta}$ for any fixed value of $z$ and is smooth in $\boldsymbol{\theta}$, i.e.,

$\|\nabla f(\boldsymbol{\theta}, z) - \nabla f(\boldsymbol{\theta}', z)\| \le L \|\boldsymbol{\theta} - \boldsymbol{\theta}'\|$ (3)

for all $\boldsymbol{\theta}, \boldsymbol{\theta}'$, and for all data points $z$. This implies

$f(\boldsymbol{\theta}', z) \le f(\boldsymbol{\theta}, z) + \nabla f(\boldsymbol{\theta}, z)^{\top}(\boldsymbol{\theta}' - \boldsymbol{\theta}) + \frac{L}{2}\|\boldsymbol{\theta}' - \boldsymbol{\theta}\|^2$ (4)

for all $\boldsymbol{\theta}, \boldsymbol{\theta}'$, and for all data points $z$;

the loss function $F(\boldsymbol{\theta})$ is convex and satisfies the Polyak-Lojasiewicz (PL) condition, i.e., there exists a constant $\mu > 0$ such that

$2\mu \big( F(\boldsymbol{\theta}) - F(\boldsymbol{\theta}^\star) \big) \le \|\nabla F(\boldsymbol{\theta})\|^2$ (5)

for all $\boldsymbol{\theta}$, where $\boldsymbol{\theta}^\star$ is a minimizer of $F(\boldsymbol{\theta})$. The PL condition is implied by, but does not imply, strong convexity [9].
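For intuition, the following short derivation sketches why $\mu$-strong convexity implies the PL condition (the converse implication does not hold):

```latex
\begin{aligned}
F(w) &\ge F(\boldsymbol{\theta}) + \nabla F(\boldsymbol{\theta})^{\top}(w - \boldsymbol{\theta})
        + \tfrac{\mu}{2}\|w - \boldsymbol{\theta}\|^2
        && \text{(strong convexity)} \\
F(\boldsymbol{\theta}^\star) &\ge F(\boldsymbol{\theta})
        - \tfrac{1}{2\mu}\|\nabla F(\boldsymbol{\theta})\|^2
        && \text{(minimize both sides over } w\text{)} \\
\|\nabla F(\boldsymbol{\theta})\|^2 &\ge
        2\mu\big(F(\boldsymbol{\theta}) - F(\boldsymbol{\theta}^\star)\big)
        && \text{(rearrange: the PL condition (5))}
\end{aligned}
```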
We further need to make assumptions on the statistics of the gradient used in the update (2). To this end, for each block $k$, we define the empirical loss limited to the samples available at the edge node at block $k$ as

$F_k(\boldsymbol{\theta}) = \frac{1}{|\mathcal{S}_k|} \sum_{z \in \mathcal{S}_k} f(\boldsymbol{\theta}, z),$ (6)
the empirical loss over the samples transmitted at block $k$ as

$\hat{F}_k(\boldsymbol{\theta}) = \frac{1}{|\mathcal{B}_k|} \sum_{z \in \mathcal{B}_k} f(\boldsymbol{\theta}, z),$ (7)
and the empirical loss over the samples not available at the edge node at block $k$ as

$\bar{F}_k(\boldsymbol{\theta}) = \frac{1}{N - |\mathcal{S}_k|} \sum_{z \in \mathcal{D} \setminus \mathcal{S}_k} f(\boldsymbol{\theta}, z).$ (8)
Note that we have the identity $F(\boldsymbol{\theta}) = \frac{|\mathcal{S}_k|}{N} F_k(\boldsymbol{\theta}) + \frac{N - |\mathcal{S}_k|}{N} \bar{F}_k(\boldsymbol{\theta})$.
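The identity above simply states that the full empirical loss is a weighted average of the loss over the delivered samples and the loss over the remaining ones, with weights given by the fractions of delivered and missing samples. A quick numerical check of this decomposition (the variable names are illustrative):

```python
import numpy as np

# Stand-ins for the per-sample losses f(theta, z_n) at a fixed parameter theta.
rng = np.random.default_rng(1)
losses = rng.random(100)

delivered = losses[:30]           # samples available at the edge node (|S_k| = 30)
remaining = losses[30:]           # samples not yet delivered (N - |S_k| = 70)

F_full = losses.mean()            # full empirical loss
F_edge = delivered.mean()         # loss over delivered samples
F_rest = remaining.mean()         # loss over remaining samples
recombined = 0.3 * F_edge + 0.7 * F_rest   # weights |S_k|/N and (N - |S_k|)/N
```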
First, we observe that, given the previously transmitted data samples, the stochastic gradient in (2) is an unbiased estimate of the gradient of the empirical loss limited to the samples available at the edge node at block $k$. In formulas, $\mathbb{E}_k[\nabla f(\boldsymbol{\theta}, z)] = \nabla F_k(\boldsymbol{\theta})$, where $\mathbb{E}_k$ is the conditional expectation given the previously transmitted samples. We finally make the following assumption (see, e.g., [9]):

For any set $\mathcal{S}_k$ of samples available at the edge node, there exist scalars $M \ge 0$ and $M_V \ge 0$ such that

$\mathbb{V}_k[\nabla f(\boldsymbol{\theta}, z)] \le M + M_V \|\nabla F_k(\boldsymbol{\theta})\|^2,$ (9)

where $\mathbb{V}_k[\cdot]$ is the variance with respect to the same conditional distribution.
IV Convergence analysis
In this section, we present our main result and its implications for the optimal choice of the number of transmitted samples per block. Henceforth, we use the notation $\mathbb{E}_k$ to indicate the conditional expectation over the samples selected for the SGD updates in the $k$th block, given the set $\mathcal{S}_k$ of samples available at the edge node at the start of the block. We similarly define $\mathbb{E}_{B+1}$ as the conditional expectation over the samples selected for the SGD updates in block $B+1$ (see Fig. 2(b)).
Theorem 1
Under Assumptions 1-4, assume that the SGD stepsize $\gamma$ satisfies
(10) 
and define
(11) 
Then, for any sequence of transmitted sample sets, the expected optimality gap at time $T$ is upper bounded as
(12) 
if the time limit $T$ does not allow the delivery of the entire dataset (Fig. 2(a)); and by
(13) 
if the entire dataset is delivered by the last block (Fig. 2(b)).
Proof: See Appendix A.
The bounds (12)-(13) extend the classical analysis of the convergence of SGD for the case in which the entire dataset is available at the learner [9, Theorem 4.6] to the setup under study. The bounds distinguish the case in which the edge node has the entire data set by the last block from the complementary case, as seen in Fig. 2.
The first term in the bounds (12)-(13) represents an asymptotic bias that does not vanish with the number of SGD updates, even when all the data points are available at the edge node; it is due to the variance (9) of the stochastic gradient. For smaller values of $T$, the bound also comprises an additional bias term, due to the lack of knowledge about the samples not received at the edge node by the end of the training process. In contrast, the last term in the bounds (12)-(13) accounts for the standard geometric decrease of the initial error in gradient-based learning algorithms, where the initial error for each block is the optimality gap at the start of the block. Note that the additional exponent in (13) accounts for the number of updates made after all the samples have been received at the edge node.
The bounds (12)-(13) can in principle be optimized numerically in order to find an optimal value of the block size $b$. However, in practice, doing so would require fixing the choice of the transmitted sample sets and running Monte Carlo experiments for every randomly selected realization of the sequence of SGD updates (2), which is computationally intractable. Therefore, in the following, we derive a generally looser bound that can be directly evaluated numerically without running any Monte Carlo simulations. This bound will then be used in order to obtain an optimized value for $b$.
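Once a directly computable bound is available in closed form, optimizing the block size reduces to a one-dimensional grid search. A minimal sketch, where `bound` is a placeholder callable standing in for the right-hand side of such a bound evaluated at the system parameters:

```python
def optimize_block_size(bound, candidates):
    """Return the block size minimizing an upper bound on the expected
    optimality gap; `bound` maps a candidate block size to the bound value."""
    return min(candidates, key=bound)
```

For instance, `optimize_block_size(lambda b: (b - 7) ** 2, range(1, 20))` returns the candidate minimizing the toy surrogate; in practice the callable would evaluate the analytical bound for each block size on a grid.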
Corollary 1
Under the conditions of Theorem 1, the expected optimality gap at time $T$ is upper bounded as
(14) 
if the time limit $T$ does not allow the delivery of the entire dataset (Fig. 2(a)); and by
(15) 
if the entire dataset is delivered by the last block (Fig. 2(b)).
Proof: See Appendix B.
We plot the bounds (14)-(15) in Fig. 3 for fixed representative values of the system parameters. We note that the PL constant and the smoothness constant represent respectively the smallest and largest eigenvalues of the data Gramian matrix for the example studied in Sec. V. For each value of the overhead $T_O$, we mark in the figure both the value of $b$ that minimizes the upper bound in Corollary 1 and the minimum value of $b$ that, as seen in Fig. 2, allows the full transmission of the training set by the last training block.
A first observation is that the optimized value of $b$, henceforth referred to as $b^*$, is generally smaller than the number of training points in the dataset, suggesting the advantages of pipelining communication and computation. Furthermore, as the overhead $T_O$ increases, it becomes preferable, in terms of the bounds (14)-(15), to choose larger values for the block size $b$. This is because a larger overhead needs to be amortized by transmitting more data in each block, lest the transmission time be dominated by the overhead. Finally, for smaller values of $T_O$, the minimum of the bound is obtained when the entire data set is eventually transferred to the edge node, while the opposite is true for larger values of $T_O$. Interestingly, this suggests that it may be advantageous, in terms of final training loss, to forgo the transmission of some training points in exchange for more time to carry out training on a fraction of the data set.
V Numerical experiments
In this section, we validate the theoretical findings of the previous sections by means of a numerical example based on ridge regression on the California Housing dataset [11]. The dataset contains 20640 covariate vectors, each with a real-valued label. We randomly select a fraction of the samples to define the training set. For Fig. 4, fixed values of the time limit $T$ and of the SGD update time $\Delta$ are used. The parameter vector is initialized using i.i.d. zero-mean Gaussian entries with unitary power. The loss function is the squared prediction error regularized by a quadratic penalty on the parameter vector, with a fixed regularization coefficient.
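The per-sample loss used in this experiment can be sketched as follows, assuming the standard ridge-regression form $f(\boldsymbol{\theta}, (x, y)) = (y - \boldsymbol{\theta}^{\top} x)^2 + \lambda \|\boldsymbol{\theta}\|^2$ (the exact coefficient values used in the letter are not reproduced here):

```python
import numpy as np

def ridge_loss(theta, x, y, lam):
    """Per-sample ridge loss, assuming f(theta,(x,y)) = (y - theta@x)^2
    + lam * ||theta||^2 (form and names are assumptions)."""
    r = y - theta @ x
    return r * r + lam * theta @ theta

def ridge_grad(theta, x, y, lam):
    # gradient of the per-sample loss: -2*(y - theta@x)*x + 2*lam*theta
    return -2.0 * (y - theta @ x) * x + 2.0 * lam * theta
```

The gradient expression can be verified against a finite-difference approximation of the loss.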
By computing the average final training loss for each value of $b$, we can experimentally determine the optimal value of the block size. We compare the performance under this experimental optimum with the performance obtained using the minimizer of the bounds (14)-(15). To this end, in Fig. 4, given a fixed overhead size $T_O$, we plot the average training loss against the normalized training time for the experimentally optimal block size and for the value of $b$ obtained from the bounds (14)-(15). As references, we also plot as dotted lines the losses obtained for other selected values of $b$. The choice of the block size minimizing the average final loss is seen to strike a tradeoff between the rate of decrease of the loss and the final attained accuracy. In particular, decreasing $b$ allows the edge node to reduce the loss more quickly, albeit with noisier updates and at the cost of a potentially larger final training loss due to the transmitted packets being dominated by the overhead. Importantly, determining the optimal block size experimentally, instead of using the bounds (14)-(15), provides only a small gain in terms of the final training loss, at the cost of a computationally burdensome parameter optimization.
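The experimental procedure described above is a Monte Carlo sweep: average the final loss over several random runs for each candidate block size and keep the minimizer. A minimal sketch, where `run_once(b, seed)` is a placeholder for one full pipelined training episode returning the final empirical loss:

```python
import numpy as np

def average_final_loss(run_once, b, seeds):
    """Monte Carlo estimate of the average final training loss for block
    size b; `run_once` is an assumed user-supplied training routine."""
    return float(np.mean([run_once(b, s) for s in seeds]))

def experimental_best_block_size(run_once, candidates, seeds):
    # Exhaustive sweep: far more costly than minimizing the analytical bound,
    # which needs no training runs at all.
    return min(candidates, key=lambda b: average_final_loss(run_once, b, seeds))
```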
VI Conclusions
In this work, we considered an edge computing system in which an edge learner carries out training over a limited time period while receiving the training data from a device through a communication link. Considering a strategy that allows communication and computation to be pipelined, we have analyzed the optimal communication block size as a function of the packet overhead. Among interesting directions for future work, we mention the inclusion of the effect of delays due to errors in the communication channel. In this case, the optimization problem could be generalized to account for the selection of the data rate. Other interesting extensions would be to consider online learning, where data sent in previous packets can be only partially stored at the server, and to investigate a scenario with multiple devices.
Appendix A Proof of Theorem 1
Using the same arguments as in the proof of [9, Theorem 4.6], we can directly obtain the following inequality for each block $k$:
(16) 
Note that the initial parameter at block $k$ is the final parameter obtained at block $k-1$. By definition of the local empirical losses (6)-(7), we have the equality
(17) 
Plugging (17) into (16), we have
(18) 
Iterating this substitution for all blocks $k$, we obtain
(19) 
While inequality (19) applies for any choice of the block size, we now specialize the result to the case where the allocated amount of time is not sufficient to transmit the whole dataset (see Fig. 2(a)). According to (6)-(8), for this case, we have the equality
(20) 
Plugging (20) into (19) for the last block, we then obtain
(21) 
Appendix B Proof of Corollary 1
Defining, for all blocks $k$, the optimum solution of the per-block empirical loss, we can write the inequality
(23) 
Using the Lipschitz continuity property of the gradients in Assumption 2 together with Assumption 1, we can bound the per-block optimality gap. Using a similar argument, we can bound the corresponding gap for the loss over the samples not available at the edge node. Plugging this into (21), we obtain the inequality
(24) 
which is (14) in Corollary 1. Following the same approach with (19), we obtain
(25) 
References
 [1] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, “Wireless network intelligence at the edge.” [Online]. Available: http://arxiv.org/abs/1812.02858
 [2] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, Mobile Edge Computing: A Key Technology Towards 5G. Sophia Antipolis, France: ETSI (European Telecommunications Standards Institute), 2015.
 [3] U. Mohammad and S. Sorour, “Adaptive task allocation for mobile edge learning.” [Online]. Available: http://arxiv.org/abs/1811.03748
 [4] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, “When edge meets learning: Adaptive control for resource-constrained distributed machine learning,” in Proc. IEEE INFOCOM, April 2018, pp. 63–71.
 [5] S. Teerapittayanon, B. McDanel, and H. T. Kung, “Distributed deep neural networks over the cloud, the edge and end devices,” in Proc. IEEE International Conference on Distributed Computing Systems (ICDCS), June 2017.
 [6] V. Smith, C.-K. Chiang, M. Sanjabi, and A. S. Talwalkar, “Federated multi-task learning,” in Advances in Neural Information Processing Systems 30, 2017, pp. 4424–4434.
 [7] H. B. McMahan, E. Moore, D. Ramage, and B. A. y Arcas, “Federated learning of deep networks using model averaging.” [Online]. Available: http://arxiv.org/abs/1602.05629
 [8] M. M. Amiri and D. Gündüz, “Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air.” [Online]. Available: http://arxiv.org/abs/1901.00844
 [9] L. Bottou, F. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning,” SIAM Review, vol. 60, no. 2, pp. 223–311, 2018.
 [10] O. Simeone, A Brief Introduction to Machine Learning for Engineers. Foundations and Trends in Signal Processing, 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8453245
 [11] R. K. Pace and R. Barry, “Sparse spatial autoregressions,” Statistics and Probability Letters, vol. 33, pp. 291–297, 1997.