Network-Density-Controlled Decentralized Parallel Stochastic Gradient Descent in Wireless Systems
Abstract
This paper proposes a communication strategy for decentralized learning in wireless systems. Our discussion is based on decentralized parallel stochastic gradient descent (DPSGD), one of the state-of-the-art algorithms for decentralized learning. The main contribution of this paper is to raise a novel open question for decentralized learning in wireless systems: the density of a network topology may significantly influence the runtime performance of DPSGD. In general, it is difficult to guarantee delay-free communications without any communication deterioration in real wireless network systems because of path loss and multipath fading. These factors significantly degrade the runtime performance of DPSGD. To alleviate such problems, we first analyze the runtime performance of DPSGD by considering real wireless systems. This analysis yields the key insights that a dense network topology (1) does not significantly improve the training accuracy of DPSGD compared to a sparse one, and (2) strongly degrades the runtime performance because this setting generally requires low-rate transmission. Based on these findings, we propose a novel communication strategy in which each node estimates optimal transmission rates such that communication time during DPSGD optimization is minimized under a constraint on network density, which is characterized by the radio propagation property. The proposed strategy improves the runtime performance of DPSGD in wireless systems. Numerical simulations reveal that the proposed strategy is capable of enhancing the runtime performance of DPSGD.
I Introduction
Owing to the rapid development of deep neural networks (DNNs), many machine learning techniques have been proposed over the past decade. In general, constructing an accurate DNN incurs high computational costs and requires massive numbers of training samples. This problem has motivated many researchers to investigate machine learning techniques that exploit distributed computing resources, such as multiple graphics processing units in one computer, multiple servers in a data center, or smartphones distributed over a city [9, 4]. If one can efficiently utilize distributed computation resources, classifiers (or regressors) can be trained in a shorter time period compared to utilizing one machine with single-thread computation.
Several researchers have proposed algorithms for distributed machine learning [15, 11, 6, 3, 5, 12]. According to these past studies, we can categorize distributed machine learning techniques into (a) centralized [15, 11, 6], and (b) decentralized settings [3, 5, 12].
Centralized algorithms assume the presence of a centralized server to which all nodes can connect. Generally, centralized algorithms construct more accurate classifiers than decentralized algorithms because the centralized server allows such algorithms to exploit the conditions of all computation nodes (e.g., the number of datasets, computational capabilities, and network status), facilitating the construction of an optimal learning strategy. However, applications of centralized algorithms are restricted to specific situations, such as federated learning [6, 7, 8], because all nodes must communicate with the centralized server. In contrast, decentralized algorithms enable such systems to construct a classifier in a distributed manner over a local wireless network, thereby facilitating novel applications of machine learning, such as image recognition in cooperative autonomous driving [14] and the detection of white space in spectrum sharing systems [1], without any cloud or edge computing servers. To explore further applications of distributed machine learning, this paper studies decentralized learning algorithms on wireless systems.
I-A Problem of Decentralized Learning in Wireless Systems
There is a crucial problem that must be considered to realize decentralized machine learning on wireless network systems. Existing algorithms for decentralized machine learning [3, 5, 12] mainly consist of the following two steps: (1) updating local models and (2) communicating between nodes. In the local model updating procedure, each computation node refines the model parameters of the classifier to be trained utilizing its own dataset (the specific training samples at each computation node). During the communication procedure, the updated model parameters are shared between neighboring nodes. These procedures are performed iteratively until the training loss converges. However, the communication procedure tends to be a bottleneck in terms of runtime performance because the number of model parameters that must be communicated is often enormous (e.g., VGG16 [10] requires more than 100 million model parameters). Furthermore, in wireless systems, the communication time required to guarantee successful communication tends to increase owing to path loss and multipath fading [2]. These factors significantly deteriorate the runtime performance of machine learning.
This problem is challenging, but can be addressed by utilizing either lower or higher transmission rates. Let us consider a situation where the transmitter can control its communication coverage by adjusting the transmission rate under a given transmission power and bandwidth (e.g., Wi-Fi with adaptive modulation techniques). In general, high-rate transmission can easily reduce communication time. However, this strategy reduces communication coverage, meaning the network topology becomes sparse. Some theoretical works [5, 12] have argued that the training accuracy of decentralized algorithms deteriorates in a sparse network topology. In contrast, low-rate transmission makes network topologies denser, meaning training accuracy versus the number of iterations can be improved. However, runtime performance deteriorates because total communication time increases. We summarize these relationships in Figs. 1(a) and (b), and the tradeoff between training accuracy and runtime performance raised by the differences in network topology in Fig. 1(c).
Therefore, it is important to develop a communication strategy for decentralized learning in wireless systems that improves runtime performance.
I-B Objective of This Paper
In this paper, we analyze the performance of decentralized learning by considering the influence of network topology in wireless systems, and we propose a novel communication strategy for improving runtime performance. We specifically focus on decentralized parallel stochastic gradient descent (DPSGD) [5], one of the state-of-the-art algorithms for decentralized learning, as the reference algorithm for our discussion. Wang et al. [12] formulated a relationship between network density and the performance of DPSGD. They analyzed the performance of DPSGD from the perspective of the average squared gradient norm of a learning model, which directly affects training accuracy. Based on this analysis, we first discuss when and how network density affects the runtime performance of DPSGD. This discussion yields the following two insights: a dense network topology (1) does not significantly improve the training accuracy of DPSGD compared to a sparse one, and (2) strongly degrades the runtime performance because it generally requires low-rate transmission. These insights suggest that the runtime performance of DPSGD can be improved by high-rate transmission, which makes the network topology relatively sparse but shortens the communication time between nodes (e.g., Fig. 1(c)). Motivated by these insights, we propose a communication strategy in which each node performs high-rate transmissions whenever possible. In this method, each node adapts its transmission rate such that the time required for model sharing is minimized under a constraint on network topology density. By increasing the transmission rate without making the network less dense than necessary, this method improves runtime performance while maintaining training accuracy. To the best of our knowledge, this work is the first attempt to incorporate the characteristics of wireless channels into the DPSGD algorithm in wireless systems.
II System Model
II-A Overview of DPSGD
Consider a situation in which N nodes are randomly deployed in a two-dimensional area. The i-th node stores independent and identically distributed datasets that follow the probability distribution D_i, and has a d-dimensional model parameter vector of the classifier (or regressor) whose data size is M [bits]. We assume that each node location has been preliminarily shared with all nodes via periodic short-length communication (e.g., beaconing). Additionally, we assume that all nodes can be roughly (ms-order) synchronized once the aforementioned periodic short-length communication or a global positioning system is deployed.
The objective of distributed learning in a decentralized setting is to optimize the model vector x. According to [5], this objective can be modeled as

\min_{x \in \mathbb{R}^d} f(x) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{\xi \sim D_i} \left[ F_i(x; \xi) \right],   (1)

where \xi denotes a data sample and F_i(x; \xi) represents the loss function for the i-th node. After the optimization, each node can utilize x as its classifier. Note that f(x) is not directly calculated during the optimization.
Under the conditions described above, decentralized learning can be performed utilizing a DPSGD optimizer. DPSGD iteratively performs the following procedure until the value of the loss function is minimized: (1) updating the model parameter at each node based on its dataset with the learning rate γ, (2) sharing updated model parameters with connected neighboring nodes, and (3) averaging the received and own model parameters. The pseudocode of this algorithm is summarized in Algorithm 1. In Algorithm 1, we denote the set of model vectors at the k-th iteration as {x_k^{(i)}}_{i=1}^{N}.
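The three-step loop above can be sketched on a toy problem as follows. This is a minimal illustration only: the quadratic local losses, the four-node ring topology, and the uniform averaging weights are our assumptions, not the paper's setup.

```python
import numpy as np

def dpsgd(W, targets, lr=0.1, iters=200, seed=0):
    """Minimal DPSGD loop: each node i minimizes 0.5 * ||x - t_i||^2, so the
    global optimum of the averaged objective is mean(t_i)."""
    rng = np.random.default_rng(seed)
    n, d = targets.shape
    x = rng.normal(size=(n, d))       # local model at each node
    for _ in range(iters):
        grads = x - targets           # step (1): local gradient update
        x = x - lr * grads            #           (deterministic, for simplicity)
        x = W @ x                     # steps (2)+(3): share and average models
    return x

# Four nodes on a ring; uniform averaging weights with self-loops.
W = np.array([[1/3, 1/3, 0.0, 1/3],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [1/3, 0.0, 1/3, 1/3]])
targets = np.array([[0.0], [1.0], [2.0], [3.0]])
x = dpsgd(W, targets)                 # all nodes end up near mean(t_i) = 1.5
```

Because W is doubly stochastic, the averaging step preserves the network-wide mean, so the nodes converge toward the optimum of the global objective in Eq. (1) while staying close to consensus.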
II-B Radio Propagation Model and Protocol
In wireless systems, the communication coverage is strongly affected by the relationships between the radio propagation characteristics, bandwidth, transmission rate, etc. In order to discuss the influence of these relationships on the performance of DPSGD, we consider a typical wireless channel.
Because the communication coverage is mainly determined by the path loss, we model the received signal power at a distance d [m] as P_rx(d) = P_tx − 10α log_{10} d [dBm], where P_tx is the transmission power in dBm and α is the path loss index. We assume that all nodes transmit with the same P_tx and the bandwidth B. Under these conditions, the channel capacity at distance d can be expressed as

C(d) = B \log_2 \left( 1 + \mathrm{SNR}(d) \right),   (2)

where SNR(d) = 10^{(P_rx(d) − N_0)/10} is the signal-to-noise ratio and N_0 is the noise floor in dBm. Additionally, we define the channel-capacity matrix C whose (i, j)-th element C_{i,j} represents the channel capacity between the i-th and the j-th nodes.
This paper assumes situations where each node can control its communication coverage by adjusting its transmission rate. In such situations, we consider that each node broadcasts its own updated model at a transmission rate R_i [bps] (Step 3 in Algorithm 1). If R_i ≤ C_{i,j}, the j-th node can accurately receive the model parameters from the i-th node. We assume that α and N_0 are constant over the area and that they can be given as prior knowledge to all nodes. Additionally, to avoid communication collisions between the nodes in Step 3 of Algorithm 1, the nodes share the spectrum based on time division multiplexing; the model parameter is broadcast by the nodes in consecutive order, starting from the terminal on the west side of the target area. With these assumptions, the communication time spent in one iteration is given by
T_\mathrm{comm} = \sum_{i=1}^{N} \frac{M}{R_i}.   (3)
If the transmission power and bandwidth are constrained, the transmission rate must be reduced to expand communication coverage (i.e., to make the network dense). This fact indicates that there is a tradeoff between network density and communication time when sharing model parameters. Therefore, even if the training accuracy of DPSGD for a given number of iterations can be improved, runtime performance would deteriorate.
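Under the path loss and capacity model above, the capacity matrix and the per-iteration communication time of Eq. (3) can be sketched as follows. All radio parameters here (transmission power, path loss index, bandwidth, noise floor) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def capacity_matrix(pos, p_tx_dbm=10.0, alpha=3.0, bw_hz=20e6, noise_dbm=-90.0):
    """C[i, j] = B * log2(1 + SNR(d_ij)), with the log-distance path loss
    model P_rx(d) = P_tx - 10 * alpha * log10(d) [dBm]."""
    pos = np.asarray(pos, dtype=float)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-link
    p_rx_dbm = p_tx_dbm - 10.0 * alpha * np.log10(d)
    snr = 10.0 ** ((p_rx_dbm - noise_dbm) / 10.0)
    return bw_hz * np.log2(1.0 + snr)

def comm_time_per_iteration(rates_bps, model_bits):
    """Eq. (3): with time-division access, broadcasts happen sequentially,
    so per-node times M / R_i simply add up."""
    return sum(model_bits / r for r in rates_bps)

# Three nodes on a line, 100 m apart: capacity falls with distance.
pos = np.array([[0.0, 0.0], [100.0, 0.0], [200.0, 0.0]])
C = capacity_matrix(pos)
```

The monotone decrease of C(d) with distance is exactly the coverage/rate tradeoff described above: raising R_i shrinks the set of nodes j with R_i ≤ C[i, j].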
Note that our discussion can be extended to fading channels without loss of generality of our claim. This can be achieved by considering the following condition for successful communications: R_i ≤ C_{i,j} − δ, where δ is a constant scalar that behaves as a margin of uncertainty for fading channels. This condition enables each node to set a transmission rate that achieves accurate communication.
II-C Modeling DPSGD using Averaging Matrix
Previous studies [5, 12] have utilized an averaging matrix W ∈ R^{N×N}, which is automatically determined based on the network topology, for the analysis of DPSGD. This averaging matrix satisfies W 1_N = 1_N, where 1_N is an N-dimensional column vector of ones. Each element W_{i,j} can be calculated by

(4)

where e_{i,j} ∈ {0, 1} represents the connectivity between the i-th and the j-th nodes.
The use of W allows us to analyze the influence of network topology on DPSGD. The model updating rule at the k-th iteration (i.e., Step 5 in Algorithm 1) can be redefined as

x_{k+1}^{(i)} = \sum_{j=1}^{N} W_{i,j}\, x_k^{(j)} - \gamma\, g\left(x_k^{(i)}; \xi_k^{(i)}\right),   (5)

where g(·; ξ) denotes the stochastic gradient of the local loss.
In this paper, we also utilize Eq. (5) for analyzing the influence of network topology on the runtime performance of DPSGD.
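A minimal sketch of one way to realize Eq. (4) and the update of Eq. (5). Eq. (4) leaves the exact weighting open beyond W 1_N = 1_N; we use Metropolis-Hastings weights here, a common symmetric, doubly stochastic choice, as our illustrative assumption (the paper's exact rule may differ).

```python
import numpy as np

def averaging_matrix(adj):
    """Build W from connectivity adj[i][j] = 1 iff nodes i and j are linked,
    using Metropolis-Hastings weights (symmetric, doubly stochastic)."""
    A = np.asarray(adj, dtype=float)
    np.fill_diagonal(A, 1.0)                  # each node keeps its own model
    deg = A.sum(axis=1)
    n = len(A)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j]:
                W[i, j] = 1.0 / max(deg[i], deg[j])
    W += np.diag(1.0 - W.sum(axis=1))         # diagonal absorbs the remainder
    return W

# Ring of four nodes, then one update in the form of Eq. (5):
# x_{k+1}^{(i)} = sum_j W_ij x_k^{(j)} - gamma * g(x_k^{(i)})
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
W = averaging_matrix(adj)
x = np.arange(4.0).reshape(4, 1)              # stacked local models
grads = np.zeros((4, 1))                      # zero gradients: pure averaging
x_next = W @ x - 0.1 * grads
```

Because W is doubly stochastic, the averaging step in Eq. (5) preserves the network-wide mean of the local models, which is the property the convergence analysis relies on.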
III Network-Density-Controlled DPSGD
III-A Effects of Network Density
We briefly discuss how the density of a network topology influences the training accuracy of DPSGD. Wang et al. [12] analyzed the performance of DPSGD through a convergence analysis of the expected value of the average squared gradient norm, where K is the number of optimization iterations of DPSGD. Because this expected value is directly related to training accuracy, we refer to this value as “training accuracy” throughout this paper.
According to [12], the training accuracy of DPSGD decreases as the parameter ζ = max(|λ_2|, |λ_N|) (λ_2 and λ_N are the 2nd and N-th largest eigenvalues of W, respectively) increases. The parameter ζ approaches zero as the number of nonzero elements in W increases. This behavior suggests that the value of ζ represents the sparseness of a network topology, because a denser network topology increases the number of nonzero elements in W.
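The density parameter ζ discussed above can be computed directly from the eigenvalue spectrum of a symmetric averaging matrix; a small sketch:

```python
import numpy as np

def zeta(W):
    """zeta = max(|lambda_2|, |lambda_N|), with the eigenvalues of the
    symmetric matrix W sorted in descending order (lambda_1 = 1).
    zeta -> 0 for dense topologies and grows toward 1 for sparse ones."""
    eig = np.sort(np.linalg.eigvalsh(np.asarray(W, dtype=float)))[::-1]
    return max(abs(eig[1]), abs(eig[-1]))

# Complete graph: W = (1/N) * ones has eigenvalues {1, 0, ..., 0} -> zeta = 0.
W_dense = np.full((4, 4), 0.25)
# Ring of 4 with uniform weights 1/3: eigenvalues {1, 1/3, 1/3, -1/3} -> 1/3.
W_ring = np.array([[1/3, 1/3, 0.0, 1/3],
                   [1/3, 1/3, 1/3, 0.0],
                   [0.0, 1/3, 1/3, 1/3],
                   [1/3, 0.0, 1/3, 1/3]])
```

This matches the intuition in the text: removing links (here, going from the complete graph to the ring) moves ζ away from zero.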
To derive theoretical guarantees on the performance of DPSGD, the authors of [12] introduced the following assumptions:

(Smoothness): \|\nabla F_i(x) - \nabla F_i(y)\| \le L \|x - y\| (L is the Lipschitz constant of the loss function F_i).

(Lower bounded): F(x) \ge F_{\mathrm{inf}} for all x.

(Unbiased gradients): \mathbb{E}_{\xi}[g(x; \xi)] = \nabla F(x) (g is the stochastic gradient of F).

(Bounded variance): \mathbb{E}_{\xi}\|g(x; \xi) - \nabla F(x)\|^2 \le \beta \|\nabla F(x)\|^2 + \sigma^2 (\beta and \sigma^2 are nonnegative constants that are inversely proportional to the minibatch size).

(Averaging matrix): W 1_N = 1_N and \zeta = \max(|\lambda_2|, |\lambda_N|) < 1.

(Learning rate): the learning rate \gamma should satisfy
(6)
Under these assumptions, when all local models are initialized with the same vector x_0, the average squared gradient norm after K iterations is bounded by:
(7) 
This inequality indicates that the upper bound of the average squared gradient norm consists of the following two factors. The first (term (1) in Eq. (7)) is the component obtained from fully-synchronized SGD (i.e., ζ = 0). The second (term (2) in Eq. (7)) is a component generated by network errors, which is influenced by the density of the network topology. Eq. (7) implies that the training accuracy is strongly affected by ζ when ζ and K are large. Therefore, we evaluated the effects of these parameters on the training accuracy.
Figs. 2(a)-(c) plot three numerical examples of Eq. (7) under different parameter settings. To highlight the influence of the network topology on the training accuracy of DPSGD, we plot three curves: the total upper bound (the value of the right side of Eq. (7)), the effect of fully-synchronized SGD (the value of term (1) on the right side of Eq. (7)), and the effect of network errors (the value of term (2) on the right side of Eq. (7)). These examples show that as the number of iterations increases, the impact of network density on the training accuracy of DPSGD increases, i.e., the effect of network errors becomes dominant in the upper bound. However, the effect is small when the value of ζ is below a certain threshold: although the effects of network errors become significant when ζ is large, the upper bound remains small over the entire region below this threshold. The effect of the number of nodes N is presented in Fig. 2(d). Although the effect of ζ on the training accuracy increases as N increases, a similar threshold dependence can be observed in this case. These numerical examples suggest that runtime performance can be improved by making a network topology sparser (i.e., by increasing the transmission rates of nodes) without a significant degradation in training accuracy.
III-B Proposed Communication Strategy
As shown earlier, setting a higher transmission rate under a constraint on the network density will improve the runtime performance. Considering the relationships between transmission rate, network density, communication time, and the training accuracy of DPSGD, we propose a novel communication strategy. In this strategy, each node selects a suitable transmission rate R_i prior to initiating DPSGD. Once R_i is determined, each node broadcasts its model vector at the transmission rate R_i. This transmission rate is selected such that communication time is minimized under a constraint with respect to ζ. This strategy can be modeled as
\mathbf{R}^{*} = \mathop{\arg\min}_{\mathbf{R}} \sum_{i=1}^{N} \frac{M}{R_i} \quad \text{subject to} \quad \zeta \le \zeta_{\mathrm{th}},   (8)
where \mathbf{R} = (R_1, \ldots, R_N) denotes the set of transmission rates and ζ_th represents the predetermined maximum value of ζ (which satisfies Eq. (6)). This strategy enables each node to increase its transmission rate R_i, resulting in a sparser network topology. Because the constraint on ζ prevents significant degradation of training accuracy, runtime performance can be improved while maintaining training accuracy.
III-C Solver for Eq. (8)
Eq. (8) should be solved at each node in a decentralized manner. There are several methods for optimizing \mathbf{R} based on the given conditions, such as prior knowledge (i.e., with or without location information) and channel characteristics. This paper considers that both the pre-shared node locations and the path loss characteristics, i.e., the received signal power P_rx(d), the bandwidth B, and the noise floor N_0, can be obtained beforehand. With this knowledge, each node can construct the channel-capacity matrix C independently. This matrix enables the i-th node to estimate the transmission rate required to guarantee successful communication with the j-th node. Thus, Eq. (8) can be expressed as a combinatorial problem. In this paper, each node solves this problem utilizing a brute-force search. We summarize these procedures in Algorithm 2. Even though each node solves this problem in a decentralized manner, all nodes arrive at the same result.
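A minimal sketch of such a brute-force solver. We assume that candidate rates are drawn from the pairwise capacities, that a link i → j exists when R_i ≤ C[i, j] (kept only if it holds in both directions), and Metropolis weights for W; all three are modeling assumptions on our part, not the paper's exact construction.

```python
import itertools
import numpy as np

def metropolis(adj):
    """Symmetric doubly stochastic W from a boolean adjacency matrix."""
    A = adj.astype(float)
    np.fill_diagonal(A, 1.0)
    deg = A.sum(axis=1)
    n = len(A)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j]:
                W[i, j] = 1.0 / max(deg[i], deg[j])
    W += np.diag(1.0 - W.sum(axis=1))
    return W

def zeta(W):
    eig = np.sort(np.linalg.eigvalsh(W))[::-1]
    return max(abs(eig[1]), abs(eig[-1]))

def select_rates(C, model_bits, zeta_th):
    """Minimize sum_i M / R_i subject to zeta(W) <= zeta_th (Eq. (8))."""
    n = C.shape[0]
    candidates = [sorted({C[i, j] for j in range(n) if j != i}) for i in range(n)]
    best = None
    for rates in itertools.product(*candidates):
        adj = C >= np.asarray(rates)[:, None]   # link i -> j iff R_i <= C[i, j]
        adj = adj & adj.T                       # keep bidirectional links only
        if zeta(metropolis(adj)) <= zeta_th:    # disconnected -> zeta = 1
            t = sum(model_bits / r for r in rates)
            if best is None or t < best[0]:
                best = (t, rates)
    return best

# Toy 3-node example (capacities in bits/s, symmetric; model of 8 bits).
C = np.array([[0.0, 4.0, 1.0],
              [4.0, 0.0, 2.0],
              [1.0, 2.0, 0.0]])
```

Because every node evaluates the same deterministic search over the same C, all nodes reach the same rate assignment without any coordination, matching the decentralized property claimed above.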
After the optimal rates are determined, each node initiates DPSGD with its optimized transmission rate.
IV Performance Evaluation
We simulated the proposed strategy on a computer employing a multi-core CPU (an AMD Ryzen Threadripper 2970WX with 24 physical cores).
We conducted simulations of a case where six nodes are placed in a 200 m × 200 m area, as shown in Fig. 3(a). We focus on the training accuracy at Node 1.
IV-A Experimental Setup
We evaluated the proposed strategy on an image classification task utilizing the Fashion-MNIST dataset [13], which has been widely used as a benchmark for image classification performance in the machine learning community. This dataset includes 60,000 images for training that have already been categorized into ten different categories, and 10,000 images for testing. Each sample in this dataset is a single-channel, 8-bit image with a resolution of 28 × 28 pixels. In this experiment, we utilized a convolutional neural network (CNN) as the architecture to perform image classification. The details of the CNN are as follows: two convolutional layers (with 10 and 20 channels, respectively, each activated by a rectified linear unit (ReLU) function), two max-pooling layers, and fully-connected layers (320 inputs to 50 units with ReLU activation, followed by 10 output units activated by the softmax function). Additionally, dropout was applied to the second convolutional layer and the first fully-connected layer with a dropout ratio of 0.5. The total number of model parameters for the CNN was 21,840, and its data size was M = 21,840 × 32 = 698,880 bits (32-bit floating-point numbers). Each node broadcasted data to neighboring nodes to train the CNN utilizing DPSGD. To train the CNN with the DPSGD optimizer, we shuffled all of the training samples and then distributed them equally to the six computation nodes. Therefore, each node was given 10,000 independently and identically distributed training samples. Additionally, we set the batch size for DPSGD optimization to 1, meaning the number of iterations per epoch was 10,000, because each node was given 10,000 training samples.
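As a sanity check on the quoted model size, the parameter count can be reproduced under the assumption of 5 × 5 convolution kernels (the kernel size is not stated in the text, but with it the layer sizes reproduce the quoted totals exactly):

```python
def conv_params(c_in, c_out, k=5):
    """Conv layer: k*k weights per input channel per output channel + biases."""
    return c_out * (c_in * k * k + 1)

def fc_params(n_in, n_out):
    """Fully-connected layer: weight matrix + biases."""
    return n_out * n_in + n_out

total = (conv_params(1, 10)       # conv1: 1 -> 10 channels  (260 params)
         + conv_params(10, 20)    # conv2: 10 -> 20 channels (5,020 params)
         + fc_params(320, 50)     # fc1: 320 -> 50 units     (16,050 params)
         + fc_params(50, 10))     # fc2: 50 -> 10 units      (510 params)
model_bits = total * 32           # 32-bit floating point numbers
print(total, model_bits)          # -> 21840 698880
```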
We executed six processes in parallel to train the CNN with DPSGD on the computer, assigning one physical core to each process. The runtime of the calculation portion of DPSGD was measured as the real elapsed time on the computer, and the communication time was calculated by Eq. (3). Note that we fixed the random seed at the start of the simulation to ensure reproducibility.
IV-B Runtime Performance Results
In this section, we discuss the experimental results for the proposed strategy in terms of runtime performance.
We analyzed the performance of the proposed strategy by varying the path loss index α because communication coverage, which is a key factor influencing runtime performance, is strongly affected by path loss. The path loss index is an environment-dependent factor that is determined empirically. It tends to take large values in environments with many obstacles, e.g., indoor and urban channels [2].
Fig. 3(b) presents the dependence of the training accuracy on the number of epochs. We highlight examples of the obtained training accuracy values at 100 epochs: 0.841 (ζ_th = 0.1), 0.833 (ζ_th = 0.3), and 0.821 (ζ_th = 0.8). These results indicate that the training accuracy decreases slightly as ζ_th increases, which agrees with the theoretical and numerical evaluations of the performance of DPSGD in Fig. 2 and Eq. (7). Note that this accuracy-versus-epoch performance does not depend on α because the proposed method always constructs the same network topology for a given ζ_th and node placement, regardless of α.
Figs. 3(c)-(f) present the runtime performance for different values of α. When α is large, a greater value of ζ_th (i.e., a higher transmission rate and a sparser network topology) significantly improves runtime performance, although it degrades the training accuracy versus epoch performance. We highlight some comparisons of the real elapsed time required for the training accuracy to exceed 0.8 in the case of the largest path loss index. The required times when setting ζ_th to 0.1, 0.3, and 0.8 were approximately 270, 132, and 34 minutes, respectively. This comparison shows that the runtime performance with ζ_th = 0.8 is approximately 3.9 times faster than that with ζ_th = 0.3, and 8.0 times faster than that with ζ_th = 0.1. Therefore, we contend that the runtime performance can be improved significantly by setting ζ_th to a large value (i.e., a high transmission rate) when the path loss index is large.
These results suggest that high-rate transmissions with a sparse network topology will facilitate the development of efficient decentralized machine learning, especially in situations such as indoor or urban channels.
V Conclusion
We proposed a novel communication strategy for DPSGD on wireless systems that incorporates the influence of the network topology. We found that the influence of network density on the training accuracy of DPSGD is relatively small. Based on this finding, we designed a communication strategy for DPSGD in which each node communicates at a high transmission rate under a constraint on the network density. This strategy improves the runtime performance of DPSGD while retaining high training accuracy.
Numerical evaluations showed that the network topology (transmission rate) strongly influences the runtime performance, especially in situations where the path loss index is large. We conclude that the influence of the network topology is a crucial, non-negligible factor for performing decentralized learning in wireless systems effectively, especially in indoor or urban scenarios.
In future work, we will develop sophisticated optimization methods for the proposed strategy that can be applied to more complex situations such as those where location information is not available.
Acknowledgements
This research was funded by The Telecommunications Advancement Foundation and the Japan Society for the Promotion of Science through KAKENHI under Grant 19K14988.
Footnotes
 Simultaneous multithreading (SMT) was disabled.
References
[1] M. Bkassiny, Y. Li, and S. K. Jayaweera (2013) A survey on machine-learning techniques in cognitive radios. IEEE Commun. Surveys and Tuts. 15(3), pp. 1136–1159.
[2] A. Goldsmith (2005) Wireless Communications. Cambridge University Press.
[3] A. Koloskova, S. U. Stich, and M. Jaggi (2019) Decentralized stochastic optimization and gossip algorithms with compressed communication. arXiv:1902.00340.
[4] H. Li, K. Ota, and M. Dong (2018) Learning IoT in edge: deep learning for the Internet of Things with edge computing. IEEE Netw. 32(1), pp. 96–101.
[5] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu (2017) Can decentralized algorithms outperform centralized algorithms? A case study for decentralized parallel stochastic gradient descent. In Advances in NeurIPS 30, pp. 5330–5340.
[6] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Agüera y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Proc. AISTATS 2017.
[7] S. Niknam, H. S. Dhillon, and J. H. Reed (2019) Federated learning for wireless communications: motivation, opportunities and challenges. arXiv:1908.06847.
[8] T. Nishio and R. Yonetani (2019) Client selection for federated learning with heterogeneous resources in mobile edge. In Proc. IEEE ICC 2019.
[9] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu (2016) Edge computing: vision and challenges. IEEE Internet of Things J. 3(5), pp. 637–646.
[10] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
[11] S. U. Stich (2019) Local SGD converges fast and communicates little. In Proc. ICLR 2019.
[12] J. Wang and G. Joshi (2018) Cooperative SGD: a unified framework for the design and analysis of communication-efficient SGD algorithms. arXiv:1808.07576.
[13] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747.
[14] H. Ye, L. Liang, G. Y. Li, J. Kim, L. Lu, and M. Wu (2018) Machine learning for vehicular networks: recent advances and application examples. IEEE Veh. Technol. Mag. 13(2), pp. 94–101.
[15] M. Zinkevich, M. Weimer, L. Li, and A. J. Smola (2010) Parallelized stochastic gradient descent. In Advances in NeurIPS 23, pp. 2595–2603.