Collective Learning
Abstract
In this paper, we introduce the concept of collective learning (CL), which exploits the notion of collective intelligence in the field of distributed semi-supervised learning. The proposed framework draws inspiration from the learning behavior of human beings, who alternate phases involving collaboration, confrontation and exchange of views with others with phases consisting of studying and learning on their own. In this regard, CL comprises two main phases: a self-training phase, in which learning is performed on local private (labeled) data only, and a collective training phase, in which proxy-labels are assigned to shared (unlabeled) data by means of a consensus-based algorithm. In the considered framework, heterogeneous systems can be connected over the same network, each with different computational capabilities and resources, and every agent in the network may take advantage of the cooperation, eventually reaching higher performance than it could reach on its own. An extensive experimental campaign on an image classification problem highlights the properties of CL by analyzing the performance achieved by the cooperating agents.
1 Introduction
The notion of collective intelligence was first introduced in (engelbart1962augmenting) and popularized in the sociological field by Pierre Lévy in (levy1997collective). In Lévy's words, collective intelligence “is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills”. Moreover, “the basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities”.
In this paper, we aim to exploit some concepts borrowed from the notion of collective intelligence in a distributed machine learning scenario. In fact, by cooperating with each other, machines may exhibit higher performance than they could obtain by learning on their own. We call this framework collective learning (CL).
Distributed systems
The learning framework we want to address in CL is that of semi-supervised learning. In particular, we consider problems in which private data at each node are labeled, while shared (and cloud) data are unlabeled. This captures a key challenge in today’s learning problems. In fact, while unlabeled data can be easy to retrieve, labeled data are often expensive to obtain (both in terms of time and money) and can be unshareable (due, e.g., to privacy restrictions or trade secrets). Thus, one typically has few local labeled samples and a huge number of (globally available) unlabeled ones. Hybrid problems in which shared labeled and private unlabeled data are also available can be easily included in the proposed framework.
In order to perform CL in the above setup, we propose an algorithmic idea that is now briefly described. First of all, in order to take advantage of the possible peculiarities and heterogeneity of the agents in the network, each agent can use a custom architecture for its learning function. The algorithm starts with a preliminary phase, called self-training, in which agents independently train their local learning functions on their private labeled data. Then, the algorithm proceeds with the collective training phase, by iterating through the shared (unlabeled) set. For each unlabeled datum, each agent makes a prediction of the corresponding label. Then, by using a weighted average of the predictions of its neighbors (as in consensus-based algorithms), it produces a local proxy-label for the current datum and uses such a label to train the local learning function. Weights for the predictions coming from the neighbors are assigned by evaluating the performance on local validation sets. During the collective training phase, the local labeled datasets are revisited from time to time in order to give more importance to the correctly labeled local data.
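The two phases just described can be sketched in code. The following is a minimal, illustrative skeleton; all names (`Agent`, `update`, `predict`, the consensus `weights`, the `lbl` operator) are placeholders for exposition and not the actual implementation used in the experiments:

```python
# Minimal sketch of the collective learning (CL) procedure: a self-training
# phase on private labeled data, followed by a collective training phase in
# which proxy-labels on shared data are built from weighted neighbor predictions.
from dataclasses import dataclass, field

@dataclass
class Agent:
    model: dict = field(default_factory=dict)  # stands in for learnable parameters

    def update(self, x, y):
        # one training step of the local learning function on the pair (x, y)
        self.model[x] = y

    def predict(self, x):
        # prediction of the local learning function on input x
        return self.model.get(x, 0.0)

def collective_learning(agents, private_sets, shared_set, weights, lbl):
    # phase 1 (self-training): each agent learns from its private labeled data only
    for agent, data in zip(agents, private_sets):
        for x, y in data:
            agent.update(x, y)
    # phase 2 (collective training): proxy-labels via consensus on shared data
    for x in shared_set:
        preds = [a.predict(x) for a in agents]
        for i, agent in enumerate(agents):
            # weighted average of neighbors' predictions, converted into a label
            avg = sum(weights[i][j] * preds[j] for j in range(len(agents)))
            agent.update(x, lbl(avg))
```

In the actual framework, of course, each agent only sees the predictions of its in-neighbors and the weights change over time; the fully connected, static `weights` matrix above is only for illustration.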
We emphasize that addressing the theoretical properties of the proposed algorithm is beyond the scope of this paper and will be the subject of future investigation. Rather, in this work, we present the CL framework for distributed semi-supervised learning, and we provide some experimental results in order to highlight the features and the potential of the proposed algorithm.
The paper is organized as follows. The relevant literature for CL is reviewed in the next section. Then, the problem setup is formalized and the proposed CL algorithm is presented in detail. Finally, an extensive numerical analysis is performed on an image classification problem to evaluate the performance of CL.
2 Related work
The literature related to this paper can be divided into two main groups: works addressing distributed systems and those involving widely known machine learning techniques that are strictly related to CL.
A vast literature has been produced on distributed systems in different fields, including computer science, control, estimation, cooperative robotics and learning. Many problems arising in these fields can be cast as optimization problems and need to be solved in a distributed fashion via tailored algorithms. Many of them are based on consensus protocols, which allow agreement to be reached in multi-agent systems (olfati2007consensus) and have been widely studied under various network structures and communication protocols (bullo2009distributed; kar2009distributed; garin2010survey; liu2011consensus; kia2015dynamic). On the optimization side, depending on the nature of the optimization problem to be solved, various distributed algorithms have been developed. Convex problems have been studied within a very large number of frameworks (boyd2006randomized; nedic2009distributed; zhu2012distributed; ram2010distributed; nedic2010constrained; wei2012distributed; farina2019randomized), while nonconvex problems were originally addressed via the distributed stochastic gradient descent (tsitsiklis1986distributed) and have received recent attention in (bianchi2013convergence; di2016next; tatarenko2017non; notarnicola2018distributed; farina2019distributed). In this paper, we consider a different setup with respect to the one usually found in the above distributed optimization algorithms. In fact, each agent has its own learning function, and hence a local optimization variable that is not related to those of the other agents. Thus, there is no explicit coupling in the optimization problem. As will be shown in the next sections, the collective training phase of CL relies heavily on consensus algorithms, but agreement is sought on data and not on decision variables.
Other relevant algorithms and frameworks specifically designed for learning with non-centralized systems include the recent works on distributed learning from constraints (farina2019LFC), federated learning (konevcny2015federated; mcmahan2016communication; konevcny2016federated; smith2017federated) and many other frameworks (dean2012large; low2012distributed; kraska2013mlbase; li2014scaling; chen2015mxnet; meng2016mllib; chen2016revisiting). Except for (farina2019LFC) and some papers on federated learning, however, most of these works aim at data/model distribution and parallel computation. They usually do not deal with fully distributed systems, because a central server is required to collect and compute the required parameters.
Machine learning techniques related to CL are, mainly, those involving proxy-labeling operations on unsupervised data (in semi-supervised learning scenarios). In fact, there exist many techniques in which fictitious labels are associated with unsupervised data, based on the output of one (or more) models previously trained on supervised data only. Co-training (blum1998combining; nigam2000analyzing; chen2011co) exploits two (or more) views of the data, i.e., different feature sets representing the same data, in order to let models produce labeled data for each other. Similarly, democratic co-learning (zhou2004democratic) exploits different training algorithms on the same views, by leveraging the fact that different learning algorithms have different inductive biases. Labels on unsupervised data are assigned by majority voting. Tri-training (zhou2005tri) is similar to democratic co-learning, but only three independently trained models are used. In self-training (mcclosky2006effective; rosenberg2005semi) and pseudo-labeling (wu2006fuzzy; lee2013pseudo) a single model is first trained on supervised data; it then assigns labels to unsupervised data and uses them for training. Moreover, strictly related to this work are the concepts of ensemble learning (tumer1996error; dietterich2000ensemble; wang2003mining; rokach2010ensemble; deng2014ensemble), in which an ensemble of models is used to make better predictions, transfer learning (bengio2012deep; weiss2016survey) and distillation (hinton2015distilling), in which models are trained by using other models, as well as learning with ladder networks (rasmus2015semi) and with noisy labels (natarajan2013learning; liu2016classification; han2018co).
To sum up, we point out that this paper utilizes some of the above concepts both from distributed optimization and machine learning. In particular, we exploit consensus protocols and proxy labeling techniques in order to produce collective intelligence from networked machines.
3 Problem setup
In this section the considered problem setup is presented. First, we describe the structure of the network over which the agents communicate. Then, the addressed distributed learning setup is described.
3.1 Communication network structure
We consider a network composed of $N$ agents, which is modeled as a time-varying directed graph $\mathcal{G}_t = (\mathcal{V}, \mathcal{E}_t, W_t)$, where $\mathcal{V} = \{1, \dots, N\}$ is the set of agents, $\mathcal{E}_t$ is the set of directed edges connecting the agents at time $t$ and $W_t = [w_{ij}^t]$ is the weighted adjacency matrix associated to $\mathcal{E}_t$, the elements of which satisfy

$w_{ij}^t \geq 0$ for all $i, j$,

$w_{ij}^t > 0$ if and only if $(j, i) \in \mathcal{E}_t$ or $j = i$,

$\sum_{j=1}^{N} w_{ij}^t = 1$ (i.e., $W_t$ is row stochastic),

for all $i$. We assume the time-varying graph is jointly strongly connected, i.e., there exists a $Q > 0$ such that the graph $(\mathcal{V}, \bigcup_{\tau = t}^{t + Q - 1} \mathcal{E}_\tau)$ is strongly connected for all $t \geq 0$ (see Figure 1 for a graphical representation). We denote by $\mathcal{N}_i^t$ the set of in-neighbors of node $i$ at iteration $t$ (including node $i$ itself), i.e., $\mathcal{N}_i^t = \{j : (j, i) \in \mathcal{E}_t\} \cup \{i\}$. Similarly, we define the set of out-neighbors of node $i$ at time $t$ as $\{j : (i, j) \in \mathcal{E}_t\}$. Joint strong connectivity of the graph sequence is a typical assumption and is needed to guarantee the spread of information among all the agents.
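As a concrete illustration of the three conditions above, the snippet below checks whether a given weight matrix is a valid adjacency matrix for a given edge set. The convention used here (a positive weight $w_{ij}$ corresponds to an edge from $j$ to $i$, plus a self-loop on each node) is an assumption of this sketch:

```python
# Check the adjacency-matrix conditions: nonnegative entries, positive weight
# iff the corresponding (in-)edge or self-loop exists, and row stochasticity.
import numpy as np

def is_valid_adjacency(W, edges, n):
    """W: n-by-n weight matrix; edges: set of directed edges (j, i), j -> i."""
    W = np.asarray(W, dtype=float)
    if (W < 0).any():
        return False                            # w_ij >= 0 for all i, j
    support = {(i, i) for i in range(n)} | set(edges)
    for i in range(n):
        for j in range(n):
            if (W[i, j] > 0) != ((j, i) in support):
                return False                    # w_ij > 0 iff (j, i) is an edge
    return bool(np.allclose(W.sum(axis=1), 1))  # rows sum to one (row stochastic)
```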
3.2 Learning setup
We consider a semi-supervised learning scenario. Each agent $i$ is equipped with a set of private labeled data points $\mathcal{D}_i = \{(x_i^k, y_i^k)\}_{k=1}^{m_i}$, where $x_i^k \in \mathbb{R}^d$ is the $k$-th datum of node $i$ (with $d$ being the dimension of the input space), and $y_i^k$ the corresponding label. The set $\mathcal{D}_i$ is divided into a training set and a validation set. The training set consists of the first $m_i^{\mathrm{tr}}$ samples and is defined as $\mathcal{T}_i = \{(x_i^k, y_i^k)\}_{k=1}^{m_i^{\mathrm{tr}}}$, while the validation set is defined as $\mathcal{V}_i = \{(x_i^k, y_i^k)\}_{k=m_i^{\mathrm{tr}}+1}^{m_i}$. Besides, all agents have access to a shared dataset consisting of $M$ unlabeled data, $\mathcal{S} = \{s^k\}_{k=1}^{M}$, with $M \gg m_i$. The goal of each agent is to learn a certain local function $f_i(\cdot; \theta_i)$ (representing a local classifier, regressor, etc.), where we denote by $\theta_i$ the learnable parameters of $f_i$ and by $x$ a generic input datum. Notice that we are not making any assumption on the local functions $f_i$. In fact, in general, $f_i \neq f_j$ and $\theta_i \neq \theta_j$ for any $i \neq j$. A graphical representation of the considered learning setup is given in Figure 2.
In the experiments, we will use as a metric to evaluate the actual performance of the agents the accuracy computed on a shared test set. Such a dataset is intended for test purposes only and cannot be used to train the local classifiers.
4 Collective Learning
In this section, we present in detail our algorithmic idea for CL. For the sake of exposition, let us consider a problem in which all agents want to learn the same task through their local functions $f_i$. Multi-task problems can be directly addressed in the same way, at the price of a more involved notation.
Each agent in the network is equipped with a local private learning function $f_i(\cdot; \theta_i)$. The structure of each $f_i$ can be arbitrary and different from one agent to the other. For example, $f_i$ can be a shallow neural network, a deep one with many hidden layers, a CNN and so on. As said, the functions $f_i$ are private to each agent and, consequently, their learnable parameters $\theta_i$ should not be shared.
Collective learning consists of two main phases:

a preliminary phase (referred to as self-training), in which each agent trains its local learning function by using only its private (labeled) data contained in the local training set $\mathcal{T}_i$;

a collective training phase, in which agents collaborate in order for collective intelligence to emerge.
4.1 Self-training
This first preliminary phase does not require any communication in the network, since each agent tries to learn from its private labeled training set $\mathcal{T}_i$. It allows agents to enter the successive collective training phase having already exploited their private supervised data.
Define $\ell_i(\theta_i; (x, y))$ as the loss function associated by agent $i$ to a generic datum $(x, y)$. Then, the optimization problem to be addressed by agent $i$ in this phase is

$$\min_{\theta_i} \sum_{(x, y) \in \mathcal{T}_i} \ell_i(\theta_i; (x, y)). \qquad (1)$$
Usually, such a problem is solved (in the sense that a stationary point is found) by iteratively updating $\theta_i$. Rules for updating $\theta_i$ usually depend on (sub)gradients of the loss or, in stochastic methods, on those computed on batches of data from $\mathcal{T}_i$.
Consider for simplicity the particular case in which a batch consisting of only one datum is used at each iteration. Most of the current state-of-the-art algorithms usable in this setup, from the classical SGD (bottou2010large) to Adagrad (duchi2011adaptive), Adadelta (zeiler2012adadelta) and Adam (kingma2014adam), can be implicitly written as algorithms in which $\theta_i$ is updated by computing

$$\theta_i^{k+1} = \mathcal{U}_i\big(\theta_i^k, (x^k, y^k)\big), \qquad (2)$$

where $\theta_i^0$ is some initial condition and $\mathcal{U}_i$ denotes the implicit update rule given the current estimate of the parameter and the datum chosen at iteration $k$. We leave the update rule implicit since, depending on the architecture of its own classifier, the available computational power and other factors, each agent can choose the most appropriate way to perform a training step on the current datum. As an example, in the classical SGD, the update rule reads $\mathcal{U}_i(\theta, (x, y)) = \theta - \eta \nabla_\theta \ell_i(\theta; (x, y))$, where $\eta > 0$ is a stepsize and $\nabla_\theta$ denotes the gradient operator.
In order to approach a stationary point of problem (1), the procedure in (2) typically needs to be repeated multiple times, i.e., one needs to iterate over the set $\mathcal{T}_i$ multiple times. We call one pass of (2) over $\mathcal{T}_i$ an epoch of the training procedure. Moreover, we denote by $\theta_i^{\mathrm{ST}}$ the value of $\theta_i$ obtained after a self-training phase started from $\theta_i^0$ and carried out for a given number of epochs.
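As an illustration of the self-training phase with the implicit update rule of (2), the sketch below instantiates the rule with plain SGD on a scalar least-squares model. The model, loss and step size are illustrative assumptions, not the architectures used later in the paper:

```python
# Self-training sketch: the generic update rule U_i of (2), instantiated with
# plain SGD on the scalar least-squares loss (theta*x - y)^2.
def sgd_update(theta, datum, lr=0.1):
    x, y = datum
    grad = 2.0 * (theta * x - y) * x   # gradient of (theta*x - y)^2 w.r.t. theta
    return theta - lr * grad

def self_train(theta0, train_set, update=sgd_update, epochs=50):
    theta = theta0
    for _ in range(epochs):            # one pass over the training set = one epoch
        for datum in train_set:
            theta = update(theta, datum)
    return theta
```

Any of the update rules mentioned above (Adagrad, Adadelta, Adam, …) could be plugged in as `update` in place of `sgd_update`, which is exactly the flexibility the implicit rule in (2) is meant to capture.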
We assume that the locally available data at each node are relatively few, so that the performance that can be reached on the test set by solving (1) is intrinsically lower than the one that can be reached by training on a larger and more representative dataset.
4.2 Collective training
This is the main phase of collective learning. It resembles the typical human cooperative behavior that is at the heart of collective intelligence. Algorithmically speaking, this phase exploits the communication among the agents in the network and uses the shared (unlabeled) dataset $\mathcal{S}$.
Learning from shared data
In order to learn from shared (unlabeled) data, agents are asked to produce at each iteration proxy-labels for the points in $\mathcal{S}$. In general, at each iteration, a batch from the set $\mathcal{S}$ is drawn and processed. To fix ideas, consider the case in which, at each iteration $t$, a single sample $s^t$ is drawn from the set $\mathcal{S}$. Each node $i$ produces a prediction for the sample $s^t$, by computing $\hat{y}_i^t = f_i(s^t; \theta_i^t)$, and broadcasts it to its out-neighbors. With the received predictions, each node $i$ produces a proxy-label $y_i^t$ (which we call local collective label) for the datum $s^t$, by converting the weighted average of its own prediction and the ones of its in-neighbors into a label. Finally, it uses $y_i^t$ as the label associated to $s^t$ to update $\theta_i$. Summarizing, for all $t$, each node $i$ draws $s^t$ from $\mathcal{S}$ and then computes
$$\hat{y}_i^t = f_i(s^t; \theta_i^t), \qquad (3)$$

$$y_i^t = \mathrm{lbl}\Big( \sum_{j \in \mathcal{N}_i^t} w_{ij}^t \, \hat{y}_j^t \Big), \qquad (4)$$

$$\theta_i^{t+1} = \mathcal{U}_i\big(\theta_i^t, (s^t, y_i^t)\big), \qquad (5)$$
where we denoted by $\mathrm{lbl}(\cdot)$ the operator converting its argument into a label. For example, in a binary classification problem the lbl operator could be a simple thresholding one, i.e., $\mathrm{lbl}(v) = 1$ if $v \geq 1/2$ and $\mathrm{lbl}(v) = 0$ otherwise.
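A minimal numerical sketch of the proxy-labeling steps (3)–(4) for a single agent follows, treating multi-class predictions as score vectors and using an argmax-based lbl operator (a thresholding operator would play the same role in a binary problem). All values are illustrative:

```python
# Local collective label: weighted average of the in-neighbors' prediction
# vectors, converted into a label via argmax (the "lbl" operator).
import numpy as np

def lbl(avg_prediction):
    # class with the largest averaged score
    return int(np.argmax(avg_prediction))

def local_collective_label(neighbor_preds, weights):
    # neighbor_preds[j]: prediction vector of in-neighbor j; weights[j]: w_ij
    avg = sum(w * np.asarray(p) for w, p in zip(weights, neighbor_preds))
    return lbl(avg)
```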
Note that the labeling procedure adopted in this phase closely resembles human behavior: when unlabeled data are encountered, their labels are guessed by resorting to the opinions of neighboring agents.
Weights computation
Let us now elaborate on the choice of the weights $w_{ij}^t$. Clearly, they must account for the expertise and quality of prediction of each agent with respect to the others. In particular, we use as performance index the accuracy computed on the local validation sets $\mathcal{V}_i$. Let us call $a_i^t$ the accuracy obtained at iteration $t$ by agent $i$ and, in order to possibly accentuate the differences between the nodes, let us define

$$\pi_i^t = (a_i^t)^\gamma, \qquad (6)$$

with $\gamma \geq 0$. Then, the weights of the weighted adjacency matrix are computed as

$$w_{ij}^t = \frac{\pi_j^t}{\sum_{l \in \mathcal{N}_i^t} \pi_l^t}, \quad j \in \mathcal{N}_i^t, \qquad (7)$$

where $w_{ij}^t = 0$ for all $j \notin \mathcal{N}_i^t$. By doing so, we guarantee that $\sum_j w_{ij}^t = 1$ and that weights are assigned proportionally to the performance of each neighboring agent. Moreover, agents are capable of locally computing the weights to assign to their neighbors, since only locally available information is required.
The value of $a_i^t$ need not be computed at every iteration $t$. In fact, it is very unlikely that it changes much from one iteration to the next. Thus, we let agents update their local performance indexes every $T_w$ iterations. In the iterations in which the scores are not updated, they are assumed to be the same as in the previous iteration.
Notice that one can devise different rules for the computation of the weights in the adjacency matrix. For example, one can use the F1 score or some other metric in place of the accuracy, or assign weights with a different criterion. As a guideline, however, we point out that the weights should always depend on the performance of the agents on some (possibly common) task. Moreover, the local validation sets should be comparably informative, in order to evaluate agents on tasks of roughly equal difficulty. For example, when available, a common validation set could be used.
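The accuracy-based weighting of (6)–(7) can be sketched as follows; the accuracies and the exponent are illustrative values:

```python
# Weight computation following (6)-(7): local validation accuracies are raised
# to a power gamma (to accentuate differences) and normalized over the
# in-neighborhood, yielding a row of the row-stochastic adjacency matrix.
def neighbor_weights(accuracies, gamma=1.0):
    scores = [a ** gamma for a in accuracies]   # pi_j in (6)
    total = sum(scores)
    return [s / total for s in scores]          # w_ij in (7); the row sums to one
```

Note how the two extremes discussed later show up directly: `gamma=0.0` yields uniform weights regardless of performance, while a large `gamma` concentrates almost all the weight on the best-performing neighbor.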
Review step
By taking inspiration from human behavior once again, the collective training phase also includes a review step, which is performed occasionally by each node (say, every $T_r$ iterations for each node $i$). Similarly to humans, who occasionally review what they have already learned from reliable sources (e.g., books, articles, …), agents in the network review the data in the local set $\mathcal{T}_i$ (which are correctly labeled). Formally, every $T_r$ iterations, node $i$ also performs a training epoch on the local dataset $\mathcal{T}_i$, i.e., it modifies step (5) as

$$\theta_i^{t+\frac{1}{2}} = \mathcal{U}_i\big(\theta_i^t, (s^t, y_i^t)\big), \qquad (8)$$

$$\theta_i^{t+1} = \mathcal{U}_i\Big(\cdots \mathcal{U}_i\big(\theta_i^{t+\frac{1}{2}}, (x_i^1, y_i^1)\big) \cdots, (x_i^{m_i^{\mathrm{tr}}}, y_i^{m_i^{\mathrm{tr}}})\Big), \qquad (9)$$

i.e., the update on the proxy-labeled sample is followed by one epoch over $\mathcal{T}_i$.
As will be shown next, the frequency of the review step plays a crucial role in the learning procedure. A too high frequency tends to produce a sort of overfitting behavior, while a too low one makes agents forget their reliable data.
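One simple way to realize the periodic review step is an iteration counter. The helper below is purely illustrative: it returns, for a given review period `T_r`, at which collective-training iterations the extra review epoch on the labeled set is performed:

```python
# Scheduling sketch for the review step: every T_r collective-training
# iterations, the agent additionally performs a review epoch on its local
# labeled training set (the "review" entries below).
def training_schedule(num_iters, T_r):
    return ["review" if (t + 1) % T_r == 0 else "collective"
            for t in range(num_iters)]
```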
4.3 Remarks
Before proceeding with the experimental results, a couple of remarks are in order. The framework presented so far is quite general and can be easily implemented over networks consisting of various heterogeneous systems. In fact, each agent is allowed to use a custom structure for the local function $f_i$. This accounts for, e.g., different systems with different computational capabilities. More powerful units can use more complex models, while those with less capacity will use simpler ones. Clearly, some units will intrinsically perform better than the others but, at the same time, agents starting with low performance (e.g., due to poorly representative local labeled datasets) will eventually reach higher performance by collaborating with more accurate units. Finally, CL is intrinsically privacy-preserving, since each agent shares with its neighbors only predictions on shared data. Thus, it is not possible to infer anything about the internal architecture or private data of each node, since they are never exposed.
5 Experimental results
Consider an image classification problem in which each agent has a certain number of private labeled images, while a huge amount of unlabeled ones is available from some common source (for example, the internet). In this setup, we select the Fashion-MNIST dataset (xiao2017fashionmnist) to perform an extensive numerical analysis, and CL is implemented in Python by combining TensorFlow (abadi2016tensorflow) with the distributed optimization features provided by DISROPT (farina2019disropt). The Fashion-MNIST dataset consists of 28x28 greyscale images of clothes. Each image is associated with a label from 0 to 9, which corresponds to the type of clothing depicted in the image. The dataset is divided into a training set with 60,000 samples and a test set with 10,000 samples.
Next, we first consider a simple communication network and perform a Monte Carlo analysis to show the influence of some of the algorithmic and problem-dependent parameters involved in CL. Then, we compare CL with other non-distributed methods and, finally, an example with a larger and time-varying network is provided. The accuracy computed on the shared test set is picked as performance metric, and the samples in the Fashion-MNIST training set are used to build the local sets $\mathcal{D}_i$ and the shared set $\mathcal{S}$ in CL.
5.1 Monte Carlo analysis
Consider a simple scenario in which agents cooperate over a fixed network (represented as a complete graph, depicted in Figure 3) to learn to correctly classify images of clothes. To mimic heterogeneous agents, the local learning functions of the agents are as follows.

is represented as a convolutional neural network (CNN) consisting of (i) a convolutional layer with filters, kernel size of 3x3 and ReLU activation, combined with a max-pool layer with pool size of 2x2; (ii) a convolutional layer with filters, kernel size of 3x3 and ReLU activation, combined with a max-pool layer with pool size of 2x2; (iii) a convolutional layer with filters, kernel size of 3x3, ReLU activation and flattened output; (iv) a dense layer with 64 units and ReLU activation; (v) an output layer with 10 output units and softmax activation.

is represented as a neural network with 2 hidden layers (HL2) consisting of and units respectively, with ReLU activation, and an output layer with 10 output units and softmax activation.

is represented as a neural network with 1 hidden layer (HL1) of units, with ReLU activation, and an output layer with 10 output units and softmax activation.

is a shallow network (SHL) with 10 output units with softmax activation.
Next, the role of the algorithmic and problem-dependent parameters involved in CL is studied. In particular, we study the performance of the algorithm in terms of the accuracy on the test set by varying: (i) the size of the local training sets, (ii) the review step frequency, and (iii) the parameter $\gamma$ in the weights’ computation. In all the following simulations, the samples composing the shared set $\mathcal{S}$ are randomly picked at each run from the Fashion-MNIST training set. Moreover, we use the Adam update rule in (2) and (5), and the same batch size both in the self-training and in the collective training phases.
Influence of the local training set size
The number of private labeled samples locally available at each agent is clearly expected to play a crucial role in the performance achieved by each agent. In order to show this, we consider several sizes of the local training sets. For each value we perform a Monte Carlo simulation consisting of multiple runs. In each run, we randomly pick the samples in each $\mathcal{T}_i$ from the Fashion-MNIST training set (along with those in each $\mathcal{V}_i$). The remaining samples are then unlabeled and put in the set $\mathcal{S}$. Then, the algorithm is run for several epochs over $\mathcal{S}$, with the weights computation and the review step performed every $T_w$ and $T_r$ iterations, respectively. The results are depicted in Figure 4, and two things stand out. First, a higher number of private labeled samples leads to a higher accuracy on the test set. Second, as the number of local samples increases, the variance of the performance tends to decrease. Moreover, it can be seen that all the network architectures reach almost the same accuracy for the largest local training sets, while for the smallest ones the shallow network is outperformed by the other three.
Influence of the review step frequency
The frequency with which the review step is performed (which is inversely proportional to the magnitude of $T_r$) also influences the performance of the agents. In fact, just as humans need to review things from time to time (but not too rarely), here too a too high value of $T_r$ leads to a performance decay. We perform a Monte Carlo simulation for several values of $T_r$. For each value we run multiple instances of the algorithm, in which we build each set $\mathcal{T}_i$ with random samples from the Fashion-MNIST training set (along with those in each $\mathcal{V}_i$). The remaining samples are then unlabeled and put in the set $\mathcal{S}$. Then, the algorithm is run for several epochs over $\mathcal{S}$, with the weights computation performed every $T_w$ iterations. The results are reported in Figure 5. A higher time interval between two review steps produces a higher variance and also leads to a lower accuracy. This is due to the fact that, if the review step is performed too rarely, agents tend to forget their knowledge on labeled data and start to learn from wrongly labeled samples. Then, when the review occurs, they seem to increase their accuracy again. On the contrary, a too high frequency of the review step produces a slight overfitting behavior on the private labeled data. This can be appreciated by comparing the two smallest values of $T_r$ in Figure 5: the accuracy on the test set for the larger of the two is higher than the one for the smaller. A more pronounced overfitting behavior can be seen by further reducing $T_r$.
Influence of the parameter $\gamma$
The last parameter we study is $\gamma$ in (6). A small value of $\gamma$ means that small differences in the local performances result in small differences in the weights. On the contrary, a high value produces a weight near one for the best agent in the neighborhood. In the extreme case $\gamma = 0$, all agents in the neighborhood are assigned the same weight, independently of their performance. In Figure 6 the results of a Monte Carlo simulation for several values of $\gamma$ are reported. In this setup, the best accuracy is obtained for intermediate values of $\gamma$, with a slightly higher standard deviation for the smaller ones. When $\gamma = 0$ (i.e., when employing a uniform weighting), the accuracies tend to reach a satisfactory value and then start to decrease. This is probably due to the fact that all agents have the same importance and hence, in this case, all of them seem to obtain the performance of the worst of them. Finally, for large values of $\gamma$ there is a substantial performance degradation. This is likely caused by the fact that, in the first iterations, there is an agent which is slightly better than the others and leads all the others towards its (wrong) solution.
It is worth mentioning that the influence of $\gamma$ on the performance may vary, depending on the considered setup. For example, for a different choice of the local training set sizes, the results are depicted in Figure 7. Intermediate choices of $\gamma$ still seem to work well, while for the other values a steady state is reached. Moreover, for bigger generic communication graphs, a too small choice of $\gamma$ may not work at all, due to, e.g., a large number of intrinsically low-performing neighbors whose weight in the creation of the proxy-labels tends to produce a lot of wrong labels.
5.2 Comparison with non-cooperative methods
In this section we compare the results obtained by CL in the setup presented above with those obtained by using other (non-cooperative) methods. In particular, we consider the following two approaches.

Independently train each learning function over a dataset with the same size as the local private training dataset used in CL (we refer to this approach as ST).

Assume that the entire training dataset of Fashion-MNIST is available, and independently train each learning function over the entire dataset (we refer to this approach as FS).
These approaches give two benchmarks. On one side, the performance obtained by the four learning functions with ST coincides with that achievable by the agents by performing the self-training phase only, without cooperating. On the other side, the performance obtained with FS, i.e., in a fully supervised case, represents the best performance that can be achieved by the selected learning architectures. In order for CL to be worthwhile for the agents, it should lead to better performance than ST and approach as much as possible that obtained by FS.
To compare CL, ST and FS we perform a Monte Carlo simulation consisting of 100 runs of each of the three approaches. In each run, the sets $\mathcal{T}_i$, $\mathcal{V}_i$ and $\mathcal{S}$ for CL are created randomly as in the previous sections. Similarly, the samples for training each function in ST are randomly drawn at each run from the Fashion-MNIST training set. The three approaches are compared in terms of the obtained accuracy on the test set, and the results are reported in Table 1.
Table 1: Accuracy on the test set (mean and standard deviation over 100 runs).

Architecture/agent   CL (Mean / Std)     ST (Mean / Std)     FS (Mean / Std)
CNN                  0.8149 / 0.0060     0.7663 / 0.0127     0.9021 / 0.0043
HL2                  0.8144 / 0.0051     0.7670 / 0.0171     0.8476 / 0.0058
HL1                  0.8153 / 0.0049     0.7728 / 0.0112     0.8465 / 0.0062
SHL                  0.8065 / 0.0050     0.7498 / 0.0077     0.8406 / 0.0028
It can be seen that CL reaches a higher accuracy (with a lower standard deviation) than ST, thus confirming the benefits obtained through cooperation. The target performance of FS, however, is not reached. In this regard, we point out that the comparison with FS is somewhat unfair, since the amount of usable information (in terms of labeled samples) is extremely different. However, with a higher number of labeled samples an accuracy close to that of FS can be reached. For example, from Figure 4 it is clear that, with the largest local training sets considered there, the learning functions HL2 and HL1 already match (via CL) the accuracy of FS.
5.3 Example with a larger, time-varying communication network
In this section we perform an experiment with a larger network. Each agent is equipped with a learning function randomly chosen from those introduced in the previous section (CNN, HL2, HL1, SHL). Agents in the network communicate across a time-varying (random) graph that changes every few iterations. Each graph is generated according to an Erdős–Rényi random model (see Figure 8 for an illustrative example). Each agent is equipped with training samples randomly picked from the Fashion-MNIST training set. We run a simulation in this setup over the shared set $\mathcal{S}$, and the results at the end of the simulation are reported in Figure 9 in terms of the accuracy on the test set. It can be seen that all the agents reach a similar level of accuracy. Moreover, in the last iterations, some of them also outperform the target accuracy of FS obtained in the previous section (for HL2, HL1 and SHL). Agents equipped with the CNN, on the other hand, seem unable to reach the accuracy of FS in this setup.
6 Conclusions
In this paper we presented the collective learning framework to deal with semisupervised learning problems in a distributed setup. The proposed algorithm allows heterogeneous interconnected agents to cooperate for the purpose of collectively training their local learning functions. The algorithmic idea draws inspiration from the notion of collective intelligence and the related human behavior. The obtained experimental results show the potential of the proposed scheme and call for a thorough theoretical analysis of the collective learning framework.
References
Footnotes
When talking about distributed systems, the word distributed can be used with different meanings. Here, we refer to networks composed of peer agents, without any central coordinator.