Global convergence of Negative Correlation Extreme Learning Machine
Ensemble approaches introduced in the Extreme Learning Machine (ELM) literature mainly come from methods that rely on data sampling procedures, under the assumption that the training data are heterogeneous enough to set up diverse base learners. To overcome this assumption, an ELM ensemble method based on the Negative Correlation Learning (NCL) framework, called Negative Correlation Extreme Learning Machine (NCELM), was proposed. This model works in three stages: i) different ELMs are generated as base learners with random weights in the hidden layer; ii) an NCL penalty term with the information of the ensemble prediction is introduced into each ELM minimization problem, updating the base learners; iii) the second step is iterated until the ensemble converges.
Although this NCL ensemble method was validated by an experimental study with multiple benchmark datasets, no information was given about the conditions under which this convergence is guaranteed. This paper mathematically presents sufficient conditions to guarantee the global convergence of NCELM. The update of the ensemble in each iteration is defined as a contraction mapping, and, through the Banach fixed-point theorem, the global convergence of the ensemble is proved.
Keywords: Ensemble, Negative Correlation Learning, Extreme Learning Machine, Fixed point, Banach, Contraction mapping.
1 Introduction
Over the years, Extreme Learning Machine (ELM) Huang et al. (2012) has become a competitive algorithm for diverse machine learning tasks: time series prediction Ren and Han (2019), speech recognition Xu et al. (2019), deep learning architectures Chang et al. (2018); Chaturvedi et al. (2018), among others. Both the Single-Hidden-Layer Feedforward Network (SLFN) and the kernel trick versions Huang et al. (2012) are widely used in supervised machine learning problems, mainly due to their low computational burden and their powerful nonlinear mapping capability. The neural network version of the ELM framework relies on the randomness of the weights between the input and the hidden layer to speed up the training stage while keeping competitive performance results Li et al. (2020).
Ensemble learning, also known as committee-based learning Zhou (2012); Kuncheva and Whitaker (2003), has attracted much interest in the machine learning community Zhou (2012) and has been applied widely in many real-world tasks such as object detection, object recognition, and object tracking Girshick et al. (2014); Wang et al. (2012); Zhou et al. (2014); Ykhlef and Bouchaffra (2017). The main characteristic of these methodologies lies in how the training data are used to generate diversity among the base learners. Ensemble methods can be separated according to whether they promote diversity implicitly (for example, using data sampling methods, such as Bagging Breiman (1996) and Boosting Freund (1995)) or explicitly (introducing parameter diversity terms, such as the Negative Correlation Learning framework Masoudnia et al. (2012); Huanhuan Chen and Xin Yao (2009)). In this context, Bagging and Boosting are the most common approaches Domingos (1997); Wyner et al. (2017), although the convergence of these ensemble methods is not always assured Rudin et al. (2004); Mukherjee et al. (2013).
Negative Correlation Learning is a framework, originally designed for neural network ensembles, that introduces the promotion of diversity among the base learners as another term to optimize in the training stage of the model Huanhuan Chen and Xin Yao (2009). This ensemble learning method has been applied to multi-class problems Wang et al. (2010), deep learning tasks Shi et al. (2018) and semi-supervised machine learning problems Chen et al. (2018). In the Extreme Learning Machine community, Negative Correlation Extreme Learning Machine was introduced by adding to the regularized ELM Huanhuan Chen and Xin Yao (2009) the diversity term directly in the loss function Perales-González et al. (2020). This allows managing the diversity along with the regularization and the error. However, this method relies on the convergence of the ensemble, which was not clarified in the original paper.
In this paper, training conditions for convergence are presented and discussed. The training stage of Negative Correlation Extreme Learning Machine (NCELM) is reformulated as a fixed-point iteration, and the solution of each step can be represented as a contraction mapping. By the Banach fixed-point theorem, this contraction mapping implies convergence, so the ensemble method is stable.
The manuscript is organized as follows: Extreme Learning Machine for classification problems and the ensemble method Negative Correlation Extreme Learning Machine are explained in Section 2. Conditions for convergence are studied in Section 3, and the discussion about hyper-parameter boundaries and graphical examples is in Section 4. Conclusions are in the final section of the article, Section 5.
2 Negative Correlation Extreme Learning Machine and its formulation
2.1 Extreme Learning Machine as base learner
For a classification problem, the training data can be represented as $D = \{(\mathbf{x}_n, \mathbf{y}_n)\}_{n=1}^{N}$, where
$\mathbf{x}_n \in \mathbb{R}^{K}$ is the vector of features of the $n$-th training pattern,
$K$ is the dimension of the input features,
$\mathbf{y}_n \in \{0, 1\}^{J}$ is the target of the $n$-th training pattern, 1-of-$J$ encoded (all elements of the vector are 0 except the one corresponding to the label of the pattern, which is 1),
$J$ is the number of classes.
Following this notation, the output function of the Extreme Learning Machine classifier Huang et al. (2012) is $f(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_J(\mathbf{x}))^{\top}$, where each $f_j$ is
$$f_j(\mathbf{x}) = \mathbf{h}(\mathbf{x})^{\top} \boldsymbol{\beta}_j, \qquad (1)$$
where $\mathbf{h}(\mathbf{x})$ is the hidden layer output. The predicted class corresponds to the vector component with the highest value,
$$\hat{y}(\mathbf{x}) = \arg\max_{j = 1, \ldots, J} f_j(\mathbf{x}). \qquad (2)$$
The ELM model estimates the coefficient vectors $\boldsymbol{\beta}_j \in \mathbb{R}^{D}$, $j = 1, \ldots, J$, where $D$ is the number of nodes in the hidden layer, that minimize the following problem:
$$\min_{\boldsymbol{\beta}_j} \; \left\| \mathbf{H} \boldsymbol{\beta}_j - \mathbf{T}_j \right\|^2 + \frac{1}{C} \left\| \boldsymbol{\beta}_j \right\|^2, \qquad (3)$$
where
$\mathbf{H} \in \mathbb{R}^{N \times D}$ is the output of the hidden layer for the training patterns,
$\mathbf{T} \in \mathbb{R}^{N \times J}$ is the matrix with the desired targets,
$\mathbf{T}_j$ is the $j$-th column of the $\mathbf{T}$ matrix.
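As an illustration, this training scheme can be sketched in a few lines of NumPy. The sketch below assumes a sigmoid activation and the standard regularized least-squares solution $\boldsymbol{\beta} = (\mathbf{H}^{\top}\mathbf{H} + \mathbf{I}/C)^{-1}\mathbf{H}^{\top}\mathbf{T}$; the function names, data sizes and hyper-parameter values are illustrative, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Hidden-layer output h(x) with random weights and a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_fit(H, T, C=1.0):
    """Regularized least squares: beta = (H^T H + I/C)^(-1) H^T T."""
    D = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(D) / C, H.T @ T)

# Toy problem: N = 100 patterns, K = 5 features, J = 3 classes, D = 20 hidden nodes.
N, K, J, D = 100, 5, 3, 20
X = rng.normal(size=(N, K))
labels = rng.integers(0, J, size=N)
T = np.eye(J)[labels]                 # 1-of-J encoded targets
W = rng.normal(size=(K, D))           # random input-to-hidden weights (never trained)
b = rng.normal(size=D)                # random hidden biases
H = elm_hidden(X, W, b)               # N x D hidden-layer matrix
beta = elm_fit(H, T, C=10.0)          # D x J output weights
pred = np.argmax(H @ beta, axis=1)    # predicted class = highest output component
print(beta.shape, pred[:5])
```

Only the output weights are fitted; the input-to-hidden weights W and biases b stay at their random initialization, which is the defining trait of the ELM framework.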
2.2 Negative Correlation Extreme Learning Machine
The Negative Correlation Extreme Learning Machine model Perales-González et al. (2020) is an ensemble of base learners in which each $s$-th base learner is an ELM, $s = 1, \ldots, S$, where $S$ is the number of base classifiers. The final output for a testing instance is defined as the average of their outputs,
In the Negative Correlation Learning proposal for the ELM framework Perales-González et al. (2020), the minimization problem for each $s$-th base learner is similar to Eq. (3), but the diversity between the output of the individual and the final ensemble is introduced as a penalization, with $\lambda$ as a problem-dependent parameter that controls the diversity. The minimization problem for each $s$-th base learner is
where $\bar{f}$ is the output of the ensemble,
Because $\bar{f}$ appears in Eq. (6), the proposed solution is to transform the problem into an iterated sequence, with the solution of Eq. (3) as the first iteration, for $s = 1, \ldots, S$. The output weight matrices in the $u$-th iteration for each individual are obtained from the following optimization problem
where the ensemble output $\bar{f}$ is updated as
The resulting $\bar{f}$ is introduced into Eq. (8) in order to obtain the next iterate, so the solution is computed iteratively. However, the convergence of this iteration was not assured in the original paper Perales-González et al. (2020); it can be proved with the Banach fixed-point theorem.
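The iterated sequence above can be sketched as a small fixed-point loop. This is an illustrative sketch only: the diversity penalty used below, $\lambda \| \bar{F}^{\top} \mathbf{H} \boldsymbol{\beta} \|^2$, which penalizes the alignment between each learner's output and the current ensemble output, is an assumption standing in for the exact NCELM penalty, and all names (ncelm_update, lam, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def ncelm_update(Hs, betas, Y, C, lam):
    """One fixed-point sweep: every base learner is re-solved against the
    current ensemble output (the penalty form below is an assumption)."""
    F_bar = np.mean([H @ b for H, b in zip(Hs, betas)], axis=0)  # ensemble output
    updated = []
    for H in Hs:
        D = H.shape[1]
        # Normal equations of ||H b - Y||^2 + ||b||^2/C + lam * ||F_bar^T H b||^2
        A = H.T @ H + np.eye(D) / C + lam * (H.T @ F_bar) @ (F_bar.T @ H)
        updated.append(np.linalg.solve(A, H.T @ Y))
    return updated

# Toy data: S = 3 base ELMs, each with its own random hidden layer.
N, K, J, D, S = 80, 4, 3, 15, 3
X = rng.normal(size=(N, K))
Y = np.eye(J)[rng.integers(0, J, size=N)]
Hs = [1.0 / (1.0 + np.exp(-(X @ rng.normal(size=(K, D)) + rng.normal(size=D))))
      for _ in range(S)]
# First iteration: plain regularized ELM solutions, as in Eq. (3).
betas = [np.linalg.solve(H.T @ H + np.eye(D), H.T @ Y) for H in Hs]

diffs = []  # total change of the output weights per sweep
for _ in range(30):
    new_betas = ncelm_update(Hs, betas, Y, C=1.0, lam=0.1)
    diffs.append(sum(np.linalg.norm(nb - b) for nb, b in zip(new_betas, betas)))
    betas = new_betas
print(f"change per sweep: first {diffs[0]:.3e}, last {diffs[-1]:.3e}")
```

Under a small lam the update behaves as a contraction, so the per-sweep change of the output weights shrinks towards zero; the conditions under which this holds are precisely what the next section studies.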
3 Conditions for the convergence of NCELM
3.1 Banach fixed-point theorem
As Stefan Banach established Banach (1922),
Let $(X, d)$ be a non-empty complete metric space with a contraction mapping $T : X \to X$. Then $T$ admits a unique fixed point $x^{*}$ in $X$ ($T(x^{*}) = x^{*}$). Furthermore, $x^{*}$ can be found as follows: start with an arbitrary element $x_0 \in X$ and define the sequence $x_n = T(x_{n-1})$; then $x_n \to x^{*}$.
Let $(X, d)$ be a complete metric space. A map $T : X \to X$ is called a contraction mapping on $X$ if there exists $q \in [0, 1)$ such that
$$d(T(x), T(y)) \leq q \, d(x, y) \quad \forall x, y \in X.$$
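A one-dimensional toy example makes the theorem concrete: $T(x) = \cos(x)$ is a contraction on $[0, 1]$, since $|T'(x)| = |\sin(x)| \leq \sin(1) \approx 0.84 < 1$ there, so iterating it converges to its unique fixed point. A minimal sketch:

```python
import math

def banach_iterate(T, x0, steps):
    """Fixed-point iteration x_{k+1} = T(x_k); for a contraction on a complete
    metric space, Banach's theorem guarantees convergence to the unique x*."""
    x = x0
    for _ in range(steps):
        x = T(x)
    return x

# T(x) = cos(x) is a contraction on [0, 1]; start anywhere in the interval.
x_star = banach_iterate(math.cos, 0.5, 200)
print(round(x_star, 6))  # unique fixed point of cos, ~0.739085
```

NCELM plays the same game in a higher-dimensional space: the map studied in the next subsection updates all the output weight matrices of the ensemble at once.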
3.2 Reformulation of NCELM model as a contraction mapping
In order to prove that the iteration of Eq. (8) over the output weight matrices is a fixed-point iteration, the elements of the NCELM model are first defined in a metric space with a map $T$. Afterwards, it is proved that $T$ is a contraction mapping. An element of this space is defined as
thus the subspace that contains the possible solutions of Eq. (10) is included in the whole space. The output of the ensemble, $\bar{f}$, is then a function of this element, since it is composed of all the output weight matrices by definition in Eq. (7). Noting this, the map
is Eq. (10) applied to this point. The map $T$ depends on each classification problem, because the hidden-layer outputs, the targets and the hyper-parameters are problem-dependent. The individuals can be considered,
Following this formulation, the NCELM model always starts from the initial point
that leads to the initial ensemble output, and thus the first element in the sequence is
3.3 Definition of distance
For two points from the space, the distance metric is defined as
where $\| \cdot \|^{2}$ is the squared norm,
The distance after the map is
so the distance is just a sum of per-individual terms. It is trivial that the contraction condition holds if
so it is only necessary to prove that
3.4 Proof that T is a contraction mapping
After computing the training data, the coefficient matrices are fixed. If both points are obtained by Eq. (4) and by Eq. (10), respectively, they coincide, because both equations give a unique solution, and in this case the inequality from Eq. (20) is assured.
Let us assume two arbitrary points; the initial points from Eq. (10) are,
From these, new predictions can be obtained,
Note that one example of such points could be:
The application of the map $T$ would result in
so the inverse of the matrix in Eq. (25) can be rewritten as
The distance can be expressed as
Since the trivial solution is discarded, Eq. (31) can be divided by the distance,
and applying Eq. (20),
Because squared real terms are nonnegative, it is only necessary to prove that
The left inequality is assured, since the terms involved are nonnegative. Squaring the fraction and applying norm properties,
This problem is equivalent to
where the denominator in this problem is the distance between the two points, which is nonzero because that case was discarded. This can be solved using Lagrange multipliers,
Maximizing with respect to the variable, a Generalized Eigenvalue Problem (GEP) is obtained,
In the GEP, the eigenvalues can be calculated as
Using the norm property in Eq. (35), and adding the previous knowledge, a lower bound can be set. Taking the inverse,
In the opposite case, the same reasoning can be followed. From the norm property in Eq. (36), an upper bound for the norm of the inverse matrix can be set,
where the constant is defined from the previous expressions. Replacing it in Eq. (43), the following inequality is reached,
so conditions on the hyper-parameters can be imposed for the maximum eigenvalue to stay under the condition in Eq. (47),
After considering this term as
and replacing it into Eq. (48),
values can be obtained numerically by finding the zero of the following equation,
because it is an implicit equation, whose terms depend on the unknown itself. However, the bound can be relaxed using the norm property in Eq. (45),
thus, a more restrictive bound can be set,
It is trivial to see that one condition implies the other. Although the parameter is still implicit in Eq. (52), this problem can be avoided within this same Section.
which means that, under a certain condition on the hyper-parameters, there is an upper bound that allows formulating NCELM as a fixed-point iteration. Moreover, because the sequence in Eq. (10) is a fixed-point iteration, it converges, with
the solution of the system as its limit; thus, by definition,
and the condition in Eq. (48) is relaxed over the iterations, since the upper bound increases,
Eq. (55) implies that any value can be chosen, whether it fulfills the initial condition or not, because the condition is relaxed over the iterations and the upper bound becomes larger and larger. If Eq. (11) were not fulfilled in the first iteration for the desired value, an admissible value could be chosen instead, and the boundary would be relaxed during the training stage until the desired value is allowed. Using the base learners obtained at that point of the training stage, the fixed-point iteration could continue with the desired value.
4.2 Experimental results
Because the base learners converge to an ensemble optimum, the difference between the coefficient vectors in one iteration and their values in the next iteration always decreases. For explanatory purposes, this Section graphically shows an example of this convergence.
The hyper-parameters are fixed for this example. To reduce the computational burden, the vector norm chosen for plotting was not the norm used in the proof but a computationally cheaper one,