s-DRN: Stabilized Developmental Resonance Network
Online incremental clustering of sequentially incoming data without prior knowledge suffers from a changing number of clusters and tends to fall into local extrema depending on the given data order. To overcome these limitations, we propose a stabilized developmental resonance network (s-DRN). First, we analyze the instability of the conventional choice function during the node activation process and design a scalable activation function that makes clustering performance stable over all input data scales. Next, we devise three criteria for the node grouping algorithm: distance, intersection over union (IoU) and size. The proposed node grouping algorithm effectively excludes unnecessary clusters from incrementally created clusters, diminishes the performance dependency on vigilance parameters and makes the clustering process robust. To verify the performance of the proposed s-DRN model, comparative studies are conducted on six real-world datasets with distinctive statistical characteristics. The comparative studies demonstrate that the proposed s-DRN outperforms baselines in terms of stability and accuracy.
I Introduction
Clustering, a class of unsupervised learning algorithms, aims to group data instances into a number of categories. Clustering algorithms allow the analysis of data characteristics without prior knowledge, which can be applied to memory design [15, 10, 13, 7]. Clustering includes two main types of approaches: 1) batch learning and 2) online learning. Batch learning approaches, whose representative algorithms include $k$-means and Gaussian mixture models (GMM), are straightforward and simple to implement. However, they generally require a predefined cluster number from the user and all the training data to be given in advance. These features limit the application of batch learning algorithms in real-world settings where data are observed sequentially and continuously.
On the other hand, online learning approaches can handle a varying number of clusters and incrementally process continuous data. Thus, we focus on developing an effective online incremental clustering algorithm in this paper. Previous online learning approaches such as distance metric learning (DML) and the self-organizing incremental neural network (SOINN) memorize all given inputs, so the computation required to process each input instance grows with the amount of stored data. Fusion adaptive resonance theory (fusion ART) and fuzzy ART networks are efficient in terms of computation and memory usage, but they demand inputs normalized to the range [0, 1], and the problem of node proliferation lingers. The developmental resonance network (DRN) has attempted to solve these two limitations, although its remedy for the normalization problem works only for a certain range of inputs, and its grouping algorithm, intended to solve the node proliferation problem, is inefficient.
To overcome the limitations mentioned above, we propose a stabilized developmental resonance network (s-DRN). First, we analyze the instability of the conventional choice function in the node activation process and design a scalable activation function that keeps clustering performance stable over all input data scales.
Next, we design a node grouping algorithm to alleviate the node proliferation problem. Since DRN and s-DRN allow unrestricted input scales, they cannot employ the complement coding scheme that prevents node proliferation, so a node grouping algorithm that inhibits node proliferation is essential. Three criteria, distance, intersection over union (IoU) and size, are devised for the node grouping algorithm to effectively exclude unnecessary clusters from incrementally created clusters. In particular, we define and formulate the IoU criterion for the node grouping algorithm. With the proposed IoU criterion, the node grouping algorithm becomes both scalable and stable in that the performance dependency on the vigilance parameter decreases. The proposed node grouping algorithm of s-DRN is computationally more efficient than that of DRN, and s-DRN displays more effective clustering performance than conventional methods owing to the proposed node grouping algorithm.
The remainder of this paper is structured as follows. Section II summarizes DRN as a preliminary. Section III proposes the s-DRN model and Section IV presents the experiment results with a thorough analysis. Concluding remarks follow in Section V.
II Developmental Resonance Network
In this section, we briefly delineate the computation flow of the DRN model as a preliminary.
II-A Global Weight Update
DRN utilizes a global weight vector $\mathbf{w}_g^k$ for each channel $k$ ($k = 1, \dots, n$, where $n$ is the number of channels) to cope with unknown scales of multi-channel inputs, which gets updated as follows:
where $\mathbf{x}^k(t)$ is the $t$-th step input of the $k$-th channel and $\beta_g$ is the learning rate of $\mathbf{w}_g^k$.
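The concrete update rule of (1) is not reproduced in this copy. As an illustrative sketch only (not the exact DRN formula), the snippet below assumes the global weight tracks the observed per-channel input range via an elementwise running minimum and maximum, which is one way a weight that copes with unknown input scales can behave:

```python
# Hypothetical range-tracking global weight: each channel keeps a lower
# and an upper bound over all inputs seen so far. This is a stand-in
# sketch, NOT the exact DRN update rule.

def update_global_weight(w_lo, w_hi, x):
    """Expand the per-dimension bounds so they cover the new input x."""
    w_lo = [min(lo, xi) for lo, xi in zip(w_lo, x)]
    w_hi = [max(hi, xi) for hi, xi in zip(w_hi, x)]
    return w_lo, w_hi

# Streaming usage: bounds start at the first input and only ever widen,
# so no normalization of the raw inputs is required.
stream = [[3.0, 50.0], [1.0, 80.0], [4.0, 10.0]]
w_lo, w_hi = list(stream[0]), list(stream[0])
for x in stream[1:]:
    w_lo, w_hi = update_global_weight(w_lo, w_hi, x)
print(w_lo, w_hi)  # [1.0, 10.0] [4.0, 80.0]
```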
II-B Node Activation
The input $\mathbf{x}^k$ activates the $j$-th node as follows:
where $\gamma^k$ is a contribution parameter, $\alpha$ is a slope parameter, $T_j$ is the choice function that normalizes the activation value to $[0, 1]$, and $d_j^k$ is the distance between $\mathbf{x}^k$ and the weight vector $\mathbf{w}_j^k$.
II-C Template Matching
The template matching process identifies whether the node with the largest activation value (say, the $J$-th node) resonates with the activity vector. First, the ratio between the two vectors is calculated element-wise for each channel (over as many elements as the dimension of the $k$-th channel) using the global diagonal vector of the $k$-th channel and the decision diagonal vector of the $J$-th node.
Then, the resonance condition is defined as
where $m_J$ is a resonance value and $\rho$ is a vigilance parameter.
II-D Template Learning
If the $J$-th node has resonated in the template matching process, its weight gets updated by
where $\beta^k$ is the learning rate of the $k$-th channel.
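The update equation itself is missing from this copy; as a stand-in, the sketch below uses the classic fuzzy-ART learning rule, $\mathbf{w} \leftarrow (1-\beta)\mathbf{w} + \beta(\mathbf{x} \wedge \mathbf{w})$, where $\wedge$ is the elementwise minimum. The function name and the value of $\beta$ are illustrative assumptions, not taken from the paper:

```python
def template_learning(w, x, beta=0.5):
    """Fuzzy-ART-style update: move each weight component toward
    (x AND w), where AND is the elementwise minimum (fuzzy intersection).
    A stand-in for the DRN template learning rule, not the exact formula."""
    return [(1.0 - beta) * wi + beta * min(xi, wi) for wi, xi in zip(w, x)]

# With beta = 0.5, the weight moves halfway toward the fuzzy intersection.
w = [0.8, 0.4]
w = template_learning(w, [0.6, 0.9], beta=0.5)
print(w)  # approximately [0.7, 0.4]
```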
II-E Node Selection and Connection
After the node activation process, DRN selects the nodes with the largest activation values and connects them to improve the efficiency of the subsequent grouping process. Connections are created when an existing node is updated or when a new node is created. Moreover, DRN defines the center point vector for each node and the concept of synaptic strength, which represents the strength of the connection between the $i$-th and $j$-th nodes.
II-F Node Grouping
In the final computation step, DRN iterates through the connected nodes in decreasing order of synaptic strength. DRN groups a pair of nodes if the two nodes in the pair resonate, i.e., satisfy the condition in (3), and then stops the iteration.
III Stabilized Developmental Resonance Network
In this section, we describe the proposed s-DRN, summarized in Algorithm 1, with the proposed activation function for scalability and the node grouping algorithm for stability. Moreover, we analyze the computational efficiency of the proposed s-DRN.
III-A Node Activation
First, we analyze the normalization problem that remains with (2). For the exponential function to perform as a distance normalization function, it should satisfy the following condition:
where $\epsilon_{\min}$ is the minimum positive value a processor supports. (5) reduces to
As modern 64-bit processors support $\epsilon_{\min} \approx 10^{-308}$ and $\alpha$ takes a commonly-used value, the distance should be approximately less than 10,000 for the conventional activation function to perform normally. Otherwise, the clustering performance degrades dramatically (Fig. 1).
To overcome the limitation and the normalization problem, we propose a scalable activation function as follows.
With the proposed activation function, s-DRN can handle inputs of all scales since the normalization condition is invariably satisfied.
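Since the exact form of the proposed activation function is not reproduced here, the snippet below uses a bounded rational function, $1/(1 + \alpha d)$, purely as an illustrative stand-in. It demonstrates the underflow failure of the exponential choice function analyzed above, and how any activation bounded away from underflow preserves the ranking of large distances:

```python
import math

ALPHA = 0.01  # slope parameter (illustrative value, not from the paper)

def conventional_activation(d, alpha=ALPHA):
    """exp(-alpha * d): once alpha * d exceeds roughly 745, the result
    underflows the smallest positive 64-bit double and becomes exactly 0.0,
    so all large distances collapse to the same activation."""
    return math.exp(-alpha * d)

def bounded_activation(d, alpha=ALPHA):
    """A bounded rational function in (0, 1]: an illustrative stand-in for
    a scalable activation, NOT the exact s-DRN formula."""
    return 1.0 / (1.0 + alpha * d)

# Two clearly different large distances become indistinguishable (both 0.0)
# under the exponential, while the bounded form still ranks them correctly.
d_near, d_far = 1.0e5, 2.0e5
print(conventional_activation(d_near), conventional_activation(d_far))  # 0.0 0.0
print(bounded_activation(d_near) > bounded_activation(d_far))           # True
```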
III-B Node Grouping
We propose a node grouping algorithm to mitigate the performance instability attributed to data input order and the dependency on vigilance parameters. The proposed node grouping process compares the activated cluster with nearby clusters when an input vector arrives and groups a pair if the two clusters in the pair satisfy three criteria: distance, IoU and size. The three criteria are examined over all channels, and every channel must satisfy each criterion before the next criterion is examined.
For the formulation of the criteria, let $A$ and $B$ denote a pair of neighboring clusters (Fig. 2). The weight vectors representing each cluster for the $k$-th channel are:
where $n$ is the number of channels and the semicolon represents concatenation. We define the distance vector between a pair of clusters as
where each element of the vector is defined as
where the $\min(\cdot)$ operator chooses the minimum element of a vector. The proposed distance criterion is
where the $\max(\cdot)$ operator chooses the maximum element of a vector and $\rho$ is a vigilance parameter for the template matching. Note that s-DRN uses a single vigilance parameter instead of per-channel vigilance parameters, since a separate vigilance parameter for each channel is unnecessary.
We propose the IoU criterion since the distance criterion can become loose and merge all the clusters when a low-valued vigilance parameter is used. The IoU criterion tests whether the hypothetically grouped cluster could encompass the two compared clusters with the least extension. This guarantees that the grouped cluster does not substantially occupy uninvestigated feature space. The expression below represents the hypothetically grouped cluster for the $k$-th channel:
For each category cluster, we define the volume of the $k$-th channel as
Next, we define the IoU criterion for the $k$-th channel as
where the threshold determines the final decision for the grouping process. The IoU value lies in $[0, 2]$, and we set the threshold to 0.85.
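The sketch below gives one concrete reading of the IoU criterion, under the assumptions that clusters are axis-aligned boxes, the hypothetically grouped cluster is their minimal enclosing box, and the score is the ratio of the two cluster volumes to the enclosing volume (a quantity that indeed falls in $[0, 2]$, as stated above). The exact s-DRN formulation may differ:

```python
def volume(lo, hi):
    """Volume of an axis-aligned box given its lower/upper corner vectors."""
    v = 1.0
    for l, h in zip(lo, hi):
        v *= (h - l)
    return v

def iou_score(a_lo, a_hi, b_lo, b_hi):
    """Ratio of the two cluster volumes to the volume of their minimal
    enclosing box: an illustrative reading of the IoU criterion. The score
    is 2 when the boxes coincide and near 0 when the enclosing box is
    mostly empty space between two distant clusters."""
    g_lo = [min(l1, l2) for l1, l2 in zip(a_lo, b_lo)]  # grouped lower corner
    g_hi = [max(h1, h2) for h1, h2 in zip(a_hi, b_hi)]  # grouped upper corner
    return (volume(a_lo, a_hi) + volume(b_lo, b_hi)) / volume(g_lo, g_hi)

# Identical boxes score 2.0; far-apart small boxes score near 0, so a
# threshold of 0.85 correctly rejects grouping them.
print(iou_score([0, 0], [1, 1], [0, 0], [1, 1]))           # 2.0
print(iou_score([0, 0], [1, 1], [9, 9], [10, 10]) < 0.85)  # True
```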
The size criterion limits the maximum size of a category cluster. Excessively large clusters resulting from node grouping hinder the normal template matching process. Thus, we limit the size of a cluster: the maximum size of each cluster for the $k$-th channel is bounded in congruence with the resonance condition in (3).
III-C Computational Efficiency
The computational complexity of fusion ART, on which DRN is based, is $O(NMD)$, where $N$ is the number of categories, $M$ is the dimension of the input, and $D$ is the number of data samples. With its grouping algorithm, the computational complexity of DRN becomes
where the additional factors are the average number of global weight updates and the average number of connected category pairs, respectively.
On the other hand, the computational complexity of s-DRN is
The increase in computation with s-DRN is minuscule compared to that of DRN.
IV Experiments
In this section, we illustrate the experiment setting for performance verification and establish the effectiveness of the proposed s-DRN model.
IV-A Experiment Setting
We retrieved six real-world benchmark datasets from the UCI machine learning repository.
For quantitative analysis, we employed three performance metrics. First, the Davies-Bouldin index (DBI) estimates the ratio of within-cluster scatter to between-cluster separation as follows:

$$\mathrm{DBI} = \frac{1}{N} \sum_{i=1}^{N} \max_{j \neq i} \frac{\sigma_i + \sigma_j}{d(c_i, c_j)},$$

where $N$ is the cluster number, $c_i$ is the center point of cluster $i$, $\sigma_i$ is the average distance of every element in cluster $i$ to $c_i$, and $d(c_i, c_j)$ is the distance between $c_i$ and $c_j$. A lower value of DBI indicates higher clustering performance.
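The DBI can be implemented directly from its definition; a minimal self-contained version, with clusters given as lists of points:

```python
import math

def dbi(clusters):
    """Davies-Bouldin index. `clusters` is a list of clusters, each a list
    of points (tuples of coordinates)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    # Center of each cluster: coordinate-wise mean of its points.
    centers = [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in clusters]
    # Within-cluster scatter: average distance of the points to the center.
    scatter = [sum(dist(p, c) for p in pts) / len(pts)
               for pts, c in zip(clusters, centers)]
    n = len(clusters)
    total = 0.0
    for i in range(n):
        total += max((scatter[i] + scatter[j]) / dist(centers[i], centers[j])
                     for j in range(n) if j != i)
    return total / n

# Two tight, well-separated clusters give a low (good) DBI.
clusters = [[(0.0, 0.0), (0.0, 1.0)], [(10.0, 0.0), (10.0, 1.0)]]
print(dbi(clusters))  # 0.1
```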
Next, clustering purity (CP) matches each output cluster to the ground-truth class as follows:

$$\mathrm{CP}(\Omega, \mathbb{C}) = \frac{1}{D} \sum_{k} \max_{j} |\omega_k \cap c_j|,$$

where $\Omega = \{\omega_1, \omega_2, \dots\}$ is the set of clusters, $\mathbb{C} = \{c_1, c_2, \dots\}$ is the set of ground-truth classes, and $D$ is the number of data samples. Since a large number of clusters can bias CP, we complemented CP with normalized mutual information (NMI), which is defined as

$$\mathrm{NMI}(\Omega, \mathbb{C}) = \frac{I(\Omega; \mathbb{C})}{[H(\Omega) + H(\mathbb{C})]/2},$$

where $H(\cdot)$ is entropy and $I(\Omega; \mathbb{C})$ is the mutual information between $\Omega$ and $\mathbb{C}$. Both CP and NMI lie in the range [0, 1], where a larger value implies higher performance.
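Both metrics are straightforward to compute from the predicted cluster assignments and the ground-truth labels; a minimal implementation (using the arithmetic-mean normalization for NMI, one common convention):

```python
import math
from collections import Counter

def purity(pred, true):
    """Clustering purity: each output cluster is credited with the count
    of its majority ground-truth class, normalized by the sample count."""
    total = 0
    for c in set(pred):
        members = [t for p, t in zip(pred, true) if p == c]
        total += Counter(members).most_common(1)[0][1]
    return total / len(pred)

def nmi(pred, true):
    """Normalized mutual information: I(pred; true) divided by the
    arithmetic mean of the two marginal entropies."""
    n = len(pred)
    joint = Counter(zip(pred, true))            # joint label counts
    cp, ct = Counter(pred), Counter(true)       # marginal counts
    h_p = -sum(v / n * math.log(v / n) for v in cp.values())
    h_t = -sum(v / n * math.log(v / n) for v in ct.values())
    mi = sum(v / n * math.log((v / n) / (cp[a] / n * ct[b] / n))
             for (a, b), v in joint.items())
    return mi / ((h_p + h_t) / 2)

# A relabeled-but-perfect clustering scores 1.0 on both metrics.
pred, true = [0, 0, 1, 1], ['b', 'b', 'a', 'a']
print(purity(pred, true), round(nmi(pred, true), 6))  # 1.0 1.0
```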
For comparative studies, we employed three baseline algorithms: $k$-means, GMM and DRN. $k$-means and GMM are two representative batch-based clustering algorithms, and the number of clusters should be given in advance. On the other hand, DRN and s-DRN are online learning algorithms, and the number of clusters increases in an incremental manner.
To reduce the effect of randomness, we conducted each experiment 100 times and report the average and the standard deviation of each metric. In addition, each experiment received the data instances in a different order. For $k$-means and GMM, we split the datasets into train and test sets with the ratio of 5:5. We chose this ratio because it showed the best performance for $k$-means and GMM after sweeping the ratio from 1:9 to 9:1. Moreover, we provided $k$-means and GMM with the ground-truth cluster numbers.
For DRN and s-DRN, we sequentially input data instances and did not provide the ground-truth cluster numbers. We set one vigilance parameter for both DRN and s-DRN. Parameters were obtained using the following metric:
where the weights are the reciprocals of the standard deviations of DBI, CP and NMI, respectively. We swept the vigilance parameter from 0.1 to 0.9 and found the best value (0.7 and 0.5 for DRN and s-DRN, respectively) according to (20). We use one vigilance parameter since the vigilance parameter cannot be fine-tuned in a real-world setting, where no prior knowledge of the dataset is given and data instances arrive sequentially.
IV-B Results and Analysis
Table I: Comparative results on the Balance Scale, Liver Disorder, Blood Transfusion, Banknote, Car Evaluation and Wholesale Customers datasets.
Table I summarizes the results of the comparative studies. s-DRN consistently displays superior performance over all six datasets, achieving small values for DBI and large values for CP and NMI. We note that s-DRN outperforms $k$-means and GMM on average although $k$-means and GMM were given the ground-truth cluster numbers and half of each dataset as a training set. The comparative studies corroborate that s-DRN guarantees satisfactory clustering performance in an online incremental manner compared to batch-based clustering algorithms. Moreover, the performance of s-DRN surpasses that of DRN over all six datasets, which verifies the effectiveness of the proposed node grouping algorithm.
In particular, the performance gap between DRN and s-DRN is the largest for the wholesale customers dataset. The large input scale of the dataset disrupts DRN's activation function and its performance deteriorates sharply. The result on the wholesale customers dataset confirms that the proposed activation function truly resolves the normalization problem. Fig. 3 further investigates the effect of input scale on clustering performance. We tested each algorithm on the liver disorder dataset while varying the input scale. The effect of input scale on the other algorithms, including s-DRN, is insignificant, while the performance of DRN is sensitively affected.
Fig. 4 illustrates the effect of the vigilance parameter on clustering performance for DRN and s-DRN. For all six datasets, we varied the vigilance parameter from 0.1 to 0.9 and observed the performance variation in DBI. As the figure shows, the clustering performance of s-DRN is stable over all vigilance values on all six datasets, whereas the clustering performance of DRN strongly depends on the value of the vigilance parameter. For quantitative analysis, we report the averages of the standard deviations of the DBI scores for DRN and s-DRN, which are 0.307 and 0.143, respectively.
V Conclusion
In this paper, we proposed a resonance-based online incremental clustering network, s-DRN, which is a stabilized model of DRN. The proposed s-DRN model resolves the normalization problem remaining in conventional methods through the proposed activation function; thus, s-DRN can effectively handle all input scales. Moreover, equipped with the proposed node grouping algorithm, s-DRN becomes robust to variations of the vigilance parameter, and the need to fine-tune the vigilance parameter disappears. In addition, the clustering performance improves with the proposed node grouping algorithm. A thorough examination of s-DRN through experiments on six real-world benchmark datasets established its effectiveness. We expect s-DRN can be applied to various real-world settings where no prior knowledge of sequentially incoming data is given.
In-Ug Yoon received the M.S. and B.S. degrees in Electrical Engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2018 and 2016, respectively. He is currently pursuing the Ph.D. degree at KAIST. His current research interests include anomaly detection, learning algorithms and computational memory systems.
Ue-Hwan Kim received the M.S. and B.S. degrees in Electrical Engineering from KAIST, Daejeon, Korea, in 2015 and 2013, respectively. He is currently pursuing the Ph.D. degree at KAIST. His current research interests include visual perception, service robot, cognitive IoT, computational memory systems, and learning algorithms.
Jong-Hwan Kim (F’09) received the Ph.D. degree in electronics engineering from Seoul National University, Korea, in 1987. Since 1988, he has been with the School of Electrical Engineering, KAIST, Korea, where he is leading the Robot Intelligence Technology Laboratory as KT Endowed Chair Professor. Dr. Kim is the Director for both of KoYoung-KAIST AI Joint Research Center and Machine Intelligence and Robotics Multi-Sponsored Research and Education Platform. His research interests include intelligence technology, machine intelligence learning, and AI robots. He has authored 5 books and 5 edited books, 2 journal special issues and around 400 refereed papers in technical journals and conference proceedings.
- (2012) Automatic feature selection for BCI: an analysis using the Davies-Bouldin index and extreme learning machines. In The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1–8.
- (2000) General fuzzy min-max neural network for clustering and classification. IEEE Transactions on Neural Networks 11(3), pp. 769–783.
- (2013) Adaptive resonance theory: how a brain learns to consciously attend, learn, and recognize a changing world. Neural Networks 37, pp. 1–47.
- (2013) Extensions of k-means-type algorithms: a new clustering framework by integrating intracluster compactness and intercluster separation. IEEE Transactions on Neural Networks and Learning Systems 25(8), pp. 1433–1446.
- (1996) IEEE Standard 754 for binary floating-point arithmetic. Lecture Notes on the Status of IEEE 754 (94720-1776), pp. 11.
- (2010) The fuzzy ART algorithm: a categorization method for supplier evaluation and selection. Expert Systems with Applications 37(2), pp. 1235–1240.
- (2018) A stabilized feedback episodic memory (SF-EM) and home service provision framework for robot and IoT collaboration. IEEE Transactions on Cybernetics (Early Access).
- (2006) Normalized mutual information based registration using k-means clustering and shading correction. Medical Image Analysis 10(3), pp. 432–439.
- (2019) ClusterGAN: latent space clustering in generative adversarial networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4610–4617.
- (2017) User preference-based dual-memory neural model with memory consolidation approach. IEEE Transactions on Neural Networks and Learning Systems 29(6), pp. 2294–2308.
- (2019) Kernel-based distance metric learning for supervised k-means clustering. IEEE Transactions on Neural Networks and Learning Systems 30(10), pp. 3084–3095.
- (2019) Developmental resonance network. IEEE Transactions on Neural Networks and Learning Systems 30(4), pp. 1278–1284.
- (2017) Deep ART neural model for biologically inspired episodic memory and its application to task performance of robots. IEEE Transactions on Cybernetics 48(6), pp. 1786–1799.
- (2013) Approximating Gaussian mixture model or radial basis function network with multilayer perceptron. IEEE Transactions on Neural Networks and Learning Systems 24(7), pp. 1161–1166.
- (2012) Neural modeling of episodic memory: encoding, retrieval, and forgetting. IEEE Transactions on Neural Networks and Learning Systems 23(10), pp. 1574–1586.
- (2019) Online topology learning by a Gaussian membership-based self-organizing incremental neural network. IEEE Transactions on Neural Networks and Learning Systems.