Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments
Abstract
The feasibility of deep neural networks (DNNs) to address data stream problems still requires intensive study because of the static and offline nature of conventional deep learning approaches. A deep continual learning algorithm, namely autonomous deep learning (ADL), is proposed in this paper. Unlike traditional deep learning methods, ADL features a flexible structure where its network can be constructed from scratch, in the absence of an initial network structure, via a self-constructing mechanism. ADL specifically addresses catastrophic forgetting by having a different-depth structure which is capable of achieving a trade-off between plasticity and stability. A network significance (NS) formula is proposed to drive the hidden node growing and pruning mechanism. A drift detection scenario (DDS) is put forward to signal distributional changes in data streams which induce the creation of a new hidden layer. The maximum information compression index (MICI) method plays an important role as a complexity reduction module eliminating redundant layers. The efficacy of ADL is numerically validated under the prequential test-then-train procedure in lifelong environments using nine popular data stream problems. The numerical results demonstrate that ADL consistently outperforms recent continual learning methods while characterizing the automatic construction of network structures.
Andri Ashfahani (SCSE, NTU, Singapore; andriash001@e.ntu.edu.sg) and Mahardhika Pratama (SCSE, NTU, Singapore; mpratama@ntu.edu.sg). Equal contribution. This work is supported by A*STAR-NTU-SUTD AI Partnership Grant No. RGANS1902. The code can be downloaded from https://bit.ly/2MoliVQ.
1 Background and Motivation
State-of-the-art theoretical studies show that increasing the depth of neural networks increases the representational and generalization power of neural networks (NNs) [12]. Nevertheless, the data stream problem remains an uncharted territory for conventional deep neural networks (DNNs). Unlike conventional data stream methods built upon a shallow network structure [1, 18, 19], DNNs potentially offer significant improvement in accuracy and the aptitude to handle unstructured data streams. Direct application of conventional DNNs to data stream analytics is often impossible because of their considerable computational and memory demands, making them infeasible for deployment under limited computational resources [14]. Ideally, data streams should be handled in a sample-wise manner without any retraining phase, to prevent the catastrophic forgetting problem and to scale with the nature of continual environments [14, 28]. Another challenge comes from the fixed and static structure of traditional DNNs [9]. In other words, the network capacity has to be estimated before the process runs. This trait does not mirror the dynamic and evolving characteristics of data streams.
The use of a flexible structure with growing and pruning mechanisms has picked up research attention in the DNN literature [19, 18, 20], where the key idea is to evolve the DNN's structure on demand. Incremental learning of the denoising autoencoder (DAE) realizes the structural learning mechanism via the network's loss and a hidden unit merging mechanism [29]. The underlying drawback of this approach lies in the over-dependence on problem-dependent predefined thresholds when growing and merging hidden units. Elastic weight consolidation (EWC) [10] and hedge backpropagation (HBP) [22] are proposed to train DNNs in the online situation, where the EWC method addresses the catastrophic forgetting problem by preventing the output weights of a new task from deviating too far from the old ones, while HBP realizes a direct connection of each hidden layer to the output layer which enables the representation of different concepts in each layer. However, these approaches call for a network initialization step and operate under a fixed capacity.
The progressive neural networks (PNN) [21], the dynamically expandable networks (DEN) [28] and incremental learning of DAE (DEVDAN) [17] are proposed to address the limited network capacity and catastrophic forgetting problems. PNN creates a new network structure for every new task, DEN grows hidden nodes whenever the loss criteria are not satisfied, while DEVDAN is capable of growing and pruning hidden units based on the estimation of network significance (NS). Nevertheless, the three approaches utilize a fixed-depth structure [9, 12]. It is understood from [27] that the addition of network depth leads to a more significant improvement of generalization power than the addition of hidden units, because it boosts the network capacity more substantially. To the best of our knowledge, the three approaches have not been tested under the prequential test-then-train scenario, which reflects a situation where data streams arrive without labels [14].
2 Problem Formulation
Continual learning of evolving data streams is defined as a learning approach over continuously generated data batches $B_1, B_2, \ldots, B_K$, where the number of data batches $K$ and the type of data distributions are unknown before the process runs. A batch $B_k$ can be either a single data point $x_k \in \mathbb{R}^{u}$ or a data batch of a particular size, $B_k \in \mathbb{R}^{N_k \times u}$, where $u$ and $N_k$ denote the dimension of the input space and the number of data points in a batch, respectively. Note that the batch size often varies across different time stamps. In data stream problems, data points come into the picture with the absence of true class labels [14]. The execution of the labelling process is subject to the access of the ground truth or expert knowledge. In other words, a delay is expected while revealing the true class labels $y_k$. A 1-of-$m$ encoding scheme can be applied to obtain the multi-output target matrix $Y_k \in \mathbb{R}^{N_k \times m}$, where $m$ is the number of target classes. This issue limits the feasibility of cross-validation or direct train-test partition methods as an evaluation protocol because those methods assume that the overall data batches are fully observable and risk loss of the data's temporal order [14, 13].
The data streams that a DNN is required to handle may originate from different data distributions, a situation known as concept drift. Specifically, there may exist a change of the joint-class posterior probability $P(y_t \mid x_t)$. Concept drift is commonly classified into two types: real and covariate [9]. The real drift is usually more severe than the covariate drift because the input variations lead to a shift of the decision boundary which decreases the classification performance. In addition, this leads to a model created from a previous concept becoming outdated. This characteristic shares some relevance with the multi-task learning problem where each data batch comes from a different task. Nevertheless, this setting differs from multi-task approaches in that all data batches are to be processed by a single model rather than relying on task-specific classifiers. Another problem of data streams lies in achieving a trade-off between plasticity and stability, the failure of which increases the risk of suffering from catastrophic forgetting [11]. These demands call for an online DNN model which is capable of incrementally constructing its network structure from scratch with respect to the data stream's distribution. In addition, a mechanism to flexibly reuse and retain the old knowledge, or to learn new knowledge, should be embedded to prevent catastrophic forgetting.
3 Proposed Methods
A fully elastic deep neural network (DNN), namely Autonomous Deep Learning (ADL), is proposed in this paper. ADL features an open structure where not only its hidden nodes can be self-organized but also its hidden layers can be constructed under the lifelong learning paradigm. These mechanisms enable ADL to perform dynamic resource allocation which tracks the dynamic variation of data streams [21, 28]. The adaptation of network width is governed by the network significance (NS) method which controls the creation of new hidden units and the pruning of inconsequential ones. The adaptation of network depth is driven by the drift detection scenario (DDS) where a new hidden layer is added if a drift is identified. Every hidden layer embraces a different concept seen in a different time window of the data stream [3]. The complexity reduction mechanism at the hidden layer level is implemented through the hidden layer merging procedure which quantifies the mutual information of hidden layers and coalesces those exhibiting high mutual information [20]. A new DNN structure is introduced which puts into perspective the different-depth concept. That is, every layer is connected to a softmax layer which produces a local output. The global output is obtained from the aggregation of the local outputs using a dynamic voting scenario. The generalization power of ADL is evaluated under the prequential test-then-train protocol with only a single epoch, where the data are first used to test the model before being exploited to update it.
The major contributions are elaborated as follows:
1) Different-depth network structure. Unlike a traditional DNN structure, where the final output relies on the last hidden layer, ADL puts forward the different-depth structure where there exists a direct connection of each layer to the output layer: a softmax layer is inserted in each hidden layer to produce a layer-specific output. The dynamic voting scheme is integrated to deliver the final classification decision, where every layer is assigned a voting weight adapted with a different intensity with respect to the layer's relevance. This approach is capable of overcoming the catastrophic forgetting problem because the network structure is constructed as a complete summary of the data distributions [21]. Moreover, the dynamic voting weight mechanism is designed with dynamic decaying rates with respect to the prequential error, which enables the strongest layer to dominate the voting process.
2) Network width adaptation. ADL features an elastic network width which supports the automatic generation of new hidden nodes and the pruning of inconsequential ones. This mechanism is controlled by the NS method [17] which estimates the network generalization power in terms of bias and variance. A new hidden node is added in the case of underfitting (high bias) while the pruning mechanism is activated in the case of overfitting (high variance). Another salient feature of NS is its independence from user-defined parameters, which enables plug-and-play operation. It uses an adaptive threshold which dynamically adapts to the bias and variance estimation. This work offers an extension of [17] to a deep network structure.
3) Network depth adaptation. The drift detection scenario (DDS) is employed to self-organize the depth of the network structure, where the depth increases if a drift is signalled. This idea is supported by the fact that the addition of a hidden layer induces more active regions than the addition of hidden units, thereby being able to rectify the high bias situation due to drift effectively [15]. Note that an active region here refers to the amount of unique representation carried by a hidden layer. In other words, DDS guides ADL to arrive at hidden layers carrying the different concepts of data streams. Furthermore, the DDS method detects the real drift, i.e., a variation of the input space causing a variation of the output space, via the evaluation of the accuracy matrix based on the Hoeffding's bounds method [7]. ADL also implements a complexity reduction scenario shrinking the depth of the network structure. This is achieved by the analysis of mutual information across hidden layers: a hidden layer sharing high correlation with others is discarded. This concept follows [18] but here it is applied in the context of DNNs.
4) Solution of catastrophic forgetting. The key property of ADL in addressing the catastrophic forgetting problem lies in the different-depth architecture which allows it to accommodate new knowledge while revisiting old knowledge with ease [21]. Moreover, the final output is produced by the dynamic voting scheme which enables ADL to flexibly give more emphasis either to the old knowledge or to the new. This is evident because each layer is assigned a unique voting weight which increases and decreases at a different rate. Moreover, the parameter tuning process is localized to the most relevant concept, namely the winning layer, while freezing the other layers so that old concepts are not perturbed.
4 Autonomous Deep Learning
This section explains the network structure of ADL and its learning policy, which is depicted in Figure 1.
4.1 Network structure and working principle.
The ADL is constructed from a multilayer perceptron (MLP). The first layer defines the input features, while the intermediate layers consist of multiple linear transformations interspersed with sigmoid functions. The hidden layers and hidden nodes of ADL can be automatically constructed, controlled by the DDS and the NS formula, respectively. ADL characterizes the different-depth structure formalized as follows:
(4.1)  $\hat{y}^{(l)} = \mathrm{softmax}\big(h^{(l)} W_{out}^{(l)} + c^{(l)}\big), \qquad \hat{Y} = \sum_{l=1}^{L} \beta^{(l)} \hat{y}^{(l)}$
From (4.1), it is observed that every hidden layer $l$ has a connection to a unique classifier producing the multiclass probability $\hat{y}^{(l)}$, where $L$ is the number of hidden layers. The network parameters of the $l$-th hidden layer are denoted as $\theta^{(l)}$; that is, $W^{(l)} \in \mathbb{R}^{u_l \times R_l}$, $b^{(l)} \in \mathbb{R}^{R_l}$, $W_{out}^{(l)} \in \mathbb{R}^{R_l \times m}$, $c^{(l)} \in \mathbb{R}^{m}$, where $R_l$ and $u_l$ are the number of hidden nodes and the number of inputs in the $l$-th hidden layer, respectively. It is worth noting that the dimension of those matrices changes according to the evolution of hidden nodes. Each hidden layer is assigned a voting weight $\beta^{(l)}$ which is dynamically adjusted by a dynamic penalty and reward factor $\rho^{(l)}$. The voting weights are normalized, $\sum_{l=1}^{L}\beta^{(l)} = 1$, to ensure the partition of unity. Finally, the predicted label is the class label embracing the highest output probability, obtained by combining the weighted hidden layer outputs, as per (4.1).
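The voting step in (4.1) can be sketched in a few lines. The following is a minimal illustration (Python with NumPy); the function name and data layout are assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def aggregate_votes(local_outputs, betas):
    """Weighted-voting sketch of (4.1): combine each layer's softmax output
    with its (normalized) voting weight and return the predicted class."""
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()                # enforce partition of unity
    Y = sum(b * np.asarray(y) for b, y in zip(betas, local_outputs))
    return int(np.argmax(Y))                   # class with the highest score
```

A layer with a dominant voting weight thus dominates the global decision, which is the intended behaviour of the dynamic voting scheme.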
ADL starts its learning process from scratch without an initial structure. ADL is simulated under the prequential test-then-train procedure where a data stream is first used for the testing process followed by the training process. This scenario reflects the fact that data streams arrive unlabelled. ADL consists of two learning stages: the high level learning and the low level learning. The former concerns the evolution of hidden layers, while the latter adjusts the network parameters and the number of hidden nodes of the winning layer using SGD and the NS formula, respectively, in a single-pass learning fashion. The winning layer is the hidden layer embracing the highest voting weight $\beta^{(l)}$. The voting weight is deemed an appropriate indicator of hidden layer performance since it is adjusted using a dynamic factor. Generally, the low level learning enables ADL to learn new knowledge while retaining the old. Moreover, it helps ADL to handle the virtual drift, that is, a distributional change of the input space [9]. After executing the low level learning, the generalization performance of ADL is evaluated using the labelled data batch.
The evaluation results are then exploited in the high level learning process which consists of three mechanisms. The first one is dynamic voting weight adaptation. Every voting weight $\beta^{(l)}$ is penalized if the $l$-th layer makes an incorrect prediction and, conversely, rewarded if it makes a correct prediction, using the dynamic penalty and reward factor $\rho^{(l)}$. Secondly, the hidden layer pruning scenario is carried out to discard a redundant hidden layer, defined as a hidden layer which is highly correlated to others yet has low performance. The MICI method [18] is employed to explore the mutual correlation of hidden layers. The first two mechanisms enable ADL to ignore the less useful representations and to emphasize the useful ones while obtaining the predicted output $\hat{Y}$. Lastly, network depth adaptation is conducted by executing DDS. This method monitors the statistics of the accuracy matrix and categorizes the behaviour into three stages, i.e., stable, warning, and drift. A new hidden layer is constructed when a drift is confirmed. The voting weight of the newly created layer and its decreasing factor are set to 1, while its network parameters are initialized via the low level learning phase using the current data batch. The last adjustment aims to increase the generalization and representational power of ADL. Figure 2 exemplifies the overall incremental learning process of ADL.
4.2 Network width adaptation.
This policy is carried out in the low level learning process which consists of two mechanisms as follows.
1) Hidden node growing. The hidden node growing mechanism is controlled by the NS formula which evaluates the generalization power of the network structure, formalized as the expectation of the squared error under a normal distribution as per (4.2). This expression leads to the bias-variance decomposition as per (4.3).
(4.2)  $NS = \int_{-\infty}^{\infty} \big(y - \hat{y}\big)^2\, p(x)\, dx = E\big[(y - \hat{y})^2\big]$

(4.3)  $NS = \big(y - E[\hat{y}]\big)^2 + \big(E[\hat{y}^2] - E[\hat{y}]^2\big) = Bias^2 + Var$
The solution of (4.2) requires calculating $E[\hat{y}]$. Note that $\hat{y}$ is a deterministic function of the hidden layer output, that is, the input of the softmax classifier. Therefore, the key to solving the definite integral in (4.2) is the solution of $E[h]$. Suppose $x$ possesses a normal distribution; the probability density function is given as $p(x) = \mathcal{N}(x; \mu, \sigma^2)$, where $\mu$ and $\sigma$ are the recursive mean and recursive standard deviation of the data stream, which can be calculated easily. It is worth noting that a sigmoid function $s(a)$ can be approximated by a probit function $\Phi(\xi a)$, where $\Phi(a) = \int_{-\infty}^{a} \mathcal{N}(\theta; 0, 1)\, d\theta$ and $\xi^2 = \pi/8$. Since the Gaussian integral of a probit function is another probit function [16], it yields:
(4.4)  $E[h^{(1)}] \approx s\!\left(\dfrac{\mu W^{(1)} + b^{(1)}}{\sqrt{1 + \pi\,\big(\sigma W^{(1)}\big)^2 / 8}}\right)$
Next, $E[\hat{y}]$ can be obtained by generalizing (4.4) via an $l$-times forward-chaining operation. It yields:
(4.5)  $E[\hat{y}] = \mathrm{softmax}\big(E[h^{(l)}]\, W_{out}^{(l)} + c^{(l)}\big)$

(4.6)  $HS_i = E\big[h_i^{(l)}\big], \quad i = 1, \ldots, R_l$
After that, the bias can be calculated by substituting $E[\hat{y}]$ into the bias term, $Bias^2 = (y - E[\hat{y}])^2$. This approach is different from the loss function used in [28] because, while approximating the generalization of a DNN, the bias formula takes into account the influence of all past and future samples under the assumption of a normal distribution. A high bias indicates an underfitting situation which can be circumvented by increasing the network capacity.
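The probit-based expectation underlying the bias estimate can be illustrated for a scalar input. The following sketch assumes $x \sim \mathcal{N}(\mu, \sigma^2)$ with a single weight and bias; names are illustrative, not the paper's code:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def expected_sigmoid(mu, sigma, w, b):
    """Probit approximation of E[s(w*x + b)] for x ~ N(mu, sigma^2):
    E[s(a)] ~= s((w*mu + b) / sqrt(1 + pi*(sigma*w)^2 / 8))."""
    return sigmoid((w * mu + b) / np.sqrt(1.0 + np.pi * (sigma * w) ** 2 / 8.0))

def squared_bias(y, ey_hat):
    """Bias term of (4.3) once E[y_hat] is available."""
    return (y - ey_hat) ** 2
```

For a multi-layer network, the same approximation is applied layer by layer (forward chaining), feeding each layer's expected output into the next.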
The hidden node growing condition is derived from the $k$-sigma rule adopted from the theory of statistical process control [8]. However, instead of using the binomial distribution to calculate the mean and variance, ADL directly utilizes the bias itself ($Bias^2$) because the hidden node growing strategy evaluates the real-valued bias instead of the accuracy score. The high bias problem, triggering the construction of a hidden node in the $l$-th layer, is formulated as follows:
(4.7)  $\mu_{Bias}^{t} + \sigma_{Bias}^{t} \geq \mu_{Bias}^{min} + \pi\, \sigma_{Bias}^{min}$
where $\pi$ governs the confidence degree of the sigma rule. By design, $\pi$ is a function of $Bias^2$ and revolves around $[1, 2]$. A high bias causes $\pi$ to return a low confidence level, close to 1, realizing around a 68.2% confidence degree. Conversely, a high confidence level, close to 2, equivalent to around 95.2%, is generated by $\pi$ when the bias is low. This provides flexibility for the hidden node growing mechanism and eliminates the involvement of a problem-specific threshold. $\mu_{Bias}^{t}, \sigma_{Bias}^{t}$ are the recursive mean and standard deviation of $Bias^2$ up to the $t$-th time instant, whereas $\mu_{Bias}^{min}, \sigma_{Bias}^{min}$ denote the minimum values of those statistics up to the $t$-th time instant, reset whenever (4.7) is satisfied. Equation (4.7) signifies the existence of a changing data distribution represented by the increase of the network bias. The network bias should decrease or at least be stable when there is no drift in the data streams. When (4.7) is satisfied, a hidden node is added in the $l$-th hidden layer, $R_l = R_l + 1$, and the new network parameters of the added hidden node are initialized using Xavier initialization.
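The growing test can be sketched with running statistics. In the sketch below, the adaptive factor $1.3\exp(-Bias^2)+0.7$ and the strict inequality are assumptions made for this illustration, not the paper's exact settings:

```python
import numpy as np

class GrowthDetector:
    """Toy sketch of the hidden-node growing test (4.7): grow when the
    running bias statistic exceeds its historical minimum by an adaptive
    sigma-rule margin. Names are illustrative, not the authors' code."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                        # Welford accumulator
        self.min_mean = np.inf
        self.min_std = np.inf

    def update(self, bias2):
        # recursive mean / std of the estimated squared bias
        self.n += 1
        delta = bias2 - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (bias2 - self.mean)
        std = np.sqrt(self.m2 / self.n)
        self.min_mean = min(self.min_mean, self.mean)
        self.min_std = min(self.min_std, std)
        k = 1.3 * np.exp(-bias2) + 0.7       # adaptive confidence factor
        grow = self.mean + std > self.min_mean + k * self.min_std
        if grow:                             # reset the minima after growing
            self.min_mean, self.min_std = self.mean, std
        return grow
```

With a stable bias the statistic hugs its minimum and no node is added; a sustained jump in bias trips the margin and signals growth.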
2) Hidden node pruning. It is derived from the same principle as the hidden node growing mechanism, yet it exploits the variance $Var$ instead of $Bias^2$. A high variance, i.e. overfitting, should be handled by reducing the network complexity. Before calculating $Var = E[\hat{y}^2] - E[\hat{y}]^2$, it is required to derive the expressions of $E[\hat{y}^2]$ and $E[\hat{y}]^2$. The second expression can be obtained easily by applying the squared operation to (4.5). It is worth noting that $x$ is an IID variable. Therefore, $E[\hat{y}^2]$ can be obtained by first calculating the expectation at the first hidden layer and then forward-passing the result to the $l$-th hidden layer, similar to the way of calculating (4.5), yet taking the squared statistics as the initial input instead of $x$.
The hidden node pruning condition implements the same principle as the growing part, where statistical process control is adopted to identify the high variance problem, as per (4.8). Unlike the growing condition in (4.7), the $\chi$-part is multiplied by 2 to avoid a direct-pruning-after-growing problem. It is worth mentioning that the addition of a hidden node leads to an increase of the network variance which progressively decreases as new information arrives. $\chi$ is designed similarly to $\pi$ yet takes $Var$ as the input instead of $Bias^2$. Consequently, the sigma rule revolves in the range of $[2, 4]$, providing around a 95.2% to 99.9% confidence level.
(4.8)  $\mu_{Var}^{t} + \sigma_{Var}^{t} \geq \mu_{Var}^{min} + 2\chi\, \sigma_{Var}^{min}$
If (4.8) is satisfied, the pruning scenario is undertaken to remove the weakest hidden node in the $l$-th hidden layer. The least significant hidden node can be identified by calculating (4.6), that is, the importance of all hidden nodes in the $l$-th hidden layer. The pruning mechanism discards the hidden node with the lowest significance, formalized as $i^{*} = \arg\min_{i=1,\ldots,R_l} HS_i$. Consequently, the number of hidden nodes decreases to $R_l - 1$ as an effort to address the overfitting dilemma. Note that a small $HS_i$ value indicates that the $i$-th hidden node plays a small role in producing the output and thus can be discarded without a significant loss of accuracy. The concept of the statistical contribution of a hidden node can be categorized as a performance estimation strategy of neural architecture search because it estimates the generalization power of the network on unseen data [6].
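Pruning the weakest node amounts to deleting its slices from the layer's weight matrices. A minimal sketch, assuming (purely for illustration) that $W$ stores one row of incoming weights per hidden node and $W_{out}$ one column of outgoing weights per node:

```python
import numpy as np

def prune_weakest_node(W, b, W_out, hs):
    """Remove the hidden node with the smallest statistical contribution HS.
    Shapes assumed: W (nodes, inputs), b (nodes,), W_out (classes, nodes)."""
    i = int(np.argmin(hs))                     # weakest node
    W_new = np.delete(W, i, axis=0)            # drop its incoming weights (row)
    b_new = np.delete(b, i)
    W_out_new = np.delete(W_out, i, axis=1)    # drop its outgoing weights (col)
    return W_new, b_new, W_out_new
```

The surviving rows and columns keep their relative order, so no other node's parameters are disturbed by the deletion.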
4.3 Network depth adaptation.
ADL realizes the different-depth structure using the DDS as an effort to deal with concept drift. It also utilizes the MICI method as a complexity reduction procedure. The following explains those mechanisms.
1) Hidden layer growing. A new hidden layer is constructed if there is a concept change in the data streams. The DDS signals a drift status by monitoring the accuracy matrix. The drift situation signifies that the network is underfitting, as indicated by the deteriorating statistic of the accuracy matrix, meaning that ADL's performance degrades. In other words, the crafted knowledge alone cannot adequately describe the new data distribution. This dilemma can be solved by increasing the network capacity in two ways, namely hidden node growing or hidden layer expansion. The second option, however, augments the network capacity more significantly since the extension of depth increases the number of active regions [15] more than the expansion of network width does. In addition, the addition of network depth has been shown theoretically to be more meaningful than the addition of hidden units [27].
The accuracy matrix $A$ stores the generalization performance of the testing phase. It records 1 if a misclassification happens, whereas 0 is stored if ADL correctly classifies an observation. The switching point is determined by evaluating two accuracy matrices, $A$ and $B$, where $B$ contains the entries of $A$ up to the hypothetical switching point $cut$, which can be found using the following condition:
(4.9)  $\hat{A} + \epsilon_A \geq \hat{B} + \epsilon_B$
where $\hat{A}, \hat{B}$ and $\epsilon_A, \epsilon_B$ denote the statistics (empirical means) and the Hoeffding's error bounds of the accuracy matrices $A$ and $B$. The condition (4.9) spots a transition between two concepts where the error statistic starts to increase. Note that $\hat{B}$ is expected to decrease or at least be constant in the stable phase. This strategy performs better while dealing with sudden drift, the most common type of drift, yet it is less sensitive to gradual drift, where change appears slowly, because every sample is treated equally without any weights [7]. The Hoeffding's error bounds are formulated as follows:
(4.10)  $\epsilon_{A,B,D,W} = (b - a)\sqrt{\dfrac{1}{2\,\mathrm{size}}\ln\dfrac{1}{\alpha}}$
where $\mathrm{size}$ denotes the size of the respective accuracy matrix and $\alpha$ denotes the significance level of the Hoeffding's bound. Note that $\alpha$ is statistically justifiable since it is associated with the confidence level $1 - \alpha$. It is not classified as a problem-specific threshold because a high $\alpha$ provides a low confidence level whereas a low $\alpha$ returns a high one. The values $[a, b]$ indicate the minimum and the maximum entries of the accuracy matrices.
(4.11)  $\hat{C} - \hat{B} \geq \epsilon_D$

(4.12)  $\epsilon_D > \hat{C} - \hat{B} \geq \epsilon_W$
The condition (4.9) aims at finding the cutting point $cut$ where the accuracy matrix is no longer in a decreasing trend. Once it is spotted, the accuracy matrix $C$, containing the entries after the cutting point, can be formed. The matrix $B$ is used as a reference to decide whether the null hypothesis is valid or not. The null hypothesis evaluates the increase of the statistics of the accuracy matrix which verifies the drift condition. The drift status is signalled when the null hypothesis is rejected at the size $\alpha_D$, as per (4.11). Conversely, the warning status is returned when the null hypothesis is rejected at the size $\alpha_W$, formalized in (4.12), which aims to signal gradual drift. The values of $\epsilon_D$ and $\epsilon_W$ can be calculated via (4.10) using $\alpha_D$ and $\alpha_W$. If none of those conditions is satisfied, the stable condition is returned.
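The drift test can be illustrated with two error-indicator windows around the cutting point. A toy sketch follows; the $\alpha$ values and the window layout are placeholders, not the paper's settings:

```python
import numpy as np

def hoeffding_eps(a, b, size, alpha):
    """Hoeffding error bound for the mean of `size` values bounded in [a, b]."""
    return (b - a) * np.sqrt(np.log(1.0 / alpha) / (2.0 * size))

def drift_status(B, C, alpha_d=0.001, alpha_w=0.005):
    """Three-way decision on two error-indicator windows (1 = misclassified):
    B holds the pre-cut entries, C the post-cut entries of the accuracy matrix."""
    B, C = np.asarray(B, float), np.asarray(C, float)
    lo, hi = min(B.min(), C.min()), max(B.max(), C.max())
    gap = C.mean() - B.mean()                  # increase of the error statistic
    if gap > hoeffding_eps(lo, hi, len(C), alpha_d):
        return "drift"
    if gap > hoeffding_eps(lo, hi, len(C), alpha_w):
        return "warning"
    return "stable"
```

Since $\alpha_W > \alpha_D$ gives a smaller bound, mild deteriorations fall into the warning band before a full drift is declared.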
A drift condition (4.11) displays the phase where the empirical mean $\hat{B}$ is lower than $\hat{C}$, indicating evidence that the classification performance degenerates. This case signals the hidden layer growing procedure to increase the network depth, $L = L + 1$. The newly created layer is then trained in the low level learning phase using the current data batch. Meanwhile, the network parameters of the other hidden layers are frozen to preserve the old knowledge, which prevents catastrophic forgetting. The warning phase (4.12) indicates a transition situation where more observations are required to signal a concept drift. For this reason, the current data batch is stored in a buffer and is exploited to initialize a new hidden layer if the drift occurs in the next time stamp. The stable condition leads to the adjustment of the current structure via the low level learning phase and the deletion of the data in the buffer.
2) Hidden layer pruning. ADL employs the hidden layer pruning mechanism to handle redundancy across different hidden layers. This is achieved by analyzing the correlation of the hidden layer outputs [18]. Based on the manifold learning concept, a redundant hidden layer embracing a similar concept is expected not to contribute an important representation of the given problem, or at least one very well covered by other hidden layers, because it does not open the manifold of the learning problem to a unique representation. The MICI method is utilized to explore the correlation between two hidden layers; it yields the pruning condition as follows:
(4.13)  $MICI\big(h^{(i)}, h^{(j)}\big) \leq \delta$
$\delta$ is a user-defined threshold which is proportional to the maximum correlation index; the lower the value, the less often the pruning mechanism is executed. If (4.13) is satisfied, the pruning process removes the hidden layer with the lowest voting weight, i.e. $\min(\beta^{(i)}, \beta^{(j)})$. Note that $\beta^{(l)}$ is expected to be an appropriate indicator of a hidden layer's performance because it is dynamically adjusted using the dynamic decreasing factor. Consequently, the direct connection from the pruned hidden layer to the output is deleted, yet that hidden layer still performs the forward-pass operation providing its representation to the subsequent layer. This strategy also accelerates the model update because the pruned hidden layer is ignored in the learning procedure. It can be regarded as a dropout scenario in the realm of deep learning [24], yet ADL relies on the similarity analysis (4.13) instead of a probabilistic approach. The illustration of the incremental learning aspect of ADL is depicted in Figure 2.
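The MICI of two signals is the smaller eigenvalue of their $2\times 2$ covariance matrix; it approaches zero when the signals are (almost) linearly dependent, i.e. one of them is redundant. A minimal sketch on two flattened layer-output signals (illustrative only):

```python
import numpy as np

def mici(u, v):
    """Maximum information compression index of two signals: the smallest
    eigenvalue of their 2x2 covariance matrix. Near zero => (almost)
    linearly dependent, so one of the two representations is redundant."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    var_u, var_v = np.var(u), np.var(v)
    rho = np.corrcoef(u, v)[0, 1]              # Pearson correlation
    s = var_u + var_v
    return 0.5 * (s - np.sqrt(s * s - 4.0 * var_u * var_v * (1.0 - rho ** 2)))
```

Comparing `mici(u, v)` against the threshold then reproduces the pruning test of (4.13): small values flag a redundant pair of layers.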
4.4 The solution of catastrophic forgetting.
Having a flexible structure embracing different depths enables ADL to address the catastrophic forgetting problem via two mechanisms elaborated in this section.
1) Dynamic voting weight adaptation. Every voting weight $\beta^{(l)}$ is dynamically adjusted by a unique decreasing factor $\rho^{(l)}$ which plays an important role while adapting to concept drift. A high value of $\rho^{(l)}$ provides slow adaptation to rapidly changing environments, yet it handles gradual or incremental drift very well. Conversely, a low value of $\rho^{(l)}$ gives frequent adaptation to sudden drift, yet it forfeits stability while dealing with gradual drift where data samples embrace two distributions. This issue is handled by continuously adjusting $\rho^{(l)}$ to represent the performance of each hidden layer using a step size $\varsigma$, as per (4.14). This is realized by setting $\rho^{(l)}$ to either $\rho^{(l)} + \varsigma$ or $\rho^{(l)} - \varsigma$ when the $l$-th hidden layer returns a correct or an incorrect prediction, respectively. This also considers the fact that the voting weight of a hidden layer embracing a relevant representation should decrease slowly when making a misclassification, while that embracing an irrelevant representation should increase slowly when returning a correct prediction.
(4.14)  $\rho^{(l)} \leftarrow \rho^{(l)} \pm \varsigma$

(4.15)  $\beta^{(l)} \leftarrow \min\big(\beta^{(l)}(1 + \rho^{(l)}),\, 1\big)$

(4.16)  $\beta^{(l)} \leftarrow \beta^{(l)} \rho^{(l)}$
The reward and penalty scenarios are carried out by increasing and decreasing the voting weight based on the performance of the respective hidden layer. The reward is given when a hidden layer returns a correct prediction, as per (4.15). Conversely, a hidden layer is penalized if it makes an incorrect prediction, as per (4.16). The reward scenario is capable of handling cyclic drift by reactivating a hidden layer embracing a small $\beta^{(l)}$. Unlike its predecessors in [19, 18], the aims of the reward and penalty scenario carried out here are to augment the impact of a strong hidden layer by providing a high reward and a low penalty, and to diminish a weak hidden layer by giving a small reward and a high penalty. Note that ADL possesses a different-depth structure where every hidden layer has a direct connection to the output. As a result, the classification decision should consider the relevance of each hidden layer based on the prequential error. This approach aligns with the DDS as a method to increase the network depth because it guarantees that ADL embraces a different concept in each hidden layer.
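The reward and penalty updates can be condensed into one function. In the sketch below the step size and the clipping bounds on $\rho$ are assumptions for this illustration:

```python
def update_vote(beta, rho, correct, step=0.01):
    """Reward/penalty sketch in the spirit of (4.14)-(4.16): the dynamic
    factor rho tracks the layer's recent record, and the voting weight beta
    is multiplicatively rewarded (capped at 1) or penalized."""
    # (4.14): nudge the dynamic factor toward the layer's recent performance
    rho = min(rho + step, 1.0) if correct else max(rho - step, 0.0)
    # (4.15)/(4.16): reward on a correct prediction, penalize otherwise
    beta = min(beta * (1.0 + rho), 1.0) if correct else beta * rho
    return beta, rho
```

Because $\rho$ itself drifts with the layer's record, a consistently strong layer enjoys large rewards and mild penalties, while a weak layer is damped quickly, which is exactly the asymmetry described above.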
2) Winning layer adaptation. The SGD method is employed to adjust the network parameters of the winning layer using the labelled data batch in a single-pass manner. It is derived from cross-entropy loss minimization. However, instead of using the global error derivative, ADL exploits the local one which is backpropagated from the winning layer's classifier. By this approach, each hidden layer is optimized based on a different objective which embraces a different concept. This enables ADL to improve its generalization power while reducing the risk of suffering from the catastrophic forgetting problem. Note that the parameter adjustment mechanism is executed on a dynamic network which consists of a single hidden node in the beginning and can grow on demand.
5 Empirical Evaluation
This section outlines the empirical study of ADL in which it is compared against four algorithms.
1) Experimental setting. ADL is numerically validated using nine prominent data stream problems, i.e., Permuted MNIST [11], Weather [5], KDDCup [25], SEA [26], Hyperplane [4], SUSY, Hepmass [2], RLCPS [23], and RFID localization [1]. The first five characterize non-stationary properties, while the others feature prominent characteristics for examining the performance of data stream algorithms: big size, high-dimensional input features, etc. ADL is compared against a fixed-structure DNN to observe the kind of improvement produced by ADL while embracing the flexible different-depth structure. The significance levels $\alpha_D$ and $\alpha_W$ are fixed across all problems; it is worth noting that $\alpha$ determines the confidence level $1 - \alpha$ of the Hoeffding bound, and the selected values return very high confidence levels close to 100%. The DNN network structure is initialized before execution. ADL is also compared to another deep stacked network embracing a flexible different-depth structure, that is, DEVFNN (DFN) [20], as well as against pEnsemble+ (pE+) [18] and pEnsemble (pE) [19], to present the improvement over an evolving ensemble structure.
Table 1: Numerical results of ADL, pE+, pE, DFN, and DNN on the nine data stream problems (SUSY, Hepmass, RLCPS, RFID, Permuted MNIST, Weather, KDDCup, SEA, and Hyperplane) in terms of classification rate, execution time (ET), hidden layers (HL), hidden nodes (HN), and number of parameters (NoP). pE+, pE, and DFN are marked NA on Permuted MNIST because they do not scale to this problem.
The performance of all algorithms is assessed using five metrics: classification rate, execution time (ET), HL, HN, and the number of parameters (NoP). The prequential evaluation is conducted in a single-pass mode to simulate real data stream environments, and the numerical results are averaged across all time stamps except the execution time. HL denotes the number of hidden-layer-to-output connections in ADL, the number of ensemble members in pEnsemble and pEnsemble+, and the number of stacked building units in DEVFNN. HN represents the total number of nodes in ADL and DNN, while in the remaining methods it signifies the total number of fuzzy rules. All experiments are executed in the same computational environment, under MATLAB with an Intel(R) Xeon(R) CPU E5-1650 @3.20 GHz processor and 16 GB RAM, to assure fair comparison.
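The prequential test-then-train loop described above can be sketched as follows; `model_predict`, `model_train`, and the batch stream are hypothetical interfaces standing in for the actual implementations of ADL and the baselines.

```python
import numpy as np

def prequential_evaluate(model_predict, model_train, stream):
    """Prequential test-then-train: each incoming batch is FIRST used to
    test the model, THEN used to train it, so every sample is seen exactly
    once and the accuracy reflects performance on unseen data."""
    accs = []
    for x_batch, y_batch in stream:
        y_hat = model_predict(x_batch)           # 1) test on the new batch
        accs.append(np.mean(y_hat == y_batch))   # record per-time-stamp accuracy
        model_train(x_batch, y_batch)            # 2) train in a single pass
    return float(np.mean(accs))                  # average over all time stamps
```

A learner that never revisits past batches in this loop satisfies the single-pass constraint that the experiments above impose.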
2) Numerical results. From Table 1, ADL delivers a clear accuracy improvement over the consolidated algorithms. This demonstrates that the fully elastic network of ADL, where hidden nodes and hidden layers can be added or discarded on demand, arrives at an appropriate complexity for a specific problem and is competitive with the three evolving algorithms. ADL also delivers the fastest execution time among the evolving algorithms in most cases. This result is expected because ADL is built upon an MLP, while those algorithms are constructed via the multi-classifier concept, which incurs high computational and space complexity. This efficiency enables ADL to process the high-dimensional Permuted MNIST problem with improved accuracy over DNN, whereas the evolving algorithms do not scale to this problem. In terms of resolving catastrophic forgetting, ADL delivers the most encouraging performance, as evidenced by the results on the big datasets, SUSY and Hepmass, where ADL attains the highest classification rate. These results are reasonable since ADL features a different-depth structure supported by dynamic voting weights and winning layer adaptation, which enables ADL to flexibly recall previous knowledge or craft new knowledge.
6 Conclusion
This paper presents a novel self-organizing DNN, namely ADL. It possesses a flexible different-depth structure where the network can be automatically constructed from scratch in the absence of problem-specific user-defined parameters. The adaptation of network width is controlled by the estimation of bias and variance, while the network can be deepened using the drift detection method. Possessing a different-depth structure is the key characteristic of ADL in addressing the catastrophic forgetting problem in the lifelong learning environment: it enables ADL to put more emphasis on the most relevant layer via the dynamic voting scenario and winning layer adaptation. Our empirical evaluation validates the effectiveness of ADL in dealing with non-stationary data streams under the prequential test-then-train protocol, and demonstrates an increase in performance over a fixed-structure DNN of the same network complexity. Future work should investigate the feasibility of ADL in handling unstructured data streams.
References
- [1] A. Ashfahani, M. Pratama, E. Lughofer, Q. Cai, and H. Sheng, An Online RFID Localization in the Manufacturing Shopfloor, Springer International Publishing, 2019, pp. 287–309.
- [2] P. Baldi, P. D. Sadowski, and D. Whiteson, Searching for exotic particles in high-energy physics with deep learning, Nature Communications, 5 (2014), p. 4308.
- [3] Y. Bengio, A. Courville, and P. Vincent, Representation learning: A review and new perspectives, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), pp. 1798–1828.
- [4] A. Bifet, G. Holmes, R. Kirkby, and B. Pfahringer, Moa: Massive online analysis, J. Mach. Learn. Res., 11 (2010), pp. 1601–1604.
- [5] G. Ditzler and R. Polikar, Incremental learning of concept drift from streaming imbalanced data, IEEE Trans. on Knowl. and Data Eng., 25 (2013), pp. 2283–2301.
- [6] T. Elsken, J. H. Metzen, and F. Hutter, Neural architecture search: A survey, arXiv preprint arXiv:1808.05377, (2018).
- [7] I. Frías-Blanco, J. del Campo-Ávila, G. Ramos-Jiménez, R. Morales-Bueno, A. Ortiz-Díaz, and Y. Caballero-Mota, Online and non-parametric drift detection methods based on Hoeffding's bounds, IEEE Transactions on Knowledge and Data Engineering, 27 (2015), pp. 810–823.
- [8] J. Gama, R. Fernandes, and R. Rocha, Decision trees for mining data streams, Intell. Data Anal., 10 (2006), pp. 23–45.
- [9] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, A survey on concept drift adaptation, ACM Comput. Surv., 46 (2014), pp. 44:1–44:37.
- [10] F. Huszár, Note on the quadratic penalties in elastic weight consolidation, Proceedings of the National Academy of Sciences, (2018).
- [11] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, Overcoming catastrophic forgetting in neural networks, arXiv preprint arXiv:1612.00796, (2016).
- [12] A. Krizhevsky, I. Sutskever, and G. Hinton, Imagenet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds., Curran Associates, Inc., 2012.
- [13] A. Liu, J. Lu, F. Liu, and G. Zhang, Accumulating regional density dissimilarity for concept drift detection in data streams, Pattern Recognition, 76 (2018), pp. 256–272.
- [14] J. Lu, A. Liu, F. Dong, F. Gu, J. Gama, and G. Zhang, Learning under concept drift: A review, IEEE Transactions on Knowledge and Data Engineering, (2018).
- [15] G. F. Montúfar, R. Pascanu, K. Cho, and Y. Bengio, On the number of linear regions of deep neural networks, in Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, 2014, pp. 2924–2932.
- [16] K. P. Murphy, Machine Learning: A Probabilistic Perspective, The MIT Press, 2012.
- [17] M. Pratama, A. Ashfahani, Y. S. Ong, S. Ramasamy, and E. Lughofer, Autonomous Deep Learning: Incremental Learning of Denoising Autoencoder for Evolving Data Streams, ArXiv e-prints, (2018).
- [18] M. Pratama, E. Dimla, E. Lughofer, W. Pedrycz, and T. Tjahjowidodo, Online tool condition monitoring based on parsimonious ensemble+, IEEE Transactions Cybernetics, (2018).
- [19] M. Pratama, W. Pedrycz, and E. Lughofer, Evolving ensemble fuzzy classifier, IEEE Transactions on Fuzzy Systems, (2018), pp. 1–1.
- [20] M. Pratama, W. Pedrycz, and G. I. Webb, An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams, ArXiv e-prints, (2018).
- [21] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell, Progressive neural networks, CoRR, abs/1606.04671 (2016).
- [22] D. Sahoo, Q. D. Pham, J. Lu, and S. C. Hoi, Online deep learning: Learning deep neural networks on the fly, arXiv preprint arXiv:1711.03705, (2017).
- [23] M. Sariyar, A. Borg, and K. Pommerening, Controlling false match rates in record linkage using extreme value theory, Journal of Biomedical Informatics, 44 (2011), pp. 648–654.
- [24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, The Journal of Machine Learning Research, 15 (2014), pp. 1929–1958.
- [25] S. J. Stolfo, W. Fan, W. Lee, A. Prodromidis, and P. K. Chan, Cost-based modeling for fraud and intrusion detection: Results from the JAM project, in Proceedings of the 2000 DARPA Information Survivability Conference and Exposition, IEEE Computer Press, 2000, pp. 130–144.
- [26] W. N. Street and Y.-S. Kim, A streaming ensemble algorithm (sea) for large-scale classification, in Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’01, New York, NY, USA, 2001, ACM, pp. 377–382.
- [27] R. Eldan and O. Shamir, The power of depth for feed-forward neural networks, in Proceedings of the 29th Conference on Learning Theory, JMLR Workshop and Conference Proceedings, 49 (2016).
- [28] J. Yoon, E. Yang, J. Lee, and S. J. Hwang, Lifelong learning with dynamically expandable networks, ICLR, 2018.
- [29] G. Zhou, K. Sohn, and H. Lee, Online incremental feature learning with denoising autoencoders, Journal of Machine Learning Research, 22 (2012), pp. 1453–1461.