Deep Incremental Boosting

Alan Mosca and George D. Magoulas
Department of Computer Science and Information Systems
Birkbeck, University of London
Malet Street, London - UK
Abstract

This paper introduces Deep Incremental Boosting, a new technique derived from AdaBoost and specifically adapted to Deep Learning methods, that reduces the required training time and improves generalisation. We draw inspiration from Transfer of Learning approaches to reduce the start-up time of training each incremental Ensemble member. We present a set of preliminary experiments on common Deep Learning datasets and discuss the potential improvements Deep Incremental Boosting brings to traditional Ensemble methods in Deep Learning.

1 Introduction

AdaBoost [schapire90] is considered a successful Ensemble method and is commonly used in combination with traditional Machine Learning algorithms, especially Boosted Decision Trees [dietterich2000experimental]. One of the main principles behind it is the additional emphasis given to the so-called hard-to-classify examples from a training set.

Deep Neural Networks have also had great success on many visual problems, and there are a number of benchmark datasets in this area where the state-of-the-art results are held by some Deep Learning algorithm [wan2013regularization, graham14a].

Ideas from Transfer of Learning have found applications in Deep Learning; for example, in Convolutional Neural Networks (CNNs), sub-features learned early in the training process can be carried forward to a new CNN in order to improve generalisation on a new problem of the same domain [yosinski2014transferable]. It has also been shown that these Transfer of Learning methods reduce the “warm-up” phase of training, in which a randomly-initialised CNN would have to re-learn basic feature selectors from scratch.

In this paper, we explore the synergy of AdaBoost and Transfer of Learning to accelerate this initial warm-up phase of training each new round of boosting. The proposed method, named Deep Incremental Boosting, exploits additional capacity embedded into each new round of boosting, which increases generalisation without adding much training time. When tested on common Deep Learning benchmark datasets, the new method beats traditional Boosted CNNs in a shorter training time.

The paper is structured as follows. Section 2 presents an overview of prior work on which the new development is based. Section 3 presents the new learning algorithm. Section 4 reports the methodology of our preliminary experimentation and the results. Section 5 provides examples where state-of-the-art models have been used as the base classifiers for Deep Incremental Boosting. Lastly, Section 6 draws conclusions from our experiments and outlines possible avenues for further development.

2 Prior Work

This section gives an overview of previous work and algorithms on which our new method is based.

2.1 AdaBoost

AdaBoost [schapire90] is a well-known Ensemble method, which has a proven track record of improving performance. It is based on the principle of training Ensemble members in “rounds”, at each round increasing the importance of training examples that were misclassified in the previous round. The final Ensemble is then aggregated using weights calculated during the training. Algorithm 1 shows the common AdaBoost.M2 [freundschapire96] variant. This variant is generally considered better for multi-class problems, such as those used in our experimentation; however, the same changes we apply to AdaBoost.M2 can be applied to any other variant of AdaBoost.

  
   D_1(i, y) = 1/(m(k − 1)) for all i = 1, …, m and all labels y ≠ y_i
   t = 1
  while t ≤ T do
      X_t ← pick from original training set with distribution D_t
      h_t ← train new classifier on X_t
      ε_t = (1/2) Σ_{i, y ≠ y_i} D_t(i, y) (1 − h_t(x_i, y_i) + h_t(x_i, y))
      β_t = ε_t / (1 − ε_t)
      D_{t+1}(i, y) = (D_t(i, y) / Z_t) · β_t^{(1/2)(1 + h_t(x_i, y_i) − h_t(x_i, y))}
      where Z_t is a normalisation factor such that D_{t+1} is a distribution
      t = t + 1
  end while
  output final hypothesis H(x) = argmax_y Σ_{t=1}^{T} log(1/β_t) h_t(x, y)
Algorithm 1 AdaBoost.M2
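For concreteness, the boosting loop can be sketched in a few lines of Python. The sketch below implements the simpler binary AdaBoost update (sign-valued hypotheses and decision stumps on a toy 1-D set) rather than the full AdaBoost.M2 pseudo-loss; all names and data here are illustrative, not part of the paper's implementation.

```python
import math

# Toy 1-D dataset: label is +1 to the right of 5, plus two "hard" outliers.
X = [1, 2, 3, 4, 6, 7, 8, 9, 5.5, 4.5]
y = [-1, -1, -1, -1, +1, +1, +1, +1, -1, +1]

def stump(threshold, sign):
    """Weak learner: predicts `sign` right of `threshold`, `-sign` otherwise."""
    return lambda x: sign if x > threshold else -sign

def best_stump(X, y, D):
    """Exhaustively pick the stump with the lowest weighted error under D."""
    best, best_err = None, float("inf")
    for thr in X:
        for sign in (+1, -1):
            h = stump(thr, sign)
            err = sum(d for xi, yi, d in zip(X, y, D) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def adaboost(X, y, rounds):
    m = len(X)
    D = [1.0 / m] * m                      # uniform initial distribution
    ensemble = []                          # list of (alpha, hypothesis) pairs
    for _ in range(rounds):
        h, err = best_stump(X, y, D)
        err = max(err, 1e-10)              # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        # Increase the weight of misclassified ("hard") examples.
        D = [d * math.exp(-alpha * yi * h(xi)) for xi, yi, d in zip(X, y, D)]
        Z = sum(D)                         # normalisation factor
        D = [d / Z for d in D]
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * h(x) for alpha, h in ensemble)
    return +1 if score > 0 else -1

ens = adaboost(X, y, rounds=10)
acc = sum(predict(ens, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy after 10 rounds: {acc}")
```

The same skeleton (resample or reweight, train a weak learner, compute β_t, renormalise) carries over directly to the M2 variant used in this paper, with per-class confidences in place of sign-valued predictions.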

2.2 Transfer of Learning applied to Deep Neural Networks

Over the last few years a lot of progress has been made in the Deep Networks area due to their ability to represent features at various levels of resolution. A recent study analysed how the low-layer features of Deep Networks are transferable and can be considered general in the problem domain of image recognition [yosinski2014transferable]. More specifically, it has been found that, for example, the first layer of a CNN tends to learn filters that are either similar to Gabor filters or colour blobs. A further study [bengio2012deep] examined Transfer of Learning in an unsupervised setting on Deep Neural Networks and reached a similar conclusion.

In supervised Deep Learning contexts, transfer of learning can be achieved by setting the initial weights of some of the layers of a Deep Neural Network to those of a previously-trained network. Because of the findings on the generality of the first few layers of filters, this is traditionally applied mostly to those first few layers. The training is then continued on the new dataset, with the benefit that the already-learned initial features provide a much better starting position than randomly initialised weights, and as such the generalisation power is improved and the time required to train the network is reduced.
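As an illustration of this warm-start mechanism, the sketch below treats a network abstractly as a list of dense weight matrices and copies the first few layers of a previously-trained network into a freshly initialised one. This is a simplified stand-in for the layer-wise transfer described above, not actual CNN code; all names and sizes are hypothetical.

```python
import random

def init_layer(n_in, n_out, rng):
    """Randomly initialised dense layer: an n_in x n_out weight matrix."""
    return [[rng.uniform(-0.1, 0.1) for _ in range(n_out)] for _ in range(n_in)]

def init_network(layer_sizes, rng):
    """Here a network is simply the list of its layer weight matrices."""
    return [init_layer(a, b, rng) for a, b in zip(layer_sizes, layer_sizes[1:])]

def transfer(source, layer_sizes, n_copied, rng):
    """Build a new network, copying the first `n_copied` layers from `source`
    (the 'general' low-level feature detectors) and randomly re-initialising
    the remaining, more task-specific layers."""
    new = init_network(layer_sizes, rng)
    for i in range(n_copied):
        new[i] = [row[:] for row in source[i]]   # deep-copy the transferred weights
    return new

rng = random.Random(0)
sizes = [784, 128, 64, 10]
pretrained = init_network(sizes, rng)   # stands in for a network trained on task A
student = transfer(pretrained, sizes, n_copied=2, rng=rng)
```

Training then continues on the new task from this warm start, so the copied layers only need fine-tuning rather than learning basic filters from scratch.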

3 Deep Incremental Boosting

3.1 Motivation

Traditional AdaBoost methods, and related variants, re-train a new classifier from scratch at every round. While this, combined with the weighted re-sampling of the training set, appears at first glance to be one of the elements that create diversity in the final Ensemble, it may not be necessary to re-initialise the network from scratch at every round.

It has already been shown that weights can be transferred between networks, and in particular between subsets of a network, to accelerate the initial training phase. In the case of Convolutional Neural Networks, this approach is particularly fruitful, as the lower layers (those closest to the input) tend to consistently develop similar features.

3.2 Applying Transfer of Learning to AdaBoost

Intuition 1

Because each subsequent round of AdaBoost increases the importance given to the errors made at the previous round, the network trained at a given round t can be repurposed at round t + 1 to learn the newly resampled training set.

In order for this to make sense, it is necessary to formulate a few conjectures.

Definition 1

Let X be a set composed of training example vectors x_i, with Y its corresponding set of correct label vectors y_i.

Definition 2

A training set X is mostly similar to another set X′ if the sets of unique instances u(X) and u(X′) have more common than different elements, and the difference set is smaller than an arbitrary amount ε.

This can be expressed equivalently as:

|u(X) ∩ u(X′)| > |u(X) \ u(X′)|  (1)
|u(X) \ u(X′)| < ε  (2)

or

|u(X) ∩ u(X′)| > |u(X) \ u(X′)| ∧ |u(X) \ u(X′)| < ε  (3)

Given the Jaccard Distance

J(X, X′) = (|X ∪ X′| − |X ∩ X′|) / |X ∪ X′|  (4)

this can be formulated as

J(u(X), u(X′)) < ε  (5)
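The mostly-similar criterion of Definition 2 and its Jaccard-distance form are straightforward to check in code. A small Python sketch on toy data (all names illustrative):

```python
def jaccard_distance(a, b):
    """J(A, B) = (|A union B| - |A intersect B|) / |A union B| for finite sets."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return (len(union) - len(a & b)) / len(union)

def mostly_similar(x, x_prime, eps):
    """Definition 2 via the Jaccard form: unique-instance sets within eps."""
    return jaccard_distance(x, x_prime) < eps

# Two bootstrap-style resamples of the same base set of 10 examples:
x_t  = [0, 1, 1, 2, 3, 4, 5, 6, 7, 8]   # unique instances {0..8}
x_t1 = [0, 1, 2, 3, 4, 5, 6, 7, 9, 9]   # unique instances {0..7, 9}
print(jaccard_distance(x_t, x_t1))       # 2 differing of 10 unique -> 0.2
print(mostly_similar(x_t, x_t1, eps=0.5))
```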
Conjecture 1

At a round of Boosting t > 1, the resampled training set X_t and the previous resampled training set X_{t−1} are mostly similar, as in Definition 2:

|u(X_t) ∩ u(X_{t−1})| > |u(X_t) \ u(X_{t−1})| ∧ |u(X_t) \ u(X_{t−1})| < ε  (6)

or

J(u(X_t), u(X_{t−1})) < ε  (7)

If we relax ε to be as large as we like, in the case of Boosting we know this to be true, because both X_t and X_{t−1} are resampled, with their respective Boosting weights, from the initial dataset X_0, so the unique sets u(X_t) and u(X_{t−1}) are large resampled subsets of the initial training set X_0:

u(X_t) ⊆ X_0  (8)
u(X_{t−1}) ⊆ X_0  (9)
J(u(X_t), u(X_{t−1})) < ε  (10)
Definition 3

We introduce a mistake function z(h, X) which counts the number of mistakes made by the classifier h on dataset X:

z(h, X) = Σ_{x_i ∈ X} 1[h(x_i) ≠ y_i]  (11)

where y_i is the ground truth for example x_i, taken from the correct label set Y.
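The mistake function is a plain error count; a minimal sketch (toy classifier and data, illustrative only):

```python
def mistakes(classifier, examples, labels):
    """Count the examples on which the classifier disagrees with the label."""
    return sum(1 for x, y in zip(examples, labels) if classifier(x) != y)

# A trivial threshold classifier on a toy set:
h = lambda x: 1 if x > 0 else 0
xs = [-2, -1, 1, 2, 3]
ys = [0, 1, 1, 1, 0]         # h is wrong on x = -1 and x = 3
print(mistakes(h, xs, ys))   # -> 2
```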

Conjecture 2

Given Conjecture 1, and provided that the datasets X_t and X_{t−1} are mostly similar as per Definition 2, a classifier h that classifies X_{t−1} better than randomly will still perform better than randomly on the new dataset X_t.

Given that all resampled sets are of the same size by definition, as they are resampled that way, we can ignore the fact that the error count on a dataset would need to be divided by the dataset size |X| to obtain an error rate, thus simplifying the notation.

We can therefore redefine the errors made by h on both X_t and X_{t−1} as:

z(h, X_t) = z(h, X_t ∩ X_{t−1}) + z(h, X_t \ X_{t−1})  (12)
z(h, X_{t−1}) = z(h, X_t ∩ X_{t−1}) + z(h, X_{t−1} \ X_t)  (13)

From Conjecture 1, the last two terms are negligible, leaving:

z(h, X_t) ≈ z(h, X_t ∩ X_{t−1})  (14)
z(h, X_{t−1}) ≈ z(h, X_t ∩ X_{t−1})  (15)

therefore z(h, X_t) ≈ z(h, X_{t−1}).

Assumption 1

The weights and structure of a classifier h that correctly classifies the training set X will not differ greatly from those of a classifier h′ that correctly classifies the training set X′, provided that the two sets are mostly similar.

Conjecture 3

Given the classifier h from Conjecture 2 and its classification output h(X_t), it is possible to construct a derived classifier h′ that learns the corrections on the residual set X_t \ X_{t−1}.

When using Boosting in practice, we find these assumptions to hold most of the time. We can therefore establish a procedure by which we preserve the knowledge gained at round t into the next round t + 1:

  1. At t = 0, a new CNN is trained with random initialisations on the re-sampled dataset X_0, for N_0 iterations.

  2. The new dataset X_1 is selected. The calculation of the error ε_t, the sampling distribution D_t and the classifier weight remain the same as per AdaBoost.M2.

  3. At every subsequent round t, the structure of network h_{t−1} is copied and extended by one additional hidden layer, inserted at a given position p in the network, and all the layers below p are copied into the new network. By doing so, we preserve the knowledge captured in the previous round, but allow for additional capacity to learn the corrections on X_t. This new network is trained for N_t iterations, where N_t < N_0.

  4. Steps 2 and 3 are repeated iteratively until the number of rounds T has been exhausted.

Because the network doesn’t have to re-learn basic features, and already incorporates some knowledge of the dataset, the gradients for the lower layers will be smaller and the learning will be concentrated on the newly added hidden layer and those above it. This also means that all classifiers will require a smaller number of epochs to converge, because many of the weights in the network already start from a position favourable to the dataset.

At test time, the full group of hypotheses h_1, …, h_T is used, each with its respective weight log(1/β_t), in the same way as in AdaBoost.
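This test-time aggregation can be sketched as follows, assuming each hypothesis returns a per-class confidence h_t(x, y) and carries its round's β_t (toy hypotheses and names, illustrative only):

```python
import math
from collections import defaultdict

def ensemble_predict(hypotheses, betas, x, classes):
    """AdaBoost-style aggregation: each hypothesis votes for every class with
    weight log(1/beta_t) * h_t(x, y); the highest-scoring class wins."""
    scores = defaultdict(float)
    for h, beta in zip(hypotheses, betas):
        weight = math.log(1.0 / beta)
        for y in classes:
            scores[y] += weight * h(x, y)
    return max(classes, key=lambda y: scores[y])

# Two toy hypotheses: h1 (beta = 0.1, strong) backs class 1,
# h2 (beta = 0.5, weaker) backs class 2.
h1 = lambda x, y: 1.0 if y == 1 else 0.0
h2 = lambda x, y: 1.0 if y == 2 else 0.0
print(ensemble_predict([h1, h2], [0.1, 0.5], x=None, classes=[0, 1, 2]))  # -> 1
```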

Algorithm 2 shows the full algorithm in detail.

   D_1(i, y) = 1/(m(k − 1)) for all i = 1, …, m and all labels y ≠ y_i
   t = 1
   W_0 ← randomly initialised weights for first classifier
  while t ≤ T do
      X_t ← pick from original training set with distribution D_t
      u_t ← create untrained classifier with additional layer of shape L_new
      copy weights from W_{t−1} into the bottom layers of u_t
      h_t ← train classifier u_t on current subset X_t
      W_t ← all weights from h_t
      ε_t = (1/2) Σ_{i, y ≠ y_i} D_t(i, y) (1 − h_t(x_i, y_i) + h_t(x_i, y))
      β_t = ε_t / (1 − ε_t)
      D_{t+1}(i, y) = (D_t(i, y) / Z_t) · β_t^{(1/2)(1 + h_t(x_i, y_i) − h_t(x_i, y))}
      where Z_t is a normalisation factor such that D_{t+1} is a distribution
      t = t + 1
  end while
  output final hypothesis H(x) = argmax_y Σ_{t=1}^{T} log(1/β_t) h_t(x, y)
  
Algorithm 2 Deep Incremental Boosting
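Putting the pieces together, the control flow of Algorithm 2 can be sketched as follows. This is a skeleton only: `train` is a stub standing in for gradient-based training, and the layer descriptions, names and schedule lengths are hypothetical, not those of the toupee implementation.

```python
import random

class Network:
    def __init__(self, layers):
        self.layers = list(layers)           # layer descriptions, bottom-up

    def copy_and_extend(self, new_layer, position):
        """Copy this network and insert one extra hidden layer at `position`,
        keeping all previously learned layers (the transfer step of DIB)."""
        layers = self.layers[:position] + [new_layer] + self.layers[position:]
        return Network(layers)

def resample(dataset, weights, rng):
    """Weighted resampling of the training set, as in AdaBoost."""
    return rng.choices(dataset, weights=weights, k=len(dataset))

def train(net, subset, epochs):
    """Stub for gradient-based training; returns the 'trained' network."""
    return net

def deep_incremental_boosting(dataset, rounds, insert_at, n0, nt, rng):
    m = len(dataset)
    weights = [1.0 / m] * m
    net = Network(["conv1", "pool1", "dense", "softmax"])
    ensemble = []
    for t in range(rounds):
        subset = resample(dataset, weights, rng)
        if t > 0:
            # Warm start: reuse last round's weights, add capacity for the
            # "corrections" on the newly emphasised hard examples.
            net = net.copy_and_extend(f"conv_extra_{t}", insert_at)
        net = train(net, subset, epochs=n0 if t == 0 else nt)   # nt < n0
        ensemble.append(net)
        # (error, beta and the distribution update would go here, as in
        #  AdaBoost.M2)
    return ensemble

ens = deep_incremental_boosting(list(range(100)), rounds=3, insert_at=2,
                                n0=100, nt=40, rng=random.Random(0))
print([len(n.layers) for n in ens])   # -> [4, 5, 6]
```

Each round's member is one layer deeper than the last, but starts from the previous round's weights and trains on the shorter schedule.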

4 Experimental Analysis

Each experiment was repeated multiple times, both for AdaBoost.M2 and Deep Incremental Boosting, using the same set of weight initialisations (one for each run), so that any fluctuation due to favourable random starting conditions was neutralised. Each variant ran for a fixed number of rounds of boosting. We trained each Ensemble member using Adam [kingma2014adam], and used a hold-out validation set to select the best model.

All the experiments were run on an Intel Core i5 3470 CPU with an NVIDIA GTX 1080 GPU, using the toupee Ensemble library available online at https://github.com/nitbix/toupee.

Code and parameters for these experiments are available online at https://github.com/nitbix/ensemble-testing.

4.1 Datasets

4.1.1 Mnist

MNIST [mnistlecun] is a common computer vision dataset that associates pre-processed images of hand-written numerical digits with a class label representing that digit. The input features are the raw pixel values of the 28 × 28 grayscale images, and the output is the digit’s numerical value, between 0 and 9.

The CNN used for MNIST has the following structure:

  • An input layer of nodes, with no dropout

  • convolutions, with no dropout

  • max-pooling

  • convolutions, with no dropout

  • max-pooling

  • A fully connected layer of nodes, with dropout

  • a Softmax layer with outputs (one for each class)

This network has million weights.

The layer added during each round of Deep Incremental Boosting is a convolutional layer of channels, with no dropout, added after the second max-pooling layer.

4.1.2 Cifar-10

CIFAR-10 is a dataset that contains small images of 10 categories of objects. It was first introduced in [krizhevsky2009learning]. The images are 32 × 32 pixels, in RGB format. The output categories are airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. The classes are completely mutually exclusive, so the problem is translatable to a 1-vs-all multiclass classification. The 60,000 samples are divided into a training set, a hold-out validation set and a test set, all with perfect class balance.

The CNN used for CIFAR-10 has the following structure:

  • An input layer of nodes, with no dropout

  • convolutions, with dropout

  • convolutions, with dropout

  • max-pooling

  • convolutions, with dropout

  • convolutions, with dropout

  • max-pooling

  • convolutions, with dropout

  • convolutions, with dropout

  • max-pooling

  • A fully connected layer of nodes, with dropout

  • a Softmax layer with outputs (one for each class)

This network has million weights.

The layer added during each round of Deep Incremental Boosting is a convolutional layer of channels, with no dropout, added after the second max-pooling layer.

4.2 Cifar-100

CIFAR-100 is a dataset that contains small images of 100 categories of objects, grouped into 20 super-classes. It was first introduced in [krizhevsky2009learning]. The image format is the same as CIFAR-10. Class labels are provided for the 100 fine-grained classes as well as the 20 super-classes. A super-class is a category that includes five of the fine-grained class labels (e.g. “insects” contains bee, beetle, butterfly, caterpillar and cockroach). The 60,000 samples are divided into a training set, a hold-out validation set and a test set, all with perfect class balance.

The model we used has the same structure as the one we trained on CIFAR-10.

4.3 Results

We can see from these preliminary results in Table 1 that Deep Incremental Boosting is able to generalise better than AdaBoost.M2. We have also run AdaBoost.M2 with larger CNNs, up to the size of the largest CNN used in Deep Incremental Boosting (i.e. with 10 additional layers), and found that classification performance gradually worsened as the weak learners overfitted the training set. We therefore conclude that the additional capacity alone was not sufficient to explain the improved generalisation: it was due specifically to the weights transferred from the previous round, and to the new layer learning the “corrections” on the new training set.

Single Network AdaBoost.M2 Deep Incremental Boosting
CIFAR-10 % % %
CIFAR-100 % % %
MNIST % % %
Table 1: Mean misclassification rate on the test set

4.4 Training Time

We see from Table 2 that with Deep Incremental Boosting the best validation error is reached much earlier during the last boosting round. This confirms our observation in Section 3 that the learning converges at an earlier epoch in subsequent rounds (t > 1). Based on this we used a shorter training schedule for these subsequent rounds, which means that we were able to save considerable time compared to the original AdaBoost, even though we trained a network with a larger number of parameters. A summary of the improved training times is provided in Table 3.

AdaBoost.M2 Deep Incremental Boosting
CIFAR-100
CIFAR-10
MNIST
Table 2: Typical “best epoch” during the round of Boosting
AdaBoost.M2 Deep Incremental Boosting
CIFAR-100 hrs hrs
CIFAR-10 hrs hrs
MNIST hrs hrs
Table 3: Mean training times for each dataset

5 Larger models

The base classifiers we used in the experimentation in Section 4 are convenient for large numbers of repetitions with lock-stepped random initialisations, because they train relatively quickly. The longest base classifier to train is the one used for CIFAR-100 and it took hours. However, these models give results that are still far from the state-of-the-art, so we experimented further with some of these more complicated models and applied Deep Incremental Boosting.

Because of the time required to train each model, and the differences in the problem setup, we have not been able to run them with the same schedule as the main experiments, therefore they have been documented separately.

5.1 Mnist

The best result on MNIST that doesn’t involve data augmentation or manipulation is obtained by applying Network in Network [lin2013network]. The paper describes a full model, which we have been able to reproduce. Because our goal is to train Ensembles quickly, we reduced the training schedule to epochs and applied Adam as the update rule, which also sped up the training significantly. This network has a total of million weights; however, it requires a significantly higher number of computations.

After the first dropout layer, we added a new convolutional layer of filters, at each Deep Incremental Boosting round.

Method Mean Test Misclassification Mean Training Time
NiN % min
AdaBoost.M2 % min
DIB % min
Table 4: Network-in-Network results on MNIST

Table 4 shows that, although the remaining examples to be learned are very few, DIB is able to improve where AdaBoost no longer offers any benefits. In addition to this, the training time has been reduced significantly compared to AdaBoost.

5.2 Cifar-10

The published models that achieve state-of-the-art performance on CIFAR-10 and CIFAR-100 do not make use of a hold-out validation set. Instead, they use the additional examples as additional training data. In order to reproduce similar test error results, the same principle was applied to this experimental run.

A very efficient all-convolutional network model has been proposed, with state-of-the-art results on the CIFAR-10 dataset; it replaces max-pooling with an additional strided convolution, and does not use a fully-connected layer after the convolutions [springenberg2014striving]. Instead, there are further convolutions that reduce the dimensionality of the output until it is possible to perform Global Average Pooling. We based our larger model on this architecture, but in order to make the computations feasible for an Ensemble we had to modify it slightly. The final structure of the network is as follows:

  • An input layer of nodes, with no dropout

  • convolutions, with dropout

  • convolutions, with dropout

  • convolutions, with a stride length of

  • convolutions, with dropout

  • convolutions, with dropout

  • convolutions, with a stride length of

  • convolutions, with dropout

  • convolutions, with dropout

  • convolutions, with a stride length of

  • max-pooling

  • A fully connected layer of nodes, with dropout

  • a Softmax layer with outputs (one for each class)

This network has million weights and is considerably harder to train than the one in the original experiment. The results are reported in Table 5, including training time and a comparison with vanilla AdaBoost.
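The substitution underlying this architecture, a strided convolution standing in for max-pooling, can be illustrated on a single channel: both halve the spatial resolution, but the convolution’s weights are learnable rather than a fixed maximum. The kernel and data below are purely illustrative:

```python
def max_pool_2x2(img):
    """2x2 max-pooling with stride 2 on a square 2-D grid."""
    n = len(img)
    return [[max(img[i][j], img[i][j+1], img[i+1][j], img[i+1][j+1])
             for j in range(0, n, 2)] for i in range(0, n, 2)]

def strided_conv_2x2(img, kernel):
    """A single 2x2 convolution with stride 2: same output resolution as the
    pooling above, but with learnable weights instead of a fixed max."""
    n = len(img)
    return [[sum(img[i+a][j+b] * kernel[a][b] for a in range(2) for b in range(2))
             for j in range(0, n, 2)] for i in range(0, n, 2)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(max_pool_2x2(img))                              # -> [[6, 8], [14, 16]]
print(strided_conv_2x2(img, [[0.25, 0.25], [0.25, 0.25]]))
```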

Method Mean Test Misclassification Mean Training Time
Single Network % mins
AdaBoost.M2 % mins
DIB % mins
Table 5: All-CNN results on CIFAR-10

Each original member was trained for epochs, while each round of Deep Incremental Boosting after the first was trained for only epochs. No additional layer was created, due to GPU memory limitations, which is why the improvement is not as dramatic as in the original experiments. However, the time improvement alone is sufficient to justify using this new method.

6 Concluding Remarks

In this paper we have introduced a new algorithm, called Deep Incremental Boosting, which combines the power of AdaBoost, Deep Neural Networks and Transfer of Learning principles, in a Boosting variant which is able to improve generalisation. We then tested this new algorithm and compared it to AdaBoost.M2 with Deep Neural Networks and found that it generalises better on some benchmark image datasets, further supporting our claims.

One final observation concerns the fact that we are still using the entire Ensemble at test time. In certain situations, it has been shown that a small model can be trained to replicate a bigger one without significant loss of generalisation [ba2013deep]. In future work we will investigate the possibility of modifying Deep Incremental Boosting so that only one final test-time Deep Neural Network is necessary.

References
