One model to rule them all


Abstract

We present a new flavor of Variational Autoencoder (VAE) that interpolates seamlessly between unsupervised, semi-supervised and fully supervised learning domains. We show that unlabeled datapoints not only boost unsupervised tasks, but also classification performance. Vice versa, every label not only improves classification, but also unsupervised tasks. The proposed architecture is simple: A classification layer is attached to the topmost encoder layer, and its output is then combined with the resampled latent layer as input to the decoder. The usual evidence lower bound (ELBO) loss is supplemented with a supervised loss target on this classification layer that is applied only to labeled datapoints. This simplicity allows any existing VAE model to be extended to our proposed semi-supervised framework with minimal effort. In the context of classification, we found that this approach even outperforms a direct supervised setup.

\keywords

Machine Learning, Semi-supervised learning, Variational Autoencoder, Anomaly Detection, Transfer Learning, Representation Learning

1 Introduction

In many domains, unlabeled data is abundant, whereas obtaining rich labels may be time consuming, expensive and reliant on manual annotation. As such, the value proposition of semi-supervised learning algorithms is immense: they allow us to train well-performing predictive systems with only a fraction of the datapoints labeled.

In this paper, we present a new flavor of Variational Autoencoder (VAE) that enables semi-supervised learning. The model architecture requires only minimal modifications to any given purely unsupervised VAE. The semi-supervised classification accuracy is similar to that of slightly more complex approaches known in the literature [1]. This was benchmarked using the MNIST (section 3.1.1), Fashion-MNIST (section 3.2.1) and UCI-HAR (section 3.3) data sets. We verified that even if every single datapoint is labeled, framing the training process in the context of VAE training improves the classification accuracy compared to the common way of training the classification network in isolation. We conjecture that supplementing the classification loss with the VAE loss forces the network to learn better representations of the data. Here the VAE reconstruction task acts as a regularizer during training of the classification network.

We also verified that the availability of labels helps the model find better latent representations of the data: We used the betaVAE disentanglement metric to assess the quality of the found representations (section 4). Furthermore, we applied the VAEs to the problem of anomaly detection, and observed that detection performance increases when the model is trained with additional labeled samples - see sections 3.1.3, 3.2.3 and 3.3 for benchmarks on MNIST, Fashion-MNIST and UCI-HAR respectively. In that sense, not only is the reconstruction of the model boosted by the availability of unlabeled datapoints (which is the normal semi-supervised setup), but vice versa the anomaly detection performance is also improved by the availability of labels.

In summary, we have developed a model which adapts seamlessly across the full 0-100% range of available labels. The result is a 'unified' model in which the anomaly detection capability is improved by any available label, and vice versa in which the predictive capability is significantly boosted by the abundance of unlabeled data. This paper provides a more thorough investigation and benchmark of the concepts which were published in a blog post in 2018 [5].

2 Model

2.1 Model architecture

The general model architecture is depicted in Figure 1(a). As can be seen, the model is an extension of the original VAE [2], which is depicted in Figure 1(c). The only addition is a classification layer ŷ (typically a one-hot classifying layer using softmax activation) attached to the topmost encoder layer. The μ and σ layers encode the mean and standard deviation of the Gaussian distribution over the latent layer:

q_φ(z | x) = N(z; μ(x), σ²(x))    (1)

After sampling the latent variable z using the probability distribution (1), z and the activations of ŷ are merged and fed into the decoder D:

x̂ = D(z, ŷ)    (2)

where x̂ denotes the data reconstructed by the decoder. Hence, the classification predictions also contribute to the reconstruction of the data.

(a) Semi-Supervised VAE, model M_ss
(b) Supervised Classifier, model M_s
(c) Unsupervised Anomaly Detector, model M_u
Figure 1: Comparison of our model architecture (a) to the supervised (b) / unsupervised (c) equivalents. The greyed out cells shown in (b) and (c) are not part of the models, and highlight the difference to our model (a). Note that the ŷ layer (and its loss) represents the extension to the standard VAE proposed in this paper.
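For concreteness, a minimal PyTorch sketch of this architecture could look as follows (the layer sizes follow the dense model of appendix A.1; the class name and all identifiers are illustrative placeholders, not our exact implementation):

```python
import torch
import torch.nn as nn

class SemiSupervisedVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=1024, z_dim=2, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
        )
        # mu, sigma and the classification layer all fork from the
        # topmost encoder layer
        self.mu = nn.Linear(h_dim, z_dim)
        self.log_var = nn.Linear(h_dim, z_dim)
        self.classifier = nn.Linear(h_dim, n_classes)   # produces y_hat
        # the decoder consumes the resampled z merged with y_hat
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + n_classes, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.mu(h), self.log_var(h)
        y_hat = torch.softmax(self.classifier(h), dim=-1)
        # reparameterization trick: z ~ N(mu, sigma^2), Eq. (1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # merge z and y_hat, Eq. (2)
        x_hat = self.decoder(torch.cat([z, y_hat], dim=-1))
        return x_hat, y_hat, mu, log_var
```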

2.2 Loss function

We propose an ad-hoc modification of the standard VAE evidence lower bound [2] loss function:

L = L_ELBO + γ · L_label    (3)

where

L_ELBO = E_{q_φ(z|x)}[−log p_θ(x | z, ŷ)] + β · D_KL(q_φ(z|x) ‖ N(0, I))    (4)
L_label = (Σ_i ℓ_i)⁻¹ · Σ_i ℓ_i · H(y_i, ŷ_i)    (5)

D_KL denotes the Kullback-Leibler divergence of q_φ(z|x) and a standard normal distribution, which is only applied to the latent variables z, but not the labels ŷ. q_φ(z|x) is the probability distribution of the latent variable generated by the encoder. y_i represents the label of the i-th datapoint. ℓ_i is equal to zero if there is no label (i.e. the datapoint belongs to the 'unlabeled class'), else it is one. Normalizing to the number of all labeled datapoints per batch aids in stabilizing training. H(y_i, ŷ_i) denotes the classification log-loss (categorical cross-entropy).
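In code, this loss could be computed per batch as follows (a sketch consistent with the model sketch of section 2.1; the Bernoulli reconstruction term and the names beta and gamma for the term weights discussed in section 4 are assumptions):

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(x, x_hat, y_hat, mu, log_var, y, labeled_mask,
                         beta=1.0, gamma=1.0):
    """y holds integer class labels; labeled_mask is l_i: 1.0 where a
    label exists, 0.0 otherwise (unlabeled)."""
    # reconstruction term of Eq. (4): Bernoulli log-likelihood per sample
    recon = F.binary_cross_entropy(x_hat, x, reduction="none").sum(-1)
    # KL divergence of q(z|x) against N(0, I); applied to z only, not y_hat
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1)
    # classification log-loss, Eq. (5): masked by l_i and normalized by the
    # number of labeled datapoints in the batch (-1 markers for unlabeled
    # samples are clamped to a valid index; they are masked out anyway)
    ce = F.nll_loss(torch.log(y_hat + 1e-8), y.clamp(min=0), reduction="none")
    n_labeled = labeled_mask.sum().clamp(min=1.0)
    label_loss = (labeled_mask * ce).sum() / n_labeled
    return recon.mean() + beta * kl.mean() + gamma * label_loss
```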

2.3 Upsampling the labeled data

In order to prevent artificial noise from a stochastic number of labeled contributions in the log-loss term (5), we chose to not only normalize this term but also fix the number of labeled samples per batch: Besides the completely (un)supervised edge cases, we sampled datasets such that each batch contained labeled and unlabeled samples in a ratio of 1:1. Additionally, this prevents unlabeled datapoints from dominating training in cases of very sparsely labeled datasets.
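A simple way to realize this fixed 1:1 ratio is a sampler along the following lines (a sketch; the index handling and batch size are placeholders):

```python
import numpy as np

def mixed_batches(labeled_idx, unlabeled_idx, batch_size, seed=0):
    """Yield index batches with a fixed 1:1 labeled/unlabeled ratio,
    re-sampling the labeled pool with replacement when it is small."""
    rng = np.random.default_rng(seed)
    half = batch_size // 2
    unlab = rng.permutation(unlabeled_idx)
    for start in range(0, len(unlab) - half + 1, half):
        lab = rng.choice(labeled_idx, size=half, replace=True)  # upsample labels
        yield np.concatenate([lab, unlab[start:start + half]])
```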

2.4 Differences to Kingma’s VAE [1]

Our work is largely inspired by [1]. However, the model we are proposing differs from the model M2 of [1] in several aspects as shown in Table 1.

Table 1: Differences to Kingma et al. [1]
- encoder: our model uses a single encoder network sharing weights; Kingma et al. use two independent encoder networks.
- latent layer: in our model the latent activations depend only on x, i.e. q_φ(z|x); in Kingma et al. they depend on both x and y, i.e. q_φ(z|x, y).
- treatment of unlabeled data: our model omits the label contribution to the loss (ℓ_i = 0); in Kingma et al., the unknown label y is summed over.

The simplicity of our model makes it possible to turn any existing VAE into a semi-supervised VAE by simply adding the ŷ layer and extending the loss function. In particular, all learned weights can be reused directly when transitioning into the semi-supervised learning scenario. This is very useful, as in many real world applications a (partially) labeled dataset is only built up over time and not available at project initiation.
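As an illustration of such a transition, the sketch below copies the encoder-side weights of a previously trained plain VAE into the semi-supervised model of section 2.1 (the attribute names follow that sketch and are assumptions; in the dense sketch the first decoder layer changes shape once ŷ is merged in, so only matching layers are copied here):

```python
def upgrade_to_semi_supervised(unsup_vae, semi_vae):
    """Reuse the weights of an already trained unsupervised VAE; the new
    classification head (and any reshaped decoder layer) starts from scratch."""
    semi_vae.encoder.load_state_dict(unsup_vae.encoder.state_dict())
    semi_vae.mu.load_state_dict(unsup_vae.mu.state_dict())
    semi_vae.log_var.load_state_dict(unsup_vae.log_var.state_dict())
    return semi_vae
```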

2.5 Classification - Decoder as a regularizer

We benchmarked the classification performance of our model on various data sets (MNIST, Fashion-MNIST, UCI-HAR; see sections 3.1.1, 3.2.1 and 3.3) as a function of available labels. Not surprisingly, more labeled or unlabeled samples generally improve performance.

Moreover, we also tested our model in the scenario where all datapoints were labeled. Interestingly, we found that the obtained model performed better than training the same classification model (Figure 1(b)) in a standard supervised scenario. In other words, framing the training process in the context of VAE training allows the classification network to learn better weights compared to training it the 'standard way' with only the classification loss.

The additional training target of reproducing the input via the decoder forces the network to learn more meaningful representations in its deeper layers, from which the classification benefits. The decoder and VAE training act as a regularizer, as it challenges the network to find more subtle and granular representations of the input data, i.e. it will combat overfitting. At the same time, these representations are meaningful, as they contain valuable information about how to reconstruct the datapoint properly - hence it is expected that they enhance any task built on top of them (for instance, classification).

2.6 Semi-Unsupervised learning

As we have seen in the previous sections, the availability of unlabeled datapoints aids the model in forming better representations in its deeper layers, hence enabling semi-supervised learning. Maybe the opposite is true as well: Does the availability of labels also aid in finding better representations? Does the model perform better on reconstruction related tasks such as anomaly detection?

This problem setup can be generally described as a flavor of 'transfer learning': can the model improve on tasks related to unsupervised learning by leveraging the availability of labels that are primarily associated with the supervised learning task?

This was investigated in two different kinds of experiments: (a) we benchmarked the quality of the representations directly via the betaVAE score as a function of available labels (section 4). In this case the added ŷ layer can be interpreted as an additional loss term directly reflecting the betaVAE score.

And (b) we used the VAE as an anomaly detector (see sections 3.1.3, 3.2.3 and 3.3). In this case our approach can be viewed as feature engineering: usually labels incorporate domain knowledge of some very specific, yet important, property of the data set. Thereby our method can guide the ŷ layer towards an extractor for those very specific high-level features. In this experiment, we contrasted the semi-supervised model with an equivalent purely unsupervised VAE, obtained by removing the ŷ layer from the network and the loss function (this corresponds to model (c) in Figure 1). We then compared its anomaly detection performance with that of our model trained with either a portion or all of the normal datapoints labeled.

The term 'semi-unsupervised learning' is a perfect description of this task: as semi-supervised learning enhances the performance of a supervised task by using unlabeled data, 'semi-unsupervised' learning enhances the performance of an unsupervised task by using labeled data. The only other mentions of the term we are aware of describe experiments [3] [4] on other variations of the classic VAE [2]. That unsupervised task was however quite different, in that the objective was to cluster unlabeled datapoints and subsequently classify them using one-shot-learning.

3 Experiments

The networks used are described in detail in appendix A. Each semi-supervised model M_ss (our architecture as described above in section 2.1) was contrasted with two sibling networks: (a) an equivalent supervised network M_s, corresponding only to the encoder plus the ŷ layer of M_ss, trained only on the cross-entropy loss; and (b) an equivalent unsupervised network M_u, which is identical to our architecture, but with the ŷ layer removed. Throughout this section, we will refer to our models using the abbreviations shown in Table 2. The error bars were generated by re-running each scenario at least 10 times (unless specified otherwise).


                         Dense     Convolutional   Recurrent
Semi-Supervised (ours)   M_ss,d    M_ss,c          M_ss,r
Equivalent Supervised    M_s,d     M_s,c           M_s,r
Equivalent Unsupervised  M_u,d     M_u,c           M_u,r
Table 2: Model abbreviations. Generally speaking, the equivalent models M_s and M_u are derived from M_ss by either removing the resampling step and decoder (thus making it supervised, M_s) or by removing the ŷ layer (thus making it fully unsupervised, M_u).

3.1 MNIST

For almost any type of image classifier, the natural place to begin benchmarking is the MNIST dataset [6], a well known collection of 70,000 (60,000 training and 10,000 testing) 28x28 grey-scale images of hand written digits. Given the versatility of the model, we conducted the following three benchmarks.

3.1.1 Semi-Supervised performance

The first task is semi-supervised learning, the area in which the model was designed to bring the most benefit. To create a semi-supervised dataset, we simply discard a certain percentage of the labels, but not the images themselves. This means an equivalent supervised model is only able to train on the portion of the dataset which is labeled, whereas the semi-supervised model benefits from all the additional unlabeled samples.
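Concretely, such a semi-supervised split can be produced by masking labels (a sketch; the choice of -1 as the 'unlabeled' marker is an arbitrary convention):

```python
import numpy as np

def make_semi_supervised(y, labeled_fraction, seed=0):
    """Keep a random fraction of labels and mark the rest as unlabeled (-1)."""
    rng = np.random.default_rng(seed)
    y_semi = np.full_like(y, -1)
    keep = rng.choice(len(y), size=int(labeled_fraction * len(y)), replace=False)
    y_semi[keep] = y[keep]
    return y_semi
```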

The semi-supervised model and its supervised equivalent were each trained long enough that the comparison is made between fully converged models. For comparison, two differently sized labeled subsets of the dataset were taken, with each model trained on identical data; the results are displayed in Table 3.


Table 3: Semi-Supervised MNIST classification results (accuracy and log loss of the dense and CNN variants of M_ss and M_s, for each of the two label budgets).

Unsurprisingly, both variants of the semi-supervised model outperform their purely supervised equivalents for both label budgets. For the CNN variant in particular, there was a clear increase in classification accuracy over the supervised equivalent for both budgets.

Interestingly, for one of the two label budgets the log loss was lower for the supervised model than for the semi-supervised model for both the dense and CNN variants; however, this trend was reversed for the other budget. A possible interpretation could be that the supervised model is trained on a much smaller dataset and hence starts to overfit, producing predictions with a higher certainty. This might be beneficial for the log loss, since the supervised model nonetheless has predictive power, as corroborated by its accuracy score.

3.1.2 Decoder as a regularizer

The next test for the model is in the purely supervised domain, testing the hypothesis that the decoder acts as a regularizer and assists the model in finding better representations of the dataset. For this test, both models were trained on the full (60,000 sample) training dataset until converged, with the results displayed in Table 4.


Table 4: Supervised MNIST classification results (accuracy and log loss of the dense and CNN variants of M_ss and M_s, trained on the fully labeled dataset).

For both the dense and CNN cases the accuracy scores were very similar and, in the case of the CNN model, the error bars are almost overlapping. The log loss of the semi-supervised model however was significantly lower, with a marked reduction for the dense model when compared with that of the supervised model. This strongly suggests that the addition of the reconstruction task introduces a much higher confidence in the classifications of the semi-supervised model.

3.1.3 Semi-Unsupervised Learning

The purpose of the semi-unsupervised task was to verify that the introduction of labels can improve the performance of an anomaly detection task. The results presented in Table 5 were obtained by training the model on nine of the ten classes (designated 'normal', using the terminology of anomaly detection) in the MNIST dataset and inferring on all classes, essentially declaring the left out class to be 'anomalous' data. The anomaly score returned by the models is the negative log reconstruction probability, which is expected to be higher for the anomalous classes. The performance of the scores is evaluated using the AUC (area under the ROC curve) score.

Table 5: Label assisted results of anomaly detection on MNIST (AUC of M_u and M_ss for each anomalous digit class).
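Evaluating such a detector could look as follows (a sketch; `reconstruction_score` stands in for whichever per-sample reconstruction-based anomaly score the model exposes, which is an assumption here):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_auc(reconstruction_score, x_test, y_test, anomalous_class):
    """AUC of an anomaly detector: higher scores should flag the held-out class."""
    scores = reconstruction_score(x_test)               # per-sample anomaly score
    is_anomaly = (np.asarray(y_test) == anomalous_class).astype(int)
    return roc_auc_score(is_anomaly, scores)
```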

With the exception of the digit 1, there was a considerable improvement for each anomalous class, with a clear average improvement over the purely unsupervised model. This demonstrates that the addition of labels aids the model in learning a better representation of normal data. Again, as hypothesized, this is most likely due to the additional label information assisting the model in identifying the category as an important high-level feature.

The very poor performance when using the digit 1 as the anomaly class could also be evidence to support this. Given the similarities between the digits 1 and 7, it is likely that the dense representation found by the model for the digit 7 was also sufficient for reconstructing the 1's, especially given that the shape of the digit 1 is most often also found within the digit 7. This would also explain why there was not a similar drop in performance when considering 7 as the anomaly class: considering clustered embeddings, the dense representation of the digit 1 would not be enough to properly reconstruct a 7, leading to a higher anomaly score.

Based on these results, a further experiment was run to assess the effect of the number of available labels. Classes '7' and '9' were chosen as they achieved the largest improvement over the unsupervised equivalent; the results are displayed in Table 6.

Table 6: Anomaly detection AUC w.r.t. label availability, for anomalous classes '7' and '9'.

For this test, the model was trained by re-sampling the available labels such that the model is trained on an equal amount of labeled and unlabeled data, without making any changes to the testing data. If the label fraction is below 50%, this amounts to up-sampling the labeled data, and vice versa, for a label fraction above 50%, to down-sampling of the labeled data. For both classes, there is almost no difference in performance across label percentages, as the error bars all overlap. Not only is this result somewhat surprising, it is also an advantage of such a model.

Firstly, it demonstrates that a tiny fraction of labels is all that is required to bring a substantial increase in performance compared to the unsupervised domain, i.e. almost the maximum pay-off can be achieved straight away.

Secondly, although one would have intuitively expected the performance of the anomaly detector to increase with the number of labels, this is not the case; nor does it conflict with the hypothesis. Given that the labels are up-sampled during training to balance the learning objective, increasing the percentage of labels in the training set simply increases the diversity of the labeled data rather than the quantity.

Considering the hypothesis that learning a clustered representation of the data assists the model in identifying anomalies, a possible explanation for the lack of improvement in the anomaly detection performance could be reasoned as follows: if a small fraction of labels is enough to push the model towards finding such a clustered representation, a larger diversity within the label classes may not provide any further contribution. Perhaps a more diverse representation within the clusters would improve the performance by helping the model to identify anomalous data which lie on the class boundaries.

3.1.4 Data generation

The decoder part of a VAE takes a sample from the latent layer and attempts to reconstruct the original input. An advantage of the semi-supervised variant is that the decoder can be used as a generative model by providing both a target label and a sample from the latent layer. Given that the prior distribution of the latent layer is Gaussian, we can simply sample from a normal distribution to feed as an input to the decoder. Depending on where we sample the normal distribution, the digit which is generated will be a representation of a different region of the training data. In other words, we can separate class and style. This is demonstrated in Figure 2, which illustrates the range of styles for each digit the model has learned.

Figure 2: Digits generated by our Semi-Supervised model when trained on MNIST
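In terms of the model sketch of section 2.1, generating a digit of a chosen class reduces to feeding a sampled z together with a one-hot class label into the decoder (again a sketch under the same naming assumptions):

```python
import torch

@torch.no_grad()
def generate(model, digit, n_samples=16, z_dim=2, n_classes=10):
    """Sample 'style' from the standard normal prior and fix the class label."""
    z = torch.randn(n_samples, z_dim)          # style: region of the latent prior
    y = torch.zeros(n_samples, n_classes)
    y[:, digit] = 1.0                          # class: one-hot target label
    return model.decoder(torch.cat([z, y], dim=-1))
```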

3.2 Fashion-MNIST

Fashion-MNIST [7] is an image dataset published by Zalando. It is designed as a drop-in replacement for MNIST, i.e. exactly like the original MNIST dataset, Fashion-MNIST contains 60,000 training (and 10,000 testing) 28x28 grey-scale images spanning a total of 10 classes. However, there is much more variability within a given Fashion-MNIST class than there is within a given MNIST class. For instance, the individual samples of the class 'Ankle Boot' vary much more wildly than any digit in the MNIST dataset. As a consequence, Fashion-MNIST is a much more challenging dataset than MNIST, and hence serves as a more realistic proxy for evaluating model performance; especially as most modern image classification models can almost perfectly solve MNIST. At the same time, Fashion-MNIST preserves the big advantage of MNIST: It is still a small dataset that allows rapid training and experimentation when researching new models.

3.2.1 Semi-Supervised performance

The set-up for the semi-supervised classification task is the same as with the original MNIST dataset. That is, only the labels for a subset of all samples in the dataset are retained. The supervised models M_s,d and M_s,c are then trained only on that subset, whilst the M_ss,d and M_ss,c models are trained on all the samples, but only make use of the labels of the subset. The other samples are treated as unlabeled. The results are shown in Table 7.


Table 7: Semi-Supervised Fashion-MNIST classification results (accuracy and log loss of the dense and CNN variants of M_ss and M_s, for each of the two label budgets).

The difference in the accuracy scores between the semi-supervised models and their supervised counterparts is almost identical to that observed with the original MNIST. Unlike the original MNIST however, there is a clear improvement in the log loss from each of the semi-supervised models.

3.2.2 Decoder as a regularizer

The results of training our model and an equivalent supervised model on the full dataset with every datapoint labeled (see section 2.5) are displayed in Table 8.

Table 8: Supervised Fashion-MNIST classification results (mean accuracy and log loss of each model, trained on the fully labeled dataset).

As can be seen, the fully connected (dense) model profits massively from using the decoder as a regularizer. For the CNN model, the effect is less pronounced. However, the best accuracy is still achieved by the semi-supervised model M_ss,c.

3.2.3 Semi-unsupervised learning

The anomaly detection task was set up in the same way as with the original MNIST dataset. One of the 10 classes was designated anomalous, the others were declared normal. The model is trained on a subset of the samples from the remaining nine classes. The held out samples from these nine classes and the samples from the anomalous class are used as a validation set, with the performance of the results evaluated using the AUC score. The results are summarized in Table 9. The error bounds were obtained by rerunning every experiment five times.

Table 9: Label assisted results of anomaly detection on Fashion-MNIST (AUC of M_u,c and M_ss,c for each anomalous class: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle Boot).

M_ss,c performs better than M_u,c for most anomalous classes. The only exceptions are the 'Shirt' class, where there is a tie between both models within the denoted error bars, and the 'Sneaker' and 'Ankle Boot' classes, where M_u,c wins; although the performance of both models on the 'Sneaker' class is extremely poor.

In general, the model performance varies drastically depending on the anomalous class. While the models perform very well for classes like 'Pullover' or 'Bag', they struggle with the 'Sneaker', 'Ankle Boot', 'Trouser' and 'Dress' classes. This can be understood by looking at samples of each class (see Figure 4). In particular the 'Sneaker' and 'Ankle Boot' classes bear a resemblance, which explains why an anomaly detector trained on the 'Sneaker' class has a hard time flagging the 'Ankle Boot' samples as anomalous (and vice versa). The same is true, though to a lesser degree, for the 'Trouser' and 'Dress' classes.

3.2.4 Data Generation

Figure 3 shows an example of generated Fashion-MNIST data, using the same strategy as described in section 3.1.4. Once again there is an indication of the styles the model has learned to embed within the latent space. The style itself is mostly captured in the shape of the class and also the position of highlighted features, for example the position of the straps on the sandals. Comparing the generated data to some of the example images from Figure 4, it can be seen that the generated data lacks the detailing of the original images. Given that Fashion-MNIST is more detailed than the MNIST digits, but the model architecture is the same, this is likely due to the size of the latent layer and its lack of capacity for storing these additional details.

Figure 3: Samples generated by our Semi-Supervised model when trained on Fashion-MNIST

3.3 Human activity recognition (UCI-HAR)

The UCI-HAR dataset [9] contains 10,299 samples, consisting of inertial sensor (accelerometer and gyroscope) data recorded from 30 human subjects, labeled with one of the following six activities: walking, walking upstairs, walking downstairs, sitting, standing and laying.

The testing conducted on the UCI-HAR dataset was not as thorough as with MNIST and Fashion-MNIST, and we performed only one run per test scenario (hence no error bars are given). The classification results are presented in Table 10 and compare the performance of the semi-supervised model to its supervised equivalent.


Table 10: UCI-HAR classification results (accuracy and log loss of M_ss,r and M_s,r for each of the three label budgets).

Again, the semi-supervised model is able to outperform its supervised equivalent for each subset of labels. In this case, the performance gain is particularly high when labels are in short supply, as demonstrated by the improvement over the supervised model in the smallest label budget test.

The performance of the model as an anomaly detector was also briefly evaluated, using ’walking’ as the anomaly class. The results, displayed in Table 11, again show an improvement in performance when providing an anomaly detector with labels.


Table 11: UCI-HAR anomaly detection results (AUC score of M_u,r and M_ss,r, with 'walking' as the anomalous class).

4 Disentangled representations

One major application of VAEs is to find low dimensional representations of real world data [11, 15]. For this task, disentanglement is considered an important quality metric [15, 14]. Generally speaking, disentanglement attempts to quantify how well a particular framework is able to identify important yet independent generating factors of its dataset. Multiple distinct metrics and benchmark data sets have been suggested for this, yet they have been shown to agree at least on a qualitative level [11]. In order to benchmark our semi-supervised model, we chose the betaVAE score [12] on the Small-NORB data set [13]. Our scores are calculated based on the latent layer (but not the ŷ layer), and are shown in Table 12.

The network architecture is described in the appendix, Table 16, and is similar to the CNN of Table 14 with two modifications: (a) the labels of the data set were incorporated using multiple cross-entropy loss terms (and one-hot softmax layers), one for each of the four label dimensions; (b) in this architecture the ŷ layers are intended to function only as an additional and sparse loss term on the latent layer, and thus ŷ is not forwarded to the decoder (no connection between ŷ and 'Merge' in Figure 1(a)). The latter step is necessary, since otherwise the network could bypass the latent layer using the ŷ layer, while maintaining reconstruction quality.

There are two hyperparameters: γ, the overall weight of the supervised cross-entropy term (Eq. 5), and β, the overall weight of the KL-divergence term. Both parameters were kept fixed for all cases of this experiment. They were optimized for maximum disentanglement in the completely unsupervised case. Note that this puts all other cases at a disadvantage, since their optimal hyperparameters presumably differ from the unsupervised case - but this emulates a real life scenario in which labeled datapoints are sparse and thus cannot be used for hyperparameter tuning. We empirically found that good results are achieved when γ is selected such that the average contribution of the cross-entropy term to the total loss is of the same order of magnitude as the reconstruction loss (after training converges). The optimal β was found to be rather small, in accordance with [11]. For the betaVAE score itself, we used a plain logistic regressor. Each datapoint for this classifier was based on the averaged absolute differences between the latent representations of two batches. The classifier was trained and tested with 2048 datapoints each.
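The score computation described above could be sketched as follows (assuming a helper `sample_pair_batch(factor, batch_size)` that returns two batches of observations agreeing only in the generative factor `factor`, and an `encode` function returning latent means; both names are placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def beta_vae_score(encode, sample_pair_batch, n_factors,
                   n_points=2048, batch_size=64, seed=0):
    """betaVAE metric: a plain logistic regressor predicts which generative
    factor was held fixed, from batch-averaged |z1 - z2| differences."""
    rng = np.random.default_rng(seed)

    def make_points(n):
        X, y = [], []
        for _ in range(n):
            k = int(rng.integers(n_factors))
            x1, x2 = sample_pair_batch(k, batch_size)
            X.append(np.abs(encode(x1) - encode(x2)).mean(axis=0))
            y.append(k)
        return np.array(X), np.array(y)

    X_train, y_train = make_points(n_points)
    X_test, y_test = make_points(n_points)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)   # classification accuracy = betaVAE score
```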

The final results are shown in Table 12: We always used the same fixed pool of unlabeled datapoints, but augmented training with various amounts of labeled datapoints. As expected, our approach can significantly outperform both the purely unsupervised and the purely supervised scenarios. Even a relatively small number of labeled datapoints seems sufficient. It should be noted that for very few labels there is a small, yet statistically significant, decrease in the betaVAE score. This could be due to the aforementioned fact that the hyperparameters were optimized for the unsupervised scenario, and such few labels were not sufficient to offset this disadvantage.

Table 12: betaVAE score of the representations generated by our semi-supervised model under varying label availability (number of unlabeled and labeled datapoints vs. betaVAE score). All results were obtained using the same hyperparameters, which were optimized for the purely unsupervised first row.

5 Future work

It would be interesting to see how much larger networks would benefit from the suggested regularization technique. For instance, the same technique could directly be applied to state-of-the-art computer vision networks like [10]. We leave this avenue for future investigation.

While the results of the semi-unsupervised learning already look promising, this approach so far makes no use of an additional input that labels could provide: incorporating user feedback by labeling false-positive and false-negative detections as such, with the goal of suppressing future false-positive/false-negative detections. In principle, our model architecture should allow incorporating such feedback.

One way to achieve this would be as follows: Prepare new classes for false-positive and false-negative anomaly detections. In the beginning, there will be no samples in these classes. However, once a sufficient amount of false-positive/false-negative detections has accumulated, the model is re-trained. The anomaly score could then be heuristically adjusted, for instance:

s̃(x) = s(x) · (1 − p_FP(x) + p_FN(x))    (6)

where s(x) is the unadjusted anomaly score, and p_FP and p_FN are the false-positive and false-negative probabilities, respectively, outputted by the model at inference time.

This treatment would even work when there are no labels available except the false-positive/false-negative assignments.
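Under the form of Eq. (6) given above, the adjustment itself is a one-liner (a sketch; the multiplicative form is one plausible choice of heuristic):

```python
def adjusted_anomaly_score(score, p_fp, p_fn):
    """Suppress likely false positives and boost likely false negatives, Eq. (6)."""
    return score * (1.0 - p_fp + p_fn)
```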

6 Conclusions

With one of the most common issues associated with training machine learning models being the availability of training data, semi-supervised models present the perfect opportunity to take advantage of every available scrap of data. This is a particularly valuable improvement in the supervised domain, given how much more abundant unlabeled data is than labeled data.

The versatility of the semi-supervised model proposed in this paper delivers concrete improvements across the entire spectrum of label availability.

Within the labeled domain, the value added by a semi-supervised approach is even greater when considering such a model in the context of failure prediction. With a purely supervised approach, a training dataset would consist entirely of failure events, requiring the system in question to fail hundreds if not thousands of times to gather a sizeable dataset. Depending on the system, collecting such a dataset from scratch could take decades; an obstacle which can often make a predictive system unobtainable. Taking advantage of the often abundant and easy to produce unlabeled data, this semi-supervised approach demonstrates the ability to converge towards an accurate predictive system on only a fraction of the labels.

In addition to reducing the time to deployment, the model offers further benefits in the supervised learning domain, being able to outperform equivalent classifiers due to the regularizing effect of the decoder and its associated reconstruction task.

In the purely unsupervised domain, the model achieves identical performance to a standard VAE, yet demonstrates a huge increase in performance with a tiny fraction of labels. Traditionally, a VAE must find a suitable dense representation of the system it is modelling in the latent layer. With the introduction of labels, the latent activations must not only embed a representation of the system, but also a classification of the system state. Ultimately, this additional information results in an improved embedding of the system state, not only enabling classifications, but improving reconstructions. In short, the labels provide the model with a better understanding of the system which it is reconstructing.

Appendix A Network architectures and training

A.1 MNIST and Fashion-MNIST: Model 1, FCN

We used the raw pixels, normalized to [0, 1], as input, corresponding to feature vectors of size 784. The architecture is shown below in Table 13. All encoder and decoder layers are simply stacked. The latent layers (μ and σ for a Gaussian prior) are forked from the last encoder layer, and the ŷ layer is merged together with the resampled latent layer as input to the decoder. The model is trained with RMSprop [8] without decay and a fixed momentum parameter. If not explicitly mentioned otherwise, the learning rate was kept fixed.

layer type dimensions comments
encoder input layer 784
fully connected 1024 relu activation
fully connected 1024 relu activation
latent layer fully connected 2 linear activation; latent gaussian mean
fully connected 2 linear activation; latent gaussian variance
fully connected 10 softmax activation; class prediction
decoder fully connected 1024 relu activation
fully connected 1024 relu activation
output layer 784 sigmoid activation
Table 13: Fully connected network architecture. The input corresponds to a 28x28 image reshaped into a single feature vector of size 784.

A.2 MNIST and Fashion-MNIST: Model 2, CNN

The images were normalized to [0, 1] but not reshaped. Here we used a series of convolutional layers as detailed in Table 14. The last dimension in the dimensions column corresponds to the feature dimension, while the first two correspond to the image dimensions. Again, all encoder and decoder layers were simply stacked, whereas the latent and ŷ layers were forked from the last encoder layer. The resampled latent layer and ŷ layer were concatenated as input for the decoder. The model is trained with RMSprop without decay and a fixed momentum parameter. If not explicitly mentioned otherwise, the learning rate was kept fixed.

layer type dimensions comments
encoder input layer (28, 28, 1)
CNN (28, 28, 64) stride 1
BN (28, 28, 64) with relu activation
CNN (28, 28, 64) stride 1
BN (28, 28, 64) with relu activation
CNN (14, 14, 64) stride 2
BN (14, 14, 64) with relu activation
Flatten 12544
fully connected 512
BN 512 with relu activation
Dropout 512
latent layer fully connected 2 linear activation; latent gaussian mean
fully connected 2 linear activation; latent gaussian variance
fully connected 10 softmax activation; class prediction
decoder fully connected 12544
BN 12544 with relu activation
Dropout 12544
Reshape (14, 14, 64)
ConvCNN (14, 14, 64) stride 1
BN (14, 14, 64) with relu activation
ConvCNN (14, 14, 64) stride 1
BN (14, 14, 64) with relu activation
ConvCNN (28, 28, 64) stride 2
BN (28, 28, 64) with relu activation
ConvCNN (output) (28, 28, 1) stride 1
Table 14: Convolutional network architecture (CNN). The input shape corresponds to a single greyscale image (x-axis, y-axis, channel). BN is an abbreviation for batch normalization layer, ConvCNN for a transposed convolution layer.

A.3 UCI-HAR dataset: RNN

The recurrent VAE flavor was applied to the UCI-HAR dataset [9]. We used a look-back dimension of 128, with the full architecture described in Table 15. The first dimension in the dimensions column corresponds to the look-back dimension, while the second corresponds to the feature dimension. The latent layers are forked from the last encoder layer, and concatenated prior to the first decoder layer. The model is trained with RMSprop without decay and a fixed momentum parameter. If not explicitly mentioned otherwise, the learning rate was kept fixed.

layer type dimensions comments
encoder input layer (128, 6)
fully connected along feature dim (128, 40) relu activation
LSTM (128, 40)
fully connected along feature dim (128, 30) relu activation
latent layer fully connected (128, 2) linear activation; latent gaussian mean
fully connected (128, 2) linear activation; latent gaussian variance
fully connected (128, 6) softmax activation; class prediction
decoder fully connected along feature dim (128, 30) relu activation
LSTM (128, 40)
fully connected along feature dim (128, 40) relu activation
output fully connected along feature dim (128, 6) linear activation; gaussian mean
fully connected along feature dim (128, 6) softplus activation; gaussian variance
Table 15: RNN network architecture. The input shape corresponds to a single time series with (128 time steps, 6 features).

A.4 Small-NORB dataset: CNN

The input pixel values were normalized to [0, 1]. Each stereo image pair was stacked into a single feature tensor of shape (96, 96, 2). The encoder and decoder are simply stacked and the full architecture is shown in Table 16. For each of the four generating factors of this data set (category, elevation, azimuth and lighting) a separate ŷ layer was added after the encoder. The latent layers (μ and σ) of a Gaussian prior are also added on top of the encoder. By contrast to the other models, only the resampled latent layer is used for the decoder. The model is trained with RMSprop without decay and a fixed momentum parameter. If not explicitly mentioned otherwise, the learning rate was kept fixed.

layer type dimensions comments
encoder input layer (96, 96, 2)
CNN (48, 48, 32) stride 2
BN (48, 48, 32) with relu activation

CNN (24, 24, 32) stride 2
BN (24, 24, 32) with relu activation

CNN (12, 12, 64) stride 2
BN (12, 12, 64) with relu activation

CNN (6, 6, 64) stride 2
BN (6, 6, 64) with relu activation

Flatten 2304
fully connected 256
BN 256 with relu activation
Dropout 256

latent layer
fully connected 32 linear activation; latent gaussian mean
fully connected 32 linear activation; latent gaussian variance
fully connected 5 softmax activation; one-hot prediction (category)
fully connected 9 softmax activation; one-hot prediction (elevation)
fully connected 18 softmax activation; one-hot prediction (azimuth)
fully connected 6 softmax activation; one-hot prediction (lighting)

decoder
fully connected 2304 based on resampled gaussian latent layers
BN 2304 with relu activation
Dropout 2304
Reshape (6, 6, 64)

ConvCNN (12, 12, 64) stride 2
BN (12, 12, 64) with relu activation

ConvCNN (24, 24, 64) stride 2
BN (24, 24, 64) with relu activation

ConvCNN (48, 48, 32) stride 2
BN (48, 48, 32) with relu activation

ConvCNN (96, 96, 32) stride 2
BN (96, 96, 32) with relu activation

ConvCNN (output) (96, 96, 2) stride 1
Table 16: CNN network architecture used for generating representations of the Small-NORB data set. The input shape corresponds to a single stereo image pair (x-axis, y-axis, left/right). BN is an abbreviation for batch normalization layer, ConvCNN for a transposed convolution layer.

Appendix B Samples from Fashion-MNIST data set

Figure 4: This figure shows some examples from the Fashion-MNIST dataset. Each class corresponds to three consecutive rows. The classes are (from top to bottom): T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag and Ankle Boot.

References

  1. Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling. Semi-supervised Learning with Deep Generative Models. In Advances in Neural Information Processing Systems, 3581-3589, 2014.
  2. Diederik P. Kingma, Max Welling. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
  3. Matthew J.F. Willetts, Stephen J. Roberts, Christopher C. Holmes. Semi-Unsupervised Learning with Deep Generative Models: Clustering and Classifying using Ultra-Sparse Labels. arXiv:1901.08560, 2019.
  4. Matthew J.F. Willetts, Aiden Doherty, Stephen J. Roberts, Chris Holmes. Semi-Unsupervised Learning of Human Activity using Deep Generative Models. arXiv:1810.12176, 2018.
  5. Felix Berkhahn, Richard Keys, Wajih Ouertani, Nikhil Shetty. One model to rule them all. https://relayr.io/blog/one-model-to-rule-them-all/, 2018.
  6. Y. LeCun, C. Cortes. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/
  7. Han Xiao, Kashif Rasul, Roland Vollgraf. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747, 2017.
  8. Geoffrey Hinton, Nitish Srivastava, Kevin Swersky. Overview of mini-batch gradient descent. https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf
  9. Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 24-26 April 2013.
  10. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He. Aggregated Residual Transformations for Deep Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21-26 July 2017.
  11. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. arXiv:1811.12359, 2018.
  12. Irina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. International Conference on Learning Representations (ICLR), 2017.
  13. Yann LeCun, Fu Jie Huang, Léon Bottou. Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
  14. Yoshua Bengio, Aaron Courville, Pascal Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8): 1798-1828, 2013.
  15. Michael Tschannen, Olivier Bachem, Mario Lucic. Recent Advances in Autoencoder-Based Representation Learning. arXiv:1812.05069, 2018.