A Close Look at Deep Learning with Small Data
In this work, we perform a wide variety of experiments with different deep learning architectures on datasets of limited size. Our study shows that model complexity is a critical factor when only a few samples per class are available. In contrast to the literature, we show that in some configurations the state of the art can be improved using low-complexity models. For instance, in problems with scarce training samples and without data augmentation, low-complexity convolutional neural networks perform comparably well or better than state-of-the-art architectures. Moreover, we show that even standard data augmentation can boost recognition performance by large margins. This result suggests developing more complex data generation/augmentation pipelines for cases when data is limited. Finally, we show that dropout, a widely used regularization technique, maintains its role as a good regularizer even when data is scarce. Our findings are empirically validated on sub-sampled versions of the popular CIFAR-10, Fashion-MNIST, and SVHN benchmarks.
Machine Learning (ML) popularity has rapidly increased thanks to the success of deep learning . In particular, convolutional neural network (CNN) architectures have achieved considerable success in a wide range of computer vision tasks including object classification , object detection  and semantic segmentation , just to cite a few. The two main ingredients that have favored the rise of this type of algorithm are i) networks with deeper structures and ii) the use of large annotated datasets. The latter requirement cannot always be fulfilled, for several reasons. Obtaining and labeling data is necessary to achieve strong results, but the process can be extremely expensive or not possible at all. For instance, in the medical field, high-quality annotations by radiology experts are often costly and not manageable at large scales . Several sub-domains of ML try to mitigate the need for training data, tackling the problem from different perspectives. Transfer learning aims at learning representations from one domain and transferring the learned features to a closely related target domain , . Similarly, few-shot learning uses a base set of labeled pairs to generalize from a scarce support set of target classes . Both approaches have gained much attention in the community but still require a large source of annotated data. Furthermore, the target domain must be related to the source domain. Another research direction, known as self-supervised learning, tries to reduce the demand for annotations. Usually, a large pool of images is used to teach a CNN to solve a pretext task . This task does not require human annotations and is conceived to teach general visual features that can be transferred to the downstream task. In this manner, costly human annotations are not needed, but the problem of collecting many images remains. In general, we would like to have systems that can recognize objects from just a few exemplars.
In this work, we present a detailed empirical study of deep learning models in the small data regime. Similarly to what has been done in  and , we benchmark the approaches by varying the number of data points in the training sample while keeping it low with respect to the current standards of computer vision datasets. Such a scenario can also be seen as the early period of an application that collects data over time and aims to perform its task as well as possible before large amounts of data have been gathered. For example, a robot involved in an interactive activity with people needs to recognize human actions or behaviour , . Due to its inherent difficulty, few past works have studied this problem. Indeed, it is very hard to train function approximators when the availability of multi-dimensional points to interpolate is scarce. Any ML model is highly prone to overfitting the training dataset, especially if its complexity is too high for the task at hand. CNNs have proved to be extraordinarily resistant to overfitting, although the number of trainable parameters is usually much greater than the number of data points . In this paper, we show that model complexity is a critical factor in small-data domains and that small nets can outperform big ones in scenarios with limited training samples. A large number of works have proposed techniques to increase the generalization capabilities of deep networks. For instance, data augmentation is commonly used to fight data scarcity and to increase generalization . We perform our analysis with and without data augmentation to understand its effectiveness when the dataset size is limited. In our experiments, we show that standard augmentation pipelines improve recognition performance by large margins. Clearly, the augmentation type should be carefully designed, since its success is highly correlated with the image type.
Moreover, dropout is an extremely popular technique that regularizes neural networks  by randomly dropping units to prevent co-adaptation and to favor generalization through ensembles. In this paper, we show that dropout is a good regularizer even when data is scarce, though slightly less effective when the number of samples per class is extremely low (e.g. ). In summary, the contributions of this paper are the following: 1) we perform a large set of experiments with different quantities of training data and deep architectures over three popular computer vision benchmarks; 2) we show that for small-data problems, low-complexity CNNs are comparable to or better than high-complexity ones, depending on the training set dimension and the use of data augmentation; 3) we demonstrate that standard data augmentation consistently improves the testing accuracy of deep networks trained with few samples; 4) we show that dropout proves to be a good regularizer even in these small-data settings.
II Related Work
Learning from datasets of limited size is extremely challenging and, for this reason, largely unsolved.
As previously said, few works have tried to tackle the problem of training deep architectures with a small number of samples due to the difficulty of generalizing to novel instances.
We start by mentioning a series of works that focused on the classification of vector data and mainly used the UCI Machine Learning Repository as a benchmark. In , the authors showed the superiority of random forests over a large set of classifiers including feed-forward networks. Later,  used a linear program to empirically decompose fitted neural networks into ensembles of low-bias sub-networks. They showed that these sub-networks were relatively uncorrelated, which led to an internal regularization process similar to what happens in random forests, obtaining comparable results. More recently,  proposed the use of neural tangent kernel (NTK) architectures in low-data tasks and obtained significant improvements over all previously mentioned classifiers.
All previous works did not test CNNs since their inputs were not images. When the input dimensionality increases, the classification task inevitably becomes more complex. For this reason, a straightforward approach to improve generalization is to implement techniques that try to synthesize new images through different transformations (e.g. data augmentation ). Some prior knowledge of the problem at hand might turn out to be useful in some cases . However, this makes data augmentation techniques not always generalizable to all possible image classification domains. It has also been proposed to train generative models (e.g. GANs) to increase the dataset size and, consequently, performance . Generating new images to improve performance is extremely attractive and effective. Yet, training a generative model might be computationally intensive or present severe challenges in the small-sample domain. In our work, we use standard data augmentation and do not focus on approaches that improve image synthesis. We show that even standard augmentation can be highly effective in improving recognition performance, and confirm that data augmentation/generation is a promising approach for datasets of limited size.
 suggested training CNNs with a greedy layer-wise method, analogous to that used in unsupervised deep networks, and showed that their method could learn more interpretable and cleaner visual features.  proposed the use of the cosine loss to prevent overfitting, claiming that the L2 normalization involved in the cosine loss is a strong, hyper-parameter-free regularizer when the availability of samples is scarce. They obtained the best results on fine-grained datasets that have between and samples per class. On the other hand,  performed experiments with convolutional neural tangent kernel (CNTK) networks on small CIFAR-10, showing the superiority of CNTK in comparison to a ResNet-34 model. Finally,  studied the generalization performance of deep networks as the size of the training set varies. They found that even larger networks can handle overfitting and obtain comparable or better results than smaller nets if properly optimized and calibrated.
III Small-Data Classification Problem
In this section, we first outline the definition of our classification problem with limited data, followed by describing the deep learning architectures studied in our experiments. Finally, we describe the datasets and regularization techniques used in this work.
III-A Problem definition
As a standard supervised ML problem, given a training distribution of images and a label distribution , our objective is to learn a classifier , parametrized by a set of variables , such that for any image with corresponding label , . Let us suppose that our distribution is made of different classes and that our training dataset sampled from and , is composed of images per class.
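As a sketch, with hypothetical notation (all symbols below are illustrative placeholders, not necessarily the paper's originals), the setup can be written as:

```latex
% Hypothetical notation; symbols are illustrative.
\[
\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{nC} \sim (\mathcal{X}, \mathcal{Y}), \qquad
f_{\theta} : \mathcal{X} \to \{1, \dots, C\},
\]
where $C$ is the number of classes, $n$ the number of images per class, and a
well-trained classifier satisfies $f_{\theta}(x_i) = y_i$ for most pairs.
```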
Differently from standard computer vision datasets, we choose to keep low. Although the notion of low is highly subjective, we follow two different sub-sampling protocols in our experiments. Firstly, to directly compare our tested models with the networks proposed in , is doubled each time, starting from up to . Secondly, since we were also interested in configurations with more data, we also varied in the set , corresponding to a ten-fold increase of samples per class with respect to the protocol just cited. In this manner, it is easy to benchmark model capabilities as the number of samples per class increases. In the results section, we specify when we are using the first or the second sub-sampling protocol.
As previously said, such a protocol could easily arise in a situation where data is collected over time. Understanding which model is more effective before large quantities of data have been gathered is an important research question.
In general, there is no specific limit on the number of classes . In our work, we have chosen three problems with .
Since we are interested in comparing deep models with different complexity we define three CNNs with increasing filter widths and a more complex ResNet-20.
CNN. We test a standard CNN architecture made of convolutional and max-pooling layers as feature extractors. The entire network minimizes the standard classification loss:
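As stated in the training setup below, the classification loss is mainly the categorical cross-entropy; in a standard form (notation ours), Eq. 1 would read:

```latex
\[
\mathcal{L}_{CE}(\theta) \;=\; -\frac{1}{|\mathcal{D}|}
\sum_{(x, y) \in \mathcal{D}} \log p_{\theta}(y \mid x) \tag{1}
\]
where $p_{\theta}(y \mid x)$ is the softmax probability the network assigns
to the correct class $y$ of image $x$.
```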
More details about the structure will follow.
ResNet-20. Residual networks were introduced in  to improve the training of very deep neural networks. The basic learning block of such networks is the residual block. ResNets are made of different blocks of stacked layers (convolutions with non-linear activation functions and batch normalization). Shortcut connections join the input of each block to the output. This addition helps gradients to flow backward and ease the training of the overall network. ResNet-20 minimizes the same loss described in Eq. 1.
Models structure. The CNNs process the input image through four convolutional layers. The kernel is of size and moves with stride along both the width and height directions. The max-pooling layers, on the other hand, have a pool size and stride of . The output of the convolutional feature extractor is flattened. Then, a feed-forward layer maps the extracted features to the requested output dimensionality .
We do not describe the full structure of ResNet-20, which can be found in . For what concerns the kernel widths of ResNet-20, we set them to , and , as in the default CIFAR-10 network proposed in .
Models complexity. We control the complexity of the CNNs by the number of filters in convolutional layers. We fix the lowest model complexity for the CNN by setting a number of filters for the first convolutional layer. Then, for each deeper layer, we simply double their number in order to increase complexity. We define a low, medium and high complexity CNN that we will call CNN-lc, CNN-mc, CNN-hc. The three architectures have respectively , and base filters. A detailed description can be found in Tab. I. We also report the complexity in terms of parameters and floating-point operations (FLOPs) of all networks.
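As a rough sketch of how the filter-doubling scheme drives model complexity, the snippet below counts the convolutional parameters of a four-layer CNN; the base filter counts (8, 16, 32) and the 3x3 kernel size are illustrative placeholders, not the actual values from Tab. I:

```python
# Rough convolutional-parameter count for four-layer CNNs whose filter
# widths double at every layer. Base widths below are placeholders.

def conv_params(in_ch, out_ch, k=3):
    """Weights plus biases of one conv layer with k x k kernels."""
    return k * k * in_ch * out_ch + out_ch

def cnn_params(base_filters, in_ch=3):
    """Total conv parameters over four layers with doubling widths."""
    total, ch = 0, in_ch
    for layer in range(4):
        out = base_filters * (2 ** layer)  # width doubles at each layer
        total += conv_params(ch, out)
        ch = out
    return total

for name, base in [("CNN-lc", 8), ("CNN-mc", 16), ("CNN-hc", 32)]:
    print(name, cnn_params(base))
```

With these placeholder widths, the high-complexity variant already has more than ten times the convolutional parameters of the low-complexity one, consistent with the order-of-magnitude gap reported in the experiments.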
Training setup. All standard CNNs have been optimized with Adam and default parameters .
For what concerns ResNet-20, we slightly modified the training policy originally proposed in . The optimizer used is stochastic gradient descent (SGD) with weight decay and Nesterov momentum respectively set to and . We start with a learning rate of and decrease it by an order of magnitude after 75% of the total number of iterations. We increased the number of iterations with the initial learning rate to make sure it is decreased only after the training loss has plateaued. Furthermore, we noticed that decreasing the learning rate a second time did not further improve testing performance. The network was fed with a mini-batch of images. We preferred a smaller batch than the original one since it led to better performance. For the same reason, we also trained the other CNNs with a batch size of . Due to the different training set dimensions (including the use of data augmentation), we train the networks for a different number of epochs. For the sub-sampling protocol with , assuming that the first element of the set corresponds to datasets with samples per class and the last one to , architectures are trained for .
For the other sub-sampling protocol () and without data augmentation, models are trained for epochs. When augmentation is used, we slightly increase the number of epochs to make sure the models have converged (). Note that standard CNNs require far fewer iterations than ResNet-20 to converge because of their inherently lower complexity and the optimizer used. Indeed, Adam is known to be faster than SGD in terms of convergence speed. To make sure that overfitting did not influence our results, for each run, the maximum value of the accuracy scored on the test set was considered. We mainly use the categorical cross-entropy as classification loss (). When specified, we perform some comparisons with the cosine loss proposed in .
We perform our experiments on three popular computer vision datasets, namely CIFAR-10 , FMNIST (Fashion-MNIST)  and SVHN .
FMNIST is a popular dataset comprised of black and white images of dimension . The total number of categories is and each class corresponds to a fashion item, including dresses, trousers, sandals, etc. The dataset originally has training images and testing images, with both sets balanced.
CIFAR-10 is an established computer vision benchmark consisting of color images () coming from different classes of objects and animals. The sizes of training and test splits of CIFAR-10 are equal to the ones of FMNIST.
Finally, SVHN is a real-world image dataset semantically similar to MNIST, since it contains images of digits. However, it is significantly harder because recognizing numbers in natural scene images constitutes a more complex task. The cropped version originally has training and testing images of dimension .
We have chosen datasets made of ten classes and relatively small images since the restrictions imposed on the number of samples per class make the problem already challenging. As previously anticipated, since all three original datasets contain several samples per class (many more than our defined ), we build, for each dimension, a sub-sampled version of the original dataset. We will refer to any down-sampled version of any dataset as . For instance, sCIFAR-10-20 indicates the down-sampled version of CIFAR-10 with twenty samples per class.
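A minimal sketch of such balanced sub-sampling, which draws exactly n images per class from the full training set (helper names are ours), could look like:

```python
# Balanced sub-sampling sketch: build e.g. sCIFAR-10-20 by drawing
# exactly n_per_class images per class. Seeded for reproducible splits.
import random
from collections import defaultdict

def subsample(dataset, n_per_class, seed=0):
    """dataset: list of (image, label) pairs; returns a balanced subset."""
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append((image, label))
    rng = random.Random(seed)
    subset = []
    for label in sorted(by_class):
        subset.extend(rng.sample(by_class[label], n_per_class))
    return subset

# Toy example: 10 classes x 100 samples, keep 20 per class.
full = [(f"img_{c}_{i}", c) for c in range(10) for i in range(100)]
small = subsample(full, 20)
print(len(small))  # 200
```

Varying the seed across the repeated runs mentioned below yields different sub-sampled versions of the same size, while the test set stays fixed.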
The testing sets remain fixed to the original sizes throughout all the evaluations. To ensure consistency of the results, we perform runs for each sub-sampled version of each dataset.
III-D Regularization techniques
We study the influence of two popular regularization techniques for some of our experiments with datasets of limited size.
Dropout. We use dropout in the last layer of CNN-lc, CNN-mc and CNN-hc. We vary the dropping-rate probabilities in the set , corresponding to absent, medium, and high dropout. We will show that high rates help to prevent overfitting except in rare cases.
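For reference, a minimal sketch of inverted dropout, the standard formulation applied at training time (rate value and names here are illustrative):

```python
# Inverted dropout on a feature vector: zero each unit with probability
# `rate` and scale survivors by 1/(1-rate), so the expected activation
# is unchanged and no rescaling is needed at test time.
import random

def dropout(features, rate, rng, training=True):
    """Apply inverted dropout; identity at test time or when rate is 0."""
    if not training or rate == 0.0:
        return list(features)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in features]

rng = random.Random(0)
out = dropout([1.0] * 1000, rate=0.5, rng=rng)
survivors = sum(1 for x in out if x != 0.0)
print(survivors)  # roughly 500, each scaled to 2.0, so the mean stays ~1
```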
Data augmentation. Artificially increasing the images of a dataset by applying data augmentation techniques has been shown to improve the generalization capabilities of deep models even when a great quantity of data is available .
The transformations applied to the images should not modify the semantics needed to perform the classification task. To this end, we applied three different sets of augmentations to the three datasets. Inputs are augmented at each epoch, constituting a non-static dataset.
For CIFAR-10 images we used the standard augmentation pipeline also used in . More precisely, each image is padded 4 pixels on each side and a crop is randomly sampled from the padded image or its horizontal flip.
FMNIST objects present small variations in orientation and scale with respect to the ones of CIFAR-10. In this case, we padded the images with 2 pixels per side and cropped a window of pixels. We have not applied horizontal flipping.
Finally, digits in SVHN undergo noticeable changes in terms of illumination and contrast. For this reason, we adjusted image brightness and contrast by random factors, the first contained in and the second between and . Further, we applied the cropping policy of CIFAR-10 without horizontal flipping, which could change the numbers' semantics.
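The pad-and-crop policies above can be sketched as follows (pure Python, single-channel image; helper names are ours):

```python
# Pad-and-crop augmentation sketch: pad the image by `pad` pixels per
# side, take a random crop of the original size, and optionally apply a
# horizontal flip where it preserves semantics (e.g. not for SVHN digits).
import random

def pad_and_crop(image, pad, rng, hflip=False):
    """image: H x W nested list of pixel values; returns an H x W crop."""
    h, w = len(image), len(image[0])
    padded = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for r in range(h):
        for c in range(w):
            padded[r + pad][c + pad] = image[r][c]
    top = rng.randrange(2 * pad + 1)    # random vertical offset
    left = rng.randrange(2 * pad + 1)   # random horizontal offset
    crop = [row[left:left + w] for row in padded[top:top + h]]
    if hflip and rng.random() < 0.5:
        crop = [row[::-1] for row in crop]
    return crop

rng = random.Random(0)
img = [[r * 32 + c for c in range(32)] for r in range(32)]
aug = pad_and_crop(img, pad=4, rng=rng, hflip=True)
print(len(aug), len(aug[0]))  # 32 32
```

Since the offsets are redrawn every epoch, each training image yields a slightly different view each time, which is what makes the dataset non-static.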
IV-A Influence of models complexity on performance
First, we analyze the impact of model complexity in our small-data classification problems. To this end, we test the three standard CNNs with increasing complexity (CNN-lc, CNN-mc, and CNN-hc) and ResNet-20, which has the highest complexity in terms of FLOPs (see Tab. I for more details). In this case, all networks are trained with the cross-entropy loss. The results of this analysis are shown in Fig. 1. Note that for this analysis, data augmentation is not used. The standard CNNs consistently outperform ResNet-20 on the sCIFAR10 problem when is smaller than samples. ResNet scores the best accuracy of the four models only when . The difference between ResNet-20 and CNN-hc is roughly for sCIFAR10-10 up to sCIFAR10-320. For larger training sets, the difference sharply decreases until it becomes negative. It is interesting to note that CNN-lc also keeps a consistent gap up to sCIFAR10-320, despite having roughly two orders of magnitude fewer FLOPs than ResNet-20. Similar behavior can be noted on sFMNIST and sSVHN, with the only difference that the most complex model increases its performance more rapidly. This is probably due to the intrinsic difficulty of the classification problem. When the training and testing distributions are more similar (e.g. sFMNIST or sSVHN), the addition of training samples helps the classifier at a faster rate. On sFMNIST and sSVHN, ResNet starts to have comparable accuracy when the number of samples per class is at least and , respectively. The gap reaches a net difference in terms of average accuracy of around for sSVHN-40 and sFMNIST-10. For what concerns the comparison among the CNNs, we note that results are comparable although there is a significant difference in model complexity (i.e. CNN-hc has more than one order of magnitude more parameters than CNN-lc).
IV-B Influence of regularization techniques on performance
Here, we study how two popular regularization techniques such as dropout and data augmentation influence recognition performance when a few samples are available.
Dropout. Fig. 3 shows the results of the three CNNs with different levels of dropout on the three datasets. Exact numbers are reported in Tab. II. For all three cases, dropout regularizes the dense layer of the classifier.
On sCIFAR-10, we note that high drop-rates () improve the generalization abilities of all three networks. The largest improvements over the case without dropout are obtained with CNN-lc (around ). However, CNN-hc also outperforms its variants with absent and medium dropout.
A similar picture can be seen for sSVHN. Here, improvements are even more noticeable when the training set is limited. For instance, CNN-mc with high dropout outperforms its version without dropout by on sSVHN-40. Improvements decrease as the training set grows, but they are still noticeable.
Finally, on sFMNIST the improvements are less important, and using dropout or not does not seem to influence the final testing accuracy much. When present, improvements are very small. Dropout seems to be most effective when using CNN-hc with or samples per class.
To summarize, dropout maintains its role as a good regularizer even when the dataset has a limited size. Networks equipped with it either generalize better or perform comparably well, depending on the type of dataset.
Data augmentation. We report the results regarding the use of data augmentation in Fig. 2. Refer to Tab. II for the exact values. We have chosen to train two networks with the previously specified augmentation policies. The first is CNN-hc, which generally scored the highest testing accuracy among the standard CNNs. The second is ResNet-20, which is expected to noticeably improve given the help of data augmentation. Starting with sCIFAR-10, we note that data augmentation boosts recognition abilities in almost all cases, playing an important role from roughly samples per class. The maximum and minimum gains of CNN-hc correspond to and with and . On the other hand, net improvements of the augmented ResNet-20 are and with and samples per class. It is interesting to notice that data augmentation allows ResNet-20 to match the CNN-hc testing accuracy with . In the analysis without augmentation, we have previously seen that this happened only with at least samples per class.
Large gains are also obtained on sSVHN. However, the trend is different from the one seen on sCIFAR-10. Indeed, in this case, the greatest improvements are obtained with smaller training sets (). Augmented CNN-hc scores a maximum increase of with while ResNet-20 of with . Differently from sCIFAR-10, we also notice that augmentation does not give a clear advantage to ResNet-20. Indeed, the two architectures perform comparably well, except for larger datasets () where ResNet-20 scores improvements of around .
Finally, data augmentation in sFMNIST shows low gains for CNN-hc (maximum around ). On the other hand, ResNet-20 is still largely helped when data is very scarce. Testing accuracy is boosted by in the case of sFMNIST-10. As we have also seen in sCIFAR-10, here ResNet-20 can match CNN-hc performance when trained with smaller datasets. In this scenario, it happens with half of the samples, more precisely when is equal to .
To sum up, from our experiments we have seen that the success of data augmentation seems to be related to the dataset size and type. However, in many cases, testing accuracy has been boosted (up to margins) making even standard augmentation policies a must-use when dealing with a small dataset.
IV-C Comparison with the state of the art
Finally, in Fig. 4 we compare the CNN-hc and ResNet-20 trained with the cross-entropy loss to state-of-the-art techniques.
More precisely, we use the sub-sampling protocol of CIFAR-10 originally proposed in  (, ) to compare our networks to CNTK.
In this case, augmentation is turned off to match the procedure used in .
Further, we also train CNN-hc and ResNet-20 with the cosine loss that was proposed as a state-of-the-art loss function for deep learning with tiny datasets in .
We start by comparing CNN-hc and ResNet-20 trained with the two different losses. We note that the cosine loss makes the two architectures either comparable to or worse than the networks trained with the cross-entropy. Indeed, the model trained with the cross-entropy loss performs better in many cases. We presume that the dataset type, size, and complexity of the model greatly influence the final performance. The experiments in  were run with high-capacity neural nets (i.e. ResNet-110, ResNet-50), mainly on fine-grained datasets with a greater number of classes. Furthermore, in all training set-ups, data augmentation was included. In the setup proposed by Arora et al. , the cosine loss does not outperform the cross-entropy. This setup is extremely hard for larger networks since the number of training samples is extremely low and data augmentation is not used. The obtained results are reasonable and somewhat expected. Indeed, ResNets trained with the cross-entropy or cosine loss are the worst models. More studies regarding the use of the cosine loss should be performed to actually understand when it is advantageous over the cross-entropy.
Then, we compare CNN-hc and ResNet-20 with CNTK. As already shown in , the CNTK architecture outperforms ResNets. In the original paper, the comparison was with a ResNet-34. We confirm the superiority of CNTK also over a smaller ResNet. However, we note that our CNN-hc clearly outperforms CNTK in all cases. The gains are around throughout all training splits, with the maximum obtained with and the minimum with .
To recap, when using the hard training protocol proposed in  with extremely limited samples and without data augmentation, the cosine loss does not give a clear advantage over the cross-entropy considering the two main architectures that we have used in this work. Moreover, the computationally simple CNN-hc outperforms CNTK in all sub-sampled versions of CIFAR-10 making low-complexity CNNs a viable choice in these extreme training conditions.
In this work, we have tackled the problem of image classification with few samples per class.
Although the results are still very far from the successes of high-capacity models with tons of data, we encourage the community to study and improve the capabilities of deep networks with tiny datasets.
We have run several experiments and trained different flavors of deep architectures on three popular computer vision benchmarks.
In the experiments regarding the influence of model complexity on performance, we have shown that relatively simple networks can be less prone to overfitting and generalize better on small datasets. Therefore, future architectures should be compared not only with state-of-the-art models (e.g. ResNet-34) but also with simpler networks. On the other hand, when data augmentation is used, deeper networks rapidly gain higher performance even with basic augmentation policies, thanks to the artificial addition of training images.
Furthermore, we ran a wide analysis to test the benefits of popular regularization techniques such as data augmentation and dropout in small-data settings. We found that the former can induce large improvements in recognition capabilities, depending on the dataset type and size. As for dropout, we have shown that it generally improves results and should be used as well. In future work, we would like to perform the proposed analysis on other datasets and implement more sophisticated data augmentation techniques.
Work partially funded by the EU Horizon 2020 research and innovation program under AI4EU project grant N. 825619.
- (2020) Harnessing the power of infinitely wide deep nets on small-data tasks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
- (2020) Deep learning on small datasets without pre-training using cosine loss. In The IEEE Winter Conference on Applications of Computer Vision, pp. 1371–1380.
- (2012) Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pp. 17–36.
- (2020) Small data, big decisions: model selection in the small-data regime. In Proceedings of the 37th International Conference on Machine Learning.
- (2014) Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research 15 (1), pp. 3133–3181.
- (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2017) Frankenstein: learning deep face representations using small data. IEEE Transactions on Image Processing 27 (1), pp. 293–303.
- (2020) Self-supervised visual feature learning with deep neural networks: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- (2017) Generalization in deep learning. arXiv preprint arXiv:1710.05468.
- (2015) Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015.
- (2009) Learning multiple layers of features from tiny images.
- (2017) A survey on deep learning in medical image analysis. Medical Image Analysis 42, pp. 60–88.
- (2019) Generative modeling for small-data object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6073–6081.
- (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440.
- (2011) Reading digits in natural images with unsupervised feature learning.
- (2018) Modern neural networks generalize on small data sets. In Advances in Neural Information Processing Systems, pp. 3619–3628.
- (2009) A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22 (10), pp. 1345–1359.
- (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
- (2015) Supervised greedy layer-wise training for deep convolutional networks with small datasets. In Computational Collective Intelligence - 7th International Conference, ICCCI 2015, Madrid, Spain, September 21-23, 2015, Proceedings, Part I, Lecture Notes in Computer Science, Vol. 9329, pp. 275–284.
- (2019) A survey on image data augmentation for deep learning. Journal of Big Data 6 (1), pp. 60.
- (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15 (1), pp. 1929–1958.
- (2017) Incremental learning for robot perception through HRI. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2772–2777.
- (2018) Meta-learning: a survey. arXiv preprint arXiv:1810.03548.
- (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
- (2015) Deep learning. Nature 521 (7553), pp. 436–444.
- (2019) Prediction of intentions behind a single human action: an application of convolutional neural network. In 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), pp. 670–676.