Classification-Reconstruction Learning for Open-Set Recognition
Open-set classification is the problem of handling ‘unknown’ classes that are not contained in the training dataset, whereas traditional classifiers assume that only known classes appear in the test environment. Existing open-set classifiers rely on deep networks trained in a supervised manner on known classes in the training set; this causes specialization of the learned representations to the known classes and makes it hard to distinguish unknowns from knowns. In contrast, we train networks for joint classification and reconstruction of input data. This enhances the learned representation so as to preserve information useful for separating unknowns from knowns, as well as for discriminating among the known classes. Our novel Classification-Reconstruction learning for Open-Set Recognition (CROSR) utilizes latent representations for reconstruction and enables robust unknown detection without harming the known-class classification accuracy. Extensive experiments reveal that the proposed method outperforms existing deep open-set classifiers on multiple standard datasets and is robust to diverse outliers. The code is available at https://nae-lab.org/~rei/research/crosr/.
1 Introduction
To be deployable in real applications, recognition systems need to be tolerant of unknown things and events that were not anticipated during the training phase. However, most existing learning methods are based on the closed-world assumption; that is, the training datasets are assumed to include all classes that appear in the environments where the system will be deployed. This assumption is easily violated in real-world problems, where covering all possible classes is almost impossible. Closed-set classifiers are prone to errors on samples of unknown classes, and this limits their usability [47, 44].
In contrast, open-set classifiers can detect samples that belong to none of the training classes. Typically, they fit a probability distribution to the training samples in some feature space and detect outliers as unknowns. For the features that represent the samples, almost all existing deep open-set classifiers rely on those acquired via fully supervised learning [2, 9, 41], as shown in Fig. 1 (a). However, such features emphasize what is discriminative among the known classes; they are not necessarily useful for representing unknowns or for separating unknowns from knowns.
In this study, our goal is to learn efficient feature representations that can classify known classes as well as detect unknowns as outliers. Since we cannot assume anything about the outliers beforehand, it is natural to add unsupervised learning as a regularizer so that the learned representations acquire information that is important in general but may not be useful for classifying the given classes. Thus, we utilize unsupervised learning of reconstruction in addition to supervised learning of classification. Reconstruction of input samples from low-dimensional latent representations inside the networks is a general form of unsupervised learning, and representations learned via reconstruction are useful in several tasks. Although there are previous successful examples of classification-reconstruction learning, such as semi-supervised learning and domain adaptation, this study is the first to apply deep classification-reconstruction learning to open-set classification.
Here, we present a novel open-set classification framework, called Classification-Reconstruction learning for Open-Set Recognition (CROSR). As shown in Fig. 1 (b), the open-set classifier consists of two parts: a closed-set classifier and an unknown detector, both of which exploit a deep classification-reconstruction network. (We refer to the detection of unknowns as unknown detection, and known-class classification as known classification.) While the known-class classifier exploits the supervisedly learned prediction $\mathbf{y}$, the unknown detector uses a reconstructive latent representation $\mathbf{z}$ together with $\mathbf{y}$. This allows unknown detectors to exploit a wider pool of features that may not be discriminative for known classes. Additionally, in the higher-level layers of supervised deep nets, details of the input tend to be lost [51, 6], which may not be preferable for unknown detection. CROSR can exploit the reconstructive representation to complement the information lost in the prediction $\mathbf{y}$.
To provide effective $\mathbf{y}$ and $\mathbf{z}$ simultaneously, we further design deep hierarchical reconstruction nets (DHRNets). The key idea in DHRNets is bottlenecked lateral connections, which make it possible to jointly learn rich representations for classification and compact representations for the detection of unknowns. DHRNets learn to reconstruct each intermediate layer of the classification network using latent representations, i.e., mappings to low-dimensional spaces, and as a result they acquire a hierarchical latent representation. With the hierarchical bottlenecked representation in DHRNets, the unknown detector in CROSR can easily exploit multi-level anomaly factors thanks to the representations’ compactness. This bottlenecking is crucial, because outliers are harder to detect in higher-dimensional feature spaces due to concentration on the sphere. Existing autoencoder variants, which are useful for outlier detection because they learn compact representations [52, 1], cannot afford large-scale classification because the bottlenecks in their main streams limit the expressive power for classification. CROSR with a DHRNet becomes more robust to a wide variety of unknown samples, some of which are very similar to the known-class samples. Our experiments on five standard datasets show that representations learned via reconstruction serve to complement those obtained via classification.
Our contribution is three-fold: First, we discuss the usefulness of deep reconstruction-based representation learning in open-set recognition for the first time; all other deep open-set classifiers are based on discriminative representation learning on known classes. Second, we develop a novel open-set recognition framework, CROSR, which is based on DHRNets and jointly performs known classification and unknown detection using them. Third, we conduct experiments on open-set classification on five standard image and text datasets, and the results show that our method outperforms existing deep open-set classifiers for most combinations of known data and outliers. The code related to this paper is available at https://nae-lab.org/~rei/research/crosr/.
2 Related work
Open-set classification Compared with closed-set classification, which has been investigated for decades [7, 5, 8], open-set classification has been surprisingly overlooked. The few studies on this topic mostly utilized linear, kernel, or nearest-neighbor models. For example, the Weibull-calibrated SVM considers a distribution of decision scores for unknown detection. Center-based similarity space models represent data by their similarity to class centroids in order to tighten the distributions of positive data. Extreme value machines model class-inclusion probabilities using an extreme-value-theory-based density function. Open-set nearest-neighbor methods utilize the distance ratio to the nearest and second-nearest classes. Among them, sparse-representation-based open-set recognition shares the idea of reconstruction-based representation learning with ours. The difference is that we consider deep representation learning, while it uses a single-layer linear representation. These models cannot be applied to large-scale raw data without feature engineering.
Deep open-set classifiers originated in 2016 with Openmax, and few have been reported since then. G-Openmax, a direct extension of Openmax, trains networks with unknown data synthesized by generative models. However, it cannot be applied to natural images other than hand-written characters due to the difficulty of generative modeling. DOC (deep open classifier) [41, 42], which is designed for document classification, enables end-to-end training by eliminating outlier detectors outside the networks and using sigmoid activations in the networks to perform joint classification and outlier detection. Its drawback is that the sigmoids do not have the compact abating property; namely, they may be activated by an input infinitely distant from all of the training data, and thus its open space risk is not bounded.
Outlier detection Outlier (also called anomaly or novelty) detection can be incorporated into open-set classification as an unknown detector. However, outlier detectors are not open-set classifiers by themselves, because they have no discriminative power within the known classes. Generic methods for anomaly detection include one-class extensions of discriminative models such as SVMs or forests, generative models such as Gaussian mixture models, and subspace methods. However, most of the recent anomaly-detection literature focuses on incorporating domain knowledge specific to the task at hand, such as cues from videos [48, 14], and such methods cannot be used to build general-purpose open-set classifiers.
Deep nets have also been examined for outlier detection. The deep approaches mainly use autoencoders trained in an unsupervised manner, in combination with GMMs, clustering, or one-class learning. Generative adversarial nets can be used for outlier detection via their reconstruction errors and discriminators’ decisions; this usage differs from ours, which utilizes latent representations. However, unlike in supervised learning, deep nets are not always the clear winners in outlier detection, because they must be trained in an unsupervised manner and are less effective for that reason.
Some studies use networks trained in a supervised manner to detect anomalies that are not from the distributions of the training data [13, 19]. However, their methods cannot simply be extended to open-set classifiers, because they use input preprocessing, for example, adversarial perturbation, and this operation may degrade known-class classification.
Semi-supervised learning In semi-supervised learning settings, including domain adaptation, reconstruction is useful as a data-dependent regularizer [32, 23]. Among such methods, ladder nets are partly similar to ours in using lateral connections, except that ladder nets do not have the bottleneck structure. Our work aims at demonstrating that reconstructive regularizers are also useful in open-set classification. However, the usage of the regularizers is largely different: CROSR uses them to prevent the representations from overly specializing to the known classes, while semi-supervised learners use them to incorporate unlabeled data into their training objectives. Furthermore, in semi-supervised settings, reconstruction errors are computed on unlabeled data as well as on labeled training data. In open-set settings, it is impossible to compute reconstruction errors on any unknown data; we only use labeled (known) training data.
3 Background: Openmax
Before introducing CROSR, we briefly review Openmax, the existing deep open-set classifier. We also introduce the terminology and notation.
Openmax is an extension of Softmax. Given a set of known classes $K = \{1, \ldots, N\}$ and an input data point $x$, Softmax is defined as follows:

$p(y = i \mid x) = \exp(y_i) \,/\, \textstyle\sum_{j \in K} \exp(y_j),$
where $f$ denotes the network as a function and $\mathbf{y} = f(x)$ denotes the representation of its final hidden layer, whose dimensionality is equal to the number of the known classes. To be consistent with the Openmax literature, we refer to it as the activation vector (AV). Softmax is designed for closed-set settings where $y \in K$; in open-set settings, we also need to consider $y \notin K$. This is achieved by calibrating the AV by the inclusion probabilities of each class:

$\hat{y}_i = y_i\, w_i(x) \;\; (i \in K), \qquad \hat{y}_0 = \textstyle\sum_{i \in K} y_i \left(1 - w_i(x)\right), \qquad p(y = i \mid x) = \exp(\hat{y}_i) \,/\, \textstyle\sum_{j=0}^{N} \exp(\hat{y}_j),$
where $w_i(x)$ represents the belief that $x$ belongs to the known class $i$. Here, $\hat{\mathbf{y}}$, the calibrated activation vector, prevents Openmax from giving high confidences to outliers that give small $w_i(x)$, i.e., the unknown samples that do not belong to $K$. Formally, the class $i = 0$ represents the unknown class. Usage of $w_i(x)$ can be understood as a proxy for $p(x \mid y = i)$, which is harder to model due to inter-class variances.
For modeling class-belongingness $w_i(x)$, we need a distance function and its distribution. The distance measures the affinity of a data point to each class. Statistical extreme-value theory suggests that the Weibull family of distributions is suitable for this purpose. Assuming that the distance $d_i(x)$ of the inliers follows a Weibull distribution, class-belongingness can be expressed using the cumulative density function,

$w_{s(i)}(x) = 1 - \dfrac{\alpha - i}{\alpha}\left(1 - \exp\!\left(-\left(d_{s(i)}(x) / \lambda_{s(i)}\right)^{\kappa_{s(i)}}\right)\right).$
Here, $\lambda_i$ and $\kappa_i$ are parameters of the distribution that are derived from the training data of the class $i$. $(\alpha - i)/\alpha$ is a heuristic calibrator that makes a larger discount in more confident classes, and is defined by a hyperparameter $\alpha$. $s(i)$ is the index in the AV sorted in descending order.
As a class-belongingness measure, we used the distance of AVs from the class means, similarly to nearest non-outlier classification:

$d_i(x) = \left\| \mathbf{y} - \boldsymbol{\mu}_i \right\|,$

where $\boldsymbol{\mu}_i$ is the mean AV of the training samples of class $i$. This is a strong simplification, assuming that $p(x \mid y = i)$ depends only on the AV $\mathbf{y}$.
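The calibration above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function name `openmax_calibrate`, the two-parameter Weibull form (scale `lam`, shape `kappa`), and the per-class mean distances are assumptions drawn from the description in this section.

```python
import numpy as np

def openmax_calibrate(av, class_means, weibull_params, alpha=3):
    """Recalibrate an activation vector (AV) with per-class Weibull CDFs.

    av: activation vector y of length N; class_means: list of mean AVs;
    weibull_params: list of (lam, kappa) per class. Returns probabilities
    over N+1 classes, where index 0 is the unknown class.
    """
    av = np.asarray(av, dtype=float)
    n = len(av)
    # Distance of the AV to each class mean (nearest-non-outlier style).
    dists = np.array([np.linalg.norm(av - mu) for mu in class_means])
    # Classes ranked by activation, descending; only the top-alpha
    # classes are discounted, with the most confident discounted most.
    order = np.argsort(av)[::-1]
    w = np.ones(n)
    for rank, i in enumerate(order[:alpha], start=1):
        lam, kappa = weibull_params[i]
        cdf = 1.0 - np.exp(-((dists[i] / lam) ** kappa))  # Weibull CDF
        w[i] = 1.0 - ((alpha - rank) / alpha) * cdf
    hat = av * w
    unknown = np.sum(av * (1.0 - w))  # activation mass routed to "unknown"
    scores = np.concatenate([[unknown], hat])
    e = np.exp(scores - scores.max())
    return e / e.sum()  # index 0 = unknown class
```

An AV close to its class mean keeps its activation, while an AV far from every class mean has its activation discounted and shifted to the unknown class.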
4 CROSR: Classification-reconstruction learning for open-set recognition
Our design of CROSR is based on an observation about Openmax’s formulation: AVs are not necessarily the best representations for modeling the class-belongingness $w_i(x)$. Although AVs in supervised networks are optimized to give correct predictions $p(y \mid x)$, they are not encouraged to encode information about $x$ itself, and thus they are not sufficient for testing whether $x$ itself is probable within class $i$. We alleviate this problem by exploiting reconstructive latent representations, which encode more information about $x$.
4.1 Open-set classification with latent representations
To enable the use of latent representations for reconstruction in the unknown detector, we extend the Openmax classifier (Eqns. 1–4) as follows. We replace Eqn. 1 so that the main-body network serves both known classification and reconstruction:

$(\mathbf{y}, \mathbf{z}) = f(x), \qquad \tilde{x} = g(\mathbf{z}).$
Here we have introduced $g$, a decoder network used only in training to make the latent representation $\mathbf{z}$ meaningful via reconstruction. $\tilde{x}$ is the reconstruction of $x$ from $\mathbf{z}$. These equations correspond to the left part of Fig. 1 (b).
The network’s prediction $\mathbf{y}$ and latent representation $\mathbf{z}$ are jointly used in the class-belongingness modeling. Instead of Eqn. 4, CROSR considers the joint distribution of $\mathbf{y}$ and $\mathbf{z}$ to form a hypersphere per class:

$d_i(x) = \left\| [\mathbf{y}, \mathbf{z}] - \boldsymbol{\mu}_i \right\|.$

Here, $[\mathbf{y}, \mathbf{z}]$ denotes the concatenation of the vectors $\mathbf{y}$ and $\mathbf{z}$, and $\boldsymbol{\mu}_i$ denotes their mean within class $i$.
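The per-class distance on the concatenated vector can be sketched as follows; the function name `crosr_distance` and the precomputed `joint_means` (per-class means of [y, z] over known training samples) are illustrative assumptions.

```python
import numpy as np

def crosr_distance(y, z, joint_means):
    """Per-class distance on the concatenated [y, z] vector.

    joint_means[i] is assumed to be the mean of [y, z] over the known
    training samples of class i.
    """
    v = np.concatenate([np.asarray(y, float), np.asarray(z, float)])
    return np.array([np.linalg.norm(v - mu) for mu in joint_means])
```

Two inputs with identical predictions $\mathbf{y}$ but different latents $\mathbf{z}$ receive different distances, which is precisely what lets CROSR flag unknowns that would fool a classifier relying on $\mathbf{y}$ alone.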
4.2 Deep Hierarchical Reconstruction Nets
After designing the open-set classification framework, we must specify the function form, i.e., the network architecture, for $f$. The network used in CROSR needs to effectively provide both a prediction $\mathbf{y}$ and a latent representation $\mathbf{z}$. Our design of deep hierarchical reconstruction nets (DHRNets) simultaneously maintains the accuracy of $\mathbf{y}$ in known classification and provides a compact $\mathbf{z}$.
Conceptually, DHRNet extracts latent representations from each stage of the middle-level layers in the classification network. Specifically, it extracts a series of latent representations $\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_L$ from the multi-stage features $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_L$. We refer to these latent representations as bottlenecks. The advantage of this architecture is that it can detect outlying factors that are hidden in the input data but vanish in the middle of the inference chains. Since we cannot presume a stage where the outlying factors are most obvious, we construct the input vector for the unknown detector by simply concatenating $\mathbf{z} = [\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_L]$ from the layers. Here, the $\mathbf{z}_l$ can be interpreted as decomposed factors that generate $x$. To draw an analogy, unknown detection using decomposed latent representations is similar to overhauling mechanical products, where one disassembles a product into parts, investigates the parts for anomalies, and reassembles them.
Figure 2 compares the existing architectures and DHRNet. Most closed-set classifiers and Openmax rely on supervised classification-only models (a) that do not have useful factors for outlier detection other than $\mathbf{y}$, because the intermediate features usually have high dimensionality for known-class classification. Employing autoencoders (b) is a straightforward way to introduce latent representations for reconstruction, but there is a problem in using them for open-set classification. Deep autoencoders gradually reduce the dimensionality of the intermediate layers for effective information compression. This is not good for large-scale closed-set classification, which needs a fairly large number of neurons in all layers to learn a rich feature hierarchy. LadderNet (c) can be regarded as a variant of an autoencoder, because it performs reconstruction. However, the difference lies in the lateral connections, through which part of the intermediate representation flows to the reconstruction stream without further compression. Their role is a detail-abstract decomposition; that is, LadderNet encodes abstract information in the main stream and details in the lateral paths. While this is preferable for open-set classification, because the outlying factors of unknowns may be in the details as well as in the abstract information, LadderNet itself does not provide compact latent variables. DHRNet (d) further enhances the decomposed information’s effectiveness for unknown detection by compressing the lateral streams into compact representations $\mathbf{z}_l$.
In detail, the $l$-th layer of DHRNet is expressed as

$\mathbf{x}_{l+1} = f_{l+1}(\mathbf{x}_l), \qquad \mathbf{z}_l = h_l(\mathbf{x}_l), \qquad \tilde{\mathbf{x}}_l = g_l\!\left(\tilde{\mathbf{x}}_{l+1},\, \tilde{h}_l(\mathbf{z}_l)\right).$
Here, $f_{l+1}$ denotes a block of a feature transformation in the network, i.e., a series of convolutional layers between downsampling layers in a plain CNN or a densely connected block in DenseNet. $h_l$ denotes an operation of non-linear dimensionality reduction, which consists of a ReLU and a convolution layer, while $\tilde{h}_l$ is a reprojection to the original dimensionality of $\mathbf{x}_l$. The pair of $h_l$ and $\tilde{h}_l$ is similar to an autoencoder. $g_l$ is a combinator of the top-down information $\tilde{\mathbf{x}}_{l+1}$ and the lateral information $\tilde{h}_l(\mathbf{z}_l)$. While various function forms for $g_l$ have been investigated in the ladder-net literature, we choose an element-wise sum followed by convolutional and ReLU layers as the simplest of the possible variants. When inputting $\mathbf{z}_l$ to the unknown detectors, the spatial axes are reduced by global max pooling to form a one-dimensional vector. This performs slightly better than vectorization by average pooling or flattening. Figure 3 illustrates these operations, and the stack of these operations gives the overall network shown in Fig. 2 (d).
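One DHRNet stage can be sketched with plain numpy, using dense layers as stand-ins for the convolutional blocks; the class name, shapes, and random initialization are illustrative assumptions, and the combinator is the element-wise sum plus ReLU described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

class DHRNetStage:
    """One stage of a DHRNet, sketched with dense layers instead of
    convolutions. W_f is the classification-stream transform f; the
    (W_h, W_ht) pair is the bottleneck autoencoder (h, h-tilde) on the
    lateral path; the combinator g is an element-wise sum + ReLU."""

    def __init__(self, dim, bottleneck_dim):
        self.W_f = rng.standard_normal((dim, dim)) * 0.1
        self.W_h = rng.standard_normal((bottleneck_dim, dim)) * 0.1   # reduce
        self.W_ht = rng.standard_normal((dim, bottleneck_dim)) * 0.1  # reproject

    def forward(self, x):
        x_next = relu(self.W_f @ x)  # main classification stream
        z = relu(self.W_h @ x)       # compact lateral latent, kept for UNK detection
        return x_next, z

    def reconstruct(self, x_tilde_next, z):
        # combinator: top-down reconstruction + reprojected lateral latent
        return relu(x_tilde_next + self.W_ht @ z)
```

Stacking such stages yields the main stream $\mathbf{x}_l$, the bottlenecks $\mathbf{z}_l$ that feed the unknown detector, and the top-down reconstruction stream $\tilde{\mathbf{x}}_l$.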
Training We minimize the sum of classification errors and reconstruction errors on training data from known classes. For the classification error, we use the softmax cross entropy of $\mathbf{y}$ and the ground-truth labels. For the reconstruction error of $x$ and $\tilde{x}$, we use the $\ell_2$ distance for images and the cross entropy of one-hot word representations for texts. Note that we cannot use data of the unknown classes in training, so the reconstruction loss is computed only with known samples. The whole network is differentiable and trainable using gradient-based methods. After the network is trained and its weights fixed, we compute the Weibull distributions for unknown detection.
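The joint objective for a single known-class image sample can be written out as follows; the function and argument names are illustrative, and the equal loss weighting follows the default reported later in the paper.

```python
import numpy as np

def joint_loss(logits, label, x, x_recon, recon_weight=1.0):
    """Joint classification-reconstruction objective for one known-class
    sample. Unknown-class data never enter this loss."""
    logits = np.asarray(logits, dtype=float)
    # numerically stable softmax cross entropy on the prediction y
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    ce = -log_probs[label]
    # squared L2 reconstruction error (the image case; texts use cross entropy)
    rec = float(np.sum((np.asarray(x, float) - np.asarray(x_recon, float)) ** 2))
    return ce + recon_weight * rec
```

Since both terms are differentiable, the sum can be minimized end-to-end with any gradient-based solver.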
Implementation There are some minor differences between our implementation and the original ladder nets. First, we use dropout in intermediate layers instead of noise addition, because it results in slightly better closed-set accuracy. Second, we do not penalize reconstruction errors of intermediate layers. This enables us to avoid the separate computation of ‘noisy’ and ‘clean’ layers that was originally needed for intermediate-layer reconstruction. We simply refer to our network without bottlenecks, i.e., where $h_l$ and $\tilde{h}_l$ are identity transformations, as LadderNet. For the experiments, we implement LadderNet and DHRNet with various backbone architectures.
5 Experiments
We experimented with CROSR and other methods on five standard datasets: MNIST, CIFAR-10, SVHN, TinyImageNet, and DBpedia. These datasets are for closed-set classification, and we extended them in two ways: 1) class separation and 2) outlier addition. In the class-separation setting, we randomly selected some classes to use as knowns and used the remainder as unknowns. In this setting, which has been used in the open-set literature [41, 28], unknown samples come from the same domain as the knowns. Outlier addition is a protocol introduced for out-of-distribution detection; the networks are trained on the full training data, but in the test phase, outliers from another dataset are added to the test set as unknowns. The merit of doing so is that we can test the robustness of the classifiers against a larger diversity of data than in the original datasets. The class labels of the unknowns were not used in any case, and they were all treated as a single unknown class.
(Table fragment: Plain CNN, supervised only — 0.991, 0.934, 0.943.)
| Backbone network | Training method | UNK detector | Omniglot | MNIST-noise | Noise |
|---|---|---|---|---|---|
| Plain CNN | Supervised only | Softmax | 0.592 | 0.641 | 0.826 |
| Plain CNN | Supervised only | Openmax | 0.680 | 0.720 | 0.890 |
| Plain CNN | LadderNet | Softmax | 0.588 | 0.772 | 0.828 |
| Plain CNN | LadderNet | Openmax | 0.764 | 0.821 | 0.826 |
| Plain CNN | DHRNet (ours) | Softmax | 0.595 | 0.801 | 0.829 |
| Plain CNN | DHRNet (ours) | Openmax | 0.780 | 0.816 | 0.826 |
| Plain CNN | DHRNet (ours) | CROSR (ours) | 0.793 | 0.827 | 0.826 |
| Backbone network | Training method | UNK detector | ImageNet-crop | ImageNet-resize | LSUN-crop | LSUN-resize |
|---|---|---|---|---|---|---|
| Plain CNN | Counterfactual | — | 0.636 | 0.635 | 0.650 | 0.648 |
| Plain CNN | Supervised only | Softmax | 0.639 | 0.653 | 0.642 | 0.647 |
MNIST MNIST is the most popular hand-written digit benchmark. It has 60,000 images for training and 10,000 for testing from ten classes. Although near-100% accuracy has been achieved in closed-set classification, the open-set extension of MNIST remains a challenge due to the variety of possible outliers.
As outliers, we used datasets of small gray-scale images, namely Omniglot, Noise, and MNIST-Noise. Omniglot is a dataset of hand-written characters from the alphabets of various languages. We only used the test set because the outliers are only needed in the test phase. ‘Noise’ is a set of images we synthesized by sampling each pixel value independently from a uniform distribution on [0, 1]. MNIST-Noise is also a synthesized set, made by superimposing MNIST’s test images on Noise, and thus its images are more similar to the inliers. Figure 4 shows their samples. Each dataset has 10,000 test images, the same as MNIST, and this makes the known-to-unknown ratio 1:1.
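The two synthesized outlier sets can be generated in a few lines. The uniform sampling follows the description above; the element-wise maximum used for superimposing digits on noise is one plausible compositing operator, since the exact operator is not spelled out here.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noise(n, side=28):
    """'Noise' outliers: each pixel drawn i.i.d. from U[0, 1]."""
    return rng.uniform(0.0, 1.0, size=(n, side, side))

def make_mnist_noise(mnist_test, noise):
    """'MNIST-Noise': MNIST test digits superimposed on Noise images.
    The element-wise max is an assumed compositing operator."""
    return np.maximum(mnist_test, noise)
```

Because MNIST-Noise keeps the digit strokes intact, its images are far closer to the inliers than pure Noise, which matches the difficulty ordering observed in the results.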
We used a seven-layer plain CNN for MNIST. It consists of five convolutional layers with 100 output channels each, followed by ReLU non-linearities. Max pooling layers with a stride of 2 are inserted after every two convolutional layers. At the end of the convolutional layers, we put two fully connected layers with 500 and 10 units, the last of which is directly exposed to the Softmax classifier. In DHRNet, lateral connections are placed after every pooling layer. The dimensionalities of the latent representations were all fixed to 32.
CIFAR-10 CIFAR-10 has 50,000 natural images for training and 10,000 for testing. It consists of ten classes, containing 5,000 training images for each class. In CIFAR-10, each class has large intra-class diversity due to differences in color, style, and pose, and state-of-the-art deep nets make a fair number of classification errors even within known classes.
We examined two types of network: a plain CNN and DenseNet, a state-of-the-art network for closed-set image classification. The plain CNN is a VGGNet-style network re-designed for CIFAR, and it has 13 layers. The layers are grouped into three convolutional blocks and one fully connected block. The three convolutional blocks have 64, 128, and 256 output channels and consist of two, two, and four convolutional layers, respectively, with the same configuration. All convolutional kernels share the same size. We set the depth of DenseNet to 92 and the growth rate to 24. The dimensionalities of the latent representations were all fixed to 32, the same as in MNIST.
We used the outliers collected from other datasets, i.e., ImageNet and LSUN, and we resized or cropped them so that they would have the same sizes (URL: https://github.com/facebookresearch/odin). Among the outlier sets used there, we did not use the synthesized Gaussian and Uniform sets because they can be easily detected by baseline outlier-removal techniques. The datasets each have 10,000 test images, as in MNIST, making the known-to-unknown ratio 1:1.
SVHN and TinyImageNet SVHN is a dataset of 10-class digit photographs, and TinyImageNet is a 200-class subset of ImageNet. On these datasets, we compare CROSR with recent GAN-based methods [9, 28] that utilize unknown training data synthesized by GANs. A concern in these comparisons was the instability of GAN training and the resulting variance in the quality of the generated training data, which may make comparisons hard. Thus, we exactly followed the evaluation protocols used in the prior work (class separation within each single dataset, averaging over five trials, area-under-the-curve criteria) and directly compared our results against the reported numbers. Our backbone network was the same nine-convolutional-layer, one-fully-connected-layer network used there, except that ours has the decoding parts described in Sec. 4.2.
DBpedia The DBpedia ontology classification dataset contains 14 classes of Wikipedia articles, with 40,000 instances for training and 5,000 for testing per class. We selected this dataset because it has the largest number of classes among the datasets often used in the literature on convnet-based large-scale text classification, and because it makes various class splits easy. We conducted the open-set evaluation with class separation, using 4 random classes as knowns and 4, 8, and 10 as unknowns.
For DBpedia, we implemented DHRNet on the basis of a shallow-and-wide convnet, which has three convolutional layers with kernel sizes of 3, 4, and 5 and an output dimension of 100. Text-classification convnets are extendable to DHRNet in the manner of Fig. 3. The dimensionality of its bottleneck was 25. We also implemented DOC using the same architecture as ours for a fair comparison.
Training DHRNet We confirmed that DHRNet can be trained by using the joint classification-reconstruction loss. We used the SGD solver with learning-rate scheduling tuned for each dataset. We set the weights of the reconstruction loss and the classification loss to the same value, 1.0. In principle, the weight of the reconstruction error should be as large as possible while keeping the closed-set validation accuracy, which would give the most regularized and well-fitted model. However, we obtained satisfactory results with the default value and did not tune it further. The closed-set test errors of the networks for each dataset are listed in Table 1. All of the networks were trained without any large degradation in closed-set accuracy from the original ones. This and the subsequent experiments were conducted using Chainer.
Weibull distribution fitting We used the libmr library to compute the parameters of the Weibull distributions. It has the hyperparameter $\alpha$ from Eqn. 3 and tail_size, the number of extrema used to define the tails of the distributions. We used the values suggested in the original Openmax work. For MNIST and CIFAR-10, we did not use the rank calibration with $\alpha$ in Eqn. 3, since it does not improve the performance due to the small number of classes. For DenseNet on CIFAR-10, we noticed that Openmax performed worse with the default parameters, so we enlarged tail_size. Since heavily tuning these hyperparameters for specific types of outliers runs counter to the motivation of open-set recognition, namely handling unknowns, we did not tune them for each of the test sets.
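The tail fitting can be reproduced without libmr by maximum likelihood on the largest correct-class distances. This is a numpy stand-in for the libmr routine, not the authors' code; the bisection bounds and iteration count are illustrative assumptions.

```python
import numpy as np

def fit_tail_weibull(distances, tail_size=20):
    """Fit a two-parameter Weibull (shape kappa, scale lam) to the
    tail_size largest distances by maximum likelihood."""
    tail = np.sort(np.asarray(distances, dtype=float))[-tail_size:]
    logs = np.log(tail)

    def g(k):  # stationarity condition of the Weibull log-likelihood in kappa
        dk = tail ** k
        return dk @ logs / dk.sum() - 1.0 / k - logs.mean()

    lo, hi = 1e-2, 1e2
    for _ in range(100):  # bisection: g is negative below the root, positive above
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    kappa = 0.5 * (lo + hi)
    lam = np.mean(tail ** kappa) ** (1.0 / kappa)
    return kappa, lam

def belongingness(d, kappa, lam):
    # 1 - Weibull CDF: larger distances -> lower class-belongingness
    return float(np.exp(-((d / lam) ** kappa)))
```

Fitting only the tail, rather than all distances, is what ties the model to extreme-value theory: the calibration needs to be accurate precisely where inliers end and outliers begin.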
Results We show the results for MNIST in Table 2, for CIFAR-10 in Table 3, and for DBpedia in Table 4. The reported values are F1-scores computed over the known classes and unknown as an additional class, with a threshold of 0.5. CROSR outperformed all of the other methods consistently, except in two settings. Specifically, in MNIST, CROSR outperformed Supervised + Openmax by more than 10% in F1-score when using Omniglot or MNIST-noise as outliers, whereas it slightly underperformed with Noise, the easiest outliers. CROSR also performed better than, or as well as, the stronger baselines LadderNet + Openmax and DHRNet + Openmax. For CIFAR-10, the results for varying thresholds are also shown in Fig. 5, in which it is clear that CROSR outperformed the other methods regardless of the threshold.
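The thresholded open-set decision used for these F1-scores can be sketched as follows. The exact rule is not spelled out here, so this is one standard reading: predict unknown whenever the unknown class wins the calibrated softmax or no known class clears the threshold.

```python
import numpy as np

def open_set_predict(probs, threshold=0.5):
    """Open-set label from calibrated probabilities (index 0 = unknown):
    a known class is predicted only if it is the argmax AND clears the
    confidence threshold; otherwise the sample is labeled unknown (0)."""
    probs = np.asarray(probs, dtype=float)
    k = int(np.argmax(probs))
    if k == 0 or probs[k] < threshold:
        return 0
    return k
```

F1 is then computed over the known classes plus the single unknown class, so both missed unknowns and rejected knowns are penalized.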
Interestingly, LadderNet with Openmax outperformed the supervised-only networks. For instance, LadderNet + Openmax achieved an 8.4% gain in F1-score in the MNIST-vs-Omniglot setting and a 10.1% gain in the MNIST-vs-MNIST-Noise setting. This means that regularization using the reconstruction loss is beneficial for unknown detection; in other words, using only supervised losses on known classes is not the best way to train open-set deep networks. However, no gains were obtained by merely adding the reconstruction-error term to the training objectives on the natural-image datasets. This suggests that we need to use the reconstructive factors in the networks in a more explicit form by adopting DHRNet.
For DBpedia, CROSR outperformed the other methods, except when the number of train/test classes was 4/4, which is equivalent to the closed-set settings. While DOC and Openmax performed almost on a par with each other, the improvement of CROSR over Openmax was also significant in this dataset.
Comparison with GAN-based methods
Table 5 summarizes the results of our method and the GAN-based methods. Ours outperformed all of the other methods on MNIST and TinyImageNet, and all except Counterfactual on SVHN. While the relative improvements are within the ranges of the error bars, these results still mean that our method, which does not use any synthesized training data, can perform on a par with or slightly better than the state-of-the-art GAN-based methods.
| Method / dataset | MNIST | SVHN | TinyImageNet |
|---|---|---|---|
| Openmax | 0.981 ± 0.005 | 0.894 ± 0.013 | 0.576 |
| G-Openmax | 0.984 ± 0.005 | 0.896 ± 0.017 | 0.580 |
| Counterfactual | 0.988 ± 0.004 | 0.910 ± 0.010 | 0.586 |
| CROSR (ours) | 0.991 ± 0.004 | 0.899 ± 0.018 | 0.589 |
In combination with anomaly detectors
To investigate how the latent representations can be exploited more effectively, we replaced the distance in Eqn. 6 with one-class learners. We used the most popular ones, one-class SVM (OCSVM) and Isolation Forest (IsoForest). For simplicity, we used the default hyperparameters in scikit-learn. The results are shown in Table 6. They reveal that OCSVM gave a more than 15% gain in F1-score on the synthesized outliers, while it caused a 9% degradation on Omniglot. Although we did not find an anomaly detector that consistently improved performance on all the datasets, the results are still encouraging. They suggest that DHRNet encodes useful information that is not fully exploited by the per-class centroid-based outlier modeling.
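Swapping the centroid distance for one-class learners amounts to fitting scikit-learn's detectors on the concatenated [y, z] features of known training data, with default hyperparameters as stated above; the function name and synthetic data below are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

def fit_unknown_detectors(latent_train):
    """Fit OCSVM and IsoForest on [y, z] features of known training data,
    using scikit-learn defaults. predict() returns +1 (known-like) or
    -1 (unknown-like) per sample."""
    ocsvm = OneClassSVM().fit(latent_train)
    isoforest = IsolationForest(random_state=0).fit(latent_train)
    return ocsvm, isoforest
```

At test time, a -1 from the detector replaces the Weibull-based rejection, while the closed-set classifier still assigns the known-class label.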
Visualization Figure 6 shows the test data from the known and unknown classes, sorted by the models’ final confidences computed by Eqn. 3. In this figure, unknown data ranked higher mean that the model is deceived by them. It is clear that our methods gave lower confidences to the unknown samples and were deceived only by samples highly similar to the inliers.
We additionally visualize the learned representations using t-distributed stochastic neighbor embedding (t-SNE). Figure 7 shows the distributions of the representations extracted from known- and unknown-class images in the test sets, embedded into two-dimensional planes. Here we compare the distribution of the prediction $\mathbf{y}$ from the supervised net with that of the concatenation $[\mathbf{y}, \mathbf{z}]$ of the prediction and the latent variable from our DHRNet; their usages are shown in Eqns. (4) and (6). While the existing deep open-set classifiers exploit only $\mathbf{y}$, our CROSR exploits $[\mathbf{y}, \mathbf{z}]$. With the latent representation, the clusters of knowns and unknowns are more clearly separated, which suggests that the representations learned by our DHRNet are preferable for open-set classification.
Run time Despite the extensions we made to the network, CROSR’s computational cost at test time was not much larger than Openmax’s. Table 7 shows the run times, which were measured on a single GTX Titan X graphics processor. The overhead of computing the latent representations was as small as 3–5 ms/image, negligible relative to the original cost when the backbone network is large.
|Method / Architecture||Plain CNN||DenseNet|
We described CROSR, a deep open-set classifier augmented by latent representation learning for reconstruction. To enhance the usability of latent representations for unknown detection, we also developed a novel deep hierarchical reconstruction net architecture. Comprehensive experiments conducted on multiple standard datasets demonstrated that CROSR outperforms previous state-of-the-art open-set classifiers in most cases.
This work was supported in part by JSPS KAKENHI Grant Number JP18K11348 and Grant-in-Aid for JSPS Fellows JP16J04552. The authors would like to thank Dr. Ari Hautasaari for his helpful advice on improving the manuscript.
-  (2018) Clustering and unsupervised anomaly detection with L2 normalized deep auto-encoder representations. In IJCNN.
-  (2016) Towards open set deep networks. In CVPR, pp. 1563–1572.
-  (2015) Towards open world recognition. In CVPR, pp. 1893–1902.
-  (2010) Deep, big, simple neural nets for handwritten digit recognition. Neural Computation 22 (12), pp. 3207–3220.
-  (1995) Support-vector networks. Machine Learning 20 (3), pp. 273–297.
-  (2016) Inverting visual representations with convolutional networks. In CVPR, pp. 4829–4837.
-  (1936) The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 (2), pp. 179–188.
-  (1997) A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55 (1), pp. 119–139.
-  (2017) Generative OpenMax for multi-class open set classification. In BMVC.
-  (2016) Deep reconstruction-classification networks for unsupervised domain adaptation. In ECCV, pp. 597–613.
-  (2015) Explaining and harnessing adversarial examples. In ICLR.
-  (2014) Generative adversarial nets. In NIPS, pp. 2672–2680.
-  (2017) A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR.
-  (2017) Joint detection and recounting of abnormal events by learning deep generic knowledge. In ICCV, pp. 3639–3647.
-  (2006) Reducing the dimensionality of data with neural networks. Science 313 (5786), pp. 504–507.
-  (2017) Densely connected convolutional networks. In CVPR.
-  (2017) Nearest neighbors distance ratio open-set classifier. Machine Learning 106 (3), pp. 359–386.
-  (2014) Convolutional neural networks for sentence classification. In EMNLP.
-  (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR.
-  (2016) Breaking the closed world assumption in text classification. In NAACL-HLT.
-  (2008) Isolation forest. In ICDM, pp. 413–422.
-  (2018) Are GANs created equal? A large-scale study. In NIPS.
-  (2016) Auxiliary deep generative models. In ICML.
-  (2008) Visualizing data using t-SNE. JMLR 9 (Nov), pp. 2579–2605.
-  (2001) One-class SVMs for document classification. JMLR 2 (Dec), pp. 139–154.
-  (1969) Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence 4, B. Meltzer and D. Michie (Eds.), pp. 463–502. Note: reprinted in McC90.
-  (2008) Maintenance engineering handbook. McGraw-Hill, New York, NY.
-  (2018) Open set learning with counterfactual images. In ECCV.
-  (2011) Scikit-learn: machine learning in Python. JMLR 12, pp. 2825–2830.
-  (2018) Learning deep features for one-class classification. arXiv preprint arXiv:1801.05365.
-  (2016) Deconstructing the ladder network architecture. In ICML, pp. 2368–2376.
-  (2015) Semi-supervised learning with ladder networks. In NIPS, pp. 3546–3554.
-  (2007) Sensitivity of PCA for traffic anomaly detection. ACM SIGMETRICS Performance Evaluation Review 35 (1), pp. 109–120.
-  (1994) A probabilistic resource allocating network for novelty detection. Neural Computation 6 (2), pp. 270–284.
-  (2017) The extreme value machine. PAMI 40 (3).
-  (2007) The truth of the F-measure. Teach Tutor Mater 1 (5), pp. 1–5.
-  (2013) Toward open set recognition. PAMI 35 (7), pp. 1757–1772.
-  (2014) Probability models for open set recognition. PAMI 36 (11), pp. 2317–2324.
-  (2011) Meta-recognition: the theory and practice of recognition score analysis. PAMI 33, pp. 1689–1695.
-  (2017) Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In Information Processing in Medical Imaging (IPMI), pp. 146–157.
-  (2017) DOC: deep open classification of text documents. In EMNLP.
-  (2018) Unseen class discovery in open-world classification. arXiv preprint arXiv:1801.05609.
-  (2015) Very deep convolutional networks for large-scale image recognition. In ICLR.
-  (2018) The limits and potentials of deep learning for robotics. The International Journal of Robotics Research 37 (4–5), pp. 405–420.
-  (2015) Chainer: a next-generation open source framework for deep learning. In NIPSW, Vol. 5, pp. 1–6.
-  (2015) From neural PCA to deep unsupervised learning. In Advances in Independent Component Analysis and Learning Machines, pp. 143–171.
-  (2013) Animal recognition in the Mojave desert: vision tools for field biologists. In WACV, pp. 206–213.
-  (2015) Learning deep representations of appearance and motion for anomalous event detection. In BMVC.
-  (2017) Sparse representation-based open set recognition. PAMI 39 (8), pp. 1690–1696.
-  (2015) Character-level convolutional networks for text classification. In NIPS, pp. 649–657.
-  (2016) Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In ICML, pp. 612–621.
-  (2017) Anomaly detection with robust deep autoencoders. In SIGKDD, pp. 665–674.
-  (2012) A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining 5 (5), pp. 363–387.
-  (2018) Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In ICLR.