Imbalanced Deep Learning by Minority Class Incremental Rectification
Model learning from class imbalanced training data is a long-standing and significant challenge for machine learning. In particular, existing deep learning methods consider mostly either class balanced data or moderately imbalanced data in model training, and ignore the challenge of learning from significantly imbalanced training data. To address this problem, we formulate a class imbalanced deep learning model based on batch-wise incremental minority (sparsely sampled) class rectification by hard sample mining in majority (frequently sampled) classes during model training. This model is designed to minimise the dominant effect of majority classes by discovering sparsely sampled boundaries of minority classes in an iterative batch-wise learning process. To that end, we introduce a Class Rectification Loss (CRL) function that can be deployed readily in deep network architectures. Extensive experimental evaluations are conducted on three imbalanced person attribute benchmark datasets (CelebA, X-Domain, DeepFashion) and one balanced object category benchmark dataset (CIFAR-100). These experimental results demonstrate the performance advantages and model scalability of the proposed batch-wise incremental minority class rectification model over the existing state-of-the-art models for addressing the problem of imbalanced data learning.
Machine learning from class imbalanced data, in which the distribution of training data across different object classes is significantly skewed, is a long-standing problem [1, 2, 3]. Most existing learning algorithms produce an inductive bias (learning bias) towards the frequent (majority) classes if training data are not balanced, resulting in poor minority class recognition performance. However, accurately detecting minority classes is often important, e.g. in rare event discovery. A simple approach to overcoming class imbalance in model learning is to re-sample the training data (a pre-process), e.g. by down-sampling majority classes, over-sampling minority classes, or a combination of both [5, 6, 7]. Another common approach is cost-sensitive learning, which reformulates existing learning algorithms by weighting the minority classes more heavily [8, 9, 10].
Over the past two decades, a range of class imbalanced learning methods have been developed. However, they mainly investigate the single-label binary-class imbalanced learning problem on small scale data with small class imbalance ratios, e.g. within 1:100. These methods are limited when applied to learning from large scale data in computer vision. Visual data are often interpreted by multi-label semantics, e.g. web person images with multiple attributes on clothing and facial characteristics. Automatic recognition of these nameable properties is very useful [14, 15], but challenging for model learning due to: (1) Very large scale imbalanced training data [10, 16, 17], with clothing and facial attribute labelled data exhibiting power-law distributions (Fig. 1). (2) Subtle appearance discrepancies between different fine-grained attribute classes, e.g. “Woollen-Coat” can appear very similar to “Cotton-Coat”, whilst “Mustache” can be visually hard to distinguish (Fig. 1(b)). To discriminate subtle classes from multi-labelled images at large scale, standard learning algorithms require a vast quantity of class balanced training data for all labels [11, 18].
There is a vast quantity of severely imbalanced visual data on the Internet. Conventional learning algorithms [19, 13] are poorly suited for three reasons: First, conventional imbalanced data learning methods without deep learning rely on hand-crafted features extracted from small data, which are inferior to big data deep learning based richer feature representations [20, 21, 22, 23]. Second, deep learning in itself also suffers from class imbalanced training data [24, 25, 17] (Table IX and Sec. 4.3). Third, directly incorporating existing imbalanced data learning algorithms into a deep learning framework does not provide effective solutions [26, 27, 28].
Overall, imbalanced big data deep learning is under-studied, partly because popular image benchmarks for large scale deep learning, e.g. ILSVRC, do not exhibit significant class imbalance after the careful sample filtering applied during benchmark construction (Table I). More recently, a few large scale clothing and facial attribute datasets have emerged that are significantly more imbalanced in class labelled data distributions (Fig. 1), as these datasets are drawn from online Internet sources without artificial sample filtering [11, 29, 30, 12]. For example, the imbalance ratio (lower is more extreme) between the minority classes and the majority classes in the CelebA face attribute dataset is 1:43 (3,713 : 159,057 samples), whilst the X-Domain clothing attributes are even more imbalanced with an imbalance ratio of 1:4,162 (20 : 204,177) (Table I).
|Dataset|ILSVRC2012-14|MS-COCO|VOC2012|CIFAR-100|
|Imbalance ratio|1 : 2|-|1 : 13|1 : 1|
|Dataset|Caltech 256|CelebA|DeepFashion|X-Domain|
|Imbalance ratio|1 : 1|1 : 43|1 : 733|1 : 4,162|
This work addresses the problem of large scale imbalanced data deep learning for multi-label classification. This problem is characterised by: (1) Large scale training data; (2) Multiple labels per data sample; (3) Extremely imbalanced training data with an imbalance ratio greater than 1:1,000; (4) Variable per-label attribute values, ranging from binary to multiple attribute values per label. The contributions of this work are: (I) We solve the large scale imbalanced data deep learning problem. This differs from conventional imbalanced data learning studies, which focus on small scale, single-labelled data with a limited number of classes and small imbalance ratios. (II) We present a novel approach to imbalanced deep learning by minority class incremental rectification, using batch-wise mining of hard samples on the minority classes in a batch-wise optimisation process. This differs from contemporary multi-label learning methods [11, 29, 30, 37, 18], which either assume class balanced training data or simply ignore the imbalanced data learning problem altogether. (III) We formulate a Class Rectification Loss (CRL) regularisation algorithm for minority class incremental rectification. In particular, the computational complexity of imposing this rectification loss is restrained by iterative mini-batch-wise model optimisation (small data pools). This is in contrast to the global model optimisation over the entire training data pool of the Large Margin Local Embedding (LMLE) algorithm¹.
¹ In LMLE, a computationally expensive data pre-processing step (including clustering and quintuplet construction) is required for each round of deep model learning. In particular, clustering a large number (e.g. 150,000+) of training images w.r.t. an attribute label by k-means has super-polynomial operation complexity, requiring at least a super-polynomial (lower bound) number of cluster refinement iterations over the samples. As each iteration is linear in the numbers of samples and clusters, the overall clustering cost is substantial (the upper bound complexity). To create a quintuplet for each data sample, four cluster- and class-level searches are needed, each proportional to the training data size, so the overall search complexity is quadratic in the number of training samples. Given a large scale training set, this pairwise search is likely to dominate the pre-processing cost. Both clustering and quintuplet operations are needed for each attribute label, and their costs are proportional to the total number of attribute values, e.g. 80 times for CelebA and 178 times for X-Domain. Consequently, the total pre-processing complexity per round is very high.
There are two advantages of our approach: First, the model only requires incremental class imbalanced data learning for all attribute labels concurrently, without any additional single-label sampling assumption (e.g. per-label oriented quintuplet construction); Second, model learning is independent of the overall training data size and the number of class labels, and requires no pre-determined global data clustering. This makes the model much more scalable to learning from large training data.
Extensive evaluations were performed on the CelebA face attribute and X-Domain clothing attribute benchmarks, with further evaluation on the DeepFashion clothing attribute benchmark. These experimental results show a clear advantage of CRL over 12 state-of-the-art models compared, including 7 attribute models (PANDA, ANet, Triplet-NN, FashionNet, DARN, LMLE, MTCT). We further evaluated the CRL method on the class balanced single-label object recognition benchmark CIFAR-100, and constructed different class imbalance ratios therein to quantify the model performance gains under controlled, varying degrees of imbalance in the training data.
2 Related work
Class imbalanced learning aims to mitigate model learning bias towards majority classes by lifting the importance of minority classes [1, 2, 3]. Existing methods include: (1) Data-level: Aiming to rebalance the class prior distributions in a pre-processing procedure. This scheme is attractive as the only change needed is to the training data rather than to the learning algorithms. Typical methods include down-sampling majority classes, over-sampling minority classes, or both [6, 5, 40, 3, 7, 41]. However, over-sampling can easily cause model overfitting owing to repeatedly visiting duplicated samples. Down-sampling, on the other hand, throws away valuable information [5, 42]. (2) Algorithm-level: Modifying existing algorithms to give more emphasis to the minority classes [43, 44, 10, 45, 46, 47, 48, 49, 9, 50, 8]. One strategy is cost-sensitive learning, which assigns varying costs to different classes, e.g. a higher penalty for minority class samples [2, 8, 50, 44, 24, 9]. However, it is in general difficult to optimise the cost matrix or relationships. Often it is specified by domain experts, making it problem-specific and non-scalable. In contrast, the threshold-adjustment technique changes the decision threshold at test time [24, 51, 52, 53, 54]. (3) Hybrid: Combined data-level and algorithm-level rebalancing [55, 56, 57]. These methods still only consider small scale imbalanced data learning, characterised by: (a) A limited number of data samples and classes, (b) Non-extreme imbalance ratios, (c) Single-label classification, (d) Problem-specific hand-crafted low-dimensional features. By and large, these classical techniques are poorly suited to severely imbalanced data learning given big visual data.
There are early neural network based methods for imbalanced data learning [24, 25, 58, 59, 60, 61, 26, 27, 28]. However, these works still only address small scale imbalanced data learning, with neural networks merely acting as nonlinear classifiers without end-to-end learning. A few recent studies [62, 41, 63, 64, 65] have exploited classic strategies in single-label deep learning. For example, the binary-class classification problem is studied by a per-class mean square error loss, synthetic minority sample selection, and a constant class ratio in each mini-batch. The multi-class classification problem is addressed by an online cost-sensitive loss. More recently, the idea of preserving local class structures (LMLE) was proposed for imbalanced data deep learning, but without end-to-end model training and with only a single-label oriented training unit design. In contrast, our model is designed for end-to-end imbalanced data deep learning for multi-label classification, scalable to large training data.
Hard sample mining has been extensively exploited in computer vision, e.g. object detection [67, 68], face recognition, image categorisation, and unsupervised representation learning. The rationale for mining hard negatives (unexpected) is that they are more informative than easy negatives (expected), as they violate a model class boundary by being on the wrong side and far away from it. Therefore, hard negative mining enables a model to improve itself more quickly and effectively with less training data. Similarly, model learning can also benefit from mining hard positives (unexpected), i.e. those on the correct side but very close to, or even across, a model class boundary. In our model learning, we only consider hard mining on the minority classes for efficiency. Moreover, our batch-balancing hard mining strategy differs from that of LMLE by eliminating exhaustive searching of the entire training set (all classes), hence it is computationally more scalable than LMLE.
Deep metric learning is based on the idea of combining deep neural networks with metric loss functions in a joint end-to-end learning process [71, 39, 72, 69]. Whilst similarly adopting a generic margin based loss function [73, 74], deep metric learning does not consider the class imbalanced data learning problem. In contrast, our method is specifically designed to address this problem by incrementally rectifying the structural significance of minority classes in a batch-wise end-to-end learning process, so as to achieve scalable imbalanced data deep learning.
Deep learning of clothing and facial attributes has been recently exploited [11, 29, 30, 37, 12, 18], given the availability of large scale datasets and deep models’ strong capacity for learning from big training data. However, existing methods mostly ignore imbalanced class data distributions, resulting in suboptimal model learning and poor model performance on the minority classes. One exception is the LMLE model, which studies imbalanced data deep learning. Compared to our end-to-end learning using mini-batch hard sample mining on the minority classes only, LMLE is not end-to-end and relies on global hard mining over the entire training data; it is computationally complex and expensive, and does not lend itself naturally to big training data.
3 Scalable Imbalanced Deep Learning
For the problem of imbalanced data deep learning from large training data, we consider the problem of person attribute recognition, both facial and clothing attributes. This is a multi-label multi-class learning problem given imbalanced training data, a generalisation of the more common single-label binary-/multi-class recognition problem. Specifically, we wish to construct a deep learning model capable of recognising multi-labelled person attributes in web images, given a set of different attribute labels. Each label has its respective class value range, e.g. a multi-class clothing attribute or a binary-class facial attribute. Suppose we have a set of training images, each with an attribute annotation vector whose elements give the class value of each attribute for that image. The number of images available for different attribute classes varies greatly (Fig. 1), therefore imposing a significant multi-label imbalanced class data distribution challenge on model learning. Most attributes are localised to image regions, even though the location information is not annotated (weakly labelled). We consider jointly learning the features and all the attribute label classifiers from class imbalanced training data in an end-to-end process. Specifically, we introduce incremental minority class discrimination learning by formulating a Class Rectification Loss (CRL) regularisation. The CRL imposes an additional batch-wise class balancing on top of the cross-entropy loss, so as to rectify the model learning bias caused by the over-representation of the majority classes, by promoting the under-represented minority classes (Fig. 3).
3.1 Limitations of Cross-Entropy Classification Loss
Convolutional Neural Networks (CNNs) are designed to take two-dimensional images as inputs for recognition tasks. For learning a multi-class (per-label) classification CNN model (details in “Network Architecture”, Sections 4.1, 4.2, and 4.3), the Cross-Entropy (CE) loss function is commonly used, by first predicting, for each attribute label, the posterior probability of an image over its ground-truth class:
where the prediction is computed from the image’s feature vector for that attribute label and the corresponding classification function parameters. The overall loss on a mini-batch of images is then computed as the average of the attribute-level losses, with equal weight over all labels:
By design, the cross-entropy loss enforces model learning to respect two conditions: (1) The same-class samples should have class distributions with the identical peak position corresponding to the groundtruth one-hot label. (2) Each class corresponds to a different peak position in the class distribution. As such, the model is supervised end-to-end to separate the class boundaries explicitly in the prediction space and implicitly in the feature space by some in-between linear or nonlinear transformation. The CE loss minimises the amount of training error by assuming that individual samples and classes are equally important. To achieve model generalisation with discriminative inter-class boundary separation, it is necessary to have a large training set with sufficiently balanced class distributions (Fig. 2(a)).
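As a concrete illustration, the per-label cross-entropy objective averaged over all attribute labels of a mini-batch can be sketched as follows (a minimal numpy sketch; function and variable names are ours, not the paper's notation):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multilabel_ce_loss(logits_per_label, targets_per_label):
    """Average cross-entropy over all attribute labels of a mini-batch.

    logits_per_label: list of (batch, n_classes_j) arrays, one per label j.
    targets_per_label: list of (batch,) int arrays of ground-truth classes.
    """
    losses = []
    for logits, targets in zip(logits_per_label, targets_per_label):
        probs = softmax(logits)
        # Per-sample negative log-likelihood of the ground-truth class.
        nll = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
        losses.append(nll.mean())
    # Equal weight over all attribute labels, as in Eqn. (2).
    return float(np.mean(losses))
```

Confident predictions on the true class drive the loss towards zero, whilst confident mistakes incur a large penalty, regardless of how rare the class is in the batch.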
However, given highly class imbalanced training data, e.g. the X-Domain benchmark, model learning by the conventional cross-entropy loss is suboptimal. The model generalises inductive decision boundaries biased towards the majority classes whilst largely ignoring the minority classes (Fig. 2(b)). To address this problem, we reformulate the learning objective loss function by explicitly imposing structural discrimination of minority classes against the others, i.e. inter-class geometric structure modelling (Fig. 4). This stresses the structural significance of minority classes in model learning, orthogonal and complementary to the uniform, class-independent modelling enforced by the cross-entropy loss (Fig. 4(b)). Conceptually, this design may simultaneously benefit majority class boundary learning, as shown in our experimental evaluations (Tables III and V).
3.2 Minority Class Hard Sample Mining
We explore a hard sample mining strategy to enhance minority class manifold rectification by selectively “borrowing” majority class samples from class decision boundary marginal (border) regions. Specifically, we estimate the minority class neighbourhood structure by mining both hard-positive and hard-negative samples for every selected minority class in each mini-batch of training data (we only consider those minority classes having at least two sample images in each batch, i.e. ignoring those with a single sample image or none; this enables a more flexible loss function selection, e.g. triplet loss functions typically require at least two matched samples). Our idea is to rectify incrementally the per-batch class distribution bias of multiple labels in model learning. Hence, each improved intermediate model from per-batch training is less inclined towards the over-sampled majority classes and more discriminative for the under-sampled minority classes (Fig. 2(c)). Unlike LMLE, which aims to preserve the local structures of both majority and minority classes by global clustering of, and sampling from, the entire training data, our model design aims to enhance minority class discrimination progressively by incremental projective structure refinement. This idea is inherently compatible with batch-wise hard-positive and hard-negative sample mining along the model training trajectory. This eliminates LMLE’s drawback of assuming that the local group structures of all classes can be estimated reliably by offline global clustering before model learning.
Incremental Batch-Wise Class Profiling. For hard sample mining, we first profile the minority and majority classes per label in each training mini-batch. For each attribute (label), we profile the in-batch class distribution by counting the number of training samples assigned to each class value, and sort the class counts in descending order. We then define the minority classes for an attribute label in this mini-batch as the smallest classes whose cumulative sample share is subject to:
In the most studied two-class setting, the minority (majority) class is defined as the one with fewer (more) samples, i.e. under (above) 50%. However, to our best knowledge there is no standard definition for the multi-class case. To make the definition in Eqn. (3) conceptually consistent with the two-class setting, we also set the threshold to 50%. This means that all minority classes collectively account for at most half of the samples per batch. The remaining classes are deemed the majority classes. We analysed the effect of choosing different threshold values on model performance (Table XIII).
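The batch-wise minority class profiling above can be sketched as follows (a minimal Python illustration for one attribute label, assuming the 50% cumulative-share threshold; it omits the additional filter, noted earlier, that keeps only minority classes with at least two in-batch samples):

```python
from collections import Counter

def minority_classes_in_batch(batch_labels, rho=0.5):
    """Profile per-batch class counts for one attribute label and return the
    minority classes: the smallest classes whose cumulative share of the
    batch stays within `rho` (50% by default)."""
    counts = Counter(batch_labels)
    n = len(batch_labels)
    # Sort classes by ascending frequency (smallest first).
    ordered = sorted(counts.items(), key=lambda kv: kv[1])
    minority, cum = [], 0
    for cls, c in ordered:
        if cum + c > rho * n:
            break
        minority.append(cls)
        cum += c
    return set(minority)
```

For example, a batch with class counts 10/3/2/1 yields the three rarest classes as minorities, since together they hold 6 of 16 samples, under the 50% cap.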
Given the minority classes, we then consider hard mining therein at two levels: class-level (Fig. 5(a)) and instance-level (Fig. 5(b)). Let us next define the “hardness” metrics, hard samples and their selection.
Hardness Metrics For hard sample mining, it is necessary to have quantitative metrics for “hardness” measurement. Two metrics are considered: (1) Score based: A model’s class prediction score, suitable for class-level hard mining. (2) Feature based: The feature distance between data points, suitable for instance-level hard mining.
Class-Level Hard Samples. At the class-level, we quantify the sample hardness with respect to a given class per label. Particularly, for any minority class of an attribute label, we refer to as “hard-positives” those images whose ground-truth class for that attribute is the minority class but which are given low prediction scores on it by the thus-far model, i.e. poor recognitions. Conversely, by “hard-negatives” we refer to images of other classes that are given high prediction scores on the minority class by the thus-far model, i.e. obvious mistakes. Formally, we define them as:
where the two sets denote the hard-positive and hard-negative samples of a minority class of the given attribute label.
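Under these definitions, class-level hard mining reduces to sorting the thus-far model's prediction scores on the minority class; a hypothetical sketch (function and argument names are ours) with top-ranked selection:

```python
import numpy as np

def class_level_hard_mining(scores, labels, minority_class, k=2):
    """Mine hard-positives/negatives for one minority class (class level).

    scores: (batch, n_classes) prediction scores from the thus-far model.
    labels: (batch,) ground-truth classes for this attribute label.
    Hard-positives: same-class samples scored LOWEST on the class.
    Hard-negatives: other-class samples scored HIGHEST on the class.
    Returns index arrays into the batch.
    """
    scores_c = scores[:, minority_class]
    pos = np.where(labels == minority_class)[0]
    neg = np.where(labels != minority_class)[0]
    # Bottom-k scored positives (poor recognitions).
    hard_pos = pos[np.argsort(scores_c[pos])[:k]]
    # Top-k scored negatives (obvious mistakes).
    hard_neg = neg[np.argsort(-scores_c[neg])[:k]]
    return hard_pos, hard_neg
```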
Instance-Level Hard Samples. At the instance-level, we quantify the sample hardness with respect to a specific sample instance from each minority class of an attribute label. Specifically, we define the “hard-positives” of an anchor instance as those same-class images placed, by the thus-far model, at large distances (low matching scores) from the anchor in the feature space. In contrast, we define the “hard-negatives” as those images not from the anchor’s class that lie at small distances (high matching scores) to the anchor in the feature space. We formally define them as:
where the two sets are the hard-positive and hard-negative samples w.r.t. the anchor instance of a minority class in the given attribute label, and distances are measured by the Euclidean metric.
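The instance-level counterpart mines against a single anchor's feature distances rather than class scores; a minimal sketch (names ours):

```python
import numpy as np

def instance_level_hard_mining(features, labels, anchor_idx, k=1):
    """Mine hard samples w.r.t. one minority-class anchor (instance level).

    Hard-positives: same-class samples with the LARGEST Euclidean distance
    to the anchor; hard-negatives: other-class samples with the SMALLEST.
    """
    anchor = features[anchor_idx]
    dists = np.linalg.norm(features - anchor, axis=1)
    same = np.where((labels == labels[anchor_idx]) &
                    (np.arange(len(labels)) != anchor_idx))[0]
    other = np.where(labels != labels[anchor_idx])[0]
    hard_pos = same[np.argsort(-dists[same])[:k]]
    hard_neg = other[np.argsort(dists[other])[:k]]
    return hard_pos, hard_neg
```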
Hard Mining. Intuitively, mining hard-positives enables the model to discover and expand sparsely sampled minority class boundaries, whilst mining hard-negatives aims to efficiently improve the margin structures of minority class boundaries corrupted by visually similar distracting classes. To facilitate and expedite model training, we adopt a top-ranked hard sample mining (selection) strategy. Specifically, at training time, for a minority class of an attribute label (or a minority class instance) in each training batch, we select as hard-positives the lowest scored samples on that class (or those with the largest distances to the instance), and as hard-negatives the highest scored samples on that class (or those with the smallest distances), given the current model (or feature space).
Remarks. The proposed hard sample mining strategy encourages model learning to concentrate particularly on either weak recognitions or obvious mistakes when discriminating the sparsely sampled class margins of the minority classes. In doing so, the overwhelming bias towards the majority classes in model learning is mitigated by explicitly stressing minority class discriminative boundary characteristics. To avoid losing the useful information of unselected “easier” data samples, we perform scalable hard sample mining independently in each mini-batch during model training, and incrementally so over successive mini-batches. As a result, all training samples are utilised randomly in the full learning cycle. Our model naturally accommodates both class prediction score based and instance feature distance based matching. The experiments show that class score rectification yields superior performance due to its better compatibility with the score based cross-entropy loss.
3.3 Minority Class Neighbourhood Rectification
We introduce a Class Rectification Loss (CRL) regularisation to rectify the model learning bias of the standard CE loss (Eqn. (2)) caused by class imbalanced training data. This is achieved by incrementally reinforcing the minority class decision boundary margins, with CRL aiming to discover latent class boundaries whilst maximising their discriminative margins, either directly in the decision score space or indirectly in the feature space. We design the CRL regularisation by the learning-to-rank principle [74, 73, 71] applied specifically to the minority class hard samples, and re-formulate the model learning objective loss function of Eqn. (2) as:
where the CRL weighting parameter is designed to be linearly proportional to a per-label training class imbalance measure. Given different individual class data sample sizes, we define this imbalance measure as the minimum percentage count of data samples required over all classes in order to form an overall uniform (i.e. balanced) class distribution in the training data. Eqn. (8) thus imposes an imbalance-adaptive learning mechanism in the CRL regularisation: more weight is assigned to more imbalanced labels (multi-label multi-class, e.g. an attribute label can have 655 classes), and less weight to less imbalanced labels. Moreover, a global scaling factor is independent of the per-label imbalance, and is therefore a model hyper-parameter estimated by cross-validation (independent of individual class imbalance). In this study, we explore three loss criteria for the CRL at both class-level and instance-level.
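Assuming the combined objective is a weighted sum of the CE and CRL terms, with the per-label CRL weight proportional to an imbalance measure, one possible sketch follows. The exact measure and all names here are our assumptions, not the paper's definitions:

```python
def crl_weight(class_ratios, alpha=0.1):
    """Imbalance-adaptive CRL weight for one attribute label (a sketch).

    class_ratios: fraction of training samples per class for this label.
    The imbalance measure is taken here as the extra share the rarest class
    would need to reach a uniform distribution, relative to the uniform
    share; `alpha` stands in for the cross-validated global scaling factor.
    Both choices are our assumptions for illustration.
    """
    n = len(class_ratios)
    uniform = 1.0 / n
    eta = max(uniform - min(class_ratios), 0.0) / uniform
    return alpha * eta

def combined_loss(ce_loss, crl_loss, weight):
    # Weighted sum of the cross-entropy and rectification terms.
    return ce_loss + weight * crl_loss
```

A perfectly balanced label yields a zero CRL weight, so the objective degenerates to plain cross-entropy; a heavily skewed label receives a larger rectification weight.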
(I) Relative Comparison. First, we consider the seminal triplet ranking loss to model the relative relationship constraint between intra-class and inter-class distances. Considering the small number of training samples in minority classes, it is sensible to make full use of them in order to effectively mitigate the underlying learning bias. Hence, we regard each minority class sample as an “anchor” in the triplet construction to compute the batch-wise class balancing regularisation.
Specifically, for each anchor sample, we first construct a set of triplets based on the mined top-ranked hard-positives and hard-negatives associated with either the corresponding class of the attribute label (for class-level hard mining), or the sample instance itself (for instance-level hard mining). In this way, we form a bounded number of triplets per anchor, and a bounded total number of triplets over all the anchors across all the minority classes of every attribute label. We then formulate the following triplet ranking loss to impose a CRL class balancing constraint:
where the margin term denotes the class margin of the attribute and the distance term measures the distance between two samples. We consider both class-level and instance-level model learning rectifications. (The maximum operation in Eqn. (9) is implemented by a ReLU (rectified linear unit) in TensorFlow.)
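A minimal sketch of the triplet ranking hinge of Eqn. (9), averaged over all (anchor, hard-positive, hard-negative) combinations; names are ours, and the Euclidean distance stands in for whichever distance (score based or feature based) the class- or instance-level variant uses:

```python
import numpy as np

def triplet_crl(anchor, positives, negatives, margin):
    """Average triplet ranking hinge loss over all (anchor, pos, neg)
    combinations: max(0, margin + d(a, p) - d(a, n))."""
    losses = []
    for p in positives:
        for n in negatives:
            d_ap = np.linalg.norm(anchor - p)
            d_an = np.linalg.norm(anchor - n)
            losses.append(max(0.0, margin + d_ap - d_an))
    return float(np.mean(losses))
```

The loss vanishes once every hard-negative is farther from the anchor than every hard-positive by at least the margin.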
For class-level rectification, we consider the model predictions between matched and unmatched pairs:
where the score terms denote the model prediction scores on the target minority class of the attribute label. The intuition is that the matched pair is constrained to have similar prediction scores on the true class (in both directions, using absolute values), each higher than that of any negative sample by a margin in a single direction (without the absolute operation). For the triplet ranking, a fixed inter-class margin is often utilised, and we set the same margin for all attribute labels. This ensures a correct classification by the maximum a posteriori probability estimation.
For instance-level rectification, we consider the sample pairwise distance in the feature space as:
where the feature terms denote the attribute feature vectors of the corresponding image samples. We adopt the Euclidean distance. In this case, the margin in Eqn. (9) specifies the class margin in the feature space. We apply a geometrically intuitive design: projecting all the class centres uniformly along a unit circle and using the arc length between adjacent centres as the class margin. That is, for an attribute with n classes we set the class margin to 2π/n.
(II) Absolute Comparison. Second, we consider the contrastive loss to enforce absolute pairwise constraints on positive and negative pairs of minority classes. This constraint aims to optimise the boundary of minority classes by incrementally separating the overlapping (confusing) majority class samples in batch-wise optimisation. Specifically, for each sample in a minority class of an attribute, we use the mined hard samples to build positive and negative pairs in each training batch. Intuitively, we require the positive pairs to be close whilst the negative pairs are far apart, in either the model score or the sample feature space. Thus, we define the CRL as:
where the between-class margin can in theory be set to an arbitrary positive number. We compute the average loss separately for the positive and negative sets to balance their importance even when they have different sizes.
For class-level rectification, we consider the model prediction scores of pairs as defined in Eqn. (10), with the margin set to encourage correct prediction. For instance-level rectification, we use the Euclidean distance in the feature space (Eqn. (11)) for pairwise comparison, with a margin set empirically to give satisfactory convergence speed and stability in our experiments.
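A sketch of the absolute-comparison CRL, assuming the standard contrastive form (a squared pull on positive-pair distances and a squared hinge push on negative-pair distances), with the two sets averaged separately as described; the exact functional form is our assumption:

```python
import numpy as np

def contrastive_crl(pos_dists, neg_dists, margin=1.0):
    """Contrastive-style CRL sketch: pull positive pairs together and push
    negative pairs beyond a margin. The positive and negative sets are
    averaged separately so each contributes equally regardless of size."""
    pos_dists = np.asarray(pos_dists, dtype=float)
    neg_dists = np.asarray(neg_dists, dtype=float)
    pos_loss = np.mean(pos_dists ** 2)
    neg_loss = np.mean(np.maximum(0.0, margin - neg_dists) ** 2)
    return float(pos_loss + neg_loss)
```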
(III) Distribution Comparison. Third, we formulate class rectification for minority classes by modelling the distribution relationship of positive and negative pairs (built as in “Absolute Comparison”). This distribution based CRL aims to guide model learning by mining minority class decision regions non-deterministically. In a similar spirit to histogram based losses, we represent the distributions of the positive and negative pair sets with two histograms over uniformly spaced bins. We compute the positive histogram as:
where the bin width defines the step length between two adjacent bins. The negative histogram can be constructed similarly. To make minority classes distinguishable from majority classes, we enforce the two histogram distributions to be as disjoint as possible. Formally, we define the CRL regularisation by the amount of overlap between the two distributions:
Statistically, this CRL distribution loss estimates the probability that the distance of a random negative pair is smaller than that of a random positive pair, either in the score space or the feature space. Similarly, we consider both class-level (Eqn. (10)) and instance-level (Eqn. (11)) rectification.
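The distribution-comparison CRL can be sketched with hard-binned histograms as follows (a simplification of the soft-binned version: the assumed overlap estimate sums, over bins, the negative-pair mass times the positive-pair tail mass; bin count and distance range are arbitrary illustrative choices):

```python
import numpy as np

def histogram_crl(pos_dists, neg_dists, n_bins=10, d_max=2.0):
    """Distribution-comparison CRL sketch: histogram positive- and
    negative-pair distances, then estimate the probability that a random
    negative pair lies CLOSER than a random positive pair."""
    bins = np.linspace(0.0, d_max, n_bins + 1)
    h_pos, _ = np.histogram(pos_dists, bins=bins)
    h_neg, _ = np.histogram(neg_dists, bins=bins)
    h_pos = h_pos / max(h_pos.sum(), 1)
    h_neg = h_neg / max(h_neg.sum(), 1)
    # P(neg in bin t) * P(pos in a bin at or above t), summed over bins.
    tail_pos = np.cumsum(h_pos[::-1])[::-1]
    return float(np.sum(h_neg * tail_pos))
```

Well-separated distributions (negative pairs far, positive pairs close) give a loss near zero; fully inverted distributions give a loss near one.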
In our experiments (Sec. 4), we compared all six CRL loss designs. By default we deploy the class-level Relative Comparison CRL in our experiments if not stated otherwise.
|Dataset|Semantics|Labels|Classes|Total Images|Training Images|Test Images|
|CelebA|Facial Attribute|Multiple (40)|Binary (2)|202,599|162,770 (3,713–135,779/class)|19,867 (432–17,041/class)|
|X-Domain|Clothing Attribute|Multiple (9)|Multiple (655)|245,467|165,467 (131–32,870/class)|80,000 (46–4,261/class)|
|CIFAR-100|Object Category|Single (1)|Multiple (100)|60,000|50,000 (500/class)|10,000 (100/class)|
Further Remarks. We do not consider exemplars from majority classes as anchors in CRL because the conventional CE loss can already model the majority classes well given their frequent sampling. As demonstrated in our experiments, additional rectification on majority classes gives some benefit, but focusing only on minority classes makes the CRL model more cost-effective (Table XII). Due to the batch-wise design, the class balancing effect of our proposed regulariser is incorporated progressively throughout the whole training process. Conceptually, our CRL shares a similar principle to Batch Normalisation in achieving learning scalability.
Datasets. For evaluations, we used both imbalanced and balanced benchmark datasets. Given Table I, we selected CelebA, X-Domain, and CIFAR-100 (see Table II for statistics) due to: (1) CelebA provides a class imbalanced learning test on multiple binary-class facial attributes with imbalance ratios up to 1:43. Specifically, it has 202,599 web in-the-wild images from 10,177 person identities, with on average 20 images per person. Each image is annotated with 40 attribute labels. Following [12, 17], we used 162,770 images for model training (including 10,000 images for validation), and the remaining 19,867 for testing. (2) X-Domain offers an extremely class imbalanced learning test on multiple multi-class clothing attributes with imbalance ratios up to 1:4,162. This dataset consists of 245,467 shop images extracted from online retailers. Each image is annotated with 9 attribute labels. Each attribute has a different set of mutually exclusive class values, sized from 6 (“sleeve-length”) to 55 (“colour”). In total, there are 178 distinctive attribute classes over the 9 labels. We randomly selected 165,467 images for training (including 10,000 images for validation) and the remaining 80,000 for testing. (3) CIFAR-100 provides a single-label class balanced learning test. This benchmark contains 100 classes, each with 600 images. This test provides a complementary evaluation of the proposed method against a variety of benchmarking methods and, moreover, facilitates extra in-depth model analysis under simulated class imbalance settings. We used the standard 490/10/100 training/validation/test split per class.
Performance Metrics The classification accuracy [37, 12] that treats all classes uniformly is not appropriate for a class imbalanced test, as a naive classifier that predicts every test sample as a majority class can still achieve a high overall accuracy although it fails on all minority class samples. Since we consider the multi-class imbalanced classification test, the common true/false (positive/negative) rates for binary-class classification are no longer valid. In this work, we adopt the sensitivity measure, which leads to a class-balanced accuracy by considering particularly the class distribution statistics, and generalises the conventional binary-class criterion. Formally, we compute the per-class sensitivity based on the classification confusion matrix as:

$$S_i = \frac{n_{ii}}{N_i}, \quad i = 1, \dots, C$$

where $n_{ij}$ is the number of class-$i$ test samples predicted by a model as class $j$, and $N_i = \sum_{j=1}^{C} n_{ij}$ is the size of class $i$ (with $C$ classes in total). Therefore, the diagonal of the confusion matrix refers to the correctly classified sample counts of individual classes, whilst the off-diagonal entries refer to the incorrect counts. We define the class-balanced accuracy (i.e. mean sensitivity) as:

$$\bar{S} = \frac{1}{C} \sum_{i=1}^{C} S_i$$
The above metric is for the single-label case. For the multi-label test, we average the mean sensitivity measures over all labels (attributes) to give the overall class-balanced accuracy.
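The metric above can be sketched in a few lines (a minimal illustration of the definition; the toy confusion matrix and function names are our own):

```python
def per_class_sensitivity(confusion):
    """Per-class sensitivity S_i = n_ii / N_i, where row i of the
    confusion matrix counts class-i test samples by predicted class."""
    sens = []
    for i, row in enumerate(confusion):
        n_i = sum(row)  # N_i: total number of class-i test samples
        sens.append(row[i] / n_i if n_i else 0.0)
    return sens

def class_balanced_accuracy(confusion):
    """Mean sensitivity over all C classes."""
    sens = per_class_sensitivity(confusion)
    return sum(sens) / len(sens)

# A majority-biased classifier on a 90/10 imbalanced test set:
# predicting every sample as class 0 scores 90% plain accuracy,
# but only 50% class-balanced accuracy.
conf = [[90, 0],   # all 90 class-0 samples predicted as class 0
        [10, 0]]   # all 10 class-1 samples also predicted as class 0
print(class_balanced_accuracy(conf))  # 0.5
```

This makes concrete why the naive majority-class predictor scores well on plain accuracy yet poorly on the class-balanced metric.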
Per-attribute class imbalance ratios of the 40 CelebA attributes in ascending order (attribute names as listed in Table III):

| Imbalance ratio (1:x) | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 5 |
| Imbalance ratio (1:x) | 6 | 6 | 6 | 7 | 8 | 8 | 11 | 13 | 14 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 22 | 23 | 24 | 43 |
Imbalanced Learning Methods for Comparison We considered five existing class imbalanced learning methods: (1) Over-Sampling: a multi-label re-sampling strategy that builds a more balanced training set before model learning by over-sampling minority classes with random replication. (2) Down-Sampling: another training data re-sampling method based on under-sampling majority classes by random sample removal. (3) Cost-Sensitive: a class-weighting strategy assigning greater misclassification penalties to minority classes and smaller penalties to majority classes in the loss design; we assign the class weight as $w_i = 1/p_i$, where $p_i$ specifies the ratio of class $i$ in the training data. (4) Threshold-Adjustment: adjusting the model decision threshold at test time by incorporating the class probability prior, e.g. moderating the original model prediction $p(y_i \mid \mathbf{x})$ to $\tilde{p}(y_i \mid \mathbf{x}) \propto p(y_i \mid \mathbf{x}) / p_i^{\tau}$, where $p_i$ is the class prior and $\tau$ is a temperature (softening) parameter estimated by cross validation. Given $\tilde{p}$, we then use the maximum a posteriori probability for class prediction. (5) LMLE: a state-of-the-art class imbalanced deep learning model exploiting the class structure for improving minority class modelling. For fair comparisons, all the methods were implemented on the same network architecture (details below), with the parameters set by following the authors’ suggestions where available, or by cross-validation otherwise. All models were trained on the same training data and evaluated on the same test data. We adopted the class-level relative comparison CRL for all remaining experiments unless stated otherwise.
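The two training-data-free baselines above, Cost-Sensitive weighting and Threshold-Adjustment, can be sketched as follows (a minimal illustration; the inverse-frequency weight form and the prior-division adjustment are our hedged readings of these schemes, not verbatim reproductions of the compared implementations):

```python
def inverse_frequency_weights(class_counts):
    """Cost-sensitive class weights: rarer classes receive larger
    misclassification penalties. The inverse-frequency form used here
    (w_i = N / n_i) is an illustrative assumption."""
    total = sum(class_counts)
    return [total / c for c in class_counts]

def threshold_adjust(probs, priors, tau=1.0):
    """Test-time threshold adjustment: discount each predicted class
    probability by the class prior softened by temperature tau, then
    renormalise; argmax of the result is the adjusted MAP prediction."""
    adj = [p / (q ** tau) for p, q in zip(probs, priors)]
    z = sum(adj)
    return [a / z for a in adj]

priors = [0.9, 0.1]         # heavily skewed training class priors
probs = [0.6, 0.4]          # raw model prediction for one test sample
adj = threshold_adjust(probs, priors)
print(adj.index(max(adj)))  # the minority class (1) now wins
```

Note that threshold adjustment changes only the decision rule, not the learned representation, which is one reason it can behave inconsistently across attributes.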
4.1 Comparisons on Facial Attributes Recognition
Competitors We compared the proposed CRL model with 9 existing methods, including the 5 class imbalanced learning models above and 4 other state-of-the-art deep learning models for facial attribute recognition on the CelebA benchmark: (1) PANDA, (2) ANet, (3) Triplet-NN, and (4) DeepID2.
Network Architecture We adopted the five-layer CNN architecture of DeepID2 as the base network for training all class imbalanced learning methods, including CRL and LMLE. Training DeepID2 was based on the conventional CE loss (Eqn. (2)); this provides a baseline for evaluations with and without CRL. Moreover, the CRL allows multi-task learning in the spirit of [78, 79], with an additional 64-dim FC feature layer and a 2-dim binary prediction layer for each facial attribute.
Parameter Settings We trained the CRL from scratch with the learning rate at 0.001, the weight decay at 0.0005, the momentum at 0.9, and the batch size at 256, for 921 epochs. We set the loss weight (Eqn. (8)) to 0.01.
Overall Evaluation Facial attribute recognition performance comparisons are shown in Table III. It is evident that CRL outperforms all competitors, including both the attribute recognition models and the class imbalanced learning methods, on the overall mean accuracy, improving over the best non-imbalanced-learning model DeepID2 and the state-of-the-art imbalanced learning model LMLE. Other classic imbalanced learning methods perform worse than both CRL and LMLE. In particular, Over-Sampling brings only marginal gain with no clear difference across all imbalance degrees, suggesting that replication based data rebalancing is limited in introducing useful information. Cost-Sensitive behaves largely similarly to Over-Sampling. The performance drops of Down-Sampling and Threshold-Adjustment are due to discarding useful data when balancing the class distribution, and imposing potentially inconsistent adjustments to model predictions, respectively. This shows (1) that not all class imbalanced learning methods are helpful, and (2) the clear superiority of our batch-wise incremental minority class rectification method over alternative methods in handling biased model learning.
| Imbalance Ratio | Bottom-20 (1:1 to 1:5) | Top-20 (1:6 to 1:43) |
Further Analysis We examined the characteristics of model performance on individual attributes exhibiting different class imbalance ratios. In particular, we further analysed CRL and the best competitor LMLE against the base model DeepID2 without class imbalanced learning. To that end, we split the 40 facial attributes into two groups at a 1:5 imbalance ratio: the bottom-20 (the first 20 in Table III) and the top-20 (the remaining) imbalanced attributes. Figure 6 and Table IV show that: (1) CRL improves the prediction accuracy on all attributes (above “0”), whilst LMLE can give weaker predictions than DeepID2, especially on highly imbalanced attributes. This suggests that CRL is more robust in coping with differently imbalanced attributes, especially the more extremely imbalanced classes. (2) LMLE is better on the bottom-20 imbalanced attributes, improving the mean accuracy by 7% versus 5% by CRL. For instance, CRL is outperformed by LMLE on the “Attractive” (balanced) and “Heavy Makeup” attributes by 7% and 8%, respectively. This suggests that LMLE is better suited for less-extremely imbalanced attributes. (3) LMLE’s performance degrades in mean accuracy on the top-20 imbalanced attributes. Specifically, LMLE performs worse than DeepID2 on most attributes with an imbalance ratio greater than 1:7, starting from “Wear Necklace” in Table III. This is in contrast to CRL, which achieves an even larger performance gain on the top-20. On some very imbalanced attributes, CRL outperforms LMLE significantly, e.g. on “Mustache” and “Blurry”. Interestingly, the “Blurry” attribute is visually challenging due to its global characteristics, not defined by local features and therefore very subtle, similar to the “Mustache” attribute (see Fig. 7). This demonstrates that CRL is superior and more scalable than LMLE in coping with severely imbalanced data learning.
This is due to (1) incremental batch-wise rectification of minority class predictive boundaries, which is independent of the global training data class structure, and (2) end-to-end deep learning for joint feature and classifier optimisation, which LMLE lacks.
Model Training Cost Analysis We analysed the model training cost of CRL and LMLE on a workstation with an NVIDIA Tesla K GPU and E5-2680 CPUs. For LMLE, we used the code released by the authors (the k-means clustering function is not included in the original code; we used the VLFeat implementation with its default maximum-iteration and repetition settings), with the original settings of multiple rounds of training, each with a fixed number of CNN optimisation iterations. The training was initialised by pre-trained DeepID2 face recognition features. On our workstation, LMLE took a total of 264.8 hours to train (we did not count the time for pre-training the DeepID2 model on face identity labels from CelebFaces+, which is needed to extract the initial features for the first round of data pre-processing, due to the lack of the corresponding code and details; we used the pre-trained DeepID2 model kindly shared by the LMLE authors), with each round taking 66.2 hours: 24.5 hours for “clustering + quintuplet construction” and 41.7 hours for “CNN model optimisation”. In contrast, CRL took 27.2 hours, i.e. 9.7× (264.8/27.2) faster than LMLE.
We further examined the model convergence rate. LMLE converges in fewer training batch iterations than CRL (LMLE’s 20,000 versus CRL’s 540,000). This is reasonable as LMLE benefits from both a specially designed structural pre-processing (building quintuplets) over the entire training data, which is a computationally expensive procedure, and a model pre-training process on auxiliary face recognition labels. However, LMLE is significantly slower than CRL in the overall CNN training time: LMLE’s 166.6 hours versus CRL’s 27.2 hours.
Class imbalance ratios of the 9 X-Domain attribute labels in ascending order:

| Imbalance ratio (1:x) | 2 | 138 | 210 | 242 | 476 | 2,138 | 3,401 | 4,115 | 4,162 |
4.2 Comparisons on Clothing Attributes Recognition
Competitors Besides the five imbalanced learning methods, we also compared CRL against four other state-of-the-art clothing attribute recognition models: (1) DDAN, (2) DARN, (3) FashionNet (we implemented FashionNet without the landmark detection branch since no landmark labels are available in the X-Domain dataset), and (4) MTCT.
Network Architecture We used the same network structure as MTCT. Specifically, this network is composed of five stacked NIN conv units and parallel branches, each a 3-FC-layer sub-network modelling a distinct attribute, in the multi-task learning spirit. We trained MTCT using the CE loss (Eqn. (2)).
Parameter Settings We pre-trained the base network on ImageNet-1K at the learning rate 0.01, then fine-tuned the CRL model on the X-Domain images at a lower learning rate of 0.001. We set the weight decay to 0.0005, the momentum to 0.9, and the batch size to 128, and trained for 256 epochs. We set the loss weight (Eqn. (8)) to 0.01.
Overall Evaluation Table V shows the comparative evaluation of 10 different models on the X-Domain benchmark. It is evident that CRL surpasses all prior state-of-the-art models on all attribute labels, including the best competitor LMLE in mean accuracy. This shows the superiority and scalability of our incremental minority class rectification in tackling extremely imbalanced attribute data, with a maximum imbalance ratio of 1:4,162 versus 1:43 for the CelebA attributes. Traditional class imbalanced learning methods behave similarly to how they do on facial attributes, except that Threshold-Adjustment also yields a small gain, similar to Cost-Sensitive. Other models without an explicit imbalanced learning mechanism, such as DDAN, FashionNet, DARN and MTCT, suffer notably.
| Imbalance Ratio | Bottom-1 (1:2) | Top-8 (1:138 to 1:4,162) |
We further examined the performance of CRL and LMLE in comparison to the base model MTCT. Similar to CelebA, we split the 9 attributes into two groups at a 1:5 class imbalance ratio: the bottom-1 (the first column in Table V) and the top-8 (the remaining columns). Figure 8 and Table VI show that: (1) LMLE improves MTCT on all clothing attributes with varying imbalance ratios. This suggests that LMLE does address the imbalanced data learning problem in a multi-class setting by embedding local class structures into deep feature learning. (2) Compared to LMLE, CRL achieves more significant performance gains on more severely imbalanced attributes. On the top-8 imbalanced attributes, CRL achieves a mean accuracy gain of 7.10% versus 2.10% by LMLE. In particular, our CRL improves over LMLE by 10.03% in accuracy for recognising “Sleeve Shape”, a fine-grained and visually ambiguous attribute due to its locality and subtle inter-class discrepancy. This evidence is interesting as it shows that the class distribution of the training data affects a model’s ability to learn fine-grained class discrimination effectively. Importantly, a model’s ability to cope effectively with class imbalanced data learning can help improve its learning of fine-grained class discrimination. This further demonstrates the strength of CRL over existing models for mitigating model learning bias given severely imbalanced, fine-grain labelled classes in an end-to-end deep learning framework.
Model Training Cost Analysis We examined the model training cost of LMLE and CRL on X-Domain using the same workstation as for CelebA. We used the original author-released code with the suggested optimisation settings, e.g. training LMLE for 4 rounds, each with 5,000 CNN training iterations. We started with the ImageNet-1K trained VGGNet16 features. For model training, LMLE took 429.9 hours, with each round taking 107.5 hours including 27.6 hours for “clustering + quintuplet construction” and 79.9 hours for “CNN model optimisation”. In contrast, CRL took 60.4 hours, i.e. 7.1× (429.9/60.4) faster than LMLE.
Further Evaluation We further evaluated the CRL on the DeepFashion clothing attribute dataset with a controlled experiment. We adopted a test setting consistent with all the other experiments: (1) the standard multi-label classification setting without using the clothing landmark and category labels; (2) ResNet50 as the base network trained by the CE loss; (3) top-5 attribute prediction under the class-balanced accuracy metric rather than a class-biased metric. We adopted the standard data split: 209,222/40,000/40,000 images for model training/validation/test. We trained the deep models from scratch with the learning rate at 0.01, the weight decay at 0.00004, and the batch size at 64, for 141 epochs. We focused on evaluating the additional effect of CRL on top of the CE loss. Table VII shows that CRL yields a 2.36% (54.56−52.20) boost in mean accuracy.
4.3 Comparisons on Object Category Recognition
We evaluated the CRL on the popular class balanced, single-label object category benchmark CIFAR-100.
Network Architecture We evaluated the CRL in three state-of-the-art CNN models: (1) CifarNet, (2) ResNet32, and (3) DenseNet. Each CNN model was trained by the conventional CE loss (Eqn. (2)). The purpose is to test their performance gains in single-label object classification when incorporating the proposed CRL regularisation (Eqn. (8)).
Parameter Settings We trained each CNN from scratch with the learning rate at 0.1, the weight decay at 0.0005, the momentum at 0.9, and the batch size at 256, for 200 epochs. In the class balanced test, we cannot directly deploy our loss formulation (Eqn. (8)), as the imbalance measure vanishes and hence eliminates the CRL term; instead, we integrated our CRL with the CE loss using equal weights for all models. For the class imbalanced cases, we set the loss weight separately for CifarNet, ResNet32 and DenseNet.
(I) Comparative Evaluation Table VIII shows the single-label object classification accuracy. Interestingly, our CRL approach consistently improves all three state-of-the-art CNN models (CifarNet, ResNet32 and DenseNet). This shows that the advantages of our batch-wise minority class rectification method carry over to class balanced cases. The plausible reasons are: (1) Whilst the global class distribution is balanced, the random mini-batch sampling adopted in common deep learning may introduce some imbalance in each iteration; our per-batch balancing strategy therefore has the chance to regularise the inter-class margin and benefit the overall model learning. (2) The CRL considers the optimisation of class-level structural separation, which provides a complementary benefit to the CE loss, which instead performs per-sample single-class optimisation.
(II) Effect Analysis of Imbalanced Training Data We further evaluated the deep models and the CRL under different imbalance ratios. To this end, we carried out a controlled experiment by simulating class imbalance in the training data. Specifically: (1) We simulated class imbalanced training data with a power-law class distribution (Fig. 10(a)): $n_i = a \cdot i^{-\rho} + b$, where $i$ is the class index, $\rho$ is a preset parameter controlling the degree of imbalance, and $a$ and $b$ are estimated from the largest and smallest class sizes. We call the resulting training set the imbalanced CIFAR-100. (2) We constructed a corresponding balanced CIFAR-100 set, subject to having the same total number of images over all classes as the imbalanced set whilst being class balanced (i.e. all classes equally sized). This is necessary as the two sets differ in both class balance and size and are thus not directly comparable otherwise. (3) We trained the CNN models with and without CRL on these simulated training sets separately and tested their performance on the same standard test data. (4) To compare deep learning methods with conventional models, we also evaluated the k-nearest neighbour classifier with the HOG feature. Table IX shows the results for one imbalance setting. We observed that: (1) Given class imbalanced training data, all three CNN models are adversely affected, with accuracy decreased for CifarNet, ResNet32 and DenseNet alike; interestingly, the stronger CNNs suffer larger performance degradation. (2) CRL improves the accuracy of all three CNN models, which shows the effectiveness of CRL. (3) All three deep learning models are sensitive to imbalanced training data, with relative performance drops similar to the conventional non-deep-learning HOG+kNN model. This suggests that deep learning models are not necessarily superior in tackling the class imbalanced learning challenge. Moreover, we evaluated the CRL with ResNet32 under a range of imbalance settings.
Figure 10(b) shows its performance gains across all these settings. We observed no clear trend between model performance and the imbalance parameter, since their relationship is non-linear. In particular, model generalisation depends not only on the class distribution but also on other factors such as the specific training samples, i.e. the information content is variable (and unknown) over training samples.
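The power-law simulation of per-class training-set sizes can be sketched as follows (the concrete sizes 500/50 and $\rho = 1$ are illustrative stand-ins, as the exact largest/smallest class sizes are not given above):

```python
def powerlaw_class_sizes(num_classes, n_max, n_min, rho):
    """Simulate an imbalanced class distribution n_i = a * i**(-rho) + b,
    with a, b solved so that class 1 has n_max samples and the last
    class has n_min (matching the largest/smallest class sizes)."""
    a = (n_max - n_min) / (1.0 - num_classes ** (-rho))
    b = n_max - a
    return [round(a * i ** (-rho) + b) for i in range(1, num_classes + 1)]

# e.g. 100 classes, largest class 500 images, smallest 50, rho = 1
sizes = powerlaw_class_sizes(100, 500, 50, 1.0)
print(sizes[0], sizes[-1])      # 500 50
print(max(sizes) / min(sizes))  # imbalance ratio 1:10
```

Solving $a$ and $b$ from the two boundary class sizes pins the distribution's endpoints, so varying only $\rho$ changes how quickly the class sizes decay in between.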
4.4 Further Evaluations and Discussions
We conducted component analysis to provide more insight into CRL. By default, we adopted the class-level relative comparison based CRL (Eqn. (9)) and used the most imbalanced X-Domain dataset, unless stated otherwise.
| Dataset | CelebA | X-Domain |
CRL Design We evaluated the two hard mining schemes (class-level and instance-level, Sec. 3.2) and the three loss types (relative, absolute, and distribution comparison, Sec. 3.3). We therefore tested 6 CRL design combinations in comparison with the baseline models without imbalanced learning: DeepID2 on CelebA and MTCT on X-Domain. Table X shows that: (1) All CRL models improve the mean accuracy consistently, with CRL(Class+Rel) the best. (2) With the same loss type, the class-level design is superior in most cases. This suggests that regularising the score space is more effective than regularising the feature space; a plausible explanation is that the former is more compatible with the conventional CE loss, which also operates on class scores.
Loss Weight Optimisation We evaluated the weighting between the CE loss and the CRL loss by tuning the coefficient in Eqn. (8) over a range of values on the X-Domain benchmark. Figure 11 shows the best weight selection. Moreover, it is found that changing the weight affects the performance on most or all attributes consistently. This indicates that the CRL formulation with loss weighting is imbalance adaptive, capable of effectively modelling multiple attribute labels with diverse class imbalance ratios through a single-value hyper-parameter optimised by cross-validation.
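The loss weighting and its cross-validated tuning can be sketched as follows (the convex-combination form and the grid values are assumptions for illustration, not the exact Eqn. (8); `validate` is a hypothetical stand-in for a real validation run):

```python
def combined_loss(ce_loss, crl_loss, eta):
    """Weighted combination of the CE and CRL terms. The convex form
    (1 - eta) * CE + eta * CRL is an assumed reading of Eqn. (8):
    a small eta keeps CE dominant while CRL incrementally rectifies
    minority-class boundaries."""
    return (1.0 - eta) * ce_loss + eta * crl_loss

# Cross-validate eta on a coarse grid, keeping the value with the
# best validation class-balanced accuracy.
grid = [0.001, 0.01, 0.1, 0.5]
def validate(eta):               # hypothetical stand-in for a real run
    return -abs(eta - 0.01)      # pretend eta = 0.01 validates best
best_eta = max(grid, key=validate)
print(best_eta)  # 0.01
```

A single scalar weight is what makes the formulation practical across attribute labels with very different imbalance ratios: only one hyper-parameter needs tuning.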
| Model | CRL(HM+JL) | CRL(JL) | LMLE |
| Mean Accuracy (%) | 80.42 | 78.89 | 75.77 |
Hard Mining and Joint Learning We further evaluated the individual effects of Hard Mining (HM) and Jointly Learning (JL) the features and classifier in the full CRL model “CRL(HM+JL)”, in comparison to LMLE (in this evaluation, we treat LMLE as a whole, without separating or removing its built-in hard mining mechanism). Table XI shows the performance benefit of the proposed hard mining: compared to CRL without HM, “CRL(JL)”, it gives a 1.53% (80.42−78.89) mean accuracy advantage on X-Domain. It also shows that joint learning in CRL brings a mean accuracy advantage of 3.12% (78.89−75.77) over LMLE, which performs hard mining but no joint learning.
Top-$k$ We examined the effect of different $k$ values in hard mining, from 1 to 175 with step size 25. Figure 12 shows that with $k=1$ (i.e. hardest mining), the model fails to capture a good converging trajectory. This is because the hardest mining represents overly sparse and possibly incorrect (due to outlier noise) class boundaries, which causes poorer optimisation. Beyond a moderate value of $k$, there is no further improvement to model learning. Given that a larger $k$ increases the model training cost, we fixed $k$ accordingly for all our experiments.
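A minimal sketch of batch-wise top-$k$ hard mining in the spirit described above (the exact selection criterion of CRL is not reproduced here; using the lowest/highest class score as the "hardness" proxy is an illustrative assumption):

```python
def topk_hard_mining(scores, labels, minority_class, k):
    """Class-level top-k hard mining sketch: within a batch, the k
    hardest positives are the minority-class samples with the LOWEST
    minority-class score, and the k hardest negatives are the
    other-class samples with the HIGHEST minority-class score."""
    pos, neg = [], []
    for idx, (s, y) in enumerate(zip(scores, labels)):
        (pos if y == minority_class else neg).append((s[minority_class], idx))
    hard_pos = [i for _, i in sorted(pos)[:k]]                # least confident
    hard_neg = [i for _, i in sorted(neg, reverse=True)[:k]]  # most confusing
    return hard_pos, hard_neg

# Per-sample class-score vectors for a toy 4-sample batch, 2 classes:
scores = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.4, 0.6]]
labels = [0, 1, 0, 1]
print(topk_hard_mining(scores, labels, minority_class=1, k=1))  # ([3], [2])
```

The $k=1$ failure case described above corresponds to anchoring the loss on single, possibly noisy, extreme samples; a moderate $k$ averages over a small neighbourhood of the minority class boundary instead.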
| CRL Class Scope | Mean Accuracy (%) | Training Time |
| Minority Classes | 80.42 | 60.4 Hours |
| All Classes | 81.30 | 77.6 Hours |
Class Scope We also evaluated the effect of the class scope (Minority-Classes versus All-Classes) on which the CRL regularisation is enforced, in terms of both accuracy and model training cost. Table XII shows that applying the CRL to all classes in each batch yields superior accuracy. Specifically, relative to the baseline MTCT’s mean accuracy (Table V), the Minority-Classes and All-Classes scopes bring 6.89% (80.42−73.53) and 7.77% (81.30−73.53) accuracy gains, respectively. That is, the latter yields an additional relative gain of about 12.8% ((7.77−6.89)/6.89) but at about 28.5% ((77.6−60.4)/60.4) extra training cost. This suggests better cost-effectiveness by focusing only on minority classes in imbalanced data learning.
| Minority Class Criterion | 10% | 30% | 50% (Ours) |
| Mean Accuracy (%) | 77.82 | 79.29 | 80.42 |
Minority Class Criterion Lastly, we evaluated the effect of the minority class criterion (Eqn. (3)) over a range from 10% to 50%, generalising the two-class minority class definition to a multi-class setting. Table XIII shows the effect on model performance as the criterion changes, demonstrating that a minority class criterion of 50% is both the most effective and conceptually consistent with the two-class setting.
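One plausible reading of a batch-wise multi-class minority criterion can be sketched as follows (this is an assumption for illustration, not the paper's exact Eqn. (3)):

```python
from collections import Counter

def batch_minority_classes(labels, rho=0.5):
    """Illustrative sketch (an assumed reading of the criterion): rank
    classes by their frequency in the batch and flag as minority the
    rarest classes whose cumulative share of batch samples stays below
    rho. With two classes and rho = 50% this reduces to 'the smaller
    class', consistent with the two-class setting."""
    counts = Counter(labels)
    total = len(labels)
    minority, cum = [], 0.0
    for cls, n in sorted(counts.items(), key=lambda kv: kv[1]):
        cum += n / total
        if cum >= rho:
            break
        minority.append(cls)
    return minority

print(batch_minority_classes([0] * 90 + [1] * 10))  # [1]
print(batch_minority_classes([0, 1] * 5))           # [] (balanced batch)
```

Under this reading, lowering the threshold below 50% shrinks the set of classes receiving rectification, which matches the accuracy drop observed in Table XIII.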
In this work, we introduced an end-to-end class imbalanced deep learning framework for large scale visual data learning. The proposed Class Rectification Loss (CRL) approach is characterised by batch-wise incremental minority class rectification with a scalable hard mining principle. Specifically, the CRL is designed to regularise the inherently biased deep model learning behaviour given extremely imbalanced training data. Importantly, CRL preserves the model optimisation convergence characteristics of stochastic gradient descent, therefore allowing for efficient end-to-end deep learning on significantly imbalanced training data with multi-label semantic interpretations. Comprehensive experiments were carried out to show the clear advantages and scalability of the CRL method over not only the state-of-the-art imbalanced data learning models but also dedicated deep learning visual recognition methods. For example, the CRL surpasses the best alternative LMLE on both the CelebA facial attribute benchmark and the extremely imbalanced X-Domain clothing attribute benchmark, whilst enjoying over 7× faster model training. Our experiments also show the benefits of the CRL in learning standard deep models given class balanced training data. Finally, we provided detailed component analysis to give insights into the characteristics of the CRL model design.
We thank Victor Lempitsky for providing the histogram loss code, and Chen Huang and Chen Change Loy for sharing the pre-trained DeepID2 face recognition model. This work was partly supported by the China Scholarship Council, Vision Semantics Ltd., the Royal Society Newton Advanced Fellowship Programme (NA150459), and the Innovate UK Industrial Challenge Project on Developing and Commercialising Intelligent Video Analytics Solutions for Public Safety (98111-571149).
-  N. Japkowicz and S. Stephen, “The class imbalance problem: A systematic study,” Intelligent Data Analysis, vol. 6, no. 5, pp. 429–449, 2002.
-  G. M. Weiss, “Mining with rarity: a unifying framework,” ACM SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 7–19, 2004.
-  H. He and E. A. Garcia, “Learning from imbalanced data,” IEEE TKDE, vol. 21, no. 9, pp. 1263–1284, 2009.
-  T. M. Hospedales, S. Gong, and T. Xiang, “Finding rare classes: Active learning with generative and discriminative models,” IEEE TKDE, vol. 25, no. 2, pp. 374–386, 2013.
-  C. Drummond, R. C. Holte et al., “C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling,” in ICML Workshop, 2003.
-  N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “Smote: synthetic minority over-sampling technique,” JAIR, vol. 16, pp. 321–357, 2002.
-  T. Maciejewski and J. Stefanowski, “Local neighbourhood extension of smote for mining imbalanced data,” in ICDM, 2011.
-  K. M. Ting, “A comparative study of cost-sensitive boosting algorithms,” in ICML, 2000.
-  Y. Tang, Y.-Q. Zhang, N. V. Chawla, and S. Krasser, “Svms modeling for highly imbalanced classification,” IEEE TSMCB, vol. 39, no. 1, pp. 281–288, 2009.
-  R. Akbani, S. Kwek, and N. Japkowicz, “Applying support vector machines to imbalanced datasets,” in ECML, 2004.
-  Q. Chen, J. Huang, R. Feris, L. M. Brown, J. Dong, and S. Yan, “Deep domain adaptation for describing people based on fine-grained clothing attributes,” in CVPR, 2015.
-  Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in ICCV, 2015.
-  B. Krawczyk, “Learning from imbalanced data: open challenges and future directions,” Progress in Artificial Intelligence, vol. 5, no. 4, pp. 221–232, 2016.
-  S. Gong, M. Cristani, S. Yan, and C. C. Loy, Person re-identification. Springer, 2014, vol. 1.
-  R. Feris, R. Bobbitt, L. Brown, and S. Pankanti, “Attribute-based people search: Lessons learnt from a practical surveillance system,” in ICMR, 2014.
-  K. Chen, S. Gong, T. Xiang, and C. Loy, “Cumulative attribute space for age and crowd density estimation,” in CVPR, 2013.
-  C. Huang, Y. Li, C. Change Loy, and X. Tang, “Learning deep representation for imbalanced classification,” in CVPR, 2016.
-  N. Zhang, M. Paluri, M. Ranzato, T. Darrell, and L. Bourdev, “Panda: Pose aligned networks for deep attribute modeling,” in CVPR, 2014, pp. 1637–1644.
-  I. Triguero, S. del Río, V. López, J. Bacardit, J. M. Benítez, and F. Herrera, “ROSEFW-RF: the winner algorithm for the ECBDL’14 big data competition: an extremely imbalanced big data bioinformatics problem,” Knowledge-Based Systems, vol. 87, pp. 69–79, 2015.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in ICLR, 2015.
-  A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “Cnn features off-the-shelf: an astounding baseline for recognition,” in CVPR, 2014, pp. 806–813.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012, pp. 1097–1105.
-  Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE TPAMI, vol. 35, no. 8, pp. 1798–1828, 2013.
-  Z.-H. Zhou and X.-Y. Liu, “Training cost-sensitive neural networks with methods addressing the class imbalance problem,” IEEE TKDE, vol. 18, no. 1, pp. 63–77, 2006.
-  P. Jeatrakul, K. W. Wong, and C. C. Fung, “Classification of imbalanced data by combining the complementary neural network and smote algorithm,” in International Conference on Neural Information Processing, 2010.
-  R. Alejo, V. García, J. M. Sotoca, R. A. Mollineda, and J. S. Sánchez, “Improving the classification accuracy of rbf and mlp neural networks trained with imbalanced samples,” in International Conference on Intelligent Data Engineering and Automated Learning, 2006.
-  T. M. Khoshgoftaar, J. Van Hulse, and A. Napolitano, “Supervised neural network modeling: an empirical investigation into learning from imbalanced data with labeling errors,” IEEE TNN, vol. 21, no. 5, pp. 813–830, 2010.
-  M. A. Mazurowski, P. A. Habas, J. M. Zurada, J. Y. Lo, J. A. Baker, and G. D. Tourassi, “Training neural network classifiers for medical decision making: The effects of imbalanced datasets on classification performance,” Neural Networks, vol. 21, no. 2, pp. 427–436, 2008.
-  J. Huang, R. S. Feris, Q. Chen, and S. Yan, “Cross-domain image retrieval with a dual attribute-aware ranking network,” in ICCV, 2015.
-  Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang, “Deepfashion: Powering robust clothes recognition and retrieval with rich annotations,” in CVPR, 2016.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in ECCV, 2014.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., “Imagenet large scale visual recognition challenge,” IJCV, vol. 115, no. 3, pp. 211–252, 2015.
-  M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, “The pascal visual object classes challenge: A retrospective,” IJCV, vol. 111, no. 1, pp. 98–136, 2015.
-  A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
-  G. Griffin, A. Holub, and P. Perona, “Caltech-256 object category dataset,” 2007.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016.
-  Q. Dong, S. Gong, and X. Zhu, “Multi-task curriculum transfer deep learning of clothing attributes,” in WACV, 2017.
-  D. Arthur and S. Vassilvitskii, “How slow is the k-means method?” in ACM Annual Symposium on Computational Geometry, 2006, pp. 144–153.
-  F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in CVPR, 2015.
-  H. Han, W.-Y. Wang, and B.-H. Mao, “Borderline-smote: a new over-sampling method in imbalanced data sets learning,” in International Conference on Intelligent Computing, 2005.
-  M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in CVPR, 2014.
-  N. Japkowicz et al., “Learning from imbalanced data sets: a comparison of various strategies,” in AAAI Workshop, 2000.
-  R. Barandela, J. S. Sánchez, V. García, and E. Rangel, “Strategies for learning in class imbalance problems,” Pattern Recognit., vol. 36, no. 3, pp. 849–851, 2003.
-  C. Chen, A. Liaw, and L. Breiman, “Using random forest to learn imbalanced data,” University of California, Berkeley, 2004.
-  Y. Lin, Y. Lee, and G. Wahba, “Support vector machines for classification in nonstandard situations,” Machine Learning, vol. 46, no. 1, pp. 191–202, 2002.
-  B. Liu, Y. Ma, and C. K. Wong, “Improving an association rule based classifier,” in European Conference on Principles of Data Mining and Knowledge Discovery, 2000.
-  B. Zadrozny and C. Elkan, “Learning and making decisions when costs and probabilities are both unknown,” in SIGKDD, 2001, pp. 204–213.
-  J. R. Quinlan, “Improved estimates for the accuracy of small disjuncts,” Machine Learning, vol. 6, no. 1, pp. 93–98, 1991.
-  G. Wu and E. Y. Chang, “Kba: Kernel boundary alignment considering imbalanced data distribution,” IEEE TKDE, vol. 17, no. 6, pp. 786–795, 2005.
-  B. Zadrozny, J. Langford, and N. Abe, “Cost-sensitive learning by cost-proportionate example weighting,” in ICDM, 2003.
-  F. Provost, “Machine learning from imbalanced data sets 101,” in AAAI, 2000.
-  B. Krawczyk and M. Woźniak, “Cost-sensitive neural network with roc-based moving threshold for imbalanced classification,” in International Conference on Intelligent Data Engineering and Automated Learning, 2015.
-  H. Yu, C. Sun, X. Yang, W. Yang, J. Shen, and Y. Qi, “ODOC-ELM: Optimal decision outputs compensation-based extreme learning machine for classifying imbalanced data,” Knowledge-Based Systems, vol. 92, pp. 55–70, 2016.
-  J. Chen, C.-A. Tsai, H. Moon, H. Ahn, J. Young, and C.-H. Chen, “Decision threshold adjustment in class prediction,” SAR and QSAR in Environmental Research, vol. 17, no. 3, pp. 337–352, 2006.
-  M. Woźniak, Hybrid Classifiers: Methods of Data, Knowledge, and Classifier Combination. Springer, 2013, vol. 519.
-  S. Wang, Z. Li, W. Chao, and Q. Cao, “Applying adaptive over-sampling technique based on data density and cost-sensitive SVM to imbalanced learning,” in IJCNN, 2012.
-  M. Woźniak, M. Graña, and E. Corchado, “A survey of multiple classifier systems as hybrid systems,” Information Fusion, vol. 16, pp. 3–17, 2014.
-  J. Lan, M. Y. Hu, E. Patuwo, and G. P. Zhang, “An investigation of neural network classifiers with unequal misclassification costs and group sizes,” Decision Support Systems, vol. 48, no. 4, pp. 582–591, 2010.
-  Y.-M. Huang, C.-M. Hung, and H. C. Jiau, “Evaluation of neural networks and data mining methods on a credit assessment task for class imbalance problem,” Nonlinear Analysis: Real World Applications, vol. 7, no. 4, pp. 720–747, 2006.
-  F. Fernández-Navarro, C. Hervás-Martínez, and P. A. Gutiérrez, “A dynamic over-sampling procedure based on sensitivity for multi-class problems,” Pattern Recognit., vol. 44, no. 8, pp. 1821–1833, 2011.
-  C. L. Castro and A. P. Braga, “Novel cost-sensitive approach to improve the multilayer perceptron performance on imbalanced data,” IEEE TNNLS, vol. 24, no. 6, pp. 888–899, 2013.
-  S. H. Khan, M. Hayat, M. Bennamoun, F. A. Sohel, and R. Togneri, “Cost-sensitive learning of deep feature representations from imbalanced data,” IEEE TNNLS, 2017.
-  W. Shen, X. Wang, Y. Wang, X. Bai, and Z. Zhang, “DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection,” in CVPR, 2015.
-  S. Wang, W. Liu, J. Wu, L. Cao, Q. Meng, and P. J. Kennedy, “Training deep neural networks on imbalanced data sets,” in IJCNN, 2016.
-  S. Guan, M. Chen, H.-Y. Ha, S.-C. Chen, M.-L. Shyu, and C. Zhang, “Deep learning with MCA-based instance selection and bootstrapping for imbalanced data classification,” in CIC, 2015.
-  Y. Yan, M. Chen, M.-L. Shyu, and S.-C. Chen, “Deep learning for imbalanced multimedia data classification,” in ISM, 2015.
-  P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE TPAMI, vol. 32, no. 9, pp. 1627–1645, 2010.
-  A. Shrivastava, A. Gupta, and R. Girshick, “Training region-based object detectors with online hard example mining,” in CVPR, 2016.
-  H. Oh Song, Y. Xiang, S. Jegelka, and S. Savarese, “Deep metric learning via lifted structured feature embedding,” in CVPR, 2016.
-  X. Wang and A. Gupta, “Unsupervised learning of visual representations using videos,” in ICCV, 2015.
-  E. Ustinova and V. Lempitsky, “Learning deep embeddings with histogram loss,” in NIPS, 2016.
-  J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu, “Learning fine-grained image similarity with deep ranking,” in CVPR, 2014.
-  T.-Y. Liu, “Learning to rank for information retrieval,” Foundations and Trends in Information Retrieval, vol. 3, no. 3, pp. 225–331, 2009.
-  S. Chopra, R. Hadsell, and Y. LeCun, “Learning a similarity metric discriminatively, with application to face verification,” in CVPR, 2005.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv, 2015.
-  Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in NIPS, 2014.
-  T. Evgeniou and M. Pontil, “Regularized multi-task learning,” in SIGKDD, 2004, pp. 109–117.
-  R. K. Ando and T. Zhang, “A framework for learning predictive structures from multiple tasks and unlabeled data,” JMLR, vol. 6, no. Nov, pp. 1817–1853, 2005.
-  A. Vedaldi and B. Fulkerson, “VLFeat: An open and portable library of computer vision algorithms,” http://www.vlfeat.org/, 2008.
-  M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv, 2013.
-  G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” in CVPR, 2017.
-  N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in CVPR, 2005.