$\mathcal{G}$-softmax: Improving Intra-class Compactness and Inter-class Separability of Features

Yan Luo, Yongkang Wong,  Mohan Kankanhalli,  and Qi Zhao,  Y. Luo and Q. Zhao are with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, 55455.
E-mail: luoxx648@umn.edu, qzhao@cs.umn.edu. Y. Wong and M. Kankanhalli are with the School of Computing, National University of Singapore, Singapore, 117417. E-mail: yongkang.wong@nus.edu.sg, mohan@comp.nus.edu.sg. Qi Zhao is the corresponding author.
Abstract

Intra-class compactness and inter-class separability are crucial indicators to measure the effectiveness of a model to produce discriminative features, where intra-class compactness indicates how close the features with the same label are to each other and inter-class separability indicates how far away the features with different labels are. In this work, we investigate intra-class compactness and inter-class separability of features learned by convolutional networks and propose a Gaussian-based softmax ($\mathcal{G}$-softmax) function that can effectively improve intra-class compactness and inter-class separability. The proposed function is simple to implement and can easily replace the softmax function. We evaluate the proposed $\mathcal{G}$-softmax function on classification datasets (i.e., CIFAR-10, CIFAR-100, and Tiny ImageNet) and on multi-label classification datasets (i.e., MS COCO and NUS-WIDE). The experimental results show that the proposed $\mathcal{G}$-softmax function improves the state-of-the-art models across all evaluated datasets. In addition, analysis of the intra-class compactness and inter-class separability demonstrates the advantages of the proposed function over the softmax function, which is consistent with the performance improvement. More importantly, we observe that high intra-class compactness and inter-class separability are linearly correlated to average precision on MS COCO and NUS-WIDE. This implies that improvement of intra-class compactness and inter-class separability would lead to improvement of average precision.

Deep learning, Multi-Label classification, Gaussian-based softmax, Compactness and separability.

I Introduction

Machine learning is an important and fundamental component in visual understanding tasks. The core idea of supervised learning is to learn a model that explores the causal relationship between the dependent variables and the predictor variables. To quantify this relationship, the conventional approach is to make a hypothesis on the model, and feed the observed pairs of dependent and predictor variables to the model for predicting future cases. For most learning problems, it is infeasible to make a perfect hypothesis that matches the underlying pattern, whereas a badly designed hypothesis often leads to a model that is more complicated than necessary and violates the principle of parsimony. Therefore, when designing or evaluating a model, the core objective is to seek a balance between two conflicting goals: making the model complex enough to achieve accurate predictions, and keeping it as simple as possible, but not simpler.

In the past decade, deep learning methods have significantly accelerated the development of machine learning research, where the Convolutional Network (ConvNet) has achieved superior performance in numerous real-world visual understanding tasks [Girshick_CVPR_2014, Krizhevsky_NIPS_2012, Noh_ICCV_2015, Fu_PAMI_2017, Christian_RAS_2016, Hong_CVPR_2017, Zhou_ICCV_2017, Xu_CVPR_2017, Change_PAMI_2017, Hou_TNNLS_2015, Yuan_TNNLS_2015, Shao_TNNLS_2014]. Although their architectures differ, the softmax function is widely used along with the cross entropy loss at the training phase [He_CVPR_2016, Krizhevsky_NIPS_2012, Lecun_IEEE_1998, Simonyan_ICML_2015, Szegedy_CVPR_2015]. However, the softmax function may not take the distribution pattern of previously observed samples into account, which could be exploited to boost classification accuracy. In this work, we design a statistically driven extension of the softmax function that fits into the Stochastic Gradient Descent (SGD) scheme for end-to-end learning. Furthermore, the softmax function sits in the final layer and directly connects to the predictions, so extending it maximally preserves generality for various ConvNets, i.e., it avoids complex modification of existing network architectures.

Fig. 1: An illustration showing the benefits of improving inter-class separability and intra-class compactness. Given a model, an input image is encoded to yield a discriminative feature that is used to compute the class-dependent confidence. As shown in the figure, inter-class separability encourages the distribution w.r.t. a label to be distant from the distributions w.r.t. the other labels, and intra-class compactness encourages the features w.r.t. the ground truth label to be close to the mean. PDF stands for probability density function.
Fig. 2: An illustration of the learning framework with the proposed $\mathcal{G}$-softmax function. The proposed $\mathcal{G}$-softmax function serves as a predictor that yields prediction confidences w.r.t. each class by taking the input feature and its distribution into account. $\Phi$ denotes the cumulative distribution function. The distributions are updated by gradient descent methods via back-propagation. In the mainstream ConvNets [He_CVPR_2016, Krizhevsky_NIPS_2012, Simonyan_ICML_2015, Szegedy_CVPR_2015], the predictor is often the softmax function.

Features are key for prediction in ConvNet learning. According to the central limit theorem [Kallenberg_book_1997], the arithmetic mean of a sufficiently large number of samples of i.i.d. random variables, each with a finite expected value and variance, is approximately normally distributed even if the original variables are not normally distributed. This makes the Gaussian distribution generally valid in a great variety of contexts. Following this line of thought, online learning methods [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] assumed that the weights follow a Gaussian distribution and made use of its distribution pattern for classification. Given large-scale training data [Russakovsky_IJCV_2015], the underlying distributions of discriminative features generated by ConvNets can be modeled. This distribution pattern has not been fully explored in the existing literature.

Intra-class compactness and inter-class separability of features are generally correlated with the quality of the learned features. If intra-class compactness and inter-class separability are simultaneously maximized, the learned features are more discriminative [Liu_ICML_2016]. We introduce a variant of the softmax function, named the Gaussian-based softmax ($\mathcal{G}$-softmax) function, which aims to improve intra-class compactness and inter-class separability, as shown in Figure 1. It is built on the typical assumption that features follow Gaussian distributions, and the Gaussian cumulative distribution function (CDF) is used in prediction, with normalization generating the final confidence in a soft form.

Figure 2 demonstrates the role and position of the proposed $\mathcal{G}$-softmax function in a supervised learning framework. Given the training samples, the feature extractor extracts the features and then passes them to the predictor for inference. In this work, we follow the mainstream deep learning framework where the feature extractor is modeled with a ConvNet. The proposed $\mathcal{G}$-softmax function can directly replace the softmax function. The contributions can be summarized as follows:

  • With the general assumption that features w.r.t. a class are subject to a Gaussian distribution, we propose the $\mathcal{G}$-softmax function, which models the distributions of features for better prediction. The experiments on CIFAR-10, CIFAR-100 [Krizhevsky_Citeseer_2009] and Tiny ImageNet (https://tiny-imagenet.herokuapp.com/) show that the proposed $\mathcal{G}$-softmax function consistently outperforms the softmax and L-softmax functions on various state-of-the-art models. Also, we apply the proposed $\mathcal{G}$-softmax function to the multi-label classification problem, which yields better performance than the softmax function on MS COCO [Lin_ECCV_2014] and NUS-WIDE [Chua_CIVR_2009]. The source code is available at https://gitlab.com/luoyan/gsoftmax and is easy to use.

  • The proposed $\mathcal{G}$-softmax function can quantify compactness and separability. Specifically, for each learned Gaussian distribution, the corresponding mean and variance indicate the center and the compactness of the predictor, respectively.

  • In our analysis of the correlation between intra-class compactness (or inter-class separability) and average precision, we observe that high intra-class compactness and inter-class separability are linearly correlated to average precision on MS COCO and NUS-WIDE. This implies that improvement of intra-class compactness and inter-class separability would lead to improvement of average precision.

II Related Work

Gaussian-based Online Learning. We first review the Gaussian-based online learning methods. In the online learning context, the training data are provided in a sequential order to learn a predictor for unobserved data. These methods usually make some assumptions to minimize the cumulative disparity errors between the ground truth and predictions over the entire sequence of instances [Crammer_JMLR_2006, Crammer_NIPS_2008, Dredze_ICML_2008, Rosenblatt_Psychological_1958, Wang_ICML_2012]. In this sense, these works can give some guidance and inspiration for designing a flexible mapping function.

Building upon the Passive-Aggressive model [Crammer_JMLR_2006], Dredze et al. [Dredze_ICML_2008] made an explicit assumption on the weights $\mathbf{w}$: $\mathbf{w} \sim \mathcal{N}(\boldsymbol{\mu}, \Sigma)$, where $\boldsymbol{\mu}$ is the mean of the weights and $\Sigma$ is a covariance matrix for the underlying Gaussian distribution. Given an input instance $\mathbf{x}$ with the corresponding label $y \in \{-1, +1\}$, the multivariate Gaussian distribution over weight vectors induces a univariate Gaussian distribution over the margin $M = y(\mathbf{w} \cdot \mathbf{x})$, where $\cdot$ is the inner product operation. Hence, the probability of a correct prediction is $\Pr[M \geq 0]$. The objective is to minimize the Kullback-Leibler divergence between the current distribution and the ideal distribution with the constraint that the probability of a correct prediction is not smaller than the confidence hyperparameter $\eta$, i.e., $\Pr[M \geq 0] \geq \eta$. With the mean of the margin $y(\boldsymbol{\mu} \cdot \mathbf{x})$ and the variance $\mathbf{x}^{\top}\Sigma\mathbf{x}$, the constraint can lead to $y(\boldsymbol{\mu} \cdot \mathbf{x}) \geq \phi^{-1}(\eta)\sqrt{\mathbf{x}^{\top}\Sigma\mathbf{x}}$, where $\phi$ is the cumulative distribution function of the standard Gaussian distribution. This inequality is used as a constraint in optimization in practice. However, it is not convex with respect to $\Sigma$, and Dredze et al. [Dredze_ICML_2008] linearized it by omitting the square root: $y(\boldsymbol{\mu} \cdot \mathbf{x}) \geq \phi^{-1}(\eta)\,\mathbf{x}^{\top}\Sigma\mathbf{x}$. To solve this non-convex problem, Crammer et al. [Crammer_NIPS_2008] discovered that a change of variable helps to maintain the convexity, i.e., when $\Sigma = \Upsilon^{2}$, the constraint becomes $y(\boldsymbol{\mu} \cdot \mathbf{x}) \geq \phi^{-1}(\eta)\,\lVert\Upsilon\mathbf{x}\rVert$. The confidence-weighted method [Crammer_NIPS_2008] employs an aggressive updating strategy by changing the distribution to satisfy the constraint imposed by the current instance, which may incorrectly update the parameters of the distribution when handling a mislabeled instance. Therefore, Wang et al. [Wang_ICML_2012] introduced a trade-off parameter to balance the passiveness and aggressiveness.
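To make the probabilistic constraint above concrete, the following minimal Python sketch evaluates the probability of a correct prediction under the Gaussian margin assumption; it is our own illustration of the idea described in [Dredze_ICML_2008, Crammer_NIPS_2008], and all variable names and the toy numbers are assumptions, not values from those works.

```python
import numpy as np
from scipy.stats import norm

# Gaussian assumption on the weights: w ~ N(mu_w, Sigma)
mu_w = np.array([0.5, -0.2, 0.1])        # mean weight vector (toy values)
Sigma = np.diag([0.05, 0.10, 0.02])      # covariance of the weights

x = np.array([1.0, 0.3, -0.5])           # input instance
y = 1.0                                  # binary label in {-1, +1}

# Induced univariate Gaussian over the margin M = y * (w . x)
mu_M = y * mu_w.dot(x)                   # mean of the margin
var_M = x.dot(Sigma).dot(x)              # variance of the margin

# Probability of a correct prediction, Pr[M >= 0]
prob_correct = 1.0 - norm.cdf(0.0, loc=mu_M, scale=np.sqrt(var_M))

# Confidence-weighted style constraint Pr[M >= 0] >= eta for a chosen eta
eta = 0.9
print(prob_correct, prob_correct >= eta)
```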

The aforementioned online learning methods [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] hypothesize that the weights are subject to a multivariate Gaussian distribution and pre-define a confidence hyperparameter to formalize a constraint for optimization. Nevertheless, the weights are learned from the training data, so placing a hypothesis on the weights is akin to putting the cart before the horse. Moreover, such a confidence hyperparameter may not be flexible or adaptive for various datasets. In this work, we instead hypothesize that the features are subject to Gaussian distributions, and no confidence hyperparameter is needed. To update the weights, [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] apply the Lagrangian method to compute the optimal weights. This mechanism does not straightforwardly fit into the SGD scheme. Along the same line, this work is motivated to investigate how to incorporate the Gaussian assumption into SGD.

Softmax Function in ConvNet Learning. The success of ConvNets is largely attributed to the layer-stacking mechanism. Despite its effectiveness in complex real-world visual classification, this mechanism can result in co-adaptation and overfitting. To prevent the co-adaptation problem, Hinton et al. [Hinton_arXiv_2012] proposed a method that randomly omits a portion of neurons in a feedforward network. Then, Srivastava et al. [Srivastava_JMLR_2014] introduced the dropout unit to minimize overfitting and presented a comprehensive investigation of its effect in ConvNets. Similar regularization methods are also proposed in [Goodfellow_ICML_2013] and [Wan_ICML_2013]. Instead of modifying the connection between layers, [Zeiler_arXiv_2013] replaced deterministic pooling with stochastic pooling for regularizing ConvNets. The proposed $\mathcal{G}$-softmax function can be used together with these models to offer better generalization ability. We posit a general assumption and establish Gaussian distributions over the feature space at the final layer, i.e., the softmax module. In other words, the proposed $\mathcal{G}$-softmax function is general for most ConvNets and requires little modification of the network structure.

ConvNets [Huang_CVPR_2017, Zagoruyko_BMVC_2016, He_CVPR_2016, Lecun_IEEE_1998, Krizhevsky_NIPS_2012, Simonyan_ICML_2015, Szegedy_CVPR_2015, Yu_CVPR_2017] have strong representational ability in learning invariant features. Although their architectures differ, the softmax function is widely used along with the cross entropy loss at the training phase. Hence, the softmax module is important and general for ConvNets. Liu et al. [Liu_ICML_2016] introduced a large-margin softmax (L-softmax) function to enhance compactness and separability from a geometric perspective. The large-margin softmax function is fundamentally similar to the softmax function, i.e., both use the exponential function, though with different inputs to it. In contrast, we model the mappings between features and ground truth labels with the Gaussian CDF. Similar to the softmax function, we utilize normalization to identify the maximal element but not its exact value.

Multi-label Classification. Multi-label classification is a special case of multi-output learning tasks. Read et al. [Read_ECMLKDD_2009] proposed the classifier chain model to capture label correlations. In particular, label order is important for chain classification models. A dynamic programming based classifier chain algorithm [Liu_NIPS_2015] was proposed to find the globally optimal label order for the classifier chain models. Shen et al. [Shen_NNLS_2018] introduced a Co-Embedding and Co-Hashing method that explores the label correlations from the perspective of cross-view learning to improve prediction accuracy and efficiency. On the other hand, the classifier chain model does not take the order of difficulty of the labels into account. Therefore, the easy-to-hard learning paradigm [Liu_JMLR_2017b] was proposed to make good use of the predictions from simple labels to improve the predictions of hard labels. Liu et al. [Liu_JMLR_2017a] presented a comprehensive theoretical analysis of the curse of dimensionality in decision tree models and introduced a sparse coding tree framework for multi-label annotation problems. In multi-label prediction, a large margin metric learning paradigm [Liu_AAAI_2015] was introduced to reduce the complexity of the decoding procedure in canonical correlation analysis and maximum margin output coding methods. Liu et al. [Liu_PAMI_2018] introduced a large margin metric learning method to efficiently learn an appropriate distance metric for multi-output problems with theoretical guarantees.

Recently, there have been attempts to apply deep networks, especially ConvNets and Recurrent Neural Networks (RNNs), to multi-label classification, given their promising performance in various vision tasks. In [Wang_CVPR_2016], a ConvNet and an RNN are utilized together to explicitly exploit the label dependencies. In contrast to [Wang_CVPR_2016], [Zhang_arXiv_2016] proposed a regional latent semantic dependencies model to predict small-size objects and visual concepts by exploiting the label dependencies at the regional level. Similarly, [Durand_CVPR_2016] automatically selected relevant image regions from global image labels using weakly supervised learning. Zhao et al. [Zhao_BMVC_2016] reduced irrelevant and noisy regions with the help of a region gating module. These region proposal based methods usually suffer from redundant computation and sub-optimal performance. Wang et al. [Wang_2017_ICCV] addressed these problems by developing a recurrent memorized-attention module, which locates attentional regions from the ConvNet's feature maps. Instead of utilizing the label dependencies, [Li_CVPR_17] proposed a novel loss function for pairwise ranking, which is smooth everywhere and thus easy to optimize within ConvNets. Also, two works focus on improving the network architectures for multi-label classification [Zhu_CVPR_2017, Durand_CVPR_2017]. In this work, we adopt a common baseline, i.e., ResNet-101 [He_CVPR_2016], which is widely used in the state-of-the-art models [Zhu_CVPR_2017, Durand_CVPR_2017].

III Methodology

III-A $\mathcal{G}$-Softmax Function

The logistic (i.e., sigmoid) function and the hyperbolic tangent function are widely used in deep learning, and their graphs are “S-shaped” curves. These curves imply a graceful balance between linearity and non-linearity [Menon_NN_1996]. The Gaussian CDF has the same monotonicity as the logistic and hyperbolic tangent functions and shares a similar shape. This makes the Gaussian CDF a potential substitute with the capability to model the distribution pattern with class-dependent $\mu_i$ and $\sigma_i$. Fundamentally, the softmax function in mainstream deep learning models is the normalized exponential function, which is a generalization of the logistic function. In this work, the proposed $\mathcal{G}$-softmax function incorporates the Gaussian CDF into the exponential function.

Similar to the softmax loss, we use cross entropy as the loss function, i.e.,

$\mathcal{L} = -\sum_{i=1}^{n} y_i \log p_i$ (1)

where $\mathcal{L}$ is the loss, $y_i$ is the label with respect to the $i$-th category, $p_i$ is the prediction confidence with respect to the $i$-th category, and $n$ is the number of categories. Conventionally, given features $f_i$ with respect to the various labels, $p_i$ is given by the softmax function

$p_i = \frac{e^{f_i}}{\sum_{j=1}^{n} e^{f_j}}$ (2)

The softmax function can be considered to represent a categorical distribution. By normalizing the exponential function, the largest value is highlighted and the other values are suppressed significantly. As discussed in Section II, [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] hypothesized that the classification margin is subject to a Gaussian distribution. Slightly differently, we assume that the deep feature $f_i$ with respect to the $i$-th category is subject to a Gaussian distribution, i.e., $f_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$. In this work, we define the proposed $\mathcal{G}$-softmax function as

$p_i = \frac{e^{\,f_i + \lambda\,\Phi_i(f_i)}}{\sum_{j=1}^{n} e^{\,f_j + \lambda\,\Phi_j(f_j)}}$ (3)

where $\lambda$ is a parameter controlling the width of the CDF along the y-axis. We can see that if $\lambda = 0$, Equation (3) becomes the conventional softmax function. $\Phi_i$ is the CDF of a Gaussian distribution, that is

$\Phi_i(x) = \frac{1}{\sigma_i\sqrt{2\pi}} \int_{-\infty}^{x} \exp\!\Big(-\frac{(t-\mu_i)^2}{2\sigma_i^2}\Big)\, dt$ (4)

where $\mu_i$ and $\sigma_i$ are the mean and standard deviation, respectively. For simplicity, we denote $\Phi_i(f_i; \mu_i, \sigma_i)$ as $\Phi_i(f_i)$ in the following paragraphs.

Compared to the softmax function (2), the proposed $\mathcal{G}$-softmax function takes the feature distribution into account, i.e., the term $\Phi_i(f_i)$ in (3). This formulation leads to two advantages. First, it enables the model to approximate a large variety of distributions w.r.t. every class over the training samples, whereas the softmax function only learns from the currently observed sample. Second, with the distribution parameters $\mu_i$ and $\sigma_i$, it is straightforward to quantify intra-class compactness and inter-class separability. In other words, the proposed $\mathcal{G}$-softmax function is more analytical than the softmax function.
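For concreteness, a minimal NumPy sketch of the forward computation is given below. It follows our reconstruction of Equations (3)-(4), in which the exponent is $f_i + \lambda\,\Phi_i(f_i)$; the function name and the toy values are our own assumptions and not the authors' released implementation.

```python
import numpy as np
from scipy.stats import norm

def g_softmax(f, mu, sigma, lam=1.0):
    """Gaussian-based softmax over class scores f (sketch of the reconstructed Eq. (3)).

    f, mu, sigma are length-n arrays; mu and sigma parameterize the per-class
    Gaussian CDF Phi_i, and lam (lambda) scales the CDF term.
    """
    phi = norm.cdf(f, loc=mu, scale=sigma)    # Phi_i(f_i) for each class
    logits = f + lam * phi                    # lam = 0 recovers the plain softmax
    logits -= logits.max()                    # numerical stability
    e = np.exp(logits)
    return e / e.sum()

# toy example with three classes
f = np.array([2.0, 0.5, -1.0])
mu = np.zeros(3)
sigma = np.ones(3)
print(g_softmax(f, mu, sigma, lam=1.0))
print(g_softmax(f, mu, sigma, lam=0.0))       # equals the conventional softmax
```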

The proposed $\mathcal{G}$-softmax function can work with any ConvNet, such as VGG [Simonyan_ICML_2015] and ResNet [He_CVPR_2016]. In this work, $f$ is not taken from an arbitrary layer but from the final fully-connected layer. When $\lambda > 0$, the exponent is prone to shift towards the positive axis direction because $\Phi_i(f_i) \geq 0$. The curve of $\Phi_i$ has a similar shape to that of the logistic and hyperbolic tangent functions, and can accurately capture the distribution of $f_i$. As discussed in Section II, the online learning methods [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] considered the features as following a Gaussian distribution and used the Kullback-Leibler divergence (KLD) between the estimated distribution and the optimal distribution. Since their formulations involve the unknown optimal Gaussian distribution, they had to apply the Lagrangian method to optimize and approximate the mean and covariance. This may not fit the back-propagation in modern ConvNets, which commonly use SGD as the solver.

To optimize $\mu_i$, we compute the partial derivative of Equation (1) using the chain rule,

$\frac{\partial \mathcal{L}}{\partial \mu_i} = \sum_{j=1}^{n} \frac{\partial \mathcal{L}}{\partial p_j}\frac{\partial p_j}{\partial \mu_i} = \lambda\Big(p_i\sum_{j=1}^{n} y_j - y_i\Big)\frac{\partial \Phi_i(f_i)}{\partial \mu_i}$ (5)

Usually, $\sum_{j=1}^{n} y_j$ equals 1 due to the normalization of the labels. Similarly, we can obtain the partial derivative with respect to $\sigma_i$,

$\frac{\partial \mathcal{L}}{\partial \sigma_i} = \lambda\Big(p_i\sum_{j=1}^{n} y_j - y_i\Big)\frac{\partial \Phi_i(f_i)}{\partial \sigma_i}$ (6)

According to the CDF, i.e., Equation (4), the derivatives of $\Phi_i(f_i)$ with respect to $\mu_i$ and $\sigma_i$ are

$\frac{\partial \Phi_i(f_i)}{\partial \mu_i} = -\frac{1}{\sigma_i\sqrt{2\pi}} \exp\!\Big(-\frac{(f_i-\mu_i)^2}{2\sigma_i^2}\Big)$ (7)
$\frac{\partial \Phi_i(f_i)}{\partial \sigma_i} = -\frac{f_i-\mu_i}{\sigma_i^2\sqrt{2\pi}} \exp\!\Big(-\frac{(f_i-\mu_i)^2}{2\sigma_i^2}\Big)$ (8)

Plugging (7) and (8) into (5) and (6), the partial derivatives with respect to $\mu_i$ and $\sigma_i$ become

$\frac{\partial \mathcal{L}}{\partial \mu_i} = -\frac{\lambda\big(p_i\sum_{j} y_j - y_i\big)}{\sigma_i\sqrt{2\pi}} \exp\!\Big(-\frac{(f_i-\mu_i)^2}{2\sigma_i^2}\Big)$ (9)
$\frac{\partial \mathcal{L}}{\partial \sigma_i} = -\frac{\lambda\big(p_i\sum_{j} y_j - y_i\big)(f_i-\mu_i)}{\sigma_i^2\sqrt{2\pi}} \exp\!\Big(-\frac{(f_i-\mu_i)^2}{2\sigma_i^2}\Big)$ (10)

In the back-propagation of ConvNets, the chain rule requires the derivatives of upper layers to compute the weight derivatives of lower layers. Therefore, $\frac{\partial \mathcal{L}}{\partial f_i}$ is needed to pass backwards to the lower layers. Because $\frac{\partial \mathcal{L}}{\partial f_i}$ has the same form as in Equation (5) and we know

$\frac{\partial \Phi_i(f_i)}{\partial f_i} = \frac{1}{\sigma_i\sqrt{2\pi}} \exp\!\Big(-\frac{(f_i-\mu_i)^2}{2\sigma_i^2}\Big)$ (11)

Then, $\frac{\partial \mathcal{L}}{\partial f_i}$ is obtained as

$\frac{\partial \mathcal{L}}{\partial f_i} = \Big(p_i\sum_{j=1}^{n} y_j - y_i\Big)\Big(1 + \frac{\lambda}{\sigma_i\sqrt{2\pi}} \exp\!\Big(-\frac{(f_i-\mu_i)^2}{2\sigma_i^2}\Big)\Big)$ (12)
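In practice, the derivatives above need not be coded by hand: automatic differentiation yields them once $\mu_i$ and $\sigma_i$ are registered as learnable parameters. The PyTorch sketch below is our own illustration of that idea following the reconstructed Equation (3); it is not the authors' released implementation, and the use of a softplus to keep the standard deviation positive is an assumption.

```python
import torch
import torch.nn as nn

class GSoftmaxLoss(nn.Module):
    """Cross entropy with a Gaussian-CDF-augmented softmax (illustrative sketch)."""

    def __init__(self, num_classes, lam=1.0):
        super().__init__()
        self.lam = lam
        self.mu = nn.Parameter(torch.zeros(num_classes))
        # unconstrained parameter; softplus keeps the std. deviation positive
        self.rho = nn.Parameter(torch.zeros(num_classes))

    def forward(self, f, target):
        sigma = nn.functional.softplus(self.rho) + 1e-6
        normal = torch.distributions.Normal(self.mu, sigma)
        phi = normal.cdf(f)                 # Phi_i(f_i), broadcast over the batch
        logits = f + self.lam * phi         # lam = 0 recovers the softmax loss
        return nn.functional.cross_entropy(logits, target)

# usage: gradients w.r.t. mu, rho, and the features flow via autograd
criterion = GSoftmaxLoss(num_classes=10)
f = torch.randn(4, 10, requires_grad=True)  # features from a ConvNet
target = torch.randint(0, 10, (4,))
loss = criterion(f, target)
loss.backward()
```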

III-B $\mathcal{G}$-Softmax in Multi-label Classification

The aforementioned sections are based on single-label classification problems. Here, we apply the proposed $\mathcal{G}$-softmax function to the multi-label classification problem. In single-label classification, the softmax loss and the $\mathcal{G}$-softmax variant are defined as

$\mathcal{L} = -\sum_{i=1}^{n} y_i \log \frac{e^{f_i}}{\sum_{j=1}^{n} e^{f_j}}, \qquad \mathcal{L}_{\mathcal{G}} = -\sum_{i=1}^{n} y_i \log \frac{e^{\,f_i + \lambda\,\Phi_i(f_i)}}{\sum_{j=1}^{n} e^{\,f_j + \lambda\,\Phi_j(f_j)}}$ (13)

For multi-label classification, the multi-label soft margin loss (MSML) is widely used [Durand_CVPR_2017, Zhu_CVPR_2017], as defined by Equation (14).

$\mathcal{L} = -\frac{1}{n}\sum_{i=1}^{n} \Big[ y_i \log \frac{1}{1 + e^{-f_i}} + (1 - y_i) \log \frac{e^{-f_i}}{1 + e^{-f_i}} \Big]$ (14)

In contrast with MSML, there is a variant that takes $f_i^{+}$ and $f_i^{-}$ as inputs, instead of only $f_i$ as in MSML. $f_i^{+}$ is the positive feature, which is used to compute the probability that the input image is classified to the $i$-th category, while $f_i^{-}$ is the negative feature, which is used to compute the probability that the input image is classified to the non-$i$-th category. This variant has been used in multi-label classification [Li_MTA_2018] and is defined by Equation (15).

$\mathcal{L} = -\frac{1}{n}\sum_{i=1}^{n} \Big[ y_i \log \frac{e^{f_i^{+}}}{e^{f_i^{+}} + e^{f_i^{-}}} + (1 - y_i) \log \frac{e^{f_i^{-}}}{e^{f_i^{+}} + e^{f_i^{-}}} \Big]$ (15)

The terms $\frac{1}{1+e^{-f_i}}$ and $\frac{e^{-f_i}}{1+e^{-f_i}}$ in MSML (14) are both determined by the single feature $f_i$. To make the learning process consistent with the loss function used in single-label classification, we use the variant, i.e., Equation (15), for multi-label classification in this work and denote it as the softmax loss function for consistency. Correspondingly, the $\mathcal{G}$-softmax loss function is defined as

$\mathcal{L}_{\mathcal{G}} = -\frac{1}{n}\sum_{i=1}^{n} \Big[ y_i \log \frac{e^{\,f_i^{+} + \lambda\,\Phi_i^{+}(f_i^{+})}}{e^{\,f_i^{+} + \lambda\,\Phi_i^{+}(f_i^{+})} + e^{\,f_i^{-} + \lambda\,\Phi_i^{-}(f_i^{-})}} + (1 - y_i) \log \frac{e^{\,f_i^{-} + \lambda\,\Phi_i^{-}(f_i^{-})}}{e^{\,f_i^{+} + \lambda\,\Phi_i^{+}(f_i^{+})} + e^{\,f_i^{-} + \lambda\,\Phi_i^{-}(f_i^{-})}} \Big]$ (16)

In this way, we can model the distributions of $f_i^{+}$ and $f_i^{-}$ by $\mathcal{N}(\mu_i^{+}, (\sigma_i^{+})^2)$ and $\mathcal{N}(\mu_i^{-}, (\sigma_i^{-})^2)$, respectively.
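A minimal sketch of the multi-label variant, as reconstructed in Equation (15), treats each label as a two-way softmax over a positive and a negative score; the pairing of $(f_i^{+}, f_i^{-})$ into per-label logits and all names below are our assumptions. The $\mathcal{G}$-softmax version (16) would simply add the $\lambda\Phi$ terms to the two logits before the softmax.

```python
import torch
import torch.nn.functional as F

def multilabel_softmax_loss(f_pos, f_neg, y):
    """Per-label two-way softmax loss (sketch of the reconstructed Eq. (15)); shapes: (B, n)."""
    # stack negative/positive scores so each label has its own 2-way softmax
    logits = torch.stack([f_neg, f_pos], dim=-1)           # (B, n, 2)
    log_p = F.log_softmax(logits, dim=-1)
    # index 1 picks log p(positive) where y = 1, index 0 picks log p(negative) where y = 0
    picked = torch.gather(log_p, -1, y.long().unsqueeze(-1)).squeeze(-1)
    return -picked.mean()

# toy usage: batch of 2 images, 5 labels
f_pos, f_neg = torch.randn(2, 5), torch.randn(2, 5)
y = torch.randint(0, 2, (2, 5)).float()
print(multilabel_softmax_loss(f_pos, f_neg, y))
```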

We can see that the proposed $\mathcal{G}$-softmax and softmax functions are both straightforward to extend to multi-label classification. In contrast, the L-softmax function may not be easy to adapt to multi-label classification. This is because the L-softmax function needs to be aware of the feature related to the ground truth label so that it can impose a margin constraint on that feature, i.e., $\lVert \mathbf{W}_{y} \rVert \lVert f \rVert \cos(m\theta_{y})$, where $m$ is an integer representing the margin, $y$ indicates that the $y$-th label is the ground truth label of $f$, $\mathbf{W}_{y}$ is the $y$-th column of the weight matrix $\mathbf{W}$, and $\theta_{y}$ is the angle between $\mathbf{W}_{y}$ and $f$. When $m = 1$, the exponential term is the same as in the softmax function. However, when $m > 1$, $\cos(m\theta_{y})$ is used to guarantee the margin between the ground truth class and the other classes. As a consequence, it is hard to use the L-softmax function in MSML because it would treat the terms in Equation (14) differently.

III-C Malleable Learning Rates

The training of a model usually requires a series of predefined learning rates. The learning rate is a real value and a function of the current epoch with given starting and final values. There are several popular types of learning rate schedules, e.g., linspace, logspace, and staircase. Usually, the number of epochs used with these types of learning rates is not more than 300. Although Huang et al. [Huang_ICLR_2017] use many more epochs with annealing learning rates, their learning rate is designed as a function of the iteration number instead of the epoch number. Therefore, it may not generalize to distributed or parallel processing because the iterations are not processed sequentially. We would like to test the proposed $\mathcal{G}$-softmax function under an extreme condition, i.e., many more epochs, to investigate its stability. In the following, we first describe the three types of learning rates and then show how they relate to the proposed malleable learning rate. The proposed malleable learning rates can control the curvature of the scheduled learning rates to boost convergence of the learning process.

The linspace learning rates are generated with a simple linear function of the current epoch, interpolating between a given starting value and a given final value over the maximum number of epochs, where the starting value is the initial learning rate. Because of linearity, the changes of the learning rates are constant through all epochs. As the learning rates become smaller when the epoch number increases, it is expected that the training process converges stably. Logspace learning rates meet this requirement by interpolating with a log function instead.

The logspace learning rate has a gradual descent trace that rapidly becomes stable. On the other hand, the staircase learning rate remains constant for a large number of epochs. As the learning rate is not frequently adjusted, the model learning process may not converge. These problems undermine the sustainable convergence ability of deep learning models. Therefore, we integrate the advantages of these learning rates and propose a malleable learning rate, that is,

(17)

where $T_k$ is the end epoch of the $k$-th piece of learning rates. As shown in Equation (17), the proposed learning rate separates the schedule into piecewise learning rates (i.e., staircase learning rates), yet it is able to control the shape of each piece (e.g., its curvature or degree of bend) through its configurable parameters.
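Since the exact parameterization of Equation (17) is not legible in this copy, the sketch below only illustrates the general idea of a piecewise schedule whose per-piece shape can be bent between a straight (linspace-like) decay and a more strongly curved one; the exponent parameter gamma and all names are our assumptions, not the authors' formula.

```python
import numpy as np

def malleable_lr(epoch, boundaries, values, gamma=2.0):
    """Piecewise schedule with controllable per-piece curvature (illustrative only).

    boundaries: end epochs of each piece, e.g. [150, 225, 300]
    values:     (start, end) learning rate of each piece
    gamma:      curvature of each piece; gamma = 1 gives straight (linspace-like) pieces.
    """
    start = 0
    for end, (lr_s, lr_e) in zip(boundaries, values):
        if epoch < end:
            t = (epoch - start) / max(end - start, 1)   # progress within the piece
            return lr_s + (lr_e - lr_s) * t ** gamma    # curved interpolation
        start = end
    return values[-1][1]

# example: three staircase-like pieces whose interiors decay smoothly
boundaries = [150, 225, 300]
values = [(0.1, 0.01), (0.01, 0.001), (0.001, 0.0001)]
print([round(malleable_lr(e, boundaries, values), 5) for e in (0, 100, 200, 290)])
```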

For the experiments using models pre-trained on the ImageNet dataset [Russakovsky_IJCV_2015], the initialization contains well learned knowledge for Tiny ImageNet, MS COCO, and NUS-WIDE, which are similar to ImageNet in terms of visual content and concept labels. Hence, the training process on these datasets does not need a large number of epochs [Zhu_CVPR_2017, Durand_CVPR_2017]. In this work, we instead apply malleable learning rates on CIFAR to train the models from scratch.

III-D Compactness & Separability

As commonly studied in machine learning [Liu_ICML_2016, Yang_AAAI_2006, Zhang_ICML_2007], intra-class compactness and inter-class separability are important characteristics that can reveal some intuition about the learning ability and efficacy of a model. Due to the underlying Gaussian nature of the proposed $\mathcal{G}$-softmax function, the intra-class compactness for a given class is characterized by the respective standard deviation $\sigma_i$, where a smaller $\sigma_i$ indicates that the learned model is more compact. Mathematically, the compactness of a given class can be represented by $1/\sigma_i$.

The inter-class separability can be measured by computing the disparity of two models, i.e., the divergence between two Gaussian distributions. In the probability and information theory literature, KLD is commonly used to measure the difference between two probability distributions. In the following, we denote the learned Gaussian distribution of the $i$-th class as $\mathcal{N}_i = \mathcal{N}(\mu_i, \sigma_i^2)$. Specifically, given two learned Gaussian distributions $\mathcal{N}_i$ and $\mathcal{N}_j$, the divergence between the two distributions is

$D_{KL}(\mathcal{N}_i \,\|\, \mathcal{N}_j) = \int p_i(x) \log \frac{p_i(x)}{p_j(x)}\, dx = \log \frac{\sigma_j}{\sigma_i} + \frac{\sigma_i^2 + (\mu_i - \mu_j)^2}{2\sigma_j^2} - \frac{1}{2}$ (18)

where $p_i$ and $p_j$ are the probability density functions of the respective classes. KLD is always non-negative. As proven by Gibbs' inequality, KLD is zero if and only if the two distributions are equivalent almost everywhere. To quantify the divergence between the distribution of the $i$-th category and the distributions of the rest of the categories, we use the mean of KLDs,

$S_i = \frac{1}{n-1} \sum_{j \neq i} \frac{1}{2} \big( D_{KL}(\mathcal{N}_i \,\|\, \mathcal{N}_j) + D_{KL}(\mathcal{N}_j \,\|\, \mathcal{N}_i) \big)$ (19)

Because KLD is asymmetric, we compute the mean of $D_{KL}(\mathcal{N}_i \,\|\, \mathcal{N}_j)$ and $D_{KL}(\mathcal{N}_j \,\|\, \mathcal{N}_i)$ for a fair measurement.

Since compactness indicates the intra-class correlations and separability indicates the inter-class correlations, we multiply (i.e., the $\times$ operator) intra-class compactness with inter-class separability to quantify overall how discriminative the features with the same label are. Hence, we define the separability-$\sigma$ ratio $r_i$ with respect to the $i$-th class as follows

$r_i = S_i \times \frac{1}{\sigma_i} = \frac{S_i}{\sigma_i}$ (20)

Since $\sigma_i$ of a distribution is inversely proportional to compactness, $r_i$ is also inversely proportional to $\sigma_i$. Ideally, we hope a model's $r_i$ is as large as possible, which requires the separability $S_i$ to be as large as possible and $\sigma_i$ to be as small as possible at the same time.
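These quantities are straightforward to compute from the learned $(\mu_i, \sigma_i)$. The sketch below follows the closed-form Gaussian KLD and the symmetrized mean in Equations (18)-(19) as reconstructed here, together with the ratio $r_i = S_i / \sigma_i$; the variable names and toy values are our own.

```python
import numpy as np

def kld_gaussian(mu_i, sigma_i, mu_j, sigma_j):
    """Closed-form KL divergence KL(N_i || N_j) between two univariate Gaussians."""
    return (np.log(sigma_j / sigma_i)
            + (sigma_i ** 2 + (mu_i - mu_j) ** 2) / (2 * sigma_j ** 2) - 0.5)

def separability_and_ratio(mu, sigma):
    """Mean symmetrized KLD S_i against all other classes and the ratio r_i = S_i / sigma_i."""
    n = len(mu)
    S = np.zeros(n)
    for i in range(n):
        others = [0.5 * (kld_gaussian(mu[i], sigma[i], mu[j], sigma[j])
                         + kld_gaussian(mu[j], sigma[j], mu[i], sigma[i]))
                  for j in range(n) if j != i]
        S[i] = np.mean(others)
    return S, S / sigma

mu = np.array([0.0, 2.0, -1.5])
sigma = np.array([0.5, 0.8, 0.6])
S, r = separability_and_ratio(mu, sigma)
print(S, r)
```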

IV Empirical Evaluation

In this section, we provide a comprehensive comparison between the softmax function and the proposed $\mathcal{G}$-softmax function for single-label classification and multi-label classification. Specifically, we evaluate three baseline ConvNets (i.e., VGG, DenseNet, and wide ResNet) on the CIFAR-10 and CIFAR-100 datasets for single-label classification. For multi-label classification, we conduct the experiments with ResNet on the MS COCO and NUS-WIDE datasets.

Datasets & Evaluation Metrics. To evaluate the proposed $\mathcal{G}$-softmax function for single-label classification, we use the CIFAR-10 [Krizhevsky_Citeseer_2009] and CIFAR-100 datasets, which are widely used in the machine learning literature [Lin_ICLR_2014, Lee_AISTATS_2015, Springenberg_ICLR_2015, Liu_ICML_2016, Clevert_ICLR_2016, Lee_AISTATS_2016, Zagoruyko_BMVC_2016, Huang_CVPR_2017]. CIFAR-10 consists of 60,000 color images with 32×32 pixels in 10 classes. Each class has 6,000 images, including 5,000 training images and 1,000 test images. CIFAR-100 has 100 classes and the same image resolution as CIFAR-10. It has 600 images per class, including 500 training images and 100 test images. Moreover, we also use Tiny ImageNet in this work. It is a variant of ImageNet, which has 200 classes, and each class has 500 training images and 50 validation images.

For the multi-label classification task, we adopt two widely used datasets, i.e., MS COCO [Lin_ECCV_2014] and NUS-WIDE [Chua_CIVR_2009]. The MS COCO dataset is primarily designed for object detection in context, and it is also widely used for multi-label recognition. It comprises a training set of 82,081 images and a validation set of 40,137 images. The dataset covers 80 common object categories, with about 3.5 object labels per image. In this work, we follow the original split for training and testing. Following [Li_CVPR_17, Zhu_CVPR_2017, Wang_2017_ICCV, Durand_CVPR_2017], we only use the image labels for training and evaluation. NUS-WIDE consists of 269,648 images with 81 concept labels. We use the official train/test split, i.e., 161,789 images for training and 107,859 images for evaluation.

We use the same evaluation metrics as [Zhu_CVPR_2017, Wang_2017_ICCV], namely mean average precision (mAP), per-class precision, recall, F1 score (denoted as C-P, C-R, C-F1), and overall precision, recall, F1 score (denoted as O-P, O-R, O-F1). More concretely, average precision is defined as follows

$AP_k = \frac{\sum_{j=1}^{N_k} \mathrm{Precision}(j, k)\, \mathrm{rel}(j)}{\sum_{j=1}^{N_k} \mathrm{rel}(j)}$ (21)

where $\mathrm{rel}(j)$ is a relevance function that returns 1 if the item at rank $j$ is relevant to the $k$-th class and returns 0 otherwise. To compute mAP, we collect the predicted probabilities for each class over all the images. The corresponding predicted $k$-th labels over all images are sorted in descending order. The average precision of the $k$-th class is the average of the precisions at the correctly predicted $k$-th labels. $\mathrm{Precision}(j, k)$ is the precision at rank $j$ over all predicted $k$-th labels. $N_k$ denotes the number of predicted $k$-th labels. Finally, the mAP is obtained by averaging AP over all classes. The other metrics are defined as follows

$\mathrm{O\text{-}P} = \frac{\sum_k N_k^{c}}{\sum_k N_k^{p}}, \quad \mathrm{O\text{-}R} = \frac{\sum_k N_k^{c}}{\sum_k N_k^{g}}, \quad \mathrm{O\text{-}F1} = \frac{2 \cdot \mathrm{O\text{-}P} \cdot \mathrm{O\text{-}R}}{\mathrm{O\text{-}P} + \mathrm{O\text{-}R}}, \quad \mathrm{C\text{-}P} = \frac{1}{n} \sum_k \frac{N_k^{c}}{N_k^{p}}, \quad \mathrm{C\text{-}R} = \frac{1}{n} \sum_k \frac{N_k^{c}}{N_k^{g}}, \quad \mathrm{C\text{-}F1} = \frac{2 \cdot \mathrm{C\text{-}P} \cdot \mathrm{C\text{-}R}}{\mathrm{C\text{-}P} + \mathrm{C\text{-}R}}$ (22)

where $N_k^{c}$ is the number of images correctly predicted for the $k$-th class, $N_k^{p}$ is the number of predicted images for the $k$-th label, and $N_k^{g}$ is the number of ground-truth images for the $k$-th label. For C-P, C-R, and C-F1, $n$ is the number of labels.
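For concreteness, the sketch below computes per-class average precision and the overall/per-class precision, recall, and F1 scores. It follows the reconstructed Equations (21)-(22) and common practice rather than the authors' evaluation script; in particular, normalizing AP by the number of relevant items is an assumption.

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one class: scores and binary labels over all images (sketch of Eq. (21))."""
    order = np.argsort(-scores)                 # sort predictions in descending order
    labels = labels[order]
    hits = np.cumsum(labels)
    precisions = hits / (np.arange(len(labels)) + 1)
    return (precisions * labels).sum() / max(labels.sum(), 1)

def overall_and_per_class_f1(pred, gt):
    """O-P/O-R/O-F1 and C-P/C-R/C-F1 from binary prediction/ground-truth matrices (images x labels)."""
    correct = (pred * gt).sum(axis=0)           # N^c_k: correct predictions per label
    n_pred = pred.sum(axis=0)                   # N^p_k: predictions per label
    n_gt = gt.sum(axis=0)                       # N^g_k: ground-truth positives per label
    OP, OR = correct.sum() / n_pred.sum(), correct.sum() / n_gt.sum()
    CP = np.mean(correct / np.maximum(n_pred, 1))
    CR = np.mean(correct / np.maximum(n_gt, 1))
    f1 = lambda p, r: 2 * p * r / max(p + r, 1e-12)
    return (OP, OR, f1(OP, OR)), (CP, CR, f1(CP, CR))

scores = np.array([0.9, 0.2, 0.75, 0.4])
labels = np.array([1, 0, 1, 1])
print(average_precision(scores, labels))

pred = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]], dtype=float)
gt = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 1, 1]], dtype=float)
print(overall_and_per_class_f1(pred, gt))
```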

Baselines & Experiment Configurations. For the single-label classification task, we adopt softmax and L-softmax [Liu_ICML_2016] as baseline methods for comparison purposes. For multi-label classification, due to the limitations of L-softmax discussed in Subsection III-B, we only use softmax as the baseline method.

There are a number of ConvNets, such as AlexNet [Krizhevsky_NIPS_2012], GoogLeNet [Szegedy_CVPR_2015], VGG [Simonyan_ICML_2015], ResNet [He_CVPR_2016], wide ResNet [Zagoruyko_BMVC_2016], and DenseNet [Huang_CVPR_2017]. For the experiments on CIFAR-10 and CIFAR-100, we adopt the state-of-the-art wide ResNet and DenseNet as baseline models. Also, considering that the network structures of wide ResNet and DenseNet are quite different from conventional networks such as AlexNet and VGG, VGG is taken into account as well. Specifically, we use VGG-16 (the 16-layer model), wide ResNet with 40 convolutional layers and a widening factor of 14, and DenseNet with 100 convolutional layers and a growth rate of 24 in this work. Our experiments focus on comparing the conventional softmax function with the proposed $\mathcal{G}$-softmax function; the softmax and L-softmax functions are considered the baseline functions in this work. For fair comparisons, the experiments are strictly conducted under the same conditions. For all comparisons, we only replace the softmax function in the final layer with the proposed $\mathcal{G}$-softmax function and preserve the other parts of the network. In the training stage, we keep most training hyperparameters, e.g., weight decay, momentum and so on, the same as AlexNet [Alex_NIPS_2012]. Both the baseline and the proposed $\mathcal{G}$-softmax function are trained from scratch under the same conditions. In the wide ResNet experiments, the batch size for CIFAR-10 and CIFAR-100 is 128 in both cases, which is the number used in its original work [Zagoruyko_BMVC_2016]. In the DenseNet experiments, since its graphics memory usage is considerably higher than that of wide ResNet, we use a batch size of 50, which leads to full graphics memory usage on three GPUs. The hardware used in this work is an Intel Xeon E5-2660 CPU and GeForce GTX 1080 Ti GPUs. All models are implemented with Torch [Collobert_NIPS_2011].

We follow the original experimental settings of the baseline models for the training and evaluation of the softmax function and the $\mathcal{G}$-softmax function. For example, in DenseNet [Huang_CVPR_2017], Huang et al. train their model for 300 epochs with staircase learning rates: from the 1st epoch to the 149th epoch, the learning rate is set to 0.1; from the 150th epoch to the 224th epoch, it is 0.01; and for the remaining epochs it is 0.001. The wide ResNet model is trained for 200 epochs [Zagoruyko_BMVC_2016]. The learning rate is initialized to 0.1, and at the 60th, 120th, and 160th epochs, it decreases to 0.02, 0.004, and 0.0008, respectively. To make it comparable to DenseNet, we extend the training from 200 to 300 epochs and further decrease the learning rate at the 220th and 260th epochs. To avoid ad hoc tuning of hyperparameter settings, we set the weight decay and momentum to be the same as the default hyperparameters in the baselines [Zagoruyko_BMVC_2016, Huang_CVPR_2017] for both the softmax function and the proposed $\mathcal{G}$-softmax function.

For the experiments on Tiny ImageNet, we adopt wide ResNet [Zagoruyko_BMVC_2016] with 40 convolutional layers and a widening factor of 14 as the baseline model. The initial learning rate is 0.001 and the weight decay is 1e-4. The training process consists of 30 epochs with a batch size of 80, and the learning rate is decreased to one tenth of its value every 10 epochs. Following [Huang_ICLR_2017, Yamada_ICLR_2018], we use the ImageNet pre-trained weights as initialization, and the input image is resized to 224×224 before being fed to the wide ResNet.

TABLE I: Top-1 error rate (%) of the proposed $\mathcal{G}$-softmax function and other baselines on CIFAR-10 and CIFAR-100. * indicates that malleable learning rates with 1100 epochs are used in the training process (refer to the experimental configurations in Section IV for more details).