G-softmax: Improving Intraclass Compactness and Interclass Separability of Features
Abstract
Intraclass compactness and interclass separability are crucial indicators to measure the effectiveness of a model to produce discriminative features, where intraclass compactness indicates how close the features with the same label are to each other and interclass separability indicates how far away the features with different labels are. In this work, we investigate intraclass compactness and interclass separability of features learned by convolutional networks and propose a Gaussian-based softmax (G-softmax) function that can effectively improve intraclass compactness and interclass separability. The proposed function is simple to implement and can easily replace the softmax function. We evaluate the proposed G-softmax function on classification datasets (i.e., CIFAR-10, CIFAR-100, and Tiny ImageNet) and on multi-label classification datasets (i.e., MS COCO and NUS-WIDE). The experimental results show that the proposed G-softmax function improves the state-of-the-art models across all evaluated datasets. In addition, the analysis of intraclass compactness and interclass separability demonstrates the advantages of the proposed function over the softmax function, which is consistent with the performance improvement. More importantly, we observe that high intraclass compactness and interclass separability are linearly correlated with average precision on MS COCO and NUS-WIDE. This implies that improvement of intraclass compactness and interclass separability would lead to improvement of average precision.
I Introduction
Machine learning is an important and fundamental component in visual understanding tasks. The core idea of supervised learning is to learn a model that explores the causal relationship between the dependent variables and the predictor variables. To quantify this relationship, the conventional approach is to make a hypothesis on the model, and feed the observed pairs of dependent and predictor variables to the model for predicting future cases. For most learning problems, it is infeasible to make a perfect hypothesis that matches the underlying pattern, whereas a badly designed hypothesis often leads to a model that is more complicated than necessary and violates the principle of parsimony. Therefore, when designing or evaluating a model, the core objective is to seek a balance between two conflicting goals: how complicated a model should be to achieve accurate predictions, and how to design a model that is as simple as possible, but not simpler.
In the past decade, deep learning methods have significantly accelerated the development of machine learning research, where the Convolutional Network (ConvNet) has achieved superior performance in numerous real-world visual understanding tasks [Girshick_CVPR_2014, Krizhevsky_NIPS_2012, Noh_ICCV_2015, Fu_PAMI_2017, Christian_RAS_2016, Hong_CVPR_2017, Zhou_ICCV_2017, Xu_CVPR_2017, Change_PAMI_2017, Hou_TNNLS_2015, Yuan_TNNLS_2015, Shao_TNNLS_2014]. Although their architectures vary from each other, the softmax function is widely used along with the cross entropy loss at the training phase [He_CVPR_2016, Krizhevsky_NIPS_2012, Lecun_IEEE_1998, Simonyan_ICML_2015, Szegedy_CVPR_2015]. However, the softmax function may not take the distribution pattern of previously observed samples into account to boost classification accuracy. In this work, we design a statistically driven extension of the softmax function that fits into the Stochastic Gradient Descent (SGD) scheme for end-to-end learning. Furthermore, the softmax function, as the final layer, directly connects to the predictions and can maximally preserve generality for various ConvNets, i.e., it avoids complex modification of existing network architectures.
Features are key for prediction in ConvNet learning. According to the central limit theorem [Kallenberg_book_1997], the arithmetic mean of a sufficiently large number of i.i.d. random variables, each with a finite expected value and variance, is approximately normally distributed even if the original variables are not normally distributed. This makes the Gaussian distribution generally valid in a great variety of contexts. Following this line of thought, online learning methods [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] assumed that the weights follow a Gaussian distribution and made use of its distribution pattern for classification. Given large-scale training data [Russakovsky_IJCV_2015], the underlying distributions of discriminative features generated by ConvNets can be modeled. This distribution pattern has not been fully explored in the existing literature.
Intraclass compactness and interclass separability of features are generally correlated with the quality of the learned features. If intraclass compactness and interclass separability are simultaneously maximized, the learned features are more discriminative [Liu_ICML_2016]. We introduce a variant of the softmax function, named the Gaussian-based softmax (G-softmax) function, which aims to improve intraclass compactness and interclass separability as shown in Figure 1. It builds on the typical assumption that features are distributed according to Gaussian distributions: the Gaussian cumulative distribution function (CDF) is used in prediction, and normalization generates the final confidence in a soft form.
Figure 2 demonstrates the role and position of the proposed G-softmax function in a supervised learning framework. Given the training samples, the feature extractor extracts the features and then passes them to the predictor for inference. In this work, we follow the mainstream deep learning framework where the feature extractor is modeled with a ConvNet. The proposed G-softmax function can directly replace the softmax function. The contributions can be summarized as:

With the general assumption, i.e., features w.r.t. a class are subject to a Gaussian distribution, we propose the G-softmax function, which models the distributions of features for better prediction. The experiments on CIFAR-10, CIFAR-100 [Krizhevsky_Citeseer_2009] and Tiny ImageNet (https://tiny-imagenet.herokuapp.com/) show that the proposed G-softmax function consistently outperforms the softmax and L-softmax functions on various state-of-the-art models. Also, we apply the proposed G-softmax function to solve the multi-label classification problem, which yields better performance than the softmax function on MS COCO [Lin_ECCV_2014] and NUS-WIDE [Chua_CIVR_2009]. The source code is available (https://gitlab.com/luoyan/gsoftmax) and is easy to use.

The proposed G-softmax function can quantify the compactness and separability. Specifically, for each learned Gaussian distribution, the corresponding mean and variance indicate the center and compactness of the predictor.

In our analysis of the correlation between intraclass compactness (or interclass separability) and average precision, we observe that high intraclass compactness and interclass separability are linearly correlated with average precision on MS COCO and NUS-WIDE. This implies that improvement of intraclass compactness and interclass separability would lead to improvement of average precision.
II Related Works
Gaussian-based Online Learning. We first review the Gaussian-based online learning methods. In the online learning context, the training data are provided in a sequential order to learn a predictor for unobserved data. These methods usually make some assumptions to minimize the cumulative disparity errors between the ground truth and the predictions over the entire sequence of instances [Crammer_JMLR_2006, Crammer_NIPS_2008, Dredze_ICML_2008, Rosenblatt_Psychological_1958, Wang_ICML_2012]. In this sense, these works can give some guidance and inspiration for designing a flexible mapping function.
Building on the Passive-Aggressive model [Crammer_JMLR_2006], Dredze et al. [Dredze_ICML_2008] made an explicit assumption on the weights: $\mathbf{w} \sim \mathcal{N}(\boldsymbol{\mu}, \Sigma)$, where $\boldsymbol{\mu}$ is the mean of the weights and $\Sigma$ is a covariance matrix for the underlying Gaussian distribution. Given an input instance $\mathbf{x}$ with the corresponding label $y \in \{-1, +1\}$, the multivariate Gaussian distribution over weight vectors induces a univariate Gaussian distribution over the margin: $y(\mathbf{w} \cdot \mathbf{x}) \sim \mathcal{N}(y(\boldsymbol{\mu} \cdot \mathbf{x}),\ \mathbf{x}^{\top}\Sigma\mathbf{x})$, where $\cdot$ is the inner product operation. Hence, the probability of a correct prediction is $\Pr[y(\mathbf{w} \cdot \mathbf{x}) \ge 0]$. The objective is to minimize the Kullback-Leibler divergence between the current distribution and the ideal distribution under the constraint that the probability of a correct prediction is not smaller than the confidence hyperparameter $\eta$, i.e., $\Pr[y(\mathbf{w} \cdot \mathbf{x}) \ge 0] \ge \eta$. With the mean of the margin $\mu_M = y(\boldsymbol{\mu} \cdot \mathbf{x})$ and the variance $\sigma_M^2 = \mathbf{x}^{\top}\Sigma\mathbf{x}$, the constraint can lead to $\mu_M \ge \phi^{-1}(\eta)\sqrt{\mathbf{x}^{\top}\Sigma\mathbf{x}}$, where $\phi$ is the cumulative function of the standard Gaussian distribution. This inequality is used as a constraint in optimization in practice. However, it is not convex with respect to $\Sigma$, and Dredze et al. [Dredze_ICML_2008] linearized it by omitting the square root: $\mu_M \ge \phi^{-1}(\eta)\,\mathbf{x}^{\top}\Sigma\mathbf{x}$. To solve this non-convex problem, Crammer et al. [Crammer_NIPS_2008] discovered that a change of variable helps to maintain the convexity, i.e., when $\Sigma = \Upsilon^2$, the constraint becomes convex in $\Upsilon$. The confidence weighted method [Crammer_NIPS_2008] employs an aggressive updating strategy that changes the distribution to satisfy the constraint imposed by the current instance, which may incorrectly update the parameters of the distribution when handling a mislabeled instance. Therefore, Wang et al. [Wang_ICML_2012] introduced a trade-off parameter to balance the passiveness and aggressiveness.
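As a concrete illustration of the constraint above, the probability of a correct prediction under a Gaussian weight distribution has a closed form. The sketch below assumes a diagonal covariance for simplicity; the function names are ours, not from the cited works.

```python
import math

def std_normal_cdf(z):
    """CDF of the standard Gaussian via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_correct(mu_w, cov_diag, x, y):
    """Probability of a correct prediction under w ~ N(mu_w, diag(cov_diag)).

    The margin y * (w . x) is Gaussian with mean y * (mu_w . x) and
    variance x^T Sigma x, so Pr[margin >= 0] = Phi(mean / std)."""
    mean_m = y * sum(m * xi for m, xi in zip(mu_w, x))
    var_m = sum(c * xi * xi for c, xi in zip(cov_diag, x))
    return std_normal_cdf(mean_m / math.sqrt(var_m))
```

The confidence constraint then reads `prob_correct(...) >= eta`, which is exactly the inequality that the cited works linearize and convexify.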
The aforementioned online learning methods [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] hypothesize that the weights are subject to a multivariate Gaussian distribution and predefine a confidence hyperparameter to formalize a constraint for optimization. Nevertheless, the weights are learned based on the training data, so imposing a hypothesis on the weights could be akin to putting the cart before the horse. Moreover, such a confidence hyperparameter may not be flexible or adaptive for various datasets. In this work, we instead hypothesize that the features are subject to Gaussian distributions, and there is no confidence hyperparameter. To update the weights, [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] apply the Lagrangian method to compute the optimal weights. This mechanism does not straightforwardly fit into the SGD scheme. Along the same line, this work is motivated to investigate how to incorporate the Gaussian assumption in SGD.
Softmax Function in ConvNet Learning. The success of ConvNets is largely attributed to the layer-stacking mechanism. Despite its effectiveness in complex real-world visual classification, this mechanism can result in co-adaptation and overfitting. To prevent the co-adaptation problem, Hinton et al. [Hinton_arXiv_2012] proposed a method that randomly omits a portion of neurons in a feedforward network. Then, Srivastava et al. [Srivastava_JMLR_2014] introduced the dropout unit to minimize overfitting and presented a comprehensive investigation of its effect in ConvNets. Similar regularization methods are also proposed in [Goodfellow_ICML_2013] and [Wan_ICML_2013]. Instead of modifying the connection between layers, [Zeiler_arXiv_2013] replaced deterministic pooling with stochastic pooling for regularizing ConvNets. The proposed G-softmax function can be used together with these models to offer better generalization ability. We posit a general assumption and establish Gaussian distributions over the feature space at the final layer, i.e., the softmax module. In other words, the proposed G-softmax function is general for most ConvNets without requiring much modification of the network structure.
ConvNets [Huang_CVPR_2017, Zagoruyko_BMVC_2016, He_CVPR_2016, Lecun_IEEE_1998, Krizhevsky_NIPS_2012, Simonyan_ICML_2015, Szegedy_CVPR_2015, Yu_CVPR_2017] have a strong representational ability in learning invariant features. Although their architectures vary from each other, the softmax function is widely used along with the cross entropy loss at the training phase. Hence, the softmax module is important and general for ConvNets. Liu et al. [Liu_ICML_2016] introduced a large-margin softmax (L-softmax) function to enhance the compactness and the separability from a geometric perspective. The large-margin softmax function is fundamentally similar to the softmax function, i.e., both use the exponential function, but with different inputs to the exponential function. In contrast, we model the mappings between features and ground truth labels with the Gaussian CDF. Similar to the softmax function, we utilize normalization to identify the maximal element but not its exact value.
Multi-label Classification. Multi-label classification is a special case of multi-output learning tasks. Read et al. [Read_ECMLKDD_2009] proposed the classifier chain model to model label correlations. In particular, the label order is important for chain classification models. A dynamic-programming-based classifier chain algorithm [Liu_NIPS_2015] was proposed to find the globally optimal label order for the classifier chain models. Shen et al. [Shen_NNLS_2018] introduced a Co-Embedding and Co-Hashing method that explores the label correlations from the perspective of cross-view learning to improve prediction accuracy and efficiency. On the other hand, the classifier chain model does not take the order of difficulty of the labels into account. Therefore, the easy-to-hard learning paradigm [Liu_JMLR_2017b] was proposed to make good use of the predictions from simple labels to improve the predictions of hard labels. Liu et al. [Liu_JMLR_2017a] presented a comprehensive theoretical analysis of the curse of dimensionality of decision tree models and introduced a sparse coding tree framework for multi-label annotation problems. In multi-label prediction, a large margin metric learning paradigm [Liu_AAAI_2015] was introduced to reduce the complexity of the decoding procedure in canonical correlation analysis and maximum margin output coding methods. Liu et al. [Liu_PAMI_2018] introduced a large margin metric learning method to efficiently learn an appropriate distance metric for multi-output problems with theoretical guarantees.
Recently, there have been attempts to apply deep networks to multi-label classification, especially ConvNets and Recurrent Neural Networks (RNNs), given their promising performance in various vision tasks. In [Wang_CVPR_2016], a ConvNet and an RNN are utilized together to explicitly exploit the label dependencies. In contrast to [Wang_CVPR_2016], [Zhang_arXiv_2016] proposed a regional latent semantic dependencies model to predict small-size objects and visual concepts by exploiting the label dependencies at the regional level. Similarly, [Durand_CVPR_2016] automatically selected relevant image regions from global image labels using weakly supervised learning. Zhao et al. [Zhao_BMVC_2016] reduced irrelevant and noisy regions with the help of a region gating module. These region-proposal-based methods usually suffer from redundant computation and suboptimal performance. Wang et al. [Wang_2017_ICCV] addressed these problems by developing a recurrent memorized-attention module, which can locate attentional regions from the ConvNet's feature maps. Instead of utilizing the label dependencies, [Li_CVPR_17] proposed a novel loss function for pairwise ranking that is smooth everywhere, so that it is easy to optimize within ConvNets. Also, there are two works that focus on improving the architectures of the networks for multi-label classification [Zhu_CVPR_2017, Durand_CVPR_2017]. In this work, we adopt a common baseline, i.e., ResNet-101 [He_CVPR_2016], which is widely used in the state-of-the-art models [Zhu_CVPR_2017, Durand_CVPR_2017].
III Methodology
III-A G-softmax Function
The logistic function, i.e., the sigmoid function, and the hyperbolic tangent function are widely used in deep learning; their graphs are "S-shaped" curves. These curves imply a graceful balance between linearity and nonlinearity [Menon_NN_1996]. The Gaussian CDF has the same monotonicity as the logistic and hyperbolic tangent functions and shares a similar shape. This makes the Gaussian CDF a potential substitute with the capability to model the distribution pattern with class-dependent $\mu$ and $\sigma$. Fundamentally, the softmax function in mainstream deep learning models is the normalized exponential function, which is a generalization of the logistic function. In this work, the proposed G-softmax function incorporates the Gaussian CDF into the exponential function.
Similar to the softmax loss, we use cross entropy as the loss function, i.e.,
$\ell = -\sum_{i=1}^{n} y_i \log p_i$ (1)
where $\ell$ is the loss, $y_i$ is the label with respect to the $i$-th category, $p_i$ is the prediction confidence with respect to the $i$-th category, and $n$ is the number of categories. Conventionally, given features $x_1, \dots, x_n$ with respect to the various labels, $p_i$ is given by the softmax function
$p_i = \dfrac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}$ (2)
The softmax function can be considered to represent a categorical distribution. By normalizing the exponential function, the largest value is highlighted and the other values are significantly suppressed. As discussed in Section II, [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] hypothesized that the classification margin is subject to a Gaussian distribution. Slightly differently, we assume that the deep feature $x_i$ with respect to the $i$-th category is subject to a Gaussian distribution, i.e., $x_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$. In this work, we define the proposed G-softmax function as
$p_i = \dfrac{e^{x_i + \lambda \Phi(x_i; \mu_i, \sigma_i)}}{\sum_{j=1}^{n} e^{x_j + \lambda \Phi(x_j; \mu_j, \sigma_j)}}$ (3)
where $\lambda$ is a parameter controlling the width of the CDF along the y-axis. We can see that if $\lambda = 0$, Equation (3) becomes the conventional softmax function. $\Phi(x; \mu, \sigma)$ is the CDF of a Gaussian distribution, that is
$\Phi(x; \mu, \sigma) = \dfrac{1}{\sigma\sqrt{2\pi}} \displaystyle\int_{-\infty}^{x} e^{-\frac{(t-\mu)^2}{2\sigma^2}}\, dt$ (4)
where $\mu$ and $\sigma$ are the mean and standard deviation, respectively. For simplicity, we denote $\Phi(x_i; \mu_i, \sigma_i)$ as $\Phi_i$ in the following paragraphs.
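To make the definition concrete, the sketch below evaluates the Gaussian CDF and a CDF-augmented softmax over a vector of features, assuming the CDF enters the exponent scaled by $\lambda$ as described above; with `lam = 0` it reduces to the conventional softmax. The function names are illustrative, not from the released code.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), Equation (4), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def g_softmax(x, mu, sigma, lam=1.0):
    """Softmax over logits augmented by lam * Phi_i (Equation (3)).

    With lam == 0 this reduces to the conventional softmax."""
    logits = [xi + lam * gaussian_cdf(xi, mi, si)
              for xi, mi, si in zip(x, mu, sigma)]
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The output is a valid probability vector (non-negative, sums to one), just like the softmax output it replaces.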
Compared to the softmax function (2), the proposed G-softmax function takes the feature distribution into account, i.e., the Gaussian CDF term in (3). This formulation leads to two advantages. First, it enables the model to approximate a large variety of distributions w.r.t. every class over the training samples, whereas the softmax function only learns from the currently observed sample. Second, with the distribution parameters $\mu_i$ and $\sigma_i$, it is straightforward to quantify intraclass compactness and interclass separability. In other words, the proposed G-softmax function is more analytical than the softmax function.
The proposed G-softmax function can work with any ConvNets, such as VGG [Simonyan_ICML_2015] and ResNet [He_CVPR_2016]. In this work, the features fed to the G-softmax function are not taken from an arbitrary layer but from the final fully-connected layer. When the features are non-negative (e.g., after a ReLU activation), $\mu_i$ is prone to shift towards the positive axis direction. The curve of $\Phi_i$ has a similar shape to those of the logistic function and hyperbolic tangent function, and can accurately capture the distribution of $x_i$. As discussed in Section II, the online learning methods [Crammer_NIPS_2008, Dredze_ICML_2008, Wang_ICML_2012] modeled the weights with a Gaussian distribution and used the Kullback-Leibler divergence (KLD) between the estimated distribution and the optimal distribution. Since their formulations involve the unknown optimal Gaussian distribution, they had to apply the Lagrangian method to optimize and approximate $\boldsymbol{\mu}$ and $\Sigma$. This may not fit the backpropagation in modern ConvNets, which commonly use SGD as the solver.
To optimize $\mu_i$ and $\sigma_i$, we have to compute the partial derivatives of Equation (1) using the chain rule,
$\dfrac{\partial \ell}{\partial \mu_i} = \displaystyle\sum_{j=1}^{n} \dfrac{\partial \ell}{\partial p_j}\, \dfrac{\partial p_j}{\partial \mu_i} = \lambda\, (p_i - y_i)\, \dfrac{\partial \Phi_i}{\partial \mu_i}$ (5)
Usually, $\sum_{j=1}^{n} y_j$ equals 1 due to the normalization. Similarly, we can obtain the partial derivatives with respect to $\sigma_i$,
$\dfrac{\partial \ell}{\partial \sigma_i} = \lambda\, (p_i - y_i)\, \dfrac{\partial \Phi_i}{\partial \sigma_i}$ (6)
According to the CDF, i.e., Equation (4), the derivatives with respect to $\mu_i$ and $\sigma_i$ are
$\dfrac{\partial \Phi_i}{\partial \mu_i} = -\dfrac{1}{\sigma_i \sqrt{2\pi}}\, e^{-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}}$ (7)
$\dfrac{\partial \ell}{\partial \mu_i} = -\dfrac{\lambda\, (p_i - y_i)}{\sigma_i \sqrt{2\pi}}\, e^{-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}}$ (8)
$\dfrac{\partial \Phi_i}{\partial \sigma_i} = -\dfrac{x_i - \mu_i}{\sigma_i^2 \sqrt{2\pi}}\, e^{-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}}$ (9)
$\dfrac{\partial \ell}{\partial \sigma_i} = -\dfrac{\lambda\, (p_i - y_i)(x_i - \mu_i)}{\sigma_i^2 \sqrt{2\pi}}\, e^{-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}}$ (10)
In the backpropagation of ConvNets, the chain rule requires the derivatives of the upper layers to compute the weight derivatives of the lower layers. Therefore, $\frac{\partial \ell}{\partial x_i}$ is needed to pass backwards to the lower layers. Because $\frac{\partial \ell}{\partial x_i}$ has the same chain-rule form as Equation (5), and we know
$\dfrac{\partial \Phi_i}{\partial x_i} = \dfrac{1}{\sigma_i \sqrt{2\pi}}\, e^{-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}}$ (11)
Then, $\frac{\partial \ell}{\partial x_i}$ is obtained
$\dfrac{\partial \ell}{\partial x_i} = (p_i - y_i)\left(1 + \dfrac{\lambda}{\sigma_i \sqrt{2\pi}}\, e^{-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}}\right)$ (12)
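The derivatives of the Gaussian CDF used above are easy to sanity-check numerically. The sketch below compares the analytic forms of $\partial\Phi_i/\partial\mu_i$, $\partial\Phi_i/\partial\sigma_i$, and $\partial\Phi_i/\partial x_i$ against central finite differences; the variable and function names are ours.

```python
import math

def phi(x, mu, sigma):
    """Gaussian CDF (Equation (4))."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pdf(x, mu, sigma):
    """Gaussian density, the common factor in the derivative expressions."""
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def phi_grads(x, mu, sigma):
    """Analytic derivatives of the CDF w.r.t. mu, sigma, and x."""
    p = pdf(x, mu, sigma)
    d_mu = -p                          # shifting the mean right lowers the CDF
    d_sigma = -(x - mu) / sigma * p    # widening pulls the CDF towards 1/2
    d_x = p                            # derivative of a CDF is its density
    return d_mu, d_sigma, d_x
```

A central finite difference on `phi` agrees with these closed forms to high precision, which is a quick way to validate a custom backward pass.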
III-B G-softmax in Multi-label Classification
The aforementioned sections are based on single-label classification problems. Here, we apply the proposed G-softmax function to the multi-label classification problem. In single-label classification, the softmax loss and its G-softmax variant are defined as
$\ell = -\displaystyle\sum_{i=1}^{n} y_i \log \dfrac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}, \qquad \ell_{G} = -\displaystyle\sum_{i=1}^{n} y_i \log \dfrac{e^{x_i + \lambda \Phi_i}}{\sum_{j=1}^{n} e^{x_j + \lambda \Phi_j}}$ (13)
For multi-label classification, the multi-label soft margin loss (MSML) is widely used [Durand_CVPR_2017, Zhu_CVPR_2017], as defined by Equation (14).
$\ell = -\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\left[ y_i \log \dfrac{1}{1 + e^{-x_i}} + (1 - y_i) \log \dfrac{e^{-x_i}}{1 + e^{-x_i}} \right]$ (14)
In contrast with MSML, there is a variant that takes $x_i^{+}$ and $x_i^{-}$ as inputs, instead of only taking $x_i$ as input as in MSML. $x_i^{+}$ is the positive feature, which is used to compute the probability that the input image is classified to the $i$-th category, while $x_i^{-}$ is the negative feature, which is used to compute the probability that the input image is classified to the non-$i$-th category. This variant is used in multi-label classification problems [Li_MTA_2018]. It is defined by Equation (15).
$\ell = -\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\left[ y_i \log \dfrac{e^{x_i^{+}}}{e^{x_i^{+}} + e^{x_i^{-}}} + (1 - y_i) \log \dfrac{e^{x_i^{-}}}{e^{x_i^{+}} + e^{x_i^{-}}} \right]$ (15)
The terms $\frac{1}{1 + e^{-x_i}}$ and $\frac{e^{-x_i}}{1 + e^{-x_i}}$ in MSML (14) are both determined by $x_i$. To make the learning process consistent with the loss function used in single-label classification, we use the variant, i.e., Equation (15), for multi-label classification in this work and denote it as the softmax loss function for consistency. Correspondingly, the G-softmax loss function is defined as
$\ell_{G} = -\dfrac{1}{n}\displaystyle\sum_{i=1}^{n}\left[ y_i \log \dfrac{e^{x_i^{+} + \lambda \Phi_i^{+}}}{e^{x_i^{+} + \lambda \Phi_i^{+}} + e^{x_i^{-} + \lambda \Phi_i^{-}}} + (1 - y_i) \log \dfrac{e^{x_i^{-} + \lambda \Phi_i^{-}}}{e^{x_i^{+} + \lambda \Phi_i^{+}} + e^{x_i^{-} + \lambda \Phi_i^{-}}} \right]$ (16)
where $\Phi_i^{+} = \Phi(x_i^{+}; \mu_i^{+}, \sigma_i^{+})$ and $\Phi_i^{-} = \Phi(x_i^{-}; \mu_i^{-}, \sigma_i^{-})$. In this way, we can model the distributions of $x_i^{+}$ and $x_i^{-}$ by $\mathcal{N}(\mu_i^{+}, (\sigma_i^{+})^2)$ and $\mathcal{N}(\mu_i^{-}, (\sigma_i^{-})^2)$, respectively.
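A minimal sketch of the two-logit per-class loss described above, with a Gaussian CDF term added to each logit; setting `lam = 0` recovers the plain variant of Equation (15). The function names and per-class parameterization are illustrative assumptions, not the released implementation.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def multilabel_g_softmax_loss(x_pos, x_neg, y, mu_p, sg_p, mu_n, sg_n, lam=1.0):
    """Per-class binary softmax over (x_i^+, x_i^-) with CDF-augmented logits.

    lam == 0 recovers the plain pairwise loss of Equation (15)."""
    total = 0.0
    for i in range(len(y)):
        zp = x_pos[i] + lam * gaussian_cdf(x_pos[i], mu_p[i], sg_p[i])
        zn = x_neg[i] + lam * gaussian_cdf(x_neg[i], mu_n[i], sg_n[i])
        m = max(zp, zn)                               # stabilize log-sum-exp
        log_z = m + math.log(math.exp(zp - m) + math.exp(zn - m))
        total += y[i] * (zp - log_z) + (1 - y[i]) * (zn - log_z)
    return -total / len(y)
```

For a single positive label with `lam = 0`, the loss collapses to `log(1 + exp(x_neg - x_pos))`, i.e., a binary cross entropy on the logit difference.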
We can see that the softmax and the proposed G-softmax functions are both straightforward to extend to multi-label classification. In contrast, the L-softmax function may not be easy to adapt to multi-label classification. This is because the L-softmax function needs to be aware of the feature related to the ground truth label so that it can impose a margin constraint on that feature, i.e., $e^{\|W_{y}\|\|x\|\cos(m\theta_{y})}$, where $m$ is an integer representing the margin, $y$ indicates that the $y$-th label is the ground truth label of $x$, $W_{y}$ is the $y$-th column of the weight matrix $W$, and $\theta_{y}$ is the angle between $W_{y}$ and $x$. When $m = 1$, the exponential term is the same as in the softmax function. However, when $m > 1$, $\cos(m\theta_{y})$ is used to guarantee the margin. As a consequence, it is hard to use in MSML because the L-softmax function would treat the terms in Equation (14) differently.
III-C Malleable Learning Rates
The training of a model usually requires a series of predefined learning rates. The learning rate is a real value and a function of the current epoch with given starting and final values. There are several popular types of learning rate schedules, e.g., linspace, logspace, and staircase. Usually, the number of epochs used with these types of learning rates is not more than 300. Although Huang et al. [Huang_ICLR_2017] use many more epochs with annealing learning rates, their learning rate is designed as a function of the iteration number instead of the epoch number. Therefore, it may not generalize to distributed or parallel processing because the iterations are not processed sequentially. We would like to test the proposed G-softmax function under an extreme condition, i.e., more epochs, to investigate its stability. In the following, we first describe the three learning rate schedules and then show how they relate to the proposed malleable learning rate. The proposed malleable learning rates can control the curvature of the scheduled learning rates to boost convergence of the learning process.
The linspace learning rates are generated with a simple linear function, where the learning rate at epoch $t$ is denoted as $\eta_t = \eta_s + \frac{t}{T}(\eta_e - \eta_s)$. Here, $T$ is the maximum epoch number, while $\eta_s$ and $\eta_e$ are the starting and final values of the learning rate sequence, respectively. $\eta_s$ is also the initial learning rate. Because of linearity, the changes of the learning rates are constant through all epochs. As the learning rates become smaller when the epoch number increases, it is expected that the training process can converge stably. Logspace learning rates meet this requirement by a log-domain function $\eta_t = \eta_s \left(\frac{\eta_e}{\eta_s}\right)^{t/T}$.
The logspace learning rate has a gradual descent trace that rapidly becomes stable. On the other hand, the staircase learning rate remains constant for a large number of epochs. As the learning rate is not frequently adjusted, the model learning process may not converge. These problems undermine the sustainable convergence ability of deep learning models. Therefore, we integrate the advantages of these learning rates and propose a malleable learning rate, that is,
$\eta_t = \eta_{s,k} \left( \dfrac{\eta_{e,k}}{\eta_{s,k}} \right)^{\left( \frac{t - T_{k-1}}{T_k - T_{k-1}} \right)^{\gamma_k}}, \quad T_{k-1} < t \le T_k$ (17)
where $T_k$ is the end epoch of the $k$-th piece of learning rates, $T_0 = 0$, and $\eta_{s,k}$, $\eta_{e,k}$, and $\gamma_k$ are the starting value, final value, and bending factor of the $k$-th piece. As shown in Equation (17), the proposed learning rate is able to separate piece-wise learning rates (i.e., staircase learning rates), yet is able to control the shape of each piece (e.g., its curvature or degree of bend) by configuring $\gamma_k$ and the per-piece endpoints.
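The piecewise schedule can be sketched as follows. This is one plausible reading of the malleable schedule described above, under the assumption that each piece moves from a starting to a final value along a logspace trace whose bend is set by a per-piece exponent; the parameterization (`gamma` in particular) and all names are our assumptions, not the paper's exact form.

```python
def malleable_lr(t, pieces):
    """Piecewise learning rate at epoch t.

    pieces: list of (end_epoch, lr_start, lr_end, gamma) tuples.
    gamma == 1 gives a logspace trace within the piece; a large gamma
    keeps the rate near lr_start for most of the piece (staircase-like)."""
    prev_end = 0
    for end, lr_s, lr_e, gamma in pieces:
        if t <= end:
            frac = (t - prev_end) / (end - prev_end)   # progress within the piece
            return lr_s * (lr_e / lr_s) ** (frac ** gamma)
        prev_end = end
    return pieces[-1][2]                               # after the last piece
```

Each piece interpolates its endpoints exactly, so adjacent pieces can be chained by matching `lr_end` of one piece to `lr_start` of the next.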
For the experiments using models pretrained on the ImageNet dataset [Russakovsky_IJCV_2015], the initialization contains well-learned knowledge for Tiny ImageNet, MS COCO, and NUS-WIDE, which are similar to ImageNet in terms of visual content and concept labels. Hence, the training process on these datasets does not need a large number of epochs [Zhu_CVPR_2017, Durand_CVPR_2017]. In this work, we instead apply malleable learning rates on CIFAR to train the models from scratch.
III-D Compactness & Separability
As commonly studied in machine learning [Liu_ICML_2016, Yang_AAAI_2006, Zhang_ICML_2007], intraclass compactness and interclass separability are important characteristics that can reveal some intuition about the learning ability and efficacy of a model. Due to the underlying Gaussian nature of the proposed G-softmax function, the intraclass compactness for a given class $i$ is characterized by the respective standard deviation $\sigma_i$, where a smaller $\sigma_i$ indicates that the learned model is more compact. Mathematically, the compactness of a given class can be represented by $1/\sigma_i$.
The interclass separability can be measured by computing the disparity of two models, i.e., the divergence between two Gaussian distributions. In the probability and information theory literature, KLD is commonly used to measure the difference between two probability distributions. In the following, we denote a learned Gaussian distribution as $\mathcal{N}_i = \mathcal{N}(\mu_i, \sigma_i^2)$. Specifically, given two learned Gaussian distributions $\mathcal{N}_i$ and $\mathcal{N}_j$, the divergence between the two distributions is
$D_{\mathrm{KL}}(\mathcal{N}_i \,\|\, \mathcal{N}_j) = \displaystyle\int_{-\infty}^{\infty} f_i(x) \log \dfrac{f_i(x)}{f_j(x)}\, dx = \log \dfrac{\sigma_j}{\sigma_i} + \dfrac{\sigma_i^2 + (\mu_i - \mu_j)^2}{2\sigma_j^2} - \dfrac{1}{2}$ (18)
where $f_i$ and $f_j$ are the probability density functions of the respective classes. KLD is always non-negative. As proven by Gibbs' inequality, KLD is zero if and only if the two distributions are equal almost everywhere. To quantify the divergence between the distribution of the $i$-th category and the distributions of the rest of the categories, we use the mean of the KLDs,
$\bar{D}_i = \dfrac{1}{n-1} \displaystyle\sum_{j \ne i} \dfrac{D_{\mathrm{KL}}(\mathcal{N}_i \,\|\, \mathcal{N}_j) + D_{\mathrm{KL}}(\mathcal{N}_j \,\|\, \mathcal{N}_i)}{2}$ (19)
Because KLD is asymmetric, we compute the mean of $D_{\mathrm{KL}}(\mathcal{N}_i \,\|\, \mathcal{N}_j)$ and $D_{\mathrm{KL}}(\mathcal{N}_j \,\|\, \mathcal{N}_i)$ for a fair measurement.
Since compactness indicates the intraclass correlations and separability indicates the interclass correlations, we multiply (i.e., the $\times$ operator) the intraclass compactness with the interclass separability to quantify overall how discriminative the features with the same label are. Hence, we define the separability ratio $S_i$ with respect to the $i$-th class as follows
$S_i = \bar{D}_i \times \dfrac{1}{\sigma_i} = \dfrac{\bar{D}_i}{\sigma_i}$ (20)
Since $\sigma_i$ of a distribution is inversely proportional to compactness, $S_i$ is also inversely proportional to $\sigma_i$. Ideally, we hope a model's $S_i$ is as large as possible, which requires the separability $\bar{D}_i$ to be as large as possible and $\sigma_i$ to be as small as possible at the same time.
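The quantities above have closed forms for univariate Gaussians, so compactness, separability, and the ratio $S_i$ can be computed directly from the learned $(\mu_i, \sigma_i)$ pairs. A sketch (function names ours):

```python
import math

def kl_gauss(mu_i, sigma_i, mu_j, sigma_j):
    """Closed-form KL divergence between two univariate Gaussians."""
    return (math.log(sigma_j / sigma_i)
            + (sigma_i ** 2 + (mu_i - mu_j) ** 2) / (2.0 * sigma_j ** 2)
            - 0.5)

def separability_ratio(mus, sigmas, i):
    """S_i: mean symmetrized KL to all other classes, times 1/sigma_i."""
    n = len(mus)
    d_bar = sum(0.5 * (kl_gauss(mus[i], sigmas[i], mus[j], sigmas[j])
                       + kl_gauss(mus[j], sigmas[j], mus[i], sigmas[i]))
                for j in range(n) if j != i) / (n - 1)
    return d_bar / sigmas[i]
```

Symmetrizing the two KL directions matches the fair-measurement convention described above, and dividing by $\sigma_i$ rewards compact classes.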
IV Empirical Evaluation
In this section, we provide a comprehensive comparison between the softmax function and the proposed G-softmax function for single-label classification and multi-label classification. Specifically, we evaluate three baseline ConvNets (i.e., VGG, DenseNet, and wide ResNet) on the CIFAR-10 and CIFAR-100 datasets for single-label classification. For multi-label classification, we conduct the experiments with ResNet on the MS COCO dataset.
Datasets & Evaluation Metrics. To evaluate the proposed G-softmax function for single-label classification, we use the CIFAR-10 [Krizhevsky_Citeseer_2009] and CIFAR-100 datasets, which are widely used in the machine learning literature [Lin_ICLR_2014, Lee_AISTATS_2015, Springenberg_ICLR_2015, Liu_ICML_2016, Clevert_ICLR_2016, Lee_AISTATS_2016, Zagoruyko_BMVC_2016, Huang_CVPR_2017]. CIFAR-10 consists of 60,000 color images of 32×32 pixels in 10 classes. Each class has 6,000 images, including 5,000 training images and 1,000 test images. CIFAR-100 has 100 classes and the same image resolution as CIFAR-10. It has 600 images per class, including 500 training images and 100 test images. Moreover, we also use Tiny ImageNet in this work. It is a variant of ImageNet, which has 200 classes, and each class has 500 training images and 50 validation images.
For the multi-label classification task, we adopt two widely used datasets, i.e., MS COCO [Lin_ECCV_2014] and NUS-WIDE [Chua_CIVR_2009]. The MS COCO dataset is primarily designed for object detection in context, and it is also widely used for multi-label recognition. Therefore, MS COCO is adopted in this work. It comprises a training set of 82,081 images and a validation set of 40,137 images. The dataset covers 80 common object categories, with about 3.5 object labels per image. In this work, we follow the original split for training and testing, respectively. Following [Li_CVPR_17, Zhu_CVPR_2017, Wang_2017_ICCV, Durand_CVPR_2017], we only use the image labels for training and evaluation. NUS-WIDE consists of 269,648 images with 81 concept labels. We use the official train/test split, i.e., 161,789 images for training and 107,859 images for evaluation.
We use the same evaluation metrics as [Zhu_CVPR_2017, Wang_2017_ICCV], namely mean average precision (mAP), perclass precision, recall, F1 score (denoted as CP, CR, CF1), and overall precision, recall, F1 score (denoted as OP, OR, OF1). More concretely, average precision is defined as follows
$AP_i = \dfrac{1}{N_i} \displaystyle\sum_{k=1}^{M_i} \mathrm{Prec}_i(k)\, \mathrm{rel}_i(k)$ (21)
where $\mathrm{rel}_i(k)$ is a relevance function that returns 1 if the item at rank $k$ is relevant to the $i$-th class and returns 0 otherwise. To compute mAP, we collect all predicted probabilities for each class over all the images. The corresponding predicted $i$-th labels over all images are sorted in descending order. The average precision of the $i$-th class is the average of the precisions at the ranks of correctly predicted $i$-th labels. $\mathrm{Prec}_i(k)$ is the precision at rank $k$ over all predicted $i$-th labels, $M_i$ denotes the number of predicted $i$-th labels, and $N_i$ denotes the number of relevant $i$-th labels. Finally, the mAP is obtained by averaging $AP_i$ over all classes. The other metrics are defined as follows
$OP = \dfrac{\sum_i N_i^c}{\sum_i N_i^p}, \quad OR = \dfrac{\sum_i N_i^c}{\sum_i N_i^g}, \quad OF1 = \dfrac{2 \cdot OP \cdot OR}{OP + OR}, \quad CP = \dfrac{1}{n}\displaystyle\sum_i \dfrac{N_i^c}{N_i^p}, \quad CR = \dfrac{1}{n}\displaystyle\sum_i \dfrac{N_i^c}{N_i^g}, \quad CF1 = \dfrac{2 \cdot CP \cdot CR}{CP + CR}$ (22)
where $N_i^c$ is the number of images correctly predicted for the $i$-th class, $N_i^p$ is the number of predicted images for the $i$-th label, and $N_i^g$ is the number of ground truth images for the $i$-th label. For CP, CR, and CF1, $n$ is the number of labels.
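The metrics above can be sketched directly from ranked scores and per-class counts. The functions below are illustrative helpers, not the authors' evaluation code:

```python
def average_precision(scores, relevant):
    """AP for one class: rank predictions by score, then average the
    precision at each relevant rank over the number of relevant items."""
    order = sorted(range(len(scores)), key=lambda k: -scores[k])
    hits, total = 0, 0.0
    for rank, idx in enumerate(order, start=1):
        if relevant[idx]:
            hits += 1
            total += hits / rank          # precision at this relevant rank
    return total / max(sum(relevant), 1)

def multilabel_metrics(n_correct, n_pred, n_gt):
    """OP/OR/OF1 (overall) and CP/CR/CF1 (per-class) from per-class counts."""
    op = sum(n_correct) / sum(n_pred)
    o_r = sum(n_correct) / sum(n_gt)
    cp = sum(c / p for c, p in zip(n_correct, n_pred)) / len(n_correct)
    cr = sum(c / g for c, g in zip(n_correct, n_gt)) / len(n_correct)
    f1 = lambda p, r: 2.0 * p * r / (p + r)
    return op, o_r, f1(op, o_r), cp, cr, f1(cp, cr)
```

mAP is then the mean of `average_precision` over all classes, matching the description above.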
Baselines & Experiment Configurations. For the classification task, we adopt softmax and L-softmax [Liu_ICML_2016] as baseline methods for comparison purposes. For multi-label classification, due to the limits of L-softmax as discussed in Subsection III-B, we only use softmax as the baseline method.
There are a number of ConvNets, such as AlexNet [Krizhevsky_NIPS_2012], GoogLeNet [Szegedy_CVPR_2015], VGG [Simonyan_ICML_2015], ResNet [He_CVPR_2016], wide ResNet [Zagoruyko_BMVC_2016], and DenseNet [Huang_CVPR_2017]. For the experiments on CIFAR-10 and CIFAR-100, we adopt the state-of-the-art wide ResNet and DenseNet as baseline models. Also, considering that the network structures of wide ResNet and DenseNet are quite different from conventional networks, such as AlexNet and VGG, VGG is taken into account too. Specifically, we use VGG-16 (the 16-layer model), wide ResNet with 40 convolutional layers and a widening factor of 14, and DenseNet with 100 convolutional layers and a growth rate of 24 in this work. Our experiments focus on comparing the conventional softmax function with the proposed G-softmax function. The softmax and L-softmax functions are considered the baseline functions in this work. For fair comparisons, the experiments are strictly conducted under the same conditions. For all comparisons, we only replace the softmax function in the final layer with the proposed G-softmax function and preserve the other parts of the network. In the training stage, we keep most of the training hyperparameters, e.g., weight decay, momentum, and so on, the same as AlexNet [Alex_NIPS_2012]. Both the baseline functions and the proposed G-softmax function are trained from scratch under the same conditions. In the wide ResNet experiments, the batch sizes for CIFAR-10 and CIFAR-100 are both 128, which is the number used in its original work [Zagoruyko_BMVC_2016]. In the DenseNet experiments, since its graphics memory usage is considerably higher than wide ResNet's, we use a batch size of 50, which leads to full graphics memory usage on three GPUs. The hardware used in this work is an Intel Xeon E5-2660 CPU and GeForce GTX 1080 Ti GPUs. All models are implemented with Torch [Collobert_NIPS_2011].
We follow the original experimental settings of the baseline models for the training and evaluation of the softmax function and the G-softmax function. For example, in DenseNet [Huang_CVPR_2017], Huang et al. train their model for 300 epochs with staircase learning rates. From the 1st epoch to the 149th epoch, the learning rate is set to 0.1. From the 150th epoch to the 224th epoch, it is 0.01, and the learning rate of the remaining epochs is 0.001. The wide ResNet model is trained for 200 epochs [Zagoruyko_BMVC_2016]. The learning rate is initialized to 0.1, and at the 60th, 120th, and 160th epochs, it decreases to 0.02, 0.004, and 0.0008, respectively. To make it comparable to DenseNet, we extend the epochs from 200 to 300 and decrease the learning rate at the 220th and 260th epochs by multiplying it by 0.2. To avoid ad hoc tuning of hyperparameter settings, we set the weight decay and momentum to be the same as the default hyperparameters in the baselines [Zagoruyko_BMVC_2016, Huang_CVPR_2017] for the softmax function and the proposed G-softmax function.
For the experiments on Tiny ImageNet, we adopt wide ResNet [Zagoruyko_BMVC_2016] with 40 convolutional layers and a widening factor of 14 as the baseline model. The initial learning rate is 0.001 and the weight decay is 1e-4. The training process consists of 30 epochs with a batch size of 80, and the learning rate is decreased to one tenth of its value every 10 epochs. Following [Huang_ICLR_2017, Yamada_ICLR_2018], we use the ImageNet pretrained weights as initialization, and the input images are resized to 224×224 to feed the wide ResNet.