The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification



Key to solving fine-grained image categorization is finding discriminative and local regions that correspond to subtle visual traits. Great strides have been made, with complex networks designed specifically to learn part-level discriminative feature representations. In this paper, we show it is possible to cultivate subtle details without the need for overly complicated network designs or training mechanisms – a single loss is all it takes. The main trick lies with how we delve into individual feature channels early on, as opposed to the convention of starting from a consolidated feature map. The proposed loss function, termed the mutual-channel loss (MC-Loss), consists of two channel-specific components: a discriminality component and a diversity component. The discriminality component forces all feature channels belonging to the same class to be discriminative, through a novel channel-wise attention mechanism. The diversity component additionally constrains channels so that they become mutually exclusive spatially. The end result is therefore a set of feature channels, each of which reflects a different locally discriminative region for a specific class. The MC-Loss can be trained end-to-end, without the need for any bounding-box/part annotations, and yields highly discriminative regions during inference. Experimental results show that our MC-Loss, when implemented on top of common base networks, can achieve state-of-the-art performance on all four fine-grained categorization datasets (CUB-Birds, FGVC-Aircraft, Flowers-102, and Stanford-Cars). Ablative studies further demonstrate the superiority of the MC-Loss when compared with other recently proposed general-purpose losses for visual classification, on two different base networks. Code available at

Fine-grained image classification, deep learning, loss function, mutual channel.

I Introduction

Fig. 1: Mutual-channel loss (MC-Loss), where we learn part-localized discriminative features directly on channels without explicit part detection, vs. conventional fine-grained classification methods that work on feature maps and rely on explicit network designs for part detection. We can observe that feature channels, after going through the MC-Loss, become class-aligned, and each focuses on different discriminative regions that roughly correspond to object parts.

Fine-grained image classification refers to the problem of differentiating sub-categories of a common visual category (e.g., bird species, car models) [22]. The task is much harder when compared to conventional category-level classification [26, 27, 45, 44], since visual differences between subordinate classes are often subtle and deeply embedded within local discriminative parts. As a result, it has become common knowledge that developing effective methods to extract information from the localized regions that capture subtle differences is the key for solving fine-grained image classification  [40, 46, 47].

Early works largely relied on manual part annotations and followed a supervised learning paradigm [2, 19, 43, 3, 25, 17]. Albeit with decent results reported, it quickly became apparent that such supervised approaches are not scalable, since expert human annotations are cumbersome to obtain and often error-prone [35]. More recent research has therefore concentrated on discovering parts in an unsupervised fashion [22, 34, 49, 30, 7, 41, 9]. These approaches have been shown to yield performance on par with, or even exceeding, those relying on manual annotations, owing to their ability to mine discriminative parts that are otherwise missing or inaccurate in human-labelled data. Again, the main focus is placed on how best to locate discriminative object parts. Increasingly complicated networks have been proposed to perform part learning, mainly to compensate for the lack of annotation data. Two main components can typically be identified among these approaches: (i) a network component that explicitly performs part detection, and (ii) a mechanism that ensures the learned features are maximally discriminative. Most recent work on fine-grained classification [40, 46, 50, 11, 18] has shown state-of-the-art performance by simultaneously exploring these two components and cultivating their complementary properties.

In this paper, we follow the same motivation as above [49, 38, 50] to address the unique challenges of fine-grained classification. We importantly differ in that we do not attempt to introduce any explicit network components for discriminative part discovery. Instead, we ask whether it is possible to simultaneously achieve both discriminative feature learning and part localization with just a single loss. This design choice has a few salient advantages over prior art: (i) it does not introduce any extra network parameters, making the network easier to train, and (ii) it can in principle be applied to any existing/future network architectures. The key insight lies with how we delve into feature channels early on, as opposed to learning fine-grained part-level features on feature maps directly – the devil is in the channels.

More specifically, we assume a fixed number of feature channels to represent each class. It follows that instead of applying constraints on the final feature maps, we impose a loss directly on the channels, so that all feature channels belonging to the same class are (i) discriminative, i.e., they each contribute to discriminating the class from others, and (ii) mutually exclusive, i.e., each channel can attend to different local regions/parts. The end result is therefore, a set of feature channels that are class-aligned, each being discriminative on mutually distinct local parts. Figure 1 offers a visualization. To the best of our knowledge, this is the first time that a single loss is proposed for fine-grained classification that does not require any specific network designs for part localization.

Our loss is termed mutual-channel loss, MC-Loss in short. It has two components that work synergistically for fine-grained feature learning. Firstly, a discriminality component is introduced to enforce all feature channels corresponding to a class to be discriminative on their own, before being fused. A novel channel attention mechanism is introduced, whereby during training a fixed percentage of channels is randomly masked out, forcing the remaining channels to become discriminative for a given class. We then apply cross-channel max pooling [14] to fuse the feature channels and produce the final feature map which is now class-aligned and optimally discriminative.

Although every feature channel is now discriminative for a class, there is still no guarantee that the most discriminative parts will be localized. This leads us to introduce the second component of our loss function, the diversity component. This component is specifically designed so that each channel within a group attends to mutually distinct local parts. We achieve this goal by asking for maximum spatial decorrelation across channels belonging to the same class. This can be conveniently implemented by (again) applying cross-channel max pooling, then asking for maximum spatial summation. Ultimately, this ensures that as many discriminative parts as possible are attended to, therefore helping with fine-grained feature learning. Note that the diversity component would not work without its discriminality counterpart, since otherwise not all channels would be discriminative, making localization much harder.

Extensive experiments are carried out on all four commonly used fine-grained categorization datasets: CUB-200-2011 [36], FGVC-Aircraft [28], Flowers-102 [29], and Stanford Cars [16]. The results show that our model can outperform the current state of the art by a significant margin. Ablative studies are further conducted to draw insights on each of the proposed loss components and hyper-parameters.

Fig. 2: The framework of a typical fine-grained classification network where the MC-Loss is used. The MC-Loss function considers the output feature channels of the last convolutional layer as its input and is combined with the cross-entropy (CE) loss function via a hyper-parameter $\mu$.

II Related Work

In this section, we briefly review previous works on both fine-grained image classification and loss functions designed for similar purposes.

II-A Fine-Grained Image Classification

Some of the earlier works [2, 5, 43] take advantage of bounding-box/part annotations as additional information for both training and testing. However, expert annotations are hard to source and prone to human error, which hinders practical deployment in in-the-wild scenarios. To address this issue, some other works [3, 48] use annotations only during training and employ a part-detection module during testing. Recently, some frameworks have employed a more general architecture that can localize discriminative parts within an image without any extra supervision from part annotations, making fine-grained image classification more feasible in real-world scenarios. Wang et al. [40] claimed that improving mid-level convolutional feature representations can bring significant advantages for part-based fine-grained classification; this is accomplished by introducing a bank of discriminative filters into the classical convolutional neural network (CNN) architecture, which can be trained in an end-to-end fashion. The authors of [10] presented a new procedure, called pairwise confusion (PC), that improves generalization for the fine-grained image classification task by encouraging confusion in the output activations, forcing the model to focus on local discriminative features of the objects rather than the background. Meanwhile, Yang et al. [46] proposed a novel multi-agent cooperative learning scheme that learns to identify discriminative regions in the image in a self-supervised way.

Despite all these improvements, part-based methods have difficulty modelling the specific features of an image because of the complicated relationships that exist between distinct parts. In order to handle this complex interaction, some approaches encode higher-order statistics of convolutional features and extract compact holistic representations. Lin et al. [22] added bilinear pooling behind dual CNNs to obtain a discriminative feature representation of the whole image. As an extension of bilinear pooling, Cui et al. [8] proposed a deep kernel pooling method that captures high-order, non-linear feature interactions via a compact and explicit feature mapping. A higher-order integration of hierarchical convolutional features has also been introduced into an end-to-end framework to derive rich representations of local parts at different scales for fine-grained image classification [4]. The very recent work by Zheng et al. [50] is perhaps the closest to ours, since they also operate at the channel level: they designed a multi-attention convolutional neural network (MA-CNN) to jointly learn discriminative parts and fine-grained feature representations on each channel, which are then aggregated to construct the final fine-grained features.

Without exception, all previous approaches incur network changes to achieve part localization and/or discriminative feature learning. This is distinctly different from our approach of achieving the same via a single loss function.

II-B Loss Functions in CNNs

A recent trend in the computer vision community is to design task-specific loss functions that reinforce CNNs with strong discriminative information. Intuitively, the extracted features are most discriminative when their intra-class compactness and inter-class separability are simultaneously maximized, i.e., the Fisher criterion. Wen et al. [42] proposed the center loss to obtain highly discriminative features for robust recognition by minimizing the intra-class distance of deep features. Liu et al. [23] introduced the A-softmax loss to learn angularly discriminative features for image classification on a deep hypersphere embedding manifold. Wang et al. [39] embraced the idea of the Fisher criterion and proposed the large-margin cosine loss (LMCL) to learn highly discriminative deep features for image recognition. In addition, some works focus on the effective use of training data. Lin et al. [21] proposed the focal loss, a modified cross-entropy (CE) loss, to emphasize learning on hard samples and down-weight well-classified samples. Wan et al. [37] proposed the large-margin Gaussian mixture (L-GM) loss, which assumes a Gaussian mixture distribution for the deep features on the training set and enables more effective discrimination of out-of-domain inputs.

Although all the aforementioned loss functions obtain discriminative features to an extent, they do not explicitly encourage the network to focus on localized discriminative regions. In contrast, our proposed MC-Loss function enforces the network to discover multiple discriminative regions, which also alleviates the need for complicated network designs, unlike [49, 38, 50], thus making our framework easy to implement and easy to interpret.

Fig. 3: (a) Overview of the MC-Loss. The MC-Loss consists of (i) a discriminality component (left) that makes the feature channels class-aligned and discriminative, and (ii) a diversity component (right) that supervises the feature channels to focus on different local regions. (b) Comparison of feature maps before (left) and after (right) applying the MC-Loss, where the feature channels become class-aligned, each attending to different discriminative parts. Please refer to Section III for details.

III The Mutual-Channel Loss (MC-Loss)

In this section, we present the mutual-channel loss (MC-Loss) function, which effectively guides the model to focus on different discriminative regions without any fine-grained bounding-box/part annotations.

The network combined with the proposed MC-Loss in the training stage is shown in Figure 2. Given an input image, the base network, e.g., VGG [33] or ResNet [15], first extracts its feature maps. Let the extracted feature maps be denoted as $\mathcal{F} \in \mathbb{R}^{N \times WH}$, with height $H$, width $W$, and $N$ channels. In the proposed MC-Loss, we need to set $N = c \times \xi$, where $c$ and $\xi$ indicate the number of classes in the dataset and the number of feature channels used to represent each class, respectively. Note that $\xi$ is a scalar hyper-parameter, empirically set larger than 2. The $j$-th vectorised feature channel of $\mathcal{F}$ is represented as $\mathcal{F}_j \in \mathbb{R}^{WH}$; i.e., we reshape each $W \times H$ channel matrix of $\mathcal{F}$ to a vector of size $W$ times $H$, i.e., $WH$. The grouped feature channels corresponding to class $i$ are indicated by $\mathbf{F}_i$. Mathematically, this can be represented as

$$\mathbf{F}_i = \{\mathcal{F}_{\xi(i-1)+1}, \mathcal{F}_{\xi(i-1)+2}, \ldots, \mathcal{F}_{\xi i}\}, \quad i = 1, \ldots, c. \tag{1}$$
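To make the grouping concrete, here is a minimal NumPy sketch (toy sizes; not the authors' released code) of splitting the $N = c \times \xi$ channels into $c$ class-wise groups of vectorised channels:

```python
import numpy as np

# Toy illustration (not the authors' code): group the N = c * xi channels of
# the last conv layer's output into c class-wise groups of xi channels each.
c, xi, W, H = 4, 3, 7, 7            # small assumed sizes for illustration
N = c * xi
F = np.random.rand(N, W, H)         # stand-in for the extracted feature maps

# Flatten each W x H channel into a WH-vector, then split into class groups:
# groups[i] holds the xi vectorised channels assigned to class i (Eq. 1).
groups = F.reshape(N, W * H).reshape(c, xi, W * H)
```

Channel $j$ of group $i$ is simply channel $\xi(i-1)+j$ of the original feature map, matching Equation 1.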
Subsequently, $\mathcal{F}$ enters two streams of the network with two different sub-losses tailored to two distinct goals. In Figure 2, the cross-entropy stream feeds $\mathcal{F}$ into fully connected (FC) layers with the traditional cross-entropy (CE) loss $L_{CE}$. Here, the cross-entropy loss encourages the network to extract informative features that mainly focus on the global discriminative regions. The MC-Loss stream, on the other hand, supervises the network to spotlight different local discriminative regions. The MC-Loss is added to the CE loss with weight $\mu$ during training. Thus, the total loss function of the whole network can be defined as

$$L_{total} = L_{CE}(\mathcal{F}) + \mu \times L_{MC}(\mathcal{F}). \tag{2}$$

Furthermore, the MC-Loss is a weighted combination of a discriminality component $L_{dis}$ and a diversity component $L_{div}$. We define the MC-Loss as

$$L_{MC}(\mathcal{F}) = L_{dis}(\mathcal{F}) - \lambda \times L_{div}(\mathcal{F}), \tag{3}$$

where $\lambda$ is a weighting hyper-parameter.
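The way the losses combine in Equations 2 and 3 can be sketched as follows; the numeric values of $\mu$ and $\lambda$ are tuned elsewhere in the paper, so the arguments here are placeholders:

```python
# Sketch of Eqs. (2)-(3): the MC-Loss subtracts a weighted diversity term
# from the discriminality term, and the total loss adds the result to the
# CE loss with weight mu. mu and lambda_ are placeholder hyper-parameters.
def mc_loss(l_dis, l_div, lambda_):
    return l_dis - lambda_ * l_div                       # Eq. (3)

def total_loss(l_ce, l_dis, l_div, mu, lambda_):
    return l_ce + mu * mc_loss(l_dis, l_div, lambda_)    # Eq. (2)
```

Note the minus sign: a larger diversity score lowers the loss, so minimizing $L_{MC}$ maximizes diversity.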
III-A The Discriminality Component

In our framework, each class is represented by a certain number of grouped feature channels. The discriminality component enforces the feature channels to be class-aligned, such that each feature channel corresponding to a particular class is discriminative on its own. The discriminality component can be represented as

$$L_{dis}(\mathcal{F}) = L_{CE}\big(y,\ \mathrm{softmax}(g(\mathcal{F}))\big), \tag{4}$$

where $g(\mathcal{F}) = [g_1(\mathcal{F}), \ldots, g_c(\mathcal{F})]$ is defined as

$$g_i(\mathcal{F}) = \mathrm{GAP}\big(\mathrm{CCMP}\big(\mathrm{CWA}(\mathbf{F}_i)\big)\big), \quad i = 1, \ldots, c, \tag{5}$$
where GAP, CCMP, and CWA are short for Global Average Pooling, Cross-Channel Max Pooling, and Channel-Wise Attention, respectively. $L_{CE}$ is the cross-entropy loss between the ground-truth class label $y$ and the output of the GAP. $\mathrm{CWA}(\mathbf{F}_i) = \mathrm{diag}(M_i) \cdot \mathbf{F}_i$, where $M_i$ is a 0-1 mask with $\lfloor \xi/2 \rfloor$ randomly assigned zero(s) and the remaining entries set to one, and the $\mathrm{diag}(\cdot)$ operation puts a vector on the principal diagonal of a diagonal matrix. The left block in Figure 3(a) shows the flow diagram of the discriminality component.

CWA: In traditional CNNs trained with the classical CE loss objective, only a certain subset of the feature channels contains discriminative information. We therefore propose a channel-wise attention operation that enforces the network to capture discriminative information equally in all channels corresponding to a particular class. Unlike other channel-wise attention designs [6] that assign higher priority to discriminative channels using soft attention values, we assign random binary weights to the channels and stochastically select a few feature channels from every feature group during each iteration, thus explicitly encouraging every feature channel to contain sufficient discriminative information. This process can be visualized as a random channel-dropping operation. Please note that the CWA is only used during training, and the whole MC-Loss branch is absent at inference time. The classification layers therefore receive the same input feature distributions during both training and inference.
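A possible NumPy sketch of the CWA operation described above; the drop count $\lfloor \xi/2 \rfloor$ follows our reading of the mask definition, and this is an illustration rather than the released code:

```python
import numpy as np

# Sketch of CWA (our reading of the text, not released code): during training,
# floor(xi / 2) of the xi channels in a class group are randomly zeroed,
# forcing the surviving channels to stay discriminative on their own.
def cwa(group, rng):
    """group: (xi, WH) channels of one class -> randomly masked copy."""
    xi = group.shape[0]
    mask = np.ones(xi)
    drop = rng.choice(xi, size=xi // 2, replace=False)  # random zeros
    mask[drop] = 0.0
    return np.diag(mask) @ group    # diag(Mask_i) . F_i, as in the text
```

Because the mask is resampled every iteration, no channel can rely on its neighbours being present, mirroring the random channel-dropping interpretation above.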

CCMP: Cross-Channel Max Pooling [14] computes the maximum response of each element across the feature channels in $\mathbf{F}_i$ corresponding to a particular class, resulting in a one-dimensional vector of size $WH$ for that class. Note that cross-channel average pooling (CCAP) is an alternative to CCMP that simply substitutes the max pooling operation with average pooling. However, CCAP averages each element across the group, which may suppress the peaks of the feature channels, i.e., the attentions on local regions. In contrast, CCMP preserves these attentions, and is found to be beneficial for fine-grained classification.

GAP: Global Average Pooling [20] computes the average response of each feature channel, resulting in a $c$-dimensional vector where each element corresponds to one individual class.

Finally, we use the CE loss function to compute the dissimilarity between the ground-truth labels and the predicted probabilities given by the softmax function applied after the GAP operation.
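Putting CCMP and GAP together, the discriminality stream of Equations 4–5 can be sketched as follows (CWA omitted for brevity; toy NumPy code, not the authors' implementation):

```python
import numpy as np

# Toy sketch of the discriminality stream (Eqs. 4-5): CCMP takes the
# element-wise max across the xi channels of each class group, GAP averages
# the pooled map to one score per class, and a CE loss is taken over classes.
def discriminality_logits(groups):
    """groups: (c, xi, WH) -> (c,) one score per class."""
    ccmp = groups.max(axis=1)      # CCMP: max across the xi channels
    return ccmp.mean(axis=1)       # GAP: average over spatial positions

def l_dis(groups, label):
    logits = discriminality_logits(groups)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()           # softmax over the c class scores
    return -np.log(probs[label])   # cross-entropy with the ground truth
```

If only the channels of group $i$ respond strongly, class $i$ receives the largest score, which is exactly what the CE loss then rewards.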

III-B The Diversity Component

The diversity component is an approximate distance measurement on feature channels that computes the total similarity of all the channels in a group. It is computationally cheaper, with constant complexity, than common measurements such as the Euclidean distance or the Kullback–Leibler divergence, which have quadratic complexity. The diversity component, illustrated in the right block of Figure 3(a), drives the feature channels in a group to become different from each other through training. In other words, different feature channels of a class should focus on different regions of the image, rather than all channels focusing on the most discriminative region. It thus reduces redundant information by diversifying the feature channels within every group, and helps to discover different discriminative regions for every class in an image. This operation can be interpreted as a cross-channel de-correlation that captures details from different salient regions of an image. After the softmax, we impose supervision directly on the convolutional filters by introducing a CCMP followed by a spatial-dimension summation to measure the degree of intersection. The diversity-specific loss component can be defined as

$$L_{div}(\mathcal{F}) = \frac{1}{c} \sum_{i=1}^{c} h(\mathbf{F}_i), \tag{6}$$

where $h(\mathbf{F}_i)$ is defined as

$$h(\mathbf{F}_i) = \sum_{k=1}^{WH} \max_{j = 1, \ldots, \xi} \big[\mathrm{softmax}(\mathbf{F}_i)\big]_{j,k}. \tag{7}$$
The softmax function here is a normalization over the spatial dimensions, and the CCMP plays the same role as it does in the discriminality component.

The upper bound of $h(\mathbf{F}_i)$, equal to $\xi$, is reached for extremely different feature channels, meaning that they focus on different local regions, while the lower bound of 1 is reached for identical feature channels, which is the case that needs to be optimized away, as clearly shown in Figure 4. Ideally, we intend to maximize this term, which justifies the minus sign in Equation 3. It is to be noted that the diversity component cannot work alone for classification; it acts as a regularizer on top of the discriminality component to implicitly discover different discriminative regions in an image. We discuss the effect of the diversity component with visualization results in Section IV-D.
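The diversity term of Equations 6–7 and its bounds can be illustrated with a small NumPy sketch (assumed shapes as in the text; not the released code):

```python
import numpy as np

# Toy sketch of the diversity term (Eqs. 6-7): each channel is softmax-
# normalised over its spatial positions, CCMP takes the max across channels
# at every position, and the spatial sum measures how little they overlap.
# h ranges from 1 (identical channels) up to xi (fully disjoint peaks).
def h_div(group):
    """group: (xi, WH) channels of one class -> scalar diversity score."""
    e = np.exp(group - group.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)   # spatial softmax per channel
    return p.max(axis=0).sum()             # CCMP, then spatial summation

def l_div(groups):
    """groups: (c, xi, WH) -> average of h over all class groups."""
    return float(np.mean([h_div(g) for g in groups]))
```

Two channels peaking at the same location give $h \approx 1$, while channels peaking at distinct locations give $h \approx \xi$, matching the bounds discussed above.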

Datasets #Category #Training #Testing
Stanford Cars
TABLE I: Statistics of datasets.
Datasets / feature channels / feature channels
CUB-200-2011 / /
Stanford Cars / /
Datasets / feature channels / feature channels
FGVC-Aircraft / /
Flowers-102 / /
TABLE II: ξ value assignment while using the pre-trained VGG/ResNet with 512/2048 feature channels.
Method Base Model Flowers-102
Det.+seg. (CVPR [1]) SVM
Overfeat (CVPR workshop [32]) Overfeat
Selective joint FT (CVPR17 [13]) ResNet
PC (ECCV [10] ) B-CNN
PC (ECCV [10] ) DenseNet
MC-Loss ResNet
MC-Loss B-CNN 97.7
TABLE III: Experimental results (%) on the Flowers-102 dataset using the pre-trained VGG and ResNet.

IV Experimental Results and Discussions

In this section, we evaluate the performance of our proposed framework on the fine-grained image classification task. Firstly, the datasets and implementation details are introduced in Sections IV-A and IV-B, respectively. Classification accuracy comparisons with other state-of-the-art methods are then provided in Section IV-C. Finally, to illustrate the advantages of the different loss components and design choices, a comprehensive ablation study is provided in Section IV-D.

IV-A Datasets

We evaluate the proposed MC-Loss on four widely used fine-grained image classification datasets, namely Caltech-UCSD Birds (CUB-200-2011) [36], FGVC-Aircraft [28], Stanford Cars [16], and Flowers-102 [29]. A detailed summary of the datasets is provided in Table I. In order to keep consistency with the other datasets, which are divided into training and test sets only, we use both the training and validation sets of Flowers-102 for training. Moreover, we only use the category labels in our experiments.

Fig. 4: A graphical explanation of the diversity component. Assuming that each feature channel is one-hot normalized by softmax, $h(\mathbf{F}_i)$ reaches the upper bound ($\xi$) if each feature channel has its one in a distinct location, i.e., focusing on different local regions. Conversely, for identical feature channels, $h(\mathbf{F}_i)$ reaches the lower bound (1).
Method Base Model CUB-200-2011 FGVC-Aircraft Stanford Cars Model Component
FT ResNet (CVPR [40]) ResNet D+A
B-CNN (ICCV [22]) VGG B +A
KA (ICCV [4]) VGG B+A+Conv.(,)
KP (CVPR [8]) VGG B+ kernel pooling+A
MA-CNN (ICCV [50]) VGG C + A + channel grouping layers
PC (ECCV [10]) B-CNN B+A
PC (ECCV [10]) DenseNet E + A
DFL-CNN (CVPR [40]) VGG B+A+Conv.(,)
DFL-CNN (CVPR [40]) ResNet D+A+Conv.(,)
NTS-Net (ECCV [46]) ResNet D+A+Conv.(,)
WPS-CPM (CVPR [12]) GoogleNet + ResNet 90.4 - - GoogleNet + D+A
TASN (CVPR [51]) ResNet - D+A
MC-Loss VGG16 B+A
MC-Loss ResNet50 D+A
MC-Loss B-CNN 92.9 94.4 B+A
TABLE IV: Experimental results (%) on the CUB-200-2011, FGVC-Aircraft, and Stanford Cars datasets, respectively, with pre-trained VGG and ResNet. In particular, A, B, C, D, and E denote stochastically initialized classification layers (c-layers), pre-trained VGG16 with c-layers removed, pre-trained VGG19 with c-layers removed, pre-trained ResNet50 with c-layers removed, and pre-trained DenseNet161 with c-layers removed, respectively. The best and second-best results are marked in bold and italic fonts, respectively.
Method Base Model CUB-200-2011 FGVC-Aircraft Stanford Cars Flowers-102
MC-Loss (=) VGG
MC-Loss (=) VGG
MC-Loss (=) VGG 65.98 89.20 90.85 83.23
MC-Loss (=) VGG
MC-Loss (=) VGG
MC-Loss () VGG
TABLE V: Influence of the feature channel number ξ on four fine-grained image classification datasets (trained from scratch). ξ = n means that each class has n feature channels.

IV-B Implementation Details

First, it is to be noted that the number of channels in the output feature maps extracted from a pre-trained VGG (ResNet) is fixed at 512 (2048). Say, for example, we want to fix $\xi = 3$ uniformly for every class: this would require 600, 300, 588, and 306 feature channels for CUB-200-2011 (with 200 classes), FGVC-Aircraft (with 100 classes), Stanford Cars (with 196 classes), and Flowers-102 (with 102 classes), respectively. This is not feasible with the pre-trained VGG (ResNet), since the number of feature channels is fixed at 512 (2048). On the other hand, we intend to exploit the rich discriminative features of the VGG (ResNet) pre-trained on the large ImageNet dataset, and we fine-tune the pre-trained models with our proposed loss function in Equation 2. Therefore, we assign $\xi$ non-uniformly in order to make use of the pre-trained VGG (ResNet). For example, when fine-tuning the VGG pre-trained on the ImageNet classification dataset on CUB-200-2011, we assign three feature channels to each of the first 112 classes and model the remaining 88 classes with two feature channels each, so that the assignments sum to 512. Please refer to Table II for details.
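The non-uniform assignment can be sketched as follows; the helper name assign_channels is ours, but the 112/88 split for CUB-200-2011 with a 512-channel budget follows arithmetically from the budget and class count:

```python
# Sketch of the non-uniform channel assignment: with a fixed channel budget N
# and c classes, the first (N mod c) classes get one extra channel so the
# per-class counts sum exactly to N. The helper name is ours (illustrative).
def assign_channels(N, c):
    base, extra = divmod(N, c)
    return [base + 1] * extra + [base] * (c - extra)
```

For instance, assign_channels(512, 200) gives 112 classes three channels each and the remaining 88 classes two channels each.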

In order to compare our proposed loss function with other state-of-the-art methods (see Tables III and IV), we resize every image to a fixed size (following others), then extract features using the VGG (ResNet) and the B-CNN [22] based on a VGG model pre-trained on the ImageNet classification dataset. We use the Stochastic Gradient Descent optimizer and batch normalization as the regularizer. The learning rate of the pre-trained feature extraction layers is kept fixed, while the learning rate of the fully connected layers is set higher initially and decayed in steps during training. The number of training epochs, the weight decay, and the two MC-Loss hyper-parameters $\mu$ and $\lambda$ are set empirically.

Method Base Model CUB-200-2011 FGVC-Aircraft Stanford Cars Flowers-102
CE loss VGG / ResNet / / / /
Center loss [42] VGG / ResNet / / / /
A-softmax loss [23] VGG / ResNet / / / /
Focal loss [21] VGG / ResNet / / / /
COCO loss [24] VGG / ResNet / / / /
LGM loss [37] VGG / ResNet / / / /
LMCL [39] VGG / ResNet / / / /
MC-Loss VGG / ResNet 65.98 / 59.41 89.2 / 85.57 90.85 / 87.47 83.23 / 79.54
TABLE VI: Comparisons of classification accuracies (%) with different loss functions using the VGG and the ResNet as backbone architectures (trained from scratch). The best and second-best results are marked in bold and italic fonts, respectively. Results on the left and right of the slashes are for the VGG and the ResNet, respectively.
Method Base Model CUB-200-2011 FGVC-Aircraft Stanford Cars Flowers-102
MC-Loss VGG 65.98 89.20 90.85 83.23
MC-Loss minus VGG
MC-Loss minus VGG
MC-Loss minus CWA VGG
TABLE VII: Ablation study of the MC-Loss (trained from scratch) on four fine-grained image classification datasets.
Fig. 5: Visualization of the localized regions returned from Grad-CAM [31] based on a VGG model (trained from scratch) optimized by the MC-Loss. The higher energy region denotes the more discriminative part in the image. The first column represents the original image. The second to seventh columns show visualizations of the localization regions obtained from feature channels (), respectively. The last column represents the visualizations of the merged localization regions of aforementioned feature channels. Red boxes: redundant channels; green boxes: channels that exhibit localized regions.
Fig. 6: Channel visualizations (). The first column represents the original image. The second to fourth columns show visualizations of the localization regions obtained from feature channels (), respectively. The last column represents the visualizations of the merged localization regions of aforementioned feature channels.

IV-C Comparisons with State-of-the-Art Methods

Irrespective of the backbone network, the proposed MC-Loss achieves a consistent improvement over the other state-of-the-art methods. In particular, the proposed MC-Loss achieves the best accuracy on the Flowers-102 dataset. Detailed results are listed in Table III. From Table IV, it can be observed that our MC-Loss achieves the best accuracies on FGVC-Aircraft and Stanford Cars, respectively. Moreover, it obtains a competitive result on the CUB-200-2011 dataset.

The component settings of the referred methods are also listed in Table IV. While most of the methods modify their base architectures, the MC-Loss performs best on most datasets without any structural modification or extra parameters. The only other purely loss-based procedure, Pairwise Confusion (PC), is built on the same bilinear CNN (B-CNN) as ours, and the MC-Loss achieves a remarkable improvement over PC on all four datasets. When the backbone of the MC-Loss is the pre-trained VGG, the MC-Loss does not perform better on the CUB-200-2011 dataset; one reason is the lack of feature channels. As mentioned in Table II, with 512 feature channels, some classes of the CUB-200-2011 dataset have only two feature channels each. Since the birds in CUB-200-2011 have rich discriminative regions, it is difficult to obtain robust descriptions with an insufficient number of feature channels. Hence, the performance is worse than some referred methods. For the Stanford Cars dataset, although some classes also have only two feature channels, the MC-Loss can still perform well, because cars have fewer discriminative regions than birds and the discriminative ability of the MC-Loss can compensate for the lack of feature channels.

IV-D Ablation Study

For the ablation study, we train the backbone architecture (the VGG or the ResNet) from scratch using the loss function in Equation 2, and we set the number of output channels according to the requirement of assigning $\xi$ uniformly for every class, which is not possible with the pre-trained VGG because of its fixed number of output channels. All images are resized to the same fixed size. The learning rate of the complete network is set higher initially and decayed in steps during training, while the other settings remain the same as earlier. In addition, the hyper-parameters $\mu$ and $\lambda$ of the MC-Loss are again set empirically. Although the pre-trained VGG provides much better results, we conduct this ablation study in order to justify the choice/potential of the different hyper-parameters (such as $\xi$) and the individual components of our loss function, with the backbone architecture trained from scratch.

Influence of $\xi$: In order to judge the influence of $\xi$ on accuracy, we vary $\xi$ uniformly (for every class) over a range of values. Alongside, we also evaluate the non-uniform assignment setup detailed in Table II. From Table V, it can be seen that the MC-Loss with $\xi = 1$ performs the worst, which signifies that learning only one discriminative region for each class is not enough for fine-grained image classification. A uniform $\xi$ achieves higher accuracy on the CUB-200-2011 dataset than the non-uniform assignment, demonstrating that assigning only two feature channels to a class (recall from Table II that some classes contain only two feature channels each) is not sufficient to capture the discriminative information in bird images. Increasing $\xi$ beyond a moderate value decreases performance while adding computational cost. We speculate that performance drops because, when $\xi$ is large, the number of channels employed exceeds the number of useful "parts", introducing redundant channels that are counter-effective. We also verified this through a few visualizations in Figure 5. Overall, it is to be noted that the number of feature channels has a remarkable influence on classification performance, and accuracy is optimal at a moderate $\xi$. Therefore, had we been able to train a new VGG model on the ImageNet dataset with a sufficient number of output channels, such that $\xi$ could be set uniformly for every class in the fine-grained dataset, the MC-Loss optimized on top of that pre-trained model could be expected to perform even better than other methods for fine-grained classification.

Comparison with other loss functions: Table VI compares the proposed MC-Loss with other commonly used loss functions on the four widely used fine-grained image classification datasets. Using the VGG model as the feature extractor, the proposed MC-Loss achieves the best accuracy on each of CUB-200-2011, FGVC-Aircraft, Stanford Cars, and Flowers-102. Using the ResNet model as the feature extractor, the MC-Loss again obtains the best performance on all four datasets. In summary, the proposed MC-Loss outperforms all compared methods on all four fine-grained image classification datasets for both the VGG and the ResNet base networks.

Visualization: In order to illustrate the advantages of the proposed MC-Loss intuitively, we apply Grad-CAM [31] to visualize the feature channels. The first row of Figure 6 shows the most discriminative regions proposed by the VGG model when trained with the complete MC-Loss. It can be observed that the three feature channels corresponding to a given bird class focus on different discriminative regions, e.g., head, feet, wings, and body. The second row of Figure 6 shows the most discriminative regions when only the discriminality component of the MC-Loss is used. Without the diversity component, the three feature channels learned by the VGG model tend to be similar to one another, indicating that in its absence the learned feature channels cannot focus on different discriminative regions, which reduces their utility for fine-grained image classification. The last row of Figure 6 shows an example of the most discriminative regions predicted by the VGG model optimized by the MC-Loss without the channel-wise attention operation. It can be clearly seen that when the channel-wise attention module is removed, only one of the three feature channels represents the correct discriminative region; the other two, although different from each other, do not necessarily learn any discriminative information.
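Grad-CAM weights feature maps by their averaged gradients before producing a class heatmap; as a rough, gradient-free sketch of the channel-level view shown in Figure 6 (a hypothetical helper, not the authors' exact pipeline), one can simply upsample a single channel's activation to image resolution and normalize it to [0, 1] for overlaying:

```python
import numpy as np

def channel_heatmap(channel: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour upsampling of one channel's (h, w) activation
    map to (out_h, out_w), normalized to [0, 1] for overlay display."""
    h, w = channel.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    up = channel[rows][:, cols].astype(float)
    span = up.max() - up.min()
    return (up - up.min()) / span if span > 0 else np.zeros_like(up)
```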

Table VII lists the quantitative comparisons for the aforementioned phenomena. We can observe that if we use only the discriminality component of the MC-Loss, the classification accuracy drops on all of CUB-200-2011, FGVC-Aircraft, Stanford Cars, and Flowers-102. Furthermore, if we also remove the channel-wise attention in the discriminality component, the accuracies decrease even further. Alternatively, in contrast to Equation 6, one could consider only the channel group that belongs to the ground-truth class. In Table VII, we report the performance of this design as MC-Loss-V, with everything else unchanged. We can see that the classification accuracy drops significantly. The reason is that with our proposed diversity component all channel groups influence one another during training, whereas this variant only encourages diversity within the channel group of the ground-truth class. In other words, if we consider only one channel group during training, the other groups of channels may lose their diversity. Intuitively, Equation 6 is able to cultivate cross-group/class information, which essentially helps the final classification. These results are consistent with the analysis of the visualizations in Figure 6.
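The role of averaging the diversity term over all channel groups can be sketched numerically. Below is a hedged plain-NumPy approximation (illustrative function names, not the authors' implementation) of the diversity measure: a spatial softmax per channel, an element-wise max across each group's channels, a sum over locations, and finally an average over all groups rather than only the ground-truth one:

```python
import numpy as np

def spatial_softmax(x: np.ndarray) -> np.ndarray:
    """Softmax over the last (flattened spatial) axis of (groups, xi, HW)."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def diversity(feature_map: np.ndarray, channels_per_class: int) -> float:
    """Per-group diversity: max across a group's softmaxed channels at each
    location, summed spatially; averaged over ALL groups.  Ranges from 1
    (channels peak at identical locations) up to channels_per_class."""
    C, H, W = feature_map.shape
    g = feature_map.reshape(C // channels_per_class, channels_per_class, H * W)
    g = spatial_softmax(g)
    return float(g.max(axis=1).sum(axis=1).mean())
```

The MC-Loss-V variant discussed above would instead index only the ground-truth group's score, leaving the remaining groups unconstrained, which matches the observed loss of diversity in those groups.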

V Conclusions

In this paper, we show that it is possible to learn discriminative, localized part features for fine-grained classification with just a single loss. The proposed mutual-channel loss (MC-Loss) can effectively drive the feature channels to be more discriminative and to focus on different regions, without the need for fine-grained bounding-box/part annotations. We show that our loss can be applied to different network architectures without introducing any extra parameters. Experiments on all four fine-grained classification datasets confirm the superiority of the MC-Loss. In the future, we will investigate means of automatically searching for the number of channels per class, without necessarily introducing considerably more network parameters. We will also look into applying the MC-Loss to other tasks that rely on local and discriminative regions, and into extending it to work across different modalities (e.g., fine-grained sketch-based image retrieval).


  1. A. Angelova and S. Zhu (2013) Efficient object detection and segmentation for fine-grained recognition. In Proceedings of IEEE CVPR.
  2. T. Berg and P. Belhumeur (2013) POOF: part-based one-vs.-one features for fine-grained categorization, face verification, and attribute estimation. In Proceedings of IEEE CVPR.
  3. S. Branson, G. Van Horn, S. Belongie and P. Perona (2014) Bird species categorization using pose normalized deep convolutional nets. arXiv preprint arXiv:1406.2952.
  4. S. Cai, W. Zuo and L. Zhang (2017) Higher-order integration of hierarchical convolutional activations for fine-grained visual categorization. In Proceedings of IEEE ICCV.
  5. Y. Chai, V. Lempitsky and A. Zisserman (2013) Symbiotic segmentation and part localization for fine-grained categorization. In Proceedings of IEEE ICCV.
  6. L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu and T. Chua (2017) SCA-CNN: spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of IEEE CVPR.
  7. R. Cong, J. Lei, H. Fu, W. Lin, Q. Huang, X. Cao and C. Hou (2017) An iterative co-saliency framework for RGBD images. IEEE Transactions on Cybernetics 49 (1), pp. 233–246.
  8. Y. Cui, F. Zhou, J. Wang, X. Liu, Y. Lin and S. Belongie (2017) Kernel pooling for convolutional neural networks. In Proceedings of IEEE CVPR.
  9. C. Deng, E. Yang, T. Liu, J. Li, W. Liu and D. Tao (2019) Unsupervised semantic-preserving adversarial hashing for image search. IEEE Transactions on Image Processing 28 (8), pp. 4032–4044.
  10. A. Dubey, O. Gupta, P. Guo, R. Raskar, R. Farrell and N. Naik (2018) Pairwise confusion for fine-grained visual classification. In Proceedings of ECCV.
  11. J. Fu, H. Zheng and T. Mei (2017) Look closer to see better: recurrent attention convolutional neural network for fine-grained image recognition. In Proceedings of IEEE CVPR.
  12. W. Ge, X. Lin and Y. Yu (2019) Weakly supervised complementary parts models for fine-grained image classification from the bottom up. In Proceedings of IEEE CVPR.
  13. W. Ge and Y. Yu (2017) Borrowing treasures from the wealthy: deep transfer learning through selective joint fine-tuning. In Proceedings of IEEE CVPR.
  14. I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville and Y. Bengio (2013) Maxout networks. arXiv preprint arXiv:1302.4389.
  15. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of IEEE CVPR.
  16. J. Krause, M. Stark, J. Deng and L. Fei-Fei (2013) 3D object representations for fine-grained categorization. In Proceedings of IEEE ICCV Workshops.
  17. J. Lei, J. Duan, F. Wu, N. Ling and C. Hou (2016) Fast mode decision based on grayscale similarity and inter-view correlation for depth map coding in 3D-HEVC. IEEE Transactions on Circuits and Systems for Video Technology 28 (3), pp. 706–718.
  18. J. Lei, Y. Song, B. Peng, Z. Ma, L. Shao and Y. Song (2019) Semi-heterogeneous three-way joint embedding network for sketch-based image retrieval. IEEE Transactions on Circuits and Systems for Video Technology.
  19. X. Li, L. Yu, D. Chang, Z. Ma and J. Cao (2019) Dual cross-entropy loss for small-sample fine-grained vehicle classification. IEEE Transactions on Vehicular Technology 68 (5), pp. 4204–4212.
  20. M. Lin, Q. Chen and S. Yan (2013) Network in network. arXiv preprint arXiv:1312.4400.
  21. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of IEEE ICCV.
  22. T. Lin, A. RoyChowdhury and S. Maji (2015) Bilinear CNN models for fine-grained visual recognition. In Proceedings of IEEE ICCV, pp. 1449–1457.
  23. W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj and L. Song (2017) SphereFace: deep hypersphere embedding for face recognition. In Proceedings of IEEE CVPR.
  24. Y. Liu, H. Li and X. Wang (2017) Rethinking feature discrimination and polymerization for large-scale recognition. arXiv preprint arXiv:1710.00870.
  25. Z. Ma, D. Chang, J. Xie, Y. Ding, S. Wen, X. Li, Z. Si and J. Guo (2019) Fine-grained vehicle classification with channel max pooling modified CNNs. IEEE Transactions on Vehicular Technology.
  26. Z. Ma, Y. Lai, W. B. Kleijn, Y. Song, L. Wang and J. Guo (2018) Variational Bayesian learning for Dirichlet process mixture of inverted Dirichlet distributions in non-Gaussian image feature modeling. IEEE Transactions on Neural Networks and Learning Systems 30 (2), pp. 449–463.
  27. Z. Ma, J. Xie, Y. Lai, J. Taghia, J. Xue and J. Guo (2019) Insights into multiple/single lower bound approximation for extended variational inference in non-Gaussian structured data modeling. IEEE Transactions on Neural Networks and Learning Systems.
  28. S. Maji, E. Rahtu, J. Kannala, M. Blaschko and A. Vedaldi (2013) Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151.
  29. M. Nilsback and A. Zisserman (2008) Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing.
  30. Y. Peng, X. He and J. Zhao (2018) Object-part attention model for fine-grained image classification. IEEE Transactions on Image Processing 27 (3), pp. 1487–1500.
  31. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh and D. Batra (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of IEEE ICCV.
  32. A. Sharif Razavian, H. Azizpour, J. Sullivan and S. Carlsson (2014) CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of IEEE CVPR Workshops.
  33. K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  34. K. Song, F. Nie, J. Han and X. Li (2017) Parameter free large margin nearest neighbor for distance metric learning. In Proceedings of AAAI.
  35. T. Volkmer, J. R. Smith and A. P. Natsev (2005) A web-based system for collaborative annotation of large image and video collections: an evaluation and user study. In Proceedings of the 13th Annual ACM International Conference on Multimedia.
  36. C. Wah, S. Branson, P. Welinder, P. Perona and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 dataset.
  37. W. Wan, Y. Zhong, T. Li and J. Chen (2018) Rethinking feature distribution for loss functions in image classification. In Proceedings of IEEE CVPR.
  38. D. Wang, Z. Shen, J. Shao, W. Zhang, X. Xue and Z. Zhang (2015) Multiple granularity descriptors for fine-grained categorization. In Proceedings of IEEE ICCV.
  39. H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li and W. Liu (2018) CosFace: large margin cosine loss for deep face recognition. In Proceedings of IEEE CVPR.
  40. Y. Wang, V. I. Morariu and L. S. Davis (2018) Learning a discriminative filter bank within a CNN for fine-grained recognition. In Proceedings of IEEE CVPR.
  41. K. Wei, M. Yang, H. Wang, C. Deng and X. Liu (2019) Adversarial fine-grained composition learning for unseen attribute-object recognition. In Proceedings of IEEE ICCV, pp. 3741–3749.
  42. Y. Wen, K. Zhang, Z. Li and Y. Qiao (2016) A discriminative feature learning approach for deep face recognition. In Proceedings of ECCV.
  43. L. Xie, Q. Tian, R. Hong, S. Yan and B. Zhang (2013) Hierarchical part matching for fine-grained visual categorization. In Proceedings of IEEE ICCV.
  44. P. Xu, C. K. Joshi and X. Bresson (2019) Multi-graph transformer for free-hand sketch recognition. arXiv preprint arXiv:1912.11258.
  45. P. Xu (2020) Deep learning for free-hand sketch: a survey. arXiv preprint arXiv:2001.02600.
  46. Z. Yang, T. Luo, D. Wang, Z. Hu, J. Gao and L. Wang (2018) Learning to navigate for fine-grained classification. In Proceedings of ECCV.
  47. K. Zhang, N. Liu, X. Yuan, X. Guo, C. Gao, Z. Zhao and Z. Ma (2019) Fine-grained age estimation in the wild with attention LSTM networks. IEEE Transactions on Circuits and Systems for Video Technology.
  48. N. Zhang, J. Donahue, R. Girshick and T. Darrell (2014) Part-based R-CNNs for fine-grained category detection. In Proceedings of ECCV.
  49. X. Zhang, H. Xiong, W. Zhou, W. Lin and Q. Tian (2016) Picking deep filter responses for fine-grained image recognition. In Proceedings of IEEE CVPR.
  50. H. Zheng, J. Fu, T. Mei and J. Luo (2017) Learning multi-attention convolutional neural network for fine-grained image recognition. In Proceedings of IEEE ICCV.
  51. H. Zheng, J. Fu, Z. Zha and J. Luo (2019) Looking for the devil in the details: learning trilinear attention sampling network for fine-grained image recognition. In Proceedings of IEEE CVPR.