Aligned to the Object, not to the Image:
A Unified Pose-aligned Representation
for Fine-grained Recognition
Dramatic appearance variation due to pose constitutes a great challenge in fine-grained recognition, one which recent methods using attention mechanisms or second-order statistics fail to adequately address. Modern CNNs typically lack an explicit understanding of object pose and are instead confused by entangled pose and appearance. In this paper, we propose a unified object representation built from a hierarchy of pose-aligned regions. Rather than representing an object by regions aligned to image axes, the proposed representation characterizes appearance relative to the object’s pose, using pose-aligned patches whose features are robust to variations in pose, scale and rotation. We propose an algorithm that performs pose estimation and forms the unified object representation as the concatenation of hierarchical pose-aligned region features, which is then fed into a classification network. The proposed algorithm surpasses the performance of other approaches, improving on the state of the art by nearly 2% on the widely-used CUB-200 dataset and by more than 8% on the much larger NABirds dataset. The effectiveness of this paradigm relative to competing methods suggests the critical importance of disentangling pose and appearance for continued progress in fine-grained recognition.
What makes fine-grained visual categorization (FGVC), commonly referred to as fine-grained recognition, different from general visual categorization? One important distinction lies in the difficulty of the datasets. General-purpose visual categorization often involves the classification of everyday objects, such as chairs, bicycles and dogs, which are easy for humans to identify. Fine-grained recognition, on the other hand, consists of more detailed classifications such as identifying the species of a bird. This is extremely difficult for non-expert humans as it requires familiarity with domain knowledge and hundreds of hours of training. Computer algorithms for fine-grained recognition have the potential to be far more accurate than most humans and can thus benefit millions of people by providing services like species recognition through mobile applications [25, 3, 1].
An intrinsic and readily observed quality of fine-grained recognition is small inter-category variance coupled with large intra-category variance. Discriminative features of two visually similar categories often lie in a few key locations, while the appearances of objects from the same category can differ dramatically due simply to pose variation. This entangling of appearance and pose presents a great challenge and motivates the need for stable appearance features, ones that are nearly invariant to variations in pose, scale and rotation.
It is almost instinctive for humans to identify and visually compare key locations across objects in different poses, establishing correspondences. Convolutional neural networks, however, struggle at this task because the convolutional mechanisms are purely appearance-based and lack an understanding of pose or geometry. The built-in pooling mechanisms can tolerate a certain amount of scale and rotation variation [7, 5, 33, 43, 6], but exactly how much is still largely an open question. We show this in Figure 2 via the visualization of some final convolutional layer responses, displaying the top-activated images together with the feature map as a mask. It is evident that this convolutional filter is attuned to red beaks. However, due to its lack of part-awareness, this filter also fires strongly on visually similar parts such as red crowns, red throats, red eyes, etc. This noisy, entangled part-appearance representation causes confusion for the classifier.
In the feature embedding space, dramatic pose variation pushes images of the same category farther apart and pulls images of visually-similar categories closer together, as shown in Figure 1. It is therefore vital that pose-aligned regions, which explicitly factor out pose variation, be the building block of the disentangled image representation.
Recent efforts in fine-grained recognition have largely focused on two directions. The first comprises algorithms based on second-order statistics [29, 14, 21, 8]. Representative works include Bilinear Pooling and its reduced-memory variants [14, 21], as well as extensions to higher-order statistics. The idea is to project the features onto a higher-order space where they can be linearly separated. Second-order statistics methods have sound theoretical support and work well in practice. However, they look at the image globally, and thus have little hope of finding subtle, highly-localized differences. They also lack interpretability and offer few insights for further improvement.
The other direction is attention-based methods [13, 26, 32, 37, 44, 53], which use subnetworks to propose possible discriminative regions to attend to. However, the regions proposed by these networks are often only weakly supervised by a heuristic loss function, with no guarantee that they attend to the right locations. Both of these directions suffer from a lack of pose awareness, and the entanglement of pose and appearance features limits their performance. Moreover, training data is often scarce in the long-tailed distributions seen in many fine-grained domains; in such cases, both techniques suffer as the limited training imagery does not adequately span the space of pose and viewing angle for each category, hindering their ability to recognize any species in any pose.
Based on the above observations, we propose to disentangle pose and appearance via a unified object representation built upon pose-aligned regions: rectangular patches defined relative to two keypoint anchors. The final object representation is an aggregation of the features across all of the pose-aligned regions. This representation comprises a pose-invariant and over-complete basis of features from multiple scales. We contrast the pose-aligned regions with weakly-supervised regions that are generated in a purely data-driven fashion and with “axis-aligned” rectangular bounding boxes centered around a keypoint or landmark. The features from these types of regions are subject to variation in pose, scale and rotation. We experimentally demonstrate that axis-aligned regions are inferior to pose-aligned regions with respect to classification accuracy (see Figure 6).
To automate the process of applying the unified object representation to fine-grained recognition, we propose an algorithm that first performs pose estimation for keypoint detection, enabling the generation of pose-aligned region features. The local features from these aligned regions, which vary in size/scale relative to the object, are concatenated to form the unified representation for the input image and are then fed into a classification network to produce the final prediction. We call the proposed algorithm PAIRS: Pose and Appearance Integration for Recognizing Subcategories. It achieves state-of-the-art results on two key fine-grained datasets: CUB-200-2011 and NABirds. Keypoint annotations are used only at training time. In terms of annotation cost, keypoint annotations are less expensive and time-consuming than collecting additional data samples, because keypoints can be annotated by non-experts whereas fine-grained image category annotation requires the consensus of multiple domain experts.
2 Background and Related Work
Fine-grained visual categorization (FGVC) lies between generic category-level object recognition like the VOC , ImageNet , COCO , etc. and instance-level classification like facial recognition and other visual biometrics. The challenges of FGVC are many-fold: differences between similar species are often subtle and highly-localized and thus difficult even for (non-expert) humans to identify. Dramatic pose changes introduce great intra-class variance. Generalization also becomes an issue as the network struggles to find truly useful and discriminative features.
FGVC has drawn wide attention in the computer vision community. Early works include [9, 11, 31, 46, 47, 49, 50]. Birdlet, a volumetric poselet representation, is proposed to account for pose and appearance variation in .  further proposes two pose-normalized descriptors based on computationally-efficient deformable part models. Although these early works seek to integrate pose and appearance as our method does, they rely heavily on hand-engineered descriptors and thus achieved limited classification accuracy.
Our work is related to part-based CNN models [4, 18, 22, 48, 51, 27], which seek to decompose the object into semantic parts.  first employs an object detection framework – R-CNN  – for object and part detection. Part-Stacked CNN  proposes a fully convolutional network for keypoint detection and a two-stream convolutional network for object- and part-level feature extraction. Deep LAC  proposes a valve linkage function for back-propagation chaining, forming a deep localization, alignment and classification system.  introduces an end-to-end framework for joint learning of pose estimation, normalization and recognition. These models are all based on a limited number of single-keypoint patches, which can be poorly aligned in the presence of pose and viewpoint variance.
Perhaps our work is most closely related to POOF , which also uses keypoint-pair patches. Our algorithm differs in that we automatically detect keypoints instead of using ground-truth ones. Moreover, the POOF approach computes 5000 patches with corresponding features to produce the final classification, whereas we compute 35–70.
There are also works targeting the object alignment problem. Unlike previous methods which rely on detectors for part localization, [15, 16] proposes to localize distinctive details by roughly aligning the objects using just the overall shape. Spatial transformer network  introduces a differentiable affine transformation learning layer to transform and align the object or part of interest.
Another direction in fine-grained recognition is feature correlation and kernel mapping.  proposes a bilinear pooling layer to compute a second-order polynomial kernel mapping on CNN features. Many works have followed this simple paradigm [14, 21, 8]. Compact bilinear pooling  proposes a compact representation to approximate the polynomial kernel, reducing memory usage. Low-rank bilinear pooling  represents the covariance features as a matrix and applies a low-rank bilinear classifier. Kernel pooling  proposes a general pooling framework that captures higher-order interactions of features in the form of kernels. This line of work achieves relatively good results with weak supervision; however, these methods attend to the whole image globally and lack a part-level information discovery mechanism, which limits further accuracy improvement.
Inspired by the human attention mechanism, many attempts have been made to guide the attention of the CNN model to informative object parts. Works along this direction include [13, 26, 32, 37, 44, 53].  proposes a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other.  leverages long short-term memory (LSTM) networks to unify new patch-candidate generation and informative part evaluation. That work establishes the current state-of-the-art performance on the CUB-200-2011  dataset, achieving an accuracy of 87.5% with part annotations. The key difference is that our PAIRS representation integrates pose and appearance information while explicitly achieving multi-level attention over semantic object parts.
3 PAIRS: Pose and Appearance Integration
We illustrate our algorithm pipeline in Figure 3. First, we propose a simple yet effective fully convolutional neural network for pose estimation. We follow the prevailing modular design paradigm by stacking convolutional blocks with similar topology, and show that our pose estimation network achieves superior results on the CUB-200 dataset both qualitatively and quantitatively. Second, given detected keypoint locations, a rectangular bounding box enclosing each keypoint pair is cropped from the original image and similarity-transformed to a uniform-sized patch (Figure 4), such that both keypoints sit at fixed positions across different images. As the representation is normalized to the keypoint locations, the patches are well-aligned, independent of the pose or the viewing angle. Third, we train separate CNN models as feature extractors for the pose-aligned patch representation. Lastly, we explore different classification architectures for the unified representation, based on the assumption that part contributions should vary across images and classes. We find, surprisingly, that the Multi-Layer Perceptron (MLP), while being the simplest method, achieves the best final classification accuracy.
3.1 Pose Estimation Network
Pose estimation networks usually follow one of two paradigms for prediction. The first is to directly regress discrete keypoint coordinates; representative approaches include . The alternative approach  instead uses a two-dimensional probability-distribution heat map to represent each keypoint location. We call the resulting multi-channel probability distribution matrix the pose tensor.
In this paper, we adopt the second strategy, proposing a fully convolutional network to produce the structured output distribution. Specifically, we take a pretrained classification network and remove the final classifier layer(s), retaining what can be seen as an encoder that captures strong visual semantics. Following the prevailing modular design, we stack repeated building blocks at the end of the network. Each block consists of one upsampling layer, one convolutional layer, one batch normalization layer and one ReLU non-linearity. The parameter-free bilinear interpolation layer is used for upsampling. The convolutional layer has a 1x1 kernel and halves the input channel size. Finally, an additional convolutional layer and upsampling layer are added to produce the pose tensor. There are many modifications one can make to enhance this basic model, including using larger 3x3 kernels, adding more convolutional layers to each building block, adding residual connections to each block, stacking more building blocks, and using a learnable transposed convolutional layer for upsampling. We find these variants provide only limited improvement while introducing more parameters, so we prefer the simpler architecture.
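The decoder described above can be sketched in PyTorch as follows; the 2048-channel encoder output (as in ResNet-50), the number of stacked blocks, and the 15 keypoint channels are illustrative assumptions rather than exact hyper-parameters:

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder block: parameter-free bilinear upsampling, then a 1x1
    convolution that halves the channel count, batch norm, and ReLU."""
    def __init__(self, in_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.conv = nn.Conv2d(in_ch, in_ch // 2, kernel_size=1)
        self.bn = nn.BatchNorm2d(in_ch // 2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(self.up(x))))

class PoseHead(nn.Module):
    """Stack of UpBlocks on top of a truncated encoder, followed by a final
    1x1 convolution and upsampling producing one heat-map channel per
    keypoint (the pose tensor)."""
    def __init__(self, in_ch=2048, num_blocks=3, num_keypoints=15):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(num_blocks):
            blocks.append(UpBlock(ch))
            ch //= 2
        self.blocks = nn.Sequential(*blocks)
        self.out = nn.Conv2d(ch, num_keypoints, kernel_size=1)
        self.final_up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, x):
        return self.final_up(self.out(self.blocks(x)))
```

Each block doubles the spatial resolution and halves the channel count, so a 7x7x2048 encoder output becomes a 112x112 pose tensor after three blocks plus the final upsampling.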
3.2 Patch Generation
Historically, part-based representations model parts either as rectangular regions [48, 12] or as keypoints. Keypoints are convenient for pose estimation. However, square or rectangular patches, each centered on a given keypoint and extracted to characterize the part’s appearance, are far from optimal in the presence of rotation or general pose variation. We instead propose to use keypoint pairs as anchor points for extracting pose-aligned patches.
Given two keypoints $\mathbf{p}_1$ and $\mathbf{p}_2$, we define the vector $\mathbf{u} = \mathbf{p}_2 - \mathbf{p}_1$, its unit vector $\hat{\mathbf{u}} = \mathbf{u} / \lVert\mathbf{u}\rVert$, and the unit vector $\hat{\mathbf{v}}$ perpendicular to $\hat{\mathbf{u}}$. For convenience we also define the distance $d = \lVert\mathbf{u}\rVert$. We seek to extract a region around $\mathbf{p}_1$ and $\mathbf{p}_2$ that is aligned with $\hat{\mathbf{u}}$ and has dimensions $w \times h$. The four corners of this rectangular region are then given by:

$$\mathbf{c}_{1,2} = \mathbf{p}_1 - \tfrac{w - d}{2}\,\hat{\mathbf{u}} \pm \tfrac{h}{2}\,\hat{\mathbf{v}}, \qquad \mathbf{c}_{3,4} = \mathbf{p}_2 + \tfrac{w - d}{2}\,\hat{\mathbf{u}} \pm \tfrac{h}{2}\,\hat{\mathbf{v}}.$$
A similarity transform is computed to extract the pose-normalized patch. Patches generated in this way contain stable pose-aligned features – features near these keypoints appear at the same location in the patch across different images, independent of the object’s pose or the camera viewing angle.
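The geometry above can be sketched in NumPy. The `margin` fraction and the fixed keypoint anchor positions (`fx`, `fy`) are hypothetical parameters for illustration, not values from our implementation:

```python
import numpy as np

def patch_corners(p1, p2, h, margin=0.25):
    """Corners of a pose-aligned rectangle anchored on keypoints p1, p2.
    The box is aligned with the p1->p2 axis, has height h, and extends
    margin * d beyond each keypoint along that axis."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.linalg.norm(p2 - p1)
    u = (p2 - p1) / d                 # unit vector along the keypoint axis
    v = np.array([-u[1], u[0]])       # unit vector perpendicular to u
    a, b = p1 - margin * d * u, p2 + margin * d * u
    half = 0.5 * h * v
    return np.stack([a - half, a + half, b + half, b - half])

def similarity_to_patch(p1, p2, out_w=224, out_h=224, fx=0.2, fy=0.5):
    """2x3 similarity transform sending p1 -> (fx*out_w, fy*out_h) and
    p2 -> ((1-fx)*out_w, fy*out_h), so both keypoints land at fixed
    positions in every extracted patch."""
    src1, src2 = np.asarray(p1, float), np.asarray(p2, float)
    dst1 = np.array([fx * out_w, fy * out_h])
    dst2 = np.array([(1 - fx) * out_w, fy * out_h])
    # Solve x' = a*x - b*y + tx, y' = b*x + a*y + ty for (a, b, tx, ty).
    A = np.array([[src1[0], -src1[1], 1, 0],
                  [src1[1],  src1[0], 0, 1],
                  [src2[0], -src2[1], 1, 0],
                  [src2[1],  src2[0], 0, 1]])
    a, b, tx, ty = np.linalg.solve(A, np.concatenate([dst1, dst2]))
    return np.array([[a, -b, tx], [b, a, ty]])
```

The returned 2x3 matrix can be passed to any affine warping routine to resample the patch at a uniform size.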
3.3 Patch Feature Extraction
A separate patch classification network is trained for each pose-aligned (A, B) patch as a feature extractor. The softmax outputs from the networks are concatenated as the representation for the input image. Alternatively, the final convolutional layer output after pooling can be used, with comparable results. We find that exploiting symmetric parts reduces the overall number of classifiers by nearly 50%, as described in Section 4.2. The proposed patch representation can be seen as a spatial pyramid that explicitly captures information about different parts at multiple spatial scales on the object.
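The concatenation step can be sketched as follows; on CUB, 69 patch networks with 200-way softmax outputs would yield a 13800-dimensional representation:

```python
import numpy as np

def unified_representation(patch_logits):
    """Concatenate per-patch softmax outputs into one feature vector.
    patch_logits: list of (num_classes,) score arrays, one per patch."""
    feats = []
    for z in patch_logits:
        e = np.exp(z - z.max())          # numerically stable softmax
        feats.append(e / e.sum())
    return np.concatenate(feats)
```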
3.4 Classification Network
To fully utilize the abundant patch representations, we explore different ways to form a strong classification network. Based on the assumption that only a small fraction of the patches contains discriminative information and that patch contributions should be weighted, we explore the following strategies.
1). Fixed patch selection: take the average score over a fixed number of top-ranking patches. This strategy also probes the potential of our PAIRS representation.
2). Dynamic patch selection: employ the sparsely gated network  to dynamically learn a selection function to select a fixed number of patches for each input.
3). Sequential patch weighting: apply a Long Short-Term Memory (LSTM) network to reweight the patch features sequentially.
4). Static patch weighting: learn a Multi-Layer Perceptron network, which essentially applies a non-linear weighting function to aggregate information from different patches.
We find surprisingly that the MLP network, while being the simplest network architecture, achieves the best accuracy out of all the attempts we made. Details are included in the experiment section.
4 Experimental Evaluation
We test our algorithm on two datasets, the CUB-200-2011 dataset and the NABirds dataset. The CUB-200-2011 contains 200 species of birds with 5994 training images and 5794 testing images. The NABirds dataset has 555 common species of birds in North America with a total number of 48,562 images. Class labels and keypoint locations are provided in both datasets.
4.1 Keypoint Prediction Performance
We use the PCK (Percentage of Correct Keypoints) score to measure the accuracy of keypoint prediction. A predicted keypoint $(\hat{x}, \hat{y})$ is “correct” if it lies within a small neighborhood of the ground-truth location $(x, y)$, or equivalently,

$$\sqrt{(\hat{x} - x)^2 + (\hat{y} - y)^2} \leq \alpha \cdot L,$$

where $\alpha$ is a constant factor and $L$ is the longer side of the bounding box.
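A minimal sketch of this metric over one image's keypoints, assuming a default $\alpha$ of 0.05 (the value of $\alpha$ is an assumption here, not taken from our protocol):

```python
import numpy as np

def pck(pred, gt, visible, bbox_long_side, alpha=0.05):
    """Percentage of Correct Keypoints: a prediction counts as correct if
    its Euclidean distance to the ground truth is within alpha * L, where
    L is the longer side of the bounding box. Invisible keypoints are
    excluded from both numerator and denominator."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    visible = np.asarray(visible, bool)
    dist = np.linalg.norm(pred - gt, axis=-1)
    correct = (dist <= alpha * bbox_long_side) & visible
    return correct.sum() / max(visible.sum(), 1)
```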
We evaluate our pose estimation network on CUB-200 and compare our PCK scores with others in Table 1. We achieve the highest score on all 15 keypoints by a considerable margin, doing especially well on legs and wings, where other models struggle to make precise predictions. Some visualization results are shown in Figure 5.
Although we localize wings and legs better than the baselines, they still have the lowest PCK in our model. This is caused by dramatic pose change as well as the appearance similarity between symmetric parts. We note that using keypoints to denote the wings is not always appropriate: wings are two-dimensional planar parts that spread over a relatively large area, so designating a single keypoint for a wing can be ambiguous – it is not easy to decide which point best represents the wing location. In fact, each ground-truth keypoint location in the CUB dataset is the average of five annotators’ results, and even they find it hard to reach a consensus.
4.2 Patch Classification Network
We adopt the ResNet-50 architecture as the patch classification network due to its high performance and compact GPU footprint. Alternate architectures like VGG and Inception can easily be adapted. We now discuss two considerations which facilitate training the patch classification network.
Symmetry. For a given object with $n$ keypoints, the total number of patches to be classified is $\binom{n}{2} = \frac{n(n-1)}{2}$, which increases quadratically with $n$. Most real-world objects, such as birds, cats and cars, are symmetric in appearance. Due to the visual similarity inherent in symmetric pairs of keypoints (for example, the left and right eyes, wings and feet), we merge the patches for a symmetric pair of keypoints into a single hybrid patch during patch generation; e.g., left-eye–tail and right-eye–tail are merged into the hybrid eye–tail patch.
As a result, the total number of patch classification networks is reduced from 105 to 69 for the CUB dataset; on the NABirds dataset, the number is reduced from 55 to 37.
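The unmerged counts follow directly from the pair formula; 105 and 55 correspond to 15 keypoints (CUB, as stated above) and, by implication, 11 keypoints (NABirds):

```python
from math import comb

# The number of keypoint-pair patches grows quadratically: C(n, 2) = n(n-1)/2.
def num_pair_patches(n: int) -> int:
    return comb(n, 2)

print(num_pair_patches(15))  # CUB-200: 15 keypoints -> 105 patches
print(num_pair_patches(11))  # NABirds: 11 keypoints -> 55 patches
```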
Visibility. Due to self-occlusion or occlusion by foreground objects, not all keypoints are visible in each image. Previous works  eliminate patches with invisible keypoints to purify the input data. In contrast, we find that this hurts the performance of the patch classifiers; a detailed comparison can be found in Figure 7. We believe this degradation is caused by the shrinkage of the effective training set. This is similar to the finding of  that noisy but abundant data consistently outperforms clean but limited data. Additionally, the pose estimation network makes a reasonable guess even when a keypoint is invisible. During patch classifier training, we therefore treat all keypoints as visible by taking the maximally activated location.
4.3 Classification Network
Based on the assumption that image patches should contribute differently to classification, we explore four different strategies and describe them in detail in this section.
Fixed patch selection. We assume only a few patches contain useful information while the others may merely act as noise, and propose a fixed patch selection strategy to keep the best $k$ patches. An exhaustive search would evaluate each of the $\binom{N}{k}$ combinations for $k$ from 1 to $N$, requiring $\sum_{k=1}^{N} \binom{N}{k} = 2^N - 1$ evaluations; the complexity grows on the order of $O(2^N)$ and quickly becomes intractable. We thus employ a beam search. Instead of exhaustively searching the whole space, we keep only a fixed number of combinations at each iteration and extend the search path based only on previously retained patch combinations. Out of curiosity, we also run beam search on the testing set alone. This operation, although invalid as an evaluation protocol, provides some insight into the potential of our pose-aligned patch representation. The results are shown in Figure 7. Our observation is that, without overfitting, the potential of fixed patch selection should be well above 89%, compared to the current state-of-the-art  87.5%. Notably, a simple average over all patches achieves 87.6%.
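A minimal sketch of this beam search, assuming a subset is rated by the accuracy of its averaged per-patch class scores (the rating criterion, beam width, and maximum subset size are illustrative assumptions):

```python
import numpy as np

def beam_search_patches(scores, labels, beam_width=10, max_k=10):
    """Beam search over patch subsets. scores: (P, N, C) per-patch class
    scores for N validation images; labels: (N,) ground-truth classes.
    A subset is rated by the accuracy of its averaged scores; only the
    beam_width best subsets survive each round."""
    P = scores.shape[0]

    def acc(subset):
        avg = scores[list(subset)].mean(axis=0)
        return (avg.argmax(axis=1) == labels).mean()

    beam = [((), 0.0)]
    best = ((), 0.0)
    for _ in range(max_k):
        candidates = {}
        for subset, _ in beam:
            for p in range(P):
                if p in subset:
                    continue
                s = tuple(sorted(subset + (p,)))
                if s not in candidates:
                    candidates[s] = acc(s)
        beam = sorted(candidates.items(), key=lambda kv: -kv[1])[:beam_width]
        if beam[0][1] > best[1]:
            best = beam[0]
    return best  # (patch indices, accuracy)
```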
Dynamic patch selection. An alternative we experiment with is the sparsely gated network  for dynamic patch selection. Unlike the beam search algorithm, which selects a fixed set of patches for every input, the gated network selects different combinations depending on the input. A tiny network is trained to predict a weight for each patch, an explicit sparsity constraint on the weights allows only $k$ non-zero elements, and a sigmoid layer normalizes the weights. The network architecture can be described as

$$\mathbf{w} = \sigma(G(\mathbf{x})), \qquad \hat{\mathbf{x}} = S_k(\mathbf{w}) \odot \mathbf{x},$$

where $G$ represents the mapping function from the input $\mathbf{x}$ to the patch weights, $\sigma$ is the sigmoid, and $S_k$ is the patch selection function that keeps the $k$ largest weights. We tried different architectures for the tiny network and find that a simple linear layer works decently most of the time. The best accuracy is achieved when $k = 105$. Interestingly, when $k = 1$, dynamic patch selection performs worse than fixed patch selection, suggesting the gated network’s inability to learn useful information for decision making.
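The gating step can be sketched as follows, with a single linear gate as the tiny network; the gate weights `W` and the feature shapes are illustrative assumptions:

```python
import numpy as np

def gated_selection(x, W, k):
    """Dynamic patch selection: a linear gate predicts one weight per
    patch, a sigmoid normalizes it, and only the top-k weights are kept
    non-zero. x: (P, D) patch features; W: (D,) linear gate weights."""
    w = 1.0 / (1.0 + np.exp(-(x @ W)))   # sigmoid-normalized patch weights
    keep = np.argsort(w)[-k:]            # indices of the k largest weights
    g = np.zeros_like(w)
    g[keep] = w[keep]
    return g[:, None] * x                # reweighted, sparsified features
```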
Sequential patch weighting. Recurrent neural networks (RNNs) specialize in processing sequential data like text and speech, and have been widely adopted as an attention mechanism to focus on different parts sequentially. We instead employ an RNN for sequential patch weighting, aiming to weigh different patches for decision making. We use a one-layer Long Short-Term Memory (LSTM) network with 512 nodes, each with a hidden layer of size 1024, and take the last output of the sequence as the final output. This experiment yields 82.7%.
Static patch weighting. The final and most effective method we tried is the MLP network, which contains one hidden layer with 1024 units, followed by a batch normalization layer, a ReLU layer, and then the output layer. On CUB our final accuracy is 88.7%, 1.2% higher than the current state-of-the-art result. Combining pair patches with single-keypoint patches achieves a new state-of-the-art accuracy of 89.2%. We compare our results with several other strong baselines in Table 2.
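The MLP head described above can be sketched in PyTorch; the 69 x 200 input dimensionality assumes the CUB configuration of softmax outputs concatenated across patches:

```python
import torch
import torch.nn as nn

# Static patch weighting: a one-hidden-layer MLP over the concatenated
# patch representation (69 patches x 200 classes = 13800 dims on CUB),
# with batch norm and ReLU on the 1024-unit hidden layer.
mlp = nn.Sequential(
    nn.Linear(69 * 200, 1024),
    nn.BatchNorm1d(1024),
    nn.ReLU(inplace=True),
    nn.Linear(1024, 200),
)
```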
We also test our algorithm on the NABirds dataset; the results are shown in Table 3. Our algorithm attains an accuracy of 87.9%, more than 8% better than a strong baseline.
| Method | Annotations | Accuracy (%) |
| --- | --- | --- |
| Huang et al.  | GT+BB+KP | 76.2 |
| Zhang et al.  | GT+BB | 76.4 |
| Krause et al.  | GT+BB | 82.8 |
| Jaderberg et al.  | GT | 84.1 |
| Shu et al.  | GT | 84.2 |
| Zhang et al.  | GT | 84.5 |
| Xu et al.  | GT+BB+KP+WEB | 84.6 |
| Lin et al.  | GT+BB | 85.1 |
| Cui et al.  | GT | 86.2 |
| Lam et al.  | GT+KP | 87.5 |
| PAIRS Only | GT+KP | 88.7 |
| PAIRS+Single | GT+KP | 89.2 |
| Method | Accuracy (%) |
| --- | --- |
| Bilinear CNN (PAMI 2017)  | 79.4 |
| PAIRS (ours) | 87.9 |
4.4 Ablation Study
Axis-Aligned vs. Pose-Aligned. We compare patch classification accuracy using pose-aligned patches vs. single-keypoint axis-aligned patches in Figure 6. Single-keypoint patches perform consistently worse than the pair patches, confirming the effectiveness of the disentangled feature representation.
Patch Size Study. One hyper-parameter in our algorithm is the pose-aligned patch size. We try several size options on the best-performing patch and observe that larger patches generally yield better accuracy. We empirically adopt the size for which our base model was pretrained.
Choice of Pose Estimation Network To test the influence of pose estimation network on the proposed algorithm, we train a separate Stacked Hourglass Network  for comparison. Stacked Hourglass Network is about 2% better than the FCN on PCK score, but the final classification accuracy numbers are comparable.
4.5 Results Visualization
We show the classification accuracy for each patch on the CUB dataset in Figure 7. The best-performing patch corresponds to the belly–crown pair, achieving 79.6% accuracy. The worst-performing patch is the left-leg–right-leg pair, which achieves only 15.7% accuracy. Empirically, global patches perform better than local patches; however, local patches are also important for localizing discriminative object parts. The patches found by beam search, as shown in Figure 7, provide insight – a combination of global and local patches is selected to achieve an optimal result.
As hard cases often can only be classified by a few highly localized discriminative parts, the number of patches with correct predictions reflects the difficulty of an image. We propose to use the number of correctly predicting patches as an indicator of image difficulty, shown as a histogram of how many images have a given number of patches that correctly predict the class (top-right plot in Figure 7). Example images, ranging from hard on the left to easy on the right, are shown below; the hard cases are due either to very similar, easily confused classes or to pose-estimation failure.
Fine-grained recognition is an area where computer algorithms can assist humans on difficult tasks like recognizing bird species. Pose variation constitutes a major challenge in fine-grained recognition that recent works fail to address. In this paper, we introduce a unified object representation built from pose-aligned patches which disentangle the appearance features from the influence of pose, scale and rotation. Our proposed algorithm attains state-of-the-art performance on two fine-grained datasets, suggesting the critical importance of disentangling pose and appearance in fine-grained recognition.
-  Merlin bird id app. http://merlin.allaboutbirds.org/. Accessed: 2018-07-03.
-  T. Berg and P. N. Belhumeur. POOF: Part-Based One-vs.-One Features for Fine-Grained Categorization, Face Verification, and Attribute Estimation. In CVPR, 2013.
-  T. Berg, J. Liu, S. Woo Lee, M. L. Alexander, D. W. Jacobs, and P. N. Belhumeur. Birdsnap: Large-scale fine-grained visual categorization of birds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2011–2018, 2014.
-  S. Branson, G. Van Horn, P. Perona, and S. Belongie. Bird Species Recognition Using Pose Normalized Deep Convolutional Nets. In BMVC, 2014.
-  T. Cohen and M. Welling. Group equivariant convolutional networks. In International conference on machine learning, pages 2990–2999, 2016.
-  T. S. Cohen, M. Geiger, J. Köhler, and M. Welling. Spherical cnns. arXiv preprint arXiv:1801.10130, 2018.
-  T. S. Cohen and M. Welling. Steerable cnns. arXiv preprint arXiv:1612.08498, 2016.
-  Y. Cui, F. Zhou, J. Wang, X. Liu, Y. Lin, and S. Belongie. Kernel Pooling for Convolutional Neural Networks. In CVPR, 2017.
-  K. Duan, D. Parikh, D. Crandall, and K. Grauman. Discovering localized attributes for fine-grained recognition. In CVPR, 2012.
-  M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, Jan. 2015.
-  R. Farrell, O. Oza, V. I. Morariu, N. Zhang, T. Darrell, and L. S. Davis. Birdlets: Subordinate categorization using volumetric primitives and pose-normalized appearance. In ICCV, 2011.
-  P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. PAMI, 32(9):1627–45, 9 2010.
-  J. Fu, H. Zheng, and T. Mei. Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-Grained Image Recognition. In CVPR, 2017.
-  Y. Gao, O. Beijbom, N. Zhang, and T. Darrell. Compact Bilinear Pooling. In CVPR, 2016.
-  E. Gavves, B. Fernando, C. Snoek, A. Smeulders, and T. Tuytelaars. Fine-Grained Categorization by Alignments. In ICCV, 2013.
-  E. Gavves, B. Fernando, C. G. M. Snoek, A. W. M. Smeulders, and T. Tuytelaars. Local Alignments for Fine-Grained Categorization. IJCV, 111(2):191–212, 2015.
-  R. Girshick. Fast R-CNN. In CVPR, 2015.
-  S. Huang, Z. Xu, D. Tao, and Y. Zhang. Part-Stacked CNN for Fine-Grained Visual Categorization. In CVPR, 2016.
-  M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial Transformer Networks. In NIPS, 2015.
-  S. Kong and C. C. Fowlkes. Low-rank Bilinear Pooling for Fine-Grained Classification. In CVPR, 2017.
-  J. Krause, T. Gebru, J. Deng, L. J. Li, and L. Fei-Fei. Learning Features and Parts for Fine-Grained Recognition. In ICPR, 2014.
-  J. Krause, H. Jin, J. Yang, and L. Fei-Fei. Fine-Grained Recognition without Part Annotations. In CVPR, 2015.
-  J. Krause, B. Sapp, A. Howard, H. Zhou, A. Toshev, T. Duerig, J. Philbin, and L. Fei-Fei. The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition. In ECCV, 2016.
-  N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. Lopez, and J. V. B. Soares. Leafsnap: A computer vision system for automatic plant species identification. In The 12th European Conference on Computer Vision (ECCV), October 2012.
-  M. Lam, B. Mahasseni, and S. Todorovic. Fine-Grained Recognition as HSnet Search for Informative Image Parts. In CVPR, 2017.
-  D. Lin, X. Shen, C. Lu, and J. Jia. Deep lac: Deep localization, alignment and classification for fine-grained recognition. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1666–1674, June 2015.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, Computer Vision – ECCV 2014, pages 740–755, Cham, 2014. Springer International Publishing.
-  T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN Models for Fine-Grained Visual Recognition. In ICCV, 2015.
-  T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear Convolutional Neural Networks for Fine-Grained Visual Recognition. PAMI, 2018.
-  J. Liu, A. Kanazawa, D. W. Jacobs, and P. N. Belhumeur. Dog Breed Classification Using Part Localization. In ECCV, 2012.
-  X. Liu, J. Wang, S. Wen, E. Ding, and Y. Lin. Localizing by Describing: Attribute-Guided Attention Localization for Fine-Grained Recognition. In AAAI, 2017.
-  D. Marcos, M. Volpi, N. Komodakis, and D. Tuia. Rotation Equivariant Vector Field Networks. arXiv e-prints, 2016.
-  A. Newell, K. Yang, and J. Deng. Stacked Hourglass Networks for Human Pose Estimation. In ECCV, 2016.
-  A. Ruderman, N. Rabinowitz, A. S. Morcos, and D. Zoran. Learned deformation stability in convolutional neural networks. arXiv preprint arXiv:1804.04438, 2018.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211–252, 2015.
-  P. Sermanet, A. Frome, and E. Real. Attention for Fine-Grained Categorization. In ICLR, 2015.
-  N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. CoRR, abs/1701.0, 2017.
-  J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation. In NIPS, 2014.
-  A. Toshev and C. Szegedy. DeepPose: Human Pose Estimation via Deep Neural Networks. In CVPR, 2014.
-  G. Van Horn, S. Branson, R. Farrell, S. Haber, J. Barry, P. Ipeirotis, P. Perona, and S. Belongie. Building a Bird Recognition App and Large Scale Dataset With Citizen Scientists: The Fine Print in Fine-Grained Dataset Collection. In CVPR, 2015.
-  C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, California Institute of Technology, 2011.
-  M. Weiler, F. A. Hamprecht, and M. Storath. Learning Steerable Filters for Rotation Equivariant CNNs. In CVPR, 2018.
-  T. Xiao, Y. Xu, K. Yang, J. Zhang, Y. Peng, and Z. Zhang. The Application of Two-Level Attention Models in Deep Convolutional Neural Network for Fine-Grained Image Classification. In CVPR, 2015.
-  Z. Xu, S. Huang, Y. Zhang, and D. Tao. Augmenting Strong Supervision Using Web Data for Fine-Grained Categorization. In ICCV, 2015.
-  B. Yao, G. Bradski, and L. Fei-Fei. A codebook-free and annotation-free approach for fine-grained image categorization. In CVPR, 2012.
-  B. Yao, A. Khosla, and L. Fei-Fei. Combining randomization and discrimination for fine-grained image categorization. In CVPR, 2011.
-  N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-Based R-CNNs for Fine-Grained Category Detection. In ECCV, 2014.
-  N. Zhang, R. Farrell, and T. Darrell. Pose pooling kernels for sub-category recognition. In CVPR, 2012.
-  N. Zhang, R. Farrell, F. Iandola, and T. Darrell. Deformable Part Descriptors for Fine-Grained Recognition and Attribute Prediction. In ICCV, 2013.
-  N. Zhang, E. Shelhamer, Y. Gao, and T. Darrell. Fine-grained pose prediction, normalization, and recognition. ICLR Workshops, 2016.
-  X. Zhang, H. Xiong, W. Zhou, W. Lin, and Q. Tian. Picking Deep Filter Responses for Fine-Grained Image Recognition. In CVPR, 2016.
-  B. Zhao, X. Wu, J. Feng, Q. Peng, and S. Yan. Diversified Visual Attention Networks for Fine-Grained Object Classification. IEEE Transactions on Multimedia, 19(6):1245–1256, 2017.
-  H. Zheng, J. Fu, T. Mei, and J. Luo. Learning Multi-Attention Convolutional Neural Network for Fine-Grained Image Recognition. In ICCV, 2017.