Modelling Local Deep Convolutional Neural Network Features to Improve Fine-Grained Image Classification
We propose a local modelling approach using deep convolutional neural networks (CNNs) for fine-grained image classification. Recently, deep CNNs trained from large datasets have considerably improved the performance of object recognition. However, to date there has been limited work using these deep CNNs as local feature extractors. This partly stems from CNNs having internal representations which are high dimensional, thereby making such representations difficult to model using stochastic models. To overcome this issue, we propose to reduce the dimensionality of one of the internal fully connected layers, in conjunction with layer-restricted retraining to avoid retraining the entire network. The distribution of low-dimensional features obtained from the modified layer is then modelled using a Gaussian mixture model. Comparative experiments show that considerable performance improvements can be achieved on the challenging Fish and UEC FOOD-100 datasets.
ZongYuan Ge, Chris McCool, Conrad Sanderson, Peter Corke
Queensland University of Technology, Brisbane, QLD 4000, Australia
NICTA, PO Box 10522, Adelaide St, Brisbane, QLD 4001, Australia
Keywords: fine-grained classification, deep convolutional neural networks, session variation modelling, Gaussian mixture models.
1 Introduction

Fine-grained image classification refers to the task of recognising the subcategory (for instance the particular fish species) within the same basic category, such as birds or fish [1, 17]. This is a challenging task for two reasons. First, some classes (species) from the same category, such as fish, can appear very similar, leading to low inter-class variation. Second, there is a high degree of variability among instances of the same class due to environmental and illumination changes, leading to high intra-class variation. Fig. 1 shows examples of both issues.
An approach to tackling these two issues is to extract local region descriptors and to model them. Such an approach has previously been popular for recognition of faces [11, 16] and fish. These approaches typically divide the image into patches (or blocks), with each patch considered to be an independent (and partial) observation of the object. Each patch is then represented by a feature vector, and the distribution of all of the feature vectors from an image is modelled using a Gaussian mixture model (GMM). The feature vector representing each patch has usually been obtained from a transform such as the 2D discrete cosine transform.
Recently, feature learning through the use of deep convolutional neural networks (CNNs) has led to considerable improvements for object recognition. These deep CNN feature representations are trained on large datasets, such as ImageNet, which cover general object categories. It has been shown that these learnt features can be used to obtain impressive results for other recognition tasks when used as a global image representation. However, to the best of our knowledge no work has examined how to use these learnt features as a local feature extractor in conjunction with well known statistical modelling approaches such as GMMs.
To use these deep CNN features as a local feature extractor, two issues need to be addressed. First, deep CNNs generally have an internal representation which is high dimensional, leading to the curse of dimensionality for local modelling techniques such as GMMs. Second, we need an efficient and effective method to retrain a deep CNN containing millions of weights using a relatively small set of images specific to a fine-grained task. In this paper we address both of these issues.
Inspired by recent work that has shown how to optimise deep CNN features for small datasets using fine-tuning, we propose a method to obtain a low-dimensional deep CNN representation that can be used as a local feature descriptor. Specifically, we explicitly reduce the dimensionality of one of the internal fully connected layers, in conjunction with layer-restricted retraining to avoid retraining the entire network. We demonstrate empirically that the proposed approach leads to considerable performance improvements for two fine-grained image classification tasks: fish recognition and food recognition.
The paper proceeds as follows. In Section 2 we briefly describe the image classification approach based on statistical modelling of local features and inter-session variability modelling. This approach forms the base upon which we build in Section 3, where we learn a low-dimensional deep CNN representation that can be used as a local feature descriptor. Comparative experiments are given in Section 4, followed by the main findings and future directions in Section 5.
2 Modelling Local Image Features
Modelling the distribution of local features has been explored by several researchers [11, 16, 13]. In general, these methods divide the i-th image of the j-th class into N overlapping patches. Each patch is represented by a low-dimensional feature vector, yielding a set of N feature vectors per image. The distribution of these vectors is then modelled using a GMM to obtain a prior model, referred to as a universal background model (UBM), that represents the basic category in question (e.g. fish, food).
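As an illustration of the patch-based modelling described above, the following Python sketch divides images into overlapping patches and fits a diagonal-covariance GMM as the UBM. This is a hypothetical minimal example: the patch extractor, the use of raw pixels as stand-in features, and the scikit-learn GMM are our assumptions; the paper's implementation instead uses the Bob toolbox with DCT or CNN features.

```python
# Sketch of UBM training over local patch features (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def extract_patches(image, patch_size=8, stride=4):
    """Divide a grayscale image (H x W array) into overlapping patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size].ravel())
    return np.asarray(patches, dtype=np.float64)

# Stand-in for feature extraction: each patch's raw pixels act as its
# feature vector; the paper uses DCT or low-dimensional CNN features.
images = [rng.random((32, 32)) for _ in range(10)]
features = np.vstack([extract_patches(im) for im in images])

# The UBM: a GMM with diagonal covariances fitted on all training features.
ubm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
ubm.fit(features)
print(ubm.means_.shape)  # (n_components, feature_dim) -> (4, 64)
```

In practice the number of mixture components is much larger and is tuned on held-out data, as described in Section 4.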
This UBM representation forms the basis of many feature modelling methods. It can be used as a probabilistic bag-of-words representation, or a model can be derived for each class by performing mean-only relevance MAP adaptation. Another extension is inter-session variability (ISV) modelling, which learns the variations that can make one instance (image) of a class look different to another image of the same class.
Irrespective of the specific method, they all rely on a GMM, which is known to perform poorly on high-dimensional data. This is partly due to the curse of dimensionality: it becomes difficult to estimate a large number of parameters when data is limited. To avoid this we will show how to learn a low-dimensional deep CNN representation; however, before proceeding we first describe the GMM feature modelling methods used in this work.
2.1 GMM Feature Modelling
We use two feature modelling approaches in this work: GMM mean-only MAP adaptation and its extension, ISV. These two are chosen as they have been shown to provide consistently good performance.
GMM mean-only MAP adaptation takes the prior model (UBM) and adapts only the means using the enrolment data of the j-th class, that is, all of the features from its enrolment images. Using supervector notation, this is written as

\mu_j = m + D z_j

where \mu_j is the mean supervector for the j-th class, m is the mean supervector of the UBM (the prior), z_j is a normally distributed latent variable, and D is a diagonal matrix that incorporates the relevance factor and the covariance matrix, ensuring the result is equivalent to mean-only relevance MAP adaptation.
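The mean-only relevance MAP update can be sketched as follows. This is our minimal illustration, not the paper's implementation: for each component, the closed-form update blends the posterior-weighted data mean with the UBM mean via a relevance factor, so components that see little class data stay close to the prior.

```python
# Sketch of mean-only relevance MAP adaptation for a diagonal-covariance GMM
# (hypothetical minimal implementation; variable names are ours).
import numpy as np

def map_adapt_means(ubm_means, ubm_weights, ubm_vars, data, relevance=4.0):
    """Adapt only the means of a diagonal-covariance GMM to class data."""
    # Posterior responsibilities of each component for each feature vector.
    diff = data[:, None, :] - ubm_means[None, :, :]                 # (T, C, D)
    log_g = -0.5 * np.sum(diff ** 2 / ubm_vars
                          + np.log(2 * np.pi * ubm_vars), axis=2)   # (T, C)
    log_post = np.log(ubm_weights) + log_g
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)

    n = post.sum(axis=0)                        # zeroth order statistics
    f = post.T @ data                           # first order statistics
    alpha = (n / (n + relevance))[:, None]      # adaptation coefficient
    return alpha * (f / np.maximum(n, 1e-10)[:, None]) + (1 - alpha) * ubm_means

rng = np.random.default_rng(1)
ubm_means = np.array([[0.0, 0.0], [5.0, 5.0]])
ubm_vars = np.ones((2, 2))
ubm_weights = np.array([0.5, 0.5])
class_data = rng.normal([1.0, 1.0], 0.3, size=(200, 2))  # near component 0
adapted = map_adapt_means(ubm_means, ubm_weights, ubm_vars, class_data)
# Component 0 shifts towards the class data; component 1 barely moves.
print(adapted)
```

The relevance factor of 4 is a placeholder; in practice it is a tuning parameter.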
ISV is an extension of the GMM mean-only MAP model which learns a subspace that models and suppresses session variation. It includes a subspace U to cope with session variation and is written in supervector notation as

\mu_{i,j} = m + U x_{i,j} + D z_j

where x_{i,j} is the latent session variable and is assumed to be normally distributed. Suppressing the session variation is done by jointly estimating the latent variables x_{i,j} and z_j, and then discarding the latent session variables to give

\mu_j = m + D z_j
For both of these methods, the log-likelihood ratio is used to determine if the t-th test image was most likely produced by class j. This is efficiently calculated using the linear scoring approximation, which for GMM mean-only MAP is

h(s_t, j) = (\mu_j - m)^T \Sigma^{-1} f_{t|m}

and for ISV it is

h(s_t, j) = (\mu_j - m)^T \Sigma^{-1} (f_{t|m} - N_t U x_t)

where the diagonal matrix \Sigma is formed by concatenating the diagonals of the UBM covariance matrices, f_{t|m} is the supervector of mean-normalised first order statistics, and N_t contains the zeroth order statistics for the test sample in a block diagonal matrix.
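A minimal sketch of the linear scoring approximation for the GMM mean-only MAP case, under the assumption of a diagonal-covariance UBM; the statistics computation and variable names are ours. A class whose adapted means are shifted towards the test data receives a higher score than one shifted away.

```python
# Sketch of linear scoring for GMM mean-only MAP (our minimal illustration
# of the Glembek et al. formulation, not the paper's implementation).
import numpy as np

def linear_score(class_means, ubm_means, ubm_vars, ubm_weights, data):
    """Approximate log-likelihood ratio of `data` against one class model."""
    # Posterior responsibilities under the UBM.
    diff = data[:, None, :] - ubm_means[None, :, :]
    log_g = -0.5 * np.sum(diff ** 2 / ubm_vars
                          + np.log(2 * np.pi * ubm_vars), axis=2)
    log_p = np.log(ubm_weights) + log_g
    log_p -= log_p.max(axis=1, keepdims=True)
    post = np.exp(log_p)
    post /= post.sum(axis=1, keepdims=True)

    n = post.sum(axis=0)                           # zeroth order statistics
    f = post.T @ data                              # first order statistics
    f_norm = (f - n[:, None] * ubm_means).ravel()  # mean-normalised stats
    offset = ((class_means - ubm_means) / ubm_vars).ravel()
    return float(offset @ f_norm)

rng = np.random.default_rng(3)
ubm_means = np.array([[0.0], [5.0]])
ubm_vars = np.ones((2, 1))
ubm_weights = np.array([0.5, 0.5])
class_a = np.array([[1.0], [5.0]])   # class model shifted towards the data
class_b = np.array([[-1.0], [5.0]])  # class model shifted away from the data
data = rng.normal(1.0, 0.3, size=(50, 1))
print(linear_score(class_a, ubm_means, ubm_vars, ubm_weights, data) >
      linear_score(class_b, ubm_means, ubm_vars, ubm_weights, data))  # True
```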
3 Proposed Method
To extract features from local patches, we aim to learn a low-dimensional deep CNN representation which we refer to as a low-dimensional CNN feature vector (LDCNN). This is in contrast to the high-dimensional representation (4096 dimensions) that is usually obtained from the fully connected layer (fc-6) of the pretrained deep CNN; the structure of this network can be seen in Fig. 2. Such high-dimensional representations are difficult to model effectively with a stochastic model such as a GMM, so we aim to learn a low-dimensional representation (LDCNN) whose dimensionality is much lower than 4096. To reduce the dimensionality while preventing the parameters of the large CNN architecture from overfitting, we propose a two-step modification of the network.
In the first step, using the pretrained network as a starting point, we modify the final output layer (fc-8) to have one output per training class. The weights of this layer are randomly initialised.
In the second step we replace the two fully connected layers fc-6 and fc-7 and retrain only these two layers, keeping all other layers fixed. We replace the original 4096-dimensional fc-6 layer with a new randomly initialised fc-6 layer (footnote 1) whose dimensionality is much lower than 4096. Features extracted from this layer are referred to as LDCNN. The fc-7 layer is also replaced and randomly initialised (footnote 1), as fc-6 and fc-7 are densely connected; however, fc-7 retains its original dimensionality of 4096. Retraining is then performed using back-propagation and stochastic gradient descent to update only these two layers. The learning rate starts at a small initial value and is reduced by a constant factor at regular intervals throughout training. In this way, all pretrained convolutional filters from the original network are retained.
4 Experiments

We evaluate our approach on two fine-grained image datasets: Fish and UEC FOOD-100. For both datasets we present two baseline systems, both of which perform classification using an SVM applied to a single global CNN feature vector representing each image. The first baseline extracts this global feature vector from fc-6 of the pretrained deep CNN (4096 dimensions); we refer to this as SVM-CNN. The second baseline extracts the global feature vector using the retrained low-dimensional CNN (LDCNN) representation; we refer to this as SVM-LDCNN.
The local feature modelling results (GMM), where the image is divided into overlapping patches, use two feature extractors. Each extractor obtains a low-dimensional feature vector from every patch, and the resulting vectors are then modelled using a GMM. The first, GMM-LDCNN, uses the proposed low-dimensional CNN feature vector (LDCNN). The second, GMM-PCA-CNN, uses the fc-6 features of the pretrained deep CNN (4096 dimensions) and learns a principal component analysis (PCA) transform to reduce their dimensionality to match that of the LDCNN features.
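The PCA dimensionality reduction used by the GMM-PCA-CNN baseline can be sketched as below; the feature matrix here is random stand-in data in place of real fc-6 activations, and the target dimensionality of 64 is an illustrative assumption.

```python
# Sketch of the GMM-PCA-CNN baseline's reduction step: PCA maps
# high-dimensional fc-6 activations to a low-dimensional space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
fc6_features = rng.random((500, 4096))  # stand-in for fc-6 patch features

pca = PCA(n_components=64)       # target dimensionality (illustrative)
low_dim_feats = pca.fit_transform(fc6_features)
print(low_dim_feats.shape)  # (500, 64)
```

The low-dimensional vectors produced this way are then modelled with a GMM, exactly as the LDCNN features are.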
When performing local feature modelling (GMM), a range of parameters was varied: the number of GMM components, the size of the ISV subspace, and the block (patch) size, with optimal values selected on the development data. For both datasets the images were resized to a fixed resolution. Caffe was used to extract and retrain the CNN features, and Bob was used to learn the GMM and ISV models.
4.1 Fine-Grained Fish Classification
We use the Fish image dataset, which consists of images of multiple fish species captured under three different conditions, defined as “controlled”, “out-of-the-water” and “in-situ”. The “controlled” images show fish specimens with controlled background and illumination. The “in-situ” images are underwater images of fish in their natural habitat, while the “out-of-the-water” images show fish specimens taken out of the water against varying backgrounds.
Following the defined protocols, the dataset is split into three sets: a training set (train) to learn the UBM and GMM models; a development set (dev) to determine the optimal parameters and decision threshold for our models; and an evaluation set (eval) to measure the final system performance. There are two protocols: protocol 1a evaluates system performance when high quality (“controlled”) data is used to enrol classes, while protocol 1b evaluates system performance when low quality (“in-situ”) data is used to enrol classes. For both protocols, the same test imagery (a mix of “controlled”, “in-situ” and “out-of-the-water” images) is used. The local modelling approach used for these experiments was the ISV extension of the GMM approach, as this provided a considerable boost in initial experiments; we refer to this as GMM-LDCNN.
It has been shown that incorporating spatial information can be advantageous; as such, we further extend the GMM-LDCNN approach by appending the spatial location to each local feature vector prior to modelling. We refer to this method as GMM-LDCNN-xy.
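Appending the spatial location, as in GMM-LDCNN-xy, amounts to a simple concatenation of each patch's coordinates onto its feature vector; normalising the coordinates to [0, 1] is our assumption, not necessarily the paper's exact scheme.

```python
# Sketch of GMM-LDCNN-xy feature augmentation: append each patch's (x, y)
# location to its feature vector before GMM modelling (illustrative only).
import numpy as np

def append_xy(patch_features, positions, image_size):
    """patch_features: (N, D); positions: (N, 2) patch coordinates;
    image_size: (max_x, max_y) used to normalise coordinates to [0, 1]."""
    xy = positions / np.asarray(image_size, dtype=np.float64)
    return np.hstack([patch_features, xy])

feats = np.ones((3, 4))                       # stand-in LDCNN patch features
pos = np.array([[0, 0], [50, 100], [100, 200]])
aug = append_xy(feats, pos, image_size=(100, 200))
print(aug.shape)  # (3, 6): original 4 dimensions plus (x, y)
```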
The results in Table 1 show that, in contrast to global features, local modelling provides notable improvements: the two baseline systems (SVM-CNN and SVM-LDCNN), which use global features, perform worse than the previous state-of-the-art local ISV modelling approach (Local GMM). Furthermore, our local GMM-LDCNN approach outperforms the other evaluated methods on both protocols.
| System | Protocol 1a (dev / eval) | Protocol 1b (dev / eval) |
| --- | --- | --- |
| Local GMM | 43.1 / 49.3 | 40.8 / 46.7 |
4.2 Results on Food Dataset
We use the UEC FOOD-100 dataset which contains 100 Japanese food categories with more than 100 images for each category. Some images contain multiple classes and a bounding box is provided for each class. Examples are shown in Fig. 1. Features are extracted from the bounding box only, so detection/localisation is not considered in this paper.
We use half of the images from each class for training and the other half for testing, making this a closed-set protocol.
The results, presented in Fig. 3, show that performing local modelling using the LDCNN features (GMM-LDCNN) provides the best performance.
5 Main Findings and Future Directions

In this paper we have explored the benefits of using deep convolutional neural networks (CNNs) to extract local features which are then modelled using a GMM. Our two-step retraining procedure provides an effective way to perform dimensionality reduction and provides considerably better performance than a simple linear model such as PCA. Comparative experiments show that considerable performance improvements can be achieved on the challenging Fish and UEC FOOD-100 datasets.
Future work will examine other ways to retrain the deep CNN. For instance, an issue not examined in this work is the possibility of extracting thousands of local patches from each image and using these samples to retrain the entire network.
- Random initialisation is performed by drawing weights from a zero-mean random distribution.
- The optimal parameter values (number of GMM components, ISV subspace size, and block size) were selected separately for protocols 1a and 1b using the development set.
- We developed these protocols because insufficient details were provided to reproduce the previously reported experiments; our protocol files will be made publicly available.
- By closed set we mean that while the data differs between the training and testing sets, the classes in both sets are the same.
- The optimal number of GMM components and block size were selected empirically.
- K. Anantharajah, Z. Ge, C. McCool, S. Denman, C. Fookes, P. Corke, D. Tjondronegoro, and S. Sridharan. Local inter-session variability modelling for object classification. WACV, 2014.
- A. Anjos, L. E. Shafey, R. Wallace, M. Günther, C. McCool, and S. Marcel. Bob: a free signal processing and machine learning toolbox for researchers. In 20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan. ACM Press, Oct. 2012.
- C. Bishop. Pattern Recognition and Machine Learning, pages 33–38. Springer, 2006.
- C. Bouveyron, S. Girard, and C. Schmid. High dimensional data clustering. Technical report, LMC-IMAG, Université J. Fourier, Grenoble, 2006.
- J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
- K. Fukunaga. Introduction to Statistical Pattern Recognition, pages 399–417. Elsevier, second edition, 1990.
- O. Glembek, L. Burget, N. Dehak, N. Brummer, and P. Kenny. Comparison of scoring methods used in speaker recognition with joint factor analysis. In ICASSP 2009, pages 4057–4060.
- Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
- Y. Kawano and K. Yanai. Food image recognition with deep convolutional features. In Proc. of ACM UbiComp Workshop on Cooking and Eating Activities (CEA), 2014.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
- S. Lucey and T. Chen. A GMM parts based face representation for improved verification through relevance adaptation. In CVPR 2004, volume 2, pages 855–861.
- Y. Matsuda, H. Hoashi, and K. Yanai. Recognition of multiple-food images by detecting candidate regions. In Proc. of IEEE International Conference on Multimedia and Expo (ICME), 2012.
- C. McCool, R. Wallace, M. McLaren, L. E. Shafey, and S. Marcel. Session variability modelling for face authentication. IET Biometrics, 2:117–129(12), September 2013.
- A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. CVPR Workshop on Deep Vision, 2014.
- C. Sanderson and B. Lovell. Multi-region probabilistic histograms for robust and scalable identity inference. Lecture Notes in Computer Science (LNCS), Vol. 5558, pages 199–208, 2009.
- C. Sanderson and K. K. Paliwal. Fast features for face authentication under illumination direction changes. Pattern Recognition Letters, 24(14):2409–2419, 2003.
- N. Zhang, J. Donahue, R. Girshick, and T. Darrell. Part-based r-cnns for fine-grained category detection. In ECCV, pages 834–849. 2014.