###### Abstract

Being symmetric positive-definite (SPD), the covariance matrix has traditionally been used to represent a set of local descriptors in visual recognition. A recent study shows that a kernel matrix can give a considerably better representation by modelling the nonlinearity in the local descriptor set. Nevertheless, neither the descriptors nor the kernel matrix is deeply learned. Worse, they are considered separately, hindering the pursuit of an optimal SPD representation. This work proposes a deep network that jointly learns local descriptors, the kernel-matrix-based SPD representation, and the classifier via an end-to-end training process. We derive the derivatives for the mapping from a local descriptor set to the SPD representation to carry out backpropagation. Also, we exploit the Daleckiĭ–Kreĭn formula in operator theory to give a concise and unified result on differentiating SPD matrix functions, including the matrix logarithm used to handle the Riemannian geometry of the kernel matrix. Experiments not only show the superiority of the kernel-matrix-based SPD representation with deep local descriptors, but also verify the advantage of the proposed deep network in pursuing better SPD representations for fine-grained image recognition tasks.

# DeepKSPD: Learning Kernel-matrix-based SPD Representation for Fine-grained Image Recognition

Melih Engin, Lei Wang (corresponding author, leiw@uow.edu.au), Luping Zhou
School of Computing and Information Technology, University of Wollongong, Wollongong, NSW 2500, Australia

Xinwang Liu
School of Computer, National University of Defense Technology, Changsha, Hunan 410073, China

## 1 Introduction

To deal with image variations, modern visual recognition usually models the appearance of an image by a set of local descriptors. These descriptors have evolved from early filter bank responses, through traditional local invariant features, to the activation feature maps of recent deep convolutional neural networks (CNNs). Throughout this evolution, how to represent a set of local descriptors to obtain a global image representation has always been a central issue. Among the best known methods in this line of research are the bag-of-features (BoF) model [21], sparse coding [22], the vector of locally aggregated descriptors (VLAD) [12], and Fisher vector encoding [20]. In recent years, representing a set of descriptors with a covariance matrix has attracted increasing attention. It characterizes the pairwise correlation of the descriptor components present in a set, and is generally called an SPD representation since the covariance matrix is symmetric positive-definite. This representation is robust to noisy descriptors and independent of the cardinality of a descriptor set. Also, it does not need a large number of images to generate common bases for encoding, and can therefore be applied to single images individually. In the past few years, the covariance-matrix-based SPD representation has been employed in a variety of visual recognition tasks, including the recognition of texture, face, and object [11], the classification of image sets [24], and so on.

A recent advance in SPD representation is to model the nonlinear information in a set of descriptors. As reported in [23], directly using a kernel matrix to represent a descriptor set demonstrates clear superiority. Given a set of $d$-dimensional descriptors, a $d\times d$ kernel matrix is computed with a predefined kernel function, where each entry is the kernel value between the realizations of two descriptor components in this set. This method effectively models the nonlinear correlation among these descriptor components. The kernel function can be flexibly chosen to extract various nonlinear relationships, and the covariance matrix is just a special case using a linear kernel. The resulting kernel-matrix-based SPD representation maintains the same size as its covariance-matrix-based counterpart, but produces considerable improvement in recognition performance.

Nevertheless, this kernel-matrix-based SPD representation [23] has only been developed upon traditional local descriptors such as the pixel intensities or the Gabor filter responses of a textured or facial image. Its potential with deep local descriptors for image recognition has not been explored in the literature and therefore remains unclear. Another, more critical issue, which is the main focus of this work, is that the local descriptors and the kernel matrix in the existing SPD representation are detached. In other words, they cannot effectively negotiate with each other to obtain an optimal representation for the ultimate goal of classification. To address these two issues, this work first builds the kernel-matrix-based SPD representation upon deep local descriptors and benchmarks it against the state-of-the-art image recognition methods. More importantly, we develop a deep network called DeepKSPD to jointly learn the deep local descriptors, the kernel-matrix-based SPD representation, and the classifier. This is achieved by an end-to-end training process between input images and class labels. The presence of kernel matrix computation in the proposed deep network complicates the backpropagation process. Also, to make the resulting SPD representation work better with the classifier, a matrix logarithm function is usually required to map the kernel matrix from Riemannian geometry to Euclidean geometry. In this work, we derive all the matrix derivatives involved in the mapping from a local descriptor set to the kernel-matrix-based SPD representation to fulfill the backpropagation algorithm for the proposed deep network. Also, by exploiting the Daleckiĭ–Kreĭn formula in operator theory [6, 2], we provide a concise and unified result on the derivatives of functions of SPD matrices, of which the matrix logarithm is a special case.
Together, these produce a backpropagation algorithm that can deal with a deep network with various kernel-matrix-based SPD representations.

Experimental study is conducted on multiple benchmark datasets, especially on fine-grained image recognition, to demonstrate the efficacy of the proposed DeepKSPD framework. First, in contrast to the existing kernel-matrix-based representation built upon traditional local descriptors, we demonstrate the superiority of the kernel-matrix-based SPD representation built upon deep local descriptors. On top of that, we further demonstrate the advantage of the proposed end-to-end trained DeepKSPD network in jointly learning the local descriptors and the kernel-matrix-based SPD representation. As will be shown, our DeepKSPD network achieves the overall highest classification accuracy on these benchmark datasets, when compared with the related deep learning based methods.

## 2 Related Work

Let $X \in \mathbb{R}^{d\times n}$ denote a data matrix in which each column contains a $d$-dimensional local descriptor $x_i$ extracted from an image. In the days when local invariant features such as SIFT were popularly used, methods like BoF, VLAD, and Fisher vector were developed to encode and pool these descriptors to obtain a global image representation. VLAD and Fisher vector methods have recently been applied to the deep local descriptors collected from the activation feature maps of deep CNNs, demonstrating promising image recognition performance [7, 5]. These methods usually need a sufficient number of images to train a set of common bases (e.g., cluster centers or Gaussian mixture models) for the encoding step.

The SPD representation takes a different approach. It traditionally computes a covariance matrix over $X$ as $\frac{1}{n-1}\bar{X}\bar{X}^\top$ (or simply $\bar{X}\bar{X}^\top$), where $\bar{X}$ denotes the centered $X$. Originally, this covariance matrix was proposed as a region descriptor, for example characterizing the covariance of the color intensities of the pixels in a local image patch. In the past several years, it has been employed as a promising global image representation in a number of visual recognition tasks. Recent research in this line aims to model the nonlinear information in a set of descriptors. The approach proposed in [9] implicitly maps each descriptor onto a kernel-induced feature space and computes a covariance matrix therein. Nevertheless, this results in a high (or even infinite) dimensional covariance matrix that is difficult to manipulate explicitly or computationally. The other approach [23] proposes to directly compute a kernel matrix $K$ over $X$ as follows. Let $x_{i\cdot}$ denote the $i$th row of $X$, consisting of the $n$ realizations of the $i$th component of the descriptors. The $(i,j)$th entry of $K$ is calculated as $k(x_{i\cdot}, x_{j\cdot})$, with $k(\cdot,\cdot)$ a predefined kernel function such as a Gaussian kernel. In this way, the nonlinear relationship among the $d$ components can be effectively and flexibly extracted. The resulting $d\times d$ kernel matrix maintains the size of the covariance matrix and is more robust against the singularity issue caused by small sample size. It is easy to see that the covariance matrix is a special case in which $k(\cdot,\cdot)$ reduces to a linear kernel function. As reported in [23], this kernel-matrix-based SPD representation achieves considerably better performance than its covariance counterpart and the approach in [9] on multiple visual recognition tasks.

Both covariance and kernel matrices are SPD and have a Riemannian geometry. In order to work with common classifiers that assume a Euclidean geometry, a variety of operations have been developed in the literature. Among them, the matrix logarithm operation, $\log(S) = U\log(\Lambda)U^\top$ for an SPD matrix $S$ with eigendecomposition $S = U\Lambda U^\top$, may be the most commonly used one due to its simplicity and efficacy [1]. Conceptually, it can be viewed as mapping an SPD matrix from the underlying manifold to its tangent space, in which Euclidean geometry can be applied. In practice, after the SPD matrix is obtained, its matrix logarithm is computed and then reshaped into a long vector to be fed into a classifier.
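As a concrete illustration (our own minimal NumPy sketch, not the authors' implementation), the matrix logarithm and the subsequent vectorization can be written as:

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of an SPD matrix: log(S) = U diag(log(lambda)) U^T."""
    lam, U = np.linalg.eigh(S)        # SPD => real eigenvalues lam > 0
    return (U * np.log(lam)) @ U.T

def spd_to_feature(S):
    """Map an SPD matrix to its tangent (Euclidean) space and reshape it
    into a long vector, as done before feeding a classifier."""
    return logm_spd(S).ravel()
```

Exponentiating the eigenvalues of `logm_spd(S)` recovers `S`, which is a convenient sanity check on the mapping.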

Research on integrating the SPD representation with deep local descriptors, or even into deep networks, is still in its very early stage but has demonstrated both theoretical and practical value. In the recent work on Bilinear CNN [14, 15], an outer product layer is applied to combine the activation feature maps from two CNNs, and this produces a clear improvement in fine-grained visual recognition. This outer product essentially leads to a covariance matrix (in the form of $XX^\top$) when the two CNNs are set to be the same. Another work [10] trains a deep network for image semantic segmentation, in which the covariance-matrix-based SPD representation is used to represent a set of local descriptors.

Nevertheless, to the best of our survey, all the existing few works on SPD representation in deep learning focus on the covariance-matrix-based SPD representation. None of them has considered the kernel-matrix-based one, which can produce significantly better recognition performance. To address this issue, we develop a deep network focusing on the kernel-matrix-based SPD representation and jointly learn this representation with deep local descriptors. Also, the work in [10] derives the derivatives of the matrix logarithm function from scratch. Although instructive, it does not connect this derivation with the operator theory on positive definite matrices [2]. In this work, by establishing this interesting link, we not only readily obtain the derivatives for the matrix logarithm (and other general SPD matrix functions) in a much more concise way, but can also gain more insight by accessing the vast knowledge in that field for future research.

Finally, it is worth noting that in this work the kernel matrix is integrated into deep neural networks as a representation of a set of local descriptors collected from the activation feature maps. This is fundamentally different from the recent works that develop new CNNs with reproducing kernels, supervised convolutional kernel networks, and deep kernel learning models [17, 16, 26].

## 3 The proposed network DeepKSPD

The proposed network DeepKSPD consists of three blocks, as shown in Fig. 1. The leftmost block maps an input image into a set of deep local descriptors. Since we deal with visual recognition, any convolutional neural network can be used; it generates a set of activation feature maps for an image, from which a set of deep local descriptors is collected. In this work we employ the commonly used VGG network pre-trained on the ImageNet dataset. The rightmost block includes the commonly used fully connected and softmax layers to produce the posterior probability of each class. In between is the KSPD block, which contains the layers related to the kernel-matrix-based representation and the matrix logarithm. Specifically, the input of the KSPD block is the output of the last convolutional layer of the VGG network, that is, $d$ activation feature maps of size $h\times w$. These feature maps are reshaped along the depth dimension, giving rise to the matrix $X \in \mathbb{R}^{d\times n}$ with $n = h\cdot w$. Afterwards, the $d\times d$ kernel matrix $K$ is computed over the rows of $X$. It pools the deep local descriptors by capturing the pairwise nonlinear relationship among the feature maps. Following that is the matrix logarithm layer, which handles the Riemannian geometry of the SPD matrix and produces the matrix $F = \log(K)$. Since $F$ is symmetric, a layer that extracts the upper triangular and diagonal entries of $F$ is deployed next to avoid redundancy. We observe that normalized KSPD representations usually perform better; therefore, a normalization layer and a batch normalization layer are added at the two ends of the KSPD block, respectively.
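The KSPD block described above can be sketched as follows (an illustrative NumPy forward pass under our own naming, with a Gaussian kernel parameter `theta`; the normalization layers at the two ends of the block are omitted):

```python
import numpy as np

def kspd_forward(feature_maps, theta=1.0):
    """h x w x d activation maps -> KSPD feature vector:
    reshape -> Gaussian kernel matrix K -> matrix logarithm -> upper triangle."""
    h, w, d = feature_maps.shape
    X = feature_maps.reshape(h * w, d).T          # X in R^{d x n}, n = h*w
    A = X @ X.T
    sq = np.diag(A)
    B = sq[:, None] + sq[None, :] - 2.0 * A       # pairwise sq. distances of rows
    K = np.exp(-theta * B)                        # d x d Gaussian kernel matrix
    lam, U = np.linalg.eigh(K)
    F = (U * np.log(lam)) @ U.T                   # matrix logarithm layer
    return F[np.triu_indices(d)]                  # upper triangle incl. diagonal

maps = np.random.RandomState(0).rand(7, 7, 16)    # toy activation maps
v = kspd_forward(maps)                            # length d*(d+1)/2 = 136
```

This only mirrors the data flow of the block; in the actual network every step above is differentiable, which is what Section 4 establishes.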

## 4 End-to-end training of DeepKSPD

### 4.1 Derivatives between X and the kernel matrix K

Recall that $X \in \mathbb{R}^{d\times n}$ denotes a set of local descriptors. Considering that the Gaussian kernel is commonly used in the literature and that it was used in [23] to demonstrate the advantage of the kernel-matrix-based representation, we exemplify the proposed DeepKSPD with a Gaussian kernel and focus on this case to derive the derivatives. Other kernels, such as the polynomial kernel, can be dealt with in a similar way.

Let $I$ and $\mathbf{1}$ denote an identity matrix and a matrix of 1s, respectively. Let $\circ$ denote the entrywise product (Hadamard product) of two matrices, and $\exp(\cdot)$ denote the exponential function applied to a matrix in an entrywise manner. In this way, the Gaussian kernel matrix computed on $X$ can be expressed as

$$K = \exp\!\Big(-\theta\big[(I\circ(XX^\top))\mathbf{1} + \mathbf{1}(I\circ(XX^\top)) - 2XX^\top\big]\Big), \tag{1}$$

where $\theta > 0$ controls the width of the Gaussian kernel. This expression is illustrated in Fig. 2.
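The matrix form of Eq.(1) can be checked against the entrywise definition of the Gaussian kernel. The snippet below (our illustrative code, not the authors' implementation) builds $K$ from the matrix expression:

```python
import numpy as np

def gaussian_kernel_matrix(X, theta):
    """Eq.(1): K = exp(-theta [ (I o A) 1 + 1 (I o A) - 2 A ]),  A = X X^T."""
    d = X.shape[0]
    A = X @ X.T
    IA = np.eye(d) * A                   # I o A keeps only the diagonal of A
    ones = np.ones((d, d))               # the matrix of 1s
    B = IA @ ones + ones @ IA - 2.0 * A  # pairwise squared distances of rows
    return np.exp(-theta * B)
```

Each entry $K_{ij}$ then equals $\exp(-\theta\|x_{i\cdot}-x_{j\cdot}\|^2)$, the Gaussian kernel between the $i$th and $j$th rows of $X$.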

Let $J$ denote the objective function to be optimized by the DeepKSPD network. By temporarily assuming that the derivative $\partial J/\partial K$ is known (this will be resolved in the next section), we now work out the derivatives $\partial J/\partial X$ and $\partial J/\partial\theta$. Note that $J$ is a composition of functions applied to $X$, and it can be equally expressed as a function of each of the intermediate variables as follows:

$$J = J(K) = J(K(B)) = J(K(B(A))) = J(K(B(A(X)))), \tag{2}$$

where $A$, $B$, and $K$ are defined as

$$A = XX^\top,\qquad B = (I\circ A)\mathbf{1} + \mathbf{1}(I\circ A) - 2A,\qquad K = \exp(-\theta B). \tag{3}$$

Following the rules of differentiation, the following relationships can be obtained:

$$dA = (dX)X^\top + X(dX)^\top,\qquad dB = (I\circ dA)\mathbf{1} + \mathbf{1}(I\circ dA) - 2\,dA,\qquad dK = -\theta\,K\circ dB. \tag{4}$$

Furthermore, it is known from the differentiation of a scalar-valued matrix function that

$$dJ = \Big\langle \frac{\partial J}{\partial K},\, dK\Big\rangle = \mathrm{vec}\Big(\frac{\partial J}{\partial K}\Big)^{\!\top}\mathrm{vec}(dK), \tag{5}$$

where $\mathrm{vec}(\cdot)$ denotes the vectorization of a matrix and $\langle\cdot,\cdot\rangle$ denotes the inner product. Combining this result with $dK = -\theta\,K\circ dB$ in Eq.(4) and using the identity that $\langle P, Q\circ R\rangle = \langle Q\circ P, R\rangle$, we can obtain

$$dJ = \Big\langle \frac{\partial J}{\partial K},\, -\theta\,K\circ dB\Big\rangle = \Big\langle -\theta\,K\circ\frac{\partial J}{\partial K},\, dB\Big\rangle = \Big\langle \frac{\partial J}{\partial B},\, dB\Big\rangle. \tag{6}$$

The last equality holds because from Eq.(2) we know that $J$ can also be written as a function of $B$. Noting that Eq.(6) is true for any $dB$, we can therefore derive that

$$\frac{\partial J}{\partial B} = -\theta\,K\circ\frac{\partial J}{\partial K}. \tag{7}$$

Repeating the above process by using the relationship of $dB$ and $dA$ and that of $dA$ and $dX$ in Eq.(4), we can further have (the proof is provided in the Appendix)

$$\frac{\partial J}{\partial A} = I\circ\Big[\frac{\partial J}{\partial B}\mathbf{1} + \mathbf{1}\frac{\partial J}{\partial B}\Big] - 2\frac{\partial J}{\partial B},\qquad \frac{\partial J}{\partial X} = \Big[\frac{\partial J}{\partial A} + \Big(\frac{\partial J}{\partial A}\Big)^{\!\top}\Big]X. \tag{8}$$

In addition, the derivative with respect to the kernel parameter $\theta$ can be obtained as

$$\frac{\partial J}{\partial \theta} = \Big\langle \frac{\partial J}{\partial K},\, -B\circ K\Big\rangle. \tag{9}$$

Therefore, once $\partial J/\partial K$ is available, we can work out $\partial J/\partial X$ and $\partial J/\partial\theta$ according to the above results.
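The chain of results in Eqs.(7)-(9) can be verified numerically. Below is our own NumPy sketch of the backward pass through the kernel layer (function and variable names are ours, not from the paper):

```python
import numpy as np

def kspd_backward(X, theta, dJdK):
    """Backward pass of the kernel layer (Eqs.(7)-(9)), a NumPy sketch.
    X is d x n (rows = descriptor components), dJdK is the d x d gradient
    flowing back from the layers above; returns dJ/dX and dJ/dtheta."""
    A = X @ X.T
    dg = np.diag(A)
    B = dg[:, None] + dg[None, :] - 2.0 * A
    K = np.exp(-theta * B)
    dJdB = -theta * K * dJdK                              # Eq.(7)
    # Eq.(8), first half: dJ/dA = I o [(dJ/dB) 1 + 1 (dJ/dB)] - 2 dJ/dB
    dJdA = np.diag(dJdB.sum(axis=1) + dJdB.sum(axis=0)) - 2.0 * dJdB
    dJdX = (dJdA + dJdA.T) @ X                            # Eq.(8), second half
    dJdtheta = -np.sum(dJdK * (B * K))                    # Eq.(9)
    return dJdX, dJdtheta
```

A toy objective such as $J = \langle W, K\rangle$ (so that $\partial J/\partial K = W$) makes it easy to check these analytic gradients against finite differences.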

### 4.2 Derivatives of the matrix logarithm on the kernel matrix K

Now, to obtain $\partial J/\partial K$ we deal with the matrix logarithm operation between $K$ and $F$, which can be written as

$$F = \log(K) = U\log(\Lambda)U^\top, \tag{10}$$

where $K = U\Lambda U^\top$ is the eigendecomposition of $K$.

Note that $\partial J/\partial F$ is ready to obtain because it only involves the classification layers, such as the fully connected layer, softmax regression, and the cross-entropy computation. The key issue is to obtain $\partial J/\partial K$. In the following we introduce the Daleckiĭ–Kreĭn formula [6] to give a concise and unified result on differentiating SPD matrix functions, of which the matrix logarithm is a special case.

Theorem 1 ([2]) Let $\mathbb{S}_n$ be the set of $n\times n$ real symmetric matrices. Let $\mathcal{I}$ be an open interval and $\mathbb{S}_n(\mathcal{I})$ the set of all real symmetric matrices whose eigenvalues belong to $\mathcal{I}$. Let $C^1(\mathcal{I})$ be the space of continuously differentiable real functions on $\mathcal{I}$. Every function $f$ in $C^1(\mathcal{I})$ induces a differentiable map from $S$ in $\mathbb{S}_n(\mathcal{I})$ to $f(S)$ in $\mathbb{S}_n$. Let $Df(S)$ denote the derivative of $f$ at $S$. It is a linear map from $\mathbb{S}_n$ to itself. When applied to $H \in \mathbb{S}_n$, $Df(S)$ is given by the Daleckiĭ–Kreĭn formula as

$$Df(S)(H) = U\big[G\circ(U^\top H U)\big]U^\top, \tag{11}$$

where $S = U\Lambda U^\top$ is the eigendecomposition of $S$ with $\Lambda = \mathrm{diag}(\lambda_1,\ldots,\lambda_n)$, and $\circ$ is the entrywise product. The $(i,j)$th entry of the matrix $G$ is defined as

$$G_{ij} = \begin{cases} \dfrac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j}, & \lambda_i\neq\lambda_j,\\[2mm] f'(\lambda_i), & \lambda_i=\lambda_j. \end{cases} \tag{12}$$
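A minimal NumPy sketch (ours) of Eqs.(11)-(12): the directional derivative $Df(S)(H)$ computed via the eigendecomposition of $S$, which can be validated against finite differences for $f = \log$:

```python
import numpy as np

def dk_derivative(S, H, f, fprime):
    """Daleckii-Krein formula, Eq.(11): Df(S)(H) = U [G o (U^T H U)] U^T."""
    lam, U = np.linalg.eigh(S)
    L1, L2 = lam[:, None], lam[None, :]
    diff = L1 - L2
    safe = np.where(np.abs(diff) > 1e-12, diff, 1.0)
    # first divided differences of f at the eigenvalues, Eq.(12)
    G = np.where(np.abs(diff) > 1e-12, (f(L1) - f(L2)) / safe, fprime(L1))
    return U @ (G * (U.T @ H @ U)) @ U.T
```

For a small symmetric perturbation $H$, $(f(S+\epsilon H) - f(S-\epsilon H))/(2\epsilon)$ should match `dk_derivative(S, H, f, fprime)` up to first order.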

This theorem indicates that for a matrix function $f$ applied to $S$, perturbing $S$ by a small amount $H$ will vary $f(S)$ by the quantity in Eq.(11), where the variation is in the sense of the first-order approximation. Now we show how to derive the functional relationship between $\partial J/\partial K$ and $\partial J/\partial F$ based on Theorem 1. According to Eq.(2) and following the argument in Eq.(5), we have

$$dJ = \Big\langle \frac{\partial J}{\partial F},\, dF\Big\rangle. \tag{13}$$

Applying the Daleckiĭ–Kreĭn formula, we can explicitly represent $dF$ as a function of $dK$ as

$$dF = D\log(K)(dK) = U\big[G\circ(U^\top (dK)\, U)\big]U^\top. \tag{14}$$

Replacing $dF$ in Eq.(13) with the above result and again applying the properties of the inner product, the relationship between $\partial J/\partial K$ and $\partial J/\partial F$ can be derived in a similar way as in Eqs.(6) and (7):

$$\frac{\partial J}{\partial K} = U\Big[G\circ\Big(U^\top \frac{\partial J}{\partial F}\, U\Big)\Big]U^\top, \tag{15}$$

where $U$ and $\Lambda$ are obtained from the eigendecomposition of $K$. The matrix logarithm is now just a special case in which $G_{ij}$ in Eq.(12) is computed as $(\log\lambda_i - \log\lambda_j)/(\lambda_i-\lambda_j)$ when $\lambda_i\neq\lambda_j$ and $1/\lambda_i$ otherwise.
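Eq.(15) can likewise be checked numerically. Taking a toy objective $J(F) = \langle W, F\rangle$ with symmetric $W$ (so that $\partial J/\partial F = W$), the analytic gradient below should match a finite-difference directional derivative (illustrative code, not the authors' implementation):

```python
import numpy as np

def logm(M):
    """Matrix logarithm via eigendecomposition."""
    lam, U = np.linalg.eigh(M)
    return (U * np.log(lam)) @ U.T

def logm_backward(K, dJdF):
    """Eq.(15): dJ/dK = U [G o (U^T (dJ/dF) U)] U^T, with G the first
    divided differences of log at the eigenvalues of K."""
    lam, U = np.linalg.eigh(K)
    L1, L2 = lam[:, None], lam[None, :]
    diff = L1 - L2
    safe = np.where(np.abs(diff) > 1e-12, diff, 1.0)
    G = np.where(np.abs(diff) > 1e-12, (np.log(L1) - np.log(L2)) / safe, 1.0 / L1)
    return U @ (G * (U.T @ dJdF @ U)) @ U.T
```

In the DeepKSPD network, `dJdF` would be the gradient arriving from the classification layers rather than a fixed matrix $W$.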

The work in [10] derives the derivative of the matrix logarithm from scratch with the basic facts of matrix differentiation, which is instructive. However, as previously mentioned, that work does not connect this derivative with the well-established Daleckiĭ–Kreĭn formula. To consolidate this connection and link with the work in [10], we prove the following proposition.

Proposition 1 The functional relationship obtained in [10], shown in Eq.(16) (written with the notation of this work for consistency, and assuming distinct eigenvalues as in [10]), is equivalent to that in Eq.(15) obtained in this work.

$$\frac{\partial J}{\partial K} = U\Big[\Big(\tilde K^\top \circ \Big(U^\top\frac{\partial J}{\partial U}\Big)\Big)_{\mathrm{sym}} + \Big(\frac{\partial J}{\partial \Lambda}\Big)_{\mathrm{diag}}\Big]U^\top,\quad \frac{\partial J}{\partial U} = 2\Big(\frac{\partial J}{\partial F}\Big)_{\mathrm{sym}} U\log(\Lambda),\quad \frac{\partial J}{\partial \Lambda} = \Lambda^{-1}U^\top\frac{\partial J}{\partial F}U, \tag{16}$$

where $\tilde K_{ij} = 1/(\lambda_i-\lambda_j)$ when $i\neq j$ and zero otherwise; $(\cdot)_{\mathrm{diag}}$ means the off-diagonal entries of a matrix are all set to zeros; and $(M)_{\mathrm{sym}}$ is defined to represent $\frac{1}{2}(M+M^\top)$.

Proof. Note that $\partial J/\partial F$ is symmetric because $F$ is symmetric. Therefore, $(\partial J/\partial F)_{\mathrm{sym}}$ just equals $\partial J/\partial F$. Denoting the symmetric matrix $D = U^\top(\partial J/\partial F)U$, Eq.(16) can be written as

$$\frac{\partial J}{\partial K} = U\Big[\big(\tilde K^\top\circ(2D\log\Lambda)\big)_{\mathrm{sym}} + \big(\Lambda^{-1}D\big)_{\mathrm{diag}}\Big]U^\top. \tag{17}$$

Now let us examine the matrix $\big(\tilde K^\top\circ(2D\log\Lambda)\big)_{\mathrm{sym}}$. Noting that for $i\neq j$

$$\big(\tilde K^\top\circ(2D\log\Lambda)\big)_{ij} = \frac{2D_{ij}\log\lambda_j}{\lambda_j-\lambda_i}, \tag{18}$$

it can be obtained that, for $i\neq j$,

$$\Big[\big(\tilde K^\top\circ(2D\log\Lambda)\big)_{\mathrm{sym}}\Big]_{ij} = \frac{1}{2}\Big(\frac{2D_{ij}\log\lambda_j}{\lambda_j-\lambda_i} + \frac{2D_{ji}\log\lambda_i}{\lambda_i-\lambda_j}\Big) = D_{ij}\,\frac{\log\lambda_i-\log\lambda_j}{\lambda_i-\lambda_j} = G_{ij}D_{ij}, \tag{19}$$

where $G$ is the matrix defined in Eq.(12) with $f = \log$, and the symmetry of $D$ is used. Since the diagonal of $\tilde K$ is zero while the diagonal entries of $(\Lambda^{-1}D)_{\mathrm{diag}}$ are $D_{ii}/\lambda_i = G_{ii}D_{ii}$, it can therefore be obtained that

$$\big(\tilde K^\top\circ(2D\log\Lambda)\big)_{\mathrm{sym}} + \big(\Lambda^{-1}D\big)_{\mathrm{diag}} = G\circ D. \tag{20}$$

Combining this result with Eq.(17) gives rise to

$$\frac{\partial J}{\partial K} = U\Big[G\circ\Big(U^\top\frac{\partial J}{\partial F}U\Big)\Big]U^\top, \tag{21}$$

which is exactly Eq.(15). This completes the proof.

Connecting with results in operator theory not only facilitates access to the derivatives of SPD matrix functions, but also provides more insight into these functions. For example, $G$ defined in Eq.(12) has a specific name, the "first divided difference" of the function $f$, and is called the "Löwner matrix" [2]. The positive semi-definiteness (PSD) of the Löwner matrix guarantees the operator monotonicity of $f$, that is, $f(S_1) - f(S_2)$ remains PSD if $S_1 - S_2$ is PSD. This applies to the matrix logarithm function because it can be proved that $G$ in Eq.(12) is PSD when $f = \log$. Properties like this could be useful for future research on SPD representations, for example, when designing a deep Siamese network that involves the difference of two SPD representations.
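The PSD property of the Löwner matrix of $\log$ can be probed numerically as a quick sanity check (not a proof); the random eigenvalue draw below is our arbitrary choice:

```python
import numpy as np

def loewner_matrix_log(lam):
    """Löwner matrix of f = log: G_ij = (log li - log lj)/(li - lj),
    with 1/li on the diagonal."""
    L1, L2 = lam[:, None], lam[None, :]
    diff = L1 - L2
    safe = np.where(np.abs(diff) > 1e-12, diff, 1.0)
    return np.where(np.abs(diff) > 1e-12, (np.log(L1) - np.log(L2)) / safe, 1.0 / L1)

lam = np.random.RandomState(0).uniform(0.1, 5.0, size=6)
G = loewner_matrix_log(lam)
# all eigenvalues of G should be non-negative up to round-off
assert np.linalg.eigvalsh(G).min() > -1e-10
```

This matches the integral representation $G_{ij} = \int_0^\infty (\lambda_i + t)^{-1}(\lambda_j + t)^{-1}\,dt$, which exhibits $G$ as a Gram matrix and hence PSD.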

## 5 Experimental Result

There are two tasks in this experiment: i) test the performance of KSPD built upon deep local descriptors, and ii) more importantly, test the performance of the proposed end-to-end learning network DeepKSPD, on the tasks of fine-grained image recognition and scene recognition, by following the literature. Example images of these datasets are shown in Fig. 3.

#### Datasets

Four benchmark data sets are employed in this experiment. For scene recognition, the MIT Indoor data set is used, with its predefined training and test images. For fine-grained image recognition, three data sets, Cars [13], Birds [25], and Aircrafts [18], are tested. All of them are benchmarks widely used by the recently developed deep learning based image recognition methods. In the Birds dataset, bounding boxes are not used.

#### Setting of Proposed Methods

For the first task, we put forward a method called KSPD-VGG, which constructs the kernel-matrix-based SPD representation upon the deep local descriptors extracted from a VGG network pretrained on ImageNet. Specifically, the feature maps (of size $h\times w\times d$) of the last convolutional layer of VGG are reshaped to form $n = h\cdot w$ vectors of dimension $d$. These vectors are further used to compute the Gaussian kernel matrix $K$. Then, after applying the matrix logarithm to the kernel matrix, only the upper triangular and diagonal parts of the resulting matrix are taken and vectorized to represent an image. The resulting KSPD representations of all images are further processed by PCA dimensionality reduction, standardization (to zero mean and unit standard deviation), and normalization. Finally, a nonlinear SVM classifier is employed to perform classification for this first task.
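The post-processing chain described above (PCA, standardization, normalization) can be sketched as follows; this is our illustrative NumPy code, with the output dimension and the use of $\ell_2$ normalization as assumptions rather than the paper's exact settings:

```python
import numpy as np

def postprocess(features, out_dim):
    """Sketch of the KSPD-VGG post-processing: PCA -> standardization ->
    l2 normalization. `features` is (num_images, feature_dim)."""
    Xc = features - features.mean(axis=0)
    # PCA via SVD of the centered feature matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:out_dim].T
    # standardize each dimension to zero mean and unit standard deviation
    Z = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)
    # l2-normalize each image's representation (our assumption)
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
```

The processed representations would then be fed to an SVM classifier, which is outside this sketch.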

For the second task, the proposed DeepKSPD network is trained and tested. Note that the architecture of DeepKSPD consists of three blocks (Fig. 1). In the local descriptor block, the network hyperparameters (e.g., the number of kernels and their sizes) are set by following VGG. In the proposed KSPD representation block, no hyperparameter needs to be preset (the initial value of the kernel parameter $\theta$ is set identically for all of the experiments). In the classification block, the size of the FC layer is set as the number of classes of each data set. DeepKSPD is trained by Adaptive Moment Estimation (Adam) in mini-batch mode. A two-step training procedure [3] is applied, as good performance has been observed with it [3, 14]. Specifically, we first train the last layer using softmax regression, and then fine-tune the whole system. The total number of training epochs varies with the data sets.

#### Methods in Comparison

We compare the proposed KSPD-VGG and DeepKSPD with a set of methods that are either comparable or competitive in the literature. They are listed in the first column in Table 1, and can be roughly grouped into the following three categories.

The first category can be deemed feature extraction methods, to which KSPD-VGG belongs. This category also includes FV-SIFT [19], FC-VGG [10], FV-VGG [5], and COV-VGG (standing for the covariance-matrix-based SPD representation). Except in FV-SIFT, the images are represented by features extracted from the pretrained deep CNN model (VGG) without fine-tuning, which allows us to better focus on the sheer effectiveness of the methods in comparison. In FC-VGG, features are extracted from the last FC layer of VGG for classification. FV-SIFT and FV-VGG construct Fisher vectors based on local descriptors for classification. FV-SIFT uses the conventional SIFT descriptors, while FV-VGG uses the deep local descriptors from the last convolutional layer of VGG, following the literature. COV-VGG's setting is the same as that of KSPD-VGG, except that a covariance matrix is constructed instead of a kernel matrix. Note that we directly quote the results of FV-SIFT and FC-VGG from the literature, and provide our own implementation of FV-VGG, COV-VGG, and KSPD-VGG to ensure the same setting for a fair comparison.

The second category includes three end-to-end learning methods, i.e., DeepCOV, DeepKSPD (proposed), and Bilinear CNN (denoted as B-CNN) [14]. DeepCOV follows the same network architecture as the proposed DeepKSPD, but replaces the kernel matrix in the KSPD layer with a covariance matrix. DeepCOV is conceptually the same as [10], but [10] is designed for segmentation. B-CNN is tested by using the code provided by [14]. The fine-tuned B-CNN is employed for a fair comparison with DeepCOV and DeepKSPD, which involve an end-to-end training. Note that [14] shows that some engineering efforts can significantly improve the performance of B-CNN, such as augmenting the data sets by flipping images and using a separate SVM classifier instead of the softmax layer in the original deep model for classification. To minimize the impact of these engineering tricks, we switch off the image flipping component in the downloaded code and directly perform the classification with the softmax layer as usual, the same as what we do with DeepCOV and DeepKSPD.

In the third category, additional methods previously reported on the involved data sets are quoted to further extend the comparison and provide a whole picture.

#### Results and Discussion

The result is summarized in Table 1 with the following observations.

First, the proposed KSPD-VGG and DeepKSPD demonstrate their effectiveness for visual recognition. On every dataset, the end-to-end learning method DeepKSPD achieves the best performance among all the methods. Overall, DeepKSPD shows superior performance over KSPD-VGG (with the largest gain on Cars) and other competitive methods, demonstrating the importance of the end-to-end learning of the kernel-matrix-based representation.

Second, it can be seen that the KSPD-based methods consistently outperform the COV-based ones on all data sets, whether based on feature extraction (KSPD-VGG vs. COV-VGG) or end-to-end learning (DeepKSPD vs. DeepCOV). To ensure a fair comparison, the KSPD-based and COV-based methods differ only in the SPD representation.

Third, as analyzed above, conceptually B-CNN is very close to DeepCOV when the two paths used in B-CNN are set to be the same. However, DeepCOV performs slightly worse than B-CNN in this experiment. Looking into this result, we find that, after obtaining the outer product matrix, B-CNN applies a signed square-root to all entries of the matrix, rather than performing the matrix logarithm as in DeepCOV and DeepKSPD. The signed square-root can be efficiently computed on GPU, so a much longer training procedure is tolerable. In contrast, the matrix logarithm is currently implemented on CPU, and its calculation is slower than the signed square-root. Therefore, we only train DeepCOV and DeepKSPD for a smaller number of epochs, and even with this setting the proposed DeepKSPD has achieved superior performance. Note that the incorporation of the matrix logarithm is necessary, as it is a principled way to handle the Riemannian geometry of SPD matrices. We have observed that with more epochs and a smaller learning rate, the performance of DeepKSPD and DeepCOV can be further improved, and the superiority of DeepKSPD over B-CNN becomes more salient. In future work, we will explore a GPU-based implementation of the matrix logarithm.

Fourth, as shown, the SPD representation (be it based on an outer product, covariance, or kernel matrix) outperforms the Fisher vector representation in the given visual recognition tasks. The proposed DeepKSPD also outperforms FV-VGG obtained from a fine-tuned VGG, whose results on Aircrafts, Birds, and Cars reported in [14] are all worse than those achieved by DeepKSPD.

Moreover, it is worth emphasizing that this experiment focuses on comparing the core of these methods. Therefore, we minimize the engineering tricks that are detachable from the model. Certainly, extra steps such as augmenting the data, fine-tuning the model for feature extraction, and applying multi-scale inputs, as used in the literature, can further improve the performance of KSPD-VGG and DeepKSPD.

## 6 Conclusion

Motivated by the recent progress on SPD representation, we developed a deep neural network that jointly learns local descriptors and a kernel-matrix-based SPD representation for fine-grained image recognition. The matrix derivatives required by the backpropagation process are derived and linked to the established literature on the theory of positive definite matrices. Experimental results on benchmark datasets demonstrate the improved performance of the kernel-matrix-based SPD representation when built upon deep local descriptors, and the superiority of the proposed DeepKSPD network. Future work will further explore the effectiveness of this network on other recognition tasks and develop SPD representations in other forms.

## 7 Appendix: Proof for Eq.(8) in main text

According to Eq.(2) and following the argument in Eq.(5), it can be shown that

$$dJ = \Big\langle \frac{\partial J}{\partial B},\, dB\Big\rangle,$$

where $\mathrm{vec}(\cdot)$ denotes the vectorization of a matrix and $\langle\cdot,\cdot\rangle$ denotes the inner product. Combining this result with $dB = (I\circ dA)\mathbf{1} + \mathbf{1}(I\circ dA) - 2\,dA$ in Eq.(4), it can be obtained that

$$dJ = \Big\langle \frac{\partial J}{\partial B},\, (I\circ dA)\mathbf{1}\Big\rangle + \Big\langle \frac{\partial J}{\partial B},\, \mathbf{1}(I\circ dA)\Big\rangle - 2\Big\langle \frac{\partial J}{\partial B},\, dA\Big\rangle.$$

Applying the identities that $\langle P, QR\rangle = \langle PR^\top, Q\rangle = \langle Q^\top P, R\rangle$ and $\langle P, Q\circ R\rangle = \langle Q\circ P, R\rangle$ (together with $\mathbf{1}^\top = \mathbf{1}$ and $I^\top = I$), we can have

$$dJ = \Big\langle I\circ\Big[\frac{\partial J}{\partial B}\mathbf{1}\Big] + I\circ\Big[\mathbf{1}\frac{\partial J}{\partial B}\Big] - 2\frac{\partial J}{\partial B},\; dA\Big\rangle.$$

Because we know $J$ can also be expressed as a function of $A$, so that $dJ = \langle \partial J/\partial A,\, dA\rangle$, and the last result is valid for any $dA$, it can be obtained that

$$\frac{\partial J}{\partial A} = I\circ\Big[\frac{\partial J}{\partial B}\mathbf{1} + \mathbf{1}\frac{\partial J}{\partial B}\Big] - 2\frac{\partial J}{\partial B}.$$

This gives rise to the first half of Eq.(8).

Again, combining $dJ = \langle \partial J/\partial A,\, dA\rangle$ with $dA = (dX)X^\top + X(dX)^\top$ in Eq.(4), it can be obtained that

$$dJ = \Big\langle \frac{\partial J}{\partial A},\, (dX)X^\top\Big\rangle + \Big\langle \frac{\partial J}{\partial A},\, X(dX)^\top\Big\rangle.$$

Applying the identities that $\langle P, QR\rangle = \langle PR^\top, Q\rangle$ and $\langle P, Q^\top\rangle = \langle P^\top, Q\rangle$, we can obtain

$$dJ = \Big\langle \Big[\frac{\partial J}{\partial A} + \Big(\frac{\partial J}{\partial A}\Big)^{\!\top}\Big]X,\; dX\Big\rangle.$$

Because we know $J$ can also be expressed as a function of $X$ and the last result is valid for any $dX$, it can therefore be obtained that

$$\frac{\partial J}{\partial X} = \Big[\frac{\partial J}{\partial A} + \Big(\frac{\partial J}{\partial A}\Big)^{\!\top}\Big]X.$$

This gives rise to the second half of Eq.(8).

In addition, $\partial J/\partial\theta$ can be derived in a similar manner. As previously, $K$ can be equally written as

$$K = \exp(-\theta B),$$

where $\theta$ is the width parameter of the Gaussian kernel, a scalar. It is not difficult to see that by regarding $B$ as constant, $\partial K/\partial\theta = -B\circ\exp(-\theta B) = -B\circ K$. Therefore, it can be obtained that

$$dJ = \Big\langle \frac{\partial J}{\partial K},\, dK\Big\rangle = \Big\langle \frac{\partial J}{\partial K},\, -B\circ K\Big\rangle\, d\theta.$$

Combining with the last equation, we have

$$\frac{\partial J}{\partial \theta} = \Big\langle \frac{\partial J}{\partial K},\, -B\circ K\Big\rangle,$$

which is exactly Eq.(9).

## 8 Appendix: Visualization of feature maps learned by DeepKSPD network

(a) Input image | (b) Before learning | (c) After learning | (d) Difference

To gain more insight into the proposed DeepKSPD network, we visualize the activation feature maps (accumulated along the depth dimension) obtained with and without DeepKSPD learning. In the figure, the four columns correspond to: 1) the original input image; 2) the accumulated activation feature maps before learning (obtained from the pretrained VGG network); 3) the accumulated activation feature maps after learning (obtained from the trained DeepKSPD network); and 4) the difference between the two previous maps, where red indicates an increase and green indicates a decrease.

As seen, the activations in the feature maps learned by DeepKSPD are generally enhanced on the body of the cars while reduced on the surroundings that are less relevant for car recognition. This shows that in the presence of the kernel-matrix-based SPD representation block, the DeepKSPD network is able to learn features that are meaningful from the perspective of recognition. This provides additional support to the excellent performance observed for DeepKSPD.

## References

- [1] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Log-euclidean metrics for fast and simple calculus on diffusion tensors. Magnetic Resonance in Medicine, 56(2):411–421, 2006.
- [2] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2015.
- [3] S. Branson, G. V. Horn, S. Belongie, and P. Perona. Bird species categorization using pose normalized deep convolutional nets. In British Machine Vision Conference (BMVC), Nottingham, 2014.
- [4] Y. Chai, V. Lempitsky, and A. Zisserman. Symbiotic segmentation and part localization for fine-grained categorization. In IEEE International Conference on Computer Vision, 2013.
- [5] M. Cimpoi, S. Maji, and A. Vedaldi. Deep filter banks for texture recognition and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pages 3828–3836, 2015.
- [6] Y. L. Daleckiĭ and S. G. Kreĭn. Integration and differentiation of functions of hermitian operators and applications to the theory of perturbations. (Russian) Vorone. Gos. Univ. Trudy Sem. Funkcional. Anal. 1, (1):81–105, 1956. English translation is in book Thirteen Papers on Functional Analysis and Partial Differential Equations, American Mathematical Society Translations: Series 2, vol.47, 1965.
- [7] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Computer Vision - ECCV 2014, pages 392–407, 2014.
- [8] P.-H. Gosselin, N. Murray, H. Jégou, and F. Perronnin. Revisiting the Fisher vector for fine-grained classification. Pattern Recognition Letters, 49:92–98, Nov. 2014.
- [9] M. T. Harandi, M. Salzmann, and F. M. Porikli. Bregman divergences for infinite dimensional covariance matrices. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, pages 1003–1010, 2014.
- [10] C. Ionescu, O. Vantzos, and C. Sminchisescu. Matrix backpropagation for deep networks with structured layers. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, pages 2965–2973, 2015.
- [11] S. Jayasumana, R. I. Hartley, M. Salzmann, H. Li, and M. T. Harandi. Kernel methods on the riemannian manifold of symmetric positive definite matrices. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 73–80, 2013.
- [12] H. Jegou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, pages 3304–3311, 2010.
- [13] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
- [14] T. Lin, A. Roy Chowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, pages 1449–1457, 2015.
- [15] T.-Y. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNNs for fine-grained visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
- [16] J. Mairal. End-to-end kernel learning with supervised convolutional kernel networks. In NIPS, pages 1399–1407, 2016.
- [17] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In NIPS, pages 2627–2635, 2014.
- [18] S. Maji, E. Rahtu, J. Kannala, M. B. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. CoRR, abs/1306.5151, 2013.
- [19] F. Perronnin, J. Sánchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV’10, pages 143–156. Springer-Verlag, 2010.
- [20] J. Sánchez, F. Perronnin, T. Mensink, and J. J. Verbeek. Image classification with the fisher vector: Theory and practice. International Journal of Computer Vision, 105(3):222–245, 2013.
- [21] J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In 9th IEEE International Conference on Computer Vision (ICCV 2003), pages 1470–1477, 2003.
- [22] J. Wang, J. Yang, K. Yu, F. Lv, T. S. Huang, and Y. Gong. Locality-constrained linear coding for image classification. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, pages 3360–3367, 2010.
- [23] L. Wang, J. Zhang, L. Zhou, C. Tang, and W. Li. Beyond covariance: Feature representation with nonlinear kernel matrices. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, pages 4570–4578, 2015.
- [24] R. Wang, H. Guo, L. S. Davis, and Q. Dai. Covariance discriminative learning: A natural and efficient approach to image set classification. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2496–2503, 2012.
- [25] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
- [26] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing. Stochastic variational deep kernel learning. In NIPS, pages 2586–2594, 2016.