Learning Explicit Deep Representations from Deep Kernel Networks


Mingyuan Jiu and Hichem Sahbi. M. Jiu is with the School of Information Engineering, Zhengzhou University, Zhengzhou, China. Email: iemyjiu@zzu.edu.cn. H. Sahbi is with CNRS, LIP6 UPMC, Sorbonne University, Paris, France. Email: hichem.sahbi@lip6.fr
Abstract

Deep kernel learning aims at designing nonlinear combinations of multiple standard elementary kernels by training deep networks. This scheme has proven to be effective, but it becomes intractable when handling large-scale datasets, especially as the depth of the trained networks increases; indeed, the complexity of evaluating these networks scales quadratically w.r.t. the size of the training data and linearly w.r.t. the depth of the trained networks.

In this paper, we address the issue of efficient computation in Deep Kernel Networks (DKNs) by designing effective maps in the underlying Reproducing Kernel Hilbert Spaces. Given a pretrained DKN, our method builds its associated Deep Map Network (DMN) whose inner product approximates the original network while being far more efficient. The design principle of our method is greedy and achieved layer-wise, by finding maps that approximate DKNs at different (input, intermediate and output) layers. This design also considers an extra fine-tuning step based on unsupervised learning, that further enhances the generalization ability of the trained DMNs. When plugged into SVMs, these DMNs turn out to be as accurate as the underlying DKNs while being at least an order of magnitude faster on large-scale datasets, as shown through extensive experiments on the challenging ImageCLEF and COREL5k benchmarks.

Index Terms: Multiple kernel learning, kernel design, deep networks, efficient computation, image annotation.

1 Introduction

Kernel design has been an active field of machine learning during the last two decades with many innovative kernel-based algorithms successfully applied to various tasks, including support vector machines (SVMs) for pattern classification and support vector regression for multivariate estimation [1, 2, 3, 4, 5, 6] as well as kernel-PCA for dimensionality reduction [7]. The success of these kernel-based algorithms is highly dependent on the choice of kernels; the latter are defined as symmetric and positive semi-definite functions that return the similarity between data [8, 9]. Various kernels have been introduced in the literature [9] including standard elementary kernels (linear, polynomial, Gaussian, histogram intersection, etc.) as well as sophisticated ones that model more complex relationships between data [3, 10, 11]. However, in practice, knowing a priori which (elementary or sophisticated) kernel is suitable for a given task is not obvious and research has recently been undertaken in order to train suitable kernels for different classification tasks (see for instance [12, 13, 14, 15, 16, 17, 18, 43]).

Among existing solutions, Multiple Kernel Learning (MKL) [12, 19, 20] has been popular; its principle consists in learning (sparse or convex) linear combinations of elementary kernels that maximize performances for a given classification task. Different MKL algorithms have been proposed in the literature, including constrained quadratic programming [12], second-order cone and semi-infinite linear programming [19, 21] as well as simpleMKL based on mixed-norm regularization [20]. In spite of their relative success, these solutions hit two major limitations: on the one hand, the convexity of these simple linear MKL models may limit the space of possible (and also relevant) solutions. On the other hand, MKL solutions, relying on shallow kernel combinations, are less powerful (compared to their deep variants) in capturing different levels of abstraction in the learned kernel similarity. Considering these two issues, nonlinear and deep architectures have recently been proposed and turned out to be more effective: for instance, hierarchical multiple kernel learning is proposed in [22] where elementary kernels are embedded into acyclic directed graphs, while in [23], nonlinear combinations of polynomial kernels are used. Following the spirit of deep convolutional neural networks [24, 25, 26], the authors in [13] adopt kernel functions as prior knowledge for regularization. In [27], Cho and Saul propose arc-cosine kernels that mimic the computation of large neural nets and can be used in shallow as well as deep networks. In [28], a multi-layer nonlinear MKL framework is proposed, but it is restricted to only two layers; in this solution, an exponential activation function is applied to each intermediate and output kernel combination. In [29], Jiu and Sahbi extend this method to a deeper network of more than two layers using a semi-supervised setting that takes into account the topology of training and test data. In all the aforementioned MKL algorithms, the computational complexity of kernel (gram-matrix) evaluation is a major issue that limits the applicability of these methods; indeed, considering a dataset with N samples, this complexity reaches O(N²L), with L being the depth of the deep kernel network; this evaluation process is clearly intractable even on reasonably sized datasets.
Existing solutions that reduce the computational complexity of evaluating these kernels consider explicit maps instead. In this respect, different solutions have been proposed in the literature including: the Nyström expansion [30], which generates low-rank kernel map approximations of original gram-matrices from data uniformly sampled without replacement (bounds on the Nyström approximation and sampling are given in [31, 32]), and random Fourier sampling (proposed by Rahimi and Recht [33] and extended to group-invariant kernels in [34]), which builds explicit features for stationary kernels using random sampling of the Fourier spectrum. Explicit feature maps for additive homogeneous kernels are also given in [35], where finite approximations are derived based on spectral analysis. Other works have been undertaken including random features [36] and convolutional kernel networks [37], which approximate maps of Gaussian kernels using convolutional neural networks.
In this paper, we propose a novel method that reduces the computational complexity of DKN evaluation (and therefore SVM learning) on large datasets. We address the issue of kernel map approximation for any deep nonlinear combination of elementary kernels rather than one specific type of kernel as achieved in the aforementioned related work. Our solution relies on the positive semi-definiteness (p.s.d) of existing elementary kernels (linear, polynomial, etc.) and the closure properties of the p.s.d with respect to different operations (including product, addition and exponentiation) in order to express DKNs as DMNs. In these closure properties, linear combinations of kernels correspond to concatenations of their respective maps, while products correspond to Kronecker tensor operations, etc. As some elementary kernels (such as the Gaussian and histogram intersection) used to feed the inputs of DKNs may have infinite dimensional or undefined maps, we consider new explicit maps that accurately approximate these elementary kernels. Considering these maps as inputs, this greedy process continues layer-wise in order to find all the maps of the subsequent (intermediate and output) layers. Note that the contribution presented in this paper is an extension of our preliminary work in [38], but it differs in at least two aspects: first, we consider an unsupervised training criterion that benefits from abundant unlabeled data in order to further decrease the approximation error of the trained DMN, thereby making its generalization power as high as that of the underlying DKN (and also better than existing elementary and shallow kernel combinations, as shown through experiments). Furthermore, with DMNs, one may employ efficient SVM learning algorithms based on stochastic gradient descent [39] on large-scale datasets, rather than the usual training algorithms that rely on heavy gram-matrices and intractable quadratic programming problems. All these statements are corroborated through extensive experiments using two benchmarks: ImageCLEF Photo Annotation [40, 41] and COREL5k [42].

Fig. 1: Left: a three-layer deep kernel network (DKN). Right: a sub-module of the deep map network (DMN). The blue dashed area in the left figure denotes a sub-module of the DKN, where each node stands for a kernel. The input in the right figure corresponds to the kernel maps and each unit stands for a feature.

The rest of this paper is organized as follows: in Section 2 we first briefly review DKNs, and then in Section 3 we introduce a novel method that builds their equivalent DMNs. In Section 4, we describe an unsupervised setting of our DMN design while in Section 5, we present the experimental validation of our method on image annotation tasks using the ImageCLEF and COREL5k benchmarks. Finally, we conclude the paper and provide possible extensions for future work.

2 Deep kernel networks at a glance

A deep kernel network [28, 29] is a multi-layered architecture that recursively defines nonlinear combinations of elementary kernels (linear, Gaussian, etc.). Let κ_p^{(l+1)} denote the kernel function assigned to unit p of layer l+1; κ_p^{(l+1)} is recursively defined as the output of a nonlinear activation function g (for instance, the exponential function [28]) applied to a weighted combination of (input or intermediate) kernels from the preceding layer as

    κ_p^{(l+1)}(x, x') = g( Σ_q w_{q,p}^{(l)} κ_q^{(l)}(x, x') ),        (1)

with {w_{q,p}^{(l)}} being the (nonnegative) weights connecting units q and p at layers l and l+1; see the blue dashed area in Fig. (1, left). This feed-forward kernel evaluation is achieved layer-wise till reaching the final output kernel. In this recursive definition, other activation functions can be chosen (particularly for the intermediate layers), including the hyperbolic tangent, which makes the learning numerically more stable while also preserving the p.s.d of the final output kernel.
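To make this recursion concrete, the following minimal sketch (Python/NumPy; the elementary kernels, unit counts and weights are illustrative placeholders rather than the trained values used in the paper) evaluates a small DKN on a pair of samples.

```python
import numpy as np

def dkn_kernel(x, xp, elementary_kernels, weights, activations):
    """Evaluate a deep kernel network (Eq. 1) on a pair of samples.

    elementary_kernels: callables kappa(x, xp) feeding the input layer.
    weights: list of matrices, weights[l][q, p] connects unit q of layer l
             to unit p of layer l+1.
    activations: one nonlinearity g per layer transition.
    """
    # input layer: evaluate the elementary kernels on the pair (x, xp)
    k = np.array([kappa(x, xp) for kappa in elementary_kernels])
    # feed-forward, layer-wise evaluation of Eq. (1)
    for W, g in zip(weights, activations):
        k = g(W.T @ k)  # weighted combination followed by the activation
    return k  # the last layer usually holds a single output kernel unit

# illustrative usage: 4 elementary kernels, 8 hidden units, 1 output unit
rng = np.random.default_rng(0)
x, xp = rng.random(10), rng.random(10)
kernels = [lambda a, b: a @ b,                          # linear
           lambda a, b: (a @ b) ** 2,                   # polynomial (order 2)
           lambda a, b: np.exp(-np.sum((a - b) ** 2)),  # Gaussian
           lambda a, b: np.sum(np.minimum(a, b))]       # histogram intersection
W0, W1 = rng.random((4, 8)), rng.random((8, 1))         # nonnegative weights
print(dkn_kernel(x, xp, kernels, [W0, W1], [np.tanh, np.exp]))
```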

For a given classification task, the weights {w_{q,p}^{(l)}} are trained discriminatively [28, 29] using a max-margin SVM criterion which aims at minimizing a regularized hinge loss on top of the learned DKN. This results in an SVM optimization problem which is solved in its dual form by backpropagating the gradient of that form w.r.t. the output kernel using the chain rule [24]; the weights connecting layers in the DKN are then updated using gradient descent. Variants of this optimization criterion, leveraging both labeled and unlabeled data (following a semi-supervised and Laplacian setting), make it possible to train better DKNs, as detailed in [29].

3 Deep map networks

In this section, we introduce a novel method that finds, for any given DKN, its associated DMN; the proposed method proceeds layer-wise by finding explicit maps that best fit the original kernels in the DKN. As shown later in experiments, this process delivers highly efficient DMNs while being comparably accurate w.r.t. their underlying DKNs. Later, in Section 4, we introduce an extension that further enhances the approximation quality of our DMNs; starting from the initial weights of a DMN, we update these weights by minimizing the difference between the inner products of the maps in the DMN and the original kernels in the DKN. The strength of this extension also resides in its unsupervised setting, which makes it possible to learn from abundant unlabeled sets.
Considering all the elementary (input) kernels in the DKN as positive semi-definite, and resulting from the closure of the p.s.d w.r.t. different operations (including sum, product, exponential and hyperbolic tangent activation functions), all the intermediate and output kernels will also be p.s.d. Each kernel κ_p^{(l)} can therefore be written as an inner product of kernel maps as κ_p^{(l)}(x, x') = ⟨φ_p^{(l)}(x), φ_p^{(l)}(x')⟩, with φ_p^{(l)} being a mapping from the input space to a high-dimensional Hilbert space. As the form of φ_p^{(l)} is not necessarily explicit (known), our goal is to design an approximate mapping φ̂_p^{(l)} that guarantees ⟨φ̂_p^{(l)}(x), φ̂_p^{(l)}(x')⟩ ≈ κ_p^{(l)}(x, x'). When these approximate mappings through the different layers are known, the resulting DMN provides deep kernel representations of the input data.
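As a quick sanity check of these closure properties, the snippet below (illustrative NumPy code, with random vectors standing in for actual kernel maps) verifies that concatenating two maps reproduces the sum of the corresponding kernels and that their Kronecker product reproduces the product of the kernels.

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, phi1p = rng.random(5), rng.random(5)   # maps of x and x' for a kernel k1
phi2, phi2p = rng.random(3), rng.random(3)   # maps of x and x' for a kernel k2
k1, k2 = phi1 @ phi1p, phi2 @ phi2p

# sum of kernels <-> concatenation of maps (a nonnegative weighted sum would
# scale each map by the square root of its weight)
assert np.isclose(k1 + k2,
                  np.concatenate([phi1, phi2]) @ np.concatenate([phi1p, phi2p]))

# product of kernels <-> Kronecker tensor product of maps
assert np.isclose(k1 * k2, np.kron(phi1, phi2) @ np.kron(phi1p, phi2p))
```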

3.1 Input layer maps

In order to fully benefit from DMNs, the maps of the elementary kernels that feed these DMNs should be explicitly known. As discussed earlier, different kernels have different maps; for the linear and polynomial kernels, these maps are straightforward and can be easily defined. However, for other more powerful and discriminating kernels, such as the Gaussian and the histogram intersection (HI), the maps are either infinite dimensional or unknown. In this subsection, we give the definitions of exact and approximate explicit maps for different kernels (including the polynomial and HI).

Exact polynomial kernel map. An n-degree polynomial kernel defined as κ(x, x') = ⟨x, x'⟩^n can be expressed as ⟨x^{⊗n}, x'^{⊗n}⟩, with x^{⊗n} standing for the Kronecker tensor product of x with itself taken n times (i.e., x ⊗ … ⊗ x with n factors). Hence, it is easy to see that the exact explicit map for an n-degree polynomial kernel is φ(x) = x^{⊗n}.
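A minimal numerical check of this identity (NumPy; the vector dimension and the degree n below are arbitrary) is given here.

```python
import numpy as np
from functools import reduce

def poly_map(x, n):
    """Exact explicit map of the homogeneous n-degree polynomial kernel <x, x'>^n."""
    return reduce(np.kron, [x] * n)   # n-fold Kronecker product of x with itself

rng = np.random.default_rng(3)
x, xp, n = rng.random(6), rng.random(6), 3
assert np.isclose(poly_map(x, n) @ poly_map(xp, n), (x @ xp) ** n)
```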

Approximate HI kernel map. The approximate explicit map of HI can be obtained using vector quantization. Given two vectors x and x' of dimension d, the HI on (x, x') is defined as K_HI(x, x') = Σ_{k=1}^{d} min(x_k, x'_k) (with x_k being the value of dimension k of x). Considering a quantization level m, each dimension x_k of x is mapped to

    (1/√m) U(⌊m x_k⌋),        (2)

where ⌊·⌋ stands for the largest integer not greater than its argument, m is a predefined quantization level, and U is a "decimal-to-unary" map; for instance, 0 is mapped to (0, …, 0), 1 is mapped to (1, 0, …, 0), 2 to (1, 1, 0, …, 0), and so on. In the following, U(r) is written as a vector of m dimensions whose first r entries are set to 1 and whose remaining m − r entries are set to 0.

Proposition 1.

Given any x, x' in [0, 1]^d, for a sufficiently large m, the inner product ⟨φ̂(x), φ̂(x')⟩ approximates the histogram intersection kernel K_HI(x, x'), where

    φ̂(x) = (1/√m) ( U(⌊m x_1⌋)ᵀ, …, U(⌊m x_d⌋)ᵀ )ᵀ        (3)

is the approximate kernel map and ᵀ stands for the transpose operator.

Proof.

For all x, x' in [0, 1]^d,

    ⟨φ̂(x), φ̂(x')⟩ = (1/m) Σ_{k=1}^{d} ⟨U(⌊m x_k⌋), U(⌊m x'_k⌋)⟩ = (1/m) Σ_{k=1}^{d} min(⌊m x_k⌋, ⌊m x'_k⌋).        (4)

It is easy to see that m x_k − 1 < ⌊m x_k⌋ ≤ m x_k, for all k. By replacing in Eq. (4),

    K_HI(x, x') − d/m < ⟨φ̂(x), φ̂(x')⟩ ≤ K_HI(x, x'),        (5)

so, as m increases, ⟨φ̂(x), φ̂(x')⟩ → K_HI(x, x'). ∎
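The quantized "decimal-to-unary" construction above can be implemented in a few lines; the sketch below (NumPy, assuming histograms with entries in [0, 1] and illustrative quantization levels m) shows the approximation error shrinking as m grows, in line with Eq. (5).

```python
import numpy as np

def hi_map(x, m):
    """Approximate explicit map of the histogram intersection kernel (Eqs. 2-3).

    Each dimension x_k in [0, 1] is quantized to r_k = floor(m * x_k) and encoded
    as a unary vector whose first r_k entries are 1; the 1/sqrt(m) scaling makes
    inner products approximate min(x_k, x'_k).
    """
    r = np.floor(m * np.asarray(x)).astype(int)                  # quantization
    unary = (np.arange(m)[None, :] < r[:, None]).astype(float)   # decimal-to-unary
    return unary.ravel() / np.sqrt(m)

rng = np.random.default_rng(1)
x, xp = rng.random(20), rng.random(20)
k_hi = np.minimum(x, xp).sum()
for m in (10, 100, 1000):
    err = abs(hi_map(x, m) @ hi_map(xp, m) - k_hi)
    print(m, err)   # the error is bounded by d / m, in line with Eq. (5)
```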

Approximate Gaussian kernel map. As the exact explicit map of the Gaussian kernel is infinite dimensional, we consider instead an approximate explicit map of that kernel using eigen decomposition (ED), as shown in Eqs. (6) and (7) applied at the input layer (see Section 3.2). This ED is not restricted to the Gaussian kernel and can also be extended to other kernels whose exact explicit maps are difficult to obtain.

3.2 Intermediate/output layer maps

Given the explicit map of each elementary kernel at the input layer, our goal is to design the maps of the subsequent layers. Since the map of each layer depends on its preceding layers, this goal is achieved layer-wise using a greedy process. As intermediate/output kernels in the DKN are defined as linear combinations of kernels in the preceding (input or intermediate) layers followed by nonlinear activations, we mainly focus on how to approximate the maps of these activation functions in the DMN; in this section, we assume that the weights connecting the different layers are already known, resulting from the initial setting of the DKN (see again Section 2).

Proposition 2.

Let S = {z_1, …, z_n} be a subset of n samples, and let K_p^{(l+1)} be the n × n gram matrix whose entries κ_p^{(l+1)}(z_i, z_j) are defined on S. Let U_p^{(l+1)}, Λ_p^{(l+1)} be respectively the matrices of eigenvectors and eigenvalues obtained by solving

    K_p^{(l+1)} U_p^{(l+1)} = U_p^{(l+1)} Λ_p^{(l+1)}.        (6)

Considering ‖·‖ as the (matrix) Frobenius norm and K̂_p^{(l+1)} as the gram matrix associated to the approximate map φ̂_p^{(l+1)} on S (i.e., with entries ⟨φ̂_p^{(l+1)}(z_i), φ̂_p^{(l+1)}(z_j)⟩), with

    φ̂_p^{(l+1)}(x) = (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ ( g(⟨ψ_p^{(l+1)}(x), ψ_p^{(l+1)}(z_1)⟩), …, g(⟨ψ_p^{(l+1)}(x), ψ_p^{(l+1)}(z_n)⟩) )ᵀ        (7)

and

    ψ_p^{(l+1)}(x) = ( (w_{1,p}^{(l)})^{1/2} φ̂_1^{(l)}(x)ᵀ, …, (w_{n_l,p}^{(l)})^{1/2} φ̂_{n_l}^{(l)}(x)ᵀ )ᵀ        (8)

(with n_l being the number of units at layer l), then the following property is satisfied

    ‖K_p^{(l+1)} − K̂_p^{(l+1)}‖ = 0.        (9)
Proof.

Let's proceed layer-wise by induction; for l = 0 (and following Section 3.1), the initial kernel maps are designed to satisfy ⟨φ̂_q^{(0)}(z_i), φ̂_q^{(0)}(z_j)⟩ = κ_q^{(0)}(z_i, z_j) for all units q and all z_i, z_j in S (up to the tight approximation of the HI map).

Now, provided that ⟨φ̂_q^{(l)}(z_i), φ̂_q^{(l)}(z_j)⟩ = κ_q^{(l)}(z_i, z_j) holds at layer l, the property to show is ⟨φ̂_p^{(l+1)}(z_i), φ̂_p^{(l+1)}(z_j)⟩ = κ_p^{(l+1)}(z_i, z_j), for all p and all z_i, z_j in S. Following (8) we have

    ⟨ψ_p^{(l+1)}(z_i), ψ_p^{(l+1)}(z_j)⟩ = Σ_q w_{q,p}^{(l)} ⟨φ̂_q^{(l)}(z_i), φ̂_q^{(l)}(z_j)⟩ = Σ_q w_{q,p}^{(l)} κ_q^{(l)}(z_i, z_j),        (10)

where the second equality results from the hypothesis of induction; hence, following Eq. (1), g(⟨ψ_p^{(l+1)}(z_i), ψ_p^{(l+1)}(z_j)⟩) = κ_p^{(l+1)}(z_i, z_j). By plugging (10) into (7), we obtain

    φ̂_p^{(l+1)}(z_i) = (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ ( κ_p^{(l+1)}(z_i, z_1), …, κ_p^{(l+1)}(z_i, z_n) )ᵀ,        (11)

and equivalently ( φ̂_p^{(l+1)}(z_1), …, φ̂_p^{(l+1)}(z_n) ) = (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ K_p^{(l+1)}. Hence,

    K̂_p^{(l+1)} = K_p^{(l+1)} U_p^{(l+1)} (Λ_p^{(l+1)})^{-1} (U_p^{(l+1)})ᵀ K_p^{(l+1)} = K_p^{(l+1)},        (12)

which also results from Eq. (6) and the orthogonality of the eigenvectors in U_p^{(l+1)}. ∎

Note that for any samples x, x' taken out of S (but with a similar distribution as S), it is clear (as also observed in our experiments) that ⟨φ̂_p^{(l+1)}(x), φ̂_p^{(l+1)}(x')⟩ → κ_p^{(l+1)}(x, x') as |S| and the number of eigenvectors used in U_p^{(l+1)} increase.
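The construction of Proposition 2 amounts to a Nyström-style factorization of the layer-wise gram matrix. The sketch below (NumPy; a Gaussian kernel and a random anchor set play the roles of κ_p^{(l+1)} and S, which is our simplification of the full layer-wise setting) builds the map of Eq. (7) and illustrates both Eq. (9) on the anchors and the out-of-sample behaviour discussed above.

```python
import numpy as np

def gaussian(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def fit_layer_map(kappa, anchors, eps=1e-10):
    """Return a map phi(.) such that <phi(x), phi(x')> approximates kappa(x, x').

    Implements Eqs. (6)-(7): eigendecompose the gram matrix on the anchor set,
    then project kernel evaluations against the anchors with Lambda^{-1/2} U^T.
    """
    K = np.array([[kappa(zi, zj) for zj in anchors] for zi in anchors])
    lam, U = np.linalg.eigh(K)                        # Eq. (6)
    keep = lam > eps                                  # drop numerically null directions
    proj = np.diag(lam[keep] ** -0.5) @ U[:, keep].T  # Lambda^{-1/2} U^T
    return lambda x: proj @ np.array([kappa(x, z) for z in anchors])  # Eq. (7)

rng = np.random.default_rng(4)
anchors = rng.random((50, 8))
phi = fit_layer_map(gaussian, anchors)

# Eq. (9): exact (up to numerical precision) on the anchor set
i, j = 3, 7
print(abs(phi(anchors[i]) @ phi(anchors[j]) - gaussian(anchors[i], anchors[j])))

# approximate for out-of-sample points drawn from the same distribution
x, xp = rng.random(8), rng.random(8)
print(abs(phi(x) @ phi(xp) - gaussian(x, xp)))
```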

3.3 Network design

We incrementally expand each layer in the DKN into three sub-layers in the underlying DMN in order to design the map φ̂_p^{(l+1)}. The first sub-layer provides the products between the weights {(w_{q,p}^{(l)})^{1/2}} and the preceding maps {φ̂_q^{(l)}}, resulting in the intermediate map ψ_p^{(l+1)} as shown in Eq. (8). Afterwards, we feed this map to Eq. (7) in two steps: (i) in the second sub-layer, inner products are achieved between ψ_p^{(l+1)}(x) and the parameters {ψ_p^{(l+1)}(z_i)}_{i=1}^{n}, followed by the activations g (with g being the hyperbolic tangent, excepting the final layer in the DKN which uses the exponential); (ii) in the third sub-layer, the explicit map φ̂_p^{(l+1)}(x) is obtained as the product of these activations and the weights (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ. Fig. (1, right) shows these three sub-layers in the DMN. Similarly, all the subsequent layers in the DMN are designed by processing the DKN layer-wise. Note that, as the goal in this paper is to build approximate deep kernel maps for a given (fixed) deep kernel network, the weights {w_{q,p}^{(l)}} between the different layers remain fixed (as shown in Eq. (8)); however, they can also be jointly learned using gradient descent, but this is out of the main scope of this paper.
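A single DMN module can thus be sketched as follows (NumPy; the anchor maps, projection matrices and weights are placeholders, and the weights are assumed nonnegative so that the square roots of Eq. (8) are real).

```python
import numpy as np

def dmn_module(prev_maps, W, g, anchor_psi, proj):
    """One DKN layer expanded into the three DMN sub-layers of Section 3.3.

    prev_maps : list of maps of the preceding layer evaluated on x (1-D arrays).
    W         : nonnegative weights, W[q, p] connects unit q to unit p.
    g         : activation of the DKN layer (np.tanh or np.exp).
    anchor_psi: anchor_psi[p] stacks the intermediate maps of the anchors for
                unit p, shape (n_anchors, dim_psi_p); second sub-layer parameters.
    proj      : proj[p] is the matrix Lambda^{-1/2} U^T of unit p (third sub-layer).
    """
    out_maps = []
    for p in range(W.shape[1]):
        # first sub-layer: weighted concatenation of the preceding maps (Eq. 8)
        psi = np.concatenate([np.sqrt(W[q, p]) * prev_maps[q]
                              for q in range(W.shape[0])])
        # second sub-layer: inner products with the anchors, then the activation
        act = g(anchor_psi[p] @ psi)
        # third sub-layer: projection onto the eigen-basis (Eq. 7)
        out_maps.append(proj[p] @ act)
    return out_maps
```

Stacking such modules, with the input-layer maps of Section 3.1 feeding the first one, yields the complete DMN whose final inner products approximate the DKN output kernel.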

4 Enhancing DMN Parameters

So far, the design principle of our method (shown in Section 3.3 and Fig. 1) seeks to find explicit maps whose inner products approximate the original kernel values. This is achieved by expanding each layer in the DKN into three sub-layers in the DMN, with parameters fixed to {ψ_p^{(l+1)}(z_i)}_i and (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ. In spite of being efficient and also effective w.r.t. the DKN (see experiments), the resulting DMN can be further improved by re-training and fine-tuning these parameters, as shown subsequently.

The purpose of the proposed unsupervised algorithm is to further reduce the approximation error between the kernel values of the DKN and the inner products of the kernel maps of the DMN. Let S′ be a subset drawn from the same distribution as S and define P as a subset of sample pairs taken from S′ × S′. Our goal is to optimize the maps of the DMN using the following unsupervised criterion

    E = Σ_{(x, x') ∈ P} ( κ(x, x') − ⟨φ̂(x), φ̂(x')⟩ )²,        (13)

here κ(x, x') corresponds to the output kernel value obtained using the DKN and φ̂(x), φ̂(x') are the underlying (unknown) output kernel maps of the DMN; initially, only the input-layer maps are known (and kept fixed), according to the procedure shown in Section 3.1.

Considering the initial setting of the DMN parameters (i.e., {ψ_p^{(l+1)}(z_i)}_i and (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ), the learning process of this DMN relies on backpropagation [24]. The latter finds the best parameters by minimizing the objective function E in Eq. (13) following an "end-to-end" framework where the gradients of E are obtained using the chain rule; we first compute the gradients of the loss function w.r.t. the final kernel maps, then we backpropagate them through the DMN in order to obtain the gradients w.r.t. the parameters of the DMN, and finally we average them over the training pairs to obtain the descent direction and update the DMN parameters.

Starting from the derivative of E w.r.t. the final map φ̂(x),

    ∂E/∂φ̂(x) = −2 Σ_{x' : (x, x') ∈ P} ( κ(x, x') − ⟨φ̂(x), φ̂(x')⟩ ) φ̂(x'),        (14)

we obtain the gradients w.r.t. the different layers and units as shown in the following section.

4.1 Error backpropagation

As the construction of the DMN is achieved layer-wise (see again Section 3.3), we show below the backpropagation procedure for a single module (shown in Fig. 1, right). Given the derivatives of E w.r.t. the maps φ̂_p^{(l+1)} at layer l+1, we evaluate the derivatives w.r.t. the maps φ̂_q^{(l)} at layer l. The derivative w.r.t. φ̂_p^{(l+1)}(x) is first backpropagated through the third sub-layer in Eq. (7) by

(15)

here the subscript i stands for the i-th row of a matrix. Considering the second sub-layer, whose outputs are the activations g(⟨ψ_p^{(l+1)}(x), ψ_p^{(l+1)}(z_i)⟩), we obtain

(16)

where g′ is the derivative of the nonlinear activation function; for instance, g′ = 1 − g² for the hyperbolic tangent and g′ = g for the exponential. By accumulating the derivatives from each of these terms, we obtain

(17)

Finally, we get the derivatives w.r.t. the maps φ̂_q^{(l)} at layer l in Eq. (8) by

(18)

where each fragment of these derivatives corresponds to the kernel map of a given unit q at layer l in the DKN.

The gradients of the loss function w.r.t. the second- and third-sub-layer parameters (i.e., {ψ_p^{(l+1)}(z_i)}_i and (Λ_p^{(l+1)})^{-1/2} (U_p^{(l+1)})ᵀ) are then given as

(19)
(20)

Error backpropagation is achieved layer-wise from the final to the input layer; the increments of the second- and third-sub-layer parameters are obtained by Eqs. (19) and (20). Gradient descent with a fixed step-size (see experiments) is performed to update the parameters of the DMN. The whole learning procedure is shown in Algorithm 1.

As described earlier, an initial DMN is first set using the training subset S; then, sample pairs in P are randomly selected from S′ in order to further enhance the parameters of the new (fine-tuned) DMN. As a result, the fine-tuned DMN enables us to obtain a better approximation of the original DKN on large datasets while being highly efficient, as shown through the following experiments in image annotation.

Input: Fixed DKN weights {w_{q,p}^{(l)}},
A set of sample pairs P,
Kernel maps at the input layer,
Output kernel values of the DKN on P.
Initialization: second- and third-sub-layer parameters set as in Section 3.3, learning rate ν.
Output: Optimal (updated) second- and third-sub-layer parameters.
repeat
        for each pair (x, x') in P do
              Forward x and x' through the DMN to obtain their maps by Eqs. (8), (7);
               Compute the loss by Eq. (13);
               Compute the gradients w.r.t. the output maps by Eq. (14);
               for each layer, from the output back to the input do
                      Backpropagate the gradients by Eqs. (15)-(18);
                      Compute the parameter gradients by Eqs. (19) and (20);
                      Average the gradients obtained from x and from x';
                      Update the parameters by gradient descent with step ν;
               end for
        end for
until Convergence;
Algorithm 1 Unsupervised DMN learning algorithm
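In practice, the hand-derived gradients of Eqs. (14)-(20) can equivalently be obtained by automatic differentiation; the sketch below (PyTorch, restricted to a single module with illustrative parameter shapes, which is our simplification rather than the paper's exact implementation) minimizes the criterion of Eq. (13) over the second- and third-sub-layer parameters.

```python
import torch

def fine_tune(kappa_dkn, pairs, input_maps, A, B, g=torch.tanh, lr=1e-3, epochs=100):
    """Unsupervised fine-tuning of a single DMN module by minimizing Eq. (13).

    kappa_dkn : dict mapping an index pair (i, j) to the DKN output kernel value.
    pairs     : list of (i, j) index pairs playing the role of P.
    input_maps: tensor of precomputed input-layer maps, one row per sample.
    A, B      : second-sub-layer (anchor) and third-sub-layer (projection) weights.
    """
    A = A.detach().clone().requires_grad_(True)
    B = B.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([A, B], lr=lr)
    for _ in range(epochs):
        loss = 0.0
        for (i, j) in pairs:
            phi_i = B @ g(A @ input_maps[i])   # forward pass through the module
            phi_j = B @ g(A @ input_maps[j])
            loss = loss + (kappa_dkn[(i, j)] - phi_i @ phi_j) ** 2   # Eq. (13)
        opt.zero_grad()
        loss.backward()      # autograd stands in for Eqs. (14)-(20)
        opt.step()
    return A.detach(), B.detach()
```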

5 Experiments

In this section, we compare the performance of the proposed DMNs w.r.t. their underlying DKNs in three aspects: (i) discrimination power, (ii) relative approximation error between the DMN and the DKN, and (iii) efficiency. The targeted task is image annotation (e.g., [41, 44]); given a picture, the goal is to predict a list of keywords that best describes the visual content of that image. We consider two challenging and widely used benchmarks: ImageCLEF [40] and COREL5k [42] (see details below). For both sets, we learn highly competitive 3-layer DKNs using the setting in [29] and we plug these DKNs into SVMs in order to achieve image classification and annotation.

The discrimination power of the learned DMN and DKN networks is measured following the protocol defined by the challenge organizers and data providers (see [40] for ImageCLEF and [42] for COREL5k; see also extra details below). The relative approximation error (RE) of a given DMN w.r.t. its underlying DKN is measured (on a given evaluation set) as

(21)

In the remainder of this section, we show different evaluation measures (discrimination power, RE and efficiency) on the ImageCLEF and COREL5k benchmarks; note that efficiency was measured on a Mac OS machine with an Intel Core i5 processor.
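For completeness, a plausible implementation of this measure is sketched below (NumPy); the Frobenius-norm ratio used here is our assumption, as the exact normalization of Eq. (21) is not reproduced above.

```python
import numpy as np

def relative_error(K_dkn, dmn_maps):
    """Relative approximation error of a DMN w.r.t. its underlying DKN.

    K_dkn   : (n, n) gram matrix of the DKN output kernel on the evaluation set.
    dmn_maps: (n, d) matrix whose rows are the final DMN maps of the same samples.
    Assumed form: ||K_dkn - K_dmn||_F / ||K_dkn||_F, expressed as a percentage.
    """
    K_dmn = dmn_maps @ dmn_maps.T
    return 100.0 * np.linalg.norm(K_dkn - K_dmn) / np.linalg.norm(K_dkn)
```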

5.1 ImageCLEF benchmark

The ImageCLEF Photo Annotation benchmark [40] includes more than 250k (training, dev and test) images belonging to 95 different concepts. As ground truth is available (released) only on the dev set (with 1,000 images), we learn the DKNs and SVMs [29] using only the dev set; the latter is split into two subsets: the first one is used for DKN+SVM training and the other one for SVM testing. Given a concept and a test image, the decision about whether that concept is present in that test image depends on the score of a classifier; the latter corresponds to a "one-versus-all" SVM that returns a positive score if the concept is present in the test image and a negative score otherwise. The discrimination power of the DKN and the DMN (when combined with SVMs) is evaluated using the F-measure (defined as the harmonic mean of recall and precision), both at the concept and the image levels (resp. denoted MF-C and MF-S), as well as the Mean Average Precision (MAP) [40]; higher values of these measures imply better performances.

In order to feed the inputs of the DKN, we consider a combination of 10 visual features (provided by the ImageCLEF challenge organizers) and 4 elementary kernels (i.e., linear, second-order polynomial, Gaussian with a scale hyper-parameter set to the average Euclidean distance between data samples and their neighbors, and histogram intersection), and we train a three-layer DKN with 40 input and 80 hidden units in a supervised way following the scheme in [29]; the only difference w.r.t. [29] resides in the hyperbolic tangent activation function, which is used to provide better numerical stability and convergence when training the DKN.

Initial DMNs. Assuming the weights of the three-layer DKN known, we build its equivalent DMN (referred to as the initial DMN) as shown in Section 3. In these experiments, we consider two random samplings of the subset S from the dev set, with |S| = 500 and |S| = 1000, in order to build the initial DMN (see Section 3 and Eqs. (8), (7)). According to Table I, we observe that the performance of the initial DMN with |S| = 500 slightly degrades compared to its underlying DKN; indeed, MF-S and MF-C decrease by 1.3 and 2.6 pts respectively while MAP decreases by 6.0 pts. With |S| = 1000, the performance of the initial DMN clearly improves compared to the one with |S| = 500; we obtain a slight gain in MF-S and comparable performance in MF-C. We also provide a comparison of the discrimination power of the initial DMN against a shallow DKN (i.e., a two-layer DKN) trained in a supervised setting; Table I clearly shows the superiority of the initial DMN (when |S| = 1000). The relative approximation errors (RE) of the two initial DMNs (i.e., with |S| = 500 and |S| = 1000) are also shown in Table II; we evaluate these REs on sets whose cardinality ranges from 2,000 to 10,000 samples. From these results, we observe that the REs are comparably low on small sets; indeed, on 2,000 samples, the obtained REs are equal to 0.94% when |S| = 500 and 0.95% when |S| = 1000. Higher REs are obtained on larger sets, and this clearly motivates the importance of fine-tuning in order to make the REs (and thereby the performances) of the learned DMN stable (and close to those of the underlying DKN).

Framework MF-S MF-C MAP
2-layer DKN 44.96 25.77 53.95
3-layer DKN 46.23 30.00 55.73
Initial DMN  () 44.92 27.39 49.75
Fine-tuned DMN () 45.05 27.51 49.80
Fine-tuned DMN () 44.94 27.40 49.80
Fine-tuned DMN () 45.06 27.44 49.79
Initial DMN () 47.73 29.40 53.15
Fine-tuned DMN () 47.79 29.68 52.89
Fine-tuned DMN () 47.95 29.80 53.32
Fine-tuned DMN () 47.70 29.30 53.33
TABLE I: The discrimination power (in %) of different DMNs w.r.t the underlying DKN; in these experiments, two initial DMNs are designed using 500 and 1000 samples.
Configuration 2K 3K 4K 5K 6K 7K 8K 9K 10K
Initial DMN () - 0.94 1.25 1.41 1.51 1.58 1.62 1.66 1.69 1.71
Fine-tuned DMN 500 0.89 1.19 1.35 1.45 1.52 1.57 1.60 1.63 1.65
1000 0.89 1.20 1.36 1.46 1.53 1.58 1.61 1.64 1.66
2000 0.42 0.46 0.50 0.52 0.54 0.56 0.57 0.58 0.59
3000 0.52 0.47 0.47 0.47 0.47 0.47 0.48 0.48 0.48
4000 0.60 0.51 0.49 0.47 0.47 0.46 0.46 0.46 0.46
Initial DMN () - 0.95 1.27 1.44 1.54 1.62 1.67 1.70 1.74 1.76
Fine-tuned DMN 1000 0.89 1.21 1.38 1.48 1.55 1.60 1.64 1.67 1.69
2000 0.37 0.41 0.44 0.46 0.48 0.49 0.50 0.51 0.52
3000 0.46 0.43 0.43 0.43 0.44 0.44 0.44 0.45 0.45
4000 0.54 0.48 0.46 0.45 0.45 0.44 0.44 0.44 0.44
TABLE II: Relative errors of initial and fine-tuned DMNs w.r.t. the DKN for different dataset cardinalities (ranging from 2K to 10K) and when two different initializations are employed.
Fig. 2: This figure shows the evolution of the loss criterion in Eq. (13) as the learning iterates.
Fig. 3: This figure shows a comparison of processing time between two different DMNs (built with |S| = 500 and |S| = 1000) and their underlying DKN as the number of samples increases, on the ImageCLEF dataset.
Framework                         50K          100K
3-layer DKN              Time     40.4 hrs     160.3 hrs
Fine-tuned DMN (|S|=500) Time     1.1 hrs      2.4 hrs
                         RE       0.46%        0.46%
Fine-tuned DMN (|S|=1000) Time    1.3 hrs      2.8 hrs
                         RE       0.45%        0.45%
TABLE III: This table shows a comparison of processing time and relative errors between the DKN and the fine-tuned DMNs on 50K and 100K images of ImageCLEF. "hrs" stands for "hours".

Fine-tuned DMNs. In order to fine-tune the parameters of the DMN, we use the learning procedure presented in Section 4. We consider an unlabeled set S′ (with |S′| ranging from 1,000 to 4,000) and we sample 100,000 pairs from S′ in order to minimize criterion (13) using gradient descent, with an empirically set step-size, a mini-batch size equal to 200 and a maximum number of iterations set to 5,000 (see Fig. 2).
As shown in Table I, we observe that the discrimination power of the different DMNs remains stable (with a slight gain in MF-S when |S| = 1000) w.r.t. their underlying DKNs, and this naturally follows the noticeably small REs of the fine-tuned DMNs (see Table II). The latter are further positively impacted when the fine-tuning set S′ becomes larger; for instance, when increasing |S′| from 1,000 to 4,000, the RE decreases significantly. Moreover, and in contrast to the initial DMNs, the fine-tuned DMNs are less sensitive to the cardinality of the evaluation set, as shown through the observed REs which remain stable as this cardinality increases.

Finally, we measure the gain in efficiency obtained with DMNs against DKNs. From Fig. 3, we observe that the DMN is (at least) an order of magnitude faster compared to its DKN; for instance, with 10,000 samples, the DKN requires more than 15,000 seconds in order to compute the kernel values while the DMN requires less than 1,000 seconds. Table III also provides a comparison of efficiency and RE on much larger sets (resp. 50K and 100K images) randomly sampled from the (unlabeled) training set of ImageCLEF; a significant improvement in efficiency is observed. In other words, the complexity of evaluating DMNs is linear while that of DKNs is quadratic. These results clearly corroborate the fact that the proposed DMNs are as effective as DKNs while being highly efficient, especially on large-scale datasets.

5.2 COREL5k benchmark

The COREL5k database introduced in [42] is another benchmark which is widely used for image annotation. In this database, 4,999 images are collected and a vocabulary of 200 keywords is used for annotation. This set is split into two parts; the first one includes 4,500 images for training and the second one 499 images for testing. As for ImageCLEF, the task is again to assign a list of keywords for each image in the test set.
Each image in COREL5k is described using 15 types of INRIA features [45] including: GIST features, 6 color histograms for RGB, HSV, LAB in two spatial layouts, 8 bags-of-features based on SIFT and robust hue descriptors in two spatial layouts. Following the standard protocol defined on COREL5k [42], each test image is annotated with up to 5 keywords and performances (discrimination power of image classification/annotation) are measured by the mean precision and recall over keywords (referred to as P and R respectively) as well as the number of keywords with non-zero recall value (denoted N+); again, higher values of these measures imply better performances.

Framework P R N+
3-layer DKN 37.65 25.49 158
Initial DMN () 31.30 18.67 155
Fine-tuned DMN () 31.34 18.54 155
Fine-tuned DMN () 31.62 18.43 153
Fine-tuned DMN () 31.18 19.04 155
Fine-tuned DMN () 31.65 19.13 157
Initial DMN () 32.31 19.39 155
Fine-tuned DMN () 32.57 19.82 157
Fine-tuned DMN () 33.05 20.88 159
Fine-tuned DMN () 33.08 20.40 158
Fine-tuned DMN () 33.30 20.18 158
TABLE IV: The discrimination power of different DMNs w.r.t the underlying DKN on COREL5k; in these experiments, two initial DMNs are designed using 500 and 700 samples.
Framework 2K 3K 4K 4999
Initial DMN - 2.45 2.41 2.35 2.26
Fine-tuned DMN 500 1.22 1.28 1.32 1.37
1000 1.23 1.35 1.40 1.42
2000 1.12 1.15 1.19 1.19
3000 1.14 1.12 1.13 1.12
4000 1.18 1.14 1.11 1.10
4999 1.18 1.14 1.12 1.10
Initial DMN - 2.43 2.39 2.33 2.24
Fine-tuned DMN 700 1.30 1.42 1.48 1.51
1000 1.22 1.35 1.42 1.44
2000 1.09 1.13 1.17 1.18
3000 1.11 1.10 1.11 1.11
4000 1.16 1.12 1.09 1.08
4999 1.16 1.12 1.10 1.08
TABLE V: Relative errors of initial and fine-tuned DMNs (w.r.t. the underlying DKN) on COREL5k as the evaluation set grows (with cardinalities ranging from 2K to 4,999).
Fig. 4: Comparison of processing time between two approximate DMNs (built with |S| = 500 and |S| = 700) and their underlying DKN as the number of samples increases, on the COREL5k dataset.
Method Learned input feat. Context P R N+
wTKML [46] no yes 42 21 173
LDMKL [47] no yes 44 29 179
CNN-R [48] yes yes 41.3 32.0 166
3-layer DKN+SVM [49] no no 37.7 25.5 158
Init. DMN+SVM () no no 32.3 19.3 155
FT DMN+SVM () no no 33.1 20.9 159
Init. DMN+SVM () no no 34.0 20.9 162
FT DMN+SVM () no no 34.7 21.0 168
ResNet[50] + SVM yes no 34.5 21.8 161
3-layer DKN+SVM [49] yes no 42.6 24.9 180
Init. DMN+SVM () yes no 36.1 21.7 166
FT DMN+SVM () yes no 36.8 22.4 165
Init. DMN+SVM () yes no 37.4 21.6 162
FT DMN+SVM () yes no 37.7 22.3 164
Init. DMN+SVM () yes no 37.8 23.2 167
FT DMN+SVM () yes no 38.9 23.2 169
TABLE VI: Extra comparison of the proposed DMN w.r.t. different settings as well as the related work. In these experiments, different input features and different values of |S| are used. In this table, FT stands for Fine-Tuned.

As in ImageCLEF (see Section 5.1), we use 4 elementary kernels for each feature: linear, second-order polynomial, RBF (with a scale parameter set to the average distance between data) and histogram intersection; in total, we use 60 different elementary kernels as inputs to the 3-layer DKN. We also use the same DKN architecture on COREL5k, with a slight difference in the number of units in the hidden layer (equal to 120 instead of 80 in ImageCLEF). Again, the weights of the DKN are learned using the semi-supervised learning procedure presented in [29], where the similarity between images is computed by the heat kernel (with a width set to the mean distance between neighbors). An ensemble of "one-versus-all" SVM classifiers is trained on top of the DKN for each category, and the average decision score over these classifiers is taken as the final score for a given category. In order to mitigate the severely imbalanced class distributions in SVM training, we adopt a sampling strategy that randomly selects a subset of negative samples whose cardinality is equal to the number of positive training samples. Hence, each classifier is learned using all the positive data and a random subset of the negative data. The discrimination power of the learned DKNs+SVMs is shown in Table IV.
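The class-balancing strategy can be sketched as follows (Python with scikit-learn's LinearSVC on precomputed DMN features; the estimator and hyper-parameters are our assumptions, and a single classifier per keyword is trained for brevity whereas the paper averages an ensemble of classifiers).

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_all(features, labels, n_keywords, seed=0):
    """Train one balanced binary SVM per keyword on top of DMN features.

    features: (n_samples, d) explicit DMN maps of the training images.
    labels  : (n_samples, n_keywords) binary ground-truth matrix.
    Each classifier sees all the positives and an equally sized random negative subset.
    """
    rng = np.random.default_rng(seed)
    classifiers = []
    for k in range(n_keywords):
        pos = np.flatnonzero(labels[:, k] == 1)
        neg = np.flatnonzero(labels[:, k] == 0)
        neg = rng.choice(neg, size=min(len(pos), len(neg)), replace=False)
        idx = np.concatenate([pos, neg])
        classifiers.append(LinearSVC(C=1.0).fit(features[idx], labels[idx, k]))
    return classifiers

def annotate(classifiers, feature, top_k=5):
    """Return the indices of the top_k keywords with the highest SVM scores."""
    scores = np.array([clf.decision_function(feature[None, :])[0]
                       for clf in classifiers])
    return np.argsort(scores)[::-1][:top_k]
```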

Initial and fine-tuned DMNs. Assuming the weights of the DKN known (learned), we build the initial DMN as shown in Section 3. We consider two random samplings of the subset S from the training set, with |S| = 500 and |S| = 700, in order to build the initial DMN. We also use the learning procedure presented in Section 4 in order to fine-tune the parameters of the DMN. We consider an unlabeled set S′ which includes up to 4,999 samples (i.e., the whole COREL5k set); again, we sample 100,000 pairs in order to minimize the criterion in Eq. (13) using gradient descent, with an empirically set step-size, a mini-batch size equal to 200 and a maximum number of iterations set to 5,000.

According to Table IV, we observe that the performances of the initial DMNs (with |S| = 500 and |S| = 700) again degrade compared to their underlying DKNs, as a result of the high RE of these DMNs. This degradation in performance is also amplified by the scarceness of training data for SVM learning in COREL5k (in contrast to ImageCLEF), especially when the RE is relatively large (see Table V). However, the discrimination power is improved when more data are used to design these DMNs (see also Table VI). Furthermore, fine-tuning the DMNs reduces the RE as the size of the fine-tuning set increases, and makes the RE stable even on relatively large evaluation sets, so the RE (on COREL5k) behaves similarly compared to ImageCLEF. Finally, Fig. 4 shows a comparison of processing time between the DMN and the DKN. It is easy to see that when the number of samples is small, the processing times of the DKN and the DMN are comparable. However, when this number reaches large values, the DMN becomes an order of magnitude faster than its underlying DKN while maintaining a comparable accuracy.

Extra comparisons. We further compare the performance of DMNs against closely related kernel-based methods (namely wTKML [46] and LDMKL [47]) as well as convolutional neural networks (mainly CNN-R [48]). wTKML [46] learns explicit and transductive kernel maps using a priori knowledge taken from the semantic and geometric (statistical) dependencies between classes while LDMKL [47] combines Laplacian SVM with deep kernel networks using an “end-to-end” framework. CNN-R [48] combines deep features from Caffe-Net with word embedding features from Word2Vec; as introduced in the literature, these related methods leverage different sources of contexts and a priori knowledge while our method does not.
In our experiments (see Table VI), we use four elementary kernels (linear, polynomial, RBF and HI) combined with different features as inputs to the designed DKN and DMN networks: "handcrafted features", including GIST and SIFT, and "learned features" taken from ResNet [50] (pretrained on ImageNet), which is a very deep architecture consisting of 152 layers; the 2048-dimensional features of the last pooling layer are used in our annotation task. Using all these elementary kernels and features, we first train a DKN in a supervised way according to [49], then we design and fine-tune its associated DMNs with different cardinalities |S| (as done in Table IV).
From the results shown in Table VI, we first observe that the use of ResNet features as inputs to our DMN framework provides a clear gain compared to the use of handcrafted features. Second, fine-tuning the DMNs brings a clear gain compared to the initial DMNs as well as to ResNet. Our DKN (and its DMN variant) can even catch up with (and sometimes outperform) the aforementioned related work, which again relies on different contextual clues, in contrast to our method. We believe that considering context will further enhance the performance of DKNs and their associated DMNs, but this is out of the main scope of this paper and will be investigated in future work.
Finally, Fig. 5 shows examples of annotation results, on the test sets, obtained using the learned DMNs and the underlying DKNs on the ImageCLEF and COREL5k datasets. From these figures, DMNs behave similarly w.r.t. DKNs, with the extra advantage of being computationally more efficient, especially on COREL5k (as shown in Table VII): whereas the computational complexity of DKN evaluation scales linearly w.r.t. the number of support vectors (which is an order of magnitude larger on COREL5k than on ImageCLEF: 4,500 versus 500), the computational complexity of DMN evaluation grows slowly and remains globally stable w.r.t. the number of support vectors. These results are also consistent with those already shown in Fig. 3 and Fig. 4.

Dataset Framework time (in sec)
ImageCLEF DKN 0.68
Fine-tuned DMN () 0.57
Fine-tuned DMN () 0.95
COREL5k DKN 10.39
Fine-tuned DMN () 1.22
Fine-tuned DMN () 1.58
Fine-tuned DMN () 2.51
Fine-tuned DMN () 3.67
TABLE VII: Comparison of the average processing time per test image (excluding feature extraction) on ImageCLEF and COREL5k datasets.
Fig. 5: Examples of annotation results using DKNs and their "fine-tuned" DMN variants on ImageCLEF (top) and COREL5k (bottom). "GT" stands for ground-truth keywords and the symbol "*" stands for the presence of a keyword in a given test image.

6 Conclusion

In this paper we introduced a novel method that transforms deep kernel networks into highly efficient deep map networks. The proposed method is greedy and proceeds layer-wise by expressing p.s.d kernels in different (input, intermediate, and output) layers of DKN as inner products involving explicit maps. These explicit maps are either exactly designed for some input kernels (including linear and polynomial) or tightly approximated for others (including intermediate and output kernels in DKN). We also introduced an unsupervised fine-tuning algorithm that benefits from large unlabeled sets in order to further enhance the generalization capacity of DMNs. Extensive experiments in image annotation, using the challenging ImageCLEF and COREL5k benchmarks, clearly demonstrate the effectiveness of DMNs and their high efficiency.

Acknowledgment

This work was supported in part by a grant from the research agency ANR (Agence Nationale de la Recherche) under the MLVIS project, ANR-11-BS02-0017.

References

  • [1] B. Caputo, C. Wallraven, and M.-E. Nilsback, “Object categorization via local kernels,” in ICPR, 2004.
  • [2] S. Lyu, “Mercer kernels for object recognition with local features,” in CVPR, 2005.
  • [3] K. Grauman and T. Darrell, “The pyramid match kernel: Efficient learning with sets of features,” JMLR, vol. 8, pp. 725–760, 2007.
  • [4] X. Qi and Y. Han, “Incorporating multiple svms for automatic image annotation,” IEEE Transactions on Knowledge and Data Engineering, vol. 40, 2007.
  • [5] H. Sahbi and N. Boujemaa, “Coarse-to-fine support vector classifiers for face detection,” in Pattern Recognition, 2002. Proceedings. 16th International Conference on, vol. 3.   IEEE, 2002, pp. 359–362.
  • [6] H. Sahbi, “Coarse-to-fine support vector machines for hierarchical face detection,” Ph.D. dissertation, PhD thesis, Versailles University, 2003.
  • [7] K. Q. Weinberger, F. Sha, and L. K. Saul, “Learning a kernel matrix for nonlinear dimensionality reduction,” in ICML, 2014.
  • [8] V. Vapnik, “Statistical learning theory,” Wiley, New York, 1998.
  • [9] J. Shawe-Taylor and N. Cristianini, “Kernel methods for pattern analysis,” Cambridge University Press, 2004.
  • [10] H. Sahbi, J.-Y. Audibert, J. Rabarisoa, and R. Keriven, “Context-dependent kernel design for object matching and recognition,” in Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on.   IEEE, 2008, pp. 1–8.
  • [11] L. Wang and H. Sahbi, “Directed acyclic graph kernels for action recognition,” in Computer Vision (ICCV), 2013 IEEE International Conference on.   IEEE, 2013, pp. 3168–3175.
  • [12] G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan, “Learning the kernel matrix with semi-definite programming,” JMLR, vol. 5, pp. 27–72, 2004.
  • [13] K. Yu, W. Xu, and Y. Gong, “Deep learning with kernel regularization for visual recognition,” in NIPS, 2009, pp. 1889–1896.
  • [14] C. Corinna, M. Mehryar, and R. Afshin, “Two-stage learning kernel algorithms,” in ICML, 2010.
  • [15] H. Sahbi, J.-Y. Audibert, and R. Keriven, “Context-dependent kernels for object classification,” PAMI, vol. 33, pp. 699–708, 2011.
  • [16] H. Sahbi and X. Li, “Context-based support vector machines for interconnected image annotation,” in ACCV, 2011, pp. 214–227.
  • [17] S. Tollari, P. Mulhem, M. Ferecatu, H. Glotin, M. Detyniecki, P. Gallinari, H. Sahbi, and Z.-Q. Zhao, “A comparative study of diversity methods for hybrid text and image retrieval approaches,” in Workshop of the Cross-Language Evaluation Forum for European Languages.   Springer, 2008, pp. 585–592.
  • [18] N. Boujemaa, F. Fleuret, V. Gouet, and H. Sahbi, “Visual content extraction for automatic semantic annotation of video news,” in the proceedings of the SPIE Conference, San Jose, CA, vol. 6, 2004.
  • [19] F. Bach, G. Lanckriet, and M. Jordan, “Multiple kernel learning, conic duality, and the smo algorithm,” in ICML, 2004.
  • [20] A. Rakotomamonjy, F. Bach, C. S., and G. Yves, “Simplemkl,” JMLR, vol. 9, pp. 2491–2521, 2008.
  • [21] S. Sonnenburg, G. Rätsch, C. Schafer, and B. Schölkopf, “Large scale multiple kernel learning,” JMLR, vol. 7, pp. 1531–1565, 2006.
  • [22] F. Bach, “Exploring large feature spaces with hierarchical multiple kernel learning,” in NIPS, 2009, pp. 1–9.
  • [23] C. Cortes, M. Mohri, and A. Rostamizadeh, “Learning non-linear combinations of kernels,” in NIPS, 2009, pp. 1–9.
  • [24] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012.
  • [26] C. Farabet, C. Couprie, L. Najman, and Y. LeCun, “Learning hierarchical features for scene labeling,” PAMI, vol. 35, no. 8, pp. 1915–1929, 2013.
  • [27] Y. Cho and L. Saul, “Kernel methods for deep learning,” in NIPS, 2009, pp. 1–9.
  • [28] J. Zhuang, I. Tsang, and S. Hoi, “Two-layer multiple kernel learning,” in ICML, 2011, pp. 909–917.
  • [29] M. Jiu and H. Sahbi, “Semi supervised deep kernel design for image annotation,” in ICASSP, 2015.
  • [30] C. Williams and M. Seeger, “Using the nyström method to speed up kernel machines,” in NIPS, 2001.
  • [31] P. Drineas and M. W. Mahoney, “On the nystrom method for approximating a gram matrix for improved kernel-based learning,” J. Mach. Learn. Res., vol. 6, pp. 2153–2175, Dec. 2005.
  • [32] S. Kumar, M. Mohri, and A. Talwalkar, “Sampling methods for the nyström method,” J. Mach. Learn. Res., vol. 13, no. 1, pp. 981–1006, Apr. 2012.
  • [33] A. Rahimi and B. Recht, “Random features for large-scale kernel machines,” in NIPS, 2007.
  • [34] F. Li, C. Ionescu, and C. Sminchisescu, “Random fourier approximations for skewed multiplicative histogram kernels,” in DAGM conference Pattern Recognition, 2010.
  • [35] A. Vedaldi and A. Zisserman, “Efficient additive kernels via explicit feature maps,” IEEE Transactions on PAMI, vol. 34, 2012.
  • [36] L. Deng, M. Hasegawa-Johnson, and X. He, “Random features for kernel deep convex network,” in ICASSP, 2013, pp. 3143–3147.
  • [37] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid, “Convolutional kernel networks,” in NIPS, 2014.
  • [38] M. Jiu and H. Sahbi, “Deep kernel map networks for image annotation,” in ICASSP, 2016.
  • [39] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, “Liblinear: A library for large linear classification,” JMLR, vol. 9, pp. 1871–1874, 2008.
  • [40] M. Villegas, R. Paredes, and B. Thomee, “Overview of the imageclef 2013 scalable concept image annotation subtask,” in CLEF 2013 Evaluation Labs and Workshop, 2013.
  • [41] H. Sahbi, “Cnrs-telecom paristech at imageclef 2013 scalable concept image annotation task: Winning annotations with context dependent svms.” in CLEF (Working Notes), 2013.
  • [42] P. Duygulu, K. Barnard, N. de Freitas, and D. Forsyth, “Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary,” in ECCV, 2002.
  • [43] H. Sahbi, “Imageclef annotation with explicit context-aware kernel maps,” International Journal of Multimedia Information Retrieval, vol. 4, pp. 113–128, 2015.
  • [44] X. Li and H. Sahbi, “Superpixel-based object class segmentation using conditional random fields,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on.   IEEE, 2011, pp. 1101–1104.
  • [45] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid, “Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation,” in ICCV, 2009, pp. 316–329.
  • [46] P. Vo and H. Sahbi, “Transductive kernel map learning and its application to image annotation,” in BMVC, 2012, pp. 1–12.
  • [47] D. Zhang, M. Islam, and G. Lu, “A review on automatic image annotation techniques,” Pattern Recognition, vol. 45, 2012.
  • [48] V. N. Murthy, S. Maji, and R. Manmatha, “Automatic image annotation using deep learning representations,” in International Conference on Multimedia Retrieval, 2015, p. 603–606.
  • [49] M. Jiu and H. Sahbi, “Nonlinear deep kernel learning for image annotation,” IEEE Transactions on Image Processing, vol. 26(4), 2017.
  • [50] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016, p. 770–778.