Deep Kernelized Autoencoders
In this paper we introduce the deep kernelized autoencoder, a neural network model that allows an explicit approximation of (i) the mapping from an input space to an arbitrary, user-specified kernel space and (ii) the back-projection from such a kernel space to the input space. The proposed method is based on traditional autoencoders and is trained through a new unsupervised loss function. During training, we optimize both the reconstruction accuracy of input samples and the alignment between a kernel matrix given as prior and the inner products of the hidden representations computed by the autoencoder. Kernel alignment provides control over the hidden representation learned by the autoencoder. Experiments have been performed to evaluate both reconstruction and kernel alignment performance. Additionally, we applied our method to emulate kPCA on a denoising task, obtaining promising results.
Keywords: Autoencoders; Kernel methods; Deep learning; Representation learning.
Autoencoders (AEs) are a class of neural networks that have gained increasing interest in recent years [25, 18, 23]. AEs are used for unsupervised learning of effective hidden representations of input data [11, 3]. These representations should capture the information contained in the input data, while providing meaningful features for tasks such as clustering and classification. However, what constitutes an effective representation is highly dependent on the target task.
In standard AEs, representations are derived by training the network to reconstruct inputs through either a bottleneck layer, thereby forcing the network to learn how to compress input information, or through an over-complete representation. In the latter case, regularization methods are employed to, e.g., enforce sparse representations, make representations robust to noise, or penalize sensitivity of the representation to small changes in the input. However, regularization provides limited control over the nature of the hidden representation.
In this paper, we hypothesize that an effective hidden representation should capture the relations among inputs, which are encoded in the form of a kernel matrix. Such a matrix is used as a prior to be reproduced by inner products of the hidden representations learned by the AE. Hence, in addition to minimizing the reconstruction loss, we also minimize the normalized Frobenius distance between the prior kernel matrix and the inner product matrix of the hidden representations. We note that this process resembles the kernel alignment procedure.
The proposed model, called deep kernelized autoencoder, is related to recent attempts to bridge the performance gap between kernel methods and neural networks [27, 5]. Specifically, it is connected to works on interpreting neural networks from a kernel perspective and to the Information Theoretic-Learning Auto-Encoder, which imposes a prior distribution over the hidden representation in a variational autoencoder.
In addition to providing control over the hidden representation, our method also has several benefits that compensate for important drawbacks of traditional kernel methods. During training, we learn an explicit approximate mapping function from the input to a kernel space, as well as the associated back-mapping to the input space, through an end-to-end learning procedure. Once the mapping is learned, it can be used to relate operations performed in the approximated kernel space, for example by linear methods (as is the case in kernel methods), to the input space. In the case of linear methods, this is equivalent to performing non-linear operations on the non-transformed data. Mini-batch training is used in our proposed method in order to lower the computational complexity inherent to traditional kernel methods and, especially, spectral methods [24, 4, 15]. Additionally, our method applies to arbitrary kernel functions, even those computed through ensemble methods. To stress this fact, we consider in our experiments the probabilistic cluster kernel, a kernel function that is robust with regard to hyperparameter choices and has been shown to often outperform counterparts such as the RBF kernel.
2.1 Autoencoders and stacked autoencoders
AEs simultaneously learn two functions. The first one, the encoder, provides a mapping from an input domain, $\mathcal{X}$, to a code domain, $\mathcal{C}$, i.e., the hidden representation. The second function, the decoder, maps from $\mathcal{C}$ back to $\mathcal{X}$. For a single hidden layer AE, the encoding function and the decoding function are defined as

$h = \sigma(W_E x + b_E), \qquad \tilde{x} = \sigma(W_D h + b_D), \quad (1)$

where $\sigma(\cdot)$ denotes a suitable transfer function (e.g., a sigmoid applied component-wise); $x$, $h$, and $\tilde{x}$ denote, respectively, a sample from the input space, its hidden representation, and its reconstruction; finally, $W_E$ and $W_D$ are the weights and $b_E$ and $b_D$ the biases of the encoder and decoder, respectively. For the sake of readability, we implicitly incorporate the biases in the weight notation. Accordingly, we can rewrite Eq. 1 as

$h = \sigma(W_E x), \qquad \tilde{x} = \sigma(W_D h). \quad (2)$
In order to minimize the discrepancy between the original data and its reconstruction, the parameters in Eq. 1 are typically learned by minimizing, usually through stochastic gradient descent (SGD), a reconstruction loss

$L_r(x, \tilde{x}) = \frac{1}{N} \sum_{i=1}^{N} \| x_i - \tilde{x}_i \|_2^2. \quad (3)$
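As a concrete illustration, the encoder, decoder, and reconstruction loss above can be sketched in a few lines of numpy. This is a sketch only: the weights are random stand-ins for trained parameters and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, c, N = 8, 3, 32                          # input dim, code dim, number of samples
W_E = rng.normal(scale=0.1, size=(c, d))    # encoder weights (random stand-in)
W_D = rng.normal(scale=0.1, size=(d, c))    # decoder weights (random stand-in)

X = rng.normal(size=(N, d))                 # a batch of input samples

# Encoder and decoder of Eq. 1 (biases folded into the weight notation)
H = sigmoid(X @ W_E.T)                      # codes h, shape (N, c)
X_rec = sigmoid(H @ W_D.T)                  # reconstructions, shape (N, d)

# Reconstruction loss of Eq. 3: mean squared error over the batch
L_r = np.mean(np.sum((X - X_rec) ** 2, axis=1))
```

In practice $W_E$ and $W_D$ would be updated by SGD on $L_r$; the sketch only evaluates the forward pass and the loss.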
Differently from Eq. 1, a stacked autoencoder (sAE) consists of several hidden layers. Deep architectures are capable of learning complex representations by transforming input data through multiple layers of nonlinear processing. The optimization of the weights is harder in this case and pretraining is beneficial, as it is often easier to learn intermediate representations than to train the whole architecture end-to-end. A very important application of pretrained sAEs is the initialization of layers in deep neural networks. Pretraining is performed in different phases, each of which consists of training a single AE. After the first AE has been trained, its encoding function is applied to the input and the resulting representation is used to train the next AE in the stacked architecture. Each layer, being trained independently, aims at capturing more abstract features by trying to reconstruct the representation of the previous layer. Once all individual AEs are trained, they are unfolded, yielding a pretrained sAE. For a two-layer sAE, the encoding function consists of $h = \sigma(W_E^{(2)} \sigma(W_E^{(1)} x))$, while the decoder reads $\tilde{x} = \sigma(W_D^{(1)} \sigma(W_D^{(2)} h))$. The final sAE architecture can then be fine-tuned end-to-end by back-propagating the gradient of the reconstruction error.
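The unfolding of two pretrained AEs into a single sAE can be sketched as follows (a numpy sketch; the weights are random stand-ins for pretrained values):

```python
import numpy as np

rng = np.random.default_rng(9)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, c1, c2 = 8, 5, 3   # input dim and the two code-layer sizes

# Two AEs, each trained independently in the greedy scheme described above
# (random weights stand in for the pretrained values)
W1_E = rng.normal(scale=0.1, size=(c1, d)); W1_D = rng.normal(scale=0.1, size=(d, c1))
W2_E = rng.normal(scale=0.1, size=(c2, c1)); W2_D = rng.normal(scale=0.1, size=(c1, c2))

def encode(x):
    """Unfolded two-layer encoder: the second AE runs on the first AE's codes."""
    return sigmoid(W2_E @ sigmoid(W1_E @ x))

def decode(h):
    """Unfolded two-layer decoder, applied in reverse order."""
    return sigmoid(W1_D @ sigmoid(W2_D @ h))

x = rng.normal(size=d)
x_rec = decode(encode(x))      # end-to-end reconstruction of the unfolded sAE
```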
2.2 A brief introduction to relevant kernel methods
Kernel methods process data in a kernel space associated with an input space through an implicit (non-linear) mapping $\phi(\cdot)$. There, data are more likely to become separable by linear methods, which produces results that are otherwise obtainable only by non-linear operations in the input space. Explicit computation of the mapping $\phi(\cdot)$ and its inverse is, in practice, not required. In fact, operations in the kernel space are expressed through inner products (the kernel trick), which are computed as Mercer kernel functions in the input space: $\kappa(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$.
As a major drawback, kernel methods scale poorly with the number of data points $N$: traditionally, memory requirements of these methods scale with $O(N^2)$ and computation with $O(N^3)$. For example, kernel principal component analysis (kPCA), a common dimensionality reduction technique that projects data into the subspace preserving the maximal amount of variance in kernel space, requires computing the eigendecomposition of a kernel matrix $K \in \mathbb{R}^{N \times N}$, with $K_{ij} = \kappa(x_i, x_j)$, yielding a computational complexity and memory requirements that scale as $O(N^3)$ and $O(N^2)$, respectively. For this reason, kPCA is not applicable to large-scale problems. The availability of efficient (approximate) mapping functions, however, would reduce the complexity, thereby enabling these methods to be applied to larger datasets. Furthermore, by providing an approximation of $\phi(\cdot)$, it would be possible to directly control and visualize data represented in the kernel space. Finding an explicit inverse mapping from the kernel space is a central problem in several applications, such as image denoising performed with kPCA, also known as the pre-image problem [1, 13].
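To make the complexity argument concrete, a minimal numpy sketch of kPCA shows that the full $N \times N$ kernel matrix must be stored and eigendecomposed. The RBF kernel and median-distance bandwidth here are illustrative choices, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                      # N = 50 samples
N = X.shape[0]

# RBF kernel matrix: O(N^2) memory
sq = np.sum(X ** 2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # pairwise squared distances
K = np.exp(-D2 / (2 * np.median(D2)))             # median heuristic for the bandwidth

# Center the kernel matrix in feature space
J = np.eye(N) - np.ones((N, N)) / N
Kc = J @ K @ J

# Eigendecomposition: O(N^3) computation
eigvals, eigvecs = np.linalg.eigh(Kc)
idx = np.argsort(eigvals)[::-1]                   # sort eigenpairs descending
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

# Project onto the top-r components (kPCA scores of the training data)
r = 2
Z = eigvecs[:, :r] * np.sqrt(np.clip(eigvals[:r], 0, None))
```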
2.3 Probabilistic Cluster Kernel
The Probabilistic Cluster Kernel (PCK) adapts to inherent structures in the data and does not depend on any critical user-specified hyperparameters, such as the width of a Gaussian kernel. The PCK is trained by fitting multiple Gaussian Mixture Models (GMMs) to the input data and then combining these models into a single kernel. In particular, GMMs are trained for a range of numbers of mixture components $q$, each with different randomized initial conditions $g$. Let $\pi_i^{(q,g)}$ denote the posterior distribution for data point $x_i$ under a GMM with $q$ mixture components and initial condition $g$. The PCK is then defined as

$\kappa_{PCK}(x_i, x_j) = \frac{1}{Z} \sum_{q} \sum_{g} {\pi_i^{(q,g)}}^T \pi_j^{(q,g)}, \quad (4)$

where $Z$ is a normalizing constant.
Intuitively, the posterior distribution under a mixture model contains the probabilities that a given data point belongs to each mixture component in the model. Thus, the inner products in Eq. 4 are large if data pairs often belong to the same mixture component. By averaging these inner products over a range of $q$ values, the kernel function takes a large value only if the data points are similar on both a global scale (small $q$) and a local scale (large $q$).
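A sketch of Eq. 4 in numpy, assuming the GMM posteriors have already been computed; here random soft assignments stand in for the posteriors of GMMs fitted over a grid of $q$ and $g$, so only the combination step is illustrated.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20                                   # number of data points

def random_posteriors(N, q):
    """Stand-in for GMM posteriors: each row is a probability vector over q components."""
    P = rng.random((N, q))
    return P / P.sum(axis=1, keepdims=True)

Q = range(2, 6)      # numbers of mixture components q
G = range(3)         # randomized initializations g per q

K = np.zeros((N, N))
for q in Q:
    for g in G:
        P = random_posteriors(N, q)      # posterior matrix for this (q, g) pair
        K += P @ P.T                     # inner products of posterior vectors

K /= len(Q) * len(G)                     # normalizing constant Z
```

Since each entry averages inner products of probability vectors, the resulting kernel is symmetric, positive semi-definite, and bounded in [0, 1].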
3 Deep kernelized autoencoders
In this section, we describe our contribution, which is a method combining AEs with kernel methods: the deep kernelized AE (dkAE). A dkAE is trained by minimizing the following loss function:

$L = (1 - \lambda) L_r(x, \tilde{x}) + \lambda L_c(C, K_p), \quad (5)$

where $L_r(\cdot, \cdot)$ is the reconstruction loss in Eq. 3 and $\lambda$ is a hyperparameter ranging in $[0, 1]$ that weights the importance of the two objectives in Eq. 5. For $\lambda = 0$, the loss function simplifies to the traditional AE loss in Eq. 2. $L_c(\cdot, \cdot)$ is the code loss, a distance measure between two matrices: $K_p$, the kernel matrix given as prior, and $C$, the inner product matrix of the codes associated to the input data. The objective of $L_c$ is to enforce similarity between $C$ and the prior $K_p$. A depiction of the training procedure is reported in Fig. 1.
We implement $L_c$ as the normalized Frobenius distance between $C$ and $K_p$. Each matrix element in $C$ is given by $C_{ij} = E(x_i)^T E(x_j)$ and the code loss is computed as

$L_c(C, K_p) = \left\| \frac{C}{\|C\|_F} - \frac{K_p}{\|K_p\|_F} \right\|_F. \quad (6)$
Minimizing the normalized Frobenius distance between the kernel matrices is equivalent to maximizing the traditional kernel alignment cost, since

$\left\| \frac{C}{\|C\|_F} - \frac{K_p}{\|K_p\|_F} \right\|_F^2 = 2 - 2 A(C, K_p), \quad (7)$

where $A(C, K_p) = \frac{\langle C, K_p \rangle_F}{\|C\|_F \|K_p\|_F}$ is exactly the kernel alignment cost function [7, 26]. Note that the distance in Eq. 7 can also be implemented with more advanced differentiable measures of (dis)similarity between PSD matrices, such as divergences and mutual information [19, 9]. However, these options are not explored in this paper and are left for future research.
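The code loss and its relation to the kernel alignment can be verified numerically; in the numpy sketch below, two random PSD Gram matrices stand in for the code matrix and the prior.

```python
import numpy as np

rng = np.random.default_rng(3)

def gram(n, d):
    A = rng.normal(size=(n, d))
    return A @ A.T                        # random PSD (Gram) matrix

def code_loss(Kc, Kp):
    """Normalized Frobenius distance between two kernel matrices."""
    Kc_n = Kc / np.linalg.norm(Kc, "fro")
    Kp_n = Kp / np.linalg.norm(Kp, "fro")
    return np.linalg.norm(Kc_n - Kp_n, "fro")

def alignment(Kc, Kp):
    """Kernel alignment A(Kc, Kp) = <Kc, Kp>_F / (||Kc||_F ||Kp||_F)."""
    return np.sum(Kc * Kp) / (np.linalg.norm(Kc, "fro") * np.linalg.norm(Kp, "fro"))

Kc, Kp = gram(10, 4), gram(10, 4)

# Squared normalized Frobenius distance equals 2 - 2 A(Kc, Kp), so minimizing
# the distance maximizes the alignment
lhs = code_loss(Kc, Kp) ** 2
rhs = 2 - 2 * alignment(Kc, Kp)
```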
In this paper, the prior kernel matrix is computed by means of the PCK algorithm introduced in Section 2.3, such that $K_p = K_{PCK}$. However, our approach is general and any kernel matrix can be used as prior in Eq. 6.
3.1 Mini-batch training
We use mini-batches of $n$ samples to train the dkAE, thereby avoiding the computational restrictions of kernel and especially spectral methods outlined in Sec. 2.2. With mini-batch training, the memory complexity of the algorithm can be reduced to $O(n^2)$, where $n \ll N$. Finally, we note that the computational complexity scales linearly with regard to the parameters in the network. In particular, given a mini-batch of $n$ samples, the dkAE loss function is defined by taking the average of the per-sample reconstruction cost:

$L = (1 - \lambda) \frac{1}{n d} \sum_{i=1}^{n} \| x_i - \tilde{x}_i \|_2^2 + \lambda \left\| \frac{C}{\|C\|_F} - \frac{K_p}{\|K_p\|_F} \right\|_F, \quad (8)$

where $d$ is the dimensionality of the input space, $K_p$ is a subset of the prior kernel matrix that contains only the rows and columns related to the current mini-batch, and $C$ contains the inner products of the codes for the specific mini-batch. Note that $C$ is re-computed for each mini-batch.
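The mini-batch loss above can be sketched as follows in numpy; the codes, reconstructions, and prior sub-matrix are random stand-ins for quantities that would be produced by the network and the PCK.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, c = 16, 8, 3            # mini-batch size, input dim, code dim
lam = 0.1                     # lambda, weighting the two objectives (illustrative value)

X = rng.normal(size=(n, d))              # mini-batch of inputs
H = rng.normal(size=(n, c))              # codes for the batch (stand-in for encoder output)
X_rec = rng.normal(size=(n, d))          # reconstructions (stand-in for decoder output)
A = rng.normal(size=(n, n))
K_p = A @ A.T                            # prior kernel sub-matrix for this batch (stand-in)

# Reconstruction term, averaged over samples and input dimensions
L_r = np.sum((X - X_rec) ** 2) / (n * d)

# Code term: normalized Frobenius distance between the batch code Gram matrix and the prior
C = H @ H.T                              # re-computed for every mini-batch
diff = C / np.linalg.norm(C, "fro") - K_p / np.linalg.norm(K_p, "fro")
L_c = np.linalg.norm(diff, "fro")

L = (1 - lam) * L_r + lam * L_c          # dkAE mini-batch loss
```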
3.2 Operations in code space
Linear operations in code space can be performed as shown in Fig. 2. The encoding scheme of the proposed dkAE explicitly approximates the function $\phi(\cdot)$ that maps an input onto the kernel space. In particular, in a dkAE the feature vector $\phi(x)$ is approximated by the code $h$. Following the underlying idea of kernel methods and inspired by Cover's theorem, which states that data embedded in a high-dimensional space are more likely to be linearly separable, linear operations can be performed on the code. A linear operation on $h$ produces a result $z$ in the code space, relative to the input $x$. Codes are mapped back to the input space by means of the decoder, which in our case approximates the inverse mapping from the kernel space back to the input domain. Unlike in other kernel methods, where this explicit mapping is not defined, this permits visualization and interpretation of the results in the original space.
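A minimal sketch of such an operation in numpy, emulating PCA in code space; toy linear encoder and decoder weights stand in for a trained dkAE, so this shows only the project-then-decode pattern.

```python
import numpy as np

rng = np.random.default_rng(5)
N, d, c = 40, 6, 4

X = rng.normal(size=(N, d))
W_E = rng.normal(scale=0.3, size=(c, d))   # toy linear encoder (stand-in for the trained one)
W_D = rng.normal(scale=0.3, size=(d, c))   # toy linear decoder

H = X @ W_E.T                              # codes: approximate kernel-space representation

# Linear operation in code space: project the codes onto their top-2 principal directions
H0 = H - H.mean(axis=0)
U, S, Vt = np.linalg.svd(H0, full_matrices=False)
H_proj = H0 @ Vt[:2].T @ Vt[:2] + H.mean(axis=0)   # PCA reconstruction in code space

# Map the result back to the input space through the decoder
X_back = H_proj @ W_D.T
```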
4 Experiments and results
In this section, we evaluate the effectiveness of dkAEs on different benchmarks. In the first experiment, we evaluate the effect on the two terms of the objective function (Eq. 8) of varying the hyperparameter $\lambda$ (in Eq. 5) and the size of the code layer. In a second experiment, we study the reconstruction and the kernel alignment. Further, we compare the dkAE's accuracy in approximating the prior kernel matrix to that of kPCA as the number of principal components increases. Finally, we present an application of our method to image denoising, where we apply PCA in the dkAE code space to remove noise.
For these experiments, we consider the MNIST dataset, consisting of images of handwritten digits. However, we use a subset of the samples due to the computational restrictions imposed by the PCK, which we use to illustrate the dkAE's ability to learn arbitrary kernels, even if they originate from an ensemble procedure. We train the PCK by fitting the GMMs on a subset of 200 training samples. Once trained, the GMM models are applied to the remaining data to calculate the kernel matrix. We use 70%, 15%, and 15% of the data for training, validation, and testing, respectively.
The network architecture used in the experiments (see Fig. 1) has been demonstrated to perform well on several datasets, including MNIST, for both supervised and unsupervised tasks [20, 12]. Here, $c$ refers to the dimensionality of the code layer. Training was performed using the sAE pretraining approach outlined in Sec. 2.1. To avoid learning the identity mapping in each individual layer, we applied a common regularization technique where the encoder and decoder weights are tied, i.e., $W_D = W_E^T$. This is done during both pretraining and fine-tuning. Unlike in traditional sAEs, to account for the kernel alignment objective, the code layer is optimized according to Eq. 5 also during pretraining.
Mini-batches for training consist of randomly, independently sampled data points. Pretraining is performed for a fixed number of epochs per layer, and the final architecture is fine-tuned using gradient descent based on Adam. The dkAE weights are randomly initialized according to Glorot et al.
4.2 Influence of hyperparameter $\lambda$ and size of code layer
In this experiment, we evaluate the influence of the two main hyperparameters that determine the behaviour of our architecture. Note that the experiments in this section are performed by training the dkAE on the training set and evaluating the performance on the validation set. We evaluate both the out-of-sample reconstruction loss $L_r$ and the code loss $L_c$. Figure 3 illustrates the effect of $\lambda$ for a fixed number of neurons in the code layer. It can be observed that the reconstruction loss increases as more and more focus is put on minimizing $L_c$ (obtained by increasing $\lambda$). This quantifies empirically the trade-off between optimizing the reconstruction performance and the kernel alignment at the same time. Similarly, it can be observed that $L_c$ decreases when increasing $\lambda$. Inspecting the results, specifically the near-constant losses for $\lambda$ in the range [0.1, 0.9], the method appears robust to changes in the hyperparameter $\lambda$.
Analyzing the effect of varying the code layer size $c$ given a fixed $\lambda$ (Figure 3), we observe that both losses decrease as $c$ increases. This suggests that an even larger architecture, characterized by more layers and more neurons w.r.t. the architecture adopted, might work well, as the dkAE does not seem to overfit, due also to the regularization effect provided by the kernel alignment.
4.3 Reconstruction and kernel alignment
Based on the previous results, in the following experiments we fix $\lambda$ and the code layer size $c$. Figure 4 illustrates the results of Sec. 4.2 qualitatively by displaying a set of original images from our test set and their reconstructions, for the chosen value of $\lambda$ and for a non-optimal one. Similarly, the prior kernel matrix (sorted by class in the figure to ease visualization) and the dkAE's approximated kernel matrices, relative to the test data, are displayed for two different $\lambda$ values. Notice that, to illustrate the difference with respect to a traditional sAE, one of the two $\lambda$ values is set to zero. It can be clearly seen that, for the chosen $\lambda$, both the reconstruction and the kernel matrix resemble the originals closely, which agrees with the plots in Figure 3.
Inspecting the kernels obtained in Figure 4, we compare the distance between each of the kernel matrices, $C$ and $K_p$, and the ideal kernel matrix obtained by considering supervised information. We build the ideal kernel matrix $K_I$ such that $K_I^{ij} = 1$ if elements $i$ and $j$ belong to the same class, and $K_I^{ij} = 0$ otherwise. Table 1 illustrates that the kernel approximation produced by the dkAE outperforms a traditional sAE with regard to kernel alignment with the ideal kernel. Additionally, it can be seen that the kernel approximation actually improves slightly on the kernel prior, which we hypothesise is due to the regularization imposed by the reconstruction objective.
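The ideal kernel and the comparison against it can be sketched in numpy; here a random PSD matrix stands in for the candidate (learned or prior) kernel being scored.

```python
import numpy as np

rng = np.random.default_rng(6)
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2])

# Ideal kernel: K_I[i, j] = 1 if i and j share a class, 0 otherwise
K_ideal = (labels[:, None] == labels[None, :]).astype(float)

def norm_frob_dist(Ka, Kb):
    """Normalized Frobenius distance between two kernel matrices (as in Eq. 7)."""
    Ka_n = Ka / np.linalg.norm(Ka, "fro")
    Kb_n = Kb / np.linalg.norm(Kb, "fro")
    return np.linalg.norm(Ka_n - Kb_n, "fro")

# Any approximated kernel (random PSD stand-in here) can be scored against K_ideal
A = rng.normal(size=(8, 3))
score = norm_frob_dist(A @ A.T, K_ideal)
```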
4.4 Approximation of kernel matrix given as prior
In order to quantify the kernel alignment performance, we compare the dkAE to the approximation provided by kPCA when varying the number of principal components. For this test, we take the kernel matrix of the training set and compute its eigendecomposition. We then select an increasing number $r$ of components (those associated with the largest eigenvalues) to project the input data. The approximation of the original kernel matrix (prior) is then given as $K_r = U_r \Lambda_r U_r^T$, where $U_r$ contains the top $r$ eigenvectors and $\Lambda_r$ the corresponding eigenvalues. We compute the distance between the prior and $K_r$ following Eq. 7 and compare it to the dissimilarity between the prior and $C$. For evaluating the out-of-sample performance, we use the Nyström approximation for kPCA and compare it to the dkAE kernel approximation on the test set.
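The truncated kernel approximation used in this comparison can be sketched as follows (a numpy sketch; a random low-rank Gram matrix stands in for the prior):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(30, 5))
K = A @ A.T                                    # prior kernel matrix (rank 5 here)

eigvals, U = np.linalg.eigh(K)
idx = np.argsort(eigvals)[::-1]                # sort eigenpairs descending
eigvals, U = eigvals[idx], U[:, idx]

def approx(r):
    """Rank-r kernel approximation K_r = U_r Lambda_r U_r^T from the top-r eigenpairs."""
    return (U[:, :r] * eigvals[:r]) @ U[:, :r].T

# Approximation error shrinks as more components are kept
errors = [np.linalg.norm(K - approx(r), "fro") for r in range(1, 6)]
```

Since the stand-in prior has rank 5, the error vanishes once all five informative components are retained.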
The corresponding figure shows that the approximation obtained by means of dkAEs outperforms kPCA when using a small number of components. Note that it is common in spectral methods to choose a number of components equal to the number of classes in the dataset, in which case, for the 10 classes of the MNIST dataset, the dkAE would outperform kPCA. As the number of selected components increases, the approximation provided by kPCA improves. However, as shown in the previous experiment (Sec. 4.3), this does not mean that the approximation performs better with regard to the ideal kernel. In fact, in that experiment the kernel approximation produced by the dkAE actually performed at least as well as the prior kernel (kPCA with all components taken into account).
4.5 Linear operations in code space
Here we hint at the potential of performing operations in code space, as described in Sec. 3.2. We emulate kPCA by performing PCA in our learned kernel space and evaluate the performance on a denoising task. Denoising requires both a mapping to the kernel space and a back-projection. For traditional kernel methods no explicit back-projection exists, but approximate solutions to this so-called pre-image problem have been proposed [1, 13]. We chose the method proposed by Bakir et al., which uses kernel ridge regression, such that a different kernel (in our case an RBF) can be used for the back-mapping. As it was challenging to find a good kernel width for an RBF that captures all digits in the MNIST dataset, we performed this test on two of the digit classes only. The regularization parameter required for the back-projection was found via grid search (selecting the value with the best MSE reconstruction), while the kernel width was set to the median of the Euclidean distances between the projected feature vectors.
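A numpy sketch of this back-projection via kernel ridge regression; codes and inputs are random stand-ins, and the regularization value is illustrative rather than the one found by the grid search.

```python
import numpy as np

rng = np.random.default_rng(8)
N, d, c = 30, 5, 3

X = rng.normal(size=(N, d))          # training inputs (stand-ins)
Z = rng.normal(size=(N, c))          # their codes / kernel-space projections (stand-ins)

def rbf(A, B, sigma):
    D2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-D2 / (2 * sigma**2))

# Kernel width: median of pairwise distances between the projected feature vectors
D2 = np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * Z @ Z.T
sigma = np.sqrt(np.median(D2[D2 > 0]))

# Kernel ridge regression from code space back to input space
gamma = 1e-6                                   # regularization (illustrative value)
K = rbf(Z, Z, sigma)
alpha = np.linalg.solve(K + gamma * np.eye(N), X)

def back_project(Z_new):
    """Approximate pre-images of points in code space."""
    return rbf(Z_new, Z, sigma) @ alpha
```

With small regularization, back-projecting the training codes approximately recovers the training inputs, which is the behaviour the pre-image map is fitted for.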
Both models are fitted on the training set, and Gaussian noise is added to the test set. For both methods, the same number of principal components is used. Tab. 6 illustrates that dkAE+PCA outperforms kPCA's reconstruction with regard to mean squared error. However, as this is not necessarily a good measure for denoising, we also visualize the results in Fig. 6. It can be seen that the dkAE yields sharper images in the denoising task.
In this paper, we proposed a novel autoencoder model based on a new unsupervised loss function. The proposed model enables us to learn an approximate embedding from an input space to an arbitrary, user-specified kernel space, as well as the projection from that kernel space back to the input space, through an end-to-end trained model. It is worth noting that, with our method, we are able to approximate arbitrary kernel functions by inner products in the code layer, which allows us to control the representation learned by the autoencoder. In addition, it enables us to emulate well-known kernel methods such as kPCA, and it scales well with the number of data points.
A more rigorous analysis of the learned kernel space embedding, as well as applications of the code space representation for clustering and/or classification tasks, are left as future works.
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for this research. This work was partially funded by the Norwegian Research Council FRIPRO grant no. 239844 on developing the Next Generation Learning Machines.
-  Bakir, G.H., Weston, J., Schölkopf, B.: Learning to find pre-images. Advances in Neural Information Processing Systems pp. 449–456 (2004)
-  Bengio, Y., Courville, A., Vincent, P.: Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8), 1798–1828 (Aug 2013)
-  Bengio, Y.: Learning deep architectures for AI. Foundations and Trends® in Machine Learning 2(1), 1–127 (2009)
-  Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the fifth annual workshop on Computational learning theory pp. 144–152 (1992)
-  Cho, Y., Saul, L.K.: Kernel methods for deep learning. Advances in Neural Information Processing Systems 22 pp. 342–350 (2009)
-  Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, New York (1991)
-  Cristianini, N., Elisseeff, A., Shawe-Taylor, J., Kandola, J.: On kernel-target alignment. Advances in neural information processing systems (2001)
-  Dai, B., Xie, B., He, N., Liang, Y., Raj, A., Balcan, M.F.F., Song, L.: Scalable kernel methods via doubly stochastic gradients. Advances in Neural Information Processing Systems pp. 3041–3049 (2014)
-  Giraldo, L.G.S., Rao, M., Principe, J.C.: Measures of entropy from data using infinitely divisible kernels. IEEE Transactions on Information Theory 61(1), 535–548 (Nov 2015)
-  Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10) (2010)
-  Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
-  Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural computation 18(7), 1527–1554 (2006)
-  Honeine, P., Richard, C.: A closed-form solution for the pre-image problem in kernel-based machines. Journal of Signal Processing Systems 65(3), 289–299 (2011)
-  Izquierdo-Verdiguier, E., Jenssen, R., Gómez-Chova, L., Camps-Valls, G.: Spectral clustering with the probabilistic cluster kernel. Neurocomputing 149, 1299–1304 (2015)
-  Jenssen, R.: Kernel entropy component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(5), 847–860 (2010)
-  Kamyshanska, H., Memisevic, R.: The potential energy of an autoencoder. IEEE transactions on pattern analysis and machine intelligence 37(6), 1261–1273 (2015)
-  Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
-  Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)
-  Kulis, B., Sustik, M.A., Dhillon, I.S.: Low-rank kernel learning with Bregman matrix divergences. Journal of Machine Learning Research 10(Feb.), 341–376 (2009)
-  van der Maaten, L.: Learning a parametric embedding by preserving local structure. International Conference on Artificial Intelligence and Statistics pp. 384–391 (2009)
-  Montavon, G., Braun, M.L., Müller, K.R.: Kernel analysis of deep networks. Journal of Machine Learning Research 12, 2563–2581 (Nov 2011)
-  Ng, A.Y., Jordan, M.I., Weiss, Y., et al.: On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems pp. 849–856 (2001)
-  Santana, E., Emigh, M., Principe, J.C.: Information theoretic-learning auto-encoder. arXiv preprint arXiv:1603.06653 (2016)
-  Schölkopf, B., Smola, A., Müller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural computation 10(5), 1299–1319 (1998)
-  Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11, 3371–3408 (2010)
-  Wang, T., Zhao, D., Tian, S.: An overview of kernel alignment and its applications. Artificial Intelligence Review 43(2), 179–192 (2015)
-  Wilson, A.G., Hu, Z., Salakhutdinov, R., Xing, E.P.: Deep kernel learning. In: Proceedings of the 19th International Conference on Artificial Intelligence and Statistics. pp. 370–378 (2016)