Abstract

Representations learnt through deep neural networks tend to be highly informative, but opaque in terms of what information they learn to encode. We introduce an approach to probabilistic modelling that learns to represent data with two separate deep representations: an invariant representation that encodes the information of the class to which the data belongs, and an equivariant representation that encodes the symmetry transformation defining the particular data point within the class manifold (equivariant in the sense that the representation varies naturally with symmetry transformations). This approach is based primarily on the strategic routing of data through the two latent variables, and thus is conceptually transparent, easy to implement, and in principle generally applicable to any data comprised of discrete classes of continuous distributions (e.g. objects in images, topics in language, individuals in behavioural data). We demonstrate qualitatively compelling representation learning and competitive quantitative performance, in both supervised and semi-supervised settings, versus comparable modelling approaches in the literature, with little fine-tuning.


 

Invariant-Equivariant Representation Learning for Multi-Class Data

 

Ilya Feige1 


footnotetext: 1Faculty, 54 Welbeck Street, London. Correspondence to: Ilya Feige <ilya@faculty.ai>.  
Proceedings of the International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).
1. Introduction

Representation learning (Bengio et al., 2013) is part of the foundation of deep learning; powerful deep neural network models appear to derive their performance from sequentially representing data in more-and-more refined structures, tailored to the training task.

However, representation learning has a broader impact than just model performance. Transferable representations are leveraged efficiently for new tasks (Mikolov et al., 2013), representations are used for human interpretation of machine learning models (Mahendran & Vedaldi, 2015), and meaningfully structured (disentangled) representations can be used for model control (e.g. semi-supervised learning as in Kingma et al. (2014), topic modelling as in Blei et al. (2003)).

Consequently, it is often preferable to have interpretable data representations within a model, in the sense that the information contained in the representation is easily understood and the representation can be used to control the output of the model (e.g. to generate data of a given class or with a particular characteristic). Unfortunately, there is often a tension between optimal model performance and cleanly disentangled or controllable representations.

To overcome this, some practitioners have proposed modifying their model’s objective functions by inserting parameters in front of particular terms (Bowman et al., 2016; Higgins et al., 2017), while others have sought to modify the associated generative models (Mansbridge et al., 2018). Further still, attempts have been made to build the symmetries of the data directly into the neural network architecture in order to force the learning of latent variables that transform meaningfully under those symmetries (Sabour et al., 2017). The diversity and marginal success of these approaches point to the importance and difficulty of learning meaningful representations in deep generative modelling.

In this work we present an approach to probabilistic modelling of data comprised of a finite number of distinct classes, each described by a smooth manifold of instantiations of that class. For convenience, we call our approach EquiVAE for Equivariant Variational Autoencoder. EquiVAE is a probabilistic model with 2 latent variables: an invariant latent that represents the global class information, and an equivariant latent that smoothly interpolates between all of the members of that class. The EquiVAE approach is general in that the symmetry group of the manifold need not be specified (as in for example Cohen & Welling (2014); Falorsi et al. (2018)), and it can be used for any number of classes and any dimensionality of both underlying representations. The price that must be paid for this level of model control and flexibility is that some labelled data is needed in order to provide the concept of class invariance versus equivariance to the model.

The endeavor to model the content and the style of data separately is certainly not new to this work (Tenenbaum & Freeman, 2000). Reed et al. (2014) and Radford et al. (2016) go further, disentangling the continuous sources of variation in their representations using a clamping technique that exposes specific latent components to a single source of variation in the data during training. In the same vein, other approaches have used penalty terms in the objective function that encourage the learning of disentangled representations (Cheung et al., 2014; Chen et al., 2016).

EquiVAE does not require any modification to the training algorithm, nor additional penalty terms in the objective function in order to bifurcate the information stored in the two latent variables. This is due to the way in which multiple data points are used to reconstruct a single data point from the same-class manifold, which we consider the primary novel aspect of our approach. In particular, our invariant representation takes as input multiple data points that come from the same class, but are different from the data point to be reconstructed. This invariant representation thus directly learns to encode the information common to the overall class, but not the individual data point, simply due to the information flowing through it.

Of further note, we deliberately use a deterministic latent for the invariant representation, and a stochastic latent for the smooth equivariant representation (an idea also employed by Zhu et al. (2014)). This choice is why we do not need to explicitly force the equivariant latent to not contain any class-level information: it is available and easier to access from the deterministic latent.

EquiVAE is also comparable to Siddharth et al. (2017), where the authors leverage labelled data explicitly in their generative model in order to force the VAE latent to learn the non-class information (Makhzani et al. (2016) do similarly using adversarial training). The primary difference between those works and ours is that EquiVAE provides a non-trivial representation of the global information instead of simply using the integer-valued label. Furthermore, this invariant representation can be deterministically evaluated directly on unlabelled data. Practitioners can reuse this embedding on unlabelled data in downstream tasks, along with the equivariant encoder if needed. The invariant representation provides more information than a simple prediction of the class-label distribution.

The encoding procedure for the invariant representation in EquiVAE is partially inspired by Eslami et al. (2018), who use images from various, known coordinates in a scene in order to reconstruct a new image of that scene at new, known coordinates. In contrast, we do not have access to the exact coordinates of the class instance, which in our case corresponds to the unknown, non-trivial manifold structure of the class; we must infer these manifold coordinates in an unsupervised way. Garnelo et al. (2018a; b) similarly explore the simultaneous usage of multiple data points in generative modelling in order to better capture modelling uncertainty.

2. Invariant-Equivariant Generative Model

We consider a generative model for data comprised of a finite set of distinct classes, each of which occupies a smooth manifold of instantiations. For example, images of distinct objects where each object might be in any pose, or sentences describing distinct sets of topics. Such data should be described by a generative model with two latent variables, the first describing which of the objects the data belongs to, and the second describing the particular instantiation of the object (e.g. its pose).

In this way the object-identity latent variable would be invariant under the transformations that cover the set of possible instantiations of the object, and the instantiation-specific latent variable should be equivariant under such transformations. Note that the class label is itself an invariant representation of the class; however, we seek a higher-dimensional latent vector that has the capacity to represent rich information relevant to the class of the data point, rather than just its label.

Denoting an individual data point as $x$ with associated class label $y$, and the full set of class-$y$ labelled data as $X_y$, we write such a generative model as:

$$\log p_\theta(x, y) \;=\; \log\Big[\, p(y) \int \mathrm{d}z\, \mathrm{d}v\;\, p_\theta(x \mid z, v)\; p(z)\; \delta\big(v - v_y(x)\big) \Big] \qquad (1)$$

where $\theta$ are the parameters of the generative model. We make explicit, via the function $v_y(x)$ defined in Equation 2 below, the conditional dependency of $x$ on the deterministically calculable representation of the global properties of class $y$. The distribution $p(y)$ is a categorical distribution with weights given by the relative frequency of each class, and the prior distribution $p(z)$ is taken to be a unit normal describing the set of smooth transformations that cover the class-$y$ manifold.

2.1. Invariant Representation

To guarantee that $v_y(x)$ will learn an invariant representation of the information common to the class-$y$ data, we use a technique inspired by Generative Query Networks (Eslami et al., 2018). Instead of encoding the information of a single data point into $v_y$, we provide samples from the whole class-$y$ manifold. That is, we compute the invariant latent as:

$$v_y(x) \;=\; \mathbb{E}_{\{x_1, \dots, x_k\} \sim X_y \setminus \{x\}}\big[\, f_\phi(x_1, \dots, x_k)\,\big] \;\equiv\; v_y \qquad (2)$$

where $\phi$ are the parameters of this embedding. We explicitly exclude the data point $x$ at hand from this expectation value; in the infinite labelled data limit, the probability of sampling $x$ itself from $X_y$ would vanish. We include the simplified notation $v_y$ for subsequent mathematical clarity.

This procedure invalidates the assumption that the data is generated i.i.d. conditioned on a set of model parameters, since $v_y(x)$ is computed using a number of other data points with label $y$. For notational simplicity, we will suppress this fact, and consider the likelihood as if it were i.i.d. per data point. It is not difficult to augment the equations that follow to incorporate the full dependencies, but we find this to obfuscate the discussion (see Appendix A for full derivations). We ignore the bias introduced from the non-i.i.d. generation process; this could be avoided by holding out a dedicated labelled data set (Garnelo et al., 2018a; b), but we find it empirically insignificant.

The primary purpose of our approach to the invariant representation used in Equation 2 is to provide exactly the information needed to learn a global-class embedding: namely, to learn what the elements of the class manifold have in common. However, our approach provides a secondary advantage. During training we will use values of $k$ (see Equation 2) sampled uniformly between 1 and some small maximal value $k_{\max}$. The embedding $f_\phi$ will thus learn to work well for various values of $k$, including $k = 1$. Consequently, at inference time, any unlabelled data point $x$ can be immediately embedded via $f_\phi(x)$. This is ideal for downstream usage; we will use this technique in Section 3.1 to competitively classify unlabelled test-set data using only $f_\phi(x)$.
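As a concrete illustration, the following PyTorch sketch shows one way the complementary-sample embedding of Equation 2 could be computed. The encoder `f_phi`, the 28x28 input size, and the mean taken directly over final embeddings are illustrative assumptions (the paper's architecture places further dense layers after the mean; see Appendix B).

```python
import torch

# Hypothetical per-image encoder standing in for f_phi; any network mapping an
# image to an embedding vector would do for this sketch (the paper's encoder is
# a 5-layer CNN, see Appendix B).
f_phi = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 16),
)

def invariant_latent(complementary_x: torch.Tensor) -> torch.Tensor:
    """Spirit of Equation 2: embed k same-class images (excluding the target
    image x) and average them into a single class-level vector v_y."""
    return f_phi(complementary_x).mean(dim=0)

# Training time: sample k uniformly in {1, ..., k_max} and draw k images of the
# same class that are *not* the image being reconstructed.
k_max = 7                                        # value used for supervised MNIST
k = int(torch.randint(1, k_max + 1, (1,)))
same_class_images = torch.rand(k, 1, 28, 28)     # placeholder for X_y \ {x}
v_y = invariant_latent(same_class_images)

# Inference time: k = 1, so any unlabelled image can be embedded directly.
v_unlabelled = invariant_latent(torch.rand(1, 1, 28, 28))
```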

2.2. Equivariant Representation

In order to approximate the integral in Equation 1 over the equivariant latent, we use variational inference following the standard VAE approach (Kingma & Welling, 2014; Rezende et al., 2014):

$$z \sim q_\lambda(z \mid x, v_y) \qquad (3)$$

where $\lambda$ are the parameters of the variational distribution over the equivariant latent. Note that $z$ is inferred from $v_y$ (and $x$), not $y$, since $v_y$ is posited to be a multi-dimensional latent vector that represents the rich set of global properties of the class $y$, rather than just its label. As is shown empirically in Section 3.1, $z$ only learns to store the intra-class variations, rather than the class-label information. This is because the class-label information is easier to access directly from the deterministic representation $v_y$, rather than indirectly through the stochastic $z$. We choose to provide both $x$ and $v_y$ to the variational distribution in order to give it more flexibility.
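A minimal sketch of such an equivariant encoder, assuming a diagonal-Gaussian $q_\lambda(z \mid x, v_y)$ with illustrative layer sizes (the actual architecture is described in Appendix B):

```python
import torch
import torch.nn as nn

class EquivariantEncoder(nn.Module):
    """Sketch of q_lambda(z | x, v_y): a diagonal Gaussian whose mean and
    log-variance are computed from the image x and the invariant vector v_y.
    Layer sizes here are illustrative, not the paper's."""

    def __init__(self, x_dim=784, v_dim=16, z_dim=16, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim + v_dim, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x, v_y):
        h = self.trunk(torch.cat([x.flatten(start_dim=1), v_y], dim=-1))
        return self.mean(h), self.log_var(h)

def reparameterise(mean, log_var):
    # z = mu + sigma * eps keeps the sample differentiable w.r.t. (mu, sigma).
    return mean + torch.exp(0.5 * log_var) * torch.randn_like(mean)

encoder = EquivariantEncoder()
x = torch.rand(8, 1, 28, 28)
v_y = torch.zeros(8, 16)            # invariant vectors from Equation 2
mu, log_var = encoder(x, v_y)
z = reparameterise(mu, log_var)
```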

We thus arrive at a lower bound on $\log p_\theta(x, y)$, given in Equation 1, following the standard arguments:

$$\log p(x, y) \;\geq\; \mathcal{L}_{\mathrm{lab}}(x, y) \;=\; \mathbb{E}_{q(z \mid x, v_y)}\big[\log p(x \mid z, v_y)\big] \;-\; \mathrm{KL}\big[q(z \mid x, v_y)\,\big\|\,p(z)\big] \;+\; \log p(y) \qquad (4)$$

The various model parameters are suppressed for clarity.
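For concreteness, the labelled-data bound of Equation 4 could be computed per batch roughly as follows, assuming Bernoulli pixel likelihoods (as for MNIST), a diagonal-Gaussian posterior, and a uniform $p(y)$ over 10 classes; the tensor names are placeholders for model outputs.

```python
import math
import torch

def labelled_elbo(x, x_recon_logits, mu, log_var, log_p_y):
    """Equation 4 for one batch: reconstruction term, analytic KL between the
    diagonal-Gaussian posterior and the unit-normal prior, and log p(y)."""
    recon = -torch.nn.functional.binary_cross_entropy_with_logits(
        x_recon_logits, x, reduction="none"
    ).flatten(start_dim=1).sum(dim=-1)
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=-1)
    return recon - kl + log_p_y

# Toy usage with random tensors standing in for model outputs.
x = torch.rand(8, 1, 28, 28)
x_recon_logits = torch.randn_like(x)
mu, log_var = torch.randn(8, 16), torch.randn(8, 16)
log_p_y = torch.full((8,), math.log(0.1))   # uniform categorical p(y)
elbo = labelled_elbo(x, x_recon_logits, mu, log_var, log_p_y)
loss = -elbo.mean()
```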

The intuition associated with the inference of $v_y$ from multiple same-class, but complementary, data points, as well as the inference of $z$ from $x$ and $v_y$, is depicted heuristically in Figure 1. A more detailed depiction of the architecture used for the invariant and equivariant representations is shown in Figure 2. The architecture for the generative model is then treated as any generative latent-variable model, with latent inputs $v_y$ and $z$.

Figure 1: Heuristic depiction of the class-$y$ manifold. The invariant latent $v_y$ will encode global-manifold information, whereas the equivariant latent $z$ will encode the coordinates of $x$ on the manifold.
Figure 2: Depiction of encoding a particular MNIST digit (from the 6 class) both in terms of its invariant representation (top) and its equivariant representation (bottom).
2.3. Semi-Supervised Objective

EquiVAE is designed to learn a representation that stores global-class information, and as such, it can be used for semi-supervised learning. Thus, an objective function for unlabelled data must be specified to accompany the labelled-data objective function given in Equation 4.

We marginalise over the label $y$ in Equation 1 when a data point is unlabelled. In order to perform variational inference in this case, we use a variational distribution of the form:

$$q(z, y \mid x) \;=\; q_\lambda(z \mid x, v_y)\; q_\pi(y \mid x) \qquad (5)$$

where $q_\lambda(z \mid x, v_y)$ is the same distribution as is used in the labelled case, given in Equation 3. The unlabelled setting requires an additional inference distribution to infer the label $y$, which is achieved with $q_\pi(y \mid x)$, parametrised by $\pi$. Once $y$ is inferred from $x$, $v_y$ can be deterministically calculated using Equation 2 from the labelled data set for class $y$, of which $x$ is no longer a part. With $x$ and $v_y$, the equivariant latent $z$ is inferred via $q_\lambda(z \mid x, v_y)$.

Using this variational inference procedure, we arrive at a lower bound for $\log p(x)$:

$$\log p(x) \;\geq\; \mathcal{L}_{\mathrm{unlab}}(x) \;=\; \mathbb{E}_{q(y \mid x)}\Big[\, \mathbb{E}_{q(z \mid x, v_y)}\big[\log p(x \mid z, v_y)\big] - \mathrm{KL}\big[q(z \mid x, v_y)\,\big\|\,p(z)\big] + \log p(y) \Big] \;+\; \mathcal{H}\big[q(y \mid x)\big] \qquad (6)$$

where the model parameters are again suppressed for clarity.

We will compute the expectation over the discrete distribution $q(y \mid x)$ in Equation 6 exactly in order to avoid the problem of backpropagating through discrete variables. However, this expectation could be calculated by sampling using standard techniques (Brooks et al., 2011; Jang et al., 2017; Maddison et al., 2017).
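A sketch of this exact marginalisation, where `per_class_elbo[b, c]` stands for the labelled-style bound of example `b` evaluated as if its label were class `c`; the names and shapes are illustrative:

```python
import torch

def unlabelled_elbo(per_class_elbo, log_q_y):
    """Equation 6 with the label expectation computed exactly: weight the
    per-class bounds by q_pi(y | x) and add the entropy of q_pi(y | x).
    No sampling of y (and hence no relaxation) is required."""
    q_y = log_q_y.exp()
    expected = (q_y * per_class_elbo).sum(dim=-1)   # E_{q(y|x)}[ ... ]
    entropy = -(q_y * log_q_y).sum(dim=-1)          # H[q(y | x)]
    return expected + entropy

# Toy usage: 8 examples, 10 candidate classes.
per_class_elbo = torch.randn(8, 10)
log_q_y = torch.log_softmax(torch.randn(8, 10), dim=-1)
bound = unlabelled_elbo(per_class_elbo, log_q_y)
```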

Therefore, the evidence lower bound objective for semi-supervised learning becomes:

$$\mathcal{L}_{\mathrm{semi}} \;=\; \sum_{(x, y) \in \mathcal{D}_{\mathrm{lab}}} \mathcal{L}_{\mathrm{lab}}(x, y) \;+\; \sum_{x \in \mathcal{D}_{\mathrm{unlab}}} \mathcal{L}_{\mathrm{unlab}}(x) \qquad (7)$$

In order to ensure that $q(y \mid x)$ does not collapse into the local minimum of predicting a single label for every input, we add $\log q(y \mid x)$ to $\mathcal{L}_{\mathrm{lab}}$ for the labelled data. This is done in Kingma et al. (2014) and Siddharth et al. (2017); however, we do not add any hyperparameter in front of this term, unlike those works. We also do not add an overall up-weighting hyperparameter, as is done in Siddharth et al. (2017). The only hyperparameter tuning we perform is to choose the latent dimensionality (either 8 or 16) and to choose $k_{\max}$, the upper end of the range over which $k$ (see Equation 2) varies uniformly during training.
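Putting the pieces together, the semi-supervised objective could be assembled as below; the per-example bound tensors are placeholders for the quantities in Equations 4 and 6, and the extra $\log q_\pi(y \mid x)$ term on labelled data is included as described above.

```python
import torch

def semi_supervised_objective(labelled_elbo, log_q_y_true, unlabelled_elbo):
    """Equation 7 plus the extra log q_pi(y | x) term on labelled data,
    with no weighting hyperparameters in front of either contribution."""
    labelled_term = (labelled_elbo + log_q_y_true).sum()
    unlabelled_term = unlabelled_elbo.sum()
    return labelled_term + unlabelled_term

# Toy usage with placeholder per-example bounds.
labelled = torch.randn(32)                       # Equation 4 per labelled example
y_true = torch.randint(0, 10, (32,))
log_q = torch.log_softmax(torch.randn(32, 10), dim=-1)
log_q_true = log_q[torch.arange(32), y_true]     # log q_pi(y = y_true | x)
unlabelled = torch.randn(128)                    # Equation 6 per unlabelled example
loss = -semi_supervised_objective(labelled, log_q_true, unlabelled)
```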

3. Experiments

We carry out experiments on both the MNIST data set (LeCun et al., 1998) and the Street View House Numbers (SVHN) data set (Netzer et al., 2011). These data sets are appropriately modelled with EquiVAE, since digits from a particular class live on a smooth, a-priori-unknown manifold.

EquiVAE requires some labelled data. Forcing the model to reconstruct $x$ through a representation that only has access to other members of its class is what forces $v_y$ to represent the common information of that class, rather than a representation of the particular instantiation $x$. Thus, the requirement of some labelled data is at the heart of EquiVAE. Indeed, we trained several versions of the EquiVAE generative model in the unsupervised setting, allowing the invariant encoder to receive $x$ directly. The results were as expected: the equivariant latent is completely unused, with the model unable to reconstruct the structure in each class, as it essentially becomes a deterministic autoencoder.

In Section 3.1, we study the disentanglement properties of the representations learnt with EquiVAE in the supervised setting, where we have access to the full set of labelled training data. The semi-supervised learning setting is discussed in Section 3.2. The details of the experimental setup used in this section are provided in Appendix B.

3.1. Supervised Representation Learning

With the full training data set labelled, EquiVAE is able to learn to optimise both equivariant and invariant representations at every training step. The supervised EquiVAE (objective given in Equation 4) converges in approximately 40 epochs on the MNIST training set of 55,000 data points and in 90 epochs on the SVHN training set of 70,000 data points.

The training curves on MNIST are shown on the right in Figure 3. The equivariant latent is learning to represent non-trivial information from the data as evidenced by the KL between the equivariant variational distribution and its prior not vanishing at convergence.

Figure 3: Validation-set results for EquiVAE on MNIST. Invariant (left) and equivariant (middle) latent representations are shown reduced to 2D using UMAP. Learning curves are shown (right) with the ELBO broken down into the sum of the reconstruction and (negative) KL terms, as in Equation 4.

However, when visualised in 2 dimensions using UMAP for dimensional reduction (McInnes & Healy, 2018), the equivariant latent appears not to distinguish between digit classes, as is seen in the uniformity of the class-coloured labels in the middle plot of Figure 3. The apparent uniformity of the equivariant latent reflects two facts: the first is that, given that the generative model gets access to another latent containing the global class information, the equivariant latent does not need to distinguish between classes. The second is that the equivariant manifolds should be similar across all MNIST digits: indeed, they all include rotations, stretches, and stroke-thickness variations, among other smooth transformations.

Finally, on the left in Figure 3, the invariant representation vectors are shown, dimensionally reduced to 2 dimensions using UMAP, and coloured according to the label of the class. These representations are well separated for each class. Each class has some spread in its invariant representations due to the fact that we choose relatively small numbers of complementary samples $k$ (see Equation 2). The model shown in Figure 3 had $k$ randomly selected between 1 and 7 during training, with a fixed $k$ used for visualisation. The outlier points in this plot are exaggerated by the dimensional reduction; we show this below in Figure 5 by considering an EquiVAE with 2-dimensional latents.

The SVHN results are similar to Figure 3, with slightly less uniformity in the equivariant latent.

All visualisations in this work, including those in Figure 3, use data from the validation set (5,000 images for MNIST; 3,257 for SVHN). We reserve the test set (10,000 images for MNIST; 26,032 for SVHN) for computing the accuracy values provided. We did not look at this test set during training or hyperparameter tuning. In terms of hyperparameter tuning, we only tuned the number of epochs for training, the range of $k$ values (see Equation 2) for each data set, and chose between 8 and 16 for the dimensionality of both latents (16 chosen for both data sets). We fixed the architecture at the outset to have ample capacity for this task, but did not vary it in our experiments.

Figure 4: Generated images (left) sampled from the prior $p(z)$ for each $v_y$, (middle) reconstructed from equivariant interpolations between the embeddings of same-class digits with $v_y$ fixed, and (right) reconstructed from latent pairs $(v_y, z)$, where $z$ is the equivariant encoding of a validation-set image and $v_y$ ranges over all classes.
Figure 5: Latent variables for an EquiVAE with 2-dimensional latents. The invariant latent space (left), reconstructions from evenly-spaced variations of the invariant latent covering the full space at fixed $z$ (middle), and reconstructions from 2-prior-standard-deviation variations of $z$ at fixed $v_y$ (right) are shown.

In order to see directly what information is stored in the two latents, we consider them in the context of the generative model. We show reconstructed images in various latent-variable configurations in Figure 4. To show samples from the equivariant prior, we fix a single invariant representation $v_y$ for each class by taking the mean over each class in the validation set. On the left in Figure 4 we show reconstructions from random samples $z \sim p(z)$ along with $v_y$ for each class, ascending from 0 to 9 in each column. Two properties stand out from these samples: firstly, the samples are (almost) all from the correct class for MNIST (SVHN), showing that the invariant latent is representing the class information well. Secondly, there is appreciable variance within each class, showing that samples from the prior are able to represent the intra-class variations.

The middle plots in Figure 4 show interpolations between actual digits of the same class (the top and bottom rows of each subfigure), with the invariant representation $v_y$ fixed throughout. These interpolations are smooth, as is expected from interpolations of a VAE latent, and cover the trajectory between the two images well. This again supports the argument that the equivariant representation $z$, as a stochastic latent variable, is appropriate for representing the smooth intra-class transformations.

To create the right-hand side of Figure 4, a validation-set image is encoded for each digit to create a set of equivariant latents $\{z_j\}$, from which we reconstruct images using the latent pairs $(v_y, z_j)$ for each class $y$. Thus we see in each row a single digit and in each column a single style. It is most apparent in the SVHN results that the equivariant latent controls all stylistic aspects of the image, including the non-central digits, whereas the invariant latent controls only the central digit.

In Figure 5 we show an EquiVAE trained with 2-dimensional invariant and equivariant latents on MNIST for clearer visualisation of the latent space. On the right, reconstructions are shown with $v_y$ fixed for each digit and with an identical set of $z$ values evenly spaced over a grid spanning from $-2$ to $2$ in each coordinate (2 prior standard deviations). The stylistic variations appear to be similar for the same values of $z$ across different digits, as was partially evidenced on the right of Figure 4. On the left of Figure 5 we see where images in the validation set are encoded, showing significant distance between clusters when dimensional reduction is not used. Finally, in the middle of Figure 5 we see evenly-spaced reconstructions covering the full invariant latent space. This shows that, though the invariant latent is not storing stylistic information, it does contain the relative similarity of the base version of each digit. This lends justification to our assertion that the invariant latent represents more information than just the label $y$.

We have thus seen that the invariant representation in EquiVAE learns to represent global-class information and the equivariant representation learns to represent local, smooth, intra-class information. This is what we expected from the theoretical considerations given in Section 2.

We now show quantitatively that the invariant representation learns the class information by showing that it alone can predict the class as well as a dedicated classifier. In order to predict an unknown label, we employ a direct technique to compute the invariant representation from $x$: we simply use $f_\phi(x)$ from Equation 2 with $k = 1$. We can then pass $f_\phi(x)$ into a neural classifier, or we can find the nearest cluster mean $\bar v_y$ from the training set and assign class probabilities according to the distances $\|f_\phi(x) - \bar v_y\|$. We find that classifying test-set images using this 0-parameter distance metric performs as well as using a neural classifier (2-layer dense dropout network with 128, 64 neurons per layer) with $f_\phi(x)$ as input. Note that classifying with the generative model's lower bound (Equation 4) is roughly equivalent to our distance-based classifier, as it will be maximal when $v_y$ (computed from the training data with label $y$) is most similar to $f_\phi(x)$. Thus, our distance-based technique is a more direct approach to classification than using the generative model.
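The 0-parameter distance classifier amounts to nearest-class-mean assignment in the invariant latent space; a sketch follows, in which the Euclidean metric and the random stand-ins for $f_\phi$ outputs are assumptions on the exact form.

```python
import torch

def class_means(embeddings, labels, n_classes=10):
    """Mean invariant embedding f_phi(x) per class, from labelled training data."""
    return torch.stack([embeddings[labels == c].mean(dim=0) for c in range(n_classes)])

def distance_classify(test_embeddings, means):
    """0-parameter classifier: assign each test embedding to its nearest class
    mean.  (A softmax over negative distances, an assumption on the exact form,
    would turn this into class probabilities.)"""
    d = torch.cdist(test_embeddings, means)   # pairwise Euclidean distances
    return d.argmin(dim=-1)

# Toy usage with random stand-ins for f_phi outputs.
train_emb, train_lab = torch.randn(1000, 16), torch.randint(0, 10, (1000,))
test_emb = torch.randn(100, 16)
predictions = distance_classify(test_emb, class_means(train_emb, train_lab))
```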


Technique Error rate

MNIST

EquiVAE Benchmark neural classifier
EquiVAE (neural classifier using $f_\phi(x)$)
EquiVAE (distance based on $f_\phi(x)$)
Stacked VAE (M1+M2) (Kingma et al., 2014)
Adversarial Autoencoders (Makhzani et al., 2016)

SVHN

EquiVAE Benchmark neural classifier
EquiVAE (neural classifier using $f_\phi(x)$)
EquiVAE (distance based on $f_\phi(x)$)
Table 1: Supervised error rates on MNIST (10,000 images) and SVHN (26,032 images) test sets.

Labels EquiVAE Benchmark Siddharth et al. (2017) Kingma et al. (2014)

MNIST

(M2)
100
600
1000
3000

SVHN

(M1+M2)
1000
3000
Table 2: Semi-supervised test-set error rates for various labelled-data-set sizes.

Our results are shown in Table 1, along with the error rate of a dedicated, end-to-end neural network classifier, identical in architecture to that of $f_\phi$, with 2 dropout layers added. This benchmark classifier performs similarly on MNIST (slightly better on SVHN) to the simple classifier based on finding the nearest training-set cluster to $f_\phi(x)$. This is a strong result, as our classification algorithm based on $f_\phi$ has no direct classification objective in its training; see Equation 4. The uncertainty bands quoted in Table 1 use the standard error on the mean (standard deviation divided by $\sqrt{n}$), with $n$ independent trials.

Results from selected, relevant works from the literature are also shown in Table 1. Kingma et al. (2014) do not provide error bars, and they only provide a fully supervised result for their most-powerful, stacked / pre-trained VAE M1+M2, but it appears as if their learnt representation is less accurate in its classification of unlabelled data. Makhzani et al. (2016) perform slightly worse than EquiVAE, but within error bars. Makhzani et al. (2016) train for 100 times more epochs than we do, and use over 10 times more parameters in their model, although they use shallower dense networks. Note also that Kingma et al. (2014) and Makhzani et al. (2016) train on a training set of 50,000 MNIST images, whereas we use 55,000. We are unable to compare to Siddharth et al. (2017) as they do not provide fully supervised results on MNIST, and none of these comparable approaches provide fully supervised results on SVHN.

3.2. Semi-Supervised Learning

For semi-supervised learning, we maximise $\mathcal{L}_{\mathrm{semi}}$ given in Equation 7. Test-set classification error rates are presented in Table 2 for varying numbers of labelled data. We compare to a benchmark classifier with similar architecture to $q_\pi(y \mid x)$ (see Equation 5) trained only on the labelled data, as well as to similar VAE-based semi-supervised work. The number of training epochs is chosen to be 20, 25, 30, and 35 for data set sizes 100, 600, 1000, and 3000, respectively, on MNIST, with correspondingly chosen numbers of epochs for data set sizes 1000 and 3000 on SVHN. We use an 8D latent space for MNIST and a 16D latent space for SVHN, with $k_{\max}$ (see Equation 2) set separately for each. Otherwise, no hyperparameter tuning is performed. Each of our experiments is run 5 times to get the mean and (standard) error on the estimate of the mean in Table 2.

We find that EquiVAE performs better than the benchmark classifier with the same architecture (plus two dropout layers appended) trained only on the labelled data, especially for small labelled data sets. Furthermore, EquiVAE performs competitively (within error bars or better) relative to its most similar comparison, Siddharth et al. (2017), which is a VAE-based probabilistic model that treats the labels and the style of the data separately. Given its relative simplicity, rapid convergence (20-35 epochs), and lack of hyperparameter tuning performed, we consider this to be an indication that EquiVAE is an effective approach to jointly learning invariant and equivariant representations, including in the regime of limited labelled data.

4. Conclusions

We have introduced a technique for jointly learning invariant and equivariant representations of data comprised of discrete classes of continuous values. The invariant representation encodes global information about the given class manifold which is ensured by the procedure of reconstructing a data point through complementary samples from the same class. The equivariant representation is a stochastic VAE latent that learns the smooth set of transformations that cover the instances of data on that class manifold. We showed that the invariant latents are so widely separated that a 99.18% accuracy can be achieved on MNIST (87.70% on SVHN) with a simple 0-parameter distance metric based on the invariant embedding. The equivariant latent learns to cover the manifold for each class of data with qualitatively excellent samples and interpolations for each class. Finally, we showed that semi-supervised learning based on such latent variable models is competitive with similar approaches in the literature with essentially no hyperparameter tuning.


Acknowledgments

This work was developed and the experiments were run on the Faculty Platform for machine learning.

The author thanks David Barber, Benoit Gaujac, Raza Habib, Hippolyt Ritter, and Harshil Shah, as well as the ICML reviewers, for helpful discussions and for their comments on various drafts of this paper.

References

  • Bengio et al. (2013) Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.
  • Blei et al. (2003) Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003.
  • Bowman et al. (2016) Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A., Jozefowicz, R., and Bengio, S. Generating sentences from a continuous space. In Conference on Computational Natural Language Learning, 2016.
  • Brooks et al. (2011) Brooks, S., Gelman, A., Jones, G., and Meng, X.-L. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
  • Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, 2016.
  • Cheung et al. (2014) Cheung, B., Livezey, J. A., Bansal, A. K., and Olshausen, B. A. Discovering hidden factors of variation in deep networks. arXiv:1412.6583, 2014.
  • Cohen & Welling (2014) Cohen, T. and Welling, M. Learning the irreducible representations of commutative Lie groups. In International Conference on Machine Learning, 2014.
  • Eslami et al. (2018) Eslami, S. M. A., Rezende, D. J., Besse, F., Viola, F., Morcos, A. S., Garnelo, M., Ruderman, A., Rusu, A. A., Danihelka, I., Gregor, K., Reichert, D. P., Buesing, L., Weber, T., Vinyals, O., Rosenbaum, D., Rabinowitz, N., King, H., Hillier, C., Botvinick, M., Wierstra, D., Kavukcuoglu, K., and Hassabis, D. Neural scene representation and rendering. Science, 2018.
  • Falorsi et al. (2018) Falorsi, L., de Haan, P., Davidson, T. R., Cao, N. D., Weiler, M., Forré, P., and Cohen, T. S. Explorations in homeomorphic variational auto-encoding. arXiv preprint 1807.04689, 2018.
  • Garnelo et al. (2018a) Garnelo, M., Rosenbaum, D., Maddison, C., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D., and Eslami, S. M. A. Conditional neural processes. In International Conference on Machine Learning, 2018a.
  • Garnelo et al. (2018b) Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D. J., Eslami, S. M. A., and Teh, Y. W. Neural processes. arXiv:1807.01622, 2018b.
  • Higgins et al. (2017) Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-vae: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.
  • Jang et al. (2017) Jang, E., Gu, S., and Poole, B. Categorical reparameterization with Gumbel-Softmax. In International Conference on Learning Representations, 2017.
  • Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • Kingma & Welling (2014) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
  • Kingma et al. (2014) Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems. 2014.
  • LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, 1998.
  • Maddison et al. (2017) Maddison, C. J., Mnih, A., and Teh, Y. W. The concrete distribution: a continuous relaxation of discrete random variables. In International Conference on Learning Representations, 2017.
  • Mahendran & Vedaldi (2015) Mahendran, A. and Vedaldi, A. Understanding deep image representations by inverting them. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • Makhzani et al. (2016) Makhzani, A., Shlens, J., Jaitly, N., and Goodfellow, I. Adversarial autoencoders. In International Conference on Learning Representations, 2016.
  • Mansbridge et al. (2018) Mansbridge, A., Fierimonte, R., Feige, I., and Barber, D. Improving latent variable descriptiveness with autogen. arXiv preprint 1806.04480, 2018.
  • McInnes & Healy (2018) McInnes, L. and Healy, J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint 1802.03426, 2018.
  • Mikolov et al. (2013) Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. 2013.
  • Netzer et al. (2011) Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. In Advances in Neural Information Processing Systems Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
  • Radford et al. (2016) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2016.
  • Reed et al. (2014) Reed, S., Sohn, K., Zhang, Y., and Lee, H. Learning to disentangle factors of variation with manifold interaction. In International Conference on Machine Learning, 2014.
  • Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, 2014.
  • Sabour et al. (2017) Sabour, S., Frosst, N., and Hinton, G. E. Dynamic routing between capsules. In Advances in Neural Information Processing Systems. 2017.
  • Siddharth et al. (2017) Siddharth, N., Paige, B. T., Van de Meent, J.-W., Desmaison, A., Goodman, N., Kohli, P., Wood, F., and Torr, P. Learning disentangled representations with semi-supervised deep generative models. In Advances in Neural Information Processing Systems, 2017.
  • Tenenbaum & Freeman (2000) Tenenbaum, J. and Freeman, W. T. Separating style and content with bilinear models. Neural Computation, 2000.
  • Zhu et al. (2014) Zhu, Z., Luo, P., Wang, X., and Tang, X. Multi-view perceptron: a deep model for learning face identity and view representations. In Advances in Neural Information Processing Systems, 2014.
Appendix A. Derivation of the Lower Bounds

In this appendix we detail the derivations of the log-likelihood lower bounds that were provided in Section 2.

EquiVAE is relevant when a non-empty set of labelled data is available. We write the data set as

$$\mathcal{D} \;=\; \mathcal{D}_{\mathrm{lab}} \cup \mathcal{D}_{\mathrm{unlab}}, \qquad \mathcal{D}_{\mathrm{lab}} = \big\{(x_i, y_i)\big\}_{i=1}^{N_{\mathrm{lab}}}, \qquad \mathcal{D}_{\mathrm{unlab}} = \big\{x_i\big\}_{i=1}^{N_{\mathrm{unlab}}} \qquad (8)$$

We also decompose $\mathcal{D}_{\mathrm{lab}} = \bigcup_y X_y$, where $X_y$ is the set of labelled instantiations with label $y$. In particular, in what follows we think of $X_y$ as containing only the images $x$, not the labels, since they are specified by the index on the set. We require at least two labelled data points from each class, so that $X_y \setminus \{x\}$ is never empty.

We would like to maximise the log likelihood that our model generates both the labelled and the unlabelled data, which we write as:

$$\log p_\theta(\mathcal{D}) \;=\; \sum_{(x, y) \in \mathcal{D}_{\mathrm{lab}}} \log p_\theta(x, y) \;+\; \sum_{x \in \mathcal{D}_{\mathrm{unlab}}} \log p_\theta\big(x \mid \mathcal{D}_{\mathrm{lab}}\big) \qquad (9)$$

where we make explicit here the usage of labelled data in the unlabelled generative model.

For convenience, we begin by repeating the generative model for the labelled data in Equation 1, except with the deterministic integral over $v$ completed:

$$\log p_\theta(x, y) \;=\; \log\Big[\, p(y) \int \mathrm{d}z\;\, p_\theta\big(x \mid z, v_y(x)\big)\, p(z) \Big] \qquad (10)$$

We will simplify the notation by writing $v_y \equiv v_y(x)$ and suppressing its dependence on $X_y \setminus \{x\}$, but keep all other details explicit.

We seek to construct a lower bound on $\log p_\theta(x, y)$, namely the log likelihood of the labelled data, using the following variational distribution over $z$ (Equation 3):

$$q_\lambda(z \mid x, v_y) \qquad (11)$$

Indeed,

$$\log p_\theta(x, y) \;=\; \log p(y) + \log \mathbb{E}_{q_\lambda(z \mid x, v_y)}\!\left[\frac{p_\theta(x \mid z, v_y)\, p(z)}{q_\lambda(z \mid x, v_y)}\right] \;\geq\; \log p(y) + \mathbb{E}_{q_\lambda(z \mid x, v_y)}\big[\log p_\theta(x \mid z, v_y)\big] - \mathrm{KL}\big[q_\lambda(z \mid x, v_y)\,\big\|\,p(z)\big] \qquad (12)$$

This coincides with the notationally simplified lower bound objective function given in Equation 4.

We now turn to the lower bound on the unlabelled data. To start, we marginalise over the labels on the unlabelled dataset:

$$\log p_\theta\big(x \mid \mathcal{D}_{\mathrm{lab}}\big) \;=\; \log \sum_y p(y) \int \mathrm{d}z\;\, p_\theta(x \mid z, v_y)\, p(z) \qquad (13)$$

where we no longer need to remove $x$ from $X_y$ in computing $v_y$ since, for the unlabelled data, $x \notin X_y$.

As was done for the labelled data, we construct a lower bound using variational inference. However, in this case, we require a variational distribution over both $z$ and $y$. We take:

$$q(z, y \mid x) \;=\; q_\lambda(z \mid x, v_y)\; q_\pi(y \mid x) \qquad (14)$$

which gives

$$\log p_\theta\big(x \mid \mathcal{D}_{\mathrm{lab}}\big) \;\geq\; \mathbb{E}_{q_\pi(y \mid x)}\Big[\, \mathbb{E}_{q_\lambda(z \mid x, v_y)}\big[\log p_\theta(x \mid z, v_y)\big] - \mathrm{KL}\big[q_\lambda(z \mid x, v_y)\,\big\|\,p(z)\big] + \log p(y) \Big] \;+\; \mathcal{H}\big[q_\pi(y \mid x)\big] \qquad (15)$$

Thus, we have Equation 6 augmented with the notational decorations that were omitted in Section 2.

Therefore, the objective

$$\mathcal{L}_{\mathrm{semi}} \;=\; \sum_{(x, y) \in \mathcal{D}_{\mathrm{lab}}} \mathcal{L}_{\mathrm{lab}}(x, y) \;+\; \sum_{x \in \mathcal{D}_{\mathrm{unlab}}} \mathcal{L}_{\mathrm{unlab}}(x) \qquad (16)$$

given in Equation 7, with $\mathcal{L}_{\mathrm{lab}}$ given in Equation 4 and $\mathcal{L}_{\mathrm{unlab}}$ given in Equation 6, is a lower bound on the data log likelihood.

Appendix B. Experimental Details

In this appendix we provide details of the experimental setup that was used to generate the results from Section 3.

For our implementation of EquiVAE, we use relatively standard neural networks. All of our experiments use implementations with well under 1 million parameters in total, converge within a few hours (on a Tesla K80 GPU), and are exposed to minimal hyperparameter tuning.

In particular, for the deterministic class-representation vector $v_y$ given in Equation 2, we parametrise $f_\phi$ using a 5-layer convolution network with 5x5 kernels and stride 2 (stride 1 in the first layer), followed by a dense hidden layer. The mean of these embeddings is then taken over the complementary samples, followed by another dense hidden layer and the final linear dense output layer. This is shown for an MNIST digit in the top shaded box of Figure 2. Our implementation uses a fixed number of filters in each convolution layer and of hidden units in the two subsequent dense layers for a 16-dimensional latent (the number of units in the dense layers is halved when using 8-dimensional latents, as in our semi-supervised experiments on MNIST).
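A PyTorch sketch of this invariant-embedding architecture is given below; the filter and hidden-unit counts are assumptions, since the exact values are not reproduced in this extract.

```python
import torch
import torch.nn as nn

class InvariantEmbedding(nn.Module):
    """Sketch of the invariant encoder: a 5-layer, 5x5-kernel CNN (stride 1
    then stride 2), a dense layer, a mean over the k complementary images,
    another dense layer, and a linear output.  Filter and unit counts here
    are illustrative assumptions."""

    def __init__(self, channels=1, latent_dim=16, filters=32, hidden=128):
        super().__init__()
        convs, in_ch = [], channels
        for i in range(5):
            convs += [nn.Conv2d(in_ch, filters, kernel_size=5,
                                stride=1 if i == 0 else 2, padding=2), nn.ReLU()]
            in_ch = filters
        self.conv = nn.Sequential(*convs)
        self.pre_mean = nn.Sequential(nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU())
        self.post_mean = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, latent_dim))

    def forward(self, complementary_x):            # shape (k, C, H, W)
        h = self.pre_mean(self.conv(complementary_x))
        return self.post_mean(h.mean(dim=0, keepdim=True)).squeeze(0)

v_y = InvariantEmbedding()(torch.rand(3, 1, 28, 28))   # k = 3 same-class images
```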

We parametrise the approximate posterior distribution over the equivariant latent as a diagonal-covariance normal distribution, $q_\lambda(z \mid x, v_y) = \mathcal{N}\big(z;\, \mu_\lambda(x, v_y),\, \sigma^2_\lambda(x, v_y)\big)$, following the SGVB algorithm (Kingma & Welling, 2014; Rezende et al., 2014). For $\mu_\lambda$ and $\sigma_\lambda$, we use the identical convolution architecture as for the invariant embedding network as an initial embedding for the data point $x$. This embedding is then concatenated with the output of a single dense layer that transforms $v_y$, the output of which is then passed to one more dense hidden layer for each of $\mu_\lambda$ and $\sigma_\lambda$ separately. This is shown in the bottom shaded box of Figure 2.

The generative model is based on DCGAN-style transposed convolutions (Radford et al., 2016), and is assumed to be a Bernoulli distribution for MNIST (Gaussian distribution for SVHN) over the conditionally independent image pixels. Both the invariant representation $v_y$ and the equivariant representation $z$ are separately passed through a single-layer dense network before being concatenated and passed through another dense layer. This flat embedding that combines both representations is then transpose convolved to produce the output image in a way that mirrors the 5-layer convolution network used to embed the representations in the first place. That is, we use the same numbers of hidden units in the first two dense layers and of filters in each transposed convolution layer as in the encoder, all with 5x5 kernels and stride 2, except the last layer, which is a stride-1 convolution layer (with padding to accommodate different image sizes).
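A corresponding decoder sketch follows, with a shorter transposed-convolution stack than the five-layer network described above, assumed filter counts, and a 7x7 starting grid for 28x28 Bernoulli outputs.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Sketch of the generative network: v_y and z each pass through a dense
    layer, are concatenated, pass through another dense layer, and are then
    upsampled by DCGAN-style transposed convolutions to Bernoulli logits.
    Filter counts and the 7x7 starting grid are illustrative assumptions."""

    def __init__(self, v_dim=16, z_dim=16, hidden=128, filters=32):
        super().__init__()
        self.filters = filters
        self.embed_v = nn.Sequential(nn.Linear(v_dim, hidden), nn.ReLU())
        self.embed_z = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU())
        self.merge = nn.Sequential(nn.Linear(2 * hidden, filters * 7 * 7), nn.ReLU())
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(filters, filters, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),   # 7 -> 14
            nn.ConvTranspose2d(filters, filters, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),   # 14 -> 28
            nn.Conv2d(filters, 1, 5, stride=1, padding=2),     # stride-1 final layer
        )

    def forward(self, v_y, z):
        h = self.merge(torch.cat([self.embed_v(v_y), self.embed_z(z)], dim=-1))
        return self.deconv(h.view(-1, self.filters, 7, 7))     # Bernoulli logits

logits = Decoder()(torch.zeros(4, 16), torch.randn(4, 16))
```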

In our semi-supervised experiments, we implement $q_\pi(y \mid x)$ using the same (5-CNN, 1-dense) encoding block to provide an initial embedding for $x$. This is then concatenated with stop_grad($f_\phi(x)$) and passed to a 2-layer dense dropout network with (128, 64) units. The use of stop_grad reflects the fact that $f_\phi$ is learning a highly relevant, invariant representation of $x$ that $q_\pi$ might as well get access to. However, we do not allow gradients to pass through this operation, since $f_\phi$ is meant to learn from the complementary data of known same-class members only.
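A sketch of this classifier, using PyTorch's `.detach()` as the analogue of stop_grad and placeholder encoders in place of the (5-CNN, 1-dense) block and $f_\phi$:

```python
import torch
import torch.nn as nn

# Placeholder encoders: `image_encoder` stands in for the (5-CNN, 1-dense)
# block and `f_phi` for the invariant embedding network.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
f_phi = nn.Sequential(nn.Flatten(), nn.Linear(784, 16))
classifier_head = nn.Sequential(
    nn.Linear(128 + 16, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, 10),
)

def label_logits(x):
    h = image_encoder(x)
    v = f_phi(x).detach()            # stop_grad: no gradients flow into f_phi
    return classifier_head(torch.cat([h, v], dim=-1))

log_q_y = torch.log_softmax(label_logits(torch.rand(8, 1, 28, 28)), dim=-1)
```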

As discussed in Section 2.1, the number of complementary samples $k$ used to reconstruct $x$ (see Equation 2) is chosen randomly at each training step in order to ensure that $f_\phi$ is insensitive to $k$. For our supervised experiments, where labelled data are plentiful, $k$ is randomly selected between 1 and $k_{\max}$, with $k_{\max} = 7$ for MNIST (and a separately chosen $k_{\max}$ for SVHN), whereas the semi-supervised experiments use their own values of $k_{\max}$ for MNIST and SVHN.

We perform standard, mild preprocessing on our data sets. MNIST is normalised so that each pixel value lies between 0 and 1. SVHN is normalised so that each pixel has zero mean and unit standard deviation over the entire dataset.

Finally, all activation functions that are not fixed by model outputs are taken to be rectified linear units. We use Adam (Kingma & Ba, 2015) for training with default settings, and choose a batch size of 32 at the beginning of training, which we double successively throughout training.
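A minimal sketch of this optimisation schedule, with placeholder model, data, and loss standing in for EquiVAE and the objective in Equation 7; the number of stages and epochs per stage are illustrative assumptions.

```python
import torch

model = torch.nn.Linear(784, 10)                   # placeholder model
optimiser = torch.optim.Adam(model.parameters())   # Adam with default settings

def loader_for(batch_size):                        # placeholder data
    data = torch.rand(1024, 784)
    return torch.utils.data.DataLoader(torch.utils.data.TensorDataset(data),
                                       batch_size=batch_size, shuffle=True)

def loss_fn(batch):                                # placeholder objective
    return model(batch).pow(2).mean()

batch_size = 32                                    # doubled successively below
for stage in range(3):
    for epoch in range(2):
        for (batch,) in loader_for(batch_size):
            optimiser.zero_grad()
            loss_fn(batch).backward()
            optimiser.step()
    batch_size *= 2
```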
