# Disentangling by Factorising

###### Abstract

We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon β-VAE by providing a better trade-off between disentanglement and reconstruction quality. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.

## 1 Introduction

Learning interpretable representations of data that expose semantic meaning has important consequences for artificial intelligence. Such representations are useful not only for standard downstream tasks such as supervised learning and reinforcement learning, but also for tasks such as transfer learning and zero-shot learning where humans excel but machines struggle (Lake et al., 2016). There have been multiple efforts in the deep learning community towards learning factors of variation in the data, commonly referred to as learning a disentangled representation. While there is no canonical definition for this term, we adopt the one due to Bengio et al. (2013): a representation where a change in one dimension corresponds to a change in one factor of variation, while being relatively invariant to changes in other factors. In particular, we assume that the data has been generated from a fixed number of independent factors of variation (we discuss the limitations of this assumption in Section 4). We focus on image data, where the effect of factors of variation is easy to visualise.

Using generative models has shown great promise in learning disentangled representations in images. Notably, semi-supervised approaches that require implicit or explicit knowledge about the true underlying factors of the data have excelled at disentangling (Kulkarni et al., 2015; Kingma et al., 2014; Reed et al., 2014; Siddharth et al., 2017; Hinton et al., 2011; Mathieu et al., 2016; Goroshin et al., 2015; Hsu et al., 2017; Denton & Birodkar, 2017). However, ideally we would like to learn these in an unsupervised manner, due to the following reasons: 1. Humans are able to learn factors of variation unsupervised (Perry et al., 2010). 2. Labels are costly as obtaining them requires a human in the loop. 3. Labels assigned by humans might be inconsistent or leave out the factors that are difficult for humans to identify.

β-VAE (Higgins et al., 2016) is a popular method for unsupervised disentangling based on the Variational Autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014) for generative modelling. It uses a modified version of the VAE objective with a larger weight ($\beta > 1$) on the KL divergence between the variational posterior and the prior, and has proven to be an effective and stable method for disentangling. One drawback of β-VAE is that reconstruction quality (compared to VAE) must be sacrificed in order to obtain better disentangling. The goal of our work is to obtain a better trade-off between disentanglement and reconstruction, allowing us to achieve better disentanglement without degrading reconstruction quality. In this work, we analyse the source of this trade-off and propose FactorVAE, which augments the VAE objective with a penalty that encourages the marginal distribution of representations to be factorial without substantially affecting the quality of reconstructions. This penalty is expressed as a KL divergence between this marginal distribution and the product of its marginals, and is optimised using a discriminator network following the divergence minimisation view of GANs (Nowozin et al., 2016; Mohamed & Lakshminarayanan, 2016). Our experimental results show that this approach achieves better disentanglement than β-VAE for the same reconstruction quality. We also point out the weaknesses of the disentangling metric of Higgins et al. (2016), and propose a new metric that addresses these shortcomings.

A popular alternative to -VAE is InfoGAN (Chen et al., 2016), which is based on the Generative Adversarial Net (GAN) framework (Goodfellow et al., 2014) for generative modelling. InfoGAN learns disentangled representations by rewarding the mutual information between the observations and a subset of latents. However at least in part due to its training stability issues (Higgins et al., 2016), there has been little empirical comparison between VAE-based methods and InfoGAN. Taking advantage of the recent developments in the GAN literature that help stabilise training, we include InfoWGAN-GP, a version of InfoGAN that uses Wasserstein distance (Arjovsky et al., 2017) and gradient penalty (Gulrajani et al., 2017), in our experimental evaluation.

In summary, we make the following contributions: 1) We introduce FactorVAE, a method for disentangling that gives higher disentanglement scores than β-VAE for the same reconstruction quality. 2) We identify the weaknesses of the disentanglement metric of Higgins et al. (2016) and propose a more robust alternative. 3) We give quantitative comparisons of FactorVAE and β-VAE against InfoGAN’s WGAN-GP counterpart for disentanglement.

## 2 Trade-off between Disentanglement and Reconstruction in β-VAE

We motivate our approach by analysing where the disentanglement and reconstruction trade-off arises in the β-VAE objective. First, we introduce the notation and architecture of our VAE framework. We assume that observations $x \in \mathcal{X}$ are generated by combining $K$ underlying factors $f = (f_1, \ldots, f_K)$. These observations are modelled using a real-valued latent/code vector $z \in \mathbb{R}^d$, interpreted as the representation of the data. The generative model is defined by the standard Gaussian prior $p(z) = \mathcal{N}(0, I)$, intentionally chosen to be a factorised distribution, and the decoder $p_\theta(x|z)$ parameterised by a neural net. The variational posterior for an observation is $q_\theta(z|x) = \prod_{j=1}^{d} \mathcal{N}\big(z_j \,\big|\, \mu_j(x), \sigma_j^2(x)\big)$, with the mean and variance produced by the encoder, also parameterised by a neural net. (In the rest of the paper we will omit the dependence of $p$ and $q$ on their parameters for notational convenience.) The variational posterior can be seen as the distribution of the representation corresponding to the data point $x$. The distribution of representations for the entire data set is then given by

$$q(z) = \mathbb{E}_{p_{\text{data}}(x)}[q(z|x)] = \frac{1}{N}\sum_{i=1}^{N} q(z|x^{(i)}), \qquad (1)$$

which is known as the marginal posterior or aggregate posterior, where $p_{\text{data}}$ is the empirical data distribution. A disentangled representation would have each $z_j$ correspond to precisely one underlying factor $f_k$. Since we assume that these factors vary independently, we wish for a factorial distribution $q(z) = \prod_{j=1}^{d} q(z_j)$.

The β-VAE objective

$$\frac{1}{N}\sum_{i=1}^{N}\Big[\mathbb{E}_{q(z|x^{(i)})}\big[\log p(x^{(i)}|z)\big] - \beta\, KL\big(q(z|x^{(i)})\,\|\,p(z)\big)\Big]$$

is a variational lower bound on $\mathbb{E}_{p_{\text{data}}(x)}[\log p(x)]$ for $\beta \geq 1$, reducing to the VAE objective for $\beta = 1$. Its first term can be interpreted as the negative reconstruction error, and the second term as a complexity penalty that acts as a regulariser. We may further break down this KL term as (Hoffman & Johnson, 2016; Makhzani & Frey, 2017)

$$\mathbb{E}_{p_{\text{data}}(x)}\big[KL\big(q(z|x)\,\|\,p(z)\big)\big] = I(x; z) + KL\big(q(z)\,\|\,p(z)\big),$$

where $I(x; z)$ is the mutual information between $x$ and $z$ under the joint distribution $p_{\text{data}}(x)\,q(z|x)$. See Appendix C for the derivation. Penalising the $KL(q(z)\,\|\,p(z))$ term pushes $q(z)$ towards the factorial prior $p(z)$, encouraging independence in the dimensions of $z$ and thus disentangling. Penalising $I(x; z)$, on the other hand, reduces the amount of information about $x$ stored in $z$, which can lead to poor reconstructions for high values of $\beta$ (Makhzani & Frey, 2017). Thus making $\beta$ larger than 1, penalising both terms more, leads to better disentanglement but reduces reconstruction quality. When this reduction is severe, there is insufficient information about the observation in the latents, making it impossible to recover the true factors. Therefore there exists a value of $\beta > 1$ that gives the highest disentanglement, but results in a higher reconstruction error than a VAE.
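As a concrete reference point, both terms of the β-VAE objective are simple to compute for a diagonal-Gaussian encoder and a Bernoulli decoder. The sketch below is our own illustration, not the paper's implementation (function names are ours, and the Bernoulli likelihood is an assumption; the paper's encoders/decoders are neural nets whose outputs would feed into these formulas):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I)), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)

def beta_vae_loss(x, x_recon_logits, mu, log_var, beta=4.0):
    """Negative beta-VAE objective for a batch of binary observations.

    x:              (B, D) binary observations
    x_recon_logits: (B, D) decoder logits for a Bernoulli likelihood
    mu, log_var:    (B, d) encoder outputs defining q(z|x)
    """
    # Bernoulli log-likelihood log p(x|z), i.e. the negative reconstruction error
    log_px = np.sum(x * x_recon_logits - np.logaddexp(0.0, x_recon_logits), axis=1)
    kl = gaussian_kl(mu, log_var)
    # beta-VAE maximises E[log p(x|z)] - beta * KL; return the loss to minimise
    return np.mean(-log_px + beta * kl)
```

Setting `beta=1.0` recovers the (negative) standard VAE bound; larger `beta` penalises the KL term more, which is exactly the trade-off discussed above.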

## 3 Total Correlation Penalty and FactorVAE

Penalising $I(x; z)$ more than a VAE does might be neither necessary nor desirable for disentangling. For example, InfoGAN disentangles by encouraging $I(x; c)$ to be high, where $c$ is a subset of the latent variables. (Note, however, that $I(x; z)$ in β-VAE is defined under the joint distribution of the data and their encoding distribution, $p_{\text{data}}(x)\,q(z|x)$, whereas $I(x; c)$ in InfoGAN is defined under the joint distribution of the prior on $c$ and the decoding distribution.) Hence we motivate FactorVAE by augmenting the VAE objective with a term that directly encourages independence in the code distribution, arriving at the following objective:

$$\frac{1}{N}\sum_{i=1}^{N}\Big[\mathbb{E}_{q(z|x^{(i)})}\big[\log p(x^{(i)}|z)\big] - KL\big(q(z|x^{(i)})\,\|\,p(z)\big)\Big] - \gamma\, KL\big(q(z)\,\|\,\bar{q}(z)\big), \qquad (2)$$

where $\bar{q}(z) := \prod_{j=1}^{d} q(z_j)$. Note that this is also a lower bound on the marginal log likelihood $\mathbb{E}_{p_{\text{data}}(x)}[\log p(x)]$. $KL(q(z)\,\|\,\bar{q}(z))$ is known as Total Correlation (TC, Watanabe, 1960), a popular measure of dependence for multiple random variables. In our case this term is intractable, since both $q(z)$ and $\bar{q}(z)$ involve mixtures with a large number of components, and the direct Monte Carlo estimate requires a pass through the entire data set for each evaluation. (We have also tried using a batch estimate of $q(z)$, but this did not work; see Appendix D for details.) Hence we take an alternative approach to optimising this term. We start by observing that we can sample from $q(z)$ efficiently by first choosing a datapoint uniformly at random and then sampling from the corresponding $q(z|x^{(i)})$. We can also sample from $\bar{q}(z)$ by generating $d$ samples from $q(z)$ and then ignoring all but one dimension for each sample. A more efficient alternative involves sampling a batch from $q(z)$ and then randomly permuting across the batch for each latent dimension (see Alg. 1). This is a standard trick used in the independence testing literature (Arcones & Gine, 1992), and as long as the batch is large enough, the distribution of these samples will closely approximate $\bar{q}(z)$.
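The permutation trick of Alg. 1 fits in a few lines; the sketch below (the function name is ours) shuffles each latent dimension independently across the batch, destroying the dependence between dimensions while leaving every per-dimension marginal unchanged:

```python
import numpy as np

def permute_dims(z, rng):
    """Approximate samples from the product of marginals q-bar(z).

    z:   (B, d) batch of samples from q(z)
    rng: numpy random Generator

    Each column (latent dimension) is permuted independently across the
    batch, which breaks dependence between dimensions but leaves every
    per-dimension marginal distribution unchanged.
    """
    z_perm = np.empty_like(z)
    for j in range(z.shape[1]):
        z_perm[:, j] = z[rng.permutation(z.shape[0]), j]
    return z_perm
```

Because each column is only reordered, the marginal of every dimension is preserved exactly within the batch; only the joint structure is randomised.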

Having access to samples from both distributions allows us to minimise their KL divergence using the density-ratio trick (Nguyen et al., 2010; Sugiyama et al., 2012), which involves training a classifier/discriminator to approximate the density ratio that arises in the KL term. Suppose we have a discriminator $D$ (in our case an MLP) that outputs an estimate $D(z)$ of the probability that its input is a sample from $q(z)$ rather than from $\bar{q}(z)$. Then we have

$$TC(z) = KL\big(q(z)\,\|\,\bar{q}(z)\big) = \mathbb{E}_{q(z)}\left[\log \frac{q(z)}{\bar{q}(z)}\right] \approx \mathbb{E}_{q(z)}\left[\log \frac{D(z)}{1 - D(z)}\right]. \qquad (3)$$

We train the discriminator and the VAE jointly. In particular, the VAE parameters are updated using the objective in Eqn. (2), with the TC term replaced by the discriminator-based approximation from Eqn. (3). The discriminator is trained to classify between samples from $q(z)$ and $\bar{q}(z)$, thus learning to approximate the density ratio needed for estimating TC. See Alg. 2 for pseudocode of FactorVAE.
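The density-ratio estimate of TC can be checked on a toy example where the true TC is known in closed form. The demo below is our own construction, not the paper's setup: the code distribution is a correlated 2-D Gaussian (TC $= -\tfrac{1}{2}\log(1-\rho^2)$), the "marginals" samples come from the permutation trick, and the MLP discriminator is replaced by a logistic regression on quadratic features, which is well-specified here because the log density ratio of Gaussians is quadratic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 20000, 0.8

# "q(z)": correlated 2-D Gaussian with standard-normal marginals.
cov = np.array([[1.0, rho], [rho, 1.0]])
z_q = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# "q-bar(z)": permute each dimension independently across the batch (Alg. 1).
z_bar = np.column_stack([rng.permutation(z_q[:, j]) for j in range(2)])

def features(z):
    # Quadratic features: log q/q-bar is quadratic for Gaussians, so a
    # linear-in-features classifier can represent the true density ratio.
    z1, z2 = z[:, 0], z[:, 1]
    return np.column_stack([z1, z2, z1**2, z2**2, z1 * z2])

X = np.vstack([features(z_q), features(z_bar)])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = sample came from q(z)

# Logistic-regression "discriminator" trained by full-batch gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.3 * (X.T @ (p - y) / len(y))
    b -= 0.3 * np.mean(p - y)

# TC estimate via Eqn. (3): E_q[ log D(z)/(1 - D(z)) ], i.e. the mean logit.
tc_est = np.mean(features(z_q) @ w + b)
tc_true = -0.5 * np.log(1.0 - rho**2)  # analytic TC of this Gaussian
```

With enough samples and training steps, `tc_est` lands close to `tc_true`; under-training the classifier biases the estimate towards zero, consistent with the underestimation discussed in Section 6.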

It is important to note that low TC is necessary but not sufficient for meaningful disentangling. For example, when $q(z|x) = p(z)$ for all $x$, TC $= 0$ but $z$ carries no information about the data. Thus having low TC is only meaningful when we can preserve information in the latents, which is why controlling for reconstruction error is important.

In the GAN literature, divergence minimisation is usually done between two distributions over the data space, which is often very high dimensional (e.g. images). As a result, the two distributions often have disjoint support, making training unstable, especially when the discriminator is strong. Hence it is necessary to use tricks to weaken the discriminator, such as instance noise (Sønderby et al., 2016), or to replace the discriminator with a critic, as in Wasserstein GANs (Arjovsky et al., 2017). In this work, we minimise divergence between two distributions over the latent space (as in e.g. (Mescheder et al., 2017)), which is typically much lower dimensional, and the two distributions have overlapping support. We observe that training is stable for sufficiently large batch sizes (e.g. a batch size of 64 worked well in our experiments), allowing us to use a strong discriminator.

## 4 A New Metric for Disentanglement

The definition of disentanglement we use in this paper, where a change in one dimension of the representation corresponds to a change in exactly one factor of variation, is clearly a simplistic one. It does not allow correlations among the factors or hierarchies over them. Thus this definition seems more suited to synthetic data with independent factors of variation than to most realistic data sets. However, as we will show below, robust disentanglement is not a fully solved problem even in this simple setting. One obstacle on the way to this first milestone is the absence of a sound quantitative metric for measuring disentanglement.

A popular method of measuring disentanglement is by inspecting latent traversals: visualising the change in reconstructions while traversing one dimension of the latent space at a time. Although latent traversals can be a useful indicator of when a model has failed to disentangle, the qualitative nature of this approach makes it unsuitable for comparing algorithms reliably. Doing this would require inspecting a multitude of latent traversals over multiple reference images, random seeds, and points during training. Having a human in the loop to assess the traversals is also too time-consuming and subjective. Unfortunately, for data sets that do not have the ground truth factors of variation available, currently this is the only viable option for assessing disentanglement.

Higgins et al. (2016) proposed a supervised metric that attempts to quantify disentanglement when the ground truth factors of a data set are given. The metric is the error rate of a linear classifier that is trained as follows. Choose a factor $k$; generate data with this factor fixed but all other factors varying randomly; obtain their representations (defined to be the mean of $q(z|x)$); take the absolute value of the pairwise differences of these representations. Then the mean of these statistics across the pairs gives one training input for the classifier, and the fixed factor index $k$ is the corresponding training output (see top of Figure 2). So if the representations were perfectly disentangled, we would see zeros in the dimension of the training input that corresponds to the fixed factor of variation, and the classifier would learn to map the index of the zero value to the index of the factor.
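A minimal sketch of how one training example for this metric is constructed (the function name is ours, and we assume the encoder means for a batch of images sharing one fixed factor have already been computed):

```python
import numpy as np

def higgins_metric_input(z_means, fixed_factor_index):
    """Build one (input, target) pair for the Higgins et al. (2016) metric.

    z_means: (L, d) encoder means for L images that share one fixed factor
             of variation, with all other factors sampled at random.

    Returns the mean absolute pairwise difference per latent dimension
    (the classifier input) and the fixed factor index (the target).
    """
    # Pair up consecutive representations and take |z_a - z_b| per pair.
    z_a, z_b = z_means[0::2], z_means[1::2]
    diffs = np.abs(z_a - z_b)        # (L // 2, d)
    return diffs.mean(axis=0), fixed_factor_index
```

In the perfectly disentangled case, the input dimension corresponding to the fixed factor is exactly zero, which is the signal the linear classifier is meant to exploit.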

However, this metric has several weaknesses. Firstly, it could be sensitive to hyperparameters of the linear classifier optimisation, such as the choice of the optimiser and its hyperparameters, weight initialisation, and the number of training iterations. Secondly, having a linear classifier is not so intuitive: we could get representations where each factor corresponds to a linear combination of dimensions instead of a single dimension. Finally, and most importantly, the metric has a failure mode: it gives 100% accuracy even when only $K - 1$ factors out of $K$ have been disentangled; to predict the remaining factor, the classifier simply learns to detect when all the values corresponding to the other $K - 1$ factors are non-zero. An example of such a case is shown in Figure 3.

To address these weaknesses, we propose a new disentanglement metric as follows. Choose a factor $k$; generate data with this factor fixed but all other factors varying randomly; obtain their representations; normalise each dimension by its empirical standard deviation over the full data (or a large enough random subset); take the empirical variance in each dimension of these normalised representations. (For discrete latents we can use Gini's definition of variance (Gini, 1971); see Appendix B for details.) Then the index of the dimension with the lowest variance and the target index $k$ provide one training input/output example for the classifier (see bottom of Figure 2). Thus if the representation is perfectly disentangled, the empirical variance in the dimension corresponding to the fixed factor will be 0. We normalise the representations so that the lowest-variance dimension is invariant to rescaling of the representations in each dimension. Since both inputs and outputs lie in a discrete space, the optimal classifier is the majority-vote classifier (see Appendix B for details), and the metric is the error rate of the classifier. The resulting classifier is a deterministic function of the training data, hence there are no optimisation hyperparameters to tune. We also believe that this metric is conceptually simpler and more natural than the previous one. Most importantly, it circumvents the failure mode of the earlier metric, since the classifier needs to see the lowest variance in a latent dimension for a given factor to classify it correctly.
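A sketch of the resulting majority-vote metric (our code and naming; it assumes the lowest-variance latent dimension has already been computed for each example, from representations normalised per dimension as described):

```python
import numpy as np

def majority_vote_score(lowest_var_dims, targets, num_latents, num_factors):
    """Majority-vote disentanglement metric.

    lowest_var_dims: (M,) index of the lowest-variance latent dimension for
                     each training example
    targets:         (M,) index of the factor that was held fixed

    The optimal classifier maps each latent dimension to the factor it was
    most often paired with; the score returned is its training accuracy
    (the metric in the text is the corresponding error rate, 1 - accuracy).
    """
    votes = np.zeros((num_latents, num_factors), dtype=int)
    for j, k in zip(lowest_var_dims, targets):
        votes[j, k] += 1
    # Majority-vote classifier: latent dim j predicts argmax_k votes[j, k].
    predicted = votes.argmax(axis=1)[lowest_var_dims]
    return np.mean(predicted == targets)
```

Because the classifier is just an argmax over a count matrix, it is a deterministic function of the training data with no optimisation hyperparameters, which is the point made above.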

We think developing a reliable unsupervised disentangling metric that does not use the ground truth factors is an important direction for future research, since unsupervised disentangling is precisely useful for the scenario where we do not have access to the ground truth factors. With this in mind, we believe that having a reliable supervised metric is still valuable as it can serve as a gold standard for evaluating unsupervised metrics.

## 5 Related Work

There are several recent works that use a discriminator to optimise a divergence to encourage independence in the latent codes. Adversarial Autoencoder (AAE, Makhzani et al., 2015) removes the $I(x; z)$ term from the KL decomposition of the VAE objective and maximises the negative reconstruction error minus $KL(q(z)\,\|\,p(z))$ via the density-ratio trick, showing applications in semi-supervised classification and unsupervised clustering. This means that the AAE objective is not a lower bound on the log marginal likelihood. Although optimising a lower bound is not strictly necessary for disentangling, it does ensure that we have a valid generative model; having a generative model with disentangled latents has the benefit of being a single model that can be useful for various tasks, e.g. planning for model-based RL, visual concept learning and semi-supervised learning, to name a few. In PixelGAN Autoencoders (Makhzani & Frey, 2017), the same objective is used to study the decomposition of information between the latent code and the decoder. The authors state that adding noise to the inputs of the encoder is crucial, which suggests that limiting the information that the code contains about the input is essential and that the $I(x; z)$ term should not be dropped from the VAE objective. Brakel & Bengio (2017) also use a discriminator to penalise the Jensen-Shannon divergence between the distribution of codes and the product of its marginals. However, they use the GAN loss with deterministic encoders and decoders, and only explore their technique in the context of Independent Component Analysis source separation.

Early works on unsupervised disentangling include (Schmidhuber, 1992) which attempts to disentangle codes in an autoencoder by penalising predictability of one latent dimension given the others and (Desjardins et al., 2012) where a variant of a Boltzmann Machine is used to disentangle two factors of variation in the data. More recently, Achille & Soatto (2018) have used a loss function that penalises TC in the context of supervised learning. They show that their approach can be extended to the VAE setting, but do not perform any experiments on disentangling to support the theory. In a concurrent work, Kumar et al. (2018) used moment matching in VAEs to penalise the covariance between the latent dimensions, but did not constrain the mean or higher moments. We provide the objectives used in these related methods and show experimental results on disentangling performance, including AAE, in Appendix F.

There have been various works that use the notion of predictability to quantify disentanglement, mostly predicting the value of the ground truth factors $f$ from the latent code $z$. This dates back to Yang & Amari (1997), who learn a linear map from representations to factors in the context of linear ICA, and quantify how close this map is to a permutation matrix. More recently, Eastwood & Williams (2018) have extended this idea to disentanglement by training a Lasso regressor to map $z$ to $f$ and using its trained weights to quantify disentanglement. Like other regression-based approaches, this one introduces hyperparameters such as the optimiser and the Lasso penalty coefficient. The metric of Higgins et al. (2016), as well as the one we propose, predicts the fixed factor $k$ from the representations $z$ of images with $f_k$ fixed but all other factors varying randomly. Schmidhuber (1992) quantifies predictability between the different dimensions of $z$, using a predictor that is trained to predict one dimension $z_j$ from the remaining dimensions.

Invariance and equivariance are frequently considered to be desirable properties of representations in the literature (Goodfellow et al., 2009; Kivinen & Williams, 2011; Lenc & Vedaldi, 2015). A representation is said to be invariant for a particular task if it does not change when nuisance factors of the data, that are irrelevant to the task, are changed. An equivariant representation changes in a stable and predictable manner when altering a factor of variation. A disentangled representation, in the sense used in this paper, is equivariant, since changing one factor of variation will change one dimension of a disentangled representation in a predictable manner. Given a task, it is easy to obtain an invariant representation from the disentangled representation by ignoring the dimensions encoding the nuisance factors for the task (Cohen & Welling, 2014).

Building on a preliminary version of this paper, (Chen et al., 2018) recently proposed a minibatch-based alternative to our density-ratio-trick-based method for estimating the Total Correlation and introduced an information-theoretic disentangling metric.

## 6 Experiments

We compare FactorVAE to β-VAE on data sets with (i) known generative factors: 1) 2D Shapes (Matthey et al., 2017): 737,280 binary images of 2D shapes with ground truth factors [number of values]: shape[3], scale[6], orientation[40], x-position[32], y-position[32]; 2) 3D Shapes: 480,000 RGB images of 3D shapes with ground truth factors: shape[4], scale[8], orientation[15], floor colour[10], wall colour[10], object colour[10]; and (ii) unknown generative factors: 3) 3D Faces (Paysan et al., 2009): 239,840 grey-scale images of 3D faces; 4) 3D Chairs (Aubry et al., 2014): 86,366 RGB images of chair CAD models; 5) CelebA (cropped version) (Liu et al., 2015): 202,599 RGB images of celebrity faces. The experimental details, such as encoder/decoder architectures and hyperparameter settings, are in Appendix A. The details of the disentanglement metrics, along with a sensitivity analysis with respect to their hyperparameters, are given in Appendix B.

From Figure 4, we see that FactorVAE gives much better disentanglement scores than VAEs ($\beta = 1$), while barely sacrificing reconstruction error, highlighting the disentangling effect of adding the Total Correlation penalty to the VAE objective. The best disentanglement scores for FactorVAE are noticeably better than those for β-VAE given the same reconstruction error. This can be seen more clearly in Figure 5, where the best mean disentanglement of FactorVAE is around 0.82, significantly higher than that of β-VAE, which is around 0.73, both with reconstruction error around 45. From Figure 6, we can see that both models are capable of finding x-position, y-position, and scale, but struggle to disentangle orientation and shape, β-VAE especially. For this data set, neither method can robustly capture shape, the discrete factor of variation. (This is partly because learning discrete factors would require using discrete latent variables instead of Gaussians; jointly modelling discrete and continuous factors of variation is a non-trivial problem that needs further research.)

As a sanity check, we also evaluated the correlation between our metric and the metric in Higgins et al. (2016): Pearson (linear correlation coefficient): 0.404, Kendall (proportion of pairs that have the same ordering): 0.310, Spearman (linear correlation of the rankings): 0.444, all with p-value 0.000. Hence the two metrics show a fairly high positive correlation as expected.

We have also examined how the discriminator's estimate of the Total Correlation (TC) behaves, and the effect of $\gamma$ on the true TC. From Figure 7, we observe that the discriminator consistently underestimates the true TC, as also confirmed in (Rosca et al., 2018). However, the true TC decreases throughout training, and a higher $\gamma$ leads to lower TC, so the gradients obtained using the discriminator are sufficient for encouraging independence in the code distribution.

We then evaluated InfoWGAN-GP, the counterpart of InfoGAN that uses the Wasserstein distance and gradient penalty. See Appendix G for an overview. One advantage of InfoGAN is that the Monte Carlo estimate of its objective is differentiable with respect to its parameters even for discrete codes $c$, which makes gradient-based optimisation straightforward. In contrast, VAE-based methods that rely on the reparameterisation trick for gradient-based optimisation require $z$ to be a reparameterisable continuous random variable, and alternative approaches require various variance reduction techniques for gradient estimation (Mnih & Rezende, 2016; Maddison et al., 2017). Thus we might expect Info(W)GAN(-GP) to show better disentangling in cases where some factors are discrete. Hence we use 4 continuous latents (one for each continuous factor) and one categorical latent with 3 categories (one for each shape). We tuned the weight of the mutual information term in Info(W)GAN(-GP), the number of noise variables, and the learning rates of the generator and discriminator.

However, from Figure 8 we can see that the disentanglement scores are disappointingly low. From the latent traversals in Figure 9, we can see that the model learns only the scale factor, and tries to put positional information in the discrete latent code, which is one reason for the low disentanglement score. Using 5 continuous codes and no categorical codes did not improve the disentanglement scores, however. InfoGAN with early stopping (before training instability occurs; see Appendix H) also gave similar results. The fact that some latent traversals give blank reconstructions indicates that the model does not generalise well to all parts of the latent space.

One reason for InfoWGAN-GP's poor performance on this data set could be that InfoGAN is sensitive to the generator and discriminator architecture, which is one thing we did not tune extensively. We use an architecture similar to that of the VAE-based approaches for 2D shapes for a fair comparison, but have also tried a bigger architecture, which gave similar results (see Appendix H). If architecture search is indeed important, this would be a weakness of InfoGAN relative to FactorVAE and β-VAE, which are both much more robust to the architecture choice. In Appendix H, we check that we can replicate the results of Chen et al. (2016) on MNIST using InfoWGAN-GP, verify that it makes training stable compared to InfoGAN, and give implementation details with further empirical studies of InfoGAN and InfoWGAN-GP.

We now show results on the 3D Shapes data, a more complex data set of 3D scenes with additional features such as shadows and background (sky). We train both β-VAE and FactorVAE for 1M iterations. Figure 10 again shows that FactorVAE achieves much better disentanglement with barely any increase in reconstruction error compared to the VAE. Moreover, while the top mean disentanglement scores for FactorVAE and β-VAE are similar, the reconstruction error is lower for FactorVAE: 3515 as compared to 3570 for β-VAE. The latent traversals in Figure 11 show that both models are able to capture the factors of variation in the best-case scenario. Looking at latent traversals across many random seeds, however, makes it evident that both models struggled to disentangle the factors for shape and scale.

To show that FactorVAE also gives a valid generative model for both 2D Shapes and 3D Shapes, we present the log marginal likelihood evaluated on the entire data set together with samples from the generative model in Appendix E.

We also show results for β-VAE and FactorVAE experiments on the data sets with unknown generative factors, namely 3D Chairs, 3D Faces, and CelebA. Note that inspecting latent traversals is the only evaluation method possible here. We can see from Figure 12 (and Figures 38 and 39 in Appendix I) that FactorVAE has smaller reconstruction error compared to β-VAE, and is capable of learning sensible factors of variation, as shown in the latent traversals in Figures 13, 14 and 15. Unfortunately, as explained in Section 4, latent traversals tell us little about the robustness of our method.

## 7 Conclusion and Discussion

We have introduced FactorVAE, a novel method for disentangling that achieves better disentanglement scores than β-VAE on the 2D Shapes and 3D Shapes data sets for the same reconstruction quality. Moreover, we have identified weaknesses of the commonly used disentanglement metric of Higgins et al. (2016), and proposed an alternative metric that is conceptually simpler, is free of hyperparameters, and avoids the failure mode of the former. Finally, we have performed an experimental evaluation of disentangling for the VAE-based methods and InfoWGAN-GP, a more stable variant of InfoGAN, and identified its weaknesses relative to the VAE-based methods.

One of the limitations of our approach is that low Total Correlation is necessary but not sufficient for disentangling of independent factors of variation. For example, if all but one of the latent dimensions were to collapse to the prior, the TC would be 0 but the representation would not be disentangled. Our disentanglement metric also requires us to be able to generate samples holding one factor fixed, which may not always be possible, for example when our training set does not cover all possible combinations of factors. The metric is also unsuitable for data with non-independent factors of variation.

For future work, we would like to use discrete latent variables to model discrete factors of variation and investigate how to reliably capture combinations of discrete and continuous factors using discrete and continuous latents.

## Acknowledgements

We thank Chris Burgess and Nick Watters for providing the data sets and helping to set them up, and thank Guillaume Desjardins, Sergey Bartunov, Mihaela Rosca, Irina Higgins and Yee Whye Teh for helpful discussions.

## References

- Achille & Soatto (2018) Achille, A. and Soatto, S. Information Dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
- Arcones & Gine (1992) Arcones, M. A. and Gine, E. On the bootstrap of U and V statistics. The Annals of Statistics, pp. 655–674, 1992.
- Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein Generative Adversarial Networks. In ICML, 2017.
- Aubry et al. (2014) Aubry, M., Maturana, D., Efros, A. A., Russell, B. C., and Sivic, J. Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of cad models. In CVPR, 2014.
- Ba et al. (2016) Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
- Bengio et al. (2013) Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
- Brakel & Bengio (2017) Brakel, P. and Bengio, Y. Learning independent features with adversarial nets for non-linear ICA. arXiv preprint arXiv:1710.05050, 2017.
- Chen et al. (2018) Chen, T. Q., Li, X., Grosse, R., and Duvenaud, D. Isolating sources of disentanglement in variational autoencoders. arXiv preprint arXiv:1802.04942, 2018.
- Chen et al. (2016) Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing Generative Adversarial Nets. In NIPS, 2016.
- Cohen & Welling (2014) Cohen, T. and Welling, M. Learning the irreducible representations of commutative lie groups. In ICML, 2014.
- Denton & Birodkar (2017) Denton, E. L. and Birodkar, V. Unsupervised learning of disentangled representations from video. In NIPS, 2017.
- Desjardins et al. (2012) Desjardins, G., Courville, A., and Bengio, Y. Disentangling factors of variation via generative entangling. arXiv preprint arXiv:1210.5474, 2012.
- Duchi et al. (2011) Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12(Jul):2121–2159, 2011.
- Eastwood & Williams (2018) Eastwood, C. and Williams, C. A framework for the quantitative evaluation of disentangled representations. In ICLR, 2018.
- Gini (1971) Gini, C. W. Variability and mutability, contribution to the study of statistical distributions and relations. Journal of American Statistical Association, 66:534–544, 1971.
- Goodfellow et al. (2009) Goodfellow, I., Lee, H., Le, Q. V., Saxe, A., and Ng, A. Y. Measuring invariances in deep networks. In NIPS, 2009.
- Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative Adversarial Nets. In NIPS, 2014.
- Goroshin et al. (2015) Goroshin, R., Bruna, J., Tompson, J., Eigen, D., and LeCun, Y. Unsupervised learning of spatiotemporally coherent metrics. In ICCV, 2015.
- Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. Improved training of wasserstein GANs. In NIPS, 2017.
- Higgins et al. (2016) Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. Beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.
- Hinton et al. (2011) Hinton, G. E., Krizhevsky, A., and Wang, S. D. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pp. 44–51. Springer, 2011.
- Hoffman & Johnson (2016) Hoffman, M. D. and Johnson, M. J. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS, 2016.
- Hsu et al. (2017) Hsu, W. N., Zhang, Y., and Glass, J. Unsupervised learning of disentangled and interpretable representations from sequential data. In NIPS, 2017.
- Ioffe & Szegedy (2015) Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
- Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015.
- Kingma & Welling (2014) Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. 2014.
- Kingma et al. (2014) Kingma, D. P., Mohamed, S., Rezende, D. J., and Welling, M. Semi-supervised learning with deep generative models. In NIPS, 2014.
- Kivinen & Williams (2011) Kivinen, J. J. and Williams, C. Transformation equivariant boltzmann machines. In International Conference on Artificial Neural Networks, 2011.
- Kulkarni et al. (2015) Kulkarni, T., Whitney, W. F., Kohli, P., and Tenenbaum, J. Deep convolutional inverse graphics network. In NIPS, 2015.
- Kumar et al. (2018) Kumar, A., Sattigeri, P., and Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018.
- Lake et al. (2016) Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. Building machines that learn and think like people. Behavioral and Brain Sciences, pp. 1–101, 2016.
- Lenc & Vedaldi (2015) Lenc, K. and Vedaldi, A. Understanding image representations by measuring their equivariance and equivalence. In CVPR, 2015.
- Liu et al. (2015) Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015.
- Maddison et al. (2017) Maddison, C. J., Mnih, A., and Teh, Y. W. The CONCRETE distribution: A continuous relaxation of discrete random variables. In ICLR, 2017.
- Makhzani & Frey (2017) Makhzani, A. and Frey, B. PixelGAN autoencoders. In NIPS, 2017.
- Makhzani et al. (2015) Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., and Frey, B. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
- Mathieu et al. (2016) Mathieu, M. F., Zhao, J. J., Ramesh, A., Sprechmann, P., and LeCun, Y. Disentangling factors of variation in deep representation using adversarial training. In NIPS, 2016.
- Matthey et al. (2017) Matthey, L., Higgins, I., Hassabis, D., and Lerchner, A. dSprites: Disentanglement testing Sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.
- Mescheder et al. (2017) Mescheder, L., Nowozin, S., and Geiger, A. Adversarial variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks. In ICML, 2017.
- Mnih & Rezende (2016) Mnih, A. and Rezende, D. J. Variational inference for Monte Carlo objectives. In ICML, 2016.
- Mohamed & Lakshminarayanan (2016) Mohamed, S. and Lakshminarayanan, B. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
- Nguyen et al. (2010) Nguyen, X., Wainwright, M. J., and Jordan, M. I. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 2010.
- Nowozin et al. (2016) Nowozin, S., Cseke, B., and Tomioka, R. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.
- Paysan et al. (2009) Paysan, P., Knothe, R., Amberg, B., Romdhani, S., and Vetter, T. A 3D face model for pose and illumination invariant face recognition. In Proceedings of the IEEE International Conference on Advanced Video and Signal based Surveillance, pp. 296–301, 2009.
- Perry et al. (2010) Perry, G., Rolls, E. T., and Stringer, S. M. Continuous transformation learning of translation invariant representations. Experimental Brain Research, 204(2):255–270, 2010.
- Reed et al. (2014) Reed, S., Sohn, K., Zhang, Y., and Lee, H. Learning to disentangle factors of variation with manifold interaction. In ICML, 2014.
- Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
- Rosca et al. (2018) Rosca, M., Lakshminarayanan, B., and Mohamed, S. Distribution matching in variational inference. arXiv preprint arXiv:1802.06847, 2018.
- Schmidhuber (1992) Schmidhuber, J. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
- Siddharth et al. (2017) Siddharth, N., Paige, B., Van de Meent, J. W., Desmaison, A., Wood, F., Goodman, N. D., Kohli, P., and Torr, P. H. S. Learning disentangled representations with semi-supervised deep generative models. In NIPS, 2017.
- Sønderby et al. (2016) Sønderby, C. K., Caballero, J., Theis, L., Shi, W., and Huszár, F. Amortised MAP inference for image super-resolution. In ICLR, 2016.
- Sugiyama et al. (2012) Sugiyama, M., Suzuki, T., and Kanamori, T. Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation. Annals of the Institute of Statistical Mathematics, 64(5):1009–1044, 2012.
- Watanabe (1960) Watanabe, S. Information theoretical analysis of multivariate correlation. IBM Journal of research and development, 4(1):66–82, 1960.
- Yang & Amari (1997) Yang, H. H. and Amari, S. I. Adaptive online learning algorithms for blind separation: maximum entropy and minimum mutual information. Neural computation, 9(7):1457–1482, 1997.

## Appendix

## Appendix A Experimental Details for FactorVAE and β-VAE

We use a convolutional neural network for the encoder, a deconvolutional neural network for the decoder, and a multi-layer perceptron (MLP) for the discriminator in FactorVAE in the experiments on all data sets. We use [0,1]-normalised data as targets for the mean of a Bernoulli distribution, using negative cross-entropy as the reconstruction loss and the Adam optimiser (Kingma & Ba, 2015) for the VAE updates, as in Higgins et al. (2016). We also use Adam for the discriminator updates, with the learning rate tuned per data set over a small grid: one value shared by 2D Shapes and 3D Faces, another by 3D Shapes, 3D Chairs and CelebA. The encoder outputs the parameters of the mean and log-variance of the Gaussian $q(z|x)$, and the decoder outputs logits for each entry of the image. We use the same encoder/decoder architecture for β-VAE and FactorVAE, shown in the tables below. We use the same six-layer MLP discriminator with 1000 hidden units per layer and leaky ReLU (lReLU) non-linearities, outputting 2 logits, in all FactorVAE experiments. We noticed that smaller discriminator architectures work fine, but observed small improvements up to 6 hidden layers and 1000 hidden units per layer. Note that scaling the discriminator learning rate is not equivalent to scaling γ, since γ does not affect the discriminator loss. See Algorithm 2 for details of the FactorVAE updates. We train for a data-set-dependent number of iterations, using the same number for Chairs, 3D Faces and CelebA. We use a batch size of 64 for all data sets.
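FactorVAE's discriminator distinguishes samples of $q(z)$ from samples of the product of marginals $\prod_j q(z_j)$; the latter are obtained by independently permuting each latent dimension across a batch (cf. Algorithm 2). A minimal NumPy sketch, with illustrative names and shapes:

```python
import numpy as np

def permute_dims(z, rng):
    """Independently permute each latent dimension of a batch of
    samples z (shape: batch x d) across the batch. If the rows of z
    are samples from q(z), the rows of the output are approximate
    samples from the product of marginals prod_j q(z_j)."""
    z_perm = np.empty_like(z)
    for j in range(z.shape[1]):
        z_perm[:, j] = z[rng.permutation(z.shape[0]), j]
    return z_perm

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 10))    # a batch of 64 latents of dimension 10
z_perm = permute_dims(z, rng)    # marginals preserved, joint dependence broken
```

In the full algorithm these permuted samples are fed to the discriminator as one class, with samples from $q(z)$ as the other.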

| Encoder | Decoder |
| --- | --- |
| Input binary image | Input |
| conv. 32 ReLU. stride 2 | FC. 128 ReLU. |
| conv. 32 ReLU. stride 2 | FC. ReLU. |
| conv. 64 ReLU. stride 2 | upconv. 64 ReLU. stride 2 |
| conv. 64 ReLU. stride 2 | upconv. 32 ReLU. stride 2 |
| FC. 128. FC. | upconv. 32 ReLU. stride 2 |
|  | upconv. 1. stride 2 |

| Encoder | Decoder |
| --- | --- |
| Input RGB image | Input (3D Shapes; CelebA, Chairs) |
| conv. 32 ReLU. stride 2 | FC. 256 ReLU. |
| conv. 32 ReLU. stride 2 | FC. ReLU. |
| conv. 64 ReLU. stride 2 | upconv. 64 ReLU. stride 2 |
| conv. 64 ReLU. stride 2 | upconv. 32 ReLU. stride 2 |
| FC. 256. FC. | upconv. 32 ReLU. stride 2 |
|  | upconv. 3. stride 2 |

| Encoder | Decoder |
| --- | --- |
| Input greyscale image | Input |
| conv. 32 ReLU. stride 2 | FC. 256 ReLU. |
| conv. 32 ReLU. stride 2 | FC. ReLU. |
| conv. 64 ReLU. stride 2 | upconv. 64 ReLU. stride 2 |
| conv. 64 ReLU. stride 2 | upconv. 32 ReLU. stride 2 |
| FC. 256. FC. | upconv. 32 ReLU. stride 2 |
|  | upconv. 1. stride 2 |

## Appendix B Details for the Disentanglement Metrics

We performed a sensitivity analysis of each metric with respect to its hyperparameters (c.f. Figure 2). In Figure 16, we show that the metric in Higgins et al. (2016) is very sensitive to the number of iterations of the Adagrad (Duchi et al., 2011) optimiser with learning rate 0.01 (as used in Higgins et al. (2016)), and constantly improves with more iterations. This suggests that one might want to use less noisy multi-class logistic regression solvers than gradient-descent-based methods. Increasing the number of data points used to evaluate the metric after optimisation did not seem to reduce variance beyond 800. So in our experiments we use 10000 iterations, with a batch size of 10 per iteration of training the linear classifier, and use a batch of size 800 to evaluate the metric at the end of training. Each evaluation of this metric took around 30 minutes on a single GPU, hence we could not afford to train for more iterations.

For our disentanglement metric, we first prune out all latent dimensions that have collapsed to the prior ($q(z_j|x) \approx p(z_j)$). Then we use only the surviving dimensions for the majority vote. From the sensitivity analysis of our metric in Figure 17, we observe that our metric is much less sensitive to hyperparameters than the metric in Higgins et al. (2016). We take the majority-vote classifier from 800 votes, which only takes a few seconds on a single GPU. The majority-vote classifier works as follows: suppose we are given votes $(d_m, k_m)_{m=1}^{M}$, where $d_m$ is the index of the latent dimension chosen by the $m$-th vote and $k_m$ the index of the fixed factor. For each latent dimension $j$ and factor $k$, let $V_{jk} = \sum_{m=1}^{M} \mathbb{1}(d_m = j, k_m = k)$. Then the majority-vote classifier is defined to be $C(j) = \arg\max_k V_{jk}$.

Note that the dimensionality of the latents does not affect the metric; for a classifier that chooses one of the $K$ factors uniformly at random, the accuracy is $1/K$, independent of the number of latent dimensions.
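The vote-counting step can be written out directly; `majority_vote_accuracy` is a hypothetical helper name, and the votes below are made up for illustration:

```python
import numpy as np

def majority_vote_accuracy(votes, num_latents, num_factors):
    """votes: list of (d, k) pairs, where d is the index of the latent
    dimension chosen when factor k was held fixed. Builds the vote
    matrix V[j, k] and returns the accuracy of the majority-vote
    classifier C(j) = argmax_k V[j, k]."""
    V = np.zeros((num_latents, num_factors), dtype=int)
    for d, k in votes:
        V[d, k] += 1
    # Each latent dimension predicts its most-voted factor; the metric
    # is the fraction of all votes that this classifier gets right.
    return V.max(axis=1).sum() / V.sum()

votes = [(0, 1), (0, 1), (0, 2), (1, 0), (1, 0), (2, 2)]
acc = majority_vote_accuracy(votes, num_latents=3, num_factors=3)  # 5/6
```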

For discrete latent variables, we use Gini's definition of empirical variance:

$$\mathrm{Var}(x_{1:N}) = \frac{1}{2N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} d(x_i, x_j) \qquad (4)$$

where $d(x_i, x_j) = 1$ if $x_i \neq x_j$ and $d(x_i, x_j) = 0$ if $x_i = x_j$. Note that this is equal to the empirical variance for continuous variables when $d(x_i, x_j) = (x_i - x_j)^2$.
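A quick numerical check of the two cases, with a hypothetical `gini_variance` helper: the 0/1 disagreement distance handles discrete values, and the squared distance recovers the usual (population) empirical variance:

```python
import numpy as np

def gini_variance(x, dist):
    """Gini's empirical variance: (1 / (2 N^2)) * sum_{i,j} dist(x_i, x_j)."""
    n = len(x)
    total = sum(dist(x[i], x[j]) for i in range(n) for j in range(n))
    return total / (2 * n * n)

x_cont = np.array([1.0, 2.0, 4.0, 7.0])
# With squared distance, Gini's formula recovers np.var (population variance).
v_sq = gini_variance(x_cont, lambda a, b: (a - b) ** 2)

x_disc = ["square", "square", "ellipse", "heart"]
# For discrete values, dist is the 0/1 disagreement indicator.
v_disc = gini_variance(x_disc, lambda a, b: float(a != b))
```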

## Appendix C KL Decomposition

The KL term in the VAE objective decomposes as follows (Makhzani & Frey, 2017):

###### Lemma 1.

$$\mathbb{E}_{p_{data}(x)}\big[KL(q(z|x)\,\|\,p(z))\big] = I_q(x; z) + KL(q(z)\,\|\,p(z))$$

where $q(z) = \mathbb{E}_{p_{data}(x)}[q(z|x)]$ and $I_q(x; z)$ is the mutual information between $x$ and $z$ under the joint $q(x, z) = q(z|x)\, p_{data}(x)$.

###### Proof.

$$\begin{aligned}
\mathbb{E}_{p_{data}(x)}\big[KL(q(z|x)\,\|\,p(z))\big]
&= \mathbb{E}_{q(x,z)}\big[\log q(z|x) - \log p(z)\big] \\
&= \mathbb{E}_{q(x,z)}\big[\log q(z|x) - \log q(z)\big] + \mathbb{E}_{q(z)}\big[\log q(z) - \log p(z)\big] \\
&= I_q(x; z) + KL(q(z)\,\|\,p(z)).
\end{aligned}$$

∎

###### Remark.

Note that this decomposition is equivalent to the one in Hoffman & Johnson (2016), which treats the index $i$ of the data point as a uniform random variable: there the mutual information term is $I(i; z)$ under $q(i, z) = \frac{1}{N} q(z|x^{(i)})$, which coincides with $I_q(x; z)$ above when the $x^{(i)}$ are distinct.
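The decomposition can be verified numerically on a small discrete example (all distributions below are made up):

```python
import numpy as np

# Toy discrete check of
#   E_{p(x)}[KL(q(z|x) || p(z))] = I_q(x; z) + KL(q(z) || p(z)).
p_x = np.array([0.5, 0.5])            # data distribution over two x values
q_z_given_x = np.array([[0.9, 0.1],   # q(z | x = 0)
                        [0.3, 0.7]])  # q(z | x = 1)
p_z = np.array([0.4, 0.6])            # prior over two z values

def kl(a, b):
    return float(np.sum(a * np.log(a / b)))

lhs = sum(p_x[i] * kl(q_z_given_x[i], p_z) for i in range(2))

q_z = p_x @ q_z_given_x               # aggregate posterior q(z)
joint = p_x[:, None] * q_z_given_x    # q(x, z)
mi = float(np.sum(joint * np.log(joint / (p_x[:, None] * q_z[None, :]))))

rhs = mi + kl(q_z, p_z)               # equals lhs up to float error
```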

## Appendix D Using a Batch Estimate of q(z) for Estimating TC

We have also tried using a batch estimate for the density $q(z)$, thus optimising this estimate of the TC directly instead of having a discriminator and using the density-ratio trick. In other words, for a batch $B$ of data points we tried

$$\hat q(z) = \frac{1}{|B|} \sum_{i \in B} q(z \mid x^{(i)}), \qquad (5)$$

and used the estimate

$$TC \approx \frac{1}{|B|} \sum_{i \in B} \Big[\log \hat q(z^{(i)}) - \sum_j \log \hat q(z_j^{(i)})\Big] \qquad (6)$$

for $z^{(i)} \sim q(z \mid x^{(i)})$, where $\hat q(z_j)$ denotes the corresponding batch estimate of the marginal $q(z_j)$. However, while experimenting on 2D Shapes, we observed that the value of $\log \hat q(z)$ becomes very small (negative with high absolute value) during training, because $\hat q(z)$ is not a good enough approximation to $q(z)$ unless $|B|$ is very big. As training of the VAE progresses, the variances of the Gaussians $q(z|x)$ become smaller and smaller, so they do not overlap much in higher dimensions. Hence samples $z \sim q(z)$ land on the tails of $\hat q(z)$, giving worryingly small values of $\log \hat q(z)$. On the other hand $\hat q(z_j)$, a mixture of one-dimensional Gaussians and hence of much higher entropy, gives much more stable values of $\log \hat q(z_j)$. From Figure 18, we can see that even with $|B|$ as big as 10,000 we get negative values for the estimate of TC, which is a KL divergence and hence should be non-negative; so this method of using a batch estimate for $q(z)$ does not work. A fix is to use samples from $\hat q(z)$ instead of $q(z)$, but this seemed to give a similar reconstruction-disentanglement trade-off to β-VAE. Very recently, work from Chen et al. (2018) has shown that disentangling can be improved by using samples from $\hat q(z)$.
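The tail effect described above is easy to reproduce: under a plug-in mixture estimate built from one batch, the log-density of fresh samples collapses as the dimension grows. A NumPy sketch with illustrative numbers (not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_log_qhat(dim, n_batch=500, n_eval=200, sigma=0.1):
    """Average log-density of fresh samples z ~ q(z|x_new) under the
    plug-in estimate qhat(z) = (1/|B|) sum_i q(z|x_i), modelled here
    as a mixture of n_batch isotropic Gaussians with scattered means
    and small std sigma (stand-ins for per-data-point posteriors)."""
    mu_batch = rng.normal(size=(n_batch, dim))
    mu_eval = rng.normal(size=(n_eval, dim))
    z = mu_eval + sigma * rng.normal(size=(n_eval, dim))
    sq = ((z[:, None, :] - mu_batch[None, :, :]) ** 2).sum(-1)
    log_comp = -0.5 * sq / sigma**2 - 0.5 * dim * np.log(2 * np.pi * sigma**2)
    # log-sum-exp over mixture components, for numerical stability
    m = log_comp.max(axis=1, keepdims=True)
    log_qhat = m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1)) - np.log(n_batch)
    return float(log_qhat.mean())

ll_2d, ll_10d = mean_log_qhat(2), mean_log_qhat(10)
# In 10 dimensions the fresh samples land on the tails of qhat, so the
# estimated log-density collapses — the failure mode described above.
```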

## Appendix E Log Marginal Likelihood and Samples

We give the log marginal likelihood of the best performing β-VAE and FactorVAE models (in terms of disentanglement) for both the 2D Shapes and 3D Shapes data sets, along with samples from the generative model. Since the log marginal likelihood is intractable, we report the Importance-Weighted Autoencoder (IWAE) bound with 5000 particles, in line with standard practice in the generative modelling literature.

In Figures 19 and 20, the samples for FactorVAE are arguably more representative of the data set than those of β-VAE. For example, β-VAE has occasional samples with two separate shapes in the same image (Figure 19). The log marginal likelihood for the best performing β-VAE is -46.1, whereas for FactorVAE it is -51.9 (a randomly chosen VAE run gives -43.3). So on 2D Shapes, FactorVAE gives better samples but worse log marginal likelihood.

In Figures 21 and 22, the samples for β-VAE appear more coherent than those for FactorVAE. However, the log marginal likelihood for β-VAE is -3534, whereas for FactorVAE it is -3520 (a randomly chosen VAE run gives -3517). So on 3D Shapes, FactorVAE gives worse samples but better log marginal likelihood.

In general, if one seeks to learn a generative model with a disentangled latent space, it would make sense to choose the model with the lowest value of β or γ among those with similarly high disentanglement performance.

## Appendix F Losses and Experiments for other related Methods

The Adversarial Autoencoder (AAE) (Makhzani et al., 2015) uses the following objective

$$\frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{q(z|x^{(i)})}\big[\log p(x^{(i)}|z)\big] - KL(q(z)\,\|\,p(z)), \qquad (7)$$

utilising the density ratio trick to estimate the KL term.
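As a toy illustration of the density-ratio trick, one can train a logistic-regression "discriminator" on samples from two distributions and read off the KL divergence from its logits, since the optimal logit is $\log q(x)/p(x)$. Here $q = N(1, 1)$ and $p = N(0, 1)$, for which the true $KL(q\,\|\,p) = 0.5$; everything below is illustrative, a stand-in for the MLP discriminators used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5000
xq = rng.normal(1.0, 1.0, n)   # samples from q (label 1)
xp = rng.normal(0.0, 1.0, n)   # samples from p (label 0)
x = np.concatenate([xq, xp])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Full-batch gradient descent on the logistic-regression loss; for
# equal-variance Gaussians the optimal logit is exactly linear in x.
w, b = 0.0, 0.0
for _ in range(2000):
    p1 = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p1 - y) * x)
    b -= 0.5 * np.mean(p1 - y)

# KL(q || p) ~= E_q[log D/(1-D)] = E_q[logit(x)], close to the true 0.5
kl_estimate = float(np.mean(w * xq + b))
```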

Information Dropout (Achille & Soatto, 2018) uses the objective

(8)

The following objective is also considered in the paper but is dismissed as intractable:

(9)

Note that it is similar to the FactorVAE objective, but with a different distribution in the first KL term.

DIP-VAE (Kumar et al., 2018) uses the VAE objective with an additional penalty on how much the covariance of $q(z)$ deviates from the identity matrix: either penalising the covariance of the encoder means via the law of total covariance (DIP-VAE I),

$$\lambda_{od} \sum_{i \neq j} \big[\mathrm{Cov}_{p_{data}(x)}[\mu(x)]\big]_{ij}^2 + \lambda_d \sum_{i} \big(\big[\mathrm{Cov}_{p_{data}(x)}[\mu(x)]\big]_{ii} - 1\big)^2, \qquad (10)$$

where $\mathrm{Cov}_{q(z)}[z] = \mathrm{Cov}_{p_{data}(x)}[\mu(x)] + \mathbb{E}_{p_{data}(x)}[\Sigma(x)]$ with $\mu(x)$ and $\Sigma(x)$ the mean and covariance of $q(z|x)$, or penalising $\mathrm{Cov}_{q(z)}[z]$ directly (DIP-VAE II):

$$\lambda_{od} \sum_{i \neq j} \big[\mathrm{Cov}_{q(z)}[z]\big]_{ij}^2 + \lambda_d \sum_{i} \big(\big[\mathrm{Cov}_{q(z)}[z]\big]_{ii} - 1\big)^2. \qquad (11)$$
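A sketch of the DIP-VAE I penalty computed on a batch of encoder means; the function name and the λ weights are illustrative, not Kumar et al.'s settings:

```python
import numpy as np

def dip_vae_i_penalty(mu, lam_od=10.0, lam_d=100.0):
    """DIP-VAE I style regulariser: penalise deviation of the sample
    covariance of the encoder means mu (shape: batch x d) from the
    identity, with separate weights on off-diagonal and diagonal
    entries."""
    cov = np.cov(mu, rowvar=False)
    off_diag = cov - np.diag(np.diag(cov))
    return lam_od * np.sum(off_diag ** 2) + lam_d * np.sum((np.diag(cov) - 1.0) ** 2)

rng = np.random.default_rng(0)
mu = rng.normal(size=(256, 5))     # stand-in for a batch of encoder means
penalty = dip_vae_i_penalty(mu)    # zero iff the sample covariance is I
```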

One could argue that during training of FactorVAE, $\prod_j q(z_j)$ will be similar to the prior $p(z)$, assuming the prior is factorial, due to the $KL(q(z|x)\,\|\,p(z))$ term in the objective. Hence we also investigate a modified FactorVAE objective that replaces $\prod_j q(z_j)$ with $p(z)$:

$$\frac{1}{N} \sum_{i=1}^{N} \Big[\mathbb{E}_{q(z|x^{(i)})}\big[\log p(x^{(i)}|z)\big] - KL\big(q(z|x^{(i)})\,\|\,p(z)\big)\Big] - \gamma\, KL\big(q(z)\,\|\,p(z)\big). \qquad (12)$$

However, as shown in Figure 40 of Appendix I, the histograms of samples from the marginals $q(z_j)$ are clearly quite different from the prior for FactorVAE.

Moreover, we show experimental results for AAE (adding a coefficient in front of the $KL(q(z)\,\|\,p(z))$ term of the objective and tuning it) and the variant of FactorVAE (Eqn. (12)) on the 2D Shapes data. From Figure 23, we see that the disentanglement performance of both is somewhat lower than that of FactorVAE. This difference could be explained as a benefit of directly encouraging $q(z)$ to be factorised (FactorVAE) instead of encouraging it to approach an arbitrarily chosen factorised prior (AAE, Eqn. (12)). Information Dropout and DIP-VAE did not have enough experimental details in the paper nor publicly available code for their results to be reproduced and compared against.

## Appendix G InfoGAN and InfoWGAN-GP

We give an overview of InfoGAN (Chen et al., 2016) and InfoWGAN-GP, its counterpart using the Wasserstein distance and gradient penalty. InfoGAN uses latents $(c, z)$ where $c$ models the semantically meaningful codes and $z$ models incompressible noise. The generative model is defined by a generator $G$ with the process $c \sim p(c)$, $z \sim p(z)$, $x = G(c, z)$. GANs are defined as a minimax game on some objective $V(D, G)$, where $D$ is either a discriminator (e.g. for the original GAN (Goodfellow et al., 2014)) that outputs probabilities for binary classification, or a critic (e.g. for Wasserstein-GAN (Arjovsky et al., 2017)) that outputs a real-valued scalar. InfoGAN defines an extra encoding distribution $Q(c|x)$ that is used to define an extra penalty:

$$L_I(G, Q) = \mathbb{E}_{c \sim p(c),\, x = G(c, z)}\big[\log Q(c \mid x)\big] + H(c) \qquad (13)$$

that is added to the GAN objective. Hence InfoGAN is the following minimax game on the parameters of the neural nets $D$, $G$, $Q$:

$$\min_{G, Q} \max_{D} \; V(D, G) - \lambda L_I(G, Q). \qquad (14)$$

$L_I(G, Q)$ can be interpreted as a variational lower bound to $I(c; G(c, z))$, with equality when $Q(c|x)$ equals the true posterior over the codes; i.e. maximising it encourages the codes to be more informative about the image. From the definition of $L_I$, it can also be seen as the (negative) reconstruction error of the codes in the latent space. The original InfoGAN defines:

$$V(D, G) = \mathbb{E}_{p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{p(c) p(z)}\big[\log\big(1 - D(G(c, z))\big)\big], \qquad (15)$$

the same as the original GAN objective. However, as we show in Appendix H, this has known instability issues in training. So it is natural to try replacing it with the more stable WGAN-GP (Gulrajani et al., 2017) objective:

$$V(D, G) = \mathbb{E}_{p_{data}(x)}\big[D(x)\big] - \mathbb{E}_{p(c) p(z)}\big[D(G(c, z))\big] - \lambda_{gp}\, \mathbb{E}_{\hat x}\big[(\lVert \nabla_{\hat x} D(\hat x) \rVert_2 - 1)^2\big] \qquad (16)$$

for $\hat x = \epsilon x + (1 - \epsilon)\bar x$ with $x \sim p_{data}(x)$, $\bar x = G(c, z)$, $\epsilon \sim \mathrm{Unif}[0, 1]$, applying stop_gradient to $\hat x$, and with a new $\hat x$ for each iteration of optimisation. Thus we obtain InfoWGAN-GP.
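The gradient penalty can be illustrated with a toy linear critic, for which the gradient with respect to the input is the weight vector everywhere and so no automatic differentiation is needed (real implementations compute the gradient by autodiff; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_penalty(x_real, x_fake, w, lam=10.0):
    """WGAN-GP style penalty: sample x_hat on straight lines between
    real and generated points and penalise the critic's gradient norm
    at x_hat for deviating from 1. For the linear critic D(x) = <w, x>
    the gradient is w everywhere, so the penalty reduces to
    lam * (||w|| - 1)^2 regardless of where x_hat lands."""
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1 - eps) * x_fake   # interpolated points
    grad = np.tile(w, (x_hat.shape[0], 1))      # dD/dx_hat for linear D
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

x_real = rng.normal(size=(8, 3))
x_fake = rng.normal(size=(8, 3))
w = np.array([3.0, 0.0, 0.0])                   # gradient norm 3 everywhere
gp = gradient_penalty(x_real, x_fake, w)        # 10 * (3 - 1)^2 = 40
```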

## Appendix H Empirical Study of InfoGAN and InfoWGAN-GP

To begin with, we implemented InfoGAN and InfoWGAN-GP on MNIST using the hyperparameters given in Chen et al. (2016) to better understand their behaviour, using 1 categorical code with 10 categories, 2 continuous codes, and 62 noise variables. We use uniform priors for the continuous codes, a uniform categorical prior for the categorical codes, and leave the noise prior as in Chen et al. (2016). For the 2D Shapes data we use 1 categorical code with 3 categories (one per shape), 4 continuous codes, and 5 noise variables. The number of noise variables did not seem to have a noticeable effect on the experimental results. We use the Adam optimiser (Kingma & Ba, 2015), with separate learning rates for the generator and discriminator updates. The detailed discriminator/encoder/generator architectures are given in the tables below. The architecture for InfoWGAN-GP is the same as for InfoGAN, except that we use no Batch Normalisation (batchnorm) (Ioffe & Szegedy, 2015) for the convolutions in the discriminator, and replace batchnorm with Layer Normalisation (Ba et al., 2016) in the fully connected layer that follows the convolutions, as recommended in Gulrajani et al. (2017). We use a gradient penalty coefficient of 10, again as recommended.

| discriminator D / encoder Q | generator G |
| --- | --- |
| Input greyscale image | Input |
| conv. 64 lReLU. stride 2 | FC. 1024 ReLU. batchnorm |
| conv. 128 lReLU. stride 2. batchnorm | FC. ReLU. batchnorm |
| FC. 1024 lReLU. batchnorm | upconv. 64 ReLU. stride 2. batchnorm |
| FC. 1. output layer for D | upconv. 1 Sigmoid. stride 2 |
| FC. 128 lReLU. batchnorm. FC for Q |  |

| discriminator D / encoder Q | generator G |
| --- | --- |
| Input binary image | Input |
| conv. 32 lReLU. stride 2 | FC. 128 ReLU. batchnorm |
| conv. 32 lReLU. stride 2. batchnorm | FC. ReLU. batchnorm |
| conv. 64 lReLU. stride 2. batchnorm | upconv. 64 lReLU. stride 2. batchnorm |
| conv. 64 lReLU. stride 2. batchnorm | upconv. 32 lReLU. stride 2. batchnorm |
| FC. 128 lReLU. batchnorm | upconv. 32 lReLU. stride 2. batchnorm |
| FC. 1. output layer for D | upconv. 1 Sigmoid. stride 2 |
| FC. 128 lReLU. batchnorm. FC for Q |  |

| discriminator D / encoder Q | generator G |
| --- | --- |
| Input binary image | Input |
| conv. 64 lReLU. stride 2 | FC. 1024 ReLU. batchnorm |
| conv. 128 lReLU. stride 2. batchnorm | FC. ReLU. batchnorm |
| conv. 256 lReLU. stride 2. batchnorm | upconv. 256 lReLU. stride 1. batchnorm |
| conv. 256 lReLU. stride 1. batchnorm | upconv. 256 lReLU. stride 1. batchnorm |
| conv. 256 lReLU. stride 1. batchnorm | upconv. 128 lReLU. stride 2. batchnorm |
| FC. 1024 lReLU. batchnorm | upconv. 64 lReLU. stride 2. batchnorm |
| FC. 1. output layer for D | upconv. 1 Sigmoid. stride 2 |
| FC. 128 lReLU. batchnorm. FC for Q |  |

We firstly observe that for all runs, we eventually get a degenerate discriminator that predicts all inputs to be real, as in Figure 24. This is the well-known instability issue of the original GAN. We have tried using a smaller learning rate for the discriminator, and although this delays the degenerate behaviour it does not prevent it. Hence early stopping seems crucial, and all results shown below are from well before the degenerate behaviour occurs.

Chen et al. (2016) claim that the categorical code learns digit class (discrete factor of variation) and that the continuous codes learn azimuth and width, but when plotting latent traversals for each run, we observed that this is inconsistent. We show five randomly chosen runs in Figure 25. The digit class changes in the continuous code traversals and there are overlapping digits in the categorical code traversal. Similar results hold for InfoWGAN-GP in Figure 36.

We also tried visualising the reconstructions: given an image, we push the image through the encoder to obtain latent codes $c$, fix these, and vary the noise $z$ to generate multiple reconstructions of the same image. This is to check the extent to which the noise can affect the generation. We can see in Figure 26 that digit class often changes when varying $z$, so the model struggles to cleanly separate semantically meaningful information from incompressible noise.

Furthermore, we investigated the sensitivity of the model to the number of latent codes. We show latent traversals using three continuous codes instead of two in Figure 27. It is evident that the model tries to put more digit-class information into the continuous traversals. So the number of codes is an important hyperparameter to tune, whereas VAE-based methods are less sensitive to this choice since they can prune out unnecessary latents by collapsing them to the prior.

We also tried varying the number of categories for the categorical code. Using 2 categories, we see from Figure 28 that the model tries to put much more information about digit class into the continuous latents, as expected; moreover, from Figure 30 we see that the noise variables also carry more information about digit class. When we use 20 categories, the model still puts digit-class information into the continuous latents, although from Figure 31 we see that the noise variables contain less semantically meaningful information.

Using InfoWGAN-GP solves the degeneracy issue and makes training more stable (see Figure 33), but we observed that the other problems persist (see e.g. Figure 36).

For 2D Shapes, we also tried using a bigger architecture for InfoWGAN-GP, one used for a data set of similar dimensions (the Chairs data set) in Chen et al. (2016); see the third table above. However, as can be seen in Figure 34, this did not improve disentanglement scores, although the latent traversals look slightly more realistic (Figure 35).

In summary, InfoWGAN-GP can help prevent the instabilities in training faced by InfoGAN, but it does not help overcome the following weaknesses relative to VAE-based methods: 1) disentangling performance is sensitive to the number of code latents; 2) more often than not, the noise variables contain semantically meaningful information; 3) the model does not always generalise well across the domain of the code prior $p(c)$.

## Appendix I Further Experimental Results

From Figure 37, we see that higher values of γ in FactorVAE lead to a lower discriminator accuracy. This is as expected: a higher γ encourages $q(z)$ and $\prod_j q(z_j)$ to be closer together, hence a lower accuracy for the discriminator classifying samples from the two distributions.

We also show histograms of $q(z_j)$ for each $j$ in β-VAE and FactorVAE, for different values of β and γ, at the end of training on 2D Shapes in Figure 40. We can see that the marginals of FactorVAE are quite different from the prior, which could be a reason that the variant of FactorVAE using the objective given by Eqn. (12) leads to different results from FactorVAE. For FactorVAE, the model is able to focus on factorising $q(z)$ instead of pushing it towards some arbitrarily specified prior $p(z)$.