Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine, which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulate the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.


Disentangling Factors of Variation via Generative Entangling


Guillaume Desjardins

Aaron Courville

Yoshua Bengio

Département d’informatique et de recherche opérationnelle, Université de Montréal, Montréal, QC H3C 3J7, Canada


In many machine learning tasks, data originates from a generative process involving complex interactions of multiple factors. Alone, each factor accounts for a source of variability in the data. Together, their interaction gives rise to the rich structure characteristic of many of the most challenging domains of application. Consider, for example, the task of facial expression recognition. Two images of different individuals with the same facial expression may be well separated in pixel space. On the other hand, two images of the same individual showing different expressions may well be positioned very close together in pixel space. In this simplified scenario, there are two factors at play: (1) the identity of the individual, and (2) the facial expression. One of these factors, the identity, is irrelevant to the task of facial expression recognition, and yet of the two factors it could well dominate the representation of the image in pixel space. As a result, pixel space-based facial expression recognition systems seem likely to suffer poor performance due to the variation in appearance of individual faces.

Importantly, these interacting factors frequently do not combine as simple superpositions that can be easily separated by choosing an appropriate affine projection of the data. Rather, these factors often appear tightly entangled in the raw data. Our challenge is to construct representations of the data that cope with the reality of entangled factors of variation and provide features that may be appropriate to a wide variety of possible tasks. In the context of our face data example, a representation capable of disentangling identity and expression would be an effective representation for either face recognition or facial expression classification.

In an effort to cope with these factors of variation, there has been a broad-based movement in machine learning and in application domains such as computer vision toward hand-engineering feature sets that are invariant to common sources of variation in data. This is the motivation behind both the inclusion of feature pooling stages in the convolutional network architecture (LeCun et al., 1989) and the recent trend toward representations based on large scale pooling of low-level features (Wang et al., 2009; Coates et al., 2011). These approaches all stem from the powerful idea that invariant features of the data can be induced through the pooling together of a set of simple filter responses. Potentially even more powerful is the notion that one can actually learn which filters should be pooled together from purely unsupervised data, and thereby extract directions of variance over which the pooling features become invariant (Kohonen et al., 1979; Kohonen, 1996; Hyvärinen and Hoyer, 2000; Le et al., 2010; Kavukcuoglu et al., 2009; Ranzato and Hinton, 2010; Courville et al., 2011b). However, in situations where there are multiple relevant but entangled factors of variation that give rise to the data, we require a means of feature extraction that disentangles these factors, rather than simply learning to represent some of them at the expense of those that are lost in the filter pooling operation.

Here we propose a novel model family with the objective of learning to disentangle the factors of variation evident in the data. Our approach is based on the spike-and-slab restricted Boltzmann machine (ssRBM) (Courville et al., 2011a), which has recently been shown to be a promising model of natural image data. We generalize the ssRBM to include higher-order interactions among multiple binary latent variables. Seen from a generative perspective, the multiplicative interactions of the binary latent variables emulate the entangling of the factors that give rise to the data. Conversely, inference in the model can be seen as an attempt to assign credit to the various interacting factors for their combined account of the data, in effect, to disentangle the generative factors. Our approach relies only on unsupervised approximate maximum likelihood learning of the model parameters, and as such we do not require the use of any label information in defining the factors to be disentangled. We believe this to be a research direction of critical importance, as it is almost never the case that label information exists for all factors responsible for variations in the data distribution.


The principle that invariant features can actually emerge, using only unsupervised learning, from the organization of features into subspaces was first established in the ASSOM model (Kohonen, 1996). Since then, the same basic strategy has reappeared in a number of different models and learning paradigms, including topological independent component analysis (Hyvärinen and Hoyer, 2000; Le et al., 2010), invariant predictive sparse decomposition (IPSD) (Kavukcuoglu et al., 2009), as well as in Boltzmann machine-based approaches (Ranzato and Hinton, 2010; Courville et al., 2011b). In each case, the basic strategy is to group filters together by, for example, using a variable (the pooling feature) that gates the activation for all elements of the group. This gated activation mechanism causes the filters within the group to share a common window on the dataset, which in turn leads to filter groups composed of mutually complementary filters. In the end, the span of the filter vectors defines a subspace which specifies the directions in which the pooling feature is invariant. Somewhat surprisingly, this basic strategy has repeatedly demonstrated that useful invariant features can be learned in a strictly unsupervised fashion, using only the statistical structure inherent in the data. While remarkable, one important problem with this learning strategy is that the invariant representation formed by the pooling features offers a somewhat incomplete view of the data, as the detailed representation of the lower-level features is abstracted away in the pooling procedure. While we would like higher-level features to be more abstract and exhibit greater invariance, we have little control over what information is lost through feature subspace pooling.

Invariant features, by definition, have reduced sensitivity in the direction of invariance. This is the goal of building invariant features, and it is fully desirable if the directions of invariance all reflect sources of variance in the data that are uninformative to the task at hand. However, it is often the case that the goal of feature extraction is the disentangling or separation of many distinct but informative factors in the data. In this situation, the standard method of generating invariant features, namely the feature subspace method, may be inadequate. Returning to our facial expression classification example from the introduction, consider a pooling feature made invariant to the expression of a subject by forming a subspace of low-level filters that represent the subject with various facial expressions (forming a basis for the subspace). If this is the only pooling feature that is associated with the appearance of this subject, then the facial expression information is lost to the model representation formed by the set of pooling features. As illustrated in our hypothetical facial expression classification task, this loss of information becomes a problem when the information that is lost is necessary to successfully complete the task at hand.

Obviously, what we really would like is for a particular feature set to be invariant to the irrelevant factors while disentangling the relevant ones. Unfortunately, it is often difficult to determine a priori which set of features will ultimately be relevant to the task at hand. Further, as is often the case in the context of deep learning methods (Collobert and Weston, 2008), the feature set being trained may be destined for use in multiple tasks that may have distinct subsets of relevant features. Considerations such as these lead us to the conclusion that the most robust approach to feature learning is to disentangle as many factors as possible, discarding as little information about the data as is practical. This is the motivation behind our proposed higher-order spike-and-slab Boltzmann machine.

Figure 1: Energy function of our higher-order spike & slab RBM (ssRBM), used to disentangle (multiplicative) factors of variation in the data. Two groups of latent spike variables, g and h, interact to explain the data v through the weight tensor W. While the ssRBM instantiates a slab variable s_i for each hidden unit h_i, our higher-order model employs a slab s_{kij} for each pair of spike variables (g_{ki}, h_{kj}). μ and α are respectively the mean and precision parameters of s. An additional set of spike variables d are used to gate groups of latent variables (g, h), and serve to promote group sparsity. Most parameters are thus indexed by an extra subscript k. Finally, b, c and e are standard bias terms for variables g, h and d, while Λ is a diagonal precision matrix on the visible vector v.

In this section, we introduce a model which makes some progress toward the ambitious goal of disentangling factors of variation. The model is based on the Boltzmann machine, an undirected graphical model. In particular, we build on the spike-and-slab restricted Boltzmann machine (ssRBM) (Courville et al., 2011b), a model family that has previously shown promise as a means of learning invariant features via subspace pooling. The original ssRBM model possessed a limited form of higher-order interaction between two latent random variables: the spike and the slab. Our extension adds higher-order interactions between four distinct latent random variables. These include one set of slab variables and three interacting sets of binary spike variables. Unlike in the ssRBM, these interactions between the latent variables violate the conditional independence constraint of the restricted Boltzmann machine, and the model therefore does not belong to this class. As a consequence, exact inference in the model is not tractable and we resort to a mean-field approximation.

Our strategy is to disentangle factors of variation via inference (recovering the posterior distribution over our latent variables) in a generative model. In the context of generative models, inference can roughly be thought of as running the generative process in reverse. Thus, if we wish our inference process to disentangle factors of variation, our generative process should describe a means of factor entangling. The generative model we propose here represents one possible means of factor entangling.

Let v be the random visible vector that represents our observations, with its mean zeroed. We build a latent representation of this data with binary latent variables g, h and d. In the spike-and-slab context, we can think of g, h and d as a factored representation of the “spike” variables. We also include a set of real-valued “slab” variables s, with element s_{kij} associated with hidden units g_{ki}, h_{kj} and d_k. The interaction between these variables is defined through the energy function of Fig. 1.

The parameters are defined as follows. W is a weight 4-tensor connecting visible units to the interacting latent variables; its vectors W_{kij} can be interpreted as forming a basis in image space. μ and α are tensors describing the mean and precision of each s_{kij}; Λ is a diagonal precision matrix on the visible vector; and finally b, c and e are biases on the matrices g, h and the vector d respectively. The energy function fully specifies the joint probability distribution over the variables v, s, g, h and d: p(v, s, g, h, d) = (1/Z) exp(−E(v, s, g, h, d)), where Z is the partition function which ensures that the joint distribution is normalized.
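The energy function itself appears only as an image in Fig. 1. Under the notation used above, and by analogy with the ssRBM energy, a plausible reconstruction (our notation, not necessarily the authors' exact equation) is:

```latex
E(v, s, g, h, d) = \frac{1}{2} v^\top \Lambda v
  - \sum_{k,i,j} d_k \, g_{ki} \, h_{kj} \left( s_{kij} \, v^\top W_{kij}
      + \alpha_{kij} \, \mu_{kij} \, s_{kij} \right)
  + \frac{1}{2} \sum_{k,i,j} \alpha_{kij} \, s_{kij}^2
  - \sum_{k,i} b_{ki} \, g_{ki}
  - \sum_{k,j} c_{kj} \, h_{kj}
  - \sum_{k} e_k \, d_k
```

Each triple (d_k, g_{ki}, h_{kj}) must be jointly active for the slab s_{kij} and filter W_{kij} to contribute; the remaining terms are the slab prior and the standard bias terms.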

As specified above, the energy function is similar to the ssRBM energy function (Courville et al., 2011b; a), but includes a factored representation of the standard ssRBM spike variable. Yet, clearly, the properties of the model are highly dependent on the topology of the interactions between the real-valued slab variables s and the three sets of binary spike variables g, h and d. We adopt a strategy that permits local interactions within small groups of g, h and d in a block-like organizational pattern, as specified in Fig. 2. The local block structure allows the model to work incrementally toward disentangling the features by focusing on manageable subparts of the problem.

Figure 2: Block-sparse connectivity pattern with dense interactions between and within each block (only shown for the k-th block). Each block is gated by a separate d_k variable.
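The block-diagonal connectivity of Fig. 2 is easy to state concretely. The sketch below (our own illustration, not the authors' code) builds the binary interaction mask in NumPy, under the convention that block k couples its M g-units only with its own N h-units:

```python
import numpy as np

def block_mask(K, M, N):
    """Binary mask over (g-unit, h-unit) pairs: g_{ki} and h_{k'j}
    interact only when they belong to the same block (k == k')."""
    # Flattened view: (K*M) g-units against (K*N) h-units.
    mask = np.zeros((K * M, K * N))
    for k in range(K):
        mask[k * M:(k + 1) * M, k * N:(k + 1) * N] = 1.0
    return mask
```

Multiplying a dense interaction tensor by this mask enforces the locality that, as discussed later, makes learning tractable.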

Similar to the standard spike-and-slab restricted Boltzmann machine (Courville et al., 2011b; a), the energy function in Eq. 1 gives rise to a Gaussian conditional over the visible variables: p(v | s, g, h, d) = N(v; Λ^{-1} Σ_{k,i,j} d_k g_{ki} h_{kj} s_{kij} W_{kij}, Λ^{-1}).

Here we have a four-way multiplicative interaction between the latent variables s, g, h and d. The real-valued slab variable s_{kij} acts to scale the contribution of the weight vector W_{kij}. As a consequence, after marginalizing out s, the factors g, h and d can also be seen as contributing both to the conditional mean and the conditional variance of v: p(v | g, h, d) = N(v; C Σ_{k,i,j} d_k g_{ki} h_{kj} μ_{kij} W_{kij}, C), with covariance C = (Λ − Σ_{k,i,j} d_k g_{ki} h_{kj} α_{kij}^{-1} W_{kij} W_{kij}^T)^{-1}.
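To make the conditional mean concrete, here is a minimal NumPy sketch (a hypothetical helper under our notation, with W stored as a (visible, K, M, N) array) of how each gated triple contributes its slab-scaled filter to the mean of v given the slabs:

```python
import numpy as np

def visible_mean(W, s, g, h, d, lambda_diag):
    """Mean of the Gaussian conditional p(v | s, g, h, d): each active
    triple (d_k, g_{ki}, h_{kj}) contributes its filter W[:, k, i, j],
    scaled by the corresponding slab value s[k, i, j]."""
    # gate[k, i, j] = d_k * g_{ki} * h_{kj}
    gate = d[:, None, None] * g[:, :, None] * h[:, None, :]
    contrib = np.einsum('vkij,kij->v', W, gate * s)
    return contrib / lambda_diag  # Lambda is diagonal, so invert elementwise
```

When a gate is zero, the corresponding filter drops out of the mean entirely, which is exactly the multiplicative entangling described in the text.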

This is an important property of the spike-and-slab framework that is also shared by other latent variable models of real-valued data such as the mean-covariance restricted Boltzmann machine (mcRBM) (Ranzato and Hinton, 2010) and the mean Product of T-distributions model (mPoT) (Ranzato et al., 2010).

From a generative perspective, the model can be thought of as consisting of a set of factor blocks whose activity is gated by the d variables. Within each block, the variables g and h can be thought of as local latent factors whose interaction gives rise to the active block’s contribution to the visible vector. Crucially, the multiplicative interaction between the g and h for a given block k is mediated by the weight tensor W_k and the corresponding slab variables s_k. Contrary to more standard probabilistic factor models, whose factors simply sum to give rise to the visible vector, the individual contributions of the elements of g and h are not easily isolated from one another. We can think of the generative process as entangling the local block factor activations.

From an encoding perspective, we are interested in using the posterior distribution over the latent variables as a representation or encoding of the data. Unlike in RBMs, in the proposed model, where we have higher-order interactions over the latent variables, the posterior over the latent variables does not factorize cleanly. By marginalizing over the slab variables s, we can recover a set of conditionals describing how the binary latent variables g, h and d interact. The conditional over the g-units takes the form P(g_{ki} = 1 | v, h, d) = σ( b_{ki} + d_k Σ_j h_{kj} (v^T W_{kij} + α_{kij} μ_{kij})^2 / (2 α_{kij}) ).

It illustrates that, with the factor configuration given in Fig. 2, the factors g_{ki} are activated (assume value 1) through the sum-pooled response of all the weight vectors W_{kij} (over the index j), differentially gated by the values of h and d, whose conditionals are respectively the analogous sigmoids P(h_{kj} = 1 | v, g, d) = σ( c_{kj} + d_k Σ_i g_{ki} (v^T W_{kij} + α_{kij} μ_{kij})^2 / (2 α_{kij}) ) and P(d_k = 1 | v, g, h) = σ( e_k + Σ_{i,j} g_{ki} h_{kj} (v^T W_{kij} + α_{kij} μ_{kij})^2 / (2 α_{kij}) ).

For completeness, we also include the Gaussian conditional distribution over the slab variables: p(s_{kij} | v, g, h, d) = N( s_{kij}; d_k g_{ki} h_{kj} (α_{kij}^{-1} v^T W_{kij} + μ_{kij}), α_{kij}^{-1} ).

From an encoding perspective, the gating pattern on the g and h variables, evident from Fig. 2 and from the conditional distributions, defines a form of local bilinear interaction (Tenenbaum and Freeman, 2000). We can interpret the values of g and h within block k as acting as basis indicators, in dimensions i and j, for the linear subspace in the visible space defined by W_k.

From this perspective, we can think of (g_k, h_k) as defining a block-local binary coordinate encoding of the data. Consider the case illustrated by Fig. 2, where the number of blocks (K) is 4. For each block, we have M × N filters which we encode using M + N binary latent variables, where each g_{ki} (alternately h_{kj}) effectively pools over the subspace characterized by the variables h_{k1}, …, h_{kN} (alternately g_{k1}, …, g_{kM}) through their relative interaction with v. As a concrete example, imagine that the structure of the weight tensor was such that, along the dimension indexed by i, the weight vectors form oriented Gabor-like edge detectors of different orientations, while along the dimension indexed by j, the weight vectors form edge detectors of different colors. In this hypothetical example, g encodes orientation information while being invariant to the color of the edge, while h encodes color information while being invariant to orientation. Hence we could say that we have disentangled the latent factors.


As alluded to above, one interpretation of the role of g and h is as distinct and complementary sum-pooled feature sets. Returning to Fig. 2, we can see that, for each block, the g_{ki} pool across the columns of the k-th block, along the i-th row, while the h_{kj} pool across rows, along the j-th column. The d_k variables are also interpretable as pooling across all elements of block k. One way to interpret the complementary pooling structures of the g and h is as a multi-way pooling strategy.

This particular pooling structure was chosen to study the potential of learning the kind of bilinear interaction that exists between the g and h within a block. The d_k are present to promote block cohesion by gating the interaction between g_k, h_k and the visible vector v.

This higher-order structure is of course just one choice among many possible higher-order interaction architectures. One can easily imagine defining arbitrary overlapping pooling regions, with the number of overlapping pooling regions specifying the order of the latent variable interaction. We believe that explorations of overlapping pooling regions of this type are a promising direction of future inquiry. One potentially interesting direction is to consider overlapping blocks. The overlap will define a topology over the features, as they will share lower-level features (i.e. the slab variables). A topology thus defined could potentially be exploited to build higher-level data representations that possess local receptive fields. These kinds of local receptive fields have been shown to be useful in building large and deep models that perform well in object classification tasks on natural images (Coates et al., 2011).


Due to the multiplicative interactions between the latent variables g, h and d, computation of the posteriors P(g | v), P(h | v) and P(d | v) is intractable. While the slab variables s also interact multiplicatively, we are able to analytically marginalize over them. Consequently, we resort to a variational approximation of the joint conditional P(g, h, d | v) with the standard mean-field structure: we choose a factorial distribution Q(g, h, d) such that the KL divergence between Q and the true posterior is minimized, or equivalently, such that the variational lower bound on the log likelihood of the data, L(Q) = Σ_{g,h,d} Q(g, h, d) log [ p(v, g, h, d) / Q(g, h, d) ], is maximized,

where the sums are taken over all values of the elements of g, h and d respectively. Maximizing this lower bound with respect to the variational parameters ĝ_{ki} = Q(g_{ki} = 1), ĥ_{kj} = Q(h_{kj} = 1) and d̂_k = Q(d_k = 1) results in a set of approximating factored distributions.

The above equations form a set of fixed point equations which we iterate until the values of all ĝ_{ki}, ĥ_{kj} and d̂_k converge. Since the expression for ĝ_{ki} does not depend on the other ĝ, the expression for ĥ_{kj} does not depend on the other ĥ, and the expression for d̂_k does not depend on the other d̂, we can define a three-stage update strategy where we update all values of ĝ in parallel, then all values of ĥ in parallel, and finally all values of d̂ in parallel.
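The three-stage schedule can be sketched as a simple fixed-point loop. In the illustrative code below, act_g, act_h and act_d are placeholder callables returning the pre-sigmoid activations (their exact form depends on the energy function); only the update ordering and the convergence test are the point:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(act_g, act_h, act_d, K, M, N, n_iters=20, tol=1e-6):
    """Three-stage mean-field fixed-point iteration: update all g-hats in
    parallel, then all h-hats, then all d-hats, until convergence."""
    g_hat = np.full((K, M), 0.5)
    h_hat = np.full((K, N), 0.5)
    d_hat = np.full(K, 0.5)
    for _ in range(n_iters):
        g_new = sigmoid(act_g(h_hat, d_hat))  # stage 1: all g in parallel
        h_new = sigmoid(act_h(g_new, d_hat))  # stage 2: all h in parallel
        d_new = sigmoid(act_d(g_new, h_new))  # stage 3: all d in parallel
        delta = max(np.abs(g_new - g_hat).max(),
                    np.abs(h_new - h_hat).max(),
                    np.abs(d_new - d_hat).max())
        g_hat, h_hat, d_hat = g_new, h_new, d_new
        if delta < tol:
            break
    return g_hat, h_hat, d_hat
```

Each stage is valid because, as noted above, a unit's update never depends on other units of its own type.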

Following the variational EM training approach (Saul et al., 1996), we alternate between maximizing the lower bound with respect to the variational parameters ĝ, ĥ and d̂ (E-step) and maximizing it with respect to the model parameters θ (M-step). The gradient of the bound with respect to the model parameters is given by ∂L/∂θ = −E_Q[∂E/∂θ] + E_p[∂E/∂θ],

where E is the energy function given in Eq. 1. As is evident from this expression, the gradient with respect to the model parameters contains two terms: a positive phase that depends on the data, and a negative phase, derived from the partition function of the joint, that does not. We adopt a training strategy similar to that of Salakhutdinov and Hinton (2009), in that we combine a variational approximation of the positive phase of the gradient with a block Gibbs sampling-based stochastic approximation of the negative phase. Our Gibbs sampler alternately samples, in parallel, each set of random variables: g, h and d from their conditionals given the remaining variables, then the slabs s, and finally v from p(v | s, g, h, d).
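A skeleton of the resulting stochastic-approximation step, with the model-specific pieces (mean-field inference, energy gradients, the block Gibbs transition) abstracted behind hypothetical callables:

```python
import numpy as np

def train_step(params, v_batch, sampler_state, grad_E, infer, gibbs_step,
               lr=1e-3):
    """One update combining a variational positive phase with a persistent
    block-Gibbs negative phase. grad_E, infer and gibbs_step are
    model-specific callables (illustrative names, not the authors' API)."""
    # Positive phase: mean-field posterior expectations given the data.
    q_pos = infer(params, v_batch)
    pos_grad = grad_E(params, v_batch, q_pos)
    # Negative phase: advance the persistent Gibbs chain (g, h, d, s, v).
    sampler_state = gibbs_step(params, sampler_state)
    neg_grad = grad_E(params, sampler_state['v'], sampler_state)
    # Ascend d(log p)/d(theta) = -<dE/dtheta>_data + <dE/dtheta>_model.
    for name in params:
        params[name] += lr * (neg_grad[name] - pos_grad[name])
    return params, sampler_state
```

The sign convention follows the gradient expression above: energy gradients on data samples are subtracted, those on model samples are added.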


Above, we have briefly outlined our procedure for unsupervised learning. The web of interactions between the latent random variables, particularly those between g and h, makes the unsupervised learning of the model parameters a particularly challenging problem. It is this difficulty of learning that motivates our block-wise organization of the interactions between the g and h variables. The block structure allows the interactions between g and h to remain local, with each g_{ki} interacting with relatively few h_{kj} and each h_{kj} interacting with relatively few g_{ki}. This local neighborhood structure allows the inference and learning procedures to better manage the complexity of teasing apart the latent variable interactions and adapting the model parameters to (approximately) maximize likelihood.

By using many of these blocks of local interactions, we can leverage the known tractable learning properties of models such as the RBM. Specifically, if we consider each block as a kind of super hidden unit gated by d_k, then with no interactions across blocks (apart from those mediated by the mutual connections to the visible units) the model assumes the form of an RBM.

While our chosen interaction structure makes the higher-order model learnable, one consequence is that the model is only capable of disentangling relatively local factors that appear within a single block. We suggest that one promising avenue toward more extensive disentangling is to stack multiple versions of the proposed model and consider layer-by-layer disentangling of the factors of variation present in the data. The idea is to start with local disentangling and move gradually toward disentangling non-local and more abstract factors.


The model proposed here was strongly influenced by previous attempts to disentangle factors of variation in data using latent variable models. One of the earlier efforts in this direction also used higher-order interactions of latent variables, specifically bilinear (Tenenbaum and Freeman, 2000; Grimes and Rao, 2005) and multilinear (Vasilescu and Terzopoulos, 2005) models. One critical difference between these previous attempts to disentangle factors of variation and our method is that unlike these previous methods, we are attempting to learn to disentangle from entirely unsupervised information. In this way, one can interpret our approach as an attempt to extend the subspace feature pooling approach to the problem of disentangling factors of variation.

Bilinear models are essentially linear models where the higher-level state is factored into the product of two variables. Formally, the elements of an observation x are given by x_i = Σ_{j,k} w_{ijk} y_j z_k, where y_j and z_k are elements of the two factors (y and z) representing the observation and w_{ijk} is an element of the tensor of model parameters (Tenenbaum and Freeman, 2000). The tensor w can be thought of as a generalization of the typical weight matrix found in most of the unsupervised models we have considered above. Tenenbaum and Freeman (2000) developed an EM-based algorithm to learn the model parameters and demonstrated, using images of letters from a set of distinct fonts, that the model could disentangle the style (font characteristics) from the content (letter identity). Grimes and Rao (2005) later developed a bilinear sparse coding model of a similar form, but with additional terms in the objective function to render the elements of both y and z sparse. They also require observation of the factors in order to train the model, and used the model to develop transformation-invariant features of natural images. Multilinear models are simply a generalization of the bilinear model in which two or more factors can be composed together. Vasilescu and Terzopoulos (2005) develop a multilinear ICA model, which they use to model images of faces, to disentangle factors of variation such as illumination, view (orientation of the image plane relative to the face) and the identities of the people.
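The bilinear synthesis rule x_i = Σ_{j,k} w_{ijk} y_j z_k is a single tensor contraction; a minimal NumPy illustration (our own sketch):

```python
import numpy as np

def bilinear_render(W, y, z):
    """Bilinear synthesis: x_i = sum_{j,k} W[i, j, k] * y[j] * z[k],
    e.g. y a style code and z a content code (Tenenbaum & Freeman, 2000)."""
    return np.einsum('ijk,j,k->i', W, y, z)
```

With one-hot y and z, the output is simply the (j, k)-th column of the tensor, which makes the role of W as a style-by-content dictionary explicit.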

Hinton et al. (2011) also propose to disentangle factors of variation by learning to extract features associated with pose parameters, where the changes in pose parameters (but not the feature values) are known at training time. The proposed model is also closely related to recent work (Memisevic and Hinton, 2010) in which higher-order Boltzmann machines are used as models of spatial transformations in images. While there are a number of differences between that model and ours, the most significant is our use of multiplicative interactions between latent variables. While they included higher-order interactions within the Boltzmann energy function, these were used exclusively between observed variables, dramatically simplifying the inference and learning procedures. Another major point of departure is that, instead of relying on low-rank approximations to the weight tensor, our approach employs highly structured and sparse connections between latent variables (e.g. g_{ki} does not interact with h_{k'j} or d_{k'} for k' ≠ k), reminiscent of recent work on structured sparse coding (Gregor et al., 2011) and structured sparsity-inducing norms (Bach et al., 2011). As discussed above, our use of a sparse connection structure allows us to isolate groups of interacting latent variables. Keeping the interactions local in this way is a key component of our ability to successfully learn using only unsupervised data.


We showcase the ability of our model to disentangle factors of variation by training it on a synthetic dataset, a subset of which is shown in Fig. 3 (top). Each color image is composed of one basic object of varying color, which can appear at five different positions; the constraint is that all objects in a given image must be of the same color. Additive Gaussian noise is superimposed on the resulting images to facilitate mixing of the RBM negative phase. A bilinear ssRBM with M = 8 and N = 5 should in theory have the capacity to disentangle the two factors of variation present in the data, as there are 8 possible colors and 5 possible object positions. The resulting filters are shown in Fig. 3 (bottom): the model has successfully learnt a binary encoding of color along the g-units (rows) and of position along the h-units (columns). Note that this would have been extremely difficult to achieve without multiplicative interactions between latent variables: an RBM with a comparable number of hidden units technically has the capacity to learn similar filters; however, it would be incapable of enforcing mutual exclusivity between hidden units of different color. The bilinear ssRBM, on the other hand, generates near-perfect samples (not shown), while factoring the representation for use in deeper layers.

Figure 3: (top) Samples from our synthetic dataset (before noise). In each image, a figure “X” can appear at five different positions, in one of eight basic colors. Objects in a given image must all be of the same color. (bottom) Filters learnt by a bilinear ssRBM with M = 8, N = 5, which successfully show the disentangling of color information (rows) from position (columns).
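A toy generator for data of this kind might look as follows; the image size, palette and object shape (a small patch standing in for the “X”) are illustrative choices, not the exact dataset parameters:

```python
import numpy as np

def make_dataset(n, size=16, n_colors=8, n_positions=5, noise=0.05, seed=0):
    """Synthetic images: one color per image, objects placed at a random
    subset of n_positions slots, plus additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    # Simple palette: RGB triples on the corners of the color cube.
    palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1)
                        for b in (0, 1)], dtype=float)[:n_colors]
    slots = np.linspace(2, size - 4, n_positions).astype(int)  # x-offsets
    X, colors, positions = [], [], []
    for _ in range(n):
        img = np.zeros((size, size, 3))
        c = rng.integers(n_colors)                   # one color per image
        on = rng.integers(0, 2, n_positions).astype(bool)
        for p, x0 in zip(on, slots):
            if p:
                img[6:10, x0:x0 + 2] = palette[c]    # place a small patch
        X.append(img + noise * rng.standard_normal(img.shape))
        colors.append(c)
        positions.append(on)
    return np.array(X), np.array(colors), np.array(positions)
```

Color and position labels are returned only for inspection; training remains fully unsupervised.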

We evaluate our model on the recently introduced Toronto Face Dataset (TFD) (Susskind et al., 2010), which contains a large number of black & white preprocessed facial images. These span a wide range of identities and emotions and as such, the dataset is well suited to study the problem of disentangling: models which can successfully separate identity from emotion should perform well at the supervised learning task, which involves classifying images into one of seven categories: {anger, disgust, fear, happy, sad, surprise, neutral}. The dataset is divided into two parts: a large unlabeled set (meant for unsupervised feature learning) and a smaller labeled set. Note that emotions appear much more prominently in the latter, since these are acted out and thus prone to exaggeration. In contrast, most of the unlabeled set contains natural expressions over a wider range of individuals.

In the course of this work, we have made several key refinements to the original spike-and-slab formulation. Notably, since the slab variables s_{kij} can be interpreted as coordinates in the subspace of the spike variables (which spans the set of filters W_{kij}), it is natural for these filters to be unit-norm. Each maximum likelihood gradient update is thus followed by a projection of the filters onto the unit-norm ball. Similarly, there exists an over-parametrization between the direction of W_{kij} and the sign of μ_{kij}, the parameter controlling the mean of s_{kij}. We thus constrain μ to be positive, in our case greater than 1. Similar constraints are applied on α and Λ to ensure that the variances of the visible and slab variables remain bounded. While previous work (Courville et al., 2011a) used the expected value of the spike variables as the input to classifiers, or to higher layers in deep networks, we found that the above re-parametrization consistently led to better results when using the product of the expectations of the spike and slab variables. For pooled models, we simply take the product of each binary spike with the norm of its associated slab vector.
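The constraints above amount to simple projections applied after each gradient step. A sketch under our notation (threshold values other than μ ≥ 1 are illustrative):

```python
import numpy as np

def project_params(W, mu, alpha, lam, mu_min=1.0, prec_min=1e-2):
    """Post-gradient projections described in the text: unit-norm filters,
    positive slab means, and lower-bounded precisions."""
    # Normalize each filter W[:, k, i, j] to unit norm.
    norms = np.sqrt((W ** 2).sum(axis=0, keepdims=True))
    W = W / np.maximum(norms, 1e-12)
    mu = np.maximum(mu, mu_min)          # resolves the sign over-parametrization
    alpha = np.maximum(alpha, prec_min)  # keeps slab variance bounded
    lam = np.maximum(lam, prec_min)      # keeps visible variance bounded
    return W, mu, alpha, lam
```

Because the filters are unit-norm, the slab expectations become interpretable as coordinates, which is what makes the spike-times-slab-norm features discussed above well scaled.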


We begin with a qualitative evaluation of our model, by visualizing the learned filters (inner-most dimension of the tensor W) and pooling structures. We trained a model with blocks of interacting g and h units on a weighted combination of the labeled and unlabeled training sets. Doing so (as opposed to training on the unlabeled set only) allows for greater interpretability of the results, as emotion is a more prominent factor of variation in the labeled set. The results, shown in Figure 4, clearly show global cohesion within the blocks pooled by d, with row and column structure correlating with variance in appearance/identity and in emotion.

Figure 4: Example blocks obtained with our higher-order model. The filters (inner-most dimension of tensor W) in each block exhibit global cohesion, specializing themselves to a subset of identities and emotions: {happiness, fear, neutral} (left) and {happiness, anger} (right). In both cases, g-units (which pool over columns) encode emotions, while h-units (which pool over rows) are more closely tied to identity.

We now evaluate the representation learnt by our disentangling RBM by measuring its usefulness for the task of emotion recognition. Our main objective here is to evaluate the usefulness of disentangling over traditional pooling approaches, as well as over larger, unpooled models. We thus consider ssRBMs with approximately 3000 and 5000 filters, with either (i) no pooling (i.e. one slab per spike), (ii) pooling along a single dimension (i.e. K spike variables, each pooling M slabs) or (iii) disentangling through our higher-order ssRBM (i.e. K blocks, each with g and h units arranged in an M × N grid).

We followed the standard TFD training protocol of performing unsupervised training on the unlabeled set, and then using the learnt representation as input to a linear SVM, trained and cross-validated on the labeled set. Table 1 shows the test accuracy obtained by various spike-and-slab models, averaged over the 5 folds.

                          Factored          Unfactored
Model     K     M   N     valid    test     valid    test
ssRBM     3000  1   n/a   n/a      n/a      76.0%    75.7%
ssRBM     999   3   n/a   72.9%    74.4%    74.9%    73.5%
hossRBM   330   3   3     76.0%    75.7%    75.3%    75.2%
hossRBM   120   5   5     71.4%    70.7%    74.5%    74.2%
ssRBM     5000  1   n/a   n/a      n/a      76.7%    76.3%
ssRBM     1000  5   n/a   74.2%    74.0%    75.9%    74.6%
hossRBM   555   3   3     77.6%    77.4%    76.2%    75.9%
hossRBM   200   5   5     73.3%    73.3%    75.6%    75.3%
Table 1: Classification accuracy on the Toronto Face Dataset. We compare our higher-order ssRBM (hossRBM) for various block sizes M × N against first-order ssRBMs, which pool along a single dimension of size M. The first four models contain approximately 3000 filters, while the bottom four contain approximately 5000. In both cases, we compare the effect of using the factored representation to that of the unfactored representation.

We report two sets of numbers for models with pooling or disentangling: one using the “factored representation”, the element-wise product of the spike variables with the norms of their associated slab vectors, and one using the “unfactored representation”, the higher-dimensional representation formed by considering all slab variables, each multiplied by its associated spike.
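As a sketch, the two representations can be computed from the model's posterior expectations as follows. The function name, argument names, and shapes are illustrative, not the paper's notation.

```python
import numpy as np

def factored_and_unfactored(spike_probs, slab_means):
    """spike_probs : (n_spikes,) expected spike activations.
    slab_means  : (n_spikes, n_slabs) expected slab values per spike.

    Returns the low-dimensional factored representation (each spike
    times the norm of its slab vector) and the higher-dimensional
    unfactored one (every slab value scaled by its spike)."""
    factored = spike_probs * np.linalg.norm(slab_means, axis=1)
    unfactored = (spike_probs[:, None] * slab_means).ravel()
    return factored, unfactored

# Toy example: two spikes, each with a two-dimensional slab vector.
h = np.array([0.9, 0.1])
s = np.array([[3.0, 4.0], [1.0, 0.0]])
f, u = factored_and_unfactored(h, s)
# norm of [3, 4] is 5, so the factored representation is [4.5, 0.1]
```

The factored vector has one entry per spike, while the unfactored one has one entry per slab, matching the dimensionality difference discussed above.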

We can see that the higher-order ssRBM achieves the best result, 77.4% test accuracy, using the factored representation. The fact that our model outperforms the unfactored variant confirms our disentangling hypothesis: the model has successfully learnt a lower-dimensional (factored) representation of the data that is useful for classification. For reference, baselines for this task include a linear SVM classifier on the raw pixels (Susskind et al., 2010), an MLP trained with supervised backprop (Salah Rifai, personal communication), and a deep mPoT model (Ranzato et al., 2011), which exploits local receptive fields.


We have presented a higher-order extension of the spike-and-slab restricted Boltzmann machine that factors the standard binary spike variable into three interacting factors. From a generative perspective, these interactions act to entangle the factors represented by the latent binary variables. Inference is interpreted as a process of disentangling the factors of variation in the data. As previously mentioned, we believe an important direction of future research to be the exploration of methods that gradually disentangle the factors of variation by stacking multiple instantiations of the proposed model into a deep architecture.


  • Bach et al. (2011) F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Structured sparsity through convex optimization. CoRR, abs/1109.2397, 2011.
  • Coates et al. (2011) A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2011), 2011.
  • Collobert and Weston (2008) R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, ICML 2008, pages 160–167. ACM, 2008.
  • Courville et al. (2011a) A. Courville, J. Bergstra, and Y. Bengio. Unsupervised models of images by spike-and-slab RBMs. In ICML’2011, 2011a.
  • Courville et al. (2011b) A. Courville, J. Bergstra, and Y. Bengio. A Spike and Slab Restricted Boltzmann Machine. In AISTATS’2011, 2011b.
  • Gregor et al. (2011) K. Gregor, A. Szlam, and Y. LeCun. Structured sparse coding via lateral inhibition. In Advances in Neural Information Processing Systems (NIPS 2011), volume 24, 2011.
  • Grimes and Rao (2005) D. B. Grimes and R. P. Rao. Bilinear sparse coding for invariant vision. Neural computation, 17(1):47–73, January 2005.
  • Hinton et al. (2011) G. Hinton, A. Krizhevsky, and S. Wang. Transforming auto-encoders. In ICANN’2011: International Conference on Artificial Neural Networks, 2011.
  • Hyvärinen and Hoyer (2000) A. Hyvärinen and P. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
  • Kavukcuoglu et al. (2009) K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR’09), pages 1605–1612. IEEE, 2009.
  • Kohonen (1996) T. Kohonen. Emergence of invariant-feature detectors in the adaptive-subspace self-organizing map. Biological Cybernetics, 75:281–291, 1996. ISSN 0340-1200.
  • Kohonen et al. (1979) T. Kohonen, G. Nemeth, K.-J. Bry, M. Jalanko, and H. Riittinen. Spectral classification of phonemes by learning subspaces. In ICASSP ’79., volume 4, pages 97 – 100, 1979.
  • Le et al. (2010) Q. Le, J. Ngiam, Z. Chen, D. J. hao Chia, P. W. Koh, and A. Ng. Tiled convolutional neural networks. In NIPS’2010, 2010.
  • LeCun et al. (1989) Y. LeCun, L. D. Jackel, B. Boser, J. S. Denker, H. P. Graf, I. Guyon, D. Henderson, R. E. Howard, and W. Hubbard. Handwritten digit recognition: Applications of neural network chips and automatic learning. IEEE Communications Magazine, 27(11):41–46, Nov. 1989.
  • Memisevic and Hinton (2010) R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473–1492, June 2010.
  • Ranzato and Hinton (2010) M. Ranzato and G. H. Hinton. Modeling pixel means and covariances using factorized third-order Boltzmann machines. In CVPR’2010, pages 2551–2558, 2010.
  • Ranzato et al. (2010) M. Ranzato, V. Mnih, and G. Hinton. Generating more realistic images using gated MRF’s. In NIPS’2010, 2010.
  • Ranzato et al. (2011) M. Ranzato, J. Susskind, V. Mnih, and G. Hinton. On deep generative models with applications to recognition. In CVPR’2011, 2011.
  • Salakhutdinov and Hinton (2009) R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In AISTATS’2009, volume 5, pages 448–455, 2009.
  • Saul et al. (1996) L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61–76, 1996.
  • Susskind et al. (2010) J. Susskind, A. Anderson, and G. E. Hinton. The Toronto face dataset. Technical Report UTML TR 2010-001, U. Toronto, 2010.
  • Tenenbaum and Freeman (2000) J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.
  • Vasilescu and Terzopoulos (2005) M. A. O. Vasilescu and D. Terzopoulos. Multilinear independent components analysis. In CVPR’2005, volume 1, pages 547–553, 2005.
  • Wang et al. (2009) H. Wang, M. M. Ullah, A. Kläser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In British Machine Vision Conference (BMVC), pages 127–127, London, UK, September 2009.