Predictive Encoding of Contextual Relationships for Perceptual Inference, Interpolation and Prediction
We propose a new neurally-inspired model that learns to encode the global relationship context of visual events across time and space and to use this contextual information to modulate the analysis-by-synthesis process in a predictive coding framework. The model learns latent contextual representations by maximizing the predictability of visual events based on local and global contextual information, through both top-down and bottom-up processes. In contrast to standard predictive coding models, the prediction error in this model is used to update the contextual representation but does not alter the feedforward input to the next layer, and is thus more consistent with neurophysiological observations. We establish the computational feasibility of this model by demonstrating its abilities in several respects. We show that our model outperforms state-of-the-art gated Boltzmann machines (GBM) in estimating contextual information. Our model can also interpolate missing events or predict future events in image sequences while simultaneously estimating contextual information. We show that it achieves state-of-the-art prediction accuracy in a variety of tasks and can interpolate missing frames, a capability that GBM lacks.
|Computer Science Department|
|Electrical Engineering Department|
|Computer Science Department|
|Tai Sing Lee|
|Center for the Neural Basis of Cognition and Computer Science Department|
|Carnegie Mellon University|
1 Introduction
In theoretical neuroscience, it has been proposed that, in order to rapidly process the constant influx of sensory inputs, which are complex, noisy and full of ambiguity, the brain needs to learn internal models of the world and use them to generate expectations and predictions, based on memory and context, that speed up and facilitate inference. Comprehension is achieved when the synthesized prediction or expectation, mediated by recurrent feedback from the higher visual areas to the early visual areas, explains the incoming signals (Mumford, 1992). This framework was popularized by Rao & Ballard (1999) in psychology and neuroscience as the predictive coding theory, and can be understood more generally in the framework of hierarchical Bayesian inference (Lee & Mumford, 2003). The predictive coding idea has been generalized to non-visual systems (Bar, 2007; Todorovic et al., 2011), and even to a “unified theory” of the brain (Friston, 2010). However, the computational utility and power of these conceptual models remain to be elucidated.
In this work, we propose a new framework that can learn internal models of contextual relationships between visual events in space and time. These internal models of context allow the system to interpolate missing frames in image sequences or to predict future frames. The internal models, learned as latent variables, help accomplish these tasks by rescaling the weights of the basis functions represented by the neurons during the synthesis process in the framework of predictive coding. This model is inspired by and related to Memisevic and Hinton's gated Boltzmann machines (GBM) and gated autoencoders (GAE) (Memisevic, 2013; Susskind et al., 2011; Memisevic, 2011), which also model spatiotemporal transformations in image sequences. Their gated machines, modeling 3-way multiplicative interactions, make strong assumptions about the role of neural synchrony in learning and inference. Generalizing that model to $N$ frames is problematic because it involves $(N+1)$-way multiplicative interactions. As a result, GBMs have primarily been used to learn the transformation between two frames. Prediction in GBM is accomplished by applying the transformation estimated from the first two frames to the second frame to generate/predict a third frame. It cannot interpolate a missing frame in the middle given a frame before and a frame after it. Our model explicitly defines a cost function based on mutual predictability. It is more flexible and can propagate information in both directions to predict future frames and interpolate missing frames in a unified framework.
In our formulation, synchrony is not required. Evidence from multiple frames is weighted and then summed together, in a way similar to spatiotemporal filtering of input signals by visual cortical neurons in the primary visual cortex. The inference of the latent context variables is nonlinear, accomplished by minimizing the prediction error between the synthesized image sequence and the observed sequence. This is a crucial difference from GBM, which, like most deep learning networks (Hinton et al., 2006), relies on one-pass feedforward computation for inference. Our model, by exploiting top-down and bottom-up processes to minimize the same predictive coding cost function during both learning and inference, is able to estimate more meaningful and accurate contextual information.
Our framework of predictive coding under contextual modulation allows the model to accomplish functions similar to GBM's, but also makes it more flexible and enables additional functions, such as integrating more than two frames and performing interpolation. Our model is also more biologically plausible than the standard predictive coding model (Rao & Ballard, 1999) in that the prediction error signals are used to update the contextual representation only, and do not replace the feedforward input to the next layer. This model also provides a framework for understanding how contextual modulation can influence certain constructive and generative aspects of visual perception.
2 Description of the Model
The proposed model seeks to learn relationships between visual events in a spatial or temporal neighborhood to provide contextual modulation for image reconstruction, interpolation and prediction. It can be conceptualized as an autoencoder with contextual modulation, or a context-dependent predictive coding model. A predictive coding model states that the brain continually generates models of the world based on context and memory to predict sensory input. It synthesizes a predicted image and feeds back to match the input image represented in the lower sensory areas. The mismatch between the prediction and the input produces a residue signal that can be used to update the top-down models to generate predictions to explain away the inputs (Mumford, 1992; Rao & Ballard, 1999). Our model extends this “standard” predictive coding model, using the residue signals to update the contextual representation, which in turn modulates the image synthesis process by rescaling the basis functions of the neurons adaptively so that the synthesized images for all the frames in a temporal neighborhood maximally predict one another.
The problem can be formulated as the following energy function:

$$E(x, z; \theta) = \sum_t \| x_t - \hat{x}_t \|^2 + \lambda \| z \|_1$$

where $x$ is the input signal, $\hat{x}$ is the predicted signal, $z$ denotes the contextual latent variables, and $\theta$ is a collection of parameters of the model to be learned, including the feedforward connections, or receptive fields, $W$ of the neurons in the hidden layer, and the feedback connections $U$ from $z$ that modulate the generation of $\hat{x}$ by the hidden units. The second term in the function is a regularization term that makes the contextual latent variables sparse; $\lambda$ serves to balance the importance of the prediction error term and the regularization term. Note that this cost function can be considered a generalization of the classical energy functional or Bayesian formulation in computer vision (Horn, 1986; Blake & Zisserman, 1987; Geman & Geman, 1984), with the “data term” replaced by a generative model and the “smoothness prior” term replaced by a set of contextual relationship priors.
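As a concrete illustration, this cost can be sketched in a few lines of numpy; the names here (`x`, `x_hat`, `z`, `lam`) mirror the notation assumed above rather than any released implementation:

```python
import numpy as np

def energy(x, x_hat, z, lam=0.1):
    """Prediction-error term plus an L1 sparsity penalty on the context z."""
    return np.sum((x - x_hat) ** 2) + lam * np.sum(np.abs(z))

# toy check: two 2-pixel frames, an all-zero prediction, a sparse context
x = np.array([[1.0, 2.0], [3.0, 4.0]])
x_hat = np.zeros_like(x)
z = np.array([1.0, -1.0])
e = energy(x, x_hat, z, lam=0.5)  # 30.0 prediction error + 1.0 penalty
```

Minimizing this energy over $z$ (and, during learning, over $\theta$) is the computational core of the model.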
The objective of the model is to learn a set of relationship contexts that can modulate the predictive synthesis process to maximize the mutual predictability of the synthesized images across space and/or time. We will now describe how the prediction is generated in this model. The model's information flow can be depicted as a circuit diagram, shown in Figure 1 (left panel). It consists of an input (visible) layer $x$, a hidden layer $h$ that performs a spatiotemporal filtering operation on the input layer, and a prediction layer that represents the prediction $\hat{x}$ generated by $h$ with contextual modulation from the latent contextual representation layer $z$. Prediction error signals are propagated up to drive the updating of the contextual representation $z$.
Units in the visible layer, $x$, represent a sequence of images, with $x_t$ (a vector of pixels) representing the image frame at time $t$, so that $x = (x_1, \dots, x_T)$ indicates a sequence of $T$ video image frames. Note that in the visible layer it is not necessary for all the visual events to be present, since the model possesses the ability to predict the missing events from available partial observations based on the principle of mutual predictability.
Units in the hidden layer, $h$, are defined as:

$$h_t = \frac{1}{|N(t)|} \sum_{t' \in N(t)} W_{t'-t} \, x_{t'}$$

where $K$ is the number of units in each $h_t$, $N(t)$ defines the index set of $t$'s neighbors that provide the local temporal support to $h_t$, and $|\cdot|$ returns the size of a set. Each $W_{t'-t}$ is a weight matrix to be learned as parameters. Each row of $W$ can be viewed as a feature filter for a particular visual event in an image frame; it can be considered the feedforward weight, or filter, of a hidden neuron, i.e. that neuron's spatial receptive field at a particular time frame. The corresponding rows of $W$ across the frames that drive a unit of $h_t$ form the spatiotemporal receptive field of that neuron, whose activity is the response of its spatiotemporal filter, at frame $t$, to a particular sequence of image frames in $x$. Our definition of the neighborhood is flexible. The temporal neighborhood can be made causal, e.g. including only frames up to $t$ or $t-1$, and the model would still work. Here, we use a non-causal, symmetric neighborhood to underscore the fact that our model can be used to model spatial context (which is symmetric), and can go back and forth in time to modify our interpretation of past events based on current evidence and recent history.
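A minimal numpy sketch of this averaged spatiotemporal filtering; the offset-indexed filter bank `W` (one matrix per temporal offset) is our notational assumption:

```python
import numpy as np

def hidden_response(x, W, t, radius=1):
    """Average spatiotemporal filtering over the temporal neighborhood N(t).

    x      : (T, D) image sequence, one flattened frame per row
    W      : dict mapping temporal offset k -> (K, D) filter matrix
    t      : frame index whose hidden response we compute
    radius : half-width of the (non-causal, symmetric) neighborhood
    """
    T = x.shape[0]
    neighbors = [t2 for t2 in range(t - radius, t + radius + 1) if 0 <= t2 < T]
    K = next(iter(W.values())).shape[0]
    h = np.zeros(K)
    for t2 in neighbors:
        h += W[t2 - t] @ x[t2]       # each offset has its own filter bank
    return h / len(neighbors)        # normalize by |N(t)|

# tiny example: 3 frames of 4 pixels, 2 hidden units
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
W = {k: rng.standard_normal((2, 4)) for k in (-1, 0, 1)}
h1 = hidden_response(x, W, t=1)
```

Shrinking `radius` to cover only past frames gives the causal variant mentioned above.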
An additional crucial hidden layer, with a set of latent variables $z$, is used to model contextual information. $z$ is computed by minimizing the residue errors between the sequence of reconstructed image frames and the input frames. $z$ is filtered by a weight matrix $U$ (the dot product with a row of this matrix) to provide feedback that rescales the contribution of each hidden unit's activity to generating the prediction signal $\hat{x}_t$. In Section 4.2, we study the contextual representation in greater detail.
The prediction $\hat{x}_t$ in the prediction layer is given by

$$\hat{x}_t = W^{\top} \left( h_t \odot (U z) \right)$$

where $U$ is a set of weights, or basis functions, that filter the contextual representation $z$ to generate a modulating signal $m = Uz$ for each $h_t$, and $\odot$ is an element-wise product. The contribution of each neuron to the predicted $\hat{x}_t$ is thus its activity due to feedforward input, rescaled by the context modulation $m$, producing a weight for its spatial synthesis basis function (a row of $W$). The modulator $m$ can be viewed as a high-dimensional distributed representation of context, the structure of which is modeled by the low-dimensional contextual representation $z$, which is made sparse by the sparsity term.
Combining all the equations together, the prediction generated by the context-dependent predictive coding model is given by

$$\hat{x}_t = W^{\top} \left( \left( \frac{1}{|N(t)|} \sum_{t' \in N(t)} W_{t'-t} \, x_{t'} \right) \odot (U z) \right)$$
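Under the same assumed notation, the synthesis path (with the feedforward filtering already done) is a few matrix products; the shapes and the names `U`, `W_syn` are illustrative:

```python
import numpy as np

def predict_frame(h_t, z, U, W_syn):
    """Context-modulated synthesis: x_hat_t = W_syn^T (h_t * (U z)).

    h_t   : (K,) feedforward hidden activity for frame t
    z     : (M,) low-dimensional contextual representation
    U     : (K, M) weights mapping context to a high-dim modulator
    W_syn : (K, D) synthesis basis functions (one row per hidden unit)
    """
    m = U @ z                    # high-dimensional distributed modulator
    return W_syn.T @ (h_t * m)   # rescaled activities weight the basis functions

rng = np.random.default_rng(1)
K, M, D = 6, 3, 8
h_t, z = rng.standard_normal(K), rng.standard_normal(M)
U, W_syn = rng.standard_normal((K, M)), rng.standard_normal((K, D))
x_hat = predict_frame(h_t, z, U, W_syn)
```

The multiplicative gating here is what lets a low-dimensional $z$ reweight the whole basis set without any 3-way parameter tensor.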
Computationally, the update of the contextual latent variables $z$ is driven by the residue signals $x_t - \hat{x}_t$. The model can also be considered a factor graph, as shown in Figure 1 (right panel), together with its expansion for a sequence of arbitrary length $T$. Each factor node (represented by the solid squares) corresponds to the mutual predictability, defined by Equation 1, between consecutive frames modulated by the contextual representation $z$. The contextual representation will evolve over time, so priors such as a smoothness constraint can be imposed on the temporal evolution of $z$, as shown in the graphical model, though no such priors are imposed in our current implementation. At the abstract graphical-model level, our model is very similar to the autoencoder, as well as to Memisevic and Hinton's gated Boltzmann machines (Susskind et al., 2011; Memisevic, 2011). But our model differs from GBM in its concrete formulation, as well as in its learning and inference algorithms, as discussed in the introduction.
3 Description of the Algorithms
In this section, we describe the learning and inference algorithms developed for our model. Bottom-up and top-down estimations are involved during both inference and learning.
3.1 Unsupervised Parameter Learning
The training dataset is composed of image sequences, each assumed to be an i.i.d. sample from an unknown distribution. The objective is to solve the following optimization problem:

$$\min_{\theta, \{z^{(i)}\}} \sum_{i} E(x^{(i)}, z^{(i)}; \theta)$$

where $x^{(i)}$ is the $i$-th training sequence and $z^{(i)}$ its contextual representation.
We adopt an EM-like algorithm that alternates between updating the parameters $\theta$ and imputing the hidden variables $z$, each while keeping the other fixed.
Update $\theta$. We use Stochastic Gradient Descent (SGD) to update $\theta$ based on the following update rule:

$$\Delta\theta_{\tau} = \mu \, \Delta\theta_{\tau-1} - \eta \, \nabla_{\theta} \sum_{i \in B_{\tau}} E(x^{(i)}, z^{(i)}; \theta), \qquad \theta_{\tau+1} = \theta_{\tau} + \Delta\theta_{\tau}$$

where the free parameter $\eta$ is the learning rate, $B_{\tau}$ defines the mini-batch used for training at step $\tau$, and $\mu \, \Delta\theta_{\tau-1}$ is the momentum term weighted by a free parameter $\mu$. The momentum term helps to avoid oscillations during the iterative update procedure and speeds up learning. All free parameters in the experiments were chosen under the guidance of Hinton (2010).
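The update rule is ordinary SGD with momentum; a self-contained sketch on a toy quadratic objective (which stands in for the model's energy over a mini-batch):

```python
import numpy as np

def sgd_momentum_step(theta, grad, velocity, eta=0.05, mu=0.5):
    """One SGD update with momentum: v <- mu*v - eta*grad; theta <- theta + v."""
    velocity = mu * velocity - eta * grad
    return theta + velocity, velocity

# minimize f(theta) = ||theta||^2 as a stand-in objective
theta = np.array([1.0, -2.0])
v = np.zeros_like(theta)
for _ in range(200):
    grad = 2 * theta               # gradient of ||theta||^2 on a "mini-batch"
    theta, v = sgd_momentum_step(theta, grad, v)
```

The values `eta=0.05` and `mu=0.5` match the settings reported in Section 4.1.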
The algorithm is implemented using Theano (Bergstra et al., 2010) which provides highly optimized symbolic differentiation for efficient and automatic gradient calculation with respect to the objective function. The idea of denoising (Vincent et al., 2008) is also used to learn more robust filters.
Estimate $z$. Given fixed $\theta$, we estimate the contextual representation for each sequence by solving the following optimization problem independently and in parallel:

$$z^{*} = \arg\min_{z} \sum_t \| x_t - \hat{x}_t \|^2 + \lambda \| z \|_1$$

where $\hat{x}_t$ is computed by Eqn.(4). To better exploit the quadratic structure of the objective function, we solve this convex optimization problem using the more efficient quasi-Newton Limited-memory BFGS (L-BFGS) algorithm instead of gradient descent (Ngiam et al., 2011).
During training with each batch of data, we first update the parameters $\theta$ using one step of stochastic gradient descent, then iterate at most five steps of L-BFGS to estimate the hidden variables $z$.
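The inner L-BFGS step can be sketched with scipy; for simplicity this sketch uses a smooth quadratic penalty in place of the L1 sparsity term, and a linear map `A` as a toy stand-in for the model's synthesis path:

```python
import numpy as np
from scipy.optimize import minimize

def energy(z, x, A, lam=0.1):
    """Prediction error plus a smooth surrogate for the sparsity penalty."""
    r = x - A @ z
    return r @ r + lam * z @ z

def estimate_z(x, A, z0, lam=0.1):
    """Impute the contextual variables with at most 5 L-BFGS iterations."""
    res = minimize(energy, z0, args=(x, A, lam),
                   method='L-BFGS-B', options={'maxiter': 5})
    return res.x

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
z_true = np.array([1.0, -0.5, 2.0])
x = A @ z_true                      # noiseless observation for the toy problem
z = estimate_z(x, A, np.zeros(3), lam=1e-6)
```

On a quadratic of this size, a handful of L-BFGS iterations already drives the energy close to zero, which is why capping the inner loop at five steps is cheap yet effective.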
3.2 Inference with Partial Observation
Inference with partial observation refers to the prediction or reconstruction of a missing image frame by the trained model, given observed neighboring frames in the sequence. This problem is posed as an optimization problem that simultaneously estimates the latent variables $z$ for the contextual representation and the missing event/frame $x_t$:

$$\min_{z, \, x_t} \sum_{t'} \| x_{t'} - \hat{x}_{t'} \|^2 + \lambda \| z \|_1$$
This optimization problem can be solved efficiently and iteratively by an alternating top-down and bottom-up estimation procedure. The top-down estimation “hallucinates” a missing event based on the neighboring events and the higher-level contextual representation. The bottom-up procedure uses the prediction residue error to update the contextual representation. Specifically, minimizing Eqn.(9) is realized by alternately and iteratively estimating $z$ and $x_t$.
Estimate $z$. Given the learned $\theta$ and the current estimate of $x_t$, we use the same method as in Eqn.(8).
Estimate $x_t$. Given the learned $\theta$ and the current estimate of $z$, we estimate the missing event/frame by solving the following optimization problem:

$$x_t^{*} = \arg\min_{x_t} \sum_{t'} \| x_{t'} - \hat{x}_{t'} \|^2$$

While Eqn.(4) considers only the prediction of $x_t$, this optimization problem also factors in the role of $x_t$ in predicting/constructing its neighbors. Notice that this objective function is a standard quadratic function in $x_t$, which has a closed-form solution in one step. For a video sequence, predicting a future frame and interpolating a missing frame are thus formulated and accomplished in a unified framework.
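The closed-form step can be sketched for a simplified quadratic of the form $\|x - a\|^2 + \|Bx - c\|^2$, where `a` plays the role of the top-down prediction of the missing frame and `B` the (assumed linear) map by which that frame predicts its neighbors:

```python
import numpy as np

def impute_frame(a, B, c):
    """Closed-form minimizer of ||x - a||^2 + ||B x - c||^2.

    a : (D,)   top-down prediction of the missing frame from its neighbors
    B : (P, D) linear map by which the missing frame predicts its neighbors
    c : (P,)   targets those neighbor predictions must match
    """
    D = a.shape[0]
    # gradient = 2(x - a) + 2 B^T (B x - c) = 0  =>  (I + B^T B) x = a + B^T c
    return np.linalg.solve(np.eye(D) + B.T @ B, a + B.T @ c)

rng = np.random.default_rng(3)
D, P = 5, 4
B = rng.standard_normal((P, D))
x_true = rng.standard_normal(D)
# consistent targets: with a = x_true and c = B x_true, the minimizer is x_true
x = impute_frame(x_true, B, B @ x_true)
```

A single linear solve per iteration is what makes the alternating inference cheap.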
4 Experimental Results
4.1 Receptive Field Learning
In the first experiment, we trained our model on movies synthesized from natural images. Each movie sequence exhibited a translation, rotation or scaling transformation. We trained models on each type of transformation independently, as well as on a mixture of the three. We will show the feedforward filters of models trained with three frames ($T=3$). The algorithm, however, is not limited to three frames, and we will also show results of the model trained with longer sequences, such as six frames ($T=6$).
The images used to generate the training movie sequences were random samples from the whitened natural images used in Olshausen & Field (1997). For translation or rotation, the patch size was pixels. Translation steps were sampled uniformly from the interval [-3, 3] (in pixels), and rotation angles were sampled uniformly at intervals. For scaling and for the mixture-of-motions case, the patch size was pixels. The scaling ratio was sampled uniformly from . For the mixture of motions, the training set was simply a combination of all three types of single-transformation movies, each with a constant transformation parameter. For models trained on a single type of motion, we used 25 contextual representation units in $z$ and 20000 training sequences; for models trained with all three types of motion, we used 75 contextual representation units and 30000 training sequences. We used the unsupervised parameter learning algorithm described earlier with a learning rate ($\eta$) of 0.05 and momentum ($\mu$) of 0.5. Every model was trained for 500 epochs.
Figure 2 shows that the feedforward filters (or receptive fields) learned from translation resemble Fourier bases with a quadrature phase difference between frames. The filters learned from rotation are Fourier bases in polar coordinates, also with a quadrature phase shift in polar angle between frames. The filters learned from scaling resemble rays emanating from the center or circles contracting toward the center, reflecting the trajectories of points during scaling. The filters trained with the motion mixture appear to encode the transformations in a distributed manner using localized Gabor filters, similar to the receptive fields of simple cells in the primary visual cortex. Figure 2 also shows filters trained with six-frame rotation sequences, demonstrating that the model can learn filters for longer sequences. We found that training time scales linearly with $T$.
4.2 Understanding the Contextual Representation
To understand the information encoded in the contextual relationship latent variables $z$, we used the t-SNE method (Van der Maaten & Hinton, 2008) to see how pattern content and transformation content are clustered in a low-dimensional space. We applied the model pre-trained on the motion mixture to a combined test set of 6,000 synthetic movies generated by randomly translating, rotating, or scaling image patches. The image patches were randomly sampled from 3 different datasets (MNIST, whitened natural images, and bouncing balls). Translation steps were no less than 1 pixel per frame, the rotation angles were no less than , and the scaling ratios were sampled from ranges chosen to keep the different transformations distinct.
We visualize the activities in the $z$-layer using t-SNE in Figure 3, in response to sequences from the three databases. The content from the three databases (natural images, balls and MNIST) is all mixed together, indicating that the latent variables do not discriminate the image patterns. On the other hand, the transformations are relatively well clustered and segregated, suggesting that they are distinctly encoded in, and can be decoded from, $z$.
To investigate how transformations are represented in $z$, we trained 3 SVMs on $z$ to decode each of the three transformations (rotation, translation and scaling). For each test sequence, we inferred the contextual representation $z$, computed the probability output of each of the three SVMs, and chose the classification with the highest probability. All the SVMs were trained using only the dot product as the kernel function. The confusion matrix shown in Table 4.2 suggests that the contextual representations encode content-invariant transformation information.
|Model|Ours|GBM|GAE|
|Time|30 min|2 hours|15 min|
We also compared the representational power of the inferred $z$ in our model, computed using bottom-up and top-down processing, with the transformation latent variables in the GBM and GAE, computed using one-pass feedforward computation. We compared a 2-frame version of our model with the 3-way (2-frame) GBM and GAE (we used the CPU implementation of GBM from http://www.cs.toronto.edu/~rfm/factored/ and the Theano implementation of GAE from http://www.iro.umontreal.ca/~memisevr/code.html, run on CPU) and trained 3 SVMs for each model to decode the type of transformation. As shown in Table 4.2, our model is comparable to, and in fact outperforms, those two models, while its training time is comparable to GAE's and shorter than GBM's.
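The decoding protocol can be sketched with sklearn; three synthetic clusters stand in for the inferred contextual codes of rotation, translation and scaling sequences (the data layout is illustrative, not taken from the experiments):

```python
import numpy as np
from sklearn.svm import SVC

# toy stand-in for decoding transformation type from contextual codes z:
# three well-separated clusters play the role of the three transformations
rng = np.random.default_rng(4)
centers = np.array([[3.0, 0.0], [0.0, 3.0], [-3.0, -3.0]])
Z = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)

clf = SVC(kernel='linear', probability=True).fit(Z, y)  # dot-product kernel
probs = clf.predict_proba(centers)   # probability outputs, as in the protocol
pred = probs.argmax(axis=1)          # pick the most probable transformation
```

Choosing the class with the highest SVM probability mirrors the selection rule described above.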
In addition, we trained a linear regression model using the contextual representation $z$ as the regressor to predict/estimate the transformation parameters, namely translation velocity, angular rotation velocity and scaling factor; $v$ denotes these three transformation parameters. The relative regression error is defined as $\| \hat{v} - v \| / \| v \|$. The cumulative distribution functions (CDFs) of the relative regression errors in Figure 3 show that the contextual representation contains sufficient information about the transformation parameters.
4.3 Prediction and Interpolation
A crucial feature of our model is its ability to predict and interpolate. Note that the GBM can perform prediction but not interpolation. We first tested the model's interpolation and prediction abilities using sequences generated from three datasets: face images (Gourier et al., 2004), MNIST handwritten digits, and natural images (Olshausen & Field, 1997). We then evaluated its performance in predicting and interpolating 3D rotation on the NORB dataset (LeCun et al., 2004).
In our test, we drew an image from one of the three databases and applied one of the transformations to generate a second, transformed image. This pair of images was fed into nodes $x_1$ and $x_2$ of our model, which had been trained using the mixture transformation sequences of natural images in the previous section. The bottom-up/top-down process simultaneously infers the latent contextual variables $z$ and the subsequent frame $x_3$ as the model's prediction. In GBM, the transformation would first be inferred and then applied to the second frame to generate the third frame. In contrast, the prediction of the next frame in our model is not limited to the second frame, but can be based on as many previous frames as stipulated by the model.
Figure 4 shows the prediction results (third row) given the first and second frames. They demonstrate our model's ability to accomplish prediction using the top-down/bottom-up algorithm, and show that the contextual representation encodes sufficient content-invariant transformation information to provide the contextual modulation needed to generate predictions.
We used a parameter setting similar to that in Michalski et al. (2014). Each chirp sequence contained 160 frames in one second, partitioned into 16 non-overlapping 10-frame intervals, yielding 10-dimensional input vectors. The frequency of the chirp signals varies between 15Hz and 20Hz. The task is to predict the subsequent intervals given the first 5 intervals. The RMSE per interval for each of the subsequent predicted intervals is shown in Table 4.
Interpolation can be accomplished in the same way. When $x_1$ and $x_3$ are provided to the model, $z$ and $x_2$ are simultaneously inferred. The second row of Figure 5 shows the interpolation results; GBM cannot perform such a computation.
Next, we tested our model on the more challenging NORB dataset, which contains images of objects in 3D rotation under different views and lighting conditions. There are 5 categories of objects (animals, humans, cars, trucks, planes), with 10 objects per category, each imaged from 18 azimuth directions and 9 camera elevations under 6 illuminations. We trained a model for each of the five object categories. Within each category, the data were divided into a training set and a test set based on elevation: the test set for each model included all the images taken at two particular elevations (4th and 6th), and the training set included image sequences taken at the 7 other elevations. At each elevation, the camera was fixed and the object rotated in 3D across frames. To train the model for each category, we took sequences of three successive frames (each representing a view from a particular azimuth) of each object under a particular condition to learn a 3-input model. We tested the model with two input images in a sequence taken from one of the untrained elevations. The prediction results on the NORB dataset were obtained in a manner similar to the face and digit cases, by presenting the image frames to infer $z$ and the missing frame simultaneously. The prediction results are shown in the third column of each object instance in Figure 6. These results are comparable to those reported for GBM, in fact with slightly better performance (see Table 3). For all the prediction and interpolation results, we normalized the output images by matching the pixel histogram of the output image to that of the input images using histogram-matching techniques.
The receptive fields of the model trained with three or more consecutive frames on this database exhibited a quadrature phase relationship between adjacent frames, i.e. the filters for adjacent frames have a phase shift of 90 degrees. With only the responses of these filters to the two observed frames, but missing the middle frame, the direction of motion is underdetermined: the movement could go in either direction. The model fails to interpolate in this case. We improved the temporal resolution of the model by first training a pair of filters on the sequences, which develop a quadrature phase relationship, and then fixing those filters while training the remaining filters with more sequences. This allowed adjacent filters in the model to have a finer phase difference and yielded reasonable interpolation results, as shown in Figure 6. A more elegant solution to this problem requires further investigation.
We report the performance (root mean square error) of our model on prediction and interpolation quantitatively in Table 4.3. We also used the 3-way GBM and GAE in the prediction test, as described in Section 4.2. All the models were trained using the mixture of motion sequences and tested on other image sequences. The results suggest that our model is comparable to, and in fact slightly outperforms, the gated machines in prediction. Additionally, our model can perform interpolation, which is not possible for GBM.
5 Discussion
In this paper, we have presented a new predictive coding framework that can learn to encode contextual relationships to modulate analysis by synthesis during perception. As discussed in the introduction, this model is distinct from the standard predictive coding model (Rao & Ballard, 1999) and, in addition to being conceptually novel, might be more biologically plausible. Our model shares some similarities with the autoencoder but differs in that (1) our model uses the contextual representation to gate the prediction synthesis process, while the autoencoder does not utilize context, and (2) the autoencoder relies solely on fast feedforward computation, while our model uses a fast top-down and bottom-up procedure to update the contextual representations that modulate image synthesis during inference. Such contextual variables can be considered a generalized form of the smoothness constraint in early vision, and can be implemented locally by interneurons in each visual area. A key contribution of this work is demonstrating, for the first time, the usefulness of local contextual modulation in the predictive coding or autoencoder framework.
Recurrent neural networks (RNNs) still provide state-of-the-art performance in sequence modeling, but they require a lot of data to train. Thus, despite their power in modeling short and long sequences, particularly when trained on large datasets, they falter on a more limited dataset like NORB. The predictive encoder proposed here learns contextual latent variables that provide information about transformations explicitly, whereas an RNN's latent variables represent only the content of images, with the transformations encoded in the connections. The transformations encoded in the latent variables of the predictive encoder are directly related to perceptual variables such as motion velocity, optical flow and binocular disparity.
Our model shares similar goals with the gated Boltzmann machine in learning transformational relationships, but it uses a different mechanism, with a unified framework for inference, interpolation and prediction. We consider the GBM a state-of-the-art method for learning transformations from limited amounts of data, and have thus focused our quantitative evaluation of the predictive encoder mostly against the performance of GBM. We found that our model is comparable or superior to the gated Boltzmann machine in inference and prediction, while additionally being able to perform interpolation. Our model relies on standard spatiotemporal filtering in the feedforward path, without the need for the N-way multiplicative interactions or neural synchrony required by the GBM. It is thus simpler in conceptualization and may be more biologically plausible. It is important to recognize that our model is currently just a module that uses latent variables to encode local spatiotemporal context and transformations. Such a module could be used to build recurrent neural networks that model the temporal evolution of the contextual variables, or be stacked to form deep networks that learn hierarchical features, each layer with its own local spatiotemporal contextual representations.
Acknowledgments
This research was supported by research grants 973-2015CB351800, NSFC-61272027 (to YZ Wang), and NSF 1320651 and NIH R01 EY022247 (to TS Lee). Both Wang's and Lee's labs acknowledge the support of NVIDIA Corporation through the donation of GPUs for this research. Mingmin Zhao and Chengxu Zhuang were supported by Peking University and Tsinghua University undergraduate scholarships, respectively, when they visited Carnegie Mellon to carry out this research.
References
- Bar (2007) Bar, Moshe. The proactive brain: using analogies and associations to generate predictions. Trends in Cognitive Sciences, 2007.
- Bergstra et al. (2010) Bergstra, James, Breuleux, Olivier, Bastien, Frédéric, Lamblin, Pascal, Pascanu, Razvan, Desjardins, Guillaume, Turian, Joseph, Warde-Farley, David, and Bengio, Yoshua. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), 2010.
- Friston (2010) Friston, Karl. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 2010.
- Gourier et al. (2004) Gourier, Nicolas, Hall, Daniela, and Crowley, James L. Estimating face orientation from robust detection of salient facial structures. In FG Net Workshop on Visual Observation of Deictic Gestures. FGnet (IST–2000–26434) Cambridge, UK, 2004.
- Hinton (2010) Hinton, Geoffrey. A practical guide to training restricted boltzmann machines. Momentum, 2010.
- Hinton et al. (2006) Hinton, Geoffrey, Osindero, Simon, and Teh, Yee-Whye. A fast learning algorithm for deep belief nets. Neural computation, 2006.
- LeCun et al. (2004) LeCun, Yann, Huang, Fu Jie, and Bottou, Leon. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition, CVPR 2004. Proceedings of the IEEE Computer Society Conference on, 2004.
- Lee & Mumford (2003) Lee, Tai Sing and Mumford, David. Hierarchical bayesian inference in the visual cortex. JOSA A, 2003.
- Memisevic (2013) Memisevic, R. Learning to relate images. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1829–1846, Aug 2013. ISSN 0162-8828. doi: 10.1109/TPAMI.2013.53.
- Memisevic (2011) Memisevic, Roland. Gradient-based learning of higher-order image features. In ICCV. IEEE, 2011.
- Michalski et al. (2014) Michalski, Vincent, Memisevic, Roland, and Konda, Kishore. Modeling deep temporal dependencies with recurrent “grammar cells”. In NIPS, pp. 1925–1933, 2014.
- Mumford (1992) Mumford, David. On the computational architecture of the neocortex. Biological cybernetics, 1992.
- Ngiam et al. (2011) Ngiam, Jiquan, Coates, Adam, Lahiri, Ahbik, Prochnow, Bobby, Le, Quoc V, and Ng, Andrew Y. On optimization methods for deep learning. In ICML, 2011.
- Olshausen & Field (1997) Olshausen, Bruno A and Field, David J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 1997.
- Rao & Ballard (1999) Rao, Rajesh PN and Ballard, Dana H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience, 1999.
- Susskind et al. (2011) Susskind, Joshua, Memisevic, Roland, Hinton, Geoffrey, and Pollefeys, Marc. Modeling the joint density of two images under a variety of transformations. In CVPR. IEEE, 2011.
- Todorovic et al. (2011) Todorovic, Ana, van Ede, Freek, Maris, Eric, and de Lange, Floris P. Prior expectation mediates neural adaptation to repeated sounds in the auditory cortex: an meg study. The Journal of neuroscience, 2011.
- Van der Maaten & Hinton (2008) Van der Maaten, Laurens and Hinton, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
- Vincent et al. (2008) Vincent, Pascal, Larochelle, Hugo, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.