Recurrent Ladder Networks


Isabeau Prémont-Schwarz, Alexander Ilin, Tele Hotloo Hao,
Antti Rasmus, Rinu Boney, Harri Valpola
The Curious AI Company
{isabeau,alexilin,hotloo,antti,rinu,harri}@cai.fi
Abstract

We propose a recurrent extension of the Ladder networks Rasmus et al. (2015) whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.

 


31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Many cognitive tasks require learning useful representations on multiple abstraction levels. Hierarchical latent variable models are an appealing approach for learning a hierarchy of abstractions. The classical way of learning such models is by postulating an explicit parametric model for the distributions of random variables. The inference procedure, which evaluates the posterior distribution of the unknown variables, is then derived from the model – an approach adopted in probabilistic graphical models (see, e.g., Bishop (2006)).

The success of deep learning can, however, be explained by the fact that popular deep models focus on learning the inference procedure directly. For example, a deep classifier like AlexNet Krizhevsky et al. (2012) is trained to produce the posterior probability of the label for a given data sample. The representations that the network computes at different layers are related to the inference in an implicit latent variable model but the designer of the model does not need to know about them.

However, it is actually tremendously valuable to understand what kind of inference is required by different types of probabilistic models in order to design an efficient network architecture. Ladder networks Rasmus et al. (2015); Valpola (2015) are motivated by the inference required in a hierarchical latent variable model. By design, the Ladder networks aim to emulate a message passing algorithm, which includes a bottom-up pass (from input to label in classification tasks) and a top-down pass of information (from label to input). The results of the bottom-up and top-down computations are combined in a carefully selected manner.

The original Ladder network implements only one iteration of the inference algorithm but complex models are likely to require iterative inference. In this paper, we propose a recurrent extension of the Ladder network for iterative inference and show that the same architecture can be used for temporal modeling. We also show how to use the proposed architecture as an inference engine in more complex models which can handle multiple independent objects in the sensory input. Thus, the proposed architecture is suitable for the type of inference required by rich models: those that can learn a hierarchy of abstractions, can handle temporal information and can model multiple objects in the input.

2 Recurrent Ladder

Recurrent Ladder networks

In this paper, we present a recurrent extension of the Ladder networks which is conducive to iterative inference and temporal modeling. The Recurrent Ladder (RLadder) is a recurrent neural network whose units resemble the structure of the original Ladder networks Rasmus et al. (2015); Valpola (2015) (see Fig. 1a). At every iteration $t$, the information first flows from the bottom (the input level) to the top through a stack of encoder cells. Then, the information flows back from the top to the bottom through a stack of decoder cells. Both the encoder and decoder cells also use information that is propagated horizontally, that is, from the previous iteration. Thus, at every iteration $t$, the encoder cell in the $l$-th layer receives three inputs: 1) the output $h_{l-1,t}$ of the encoder cell from the level below, 2) the output $d_{l,t-1}$ of the decoder cell from the same level from the previous iteration, 3) its own state $s_{l,t-1}$ from the previous iteration. It updates its state value $s_{l,t}$ and passes the same output $h_{l,t}$ both vertically and horizontally:

$s_{l,t} = f_l^{s}(h_{l-1,t}, d_{l,t-1}, s_{l,t-1})$   (1)
$h_{l,t} = f_l^{h}(h_{l-1,t}, d_{l,t-1}, s_{l,t-1})$   (2)

The encoder cell in the bottom layer typically sends the observed data (possibly corrupted by noise) as its output $h_{0,t}$. Each decoder cell is stateless: it receives two inputs (the output $d_{l+1,t}$ of the decoder cell from one level above and the output $h_{l,t}$ of the encoder cell from the same level) and produces one output

$d_{l,t} = g_l(d_{l+1,t}, h_{l,t})$   (3)

which is passed both vertically and horizontally. The exact computations performed in the cells can be tuned depending on the task at hand. In practice, we have used LSTM Hochreiter and Schmidhuber (1997) or GRU Cho et al. (2014) cells in the encoder and cells inspired by the original Ladder networks in the decoder (see Appendix A).
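To make the wiring of (1)-(3) concrete, the following is a minimal PyTorch-style sketch of an RLadder with one recurrent encoder cell and one stateless gated decoder cell per layer, assuming equal layer widths. The module and variable names are illustrative and this is a simplification of the architecture described above (the experiments use convolutional LSTM/GRU encoder cells and the decoder cells of Appendix A), not the authors' implementation.

import torch
import torch.nn as nn

class RLadderSketch(nn.Module):
    """Toy RLadder with equal layer widths: enc[l] is a recurrent cell, dec[l] is stateless."""
    def __init__(self, input_dim, hidden, num_layers):
        super().__init__()
        in_dims = [input_dim] + [hidden] * (num_layers - 1)
        # Encoder cell l sees the output of the layer below and the decoder output of the
        # same layer from the previous iteration (the decoder-to-encoder connection).
        self.enc = nn.ModuleList(nn.GRUCell(d + hidden, hidden) for d in in_dims)
        # Stateless decoder cell l sees the decoder output from above and the encoder output.
        self.dec = nn.ModuleList(nn.Linear(2 * hidden, 2 * hidden) for _ in range(num_layers))

    def step(self, x, enc_states, dec_outputs):
        """One iteration t: a bottom-up pass followed by a top-down pass (cf. Eqs. (1)-(3))."""
        h, new_states = x, []
        for l, cell in enumerate(self.enc):                              # bottom-up pass
            s = cell(torch.cat([h, dec_outputs[l]], dim=-1), enc_states[l])
            new_states.append(s)
            h = s                                                        # passed up and sideways
        d = torch.zeros_like(h)                                          # nothing above the top layer
        new_dec = []
        for l in reversed(range(len(self.dec))):                         # top-down pass
            gate, cand = self.dec[l](torch.cat([d, new_states[l]], dim=-1)).chunk(2, dim=-1)
            g = torch.sigmoid(gate)
            d = g * new_states[l] + (1 - g) * torch.tanh(cand)           # simple gated combination
            new_dec.insert(0, d)
        return new_states, new_dec

# Unrolling over a sequence: states and decoder outputs carry over between iterations.
net = RLadderSketch(input_dim=784, hidden=128, num_layers=3)
states = [torch.zeros(8, 128) for _ in range(3)]
dec_out = [torch.zeros(8, 128) for _ in range(3)]
for frame in torch.randn(5, 8, 784):                                     # 5 time steps, batch of 8
    states, dec_out = net.step(frame, states, dec_out)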

Figure 1: (a): The structure of the Recurrent Ladder networks. The encoder is shown in red, the decoder is shown in blue, and the decoder-to-encoder connections are shown in green. The dashed line separates two iterations $t-1$ and $t$. (b)-(c): The type of hierarchical latent variable models for which the RLadder is designed to emulate message passing. (b): A graph of a static model. (c): A fragment of a graph of a temporal model. White circles are unobserved latent variables, gray circles represent observed variables. The arrows represent the directions of message passing during inference.

Similarly to Ladder networks, the RLadder is usually trained with multiple tasks at different abstraction levels. Tasks at the highest abstraction level (like classification) are typically formulated at the highest layer. Conversely, the output of the decoder cell in the bottom level is used to formulate a low-level task which corresponds to abstractions close to the input. The low-level task can be denoising (reconstruction of a clean input from the corrupted one); other possibilities include object detection Newell et al. (2016), segmentation Badrinarayanan et al. (2015); Ronneberger et al. (2015), or, in a temporal setting, prediction. A weighted sum of the costs at different levels is optimized during training.
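As an illustration of this multi-task objective, a weighted sum of a top-level classification cost and a bottom-level prediction (or denoising) cost could be written as follows; the function and weight names are placeholders, not the paper's settings.

import torch.nn.functional as F

def rladder_cost(class_logits, labels, predicted_frame, next_frame, w_class=1.0, w_pred=1.0):
    high_level = F.cross_entropy(class_logits, labels)      # classification at the top
    low_level = F.mse_loss(predicted_frame, next_frame)     # prediction/denoising at the bottom
    return w_class * high_level + w_pred * low_level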

Connection to hierarchical latent variables and message passing

The RLadder architecture is designed to mimic the computational structure of an inference procedure in probabilistic hierarchical latent variable models. In an explicit probabilistic graphical model, inference can be done by an algorithm which propagates information (messages) between the nodes of a graphical model so as to compute the posterior distribution of the latent variables (see, e.g., Bishop (2006)). For static graphical models implicitly assumed by the RLadder (see Fig. 1b), messages need to be propagated from the input level up the hierarchy to the highest level and from the top to the bottom, as shown in Fig. 1a. In Appendix B, we present a derived iterative inference procedure for a simple static hierarchical model to give an example of a message-passing algorithm. We also show how that inference procedure can be implemented in the RLadder computational graph.

In the case of temporal modeling, the type of graphical model assumed by the RLadder is shown in Fig. 1c. If the task is next-step prediction of the observations, an online inference procedure should, at every time step, update the knowledge about the latent variables using the observed data and compute the predictive distribution for the next input. Assuming that the distributions of the latent variables at previous time instances are kept fixed, the inference can be done by propagating messages from the observed variables and the past latent variables bottom-up, top-down and from the past to the future, as shown in Fig. 1c. The architecture of the RLadder (Fig. 1a) is designed so as to emulate such a message-passing procedure, that is, the information can propagate in all the required directions: bottom-up, top-down and from the past to the future. In Appendix C, we present an example of a message-passing algorithm derived for a temporal hierarchical model to show how it is related to the RLadder's computation graph.

Even though the motivation of the RLadder architecture is to emulate a message-passing procedure, the nodes of the RLadder do not directly correspond to nodes of any specific graphical model. (To emphasize this, we used different shapes for the nodes of the RLadder network in Fig. 1a and for the nodes of the graphical models that inspired the RLadder architecture in Figs. 1b-c.) The RLadder directly learns an inference procedure and the corresponding model is never formulated explicitly. Note also that using stateful encoder cells is not strictly motivated by the message-passing argument, but in practice these skip connections facilitate training of a deep network.

As we mentioned previously, the RLadder is usually trained with multiple tasks formulated at different representation levels. The purpose of the tasks is to encourage the RLadder to learn the right inference procedure, and hence formulating the right kind of tasks is crucial for the success of training. For example, the task of denoising encourages the network to learn important aspects of the data distribution Alain et al. (2012); Arponen et al. (2017). For temporal modeling, the task of next-step prediction plays a similar role. The RLadder is most useful in problems that require accurate inference on multiple abstraction levels, which is supported by the experiments presented in this paper.

Related work

The RLadder architecture is similar to that of other recently proposed models for temporal modeling Eyjolfsdottir et al. (2016); Finn et al. (2016); Cricri et al. (2016); Tietz et al. (2017); Laukien et al. (2016). In Cricri et al. (2016), the recurrent connections (from time $t-1$ to time $t$) are placed in the lateral links between the encoder and the decoder. This can make it easier to extend an existing feed-forward network architecture to the case of temporal data, as the recurrent units do not participate in the bottom-up computations. On the other hand, the recurrent units do not receive information from the top, which makes it impossible for higher layers to influence the dynamics of lower layers. The architectures in Eyjolfsdottir et al. (2016); Finn et al. (2016); Tietz et al. (2017) are quite similar to ours but they could potentially derive further benefit from the decoder-to-encoder connections between successive time instances (green links in Fig. 1a). The aforementioned connections are well justified from the message-passing point of view: when updating the posterior distribution of a latent variable, one should combine the latest information from the top and from the bottom, and it is the decoder that contains the latest information from the top. We show empirical evidence for the importance of those connections in Section 3.1.

3 Experiments with temporal data

In this section, we demonstrate that the RLadder can learn an accurate inference algorithm in tasks that require temporal modeling. We consider datasets in which passing information both in time and in abstraction hierarchy is important for achieving good performance.

3.1 Occluded Moving MNIST

                                            
Figure 2: The Occluded Moving MNIST dataset. Rows: observed frames; frames with the occlusion visualized; optimal temporal reconstruction of the digit from a sequence of occluded frames.

We use a dataset where we know how to do optimal inference, so that we can compare the results of the RLadder to the optimal ones. To this end, we designed the Occluded Moving MNIST dataset. It consists of downscaled MNIST digits flying on a white background behind a grid of white vertical and horizontal occlusion bars (4 pixels in width and spaced 8 visible pixels apart), which occlude the pixels of the digit whenever it passes behind them (see Fig. 2). We also restrict the velocity to be randomly chosen from a set of eight discrete velocities (in pixels/frame), so that, apart from the bouncing, the movement is deterministic. The digits are split into training, validation, and test sets according to the original MNIST split. The primary task is to classify the digit, which is only partially observable at any given moment, at the end of a sequence of five frames.
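For concreteness, the following is a rough generator sketch for such sequences. The frame and digit sizes, the particular velocity set, and the zero-valued background are assumptions made here for illustration; only the bar width (4 px) and spacing (8 px) come from the description above.

import numpy as np

def make_occluded_sequence(digit, n_frames=5, frame_size=32, bar=4, gap=8, rng=None):
    """digit: a 2-D array smaller than frame_size x frame_size."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = digit.shape
    # Occlusion grid: vertical and horizontal bars of width `bar`, `gap` visible pixels apart.
    idx = np.arange(frame_size)
    stripe = (idx % (bar + gap)) < bar
    mask = stripe[None, :] | stripe[:, None]
    # One of eight discrete velocities (the set used here is an assumption).
    velocities = [(1, 1), (1, -1), (-1, 1), (-1, -1), (2, 2), (2, -2), (-2, 2), (-2, -2)]
    vy, vx = velocities[rng.integers(len(velocities))]
    y, x = rng.integers(0, frame_size - h + 1), rng.integers(0, frame_size - w + 1)
    frames = []
    for _ in range(n_frames):
        canvas = np.zeros((frame_size, frame_size), dtype=float)
        canvas[y:y + h, x:x + w] = digit
        canvas[mask] = 0.0                                # bars hide whatever is behind them
        frames.append(canvas)
        if not 0 <= y + vy <= frame_size - h: vy = -vy    # bounce off the borders
        if not 0 <= x + vx <= frame_size - w: vx = -vx
        y, x = y + vy, x + vx
    return np.stack(frames)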

In order to do optimal classification, one would need to assimilate information about the digit identity (which is only partially visible at any given time instance) by keeping track of the observed pixels (see the bottom row of Fig. 2) and then feeding the resultant reconstruction to a classifier.

In order to encourage optimal inference, we add a next-step prediction task to the RLadder at the bottom of the decoder: the RLadder is trained to predict the next occluded frame, that is, the network never sees the unoccluded digit. This mimics a realistic scenario where the ground truth is not known. To assess the importance of the features of the RLadder, we also do an ablation study. In addition, we compare it to three other networks. In the first comparison network, the optimal reconstruction of the digit from the five frames (as shown in Fig. 2) is fed to a static feed-forward network from which the encoder of the RLadder was derived. This is our gold standard, and obtaining similar results to it implies doing close-to-optimal temporal inference. The second, a temporal baseline, is a deep feed-forward network (the one on which the encoder is based) with a recurrent neural network (RNN) at the top only, so that, by design, the network can propagate temporal information only at a high level, and not at a low level. The third, a hierarchical RNN, is a stack of convolutional LSTM units with a few convolutional layers in between, that is, the RLadder with its decoder removed. See Fig. 3 and Appendix D.1 for schematics and details of the architectures.

Figure 3: Architectures used for modeling occluded Moving MNIST: the temporal baseline network, the hierarchical RNN, and the RLadder. The temporal baseline network is a convolutional network with a fully connected RNN on top.
Fully supervised learning results.

The results are presented in Table 1. The first thing to notice is that the RLadder reaches (up to uncertainty levels) the classification accuracy obtained by the network which was given the optimal reconstruction of the digit. Furthermore, if the RLadder does not have a decoder or the decoder-to-encoder connections, or if it is trained without the auxiliary prediction task, the classification error rises almost to the level of the temporal baseline. This means that even if a network has RNNs at the lowest levels (like the encoder-only hierarchical RNN), or if it does not have a task which encourages it to develop a good world model (like the RLadder without the next-frame prediction task), or if the information cannot travel from the decoder to the encoder, the high-level task cannot truly benefit from lower-level temporal modeling.

Classification error (%)   Prediction error
Optimal reconstruction and static classifier
Temporal baseline
Hierarchical RNN (encoder only)
RLadder w/o prediction task
RLadder w/o decoder-to-encoder conn.
RLadder w/o classification task
RLadder
Table 1: Performance on Occluded Moving MNIST

Next, one notices from Table 1 that the top-level classification cost helps the low-level prediction cost in the RLadder (which in turn helps the top-level cost in a mutually beneficial cycle). This mutually supportive relationship between high-level and low-level inferences is nicely illustrated by the example in Fig. 4. Up to a certain time step, the network believes the digit to be a five (Fig. 4a). As such, it predicts that the top-right part of the five, which has been occluded so far, will stick out from behind the occlusions as the digit moves up and right at the next time step (Fig. 4b). Using the decoder-to-encoder connections, the decoder can relay this expectation to the encoder at the next time step, where the encoder can compare this expectation with the actual input, in which the top-right part of the five is absent (Fig. 4c). Without the decoder-to-encoder connections this comparison would have been impossible. Using the upward path of the encoder, the network can relay this discrepancy to the higher classification layers. These higher layers, with a large receptive field, can then conclude that since it is not a five, it must be a three (Fig. 4d). Now, thanks to the decoder, the higher classification layers can relay this information to the lower prediction layers so that they can change their prediction of what will be seen next accordingly (Fig. 4e). Without a decoder bringing this high-level information back down to the low level, this drastic update of the prediction would be impossible. With this information the lower prediction layer can now predict that the top-left part of the three (which it has never seen before) will appear from behind the occlusion at the next time step, which is indeed what happens (Fig. 4f).

                                            
Figure 4: Example prediction of an RLadder on the Occluded Moving MNIST dataset. First row: the ground-truth unoccluded digit, which the network never sees and does not train on. Second row: the actual five frames seen by the network and on which it trains. Third row: the predicted next frames of a trained RLadder. Fourth row: a stopped-gradient (the gradient does not flow into the RLadder) readout of the bottom layer of the decoder, trained on the ground truth to probe what aspects of the digit are represented by the neurons which predict the next frame. Notice how at the first time step the network does not yet know in which direction the digit will move, so it predicts a superposition of possible movements. Notice further (red annotations a-f) that the network initially thought the digit was a five, but when the top bar of the supposed five did not materialize on the other side of the occlusion as expected, the network immediately concluded correctly that it was actually a three.
Semi-supervised learning results.

In the following experiment, we test the RLadder in the semi-supervised scenario where the training set contains 1,000 labeled sequences and 59,000 unlabeled ones. To make use of the unlabeled data, we added an extra auxiliary task at the top level, namely the consistency cost with the targets provided by the Mean Teacher (MT) model Tarvainen and Valpola (2017). Thus, the RLadder was trained with three tasks: 1) next-step prediction at the bottom, 2) classification at the top, 3) consistency with the MT outputs at the top. As shown in Table 2, the RLadder improves dramatically by learning a better model with the help of unlabeled data, independently of and in addition to other semi-supervised learning methods. The temporal baseline model also improves the classification accuracy by using the consistency cost but it is clearly outperformed by the RLadder.

1k labeled 1k labeled & 59k unlabeled
w/o MT MT
Optimal reconstruction and static classifier
Temporal baseline
RLadder
Table 2: Classification error (%) on semi-supervised Occluded Moving MNIST
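A sketch of the three-task objective described above (next-step prediction, classification, Mean Teacher consistency) is given below; the student/teacher interface, the consistency measure and the cost weights are illustrative assumptions rather than the exact training setup.

import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.999):
    """Mean Teacher: teacher weights track an exponential moving average of the student's."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.data.mul_(alpha).add_(sp.data, alpha=1 - alpha)

def semi_supervised_cost(student, teacher, frames, next_frame, labels, is_labeled,
                         w_pred=1.0, w_class=1.0, w_cons=1.0):
    """labels are only valid where the boolean mask is_labeled is True."""
    logits, prediction = student(frames)                 # assumed outputs: (class logits, next frame)
    with torch.no_grad():
        teacher_logits, _ = teacher(frames)
    pred_cost = F.mse_loss(prediction, next_frame)       # 1) next-step prediction (all data)
    class_cost = (F.cross_entropy(logits[is_labeled], labels[is_labeled])
                  if is_labeled.any() else logits.sum() * 0.0)   # 2) classification (labeled only)
    cons_cost = F.mse_loss(F.softmax(logits, dim=-1),    # 3) consistency with the Mean Teacher
                           F.softmax(teacher_logits, dim=-1))
    return w_pred * pred_cost + w_class * class_cost + w_cons * cons_cost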

3.2 Polyphonic Music Dataset

In this section, we evaluate the RLadder on MIDI datasets converted to piano rolls Boulanger-Lewandowski et al. (2012). The datasets consist of piano rolls (the notes played at every time step, where a time step is, in this case, an eighth note) of various piano pieces. We train an 18-layer RLadder containing five convolutional LSTMs and one fully connected LSTM. More details can be found in Appendix D.2. Table 3 shows the negative log-likelihoods of the next-step prediction obtained on the music datasets, where our results are reported as mean plus or minus standard deviation over 10 seeds. We see that the RLadder is competitive with the best results, and gives the best results among models outputting the marginal distribution of notes at each time step.

Piano-midi.de Nottingham Muse JSB Chorales
Models outputting a joint distribution of notes:
NADE masked Berglund et al. (2015) 7.42 3.32 6.48 8.51
NADE Berglund et al. (2015) 7.05 2.89 5.54 7.59
RNN-RBM Boulanger-Lewandowski et al. (2012) 7.09 2.39 6.01 6.27
RNN-NADE (HF) Boulanger-Lewandowski et al. (2012) 7.05 2.31 5.60
LSTM-NADE Johnson (2017) 7.39 2.06 5.03 6.10
TP-LSTM-NADE Johnson (2017) 5.49 1.64 4.34 5.92
BALSTM Johnson (2017) 5.86
Models outputting marginal probabilities for each note:
RNN Berglund et al. (2015) 7.88 3.87 7.43 8.76
LSTM Jozefowicz et al. (2015) 6.866 3.492
MUT1 Jozefowicz et al. (2015) 6.792 3.254
RLadder
Table 3: Negative log-likelihood (smaller is better) on polyphonic music dataset
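For models that output marginal note probabilities, such as the RLadder row in Table 3, the negative log-likelihood of the next step is a sum of independent per-note Bernoulli terms; a minimal sketch with assumed tensor shapes is:

import torch.nn.functional as F

def piano_roll_nll(note_logits, next_notes):
    """note_logits, next_notes: (batch, time, n_notes), with binary targets.
    Marginal NLL: summed over notes, averaged over time steps and the batch."""
    nll = F.binary_cross_entropy_with_logits(note_logits, next_notes, reduction='none')
    return nll.sum(dim=-1).mean()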

The fact that the RLadder did not beat Johnson (2017) on the MIDI datasets shows one of the limitations of the RLadder. Most of the models in Table 3 output a joint probability distribution over the notes of a time step, unlike the RLadder, which outputs the marginal probability for each note. That is to say, in order to output the probability of a note, those models take as input not only the notes at previous time instances but also the ground truth of the preceding notes at the same time instance; the RLadder only takes the past notes as input. Nevertheless, the example in Section 3.1, where the digit five turns into a three after a single expected stroke fails to appear, suggests that, internally, the RLadder does model the joint distribution.
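The distinction can be written out explicitly. With $y_{t,i}$ denoting whether note $i$ is played at time step $t$ (notation introduced here only for illustration), the models outputting a joint distribution provide the conditionals of a within-step factorization, whereas the RLadder outputs only the per-note marginals:

$p(\mathbf{y}_t \mid \mathbf{y}_{<t}) = \prod_i p(y_{t,i} \mid y_{t,1}, \ldots, y_{t,i-1}, \mathbf{y}_{<t}) \quad \text{(joint)} \qquad \text{vs.} \qquad \prod_i p(y_{t,i} \mid \mathbf{y}_{<t}) \quad \text{(marginals)}$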

4 Experiments with perceptual grouping

In this section, we show that the RLadder can be used as an inference engine in a complex model which benefits from iterative inference and temporal modeling. We consider the task of perceptual grouping, that is identifying which parts of the sensory input belong to the same higher-level perceptual components (objects). We enhance the previously developed model for perceptual grouping called Tagger Greff et al. (2016) by replacing the originally used Ladder engine with the RLadder. For another perspective on the problem see Greff et al. (2017) which also extends Tagger to a recurrent neural network, but does so from an expectation maximization point of view.

4.1 Recurrent Tagger

Tagger is a model designed for perceptual grouping. When applied to images, the modeling assumption is that each pixel belongs to one of $K$ objects, which is described by binary variables $k_{i,g}$: $k_{i,g}=1$ if pixel $i$ belongs to object $g$ and $k_{i,g}=0$ otherwise. The reconstruction of the whole image using object $g$ only is $\mathbf{z}_g$, which is a vector with as many elements as there are pixels. Thus, the assumed probabilistic model can be written as follows:

$p(\tilde{\mathbf{x}}, \mathbf{z}, \mathbf{k}, \mathbf{h}) = \prod_{i} \prod_{g} p(\tilde{x}_i \mid z_{g,i})^{k_{i,g}} \, p(\mathbf{k} \mid \mathbf{h}) \, p(\mathbf{z} \mid \mathbf{h}) \, p(\mathbf{h})$   (4)

where $\mathbf{z}_g$ is a vector with one element per pixel and $\mathbf{h}$ is (a hierarchy of) latent variables which define the shape and the texture of the objects. See Fig. 5a for a graphical representation of the model and Fig. 5b for possible values of the model variables for the textured MNIST dataset used in the experiments of Section 4.2. The model in (4) is defined for the noisy image $\tilde{\mathbf{x}}$ because Tagger is trained with an auxiliary low-level task of denoising. The inference procedure in model (4) should evaluate the posterior distributions of the latent variables $\mathbf{z}_g$, $\mathbf{k}_g$, $\mathbf{h}_g$ for each of the groups $g$ given the corrupted data $\tilde{\mathbf{x}}$. Making the approximation that the variables of each of the groups are independent a posteriori,

$p(\mathbf{z}, \mathbf{k}, \mathbf{h} \mid \tilde{\mathbf{x}}) \approx \prod_{g} q_g(\mathbf{z}_g, \mathbf{k}_g, \mathbf{h}_g)$   (5)

the inference procedure could be implemented by iteratively updating each of the approximate distributions $q_g$, if the model (4) and the approximation (5) were defined explicitly.

Tagger does not explicitly define a probabilistic model (4) but learns the inference procedure directly. The iterative inference procedure is implemented by a computational graph with $K$ copies of the same Ladder network, each doing inference for one of the groups (see Fig. 5c). At the end of every iteration, the inference procedure produces the posterior probabilities $\pi_{i,g}$ that pixel $i$ belongs to object $g$ and the point estimates $\mathbf{z}_g$ of the reconstructions (see Fig. 5c). Those outputs are used to form the low-level cost and the inputs for the next iteration (see more details in Greff et al. (2016)). In this paper, we replace the original Ladder engine of Tagger with the RLadder. We refer to the new model as RTagger.
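For illustration, a Tagger-style low-level cost can be sketched as the negative log-likelihood of the clean image under a per-pixel mixture over groups; the Gaussian likelihood, the tensor shapes and the fixed sigma below are simplifying assumptions, not the exact cost used by Tagger.

import math
import torch

def grouping_denoising_cost(x_clean, z, pi, sigma=0.1):
    """x_clean: (B, P) clean pixels; z: (B, K, P) per-group reconstructions;
    pi: (B, K, P) per-pixel group probabilities (summing to one over K)."""
    log_gauss = (-0.5 * ((x_clean.unsqueeze(1) - z) / sigma) ** 2
                 - math.log(sigma * math.sqrt(2.0 * math.pi)))
    log_mix = torch.logsumexp(torch.log(pi + 1e-8) + log_gauss, dim=1)   # mixture over the groups
    return -log_mix.sum(dim=-1).mean()                                   # NLL, averaged over the batch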

Figure 5: (a): Graphical model for perceptual grouping. White circles are unobserved latent variables, gray circles represent observed variables. (b): Examples of possible values of the model variables for the textured MNIST dataset. (c): Computational graph that implements iterative inference in the perceptual grouping task (RTagger). Two graph iterations are drawn. The plate notation represents copies of the same graph, one per group.

4.2 Experiments on grouping using texture information

The goal of the following experiment is to test the efficiency of RTagger in grouping objects using the texture information. To this end, we created a dataset that contains thickened MNIST digits with 20 textures from the Brodatz dataset Brodatz (1966). An example of a generated image is shown in Fig. 6a. To create a greater diversity of textures (to avoid over-fitting), we randomly rotated and scaled the 20 Brodatz textures when producing the training data.

Figure 6: (a): Example image from the Brodatz-textured MNIST dataset. (b): The image reconstruction by the group that learned the background. (c): The image reconstruction by the group that learned the digit. (d): The original image colored using the found grouping.

The network trained on the textured MNIST dataset has the architecture presented in Fig. 5c with three iterations; the number of groups was fixed in advance, and the details of the RLadder architecture are presented in Appendix D.3. The network was trained on two tasks: the low-level segmentation task was formulated around denoising, the same way as in the Tagger model Greff et al. (2016), and the top-level cost was the log-likelihood of the digit class at the last iteration.

Table 4 presents the obtained performance on the textured MNIST dataset in both fully supervised and semi-supervised settings. All experiments were run over 5 seeds. We report our results as mean plus or minus standard deviation. In some runs, Tagger experiments did not converge to a reasonable solution (because of unstable or too slow convergence), so we did not include those runs in our evaluations. Following (Greff et al., 2016), the segmentation accuracy was computed using the adjusted mutual information (AMI) score (Vinh et al., 2010) which is the mutual information between the ground truth segmentation and the estimated segmentation scaled to give one when the segmentations are identical and zero when the output segmentation is random.
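In practice, the AMI score can be computed with scikit-learn on the flattened integer segmentation maps; the helper below is an illustrative sketch (the optional exclusion of a background label corresponds to the evaluation used in Section 4.3).

from sklearn.metrics import adjusted_mutual_info_score

def segmentation_ami(true_seg, pred_seg, ignore_label=None):
    """true_seg, pred_seg: integer numpy arrays of the same shape with one group index per pixel."""
    t, p = true_seg.ravel(), pred_seg.ravel()
    if ignore_label is not None:                       # e.g. exclude background pixels
        keep = t != ignore_label
        t, p = t[keep], p[keep]
    return adjusted_mutual_info_score(t, p)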

For comparison, we trained the Tagger model Greff et al. (2016) on the same dataset. The other comparison method was a feed-forward convolutional network which had an architecture resembling the bottom-up pass (encoder) of the RLadder and which was trained on the classification task only. One thing to notice is that the results obtained with the RTagger clearly improve over iterations, which supports the idea that iterative inference is useful in complex cognitive tasks. We also observe that the RTagger outperforms Tagger, and both approaches significantly outperform the convolutional network baseline in which the classification task is not supported by the input-level task. We have also observed that the top-level classification task makes the RTagger faster to train in terms of the number of updates, which also supports the idea that the high-level and low-level tasks mutually benefit from each other: detecting object boundaries using textures helps classify a digit, while knowing the class of the digit helps detect the object boundaries. Figs. 6b-d show the reconstructed textures and the segmentation results for the image from Fig. 6a.

50k labeled
Segmentation accuracy, AMI:
RTagger
Tagger
Classification error, %:
RTagger
Tagger
ConvNet
1k labeled + 49k unlabeled
Segmentation accuracy, AMI:
RTagger
Classification error, %:
RTagger
ConvNet
Table 4: Results on the Brodatz-textured MNIST. The $i$-th column corresponds to the intermediate results of the RTagger after the $i$-th iteration. In the fully supervised case, Tagger was only trained successfully in 2 of the 5 seeds; the given results are for those 2 seeds. In the semi-supervised case, we were not able to train Tagger successfully.

4.3 Experiments on grouping using movement information

The same RTagger model can perform perceptual grouping in video sequences using motion cues. To demonstrate this, we applied the RTagger to Moving MNIST (Srivastava et al., 2015) sequences of length 20, with next-frame prediction as the low-level task. (For this experiment, in order to have the ground-truth segmentation, we reimplemented the dataset ourselves.) When applied to temporal data, the RTagger assumes the existence of objects whose dynamics are independent of each other. Using this assumption, the RTagger can separate the two moving digits into different groups. We assessed the segmentation quality by the AMI score, which was computed similarly to Greff et al. (2016, 2015), ignoring the background in the case of a uniform zero-valued background and the overlap regions where different objects have the same color. The achieved average AMI score was 0.75. An example of segmentation is shown in Fig. 7. When we tried to use Tagger on the same dataset, we were only able to train it successfully in a single seed out of three. This is possibly because speed is an intermediate level of abstraction that is not represented at the pixel level. Due to its recurrent connections, the RTagger can carry such representations from one time step to the next and segment accordingly, which is more difficult for Tagger and might explain the training instability.

Figure 7: Example of segmentation and generation by the RTagger trained on Moving MNIST. First row: frames 0-9 are the input sequence, frames 10-15 are the ground-truth future. Second row: next-step prediction of frames 1-9 and future frame generation (frames 10-15) by the RTagger; the colors represent the grouping performed by the RTagger.

5 Conclusions

In this paper, we presented Recurrent Ladder networks. The proposed architecture is motivated by the computations required in a hierarchical latent variable model. We empirically validated that the Recurrent Ladder is able to learn accurate inference in challenging tasks which require modeling dependencies on multiple abstraction levels, iterative inference and temporal modeling. The proposed model outperformed strong baseline methods on two challenging classification tasks. It also produced competitive results on a temporal music dataset. We envision that the proposed Recurrent Ladder will be a powerful building block for solving difficult cognitive tasks.

Acknowledgments

We would like to thank Klaus Greff and our colleagues from The Curious AI Company for their contribution in the presented work, especially Vikram Kamath and Matti Herranen.

References

  • Alain et al. (2012) Alain, G., Bengio, Y., and Rifai, S. (2012). Regularized auto-encoders estimate local statistics. CoRR, abs/1211.4246.
  • Arponen et al. (2017) Arponen, H., Herranen, M., and Valpola, H. (2017). On the exact relationship between the denoising function and the data distribution. arXiv preprint arXiv:1709.02797.
  • Badrinarayanan et al. (2015) Badrinarayanan, V., Kendall, A., and Cipolla, R. (2015). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561.
  • Berglund et al. (2015) Berglund, M., Raiko, T., Honkala, M., Kärkkäinen, L., Vetek, A., and Karhunen, J. T. (2015). Bidirectional recurrent neural networks as generative models. In Advances in Neural Information Processing Systems.
  • Bishop (2006) Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
  • Boulanger-Lewandowski et al. (2012) Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1159–1166.
  • Brodatz (1966) Brodatz, P. (1966). Textures: a photographic album for artists and designers. Dover Pubns.
  • Cho et al. (2014) Cho, K., Van Merriënboer, B., Bahdanau, D., and Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
  • Cricri et al. (2016) Cricri, F., Honkala, M., Ni, X., Aksu, E., and Gabbouj, M. (2016). Video Ladder networks. arXiv preprint arXiv:1612.01756.
  • Eyjolfsdottir et al. (2016) Eyjolfsdottir, E., Branson, K., Yue, Y., and Perona, P. (2016). Learning recurrent representations for hierarchical behavior modeling. arXiv preprint arXiv:1611.00094.
  • Finn et al. (2016) Finn, C., Goodfellow, I. J., and Levine, S. (2016). Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems 29.
  • Greff et al. (2015) Greff, K., Srivastava, R. K., and Schmidhuber, J. (2015). Binding via reconstruction clustering. CoRR, abs/1511.06418.
  • Greff et al. (2016) Greff, K., Rasmus, A., Berglund, M., Hao, T., Valpola, H., and Schmidhuber, J. (2016). Tagger: Deep unsupervised perceptual grouping. In Advances in Neural Information Processing Systems 29.
  • Greff et al. (2017) Greff, K., van Steenkiste, S., and Schmidhuber, J. (2017). Neural expectation maximization. In ICLR Workshop.
  • Hochreiter and Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735–1780.
  • Johnson (2017) Johnson, D. D. (2017). Generating polyphonic music using tied parallel networks. In International Conference on Evolutionary and Biologically Inspired Music and Art.
  • Jozefowicz et al. (2015) Jozefowicz, R., Zaremba, W., and Sutskever, I. (2015). An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15).
  • Kingma and Ba (2015) Kingma, D. and Ba, J. (2015). Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR), San Diego.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems.
  • Laukien et al. (2016) Laukien, E., Crowder, R., and Byrne, F. (2016). Feynman machine: The universal dynamical systems computer. arXiv preprint arXiv:1609.03971.
  • Newell et al. (2016) Newell, A., Yang, K., and Deng, J. (2016). Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision. Springer.
  • Rasmus et al. (2015) Rasmus, A., Berglund, M., Honkala, M., Valpola, H., and Raiko, T. (2015). Semi-supervised learning with Ladder networks. In Advances in Neural Information Processing Systems.
  • Ronneberger et al. (2015) Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention.
  • Springenberg et al. (2014) Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
  • Srivastava et al. (2015) Srivastava, N., Mansimov, E., and Salakhudinov, R. (2015). Unsupervised learning of video representations using LSTMs. In International Conference on Machine Learning, pages 843–852.
  • Tarvainen and Valpola (2017) Tarvainen, A. and Valpola, H. (2017). Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems.
  • Tietz et al. (2017) Tietz, M., Alpay, T., Twiefel, J., and Wermter, S. (2017). Semi-supervised phoneme recognition with recurrent ladder networks. In International Conference on Artificial Neural Networks 2017.
  • Valpola (2015) Valpola, H. (2015). From neural PCA to deep unsupervised learning. Advances in Independent Component Analysis and Learning Machines.
  • Vinh et al. (2010) Vinh, N. X., Epps, J., and Bailey, J. (2010). Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct), 2837–2854.

Appendix A Cells used in the decoder of the RLadder

Decoder cells receive two inputs, the vector $d_{l+1,t}$ from the cell above and the vector $h_{l,t}$ from the encoder, and produce one output $d_{l,t}$.

A.1 G1 and convG1 cells

where BN denotes batch normalization and

(6)

with the scalars being the elements of the corresponding vectors and the products taken element-wise. In the convolutional version convG1 of that cell, the first formula is computed with a convolution operation:

A.2 convG2 cell

where the gating function is defined in (6) and LN denotes layer normalization.

A.3 convG3 cell

where LN denotes layer normalization.

A.4 Details of the normalizations used

Encoder: For all the experiments, all non-recurrent layers in the encoder, whether convolutional or not, are normalized using batch normalization. In the convolutional LSTMs, there is a layer normalization after all the first convolutions of the inputs in the RLadder experiments and a batch normalization in the RTagger experiments; these normalizations were important for successful training.

Decoder: In the decoder gating functions, there is always a layer normalization after the first convolution over the concatenation of the inputs in the RLadder experiments; this normalization was replaced by batch normalization in the RTagger experiments. A normalization at this point was critical for optimal performance.

Appendix B Example of approximate inference in a static model

The structure of the RLadder is designed to emulate a message-passing algorithm in a hierarchical latent variable model. To illustrate this, let us consider a simple hierarchical latent variable model with three one-dimensional variables $x$, $z_1$, $z_2$ whose joint probability distribution is given by

$p(x, z_1, z_2) = p(x \mid z_1)\, p(z_1 \mid z_2)\, p(z_2)$   (7)
$p(z_2) = \mathcal{N}(z_2 \mid \mu_2, v_2)$   (8)
$p(z_1 \mid z_2) = \mathcal{N}(z_1 \mid w_1 z_2, v_1)$   (9)
$p(x \mid z_1) = \mathcal{N}(x \mid w_x z_1, v_x)$   (10)

where $\mathcal{N}(z \mid m, v)$ denotes the Gaussian probability density function with mean $m$ and variance $v$. We want to derive a denoising algorithm that recovers the clean observation $x$ from its corrupted version $\tilde{x}$, where the corruption is also modeled to be Gaussian:

$p(\tilde{x} \mid x) = \mathcal{N}(\tilde{x} \mid x, v_n)$   (11)

The reason we look at denoising is that denoising is a task which can be used for unsupervised learning of the data distribution Alain et al. (2012); Arponen et al. (2017). The graphical representation of the model is shown in Fig. 8a.

Figure 8: (a): Simple hierarchical latent variable model. A filled node represents an observed variable. (b): Directions of message propagation in an inference procedure. (c): The computational graph which implements an iterative inference procedure has the RLadder architecture.

In order to do optimal denoising, one needs to evaluate the expectation of $x$ given $\tilde{x}$, which can be done by learning the joint posterior distribution of the unknown variables $x$, $z_1$ and $z_2$. For this linear Gaussian model, it is possible to derive an inference algorithm that is guaranteed to produce the exact posterior distribution in a finite number of steps (see, e.g., Bishop (2006)). However, in more complex models, the posterior distribution of the latent variables is impossible to represent exactly and therefore a neural network that learns the inference procedure needs to represent an approximate distribution. To this end, we derive an approximate inference procedure for this simple probabilistic model.

Using the variational Bayesian approach, we can approximate the joint posterior distribution by a distribution of a simpler form. For example, all the latent variables can be modeled to be independent Gaussian variables a posteriori:

$q(x, z_1, z_2) = q(x)\, q(z_1)\, q(z_2)$   (12)
$q(x) = \mathcal{N}(x \mid m_x, s_x)$   (13)
$q(z_1) = \mathcal{N}(z_1 \mid m_1, s_1)$   (14)
$q(z_2) = \mathcal{N}(z_2 \mid m_2, s_2)$   (15)

and the goal of the inference procedure is to estimate the parameters $m_x$, $s_x$, $m_1$, $s_1$, $m_2$, $s_2$ of the approximate posterior (12)–(15) for the model in (7)–(11) with fixed parameters $\mu_2$, $v_2$, $w_1$, $v_1$, $w_x$, $v_x$, $v_n$.

The optimal posterior approximation can be found by minimizing the Kullback–Leibler divergence between the approximation and the true posterior distribution:

$\mathcal{C} = \mathrm{E}_q\!\left[\log \frac{q(x, z_1, z_2)}{p(x, z_1, z_2 \mid \tilde{x})}\right]$   (16)

where $\mathrm{E}_q$ denotes the expectation over $q(x, z_1, z_2)$. This can be done with the following iterative procedure:

$s_x = \left(\tfrac{1}{v_n} + \tfrac{1}{v_x}\right)^{-1}, \quad m_x = s_x \left(\tfrac{\tilde{x}}{v_n} + \tfrac{w_x m_1}{v_x}\right)$   (17)
$s_1 = \left(\tfrac{w_x^2}{v_x} + \tfrac{1}{v_1}\right)^{-1}, \quad m_1 = s_1 \left(\tfrac{w_x m_x}{v_x} + \tfrac{w_1 m_2}{v_1}\right)$   (18)
$s_2 = \left(\tfrac{w_1^2}{v_1} + \tfrac{1}{v_2}\right)^{-1}, \quad m_2 = s_2 \left(\tfrac{w_1 m_1}{v_1} + \tfrac{\mu_2}{v_2}\right)$   (19)

Thus, in order to update the posterior distribution of a latent variable, one needs to combine the information coming from one level above and from one level below. For example, in order to update $q(z_1)$, one needs information from below ($m_x$ and $v_x$) and from above ($m_2$ and $v_1$). We can think of the information needed for updating the parameters describing the posterior distributions of the latent variables as 'messages' propagating between the nodes of a probabilistic graphical model (see Fig. 8b). Since there are mutual dependencies between $q(x)$, $q(z_1)$ and $q(z_2)$ in (17)–(19), the update rules need to be iterated multiple times until convergence. This procedure can be implemented using the RLadder computational graph with the messages shown in Fig. 8c. Note that in practice, the computations used in the cells of the RLadder are not dictated by any particular explicit probabilistic model. The original Ladder networks Rasmus et al. (2015) contain only one iteration of the bottom-up and top-down passes; the RLadder extends the model to multiple passes. Note also that the computations used in the decoder (top-down pass) of the Ladder networks are inspired by the gating structure of the update rules (17)–(19) in simple Gaussian models.
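The updates (17)–(19) can also be checked numerically; the short Python sketch below iterates them for the parametrization used above, with arbitrary parameter values.

def mean_field_denoise(x_tilde, mu2=0.0, v2=1.0, w1=1.0, v1=0.5, wx=1.0, vx=0.5, vn=0.2, n_iter=20):
    """Iterates the updates (17)-(19); returns the posterior means of x, z1 and z2."""
    m_x, m_1, m_2 = x_tilde, 0.0, mu2                       # simple initialization
    for _ in range(n_iter):
        s_x = 1.0 / (1.0 / vn + 1.0 / vx)                   # Eq. (17): combine the corrupted
        m_x = s_x * (x_tilde / vn + wx * m_1 / vx)          # observation with the top-down message
        s_1 = 1.0 / (wx ** 2 / vx + 1.0 / v1)               # Eq. (18)
        m_1 = s_1 * (wx * m_x / vx + w1 * m_2 / v1)
        s_2 = 1.0 / (w1 ** 2 / v1 + 1.0 / v2)               # Eq. (19)
        m_2 = s_2 * (w1 * m_1 / v1 + mu2 / v2)
    return m_x, m_1, m_2

print(mean_field_denoise(2.0))   # the denoised estimate E[x | x_tilde] is approximately m_x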

Appendix C Example of approximate inference in a simple temporal model

In this section, we consider a simple hierarchical temporal model and look at the relationship between inference in that model and the computational structure of the RLadder. Consider a simple probabilistic model with three levels of hierarchy in which the variables vary in time (see Fig. 9a). The conditional distributions of the variables at time $t$ given the latent variables at the previous time instance are defined as follows:

(20)
(21)
(22)

At time $t$, the bottom-level variables are observed. Using the variational Bayesian approach, we can approximate the joint posterior distribution of the latent variables by a factorized distribution of a simpler form:

Figure 9: (a): Fragment of a graph of a temporal model relevant for updating the distributions of the unknown variables after observing the data at time $t$. Light-gray circles represent latent variables whose distribution is not updated. (b): Directions of information propagation needed for inference. (c): The structure of the RLadder network can be seen as a computational graph implementing the information flow in (b). The dotted arrows are the skip connections that would be needed if we forced the activations to be literally interpreted as the distribution parameters of the latent variables.

The cost function minimized in the variational Bayesian approach is:

where the expectation is taken over the approximate posterior. Let us elaborate the terms of the cost function. The first term is

The second term is

The last term is a function of the variational parameters which we do not update at time instance $t$. Taking the derivative of the cost function with respect to the variational parameters yields:

Equating the derivatives to zero yields:

These computations can be done by first estimating the posterior means before observing the new data and then correcting them using the observation. Let us show how this is done for one of the latent variables. We define one quantity to be its posterior mean before observing the data (the next-step decoder prediction), a second (the encoder posterior) to be its posterior mean after observing the data but before updating the approximate posterior of the higher latent variables, and finally a third to be its posterior mean after observing the data and updating all posteriors (the decoder, same-step prediction). Thus, we can write

(23)
(24)
(25)
(26)

which, if we eliminate the tilded and primed quantities, give exactly the same equations as previously.

Generalizing the results to an arbitrary number of variables in the chain, we have, for a generic level of the hierarchy:

(27)
(28)
(29)
(30)