Hierarchical Autoregressive Image Models
with Auxiliary Decoders

Jeffrey De Fauw*  Sander Dieleman*  Karen Simonyan
*Equal contribution
DeepMind, London, UK

Autoregressive generative models of images tend to be biased towards capturing local structure, and as a result they often produce samples which are lacking in terms of large-scale coherence. To address this, we propose two methods to learn discrete representations of images which abstract away local detail. We show that autoregressive models conditioned on these representations can produce high-fidelity reconstructions of images, and that we can train autoregressive priors on these representations that produce samples with large-scale coherence. We can recursively apply the learning procedure, yielding a hierarchy of progressively more abstract image representations. We train hierarchical class-conditional autoregressive models on the ImageNet dataset and demonstrate that they are able to generate realistic images at resolutions of 128×128 and 256×256 pixels. We also perform a human evaluation study comparing our models with both adversarial and likelihood-based state-of-the-art generative models.

Figure 1: Selected class conditional 256×256 samples from our models. More are available in the appendix and at https://bit.ly/2FJkvhJ.

1 Introduction

Generative models can be used to model the distribution of natural images. With enough capacity, they are then capable of producing new images from this distribution, which enables the creation of new natural-looking images from scratch. These models can also be conditioned on various annotations associated with the images (e.g. class labels), allowing for some control over the generated output.

In recent years, adversarial learning has proved a powerful tool to create such models Goodfellow et al. (2014); Radford et al. (2015); Karras et al. (2018a); Brock et al. (2019); Karras et al. (2018b). An alternative approach is to specify a model in the form of the joint distribution across all pixels, and train the model on a set of images by maximising their likelihood under this distribution (or a lower bound on this likelihood). Several families of models fit into this likelihood-based paradigm, including variational autoencoders (VAEs) Kingma and Welling (2013); Rezende et al. (2014), flow-based models Dinh et al. (2014, 2017); Kingma and Dhariwal (2018) and autoregressive models Theis and Bethge (2015); van den Oord et al. (2016, 2016).

Likelihood-based models currently lag behind their adversarial counterparts in terms of the visual fidelity and the resolution of their samples. However, adversarial models are known to drop modes of the distribution, something which likelihood-based models are inherently unlikely to do. Within the likelihood-based model paradigm, autoregressive models such as PixelCNN tend to be the best at capturing textures and details in images, because they make no independence assumptions and they are able to use their capacity efficiently through spatial parameter sharing. They also achieve the best likelihoods. We describe PixelCNN in more detail in Section 2.

However, autoregressive models are markedly worse at capturing structure at larger scales, and as a result they tend to produce samples that are lacking in terms of large-scale coherence (see appendix for a demonstration). This can be partially attributed to the inductive bias embedded in their architecture, but it is also a consequence of the likelihood loss function, which rewards capturing local correlations much more generously than capturing long-range structure. As far as the human visual system is concerned, the latter is arguably much more important to get right, and this is where adversarial models currently have a substantial advantage.

Figure 2: Schematic overview of a hierarchical autoregressive model. The dashed lines indicate different stages, which capture different scales of structure in the input image.

To make autoregressive models pay more attention to large-scale structure, an effective strategy is to remove local detail from the input representation altogether. A simple way to do this for images is by reducing their bit-depth Kingma and Dhariwal (2018); Menick and Kalchbrenner (2019). An alternative approach is to learn new input representations that abstract away local detail, by training encoder models. We can then train autoregressive models of the image pixels conditioned on these representations, as well as autoregressive priors for these representations, effectively splitting the task into two separate stages van den Oord et al. (2017). We can extend this approach further by stacking encoder models, yielding a hierarchy of progressively more high-level representations Dieleman et al. (2018), as shown in Figure 2. This way, we can explicitly assign model capacity to different scales of structure in the images, and turn the bias these models have towards capturing local structure into an advantage. Pseudocode for the full training and sampling procedures is provided in the appendix.

Learning representations that remove local detail while preserving enough information to enable a conditional autoregressive model to produce high-fidelity pixel-level reconstructions is a non-trivial task. A natural way to do this would be to turn the conditional model into an autoencoder, so that the representations and the reconstruction model can be learnt jointly. However, this approach is fraught with problems, as we will discuss in Section 3.

Instead, we propose two alternative strategies based on auxiliary decoders, which are particularly suitable for hierarchical models: we use feed-forward (i.e. non-autoregressive) decoders or masked self-prediction (MSP) to train the encoders. Both techniques are described in Section 4. We show that the produced representations allow us to construct hierarchical models trained using only likelihood losses that successfully produce samples with large-scale coherence. Bringing the capabilities of likelihood-based models up to par with those of their adversarial counterparts in terms of scale and fidelity is important, because this allows us to sidestep any issues stemming from mode dropping and exert more control over the mapping between model capacity and image structure at different scales.

We make the representations learnt by the encoders discrete by inserting vector quantisation (VQ) bottlenecks van den Oord et al. (2017). This bounds the information content of the representations, and it enables more efficient and stable training of autoregressive priors van den Oord et al. (2016). In addition, models with VQ bottlenecks do not suffer from posterior collapse Bowman et al. (2016), unlike regular VAEs. Because of their discrete nature, we will also refer to the learnt representations as codes. We cover VQ bottlenecks in neural networks in more detail in Section 2. We also include a downsampling operation in the encoders so that higher-level codes have a lower spatial resolution.

The contributions of this work are threefold: we study the problems associated with end-to-end training of autoencoders with autoregressive decoders. We also propose two alternative strategies for training such models using auxiliary decoders: feed-forward decoding and masked self-prediction (MSP). Finally, we construct hierarchical likelihood-based models that produce high-fidelity and high-resolution samples (128×128 and 256×256) which exhibit large-scale coherence. Selected samples are shown in Figure 1.

2 Background

We will use PixelCNN as the main building block for hierarchical image models. We will also insert vector quantisation bottlenecks in the encoders to enable them to produce discrete representations. We briefly describe both of these components below and refer to van den Oord et al. (2016, 2016, 2017) for a more detailed overview.

2.1 PixelCNN

PixelCNN is an autoregressive model: it assumes an arbitrary ordering of the pixels and colour channels of an image, and then models the distribution of each intensity value in the resulting sequence conditioned on the previous values. In practice, the intensities are typically flattened into a sequence in ‘raster scan’ order: row by row from top to bottom, from left to right within each row, and according to red, green and blue intensities within each pixel. Let $x_i$ be the intensity value at position $i$ in this sequence. Then the density across all intensity values is factorised into a product of conditionals: $p(\mathbf{x}) = \prod_i p(x_i \mid x_1, \ldots, x_{i-1})$. PixelCNN models each of these conditionals with the same convolutional neural network, using weight masking to ensure that each value in the sequence depends only on the values before it.
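As an illustrative sketch of the weight masking described above (the function name is ours, and the per-channel RGB masking is omitted), the raster-scan causality constraint can be implemented by zeroing part of each convolution kernel:

```python
import numpy as np

def causal_mask(kernel_size, mask_type="A"):
    """Build a raster-scan causality mask for a 2D conv kernel.

    Positions strictly above the centre row, and positions to the left
    of centre in the centre row, stay visible. A type-'A' mask (first
    layer) also hides the centre pixel; type-'B' masks (later layers)
    keep it visible.
    """
    k = kernel_size
    mask = np.zeros((k, k), dtype=np.float32)
    c = k // 2
    mask[:c, :] = 1.0   # rows above the centre
    mask[c, :c] = 1.0   # left of centre, same row
    if mask_type == "B":
        mask[c, c] = 1.0  # centre pixel visible in later layers
    return mask
```

Multiplying each kernel elementwise by this mask before the convolution guarantees that no output depends on pixels that come later in the raster-scan ordering.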

2.2 Vector quantisation

Vector quantisation variational autoencoders (VQ-VAE) use a vector quantisation (VQ) bottleneck to learn discrete representations. The encoder produces a continuous $d$-dimensional vector $z_e(x)$, which is then quantised to one of $K$ possible vectors from a codebook. This codebook is learnt jointly with the other model parameters. The quantisation operation is non-differentiable, so gradients are backpropagated through it using straight-through estimation Bengio et al. (2013). In practice, this means that they are backpropagated into the encoder as if the quantisation operation were absent, which implies that the encoder receives approximate gradients.

The model is trained using the loss function $\mathcal{L} = -\log p(x \mid z_q(x)) + \|[z_e(x)] - e\|_2^2 + \beta \|z_e(x) - [e]\|_2^2$, where $x$ is the input, $z_e(x)$ is the output of the encoder and $z_q(x)$ is the quantised output, with $e$ the selected codebook vector. $\beta$ is a hyperparameter and square brackets indicate that the contained expressions are treated as constant w.r.t. differentiation ($[x]$ is like tf.stop_gradient(x) in TensorFlow). The three terms correspond to the reconstruction log-likelihood, the codebook loss and the commitment loss respectively. Instead of optimising all terms using gradient descent, we use an alternative learning rule for the codebook using an exponentially smoothed version of K-means, which replaces the codebook loss and which is described in the appendix of van den Oord et al. (2017). Although this approach was introduced in the context of autoencoders, VQ bottlenecks can be inserted in any differentiable model, and we will make use of that fact in Section 4.2.
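A minimal sketch of the quantisation step (nearest-neighbour codebook lookup; names and shapes are ours, and the EMA codebook update is omitted):

```python
import numpy as np

def vq_straight_through(z_e, codebook):
    """Quantise encoder outputs z_e (N, d) to their nearest codebook
    vectors (K, d); returns (z_q, indices).

    In an autodiff framework, z_q would be expressed as
    z_e + stop_gradient(z_q - z_e), so gradients from the decoder flow
    straight through the quantisation step into the encoder.
    """
    # Pairwise squared distances between encoder outputs and codes.
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    z_q = codebook[idx]
    return z_q, idx
```

The straight-through trick in the docstring is what makes the otherwise non-differentiable argmin usable inside a network trained by backpropagation.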

3 Challenges of autoregressive autoencoding

Autoregressive autoencoders are autoencoders with autoregressive decoders. Their appeal lies in the combination of two modelling strategies: using latent variables to capture global structure, and autoregressive modelling to fill in local detail. This idea has been explored extensively in literature van den Oord et al. (2016); Gulrajani et al. (2016); Chen et al. (2016); van den Oord et al. (2017); Engel et al. (2017); Dieleman et al. (2018).

During training, autoregressive models learn to predict one step ahead given the ground truth. This is often referred to as teacher forcing Williams and Zipser (1989). However, when we sample from a trained model, we use previous predictions as the model input instead of ground truth. This leads to a discrepancy between the training and inference procedures: in the latter case, prediction errors can accumulate. Unfortunately, autoregressive autoencoders can exhibit several different pathologies, especially when the latent representation has the same spatial structure as the input (i.e., there is a spatial map of latents, not a single latent vector). Most of these stem from an incompatibility between teacher forcing and the autoencoder paradigm:

Figure 3: Illustration of an issue with autoregressive autoencoders caused by teacher forcing. Left: original 128×128 image. Right: reconstruction sampled from an autoregressive autoencoder.

  • When the loss actively discourages the use of the latent representation to encode information (like the KL term does in VAEs), the decoder will learn to ignore it and use only the autoregressive connections. This phenomenon is known as posterior collapse Bowman et al. (2016).

  • On the other hand, when the latent representation has high information capacity, there is no incentive for the model to learn to use the autoregressive connections in the decoder.

  • The encoder is encouraged to preserve in the latent representations any noise that is present in the input, as the autoregressive decoder cannot accurately predict it from the preceding pixels. This is counter to the intuition that the latent representations should capture high-level information, rather than local noise. This effect is exacerbated by teacher forcing which, during training, enables the decoder to make very good next-step predictions in the absence of noise. When noise is present, there will be a very large incentive for the model to store this information in the codes.

  • The encoder is encouraged to ignore slowly varying aspects of the input that are very predictable from local information, because the ground truth input is always available to the autoregressive decoder during training. This affects colour information in images, for example: it is poorly preserved when sampling image reconstructions, as during inference the sampled intensities will quickly deviate slightly from their original values, which then recursively affects the colour of subsequent pixels. This is demonstrated in Figure 3.

Workarounds to these issues include strongly limiting the capacity of the representation (e.g. by reducing its spatial resolution, using a latent vector without spatial structure, inserting a VQ bottleneck and/or introducing architectural constraints in the encoder) or limiting the receptive field of the decoder Gulrajani et al. (2016); van den Oord et al. (2017); Dieleman et al. (2018). Unfortunately, this limits the flexibility of the models. Instead, we will try to address these issues more directly by decoupling representation learning from autoregressive decoder training.

4 Auxiliary decoders

To address the issues associated with jointly training encoders and autoregressive decoders, we introduce auxiliary decoders: separate decoder models which are only used to provide a learning signal to the encoders. Once an encoder has been trained this way, we can discard the auxiliary decoder and replace it with a separately trained autoregressive decoder conditioned on the encoder representations. The autoregressive decoder consists of a local model (PixelCNN) and a modulator which maps the encoder representations to a set of biases for each layer of the local model. This separation allows us to design alternative decoders with architectures and loss functions that are tuned for feature learning rather than for reconstruction quality alone.
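The modulator can be thought of as a per-layer projection of the code embeddings, upsampled to the pixel resolution. A sketch under our own naming and shape assumptions (the paper's exact modulator architecture may differ):

```python
import numpy as np

def conditioning_biases(codes, proj_list, upsample):
    """Map code embeddings (H, W, d_code) to one bias map per layer of
    the local PixelCNN model, upsampled to pixel resolution by
    nearest-neighbour repetition. `proj_list` holds one
    (d_code, d_layer) projection matrix per decoder layer.
    """
    biases = []
    for W in proj_list:
        b = codes @ W                              # (H, W, d_layer)
        b = b.repeat(upsample, 0).repeat(upsample, 1)
        biases.append(b)
    return biases
```

Each returned bias map is simply added to the activations of the corresponding local-model layer, so the autoregressive decoder sees the encoder representation at every layer.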

Although the encoder and decoder are trained using different loss functions, it is still convenient to train them simultaneously, taking care not to backpropagate the autoregressive decoder loss to the encoder. Otherwise, multiple networks would have to be trained in sequence for each level in the hierarchy. We use simultaneous training in all of our experiments. This has a negligible effect on the quality of the reconstructions.

4.1 Feed-forward decoders

The most straightforward form the auxiliary decoder can take is that of a feed-forward model that tries to reconstruct the input. Even though the task of both decoders is then the same, the feed-forward architecture shapes what kinds of information the auxiliary decoder is able to capture. Because such a decoder does not require teacher forcing during training, the issues discussed in Section 3 no longer occur.

When trained on RGB images, we can treat the pixel intensities as continuous and use the mean squared error (MSE) loss for training. For other types of inputs, such as codes produced by another encoder, we can use the same multinomial log-likelihood that is typically used for autoregressive models. Using the MSE would not make sense as the discrete codes are not ordinal and cannot be treated as continuous values. Figure 4 (left) shows a diagram of an autoregressive autoencoder with an auxiliary feed-forward decoder.

Figure 4: Discrete autoencoders with auxiliary decoders. Left: a feed-forward decoder is used to train the encoder. Right: a masked self-prediction (MSP) model is trained and then distilled into a new model with unmasked input to obtain the encoder. Both models feature autoregressive decoders. Note the dashed arrows indicating that no gradients are backpropagated along these connections.

A significant benefit of this approach is its simplicity: we do not stray too far from the original autoencoder paradigm because the model is still trained using a reconstruction loss. However, an important drawback is that the reconstruction task still encourages the model to capture as much information as possible in the codes, even unimportant details that would be easy for the autoregressive decoder to fill in. It affords relatively little control over the nature and the quantity of information captured in the codes.

4.2 Masked self-prediction decoders

Models with feed-forward decoders are encouraged to encode as much information as possible in the codes to help the decoders produce detailed reconstructions. As a result, the codes may not be very compressible (see Section 7.1), which makes stacking multiple encoders to create a hierarchy quite challenging. Instead of optimising the auxiliary decoder for reconstruction, we can train it to do self-prediction: predict the distribution of a pixel given the surrounding pixels. By masking out some region of the input around the pixel to be predicted, we can prevent the decoder from using strong local correlations, and force it to rely on weaker long-range dependencies instead. As a result, the produced codes will be much more compressible because they only contain information about longer-range correlations in the input. The local detail omitted from these representations can later be filled in by the autoregressive decoder.

In practice, masked self-prediction (MSP) entails masking some square region of the input and predicting the middle pixel of this region. If the region is 7×7 pixels large, for example, the model can only rely on correlations between pixels that are at least 4 positions away for its prediction. In the presence of noise or other unpredictable local detail (such as textures), the MSP model will be uncertain about its predictions and produce a distribution across all possibilities. The encoder representation will then capture this uncertainty, rather than trying to encode the exact pixel values.

Given an input $\mathbf{x}$, a pixel position $(i, j)$ and an offset $k$ (corresponding to a mask size of $(2k+1) \times (2k+1)$), the MSP objective is to maximise $\log p(x_{i,j} \mid \mathbf{m} \odot \mathbf{x})$, where the input mask $\mathbf{m}$ is given by:

$$m_{u,v} = \begin{cases} 0 & \text{if } |u - i| \le k \text{ and } |v - j| \le k, \\ 1 & \text{otherwise.} \end{cases}$$
In practice, we can select multiple pixel positions per input to make training more sample-efficient, but the total number of positions should be limited because too much of the input could be masked out otherwise.
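The mask described above can be constructed directly; a sketch (the clipping at image boundaries is our own choice):

```python
import numpy as np

def msp_mask(height, width, i, j, k):
    """Binary MSP input mask: zero out the (2k+1) x (2k+1) square
    centred on the target pixel (i, j), keep everything else visible.
    The square is clipped at the image boundary.
    """
    m = np.ones((height, width), dtype=np.float32)
    m[max(i - k, 0):i + k + 1, max(j - k, 0):j + k + 1] = 0.0
    return m
```

Multiplying the input by this mask before feeding it to the teacher model guarantees that the prediction for pixel $(i, j)$ can only use correlations spanning more than $k$ positions.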

Because this approach requires input masking, we would have to run a forward pass through the encoder with a different mask for each spatial position if we wanted to compute representations for the entire input. This is computationally prohibitive, so instead, we use distillation Hinton et al. (2015) to obtain an encoder model that does not require its input to be masked. We train a teacher model with masked input (a simple feed-forward network), and simultaneously distill its predictions for the selected pixel positions into a student model with unmasked input and a vector quantisation bottleneck. Because it is convolutional, this student model will learn to produce valid representations for all spatial positions, even though it is only trained on a subset of spatial positions for each input. Representations for an input can then be computed in a single forward pass. The full setup is visualised in Figure 4 (right) and described in pseudocode in the appendix.
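A sketch of the distillation objective under these assumptions (cross-entropy between the teacher's and student's predictive distributions at the selected positions; function names are ours):

```python
import numpy as np

def log_softmax(x, axis=-1):
    s = x - x.max(axis=axis, keepdims=True)
    return s - np.log(np.exp(s).sum(axis=axis, keepdims=True))

def distill_loss(teacher_logits, student_logits):
    """Cross-entropy between the (masked-input) teacher's predictive
    distribution and the (unmasked-input) student's, averaged over the
    selected pixel positions. Inputs are (num_positions, num_values).
    Only this loss reaches the student encoder; gradients are not
    propagated back into the teacher.
    """
    p = np.exp(log_softmax(teacher_logits))
    return float(-(p * log_softmax(student_logits)).sum(-1).mean())
```

When the two sets of logits agree, the loss reduces to the entropy of the teacher's distribution, its minimum attainable value.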

5 Related work

Recent work on scaling generative models of images to larger resolutions has been focused chiefly on adversarial models. Karras et al. (2018a, b) trained generative adversarial networks (GANs) that can generate various scenes and human faces at resolutions of 256×256 and higher (up to 1 megapixel for the latter). Brock et al. (2019) generate 512×512 images for each of the 1000 classes in the ImageNet dataset, all with the same GAN model.

Reed et al. (2017) train a multiscale autoregressive model which gradually upsamples images, starting from 4×4 pixels, and makes some independence assumptions in the process. Conditioned on textual captions and spatial keypoints, the model is capable of producing realistic 256×256 images of birds. Although using low-resolution images as representations that abstract away local detail is appealing, this necessarily removes any high-frequency information. Learning these representations instead is more flexible and allows for capturing high-frequency structure.

Menick and Kalchbrenner (2019) train a variant of PixelCNN dubbed ‘subscale pixel network’ (SPN), which uses a different, hierarchical ordering of the pixels rather than the raster scan order to factorise the joint distribution into conditionals. Their best models consist of two separate SPNs, where one models only the 3 most significant bits of the intensity values at a lower resolution, and another conditionally fills in the remaining information. Trained on the ImageNet dataset, this model is able to generate visually compelling unconditional samples at 128×128 resolution.

The idea of sequential, separate training of levels in a hierarchy dates back to the early days of deep learning Bengio et al. (2007); Hinton et al. (2006); Vincent et al. (2010). Dieleman et al. (2018) train a hierarchical autoregressive model of musical audio signals by stacking autoregressive discrete autoencoders. However, the autoencoders are trained end-to-end, which makes them prone to the issues described in Section 3. Training the second level autoencoder is cumbersome, requiring expensive population-based training Jaderberg et al. (2017) or alternative quantisation strategies to succeed. Applied to images, it leads to colour information being ignored.

Goyal et al. (2017) use an auxiliary reconstruction loss to avoid posterior collapse in recurrent models. Masked self-prediction is closely related to representation learning methods such as context prediction Doersch et al. (2015) and context encoders Pathak et al. (2016), which also rely on predicting pixels from other nearby pixels. Contrastive predictive coding Oord et al. (2018) on the other hand relies on prediction in the feature domain to extract structure that varies predictably across longer ranges. Although the motivation behind approaches such as these is usually to extract high-level, semantically meaningful features, our goal is different: we want to remove some of the local detail to make the task of modelling large-scale structure easier. We achieve this by predicting only the middle pixel of the masked-out region, which is the main difference compared to previous work. Our representations also need to balance abstraction with reconstruction, so they need to retain enough information from the input.

Context prediction is also a popular representation learning approach in natural language processing, with well-known examples such as word2vec Mikolov et al. (2013), ELMo Peters et al. (2018) and BERT Devlin et al. (2018). Other related work includes PixelNet Bansal et al. (2017), which uses loss functions defined on subsets of image pixels (much like MSP) to tackle dense prediction tasks such as edge detection and semantic segmentation.

In concurrent work, Razavi et al. (2019b) present a hierarchical generative model of images based on a multi-scale VQ-VAE model combined with one or more autoregressive priors. All levels of latent representations depend directly on the pixels and differ only in their spatial resolution. The pixel-level decoder of this model is feed-forward rather than autoregressive, which enables faster sampling but also results in some degree of blurriness. While this can be an acceptable trade-off for image modelling, accurately capturing high-frequency structure may be important for other data modalities.

6 Evaluation

Likelihood-based models can typically be evaluated simply by measuring the likelihood in the pixel domain on a held-out set of images. With the hierarchical approach, however, different parts of the model are trained using likelihoods measured in different feature spaces, so they are not directly comparable. For a given image, we can measure the likelihood in the feature space modelled by the prior, as well as conditional likelihoods in the domains modelled by each autoregressive decoder, and use these to calculate a joint likelihood across all levels of the model. We can use this as a lower bound for the marginal likelihood (see appendix), but in practice it is dominated by the conditional likelihood of the pixel-level decoder, so it is not particularly informative. Although we report this bound for some models, we stress that likelihoods measured in the pixel domain are not suitable for measuring whether a model captures large-scale structure Theis et al. (2016) – indeed, this is the primary motivation behind our hierarchical approach.

Implicit generative models such as GANs (for which computing likelihoods is intractable) are commonly evaluated using metrics that incorporate pre-trained discriminative models, such as the Inception Score (IS) Salimans et al. (2016) and the Fréchet Inception Distance (FID) Heusel et al. (2017). Both require a large number of samples, so they are expensive to compute. Nevertheless, sampling from our models is fast enough for this to be tractable (see appendix). These metrics are not without problems however Barratt and Sharma (2018); Bińkowski et al. (2018), and it is unclear if they correlate well with human perception when used to evaluate non-adversarial generative models.

To complement these computational metrics, we also use human evaluation. We conduct two types of experiments to assess the realism of the generated images, and compare our results to state-of-the-art models: we ask participants to rate individual images for realism on a scale of 1 to 5, and also to compare pairs of images and select the one which looks the most realistic. Note that these experiments are not suitable for assessing diversity, which is much harder to measure.

To allow for further inspection and evaluation, we have made samples for all classes that were used for evaluation available at https://bit.ly/2FJkvhJ. When generating samples, we can change the temperature of the multinomial distribution which we sequentially sample from for each channel and spatial position. We find that slightly reducing the temperature below the default of 1 more consistently yields high quality samples.
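Temperature adjustment amounts to scaling the logits before each categorical sampling step; a minimal sketch (names are ours):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Draw one value from a categorical distribution after dividing
    the logits by the temperature; temperatures below 1 sharpen the
    distribution towards its mode, temperatures above 1 flatten it.
    """
    scaled = logits / temperature
    p = np.exp(scaled - scaled.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))
```

During ancestral sampling, this function would be applied once per channel and spatial position, in raster-scan order.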

7 Experiments and results

All experiments were performed on the ImageNet Deng et al. (2009) and Downsampled ImageNet van den Oord et al. (2016) datasets, with full bit-depth RGB images (8 bits per channel); we use the training, validation and test splits as defined by the dataset creators. For experiments at resolutions of 128×128 and 256×256, we rescale the shortest side of the image to the desired size and then randomly crop a square image during training (to preserve the aspect ratio). We also use random flipping and brightness, saturation and contrast changes to augment the training data Szegedy et al. (2015). For 64×64 experiments, we use the images made available by van den Oord et al. (2016), without augmentation.

7.1 Auxiliary decoder design

As the auxiliary decoders are responsible for shaping the representations learnt by the encoders, their architecture can be varied to influence the information content. We use residual networks for both the encoders and auxiliary decoders He et al. (2016a) and vary the number of layers. Each additional layer extends the receptive field of the decoder, which implies that a larger neighbourhood in the code space can affect any given spatial position in the output space. As a result, the information about each pixel is spread out across a larger neighbourhood in the code space, which allows for more efficient use of the discrete bottleneck.

To measure the effect this has on the compressibility of the codes, we first train some autoencoder models on 64×64 colour images using a discrete bottleneck with a single 8-bit channel (256 codes) and downsampling to 32×32 using a strided convolutional layer. The codes are upsampled in the decoder using a subpixel convolutional layer Shi et al. (2016). We then train prior models and measure the validation negative log-likelihood (NLL) they achieve at the end of training. The priors are modestly sized PixelCNN models with 20 layers and 128 units per layer.

Figure 5: Code predictability for different encoders, as measured by the validation NLL of a small PixelCNN prior (see text for details). Left: increasing the number of auxiliary decoder layers makes the codes harder to predict. NLLs for codes from feed-forward decoders (red circles) flatten out more quickly than those from MSP decoders (mask size 5×5, blue squares). Right: increasing the mask size for MSP decoders makes the resulting codes easier to predict (orange triangles).

The results for both feed-forward and MSP decoders (mask size 5×5) are shown in Figure 5 (left). It is clear that the codes become less predictable as the receptive field of the auxiliary decoder increases. As expected, the MSP codes are also more predictable than the feed-forward codes. The predictability of the feed-forward codes seems to flatten out after about 8 decoder layers, while that of the MSP codes decreases more gradually.

For MSP decoders, we repeat this experiment fixing the number of layers to 2 and varying the mask size instead (Figure 5, right). As expected, increasing the mask size reduces the information content of the codes and makes them more predictable. In the appendix, we also discuss the effect of the auxiliary decoder design on reconstruction quality.

7.2 Codes abstract away local detail

In this subsection and the next, we evaluate the building blocks that we will use to construct hierarchical models of 128×128 and 256×256 RGB images. To verify that the codes learnt using auxiliary decoders abstract away local detail, we show the variability of sampled reconstructions. Note that the codes for a given input are deterministic, so all stochasticity comes from the autoregressive decoder. We compress all images into single-channel 8-bit codes (256 bins) at 32×32 resolution. We could obtain higher-fidelity reconstructions by increasing the code capacity (by adding more channels, increasing their bit-depth or increasing the resolution), but we choose to use single-channel codes with a modest capacity, so that we can train powerful priors that do not require code channel masking to ensure causality.

For 128×128 images (48× compression; from 128·128·3·8 = 393,216 bits to 32·32·8 = 8,192 bits), we use a feed-forward auxiliary decoder with 8 layers trained with the MSE loss, and an MSP decoder with 8 layers and a 3×3 mask size. Reconstructions for both are shown in Figure 6. Note how the MSE decoder is better at preserving local structure (e.g. text on the box), while the MSP decoder is better at preserving the presence of textures (e.g. wallpaper, body of the bird). Both types of reconstructions show variation in colour and texture.

Figure 6: Autoregressive autoencoder reconstructions of 128×128 images. Left: original images. Middle: two different sampled reconstructions from models with a feed-forward auxiliary decoder trained with the MSE loss. Right: two different sampled reconstructions from models with an MSP auxiliary decoder with mask size 3×3. The sampling temperature was slightly reduced. More examples can be found in the appendix.
Figure 7: Autoregressive autoencoder reconstructions of 256×256 images. Left: original images. Middle: three different sampled two-level reconstructions from models with a feed-forward auxiliary decoder trained with the MSE loss. Right: three different sampled two-level reconstructions from models with an MSP auxiliary decoder with mask sizes 5×5 (level 1) and 3×3 (level 2). The sampling temperature was slightly reduced. More examples can be found in the appendix.

For 256×256 images (192× compression), we use a stack of two autoencoders, because using a single autoencoder would result in a significant degradation in visual fidelity (see appendix). In Figure 7, we show reconstructions from a stack trained with feed-forward auxiliary decoders, where the first-level model compresses to single-channel 8-bit codes at 128×128 resolution, so the first- and second-level models compress by factors of 12 and 16 respectively (using 1 and 12 auxiliary decoder layers respectively). We also show reconstructions from a stack trained with MSP auxiliary decoders, where the first level compresses to 8-bit codes at 64×64 resolution with 3 channels, so the first- and second-level models compress by factors of 16 (with 4 auxiliary decoder layers) and 12 (with 8 layers) respectively. We refer to the appendix for an exploration of the information captured in the codes as a function of the hyperparameters of the auxiliary decoders.
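The compression factors quoted in this subsection follow directly from the code shapes; a quick sketch verifying the arithmetic (the helper name `compression_factor` is ours, for illustration only):

```python
# Compression factors implied by the code shapes in this section.
# Images are 8-bit RGB and codes are 8-bit, so the bit depths cancel
# and the factor reduces to a ratio of element counts.

def compression_factor(in_shape, code_shape):
    """Ratio of element counts (h * w * c) between input and code."""
    def numel(shape):
        h, w, c = shape
        return h * w * c
    return numel(in_shape) / numel(code_shape)

# 128x128 RGB -> 32x32 single-channel codes: the 48x compression quoted above.
assert compression_factor((128, 128, 3), (32, 32, 1)) == 48

# 256x256 stack with feed-forward decoders:
# level 1: 256x256x3 -> 128x128x1 (12x); level 2: 128x128x1 -> 32x32x1 (16x).
assert compression_factor((256, 256, 3), (128, 128, 1)) == 12
assert compression_factor((128, 128, 1), (32, 32, 1)) == 16

# 256x256 stack with MSP decoders:
# level 1: 256x256x3 -> 64x64x3 (16x); level 2: 64x64x3 -> 32x32x1 (12x).
assert compression_factor((256, 256, 3), (64, 64, 3)) == 16
assert compression_factor((64, 64, 3), (32, 32, 1)) == 12

# End to end, both stacks compress 256x256 images by 16 * 12 = 192x.
assert compression_factor((256, 256, 3), (32, 32, 1)) == 192
```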

7.3 Hierarchical models

We construct hierarchical models using the autoencoders from the previous section, by training class-conditional autoregressive priors on the codes they produce. Once a prior is trained, we can use ancestral sampling to generate images. Most of the model capacity in our hierarchical models should be used for visually salient large-scale structure, so we use powerful prior models, while the decoder models are relatively small.
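As a toy illustration of this ancestral sampling procedure for the two-level 128×128 models (a class-conditional prior over 32×32 codes, then an autoregressive decoder mapping codes to pixels), the sketch below uses random stand-ins for both networks; `toy_prior` and `toy_decoder` are hypothetical placeholders, not the actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_prior(class_label, shape=(32, 32), num_bins=256):
    # Stand-in for the class-conditional autoregressive prior over 8-bit codes.
    return rng.integers(0, num_bins, size=shape)

def toy_decoder(codes, out_shape, num_bins=256):
    # Stand-in for the autoregressive decoder, which samples pixels conditioned
    # on the codes (real decoding proceeds sequentially, pixel by pixel).
    return rng.integers(0, num_bins, size=out_shape)

def ancestral_sample(class_label):
    codes = toy_prior(class_label)              # 32x32 single-channel codes
    pixels = toy_decoder(codes, (128, 128, 3))  # 128x128 RGB image
    return pixels

sample = ancestral_sample(class_label=207)
assert sample.shape == (128, 128, 3)
```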

Figure 8: Selected class-conditional 128×128 samples from our models with feed-forward auxiliary decoders (top) and MSP decoders (bottom). More are available in the appendix and at https://bit.ly/2FJkvhJ.
128×128 images.

We report results and show samples for two priors: one trained on the feed-forward codes from the previous section, and one for the MSP codes. The prior models are large gated PixelCNNs augmented with masked self-attention layers Vaswani et al. (2017), which are inserted after every few convolutional layers as in PixelSNAIL Chen et al. (2018). Selected samples are shown in Figure 8. Samples for all classes are available at https://bit.ly/2FJkvhJ. IS and FID are reported in Table 1, as well as the joint NLL over the pixels and codes. We do not directly compare with results from previous papers as we cannot compute exact likelihoods, and differences in preprocessing of the images can significantly affect these measurements. The IS and FID are much worse than those reported for recent adversarial models Brock et al. (2019), but they are in the same ballpark as those reported for PixelCNN on 32×32 ImageNet by Ostrovski et al. (2018) (IS 8.33, FID 33.27).

Aux. decoder IS FID Joint NLL
Feed-forward 18.10 ± 0.96 44.95 3.343 bits/dim
MSP 17.02 ± 0.79 46.05 3.409 bits/dim

Table 1: IS and FID for autoregressive priors trained on 32×32 codes obtained from 128×128 images. We also report the joint NLL as discussed in Section 6.
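The joint NLL column can be read as total information per pixel dimension; a short sketch of that bookkeeping, using pure arithmetic on the reported 3.343 bits/dim (the split between the prior and decoder terms is not reported, so it is only described in comments):

```python
# Joint NLL bookkeeping for the feed-forward row of Table 1 (3.343 bits/dim).
# bits/dim divides the joint NLL over pixels *and* codes by the number of
# pixel dimensions only, so the cost of the code prior is folded into the
# same per-pixel scale.

pixel_dims = 128 * 128 * 3       # 49,152 dimensions per image
joint_bits = 3.343 * pixel_dims  # total bits for pixels and codes together

# By the chain rule, joint_bits = prior_bits + decoder_bits, where
# prior_bits is the NLL of the 32x32 codes under the prior and
# decoder_bits is the NLL of the pixels under the decoder given those codes.

assert round(joint_bits) == 164315
```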
256×256 images.

We trained two priors, one on feed-forward codes and one on MSP codes. Samples from both are available in the appendix and at https://bit.ly/2FJkvhJ.

Human evaluation.

We asked human raters to rate 128×128 images from 20 different classes, generated by different models, on a scale from 1 to 5 in terms of realism (Table 2). Note that even real images get a relatively low rating on this scale due to the limited resolution. In line with expectations, our hierarchical models receive lower realism scores than BigGAN and are on par with subscale pixel networks. We also asked human raters to compare samples from certain models side by side and pick the more realistic-looking ones (Table 3). Here, we get similar results: samples from a hierarchical model are preferred over BigGAN samples in 22.89% of cases, and over real images in just 5.39% of cases. Details of our experimental setup can be found in the appendix, along with a few nearest-neighbour comparisons of samples with images from the dataset in different feature spaces.

Model Average rating
real images
BigGAN (high truncation) Brock et al. (2019)
BigGAN (low truncation) Brock et al. (2019)
Subscale pixel network Menick and Kalchbrenner (2019)
Hierarchical (MSP, ours)
Hierarchical (feed-forward, ours)
Table 2: Human realism ratings (from 1 to 5) for samples from different models. We report the average rating and standard error.
Model A Model B Prefer A Prefer B Unknown
MSP (ours) Feed-forward (ours)
MSP (ours) SPN
MSP (ours) BigGAN (low truncation)
MSP (ours) real images
Table 3: Human rater preference in a pairwise comparison experiment between samples from different models.

8 Conclusion

We have discussed the challenges of training autoregressive autoencoders, and proposed two techniques that address these challenges using auxiliary decoders. The first uses a feed-forward network to reconstruct pixels, while the second relies on predicting missing pixels. This enabled us to build hierarchical autoregressive models of images which are capable of producing high-fidelity class-conditional samples with large-scale coherence at resolutions of 128×128 and 256×256 when trained on ImageNet. This demonstrates that our hierarchical approach can be used to effectively scale up likelihood-based generative models. In future work, we would like to compare both techniques in more challenging settings and further explore their relative strengths and limitations.


Acknowledgements

We would like to thank the following people for their help and input: Aäron van den Oord, Ali Razavi, Jacob Menick, Marco Cornero, Tamas Berghammer, Andy Brock, Jeff Donahue, Carl Doersch, Jacob Walker, Chloe Hillier, Louise Deason, Scott Reed, Nando de Freitas, Mary Chesus, Jonathan Godwin, Trevor Back and Anish Athalye.


References

  • A. Bansal, B. Russell, A. Gupta, and D. Ramanan (2017) PixelNet: representation of the pixels, by the pixels, and for the pixels. arXiv preprint arXiv:1702.06506.
  • S. Barratt and R. Sharma (2018) A note on the inception score. arXiv preprint arXiv:1801.01973.
  • Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle (2007) Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19, pp. 153–160.
  • Y. Bengio, N. Léonard, and A. Courville (2013) Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
  • M. Bińkowski, D. J. Sutherland, M. Arbel, and A. Gretton (2018) Demystifying MMD GANs. In International Conference on Learning Representations.
  • S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Józefowicz, and S. Bengio (2016) Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL 2016), pp. 10–21.
  • A. Brock, J. Donahue, and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations.
  • X. Chen, D. P. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Schulman, I. Sutskever, and P. Abbeel (2016) Variational lossy autoencoder. arXiv preprint arXiv:1611.02731.
  • X. Chen, N. Mishra, M. Rohaninejad, and P. Abbeel (2018) PixelSNAIL: an improved autoregressive generative model. In ICML, pp. 863–871.
  • J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • S. Dieleman, A. van den Oord, and K. Simonyan (2018) The challenge of realistic music generation: modelling raw audio at scale. In Advances in Neural Information Processing Systems 31, pp. 8000–8010.
  • L. Dinh, D. Krueger, and Y. Bengio (2014) NICE: non-linear independent components estimation. arXiv preprint arXiv:1410.8516.
  • L. Dinh, J. Sohl-Dickstein, and S. Bengio (2017) Density estimation using Real NVP.
  • C. Doersch, A. Gupta, and A. A. Efros (2015) Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422–1430.
  • J. Engel, C. Resnick, A. Roberts, S. Dieleman, M. Norouzi, D. Eck, and K. Simonyan (2017) Neural audio synthesis of musical notes with WaveNet autoencoders. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 1068–1077.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680.
  • A. Goyal, A. Sordoni, M. Côté, N. R. Ke, and Y. Bengio (2017) Z-forcing: training stochastic recurrent networks. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pp. 6697–6707.
  • I. Gulrajani, K. Kumar, F. Ahmed, A. Ali Taiga, F. Visin, D. Vazquez, and A. Courville (2016) PixelVAE: a latent variable model for natural images. arXiv preprint arXiv:1611.05013.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016a) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016b) Identity mappings in deep residual networks. In ECCV.
  • M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637.
  • G. E. Hinton, S. Osindero, and Y. Teh (2006) A fast learning algorithm for deep belief nets. Neural Computation 18 (7), pp. 1527–1554.
  • G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.
  • M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, C. Fernando, and K. Kavukcuoglu (2017) Population based training of neural networks. arXiv preprint arXiv:1711.09846.
  • T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018a) Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations.
  • T. Karras, S. Laine, and T. Aila (2018b) A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948.
  • D. P. Kingma and M. Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  • D. P. Kingma and P. Dhariwal (2018) Glow: generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems 31, pp. 10236–10245.
  • J. Menick and N. Kalchbrenner (2019) Generating high fidelity images with subscale pixel networks and multidimensional upscaling. In International Conference on Learning Representations.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pp. 3111–3119.
  • A. van den Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
  • G. Ostrovski, W. Dabney, and R. Munos (2018) Autoregressive quantile networks for generative modeling. In ICML, pp. 3933–3942.
  • T. L. Paine, P. Khorrami, S. Chang, Y. Zhang, P. Ramachandran, M. A. Hasegawa-Johnson, and T. S. Huang (2016) Fast WaveNet generation algorithm. arXiv preprint arXiv:1611.09482.
  • D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536–2544.
  • M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. In Proceedings of NAACL-HLT 2018, Volume 1 (Long Papers), pp. 2227–2237.
  • B. T. Polyak and A. B. Juditsky (1992) Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization 30 (4), pp. 838–855.
  • A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  • A. Razavi, A. van den Oord, B. Poole, and O. Vinyals (2019a) Preventing posterior collapse with delta-VAEs. arXiv preprint arXiv:1901.03416.
  • A. Razavi, A. van den Oord, and O. Vinyals (2019b) Generating diverse high resolution images with VQ-VAE. ICLR 2019 Workshop DeepGenStruct.
  • S. E. Reed, A. van den Oord, N. Kalchbrenner, S. G. Colmenarejo, Z. Wang, Y. Chen, D. Belov, and N. de Freitas (2017) Parallel multiscale autoregressive density estimation. In ICML, pp. 2912–2921.
  • D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pp. 1278–1286.
  • T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242.
  • W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, pp. 1874–1883.
  • K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations.
  • C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR).
  • L. Theis, A. van den Oord, and M. Bethge (2016) A note on the evaluation of generative models. In International Conference on Learning Representations.
  • L. Theis and M. Bethge (2015) Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems 28, pp. 1927–1935.
  • A. van den Oord, N. Kalchbrenner, L. Espeholt, K. Kavukcuoglu, O. Vinyals, and A. Graves (2016) Conditional image generation with PixelCNN decoders. In Advances in Neural Information Processing Systems 29, pp. 4790–4798.
  • A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Downsampled ImageNet 64x64.
  • A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu (2016) Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, pp. 1747–1756.
  • A. van den Oord, O. Vinyals, and K. Kavukcuoglu (2017) Neural discrete representation learning. In Advances in Neural Information Processing Systems 30, pp. 6306–6315.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30, pp. 5998–6008.
  • P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol (2010) Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11 (Dec), pp. 3371–3408.
  • R. J. Williams and D. Zipser (1989) A learning algorithm for continually running fully recurrent neural networks. Neural Computation 1, pp. 270–280.
  • S. Zhou, M. Gordon, R. Krishna, A. Narcomey, D. Morina, and M. S. Bernstein (2019) HYPE: human eye perceptual evaluation of generative models. arXiv preprint arXiv:1904.01121.

Appendix A Architectural details

In this section, we describe the architecture of the different components of our models, and provide hyperparameters for all experiments. All hierarchical models consist of one or two autoencoder models and a prior, which are trained separately and in sequence. We use Polyak averaging Polyak and Juditsky [1992] for all models with a decay constant of 0.9999.

Each autoencoder consists of a number of subnetworks: an encoder, an autoregressive decoder and an auxiliary decoder. A quantisation bottleneck is inserted between the encoder and both decoders. In the case of MSP training, there is also an additional teacher network. The autoregressive decoder in turn consists of a modulator and a local model. The local model is always a gated PixelCNN van den Oord et al. [2016]. The modulator is a residual net and is responsible for mapping the code input to a set of biases for each layer in the local model. The encoder, auxiliary decoder, and teacher networks are all residual nets as well. For all residual networks, we use the ResNet v2 ‘full pre-activation’ formulation He et al. [2016b] (without batch normalisation), where each residual block consists of a ReLU nonlinearity, a 3×3 convolution, another ReLU nonlinearity and a 1×1 convolution, in that order. Note that we chose not to condition any of the autoencoder components on class labels in our experiments (only the priors are class-conditional).
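A minimal numpy sketch of this pre-activation residual block, assuming the A.1.1 widths (512 hiddens with a 128-hidden residual bottleneck); `conv2d` is a naive same-padded convolution written here only for illustration:

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same'-padded 2D convolution. x: (H, W, Cin), w: (kh, kw, Cin, Cout)."""
    kh, kw, cin, cout = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    h, wd = x.shape[:2]
    out = np.zeros((h, wd, cout))
    for i in range(kh):
        for j in range(kw):
            out += xp[i:i + h, j:j + wd] @ w[i, j]  # (H, W, Cin) @ (Cin, Cout)
    return out

def residual_block(x, w3, w1):
    """ResNet v2 'full pre-activation' block, without batch normalisation:
    ReLU -> 3x3 conv (into the bottleneck width) -> ReLU -> 1x1 conv -> skip."""
    h = conv2d(np.maximum(x, 0.0), w3)  # ReLU, then 3x3 convolution
    h = conv2d(np.maximum(h, 0.0), w1)  # ReLU, then 1x1 convolution back up
    return x + h                        # residual connection

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 512))                # small spatial map for the demo
w3 = rng.normal(size=(3, 3, 512, 128)) * 0.01   # 512 hiddens -> 128-wide bottleneck
w1 = rng.normal(size=(1, 1, 128, 512)) * 0.01   # bottleneck -> 512 hiddens
y = residual_block(x, w3, w1)
assert y.shape == x.shape
```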

We train all models on 64×64 crops, unless the input representations already have a resolution of 64×64 or smaller. When training MSP models, we use a different number of masks per image, depending on the mask size. For mask sizes 1×1 and 3×3 we use 30 masks. For mask sizes 5×5 and 7×7 we use 10 masks. For sizes 9, 11, 13 and 15 we use 3 masks, and for 17 and 19 we use a single mask per image. In preliminary experiments, these settings enabled us to get the best self-prediction likelihoods. Note that we present the images and codes to the MSP teacher and encoder in a one-hot representation, so that masked pixels can be distinguished from black pixels.
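The mask-count schedule above can be written as a small lookup; `masks_per_image` is a hypothetical helper name, sketched from the settings listed in the text:

```python
def masks_per_image(mask_size):
    """Number of MSP masks per training image for a given (square) mask size,
    per the schedule above (chosen for the best self-prediction likelihoods)."""
    if mask_size in (1, 3):
        return 30
    if mask_size in (5, 7):
        return 10
    if mask_size in (9, 11, 13, 15):
        return 3
    if mask_size in (17, 19):
        return 1
    raise ValueError(f"unsupported mask size: {mask_size}")

assert masks_per_image(3) == 30 and masks_per_image(5) == 10
assert masks_per_image(13) == 3 and masks_per_image(19) == 1
```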

We first describe the architecture details for the autoregressive autoencoders with feed-forward and masked self-prediction auxiliary decoders for the different resolutions. The priors on the resulting codes are described jointly for all models in Section A.3.

A.1 2-level models for 128×128 images

A.1.1 With feed-forward decoder

The encoder, auxiliary decoder, and modulator are all residual networks with 512 hiddens and a residual bottleneck of 128 hiddens. The encoder and modulator both have 16 layers whereas the auxiliary decoder has only 2 layers. In the encoder, the features are downscaled at the end using a strided convolution with a stride of 2. In the auxiliary decoder and the modulator, the upsampling (by a factor of 2) is done at the beginning using a subpixel convolutional layer Shi et al. [2016]. The local model, a gated PixelCNN, has 16 layers with 128 units. The VQ bottleneck has 1 channel with 9 bits (512 bins); we use 9 bits instead of the 8 bits used in other experiments because it yielded a significant increase in reconstruction quality. The model was trained with the Adam optimizer for 300000 iterations.
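The subpixel upsampling step amounts to a depth-to-space rearrangement: a convolution that produces C·r² channels, followed by this op, is the subpixel convolutional layer of Shi et al. A minimal numpy version (the helper name `depth_to_space` is ours):

```python
import numpy as np

def depth_to_space(x, r):
    """Subpixel (pixel-shuffle) upsampling: rearranges an (H, W, C*r*r) array
    into (H*r, W*r, C) by interleaving the r x r sub-blocks stored in depth."""
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)  # (h, r, w, r, c): interleave sub-pixels
    return x.reshape(h * r, w * r, c)

x = np.arange(2 * 2 * 4, dtype=np.float32).reshape(2, 2, 4)
y = depth_to_space(x, 2)  # 2x upsampling, as in the modulator and aux. decoder
assert y.shape == (4, 4, 1)
# The 4 depth values of x[0, 0] become the top-left 2x2 spatial block of y.
assert y[0, 0, 0] == 0 and y[0, 1, 0] == 1 and y[1, 0, 0] == 2 and y[1, 1, 0] == 3
```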

A.1.2 With MSP decoder

The teacher, encoder, decoder and modulator are all residual networks with 128 hiddens. The encoder, teacher and modulator have 16 layers whereas the auxiliary decoder has 8 layers. We use a mask size of 3×3. The local model has 20 layers and the VQ bottleneck has 1 channel with 8 bits. The model was trained with the Adam optimizer for 200000 iterations. Other than that, the setup matches the one used for the feed-forward decoder.

A.2 3-level models for 256×256 images

A.2.1 With feed-forward decoder

For the first level model, which maps 256×256 RGB images to 128×128 single-channel codes with 8 bits, we can make the components relatively small. We use the same model as described in Section A.1.1 except for the following: we use only 4 encoder layers, 1 layer in the auxiliary decoder and 8 autoregressive layers.

The second level model, which maps the 128×128 codes to single-channel 32×32 codes with 8 bits, uses 16 encoder and 16 modulator layers with 1024 hiddens and residual bottlenecks of 256 hiddens. The auxiliary decoder has 12 layers; the autoregressive decoder also has only 8 layers, but with 384 hiddens per layer.

A.2.2 With MSP decoder

For the first level model, which maps 256×256 RGB images to 64×64 3-channel codes with 8 bits, we use the same model as described in Section A.1.2, except that the auxiliary decoder has 4 layers and the mask size is 5×5.

The second level model, which maps the 64×64 codes to single-channel 32×32 codes with 8 bits, is also the same, but has an auxiliary decoder with 8 layers and a mask size of 3×3.

A.3 Prior details

Our autoregressive priors are very similar to those described by Razavi et al. [2019a], which are in turn closely related to PixelSNAIL Chen et al. [2018]. We list their details in Table 4.

Model 128×128 (FF) 128×128 (MSP) 256×256 (FF) 256×256 (MSP)
Layers 20 20 20 20
Hiddens 640 640 640 640
Residual filter size 2048 2048 2048 2048
Timing signal add add concat add
Attention layers 6 5 5 5
Attention heads 10 10 15 10
Dropout 0.2 0.1 0.0 0.1
Batch size 2048 2048 2048 2048
Iterations 490200 267000 422700 185000

Table 4: Architecture details for the priors on the different codes: FF denotes the feed-forward decoder model and MSP denotes the masked self-prediction model. We list the number of layers, the number of hiddens for each layer, the residual filter size, whether the timing signal was added to or concatenated with the input, the number of attention layers, the number of attention heads, the dropout probability, the batch size and the number of iterations each model was trained for.

Appendix B Training

All models were trained on Google TPU v3 Pods. The encoders and decoders were trained on pods with 32 cores, whereas the priors were trained on pods with 512 cores.

The procedure for training hierarchical models is outlined in Algorithm 1. The procedures for training encoders with feed-forward and masked self-prediction auxiliary decoders are outlined in Algorithms 2 and 3 respectively.

L: number of levels
x_{l-1}: input representation at level l
x_l: output representation at level l
x_0: pixels
enc_l: encoder at level l
aux_l: auxiliary decoder at level l
dec_l: autoregressive decoder at level l
prior: top-level prior (at level L)
for l in 1, …, L do
     Train enc_l, aux_l using the auxiliary loss
     Train dec_l using the conditional log-likelihood loss
end for
Train prior on x_L using the log-likelihood loss
Combine dec_1, …, dec_L and prior into a hierarchical autoregressive model
Algorithm 1 Training procedure for hierarchical autoregressive models. Note that training of enc_l, aux_l and dec_l is carried out simultaneously in practice.
x_{l-1}: input representation at the current level l
enc_l: encoder at level l
aux_l: auxiliary decoder at level l
if l = 1 then
     auxiliary loss: mean-squared error in pixel space
else
     auxiliary loss: categorical negative log-likelihood in code space
end if
Train enc_l, aux_l using the auxiliary loss
Discard aux_l, retain enc_l
Algorithm 2 Training procedure for encoders with feed-forward auxiliary decoders.
x_{l-1}: input representation at the current level l
m: mask size at the current level
M_in: random input mask, masking one or more image regions of size m×m
M_out: output mask, masking all except the middle pixels of the masked-out regions in M_in
enc_l: encoder at level l
aux_l: auxiliary decoder at level l
t_l: teacher at level l
Train t_l using the masked self-prediction loss
Train enc_l, aux_l using the masked distillation loss
Discard aux_l, t_l, retain enc_l
Algorithm 3 Training procedure for encoders with masked self-prediction (MSP) auxiliary decoders. Note that training of t_l, enc_l and aux_l is carried out simultaneously in practice.
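The relation between the two MSP masks can be sketched as follows (a toy with a single masked region and hypothetical sizes and positions; the actual training samples one or more random regions per image):

```python
import numpy as np

def msp_masks(hw, m, top, left, middle=1):
    """Return (input_mask, output_mask) as booleans; True = masked / target.
    The input mask hides an m x m region from the encoder/teacher input; the
    output mask selects only the middle pixels of that region as prediction
    targets, so they cannot be copied from immediately adjacent visible pixels."""
    m_in = np.zeros((hw, hw), dtype=bool)
    m_in[top:top + m, left:left + m] = True
    m_out = np.zeros((hw, hw), dtype=bool)
    off = (m - middle) // 2
    m_out[top + off:top + off + middle, left + off:left + off + middle] = True
    return m_in, m_out

m_in, m_out = msp_masks(hw=64, m=5, top=10, left=20)
assert m_in.sum() == 25 and m_out.sum() == 1
assert (m_in & m_out).sum() == m_out.sum()  # targets lie inside the hidden region
```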

Appendix C Sampling

The procedure for sampling from hierarchical models is outlined in Algorithm 4.

L: number of levels
x_l: input representation at level l
x_0: pixels
dec_l: autoregressive decoder at level l
prior: top-level prior (at level L)
Sample x_L from prior
for l in L, …, 1 do
     Sample x_{l-1} from dec_l conditioned on x_l
end for
Algorithm 4 Ancestral sampling procedure for hierarchical autoregressive models.

We report all sampling timings using a single NVIDIA V100 GPU. We use a version of incremental sampling Paine et al. [2016] which uses buffers to avoid unnecessary recomputation. Because our current version of incremental sampling does not support models with attention, we use naive sampling for the autoregressive priors: at every point we simply pass the entire previously sampled input to the model. For 128×128 it takes roughly 23 minutes to sample a batch of 25 codes from the prior and 9 minutes to use the level 1 model to map these codes to 128×128 images. For a batch of 9 images at 256×256 resolution, it takes 10 minutes to first sample the level 2 codes from the prior, less than 2 minutes to sample level 1 codes conditioned on these level 2 codes, and finally 9 minutes to use these codes to sample the 256×256 images themselves.

Appendix D PixelCNN bias towards local structure

To demonstrate that autoregressive models like PixelCNN are inherently biased towards capturing local structure, we trained a class-conditional gated PixelCNN model van den Oord et al. [2016] with 20 layers, 384 hidden units and 1024 hidden units for the residual connections on 64×64 ImageNet images. Some conditional samples from this model are shown in Figure 9. While these samples feature recognisable textures associated with the respective classes, they are not globally coherent. Generating coherent samples would require a model that is too large to train in a feasible amount of time.

Figure 9: Samples from a gated PixelCNN trained on 64×64 ImageNet images. From left to right, the classes are ‘lion’, ‘ambulance’ and ‘cheeseburger’.

Appendix E Effect of auxiliary decoder design on reconstructions

In the main paper, we discussed the effect of the architecture of the auxiliary decoder on the information content and compressibility of the codes. Here, we show how this affects reconstruction quality by using the same trained models to reconstruct some example 64×64 images and visualising the result. Figure 10 shows how changing the number of layers and the mask size of an MSP decoder affects sampled reconstructions. Note that the autoregressive decoder is stochastic, so repeated sampling yields slightly different reconstructions (not shown). We also show difference images to make subtle differences easier to spot. Smaller decoders lead to worse reconstruction quality overall. Larger mask sizes have the same effect and particularly affect local detail.

Figure 10: Reconstructions from MSP-trained codes with different auxiliary decoder hyperparameters. The top left image in each grid is the original 64×64 image. The rest of row 1 shows sampled reconstructions for decreasing decoder depth (16, 14, 12, 10, 8, 6, 4, 2, 0 layers respectively) with mask size 5×5. Row 3 shows reconstructions for increasing mask size (1, 3, 5, 7, 9, 11, 13, 15, 17, 19 respectively) with 2 decoder layers. Rows 2 and 4 show the difference between the rows above and the original image.

Appendix F Bounding the marginal likelihood by the joint likelihood

As discussed in the main paper, the joint likelihood across the image pixels and all code representations can be used as a lower bound for the marginal likelihood of the image pixels only. Let x be an image, and c_l the representation of the image at level l in a hierarchy of L levels (with c_0 = x). Writing ĉ_1, …, ĉ_L for the codes produced by the encoders, we can compute:

p(x) = Σ_{c_1, …, c_L} p(x, c_1, …, c_L) ≥ p(x, ĉ_1, …, ĉ_L) = p(x | ĉ_1) · (∏_{l=1}^{L−1} p(ĉ_l | ĉ_{l+1})) · p(ĉ_L).

Because all encoders are deterministic, they associate only one set of codes with each image. If the corresponding decoders had infinite capacity, this would imply that the marginal distribution p(x) is equal to the joint distribution p(x, ĉ_1, …, ĉ_L). In practice however, the decoders are imperfect and will assign some non-zero probability to an input when conditioned on other codes: for codes c_l ≠ ĉ_l, we can have p(x, c_1, …, c_L) > 0. This implies that to calculate p(x) correctly, we would have to sum over all codes. This is intractable, but we can instead use the joint p(x, ĉ_1, …, ĉ_L) as a lower bound for p(x). As mentioned in the main text, the pixel term p(x | ĉ_1) will dominate this bound in practice, and it is not suitable for measuring whether a model captures large-scale structure.
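This lower-bound argument can be sanity-checked numerically on a toy one-level discrete model with a deterministic encoder and a randomly initialised decoder (all distributions below are made up for illustration):

```python
import numpy as np

# Toy numeric check of the bound: a discrete "image" x, a deterministic
# encoder c_hat(x), and an imperfect decoder p(x | c) that leaks probability
# to codes other than c_hat(x). Marginalising over all codes gives p(x),
# which is at least the joint evaluated at the deterministic codes.

rng = np.random.default_rng(0)
num_x, num_c = 6, 4
p_c = rng.dirichlet(np.ones(num_c))                      # prior over codes
p_x_given_c = rng.dirichlet(np.ones(num_x), size=num_c)  # decoder: rows sum to 1

def encode(x):
    # Deterministic toy encoder: each image maps to exactly one code.
    return x % num_c

for x in range(num_x):
    marginal = sum(p_x_given_c[c, x] * p_c[c] for c in range(num_c))
    joint_at_code = p_x_given_c[encode(x), x] * p_c[encode(x)]
    assert marginal >= joint_at_code  # p(x) >= p(x, c_hat): the bound holds
```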

Appendix G Human evaluation

For our human evaluation experiments, we used a varied subset of 20 classes in an attempt to cover the different types of object classes available in the ImageNet dataset, and used 50 128×128 samples for each class (the same ones that we have made available online), for a total of 1,000 images per model. Where real images were used, we used the same rescaling strategy as during training (see main text). To display the images to the raters, they were first upscaled to 512×512 using bilinear interpolation. We also show the ImageNet class label to the raters.

For the realism rating experiments, each image was rated 3 times by different individuals, for a total of 6,000 ratings. Raters were explicitly asked not to pay attention to scaling artifacts, but this is hard to control for, and the relatively low score for real images indicates that this has impacted the results. They were also shown examples of real and generated images, so they would have an idea of what to look out for.

For the pairwise comparison experiments, we asked the raters “which image looks the most realistic?”. When comparing against real images, we asked instead “which image is real?”. This nuance is important as it gives the raters some extra information. Each pair consists of one sample from the first model, and one sample from the second model. We created 500 random sample pairs per class, in such a way that every sample is used exactly 10 times (using all possible pairs would require too many ratings). Each pairing was shown in random order and rated by 3 different individuals, for a total of 30,000 ratings.
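One way to build such a pairing, so that each of the 50 samples per model per class appears in exactly 10 of the 500 pairs, is a cyclic offset scheme (the paper specifies only the counts, so this particular scheme is an assumption):

```python
from collections import Counter

# Pair sample i of model A with samples (i + k) mod 50 of model B,
# for offsets k = 0..9: 500 pairs, each sample used exactly 10 times.
num_samples, uses = 50, 10
pairs = [(i, (i + k) % num_samples)
         for i in range(num_samples)
         for k in range(uses)]

assert len(pairs) == 500
a_counts = Counter(a for a, _ in pairs)
b_counts = Counter(b for _, b in pairs)
assert all(count == uses for count in a_counts.values())
assert all(count == uses for count in b_counts.values())
```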

A very small fraction of answers for each experiment were unusable (we report this fraction in the ‘unknown’ column). For the experiment comparing samples from a hierarchical model trained with an MSP auxiliary decoder against one trained with a feed-forward auxiliary decoder, we found that all 3 raters agreed in 63.68% of cases (accounting for 53 unusable answers, or 0.18%). The samples from the MSP model are preferred in 50.89% of cases, and the samples from the feed-forward model in 48.93% of cases. Although this result is very balanced overall, there are larger differences between the models within each class. We report the per-class preferences for this experiment in Table 5.

Our pairwise comparison experiments are similar to those conducted by Zhou et al. [2019], with a few key differences: our models are class-conditional, so we evaluate over a diverse subset of classes and we need to use a larger number of images. As a consequence of this, we gather fewer evaluations per image. We do not provide the raters with immediate feedback when comparing against real images: we do not tell them whether they guessed correctly or not. It is impossible to provide such feedback when comparing models directly against each other (because neither image is real), so for consistency across experiments, we do not do this even when it is technically possible.

Class                             Feed-forward    MSP
megalith (649)                    36.24%          63.62%
giant panda (388)                 36.53%          63.20%
cheeseburger (933)                41.07%          58.80%
Geoffroy’s spider monkey (381)    42.00%          57.80%
coral reef (973)                  43.20%          56.67%
schooner (780)                    46.07%          53.60%
Pomeranian (259)                  47.40%          52.60%
white stork (127)                 47.63%          51.97%
seashore (978)                    50.33%          49.67%
starfish (327)                    50.80%          49.00%
volcano (980)                     50.93%          48.93%
bookcase (453)                    51.13%          48.80%
Granny Smith (948)                51.40%          48.47%
monarch butterfly (323)           51.87%          48.13%
yellow garden spider (72)         52.13%          47.73%
ambulance (407)                   52.60%          47.00%
frying pan (567)                  55.07%          44.73%
grey whale (147)                  55.33%          44.47%
tiger (292)                       56.93%          42.87%
Dalmatian (251)                   59.93%          39.80%

Table 5: Preference for 128×128 samples from hierarchical models trained with feed-forward auxiliary decoders and with MSP auxiliary decoders, for 20 classes (50 samples per model per class, 500 random pairwise comparisons by 3 raters, 1,500 answers per class in total). Rows do not always sum to 100% because of a small number of unusable answers.

Appendix H Nearest neighbours

Although overfitting in likelihood-based models can be identified directly by evaluating likelihoods on a holdout set, we also searched the dataset for nearest neighbours of some model samples in several different feature spaces. Following Brock et al. [2019], we show nearest neighbours using L2 distance in pixel space, as well as in ‘VGG-16-fc7’ and ‘ResNet-50-avgpool’ feature spaces (obtained from discriminative models pre-trained on ImageNet), in Figures 11, 12 and 13 respectively.
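A nearest-neighbour search of this kind reduces to ranking the dataset by L2 distance to the query in the chosen feature space. The sketch below is a generic NumPy implementation under our own naming, not the authors' code; the feature vectors could be flattened pixels or activations from a pre-trained classifier such as VGG-16 fc7 or ResNet-50 average-pool:

```python
import numpy as np

def nearest_neighbours(query_feat, dataset_feats, k=8):
    """Return indices of the k dataset images closest to the query
    under L2 distance in the given feature space.

    query_feat: (D,) feature vector of the generated sample.
    dataset_feats: (N, D) feature vectors of the dataset images.
    """
    diffs = dataset_feats - query_feat[None, :]   # (N, D)
    dists = np.linalg.norm(diffs, axis=1)         # L2 distance per image
    return np.argsort(dists)[:k]

# Toy usage: 100 random 'images' described by 16-d features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
idx = nearest_neighbours(feats[3], feats, k=1)
print(idx)  # the query is its own nearest neighbour: [3]
```

When the query is itself a dataset image, it trivially ranks first; in practice the query is a model sample, so the top-k returned images are its closest matches in the training set.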

Figure 11: Nearest neighbours in pixel space. The generated image is in the top left.
Figure 12: Nearest neighbours in VGG-16-fc7 Simonyan and Zisserman [2015] feature space. The generated image is in the top left.
Figure 13: Nearest neighbours in ResNet-50-avgpool He et al. [2016a] feature space. The generated image is in the top left.

Appendix I Additional reconstructions

We provide some additional autoregressive autoencoder reconstructions of 128×128 images in Figure 14 to further demonstrate their variability and the differences between both auxiliary decoder strategies. Figure 15 contains additional reconstructions of 256×256 images. Figure 16 shows reconstructions from an autoencoder that directly compresses 256×256 images to single-channel 32×32 8-bit codes (MSP, 8 auxiliary decoder layers, mask size 5×5), resulting in a significant degradation in visual fidelity.

Figure 14: Additional autoregressive autoencoder reconstructions of 128×128 images. Left: original images. Middle: three different sampled reconstructions from models with a feed-forward auxiliary decoder trained with the MSE loss. Right: three different sampled reconstructions from models with an MSP auxiliary decoder with mask size 3×3. The sampling temperature was .
Figure 15: Additional autoregressive autoencoder reconstructions of 256×256 images. Left: original images. Middle: three different sampled reconstructions from models with a feed-forward auxiliary decoder trained with the MSE loss. Right: three different sampled reconstructions from models with an MSP auxiliary decoder with mask sizes 5×5 (level 1) and 3×3 (level 2). The sampling temperature was .
Figure 16: Autoregressive autoencoder reconstructions of 256×256 images, using a single autoencoder (MSP auxiliary decoder, mask size 5×5). The reconstructed images are significantly degraded in terms of visual fidelity. The sampling temperature was . Please refer to Figure 14 for the original images and a comparison with 2-level reconstructions.

Appendix J Additional samples

More samples are available online at https://bit.ly/2FJkvhJ in original quality (some figures in the paper are compressed to save space). Figures 17-20 show 128×128 samples from a model using a feed-forward auxiliary decoder. Figures 21-24 show 128×128 samples from a model using an MSP auxiliary decoder. Figures 25-28 show 256×256 samples from a model using feed-forward auxiliary decoders. Figures 29-31 show 256×256 samples from a model using MSP auxiliary decoders.

Figure 17: Great grey owl.
Figure 18: Maltese.
Figure 19: Monarch butterfly.
Figure 20: Cheeseburger.
Figure 21: Great grey owl.
Figure 22: Lorikeet.
Figure 23: Samoyed.
Figure 24: Cheeseburger.
Figure 25: Ostrich.
Figure 26: Tarantula.
Figure 27: Monarch butterfly.
Figure 28: Home theater.
Figure 29: Toucan.
Figure 30: Red admiral.
Figure 31: Valley.