Video Compression With Rate-Distortion Autoencoders

Amirhossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, Taco S. Cohen
Qualcomm AI Research, Amsterdam, the Netherlands
{habibian, ties, jtomczak, tacos}@qti.qualcomm.com
Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
Abstract

In this paper we present a deep generative model for lossy video compression. We employ a model that consists of a 3D autoencoder with a discrete latent space and an autoregressive prior used for entropy coding. Both autoencoder and prior are trained jointly to minimize a rate-distortion loss, which is closely related to the ELBO used in variational autoencoders. Despite its simplicity, we find that our method outperforms the state-of-the-art learned video compression networks based on motion compensation or interpolation. We systematically evaluate various design choices, such as the use of frame-based or spatio-temporal autoencoders, and the type of autoregressive prior.

In addition, we present three extensions of the basic method that demonstrate the benefits over classical approaches to compression. First, we introduce semantic compression, where the model is trained to allocate more bits to objects of interest. Second, we study adaptive compression, where the model is adapted to a domain with limited variability, e.g., videos taken from an autonomous car, to achieve superior compression on that domain. Finally, we introduce multimodal compression, where we demonstrate the effectiveness of our model in joint compression of multiple modalities captured by non-standard imaging sensors, such as quad cameras. We believe that this opens up novel video compression applications, which have not been feasible with classical codecs.

1 Introduction

In recent years, tremendous progress has been made in generative modelling. Although much of this work has been motivated by potential future applications such as model-based reinforcement learning, data compression is a very natural application that has received comparatively little attention. Deep learning-based video compression in particular has only recently started to be explored [11, 33, 40]. This is remarkable because improved video compression would have a large economic impact: it is estimated that video will soon account for the large majority of internet traffic [12].

Figure 1: Overview of the proposed compression inference pipeline. The encoder encodes a sequence of frames $x$ into a sequence of quantized latent variables $z$. A code model $p(z)$ is used to transform $z$ into a bitstream using adaptive arithmetic coding (AAC). On the receiver side, the bitstream is used to reconstruct $z$, which is then lossily decoded into $\hat{x}$.

In this paper, we present a simple yet effective and theoretically grounded method for video compression that can serve as the basis for future work in this nascent area. Our model consists of off-the-shelf components from the deep generative modelling literature, namely autoencoders (AE) and autoregressive models (ARM). Despite its simplicity, the model outperforms all methods to which a direct comparison is possible, including substantially more complicated approaches.

On the theoretical side, we show that our method, as well as state-of-the-art image compression methods [28] can be interpreted as VAEs [25, 31] with a discrete latent space and a deterministic encoder. The VAE framework is an especially good fit for the problem of lossy compression, because it provides a natural mechanism for trading off rate and distortion, as measured by the two VAE loss terms [3]. However, as we will argue in this paper, it is not beneficial for the purpose of compression to use a stochastic encoder (approximate posterior) as usually done in VAEs, because any noise added to the encodings results in increased bitrate without resulting in an improvement in distortion [18].

On the experimental side, we perform an extensive evaluation of several architectural choices, such as the use of 2D or 3D autoencoders, and the type of autoregressive prior. Our best model uses a ResNet [17] autoencoder with 3D convolutions, and a temporally-conditioned gated PixelCNN [37] as prior. We benchmark our method against existing learned video compression methods, and show that it achieves better rate/distortion. We also find that our method outperforms the state-of-the-art traditional codecs when these are used with restricted settings, as is done in previous work, but more work remains before it can be claimed that learned video compression methods surpass traditional codecs under optimal settings.

Additionally, we introduce several extensions of our method that highlight the benefits of using learned video codecs. In semantic compression, we bridge the gap between semantic video understanding and compression by learning to allocate more bits to objects from categories of interest, i.e., people. During training, we weight the rate and distortion losses to ensure a high quality reconstruction for regions of interest extracted by off-the-shelf object detection or segmentation networks, such as Mask R-CNN [16].

We also demonstrate adaptive compression, where the model is trained on a specific domain, either before or after deployment, to fine-tune it to the distribution of videos it is actually used for. We show that adaptive compression of footage from autonomous cars can result in large improvements in rate and distortion. With classical codecs, finetuning for a given domain is often not feasible.

Finally, we show that our method is very effective in joint compression of multiple modalities, such as those found in videos from depth, stereo, or multi-view cameras. By exploiting the significant redundancy present in multimodal videos, our model outperforms HEVC/H.265 and AVC/H.264 by a factor of 4.

The main contributions of this paper are: i) We present a simple yet effective and theoretically grounded method for video compression that can serve as the basis for future work. ii) We clarify theoretically the relation between rate-distortion autoencoders and VAEs. iii) We introduce semantic compression to bridge the gap between semantic video understanding and compression. iv) We introduce adaptive compression to adapt a compression model to the domain of interest. v) We introduce multimodal compression to jointly compress the multiple modalities present in a video using a single deep video compression network.

The rest of the paper is organized as follows. In the next section, we discuss related work on learned image and video compression. Then, in section 3, we discuss the theoretical framework of learned compression using rate-distortion autoencoders, as well as the relation to variational autoencoders. In section 4 we discuss our methodology in detail, including data preprocessing and autoencoder and prior architecture. We present experimental results in section 5, comparing our method to classical and learned video codecs, evaluating semantic compression, adaptive compression, and multimodal compression. Section 6 concludes the paper.

2 Related Work

Learned Image Compression Deep neural networks are the state-of-the-art in image compression, outperforming all traditional compression algorithms such as BPG and JPEG2000. They often embed an input image into a low-dimensional representation using fully convolutional [28] or recurrent networks [4, 22, 36]. The image representation is quantized by soft scalar quantization [2], stochastic binarization [36], or by adding uniform noise [5] to approximate the non-differentiable quantization operation. The discrete image representation can be further compressed by minimizing the entropy during [10, 28] or after [5, 6, 26] training. The models are typically trained to minimize the mean squared error between original and decompressed images, or by using more perceptual metrics such as MS-SSIM [32] or an adversarial loss [34].

The work closest to ours is the rate-distortion autoencoder proposed in [28] for image compression. We extend this work to video compression by: i) proposing a gated conditional autoregressive prior using 2D convolutions [37] with, optionally, a recurrent neural net for better entropy estimation over time; ii) encoding/decoding multiple frames using 3D convolutions; and iii) simplifying the model and training by removing the spatial importance map [26] and disjoint entropy estimation, without any loss in compression performance.

Learned Video Compression Video compression shares many similarities with image compression, but the large size of video data and the very high degree of redundancy create new challenges [15, 30, 33, 40]. One of the first deep learning-based approaches proposes to model video autoregressively with an RNN-conditioned PixelCNN [23]. While powerful and flexible, this model scales rather poorly to larger videos and can only be used for lossless compression. Hence, we employ this kind of model for lossless compression of the latent codes, which are much smaller than the video itself. An extension of this method was proposed in [11], where blocks of pixels are modeled in an autoregressive fashion and the latent space is binarized as in [36]. The applicability of this approach is rather limited, since it is still not very scalable and introduces artifacts at the boundaries between blocks, especially at low bitrates.

The method described in [40] compresses videos by first encoding key frames, and then interpolating them in a hierarchical manner. The results are on par with AVC/H.264 when inter-frame compression is limited to only a few frames. However, this method requires additional components to handle the context of the predicted frame. In our approach, we aim to learn these interactions through 3D convolutions instead. In [15], a stochastic variational compression method for video was presented. The model contains separate latent variables for each frame and for the inter-frame dependencies, and uses the prior proposed in [6]. By contrast, we use a simpler model with a single latent space, and a deterministic instead of a stochastic encoder.

Very recently, the video compression problem has been approached by compressing optical flow and residuals [27, 33]. The additional components for flow and residual modeling generally improve distortion; however, at low bitrates the proposed methods are still outperformed by HEVC/H.265 on benchmark datasets. Nevertheless, we believe these ideas are promising and may further improve the results presented in this paper.

3 Rate-Distortion Autoencoders & VAEs

Our general approach to lossy compression is to learn a latent variable model in which the latent variables capture the important information that is to be transmitted, and from which the original input can be approximately reconstructed. We begin by defining a joint model of data $x$ and discrete latent variables $z$,

$$p_\theta(x, z) = p_\theta(z)\, p_\theta(x \mid z). \qquad (1)$$

In the next section we will discuss the specific form of $p_\theta(z)$ (the prior / code model) and $p_\theta(x \mid z)$ (the likelihood / decoder), both of which will be defined in terms of deep networks, but for now we will consider them as general parameterized distributions.

Since the marginal likelihood $p_\theta(x)$ is intractable, one optimizes the variational bound [8, 38],

$$-\log p_\theta(x) \le \mathbb{E}_{q_\phi(z \mid x)}\left[-\log p_\theta(x \mid z)\right] + \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p_\theta(z)\right), \qquad (2)$$

where $q_\phi(z \mid x)$ is a newly introduced approximate posterior. In the VAE [25, 31], one uses neural networks to parameterize both $q_\phi(z \mid x)$ and $p_\theta(x \mid z)$, which can thus be thought of as the encoder and decoder part of an autoencoder.

The VAE is commonly interpreted as a regularized autoencoder, where the first term of the loss measures the reconstruction error and the KL term acts as a regularizer [25]. But the variational bound also has an interesting interpretation in terms of compression / minimum description length [10, 14, 18, 19, 20]. Under this interpretation, the first term of the rhs of Eq. 2 measures the expected number of bits required to encode $x$ given that we know a sample $z \sim q_\phi(z \mid x)$. More specifically, one can derive a code for $x$ from the decoder distribution $p_\theta(x \mid z)$, which assigns roughly $-\log p_\theta(x \mid z)$ bits to $x$ [13]. Averaged over $q_\phi(z \mid x)$, one obtains the first term of the VAE loss (Eq. 2).

We note that in lossy compression, we do not actually encode $x$ using $p_\theta(x \mid z)$, which would allow lossless reconstruction. Instead, we only send $z$, and hence refer to the first loss term as the distortion.

The second term of the bound (the KL) is related to the cost of coding the latents coming from the encoder $q_\phi(z \mid x)$ using an optimal code derived from the prior $p_\theta(z)$. Such a code will use about $-\log p_\theta(z)$ bits to encode $z$. Averaging over the encoder $q_\phi(z \mid x)$, we find that the average coding cost is equal to the cross-entropy between $q_\phi$ and $p_\theta$:

$$H\!\left(q_\phi(z \mid x),\, p_\theta(z)\right) = \mathbb{E}_{q_\phi(z \mid x)}\left[-\log p_\theta(z)\right]. \qquad (3)$$

The cross-entropy is related to the KL via the relation $H(q_\phi, p_\theta) = \mathrm{KL}(q_\phi \,\|\, p_\theta) + H(q_\phi)$, where $H(q_\phi)$ is the entropy of the encoder $q_\phi(z \mid x)$. So the KL measures the coding cost, except that there is a discount worth $H(q_\phi)$ bits: randomness coming from the encoder is free. It turns out that there is indeed a scheme, known as bits-back coding, that makes it possible to transmit $z$ and get $H(q_\phi)$ bits back, but this scheme is difficult to implement in practice, and can only be used in lossless compression [18].

Since we cannot use bits-back coding for lossy compression, the cross-entropy provides a more suitable rate loss than the KL. Moreover, when using discrete latents, the entropy $H(q_\phi)$ is always non-negative, so we can add it to the rhs of Eq. 2 and obtain a valid bound. We thus obtain the rate-distortion loss

$$\mathcal{L}_{\mathrm{RD}} = \mathbb{E}_{q_\phi(z \mid x)}\left[-\log p_\theta(x \mid z)\right] + \beta\, H\!\left(q_\phi(z \mid x),\, p_\theta(z)\right), \qquad (4)$$

where $\beta$ is a rate-distortion tradeoff parameter.

Since the cross-entropy loss does not include a discount for the encoder entropy, there is a pressure to make the encoder more deterministic. Indeed, for a fixed prior $p_\theta(z)$ and likelihood $p_\theta(x \mid z)$, the optimal solution for $q_\phi(z \mid x)$ is a deterministic ("one hot") distribution that puts all its mass on the state $z$ that minimizes $-\log p_\theta(x \mid z) - \beta \log p_\theta(z)$.

For this reason, we only consider deterministic encoders in this work. When using deterministic encoders, the rate-distortion loss (Eq. 4) is equivalent to the variational bound (Eq. 2), because (assuming discrete $z$) we have $H(q_\phi) = 0$ and hence $H(q_\phi, p_\theta) = \mathrm{KL}(q_\phi \,\|\, p_\theta)$.

Finally, we note that limiting ourselves to deterministic encoders does not lower the best achievable likelihood, assuming a sufficiently flexible class of prior and likelihood. Indeed, given any fixed deterministic encoder $q_\phi(z \mid x)$, we can still achieve the maximum likelihood by setting $p_\theta(z) = \sum_x p^*(x)\, q_\phi(z \mid x)$ and $p_\theta(x \mid z) = q_\phi(z \mid x)\, p^*(x) / p_\theta(z)$, where $p^*(x)$ is the true data distribution.
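To make the connection explicit: for a deterministic encoder $e_\phi$, i.e. $q_\phi(z \mid x) = \delta_{z, e_\phi(x)}$, the expectations in Eq. 4 collapse, and the per-example objective reads (this is a rewriting of Eq. 4 under that assumption, not a new result)

$$\mathcal{L}_{\mathrm{RD}}(x) \;=\; \underbrace{-\log p_\theta\!\left(x \mid z = e_\phi(x)\right)}_{\text{distortion}} \;+\; \beta\, \underbrace{\left(-\log p_\theta\!\left(z = e_\phi(x)\right)\right)}_{\text{rate}} .$$

With $\beta = 1$ this is exactly the negative of the bound in Eq. 2 evaluated with the point-mass posterior.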

4 Methodology

Figure 2: Training Rate-Distortion autoencoders. The rate loss is a measure for the expected coding cost, under the autoregressive code model, while the distortion loss expresses the reconstruction error.

In the previous section, we have outlined the general compression framework using rate-distortion autoencoders. Here we will describe the specific models we use for the encoder, code model, and decoder, as well as the data format, preprocessing, and loss functions.

4.1 Preprocessing

Our model processes chunks of video of shape $T \times C \times H \times W$, where $T$ denotes the number of frames, $C$ denotes the number of channels (typically $C = 3$ for RGB), and $H$ and $W$ are the height and width of a crop, which we fix to the same size in all of our experiments. The RGB values are not scaled, i.e., they always lie in $[0, 255]$.

4.2 Autoencoder

The encoder takes as input a chunk of video $x$ and produces a discrete latent code $z$. If the input has shape $T \times C \times H \times W$, the latent code will have shape $T \times C' \times H/s \times W/s$, where $C'$ is the number of channels in the latent space and $s$ is the total spatial stride of the encoder (so the latent space has spatial size $H/s \times W/s$). We do not use stride in the time dimension.

The encoder and decoder are based on the architecture presented by [28], which in turn is based on the architecture presented in [35]. The encoder and decoder are both fully convolutional models with residual connections [17], batchnorm [21], and ReLU nonlinearities. In the first two convolution layers of the encoder, this model uses filter size and stride . The remaining layers are residual blocks with two convolution layers per block, filter size , channels, batchnorm, and ReLU nonlinearities. The final layer is a convolution with filter size , stride , and output channels. The decoder is the reverse of this, and uses transposed convolutions instead of convolutions. More details on the architecture can be found in the supplementary material.

We will evaluate two versions of this autoencoder: one with 2D convolutions applied to each frame separately, and one with 3D spatio-temporal convolutions. To apply the 2D model to a video sequence, we simply fold the time axis into the batch axis before running the 2D AE.
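For concreteness, folding the time axis into the batch axis and back can be written as a pair of reshapes (a minimal PyTorch-style sketch; the tensor layout and crop size below are illustrative, not prescribed by the paper):

import torch

def fold_time(x):
    # x: (B, T, C, H, W) video chunk -> (B*T, C, H, W) batch of independent frames
    B, T, C, H, W = x.shape
    return x.reshape(B * T, C, H, W), (B, T)

def unfold_time(y, shape):
    # inverse reshape after the 2D autoencoder has processed the frames
    B, T = shape
    return y.reshape(B, T, *y.shape[1:])

frames, bt = fold_time(torch.zeros(2, 8, 3, 160, 160))   # 160x160 crop is illustrative
video = unfold_time(frames, bt)                          # back to (2, 8, 3, 160, 160)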

The encoder network first outputs continuous latent variables $z$, which are then quantized. The quantizer discretizes each coordinate of $z$ using a learned codebook consisting of $L$ centers $\{c_1, \ldots, c_L\}$, with $c_l \in \mathbb{R}$. In the forward pass, we compute $\bar{z}_i = \arg\min_{c_l} |z_i - c_l|$ (where $i$ is a four-dimensional multi-index). As a probability distribution, this corresponds to a one-hot $q(\bar{z}_i \mid x)$ that puts all mass on the computed value $\bar{z}_i$. Because the argmin is not differentiable, we use the gradient of a softmax in the backward pass, as in [7, 28]. We found this approach to be stable and effective during training.
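The forward/backward behaviour of the quantizer can be sketched as follows; this is our reading of the soft-to-hard scheme of [7, 28] (hard assignment in the forward pass, softmax gradient in the backward pass), not the authors' exact implementation, and the codebook size and sigma are illustrative:

import torch
import torch.nn.functional as F

def quantize(z, centers, sigma=1.0):
    # z: continuous latents of any shape; centers: (L,) learned scalar codebook
    d = (z.unsqueeze(-1) - centers).abs()              # distance to every center
    soft = torch.softmax(-sigma * d, dim=-1)           # differentiable soft assignment
    hard = F.one_hot(d.argmin(dim=-1), centers.numel()).float()
    onehot = soft + (hard - soft).detach()             # forward: hard, backward: softmax grad
    z_bar = (onehot * centers).sum(dim=-1)             # embed back with the same codebook
    return z_bar, onehot

centers = torch.nn.Parameter(torch.linspace(-1.0, 1.0, 8))   # L = 8 centers (illustrative)
z_bar, onehot = quantize(torch.randn(4, 16, 20, 20), centers)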

On the decoder side, we replace the one-hot $q(\bar{z}_i \mid x)$ by the corresponding codebook value $c_l$, to obtain an approximation $\bar{z}$ of the original continuous representation $z$. The resulting tensor is then processed by the decoder to produce a reconstruction $\hat{x}$. In a standard VAE, one might use $\hat{x}$ as the mean of a Gaussian likelihood $p(x \mid z) = \mathcal{N}(x; \hat{x}, \sigma^2 I)$, which corresponds to an L2 loss $\|x - \hat{x}\|^2$. Instead, we use the MS-SSIM loss (discussed in Sec. 4.4), which corresponds to the unnormalized likelihood of the Boltzmann distribution $p(x \mid z) \propto \exp(-d(x, \hat{x}))$ with $d$ the MS-SSIM loss, where the log-partition function is treated as a constant, because it better reflects human subjective judgments of similarity.

(a) Unconditional
(b) Frame-conditioned
(c) GRU-conditioned
Figure 3: Proposals for temporal conditioning of prior.
(a) AVC/H.264 (0.037 BPP)
(b) HEVC/H.265 (0.036 BPP)
(c) Our model (0.037 BPP)
Figure 4: Compression results for the state-of-the-art traditional codecs, AVC/H.264 and HEVC/H.265, and our proposed model. At a similar bitrate, our model approaches these codecs while generating fewer artifacts.

4.3 Autoregressive Prior

Instead of naively storing / transmitting the latent variables using $\dim(z) \log_2 L$ bits (for a $\dim(z)$-dimensional latent space with $L$ states per variable), we encode the latents using the prior $p(z)$ in combination with adaptive arithmetic coding. For $p(z)$, we use a gated PixelCNN [37] over individual latent frames, optionally conditioned on past latent frames as in video pixel networks [23]. In Figure 3, we illustrate the three priors considered in this paper.

In the simplest case, we model each frame independently, i.e. $p(z) = \prod_t p(z_t)$, where a latent frame is modelled autoregressively as $p(z_t) = \prod_i p(z_t^i \mid z_t^{<i})$ by the PixelCNN. Here $i$ denotes a 3D multi-index over channels and spatial axes, and $z_t^{<i}$ denotes the elements that come before $z_t^i$ in the autoregressive ordering.

A better prior is obtained by including temporal dependencies (Figure 3b). In this model, the prior is factorized as $p(z) = \prod_t p(z_t \mid z_{t-1})$, where $p(z_t \mid z_{t-1}) = \prod_i p(z_t^i \mid z_t^{<i}, z_{t-1})$. Thus, the prediction for pixel $i$ in latent frame $z_t$ is based on previous pixels in the same frame, as well as the whole previous frame $z_{t-1}$. The dependence on $z_t^{<i}$ is mediated by the masked convolutions of the PixelCNN architecture, whereas the dependence on the previous frame is mediated by additional conditioning connections added to each layer, as in the original Conditional PixelCNN [37].

Conditioning on the previous frame may be limiting if long-range temporal dependencies are necessary. Hence, we also consider a model where a recurrent neural network (Gated Recurrent Units, GRU) summarizes all relevant information from past frames. The prior factorizes as $p(z) = \prod_t p(z_t \mid h_{t-1})$ with $p(z_t \mid h_{t-1}) = \prod_i p(z_t^i \mid z_t^{<i}, h_{t-1})$, where $h_{t-1}$ is the hidden state of a GRU that has processed latent frames $z_1, \ldots, z_{t-1}$. As in the frame-conditional prior, in the GRU-conditional prior the dependency on $z_t^{<i}$ is mediated by the causal convolutions of the PixelCNN, and the dependency on $h_{t-1}$ is mediated by conditioning connections in each layer of the PixelCNN.
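To make the factorizations concrete, the sketch below computes the rate of a latent sequence under the frame-conditioned prior; pixelcnn is a hypothetical module that returns per-element log-probabilities over the codebook given the current latent frame and a conditioning frame (an illustration of the factorization, not the authors' implementation):

import math
import torch

def sequence_rate_bits(z_onehot, pixelcnn):
    # z_onehot: (T, L, C, H, W) one-hot codebook assignments for T latent frames
    total_nats = 0.0
    prev = torch.zeros_like(z_onehot[0])               # first frame: conditioned on zeros
    for t in range(z_onehot.shape[0]):
        logp = pixelcnn(z_onehot[t], cond=prev)        # log p(z_t^i | z_t^<i, z_{t-1})
        total_nats = total_nats - (z_onehot[t] * logp).sum()
        prev = z_onehot[t]
    return total_nats / math.log(2.0)                  # convert nats to bits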

4.4 Loss functions, encoding, and decoding

To measure distortion, we use the Multi-Scale Structural Similarity (MS-SSIM) loss [39]. This loss gives a better indication of the subjective similarity of $x$ and $\hat{x}$ than a simple L2 loss, and has been popular in (learned) image compression. To measure rate, we simply use the negative log-likelihood $-\log p(z)$, where $z$ is produced by the encoder deterministically. The losses are visualized in Figure 2.
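A sketch of the combined training objective, assuming an ms_ssim helper with a data_range argument (e.g., the third-party pytorch_msssim package) and a rate term computed as the cross-entropy between the one-hot encoder output and the prior's per-element log-probabilities; the reduction over the batch is illustrative:

def rate_distortion_loss(x, x_hat, z_onehot, prior_logp, beta, ms_ssim):
    # distortion: 1 - MS-SSIM between input and reconstruction (lower is better);
    # data_range matches the unscaled [0, 255] inputs described in Section 4.1
    distortion = 1.0 - ms_ssim(x, x_hat, data_range=255.0)
    # rate: cross-entropy between the deterministic one-hot q(z|x) and the prior p(z)
    rate = -(z_onehot * prior_logp).sum() / x.shape[0]
    return distortion + beta * rate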

To encode a chunk of video $x$, we map it through the encoder to obtain latents $z$. Then, we go through the latent variables one by one, and make a prediction for the next latent variable $z_i$ using the autoregressive prior $p(z_i \mid z_{<i})$. We then use an arithmetic coding algorithm to obtain a bitstream $b_i$ for the $i$-th variable. The expected length of $b_i$ is $-\log p(z_i \mid z_{<i})$ bits.

To decode, we take the bitstream $b_i$ and combine it with the prediction $p(z_i \mid z_{<i})$ to obtain $z_i$. Once we have decoded all latents, we pass them through the decoder of the AE to obtain the reconstruction $\hat{x}$.
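The encode/decode loop can be sketched as follows; ArithmeticEncoder/ArithmeticDecoder stand for a hypothetical adaptive arithmetic coder with encode(symbol, probs), decode(probs), and flush() methods, shown only to illustrate how the prior's predictions drive the bitstream on both sides:

def encode_latents(latents, prior, coder):
    # latents: integer codebook indices in autoregressive order
    history = []
    for z_i in latents:
        probs = prior(history)            # p(z_i | z_<i) over the L codebook entries
        coder.encode(z_i, probs)          # ~ -log2 p(z_i | z_<i) bits for this symbol
        history.append(z_i)
    return coder.flush()

def decode_latents(n, prior, decoder):
    history = []
    for _ in range(n):
        probs = prior(history)            # receiver makes the identical prediction
        history.append(decoder.decode(probs))
    return history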

5 Experiments

5.1 Dataset

Kinetics [9] We use videos with a width and height greater than , which results in videos as our training set. We only use the first frames for training. The resulting dataset has about frames, which is sufficient for training our model, though larger models and datasets will likely result in better rate/distortion (at the cost of increased computational cost during training and testing).

Ultra Video Group [1] UVG contains full HD resolution (1920 × 1080) video sequences. We use this dataset to compare with the state-of-the-art.

Standard Definition Videos (SDV) contains standard-definition video sequences. We use this dataset for ablation studies.

Human Activity contains real-world videos of people in various everyday scenes, and is mostly used for human pose estimation and tracking in video. Following the standard partitions of the data, we use the corresponding train and test splits for the semantic compression experiments.

Dynamics is an internal dataset containing ego-view video from a car driving on different highways at different times of day. The full dataset consists of clips taken at different dates, times, and locations. We use four clips (120k frames) as the train set, and the fifth clip (25k frames) as the test sequence.

Berkeley MHAD [29] contains videos of human actions, recorded by four multi-view cameras. We use this dataset for multimodal compression experiments. We use all four video streams from the first quad-camera, each of which records the same scene from a slightly shifted vantage point. The MHAD dataset contains a set of actions, each performed by multiple participants with several repetitions per participant. We use all but the last repetition for training, and the last repetition for testing.

Kinetics, Dynamics, and Human Activity are only available in compressed form, and hence contain compression artifacts. In order to remove these artifacts, we downscale videos from these datasets so that the smallest side has a fixed length, before taking crops. For uncompressed datasets (UVG, SDV, and MHAD), we do not perform downscaling.

5.2 Training

We train all of our models with batchsize , using the Adam optimizer [24] with learning rate (decaying with every epochs) for a total of epochs. Only for the Kinetics dataset, which is much larger, we use epochs and learning rate decay every epochs.

We use MS-SSIM (multi-scale structural similarity) as the distortion loss, and the cross-entropy as the rate loss. In order to obtain rate-distortion curves, we train separate models for a range of $\beta$ values (unless stated otherwise), and report their rate/distortion scores.

Figure 5: Ablation experiments. Both the autoencoder and the prior can exploit temporal dependencies, in pixel and latent space respectively, to improve video compression.

5.3 Ablation studies

We evaluate several AE and prior design choices as discussed in Section 4. Specifically, we compare the use of 2D and 3D convolutions in the autoencoder, Frame AE and Video AE respectively, as well as three kinds of priors: a 2D frame-based ARM that does not exploit temporal dependencies (Frame ARM), an ARM conditioned on the previous frame (Video ARM-last frame), and one conditioned on the output of a Conv-GRU (Video ARM-Conv-GRU). We train each model on Kinetics and evaluate on SDV.

The results are presented in Figure 5. The results show that conditioning the ARM on the previous frame yields a substantial boost over frame-based encoding, particularly when using a frame AE. Introducing a Conv-GRU only marginally improves results compared to conditioning on the last frame only.

We also note that using the 3D autoencoder is substantially better than using a 2D autoencoder, even when a video prior is not being used. This suggests that the 3D AE is able to produce latents that are temporally decorrelated to a large extent, so that they can be modelled fairly effectively by a frame-based ARM. The difference between 2D and 3D AEs is substantially bigger than the difference between 2D and 3D priors, so in applications where a few frames of latency is not an issue, the 3D AE is to be preferred, and can reduce the burden on the prior.

For the rest of the experiments, we will use the best performing model: the Video AE + Video ARM (last frame).

5.4 Comparison to state of the art

Figure 6: Comparison to the state-of-the-art traditional and learned codecs. Our proposal outperforms the learned counterparts and approaches AVC/H.264 and HEVC/H.265 evaluated in their default setting.

We benchmark our method against the state-of-the-art traditional and learned compression methods on the UVG standard test sequences. We compare against the classical codecs AVC/H.264 and HEVC/H.265, as well as the recent learned compression methods presented in [27] and [40]. For the classical codecs, we use the default FFmpeg settings, without imposing any restriction, and only vary the CRF setting to obtain rate/distortion curves. For the other learned compression methods, we use the results as reported in the respective papers. For our method, we train models at six different $\beta$ values to trace out the rate/distortion curve.

Figure 6 shows that our method consistently outperforms the other learned compression methods, and approaches the performance of the classical codecs. We note that in some previous works, learned compression was shown to outperform classical codecs when the latter are evaluated under restricted settings that limit inter-frame compression to only a few frames, i.e., by setting the GOP flag to a small value. The results under this restricted setting are reported in the supplementary material.

(a) Semantic Compression
(b) Adaptive Compression
(c) Multimodal Compression
Figure 7: Three extensions of our model that demonstrate the benefits of learned over classical approaches to compression.

5.5 Semantic Compression

The perceived quality of a compressed video depends more on how well salient objects are reconstructed, and less on how well non-salient objects are reconstructed. For instance, in video conferencing, it is more important to preserve details of faces than background regions. It follows that better subjective quality can be achieved by allocating more bits to salient / foreground objects than to non-salient / background objects.

Developing such a task-tuned video codec requires a semantic understanding of videos. This is difficult to do with classical codecs as it would require distinguishing foreground and background objects. For learned compression methods, the asymmetry is easily incorporated by using different weights for the rate/distortion losses for foreground (FG) and background (BG) objects, assuming that ground-truth FG/BG annotations are available during training.

In this experiment, we study semantic compression of the person category. The ground-truth person regions are extracted using a Mask R-CNN [16] trained on COCO images. We use bounding boxes around the objects, but the approach is applicable to segmentation masks without any modification. The detected person regions are converted to a binary mask and used for training.

The MS-SSIM loss is a sum over scales of the SSIM loss. The SSIM loss computes an intermediate quantity called the similarity map, which is usually aggregated over the whole image. Instead, we aggregate these maps separately for foreground and background, where the FG and BG masks at a given scale are obtained from the high-resolution mask by average pooling. We then sum the FG and BG components over each scale, and multiply the resulting FG and BG losses by separate weights $w_{FG}$ and $w_{BG}$, respectively, with the foreground weighted more heavily in our experiments.

The rate loss is a sum of per-latent cross-entropy terms, so we can multiply each term with a foreground/background weight. Each latent covers a region of input pixels determined by the spatial stride of the encoder, so we need to aggregate the pixel-wise labels to obtain a label for each latent. We do this by average pooling the FG/BG mask over these regions to obtain a weight per latent position, which we multiply with the rate loss at that position.
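A sketch of the semantic loss weighting, assuming a binary person mask of shape (B, 1, H, W) at input resolution; the mask is average-pooled to the resolution of each SSIM similarity map and to the latent grid, and the weights w_fg, w_bg and the encoder stride are placeholders:

import torch.nn.functional as F

def weighted_similarity(sim_map, mask, w_fg=2.0, w_bg=1.0):
    # sim_map: (B, 1, h, w) SSIM similarity map at one scale; mask: (B, 1, H, W) FG mask
    m = F.adaptive_avg_pool2d(mask, sim_map.shape[-2:])
    fg = (sim_map * m).sum() / m.sum().clamp(min=1e-6)
    bg = (sim_map * (1.0 - m)).sum() / (1.0 - m).sum().clamp(min=1e-6)
    return w_fg * fg + w_bg * bg

def weighted_rate(rate_map, mask, stride, w_fg=2.0, w_bg=1.0):
    # rate_map: (B, 1, H/stride, W/stride) per-latent cross-entropy; stride: encoder stride
    m = F.avg_pool2d(mask, kernel_size=stride)
    return ((w_fg * m + w_bg * (1.0 - m)) * rate_map).sum()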

The results are shown in Figure 7a. We observe that in the non-semantic model, BG is reconstructed more accurately than FG at a fixed average bitrate. The same behavior is observed for classical codecs, as reported in the supplementary material. The worse reconstruction of FG is not surprising, because person regions usually contain more detail than the more homogeneous background regions. However, when using semantic loss weighting, the relation is reversed: semantic loss weighting leads to an improvement in the MS-SSIM score for FG at the expense of the MS-SSIM score for BG. This demonstrates the effectiveness of learned video compression in incorporating semantic understanding of video content into compression, and we believe it opens up novel video compression applications which have not been feasible with classical codecs.

Figure 8: Multimodal compression results for HEVC/H.265 (top) and our proposal (bottom). By exploiting the redundancies between different views of a quad camera (columns), our model achieves a significantly better reconstruction while using fewer bits.

5.6 Adaptive Compression

Classical codecs are optimized for good performance across a wide range of videos. However, in some applications the codec is used on a distribution of lower-entropy videos, i.e., scenes with predictable types of activity. For example, a security camera placed at a fixed location and viewpoint will produce very predictable video. In this experiment we show that learned compression models can exploit such lower-entropy videos simply by being finetuned on them, which is difficult to do with classical codecs.

In this experiment, we show that by finetuning a learned compression model on the Dynamics dataset, substantial improvements in compression can be achieved. Figure 7b compares the classical codecs with our generic model as well as the adapted model. The generic model is trained on a generic training set from Kinetics. The adapted model takes the pretrained generic model and finetunes it on videos of a similar domain. The results show that our generic method outperforms the classical codecs on this dataset, and the adapted method performs even better.
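Operationally, adaptation is plain finetuning: start from the generic checkpoint and continue training on target-domain clips with the same rate-distortion objective, typically at a reduced learning rate (the sketch below is generic; the model API and hyperparameters are ours, not from the paper):

import torch

def adapt(model, domain_loader, epochs=10, lr=1e-5):
    # finetune a pretrained rate-distortion autoencoder on a narrow domain
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x in domain_loader:                    # e.g. highway driving clips
            loss = model.rate_distortion_loss(x)   # assumed helper: returns D + beta * R
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model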

This experiment indicates the great practical potential of learned compression models. Finetuning allows a compression model to maintain high reconstruction quality at a substantially lower bitrate, while starting from a generic pretrained model rather than being built from scratch.

5.7 Multimodal Compression

Classical codecs are designed for typical videos captured by monocular color cameras. When other modalities are involved, such as depth, stereo, audio, or spectral imaging sensors, classical codecs are often not applicable or not able to exploit the dependencies that exist between the various modalities. Developing a codec for every new modality is possible, but very expensive considering the amount of engineering work involved in designing classical codecs. With our learned compression method, however, adding new modalities is as easy as retraining the model on a new dataset, with minimal modifications required.

In this experiment, we adapt our learned compression method to compress videos of human actions recorded by quad (four-view) cameras from the MHAD dataset. We compare four methods: AVC/H.264 and HEVC/H.265, as well as a learned unimodal model and a learned multimodal model. The unimodal model is trained on the individual video streams, and the multimodal model is trained on the channel-wise concatenation of the four streams. The network architecture for the unimodal and multimodal models is the same as the one described in Section 4, the only difference being that the multimodal model has more input channels (12 vs. 3).
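The only change needed on the input side is channel-wise concatenation of the synchronized views, e.g. (shapes are illustrative):

import torch

views = [torch.zeros(1, 8, 3, 160, 160) for _ in range(4)]   # four synchronized RGB streams
multimodal_input = torch.cat(views, dim=2)                    # (1, 8, 12, 160, 160): 12 channels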

Interestingly, our approach retains more details than the classical codec (e.g., see the face of the person in Figure 8) while obtaining a substantially smaller BPP. The quantitative results, shown in Figure 7c, show that the multimodal compression model substantially outperforms all three baselines by exploiting the large amount of redundancy that exists between the multiple data modalities. This shows that, without further tuning of the architecture or training procedure, our method can be applied to compress spatio-temporal signals from non-standard imaging sensors.

6 Conclusion

We have presented a video compression method based on variational autoencoders with a deterministic encoder. Our theoretical analysis shows that in lossy compression, where bits-back coding cannot be used, deterministic encoders are preferred. Concretely, our model consists of an autoencoder and an autoregressive prior. We found that 3D spatio-temporal autoencoders are very effective, and greatly reduce the need for temporal conditioning in the prior. Our best model outperforms recent learned video compression methods without incorporating video-specific techniques like flow estimation or interpolation, and performs on par with the latest non-learned codec H.265 / HEVC.

In addition, we have explicitly demonstrated the potential advantages of learned over non-learned compression, beyond mere compression performance. In semantic compression, the rate and distortion losses are weighted by the semantics of the video content, giving priority to important regions, resulting in better visual quality at lower bitrates in those regions. In adaptive compression, a pretrained video compressor is finetuned on a specific dataset. With minimal engineering effort, this yields a highly effective method for compressing domain specific videos. Finally, in our multi-modal compression experiments, we have demonstrated a dramatic improvement in compression performance, obtained simply by training the same model on a multi-modal dataset consisting of quad-cam footage.

References

7 Supplementary Material

Appendix A Images used in figures

Video used in Figures 1 and 2 by Ambrose Productions, and Figure 4 by TravelTip. Both [CC BY-SA 3.0 https://creativecommons.org/licenses/by/3.0/legalcode], via YouTube.

Appendix B Architectural details

In this section we detail the architecture and hyperparameters of the autoencoder and code-model that we use in our experiments.

b.1 Autoencoder

As described in Section 4.2, we design our autoencoder by extending the (2D) model of [28] to use 3D convolutions. The exact model is depicted in Figure 9.

b.1.1 Quantization

As explained in Section 4.2, the encoder network outputs continuous latent variables $z$, which are then quantized using a learned codebook $\{c_1, \ldots, c_L\}$.

Quantization involves computing $q(\bar{z} \mid x)$, which defines the probability of each codebook center at each position in $\bar{z}$ (note that this is one-hot over the codebook axis). We assume independence between all elements of $\bar{z}$ given $x$, and we use the codebook distance to compute $q$:

$$q(\bar{z}_i = c_l \mid x) = \frac{\exp\!\left(-\sigma\, \|z_i - c_l\|\right)}{\sum_{l'=1}^{L} \exp\!\left(-\sigma\, \|z_i - c_{l'}\|\right)}, \qquad (5)$$

where, for notational simplicity, we use a single index $i$ to range over all dimensions of $z$.

Note that as $\sigma \rightarrow \infty$, $q(\bar{z}_i \mid x)$ will put more and more probability mass on a single (the closest) center and will eventually be deterministic. This is desirable, as we want a deterministic encoder. In practice, we use a large $\sigma$ which we observe to always give us one-hot vectors for 32-bit precision floats. In the backward pass, we use the gradient of a softmax with a lower value of $\sigma$ for numerical stability.

On the decoder side, the one-hot probabilities are embedded using the same codebook to obtain the scalar tensor $\bar{z}$ approximating $z$, which is then decoded to predict $\hat{x}$.

Figure 9: Architecture of our autoencoder. Tconv denotes transposed convolution. For (transposed) convolutional layers, $c$ denotes the number of output channels, $k$ the kernel size, and $s$ the stride. These are either expressed as triplets or as a single number that is used for each dimension. "Same" padding is used for all (transposed) convolution layers. $q$ refers to the tensor of one-hot probabilities $q(\bar{z} \mid x)$.

b.2 Autoregressive Code-Model

The code model takes as input the one-hot probability tensor $q(\bar{z} \mid x)$ output by the encoder, and predicts the probability of each entry in $\bar{z}$ in an autoregressive manner:

$$p(\bar{z}) = \prod_i p(\bar{z}_i \mid \bar{z}_{<i}). \qquad (6)$$

We use a 4-layer PixelCNN [37] architecture with a kernel size of 5x5 and 8 hidden channels ($h = 8$). We embed the one-hot probabilities of $\bar{z}$ using a learnable scalar embedding. We experimented with using the encoder codebook as the prior embedding, but we found that it did not make any difference in performance in practice.

b.2.1 Conditioning

For the frame-conditioned and GRU-conditioned model (see Figure 3), we inject the conditioning variable into each of the autoregressive blocks, right before applying the gated nonlinearity.

This conditioning input is a featuremap, and its number of channels should match the number of channels in the ARMBlock. The gated nonlinearity requires two times the number of hidden channels $h$. As there is a nonlinearity for both the horizontal and vertical stack, this would require $4h$ channels. Since the filters in PixelCNN are fully connected along the channel dimension, the required number of output channels for the conditioning featuremap is $4h$.

We use a (conventional) convolutional layer to pre-process the conditioning input and to upsample the number of channels to match the size of each autoregressive block in the PixelCNN. The prior architecture is depicted in Figure 10.
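As a small worked example of the channel bookkeeping above, with $h = 8$ hidden channels the conditioning convolution outputs $4h = 32$ channels, split into $2h$ for the gated nonlinearity of each stack (layer names and kernel size are ours):

import torch.nn as nn

h = 8                  # PixelCNN hidden channels (Section B.2)
cond_channels = 8      # channels of the conditioning featuremap (illustrative)
cond_proj = nn.Conv2d(cond_channels, 4 * h, kernel_size=1)
# 4h = 32 output channels: 2h for the horizontal stack, 2h for the vertical stack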

Figure 10: Architecture of our prior / code-model. ARMBlock refers to a block in PixelCNN [37] with a horizontal stack, a vertical stack, and a conditioning input (Figure 2 of [37]). The conditioning input is the one used in our frame-conditioned and GRU-conditioned code-models.

b.2.2 Encoder and Code-Model gradients

During training, the code model is updated to minimize the rate loss, i.e., the elementwise cross-entropy between $q(\bar{z} \mid x)$ and $p(\bar{z})$, summed over each class of the codebook and each element of $\bar{z}$:

$$\mathcal{L}_{\mathrm{rate}} = \mathbb{E}_{q(\bar{z} \mid x)}\left[-\log p(\bar{z})\right] \qquad (7)$$
$$= \sum_i \sum_{l=1}^{L} -\, q(\bar{z}_i = c_l \mid x)\, \log p(\bar{z}_i = c_l \mid \bar{z}_{<i}). \qquad (8)$$

Note that, unlike [28], we do not detach any gradients. As a result, the derivative of the rate loss with respect to the encoder parameters $\phi$ is affected by the code model in the following way:

$$\frac{\partial \mathcal{L}_{\mathrm{rate}}}{\partial \phi} = \sum_i \sum_{l=1}^{L} -\, \frac{\partial\, q(\bar{z}_i = c_l \mid x)}{\partial \phi}\, \log p(\bar{z}_i = c_l \mid \bar{z}_{<i}). \qquad (9)$$

Thus, the encoder is trained not only to minimize the distortion, but also to minimize the rate (i.e., to produce latents that are easily predictable by the code model). This can also be seen by reversing the paths of the forward arrows in Figure 2.
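In framework terms, the difference is simply whether the encoder's one-hot output is detached before entering the rate loss; in our setting it is not, so the rate gradient also reaches the encoder (illustrative sketch):

def rate_loss(q_onehot, prior_logp, detach_encoder=False):
    # q_onehot: one-hot q(z|x) from the encoder; prior_logp: log p(z) from the code model
    q = q_onehot.detach() if detach_encoder else q_onehot
    return -(q * prior_logp).sum()
# with detach_encoder=False (our setting), rate_loss(...).backward() updates both the
# code model and the encoder; detaching would train only the code model on this term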

Appendix C Evaluation procedure

c.1 Traditional codec baselines

We use FFMPEG (https://ffmpeg.org/) version 2.8.15-0 to obtain the performance of the H.264/AVC and H.265/HEVC baselines. We use the default settings unless reported otherwise.

c.2 Data preprocessing

We build our evaluation datasets by extracting PNG frames from the raw source videos using FFMPEG. Because some videos are in the YUV color space, the conversion to RGB could in theory introduce some distortion, though this is imperceptible in practice. We use the same data-loading pipeline to evaluate our neural networks and the FFMPEG baselines, so as to avoid differences in the ground-truth data.

Figure 11: Rate/distortion results on UVG for classical and learned compression methods. H.264 and H.265 results were obtained with restricted FFMPEG settings (Group of Pictures set to 12).
Figure 12: Semantic compression. Background is easier for all models (HEVC, AVC, and the non-semantic model), except for our learned semantic compression.

c.3 Rate-Distortion

For the FFMPEG baselines, we divide the total file size by the total number of pixels to obtain bpp. For our neural network, we use the rate loss (converted into bpp) as a proxy for rate. By definition, the rate loss gives the expected bitrate under adaptive arithmetic coding, and expected bpp was shown to be highly correlated with actual bpp [27].
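For reference, converting a rate loss measured in nats per chunk to bits per pixel divides by ln 2 and by the number of pixels in the chunk (a small helper under that assumption):

import math

def bits_per_pixel(rate_nats, T, H, W):
    # rate_nats: rate loss (cross-entropy in nats) for one chunk of T frames of size H x W
    return rate_nats / math.log(2.0) / (T * H * W)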

We calculate MS-SSIM [39] using our own implementation, which we benchmarked against the implementation in TensorFlow (https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/image/ssim_multiscale). We use the same power factors as originally proposed in [39].

Appendix D Additional results

d.1 Comparison to other methods

When comparing neural networks to traditional codecs, it is common practice to evaluate those codecs under restrictive settings. For example, the group of pictures (GoP) is often set to a value similar to the number of frames used to evaluate the neural networks [40, 27]. Furthermore, the encoding preset is sometimes set to fast, which results in worse compression performance [40, 27]. In our evaluation (presented in Figure 6) we instead use the FFMPEG default values of GoP=25 and preset=medium.

In Figure 11 we compare our end-to-end method to other learned compression methods, using baseline codecs with the restrictive setting of GoP=12 as in [40, 27]. The figure shows that our model has better rate-distortion performance than H.265/HEVC under these restrictive settings for bitrates higher than 1.2 bpp.

d.2 Semantic Compression

The results for semantic compression are reported in Figure 7a. To avoid clutter, background performance for H.264/AVC and H.265/HEVC is omitted there. Figure 12 shows the full results, including background performance for the traditional codecs.

d.3 Adaptive Compression

Quantitative rate-distortion performance for adaptive compression is reported in Figure 7b. In Figure 13 we show a qualitative sample. Notice the clear block artifacts around the road markings for H.265/HEVC. For our generic model, we do not observe such artifacts, though edges around the line markings are somewhat blurry. In our adapted model, the road markings are reconstructed significantly better.

We note that the expanding perspective motion observed in road-driving footage is a great example of a predictable pattern in the data that a neural network could learn to exploit, while it would be difficult to manually engineer algorithms that use these patterns.

(a) HEVC/H.265 (0.025 BPP)
(b) Generic model (0.030 BPP)
(c) Adapted model (0.025 BPP)
Figure 13: Qualitative results for adaptive compression.