Video Frame Interpolation by Plug-and-Play Deep Locally Linear Embedding
Abstract
We propose a generative framework that takes on the video frame interpolation problem. Our framework, which we call Deep Locally Linear Embedding (DeepLLE), is powered by a deep convolutional neural network (CNN) yet can be used instantly like conventional models. DeepLLE fits an autoencoding CNN to a set of several consecutive frames and embeds a linearity constraint on the latent codes so that new frames can be generated by interpolating new latent codes. Different from the current deep learning paradigm, which requires training on large datasets, DeepLLE works in a plug-and-play and unsupervised manner and is able to generate an arbitrary number of frames. Thorough experiments demonstrate that, without bells and whistles, our method is highly competitive with current state-of-the-art models.
Keywords:
Frame Synthesis, Video Processing, Manifold Learning, Convolutional Neural Network, Unsupervised Learning.
1 Introduction
Video frame interpolation is among the most long-standing and challenging problems in computer vision. Traditionally, optical-flow-based and phase-based methods have been extensively studied for this problem. However, optical flow estimation is itself a classic and difficult task, while phase-based methods have proven suitable only for videos with small motion. Nonetheless, a merit of these learning-free methods is that they are off-the-shelf models that can be used instantly, and hence they always operate at full capacity.
Recent years have seen a huge success of convolutional neural networks (CNNs), especially in the large-scale ImageNet classification challenge [1]. Since then, researchers have brought them into use in many different computer vision tasks, including frame synthesis. The performance of deep-learning-based methods is superior to that of conventional algorithms thanks to their phenomenal generalization abilities, but unfortunately, training on a large-scale dataset is usually required beforehand, which is generally time-consuming and infeasible in many situations. Also, to make them work at their full potential, adaptation or fine-tuning might be required, since the regularizations imposed during training make them biased estimators.
In this paper, we propose a novel video frame interpolation framework, dubbed Deep Locally Linear Embedding (DeepLLE), which combines the advantages of conventional models and deep networks to address the frame synthesis problem. Different from previous deep-learning-based studies in frame synthesis, our method works in a plug-and-play and unsupervised fashion like conventional methods. DeepLLE is built upon the hypothesis that, in some latent space, consecutive frames lie very close to each other on a manifold, and that we can explicitly embed a linearity constraint on the latent codes so that new frames can be produced by interpolating new latent codes. To do so, we resort to the computing power of an autoencoding CNN.
Figure 1 illustrates the difference between our scheme and existing models. While current trends require a separate training stage for CNNs, our method performs both optimization and inference on the fly. Compared with existing methods, ours has a number of advantages. First, DeepLLE is an instant method like conventional ones while delivering deep-learning-level performance. Second, DeepLLE can synthesize new frames between any number of successive frames greater than one. Lastly, DeepLLE can generate an arbitrary number of frames in a single run. To the best of our knowledge, no existing method combines all these properties at the same time. Our network is also optimized with a perceptual cost function, so the synthesized images have much better perceptual quality than those produced by existing methods.
Contributions. Our work makes the following contributions. First, we propose a new frame synthesis framework that bridges the gap between conventional methods and deep-learning-based models. The method is plug-and-play, which allows it to always work at its full potential, while still delivering deep-learning-standard performance. Next, different from existing methods, ours is entirely based on manipulating the underlying latent structures of videos, an approach that has not been successfully applied to real-world videos with complex motions. Finally, we discover that deep networks capture a great deal of video statistics, which may benefit not only the frame synthesis field but also the deep unsupervised learning area.
2 Related Work
Video frame interpolation is a classic problem in video processing. Traditionally, it is addressed by estimating motion across consecutive frames [2, 3]. Mahajan et al. defined a "path", which is optical-flow-like, and then used 3D Poisson reconstruction for in-between frame generation [4]. As an alternative to optical-flow-based methods, Meyer et al. utilized phase information to interpolate frames, but this method may fail to retain high-frequency details in regions containing large motions [5].
Since the huge success of deep CNNs in image recognition/classification [6, 7, 8], many researchers have proposed to estimate optical flow using CNNs [9, 10, 11]. These methods are not optimized to directly synthesize new images, so the generated images may contain many artifacts due to inaccurate flows and warping. The recent state-of-the-art Deep Voxel Flow (DVF) [12] estimates, in an unsupervised manner, a voxel flow field — the optical flow from the interpolated image to the previous and next frames — and trilinearly interpolates pixels in the new frame. In a different direction, the method in [13] is based on a technique called pixel hallucination, which directly generates new pixels from scratch; perceptually, however, the results are not good-looking [12]. Lastly, a recent technique based on adaptive convolution was proposed in [14, 15]. These methods learn to produce a set of pixel-dependent filters and convolve them with the input frames to interpolate each pixel. The results are notable, but learning a filter set for every pixel is computationally unfriendly. Above all, all these methods require a separate training step on big data and might need fine-tuning or adaptation to take full advantage of the networks.
The technical inspiration for our work is Roweis and Saul [16]. Their method finds a linear relationship between neighboring samples in their original space and then finds another space, preferably of lower dimension, where this relationship is preserved. We, however, do not impose any linearity on the images but only on their latent codes, and this linear relationship is explicitly specified instead of being learned from data as in [16]. Another interpolation study close to ours, by Bregler et al., also relies on manifold learning [17]. However, they estimate a nonlinear manifold for the whole series of frames, and the images are simply talking lips. Here, our method is designed to work with natural images containing different kinds of motion. In addition, our method does not estimate the manifold of the whole video but only of several consecutive frames. In a remotely related study [18], the observation that latent variables can encode motions backs up our idea, but their latent codes are randomly sampled from a Gaussian distribution, their work concentrates on predicting the trajectory of motions given a still image, and hence their generated images are nowhere near realistic.
3 Locally Linear Embedding
To make the paper self-contained, we briefly summarize the idea of locally linear embedding (LLE) [16], and then highlight the key concepts that inspire our approach. Suppose we have a dataset X = {x_i}, i = 1, ..., N, of N samples of dimension D. We want to find a compact representation of X, i.e., a lower-dimensional space that preserves the local relationships of each point. We assume that there are sufficient data so that all the twists and variations of the manifold are accounted for. Under this condition, we hypothesize that each point is a linear combination of those in its neighborhood. To find such a linear relationship, for each x_i, we find a set \mathcal{N}_i containing the indices of the neighbors of x_i according to some distance threshold, and then solve the following optimization for the weights W = (w_{ij})
\min_{W} \sum_{i} \Big\| x_i - \sum_{j \in \mathcal{N}_i} w_{ij}\, x_j \Big\|^2 \quad \text{s.t.} \quad \sum_{j} w_{ij} = 1, \qquad (1)
where w_{ij} = 0 if j \notin \mathcal{N}_i. The goal of LLE is to find a new low-dimensional space in which this linear relationship is preserved. Suppose we have a function f which maps each x_i in the original space to a point y_i = f(x_i) in some latent space of dimension d \le D; then we can find the embedding Y = {y_i} by solving the following problem
\min_{Y} \sum_{i} \Big\| y_i - \sum_{j} w_{ij}\, y_j \Big\|^2, \qquad (2)
which is similar to (1), but the minimization is over Y instead of W.
Applying LLE directly to frame interpolation, however, is infeasible for two reasons. First, like other manifold learning methods, LLE requires abundant data so that the manifold is well-sampled. Unlike the facial expression images in [16] or the talking lips in [17], real-world videos do not share any common objects or structures, and given a target frame, only a few neighboring frames are useful for estimating the manifold. Second, to synthesize new images, we could somehow synthesize new weights and a new latent code, and apply the inverse mapping of f to synthesize a new image. However, finding this inverse mapping may involve an optimization process, and it is not clear how to synthesize a latent code corresponding to a desired image in the first place. Nevertheless, LLE gives us several key ideas that motivate our framework. First, we do not need to work with the whole video but only several successive frames, assuming their underlying manifold in some latent space is linear. Second, if we can encode video frames into this latent space and map their codes back into image space, then synthesizing new frames can simply be done in the latent space. In Section 4, we describe how DeepLLE is built on this idea.
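The local weight-fitting step of LLE described above can be sketched in a few lines. This is a minimal NumPy sketch under our own conventions (the function name, the closed-form regularized solve, and the regularization constant are our choices, not part of the paper):

```python
import numpy as np

def lle_weights(X, i, neighbors, reg=1e-3):
    """Solve for weights w minimizing ||x_i - sum_j w_j x_j||^2
    subject to sum_j w_j = 1, over the given neighbor indices."""
    Z = X[neighbors] - X[i]                # shift neighbors so x_i is the origin
    C = Z @ Z.T                            # local Gram (covariance) matrix
    C = C + reg * np.trace(C) * np.eye(len(neighbors))  # regularize for stability
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()                     # enforce the sum-to-one constraint
```

When a point is an exact convex combination of its neighbors, the recovered weights reconstruct it exactly, which is the property Eq. (1) optimizes for.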
4 Deep Locally Linear Embedding
4.1 Our framework
The overall framework of DeepLLE is shown in Figure 2. Suppose we have N consecutive frames x_1, ..., x_N which are temporally uniform, as in Figure 3. We consider N = 3 for now. We refer to these frames as references, and to x_1 and x_N as nodes. Different from previous studies, we do not assume that motions are symmetric about the middle frame or that all frames are stabilized with respect to the starting frame. Our method first performs a fitting process to construct a linear manifold from the given nodes so that certain points on this manifold can reconstruct the references. After that, the constructed manifold is used to interpolate new frames between the two nodes.
We now describe the fitting process. As can be seen from Figure 2, at first glance, DeepLLE simply fits a CNN to a set of input images. The encoder of DeepLLE first maps each x_i into a code z_i in some d-dimensional latent space, for i = 1, ..., N. Then, the decoder decodes z_1 and z_N into \hat{x}_1 and \hat{x}_N, respectively. The crucial ingredient added to the fitting procedure is an LLE module in the decoding process of the frames in-between x_1 and x_N. Concretely, for some 1 < i < N, instead of calculating and decoding its own latent code, the decoder decodes \hat{z}_i, the output of the LLE module, to reconstruct x_i. This module is similar to LLE; i.e., it linearly combines z_1 and z_N to produce \hat{z}_i. However, in LLE, the weights are determined through an optimization process. Such complicated weights are not suitable in our method, since LLE is designed to reduce data dimension, not to interpolate new latent variables and produce new data. Instead, we manually define the weights in a more intuitive way. An illustration of our strategy is shown in Figure 4. We define the weights based on the relative temporal position of frame i with respect to the two nodes, so that all \hat{z}_i are uniformly distributed between z_1 and z_N, each in the same relative position as frame i is between the two nodes. Specifically, \hat{z}_i is calculated as
\hat{z}_i = \frac{N-i}{N-1}\, z_1 + \frac{i-1}{N-1}\, z_N, \qquad (3)
that is, the new latent codes lie evenly on the line segment between z_1 and z_N. Obviously, this is an oversimplification of the underlying manifold. The real manifold in any nontrivial case should be highly nonlinear, and there is no guarantee whatsoever that the latent codes of the references are coplanar, much less collinear. However, this is a reasonable choice: walking along a complicated manifold is difficult, and this simplification is very intuitive when it comes to choosing a frame position to interpolate.
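The uniform weighting of Eq. (3) can be expressed as a tiny sketch (frames are 1-indexed; `interp_code` is a hypothetical helper name of ours):

```python
import numpy as np

def interp_code(z1, zN, i, N):
    """Latent code of reference frame i (1-indexed, 1 <= i <= N) placed
    evenly on the line segment between the two node codes z1 and zN."""
    alpha = (i - 1) / (N - 1)          # relative temporal position in [0, 1]
    return (1.0 - alpha) * z1 + alpha * zN
```

With N = 3, frame 2 lands exactly halfway between the two node codes, and frames 1 and N reproduce the node codes themselves.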
We can generalize the framework into matrix form. Let X be a 4D tensor containing the N consecutive frames of height H, width W, and C channels, and let X_node contain the first and last frames of the sequence. First, the encoder maps the two nodes to a latent space via
Z = f_{\text{enc}}(X_{\text{node}};\, \theta_{\text{enc}}), \qquad (4)
where Z contains the d-dimensional latent code of each node in its rows, and \theta_{\text{enc}} denotes the trainable parameters of the encoder. Next, we perform linear interpolation on the latent codes based on the relative temporal positions of the reference frames with respect to the nodes. Towards this goal, we define a relative position matrix (RPM) A which interpolates new latent variables from the two nodes' codes. To fit the network to the reference frames, A is defined as
A = \begin{bmatrix} \frac{N-1}{N-1} & \frac{N-2}{N-1} & \cdots & \frac{0}{N-1} \\ \frac{0}{N-1} & \frac{1}{N-1} & \cdots & \frac{N-1}{N-1} \end{bmatrix}^{\top}, \qquad (5)
where A has one row per reference frame, and \top denotes the transpose of a matrix. The coefficients are determined by the relative temporal position of each reference frame with respect to the nodes. Also, each row sums to 1, in the same spirit as LLE. The interpolation can then be expressed as
\hat{Z} = A Z, \qquad (6)
where \hat{Z} contains the interpolated codes of the reference frames in its rows. Finally, the interpolated codes are decoded back into the image space:
\hat{X} = f_{\text{dec}}(\hat{Z};\, \theta_{\text{dec}}), \qquad (7)
where \hat{X} is the reconstructed version of the reference frames, and \theta_{\text{dec}} parametrizes the decoder of DeepLLE. As in a common autoencoder framework, we minimize an expected loss between \hat{X} and X over \theta_{\text{enc}} and \theta_{\text{dec}}. In practice, the fitting is carried out blindly, without any stopping criterion based on a development set, so the optimization is very close to function approximation in its true sense.
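The fitting-time interpolation of Eqs. (5)–(6) can be sketched in NumPy. This is a minimal sketch under our own naming (`fitting_rpm`, the latent dimension, and the random codes are placeholders):

```python
import numpy as np

def fitting_rpm(N):
    """RPM used during fitting: one row per reference frame, one column
    per node; each row sums to one and encodes the frame's relative
    temporal position between the two nodes."""
    alphas = np.arange(N) / (N - 1)                # 0, 1/(N-1), ..., 1
    return np.stack([1.0 - alphas, alphas], axis=1)  # shape (N, 2)

# Hypothetical node codes Z (2 rows, one per node) with latent dimension 8.
Z = np.random.randn(2, 8)
A = fitting_rpm(3)
Z_hat = A @ Z   # interpolated codes of all reference frames, shape (3, 8)
```

The first and last rows of A reproduce the node codes exactly, while the middle rows place the remaining references evenly along the segment between them.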
After the fitting is done, to generate new frames in-between the nodes at any temporal position, we can simply define a suitable RPM. For example, suppose we wish to synthesize a new frame halfway in temporal order between the two nodes; then we can simply define a new RPM A_new, and the new latent code is calculated as
z_{\text{new}} = A_{\text{new}} Z, \qquad (8)
where A_new = [1/2, 1/2]. The RPM intuitively reorders all the latent codes according to the relative positions of the input frames with respect to the nodes. Therefore, we can interpolate any frame in-between the two nodes by simply reflecting its relative position in the RPM. Needless to say, to synthesize any number of new frames, we can stack all RPMs into one big RPM. Moreover, the RPM can be tweaked so that fitting can be performed for more than one sequence of reference frames at a time (see the supplementary materials).
Note that the RPM has nothing to do with optical flow. Different from optical-flow-based methods, our approach has no control over how many pixels an object should move. Setting the RPM so that the latent code is midway between two reference codes does not guarantee that the generated motion is symmetric around the interpolated frame, but it does ensure that the synthesized frame is halfway between the two reference frames in temporal order. The RPM is simply a convenient and intuitive way to embed a linearity constraint on the underlying manifold, instead of learning a linear relationship in the neighborhood of each point as in LLE.
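Inference-time RPMs for arbitrary positions, including several frames per interval, can be sketched the same way (function and variable names here are hypothetical):

```python
import numpy as np

def inference_rpm(positions):
    """RPM rows for arbitrary relative temporal positions in [0, 1]
    between the two nodes (0 = first node, 1 = last node); each row
    sums to one, as in Eq. (8)."""
    p = np.asarray(positions, dtype=float)
    return np.stack([1.0 - p, p], axis=1)

Z = np.random.randn(2, 8)                    # hypothetical node codes
A_half = inference_rpm([0.5])                # a single halfway frame
A_many = inference_rpm([0.25, 0.5, 0.75])    # three new frames in one interval
Z_new = A_many @ Z                           # codes to feed the decoder
```

Stacking several position rows into one matrix corresponds to stacking RPMs to synthesize any number of new frames in a single pass.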
4.2 Architecture
Our model works in a plug-and-play and unsupervised manner, so the only required input is a set of several consecutive frames. As introduced before, DeepLLE employs an autoencoding CNN to go back and forth between image space and latent space. We use the 18-layer deep residual network (ResNet-18) [7] as the encoder of DeepLLE, after removing the first mean pooling layer, the global average pooling layer, and the softmax layer. Another modification is that we replace the rectified linear unit (ReLU) [19] activation with leaky ReLU (LReLU) [20] with a fixed leaky slope. The reason is that we do not want the network to be robust against small changes in its input, a property brought about by functions with saturating regions [21]. Another reason is that LReLU is more linear than all of its siblings. We perform extensive experiments to verify this choice. For the decoder, we found that simply stacking convolutional layers interleaved with bicubic upsampling works consistently well in all our simulations. The details of the decoder are described in Figure 2. The numbers of convolutional layers in the stacking blocks are 3, 5, 7, and 9, respectively. All the kernels in the decoder share the same receptive field size. LReLU is used in all layers except for the output layer, which is activated by the hyperbolic tangent function; the decoder output is then simply rescaled into the image range. Interestingly, we discovered that dropout [22] very efficiently reduces glaring artifacts in the generated images. Therefore, we add two dropout layers with a fixed dropout probability after the third and fourth stacking modules. We note that a more sophisticated choice of network structure may improve performance but is not the main focus of our work.
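The non-saturating activation choice can be illustrated with a minimal sketch (the slope value below is our own placeholder; the paper's exact leaky parameter is not reproduced here):

```python
import numpy as np

def lrelu(x, slope=0.2):
    """Leaky ReLU. Unlike ReLU, the negative side never saturates, so
    small perturbations of the input always propagate through the
    activation — the behavior argued for in the text."""
    return np.where(x >= 0.0, x, slope * x)
```

A saturating activation would map nearby negative inputs to identical outputs, whereas LReLU keeps nearby latent codes distinguishable, which matters when interpolated codes lie very close to the node codes.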
To optimize DeepLLE, we use the Huber loss, with a fixed threshold, between the reconstructed images and the reference frames. Together with the Huber loss, we optimize the network with a first-derivative loss and the structural similarity score (SSIM) [23] between the output images and the ground truths. For SSIM, we first convert the output images and ground truths to YCbCr color space and apply SSIM to the luminance (Y) channel only. Overall, our objective function is
\mathcal{L} = \mathcal{H}(\hat{X}, X) + \lambda_1 \big\| \nabla \hat{X} - \nabla X \big\|_1 + \lambda_2 \big( 1 - \text{SSIM}(\hat{X}_Y, X_Y) \big), \qquad (9)
where \hat{X}_Y and X_Y are the Y channels of \hat{X} and X, respectively; \nabla denotes the gradient operator, \mathcal{H} stands for the Huber loss, SSIM indicates the SSIM metric, and \|\cdot\|_1 is the \ell_1 norm. We set \lambda_1 and \lambda_2 empirically in all experiments. The network is optimized end-to-end so that the decoder can guide the encoder to find a latent space in which both the reconstruction from the nodes' latent codes and the decoding of interpolated codes are possible. We initialize the weights of the autoencoder using He initialization [24]. We use the ADAM optimization scheme [25] with a fixed learning rate and all other parameters set to the authors' suggestions. Since the fitting operates blindly, we can either run until convergence or terminate the optimization at some fixed iteration; in our experiments, we chose the latter and set a fixed iteration budget. As a side note, better performance might be achieved by running the optimization longer. We implemented our model in Theano [26]; the code will be released upon publication of the paper.
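The shape of the objective in Eq. (9) can be sketched in NumPy on plain image arrays. This is an illustrative sketch only: the Huber threshold and the weights are placeholders, the SSIM term is passed in precomputed rather than implemented, and no automatic differentiation is implied:

```python
import numpy as np

def huber(x, y, delta=1.0):
    """Huber loss: quadratic for small residuals, linear beyond delta.
    delta is a placeholder; the paper's threshold is not reproduced."""
    r = np.abs(x - y)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)).mean()

def grad_l1(x, y):
    """First-derivative (finite-difference) l1 loss along both spatial axes."""
    gx = np.abs(np.diff(x, axis=-1) - np.diff(y, axis=-1)).mean()
    gy = np.abs(np.diff(x, axis=-2) - np.diff(y, axis=-2)).mean()
    return gx + gy

def total_loss(x, y, ssim_term=0.0, lam1=1.0, lam2=1.0):
    """Weighted sum of the three terms; lam1, lam2, and ssim_term
    (e.g. 1 - SSIM on the Y channel) are placeholders."""
    return huber(x, y) + lam1 * grad_l1(x, y) + lam2 * ssim_term
```

For identical images all terms vanish, and large residuals are penalized linearly rather than quadratically, which keeps the fitting robust to occasional outlier pixels.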
5 Experimental Results
We assessed our framework on UCF101 [27], DAVIS [28, 29], and real-life videos. For UCF101, we tested our method on the test set provided in [13], taking the last several consecutive frames of each sequence as references. To quantitatively evaluate our method, for each sequence, we left out all the even frames to compare the interpolated frames against. We emphasize that the quantitative evaluation is unfair to our method for two reasons. First, we have to hold out all even frames as ground truths for evaluation, so in the optimization process the frames are farther apart than the inputs of other methods, which hurts our linearity assumption. Second, our method is not trained to directly interpolate a frame halfway between two inputs but to generate any frame between two nodes, so the generated motions are likely to differ from those in the ground truths. Therefore, the quantitative results do not reflect the full potential of our method. Nonetheless, we used five metrics widely used in the image quality assessment literature, namely visual information fidelity (VIF) [30], the detail loss metric (DLM) [31], the noise quality measure (NQM) [32], SSIM, and PSNR, to evaluate the synthesized images (evaluations of DVF, except for SSIM and PSNR, were done on their provided results). Following [12, 13], we evaluated on the motion regions only, which are extracted by applying the masks in [13]. All input images were processed at their original resolution and normalized. We chose to interpolate a frame halfway between each pair of reference frames. For DAVIS, we selected several videos and only evaluated the performance visually, since no previous work has benchmarked on this database. To demonstrate the ability to generate any number of frames in-between two nodes, we interpolated three new frames between each pair of reference frames.
We applied the same strategy to several real-life video segments, including documentaries, music videos, and movies. Due to limited GPU memory, all sequences were processed at a reduced resolution. We chose to generate three images between each pair of reference frames.
To benchmark our model, we chose several state-of-the-art methods in the field: EpicFlow [33], a state-of-the-art optical flow method; DVF [12], an implicit optical-flow-based method; Beyond MSE [13], a state-of-the-art pixel hallucination method; and phase-based frame interpolation [5], a plug-and-play, non-learning method.
5.1 Quantitative results
Method           SSIM   VIF    DLM    NQM    PSNR
Beyond MSE       0.93   –      –      –      32.8
EpicFlow-based   0.95   –      –      –      34.2
DVF              0.96   0.62   0.93   23.2   35.8
Phase-based      0.88   0.69   0.96   26.9   27.9
Ours             0.97   0.71   0.98   27.4   33.1
Table 1 shows the benchmark on UCF101 between our method and the chosen models. As we can see, our method outperforms all other benchmarked methods in terms of perceptual quality. We optimize the network with SSIM, so it is not surprising that our SSIM results are the highest; that is why we employ other perceptual quality metrics for a fair comparison with DVF. Even so, our synthesized images still score higher than DVF's. The only score on which we lose to DVF is PSNR. However, there may exist several correct interpolated frames between any two frames, so a pixel-to-pixel difference metric like PSNR cannot judge the correctness of the interpolated motions. In addition, it is well known that PSNR often correlates poorly with human opinion and hence usually performs poorly in quality assessment studies [34]. On the other hand, SSIM, VIF, DLM, and NQM are known to correlate well with perceived quality. High performance on these metrics is a strong indicator that our generated images are likely to be favored by viewers.
5.2 Qualitative results
Figure 5 demonstrates the interpolated frames of DVF, the phase-based method, and ours, along with the corresponding ground truths. In the first row, where the movement (of the heel of the left person) is moderate, all methods perform well, but the phase-based method still leaves visible artifacts. In the second row, DeepLLE and the phase-based method produce similar results, but DVF generates artifacts even though the motion of the hand is very simple. This is perhaps due to the motion blur in the first frame, and it reveals that DVF may not succeed when there is an abrupt intensity change between the two inputs. In the third row, where the motions are large, all methods struggle to interpolate the in-between frames. While the phase-based method's results and ours are still reasonable, DVF's result is distorted because of incorrect image warping; this sort of distortion is typical of optical-flow-based methods. In the last row, where the motions are too large, all methods fail. Our method, however, still manages to generate an image with heavy blur in the motion regions, while DVF confusingly warps the image and the result is unrecognizable. It can be seen that the images generated by our method are of equal quality to those of state-of-the-art learning-based methods. Taking a closer look, we also notice that our method actually reduces artifacts and distortions present in the original images; the same phenomenon has been observed and discussed in [35], a study concurrent to ours. We conclude that our method is preferable to existing methods in many situations: it needs no pretraining on big data and works like conventional methods, while at the same time possessing the computing power of deep learning, which greatly improves the quality of the synthesized images over conventional models and puts it on par with state-of-the-art deep-learning-based methods.
Figure 6 shows some of our interpolated frames from several DAVIS and real-life videos. We processed several segments of the videos and played them back at a reduced frame rate for a slow-motion effect. We highly encourage readers to check our webpage for more results, since temporal coherence cannot be evaluated on still images.
5.3 Extension to extrapolation
Our framework can be easily extended to video frame extrapolation. The setup of the experiment is described in our supplementary material. Figure 7 displays the future frames synthesized by our method. Our observation is that in many sequences the generated images do not show clear motions. Even if the motions in the reference frames are large, the generated motions are still limited. This suggests that extending our framework to extrapolation is a nontrivial task which we will take on in our future work.
5.4 Ablation study
Ablation (seven ablated configurations)                   Ours
SSIM   0.96   0.94   0.95   0.95   0.96   0.96   0.95   0.98
VIF    0.70   0.59   0.60   0.64   0.61   0.69   0.64   0.71
DLM    0.96   0.92   0.93   0.95   0.97   0.96   0.96   0.97
NQM    28.0   22.4   24.0   27.3   30.9   25.7   27.4   27.3
PSNR   30.9   27.2   28.0   32.5   31.5   29.6   30.7   33.4
We verify some of our choices for the proposed framework. For the ablation study, we randomly selected a subset of videos from the UCF101 dataset. All settings except for the ablated components were kept at our defaults.
Locally linear embedding module. To study the effect of the locally linear embedding module, we train an autoencoding CNN with a common training scheme: we remove the LLE module and use all the reference frames as input. The network architecture is the same as in Section 4.2. Table 2 and Figure 8 show the quantitative and qualitative performance when the LLE module is removed. In general, the generated images are blurrier than those of our full framework. In some sequences, the synthesized images do not contain any motion; they look almost the same as either of the two nodes. The numerical results consistently suggest that without LLE the quality is much worse. We can see that using only two reference frames is in fact equivalent to this case. Therefore, our method can work with only two input frames by discarding the LLE module. However, this is highly discouraged. Without the module, the network is not taught how to decode the points on the line segment passing through the two nodes. By explicitly embedding linearity on the latent manifold, we guide the network to encode the nodes so that the decoding of the synthesized latent codes can be performed properly.
Nevertheless, an important observation from this benchmark is that the deep network captures a great deal of the underlying manifolds of video sequences since it still can manage to decode an interpolated latent variable even though it is not optimized to do so. In [35], the authors discovered that deep networks can capture the structure of an image, and through that they can blindly restore or superresolve an image. Here, we have found that in addition to image structure, they can capture video structure as well. This analysis can be greatly beneficial to the advancement of the deep unsupervised learning.
Number of reference frames. We conducted an ablation study on the number of reference frames. Table 2 shows our numerical results when the number of reference frames is increased. As can be seen, the results are much worse than in the three-reference-frame case. However, readers should take this benchmark with a grain of salt, since in this case the two nodes are five frames apart, which may contain too large a motion. Nevertheless, there is a trade-off between the number of references and the linearity assumption. In practice, the manifold should be highly nonlinear, and the latent codes of the references may not even be coplanar, much less collinear; that is why keeping the number of references small might result in better performance. Further, using three frames is more computationally economical, so we use three reference frames as the default setting.
Effect of number of iterations. Figure 9 displays the test PSNR values as the number of iterations varies. In our experiments, the general tendency is that the longer the optimization, the better the performance. Different from existing methods, ours blindly optimizes the cost function, and there is no data distribution to generalize over. In that sense, our method is closer to pure optimization than to learning, which requires minimizing an empirical risk. Thus, running the optimization sufficiently long is recommended for a satisfying result.
Effect of learning rate adjustment. We experimented with a manual learning rate adjustment scheme in which we halved the learning rate at two fixed iterations. The results of this scheme are shown in Figure 9. Although the scheme does not improve the overall result, it does stabilize the optimization when the network is near its saturation point. Unlike the case without any learning rate adjustment, in which the curves fluctuate strongly, this scheme guarantees a good solution no matter when the optimization is terminated. Therefore, it is advisable to turn this scheme on, although it is only optional.
Effect of dropout. Figure 8 shows the visual results of our method with and without dropout. As can be seen from the figure, the images generated without dropout have noticeable artifacts in the interpolated regions, while those generated with dropout do not show any such artifacts. Moreover, the network with dropout recovers high-frequency components better, which makes the images more realistic and less blurry. Table 2 shows the numerical results of our ablation study on dropout. It is clear that the performance of the network without dropout is nowhere near its true performance. We conclude that dropout significantly improves the quality of the synthesized images.
Choice of activation. As can be seen from Table 2, our method with LReLU surpasses all other activation configurations. An explanation is that ReLU, SELU, and ELU saturate when the argument is below some threshold. This saturation makes the network robust against small perturbations in the input [21], which is quite unwanted in our method, since the two nodes' latent variables and the interpolated codes are very close to each other. This experiment implies that although deep networks are a powerful computing tool, one may need to design their architectures based on a thorough analysis of the problem at hand.
Effect of SSIM cost. From Table 2, we can see that when SSIM is excluded from the cost function, SSIM and PSNR drop slightly. The visual results can be seen in Figure 8. At first, we did not find much difference between the images optimized with and without SSIM, but upon closer examination, several artifacts appear in the latter. Since SSIM is a structural similarity metric, including it in the cost function may prevent the network from generating structural artifacts, which makes the synthesized images better-looking.
Limitations. We observe that our method completely fails when the input images are mostly black such as some UCF101 PlayingPiano and PlayingFlute sequences. Also, due to the nature of the method, it cannot be used in an online scenario since the network has to perform optimization for each input sequence.
6 Conclusion
In this paper, we present a new method for in-between frame generation. Our method is constructed by embedding an explicit linearity assumption on the latent representations of consecutive frames obtained from an autoencoding CNN. Our model is an off-the-shelf method that can be applied instantly to a video like conventional methods while possessing deep-learning-level performance. It can synthesize an arbitrary number of images between any number of frames simultaneously. Various benchmarks show that our synthesized images are comparable with state-of-the-art results in the field. Moreover, our work shows that deep CNNs capture a great deal of video statistics, which may greatly benefit the study of deep unsupervised learning.
References
 [1] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3) (2015) 211–252
 [2] Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. International Journal of Computer Vision 92(1) (2011) 1–31
 [3] Werlberger, M., Pock, T., Unger, M., Bischof, H.: Optical flow guided TV-L1 video interpolation and restoration. In: International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, Springer 273–286
 [4] Mahajan, D., Huang, F.C., Matusik, W., Ramamoorthi, R., Belhumeur, P.: Moving gradients: a path-based method for plausible image interpolation. In: ACM Transactions on Graphics (TOG). Volume 28., ACM 42
 [5] Meyer, S., Wang, O., Zimmer, H., Grosse, M., Sorkine-Hornung, A.: Phase-based frame interpolation for video. In: Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, IEEE 1410–1418
 [6] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NIPS). 1097–1105
 [7] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 770–778
 [9] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
 [9] Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., van der Smagt, P., Cremers, D., Brox, T.: Flownet: Learning optical flow with convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision. 2758–2766
 [10] Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Deep end2end voxel2voxel prediction. In: Computer Vision and Pattern Recognition Workshops (CVPRW), 2016 IEEE Conference on, IEEE 402–409
 [11] Weinzaepfel, P., Revaud, J., Harchaoui, Z., Schmid, C.: Deepflow: Large displacement optical flow with deep matching. In: Computer Vision (ICCV), 2013 IEEE International Conference on, IEEE 1385–1392
 [12] Liu, Z., Yeh, R., Tang, X., Liu, Y., Agarwala, A.: Video frame synthesis using deep voxel flow. In: International Conference on Computer Vision (ICCV). Volume 2.
 [13] Mathieu, M., Couprie, C., LeCun, Y.: Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440 (2015)
 [14] Niklaus, S., Mai, L., Liu, F.: Video frame interpolation via adaptive separable convolution. arXiv preprint arXiv:1708.01692 (2017)
 [15] Niklaus, S., Mai, L., Liu, F.: Video frame interpolation via adaptive convolution. In: CVPR. Volume 2. 6
 [16] Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500) (2000) 2323–2326
 [17] Bregler, C., Omohundro, S.M.: Nonlinear image interpolation using manifold learning. In: Advances in neural information processing systems. 973–980
 [18] Walker, J., Doersch, C., Gupta, A., Hebert, M.: An uncertain future: Forecasting from static images using variational autoencoders. In: European Conference on Computer Vision, Springer 835–851
 [19] Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10). 807–814
 [20] Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: Proc. ICML. Volume 30.
 [21] Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289 (2015)
 [22] Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1) (2014) 1929–1958
 [23] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4) (2004) 600–612
 [24] He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. In: Proceedings of the IEEE international conference on computer vision. 1026–1034
 [25] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR)
 [26] Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., Bengio, Y.: Theano: A CPU and GPU math compiler in Python. In: Proceedings of the 9th Python in Science Conference. 1–7
 [27] Soomro, K., Zamir, A.R., Shah, M.: UCF101: A dataset of 101 human action classes from videos in the wild. arXiv preprint arXiv:1212.0402 (2012)
 [28] Pont-Tuset, J., Perazzi, F., Caelles, S., Arbeláez, P., Sorkine-Hornung, A., Van Gool, L.: The 2017 DAVIS challenge on video object segmentation. arXiv preprint arXiv:1704.00675 (2017)
 [29] Perazzi, F., Pont-Tuset, J., McWilliams, B., Van Gool, L., Gross, M., Sorkine-Hornung, A.: A benchmark dataset and evaluation methodology for video object segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 724–732
 [30] Sheikh, H.R., Bovik, A.C.: A visual information fidelity approach to video quality assessment. In: The First International Workshop on Video Processing and Quality Metrics for Consumer Electronics. 23–25
 [31] Li, S., Zhang, F., Ma, L., Ngan, K.N.: Image quality assessment by separately evaluating detail losses and additive impairments. IEEE Transactions on Multimedia 13(5) (2011) 935–949
 [32] Damera-Venkata, N., Kite, T.D., Geisler, W.S., Evans, B.L., Bovik, A.C.: Image quality assessment based on a degradation model. IEEE Transactions on Image Processing 9(4) (2000) 636–650
 [33] Revaud, J., Weinzaepfel, P., Harchaoui, Z., Schmid, C.: EpicFlow: Edge-preserving interpolation of correspondences for optical flow. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1164–1172
 [34] Kim, J., Zeng, H., Ghadiyaram, D., Lee, S., Zhang, L., Bovik, A.C.: Deep convolutional neural models for picture-quality prediction: Challenges and solutions to data-driven image quality assessment. IEEE Signal Processing Magazine 34(6) (2017) 130–141
 [35] Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. (2017)