Learning for Video Super-Resolution through HR Optical Flow Estimation

Longguang Wang, Yulan Guo, Zaiping Lin, Xinpu Deng, and Wei An
School of Electronic Science, National University of Defense Technology
Changsha 410073, China
{wanglongguang15, yulan.guo, linzaiping, dengxinpu, anwei}@nudt.edu.cn
Abstract

Video super-resolution (SR) aims to generate a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The generation of accurate correspondences plays a significant role in video SR. It has been demonstrated by traditional video SR methods that simultaneous SR of both images and optical flows can provide accurate correspondences and better SR results. However, existing deep learning based methods use LR optical flows for correspondence generation. In this paper, we propose an end-to-end trainable video SR framework to super-resolve both images and optical flows. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed according to the HR optical flows. Finally, the compensated LR inputs are fed to a super-resolution network (SRnet) to generate the SR results. Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance. Comparative results on the Vid4 and DAVIS-10 datasets show that our framework achieves state-of-the-art performance. The code will be released soon at: https://github.com/LongguangWang/SOF-VSR-Super-Resolving-Optical-Flow-for-Video-Super-Resolution-.

1 Introduction

Super-resolution (SR) aims to generate high-resolution (HR) images or videos from their low-resolution (LR) counterparts. As a typical low-level computer vision problem, SR has been widely investigated for decades [23, 5, 7]. Recently, the prevalence of high-definition displays has further advanced the development of SR. For single image SR, image details are recovered using the spatial correlation within a single frame. In contrast, inter-frame temporal correlation can further be exploited for video SR.

Since temporal correlation is crucial to video SR, the key to success lies in accurate correspondence generation. Numerous methods [6, 19, 22] have demonstrated that the correspondence generation and SR problems are closely interrelated and can boost each other’s accuracy. Therefore, these methods integrate the SR of both images and optical flows in a unified framework. However, current deep learning based methods [18, 13, 35, 2, 20, 21] mainly focus on the SR of images, and use LR optical flows to provide correspondences. Although LR optical flows can provide sub-pixel correspondences in LR images, their limited accuracy hinders the performance improvement for video SR, especially for scenarios with large upscaling factors.

Figure 1: Temporal profiles under ×4 configuration for VSRnet [13], TDVSR [20] and our SOF-VSR on Calendar and City. Purple boxes represent corresponding temporal profiles. Our SOF-VSR produces finer details in the temporal profiles, which are more consistent with the groundtruth.
Figure 2: Overview of the proposed framework. Our framework is fully convolutional and can be trained in an end-to-end manner.

In this paper, we propose an end-to-end trainable video SR framework to generate both HR images and optical flows. The SR of optical flows provides accurate correspondences, which not only improves the accuracy of each HR image, but also achieves better temporal consistency. We first introduce an optical flow reconstruction net (OFRnet) to reconstruct HR optical flows in a coarse-to-fine manner. These HR optical flows are then used to perform motion compensation on LR frames. A space-to-depth transformation is therefore used to bridge the resolution gap between HR optical flows and LR frames. Finally, the compensated LR frames are fed to a super-resolution net (SRnet) to generate each HR frame. Extensive evaluation is conducted to test our framework. Comparison to existing video SR methods shows that our framework achieves the state-of-the-art performance in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Moreover, our framework achieves better temporal consistency for visual perception (as shown in Fig. 1).

Our main contributions can be summarized as follows: 1) We integrate the SR of both images and optical flows into a single SOF-VSR (super-resolving optical flow for video SR) network. The SR of optical flows provides accurate correspondences and improves the overall performance; 2) We propose an OFRnet to infer HR optical flows in a coarse-to-fine manner; 3) Extensive experiments have demonstrated the effectiveness of our framework. It is shown that our framework achieves the state-of-the-art performance.

2 Related Work

In this section, we briefly review some major methods for single image SR and video SR.

2.1 Single Image SR

Dong et al. [3] pioneered the use of deep learning for single image SR. They used a three-layer convolutional neural network (CNN) to approximate the non-linear mapping from the LR image to the HR image. Recently, deeper and more complex network architectures have been proposed [14, 33, 11]. Kim et al. [14] proposed a very deep super-resolution network (VDSR) with 20 convolutional layers. Tai et al. [33] developed a deep recursive residual network (DRRN) and used recursive learning to control the number of model parameters while increasing the depth. Hui et al. [11] proposed an information distillation network to reduce computational complexity and memory consumption.

Figure 3: Architecture of our OFRnet. Our OFRnet works in a coarse-to-fine manner. At each level, the output of its previous level is used to compute a residual optical flow.

2.2 Video SR

Traditional video SR. To handle complex motion patterns in video sequences, Protter et al. [26] generalized the non-local means framework to address the SR problem. They used patch-wise spatio-temporal similarity to perform adaptive fusion of multiple frames. Takeda et al. [34] further introduced 3D kernel regression to exploit patch-wise spatio-temporal neighboring relationships. However, the HR images produced by these two methods are over-smoothed. To exploit pixel-wise correspondences, optical flow is used in [6, 19, 22]. Since the accuracy of correspondences provided by optical flows in LR images is usually low [17], an iterative framework is used in these methods [6, 19, 22] to estimate both HR images and optical flows.

Deep video SR with separated motion compensation. Recently, deep learning has been investigated for video SR. Liao et al. [18] performed motion compensation under different parameter settings to generate an ensemble of SR drafts, and then employed a CNN to recover high-frequency details from the ensemble. Kappeler et al. [13] also performed image alignment through optical flow estimation, and then passed the concatenation of compensated LR inputs to a CNN to reconstruct each HR frame. In these methods, motion compensation is separated from the CNN. Therefore, it is difficult for them to obtain an overall optimal solution.

Deep video SR with integrated motion compensation. More recently, Caballero et al. [2] proposed the first end-to-end CNN framework (namely, VESPCN) for video SR. It comprises a motion compensation module and the sub-pixel convolutional layer used in [31]. Since then, end-to-end frameworks with integrated motion compensation have dominated video SR research. Tao et al. [35] used the motion estimation module of VESPCN, and proposed an encoder-decoder network based on LSTM. This architecture facilitates the extraction of temporal context. Liu et al. [20] customized ESPCN [31] to simultaneously process different numbers of LR frames. They then introduced a temporal adaptive network to aggregate multiple HR estimates with learned dynamic weights. Sajjadi et al. [29] proposed a frame-recurrent architecture that uses previously inferred HR estimates for the SR of subsequent frames. The recurrent architecture can assimilate previously inferred HR frames without an increase in computational cost.

It has already been demonstrated by traditional video SR methods [6, 19, 22] that simultaneous SR of images and optical flows produces better results. However, current CNN-based methods only focus on the SR of images. Different from previous works, we propose an end-to-end video SR framework to super-resolve both images and optical flows. It is demonstrated that the SR of optical flows enables our framework to achieve state-of-the-art performance.

3 Network Architecture

Our framework takes consecutive LR frames as inputs and super-resolves the central frame. The LR inputs are first divided into pairs and fed to OFRnet to infer an HR optical flow. Then, a space-to-depth transformation [29] is employed to shuffle the HR optical flow into LR grids. Afterwards, motion compensation is performed to generate an LR draft cube. Finally, the draft cube is fed to SRnet to infer the HR frame. The overview of our framework is shown in Fig. 2.

3.1 Optical Flow Reconstruction Net (OFRnet)

It has been demonstrated that CNNs can learn the non-linear mapping between LR and HR images for the SR problem [3]. Recent CNN-based works [4, 12] have also shown their potential for motion estimation. In this paper, we incorporate these two tasks into a unified network to infer HR optical flows from LR images. Specifically, our OFRnet takes a pair of LR frames $I^{LR}_i$ and $I^{LR}_j$ as inputs, and reconstructs the optical flow between their corresponding HR frames $I^{HR}_i$ and $I^{HR}_j$:

$$F^{HR}_{i \rightarrow j} = \mathrm{Net}_{OFR}\big(I^{LR}_i, I^{LR}_j;\, \theta_{OFR}\big), \qquad (1)$$

where $F^{HR}_{i \rightarrow j}$ represents the HR optical flow and $\theta_{OFR}$ is the set of parameters of OFRnet.

Motivated by the pyramid optical flow estimation method in [1], we use a coarse-to-fine approach to handle complex motion patterns (especially large displacements). As illustrated in Fig. 3, a 3-level pyramid is employed in our OFRnet.

Level 1: The pair of LR images $I^{LR}_i$ and $I^{LR}_j$ is first downsampled by a factor of 2; the downsampled frames are then concatenated and fed to a feature extraction layer. Next, two residual dense blocks (RDBs) [38] with 4 layers and a growth rate of 32 are employed. Within each residual dense block, the first 3 layers are followed by a leaky ReLU with a leakage factor of 0.1, while the last layer performs feature fusion. The residual dense block works in a local residual learning manner with a local skip connection at the end. Once dense features are extracted by the residual dense blocks, they are concatenated and fed to a feature fusion layer. Then, the optical flow at this level is inferred by the subsequent flow estimation layer.
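
To make this structure concrete, the following is a minimal PyTorch sketch of such a residual dense block (4 layers, growth rate 32, leaky ReLU with slope 0.1, a fusion layer and a local skip connection); the class name, kernel sizes and channel counts are assumptions rather than details from the released code:

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Sketch of an RDB: densely connected 3x3 convs + fusion + local skip connection."""
    def __init__(self, channels=32, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers - 1):              # first layers: conv + leaky ReLU
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1),
                nn.LeakyReLU(0.1, inplace=True)))
            in_ch += growth
        self.fusion = nn.Conv2d(in_ch, channels, 1)  # last layer: feature fusion

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return self.fusion(torch.cat(features, dim=1)) + x   # local residual learning
```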

Level 2: Once the raw optical flow is obtained from level 1, it is upscaled by a factor of 2. The upscaled flow is then used to warp the neighboring LR frame $I^{LR}_j$. Next, the warped frame, the reference frame $I^{LR}_i$ and the upscaled flow are concatenated and fed to a network module. Note that this module at level 2 is similar to that at level 1, except that residual learning is used.

Level 3: The module at level 2 generates an optical flow with the same size as the LR input. Therefore, the module at level 3 works as an SR part to infer the HR optical flow. The architecture at level 3 is similar to that at level 2, except that the flow estimation layer is replaced by a sub-pixel convolutional layer [31] for resolution enhancement.
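
As a rough illustration, such a level-3 head could be a convolution predicting 2·s² channels followed by a sub-pixel (pixel shuffle) layer; the channel counts below are assumptions:

```python
import torch.nn as nn

def hr_flow_head(in_channels=32, scale=4):
    """Predict 2*s*s channels at LR resolution, then rearrange them into an HR flow."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 2 * scale * scale, 3, padding=1),
        nn.PixelShuffle(scale))   # (B, 2*s*s, H, W) -> (B, 2, s*H, s*W)
```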

Figure 4: Illustration of space-to-depth transformation. The space-to-depth transformation folds an HR optical flow in LR space to generate an LR flow cube.

Although numerous networks for SR [28, 16, 33, 11] and optical flow estimation [32, 27, 10] can be found in the literature, our OFRnet is, to the best of our knowledge, the first unified network to integrate these two tasks. Note that inferring HR optical flows from LR images is quite challenging; our OFRnet demonstrates the potential of CNNs to address this challenge. Our OFRnet is compact, with only 0.6M parameters. It is further demonstrated in Sec. 4.3 that the resulting HR optical flows benefit our video SR framework in terms of both accuracy and consistency.

3.2 Motion Compensation

Once HR optical flows are produced by OFRnet, a space-to-depth transformation is used to bridge the resolution gap between HR optical flows and LR frames. As illustrated in Fig. 4, regular LR grids are extracted from the HR flow and placed into the channel dimension to derive a flow cube with the same resolution as the LR frames:

$$F^{HR} \in \mathbb{R}^{sH \times sW \times 2} \;\rightarrow\; F^{LR} \in \mathbb{R}^{H \times W \times 2s^2}, \qquad (2)$$

where $H$ and $W$ represent the size of the LR frame and $s$ is the upscaling factor. Note that the magnitude of the optical flow is divided by the scalar $s$ during the transformation to match the spatial resolution of the LR frames.
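
A minimal sketch of this transformation, assuming the HR flow is stored as a (B, 2, sH, sW) tensor, extracts the s² regular LR grids and rescales the flow magnitude:

```python
import torch

def space_to_depth_flow(flow_hr, scale):
    """Fold an HR flow (B, 2, s*H, s*W) into s*s LR flow slices, each (B, 2, H, W)."""
    flow_hr = flow_hr / scale                 # express displacements in LR pixels
    slices = []
    for dy in range(scale):
        for dx in range(scale):               # one regular LR grid per (dy, dx) offset
            slices.append(flow_hr[:, :, dy::scale, dx::scale])
    return slices                             # concatenating them yields the flow cube
```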

Then, $s^2$ slices are extracted from the flow cube to warp the LR frame $I^{LR}_j$, resulting in multiple warped drafts:

$$D_j = \mathcal{C}\Big( W\big(I^{LR}_j, F^{LR}_1\big),\, W\big(I^{LR}_j, F^{LR}_2\big),\, \ldots,\, W\big(I^{LR}_j, F^{LR}_{s^2}\big) \Big), \qquad (3)$$

where $W(\cdot)$ denotes the warping operation, $\mathcal{C}(\cdot)$ denotes channel-wise concatenation, and $D_j$ represents the warped drafts after concatenation, namely the draft cube.
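
For illustration, the drafts can be generated by bilinear warping with each flow slice (e.g., via grid_sample in PyTorch); the flow sign convention and border handling below are assumptions:

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Bilinearly warp `frame` (B, C, H, W) with `flow` (B, 2, H, W) given in pixels."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    base = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W), (x, y)
    coords = base.unsqueeze(0) + flow                              # sampling positions
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                    # normalize to [-1, 1]
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                   # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def draft_cube(lr_frame, flow_slices):
    """Warp the LR frame with every flow slice and concatenate the drafts."""
    return torch.cat([warp(lr_frame, f) for f in flow_slices], dim=1)
```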

3.3 Super-resolution Net (SRnet)

After motion compensation, all the drafts are concatenated with the central LR frame, as shown in Fig. 2. Then, the draft cube is fed to SRnet to infer the HR frame:

$$I^{SR} = \mathrm{Net}_{SR}\big(D;\, \theta_{SR}\big), \qquad (4)$$

where $I^{SR}$ is the super-resolved result of the central LR frame, $D$ represents the draft cube (all warped drafts concatenated with the central LR frame) and $\theta_{SR}$ is the set of parameters of SRnet.

Figure 5: Architecture of our SRnet.

As illustrated in Fig. 5, the draft cube is first passed to a feature extraction layer with 64 kernels, and the output features are then fed to 5 residual dense blocks (similar to those in our OFRnet). Here, we increase the number of layers to 5 and the growth rate to 64 for each residual dense block. Afterwards, we concatenate the outputs of all residual dense blocks and use a feature fusion layer to distill the dense features. Finally, a sub-pixel layer is used to generate the HR frame.
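
Putting these pieces together, a hedged skeleton of SRnet (reusing the residual dense block sketch from Sec. 3.1) might look as follows; the draft-cube channel count, kernel sizes and the single-channel luminance output are assumptions:

```python
import torch
import torch.nn as nn

class SRnetSketch(nn.Module):
    def __init__(self, in_channels, channels=64, num_blocks=5, scale=4):
        super().__init__()
        self.extract = nn.Conv2d(in_channels, channels, 3, padding=1)   # 64 kernels
        self.blocks = nn.ModuleList(
            [ResidualDenseBlock(channels, growth=64, num_layers=5)      # 5 RDBs
             for _ in range(num_blocks)])
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)       # global fusion
        self.upsample = nn.Sequential(                                   # sub-pixel layer
            nn.Conv2d(channels, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, drafts):
        x = self.extract(drafts)
        outs = []
        for block in self.blocks:
            x = block(x)
            outs.append(x)
        return self.upsample(self.fuse(torch.cat(outs, dim=1)))
```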

The combination of densely connected layers and residual learning in residual dense blocks has been demonstrated to have a contiguous memory mechanism [38, 9]. Therefore, we employ residual dense blocks in our SRnet to facilitate effective feature learning from preceding and current local features. Furthermore, feature reuse in the residual dense blocks improves the model compactness and stabilizes the training process.

3.4 Loss Function

We design two loss terms $\mathcal{L}_{OFR}$ and $\mathcal{L}_{SR}$ for OFRnet and SRnet, respectively. For the training of OFRnet, intermediate supervision is used at each level of the pyramid:

$$\mathcal{L}_{OFR} = \sum_{l=1}^{3} \lambda_l\, \mathcal{L}_{OFR}^{l}, \qquad (5)$$

where

$$\mathcal{L}_{OFR}^{l} = \sum_{i=-T,\; i \neq 0}^{T} \Big( \big\| W\big(I_i^{l}, F_{i \rightarrow 0}^{l}\big) - I_0^{l} \big\|_2^2 + \lambda_{TV}\, \big\| \nabla F_{i \rightarrow 0}^{l} \big\|_1 \Big). \qquad (6)$$

Here, $I_i^{l}$ denotes frame $i$ at the resolution of level $l$, $F_{i \rightarrow 0}^{l}$ is the corresponding estimated flow, $T$ denotes the temporal window size, and the second term is a regularization term that constrains the smoothness of the optical flow. The level weights $\lambda_l$ are set empirically to make our OFRnet focus on the last level, and $\lambda_{TV}$ is the regularization coefficient.

For the training of SRnet, we use the widely applied mean square error (MSE) loss:

$$\mathcal{L}_{SR} = \big\| I^{SR} - I^{HR} \big\|_2^2. \qquad (7)$$

Finally, the total loss used for joint training is $\mathcal{L} = \mathcal{L}_{SR} + \alpha\, \mathcal{L}_{OFR}$, where $\alpha$ is empirically set to 0.01 to balance the two loss terms.
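
The following is a hedged sketch of this objective in PyTorch; the smoothness penalty is written as a total-variation term, the level weights and smoothness coefficient are left as placeholders since their exact values are not restated here, and `warp` refers to the bilinear warping helper sketched in Sec. 3.2:

```python
import torch
import torch.nn.functional as F

def smoothness(flow):
    """Total-variation style penalty on the flow gradients."""
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return dx + dy

def ofr_level_loss(ref, neighbors, flows, lam_tv):
    """Warping loss + smoothness regularizer at one pyramid level."""
    loss = 0.0
    for frame, flow in zip(neighbors, flows):
        loss = loss + F.mse_loss(warp(frame, flow), ref) + lam_tv * smoothness(flow)
    return loss

def total_loss(sr, hr, level_losses, level_weights, alpha=0.01):
    """L = L_SR + alpha * L_OFR, with intermediate supervision at each level."""
    loss_ofr = sum(w * l for w, l in zip(level_weights, level_losses))
    return F.mse_loss(sr, hr) + alpha * loss_ofr
```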

4 Experiments

In this section, we first conduct ablation experiments to evaluate our framework. Then, we further compare our framework to several existing video SR methods.

4.1 Datasets

We collected 152 1080P HD video clips from the CDVL Database (www.cdvl.org) and the Ultra Video Group Database (ultravideo.cs.tut.fi). The collected videos cover diverse natural and urban scenes. We used 145 videos from the CDVL Database as the training set, and 7 videos from the Ultra Video Group Database as the validation set. Following the configuration in [19, 18, 35], we downsampled the video clips in Matlab to generate the HR groundtruth. In this paper, we only focus on the upscaling factor of 4 since it is the most challenging case. Therefore, the HR video clips were further downsampled by a factor of 4 to produce the LR inputs.

For fair comparison to the state-of-the-arts, we chose the widely used Vid4 benchmark dataset. We also used another 10 video clips from the DAVIS dataset [25] for further comparison, which we refer to as DAVIS-10.

Method | PSNR | SSIM | T-MOVIE | MOVIE | VQM-VFD
SOF-VSR w/o OFRnet | 25.80 | 0.760 | 20.08 | 4.54 | 0.240
SOF-VSR w/o OFRnet level 3 | 25.88 | 0.764 | 19.95 | 4.48 | 0.235
SOF-VSR w/o OFRnet level 3 + upsampling | 25.86 | 0.763 | 19.92 | 4.50 | 0.231
SOF-VSR | 26.01 | 0.771 | 19.78 | 4.32 | 0.227
Table 1: Comparative results achieved by our framework and its variants on the Vid4 dataset under ×4 configuration. Best results are shown in boldface.

4.2 Implementation Details

Following [3, 20], we converted the input LR frames into the YCbCr color space and only fed the luminance channel to our network. All metrics in this section are computed on the luminance channel. During the training phase, we randomly extracted 3 consecutive frames from an LR video clip and randomly cropped a patch as the input. Meanwhile, the corresponding patch in the HR video clip was cropped as groundtruth. Data augmentation was performed through rotation and reflection to improve the generalization ability of our network.
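
For reference, the luminance channel can be extracted following the standard ITU-R BT.601 (Matlab rgb2ycbcr) convention; the sketch below assumes RGB inputs in [0, 1] and is only illustrative:

```python
import numpy as np

def rgb_to_y(rgb):
    """rgb: (..., 3) array in [0, 1]; returns the Y (luminance) channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (65.481 * r + 128.553 * g + 24.966 * b + 16.0) / 255.0
```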

We implemented our framework in PyTorch. We applied the Adam solver [15] with a batch size of 16. The initial learning rate was reduced by half after every 50K iterations. We trained our network from scratch for 300K iterations. All experiments were conducted on a PC with an Nvidia GTX 970 GPU.
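
The optimizer and learning-rate schedule can be summarized by the following sketch; the stand-in module and the initial learning rate value are assumptions since the latter is not restated here:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 64, 3, padding=1)        # stand-in for the full SOF-VSR network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)      # initial lr assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50_000, gamma=0.5)
# After each of the 300K training iterations (batch size 16):
#   optimizer.step(); scheduler.step()
```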

4.3 Ablation Study

In this section, we present ablation experiments on the Vid4 dataset to justify our design choices.

Figure 6: Visual comparison of optical flow estimation results achieved on City and Walk under ×4 configuration. The super-resolved optical flow recovers fine correspondences, which are consistent with the groundtruth.

4.3.1 Network Variants

We designed several variants of our SOF-VSR to perform the ablation study. All the variants were re-trained for 300K iterations on the training data.

SOF-VSR w/o OFRnet. To handle complex motion patterns in video sequences, optical flow is used for motion compensation in our framework. To test the effectiveness of motion compensation for video SR, we removed the whole OFRnet and fed the LR frames directly to our SRnet. Note that replicated LR frames were used to match the dimension of the draft cube.

SOF-VSR w/o OFRnet level 3. The SR of optical flows provides accurate correspondences for video SR and improves the overall performance. To validate the effectiveness of HR optical flows, we removed the module at level 3 of our OFRnet. Specifically, the LR optical flows produced at level 2 were directly used for motion compensation and subsequent processing. To match the dimension of the draft cube, the compensated LR frames were also replicated before being fed to SRnet.

SOF-VSR w/o OFRnet level 3 + upsampling. Super-resolving the optical flow can also be achieved simply with interpolation-based methods. However, our OFRnet can recover more reliable optical flow details. To demonstrate this, we removed the module at level 3 of our OFRnet and upsampled the LR optical flows produced at level 2 using bilinear interpolation. Then, we used the modules in our original framework for subsequent processing.

4.3.2 Experimental Analyses

To test the accuracy of each output image, we used PSNR/SSIM as metrics. To further test the consistency performance, we used the temporal motion-based video integrity evaluation index (T-MOVIE) [30]. Besides, we used MOVIE [30] and the video quality measure with variable frame delay (VQM-VFD) [37] for overall evaluation. The MOVIE and VQM-VFD metrics are correlated with human perception and widely applied in video quality assessment. Evaluation results of our original framework and the 3 variants on the Vid4 dataset are shown in Table 1.
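
For reference, a minimal luminance-channel PSNR computation is sketched below; SSIM, T-MOVIE, MOVIE and VQM-VFD rely on reference implementations and are not reproduced here:

```python
import numpy as np

def psnr(sr_y, hr_y, peak=255.0):
    """PSNR (dB) between two luminance images of the same size."""
    mse = np.mean((sr_y.astype(np.float64) - hr_y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```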

Motion compensation. It can be observed from Table 1 that motion compensation plays a significant role in performance improvement. If OFRnet is removed, the PSNR/SSIM values decrease from 26.01/0.771 to 25.80/0.760. Besides, the consistency performance is also degraded, with the T-MOVIE value increasing to 20.08. That is because it is difficult for SRnet to learn the non-linear mapping between LR and HR images under complex motion patterns.

Sequence | Upsampled optical flow | Super-resolved optical flow
Calendar | 0.85 | 0.39
City | 1.17 | 0.49
Foliage | 1.18 | 0.36
Walk | 1.25 | 0.55
Average | 1.11 | 0.45
Table 2: Average end-point error (EPE) results achieved on the Vid4 dataset under ×4 configuration. Best results are shown in boldface.

HR optical flow. If the modules at levels 1 and 2 are introduced to generate LR optical flows for motion compensation, the PSNR/SSIM values increase to 25.88/0.764. However, the performance is still inferior to our SOF-VSR method using HR optical flows. That is because HR optical flows provide more accurate correspondences for performance improvement. If bilinear interpolation is used to upsample the LR optical flows, no consistent improvement can be observed. That is because the upsampling operation cannot recover reliable correspondence details as well as the module at level 3 does. To demonstrate this, we further compared the super-resolved optical flow (the output at level 3) and the upsampled optical flow (the bilinearly upsampled output at level 2) to the groundtruth. Since no groundtruth optical flow is available for the Vid4 dataset, we used the method proposed by Hu et al. [8] to compute the groundtruth optical flow. We used the average end-point error (EPE) for quantitative comparison, and present the results in Table 2.
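
For reference, the EPE reported in Table 2 is the mean Euclidean distance between estimated and groundtruth flow vectors, e.g.:

```python
import torch

def average_epe(flow_est, flow_gt):
    """flow_*: (B, 2, H, W); mean end-point error over all pixels."""
    return torch.norm(flow_est - flow_gt, p=2, dim=1).mean()
```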

It can be seen from Table 2 that the super-resolved optical flow significantly outperforms the upsampled optical flow, with the average EPE being reduced from 1.11 to 0.45. This demonstrates that the module at level 3 effectively recovers correspondence details. Figure 6 further illustrates the qualitative comparison on City and Walk. In the upsampled optical flow, we can roughly distinguish the outlines of the building and the pedestrian. In contrast, more distinct edges can be observed in the super-resolved optical flow, with finer details being recovered. Although some checkerboard artifacts generated by the sub-pixel layer can also be observed [24], the resulting HR optical flow provides highly accurate correspondences for the video SR task.

4.4 Comparisons to the state-of-the-art

We first compared our framework to IDNnet [11] (the latest state-of-the-art single image SR method) and several video SR methods including VSRnet [13], VESPCN [2], DRVSR [35], TDVSR [20] and FRVSR [29] on the Vid4 dataset. Then, we conducted comparative experiments on the DAVIS-10 dataset.

Metric | IDNnet [11] | VSRnet [13] | VESPCN [2] | TDVSR [20] | SOF-VSR | DRVSR [35] | FRVSR-3-64 [29] | SOF-VSR-BD
PSNR | 25.06 | 24.81 | 25.35* | 25.49 | 26.01 | 25.99 | 26.17* | 26.19
SSIM | 0.715 | 0.702 | 0.756* | 0.746 | 0.771 | 0.773 | 0.798* | 0.785
T-MOVIE | 23.98 | 26.05 | - | 23.23 | 19.78 | 18.28 | - | 17.63
MOVIE | 5.99 | 6.01 | 5.82* | 4.92 | 4.32 | 4.00 | - | 4.00
VQM-VFD | 0.268 | 0.273 | - | 0.238 | 0.227 | 0.217 | - | 0.215
Table 3: Comparison of accuracy and consistency performance achieved on the Vid4 dataset under ×4 configuration. IDNnet, VSRnet, VESPCN, TDVSR and SOF-VSR use the BI degradation model; DRVSR, FRVSR-3-64 and SOF-VSR-BD use the BD degradation model. Note that the first and last two frames are not used in our evaluation since VSRnet and TDVSR do not produce outputs for these frames. Results marked with * are directly copied from the corresponding papers. Best results are shown in boldface.
Metric | IDNnet [11] | VSRnet [13] | SOF-VSR | DRVSR [35] | SOF-VSR-BD
PSNR | 33.74 | 32.63 | 34.32 | 33.02 | 34.27
SSIM | 0.915 | 0.897 | 0.925 | 0.911 | 0.925
T-MOVIE | 12.16 | 14.60 | 11.77 | 14.06 | 10.93
MOVIE | 2.19 | 2.85 | 1.96 | 3.15 | 1.90
VQM-VFD | 0.146 | 0.163 | 0.119 | 0.142 | 0.127
Table 4: Comparative results achieved on the DAVIS-10 dataset under ×4 configuration. IDNnet, VSRnet and SOF-VSR use the BI degradation model; DRVSR and SOF-VSR-BD use the BD degradation model. Best results are shown in boldface.

For IDNnet and VSRnet, we used the codes provided by the authors to produce the results. For DRVSR and TDVSR, we used the output images provided by the authors. For VESPCN and FRVSR, the results reported in their papers [2, 29] are used. Here, we report the performance of FRVSR-3-64 since its network size is comparable to our SOF-VSR. Following [36], we crop image borders for fair comparison.

Note that DRVSR and FRVSR are trained with a degradation model different from the other networks. Specifically, the degradation model used in IDNnet, VSRnet, VESPCN and TDVSR is bicubic downsampling with Matlab (denoted as BI). However, in DRVSR and FRVSR, the HR images are first blurred with a Gaussian kernel and then downsampled by selecting every s-th pixel (denoted as BD). Consequently, we re-trained our framework on the BD degradation model (denoted as SOF-VSR-BD) for fair comparison to DRVSR and FRVSR.
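
The two degradation models can be sketched as follows; OpenCV's bicubic resize stands in for the Matlab routine, and the Gaussian kernel width is an assumption since it is not restated here:

```python
import cv2

def degrade_bi(hr, scale=4):
    """BI: bicubic downsampling (OpenCV used as a stand-in for Matlab's bicubic resize)."""
    h, w = hr.shape[:2]
    return cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)

def degrade_bd(hr, scale=4, sigma=1.6):
    """BD: Gaussian blur followed by direct subsampling; sigma is an assumed value."""
    blurred = cv2.GaussianBlur(hr, ksize=(0, 0), sigmaX=sigma)
    return blurred[::scale, ::scale]          # select every scale-th pixel
```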

Without optimization of the implementation, our SOF-VSR network takes about 250ms to generate an HR image of size 720×576 under ×4 configuration on an Nvidia GTX 970 GPU.

Figure 7: Consistency and accuracy performance achieved on the Vid4 dataset under ×4 configuration. Dots and squares represent performance for the BI and BD degradation models, respectively. Our framework achieves the best performance in terms of both PSNR and T-MOVIE.
Figure 8: Visual comparison of SR results on Calendar and City. Zoom-in regions from left to right: IDNnet [11], VSRnet [13], TDVSR [20], our SOF-VSR, DRVSR [35] and our SOF-VSR-BD. IDNnet, VSRnet, TDVSR and SOF-VSR are based on the BI degradation model, while DRVSR and SOF-VSR-BD are based on the BD degradation model.
Figure 9: Visual comparison of SR results on Boxing and Demolition. Zoom-in regions from left to right: IDNnet [11], VSRnet [13], our SOF-VSR, DRVSR [35] and our SOF-VSR-BD. IDNnet, VSRnet and SOF-VSR are based on the BI degradation model, while DRVSR and SOF-VSR-BD are based on the BD degradation model.

4.4.1 Quantitative Evaluation

Quantitative results achieved on the Vid4 dataset and the DAVIS-10 dataset are shown in Tables 3 and 4.

Evaluation on the Vid4 dataset. It can be observed from Table 3 that our SOF-VSR achieves the best performance for the BI degradation model in terms of all metrics. Specifically, the PSNR and SSIM values achieved by our framework exceed those of the other BI-based methods by over 0.5 dB and 0.015, respectively. That is because HR optical flows provide more accurate correspondences, so more reliable spatial details and better temporal consistency can be recovered.

For the BD degradation model, although the FRVSR-3-64 method achieves a higher SSIM, our SOF-VSR-BD method outperforms FRVSR-3-64 in terms of PSNR. Compared to the DRVSR method, the PSNR, SSIM and T-MOVIE values achieved by our SOF-VSR-BD method are improved by a notable margin, while comparable performance is achieved in terms of MOVIE and VQM-VFD.

We further show the trade-off between consistency and accuracy in Fig. 7. It can be seen that our SOF-VSR and SOF-VSR-BD methods achieve the highest PSNR values, while maintaining superior T-MOVIE performance.

Evaluation on the DAVIS-10 dataset. It is clear from Table 4 that our SOF-VSR and SOF-VSR-BD methods surpass the state-of-the-arts for both the BI and BD degradation models in terms of all metrics. Since the DAVIS-10 dataset comprises scenes with fast moving objects, complex motion patterns (especially large displacements) lead to performance deterioration for existing video SR methods. In contrast, more accurate correspondences are provided by HR optical flows in our framework. Therefore, complex motion patterns can be handled more robustly and better performance can be achieved.

4.4.2 Qualitative Evaluation

Figure 8 illustrates the qualitative results on two scenarios of the Vid4 dataset. It can be observed from the zoom-in regions that our framework recovers finer and more reliable details, such as the word “MAREE” and the stripes of the building. The qualitative comparison on the DAVIS-10 dataset (as shown in Fig. 9) also demonstrates the superior visual quality achieved by our framework. The pattern on the shorts, the word “PEUA” and the logo “CAT” are better recovered by our SOF-VSR and SOF-VSR-BD methods.

Figure 1 further shows the temporal profiles achieved on Calendar and City. It can be observed that the word “MAREE” can hardly be recognized by VSRnet in either the image space or the temporal profile. Although finer results are achieved by TDVSR, the building is still obviously distorted. In contrast, smooth and reliable patterns with fewer artifacts can be observed in the temporal profiles of our results. In summary, our framework produces temporally more consistent results with better perceptual quality.

5 Conclusions

In this paper, we propose a deep end-to-end trainable video SR framework to super-resolve both images and optical flows. Our OFRnet first super-resolves the optical flows to provide accurate correspondences. Motion compensation is then performed based on HR optical flows and SRnet is used to infer the final results. Extensive experiments have demonstrated that our OFRnet can recover reliable correspondence details for the improvement of both accuracy and consistency performance. Comparison to existing video SR methods has shown that our framework achieves the state-of-the-art performance.

References

  • [1] J.-Y. Bouguet. Pyramidal implementation of the Lucas-Kanade feature tracker: Description of the algorithm. Technical report, Intel Corporation, 1999.
  • [2] J. Caballero, C. Ledig, A. P. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In CVPR, pages 2848–2857, 2017.
  • [3] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In ECCV, pages 184–199, 2014.
  • [4] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In CVPR, pages 2758–2766, 2015.
  • [5] R. Fattal. Image upsampling via imposed edge statistics. ACM Trans. Graph., 26(3):95, 2007.
  • [6] R. Fransens, C. Strecha, and L. J. V. Gool. Optical flow based super-resolution: A probabilistic approach. Computer Vision and Image Understanding, 106(1):106–115, 2007.
  • [7] G. Freedman and R. Fattal. Image and video upscaling from local self-examples. ACM Trans. Graph., 30(2):12:1–12:11, 2011.
  • [8] Y. Hu, Y. Li, and R. Song. Robust interpolation of correspondences for large displacement optical flow. In CVPR, pages 4791–4799, 2017.
  • [9] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, pages 2261–2269, 2017.
  • [10] T. Hui, X. Tang, and C. C. Loy. Liteflownet: A lightweight convolutional neural network for optical flow estimation. In CVPR, 2018.
  • [11] Z. Hui, X. Wang, and X. Gao. Fast and accurate single image super-resolution via information distillation network. In CVPR, 2018.
  • [12] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, pages 1647–1655, 2017.
  • [13] A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos. Video super-resolution with convolutional neural networks. IEEE Trans. Computational Imaging, 2(2):109–122, jun 2016.
  • [14] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, pages 1646–1654, 2016.
  • [15] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • [16] W. Lai, J. Huang, N. Ahuja, and M. Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In CVPR, pages 5835–5843, 2017.
  • [17] H. S. Lee and K. M. Lee. Simultaneous super-resolution of depth and images using a single camera. In CVPR, pages 281–288, 2013.
  • [18] R. Liao, X. Tao, R. Li, Z. Ma, and J. Jia. Video super-resolution via deep draft-ensemble learning. In ICCV, pages 531–539, 2015.
  • [19] C. Liu and D. Sun. On bayesian adaptive video super resolution. IEEE Trans. Pattern Anal. Mach. Intell., 36(2):346–360, feb 2014.
  • [20] D. Liu, Z. Wang, Y. Fan, and X. Liu. Robust video super-resolution with learned temporal dynamics. In ICCV, pages 2526–2534, 2017.
  • [21] D. Liu, Z. Wang, Y. Fan, X. Liu, Z. Wang, S. Chang, X. Wang, and T. S. Huang. Learning temporal dynamics for video super-resolution: A deep learning approach. IEEE Trans. Image Process., 27(7):3432–3445, 2018.
  • [22] Z. Ma, R. Liao, X. Tao, L. Xu, J. Jia, and E. Wu. Handling motion blur in multi-frame super-resolution. In CVPR, pages 5224–5232, 2015.
  • [23] N. Nguyen, P. Milanfar, and G. Golub. A computationally efficient superresolution image reconstruction algorithm. IEEE Trans. Image Process., 10(4):573–583, 2001.
  • [24] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
  • [25] J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbelaez, A. Sorkine-Hornung, and L. V. Gool. The 2017 DAVIS challenge on video object segmentation. arXiv 1704.00675, pages 1–9, 2017.
  • [26] M. Protter, M. Elad, H. Takeda, and P. Milanfar. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Image Process., 18:36–51, 2008.
  • [27] A. Ranjan and M. J. Black. Optical flow estimation using a spatial pyramid network. In CVPR, pages 2720–2729, 2017.
  • [28] M. S. M. Sajjadi, B. Schölkopf, and M. Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In ICCV, pages 4501–4510, 2017.
  • [29] M. S. M. Sajjadi, R. Vemulapalli, and M. Brown. Frame-recurrent video super-resolution. In CVPR, 2018.
  • [30] K. Seshadrinathan and A. C. Bovik. Motion tuned spatio-temporal quality assessment of natural videos. IEEE Trans. Image Process., 19(2):335–350, 2010.
  • [31] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In CVPR, pages 1874–1883, 2016.
  • [32] D. Sun, X. Yang, M. Liu, and J. Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. CVPR, 2018.
  • [33] Y. Tai, J. Yang, and X. Liu. Image super-resolution via deep recursive residual network. In CVPR, pages 2790–2798, 2017.
  • [34] H. Takeda, P. Milanfar, M. Protter, and M. Elad. Super-resolution without explicit subpixel motion estimation. IEEE Trans. Image Process., 18(9):1958–1975, 2009.
  • [35] X. Tao, H. Gao, R. Liao, J. Wang, and J. Jia. Detail-revealing deep video super-resolution. In ICCV, pages 4482–4490, 2017.
  • [36] R. Timofte, E. Agustsson, L. V. Gool, M. Yang, L. Zhang, and et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In CVPR, pages 1110–1121, 2017.
  • [37] S. Wolf and M. Pinson. Video quality model for variable frame delay (VQM-VFD). US Dept. Commer., Nat. Telecommun. Inf. Admin., Boulder, CO, USA, Tech. Memo TM-11-482, 2011.
  • [38] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu. Residual dense network for image super-resolution. In CVPR, 2018.