Deep Back-Projection Networks For Super-Resolution

Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita
Toyota Technological Institute, Japan; Toyota Technological Institute at Chicago, United States
{mharis, ukita}@toyota-ti.ac.jp, greg@ttic.edu
Abstract

The feed-forward architectures of recently proposed deep super-resolution networks learn representations of low-resolution inputs, and the non-linear mapping from those to high-resolution output. However, this approach does not fully address the mutual dependencies of low- and high-resolution images. We propose Deep Back-Projection Networks (DBPN), which exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage. We construct mutually-connected up- and down-sampling stages, each of which represents different types of image degradation and high-resolution components. We show that extending this idea to allow concatenation of features across up- and down-sampling stages (Dense DBPN) improves super-resolution further, yielding superior results and in particular establishing new state-of-the-art results for large scaling factors such as 8x across multiple data sets.

1 Introduction

Significant progress in deep learning for vision [15, 13, 5, 40, 27, 34, 17] has recently been propagating to the field of super-resolution (SR) [20, 30, 6, 12, 21, 22, 25, 43].

Figure 1: Super-resolution result on 8x enlargement. PSNR: LapSRN [25] (15.25 dB), EDSR [31] (15.33 dB), and Ours (16.63 dB)
Figure 2: Comparisons of deep network SR. (a) Predefined upsampling (e.g., SRCNN [6], VDSR [22], DRRN [43]) commonly uses conventional interpolation, such as bicubic, to upscale LR input images before entering the network. (b) Single upsampling (e.g., FSRCNN [7], ESPCN [38]) propagates the LR features, then constructs the SR image in the last step. (c) Progressive upsampling uses a Laplacian pyramid network which gradually predicts SR images [25]. (d) Iterative up- and downsampling is proposed by our DBPN, which exploits the mutually connected up- (blue box) and down-sampling (gold box) stages to obtain numerous HR features at different depths.

Single image SR is an ill-posed inverse problem where the aim is to recover a high-resolution (HR) image from a low-resolution (LR) image. A currently typical approach is to construct an HR image by learning a non-linear LR-to-HR mapping, implemented as a deep neural network [6, 7, 38, 25, 22, 23, 43]. These networks compute a sequence of feature maps from the LR image, culminating with one or more upsampling layers to increase resolution and finally construct the HR image. In contrast to this purely feed-forward approach, the human visual system is believed to use feedback connections to guide the task toward the relevant results [9, 24, 26]. Perhaps hampered by the lack of such feedback, current SR networks with only feed-forward connections have difficulty representing the LR-to-HR relation, especially for large scaling factors.

On the other hand, feedback connections were used effectively by one of the early SR algorithms, iterative back-projection [18]. It iteratively computes the reconstruction error, then fuses it back to tune the HR image intensity. Although it has been proven to improve image quality, the result still suffers from ringing and chessboard artifacts [4]. Moreover, this method is sensitive to choices of parameters such as the number of iterations and the blur operator, leading to variability in results.

Inspired by [18], we construct an end-to-end trainable architecture based on the idea of iterative up- and down-sampling: Deep Back-Projection Networks (DBPN). Our networks perform well on large scaling factors, as shown in Fig. 1. Our work provides the following contributions:

(1) Error feedback. We propose an iterative error-correcting feedback mechanism for SR, which calculates both up- and down-projection errors to guide the reconstruction toward better results. Here, the projection errors are used to characterize or constrain the features in early layers. A detailed explanation is given in Section 3.

(2) Mutually connected up- and down-sampling stages. Feed-forward architectures, which can be viewed as one-way mappings, only map rich representations of the input to the output space. This approach fails to fully map between LR and HR images, especially for large scaling factors, due to the limited features available in the LR space. Therefore, our networks focus not only on generating variants of the HR features using upsampling layers but also on projecting them back to the LR space using downsampling layers. This connection is shown in Fig. 2 (d), alternating between up- (blue box) and down-sampling (gold box) stages, which represent the mutual relation of the LR and HR images.

(3) Deep concatenation. Our networks represent different types of image degradation and HR components. This ability enables the networks to reconstruct the HR image using deep concatenation of the HR feature maps from all of the up-sampling steps. Unlike other networks, our reconstruction directly utilizes different types of LR-to-HR features without propagating them through the sampling layers as shown by the red arrow in Fig. 2 (d).

(4) Improvement with dense connection. We improve the accuracy of our network by densely connecting [15] the up- and down-sampling stages to encourage feature reuse.

2 Related Work

2.1 Image super-resolution using deep networks

Deep network SR methods can be primarily divided into four types, as shown in Fig. 2.

(a) Predefined upsampling commonly uses interpolation as the upsampling operator to produce a middle-resolution (MR) image. This schema was first proposed by SRCNN [6] to learn the MR-to-HR non-linear mapping with simple convolutional layers. Later, improved networks exploited residual learning [22, 43] and recursive layers [23]. However, this approach might introduce new noise from the MR image.

(b) Single upsampling offers a simple yet effective way to increase the spatial resolution. This approach was proposed by FSRCNN [7] and ESPCN [38]. These methods have proven effective at increasing the spatial resolution and replacing predefined operators. However, they fail to learn complicated mappings due to the limited capacity of the networks. EDSR [31], the winner of NTIRE2017 [44], belongs to this type. However, it requires a large number of filters in each layer and lengthy training time, around eight days as stated by the authors. These problems open opportunities for lighter networks that preserve HR components better.

(c) Progressive upsampling was recently proposed in LapSRN [25]. It progressively reconstructs multiple SR images at different scales in one feed-forward network. For the sake of simplification, this network can be seen as a stack of single-upsampling networks that relies only on limited LR features. Due to this, LapSRN is outperformed even by our shallow networks, especially for large scaling factors such as 8x, as shown in the experimental results.

(d) Iterative up- and downsampling is proposed by our networks. We focus on increasing the sampling rate of SR features at different depths and distributing the task of calculating the reconstruction error across the stages. This schema enables the networks to preserve the HR components by learning various up- and down-sampling operators while generating deeper features.

2.2 Feedback networks

Rather than learning a non-linear mapping of input-to-target space in one step, feedback networks decompose the prediction process into multiple steps, which allows the model to have a self-correcting procedure. Feedback procedures have been implemented in various computer vision tasks [3, 35, 47, 29, 49, 39, 32].

In the context of human pose estimation, Carreira et al. [3] proposed iterative error feedback, iteratively estimating and applying a correction to the current estimate. PredNet [32] is an unsupervised recurrent network that predictively codes future frames by recursively feeding predictions back into the model. For image segmentation, Li et al. [29] learn implicit shape priors and use them to improve the prediction. However, to our knowledge, feedback procedures have not previously been applied to SR.

2.3 Adversarial training

Adversarial training, such as with Generative Adversarial Networks (GANs) [10], has been applied to various image reconstruction problems [28, 37, 34, 5, 20]. For the SR task, Johnson et al. [20] introduced perceptual losses based on high-level features extracted from pre-trained networks. Ledig et al. [28] proposed SRGAN, which is considered a single-upsampling method. It targets the natural image manifold, creating photo-realistic images by formulating a loss function based on the Euclidean distance between feature maps extracted from VGG19 [41], and introduced SRResNet as its MSE-optimized generator.

Our networks can be extended with an adversarial loss as the generator network. However, we optimize our network using only an objective function such as mean squared error (MSE). Therefore, instead of training DBPN with the adversarial loss, we compare DBPN with SRResNet, which is also optimized by MSE.

2.4 Back-projection

Back-projection [18] is well known as an efficient iterative procedure to minimize the reconstruction error. Previous studies have proven the effectiveness of back-projection [51, 11, 8, 46]. Originally, back-projection was designed for the case of multiple LR inputs. However, given only one LR input image, the updating procedure can be obtained by upsampling the LR image using multiple upsampling operators and calculating the reconstruction error iteratively [4]. Timofte et al. [46] mentioned that back-projection can improve the quality of the SR image. Zhao et al. [51] proposed a method to refine high-frequency texture details with an iterative projection process. However, the initialization which leads to an optimal solution remains unknown. Most of the previous studies involve constant, unlearnable, predefined parameters such as the blur operator and the number of iterations.
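For concreteness, the following is a minimal NumPy/SciPy sketch of the classical back-projection update for a single grayscale LR input. The Gaussian blur operator, cubic interpolation, and iteration count here are illustrative assumptions; they are precisely the hand-tuned, unlearned parameters discussed above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_back_projection(lr, scale=2, n_iters=10, sigma=1.0):
    """Refine an initial bicubic upscale by repeatedly feeding back the
    LR reconstruction error. `lr` is a 2-D float array; `scale` is an integer."""
    hr = zoom(lr, scale, order=3)  # initial HR estimate via cubic interpolation
    for _ in range(n_iters):
        # Simulate the assumed degradation model: blur, then downsample to LR size.
        lr_sim = zoom(gaussian_filter(hr, sigma), 1.0 / scale, order=3)
        err = lr - lr_sim  # reconstruction error in LR space
        # Back-project the error to HR space and fuse it into the estimate.
        hr = hr + zoom(gaussian_filter(err, sigma), scale, order=3)
    return hr
```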

To extend this algorithm, we develop an end-to-end trainable architecture which focuses on guiding the SR task using mutually connected up- and down-sampling stages to learn the non-linear relation between LR and HR images. The mutual relation between the HR and LR images is constructed by creating iterative up- and down-projection units, where the up-projection unit generates HR features and the down-projection unit projects them back to the LR space, as shown in Fig. 2 (d). This schema enables the networks to preserve the HR components by learning various up- and down-sampling operators and by generating deeper features to construct numerous LR and HR features.

3 Deep Back-Projection Networks

Let $I^h$ and $I^l$ be the HR and LR images of size $(M^h \times N^h)$ and $(M^l \times N^l)$, respectively, where $M^l < M^h$ and $N^l < N^h$. The main building block of our proposed DBPN architecture is the projection unit, which is trained (as part of the end-to-end training of the SR system) to map either an LR feature map to an HR map (up-projection), or an HR map to an LR map (down-projection).

3.1 Projection units

The up-projection unit is defined as follows:

scale up: $H_0^t = (L^{t-1} * p_t)\uparrow_s$ (1)
scale down: $L_0^t = (H_0^t * g_t)\downarrow_s$ (2)
residual: $e_t^l = L_0^t - L^{t-1}$ (3)
scale residual up: $H_1^t = (e_t^l * q_t)\uparrow_s$ (4)
output feature map: $H^t = H_0^t + H_1^t$ (5)

where $*$ is the spatial convolution operator, $\uparrow_s$ and $\downarrow_s$ are, respectively, the up- and down-sampling operators with scaling factor $s$, and $p_t$, $g_t$, $q_t$ are (de)convolutional layers at stage $t$.

This projection unit takes the previously computed LR feature map $L^{t-1}$ as input, and maps it to an (intermediate) HR map $H_0^t$; then it attempts to map it back to the LR map $L_0^t$ (“back-project”). The residual (difference) $e_t^l$ between the observed LR map $L^{t-1}$ and the reconstructed $L_0^t$ is mapped to HR again, producing a new intermediate (residual) map $H_1^t$; the final output of the unit, the HR map $H^t$, is obtained by summing the two intermediate HR maps. This step is illustrated in the upper part of Fig. 3.
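As an illustration, a PyTorch-style sketch of this unit is given below. This is a sketch rather than the authors' Caffe implementation: the 8x8 kernel with stride 4 and padding 2 follows the 4x setting described in Sec. 4.1, and PReLU activations (Sec. 4.1) are attached to each layer.

```python
import torch.nn as nn

class UpProjectionUnit(nn.Module):
    """Sketch of Eqs. (1)-(5): scale up, back-project down, correct with the residual."""
    def __init__(self, n_filters, kernel=8, stride=4, padding=2):  # 4x setting, Sec. 4.1
        super().__init__()
        # p_t: deconvolution (scale up), g_t: convolution (scale down),
        # q_t: deconvolution (scale residual up); each followed by PReLU.
        self.p_t = nn.Sequential(
            nn.ConvTranspose2d(n_filters, n_filters, kernel, stride, padding), nn.PReLU())
        self.g_t = nn.Sequential(
            nn.Conv2d(n_filters, n_filters, kernel, stride, padding), nn.PReLU())
        self.q_t = nn.Sequential(
            nn.ConvTranspose2d(n_filters, n_filters, kernel, stride, padding), nn.PReLU())

    def forward(self, L_prev):
        H0 = self.p_t(L_prev)  # Eq. (1): scale up
        L0 = self.g_t(H0)      # Eq. (2): scale back down ("back-project")
        e_l = L0 - L_prev      # Eq. (3): LR residual
        H1 = self.q_t(e_l)     # Eq. (4): scale residual up
        return H0 + H1         # Eq. (5): output HR feature map
```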

The down-projection unit is defined very similarly, but now its job is to map its input HR map $H^t$ to the LR map $L^t$, as illustrated in the lower part of Fig. 3.

scale down: $L_0^t = (H^t * g'_t)\downarrow_s$ (6)
scale up: $H_0^t = (L_0^t * p'_t)\uparrow_s$ (7)
residual: $e_t^h = H_0^t - H^t$ (8)
scale residual down: $L_1^t = (e_t^h * q'_t)\downarrow_s$ (9)
output feature map: $L^t = L_0^t + L_1^t$ (10)

We organize projection units in a series of stages, alternating between up-projection units (producing $H^t$) and down-projection units (producing $L^t$). These projection units can be understood as a self-correcting procedure which feeds a projection error to the sampling layer and iteratively revises the solution by feeding back the projection error.

Figure 3: Proposed up- and down-projection unit in the DBPN.
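The down-projection unit admits the mirror-image sketch below, again a hedged PyTorch illustration under the 4x filter setting; stacking the two unit types alternately, as described above, yields the self-correcting series of stages.

```python
import torch.nn as nn

class DownProjectionUnit(nn.Module):
    """Sketch of Eqs. (6)-(10): the mirror of the up-projection unit."""
    def __init__(self, n_filters, kernel=8, stride=4, padding=2):  # 4x setting, Sec. 4.1
        super().__init__()
        # g'_t: convolution (scale down), p'_t: deconvolution (scale up),
        # q'_t: convolution (scale residual down); each followed by PReLU.
        self.g_t = nn.Sequential(
            nn.Conv2d(n_filters, n_filters, kernel, stride, padding), nn.PReLU())
        self.p_t = nn.Sequential(
            nn.ConvTranspose2d(n_filters, n_filters, kernel, stride, padding), nn.PReLU())
        self.q_t = nn.Sequential(
            nn.Conv2d(n_filters, n_filters, kernel, stride, padding), nn.PReLU())

    def forward(self, H):
        L0 = self.g_t(H)   # Eq. (6): scale down
        H0 = self.p_t(L0)  # Eq. (7): scale back up
        e_h = H0 - H       # Eq. (8): HR residual
        L1 = self.q_t(e_h) # Eq. (9): scale residual down
        return L0 + L1     # Eq. (10): output LR feature map
```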

The projection unit uses large-sized filters such as $8 \times 8$ and $12 \times 12$. In other existing networks, the use of large-sized filters is avoided because it slows down the convergence speed and might produce sub-optimal results. However, iterative utilization of our projection units enables the network to suppress this limitation and to achieve better performance on large scaling factors even with shallow networks.

3.2 Dense projection units

The dense inter-layer connectivity pattern in DenseNets [15] has been shown to alleviate the vanishing-gradient problem, produce improved features, and encourage feature reuse. Inspired by this, we propose to improve DBPN by introducing dense connections in the projection units, yielding Dense DBPN (D-DBPN).

Unlike the original DenseNets, we avoid dropout and batch norm, which are not suitable for SR because they remove the range flexibility of the features [31]. Instead, we use a $1 \times 1$ convolution layer for feature pooling and dimensionality reduction [42, 12] before entering the projection unit.

In D-DBPN, the input for each unit is the concatenation of the outputs from all previous units. Let $\tilde{L}^{t-1}$ and $\tilde{H}^{t}$ be the inputs of the dense up- and down-projection units, respectively. They are generated using $conv(1, n_R)$, which merges all previous outputs from each unit, as shown in Fig. 4. This improvement enables us to generate the feature maps effectively, as shown in the experimental results.

Figure 4: Proposed up- and down-projection units in D-DBPN. The feature maps of all preceding units (i.e., $[L^1, \ldots, L^{t-1}]$ and $[H^1, \ldots, H^{t}]$ for the up- and down-projection units, respectively) are concatenated and used as inputs, and each unit’s own feature maps are used as inputs to all subsequent units.
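A sketch of this merging step, with hypothetical class and argument names, might look as follows; the 1x1 convolution reduces the concatenated channels back to the unit width $n_R$ before the projection unit proper.

```python
import torch
import torch.nn as nn

class DenseProjectionInput(nn.Module):
    """Sketch of the D-DBPN input path: concatenate the outputs of all previous
    units, then merge them with a 1x1 convolution (conv(1, n_R) in Sec. 3.2)."""
    def __init__(self, n_prev_units, n_filters):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(n_prev_units * n_filters, n_filters, kernel_size=1), nn.PReLU())

    def forward(self, prev_maps):
        # prev_maps: list of feature maps L^1 ... L^{t-1} (or H^1 ... H^t),
        # each with n_filters channels; concatenate along the channel axis.
        return self.merge(torch.cat(prev_maps, dim=1))
```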

3.3 Network architecture

Figure 5: An implementation of D-DBPN for super-resolution. Unlike the original DBPN, D-DBPN exploits densely connected projection unit to encourage feature reuse.

The proposed D-DBPN is illustrated in Fig. 5. It can be divided into three parts: initial feature extraction, projection, and reconstruction, as described below. Here, let $conv(f, n)$ denote a convolutional layer, where $f$ is the filter size and $n$ is the number of filters.

  1. Initial feature extraction. We construct initial LR feature maps $L^0$ from the input using $conv(3, n_0)$. Then $conv(1, n_R)$ is used to reduce the dimension from $n_0$ to $n_R$ before entering the projection step, where $n_0$ is the number of filters used in the initial LR feature extraction and $n_R$ is the number of filters used in each projection unit.

  2. Back-projection stages. Following the initial feature extraction is a sequence of projection units, alternating between construction of HR and LR feature maps $H^t$, $L^t$; each unit has access to the outputs of all previous units.

  3. Reconstruction. Finally, the target HR image is reconstructed as $I^{sr} = f_{Rec}([H^1, H^2, \ldots, H^t])$, where $f_{Rec}$ uses $conv(3, 3)$ as the reconstruction layer and $[H^1, H^2, \ldots, H^t]$ refers to the concatenation of the feature maps produced in each up-projection unit.

Due to the definitions of these building blocks, our network architecture is modular. We can easily define and train networks with different numbers of stages, controlling the depth. For a network with $T$ stages, we have the initial extraction stage (2 layers), then $T$ up-projection units and $T-1$ down-projection units, each with 3 layers, followed by the reconstruction (one more layer). However, for the dense network, we add $conv(1, n_R)$ in each projection unit, except the first three units.
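Under these definitions, a compact sketch of the non-dense variant can be assembled as follows. It reuses the hypothetical UpProjectionUnit and DownProjectionUnit sketches from Sec. 3.1; the default filter counts ($n_0 = 128$, $n_R = 32$, luminance-only) follow the depth-analysis setting in Sec. 4.2.

```python
import torch
import torch.nn as nn

class DBPN(nn.Module):
    """Sketch of the modular DBPN layout (non-dense variant): initial feature
    extraction, T up- / (T-1) down-projection units, then reconstruction over
    the concatenated HR maps [H^1, ..., H^T]."""
    def __init__(self, T=2, n0=128, nR=32, channels=1):
        super().__init__()
        self.feat0 = nn.Sequential(nn.Conv2d(channels, n0, 3, padding=1), nn.PReLU())
        self.feat1 = nn.Sequential(nn.Conv2d(n0, nR, 1), nn.PReLU())  # conv(1, n_R)
        self.ups = nn.ModuleList([UpProjectionUnit(nR) for _ in range(T)])
        self.downs = nn.ModuleList([DownProjectionUnit(nR) for _ in range(T - 1)])
        self.reconstruction = nn.Conv2d(T * nR, channels, 3, padding=1)

    def forward(self, x):
        L = self.feat1(self.feat0(x))
        hr_maps = []
        for t in range(len(self.ups)):
            H = self.ups[t](L)        # up-project to an HR feature map H^t
            hr_maps.append(H)
            if t < len(self.downs):
                L = self.downs[t](H)  # down-project back to the LR space
        # Deep concatenation of all H^t, then conv(3, channels) reconstruction.
        return self.reconstruction(torch.cat(hr_maps, dim=1))
```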

4 Experimental Results

4.1 Implementation and training details

In the proposed networks, the filter size in the projection unit varies with the scaling factor. For $2\times$ enlargement, we use a $6 \times 6$ convolutional layer with stride 2 and padding 2; for $4\times$ enlargement, an $8 \times 8$ convolutional layer with stride 4 and padding 2; and for $8\times$ enlargement, a $12 \times 12$ convolutional layer with stride 8 and padding 2. (We found these settings to work well based on general intuition and preliminary experiments.)
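These triples can be sanity-checked with the standard deconvolution output-size formula. The short check below (plain Python, with an assumed 32x32 LR patch) confirms each setting scales a feature map by exactly the intended factor.

```python
# Projection-unit filter settings per scaling factor (Sec. 4.1).
# Deconvolution output size: out = (in - 1) * stride - 2 * padding + kernel.
PROJECTION_FILTERS = {2: (6, 2, 2), 4: (8, 4, 2), 8: (12, 8, 2)}  # scale: (k, s, p)

for scale, (k, s, p) in PROJECTION_FILTERS.items():
    in_size = 32                              # assumed LR patch size
    out_size = (in_size - 1) * s - 2 * p + k  # deconvolution output size
    assert out_size == scale * in_size, (scale, out_size)
```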

We initialize the weights based on [14]. Here, the std is computed as $\sqrt{2/n_l}$, where $n_l = f_t^2 n_t$, $f_t$ is the filter size, and $n_t$ is the number of filters; a short numeric check follows below. All convolutional and deconvolutional layers are followed by parametric rectified linear units (PReLUs).
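As a quick check of this formula, the concrete layer sizes below are our own illustration, not values from the paper.

```python
import math

def he_std(filter_size, n_filters):
    """He et al. [14] initialization: std = sqrt(2 / n_l) with n_l = f_t^2 * n_t."""
    return math.sqrt(2.0 / (filter_size ** 2 * n_filters))

# Illustrative example (hypothetical layer): a 3x3 layer with 8 filters.
print(round(he_std(3, 8), 3))  # 0.167
```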

We trained all networks using images from the DIV2K [44], Flickr [31], and ImageNet [36] datasets without augmentation. (Comparisons trained on DIV2K only are available in the supplementary material.) To produce LR images, we downscale the HR images by the corresponding scaling factor using bicubic interpolation. We use a batch size of 20 with $32 \times 32$ patches for the LR image, while the HR patch size corresponds to the scaling factor. The learning rate is initialized to $10^{-4}$ for all layers and decreased by a factor of 10 every $5 \times 10^{5}$ iterations, for a total of $10^{6}$ iterations. For optimization, we use Adam with momentum 0.9 and weight decay $10^{-4}$. All experiments were conducted using Caffe and MATLAB R2017a on NVIDIA TITAN X GPUs.
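A hedged PyTorch mirror of this schedule is sketched below; the original experiments used Caffe. `DBPN` refers to the sketch in Sec. 3.3, the random tensors merely stand in for real training patches, and the stated momentum 0.9 is mapped to Adam's beta_1.

```python
import torch

model = DBPN(T=2, channels=1)  # the DBPN sketch from Sec. 3.3 (4x unit setting)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), weight_decay=1e-4)
# Decay the learning rate by 10x every 5e5 of the 1e6 total iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500_000, gamma=0.1)
criterion = torch.nn.MSELoss()

for step in range(1_000_000):
    lr_img = torch.rand(20, 1, 32, 32)    # placeholder 32x32 LR patches, batch 20
    hr_img = torch.rand(20, 1, 128, 128)  # HR size follows the 4x scaling factor
    optimizer.zero_grad()
    loss = criterion(model(lr_img), hr_img)
    loss.backward()
    optimizer.step()
    scheduler.step()
```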

4.2 Model analysis

Figure 6: Depth analysis of DBPNs compared to other networks (VDSR [22], DRCN [23], DRRN [43], LapSRN [25]) on the Set5 dataset for 4x enlargement.

Depth analysis. To demonstrate the capability of our projection unit, we construct multiple networks from the original DBPN: S ($T = 2$), M ($T = 4$), and L ($T = 6$). In the feature extraction, we use $conv(3, 128)$ followed by $conv(1, 32)$. Then, we use $conv(3, 1)$ for the reconstruction. The input and output images are luminance only.

The results on 4x enlargement are shown in Fig. 6. DBPN outperforms the state-of-the-art methods. Even our shallow S network gives higher PSNR than VDSR, DRCN, and LapSRN, while using only 12 convolutional layers with a smaller number of filters than those methods. The M network shows a further improvement, performing better than all four existing state-of-the-art methods (VDSR, DRCN, LapSRN, and DRRN). In total, the M network uses 24 convolutional layers, the same depth as LapSRN; compared to DRRN (up to 52 convolutional layers), it clearly shows the effectiveness of our projection units. Finally, the L network outperforms all of these methods.

The results of 8x enlargement are shown in Fig. 7. Our networks outperform the current state of the art for 8x enlargement, which clearly shows the effectiveness of the proposed networks on large scaling factors. However, we found no significant performance gain between the proposed networks, especially between the deeper variants, where the difference is marginal.

Figure 7: Depth analysis of DBPN on the Set5 dataset for 8x enlargement: S ($T = 2$), M ($T = 4$), and L ($T = 6$).

Number of parameters. We show the trade-off between performance and the number of network parameters for our networks and existing deep network SR methods in Figs. 8 and 9.

For the sake of low computation for real-time processing, we construct the SS network, a lighter version of the S network that uses fewer filters in the initial feature extraction. Nevertheless, its results outperform SRCNN, FSRCNN, and VDSR on both 4x and 8x enlargement. Moreover, our SS network performs better than VDSR with substantially fewer parameters on both 4x and 8x enlargement.

Our networks have far fewer parameters and higher PSNR than LapSRN on 4x enlargement. Finally, D-DBPN has far fewer parameters and approximately the same PSNR compared to EDSR on 4x enlargement. On 8x enlargement, D-DBPN again has far fewer parameters with better PSNR compared to EDSR. This evidence shows that our networks achieve the best trade-off between performance and number of parameters.

Figure 8: Performance vs. number of parameters, evaluated on the Set5 dataset for 4x enlargement.
Figure 9: Performance vs. number of parameters, evaluated on the Set5 dataset for 8x enlargement.

Deep concatenation. Each projection unit is used to distribute the reconstruction step by constructing features which represent different details of the HR components. Deep concatenation is also closely related to the number of back-projection stages $T$: the more detailed features generated by the projection units, the higher the quality of the results. Fig. 10 shows that each stage successfully generates diverse features to reconstruct the SR image.

Figure 10: Sample activation maps from up-projection units in D-DBPN. Each feature map has been enhanced using the same grayscale colormap for visibility.

Dense connection. We implement D-DBPN-L, a densely connected version of the DBPN-L network, to show how dense connections improve the network’s performance in all cases, as shown in Table 1. On 4x enlargement, the dense network D-DBPN-L achieves higher PSNR than DBPN-L on both Set5 and Set14. On 8x, the gaps are even larger.

Set5 Set14
Algorithm Scale PSNR SSIM PSNR SSIM
DBPN-L 4
D-DBPN-L 4
DBPN-L 8
D-DBPN-L 8
Table 1: Comparison of DBPN-L and D-DBPN-L on 4x and 8x enlargement. Red indicates the best performance.

4.3 Comparison with the state of the art

Figure 11: Qualitative comparison of our models with other works on 4x super-resolution.

To confirm the ability of the proposed network, we performed several experiments and analyses. We compare our network with eight state-of-the-art SR algorithms: A+ [45], SRCNN [6], FSRCNN [7], VDSR [22], DRCN [23], DRRN [43], LapSRN [25], and EDSR [31]. We carry out extensive experiments using 5 datasets: Set5 [2], Set14 [50], BSDS100 [1], Urban100 [16], and Manga109 [33]. Each dataset has different characteristics. Set5, Set14, and BSDS100 consist of natural scenes; Urban100 contains urban scenes with details in different frequency bands; and Manga109 is a dataset of Japanese manga. Due to a computation limit of Caffe, we divide each image in Urban100 and Manga109 into four parts and calculate PSNR separately.

Our final network, D-DBPN, uses $conv(3, 256)$ then $conv(1, 64)$ for the initial feature extraction and $T = 7$ back-projection stages. In the reconstruction, we use $conv(3, 3)$. RGB color channels are used for the input and output images. Training takes less than four days.

PSNR [19] and structural similarity (SSIM) [48] were used to quantitatively evaluate the proposed method. Note that higher PSNR and SSIM values indicate better quality. As in existing networks, all measurements used only the luminance channel (Y). For SR by factor $s$, we crop $s$ pixels near the image boundary before evaluation, as in [31, 7]. Some of the existing networks, such as SRCNN, FSRCNN, VDSR, and EDSR, did not perform 8x enlargement; for these we retrained the existing networks using the authors’ code with the recommended parameters.
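A sketch of this evaluation protocol is given below. The BT.601 luma conversion is an assumption, since the text does not specify the exact RGB-to-Y transform.

```python
import numpy as np

def psnr_y(sr, hr, scale):
    """PSNR [19] on the luminance channel with an s-pixel boundary crop, as in
    Sec. 4.3. Inputs are HxWx3 uint8 RGB arrays; BT.601 luma is assumed."""
    def to_y(img):
        img = img.astype(np.float64)
        return 16 + (65.481 * img[..., 0] + 128.553 * img[..., 1]
                     + 24.966 * img[..., 2]) / 255
    y_sr = to_y(sr)[scale:-scale, scale:-scale]  # crop s pixels near the boundary
    y_hr = to_y(hr)[scale:-scale, scale:-scale]
    mse = np.mean((y_sr - y_hr) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```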

We show the quantitative results in Table 2. Our D-DBPN outperforms the existing methods by a large margin in all scales, except against EDSR. For 2x and 4x enlargement, we have PSNR comparable to EDSR. However, EDSR tends to generate stronger edges than the ground truth, which leads to misleading information in several cases. The EDSR result for the eyelashes in Fig. 11 shows that they were interpreted as a stripe pattern. In contrast, our result generates softer patterns which are subjectively closer to the ground truth. On the butterfly image, EDSR separates the white pattern, showing that EDSR tends to construct regular patterns such as circles and stripes, while D-DBPN constructs the same pattern as the ground truth. This observation is strengthened by the results on the Urban100 dataset, which consists of many regular patterns from buildings; there, EDSR achieves higher PSNR than D-DBPN.

Our network shows its effectiveness on 8x enlargement, where D-DBPN outperforms all of the existing methods by a large margin. Interesting results are shown on the Manga109 dataset, where D-DBPN obtains substantially higher PSNR than EDSR; on the Urban100 dataset, D-DBPN achieves 23.25 dB, which is only slightly better than EDSR. These results show that our networks perform better on fine-structured images such as manga characters, even though we do not use any animation images in training.

The results of 8x enlargement are visually shown in Fig. 12. Qualitatively, D-DBPN is able to preserve the HR components better than the other networks. This shows that our networks can not only extract features but also create contextual information from the LR input to generate HR components in the case of large scaling factors, such as 8x enlargement.

Figure 12: Qualitative comparison of our models with other works on 8x super-resolution. First row: LapSRN [25] (19.77 dB), EDSR [31] (19.79 dB), and Ours (19.82 dB). Second row: LapSRN [25] (16.45 dB), EDSR [31] (19.1 dB), and Ours (23.1 dB). Third row: LapSRN [25] (24.34 dB), EDSR [31] (25.29 dB), and Ours (28.84 dB).
Set5 Set14 BSDS100 Urban100 Manga109
Algorithm Scale PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
Bicubic 2
A+ [45] 2
SRCNN [6] 2
FSRCNN [7] 2
VDSR [22] 2
DRCN [23] 2
DRRN [43] 2
LapSRN [25] 2
EDSR [31] 2
D-DBPN 2
Bicubic 4
A+ [45] 4
SRCNN [6] 4
FSRCNN [7] 4
VDSR [22] 4
DRCN [23] 4
DRRN [43] 4
LapSRN [25] 4
EDSR [31] 4
D-DBPN 4
Bicubic 8
A+ [45] 8
SRCNN [6] 8
FSRCNN [7] 8
VDSR [22] 8
LapSRN [25] 8
EDSR [31] 8
D-DBPN 8
Table 2: Quantitative evaluation of state-of-the-art SR algorithms: average PSNR/SSIM for scale factors 2, 4 and 8. Red indicates the best and blue indicates the second best performance. (* indicates that the input is divided into four parts and calculated separately due to computation limitation of Caffe)

5 Conclusion

We have proposed Deep Back-Projection Networks for single-image super-resolution. Unlike previous methods, which predict the SR image in a purely feed-forward manner, our proposed networks focus on directly increasing the SR features using multiple up- and down-sampling stages and feeding the error predictions at each depth back into the network to revise the sampling results; the self-correcting features from each upsampling stage are then accumulated to create the SR image. We use error feedback from the up- and down-scaling steps to guide the network toward better results. The results show the effectiveness of the proposed networks compared to other state-of-the-art methods. Moreover, our proposed networks successfully outperform other state-of-the-art methods on large scaling factors such as 8x enlargement.

References

  • [1] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5):898–916, 2011.
  • [2] M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. A. Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In British Machine Vision Conference (BMVC), 2012.
  • [3] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4733–4742, 2016.
  • [4] S. Dai, M. Han, Y. Wu, and Y. Gong. Bilateral back-projection for single image super resolution. In Multimedia and Expo, 2007 IEEE International Conference on, pages 1039–1042. IEEE, 2007.
  • [5] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in neural information processing systems, pages 1486–1494, 2015.
  • [6] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence, 38(2):295–307, 2016.
  • [7] C. Dong, C. C. Loy, and X. Tang. Accelerating the super-resolution convolutional neural network. In European Conference on Computer Vision, pages 391–407. Springer, 2016.
  • [8] W. Dong, L. Zhang, G. Shi, and X. Wu. Nonlocal back-projection for adaptive image enlargement. In Image Processing (ICIP), 2009 16th IEEE International Conference on, pages 349–352. IEEE, 2009.
  • [9] D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral cortex (New York, NY: 1991), 1(1):1–47, 1991.
  • [10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [11] M. Haris, M. R. Widyanto, and H. Nobuhara. First-order derivative-based super-resolution. Signal, Image and Video Processing, 11(1):1–8, 2017.
  • [12] M. Haris, M. R. Widyanto, and H. Nobuhara. Inception learning super-resolution. Appl. Opt., 56(22):6043–6048, Aug 2017.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
  • [15] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [16] J.-B. Huang, A. Singh, and N. Ahuja. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5197–5206, 2015.
  • [17] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017.
  • [18] M. Irani and S. Peleg. Improving resolution by image registration. CVGIP: Graphical models and image processing, 53(3):231–239, 1991.
  • [19] M. Irani and S. Peleg. Motion analysis for image enhancement: Resolution, occlusion, and transparency. Journal of Visual Communication and Image Representation, 4(4):324–335, 1993.
  • [20] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
  • [21] A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos. Video super-resolution with convolutional neural networks. IEEE Transactions on Computational Imaging, 2(2):109–122, 2016.
  • [22] J. Kim, J. Kwon Lee, and K. Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1646–1654, June 2016.
  • [23] J. Kim, J. Kwon Lee, and K. Mu Lee. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1637–1645, 2016.
  • [24] D. J. Kravitz, K. S. Saleem, C. I. Baker, L. G. Ungerleider, and M. Mishkin. The ventral visual pathway: an expanded neural framework for the processing of object quality. Trends in cognitive sciences, 17(1):26–49, 2013.
  • [25] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In IEEE Conferene on Computer Vision and Pattern Recognition, 2017.
  • [26] V. A. Lamme and P. R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent processing. Trends in neurosciences, 23(11):571–579, 2000.
  • [27] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
  • [28] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017.
  • [29] K. Li, B. Hariharan, and J. Malik. Iterative instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3659–3667, 2016.
  • [30] R. Liao, X. Tao, R. Li, Z. Ma, and J. Jia. Video super-resolution via deep draft-ensemble learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 531–539, 2015.
  • [31] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee. Enhanced deep residual networks for single image super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
  • [32] W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. arXiv preprint arXiv:1605.08104, 2016.
  • [33] Y. Matsui, K. Ito, Y. Aramaki, A. Fujimoto, T. Ogawa, T. Yamasaki, and K. Aizawa. Sketch-based manga retrieval using manga109 dataset. Multimedia Tools and Applications, pages 1–28, 2016.
  • [34] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • [35] S. Ross, D. Munoz, M. Hebert, and J. A. Bagnell. Learning message-passing inference machines for structured prediction. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2737–2744. IEEE, 2011.
  • [36] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  • [37] M. S. Sajjadi, B. Schölkopf, and M. Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. arXiv preprint arXiv:1612.07919, 2016.
  • [38] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874–1883, 2016.
  • [39] A. Shrivastava and A. Gupta. Contextual priming and feedback for faster r-cnn. In European Conference on Computer Vision, pages 330–348. Springer, 2016.
  • [40] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [41] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [42] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
  • [43] Y. Tai, J. Yang, and X. Liu. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [44] R. Timofte, E. Agustsson, L. Van Gool, M.-H. Yang, L. Zhang, B. Lim, S. Son, H. Kim, S. Nah, K. M. Lee, et al. Ntire 2017 challenge on single image super-resolution: Methods and results. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pages 1110–1121. IEEE, 2017.
  • [45] R. Timofte, V. De Smet, and L. Van Gool. A+: Adjusted anchored neighborhood regression for fast super-resolution. In Asian Conference on Computer Vision, pages 111–126. Springer, 2014.
  • [46] R. Timofte, R. Rothe, and L. Van Gool. Seven ways to improve example-based single image super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1865–1873, 2016.
  • [47] Z. Tu and X. Bai. Auto-context and its application to high-level vision tasks and 3d brain image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1744–1757, 2010.
  • [48] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. Image Processing, IEEE Transactions on, 13(4):600–612, 2004.
  • [49] A. R. Zamir, T.-L. Wu, L. Sun, W. Shen, J. Malik, and S. Savarese. Feedback networks. arXiv preprint arXiv:1612.09508, 2016.
  • [50] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. In Curves and Surfaces, pages 711–730. Springer, 2012.
  • [51] Y. Zhao, R.-G. Wang, W. Jia, W.-M. Wang, and W. Gao. Iterative projection reconstruction for fast and efficient image upsampling. Neurocomputing, 226:200–211, 2017.