High-Order Residual Network for Light Field Super-Resolution

Abstract

Plenoptic cameras usually sacrifice the spatial resolution of their sub-aperture images (SAIs) to acquire geometry information from different viewpoints. Several methods have been proposed to mitigate this spatio-angular trade-off, but they seldom exploit the structural properties of the light field (LF) data efficiently. In this paper, we propose a novel high-order residual network to learn the geometric features hierarchically from the LF for reconstruction. An important component of the proposed network is the high-order residual block (HRB), which learns local geometric features by considering the information from all input views. After fully obtaining the local features learned by each HRB, our model extracts representative geometric features for spatio-angular upsampling through global residual learning. Additionally, a refinement network is appended to further enhance the spatial details by minimizing a perceptual loss. Compared with previous work, our model is tailored to the rich structure inherent in the LF, and therefore reduces artifacts near non-Lambertian and occluded regions. Experimental results show that our approach enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single image and LF reconstruction methods in both quantitative measurements and visual evaluation.

Compared to a 2D imaging system, the plenoptic camera not only captures the accumulated intensity of a light ray at each point in space, but also provides the directional radiance information. Together they form the light field (LF), which has shown advantages over 2D imagery in problems such as disparity estimation [10, 24] or 3D reconstruction [7] of a scene, generation of images for a novel viewpoint [12, 19], and refocusing [21].

Nevertheless, in practice, it can be difficult to achieve a dense sampling of the entire LF due to the limited resolution of the camera sensor. Acquiring densely sampled sub-aperture images (SAIs) usually comes at the expense of viewpoint information, or vice versa [27]. As a result, the LF views exhibit a lower spatial resolution than images obtained by conventional cameras, and many applications such as depth estimation are constrained by low-resolution (LR) images, which increases the importance of algorithms for light field super-resolution (LFSR).

Different from single image super-resolution (SISR), a LF is characterized by a structure that needs to be maintained when increasing the data resolution. Such structural information is implicitly encoded in the neighboring views, leading to non-integer shifts between corresponding pixels in the view images [31]. From this perspective, most depth-based approaches [27, 21] generally depend on such geometric properties as priors to explicitly register the novel SAIs from other views. Their success depends on accurate depth information, which is however challenging to acquire. Consequently, disparity errors give rise to artifacts such as tearing and ghosting, especially in occluded or non-Lambertian areas where the depth information is not properly estimated.

Recently, deep learning has been shown to be powerful in various computer vision applications [17, 33], including LFSR [32, 20, 18]. Learning-based approaches relieve the dependency on explicit depth information, leading to improved robustness at depth discontinuities. However, the intrinsic limitation of 2D (or 3D) convolution makes it difficult for existing frameworks to handle the high-dimensional structure of a LF, and therefore most learning-based algorithms simplify the reconstruction to consider only the spatio-angular relations in epipolar plane images (EPIs) [32], or the angular correlations among adjacent views [34, 35]. Given that the geometric information is encoded in a complex way within the LF, such simplifications result in performance degradation.

To remedy the problems of existing learning-based approaches for LFSR, we propose a framework tailored to the LF structural information. Such an approach enables the network to learn representations by fully exploiting the LF information from all adjacent views. The main contributions of our model are threefold: 1) We propose a novel high-order structure, named the high-order residual block (HRB), to learn features by fully considering the information from all SAIs of a LF. The features extracted by the HRB preserve high angular coherence. 2) By stacking a set of HRBs, the proposed network is able to extract diverse spatial features endowed with scene geometry information. In addition, the network propagates the geometric information encoded in the learned features to achieve high reconstruction quality consistent with the LF structural property. 3) Experimental results demonstrate that our model not only outperforms state-of-the-art reconstruction methods on quantitative measurements but also generates spatial details and novel views with better fidelity.

Related Work

Spatial super-resolution

A number of super-resolution algorithms [1, 16, 25] have been developed specifically for LF data. For example, in [27], a variational framework is applied to super-resolve a novel view based on a continuous depth map calculated from the epipolar plane images (EPIs). Mitra and Veeraraghavan [21] proposed a patch-based approach, which is also based on estimated depth information. These methods usually require an accurate estimate of the disparity information, which can be challenging to obtain from LR images. Learning-based approaches mitigate the dependency on geometric disparity, and therefore are more robust in regions where the depth information is difficult to estimate correctly. In [34], LFCNN cascades two CNNs to enhance the target views and generate novel perspectives based on the super-resolved views. However, the stepwise processing does not make use of the entire structural information of the LF, and therefore limits the potential of the model. Recently, [26] employed a bidirectional recurrent CNN framework to model the spatial correlations horizontally and vertically. Likewise, [2] adopted an example-based spatial SR algorithm on the patch-volumes across the SAIs. Both approaches treat the LF as an image sequence, and therefore lose information along one angular dimension. In contrast, [35] utilized the relations of SAIs in 4 different directions to super-resolve the center target image. By considering the angular information from multiple directions, their model achieves superior performance over previous methods.

Angular super-resolution

Angular super-resolution for LF is also known as view synthesis. Many techniques [12, 19, 27] take advantage of the disparity map to warp the existing SAIs to novel views. For instance, [22] introduced a layer-based synthesis method to render arbitrary views by using probabilistic interpolation and calculating the depth layer information. [37] adopted a phase-based method, which integrates the disparity into a phase term of a reference image to warp the input view to any close novel view.

Similar to spatial super-resolution, depth-based techniques are inadequate in occluded and textureless regions, which prompts researchers to explore algorithms based on CNNs. [4] is among the first works to apply deep learning to view synthesis from a set of images with wide baselines. Meanwhile, [12] exploited two sequential CNNs to estimate depth and color information, and subsequently warped the inputs to generate the novel view. The dependency on disparity restricts the model performance and easily results in ghosting effects near occluded regions. In [32], the authors proposed a blur-deblur scheme to address the asymmetry problem caused by sparse angular sampling. However, this EPI-based model only utilizes horizontal or vertical angular correlations of a low-resolution LF, which severely restricts the information accessible to the model. Recently, [29] applied alternating convolutions to learn the spatio-angular clues for view synthesis, achieving more accurate results.

Compared with the aforementioned approaches, we explore a deeper residual structure for both spatial and angular SR of the LF. The proposed network can harness the high-dimensional LF data efficiently to extract geometric features, which contribute to the high reconstruction accuracy.

Method

Problem formulation

Following [15, 5], a light ray is defined by its intersection points $(u, v)$ and $(x, y)$ with an angular plane and a spatial plane, respectively, so a 4D LF can be written as $L(x, y, u, v)$. We consider LFSR as the recovery of the HR LF $L_{hr}$ from the input LR LF $L_{lr}$ by a spatial SR factor $\alpha_s$ and an angular SR factor $\alpha_a$, respectively. The learning-based SR process can be described as

$$\hat{L}_{hr} = f(L_{lr}; \theta), \qquad (1)$$

where $\hat{L}_{hr}$ stands for the super-resolved LF, $f(\cdot)$ represents the mapping from the LR to the super-resolved LF, and $\theta$ denotes the parameters of the model.

Figure 1: The architecture of our proposed hierarchical high-order network.

Architecture overview

The intrinsic limitation of 2D and 3D convolution makes existing schemes unable to fully exploit the highly correlated 4D LF data. As a result, most existing methods consider only partial spatio-angular relations (e.g., EPI) [32, 30] or angular correlations (e.g., SAI) [12, 26, 35], which underuses the potential of the LF. To resolve the problem, we employ a high-order convolution (HConv) that encapsulates the information from all coordinates by convolving a 4D kernel with the inputs. For any hidden layer $l$, the operation of the HConv (together with the following activation layer) is implemented as $F_l = \sigma(W_l * F_{l-1})$. Here $W_l$ denotes the weights of the layer with size $c \times s \times s \times a \times a$, where $c$ is the channel number of the filter bank, $s$ is the spatial filter size and $a$ is the angular filter size. $F_{l-1}$ stands for the input of layer $l$, and the activation function $\sigma(\cdot)$ is the leaky rectified linear unit (LReLU). The notation $*$ is the convolution between an input feature map and a filter (see the sketch at the end of this subsection). Furthermore, to utilize spatial information hierarchically from all SAIs, we design the high-order residual block (HRB) to effectively extract the geometric features from a LF. As shown in Fig. 1, the proposed network mainly consists of four parts: 1) shallow feature extraction, 2) geometric representation learning network (GRLNet), 3) upsampling network (UpNet), and 4) spatial refinement network (SReNet). Specifically, we use a HConv layer to extract shallow features $F_0$ from the LR input:

$$F_0 = H_{\mathrm{HConv}}(L_{lr}), \qquad (2)$$

where $H_{\mathrm{HConv}}(\cdot)$ denotes the HConv operation. Subsequently, in the GRLNet, the representations are learned in a hierarchical manner by a set of HRBs. Assuming there are $N$ HRBs, the feature maps of the $n$-th HRB can be expressed as:

$$F_n = H_n(F_{n-1}) = (H_n \circ H_{n-1} \circ \cdots \circ H_1)(F_0), \qquad (3)$$

where $H_n(\cdot)$ denotes the operation of the $n$-th HRB, and the symbol $\circ$ stands for function composition. The mapping $H_n(\cdot)$ can be a composite function of operations, including HConvs, to fully utilize all the view information within the block (Fig. 2) and obtain the local geometric feature $F_n$. By cascading multiple HRBs, the geometric features are learned in a hierarchical manner during the training of the GRLNet, and therefore more representative features with diverse spatial representations are obtained. We then apply global residual learning to combine the hierarchically learned geometric features and the shallow features before conducting upsampling by

$$F_{GF} = \mathcal{B}(F_N) + F_0, \qquad (4)$$

where $\mathcal{B}(\cdot)$ is an operation of batch normalization defined later. The following UpNet then upsamples the obtained feature maps from the LR space to the HR space:

(a) The angular receptive field of the HRB
(b) The structure of the HRB
Figure 2: The high-order residual block (HRB) architecture, where $\oplus$ denotes element-wise addition.
$$F_{\uparrow} = H_{\mathrm{up}}(F_{GF}), \qquad (5)$$

where $H_{\mathrm{up}}(\cdot)$ is used to describe the upsampling operation on the LR features. In our experiments, however, directly reconstructing the HR LF based on the fused features is difficult, and the results tend to lack high-frequency spatial details. Therefore, we employ a refinement network (SReNet), supervised by a perceptual loss, to recover the spatial details in the HR space:

$$F_{re} = H_{\mathrm{SRe}}(F_{\uparrow}) + F_{\uparrow}, \qquad (6)$$

where $H_{\mathrm{SRe}}(\cdot)$ denotes the SReNet operation and $F_{re}$ denotes the fused refined feature, which is further used for the reconstruction of the final super-resolved LF:

$$\hat{L}_{hr} = H_{\mathrm{rec}}(F_{re}). \qquad (7)$$

In the following sections, we will describe the components of the proposed high-order network in detail and demonstrate the properties of the learned geometric features.
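To make the HConv operation concrete, the following minimal PyTorch sketch (our own illustration, not the authors' released code) implements a full 4D convolution over the angular dimensions $(u, v)$ and the spatial dimensions $(x, y)$ by folding one angular dimension into the batch and accumulating 3D convolutions; the function name `hconv4d` and the tensor layout `(N, C, U, V, X, Y)` are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def hconv4d(x, weight, bias=None):
    """Illustrative 4D high-order convolution (HConv) sketch.

    x      : (N, C_in, U, V, X, Y) -- angular dims (U, V), spatial dims (X, Y)
    weight : (C_out, C_in, a, a, s, s) -- angular kernel a, spatial kernel s (odd sizes assumed)
    """
    n, c_in, u, v, xs, ys = x.shape
    c_out, _, a, _, s, _ = weight.shape
    pa, ps = a // 2, s // 2
    # Zero-pad the first angular dimension; conv3d below pads (V, X, Y) itself.
    x = F.pad(x, (0, 0, 0, 0, 0, 0, pa, pa))
    out = 0
    for du in range(a):
        # Slide a window of length U along the padded U axis, fold U into the
        # batch, and convolve the remaining (V, X, Y) dims with a 3D kernel.
        xi = x[:, :, du:du + u].permute(0, 2, 1, 3, 4, 5).reshape(n * u, c_in, v, xs, ys)
        oi = F.conv3d(xi, weight[:, :, du], padding=(pa, ps, ps))
        out = out + oi.reshape(n, u, c_out, v, xs, ys).permute(0, 2, 1, 3, 4, 5)
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1, 1, 1)
    return out  # the LReLU activation is applied by the caller, as in the text
```

A complete HConv layer would then apply the LReLU activation to the output of `hconv4d`, as described above.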

High-order residual block

(a) 2D feature slices
(b) Feature EPIs
Figure 3: Visualization of the geometric features. (a) The collection of 2D slices through the learned feature maps. (b) The EPI located at the corresponding lines.

As a basic building block of our network, the HRB's structure is presented in Fig. 2. As shown in Fig. 2(a), each HRB contains two HConv layers whose angular receptive field makes it possible for the block to fully utilize the information from all SAIs of the input features. In addition, to ease the training of the proposed high-order network, we apply a normalization operation to the outputs of the HConv layer [9]. Given that the inputs preserve high coherence among the views, the normalization should not be computed in an aperture-wise manner, to avoid the whitening decorrelating this coherence. We consequently implement the normalization over a group of SAIs in every channel of the feature maps, and therefore propose an aperture group batch normalization (AGBN).

Let the outputs of a particular hidden HConv layer be $\{F^{c}_{u,v}\}$, where $u$ and $v$ stand for the two indices of the angular dimensions as defined in the problem formulation. The superscript $c \in \{1, \dots, C\}$ indexes the channels, and each feature SAI $F^{c}_{u,v}$ contains $x \times y$ values. Therefore, the AGBN transform is implemented as in Algorithm 1.

Input: Features from a HConv layer: $\{F^{c}_{u,v}\}$;  Parameters $\gamma$ and $\beta$
Output: The output features: $\{\tilde{F}^{c}_{u,v}\}$
1 for each channel $c$ do
2        compute the mean $\mu_{c}$ and variance $\sigma^{2}_{c}$ over the aperture group, i.e., over all SAIs $(u, v)$ and all spatial positions;
3        normalize: $\hat{F}^{c}_{u,v} \leftarrow (F^{c}_{u,v} - \mu_{c}) / \sqrt{\sigma^{2}_{c} + \epsilon}$;
4        scale and shift: $\tilde{F}^{c}_{u,v} \leftarrow \gamma \hat{F}^{c}_{u,v} + \beta$;
5 end for
Algorithm 1 Aperture group batch normalization
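For clarity, a minimal PyTorch sketch of AGBN under the same `(N, C, U, V, X, Y)` layout is given below; whether the batch dimension is pooled into the statistics and how running statistics are handled at test time are assumptions of this sketch rather than details stated in the paper.

```python
import torch
import torch.nn as nn

class AGBN(nn.Module):
    """Aperture group batch normalization (sketch).

    Statistics are computed per channel over the whole aperture group, i.e.
    over all SAIs and spatial positions (and the batch), instead of per SAI,
    so the whitening does not decorrelate the angular coherence."""

    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1, 1, 1))
        self.eps = eps

    def forward(self, x):                           # x: (N, C, U, V, X, Y)
        mu = x.mean(dim=(0, 2, 3, 4, 5), keepdim=True)
        var = x.var(dim=(0, 2, 3, 4, 5), unbiased=False, keepdim=True)
        x_hat = (x - mu) / torch.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta       # scale and shift (Algorithm 1)
```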

By stacking the layers as shown in Fig. 2(b), the HRB is able to extract features that preserve geometric properties by considering all SAIs. The learned geometric features not only contain spatial structures (such as textures or edges) but also record the relations between adjacent feature views. Fig. 3 exhibits an example of the geometric features learned by the HRB. To illustrate such high-dimensional features, we display a grid of 2D slices through the 4D features in Fig. 3(a), and the EPIs located at the corresponding lines in a certain feature slice in Fig. 3(b). The feature EPIs closely resemble the LF EPIs, reflecting that the HRB has the capacity to extract features preserving high coherence.
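Building on the `hconv4d` and `AGBN` sketches above, a possible HRB module is outlined below; the kernel sizes, the LReLU slope, and the exact ordering of convolution, normalization, and activation inside the block are illustrative assumptions inferred from Fig. 2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HRB(nn.Module):
    """High-order residual block (sketch): two HConv layers with AGBN, LReLU,
    and an element-wise residual connection, reusing hconv4d and AGBN above."""

    def __init__(self, channels, a=3, s=3):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(channels, channels, a, a, s, s) * 1e-2)
        self.w2 = nn.Parameter(torch.randn(channels, channels, a, a, s, s) * 1e-2)
        self.bn1, self.bn2 = AGBN(channels), AGBN(channels)

    def forward(self, x):                           # x: (N, C, U, V, X, Y)
        h = F.leaky_relu(self.bn1(hconv4d(x, self.w1)))
        h = self.bn2(hconv4d(h, self.w2))
        return x + h                                # element-wise addition (Fig. 2)
```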

Geometric representation learning network

Figure 4: Illustration of the geometric features extracted from different HRBs in the GRLNet.

The GRLNet is composed of a set of cascaded HRBs. Such a structure enables the network to learn multiple spatial representations endowed with geometry information. Compared with the features extracted by traditional CNN-based models, the learned geometric features differ in two aspects: 1) the high coherence among SAIs in the angular dimension; 2) the smoothing effects near object borders in the spatial dimension. The former has been discussed in Fig. 3(b), where we show the EPI property of the features. The latter can be illustrated with the features from different HRBs. In Fig. 4, we visualize the spatial appearance of the geometric features extracted from three different HRBs. The red boxes zoom in on the features at object borders, while the blue boxes zoom in on the texture features. Compared with the reconstruction result, the edges of the object borders in the feature space (red boxes) are not as sharp. Such effects are caused by the rapid changes in the parallax at the object borders, and therefore indicate the scene geometry. In addition to the diverse spatial representations, the angular coherence of the features is also maintained (e.g., the EPI in the yellow boxes). Consequently, the GRLNet is able to learn more representative spatial features hierarchically through a set of HRBs while simultaneously propagating the geometric information.
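Under the same assumptions, the GRLNet with its global residual connection (Eqs. 3 and 4) could be composed roughly as follows; the number of blocks is set to 8 here only because the ablation study later uses 8 HRBs.

```python
import torch.nn as nn

class GRLNet(nn.Module):
    """Cascade of N HRBs followed by AGBN and a global residual connection
    back to the shallow features F0 (sketch of Eqs. 3 and 4), reusing the
    HRB and AGBN sketches above."""

    def __init__(self, channels, num_blocks=8):
        super().__init__()
        self.blocks = nn.ModuleList([HRB(channels) for _ in range(num_blocks)])
        self.norm = AGBN(channels)

    def forward(self, f0):                # f0: shallow features (N, C, U, V, X, Y)
        f = f0
        for block in self.blocks:
            f = block(f)                  # F_n = H_n(F_{n-1})
        return self.norm(f) + f0          # F_GF = B(F_N) + F_0
```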

Upsampling network

Figure 5: Illustration of UpNet for spatio-angular resolution enhancement. The red arrow stands for the HConv operation, the yellow one denotes angular linear interpolation, and the green one denotes the channel-to-space pixel shuffle operation.

The upsampling network is applied to increase the spatio-angular resolution using the extracted hierarchical geometric features in the LR space. We design the network to fit the properties of the LF geometric features. As illustrated in Fig. 5, we assume a single LR feature map with a single channel as input. The feature map has dimension $u \times v \times x \times y$, where $u \times v$ is the angular resolution and $x \times y$ is the spatial resolution. The spatial upsampling factor is $\alpha_s$, and the angular dimension is increased from $u \times v$ to $u' \times v'$. The first step expands the feature channel by a factor of $\alpha_s^2$ using the HConv operation. Then, given the EPI property of the geometric features, we apply a linear interpolation to the angular dimensions of the feature maps to upscale the angular resolution to $u' \times v'$. Finally, the channel-to-space shuffle operation is applied to increase the spatial resolution by a factor of $\alpha_s$.
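The three UpNet steps can be sketched as follows, again assuming the `(N, C, U, V, X, Y)` layout; using a 1x1 2D convolution in place of the HConv for channel expansion, and bilinear interpolation for the angular upsampling, are simplifications of this sketch.

```python
import torch
import torch.nn.functional as F

def upnet_sketch(feat, expand_w, alpha=2, ang_out=(7, 7)):
    """UpNet steps (sketch): channel expansion -> angular interpolation ->
    channel-to-space pixel shuffle.

    feat     : (N, C, U, V, X, Y) LR geometric features
    expand_w : (C * alpha**2, C, 1, 1) weights standing in for the HConv
    alpha    : spatial upsampling factor
    ang_out  : target angular resolution (U', V')
    """
    n, c, u, v, x, y = feat.shape
    # 1) Expand the channels by alpha**2 so they can later be shuffled to space.
    z = F.conv2d(feat.permute(0, 2, 3, 1, 4, 5).reshape(n * u * v, c, x, y), expand_w)
    z = z.reshape(n, u, v, c * alpha ** 2, x, y)
    # 2) Linearly interpolate along the angular dimensions (U, V) -> (U', V').
    z = z.permute(0, 3, 4, 5, 1, 2).reshape(n, -1, u, v)
    z = F.interpolate(z, size=ang_out, mode='bilinear', align_corners=True)
    au, av = ang_out
    z = z.reshape(n, c * alpha ** 2, x, y, au, av)
    # 3) Channel-to-space shuffle: fold the alpha**2 factor into (X, Y).
    z = z.permute(0, 4, 5, 1, 2, 3).reshape(n * au * av, c * alpha ** 2, x, y)
    z = F.pixel_shuffle(z, alpha)
    return z.reshape(n, au, av, c, x * alpha, y * alpha).permute(0, 3, 1, 2, 4, 5)
```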

The UpNet upsamples the learned geometric features to the HR space. Such features are used to reconstruct the primary super-resolved LF directly to get the per-pixel reconstruction loss for training, and are also passed to the SReNet to further recover the high-frequency details.

Spatial refinement network

The SReNet aims at restoring the realistic spatial details of the previous super-resolved output. Given that the GRLNet is trained using a pixel-wise loss, it tends to generate smooth results with poor fidelity [14, 6]. The SReNet in contrast learns the geometric features directly in the HR space and is supervised by a novel perceptual loss defined on SAIs to make the reconstruction sharper. In the experiments, we will discuss the effects of the SAI-wise perceptual loss for recovering spatial details.

Loss function

We propose a two-stage loss function (see Fig. 1) to encourage the proposed network to learn the geometric features and reconstruct high-quality spatial details. In general, the loss function is a linear combination of two terms:

$$\mathcal{L} = \mathcal{L}_{r} + \lambda \mathcal{L}_{p}. \qquad (8)$$

Here $\lambda$ is a weighting factor. The reconstruction loss $\mathcal{L}_{r}$ models the pixel-wise difference between the super-resolved LF $\hat{L}_{hr}$ and the ground truth $L_{hr}$:

$$\mathcal{L}_{r} = \big\| \hat{L}_{hr} - L_{hr} \big\|. \qquad (9)$$

The perceptual loss $\mathcal{L}_{p}$ measures the quality of the spatial reconstruction. Inspired by [11], we define it on high-level features $\phi(\cdot)$ extracted from a VGG network to describe the aperture-wise differences,

$$\mathcal{L}_{p} = \sum_{(u, v)} \big\| \phi\big(\hat{L}_{hr}^{(u, v)}\big) - \phi\big(L_{hr}^{(u, v)}\big) \big\|, \qquad (10)$$

where $\hat{L}_{hr}^{(u, v)}$ and $L_{hr}^{(u, v)}$ denote the SAI reconstructed from the LR input and the ground-truth SAI with angular coordinate $(u, v)$, respectively.
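A hedged sketch of the two-stage objective is shown below; the choice of VGG layer, the norm used for the reconstruction term, and the weighting `lam` are illustrative assumptions rather than the exact settings of the paper.

```python
import torch
import torch.nn.functional as F
import torchvision

# VGG feature extractor for the perceptual term (the layer cut is an assumption).
vgg_feat = torchvision.models.vgg19(pretrained=True).features[:16].eval()

def lf_loss(sr, hr, lam=1e-2):
    """Reconstruction loss + SAI-wise perceptual loss.  sr, hr: (N, C, U, V, X, Y)."""
    recon = F.l1_loss(sr, hr)                    # pixel-wise term (norm choice assumed)
    n, c, u, v, x, y = sr.shape
    # Treat every SAI as an ordinary 2D image for the VGG-based perceptual term
    # (ImageNet normalization omitted for brevity).
    sr2d = sr.permute(0, 2, 3, 1, 4, 5).reshape(n * u * v, c, x, y)
    hr2d = hr.permute(0, 2, 3, 1, 4, 5).reshape(n * u * v, c, x, y)
    if c == 1:                                   # replicate luminance to 3 channels for VGG
        sr2d, hr2d = sr2d.repeat(1, 3, 1, 1), hr2d.repeat(1, 3, 1, 1)
    with torch.no_grad():
        target = vgg_feat(hr2d)
    perceptual = F.mse_loss(vgg_feat(sr2d), target)
    return recon + lam * perceptual
```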

Table 1: Quantitative evaluation of state-of-the-art methods for spatial and angular LFSR. We report the average PSNR and SSIM over all sub-aperture images for each spatial and angular SR task. The bold values indicate the best performance.

Experiments

Data and experiment settings

In the experiments, we randomly select 100 scenes from the Lytro Archive (Stanford) (excluding "Occlusions" and "Reflective") and the entire Fraunhofer densely-sampled high-resolution dataset [38] for training. The former consists of 353 real-world scenes captured using a Lytro Illum camera with a small baseline; we exclude the corner samples and only select the center views in the experiments. The latter contains 9 real-world scenes that are densely sampled by a high-resolution camera with a larger baseline. The experimental results show that our trained network generalizes to various synthetic and real-world scenes, as well as some microscopy light fields, indicating that the learned geometric features are generic across these situations.

During training, the system each time receives a spatially cropped 4D LF patch as input. For spatial SR, the downsampling is based on the classical model [3]

$$L_{lr} = (L_{hr} * k)\!\downarrow_{\alpha_s} +\, n, \qquad (11)$$

where $n$ is Gaussian noise with zero mean and unit standard deviation, $\downarrow_{\alpha_s}$ denotes the nearest-neighbor downsampling operator applied to each view, $\alpha_s$ is the magnification factor, and $k$ stands for a Gaussian blurring kernel with a fixed window size and standard deviation. The network is trained using the stochastic gradient descent solver with an initial learning rate that is decreased by a fixed factor every 10 epochs. The entire implementation is available at https://github.com/monaen/LightFieldReconstruction.
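For reference, an LR view can be generated from an HR view along the lines of Eq. 11 as in the sketch below; the blur and noise parameters are placeholders, since their exact values are not restated here, and a single-channel SAI is assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_view(hr_view, alpha=2, blur_sigma=1.0, noise_sigma=0.0):
    """Classical degradation of a single SAI (sketch of Eq. 11): Gaussian blur,
    nearest-neighbor downsampling by `alpha`, plus optional Gaussian noise."""
    blurred = gaussian_filter(hr_view.astype(np.float64), sigma=blur_sigma)
    lr_view = blurred[::alpha, ::alpha]          # nearest-neighbor decimation
    if noise_sigma > 0:
        lr_view = lr_view + np.random.normal(0.0, noise_sigma, lr_view.shape)
    return lr_view
```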

Loss evaluation

To examine the effectiveness of the different loss components, we adjust the proposed network to obtain multiple variants, which are further trained using different losses. The number of parameters of the variants is kept constant to control the representational capacity of the models. In Table 2, we evaluate the performance of the variants with 8 HRBs in total. By combining the reconstruction and perceptual losses, the model achieves comprehensively better quantitative results and reconstructs the LF with good visual fidelity (refer to the supplementary materials).

Table 2: Ablation study of different components in the proposed model. "G8" denotes 8 HRBs in the GRLNet, "S8" denotes 8 HRBs in the SReNet, and "G5S3" stands for 5 HRBs in the GRLNet and 3 HRBs in the SReNet.

Spatial super-resolution evaluation

For evaluation in terms of spatial resolution, we compare against several top-performing algorithms designed for LFSR, including LFCNN [34], BM PCA+RR [2], LFNet [26] and Zhang et al. [35], as well as state-of-the-art methods for SISR, such as MSLapSRN [13] and RDN [36]. For a fair comparison, all the methods are retrained using the same datasets and the downsampling model described in Eq. 11; we carefully fine-tune each algorithm using its publicly available code to reach its best performance. Table 1 shows the quantitative comparisons for the spatial SR tasks on five public LF datasets. The real-world datasets consist of 20 scenes from "Occlusions" and 20 scenes from "Reflective" in the Stanford Lytro Archive (Stanford), and 21 scenes from EPFL [23], while the synthetic datasets are selected from the HCI dataset [8, 28]. The results are measured in terms of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) over the center views of each evaluation scene, and we report the average value on each test dataset.

Fig. 6 compares the visual reconstruction results for spatial SR. Among the LFSR algorithms, LFCNN receives pairs of SAIs as input without modeling their correlations. Therefore, it underuses the angular information and tends to generate over-smoothed results with blurry details. Likewise, the two SISR methods (MSLapSRN and RDN) do not model the correlations either, which leads to blurring and twisting in their EPIs. BM PCA+RR and LFNet simplify the problem by considering only one dimension of angular correlations. Such a strategy restricts the performance of these methods in terms of both visual results (Fig. 6) and quantitative measurements (Table 1). In contrast, the recent approach proposed by Zhang et al. [35] exploits the angular information from 4 directions for LF spatial SR. By integrating more directional information, the algorithm shows better quantitative results on the spatial SR tasks. Nevertheless, their approach fuses the angular information by roughly concatenating the SAIs from the 4 directions in the channel dimension. Given that the differences between adjacent views decrease rapidly in the LR LF, its performance on spatial SR is adversely affected. Compared with these methods, our model exploits the entire angular information from all directions. In addition, such angular information is further fused into the learned geometric features during the network training. In this way, all of the structural information is utilized for the final reconstruction, allowing our model to achieve superior performance in terms of both visual fidelity (e.g., the texture on the wall in Fig. 6) and quantitative measurements. More visual results on both real-world and synthetic scenes are presented in our supplementary materials to illustrate the generalization ability of the model.

Figure 6: Visual comparison for spatial SR on the real-world scene General 15 from Stanford Archive.

Angular super-resolution evaluation

Figure 7: Visual comparison of our model with Kalantari et al. and Yeung et al., and with Wu et al., on the angular SR tasks.

For angular super-resolution, all the models are trained using only the 100 Lytro Archive scenes. We carry out comparisons with three state-of-the-art CNN-based methods, namely Kalantari et al. [12], Wu et al. [32] and Yeung et al. [29]. For the angular SR tasks, we evaluate a simplified version of the proposed model with fewer HRBs in the GRLNet and the SReNet. Even with this reduced capacity, our model is able to outperform the other three methods. For the view synthesis task, the quantitative comparisons on average PSNR and SSIM are presented in the last part of Table 1. Our model outperforms Kalantari et al. and Wu et al. on most real-world and synthetic LF scenes. Fig. 7 compares the visual results. The depth-dependent method of Kalantari et al. tends to produce ghosting artifacts near object boundaries. Wu et al. only use the EPI information, which leads to a loss of spatial details (e.g., the nerve fibre in the Neurons scene is absent in their reconstructed LF). To demonstrate the effectiveness of the hierarchical HRB structure, we further compare against Yeung et al.'s model with 16 4D alternating convolutions (16L). According to Table 3 and Fig. 7, our model achieves higher quantitative values and synthesizes more realistic novel views (also refer to our supplementary video).

Table 3: Quantitative evaluation of state-of-the-art view synthesis algorithms. We report the average PSNR for the view synthesis task.

Conclusion

In this paper, we design a hierarchical high-order framework for LF spatial and angular SR. To fully exploit the structural information of the LF, the HRB is proposed. By cascading a set of HRBs, our model is able to extract representative features encoded with geometric information, which contribute substantially to the final reconstruction quality. In addition, the combination of the pixel-wise loss and the perceptual loss further allows our model to generate more realistic spatial images. The experiments show that the proposed model outperforms state-of-the-art SR methods in terms of both quantitative measurements and visual fidelity.

Acknowledgments

This work is supported in part by the Research Grants Council of Hong Kong (GRF 17203217, 17201818, 17200019) and the University of Hong Kong (104005009, 104005438).

References

  1. T. E. Bishop and P. Favaro (2012) The light field camera: extended depth of field, aliasing, and superresolution. IEEE TPAMI.
  2. R. A. Farrugia, C. Galea and C. Guillemot (2017) Super resolution of light field images using linear subspace projection of patch-volumes. IEEE JSTSP.
  3. R. A. Farrugia and C. Guillemot (2018) Light field super-resolution using a low-rank prior and deep convolutional neural networks. arXiv preprint.
  4. J. Flynn, I. Neulander, J. Philbin and N. Snavely (2016) DeepStereo: learning to predict new views from the world's imagery. In CVPR.
  5. S. J. Gortler, R. Grzeszczuk, R. Szeliski and M. F. Cohen (1996) The lumigraph. In SIGGRAPH.
  6. P. Gupta, P. Srivastava, S. Bhardwaj and V. Bhateja (2011) A modified PSNR metric based on HVS for quality assessment of color images. In ICCIA.
  7. S. Heber, W. Yu and T. Pock (2017) Neural EPI-volume networks for shape from light field. In ICCV.
  8. K. Honauer, O. Johannsen, D. Kondermann and B. Goldluecke (2016) A dataset and evaluation methodology for depth estimation on 4D light fields. In ACCV.
  9. S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint.
  10. H. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y. Tai and I. S. Kweon (2015) Accurate depth map estimation from a lenslet light field camera. In CVPR.
  11. J. Johnson, A. Alahi and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In ECCV.
  12. N. K. Kalantari, T. Wang and R. Ramamoorthi (2016) Learning-based view synthesis for light field cameras. ACM TOG.
  13. W. Lai, J. Huang, N. Ahuja and M. Yang (2018) Fast and accurate image super-resolution with deep Laplacian pyramid networks. IEEE TPAMI.
  14. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang and W. Shi (2017) Photo-realistic single image super-resolution using a generative adversarial network. In CVPR.
  15. M. Levoy and P. Hanrahan (1996) Light field rendering. In SIGGRAPH.
  16. J. Lim, H. Ok, B. Park, J. Kang and S. Lee (2009) Improving the spatial resolution based on 4D light field data. In ICIP.
  17. N. Meng, E. Lam, K. K. M. Tsia and H. K. So (2018) Large-scale multi-class image-based cell classification with deep learning. IEEE JBHI.
  18. N. Meng, H. K. So, X. Sun and E. Lam (2019) High-dimensional dense residual convolutional neural network for light field reconstruction. IEEE TPAMI.
  19. N. Meng, X. Sun, H. K. So and E. Y. Lam (2019) Computational light field generation using deep nonparametric Bayesian learning. IEEE Access.
  20. N. Meng, T. Zeng and E. Y. Lam (2019) Spatial and angular reconstruction of light field based on deep generative networks. In ICIP.
  21. K. Mitra and A. Veeraraghavan (2012) Light field denoising, light field superresolution and stereo camera based refocussing using a GMM light field patch prior. In CVPRW.
  22. J. Pearson, M. Brookes and P. L. Dragotti (2013) Plenoptic layer-based modeling for image based rendering. IEEE TIP.
  23. M. Rerabek and T. Ebrahimi (2016) New light field image dataset. In QoMEX.
  24. X. Sun, Z. Xu, N. Meng, E. Y. Lam and H. K. So (2016) Data-driven light field depth estimation using deep convolutional neural networks. In IJCNN.
  25. S. Vagharshakyan, R. Bregovic and A. Gotchev (2018) Light field reconstruction using shearlet transform. IEEE TPAMI.
  26. Y. Wang, F. Liu, K. Zhang, G. Hou, Z. Sun and T. Tan. LFNet: a novel bidirectional recurrent convolutional neural network for light-field image super-resolution. IEEE TIP.
  27. S. Wanner and B. Goldluecke (2014) Variational light field analysis for disparity estimation and super-resolution. IEEE TPAMI.
  28. S. Wanner, S. Meister and B. Goldluecke. Datasets and benchmarks for densely sampled 4D light fields. In VMV.
  29. H. W. F. Yeung, J. Hou, J. Chen, Y. Y. Chung and X. Chen (2018) Fast light field reconstruction with deep coarse-to-fine modeling of spatial-angular clues. In ECCV.
  30. G. Wu, Y. Liu, L. Fang, Q. Dai and T. Chai (2018) Light field reconstruction using convolutional network on EPI and extended applications. IEEE TPAMI.
  31. G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai and Y. Liu (2017) Light field image processing: an overview. IEEE JSTSP.
  32. G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai and Y. Liu (2017) Light field reconstruction using deep convolutional network on EPI. In CVPR.
  33. Y. Yang, H. Chen and J. Shao (2019) Triplet enhanced autoencoder: model-free discriminative network embedding. In IJCAI.
  34. Y. Yoon, H. Jeon, D. Yoo, J. Lee and I. S. Kweon (2017) Light-field image super-resolution using convolutional neural network. IEEE SPL.
  35. S. Zhang, Y. Lin and H. Sheng (2019) Residual networks for light field image super-resolution. In CVPR.
  36. Y. Zhang, Y. Tian, Y. Kong, B. Zhong and Y. Fu (2018) Residual dense network for image super-resolution. In CVPR.
  37. Z. Zhang, Y. Liu and Q. Dai (2015) Light field from micro-baseline image pair. In CVPR.
  38. M. Ziegler, R. op het Veld, J. Keinert and F. Zilly (2017) Acquisition system for dense lightfield of large scenes. In 3DTV-CON.