SynSin: End-to-end View Synthesis from a Single Image


Single image view synthesis allows for the generation of new views of a scene given a single input image. This is challenging, as it requires comprehensively understanding the 3D scene from a single image. As a result, current methods typically use multiple images, train on ground-truth depth, or are limited to synthetic data. We propose a novel end-to-end model for this task; it is trained on real images without any ground-truth 3D information. To this end, we introduce a novel differentiable point cloud renderer that is used to transform a latent 3D point cloud of features into the target view. The projected features are decoded by our refinement network to inpaint missing regions and generate a realistic output image. The 3D component inside of our generative model allows for interpretable manipulation of the latent feature space at test time, e.g. we can animate trajectories from a single image. Unlike prior work, we can generate high resolution images and generalise to other input resolutions. We outperform baselines and prior work on the Matterport, Replica, and RealEstate10K datasets.


1 Introduction

Given an image of a scene, as in Fig. LABEL:fig:teaser (top-left), what would one see when turning left or walking forward? We can reason that the window and the wall will extend to the left and more chairs will appear to the right. The task of novel view synthesis addresses these questions: given a view of a scene, the aim is to generate images of the scene from new viewpoints. This task has wide applications in image editing, animating still photographs or viewing RGB images in 3D. To unlock these applications for any input image, our goal is to perform view synthesis in complex, real-world scenes using only a single input image.

View synthesis is challenging, as it requires comprehensive scene understanding. Specifically, successful view synthesis requires understanding both the 3D structure and the semantics of the input image. Modelling 3D structure is important for capturing the relative motion of visible objects under a view transform. For example, in Fig. LABEL:fig:teaser (bottom-left), the sink is closer than the shower and thus shifts more as we change viewpoints. Understanding semantics is necessary for synthesising plausible completions of partially visible objects, e.g. the chair in Fig. LABEL:fig:teaser (top-left).

One way to overcome these challenges is to relax the single-image constraint and use multiple views to reconstruct 3D scene geometry [?, 41, 65, ?]. This also simplifies semantic modelling, as fewer positions will be occluded from all views. Recent methods [63, 45, 57] can be extremely effective even for complex real-world scenes. However, the assumption of multiple views severely limits their applicability, since the vast majority of images are not accompanied by views from other angles.

Another approach is to train a convolutional network to estimate depth from images [11, 28], enabling single-image view synthesis in realistic scenes [34]. Unfortunately this approach requires a training dataset of images with ground-truth depth. Worse, depth predictors may not generalise beyond the scene types on which they are trained (e.g. a network trained on indoor scenes will not work on outdoor images), so this approach can only perform view synthesis on scene types for which ground-truth depth can be obtained.

To overcome these shortcomings, there has been growing interest in view synthesis methods that do not use any 3D information during training. Instead, an end-to-end generative model with 3D-aware intermediate representations can be trained from image supervision alone. Existing methods have shown promise on synthetic scenes of single objects [26, 49, 56, 43, 44], but have been unable to scale to complex real-world scenes. In particular, several recent methods represent 3D structure using dense voxel grids of latent features [43, 31]. With voxels, the fidelity of 3D information that can be represented is tied to the voxel dimensions, thus limiting the output resolution. On the other hand, point clouds are more flexible, generalise naturally to varying resolutions and are more efficient.

In this paper we introduce SynSin, a model for view synthesis from a single image in complex real-world scenes. SynSin is an end-to-end model trained without any ground-truth 3D supervision. It represents 3D scene structure using a high-resolution point cloud of learned features, predicted from the input image using a pair of convolutional networks. To generate new views from the point cloud, we render it from the target view using a high-performance differentiable point cloud renderer. SynSin models scene semantics by building upon recent advances in generative models [3], and training adversarially against learned discriminators. Since all model components are differentiable, SynSin is trained end-to-end using image pairs and their relative camera poses; at test-time it receives only a single image and a target viewpoint.

We evaluate our approach on three complex real-world datasets: Matterport [4], RealEstate10K [63], and Replica [47]. All datasets include large angle changes and translations, increasing the difficulty of the task. We demonstrate that our approach generates high-quality images and outperforms baseline methods that use voxel-based 3D representations. We also show that our trained models can generalise at test-time to high-resolution output images, and even to new datasets with novel scene types.

Figure 1: Our end-to-end system. The system takes as input an image of a scene and a change in pose. The spatial feature predictor learns a set of features (visualised by projecting the features to RGB using PCA) and the depth regressor a depth map. These are projected into 3D (the diagram shows RGB for clarity) to give a point cloud of features, which is transformed according to the change in pose and rendered. The rendered features are passed through the refinement network to generate the final image, which should match the target image; we enforce this using a set of discriminators and photometric losses.

2 Related work

Research into new view synthesis has a long history in computer vision. These works differ based on whether they use multiple images or a single image at test time and on whether they require annotated 3D or semantic information.

View synthesis from multiple images. If multiple images of a scene can be obtained, inferred 3D geometry can be used to reconstruct the scene and then generate new views. Traditionally, this was done using depth maps [38, 5] or multi-view geometry [?, 41, 65, ?, 25, 10].

In the learning era, DNNs can be used to learn depth. [9, 18, 32, 31, 1] use a DNN to improve view synthesis from a set of noisy, incomplete, or inconsistent depth maps. Given two or more images of a scene within a small baseline, [46, 13, 45, 52, 63, 57] show impressive results at synthesising views within this narrow baseline. [43, 30] learn an implicit voxel representation of one object given many training views and generate new views of that object at test time. [12] use no implicit 3D representation. Unlike these methods, we assume only one image at test time.

View synthesis from a single image using ground-truth depth or semantics. A second vein of work assumes a large dataset of images with corresponding ground-truth 3D and semantic information to train their 3D representation [34, 51, 42]. These methods rely on a large-scale benchmark and a corresponding annotation effort. The depth may be obtained using a depth or lidar camera [14, ?, 23] or SfM [28]; however, this is time-consuming and challenging, especially for outdoor scenes, often necessitating the use of synthetic environments. We aim to make predictions anywhere, e.g. the wood scene in Fig. 4, and in realistic settings, without 3D information or semantic labels.

View synthesis from a single image. DNNs can be used to learn view synthesis in an end-to-end fashion. One such line of work synthesises new views using purely image to image transformations [49, 64, 35, 48, 26, 7]. Later work performs 3D operations directly on the learned embedding [56] or interprets the latent space as an implicit surface [44]. However, these works consider synthetic datasets with a single object per image and train one model per object class. Most similar to ours is the recent work of [8]. However, they do not consider larger movements, which lead to significant holes and dis-occlusions in the target image. They also consider a more constrained setup: synthetic object classes and mostly forward motion in KITTI [14], whereas we use a variety of indoor and outdoor scenes.

Many works explore using a DNN to predict 3D object shapes [16, 15, 53, 58, 21, 19] or the depth of a scene given an image [6, 11, 28, 62]. These works focus on the quality of the 3D predictions as opposed to the view-synthesis task.

Generative models. We build on recent advances in generative models to produce high-quality images with DNNs [?, 3, 22, 33, 36]. In [3, 22], moving between the latent codes of different instances of an object class seemingly interpolates pose, but explicitly modifying pose is hard to control and evaluate. [33] allows for explicit pose control but not from a given image; they also use a voxel representation, which we find to be computationally limiting.

3 Method

In this section, we introduce SynSin (Fig. 1) and in particular how we overcome the two main challenges of representing 3D scene structure and scene semantics. To represent the 3D scene structure, we project the image into a latent feature space which is in turn transformed using a differentiable point cloud renderer. This renderer injects a 3D prior into the network, as the predicted 3D structure must obey geometric principles. To satisfy the scene semantics, we frame the entire end-to-end system as a GAN and build on architectural innovations of recent state-of-the-art generative models.

SynSin takes as input an image and a relative pose. The input image is embedded into a feature space via a spatial feature predictor and mapped to a depth map via a depth regressor. From the features and depth, a point cloud is created, which is rendered into the new view by the neural point cloud renderer. The refinement network refines the rendered features to give the final generated image. At training time, we enforce that the generated image matches the target image via a discriminator.
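As an illustrative sketch of this data flow, the forward pass can be written as a composition of the components named above. The stub implementations below are placeholders on random data, not the paper's architectures; all shapes and names here are our assumptions.

```python
import numpy as np

def synsin_forward(image, T, f, d, unproject, render, g):
    """Sketch of the SynSin forward pass (Sec. 3).
    image: (H, W, 3) input; T: 4x4 relative camera pose."""
    feats = f(image)                  # (H, W, C) spatial feature map
    depth = d(image)                  # (H, W) predicted depth map
    cloud = unproject(feats, depth)   # (H*W, 3+C) point cloud of features
    rendered = render(cloud, T)       # (H, W, C) features in the target view
    return g(rendered)                # (H, W, 3) refined output image

# Toy stubs so the sketch runs end-to-end on random data.
H, W, C = 4, 4, 8
f = lambda im: np.random.rand(H, W, C)
d = lambda im: np.full((H, W), 2.0)
unproject = lambda feats, depth: np.concatenate(
    [np.random.rand(H * W, 3), feats.reshape(-1, C)], axis=1)
render = lambda cloud, T: cloud[:, 3:].reshape(H, W, C)
g = lambda r: r[..., :3]

out = synsin_forward(np.random.rand(H, W, 3), np.eye(4), f, d, unproject, render, g)
```

Because every step is a differentiable map in the real model, gradients from a loss on the output image can flow back through the renderer into both networks.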

3.1 Spatial feature and depth networks

Two networks are responsible for mapping the raw input image into a higher-dimensional feature map and a depth map, respectively. The spatial feature network predicts feature maps at the same resolution as the original image. These feature maps should represent scene semantics, i.e. a higher-level representation than simply RGB colours. The depth network estimates the 3D structure of the input image at the same resolution. The depth does not have to be (nor would we expect it to be) perfectly accurate; however, it is explicitly learned in order to perform the task. The designs of the two networks follow standard architectures for their respective tasks:

Spatial feature network. We build on the BigGAN architecture [3] and use 8 ResNet blocks that maintain the image resolution; the final block predicts a fixed-dimensional feature vector for each pixel of the input image.

Depth network . We use a UNet [?] with 8 downsampling and upsampling layers to give a final prediction of the same spatial resolution as the input. This is followed by a sigmoid layer and a renormalisation step so the predicted depths fall within the per-dataset min and max values. Please refer to the supplement for the precise details.
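The sigmoid-plus-renormalisation step at the end of the depth network can be sketched as follows. The depth bounds used here are illustrative values, not the paper's per-dataset bounds.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def renormalise_depth(raw, d_min, d_max):
    """Map unbounded network outputs into [d_min, d_max] (Sec. 3.1).
    d_min/d_max stand for the per-dataset min and max depth values;
    the numbers below are made up for illustration."""
    return d_min + (d_max - d_min) * sigmoid(raw)

raw = np.array([-10.0, 0.0, 10.0])           # raw final-layer outputs
depth = renormalise_depth(raw, d_min=0.1, d_max=10.0)
```

This keeps every predicted depth inside the valid range while remaining differentiable everywhere, so the depth network can be trained purely from the rendering loss.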

Figure 2: Comparison of our rendering pipeline to a naïve version. Given a set of points ordered in a z-buffer, our renderer splats each point to a region of finite radius and accumulates points using alpha-compositing, rather than keeping only the nearest point. When back-propagating through our renderer, gradients flow not just to the nearest point, but to all points in the z-buffer. (For simplicity we show 1D projections.)

3.2 Neural point cloud renderer

We combine the spatial features and predicted depths to give a 3D point cloud of feature vectors. Given the input view transform, we want to view this point cloud at the target viewpoint, which requires rendering the point cloud. Renderers are used extensively in graphics, as reviewed in [24, 39], but they usually focus on forward projection. Our 3D renderer is a component of an end-to-end system that is jointly optimised, and so needs to allow for gradient propagation: we want to train the depth predictor without any 3D supervision, using only a loss on the final rendered image. Additionally, unlike traditional rendering pipelines, we render features rather than RGB colours.
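Building the point cloud amounts to unprojecting every pixel with its predicted depth. A standard pinhole-camera sketch is given below; the intrinsics matrix K and its values are made-up examples, and SynSin's exact camera conventions may differ.

```python
import numpy as np

def unproject(depth, K):
    """Lift each pixel (u, v) with depth z to the 3D camera-space point
    z * K^{-1} [u, v, 1]^T. depth: (H, W); K: 3x3 intrinsics."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # (H*W, 3)
    rays = pix @ np.linalg.inv(K).T       # apply K^{-1} to every pixel
    return rays * depth.reshape(-1, 1)    # scale each ray by its depth

# Illustrative intrinsics: focal length 2, principal point (1.5, 1.5).
K = np.array([[2.0, 0.0, 1.5],
              [0.0, 2.0, 1.5],
              [0.0, 0.0, 1.0]])
points = unproject(np.full((4, 4), 3.0), K)   # a flat plane at depth 3
```

In the full model each 3D point also carries the feature vector predicted at its pixel, so the cloud stores features rather than colours.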

Limitations of a naïve renderer. A naïve renderer projects each 3D point to one pixel or a small region (its footprint) in the new view. Points are sorted in depth using a z-buffer, and each pixel in the new view is coloured by the nearest point in depth. A non-differentiable renderer provides gradients neither with respect to the point cloud positions (needed to train our depth predictor) nor the feature vectors (needed to train our spatial feature network). Simply making the operations of a naïve renderer differentiable is problematic for two reasons (illustrated in Fig. 2). (1) Small neighbourhoods: each point projects to only one or a few pixels in the rendered view. In this case, there are only a few gradients for each point in the image plane of the rendered view; this drawback of local gradients is discussed in [20] in the context of bilinear samplers. (2) The hard z-buffer: each rendered pixel is only affected by the nearest point in the z-buffer (e.g. if a new point becomes the closest in depth, the output suddenly changes).

Our solution. We propose a neural point cloud renderer that solves these two problems by softening the hard decisions, as illustrated in Fig. 2. This is inspired by [29], which introduces a differentiable renderer for meshes by similarly softening the hard rasterisation decisions. First, to solve the issue of small neighbourhoods, we splat 3D points to disks of varying influence, controlled by hyperparameters. Second, to solve the issue of the hard z-buffer, we accumulate the effects of the K nearest points, not just the nearest one.

Our renderer first projects the point cloud onto a 2D grid under the given transformation. A 3D point is projected and splatted to a region with a given centre and radius. The influence of the 3D point on a pixel decays with the Euclidean distance of that pixel from the centre of the region.

Though this influence function is not differentiable everywhere, we can approximate its derivatives using the subderivative. Two hyperparameters control the spread and fall-off of the influence of a 3D point.
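A minimal sketch of such a footprint-based influence function is given below, assuming a simple linear fall-off; the paper's exact fall-off shape is governed by its hyperparameters, so this particular form is an illustrative stand-in.

```python
import numpy as np

def influence(px, centre, radius):
    """Hypothetical influence of a splatted point on pixel px: maximal at
    the centre of the point's footprint, decaying linearly with Euclidean
    distance, and reaching zero at the footprint radius."""
    dist = np.linalg.norm(np.asarray(px, float) - np.asarray(centre, float))
    return max(0.0, 1.0 - dist / radius)

a = influence((5, 5), (5, 5), radius=2.0)   # at the centre
b = influence((5, 6), (5, 5), radius=2.0)   # one pixel away
c = influence((5, 9), (5, 5), radius=2.0)   # outside the footprint
```

Because every pixel inside the footprint receives a non-zero influence, each 3D point collects gradients from a whole disk of pixels rather than a single one, addressing the small-neighbourhood problem.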

The projected points are then accumulated in a z-buffer: they are sorted according to their distance from the new camera, and only the K nearest points are kept for each pixel in the new view. Writing $\alpha_k$ for the influence of the $k$-th nearest point on a given pixel and $F_k$ for its feature vector, the sorted points are accumulated using alpha over-compositing (where $\gamma$ is a hyperparameter):

$$\bar{F} = \sum_{k=1}^{K} \alpha_k^{\gamma} \Big( \prod_{l=1}^{k-1} \big(1 - \alpha_l^{\gamma}\big) \Big) F_k,$$

where $\bar{F}$ is the projected feature map in the new view and $F$ the feature map in the original view. $\gamma$ controls the blending: as $\gamma \to 0$, each $\alpha^{\gamma} \to 1$ and only the nearest point contributes, recovering hard z-buffering. This setup is illustrated in Fig. 2.
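Per pixel, this accumulation over the sorted z-buffer can be sketched as follows; the variable names and the example influence values are ours.

```python
import numpy as np

def composite(alphas, feats, gamma, K):
    """Alpha over-composite the K nearest points of one pixel's z-buffer
    (Sec. 3.2). alphas/feats are assumed sorted by increasing depth.
    As gamma -> 0, every alpha**gamma -> 1 and only the nearest point
    survives, i.e. hard z-buffering."""
    alphas, feats = alphas[:K], feats[:K]
    out, transmit = 0.0, 1.0
    for a, f in zip(alphas, feats):
        w = a ** gamma
        out += transmit * w * f   # this point's weighted contribution
        transmit *= (1.0 - w)     # remaining transmittance for farther points
    return out

alphas = np.array([0.6, 0.8, 0.5])   # influences, sorted near-to-far
feats = np.array([1.0, 2.0, 3.0])    # 1D "features" for illustration
soft = composite(alphas, feats, gamma=1.0, K=3)
hard = composite(alphas, feats, gamma=1e-8, K=3)   # approaches nearest-point
```

The soft result blends all three points, while driving gamma towards zero collapses the output onto the nearest point's feature, which is why gradients reach every point in the z-buffer rather than only the closest one.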

Implementation. Our renderer must be high-performance, since we process batches of high-resolution point clouds during training. We implement our renderer using a sequence of custom CUDA kernels, building upon work on high-performance triangle rasterisation with CUDA [27]. We use a two-stage approach: in the first stage we break the output image into tiles, and determine the set of points whose footprint intersects each tile. In the second stage, we determine the nearest points for each pixel in the output image, sorting points in depth using per-pixel priority queues in shared memory to reduce global memory traffic.

Other approaches. This method is related to the mesh rasteriser of [29] and the point cloud rasterisers of [59, 19, 1]. However, our renderer is simpler than that of [59], and we apply it in an end-to-end framework. While [1] also renders point clouds of features, they only back-propagate to the feature vectors, not the 3D positions. [19] stores the predicted points in a voxel grid before performing the projection step; this limits the resolution, as a voxel grid scales cubically.

Performance. On a single V100 GPU, rendering a batch of six point clouds to a batch of six images takes 36ms for the forward pass and 5ms for the backward pass. In contrast, converting the same point cloud to a voxel grid using the implementation from [19] is slower for both the forward and backward passes.

Figure 3: Qualitative results on RealEstate for ours and baseline methods. Given the input view and the camera parameters, the methods are tasked to produce the target image. The red squares denote interesting differences between the methods. In the upper row, our model better recreates the true 3D; in the bottom row, our model is better able to preserve detail.

3.3 Refinement module and discriminator

Even if the features are projected accurately, regions not visible in the input view will be empty in the target view. The refinement module should inpaint [2, ?] these missing regions in a semantically meaningful (e.g. missing portions of a couch should be filled in with a similar texture) and geometrically accurate (e.g. straight lines should continue to be straight) manner. To solve this task, we take inspiration from recent generative models [3, 22, 36].

Deep networks have previously been applied to inpainting [?, 50, 55]. In a typical inpainting setup, we know a priori which pixels are correct and which need to be synthesised. In our case, the refinement network must perform two tasks. First, it should inpaint regions with no projected features, e.g. regions on the image boundary or dis-occluded regions. The refinement module can discover these regions, as their features have values near zero. Second, the refinement module should correct local errors (e.g. noisy regions resulting from noisy depth).

To build the refinement network, we use 8 ResNet [?] blocks, taking inspiration from [3]. Unlike [3], we aim to generate a new image conditioned on an input view, not a random vector. Consequently, we find it important to maintain the image resolution as much as possible to obtain high-quality results. We modify their ResNet block to create a downsampling block, which decreases the image resolution before it is upsampled back to the original resolution. To model the ambiguity in the inpainting task, we use batch normalisation injected with noise [3]. We additionally apply spectral normalisation after each convolutional layer [60].
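A simplified stand-in for noise-injected batch normalisation (in the spirit of [3]) is sketched below: the features are normalised, and the scale and shift are then predicted from a random noise vector, so the inpainted content can vary between samples. The linear maps `gamma_mlp` and `beta_mlp` are our reductions of the learned modulation networks, not the paper's implementation.

```python
import numpy as np

def noisy_batchnorm(x, z, gamma_mlp, beta_mlp, eps=1e-5):
    """Normalise x (batch, channels) over the batch, then modulate with a
    scale and shift predicted from the noise vector z."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)       # standard batch norm
    return gamma_mlp(z) * x_hat + beta_mlp(z)   # noise-conditioned affine

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))   # batch of 8 samples, 4 channels
z = rng.normal(size=(4,))     # noise vector, one modulation per channel
out = noisy_batchnorm(x, z,
                      gamma_mlp=lambda z: 1.0 + 0.1 * z,
                      beta_mlp=lambda z: 0.1 * z)
```

Because the normalised features have zero mean per channel, the per-channel mean of the output equals the noise-predicted shift, which is how the noise steers the generated content.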

The GAN architecture and objective are those of [54]. We use two multi-layer discriminators, at a lower and a higher resolution, and a feature-matching loss on the discriminator.

3.4 Training

Training objective. The network is trained with an L1 loss, a content loss, and a discriminator loss between the generated and target images; the total loss is the weighted sum of these three terms.
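As a sketch, assuming the unit loss weights given in the training details (the weight names `w_l1`, `w_c`, `w_g` are ours):

```python
def total_loss(l1, content, gan, w_l1=1.0, w_c=1.0, w_g=1.0):
    """Weighted sum of the three training losses (Sec. 3.4)."""
    return w_l1 * l1 + w_c * content + w_g * gan

# Illustrative loss values for a single training step.
loss = total_loss(0.5, 0.3, 0.2)
```

Since all three terms are computed on the final generated image, a single backward pass through this sum trains the refinement network, the renderer inputs, and both encoder networks jointly.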

Training details. The models are trained with the Adam optimiser, using a learning rate of 0.01 for the discriminator and 0.0001 for the generator, with momentum parameters (0, 0.9). All loss weights are set to 1. We implement our models in PyTorch [37]; they take 1-2 days to train on 3 Titan V100 GPUs.

| Method | Matterport [4]: PSNR (Both / InVis / Vis) | SSIM (Both / InVis / Vis) | Perc Sim (Both / InVis / Vis) | RealEstate10K [63]: PSNR | SSIM | Perc Sim | Replica [47]: PSNR | SSIM | Perc Sim |
|---|---|---|---|---|---|---|---|---|---|
| 1. SynSin (small ft) | 21.36 / 20.37 / 22.06 | 0.72 / 0.70 / 0.70 | 1.58 / 0.43 / 0.91 | 20.78 | 0.70 | 1.16 | 21.64 | 0.79 | 1.70 |
| 2. SynSin (hard z) | 20.14 / 19.51 / 20.62 | 0.66 / 0.68 / 0.64 | 1.93 / 0.46 / 1.20 | 21.03 | 0.69 | 1.17 | 21.95 | 0.79 | 1.69 |
| 3. SynSin (rgb) | 21.03 / 19.98 / 21.69 | 0.68 / 0.69 / 0.66 | 2.15 / 0.47 / 1.35 | 21.19 | 0.67 | 1.45 | 21.71 | 0.80 | 2.03 |
| 4. SynSin | 21.82 / 20.59 / 22.63 | 0.73 / 0.71 / 0.71 | 1.51 / 0.42 / 0.86 | 22.78 | 0.74 | 0.95 | 22.28 | 0.80 | 1.47 |
| 5. SynSin (w/ GT) | 23.76 / 20.84 / 26.87 | 0.82 / 0.75 / 0.84 | 1.22 / 0.47 / 0.55 | | | | 24.84 | 0.88 | 1.08 |
| 6. SynSin (sup. by GT) | 21.93 / 20.63 / 22.86 | 0.73 / 0.71 / 0.72 | 1.50 / 0.42 / 0.85 | | | | 21.58 | 0.78 | 1.60 |
| 7. Im2Im | 13.22 / 13.42 / 13.33 | 0.32 / 0.36 / 0.30 | 3.94 / 1.00 / 2.84 | 16.51 | 0.47 | 1.94 | 12.66 | 0.37 | 3.88 |
| 8. Vox w/ UNet | 18.52 / 17.85 / 19.05 | 0.57 / 0.57 / 0.57 | 2.98 / 0.77 / 1.96 | 17.31 | 0.53 | 2.30 | 18.69 | 0.71 | 2.68 |
| 9. Vox w/ ours | 20.62 / 19.64 / 21.22 | 0.70 / 0.69 / 0.68 | 1.97 / 0.47 / 1.19 | 21.88 | 0.71 | 1.30 | 19.77 | 0.75 | 2.24 |

Table 1: Results on Matterport3D [4], RealEstate10K [63], and Replica [47]. Higher is better for PSNR and SSIM; lower is better for Perc Sim. The ablations demonstrate the utility of each aspect of our model. We outperform all baselines on both datasets and are nearly as good as a model supervised with depth (SynSin (sup. by GT)). We also perform best when considering regions visible (Vis) and not visible (InVis) in the input view.

4 Experiments

We evaluate our approach on the task of view synthesis using novel real-world scenes. We validate our design choices in Section 4.3 by ablating our approach and comparing against competing end-to-end view synthesis pipelines. We also compare to other systems and find that our model performs better than one based on a trained depth predictor, which fails to generalise well to the new domain. We additionally evaluate SynSin’s generalisation performance to novel domains (Section 4.3) as well as higher image resolutions (Section 4.4). Finally, we use SynSin to synthesise trajectories from an initial image in Section 4.6, demonstrating that it can be used for a walk-through application. Additional results are given in the supplement.

4.1 Experimental setup

Datasets. We focus on using realistic data of indoor and outdoor environments as opposed to synthetic objects.

The first framework we use is Habitat [40], which allows for testing in a variety of scanned indoor scenes. The Habitat framework can efficiently generate image and viewpoint pairs for an input scene. We use two sources of indoor scenes: Matterport3D [4], consisting of reconstructions of homes, and Replica [47], which consists of higher fidelity scans of indoor scenes. The Matterport3D dataset is divided at the scene level into train/val/test splits containing 61/11/18 scenes. The Replica dataset is only used at evaluation time to test generalisability. Pairs of images are generated by randomly selecting a viewpoint in a scene and then randomly modifying the viewing angle in a range of in each Euclidean direction and the position within m.

The second dataset we use is RealEstate10K [63], which consists of videos of walkthroughs of properties and the corresponding camera parameters (intrinsic and extrinsic) obtained using SfM. The dataset contains both indoor and outdoor scenes. It comes pre-split into disjoint train and test scenes; we subdivide the train set into training and validation sets to give approximately 57K/14K/7K scenes in train/val/test. The scenes in the test set are unseen. We sample viewpoints by selecting a reference video frame and then selecting a second video frame a maximum of 30 frames apart. In order to sample more challenging frames, we choose pairs with a change in angle of and a change in position of greater than if possible (see [63] for a discussion on metric scale). To report results, we randomly generate a set of 2000 pairs of images from the test set.

Metrics. Determining the similarity of images in a manner correlated with human judgement is challenging [61]. We report multiple metrics to obtain a more robust estimate of the relative quality of images. We report the PSNR, SSIM, and perceptual similarity of the images generated by the different models. Perceptual similarity has been recently demonstrated to be an effective method for comparing the similarity of images [61]. Finally, we validate that these metrics do indeed correlate with human judgement by performing a user study on Amazon Mechanical Turk (AMT).
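For reference, PSNR, the simplest of the three reported metrics, can be computed as follows (a standard definition; the example images are made up):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) between two images whose pixel
    values lie in [0, max_val]. Higher means more similar."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8))        # toy "generated" image
b = np.full((8, 8), 0.1)    # toy "target" image, uniform offset of 0.1
val = psnr(a, b)            # MSE = 0.01, so PSNR = 20 dB
```

Because PSNR is a pure per-pixel error, it complements SSIM (structural similarity) and perceptual similarity [61], which compare images at the patch and deep-feature level respectively.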

4.2 Baselines

We first ablate the need for a soft differentiable renderer by comparing to variants that use a small footprint, use hard z-buffering, or directly project RGB values. These models use the same setup, training schedule, and sequence of input images/viewpoints as SynSin.

SynSin (small ft): We shrink the footprint radius in our model to investigate the utility of a large footprint.

SynSin (hard z): We use a hard z-buffer in our model (keeping only the nearest point per pixel) to investigate the utility of the soft z-buffer.

SynSin (rgb): We project RGB values not features.

SynSin does not assume ground-truth depth at test time; the depth predictor is trained end-to-end for the given task. We investigate the impact of ground-truth (GT) depth by reporting two variants of our model. These models act as upper bounds and can only be trained on Matterport3D (not RealEstate10K), as they use true depth information.

SynSin (w/ GT): The true depth is used in place of the predicted depth.

SynSin (sup. by GT): The depth regressor is supervised by the true depth. (In all other cases, SynSin's depth is learned with no supervision.)

We evaluate our 3D representation by comparing to a method that uses no 3D representation and one that uses a voxel representation. As no methods perform view synthesis on the challenging datasets we consider, we re-implement these baselines for a fair comparison. These baselines use the same setup, training schedule, and sequence of input images/viewpoints as SynSin.

Im2Im: This baseline evaluates an image-to-image method; we re-implement [64]. [64] only considered a set of discretised rotations about the azimuth and a smaller set of rotations in elevation. However, the changes in viewpoint in our datasets arise from rotating continuously in any direction and translating in 3D. We modify their method to allow for these more complex transformations.

Vox: This baseline swaps our implicit 3D representation for a voxel based representation. The model is based on that of [43]. However, [43] trains one model per object, so their model effectively learns to interpolate between the 100 training views unlike our model, which extrapolates to new real-world test scenes given a single input view. We consider two variants: Vox w/ UNet uses the UNet encoder/decoder of [43] whereas Vox w/ ours uses a similar ResNet encoder/decoder setup to SynSin. This comparison evaluates our 3D approach as opposed to a voxel based one as well as whether our encoder/decoder setup is preferable.

Finally, we compare SynSin to existing pipelines that perform view synthesis. These systems make different assumptions and follow different approaches. This comparison validates our use of a learned end-to-end system.

StereoMag [63]: This system takes two images as input at test time. Assuming two input views simplifies the problem of 3D understanding compared to our work, which estimates 3D from a single view.

3DView: This system trains a single-image depth predictor on images with ground-truth depth (e.g. MegaDepth [28]). Predicted depths are used to convert the input image to a textured 3D mesh, which is extended in space near occlusion boundaries using isotropic colour diffusion [17]. Finally the mesh is rendered from the target view.

System comparison on RealEstate10K [63]:

| Method | PSNR | SSIM | Perc Sim |
|---|---|---|---|
| SynSin | 22.78 | 0.74 | 0.95 |
| 3DView | | | |
| StereoMag [63] | | | |

Table 2: SynSin performs better than a system trained with GT depth (3DView) and approaches the performance of [63], which uses two input views at test time.
Generalisation to higher resolution:

| Method | PSNR | SSIM | Perc Sim |
|---|---|---|---|
| SynSin | 22.06 | 0.72 | 1.00 |
| Vox w/ ours | 18.82 | 0.61 | 2.47 |

Table 3: Results when applying models trained on lower-resolution images to higher-resolution images.
AMT user study:

| | Ours | Vox w/ ours | Neither |
|---|---|---|---|
| E-O | 68.7 | 31.3 | |
| E-O-N | 55.6 | 27.3 | 17.2 |

Table 4: % of videos chosen as most realistic. In E-O, users choose the better method; in E-O-N, users can say neither is better.
Figure 4: System comparisons on RealEstate10K. Note that StereoMag [63] uses two input images (the second is shown as an inset). Unlike [63], we inpaint missing regions (bottom row); [63] fails to model the left region and cannot inpaint the missing region. 3DView uses a model pretrained for depth, causing their system to produce inaccurate results for large viewpoint changes (e.g. the bed in the top row).

4.3 Comparisons with other methods

Results on Matterport3D and RealEstate10K. We train our models, ablations, and baselines on these datasets.

To better analyse the results, we compare models on how well they understand the 3D scene structure and the scene semantics (discussed in Section 1). To achieve this, we report metrics on the final prediction (Both) but also on the regions of the target image that are visible (Vis) and not visible (InVis) in the input image. (Vis) evaluates the quality of the learned 3D scene structure, as it can be largely solved by accurate depth prediction. (InVis) evaluates the quality of a model’s understanding of scene semantics; it requires a holistic understanding of semantic and geometric properties to reasonably inpaint missing regions. To determine the (Vis) and (InVis) regions, we use the GT depth in the input view to obtain a binary mask of which pixels are visible in the target image. This is only possible on Matterport3D (RealEstate10K has no GT depth).

Table 1 and Fig. 3 report results on Matterport3D and RealEstate10K. On both datasets, we perform better than the baselines on all metrics and under all conditions, demonstrating the utility of both our 3D representation and our inpainting module. These results demonstrate that the differentiable renderer is important for training the depth model (rows 1-4). Our encoder-decoder setup is also shown to be important, as it significantly improves the baseline's performance (rows 8-9). Qualitatively, our model preserves fine detail and predicts 3D structure better than the baselines.

System comparison on RealEstate10K. We compare our system to 3DView and StereoMag [63] in Table 2 and Fig. 4. Our model performs better than 3DView, despite that method having been trained with hundreds of thousands of depth images. We hypothesise that this gap in performance is due to 3DView's depth prediction not generalising well: their dataset consists mostly of close-ups of objects, whereas ours consists of scenes captured indoors and outdoors. This baseline demonstrates that using an explicit 3D representation is problematic when the test domain differs from the training domain, as the depth predictor cannot generalise. Finally, our method of inpainting is better than that of 3DView, which produces blurry results. [63] does not inpaint unseen regions in the generated image.

Comparison with upper bounds. We compare our model to SynSin (w/ GT) and SynSin (sup. by GT) in Table 1. These models either use GT depth or are supervised by GT depth; they are upper bounds on performance. While there is a performance gap between SynSin and SynSin (w/ GT) under the (Vis) condition, this gap shrinks for the (InVis) condition. Interestingly, SynSin trained with no depth supervision performs nearly as well as SynSin (sup. by GT) under both the (Vis) and (InVis) conditions; our model also generalises better to the Replica dataset. This experiment demonstrates that having true depth during training does not necessarily give a large boost in a downstream task and can even hurt generalisation performance. It validates our decision to use an end-to-end system (as opposed to using depth estimated by a self-supervised method).

Generalisation to Replica. Given the models trained on Matterport3D, we evaluate generalisation performance (with no further fine-tuning) on Replica in Table 1. Replica contains additional types of rooms (e.g. office and hotel rooms) and is higher quality than Matterport (it has fewer geometric and lighting artefacts and more complex textures). SynSin generalises better to this unseen dataset; qualitatively, it introduces fewer artefacts (Fig. 5).

Figure 5: Comparison of SynSin against the baseline, Vox w/ ours, at generalising to higher-resolution images and to Replica [47]. Ours generalises better, with fewer artefacts.

4.4 Generalisation to higher resolution images

We also evaluate generalisation to higher image resolutions in Table 4 and Fig. 5. SynSin can be applied to higher-resolution images without any further training and with limited degradation in performance. This ability to generalise is due to the flexible 3D representation in our approach: the networks are fully convolutional, and the 3D point cloud can be sampled at any resolution to maintain the resolution of the features. As a result, it is straightforward at test time to apply a network trained on a smaller image size to one of a different size. Unlike our approach, the voxel baseline suffers a dramatic performance drop when applied to a higher-resolution image, presumably a result of the heavy downsampling and imprecision that come from representing the world as a coarse voxel grid.
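To illustrate why a fully convolutional architecture transfers across resolutions, the toy sketch below applies a single fixed 3x3 filter to inputs of different sizes; the mean filter and sizes are illustrative stand-ins, not our trained weights:

```python
import numpy as np

def conv3x3(x, k):
    """Naive 'same' 3x3 convolution with zero padding: the output has the
    same spatial size as the input, whatever that size is."""
    h, w = x.shape
    xp = np.pad(x, 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

# One fixed 3x3 filter; the same weights apply unchanged at any resolution.
k = np.full((3, 3), 1.0 / 9.0)
small = conv3x3(np.ones((8, 8)), k)     # 8x8 input  -> 8x8 output
large = conv3x3(np.ones((32, 32)), k)   # 32x32 input -> 32x32 output
```

Because no layer hard-codes a spatial size, the same applies to a stack of such layers: a network trained at one resolution can be evaluated at another.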

4.5 Depth predictions

We qualitatively evaluate the quality of the learned 3D representation in Fig. 6 for SynSin trained on RealEstate10K. The accuracy of the depth prediction only matters insofar as it improves results on the view-synthesis task. However, we hypothesise that the quality of the generated images and that of the predicted depth maps are correlated, so the depth maps should give some insight into the quality of the learned models. The depth map predicted by our method is higher resolution and more realistic than those predicted by the baseline methods. Additionally, our differentiable point cloud renderer appears to improve depth quality over using a hard z-buffer or a smaller footprint. However, small objects and finer details are not accurately recreated, probably because these structures have a limited impact on the generated images.

Figure 6: Recovered depth predictions for both our method and the baselines. The baselines predict a less accurate and coarser depth. Using a smaller radius or hard z-buffer produces qualitatively similar or worse depth maps.

4.6 User study: Animating still images

Finally, we task SynSin with synthesising images along a trajectory. Given an initial input frame from a video in RealEstate10K, SynSin generates images at the camera positions of the 30 subsequent frames. While changes are hard to see in a figure (e.g. Fig. LABEL:fig:teaser), the supplementary videos clearly show smooth motion and 3D effects. This demonstrates that SynSin can generate reasonable videos despite being trained purely on images. To evaluate the quality of the generated videos, we perform an AMT user study.

We randomly choose 100 trajectories and generate videos using SynSin and the Vox w/ ours baseline. Five users are asked to rate which method’s video is more realistic; for each video, we take the majority vote to determine the better video. We report the percentage of times users choose a given method in Table 4.

Either-or setup (E-O): Users rate whether the baseline or our generated video is more realistic.

Either-or-neither setup (E-O-N): Users rate whether the baseline or our generated video is more realistic, or whether the two are equally realistic/unrealistic (neither). When taking the majority vote, if there is no majority, neither video is considered more or less realistic.

In both cases, users prefer our method, presumably because our videos have smoother motion and fewer artefacts.
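The vote aggregation described above can be sketched as follows (the function name and rating labels are ours, chosen for illustration):

```python
from collections import Counter

def majority_vote(ratings):
    """Aggregate the five per-video ratings. Each rating is 'ours',
    'baseline', or (in the E-O-N setup only) 'neither'. If no option wins
    an outright majority, neither video is considered more realistic."""
    counts = Counter(ratings)
    winner, n = counts.most_common(1)[0]
    return winner if n > len(ratings) // 2 else "neither"
```

For example, `majority_vote(["ours", "ours", "ours", "baseline", "baseline"])` yields `"ours"`, while a 2-2-1 split yields `"neither"`.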

5 Conclusion

We introduced SynSin, an end-to-end model for performing single image view synthesis. At the heart of our system are two key components: first a differentiable neural point cloud renderer, and second a generative refinement module. We verified that our approach can be learned end-to-end on multiple realistic datasets, generalises to unseen scenes, can be applied directly to higher image resolutions, and can be used to generate reasonable videos along a given trajectory. While we have introduced SynSin in the context of view synthesis, we note that using a neural point cloud renderer within a generative model has applications in other tasks.

6 Acknowledgements

The authors would like to thank Johannes Kopf for help running and sharing code, and Manolis Savva and Erik Wijmans for help with the Habitat dataset. Finally, we would like to thank Sebastien Ehrhardt, Oliver Groth, and Weidi Xie for editing and helpful feedback on paper drafts.

We give additional qualitative results in Section A, additional architectural details in Section B, additional information about baselines in Section C, and information about datasets in Section D. Finally, we discuss some choices that did not work in Section E.

Appendix A Additional experimental results

We give additional qualitative results on RealEstate10K (Fig. 7-8), Replica (Fig. 9), and Matterport3D (Fig. 10). The supplementary video shows sample videos of a model generating images along a given trajectory. We compare SynSin to the baseline (Vox w/ ours); SynSin has smoother motion with fewer artefacts. We also visualise additional depth prediction results in Fig. 11-12.


[Figure: Input Img | Target Img | StereoMag [63] | Vox w/ ours | SynSin]

Figure 7: Additional results on RealEstate10K [63]. Zoom in for details.

[Figure: Input Img | Target Img | StereoMag [63] | Vox w/ ours | SynSin]

Figure 8: Additional results on RealEstate10K [63]. Zoom in for details.

Appendix B Additional architectural details

Here we give more information about the precise architectural details used to build the components of our model.

ResNet blocks.

Our spatial feature network and refinement networks are composed of ResNet blocks. The blocks are the same as those used in [3] (Appendix B, Fig. 15(b)), reproduced in Fig. 13, but we consider three setups: the block may increase the resolution of the features using an upsample layer, as in the original paper [3] (Fig. 13(a)); decrease the resolution by replacing the upsample layer with an average pooling layer (Fig. 13(b)); or maintain the resolution by replacing the upsample layer with an identity layer (Fig. 13(c)).
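A minimal sketch of the three block variants, assuming nearest-neighbour upsampling and with a scalar weight standing in for the block's learned convolutions (this is schematic, not our implementation):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def avgpool2x(x):
    """2x2 average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def identity(x):
    return x

def resnet_block(x, resample, w=0.1):
    """Schematic residual block: both the skip path and the residual path
    pass through the same resampling layer, so their shapes always match.
    `w` stands in for the conv-norm-activation stack of the real block."""
    skip = resample(x)
    residual = w * resample(x)
    return skip + residual
```

Swapping `resample` between `upsample2x`, `avgpool2x`, and `identity` gives the three setups of Fig. 13(a)-(c).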

Spatial feature network.

ResNet blocks are stacked together to form the embedding network. In particular, we use the setup in Fig. 14(a).

Refinement network.

ResNet blocks are stacked together to form the decoder network. In particular, we use the setup in Fig. 14(b).

Depth regressor.

The depth regressor network uses a UNet architecture, as illustrated in Fig. 15.
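As a sanity check on the hour-glass shape, the per-block spatial sizes can be traced from the conv parameters in Fig. 15 (stride-2/kernel-4/padding-1 encoder convs; 2x upsampling followed by a resolution-preserving conv in the decoder). The four-level depth and 256-pixel input below are illustrative, not the exact configuration:

```python
def enc_size(n):
    """Spatial size after an Enc block: stride-2 conv, kernel 4, padding 1."""
    return (n + 2 * 1 - 4) // 2 + 1  # halves even sizes exactly

def dec_size(n):
    """Spatial size after a Dec block: 2x bilinear upsample followed by a
    stride-1 / kernel-3 / padding-1 conv, which preserves resolution."""
    return 2 * n

# Illustrative 4-level hour-glass on a 256-pixel input.
down = [256]
for _ in range(4):
    down.append(enc_size(down[-1]))   # down: [256, 128, 64, 32, 16]
up = [down[-1]]
for _ in range(4):
    up.append(dec_size(up[-1]))       # up: [16, 32, 64, 128, 256]
```

Because each Dec block exactly inverts an Enc block's downsampling, the UNet output matches the input resolution.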

Additional details on the perceptual loss.

We follow the perceptual loss used in [36].


[Figure: Input Img | Target Img | Vox w/ ours | Vox w/ unet | SynSin]

Figure 9: Additional results on Replica [47]. Zoom in for details.

[Figure: Input Img | Target Img | Vox w/ ours | Vox w/ unet | SynSin]

Figure 10: Additional results on Matterport3D [4]. Zoom in for details.

[Figure: Input Img | SynSin | SynSin (PC/) | SynSin (PC/)]

Figure 11: Additional depth predictions on RealEstate10K [63]. We also visualise the point cloud (PC) and the rotated point cloud at . (Note that the point cloud in the model is actually a point cloud of features, not RGB values.)

[Figure: Input Img | SynSin | SynSin (PC/) | SynSin (PC/)]

Figure 12: Additional depth predictions on RealEstate10K [63]. We also visualise the point cloud (PC) and the rotated point cloud at . (Note that the point cloud in the model is actually a point cloud of features, not RGB values.)

Appendix C Additional details on baselines

In this section, we give further information about the baselines used.

Im to im.

We follow the architecture of [64]. However, [64] only considers discrete rotations about the azimuth and a small set of changes in elevation, so it takes four values as input: the sin and cos values of the azimuth and elevation. Our datasets, in contrast, include rotation in all three directions as well as translational motion. As a result, we modify their angle encoder to take 12 values (as opposed to four) and pass the change in viewpoint to the angle encoder. The network is visualised in Fig. 16.

Vox w/ unet.

This baseline is based on [43], which represents 3D shape in a neural network using a voxel representation. Note that they train one model per instance, so their model only generalises to that one object. Their overall setup is as follows. An image is passed through an encoder (e.g. our spatial feature network) to obtain a set of features. The features are projected into a voxel grid, which is transformed and projected into the new view. The features are accumulated using an occlusion network, which acts as a pseudo depth predictor and predicts the occupancy of the voxels. The predicted occupancy is used to re-weight and combine the features. The result is then passed to a decoder (e.g. our refinement network), which predicts the scene at the new view. Finally, the generated image is compared to the true image using discriminators and photometric losses.

To reimplement this approach, we follow their architectural choices and use a UNet-style architecture for all network components (the spatial feature network, refinement network, and occlusion network). However, we use the discriminators and photometric losses used to train SynSin, to ensure that both methods are fair in terms of the discriminator. The details of the encoder/decoder setup are given in Fig. 17. The occupancy network is a 3D UNet that takes as input the rotated voxels and predicts an occupancy for each voxel location; these are then normalised using a softmax layer over the depth dimension. The details are given in Fig. 18. We use their setup but train the network to generate new views of a scene given a single input image.
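The occupancy-based feature combination described above (softmax over the depth dimension, then a weighted sum) can be sketched as follows; shapes and names are illustrative, not the baseline's actual code:

```python
import numpy as np

def softmax(z, axis):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def combine_over_depth(features, occupancy_logits):
    """Collapse a (D, C, H, W) feature volume to a (C, H, W) image plane:
    the predicted occupancies are normalised with a softmax over the depth
    dimension D and used as per-location weights for the features."""
    w = softmax(occupancy_logits, axis=0)        # (D, H, W), sums to 1 over D
    return (w[:, None] * features).sum(axis=0)   # (C, H, W)
```

The softmax acts as a soft, differentiable visibility decision: at each pixel, the depth slices the occlusion network believes are occupied dominate the combined feature.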

Vox w/ ours.

Instead of using the UNet-style spatial feature and refinement networks of Vox w/ unet, this baseline uses a sequence of ResNet blocks, as described in Fig. 19. The ResNet blocks in the spatial feature network downsample the image to the appropriate size; the refinement network similarly upsamples the projected features to the appropriate image size. We also give this setup a larger capacity, to ensure that any advantage of our model is due to its 3D representation. The network was trained with a lower learning rate (lr=0.0004) than our model (lr=0.001), as we found that it struggled to learn with the higher learning rate.

Other setups. We experimented with other ResNet block sequences and multiple learning rates when creating this baseline. Instead of downsampling the features within the encoder (e.g. the spatial feature network), we can use the same spatial feature network as SynSin and then downsample the resulting features. Similarly, instead of upsampling the features within the decoder (e.g. the refinement network), we can upsample the transformed features before the refinement network and so use the same refinement network as in SynSin. We found that the results were similar to those of the model used in the paper on RealEstate10K but worse on Matterport.

We additionally found that the results were highly dependent on the learning rate for this model.

3DView.

This baseline is based on a depth predictor (e.g. [28]), so 3DView predicts depth only up to a scale ambiguity. As the depth is only predicted up to a scale, we generate images for multiple candidate scales for each test image and report results for the best image.
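The scale sweep for this baseline can be sketched as follows; `render`, `error`, and the candidate scales are hypothetical stand-ins, not the paper's actual interface:

```python
def best_over_scales(render, reference, error,
                     scales=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Depth is predicted only up to scale, so render the target view under
    several candidate depth scales and keep the image closest to the
    reference (ground-truth target) under the given error metric."""
    return min((render(s) for s in scales),
               key=lambda img: error(img, reference))
```

For instance, with a toy `render` of `lambda s: s * 4.0`, a reference of `3.0`, and absolute error, the sweep selects the rendering at scale 0.75.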

(a) ResNet block.
(b) ResNet block with an average pool block.
(c) ResNet block with an identity block.
Figure 13: An overview of ResNet blocks: (a) the basic ResNet block; (b) the upsample layer replaced by an average pool layer; (c) the upsample layer replaced by an identity layer.
(a) Spatial feature network.
(b) Refinement network.
Figure 14: Our sequence of ResNet blocks in the spatial feature and refinement networks.
Figure 15: Depth regressor network. An Enc Block consists of a sequence of Leaky ReLU, convolution (stride 2, padding 1, kernel size 4), and batch normalisation layers. A Dec Block consists of a sequence of ReLU, 2x bilinear upsampling, convolution (stride 1, padding 1, kernel size 3), and batch normalisation layers (except for the final layer, which has no batch normalisation layer).
Figure 16: An overview of the image to image network. A Conv Layer consists of a sequence of a convolutional layer (stride 2, padding 1, filter size 3), ReLU, and batch normalisation layer. A Linear Layer consists of a sequence of a linear layer, ReLU, and batch normalisation layer. A Dec block consists of a sequence of a convolutional layer (stride 1, padding 1, filter size 3), ReLU, batch normalisation layer and upsample layer (except for the last, which consists of simply a convolutional layer).
(a) Encoder network.
(b) Decoder network.
Figure 17: The encoder and decoder network for the UNet style encoder/decoder setup. An Enc block is a sequence of a LeakyReLU, convolutional layer (stride 2, padding 1, kernel size 4) and batch normalisation layer. A Dec block is a sequence of ReLU, bilinear upsampling layer, convolutional layer (stride 1, padding 1, kernel size 3), and batch normalisation layer (except for the last layer which has no batch normalisation).
Figure 18: The 3D UNet for predicting the occupancy of voxels. An Enc block consists of a sequence of a LeakyReLU, convolutional layer (stride 2, padding 1, kernel size 4) and batch normalisation layer. A Dec block consists of a sequence of ReLU, bilinear upsampling layer, convolutional layer (stride 1, padding 1, kernel size 3), and batch normalisation layer (except for the last layer which has no batch normalisation).
(a) Encoder network.
(b) Decoder network.
Figure 19: The spatial feature and refinement networks for the ResNet style setup in the Vox w/ ours baseline.

Appendix D Additional information about datasets


For Matterport, the minimum depth is and the maximum depth .


For RealEstate10K, the minimum depth is and the maximum depth is .

Appendix E Negative results

Model setup

  • We experimented with using a UNet architecture instead of a sequence of ResNet blocks for the spatial feature network and refinement network. This led to much worse results and was more challenging to train.

  • Other settings for the differentiable renderer: We tried a larger radius, , but this both takes longer to train and gives worse results.




  1. K. Aliev, D. Ulyanov and V. Lempitsky (2019) Neural point-based graphics. arXiv preprint arXiv:1906.08240.
  2. M. Bertalmio, G. Sapiro, V. Caselles and C. Ballester (2000) Image inpainting. In SIGGRAPH.
  3. A. Brock, J. Donahue and K. Simonyan (2019) Large scale GAN training for high fidelity natural image synthesis. In ICLR.
  4. A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng and Y. Zhang (2017) Matterport3D: learning from RGB-D data in indoor environments. In International Conference on 3D Vision (3DV).
  5. G. Chaurasia, S. Duchene, O. Sorkine-Hornung and G. Drettakis (2013) Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics (TOG).
  6. W. Chen, Z. Fu, D. Yang and J. Deng (2016) Single-image depth perception in the wild. In NeurIPS.
  7. X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever and P. Abbeel (2016) InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In NeurIPS.
  8. X. Chen, J. Song and O. Hilliges (2019) Monocular neural image based rendering with continuous view control. In ICCV.
  9. I. Choi, O. Gallo, A. Troccoli, M. H. Kim and J. Kautz (2019) Extreme view synthesis. In ICCV.
  10. P. Debevec, Y. Yu and G. Borshukov (1998) Efficient view-dependent image-based rendering with projective texture-mapping. In Rendering Techniques.
  11. D. Eigen, C. Puhrsch and R. Fergus (2014) Depth map prediction from a single image using a multi-scale deep network. In NeurIPS.
  12. S. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka and K. Gregor (2018) Neural scene representation and rendering. Science 360 (6394).
  13. J. Flynn, M. Broxton, P. Debevec, M. DuVall, G. Fyffe, R. Overbeck, N. Snavely and R. Tucker (2019) DeepView: view synthesis with learned gradient descent. In CVPR.
  14. A. Geiger, P. Lenz, C. Stiller and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. International Journal of Robotics Research (IJRR).
  15. G. Gkioxari, J. Malik and J. Johnson (2019) Mesh R-CNN. In ICCV.
  16. T. Groueix, M. Fisher, V. G. Kim, B. C. Russell and M. Aubry (2018) AtlasNet: a papier-mâché approach to learning 3D surface generation. In CVPR.
  17. P. Hedman and J. Kopf (2018) Instant 3D photography. ACM Transactions on Graphics (TOG).
  18. P. Hedman, J. Philip, T. Price, J. Frahm, G. Drettakis and G. Brostow (2018) Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (TOG).
  19. E. Insafutdinov and A. Dosovitskiy (2018) Unsupervised learning of shape and pose with differentiable point clouds. In NeurIPS.
  20. W. Jiang, W. Sun, A. Tagliasacchi, E. Trulls and K. M. Yi (2019) Linearized multi-sampling for differentiable image transformation. In ICCV.
  21. A. Kanazawa, S. Tulsiani, A. A. Efros and J. Malik (2018) Learning category-specific mesh reconstruction from image collections. In ECCV.
  22. T. Karras, S. Laine and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In CVPR.
  23. A. Knapitsch, J. Park, Q. Zhou and V. Koltun (2017) Tanks and temples: benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (TOG).
  24. L. Kobbelt and M. Botsch (2004) A survey of point-based techniques in computer graphics. Computers & Graphics.
  25. J. Kopf, F. Langguth, D. Scharstein, R. Szeliski and M. Goesele (2013) Image-based rendering in the gradient domain. ACM Transactions on Graphics (TOG).
  26. T. D. Kulkarni, W. F. Whitney, P. Kohli and J. Tenenbaum (2015) Deep convolutional inverse graphics network. In NeurIPS.
  27. S. Laine and T. Karras (2011) High-performance software rasterization on GPUs. In Proc. ACM SIGGRAPH Symposium on High Performance Graphics.
  28. Z. Li and N. Snavely (2018) MegaDepth: learning single-view depth prediction from internet photos. In CVPR.
  29. S. Liu, W. Chen, T. Li and H. Li (2019) Soft rasterizer: differentiable rendering for unsupervised single-view mesh reconstruction. In ICCV.
  30. S. Lombardi, T. Simon, J. Saragih, G. Schwartz, A. Lehrmann and Y. Sheikh (2019) Neural volumes: learning dynamic renderable volumes from images. ACM Transactions on Graphics (TOG).
  31. R. Martin-Brualla, R. Pandey, S. Yang, P. Pidlypenskyi, J. Taylor, J. Valentin, S. Khamis, P. Davidson, A. Tkach and P. Lincoln (2018) LookinGood: enhancing performance capture with real-time neural re-rendering. ACM Transactions on Graphics (TOG).
  32. M. Meshry, D. B. Goldman, S. Khamis, H. Hoppe, R. Pandey, N. Snavely and R. Martin-Brualla (2019) Neural rerendering in the wild. In CVPR.
  33. T. Nguyen-Phuoc, C. Li, L. Theis, C. Richardt and Y. Yang (2019) HoloGAN: unsupervised learning of 3D representations from natural images. In ICCV.
  34. S. Niklaus, L. Mai, J. Yang and F. Liu (2019) 3D Ken Burns effect from a single image. ACM Transactions on Graphics (TOG).
  35. E. Park, J. Yang, E. Yumer, D. Ceylan and A. C. Berg (2017) Transformation-grounded image generation network for novel 3D view synthesis. In CVPR.
  36. T. Park, M. Liu, T. Wang and J. Zhu (2019) Semantic image synthesis with spatially-adaptive normalization. In CVPR.
  37. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein and L. Antiga (2019) PyTorch: an imperative style, high-performance deep learning library. In NeurIPS.
  38. E. Penner and L. Zhang (2017) Soft 3D reconstruction for view synthesis. ACM Transactions on Graphics (TOG).
  39. M. Sainz and R. Pajarola (2004) Point-based rendering techniques. Computers & Graphics.
  40. M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh and D. Batra (2019) Habitat: a platform for embodied AI research. In ICCV.
  41. S. M. Seitz, B. Curless, J. Diebel, D. Scharstein and R. Szeliski (2006) A comparison and evaluation of multi-view stereo reconstruction algorithms. In CVPR.
  42. D. Shin, Z. Ren, E. B. Sudderth and C. C. Fowlkes (2019) Multi-layer depth and epipolar feature transformers for 3D scene reconstruction. In CVPR.
  43. V. Sitzmann, J. Thies, F. Heide, M. Nießner, G. Wetzstein and M. Zollhofer (2019) DeepVoxels: learning persistent 3D feature embeddings. In CVPR.
  44. V. Sitzmann, M. Zollhöfer and G. Wetzstein (2019) Scene representation networks: continuous 3D-structure-aware neural scene representations. In NeurIPS.
  45. P. P. Srinivasan, R. Tucker, J. T. Barron, R. Ramamoorthi, R. Ng and N. Snavely (2019) Pushing the boundaries of view extrapolation with multiplane images. In CVPR.
  46. P. P. Srinivasan, T. Wang, A. Sreelal, R. Ramamoorthi and R. Ng (2017) Learning to synthesize a 4D RGBD light field from a single image. In ICCV.
  47. J. Straub, T. Whelan, L. Ma, Y. Chen, E. Wijmans, S. Green, J. J. Engel, R. Mur-Artal, C. Ren, S. Verma, A. Clarkson, M. Yan, B. Budge, Y. Yan, X. Pan, J. Yon, Y. Zou, K. Leon, N. Carter, J. Briales, T. Gillingham, E. Mueggler, L. Pesqueira, M. Savva, D. Batra, H. M. Strasdat, R. D. Nardi, M. Goesele, S. Lovegrove and R. Newcombe (2019) The Replica dataset: a digital replica of indoor spaces. arXiv preprint arXiv:1906.05797.
  48. S. Sun, M. Huh, Y. Liao, N. Zhang and J. J. Lim (2018) Multi-view to novel view: synthesizing novel views with self-learned confidence. In ECCV.
  49. M. Tatarchenko, A. Dosovitskiy and T. Brox (2016) Multi-view 3D models from single images with a convolutional network. In ECCV.
  50. P. Teterwak, A. Sarna, D. Krishnan, A. Maschinot, D. Belanger, C. Liu and W. T. Freeman (2019) Boundless: generative adversarial networks for image extension. In ICCV.
  51. S. Tulsiani, S. Gupta, D. F. Fouhey, A. A. Efros and J. Malik (2018) Factoring shape, pose, and layout from the 2D image of a 3D scene. In CVPR.
  52. S. Tulsiani, R. Tucker and N. Snavely (2018) Layer-structured 3D scene inference via view synthesis. In ECCV.
  53. S. Tulsiani, T. Zhou, A. A. Efros and J. Malik (2017) Multi-view supervision for single-view reconstruction via differentiable ray consistency. In CVPR.
  54. T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR.
  55. Y. Wang, X. Tao, X. Shen and J. Jia (2019) Wide-context semantic image extrapolation. In CVPR.
  56. D. E. Worrall, S. J. Garbin, D. Turmukhambetov and G. J. Brostow (2017) Interpretable transformations with encoder-decoder networks. In ICCV.
  57. Z. Xu, S. Bi, K. Sunkavalli, S. Hadap, H. Su and R. Ramamoorthi (2019) Deep view synthesis from sparse photometric images. ACM Transactions on Graphics (TOG).
  58. X. Yan, J. Yang, E. Yumer, Y. Guo and H. Lee (2016) Perspective transformer nets: learning single-view 3D object reconstruction without 3D supervision. In NeurIPS.
  59. W. Yifan, F. Serena, S. Wu, C. Öztireli and O. Sorkine-Hornung (2019) Differentiable surface splatting for point-based geometry processing. ACM Transactions on Graphics (TOG).
  60. H. Zhang, I. Goodfellow, D. Metaxas and A. Odena (2019) Self-attention generative adversarial networks.
  61. R. Zhang, P. Isola, A. A. Efros, E. Shechtman and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR.
  62. T. Zhou, M. Brown, N. Snavely and D. G. Lowe (2017) Unsupervised learning of depth and ego-motion from video. In CVPR.
  63. T. Zhou, R. Tucker, J. Flynn, G. Fyffe and N. Snavely (2018) Stereo magnification: learning view synthesis using multiplane images. ACM Transactions on Graphics (TOG).
  64. T. Zhou, S. Tulsiani, W. Sun, J. Malik and A. A. Efros (2016) View synthesis by appearance flow. In ECCV.
  65. C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder and R. Szeliski (2004) High-quality video view interpolation using a layered representation. ACM Transactions on Graphics (TOG).