
# 3D Ken Burns Effect from a Single Image

Simon Niklaus (Portland State University), Long Mai (Adobe Research), Jimei Yang (Adobe Research), and Feng Liu (Portland State University)
###### Abstract.

The Ken Burns effect allows animating still images with a virtual camera scan and zoom. Adding parallax, which results in the 3D Ken Burns effect, enables significantly more compelling results. Creating such effects manually is time-consuming and demands sophisticated editing skills. Existing automatic methods, however, require multiple input images from varying viewpoints. In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud. Experiments with a wide variety of image content show that our method enables realistic synthesis results. Our study demonstrates that our system allows users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.

ken burns, novel view synthesis
copyright: acmlicensed; journal: TOG; journalyear: 2019; journalvolume: 38; journalnumber: 6; article: 184; publicationmonth: 11; doi: 10.1145/3355089.3356528; ccs: Computing methodologies: Scene understanding; Computational photography; Image-based rendering

## 1. Introduction

Advanced image- and video-editing tools allow artists to freely augment photos with depth information and to animate virtual cameras, enabling motion parallax as the camera scans over a still scene. This cinematic effect, which we refer to as 3D Ken Burns effect, has become increasingly popular in documentaries, commercials, and other media. Compared to the traditional Ken Burns effect which animates images with 2D scan and zoom, this 3D counterpart enables much more compelling experiences. However, creating such effects from a single image is painstakingly difficult: The photo must be manually separated into different segments, which then have to be carefully arranged in the virtual 3D space, and inpainting needs to be performed to avoid holes when the virtual camera moves away from its origin. In this paper, we target the problem of automatically synthesizing the 3D Ken Burns effect from a single image. We further optionally incorporate simple user-specified camera paths, parameterized by the desired start- and end-view, to grant the user more control over the resulting effect.

This problem of synthesizing realistic moving-camera effects from a single image is highly challenging. Two fundamental concerns need to be addressed. First, to synthesize a new view from a novel camera position, the scene geometry of the original view needs to be recovered accurately. Second, from the predicted scene geometry, a temporally consistent sequence of novel views has to be synthesized which requires dealing with disocclusion. We address both challenges and provide a complete system that enables synthesizing the 3D Ken Burns effect from a single image.

To synthesize the 3D Ken Burns effect, our method first estimates the depth map from the input image. While existing depth prediction methods have rapidly improved over the past few years, monocular depth estimation remains an open problem. We observed that existing depth prediction methods are not particularly suitable for view synthesis tasks such as ours. Specifically, we identified three critical issues of existing depth prediction methods that need to be addressed to make them applicable to 3D Ken Burns synthesis: geometric distortions, semantic distortions, and inaccurate depth boundaries. Based on this observation, we designed a depth estimation pipeline along with the training framework dedicated to addressing these issues. To this end, we developed a semantic-aware neural network for depth estimation and train the network on our newly constructed large-scale synthetic dataset which contains accurate ground truth depth of various photo-realistic scenes.

From the input image and the associated depth map, a sequence of novel views has to be synthesized to produce an output video for the 3D Ken Burns effect. The synthesis process needs to handle three requirements. First, as the camera moves away from its original position, disocclusion necessarily happens. The missing information needs to be filled in with geometrically consistent content. Second, the novel view renderings need to be synthesized in a temporally consistent manner. The straightforward approach of filling in the missing information and synthesizing each view independently is not only computationally inefficient but also temporally unstable. Third, we have found that professional artists who use our system manually produce the most compelling effects when they are able to immediately perceive the result of their interaction. The synthesis thus needs to be real-time in order to best support such users. To address these requirements, we propose a simple yet effective solution: We map the input image to points in a point cloud according to the estimated depth. We then perform color- and depth-inpainting of novel view renderings at extreme views like at the beginning and at the end of the virtual camera path. This allows us to extend the point cloud with geometrically sound information. The extended point cloud can then be used to synthesize all novel view renderings in an efficient and temporally consistent manner.

Together, our depth prediction pipeline and novel view synthesis approach provide a complete system for generating the 3D Ken Burns effect from a single image. This system provides a fully automatic solution where the start- and end-view of the virtual camera path are automatically determined so as to minimize the amount of disocclusion. In addition to the fully automatic mode, our system also provides an interactive mode in which users can control the start- and end-view through an intuitive user interface. This allows a more fine-grained control over the resulting 3D Ken Burns effect, thus supporting users in their artistic freedom.

The key contributions of this paper are as follows. We introduce the problem of 3D Ken Burns synthesis from a single image which enables automatic video generation in the form of a moving-camera effect. We leverage existing computer vision technologies and augment them to achieve plausible synthesis results. Our system offers a fully automatic mode which generates a convincing effect without any user feedback, and a view control mode which allows users to control the effect with simple interactions. Experiments on a wide range of real-world imagery demonstrate the effectiveness of our system. Our study shows that our system enables users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.

## 2. Related Work

### 2.1. Novel View Synthesis

Novel view synthesis focuses on generating novel views of scenes or 3D objects from input images taken from a sparse set of viewpoints. It is important for a wide range of applications, including virtual and augmented reality [Hedman et al., 2017; Huang et al., 2017; Rematas et al., 2018], 3D display technologies [Didyk et al., 2013; Kellnhofer et al., 2017; Lai et al., 2016; Ranieri et al., 2012; Xie et al., 2016], and image- or video-manipulation [Klose et al., 2015; Kopf, 2016; Lang et al., 2010; Liu et al., 2009; Rahaman and Paul, 2018; Zitnick et al., 2004]. Novel view synthesis is typically solved using image based rendering techniques [Kang et al., 2006], with recent approaches allowing for high-quality view synthesis results [Chaurasia et al., 2013; Chaurasia et al., 2011; Hedman et al., 2017; Hedman and Kopf, 2018; Hedman et al., 2018; Penner and Zhang, 2017]. With the emergence of deep neural networks, learning-based techniques have become an increasingly popular tool for novel view synthesis [Flynn et al., 2016; Ji et al., 2017; Kalantari et al., 2016; Meshry et al., 2019; Mildenhall et al., 2019; Sitzmann et al., 2019; Srinivasan et al., 2019; Thies et al., 2019; Thies et al., 2018; Xu et al., 2019; Zhou et al., 2018]. To enable high-quality synthesis results, existing methods typically require multiple input views [Kang et al., 2006; Penner and Zhang, 2017]. In this paper, we target an extreme form of novel view synthesis which aims to generate novel views along the whole camera path given only a single input image.

### 2.2. Learning-based View Synthesis from a Single Image

Recent novel view synthesis methods approach the single-image setting using deep learning [Tatarchenko et al., 2015; Zhou et al., 2016]. Synthesizing novel views from a single image is inherently challenging and existing methods are often only applicable to specific scene types [Habtegebrial et al., 2018; Liu et al., 2018; Nguyen-Phuoc et al., 2019], 3D object models [Olszewski et al., 2019; Park et al., 2017; Rematas et al., 2017; Yan et al., 2016; Yang et al., 2015], or domain-specific light field imagery [Srinivasan et al., 2017]. Most relevant to our work are methods that estimate the scene geometry of the input image via depth [Cun et al., 2019; Liu et al., 2018], normal maps [Liu et al., 2018], or layered depth [Tulsiani et al., 2018]. While we perform depth-based view synthesis as well, we focus on predicting depth maps suitable for high-quality view synthesis. Specifically, we directly improve the estimated depth and thus the estimated scene geometry to suppress artifacts such as geometric distortions and to tailor the depth prediction to the task of view synthesis.

### 2.3. Single-image Depth Estimation

Single-image depth estimation has gained a lot of research interest over the past decades [Koch et al., 2018]. Recent advances in deep neural networks along with the introduction of annotated depth image datasets [Abarghouei and Breckon, 2018; Chen et al., 2016; Laina et al., 2016; Li and Snavely, 2018; Saxena et al., 2009; Silberman et al., 2012; Xian et al., 2018; Zheng et al., 2018] enabled large improvements in monocular depth estimation. Another promising direction is the use of spatial or temporal pixel-correspondence to train for depth estimation in a self-supervised manner [Garg et al., 2016; Godard et al., 2017; Gordon et al., 2019; Li et al., 2019; Luo et al., 2018; Ummenhofer et al., 2017; Zhou et al., 2017]. However, depth estimation from a single image remains an open research problem. The quality of the predicted depth maps varies depending on the image type and the depth maps from existing methods are in many scenarios not suitable for generating high-quality novel view synthesis results due to geometric and semantic distortions as well as inaccurate depth boundaries. To support the 3D Ken Burns effect synthesis, we develop our depth prediction, adjustment, and refinement to specifically address those issues.

### 2.4. Creative Effect Synthesis

With 3D scene information such as depth or scene layouts, a range of creative camera effects can be produced from the input image, such as depth-of-field synthesis [Wadhwa et al., 2018; Wang et al., 2018], 2D-to-3D conversion [Xie et al., 2016], and photo pop-up [Hoiem et al., 2005; Srivastava et al., 2009]. In this paper, we focus on synthesizing the 3D Ken Burns effect which is a camera motion effect. Our desired output is a whole video corresponding to a given camera path. A number of methods have been proposed in the past to enable camera fly-through effects from a single image. [Horry et al., 1997] present a semi-automatic system that lets users represent the scene with a simplified spidery mesh after a manual foreground segmentation process. The image is then projected onto that simplified scene geometry which allows flying a camera through it to obtain certain 3D illusions. Based on a similar idea, follow-up work enriches the scene representation to handle scenes with more than one vanishing point and more diverse camera motions [Kang et al., 2001; Li and Huang, 2001]. While realistic effects can be achieved for certain types of images, the simplified scene representation is often unable to handle general types of images and still requires manual segmentation which demands significant user effort. Most related to our work is the system from [Zheng et al., 2009] which synthesizes a video with realistic parallax from still images. This method, however, requires multiple images as input. We focus on a more challenging problem of synthesizing the effect from a single image.

### 2.5. Image-to-Video Generation

The intended output of our method is a video representing the 3D Ken Burns effect. Our research is thus also related to image-to-video generation, an increasingly popular topic in computer vision. Existing work in this area focuses on developing generative models to predict motions in video frames given one or a few starting frames [Hsieh et al., 2018; Lee et al., 2018; Liang et al., 2017; Mathieu et al., 2015; Reda et al., 2018; Vondrick et al., 2016; Xu et al., 2018]. While promising results have been achieved for synthesizing object motion in videos with static background, they are often not suitable to synthesize realistic camera motion effects as in our problem.

## 3. 3D Ken Burns Effect Synthesis

Our framework consists of two main components, namely the depth estimation pipeline (Figure 3), and the novel view synthesis pipeline (Figure 7). In this section, we describe each component in detail.

### 3.1. Semantic-aware Depth Estimation

To synthesize the 3D Ken Burns effect, our method first estimates the depth of the input image. While recent advanced methods for monocular depth estimation have shown good performance on public benchmarks, we observed that their predictions are at times not suitable to produce high-quality view synthesis results. In particular, there are at least three major issues when applying existing depth estimation methods to generate the 3D Ken Burns effect:

1. Geometric distortions. While state-of-the-art depth estimation methods can generate reasonable depth orderings, they often have difficulty in capturing geometric relations such as planarity. Geometric distortions, such as bending planes, thus often appear in the synthesis results (Figure 2, top row).

2. Semantic distortions. Existing depth estimation methods predict the depth maps without explicitly taking the semantics of objects into account. Therefore, in many cases the depth values are assigned inconsistently inside regions of the same object, resulting in unnatural synthesis results such as objects sticking to the ground plane or different parts of an object being torn apart (Figure 2, bottom row).

3. Inaccurate depth boundaries. Current state-of-the-art methods for single-image depth estimation process the input image at a low resolution and utilize bilinear interpolation to obtain the full-resolution depth estimate. They are thus unable to accurately capture depth boundaries, resulting in artifacts in the novel view renderings (Figure 5).

In this paper, we design a semantic-aware depth estimation dedicated to addressing these issues. To do so, we separate the depth estimation into three steps. First, estimating coarse depth using a low-resolution image while relying on semantic information extracted using VGG-19 [Simonyan and Zisserman, 2014] to facilitate generalizability. Second, adjusting the depth map according to the instance-level segmentation of Mask R-CNN [He et al., 2017] to ensure consistent depth values inside salient objects. Third, refining the depth boundaries guided by the input image while upsampling the low-resolution depth estimate. Our depth estimation pipeline is illustrated in Figure 3 and we subsequently elaborate each step.

#### 3.1.1. Depth Estimation

Following existing work on monocular depth estimation, we leverage a neural network to predict a coarse depth map. To facilitate a semantic-aware depth prediction, we further provide semantic guidance by augmenting the input of our network with the feature maps extracted from the pool_4 layer of VGG-19 [Simonyan and Zisserman, 2014]. We found that granting explicit access to this semantic information encourages the network to better capture the geometry of large scene structures, thus addressing the concern of geometric distortions. Different from existing work, we do not resize the input image to a fixed resolution when providing it to the network and instead resize it such that its largest dimension is 512 pixels while preserving its aspect ratio.
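The aspect-preserving resizing rule described above can be sketched as follows (a minimal sketch; the function name and the rounding behavior are our assumptions):

```python
def coarse_input_size(width, height, largest=512):
    """Aspect-preserving resize rule: scale the image so that its largest
    dimension is `largest` pixels (512 for the coarse depth network)."""
    scale = largest / max(width, height)
    return round(width * scale), round(height * scale)
```

For example, a 1024x768 input is fed to the network at 512x384 rather than being squashed to a fixed square resolution.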

Architecture. We employ a GridNet [Fourure et al., 2017] architecture with the modifications proposed by [Niklaus and Liu, 2018] to prevent checkerboard artifacts [Odena et al., 2016]. We incorporate this grid architecture with a configuration of six rows and four columns, where the first two columns perform downsampling and the last two columns perform upsampling. This multi-path GridNet architecture allows the network to effectively combine feature representations from multiple scales. We feed the input image into the first row, while inserting the semantic features from VGG-19 into the fourth row of the grid. We explicitly encourage the network to focus more on the semantic features and less on the input image by letting the first three rows of the grid (corresponding to the input image) have a channel size of 32, 48, and 64 respectively while the fourth through sixth row (corresponding to the semantic features) have 512 channels each. As such, a majority of the parameters reside in the bottom half of the network, forcing it to heavily make use of semantic features and in turn supporting the generalization capability of our depth estimation network.

Loss Functions. To train our depth estimation network, we adopt the pixel-wise as well as the scale invariant gradient loss proposed by [Ummenhofer et al., 2017] to emphasize depth discontinuities. Specifically, given the ground truth inverse depth $\xi$, we supervise the estimated inverse depth $\hat{\xi}$ using the $\ell_1$-based loss as

 (1) $\mathcal{L}_{\mathrm{ord}} = \sum_{i,j} \left\| \xi(i,j) - \hat{\xi}(i,j) \right\|_1$

Similar to [Ummenhofer et al., 2017], we encourage more pronounced depth discontinuities and stimulate smoothness in homogeneous regions by incorporating a scale invariant gradient loss as

 (2) $\mathcal{L}_{\mathrm{grad}} = \sum_{h \in \{1, 2, 4, 8, 16\}} \sum_{i,j} \left\| g_h[\xi](i,j) - g_h[\hat{\xi}](i,j) \right\|_2$

where the discrete scale invariant gradient $g_h$ is defined as

 (3) $g_h[f](i,j) = \left( \frac{f(i+h,j) - f(i,j)}{|f(i+h,j)| + |f(i,j)|}, \frac{f(i,j+h) - f(i,j)}{|f(i,j+h)| + |f(i,j)|} \right)^{\top}$

We emphasize the scale invariant gradient loss when training our depth estimation network and combine the two losses as

 (4) $\mathcal{L}_{\mathrm{depth}} = \lambda \cdot \mathcal{L}_{\mathrm{ord}} + \mathcal{L}_{\mathrm{grad}}$

where the weight $\lambda < 1$ shifts the emphasis towards the scale invariant gradient term. As such, we encourage accurate depth boundaries which are important when synthesizing the 3D Ken Burns effect.
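A NumPy sketch of these losses follows. The gradient spacings and the weight `lam` are our assumptions (the spacing set follows Ummenhofer et al.), and the `eps` guard against division by zero is our addition:

```python
import numpy as np

def scale_invariant_gradient(f, h):
    """Discrete scale invariant gradient g_h of Eq. (3), evaluated on the
    region where all samples exist; eps guards divisions by zero (our addition)."""
    eps = 1e-8
    gx = (f[h:, :-h] - f[:-h, :-h]) / (np.abs(f[h:, :-h]) + np.abs(f[:-h, :-h]) + eps)
    gy = (f[:-h, h:] - f[:-h, :-h]) / (np.abs(f[:-h, h:]) + np.abs(f[:-h, :-h]) + eps)
    return np.stack([gx, gy])  # shape (2, H-h, W-h): per-pixel gradient vector

def depth_loss(xi, xi_hat, spacings=(1, 2, 4, 8, 16), lam=1e-4):
    """Combined depth supervision: lam * L_ord + L_grad.
    The spacing set and the weight lam are assumptions, not values from the paper."""
    l_ord = np.abs(xi - xi_hat).sum()  # pixel-wise l1 loss on inverse depth
    l_grad = 0.0
    for h in spacings:
        diff = scale_invariant_gradient(xi, h) - scale_invariant_gradient(xi_hat, h)
        l_grad += np.sqrt((diff ** 2).sum(axis=0)).sum()  # per-pixel 2-norm, summed
    return lam * l_ord + l_grad
```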

Training. We train our depth estimation network using Adam [Kingma and Ba, 2014]. We incorporate 13017 samples from the raw dataset of NYU v2 [Silberman et al., 2012] together with 8685 samples from MegaDepth [Li and Snavely, 2018]. Since these datasets are subject to noise and inaccurate depth at object boundaries, we also leverage our own dataset which is described in Section 3.4. Our dataset consists of realistic renderings which provide high-quality depth maps with clear discontinuities at object boundaries.

#### 3.1.2. Depth Adjustment

We have found that our depth prediction network, augmented with semantic features and trained using our high-quality dataset, significantly improves the scene geometry represented by the estimated depth. However, semantic distortions have not been entirely resolved. It is extremely challenging to obtain accurate object-level depth predictions as the neural network not only needs to reason about the boundary of each object but also needs to determine the geometric relationship between different parts of an object. One approach to address this problem is to either provide semantic labels as input to the depth estimation network, or to train the depth estimation network in a multi-task setting to jointly predict segmentation masks [Eigen and Fergus, 2015; Liu et al., 2010; Mousavian et al., 2016; Nekrasov et al., 2018] which would encourage the network to reason about object boundaries.

In contrast, we borrow a technique frequently employed by artists when creating the 3D Ken Burns effect manually: Identify the object segments and approximate each object with a frontal plane positioned upright on the ground plane. We mimic this practice and utilize instance-level segmentation masks from Mask R-CNN [He et al., 2017] for this purpose. Specifically, we select the masks of semantically important objects such as humans, cars, and animals and adjust the estimated depth values by assigning the smallest depth value from the bottom of the salient object to the entire mask. We note that this approximation is not physically correct. However, it is effective in producing perceptually plausible results for a majority of content as demonstrated by many artist-created results.
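This adjustment can be sketched as follows (the function name and our reading of "smallest depth value from the bottom of the salient object" are assumptions):

```python
import numpy as np

def adjust_object_depth(depth, mask):
    """Mimic the artist heuristic: flatten a salient object onto a frontal,
    upright plane by assigning the smallest depth value along the bottom of
    its segmentation mask to the entire mask."""
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return depth
    bottom = rows[-1]                          # lowest image row the object touches
    plane = depth[bottom][mask[bottom]].min()  # closest depth at the object's base
    out = depth.copy()
    out[mask] = plane
    return out
```

Applying this per instance mask from Mask R-CNN yields consistent depth values inside each salient object.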

#### 3.1.3. Depth Refinement

So far, our depth estimation network is designed to reduce geometric distortions, with the depth adjustment addressing semantic distortions. However, the resulting depth estimate is of low resolution and may be erroneous at boundary regions. One possible solution to this problem is to apply joint bilateral filtering to upsample the depth map. However, this does not work well in our case. As also observed in previous work [Li et al., 2016], we found that the texture of the guiding image tends to be transferred to the upsampled depth. In this work, we thus instead employ a neural network that, guided by the high-resolution input image, learns to upsample the depth map while correcting erroneous estimates at object boundaries. During inference, this model predicts the refined depth map at a higher, aspect-dependent resolution. The upscaling factor can further be increased by modifying the neural network accordingly.

Architecture. We insert the input image into a U-Net with three downsampling blocks which use strided convolutions and three corresponding upsampling blocks which use convolutions and bilinear upsampling. We insert the estimated depth at the bottom of the U-Net, allowing the network to learn how to downsample the input image in order to guide the depth during upsampling.

Loss Functions. Like with our depth estimation network, we encourage accurate predictions at object boundaries and employ the same loss when training our refinement network.

Training. We train our depth refinement network using Adam [Kingma and Ba, 2014]. Since accurate ground truth depth boundaries are crucial for training this network, we only use our computer-generated dataset which is described in Section 3.4. Specifically, we downsample and distort the ground truth depth to simulate the coarse predicted depth map and use it, together with the high-resolution image, as inputs to the depth refinement network.

#### 3.1.4. Summary

Our depth estimation pipeline is designed to address each of the identified issues that are important when using depth estimation methods to create the 3D Ken Burns effect: geometric distortions, semantic distortions, and inaccurate depth boundaries. Please see Figure 4 which demonstrates the contribution of each step in our pipeline to the final depth estimate.

### 3.2. Context-aware Inpainting for View Synthesis

To synthesize the 3D Ken Burns effect from the estimated depth, our method first maps the input image to points in a point cloud. Each frame of the resulting video can then be synthesized by rendering the point cloud from the corresponding camera position along a pre-determined camera path. The point cloud, however, is only a partial view of the world geometry as seen from the input image. Therefore, the resulting novel view renderings are incomplete with holes caused by disocclusion. One possible solution is to utilize off-the-shelf image inpainting methods to fill in the missing areas in each synthesized video frame. This approach, however, fails to satisfy the following requirements:

1. Geometrically consistent inpainting. Due to the nature of disocclusion, the filled-in area should resemble the background with a clear separation of the foreground object. Existing off-the-shelf inpainting methods, however, do not explicitly reason about the geometry of the inpainted result and are thus unable to satisfy this requirement (Figure 6).

2. Temporal consistency. When rendering multiple novel views to generate a moving-camera effect, the result needs to be temporally consistent. The traditional inpainting formulation does not consider our given scenario, which is why independently applying an existing off-the-shelf inpainting method is subject to temporal inconsistencies (Figure 6).

3. Real-time synthesis. When manually specifying the camera path for the 3D Ken Burns effect, we found that the best user experience is achieved when users can immediately perceive the result and make adjustments accordingly. Applying off-the-shelf inpainting methods in a frame-by-frame manner would be too computationally expensive to adequately support this use case scenario (Section 3.3).

In this paper, we design a dedicated view synthesis pipeline to address these requirements as illustrated in Figure 7. Given the point cloud obtained from the input image and its depth estimate, we perform joint color- and depth-inpainting to fill in missing areas in incomplete novel view renderings. Having the inpainting method also incorporate depth enables geometrically consistent inpainting. The inpainted depth can then be used to map the inpainted color to new points in the existing point cloud, addressing the problem of disocclusion. To synthesize the 3D Ken Burns effect along a pre-determined camera path, it is in this regard sufficient to perform the color- and depth-inpainting only at extreme views like at the beginning and at the end. Rendering this extended point cloud preserves temporal consistency and can be done in real-time. To enable real-time synthesis when having an artist specify an arbitrary camera path, we repeat this procedure at extreme views to the left, right, top, and bottom. Our synthesis approach is illustrated in Figure 7 and we subsequently elaborate the involved steps.
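The point-cloud extension loop just described can be sketched at a high level as follows; `render`, `inpaint`, and `unproject` stand in for the actual pipeline components, and their signatures are illustrative assumptions:

```python
def extend_point_cloud(points, colors, extreme_views, render, inpaint, unproject):
    """At each extreme view of the camera path, render the (incomplete) novel
    view, jointly inpaint color and depth in the disoccluded holes, and lift
    the filled-in pixels back into the point cloud."""
    for view in extreme_views:
        color, depth, holes = render(points, colors, view)  # holes: disoccluded mask
        color, depth = inpaint(color, depth, holes)
        new_points, new_colors = unproject(color, depth, holes, view)
        points, colors = points + new_points, colors + new_colors
    return points, colors
```

Once extended, the point cloud is rendered as-is for every intermediate view, which is what makes the synthesis temporally consistent and real-time.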

#### 3.2.1. Point Cloud Rendering

We obtain novel view renderings by projecting the point cloud to an image plane subject to the pinhole camera model. In doing so, we utilize a z-buffer to correctly address occlusion. When moving the virtual camera forward, the point cloud rendering may, however, suffer from shine-through artifacts in which occluded background points become visible in foreground regions. [Tulsiani et al., 2018] address these artifacts by rendering the point cloud at half the input resolution. In order to preserve the image resolution, we instead filter the z-buffer before projecting the points to the image plane. Specifically, we identify shine-through regions as pixels for which two adjacently opposing neighbors are significantly closer to the virtual camera. We then fill these cracks in the z-buffer with the average depth of the neighboring foreground pixels.
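The crack-filling step can be sketched as follows (the depth margin `threshold` is an assumption; the paper does not state a value):

```python
import numpy as np

def fill_zbuffer_cracks(z, threshold=0.5):
    """Fill one-pixel shine-through cracks in a z-buffer: a pixel whose two
    opposing neighbors (top/bottom or left/right) are both significantly
    closer to the camera is replaced by their average depth."""
    out = z.copy()
    h, w = z.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            for n1, n2 in ((z[i - 1, j], z[i + 1, j]), (z[i, j - 1], z[i, j + 1])):
                if n1 < z[i, j] - threshold and n2 < z[i, j] - threshold:
                    out[i, j] = 0.5 * (n1 + n2)  # average of foreground neighbors
                    break
    return out
```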

#### 3.2.2. Context Extraction

[Niklaus and Liu, 2018] observed that incorporating contextual information is beneficial for generating high-quality novel view synthesis results. Specifically, each point in the point cloud can be extended with contextual information that describes the neighborhood of where the corresponding pixel used to be in the input image. This augments the point cloud with rich information that can, for example, be leveraged for computer graphics in the form of neural rendering [Aliev et al., 2019; Bui et al., 2018; Meshry et al., 2019]. To make use of this technique, we leverage a neural network with two convolutional layers to extract 64 channels of context information from the input image. We train this context extractor jointly with the subsequent inpainting network, which allows the extractor to learn how to gather information that is useful when inpainting incomplete novel view renderings.

#### 3.2.3. Color- and Depth-inpainting

Different from existing image inpainting methods, our inpainting network accepts color-, depth-, and context-information as input and performs joint color- and depth-inpainting. The additional context provides rich information that is beneficial for high-quality image synthesis while the depth enables geometrically consistent inpainting results with foreground objects clearly being separated from the background. Specifically, we render the color-, depth-, and context-information of the input image to a novel view that is incomplete due to disocclusion. We then use our color- and depth-inpainting network to fill in missing areas. The inpainted depth allows us to map the inpainted color to new points in the existing point cloud, effectively extending the world geometry that the point cloud represents.
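Mapping inpainted pixels (like the original input pixels) to 3D points follows the pinhole model; a minimal sketch, where the focal length and a centered principal point are assumptions:

```python
import numpy as np

def pixels_to_points(color, depth, focal):
    """Lift pixels to 3D camera-space points under the pinhole model.
    In practice the intrinsics come from the virtual camera used for rendering."""
    h, w = depth.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = (u - cx) * depth / focal
    y = (v - cy) * depth / focal
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points, color.reshape(-1, color.shape[-1])
```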

Architecture. Similarly to our depth estimation network, we employ a GridNet [Fourure et al., 2017] architecture for our inpainting network due to its ability to learn how to combine representations at multiple scales. Specifically, we utilize a grid with four rows and four columns with a per-row channel size of 32, 64, 128, and 256 respectively. It accepts the color, depth, and context of the incomplete novel view rendering and returns the inpainted color and depth.

Loss Functions. We adopt a pixel-wise loss as well as a perceptual loss based on deep image features to supervise the color inpainting. Specifically, given a ground truth novel view $I_{gt}$, we supervise the inpainted color $I$ using the $\ell_1$-based loss as

 (5) $\mathcal{L}_{\mathrm{color}} = \left\| I - I_{gt} \right\|_1$

For the perceptual loss, we employ a content loss based on the difference between deep image features as

 (6) $\mathcal{L}_{\mathrm{percep}} = \left\| \phi(I) - \phi(I_{gt}) \right\|_2^2$

where $\phi$ represents feature activations from a generic image classification network. Specifically, we use the activations of the relu4_4 layer from VGG-19 [Simonyan and Zisserman, 2014]. To supervise the depth-inpainting, we use the $\ell_1$-based loss $\mathcal{L}_{\mathrm{ord}}$ as well as the scale invariant gradient loss $\mathcal{L}_{\mathrm{grad}}$, thus yielding

 (7) $\mathcal{L}_{\mathrm{inpaint}} = \mathcal{L}_{\mathrm{color}} + \mathcal{L}_{\mathrm{percep}} + \mathcal{L}_{\mathrm{ord}} + \mathcal{L}_{\mathrm{grad}}$

as the combination of loss functions that we use to supervise the training of our color- and depth-inpainting network.

Training. We train our inpainting network using Adam [Kingma and Ba, 2014]. Given an input image, we require ground truth novel views to supervise the training of the inpainting network. To this end, we extended our synthetic dataset and collected multiple views as described in Section 3.4 and shown in Figure 9.

#### 3.2.4. Summary

Our novel view synthesis approach is designed to address each of the identified requirements that are important when synthesizing the 3D Ken Burns effect: geometrically consistent inpainting, temporal consistency, and real-time synthesis. Please consider our supplementary video demo to further examine our synthesis results. This video demo also contains an example interaction with our user interface which exemplifies why real-time synthesis is a key feature when manually specifying the camera path.

### 3.3. User Interface

Given an input image, our system synthesizes the 3D Ken Burns effect from a virtual camera path parameterized by a start- and end-position. We obtain a sequence of frames by uniformly sampling novel view renderings across the linear path between the two positions. Here we describe how to derive camera positions from cropping windows placed on the input image, how to automatically select suitable cropping windows, and how to support the artist in using our system interactively.
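The uniform sampling of novel views along the linear path between the two camera positions can be sketched as follows; the 3-vector camera positions are illustrative inputs.

```python
import numpy as np

def sample_camera_path(start, end, num_frames):
    """Uniformly sample camera positions on the linear path from start to end."""
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    # Include both endpoints so the first and last frame match the two views.
    ts = np.linspace(0.0, 1.0, num_frames)
    return [(1.0 - t) * start + t * end for t in ts]
```

Each sampled position is then rendered from the point cloud to produce one frame of the output video.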

#### 3.3.1. Camera Parametrization

When synthesizing the 2D Ken Burns effect, it is common practice to specify a source- and a target-crop within the input image. This approach provides an intuitive way to manually define the 2D scan and zoom. We adopt this paradigm of parameterizing the start- and end-view for our 3D Ken Burns effect. However, it is not trivial to relate a cropping window in the 2D image space to a virtual camera position in 3D space. In our method, we choose the XY-coordinate of the two virtual cameras such that the foreground object within the scene moves in accordance with the cropping windows. That is, if the source- and target-crop are 100 pixels apart, then the foreground object should move by 100 pixels in the synthesized 3D Ken Burns result. Lastly, we use the size of the cropping windows in relation to the input image to determine the Z-coordinate of the corresponding virtual cameras.
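Under a simple pinhole model, this crop-to-camera mapping could be sketched as below. This is an illustrative sketch, not the paper's exact formulation: the focal length `focal_px` and the foreground depth `fg_depth` are hypothetical inputs, and a square crop and image are assumed for the zoom factor.

```python
def crop_to_camera(crop_center_px, crop_size, image_size, focal_px, fg_depth):
    """Map a 2D cropping window to a virtual camera position (sketch).

    XY: translate the camera so a foreground point at depth fg_depth shifts
    by the same number of pixels as the crop center (parallax = f * t / z).
    Z: move the camera forward according to how much the crop zooms in.
    """
    cx, cy = crop_center_px
    # Pixel offset of the crop center relative to the image center.
    dx_px = cx - image_size[0] / 2.0
    dy_px = cy - image_size[1] / 2.0
    # Camera translation that moves the foreground by that many pixels.
    tx = dx_px * fg_depth / focal_px
    ty = dy_px * fg_depth / focal_px
    # The relative crop size determines the forward (Z) translation.
    zoom = crop_size / float(image_size[0])
    tz = fg_depth * (1.0 - zoom)
    return (tx, ty, tz)
```

For example, a crop that covers the full image and is centered maps to the identity camera, while a smaller, shifted crop yields a forward and sideways translation.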

#### 3.3.2. Automatic Mode

In the fully automatic mode, we let the algorithm automatically determine the start- and end-view such that the amount of disocclusion is minimized. Specifically, we treat the entire input image as the start-view and employ a uniform sampling grid to find the cropping window corresponding to the end-view that results in the minimum amount of disocclusion. In the resulting 3D Ken Burns effect, the virtual camera naturally approaches the dominant salient foreground object and emphasizes it through motion parallax. An example result that we obtained using the automatic mode can be found at the top of Figure 1.

#### 3.3.3. Interactive Mode

Some users may desire a more fine-grained control over the synthesized 3D Ken Burns effect. To support this use case, we provide an interactive mode in which users determine the two cropping windows which represent the start- and end-view. Thanks to our efficient novel view rendering pipeline, our system can provide real-time feedback when manipulating the start- and end-view windows, which allows users to immediately perceive the effect of their actions. Please refer to our supplementary video demo for an example of our system in action.

### 3.4. Training Data

We evaluated several datasets that provide ground truth depth information to supervise the training of our depth estimation pipeline, including the MegaDepth [Li and Snavely, 2018] and NYU v2 [Silberman et al., 2012] datasets. However, as shown in Figure 8, these datasets only provide sparse annotations that are subject to inaccurate depth boundaries. We also examined the KITTI dataset [Geiger et al., 2013], which additionally provides multi-view data and thus would be useful to supervise the training of our color- and depth-inpainting network. However, it is sparse and subject to inaccuracies as well, and particularly limited in terms of scene types and content. As previously shown in Figure 5, accurate depth boundaries are crucial for novel view synthesis.

We thus created our own computer-generated dataset from 32 virtual environments, which enables us to extract accurate ground truth depth information. These virtual environments were collected from the UE4 Marketplace. We intentionally collected highly realistic environments covering a wide range of scene types such as indoor, urban, rural, and nature scenes. More specifically, we use the Unreal Engine to create a virtual camera rig and capture 134041 scenes from the 32 environments, where each scene consists of 4 views. Each view contains color-, depth-, and normal-maps. Please see Figure 9 for an example from our dataset. While we did not use any normal-maps, we collected them regardless so that other researchers can make better use of our dataset in the future. Note that, while training our depth estimation network, we randomly crop either the top and bottom or the left and right of each sample in order to facilitate invariance to the aspect ratio of the input image.
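The aspect-ratio augmentation described above (randomly cropping either the top and bottom or the left and right of a sample) can be sketched as follows; the maximum crop fraction is a hypothetical choice, not a value from the paper.

```python
import random

def aspect_ratio_crop(image, max_frac=0.25, rng=random):
    """Randomly crop top+bottom or left+right to vary the aspect ratio.

    `image` is an H x W x C NumPy array; max_frac bounds the fraction
    removed from each side (0.25 is an illustrative default).
    """
    h, w = image.shape[:2]
    if rng.random() < 0.5:
        # Crop the top and bottom by the same random margin.
        m = int(h * rng.uniform(0.0, max_frac))
        return image[m:h - m, :]
    else:
        # Crop the left and right by the same random margin.
        m = int(w * rng.uniform(0.0, max_frac))
        return image[:, m:w - m]
```

Because the crop only ever shortens one axis, the network sees the same scene content at varying aspect ratios during training.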

## 4. Experiments

### 4.1. Usability Study

We conduct an informal user study to evaluate the usability of our system in supporting the creation of the 3D Ken Burns effect. In particular, we are interested in investigating how easy it is for non-expert users to achieve desirable results for images with different content. To simulate a plausible scenario, we collected 3D Ken Burns videos created by artists. Specifically, we searched for phrases like “3D Ken Burns effect” or “Parallax Effect” on YouTube and selected 30 representative results from tutorial videos. We then retained only those results that do not contain additional artistic effects such as compositing, artificial lighting, and particle effects. We categorize the remaining videos into four groups according to the scene type of the input image, namely “landscape”, “portrait”, “indoor”, and “man-made outdoor environment”, and randomly select three videos in each category. We thus conduct our informal user study on these 12 examples, for which we have the input image as well as a reference 3D Ken Burns effect result.

We recruit 8 participants for our study. In each session, the participant is assigned one image along with the reference result created by an artist. The participant is asked to use our system as well as two other systems to create a similar effect from the provided image. The order in which the systems are used is randomly selected for each participant. The usability and quality of each tool is subjectively rated by the participant at the end of the session.

We compare our framework with existing solutions for creating the 3D Ken Burns effect. We consider two commercial systems. The first is the Photo Motion software package which is implemented as a template for Adobe After Effects. This package provides a commercial implementation for the framework introduced by [Horry et al., 1997] which is one of the most well-known frameworks for interactive camera fly-through synthesis. The second baseline system we consider is the mobile app Viewmee that has been developed to allow non-expert users to easily create the 3D Ken Burns effect. This is one of very few systems that support simple interactions targeting casual users with limited image- or video-editing experience.

At the end of each session, the participant is asked to rate the three systems in terms of two criteria: system usability and result quality. For system usability, the participant rates each system with a score from one to five, with one indicating the lowest usability (i.e. the tool is too difficult to use to obtain acceptable results within the allocated 30 minutes) and five indicating the best usability (i.e. the tool is easy to use to create good results). For the result quality, the participant is shown the three results that he or she created and asked to score each result from one to five, with one indicating the lowest quality and five indicating the highest quality.

We compare the user-provided usability scores as well as the per-system time for each of the 8 participants in Figure 10. The results show that using our system, the participants can obtain better results with much less effort compared to the other systems. Viewmee only seems to work for cases with a distinct foreground object in front of a distant background. Photo Motion can model the scene depth for scenes with clear perspective but requires a lot of effort for manual segmentation and scene arrangement. It is also extremely difficult to use in scenes with many different depth layers. Please refer to our supplementary materials for more visual examples shown in video form.

### 4.2. Automatic Mode Evaluation

As discussed in Section 3.3.2, our system provides an automatic mode that requires no user interaction. We investigate the effectiveness of our method in generating 3D Ken Burns effects from the input images automatically. In this experiment, we collect images from Flickr using different keywords, including “indoor”, “landscape”, “outdoor”, and “portrait” to cover images of different scene types. We collect 12 images in total, with three images of different levels of scene complexity in each category. We then use our automatic mode to generate one result for each image. For comparison, for each of our 3D Ken Burns effect results, we also generate a 2D Ken Burns effect result corresponding to the same camera path (i.e. the same start- and end-view cropping windows).

We evaluate the quality of our results with a subjective human evaluation procedure. We recruit 21 participants to subjectively compare the quality of our 3D Ken Burns synthesis results and the 2D counterparts. Each participant performs 12 comparison sessions corresponding to our 12 test images. Each session consists of a pair-wise comparison test presenting both the 3D and 2D Ken Burns synthesis results from an image in our test set. The participant is then asked to determine the result with better quality in terms of both 3D perception and overall visual quality.

Figure 12 shows the average user preference percentages for our 3D Ken Burns effect results and those from the baseline 2D version for images in each category. The results indicate that our 3D Ken Burns synthesis results are preferred by the users in the majority of cases, which demonstrates the usefulness and effectiveness of our system. Please refer to our supplementary video for more visual examples of the comparison. Figure 11 shows two examples comparing our generated 3D Ken Burns effect with the 2D version resulting from the same start- and end-view cropping windows. The 2D results show a typical zooming effect with no parallax. Our results, on the other hand, contain realistic motion parallax with strong depth perception, leading to a much more desirable effect.

### 4.3. Depth Prediction Quality

We now evaluate the effectiveness of our depth prediction module. We compare our depth prediction results with those from three state-of-the-art monocular depth prediction methods, including MegaDepth [Li and Snavely, 2018], DeepLens [Wang et al., 2018], and DIW [Chen et al., 2016]. For each method, we use the publicly available implementations provided by the authors. We evaluate the depth prediction quality using two public benchmarks on single-image depth estimation. We report the performance of MegaDepth, DeepLens, and DIW with their models trained on their proposed datasets. To address the scale-ambiguity of depth estimation, we scale and shift each depth prediction to minimize the absolute error between it and the ground truth.
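The scale-and-shift alignment used to handle the scale-ambiguity can be sketched with a closed-form fit. Note this is an illustrative sketch: the paper minimizes the absolute error, whereas the least-squares fit below minimizes the squared error, which is simpler to write in closed form.

```python
import numpy as np

def align_depth(pred, gt):
    """Fit scale s and shift t minimizing ||s * pred + t - gt||_2, then apply."""
    p = pred.ravel()
    g = gt.ravel()
    # Solve the 2-parameter linear least-squares problem [p, 1] @ [s, t] = g.
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t
```

An exact L1 fit would instead require solving a small least-absolute-deviations problem, but the structure of the alignment is the same.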

NYU v2. [Silberman et al., 2012] created one of the most well-known benchmarks and datasets for single-image depth estimation, consisting of 464 indoor scenes. Each scene contains aligned RGB and depth images, acquired with a Microsoft Kinect sensor. Following previous work on single-image depth estimation [Chen et al., 2016; Qi et al., 2018; Zoran et al., 2015], we use the standard training-testing split and evaluate our method on the 654 image-depth pairs from the testing set.

iBims-1. Recently [Koch et al., 2018] introduced a new benchmark aiming for a more holistic evaluation of the depth prediction quality. This benchmark consists of 100 images with high-quality ground-truth depth maps. These images cover a wide variety of indoor scenes and the benchmark provides a comprehensive set of quality metrics to quantify different desired properties of a well-predicted depth map such as depth boundary quality, planarity, depth consistency, and absolute distance accuracy.

Table 4.1 (top) compares the depth prediction quality of different methods according to various quantitative metrics defined by each benchmark. Our method compares favorably to state-of-the-art depth prediction methods in all depth quality metrics. In addition, the result demonstrates that our depth prediction pipeline improves significantly over off-the-shelf methods in terms of the Planarity Error (PE) and Depth Boundary Error (DBE) metrics on the iBims-1 benchmark. Those metrics are particularly designed to assess the quality in planarity and depth boundary preservation, respectively, which are particularly important for our synthesis task.

Table 4.1 (bottom) lists two additional variations of our approach to better analyze the effect of our depth estimation network as well as our training dataset. Specifically, we supervised the network architecture from DIW [Chen et al., 2016] with all available training data to compare this architecture to ours. Furthermore, we supervised our depth estimation network only on the training data from MegaDepth and NYU v2 without incorporating our computer-generated dataset. Both variants lead to significantly worse depth quality metrics in the benchmark, which exemplifies the importance of all individual components of our proposed approach. Interestingly, both variants compare favorably to state-of-the-art depth prediction models.

Figure 13 compares the three-dimensional renderings with respect to different depth prediction results. We can observe better preservation of the scene structure such as the planarity in our result compared to off-the-shelf depth prediction methods.

### 4.4. Discussion

Our previous experiment in Section 4.2 shows that users prefer our 3D Ken Burns effects over the traditional 2D Ken Burns technique. It is also interesting to investigate how the effects created by our method compare to ones made by skilled professional artists through laborious manual processing.

We conduct an additional subjective evaluation test. For each of the 12 artist-generated 3D Ken Burns results that we collected in Section 4.1, we use our system to create similar 3D Ken Burns effects using the corresponding input image. For each of the 12 test examples, we thus have a reference result generated by an artist and our result created by our proposed system. Please see Figure 14 for an example. We follow the same procedure as in Section 4.2. We ask the same set of 21 participants to perform 12 additional pair-wise comparison tests, comparing the results created by our system with the original artist-generated ones.

Figure 15 shows the user preference percentages averaged over the test cases in each category. Interestingly, our results are rated on-par with the ones from professional artists. Looking closely into each individual category, we observe that our results are slightly preferred compared to the artist’s results in the indoor category. These scenes typically have a complicated depth distribution with many objects, which makes it extremely tedious to manually achieve the 3D Ken Burns effect. Our method can rely on a good depth prediction to handle those complicated scenes. The artist-created results, however, are preferred in the portrait category. Looking into the results, we observe that portrait images often have simpler scene layouts, which makes it easier to manually achieve good results. More importantly, we found that artists often intentionally exaggerate the parallax effect in portrait photos to make the effect much more dramatic, to an extent that is not possible with physically-correct depth. This artistic emphasis is often preferred by viewers. Our method is limited by the parallax enabled by our depth prediction, which is trained to match physically-correct depth and thus is not able to generate such dramatic effects.

We hope that our geometric- and semantic-aware depth prediction framework provides useful insights for future research in developing a more effective depth prediction tailored to view synthesis tasks. We would in this regard like to emphasize that the 3D Ken Burns effect is an artistic effect. In certain scenarios, view synthesis results generated from a physically correct scene prediction may not be optimal in delivering the desired artistic impression. Allowing such artistic manipulation in the 3D Ken Burns effect synthesis is an interesting direction to extend our work in the future.

### 4.5. Limitations

While our method can generate a plausible 3D Ken Burns effect for images of different scene types, the results are not always perfect as shown in Figure 16. Single image depth estimation is highly challenging and our semantic-aware depth estimation network is not infallible. While our method can produce depth estimates subject to little or no distortion, we found that our results may still fail to predict accurate depth maps for challenging cases such as reflective surfaces (the reflection on the glossy poster in Fig. 16 (a)) or thin structures (the flagpole in Fig. 16 (b)). Object segmentation is challenging as well and the salient depth adjustment may fail due to erroneous masks. While our depth upsampling module can perform boundary-aware refinement to account for some mask inaccuracies, our result is affected when the error in the segmentation mask is significantly large. In Fig. 16 (c), the nose of the deer is cut off due to Mask R-CNN providing an inaccurate segmentation. Finally, we note that while our joint color- and depth-inpainting is an intuitive approach to extend the estimated scene geometry, it has only been supervised on our synthetic data and thus may sometimes generate artifacts when the input differs too much from the training data. In Fig. 16 (d), the inpainting result lacks texture and is darker than expected. Training the color- and depth-inpainting model with real images and leveraging an adversarial supervision regime and a more sophisticated architecture, like one that uses partial convolutions, is an interesting direction to explore in future work.

## 5. Conclusion

In this paper, we developed a complete framework to produce the 3D Ken Burns effect from a single input image. Our method consists of a depth prediction model which predicts scene depth from the input image and a context-aware depth-based view synthesis model to generate the video results. To this end, we presented a semantically-guided training strategy along with high-quality synthetic data to train our depth prediction network. We couple its prediction with a semantics-based depth adjustment and a boundary-focused depth refinement process to enable an effective depth prediction for view synthesis. We subsequently proposed a depth-based synthesis model that jointly predicts the image and the depth map at the target view using a context-aware view synthesis framework. Using our synthesis model, the extreme views of the camera path are synthesized from the input image and the predicted depth map, which can be used to efficiently synthesize all intermediate views of the target video, resulting in the final 3D Ken Burns effect. Experiments with a wide variety of image content show that our method enables realistic synthesis results. Our study shows that our system enables users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.

###### Acknowledgements.
This work was done while Simon was interning at Adobe Research. We would like to thank Tobias Koch for his help with the iBims-1 benchmark. We are grateful for being allowed to use footage from Ian D. Keating (Figure 1, top), Kirk Lougheed (Figure 1, bottom), Leif Skandsen (Figure 2, top), Oliver Wang (Figure 2, bottom), Ben Abel (Figure 3, 4, 5, 6, 7), Aurel Manea (Figure 14), Jocelyn Erskine-Kellie (Figure 16, top right), Jaisri Lingappa (Figure 16, bottom left), and Intiaz Rahim (Figure 16, bottom right).

## References

• Abarghouei and Breckon [2018] Amir Atapour Abarghouei and Toby P. Breckon. 2018. Real-Time Monocular Depth Estimation Using Synthetic Data With Domain Adaptation via Image Style Transfer. In IEEE Conference on Computer Vision and Pattern Recognition.
• Aliev et al. [2019] Kara-Ali Aliev, Dmitry Ulyanov, and Victor S. Lempitsky. 2019. Neural Point-Based Graphics. arXiv/1906.08240 (2019).
• Bui et al. [2018] Giang Bui, Truc Le, Brittany Morago, and Ye Duan. 2018. Point-Based Rendering Enhancement via Deep Learning. The Visual Computer 34, 6-8 (2018), 829–841.
• Chaurasia et al. [2013] Gaurav Chaurasia, Sylvain Duchêne, Olga Sorkine-Hornung, and George Drettakis. 2013. Depth Synthesis and Local Warps for Plausible Image-Based Navigation. ACM Transactions on Graphics 32, 3 (2013), 30:1–30:12.
• Chaurasia et al. [2011] Gaurav Chaurasia, Olga Sorkine, and George Drettakis. 2011. Silhouette-Aware Warping for Image-Based Rendering. Computer Graphics Forum 30, 4 (2011), 1223–1232.
• Chen et al. [2016] Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. 2016. Single-Image Depth Perception in the Wild. In Advances in Neural Information Processing Systems.
• Cun et al. [2019] Xiaodong Cun, Feng Xu, Chi-Man Pun, and Hao Gao. 2019. Depth-Assisted Full Resolution Network for Single Image-Based View Synthesis. In IEEE Computer Graphics and Applications.
• Didyk et al. [2013] Piotr Didyk, Pitchaya Sitthi-amorn, William T. Freeman, Frédo Durand, and Wojciech Matusik. 2013. Joint View Expansion and Filtering for Automultiscopic 3D Displays. ACM Transactions on Graphics 32, 6 (2013), 221:1–221:8.
• Eigen and Fergus [2015] David Eigen and Rob Fergus. 2015. Predicting Depth, Surface Normals and Semantic Labels With a Common Multi-Scale Convolutional Architecture. In IEEE International Conference on Computer Vision.
• Flynn et al. [2016] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. 2016. DeepStereo: Learning to Predict New Views From the World's Imagery. In IEEE Conference on Computer Vision and Pattern Recognition.
• Fourure et al. [2017] Damien Fourure, Rémi Emonet, Élisa Fromont, Damien Muselet, Alain Trémeau, and Christian Wolf. 2017. Residual Conv-Deconv Grid Network for Semantic Segmentation. In British Machine Vision Conference.
• Garg et al. [2016] Ravi Garg, B. G. Vijay Kumar, Gustavo Carneiro, and Ian D. Reid. 2016. Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue. In European Conference on Computer Vision.
• Geiger et al. [2013] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. 2013. Vision Meets Robotics: The KITTI Dataset. International Journal of Robotics Research 32, 11 (2013), 1231–1237.
• Godard et al. [2017] Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. 2017. Unsupervised Monocular Depth Estimation With Left-Right Consistency. In IEEE Conference on Computer Vision and Pattern Recognition.
• Gordon et al. [2019] Ariel Gordon, Hanhan Li, Rico Jonschkowski, and Anelia Angelova. 2019. Depth From Videos in the Wild: Unsupervised Monocular Depth Learning From Unknown Cameras. arXiv/1904.04998 (2019).
• Habtegebrial et al. [2018] Tewodros Habtegebrial, Kiran Varanasi, Christian Bailer, and Didier Stricker. 2018. Fast View Synthesis With Deep Stereo Vision. arXiv/1804.09690 (2018).
• He et al. [2017] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. 2017. Mask R-CNN. In IEEE International Conference on Computer Vision.
• Hedman et al. [2017] Peter Hedman, Suhib Alsisan, Richard Szeliski, and Johannes Kopf. 2017. Casual 3D Photography. ACM Transactions on Graphics 36, 6 (2017), 234:1–234:15.
• Hedman and Kopf [2018] Peter Hedman and Johannes Kopf. 2018. Instant 3D Photography. ACM Transactions on Graphics 37, 4 (2018), 101:1–101:12.
• Hedman et al. [2018] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel J. Brostow. 2018. Deep Blending for Free-Viewpoint Image-Based Rendering. ACM Transactions on Graphics 37, 6 (2018), 257:1–257:15.
• Hoiem et al. [2005] Derek Hoiem, Alexei A. Efros, and Martial Hebert. 2005. Automatic Photo Pop-Up. ACM Transactions on Graphics 24, 3 (2005), 577–584.
• Horry et al. [1997] Youichi Horry, Ken-Ichi Anjyo, and Kiyoshi Arai. 1997. Tour Into the Picture: Using a Spidery Mesh Interface to Make Animation From a Single Image. In Conference on Computer Graphics and Interactive Techniques.
• Hsieh et al. [2018] Jun-Ting Hsieh, Bingbin Liu, De-An Huang, Fei-Fei Li, and Juan Carlos Niebles. 2018. Learning to Decompose and Disentangle Representations for Video Prediction. In Advances in Neural Information Processing Systems.
• Huang et al. [2017] Jingwei Huang, Zhili Chen, Duygu Ceylan, and Hailin Jin. 2017. 6-Dof VR Videos With a Single 360-Camera. In IEEE Virtual Reality.
• Ji et al. [2017] Dinghuang Ji, Junghyun Kwon, Max McFarland, and Silvio Savarese. 2017. Deep View Morphing. In IEEE Conference on Computer Vision and Pattern Recognition.
• Kalantari et al. [2016] Nima Khademi Kalantari, Ting-Chun Wang, and Ravi Ramamoorthi. 2016. Learning-Based View Synthesis for Light Field Cameras. ACM Transactions on Graphics 35, 6 (2016), 193:1–193:10.
• Kang et al. [2001] Hyung Woo Kang, Soon Hyung Pyo, Ken-ichi Anjyo, and Sung Yong Shin. 2001. Tour Into the Picture Using a Vanishing Line and Its Extension to Panoramic Images. Computer Graphics Forum 20, 3 (2001), 132–141.
• Kang et al. [2006] Sing Bing Kang, Yin Li, Xin Tong, and Heung-Yeung Shum. 2006. Image-Based Rendering. Foundations and Trends in Computer Graphics and Vision 2, 3 (2006), 173–258.
• Kellnhofer et al. [2017] Petr Kellnhofer, Piotr Didyk, Szu-Po Wang, Pitchaya Sitthi-amorn, William T. Freeman, Frédo Durand, and Wojciech Matusik. 2017. 3DTV at Home: Eulerian-Lagrangian Stereo-To-Multiview Conversion. ACM Transactions on Graphics 36, 4 (2017), 146:1–146:13.
• Kingma and Ba [2014] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv/1412.6980 (2014).
• Klose et al. [2015] Felix Klose, Oliver Wang, Jean Charles Bazin, Marcus A. Magnor, and Alexander Sorkine-Hornung. 2015. Sampling Based Scene-Space Video Processing. ACM Transactions on Graphics 34, 4 (2015), 67:1–67:11.
• Koch et al. [2018] Tobias Koch, Lukas Liebel, Friedrich Fraundorfer, and Marco Körner. 2018. Evaluation of CNN-based Single-Image Depth Estimation Methods. arXiv/1805.01328 (2018).
• Kopf [2016] Johannes Kopf. 2016. 360° Video Stabilization. ACM Transactions on Graphics 35, 6 (2016), 195:1–195:9.
• Lai et al. [2016] Chun-Jui Lai, Ping-Hsuan Han, and Yi-Ping Hung. 2016. View Interpolation for Video See-Through Head-Mounted Display. In Conference on Computer Graphics and Interactive Techniques.
• Laina et al. [2016] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. 2016. Deeper Depth Prediction With Fully Convolutional Residual Networks. In International Conference on 3D Vision.
• Lang et al. [2010] Manuel Lang, Alexander Hornung, Oliver Wang, Steven Poulakos, Aljoscha Smolic, and Markus H. Gross. 2010. Nonlinear Disparity Mapping for Stereoscopic 3D. ACM Transactions on Graphics 29, 4 (2010), 75:1–75:10.
• Lee et al. [2018] Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, and Sergey Levine. 2018. Stochastic Adversarial Video Prediction. arXiv/1804.01523 (2018).
• Li and Huang [2001] Nan Li and Zhiyong Huang. 2001. Tour Into the Picture Revisited. In Conference on Computer Graphics, Visualization and Computer Vision.
• Li et al. [2016] Yijun Li, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. 2016. Deep Joint Image Filtering. In European Conference on Computer Vision.
• Li et al. [2019] Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, and William T. Freeman. 2019. Learning the Depths of Moving People by Watching Frozen People. In IEEE Conference on Computer Vision and Pattern Recognition.
• Li and Snavely [2018] Zhengqi Li and Noah Snavely. 2018. MegaDepth: Learning Single-View Depth Prediction From Internet Photos. In IEEE Conference on Computer Vision and Pattern Recognition.
• Liang et al. [2017] Xiaodan Liang, Lisa Lee, Wei Dai, and Eric P. Xing. 2017. Dual Motion GAN for Future-Flow Embedded Video Prediction. In IEEE International Conference on Computer Vision.
• Liu et al. [2010] Beyang Liu, Stephen Gould, and Daphne Koller. 2010. Single Image Depth Estimation From Predicted Semantic Labels. In IEEE Conference on Computer Vision and Pattern Recognition.
• Liu et al. [2009] Feng Liu, Michael Gleicher, Hailin Jin, and Aseem Agarwala. 2009. Content-Preserving Warps for 3D Video Stabilization. ACM Transactions on Graphics 28, 3 (2009), 44.
• Liu et al. [2018] Miaomiao Liu, Xuming He, and Mathieu Salzmann. 2018. Geometry-Aware Deep Network for Single-Image Novel View Synthesis. In IEEE Conference on Computer Vision and Pattern Recognition.
• Luo et al. [2018] Yue Luo, Jimmy S. J. Ren, Mude Lin, Jiahao Pang, Wenxiu Sun, Hongsheng Li, and Liang Lin. 2018. Single View Stereo Matching. In IEEE Conference on Computer Vision and Pattern Recognition.
• Mathieu et al. [2015] Michaël Mathieu, Camille Couprie, and Yann LeCun. 2015. Deep Multi-Scale Video Prediction Beyond Mean Square Error. arXiv/1511.05440 (2015).
• Meshry et al. [2019] Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, and Ricardo Martin-Brualla. 2019. Neural Rerendering in the Wild. In IEEE Conference on Computer Vision and Pattern Recognition.
• Mildenhall et al. [2019] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. 2019. Local Light Field Fusion: Practical View Synthesis With Prescriptive Sampling Guidelines. ACM Transactions on Graphics 38, 4 (2019), 29:1–29:14.
• Mousavian et al. [2016] Arsalan Mousavian, Hamed Pirsiavash, and Jana Kosecka. 2016. Joint Semantic Segmentation and Depth Estimation With Deep Convolutional Networks. In International Conference on 3D Vision.
• Nazeri et al. [2019] Kamyar Nazeri, Eric Ng, Tony Joseph, Faisal Z. Qureshi, and Mehran Ebrahimi. 2019. EdgeConnect: Generative Image Inpainting With Adversarial Edge Learning. arXiv/1901.00212 (2019).
• Nekrasov et al. [2018] Vladimir Nekrasov, Thanuja Dharmasiri, Andrew Spek, Tom Drummond, Chunhua Shen, and Ian D. Reid. 2018. Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations. arXiv/1809.04766 (2018).
• Nguyen-Phuoc et al. [2019] Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. 2019. HoloGAN: Unsupervised Learning of 3D Representations From Natural Images. arXiv/1904.01326 (2019).
• Niklaus and Liu [2018] Simon Niklaus and Feng Liu. 2018. Context-Aware Synthesis for Video Frame Interpolation. In IEEE Conference on Computer Vision and Pattern Recognition.
• Odena et al. [2016] Augustus Odena, Vincent Dumoulin, and Chris Olah. 2016. Deconvolution and Checkerboard Artifacts. Technical Report.
• Olszewski et al. [2019] Kyle Olszewski, Sergey Tulyakov, Oliver J. Woodford, Hao Li, and Linjie Luo. 2019. Transformable Bottleneck Networks. arXiv/1904.06458 (2019).
• Park et al. [2017] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C. Berg. 2017. Transformation-Grounded Image Generation Network for Novel 3D View Synthesis. In IEEE Conference on Computer Vision and Pattern Recognition.
• Penner and Zhang [2017] Eric Penner and Li Zhang. 2017. Soft 3D Reconstruction for View Synthesis. ACM Transactions on Graphics 36, 6 (2017), 235:1–235:11.
• Qi et al. [2018] Xiaojuan Qi, Renjie Liao, Zhengzhe Liu, Raquel Urtasun, and Jiaya Jia. 2018. GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation. In IEEE Conference on Computer Vision and Pattern Recognition.
• Rahaman and Paul [2018] D. M. Motiur Rahaman and Manoranjan Paul. 2018. Virtual View Synthesis for Free Viewpoint Video and Multiview Video Compression Using Gaussian Mixture Modelling. IEEE Transactions on Image Processing 27, 3 (2018), 1190–1201.
• Ranieri et al. [2012] Nicola Ranieri, Simon Heinzle, Quinn Smithwick, Daniel Reetz, Lanny S. Smoot, Wojciech Matusik, and Markus H. Gross. 2012. Multi-Layered Automultiscopic Displays. Computer Graphics Forum 31, 7-2 (2012), 2135–2143.
• Reda et al. [2018] Fitsum A. Reda, Guilin Liu, Kevin J. Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, and Bryan Catanzaro. 2018. SDC-Net: Video Prediction Using Spatially-Displaced Convolution. In European Conference on Computer Vision.
• Rematas et al. [2018] Konstantinos Rematas, Ira Kemelmacher-Shlizerman, Brian Curless, and Steve Seitz. 2018. Soccer on Your Tabletop. In IEEE Conference on Computer Vision and Pattern Recognition.
• Rematas et al. [2017] Konstantinos Rematas, Chuong H. Nguyen, Tobias Ritschel, Mario Fritz, and Tinne Tuytelaars. 2017. Novel Views of Objects From a Single Image. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 8 (2017), 1576–1590.
• Saxena et al. [2009] Ashutosh Saxena, Min Sun, and Andrew Y. Ng. 2009. Make3D: Learning 3D Scene Structure From a Single Still Image. IEEE Transactions on Pattern Analysis and Machine Intelligence 31, 5 (2009), 824–840.
• Silberman et al. [2012] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. 2012. Indoor Segmentation and Support Inference From RGBD Images. In European Conference on Computer Vision.
• Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv/1409.1556 (2014).
• Sitzmann et al. [2019] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhöfer. 2019. DeepVoxels: Learning Persistent 3D Feature Embeddings. In IEEE Conference on Computer Vision and Pattern Recognition.
• Srinivasan et al. [2019] Pratul P. Srinivasan, Richard Tucker, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. 2019. Pushing the Boundaries of View Extrapolation With Multiplane Images. In IEEE Conference on Computer Vision and Pattern Recognition.
• Srinivasan et al. [2017] Pratul P. Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, and Ren Ng. 2017. Learning to Synthesize a 4D RGBD Light Field From a Single Image. In IEEE International Conference on Computer Vision.
• Srivastava et al. [2009] Savil Srivastava, Ashutosh Saxena, Christian Theobalt, Sebastian Thrun, and Andrew Y. Ng. 2009. I23 - Rapid Interactive 3D Reconstruction From a Single Image. In Vision, Modeling, and Visualization Workshop.
• Tatarchenko et al. [2015] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. 2015. Single-View to Multi-View: Reconstructing Unseen Views With a Convolutional Network. arXiv/1511.06702 (2015).
• Thies et al. [2019] Justus Thies, Michael Zollhöfer, and Matthias Nießner. 2019. Deferred Neural Rendering: Image Synthesis Using Neural Textures. ACM Transactions on Graphics 38, 4 (2019), 66:1–66:12.
• Thies et al. [2018] Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, and Matthias Nießner. 2018. IGNOR: Image-Guided Neural Object Rendering. arXiv/1811.10720 (2018).
• Tulsiani et al. [2018] Shubham Tulsiani, Richard Tucker, and Noah Snavely. 2018. Layer-Structured 3D Scene Inference via View Synthesis. In European Conference on Computer Vision.
• Ummenhofer et al. [2017] Benjamin Ummenhofer, Huizhong Zhou, Jonas Uhrig, Nikolaus Mayer, Eddy Ilg, Alexey Dosovitskiy, and Thomas Brox. 2017. DeMoN: Depth and Motion Network for Learning Monocular Stereo. In IEEE Conference on Computer Vision and Pattern Recognition.
• Vondrick et al. [2016] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. 2016. Generating Videos With Scene Dynamics. In Advances in Neural Information Processing Systems.
• Wadhwa et al. [2018] Neal Wadhwa, Rahul Garg, David E. Jacobs, Bryan E. Feldman, Nori Kanazawa, Robert Carroll, Yair Movshovitz-Attias, Jonathan T. Barron, Yael Pritch, and Marc Levoy. 2018. Synthetic Depth-Of-Field With a Single-Camera Mobile Phone. ACM Transactions on Graphics 37, 4 (2018), 64:1–64:13.
• Wang et al. [2018] Lijun Wang, Xiaohui Shen, Jianming Zhang, Oliver Wang, Zhe L. Lin, Chih-Yao Hsieh, Sarah Kong, and Huchuan Lu. 2018. DeepLens: Shallow Depth of Field From a Single Image. ACM Transactions on Graphics 37, 6 (2018), 245:1–245:11.
• Xian et al. [2018] Ke Xian, Chunhua Shen, Zhiguo Cao, Hao Lu, Yang Xiao, Ruibo Li, and Zhenbo Luo. 2018. Monocular Relative Depth Perception With Web Stereo Data Supervision. In IEEE Conference on Computer Vision and Pattern Recognition.
• Xie et al. [2016] Junyuan Xie, Ross B. Girshick, and Ali Farhadi. 2016. Deep3D: Fully Automatic 2D-to-3D Video Conversion With Deep Convolutional Neural Networks. In European Conference on Computer Vision.
• Xu et al. [2018] Jingwei Xu, Bingbing Ni, Zefan Li, Shuo Cheng, and Xiaokang Yang. 2018. Structure Preserving Video Prediction. In IEEE Conference on Computer Vision and Pattern Recognition.
• Xu et al. [2019] Zexiang Xu, Sai Bi, Kalyan Sunkavalli, Sunil Hadap, Hao Su, and Ravi Ramamoorthi. 2019. Deep View Synthesis From Sparse Photometric Images. ACM Transactions on Graphics 38, 4 (2019), 76:1–76:13.
• Yan et al. [2016] Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. 2016. Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction Without 3D Supervision. In Advances in Neural Information Processing Systems.
• Yang et al. [2015] Jimei Yang, Scott E. Reed, Ming-Hsuan Yang, and Honglak Lee. 2015. Weakly-Supervised Disentangling With Recurrent Transformations for 3D View Synthesis. In Advances in Neural Information Processing Systems.
• Yu et al. [2018] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. 2018. Generative Image Inpainting With Contextual Attention. In IEEE Conference on Computer Vision and Pattern Recognition.
• Zheng et al. [2018] Kecheng Zheng, Zheng-Jun Zha, Yang Cao, Xuejin Chen, and Feng Wu. 2018. LA-Net: Layout-Aware Dense Network for Monocular Depth Estimation. In ACM Multimedia.
• Zheng et al. [2009] Ke Colin Zheng, Alex Colburn, Aseem Agarwala, Maneesh Agrawala, David Salesin, Brian Curless, and Michael F. Cohen. 2009. Parallax Photography: Creating 3D Cinematic Effects From Stills. In Graphics Interface Conference.
• Zhou et al. [2017] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G. Lowe. 2017. Unsupervised Learning of Depth and Ego-Motion From Video. In IEEE Conference on Computer Vision and Pattern Recognition.
• Zhou et al. [2018] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. 2018. Stereo Magnification: Learning View Synthesis Using Multiplane Images. ACM Transactions on Graphics 37, 4 (2018), 65:1–65:12.
• Zhou et al. [2016] Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A. Efros. 2016. View Synthesis by Appearance Flow. In European Conference on Computer Vision.
• Zitnick et al. [2004] C. Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon A. J. Winder, and Richard Szeliski. 2004. High-Quality Video View Interpolation Using a Layered Representation. ACM Transactions on Graphics 23, 3 (2004), 600–608.
• Zoran et al. [2015] Daniel Zoran, Phillip Isola, Dilip Krishnan, and William T. Freeman. 2015. Learning Ordinal Relationships for Mid-Level Vision. In IEEE International Conference on Computer Vision.