Differentiable Surface Splatting for Point-based Geometry Processing

Wang Yifan, ETH Zurich, Switzerland; Felice Serena, ETH Zurich, Switzerland; Shihao Wu, ETH Zurich, Switzerland; Cengiz Öztireli, Disney Research Zurich, Switzerland; and Olga Sorkine-Hornung, ETH Zurich, Switzerland
Abstract.

We propose Differentiable Surface Splatting (DSS), a high-fidelity differentiable renderer for point clouds. Gradients for point locations and normals are carefully designed to handle discontinuities of the rendering function. Regularization terms are introduced to ensure uniform distribution of the points on the underlying surface. We demonstrate applications of DSS to inverse rendering for geometry synthesis and denoising, where large-scale topological changes, as well as small-scale detail modifications, are accurately and robustly handled without requiring explicit connectivity, outperforming state-of-the-art techniques. The data and code are at https://github.com/yifita/DSS.

point cloud, inverse rendering, differentiable programming
CCS Concepts: • Computing methodologies → Point-based models; Visibility; Rendering.

Figure 1. Using our differentiable point-based renderer, scene content can be optimized to match a target rendering. Here, the positions and normals of the points are optimized to reproduce the reference rendering of the Stanford bunny. The optimization successfully deforms a sphere into the target bunny model, capturing both large-scale and fine-scale structures. From left to right: the input points, the results after iterations 18, 57, 198 and 300, and the target.

1. Introduction

Differentiable processing of scene-level information in the image formation process is emerging as a fundamental component for both 3D scene and 2D image and video modeling. The challenge of developing a differentiable renderer lies at the intersection of computer graphics, vision, and machine learning, and has recently attracted a lot of attention from all communities due to its potential to revolutionize digital visual data processing and its high relevance for a wide range of applications, especially when combined with contemporary neural network architectures [loper2014opendr; kato2018neural; liu2018paparazzi; yao20183d; petersen2019pix2vex].

A differentiable renderer (DR) takes scene-level information θ, such as 3D scene geometry, lighting, material and camera position, as input, and outputs a synthesized image I = R(θ). Any changes in the image I can thus be propagated to the parameters θ, allowing for image-based manipulation of the scene. Assuming a differentiable loss function L(I) = L(R(θ)) on a rendered image I, we can update the parameters θ with the gradient ∂L/∂θ = (∂L/∂I)(∂I/∂θ). This view provides a generic and powerful shape-from-rendering framework, where we can exploit vast available image datasets, deep learning architectures and computational frameworks, as well as pre-trained models. The challenge, however, is being able to compute the gradient ∂I/∂θ in the renderer.
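
To make this concrete, the following is a minimal sketch of such a gradient-based inverse rendering loop in PyTorch. The function render is a hypothetical placeholder for any differentiable renderer R(θ) (it is not the DSS implementation), and the plain MSE image loss and Adam optimizer are illustrative assumptions rather than the choices made in this paper.

```python
import torch

def render(points, normals):
    # Hypothetical differentiable renderer R(theta): returns an (H, W, 3)
    # image tensor that is a differentiable function of points and normals.
    raise NotImplementedError

def fit_to_image(points, normals, target, steps=300, lr=1e-3):
    # theta = (points, normals); both are optimized by gradient descent.
    points = points.clone().requires_grad_(True)
    normals = normals.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([points, normals], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = render(points, normals)                      # I = R(theta)
        loss = torch.nn.functional.mse_loss(image, target)   # L(R(theta))
        loss.backward()     # chain rule: dL/dtheta = (dL/dI)(dI/dtheta)
        optimizer.step()
    return points.detach(), normals.detach()
```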

Existing DR methods can be classified into three categories based on their geometric representation: voxel-based [nguyen2018rendernet; tulsiani2017multi; liu2017material], mesh-based [loper2014opendr; kato2018neural; liu2018paparazzi], and point-based [insafutdinov2018unsupervised; lin2018learning; roveri2018pointpronets; rajeswar2018pix2scene]. Voxel-based methods work on volumetric data and thus come with high memory requirements even for relatively coarse geometries. Mesh-based DRs avoid this problem by exploiting the sparseness of the underlying geometry in 3D space. However, they are bound to the mesh structure, with limited room for global and topological changes, as connectivity is not differentiable. Equally importantly, acquired 3D data typically comes in an unstructured representation that needs to be converted into mesh form, which is itself a challenging and error-prone operation. Point-based DRs circumvent these problems by directly operating on point samples of the geometry, leading to flexible and efficient processing. However, existing point-based DRs use simple rasterization techniques, such as forward projection or depth maps, and thus exhibit well-known deficiencies of point cloud processing when capturing fine geometric details, dealing with gaps and occlusions between nearby points, and forming a continuous surface.

In this paper, we introduce Differentiable Surface Splatting (DSS), the first high-fidelity point-based differentiable renderer. We utilize ideas from surface splatting [zwicker2001surface], where each point is represented as a disk or ellipse in object space, which is projected onto the screen space to form a splat. The splats are then interpolated to encourage hole-free and antialiased renderings. For inverse rendering, we carefully design gradients with respect to point locations and normals by taking each forward operation apart and utilizing domain knowledge. In particular, we introduce regularization terms for the gradients that carefully drive the optimization towards the most plausible point configuration. There are infinitely many point configurations that can form a given image, due to the many degrees of freedom of point locations and normals; our inverse pass ensures that points stay on local geometric structures with uniform distribution.
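
As a rough illustration of the forward splatting idea (not the DSS implementation), the sketch below renders each point as an isotropic screen-space Gaussian splat and blends the splats per pixel. It deliberately omits the perspective-correct elliptical splat shapes, visibility handling, and the custom gradients and regularizers described above; all names and parameters are illustrative assumptions.

```python
import torch

def splat_forward(points, colors, H=128, W=128, focal=100.0, sigma=1.0):
    # points: (N, 3) in camera space with z > 0; colors: (N, 3) per-point colors.
    # Perspective projection of point centers to pixel coordinates.
    x = focal * points[:, 0] / points[:, 2] + W / 2.0
    y = focal * points[:, 1] / points[:, 2] + H / 2.0
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    # Per-point Gaussian splat weights at every pixel: shape (N, H, W).
    d2 = (xs[None] - x[:, None, None]) ** 2 + (ys[None] - y[:, None, None]) ** 2
    w = torch.exp(-0.5 * d2 / sigma ** 2)
    # Normalized blending of point colors (no depth test or occlusion here).
    image = (w[..., None] * colors[:, None, None, :]).sum(0)
    image = image / (w.sum(0)[..., None] + 1e-8)
    return image  # (H, W, 3), differentiable w.r.t. points and colors
```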

We apply DSS to render multi-view color images as well as auxiliary maps from a given scene. We process the rendered images with state-of-the-art techniques and show that propagating the changes back to the point cloud via DSS leads to high-quality geometries. Experiments show that DSS yields significantly better results than previous DR methods, especially for substantial topological changes and geometric detail preservation. We focus on the particularly important application of point cloud denoising. The implementation of DSS, as well as our experiments, is available at https://github.com/yifita/DSS.

2. Related work

In this section, we provide some background and review the state of the art in differentiable rendering and point-based processing.
