Static/Dynamic Filtering for Mesh Geometry
The joint bilateral filter, which enables feature-preserving signal smoothing according to the structural information from a guidance, has been applied to various tasks in geometry processing. Existing methods either rely on a static guidance that may be inconsistent with the input and lead to unsatisfactory results, or a dynamic guidance that is automatically updated but sensitive to noise and outliers. Inspired by recent advances in image filtering, we propose a new geometry filtering technique called the static/dynamic filter, which utilizes both static and dynamic guidances to achieve state-of-the-art results. The proposed filter is based on a nonlinear optimization that enforces smoothness of the signal within designated neighborhoods, while preserving signal variations that correspond to features of certain scales. We develop an efficient iterative solver for the problem, which unifies existing filters that are based on static or dynamic guidances. The filter can be applied to mesh face normals followed by vertex position update, to achieve scale-aware and feature-preserving filtering of mesh geometry. It also works well for other types of signals defined on mesh surfaces, such as texture colors. Extensive experimental results demonstrate the effectiveness of the proposed filter for various geometry processing applications such as mesh denoising, geometry feature enhancement, and texture color filtering.
Signal filtering, the process of modifying signals to achieve desirable properties, has become a fundamental tool in many application areas. In image processing, for example, various filters have been developed for smoothing images while preserving sharp edges. Among them, the bilateral filter  updates an image pixel using the weighted average of nearby pixels, taking into account their spatial and range differences. Its simplicity and effectiveness make it popular in image processing, and have inspired various follow-up works with improved performance [2, 3, 4, 5].
Besides image processing, filtering techniques have also been utilized for processing 3D geometry. Indeed, many geometric descriptors such as normals and vertex positions can be considered as signals defined on two-dimensional manifold surfaces, where image filtering methods can be naturally extended and applied. For example, the bilateral filter has been adapted for feature-preserving mesh smoothing and denoising [6, 7, 8, 9].
Development of new geometry filters has also been inspired by other techniques that improve upon the original bilateral filter. Among them, the joint bilateral filter [2, 3] determines the filtering weights using the information from a guidance image instead of the input image, and achieves more robust filtering results when the guidance provides reliable structural information. One limitation of this approach is that the guidance image has to be specified beforehand, and remains static during the filtering process. For image texture filtering, Cho et al.  address this issue by computing the guidance using a patch-based approach that reliably captures the image structure. This idea was later adopted by Zhang et al.  for mesh denoising, where a patch-based guidance is computed for filtering the face normals. Another improvement for the joint bilateral filter is the rolling guidance filter proposed by , which iteratively updates an image using the previous iterate as a dynamic guidance, and is able to separate signals at different scales. Recently, this approach was adapted by Wang et al.  to derive a rolling guidance normal filter (RGNF), with impressive results for scale-aware geometric processing.
For guided filtering, the use of static vs. dynamic guidance presents a trade-off between their properties. Static guidance enables direct and intuitive control over the filtering process, but is not trivial to construct a priori for general shapes. Dynamic guidance, such as the one used in RGNF, is automatically updated according to the current signal values, but can be less robust when there are outliers or noise in the input signal. Recently, Ham et al.  combined static and dynamic guidance for robust image filtering. Inspired by their work, we propose in this paper a new approach for filtering signals defined on mesh surfaces, by utilizing both static and dynamic guidances. The filtered signal is computed by minimizing a target function that enforces consistency of signal values within each neighborhood, while incorporating structural information provided by a static guidance. To solve the resulting nonconvex optimization problem, we develop an efficient fixed-point iteration solver, which significantly outperforms the majorization-minimization (MM) algorithm proposed by  for similar problems. Moreover, unlike the MM algorithm, our solver can handle constraints such as unit length for face normals, which are important for geometry processing problems. Our solver iteratively updates the signal values by combining the original signal with the current signal from a spatial neighborhood. The combination weights are determined according to the static input guidance as well as a dynamic guidance derived from the current signal. The proposed method, called static/dynamic (SD) filtering, benefits from both types of guidance and produces scale-aware and feature-preserving results.
The proposed method can be applied to different signals on mesh surfaces. When applied to face normals followed by vertex updates, it filters geometric features according to their scales. When applied to mesh colors obtained from texture mapping, it filters the texture image based on the metric on the mesh surface. In addition, utilizing the scale-awareness of the filter, we apply it repeatedly to separate signal components of different scales; the resulting components can be combined according to user-specified weights, allowing for intuitive feature manipulation and enhancement. Extensive experimental results demonstrate the efficiency and effectiveness of our filter. We also release the source code to ensure reproducibility.
In addition, we propose a new method for vertex update according to face normals, using a nonlinear optimization formulation that enforces the face normal conditions while preserving local triangle shapes. The vertex positions are computed by iteratively solving a linear system with a fixed sparse positive definite matrix, which is done efficiently via pre-factorization of the matrix. Compared with existing approaches, our method produces meshes that are more consistent with the filtered face normals.
In summary, our main contributions include:
we extend the work of Ham et al.  and propose an SD filter for signals defined on triangular meshes, formulated as an optimization problem;
we develop an efficient fixed-point iteration solver for the SD filter, which significantly outperforms the MM algorithm from  and is able to handle constraints such as unit normals;
we propose an efficient approach for updating vertex positions according to filtered face normals, which produces new meshes that are consistent with the target normals while preserving local triangle shapes;
based on the SD filter, we develop a method to separate and combine signal components of different scales, enabling intuitive feature manipulation for mesh geometry and texture color.
2 Related Work
In the past, various filtering approaches have been proposed to process mesh geometry. Early work from Taubin  and Desbrun et al.  applied low-pass filters on meshes, which remove high-frequency noise but also attenuate sharp features. To better preserve features, more sophisticated image filtering techniques such as the bilateral filter  were adapted to mesh domains. On images, the bilateral filter updates a pixel using a weighted average of its neighboring pixels, with larger contribution from pixels that are closer in the spatial or range domain. It can smooth images while preserving edges where there is a large difference between neighboring pixel values . Different methods have been developed to adapt the bilateral filter to mesh geometry. Fleishman et al.  and Jones et al.  applied the bilateral filter for feature-preserving mesh denoising, by treating vertex positions as geometry signals. Later, Zheng et al.  performed mesh denoising by applying the bilateral filter to mesh face normals instead, followed by vertex position update to reconstruct the mesh shape. Recently, Solomon et al.  proposed a framework for the bilateral filter that is applicable to signals on general domains including images and meshes, with a rigorous theoretical foundation. Besides denoising, bilateral filtering has also been applied to other geometry processing tasks such as point cloud normal enhancement  and mesh feature recovery .
The bilateral filter inspired a large amount of follow-up work on image filtering. Among them, the joint bilateral filter [2, 3] extends the original bilateral filter by evaluating the spatial kernel using a guidance image. It can produce more reliable results when the guidance image correctly captures the structural information of the target signal. This property was utilized by Eisemann & Durand  and Petschnigg et al.  to filter flash photos, using corresponding non-flash photos as the guidance. Kopf et al.  and Cho et al.  applied the joint bilateral filter for image upsampling and structure-preserving image decomposition, respectively. In particular,  constructed a patch-based guidance to capture the structure of the input image. This idea was later adopted in  for filtering mesh face normals, where the guidance normals are computed using surface patches with the most consistent normals. Zhang et al.  proposed a different approach to guidance construction in their iterative rolling guidance filter, where the resulting image from an iteration is used as a dynamic guidance for the next iteration. The rolling guidance filter produces impressive results for scale-aware image processing, and is able to filter out features according to their scales. Wang et al.  adapted this approach to filter mesh face normals; the resulting rolling guidance normal filter enables scale-aware processing of geometric features, but is sensitive to noise in the input model. Recently, Ham et al.  proposed a robust image filtering technique based on an optimization formulation that involves a nonconvex regularizer. Their technique is effectively an iterative filter that incorporates both static and dynamic guidances, and achieves superior results in terms of robustness, feature preservation, and scale-awareness. Our SD filter is based on a similar optimization formulation, but takes into account the larger filtering neighborhoods that are necessary for geometry signals.
Our filter enjoys the same desirable properties as its counterpart in image processing. In addition, the numerical solver proposed in  can only handle unconstrained signals, and is less efficient for the large neighborhoods used in our formulation. We therefore propose a new solver that outperforms the one from , while allowing for constrained signals such as unit normals.
Besides filtering approaches, feature-preserving signal smoothing can also be achieved via direct optimization. Notable examples include image smoothing algorithms that induce sparsity of the resulting image gradients via -norm  or -norm  regularization. These approaches were later adapted for mesh smoothing and denoising [21, 22, 23]. Although effective in many cases, their optimization formulations only regularize the signal difference between immediately neighboring faces. In comparison, our optimization compares signals within a neighborhood of user-specified size, which provides more flexibility and achieves better preservation of large-scale features.
From a signal processing point of view, meshes can be seen as a combination of signals with multiple frequency bands, which also relates to scale-space analysis . Previous works separate geometry signals of different frequencies using eigenfunctions of the heat kernel  or the Laplace operator [26, 27]. Although developed with sound theoretical foundations, such approaches are computationally expensive. Moreover, as specific geometric features can span a wide range of frequencies, it is not easy to preserve or manipulate them with such approaches. The recent work of Wang et al.  provides an efficient way to separate and edit geometric features of different scales, harnessing the scale-aware property of the rolling guidance filter. Our SD filter also supports scale-aware processing of geometry signals, with more robustness than RGNF thanks to the incorporation of both static and dynamic guidances.
3 The SD Filter
The SD filter was originally proposed by Ham et al.  for robust image processing. Given an input image and a static guidance image , they compute an output image via optimization
where are the pixel values of , and respectively, and are user-specified weights, denotes the set of 8-connected neighboring pixels, and
The first term in the target function is a fidelity term that requires the output image to be close to the input image, while the second term is a regularizer for the output image. The function (see Fig. 2) penalizes the difference between adjacent pixels, but with a bounded penalty for pixel pairs with large differences, which correspond to edges or outliers. When approaches , approaches the norm. Function is a Gaussian weight function according to the guidance, giving a higher weight to pixel pairs with more similar guidance values. Thus the regularizer promotes smooth regions and preserves sharp features according to the guidance, while being robust to outliers.
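As a concrete illustration of these two ingredients, the sketch below implements a Welsch-type bounded penalty together with a Gaussian guidance weight. This is one common choice consistent with the description above, not necessarily the paper's exact definitions, and the parameter names `nu` and `sigma_g` are illustrative rather than the paper's notation.

```python
import math

def robust_penalty(x, nu):
    # Welsch-type penalty: behaves like x^2 / (2 * nu^2) for small |x|,
    # but saturates at 1 for large |x|, so sharp edges and outliers
    # incur only a bounded cost and are therefore preserved.
    return 1.0 - math.exp(-x * x / (2.0 * nu * nu))

def guidance_weight(g_i, g_j, sigma_g):
    # Gaussian weight from the static guidance: the more similar the
    # guidance values of a pixel pair, the stronger their mutual smoothing.
    d = g_i - g_j
    return math.exp(-d * d / (2.0 * sigma_g * sigma_g))
```

As the penalty parameter `nu` shrinks, `robust_penalty` approaches a 0/1 indicator of a nonzero difference, matching the limiting behavior described above.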
In this paper, we propose an SD filter for signals defined on 2-manifold surfaces represented as triangular meshes. We begin our discussion with filtering face normals, a common approach for smoothing mesh geometry [29, 8, 9, 10].
3.1 SD filter for face normals
For a given orientable triangular mesh, let be the oriented unit normal of face , computed as
where are its vertex positions enumerated according to the orientation. We associate the normal with the face centroid . We would like to filter the face normals, and update the mesh vertices according to the filtered normals. To define an SD filter for the normals , we must consider some major differences compared with image filtering:
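In code, this computation can be sketched as follows (plain Python; the helper name is illustrative):

```python
import math

def face_normal_and_centroid(p0, p1, p2):
    # Oriented unit normal via the cross product of two edge vectors;
    # the vertex order (orientation) determines the normal's sign.
    e1 = [p1[k] - p0[k] for k in range(3)]
    e2 = [p2[k] - p0[k] for k in range(3)]
    n = [e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    length = math.sqrt(sum(c * c for c in n))  # = 2 * face area
    n = [c / length for c in n]
    # The normal is associated with the face centroid.
    centroid = [(p0[k] + p1[k] + p2[k]) / 3.0 for k in range(3)]
    return n, centroid
```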
Image pixels are located on a regular grid, but mesh faces may result from irregular sampling of the surface.
To smooth an image, the SD filter as per Eq. (1) only considers the difference between a pixel and its eight neighbor pixels. On meshes, however, geometry features can span across a large region, thus we may need to compare face normals beyond one-ring neighborhoods . Moreover, similar to the bilateral filter, such comparison should consider the difference between the spatial locations, with stronger penalty for normal deviation between faces that are closer to each other.
Therefore, we can compute the filtered normals by minimizing a target function
with a user-specified weight . Here is a fidelity term between the input and output normals,
where are the face normals on the input mesh, and is the area of face on the input mesh. is a regularization term defined as
where are the guidance face normals, and denotes the set of neighboring faces of . The Gaussian standard deviation parameters are controlled by the user. Compared with the image regularizer in Eq. (1), this formulation introduces a Gaussian weight for the spatial locations of face normals. Here is defined according to the Euclidean distance between face centroids because of the simplicity of its computation, but other distance measures such as the geodesic distance can also be used. For each face , its neighborhood is chosen to be the set of faces with a significant value of the spatial weight . Using the empirical three-sigma rule , we include in the faces with , which can be found using a breadth-first search from .
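A sketch of this neighborhood construction (Python; `adjacency` maps each face index to its edge-adjacent faces, and all names are assumptions for illustration):

```python
import math
from collections import deque

def face_neighborhood(f, centroids, adjacency, sigma_s):
    # Breadth-first search from face f, collecting faces whose centroids
    # lie within 3 * sigma_s of f's centroid (the three-sigma rule).
    radius = 3.0 * sigma_s
    cf = centroids[f]
    visited = {f}
    queue = deque([f])
    neighborhood = []
    while queue:
        g = queue.popleft()
        for h in adjacency[g]:
            if h not in visited:
                visited.add(h)
                if math.dist(centroids[h], cf) <= radius:
                    neighborhood.append(h)
                    queue.append(h)
    return neighborhood
```

Faces outside the radius are marked visited but not expanded, so the search stops at the neighborhood boundary instead of traversing the whole mesh.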
The target function is nonconvex because of , and needs to be minimized numerically. In the following, we first show how the majorization-minimization (MM) algorithm proposed by  can be extended to solve this problem. Afterwards, we propose a new fixed-point iteration solver that significantly outperforms the MM algorithm and is suitable for interactive applications.
MM algorithm. For the SD image filter, Ham et al.  proposed a majorization-minimization (MM) algorithm to iteratively minimize the target function (1). In each iteration, the target function is replaced by a convex surrogate function that bounds it from above, which is computed using the current variable values. This surrogate function is then minimized to update the variables. The MM solver is guaranteed to converge to a local minimum of the target function. Thus a straightforward way to minimize the new target function (4) is to employ the MM algorithm, using the convex surrogate function for at :
Specifically, with the variable values at iteration , we replace the term in the target function by its convex surrogate according to Eq. (7). The updated variable values are computed from the resulting convex problem
Due to the symmetry of the neighboring relation between faces (i.e., ), the optimization problem (8) amounts to solving a linear system:
where with being the number of faces, stack the values of and respectively, and is a symmetric matrix with diagonal elements
and off-diagonal elements
The linear system matrix in Eq. (10) is diagonally dominant and symmetric positive definite, so the system can be solved using standard linear algebra routines.
Fixed-point iteration solver. Although the MM algorithm works well on images, its performance on meshes is often unsatisfactory. Due to the larger face neighborhoods, the linear system matrix of Eq. (10) contains a large number of nonzeros, resulting in a long computation time per iteration. In the following, we propose a more efficient solver that is suitable for interactive applications. Note that a local minimum of the target function (4) must satisfy the first-order optimality condition for each , which expands into
The equations (11) can be solved using fixed-point iteration
with . In this way, the updated normal of a face is a convex combination of its initial normal and the current normals of the faces in its neighborhood. The convex combination coefficient for a neighboring face normal depends on both the (static) difference between the guidance normals on the two faces, and the (dynamic) difference between their current normals , hence the name static/dynamic filter. Moreover, there is an interesting connection between the fixed-point iteration and the MM algorithm: it can be shown that the iteration (13) is a single step of Jacobi iteration for solving the MM linear system (10).
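The structure of one such update can be sketched as follows (Python; face-area factors and the exact weight expressions of iteration (13) are simplified here, so this shows the shape of the update rather than a literal transcription, and all names are illustrative):

```python
import math

def gauss(d2, sigma):
    # Gaussian kernel evaluated on a squared distance d2.
    return math.exp(-d2 / (2.0 * sigma * sigma))

def fixed_point_step(n_cur, n_init, guidance, neigh, w_spatial, lam, eta, mu):
    # One fixed-point update: each new normal is a convex combination of
    # the face's initial normal (fidelity, weight 1 here) and the current
    # normals in its neighborhood, weighted by the spatial Gaussian, the
    # static guidance difference, and the dynamic current difference.
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    updated = []
    for i in range(len(n_cur)):
        acc = list(n_init[i])
        total = 1.0
        for j in neigh[i]:
            w = lam * w_spatial[(i, j)] \
                * gauss(d2(guidance[i], guidance[j]), eta) \
                * gauss(d2(n_cur[i], n_cur[j]), mu)
            total += w
            for k in range(3):
                acc[k] += w * n_cur[j][k]
        updated.append(tuple(a / total for a in acc))
    return updated
```

The unit-normal variant of Eq. (15) would simply renormalize each updated vector to unit length after this step.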
It can be shown that the fixed-point iteration monotonically decreases the target function value until it converges to a local minimum. A proof is provided in the supplementary material. Moreover, the face normal updates are trivially parallelizable, which enables significant speedup on GPUs and multi-core CPUs. Our fixed-point solver is significantly faster than the MM algorithm, as shown in Fig. 3, where we compare the change of the target energy with respect to the computational time for the two solvers. Here the spatial Gaussian parameter is set to three times the average distance between neighboring face centroids in the mesh. To achieve the best performance for the MM solver, we tested two strategies for solving the MM linear system: 1) pre-computing the symbolic Cholesky factorization of the system matrix and performing numerical factorization in each iteration, with the three right-hand sides for the x-, y-, and z-coordinates solved in parallel; 2) the conjugate gradient method with parallel sparse matrix-vector multiplication. Due to the large neighborhood size, the MM system matrix has a large number of non-zeros in each row, and the Cholesky factorization approach is much more time-consuming than the conjugate gradient method. Therefore, we only show the timing of the algorithm with the conjugate gradient solver. We can see that the fixed-point solver is much more efficient than the MM algorithm, drastically reducing the energy to a value close to the solution within a fraction of the computational time of one MM iteration. This phenomenon is observed in our experiments with other models as well. Detailed results are provided in the supplementary material.
Enforcing unit normal constraints. The target function in Eq. (4) adapts the SD filter to mesh face normals in a straightforward way, but fails to recognize the requirement that all normals should lie on the unit sphere. In fact, starting from the unit face normals of the input mesh, can be decreased by simply shrinking the normals without changing their directions, and the filtered normals can have different lengths across the mesh. In other words, without the requirement of unit normals, the difference between two normal vectors is not a reliable measure of the deviation between their directions, which makes less effective for controlling the filter. To resolve this issue, we derive a new target function for optimization, by substituting each normal vector in with its normalization . However, such normalization increases the nonlinearity of the problem and makes its numerical solution more challenging. The MM algorithm is no longer applicable, because the quadratic surrogate in Eq. (7) does not hold here. On the other hand, the fixed-point iteration solver can be slightly modified to minimize efficiently. At a local minimum, the first-order optimality condition amounts to
and is the identity matrix. The matrix represents the projection onto the subspace orthogonal to . Geometrically, condition (14) means that the linear combination must be parallel to . Therefore, we can update via
with . Compared with the previous iteration format (13), this simply adds a normalization step after each iteration.
Similar to the previous fixed-point iteration format, the new solver with Eq. (15) is embarrassingly parallel, and rapidly decreases the target function within a small number of iterations (see Fig. 4). In the following, all examples of SD normal filtering are processed using this solver.
Vertex update. After the face normals are filtered, the mesh vertices need to be updated accordingly. Many existing methods compute the new vertex positions by enforcing the orthogonality between the new edge vectors and the target face normals . Although very efficient, such methods can result in a large number of flipped triangles, because the orthogonality constraint is still satisfied if the updated face normal is opposite to the target one. To address this issue, we propose a new update method that optimizes the vertex positions by directly enforcing the oriented normals as soft constraints, in the same way as . Specifically, for each face with a target oriented unit normal , we define as the feasible set of its vertex positions for which the resulting oriented unit normal is . The new vertex positions are then determined by solving
Here the first term penalizes the deviation between the new vertex positions and the original vertex positions , with being the Frobenius norm. is a user-specified positive weight, which is set to 0.001 by default. Matrix stores the vertex positions of face in its rows. Matrix
produces the mean-centered vertex positions for a face. are auxiliary variables representing the closest projection of onto the feasible set , and is an indicator function for , so that
The second term of the target function (16) penalizes the violation of the oriented normal constraint for each face, using the squared Euclidean distance to the feasible sets. The use of the mean-centering matrix exploits the translation-invariance of the oriented normal constraint, allowing for faster convergence of the solver . Overall, this optimization problem searches for new vertex positions that satisfy the oriented normal constraints as much as possible, while being close to the original positions. It is solved via alternating minimization of and , following the approach of :
First, we fix and update . This reduces to a set of separable subproblems, each projecting the current mean-centered vertex positions of a face to the corresponding feasible set . Namely, we look for the vertex positions that are closest to while achieving the target oriented unit normal . Note that for the oriented normal condition to hold, must lie on a plane orthogonal to . Moreover, as the mean of the three vertex positions in is at the origin, it can be shown that the mean of must also lie at the origin. As a result, must lie on a plane that passes through the origin and is orthogonal to . The closest projection from onto this plane can be computed as
Let be the oriented unit normal for the current vertex positions . Then depending on the relation between and , we have two possible solutions for .
If , then the oriented unit normal for is , and we have .
If , then the oriented unit normal for is . In this case, the solution degenerates to three collinear points that lie in the plane of and minimize the distance . This can be computed as
where is the right-singular vector of corresponding to its largest singular value.
The subproblem for each face is independent and can be solved in parallel.
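For the non-degenerate case, where the current oriented normal agrees with the target, the projection step can be sketched as follows (Python; the flipped case requiring the SVD-based collinear solution is omitted, and all names are illustrative):

```python
def project_to_normal_plane(p0, p1, p2, n_target):
    # Mean-center the triangle, then project each centered vertex onto
    # the plane through the origin orthogonal to the target unit normal.
    # This covers only the non-degenerate case; when the current oriented
    # normal is opposite to the target, the SVD-based collinear solution
    # described above is needed instead.
    c = [(p0[k] + p1[k] + p2[k]) / 3.0 for k in range(3)]
    projected = []
    for p in (p0, p1, p2):
        q = [p[k] - c[k] for k in range(3)]
        dot = sum(q[k] * n_target[k] for k in range(3))
        projected.append(tuple(q[k] - dot * n_target[k] for k in range(3)))
    return projected
```

Mean-centering before the projection keeps the result's centroid at the origin, matching the translation-invariant formulation above.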
Next, we fix and update . This is equivalent to
where the sparse matrix collects the mean-centering matrix coefficients for each face. This amounts to solving a sparse positive definite linear system
where is the identity matrix. The three right-hand sides of the system correspond to the x-, y-, and z-coordinates, and can be solved in parallel. Moreover, the system matrix is fixed during all iterations, so we can pre-compute its Cholesky factorization to allow for efficient solving in each iteration.
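The factor-once, solve-many pattern can be illustrated with a dense pure-Python sketch (the actual implementation would use a sparse Cholesky solver, e.g. from Eigen; function names are illustrative):

```python
import math

def cholesky(A):
    # Dense Cholesky factorization A = L L^T for a symmetric positive
    # definite matrix (illustration only; the paper uses a sparse solver).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_chol(L, b):
    # Forward/back substitution reusing the precomputed factor, so each
    # iteration's solve avoids a full refactorization.
    n = len(L)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

Because the factor depends only on the fixed system matrix, it is computed once and reused for every right-hand side and every alternating-minimization iteration.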
The above alternating minimization is repeated until convergence. We use 20 iterations in all our experiments, which is sufficient to achieve good results.
Our vertex update method can efficiently compute a new mesh that is consistent with the target face normals, while being close to the original mesh shape. Fig. 5 compares our approach with the vertex update method proposed in , which also avoids flipped triangles. Given the target oriented normal for each face, they first rotate the current face to align its oriented normal with the target normal; the new vertex positions are then computed by matching the new face gradients with the rotated ones in a least-squares manner, by solving a Poisson linear system. For each method, we evaluate the deviation between the resulting mesh and the original mesh by aligning their centroids to minimize the norm of their vertex deviations (shown in the top center of Fig. 5), and then visualizing the deviation of each vertex via color coding. The resulting mesh using our method is noticeably closer to the original mesh, as we explicitly enforce closeness in our target function. This is desirable for many applications such as mesh denoising. In addition, we compute the deviation between the resulting face normals and the target normals, and visualize their distribution using a histogram as well as color coding on the mesh surface. Our method leads to smaller deviation between the target and the resulting normals. Although the computational time of our method (0.4313 seconds) is higher than that of the method from  (0.1923 seconds), it does not make a significant difference to the total filtering time (see Table I).
3.2 SD filter for texture colors
Our SD filter can be applied not only to face normals, but also to other signals defined on mesh surfaces. One example is RGB colors from texture mapping. Given a texture image associated with a triangular mesh, we can use the texture coordinates to identify each pixel that gets mapped to the surface, as well as its mapped position on the mesh. Let be the guidance colors for these pixels. We can compute the filtered texture colors for the pixels by minimizing the target function in Eq. (4), with replaced by respectively, and with the neighborhood determined according to the 3D positions . In this way, the texture image is filtered according to the metric on the mesh surface instead of the distance on the image plane. The optimization problem is solved using an unconstrained fixed-point iteration similar to (13). Fig. 6 shows some examples of texture color filtering.
4 Results and Applications
In this section, we use a series of examples to demonstrate the efficiency and effectiveness of our SD filter, as well as its various applications. We also compare the SD filter with related methods including optimization  and rolling guidance normal filter (RGNF) .
Implementation. Our algorithm is implemented in C++, using the Eigen library  for all linear algebra operations. For filtering of face normals, we run the iterative solver until one of the following conditions is satisfied: 1) the solver reaches the maximum number of iterations, which is set to 100 for all our experiments; or 2) the area-weighted norm of normal changes between two consecutive iterations is smaller than a certain threshold angle , i.e.,
We set this threshold to 0.2 degrees in all our experiments. For filtering of texture colors, we run the solver for 50 iterations. Unless stated otherwise, all examples are run on a desktop PC with 16GB of RAM and a quad-core CPU at 3.6 GHz, using OpenMP for parallelization.
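A plausible reading of stopping criterion 2) is sketched below (Python; the paper's exact expression is given in the equation above, so the area-weighted angular measure here is an assumption for illustration, and all names are hypothetical):

```python
import math

def converged(n_new, n_old, areas, theta_deg):
    # Area-weighted average angular change (in degrees) between two
    # consecutive unit-normal iterates, compared against a threshold angle.
    total = 0.0
    for n1, n0, a in zip(n_new, n_old, areas):
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(n1, n0))))
        total += a * math.degrees(math.acos(dot))
    return total / sum(areas) < theta_deg
```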
In all examples, the spatial Gaussian parameter is specified with respect to the average distance between adjacent face centroids, denoted as . By default, the initial signal is used as the guidance. For more intuitive control of the optimization, we also rescale the user-specified regularizer weight according to the value of . As increases, the integral of the spatial Gaussian over the corresponding face neighborhood also increases, and the relative scale of the regularizer term with respect to the fidelity term grows. Therefore, we compensate for the change of the regularizer scale due to , by rescaling with a factor .
The source code for our SD filter is available at https://github.com/bldeng/MeshSDFilter. The parameters for the examples in this section can be found in the supplemental material.
Scale-aware and feature-preserving filtering. The SD filter can effectively remove features based on their scales according to the user-specified parameters. This is demonstrated in Fig. 7, where the input models are a sphere and a cube with additional features of different scales on the surfaces. Using different parameter settings, the SD filter gradually removes the geometry features of increasing scales, while preserving the sharp edges on the cube. Similar scale-aware and feature-preserving effects are observed for filtering of texture colors, as shown in Fig. 6.
In Figs. 8, 9, and 10, we show more examples of scale-aware and feature-preserving filtering of mesh geometry using the SD filter, and compare the results with the optimization method from  and RGNF from . We tune the parameters of each method to achieve the best results, while ensuring comparable effects across the different methods. In all examples, the SD filter achieves better or similar results compared with RGNF, and outperforms optimization. In Fig. 8, the input model is a cube with additional features on each face, and the resulting mesh from the SD filter is the closest to the cube shape. RGNF leads to a result with larger deviation from the cube shape, because the filtered signals are computed as a combination of the original signals within a neighborhood; as a consequence, when there is a large deviation between the input signals and the desired output within a certain region, RGNF may not produce a desirable result inside that region. In Fig. 9, the SD filter is able to smooth out the star-shaped features on the knot surface, while enhancing the sharp feature lines between different sides of the knot. Although optimization also enhances the feature lines, it leads to piecewise flat shapes because the norm promotes piecewise constant signals. In Fig. 10, the three methods produce similar results on the Merlion model, while the scale-awareness of the SD filter enables it to remove the fine brick lines at the base while clearly retaining the letters.
Choice of parameters. Our filter is influenced by four parameters: the regularizer weight λ; the spatial Gaussian parameter σ_s, which also determines the neighborhood size; the guidance Gaussian parameter σ_g; and the range Gaussian parameter σ_r. For both λ and σ_s, a larger value leads to a smoother result, as shown in Fig. 12. Parameters σ_g and σ_r determine which face normals within a neighborhood affect the central face in a fixed-point iteration. For more intuitive control, we propose a method to interactively set the σ_g and σ_r parameters. First, the user selects two smooth regions on different sides of a feature intended to be kept. We denote the two regions as Ω_1 and Ω_2, and compute the mean n̄_i and variance σ_i² (i = 1, 2) of the normals within each region via
$$\bar{\mathbf{n}}_i = \frac{\sum_{f \in \Omega_i} \mathbf{n}_f}{\left\| \sum_{f \in \Omega_i} \mathbf{n}_f \right\|}, \qquad \sigma_i^2 = \frac{1}{|\Omega_i|} \sum_{f \in \Omega_i} \left\| \mathbf{n}_f - \bar{\mathbf{n}}_i \right\|^2, \qquad i = 1, 2.$$
Then the range of σ_r is determined using the following strategy: σ_r should be small enough that the two mean normals have negligible influence on each other according to the range Gaussian; at the same time, within each region the normals should influence each other so that no spurious sharp features emerge. Based on this strategy and the three-sigma rule, we first determine the lower and upper bounds of σ_r via
$$\sigma_r^{\min} = \max(\sigma_1, \sigma_2), \qquad \sigma_r^{\max} = \frac{\left\| \bar{\mathbf{n}}_1 - \bar{\mathbf{n}}_2 \right\|}{3}.$$
If σ_r^min ≥ σ_r^max, then the user needs to select another pair of regions; otherwise, a value between σ_r^min and σ_r^max is chosen as the parameter σ_r. In our experiments, good results can often be achieved by choosing σ_r within this range and setting σ_g accordingly. Fig. 11 shows the effects of different σ_r values on the Chinese lion model.
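This interactive selection can be sketched as follows. The helper name `sigma_r_bounds` and the exact bound formulas are illustrative assumptions in the spirit of the strategy above (three-sigma-style cutoffs on the range Gaussian), not the original implementation:

```python
import numpy as np

def sigma_r_bounds(region_a, region_b):
    """Candidate bounds for the range Gaussian parameter sigma_r, given
    unit face normals (n x 3 arrays) sampled from two smooth regions on
    opposite sides of a feature to be preserved."""
    def stats(normals):
        mean = normals.mean(axis=0)
        mean /= np.linalg.norm(mean)              # normalized mean normal
        std = np.sqrt(((normals - mean) ** 2).sum(axis=1).mean())
        return mean, std
    m_a, s_a = stats(np.asarray(region_a, dtype=float))
    m_b, s_b = stats(np.asarray(region_b, dtype=float))
    lo = max(s_a, s_b)                   # normals inside a region still interact
    hi = np.linalg.norm(m_a - m_b) / 3   # beyond 3*sigma, influence is negligible
    return lo, hi

# Two nearly flat regions meeting at a 90-degree edge (+z side vs. +x side).
rng = np.random.default_rng(0)
def jitter(base, eps=0.01):
    n = np.asarray(base) + eps * rng.standard_normal((50, 3))
    return n / np.linalg.norm(n, axis=1, keepdims=True)

lo, hi = sigma_r_bounds(jitter([0.0, 0.0, 1.0]), jitter([1.0, 0.0, 0.0]))
assert lo < hi  # a valid sigma_r exists between the bounds
```

For this synthetic 90-degree edge the lower bound is tiny (the regions are nearly flat) while the upper bound is large, so the admissible range for σ_r is wide; for a shallow feature the two bounds approach each other and may force the user to reselect regions.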
Feature manipulation and enhancement. The scale-awareness of our filter enables us to manipulate mesh details according to their scales. Given an input mesh, we can apply the SD normal filter with different parameters to obtain a series of filtered meshes M_1, ..., M_k with less and less fine detail retained. If we denote the input mesh as M_0, then M_k, M_{k-1}, ..., M_0 forms a coarse-to-fine sequence of meshes, with M_k being the base mesh and M_0 being the original mesh. We encode the difference between two consecutive meshes by comparing their corresponding vertex positions and face normals, represented as d_i^v = V_{i-1} − V_i and d_i^n = N_{i-1} − N_i, where V_i and N_i are the vertex positions and face normals of mesh M_i. These differences represent the deformation required for M_i to introduce the additional details in M_{i-1}. They can be linearly combined according to coefficients α_1, ..., α_k and added to the face normals and vertex positions of the base mesh, to derive the target vertex positions V* and target face normals N* for a new mesh M*:
$$\mathbf{V}^* = \mathbf{V}_k + \sum_{i=1}^{k} \alpha_i \, \mathbf{d}_i^v, \qquad \mathbf{N}^* = \mathbf{N}_k + \sum_{i=1}^{k} \alpha_i \, \mathbf{d}_i^n.$$
Note that the target vertex positions and target face normals are often incompatible, i.e., the face normals of a mesh with the target vertex positions differ from the target normals. To combine the two conditions, we determine the new mesh by solving the same optimization problem as our vertex update (16), with the matrix in the target function storing the target vertex positions. In this way, the linear combination coefficients α_i indicate the contribution of geometric features from the original model within a certain range of scales. By changing the values of α_i, a user can control geometric features according to their scales. Setting all coefficients to 1 recovers the original mesh, while setting a coefficient to a value different from 1 boosts or attenuates the features of the corresponding scale. Moreover, the linear system matrix for the optimization problem is fixed regardless of the values of α_i, and only needs to be pre-factorized once. Afterwards, the user can choose any values of α_i, and the resulting mesh can be efficiently computed using the pre-factorized system. This allows the user to interactively explore different linear combination coefficients to achieve desirable results. Figs. 1, 13, and 14 show examples of new meshes created in this manner. We can see that the coarse-to-fine sequence of meshes captures the geometric features of different scales, which are effectively manipulated using the linear combination coefficients. In some application scenarios, it is desirable to modify only the features within a certain region on the surface. In this case, the target vertex positions and target normals are computed via linear combination only within user-selected regions; outside these regions they remain the same as in the original mesh. Fig. 16 shows such an example, where a 3D human face model is locally enhanced.
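The computation of the targets can be sketched as follows. The helper name `detail_targets` is hypothetical, and the renormalization of the combined normals is an assumption; the subsequent vertex-update optimization that reconciles the two incompatible targets is not reproduced here:

```python
import numpy as np

def detail_targets(vertex_seq, normal_seq, alphas):
    """Target vertex positions and face normals from a coarse-to-fine mesh
    sequence. vertex_seq[i] / normal_seq[i] belong to mesh M_i, ordered
    from the original M_0 to the base M_k; alphas[i-1] scales the detail
    layer between M_{i-1} and M_i."""
    k = len(vertex_seq) - 1
    V = vertex_seq[k].astype(float).copy()  # start from the base mesh M_k
    N = normal_seq[k].astype(float).copy()
    for i in range(1, k + 1):
        a = alphas[i - 1]
        V += a * (vertex_seq[i - 1] - vertex_seq[i])   # add detail layer d_i^v
        N += a * (normal_seq[i - 1] - normal_seq[i])   # add detail layer d_i^n
    N /= np.linalg.norm(N, axis=1, keepdims=True)      # keep unit normals
    return V, N
```

With all coefficients set to 1 the sum telescopes and the targets coincide with the original mesh, matching the recovery property noted above.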
Similarly, we can manipulate and enhance texture colors based on the SD filter, as shown in Fig. 15. We first filter the input texture image incrementally to derive a coarse-to-fine sequence of texture images T_k, T_{k-1}, ..., T_0, with T_0 the original texture and T_k the base. Then a new texture image T* is computed via linear combination with coefficients α_1, ..., α_k:
$$\mathbf{T}^* = \mathbf{T}_k + \sum_{i=1}^{k} \alpha_i \left( \mathbf{T}_{i-1} - \mathbf{T}_i \right).$$
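The texture case needs no compatibility optimization; the combined image only has to stay in the valid color range. A minimal sketch, assuming float images in [0, 1] and the hypothetical helper name `combine_texture_layers`:

```python
import numpy as np

def combine_texture_layers(textures, alphas):
    """textures: list of float images in [0, 1], ordered from the original
    T_0 to the base T_k; alphas[i-1] scales the detail layer between
    T_{i-1} and T_i. The result is clipped back to the valid color range."""
    k = len(textures) - 1
    out = textures[k].astype(float).copy()          # start from the base T_k
    for i in range(1, k + 1):
        out += alphas[i - 1] * (textures[i - 1].astype(float) - textures[i])
    return np.clip(out, 0.0, 1.0)
```

As in the mesh case, coefficients of 1 recover the original texture, while coefficients above 1 boost the corresponding detail scale (with out-of-range colors clipped).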
Mesh denoising. By constructing appropriate guidance signals, our SD filter can also be applied to mesh denoising, as shown in Fig. 17. Here we repeatedly apply the SD normal filter to an input mesh to remove the noise. In each run of the SD normal filter, the guidance normals are computed from the current mesh using the patch-based construction approach of the guided mesh normal filtering (GMNF) method of Zhang et al. For each model, we run the SD filter multiple times to perform denoising. The results are evaluated using the average normal deviation and the average vertex deviation from the ground-truth mesh. In addition, we measure the perceptual difference between the denoised mesh and the ground truth using the spatial error term of the STED distance of Váša and Skala, computed with one-ring vertex neighborhoods. Fig. 17 compares our denoising results with GMNF. The results from the two methods are quite close, with similar error metric values. Detailed parameter settings are provided in the supplementary material.
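The two deviation metrics can be computed directly once the denoised and ground-truth meshes are in correspondence. A sketch, assuming identical connectivity and the hypothetical helper name `denoising_errors` (the STED term is omitted, as it requires edge-based relative displacements):

```python
import numpy as np

def denoising_errors(normals_out, normals_gt, verts_out, verts_gt):
    """Mean angular deviation (degrees) between corresponding unit face
    normals, and mean vertex deviation, between a denoised mesh and the
    ground truth. Assumes identical connectivity and correspondence."""
    # clamp dot products to [-1, 1] to guard arccos against rounding
    cos = np.clip((normals_out * normals_gt).sum(axis=1), -1.0, 1.0)
    mean_angle = np.degrees(np.arccos(cos)).mean()
    mean_vdev = np.linalg.norm(verts_out - verts_gt, axis=1).mean()
    return mean_angle, mean_vdev
```

For identical meshes both metrics vanish; the vertex deviation is often reported relative to the mean edge length of the ground-truth mesh to make it scale-independent.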
Performance. Using parallelization, our SD filter can compute the results efficiently. Table I provides representative computation times of the SD normal filter on different models, showing the timing for each part of the algorithm:
- T_pre: the pre-processing time for finding the neighborhoods;
- T_iter: the average time of one fixed-point iteration;
- T_upd: the time for the mesh vertex update;
- T_total: the total time for the whole filtering process.
For meshes with fewer than 100K faces, the whole filtering process typically takes only a few seconds.
5 Discussion and Conclusion
We present the SD filter for triangular meshes, which is formulated as an optimization problem with a target energy that combines a quadratic fidelity term and a nonconvex robust regularizer. We develop an efficient fixed-point iteration solver for the problem, enabling the filter to be used in interactive applications. Our SD filter generalizes the joint bilateral filter, combining a static guidance with a dynamic guidance that is derived from the current signal values. Thanks to the joint static/dynamic guidance, the SD filter is robust, feature-preserving, and scale-aware, producing state-of-the-art results for various geometry processing problems.
Although our solver can incorporate simple constraints such as unit length for normal vectors, we do not consider global conditions on the signals. For example, we do not ensure the integrability of normals, i.e., the existence of a mesh whose face normals match the filter results; as a consequence, some parts of the updated mesh may not be consistent with the filtered normals. Nor do we consider preventing self-intersections of the updated mesh. Due to the local nature of our fixed-point iteration, it is not easy to incorporate such global constraints into the solver. A possible remedy is to introduce a separate step that enforces these conditions after a few iterations. A more in-depth investigation of such global conditions is an interesting direction for future work.
In this paper, we only consider the filtering of face normals and texture colors on mesh surfaces. But our formulation is general enough to allow for other scenarios. In the future, we would like to extend the filter to other geometry signals such as curvatures and shape operators, and to other geometric representations such as point clouds and implicit surfaces.
We thank Yang Liu for providing the implementation of RGNF. The Welsh Dragon mesh model was released by Bangor University, UK, for Eurographics 2011. This work was supported by the National Key R&D Program of China (No. 2016YFC0800501), the National Natural Science Foundation of China (No. 61672481, No. 61672482 and No. 11626253), and the One Hundred Talent Project of the Chinese Academy of Sciences.
-  C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” ser. ICCV ’98, 1998.
-  E. Eisemann and F. Durand, “Flash photography enhancement via intrinsic relighting,” ACM Trans. Graph., vol. 23, no. 3, pp. 673–678, 2004.
-  G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph., vol. 23, no. 3, pp. 664–672, 2004.
-  H. Cho, H. Lee, H. Kang, and S. Lee, “Bilateral texture filtering,” ACM Trans. Graph., vol. 33, no. 4, pp. 128:1–128:8, 2014.
-  Q. Zhang, X. Shen, L. Xu, and J. Jia, “Rolling guidance filter,” in Computer Vision–ECCV 2014. Springer, 2014, pp. 815–830.
-  S. Fleishman, I. Drori, and D. Cohen-Or, “Bilateral mesh denoising,” ACM Trans. Graph., vol. 22, no. 3, 2003.
-  T. R. Jones, F. Durand, and M. Desbrun, “Non-iterative, feature-preserving mesh smoothing,” ACM Trans. Graph., vol. 22, no. 3, pp. 943–949, 2003.
-  Y. Zheng, H. Fu, O.-C. Au, and C.-L. Tai, “Bilateral normal filtering for mesh denoising,” IEEE Trans. Vis. Comput. Graphics, vol. 17, no. 10, pp. 1521–1530, 2011.
-  J. Solomon, K. Crane, A. Butscher, and C. Wojtan, “A general framework for bilateral and mean shift filtering,” arXiv preprint arXiv:1405.4734, 2014.
-  W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu, “Guided mesh normal filtering,” Comput. Graph. Forum, vol. 34, no. 7, pp. 23–34, 2015.
-  P. Wang, X. Fu, Y. Liu, X. Tong, S. Liu, and B. Guo, “Rolling guidance normal filter for geometric processing,” ACM Trans. Graph., vol. 34, no. 6, p. 173, 2015.
-  B. Ham, M. Cho, and J. Ponce, “Robust image filtering using joint static and dynamic guidance,” in CVPR, 2015.
-  G. Taubin, “A signal processing approach to fair surface design,” ser. SIGGRAPH ’95, 1995, pp. 351–358.
-  M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” ser. SIGGRAPH ’99, 1999, pp. 317–324.
-  S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, Bilateral filtering: Theory and applications. Now Publishers Inc, 2009.
-  T. Jones, F. Durand, and M. Zwicker, “Normal improvement for point rendering,” IEEE Computer Graphics and Applications, vol. 24, no. 4, pp. 53–56, 2004.
-  C. C. Wang, “Bilateral recovering of sharp edges on feature-insensitive sampled meshes,” IEEE Trans. Vis. Comput. Graphics, vol. 12, no. 4, pp. 629–639, 2006.
-  J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph., vol. 26, no. 3, p. 96, 2007.
-  L. Xu, C. Lu, Y. Xu, and J. Jia, “Image smoothing via ℓ0 gradient minimization,” ACM Trans. Graph., vol. 30, no. 6, pp. 174:1–174:12, 2011.
-  L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D, vol. 60, no. 1-4, pp. 259–268, 1992.
-  G. Taubin, “Introduction to geometric processing through optimization,” IEEE Computer Graphics and Applications, vol. 32, no. 4, pp. 88–94, 2012.
-  L. He and S. Schaefer, “Mesh denoising via ℓ0 minimization,” ACM Trans. Graph., vol. 32, no. 4, pp. 64:1–64:8, 2013.
-  H. Zhang, C. Wu, J. Zhang, and J. Deng, “Variational mesh denoising using total variation and piecewise constant function space,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 7, pp. 873–886, 2015.
-  P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629–639, 1990.
-  J. Sun, M. Ovsjanikov, and L. Guibas, “A concise and provably informative multi-scale signature based on heat diffusion,” in Proceedings of the Symposium on Geometry Processing, 2009, pp. 1383–1392.
-  B. Vallet and B. Lévy, “Spectral geometry processing with manifold harmonics,” Computer Graphics Forum, vol. 27, no. 2, pp. 251–260, 2008.
-  H. Zhang, O. Van Kaick, and R. Dyer, “Spectral mesh processing,” Computer Graphics Forum, vol. 29, no. 6, pp. 1865–1894, 2010.
-  R. Wang, Z. Yang, L. Liu, J. Deng, and F. Chen, “Decoupling noise and features via weighted ℓ1-analysis compressed sensing,” ACM Trans. Graph., vol. 33, no. 2, pp. 18:1–18:12, 2014.
-  X. Sun, P. L. Rosin, R. R. Martin, and F. C. Langbein, “Fast and effective feature-preserving mesh denoising,” IEEE Trans. Vis. Comput. Graphics, vol. 13, no. 5, pp. 925–938, 2007.
-  F. Pukelsheim, “The three sigma rule,” The American Statistician, vol. 48, no. 2, pp. 88–91, 1994.
-  S. Bouaziz, M. Deuss, Y. Schwartzburg, T. Weise, and M. Pauly, “Shape-up: Shaping discrete geometry with projections,” Computer Graphics Forum, vol. 31, no. 5, pp. 1657–1667, 2012.
-  G. Guennebaud, B. Jacob et al., “Eigen v3,” http://eigen.tuxfamily.org, 2010.
-  L. Váša and V. Skala, “A perception correlated comparison method for dynamic meshes,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 2, pp. 220–230, 2011.