Static/Dynamic Filtering for Mesh Geometry

Filtering, the process of modifying signals to achieve desirable properties, has become a fundamental tool in many application areas. In image processing, for example, various filters have been developed to smooth images while preserving sharp edges. Among them, the bilateral filter [1] updates an image pixel using a weighted average of nearby pixels, taking into account their spatial and range differences. Its simplicity and effectiveness have made it popular in image processing and have inspired various follow-up works with improved performance [2].

Figure 1: Our SD filter can be used for scale-aware filtering of mesh geometry, allowing us to separate geometry signals according to their scales. Such decomposition can be used for manipulating geometric details, boosting or attenuating features at different scales.

Besides image processing, filtering techniques have also been utilized for processing 3D geometry. Indeed, many geometric descriptors such as normals and vertex positions can be considered as signals defined on two-dimensional manifold surfaces, where image filtering methods can be naturally extended and applied. For example, the bilateral filter has been adapted for feature-preserving mesh smoothing and denoising [6].

Development of new geometry filters has also been inspired by other techniques that improve upon the original bilateral filter. Among them, the joint bilateral filter [2] determines the filtering weights using the information from a guidance image instead of the input image, and achieves more robust filtering results when the guidance provides reliable structural information. One limitation of this approach is that the guidance image has to be specified beforehand and remains static during the filtering process. For image texture filtering, Cho et al. [4] address this issue by computing the guidance using a patch-based approach that reliably captures the image structure. This idea was later adopted by Zhang et al. [10] for mesh denoising, where a patch-based guidance is computed for filtering the face normals. Another improvement to the joint bilateral filter is the rolling guidance filter proposed by Zhang et al. [5], which iteratively updates an image using the previous iterate as a dynamic guidance and is able to separate signals at different scales. Recently, this approach was adapted by Wang et al. [11] to derive a rolling guidance normal filter (RGNF), with impressive results for scale-aware geometric processing.

For guided filtering, the use of static versus dynamic guidance presents a trade-off. Static guidance enables direct and intuitive control over the filtering process, but is not trivial to construct a priori for general shapes. Dynamic guidance, such as the one used in RGNF, is automatically updated according to the current signal values, but can be less robust when there are outliers or noise in the input signal. Recently, Ham et al. [12] combined static and dynamic guidance for robust image filtering. Inspired by their work, we propose in this paper a new approach for filtering signals defined on mesh surfaces that utilizes both static and dynamic guidance. The filtered signal is computed by minimizing a target function that enforces consistency of signal values within each neighborhood, while incorporating structural information provided by a static guidance. To solve the resulting nonconvex optimization problem, we develop an efficient fixed-point iteration solver, which significantly outperforms the majorization-minimization (MM) algorithm proposed in [12] for similar problems. Moreover, unlike the MM algorithm, our solver can handle constraints such as unit length for face normals, which are important for geometry processing problems. Our solver iteratively updates the signal values by combining the original signal with the current signal from a spatial neighborhood. The combination weights are determined according to the static input guidance as well as a dynamic guidance derived from the current signal. The proposed method, called static/dynamic (SD) filtering, benefits from both types of guidance and produces scale-aware and feature-preserving results.

The proposed method can be applied to different signals on mesh surfaces. When applied to face normals followed by vertex updates, it filters geometric features according to their scales. When applied to mesh colors obtained from texture mapping, it filters the texture image based on the metric on the mesh surface. In addition, utilizing the scale-awareness of the filter, we apply it repeatedly to separate signal components of different scales; the resulting components can be combined according to user-specified weights, allowing for intuitive feature manipulation and enhancement. Extensive experimental results demonstrate the efficiency and effectiveness of our filter. We also release the source code to ensure reproducibility.

In addition, we propose a new method for vertex update according to face normals, using a nonlinear optimization formulation that enforces the face normal conditions while preserving local triangle shapes. The vertex positions are computed by iteratively solving a linear system with a fixed sparse positive definite matrix, which is done efficiently via pre-factorization of the matrix. Compared with existing approaches, our method produces meshes that are more consistent with the filtered face normals.

In summary, our main contributions include:

  • we extend the work of Ham et al. [12] and propose an SD filter for signals defined on triangular meshes, formulated as an optimization problem;

  • we develop an efficient fixed-point iteration solver for the SD filter, which significantly outperforms the MM algorithm from [12] and is able to handle constraints such as unit normals;

  • we propose an efficient approach for updating vertex positions according to filtered face normals, which produces new meshes that are consistent with the target normals while preserving local triangle shapes;

  • based on the SD filter, we develop a method to separate and combine signal components of different scales, enabling intuitive feature manipulation for mesh geometry and texture color.

1 Related Work

In the past, various filtering approaches have been proposed to process mesh geometry. Early work by Taubin [13] and Desbrun et al. [14] applied low-pass filters to meshes, which remove high-frequency noise but also attenuate sharp features. To better preserve features, more sophisticated image filtering techniques such as the bilateral filter [1] were adapted to mesh domains. On images, the bilateral filter updates a pixel using a weighted average of its neighboring pixels, with larger contributions from pixels that are closer in the spatial or range domain. It can smooth images while preserving edges where there is a large difference between neighboring pixel values [15]. Different methods have been developed to adapt the bilateral filter to mesh geometry. Fleishman et al. [6] and Jones et al. [7] applied the bilateral filter to feature-preserving mesh denoising by treating vertex positions as geometry signals. Later, Zheng et al. [8] performed mesh denoising by applying the bilateral filter to mesh face normals instead, followed by a vertex position update to reconstruct the mesh shape. Recently, Solomon et al. [9] proposed a framework for the bilateral filter that is applicable to signals on general domains, including images and meshes, with a rigorous theoretical foundation. Besides denoising, bilateral filtering has also been applied to other geometry processing applications such as point cloud normal enhancement [16] and mesh feature recovery [17].

The bilateral filter has inspired a large amount of follow-up work on image filtering. Among them, the joint bilateral filter [2] extends the original bilateral filter by evaluating the range kernel using a guidance image. It can produce more reliable results when the guidance image correctly captures the structural information of the target signal. This property was utilized by Eisemann & Durand [2] and Petschnigg et al. [3] to filter no-flash photos, using the corresponding flash photos as the guidance. Kopf et al. [18] and Cho et al. [4] applied the joint bilateral filter to image upsampling and structure-preserving image decomposition, respectively. In particular, [4] constructed a patch-based guidance to capture the structure of the input image. This idea was later adopted in [10] for filtering mesh face normals, where the guidance normals are computed using surface patches with the most consistent normals. Zhang et al. [5] proposed a different approach to guidance construction in their iterative rolling guidance filter, where the resulting image from one iteration is used as a dynamic guidance for the next iteration. The rolling guidance filter produces impressive results for scale-aware image processing and is able to filter out features according to their scales. Wang et al. [11] adapted this approach to filter mesh face normals; the resulting rolling guidance normal filter enables scale-aware processing of geometric features, but is sensitive to noise on the input model. Recently, Ham et al. [12] proposed a robust image filtering technique based on an optimization formulation that involves a nonconvex regularizer. Their technique is effectively an iterative filter that incorporates both static and dynamic guidance, and achieves superior results in terms of robustness, feature preservation, and scale-awareness. Our SD filter is based on a similar optimization formulation, but takes into account the larger filtering neighborhoods that are necessary for geometry signals. It enjoys the same desirable properties as its counterpart in image processing. In addition, the numerical solver proposed in [12] can only handle unconstrained signals, and is less efficient for the large neighborhoods used in our formulation. We therefore propose a new solver that outperforms the one from [12], while allowing for constrained signals such as unit normals.

Besides filtering approaches, feature-preserving signal smoothing can also be achieved via direct optimization. Notable examples include image smoothing algorithms that induce sparsity of the resulting image gradients via \ell_0-norm [19] or total variation (\ell_1-norm) [20] regularization. These approaches were later adapted for mesh smoothing and denoising [21]. Although effective in many cases, their optimization formulations only regularize the signal difference between immediately neighboring faces. In comparison, our optimization compares signals within a neighborhood of user-specified size, which provides more flexibility and achieves better preservation of large-scale features.

From a signal processing point of view, meshes can be seen as a combination of signals with multiple frequency bands, which also relates to scale-space analysis [24]. Previous works separate geometry signals of different frequencies using eigenfunctions of the heat kernel [25] or the Laplace operator [26]. Although developed with sound theoretical foundations, such approaches are computationally expensive. Moreover, as specific geometric features can span a wide range of frequencies, it is not easy to preserve or manipulate them with such approaches. The recent work of Wang et al. [28] provides an efficient way to separate and edit geometric features of different scales, harnessing the scale-aware property of the rolling guidance filter. Our SD filter also supports scale-aware processing of geometry signals, with more robustness than RGNF thanks to the incorporation of both static and dynamic guidance.

2 The SD Filter

The SD filter was originally proposed by Ham et al. [12] for robust image processing. Given an input image and a static guidance image, they compute an output image by minimizing a target function that consists of a fidelity term and a regularization term, balanced by a user-specified weight; the regularizer compares each pixel of the output with its 8-connected neighbors.

The fidelity term requires the output image to be close to the input image, while the second term regularizes the output image. The penalty function \psi_{\nu} (see Figure 2) penalizes the difference between adjacent pixels, but with a bounded penalty for pixel pairs with a large difference, which correspond to edges or outliers. As \nu approaches 0, \psi_{\nu} approaches the \ell_{0} norm. A Gaussian weight function evaluated on the guidance gives a higher weight to pixel pairs with more similar guidance values. Thus the regularizer promotes smooth regions and preserves sharp features according to the guidance, while being robust to outliers.
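As a concrete sketch in notation assumed here (the original symbols may differ): with u_p, f_p, g_p the output, input, and guidance values at pixel p, \lambda the user-specified weight, and \mathcal{N} the set of 8-connected pixel pairs, the energy takes a form such as

    \sum_{p} (u_p - f_p)^2 \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} \mu(g_p - g_q)\, \psi_{\nu}(u_p - u_q),

where \mu is a Gaussian weight, e.g. \mu(x) = \exp(-x^2 / (2\sigma^2)), and \psi_{\nu} is a bounded nonconvex penalty such as the Welsch-type choice \psi_{\nu}(x) = 1 - \exp(-x^2 / (2\nu^2)), which approaches the \ell_{0} norm as \nu \to 0.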

Figure 2: The graphs of function \psi_{\nu}(x) with different parameters. As \nu decreases, the function \psi_{\nu} approaches the \ell_{0} norm.

In this paper, we propose an SD filter for signals defined on 2-manifold surfaces represented as triangular meshes. We begin our discussion with filtering face normals, a common approach for smoothing mesh geometry [29].

2.1 SD filter for face normals

For a given orientable triangular mesh, the oriented unit normal of each face is computed as the normalized cross product of two of its edge vectors, with the vertices enumerated according to the orientation. We associate each face normal with the centroid of its face. We would like to filter the face normals, and then update the mesh vertices according to the filtered normals. To define an SD filter for the face normals, we must consider some major differences compared with image filtering:

  • Image pixels are located on a regular grid, but mesh faces may result from irregular sampling of the surface.

  • To smooth an image, the SD filter described above only considers the difference between a pixel and its eight neighboring pixels. On meshes, however, geometric features can span a large region, so we may need to compare face normals beyond one-ring neighborhoods [11]. Moreover, similar to the bilateral filter, such comparison should take the spatial locations into account, with a stronger penalty for normal deviation between faces that are closer to each other.

Therefore, we compute the filtered normals by minimizing a target function that combines a fidelity term and a regularization term, balanced by a user-specified weight. The fidelity term measures the deviation between the input and output normals, with the contribution of each face weighted by its area on the input mesh. The regularization term compares, within each face neighborhood, the output normal of a face with those of its neighbors; each comparison is weighted by a Gaussian of the difference between the corresponding guidance normals and by a spatial Gaussian of the distance between the face centroids, with the Gaussian standard deviation parameters controlled by the user. Compared with the image regularizer, this formulation introduces a Gaussian weight for the spatial locations of the face normals, so that normal deviation between faces that are closer to each other is penalized more strongly. The spatial weight is defined according to the Euclidean distance between face centroids because of the simplicity of its computation, but other distance measures such as the geodesic distance can also be used. For each face, its neighborhood is chosen to be the set of faces with a significant value of the spatial weight: following the empirical three-sigma rule [30], we include the faces whose centroid distance to the face is within three times the spatial Gaussian standard deviation, which can be found using a breadth-first search from the face.
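As a concrete sketch in notation assumed here (it may differ from the original symbols): write \mathbf{n}_i for the output normal of face i, \hat{\mathbf{n}}_i for its input normal, \mathbf{g}_i for its guidance normal, A_i for its area, \mathbf{c}_i for its centroid, \mathcal{N}(i) for its neighborhood, \lambda for the regularizer weight, and \sigma_s, \sigma_r for the spatial and guidance Gaussian parameters. The energy then takes a form such as

    E_{\textrm{SD}} = \sum_i A_i \,\|\mathbf{n}_i - \hat{\mathbf{n}}_i\|^2 \;+\; \lambda \sum_i \sum_{j \in \mathcal{N}(i)} \exp\!\Big(\!-\tfrac{\|\mathbf{c}_i - \mathbf{c}_j\|^2}{2\sigma_s^2}\Big)\, \exp\!\Big(\!-\tfrac{\|\mathbf{g}_i - \mathbf{g}_j\|^2}{2\sigma_r^2}\Big)\, \psi_{\nu}\big(\|\mathbf{n}_i - \mathbf{n}_j\|\big),
    \qquad
    \mathcal{N}(i) = \{\, j \neq i : \|\mathbf{c}_i - \mathbf{c}_j\| < 3\sigma_s \,\},

where \psi_{\nu} is the bounded penalty from Figure 2 (e.g., the Welsch-type choice \psi_{\nu}(x) = 1 - \exp(-x^2 / (2\nu^2)), which tends to the \ell_{0} norm as \nu \to 0); any area weighting inside the regularizer is omitted in this sketch.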

The target function is nonconvex because of , and needs to be minimized numerically. In the following, we first show how the majorization-minimization (MM) algorithm proposed by [12] can be extended to solve this problem. Afterwards, we propose a new fixed-point iteration solver that significantly outperforms the MM algorithm and is suitable for interactive applications.

MM algorithm.

For the SD image filter, Ham et al. [12] proposed a majorization-minimization (MM) algorithm to iteratively minimize the target function. In each iteration, the target function is replaced by a convex surrogate that bounds it from above and is computed from the current variable values; this surrogate is then minimized to update the variables. The MM solver is guaranteed to converge to a local minimum of the target function. A straightforward way to minimize our new target function is therefore to employ the MM algorithm: given the variable values at the current iteration, we replace each nonconvex penalty term in the target function by its convex quadratic surrogate, and compute the updated variable values by solving the resulting convex problem.

Due to the symmetry of the neighboring relation between faces (i.e., face j lies in the neighborhood of face i whenever face i lies in the neighborhood of face j), this convex problem amounts to solving a linear system in the stacked face normals. The system matrix is symmetric, with diagonal elements collecting the fidelity weight and the surrogate weights of each face, and off-diagonal elements given by the negated surrogate weights between neighboring faces. This matrix is diagonally dominant and symmetric positive definite, and the system can be solved using standard linear algebra routines.
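For intuition, a minimal sketch of the majorization step under the Welsch-type penalty assumed above (or any penalty that is concave as a function of x^2): at the current value x_0, the penalty is bounded from above by the quadratic tangent

    \psi_{\nu}(x) \;\le\; \psi_{\nu}(x_0) + \frac{\psi_{\nu}'(x_0)}{2 x_0}\,\big(x^2 - x_0^2\big),

with equality at x = x_0. Substituting this bound for every regularizer term turns the surrogate into a quadratic function of the normals, whose minimization gives the sparse symmetric positive definite system described above; the coefficient \psi_{\nu}'(x_0) / (2 x_0) plays the role of the dynamic, iteration-dependent weight.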

Fixed-point iteration solver.

Although the MM algorithm works well on images, its performance on meshes is often unsatisfactory: due to the larger face neighborhoods, the MM linear system matrix contains a large number of nonzeros, resulting in a long computation time for each iteration. In the following, we propose a more efficient solver that is suitable for interactive applications. Note that a local minimum of the target function must satisfy the first-order optimality condition with respect to each face normal. These optimality conditions can be solved using a fixed-point iteration, in which the updated normal of a face is a convex combination of its initial normal and the current normals of the faces in its neighborhood. The convex combination coefficient for a neighboring face normal depends on both the (static) difference between the guidance normals of the two faces and the (dynamic) difference between their current normals, hence the name static/dynamic filter. Moreover, there is an interesting connection between the fixed-point iteration and the MM algorithm: it can be shown that one fixed-point update is a single step of Jacobi iteration for solving the MM linear system.
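In the assumed notation from above, the resulting update has the form

    \mathbf{n}_i \;\leftarrow\; \frac{ A_i\, \hat{\mathbf{n}}_i + \lambda \sum_{j \in \mathcal{N}(i)} w_{ij}\, \mathbf{n}_j }{ A_i + \lambda \sum_{j \in \mathcal{N}(i)} w_{ij} },
    \qquad
    w_{ij} = \exp\!\Big(\!-\tfrac{\|\mathbf{c}_i - \mathbf{c}_j\|^2}{2\sigma_s^2}\Big)\, \exp\!\Big(\!-\tfrac{\|\mathbf{g}_i - \mathbf{g}_j\|^2}{2\sigma_r^2}\Big)\, \exp\!\Big(\!-\tfrac{\|\mathbf{n}_i - \mathbf{n}_j\|^2}{2\nu^2}\Big),

where the normals on the right-hand side are taken from the current iterate; the exact constant factors are a sketch, but the structure is a convex combination whose weights multiply a static guidance Gaussian with a dynamic Gaussian of the current normal difference, as described above.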

Figure 3: The change of target energy E_{\textrm{SD}} for the Gargoyle model with respect to the computational time, using the fixed-point iteration solver  for 100 iterations and the MM algorithm for 5 iterations, respectively. The fixed-point iteration solver takes much shorter time per iteration, and reduces the energy to a value close to the solution within a fraction of the time for one MM iteration.

It can be shown that the fixed-point iteration monotonically decreases the target function value until it converges to a local minimum; a proof is provided in the supplementary material. Moreover, the face normal updates are trivially parallelizable, which enables significant speedup on GPUs and multi-core CPUs. Our fixed-point solver is significantly faster than the MM algorithm, as shown in Figure 3, where we compare the change of the target energy with respect to the computational time for the two solvers. Here the spatial Gaussian parameter is set to three times the average distance between neighboring face centroids of the mesh. To achieve the best performance for the MM solver, we tested two strategies for solving the MM linear system: 1) pre-computing the symbolic Cholesky factorization of the system matrix and performing the numerical factorization in each iteration, with the three right-hand sides for the x-, y-, and z-coordinates solved in parallel; and 2) the conjugate gradient method with parallel sparse matrix-vector multiplication. Due to the large neighborhood size, the MM system matrix has a large number of nonzeros in each row, and the Cholesky factorization approach is much more time-consuming than conjugate gradient; we therefore only show the timing of the conjugate gradient variant. We can see that the fixed-point solver is much more efficient than the MM algorithm, drastically reducing the energy to a value close to the solution within a fraction of the computational time of one MM iteration. This behavior is observed in our experiments with other models as well; detailed results are provided in the supplementary material.

Figure 4: The change of the modified target energy \overline{E}_{\textrm{SD}} with respect to the number of iterations, using our fixed-point iteration solver  for the cube model in Fig. . The solver rapidly decreases the energy within a small number of iterations.
Figure 5: Comparison between our vertex update method and the Poisson-based update method from [11], based on the same original mesh (left) and target normals. Top center: we align the centroid of each resulting mesh (in blue) with the centroid of the original mesh (in yellow) to minimize the \ell_2 norm of the deviation between their vertices; the Poisson-based method leads to a larger deviation from the original mesh. Bottom center: the deviation between individual vertices after the centroid alignment is visualized via color coding. Right: we evaluate the deviation between the resulting normals and target normals for each face, and visualize its distribution across the mesh via a histogram (top right) and color coding (bottom right).

Enforcing unit normal constraints.

The target function defined above adapts the SD filter to mesh face normals in a straightforward way, but it fails to account for the requirement that all normals should lie on the unit sphere. In fact, starting from the unit face normals of the input mesh, the energy can be decreased by simply shrinking the normals without changing their directions, and the filtered normals can end up with different lengths across the mesh. In other words, without the requirement of unit normals, the difference between two normal vectors is not a reliable measure of the deviation between their directions, which makes the range parameter less effective for controlling the filter. To resolve this issue, we derive a new target function by substituting each normal vector in the energy with its normalization. However, such normalization increases the nonlinearity of the problem and makes its numerical solution more challenging. The MM algorithm is no longer applicable, because the quadratic surrogate bound no longer holds. On the other hand, the fixed-point iteration solver can be slightly modified to minimize the new target function efficiently. At a local minimum, the first-order optimality condition involves, for each face, the projection onto the subspace orthogonal to its unit normal; geometrically, it states that the convex combination of the initial normal and the current neighboring normals must be parallel to the face's unit normal. Therefore, we can update each normal by forming the same combination as before and renormalizing it to unit length; compared with the previous iteration format, this simply adds a normalization step after each iteration.
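In the same assumed notation, the modified update simply renormalizes the combined vector:

    \mathbf{n}_i \;\leftarrow\; \frac{ A_i\, \hat{\mathbf{n}}_i + \lambda \sum_{j \in \mathcal{N}(i)} w_{ij}\, \mathbf{n}_j }{ \big\| A_i\, \hat{\mathbf{n}}_i + \lambda \sum_{j \in \mathcal{N}(i)} w_{ij}\, \mathbf{n}_j \big\| },

with the weights w_{ij} evaluated on the normalized current normals.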

Similar to the previous fixed-point iteration format, the new solver is embarrassingly parallel, and it rapidly decreases the target function within a small number of iterations (see Figure 4). In the following, all examples of SD normal filtering are processed using this solver.
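As an illustration, a minimal sketch of one parallel sweep of this solver in C++ with Eigen, assuming the Gaussian weights sketched above (the function and variable names are illustrative, not the released implementation):

    // Minimal sketch of one parallel fixed-point sweep for unit face normals.
    // Assumed inputs: input normals N0, current normals N, guidance normals G,
    // face centroids C, face areas A, precomputed neighbor lists nbr, and the
    // parameters lambda, sigma_s, sigma_r, nu (all names are illustrative).
    #include <vector>
    #include <cmath>
    #include <Eigen/Dense>

    using Vec3 = Eigen::Vector3d;

    void fixedPointSweep(const std::vector<Vec3>& N0, const std::vector<Vec3>& G,
                         const std::vector<Vec3>& C, const std::vector<double>& A,
                         const std::vector<std::vector<int>>& nbr,
                         double lambda, double sigma_s, double sigma_r, double nu,
                         std::vector<Vec3>& N)  // updated in place via a copy
    {
        std::vector<Vec3> Nnew(N.size());
        #pragma omp parallel for
        for (int i = 0; i < static_cast<int>(N.size()); ++i) {
            Vec3 acc = A[i] * N0[i];  // contribution of the initial normal
            for (int j : nbr[i]) {
                double ws = std::exp(-(C[i] - C[j]).squaredNorm() / (2.0 * sigma_s * sigma_s));
                double wg = std::exp(-(G[i] - G[j]).squaredNorm() / (2.0 * sigma_r * sigma_r)); // static guidance
                double wr = std::exp(-(N[i] - N[j]).squaredNorm() / (2.0 * nu * nu));           // dynamic guidance
                acc += lambda * ws * wg * wr * N[j];
            }
            Nnew[i] = acc.normalized();  // unit-length constraint via renormalization
        }
        N.swap(Nnew);
    }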

Vertex update.

After the face normals are filtered, the mesh vertices need to be updated accordingly. Many existing methods compute the new vertex positions by enforcing orthogonality between the new edge vectors and the target face normals [29]. Although very efficient, such methods can result in a large number of flipped triangles, because the orthogonality constraint is still satisfied if the updated face normal is opposite to the target one. To address this issue, we propose a new update method that optimizes the vertex positions by directly enforcing the oriented normals as soft constraints, in the same way as [31]. Specifically, for each face with a target oriented unit normal, we define the feasible set of its vertex positions as those for which the resulting oriented unit normal equals the target one. The new vertex positions are then determined by solving an optimization problem that combines a closeness term with per-face projection terms onto these feasible sets.
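A sketch of this optimization in notation assumed here: let \mathbf{V} stack the new vertex positions, \hat{\mathbf{V}} the original positions, \mathbf{V}_i the 3\times 3 matrix of vertex positions of face i (one per row), \mathbf{W} the mean-centering matrix, \mathbf{D}_i the auxiliary projection variables, C_i the feasible set of face i, \sigma_{C_i} its indicator function, and w the closeness weight. One consistent way to write the problem is

    \min_{\mathbf{V},\, \{\mathbf{D}_i\}} \;\; w\, \|\mathbf{V} - \hat{\mathbf{V}}\|_F^2 \;+\; \sum_i \Big( \|\mathbf{W}\mathbf{V}_i - \mathbf{D}_i\|_F^2 + \sigma_{C_i}(\mathbf{D}_i) \Big).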

Here the first term penalizes the deviation between the new vertex positions and the original vertex positions, measured in the Frobenius norm and scaled by a user-specified positive weight, which is set to 0.001 by default. For each face, a matrix stores its three vertex positions in its rows, and a mean-centering matrix subtracts their mean, producing the mean-centered vertex positions of the face. The auxiliary variables represent the closest projection of these mean-centered positions onto the feasible set, and the indicator function of the feasible set takes the value zero inside the set and infinity outside it. The second term of the target function thus penalizes the violation of the oriented normal constraint for each face, using the squared Euclidean distance to the feasible sets. The use of the mean-centering matrix exploits the translation invariance of the oriented normal constraint, which allows for faster convergence of the solver [31]. Overall, this optimization problem searches for new vertex positions that satisfy the oriented normal constraints as much as possible, while being close to the original positions. It is solved via alternating minimization of the vertex positions and the auxiliary variables, following the approach of [31]:

Figure 6: SD filtering of texture image, which can smooth out features based on their scales on the mesh surface.
  • First, we fix the vertex positions and update the auxiliary variables. This reduces to a set of separable subproblems, each projecting the current mean-centered vertex positions of a face onto the corresponding feasible set; that is, we look for the vertex positions that are closest to the current mean-centered positions while achieving the target oriented unit normal. Note that for the oriented normal condition to hold, the solution must lie on a plane orthogonal to the target normal. Moreover, as the mean of the three current mean-centered vertex positions is at the origin, it can be shown that the mean of the solution must also lie at the origin. As a result, the solution must lie on a plane that passes through the origin and is orthogonal to the target normal, and the closest projection of the current positions onto this plane is obtained by removing their components along the target normal.

    Depending on the relation between the oriented unit normal of the current vertex positions and the target normal, we have two possible solutions.

    1. If the oriented unit normal of the in-plane projection equals the target normal, then the projection itself is the solution.

    2. Otherwise, the oriented unit normal of the in-plane projection is opposite to the target normal. In this case, the solution degenerates to three collinear points that lie in the plane and minimize the distance to the projected positions; it is obtained by projecting the in-plane positions onto the direction of the right-singular vector of the projected position matrix that corresponds to its largest singular value.

    The subproblem for each face is independent and can be solved in parallel.

  • Next, we fix the auxiliary variables and update the vertex positions. This is equivalent to a quadratic problem in which a sparse matrix collects the mean-centering coefficients of all faces, and it amounts to solving a sparse positive definite linear system that involves the identity matrix and the stacked mean-centering matrices. The three right-hand sides of the system correspond to the x-, y-, and z-coordinates and can be solved in parallel. Moreover, the system matrix is fixed during all iterations, so we can pre-compute its Cholesky factorization to allow for efficient solving in each iteration.

The above alternating minimization is repeated until convergence. We use 20 iterations in all our experiments, which is sufficient to achieve nice results.
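As an illustration of the pre-factorization strategy, here is a minimal C++ sketch with Eigen, assuming the formulation sketched earlier, i.e., a fixed system matrix of the form w I + A^T A (the matrix layout and weight placement are assumptions for illustration, not the released code):

    // Sketch of the global step of the alternating vertex update.
    // Assumed system: (w*I + A^T A) V = w*V0 + A^T D, with A stacking the
    // per-face mean-centering rows and D the per-face projections.
    #include <Eigen/Sparse>
    #include <Eigen/SparseCholesky>

    using SpMat = Eigen::SparseMatrix<double>;

    struct VertexUpdater {
        Eigen::SimplicialLDLT<SpMat> solver;  // Cholesky-type factorization, computed once
        SpMat A;                              // stacked mean-centering matrices
        double w;                             // closeness weight (e.g., 0.001)
        Eigen::MatrixXd V0;                   // original vertex positions, one row per vertex

        void prefactorize() {
            SpMat I(A.cols(), A.cols());
            I.setIdentity();
            SpMat At = A.transpose();
            SpMat M = w * I + At * A;         // fixed sparse SPD system matrix
            solver.compute(M);
        }

        // One global step: D stacks the projections computed in the local step.
        Eigen::MatrixXd solve(const Eigen::MatrixXd& D) const {
            // The three right-hand sides (x, y, z) are solved together.
            return solver.solve(w * V0 + A.transpose() * D);
        }
    };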

Our vertex update method can efficiently compute a new mesh that is consistent with the target face normals while remaining close to the original mesh shape. Figure 5 compares our approach with the vertex update method proposed in [11], which also avoids flipped triangles. Given the target oriented normal for each face, that method first rotates the current face to align its oriented normal with the target normal; the new vertex positions are then computed by matching the new face gradients with the rotated ones in a least-squares manner, by solving a Poisson linear system. For each method, we evaluate the deviation between the resulting mesh and the original mesh by aligning their centroids to minimize the \ell_2 norm of their vertex deviation (shown in the top center of Figure 5), and then visualizing the deviation of each vertex via color coding. The resulting mesh using our method is noticeably closer to the original mesh, as we explicitly enforce closeness in our target function. This is desirable for many applications such as mesh denoising. In addition, we compute the deviation between the resulting face normals and the target normals, and visualize its distribution using a histogram as well as color coding on the mesh surface. Our method leads to a smaller deviation between the target and resulting normals. Although the computational time of our method (0.4313 seconds) is higher than that of the method from [11] (0.1923 seconds), the difference is insignificant compared with the total filtering time (see Table 1).

2.2 SD filter for texture colors

Our SD filter can be applied not only to face normals but also to other signals defined on mesh surfaces. One example is RGB colors from texture mapping. Given a texture image associated with a triangular mesh, we can use the texture coordinates to identify each pixel that gets mapped to the surface, as well as its mapped position on the mesh. Given guidance colors for these pixels, we compute the filtered texture colors by minimizing the same target function as for face normals, with the normals replaced by the pixel colors and with each pixel's neighborhood determined according to the mapped 3D positions. In this way, the texture image is filtered according to the metric on the mesh surface instead of the distance on the image plane. The optimization problem is solved using an unconstrained fixed-point iteration similar to the one for normals. Figure 6 shows some examples of texture color filtering.

3 Results and Applications

In this section, we use a series of examples to demonstrate the efficiency and effectiveness of our SD filter, as well as its various applications. We also compare the SD filter with related methods, including \ell_0 optimization [22] and the rolling guidance normal filter (RGNF) [11].

Figure 7: By repeatedly applying our SD normal filter with different parameters, we can gradually remove the geometry features of increasing scales. Detailed parameter values can be found in the supplemental material.
Figure 8: Comparison between the SD normal filter, RGNF, and \ell_0 optimization, using a cube shape with added features on each side. The colormap shows the deviation between the results and the original cube shape. The result from the SD filter is the closest to the cube.

Implementation.

Our algorithm is implemented in C++, using the Eigen library [32] for all linear algebra operations. For the filtering of face normals, we run the iterative solver until one of the following conditions is satisfied: 1) the solver reaches the maximum number of iterations, which is set to 100 for all our experiments; or 2) the area-weighted norm of the normal changes between two consecutive iterations falls below a certain threshold angle.
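One way to write the second criterion, assuming A_i is the area of face i, \theta_i the angle between its normals at two consecutive iterations, and \epsilon the threshold angle:

    \frac{\sum_i A_i\, \theta_i}{\sum_i A_i} \;<\; \epsilon.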

Figure 9: Filtering of the Knot model. The SD filter removes the features on each side of the knot and produces smooth appearance, while sharpening the edges between different sides.

We set the threshold angle to 0.2 degrees in all our experiments. For the filtering of texture colors, we run the solver for 50 iterations. Unless stated otherwise, all examples are run on a desktop PC with 16GB of RAM and a quad-core CPU at 3.6 GHz, using OpenMP for parallelization.

In all examples, the spatial Gaussian parameter is specified as a multiple of the average distance between adjacent face centroids. By default, the initial signal is used as the guidance. For more intuitive control of the optimization, we also rescale the user-specified regularizer weight according to the spatial Gaussian parameter: as this parameter increases, the integral of the spatial Gaussian over the corresponding face neighborhood also increases, and the relative scale of the regularizer term with respect to the fidelity term grows. We therefore compensate for this change of scale by rescaling the regularizer weight accordingly.
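One natural compensation consistent with this reasoning, sketched here as an assumption with \bar{l} denoting the average distance between adjacent face centroids and \sigma_s the spatial Gaussian parameter, is a quadratic rescaling

    \lambda \;\leftarrow\; \lambda \cdot \Big(\frac{\bar{l}}{\sigma_s}\Big)^{2},

since the integral of a 2D spatial Gaussian over a neighborhood of radius 3\sigma_s grows quadratically with \sigma_s.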

Figure 10: Scale-aware filtering of the Merlion model, in comparison with \ell_0 optimization and RGNF. Each row shows comparable results using the three methods, while each column shows the results from one method. The SD filter successfully removes the fine brick lines at the base, while preserving the letters on the base.
Figure 11: According to two user-selected regions (shown in red and yellow), we determine an upper bound \nu_{\max{}} and a lower bound \nu_{\min{}} for the \nu parameter, which inhibit influence between the face normals of the two regions while ensuring the smoothing of normals within each region. Within this range, a larger \nu leads to smoother results, while a smaller \nu promotes sharp features between the two selected regions. Here each row shows one pair of selected regions and the resulting meshes using different values of \nu within the corresponding range.

The source code for our SD filter is available at https://github.com/bldeng/MeshSDFilter. The parameters for the examples in this section can be found in the supplemental material.

Scale-aware and feature-preserving filtering.

The SD filter can effectively remove features based on their scales according to the user-specified parameters. This is demonstrated in Figure 7, where the input models are a sphere and a cube with additional features of different scales on the surfaces. Using different parameter settings, the SD filter gradually removes the geometry features of increasing scales, while preserving the sharp edges on the cube. Similar scale-aware and feature-preserving effects are observed for filtering of texture colors, as shown in Figure 6.

Figure 12: A larger value of \lambda or \eta leads to a smoother filtering result. Here each row shows the filtering results with increasing values of \lambda or \eta, while keeping the other parameters fixed.
Figure 13: Geometry feature manipulation and enhancement for the Armadillo model, by controlling the contribution from features of different scales. Left: the coarse-to-fine sequence of meshes M^0, M^1, M^2, M^3 obtained by repeatedly applying the SD normal filter using different parameters. Right: new meshes generated using the linearly combined target vertex positions and target normals. The combination coefficients are shown below each resulting mesh.
Figure 14: Geometry feature manipulation and enhancement for the Welsh Dragon model. Left: the coarse-to-fine sequence of meshes resulting from SD normal filtering. Right: generated new meshes and their corresponding linear combination coefficients.

In Figures 8, 9, and 10, we show more examples of scale-aware and feature-preserving filtering of mesh geometry using the SD filter, and compare the results with the \ell_0 optimization method from [22] and RGNF from [11]. We tune the parameters of each method to achieve the best results while ensuring comparable effects across the methods. In all examples, the SD filter achieves better or similar results compared with RGNF, and outperforms \ell_0 optimization. In Figure 8, the input model is a cube with additional features on each face, and the resulting mesh from the SD filter is the closest to the cube shape. RGNF leads to a result with a larger deviation from the cube shape, because its filtered signals are computed as a combination of the original signals within a neighborhood; as a consequence, when there is a large deviation between the input signals and the desired output within a certain region, RGNF may not produce a desirable result inside that region. In Figure 9, the SD filter is able to smooth out the star-shaped features on the knot surface while enhancing the sharp feature lines between different sides of the knot. Although \ell_0 optimization also enhances the feature lines, it leads to piecewise flat shapes because the \ell_0 norm promotes piecewise constant signals. In Figure 10, the three methods produce similar results on the Merlion model, while the scale-awareness of the SD filter enables it to remove the fine brick lines at the base while clearly retaining the letters.

Figure 15: Texture image feature enhancement using the SD filter.

Choice of parameters.

Our filter is influenced by four parameters: the regularizer weight \lambda, the spatial Gaussian parameter (which also determines the neighborhood size), the guidance Gaussian parameter, and the range Gaussian parameter \nu. For both \lambda and \eta, a larger value leads to a smoother result, as shown in Figure 12. The guidance and range Gaussian parameters determine which face normals within a neighborhood affect the central face in a fixed-point iteration. For more intuitive control, we propose a method to interactively set these parameters. First, the user selects two smooth regions on different sides of a feature that should be kept, and we compute the mean and variance of the normals within each region.

The range of \nu is then determined using the following strategy: \nu should be small enough that the two mean normals have negligible influence on each other according to the range Gaussian; at the same time, within each region the normals should influence each other, so that no spurious sharp features emerge inside the region. Based on this strategy, we first determine a lower bound \nu_{\min} and an upper bound \nu_{\max} for \nu.

If \nu_{\min} > \nu_{\max}, the user needs to select another pair of regions; otherwise, a value between \nu_{\min} and \nu_{\max} is chosen as the parameter \nu. In our experiments, values within this range consistently produce good results. Figure 11 shows the effects of different \nu values on the Chinese lion model.
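One consistent way to instantiate these bounds, sketched here with \bar{\mathbf{n}}_1, \bar{\mathbf{n}}_2 denoting the mean normals of the two regions and s_1, s_2 the standard deviations of the normals within them (the exact constants are an assumption, following the same three-sigma reasoning as for the spatial weights):

    \nu_{\max} = \tfrac{1}{3}\,\|\bar{\mathbf{n}}_1 - \bar{\mathbf{n}}_2\|, \qquad \nu_{\min} = \max(s_1, s_2),

so that the two mean normals lie outside each other's effective Gaussian support, while the normals within each region remain inside it.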

Feature manipulation and enhancement.

The scale-awareness of our filter enables us to manipulate mesh details according to their scales. Given an input mesh, we can apply the SD normal filter with different parameters to obtain a series of filtered meshes with fewer and fewer fine details retained. Together with the input mesh, they form a coarse-to-fine sequence, with the coarsest mesh serving as the base mesh and the original mesh as the finest. We encode the difference between two consecutive meshes by comparing their corresponding vertex positions and face normals. These differences represent the deformation required to introduce the additional details of the finer mesh into the coarser one. They can be linearly combined according to user-specified coefficients and added to the face normals and vertex positions of the base mesh, to derive the target vertex positions and target face normals for a new mesh:
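In notation assumed here, let M^0, \dots, M^K be the coarse-to-fine sequence (with M^K the original mesh), \mathbf{V}^k and \mathbf{N}^k the vertex positions and face normals of M^k, and \beta_1, \dots, \beta_K the combination coefficients. One consistent way to write the targets is

    \mathbf{V}^{\ast} = \mathbf{V}^0 + \sum_{k=1}^{K} \beta_k\,\big(\mathbf{V}^k - \mathbf{V}^{k-1}\big),
    \qquad
    \mathbf{N}^{\ast} = \mathbf{N}^0 + \sum_{k=1}^{K} \beta_k\,\big(\mathbf{N}^k - \mathbf{N}^{k-1}\big),

with each target normal renormalized to unit length before the vertex update.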

Figure 16: Local geometry feature enhancement on a human face model, by limiting the linear combination of target vertex positions and target normals to a local region of the model. The local region is annotated below each resulting model.

Note that the target vertex positions and target face normals are often incompatible, i.e., the face normals of a mesh with the target vertex positions differ from the target normals. To combine the two conditions, we determine the new mesh by solving the same optimization problem as in our vertex update step, with the closeness term now measured against the target vertex positions. In this way, the linear combination coefficients indicate the contribution of geometric features from the original model within a certain range of scales. By changing the coefficient values, a user can control geometric features according to their scales. Setting all coefficients to 1 recovers the original mesh, while setting a coefficient to a value different from 1 boosts or attenuates the features of the corresponding scales. Moreover, the linear system matrix for the optimization problem is fixed regardless of the coefficient values and only needs to be pre-factorized once. Afterwards, the user can choose any coefficient values, and the resulting mesh can be efficiently computed using the pre-factorized system. This allows the user to interactively explore different linear combination coefficients to achieve desirable results. Figures 1, 13, and 14 show examples of new meshes created in this manner. We can see that the coarse-to-fine sequence of meshes captures the geometric features of different scales, which are effectively manipulated using the linear combination coefficients. In some application scenarios, it is desirable to only modify the features within a certain region on the surface. In this case, the target vertex positions and target normals are only computed via linear combination within user-selected regions; outside these regions they remain the same as the original mesh. Figure 16 shows such an example, where a 3D human face model is locally enhanced.

Similarly, we can manipulate and enhance texture colors based on the SD filter, as shown in Figure 15. We first filter the input texture image incrementally to derive a coarse-to-fine sequence of texture images. Then a new texture image is computed via a linear combination of the differences between consecutive images, analogous to the mesh case:
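In the same assumed notation, with T^0, \dots, T^K the coarse-to-fine sequence of textures (T^K being the input) and \beta_k the coefficients:

    T^{\ast} \;=\; T^0 + \sum_{k=1}^{K} \beta_k\,\big(T^k - T^{k-1}\big),

applied per pixel and per color channel.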

Mesh denoising.

By constructing appropriate guidance signals, our SD filter can also be applied to mesh denoising, as shown in Figure 17. Here we repeatedly apply the SD normal filter to an input mesh to remove the noise. In each run of the SD normal filter, the guidance normals are computed from the current mesh using the patch-based construction approach proposed in [10]. For each model, we run the SD filter multiple times to perform denoising. The results are evaluated using the average normal deviation and the average vertex deviation from the ground-truth mesh, as proposed in [10]. In addition, similar to [9], we measure the perceptual difference between the denoised mesh and the ground truth using the spatial error term of the STED distance proposed in [33], computed with one-ring vertex neighborhoods. Figure 17 compares our denoising results with the guided mesh normal filtering (GMNF) method from [10]. The results from the two methods are quite close, with similar error metric values. Detailed parameter settings are provided in the supplementary material.

Figure 17: Comparison between denoising results using the SD normal filter and the guided mesh normal filtering (GMNF) method from [10]. The results from the two methods are close to each other, with similar error metric values.

Performance.

Using parallelization, our SD filter can compute the results efficiently. Table 1 provides representative computation times of the SD normal filter on different models, showing the timing of each part of the algorithm:

  • the pre-processing time for finding the face neighborhoods;

  • the average time of one fixed-point iteration;

  • the time for the mesh vertex update;

  • the total time for the whole filtering process.

For meshes with fewer than 100K faces and moderate neighborhood sizes, the whole process typically takes only a few seconds.

Table 1: Computational time (in seconds) for different parts of the SD normal filtering method.
Model | #Faces | Filter parameters | Pre-processing | Per iteration | Vertex update | Total
Armadillo | 43K | 2, 2.5, 1.5, 0.45 | 0.57 | 0.036 | 0.19 | 1.45
Cube | 49K | 1, 0.8, 0.4 | 2.03 | 0.15 | 0.37 | 3.42
Sphere | 60K | 100, 3, 1.5, 0.17 | 1.10 | 0.065 | 0.34 | 2.88
Duck | 68K | 10, 2, 2.5, 0.3 | 0.61 | 0.031 | 0.33 | 1.69
Knot | 100K | 1, 2, 2.5, 0.3 | 1.30 | 0.07 | 0.50 | 5.69
Gargoyle | 100K | 5, 3, 10, 0.42 | 1.90 | 0.12 | 0.35 | 8.17
Merlion | 566K | 10, 2.5, 2, 0.26 | 6.96 | 0.49 | 5.24 | 32.22
Welsh Dragon | 2.21M | 10, 1.5, 20, 0.35 | 19.85 | 0.49 | 36.17 | 71.26

4 Discussion and Conclusion

We present the SD filter for triangular meshes, which is formulated as an optimization problem with a target energy that combines a quadratic fidelity term and a nonconvex robust regularizer. We develop an efficient fixed-point iteration solver for the problem, enabling the filter to be applied for interactive applications. Our SD filter generalizes the joint bilateral filter, combining the static guidance with a dynamic guidance that is derived from the current signal values. Thanks to the joint static/dynamic guidance, the SD filter is robust, feature-preserving and scale-aware, producing state-of-the-art results for various geometry processing problems.

Although our solver can incorporate simple constraints such as unit length for normal vectors, we do not consider global conditions on the signals. For example, we do not ensure the integrability of the normals, i.e., the existence of a mesh whose face normals match the filtered results; as a consequence, some parts of the updated mesh may not be consistent with the filtered normals. Nor do we prevent self-intersections of the updated mesh. Due to the local nature of our fixed-point iteration, it is not easy to incorporate such global constraints into the solver. A possible remedy is to introduce a separate step that enforces these conditions after a few iterations. A more in-depth investigation of such global conditions is an interesting direction for future work.

In this paper, we only consider the filtering of face normals and texture colors on mesh surfaces. But our formulation is general enough to allow for other scenarios. In the future, we would like to extend the filter to other geometry signals such as curvatures and shape operators, and to other geometric representations such as point clouds and implicit surfaces.

Acknowledgments

We thank Yang Liu for providing the implementation of RGNF. The Welsh Dragon mesh model was released by Bangor University, UK, for Eurographics 2011. This work was supported by the National Key R&D Program of China (No. 2016YFC0800501), the National Natural Science Foundation of China (No. 61672481, No. 61672482 and No. 11626253), and the One Hundred Talent Project of the Chinese Academy of Sciences.

References

  1. C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” ser. ICCV ’98, 1998.
  2. E. Eisemann and F. Durand, “Flash photography enhancement via intrinsic relighting,” ACM Trans. Graph., vol. 23, no. 3, pp. 673–678, 2004.
  3. G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph., vol. 23, no. 3, pp. 664–672, 2004.
  4. H. Cho, H. Lee, H. Kang, and S. Lee, “Bilateral texture filtering,” ACM Trans. Graph., vol. 33, no. 4, pp. 128:1–128:8, 2014.
  5. Q. Zhang, X. Shen, L. Xu, and J. Jia, "Rolling guidance filter," in Computer Vision – ECCV 2014. Springer, 2014, pp. 815–830.
  6. S. Fleishman, I. Drori, and D. Cohen-Or, “Bilateral mesh denoising,” ACM Trans. Graph., vol. 22, no. 3, 2003.
  7. T. R. Jones, F. Durand, and M. Desbrun, “Non-iterative, feature-preserving mesh smoothing,” ACM Trans. Graph., vol. 22, no. 3, pp. 943–949, 2003.
  8. Y. Zheng, H. Fu, O.-C. Au, and C.-L. Tai, “Bilateral normal filtering for mesh denoising,” IEEE Trans. Vis. Comput. Graphics, vol. 17, no. 10, pp. 1521–1530, 2011.
  9. J. Solomon, K. Crane, A. Butscher, and C. Wojtan, “A general framework for bilateral and mean shift filtering,” arXiv preprint arXiv:1405.4734, 2014.
  10. W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu, “Guided mesh normal filtering,” Comput. Graph. Forum, vol. 34, no. 7, pp. 23–34, 2015.
  11. P. Wang, X. Fu, Y. Liu, X. Tong, S. Liu, and B. Guo, “Rolling guidance normal filter for geometric processing,” ACM Trans. Graph., vol. 34, no. 6, p. 173, 2015.
  12. B. Ham, M. Cho, and J. Ponce, “Robust image filtering using joint static and dynamic guidance,” in CVPR, 2015.
  13. G. Taubin, “A signal processing approach to fair surface design,” ser. SIGGRAPH ’95, 1995, pp. 351–358.
  14. M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr, “Implicit fairing of irregular meshes using diffusion and curvature flow,” ser. SIGGRAPH ’99, 1999, pp. 317–324.
  15. S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, Bilateral Filtering: Theory and Applications. Now Publishers Inc, 2009.
  16. T. Jones, F. Durand, and M. Zwicker, “Normal improvement for point rendering,” IEEE Computer Graphics and Applications, vol. 24, no. 4, pp. 53–56, 2004.
  17. C. C. Wang, “Bilateral recovering of sharp edges on feature-insensitive sampled meshes,” IEEE Trans. Vis. Comput. Graphics, vol. 12, no. 4, pp. 629–639, 2006.
  18. J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph., vol. 26, no. 3, p. 96, 2007.
  19. L. Xu, C. Lu, Y. Xu, and J. Jia, "Image smoothing via L0 gradient minimization," ACM Trans. Graph., vol. 30, no. 6, pp. 174:1–174:12, 2011.
  20. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D, vol. 60, no. 1-4, pp. 259–268, 1992.
  21. G. Taubin, “Introduction to geometric processing through optimization,” IEEE Computer Graphics and Applications, vol. 32, no. 4, pp. 88–94, 2012.
  22. L. He and S. Schaefer, "Mesh denoising via L0 minimization," ACM Trans. Graph., vol. 32, no. 4, pp. 64:1–64:8, 2013.
  23. H. Zhang, C. Wu, J. Zhang, and J. Deng, “Variational mesh denoising using total variation and piecewise constant function space,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 7, pp. 873–886, 2015.
  24. P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 7, pp. 629–639, 1990.
  25. J. Sun, M. Ovsjanikov, and L. Guibas, “A concise and provably informative multi-scale signature based on heat diffusion,” in Proceedings of the Symposium on Geometry Processing, 2009, pp. 1383–1392.
  26. B. Vallet and B. Lévy, “Spectral geometry processing with manifold harmonics,” Computer Graphics Forum, vol. 27, no. 2, pp. 251–260, 2008.
  27. H. Zhang, O. Van Kaick, and R. Dyer, “Spectral mesh processing,” Computer Graphics Forum, vol. 29, no. 6, pp. 1865–1894, 2010.
  28. R. Wang, Z. Yang, L. Liu, J. Deng, and F. Chen, "Decoupling noise and features via weighted \ell_1-analysis compressed sensing," ACM Trans. Graph., vol. 33, no. 2, pp. 18:1–18:12, 2014.
  29. X. Sun, P. L. Rosin, R. R. Martin, and F. C. Langbein, “Fast and effective feature-preserving mesh denoising,” IEEE Trans. Vis. Comput. Graphics, vol. 13, no. 5, pp. 925–938, 2007.
  30. F. Pukelsheim, “The three sigma rule,” The American Statistician, vol. 48, no. 2, pp. 88–91, 1994.
  31. S. Bouaziz, M. Deuss, Y. Schwartzburg, T. Weise, and M. Pauly, “Shape-up: Shaping discrete geometry with projections,” Computer Graphics Forum, vol. 31, no. 5, pp. 1657–1667, 2012.
  32. G. Guennebaud, B. Jacob et al., “Eigen v3,” http://eigen.tuxfamily.org, 2010.
  33. L. Vasa and V. Skala, “A perception correlated comparison method for dynamic meshes,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 2, pp. 220–230, 2011.