
A Functional Representation for Graph Matching

(This research is funded by NSFC projects under contracts No. 61771350 and No. 41820104006.)

Fu-Dong Wang, Gui-Song Xia, Nan Xue, Yipeng Zhang, Marcello Pelillo

LIESMARS-CAPTAIN, Wuhan University, Wuhan, China
School of Computer Science, Wuhan University, China
Computer Vision Lab., University of Venice, Italy

{fu-dong.wang, guisong.xia, xuenan, zhangyp}@whu.edu.cn, pelillo@unive.it
Abstract

Graph matching is an important and persistent problem in computer vision and pattern recognition for finding node-to-node correspondences between graph-structured data. However, graph matching that incorporates pairwise constraints, as widely used, can be formulated as a quadratic assignment problem (QAP), which is NP-complete and results in intrinsic computational difficulties. In this paper, we present a functional representation for graph matching (FRGM) that aims to provide more geometric insights on the problem and reduce the space and time complexities of the corresponding algorithms. To achieve these goals, we represent a graph endowed with edge attributes by a linear function space equipped with a functional, such as an inner product or metric, that has an explicit geometric meaning. Consequently, the correspondence between graphs can be represented as a linear representation map of that functional. Specifically, we reformulate the linear functional representation map as a new parameterization for Euclidean graph matching, which is compatible with the geometric parameters of graphs under rigid or nonrigid deformations. This allows us to estimate the correspondence and geometric deformations simultaneously. Using the representation of edge attributes rather than the affinity matrix enables us to reduce the space complexity by two orders of magnitude. Furthermore, we propose an efficient optimization strategy with low time complexity to optimize the objective function. The experimental results on both synthetic and real-world datasets demonstrate that the proposed FRGM can achieve state-of-the-art performance.

1 Introduction

Graph matching (GM) is widely used to find node-to-node correspondence [1, 2] between graph-structured data in many computer vision and pattern recognition tasks, such as shape matching and retrieval [3, 4], object categorization [5], action recognition [6], and structure from motion [7], to name a few. In these applications, real-world data are generally represented as abstract graphs equipped with node attributes (e.g., SIFT descriptor, shape context) and edge attributes (e.g., relationships between nodes). In this way, many GM methods have been proposed based on the assumption that nodes or edges with more similar attributes are more likely to be matched. Generally, GM methods construct objective functions w.r.t. the varying correspondence to measure similarities (or dissimilarities) between nodes and edges. Then, they maximize (or minimize) the objective functions to pursue an optimal correspondence that achieves maximal (or minimal) total similarities (or dissimilarities) between two graphs. In the literature, an objective function is generally composed of unary [3], pairwise [8, 9] or higher-order [10, 11] potentials. In practice, matching graphs using only unary potential (node attributes) might lead to undesirable results due to the insufficient discriminability of node attributes. Therefore, pairwise or higher-order potentials are often integrated to better preserve the structural alignments between graphs.

Although the past decades have witnessed remarkable progress in GM [1], there are still many challenges with respect to both computational difficulty and formulation expressiveness. Specifically, GM that incorporates pairwise constraints, as widely used, can be formulated as a quadratic assignment problem (QAP) [12], among which Lawler's QAP [13] and Koopmans-Beckmann's QAP [14] are two common formulations. However, due to the NP-complete [15] nature of QAP, only approximate solutions are available in polynomial time. In practice, solving GM problems with pairwise constraints often encounters intrinsic difficulties due to the high computational complexity in space or time. For GM methods that apply Lawler's QAP, the affinity matrix of size $n_1n_2\times n_1n_2$ results in a space complexity of $\mathcal{O}(n_1^2n_2^2)$ w.r.t. the graph sizes $n_1$ and $n_2$. For GM methods that aim to solve the objective functions with discrete binary solutions through a gradually convex-concave continuous optimization strategy, the many iterations required result in high time complexity. Restricted by these limitations, only graphs with dozens of nodes can be handled by these methods in practice.

In addition to the computational difficulties, how to formulate the GM model for real applications is also important. Representing real-world data with the conventional graph model provides some generality for the general GM methods mentioned above. However, their formulations can neither reflect the geometric nature of real-world data nor handle graphs with geometric deformations (rigid or nonrigid). For example, when the edge attributes of graphs are computed as distances [16, 17, 18] on some explicit or implicit spaces that contain the real-world data, GM methods that define objective functions in the form of Lawler's or Koopmans-Beckmann's QAP ignore the geometric properties behind these data: they achieve only generality. For graphs with rigid or nonrigid geometric deformations [16], these methods cannot handle the two coupled tasks of estimating the correspondence and the deformation parameters, because they can hardly give the correspondence the geometric interpretation that is naturally contained in the deformation parameters.

Figure 1: FRGM: given two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$, we construct two function spaces $\mathcal{F}(\mathcal{G}_1)$ and $\mathcal{F}(\mathcal{G}_2)$ as representations, where $\{f_i\}$ and $\{g_j\}$ are two sets of basis functions that represent the nodes $V_1$ and $V_2$, and $f_{E_1}$ and $f_{E_2}$ are the inner products or metrics that represent the edge attributes $E_1$ and $E_2$. The matching between two graphs can be viewed as a transformation $\mathcal{T}: V_1\to V_2$, which may be nonlinear and complicated. Fortunately, $\mathcal{T}$ can be recovered from a linear functional $\mathcal{T}_F:\mathcal{F}(\mathcal{G}_1)\to\mathcal{F}(\mathcal{G}_2)$, which is induced from $\mathcal{T}$ by the push-forward operation and represented by a linear functional representation map $P$. $P$ is exactly a correspondence between graphs. Based on the inner product or metric defined above, each transformed node will lie closer to its correct match, as shown in the distance matrix in the figure. This property is helpful for improving the matching performance.

Facing these issues, this paper introduces a new functional representation for graph matching (FRGM). The main idea is to represent the graphs and the node-to-node correspondence by linear functional representations for both general and Euclidean GM models. Specifically, for general GM, as shown in Fig. 1, given two undirected graphs, we can identically represent the node sets as linear function spaces, on which some specified functionals (e.g., inner product or metric) can be compatibly constructed to represent the edge attributes. Then, between the two function spaces, a functional induced by the push-forward operation is represented by a linear representation map $P$, which is exactly the correspondence between graphs. With these concepts, our general GM algorithm is proposed by minimizing an objective function w.r.t. $P$ that measures the difference of graph attributes between graph $\mathcal{G}_1$ and its transformed graph. Namely, we want an optimal functional in the sense of preserving the inner product or metric. For Euclidean GM, in which the graphs are embedded in the Euclidean space $\mathbb{R}^d$, the functional that plays the role of correspondence between graphs can be directly deduced on the background space $\mathbb{R}^d$. Due to its natural linearity, it can also be represented by a linear representation map $P$, which is not only a parameterization for GM but also compatible with the geometric parameters of graphs under geometric deformations. A preliminary version of this work was presented in [19].

FRGM only needs to compute and store the edge attributes of the graphs; thus its space complexity is $\mathcal{O}(n^2)$ with $n=\max(n_1,n_2)$, i.e., quadratic rather than quartic in the graph size. To reduce the time complexity, we first propose an optimization algorithm based on the Frank-Wolfe method. Then, by taking advantage of a specific property of the relaxed feasible field, we improve the Frank-Wolfe method with an approximation that further lowers the time complexity.

The contributions of this paper can be summarized in the following aspects:

  • We introduce a new functional representation perspective that can bridge the gap between the formulation of general GM and the geometric nature behind the real-world data. This guides us in constructing more efficient objective functions and algorithms for the general GM problem.

  • For graphs embedded in Euclidean space, we extend the linear functional representation map as a new geometric parameterization that achieves compatibility with the geometric parameters of graphs. This helps to globally handle graphs with or without geometric deformations.

  • We propose GM algorithms with low space complexity and time complexity by avoiding the use of an affinity matrix and by improving the optimization strategy. The proposed algorithms outperform the state-of-the-art methods in terms of both efficiency and accuracy.

The remainder of this paper is organized as follows. Sec. 2 presents the mathematical formulation and related work of GM. In Sec. 3 we demonstrate the functional representation for GM in general settings and the resulting algorithm. In Sec. 4 and Sec. 5, we discuss FRGM for matching graphs in Euclidean space with and without geometric deformations, respectively. In Sec. 6, we present a numerical analysis of our optimization strategy. Finally, we report the experimental results and analysis in Sec. 7 and conclude this paper in Sec. 8.

2 Background and Related Work

This section first introduces the preliminaries and basic notations of GM and then discusses some related works on GM.

2.1 Definition of GM Problem

An undirected graph $\mathcal{G}=(V,\mathcal{E})$ of size $n$ is defined by a discrete set of nodes $V=\{v_i\}_{i=1}^{n}$ and a set of undirected edges $\mathcal{E}\subseteq V\times V$. Generally, the edge set of a graph is written as a symmetric edge indicator matrix $A\in\{0,1\}^{n\times n}$, where $A_{ij}=1$ if there is an edge between $v_i$ and $v_j$, and $A_{ij}=0$ otherwise. An important generalization is the weighted graph, defined by associating non-negative real values with the graph edges; the resulting matrix $W$ is called the adjacency weight matrix. We assume graphs with no self-loops in this paper, i.e., $A_{ii}=W_{ii}=0$.

In many real applications, a graph is associated with node and edge attributes expressed as scalars or vectors. For an attributed graph $\mathcal{G}$, we denote by $a_i$ the node attribute of $v_i$ and by $E_{ij}$ the edge attribute of the edge $(v_i,v_j)$. Typically, the edge attribute matrix $E\in\mathbb{R}^{n\times n}$ is calculated by some user-specified function, such as a distance (e.g., Euclidean or geodesic) between the nodes.

Given two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ of sizes $n_1$ and $n_2$ ($n_1\le n_2$), respectively, the GM problem is to find an optimal node-to-node correspondence $P\in\{0,1\}^{n_1\times n_2}$, where $P_{ij}=1$ when the nodes $v_i\in V_1$ and $u_j\in V_2$ are matched and $P_{ij}=0$ otherwise. It is clear that any possible correspondence equals a (partial) permutation matrix when GM imposes the one-to-(at most)-one constraints. Therefore, the feasible field of $P$ can be defined as:

$\mathcal{P}=\bigl\{\,P\in\{0,1\}^{n_1\times n_2}\ :\ P\mathbf{1}_{n_2}=\mathbf{1}_{n_1},\ P^\top\mathbf{1}_{n_1}\le\mathbf{1}_{n_2}\,\bigr\},$   (1)

where $\mathbf{1}_{n}$ is the all-ones (unit) vector of length $n$. When $n_1=n_2$, $P$ is orthogonal: $PP^\top=I_{n_1}$, where $I_{n_1}$ is the unit (identity) matrix.
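As a concrete illustration of the constraint set in Eq. (1), the following minimal NumPy sketch (the helper name and sizes are ours, not the paper's) builds a partial permutation matrix from an index assignment and checks the row and column constraints.

```python
import numpy as np

def assignment_to_matrix(assign, n1, n2):
    """Build P in {0,1}^{n1 x n2} from a list of matched column indices (one per node of G1)."""
    P = np.zeros((n1, n2))
    P[np.arange(n1), assign] = 1.0
    return P

P = assignment_to_matrix([2, 0, 3], n1=3, n2=4)
# Every node of G1 is matched exactly once ...
assert np.all(P.sum(axis=1) == 1)
# ... and every node of G2 is used at most once.
assert np.all(P.sum(axis=0) <= 1)
```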

To find the optimal correspondence, GM methods that incorporate pairwise constraints generally minimize or maximize their objective functions w.r.t. $P$ over the feasible field $\mathcal{P}$. There are two typical objective functions: Lawler's QAP [13] and Koopmans-Beckmann's QAP [14].

The main idea behind Lawler’s QAP [13, 8, 20, 9, 16] is to maximize the sum of the node and edge similarities:

$\max_{P\in\mathcal{P}}\ \mathrm{vec}(P)^\top K\,\mathrm{vec}(P),$   (2)

where $\mathrm{vec}(P)\in\{0,1\}^{n_1n_2}$ is the column-wise vectorized replica of $P$. The diagonal element $K_{ia,ia}$ measures the node affinity between $v_i$ and $u_a$, calculated from the node attributes, and the off-diagonal element $K_{ia,jb}$ measures the edge affinity between the edges $(v_i,v_j)$ and $(u_a,u_b)$, calculated from the edge attributes. $K\in\mathbb{R}^{n_1n_2\times n_1n_2}$ is called the affinity matrix of $\mathcal{G}_1$ and $\mathcal{G}_2$.
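A hedged sketch of how such an affinity matrix can be assembled and Eq. (2) evaluated; the Gaussian edge affinity, the omission of the node terms, and the row-major indexing convention are illustrative choices of ours, not the paper's. The quartic loop also makes plain why storing $K$ is prohibitive for large graphs.

```python
import numpy as np

def lawler_affinity(E1, E2, sigma=0.5):
    """K[ia, jb] = exp(-(E1[i, j] - E2[a, b])^2 / sigma) for i != j, a != b (illustrative)."""
    n1, n2 = E1.shape[0], E2.shape[0]
    K = np.zeros((n1 * n2, n1 * n2))          # quartic memory cost
    for i in range(n1):
        for j in range(n1):
            for a in range(n2):
                for b in range(n2):
                    if i != j and a != b:
                        K[i * n2 + a, j * n2 + b] = np.exp(-(E1[i, j] - E2[a, b]) ** 2 / sigma)
    return K

def lawler_score(P, K):
    p = P.reshape(-1)                          # vec(P); row-major, matching K's indexing above
    return p @ K @ p
```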

Koopmans-Beckmann’s QAP [14, 21] formulates GM as

$\min_{P\in\mathcal{P}}\ \mathrm{tr}(K_p^\top P)-\lambda\,\mathrm{tr}\bigl(W_1PW_2P^\top\bigr),$   (3)

where $[K_p]_{ij}$ measures the dissimilarity between nodes $v_i$ and $u_j$, and $W_1$, $W_2$ are the adjacency weight matrices of $\mathcal{G}_1$ and $\mathcal{G}_2$. $\lambda$ is a weight between the unary and pairwise terms. This formulation differs from Eq. (2) mainly in the pairwise term, which measures the edge compatibility as the linear similarity of the adjacency matrices $W_1$ and $PW_2P^\top$. In fact, Eq. (3) can be regarded as a special case of Lawler's QAP (Eq. (2)) if $K=W_2\otimes W_1$, where $\otimes$ denotes the Kronecker product. With this formulation, the space complexity of GM is $\mathcal{O}(n_1^2+n_2^2)$, much lower than that of Eq. (2).

Eq. (3) has another closely related formulation, which aims to minimize the node and edge dissimilarity between two graphs:

$\min_{P\in\mathcal{P}}\ \mathrm{tr}(K_p^\top P)+\lambda\,\bigl\|W_1P-PW_2\bigr\|_F^2,$   (4)

where $\langle A,B\rangle_F$ is the Frobenius dot-product defined as $\langle A,B\rangle_F=\mathrm{tr}(A^\top B)$ and $\|A\|_F$ is the Frobenius matrix norm defined as $\|A\|_F=\sqrt{\langle A,A\rangle_F}$. The conversion from Eq. (4) to Eq. (3) holds under the fact that any $P\in\mathcal{P}$ with $n_1=n_2$ is an orthogonal matrix.
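Assuming the reconstructed forms of Eqs. (3) and (4) above, the following sketch checks numerically the identity that links the two formulations: for a permutation matrix $P$, $\|W_1P-PW_2\|_F^2=\|W_1\|_F^2+\|W_2\|_F^2-2\,\mathrm{tr}(W_1PW_2P^\top)$, so the two objectives differ only by a constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W1 = rng.random((n, n)); W1 = (W1 + W1.T) / 2        # symmetric adjacency weights
W2 = rng.random((n, n)); W2 = (W2 + W2.T) / 2
P = np.eye(n)[rng.permutation(n)]                     # a random permutation matrix

lhs = np.linalg.norm(W1 @ P - P @ W2, 'fro') ** 2
rhs = np.linalg.norm(W1, 'fro') ** 2 + np.linalg.norm(W2, 'fro') ** 2 \
      - 2 * np.trace(W1 @ P @ W2 @ P.T)
assert np.isclose(lhs, rhs)   # holds because P P^T = I for a permutation matrix
```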

Due to the NP-complete nature of the above formulations, GM methods generally approximate the discrete feasible field $\mathcal{P}$ by a continuous relaxation $\mathcal{D}$, which is known as the doubly stochastic relaxation. Then the objective functions can be approximately solved by applying constrained optimization methods and employing a post-discretization step, such as the Hungarian algorithm [22], to obtain a discrete binary solution.
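The post-discretization step can be implemented with an off-the-shelf Hungarian solver; a minimal sketch using SciPy's `linear_sum_assignment` (which minimizes, so we negate the relaxed solution to maximize the total soft assignment):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def discretize(P_relaxed):
    """Project a relaxed (doubly stochastic) solution onto a binary (partial) permutation."""
    row, col = linear_sum_assignment(-P_relaxed)   # maximize the sum of selected entries
    P_bin = np.zeros_like(P_relaxed)
    P_bin[row, col] = 1.0
    return P_bin
```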

2.2 Related Work

Over the past decades, the GM problem of finding node-to-node correspondence between graphs has been extensively studied [2, 1]. Earlier works (exact GM) [23, 24] tended to regard GM as (sub)graph isomorphism. However, this assumption is too strict and leads to less flexibility for real applications. Therefore, later works on GM (inexact/error-tolerant GM) [18, 16, 20, 9] focused more on finding inexact matching between weighted graphs via optimizing more flexible objective functions.

Among the inexact GM methods, some aim to reduce the considerable space complexity caused by the affinity matrix $K$ in Eq. (2). A typical work is factorized graph matching (FGM) [16], which factorizes $K$ into a Kronecker product of several smaller matrices. An efficient sampling heuristic was proposed in [25] to avoid storing the whole $K$ at once. Other works [21, 26] constructed objective functions similar to Eq. (4) or Eq. (3) to avoid using the matrix $K$. Our work uses the representation of edge attributes rather than $K$.

Since exactly solving the objective functions over the discrete feasible field $\mathcal{P}$ is NP-complete, most GM methods relax $\mathcal{P}$ for approximation purposes in several ways. The first typical relaxation is the spectral relaxation, as proposed in [8, 27], which forces $\|\mathrm{vec}(P)\|_2=1$; the solution is then computed as the leading eigenvector of $K$. The second relaxation [21] considers $\mathcal{P}$ as a subset of the set of orthogonal matrices satisfying $PP^\top=I$, which is the basis for converting Eq. (4) into Eq. (3). Semidefinite programming (SDP) was also applied to approximately solve the GM problem in [28, 29] by introducing a new variable that plays the role of $\mathrm{vec}(P)\mathrm{vec}(P)^\top$ under a convex semidefinite constraint; $P$ is then approximately recovered from this variable.

The most widely used relaxation approach is the doubly stochastic relaxation $\mathcal{D}$, which is the convex hull of $\mathcal{P}$. Since $\mathcal{D}$ is a convex set defined by linear constraints, it allows the GM objective functions to be solved by more flexible convex or nonconvex optimization algorithms. To find solutions that are closer to the global optimum and have a binary structure, the algorithms proposed in [21, 16, 30, 31] constructed objective functions with both convex and concave relaxations controlled by a continuation parameter, and then developed a path-following-based strategy for optimization. These approaches are generally time consuming, particularly for matching graphs with more than dozens of nodes. The graduated assignment method [32] iteratively solved a series of first-order approximations of the objective function; its improvement [33] provided more convergence analysis. The decomposition-based work in [34] developed its optimization technique by referring to dual decomposition. Additionally, another method in [18] decomposed the matching constraints and then used an optimization strategy based on the alternating direction method of multipliers. To ensure binary solutions, some methods, such as the integer-projected fixed point algorithm [20] and iterative discrete gradient assignment [11], search directly in the discrete feasible domain. We also adopt the doubly stochastic relaxation, and we construct an objective function that can be solved with a nearly binary solution, which helps to reduce the effect of the post-discretization step.

In addition to approximating the objective functions, some works also intended to provide more interpretations of the GM problem. The probability-based works [25, 35] solved the GM problem from a maximum likelihood estimation perspective. Some learning-based works [36, 37] went further to explore how to improve the affinity matrix by considering rotations and scales of real data. A pioneering work [38] presented an end-to-end deep learning framework for GM. A random walk view [9] was introduced by simulating random walks with reweighting jumps. A max-pooling-based strategy was proposed in [39] to address the presence of outliers. Compared to these works, the proposed FRGM provides more geometric insights for GM with general settings by using a functional representation to interpret the geometric nature of real-world data, and it then matches graphs embedded in Euclidean space by providing a new parameterization view to handle graphs under geometric deformations.

3 Functional Representation for GM

This section presents the functional representation for general GM that incorporates pairwise constraints. In Sec. 3.1, we introduce the function space of a graph, on which functionals can be defined as the inner product or metric to compatibly represent the edge attributes. In Sec. 3.2, we discuss how to represent the correspondence between graphs as a linear functional representation map between function spaces. Finally, the correspondence is an optimal functional map obtained by the algorithm in Sec. 3.3.

3.1 Function Space on Graph

Given an undirected graph $\mathcal{G}=(V,\mathcal{E})$ with edge attribute matrix $E$, we aim to establish a function space on $\mathcal{G}$, on which some geometric structures, such as an inner product or metric, can be defined. This is especially meaningful when graphs are embedded in explicit or hidden manifolds.

Let $\mathcal{F}(\mathcal{G})$ denote the space of all real-valued functions on $V$. Since $V$ is finite and discrete, we can choose a finite set of basis functions $\{f_i\}_{i=1}^{n}$ to explicitly construct $\mathcal{F}(\mathcal{G})$.

Definition 3.1.

The function space on graph $\mathcal{G}$ can be defined as:

$\mathcal{F}(\mathcal{G})=\bigl\{\,f=\sum_{i=1}^{n}w_if_i\ :\ w_i\in\mathbb{R}\,\bigr\}.$   (5)

For example, $f_i$ can be chosen as the indicator function of the node $v_i$:

$f_i(v_j)=1$ if $j=i$, and $f_i(v_j)=0$ otherwise.   (6)

Considering the fact that the (relaxed) correspondence matrix is nonnegative, i.e., $P_{ij}\ge 0$, a typical subset of $\mathcal{F}(\mathcal{G})$ can be defined as follows, which is the convex hull of the basis $\{f_i\}_{i=1}^{n}$:

$\mathcal{F}_{\Delta}(\mathcal{G})=\bigl\{\,f=\sum_{i=1}^{n}w_if_i\ :\ w_i\ge 0,\ \sum_{i=1}^{n}w_i=1\,\bigr\}.$   (7)

Once the function space is built, some trivial operations can be defined, e.g., the canonical inner product $\langle f_i,f_j\rangle=\delta_{ij}$ and its induced metric. However, these definitions cannot express the edge attribute $E_{ij}$. Therefore, we aim to define other operations that represent $E$ based on $\mathcal{F}(\mathcal{G})$. An available approach is to define functionals on the product space $\mathcal{F}(\mathcal{G})\times\mathcal{F}(\mathcal{G})$. Moreover, the functionals should (1) be compatible with $E$ and (2) have geometric structures such as an inner product or metric, as demonstrated in the following:

Definition 3.2.

A functional $f_E:\mathcal{F}(\mathcal{G})\times\mathcal{F}(\mathcal{G})\to\mathbb{R}$ is compatible with the edge attribute matrix $E$ if it satisfies $f_E(f_i,f_j)=E_{ij}$ for all basis functions $f_i,f_j$.

Among all the compatible functionals, there are some specified ones that can be defined as the inner product or metric on the function space or its subset , as follows.

Definition 3.3.

The inner product on the function space $\mathcal{F}(\mathcal{G})$ can be defined in an explicit form: for $f=\sum_i w_if_i$ and $g=\sum_j w'_jf_j$,

$\langle f,g\rangle_{E}=\sum_{i,j}w_iw'_jE_{ij}=w^\top E\,w'.$   (8)

For a given edge attribute matrix $E$ that is symmetric, the functional $\langle\cdot,\cdot\rangle_E$ in Eq. (8) satisfies the first two inner product axioms: symmetry and linearity. To satisfy the third axiom, positive-definiteness, we need more knowledge about $E$, e.g., that $E$ is positive-definite. However, if positive-definiteness is too strong a requirement, we can relax it to a weaker condition.

Proposition 1.

Assume that $E$ satisfies $E_{ij}=0$ iff $i=j$. Then, the functional in Eq. (8) satisfies all three axioms on $\mathcal{F}_{\Delta}(\mathcal{G})$ after replacing $E$ with a pointwise-modified matrix $E_\lambda$, for a parameter $\lambda>0$ chosen small enough. Here, the modification is a pointwise (Hadamard) product.

This proposition holds because, when $\lambda$ is sufficiently small, all the eigenvalues of the modified matrix $E_\lambda$ are positive. In particular, when $E$ is computed as a metric (distance) matrix on an explicit or hidden manifold, it satisfies $E_{ij}=0$ iff $i=j$ and $E_\lambda$ is positive-definite. Moreover, $\lambda$ can be used to adjust the eigenspace of $E_\lambda$. Fig. 2 illustrates an empirical study on thousands of edge attribute matrices extracted from both realistic and synthetic datasets used in Sec. 7. The edge attribute matrix of each graph is computed in a metric form (either Euclidean distance or geodesic distance) and then normalized to $[0,1]$ by dividing by the maximum element. We can see that

  • All the eigenvalues of $E_\lambda$ are positive.

  • The ratio between the minimum and maximum eigenvalues shows a similar tendency across datasets when $\lambda$ varies from 0 to 1.

This shows that $E_\lambda$ becomes indistinguishable if $\lambda$ is too small and unbalanced if $\lambda$ is too large. We can choose a suitable $\lambda$ to adjust the eigenspace of $E_\lambda$ and achieve better matching performance.

Figure 2: Empirical statistics of $E_\lambda$ extracted from the realistic and synthetic datasets used in the experimental section. For thousands of graphs in all six datasets, as $\lambda$ varies from 0 to 1, the ratio between the minimum and maximum eigenvalues of $E_\lambda$ changes with a similar tendency.
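The kind of empirical check behind Fig. 2 can be reproduced in a few lines; the pointwise modification used below ($1-\lambda E$ on a max-normalized distance matrix) is only an assumed stand-in for the paper's exact construction.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
X = rng.random((50, 2))                       # a synthetic point set embedded in R^2
E = cdist(X, X); E /= E.max()                 # metric edge attributes, normalized to [0, 1]

for lam in (0.1, 0.3, 0.5, 0.7, 0.9):
    E_lam = 1.0 - lam * E                     # assumed pointwise modification of E
    eig = np.linalg.eigvalsh(E_lam)           # eigenvalues of the symmetric matrix
    print(f"lambda={lam:.1f}  min/max eigenvalue ratio = {eig.min() / eig.max():.4f}")
```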

The inner product can induce a metric by the definition $d_{\langle\cdot,\cdot\rangle}(f,g)=\sqrt{\langle f-g,\,f-g\rangle_E}$. Moreover, we can also define another metric on the subset $\mathcal{F}_{\Delta}(\mathcal{G})$ based on $E$ itself.

Definition 3.4.

The metric on the convex hull $\mathcal{F}_{\Delta}(\mathcal{G})$ can be defined in an implicit form: for $f=\sum_i w_if_i$ and $g=\sum_j w'_jf_j$,

$d_{W}(f,g)=\min_{T\in\Pi(w,w')}\ \sum_{i,j}T_{ij}E_{ij},$   (9)

where $\Pi(w,w')=\{\,T\in\mathbb{R}^{n\times n}_{\ge 0}\ :\ T\mathbf{1}=w,\ T^\top\mathbf{1}=w'\,\}$ is the set of couplings between the weight vectors $w$ and $w'$.

When $E$ is computed as a metric, $d_W$ satisfies all three distance axioms on $\mathcal{F}_{\Delta}(\mathcal{G})$, and it is a typical Wasserstein distance (or, in its entropy-regularized form, Sinkhorn distance) [40]. The definition in Eq. (9) is not differentiable w.r.t. its arguments; one can use the entropy-regularized Wasserstein distance [40] to achieve differentiability.
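Since Eq. (9) is an optimal-transport distance, its entropy-regularized variant can be evaluated with standard Sinkhorn iterations; a compact sketch follows, where the regularization strength and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def sinkhorn_distance(w, w_prime, E, eps=0.05, iters=200):
    """Entropy-regularized Wasserstein distance between weight vectors w, w' with ground cost E."""
    K = np.exp(-E / eps)                       # Gibbs kernel
    u = np.ones_like(w)
    for _ in range(iters):
        v = w_prime / (K.T @ u)                # alternating scaling updates
        u = w / (K @ v)
    T = np.diag(u) @ K @ np.diag(v)            # (approximate) optimal transport plan
    return np.sum(T * E)
```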

With the function space equipped with an inner product or metric, each graph is assigned explicit geometric structures that are compatible with its edge attributes. Next, we demonstrate the idea of using the functional map representation to formulate the correspondence between graphs as a functional between the two function spaces $\mathcal{F}(\mathcal{G}_1)$ and $\mathcal{F}(\mathcal{G}_2)$.

3.2 Functional Map Representation for GM

The matching between two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ can be viewed as a mapping $\mathcal{T}$ from $V_1$ to $V_2$, which may be nonlinear and complicated. Therefore, rather than working with $\mathcal{T}$ directly, we use the push-forward operation to induce a functional $\mathcal{T}_F$ that equally represents the matching between graphs.

Assume that $\mathcal{T}$ is an injective mapping; then $\mathcal{T}:V_1\to\mathcal{T}(V_1)$ is bijective and invertible. Without ambiguity, we can assume that $\mathcal{T}$ is bijective. Each $\mathcal{T}$ induces a natural transformation via the push-forward operation, which is widely used in functional analysis [41] and real applications [42][43]:

Definition 3.5.

The functional $\mathcal{T}_F:\mathcal{F}(\mathcal{G}_1)\to\mathcal{F}(\mathcal{G}_2)$ induced from $\mathcal{T}$ is defined as: for any $f\in\mathcal{F}(\mathcal{G}_1)$, the image of $f$ is $\mathcal{T}_F(f)=f\circ\mathcal{T}^{-1}$.

Proposition 2.

The original mapping $\mathcal{T}$ can be recovered from $\mathcal{T}_F$.

For each node $v_i\in V_1$, it can be associated with an indicator function $f_i$ as in Eq. (6). To recover the image $\mathcal{T}(v_i)$ from $\mathcal{T}_F$, we utilize the function $\mathcal{T}_F(f_i)=f_i\circ\mathcal{T}^{-1}$, which satisfies

$\mathcal{T}_F(f_i)(u)=f_i\bigl(\mathcal{T}^{-1}(u)\bigr)=1$ if $u=\mathcal{T}(v_i)$, and $0$ otherwise.   (10)

Since $\mathcal{T}$ is bijective and invertible, a unique $u_j\in V_2$ exists s.t. $\mathcal{T}_F(f_i)(u_j)=1$. Then, once we find this $u_j$, it must equal the image of $v_i$: $u_j=\mathcal{T}(v_i)$. Thus, the functional $\mathcal{T}_F$ can be used to equally represent $\mathcal{T}$.

Proposition 3.

$\mathcal{T}_F$ is a linear mapping from the function space $\mathcal{F}(\mathcal{G}_1)$ to $\mathcal{F}(\mathcal{G}_2)$.

It holds because, for any $f,g\in\mathcal{F}(\mathcal{G}_1)$ and scalars $a,b\in\mathbb{R}$, $\mathcal{T}_F(af+bg)=(af+bg)\circ\mathcal{T}^{-1}=a\,f\circ\mathcal{T}^{-1}+b\,g\circ\mathcal{T}^{-1}=a\,\mathcal{T}_F(f)+b\,\mathcal{T}_F(g)$.

Although $\mathcal{T}$ may be nonlinear and complicated, $\mathcal{T}_F$ is linear and simple.

With the function spaces $\mathcal{F}(\mathcal{G}_1)$ and $\mathcal{F}(\mathcal{G}_2)$ defined by basis functions $\{f_i\}_{i=1}^{n_1}$ and $\{g_j\}_{j=1}^{n_2}$ respectively, each basis function $f_i$ can be transformed into $\mathcal{T}_F(f_i)\in\mathcal{F}(\mathcal{G}_2)$ and represented in a linear form as $\mathcal{T}_F(f_i)=\sum_{j=1}^{n_2}P_{ij}g_j$. Whenever $P$ reaches an extreme point of the feasible field $\mathcal{P}$, it is a binary correspondence between graphs, and consequently, each $f_i$ is transformed into (i.e., matches) a single $g_j$ with $P_{ij}=1$.

To find an optimal correspondence between two graphs with edge attribute matrices $E_1$ and $E_2$, we declare that the induced functional $\mathcal{T}_F$ should be able to preserve the geometric structures defined on the function spaces. Namely, $\mathcal{T}_F$ should be inner product or metric preserving. More precisely, for each pair $(f_i,f_{i'})$, the functional value $f_{E_1}(f_i,f_{i'})=E_{1,ii'}$ should be similar to the functional value of the transformed pair $\bigl(\mathcal{T}_F(f_i),\mathcal{T}_F(f_{i'})\bigr)$, which is calculated as

$f_{E_2}\bigl(\mathcal{T}_F(f_i),\mathcal{T}_F(f_{i'})\bigr)=f_{E_2}\Bigl(\sum_{j}P_{ij}g_j,\ \sum_{j'}P_{i'j'}g_{j'}\Bigr).$   (11)

The functionals defined in Definition 3.3 or Definition 3.4 can be used to calculate it. Finally, to incorporate the pairwise constraints, we aim to minimize the total sum as follows:

$\min_{P\in\mathcal{P}}\ \sum_{i,i'}\Bigl|\,f_{E_1}(f_i,f_{i'})-f_{E_2}\bigl(\mathcal{T}_F(f_i),\mathcal{T}_F(f_{i'})\bigr)\Bigr|^2,$   (12)

where $f_{E_2}$ is computed based on the edge attribute matrix $E_2$. Note that the affinity matrix of size $n_1n_2\times n_1n_2$ is replaced here by the edge attribute matrices $E_1$ of size $n_1\times n_1$ and $E_2$ of size $n_2\times n_2$.
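With the inner-product functional of Definition 3.3, the pairwise term of Eq. (12) reduces to comparing $E_1$ with $PE_2P^\top$; the sketch below is our instantiation of that special case, not necessarily the exact loss used in the paper.

```python
import numpy as np

def pairwise_dissimilarity(P, E1, E2):
    """Sum of squared differences between edge attributes of G1 and of the transformed graph."""
    return np.sum((E1 - P @ E2 @ P.T) ** 2)
```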

3.3 FRGM-G: matching graphs with general settings

Here, we propose our FRGM-G algorithm for matching graphs with general settings, i.e., without knowledge of the geometric structures of the graphs. To find an optimal correspondence, i.e., the functional map mentioned above, we first minimize an objective function of the form

$\min_{P\in\mathcal{D}}\ J_1(P)=(1-\alpha)\,\mathrm{tr}(K_p^\top P)+\alpha\sum_{i,i'}\Bigl|\,f_{E_1}(f_i,f_{i'})-f_{E_2}\bigl(\mathcal{T}_F(f_i),\mathcal{T}_F(f_{i'})\bigr)\Bigr|^2,$   (13)

where $\alpha\in[0,1]$ balances the weights of the unary term (with $K_p$ the node dissimilarity matrix of Sec. 2) and the pairwise term. In general, $J_1$ is nonconvex, and minimizing it over the relaxed feasible field $\mathcal{D}$ results in a local minimum. The minimizer, denoted $P_1^{*}$, may not be binary, and the post-discretization of $P_1^{*}$ may reduce the matching accuracy. Therefore, we next construct another objective function to find a better solution based on the obtained $P_1^{*}$.

According to the definition $\mathcal{T}_F(f_i)=\sum_j P_{ij}g_j$, each transformed basis function lies in the convex set $\mathcal{F}_{\Delta}(\mathcal{G}_2)$, which is the convex hull of $\{g_j\}$. Therefore, the transformed functions lie in the same function space spanned by $\{g_j\}$, and the offset between $\mathcal{T}_F(f_i)$ and its correct match can be controlled. Moreover, since $P_1^{*}$ indeed preserves the pairwise geometric structure between the two graphs, each $\mathcal{T}_F(f_i)$ will lie closer to its correct match. This means that, based on the metric defined on the function spaces, the distance from $\mathcal{T}_F(f_i)$ to the correct match makes sense and will be smaller than the distance to the other basis functions. Therefore, we define the second objective function as:

$\min_{P\in\mathcal{D}}\ J_2(P)=(1-\beta)\,\mathrm{tr}(D^\top P)+\beta\,\bigl\|P-P_1^{*}\bigr\|_F^2,$   (14)

where $D_{ij}$ is the distance between $\mathcal{T}_F^{*}(f_i)$ (induced by $P_1^{*}$) and $g_j$, computed by the metric functional defined on $\mathcal{F}(\mathcal{G}_2)$ or $\mathcal{F}_{\Delta}(\mathcal{G}_2)$. The minimizer of Eq. (14) can be viewed as a displacement interpolation: minimizing the first term alone yields a solution that is an extreme point (thus, binary) of the feasible field; minimizing the second term alone yields a solution that equals $P_1^{*}$. Then, the minimizer is an interpolation between the two controlled by $\beta$. Finally, we use the Hungarian method to discretize it into a binary solution.
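As a rough illustration of how such objectives can be minimized over the doubly stochastic relaxation (the paper's own strategy, analyzed in Sec. 6, may use different step sizes, approximations, and stopping rules), here is a generic Frank-Wolfe sketch for the inner-product instantiation of the pairwise term, written for the square case $n_1=n_2$:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frank_wolfe_gm(E1, E2, iters=100):
    """Minimize ||E1 - P E2 P^T||_F^2 over doubly stochastic P (square case)."""
    n = E1.shape[0]
    P = np.full((n, n), 1.0 / n)              # uniform doubly stochastic starting point
    for k in range(iters):
        R = E1 - P @ E2 @ P.T                 # residual of pairwise attributes
        grad = -4.0 * R @ P @ E2              # gradient (E1, E2 assumed symmetric)
        row, col = linear_sum_assignment(grad) # linear oracle over the Birkhoff polytope
        S = np.zeros_like(P); S[row, col] = 1.0
        gamma = 2.0 / (k + 2.0)               # standard Frank-Wolfe step size
        P = (1 - gamma) * P + gamma * S
    row, col = linear_sum_assignment(-P)      # post-discretization (Hungarian)
    return P, list(zip(row, col))
```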

4 FRGM in Euclidean Space

In many computer vision applications, graphs are often embedded in explicit or implicit manifolds, e.g., the Euclidean space $\mathbb{R}^d$ or a surface $\mathcal{M}$, where graphs are naturally associated with some specific geometric properties. For example, the node attributes can be computed as SIFT [44], shape context [3], HKS [45] and so on, and the edge attribute matrix can be computed as the Euclidean distance on $\mathbb{R}^d$ or the geodesic distance on the surface $\mathcal{M}$.

We can use the method proposed for general GM in Sec. 3 to match graphs in these cases. Furthermore, for graphs embedded in $\mathbb{R}^d$, we can construct another method for Euclidean GM based on the fact that the functional representation between abstract function spaces can be deduced in the concrete Euclidean space with explicit geometric interpretations. Since each node can be represented as a vector $v_i\in\mathbb{R}^d$ (resp. $u_j\in\mathbb{R}^d$), the expression $\sum_j P_{ij}u_j$ naturally makes sense. Consequently, we can directly define the unknown transformation $\mathcal{T}$ in a linear form:

$\hat v_i\ \triangleq\ \mathcal{T}(v_i)=\sum_{j=1}^{n_2}P_{ij}u_j,\qquad i=1,\dots,n_1,$   (15)

$\text{s.t.}\quad \sum_{j=1}^{n_2}P_{ij}=1,\quad P_{ij}\ge 0.$   (16)

The transformed nodes can be rewritten in matrix notation as $\hat V_1=PV_2$, where $V_1\in\mathbb{R}^{n_1\times d}$ and $V_2\in\mathbb{R}^{n_2\times d}$ stack the node coordinates row-wise. Now, $P$ is a linear representation map of the unknown transformation $\mathcal{T}$. With the constraint that each row of $P$ is nonnegative and sums to one, each transformed node $\hat v_i$ lies in the convex hull of $V_2$. Once $P$ reaches a binary correspondence matrix, each $v_i$ is transformed into a single $u_j$ with $P_{ij}=1$.

For graphs embedded in Euclidean spaces, edge attributes such as the edge length and the edge orientation are widely used. The edge attributes of the transformed graph can be computed as functions of $P$ as:

  • the edge length, computed as the Euclidean distance $\|\hat v_i-\hat v_{i'}\|_2$;

  • the edge orientation, computed as the vector between nodes $\hat v_i-\hat v_{i'}$;

where $\|\cdot\|_2$ is the Euclidean norm and $\hat v_i=\sum_j P_{ij}u_j$. We propose our algorithm for matching graphs in Euclidean space, i.e., FRGM-E, in the following sections.
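A minimal sketch of the linear parameterization in Eqs. (15)-(16): the transformed nodes are $\hat V_1=PV_2$, and their edge lengths and orientations are plain functions of $P$. The coordinates and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, d = 4, 6, 2
V2 = rng.random((n2, d))                                        # node coordinates of G2
P = rng.random((n1, n2)); P /= P.sum(axis=1, keepdims=True)     # rows on the simplex

V1_hat = P @ V2                                                 # transformed nodes, each in conv(V2)
lengths = np.linalg.norm(V1_hat[:, None, :] - V1_hat[None, :, :], axis=-1)   # transformed edge lengths
orientations = V1_hat[:, None, :] - V1_hat[None, :, :]                        # transformed edge orientations
```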

4.1 Preserving edge-length

Given two graphs with visually similar structures, a general constraint is to preserve the edge length between each original edge $(v_i,v_{i'})$ and its corresponding transformed edge $(\hat v_i,\hat v_{i'})$. Thus, with $E_{1,ii'}=\|v_i-v_{i'}\|_2$ denoting the original edge length, the transformed edge length and the pairwise potential of the first objective function can be defined as follows:

$\hat E_{1,ii'}(P)=\bigl\|\hat v_i-\hat v_{i'}\bigr\|_2=\Bigl\|\sum_{j}P_{ij}u_j-\sum_{j}P_{i'j}u_j\Bigr\|_2,$   (17)

$J_{\mathrm{pair}}(P)=\sum_{i,i'}\bigl(E_{1,ii'}-\hat E_{1,ii'}(P)\bigr)^2.$   (18)

We can add a unary term computed with node attributes to this pairwise term as follows:

$\min_{P\in\mathcal{D}}\ J_1(P)=(1-\alpha)\,\mathrm{tr}(K_p^\top P)+\alpha\,J_{\mathrm{pair}}(P).$   (19)
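Assuming the reconstructed form of Eq. (19), its value for a candidate $P$ can be computed directly; the node-dissimilarity matrix `Kp` and the weight `alpha` below are placeholders for whatever unary costs and weighting are actually used.

```python
import numpy as np

def euclidean_gm_objective(P, V1, V2, Kp, alpha=0.8):
    """(1 - alpha) * unary dissimilarity + alpha * edge-length preservation term."""
    E1 = np.linalg.norm(V1[:, None, :] - V1[None, :, :], axis=-1)        # original edge lengths
    V1_hat = P @ V2                                                      # transformed nodes
    E1_hat = np.linalg.norm(V1_hat[:, None, :] - V1_hat[None, :, :], axis=-1)
    return (1 - alpha) * np.sum(Kp * P) + alpha * np.sum((E1 - E1_hat) ** 2)
```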

Due to the nonconvexity of $J_1$, its solution often reaches a local minimum and is not binary, and the post-discretization procedure will result in low accuracy; see Fig. 3(b) for an illustration. Consequently, a transformed node $\hat v_i$ is not exactly equal to a node of $V_2$, and there is often an offset between $\hat v_i$ and its correct match. Fig. 3(a) shows this phenomenon, where each $\hat v_i$ shifts from the correct match to some degree.

Figure 3: (a) Nodes shift after being transformed by minimizing $J_1$ in a 20-vs-30 case. The blue lines are the offset vectors, and the green points are the transformed nodes $\hat v_i$. (b) Representation map (top) and its post-discretization (bottom) corresponding to (a). (c) Nodes transformed by minimizing $J_2$ with almost no offset. (d) Representation map (top) and its post-discretization (bottom) corresponding to (c). In (b) and (d), red points mark the ground-truth correspondence.

4.2 Reducing node offset

Benefiting from the property that the solution $P_1^{*}$ of Eq. (19) preserves the edge lengths of $\mathcal{G}_1$, the offset vectors of adjacent transformed nodes in $\hat V_1=P_1^{*}V_2$ have similar directions and norms, as shown in Fig. 3(a). To reduce the node offset from $\hat v_i$ to the corresponding correct match, we denote the offset vector associated with a candidate map $P$ by $o_i(P)=\sum_j P_{ij}u_j-\hat v_i$,

and we aim to minimize the sum of differences between adjacent offset vectors, i.e.,

$J_{\mathrm{off}}(P)=\sum_{i,i'}A_{ii'}\,\bigl\|o_i(P)-o_{i'}(P)\bigr\|_2^2,$   (20)

where $A_{ii'}\in\{0,1\}$ is computed to indicate the adjacency relation of the node pair $(v_i,v_{i'})$. The undirected graph here results in a symmetric $A$; therefore the induced quadratic form is positive-definite and $J_{\mathrm{off}}$ is convex.

Figure 4: Outlier removal with the transformation map obtained by alternately minimizing the two objective functions. In each iteration, the red dots are inliers, and the green plus signs are the nodes remaining after removal.

Compared to the algorithm proposed for general GM in Sec. 3.3, the distance matrix here can be computed with an explicit geometric interpretation: $D_{ij}$ is the Euclidean distance between the transformed node $\hat v_i$ and $u_j$. As shown in Fig. 3(c), the distance from $\hat v_i$ to its correct match is smaller than the distance to the other candidate nodes. Therefore, the unary term $\mathrm{tr}(D^\top P)$ can be added as a useful constraint during matching. Finally, the second objective is summarized as:

$\min_{P\in\mathcal{D}}\ J_2(P)=(1-\beta)\,\mathrm{tr}(D^\top P)+\beta\,J_{\mathrm{off}}(P).$   (21)

In general, this objective function admits a (nearly) binary solution if $\beta$ is small, which significantly improves the matching accuracy. See Fig. 3(d) for an example.

4.3 Explicit outlier-removal strategy

In practice, outliers generally occur in graphs and affect the matching accuracy. Based on the ability of the optimal representation maps obtained from Eqs. (19) and (21) to preserve the geometric structure between $\mathcal{G}_1$ and the transformed graph, we can propose an explicit outlier-removal strategy.

The transformed graph with nodes $\hat V_1=PV_2$ lies in the convex hull of $V_2$. In some sense, the operation can be viewed as a domain adaptation [46] from the source domain $V_1$ to the target domain $V_2$. The transformed graph has a geometric structure similar to that of the original graph and lies in the same space as $V_2$ with a relatively small offset. Then, we can remove outliers adaptively using a ratio-test technique. Given the two point sets $\hat V_1$ and $V_2$, we compute the Euclidean distances of all the pairs $(\hat v_i,u_j)$. For each node, we find its closest counterpart and remove all the nodes whose distance ratio exceeds a given threshold. If the number of remaining nodes is less than $n_1$, nodes that are closer to $\hat V_1$ are selected from the removed ones and added back. See Fig. 4 for an example, where most outliers are removed after several iterations. More experimental results are reported in the experimental section.
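The exact ratio-test criterion and threshold are not restated here, so the following sketch uses an assumed rule (keep a candidate node of $V_2$ only if its distance to the nearest transformed node is within a factor `tau` of the median such distance); it mirrors the alternating remove-and-rematch loop only schematically.

```python
import numpy as np
from scipy.spatial.distance import cdist

def remove_outliers(V1_hat, V2, tau=2.0, n_keep_min=None):
    """Keep nodes of V2 that lie close to some transformed node of V1_hat (assumed ratio rule)."""
    d = cdist(V1_hat, V2).min(axis=0)             # distance of every u_j to its nearest transformed node
    keep = d <= tau * np.median(d)                # assumed ratio test
    if n_keep_min is not None and keep.sum() < n_keep_min:
        keep[np.argsort(d)[:n_keep_min]] = True   # add back the closest removed nodes
    return V2[keep], keep
```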

5 FRGM with Geometric Deformation

For Euclidean GM, rigid or nonrigid geometric deformations may exist between graphs. In these cases, we need to estimate both the correspondence and the deformation parameters. This section demonstrates that FRGM can provide a new parameterization of the transformation between graphs. Due to the associative law of matrix multiplication, this parameterization composes naturally with the deformation parameters. Theoretically, this allows us to estimate the correspondence and deformation parameters alternately.

5.1 Geometric deformation

Given two point sets related by a geometric transformation $\tau$, the task of estimating both the correspondence and the parameters of $\tau$ is generally formulated as minimizing the sum of residuals:

$\min_{P,\,\tau}\ \sum_{i,j}P_{ij}\,\bigl\|\tau(v_i)-u_j\bigr\|_2^2+\gamma\,\Phi(\tau),$   (22)

where $\Phi(\tau)$ is a regularization term on $\tau$. On the one hand, most state-of-the-art registration algorithms, such as [47, 48, 49], do not explicitly recover the correspondence as a binary solution. Rather, they estimate it in a soft way as $P_{ij}\in[0,1]$ to give a probabilistic interpretation: $P_{ij}$ stands for the correspondence probability between $v_i$ and $u_j$.