Nonrigid 3D Shape Registration using an Adaptive Template
Abstract
We present a new fully-automatic non-rigid 3D shape registration (morphing) framework comprising (1) a new 3D landmarking and pose normalisation method; (2) an adaptive shape template method to accelerate the convergence of registration algorithms and achieve a better final shape correspondence; and (3) a new iterative registration method that combines Iterative Closest Points with Coherent Point Drift (CPD) to achieve a more stable and accurate correspondence establishment than standard CPD. We call this new morphing approach Iterative Coherent Point Drift (ICPD). Our proposed framework is evaluated qualitatively and quantitatively on three datasets: Headspace, BU3D and MPI-FAUST, and compared with several other methods. The proposed framework is shown to give state-of-the-art performance.
keywords:
3D registration, 3D shape morphing, 3D morphable models

1 Introduction
The goal of shape registration is to align and morph a source point set (or mesh) to a target point set. By using some form of template shape as the source, non-rigid shape registration (morphing) is able to reparametrise a collection of raw 3D scans of some object class into a consistent form. This facilitates 3D dataset alignment and subsequent 3D Morphable Model (3DMM) construction. In turn, the 3DMM constitutes a useful shape prior in many computer vision tasks, such as recognition and the reconstruction of missing parts.
Currently, methods that deform a 3D template to all members of a specific 3D object dataset use the same template shape to do the shape morphing. However, datasets representative of global object classes often have a wide variation in the spatial distribution of their constituent parts. Our primary dataset in this paper is that of the human head, where the relative positions of key parts, such as the ears, mouth and nose, are highly varied, particularly when building 3D Morphable Models (3DMMs) across a wide demographic range of age, gender and ethnicity. Using a single template shape means that key parts of the template are often not at the same relative positions as those of the raw 3D scan. This causes slow convergence of shape morphing and, worse still, leads to end results that have a visible residual error due to an inaccurate correspondence in salient local parts. To counter this, we propose an adaptive template approach that provides a tailored template for each raw 3D scan in the dataset. The adaptive template is obtained from the original template using sparse shape information (typically landmarks), thereby matching local parts of the raw 3D scan very specifically. Although this pre-process involves template shape adaptation, we do not consider it part of the main template morphing process, which operates over dense shape information.
The work presented here advances the state of the art in fully-automatic non-rigid 3D shape registration (morphing) by integrating several powerful ideas from the computer vision and graphics communities into a new shape registration framework. These include Zhu-Ramanan 2D landmarking Ramanan12 (), Iterative Closest Points (ICP) besl1992method (), Coherent Point Drift (CPD) myronenko2010 (), Gaussian Processes (GPs) Thomas17 (), and mesh editing using the Laplace-Beltrami (LB) operator Sorkine2004 (). Our contributions include: 1) a new 3D landmarking and pose normalisation method; 2) an adaptive shape template method to accelerate the convergence of registration algorithms and achieve a better final shape correspondence; 3) a new iterative registration method that combines ICP with CPD to achieve a more stable and accurate correspondence establishment than standard CPD. We call this approach Iterative Coherent Point Drift (ICPD). Our proposed framework is evaluated qualitatively and quantitatively on three datasets: Headspace headspace10 (); robertson2017morphable (); Dai_2017_ICCV (), BU3D yin2008high () and MPI-FAUST bogo2014faust (), and compared with several other methods. The proposed framework is shown to give state-of-the-art performance. Fig. 1 is a qualitative illustration of a typical result where the proposed method achieves a more stable and accurate correspondence than standard CPD. Note that the landmarks found by the proposed method are at almost exactly the same positions as their corresponding ground-truth points on the raw 3D scan. Even though standard CPD-affine is aided with the Laplace-Beltrami regularised projection (LBRP, a component of our proposed pipeline), its result shows a "squeezed" face around the eye and mouth regions, and the landmarks are far from their corresponding ground-truth positions.
The rest of the paper is structured as follows. After presenting related work, we give a technical background on our template adaptation approaches. In Sec. 4 we describe our nonrigid registration framework, while the following section evaluates it over three datasets. Lastly we present conclusions.
2 Related work
The Iterative Closest Points (ICP) algorithm arun1987least (); besl1992method () is the standard rigid-motion registration method. Several extensions of ICP to the non-rigid case have been proposed Amberg07 (); booth2018large (); hontani2012robust (); cheng2015active (); cheng2017statistical (); kou2016modified (). Often these perform well in eliminating shape differences, but suffer from overfitting and point sliding. Another approach models the transformation with thin-plate splines (TPS) bookstein1989principal () followed by robust point matching (RPM), and is known as TPS-RPM chui2000new (). However, it is slow in large-scale point set registration yang2011thin (); lee2011topology (); lee2015non (); ma2017non (). Amberg et al. Amberg07 () defined the optimal-step Non-rigid Iterative Closest Points (NICP) framework. Recently, Booth et al. booth2018large () built a Large Scale Facial Model (LSFM) using the same NICP template morphing approach with error pruning, followed by Generalised Procrustes Analysis (GPA) for alignment and Principal Component Analysis (PCA) for model construction. Li et al. li2008global () show that using proximity heuristics to determine correspondences is less reliable when large deformations are present. Global correspondence optimisation solves simultaneously for both the deformation parameters and the correspondences li2008global ().
Myronenko et al. consider the alignment of two point sets as a probability density estimation problem myronenko2010 () and call their method Coherent Point Drift (CPD). There is no closed-form solution to this optimisation, so an EM algorithm is employed to optimise the Gaussian Mixture Model (GMM) fitting. Algorithms are provided for several shape deformation models, such as affine (CPD-affine) and generally non-rigid (CPD-nonrigid). The 'non-rigid' motion model in myronenko2010 () employs a Gaussian kernel for motion field smoothing, and the M-step requires solving for a matrix W of kernel weights that generates the template deformation (GMM motion field) as GW, where G is the Gaussian kernel matrix. Such motion regularisation is related to motion coherence, which inspired the algorithm's name. The CPD method has been extended by various groups wang2011refined (); golyanik2016extended (); hu2010deformable (); trimech20173d (). Compared to TPS-RPM, CPD offers superior accuracy and stability with respect to non-rigid deformations in the presence of outliers. A modified version of CPD imposed a Local Linear Embedding topological constraint to cope with highly articulated non-rigid deformations ge2014non (); however, this extension is more sensitive to noise than CPD. A non-rigid registration method used a Student's t Mixture Model (SMM) for probability density estimation zhou2014robust (); its results are more robust and accurate on noisy data than CPD. Dai et al. Dai_2017_ICCV () proposed a hierarchical parts-based CPD-LB morphing framework to avoid under-fitting and over-fitting. It overcomes the sliding problem to some extent, but the end result still has a small tangential error.
Lüthi et al. luthi2017gaussian () model shape variations with a Gaussian process (GP), which they represent using the leading components of its Karhunen-Loève expansion. Such Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models. Gerig et al. Thomas17 () present a novel pipeline for morphable face model construction based on Gaussian processes. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model.
Template morphing methods need an automatic initialisation to bring them within the convergence basin of the global minimum of alignment and morphing. Recent work has focused on global spatial models built on top of local part detectors, sometimes known as Constrained Local Models (CLMs) smith2012joint (); zhou2013exemplar (); Creusot2013 (). Zhu and Ramanan Ramanan12 () use a tree-structured part model of the face, which both detects faces and locates facial landmarks. One major advantage of their approach is that it can handle extreme head poses even at relatively low image resolutions, and we exploit these qualities directly in our proposed registration framework.
3 Technical background for template adaptation
To achieve template adaptation, we employ and evaluate two different methods: (i) Laplace-Beltrami mesh manipulation and (ii) the posterior model (PM) of a Gaussian Process Morphable Model (GPMM). The technical background for these is given in the following sections.
3.1 Laplace-Beltrami mesh manipulation
The Laplace-Beltrami (LB) operator is widely used in 3D mesh manipulation. The LB term regularises the landmark-guided template adaptation in two ways: 1) the landmarks on the template are manipulated towards their corresponding landmarks on the raw 3D scan; 2) all other points in the original template are moved as rigidly as possible with respect to the landmarks' movement, according to an optimised cost function, described later.
Following Sorkine et al. Sorkine2007 (), the idea for measuring the rigidity of a deformation of the whole mesh is to sum up the deviations from rigidity over all vertices. Thus, the energy functional can be formed as:

E(S') = \sum_{i=1}^{n} w_i \sum_{j \in N(i)} w_{ij} \left\| (p'_i - p'_j) - R_i (p_i - p_j) \right\|^2     (1)

where we denote a mesh by S, its deformed mesh by S' and R_i is a rotation. Mesh topology is determined by n vertices and m triangles. N(i) is the set of vertices connected to vertex i; these are the one-ring neighbours. The parameters w_i, w_{ij} are fixed cell and edge weights. Note that E(S') depends solely on the geometries of S and S', i.e., on the vertex positions p_i and p'_i. In particular, since the reference mesh (our input shape) is fixed, the only variables in E(S') are the deformed vertex positions p'_i. The gradient of E(S') is computed with respect to the positions p'_i. The partial derivatives w.r.t. p'_i can be computed as:

\frac{\partial E(S')}{\partial p'_i} = \sum_{j \in N(i)} 4 w_{ij} \left( (p'_i - p'_j) - \frac{1}{2} (R_i + R_j)(p_i - p_j) \right)     (2)

Setting the partial derivatives to zero w.r.t. each p'_i gives the following sparse linear system of equations:

\sum_{j \in N(i)} w_{ij} (p'_i - p'_j) = \sum_{j \in N(i)} \frac{w_{ij}}{2} (R_i + R_j)(p_i - p_j)     (3)

The linear combination on the left-hand side is the discrete Laplace-Beltrami operator applied to p', hence the system of equations can be written as:

L p' = b     (4)

where b is an n-vector whose i-th row contains the right-hand side expression from (3). We also need to incorporate the modelling constraints into this system. In the simplest form, those can be expressed by some fixed positions

p'_j = c_j, \quad j \in F     (5)

where F is the set of indices of the constrained vertices. In our case, these are the landmark positions, automatically detected on the raw 3D data, with the corresponding points known a priori on the template. Incorporating such constraints into (4) requires substituting the corresponding variables, erasing the respective rows and columns from L, and updating the right-hand side with the values c_j.
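The constrained solve of Eqs. (4)-(5) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: a uniform path-graph Laplacian stands in for the cotangent LB operator of a real mesh, and all names are ours.

```python
import numpy as np

def solve_constrained_laplacian(L, b, fixed_idx, fixed_pos):
    """Solve L p' = b subject to p'_j = c_j for constrained vertices j,
    by substituting the constraints and erasing their rows/columns."""
    n = L.shape[0]
    free = np.setdiff1d(np.arange(n), fixed_idx)
    # Move the known (constrained) columns to the right-hand side.
    b_free = b[free] - L[np.ix_(free, fixed_idx)] @ fixed_pos
    p = np.zeros((n, b.shape[1]))
    p[fixed_idx] = fixed_pos
    # Solve the reduced system over the free vertices only.
    p[free] = np.linalg.solve(L[np.ix_(free, free)], b_free)
    return p

# Toy example: a path of 5 vertices in 1D with both endpoints pinned.
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # path-graph Laplacian
L[0, 0] = L[-1, -1] = 1.0
b = np.zeros((n, 1))                                  # zero differential coords
p = solve_constrained_laplacian(L, b, np.array([0, 4]),
                                np.array([[0.0], [4.0]]))
# With b = 0 the interior relaxes to a straight line between the pins.
```

With zero right-hand side the solution is the discrete harmonic interpolant of the pinned positions, which is the same mechanism that spreads the landmark displacements over the template mesh.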
3.2 Gaussian process morphable model
A Gaussian Process Morphable Model (GPMM) uses manually defined, arbitrary kernel functions to describe the covariance of a deformation field. This enables a GPMM to aid the construction of a 3DMM without the need for training data. The posterior models (PMs) of GPMMs are regression models of the shape deformation field: given partial observations, such as a set of landmarks and their target positions, a posterior model estimates the movements of all other points and hence the likely complete shape.
Instead of modelling absolute vertex positions using PCA, GPMMs model a shape as a deformation vector field u from a reference shape \Gamma_R, i.e. a shape \Gamma can be represented as

\Gamma = \{ x + u(x) \mid x \in \Gamma_R \}     (6)

for some deformation vector field u : \Gamma_R \rightarrow \mathbb{R}^3. We model the deformation as a Gaussian process u \sim GP(\mu, k), where \mu is a mean deformation and k is a covariance function or kernel. The core idea behind this approach is that a parametric, low-dimensional model can be obtained by representing the Gaussian process using the leading basis functions of its Karhunen-Loève expansion:

u = \mu + \sum_{i=1}^{r} \alpha_i \sqrt{\lambda_i} \phi_i, \quad \alpha_i \sim N(0, 1)     (7)

where \lambda_i and \phi_i are the leading eigenvalue/eigenfunction pairs of the integral operator associated with the kernel k.
The biggest advantage of GPMMs over statistical shape models (e.g. 3DMMs) is the much greater freedom in defining the covariance function. GPMMs allow expressive prior models for registration to be derived by leveraging the modelling power of Gaussian processes. By estimating the covariances from example data, a GPMM becomes a continuous version of a statistical shape model. When no, or only little, training data is available, arbitrary kernel functions can be used to define the covariances. However, the shapes generated by such models may not represent the required shape well, and so may not be useful as an adaptive template generator. Indeed, adequate training data is still required to make the approach usable for our adaptive template concept. We explore this later in the paper.
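As a concrete illustration of the truncated Karhunen-Loève representation in Eq. (7), the following sketch builds a low-rank deformation model from a discretised Gaussian kernel. The kernel, its parameters and the 1D "reference shape" are our own illustrative assumptions, not those of any particular GPMM implementation.

```python
import numpy as np

def gaussian_kernel(X, Y, s=1.0, ell=0.5):
    """A simple squared-exponential covariance function (an assumption)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return s * np.exp(-d2 / (2 * ell ** 2))

# Reference points (a 1D "shape" for illustration) and their Gram matrix.
X = np.linspace(0, 1, 50)[:, None]
K = gaussian_kernel(X, X)

# The leading eigenpairs give the (discretised) KL basis:
# u = mu + sum_i alpha_i * sqrt(lambda_i) * phi_i, alpha_i ~ N(0, 1).
lam, phi = np.linalg.eigh(K)
lam, phi = lam[::-1], phi[:, ::-1]        # sort eigenvalues descending
r = 5                                      # rank of the parametric model
alpha = np.random.default_rng(0).standard_normal(r)
u = phi[:, :r] @ (np.sqrt(np.clip(lam[:r], 0, None)) * alpha)  # a sample
```

Because the Gaussian kernel's spectrum decays quickly, a small rank r already captures most of the deformation variance; this is what makes the parametric model low-dimensional.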
4 Nonrigid shape registration framework
The proposed registration framework is shown in Fig. 2 and includes four stages: 1) data preprocessing: pose normalisation and 3D landmarking; 2) template adaptation: global alignment and adapting the template shape; 3) template morphing using Iterative Coherent Point Drift (ICPD); 4) point projection regularised by the Laplace-Beltrami operator (LBRP). These are detailed in the following four subsections.
4.1 Data preprocessing
Data preprocessing comprises five stages: (i) 2D landmarking, (ii) projection to 3D landmarks, (iii) pose normalisation, (iv) synthetic frontal 2D image landmarking and (v) projection to 3D landmarks.
We use the method of Zhu and Ramanan Ramanan12 () to localise facial landmarks on the texture channel of each 3D image. This 2D image contains all 5 viewpoints of the capture system, and usually two face detections are found, at 15-45 degrees yaw from frontal pose, corresponding to the left and right sides of the face. The detected 2D points form a tree structure and are projected to 3D using OBJ texture coordinates.
Each face detection employs one of thirteen tree models Ramanan12 () and we automatically learn how to orientate each of these to frontal pose, based on their 3D structure. To do this, we apply Generalised Procrustes Analysis (GPA) to each collection of 3D trees (11 of the 13 models are used by the dataset) and find the nearest-to-mean tree shape in a scale-normalised setting. We then apply a 3D face landmarker Creusot2013 () to the 3D data of each nearest-to-mean tree shape (11 of these), which generates a set of 14 landmarks with clear semantic meaning. Finally, we find the alignment that moves the symmetry plane of these 14 landmarks to the YZ plane, with the nasion above the subnasale (larger Y coordinate) and at the same Z coordinate, in order to normalise the tilt (X rotation). To complete the training phase, the mean 3D tree points for each of the 13 trees are then carried into this canonical frontal pose using the same rotation, and are used as reference points for the frontal pose normalisation of the 3D trees.
In around 1% of the dataset, only one tree is detected and that is used for pose normalisation; in the rest, 2-3 trees are detected. In the cases where 3 trees are detected, the lowest-scoring tree is always a false positive and can be discarded. For the remaining two trees, a weighted combination of the two rotations is computed using quaternions, where the weighting is based on the mean Euclidean error to the mean tree, in the appropriate tree component.
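The weighted combination of two rotations can be sketched as a normalised weighted quaternion blend, which is a good approximation to spherical interpolation for nearby rotations. The quaternion convention, error values and inverse-error weighting below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def weighted_quaternion(q1, q2, w1, w2):
    """Weighted blend of two unit quaternions (valid for nearby rotations)."""
    if np.dot(q1, q2) < 0:          # keep both in the same hemisphere
        q2 = -q2
    q = w1 * q1 + w2 * q2
    return q / np.linalg.norm(q)    # renormalise to a unit quaternion

# Example: identity vs. a 10-degree rotation about Z, weighted by the
# (hypothetical) inverse landmark error of each detected tree.
q_id = np.array([1.0, 0.0, 0.0, 0.0])
theta = np.deg2rad(10)
q_z = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
e1, e2 = 0.8, 1.2                   # hypothetical per-tree mean errors
w1, w2 = 1 / e1, 1 / e2
q = weighted_quaternion(q_id, q_z, w1 / (w1 + w2), w2 / (w1 + w2))
```

The hemisphere check matters: q and -q encode the same rotation, and blending across hemispheres would average towards zero rather than between the rotations.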
Finally we perform 2D landmarking on the synthetic (rotated to frontal) image, as more repeatable and accurate landmarks are achieved on a frontal pose image. We choose the repeatable 2D landmarks and again project these to 3D using the OBJ texture coordinates.
4.2 Template adaptation
As shown in Fig. 2, template adaptation consists of two stages: (i) global alignment followed by (ii) dynamically adapting the template shape to the data. For global alignment, we manually select the same landmarks on the template as we extract on the data scans. This needs to be done just once and so does not impact the autonomy of the framework. Then we rigidly align (without scaling) the 3D landmarks on the raw 3D data to the same landmarks on the template. The resulting rigid transformation matrix is used to align the raw data to the template.
The template is adapted to better align with the raw scan. A better template helps the later registration converge faster and gives more accurate correspondence at the beginning and end of registration. A good template has the same size and position of local facial parts (e.g. eyes, nose, mouth and ears) as the raw scan. This cannot be achieved by mesh alignment alone. We propose two methods to produce a better template, adapted to the raw 3D scan: (1) Laplace-Beltrami mesh editing; (2) template estimation via posterior GPMMs. For both methods, three ingredients are needed: landmarks on the 3D raw data, the corresponding landmarks on the template, and the original template.
4.2.1 Laplace-Beltrami mesh manipulation:
We decompose the template into several facial parts: eyes, nose, mouth, left ear and right ear. We rigidly align the landmarks on each part separately to their corresponding landmarks on the 3D raw data. These rigid transformation matrices are used to align the decomposed parts to the 3D raw data, and the rigidly transformed facial parts indicate where the corresponding parts of the original template should lie. We treat this as a mesh manipulation problem and use Laplace-Beltrami mesh editing to manipulate the original template towards the rigidly transformed facial parts, as follows: (1) the facial parts (fp) of the original template are manipulated towards their target positions, i.e. the rigidly transformed facial parts; (2) all other parts of the original template are moved as rigidly as possible.
Given the vertices of the original template stored in the matrix T and the rigidly transformed facial parts whose vertices are stored in the matrix P, we define the selection matrices S_1 and S_2 as those that select the corresponding facial-part vertices from the template and from P respectively. This linear system can be written as:

\begin{pmatrix} \lambda L \\ S_1 \end{pmatrix} T_a = \begin{pmatrix} \lambda L T \\ S_2 P \end{pmatrix}     (8)

where L is the cotangent Laplacian approximation to the LB operator and T_a is the better (adapted) template that we wish to solve for. The parameter \lambda weights the relative influence of the position and regularisation constraints, effectively determining the 'stiffness' of the mesh manipulation. As \lambda \rightarrow 0, the facial parts of the original template are manipulated exactly onto the rigidly transformed facial parts. As \lambda \rightarrow \infty, the adaptive template T_a remains at the same position as the original template T.
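The stacked system of Eq. (8) is an overdetermined least-squares problem. The sketch below solves it under illustrative assumptions: a uniform path Laplacian replaces the cotangent LB operator, and a 1D strip with two handle vertices replaces the facial parts; all names are ours.

```python
import numpy as np

def lb_editing(X, L, sel_idx, targets, lam):
    """Least-squares solve of the stacked system [lam*L; S] X' = [lam*L X; t]."""
    n = X.shape[0]
    S = np.zeros((len(sel_idx), n))
    S[np.arange(len(sel_idx)), sel_idx] = 1.0   # selection matrix
    A = np.vstack([lam * L, S])                 # regulariser + position rows
    b = np.vstack([lam * (L @ X), targets])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Toy example: drag the last vertex of a straight 1D strip to a new position.
n = 6
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # uniform Laplacian
L[0, 0] = L[-1, -1] = 1.0
X = np.linspace(0.0, 1.0, n)[:, None]
X_new = lb_editing(X, L, np.array([0, n - 1]),
                   np.array([[0.0], [2.0]]), lam=1e-3)
# Small lambda: the handles land (almost) exactly on their targets,
# while the Laplacian rows keep the interior smooth.
```

Varying lam trades handle accuracy against stiffness, which is exactly the role the parameter plays in the template adaptation.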
4.2.2 Template estimation via posterior models:
A common task in shape modelling is to infer the full shape from a set of measurements of that shape. This task can be formalised as a regression problem. The posterior models of Gaussian Process Morphable Models (GPMMs) are regression models of the deformation field: given partial observations, they estimate the potential full shape by predicting the movements of all points when the landmarks are fixed to their target positions.
In a GPMM, let X = \{x_1, \ldots, x_n\} be a fixed set of input 3D points and assume that there is a regression function f which generates a new vector field according to

y_i = f(x_i) + \epsilon_i     (9)

where \epsilon_i is independent Gaussian noise, i.e. \epsilon_i \sim N(0, \sigma^2 I). The regression problem is to infer the function f at the input points X. The possible deformation field is modelled using a Gaussian process GP(\mu, k) that models the shape variations of a given shape family.

In our case, the reference shape is the original template and the landmarks on the original template are the fixed set of input 3D points. The same landmarks on the 3D raw data give the target positions of that fixed set of input points. Given a GPMM that models the shape variations of a shape family, the adaptive template is

T_a = \{ x + \bar{u}_p(x) \mid x \in T \}     (10)

where \bar{u}_p is the mean of the posterior deformation model, conditioned on the landmark displacements. This mean is shown in Fig. 3 (6) and (7).
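The posterior-mean computation behind Eq. (10) can be sketched with standard GP regression. The RBF kernel, noise level and 1D toy template below are our own illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, s=1.0, ell=0.3):
    """Squared-exponential kernel (an illustrative choice)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return s * np.exp(-d2 / (2 * ell ** 2))

def gp_posterior_mean(X_lm, Y_disp, X_all, kernel, noise=1e-4):
    """Posterior mean deformation at X_all, given landmark displacements."""
    K = kernel(X_lm, X_lm) + noise * np.eye(len(X_lm))
    K_star = kernel(X_all, X_lm)          # cross-covariance to all points
    return K_star @ np.linalg.solve(K, Y_disp)

# Toy 1D template with two "landmarks" displaced by known amounts.
X_all = np.linspace(0, 1, 20)[:, None]
X_lm = np.array([[0.2], [0.8]])
Y_disp = np.array([[0.05], [-0.05]])      # landmark displacement field
u = gp_posterior_mean(X_lm, Y_disp, X_all, rbf)
adaptive_template = X_all + u             # template + posterior mean, as in (10)
```

At the landmarks themselves the posterior mean reproduces the observed displacements (up to the noise term), while the kernel smoothly propagates them to every other template point.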
4.3 Iterative coherent point drift
The task of non-rigid 3D registration (shape morphing) is to deform and align the template to the target raw 3D scan. Non-rigid Coherent Point Drift (CPD) myronenko2010 () gives better deformation results when partial correspondences are given, and we have found that it is more stable and converges better when the template and the raw data have approximately the same number of points. However, the correspondence is usually not known before registration. Thus, following an Iterative Closest Points (ICP) scheme besl1992method (), we supply CPD registration with coarse correspondences using 'closest points', and refine these correspondences over the iterations of the Iterative Coherent Point Drift (ICPD) approach described here. The pseudocode of ICPD is given as:
Iterative Coherent Point Drift (output: fit). Inputs are the adaptive template and the 3D scan.

    template = adaptive template;
    fit = adaptive template;
    targetV = 3D scan vertices;
    while mean(knnsearch(targetV, fit, 'Distance')) > epsilon
        idx1 = knnsearch(targetV, fit);
        opt.method = 'affine';             % affine CPD registration
        TransformA = cpd_register(targetV(idx1,:), template, opt);
        verts = TransformA.Y;
        idx2 = knnsearch(targetV, verts);
        opt.method = 'nonrigid_lowrank';   % non-rigid CPD registration
        Transform = cpd_register(targetV(idx2,:), template, opt);
        fit = Transform.Y;
    end
We use the original CPD code package, available online, as library calls for ICPD; other option parameters can be found in the CPD authors' released code. The global affine transformation serves as a small adjustment in the correspondence computation, and the improved correspondence idx2 is used as a prior for CPD non-rigid registration. The qualitative output of ICPD is very smooth, a feature inherited from standard CPD. A subsequent regularised point projection process is required to capture the target shape detail, and this is described next.
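The correspondence-refinement structure of ICPD can be sketched as follows. This is only a structural sketch under stated assumptions: a closed-form least-squares affine fit stands in for the cpd_register calls, so the deformation model here is far simpler than CPD itself, and all names are ours.

```python
import numpy as np

def nearest(A, B):
    """For each row of A, index of and distance to its nearest row of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    j = d2.argmin(1)
    return j, np.sqrt(d2[np.arange(len(A)), j])

def fit_affine(src, dst):
    """Least-squares affine map taking src towards dst (stand-in for CPD)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return A @ M

def icpd_sketch(template, scan, tol=1e-3, max_iter=20):
    fit = template.copy()
    for _ in range(max_iter):
        idx, dist = nearest(fit, scan)    # closest-point correspondences
        if dist.mean() < tol:             # converged: small residual
            break
        fit = fit_affine(fit, scan[idx])  # deform towards matched points
    return fit

# Toy example: recover a scaled, translated copy of a random point set.
rng = np.random.default_rng(1)
template = rng.random((200, 3))
scan = 1.5 * template + np.array([0.3, -0.2, 0.1])
result = icpd_sketch(template, scan)
```

The key structural point mirrors ICPD: each iteration first refreshes the correspondences from the current fit, then re-registers using those correspondences, so matching and deformation improve each other in turn.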
4.4 Laplace-Beltrami regularised projection
When ICPD has deformed the template close to the scan, point projection is required to eliminate any remaining (normal) shape distance error. Again, we treat this as a mesh editing problem with two ingredients. First, position constraints are provided by those vertices with mutual nearest neighbours between the deformed template and the raw scan; using mutual nearest neighbours reduces sensitivity to missing data. Second, regularisation constraints are provided by the LB operator, which acts to retain the local structure of the mesh. We call this process Laplace-Beltrami regularised projection (LBRP), as shown in the registration framework in Fig. 2.
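The mutual-nearest-neighbour selection used for the position constraints can be sketched directly. This is an illustrative numpy-only implementation with our own names; a real pipeline would use a spatial index for large meshes.

```python
import numpy as np

def mutual_nn(A, B):
    """Index pairs (i, j) where A[i] and B[j] are each other's nearest neighbour."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    ab = d2.argmin(1)                 # nearest B point for each A point
    ba = d2.argmin(0)                 # nearest A point for each B point
    i = np.arange(len(A))
    keep = ba[ab] == i                # keep only mutually agreeing pairs
    return i[keep], ab[keep]

# Toy example: the isolated point A[2] finds a neighbour in B,
# but is not anyone's nearest neighbour, so it is filtered out.
A = np.array([[0.0], [1.0], [10.0]])
B = np.array([[0.1], [0.9]])
i_idx, j_idx = mutual_nn(A, B)
```

Filtering to mutual pairs is what gives the robustness to missing data: template vertices over a hole in the scan match some distant scan point, but that scan point's own nearest neighbour lies elsewhere, so the pair is rejected and contributes no position constraint.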
We write the point projection problem as a linear system of equations. Given the vertices of the scan stored in the matrix V_s and the deformed template obtained by ICPD whose vertices are stored in the matrix V_t, we define the selection matrices S_1 and S_2 as those that select the vertices with mutual nearest neighbours from the deformed template and the scan respectively. This linear system can be written as:

\begin{pmatrix} \lambda L \\ S_1 \end{pmatrix} X = \begin{pmatrix} \lambda L V_t \\ S_2 V_s \end{pmatrix}     (11)

where L is the cotangent Laplacian approximation to the LB operator and X stores the projected vertex positions that we wish to solve for. The parameter \lambda weights the relative influence of the position and regularisation constraints, effectively determining the 'stiffness' of the projection. As \lambda \rightarrow 0, the projection tends towards nearest-neighbour projection. As \lambda \rightarrow \infty, the deformed template will only be allowed to transform rigidly.
5 Evaluation
We evaluated the proposed registration framework on three datasets: Headspace headspace10 (); robertson2017morphable (); Dai_2017_ICCV (), BU3D (neutral expression) yin2008high () and MPI-FAUST (intra-subject challenge) bogo2014faust (). The latter two have ground-truth information. For evaluation on the Headspace dataset, we use the error relative to manually-defined landmarks and the average nearest-point distance error. Recently, two registration frameworks have become publicly available for comparison: Basel's Open Framework (OF) luthi2017gaussian () and the Large-scale Face Model (LSFM) pipeline booth2018large ().
5.1 Internal comparison of approaches
Using the BU3D dataset, we compared the performance of (i) the proposed ICPD registration, (ii) ICPD with an adaptive template using LB mesh manipulation and (iii) ICPD with an adaptive template using a posterior model (PM). The mean per-vertex error is computed between the registration results and their ground-truth. The number of ICPD iterations and the computation time are recorded on the same computation platform. The per-vertex error plot in Fig. 4 illustrates that the adaptive template improves the correspondence accuracy of ICPD, and the number of ICPD iterations and the computation time are significantly decreased by the adaptive template method. In particular, the adaptive template using LB mesh manipulation performs better than the adaptive template using a posterior model. Thus, we employ the adaptive template using LB mesh manipulation in the later experiments.
5.2 Correspondence comparison
5.2.1 Headspace:
We evaluate correspondence accuracy both qualitatively and quantitatively; 1212 scans from Headspace are used for evaluation. A typical registration result is shown in Fig. 5. For all methods apart from the proposed one, there are clearly significant errors around the ear region, the eye region, or even multiple regions. We use two metrics for quantitative shape registration evaluation: 1) landmark error; 2) per-vertex nearest-point error. The per-vertex nearest-point error is computed by measuring the nearest-point distance from the morphed template to the raw scan and averaging over all vertices. As can be seen in Fig. 6, the proposed method has the best performance on both metrics when compared to Basel's Open Framework (OF) luthi2017gaussian () and the LSFM booth2018large () registration approach. The OF method produces a smoothed output without much shape detail. The LSFM method captures shape detail, but has greater landmark error and per-vertex nearest-point error. The quantitative evaluation in Fig. 6 validates that the proposed method outperforms the other two contemporary methods.
5.2.2 BU3D:
For the BU3D dataset, 100 scans with neutral expression are used for evaluation, and 12 landmarks are used for adaptive template generation. Qualitatively, from Fig. 7 (1) and (2), the adaptive template can be seen to improve the registration performance. As shown in Fig. 8 (1), compared with the ground-truth data, over 90% of the registration results have less than 2 mm per-vertex error. The proposed method has the best performance in face shape registration.
5.2.3 MPI-FAUST:
Without an adaptive template, ICPD, OF and LSFM cannot be used for body registration, as the template is often outside the convergence basin. For the MPI-FAUST dataset, we use 100 scans for evaluation and 10 landmarks for adaptive template generation. Note that the body is larger than the face and head data, so the measurement unit for body shape registration is centimetres. As shown in Fig. 8 (2), compared with the ground-truth data, over 75% of the registration results have less than 3 cm per-vertex error. The proposed method has the best performance in body shape registration.
Dataset        | Proposed | No adaptive template | OF     | LSFM
Headspace (mm) | 0.5194   | 1.5726               | 1.4617 | 1.1036
BU3D (mm)      | 0.9775   | 2.0730               | 2.3480 | 1.4062
FAUST (cm)     | 3.9737   | 5.2233               | 5.8992 | –
6 Conclusions
We proposed a new fully-automatic shape registration framework with an adaptive template initialisation. The adaptive template accelerates the convergence of registration algorithms and achieves a more accurate correspondence. We provided two methods to achieve template adaptation, Laplace-Beltrami mesh manipulation and the posterior model of a GPMM; in particular, the adaptive template using LB mesh manipulation performs better than the one using a GP posterior model. We also proposed a new morphing method that combines the ICP and CPD algorithms and is both more stable and more accurate in correspondence establishment. We evaluated the proposed framework both qualitatively and quantitatively on three datasets: Headspace, BU3D and MPI-FAUST. The proposed framework has better performance than the other methods across all datasets.
References
 (1) X. Zhu, D. Ramanan, Face detection, pose estimation, and landmark localization in the wild, in: Proceedings of CVPR, 2012, pp. 2879–2886.
 (2) P. J. Besl, N. D. McKay, Method for registration of 3d shapes, in: Sensor Fusion IV: Control Paradigms and Data Structures, Vol. 1611, International Society for Optics and Photonics, 1992, pp. 586–607.
 (3) A. Myronenko, X. Song, Point set registration: Coherent point drift, IEEE transactions on pattern analysis and machine intelligence 32 (12) (2010) 2262–2275.

 (4) T. Gerig, A. Forster, C. Blumer, B. Egger, M. Lüthi, S. Schönborn, T. Vetter, Morphable face models - an open framework, CoRR abs/1709.08398. URL http://arxiv.org/abs/1709.08398
 (5) O. Sorkine, D. Cohen-Or, Y. Lipman, M. Alexa, C. Rössl, H.-P. Seidel, Laplacian surface editing, in: Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, 2004, pp. 175–184.
 (6) C. Duncan, R. Armstrong, The Alder Hey Headspace Project (2011).
 (7) B. Robertson, H. Dai, N. Pears, C. Duncan, A morphable model of the human head validating the outcomes of an agedependent scaphocephaly correction, International Journal of Oral and Maxillofacial Surgery 46 (2017) 68.
 (8) H. Dai, N. Pears, W. A. P. Smith, C. Duncan, A 3d morphable model of craniofacial shape and texture variation, in: The IEEE International Conference on Computer Vision (ICCV), 2017.
 (9) L. Yin, X. Chen, Y. Sun, T. Worm, M. Reale, A high-resolution 3d dynamic facial expression database, in: Automatic Face & Gesture Recognition, 2008. FG'08. 8th IEEE International Conference on, IEEE, 2008, pp. 1–6.
 (10) F. Bogo, J. Romero, M. Loper, M. J. Black, Faust: Dataset and evaluation for 3d mesh registration, in: Proceedings of CVPR, 2014, pp. 3794–3801.
 (11) K. S. Arun, T. S. Huang, S. D. Blostein, Least-squares fitting of two 3d point sets, IEEE Transactions on pattern analysis and machine intelligence (5) (1987) 698–700.
 (12) B. Amberg, S. Romdhani, T. Vetter, Optimal step nonrigid icp algorithms for surface registration, in: Proceedings of CVPR., 2007, pp. 1–8.
 (13) J. Booth, A. Roussos, A. Ponniah, D. Dunaway, S. Zafeiriou, Large scale 3d morphable models, International Journal of Computer Vision 126 (2-4) (2018) 233–254.
 (14) H. Hontani, T. Matsuno, Y. Sawada, Robust nonrigid icp using outlier-sparsity regularization, in: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, IEEE, 2012, pp. 174–181.
 (15) S. Cheng, I. Marras, S. Zafeiriou, M. Pantic, Active nonrigid icp algorithm, in: Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, Vol. 1, IEEE, 2015, pp. 1–8.
 (16) S. Cheng, I. Marras, S. Zafeiriou, M. Pantic, Statistical nonrigid icp algorithm and its application to 3d face alignment, Image and Vision Computing 58 (2017) 3–12.
 (17) Q. Kou, Y. Yang, S. Du, S. Luo, D. Cai, A modified nonrigid icp algorithm for registration of chromosome images, in: International conference on intelligent computing, Springer, 2016, pp. 503–513.
 (18) F. L. Bookstein, Principal warps: Thin-plate splines and the decomposition of deformations, IEEE Transactions on pattern analysis and machine intelligence 11 (6) (1989) 567–585.
 (19) H. Chui, A. Rangarajan, A new algorithm for nonrigid point matching, in: Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, Vol. 2, IEEE, 2000, pp. 44–51.
 (20) J. Yang, The thin plate spline robust point matching (tpsrpm) algorithm: A revisit, Pattern Recognition Letters 32 (7) (2011) 910–918.
 (21) J.H. Lee, C.H. Won, Topology preserving relaxation labeling for nonrigid point matching, IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (2) (2011) 427–432.
 (22) A. X. Lee, M. A. Goldstein, S. T. Barratt, P. Abbeel, A nonrigid point and normal registration algorithm with applications to learning from demonstrations, in: Robotics and Automation (ICRA), 2015 IEEE International Conference on, IEEE, 2015, pp. 935–942.
 (23) J. Ma, J. Zhao, J. Jiang, H. Zhou, Nonrigid point set registration with robust transformation estimation under manifold regularization., in: AAAI, 2017, pp. 4218–4224.
 (24) H. Li, R. W. Sumner, M. Pauly, Global correspondence optimization for nonrigid registration of depth scans, in: Computer graphics forum, Vol. 27, Wiley Online Library, 2008, pp. 1421–1430.
 (25) P. Wang, P. Wang, Z. Qu, Y. Gao, Z. Shen, A refined coherent point drift (cpd) algorithm for point set registration, Science China Information Sciences 54 (12) (2011) 2639–2646.
 (26) V. Golyanik, B. Taetz, G. Reis, D. Stricker, Extended coherent point drift algorithm with correspondence priors and optimal subsampling, in: Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, IEEE, 2016, pp. 1–9.
 (27) Y. Hu, E.-J. Rijkhorst, R. Manber, D. Hawkes, D. Barratt, Deformable vessel-based registration using landmark-guided coherent point drift, in: International Workshop on Medical Imaging and Virtual Reality, Springer, 2010, pp. 60–69.
 (28) I. H. Trimech, A. Maalej, N. E. B. Amara, 3d facial expression recognition using nonrigid cpd registration method, in: Information and Digital Technologies (IDT), 2017 International Conference on, IEEE, 2017, pp. 478–481.
 (29) S. Ge, G. Fan, M. Ding, Non-rigid point set registration with global-local topology preservation, in: Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, IEEE, 2014, pp. 245–251.
 (30) Z. Zhou, J. Zheng, Y. Dai, Z. Zhou, S. Chen, Robust non-rigid point set registration using Student's t mixture model, PloS one 9 (3) (2014) e91381.
 (31) M. Lüthi, T. Gerig, C. Jud, T. Vetter, Gaussian process morphable models, IEEE transactions on pattern analysis and machine intelligence.
 (32) B. M. Smith, L. Zhang, Joint face alignment with nonparametric shape models, in: Proceedings of ECCV, 2012, pp. 43–56.
 (33) F. Zhou, J. Brandt, Z. Lin, Exemplar-based graph matching for robust facial landmark localization, in: Proceedings of ICCV, 2013, pp. 1025–1032.
 (34) C. Creusot, N. E. Pears, J. Austin, A machine-learning approach to keypoint detection and landmarking on 3d meshes, Int. Journ. Computer Vision (1) (2013) 146–179.
 (35) O. Sorkine, M. Alexa, As-rigid-as-possible surface modeling, in: Proceedings of the Fifth Eurographics Symposium on Geometry Processing, 2007, pp. 109–116.