Mesh-to-raster region-of-interest-based non-rigid registration of multi-modal images


Rosalia Tatano, Benjamin Berkels
Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen University, Schinkelstraße 2, 52062 Aachen, Germany

Thomas M. Deserno
Peter L. Reichertz Institute for Medical Informatics, University of Braunschweig – Institute of Technology and Hannover Medical School, Mühlenpfordtstr. 23, 38106 Braunschweig, Germany
Abstract

Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh, while the patient data is provided by CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application, and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem where the objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluorescence (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithms implemented with the Insight Segmentation and Registration Toolkit (ITK) and with Elastix.

Curve-to-pixel registration, Surface-to-voxel registration, ROI alignment, Evaluation, STAPLE, Virtual Physiological Human

Copyright 2017 Society of Photo Optical Instrumentation Engineers (SPIE).

Journal of Medical Imaging 4(4), 044002 (27 October 2017).

http://dx.doi.org/10.1117/1.JMI.4.4.044002


*Corresponding author: Rosalia Tatano, tatano@aices.rwth-aachen.de

1 Introduction

Two-dimensional (2D) as well as three-dimensional (3D) images play a crucial role in diagnosis, treatment planning, treatment, and the assessment of progression and/or regression of a condition or disease in a patient. In this context, comparisons of subject data acquired using different imaging modalities or of subject and model data are often necessary. Hence, multi-modal image registration methods aim at aligning images (2D) or volumes (3D) acquired with different devices, thus integrating the information provided by this data. The goal is to find the optimal transform that best aligns structures in two input 2D or 3D images [1].

Mutual information (MI) has been widely used in multi-modal image registration [2, 3]. The idea is to iteratively maximize the MI between the two images globally with respect to the transformation. This is equivalent to minimizing the joint entropy of the two data sets, which is attained when the two images or the two volumes are correctly registered.
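To make this concrete, the following minimal numpy sketch estimates MI from a joint intensity histogram; the function name, the bin count, and the assumption of equally sized, already resampled input images are illustrative choices, not part of the registration tools compared later.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate the mutual information between two equally sized images
    from their joint intensity histogram (illustrative sketch)."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist_2d / hist_2d.sum()          # joint probability
    p_a = p_ab.sum(axis=1)                  # marginal of img_a
    p_b = p_ab.sum(axis=0)                  # marginal of img_b
    nz = p_ab > 0                           # avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a[:, None] * p_b[None, :])[nz]))
```

A registration driven by this measure would evaluate it repeatedly while varying the transformation applied to one of the images.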

However, in several cases, a particular region of interest (ROI) is predefined in the medical recording, such as a tumorous region, a lesion, or some other pathology. Therefore, accurate ROI alignment is of primary importance. Moreover, in some applications, the ROI might change, e.g., due to pathology or tumor growth. Using MI to register these ROIs might lead to inaccuracies, since MI is computed on the entire image, disregarding the corresponding and overlapping parts of the images, and hence is sensitive to the ROI's size and content [3].

Therefore, ROI-based registration has already been addressed in several works. Wilkie et al.[4] proposed a modification of MI registration that takes into account information from the ROIs, using a weighted combination of the statistics of the entire images and of the ROIs. However, the registration is affected by the value of the weighting parameter, which is difficult to determine, and by the ROIs' probability distributions. Yavariabdi et al. presented a registration method for magnetic resonance imaging (MRI) and transvaginal ultrasound (TUS) that matches manually marked contours of the ROIs in the two modalities through a one-step deformable iterative closest point (ICP) method[5]. Gu et al.[6] proposed a contour-guided deformable image registration (DIR) algorithm for adaptive radiotherapy that deforms images using a demons registration algorithm with an additional regularization term based on modified image intensities inside manually marked ROIs. These modified images differ from the original image only if the intensities of the ROIs differ from the surrounding. Finite element model (FEM)-based deformable registration has been employed by Penjweini et al.[7] to match the contours of the ROIs in a series of computed tomography (CT) scans of the lungs acquired pre-operatively with intra-operative images acquired using an infrared camera-based navigation system during the surgery stage of pleural photodynamic therapy. Zhong et al.[8] proposed a method for a dental implant guide that uses ICP on teeth contour point sets extracted from CT scans and from cross sections of a 3D patient model to retrieve the rigid transformation between the two images. A registration method for dynamic contrast-enhanced MRI (DCE-MRI) images of the liver is proposed by Zhang et al.,[9] but the energy to be minimized is evaluated only on the segmented liver, disregarding the other image areas completely.

Whenever a ROI is considered, its automatic segmentation is important. Obviously, the accuracy of the registration depends on the accuracy of the segmentation. ROI segmentation is frequently performed at the pixel or contour level, and several approaches have been proposed in the literature. By way of example, the following approaches are taken from dentistry, where digital photography (DP) and quantitative light-induced fluorescence (QLF) are common 2D imaging modalities:

  • manual draw: Hope et al. [10] manually draw the contour around the boundary of the tooth;

  • gray scale: A threshold technique based on the intensities of the tooth region and of the background is used in Yan et al. [11] to determine the tooth region in fluorescence and white light images. The underlying assumption is that the tooth region has higher intensities than the background;

  • color: A color-based segmentation technique combined with morphological operations is used in Datta et al. [12] to segment a tooth from gum, lips and neighboring teeth in RGB images;

  • statistical model: In Mansoor et al. [13], an initial segmentation of the tooth was achieved using a Gauss-Markov random field statistical model and then refined by the practitioner;

  • active contour: In Shah et al. [14], the contour of teeth from dental images is extracted using an active contour algorithm that depends on the intensity of the tooth region. However, the algorithm is sensitive to poor contrast in image intensities and the presence of neighboring teeth.

Another issue in registration is multi-modality. Usually, multi-modal means that data from the same subject is acquired with different imaging modalities, such as, for example, computed tomography (CT) and magnetic resonance imaging (MRI). Furthermore, atlas registration is frequently required in modern medicine to transfer knowledge coded in a general model (i.e., the atlas) to a specific subject (e.g., the patient in a diagnostic or therapeutic process). Such comparison of virtual physiological human (VPH) models[15] with subject-specific scan data poses another challenge for registration: the VPH models are usually given as triangulated meshes, while patient measurements are obtained with computed axial tomography (CAT) scanners and stored as pixel or voxel data [16]. This yields curve-to-pixel and surface-to-voxel registration problems in 2D and 3D, respectively, regardless of whether registration is considered as a global or a local (i.e., ROI-based) problem.

In our previous work, a curve-to-pixel registration method was presented [17, 18] and used to align the ROIs of 2D images acquired with QLF and DP. The ROIs are segmented by converting the images to grayscale with a color space transform adapted to the imaging modality and then thresholding. Registration is based on aligning the ROI's contour, i.e., the tooth region. Our curve-to-pixel method allows superimposing DP with QLF and thus a direct comparison of the detected demineralized areas, an undesirable side effect of orthodontic treatment with fixed appliances [19]. To the best of our knowledge, registration of QLF and DP for direct comparison of these two image modalities had not been explicitly addressed before.

In our study on the comparison of the demineralized areas in DP and QLF images of a tooth, the demineralized area on the tooth surface is often more evident in one modality than in the other, or is even indistinguishable in one of the modalities. Thus, the correlation between the demineralized areas shown in the two modalities needs to be investigated. Relying on ROI features can induce a correlation bias in the registration results, and therefore a registration method that does not use this information is desirable. In addition, while it is relatively easy to segment the tooth region in QLF images, this segmentation is more challenging in the DP due to the poor contrast between tooth and gum colors and the lack of separation of neighboring teeth.

Motivated by the setting outlined above, the aim of the present work is to provide a general registration methodology that is applicable in 2D and 3D, copes with different imaging modalities as well as types of data representations (mesh and raster-scan), and handles global and local (ROI-based) problems. The key features of the proposed method, whose combination sets it apart from the existing methods, are as follows:

  • a contour-to-image approach: the proposed method is able to directly link data types of different dimension, e.g. it can align a surface to a volumetric image. Most existing methods can only align data of the same dimension and type, e.g., surface-to-surface or image-to-image;

  • only ROI shape required: previous works rely on information about the interior of the entire image or of the ROI (e.g., internal features, intensities, statistics). The proposed method does not make any assumption on the features inside the ROIs. This is particularly suitable for studies where the correlation of the modalities inside the ROIs is unknown and yet to be determined. In fact, the use of feature-based registration algorithms is bound to induce a bias towards the correlation assumed by the registration;

  • no strict requirement on the ROI segmentation in the reference image: unlike previous works, the proposed method only requires an accurate contour of the ROI in the template. The classification of the reference image ROI only needs to provide sufficient information about the ROI’s shape (see below for an example);

  • topology preserving ROI segmentation: the method can also be used to segment the ROI in the reference image. In this case, the contour of the template image serves as initial guess for the segmentation of the ROI in the other image modality. This is helpful in case the reference image modality is difficult to segment but the topology of the segment is known;

  • nonlinear least-squares problem: the proposed method leads to a nonlinear least-squares problem, which can be solved efficiently using the Gauss-Newton algorithm. Many multi-modal registration approaches, like mutual information, lead to much more involved optimization problems with fewer (or no) guarantees on the optimality of the minimization algorithms.

Due to the properties listed above, the proposed method is particularly suited for the QLF/DP registration problem it was originally designed for. Indeed, the proposed method allows aligning the tooth areas in the two modalities without relying on ROI features (cf. ii). Due to the challenges posed by the classification of the photo into tooth area and background, neighboring teeth are still present. However, since the bottom and upper parts of the tooth region are clearly delineated, the proposed method is able to extract the contour of a target tooth (cf. iii), disregarding the neighboring teeth at its sides, and to align the corresponding tooth region to the one shown in the QLF, since the algorithm preserves the ROI topology (cf. i, iv). Nevertheless, the method can be applied to a much larger class of registration problems.

Furthermore, we aim to comprehensively and reliably evaluate the general registration approach using sufficiently large and reliable data sets.

2 Materials and methods

In this section, we present a general mesh-to-raster (M2R) registration framework, its application, and the methodology used for quantitative evaluation.

2.1 M2R registration method

The M2R method is constructed in the continuous setting as a minimization problem and then discretized. After the discretization, we propose a numerical minimization strategy. Moreover, a parametric registration is performed to provide a first alignment of the two input images.

2.1.1 Continuous approach

Let us assume that we are given two data sets, named R and T, of the same anatomical structure acquired with different modalities, and that the image R is given as a mapping R: Ω → ℝ, d ∈ {2, 3}, on the unit cube Ω = [0, 1]^d. Let Γ indicate a hypersurface, i.e. a curve for d = 2 and a surface for d = 3, representing the contour of the region of interest (ROI). Γ needs to be extracted from T in case T is also an image, or may be identical to T, if T was directly acquired as a hypersurface, for instance, by a laser scan.

We denote the region of interest in the image R by G and now want to find a non-rigid deformation φ: Ω → ℝ^d that matches φ(Γ) to the boundary of the set G. To this end, let d_G be the signed distance function of G, i.e. d_G(x) = ±dist(x, ∂G). Thus, |d_G(x)| is the Euclidean distance of the point x to the boundary of G, where the sign is positive if x is outside of G and negative otherwise [20]. Then, the desired alignment of φ(Γ) and ∂G is attained by minimizing the energy

\[ E[\varphi] = \int_\Gamma w(x)\,\big(d_G(\varphi(x))\big)^2 \,\mathrm{d}\mathcal{H}^{d-1}(x) \;+\; \lambda \int_\Omega |\Delta\varphi(x)|^2 \,\mathrm{d}x \qquad (1) \]

where Δφ = (Δφ₁, …, Δφ_d) is the vector of the Laplacians of the components of φ and 𝓗^{d−1} denotes the (d−1)-dimensional Hausdorff measure. Thus, the first term is a hypersurface integral. In particular, it is a curve integral in case d = 2 and a surface integral in case d = 3.

The energy E measures the distance of the deformed hypersurface φ(Γ) to the boundary of the ROI in the image R and the smoothness of the deformation. The parameter λ controls the smoothness of φ. Here, w are application dependent weights defined to control the influence of the hypersurface points. Since the data term uses the integral over the hypersurface, it involves only the deformation of Γ. However, the use of a higher order regularizer extends the non-rigid deformation to the whole domain Ω.

In contrast to our previous implementation [17, 18], here the data term is formulated as the integral over the hypersurface. Previously, the data term was defined as the sum over the hypersurface points of the weighted signed distance function evaluated at the deformed hypersurface points. However, this approach might lead to problems if the hypersurface points are not approximately equidistant. Using the integral instead, the distance of a hypersurface point to its neighbors is taken into account.
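As a rough illustration of the two energy terms (not the authors' implementation), the following Python sketch evaluates a discrete surrogate of the energy for a 2D image: the ROI's signed distance map, sampled at the deformed contour points, forms the data term, and the squared discrete Laplacian of the displacement field forms the regularizer. The helper names, the (row, col) pixel-coordinate convention, and the uniform grid spacing h are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def signed_distance(roi_mask, spacing=1.0):
    """Signed distance to the ROI boundary: positive outside, negative inside."""
    outside = ndimage.distance_transform_edt(~roi_mask, sampling=spacing)
    inside = ndimage.distance_transform_edt(roi_mask, sampling=spacing)
    return outside - inside

def energy_2d(d_roi, deformed_pts, weights, lengths, displacement, lam, h):
    """Discrete surrogate of the registration energy in 2D:
    weighted squared signed distances at the (already deformed) contour points
    plus a squared-Laplacian penalty on the displacement field."""
    d_vals = ndimage.map_coordinates(d_roi, deformed_pts.T, order=1)
    data_term = np.sum(weights * lengths * d_vals ** 2)
    reg = 0.0
    for k in range(displacement.shape[-1]):
        lap = ndimage.laplace(displacement[..., k]) / h ** 2  # discrete Laplacian
        reg += np.sum(lap ** 2) * h ** 2                      # integrate over the domain
    return data_term + lam * reg
```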

2.1.2 Discretization

The deformation φ is expressed as displacement u via φ(x) = x + u(x), for x ∈ Ω, noting that Δφ = Δu. For the spatial discretization of u, we use multilinear Finite Elements on a uniform rectangular grid on the image domain Ω [21]. Let {ψ_i}_{i∈I} denote the FE basis functions, with nodal index set I. Let M and L denote the lumped mass matrix and the stiffness matrix respectively, i.e. M_{ij} = ∫_Ω I_h(ψ_i ψ_j) dx and L_{ij} = ∫_Ω ∇ψ_i · ∇ψ_j dx, where I_h is the bilinear Lagrangian interpolation. Although the chosen regularizer involves second derivatives, ∫_Ω |ΔU|² dx can be approximated as (L Ū)ᵀ M⁻¹ (L Ū) [22], using the lumped mass and stiffness matrices. Here, Ū denotes the vector of nodal values that uniquely represents the scalar FE function U.
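A minimal sketch of this approximation for a scalar FE function on an n x n tensor-product grid is given below; assembling the 2D matrices from 1D factors via Kronecker products, and reusing the lumped 1D mass inside the stiffness product, are simplifying assumptions of this sketch rather than the exact assembly used by the authors.

```python
import numpy as np
import scipy.sparse as sp

def fe_matrices_1d(n, h):
    """1D linear FE stiffness and (row-sum lumped) mass matrices on a uniform grid."""
    e = np.ones(n)
    K = sp.diags([-e[:-1], 2 * e, -e[:-1]], [-1, 0, 1]).tocsr() / h
    K[0, 0] = K[-1, -1] = 1.0 / h                     # natural boundary rows
    m = np.full(n, h); m[0] = m[-1] = h / 2           # lumped mass entries
    return K, sp.diags(m).tocsr()

def laplacian_regularizer_2d(u_nodal, n, h, lam):
    """Approximate lam * int |Laplace U|^2 dx as lam * (L u)^T M^{-1} (L u)."""
    K1, M1 = fe_matrices_1d(n, h)
    L = sp.kron(M1, K1) + sp.kron(K1, M1)             # tensor-product stiffness
    M = sp.kron(M1, M1)                               # tensor-product lumped mass
    v = L @ u_nodal
    return lam * float(v @ (v / M.diagonal()))
```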

2.1.3 Minimization

In contrast to our previous implementation [18], the data term is now evaluated using simplicial finite elements on the hypersurface, with a quadrature rule for the numerical evaluation of the integral. The minimization of E is formulated as a nonlinear least squares problem, min_φ ‖F[φ]‖²₂, where

\[ F[\varphi] = \Big( \big(\sqrt{\omega_{j,k}}\; d_G(\varphi(q_{j,k}))\big)_{j=1,\dots,S;\,k=1,\dots,Q},\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_1,\; \dots,\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_d \Big) \qquad (2) \]

Here, ω_{j,k} are the weights corresponding to the k-th quadrature point q_{j,k} in the j-th simplex describing the hypersurface, and Ū_l is the nodal vector of the l-th component of the displacement. S and Q denote the total number of simplices and of quadrature points per simplex, respectively. The minimization problem is solved efficiently using the Gauss-Newton method [23]. To prevent the minimization from getting stuck in local minima, the non-rigid registration problem is solved for decreasing values of the parameter λ. For the sake of simplicity, we use this strategy instead of a multi-level approach, which would also require creating a multi-level representation of the unstructured simplicial grid of the hypersurface.
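The overall strategy can be summarized by the following generic sketch: a plain Gauss-Newton loop wrapped in a cascade over decreasing values of λ, with the solution for one λ serving as the initial guess for the next. The callables residual_for_lambda and jacobian_for_lambda are hypothetical stand-ins for the problem-specific residual (2) and its Jacobian; a dense least-squares solve is used here purely for brevity.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-8):
    """Generic Gauss-Newton iteration for min_x ||F(x)||^2 (sketch)."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        F, J = residual(x), jacobian(x)
        step = np.linalg.lstsq(J, -F, rcond=None)[0]   # solves J^T J step = -J^T F
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

def cascadic_registration(residual_for_lambda, jacobian_for_lambda, x0, lambdas):
    """Solve the non-rigid problem for a decreasing sequence of lambda values,
    using each solution as the initial guess for the next (smaller) lambda."""
    x = x0
    for lam in sorted(lambdas, reverse=True):
        x = gauss_newton(residual_for_lambda(lam), jacobian_for_lambda(lam), x)
    return x
```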

2.2 Parametric registration algorithm

To provide a reasonable initial guess for the Gauss-Newton algorithm above, a regularized parametric registration [24] is performed. The aim is to find an affine deformation φ_p(x) = Ax + b, where A is a d×d matrix and b a translation vector, that minimizes the energy E_p[A, b], with

\[ E_p[A,b] = \int_\Gamma w(x)\,\big(d_G(Ax+b)\big)^2 \,\mathrm{d}\mathcal{H}^{d-1}(x) \;+\; \mu_1\, R_1[A] \;+\; \mu_2\, R_2[A] \qquad (3) \]

where the data term is the same as in the non-rigid model and Ax + b is the affine deformation applied to the hypersurface point x. In contrast to our previous formulation [18], the prior for the deformation is the sum of two terms, R₁ and R₂, with positive scalars μ₁ and μ₂ that weight their contribution to the value of the energy. The first term, R₁, is the Dirichlet energy of the displacement φ_p − id; noting that the Jacobian of this displacement is A − 𝟙, with 𝟙 the identity matrix, R₁ equals the squared Frobenius norm of A − 𝟙 on the unit domain. The second term, R₂, is the squared Frobenius norm of a matrix built from 𝟙 and the diagonal matrix D whose entries are the diagonal entries of A. Through this term, the deviation of all possible ratios of the diagonal entries of A from 1 is penalized, thus isotropic scalings are preferred to anisotropic ones.

Similarly to the non-rigid case, the parametric registration is formulated as a least squares problem by defining a residual vector F_p analogous to (2) and solved using Gauss-Newton.
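As an illustration (not the exact formulation of (3)), the sketch below builds a least-squares residual for the affine pre-registration in 2D from the weighted signed-distance values at the transformed contour points and a Dirichlet-type prior on A; the anisotropy prior is omitted, and the parameter layout and sampling convention are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def affine_residual(params, d_roi, pts, weights, lengths, mu1):
    """Least-squares residual for an affine pre-registration in 2D (sketch).
    params = [a11, a12, a21, a22, b1, b2]; only a Dirichlet-type prior
    ||A - I||_F is included here, the anisotropy prior is omitted."""
    A = params[:4].reshape(2, 2)
    b = params[4:]
    warped = pts @ A.T + b                             # apply x -> A x + b
    d_vals = ndimage.map_coordinates(d_roi, warped.T, order=1)
    data = np.sqrt(weights * lengths) * d_vals         # weighted data residuals
    prior = np.sqrt(mu1) * (A - np.eye(2)).ravel()     # Dirichlet prior residuals
    return np.concatenate([data, prior])
```

Such a residual could, for instance, be handed to scipy.optimize.least_squares or minimized with the same Gauss-Newton machinery sketched above.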

2.3 Applications

In this section, we demonstrate how to apply the method for solving registration problems in 2D and 3D.

2.3.1 2D Example

The selected 2D application aims at registering the tooth as ROI in DP and QLF for demineralization assessment. In contrast to the state of the art in ROI segmentation, we apply the proposed curve-to-image registration method for both the extraction of the tooth contour from the QLF (ROI segmentation) and for its alignment to the tooth region shown in the DP (Fig. 1).

[Figure 1 block diagram: the QLF and an initial circle are input to the mesh-to-raster registration (M2R), yielding the binary QLF and the QLF contour; the QLF contour and the DP are then input to M2R, yielding the binary DP, a deformation field, and the QLF contour overlaid on the DP; applying the deformation gives the deformed DP, which is fused with the QLF into blended images.]
Figure 1: Workflow of the curve-to-pixel registration method: a DP/QLF pair is given as input; first the algorithm is used to extract the contour of the tooth from the QLF image; then the segmented contour and the DP are used as input for the algorithm, obtaining the segmented DP and a deformation field that is used to align the DP to the QLF. Finally, an image that blends the QLF and the deformed DP can be created.
  • Segmentation: In the QLF segmentation step, the initial tooth contour is represented as a small circle in the center of the image, since a tooth is topologically equivalent to a circle. The size of the circle was chosen to ensure that its entire boundary lies within the tooth. Then, the registration algorithm aligns the circle to the boundary of the tooth region shown in the QLF, thus yielding the shape of the tooth[18]. For the alignment of DP and QLF, the proposed algorithm is applied to the extracted QLF tooth contour and the tooth region shown in the DP.

  • Registration: We assume that the curve Γ is discretized as a set of line segments between points x_1, …, x_N, where N is the number of points. The data term is discretized using the midpoint rule over each line segment as

    \[ \int_\Gamma w\,(d_G\circ\varphi)^2 \,\mathrm{d}\mathcal{H}^{1} \approx \sum_{j} w(m_j)\,\big(d_G(\varphi(m_j))\big)^2\, l_j \qquad (4) \]

    where m_j indicates the midpoint of the line segment with endpoints x_j and x_{j+1} and l_j its length, i.e. l_j = |x_{j+1} − x_j|. The weights w of the data term in Eq. (1) are used to get a proper alignment of the curve points on the vertical boundary of the tooth. In fact, these points may have no counterpart in the boundary of G, since the thresholded images often do not exhibit a clear separation between a tooth and its neighbors. Thus, the weights are defined using the segment vector x_{j+1} − x_j that characterizes the orientation of the curve. The bigger the absolute value of the horizontal component of this vector, the less vertical Γ is at m_j. Hence, w(m_j) is set to this value (see the sketch after this list for an illustration).

    Thus, the vector (2) that encodes the non-linear least squares problem in the 2D setting is

    \[ F[\varphi] = \Big( \big(\sqrt{w(m_j)\, l_j}\; d_G(\varphi(m_j))\big)_{j},\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_1,\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_2 \Big) \qquad (5) \]

    The vector F_p for the parametric registration step is defined similarly (see Appendix A).

    At each iteration of the Gauss-Newton algorithm, the resulting linear system is solved using a sparse Cholesky factorization of the matrix DF[φ]ᵀ DF[φ], where DF[φ] denotes the Jacobian of F[φ]. To this end, we use CHOLMOD from the SuiteSparse[25] library.
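The sketch below illustrates how the midpoint-rule residuals and the orientation-dependent weights could be computed for a closed polygonal curve; the (row, col) point convention, the normalization of the weights, and the treatment of the curve as closed are assumptions of this sketch, not a verbatim transcription of the implementation.

```python
import numpy as np
from scipy import ndimage

def curve_data_residuals(d_roi, pts, phi):
    """Midpoint-rule residuals for a closed polygonal curve (illustrative sketch).
    pts: (N, 2) curve points in (row, col) pixel coordinates,
    phi: callable applying the current deformation to an (M, 2) point array."""
    nxt = np.roll(pts, -1, axis=0)                 # x_{j+1} (curve treated as closed)
    mid = 0.5 * (pts + nxt)                        # segment midpoints m_j
    seg = nxt - pts                                # segment vectors
    lengths = np.linalg.norm(seg, axis=1)          # segment lengths l_j
    # weight: |horizontal component| of the segment direction, so that nearly
    # vertical parts of the tooth contour contribute little to the data term
    w = np.abs(seg[:, 1]) / np.maximum(lengths, 1e-12)
    d_vals = ndimage.map_coordinates(d_roi, phi(mid).T, order=1)
    return np.sqrt(w * lengths) * d_vals           # sum of squares = data term (4)
```

In a full implementation, the regularizer residuals would be stacked below these, and each Gauss-Newton step would factor the resulting normal equations with a sparse Cholesky solver such as CHOLMOD (accessible from Python, e.g., through the scikit-sparse bindings).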

2.3.2 3D Example

The 3D use case is taken from the Regional Anaesthesia Simulator and Assistance (RASimAs) project [26], where subject-specific MRI data needs to be registered to a VPH model composed of mesh-based surfaces for skin, fascia, muscle, bone, vessels, and nerves.

Hence, we assume that R is obtained from a volumetric scan, that the surface Γ represents the ROI contour of T, and that Γ is given as a triangle mesh. Then, the data term is discretized using linear FE on the triangle mesh as

\[ \int_\Gamma (d_G\circ\varphi)^2 \,\mathrm{d}\mathcal{H}^{2} \approx \sum_{T_j \in \mathcal{T}} \big(d_G(\varphi(b_j))\big)^2\, |T_j| \qquad (6) \]

where the weights are chosen to be equal to 1, T_j is a triangle in the set 𝒯 of triangles defining the triangle mesh of the ROI, b_j is the barycenter of the triangle and |T_j| is the area of T_j. Thus, the vector (2) that encodes the non-linear least squares problem in the 3D setting is

\[ F[\varphi] = \Big( \big(\sqrt{|T_j|}\; d_G(\varphi(b_j))\big)_{T_j \in \mathcal{T}},\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_1,\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_2,\; \sqrt{\lambda}\, M^{-\frac{1}{2}} L \bar{U}_3 \Big) \qquad (7) \]

The definition of the vector F_p for the parametric registration is similar to the non-rigid case. The details are presented in Appendix B.

As in the 2D case, Gauss-Newton requires the solution of a linear system in each iteration of the algorithm. Unlike in the 2D case, it is not feasible to assemble the system matrix DF[φ]ᵀ DF[φ] due to memory requirements: it does not only have more rows and columns, but also considerably more non-zero entries in each row. Thus, it is crucial not to assemble this product matrix. Instead, we solve the linear system using the LSMR[27] algorithm, for which it is sufficient to compute and store the matrix DF[φ] and its transpose.
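A minimal sketch of this matrix-free step and of the barycenter-quadrature residuals over a triangle mesh is given below; the array layouts and the use of scipy's lsmr and map_coordinates are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from scipy.sparse.linalg import lsmr

def mesh_data_residuals(d_roi, vertices, triangles, phi):
    """Barycenter-quadrature residuals over a triangle mesh (illustrative sketch).
    vertices: (V, 3) voxel coordinates, triangles: (T, 3) vertex indices,
    phi: callable applying the current deformation to an (M, 3) point array."""
    tri = vertices[triangles]                              # (T, 3, 3) triangle corners
    bary = tri.mean(axis=1)                                # barycenters b_j
    areas = 0.5 * np.linalg.norm(                          # triangle areas |T_j|
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    d_vals = ndimage.map_coordinates(d_roi, phi(bary).T, order=1)
    return np.sqrt(areas) * d_vals

def gauss_newton_step_lsmr(J, F, damp=0.0):
    """One Gauss-Newton step for min ||F + J s||^2 without forming J^T J:
    LSMR only needs products with the sparse Jacobian J and its transpose."""
    return lsmr(J, -F, damp=damp)[0]
```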

Figure 2 illustrates the effect of the minimization strategy proposed in Section 2.1.3. First, the parametric registration is performed with empirically determined parameters μ₁ and μ₂. Estimating these parameters is rather straightforward, since their values mainly depend on the order of magnitude of the initial energy. Then, the non-rigid registration is performed iteratively for a decreasing sequence of values of the parameter λ, which yields increasingly accurate registration results as λ decreases. For the different settings of λ, Figure 2 visualizes the distances d_G(φ(x)), for x ∈ Γ, on the registered template mesh φ(Γ), for human hips.

Figure 2: The distances d_G(φ(x)), for x ∈ Γ, are displayed as color coding on the registered template mesh for every value of the parameter λ used for the non-rigid registration. From left to right: front, back, top and bottom sides of the mesh, color bar.

2.4 Evaluation

Reliable evaluation of non-rigid registration is difficult, since large databases with reliable ground truth (GT) annotations are not available. Nonetheless, we aim at quantitatively evaluating our approach.

2.4.1 Aim

Our evaluation aims at determining the absolute error (accuracy) of the method. To define accuracy, we need to rely on a ROI-based registration problem. This requires automatic segmentation, and the accuracy of the segmentation deeply impacts the overall performance. In addition, we aim at comparing the results to state-of-the-art methods for multi-modal registration, which are typically based on mutual information (MI)[28].

2.4.2 Metrics

The alignment accuracy is assessed by the Dice coefficient[29] (DC) and the symmetric Hausdorff distance[30] (HD), which quantify the agreement of two segmentations and the accuracy of the contour alignment, respectively. The larger the DC and the smaller the HD, the better the two ROIs correspond.
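Both metrics are straightforward to compute; a small sketch for binary masks and contour point sets (given in consistent, e.g. normalized, coordinates) could look as follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient of two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def symmetric_hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two contour point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```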

2.4.3 Database

Quantitative measures of ROI alignment are image specific. In order to obtain statistically significant results, a sufficiently large number of images must be processed. Hence, large databases of images annotated with reliable GT are needed. Therefore, we selected the 2D application of QLF/DP registration, where the ROI is defined as the tooth contour. In total, 150 pairs of QLF and DP of upper and lower incisors and canines have been acquired from 32 subjects. All subjects exhibited white spot lesions after orthodontic treatment with a fixed appliance.

2.4.4 Ground truth

Manual references are unreliable, since they cannot be reproduced exactly, even by the same rater[31]. However, based on several manual markings, a gold standard can be estimated using the simultaneous truth and performance level estimation (STAPLE) algorithm[32]. The core idea of STAPLE is to iteratively (i) compute the observer-weighted mean of the binary segmentations and (ii) adjust the weights of the observers with respect to the similarity to that mean segmentation. In other words, if an observer has large discrepancies to the estimated GT, the corresponding weights are lowered in the next iteration.
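A compact sketch of the binary STAPLE EM iteration is given below to make the idea concrete; the initialization values, the flat prior handling, and the fixed number of iterations are simplifying assumptions, not the exact published algorithm settings.

```python
import numpy as np

def staple(decisions, n_iter=50, prior=None):
    """Binary STAPLE (sketch): estimate a soft consensus segmentation W and
    per-rater sensitivity p and specificity q via EM.
    decisions: (n_raters, n_pixels) array of 0/1 segmentations."""
    D = decisions.astype(float)
    n_raters, _ = D.shape
    gamma = np.clip(D.mean() if prior is None else prior, 1e-6, 1 - 1e-6)
    p = np.full(n_raters, 0.99)                      # sensitivities
    q = np.full(n_raters, 0.99)                      # specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each pixel is foreground
        log_a = np.log(gamma) + (D * np.log(p[:, None]) +
                                 (1 - D) * np.log(1 - p[:, None])).sum(axis=0)
        log_b = np.log(1 - gamma) + ((1 - D) * np.log(q[:, None]) +
                                     D * np.log(1 - q[:, None])).sum(axis=0)
        W = 1.0 / (1.0 + np.exp(log_b - log_a))
        # M-step: update rater performance given the soft consensus W
        p = np.clip((D * W).sum(axis=1) / W.sum(), 1e-6, 1 - 1e-6)
        q = np.clip(((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum(), 1e-6, 1 - 1e-6)
    return W, p, q
```

Raters with large discrepancies to the consensus end up with low estimated sensitivity or specificity, which is exactly the down-weighting described above.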

To generate the ground truth with STAPLE, three trained raters (R1, R2, R3) manually marked the tooth contours on both of the image modalities. All the raters were presented the 300 images in random order.

2.4.5 Assessment of segmentation

The accuracy of the automatic ROI segmentation is assessed by calculating the DCs and HDs of the automatic segmentation and the ground truth estimated using the STAPLE algorithm. Including the automatic segmentation, in total four observers are available. The performance of the raters is compared for both modalities, QLF and DP. The contour extraction from the QLF images and from the DP is done with modality-specific, empirically chosen values of the parameters μ₁ and μ₂ for the parametric registration step and λ for the non-rigid step.

A repeated measures analysis of variance (ANOVA) is applied to assess the statistical significance of the differences between the automatic segmentation and the human raters, and between the performance of the algorithm on QLF and DP, at a fixed significance level.
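In Python, such a one-way repeated measures ANOVA can be run, for example, with statsmodels; the toy long-format table below uses invented placeholder values purely to illustrate the call, not data from this study.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one Dice value per (image, method) pair.
df = pd.DataFrame({
    "image":  ["img01", "img01", "img02", "img02", "img03", "img03"],
    "method": ["M2R", "R1", "M2R", "R1", "M2R", "R1"],
    "dice":   [0.978, 0.990, 0.981, 0.984, 0.975, 0.988],
})
res = AnovaRM(df, depvar="dice", subject="image", within=["method"]).fit()
print(res.anova_table)   # F statistic and p-value for the method effect
```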

2.4.6 Assessment of registration

The accuracy of the DP/QLF ROI alignment is again measured by the DCs and HDs between the QLF ground truth and the DP ground truth deformed using the deformation fields obtained from the proposed registration method. For the DP/QLF alignment, empirically chosen values of the parameters μ₁, μ₂ and λ were used.

To compare our approach with the state of the art, the same analysis is done using the deformation fields obtained from the Insight Segmentation and Registration Toolkit (ITK)[33] MI registration and those obtained from the MI registration implemented with Elastix[34], which is an established registration method.[35, 36, 37] In both cases, the registration is performed on the same grayscale versions of the DP/QLF pairs that are used for the proposed automatic segmentation. The parameters used for the registrations are specified in Table 1. The listed parameters are the default values suggested by the original authors, except for the "Final grid spacing" parameter for Elastix. The latter has been increased from its default value of 16 pixels to 48 pixels, as using the default value resulted in obviously unrealistic transformations. In the ITK implementation, the MI is optimized using the limited-memory Broyden-Fletcher-Goldfarb-Shanno[38] (L-BFGS-B) algorithm.

Based on DC and HD, a one-way repeated measures ANOVA is used to compare the proposed method with the mutual information-based ITK implementation, which is considered as the reference method.

Parameter                        | ITK                          | Elastix
Metric                           | Mattes MI                    | Advanced Mattes MI
Number of histogram bins         | 50                           | 32
Transformation                   | Third-order B-spline         | Affine + third-order B-spline
Final grid spacing               | # pixels in the input image  | 48 pixels
Optimization algorithm           | L-BFGS-B                     | Adaptive stochastic gradient descent
Maximum number of iterations     | 1000                         | 200 (affine) / 500 (B-spline)
Number of multiresolution levels | -                            | 4
Table 1: Parameters used for ITK and Elastix. Here, MI indicates mutual information and L-BFGS-B refers to the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with bound constraints.

3 Results

In this section, we present the results of qualitative registration in 2D and 3D, as well as the accuracy determined in the 2D evaluation for segmentation and registration.

3.1 2D application

Figure 3 shows qualitative results obtained in the exemplary 2D application. After non-rigid deformation, the DP matches the QLF, as the QLF-based contour (red) matches the tooth in the DP, as depicted in Panel (d).

Figure 3: Results in 2D: The QLF image with the superimposed curve (red) representing the automatic segmentation (a); the DP with superimposed curves (b); QLF-based tooth contour before (red) and after parametric registration (blue); and the curves after parametric and non-rigid registration in blue and green, respectively (c). Panel (d) shows the DP after non-rigid deformation and the corresponding QLF curve (red). Finally, an image created by blending the QLF and the registered DP is displayed in (e).

3.2 3D application

The M2R registration method has been applied in 3D, too. For the use case from the RASimAs project, the corresponding 3D grid has pixel dimensions of (3.73, 3.73, 3.73) mm. Figure 4 depicts the results and suggests an appropriate surface-to-voxel alignment. The Dice coefficient of the two meshes after the alignment is 0.9959. As a measure of the distance between the contours, the Hausdorff distance is calculated. The Hausdorff distance of the reference mesh from the template mesh is 90.60 mm (24.28 pixel units) before performing the registration and 15.73 mm (4.22 pixel units) after. The 99th percentile of the Hausdorff distance of the reference from the template is 78.67 mm (21.14 pixel units) before and 2.12 mm (0.57 pixel units) after the registration. Note that the 99th percentile after the registration is much lower than the Hausdorff distance after the registration. This indicates the presence of just a few outlier points (Fig. 2, blue/red dots in the last row) that make the Hausdorff distance appear relatively large even though the alignment is very accurate. Furthermore, note that we are not considering the symmetric Hausdorff distance here, since one surface only contains part of the other surface. The proposed method, in fact, has the advantage that it can cope with non-symmetric relations between the two input meshes.

Figure 4: Results in 3D. The first row shows the starting setting: the reference data (a), the template mesh (b), and the initial position of the input data sets (c). Note that (a) and (b) use different manually chosen view angles to simplify the comparison, (c) shows the true initial mismatch of the data sets. Middle and lower row depict the results after parametric registration and non-rigid deformation, respectively. Panels (f) and (j) visualize the corresponding slices of the reference image and of the registered template mesh after parametric and non-rigid registration, respectively.

3.3 Quantitative evaluation of segmentation performance

Based on the 150 QLF images, the mean of the DC between the segments obtained from the automatic procedure and the STAPLE-based GT that is calculated from the three human observers is 0.978 (range 0.9420-0.9940). The mean HD is 0.032 (range 0.0102-0.1060).

Based on the 150 DP images, these values turn to 0.981 (range 0.9440-0.9940) and 0.030 (range 0.0102-0.0792) for DC and HD, respectively (Table 2).

For both QLF and DP, a pairwise comparison of the means of the DC as well as of the HD using a one-way repeated measures ANOVA was performed between the automatic segmentation and each rater, determining a statistically significant difference between these means (Table 4).

The one-way repeated measures ANOVA revealed a statistically significant difference between the means of the DC between the DP and QLF modalities, but no significant difference between the means of the HD.

Method   | QLF DC mean (SD) | QLF HD mean (SD) | DP DC mean (SD) | DP HD mean (SD)
Rater R1 | 0.990 (0.0098)   | 0.015 (0.0117)   | 0.984 (0.0076)  | 0.021 (0.0087)
Rater R2 | 0.980 (0.0068)   | 0.023 (0.0067)   | 0.991 (0.0066)  | 0.015 (0.0094)
Rater R3 | 0.992 (0.0082)   | 0.012 (0.0116)   | 0.996 (0.0049)  | 0.008 (0.0079)
M2R      | 0.978 (0.0101)   | 0.032 (0.0185)   | 0.981 (0.0083)  | 0.030 (0.0129)
Table 2: Mean and standard deviation (SD) of Dice coefficient (DC) and symmetric Hausdorff distance (HD) of the ground truth and manual markings and of ground truth and the proposed automatic segmentation (M2R) for QLF and DP.

3.4 Quantitative evaluation of registration performance

The descriptive statistics (mean and standard deviation) of DCs and HDs measuring the accuracy of the alignment for the ITK-MI and Elastix state-of-the-art methods and our mesh-to-raster (M2R) approach are reported in Table 3. Based on the 150 QLF/DP image pairs, the means of the DC are 0.940 (range 0.7600-0.9900), 0.959 (range 0.874-0.991) and 0.971 (range 0.8910-0.9930) for ITK-MI, Elastix and M2R, respectively. The means of the HD are 0.072 (range 0.0181-0.2630), 0.055 (range 0.014-0.153) and 0.041 (range 0.0102-0.1050), respectively.

The one-way repeated measures ANOVA revealed a statistically significant difference between the means of the DC as well as between the means of the HD for the registration with ITK and the proposed method. Similar results were given by ANOVA for the means of the DC and the means of the HD for M2R and Elastix. The statistical significance is emphasized by the box plots in Figure 5, which visualize the data from Table 3. Both the DCs and the HDs exhibit more variability for ITK-MI and Elastix than for our method.

Method  | DC mean (SD)   | HD mean (SD)
M2R     | 0.971 (0.0129) | 0.041 (0.0180)
ITK-MI  | 0.940 (0.0471) | 0.072 (0.0494)
Elastix | 0.959 (0.0219) | 0.055 (0.0279)
Table 3: Mean and standard deviation (SD) of Dice coefficient (DC) and symmetric Hausdorff distance (HD) of the deformed DP ground truth and the QLF ground truth for the proposed mesh-to-raster (M2R), the ITK-MI and Elastix reference methods.
Figure 5: Boxplots of the Dice coefficients and the symmetric Hausdorff distance of the QLF ground truth and the DP ground truth deformed using the proposed algorithm (M2R), the ITK mutual information (ITK-MI) and the registration implemented with Elastix.

4 Discussion

The alignment of ROIs in multi-modal medical images is important for many applications. Here, a mesh-to-raster (M2R)-based method is described, which aligns the contour of the ROI of the reference image to the target image with a deformation field that is determined on the whole image domain. Although this paper provides examples in 2D and 3D only, the method is applicable in general in any dimension.

The quantitative evaluation is based on multi-modal 2D data from dentistry. Our method was applied to QLF and DP images, including an evaluation of the automatic segmentation that is needed to determine the tooth region, which acts as ROI.

With respect to the evaluation of the ROI segmentation, our M2R approach is not as accurate as a human observer (a significant difference between the means of each human rater and the automatic segmentation was determined by ANOVA). In absolute numbers for QLF, the distance in terms of mean DC between M2R (0.978) and the best-performing human observer R3 (0.992) is only about 1.4%, while the distance between M2R and the worst-performing human observer R2 (0.980) is negligible (about 0.2%). Similar results hold for the DP. Also, the mean HD of the automatic segmentation is slightly larger than those of the manual markings.

The observed deviation from the GT in QLF is due to the extended and smooth transition zone between tooth and background, which leads to an imprecise classification of the image into tooth and non-tooth regions and thus to a less accurate contour extraction than in cases where the images exhibit a clear distinction between tooth and background. Similarly, for the DP images, small inaccuracies in the classification into tooth and non-tooth regions are caused mostly by the low contrast between the colors of tooth and gum or adjacent teeth. However, the segmentation step may easily be replaced by another algorithm.

Contour-based registration methods have been used in several works for ROI extraction and alignment. Chen et al.[39] used an automatic method to extract and align teeth contours to register dental radiographs. A deformable image registration method that uses ROI contour propagation was proposed by Wu et al.[40] for radiotherapy. Here, the proposed method is used for both ROI extraction and matching. The method correctly aligns the contour of the ROI in the target image to the boundary of the ROI in the reference image. However, if the boundary of the ROI in the reference image is not accurately delineated, inaccuracies in the registration step may result, as already pointed out for the QLF and DP segmentation.

The best and worst DCs and HDs for our method, the ITK-MI registration and the MI-based registration implemented with Elastix are depicted in Fig. 6. The MI registration implemented with Elastix yields better results than the ITK-MI registration. However, in some cases, the registration implemented with Elastix returned unrealistic deformations, as shown, for example, in the last row of Figure 6. These results clearly illustrate the improved accuracy of the proposed method over both ITK-MI and Elastix, which was shown quantitatively but in an abstract manner by the DC and HD means. Even though we have used a non-optimal ROI extractor, we have been able to significantly outperform both the ITK-MI registration and the MI registration implemented with Elastix. This is due to the ROI-based nature of our approach as opposed to the global nature of the MI-based methods. It emphasizes the need for ROI-based registration, which is in line with Yi et al.[41].

Using an optimal ROI segmentation improves the accuracy of the registration with the proposed algorithm. In the case of the DP/QLF alignment, we also performed the registration using the segmentation of the ROIs provided by the ground truth estimated from the manual markings instead of the automatically computed segmentation. This eliminates the segmentation error that is implicitly included in the results shown above. We also calculated the Dice coefficients and the Hausdorff distances for all 150 image pairs. The mean and standard deviation of the DC for the M2R registration with optimal segmentation (ground truth estimated from manual segmentation) are 0.9913 and 0.0018, respectively. The mean and standard deviation of the HD are 0.0126 and 0.0048. The registration results are therefore even more accurate if an optimal segmentation of the ROI (or classification of the images to be registered into ROI and background) is available.

[Figure 6 panels, from left to right: QLF with marked GT, DP with marked GT, DP deformed using M2R, DP deformed with ITK-MI, DP deformed with Elastix. Quality measures for the five depicted example pairs:]

Example | M2R                                  | ITK-MI                               | Elastix
1       | best DC (0.9930), best HD (0.0102)   | best DC (0.9900), HD (0.0201)        | best DC (0.9910), best HD (0.0145)
2       | DC (0.9750), HD (0.0397)             | DC (0.9820), best HD (0.0181)        | DC (0.9830), HD (0.0255)
3       | DC (0.9760), HD (0.0322)             | worst DC (0.7600), worst HD (0.2630) | DC (0.9080), HD (0.1420)
4       | worst DC (0.8910), worst HD (0.1050) | DC (0.8810), HD (0.1190)             | worst DC (0.8740), HD (0.1140)
5       | DC (0.9570), HD (0.0571)             | DC (0.8580), HD (0.1600)             | DC (0.8750), worst HD (0.1530)
Figure 6: Best and worst DC and HD for the proposed method (M2R), the ITK mutual information (ITK-MI) and the registration implemented with Elastix. From left to right: QLF with marked GT contour (red), DP with marked GT contour (white), DP with marked GT contour (blue) deformed using the M2R and contour extracted from QLF (red), DP with marked GT contour (blue) deformed using ITK-MI and contour extracted from QLF (red), and DP with marked GT contour (blue) deformed using Elastix and contour extracted from QLF (red).

For both 2D and 3D, the objects of interest presented in our applications are relatively simple in shape. However, the proposed method is capable of handling more complex shapes. As an example in 2D, our method was applied to images of a hand and a bone taken from the 1070-Shape Dataset of the Laboratory for Engineering Man/Machine Systems (LEMS) [42]. The registration results, obtained with empirically chosen values of the parameters μ₁, μ₂ and λ, are given in Figure 7. The Dice coefficient and the symmetric Hausdorff distance are 0.9819 and 0.0102 for the bone example, 0.9899 and 0.0488 for the first hand example (middle row in Figure 7), and 0.9902 and 0.0191 for the second hand example (last row in Figure 7).

Figure 7: Results for more complex shapes in 2D: The template image with the superimposed curve (red) (a); the reference with superimposed curves; shape contour before (red) and after parametric registration (blue) (b); and the curves after parametric and non-rigid registration in blue and green, respectively (c). Panel (d) shows the reference after non-rigid deformation and the corresponding template curve (red). Finally, an image created by blending the template and the registered reference is displayed in (e).

As an example of the application of the proposed method to more complicated shapes in 3D, a brain CT scan is used. The template has been obtained by applying a synthetic nonlinear deformation to the reference mesh, followed by a parametric rigid body transformation (a translation combined with rotations about the coordinate axes). Figure 8 shows the results of the proposed registration method for the brain example, using empirically chosen values of the parameters μ₁, μ₂ and λ. As in the hips example, the 3D grid has pixel dimensions of (1.9380, 1.9380, 1.9380) mm. Figure 9 displays the distances d_G(φ(x)). The Dice coefficient after the registration is 0.9993. The two one-sided Hausdorff distances before the registration are 100.72 mm (51.97 pixel units) for the reference mesh from the template and 45.30 mm (23.38 pixel units) for the other direction. The 99th percentiles of the Hausdorff distance of the reference mesh from the template are 94.82 mm (48.93 pixel units) and 39.50 mm (20.38 pixel units) for the other direction. After the registration, the one-sided Hausdorff distance of the reference mesh from the registered template is 1.35 mm (0.0054 pixel units) and 6.27 mm (0.0251 pixel units) for the other direction, while the 99th percentiles are 1 mm (0.52 pixel units) and 2.65 mm (1.37 pixel units) for the Hausdorff distance of the reference mesh from the registered template and the other direction, respectively. The noticeable difference between the Hausdorff distance and its 99th percentile is due to the presence of a few outliers, displayed as colored (red or blue) dots in Figure 9.

Figure 8: Results in 3D for the brain example. The first row shows the starting setting: the reference data (a), the template mesh (b), and the initial position of the input data sets (c). Note that (a) and (b) use different manually chosen view angles to simplify the comparison, (c) shows the true initial mismatch of the data sets. Middle and lower row depict the results after parametric registration and non-rigid deformation, respectively, from different viewing angles.
Figure 9: The distances d_G(φ(x)), for x ∈ Γ, are displayed as color coding on the registered template mesh for every value of the parameter λ used for the non-rigid registration. From left to right: the registered template mesh as seen from four different angles, color bar. The measure of the distance in mm was estimated considering an average acquisition field of view for head CT scans of 250 mm.

A limitation of our work is that the extensive quantitative evaluation is performed only in 2D. In the future, we plan to obtain an extensive quantitative assessment in 3D, too. Here, data from the Evaluation of Methods for Pulmonary Image Registration 2010 (EMPIRE10) or other challenges might be helpful [43]. In this context, the proposed method could be used to perform lung registration by aligning the lung boundaries. However, to align the major fissures inside the lung, an additional data term would need to be included in the proposed method, which we plan to explore in future work.

Since the proposed method can also be used for segmentation, even though it is performed as a pre-processing step, it would be interesting to explore the possibility of extending the method to an integrated segmentation/registration scheme in the future.

5 Conclusions

A ROI-based registration method for multi-modal images was presented. It uses a curve-to-pixel or surface-to-voxel approach to align the ROIs of the reference and the target images in 2D or 3D, respectively. Qualitative examples in 2D and 3D were presented. The accuracy of the alignment was tested on multi-modal 2D images, as well as on 3D examples, by calculating the Dice coefficient and the Hausdorff distance between the registered images. For the quantitative evaluation in 2D, the ground truth was established by applying the STAPLE algorithm to manually marked images. In a comprehensive analysis based on 150 pairs of images, the proposed method statistically significantly outperforms mutual information (MI)-based global registration implemented using both ITK and Elastix, which are considered state-of-the-art methods.

Appendix A details of the parametric registration in 2D

For the parametric registration in 2D, with A = (a_ij) ∈ ℝ^{2×2} and b = (b₁, b₂)ᵀ, the regularizer can be expressed in terms of the entries of A.

Therefore, the vector F_p for the parametric registration collects the weighted data residuals, evaluated at φ_p(x) = Ax + b, together with the square roots of the weighted prior terms.

Appendix B details of the parametric registration in 3D

For the parametric registration in 3D, with A = (a_ij) ∈ ℝ^{3×3} and b ∈ ℝ³, the regularizer for the parametric registration, (3), can be expressed in terms of the entries of A.

Therefore, the vector F_p for the parametric registration collects the weighted data residuals, evaluated at φ_p(x) = Ax + b, together with the square roots of the weighted prior terms.

Appendix C details of the ANOVA for the segmentation evaluation

This section presents the results (F and p values) of the one-way repeated measures ANOVA for the comparison of the means of the Dice coefficient (DC) as well as of the symmetric Hausdorff distance (HD) of each human rater and the automatic segmentation for both QLF and DP.

Comparison | QLF DC            | QLF HD | DP DC  | DP HD
(R1, M2R)  | 113.16            | 107.73 | 20.01  | 54.85
(R2, M2R)  | 8.76 (p = 0.0036) | 38.34  | 155.08 | 139.09
(R3, M2R)  | 234.00            | 134.41 | 415.61 | 345.48
Table 4: ANOVA results for the comparison of the means of the Dice coefficient (DC) and symmetric Hausdorff distance (HD) of the human raters and the automatic segmentation for QLF and DP.

Informed consent

Informed consent was obtained from all individuals participating in the trial. The study was approved by the IRB no IORG0006299 (Ethics Committee, Uniklinik RWTH Aachen).

Disclosures

No conflicts of interest, financial or otherwise, are declared by the authors.

Acknowledgements.
The authors would like to thank Eva E. Ehrlich and Ekaterina Sirazitdinova, Uniklinik RWTH Aachen, for their assistance in the preparation of the 2D data. The authors at AICES RWTH Aachen were funded in part by the Excellence Initiative of the German Federal and State Governments. The RASimAs project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 610425. The authors declare no conflict of interest and have nothing to disclose.

References

  • [1] A. P. Keszei, B. Berkels, and T. M. Deserno, “Survey of non-rigid registration tools in medicine,” J Digit Imaging , 1–15 (2016).
  • [2] P. Viola and W. M. Wells, “Alignment by maximization of mutual information,” Int J Comput Vision 24(2), 137–154 (1997).
  • [3] J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, “Mutual-information-based registration of medical images: a survey,” IEEE Trans Med Imaging 22(8), 986–1004 (2003).
  • [4] K. P. Wilkie and E. R. Vrscay, “Mutual information-based methods to improve local region-of-interest image registration,” in Proceedings of the Second International Conference on Image Analysis and Recognition, ICIAR’05, 63–72, Springer-Verlag, (Berlin, Heidelberg) (2005).
  • [5] A. Yavariabdi and et al., “Contour-based TVUS-MR image registration for mapping small endometrial implants,” in Abdominal Imaging. Computation and Clinical Applications: 5th International Workshop, Held in Conjunction with MICCAI 2013, Nagoya, Japan, September 22, 2013. Proceedings, H. Yoshida, S. Warfield, and M. W. Vannier, Eds., 145–154, Springer Berlin Heidelberg, (Berlin, Heidelberg) (2013).
  • [6] X. Gu and et al., “A contour-guided deformable image registration algorithm for adaptive radiotherapy,” Phys Med Biol 58(6), 1889 (2013).
  • [7] R. Penjweini and et al., “Deformable medical image registration of pleural cavity for photodynamic therapy by using finite-element based method,” in Proc SPIE Int Soc Opt Eng. 9701, 970106 (2016).
  • [8] Y. Zhong and et al., “Research on digitalized design technology of teeth shape support implant guide based on image guide,” in 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 1595–1599 (2016).
  • [9] T. Zhang and et al., “Improved registration of DCE-MR images of the liver using a prior segmentation of the region of interest,” Proc. SPIE 9784, Medical Imaging 2016: Image Processing, 978443–1 – 978443–8 (2016).
  • [10] C. Hope and et al., “Assessing the association between oral hygiene and preterm birth by quantitative light-induced fluorescence,” Scientific World J 2014(374694), 1–9 (2014).
  • [11] J. Yan, Y. Xiang, and X. Jian, “Automated detection and quantification of early caries lesions on images captured by intraoral camera,” in Bioelectronics and Bioinformatics (ISBB), 2011 International Symposium on, 251–254 (2011).
  • [12] S. Datta and N. Chaki, “Detection of dental caries lesion at early stage based on image analysis technique,” in 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS), 89–93 (2015).
  • [13] A. Mansoor and et al., “A statistical modeling approach to computer-aided quantification of dental biofilm,” IEEE J Biomed Health Inform 19(1), 358–366 (2015).
  • [14] S. Shah and et al., “Automatic tooth segmentation using active contour without edges,” in 2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference, 1–6 (2006).
  • [15] M. Viceconti and P. Hunter, “The virtual physiological human: Ten years after,” Annu Rev Biomed Eng 18(1) (2016).
  • [16] J. E. E. de Oliveira and et al., “Surface mesh to voxel data registration for patient-specific anatomical modeling,” Proc. SPIE 9786, Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling , 978625 (2016).
  • [17] B. Berkels and et al., “Non-rigid contour-to-pixel registration of photographic and quantitative light-induced fluorescence imaging of decalcified teeth,” Proc. SPIE 9784, Medical Imaging 2016: Image Processing , 97840Z–7 (2016).
  • [18] B. Berkels and et al., “Curve-to-image based non-rigid registration of digital photos and quantitative light-induced fluorescence images in dentistry,” in Bildverarbeitung für die Medizin 2016, Informatik aktuell, 80–85, Springer (2016).
  • [19] L. Gorelick, A. Geiger, and A. Gwinnett, “Incidence of white spot formation after bonding and banding,” Am J Orthod 81(2), 93–98 (1982).
  • [20] S. Bauer and et al., “Joint ToF image denoising and registration with a CT surface in radiation therapy,” in Lect Notes Computer Sci, 98–109 (2011).
  • [21] D. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics 3rd Edition, Cambridge University Press (2007).
  • [22] B. Berkels and et al., “Co-registration of intra-operative brain surface photographs and pre-operative MR images,” Int J Comput Assist Radiol Surg 9(3), 387–400 (2014).
  • [23] S. Gratton, A. S. Lawless, and N. K. Nichols, “Approximate Gauss-Newton methods for nonlinear least squares problems,” SIAM J Optim 18(1), 106 – 132 (2007).
  • [24] N. Chumchob and K. Chen, “A robust affine image registration method,” Int J Num Anal Model 6(2), 311–334 (2009).
  • [25] Y. Chen and et al., “Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate,” ACM Trans Math Softw 35(3), 22:1–22:14 (2008).
  • [26] T. Deserno, J. Oliveira, and O. Grottke, “Regional anaesthesia simulator and assistant (RASimAs): Medical image processing supporting anaesthesiologists in training and performance of local blocks,” in Proceedings of the 28th IEEE International Symposium on Computer-Based Medical Systems, 348–51 (2015).
  • [27] D. C.-L. Fong and M. Saunders, “LSMR: an iterative algorithm for sparse least-squares problems,” SIAM J Sci Comput 33(5), 2950–2971 (2011).
  • [28] F. P. Oliveira and J. M. R. Tavares, “Medical image registration: a review,” Comput Methods Biomech Biomed Engin 17(2), 73–93 (2014).
  • [29] L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology 26(3), 297–302 (1945).
  • [30] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, “Comparing images using the Hausdorff distance,” IEEE Trans Pattern Anal Mach Intell 15(9), 850–863 (1993).
  • [31] T. Lehmann, “From plastic to gold: a unified classification scheme for reference standards in medical image processing,” Proc SPIE 4684(3), 1819–27 (2002).
  • [32] S. K. Warfield, K. H. Zou, and W. M. Wells, “Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation,” IEEE Trans Med Imaging 23(7), 903–921 (2004).
  • [33] H. J. Johnson, M. M. McCormick, and L. Ibanez, The ITK Software Guide: Introduction and Development Guidelines, Kitware Inc., fourth ed. (2015).
  • [34] S. Klein and et al., “Elastix: a toolbox for intensity-based medical image registration,” IEEE Trans Med Imaging 29(1), 196–205 (2010).
  • [35] S. Monti and et al., “An evaluation of the benefits of simultaneous acquisition on PET/MR coregistration in head/neck imaging,” Journal of Healthcare Engineering 2017, Article ID 2634389 (2017).
  • [36] S. Broggi and et al., “A comparative evaluation of 3 different free-form deformable image registration and contour propagation methods for head and neck MRI: The case of parotid changes during radiotherapy,” Technology in Cancer Research & Treatment 16(3), 373–381 (2017).
  • [37] M. Malinsky and et al., “Registration of FA and T1-weighted MRI data of healthy human brain based on template matching and normalized cross-correlation,” Journal of Digital Imaging 26(4), 774–785 (2013).
  • [38] C. Zhu, R. H. Byrd, and J. Nocedal, “L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization,” ACM Trans Math Softw 23(4), 550–560 (1997).
  • [39] H. Chen and A. K. Jain, “Dental biometrics: alignment and matching of dental radiographs,” IEEE Trans Pattern Anal Mach Intell 27(8), 1319–1326 (2005).
  • [40] Q. Wu and et al., “Deformable image registration of CT images for automatic contour propagation in radiation therapy,” Biomed Mater Eng 26, S1037–S1044 (2015).
  • [41] W. Yi and et al., “ROI-based image registration for digital subtraction radiography,” Oral Surg Oral Med Oral Pathol Oral Radiol Endod 101, 523–529 (2006).
  • [42] “The 1070-Shape Database of the Laboratory for Engineering Man/Machine Systems (LEMS), School of Engineering at Brown University.” https://vision.lems.brown.edu/content/available-software-and-databases.
  • [43] K. Murphy and et al., “Evaluation of registration methods on thoracic CT: the EMPIRE10 challenge,” IEEE Trans Med Imaging 30(11), 1901–1920 (2011).