Breast Cancer: Model Reconstruction and Image Registration from Segmented Deformed Image using Visual and Force based Analysis


Shuvendu Rana, Rory Hampson, and Gordon Dobie

S. Rana, R. Hampson and G. Dobie are with the Centre for Ultrasonic Engineering, Department of EEE, University of Strathclyde, Glasgow, Scotland, UK (e-mail: shuvendu@ieee.org, rory.hampson@strath.ac.uk, gordon.dobie@strath.ac.uk). This work is funded by the EPSRC under grant number EP/P011276/1 and supported by Pressure Profile Systems Inc.

Copyright (c) 2019 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
Abstract

Breast lesion localization using tactile imaging is a new and developing direction in medical science. To achieve this goal, proper image reconstruction and image registration are valuable assets. In this paper, a new segmentation-based image surface reconstruction algorithm is used to reconstruct the surface of a breast phantom. In breast tissue, the sub-dermal vein network is used as a distinguishable pattern for reconstruction. The proposed image capturing device contacts the surface of the phantom, so surface deformation occurs due to the force applied during scanning. A novel force-based surface rectification system is used to restore a deformed surface image to its original structure. To construct the full surface from the rectified images, an advanced affine scale-invariant feature transform (A-SIFT) is proposed to reduce the affine effect introduced during data capture. A camera-position-based image stitching approach is then applied to construct the final original non-rigid surface. The proposed model is validated on theoretical models and in real scenarios to demonstrate its advantages with respect to competing methods. The proposed method, applied to path reconstruction, achieves a positioning accuracy of 99.7%.

Breast cancer, medical imaging, affine scale-invariant feature transform (A-SIFT), structure from motion (SfM), force deformation.

I Introduction

Breast cancer is one of the most common causes of death and public fear in today’s clinical environment. An estimated 1.38 million women worldwide were diagnosed with breast cancer in 2008, accounting for nearly a quarter (23%) of all cancers diagnosed in women (11% of the total in men and women). During that year, it was estimated that breast cancer was responsible for approximately 460,000 deaths worldwide [1]. In the UK, breast cancer is typically detected through self-examination, which prompts a visit to the General Practitioner, or through the screening of women over the age of fifty using mammography, where only about 8% of patients referred to secondary care centres have cancer [2]. This work considers a new technique to support screening based on tactile imaging. Tactile imaging in a primary care setting has the potential to significantly improve the accuracy of these referrals, reducing patient anxiety through efficient diagnosis while reducing the financial strain caused by unnecessary referrals to secondary care.

Unlike mammography, which provides a complete image of a breast [3, 4, 5] using radiation, tactile imaging sensors are scanned over the breast in a noninvasive manner, producing a real-time feed of the pressure profile under the sensor [6, 7] as shown in Fig. 1.

(a) SureTouch Device
(b) Hardness (lump) detection in breast
(c) Localization of the lump (manually)
Fig. 1: Sample images of SureTouch device and output [6, 7]

This temporal image feed is more difficult to interpret than a global image of the breast, as the data has no spatial context reference. In this scenario, optical sensors, such as embedded cameras, can be used to localise an image onto the human body.

Breast tissue contains numerous veins that feed the mammary glands, which can be imaged with near-field infra-red (NFIR) light at wavelengths in the range 650 nm to 930 nm [8, 9]. This vascular network presents an interesting opportunity for absolute localisation of images with respect to the breast. The accurate positioning of the tactile sensor relative to the breast would enable spatial mapping of the tactile data, facilitating the production of a global stress image of the breast. From a clinical perspective, this global image would be a far more effective means of representing the data, simplifying interpretation and enhancing diagnosis.

In this research, an infra-red (IR) camera is used as the basis of the prototype model to scan the breast phantom. As the images are captured at the same time as the tactile data, a full model can be constructed to localize the current position using non-rigid reconstruction. However, the application of force during scanning creates deformation in the breast image, and this deformation originates at the camera scanning surface. Thus, the non-rigid reconstruction requires both a deformation rectification and a camera localization method.

In the recent literature on non-rigid reconstruction, Agudo et al. proposed SLAM (Simultaneous Localization And Mapping) for elastic surfaces [10], where they used the fixed position of the object boundary for image registration. They later proposed a free boundary condition approach [11, 12] as an extension of their previous work. In both cases they used an FEM (finite element model) [13] approach to estimate the deformation with a partially fixed rigid camera, applying Navier’s equations over the FEM to estimate the deformation force. During deformation, the surface strain and the material stresses are estimated using an EKF (extended Kalman filter) [14, 15]. This physics-based approach requires an initialization step (an initial static model), so a noisy environment may introduce accuracy errors, besides the fact that errors may accumulate over the considered time frame. As such, these approaches [10, 12, 11] may not be able to exploit the mechanical constraints to cover a large deformation range. Additionally, rigid reconstruction techniques fail when applied directly to time-deforming objects. Shape from Template (SfT) is another method to reconstruct deformation, in linear material [16] and non-linear material [17]. Haouchine et al. proposed an SfT method for capturing a 3D elastic surface [16] and creating augmented 3D objects using a single viewpoint as a reference view; the authors quantify the material elasticity using non-linear solvers based on the Saint Venant-Kirchhoff model. A better approach is presented by Malti and Herzet [17], using sparse linear elastic SfT instead of classic SfT; the authors used a relative ground truth of the original 3D object to match with the observed relative deformation by calculating the spatial domain of non-zero deforming forces. Usually, the FEM covers structured deformation following a sequence [18]. But in our proposed scheme, the deformation does not follow any sequence, because the application of force during scanning is responsible for the stretching deformation, and manually applied force cannot be modelled by any structured sequence pattern. As a result, the FEM technique may not provide a solution for this unstructured, large deformation.

In order to overcome this limitation, non-rigid structure from motion (NRSfM) [19, 20] techniques have been proposed to recover the 3D shape of non-rigid objects over time. In this scenario, Garg et al. proposed a structure from motion (SfM) technique that exploits motion cues in a batch of frames and applied it to dense non-rigid surface reconstruction [19]. In other work, Sepehrinour and Kasaei used NRSfM and an optical flow method for the reconstruction of 3D objects from video sequences [20]. Generally, perspective projection is considered a more realistic model and is ideally suited to a wide range of cameras. Current NRSfM techniques have focused on an orthographic projection camera model due to its simplicity, as perspective projection yields equations that are complex and often non-linear. Therefore, to simplify the calculations, approximations are applied to the perspective projection model such that it can be reduced to an orthographic projection.
Orthographic reconstruction of non-rigid surfaces has been achieved by a singular value decomposition algorithm using the orthogonal characteristics of the rotation matrix [19, 20], but true perspective reconstruction of non-rigid surfaces has long seemed impossible due to the high complexity and the large number of unknowns. Later, Yu et al. [21] proposed a template-based non-rigid 3D reconstruction from a stationary object deformation. The authors used dense direct matching in a template-based approach to deformable shape reconstruction; a template-based method and a feature-track-based method are used to generate the template from the monocular camera views. Using the concept of the 2D RGB image, Innmann et al. proposed a 3D volumetric approach to map the observed deformation onto a 3D model [22]. The main procedure is to match scale invariant feature transform (SIFT) key points with the original 3D model to estimate the current position of the object, but the authors used a pre-constructed 3D model to map the current deformation. In this approach, they [22] used rigid camera movement for the 3D construction, and additionally a fixed-distance object location system is used on the deforming object to find the appropriate positioning. Later, Agudo et al. proposed a real-time 3D reconstruction technique for non-rigid shapes [23]. Here the reconstruction is made at scanning time with a distant camera; however, in this approach [23] the reconstruction of deformation caused at the camera scanning plane may not work, and the stretching deformation is not static due to the handheld camera, so this approach may not solve the proposed problem. Very recently, Newcombe et al. proposed a dense reconstruction method using hand-held RGB-D cameras for non-rigid objects [24]. Their main proposal was to correct the point-to-plane mapping error in the observed depth map. The authors used a sparse feature-based method which is fused into a canonical space using an estimated volumetric warp field, which removes the scene motion, and a truncated signed distance function volume reconstruction is obtained. They used a fixed platform with a hand-held camera to identify the volume or spatial position of the object. Since they omitted the RGB stream, which contains the global features, their method fails to track surfaces under certain types of topological change and is also prone to drift. In this approach, depth is the main input for 3D model construction; hence, this method [24] may not fulfil the requirements of the proposed application.

The existing literature shows that the majority of existing methods work with either a rigid object or a rigid camera, and a few consider a rigid camera and a rigid object at a certain distance. Moreover, for non-rigid camera motion over a non-rigid object, a pre-defined structure of the object is required for registration. Hence, simultaneous non-rigid registration of a non-rigid object captured in real time is the most challenging task. In this work, reference segmented non-rigid deformed images are used to construct the surface, so reconstruction is one of the major concerns for model construction. To overcome these issues, a force-based reconstruction is carried out to rectify the structure of the surface, and a visual reconstruction is carried out to estimate the camera position and restore the deformed structure to its reference. To obtain projection-based matching during the visual reconstruction, affine SIFT is modified according to the requirements to achieve the best performance.

In summary, a deformation-model feature estimation is formulated by proposing a modified affine-SIFT model to estimate the camera position using SfM for deformed and original coefficients in Sec. II. Visual and force-based reconstruction and model reconstruction are explained in Sec. III, with the experimental set-up in Sec. IV. The accuracy of the proposed method is presented in Sec. V, and the paper is concluded in Sec. VI.

II Deformation based Feature Estimation

Model construction and image stitching are carried out by analysing the overlapping regions of the multi-view images, similar to those in Fig. 11. To estimate this, spatial feature matching is the common method [25]. In this scenario, the scale invariant feature transform (SIFT) [26] is one of the most efficient feature estimators for identifying similar regions [27]. SIFT features are estimated by analysing the difference of Gaussian (DOG) of the Gaussian pyramid of a selected octave [26], and scaling does not affect the characteristics of the Gaussian pyramid [26, 28]. It is observed that a change in aspect ratio changes the representation of the DOG matrix and creates dissimilarity [28] in the feature point locations compared to the original ones, as shown in Fig. 2.

Fig. 2: Effect of Affine transformation in DOG matrix and the key point selection for SIFT.
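To make the matching step above concrete, the following is a minimal sketch of SIFT matching between two overlapping views, assuming OpenCV is available; the file names are illustrative only.

```python
# Minimal sketch of SIFT matching between two overlapping views,
# assuming OpenCV (cv2); file names are placeholders.
import cv2

img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test over 2-nearest-neighbour matches keeps only
# distinctive correspondences in the overlapping region.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative matches in the overlap")
```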

In this work, a hand-held camera is used to photograph the surface veins of the breast phantom. At the time of image acquisition, the camera touches the surface of the breast phantom, resulting in deformation of the vein pattern due to the force required for tactile imaging (as the hardness of the breast phantom is close to that of a real breast). For the reconstruction of the breast phantom surface, the accurate position of each sample image needs to be identified using suitable feature extraction and similar-region identification. In this scenario, a suitable feature extraction method is presented by analysing the deformations, as described in the subsequent subsections.

II-A Deformation Model

As the camera surface touches the phantom surface, deformation will be formed on the camera view plane. Since a flat surface is used for the camera view plane, as shown in Fig. 3,

Fig. 3: Scanner camera and front surface geometry, with assembly shown and IR LED position and SingleTact sensors indicated. (Four SingleTact sensors are used to measure the force and the application angle. DETAIL A shows the vertical cross-section of the lens, PCB, 850 nm IR and diffuser.)

the applied force, measured using four 10 N rated SingleTact CS8-10N pressure transducers (PPS Inc, US-CA) during image acquisition, will cause a stretching of the structure in the lateral directions.

(a) Scaling deformation (where forces F1 = F2)
(b) Affine deformation (where forces F1 = F2 and F′1 ≠ F′2)
Fig. 4: Deformation of images due to the application of force. (a) Shows the surface structure due to the application of a normal force, with surface markers ‘a’, ‘b’, ‘c’. (b) Shows the surface structure due to the application of an angular force at angle ‘θ’, with surface markers ‘a’, ‘b’, ‘c’, ‘a1’, ‘b1’, ‘c1’. The third image shows the structure of the surface with ‘θ’ rotation.

As the surface is flat, vertical force (with respect to the surface) will create a uniform force distribution at each point in the view plane and cause symmetrical stretching by maintaining the aspect ratio and structure as shown in Fig. 4(a). An angular offset force will cause different stretching effects at different points as shown in Fig. 4(b).

Here, the stretching $\delta$ and the force $F$ can be related by (1),

$$\delta = k\,F \qquad (1)$$

where $\delta$ is the stretch of the structure after applying the force and $k$ is the stretching constant. The stretching constant can be calculated by measuring the Young’s modulus and other material properties. A detailed stretching and force relation is explained in the following sections. The calibration of the Young’s modulus (stretching constant) must be done for each new sample, or series of measurements, and can be performed using the proposed device (as described in Section IV-B).
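As an illustrative numeric check of this linear relation, the sketch below derives one plausible stretching constant from the material properties used later in this paper (Poisson ratio, Young's modulus and scanner area from Table I); this choice of formula for $k$ is an assumption, not the paper's stated definition.

```python
# Illustrative check of the linear stretch-force relation in (1),
# delta = k * F; k here is derived from nu/(E*A) using Table I
# values, which is one plausible choice, not a stated formula.
nu, E, A = 0.5, 16100.0, 0.0048   # Poisson ratio, Pa, m^2
k = nu / (E * A)                   # assumed stretch per newton
F = 4.0                            # applied normal force (N)
print(f"lateral strain for {F} N: {k * F:.4f}")
```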

In this experiment, the applied force is not always normal to the surface, which results in an affine transformation. As discussed earlier, the affine transformation makes significant changes in the Gaussian pyramid. As a result, the SIFT features extracted from the affine-deformed image do not match those of the original image. So an affine feature estimation is necessary to find accurate matches in similar parts of the images.

II-B Affine Feature Extraction

An efficient feature estimation technique is required under affine transformation. Affine SIFT (A-SIFT) is one well-known affine feature estimation technique [28]. Here, the tilt parameter is defined by the scaling factor as in (2),

$$t = \frac{1}{\cos\theta} \qquad (2)$$

where $\theta$ is the tilt angle for the scale factor $t$ [28, 29].

In this work, the stretching and shrinking of the surface can be projected as a tilt in the camera viewpoint. The image stretching ratio was bounded by experimental observation, and in the real scenario the camera can be tilted to produce the same tilt ratio. So, using the standard A-SIFT tilt sampling will miss feature points for tilts outside its sampled range.

II-B1 Modified A-SIFT

For this work, the features need to be extracted from a latitude angle such that it covers the maximum tilt ($t_{\max}$) and the minimum tilt ($t_{\min}$). Thus the simulated tilt $t$ is taken as the geometric mean of the tilt range, as shown in (3),

$$t = \sqrt{t_{\max}\,t_{\min}} \qquad (3)$$

where $\theta = \cos^{-1}(1/t)$ is the latitude tilt angle that covers the SIFT features for the affine transformation. For the longitude, or rotation effect, the rotation step is taken as $120°$ to form an equilateral triangle in 3D space for the scaling measurement. In this scenario, the SIFT calculation is carried out for the sampled pairs $\{\theta, \phi\}$. The total SIFT calculation area is therefore only a small constant multiple of that of the original SIFT, and the A-SIFT matching overhead is reduced accordingly when matching against normal SIFT features.
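A minimal sketch of this reduced sampling is shown below, assuming OpenCV; the view simulation (rotate, anti-alias, then compress one axis, with the 0.8 anti-aliasing factor) follows the standard A-SIFT construction [28], while the tilt range, function names and the assumption $t > 1$ are ours.

```python
# Hedged sketch of the modified A-SIFT sampling: one latitude tilt
# (the geometric mean of the expected tilt range, eq. (3)) and three
# longitudes spaced 120 degrees apart; assumes tilt > 1.
import cv2
import numpy as np

def simulate_tilt(img, tilt, phi_deg):
    """Rotate by phi, anti-alias, then compress x by the tilt factor."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), phi_deg, 1.0)
    rot = cv2.warpAffine(img, M, (w, h))
    # Directional blur before subsampling to avoid aliasing (A-SIFT).
    sigma = 0.8 * np.sqrt(tilt ** 2 - 1.0)
    rot = cv2.GaussianBlur(rot, (0, 0), sigmaX=sigma, sigmaY=0.01)
    return cv2.resize(rot, (int(w / tilt), h),
                      interpolation=cv2.INTER_LINEAR)

def modified_asift_features(img, t_min, t_max):
    sift = cv2.SIFT_create()
    t = np.sqrt(t_min * t_max)          # eq. (3): geometric mean tilt
    views = [img] + [simulate_tilt(img, t, phi) for phi in (0, 120, 240)]
    return [sift.detectAndCompute(v, None) for v in views]
```

Restricting the simulation to a single latitude and three longitudes is what keeps the extraction and matching cost to a small multiple of plain SIFT, in contrast to the full A-SIFT tilt sweep.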

III Reconstruction Model

Deformation rectification is the most essential part of image reconstruction. As the deformations are caused by the application of force at the time of scanning, structure from motion (SfM) [30, 31] based visual models are not sufficient to reconstruct the surface. Thus the deformation needs to be rectified by measuring the applied force and undoing the deformation before applying the visual model. In this work, pressure sensors are used to estimate the applied force, and the angle of the probe as shown in the prototype model in Fig. 5.

Fig. 5: Model probe with the four force sensors creating a right-handed orthogonal coordinate frame (X and Y axes indicated).

Using this force measurement technique and the implied lateral strains of the phantom, the original structure can be reconstructed.

III-A Force Based Deformation Model

As discussed earlier, the application of force causes a stretching effect on the non-rigid surface, governed by the material Young’s modulus as shown in (4),

$$\epsilon_x = \epsilon_y = -\frac{\nu\,\sigma_z}{E} \qquad (4)$$

where $\epsilon_x$ and $\epsilon_y$ are the orthogonal lateral strains in the X and Y dimensions, $\nu$ is the material Poisson ratio, $E$ is the material Young’s modulus and $\sigma_z$ is the applied axial stress in the Z direction.

From an imaging standpoint, for an axial load with the camera axis coincident with the loading axis, an image feature will stretch proportionally to that load, which is consistent with (4). Additionally, each feature unit will deform by a factor of the applied load, as should be expected from (4), and the change in a feature’s radial location with respect to the camera axis, required to restore the feature given an axial load, will also be proportional to the feature’s respective stretched radial distance from the camera centre, as described in (5),

$$r' = (1 + \epsilon)\,r \qquad (5)$$

where $r'$ is the radial distance of the image feature after applying the axial load and $\Delta r$ is the distortion rectification required to recover the undistorted structure. $r$ is the undistorted radial distance of the image feature, so $r$ can be represented as (6):

$$r = r' - \Delta r \qquad (6)$$

Using (5) and (6), the relation between $\Delta r$ and $r'$ can be represented as shown in (7),

$$\Delta r = \frac{\epsilon}{1 + \epsilon}\,r' \qquad (7)$$
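As an illustration, the sketch below applies the radial rectification of (5)-(7) to feature coordinates; the strain value and the coordinates are illustrative, and the helper name is ours.

```python
# Hedged sketch of the radial rectification in (5)-(7): given the
# lateral strain eps from (4), every loaded feature position r' is
# pulled back toward the camera centre so that r = r' / (1 + eps),
# equivalently delta_r = eps/(1+eps) * r'.
import numpy as np

def rectify_features(pts, centre, eps):
    """pts: (N,2) loaded feature positions; returns unloaded positions."""
    v = pts - centre                 # radial vectors from camera axis
    return centre + v / (1.0 + eps)  # eq. (5)-(6)

pts = np.array([[320.0, 260.0], [100.0, 80.0]])
centre = np.array([320.0, 240.0])
eps = 0.5 * (4.0 / 0.0048) / 16100.0  # |nu*sigma_z/E|, illustrative load
print(rectify_features(pts, centre, eps))
```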

The stretching relation in (7) can be applied to the wider range of situations where the loading axis is not normal to the breast surface, i.e. the camera is tilted with respect to the structure surface, by realizing that the applied load will not be uniform across the image plane as it would be in the case of coincident axes.

$F_{y1}$ and $F_{y2}$ are the forces measured along the Y axis, and $F_{x1}$ and $F_{x2}$ are the forces measured along the X axis. The total force in the Z direction, applied at the centre of the image, can be calculated using (8),

$$F_z = F_{x1} + F_{x2} + F_{y1} + F_{y2} \qquad (8)$$

Additionally, the tilt angles $\theta_x$ and $\theta_y$ about the X and Y axes respectively can be calculated using (9),

$$\theta_x = \tan^{-1}\!\left(\frac{F_{y1} - F_{y2}}{K\,d_y}\right), \qquad \theta_y = \tan^{-1}\!\left(\frac{F_{x1} - F_{x2}}{K\,d_x}\right) \qquad (9)$$

where $d_x$ and $d_y$ are the separations between the sensors along the X and Y axes ($d_x = d_y = 53$ mm in this experiment) and $K$ is the material spring (Hooke’s) constant. For a tilt of $\theta$, a load variation across the image plane will be caused by a deviation in the scanner depth into the structure, dictated by $K$, as described in (10),

$$\Delta F(x, y) = K\,\Delta z(x, y) \qquad (10)$$

where $\Delta z(x, y)$ is the depth deviation in the Z direction at location $(x, y)$ of the image plane. The load distribution across an image is calculated using the average load at the centre of the image with a position-dependent offset related to the tilt angles of the scanner, by firstly defining a flat image-plane normal vector, $\hat{n} = (0, 0, 1)^T$. Then, for tilt angles $\theta_x$ and $\theta_y$, the tilted normal vector can be calculated using (11),

$$\hat{n}' = R_x(\theta_x)\,R_y(\theta_y)\,\hat{n} \qquad (11)$$

where $R_x$ and $R_y$ are the elementary rotations about the X and Y axes. The depth deviation can then be calculated using (12),

$$\Delta z(x, y) = -\frac{n'_x\,x + n'_y\,y}{n'_z} \qquad (12)$$
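The sketch below illustrates (8)-(12) numerically; the sensor layout, sign conventions and variable names are assumptions rather than the exact probe firmware.

```python
# Hedged sketch of the four-sensor force resolution in (8)-(12).
import numpy as np

K = 18e3        # Hooke's constant, N/m (18 N/mm from Table I)
d = 0.053       # sensor separation, m

def probe_state(f_x1, f_x2, f_y1, f_y2):
    """Total normal load (8) and tilt angles (9) from the four sensors."""
    f_z = f_x1 + f_x2 + f_y1 + f_y2
    theta_x = np.arctan2(f_y1 - f_y2, K * d)
    theta_y = np.arctan2(f_x1 - f_x2, K * d)
    return f_z, theta_x, theta_y

def depth_deviation(x, y, theta_x, theta_y):
    """Depth offset (12) at image-plane location (x, y) for a tilted probe."""
    # Tilted plane normal, eq. (11): rotate (0,0,1) about Y then X.
    nx = np.sin(theta_y)
    ny = -np.sin(theta_x) * np.cos(theta_y)
    nz = np.cos(theta_x) * np.cos(theta_y)
    return -(nx * x + ny * y) / nz

f_z, tx, ty = probe_state(1.2, 1.0, 1.1, 0.9)
print(f_z, np.degrees(tx), np.degrees(ty))
```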

Combining the lateral strain equation (7) and the force distribution equation (10), the relation between the force and the change of radial pixel location can be represented in a general form for non-normal axial loads, based on the tilt angle and the average load, as shown in (13),

$$\Delta r(x, y) = \frac{\epsilon(x, y)}{1 + \epsilon(x, y)}\,r', \qquad \epsilon(x, y) = \frac{\nu}{E}\cdot\frac{F_z + K\,\Delta z(x, y)}{A} \qquad (13)$$

where $A$ is the scanner area; in this experiment, $A = 0.0048\ \mathrm{m}^2$. Using this analytical equation, the surface structure can be estimated, as the equation undoes the warping in the image. However, the position and orientation of the surface cannot be estimated using the force analysis alone, as no yaw term can be measured. A visual position estimation of the surface completes the model construction in this situation.

III-B Visual Reconstruction Model

For surface construction and image stitching, proper reconstruction of the deformed surface is an important task. In this scenario, camera position estimation and re-projection are the best possible solution for recovering the original surface macro-structure; hence, the camera position can be used to estimate the original structure of the surface. For this, SfM provides the relative position and orientation of the surface. As the structure of the surface is affine-distorted due to the force deformation effect discussed earlier, the modified feature estimation is used to obtain matching features for the deformed model.

Using the modified A-SIFT and SfM [32] pipeline, the relative camera position can be estimated by determining the rotation matrix $R$ and translation vector $T$. As the scanning is done by touching and compressing the surface, we can assume that the change of position in the Z direction defines the change of force, as in (4). A relative position-change estimation can be used to correct the translation matrix for re-projection. To compute the reconstructed structure, the tilt angle needs to be zero, a condition provided by the deformation model.

Let $[R_i\,|\,T_i]$ define the relative rotation and translation matrix of image $i$, projecting world positions using the relation shown in (14),

$$s\,(x, y, 1)^T = K_c\,[R_i\,|\,T_i]\,(X, Y, Z, 1)^T \qquad (14)$$

where $(x, y, 1)$ defines the homogeneous image location for the camera intrinsic matrix $K_c$ and $s$ is a projective scale factor. To identify the camera position, re-projection of the points from the corrected, non-deformed position will show the original structure.

$I_i$ and $I_{i+1}$ are two images where the image $I_{i+1}$ is deformed. To get the best features, modified A-SIFT is applied on the image $I_{i+1}$ so that better matching is possible with image $I_i$. Using the SfM pipeline [32], feature matching, and random sample consensus (RANSAC), we estimate inlier matches from a set of SIFT matching data between image $I_i$ and image $I_{i+1}$ that contains outliers [33, 34]. Fig. 6 shows a sample of the matching sequences, where the white regions represent more inlier matches and the dark regions represent fewer inlier matches.

Fig. 6: Inliers matching for sequential Images. White areas indicate better matching due to physical closeness of images.

It can be assumed that for sequential scanning, the majority of matches will occur in the next image and a sufficient number of matching points can be obtained. The eight-point algorithm provides the fundamental matrix $F_m$ from the inlier points, using the relation in (15),

$$x_{i+1}^T\,F_m\,x_i = 0 \qquad (15)$$

where $x_i$ and $x_{i+1}$ define the homogeneous locations of matching inliers. Using the camera parameters $K_c$ and the fundamental matrix $F_m$, the relative camera position and orientation of the image $I_{i+1}$ can be calculated using singular value decomposition (SVD) and a solution estimation method [33, 32, 35]. Here, $[R_{i+1}\,|\,T_{i+1}]$ defines the relative rotation and translation of camera $i+1$.
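A minimal sketch of this pose-recovery step, assuming OpenCV; the RANSAC threshold is illustrative, and the function name is ours.

```python
# Hedged sketch of relative pose recovery: eight-point fundamental
# matrix with RANSAC, essential matrix from the intrinsics, then
# SVD-based pose decomposition, per eq. (15).
import cv2
import numpy as np

def relative_pose(pts_i, pts_j, Kc):
    """pts_i, pts_j: (N,2) matched point candidates in images i, i+1."""
    Fm, mask = cv2.findFundamentalMat(pts_i, pts_j, cv2.FM_RANSAC,
                                      ransacReprojThreshold=1.0)
    inl_i = pts_i[mask.ravel() == 1]
    inl_j = pts_j[mask.ravel() == 1]
    E = Kc.T @ Fm @ Kc                 # essential matrix from F and Kc
    _, R, T, _ = cv2.recoverPose(E, inl_i, inl_j, Kc)
    return R, T, len(inl_i)
```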

To correct the visual deformation, the camera rotation and translation should lie in the same plane as the image, as shown in Fig. 7.

Fig. 7: Camera plane and view plane re-projection using re-estimation of the rotation and translation matrices to reduce the force-based visual deformation (where the force and the camera and view planes are calculated using image deformation analysis).

Hence, the corrected $R$ and $T$ matrices can be represented as shown in (16),

$$R' = I, \qquad T' = (T_x, T_y, 0)^T \qquad (16)$$

where $R'$ and $T'$ are the corrected rotation and translation respectively. In this case, the rotation about the Z axis ($\theta_z$) is removed to correct the yaw rotation effect, and the rotations about the X and Y axes ($\theta_x$ and $\theta_y$) are removed to correct the affine deformation. In the translation vector, $T_x$ and $T_y$ define the X and Y translation, and $T_z$ is set to 0 to remove the scaling effect. The image is re-projected using the corrected $[R'\,|\,T']$ as described in (17),

$$s\,(x', y', 1)^T = K_c\,[R'\,|\,T']\,(X, Y, Z, 1)^T \qquad (17)$$

where $(x', y')$ is the re-projected location of $(x, y)$ using the corrected rotation matrix and translation vector.
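In code, the correction of (16)-(17) reduces to zeroing the rotation and the Z translation before re-projection; the following hedged sketch illustrates this, with names of our own choosing.

```python
# Hedged sketch of the pose correction in (16)-(17): keep only the
# in-plane translation, then re-project reconstructed points with
# the corrected pose.
import numpy as np

def correct_pose(R, T):
    """Replace the estimated pose by its in-plane translation component."""
    R_corr = np.eye(3)                          # remove yaw/pitch/roll
    T_corr = np.array([T[0], T[1], 0.0]).ravel()  # Tz = 0: no scaling
    return R_corr, T_corr

def reproject(points3d, Kc, R_corr, T_corr):
    """points3d: (N,3) reconstructed points -> (N,2) corrected pixels."""
    cam = points3d @ R_corr.T + T_corr
    uvw = cam @ Kc.T
    return uvw[:, :2] / uvw[:, 2:3]
```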

III-C Model Reconstruction

Fig. 8: Surface reconstruction and scanning path optimization (the $(i+1)^{th}$ image is compared with the $i^{th}$ reconstruction to obtain the $(i+1)^{th}$ reconstructed surface.)

As discussed earlier, visual-based reconstruction alone is not sufficient to rectify the deformation because of the non-steady deformation caused by the applied force. In this scenario, visual reconstruction alone gives a relative camera position, but the stretching of the surface does not have a linear relation with the camera position obtained from the visual reconstruction. This means that visual estimation alone is not sufficient for non-rigid surface reconstruction.

The force-based reconstruction rectifies the structure using the force analysis, but time is required to capture the exact deformation produced by a given force. Moreover, the visual deformation introduced by the lens cannot be reconstructed using the force analysis. In this scenario, dual force and visual rectification pipelines are necessary to identify the original structure of the surface.

Let $\{I_1, I_2, \ldots, I_i, I_{i+1}, \ldots\}$ be the image set captured using the probe. It is assumed that $I_1$ shows the non-deformed surface and is assigned as the reconstructed sample $M_1$; in this scenario, $M_i$ can be defined as the $i^{th}$ reconstruction. The force-based deformation model is applied on $I_i$ and $I_{i+1}$ to generate the rectified image $\hat{I}_{i+1}$.

SIFT and modified A-SIFT are applied on the images $M_i$ and $\hat{I}_{i+1}$ respectively. SIFT matching, RANSAC and the camera pose of image $\hat{I}_{i+1}$ are estimated using $M_i$ as the reference. As discussed before, $[R'\,|\,T']$ is the rectified, correct camera position used to generate the visual- and force-rectified image.

Using the (X, Y) translations obtained from the translation matrix, the position of $\hat{I}_{i+1}$ can be identified with respect to $M_i$. Taking the starting point of $M_1$ as the origin, the absolute position of each image can be estimated. Using image stitching methods, the next reconstruction instance $M_{i+1}$ is generated. The overall visual reconstruction procedure is depicted in Fig. 8.
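Putting the pieces together, the following hedged sketch outlines the reconstruction loop of this subsection, reusing the relative_pose and correct_pose sketches above; match() is a simplified stand-in for the paper's matching stage, the rectify argument stands for the force rectification of (13), and the stitching step is left abstract.

```python
# Hedged sketch of the dual force/visual reconstruction loop.
import cv2
import numpy as np

def match(img_a, img_b):
    """Simplified SIFT + ratio-test matcher returning point arrays."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b

def reconstruct(images, rectify, Kc):
    """images: probe frames; rectify: force-based rectifier, eq. (13)."""
    model, position, path = images[0], np.zeros(2), [np.zeros(2)]
    for frame in images[1:]:
        frame_hat = rectify(frame)              # force rectification
        pts_m, pts_f = match(model, frame_hat)
        R, T, _ = relative_pose(pts_m, pts_f, Kc)
        _, T_c = correct_pose(R, T)             # eq. (16)
        position = position + T_c[:2]           # absolute (X, Y)
        path.append(position.copy())
        # stitching of frame_hat into model at `position` omitted here
    return model, np.array(path)
```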

III-D Accuracy Analysis

The accuracy of the model depends on the deformation detection and rectification. Firstly, it has been shown that the force calculation using the four pressure sensors provides the total force, the angle of the probe, and the depth of the probe. Using the force-based orientation estimation method discussed in Sec. III-A, the deformation of the view surface, as discussed in Sec. II-A, can be calculated and the original surface structure estimated. The scanning force angle can therefore vary (vertical or angular), and the surface may not always be horizontal; this is not a problem, as angles are measured with respect to the surface.

The visual rectification, by estimation of the probe (camera) position, provides the non-deformed surface structure. Moreover, the optimized modified A-SIFT reduces the computational time required for A-SIFT. Using SfM over the optimised matching obtained with the modified A-SIFT provides a large number of inlier matches to estimate the relative camera position. As the scanning probe touches the surface during scanning, the structure of the surface cannot be assumed from the camera position alone. Moreover, the camera angle is calculated by visual measurement of the stretching, so the camera angle depends on both the force and the stretching constants (Young’s modulus $E$ and Poisson ratio $\nu$). Hence, the original probe angle will not equal the visually estimated camera angle, as noted in Secs. III-B and III-C.

In this application, a combination of visual and force model will correctly estimate the surface structure and the probe position (rotation and translation), as the force model provides a unified plane allowing for more matching, and the visual model estimates the structure. In other words, the generated error in the visual reconstruction can be removed using the force reconstruction and the error in the force based reconstruction can be removed using the visual reconstruction. Moreover, the image reconstruction is carried out at the position of the first image (assuming no force is applied). So it can be concluded that the reconstruction will provide the original surface structure accurately.

IV Experiment Set-up

In this work, the breast phantom and the camera are prepared for scanning. All the required equipment details and calibration methods are discussed in the following subsections.

IV-A Equipment Details

Here, the model probe is constructed using a visual layer with IR support and four sensors (as shown in Fig. 3). The 850 nm IR illumination ring is placed between the camera and the transparent visual layer to capture the vein pattern beneath the skin.

For the breast model, Ecoflex 00-10 shore hardness silicone (Smooth-On, US-PA) with 20% thinner is used to simulate the elasticity of a typical breast [6, 7]. The skin is made up of very soft 000-35 shore hardness silicone to simulate skin stretching. The vein pattern is created using a mixture of silicone and an IR-absorbing material (graphite powder) and placed between the breast tissue and the skin. Fig. 9 shows the layers of the breast model.

Fig. 9: Breast model material layers and scanning probe assembly

A continuous scan is carried out across the surface of the breast phantom, and the scanning path is recorded using a VICON camera tracking system. Here the VICON system comprises twelve cameras and can measure global position and orientation with six degrees of freedom at 0.5 mm position accuracy [36]. The material of the phantom is very soft (like the breast), so very little pressure will change the surface and stretch the vein pattern as in the real scenario.

IV-B Probe and Model Calibration

The four pressure sensors that are used to measure the tilt angle via (9) can also be used to calculate Hooke’s constant, $K$, with a known reference angle. An Invensense MPU6050 is used to provide a reference measurement of the scanner tilt angle, and the system is calibrated using an assumed $K$ for the sensors to identify the real Hooke’s constant. Fig. 10 shows the calculation of $K$ using the data points; here the measured Hookean constant is estimated as $K = 18$ N/mm. With the scanner area $A$ taken to be 0.0048 m² and the material Poisson ratio $\nu$ taken as 0.5 (incompressible), the value of Young’s modulus is $E = 16.1$ kPa, as shown in Fig. 10. Calibrating the system for different patients is achieved by applying the scanner normally to the tissue and comparing loaded feature positions with those of an unloaded surface image, as we have done here. A summary of the experimental parameters for the model validation is presented in Table I.

Fig. 10: Young’s modulus and Hooke’s constant calibration data. E is calculated using the measured image feature deviation; K is calculated using a reference angle measurement.

Image set: 336 | Sensor distance: 53 mm | Poisson ratio: 0.5 | Hooke’s constant: 18 N/mm | Scanner area: 0.0048 m² | Young’s modulus: 16.1 kPa

TABLE I: Experiment set-up.

Here, the stretching constant can be calculated at the beginning of the clinical testing: when the probe touches the surface, the first set of images is used to calculate the stretching constant and to select the first image of the experiment.
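A hedged sketch of how the stretching constant (Young's modulus) could be fitted from such first-touch images is given below, using (7) rearranged; the data values are illustrative only, not measurements from the paper.

```python
# Hedged sketch of the Sec. IV-B calibration: fit Young's modulus
# from measured feature deviations via least squares, using eq. (7)
# rearranged as delta_r/r' = eps/(1+eps) with eps = nu*F/(E*A).
import numpy as np

nu, A = 0.5, 0.0048                        # Poisson ratio, scanner area (m^2)
forces = np.array([1.0, 2.0, 3.0, 4.0])    # applied normal loads (N)
ratios = np.array([0.0064, 0.0128, 0.0190, 0.0252])  # measured dr/r'

eps = ratios / (1.0 - ratios)              # invert eps/(1+eps)
# eps = (nu/(E*A)) * F -> slope of eps vs F gives nu/(E*A)
slope = np.linalg.lstsq(forces[:, None], eps, rcond=None)[0][0]
E = nu / (slope * A)
print(f"estimated Young's modulus: {E/1000:.1f} kPa")
```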

V Results

In this experiment, the images are pre-filtered by selecting the scanner area using a mask to prepare for the experiment, as shown in Fig. 11.

Fig. 11: Rectification and vein part selection using mask

After that, the modified A-SIFT feature estimation is carried out to find the inlier matches. For better matching, the SIFT parameters are set in such a way that only the vein pattern regions are selected for feature extraction: a peak threshold of 1 and an edge threshold of 10 are used for modified A-SIFT matching. Fig. 12 shows that the use of modified A-SIFT increases the inlier matching for the breast images in sequential matching by well over 100%.

Fig. 12: Matching efficiency comparison between modified A-SIFT and SIFT for sequential inlier matching

V-A Model Construction Result

In this work, the scanning is carried out in sequential order; hence, the majority of correct A-SIFT matches occur between consecutive images in the set. Fig. 13(a) shows the inlier matching for sequential images. It is observed that some strong matches also appear between non-consecutive scans because of the random movement of the probe. From the figure, it can be concluded that the modified A-SIFT improves the matching efficiency for the image set over conventional SIFT.

The translation matrix is generated using the sequential inlier matching shown in Fig. 13(a). In the proposed method, the (X, Y) components of the translation matrix define the position of the camera. The resulting path in the experiment is compared with the real path obtained using the VICON camera tracking system; the comparison is shown in Fig. 13(b).

(a) Inliers matching of image sequences
(b) Comparison of extracted camera movement path with real VICON data
Fig. 13: Inliers matching and scanning path estimation of the proposed scheme, where the green marked regions show the recurrence regions during scanning

The camera position detection of the proposed scheme is compared with recent non-rigid reconstruction methods [32, 11, 19]. The authors made use of the detailed explanations in Refs. [11] and [19] (as described in the literature) for the separate implementations. As the approach of this work differs from the existing works, the implementations were validated using similar datasets (as in [32, 11, 19]), and results similar to those claimed in Refs. [32, 11, 19] were achieved. Also, the better matching at intermediate stages claimed in Fig. 13(a) is clearly visible in the scanning path, as indicated by the green marked region. The results of the scheme for a 1274 mm camera travel path are tabulated in Table II.

                   Proposed | SfM [32] | FEM [11] | NRSfM [19]
RMSE error (mm):      2.261 |   15.263 |   11.821 |      12.36
Error ratio (%):      0.296 |    1.833 |    1.496 |      1.221

TABLE II: Camera position estimation comparison of the proposed scheme with existing schemes.

Here the average error is the root mean square error (RMSE) in mm. The error ratio defines the total propagated error ratio, as shown in (18),

$$\text{Error ratio} = \frac{e_{\text{prop}}}{L_{\text{path}}} \times 100\% \qquad (18)$$

where $e_{\text{prop}}$ is the total propagated position error and $L_{\text{path}}$ is the total camera travel path length.
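For reference, a hedged sketch of these two metrics is given below; the paper's exact aggregation may differ from this reading of (18).

```python
# Hedged sketch of the evaluation metrics: RMSE between estimated
# and VICON positions, and a propagated error ratio along the path.
import numpy as np

def rmse(est, ref):
    """est, ref: (N,2) positions in mm."""
    return np.sqrt(np.mean(np.sum((est - ref) ** 2, axis=1)))

def error_ratio(est, ref):
    """Propagated per-step error as a percentage of the path length."""
    step_err = np.linalg.norm(np.diff(est - ref, axis=0), axis=1)
    path_len = np.sum(np.linalg.norm(np.diff(ref, axis=0), axis=1))
    return 100.0 * np.sum(step_err) / path_len
```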

The total error during scanning with respect to the increasing image sequence is shown in Fig. 14. The error is calculated by measuring the linear distance between the VICON [36] position and the estimated position in each scheme.

Fig. 14: Camera position estimation error comparison with increasing number of images for the proposed scheme and existing schemes [32, 11, 19]

The error in the camera position occurs due to the stretching deformation. A clearer view of the camera position estimation is shown in Fig. 15. Here, the camera position error factor is calculated from the relative camera position translation vector compared with the original camera position vector obtained from the VICON system [36]. It is clear that the camera position estimation is closer to the ground truth for the proposed scheme than for the existing schemes [32, 11, 19]. The better camera position estimate provides a better reconstructed surface of the breast phantom.

(a) Steady force applied (increasing and decreasing force)
(b) Angular affine force applied (with a force ratio along the X axis)
Fig. 15: Camera position error factor with the application of force

Using the scanning path shown in Fig. 13(b), the reconstruction is carried out. Fig. 16(b) shows the reconstructed surface of the vein pattern for the original reference in Fig. 16(a).

(a) Original
(b) Reconstructed
Fig. 16: Visual comparison of original and reconstructed surface

It is clearly observed that the proposed scheme recreates the vein pattern of the reference almost identically. Moreover, the scaling and stretching effects do not adversely affect this reconstruction, due to the efficient visual and force based model analysis.

It can be claimed from the experiment that the proposed scheme provides an efficient non-rigid reconstruction method for model construction and image registration.

V-B Discussion

In this work, the images are rectified using the force and visual reconstruction methods to build a surface model of a breast phantom from the non-rigid images. The reconstruction procedure makes three major contributions to knowledge for efficient reconstruction. Firstly, the force analysis and angular measurement reduce the contact-based deformation; the calculation of accurate Young’s modulus and Hooke’s constant values gives the accurate stretching factor explained in (13). Secondly, the visual reconstruction method, analysing the camera position obtained using SfM, allows for correction of the camera position and re-projection using the rectified position, improving the accuracy of the reconstruction. Finally, the modified A-SIFT improves the matching efficiency of the proposed scheme compared with SIFT; the accurate angle measurement and proper selection of the affine coefficients also make the feature extraction and matching much faster than the traditional A-SIFT, which simulates a much larger set of tilt and rotation views for both extraction and matching. Moreover, using the modified A-SIFT improves the accuracy of the SfM approach, optimizing the camera position and producing a better reconstruction, where SfM alone produces an incomprehensible structure.

The stretching constants of the experimental model are calculated when the scanning probe first touches the surface. As a result, the variation of breast elasticity between patients does not restrict accurate localization. Also, these advances in image mosaicing, when applied to tactile imaging of breast lesions, will allow comprehensive global pressure maps to be produced, offering greater diagnostic potential in the future.

VI Conclusions

In this paper, a medical image construction and registration technique has been proposed for segmented images scanned using a prototype breast scanning probe. The proposed force-based rectification model removes the stretching deformation due to the contact force, and the newly modified affine SIFT feature estimation technique finds spatial feature points on the deformed surface for model construction. The combination of both reconstruction models provides a nearly flawless solution for camera position estimation and image reconstruction. Additionally, this type of system will work for any body tissue with a comprehensive vascular network and can be extended to clinical examination of tissues such as the abdomen.

Acknowledgment

The authors would like to thank the EPSRC for funding the “Improvement of Breast Cancer Tactile Imaging through Non-Rigid Mosaicing” project through grant number EP/P011276/1. The authors would also like to thank Sure Inc. and PPS Inc. for supporting this project with pressure sensors.

References

  • [1] A. Jemal, F. Bray, M. M. Center, J. Ferlay, E. Ward, and D. Forman, “Global cancer statistics,” CA: a cancer journal for clinicians, vol. 61, no. 2, pp. 69–90, 2011.
  • [2] S. McCain, J. Newell, S. Badger, R. Kennedy, and S. Kirk, “Referral patterns, clinical examination and the two-week-rule for breast cancer: a cohort study,” The Ulster medical journal, vol. 80, no. 2, p. 68, 2011.
  • [3] K. Parker, S. Huang, R. Musulin, and R. Lerner, “Tissue response to mechanical vibrations for “sonoelasticity imaging”,” Ultrasound in medicine & biology, vol. 16, no. 3, pp. 241–246, 1990.
  • [4] K. Nightingale, M. S. Soo, R. Nightingale, and G. Trahey, “Acoustic radiation force impulse imaging: in vivo demonstration of clinical feasibility,” Ultrasound in medicine & biology, vol. 28, no. 2, pp. 227–235, 2002.
  • [5] J. F. Greenleaf, M. Fatemi, and M. Insana, “Selected methods for imaging elastic properties of biological tissues,” Annual review of biomedical engineering, vol. 5, no. 1, pp. 57–78, 2003.
  • [6] V. Egorov and A. P. Sarvazyan, “Mechanical imaging of the breast,” IEEE transactions on medical imaging, vol. 27, no. 9, pp. 1275–1287, 2008.
  • [7] V. Egorov, T. Kearney, S. B. Pollak, C. Rohatgi, N. Sarvazyan, S. Airapetian, S. Browning, and A. Sarvazyan, “Differentiation of benign and malignant breast lesions by mechanical imaging,” Breast cancer research and treatment, vol. 118, no. 1, p. 67, 2009.
  • [8] S. Nioka and B. Chance, “Nir spectroscopic detection of breast cancer,” Technology in cancer research & treatment, vol. 4, no. 5, pp. 497–512, 2005.
  • [9] V. P. Zharov, S. Ferguson, J. F. Eidt, P. C. Howard, L. M. Fink, and M. Waner, “Infrared imaging of subcutaneous veins,” Lasers in Surgery and Medicine: The Official Journal of the American Society for Laser Medicine and Surgery, vol. 34, no. 1, pp. 56–61, 2004.
  • [10] A. Agudo, B. Calvo, and J. M. M. Montiel, “Fem models to code non-rigid ekf monocular slam,” in 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Nov 2011, pp. 1586–1593.
  • [11] A. Agudo, F. Moreno-Noguer, B. Calvo, and J. M. M. Montiel, “Sequential non-rigid structure from motion using physical priors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 5, pp. 979–994, May 2016.
  • [12] A. Agudo, B. Calvo, and J. M. M. Montiel, “Finite element based sequential bayesian non-rigid structure from motion,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, June 2012, pp. 1418–1425.
  • [13] J. Goldak, A. Chakravarti, and M. Bibby, “A new finite element model for welding heat sources,” Metallurgical transactions B, vol. 15, no. 2, pp. 299–305, 1984.
  • [14] S. J. Julier and J. K. Uhlmann, “New extension of the kalman filter to nonlinear systems,” in Signal processing, sensor fusion, and target recognition VI, vol. 3068.   International Society for Optics and Photonics, 1997, pp. 182–194.
  • [15] S. Haykin, Kalman filtering and neural networks.   John Wiley & Sons, 2004, vol. 47.
  • [16] N. Haouchine, J. Dequidt, M. O. Berger, and S. Cotin, “Single view augmentation of 3d elastic objects,” in 2014 IEEE International Symposium on Mixed and Augmented Reality, 2014, pp. 229–236.
  • [17] A. Malti and C. Herzet, “Elastic shape-from-template with spatially sparse deforming forces,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 143–151.
  • [18] A. Agudo, J. M. M. Montiel, L. Agapito, and B. Calvo, “Modal space: A physics-based model for sequential estimation of time-varying shape from monocular video,” Journal of mathematical imaging and vision, vol. 57, no. 1, pp. 75–98, 2017.
  • [19] R. Garg, A. Roussos, and L. Agapito, “Dense variational reconstruction of non-rigid surfaces from monocular video,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition, June 2013, pp. 1272–1279.
  • [20] M. Sepehrinour and S. Kasaei, “3d reconstruction of non-rigid surfaces from realistic monocular video,” in 2015 9th Iranian Conference on Machine Vision and Image Processing (MVIP), Nov 2015, pp. 199–202.
  • [21] R. Yu, C. Russell, N. D. F. Campbell, and L. Agapito, “Direct, dense, and deformable: Template-based non-rigid 3d reconstruction from rgb video,” in 2015 IEEE International Conference on Computer Vision (ICCV), Dec 2015, pp. 918–926.
  • [22] M. Innmann, M. Zollhöfer, M. Nießner, C. Theobalt, and M. Stamminger, “Volumedeform: Real-time volumetric non-rigid reconstruction,” in Computer Vision – ECCV 2016.   Springer International Publishing, 2016, pp. 362–379.
  • [23] A. Agudo, F. Moreno-Noguer, B. Calvo, and J. Montiel, “Real-time 3d reconstruction of non-rigid shapes with a single moving camera,” Computer Vision and Image Understanding, vol. 153, pp. 37–54, 2016.
  • [24] R. A. Newcombe, D. Fox, and S. M. Seitz, “Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 343–352.
  • [25] R. Szeliski et al., “Image alignment and stitching: A tutorial,” Foundations and Trends® in Computer Graphics and Vision, vol. 2, no. 1, pp. 1–104, 2007.
  • [26] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision, vol. 60, no. 2, pp. 91–110, 2004.
  • [27] R. Summan, G. Dobie, G. West, S. Marshall, C. MacLeod, and S. G. Pierce, “The influence of the spatial distribution of 2-d features on pose estimation for a visual pipe mapping sensor,” IEEE Sensors Journal, vol. 17, no. 19, pp. 6312–6321, 2017.
  • [28] G. Yu and J.-M. Morel, “Asift: An algorithm for fully affine invariant comparison,” Image Processing On Line, vol. 1, pp. 11–38, 2011.
  • [29] A. Shahroudnejad and M. Rahmati, “Copy-move forgery detection in digital images using affine-sift,” in 2016 2nd International Conference of Signal Processing and Intelligent Systems (ICSPIS), Dec 2016, pp. 1–5.
  • [30] L. Torresani, A. Hertzmann, and C. Bregler, “Nonrigid structure-from-motion: Estimating shape and motion with hierarchical priors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 5, pp. 878–892, May 2008.
  • [31] M. Song, H. Watanabe, and J. Hara, “Robust 3d reconstruction with omni-directional camera based on structure from motion,” in 2018 International Workshop on Advanced Image Technology (IWAIT), Jan 2018, pp. 1–4.
  • [32] N. Micheletti, J. H. Chandler, and S. N. Lane, “Structure from motion (sfm) photogrammetry,” 2015.
  • [33] D. Nistér, “Preemptive ransac for live structure and motion estimation,” Machine Vision and Applications, vol. 16, no. 5, pp. 321–329, 2005.
  • [34] R. C. Bolles, H. H. Baker, and D. H. Marimont, “Epipolar-plane image analysis: An approach to determining structure from motion,” International journal of computer vision, vol. 1, no. 1, pp. 7–55, 1987.
  • [35] R. Hartley and A. Zisserman, “Epipolar geometry and the fundamental matrix,” Multiple view geometry, 2000.
  • [36] R. Summan, S. Pierce, C. Macleod, G. Dobie, T. Gears, W. Lester, P. Pritchett, and P. Smyth, “Spatial calibration of large volume photogrammetry based metrology systems,” Measurement, vol. 68, pp. 189–200, 2015.