Standard Plane Detection in 3D Fetal Ultrasound Using an Iterative Transformation Network

Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in a 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters, further improving localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83 mm/12.7° and 3.80 mm/12.6° for the transventricular and transcerebellar planes respectively and takes 0.46 s per plane.


1 Introduction

Obstetric ultrasound (US) is conducted as a routine screening examination between 18 and 24 weeks of gestation. US imaging of the fetal head enables clinicians to assess fetal brain development and detect growth abnormalities. This requires the careful selection of standard scan planes, such as the transventricular (TV) and transcerebellar (TC) planes, that contain key anatomical structures [6]. However, it is challenging and time-consuming even for experienced sonographers to manually navigate a 2D US probe to find the correct standard plane. The task is highly operator-dependent and requires a great amount of expertise. With the advent of 3D fetal US, a volume of the entire fetal brain can be acquired quickly with little training. But the problem of locating the diagnostically required standard planes for biometric measurements remains. There is a strong need for automatic methods for 2D standard plane extraction from 3D volumes to improve clinical workflow efficiency.

Related work: Recently, deep learning approaches have shown success in many medical image analysis applications. Several works have applied deep learning techniques to standard plane detection in fetal US [1, 3, 2, 7]. Baumgartner et al. [1] use a convolutional neural network (CNN) for categorisation of 13 fetal standard views. Chen et al. [3] adopt a CNN-based image classification approach for detecting fetal abdominal standard planes, which they later combine with a recurrent neural network (RNN) that takes temporal information into account [2]. However, these methods identify standard planes from 2D US videos, not 3D volumes. Ryou et al. [7] attempt to detect fetal head and abdominal planes from 3D fetal US by breaking down the 3D volume into a stack of 2D slices which are then classified as head or abdomen using a CNN.

Plane detection is treated as an image classification problem in the above works. In contrast, we approach plane detection by regressing the rigid transformation parameters that define the plane position and orientation. There are several works on using CNNs to predict transformations. Kendall et al. [5] introduce PoseNet for regressing 6-DoF camera pose from an RGB image, with a loss function that uses quaternions to represent rotation. Hou et al. [4] propose SVRNet for predicting the transformation from a 2D image to 3D space and use anchor points as a new representation for rigid transformations. These works predict an absolute transformation with respect to a known reference coordinate system in one pass of the CNN. Our work is different as we use an iterative approach, with multiple passes of the CNN, to predict a relative transformation with respect to the current plane coordinates, which change at each iteration. A relative transformation is used because our 3D volumes are not aligned to a reference coordinate system.

Contributions: In this paper, we propose the Iterative Transformation Network (ITN) that uses a CNN to detect standard planes in 3D fetal US. The network learns a mapping between a 2D plane and the transformation required to move that plane towards the standard plane within a 3D volume. Our contributions are threefold: (1) ITN is a general deep learning framework built for 2D plane detection in 3D volumes. The iterative approach regresses transformations that bring the plane closer to the standard plane. This reduces computation cost as ITN selectively samples only a few planes in the 3D volume unlike classification-based methods that require dense sampling [1, 3, 2, 7]. (2) We study the effect on plane detection accuracy using different transformation representations (quaternions, Euler angles, rotation matrix, anchor points) as CNN regression outputs. (3) We improve ITN performance by incorporating additional classification probability outputs as confidence measures of the regressed transformation parameters.

Figure 1: (a) Overall plane detection framework using ITN. (b) Composition of transformations. Red: GT plane. Blue: Arbitrary plane. Black: Identity plane.

At inference, the classification probabilities are used as confidence scores to yield more accurate localisation. During training, regression and classification outputs are learned in a multi-task learning framework, which improves the generalisation ability of the model and prevents overfitting.

2 Method

Overall Framework: Fig. 1a presents the overall ITN framework for plane detection. Given a 3D volume V, the goal is to find the ground truth (GT) standard plane (red). Starting with a random plane initialisation (blue), the 2D image of the plane is extracted and input to a CNN, which predicts a 3D transformation that moves the plane to a new position closer to the GT plane. The image extracted at the new plane location is then passed to the CNN, and the process is repeated until the plane converges to the GT plane location.

Composition of Transformations: A transformation is defined with respect to a reference coordinate system. In Fig. 1b, we define an identity plane (black) with origin at the volume centre. T_A and T_GT are defined in the coordinate system of the identity plane and move the identity plane to the arbitrary plane (blue) and the GT plane (red) respectively. T_rel is defined in the coordinate system of the arbitrary plane and moves the arbitrary plane to the GT plane. Note that our ITN predicts T_rel, which is a relative transformation from the point of view of the current plane, not from the identity plane. We compute these transformations from each other using T_GT = T_A ⊕ T_rel and T_rel = T_GT ⊖ T_A, where ⊕ and ⊖ are the composition and inverse composition operators respectively. The computations defined by the operators depend on the choice of transformation representation.
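If rigid transformations are represented as 4×4 homogeneous matrices (one concrete choice; the paper leaves the operators representation-dependent), ⊕ becomes matrix multiplication and ⊖ a left-multiplication by an inverse. A minimal sketch:

```python
import numpy as np

def compose(T_a, T_rel):
    # T_a "⊕" T_rel: apply the relative transform in the frame reached by T_a.
    return T_a @ T_rel

def inverse_compose(T_gt, T_a):
    # T_gt "⊖" T_a: the relative transform taking the T_a frame to the T_gt frame.
    return np.linalg.inv(T_a) @ T_gt
```

By construction, compose(T_a, inverse_compose(T_gt, T_a)) recovers T_gt, mirroring the identities above.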

Network Training: During training, an arbitrary plane is randomly sampled from a volume V by applying a random transformation T_A to the identity plane. The corresponding 2D plane image X = P(V, T_A, L) is then extracted, where P is the plane extraction function and L is the side length of the square plane. We sample T_A such that the plane centre falls in the middle 60% of V and the rotation of the plane is within a fixed angle about each coordinate axis. This avoids sampling planes at the edges of the volume, where there is no informative image data due to regions falling outside the US imaging cone.
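The plane extraction function P is not spelled out in the text; the sketch below shows one plausible implementation that samples an L×L image by interpolation, with the rotation R and translation t expressed in array-index coordinates relative to the volume centre (the function name and frame conventions are assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_plane(vol, R, t, L):
    """Sample an L x L plane image from volume `vol`.

    R (3x3 rotation) and t (3-vector translation) are expressed in array
    index coordinates relative to the volume centre; when R is the
    identity, the plane is spanned by the volume's last two axes.
    """
    c = (np.array(vol.shape) - 1) / 2.0
    u = np.arange(L) - (L - 1) / 2.0
    dj, di = np.meshgrid(u, u, indexing="ij")
    # In-plane offsets; the plane normal is axis 0 of the plane frame.
    offsets = np.stack([np.zeros_like(di), dj, di])          # (3, L, L)
    coords = (R @ offsets.reshape(3, -1)) + (c + t)[:, None]
    return map_coordinates(vol, coords, order=1).reshape(L, L)
```

With R = I and t = 0 this returns the central slice of the volume, which corresponds to the identity plane defined above.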

1: procedure PlaneDetection(V, T_0, N)
2:     Initialise random plane with transformation T_0
3:     for t = 0 to N-1 do
4:         Sample plane image X_t = P(V, T_t, L)
5:         CNN predicts relative transformation T_rel = CNN(X_t)
6:         Update plane position T_{t+1} = T_t ⊕ T_rel
7:     return T_N
Algorithm 1 Iterative Inference of Transformation
Representation (parameter count)       Loss function
Translation (3) + Quaternion (4)       L = α‖t − t̂‖₂ + β‖q − q̂‖₂
Translation (3) + Euler angles (3)     L = α‖t − t̂‖₂ + β‖e − ê‖₂
Translation (3) + Rotation matrix (9)  L = α‖t − t̂‖₂ + β‖R − R̂‖₂
Anchor points (9)                      L = ‖p − p̂‖₂
Table 1: Representations of rigid transformations and their loss functions. Hatted symbols denote network predictions; α and β weight the translation and rotation terms.

A training sample is represented by the pair (X, T_rel), and the training loss function can be formulated as the L2 norm of the error between the GT and predicted transformation parameters, as listed per representation in Table 1.

Network Inference: Algorithm 1 summarises the steps during network inference to detect a plane. The iterative approach gives rough estimates of the plane in the first few iterations and subsequently makes smaller and more accurate refinements. This coarse-to-fine adjustment improves accuracy and is less susceptible to different initialisations. To improve accuracy and convergence, we repeat Algorithm 1 with 5 random plane initialisations per volume and average their final transformations after N iterations.
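Algorithm 1 reduces to a short loop once the per-step prediction is abstracted away; predict_step below is a hypothetical stand-in for plane extraction plus the CNN forward pass, returning a 4×4 relative transform in the current plane's frame:

```python
import numpy as np

def iterative_inference(predict_step, T0, n_iters=10):
    """Iteratively compose CNN-predicted relative transforms (Algorithm 1).

    predict_step(T) stands in for image extraction plus CNN inference and
    returns a 4x4 relative transform in the current plane's frame.
    """
    T = np.array(T0, dtype=float)
    for _ in range(n_iters):
        T = T @ predict_step(T)  # T_{t+1} = T_t composed with T_rel
    return T
```

In practice the loop is run from 5 random initialisations and the resulting final transformations are averaged, as described above.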

Transformation Representations: In ITN, the plane transformation is rigid, comprising only translation and rotation. We explore the effect of using different transformation representations as the CNN regression outputs (Table 1), since there are few comparative studies investigating this for deep networks. The first three representations explicitly separate translation and rotation, with rotation represented by quaternions, Euler angles and a rotation matrix respectively. α and β are weightings given to the translation and rotation losses. The fourth representation, anchor points [4], is defined as the coordinates of three fixed points on the plane (we use the centre and the bottom-left and bottom-right corners). The points jointly and uniquely represent any translation and rotation in 3D space. During inference, the predicted values of certain representations need to be constrained to give a valid rotation. For instance, quaternions need to be normalised to unit quaternions, and rotation matrices need to be orthogonalised. Anchor points need to be converted to valid rotation matrices as described in [4].
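The constraint step for quaternions and rotation matrices can be sketched as follows; SVD projection to the nearest rotation matrix is one standard choice (the paper does not specify the orthogonalisation method):

```python
import numpy as np

def normalise_quaternion(q):
    # Project a slightly-off network output back onto the unit sphere.
    return q / np.linalg.norm(q)

def orthogonalise(M):
    # Nearest rotation matrix in the Frobenius sense, via SVD;
    # the determinant is forced to +1 to exclude reflections.
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt
```

The same orthogonalise routine could also serve for the rotation matrix recovered from predicted anchor points.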

1: procedure ComputeTransform(t̂, q̂, P_t, P_r)
2:     t ← t̂ weighted component-wise by P_t
3:     c* ← most confident rotation category, argmax of P_r
4:     if c* = c_cw,x OR c* = c_ccw,x then
5:         Convert q̂ to Euler angles (θ_x, θ_y, θ_z) using convention ‘xyz’
6:         R ← rotation about x-axis with magnitude θ_x
7:     else if c* = c_cw,y OR c* = c_ccw,y then
8:         Convert q̂ to Euler angles using convention ‘yxz’
9:         R ← rotation about y-axis with magnitude θ_y
10:    else if c* = c_cw,z OR c* = c_ccw,z then
11:        Convert q̂ to Euler angles using convention ‘zxy’
12:        R ← rotation about z-axis with magnitude θ_z
13:    return T_rel composed of translation t and rotation R
Algorithm 2 Compute relative transformation

Classification Probability as Confidence Measure: We further extend ITN by incorporating classification probability as a confidence measure for the regressed values of translation and rotation. The method can be applied to any transformation representation, but we use quaternions since they yield the best results. In addition to the regression outputs t̂ and q̂, the CNN also predicts two classification probability outputs, P_t and P_r, for translation and rotation respectively. We divide translation into 6 discrete classification categories: positive and negative translation along each coordinate axis. Denoting c_t as the translation classification label, we have c_t ∈ {c_x+, c_x−, c_y+, c_y−, c_z+, c_z−}, where c_x+ is the category representing translation along the positive x-axis. P_t is then a 6-element vector giving the probability of translation along each axis direction. Similarly, we divide rotation into 6 categories: clockwise and counter-clockwise rotation about each coordinate axis. Denoting c_r as the rotation classification label, we have c_r ∈ {c_cw,x, c_ccw,x, c_cw,y, c_ccw,y, c_cw,z, c_ccw,z}, where c_cw,x is the category representing clockwise rotation about the x-axis. P_r is then a 6-element vector giving the probability of rotation about each axis.
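To make the discretisation concrete, a minimal sketch of turning a relative translation and rotation into signed-axis labels of this kind (the function name, the 2a/2a+1 index layout and the use of a rotation vector are assumptions, not the paper's exact scheme, which is derived in Appendix A):

```python
import numpy as np

def classification_labels(t_rel, r_rel):
    """Signed-axis labels for the 6-way translation/rotation categories.

    t_rel: relative translation vector; r_rel: relative rotation expressed
    as a rotation vector (axis * angle). For axis a, label 2a is the
    positive/clockwise category and 2a + 1 the negative/counter-clockwise
    one (this index layout is an assumption).
    """
    def signed_axis(v):
        a = int(np.argmax(np.abs(v)))
        return 2 * a + (0 if v[a] >= 0 else 1)
    return signed_axis(t_rel), signed_axis(r_rel)
```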

A training sample is represented by the tuple (X, T_rel, c_t, c_r). c_t gives the signed coordinate axis along which the current plane centre is furthest from the GT plane centre. Similarly, c_r gives the coordinate axis about which the current plane must rotate the most to reach the GT plane. Appendix A derives the computations of c_t and c_r during training. The overall training loss function can then be written as:

L = α‖t − t̂‖₂ + β‖q − q̂‖₂ + γ CE(c_t, P_t) + δ CE(c_r, P_r)

The first and second terms are the losses for translation and rotation regression, while the third and fourth terms are the cross-entropy (CE) losses for translation and rotation classification. α, β, γ and δ are the weights given to the losses.
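With one-hot labels, the four-term loss can be sketched numerically as follows (a simplified NumPy version for illustration only; the model itself is trained with the framework's ops):

```python
import numpy as np

def multitask_loss(t, t_hat, q, q_hat, c_t, P_t, c_r, P_r,
                   alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """Two L2 regression terms plus two cross-entropy terms.

    c_t and c_r are one-hot 6-vectors; P_t and P_r are predicted
    probability vectors (assumed already softmax-normalised).
    """
    eps = 1e-12  # guard against log(0)
    reg = alpha * np.linalg.norm(t - t_hat) + beta * np.linalg.norm(q - q_hat)
    ce = -gamma * np.sum(c_t * np.log(P_t + eps)) - delta * np.sum(c_r * np.log(P_r + eps))
    return reg + ce
```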

During inference, the CNN outputs t̂, q̂, P_t and P_r are combined to compute the relative transformation T_rel (Algorithm 2). For translation, each component of the regressed translation t̂ is weighted by the corresponding probabilities in P_t. For rotation, we only rotate the plane about the most confident rotation axis as predicted by P_r. To determine the magnitude of that rotation, the regressed quaternion q̂ is converted to Euler angles using the appropriate convention, which exposes the rotation angle about the most confident axis. An Euler angle representation using convention ‘xyz’ means a rotation about the x-axis first, followed by the y-axis and finally the z-axis. Hence, P_t and P_r are used as confidence weightings for t̂ and q̂, allowing the plane to translate and rotate to a greater extent along the more confident axes.
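The axis-dependent Euler conversion in Algorithm 2 can be sketched with SciPy's Rotation class; the mapping of the 6-element P_r onto axes and the extrinsic reading of the convention strings are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Convention strings place the chosen axis first, as in Algorithm 2.
CONVENTIONS = {0: "xyz", 1: "yxz", 2: "zxy"}

def confident_rotation(q, p_rot):
    """Keep only the rotation about the most confident axis.

    q is a quaternion in scalar-last (x, y, z, w) order; p_rot is a
    6-vector whose entries 2a and 2a+1 are the probabilities of the two
    rotation directions about axis a.
    """
    axis = int(np.argmax(p_rot)) // 2
    conv = CONVENTIONS[axis]
    angles = Rotation.from_quat(q).as_euler(conv)
    # Retain only the first (most confident) axis of the decomposition.
    return Rotation.from_euler(conv[0], angles[0])
```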

Network Architecture: ITN uses a multi-task learning framework to predict multiple outputs. The architecture differs according to the number of outputs that the CNN predicts. All our networks comprise 5 convolution layers, each followed by a max-pooling layer. These layers contain shared features for all outputs. After the 5th pooling layer, the network branches into fully-connected layers to learn features specific to each output. Details of all network architectures are given in Appendix B.

3 Experiments and Results

Data and Experiments: ITN is evaluated on 3D US volumes of fetal brain from 72 subjects. For each volume, the TV and TC standard planes are manually selected by a clinical expert. 70% of the dataset is randomly selected for training and the remaining 30% is used for testing. All volumes are processed to be isotropic, with mean dimensions of 324×207×279 voxels. ITN is implemented in TensorFlow running on a machine with an Intel Xeon E5-1630 CPU at 3.70 GHz and an NVIDIA Titan Xp 12GB GPU. We set the plane size L=225, the number of inference iterations N=10 and α=β=γ=δ=1. During training, we use a batch size of 64. Weights are initialised randomly from a distribution with zero mean and 0.1 standard deviation. Optimisation is carried out for 100,000 iterations using the Adam algorithm with learning rate 0.001, β₁=0.9 and β₂=0.999. The predicted plane is evaluated against the GT using the distance between the plane centres (d, in mm) and the rotation angle between the planes (θ, in degrees). Image similarity of the planes is also measured using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
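The two geometric metrics can be sketched as below; measuring the angle between plane normals is an assumption (the text leaves the exact angle definition open), as is the use of isotropic 1 mm voxels for the mm units:

```python
import numpy as np

def plane_errors(c_pred, c_gt, n_pred, n_gt):
    """Centre distance and angle (degrees) between predicted and GT planes,
    given plane centres and unit normal vectors."""
    dist = np.linalg.norm(np.asarray(c_pred, float) - np.asarray(c_gt, float))
    # abs() ignores normal orientation: a plane and its flip coincide.
    cos = np.clip(abs(np.dot(n_pred, n_gt)), 0.0, 1.0)
    return dist, np.degrees(np.arccos(cos))
```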

Results: Table 2 compares plane detection results when different transformation representations are used by ITN. In general, there is little difference in translation error: the first three representations encode translation identically along the three Cartesian axes, and only anchor points, which encode translation implicitly, show a slightly greater translation error. The rotation errors on the TC plane suggest that quaternions are a good representation. Rotation matrices and anchor points over-parameterise rotation, and their larger number of degrees of freedom can make network learning more difficult. Since these parameters are not constrained, it is also harder to convert them back into valid rotations during inference. Quaternions have fewer parameters, and a slightly-off quaternion can still be easily normalised to give a valid rotation. Compared to Euler angles, quaternions avoid the problem of gimbal lock. For the TV plane, there is little difference in rotation error. This is because sonographers use the TV plane as a visual reference when acquiring 3D volumes, which causes the TV plane to lie roughly in the central plane of the volume with low rotation variance, making the choice of rotation representation less important.

CNN outputs      TV plane                                      TC plane
                 d (mm)     θ (°)      PSNR      SSIM          d (mm)     θ (°)      PSNR      SSIM
Quaternion       6.29±5.33  17.0±12.0  15.3±2.0  0.375±0.081   6.23±6.99  14.9±7.5   15.5±2.1  0.383±0.100
Euler angles     5.69±5.85  17.0±8.5   15.2±1.7  0.372±0.084   7.13±9.00  16.0±5.9   14.6±2.4  0.357±0.119
Rotation matrix  5.79±6.10  17.7±11.6  15.8±1.9  0.389±0.091   6.39±7.39  17.3±15.4  15.5±2.4  0.385±0.118
Anchor points    6.64±8.66  17.0±10.4  15.9±2.4  0.399±0.099   7.88±10.0  16.3±12.6  15.0±2.7  0.351±0.124
Table 2: Evaluation of ITN with different transformation representations for standard plane detection. Results presented as mean ± standard deviation.
CNN outputs                  TV plane                                      TC plane
                             d (mm)     θ (°)      PSNR      SSIM          d (mm)     θ (°)      PSNR      SSIM
M1: t̂, q̂                     6.29±5.33  17.0±12.0  15.3±2.0  0.375±0.081   6.23±6.99  14.9±7.5   15.5±2.1  0.383±0.100
M2: t̂, q̂, P_t                5.14±5.37  16.8±9.9   16.0±2.1  0.408±0.092   5.12±5.50  13.9±7.1   15.8±2.2  0.393±0.115
M3: t̂, q̂, P_r                6.07±6.32  14.0±8.1   15.7±2.5  0.399±0.108   7.66±7.14  12.7±6.0   15.5±3.0  0.386±0.123
M4: t̂, q̂, P_t, P_r           3.83±2.10  12.7±7.7   16.4±1.9  0.419±0.092   3.80±1.85  12.6±6.1   16.5±2.1  0.407±0.110
M4+: M4 with 3 input planes  3.49±1.81  10.7±5.7   16.6±1.8  0.413±0.082   3.39±2.13  11.4±6.3   16.8±2.1  0.437±0.110
Table 3: Evaluation of ITN with/without confidence probability for standard plane detection. Results presented as mean ± standard deviation.

Table 3 compares the performance of ITN with and without classification probability outputs. Given a baseline model (M1) that has only the regression outputs t̂ and q̂, adding the classification probabilities P_t and P_r improves the translation and rotation accuracy respectively (M2-M4). The classification probabilities act as confidence weights for the regression outputs, improving plane detection accuracy. Furthermore, the classification and regression outputs are trained in a multi-task learning fashion, which allows feature sharing and enables more generic features to be learned, preventing model overfitting. M1-M4 use one plane image as CNN input. We further improve the results by using three orthogonal plane images instead, as this provides more information about the 3D volume (M4+). M4 and M4+ take 0.46s and 1.35s respectively to predict one plane per volume. The supplementary material provides videos showing the update of a randomly initialised plane and its extracted image through 10 inference iterations.

Fig. 2 shows a visual comparison between the GT planes and the planes predicted by M4. To evaluate the clinical relevance of the predicted planes, a clinical expert manually measured the head circumference (HC) on both the predicted and GT planes; the standard deviation of the measurement error is 1.05 mm (TV) and 1.25 mm (TC). This is within the intraobserver variability of 2.65 mm reported for HC measurements on the TC plane [8]. Thus, accurate biometrics can be extracted from our predicted planes.

Figure 2: Visualisation of GT planes and planes predicted by M4.

4 Conclusion

We presented ITN, a new approach for standard plane detection in 3D fetal US that uses a CNN to regress rigid transformations iteratively. We compared different transformation representations and showed quaternions to be a good representation for iterative pose estimation. Additional classification probabilities, learned via multi-task learning, act as confidence weights for the regressed transformation parameters and improve plane detection accuracy. As future work, we are evaluating ITN on other plane detection tasks (e.g. view plane selection in cardiac MRI). It is also worthwhile to explore new transformation representations and to extend ITN to the simultaneous detection of multiple planes.


Supported by the Wellcome Trust IEH Award [102431]. The authors thank Nvidia Corporation for the donation of a Titan Xp GPU.


  1. Baumgartner, C.F., Kamnitsas, K., Matthew, J., Fletcher, T.P., Smith, S., Koch, L.M., Kainz, B., Rueckert, D.: Sononet: Real-time detection and localisation of fetal standard scan planes in freehand ultrasound. IEEE TMI 36(11), 2204–2215 (2017)
  2. Chen, H., Dou, Q., Ni, D., Cheng, J.Z., Qin, J., Li, S., Heng, P.A.: Automatic fetal ultrasound standard plane detection using knowledge transferred recurrent neural networks. In: MICCAI 2015. pp. 507–514. Springer (2015)
  3. Chen, H., Ni, D., Qin, J., Li, S., Yang, X., Wang, T., Heng, P.A.: Standard plane localization in fetal ultrasound via domain transferred deep neural networks. IEEE Journal of Biomedical and Health Informatics 19(5), 1627–1636 (2015)
  4. Hou, B., Alansary, A., McDonagh, S., Davidson, A., Rutherford, M., Hajnal, J.V., Rueckert, D., Glocker, B., Kainz, B.: Predicting slice-to-volume transformation in presence of arbitrary subject motion. In: MICCAI. pp. 296–304. Springer (2017)
  5. Kendall, A., Grimes, M., Cipolla, R.: Posenet: A convolutional network for real-time 6-dof camera relocalization. In: ICCV 2015. pp. 2938–2946. IEEE (2015)
  6. NHS: Fetal anomaly screening programme: programme handbook June 2015. Public Health England (2015)
  7. Ryou, H., Yaqub, M., Cavallaro, A., Roseman, F., Papageorghiou, A., Noble, J.A.: Automated 3D ultrasound biometry planes extraction for first trimester fetal assessment. In: International Workshop on MLMI. pp. 196–204. Springer (2016)
  8. Sarris, I., Ioannou, C., Chamberlain, P., Ohuma, E., Roseman, F., Hoch, L., Altman, D., Papageorghiou, A.: Intra- and interobserver variability in fetal ultrasound measurements. Ultrasound in Obstetrics & Gynecology 39(3), 266–273 (2012)