# Fast and accurate reconstruction of HARDI using a 1D encoder-decoder convolutional network

###### Abstract

High angular resolution diffusion imaging (HARDI) demands a larger amount of data measurements compared to diffusion tensor imaging, restricting its use in practice. In this work, we explore a learning-based approach to reconstruct HARDI from a smaller number of measurements in q-space. The approach aims to directly learn the mapping between the measured signals and the HARDI signals from previously collected HARDI acquisitions of other subjects. Specifically, the mapping is represented as a 1D encoder-decoder convolutional neural network, designed under the guidance of compressed sensing (CS) theory for HARDI reconstruction. The proposed network architecture mainly consists of two parts: an encoder network that produces the sparse coefficients and a decoder network that yields the reconstruction. Experimental results demonstrate that we can robustly reconstruct HARDI signals both accurately and quickly.

Shi Yin, Zhengqiang Zhang, Qinmu Peng, Xinge You
School of Electronic Information and Communications,

Huazhong University of Science and Technology, Wuhan, China
**Keywords:** High angular resolution diffusion imaging, learning-based approach, deep learning, convolutional neural network, q-space

## 1 Introduction

High angular resolution diffusion imaging (HARDI) [1] excels at detecting the orientational distribution of water diffusion in cerebral tissue. However, it demands a larger number of data measurements than diffusion tensor imaging (DTI). As the total scanning time increases linearly with the number of measurements [2], HARDI-based analysis is currently deemed "too slow" for clinical applications involving children or patients with dementia.

Currently, the high time cost of HARDI can be mitigated using the theory of compressed sensing (CS), which provides a framework to recover HARDI signals from a smaller number of measurements in q-space [2, 3, 4, 5, 6, 7]. We call these reduced measurements low angular resolution (LAR) signals. The CS-based framework involves several steps. First, the relationship between the measurements and the HARDI signal is modeled, and an LAR signal dictionary and a HARDI signal dictionary are constructed from multiple basis functions and the corresponding diffusion-encoding gradient orientations. Then, the reduced measurements are encoded as sparse coefficients using the low-resolution dictionary. Finally, the sparse coefficients are linearly mapped to the HARDI signal by the high-resolution dictionary. CS-based methods have shown competitive results, but this framework has two main problems. First, the relationship between the reduced measurements and the HARDI signal is hypothesized rather than learned from data, so no statistical information is exploited; the performance of these CS-based algorithms degrades rapidly when the desired magnification factor is large or the number of available measurements is small [8]. Second, the sparse coefficients are usually obtained by solving a least-squares optimization with ℓ1-norm regularization, which is very time-consuming and may admit many solutions.

In this work, we investigate the possibility of learning the q-space signals lost in LAR acquisitions from previously collected HARDI acquisitions of other subjects. We explore a learning-based approach for recovering HARDI signals, which allows us to benefit from the statistical properties of the collected HARDI acquisitions. Specifically, we design a 1D convolutional neural network guided by the CS reconstruction algorithm. The network mainly consists of two parts: an encoder network that produces the sparse coefficients and a decoder network that produces the reconstruction. We name the proposed network the 1D Encoder-Decoder Convolutional Neural Network (1d-ED CNN). Its architecture is based on the autoencoder, with three important improvements: (1) We design four input channels incorporating the HARDI signal and its spherical coordinates, so that the 1D network is meaningful for HARDI signals; (2) We randomly permute the measurement signals together with their corresponding gradient coordinates, to fully learn the relationship between diffusion signals from different gradient orientations at the same spatial position; (3) We train the network in a supervised paradigm, which enables us to directly learn the mapping between the LAR and HARDI signals.

The proposed 1d-ED CNN has several appealing properties. First, the entire pipeline is obtained fully through learning, without explicitly modeling the signal space or designing the dictionary. Second, we can benefit from the statistical properties of high angular resolution signals collected from other subjects as training data. Third, our method is faster than a series of CS-based methods even on a CPU, because it is fully feed-forward and does not need to solve any optimization problem at inference time. Fourth, the fact that a 4D real HARDI dataset usually contains a very large number of voxels becomes an advantage when training a deep network, whereas it is a burden for existing CS-based HARDI reconstruction methods. We demonstrate the efficacy of our method for different numbers of reduced measurements in comparison with existing CS-based reconstruction methods.

## 2 Methods

### 2.1 CS Algorithm for HARDI Reconstruction

The relationship between diffusion signals from different diffusion gradient orientations in spherical coordinates can be efficiently represented linearly using a dictionary and a vector of representation coefficients. For a fixed spatial position $\mathbf{r}$, the HARDI signal vector $\mathbf{s}(\mathbf{r}) \in \mathbb{R}^{N}$ corresponding to $N$ diffusion-encoding orientations can be represented by:

$$\mathbf{s}(\mathbf{r}) = \mathbf{A}\,\mathbf{c}(\mathbf{r}), \tag{1}$$

where $\mathbf{A} \in \mathbb{R}^{N \times M}$ is the HARDI signal dictionary composed of multiple basis functions such as spherical ridgelets [2, 4] or spherical wavelets [3], and $\mathbf{c}(\mathbf{r}) \in \mathbb{R}^{M}$ is a vector of representation coefficients which depends on the spatial position $\mathbf{r}$. Consequently, one can recover the signal over the whole sphere by estimating the representation coefficients $\mathbf{c}(\mathbf{r})$. The diffusion signal vector $\mathbf{y}(\mathbf{r}) \in \mathbb{R}^{K}$, measured at a subset of $K < N$ diffusion-encoding orientations, can be modeled as:

$$\mathbf{y}(\mathbf{r}) = \boldsymbol{\Phi}\,\mathbf{c}(\mathbf{r}) + \boldsymbol{\eta}(\mathbf{r}), \tag{2}$$

where $\boldsymbol{\Phi} \in \mathbb{R}^{K \times M}$ is the measurement dictionary, composed of the same basis functions, and $\boldsymbol{\eta}(\mathbf{r})$ is the vector of corresponding measurement noise.

Because the dictionary is overcomplete and the measurements are noisy, the coefficient vector in Eq. (2) is not unique; the problem is underdetermined. An ℓ1-norm sparsity constraint is therefore usually enforced on the coefficient vector, and $\mathbf{c}(\mathbf{r})$ can be recovered by solving the following optimization problem:

$$\hat{\mathbf{c}}(\mathbf{r}) = \arg\min_{\mathbf{c}} \; \|\mathbf{y}(\mathbf{r}) - \boldsymbol{\Phi}\,\mathbf{c}\|_{2}^{2} + \lambda \|\mathbf{c}\|_{1}. \tag{3}$$

The mapping between $\mathbf{y}(\mathbf{r})$ and $\mathbf{s}(\mathbf{r})$ can thus be viewed as a two-stage process: first, the measurements are nonlinearly mapped into a latent space to obtain the sparse representation coefficients in Eq. (3); then, these coefficients are linearly mapped into the high angular resolution representation space via Eq. (1). Motivated by this pipeline, we investigate the possibility, illustrated in Fig. 1, of directly learning an end-to-end mapping between the LAR signal and the original HARDI signal from a set of signal pairs from other subjects.
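The two-stage CS pipeline above can be illustrated with a minimal sketch: an ISTA-style solver for the ℓ1 problem in Eq. (3), followed by the linear map of Eq. (1). The dictionaries and sizes below are random synthetic stand-ins, not the actual spherical-ridgelet dictionaries.

```python
import numpy as np

def ista(y, Phi, lam=0.01, n_iter=200):
    """Iterative shrinkage-thresholding: recover a sparse c with y ~ Phi @ c."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = Phi.T @ (Phi @ c - y)                # gradient of the data-fit term
        z = c - g / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 60))              # measurement dictionary (K x M)
A = rng.standard_normal((90, 60))                # HARDI dictionary (N x M), same basis
c_true = np.zeros(60)
c_true[[3, 17, 42]] = [1.0, -0.5, 0.8]           # a sparse ground-truth coefficient vector
y = Phi @ c_true                                 # reduced q-space measurements
c_hat = ista(y, Phi)                             # stage 1: nonlinear sparse coding, Eq. (3)
s_hat = A @ c_hat                                # stage 2: linear decoding, Eq. (1)
```

This is exactly the structure the proposed network mimics: the encoder replaces the iterative solver, and the decoder replaces the fixed dictionary multiplication.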

### 2.2 Network Architecture

In this section, we design a 1D encoder-decoder convolutional neural network depicted in Fig. 2 to learn the entire mapping in Fig. 1.

#### 2.2.1 Input Layers

For a measurement $\mathbf{y}$, we first upscale it to the same size as its ground-truth HARDI signal through an upsampling layer or an interpolation operation, yielding $\hat{\mathbf{y}} \in \mathbb{R}^{N}$. The corresponding gradient orientations are described by the coordinate vectors $\mathbf{g}^{x}$, $\mathbf{g}^{y}$ and $\mathbf{g}^{z} \in \mathbb{R}^{N}$ along the $x$, $y$ and $z$ axes, with $(g^{x}_{i})^{2} + (g^{y}_{i})^{2} + (g^{z}_{i})^{2} = 1$. Although we use a vector to represent the HARDI signal, note that the diffusion signals are distributed on the surface of a sphere in q-space. If we only input the signal vector without its q-space coordinate information, the spherical distribution of the diffusion signals collapses into a linear one, and a 1D convolutional neural network becomes meaningless for the HARDI signal. We therefore concatenate $\hat{\mathbf{y}}$, $\mathbf{g}^{x}$, $\mathbf{g}^{y}$ and $\mathbf{g}^{z}$ as the four channels of the initial input. On the other hand, a fixed ordering of the signal vector would prevent the 1D network from fully capturing the relationship between diffusion signals from different gradient orientations. We therefore randomly permute the diffusion signals in $\hat{\mathbf{y}}$ together with their spherical coordinates $\mathbf{g}^{x}$, $\mathbf{g}^{y}$ and $\mathbf{g}^{z}$ to form the input $\mathbf{h}_{0}$.
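The input construction can be sketched as follows, using linear interpolation as a stand-in for the paper's upsampling layer; the sizes (N = 90, K = 30) and random data are illustrative assumptions.

```python
import numpy as np

N, K = 90, 30                          # hypothetical HARDI / measurement sizes
rng = np.random.default_rng(0)

y = rng.random(K)                      # measured LAR signal
gx, gy, gz = rng.standard_normal((3, N))
norm = np.sqrt(gx**2 + gy**2 + gz**2)  # normalize to unit gradient orientations
gx, gy, gz = gx / norm, gy / norm, gz / norm

# Upscale the measurement to length N (stand-in for the upsampling layer).
y_up = np.interp(np.linspace(0, 1, N), np.linspace(0, 1, K), y)

# Randomly permute the orientations together with their signal values, then
# stack signal + (x, y, z) coordinates as the four input channels.
perm = rng.permutation(N)
h0 = np.stack([y_up[perm], gx[perm], gy[perm], gz[perm]])  # shape (4, N)
```

Permuting the signal and its coordinates with the same index keeps each (signal, orientation) pair intact while destroying any fixed sequence order.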

#### 2.2.2 1D Convolutional Encoder Network

In the encoder stage, $L_e$ hidden layers perform the nonlinear mapping. The $l$-th hidden layer takes the input $\mathbf{h}_{l-1}$ and maps it to $\mathbf{h}_{l}$:

$$\mathbf{h}_{l} = \sigma\left(\mathbf{W}_{l} * \mathbf{h}_{l-1} + \mathbf{b}_{l}\right), \tag{4}$$

where $\mathbf{W}_{l}$ and $\mathbf{b}_{l}$ represent the filters and biases of the encoder network, respectively, $*$ denotes the convolution operation, and $\sigma(\cdot)$ is an element-wise activation function that transforms the signal nonlinearly. Because we expect to learn a sparse representation, we specify $\sigma$ as the rectified linear unit. After the $L_e$ nonlinear hidden layers, we obtain the sparse code $\mathbf{c}$. The whole operation can be summarized as:

$$\mathbf{c} = f_{\mathrm{enc}}(\mathbf{h}_{0}) = \sigma\left(\mathbf{W}_{L_e} * \cdots\, \sigma\left(\mathbf{W}_{1} * \mathbf{h}_{0} + \mathbf{b}_{1}\right) \cdots + \mathbf{b}_{L_e}\right). \tag{5}$$
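A single encoder layer, Eq. (4), can be sketched in plain NumPy; the channel counts, kernel size and stride below are illustrative, not the paper's trained parameters.

```python
import numpy as np

def conv1d_relu(x, w, b, stride):
    """One encoder layer: strided 1D convolution followed by ReLU.
    x: (C_in, L), w: (C_out, C_in, K), b: (C_out,)."""
    C_out, C_in, K = w.shape
    L_out = (x.shape[1] - K) // stride + 1
    out = np.empty((C_out, L_out))
    for i in range(L_out):
        patch = x[:, i * stride : i * stride + K]   # receptive field at position i
        out[:, i] = np.tensordot(w, patch, axes=([1, 2], [0, 1])) + b
    return np.maximum(out, 0.0)                      # ReLU encourages sparse codes

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 90))                     # the 4-channel input h0
w = rng.standard_normal((8, 4, 3)) * 0.1             # 8 output channels, kernel 3
b = np.zeros(8)
h1 = conv1d_relu(x, w, b, stride=3)                  # first hidden activation
```

Stacking several such layers and taking the last activation gives the sparse code of Eq. (5).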

#### 2.2.3 1D Convolutional Decoder Network

In the decoder stage, we use $L_e$ symmetric hidden layers to perform the linear mapping. The $l$-th hidden layer takes the input $\tilde{\mathbf{h}}_{l-1}$ and maps it to $\tilde{\mathbf{h}}_{l}$:

$$\tilde{\mathbf{h}}_{l} = \tilde{\mathbf{W}}_{l} \circledast \tilde{\mathbf{h}}_{l-1}, \tag{6}$$

where $\tilde{\mathbf{W}}_{l}$ represents the filters of the decoder network and $\circledast$ denotes the deconvolution operation. In CS-based algorithms, the measurement and HARDI signal dictionaries are composed of the same basis functions, so the size of $\tilde{\mathbf{W}}_{l}$ is set the same as that of the corresponding encoder filters. The initial input $\tilde{\mathbf{h}}_{0}$ is the sparse code $\mathbf{c}$. After the $L_e$ linear hidden layers, we obtain the reconstruction $\hat{\mathbf{s}}$. The whole operation can be summarized as:

$$\hat{\mathbf{s}} = f_{\mathrm{dec}}(\mathbf{c}) = \tilde{\mathbf{W}}_{L_e} \circledast \cdots \circledast \left(\tilde{\mathbf{W}}_{1} \circledast \mathbf{c}\right). \tag{7}$$
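The linear decoder layer of Eq. (6) is a strided transposed convolution: it has no bias and no activation, mirroring the linear dictionary multiplication in CS. A minimal NumPy sketch, with illustrative shapes:

```python
import numpy as np

def conv_transpose1d(x, w, stride):
    """One linear decoder layer (transposed 1D convolution, no bias/activation).
    x: (C_in, L), w: (C_in, C_out, K)."""
    C_in, L = x.shape
    _, C_out, K = w.shape
    out = np.zeros((C_out, (L - 1) * stride + K))
    for i in range(L):                               # scatter each input column
        out[:, i * stride : i * stride + K] += np.tensordot(x[:, i], w, axes=(0, 0))
    return out

rng = np.random.default_rng(0)
c = rng.standard_normal((8, 30))                     # sparse code from the encoder
w = rng.standard_normal((8, 4, 3)) * 0.1             # mirrors the encoder filter size
s_hat = conv_transpose1d(c, w, stride=3)             # upsampled reconstruction channels
```

Because every operation here is linear, the whole decoder is a learned linear map from code to signal, just like multiplication by the HARDI dictionary in Eq. (1).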

#### 2.2.4 Loss Function

Learning the end-to-end mapping function $f = f_{\mathrm{dec}} \circ f_{\mathrm{enc}}$ requires estimating the parameters $\Theta = \{\mathbf{W}_{l}, \mathbf{b}_{l}, \tilde{\mathbf{W}}_{l}\}$. This is achieved by minimizing the loss between the reconstructed signals and the corresponding ground-truth HARDI signals in a supervised paradigm. Given a training set of $T$ diffusion measurement sequences $\{\mathbf{y}_{i}\}_{i=1}^{T}$ and the associated HARDI sequences $\{\mathbf{s}_{i}\}_{i=1}^{T}$, we obtain the corresponding reconstructions $\{\hat{\mathbf{s}}_{i}\}_{i=1}^{T}$ through the 1d-ED CNN. We use the Normalized Mean Squared Error (NMSE) as the loss function:

$$\mathcal{L}(\Theta) = \frac{1}{T} \sum_{i=1}^{T} \frac{\|\hat{\mathbf{s}}_{i} - \mathbf{s}_{i}\|_{2}^{2}}{\|\mathbf{s}_{i}\|_{2}^{2}}. \tag{8}$$
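The NMSE of Eq. (8) normalizes each squared reconstruction error by the energy of the ground-truth signal, so signals of different magnitudes contribute comparably. A direct NumPy implementation:

```python
import numpy as np

def nmse(s_hat, s):
    """Normalized mean squared error over a batch of signal pairs (Eq. (8)).
    s_hat, s: arrays of shape (T, N)."""
    num = np.sum((s_hat - s) ** 2, axis=1)   # per-signal squared error
    den = np.sum(s ** 2, axis=1)             # per-signal energy
    return float(np.mean(num / den))

s = np.ones((5, 90))
loss = nmse(1.1 * s, s)                      # a uniform 10% overshoot
```

A uniform 10% error gives an NMSE of 0.01, which matches the scale of the values reported in Table 1.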

## 3 Experiments and Results

### 3.1 Datasets

The real dMRI images of normal brains were taken from the HCP [9]; they contain 90 diffusion-weighting directions at a fixed b-value. We use the diffusion signals normalized by the corresponding b = 0 image; the gradient-direction values also belong to [0, 1]. Because the diffusion-encoded images are often contaminated by different levels of Rician noise, we used the LPCA filter of [10] to remove noise and took the preprocessed HARDI signals as the gold-standard reference for quantitatively evaluating the results of the proposed method. We randomly selected 8000 diffusion signals from 8 subjects as the training set and 2000 diffusion signals from 2 other subjects as the test set. Every HARDI signal of size 90 was reduced to K measurements to form the measurement signal $\mathbf{y}$.

### 3.2 Implementation Details

In this experiment, we used three different values of K, approximately 1/3, 1/4 and 1/5 of 90, in a range typical for DTI, to validate the effectiveness of the proposed method. After the measurement signal was upsampled to length 90, we concatenated it with the corresponding gradient-direction coordinates as the four channels of the initial input, applying the random-reordering strategy. The encoder and decoder networks each used 3 hidden layers, and all filters shared the same kernel size. For the 3 hidden layers of the encoder network, the output channels were set to 400, 200 and 100, and the stride sizes to 3, 3 and 2; in the decoder network, the parameters were set symmetrically. The learning rate was fixed and the batch size was 500. A specific network was trained for each value of K. We compared our method with the CS-based algorithm RGD-CS [4] in terms of accuracy and speed. We also included an ℓ2-norm-constrained variant in the comparison, because the ℓ2 solutions appear to be quite informative [2]; we name this method RGD-ℓ2. All methods were implemented in Python and executed on a 2.00 GHz Intel(R) Xeon(R) CPU.
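The strides above determine the temporal size of the sparse code. A quick sanity check, assuming "same"-style padding so that each layer divides the length by its stride (under that assumption the kernel size does not affect the output length):

```python
import math

def out_len(length, stride):
    """Output length of a stride-s conv layer with 'same'-style padding."""
    return math.ceil(length / stride)

L = 90                       # upsampled input length
for s in (3, 3, 2):          # the three encoder strides reported above
    L = out_len(L, s)
# 90 -> 30 -> 10 -> 5, so the sparse code has 100 channels of temporal length 5
```

The decoder, with symmetric strides, maps this code back to length 90.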

### 3.3 Quantitative Evaluation

As shown in Table 1, the proposed method yields the lowest NMSE for all values of K. Note also that the accuracy of the RGD-CS and RGD-ℓ2 methods decreases rapidly as K is reduced, especially for RGD-ℓ2, whereas the proposed method still produces promising results; this demonstrates the robustness of our method. These observations are further depicted visually by the orientation distribution function (ODF) images in Fig. 3. The ODF images were computed and visualized with the MATLAB code of DSI Studio (http://dsi-studio.labsolver.org/). As can be observed, the ODFs obtained by the proposed method are closer to the ground truth.

Table 1: NMSE of the compared methods. The three column groups correspond to the three values of K, from largest to smallest.

| Method | Min | Max | Average | Min | Max | Average | Min | Max | Average |
|---|---|---|---|---|---|---|---|---|---|
| RGD-ℓ2 | 0.0028 | 0.1872 | 0.0187 | 0.0096 | 0.3856 | 0.0685 | 0.0408 | 0.3153 | 0.1128 |
| RGD-CS | 0.0017 | 0.2207 | 0.0185 | 0.0065 | 0.3944 | 0.0404 | 0.0169 | 0.5652 | 0.0612 |
| 1d-ED CNN | 0.0017 | 0.1555 | 0.0154 | 0.0023 | 0.2117 | 0.0196 | 0.0021 | 0.1894 | 0.0199 |

## 4 Conclusion

In this paper, we investigated the possibility of learning the mapping between the LAR and HARDI signals from previously collected HARDI acquisitions of other subjects. Specifically, we designed a novel 1d-ED CNN for HARDI signal reconstruction under the guidance of the CS reconstruction algorithm. Experimental results demonstrate that we can robustly reconstruct HARDI signals both accurately and quickly.

## References

- [1] David S. Tuch, “Q-ball imaging,” Magnetic Resonance in Medicine, vol. 52, no. 6, pp. 1358–1372, 2004.
- [2] Oleg Michailovich and Yogesh Rathi, “Fast and accurate reconstruction of HARDI data using compressed sensing,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2010, pp. 607–614.
- [3] Antonio Tristan-Vega and Carl-Fredrik Westin, “Probabilistic ODF estimation from reduced HARDI data with sparse regularization,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2011, pp. 182–190.
- [4] Oleg Michailovich, Yogesh Rathi, and Sudipto Dolui, “Spatially regularized compressed sensing for high angular resolution diffusion imaging,” IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1100–1115, 2011.
- [5] Bennett A. Landman, John A. Bogovic, Hanlin Wan, Fatma El Zahraa ElShahaby, Pierre-Louis Bazin, and Jerry L. Prince, “Resolution of crossing fibers with constrained compressed sensing using diffusion tensor MRI,” NeuroImage, vol. 59, pp. 2175–2186, 2012.
- [6] Shi Yin, Xinge You, Xin Yang, Qinmu Peng, Ziqi Zhu, and Xiao-Yuan Jing, “A joint space-angle regularization approach for single 4d diffusion image super-resolution,” Magnetic Resonance in Medicine, vol. 80, no. 5, pp. 2173–2187.
- [7] Shi Yin, Xinge You, Weiyong Xue, Bo Li, Yue Zhao, Xiao-Yuan Jing, Patrick S. P. Wang, and Yuanyan Tang, “A unified approach for spatial and angular super-resolution of diffusion tensor MRI,” in Pattern Recognition: 7th Chinese Conference, CCPR 2016, Chengdu, China, November 5-7, 2016, Proceedings, Part II, 2016, pp. 312–324.
- [8] Jianchao Yang, John Wright, Thomas S. Huang, and Yi Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
- [9] David C. Van Essen, Stephen M. Smith, Deanna M. Barch, Timothy E.J. Behrens, Essa Yacoub, Kamil Ugurbil, and WU-Minn HCP Consortium, “The WU-Minn human connectome project: an overview,” NeuroImage, vol. 80, pp. 62–79, 2013.
- [10] Jose V Manjon, Pierrick Coupe, Luis Concha, Antonio Buades, D. Louis Collins, and Montserrat Robles, “Diffusion weighted image denoising using overcomplete local PCA,” PloS one, vol. 8, no. 9, pp. e73021, 2013.