An overview of deep learning in medical imaging focusing on MRI


Alexander Selvikvåg Lundervold, Arvid Lundervold
Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway
Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway
Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway
Department of Health and Functioning, Western Norway University of Applied Sciences, Norway

What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI.

Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

Keywords: Machine learning, Deep learning, Medical imaging, MRI
journal: Zeitschrift für Medizinische Physik

1 Introduction

Machine learning has seen some dramatic developments recently, leading to a lot of interest from industry, academia and popular culture. These are driven by breakthroughs in artificial neural networks, often termed deep learning, a set of techniques and algorithms that enable computers to discover complicated patterns in large data sets. Feeding the breakthroughs are the increased access to data (“big data”), user-friendly software frameworks, and an explosion of available compute power, enabling the use of neural networks that are deeper than ever before. These models nowadays form the state-of-the-art approach to a wide variety of problems in computer vision, language modeling and robotics.

Deep learning rose to its prominent position in computer vision when neural networks started outperforming other methods on several high-profile image analysis benchmarks, most famously on the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2012 Krizhevsky et al. (2012), when a deep learning model (a convolutional neural network) halved the second-best error rate on the image classification task. Enabling computers to recognize objects in natural images was until recently thought to be a very difficult task, but by now convolutional neural networks have surpassed even human performance on the ILSVRC, and reached a level where the ILSVRC classification task is essentially solved (i.e. with error rate close to the Bayes rate). Deep learning techniques have become the de facto standard for a wide variety of computer vision problems. They are, however, not limited to image processing and analysis but are outperforming other approaches in areas like natural language processing Peters et al. (2018); Howard and Ruder (2018); Radford et al. (2018), speech recognition and synthesis Xiong et al. (2018); van den Oord et al. (2016), and in the analysis of unstructured, tabular-type data using entity embeddings Guo and Berkhahn (2016); De Brébisson et al. (2015). As a perhaps unsurprising side-note, these modern deep learning methods have also entered the field of physics. Among other things, they are tasked with learning physics from raw data when no good mathematical models are available, for example in the analysis of gravitational waves, where deep learning has been used for classification George and Huerta (2018), anomaly detection George et al. (2018) and denoising Shen et al. (2017), using methods that are highly transferable across domains (think EEG and fMRI). They are also part of mathematical model and machine learning hybrids Raissi and Karniadakis (2018); Karpatne et al. (2017), formed to reduce computational costs by having the mathematical model train a machine learning model to perform its job, or to improve the fit with observations in settings where the mathematical model can’t incorporate all details (think noise).

The sudden progress and wide scope of deep learning, and the resulting surge of attention and multi-billion dollar investment, has led to a virtuous cycle of improvements and investments in the entire field of machine learning. It is now one of the hottest areas of study world-wide Gartner (2018), and people with competence in machine learning are highly sought-after by both industry and academia (see, e.g., studies of the US job market).

Healthcare providers generate and capture enormous amounts of data containing extremely valuable signals and information, at a pace far surpassing what “traditional” methods of analysis can process. Machine learning therefore quickly enters the picture, as it is one of the best ways to integrate, analyze and make predictions based on large, heterogeneous data sets (cf. health informatics Ravi et al. (2017)). Healthcare applications of deep learning range from one-dimensional biosignal analysis Ganapathy et al. (2018) and the prediction of medical events, e.g. seizures Kuhlmann et al. (2018) and cardiac arrests Kwon et al. (2018), to computer-aided detection Shin et al. (2016) and diagnosis Kermany et al. (2018) supporting clinical decision making and survival analysis Katzman et al. (2018), to drug discovery Jiménez et al. (2018) and as an aid in therapy selection and pharmacogenomics Kalinin et al. (2018), to increased operational efficiency Jiang et al. (2018), stratified care delivery Vranas et al. (2017), and analysis of electronic health records Rajkomar et al. (2018); Shickel et al. (2017).

The use of machine learning in general and deep learning in particular within healthcare is still in its infancy, but there are several strong initiatives across academia, and multiple large companies are pursuing healthcare projects based on machine learning. These include not only medical technology companies, but also, for example, Google Brain Gulshan et al. (2016); Poplin et al. (2018a, b), DeepMind De Fauw et al. (2018), Microsoft Qin et al. (2018); Kamnitsas et al. (2017) and IBM Xiao et al. (2018). There is also a plethora of small and medium-sized businesses in the field, including Aidoc, Arterys, Ayasdi, Babylon Healthcare Services, BenevolentAI, Enlitic, EnvoiAI, H2O, IDx, MaxQ AI, Mirada Medical, Zebra Medical Vision, and many more.

2 Machine learning, artificial neural networks, deep learning

In machine learning one develops and studies methods that give computers the ability to solve problems by learning from experiences. The goal is to create mathematical models that can be trained to produce useful outputs when fed input data. Machine learning models are provided experiences in the form of training data, and are tuned to produce accurate predictions for the training data by an optimization algorithm. The main goal of the models is to be able to generalize their learned expertise, and deliver correct predictions for new, unseen data. A model’s generalization ability is typically measured during training on a separate data set, the validation set, and used as feedback for further tuning of the model. After several iterations of training and tuning, the final model is evaluated on a test set, used to simulate how the model will perform when faced with new, unseen data.
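The training/validation/test workflow described above can be sketched in a few lines. The following is a minimal illustration in plain Python; the function name and split fractions are our own illustrative choices, not taken from any particular library:

```python
import random

def split_data(data, val_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle a data set and partition it into training, validation and test sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

# 100 dummy samples: 60 for training, 20 for validation, 20 held out for testing.
train, val, test = split_data(list(range(100)))
```

In practice the split is often stratified, so that each of the three sets reflects the class balance of the whole data set.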

There are several kinds of machine learning, loosely categorized according to how the models utilize their input data during training. In reinforcement learning one constructs agents that learn from their environments through trial and error while optimizing some objective function. Famous recent applications of reinforcement learning are AlphaGo and AlphaZero Silver et al. (2017), the Go-playing machine learning systems developed by DeepMind. In unsupervised learning the computer is tasked with uncovering patterns in the data without our guidance. Clustering is a prime example. Most of today’s machine learning systems belong to the class of supervised learning. Here, the computer is given a set of already labelled or annotated data, and asked to produce correct labels on new, previously unseen data sets based on the rules discovered in the labelled data set. From a set of input-output examples, the whole model is trained to perform specific data-processing tasks. Image annotation using human-labelled data, e.g. classifying skin lesions according to malignancy Esteva et al. (2017) or discovering cardiovascular risk factors from retinal fundus photographs Poplin et al. (2018), are two examples of the multitude of medical imaging related problems attacked using supervised learning.

Machine learning has a long history and is split into many sub-fields, of which deep learning is the one currently receiving the bulk of attention.

There are many excellent, openly available overviews and surveys of deep learning. For short general introductions to deep learning, see LeCun et al. (2015); Hinton (2018). For an in-depth coverage, consult the freely available book Goodfellow et al. (2016). For a broad overview of deep learning applied to medical imaging, see Litjens et al. (2017). We will only mention some bare essentials of the field, hoping that these will serve as useful pointers to the areas that are currently the most influential in medical imaging.

2.1 Artificial neural networks

Artificial neural networks (ANNs) are among the most famous machine learning models, introduced already in the 1950s and actively studied ever since (Goodfellow et al., 2016, Chapter 1.2). (The loose connection between artificial neural networks and neural networks in the brain is often mentioned, but quite overblown considering the complexity of biological neural networks. However, there is some interesting recent work connecting neuroscience and artificial neural networks, indicating an increase in the cross-fertilization between the two fields Marblestone et al. (2016); Hassabis et al. (2017); Banino et al. (2018).)

Roughly, a neural network consists of a number of connected computational units, called neurons, arranged in layers. There’s an input layer where data enters the network, followed by one or more hidden layers transforming the data as it flows through, before ending at an output layer that produces the neural network’s predictions. The network is trained to output useful predictions by identifying patterns in a set of labelled training data, fed through the network while the outputs are compared with the actual labels by an objective function. During training the network’s parameters (the strengths of the connections between neurons) are tuned until the patterns identified by the network result in good predictions for the training data. Once the patterns are learned, the network can be used to make predictions on new, unseen data, i.e. generalize to new data.
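As a concrete toy illustration of data flowing through such layers, here is a minimal pure-Python sketch of a forward pass; the weights and inputs are arbitrary illustrative values, not the result of any training:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, activation):
    """One fully-connected layer: for each neuron, a weighted sum of the inputs
    plus a bias, fed through a nonlinear activation function."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A two-input network with one hidden layer of two neurons and one output neuron.
hidden = dense([0.5, -1.0], [[0.1, 0.2], [-0.3, 0.4]], [0.0, 0.1], sigmoid)
output = dense(hidden, [[1.0, -1.0]], [0.0], sigmoid)
```

A real network has far more neurons per layer and learns its weights from data, but the computation per layer is exactly this: a weighted sum followed by a nonlinearity.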

It has long been known that ANNs are very flexible, able to model and solve complicated problems, but also that they are difficult and very computationally expensive to train. (According to the famous universal approximation theorem for artificial neural networks Cybenko (1989); Hornik et al. (1989); Leshno et al. (1993); Sonoda and Murata (2017), ANNs are mathematically able to approximate any continuous function defined on compact subsets of $\mathbb{R}^n$, using finitely many neurons. There are some restrictions on the activation functions, but these can be relaxed, allowing for ReLUs for example, by restricting the function space. This is an existence theorem, and successfully training a neural network to approximate a given function is another matter entirely. However, the theorem does suggest that neural networks are reasonable to study and develop further, at least as an engineering endeavour aimed at realizing their theoretical powers.) This computational expense long limited their practical utility and led people, until recently, to focus on other machine learning models. But by now, artificial neural networks form one of the dominant methods in machine learning, and the most intensively studied. This change is thanks to the growth of big data, powerful processors for parallel computations (in particular, GPUs), some important tweaks to the algorithms used to construct and train the networks, and the development of easy-to-use software frameworks. The surge of interest in ANNs leads to an incredible pace of developments, which also drives other parts of machine learning with it.

The freely available books Goodfellow et al. (2016); Nielsen (2015) are two of the many excellent sources to learn more about artificial neural networks. We’ll only give a brief indication of how they are constructed and trained. The basic form of artificial neural networks (basic when compared to, for example, recurrent neural networks, whose architectures are more involved), the feedforward neural networks, are parametrized mathematical functions $f$ that map an input $x$ to an output $f(x) = f_L(f_{L-1}(\cdots f_1(x)))$ by feeding it through a number of nonlinear transformations. Here each component $f_l$, called a network layer, consists of a simple linear transformation of the previous component’s output, followed by a nonlinear function: $f_l(x) = \sigma_l(W_l x)$. The nonlinear functions $\sigma_l$ are typically sigmoid functions or ReLUs, as discussed below, and the $W_l$ are matrices of numbers, called the model’s weights. During the training phase, the network is fed training data and tasked with making predictions at the output layer that match the known labels, each component of the network producing an expedient representation of its input. It has to learn how to best utilize the intermediate representations to form a complex hierarchical representation of the data, ending in correct predictions at the output layer. Training a neural network means changing its weights to optimize the outputs of the network. This is done using an optimization algorithm, called gradient descent, on a function measuring the correctness of the outputs, called a cost function or loss function. The basic ideas behind training neural networks are simple: as training data is fed through the network, compute the gradient of the loss function with respect to every weight using the chain rule, and reduce the loss by changing these weights using gradient descent.
But one quickly meets huge computational challenges when faced with complicated networks with thousands or millions of parameters and an exponential number of paths between the nodes and the network output. The techniques designed to overcome these challenges get quite complicated. See (Goodfellow et al., 2016, Chapter 8) and (Aggarwal, 2018, Chapters 3 and 4) for detailed descriptions of the techniques and practical issues involved in training neural networks.
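The core optimization loop itself is simple to state. Below is a minimal sketch of plain gradient descent on a one-parameter loss; the loss function and learning rate are illustrative choices:

```python
def gradient_descent(grad, w, lr=0.1, steps=100):
    """Repeatedly step against the gradient to reduce the loss."""
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Minimize the loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w_opt = gradient_descent(lambda w: 2 * (w - 3), w=0.0)
```

Training a real network applies the same update rule to millions of weights simultaneously, with the gradients supplied by backpropagation.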

Artificial neural networks are often depicted as a network of nodes, as in Figure 1. (As we shall see, modern architectures are often significantly more complicated than captured by the illustration and equations above, with connections between non-consecutive layers, input fed in also at later layers, multiple outputs, and much more.)

Figure 1: Artificial neural networks are built from simple linear functions followed by nonlinearities. One of the simplest classes of neural network is the multilayer perceptron, or feedforward neural network, originating from the work of Rosenblatt in the 1950s Rosenblatt (1958). It’s based on simple computational units, called neurons, organized in layers. Writing $l$ for the layer and $k$ for the unit, the output of the $k$-th unit at the $l$-th layer is $y_k^{(l)} = \sum_j w_{kj}^{(l)} a_j^{(l-1)} + b_k^{(l)}$. Here $a^{(l-1)}$ consists of the outputs from the previous layer after they are fed through a simple nonlinear function $\sigma$ called an activation function, typically a sigmoid function or a rectified linear unit or small variations thereof, i.e. $a_j^{(l-1)} = \sigma(y_j^{(l-1)})$. Each layer therefore computes a weighted sum of all the outputs from the neurons in the previous layer, followed by a nonlinearity. These are called the layer activations. Each layer activation is fed to the next layer in the network, which performs the same calculation, until you reach the output layer, where the network’s predictions are produced. In the end, you obtain a hierarchical representation of the input data, where the earlier features tend to be very general, getting increasingly specific towards the output. By feeding the network training data, propagated through the layers, the network is trained to perform useful tasks. A training data point (or, typically, a small batch of training points) is fed to the network, the outputs and local derivatives at each node are recorded, and the difference between the output prediction and the true label is measured by an objective function, such as mean absolute error (L1), mean squared error (L2), cross-entropy loss, or Dice loss, depending on the application. The derivative of the objective function with respect to the output is calculated, and used as a feedback signal. The discrepancy is propagated backwards through the network, and all the weights are updated to reduce the error. This is achieved using backpropagation Linnainmaa (1970); Werbos (1974); Rumelhart et al. (1986), which calculates the gradient of the objective function with respect to the weights in each node using the chain rule together with dynamic programming, and gradient descent Cauchy (1847), an optimization algorithm tasked with improving the weights.
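To make the chain-rule computation concrete, here is a toy example for a single sigmoid neuron with a squared-error loss, comparing the analytic (backpropagation) gradient against a numerical finite-difference estimate; all values are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(w, x):
    return sigmoid(w * x)

def loss(w, x, y):
    return 0.5 * (predict(w, x) - y) ** 2

def backprop_grad(w, x, y):
    """Chain rule: dL/dw = (prediction - y) * sigmoid'(w*x) * x,
    where sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))."""
    a = predict(w, x)
    return (a - y) * a * (1.0 - a) * x

w, x, y = 0.7, 1.5, 1.0
analytic = backprop_grad(w, x, y)

# Central finite-difference estimate of the same derivative.
eps = 1e-6
numeric = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)
```

Gradient checking of this kind is a common way to verify a backpropagation implementation.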

2.2 Deep learning

Traditionally, machine learning models are trained to perform useful tasks based on manually designed features extracted from the raw data, or features learned by other simple machine learning models. In deep learning, the computers learn useful representations and features automatically, directly from the raw data, bypassing this manual and difficult step. By far the most common models in deep learning are variants of artificial neural networks, but there are others. The main common characteristic of deep learning methods is their focus on feature learning: automatically learning representations of data. This is the primary difference between deep learning approaches and more “classical” machine learning. Discovering features and performing a task is merged into one problem, and therefore both improved during the same training process. See LeCun et al. (2015) and Goodfellow et al. (2016) for general overviews of the field.

In medical imaging the interest in deep learning is mostly triggered by convolutional neural networks (CNNs) LeCun et al. (1998), a powerful way to learn useful representations of images and other structured data. (Interestingly, CNNs were applied in medical image analysis already in the early 90s, e.g. Lo et al. (1993), but with limited success.) Before it became possible to use CNNs efficiently, image features typically had to be engineered by hand, or created by less powerful machine learning models. Once it became possible to learn features directly from the data, many of the handcrafted image features were left by the wayside as they turned out to be almost worthless compared to the feature detectors found by CNNs. (However, combining hand-engineered features with CNN features is a very reasonable approach when low amounts of training data make it difficult to learn good features automatically.) There are some strong preferences embedded in CNNs based on how they are constructed, which help us understand why they are so powerful. Let us therefore take a look at the building blocks of CNNs.

Figure 2: Building blocks of a typical CNN. A slight modification of a figure in Murray (2018), courtesy of the author.

2.3 Building blocks of convolutional neural networks

When applying neural networks to images one can in principle use the simple feedforward neural networks discussed above. However, having connections from all nodes of one layer to all nodes in the next is extremely inefficient. A careful pruning of the connections based on domain knowledge, i.e. the structure of images, leads to much better performance. A CNN is a particular kind of artificial neural network aimed at preserving spatial relationships in the data, with very few connections between the layers. The input to a CNN is arranged in a grid structure and then fed through layers that preserve these relationships, each layer operation operating on a small region of the previous layer (Fig. 2). CNNs are able to form highly efficient representations of the input data, well-suited for image-oriented tasks. (It is interesting to compare this with biological vision systems, in which neurons at different hierarchical levels have receptive fields of variable size, i.e. volumes in visual space.) A CNN has multiple layers of convolutions and activations, often interspersed with pooling layers, and is trained using backpropagation and gradient descent as for standard artificial neural networks. See Section 2.1. In addition, CNNs typically have fully-connected layers at the end, which compute the final outputs. (Lately, so-called fully-convolutional CNNs have become popular, in which average pooling across the whole input after the final activation layer replaces the fully-connected layers, significantly reducing the total number of weights in the network.)

  1. Convolutional layers: In the convolutional layers the activations from the previous layers are convolved with a set of small parameterized filters, frequently of size $3 \times 3$, collected in a tensor $W^{(l)}_k$, where $k$ is the filter number and $l$ is the layer number. By having each filter share the exact same weights across the whole input domain, i.e. translational equivariance at each layer, one achieves a drastic reduction in the number of weights that need to be learned. The motivation for this weight-sharing is that features appearing in one part of the image likely also appear in other parts. If you have a filter capable of detecting horizontal lines, say, then it can be used to detect them wherever they appear. Applying all the convolutional filters at all locations of the input to a convolutional layer produces a tensor of feature maps.

  2. Activation layer: The feature maps from a convolutional layer are fed through nonlinear activation functions. This makes it possible for the entire neural network to approximate almost any nonlinear function Leshno et al. (1993); Sonoda and Murata (2017). (A neural network with only linear activations would only be able to perform linear approximation; adding further layers wouldn’t improve its expressiveness.) The activation functions are generally the very simple rectified linear units, or ReLUs, defined as $\mathrm{ReLU}(x) = \max(0, x)$, or variants like leaky ReLUs or parametric ReLUs. (Other options include exponential linear units, ELUs, and the now rarely used sigmoid or tanh activation functions. See Clevert et al. (2015); He et al. (2015) for more information about these and other activation functions.) Feeding the feature maps through an activation function produces new tensors, typically also called feature maps.

  3. Pooling: Each feature map produced by feeding the data through one or more convolutional layers is then typically pooled in a pooling layer. Pooling operations take small grid regions as input and produce single numbers for each region. The number is usually computed using the max function (max-pooling) or the average function (average pooling). Since a small shift of the input image results in small changes in the activation maps, the pooling layers give the CNN some translational invariance.

    A different way of getting the downsampling effect of pooling is to use convolutions with increased stride lengths. Removing the pooling layers simplifies the network architecture without necessarily sacrificing performance Springenberg et al. (2014).
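The three building blocks above can be sketched in a few lines of plain Python. This is a minimal single-channel illustration with no padding and a hand-picked, illustrative edge-detecting kernel, not how a deep learning framework would implement these operations:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling over size x size regions."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A bright-above-dark horizontal edge, and a kernel that responds to it.
image = [[1, 1, 1, 1, 1, 1]] * 3 + [[0, 0, 0, 0, 0, 0]] * 3
kernel = [[1, 1, 1],
          [0, 0, 0],
          [-1, -1, -1]]
pooled = max_pool(relu(conv2d(image, kernel)))
```

Note that, like most deep learning libraries, conv2d here computes a cross-correlation rather than a mathematical convolution; since the filters are learned, the distinction is immaterial in practice.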

Other common elements in many modern CNNs include

  1. Dropout regularization: A simple idea that gave a huge boost in the performance of CNNs. By averaging several models in an ensemble one tends to get better performance than when using single models. Dropout Srivastava et al. (2014) is an averaging technique based on stochastic sampling of neural networks. (The idea of dropout is also used for other machine learning models, as in the DART technique for regression trees Rashmi and Gilad-Bachrach (2015).) By randomly removing neurons during training one ends up using slightly different networks for each batch of training data, and the weights of the trained network are tuned based on optimization of multiple variations of the network. (In addition to increasing model performance, dropout can also be used to produce robust uncertainty measures in neural networks. By leaving dropout turned on also during inference one effectively performs variational inference Gal (2016); Murray (2018); Wickstrøm et al. (2018). This relates standard deep neural networks to Bayesian neural networks, synthesized in the field of Bayesian deep learning.)

  2. Batch normalization: These layers are typically placed after activation layers, producing normalized activation maps by subtracting the mean and dividing by the standard deviation for each training batch. Including batch normalization layers forces the network to periodically change its activations to zero mean and unit standard deviation as the training batch hits these layers, which works as a regularizer for the network, speeds up training, and makes it less dependent on careful parameter initialization Ioffe and Szegedy (2015).
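Both operations are easy to state in code. The sketch below shows inverted dropout (scaling the surviving activations by 1/(1-p) so the expected activation is unchanged) and a simplified batch normalization over a one-dimensional batch; a real batch norm layer divides by the square root of the variance plus epsilon and adds trainable scale and shift parameters, which are omitted here for brevity:

```python
import random
from statistics import mean, pstdev

def dropout(activations, p, rng):
    """Inverted dropout: zero each activation with probability p during training,
    scaling the survivors by 1 / (1 - p) so the expected value is unchanged."""
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]

def batch_norm(batch, eps=1e-5):
    """Normalize a batch of activations to (approximately) zero mean and unit
    standard deviation. Trainable scale/shift parameters are omitted."""
    mu, sigma = mean(batch), pstdev(batch)
    return [(x - mu) / (sigma + eps) for x in batch]

normed = batch_norm([2.0, 4.0, 6.0, 8.0])
dropped = dropout([1.0, 1.0, 1.0, 1.0], p=0.5, rng=random.Random(0))
```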

In the design of new and improved CNN architectures, these components are combined in increasingly complicated and interconnected ways, or even replaced by other more convenient operations. When architecting a CNN for a particular task there are multiple factors to consider, including understanding the task to be solved and the requirements to be met, figuring out how to best feed the data to the network, and optimally utilizing one’s budget for computation and memory consumption. In the early days of modern deep learning one tended to use very simple combinations of the building blocks, as in LeNet LeCun et al. (1998) and AlexNet Krizhevsky et al. (2012). Later network architectures are much more complex, each generation building on ideas and insights from previous architectures, resulting in updates to the state-of-the-art. Table 1 contains a short list of some famous CNN architectures, illustrating how the building blocks can be combined and how the field moves along.

AlexNet Krizhevsky et al. (2012) The network that launched the current deep learning boom by winning the 2012 ILSVRC competition by a huge margin. Notable features include the use of ReLUs, dropout regularization, splitting the computations on multiple GPUs, and using data augmentation during training. ZFNet Zeiler and Fergus (2014), a relatively minor modification of AlexNet, won the 2013 ILSVRC competition.
VGG Simonyan and Zisserman (2014) Popularized the idea of using smaller filter kernels and therefore deeper networks (up to 19 layers for VGG19, compared to 7 for AlexNet and ZFNet), and training the deeper networks using pre-training on shallower versions.
GoogLeNet Szegedy et al. (2015) Promoted the idea of stacking the layers in CNNs more creatively, as networks in networks, building on the idea of Lin et al. (2013). Inside a relatively standard architecture (called the stem), GoogLeNet contains multiple inception modules, in which multiple different filter sizes are applied to the input and their results concatenated. This multi-scale processing allows the module to extract features at different levels of detail simultaneously. GoogLeNet also popularized the idea of not using fully-connected layers at the end, but rather global average pooling, significantly reducing the number of model parameters. It won the 2014 ILSVRC competition.
ResNet He et al. (2016) Introduced skip connections, which makes it possible to train much deeper networks. A 152 layer deep ResNet won the 2015 ILSVRC competition, and the authors also successfully trained a version with 1001 layers. Having skip connections in addition to the standard pathway gives the network the option to simply copy the activations from layer to layer (more precisely, from ResNet block to ResNet block), preserving information as data goes through the layers. Some features are best constructed in shallow networks, while others require more depth. The skip connections facilitate both at the same time, increasing the network’s flexibility when fed input data. As the skip connections make the network learn residuals, ResNets perform a kind of boosting.
Highway nets Srivastava et al. (2015) Another way to increase depth based on gating units, an idea from Long Short Term Memory (LSTM) recurrent networks, enabling optimization of the skip connections in the network. The gates can be trained to find useful combinations of the identity function (as in ResNets) and the standard nonlinearity through which to feed its input.
DenseNet Huang et al. (2017) Builds on the ideas of ResNet, but instead of adding the activations produced by one layer to later layers, they are simply concatenated together. The original inputs in addition to the activations from previous layers are therefore kept at each layer (again, more precisely, between blocks of layers), preserving some kind of global state. This encourages feature reuse and lowers the number of parameters for a given depth. DenseNets are therefore particularly well-suited for smaller data sets (outperforming others on e.g. Cifar-10 and Cifar-100).
ResNext Xie et al. (2017) Builds on ResNet and GoogLeNet by using inception modules between skip connections.
SENets Hu et al. (2017) Squeeze-and-Excitation Networks, which won the ILSVRC 2017 competition, builds on ResNext but adds trainable parameters that the network can use to weigh each feature map, where earlier networks simply added them up. These SE-blocks allows the network to model the channel and spatial information separately, increasing the model capacity. SE-blocks can easily be added to any CNN model, with negligible increase in computational costs.
NASNet Zoph et al. (2017) A CNN architecture designed by a neural network, beating all the previous human-designed networks at the ILSVRC competition. It was created using AutoML, Google Brain’s reinforcement learning approach to architecture design Bello et al. (2017). A controller network (a recurrent neural network) proposes architectures aimed to perform at a specific level for a particular task, and by trial and error learns to propose better and better models. NASNet was based on Cifar-10, and has relatively modest computational demands, but still outperformed the previous state-of-the-art on ILSVRC data.
YOLO Redmon et al. (2016) Introduced a new, simplified way to do simultaneous object detection and classification in images. It uses a single CNN operating directly on the image and outputting bounding boxes and class probabilities. It incorporates several elements from the above networks, including inception modules and pretraining a smaller version of the network. It’s fast enough to enable real-time processing. YOLO makes it easy to trade accuracy for speed by reducing the model size. YOLOv3-tiny was able to process images at over 200 frames per second on a standard benchmark data set, while still producing reasonable predictions.
GANs Goodfellow et al. (2014) A generative adversarial network consists of two neural networks pitted against each other. The generative network G is tasked with creating samples that the discriminative network D is supposed to classify as coming from the generative network or the training data. The networks are trained simultaneously, where G aims to maximize the probability that D makes a mistake while D aims for high classification accuracy.
Siamese nets Koch et al. (2015) An old idea (e.g. Bromley et al. (1994)) that has recently been shown to enable one-shot learning, i.e. learning from a single example. A siamese network consists of two identical neural networks, sharing both architecture and weights, joined at their outputs. They are trained together to differentiate pairs of inputs. Once trained, the features of the networks can be used to perform one-shot learning without retraining.
U-net Ronneberger et al. (2015) A very popular and successful network for segmentation in 2D images. When fed an input image, it is first downsampled through a “traditional” CNN, before being upsampled using transpose convolutions until it reaches its original size. In addition, based on the ideas of ResNet, there are skip connections that concatenate features from the downsampling path to the upsampling path. It is a fully-convolutional network, using the ideas first introduced in Long et al. (2015).
V-net Milletari et al. (2016) A three-dimensional version of U-net with volumetric convolutions and skip-connections as in ResNet.
Table 1: A far from exhaustive, non-chronological list of CNN architectures with some high-level descriptions

These neural networks are typically implemented in one or more of a small number of software frameworks that dominate machine learning research, all built on top of NVIDIA’s CUDA platform and the cuDNN library. Today’s deep learning methods are almost exclusively implemented in either TensorFlow, a framework originating from Google Research, Keras, a deep learning library originally built by François Chollet and recently incorporated in TensorFlow, or PyTorch, a framework associated with Facebook Research. There are very few exceptions (YOLO, built using the Darknet framework Redmon (2016), is one of the rare ones). All the main frameworks are open source and under active development.

3 Deep learning, medical imaging and MRI

Deep learning methods are increasingly used to improve clinical practice, and the list of examples is long, growing daily. We will not attempt a comprehensive overview of deep learning in medical imaging, but merely sketch some of the landscape before going into a more systematic exposition of deep learning in MRI.

Convolutional neural networks can be used for efficiency improvement in radiology practices through protocol determination based on short-text classification Lee (2018). They can also be used to reduce the gadolinium dose in contrast-enhanced brain MRI by an order of magnitude Gong et al. (2018) without significant reduction in image quality. Deep learning is applied in radiotherapy Meyer et al. (2018), in PET-MRI attenuation correction Liu et al. (2018); Mehranian et al. (2016), in radiomics Lao et al. (2017); Oakden-Rayner et al. (2017) (see Peeken et al. (2018) for a review of radiomics related to radiooncology and medical physics), and for theranostics in neurosurgical imaging, combining confocal laser endomicroscopy with deep learning models for automatic detection of intraoperative CLE images on-the-fly Izadyyazdanabadi et al. (2018).

Another important application area is advanced deformable image registration, enabling quantitative analysis across different physical imaging modalities and across time (e.g. test-retest examinations, or motion correction in dynamic imaging). Examples include elastic registration between 3D MRI and transrectal ultrasound for guiding targeted prostate biopsy Haskins et al. (2018); deformable registration for brain MRI where a “cue-aware deep regression network” learns from a given set of training images the displacement vector associated with a pair of reference-subject patches Cao et al. (2018); fast deformable image registration of brain MR image pairs by patch-wise prediction of the Large Deformation Diffeomorphic Metric Mapping model Yang et al. (2017); an unsupervised convolutional neural network-based algorithm for deformable image registration of cone-beam CT to CT using a deep convolutional inverse graphics network Kearney et al. (2018); a deep learning-based 2D/3D registration framework for registration of preoperative 3D data and intraoperative 2D X-ray images in image-guided therapy Zheng et al. (2018); and real-time prostate segmentation during targeted prostate biopsy, utilizing temporal information in the series of ultrasound images Anas et al. (2018).

This is just a tiny sliver of the many applications of deep learning to central problems in medical imaging. There are several thorough reviews and overviews of the field to consult for more information, across modalities and organs, and with different points of view and levels of technical detail. For example the comprehensive review Ching et al. (2018) (a continuously updated collaborative manuscript with 500 references), covering both medicine and biology and spanning from imaging applications in healthcare to protein-protein interaction and uncertainty quantification; key concepts of deep learning for clinical radiologists Lee et al. (2017); Rueckert et al. (2016); Chartrand et al. (2017); Erickson et al. (2017); Mazurowski et al. (2018); McBee et al. (2018); Savadjiev et al. (2018); Thrall et al. (2018); Yamashita et al. (2018); Yasaka et al. (2018), including radiomics and imaging genomics (radiogenomics) Giger (2018), and toolkits and libraries for deep learning Erickson et al. (2017); deep learning in neuroimaging and neuroradiology Zaharchuk et al. (2018); brain segmentation Akkus et al. (2017); stroke imaging Lee et al. (2017); Feng et al. (2018); neuropsychiatric disorders Vieira et al. (2017); breast cancer Burt et al. (2018); Samala et al. (2017); chest imaging van Ginneken (2017); imaging in oncology Morin et al. (2018); Parmar et al. (2018); Xue et al. (2017); medical ultrasound Brattain et al. (2018); Huang et al. (2018); and more technical surveys of deep learning in medical image analysis Litjens et al. (2017); Shen et al. (2017); Suzuki (2017); Cao et al. (2018). Finally, for those who like to be hands-on, there are many instructive introductory deep learning tutorials available online. For example Lakhani et al. (2018), with accompanying code, guides you through the construction of a system that can differentiate a chest X-ray from an abdominal X-ray using the Keras/TensorFlow framework in a Jupyter Notebook. Other nice tutorials are based on the Deep Learning Toolkit (DLTK) Pawlowski et al. (2017) and on the Microsoft Cognitive Toolkit (CNTK).

Let’s now turn to the field of MRI, in which deep learning has seen applications at every step of the workflow, from acquisition to image retrieval, from segmentation to disease prediction. We divide this into two parts: (i) the signal processing chain close to the physics of MRI, including image restoration and multimodal image registration (Fig. 3), and (ii) the use of deep learning in MR image segmentation, disease detection, disease prediction and systems based on images and text data (reports), addressing a few selected organs such as the brain, the kidney, the prostate and the spine (Fig. 4).

3.1 From image acquisition to image registration

Deep learning in MRI has typically been focused on segmentation and classification of reconstructed magnitude images. Its penetration into the lower levels of MRI measurement techniques is more recent, but already impressive: from MR image acquisition and signal processing in MR fingerprinting, to denoising and super-resolution, and on to image synthesis.

Figure 3: Deep learning in the MR signal processing chain, from image acquisition (in complex-valued k-space) and image reconstruction, to image restoration (e.g. denoising) and image registration. The rightmost column illustrates coregistration of multimodal brain MRI. sMRI = structural 3D T1-weighted MRI, dMRI = diffusion weighted MRI (stack of slices in blue superimposed on sMRI), fMRI = functional BOLD MRI (in red).

3.1.1 Data acquisition and image reconstruction

Research on CNN and RNN-based image reconstruction methods is rapidly increasing, pioneered by Yang et al. Yang et al. (2016) at NIPS 2016 and Wang et al. Wang et al. (2016) at ISBI 2016. Recent applications address e.g. convolutional recurrent neural networks for dynamic MR image reconstruction Qin et al. (2018), reconstructing good quality cardiac MR images from highly undersampled complex-valued k-space data by learning spatio-temporal dependencies, outperforming 3D CNN approaches and compressed sensing-based dynamic MRI reconstruction algorithms in computational complexity, reconstruction accuracy and speed for different undersampling rates. Schlemper et al. Schlemper et al. (2018) created a deep cascade of concatenated CNNs for dynamic MR image reconstruction, making use of data augmentation, both rigid and elastic deformations, to increase the variation of the examples seen by the network and reduce overfitting. Using variational networks for single-shot fast spin-echo MRI with variable density sampling, Chen et al. Chen et al. (2018) enabled real-time (200 ms per section) image reconstruction, outperforming conventional parallel imaging and compressed sensing reconstruction. In Knoll et al. (2018), the authors explored the potential for transfer learning (pretrained models) and assessed the generalization of learned image reconstruction with respect to image contrast, SNR, sampling pattern and image content, using a variational network and true measurement k-space data from patient knee MRI recordings as well as synthetic k-space data generated from images in the Berkeley Segmentation Data Set and Benchmarks. Employing least-squares generative adversarial networks (GANs) that learn texture details and suppress high-frequency noise, Mardani et al. (2018) created a novel compressed sensing framework that can produce diagnostic quality reconstructions “on the fly” (30 ms). In their GAN setting, a generator network is used to map undersampled data to a realistic-looking image with high measurement fidelity, while a discriminator network is trained jointly to score the quality of the reconstructed image. A unified framework for image reconstruction Zhu et al. (2018), called automated transform by manifold approximation (AUTOMAP), consisting of a feedforward deep neural network with fully connected layers followed by a sparse convolutional autoencoder, formulates image reconstruction generically as a data-driven supervised learning task that generates a mapping between the sensor and the image domain based on an appropriate collection of training data (e.g. MRI examinations collected from the Human Connectome Project, transformed to the k-space sensor domain).
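For fully sampled Cartesian acquisitions, the mapping such networks learn coincides with the classical inverse Fourier transform from k-space to image space. A toy NumPy sketch (hypothetical phantom and sampling pattern, purely illustrative) shows this inverse, and the aliasing that undersampling introduces, which learned reconstructions are trained to suppress:

```python
import numpy as np

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                        # toy square "phantom"
kspace = np.fft.fft2(img)                      # simulated fully sampled acquisition

recon_full = np.abs(np.fft.ifft2(kspace))      # classical inverse: exact recovery

mask = np.zeros(kspace.shape)
mask[::2, :] = 1.0                             # keep only every second line
recon_under = np.abs(np.fft.ifft2(kspace * mask))  # aliased reconstruction

err_full = np.abs(recon_full - img).max()      # ~0: all of k-space was kept
err_under = np.abs(recon_under - img).max()    # large: aliasing from undersampling
print(err_full < 1e-9, err_under > 0.1)
```

A learned reconstruction replaces the plain inverse transform with a network that maps the undersampled k-space data to an artifact-free image.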

There are also other approaches and reports on deep learning in MR image reconstruction, e.g. Eo et al. (2018); Han et al. (2018); Shi et al. (2018); Yang et al. (2018), in a fundamental field that is progressing rapidly.

3.1.2 Quantitative parameters - QSM and MR fingerprinting

Another developing area within deep learning for MRI is the estimation of quantitative tissue parameters from recorded complex-valued data, for example within quantitative susceptibility mapping, and in the exciting field of magnetic resonance fingerprinting.

Quantitative susceptibility mapping (QSM) is a growing field of research in MRI, aiming to noninvasively estimate the magnetic susceptibility of biological tissue Deistung et al. (2013, 2017). The technique is based on solving the difficult, ill-posed inverse problem of determining the magnetic susceptibility from local magnetic fields. Recently Yoon et al. Yoon et al. (2018) constructed a three-dimensional CNN, named QSMnet and based on the U-Net architecture, able to generate high quality susceptibility source maps from single orientation data. The authors generated training data by using the gold-standard for QSM: the so-called COSMOS method Liu et al. (2009). The data was based on 60 scans from 12 healthy volunteers. The resulting model both simplified and improved the state-of-the-art for QSM. Rasmussen and coworkers Rasmussen et al. (2018) took a different approach. They also used a U-Net-based convolutional neural network to perform field-to-source inversion, called DeepQSM, but it was trained on synthetically generated data containing simple geometric shapes such as cubes, rectangles and spheres. After training their model on synthetic data it was able to generalize to real-world clinical brain MRI data, computing susceptibility maps within seconds end-to-end. The authors conclude that their method, combined with fast imaging sequences, could make QSM feasible in standard clinical practice.
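Both QSMnet and DeepQSM inherit the U-net pattern of an encoding (downsampling) path, a decoding (upsampling) path, and skip connections between the two. At the level of tensor shapes the idea can be sketched as follows (pooling and repetition are illustrative stand-ins for the learned convolutions of a real U-net):

```python
import numpy as np

def downsample(x):
    # stand-in for conv + 2x2 max pooling on an (H, W, C) feature map
    h, w, ch = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, ch).max(axis=(1, 3))

def upsample(x):
    # stand-in for a transpose convolution (nearest-neighbour upsampling)
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.rand(64, 64, 16)            # input feature map
enc = downsample(x)                       # encoder path: (32, 32, 16)
dec = upsample(enc)                       # decoder path: back to (64, 64, 16)
out = np.concatenate([x, dec], axis=-1)   # skip connection: channels concatenated
print(enc.shape, out.shape)
```

The concatenation along the channel axis is what lets the decoder combine coarse context from the bottleneck with fine detail from the encoder.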

Magnetic resonance fingerprinting (MRF) was introduced a little more than five years ago Ma et al. (2013), and has been called “a promising new approach to obtain standardized imaging biomarkers from MRI” by the European Society of Radiology (ESR). It uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution (“fingerprint”) that is a function of the multiple material properties being investigated. Mapping the signals back to known tissue parameters (T1, T2 and proton density) is then a rather difficult inverse problem. MRF is closely related to the idea of compressed sensing Donoho (2006) in MRI Lustig et al. (2007) in that MRF undersamples data in k-space, producing aliasing artifacts in the reconstructed images that can be suppressed by compressed sensing. (See McCann et al. (2017); Shah and Hegde (2018); Lucas et al. (2018); Aggarwal et al. (2018); Li et al. (2018) for recent perspectives and developments connecting deep learning-based reconstruction methods to the more general research field of inverse problems.) MRF can be regarded as a quantitative multiparametric MRI analysis, and with recent acquisition schemes using a single-shot spiral trajectory with undersampling, whole-brain coverage of T1, T2 and proton density maps can be acquired at millimeter voxel resolution in less than 5 min Ma et al. (2018).

The processing of MRF after acquisition usually involves various pattern recognition algorithms that try to match the fingerprints to a predefined dictionary of predicted signal evolutions, created using the Bloch equations Ma et al. (2013); Panda et al. (2017). (Such a dictionary contains a time series for every possible combination of parameters like (discretized) T1 and T2 relaxation times, spin density (M0), B1, off-resonance frequency, and also voxel-wise cerebral blood volume (CBV), mean vessel radius (R), blood oxygen saturation (SO2) and T2* Christen et al. (2014); Lemasson et al. (2016); Rieger et al. (2018), and more, e.g. MRF-ASL Wright et al. (2018).)

Recently, deep learning methodology has been applied to MR fingerprinting. Cohen et al. Cohen et al. (2018) reformulated the MRF reconstruction problem as learning an optimal function that maps the recorded signal magnitudes to the corresponding tissue parameter values, trained on a sparse set of dictionary entries. To achieve this they fed voxel-wise MRI data acquired with an MRF sequence (MRF-EPI, 25 frames in 3 sec; or MRF-FISP, 600 frames in 7.5 sec) to a four-layer neural network consisting of two hidden layers with fully connected nodes and two nodes in the output layer, considering only T1 and T2 parametric maps. The network, called MRF Deep RecOnstruction NEtwork (DRONE), was trained by an adaptive moment estimation stochastic gradient descent algorithm with a mean squared error loss function. Their dictionary consisted of 70,000 entries (the product of the discretized T1 and T2 values) and training the network to convergence with this dictionary (10 MB for MRF-EPI and 300 MB for MRF-FISP) required 10 to 70 min using an NVIDIA K80 GPU with 2 GB memory. They found their reconstruction time (10 to 70 ms per slice) to be 300 to 5000 times faster than conventional dictionary-matching techniques, using both well-characterized calibrated ISMRM/NIST phantoms and in vivo human brains.

A similar deep learning approach to predict quantitative parameter values (T1 and T2) from MRF time series was taken by Hoppe et al. Hoppe et al. (2017). In their experiments they used 2D MRF-FISP data with variable TR (12-15 ms), flip angles (5-74°) and 3000 repetitions, recorded on a MAGNETOM 3T Skyra. A high resolution dictionary was simulated to generate a large collection of training and testing data, using the T1 and T2 relaxation time ranges present in normal brain tissue at 3T (e.g. Bojorquez et al. (2017)), resulting in a large set of simulated time series. In contrast to Cohen et al. (2018), their deep neural network architecture was inspired by the domain of speech recognition due to the similarity of the two tasks. The architecture with the smallest average error on validation data was a standard convolutional neural network consisting of an input layer of 3000 nodes (the number of samples in the recorded time series), four hidden layers, and an output layer with two nodes (T1 and T2). Matching one time series was about 100 times faster than the conventional Ma et al. (2013) matching method, with very small mean absolute deviations from ground truth values.

In the same context, Fang et al. Fang et al. (2017) used a deep learning method to extract tissue properties from highly undersampled 2D MRF-FISP data in brain imaging, where 2300 time points were acquired from each measurement and each time point consisted of data from one spiral readout only. The real and imaginary parts of the complex signal were separated into two channels. They used the MRF signal from a patch of neighboring pixels to incorporate correlated information between those pixels. In their work they designed a standard three-layer CNN with T1 and T2 as output.

Virtue et al. Virtue et al. (2017) investigated a different approach to MRF. By generating 100,000 synthetic MRI signals using a Bloch equation simulator, they were able to train feedforward deep neural networks to map new MRI signals to the tissue parameters directly, producing approximate solutions to the inverse mapping problem of MRF. In their work they designed a new complex activation function, the complex cardioid, that was used to construct a complex-valued feedforward neural network. This three-layer network outperformed both the standard MRF techniques based on dictionary matching, and the analogous real-valued neural network operating on the real and imaginary components separately. This suggested that complex-valued networks are better suited to uncovering information in complex data. (Complex-valued deep learning is also getting attention in the broader research community, and has been shown to lead to improved models; see e.g. Tygert et al. (2016); Trabelsi et al. (2017) and the references therein.)

3.1.3 Image restoration (denoising, artifact detection)

Estimation of noise and image denoising in MRI has been an important field of research for many years Sijbers et al. (1998); McVeigh et al. (1985), employing a plethora of methods. For example Bayesian Markov random field models Baselice et al. (2017), rough set theory Phophalia and Mitra (2017), higher-order singular value decomposition Zhang et al. (2015), wavelets Van De Ville et al. (2007), independent component analysis Salimi-Khorshidi et al. (2014), or higher order PDEs Lysaker et al. (2003).

Recently, deep learning approaches have been introduced for denoising. In their work on learning implicit brain MRI manifolds using deep neural networks, Bermudez et al. Bermudez et al. (2018) implemented an autoencoder with skip connections for image denoising, testing their approach by adding various levels of Gaussian noise to more than 500 T1-weighted brain MR images from healthy controls in the Baltimore Longitudinal Study of Aging. Their autoencoder network outperformed the current FSL SUSAN denoising software according to peak signal-to-noise ratios. Benou et al. Benou et al. (2017) addressed spatio-temporal denoising of dynamic contrast-enhanced MRI of the brain with bolus injection of contrast agent (CA), proposing a novel approach using ensembles of deep neural networks for noise reduction. Each DNN was trained on a different range of SNRs and types of CA concentration time curves (denoted “pathology experts”, “healthy experts”, “vessel experts”) to generate a reconstruction hypothesis from noisy input, using a classification DNN to select the most likely hypothesis and provide a “clean output” curve. Training data was generated synthetically using a three-parameter Tofts pharmacokinetic (PK) model and noise realizations. To improve this model, accounting for spatial dependencies of the PK parameters, they used concatenated noisy time curves from first-order neighbourhood pixels in their expert DNNs and ensemble hypothesis DNN, collecting neighboring reconstructions before a boosting procedure produced the final clean output for the pixel of interest. They tested their trained ensemble model on 33 patients with either stroke or recurrent glioblastoma from two different DCE-MRI databases (one being RIDER NEURO), acquired at different sites, with different imaging protocols, and with different scanner vendors and field strengths. The qualitative and quantitative (MSE) denoising results were better than those of spatio-temporal Beltrami, moving average, the dynamic Non Local Means method Gal et al. (2010), and stacked denoising autoencoders Vincent et al. (2010). The run-time comparisons were also in favor of the proposed sDNN.
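The peak signal-to-noise ratio used in such comparisons is simple to compute. A minimal sketch (toy image, with hypothetical noise levels standing in for real scanner noise and for a real denoiser's residual error):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    # peak signal-to-noise ratio in dB
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                            # stand-in ground-truth slice
noisy = clean + 0.05 * rng.normal(size=clean.shape)     # additive Gaussian noise
denoised = clean + 0.01 * rng.normal(size=clean.shape)  # stand-in denoiser output

print(psnr(clean, noisy) < psnr(clean, denoised))       # better denoising = higher PSNR
```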

In this context of DCE-MRI, it’s tempting to speculate whether deep neural network approaches could be used for direct estimation of tracer-kinetic parameter maps from highly undersampled k-space data in dynamic recordings Dikaios et al. (2014); Guo et al. (2017), a powerful way to by-pass 4D DCE-MRI reconstruction altogether and map sensor data directly to spatially resolved pharmacokinetic parameters, e.g. Ktrans and ve in the extended Tofts model, or parameters in other classic models Sourbron and Buckley (2013). A related approach in the domain of diffusion MRI, by-passing the model-fitting steps and computing voxel-wise scalar tissue properties (e.g. radial kurtosis, fiber orientation dispersion index) directly from the subsampled DWIs, was taken by Golkov et al. Golkov et al. (2016) in their proposed “q-space deep learning” family of methods.

Deep learning methods have also been applied to MR artifact detection, e.g. detection of poor quality spectra in MRSI Gurbani et al. (2018); detection and removal of ghosting artifacts in MR spectroscopy Kyathanahally et al. (2018); and automated reference-free detection of patient motion artifacts in MRI Küstner et al. (2018).

3.1.4 Image super-resolution

Image super-resolution, reconstructing a higher-resolution image or image sequence from the observed low-resolution image Yue et al. (2016), is an exciting application of deep learning methods.

Super-resolution for MRI has been around for almost 10 years Shilling et al. (2009); Ropele et al. (2010) and can be used to improve the trade-off between resolution, SNR, and acquisition time Plenge et al. (2012), generate 7T-like MR images on 3T MRI scanners Bahrami et al. (2017), or obtain super-resolution T1 maps from a set of low resolution T1-weighted images Van Steenkiste et al. (2017). Recently, deep learning approaches have been introduced, e.g. generating super-resolution single-contrast (no reference information) and multi-contrast (applying a high-resolution image of another modality as reference) brain MR images using CNNs Zeng et al. (2018); constructing super-resolution brain MRI with a CNN stacked by multi-scale fusion units Liu et al. (2018); and super-resolution musculoskeletal MRI (“DeepResolve”) Chaudhari et al. (2018). In DeepResolve, thin (0.7 mm) slices in knee images (DESS) from 124 patients included in the Osteoarthritis Initiative were used for training and 17 patients for testing, with a 10 sec inference time per 3D volume. The resulting images were evaluated both quantitatively (MSE, PSNR, and the perceptual window-based structural similarity (SSIM) index) and qualitatively by expert radiologists.
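A simple way to see what such networks must learn: in MRI, a low-resolution acquisition corresponds to sampling only the centre of k-space, and truncating the high frequencies blurs sharp structures. A toy NumPy sketch (hypothetical sizes, purely illustrative):

```python
import numpy as np

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0                        # small sharp feature
k = np.fft.fftshift(np.fft.fft2(img))          # centred k-space

c = 32
low_k = np.zeros_like(k)
low_k[c - 8:c + 8, c - 8:c + 8] = k[c - 8:c + 8, c - 8:c + 8]  # keep 16x16 centre
lowres = np.abs(np.fft.ifft2(np.fft.ifftshift(low_k)))

full = np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
err_full = np.abs(full - img).max()            # ~0: all frequencies kept
err_low = np.abs(lowres - img).max()           # large: high frequencies lost
print(err_full < 1e-9, err_low > 0.05)
```

A super-resolution network is trained on pairs like (lowres, img) to recover the missing high-frequency detail.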

3.1.5 Image synthesis

Image synthesis in MRI has traditionally been seen as a method to derive new parametric images or new tissue contrast from a collection of MR acquisitions performed during the same imaging session, i.e. “an intensity transformation applied to a given set of input images to generate a new image with a specific tissue contrast” Jog et al. (2017). Another avenue of MRI synthesis is related to quantitative imaging and the development and use of physical phantoms, imaging calibration/standard test objects with specific material properties. This is done in order to assess the performance of an MRI scanner, or to assess imaging biomarkers reliably with application-specific phantoms such as a structural brain imaging phantom, DCE-MRI perfusion phantom, diffusion phantom, flow phantom, breast phantom or a proton-density fat fraction phantom Keenan et al. (2018). The in silico modeling of MR images with certain underlying properties, e.g. Jurczuk et al. (2014); Zhou et al. (2018), or model-based generation of large databases of (cardiac) images from real healthy cases Duchateau et al. (2018), is also part of this endeavour. In this context, deep learning approaches have accelerated research, e.g. by reducing the need for large amounts of costly training data.

The last couple of years have seen impressive results for photo-realistic image synthesis using deep learning techniques, especially generative adversarial networks (GANs, introduced by Goodfellow et al. in 2014 Goodfellow et al. (2014)), e.g. Creswell et al. (2018); Hong et al. (2017); Huang et al. (2018). These can also be used for biological image synthesis Osokin et al. (2017); Antipov et al. (2017) and text-to-image synthesis Bodnar (2018); Dong et al. (2017); Reed et al. (2016). Recently, a group of researchers from NVIDIA, the MGH & BWH Center for Clinical Data Science in Boston, and the Mayo Clinic in Rochester Shin et al. (2018) designed a clever approach to generate synthetic abnormal MRI images with brain tumors by training a GAN based on pix2pix, using two publicly available data sets of brain MRI (ADNI and the BRATS’15 Challenge, and later also the Ischemic Stroke Lesion Segmentation ISLES’2018 Challenge). This approach is highly interesting as medical imaging datasets are often imbalanced, with few pathological findings, limiting the training of deep learning models. Such generative models for image synthesis serve as a form of data augmentation, and also as an anonymization tool. The authors achieved comparable tumor segmentation results when training on the synthetic data rather than on real patient data. A related approach to brain tumor segmentation using coarse-to-fine GANs was taken by Mok & Chung Mok and Chung (2018). Guibas et al. Guibas et al. (2017) used a two-stage pipeline for generating synthetic medical images from a pair of GANs, addressing retinal fundus images, and provided an online repository (SynthMed) for synthetic medical images. Kitchen & Seah Kitchen and Seah (2017) used GANs to synthesize realistic prostate lesions in T2, ADC and Ktrans images, resembling the SPIE-AAPM-NCI ProstateX Challenge 2016 training data.
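The adversarial objective behind these synthesis methods can be written down in a few lines. A toy NumPy sketch (fixed, hypothetical generator and discriminator on 1D samples; a real setup would alternate gradient updates of the two networks, and the non-saturating generator loss shown here is a common variant of the original minimax formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def D(x, w=2.0, b=-1.0):
    # discriminator: probability that a sample comes from the training data
    return sigmoid(w * x + b)

def G(z, a=0.5, c=1.0):
    # generator: maps latent noise z to a sample
    return a * z + c

x_real = rng.normal(loc=1.0, scale=0.1, size=256)  # "training data"
x_fake = G(rng.normal(size=256))                   # generated samples

eps = 1e-12
# D minimizes this: classify real as 1 and fake as 0
d_loss = -np.mean(np.log(D(x_real) + eps) + np.log(1.0 - D(x_fake) + eps))
# G minimizes this: make D classify fakes as real
g_loss = -np.mean(np.log(D(x_fake) + eps))
print(d_loss > 0.0, g_loss > 0.0)
```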

Other applications are unsupervised synthesis of T1-weighted brain MRI using a GAN Bermudez et al. (2018); image synthesis with context-aware GANs Nie et al. (2017); synthesis of patient-specific transmission image for PET attenuation correction in PET/MR imaging of the brain using a CNN Spuhler et al. (2018); pseudo-CT synthesis for pelvis PET/MR attenuation correction using a Dixon-VIBE Deep Learning (DIVIDE) network Torrado-Carvajal et al. (2018); image synthesis with GANs for tissue recognition Zhang et al. (2018); synthetic data augmentation using a GAN for improved liver lesion classification Frid-Adar et al. (2018); and deep MR to CT synthesis using unpaired data Wolterink et al. (2017).

3.1.6 Image registration

Image registration (“the determination of a one-to-one mapping between the coordinates in one space and those in another, such that points in the two spaces that correspond to the same anatomical point are mapped to each other” Calvin R. Maurer (1993)) is an increasingly important field within MR image processing and analysis, as more and more complementary and multiparametric tissue information is collected in space and time within shorter acquisition times, at higher spatial (and temporal) resolutions, often longitudinally, and across patient groups, larger cohorts, or atlases. Traditionally, the tasks of image registration have been divided into dichotomies: intra- vs. inter-modality, intra- vs. inter-subject, rigid vs. deformable, geometry-based vs. intensity-based, and prospective vs. retrospective image registration. Mathematically, registration is a challenging mix of geometry (spatial transformations), analysis (similarity measures), optimization strategies, and numerical schemes. In prospective motion correction, real-time MR physics is also an important part of the picture Maclaren et al. (2013); Zaitsev et al. (2017). A wide range of methodological approaches have been developed and tested for various organs and applications, and on different hardware, e.g. GPUs Fluck et al. (2011); Shi et al. (2012); Eklund et al. (2013), as image registration is often computationally time-consuming: Maintz and Viergever (1998); Glocker et al. (2011); Sotiras et al. (2013); Oliveira and Tavares (2014); Saha et al. (2015); Viergever et al. (2016); Song et al. (2017); Ferrante and Paragios (2017); Keszei et al. (2017); Nag (2017), including “previous generation” artificial neural networks Jiang et al. (2010).
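At its core, intensity-based rigid registration combines a similarity measure with an optimization over transformations. The recipe can be sketched with normalized cross-correlation and a brute-force search over integer translations (toy NumPy example with a simulated shift; learned registration methods effectively amortize such an optimization into a single network evaluation):

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation between two images
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
true_shift = (3, -2)                                   # simulated misalignment
moving = np.roll(fixed, true_shift, axis=(0, 1))

# brute-force search over candidate translations, keeping the best NCC
candidates = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
best = max(candidates,
           key=lambda s: ncc(fixed, np.roll(moving, (-s[0], -s[1]), axis=(0, 1))))
print(best)   # recovered translation
```

Deformable registration generalizes the single translation to a dense displacement field, which is what makes the problem both expressive and computationally demanding.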

Recently, deep learning methods have been applied to image registration in order to improve accuracy and speed (e.g. Section 3.4 in Litjens et al. (2017)). For example: deformable image registration Wu et al. (2016); Yang et al. (2017); model-to-image registration Salehi et al. (2018); Toth et al. (2018); MRI-based attenuation correction for PET Han (2017); Liu et al. (2018); PET/MRI dose calculation Xiang et al. (2017); unsupervised end-to-end learning for deformable registration of 2D CT/MR images Shan et al. (2017); an unsupervised learning model for deformable, pairwise 3D medical image registration by Balakrishnan et al. Balakrishnan et al. (2018); and a deep learning framework for unsupervised affine and deformable image registration de Vos et al. (2018).

3.2 From image segmentation to diagnosis and prediction

We leave the lower-level applications of deep learning in MRI to consider higher-level (downstream) applications such as fast and accurate image segmentation, disease prediction in selected organs (brain, kidney, prostate, and spine) and content-based image retrieval, typically applied to reconstructed magnitude images. We have chosen to focus our overview on deep learning applications close to the MR physics and will be brief in the present section, even though the following applications are very interesting and clinically important.

Figure 4: Deep learning for MR image analysis in selected organs, partly from ongoing work at MMIV.

3.2.1 Image segmentation

Image segmentation, the holy grail of quantitative image analysis, is the process of partitioning an image into multiple regions that share similar attributes, enabling localization and quantification. (Segmentation is also crucial for functional imaging, enabling tissue physiology quantification with preservation of anatomical specificity.) It has an almost 50-year-long history, and has become the biggest target for deep learning approaches in medical imaging. The multispectral tissue classification report by Vannier et al. in 1985 Vannier et al. (1985), using statistical pattern recognition techniques (and satellite image processing software from NASA), represented one of the most seminal works leading up to today’s machine learning in medical imaging segmentation. In this early era, we also had the opportunity to contribute with supervised and unsupervised machine learning approaches for MR image segmentation and tissue classification Lundervold et al. (1988); Taxt et al. (1992); Taxt and Lundervold (1994); Lundervold and Storvik (1995). An impressive range of segmentation methods and approaches have been reported (especially for brain segmentation) and reviewed, e.g. Cabezas et al. (2011); García-Lorenzo et al. (2013); Smistad et al. (2015); Bernal et al. (2017); Dora et al. (2017); Torres et al. (2018); Bernal et al. (2018); Moccia et al. (2018); Makropoulos et al. (2018). MR image segmentation using deep learning approaches, typically CNNs, is now penetrating the whole field of applications. Examples include acute ischemic lesion segmentation in DWI Chen et al. (2017); brain tumor segmentation Havaei et al. (2017); segmentation of the striatum Choi and Jin (2016); segmentation of organs-at-risk in head and neck CT images Ibragimov and Xing (2017); fully automated segmentation of polycystic kidneys Kline et al. (2017); deformable segmentation of the prostate Guo et al. (2016); and spine segmentation with 3D multiscale CNNs Li et al. (2018).
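Segmentation results like those above are typically scored with the Dice coefficient, the standard overlap metric between a predicted and a reference region. A minimal sketch (toy binary masks):

```python
import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((10, 10)); truth[2:8, 2:8] = 1   # 36-voxel reference region
pred = np.zeros((10, 10)); pred[3:8, 3:8] = 1     # 25-voxel prediction, fully inside

print(round(float(dice(pred, truth)), 4))          # 2*25 / (25 + 36) ≈ 0.8197
```

A Dice score of 1 indicates perfect overlap, 0 none; a differentiable "soft Dice" variant of this formula is also widely used directly as a training loss for segmentation networks.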

See Litjens et al. (2017) and Ching et al. (2018) for more comprehensive lists.
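Segmentation performance in studies like the ones above is typically reported using overlap metrics, most commonly the Dice coefficient. As a concrete illustration, a minimal NumPy implementation on toy binary masks (not real data):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice overlap between a binary segmentation and its ground truth:
    2 |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Two toy 2D masks of 16 pixels each, overlapping in a 2x2 corner
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[4:8, 4:8] = 1
print(dice_coefficient(a, b))  # 2*4 / (16 + 16) = 0.25
```

The same formula applies unchanged to 3D volumes, since the computation is purely voxel-wise.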

3.2.2 Diagnosis and prediction

A presumably complete list of papers up to 2017 using deep learning techniques for brain image analysis is provided as Table 1 in Litjens et al. (2017). In the following, we add some more recent work on organ-specific deep learning using MRI, restricting ourselves to the brain, kidney, prostate and spine.

Brain extraction Kleesiek et al. (2016) A 3D CNN for skull stripping
Functional connectomes Li et al. (2018) Transfer learning approach to enhance deep neural network classification of brain functional connectomes
Zeng et al. (2018) Multisite diagnostic classification of schizophrenia using discriminant deep learning with functional connectivity MRI
Structural connectomes Wasserthal et al. (2018) A convolutional neural network-based approach that directly segments tracts in the field of fiber orientation distribution function (fODF) peaks without using tractography, image registration or parcellation. Tested on 105 subjects from the Human Connectome Project
Brain age Cole et al. (2017) Chronological age prediction from raw brain T1-MRI data, also testing the heritability of brain-predicted age using a sample of 62 monozygotic and dizygotic twins
Alzheimer’s disease Liu et al. (2018) Landmark-based deep multi-instance learning evaluated on 1526 subjects from three public datasets (ADNI-1, ADNI-2, MIRIAD)
Islam and Zhang (2018) Identify different stages of AD
Lu et al. (2018) Multimodal and multiscale deep neural networks for the early diagnosis of AD using structural MR and FDG-PET images
Vascular lesions Moeskops et al. (2018) Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI
Identification of MRI contrast Pizarro et al. (2018) Using deep learning algorithms to automatically identify the brain MRI contrast, with implications for managing large databases
Meningioma Laukamp et al. (2018) Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI
Glioma Perkuhn et al. (2018) Glioblastoma segmentation using heterogeneous MRI data from clinical routine
AlBadawy et al. (2018) Deep learning for segmentation of brain tumors and impact of cross-institutional training and testing
Cui et al. (2018) Automatic semantic segmentation of brain gliomas from MRI using a deep cascaded neural network
Hoseini et al. (2018) AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation of glioblastomas (BRATS)
Multiple sclerosis Yoo et al. (2018) Deep learning of joint myelin and T1w MRI features in normal-appearing brain tissue to distinguish between multiple sclerosis patients and healthy controls
Abdominal organs Bobo et al. (2018) CNNs to improve abdominal organ segmentation, including left kidney, right kidney, liver, spleen, and stomach in T-weighted MR images
Cyst segmentation Kline et al. (2017) An artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys
Renal transplant Shehata et al. (2018) A deep-learning-based classifier with stacked non-negative constrained autoencoders to distinguish between rejected and non-rejected renal transplants in DWI recordings
Cancer (PCa) Cheng et al. (2017) Proposed a method for end-to-end prostate segmentation by integrating holistically nested (image-to-image) edge detection with fully convolutional networks. Their nested networks automatically learn a hierarchical representation that can improve prostate boundary detection. Obtained very good results (Dice coefficient, 5-fold cross-validation) on MRI scans from 250 patients
Ishioka et al. (2018) Computer-aided diagnosis with a CNN, deciding ‘cancer’ or ‘no cancer’, trained on data from 301 patients with a prostate-specific antigen level of ng/mL who underwent MRI and extended systematic prostate biopsy with or without MRI-targeted biopsy
Song et al. (2018) Automatic approach based on deep CNN, inspired from VGG, to classify PCa and noncancerous tissues with multiparametric MRI using data from the PROSTATEx database
Wang et al. (2017) A deep CNN and a non-deep learning method using feature detection (the scale-invariant feature transform and the bag-of-words model, a representative method for image recognition and analysis) were used to distinguish pathologically confirmed PCa patients from patients with benign prostate conditions (prostatitis or benign prostatic hyperplasia) in a collection of 172 patients with more than 2500 morphologic 2D T-w MR images
Yang et al. (2017) Designed a system which can concurrently identify the presence of PCa in an image and localize lesions based on deep CNN features (co-trained CNNs consisting of two parallel convolutional networks for ADC and T-w images respectively) and a single-stage SVM classifier for automated detection of PCa in multiparametric MRI. Evaluated on a dataset of 160 patients
Le et al. (2017) Designed and tested multimodal CNNs, using clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches. Carefully investigated three critical factors which could greatly affect the performance of their multimodal CNNs but had not been carefully studied previously: (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal mp-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis?
Vertebrae labeling Forsberg et al. (2017) Designed a CNN for detection and labeling of vertebrae in MR images with clinical annotations as training data
Intervertebral disc localization Li et al. (2018) 3D multi-scale fully connected CNNs with random modality voxel dropout learning for intervertebral disc localization and segmentation from multi-modality MR images
Disc-level labeling, spinal stenosis grading Lu et al. (2018) CNN model denoted DeepSPINE, having a U-Net architecture combined with a spine-curve fitting method for automated lumbar vertebral segmentation, disc-level designation, and spinal stenosis grading with a natural language processing scheme
Lumbar neural foraminal stenosis (LNFS) Han et al. (2018) Addressed the challenge of automated pathogenesis-based diagnosis, simultaneously localizing and grading multiple spinal structures (neural foramina, vertebrae, intervertebral discs) to diagnose LNFS and discover pathogenic factors. Proposed a deep multiscale multitask learning network (DMML-Net) integrating multiscale multi-output learning and multitask regression learning into a fully convolutional network, where (i) DMML-Net merges semantic representations to reinforce the salience of numerous target organs, (ii) DMML-Net extends multiscale convolutional layers as multiple output layers to boost scale-invariance for various organs, and (iii) DMML-Net joins the multitask regression module and the multitask loss module to combine the mutual benefit between tasks
Spondylitis vs tuberculosis Kim et al. (2018) CNN model for differentiating between tuberculous and pyogenic spondylitis in MR images. Compared their CNN performance with that of three skilled radiologists using spine MRIs from 80 patients
Metastasis Wang et al. (2017) A multi-resolution approach for spinal metastasis detection using deep Siamese neural networks comprising three identical subnetworks for multi-resolution analysis and detection. Detection performance was evaluated on a set of 26 cases using a free-response receiver operating characteristic analysis (observer is free to mark and rate as many suspicious regions as are considered clinically reportable)
Table 2: A short list of deep learning applications per organ, task, reference and description.

3.3 Content-based image retrieval

The objective of content-based image retrieval (CBIR) in radiology is to provide medical cases similar to a given image in order to assist radiologists in the decision-making process. It typically involves large case databases, clever image representations and lesion annotations, and algorithms that can quickly and reliably match and retrieve the most similar images and their annotations from the case database. CBIR has been an active area of research in medical imaging for many years, addressing a wide range of applications, imaging modalities, organs, and methodological approaches, e.g. Pilevar (2011); Kumar et al. (2013); Faria et al. (2015); Kumar et al. (2015); Bedo et al. (2016); Muramatsu (2018); Spanier et al. (2018). At a much larger scale, outside the medical field, deep learning techniques power image retrieval e.g. at Microsoft, Apple, Facebook, and Google (reverse image search, indexing more than 30 billion images), and others; see e.g. Gordo et al. (????); Liu et al. (2017); Han et al. (2018); Piplani and Bamman (2018); Yang et al. (2018) and their accompanying code repositories. One of the first applications of deep learning for CBIR in the medical domain came in 2015, when Sklan et al. (2015) trained a CNN to perform CBIR with more than one million random MR and CT images, with disappointing results (true positive rate of 20%) on their independent test set of 2100 labeled images. Medical CBIR is now, however, dominated by deep learning algorithms Bressan et al. (2018); Qayyum et al. (2017); Chung and Weng (2017). As a related example, Pizarro et al. (2018) developed a deep learning algorithm with a CNN architecture to automatically infer the contrast of MRI scans based on the image intensity of multiple slices.
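The retrieval step underlying most deep learning-based CBIR systems can be reduced to nearest-neighbor search over learned feature vectors. A minimal sketch (the 256-dimensional features here are random placeholders standing in for embeddings taken from a late layer of a trained CNN):

```python
import numpy as np

rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 256))              # 1000 cases, 256-d features
query = database[42] + 0.01 * rng.standard_normal(256)   # near-duplicate of case 42

def top_k(query, database, k=5):
    """Return the indices and cosine similarities of the k closest cases."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = db @ q
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

indices, scores = top_k(query, database)
print(indices[0])  # → 42, the most similar case in the database
```

In a real system the database rows would be CNN embeddings of the case images, and the brute-force matrix product would be replaced by an approximate nearest-neighbor index for scale.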

Recently, deep learning methods have also been used for automated generation of radiology reports, typically incorporating long short-term memory (LSTM) networks to generate the textual paragraphs Jing et al. (2017); Li et al. (2018); Moradi et al. (2018); Zhang et al. (????), and for identifying findings in radiology reports Pons et al. (2016); Zech et al. (2018); Goff and Loehfelm (2018).

4 Open science and reproducible research in machine learning for medical imaging

Machine learning is moving at breakneck speed, too fast for the standard peer-review process to keep up. Many of the most celebrated and impactful papers in machine learning from the past few years are only available as preprints, or were published in conference proceedings long after their results were well-known and incorporated in the research of others. Bypassing peer review has some downsides, of course, but these are somewhat mitigated by researchers’ willingness to share code and data. (In the spirit of sharing and open science, we have created a GitHub repository to accompany our article.)

Most of the main new ideas and methods are posted to the arXiv preprint server, with the accompanying code shared on the GitHub platform. The data sets used are often openly available through various repositories. This, in addition to the many excellent online educational resources, makes it easy to get started in the field. Select a problem you find interesting based on openly available data, a method described in a preprint, and an implementation uploaded to GitHub: this forms a good starting point for an interesting machine learning project.

Another interesting aspect of modern machine learning and data science is the prevalence of competitions, with the annual ImageNet ILSVRC competition as the main driver of progress in deep learning for computer vision since 2012. Each competition typically draws a large number of participants, and the top results often push the state of the art to a new level. In addition to inspiring new ideas, competitions also provide natural entry points to modern machine learning. It is interesting to note how deep learning-based models completely dominate the leaderboards of essentially all image-based competitions; other machine learning models, and non-machine learning-based techniques, have largely been outclassed.

What’s true about the openness of machine learning in general is increasingly true also for the sub-field of machine learning for medical image analysis. We’ve listed a few examples of openly available implementations, data sets and challenges in tables 3, 4 and 5 below.

Summary Reference Implementation
NiftyNet. An open source convolutional neural networks platform for medical image analysis and image-guided therapy Gibson et al. (2018); Li et al. (2017)
DLTK. State of the art reference implementations for deep learning on medical images Pawlowski et al. (2017)
DeepMedic Kamnitsas et al. (2017)
U-Net: Convolutional Networks for Biomedical Image Segmentation Ronneberger et al. (2015)
V-net Milletari et al. (2016)
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust Semantic Pixel-Wise Labelling Badrinarayanan et al. (2017)
Brain lesion synthesis using GANs Shin et al. (2018)
GANCS: Compressed Sensing MRI based on Deep Generative Adversarial Network Mardani et al. (2017)
Deep MRI Reconstruction Schlemper et al. (2018)
Graph Convolutional Networks for brain analysis in populations, combining imaging and non-imaging data Parisot et al. (2017)
Table 3: A short list of openly available code for ML in medical imaging
Name Summary Link
OpenNeuro A free and open platform for analyzing and sharing neuroimaging data. Contains more than 100 data sets. https://openneuro.org (data can also be downloaded from the AWS S3 Bucket)
UK Biobank Health data from half a million participants. Contains MRI images from 15,000 participants, aiming to reach 100,000.
TCIA The cancer imaging archive hosts a large archive of medical images of cancer accessible for public download. Currently contains images from 14,355 patients across 77 collections.
ABIDE The autism brain imaging data exchange. Contains 1114 datasets from 521 individuals with Autism Spectrum Disorder and 593 controls.
ADNI The Alzheimer’s disease neuroimaging initiative. Contains image data from almost 2000 participants (controls, early MCI, MCI, late MCI, AD)
Table 4: A short list of medical imaging data sets and repositories
Name Summary Link
Grand-Challenges Grand challenges in biomedical image analysis. Hosts and lists a large number of competitions
RSNA Pneumonia Detection Challenge Automatically locate lung opacities on chest radiographs
HVSMR 2016 Segment the blood pool and myocardium from a 3D cardiovascular magnetic resonance image
ISLES 2018 Ischemic Stroke Lesion Segmentation 2018. The goal is to segment stroke lesions based on acute CT perfusion data.
BraTS 2018 Multimodal Brain Tumor Segmentation. The goal is to segment brain tumors in multimodal MRI scans.
CAMELYON17 The goal is to develop algorithms for automated detection and classification of breast cancer metastases in whole-slide images of histological lymph node sections.
ISIC 2018 Skin Lesion Analysis Towards Melanoma Detection
Kaggle’s 2018 Data Science Bowl Spot Nuclei. Speed Cures.
Kaggle’s 2017 Data Science Bowl Turning Machine Intelligence Against Lung Cancer
Kaggle’s 2016 Data Science Bowl Transforming How We Diagnose Heart Disease
MURA Determine whether a bone X-ray is normal or abnormal
Table 5: A short list of medical imaging competitions

5 Challenges, limitations and future perspectives

It is clear that deep neural networks are very useful when one is tasked with producing accurate decisions based on complicated data sets. But they come with significant challenges and limitations that you either have to accept or try to overcome. Some are general: from technical challenges related to the lack of mathematical and theoretical underpinnings of many central deep learning models and techniques, and the resulting difficulty in deciding exactly what it is that makes one model better than another, to societal challenges related to the maximization and spread of the technological benefits Marcus (2018); Lipton and Steinhardt (2018), and problems related to the tremendous amounts of hype and excitement (see e.g. Lipton’s “Machine Learning: The Opportunity and the Opportunists” and Jordan’s “Artificial Intelligence – The Revolution Hasn’t Happened Yet”). Others are more domain-specific.

In deep learning for standard computer vision tasks, like object recognition and localization, powerful models and a set of best practices have been developed over the last few years. The pace of development is still incredibly high, but certain things seem to be settled, at least momentarily. Using the basic building blocks described above, placed according to the ideas behind, say, ResNet and SENet, will easily result in close to state-of-the-art performance on two-dimensional object detection, image classification and segmentation tasks.

However, the story for deep learning in medical imaging is not quite as settled. One issue is that medical images are often three-dimensional, and three-dimensional convolutional neural networks are not quite as well-developed as their 2D counterparts. One quickly meets challenges related to memory and compute consumption when using CNNs with higher-dimensional image data, challenges that researchers are tackling with a variety of approaches (treating 3D volumes as stacks of 2D images, patch- or segment-based training and inference, downscaling, etc.). It is clear that the ideas behind state-of-the-art two-dimensional CNNs can be lifted to three dimensions, but also that adding a third spatial dimension results in additional constraints. Other important challenges are related to data, trust, interpretability, workflow integration, and regulations, as discussed below.
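The patch-based strategy mentioned above is simple to sketch: run the network on overlapping sub-volumes that fit in memory, then stitch and average the outputs. A toy NumPy version, with a thresholding lambda standing in for the actual network:

```python
import numpy as np

def predict_patchwise(volume, patch=32, stride=16, net=lambda p: p > p.mean()):
    """Apply `net` to overlapping 3D patches and average the overlapping outputs."""
    out = np.zeros(volume.shape)
    counts = np.zeros(volume.shape)
    zs, ys, xs = volume.shape
    for z in range(0, zs - patch + 1, stride):
        for y in range(0, ys - patch + 1, stride):
            for x in range(0, xs - patch + 1, stride):
                sub = volume[z:z+patch, y:y+patch, x:x+patch]
                out[z:z+patch, y:y+patch, x:x+patch] += net(sub)
                counts[z:z+patch, y:y+patch, x:x+patch] += 1
    return out / np.maximum(counts, 1)   # average where patches overlap

volume = np.random.default_rng(1).standard_normal((64, 64, 64))
prediction = predict_patchwise(volume)
print(prediction.shape)  # (64, 64, 64)
```

The overlap-averaging also reduces border artifacts at patch edges, one of the known downsides of patch-based inference.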

5.1 Data

This is a crucially important obstacle for deep neural networks, especially in medical data analysis. When deploying deep neural networks, or any other machine learning model, one is instantly faced with challenges related to data access, privacy issues, data protection, and more.

As privacy and data protection are often requirements when dealing with medical data, new techniques for training models without exposing the underlying training data to the user of the model are necessary. It is not enough to merely restrict access to the training set used to construct the model, as it is easy to use the model itself to discover details about the training set Zhang et al. (2016). Even hiding the model and only exposing a prediction interface would still leave it open to attack, for example in the form of model-inversion Fredrikson et al. (2015) and membership attacks Shokri et al. (2017). Most current work on deep learning for medical data analysis uses either open, anonymized data sets (such as those in Table 4) or locally obtained anonymized research data, making these issues less relevant. However, the general deep learning community is focusing a lot of attention on the issue of privacy, and new techniques and frameworks for federated learning McMahan et al. (2017) and differential privacy Papernot et al. (2016, 2018); McMahan et al. (2018) are rapidly improving. There are a few examples of these ideas entering the medical machine learning community, as in Chang et al. (2018), where the distribution of deep learning models among several medical institutions was investigated, but without considering the above privacy issues. As machine learning systems in medicine grow to larger scales, perhaps even including computation and learning on the “edge”, federated learning and differential privacy will likely become the focus of much research in our community.
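The core idea of federated averaging (McMahan et al., 2017) can be sketched in a few lines: each institution updates a model copy on its local data, and only the weights, never the data, are sent to a server that averages them. A toy illustration with logistic regression and three simulated "hospitals":

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """A few steps of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three sites, each holding private data drawn from the same distribution
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.standard_normal((200, 2))
    y = (X @ true_w + 0.1 * rng.standard_normal(200) > 0).astype(float)
    datasets.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                                        # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    global_w = np.mean(local_ws, axis=0)                   # server-side averaging

print(np.sign(global_w))  # recovers the sign pattern of true_w
```

Note that federated averaging alone does not give formal privacy guarantees; in practice it is combined with differential privacy mechanisms such as gradient clipping and noise addition.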

If you are able to surmount these obstacles, you will be confronted with deep neural networks’ insatiable appetite for training data. These are very data-inefficient models, requiring large numbers of training samples before they can produce anything remotely useful, and labeled training data is typically both expensive and difficult to produce. In addition, the training data has to be representative of the data the network will meet in the future. If the training samples come from a data distribution that is very different from the one met in the real world, the network’s generalization performance will be lower than expected; see Zech et al. (2018) for a recent exploration of this issue. Considering the large difference between the high-quality images one typically works with in research and the messiness of the real, clinical world, this can be a major obstacle when putting deep learning systems into production.

Luckily, there are ways to alleviate these problems somewhat. A widely used technique is transfer learning, in the form of pre-training and fine-tuning: first you train a network to perform a task for which there is an abundance of data, and then you copy the weights of this network to a network designed for the task at hand. For two-dimensional images one will almost always use a network that has been pre-trained on the ImageNet data set. The basic features in the earlier layers of a network trained on this data set typically retain their usefulness in any other image-related task (or at least form a better starting point than random initialization of the weights, which is the alternative). Starting from weights tuned on a larger training data set can also make the network more robust, and focusing the weight updates during training on the later layers requires less data than making significant updates throughout the entire network. One can also do inter-organ transfer learning in 3D, an idea we have used for kidney segmentation, where pre-training a network to do brain segmentation decreased the number of annotated kidneys needed to achieve good segmentation performance Lundervold et al. (2017). The idea of pre-training networks is not restricted to images: pre-training entire models has recently been demonstrated to greatly impact the performance of natural language processing systems Peters et al. (2018); Howard and Ruder (2018); Radford et al. (2018).
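The "freeze the early layers, train only the head" recipe can be illustrated without any deep learning framework. In this toy sketch, a fixed random projection plus ReLU stands in for a frozen pre-trained feature extractor, and only the final linear classifier is trained on the small labelled set:

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((2, 32))        # stand-in for frozen "early layers"

def features(X):
    """Frozen feature extractor: a fixed projection + ReLU, never updated."""
    return np.maximum(X @ W_frozen, 0)

# Small labelled training set for the new task
true_w = np.array([1.5, -2.0])
X = rng.standard_normal((100, 2))
y = (X @ true_w > 0).astype(float)

head = np.zeros(32)                             # only these weights are trained
for _ in range(300):
    F = features(X)
    p = 1 / (1 + np.exp(-F @ head))
    head -= 0.1 * F.T @ (p - y) / len(y)        # logistic-regression update

accuracy = np.mean((1 / (1 + np.exp(-features(X) @ head)) > 0.5) == y)
print(accuracy)  # high training accuracy from training the head alone
```

The same structure applies when fine-tuning a real CNN: the pre-trained convolutional layers are held fixed (or given a very small learning rate) while the task-specific classification layers are trained on the limited medical data.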

Another widely used technique is augmenting the training data set by applying transformations that preserve the labels, such as rotations, scalings and intensity shifts of images, or more advanced data augmentation techniques like anatomically sound deformations, or other data set-specific operations (for example, in our work on kidney segmentation from DCE-MRI, we used image registration to propagate labels through a time course of images Lundervold et al. (2018)). Data synthesis, as in Shin et al. (2018), is another interesting approach.
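A minimal sketch of such label-preserving augmentations, here flips, 90-degree rotations and intensity perturbations applied to a 2D image array, each call producing a new training sample from the same image:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly transformed, label-preserving copy of `image`."""
    img = image.copy()
    if rng.random() < 0.5:
        img = np.fliplr(img)                      # horizontal flip
    img = np.rot90(img, k=rng.integers(0, 4))     # random 90-degree rotation
    img = img + rng.normal(0, 0.05)               # global intensity shift
    img = img * rng.uniform(0.9, 1.1)             # intensity scaling
    return img

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
augmented = [augment(image, rng) for _ in range(8)]  # 8 variants of one image
print(len(augmented), augmented[0].shape)
```

For segmentation tasks, spatial transforms must be applied identically to the image and its label mask, while intensity transforms are applied to the image only.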

In short, as expert annotators are expensive, or simply not available, spending large computational resources to expand your labelled training data set, e.g. indirectly through transfer learning or directly through data augmentation, is typically worthwhile. But whatever you do, the way current deep neural networks are constructed and trained results in significant data size requirements. There are new ways of constructing more data-efficient deep neural networks on the horizon, for example by encoding more domain-specific elements in the neural network structure as in the capsule systems of Hinton et al. (2011); Sabour et al. (2017), which adds viewpoint invariance. It is also possible to add attention mechanisms to neural networks Mnih et al. (2014); Xu et al. (2015), enabling them to focus their resources on the most informative components of each layer input.

However, the networks that are most frequently used, and with the best raw performance, remain the data-hungry standard deep neural networks.

5.2 Interpretability, trust and safety

As deep neural networks rely on complicated interconnected hierarchical representations of the training data to produce their predictions, interpreting these predictions becomes very difficult. This is the “black box” problem of deep neural networks Castelvecchi (2016): they are capable of producing extremely accurate predictions, but how can you trust predictions based on features you cannot understand? Considerable effort goes into developing new ways to deal with this problem, including DARPA launching a whole program, “Explainable AI”, dedicated to this issue, much research into enhancing interpretability Olah et al. (2018); Montavon et al. (2017), and new ways to measure sensitivity and visualize features Zeiler and Fergus (2014); Yosinski et al. (2015); Olah et al. (2017); Hohman et al. (2018); Bach et al. (2015).
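One of the simplest such sensitivity techniques, occlusion analysis in the spirit of Zeiler and Fergus (2014), slides a blank patch over the image and records how much the model's score drops; regions whose occlusion hurts the score most are the ones the model relies on. A toy version with a dummy scoring function standing in for a trained network:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """Heatmap of score drops when each patch-sized region is zeroed out."""
    base = score_fn(image)
    heat = np.zeros(image.shape)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y+patch, x:x+patch] = 0.0
            heat[y:y+patch, x:x+patch] = base - score_fn(occluded)
    return heat

# Dummy "model" that only looks at the top-left corner of the image
score = lambda img: img[:8, :8].sum()
image = np.ones((32, 32))
heat = occlusion_map(image, score)
print(np.unravel_index(heat.argmax(), heat.shape))  # → (0, 0): top-left matters
```

For a real CNN, `score_fn` would be the predicted probability of the class of interest, and the resulting heatmap can be overlaid on the input image for visual inspection.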

Another way to increase their trustworthiness is to make them produce robust uncertainty estimates in addition to predictions. The field of Bayesian deep learning aims to combine deep learning with Bayesian approaches to uncertainty. The ideas date back to the early 1990s Neal (1995); MacKay (1992); Dayan et al. (1995), but the field has recently seen renewed interest from the machine learning community at large, as new ways of computing uncertainty estimates from state-of-the-art deep learning models have been developed Murray (2018); Gal (2016); Li and Gal (2017). In addition to producing valuable uncertainty measures Leibig et al. (2017); Kendall et al. (2015); Wickstrøm et al. (2018), these techniques can also lessen deep neural networks’ susceptibility to adversarial attacks Li and Gal (2017); Feinman et al. (2017).
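A popular practical instance of these ideas is Monte Carlo dropout (Gal, 2016): keeping dropout active at test time and averaging many stochastic forward passes yields a predictive mean plus a spread that can serve as an uncertainty estimate. A toy NumPy sketch with a small random network standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 64))   # stand-ins for trained weights
w2 = rng.standard_normal(64)

def forward(x, drop=0.5):
    """One stochastic forward pass; dropout stays on at inference time."""
    h = np.maximum(x @ W1, 0)
    mask = rng.random(64) > drop
    h = h * mask / (1 - drop)        # inverted dropout scaling
    return 1 / (1 + np.exp(-h @ w2))

x = np.array([0.5, -1.0])
samples = np.array([forward(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()
print(f"prediction {mean:.2f} +/- {std:.2f}")
```

Inputs far from the training distribution tend to produce a larger spread across passes, which is exactly the signal used e.g. by Leibig et al. (2017) to flag unreliable predictions for referral.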

5.3 Workflow integration, regulations

Another stumbling block for the successful incorporation of deep learning methods is workflow integration. It is possible to end up developing clever machine learning systems for clinical use that turn out to be practically useless for actual clinicians. Attempting to augment already established procedures necessitates knowledge of the entire workflow. Involving the end users in the process of creating and evaluating systems can make this less of an issue, and can also increase the end users’ trust in the systems, as one can establish a feedback loop during the development process (this is the approach we have taken at our MMIV centre, located inside the Department of Radiology). But still, even if there is interest on the “ground floor” and one is able to get prototype systems into the hands of clinicians, there are many higher-ups to convince and regulatory, ethical and legal hurdles to overcome.

5.4 Perspectives and future expectations

Deep learning in medical data analysis is here to stay. Even though there are many challenges associated with the introduction of deep learning in clinical settings, the methods produce results that are too valuable to discard, as illustrated by the tremendous number of high-impact publications in top journals dealing with deep learning in medical imaging (for example Ganapathy et al. (2018); Kermany et al. (2018); Poplin et al. (2018a); De Fauw et al. (2018); Hinton (2018); Liu et al. (2018); Rieger et al. (2018); Chen et al. (2018); Zhu et al. (2018); Wasserthal et al. (2018); Yoo et al. (2018), all published in 2018). As machine learning researchers and practitioners gain more experience, it will become easier to classify problems according to which solution approach is the most reasonable: (i) best approached using deep learning techniques end-to-end, (ii) best tackled by a combination of deep learning and other techniques, or (iii) requiring no deep learning component at all.

Beyond the application of machine learning in medical imaging, we believe that the attention in the medical community can also be leveraged to strengthen the general computational mindset among medical researchers and practitioners, mainstreaming the field of computational medicine (in line with the ideas of the convergence of disciplines and the “future of health”, as described in Sharp and Hockfield (2017)). Once enough high-impact software systems based on mathematics, computer science, physics and engineering enter the daily workflow in the clinic, the acceptance of other such systems will likely grow. Access to bio-sensors and (edge) computing on wearable devices for monitoring disease or lifestyle, plus an ecosystem of machine learning and other computational medicine-based technologies, will then likely facilitate the transition to a new medical paradigm that is predictive, preventive, personalized, and participatory: P4 medicine Hood and Flores (2012).


Acknowledgments

We thank Renate Grüner for useful discussions. The anonymous reviewers gave us excellent constructive feedback that led to several improvements throughout the article. Our work was financially supported by the Bergen Research Foundation through the project “Computational medical imaging and machine learning – methods, infrastructure and applications”.


References

  • Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, in: F. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 25, Curran Associates, Inc., 2012, pp. 1097–1105.
  • Peters et al. (2018) M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, L. Zettlemoyer, Deep contextualized word representations, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pp. 2227–2237.
  • Howard and Ruder (2018) J. Howard, S. Ruder, Universal language model fine-tuning for text classification, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 328–339.
  • Radford et al. (2018) A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, Improving language understanding by generative pre-training (2018).
  • Xiong et al. (2018) W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, A. Stolcke, The Microsoft 2017 Conversational speech recognition system, in: Proc. Speech and Signal Processing (ICASSP) 2018 IEEE Int. Conf. Acoustics, pp. 5934–5938.
  • van den Oord et al. (2016) A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, K. Kavukcuoglu, WaveNet: A generative model for raw audio, arXiv preprint arXiv:1609.03499v2 (2016).
  • Guo and Berkhahn (2016) C. Guo, F. Berkhahn, Entity embeddings of categorical variables, arXiv preprint arXiv:1604.06737 (2016).
  • De Brébisson et al. (2015) A. De Brébisson, É. Simon, A. Auvolat, P. Vincent, Y. Bengio, Artificial neural networks applied to taxi destination prediction, arXiv preprint arXiv:1508.00021 (2015).
  • George and Huerta (2018) D. George, E. Huerta, Deep learning for real-time gravitational wave detection and parameter estimation: Results with advanced LIGO data, Physics Letters B 778 (2018) 64–70.
  • George et al. (2018) D. George, H. Shen, E. Huerta, Classification and unsupervised clustering of LIGO data with deep transfer learning, Physical Review D 97 (2018) 101501.
  • Shen et al. (2017) H. Shen, D. George, E. Huerta, Z. Zhao, Denoising gravitational waves using deep learning with recurrent denoising autoencoders, arXiv preprint arXiv:1711.09919 (2017).
  • Raissi and Karniadakis (2018) M. Raissi, G. E. Karniadakis, Hidden physics models: Machine learning of nonlinear partial differential equations, Journal of Computational Physics 357 (2018) 125–141.
  • Karpatne et al. (2017) A. Karpatne, G. Atluri, J. H. Faghmous, M. Steinbach, A. Banerjee, A. Ganguly, S. Shekhar, N. Samatova, V. Kumar, Theory-guided data science: A new paradigm for scientific discovery from data, IEEE Transactions on Knowledge and Data Engineering 29 (2017) 2318–2331.
  • Gartner (2018) Gartner, Top Strategic Technology Trends for 2018, 2018.
  • Ravi et al. (2017) D. Ravi, C. Wong, F. Deligianni, M. Berthelot, J. Andreu-Perez, B. Lo, G.-Z. Yang, Deep learning for health informatics., IEEE journal of biomedical and health informatics 21 (2017) 4–21.
  • Ganapathy et al. (2018) N. Ganapathy, R. Swaminathan, T. M. Deserno, Deep learning on 1-D biosignals: a taxonomy-based survey, Yearbook of medical informatics 27 (2018) 98–109.
  • Kuhlmann et al. (2018) L. Kuhlmann, K. Lehnertz, M. P. Richardson, B. Schelter, H. P. Zaveri, Seizure prediction - ready for a new era, Nature Reviews Neurology (2018).
  • Kwon et al. (2018) J.-M. Kwon, Y. Lee, Y. Lee, S. Lee, J. Park, An algorithm based on deep learning for predicting in-hospital cardiac arrest, Journal of the American Heart Association 7 (2018).
  • Shin et al. (2016) H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, R. M. Summers, Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning., IEEE transactions on medical imaging 35 (2016) 1285–1298.
  • Kermany et al. (2018) D. S. Kermany, M. Goldbaum, W. Cai, C. C. S. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, J. Dong, M. K. Prasadha, J. Pei, M. Y. L. Ting, J. Zhu, C. Li, S. Hewett, J. Dong, I. Ziyar, A. Shi, R. Zhang, L. Zheng, R. Hou, W. Shi, X. Fu, Y. Duan, V. A. N. Huu, C. Wen, E. D. Zhang, C. L. Zhang, O. Li, X. Wang, M. A. Singer, X. Sun, J. Xu, A. Tafreshi, M. A. Lewis, H. Xia, K. Zhang, Identifying medical diagnoses and treatable diseases by image-based deep learning, Cell 172 (2018) 1122–1131.e9.
  • Katzman et al. (2018) J. L. Katzman, U. Shaham, A. Cloninger, J. Bates, T. Jiang, Y. Kluger, DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network, BMC medical research methodology 18 (2018) 24.
  • Jiménez et al. (2018) J. Jiménez, M. Škalič, G. Martínez-Rosell, G. De Fabritiis, KDEEP: Protein-Ligand absolute binding affinity prediction via 3D-Convolutional Neural Networks, Journal of Chemical Information and Modeling 58 (2018) 287–296.
  • Kalinin et al. (2018) A. A. Kalinin, G. A. Higgins, N. Reamaroon, S. Soroushmehr, A. Allyn-Feuer, I. D. Dinov, K. Najarian, B. D. Athey, Deep learning in pharmacogenomics: from gene regulation to patient stratification, Pharmacogenomics 19 (2018) 629–650.
  • Jiang et al. (2018) S. Jiang, K.-S. Chin, K. L. Tsui, A universal deep learning approach for modeling the flow of patients under different severities, Computer methods and programs in biomedicine 154 (2018) 191–203.
  • Vranas et al. (2017) K. C. Vranas, J. K. Jopling, T. E. Sweeney, M. C. Ramsey, A. S. Milstein, C. G. Slatore, G. J. Escobar, V. X. Liu, Identifying distinct subgroups of ICU patients: A machine learning approach., Critical care medicine 45 (2017) 1607–1615.
  • Rajkomar et al. (2018) A. Rajkomar, E. Oren, K. Chen, A. M. Dai, N. Hajaj, M. Hardt, P. J. Liu, X. Liu, J. Marcus, M. Sun, et al., Scalable and accurate deep learning with electronic health records, npj Digital Medicine 1 (2018) 18.
  • Shickel et al. (2017) B. Shickel, P. J. Tighe, A. Bihorac, P. Rashidi, Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis, IEEE Journal of Biomedical and Health Informatics (2017).
  • Gulshan et al. (2016) V. Gulshan, L. Peng, M. Coram, M. C. Stumpe, D. Wu, A. Narayanaswamy, S. Venugopalan, K. Widner, T. Madams, J. Cuadros, et al., Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs, JAMA 316 (2016) 2402–2410.
  • Poplin et al. (2018a) R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. McConnell, G. Corrado, L. Peng, D. Webster, Predicting Cardiovascular Risk Factors in Retinal Fundus Photographs using Deep Learning, Nature Biomedical Engineering (2018a).
  • Poplin et al. (2018b) R. Poplin, P.-C. Chang, D. Alexander, S. Schwartz, T. Colthurst, A. Ku, D. Newburger, J. Dijamco, N. Nguyen, P. T. Afshar, S. S. Gross, L. Dorfman, C. Y. McLean, M. A. DePristo, A universal SNP and small-indel variant caller using deep neural networks, Nature Biotechnology (2018b).
  • De Fauw et al. (2018) J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N. Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O’Donoghue, D. Visentin, et al., Clinically applicable deep learning for diagnosis and referral in retinal disease, Nature medicine 24 (2018) 1342.
  • Qin et al. (2018) Y. Qin, K. Kamnitsas, S. Ancha, J. Nanavati, G. Cottrell, A. Criminisi, A. Nori, Autofocus Layer for Semantic Segmentation, arXiv preprint arXiv:1805.08403 (2018).
  • Kamnitsas et al. (2017) K. Kamnitsas, C. Baumgartner, C. Ledig, V. Newcombe, J. Simpson, A. Kane, D. Menon, A. Nori, A. Criminisi, D. Rueckert, et al., Unsupervised domain adaptation in brain lesion segmentation with adversarial networks, in: International Conference on Information Processing in Medical Imaging, Springer, pp. 597–609.
  • Xiao et al. (2018) C. Xiao, E. Choi, J. Sun, Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review, Journal of the American Medical Informatics Association (2018).
  • Silver et al. (2017) D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al., Mastering the game of Go without human knowledge, Nature 550 (2017) 354.
  • Esteva et al. (2017) A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, S. Thrun, Dermatologist-level classification of skin cancer with deep neural networks, Nature 542 (2017) 115–118.
  • Poplin et al. (2018) R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, D. R. Webster, Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning, Nature Biomedical Engineering 2 (2018) 158.
  • LeCun et al. (2015) Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (2015) 436.
  • Hinton (2018) G. Hinton, Deep Learning — A Technology With the Potential to Transform Health Care (2018) 1–2.
  • Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016.
  • Litjens et al. (2017) G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. W. M. van der Laak, B. van Ginneken, C. I. Sánchez, A survey on deep learning in medical image analysis., Medical image analysis 42 (2017) 60–88.
  • Marblestone et al. (2016) A. H. Marblestone, G. Wayne, K. P. Kording, Toward an Integration of Deep Learning and Neuroscience, Frontiers in Computational Neuroscience 10 (2016) 94.
  • Hassabis et al. (2017) D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick, Neuroscience-Inspired Artificial Intelligence, Neuron 95 (2017) 245–258.
  • Banino et al. (2018) A. Banino, C. Barry, B. Uria, C. Blundell, T. Lillicrap, P. Mirowski, A. Pritzel, M. J. Chadwick, T. Degris, J. Modayil, et al., Vector-based navigation using grid-like representations in artificial agents, Nature 557 (2018) 429.
  • Cybenko (1989) G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of control, signals and systems 2 (1989) 303–314.
  • Hornik et al. (1989) K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural networks 2 (1989) 359–366.
  • Leshno et al. (1993) M. Leshno, V. Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural networks 6 (1993) 861–867.
  • Sonoda and Murata (2017) S. Sonoda, N. Murata, Neural network with unbounded activation functions is universal approximator, Applied and Computational Harmonic Analysis 43 (2017) 233–268.
  • Nielsen (2015) M. A. Nielsen, Neural networks and deep learning, Determination Press, 2015.
  • Aggarwal (2018) C. C. Aggarwal, Neural networks and deep learning, Springer, 2018.
  • Rosenblatt (1958) F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain., Psychological review 65 (1958) 386.
  • Linnainmaa (1970) S. Linnainmaa, The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors, Master’s Thesis (in Finnish), Univ. Helsinki (1970) 6–7.
  • Werbos (1974) P. Werbos, Beyond regression: New tools for prediction and analysis in the behavioral sciences, Ph. D. dissertation, Harvard University (1974).
  • Rumelhart et al. (1986) D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors, Nature 323 (1986) 533.
  • Cauchy (1847) A. Cauchy, Méthode générale pour la résolution des systèmes d’équations simultanées, Comp. Rend. Sci. Paris 25 (1847) 536–538.
  • LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (1998) 2278–2324.
  • Lo et al. (1993) S. C. Lo, M. T. Freedman, J. S. Lin, S. K. Mun, Automatic lung nodule detection using profile matching and back-propagation neural network techniques., Journal of digital imaging 6 (1993) 48–54.
  • Murray (2018) S. Murray, An exploratory analysis of multi-class uncertainty approximation in Bayesian convolution neural networks, Master’s thesis, University of Bergen, 2018.
  • Clevert et al. (2015) D.-A. Clevert, T. Unterthiner, S. Hochreiter, Fast and accurate deep network learning by exponential linear units (ELUs), arXiv preprint arXiv:1511.07289 (2015).
  • He et al. (2015) K. He, X. Zhang, S. Ren, J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in: Proceedings of the IEEE international conference on computer vision, pp. 1026–1034.
  • Springenberg et al. (2014) J. T. Springenberg, A. Dosovitskiy, T. Brox, M. Riedmiller, Striving for simplicity: The all convolutional net, arXiv preprint arXiv:1412.6806 (2014).
  • Srivastava et al. (2014) N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, The Journal of Machine Learning Research 15 (2014) 1929–1958.
  • Rashmi and Gilad-Bachrach (2015) K. Rashmi, R. Gilad-Bachrach, DART: Dropouts meet multiple additive regression trees, in: International Conference on Artificial Intelligence and Statistics, pp. 489–497.
  • Gal (2016) Y. Gal, Uncertainty in deep learning, Ph.D. thesis, University of Cambridge, 2016.
  • Wickstrøm et al. (2018) K. Wickstrøm, M. Kampffmeyer, R. Jenssen, Uncertainty Modeling and Interpretability in Convolutional Neural Networks for Polyp Segmentation, in: 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), IEEE, pp. 1–6.
  • Ioffe and Szegedy (2015) S. Ioffe, C. Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, in: International Conference on Machine Learning, pp. 448–456.
  • Zeiler and Fergus (2014) M. D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in: European conference on computer vision, Springer, pp. 818–833.
  • Simonyan and Zisserman (2014) K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014).
  • Szegedy et al. (2015) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9.
  • Lin et al. (2013) M. Lin, Q. Chen, S. Yan, Network in network, arXiv preprint arXiv:1312.4400 (2013).
  • He et al. (2016) K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778.
  • Srivastava et al. (2015) R. K. Srivastava, K. Greff, J. Schmidhuber, Training very deep networks, in: Advances in neural information processing systems, pp. 2377–2385.
  • Huang et al. (2017) G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in: CVPR, volume 1, p. 3.
  • Xie et al. (2017) S. Xie, R. Girshick, P. Dollár, Z. Tu, K. He, Aggregated residual transformations for deep neural networks, in: Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, IEEE, pp. 5987–5995.
  • Hu et al. (2017) J. Hu, L. Shen, G. Sun, Squeeze-and-excitation networks, arXiv preprint arXiv:1709.01507 (2017).
  • Zoph et al. (2017) B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, Learning transferable architectures for scalable image recognition, arXiv preprint arXiv:1707.07012 (2017).
  • Bello et al. (2017) I. Bello, B. Zoph, V. Vasudevan, Q. V. Le, Neural optimizer search with reinforcement learning, in: D. Precup, Y. W. Teh (Eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, PMLR, International Convention Centre, Sydney, Australia, 2017, pp. 459–468.
  • Redmon et al. (2016) J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788.
  • Goodfellow et al. (2014) I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative Adversarial Nets, in: Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 27, Curran Associates, Inc., 2014, pp. 2672–2680.
  • Koch et al. (2015) G. Koch, R. Zemel, R. Salakhutdinov, Siamese neural networks for one-shot image recognition, in: ICML Deep Learning Workshop, volume 2.
  • Bromley et al. (1994) J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, R. Shah, Signature verification using a “siamese” time delay neural network, in: Advances in neural information processing systems, pp. 737–744.
  • Ronneberger et al. (2015) O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted intervention, Springer, pp. 234–241.
  • Long et al. (2015) J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440.
  • Milletari et al. (2016) F. Milletari, N. Navab, S.-A. Ahmadi, V-net: Fully convolutional neural networks for volumetric medical image segmentation, in: 3D Vision (3DV), 2016 Fourth International Conference on, IEEE, pp. 565–571.
  • Redmon (2016) J. Redmon, Darknet: Open source neural networks in C, 2013–2016.
  • Lee (2018) Y. H. Lee, Efficiency improvement in a busy radiology practice: Determination of musculoskeletal magnetic resonance imaging protocol using deep-learning convolutional neural networks., Journal of digital imaging (2018).
  • Gong et al. (2018) E. Gong, J. M. Pauly, M. Wintermark, G. Zaharchuk, Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI, Journal of magnetic resonance imaging 48 (2018) 330–340.
  • Meyer et al. (2018) P. Meyer, V. Noblet, C. Mazzara, A. Lallement, Survey on deep learning for radiotherapy., Computers in biology and medicine 98 (2018) 126–146.
  • Liu et al. (2018) F. Liu, H. Jang, R. Kijowski, T. Bradshaw, A. B. McMillan, Deep learning MR imaging-based attenuation correction for PET/MR imaging, Radiology 286 (2018) 676–684.
  • Mehranian et al. (2016) A. Mehranian, H. Arabi, H. Zaidi, Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities., Medical physics 43 (2016) 1130–1155.
  • Lao et al. (2017) J. Lao, Y. Chen, Z.-C. Li, Q. Li, J. Zhang, J. Liu, G. Zhai, A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme., Scientific reports 7 (2017) 10353.
  • Oakden-Rayner et al. (2017) L. Oakden-Rayner, G. Carneiro, T. Bessen, J. C. Nascimento, A. P. Bradley, L. J. Palmer, Precision radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework., Scientific reports 7 (2017) 1648.
  • Peeken et al. (2018) J. C. Peeken, M. Bernhofer, B. Wiestler, T. Goldberg, D. Cremers, B. Rost, J. J. Wilkens, S. E. Combs, F. Nüsslin, Radiomics in radiooncology - challenging the medical physicist., Physica medica 48 (2018) 27–36.
  • Izadyyazdanabadi et al. (2018) M. Izadyyazdanabadi, E. Belykh, M. A. Mooney, J. M. Eschbacher, P. Nakaji, Y. Yang, M. C. Preul, Prospects for theranostics in neurosurgical imaging: Empowering confocal laser endomicroscopy diagnostics via deep learning., Frontiers in oncology 8 (2018) 240.
  • Haskins et al. (2018) G. Haskins, J. Kruecker, U. Kruger, S. Xu, P. A. Pinto, B. J. Wood, P. Yan, Learning deep similarity metric for 3D MR-TRUS registration, arXiv preprint arXiv:1806.04548v1 (2018).
  • Cao et al. (2018) X. Cao, J. Yang, J. Zhang, Q. Wang, P.-T. Yap, D. Shen, Deformable image registration using a cue-aware deep regression network, IEEE transactions on bio-medical engineering 65 (2018) 1900–1911.
  • Yang et al. (2017) X. Yang, R. Kwitt, M. Styner, M. Niethammer, Quicksilver: Fast predictive image registration - a deep learning approach., NeuroImage 158 (2017) 378–396.
  • Kearney et al. (2018) V. P. Kearney, S. Haaf, A. Sudhyadhom, G. Valdes, T. D. Solberg, An unsupervised convolutional neural network-based algorithm for deformable image registration, Physics in medicine and biology (2018).
  • Zheng et al. (2018) J. Zheng, S. Miao, Z. Jane Wang, R. Liao, Pairwise domain adaptation module for CNN-based 2-D/3-D registration., Journal of medical imaging (Bellingham, Wash.) 5 (2018) 021204.
  • Anas et al. (2018) E. M. A. Anas, P. Mousavi, P. Abolmaesumi, A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy, Medical image analysis 48 (2018) 107–116.
  • Ching et al. (2018) T. Ching, D. S. Himmelstein, B. K. Beaulieu-Jones, A. A. Kalinin, B. T. Do, G. P. Way, E. Ferrero, P.-M. Agapow, M. Zietz, M. M. Hoffman, W. Xie, G. L. Rosen, B. J. Lengerich, J. Israeli, J. Lanchantin, S. Woloszynek, A. E. Carpenter, A. Shrikumar, J. Xu, E. M. Cofer, C. A. Lavender, S. C. Turaga, A. M. Alexandari, Z. Lu, D. J. Harris, D. DeCaprio, Y. Qi, A. Kundaje, Y. Peng, L. K. Wiley, M. H. S. Segler, S. M. Boca, S. J. Swamidass, A. Huang, A. Gitter, C. S. Greene, Opportunities and obstacles for deep learning in biology and medicine, Journal of the Royal Society, Interface 15 (2018).
  • Lee et al. (2017) J.-G. Lee, S. Jun, Y.-W. Cho, H. Lee, G. B. Kim, J. B. Seo, N. Kim, Deep learning in medical imaging: General overview., Korean journal of radiology 18 (2017) 570–584.
  • Rueckert et al. (2016) D. Rueckert, B. Glocker, B. Kainz, Learning clinically useful information from images: Past, present and future., Medical image analysis 33 (2016) 13–18.
  • Chartrand et al. (2017) G. Chartrand, P. M. Cheng, E. Vorontsov, M. Drozdzal, S. Turcotte, C. J. Pal, S. Kadoury, A. Tang, Deep learning: A primer for radiologists, Radiographics : a review publication of the Radiological Society of North America, Inc 37 (2017) 2113–2131.
  • Erickson et al. (2017) B. J. Erickson, P. Korfiatis, Z. Akkus, T. L. Kline, Machine learning for medical imaging, Radiographics : a review publication of the Radiological Society of North America, Inc 37 (2017) 505–515.
  • Mazurowski et al. (2018) M. A. Mazurowski, M. Buda, A. Saha, M. R. Bashir, Deep learning in radiology: an overview of the concepts and a survey of the state of the art, arXiv preprint arXiv:1802.08717v1 (2018).
  • McBee et al. (2018) M. P. McBee, O. A. Awan, A. T. Colucci, C. W. Ghobadi, N. Kadom, A. P. Kansagra, S. Tridandapani, W. F. Auffermann, Deep learning in radiology., Academic radiology (2018).
  • Savadjiev et al. (2018) P. Savadjiev, J. Chong, A. Dohan, M. Vakalopoulou, C. Reinhold, N. Paragios, B. Gallix, Demystification of AI-driven medical image interpretation: past, present and future., European radiology (2018).
  • Thrall et al. (2018) J. H. Thrall, X. Li, Q. Li, C. Cruz, S. Do, K. Dreyer, J. Brink, Artificial intelligence and machine learning in radiology: Opportunities, challenges, pitfalls, and criteria for success., Journal of the American College of Radiology : JACR 15 (2018) 504–508.
  • Yamashita et al. (2018) R. Yamashita, M. Nishio, R. K. G. Do, K. Togashi, Convolutional neural networks: an overview and application in radiology., Insights into imaging (2018).
  • Yasaka et al. (2018) K. Yasaka, H. Akai, A. Kunimatsu, S. Kiryu, O. Abe, Deep learning with convolutional neural network in radiology., Japanese journal of radiology 36 (2018) 257–272.
  • Giger (2018) M. L. Giger, Machine learning in medical imaging, Journal of the American College of Radiology : JACR 15 (2018) 512–520.
  • Erickson et al. (2017) B. J. Erickson, P. Korfiatis, Z. Akkus, T. Kline, K. Philbrick, Toolkits and libraries for deep learning, Journal of digital imaging 30 (2017) 400–405.
  • Zaharchuk et al. (2018) G. Zaharchuk, E. Gong, M. Wintermark, D. Rubin, C. P. Langlotz, Deep learning in neuroradiology., AJNR American journal of neuroradiology (2018).
  • Akkus et al. (2017) Z. Akkus, A. Galimzianova, A. Hoogi, D. L. Rubin, B. J. Erickson, Deep learning for brain MRI segmentation: State of the art and future directions., Journal of digital imaging 30 (2017) 449–459.
  • Lee et al. (2017) E.-J. Lee, Y.-H. Kim, N. Kim, D.-W. Kang, Deep into the brain: Artificial intelligence in stroke imaging., Journal of stroke 19 (2017) 277–285.
  • Feng et al. (2018) R. Feng, M. Badgeley, J. Mocco, E. K. Oermann, Deep learning guided stroke management: a review of clinical applications, Journal of neurointerventional surgery 10 (2018) 358–362.
  • Vieira et al. (2017) S. Vieira, W. H. L. Pinaya, A. Mechelli, Using deep learning to investigate the neuroimaging correlates of psychiatric and neurological disorders: Methods and applications., Neuroscience and biobehavioral reviews 74 (2017) 58–75.
  • Burt et al. (2018) J. R. Burt, N. Torosdagli, N. Khosravan, H. RaviPrakash, A. Mortazi, F. Tissavirasingham, S. Hussein, U. Bagci, Deep learning beyond cats and dogs: recent advances in diagnosing breast cancer with deep neural networks, The British journal of radiology 91 (2018) 20170545.
  • Samala et al. (2017) R. K. Samala, H.-P. Chan, L. M. Hadjiiski, M. A. Helvie, K. H. Cha, C. D. Richter, Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms., Physics in medicine and biology 62 (2017) 8894–8908.
  • van Ginneken (2017) B. van Ginneken, Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning, Radiological physics and technology 10 (2017) 23–32.
  • Morin et al. (2018) O. Morin, M. Vallières, A. Jochems, H. C. Woodruff, G. Valdes, S. E. Braunstein, J. E. Wildberger, J. E. Villanueva-Meyer, V. Kearney, S. S. Yom, T. D. Solberg, P. Lambin, A deep look into the future of quantitative imaging in oncology: A statement of working principles and proposal for change., International journal of radiation oncology, biology, physics (2018).
  • Parmar et al. (2018) C. Parmar, J. D. Barry, A. Hosny, J. Quackenbush, H. J. W. L. Aerts, Data analysis strategies in medical imaging., Clinical cancer research : an official journal of the American Association for Cancer Research 24 (2018) 3492–3499.
  • Xue et al. (2017) Y. Xue, S. Chen, J. Qin, Y. Liu, B. Huang, H. Chen, Application of deep learning in automated analysis of molecular images in cancer: A survey., Contrast media & molecular imaging 2017 (2017) 9512370.
  • Brattain et al. (2018) L. J. Brattain, B. A. Telfer, M. Dhyani, J. R. Grajo, A. E. Samir, Machine learning for medical ultrasound: status, methods, and future opportunities, Abdominal radiology 43 (2018) 786–799.
  • Huang et al. (2018) Q. Huang, F. Zhang, X. Li, Machine learning in ultrasound computer-aided diagnostic systems: A survey, BioMed research international 2018 (2018) 5137904.
  • Shen et al. (2017) D. Shen, G. Wu, H.-I. Suk, Deep learning in medical image analysis., Annual review of biomedical engineering 19 (2017) 221–248.
  • Suzuki (2017) K. Suzuki, Overview of deep learning in medical imaging., Radiological physics and technology 10 (2017) 257–273.
  • Cao et al. (2018) C. Cao, F. Liu, H. Tan, D. Song, W. Shu, W. Li, Y. Zhou, X. Bo, Z. Xie, Deep learning and its applications in biomedicine, Genomics, proteomics & bioinformatics 16 (2018) 17–32.
  • Lakhani et al. (2018) P. Lakhani, D. L. Gray, C. R. Pett, P. Nagy, G. Shih, Hello world deep learning in medical imaging., Journal of digital imaging (2018).
  • Pawlowski et al. (2017) N. Pawlowski, S. I. Ktena, M. C. Lee, B. Kainz, D. Rueckert, B. Glocker, M. Rajchl, DLTK: State of the art reference implementations for deep learning on medical images, arXiv preprint arXiv:1711.06853 (2017).
  • Yang et al. (2016) Y. Yang, J. Sun, H. Li, Z. Xu, Deep ADMM-Net for compressive sensing MRI, in: D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, R. Garnett (Eds.), Advances in Neural Information Processing Systems 29, Curran Associates, Inc., 2016, pp. 10–18.
  • Wang et al. (2016) S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, D. Liang, Accelerating magnetic resonance imaging via deep learning, in: Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on, IEEE, pp. 514–517.
  • Qin et al. (2018) C. Qin, J. V. Hajnal, D. Rueckert, J. Schlemper, J. Caballero, A. N. Price, Convolutional recurrent neural networks for dynamic MR image reconstruction., IEEE transactions on medical imaging (2018).
  • Schlemper et al. (2018) J. Schlemper, J. Caballero, J. V. Hajnal, A. N. Price, D. Rueckert, A deep cascade of convolutional neural networks for dynamic MR image reconstruction., IEEE transactions on medical imaging 37 (2018) 491–503.
  • Chen et al. (2018) F. Chen, V. Taviani, I. Malkiel, J. Y. Cheng, J. I. Tamir, J. Shaikh, S. T. Chang, C. J. Hardy, J. M. Pauly, S. S. Vasanawala, Variable-density single-shot fast Spin-Echo MRI with deep learning reconstruction by using variational networks, Radiology (2018) 180445.
  • Knoll et al. (2018) F. Knoll, K. Hammernik, E. Kobler, T. Pock, M. P. Recht, D. K. Sodickson, Assessment of the generalization of learned image reconstruction and the potential for transfer learning, Magnetic resonance in medicine (2018).
  • Mardani et al. (2018) M. Mardani, E. Gong, J. Y. Cheng, S. S. Vasanawala, G. Zaharchuk, L. Xing, J. M. Pauly, Deep generative adversarial neural networks for compressive sensing (GANCS) MRI., IEEE transactions on medical imaging (2018).
  • Zhu et al. (2018) B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, M. S. Rosen, Image reconstruction by domain-transform manifold learning., Nature 555 (2018) 487–492.
  • Eo et al. (2018) T. Eo, Y. Jun, T. Kim, J. Jang, H.-J. Lee, D. Hwang, KIKI-net: cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images, Magnetic resonance in medicine 80 (2018) 2188–2201.
  • Han et al. (2018) Y. Han, J. Yoo, H. H. Kim, H. J. Shin, K. Sung, J. C. Ye, Deep learning with domain adaptation for accelerated projection-reconstruction MR, Magnetic resonance in medicine 80 (2018) 1189–1205.
  • Shi et al. (2018) J. Shi, Q. Liu, C. Wang, Q. Zhang, S. Ying, H. Xu, Super-resolution reconstruction of MR image with a novel residual learning network algorithm., Physics in medicine and biology 63 (2018) 085011.
  • Yang et al. (2018) G. Yang, S. Yu, H. Dong, G. Slabaugh, P. L. Dragotti, X. Ye, F. Liu, S. Arridge, J. Keegan, Y. Guo, D. Firmin, DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction., IEEE transactions on medical imaging 37 (2018) 1310–1321.
  • Deistung et al. (2013) A. Deistung, A. Schäfer, F. Schweser, U. Biedermann, R. Turner, J. R. Reichenbach, Toward in vivo histology: a comparison of quantitative susceptibility mapping (QSM) with magnitude-, phase-, and R2*-imaging at ultra-high magnetic field strength, Neuroimage 65 (2013) 299–314.
  • Deistung et al. (2017) A. Deistung, F. Schweser, J. R. Reichenbach, Overview of quantitative susceptibility mapping, NMR in Biomedicine 30 (2017).
  • Yoon et al. (2018) J. Yoon, E. Gong, I. Chatnuntawech, B. Bilgic, J. Lee, W. Jung, J. Ko, H. Jung, K. Setsompop, G. Zaharchuk, E. Y. Kim, J. Pauly, J. Lee, Quantitative susceptibility mapping using deep neural network: QSMnet., NeuroImage 179 (2018) 199–206.
  • Liu et al. (2009) T. Liu, P. Spincemaille, L. De Rochefort, B. Kressler, Y. Wang, Calculation of susceptibility through multiple orientation sampling (COSMOS): a method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI, Magnetic Resonance in Medicine 61 (2009) 196–204.
  • Rasmussen et al. (2018) K. G. B. Rasmussen, M. J. Kristensen, R. G. Blendal, L. R. Ostergaard, M. Plocharski, K. O’Brien, C. Langkammer, A. Janke, M. Barth, S. Bollmann, DeepQSM - Using Deep Learning to Solve the Dipole Inversion for MRI Susceptibility Mapping, bioRxiv (2018) 278036.
  • Ma et al. (2013) D. Ma, V. Gulani, N. Seiberlich, K. Liu, J. L. Sunshine, J. L. Duerk, M. A. Griswold, Magnetic resonance fingerprinting, Nature 495 (2013) 187–192.
  • European Society of Radiology (ESR) (2015) European Society of Radiology (ESR), Magnetic resonance fingerprinting - a promising new approach to obtain standardized imaging biomarkers from MRI., Insights into imaging 6 (2015) 163–165.
  • Donoho (2006) D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory 52 (2006) 1289–1306.
  • Lustig et al. (2007) M. Lustig, D. Donoho, J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging., Magnetic resonance in medicine 58 (2007) 1182–1195.
  • McCann et al. (2017) M. T. McCann, K. H. Jin, M. Unser, Convolutional neural networks for inverse problems in imaging: A review, IEEE Signal Processing Magazine 34 (2017) 85–95.
  • Shah and Hegde (2018) V. Shah, C. Hegde, Solving Linear Inverse Problems Using GAN Priors: An Algorithm with Provable Guarantees, arXiv preprint arXiv:1802.08406 (2018).
  • Lucas et al. (2018) A. Lucas, M. Iliadis, R. Molina, A. K. Katsaggelos, Using deep neural networks for inverse problems in imaging: beyond analytical methods, IEEE Signal Processing Magazine 35 (2018) 20–36.
  • Aggarwal et al. (2018) H. K. Aggarwal, M. P. Mani, M. Jacob, MoDL: Model Based Deep Learning Architecture for Inverse Problems, IEEE transactions on medical imaging (2018).
  • Li et al. (2018) H. Li, J. Schwab, S. Antholzer, M. Haltmeier, NETT: Solving Inverse Problems with Deep Neural Networks, arXiv preprint arXiv:1803.00092 (2018).
  • Ma et al. (2018) D. Ma, Y. Jiang, Y. Chen, D. McGivney, B. Mehta, V. Gulani, M. Griswold, Fast 3D magnetic resonance fingerprinting for a whole-brain coverage, Magnetic resonance in medicine 79 (2018) 2190–2197.
  • Christen et al. (2014) T. Christen, N. A. Pannetier, W. W. Ni, D. Qiu, M. E. Moseley, N. Schuff, G. Zaharchuk, MR vascular fingerprinting: A new approach to compute cerebral blood volume, mean vessel radius, and oxygenation maps in the human brain, Neuroimage 89 (2014) 262–270.
  • Lemasson et al. (2016) B. Lemasson, N. Pannetier, N. Coquery, L. S. B. Boisserand, N. Collomb, N. Schuff, M. Moseley, G. Zaharchuk, E. L. Barbier, T. Christen, MR vascular fingerprinting in stroke and brain tumors models., Scientific reports 6 (2016) 37071.
  • Rieger et al. (2018) B. Rieger, M. Akçakaya, J. C. Pariente, S. Llufriu, E. Martinez-Heras, S. Weingärtner, L. R. Schad, Time efficient whole-brain coverage with MR fingerprinting using slice-interleaved echo-planar-imaging, Scientific reports 8 (2018) 6667.
  • Wright et al. (2018) K. L. Wright, Y. Jiang, D. Ma, D. C. Noll, M. A. Griswold, V. Gulani, L. Hernandez-Garcia, Estimation of perfusion properties with MR fingerprinting arterial spin labeling., Magnetic resonance imaging 50 (2018) 68–77.
  • Panda et al. (2017) A. Panda, B. B. Mehta, S. Coppo, Y. Jiang, D. Ma, N. Seiberlich, M. A. Griswold, V. Gulani, Magnetic resonance fingerprinting-an overview., Current opinion in biomedical engineering 3 (2017) 56–66.
  • Cohen et al. (2018) O. Cohen, B. Zhu, M. S. Rosen, MR fingerprinting deep reconstruction network (DRONE), Magnetic resonance in medicine 80 (2018) 885–894.
  • Hoppe et al. (2017) E. Hoppe, G. Körzdörfer, T. Würfl, J. Wetzl, F. Lugauer, J. Pfeuffer, A. Maier, Deep learning for magnetic resonance fingerprinting: A new approach for predicting quantitative parameter values from time series, Studies in health technology and informatics 243 (2017) 202–206.
  • Bojorquez et al. (2017) J. Z. Bojorquez, S. Bricq, C. Acquitter, F. Brunotte, P. M. Walker, A. Lalande, What are normal relaxation times of tissues at 3 T?, Magnetic resonance imaging 35 (2017) 69–80.
  • Fang et al. (2017) Z. Fang, Y. Chen, W. Lin, D. Shen, Quantification of relaxation times in MR fingerprinting using deep learning, in: Proceedings of the 25th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), 2017.
  • Virtue et al. (2017) P. Virtue, S. X. Yu, M. Lustig, Better than real: Complex-valued neural nets for MRI fingerprinting, in: Proc. IEEE Int. Conf. Image Processing (ICIP), pp. 3953–3957.
  • Tygert et al. (2016) M. Tygert, J. Bruna, S. Chintala, Y. LeCun, S. Piantino, A. Szlam, A mathematical motivation for complex-valued convolutional networks, Neural computation 28 (2016) 815–825.
  • Trabelsi et al. (2017) C. Trabelsi, O. Bilaniuk, Y. Zhang, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, C. J. Pal, Deep complex networks, arXiv preprint arXiv:1705.09792 (2017).
  • Sijbers et al. (1998) J. Sijbers, A. J. den Dekker, J. Van Audekerke, M. Verhoye, D. Van Dyck, Estimation of the noise in magnitude MR images., Magnetic resonance imaging 16 (1998) 87–90.
  • McVeigh et al. (1985) E. R. McVeigh, R. M. Henkelman, M. J. Bronskill, Noise and filtration in magnetic resonance imaging., Medical physics 12 (1985) 586–591.
  • Baselice et al. (2017) F. Baselice, G. Ferraioli, V. Pascazio, A. Sorriso, Bayesian MRI denoising in complex domain, Magnetic resonance imaging 38 (2017) 112–122.
  • Phophalia and Mitra (2017) A. Phophalia, S. K. Mitra, 3D MR image denoising using rough set and kernel PCA method, Magnetic resonance imaging 36 (2017) 135–145.
  • Zhang et al. (2015) X. Zhang, Z. Xu, N. Jia, W. Yang, Q. Feng, W. Chen, Y. Feng, Denoising of 3D magnetic resonance images by using higher-order singular value decomposition., Medical image analysis 19 (2015) 75–86.
  • Van De Ville et al. (2007) D. Van De Ville, M. L. Seghier, F. Lazeyras, T. Blu, M. Unser, Wspm: wavelet-based statistical parametric mapping., NeuroImage 37 (2007) 1205–1217.
  • Salimi-Khorshidi et al. (2014) G. Salimi-Khorshidi, G. Douaud, C. F. Beckmann, M. F. Glasser, L. Griffanti, S. M. Smith, Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers, Neuroimage 90 (2014) 449–468.
  • Lysaker et al. (2003) M. Lysaker, A. Lundervold, X.-C. Tai, Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time, IEEE transactions on image processing 12 (2003) 1579–1590.
  • Bermudez et al. (2018) C. Bermudez, A. J. Plassard, T. L. Davis, A. T. Newton, S. M. Resnick, B. A. Landman, Learning implicit brain MRI manifolds with deep learning, Proceedings of SPIE–the International Society for Optical Engineering 10574 (2018).
  • Benou et al. (2017) A. Benou, R. Veksler, A. Friedman, T. Riklin Raviv, Ensemble of expert deep neural networks for spatio-temporal denoising of contrast-enhanced MRI sequences, Medical image analysis 42 (2017) 145–159.
  • Gal et al. (2010) Y. Gal, A. J. H. Mehnert, A. P. Bradley, K. McMahon, D. Kennedy, S. Crozier, Denoising of dynamic contrast-enhanced MR images using dynamic nonlocal means, IEEE transactions on medical imaging 29 (2010) 302–310.
  • Vincent et al. (2010) P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion, Journal of Machine Learning Research (JMLR) 11 (2010) 3371–3408.
  • Dikaios et al. (2014) N. Dikaios, S. Arridge, V. Hamy, S. Punwani, D. Atkinson, Direct parametric reconstruction from undersampled (k,t)-space data in dynamic contrast enhanced MRI, Medical image analysis 18 (2014) 989–1001.
  • Guo et al. (2017) Y. Guo, S. G. Lingala, Y. Zhu, R. M. Lebel, K. S. Nayak, Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI, Magnetic resonance in medicine 78 (2017) 1566–1578.
  • Sourbron and Buckley (2013) S. P. Sourbron, D. L. Buckley, Classic models for dynamic contrast-enhanced MRI, NMR in biomedicine 26 (2013) 1004–1027.
  • Golkov et al. (2016) V. Golkov, A. Dosovitskiy, J. I. Sperl, M. I. Menzel, M. Czisch, P. Samann, T. Brox, D. Cremers, q-space deep learning: Twelve-fold shorter and model-free diffusion MRI scans, IEEE transactions on medical imaging 35 (2016) 1344–1351.
  • Gurbani et al. (2018) S. S. Gurbani, E. Schreibmann, A. A. Maudsley, J. S. Cordova, B. J. Soher, H. Poptani, G. Verma, P. B. Barker, H. Shim, L. A. D. Cooper, A convolutional neural network to filter artifacts in spectroscopic MRI, Magnetic resonance in medicine 80 (2018) 1765–1775.
  • Kyathanahally et al. (2018) S. P. Kyathanahally, A. Döring, R. Kreis, Deep learning approaches for detection and removal of ghosting artifacts in MR spectroscopy, Magnetic resonance in medicine 80 (2018) 851–863.
  • Küstner et al. (2018) T. Küstner, A. Liebgott, L. Mauch, P. Martirosian, F. Bamberg, K. Nikolaou, B. Yang, F. Schick, S. Gatidis, Automated reference-free detection of motion artifacts in magnetic resonance images, MAGMA 31 (2018) 243–256.
  • Yue et al. (2016) L. Yue, H. Shen, J. Li, Q. Yuan, H. Zhang, L. Zhang, Image super-resolution: The techniques, applications, and future, Signal Processing 128 (2016) 389–408.
  • Shilling et al. (2009) R. Z. Shilling, T. Q. Robbie, T. Bailloeul, K. Mewes, R. M. Mersereau, M. E. Brummer, A super-resolution framework for 3-D high-resolution and high-contrast imaging using 2-D multislice MRI, IEEE transactions on medical imaging 28 (2009) 633–644.
  • Ropele et al. (2010) S. Ropele, F. Ebner, F. Fazekas, G. Reishofer, Super-resolution MRI using microscopic spatial modulation of magnetization, Magnetic resonance in medicine 64 (2010) 1671–1675.
  • Plenge et al. (2012) E. Plenge, D. H. J. Poot, M. Bernsen, G. Kotek, G. Houston, P. Wielopolski, L. van der Weerd, W. J. Niessen, E. Meijering, Super-resolution methods in MRI: can they improve the trade-off between resolution, signal-to-noise ratio, and acquisition time?, Magnetic resonance in medicine 68 (2012) 1983–1993.
  • Bahrami et al. (2017) K. Bahrami, F. Shi, I. Rekik, Y. Gao, D. Shen, 7T-guided super-resolution of 3T MRI, Medical physics 44 (2017) 1661–1677.
  • Van Steenkiste et al. (2017) G. Van Steenkiste, D. H. J. Poot, B. Jeurissen, A. J. den Dekker, F. Vanhevel, P. M. Parizel, J. Sijbers, Super-resolution T1 estimation: quantitative high resolution T1 mapping from a set of low resolution T1-weighted images with different slice orientations, Magnetic resonance in medicine 77 (2017) 1818–1830.
  • Zeng et al. (2018) K. Zeng, H. Zheng, C. Cai, Y. Yang, K. Zhang, Z. Chen, Simultaneous single- and multi-contrast super-resolution for brain MRI images based on a convolutional neural network., Computers in biology and medicine 99 (2018) 133–141.
  • Liu et al. (2018) C. Liu, X. Wu, X. Yu, Y. Tang, J. Zhang, J. Zhou, Fusing multi-scale information in convolution network for MR image super-resolution reconstruction, Biomedical engineering online 17 (2018) 114.
  • Chaudhari et al. (2018) A. S. Chaudhari, Z. Fang, F. Kogan, J. Wood, K. J. Stevens, E. K. Gibbons, J. H. Lee, G. E. Gold, B. A. Hargreaves, Super-resolution musculoskeletal MRI using deep learning, Magnetic resonance in medicine 80 (2018) 2139–2154.
  • Jog et al. (2017) A. Jog, A. Carass, S. Roy, D. L. Pham, J. L. Prince, Random forest regression for magnetic resonance image synthesis, Medical image analysis 35 (2017) 475–488.
  • Keenan et al. (2018) K. E. Keenan, M. Ainslie, A. J. Barker, M. A. Boss, K. M. Cecil, C. Charles, T. L. Chenevert, L. Clarke, J. L. Evelhoch, P. Finn, D. Gembris, J. L. Gunter, D. L. G. Hill, C. R. Jack, E. F. Jackson, G. Liu, S. E. Russek, S. D. Sharma, M. Steckner, K. F. Stupic, J. D. Trzasko, C. Yuan, J. Zheng, Quantitative magnetic resonance imaging phantoms: A review and the need for a system phantom, Magnetic resonance in medicine 79 (2018) 48–61.
  • Jurczuk et al. (2014) K. Jurczuk, M. Kretowski, P.-A. Eliat, H. Saint-Jalmes, J. Bezy-Wendling, In silico modeling of magnetic resonance flow imaging in complex vascular networks, IEEE transactions on medical imaging 33 (2014) 2191–2209.
  • Zhou et al. (2018) Y. Zhou, S. Giffard-Roisin, M. De Craene, S. Camarasu-Pop, J. D’Hooge, M. Alessandrini, D. Friboulet, M. Sermesant, O. Bernard, A framework for the generation of realistic synthetic cardiac ultrasound and magnetic resonance imaging sequences from the same virtual patients., IEEE transactions on medical imaging 37 (2018) 741–754.
  • Duchateau et al. (2018) N. Duchateau, M. Sermesant, H. Delingette, N. Ayache, Model-based generation of large databases of cardiac images: Synthesis of pathological cine MR sequences from real healthy cases, IEEE transactions on medical imaging 37 (2018) 755–766.
  • Creswell et al. (2018) A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, A. A. Bharath, Generative adversarial networks: An overview, IEEE Signal Processing Magazine 35 (2018) 53–65.
  • Hong et al. (2017) Y. Hong, U. Hwang, J. Yoo, S. Yoon, How generative adversarial networks and their variants work: An overview of GAN, arXiv preprint arXiv:1711.05914v7 (2017).
  • Huang et al. (2018) H. Huang, P. S. Yu, C. Wang, An introduction to image synthesis with generative adversarial nets, arXiv preprint arXiv:1803.04469v1 (2018).
  • Osokin et al. (2017) A. Osokin, A. Chessel, R. E. C. Salas, F. Vaggi, GANs for biological image synthesis, in: Proc. IEEE Int. Conf. Computer Vision (ICCV), pp. 2252–2261.
  • Antipov et al. (2017) G. Antipov, M. Baccouche, J. Dugelay, Face aging with conditional generative adversarial networks, in: Proc. IEEE Int. Conf. Image Processing (ICIP), pp. 2089–2093.
  • Bodnar (2018) C. Bodnar, Text to image synthesis using generative adversarial networks, arXiv preprint arXiv:1805.00676v1 (2018).
  • Dong et al. (2017) H. Dong, S. Yu, C. Wu, Y. Guo, Semantic image synthesis via adversarial learning, arXiv preprint arXiv:1707.06873v1 (2017).
  • Reed et al. (2016) S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, H. Lee, Generative adversarial text to image synthesis, arXiv preprint arXiv:1605.05396v2 (2016).
  • Shin et al. (2018) H.-C. Shin, N. A. Tenenholtz, J. K. Rogers, C. G. Schwarz, M. L. Senjem, J. L. Gunter, K. P. Andriole, M. Michalski, Medical Image Synthesis for Data Augmentation and Anonymization Using Generative Adversarial Networks, in: International Workshop on Simulation and Synthesis in Medical Imaging, Springer, pp. 1–11.
  • Mok and Chung (2018) T. C. W. Mok, A. C. S. Chung, Learning data augmentation for brain tumor segmentation with coarse-to-fine generative adversarial networks, arXiv preprint arXiv:1805.11291 (2018).
  • Guibas et al. (2017) J. T. Guibas, T. S. Virdi, P. S. Li, Synthetic medical images from dual generative adversarial networks, arXiv preprint arXiv:1709.01872 (2017).
  • Kitchen and Seah (2017) A. Kitchen, J. Seah, Deep generative adversarial neural networks for realistic prostate lesion MRI synthesis, arXiv preprint arXiv:1708.00129 (2017).
  • Nie et al. (2017) D. Nie, R. Trullo, J. Lian, C. Petitjean, S. Ruan, Q. Wang, D. Shen, Medical image synthesis with context-aware generative adversarial networks, in: Medical Image Computing and Computer-Assisted Intervention (MICCAI), Lecture Notes in Computer Science 10435 (2017) 417–425.
  • Spuhler et al. (2018) K. D. Spuhler, J. Gardus, Y. Gao, C. DeLorenzo, R. Parsey, C. Huang, Synthesis of patient-specific transmission image for PET attenuation correction for PET/MR imaging of the brain using a convolutional neural network, Journal of nuclear medicine (2018).
  • Torrado-Carvajal et al. (2018) A. Torrado-Carvajal, J. Vera-Olmos, D. Izquierdo-Garcia, O. A. Catalano, M. A. Morales, J. Margolin, A. Soricelli, M. Salvatore, N. Malpica, C. Catana, Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction, Journal of nuclear medicine (2018).
  • Zhang et al. (2018) Q. Zhang, H. Wang, H. Lu, D. Won, S. W. Yoon, Medical image synthesis with generative adversarial networks for tissue recognition, in: Proc. IEEE Int. Conf. Healthcare Informatics (ICHI), pp. 199–207.
  • Frid-Adar et al. (2018) M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, H. Greenspan, Synthetic data augmentation using GAN for improved liver lesion classification, in: Proc. IEEE 15th Int. Symp. Biomedical Imaging (ISBI 2018), pp. 289–293.
  • Wolterink et al. (2017) J. M. Wolterink, A. M. Dinkla, M. H. F. Savenije, P. R. Seevinck, C. A. T. van den Berg, I. Isgum, Deep MR to CT synthesis using unpaired data, arXiv preprint arXiv:1708.01155v1 (2017).
  • Maurer and Fitzpatrick (1993) C. R. Maurer, Jr., J. M. Fitzpatrick, A review of medical image registration, 1993.
  • Maclaren et al. (2013) J. Maclaren, M. Herbst, O. Speck, M. Zaitsev, Prospective motion correction in brain imaging: a review, Magnetic resonance in medicine 69 (2013) 621–636.
  • Zaitsev et al. (2017) M. Zaitsev, B. Akin, P. LeVan, B. R. Knowles, Prospective motion correction in functional MRI, Neuroimage 154 (2017) 33–42.
  • Fluck et al. (2011) O. Fluck, C. Vetter, W. Wein, A. Kamen, B. Preim, R. Westermann, A survey of medical image registration on graphics hardware., Computer methods and programs in biomedicine 104 (2011) e45–e57.
  • Shi et al. (2012) L. Shi, W. Liu, H. Zhang, Y. Xie, D. Wang, A survey of GPU-based medical image computing techniques., Quantitative imaging in medicine and surgery 2 (2012) 188–206.
  • Eklund et al. (2013) A. Eklund, P. Dufort, D. Forsberg, S. M. LaConte, Medical image processing on the gpu - past, present and future., Medical image analysis 17 (2013) 1073–1094.
  • Maintz and Viergever (1998) J. B. Maintz, M. A. Viergever, A survey of medical image registration., Medical image analysis 2 (1998) 1–36.
  • Glocker et al. (2011) B. Glocker, A. Sotiras, N. Komodakis, N. Paragios, Deformable medical image registration: setting the state of the art with discrete methods., Annual review of biomedical engineering 13 (2011) 219–244.
  • Sotiras et al. (2013) A. Sotiras, C. Davatzikos, N. Paragios, Deformable medical image registration: a survey., IEEE transactions on medical imaging 32 (2013) 1153–1190.
  • Oliveira and Tavares (2014) F. P. M. Oliveira, J. M. R. S. Tavares, Medical image registration: a review., Computer methods in biomechanics and biomedical engineering 17 (2014) 73–93.
  • Saha et al. (2015) P. K. Saha, R. Strand, G. Borgefors, Digital topology and geometry in medical imaging: A survey, IEEE transactions on medical imaging 34 (2015) 1940–1964.
  • Viergever et al. (2016) M. A. Viergever, J. B. A. Maintz, S. Klein, K. Murphy, M. Staring, J. P. W. Pluim, A survey of medical image registration - under review., Medical image analysis 33 (2016) 140–144.
  • Song et al. (2017) G. Song, J. Han, Y. Zhao, Z. Wang, H. Du, A review on medical image registration as an optimization problem., Current medical imaging reviews 13 (2017) 274–283.
  • Ferrante and Paragios (2017) E. Ferrante, N. Paragios, Slice-to-volume medical image registration: A survey., Medical image analysis 39 (2017) 101–123.
  • Keszei et al. (2017) A. P. Keszei, B. Berkels, T. M. Deserno, Survey of non-rigid registration tools in medicine., Journal of digital imaging 30 (2017) 102–116.
  • Nag (2017) S. Nag, Image registration techniques: A survey, arXiv preprint arXiv:1712.07540v1 (2017).
  • Jiang et al. (2010) J. Jiang, P. Trundle, J. Ren, Medical image analysis with artificial neural networks., Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society 34 (2010) 617–631.
  • Wu et al. (2016) G. Wu, M. Kim, Q. Wang, B. C. Munsell, D. Shen, Scalable high-performance image registration framework by unsupervised deep feature representations learning., IEEE transactions on bio-medical engineering 63 (2016) 1505–1516.
  • Salehi et al. (2018) S. S. M. Salehi, S. Khan, D. Erdogmus, A. Gholipour, Real-time deep pose estimation with geodesic loss for image-to-template rigid registration., IEEE transactions on medical imaging (2018).
  • Toth et al. (2018) D. Toth, S. Miao, T. Kurzendorfer, C. A. Rinaldi, R. Liao, T. Mansi, K. Rhode, P. Mountney, 3D/2D model-to-image registration by imitation learning for cardiac procedures., International journal of computer assisted radiology and surgery (2018).
  • Han (2017) X. Han, MR-based synthetic CT generation using a deep convolutional neural network method, Medical physics 44 (2017) 1408–1419.
  • Liu et al. (2018) M. Liu, D. Cheng, K. Wang, Y. Wang, the Alzheimer's Disease Neuroimaging Initiative, Multi-modality cascaded convolutional neural networks for Alzheimer's disease diagnosis, Neuroinformatics 16 (2018) 295–308.
  • Xiang et al. (2017) L. Xiang, Y. Qiao, D. Nie, L. An, Q. Wang, D. Shen, Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI., Neurocomputing 267 (2017) 406–416.
  • Shan et al. (2017) S. Shan, W. Yan, X. Guo, E. I.-C. Chang, Y. Fan, Y. Xu, Unsupervised end-to-end learning for deformable medical image registration, arXiv preprint arXiv:1711.08608v2 (2017).
  • Balakrishnan et al. (2018) G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, A. V. Dalca, An unsupervised learning model for deformable medical image registration, arXiv preprint arXiv:1802.02604v3 (2018).
  • de Vos et al. (2018) B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, I. Isgum, A deep learning framework for unsupervised affine and deformable image registration, arXiv preprint arXiv:1809.06130v1 (2018).
  • Vannier et al. (1985) M. W. Vannier, R. L. Butterfield, D. Jordan, W. A. Murphy, R. G. Levitt, M. Gado, Multispectral analysis of magnetic resonance images., Radiology 154 (1985) 221–224.
  • Lundervold et al. (1988) A. Lundervold, K. Moen, T. Taxt, Automatic recognition of normal and pathological tissue types in MR images, in: Proc. of the NOBIM Conference, Oslo, Norway, 1988.
  • Taxt et al. (1992) T. Taxt, A. Lundervold, B. Fuglaas, H. Lien, V. Abeler, Multispectral analysis of uterine corpus tumors in magnetic resonance imaging., Magnetic resonance in medicine 23 (1992) 55–76.
  • Taxt and Lundervold (1994) T. Taxt, A. Lundervold, Multispectral analysis of the brain using magnetic resonance imaging., IEEE transactions on medical imaging 13 (1994) 470–481.
  • Lundervold and Storvik (1995) A. Lundervold, G. Storvik, Segmentation of brain parenchyma and cerebrospinal fluid in multispectral magnetic resonance images, IEEE Transactions on Medical Imaging 14 (1995) 339–349.
  • Cabezas et al. (2011) M. Cabezas, A. Oliver, X. Lladó, J. Freixenet, M. B. Cuadra, A review of atlas-based segmentation for magnetic resonance brain images, Computer methods and programs in biomedicine 104 (2011) e158–e177.
  • García-Lorenzo et al. (2013) D. García-Lorenzo, S. Francis, S. Narayanan, D. L. Arnold, D. L. Collins, Review of automatic segmentation methods of multiple sclerosis white matter lesions on conventional magnetic resonance imaging, Medical image analysis 17 (2013) 1–18.
  • Smistad et al. (2015) E. Smistad, T. L. Falch, M. Bozorgi, A. C. Elster, F. Lindseth, Medical image segmentation on GPUs–a comprehensive review., Medical image analysis 20 (2015) 1–18.
  • Bernal et al. (2017) J. Bernal, K. Kushibar, D. S. Asfaw, S. Valverde, A. Oliver, R. Martí, X. Lladó, Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review, arXiv preprint arXiv:1712.03747v3 (2017).
  • Dora et al. (2017) L. Dora, S. Agrawal, R. Panda, A. Abraham, State-of-the-art methods for brain tissue segmentation: A review, IEEE Reviews in Biomedical Engineering 10 (2017) 235–249.
  • Torres et al. (2018) H. R. Torres, S. Queiros, P. Morais, B. Oliveira, J. C. Fonseca, J. L. Vilaça, Kidney segmentation in ultrasound, magnetic resonance and computed tomography images: A systematic review, Computer methods and programs in biomedicine 157 (2018) 49–67.
  • Bernal et al. (2018) J. Bernal, K. Kushibar, D. S. Asfaw, S. Valverde, A. Oliver, R. Martí, X. Lladó, Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review, Artificial intelligence in medicine (2018).
  • Moccia et al. (2018) S. Moccia, E. De Momi, S. El Hadji, L. S. Mattos, Blood vessel segmentation algorithms - review of methods, datasets and evaluation metrics, Computer methods and programs in biomedicine 158 (2018) 71–91.
  • Makropoulos et al. (2018) A. Makropoulos, S. J. Counsell, D. Rueckert, A review on automatic fetal and neonatal brain MRI segmentation., NeuroImage 170 (2018) 231–248.
  • Chen et al. (2017) L. Chen, P. Bentley, D. Rueckert, Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks, NeuroImage. Clinical 15 (2017) 633–643.
  • Havaei et al. (2017) M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville, Y. Bengio, C. Pal, P.-M. Jodoin, H. Larochelle, Brain tumor segmentation with deep neural networks, Medical image analysis 35 (2017) 18–31.
  • Choi and Jin (2016) H. Choi, K. H. Jin, Fast and robust segmentation of the striatum using deep convolutional neural networks, Journal of Neuroscience Methods 274 (2016) 146–153.
  • Ibragimov and Xing (2017) B. Ibragimov, L. Xing, Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks, Medical physics 44 (2017) 547–557.
  • Kline et al. (2017) T. L. Kline, P. Korfiatis, M. E. Edwards, J. D. Blais, F. S. Czerwiec, P. C. Harris, B. F. King, V. E. Torres, B. J. Erickson, Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys., Journal of digital imaging 30 (2017) 442–448.
  • Guo et al. (2016) Y. Guo, Y. Gao, D. Shen, Deformable MR prostate segmentation via deep feature learning and sparse patch matching, IEEE transactions on medical imaging 35 (2016) 1077–1089.
  • Li et al. (2018) X. Li, Q. Dou, H. Chen, C.-W. Fu, X. Qi, D. L. Belavý, G. Armbrecht, D. Felsenberg, G. Zheng, P.-A. Heng, 3D multi-scale FCN with random modality voxel dropout learning for intervertebral disc localization and segmentation from multi-modality MR images, Medical image analysis 45 (2018) 41–54.
  • Kleesiek et al. (2016) J. Kleesiek, G. Urban, A. Hubert, D. Schwarz, K. Maier-Hein, M. Bendszus, A. Biller, Deep MRI brain extraction: A 3D convolutional neural network for skull stripping, Neuroimage 129 (2016) 460–469.
  • Li et al. (2018) H. Li, N. A. Parikh, L. He, A novel transfer learning approach to enhance deep neural network classification of brain functional connectomes., Frontiers in neuroscience 12 (2018) 491.
  • Zeng et al. (2018) L.-L. Zeng, H. Wang, P. Hu, B. Yang, W. Pu, H. Shen, X. Chen, Z. Liu, H. Yin, Q. Tan, K. Wang, D. Hu, Multi-site diagnostic classification of schizophrenia using discriminant deep learning with functional connectivity MRI., EBioMedicine 30 (2018) 74–85.
  • Wasserthal et al. (2018) J. Wasserthal, P. Neher, K. H. Maier-Hein, TractSeg - fast and accurate white matter tract segmentation, Neuroimage 183 (2018) 239–253.
  • Cole et al. (2017) J. H. Cole, R. P. K. Poudel, D. Tsagkrasoulis, M. W. A. Caan, C. Steves, T. D. Spector, G. Montana, Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker, Neuroimage 163 (2017) 115–124.
  • Liu et al. (2018) M. Liu, J. Zhang, E. Adeli, D. Shen, Landmark-based deep multi-instance learning for brain disease diagnosis., Medical image analysis 43 (2018) 157–168.
  • Islam and Zhang (2018) J. Islam, Y. Zhang, Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks, Brain informatics 5 (2018) 2.
  • Lu et al. (2018) D. Lu, K. Popuri, G. W. Ding, R. Balachandar, M. F. Beg, Multimodal and multiscale deep neural networks for the early diagnosis of Alzheimer's disease using structural MR and FDG-PET images, Scientific reports 8 (2018) 5697.
  • Moeskops et al. (2018) P. Moeskops, J. de Bresser, H. J. Kuijf, A. M. Mendrik, G. J. Biessels, J. P. W. Pluim, I. Išgum, Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI, NeuroImage. Clinical 17 (2018) 251–262.
  • Pizarro et al. (2018) R. Pizarro, H.-E. Assemlal, D. De Nigris, C. Elliott, S. Antel, D. Arnold, A. Shmuel, Using deep learning algorithms to automatically identify the brain MRI contrast: implications for managing large databases, Neuroinformatics (2018).
  • Laukamp et al. (2018) K. R. Laukamp, F. Thiele, G. Shakirin, D. Zopfs, A. Faymonville, M. Timmer, D. Maintz, M. Perkuhn, J. Borggrefe, Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI, European radiology (2018).
  • Perkuhn et al. (2018) M. Perkuhn, P. Stavrinou, F. Thiele, G. Shakirin, M. Mohan, D. Garmpis, C. Kabbasch, J. Borggrefe, Clinical evaluation of a multiparametric deep learning model for glioblastoma segmentation using heterogeneous magnetic resonance imaging data from clinical routine., Investigative radiology (2018).
  • AlBadawy et al. (2018) E. A. AlBadawy, A. Saha, M. A. Mazurowski, Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing., Medical physics 45 (2018) 1150–1158.
  • Cui et al. (2018) S. Cui, L. Mao, J. Jiang, C. Liu, S. Xiong, Automatic semantic segmentation of brain gliomas from MRI images using a deep cascaded neural network, Journal of healthcare engineering 2018 (2018) 4940593.
  • Hoseini et al. (2018) F. Hoseini, A. Shahbahrami, P. Bayat, AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation, Journal of digital imaging (2018).
  • Yoo et al. (2018) Y. Yoo, L. Y. W. Tang, T. Brosch, D. K. B. Li, S. Kolind, I. Vavasour, A. Rauscher, A. L. MacKay, A. Traboulsee, R. C. Tam, Deep learning of joint myelin and T1w MRI features in normal-appearing brain tissue to distinguish between multiple sclerosis patients and healthy controls., NeuroImage. Clinical 17 (2018) 169–178.
  • Bobo et al. (2018) M. F. Bobo, S. Bao, Y. Huo, Y. Yao, J. Virostko, A. J. Plassard, I. Lyu, A. Assad, R. G. Abramson, M. A. Hilmes, B. A. Landman, Fully convolutional neural networks improve abdominal organ segmentation., Proceedings of SPIE–the International Society for Optical Engineering 10574 (2018).
  • Shehata et al. (2018) M. Shehata, F. Khalifa, A. Soliman, M. Ghazal, F. Taher, M. Abou El-Ghar, A. Dwyer, G. Gimel’farb, R. Keynton, A. El-Baz, Computer-aided diagnostic system for early detection of acute renal transplant rejection using diffusion-weighted MRI, IEEE transactions on bio-medical engineering (2018).
  • Cheng et al. (2017) R. Cheng, H. R. Roth, N. Lay, L. Lu, B. Turkbey, W. Gandler, E. S. McCreedy, T. Pohida, P. A. Pinto, P. Choyke, M. J. McAuliffe, R. M. Summers, Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks, Journal of medical imaging 4 (2017) 041302.
  • Ishioka et al. (2018) J. Ishioka, Y. Matsuoka, S. Uehara, Y. Yasuda, T. Kijima, S. Yoshida, M. Yokoyama, K. Saito, K. Kihara, N. Numao, T. Kimura, K. Kudo, I. Kumazawa, Y. Fujii, Computer-aided diagnosis of prostate cancer on magnetic resonance imaging using a convolutional neural network algorithm, BJU international (2018).
  • Song et al. (2018) Y. Song, Y.-D. Zhang, X. Yan, H. Liu, M. Zhou, B. Hu, G. Yang, Computer-aided diagnosis of prostate cancer using a deep convolutional neural network from multiparametric MRI, Journal of magnetic resonance imaging : JMRI (2018).
  • Wang et al. (2017) X. Wang, W. Yang, J. Weinreb, J. Han, Q. Li, X. Kong, Y. Yan, Z. Ke, B. Luo, T. Liu, L. Wang, Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning., Scientific reports 7 (2017) 15415.
  • Yang et al. (2017) X. Yang, C. Liu, Z. Wang, J. Yang, H. L. Min, L. Wang, K.-T. T. Cheng, Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI, Medical image analysis 42 (2017) 212–227.
  • Le et al. (2017) M. H. Le, J. Chen, L. Wang, Z. Wang, W. Liu, K.-T. T. Cheng, X. Yang, Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks, Physics in medicine and biology 62 (2017) 6497–6514.
  • Forsberg et al. (2017) D. Forsberg, E. Sjöblom, J. L. Sunshine, Detection and labeling of vertebrae in MR images using deep learning with clinical annotations as training data, Journal of digital imaging 30 (2017) 406–412.
  • Lu et al. (2018) J.-T. Lu, S. Pedemonte, B. Bizzo, S. Doyle, K. P. Andriole, M. H. Michalski, R. G. Gonzalez, S. R. Pomerantz, DeepSPINE: automated lumbar vertebral segmentation, disc-level designation, and spinal stenosis grading using deep learning, arXiv preprint arXiv:1807.10215v1 (2018).
  • Han et al. (2018) Z. Han, B. Wei, S. Leung, I. B. Nachum, D. Laidley, S. Li, Automated pathogenesis-based diagnosis of lumbar neural foraminal stenosis via deep multiscale multitask learning., Neuroinformatics 16 (2018) 325–337.
  • Kim et al. (2018) K. H. Kim, W.-J. Do, S.-H. Park, Improving resolution of MR images with an adversarial network incorporating images with different contrast, Medical physics 45 (2018) 3120–3131.
  • Pilevar (2011) A. H. Pilevar, CBMIR: content-based image retrieval algorithm for medical image databases, Journal of medical signals and sensors 1 (2011) 12–18.
  • Kumar et al. (2013) A. Kumar, J. Kim, W. Cai, M. Fulham, D. Feng, Content-based medical image retrieval: a survey of applications to multidimensional and multimodality data, Journal of digital imaging 26 (2013) 1025–1039.
  • Faria et al. (2015) A. V. Faria, K. Oishi, S. Yoshida, A. Hillis, M. I. Miller, S. Mori, Content-based image retrieval for brain MRI: an image-searching engine and population-based analysis to utilize past clinical data for future diagnosis, NeuroImage. Clinical 7 (2015) 367–376.
  • Kumar et al. (2015) A. Kumar, F. Nette, K. Klein, M. Fulham, J. Kim, A visual analytics approach using the exploration of multidimensional feature spaces for content-based medical image retrieval., IEEE journal of biomedical and health informatics 19 (2015) 1734–1746.
  • Bedo et al. (2016) M. V. N. Bedo, D. Pereira Dos Santos, M. Ponciano-Silva, P. M. de Azevedo-Marques, A. P. d. L. Ferreira de Carvalho, C. Traina, Endowing a content-based medical image retrieval system with perceptual similarity using ensemble strategy, Journal of digital imaging 29 (2016) 22–37.
  • Muramatsu (2018) C. Muramatsu, Overview on subjective similarity of images for content-based medical image retrieval, Radiological physics and technology (2018).
  • Spanier et al. (2018) A. B. Spanier, N. Caplan, J. Sosna, B. Acar, L. Joskowicz, A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations, International journal of computer assisted radiology and surgery 13 (2018) 165–174.
  • Gordo et al. (????) A. Gordo, J. Almazan, J. Revaud, D. Larlus, End-to-end learning of deep visual representations for image retrieval (????).
  • Liu et al. (2017) P. Liu, J. Guo, C. Wu, D. Cai, Fusion of deep learning and compressed domain features for content-based image retrieval, IEEE Transactions on Image Processing 26 (2017) 5706–5717.
  • Han et al. (2018) J. Han, D. Zhang, G. Cheng, N. Liu, D. Xu, Advanced deep-learning techniques for salient and category-specific object detection: A survey, IEEE Signal Processing Magazine 35 (2018) 84–100.
  • Piplani and Bamman (2018) T. Piplani, D. Bamman, Deepseek: Content based image search & retrieval, arXiv preprint arXiv:1801.03406v2 (2018).
  • Yang et al. (2018) J. Yang, J. Liang, H. Shen, K. Wang, P. L. Rosin, M. Yang, Dynamic match kernel with deep convolutional features for image retrieval, IEEE Transactions on Image Processing 27 (2018) 5288–5302.
  • Sklan et al. (2015) J. E. S. Sklan, A. J. Plassard, D. Fabbri, B. A. Landman, Toward content based image retrieval with deep convolutional neural networks, Proceedings of SPIE–the International Society for Optical Engineering 9417 (2015).
  • Bressan et al. (2018) R. S. Bressan, D. H. A. Alves, L. M. Valerio, P. H. Bugatti, P. T. M. Saito, DOCToR: the role of deep features in content-based mammographic image retrieval, in: Proc. IEEE 31st Int. Symp. Computer-Based Medical Systems (CBMS), pp. 158–163.
  • Qayyum et al. (2017) A. Qayyum, S. M. Anwar, M. Awais, M. Majid, Medical image retrieval using deep convolutional neural network, arXiv preprint arXiv:1703.08472v1 (2017).
  • Chung and Weng (2017) Y.-A. Chung, W.-H. Weng, Learning deep representations of medical images using siamese CNNs with application to content-based image retrieval, arXiv preprint arXiv:1711.08490v2 (2017).
  • Jing et al. (2017) B. Jing, P. Xie, E. Xing, On the automatic generation of medical imaging reports, arXiv preprint arXiv:1711.08195v3 (2017).
  • Li et al. (2018) C. Y. Li, X. Liang, Z. Hu, E. P. Xing, Hybrid retrieval-generation reinforced agent for medical image report generation, arXiv preprint arXiv:1805.08298v1 (2018).
  • Moradi et al. (2018) M. Moradi, A. Madani, Y. Gur, Y. Guo, T. Syeda-Mahmood, Bimodal network architectures for automatic generation of image annotation from text, arXiv preprint arXiv:1809.01610v1 (2018).
  • Zhang et al. (2018) Y. Zhang, D. Y. Ding, T. Qian, C. D. Manning, C. P. Langlotz, Learning to summarize radiology findings, arXiv preprint arXiv:1809.04698v1 (2018).
  • Pons et al. (2016) E. Pons, L. M. M. Braun, M. G. M. Hunink, J. A. Kors, Natural language processing in radiology: A systematic review, Radiology 279 (2016) 329–343.
  • Zech et al. (2018) J. Zech, M. Pain, J. Titano, M. Badgeley, J. Schefflein, A. Su, A. Costa, J. Bederson, J. Lehar, E. K. Oermann, Natural language-based machine learning models for the annotation of clinical radiology reports, Radiology 287 (2018) 570–580.
  • Goff and Loehfelm (2018) D. J. Goff, T. W. Loehfelm, Automated radiology report summarization using an open-source natural language processing pipeline, Journal of digital imaging 31 (2018) 185–192.
  • Gibson et al. (2018) E. Gibson, W. Li, C. Sudre, L. Fidon, D. I. Shakir, G. Wang, Z. Eaton-Rosen, R. Gray, T. Doel, Y. Hu, T. Whyntie, P. Nachev, M. Modat, D. C. Barratt, S. Ourselin, M. J. Cardoso, T. Vercauteren, NiftyNet: a deep-learning platform for medical imaging, Computer methods and programs in biomedicine 158 (2018) 113–122.
  • Li et al. (2017) W. Li, G. Wang, L. Fidon, S. Ourselin, M. J. Cardoso, T. Vercauteren, On the compactness, efficiency, and representation of 3d convolutional networks: Brain parcellation as a pretext task, in: International Conference on Information Processing in Medical Imaging (IPMI).
  • Kamnitsas et al. (2017) K. Kamnitsas, C. Ledig, V. F. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert, B. Glocker, Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation, Medical image analysis 36 (2017) 61–78.
  • Ronneberger et al. (2015) O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 9351 of LNCS, Springer, 2015, pp. 234–241. (available on arXiv:1505.04597 [cs.CV]).
  • Badrinarayanan et al. (2017) V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence (2017).
  • Mardani et al. (2017) M. Mardani, E. Gong, J. Y. Cheng, S. Vasanawala, G. Zaharchuk, M. Alley, N. Thakur, S. Han, W. Dally, J. M. Pauly, et al., Deep generative adversarial networks for compressed sensing automates MRI, arXiv preprint arXiv:1706.00051 (2017).
  • Parisot et al. (2017) S. Parisot, S. I. Ktena, E. Ferrante, M. Lee, R. G. Moreno, B. Glocker, D. Rueckert, Spectral graph convolutions for population-based disease prediction, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp. 177–185.
  • Marcus (2018) G. Marcus, Deep learning: A critical appraisal, arXiv preprint arXiv:1801.00631 (2018).
  • Lipton and Steinhardt (2018) Z. C. Lipton, J. Steinhardt, Troubling Trends in Machine Learning Scholarship (2018).
  • Zhang et al. (2016) C. Zhang, S. Bengio, M. Hardt, B. Recht, O. Vinyals, Understanding deep learning requires rethinking generalization, arXiv preprint arXiv:1611.03530 (2016).
  • Fredrikson et al. (2015) M. Fredrikson, S. Jha, T. Ristenpart, Model inversion attacks that exploit confidence information and basic countermeasures, in: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ACM, pp. 1322–1333.
  • Shokri et al. (2017) R. Shokri, M. Stronati, C. Song, V. Shmatikov, Membership inference attacks against machine learning models, in: Security and Privacy (SP), 2017 IEEE Symposium on, IEEE, pp. 3–18.
  • McMahan et al. (2017) B. McMahan, E. Moore, D. Ramage, S. Hampson, B. A. y Arcas, Communication-Efficient Learning of Deep Networks from Decentralized Data, in: A. Singh, J. Zhu (Eds.), Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, PMLR, Fort Lauderdale, FL, USA, 2017, pp. 1273–1282.
  • Papernot et al. (2016) N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, K. Talwar, Semi-supervised knowledge transfer for deep learning from private training data, arXiv preprint arXiv:1610.05755 (2016).
  • Papernot et al. (2018) N. Papernot, S. Song, I. Mironov, A. Raghunathan, K. Talwar, Ú. Erlingsson, Scalable Private Learning with PATE, arXiv preprint arXiv:1802.08908 (2018).
  • McMahan et al. (2018) H. B. McMahan, D. Ramage, K. Talwar, L. Zhang, Learning Differentially Private Recurrent Language Models, in: International Conference on Learning Representations.
  • Chang et al. (2018) K. Chang, N. Balachandar, C. Lam, D. Yi, J. Brown, A. Beers, B. Rosen, D. L. Rubin, J. Kalpathy-Cramer, Distributed deep learning networks among institutions for medical imaging, Journal of the American Medical Informatics Association : JAMIA 25 (2018) 945–954.
  • Zech et al. (2018) J. R. Zech, M. A. Badgeley, M. Liu, A. B. Costa, J. J. Titano, E. K. Oermann, Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study, PLoS medicine 15 (2018).
  • Lundervold et al. (2017) A. Lundervold, A. Lundervold, J. Rørvik, Fast semi-supervised segmentation of the kidneys in DCE-MRI using convolutional neural networks and transfer learning, 2017.
  • Lundervold et al. (2018) A. Lundervold, K. Sprawka, A. Lundervold, Fast estimation of kidney volumes and time courses in DCE-MRI using convolutional neural networks, 2018.
  • Hinton et al. (2011) G. E. Hinton, A. Krizhevsky, S. D. Wang, Transforming auto-encoders, in: International Conference on Artificial Neural Networks, Springer, pp. 44–51.
  • Sabour et al. (2017) S. Sabour, N. Frosst, G. E. Hinton, Dynamic routing between capsules, in: Advances in Neural Information Processing Systems, pp. 3856–3866.
  • Mnih et al. (2014) V. Mnih, N. Heess, A. Graves, et al., Recurrent models of visual attention, in: Advances in neural information processing systems, pp. 2204–2212.
  • Xu et al. (2015) K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, Y. Bengio, Show, attend and tell: Neural image caption generation with visual attention, in: International conference on machine learning, pp. 2048–2057.
  • Castelvecchi (2016) D. Castelvecchi, Can we open the black box of AI?, Nature News 538 (2016) 20.
  • Olah et al. (2018) C. Olah, A. Satyanarayan, I. Johnson, S. Carter, L. Schubert, K. Ye, A. Mordvintsev, The building blocks of interpretability, Distill 3 (2018).
  • Montavon et al. (2017) G. Montavon, W. Samek, K.-R. Müller, Methods for interpreting and understanding deep neural networks, Digital Signal Processing (2017).
  • Yosinski et al. (2015) J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, H. Lipson, Understanding neural networks through deep visualization, in: Deep Learning Workshop, 31st International Conference on Machine Learning, 2015.
  • Olah et al. (2017) C. Olah, A. Mordvintsev, L. Schubert, Feature visualization, Distill 2 (2017).
  • Hohman et al. (2018) F. M. Hohman, M. Kahng, R. Pienta, D. H. Chau, Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers, IEEE Transactions on Visualization and Computer Graphics (2018).
  • Neal (1995) R. M. Neal, Bayesian learning for neural networks, Ph.D. thesis, University of Toronto, 1995.
  • MacKay (1992) D. J. MacKay, A practical Bayesian framework for backpropagation networks, Neural computation 4 (1992) 448–472.
  • Dayan et al. (1995) P. Dayan, G. E. Hinton, R. M. Neal, R. S. Zemel, The Helmholtz machine, Neural computation 7 (1995) 889–904.
  • Li and Gal (2017) Y. Li, Y. Gal, Dropout Inference in Bayesian Neural Networks with Alpha-divergences, in: International Conference on Machine Learning, pp. 2052–2061.
  • Leibig et al. (2017) C. Leibig, V. Allken, M. S. Ayhan, P. Berens, S. Wahl, Leveraging uncertainty information from deep neural networks for disease detection, Scientific reports 7 (2017) 17816.
  • Kendall et al. (2015) A. Kendall, V. Badrinarayanan, R. Cipolla, Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding, arXiv preprint arXiv:1511.02680 (2015).
  • Feinman et al. (2017) R. Feinman, R. R. Curtin, S. Shintre, A. B. Gardner, Detecting adversarial samples from artifacts, arXiv preprint arXiv:1703.00410 (2017).
  • Sharp and Hockfield (2017) P. Sharp, S. Hockfield, Convergence: The future of health, Science 355 (2017) 589.
  • Hood and Flores (2012) L. Hood, M. Flores, A personal view on systems medicine and the emergence of proactive P4 medicine: predictive, preventive, personalized and participatory, New biotechnology 29 (2012) 613–624.