Deep CT to MR Synthesis using Paired and Unpaired Data

Cheng-Bin Jin, Xuenan Cui*
School of Information and Communication Engineering, Inha University, South Korea
chengbinjin@inha.edu, xncui@inha.ac.kr
Wonmo Jung
Acupuncture and Meridian Science Research Center, Kyung Hee University, South Korea
Seongsu Joo, Ensik Park
Team Elysium Inc., South Korea
Ahn Young Saem
Department of Computer Engineering, INHA University, South Korea
In Ho Han, Jae Il Lee
Department of Neurosurgery, Pusan National University Hospital, South Korea
*Corresponding author
Abstract

MR imaging will play a very important role in radiotherapy treatment planning for the segmentation of tumor volumes and organs. However, the use of MR-based radiotherapy is limited by its high cost and by the increased use of metal implants, such as cardiac pacemakers and artificial joints, in an aging society. To improve the accuracy of CT-based radiotherapy planning, we propose a synthetic approach that translates a CT image into an MR image using paired and unpaired training data. In contrast to current synthetic methods for medical images, which depend on sparse pairwise-aligned data or plentiful unpaired data, the proposed approach alleviates the rigid registration challenge of paired training and overcomes the context-misalignment problem of unpaired training. A generative adversarial network was trained to transform 2D brain CT image slices into 2D brain MR image slices, combining adversarial loss, dual cycle-consistent loss, and voxel-wise loss. The experiments were analyzed using CT and MR images of 202 patients. Qualitative and quantitative comparisons against independent paired training and unpaired training methods demonstrate the superiority of our approach.

 

Preprint. Work in progress.

1 Introduction

CT-based radiotherapy is currently the standard for radiotherapy planning, and it works well in practice. Recently, radiotherapy devices using magnetic resonance (MR) imaging have been developed, because MR imaging provides much better soft-tissue contrast than a computed tomography (CT) scan. In particular, the use of MR-based radiotherapy is increasing for brain tumors, and MR imaging will play a very important role in radiotherapy planning in the near future. However, MR imaging usually costs more than a CT scan, and a complete MR scan takes about 20 to 30 minutes, whereas a CT scan is usually completed within 5 minutes. In addition, compared with MR imaging, a CT scan can also differentiate soft tissue, especially with an intravenous contrast agent, and offers higher imaging resolution and fewer motion artifacts owing to its high imaging speed. Furthermore, the use of MR-based radiotherapy is limited in situations where metal implants, such as cardiac pacemakers and artificial joints, are present, and the use of such implants is increasing in an aging society. Much of the concern about CT scans relates to the harm of radiation. However, there is no risk to patients, even for a patient with lung tuberculosis who undergoes several X-rays in one year; the real risk is to professionals, namely technicians and radiologists. Of course, this is a controversial topic among experts. We believe that the raw data of a CT scan contain latent information that humans are not aware of. To improve the accuracy of CT-based radiotherapy planning, this paper attempts to increase the contrast of soft tissues by translating CT images into MR images.

Recently, advances in deep learning and machine learning for medical computer-aided diagnosis (CAD) [1, 2] have allowed systems to provide information on potential abnormalities in medical images. Many methods have synthesized a CT image from the available MR image for MR-only radiotherapy treatment planning [3]. The MR-based synthetic CT generation method [4] used deep convolutional neural networks (CNN) with paired data, which was rigidly aligned by minimizing voxel-wise differences between CT and MR images. However, minimizing the voxel-wise loss between the synthesized image and the reference image during training may lead to blurry generated outputs. To obtain clearer results, Nie et al. [5] proposed a method that combined the voxel-wise loss with an adversarial loss in a generative adversarial network (GAN) [6]. Concurrent work [7] proposed a similar idea to synthesize positron emission tomography (PET) images from CT images using the multiple-channel information of the pix2pix framework by Isola et al. [8]. Ben-Cohen et al. [9] combined a fully convolutional network (FCN) [10] with the pix2pix model [8], blending the two outputs to generate a synthesized PET image from a CT image.

Although the combination of the voxel-wise loss with an adversarial loss addresses the problem of blurry outputs, the voxel-wise loss depends on the availability of large numbers of aligned CT and MR images. Obtaining rigidly aligned data can be difficult and expensive. However, most medical institutions have considerable unpaired data that were scanned for different purposes and different radiotherapy treatments. Using unpaired data would increase the amount of training data substantially and alleviate many of the constraints of current deep learning-based synthetic systems (Figure 1). Unlike the paired data-based methods in [4, 5, 7, 9], Wolterink et al. [11] used a CycleGAN model [12], which performs image-to-image translation with unpaired images, to synthesize CT images from MR images. In an unpaired GAN paradigm, we want the synthesized image not only to look real, but also to be paired up with the input image in a meaningful way. Therefore, a cycle-consistency loss is enforced to translate the synthesized image back to the original image domain and to minimize the difference between the input and the reconstructed image as a regularization. Because of the large amount of unpaired data, the synthesized images are more realistic than the results of paired training methods. However, compared with the voxel-wise loss on paired data, the cycle-consistent loss still has certain limitations in correctly translating the contextual information of soft tissues and blood vessels.

In this paper, we propose a synthetic approach to produce synthesized MR images from brain CT images. Our approach combines adversarial loss, dual cycle-consistent loss, and voxel-wise loss to train on paired and unpaired data together. The experimental results indicate that training with these two kinds of data yields more accurate outputs for synthetic systems. To the best of our knowledge, this is the first study that attempts to translate a CT image to an MR image.

Figure 1: Left: Deep networks trained with paired data, which consist of CT and MR slices taken from the same patient at the same anatomical location. Paired data must be deliberately collected and aligned, which is difficult; however, paired data give the network far more accurate regression constraints. Right: Deep networks trained with unpaired data, which consist of CT and MR slices taken from different patients at different anatomical locations. A considerable amount of unpaired data is available.

2 Data

Our dataset consisted of the brain CT and MR images of 202 patients who were scanned for radiotherapy treatment planning for brain tumors. Among these patients, 98 had only CT images and 84 had only MR images; these data constituted the unpaired data. For the remaining 20 patients, both CT and MR images were acquired during radiation treatment. CT images were acquired helically on a GE Revolution CT scanner (GE Healthcare, Chicago, Illinois, United States) at 120 kVp and 450 mA. T2 3D MR images (repetition time, 4320 ms; echo time, 95 ms; flip angle, 150°) were obtained with a Siemens 3.0T Trio TIM MR scanner (Siemens, Erlangen, Germany). To generate paired sets of CT and MR images, the CT and MR images of the same patient were aligned and registered using an affine transformation based on mutual information, and the CT and MR images were resampled to the same voxel size. Before registration, the skull area in the CT images was removed by masking all voxels above a manually selected threshold, and the skull-stripped MR brain images were registered against them. In this study, AFNI's 3dAllineate function was used for the registration process [13]. The affine transformation parameters obtained were then used to register the resampled CT and MR images with the skull. To maximize information inside the brain area, CT images were windowed with a window width of 80 Hounsfield units (HU) and a window center of 40 HU (Figure 2).
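To make the preprocessing concrete, the following is a minimal NumPy sketch of the HU windowing step described above; the function name and array conventions are ours, not from the paper.

```python
import numpy as np

def window_ct(hu_slice: np.ndarray, center: float = 40.0, width: float = 80.0) -> np.ndarray:
    """Clip a CT slice given in Hounsfield units to the window
    [center - width/2, center + width/2] and rescale to [0, 1]."""
    lo = center - width / 2.0   # 0 HU for the 40/80 window used here
    hi = center + width / 2.0   # 80 HU
    windowed = np.clip(hu_slice.astype(np.float32), lo, hi)
    return (windowed - lo) / (hi - lo)
```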

Figure 2: Examples showing registration between CT and MR images after the mutual-information affine transform. After registration, the CT and MR images were spatially well aligned.

3 Methods

The proposed approach has a structure similar to CycleGAN [12], which contains a forward and a backward cycle. However, our model has a dual cycle-consistent term for paired and unpaired training data. The dual cycle-consistent term includes four cycles: forward unpaired-data, backward unpaired-data, forward paired-data, and backward paired-data cycles (Figure 3).

The forward unpaired-data cycle contains three independent networks, each with a different goal. The synthesis network $Syn_{MR}$ attempts to translate a CT image into a realistic MR image, such that the output cannot be distinguished from "real" MR images by the adversarially trained discriminator $D_{MR}$, which is trained to do as well as possible at detecting the synthetic "fakes." In addition, to address the well-known problem of mode collapse, the network $Syn_{CT}$ is trained to translate the synthesized MR image back to the original CT domain. To improve training stability, the backward unpaired-data cycle is also enforced; it translates an MR image to a CT image and works with the opposite logic to the forward unpaired-data cycle. Unlike in the unpaired-data cycles, the discriminators in the paired-data cycles do not just discriminate between real and synthesized images; they also observe a pair of CT and MR images to differentiate between real and synthesized pairs. In addition, the voxel-wise loss between the synthesized image and the reference image is included in the paired-data cycles. The synthesis networks $Syn_{MR}$ and $Syn_{CT}$ in the paired-data cycles work exactly as in the unpaired-data cycles.

Figure 3: The dual cycle-consistent structure consists of (a) a forward unpaired-data cycle, (b) a backward unpaired-data cycle, (c) a forward paired-data cycle, and (d) a backward paired-data cycle. In the forward unpaired-data cycle, the input CT image is translated to an MR image by the synthesis network $Syn_{MR}$. The synthesized MR image is translated by $Syn_{CT}$ to a CT image that approximates the original CT image, and $D_{MR}$ is trained to distinguish between real and synthesized MR images. In the backward unpaired-data cycle, a CT image is instead synthesized from an input MR image by the network $Syn_{CT}$, $Syn_{MR}$ reconstructs the MR image from the synthesized CT image, and $D_{CT}$ is trained to distinguish between real and synthesized CT images. The forward and backward paired-data cycles are the same as the forward and backward unpaired-data cycles above, except that $D_{MR}$ and $D_{CT}$ do not just discriminate between real and synthesized images; they learn to classify between real and synthesized pairs. In addition, the voxel-wise loss between the synthesized image and the reference image is included in the paired-data cycles.

3.1 Objective

Both networks in the GAN were trained simultaneously, with the discriminators $D_{MR}$ and $D_{CT}$ estimating the probability that a sample came from real data rather than from the synthesis networks, while the synthesis networks $Syn_{MR}$ and $Syn_{CT}$ were trained to produce realistic synthetic data that could not be distinguished from real data by the discriminators. We applied adversarial losses [6] to the synthesis network $Syn_{MR}\colon CT \rightarrow MR$ and its discriminator $D_{MR}$, and express the objective as:

$$\mathcal{L}_{GAN}(Syn_{MR}, D_{MR}) = \mathbb{E}_{y \sim p(MR)}\big[\log D_{MR}(y)\big] + \mathbb{E}_{x \sim p(CT)}\big[\log\big(1 - D_{MR}(Syn_{MR}(x))\big)\big] + \mathbb{E}_{(x,y) \sim p(CT,MR)}\big[\log D_{MR}(x, y)\big] + \mathbb{E}_{(x,y) \sim p(CT,MR)}\big[\log\big(1 - D_{MR}(x, Syn_{MR}(x))\big)\big] \tag{1}$$

where $Syn_{MR}$ tries to translate a CT image $x$ into an image that looks similar to images from the MR domain. In the first and second terms of Eq. (1), the discriminator $D_{MR}$ aims to distinguish between synthesized and real MR images for the unpaired data. For the paired data, the discriminator also tries to discriminate between real pairs $(x, y)$ and synthesized pairs $(x, Syn_{MR}(x))$, as in the third and fourth terms of Eq. (1). The synthesis network tries to minimize this objective against an adversarial discriminator that tries to maximize it, i.e., $\min_{Syn_{MR}} \max_{D_{MR}} \mathcal{L}_{GAN}(Syn_{MR}, D_{MR})$. The other synthesis network $Syn_{CT}$ and its discriminator $D_{CT}$ have a similar adversarial loss, i.e., $\min_{Syn_{CT}} \max_{D_{CT}} \mathcal{L}_{GAN}(Syn_{CT}, D_{CT})$.

To stabilize the training procedure, the negative log-likelihood objective for the unpaired data was replaced by a least-squares loss [14] in our work. Hence, the discriminator aims to assign the label 1 to real MR images and the label 0 to synthesized MR images. However, we found that keeping the negative log-likelihood objective for the paired data generated higher-quality results. Eq. (1) then becomes:

$$\mathcal{L}_{GAN}(Syn_{MR}, D_{MR}) = \mathbb{E}_{y \sim p(MR)}\big[(D_{MR}(y) - 1)^2\big] + \mathbb{E}_{x \sim p(CT)}\big[D_{MR}(Syn_{MR}(x))^2\big] + \mathbb{E}_{(x,y) \sim p(CT,MR)}\big[\log D_{MR}(x, y)\big] + \mathbb{E}_{(x,y) \sim p(CT,MR)}\big[\log\big(1 - D_{MR}(x, Syn_{MR}(x))\big)\big] \tag{2}$$
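For illustration, the hybrid discriminator objective in Eq. (2) can be sketched in PyTorch as follows. This is our reading of the loss, not the authors' code; the callables d_unpaired and d_paired stand for the unpaired and paired heads of $D_{MR}$, and all names are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_d_loss(d_unpaired, d_paired, real_mr, fake_mr, ct, mr_ref, fake_mr_paired):
    """Discriminator loss mixing a least-squares term (unpaired data) with a
    negative log-likelihood term (paired data), per Eq. (2).

    d_unpaired(mr)   -> patch logits for a single MR image
    d_paired(ct, mr) -> patch logits for a (CT, MR) pair
    """
    # Least-squares GAN loss on unpaired data: label 1 for real, 0 for fake.
    p_real = d_unpaired(real_mr)
    p_fake = d_unpaired(fake_mr.detach())
    ls_term = ((p_real - 1.0) ** 2).mean() + (p_fake ** 2).mean()

    # Negative log-likelihood (binary cross-entropy) on paired data.
    q_real = d_paired(ct, mr_ref)
    q_fake = d_paired(ct, fake_mr_paired.detach())
    nll_term = (F.binary_cross_entropy_with_logits(q_real, torch.ones_like(q_real))
                + F.binary_cross_entropy_with_logits(q_fake, torch.zeros_like(q_fake)))

    return ls_term + nll_term
```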

The dual cycle-consistent loss is enforced to further reduce the space of possible mapping functions for the paired and unpaired training data. In the forward cycle, for each image $x$ from the CT domain, the image translation cycle should be able to bring $x$ back to the original image, i.e., $Syn_{CT}(Syn_{MR}(x)) \approx x$. Similarly, for each image $y$ from the MR domain, $Syn_{MR}$ and $Syn_{CT}$ should also satisfy a backward cycle consistency: $Syn_{MR}(Syn_{CT}(y)) \approx y$. The dual cycle-consistent loss is expressed as:

$$\mathcal{L}_{cyc}(Syn_{MR}, Syn_{CT}) = \mathbb{E}_{x \sim p(CT)}\big[\|Syn_{CT}(Syn_{MR}(x)) - x\|_1\big] + \mathbb{E}_{y \sim p(MR)}\big[\|Syn_{MR}(Syn_{CT}(y)) - y\|_1\big] \tag{3}$$

Previous approaches [15] have found it beneficial to combine the adversarial loss with a more traditional loss, such as the L1 distance. For the paired data $(x, y)$, the synthesis network $Syn_{MR}$ is tasked not only to generate realistic MR images, but also to stay near the reference $y$ of the input $x$. Although we do not need the synthesis network $Syn_{CT}$ as a final product, adding the same constraint to $Syn_{CT}$ yields higher-quality synthesized MR images. The L1 loss terms for $Syn_{MR}$ and $Syn_{CT}$ are:

$$\mathcal{L}_{L1}(Syn_{MR}, Syn_{CT}) = \mathbb{E}_{(x,y) \sim p(CT,MR)}\big[\|Syn_{MR}(x) - y\|_1\big] + \mathbb{E}_{(x,y) \sim p(CT,MR)}\big[\|Syn_{CT}(y) - x\|_1\big] \tag{4}$$

The overall objective is:

$$\mathcal{L} = \mathcal{L}_{GAN}(Syn_{MR}, D_{MR}) + \mathcal{L}_{GAN}(Syn_{CT}, D_{CT}) + \lambda_1 \mathcal{L}_{cyc}(Syn_{MR}, Syn_{CT}) + \lambda_2 \mathcal{L}_{L1}(Syn_{MR}, Syn_{CT}) \tag{5}$$

where $\lambda_1$ and $\lambda_2$ control the relative importance of the adversarial loss, the dual cycle-consistent loss, and the voxel-wise loss. We aim to solve:

$$Syn_{MR}^{*}, Syn_{CT}^{*} = \arg \min_{Syn_{MR}, Syn_{CT}} \max_{D_{MR}, D_{CT}} \mathcal{L}(Syn_{MR}, Syn_{CT}, D_{MR}, D_{CT}) \tag{6}$$
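To make Eqs. (3)-(6) concrete, a hedged PyTorch sketch of the generators' side of the objective follows. The $\lambda$ weights and variable names are placeholders, since the paper does not publish code or the empirical weight values; the paired adversarial terms are omitted for brevity.

```python
import torch

def generator_loss(syn_mr, syn_ct, d_mr, d_ct, ct_u, mr_u, ct_p, mr_p,
                   lambda_cyc=10.0, lambda_l1=10.0):  # weights are assumed, not from the paper
    """One evaluation of the overall objective in Eq. (5) from the generators' side."""
    l1 = torch.nn.functional.l1_loss

    # Adversarial terms in least-squares form for unpaired data (Eq. 2).
    fake_mr, fake_ct = syn_mr(ct_u), syn_ct(mr_u)
    adv = ((d_mr(fake_mr) - 1.0) ** 2).mean() + ((d_ct(fake_ct) - 1.0) ** 2).mean()

    # Dual cycle-consistency (Eq. 3), enforced on unpaired and paired data alike.
    cyc = l1(syn_ct(fake_mr), ct_u) + l1(syn_mr(fake_ct), mr_u)
    fake_mr_p, fake_ct_p = syn_mr(ct_p), syn_ct(mr_p)
    cyc = cyc + l1(syn_ct(fake_mr_p), ct_p) + l1(syn_mr(fake_ct_p), mr_p)

    # Voxel-wise L1 against the aligned references (Eq. 4), paired data only.
    vox = l1(fake_mr_p, mr_p) + l1(fake_ct_p, ct_p)

    return adv + lambda_cyc * cyc + lambda_l1 * vox
```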
Figure 4: Flow diagram of the discriminator $D_{MR}$ in the synthetic system. $D_{MR}$ has extra head and extra tail convolutional layers for the different inputs and loss functions of the paired and unpaired data. The discriminator $D_{CT}$ has the same architecture as $D_{MR}$.

3.2 Implementation

For the architecture of the synthesis networks $Syn_{MR}$ and $Syn_{CT}$, we utilized the architecture of Johnson et al. [16], a 2D fully-convolutional network with one convolutional layer, followed by two strided convolutional layers, nine residual blocks [17], two fractionally-strided convolutional layers, and one final convolutional layer. Instance normalization [18] and ReLU followed each convolution except for the last convolutional layer. The synthesis network takes a 2D image slice as input and generates an output image of the same size.
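A minimal PyTorch sketch of this generator follows, assuming the conventional 64 base channels and a Tanh output head, neither of which is stated in the paper.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block with two 3x3 convolutions and instance normalization."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.block(x)

def johnson_generator(in_ch=1, out_ch=1, ngf=64, n_blocks=9):
    """Johnson et al.-style generator as described in Sec. 3.2.
    Channel widths (ngf) and the Tanh output are conventional assumptions."""
    layers = [nn.ReflectionPad2d(3), nn.Conv2d(in_ch, ngf, 7),
              nn.InstanceNorm2d(ngf), nn.ReLU(inplace=True)]
    # Two stride-2 convolutions.
    for mult in (1, 2):
        layers += [nn.Conv2d(ngf * mult, ngf * mult * 2, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(ngf * mult * 2), nn.ReLU(inplace=True)]
    # Nine residual blocks.
    layers += [ResidualBlock(ngf * 4) for _ in range(n_blocks)]
    # Two fractionally-strided (transposed) convolutions.
    for mult in (4, 2):
        layers += [nn.ConvTranspose2d(ngf * mult, ngf * mult // 2, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.InstanceNorm2d(ngf * mult // 2), nn.ReLU(inplace=True)]
    # Final convolution with no normalization, per the text.
    layers += [nn.ReflectionPad2d(3), nn.Conv2d(ngf, out_ch, 7), nn.Tanh()]
    return nn.Sequential(*layers)
```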

For the discriminators $D_{MR}$ and $D_{CT}$, we adapted PatchGANs [8], which try to classify each patch in an image as real or fake. This way, the discriminators can better focus on high-frequency information in local image patches. The networks $D_{MR}$ and $D_{CT}$ used the same architecture, which had one convolution as an extra head for the different input data, four strided convolutions as a shared network, and two convolutions as an extra tail for the different tasks. Except for the first and last convolutions, each convolutional layer was followed by instance normalization [18] and a leaky ReLU [19] (Figure 4).
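A possible PyTorch reading of this discriminator follows. The channel widths, kernel sizes, and the 2-channel stacking of a (CT, MR) pair are our assumptions; only the head/shared-body/tail split comes from the text and Figure 4.

```python
import torch.nn as nn

class PairAwareDiscriminator(nn.Module):
    """PatchGAN-style discriminator per Figure 4: one convolution as an extra
    head per input type, four shared strided convolutions, and two convolutions
    as an extra tail per task. Layer sizes are assumptions."""
    def __init__(self, ndf=64):
        super().__init__()
        # Extra heads: 1-channel unpaired MR vs. 2-channel stacked (CT, MR) pair.
        self.head_unpaired = nn.Conv2d(1, ndf, 4, stride=2, padding=1)
        self.head_paired = nn.Conv2d(2, ndf, 4, stride=2, padding=1)
        body, ch = [], ndf
        for _ in range(4):  # shared strided convolutions
            nxt = min(ch * 2, ndf * 8)
            body += [nn.Conv2d(ch, nxt, 4, stride=2, padding=1),
                     nn.InstanceNorm2d(nxt), nn.LeakyReLU(0.2, inplace=True)]
            ch = nxt
        self.body = nn.Sequential(*body)

        def tail():  # two convolutions; the last has no norm or activation
            return nn.Sequential(
                nn.Conv2d(ch, ch, 4, padding=1), nn.InstanceNorm2d(ch),
                nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(ch, 1, 4, padding=1))
        self.tail_unpaired, self.tail_paired = tail(), tail()

    def forward(self, x, paired=False):
        h = self.head_paired(x) if paired else self.head_unpaired(x)
        h = nn.functional.leaky_relu(h, 0.2)  # first conv: no normalization
        h = self.body(h)
        return (self.tail_paired if paired else self.tail_unpaired)(h)
```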

To optimize our networks, we used minibatch SGD with the Adam optimizer [20] and a batch size of 1. The learning rate was kept fixed for the first part of training and then decayed linearly to zero over the remaining iterations. For all experiments, we set $\lambda_1$ and $\lambda_2$ in Eq. (5) empirically. At inference time, we ran only the synthesis network $Syn_{MR}$, given a CT image.
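A sketch of this training setup under stated assumptions: the learning rate, iteration counts, and Adam betas below are placeholders (the paper elides the exact values), and syn_mr/syn_ct are the generator modules from the sketch above.

```python
import itertools
import torch

# Hypothetical values; the paper does not state the learning rate or iteration counts.
LR, CONST_ITERS, DECAY_ITERS = 2e-4, 100_000, 100_000

params = itertools.chain(syn_mr.parameters(), syn_ct.parameters())
optimizer = torch.optim.Adam(params, lr=LR, betas=(0.5, 0.999))  # betas assumed

def lr_lambda(it):
    """Constant rate for the first CONST_ITERS, then linear decay to zero."""
    if it < CONST_ITERS:
        return 1.0
    return max(0.0, 1.0 - (it - CONST_ITERS) / DECAY_ITERS)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```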

4 Experiments and Results

Among the data of the 202 patients, all of the unpaired data were used as training data. The paired data were separated into a training set containing the data of 10 patients and a separate test set containing the CT images and corresponding reference MR images of the remaining 10 patients. Each CT or MR volume comprised a stack of 2D axial image slices, which were resampled to a common size and grayscale range for the CT and MR data.

For training, we augmented the training data with random online transforms:

  • Flip: Batch data were horizontally flipped with a fixed probability.

  • Translation: Batch data were randomly cropped to the network input size from padded images.

  • Rotation: Batch data were rotated by a small random angle.

The paired CT and MR images were augmented with the same random factors, whereas the CT and MR images in the unpaired data were augmented independently. Training the proposed approach took on the order of hours using a single GeForce GTX 1080Ti GPU. At inference time, the system required on the order of milliseconds to translate a single CT slice into an MR slice.
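A minimal sketch of this paired-versus-unpaired augmentation policy using torchvision; the flip probability and rotation range are illustrative placeholders, and the random crop is omitted for brevity.

```python
import random
import torchvision.transforms.functional as TF

def augment(ct, mr, paired=True, max_angle=5.0):
    """Random flip and rotation as described above. Paired data reuse the same
    random factors for CT and MR; unpaired data draw independent factors."""
    def draw():
        return random.random() < 0.5, random.uniform(-max_angle, max_angle)

    flip_ct, angle_ct = draw()
    flip_mr, angle_mr = (flip_ct, angle_ct) if paired else draw()

    if flip_ct:
        ct = TF.hflip(ct)
    if flip_mr:
        mr = TF.hflip(mr)
    return TF.rotate(ct, angle_ct), TF.rotate(mr, angle_mr)
```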

4.1 Evaluation Metrics

Real and synthesized MR images were compared using the mean absolute error (MAE)

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \big| \mathrm{MR}(i) - Syn_{MR}(I_{CT}(i)) \big| \tag{7}$$

where $i$ is the index of the 2D axial image slice in the aligned volumes, and $N$ is the number of slices in the reference MR images. MAE measures the average distance between each pixel of the synthesized and the reference MR image. In addition, the synthesized MR images were evaluated using the peak signal-to-noise ratio (PSNR), as proposed in [5, 7, 11]:

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \big( \mathrm{MR}(i) - Syn_{MR}(I_{CT}(i)) \big)^2 \tag{8}$$

$$\mathrm{PSNR} = 10 \log_{10} \frac{MAX^2}{\mathrm{MSE}} \tag{9}$$

where $MAX$ is the maximum possible intensity value of the image. PSNR measures the ratio between the maximum possible intensity value and the mean square error (MSE) of the synthesized and reference MR images.
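Both metrics are straightforward to compute; a NumPy sketch, assuming 8-bit grayscale images so that MAX = 255 (the paper does not state the value):

```python
import numpy as np

def mae(reference: np.ndarray, synthesized: np.ndarray) -> float:
    """Mean absolute error over aligned voxels, per Eq. (7)."""
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    return float(np.mean(np.abs(diff)))

def psnr(reference: np.ndarray, synthesized: np.ndarray, max_value: float = 255.0) -> float:
    """PSNR per Eqs. (8)-(9); max_value = 255 assumes 8-bit grayscale."""
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_value ** 2 / mse))
```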

Figure 5: From left to right: input CT, synthesized MR, reference MR, and the absolute error between the real and synthesized MR images.

4.2 Analysis of MR synthesis using paired and unpaired data

We first compared the synthesized MR images with reference MR images that had been carefully registered to become paired data with the CT images. For brevity, we refer to our method as MR-GAN. Figure 5 shows two examples of an input CT image, the synthesized MR image obtained by MR-GAN, the reference MR image, and the absolute difference map between the synthesized and reference MR images. MR-GAN learned to differentiate between anatomical structures with similar pixel intensity in CT images, such as bones, gyri, and soft brain tissues. The largest differences occur in areas of bony structure, and the smallest differences are found in the soft brain tissues. This may be partly due to misalignment between the CT and reference MR images, and partly because the CT image provides more detail about bony structures, complementing the synthesized MR, which focuses on soft brain tissues.

Table 1 shows a quantitative evaluation using MAE and PSNR to compare the different methods on the test set. We compared the proposed method with independent training using paired or unpaired data alone. To train the paired-data system, a synthesis network and a discriminator network with the same architectures as ours were trained using a combination of adversarial loss and voxel-wise loss, as in the pix2pix framework [8]. To train the unpaired-data system, the cycle-consistent structure of the CycleGAN model [12] was used, which is the same as the forward and backward unpaired-data cycles of our approach, shown in Figure 3. To ensure a fair comparison, we implemented all the baselines using the same architecture and implementation details as our method.

Although trained with limited paired data, the model using paired training data outperformed the CycleGAN model trained on unpaired data in our experiments. Table 1 indicates that our approach of training with paired and unpaired data together achieved the best performance across all measurements, with the lowest MAE and highest PSNR compared with conventional paired and unpaired training. Figure 6 shows a qualitative comparison between paired training, unpaired training, and our approach. The results of training with paired data seemed good but were blurry. The images obtained with unpaired training were realistic, but lost anatomical information in areas of soft brain tissue and contained artifacts in areas with bony structures. Because our method learns the translation using both paired and unpaired data, the quality of our results closely approximates the reference MR images, and in some details our results are even clearer than the reference MR images.

During the training of MR-GAN, dual cycle-consistency is explicitly imposed bidirectionally. Hence, an input CT image translated to an MR image by the model should be successfully translated back to the original CT domain. Figure 7 shows an input CT image, the corresponding synthesized MR images from CycleGAN and MR-GAN, their reconstructed CT images, and the relative difference maps. We observed that the reconstructed CT images were very close to the input images. The relative differences are distributed along the contour of the bone, and the reconstructed CT image from MR-GAN is smoother than that from the CycleGAN model because of the more accurate $Syn_{MR}(I_{CT})$, which acts like a latent vector in an auto-encoder [21].

                   MAE                                  PSNR
            Paired      Unpaired    Ours          Paired      Unpaired    Ours
Patient01   24.20       27.71       22.76         62.82       62.45       64.65
Patient02   17.82       24.12       18.27         64.91       63.05       65.93
Patient03   22.01       22.45       22.27         63.59       63.83       63.55
Patient04   18.23       23.64       16.75         65.28       63.44       65.76
Patient05   18.26       22.82       17.68         64.92       64.04       65.97
Patient06   20.52       20.41       17.57         64.87       64.78       65.92
Patient07   20.63       18.72       16.55         64.55       64.14       66.28
Patient08   19.42       22.77       18.30         64.10       63.22       65.82
Patient09   19.12       16.98       18.57         64.93       66.19       65.43
Patient10   23.23       29.76       24.91         63.81       62.60       64.17
Avg±sd      20.34±2.20  22.94±3.62  19.36±2.73    64.28±0.81  63.77±1.06  65.35±0.86
Table 1: MAE and PSNR evaluation between synthesized and real MR images when training with paired data, unpaired data, and paired together with unpaired data (Ours).
Figure 6: From left to right: input CT image, synthesized MR image with paired training, synthesized MR image with unpaired training, synthesized MR image with paired and unpaired training (ours), and the reference MR image.
Figure 7: From left to right: input CT image, synthesized MR image, reconstructed CT image, and the relative difference error between the input and reconstructed CT images.

5 Discussion

This paper has shown that a synthetic system can be trained using paired and unpaired data to synthesize an MR image from a CT image. Unlike other methods, the proposed approach utilizes the adversarial loss from a discriminator network, a dual cycle-consistent loss over paired and unpaired training data, and a voxel-wise loss on paired data to synthesize realistic-looking MR images. The quantitative evaluation in Table 1 shows that the average correspondence between synthesized and reference MR images in our approach is much better than in the other methods; the synthesized images are closer to the reference, achieving the lowest MAE of 19.36 and the highest PSNR of 65.35 on average. Slight misalignments between CT images and reference MR images may have a large effect on the quantitative evaluation. Although a quantitative measurement may be the gold standard for assessing the performance of a method, we found that numerical differences in the quantitative evaluation do not correctly reflect the qualitative differences. In future work, we will evaluate the accuracy of synthesized MR images through perceptual studies with medical experts.

A synthetic system using the CycleGAN model [12] and trained with unpaired data generated realistic results. However, the results had poor anatomical definition compared with the corresponding CT images, as exemplified in Figure 6. We found that even though it was trained with limited paired data, the pix2pix model [8] outperformed the CycleGAN model trained on unpaired data in our experiments. The limitation of paired training is blurry output caused by the voxel-wise loss. Qualitative analysis showed that the MR images obtained by MR-GAN look more realistic and contain less blurring than those of the other methods, which could be due to the dual cycle-consistent and voxel-wise losses on paired data.

The experimental results have implications for accurate CT-based radiotherapy treatment for patients who are contraindicated for an MR scan because of cardiac pacemakers or metal implants, and for patients who live in areas with poor medical services. Our synthetic system can be trained using any kind of data: paired, unpaired, or both. Using paired and unpaired data together yields higher-quality synthesized images than using either kind of data alone.

6 Conclusion

We propose a system for synthesizing MR images from CT images. Our approach uses paired and unpaired data to solve the context-misalignment problem of unpaired training, and to alleviate the rigid registration task and blurred results of paired training. Unpaired data are plentiful, and together with limited paired data, they can be used for effective synthesis in many cases. Our results on the test set demonstrate that the output of MR-GAN was much closer to the reference MR images than that of the other methods. The preliminary results indicate that the synthetic system is able to efficiently translate structures within complicated 2D brain slices, such as soft brain tissues, vessels, gyri, and bones. In future work, we will investigate the 3D information of anatomical structures present in CT and MR brain sequences to further improve performance based on paired and unpaired data. We suggest that our approach can potentially increase the quality of synthesized images for synthetic systems that depend on supervised and unsupervised settings, and can also be extended to other applications, such as MR-to-CT and CT-to-PET synthesis.

References

[1] Son, J., Park, S. J., & Jung, K. H. (2017). Retinal vessel segmentation in fundoscopic images with generative adversarial networks. arXiv preprint, arXiv:1706.09318.

[2] Chen, H., Qi, X., Yu, L., et al. (2017) DCAN: Deep contour-aware networks for object instance segmentation from histology images. Medical Image Analysis 36:135-146.

[3] Edmund, J. M., & Nyholm, T. (2017) A review of substitute CT generation for MRI-only radiation therapy. Radiation Oncology, 12:28.

[4] Han X. (2017) MR-based synthetic CT generation using a deep convolutional neural network method. Medical Physics 44(4):1408-1419.

[5] Nie, D., Trullo, R., Lian, J., et al. (2017) Medical image synthesis with context-aware generative adversarial networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp.417-425.

[6] Goodfellow, I., Pouget-Abadie, J., Mirza, M., et al. (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680.

[7] Bi, L., Kim, J., Kumar, A., et al. (2017) Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs). In Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment, pp. 43-51.

[8] Isola, P., Zhu, J.-Y., Zhou, T., et al. (2017) Image-to-image translation with conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[9] Ben-Cohen, A., Klang, E., Raskin, S. P., et al. (2017) Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results. In International Workshop on Simulation and Synthesis in Medical Imaging, pp. 49-57.

[10] Long, J., Shelhamer, E., & Darrell, T. (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 3431-3440.

[11] Wolterink, J. M., Dinkla, A. M., Savenije, M. H., et al. (2017) Deep MR to CT synthesis using unpaired data. In International Workshop on Simulation and Synthesis in Medical Imaging, pp. 14-23.

[12] Zhu, J.-Y., Park, T., Isola, P., et al. (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In The IEEE International Conference on Computer Vision (ICCV).

[13] Saad, Z. S., Glen, D. R., Chen, G., et al. (2009) A new method for improving functional-to-structural MRI alignment using local Pearson correlation. Neuroimage, 44(3):839-848.

[14] Mao, X., Li, Q., Xie, H., et al. (2016) Multi-class generative adversarial networks with the L2 loss function. CoRR, abs/1611.04076, 2.

[15] Pathak, D., Krahenbuhl, P., Donahue, J., et al. (2016) Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2536-2544.

[16] Johnson, J., Alahi, A., & Fei-Fei, L. (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision (ECCV), pp. 694-711.

[17] He, K., Zhang, X., Ren, S., et al. (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 770-778.

[18] Ulyanov, D., Vedaldi, A., & Lempitsky V. (2016) Instance normalization: The missing ingredient for fast stylization, arXiv preprint, arXiv:1607.08022.

[19] Xu, B., Wang, N., Chen, T., et al. (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint, arXiv:1505.00853.

[20] Kingma, D. P., & Ba, J. (2014) Adam: A method for stochastic optimization. arXiv preprint, arXiv:1412.6980.

[21] Hinton, G. E., & Salakhutdinov, R. R. (2006) Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507.
