Reconstructing Natural Scenes from fMRI Patterns using BigBiGAN


Abstract

Decoding and reconstructing images from brain imaging data is a research area of high interest. Recent progress in deep generative neural networks has introduced new opportunities to tackle this problem. Here, we employ a recently proposed large-scale bi-directional generative adversarial network, called BigBiGAN, to decode and reconstruct natural scenes from fMRI patterns. BigBiGAN converts images into a 120-dimensional latent space which encodes class and attribute information together, and can also reconstruct images based on their latent vectors. We trained a linear mapping between fMRI data, acquired over images from 150 different categories of ImageNet, and their corresponding BigBiGAN latent vectors. Then, we applied this mapping to the fMRI activity patterns obtained from 50 new test images from 50 unseen categories in order to retrieve their latent vectors, and reconstruct the corresponding images. Pairwise image decoding from the predicted latent vectors was highly accurate. Moreover, qualitative and quantitative assessments revealed that the resulting image reconstructions were visually plausible, successfully captured many attributes of the original images, and had high perceptual similarity with the original content. This method establishes a new state-of-the-art for fMRI-based natural image reconstruction, and can be flexibly updated to take into account any future improvements in generative models of natural scene images.

fMRI Decoding, Visual Reconstruction, Natural Scenes, BigBiGAN

I Introduction

For many years, scientists have used machine learning (ML) to decode and understand human brain activity in response to visual stimuli. The great progress of deep neural networks (DNNs) in the last decade has provided researchers with powerful tools and a large number of unexplored opportunities to achieve better brain decoding and visual reconstructions from functional magnetic resonance imaging (fMRI) data.

A variety of approaches have been taken to address image reconstruction from brain data. Before the deep learning era, researchers achieved reconstructions of simple binary stimuli directly from fMRI data [1]. Even though the reconstruction of complex natural images was hardly possible in those days, there were attempts to identify the image within a dataset, instead of reconstructing it: for example, quantitative receptive field models were used to identify the presented image [2]; in another work [3], the authors made use of Bayesian methods to find the image with the highest likelihood.

In recent years, deep networks have brought significant improvements in this field, with the reconstruction of handwritten digits using deep belief networks [4], of face stimuli with variational auto-encoders (VAEs) [5], and of natural scenes with feed-forward networks [6, 7], generative adversarial networks (GANs) [8, 9], and dual-VAE/GAN [10]. Most reconstruction methods for natural images, however, tend to emphasize pixel-level similarity with the original images, and rarely produce recognizable objects, or visually plausible or semantically meaningful scenes.

Inspired by [5], we propose a method to reconstruct natural scenes from fMRI data using a recently proposed large-scale bi-directional generative adversarial network called BigBiGAN [11]. This network is the current state-of-the-art for unconditional image generation on ImageNet in terms of image quality and visual plausibility. In our proposed method, the brain data is mapped to the latent space of BigBiGAN (pre-trained on ImageNet), whose generator is then used to reconstruct the image. Fig. 1 provides an overview of the proposed method. Specifically, a training set of natural images shown to the human subjects is also fed into BigBiGAN’s encoder to obtain “original” latent vectors. Then, a linear mapping is computed between brain responses to the training images and their corresponding original latent vectors. Applying this mapping to the brain data for novel test images, a set of “predicted” latent vectors is then generated. Finally, these predicted latent vectors are passed on to BigBiGAN’s generator for image reconstruction.

We demonstrate that the proposed method is able to outperform others by generating high-resolution naturalistic reconstructions thanks to the BigBiGAN generator. We justify our claims by quantitative comparisons of reconstructions to the original images in the high-level representational space of a state-of-the-art deep neural network.

Fig. 1: The proposed method. (a) Training phase. We train a linear mapping (computing the linear transform matrix $W$) from 120-D latent vectors (derived from the BigBiGAN encoder or from the PCA decomposition) to $n$-D fMRI patterns, where $n$ is the number of voxels inside the brain region of interest. (b) Test phase. The trained mapping is inversely used to transform fMRI patterns of test images into latent vectors. The image is then reconstructed using BigBiGAN’s generator (or a PCA inverse transform).

II Previous Works

We begin this section by describing our earlier work, from which the present method was adapted. In [5], we took advantage of the latent space of a VAE trained with a GAN procedure on a large set of faces. By learning a linear mapping between fMRI patterns and the VAE latent vectors, and using the GAN generator to reconstruct input images, we established a new state-of-the-art for fMRI-based face reconstruction. Moreover, the method even allowed for decoding of face gender and of face mental imagery.

Despite these promising results on faces, dealing with natural images remains a hard challenge. In another study [12], the authors used a VAE for reconstructing naturalistic movie stimuli. They first trained a VAE, with five layers for encoding and five layers for decoding, on the ImageNet dataset. Then, similar to [5], they converted the fMRI patterns to the VAE’s latent space through a linear mapping. Although they reported an appreciable level of success, the reconstructions were still blurry and difficult to recognize.

Studies in this field are not limited to the latent space of VAEs. In [6], the feature space of deep convolutional networks (DCNs) was used for fMRI decoding and image reconstruction. To do so, a decoder was first trained to transform fMRI patterns into the DCN’s image representations. Then, for each fMRI pattern, an initial image was proposed and passed through iterative optimization steps. In each iteration, the image was given to the DCN, and the difference between its feature representation and the one from the actual image was computed as a loss value. Finally, pixel values were optimized to decrease this loss. The authors also examined optimization in the space of deep generative networks instead of in pixel space. According to the obtained reconstructions, their method was able to capture input attributes such as object color, position, and a coarse estimate of shape. However, images remained blurry and the objects difficult to recognize.

Other studies have proposed original network architectures instead of using pre-existing ones. In [7], an encoder/decoder structure was proposed, in which the encoder maps images to fMRI data, while the decoder does the reverse. In the first step, the encoder and decoder were separately trained on (image, fMRI) data pairs. Since the number of data pairs was insufficient for proper generalization, the authors applied a second round of training in an unsupervised fashion.

In yet another study [10], the authors proposed a dual-VAE, trained with a GAN procedure. This method involved three stages of training. In Stage 1, the encoder, generator, and discriminator were trained on original images vs. generated ones. In Stage 2, the generator was fixed, the encoder was trained on fMRI data, and the discriminator was trained with reconstructed images from the fMRI data and reconstructed images from Stage 1. Finally, in Stage 3, the encoder was fixed, and the generator and discriminator networks were fine-tuned using the original images and the reconstructed images from the fMRI data. This three-stage method not only outperformed previous studies in image decoding, but also generated crisper and more visually plausible reconstructions. However, object identity was not always evident in the reconstructed images.

In this paper, we reconstruct images from human brain activity patterns using the state-of-the-art in natural image generation, a large-scale bi-directional GAN coined “BigBiGAN” [11]. Notably, the high-level image attributes captured in the latent space of the BigBiGAN allow us to go beyond pixel-wise similarity between the original and reconstructed images, and to reconstruct realistic and visually plausible scenes that express high-level semantic and category-level information from brain activity patterns.

III Materials and Methods

III-A fMRI Data

In this paper, we used open-source fMRI data provided by [13]. Images in the stimulus set were selected from ImageNet, and included training samples (1 presentation each) from 150 categories, and 50 test samples (35 presentations each) from 50 separate categories. Training and test categories were independent of each other. Five healthy subjects viewed these training and test images in an fMRI scanner in separate sessions. Each fMRI run consisted of an initial fixation period, a series of image presentations (each image flashed repeatedly at a fixed rate), and a final fixation period. Moreover, images were randomly repeated during a run and subjects performed a one-back task on these images (i.e., they pressed a button when the same image was presented on two consecutive trials).

We downloaded the raw data² and applied a standard preprocessing pipeline: slice-time correction, realignment, and coregistration to the T1-weighted anatomical image using the SPM12 software³. Details of the parameters used for preprocessing can be found in [5]. The downloaded fMRI dataset also provided pre-defined regions of interest (ROIs) that covered visual cortex. The onset and duration of each image were entered into a general linear model (GLM) as regressors (a separate GLM was used for the training and test sessions).
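As an illustration of this per-image response estimation, the sketch below uses nilearn rather than the SPM12 pipeline actually employed here; the file path, TR, and event timings are placeholders, not values from the dataset.

```python
# Sketch: estimating one activation pattern per image with a GLM,
# roughly equivalent to the SPM12 pipeline described above.
# File path, TR, and event timings are placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.DataFrame({
    "onset":      [33.0, 42.0, 51.0],                      # image onsets (s), placeholder values
    "duration":   [8.0, 8.0, 8.0],                         # presentation durations (s), placeholder
    "trial_type": ["img_0001", "img_0002", "img_0003"],    # one regressor per presented image
})

glm = FirstLevelModel(t_r=2.0, hrf_model="spm", standardize=False)  # TR is a placeholder
glm = glm.fit("preprocessed_run.nii.gz", events=events)

# One effect-size (beta) map per image; masking these maps with the visual-cortex
# ROIs yields the voxel-wise activation vectors used for decoding.
beta_img_0001 = glm.compute_contrast("img_0001", output_type="effect_size")
```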

III-B BigBiGAN

BigBiGAN is a state-of-the-art large-scale bi-directional generative network for natural images [11]. It is a successor of the bi-directional GAN BiGAN [14], but adopts the generator and discriminator architectures from the more recent BigGAN [15]. Similar to BiGAN, the encoder and generator are trained indirectly via a joint discriminator that has to discriminate real from fake [latent vector, data] pairs. The encoder maps data into latent vectors (real pairs), while the generator reconstructs data from latent vectors (fake pairs). Unlike BigGAN, a conditional GAN which requires a separate “conditioning” vector for object category, BigBiGAN’s generator has a unified 120-dimensional latent space which captures all properties of objects, including category and pose. In other words, each image can be expressed as a 120-dimensional vector in the network’s latent space, and any latent vector can be mapped back to a corresponding image. The low dimensionality of BigBiGAN’s latent space makes it particularly appealing for fMRI-based decoding, given the relatively small amount of brain data available for training our system (see III-D).

In this study, we used the largest pre-trained BigBiGAN model, revnet50x4. The model is publicly available on TensorFlow Hub⁴.
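For reference, a minimal sketch of loading this module and wiring its encoder and generator is shown below. The signature names ('encode', 'generate'), the 'z_mean' output key, and the input shapes follow the usage pattern of DeepMind's publicly released Colab for BigBiGAN and should be treated as assumptions rather than details stated in this paper.

```python
# Sketch: loading the pre-trained BigBiGAN TF Hub module (TF1-style graph mode).
# Signature names and tensor shapes are assumptions based on DeepMind's Colab.
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_v2_behavior()
module = hub.Module("https://tfhub.dev/deepmind/bigbigan-revnet50x4/1")

images = tf.placeholder(tf.float32, shape=[None, 256, 256, 3])   # encoder input, assumed in [-1, 1]
latents = tf.placeholder(tf.float32, shape=[None, 120])           # 120-D latent vectors

enc_out = module(images, signature="encode", as_dict=True)   # dict of latent statistics (assumed keys)
z = enc_out["z_mean"]                                          # deterministic latent estimate
recon = module(latents, signature="generate")                  # images generated from latent vectors

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    # z_train = sess.run(z, {images: training_images})          # "original" latent vectors
    # recon_imgs = sess.run(recon, {latents: predicted_latents})  # fMRI-based reconstructions
```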

III-C PCA Model

As a baseline image decomposition and reconstruction model for our comparisons, we applied principal component analysis (PCA) on a set of images randomly selected from the training categories, making sure that the training images themselves were included. Using the first 120 principal components (PCs), all of the image stimuli were transformed into a set of 120-D vectors. These vectors were then treated like BigBiGAN’s latent vectors for brain decoding and reconstruction. This method (known as “eigen-face” or “eigen-image”) has previously been applied to fMRI-based face reconstruction [16, 5] and natural image reconstruction [12].
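A minimal sketch of this eigen-image baseline is given below; the array name and image dimensions are illustrative, not taken from the paper.

```python
# Sketch of the eigen-image baseline: a 120-component PCA fit on flattened
# training-category images, used in place of BigBiGAN's encoder/generator.
import numpy as np
from sklearn.decomposition import PCA

# images_train: (n_images, H, W, 3) array of training-category images (illustrative name)
X = images_train.reshape(len(images_train), -1).astype(np.float64)

pca = PCA(n_components=120)
pca.fit(X)

latents = pca.transform(X)                 # 120-D vectors, analogous to BigBiGAN latents
recon = pca.inverse_transform(latents)     # "reconstruction" back to pixel space
recon_images = recon.reshape(images_train.shape)
```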

III-D Decoding and Reconstruction

Using linear regression, we trained a linear encoder that maps the 120-dimensional BigBiGAN latent representations (or the 120-dimensional PCA projections) associated with the training images onto the corresponding brain representations, recorded when the human subjects viewed the same images in the scanner (see Fig. 1a). For each subject, this mapping was learned by using the 120-dimensional latent vectors (or PCs) of the training images as parametric modulators in the general linear model (GLM) applied to the preprocessed BOLD signal. This step takes into account the covariance matrix of the latent dimensions (across images), and produces a linear transform matrix $W$ which will be used for the inverse transformation in the test phase.

In other words, for the training set of images, if there are $n$ voxels in the desired ROI, the GLM finds an optimal transformation matrix $W \in \mathbb{R}^{n \times 121}$ between the 121-dimensional latent vectors (the 120 latent dimensions plus an additional constant bias term) and the corresponding $n$-dimensional brain activation vectors:

$$\mathbf{b} = W \mathbf{x} \qquad (1)$$

where $\mathbf{x}$ and $\mathbf{b}$ denote the latent and brain activation vectors, respectively. Please note that all of the GLMs were solved by SPM12 over the entire visual cortex (the union of all pre-defined functional ROIs).
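As a rough numerical sketch of this training step (in the paper, $W$ is estimated with SPM12's GLM using the latent dimensions as parametric modulators), an ordinary least-squares fit over assumed arrays Z and B would look as follows.

```python
# Sketch: fitting the linear mapping W of equation (1) with ordinary least squares.
# Z and B are illustrative array names, not part of the released dataset interface.
import numpy as np

# Z: (n_images, 120) latent vectors of the training images (BigBiGAN or PCA)
# B: (n_images, n_voxels) brain activation vectors for the same images
X = np.hstack([Z, np.ones((Z.shape[0], 1))])      # append the constant bias term -> 121 columns

# Solve B ~ X @ coeffs, one column of coefficients per voxel
W = np.linalg.lstsq(X, B, rcond=None)[0].T        # shape (n_voxels, 121), so that b ~ W @ x
```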

For the test images, brain representations were derived from another GLM in which the test image onsets and durations were used as regressors. The previously trained mapping $W$ was then inverted (again, taking into account the covariance matrix of the latent dimensions, this time across brain voxels), and used to predict the latent vectors (or PCA projections) from the brain representations (see Fig. 1b). This corresponds to the “brain decoding” step. Precisely, we retrieved the latent vector $\hat{\mathbf{x}}$ from the brain activation vector $\mathbf{b}$ of each test image using the pre-trained $W$ and the (pseudo-)inverse of its covariance matrix $C = W^{\top} W$:

$$\hat{\mathbf{x}} = C^{-1} W^{\top} \mathbf{b} \qquad (2)$$

Before solving equation 2, the brain activation vectors were zero-meaned by subtracting from each the average activation vector across all test images.

Finally, we discarded the bias term from the predicted latent vectors (or PCA projections) and fed them into BigBiGAN’s generator (or the PCA inverse transform) to generate the image reconstructions.
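A minimal sketch of this test phase, continuing the assumed array names from the training sketch above, is shown below.

```python
# Sketch of the test phase (equation 2): pseudo-inverting the trained mapping W
# to predict latent vectors from zero-centered test fMRI patterns.
import numpy as np

# B_test: (n_test, n_voxels) activation vectors of the test images (illustrative name)
# W:      (n_voxels, 121) mapping learned during training
B_centered = B_test - B_test.mean(axis=0, keepdims=True)     # zero-mean across test images

C = W.T @ W                                                   # covariance of latent dims across voxels
Z_pred = (np.linalg.pinv(C) @ W.T @ B_centered.T).T           # predicted 121-D vectors (eq. 2)

Z_pred = Z_pred[:, :120]    # discard the bias term before BigBiGAN's generator (or PCA inverse)
```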

Fig. 2: fMRI reconstructions by the proposed method across all subjects. The first and second columns show the input image and BigBiGAN’s original reconstruction (reconstruction from the original latent vector), respectively. The next five columns illustrate BigBiGAN’s fMRI reconstructions (reconstructions from the predicted latent vectors) for each of the five subjects. Although the fMRI reconstructions are not a perfect match to the input images, many attributes are consistently captured across subjects. These attributes can be semantic, such as the presence of an animal or its body pose, and/or visually driven, such as roundness or tallness, to mention a few.

III-E Decoding Accuracy

We used a pairwise strategy to evaluate the accuracy of our brain decoder. Assume that there is a set of $N$ original vectors $\mathbf{x}_i$ and their respective predictions $\hat{\mathbf{x}}_i$. Then the pairwise decoding accuracy is computed as:

$$\text{Accuracy} = \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j \neq i} \delta_{ij} \qquad (3)$$

where $r(\cdot,\cdot)$ is the Pearson correlation and

$$\delta_{ij} = \begin{cases} 1 & \text{if } r(\hat{\mathbf{x}}_i, \mathbf{x}_i) > r(\hat{\mathbf{x}}_i, \mathbf{x}_j) \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$
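A short sketch of this pairwise accuracy computation (equations 3-4) over assumed arrays of original and predicted vectors is given below.

```python
# Sketch of the pairwise decoding accuracy: a pair (i, j) counts as correct when
# the predicted vector for image i correlates more strongly with its own original
# latent vector than with the original latent vector of image j.
import numpy as np

def pairwise_accuracy(originals: np.ndarray, predictions: np.ndarray) -> float:
    """originals, predictions: arrays of shape (N, d)."""
    n = len(originals)
    # r[i, j] = Pearson correlation between prediction i and original j
    r = np.corrcoef(predictions, originals)[:n, n:]
    correct = 0
    for i in range(n):
        for j in range(n):
            if i != j and r[i, i] > r[i, j]:
                correct += 1
    return correct / (n * (n - 1))
```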
Fig. 3: Comparison of fMRI reconstructions by different methods. The first and second columns show the input image and BigBiGAN’s original reconstruction (reconstruction from the original latent vector), respectively. Columns three to seven illustrate fMRI reconstructions for BigBiGAN (our method, reconstruction from the predicted latent vector), Eigen-Image (PCA, baseline model), Ren et al. [10], Beliy et al. [7], and Shen et al. [6], respectively. Clearly, reconstructions by the proposed method are the most naturalistic, with the highest resolution, in contrast to the more blurry or semantically ambiguous results of the other methods.

III-F High-Level Similarity Measure

Unlike human judgement, classic similarity metrics such as mean squared error (MSE), pix-comp [10], or the structural similarity index (SSIM) are computed in pixel space and cannot capture high-level perceptual similarities, e.g., in terms of object attributes and identity, or semantic category. One good solution to this problem is to make use of DCN representational spaces, as several lines of evidence support their correspondence with human brain representations [17, 18, 13].

In this paper, ResNet-152 [19] was the DCN of our choice, with the outputs of its penultimate layer (just before the final soft-max classification layer) defining our high-level representational space. In this space, as a measure of high-level perceptual similarity, we computed the average Pearson correlation distance between representations of the original images and their associated fMRI reconstructions. In addition to this high-level measure, we also report pix-comp values [10] as a measure of low-level similarity.
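A minimal sketch of this high-level measure is shown below, using torchvision's pre-trained ResNet-152 and standard ImageNet preprocessing; the helper function names are illustrative.

```python
# Sketch: Pearson correlation distance between penultimate-layer ResNet-152
# features of an original image and its fMRI reconstruction.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

resnet = models.resnet152(pretrained=True)
resnet.fc = torch.nn.Identity()        # drop the soft-max classifier -> 2048-D pooled features
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(path: str) -> np.ndarray:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return resnet(x).squeeze(0).numpy()

def correlation_distance(original_path: str, reconstruction_path: str) -> float:
    f1, f2 = features(original_path), features(reconstruction_path)
    return 1.0 - np.corrcoef(f1, f2)[0, 1]
```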

IV Results

IV-A Image Reconstructions

Using BigBiGAN’s generator (or the PCA inverse transform), we could reconstruct an estimate of the test images from the latent vectors obtained by the brain decoder (equation 2). Since BigBiGAN’s generator is not perfect (see the first and second columns in Fig. 2 and Fig. 3), we cannot expect the fMRI reconstructions to be identical to the input images (even if our decoding procedure were perfectly accurate). However, we found that the brain decoder not only captured several high-level attributes of the images, but also that image reconstructions were robustly consistent across subjects. Fig. 2 shows a series of reconstructions for all five subjects.

For example, when the input image contained an animal (rows 1, 2, 5, 7, 10) or a human (row 9), it was preserved in the reconstructions with comparable location, body shape, and pose across subjects. It is worth mentioning that objects or attributes that occur more frequently in the ImageNet dataset are more likely to be preserved in the original BigBiGAN and fMRI reconstructions. For instance, the objects in the third and eighth rows are not common in ImageNet, but their roundness attribute is frequently observed. Thus, all the reconstructions agreed on a round object, even though they could not exactly reconstruct what the object was. Other examples are the image of the tower (fourth row) for the narrowness and tallness attributes, or the insect (seventh row), whose reconstructions mostly captured the long rope-like object behind it and rendered it with insect-related attributes.

Method               Low-Level (Pix-Comp)    High-Level (ResNet-152)
Shen et al. [6]
Beliy et al. [7]
Ren et al. [10]
Eigen-Image (PCA)
BigBiGAN (ours)
TABLE I: Quantitative comparison of image reconstructions. For each measure, the best value is highlighted in bold (for Pix-Comp, higher is better; for ResNet-152, lower is better).

fMRI-based natural image reconstruction has recently been addressed by a variety of methods; however, only a few of them have been evaluated on the dataset we used. Here, we compare our reconstructions to three recent works by Shen et al. [6], Beliy et al. [7], and Ren et al. [10].

Fig. 3 shows reconstructions of seven images obtained by each method. Note that we could not compare other images, since their reconstructions were not available for all methods. Although our reconstructions are not a perfect match to the input image, they show the highest resolution, the clearest details, and the most natural appearance, and they display high-level similarity to the input image. Clearly, PCA (eigen-image) reconstructions rank worst in clarity. The other three methods suffered to varying degrees from ambiguous reconstructions (notably, without any clearly discernible object), although they did much better at estimating low-level attributes of the images, with the best performance obtained by Ren et al.

For a quantitative comparison, we evaluated reconstructions of the same seven images in terms of low- and high-level similarity. The former was computed as the pairwise decoding performance in pixel space (pix-comp), while the latter was the correlation distance between representations in the penultimate layer of ResNet-152 (see subsection III-F). The quantitative results (see Table I) support our claim that high-level aspects of the input images were better preserved by our method, while the other methods had an advantage for low-level aspects.

IV-B Decoding Accuracy Across Brain Regions

As mentioned above, the fMRI dataset includes several pre-defined brain regions of interest (ROIs) in visual cortex, including V1 to V4, LOC, FFA, PPA, and HVC as the union of the last three. We also defined the whole visual cortex (VC) as the union of all these ROIs. By limiting voxels to those that were inside each ROI, we evaluated the pairwise decoding accuracy across different regions in visual cortex.
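As an illustration of this per-ROI analysis, the short sketch below restricts the voxel columns to each ROI and re-runs the decoding pipeline; `roi_masks`, `fit_mapping`, and `decode` are hypothetical helpers standing in for the training- and test-phase steps of section III-D, and the array names are assumptions.

```python
# Sketch: repeating the decoding analysis within each ROI's voxels.
# roi_masks, fit_mapping, and decode are hypothetical helpers (see III-D);
# pairwise_accuracy is the function sketched in III-E.
import numpy as np

roi_names = ["V1", "V2", "V3", "V4", "LOC", "FFA", "PPA", "HVC", "VC"]

accuracies = {}
for roi in roi_names:
    mask = roi_masks[roi]                               # boolean selector over voxels
    W_roi = fit_mapping(Z_train, B_train[:, mask])      # training-phase regression (eq. 1)
    Z_hat = decode(W_roi, B_test[:, mask])              # test-phase inversion (eq. 2)
    accuracies[roi] = pairwise_accuracy(Z_test, Z_hat)  # pairwise accuracy (eqs. 3-4)
```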

Fig. 4 illustrates the average decoding accuracy over all subjects in each brain region. PCA outperformed BigBiGAN in the two earliest visual areas (V1 and V2). However, in higher areas, BigBiGAN gradually improved while PCA worsened. Peak performance for our method was reached in V3, V4, and HVC, where PCA performed poorly. We hypothesize that the superiority of PCA in lower areas is due to the fact that the PCs were computed in pixel space, and thus correspond mostly to low-level features. On the other hand, BigBiGAN’s latent vectors can better represent high-level features, since they are obtained via a large hierarchy of processing layers. For both BigBiGAN and PCA, peak accuracy was achieved when we used brain responses from the whole VC.

It is worth mentioning that, while the whole VC improved BigBiGAN’s performance significantly compared to each individual region, PCA could only do marginally better than when using voxels in V1d alone (its best single-region performance). This again suggests that PCA mostly depends on low-level features, whereas the BigBiGAN brain decoder can benefit from low-level information as well as high-level image attributes.

Fig. 4: Pairwise decoding accuracy across different brain regions of interest (ROIs). While voxels in high-level areas of the visual cortex are best decoded using BigBiGAN (our method), PCA performs better in low-level regions (V1, V2). Although the best performance is achieved when all the voxels (the whole visual cortex) are included, PCA could only do marginally better than when only V1d voxels were used.

V Discussion

In this paper, we have proposed a new method for realistic reconstruction of natural scenes from fMRI patterns. Thanks to the high-level, low-dimensional latent space of BigBiGAN, we could establish a linear mapping that associates image latent vectors to their corresponding fMRI patterns. This linear mapping was then inverted to transform novel fMRI patterns into BigBiGAN latent vectors. Finally, by feeding the obtained latent vectors into the BigBiGAN generator, the associated images were reconstructed.

Many recent approaches have taken advantage of deep generative neural networks to reconstruct natural scenes [6, 7, 10]. However, due to the complexity of natural images, a huge amount of computational resources and capacity is required to achieve high-resolution, realistic image generation [15]. Here, we used the pre-trained BigBiGAN as a state-of-the-art large-scale bi-directional GAN for natural images. We showed that the proposed method generates the most realistic reconstructions at the highest resolution compared to other methods. Moreover, comparing results across subjects revealed a robust consistency in capturing high-level attributes of different objects through the reconstructions.

We acknowledge that our reconstructions are still far from perfect, and can often lag behind the others in terms of low-level similarity measures. In contrast, the strength of the proposed method lies in high-level evaluations of perceptual similarity. While we surpass other methods in this respect, we believe that there is still room for methodological improvement. In particular, failures to retrieve the proper semantic category or visual attribute can of course be caused by imperfect brain-decoding of the latent vectors, but also sometimes by inadequate image generation from the BigBiGAN generator (e.g., compare the first two columns in Fig. 2). We believe that one promising avenue for improvement lies in the capabilities of the image generation model. In this regard, whenever new bidirectional GANs (or other bidirectional architectures) improve on the current state-of-the-art, our method can easily be adapted to deploy them and take advantage of their image generation prowess for more accurate brain-based reconstructions.

Another current limitation of the proposed method is our use of pre-defined brain regions of interest (or potentially, of the entire visual cortex). It is likely that not all voxels are informative or relevant to the target task; including uninformative or irrelevant voxels can only degrade the outcome. Additionally, there might well be informative voxels in other brain areas such as pre-frontal cortex, signaling high-level perceptual or semantic aspects of the visual stimulus, that we are currently not considering. For these reasons, extending the analysis to the entire brain, while using a proper voxel selection stage to discard irrelevant voxels, is bound to further improve the results.

Footnotes

  1. Funded by AI-REPS ANR-18-CE37-0007-01, ANITI ANR-19-PI3A-0004, and an Nvidia GPU grant. These authors contributed equally to this work.
  2. https://openneuro.org/datasets/ds001246/
  3. https://www.fil.ion.ucl.ac.uk/spm/software/spm12/
  4. https://tfhub.dev/deepmind/bigbigan-revnet50x4/1

References

  1. Y. Miyawaki, H. Uchida, O. Yamashita, M.-a. Sato, Y. Morito, H. C. Tanabe, N. Sadato, and Y. Kamitani, “Visual image reconstruction from human brain activity using a combination of multiscale local image decoders,” Neuron, vol. 60, no. 5, pp. 915–929, 2008.
  2. K. N. Kay, T. Naselaris, R. J. Prenger, and J. L. Gallant, “Identifying natural images from human brain activity,” Nature, vol. 452, no. 7185, pp. 352–355, 2008.
  3. T. Naselaris, R. J. Prenger, K. N. Kay, M. Oliver, and J. L. Gallant, “Bayesian reconstruction of natural images from human brain activity,” Neuron, vol. 63, no. 6, pp. 902–915, 2009.
  4. M. A. van Gerven, F. P. de Lange, and T. Heskes, “Neural decoding with hierarchical generative models,” Neural computation, vol. 22, no. 12, pp. 3127–3142, 2010.
  5. R. VanRullen and L. Reddy, “Reconstructing faces from fMRI patterns using deep generative neural networks,” Communications Biology, vol. 2, no. 1, pp. 1–10, 2019.
  6. G. Shen, T. Horikawa, K. Majima, and Y. Kamitani, “Deep image reconstruction from human brain activity,” PLoS computational biology, vol. 15, no. 1, p. e1006633, 2019.
  7. R. Beliy, G. Gaziv, A. Hoogi, F. Strappini, T. Golan, and M. Irani, “From voxels to pixels and back: Self-supervision in natural-image reconstruction from fMRI,” in Advances in Neural Information Processing Systems, 2019, pp. 6514–6524.
  8. G. St-Yves and T. Naselaris, “Generative adversarial networks conditioned on brain activity reconstruct seen images,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC).   IEEE, 2018, pp. 1054–1061.
  9. K. Seeliger, U. Güçlü, L. Ambrogioni, Y. Güçlütürk, and M. A. van Gerven, “Generative adversarial networks for reconstructing natural images from brain activity,” NeuroImage, vol. 181, pp. 775–785, 2018.
  10. Z. Ren, J. Li, X. Xue, X. Li, F. Yang, Z. Jiao, and X. Gao, “Reconstructing perceived images from brain activity by visually-guided cognitive representation and adversarial learning,” arXiv preprint arXiv:1906.12181, 2019.
  11. J. Donahue and K. Simonyan, “Large scale adversarial representation learning,” in Advances in Neural Information Processing Systems, 2019, pp. 10541–10551.
  12. K. Han, H. Wen, J. Shi, K.-H. Lu, Y. Zhang, and Z. Liu, “Variational autoencoder: An unsupervised model for modeling and decoding fMRI activity in visual cortex,” bioRxiv, p. 214247, 2018.
  13. T. Horikawa and Y. Kamitani, “Generic decoding of seen and imagined objects using hierarchical visual features,” Nature communications, vol. 8, no. 1, pp. 1–15, 2017.
  14. J. Donahue, P. Krähenbühl, and T. Darrell, “Adversarial feature learning,” arXiv preprint arXiv:1605.09782, 2016.
  15. A. Brock, J. Donahue, and K. Simonyan, “Large scale GAN training for high fidelity natural image synthesis,” arXiv preprint arXiv:1809.11096, 2018.
  16. A. S. Cowen, M. M. Chun, and B. A. Kuhl, “Neural portraits of perception: reconstructing face images from evoked brain activity,” Neuroimage, vol. 94, pp. 12–22, 2014.
  17. S.-M. Khaligh-Razavi and N. Kriegeskorte, “Deep supervised, but not unsupervised, models may explain IT cortical representation,” PLoS Computational Biology, vol. 10, no. 11, 2014.
  18. R. M. Cichy, A. Khosla, D. Pantazis, A. Torralba, and A. Oliva, “Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence,” Scientific reports, vol. 6, p. 27755, 2016.
  19. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.