Automated Classification of Helium Ingress in Irradiated X-750

Abstract

Imaging nanoscale features using transmission electron microscopy is key to predicting and assessing the mechanical behavior of structural materials in nuclear reactors. Analyzing these micrographs is often a tedious and time-consuming manual process, making it a prime candidate for automation. A region-based convolutional neural network is proposed, which can identify helium bubbles in neutron-irradiated Inconel X-750 reactor spacer springs. We demonstrate that this neural network produces analyses of accuracy and reproducibility comparable to those produced by humans. Further, we show this method to be roughly four orders of magnitude faster than manual analysis, allowing for the generation of significant quantities of data. The proposed method can be used with micrographs of different Fresnel contrasts and resolutions and shows promise in application across multiple defect types.

I Introduction

Transmission electron microscopy (TEM) enables microstructural characterization of materials with nanoscale precision; this methodology is now ubiquitous in materials physics jenkins2000characterisation (). TEM imaging allows insights into microstructural behaviour and defect morphology at nanometer scales, ultimately leading to knowledge which can be translated at the system level. TEM is utilized frequently by the nuclear power industry as a visualisation tool for irradiation damage. It is used by operators to identify material degradation in advance of component failure jenkins2000characterisation (). The ability to image ex-situ components and visualize their microstructures is important in order to predict a material’s response to irradiation zhang2014tem ().

Irradiation-induced damage of reactor components is one of the primary issues that plagues nuclear power generation, due to the high cost of component replacement and the introduction of uncertainty into life cycle predictions. Neutrons interact with the material's atoms via various mechanisms, resulting in a multitude of defect types, each with unique consequences for the macroscopic behaviour of the system. An issue of particular interest is the build-up of nanoscale helium (He) bubbles in nickel-based superalloys used in Canada Deuterium Uranium (CANDU) reactors, namely Inconel X-750 judge2015intergranular (); judge2018high (). The He is mainly produced through the interactions of thermal neutrons emitted by the reactor core with the nickel (Ni) atoms. This reaction is the transmutation of Ni to He and Fe through the absorption of a neutron judge2015effects (). These interactions are governed by the thermal neutron cross section Block2010 (). Ni has a high thermal neutron cross section of roughly 1000 barns, while Fe has a cross section of only about 10 barns Kopecky1997 (). The presence of He has important effects on the mechanical properties of the structural alloy. It stabilizes vacancies and leads to bubble formation, which in turn accelerates void swelling, and it is understood to lead to grain boundary embrittlement judge2015effects (). Under typical reactor-operation conditions, the sizes of He bubbles in Ni-based superalloys are generally only a few nanometers judge2018high (). Therefore, they are regularly measured by TEM.
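The commonly cited two-step reaction for this transmutation, written out for completeness, is

```latex
^{58}\mathrm{Ni} + n \rightarrow {}^{59}\mathrm{Ni} + \gamma, \qquad
^{59}\mathrm{Ni} + n \rightarrow {}^{56}\mathrm{Fe} + {}^{4}\mathrm{He}
```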

He bubbles appear as circular objects in TEM micrographs under Fresnel contrast imaging judge2015intergranular (). However, the images produced by TEM are often noisy, and 2D projections of bubbles can overlap, leading to tedious analysis judge2018high (). Hence, standard image processing software shipped with TEMs cannot automatically identify these bubbles. Manually analyzing the images one by one limits the amount of useful scientific data which can be collected. There are three main downsides to manual identification of bubbles. First, the process is time-consuming: an individual image can take up to a few hours to classify. Second, manual identification is error prone, as bubbles can be easily misidentified. Third, there is a lack of reproducibility and consistency from one human inspection to another.

TEM-based inspection of CANDU reactors' X-750 spacer springs uses samples extracted from power generation reactor cores. This is challenging, as radioactive contamination necessitates specialized transport and sample preparation gibson2013safe (). Providing operators with near-instantaneous feedback about the He content of the samples would help them choose, on the fly, which samples to invest more time imaging. In other words, automating bubble counting would save time both in image analysis and in effectively choosing which samples to prepare and image.

Advances in image recognition algorithms have led to their recent adoption in numerous fields. Cirecsan et al. identified the possibility of using a convolutional neural network (CNN) to detect cell mitosis associated with breast cancer cirecsan2013mitosis (). This work highlighted the viability of neural networks as image classifiers with moderate computational costs. CNNs are now able to identify defects in TEM images, including noisy TEM images zhu2017deep (). Recently, a method combining fast Fourier transforms with CNNs proved efficient for identifying phase transformations in WS2 vasudevan2018mapping (); maksov2019deep () characterized by TEM and scanning tunnelling microscopy. Object detection has also been used to detect dislocation loops in irradiated FeCrAl alloys, with success in extracting both visual and quantitative defect metrics matching manual methods li2018automated (). Recent developments in the field of object detection, namely region proposal methods (R-CNNs), suggest that they are viable analysis methods for TEM images.

In this article, an R-CNN approach is proposed to automatically identify He bubbles in irradiated X-750 micrographs. First, the network architecture is introduced. Second, the preparation of the training and validation data is discussed. Third, the validation metrics of the network are described and the performance of the model is assessed. Finally, prescriptions for use of this algorithm are made.

II Methodology

II.1 Strategy

The detection strategy relies on an R-CNN based on layer cross-correlations Girshick_2015_ICCV (). This method was chosen due to the large number of defects present in the images, which makes processing time an important consideration. The model was set to have a confidence threshold of 50%. This value allows for a good compromise between capturing all of the relevant features and limiting the number of false positives. Since the He bubbles are pressurized and roughly spherical, they were treated as circular objects. The coordinates of squares bounding these circles were used to store the positions and sizes of the bubbles. The model aims to identify the location of bubbles, their radii, and their cumulative volumes.
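As an illustration of how such bounding boxes translate into the quantities of interest, the sketch below converts boxes in pixel coordinates into radii and a cumulative volume, assuming the 0.38 nm/pixel scale used in this study; the function and variable names are illustrative only, not the authors' code.

```python
import numpy as np

def bubble_stats(boxes, nm_per_pixel=0.38):
    """Estimate bubble diameters (nm) and cumulative volume (nm^3) from bounding boxes.

    boxes: array-like of shape (N, 4) holding (x_min, y_min, x_max, y_max) in pixels.
    Each bubble is treated as a sphere whose diameter is the mean box side length.
    """
    boxes = np.asarray(boxes, dtype=float)
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    diameters_nm = 0.5 * (widths + heights) * nm_per_pixel
    radii_nm = diameters_nm / 2.0
    volumes_nm3 = (4.0 / 3.0) * np.pi * radii_nm**3
    return {
        "mean_diameter_nm": diameters_nm.mean(),
        "std_diameter_nm": diameters_nm.std(),
        "cumulative_volume_nm3": volumes_nm3.sum(),
    }

# Example: two hypothetical detections
print(bubble_stats([[10, 12, 22, 25], [40, 41, 48, 50]]))
```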

II.2 Network Architecture

Historically, CNNs were hindered by their high computational cost. A mitigation strategy involves sharing convolutions across region proposals, which leads to significant reductions in computational cost NIPS2015_5638 (). The R-CNN differs from the CNN in that there are pre-generated region proposals which are then used to narrow the search for objects of interest girshick2014rich (). This region of interest (RoI) layer allows for faster and more accurate predictions of object locations DBLP:journals/corr/GirshickDDM13 (). The most recent iteration, known as Faster R-CNN, utilizes a deep internal network to generate a feature map Girshick_2015_ICCV (). This feature map is then passed through a region proposal network which generates the RoIs. The model then utilizes the feature map along with the RoIs to make object location predictions. Deriving the region proposals from the feature maps substantially reduces the time required to classify an image without sacrificing accuracy ren2015faster (). The region proposal network makes the Faster R-CNN an ideal candidate for identifying microstructural defects, given the large quantity of defects present.

The architecture of the network in this proposed method is a seven-layer Faster R-CNN. The input layer passes the images through multiple convolutional layers. These convolutional layers transform the image into a convolutional feature map which can be interpreted by the detection layers. This map is then passed to three neural networks which share features across layers: a feature network, a region proposal network (RPN), and a detection network. These three networks operating together allow for fast and accurate predictions of object locations. The output of the network is an annotated image with identified objects enclosed within bounding boxes.
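The authors' implementation uses TensorFlow with a ResNet-101 feature network (see subsection II.5). Purely as an illustrative sketch of the same backbone/RPN/detection-head structure, the snippet below instantiates an off-the-shelf Faster R-CNN from torchvision with a ResNet-50 FPN backbone and filters detections at the 50% confidence threshold used in this work; it is not the authors' network.

```python
import torch
import torchvision

# One foreground class ("bubble") plus background.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
model.eval()

# A dummy 512x512 RGB micrograph with values in [0, 1]; a real input would be a
# normalized TEM image tensor.
image = torch.rand(3, 512, 512)
with torch.no_grad():
    prediction = model([image])[0]

# Keep detections above the 50% confidence threshold.
keep = prediction["scores"] > 0.5
boxes = prediction["boxes"][keep]   # (x_min, y_min, x_max, y_max) per bubble
print(boxes.shape)
```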

II.3 Data Collection

Element wt. %
Nickel 68.6
Chromium 16.0
Iron 8.0
Titanium 2.5
Niobium 1.0
Cobalt 1.0
Trace 2.9
Table 1: Chemical composition of X-750 spacers used in CANDU fuel channels.

The data was collected as part of an effort to characterize He ingress in X-750 reactor spacer springs in the CANDU reactor fleet. The nominal composition of the current Inconel X-750 alloy is summarized in Table 1. TEM samples were prepared from an ex-service spacer spring. The maximum flux of fast neutrons emitted from the CANDU fuel bundles is MeV jenkins2000characterisation (). For a typical CANDU fuel channel power profile, each Ni atom will be displaced approximately once per year by fast neutrons jenkins2000characterisation (). This damage is augmented in the presence of thermal neutrons. The spacer investigated here was irradiated in reactor for 14 effective full power years, to a dose of 30 dpa. The cross-section of the spring wire was 0.7 mm x 0.7 mm. Samples were cut from two locations along the cross-section, the 12 o'clock and 6 o'clock positions. The positioning alters the microstructure: the 12 o'clock segments are not in compression and sit at a constant high temperature, whereas the 6 o'clock segments are compressed under the channel and experience a larger temperature gradient. To ensure appropriate imaging of bubbles, thin samples must be prepared to minimize the number of overlapping bubbles. TEM samples were milled using a focused ion beam (FIB) and then ion polished using a nano-mill with 900 eV argon ions at 10 degree glancing angles. All TEM imaging was performed using a JEOL F200 TEM at an accelerating voltage of 200 kV, utilizing single and double tilt sample holders. Two-beam dynamical bright field and weak beam dark field conditions were applied for imaging irradiation-induced microstructural changes.

II.4 Data Set Preparation

The open source program LabelImg was used to annotate the images labelIMG (). Bounding boxes were generated around the outer bubble ring. The annotations were performed manually using this software, and the bounding boxes were translated into XML files to be read by TensorFlow. Micrographs contained upwards of 50 instances of He bubbles, which were manually identified. Differentiating the smaller bubbles from the base material is challenging, leading to variance in classifications based on interpretation van2018intra (). The training and validation data was classified by a single trained individual in order to minimize this variance. The training data-set consisted of 300 images of 512 x 512 x 3 pixels (512 x 512 RGB images). The data-set contained both over- and under-focused images: 80 over-focused and 220 under-focused micrographs. The micrographs had a pixel size of 0.38 nm/pixel.
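LabelImg writes its annotations in the Pascal VOC XML layout. A minimal sketch of reading those boxes back, before conversion to whatever record format the training framework expects, is given below; the function name is illustrative and the TensorFlow record conversion is not shown.

```python
import xml.etree.ElementTree as ET

def read_labelimg_xml(xml_path):
    """Read bounding boxes from a LabelImg (Pascal VOC style) annotation file.

    Returns a list of (label, x_min, y_min, x_max, y_max) tuples in pixels.
    """
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((
            label,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes
```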

II.5 Model Training

As explained in subsection II.2 above, the R-CNN is composed of three neural networks that share features across layers: a feature network, a region proposal network (RPN), and a detection network. The feature network is the pre-trained image classification network known as ResNet-101 DBLP:journals/corr/HeZRS15 (). This network generates features from the initial images while maintaining the original structure and shape. The RPN consists of three layers, the first feeding into the classification layer and a subsequent bounding box regression layer; these create the RoIs which have a high chance of containing an object. The detection network takes inputs from the feature network and the RPN to generate the final bounding boxes. The RPN and detection networks were trained in tandem using the same image set. The final network consisted of 7 trained layers and the pre-trained ResNet network DBLP:journals/corr/GirshickDDM13 (); DBLP:journals/corr/HeZRS15 (). The model was trained for 4 hours, with an early stop in place if 97% accuracy was achieved, in order to prevent over-fitting. The final R-CNN was trained using 300 images of the same size and magnification. Each image contained 50-100 bubbles. The R-CNN was trained using a deep learning package from TensorFlow. The network was trained on Compute Canada servers using a single Intel E5-2683 v4 "Broadwell" processor clocked at 2.1 GHz and a single NVIDIA P100 Pascal GPU.
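The training loop itself is not detailed above beyond the time budget and the 97% stopping criterion. The following is only a hedged sketch of what such a loop might look like for a torchvision-style detection model (loss dictionary summed, time-limited, early stop on a validation metric); the function, the data loader format, and the hyperparameters are assumptions for illustration, not the authors' settings.

```python
import time
import torch

def train_with_early_stop(model, train_loader, val_metric_fn,
                          target_accuracy=0.97, max_hours=4.0,
                          lr=1e-3, device="cpu"):
    """Illustrative training loop: stop when the validation metric reaches the
    target (the 97% criterion mentioned above) or the time budget runs out."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    start = time.time()
    while (time.time() - start) < max_hours * 3600:
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)   # detection models return a dict
            loss = sum(loss_dict.values())       # of losses in training mode
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if val_metric_fn(model) >= target_accuracy:
            break
    return model
```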

III Results

Figure 1: Original TEM micrographs are in the left-side panels. The micrographs annotated by the R-CNN are in the right-side panels. 1) An example of an over-focused image; the R-CNN does not perform as well in over-focused conditions as in under-focused conditions. Note that the majority of the training set is comprised of under-focused micrographs. 2, 4, 5) Representative examples of relatively clean and visible micrographs. 3) An example of a noisy micrograph, for which the R-CNN overestimated the number of bubbles present.

III.1 Visual Defect Assessment

Visual examination of bubbles is performed using Fresnel contrast imaging to make the bubbles identifiable in the micrographs. The imaging can be performed either under- or over-focused, which will produce white bubbles against a dark background or black against a light background respectively. This is performed to differentiate bubbles from background noise. Typical examples of over-focused bubbles are shown in Fig. 1 image 1, while images 2-5 are under-focused.

A dominant defect type is observed: He bubble voids, i.e. spherical cavities where He produced by neutron interactions accumulates. As illustrated in Fig. 1, the bubbles have relatively low contrast against the background. Additionally, bubble sizes vary between 1 and 12 nm. As the bubbles decrease in size, the contrast diminishes, which further impedes consistent bubble identification, especially in the presence of FIB damage.

Figure 2: The mean of the He bubble radius distribution, the standard deviation of this distribution, and the total He bubble volume in three micrographs of irradiated X-750. We compare visual inspection by three independent researchers to the R-CNN's analysis.

Since identifying He bubbles is a somewhat ambiguous task, as explained above, the statistics generated by visual inspection vary from one human inspector to another. Figure 2 illustrates these variations across four images analyzed by three operators and the trained R-CNN. Variations on the order of 25% are observed from individual to individual. The R-CNN-extracted values are not statistically different from those found by the three human operators.

III.2 Model Performance Metrics

In order to measure the performance of the R-CNN analysis, two sets of metrics were used. The first set, recall & precision 38136 (), evaluates the capability of finding correctly positioned defects. Recall refers to the percentage of features which are identified by the R-CNN, relative to the human analysis. Precision refers to the accuracy of the bounding box sizes and positions. A bubble position is determined according to its bounding box XY coordinates. It can be tested against the reference bounding box values using intersection over union (IoU). IoU compares the two bounding boxes that are passed and determines whether the amount of overlap between the pair is greater than a predetermined threshold Rezatofighi_2019_CVPR (). We use a threshold of 60% in this study. The second set consists of the statistics of the bubble size distributions. Specifically, we calculated the mean bubble diameter, the standard deviation of bubble diameters, and the cumulative bubble volume. These are statistics of importance for predicting the effect of He bubbles on mechanical properties.
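A minimal sketch of this evaluation, assuming the conventional definitions of recall and precision over greedily matched boxes at the 60% IoU threshold (which may differ in detail from the authors' exact scoring procedure), is given below; all names are illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def recall_precision(pred_boxes, true_boxes, iou_threshold=0.6):
    """Greedy one-to-one matching of predicted boxes to reference boxes."""
    matched = set()
    true_positives = 0
    for p in pred_boxes:
        best_j, best_iou = None, 0.0
        for j, t in enumerate(true_boxes):
            if j in matched:
                continue
            score = iou(p, t)
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j is not None and best_iou >= iou_threshold:
            matched.add(best_j)
            true_positives += 1
    recall = true_positives / len(true_boxes) if true_boxes else 0.0
    precision = true_positives / len(pred_boxes) if pred_boxes else 0.0
    return recall, precision
```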

Training set size Recall Precision
80 Images 0.67 0.89
160 Images 0.69 0.91
240 Images 0.72 0.93
300 Images 0.75 0.96
Table 2: Accuracy of the R-CNN’s He bubble analysis as a function of the number of micrographs in the training set using recall and precision metrics. Both recall and precision increase as the size of the training set increases.

To gauge the model's effectiveness, an iterative training method was chosen where a random subset of the complete data-set was selected to be used as training data. These model checkpoints were assessed using recall & precision to estimate performance as a function of the size of the training set. The results are shown in Table 2. As mentioned above, the training set used in the iterative process contained micrographs of equal size (512 x 512 x 3 pixels) and pixel size (0.38 nm/pixel), with an 80:220 split between over-focused and under-focused images. It should be noted that images produced by the TEM can vary in magnification and size. Therefore, our final validation set contained a mix of over- and under-focused images, as well as images of varying magnifications.

Recall & precision were calculated for the 23 final validation micrographs. The results are summarized in Table 3. Accuracy depends on the focus condition, as well as magnification. Overall, the recall & precision are good: 78% and 90%, respectively. The performance in under-focused images is best. Note that most of the training data was comprised of under-focused images.

Image Type Recall Precision
Over-focused 0.69 0.95
Under-focused 0.89 0.96
Lower Resolution 0.68 0.65
Complete Data-set 0.78 0.90
Table 3: Summary of recall and precision metrics in the various image formats found throughout the validation data-set, as well as in the complete set.

III.3 Comparison with Human Analysis

Figure 3: Mean bubble diameter, standard deviation of bubble diameter, and cumulative bubble volume as recorded by the R-CNN and manual procedures across 23 independent images. Images 1-13 are examples of performance in under-focused images, images 14-18 are over-focused, and images 19-23 are of lower resolution.

The bubble statistics for the 23 validation micrographs are plotted in Fig. 3. R-CNN and manual results are reported. As mentioned above, the validation set contains over-focused images, under-focused images, and lower resolution images. Fig. 3 shows that the R-CNN- and human-generated statistics follow the same trends.

The R-CNN- and human-based estimates of mean bubble diameter in the higher-magnification images (labelled 1-18) are within 1.3% of each other. In the five lower-resolution images (labelled 19-23), the values are within 12.5% of each other. The estimates of standard deviations of bubble diameters are within 1.8% of each other in the high-magnification images, and within 46% of each other in the low-magnification images. The estimates of total volumes are within 15% of each other, in both high- and low-magnification images.

The R-CNN took approximately 2 seconds to process each image. The manual procedure took up to 5 hours per image, although this number varies widely on a case-by-case basis. This corresponds to an improvement of roughly four orders of magnitude (about 2 s versus up to 18,000 s).

IV Discussion

The automated bubble classification method can determine defect size distributions and quantities quickly and accurately. The algorithm can accurately imitate manual methods and procedures with minimal time and computational costs. Note that the quality of the results depends on the training data from which the model correlations are developed. Table 2 highlights that recall and precision increase as the size of the data-set increases. Table 3 highlights that the model performs differently depending on the type of image it is provided. The proposed model was trained on images that were split 80:220 in favour of under-focused images. This is reflected in the effectiveness of the model: 89% recall was noted in under-focused images, while 69% recall was noted in over-focused images. Further, low recall was found in images with a lower magnification than those in the training set. In order to improve performance in these scenarios, greater amounts of data can be collected and classified for training purposes. Of note, even if the model provides partly inaccurate results (i.e., false positives and negatives), these can be easily corrected by manual post-processing. Overall, the whole procedure is much more rapid than fully manual annotation. Furthermore, manually post-processed images can be added to the training set, further improving the performance of the R-CNN.

Current mapping of TEM samples is performed on a single image sub-sampled from a through-focal series. Through-focal series are used to confirm whether an object within an image is a He bubble or surface damage. Because current mapping is performed on one image, processing of the focal series is not possible. Through an augmentation of the current bounding box outputs, it could be determined which objects remain within a known drift throughout the focal series. Objects that remain visible throughout the focal series could then be identified as bubbles, and there would be fewer cases of false positives. This confirmation of bubbles could further improve the efficacy of the network and the expansion of the overall data-set; a sketch of such a drift-tolerant matching step is given below.
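As a hedged illustration of this proposed augmentation (not part of the current implementation), the sketch below keeps only those detections from the first frame of a focal series that reappear, within an assumed drift tolerance, in every subsequent frame; the function name and tolerance value are hypothetical.

```python
def persistent_objects(series_boxes, max_drift_px=10.0):
    """Keep detections from the first frame that reappear, within a drift
    tolerance, in every other frame of the through-focal series; such persistent
    objects are more likely to be real bubbles than surface damage.

    series_boxes: list of frames, each a list of (x_min, y_min, x_max, y_max) boxes.
    """
    def center(box):
        return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

    def has_match(box, frame):
        cx, cy = center(box)
        return any(abs(center(b)[0] - cx) <= max_drift_px and
                   abs(center(b)[1] - cy) <= max_drift_px for b in frame)

    first, rest = series_boxes[0], series_boxes[1:]
    return [box for box in first if all(has_match(box, frame) for frame in rest)]
```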

Analysis is hindered by the inconsistent manner in which the samples are prepared and imaged. This leads to challenges in creating a program that is not only robust but also highly effective. Variation and inconsistencies in training data lead to difficulties in developing the correlations necessary to effectively identify defects. In TEM imaging of He bubbles, two issues in particular lead to poor performance in the R-CNN analysis. During sample preparation, if the samples are not thin enough, overlapping bubbles tend to dominate the final image. These overlapping bubbles hinder the R-CNN in identifying distinguishing features, thus reducing performance. Additionally, when there is excess FIB damage from sample milling, imperfections appear on the surface; these imperfections can look similar to bubbles and can lead to overestimation of the total bubble density. However, in these cases, as well as at low resolution, human analysis is also fairly inconsistent. One challenge is that junior operators and researchers might not have the experience to recognize that they are dealing with a low-quality image, and may therefore not produce useful images.

A means of avoiding time spent on images that cannot produce useful information is the development of a secondary neural network trained specifically to recognize high- and low-quality images. A network trained to identify images that cannot generate useful data could be placed ahead of the main R-CNN and remove those low-grade images from testing. This system, if utilized at the data-collection stage by TEM operators, would provide on-the-fly information on image quality, allowing operators to spend more time on clean samples and enhancing the amount of useful data collected. This augmentation of procedures would allow for more efficient use of operator time, which could in turn be used to generate more useful sets of data; a sketch of such a screening classifier is given below.
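Such a screening network has not been built in this work; the following is only a sketch of what a minimal binary quality classifier could look like, assuming 512 x 512 RGB micrographs and a usable/not-usable label. The architecture, names, and threshold are illustrative.

```python
import torch
import torch.nn as nn

class QualityScreen(nn.Module):
    """Tiny binary classifier (usable / not usable) applied before the R-CNN."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

# Example: score a batch of 512x512 RGB micrographs; keep only those above 0.5.
scores = QualityScreen()(torch.rand(4, 3, 512, 512)).squeeze(1)
print(scores > 0.5)
```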

Development of neural networks to help analyze micrographs of radiation-induced damage shows promise. Both dislocation loops li2018automated () and He bubbles can be consistently identified. As the field progresses and new features are added, we should mention two challenges. The first is the reliance of R-CNNs on large data sets. Such large sets are not always available for materials science applications agrawal2019deep (). In the current study, given the abundance of X-750 micrographs available, this was not an issue, but it certainly would be in other systems where the resources to generate and classify hundreds of micrographs do not exist. A second challenge pertains to the automated extraction of defect contours li2018automated (). This is not an issue when dealing with spherical objects such as bubbles, but it would be for other defect types such as dislocation loops.

V Conclusions

A neural-network-based approach to analyze large sets of noisy TEM-generated micrographs was developed and evaluated. The R-CNN can identify He bubbles in X-750 alloys after neutron irradiation. It yields bubble statistics in quantitative agreement with those extracted by human-based analysis of the micrographs. The R-CNN can process images in a few seconds, which is orders of magnitude faster than human-based analysis. Accuracy levels of 89% have been achieved when mapping micrographs of the same magnification as those in the training set, and close to 70% in micrographs of lower magnification. Consistency of imaging conditions is key to the success of the model. Images to be analyzed should be similar to the images used in model training; in particular, focusing conditions should be the same. Manual post-processing of R-CNN-annotated micrographs can be used to progressively improve the training set, with a lower time investment than full manual annotation. Deep learning shows promise, and can likely be used to improve many other aspects of characterization of materials, including those used for nuclear power generation.

VI Acknowledgements

We would like to thank the researchers at the Canadian Nuclear Labs who provided the micrographs used for training as well as the background information on classification of the bubbles. We thank Compute Canada for generous allocation of computer resources. Research was funded by the Canadian Nuclear Laboratories, Mitacs Canada, and the Natural Sciences and Engineering Research Council of Canada.

References

  1. Mike L Jenkins and Mark A Kirk. Characterisation of radiation damage by transmission electron microscopy. CRC Press, 2000.
  2. He Ken Zhang, Zhongwen Yao, Gregory Morin, and Malcolm Griffiths. TEM characterization of in-reactor neutron irradiated CANDU spacer material Inconel X-750. Journal of Nuclear Materials, 451(1-3):88–96, 2014.
  3. Colin D Judge, Nicolas Gauquelin, Lori Walters, Mike Wright, James I Cole, James Madden, Gianluigi A Botton, and Malcolm Griffiths. Intergranular fracture in irradiated Inconel X-750 containing very high concentrations of helium and hydrogen. Journal of Nuclear Materials, 457:165–172, 2015.
  4. Rajakumar Judge. High resolution transmission electron microscopy of irradiation damage in Inconel X-750. The Minerals Metals and Materials Science, 2018.
  5. Colin David Judge. The Effects of Irradiation on Inconel X-750. PhD thesis, 2015.
  6. Robert C. Block, Yaron Danon, Frank Gunsing, and Robert C. Haight. Neutron Cross Section Measurements, pages 1–81. Springer US, Boston, MA, 2010.
  7. J. Kopecky, J. C. Sublet, J. A. Simpson, R. A. Forrest, and D. Nierop. Atlas of neutron capture cross sections. Technical report, International Atomic Energy Agency (IAEA), 1997. INDC(NDS)–362.
  8. Raymond Gibson. The safe transport of radioactive materials. Elsevier, 2013.
  9. Dan C Cireşan, Alessandro Giusti, Luca M Gambardella, and Jürgen Schmidhuber. Mitosis detection in breast cancer histology images with deep neural networks. In International Conference on Medical Image Computing and Computer-assisted Intervention, pages 411–418. Springer, 2013.
  10. Yanan Zhu, Qi Ouyang, and Youdong Mao. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy. BMC bioinformatics, 18(1):348, 2017.
  11. Rama K Vasudevan, Nouamane Laanait, Erik M Ferragut, Kai Wang, David B Geohegan, Kai Xiao, Maxim Ziatdinov, Stephen Jesse, Ondrej Dyck, and Sergei V Kalinin. Mapping mesoscopic phase evolution during e-beam induced transformations via deep learning of atomically resolved images. npj Computational Materials, 4(1):30, 2018.
  12. Artem Maksov, Ondrej Dyck, Kai Wang, Kai Xiao, David B Geohegan, Bobby G Sumpter, Rama K Vasudevan, Stephen Jesse, Sergei V Kalinin, and Maxim Ziatdinov. Deep learning analysis of defect and phase evolution during electron beam-induced transformations in WS2. npj Computational Materials, 5(1):12, 2019.
  13. Wei Li, Kevin G Field, and Dane Morgan. Automated defect analysis in electron microscopic images. npj Computational Materials, 4(1):36, 2018.
  14. Ross Girshick. Fast R-CNN. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
  15. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91–99. Curran Associates, Inc., 2015.
  16. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
  17. Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.
  18. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
  19. Tzutalin. LabelImg. https://github.com/tzutalin/labelImg, 2015.
  20. Jayden O van Horik, Ellis JG Langley, Mark A Whiteside, Philippa R Laker, and Joah R Madden. Intra-individual variation in performance on novel variants of similar tasks influences single factor explanations of general cognitive processes. Royal Society open science, 5(7):171919, 2018.
  21. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
  22. Kevin P Murphy. Machine learning: a probabilistic perspective. MIT Press, Cambridge, MA, 2012.
  23. Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union: A metric and a loss for bounding box regression. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  24. Ankit Agrawal and Alok Choudhary. Deep materials informatics: Applications of deep learning in materials science. MRS Communications, pages 1–14, 2019.