Classification of Spot-welded Joints in Laser Thermography Data using Convolutional Neural Networks


Spot welding is a crucial process step in various industries. However, classification of spot welding quality is still a tedious process due to the complexity and sensitivity of the test material, which pushes conventional approaches to their limits. In this paper, we propose an approach for quality inspection of spot weldings using images from laser thermography data. We propose data preparation approaches based on the underlying physics of spot-welded joints heated with pulsed laser thermography: we analyze the intensity over time and derive dedicated data filters to generate training datasets. Subsequently, we utilize convolutional neural networks to classify weld quality and compare the performance of different models against each other. We achieve competitive results compared to traditional approaches in classifying the different welding quality classes, reaching an accuracy of more than 95 percent. Finally, we explore the effect of different augmentation methods.

I Introduction

Spot welding plays a major role in joining technologies, especially in the automotive industry. Traditional methods to assure the quality of spot-welded joints include random and periodic destructive tests like torsion testing or manual destructive testing, where the specimen has to be cut in half to be investigated. These methods are tedious and destroy the sample. Non-destructive testing (NDT) methods reduce the costs of quality assurance and allow an optimization of the spot welding process: since every joint can be checked, the number of spot-welded joints can be reduced. Among the popular NDT methods for quality inspection of welded material are ultrasonic testing, X-Ray tomography [20], acoustic emission testing and laser thermography. X-Ray imaging has been considered a reliable approach to assess welding quality. Kar et al. [12] used X-Ray tomography to study the porosity of welded joints and assess their quality. Patil et al. [18] investigated weld defects using X-Ray radiography and found that the X-Ray method could reveal more defects than a visual inspection. While X-Ray imaging is a commonly used NDT method, the necessary radiation protection is a major limitation, so it cannot easily be applied for in-situ inspection. In addition, X-Ray computed tomography is expensive compared to other NDT methods such as ultrasound or thermography. Furthermore, the penetration depth of the waves is limited, especially with multi-layered material, so small defects could not be detected, as observed by Duchene et al. [6]. As an alternative, ultrasonic approaches are being increasingly considered. Yu et al. [27] proposed an approach which employs high-order ultrasonic waves to detect damages in welded joints and could thus enhance the detection sensitivity for small weld flaws. Tabatabaeipour et al. [24] proposed an immersion ultrasonic testing method based on observing backscattered-energy C-scan images. Papanikolaou et al. 
[17] used ultrasonic testing as an NDT method to inspect various parameters, such as the chemical composition or mechanical properties of the specimen, to determine its wear. The researchers report enhanced results using ultrasonic testing compared to visual testing and liquid penetrant testing. Acoustic emission approaches, on the other hand, utilize ultrasonic waves at much higher frequencies and have been employed in a variety of works. Shrama et al. [21] applied acoustic emission to inspect welded joints for damages. They conducted a variety of tests and report an improved understanding of damage mechanisms for early maintenance. Kubit et al. [13] utilized acoustic microscopy to evaluate joint quality. Despite its increased sensitivity, the setup and operation are very complex. Active thermography, in contrast, has recently emerged as a method which allows contactless, fast and reliable testing at lower operating costs than, e.g., computed tomography. The feasibility of spot weld inspection based on thermography was theoretically examined in [22]. In [15], the researchers showed that thermography is a robust alternative and can be calibrated using X-Ray methods. A non-destructive testing approach based on laser thermography was proposed by Jonietz et al. [11], where the researchers could detect important quality metrics like the welding diameter by applying active thermography in transmission and reflection. However, the quality of the spot-welded joints could not be assessed in detail.
Convolutional neural networks (CNNs) have achieved remarkable results in computer vision for tasks such as anomaly detection and classification, and have thus gained immense popularity in NDT research in recent years. Cruz et al. [5] used CNNs to detect defects in ultrasound testing. Works by [23], [25] and [9] use CNNs to detect welding defects within X-Ray images and show performance enhancements. For instance, Wang et al. [25] used a RetinaNet-based CNN architecture to detect and classify three different types of defects inside X-Ray images. Zhang et al. [28] presented a weld defect detection on X-Ray images based on CNNs. The researchers achieve satisfying results in detecting features relevant for quality assessment. However, since the X-Ray approach is based on the transmission of radiation through the spot-welded joint, it requires access from both sides. Janssens et al. [10] explored the usage of a deep neural network on infrared thermal images to monitor machine health by detecting fault conditions of moving machine components. The researchers report a significant performance boost when applying CNNs and show that relevant regions could be identified and visualized to detect potential failures. Nasiri et al. [16] used CNNs to detect six conditions in thermal images of cooling tubes. Similar to our work, Yang et al. [26] used a Faster-RCNN-based architecture to visualize defects inside heat-inducted metal plates. They analyze the heat distribution and propose an improved Faster-RCNN architecture to visualize and detect the cracks. Dung et al. [7] explored the effect of CNNs on welded joints on gusset plates and demonstrate their feasibility when using transfer learning and data augmentation. In this paper, we will utilize both methods as well, but will develop data preparation methods specifically for thermography data.
Therefore, we consider the underlying physics, such as the heat distribution and the temporal component of thermal images, which provide more information about the specimen. In addition, our data acquisition approach is contactless and, very importantly, requires access to the weld from one side only, making it well suited for in-situ quality inspection. For improved feature visualization, we apply preprocessing steps presented in [11].

The main contributions of this work are the following:

  • Proposal of a CNN-based method to classify welding quality from thermal images that are not distinguishable by human visual inspection.

  • Proposal of methods to generate a feasible training dataset from thermal images by analyzing the underlying physics and generating filters accordingly.

  • Evaluation of different data augmentation methods and their effect on thermal datasets.

  • Performance evaluation of three state-of-the-art neural network architectures.

The paper is structured as follows: Sec. II presents the theoretical foundations utilized in our approach. The methodology, including the overall concept and the implementation of each module, is presented in Sec. III. Sec. IV presents the results and discussion. Finally, Sec. V gives a conclusion and outlook.

II Theoretical foundations

The data analyzed in this contribution were acquired using laser thermography. The theoretical background is presented in this section.

Fig. 1: Sketch for theoretical understanding of IR radiant flux in our experiments. The IR camera receives different IR radiation components (red arrows), whereas the direct component from the specimen is given as a blue arrow. The schematic of the cross section through a welding joint is given on top in gray color.

II-A Description of the thermal radiation components and emissivity corrections

Fig. 1 illustrates our setup for a theoretical understanding of the IR radiant flux used in our experiments. The radiant flux (SI unit: Watt) is a common quantity to describe the intensity level of the IR radiation. Fig. 1 shows the IR radiation components as detected by the IR camera in the measurement environment: direct radiation from the ambient environment (Φ_amb), environmental radiation reflected from the surface of our investigated specimen (Φ_refl), as well as radiation from the measurement path between specimen and measurement device (Φ_atm), which is caused by atmospheric absorbers (e.g., air, humidity, CO₂). All these disturbing quantities (summed up in the following as Φ_dist) are detrimental for our measurement, since we are interested in measuring the radiant flux of the specimen Φ_spec. In addition, we do not know the exact emissivity ε of our specimen, which indicates how much radiation it emits compared to an ideal heat radiator, i.e. a black body (BB). These conditions lead to the following total radiant flux during our measurements for every pixel:

Φ_total = ε Φ_spec(T) + Φ_dist    (1)

The emissivity ε is a unit-less scalar with 0 ≤ ε ≤ 1. According to the Stefan-Boltzmann law, the radiant flux depends on the temperature, Φ_spec = Φ_spec(T). During a thermographic measurement we can rewrite Φ_spec(T) to Φ_spec(T(t)), and before the measurement we can write Φ_spec(T_RT), with T_RT standing for the room temperature and T(t) accounting for the temporal heating given by the laser illumination. Assuming constant environmental conditions and temperature-independent optical quantities of the specimen, ε and Φ_dist remain the same during the experiment. The environmental disturbances can therefore be removed by considering the radiant flux difference ΔΦ(t) = Φ_total(t) − Φ_total(t_0) = ε [Φ_spec(T(t)) − Φ_spec(T_RT)], where t_0 is a time before the heating. Further, the unknown emissivity can be removed by considering the normalized radiant flux difference ΔΦ(t)/ΔΦ(t'), where t' refers to another time after the sample has cooled down to a temperature T(t'). For more detailed explanations we refer to [11]. In this contribution we utilized this method to generate a noise-free dataset without any uncertainties due to the emissivity. Please note that in this approach we have to calculate with the temperature-dependent radiant flux (as measured with the IR camera) and not with the temperature itself (calculated inside the IR camera based on a previous calibration). Moreover, using the Stefan-Boltzmann law is an approximation, since the IR camera is sensitive in a restricted spectral range only.
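The two-step normalization described above (subtracting a pre-heating frame, then dividing by the difference at a reference time) can be sketched in a few lines of numpy. This is a minimal illustration with synthetic data, not the paper's implementation; array sizes and the heating curve are arbitrary.

```python
import numpy as np

def normalize_film(film, t_before, t_ref):
    """Remove constant disturbances and the unknown emissivity from a
    thermal film (shape: time x height x width).

    film[t_before] : frame taken before laser heating (room temperature)
    film[t_ref]    : frame at a later reference time during cool-down
    """
    # Subtracting the pre-heating frame cancels the constant disturbance
    # term (ambient, reflected and path radiation).
    delta = film - film[t_before]
    # Dividing by the difference at a reference time cancels the
    # pixel-wise emissivity, which scales both differences equally.
    return delta / delta[t_ref]

# Synthetic check: every pixel shares one heating curve s(t) but has its
# own emissivity and constant disturbance offset.
rng = np.random.default_rng(0)
eps = rng.uniform(0.3, 1.0, size=(4, 4))    # unknown emissivity map
dist = rng.uniform(0.0, 5.0, size=(4, 4))   # constant disturbances
s = np.array([0.0, 3.0, 2.0, 1.0])          # heating curve over time
film = eps[None, :, :] * s[:, None, None] + dist[None, :, :]

norm = normalize_film(film, t_before=0, t_ref=3)
# After normalization every pixel shows the same curve s(t)/s(t_ref),
# independent of emissivity and disturbances.
assert np.allclose(norm[1], 3.0) and np.allclose(norm[2], 2.0)
```

The assertions confirm that emissivity and disturbance maps drop out, which is exactly why the method yields a dataset free of emissivity uncertainties.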

II-B Data description

In our experiment, we perform pulsed thermography using a rectangular-shaped homogeneous laser illumination over the whole area of interest. Therefore, we can calculate the solution (referring to the two spatial dimensions x and y, see Fig. 1) of the homogeneous heat diffusion equation for a 2D heating source in reflection configuration by [4]:

T(t) = Q / (ρ c A √(π α t)) · [1 + 2 Σ_{n=1..∞} Rⁿ exp(−n² d² / (α t))]    (2)

whereby Q describes the absorbed radiation energy from the laser, A the illuminated area, t the time, ρ the material density, c the specific heat capacity of the material, α the diffusivity, R the thermal reflectivity (material to air), n the number of reflections of the so-called thermal wave and d the thickness of the plate. The given temperature evolution refers to an ideal sheet infinitely extended in the plane and an infinitely short heating impulse. For the actual specimen and experimental setup, we can only get a first impression of how the temperature evolves and concentrate on the transient signal contrast in the dataset caused by the geometry of the specimen. Fig. 1 shows that the value for d differs, since the specimen consists of two steel sheets welded together by a spot-welded joint. This means that, according to eq. (2), the solution for the heat diffusion equation in the area of the spot-welded joint uses the combined thickness of both sheets for d, whereas the region outside the spot-welded joint uses the thickness of the upper sheet only. Fig. 2 (b) also shows the main difference in the heat flow visually (red: high temperature, blue: low temperature). Since we are measuring in reflection configuration (IR camera and laser on the same side of the specimen), we observe a hot rim outside the spot-welded joint region, since the heat is accumulated there. On the other hand, we observe a cold spot in the middle, since the heat diffuses through the spot-welded joint towards the other steel sheet. Therefore, a good connection should yield an evident contrast between the regions inside and outside the spot-welded joint.
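The qualitative contrast described above (cold spot over the joint, hot rim outside) can be reproduced numerically from the thermal-wave reflection solution. The sketch below evaluates a surface-temperature expression of this form with illustrative steel-like material parameters and sheet thicknesses; none of the numeric values are taken from the paper.

```python
import numpy as np

def surface_temperature(t, d, Q=1.0, A=1.0, rho=7850.0, c=490.0,
                        alpha=1.2e-5, R=0.9, n_max=50):
    """Surface temperature rise after an ideal Dirac heating pulse in
    reflection configuration; SI units, illustrative steel-like defaults."""
    t = np.asarray(t, dtype=float)
    base = Q / (rho * c * A * np.sqrt(np.pi * alpha * t))
    n = np.arange(1, n_max + 1)
    # Thermal-wave echoes from the back wall of a plate of thickness d.
    expo = -(n[:, None] * d) ** 2 / (alpha * t[None, :])
    echoes = np.sum(R ** n[:, None] * np.exp(expo), axis=0)
    return base * (1.0 + 2.0 * echoes)

t = np.array([0.5, 1.0, 2.0])   # seconds after the pulse
d_sheet = 1.5e-3                # single sheet (outside the joint)
d_joint = 3.0e-3                # two sheets connected by the weld
T_out = surface_temperature(t, d_sheet)
T_in = surface_temperature(t, d_joint)
# The welded region conducts heat into the second sheet, so its surface
# stays colder than the surrounding single-sheet region.
assert np.all(T_in < T_out)
```

The assertion mirrors the physical argument: a larger effective thickness d over the joint weakens the reflected thermal waves, producing the cold spot that a good weld should show.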


Fig. 2: (a) Metallography of one of our specimens after applying resistance spot-welding. (b) Data acquisition setup. The specimen, consisting of two welded metal sheets, is heated up with active laser thermography. The heat distribution over time is measured with the IR camera. The diameter of the spot-welded joints is around 4-8 mm.
Fig. 3: (a) Classification of welding quality. Specimens for the three different quality classes and their respective normalized thermal image are shown. It is evident that the classes appear similar. For instance, it is hard to classify between image 1612 and 1587. (b) Destructive testing as reference method to assure the quality of the spot-welded joints using hammer and chisel.

In the following, we are working with intensities I_t, t = 1, …, N, where N designates the number of time stamps, which is equal to the number of measured thermal images (thermograms) in a thermal film sequence. These intensities refer to the radiant flux of the thermal radiation as measured by the InSb-based detector of the IR camera and converted to digits using an analog-to-digital converter. Thus, in this work, the intensity values in a thermal image are given pixel-wise as digits representing the measured radiant flux.

III Methodology

After describing the underlying physics and theoretical foundations of our data, in this section, we will present the methods that we used for our proposed quality assessment use case.

III-A Experimental setup and data acquisition

Our dataset was collected from specimens that were made using an electric welding system, see Fig. 2. These specimens each consist of two resistance spot-welded hot-dip galvanized micro-alloyed steel sheets HX340LAD [1] (zinc layer approximately 7.5 on each side), which are typically used in the automotive industry and have a thickness of mm. The resistance spot-welding was performed with a welding current of kA, a pressure of kN, and a welding time of ms using an electrical spot welding machine. According to the procedure for the determination of the electrode life [2], more than 1600 spot weldings were performed. After approximately 1000 welds, the end of the electrode life was reached and the electrode started to produce unreliable spot-welded joints. We tested 115 welds using thermography, starting from weld no. 1510. As reference, we applied destructive chisel testing according to Ref. [8]. The setup for data acquisition is illustrated in Fig. 2 (b). We used active laser thermography for all tested specimens and captured 250 frames over time, which results in a film for every test object visualizing the spatial heat distribution at each time step. The laser radiation was switched on for a duration of one second at W, illuminating a square-shaped area. The thermal images were measured with an IR camera (InSb detector, sensitive between m, frame rate: Hz, spatial resolution varied between and m/pixel). The utilized fiber-coupled laser emits in the near-infrared range ( nm) and therefore does not interfere with the detector range of the IR camera. The laser heats up the specimen with a spot-welded joint. As can be observed in Fig. 3, the challenge of our thermal dataset is the similarity of the raw infrared data for different quality classes, which are not distinguishable by human visual inspection. For instance, it is hard to distinguish between image 1612 and 1587 or 1533 and 1548, despite their different classes. 
The features specifying each class are not evident, which causes common feature extractors like CNNs to struggle. On that account, we explore ways to generate feasible datasets by utilizing the underlying physics of the laser thermography process described in the previous section.

Fig. 4: Data engineering workflow with exemplary filters illustrated in the data filtering section.

III-B Data filtering

One aspect of this work is to explore how to process the normalized intensity data described in the previous section (see section II-B) to provide reliable predictions using CNNs. Therefore, we study different filters and their effect on the performance of the CNN. From each film we only extract certain images, defining a filtered set F with cardinality |F| ≤ N. Referring to eq. (1), each film can be reduced to a 1D array of mean normalized intensities Ī_t with t = 1, …, N, so that we can describe the filtered data by:

F = { I_t  |  t_min ≤ t ≤ t_max  and  Ī_min ≤ Ī_t ≤ Ī_max }    (3)

whereby F represents a subset of the whole measured dataset with the frame and intensity bounds specified in Table I, so that F denotes a filtered dataset. The intensities are the normalized intensity differences, as similarly described for the radiant flux in section II-A. Thus, the filters are defined based on intensity values of the films. These filters can lead to positive effects, as we are investigating a dynamic temperature behaviour over time: extracting only frames with significant changes in their amplitude, e.g. during heating or at the beginning of the cooling phase, yields more evident features within the datasets. The intensity Ī_t is calculated as the average value of all pixels in the image. Fig. 4 (upper right corner) illustrates the intensity and gradient values referring to the temperature-time diagram as well as marked areas of filters and resulting datasets. For the generation of our final results, we also use combinations of different filtered sets, i.e. unions of the individual filtered datasets listed in Table I. In total, we define 12 different filtered datasets for the whole film, each representing a different status of heating, to investigate the effects of certain areas of the intensity curve on the performance of the CNN. The image counts of each dataset before and after augmentation are listed in Table I as (before || after). The applied augmentation methods are described in the next section.

Frames   | Intensity range | Images (before || after) | Description
1-25     | 5k-7k           | 2.5k || 20k              | No Heating
35-45    | 9k-11k          | 1k || 8k                 | Intensity Peak 2
51-60    | 12.5k-14.4k     | 900 || 4560              | Maximum Intensity
61-75    | 9.7k-11k        | 1.4k || 11.2k            | After-Maximum
76-100   | 8k-9.1k         | 2.4k || 19.2k            | Cool Down
101-135  | 7k-8k           | 3.4k || 27.2k            | Cool Down
136-170  | 6.6k-7k         | 3.4k || 27.2k            | Cool Down
171-210  | 6.2k-6.5k       | 3.9k || 31.2k            | Cool Down
211-250  | 6.05k-6.1k      | 3.9k || 31.2k            | End
20-75    | 8k-14.4k        | 5k || 40k                | Peaks Combined
1-250    | 0.4k-14k        | 10k || 80k               | Whole Film
101-250  | 6.2k-8k         | 7.5k || 60k              | Cool Down
TABLE I: Image count for datasets
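A filter of this kind (a frame-index window combined with a mean-intensity window) is straightforward to implement. The numpy sketch below is an illustrative implementation on synthetic data, loosely analogous to the "Maximum Intensity" row of Table I; the synthetic intensity curve is invented for the demonstration.

```python
import numpy as np

def filter_film(film, frame_range, intensity_range):
    """Select frames by 1-based index range and by mean-intensity range,
    mirroring the filter definitions in Table I."""
    lo_f, hi_f = frame_range
    lo_i, hi_i = intensity_range
    # Mean intensity per frame, averaged over all pixels.
    means = film.mean(axis=(1, 2))
    idx = np.arange(1, film.shape[0] + 1)
    mask = (idx >= lo_f) & (idx <= hi_f) & (means >= lo_i) & (means <= hi_i)
    return film[mask]

# Synthetic film: 250 frames whose mean intensity rises and decays.
t = np.arange(250)
curve = 5000 + 9000 * np.exp(-((t - 55) / 40.0) ** 2)
film = np.repeat(curve[:, None, None], 4, axis=1).repeat(4, axis=2)

# Filter analogous to the "Maximum Intensity" row (frames 51-60,
# intensities 12.5k-14.4k).
subset = filter_film(film, frame_range=(51, 60), intensity_range=(12500, 14400))
assert subset.shape[0] == 10
```

Combining filters then simply amounts to concatenating the subsets returned for several (frame, intensity) windows.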

III-C Data augmentation

It is well known in data science that data augmentation techniques such as scaling, rotation and flipping yield a better data basis for the application of CNNs. We first filter the data to obtain a set F and then augment it, yielding a new set A:

A = { c(p(I))  |  I ∈ F }    (4)

where p are coordinate transformations and c are color transformations which change the intensity values of a pixel within a film. More specifically, in this work we employ the following data augmentation techniques: as coordinate transformations, horizontal and vertical flipping as well as rotation by a random angle between -90 and 90 degrees; as color transformations, random changes of brightness, contrast, saturation, and PCA color augmentation.
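These augmentations can be sketched with plain numpy. The snippet below is an illustrative simplification: the continuous rotation range is approximated by 90-degree steps (arbitrary-angle rotation would need interpolation, e.g. via scipy), and the color transformation is reduced to a random gain and offset.

```python
import numpy as np

def augment(image, rng):
    """One random positional + color augmentation of a thermal frame
    (H x W float array). Simplified sketch: flips, 0/±90-degree
    rotation, and brightness/contrast scaling as the color transform."""
    # Positional transforms: the heat diffusion is point-symmetric,
    # so flips and rotations preserve the physical information.
    if rng.random() < 0.5:
        image = np.flip(image, axis=0)              # vertical flip
    if rng.random() < 0.5:
        image = np.flip(image, axis=1)              # horizontal flip
    image = np.rot90(image, k=rng.integers(-1, 2))  # -90, 0 or +90 deg
    # Color transform: slightly rescale and shift the intensities.
    gain = rng.uniform(0.9, 1.1)
    offset = rng.uniform(-0.05, 0.05) * image.mean()
    return image * gain + offset

rng = np.random.default_rng(42)
frame = rng.random((64, 64))
out = augment(frame, rng)
assert out.shape == (64, 64)
```

Applying `augment` several times per filtered frame yields the enlarged image counts listed in Table I.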


III-D Data Labeling

Three classes are considered for classification: good, bad and medium. Fig. 3 (a) describes the labeling benchmark on which the data labeling is based. This benchmark was created by destructive testing using standardized chisel testing [8], i.e. by destroying the specimen and inspecting the welding quality visually by a human expert (see Fig. 3 (b)). As a result, each image in our dataset carries a label stating whether it has good (standard spot weld diameter), bad (stick weld, i.e. no or only a minimal actual spot weld) or medium (undersized weld nugget leading to a weak mechanical joint) welding quality.

III-E Neural Network Design

The data engineering steps previously discussed enable us to generate a feasible dataset with evident features for a convolutional neural network to robustly assess the welding quality. An important aspect of our data is that the frames starting approximately from frame 100 (after the cooling-down phase) quickly become similar to each other and are no longer distinguishable. Since the areas of interest only contain 7-12 frames, a long short-term memory (LSTM) based approach, which considers the temporal dependency, would not deliver the desired results. The incorporation of recurrent neural networks was not considered because of the dominance of similar-looking frames, which comprise nearly 80 percent of the film. Furthermore, the dynamics of our dataset are too low, with only small changes visible between the frames. However, our observations also show that especially in the relevant areas, like the maximum intensity area, the features become evident for each class. Based on these considerations, a Faster-RCNN based 2D convolutional network is employed, which analyzes one dedicated frame of the film to make the prediction. Our architecture is based on the original Faster-RCNN [19] with the input modified to match our thermal data and a ResNet-101 backbone network. The three-channel input image is passed through the backbone network to generate feature maps, which are fed into the region proposal network. Subsequently, for each region proposal, a bounding box regressor and a softmax classifier are applied to detect and locate the defects. The architecture is illustrated in Fig. 5.

Fig. 5: Architecture of our neural network

IV Results and Discussion

IV-A Filter Evaluation

We trained the CNN with different datasets generated by applying the filters introduced in Sec. III. Furthermore, the positional data augmentation techniques defined in Sec. III were applied: the images were horizontally and vertically flipped and rotated by a random value between -90 and 90 degrees. Since the heat diffusion is point-symmetric, these positional changes do not affect the original information of the frame. Fig. 6 illustrates the accuracies for the different datasets, each representing an intensity area. We used the mean average precision (mAP) as evaluation metric, which indicates the classification probability of a correct result for a bounding box overlap of 50 percent with the ground-truth label (intersection over union = 0.5). The average of the accuracies for all three classes was calculated.
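The overlap criterion underlying the mAP metric is the intersection over union (IoU) of a predicted and a ground-truth box. The short sketch below shows the standard computation; the example boxes are invented for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct at mAP@0.5 if it overlaps the
# ground-truth box with IoU >= 0.5 and predicts the right class.
pred = (10, 10, 50, 50)
gt = (20, 10, 60, 50)
assert iou(pred, pred) == 1.0
assert iou(pred, gt) >= 0.5
```

With this criterion, the per-class average precisions are computed and averaged over the three classes to obtain the reported mAP values.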

Fig. 6: Accuracies of different intensity areas. The red curve is the average intensity curve of all films, on which the filters are defined. The bars represent the corresponding models' accuracies. Depending on the test dataset, the accuracy varies due to the more evident features of specific areas.

The highest accuracy is observed when training on frames with large intensity values. However, using the same frames within smaller chunks of data results in a significantly decreased performance. On that account, the effect of a combined dataset is explored by combining multiple filters as well as by using the whole dataset for training. An evident accuracy boost can be observed when using the filtered dataset with images from the frames of maximum intensity. However, it is noticeable that the smaller datasets from within the same area of intensity result in significantly worse accuracies than the combined dataset. This observation is also evident when combining the datasets of frames 100 to 250, when the specimen is in its cool-down stage. The results, albeit already poor with only 30-40 percent accuracy, gain a small boost to 42 percent accuracy when the datasets are combined. However, since the specimen at the end of a film has already cooled down completely, the visual differences between frames vanish. Thus, the results are in line with our theoretical statements from section II. These frames should therefore not be used when training the CNN, as the similarity of the training data affects the performance of the CNN negatively. Overall, we could improve accuracy by 6 percent by specifying filters which consider frames from the maximum intensity area of the film. Interestingly, using the whole film does not decrease the accuracy significantly. As expected, areas at the end of the film result in imprecise predictions with an accuracy of 40 percent. Fig. 7 showcases the detections of the two best-performing models. While most test films could be classified correctly, there are some cases in which one model gives a wrong prediction while the other classifies it correctly.

Fig. 7: Performance comparison between different models. The upper row showcases the detections for three different films by the first model, which were classified falsely, while the lower row shows the detections of the same frames by the second model, which were classified correctly.

IV-B Data Augmentation Evaluation

To evaluate the impact of data augmentation methods, we applied different positional as well as color augmentations as defined in Sec. III. The results are depicted in Fig. 8. For the color augmentation, brightness, contrast, saturation and PCA color were each changed by a random value.

Fig. 8: Impact of different augmentation methods on combined datasets

The accuracy could be improved when using the positional augmentations compared to the datasets without augmentation. This effect is more evident for some of the datasets. Since the heat diffusion is point-symmetric, positional changes like rotation or flipping do not falsify the information inside the images. As expected, color augmentations affect the accuracy negatively for most datasets. Notably, the effect was not as pronounced as assumed. It is most evident in the area of maximum intensity, where the color augmentation decreased the accuracy. The area at the beginning of the cooling phase even experiences a performance increase when using color augmentation. This indicates a potential boost from color augmentation due to the similar intensity values at later stages of the cool-down. The observed decrease in accuracy at stages where the intensity is high is due to the more evident spatial differences between frames, which a color augmentation would only disturb.

IV-C Comparison with other CNN architectures

The dataset generated by applying the combined filters has resulted in robust performance. Based on this, we evaluated two additional network architectures, namely RetinaNet and Cascade-RCNN. Furthermore, we evaluated the classification accuracies for the three different classes 'good', 'medium' and 'bad'. RetinaNet employs an additional focal loss function [14], while Cascade-RCNN employs additional networks as cascade stages [3]. Table II lists relevant metrics of our training for all approaches. Figs. 9 to 11 illustrate the predictions.

Metric (good / medium / bad) | Faster-RCNN          | RetinaNet            | Cascade-RCNN
Error rate (%)               | 6.33 / 36.78 / 8.85  | 9.97 / 45.6 / 12.66  | 2.72 / 34.66 / 16.54
AP (%)                       | 92.14 / 81.57 / 94.23 | 78.12 / 74.21 / 80.97 | 95.31 / 84.4 / 94.7
TABLE II: Evaluation metrics for different architectures. The error rate metric indicates what percentage of all predictions were false; the AP metric denotes the average likelihood of correct predictions.

The error rate metric indicates what percentage of the predictions were wrong; for most classes, over 90 percent of the predictions were correct. For the average precision metric, we averaged the values of all correct predictions for the different classes. Each bounding box gives a likelihood of the predicted class, e.g., a value of 0.94 denotes that the probability of the class is 94 percent. Faster-RCNN and Cascade-RCNN achieve the highest average precision. Especially for the classes 'good' and 'bad', over 95 and 94 percent are achieved, respectively. The accuracy and error rates indicate a stable and reliable prediction for all models. Cascade-RCNN achieves the best results: a 97 percent accuracy for the class 'good' and 93 percent for the class 'bad'. As expected, the performance is worse for the prediction of the class 'medium': Faster-RCNN achieves a 63 percent accuracy, while Cascade-RCNN achieves 65 percent. The reduced accuracy has several reasons: films of the class 'medium' were represented least, with only 17 percent of the data. This imbalance between the classes 'bad' and 'good' compared to 'medium' leads to poor performance. Furthermore, the class 'medium' is generally hard to visually distinguish from the classes 'good' and 'bad'. Remarkably, RetinaNet performs worst in all metrics. This could be attributed to the fact that the welding spots are generally hard to detect and classify because, even with preprocessing, the features are blurry due to the unordered heat distribution throughout the whole image, which makes it hard for classifiers to spot relevant regions. This is exacerbated by the fact that one-stage detectors rely on a single stage rather than incorporating an additional region proposal network. Furthermore, large objects are known to cause difficulties for one-stage detectors; in our case, the object comprises almost 80 percent of the whole image, which might be another reason why RetinaNet performed worse.

Fig. 9: Detection results for class ’good’
Fig. 10: Detection results for class ’medium’
Fig. 11: Detection results for class ’bad’

V Conclusion

Classifying the quality of spot weldings is a tedious process in industry due to the lack of reliable and robust non-destructive inspection methods. Common approaches analyze weldings using hand-engineered features. Neural networks bear the potential to automate the process and learn the relevant features to assess the quality. In this work, we have explored the effect of thermal dataset preparation to generate feasible training datasets for CNNs. To this end, we took the underlying physical foundations into account and analyzed the intensity values of spot-welded joints after pulsed laser thermography. Based on these observations, we proposed data filters and explored their effect on the performance of the CNN. Overall, we achieve an accuracy of 95 percent in classifying the quality of welds, which motivates replacing destructive testing methods. Our approach utilizes data generated with laser thermography, which is a cheaper alternative to X-Ray approaches and can easily be applied in-situ. Additionally, it allows non-contact inspection, as opposed to conventional ultrasonic approaches. We demonstrated an improvement of 6 percent when applying our data filters, which are based on the maximum intensity area of the film. An important finding is that small data chunks are not sufficient, even with data augmentation, to deliver robust results; a dataset covering multiple frames is always to be preferred. We also demonstrated the effect of different augmentation methods on different areas along the intensity curve: color augmentation is especially useful for the cooling stage, when the data is similar, while positional augmentations like rotation and flipping can boost accuracy at the earlier stages. Further steps include the modification and optimization of the used neural network models with physics-based optimizers to detect more complex anomalies, especially in thermal images. Additionally, we aim to perform the detection in the frequency domain, which could potentially deliver enhanced results in terms of computational performance and accuracy.


References

  1. EN 10346 (2015) Continuously hot-dip coated steel flat products - technical delivery conditions. British-Adopted European Standard.
  2. ISO 18278-2 (2016) Resistance welding - weldability - part 2: evaluation procedures for weldability in spot welding. Beuth Verlag.
  3. Z. Cai and N. Vasconcelos (2018) Cascade R-CNN: delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6154–6162.
  4. K. D. Cole et al. (2010) Heat Conduction Using Green's Functions. 2nd edition, CRC Press.
  5. F. Cruz, E. Simas Filho, M. Albuquerque, I. Silva, C. Farias and L. Gouvêa (2017) Efficient feature selection for neural network based detection of flaws in steel welded joints using ultrasound testing. Ultrasonics 73, pp. 1–8.
  6. P. Duchene, S. Chaki, A. Ayadi and P. Krawczak (2018) A review of non-destructive techniques used for mechanical damage assessment in polymer composites. Journal of Materials Science 53 (11), pp. 7915–7938.
  7. C. V. Dung, H. Sekiya, S. Hirano, T. Okatani and C. Miki (2019) A vision-based method for crack detection in gusset plate welded joints of steel bridges using deep convolutional neural networks. Automation in Construction 102, pp. 217–229.
  8. EN ISO (2005) Resistance welding - testing of welds - peel and chisel testing of resistance spot and projection welds. Beuth Verlag.
  9. S. Faghih-Roohi, S. Hajizadeh, A. Núñez, R. Babuska and B. De Schutter (2016) Deep convolutional neural networks for detection of rail surface defects. In 2016 International Joint Conference on Neural Networks (IJCNN), pp. 2584–2589.
  10. O. Janssens, R. Van de Walle, M. Loccufier and S. Van Hoecke (2017) Deep learning for infrared thermal image based machine health monitoring. IEEE/ASME Transactions on Mechatronics 23 (1), pp. 151–159.
  11. F. Jonietz, P. Myrach, H. Suwala and M. Ziegler (2016) Examination of spot welded joints with active thermography. Journal of Nondestructive Evaluation 35 (1), pp. 1.
  12. J. Kar, S. K. Dinda, G. G. Roy, S. K. Roy and P. Srirangam (2018) X-ray tomography study on porosity in electron beam welded dissimilar copper–304SS joints. Vacuum 149, pp. 200–206.
  13. A. Kubit, T. Trzepiecinski, K. Faes, M. Drabczyk, W. Bochnowski and M. Korzeniowski (2019) Analysis of the effect of structural defects on the fatigue strength of RFSSW joints using C-scan scanning acoustic microscopy and SEM. Fatigue & Fracture of Engineering Materials & Structures 42 (6), pp. 1308–1321.
  14. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988.
  15. P. Myrach, F. Jonietz, D. Meinel, H. Suwala and M. Ziegler (2017) Calibration of thermographic spot weld testing with X-ray computed tomography. Quantitative InfraRed Thermography Journal 14 (1), pp. 122–131.
  16. A. Nasiri, A. Taheri-Garavand, M. Omid and G. M. Carlomagno (2019) Intelligent fault diagnosis of cooling radiator based on deep learning analysis of infrared thermal images. Applied Thermal Engineering 163, pp. 114410.
  17. S. Papanikolaou, D. Fasnakis, A. Maropoulos, D. Giagopoulos, S. Maropoulos and T. Theodoulidis (2020) Non-destructive testing of welded fatigue specimens. In MATEC Web of Conferences, Vol. 318, pp. 01033.
  18. C. Patil, H. Patil and H. Patil (2016) Investigation of weld defects in similar and dissimilar friction stir welded joints of aluminium alloys of AA7075 and AA6061 by X-ray radiography. American Journal of Materials Engineering and Technology 4 (1), pp. 11–15.
  19. S. Ren, K. He, R. Girshick and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pp. 91–99.
  20. T. Saravanan, B. Lahiri, K. Arunmuthu, S. Bagavathiappan, A. Sekhar, V. Pillai, J. Philip, B. Rao and T. Jayakumar (2014) Non-destructive evaluation of friction stir welded joints by X-ray radiography and infrared thermography. Procedia Engineering 86, pp. 469–475.
  21. K. Shrama, S. K. Al-Jumaili, R. Pullin, A. Clarke and S. Evans (2019) On the use of acoustic emission and digital image correlation for welded joints damage characterization. Journal of Applied and Computational Mechanics 5 (2), pp. 381–389.
  22. U. Siemer (2010) Einsatz der Thermografie als zerstörungsfreies Prüfverfahren in der Automobilindustrie: Entwicklung einer Ingenieurplattform (Use of thermography as a non-destructive testing method in the automotive industry: development of an engineering platform).
  23. F. M. Suyama, M. R. Delgado, R. D. da Silva and T. M. Centeno (2019) Deep neural networks based approach for welded joint detection of oil pipelines in radiographic images with double wall double image exposure. NDT & E International 105, pp. 46–55.
  24. M. Tabatabaeipour, J. Hettler, S. Delrue and K. Van Den Abeele (2016) Non-destructive ultrasonic examination of root defects in friction stir welded butt-joints. NDT & E International 80, pp. 23–34.
  25. Y. Wang, F. Shi and X. Tong (2019) A welding defect identification approach in X-ray images based on deep convolutional neural networks. In International Conference on Intelligent Computing, pp. 53–64.
  26. J. Yang, W. Wang, G. Lin, Q. Li, Y. Sun and Y. Sun (2019) Infrared thermal imaging-based crack detection using deep learning. IEEE Access 7, pp. 182060–182077.
  27. X. Yu, P. Zuo, J. Xiao and Z. Fan (2019) Detection of damage in welded joints using high order feature guided ultrasonic waves. Mechanical Systems and Signal Processing 126, pp. 176–192.
  28. H. Zhang, Z. Chen, C. Zhang, J. Xi and X. Le (2019) Weld defect detection based on deep learning method. In 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), pp. 1574–1579.