Unsupervised Pixel-level Road Defect Detection via Adversarial Image-to-Frequency Transform


Abstract

In the past few years, the performance of road defect detection has improved remarkably thanks to advances in computer vision and deep learning. Although large-scale, well-annotated datasets enhance the performance of road defect detection to some extent, it is still challenging to derive a model that performs reliably under the various road conditions encountered in practice, because it is intractable to construct a dataset covering diverse road conditions and defect patterns. To this end, we propose an unsupervised approach to detecting road defects, using an Adversarial Image-to-Frequency Transform (AIFT). AIFT adopts an unsupervised manner and adversarial learning in deriving the defect detection model, so AIFT needs no annotations for road defects. We evaluate the efficiency of AIFT on the GAPs384 dataset, Cracktree200 dataset, CRACK500 dataset, and CFD dataset. The experimental results demonstrate that the proposed approach detects various road defects, and that it outperforms existing state-of-the-art approaches.

I Introduction

Road defect detection is an important research topic for preventing vehicle accidents and managing road conditions effectively. All over the United States, road conditions contribute to the frequency and severity of motor vehicle accidents. Almost a third of all motor vehicle crashes are related to poor road conditions, resulting in more than two million injuries and 22,000 fatalities [32]. Over time, as road infrastructure ages, the condition of that infrastructure steadily declines, and the volume and severity of defects increase [6]. Therefore, the need for the development of a method for detecting road defects within this area only increases [11], and numerous studies have been proposed in the literature.

Over the past decades, diverse studies have considered the use of image processing and machine learning approaches with hand-crafted features [2, 5, 7, 8]. Statistical analysis [2, 7] is the oldest and also the most popular approach. Acosta et al. [2] and Deutschl et al. [8] have proposed vision-based methods based on partial differential techniques. Chambon et al. [7] have presented a method based on Markovian modelling to take into account the local geometrical constraints of road cracks. Bray et al. [5] have utilized a classification approach using neural networks for identifying road defects. These approaches usually identify road defects using the contrast of texture information on a road surface.

However, the contrast between roads and the defects on them may be reduced by illumination conditions and changes in weather [27]. Additionally, the specifications of the cameras capturing the road surface can also affect detection accuracy. Hence, it is still challenging to develop a defect detection method that can cover various real-world road conditions using simple image processing or machine learning methods alone [4].

Recently, various approaches [20, 10] based on deep learning have been proposed to overcome these drawbacks. Pauly et al. [20] have proposed a method for road defect detection employing convolutional neural networks (CNNs). Fan et al. [10] have proposed a segmentation method based on CNNs combined with adaptive thresholding. These approaches need a well-annotated dataset for road defects, and their performance may depend on the scale of the given dataset. Regrettably, it is problematic in practice to construct such a dataset containing various patterns of road defects.

Developing an unsupervised method that needs no annotations for road defects in the training step is an issue that has long been recognized in this literature. Various unsupervised approaches based on image processing and machine learning have been proposed [1, 19]. However, these approaches still have an inherent weakness: their detection performance is highly dependent on camera specifications and image quality. Recently, among the approaches based on deep learning, several studies [18, 12] have presented unsupervised methods using autoencoders [28]. These approaches take normal road images as their training samples and optimize their models to minimize the reconstruction errors between their input and output. They recognize defects when the reconstruction errors of input samples are larger than a predefined threshold.

However, according to Perera et al. [21] and Pidhorskyi et al. [22], even if a model based on the reconstruction setting obtains a well-optimized solution, the model may still reconstruct samples that have not appeared in the training step. This can be a significant disadvantage in detecting road defects with such a model: it may produce a lower error than expected even when it takes defect samples as input, making it hard to distinguish whether a sample contains defects or not.

Fig. 1: Architectural detail of the adversarial image-to-frequency transform. The blue objects denote the operation units, including the generator G and the discriminators D_I and D_F. The red circles indicate the loss functions corresponding to each operation unit. The red arrow lines show the workflow of the image-to-frequency cycle (I→F), and the blue arrow lines represent the process of the frequency-to-image cycle (F→I). The dotted arrow lines represent the correlations of each component to the loss functions.

To tackle this issue, we present an unsupervised approach to detecting road defects, which exploits domain transformation based on adversarial learning. The proposed approach, called Adversarial Image-to-Frequency Transform (AIFT), is trained with normal road images only and needs no annotations for defects. In contrast to other approaches [18, 12] that optimize their models by minimizing reconstruction errors, AIFT concentrates on deriving a mapping function between an image domain and a frequency domain in an adversarial manner. To demonstrate the efficiency of the proposed approach for road defect detection, we compare it with various state-of-the-art approaches, including supervised and unsupervised methods. The experimental results show that the proposed approach can outperform existing state-of-the-art methods.

The main contributions of our work are summarized as follows:

  • An unsupervised method for detecting road defects, which can provide outstanding performance without a well-annotated dataset for road defects.

  • Adversarial learning for deriving the image-to-frequency mapping function. Our approach can derive a better-optimized transform model than typical approaches such as reconstruction or classification settings.

  • Extensive experiments on road defect detection, including an ablation analysis of the loss functions and a comprehensive comparison with existing state-of-the-art methods.

In the following sections, we describe the details of our approach and provide the experimental results and their analysis. We conclude this paper by summarizing our work.

II The Proposed Method

II-A Adversarial Image-to-Frequency Transform

It is essential to derive a robust model invariant to environments in order to detect a great number of defect patterns on roads. Our method is inspired by novelty detection studies [21, 22], which derive a model using inlier samples only and recognize outliers by computing a likelihood or a reconstruction error. The proposed method, called Adversarial Image-to-Frequency Transform (AIFT), initially derives a transform model between the image domain and the frequency domain using normal road pavement images only. The frequency domain corresponding to the image domain is generated by applying the Fourier transform to the given image domain. Road defects are detected by comparing the given and generated samples of each domain.
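The frequency-domain counterpart of a road image is generated with the Fourier transform. The paper does not spell out the exact representation, so the sketch below assumes a common choice: the shifted log-magnitude spectrum of the 2-D FFT. The function name and the toy patch are illustrative, not from the paper.

```python
import numpy as np

def to_frequency(image):
    """Map a grayscale image to a frequency-domain sample.

    Assumption: the frequency sample is the log-magnitude of the 2-D FFT,
    shifted so the zero-frequency component sits at the centre.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

# Toy 8x8 "road patch": flat surface with one bright vertical stripe (crack-like).
patch = np.zeros((8, 8))
patch[:, 3] = 1.0
freq = to_frequency(patch)
```

The frequency sample has the same spatial size as the image, so a convolutional generator can map between the two domains without resizing.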

AIFT is composed of three components for applying adversarial learning: the generator G, the image discriminator D_I, and the frequency discriminator D_F. The original intention of adversarial learning is to learn generative models while avoiding the approximation of many intractable probabilistic computations arising in other strategies, e.g., maximum likelihood estimation. This intention is suitable for deriving an optimal model covering the various visual patterns of road defects. The workflow of AIFT is illustrated in Fig 1.

The generator G plays the role of a mapping function between the image domain I and the frequency domain F, G: I ↔ F. For notational convenience, we distinguish the image-to-frequency mapping G_{I→F} and the frequency-to-image mapping G_{F→I}. G generates the transformed results from each domain as follows,

ŷ = G_{I→F}(x),  x̂ = G_{F→I}(y),  (1)

where ŷ and x̂ indicate the transformed results from the given image sample x and the given frequency sample y, respectively. x̂ and ŷ are conveyed to the two discriminators D_I and D_F for computing an adversarial loss. For a computationally cost-effective implementation, weight sharing is employed.

Fig. 2: Structural details of the network models in the generator G and the discriminators D_I and D_F. (a) and (b) denote the structural details of the generator G and the two discriminators D_I and D_F, respectively. The green, blue, and red boxes denote the convolutional layers, the deconvolutional layers, and the fully-connected layers, respectively.

The discriminators D_I and D_F are defined as follows,

a = D_k(z),  k ∈ {I, F},  (2)

where k denotes the indicator assigning the discriminators depending on the type of the input z: D_I takes x and x̂ as input, and D_F takes y and ŷ as input, respectively. a indicates the output according to the type of the input and the discriminator. The value of a can be regarded as a likelihood for discriminating whether a given sample is ground truth or generated. Each component is built with CNNs and fully-connected neural networks, and the structural details of these components are shown in Fig 2.

II-B Adversarial transform consistency learning

As shown in the workflow of AIFT in Fig 1, the generator G plays the role of a bidirectional mapping function between the image domain and the corresponding frequency domain generated from it. The underlying assumption for detecting road defects using AIFT is as follows. Since AIFT is trained with normal road pavement images only, if AIFT takes an image containing defect patterns as input, the error between the given sample and the transformed result will be larger than for normal ones. Given this assumption, the prerequisite for precise road defect detection with AIFT is deriving a strict transform model between the image domain and the frequency domain from a given dataset of normal road pavement images.

To this end, we present an adversarial transform consistency loss for training AIFT, defined by,

L_ATCL = E_x[log D_I(x)] + E_y[log D_F(y)] + E_y[log(1 − D_I(G_{F→I}(y)))] + E_x[log(1 − D_F(G_{I→F}(x)))],  (3)

where G tries to generate images x̂ and frequency samples ŷ via G_{F→I} and G_{I→F} that look similar to the given images x and frequencies y, while D_I and D_F aim to distinguish between the given samples (x and y) and the transformed results (x̂ and ŷ).
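A minimal numeric sketch of the adversarial transform consistency loss (Eq 3), assuming the standard GAN cross-entropy form over both domains with discriminator outputs in (0, 1); the function and argument names are illustrative, not from the paper:

```python
import numpy as np

def atcl_loss(d_img_real, d_img_fake, d_freq_real, d_freq_fake, eps=1e-8):
    """Adversarial transform consistency loss (Eq 3), assumed standard GAN form.

    d_img_*:  outputs of D_I on given images x and transformed images x_hat.
    d_freq_*: outputs of D_F on given frequencies y and transformed ones y_hat.
    The discriminators maximise this value; the generator minimises it.
    """
    real_term = np.log(d_img_real + eps) + np.log(d_freq_real + eps)
    fake_term = np.log(1.0 - d_img_fake + eps) + np.log(1.0 - d_freq_fake + eps)
    return float(np.mean(real_term + fake_term))
```

A perfect discriminator pair (real → 1, fake → 0) drives the loss towards its maximum of 0, while a fooled pair (everything → 0.5) yields a strictly lower value.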

Fig. 3: Comparison of the given and generated samples for the road pavement image and the corresponding frequency.

Adversarial learning can, in theory, learn mappings that produce outputs identically distributed to the image and frequency domains, respectively [34]. However, with a large enough capacity, G can map the same samples of an input domain to any random permutation of samples in the other domain, where any of the learned mappings can induce an output distribution that matches the target distribution. Thus, the adversarial transform consistency loss alone may not guarantee that the learned function maps an individual input to the desired output.

To further reduce the space of possible mapping functions, we utilize a reconstruction loss to optimize the generator G. It is a common way to enforce the output of the generator to be close to the target through the minimization of the reconstruction error based on the pixel-wise mean square error (MSE) [31, 3, 14, 25]. It is calculated in the form

L_Rec = E[||x − x̂||²] + E[||y − ŷ||²].  (4)

Consequently, the total loss function is:

L_Total = L_ATCL + λ L_Rec,  (5)

where λ indicates the balancing parameter that weights the reconstruction loss.
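Eqs 4 and 5 can be sketched directly: following the text, the reconstruction term is a pixel-wise MSE over both domains, and the balancing weight (0.1 in the experiment settings) scales it against the adversarial term. Names are illustrative:

```python
import numpy as np

def reconstruction_loss(x, x_hat, y, y_hat):
    # Eq 4: pixel-wise MSE between given and transformed samples of both domains.
    return float(np.mean((x - x_hat) ** 2) + np.mean((y - y_hat) ** 2))

def total_loss(l_atcl, l_rec, lam=0.1):
    # Eq 5: adversarial term plus the lambda-weighted reconstruction term.
    return l_atcl + lam * l_rec
```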

Given the definitions of the above loss functions, the discriminators and the generator are trained by maximizing or minimizing the corresponding loss terms, expressed by,

min_{θ_G} max_{θ_{D_I}, θ_{D_F}} L_Total,  (6)

where θ_G, θ_{D_I}, and θ_{D_F} denote the parameters corresponding to the generator G, the image discriminator D_I, and the frequency discriminator D_F. Fig 3 illustrates examples of the given samples and the transformed results for the image and frequency domains. We have conducted ablation studies to observe the effect of each loss term in learning AIFT.
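The minimax objective of Eq 6 is optimized by alternating updates: the discriminators take several ascent steps (the critic iterations, set to 10 in the experiment settings) before each descent step on the generator. A small sketch of that schedule, with illustrative names:

```python
def update_schedule(num_generator_steps, critic_iters=10):
    """Return the alternating update order for the minimax objective (Eq 6).

    'D' marks an ascent step on the discriminator parameters (theta_DI, theta_DF);
    'G' marks a descent step on the generator parameters (theta_G).
    """
    steps = []
    for _ in range(num_generator_steps):
        steps.extend(["D"] * critic_iters)  # maximise the loss w.r.t. D_I, D_F
        steps.append("G")                   # minimise the loss w.r.t. G
    return steps
```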

Fig. 4: The trends of AIU over the training epochs. (a) shows the AIU trend over the training epochs on the GAPs384 dataset, and (b) illustrates the AIU trend with respect to the training epochs on the CFD dataset. The red-coloured curve (AIFT_Total) denotes the AIU trend of AIFT trained by the total loss (Eq 5). The green-coloured curve (AIFT_ATCL) indicates the AIU trend of AIFT trained by the ATCL loss (Eq 3) only. The blue-coloured curve (AIFT_Rec) shows the AIU trend of AIFT trained by the reconstruction loss (Eq 4) only.

II-C Road defect detection

Detecting defects on a road is straightforward. Initially, AIFT produces the frequency sample ŷ from a given image sample x via G_{I→F}. Secondly, AIFT transforms ŷ back into the image sample x̂ via G_{F→I}. Road defects are detected by comparing the given image sample x with the transformed result x̂.

The similarity metric for comparing the two samples x and x̂ is defined as follows,

d(x, x̂) = Σ_i ( x_i log(x_i / m_i) + x̂_i log(x̂_i / m_i) ),  m_i = (x_i + x̂_i) / 2,  (7)

where m_i is the mean of x_i and x̂_i. The above similarity metric is based on Jeffrey divergence, which is a modified KL-divergence with the symmetric property. Euclidean distances such as the ℓ1-norm and ℓ2-norm are not suitable as similarity metrics for images since neighbouring values are not considered [24]. Jeffrey divergence is numerically stable, symmetric, and invariant to noise and input scale [23].
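The similarity metric of Eq 7 can be sketched directly, with m taken as the element-wise mean of the two samples (a symmetrised KL divergence, i.e. Jeffrey divergence); a small eps guarding the logarithms is an implementation assumption:

```python
import numpy as np

def jeffrey_divergence(x, x_hat, eps=1e-8):
    """Jeffrey divergence between a given image x and its round-trip transform
    x_hat (Eq 7). Larger values suggest defect regions, since AIFT is trained
    on normal pavement images only."""
    x = np.asarray(x, dtype=float) + eps
    x_hat = np.asarray(x_hat, dtype=float) + eps
    m = (x + x_hat) / 2.0  # element-wise mean of the two samples
    return float(np.sum(x * np.log(x / m) + x_hat * np.log(x_hat / m)))
```

The metric is symmetric, so d(x, x̂) = d(x̂, x), and it vanishes when the transform reproduces the input exactly.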

Model      | GAPs384 dataset [9]   | CFD dataset [26]
           | AIU   ODS   OIS       | AIU   ODS   OIS
AIFT_Rec   | 0.052 0.181 0.201     | 0.152 0.562 0.572
AIFT_ATCL  | 0.081 0.226 0.234     | 0.187 0.642 0.659
AIFT_Total | 0.083 0.247 0.249     | 0.203 0.701 0.732
TABLE I: Quantitative comparison of the detection performance of AIFT on the GAPs384 and CFD datasets depending on the loss functions L_Rec (Eq 4), L_ATCL (Eq 3), and L_Total (Eq 5). AIFT_Total achieves the best performance in every column.

III Experiment

III-A Experiment setting and dataset

To evaluate the performance of the proposed method on road defect detection, we employ the best F-measure on the dataset for a fixed scale (ODS), the aggregate F-measure on the dataset for the best scale in each image (OIS), and AIU, which is proposed by Yang et al. [30]. AIU is computed on the detection and ground truth without non-max suppression (NMS) and thinning operations, defined by AIU = (1/N_t) Σ_t N^t_pg / (N^t_p + N^t_g − N^t_pg), where N_t denotes the total number of thresholds t with interval 0.01; for a given t, N^t_pg is the number of pixels of the intersected region between the predicted and ground-truth crack areas; N^t_p and N^t_g denote the number of pixels of the predicted and ground-truth crack regions, respectively. The proposed method has been evaluated on four publicly available datasets. The details of the datasets are described as follows.

GAPs384 dataset is the German Asphalt Pavement Distress (GAPs) dataset presented by Eisenbach et al. [9], constructed to address the issue of comparability in the pavement distress domain by providing a standardized, high-quality dataset of large scale. The dataset contains 1,969 grayscale images of road defects, with various defect classes such as cracks, potholes, and inlaid patches. The resolution of the images is 1,920×1,080.

Cracktree200 dataset [35] contains 206 road pavement images with 800×600 resolution, which can be categorized into various types of pavement defects. The images in this dataset are captured under challenging conditions such as shadows, occlusions, low contrast, and noise.

CRACK500 dataset is constructed by Yang et al. [30]. The dataset is composed of 500 images with 2,000×1,500 resolution, and each image has a pixel-level annotation. The dataset is separated into a training dataset and a test dataset. The training dataset consists of 1,896 images, and the test dataset is composed of 1,124 images.

CFD dataset [26] contains 118 images with 480×320 resolution. Each image has a pixel-level annotation and is captured by an iPhone 5 with a focus of 4mm and an exposure time of 1/135s.

The hyperparameter settings for the best performance are as follows. The epoch size and the batch size are 50 and 64, respectively. The balancing weight λ for the reconstruction loss is 0.1, and the critic iteration is set to 10. The networks are optimized by the Adam optimizer [13]. The proposed approach is implemented with the PyTorch library,1 and the experiments have been conducted with a GTX Titan XP and 32GB of memory.
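The AIU metric described at the start of this section can be sketched as the IoU between the thresholded prediction and the ground truth, averaged over thresholds at an interval of 0.01 (an assumed form following the text; the exact implementation of [30] may differ):

```python
import numpy as np

def aiu(pred, gt):
    """AIU sketch: mean IoU of the thresholded crack prediction vs. ground truth.

    pred: per-pixel crack scores in [0, 1]; gt: binary crack mask.
    No non-max suppression or thinning is applied, as stated in the text.
    """
    ious = []
    for t in np.arange(0.01, 1.00, 0.01):  # thresholds at interval 0.01
        p = pred >= t
        n_pg = np.logical_and(p, gt).sum()   # intersection pixels N^t_pg
        union = p.sum() + gt.sum() - n_pg    # N^t_p + N^t_g - N^t_pg
        ious.append(n_pg / union if union > 0 else 0.0)
    return float(np.mean(ious))
```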

Fig. 5: Visualization of the road defect detection results. The images on the first row represent the input images. The second row’s images illustrate the ground-truths. The images on the third row denote the detection results for road defects.
Methods          | S/U | GAPs384 [9]       | Cracktree200 [35] | CRACK500 [30]     | CFD [26]          | FPS(s)
                 |     | AIU   ODS   OIS   | AIU   ODS   OIS   | AIU   ODS   OIS   | AIU   ODS   OIS   |
HED [29]         | S   | 0.069 0.209 0.175 | 0.040 0.317 0.449 | 0.481 0.575 0.625 | 0.154 0.683 0.705 | 0.0825
RCF [15]         | S   | 0.043 0.172 0.120 | 0.032 0.255 0.487 | 0.403 0.490 0.586 | 0.105 0.542 0.607 | 0.079
FCN [16]         | S   | 0.015 0.088 0.091 | 0.008 0.334 0.333 | 0.379 0.513 0.577 | 0.021 0.585 0.609 | 0.114
CrackForest [26] | U   | -     0.126 0.126 | -     0.080 0.080 | -     0.199 0.199 | -     0.104 0.104 | 3.971
FPHBN [30]       | S   | 0.081 0.220 0.231 | 0.041 0.517 0.579 | 0.489 0.604 0.635 | 0.173 0.683 0.705 | 0.237
AAE [17]         | U   | 0.062 0.196 0.202 | 0.039 0.472 0.491 | 0.371 0.481 0.583 | 0.142 0.594 0.613 | 0.721
SVM [33]         | S   | 0.051 0.132 0.162 | 0.017 0.382 0.391 | 0.362 0.418 0.426 | 0.082 0.352 0.372 | 0.852
ConvNet [33]     | S   | 0.079 0.203 0.211 | 0.037 0.472 0.499 | 0.431 0.591 0.609 | 0.152 0.579 0.677 | 0.921
AIFT_Total       | U   | 0.083 0.247 0.249 | 0.045 0.607 0.642 | 0.478 0.549 0.561 | 0.203 0.701 0.732 | 1.1330
TABLE II: Quantitative performance comparison for road defect detection using GAPs384 [9], Cracktree200 [35], CRACK500 [30], and CFD [26]. "-" means the results are not provided. "S/U" denotes whether a model is based on a "supervised" or "unsupervised" approach. FPS indicates the execution speed of each method, computed by averaging the execution speeds over all datasets.

III-B Ablation study

We have conducted an ablation study to observe the effect of the loss function terms on the performance of AIFT. We have trained AIFT using the three loss functions L_Rec (Eq 4), L_ATCL (Eq 3), and L_Total (Eq 5) on the GAPs384 dataset and the CFD dataset, and observed AIU every two epochs. The hyperparameter settings applied to train each model are all the same; only the loss functions differ. Fig 4 shows the AIU trends of the AIFTs trained by the three loss functions. Table I contains the AIUs, ODSs, and OISs on the GAPs384 dataset and the CFD dataset. The experimental results show that the AIFT trained by the total loss (AIFT_Total) achieves the best performance in these experiments. As shown in Table I, AIFT_Total achieves 0.083 of AIU, 0.247 of ODS, and 0.249 of OIS for the GAPs384 dataset. These figures show that AIFT_Total can produce approximately 7% better performance than the others. In the experiments using the CFD dataset, AIFT_Total achieves 0.203 of AIU, 0.701 of ODS, and 0.732 of OIS, and these figures are all higher than those of the others.

Notably, the overall experimental results demonstrate that the AIFTs trained by adversarial learning can outperform the AIFT based on the reconstruction setting (AIFT_Rec). Not only AIFT_Total but also AIFT_ATCL obtains better results than AIFT_Rec. The AIU trends (Fig 4) also show that the AIFTs learnt in an adversarial manner can outperform the AIFT trained by the reconstruction setting. The experimental results justify that adversarial learning can improve the robustness of AIFT for detecting road defects.

III-C Comparison with existing state-of-the-art methods

We have carried out a comparison with existing state-of-the-art methods for crack detection [29, 26, 30] and road defect detection [33]. For the efficiency of the experiments, only AIFT_Total is compared with the other methods. Table II contains the AIUs, ODSs, and OISs on the GAPs384, Cracktree200, CRACK500, and CFD datasets. AIFT has achieved state-of-the-art performance on the GAPs384, Cracktree200, and CFD datasets. In the experiments using the GAPs384 dataset, AIFT achieves 0.083 of AIU, 0.247 of ODS, and 0.249 of OIS. These figures show that AIFT outperforms the previous state-of-the-art performance achieved by FPHBN [30], which obtains 0.081 of AIU, 0.220 of ODS, and 0.231 of OIS. AIFT shows 3% better performance than FPHBN. The experiments on the Cracktree200 dataset and the CFD dataset also show that AIFT surpasses the other methods. AIFT produces 0.045 of AIU, 0.607 of ODS, and 0.642 of OIS in the experiments using the Cracktree200 dataset. Additionally, AIFT achieves 0.203 of AIU, 0.701 of ODS, and 0.732 of OIS on the CFD dataset. These figures are 8.8% and 3% better than the previous state-of-the-art methods, respectively.

However, AIFT could not obtain the highest performance on the CRACK500 dataset. The state-of-the-art performance on the CRACK500 dataset is achieved by FPHBN [30], which produces 0.489 of AIU, 0.604 of ODS, and 0.635 of OIS. AIFT has 0.478 of AIU, 0.549 of ODS, and 0.561 of OIS. The gaps between FPHBN and AIFT are 0.011 on AIU, 0.055 on ODS, and 0.074 on OIS. However, FPHBN exploits a supervised approach and needs predetermined pixel-level annotations for road defects. Also, the network architecture applied in their approach is much deeper than ours. In this respect, AIFT still holds great advantages for detecting road defects.

The overall experiments show that AIFT can outperform existing state-of-the-art methods. As shown in Table II, the detection performance of AIFT surpasses the other unsupervised methods [26, 17]. Additionally, AIFT achieves outstanding detection performance compared with the approaches based on supervised learning, even though AIFT needs no annotations for road defects in the training step. This suggests that AIFT can be applied in various practical situations in which a large-scale, well-annotated dataset cannot be obtained. Consequently, the experimental results demonstrate that AIFT can outperform existing state-of-the-art methods.

IV Conclusions

In this paper, we have proposed an unsupervised approach to detecting road defects, based on an adversarial image-to-frequency transform. The experimental results demonstrate that the proposed approach can detect various patterns of road defects without explicit annotations for road defects in the training step, and that it outperforms existing state-of-the-art methods in most of the road defect detection experiments.

Acknowledgment

This work was partly supported by the ICT R&D program of MSIP/IITP. (2014-0-00077, Development of global multi target tracking and event prediction techniques based on real-time large-scale video analysis).

Footnotes

  1. Source codes are publicly available at https://github.com/andreYoo/Adversarial-IFTN.git

References

  1. I. Abdel-Qader, S. Pashaie-Rad, O. Abudayyeh and S. Yehia (2006) PCA-based algorithm for unsupervised bridge crack detection. Advances in Engineering Software 37 (12), pp. 771–778.
  2. J. A. Acosta, J. L. Figueroa and R. L. Mullen (1992) Low-cost video image processing system for evaluating pavement surface distress. Transportation Research Record (1348).
  3. Y. Bai, Y. Zhang, M. Ding and B. Ghanem (2018) Finding tiny faces in the wild with generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 21–30.
  4. M. Baygin and M. Karakose (2015) A new image stitching approach for resolution enhancement in camera arrays. In 2015 9th International Conference on Electrical and Electronics Engineering (ELECO), pp. 1186–1190.
  5. J. Bray, B. Verma, X. Li and W. He (2006) A neural network based technique for automatic classification of road cracks. In The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 907–912.
  6. T. A. Carr, M. D. Jenkins, M. I. Iglesias, T. Buggy and G. Morison (2018) Road crack detection using a single stage detector based deep neural network. In 2018 IEEE Workshop on Environmental, Energy, and Structural Monitoring Systems (EESMS), pp. 1–5.
  7. S. Chambon, C. Gourraud, J. M. Moliard and P. Nicolle (2010) Road crack extraction with adapted filtering and Markov model-based segmentation: introduction and validation.
  8. E. Deutschl, C. Gasser, A. Niel and J. Werschonig (2004) Defect detection on rail surfaces by a vision based system. In IEEE Intelligent Vehicles Symposium, 2004, pp. 507–511.
  9. M. Eisenbach, R. Stricker, D. Seichter, K. Amende, K. Debes, M. Sesselmann, D. Ebersbach, U. Stoeckert and H. Gross (2017) How to get pavement distress detection ready for deep learning? A systematic approach. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2039–2047.
  10. R. Fan, M. J. Bocus, Y. Zhu, J. Jiao, L. Wang, F. Ma, S. Cheng and M. Liu (2019) Road crack detection using deep convolutional neural network and adaptive thresholding. In 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 474–479.
  11. Z. Hadavandsiri, D. D. Lichti, A. Jahraus and D. Jarron (2019) Concrete preliminary damage inspection by classification of terrestrial laser scanner point clouds through systematic threshold definition. ISPRS International Journal of Geo-Information 8 (12), pp. 585.
  12. G. Kang, S. Gao, L. Yu and D. Zhang (2018) Deep architecture for high-speed railway insulator surface defect detection: denoising autoencoder with multitask learning. IEEE Transactions on Instrumentation and Measurement.
  13. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  14. W. Liu, W. Luo, D. Lian and S. Gao (2018) Future frame prediction for anomaly detection – a new baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6536–6545.
  15. Y. Liu, M. Cheng, X. Hu, K. Wang and X. Bai (2017) Richer convolutional features for edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3000–3009.
  16. J. Long, E. Shelhamer and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440.
  17. A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow and B. Frey (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
  18. A. Mujeeb, W. Dai, M. Erdt and A. Sourin (2019) One class based feature learning approach for defect detection using deep autoencoders. Advanced Engineering Informatics 42, pp. 100933.
  19. H. Oliveira and P. L. Correia (2012) Automatic road crack detection and characterization. IEEE Transactions on Intelligent Transportation Systems 14 (1), pp. 155–168.
  20. L. Pauly, D. Hogg, R. Fuentes and H. Peel (2017) Deeper networks for pavement crack detection. In Proceedings of the 34th ISARC, pp. 479–485.
  21. P. Perera, R. Nallapati and B. Xiang (2019) OCGAN: one-class novelty detection using GANs with constrained latent representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2898–2906.
  22. S. Pidhorskyi, R. Almohsen and G. Doretto (2018) Generative probabilistic novelty detection with adversarial autoencoders. In Advances in Neural Information Processing Systems, pp. 6822–6833.
  23. J. Puzicha, T. Hofmann and J. M. Buhmann (1997) Non-parametric similarity measures for unsupervised texture segmentation and image retrieval. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 267–272.
  24. Y. Rubner, C. Tomasi and L. J. Guibas (2000) The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision 40 (2), pp. 99–121.
  25. M. Sabokrou, M. Fayyaz, M. Fathy, Z. Moayed and R. Klette (2018) Deep-anomaly: fully convolutional neural network for fast anomaly detection in crowded scenes. Computer Vision and Image Understanding 172, pp. 88–97.
  26. Y. Shi, L. Cui, Z. Qi, F. Meng and Z. Chen (2016) Automatic road crack detection using random structured forests. IEEE Transactions on Intelligent Transportation Systems 17 (12), pp. 3434–3445.
  27. Y. Sun, E. Salari and E. Chou (2009) Automated pavement distress detection using advanced image processing techniques. In 2009 IEEE International Conference on Electro/Information Technology, pp. 373–377.
  28. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio and P. Manzagol (2010) Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11 (Dec), pp. 3371–3408.
  29. S. Xie and Z. Tu (2015) Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1395–1403.
  30. F. Yang, L. Zhang, S. Yu, D. Prokhorov, X. Mei and H. Ling (2019) Feature pyramid and hierarchical boosting network for pavement crack detection. IEEE Transactions on Intelligent Transportation Systems.
  31. X. Ying, H. Guo, K. Ma, J. Wu, Z. Weng and Y. Zheng (2019) X2CT-GAN: reconstructing CT from biplanar X-rays with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10619–10628.
  32. E. Zaloshnja and T. R. Miller (2009) Cost of crashes related to road conditions, United States, 2006. In Annals of Advances in Automotive Medicine/Annual Scientific Conference, Vol. 53, pp. 141.
  33. L. Zhang, F. Yang, Y. D. Zhang and Y. J. Zhu (2016) Road crack detection using deep convolutional neural network. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 3708–3712.
  34. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232.
  35. Q. Zou, Y. Cao, Q. Li, Q. Mao and S. Wang (2012) CrackTree: automatic crack detection from pavement images. Pattern Recognition Letters 33 (3), pp. 227–238.