Robust Adaptive Median Binary Pattern for noisy texture classification and retrieval


Abstract

Texture is an important cue for many computer vision tasks and applications. Local Binary Pattern (LBP) is considered one of the best and most efficient texture descriptors. However, LBP has some notable limitations, most notably its sensitivity to noise. In this paper, we address these limitations by introducing a novel texture descriptor, the Robust Adaptive Median Binary Pattern (RAMBP). RAMBP is based on a classification process for noisy pixels, an adaptive analysis window, scale analysis, and the comparison of image region medians. The proposed method handles highly noisy textures and increases the discriminative power of the descriptor by capturing both microstructure and macrostructure texture information. The proposed method has been evaluated on popular texture datasets for classification and retrieval tasks, and under different high-noise conditions. Without any training or prior knowledge of the noise type, RAMBP achieved the best classification scores compared to state-of-the-art techniques under high impulse noise densities, Gaussian noise, and Gaussian blur. The proposed method also yielded competitive results as one of the best descriptors for noise-free texture classification. Furthermore, RAMBP showed high performance for the problem of noisy texture retrieval, providing high recall and precision scores for textures with high levels of noise.


1 Introduction

Texture is a fundamental characteristic of various types of images. Texture analysis is considered a complex problem due to the large number of texture classes, the associated intraclass variations such as randomness and periodicity, and external variations such as illumination and noise [1]. Texture classification is one of the major problems in texture analysis; it is of crucial value in the fields of computer vision and pattern recognition, including medical imaging, document analysis, environment modeling, and object recognition [2].

One of the texture classification methods that has gained considerable attention is the Local Binary Pattern (LBP). LBP is a powerful operator that shows robustness to illumination, rotation, and scale [3]. Owing to its low computational complexity, LBP has been used widely as a solution for many problems, such as texture classification [4], object detection [5], image matching [6], image retrieval [7], biomedical image analysis [8], face recognition [9], etc. For general texture classification purposes, LBP derivatives have been introduced, such as ILBP [10], CLBP [11], RLBP [12], DLBP [4], etc.

However, LBP and its derivatives have weaknesses in terms of robustness to noise. Several studies have therefore aimed to design a noise-robust operator. In [13], the authors proposed the Median Binary Pattern (MBP) to add more sensitivity to microstructure and more robustness to impulse noise. Nevertheless, MBP does not handle other types of noise and showed low performance at high levels of impulse noise. In [14], the authors introduced the Binary Rotation Invariant and Noise Tolerant (BRINT) descriptor. Although BRINT samples the points in a scaled circular neighborhood, which makes it more distinctive and robust to noise, it suffers from limitations in terms of robustness on highly noisy textures.

In [15], Schaefer et al. proposed the Multi-Dimensional Local Binary Pattern (MDLBP), which adds information from different radii and concatenates it into one histogram, making the histogram more effective. However, this approach suffers from weak robustness under high noise corruption and from computational complexity due to its high feature dimensionality. In [16], the authors introduced the Adaptive Median Binary Pattern (AMBP). AMBP uses a self-adaptive analysis window whose size depends on the local microstructure of the texture, which makes the descriptor more robust to impulse noise. Despite this noise robustness, the approach has limitations for textures with a very high level of noise.

In [17], Cimpoi et al. used filter banks and convolutional neural networks (CNN) for texture recognition and segmentation. The descriptor is built on a pre-trained VGG-VD (very deep) model, which improves performance. This approach is an effective texture descriptor and is robust across varied image recognition tasks, but it is weak on moderately and highly noisy textures and has shortcomings in terms of time complexity.

Figure 1: Illustration for the proposed descriptor.

Guo et al. [18] proposed Scale Selective Local Binary Patterns (SSLBP). SSLBP uses a Gaussian filter to produce a scale space of a texture image. For each image in the scale space, a pre-learned dominant binary pattern histogram is built. The scale-invariant feature for each pattern is then found by taking the maximal frequency among the different scales. SSLBP is considered an effective descriptor for textures under Gaussian noise. Nevertheless, the SSLBP filtering procedure fails under impulse noise.

Liu et al. [19] performed median filtering and combined it with a sampling scheme to introduce the Median Robust Extended Local Binary Pattern (MRELBP). MRELBP adds more microstructure and macrostructure information, but it uses a simple median filtering procedure, which leads to failure under Gaussian noise and extremely high levels of impulse noise.

Although the binary pattern family has had great success in the computer vision field, these methods still have several weaknesses. In [20], the authors performed extensive comparisons of existing local binary features for texture classification and showed that many of the existing local binary approaches suffer from a serious limitation. These limitations concern the descriptor's ability to handle textures with a high level of noise and to cope with different types of noise.

Given the limitations of previous descriptors, the obvious question is how to achieve high noise robustness without any prior knowledge of the noise type, without any prior learning process, and for different kinds of noise such as Gaussian noise, Gaussian blur, and impulse noise. In other words, the descriptor is built from noise-free data and then used to classify and retrieve noisy and noise-free textures under different geometric and illumination conditions.

Gaussian noise, Gaussian blur, and impulse noise are among the most frequent and challenging noises in the fields of image processing, computer vision, and pattern recognition. To improve and ensure the best performance of image processes such as classification, these noises should be detected, reduced, or removed. Some descriptors incorporate a filtering procedure to improve performance, such as Gaussian filtering for SSLBP and median filtering for MBP, AMBP, and MRELBP. Many techniques have been developed to suppress Gaussian noise, such as the mean filter, wavelet denoising [21], and kernel regression [22]. Nevertheless, these filters are suitable for Gaussian noise but not for other noises such as impulse noise. On the other hand, various filters have been proposed to remove impulse noise, such as the median filter [23] and the adaptive median filter [24]. Since median and adaptive median filters treat all pixels as corrupted, they fail on images with a high level of noise. To avoid this drawback, switching techniques such as Boundary Discriminative Noise Detection (BDND) were introduced, which have the advantage of detecting which pixels are corrupted and which are not [25, 26, 27]. In this context, combining the pixel classification of switching techniques with binary pattern methods can lead to better texture analysis for different types of noise.

In this paper, we propose an efficient and simple local binary descriptor, the Robust Adaptive Median Binary Pattern (RAMBP). It combines a switching technique with an adaptive median scheme to make the features more robust for textures with a high level of noise. RAMBP captures both microstructure and macrostructure texture information and provides a better representation of the local structures. The effectiveness and robustness of RAMBP are examined on highly noisy texture classification and retrieval.

The major contributions of this paper can be summarized as follows:

  • A descriptor robust to different types of high-level noise.

  • Classification of highly noisy textures.

  • Competitive noise-free texture classification.

  • Noisy texture retrieval.

The remainder of this paper is structured as follows. Section 2 details the proposed RAMBP method, Section 3 presents the experimental results and discussion, and Section 4 concludes the paper.

2 The proposed approach

To provide an efficient texture classification process, the descriptor should be discriminative and robust to noise, whereas state-of-the-art descriptors share one or more weaknesses related to sensitivity to highly noisy textures. RAMBP uses noisy-pixel classification, an adaptive window for the threshold and binary modules, and regional values instead of single pixel intensities. Fig. 1 shows the scheme of the proposed descriptor, which is divided into three stages: the classification process for noisy pixel detection, the threshold process, and the generation of the binary pattern.

2.1 Classification process for noisy pixels detection

As this paper aims to perform texture classification without any prior knowledge of the noise, the first step consists of classifying each pixel in the image as corrupted or uncorrupted. For that purpose, the detection step of the BDND algorithm [25] is adopted in this paper.

Pixel classification starts by taking a window around the central pixel and examining whether the pixel meets the conditions of an uncorrupted pixel. If the pixel is considered corrupted in this first stage, a second examination is invoked by imposing a smaller window around the central pixel to inspect more confined local statistics. A pixel is classified as corrupted only if it fails both examinations. Alg. 2 provides a full description of the pixel classification step.

Figure 2: Pixel classification example using the procedure of Alg. 2; corrupted and uncorrupted pixels in the image are marked with distinct labels.

Fig. 2 provides an example on a small window to facilitate understanding of the pixel classification algorithm (Alg. 2), which uses the following procedure:

  • The first step after choosing the window is to sort all the pixels in the window to obtain the sorted vector, and then compute its median value (see the example in Fig. 2).

  • The difference vector between consecutive elements of the sorted vector is then obtained.

  • Find the first boundary: the pixel in the sorted vector that gives the maximum intensity difference in the left interval (the left interval lies between the minimum and the median of the sorted vector).

  • In the same manner, find the second boundary: the pixel in the sorted vector that gives the maximum intensity difference in the right interval (the right interval lies between the median and the maximum of the sorted vector).

  • The two boundaries split the sorted vector into three clusters. In this example the central pixel belongs to the third cluster, so it is considered a candidate corrupted pixel and needs to be re-examined with a smaller window.

  • As can be seen in Fig. 2, repeating the procedure on the smaller window again yields three clusters, and the central pixel still belongs to a corrupted-pixel cluster, so it is finally labeled as a corrupted pixel.

Algorithm 2: Noisy pixels detection
Input: the original image.
Output: the image of labeled pixels.
For each pixel position:
  1. Impose a window around the current pixel.
  2. Sort the pixels inside the window to obtain the sorted vector.
  3. Find the median value of the sorted vector.
  4. From the sorted vector, compute the difference vector between consecutive elements.
  5. Compute the left cluster boundary: the pixel that gives the maximum difference in the interval between the minimum and the median.
  6. Compute the right cluster boundary: the pixel that gives the maximum difference in the interval between the median and the maximum.
  7. Split the sorted vector into three clusters using the two boundaries.
  8. If the central pixel belongs to the middle cluster, label it as uncorrupted.
  9. Otherwise, repeat steps 1-8 with a smaller window around the pixel; if it then falls in the middle cluster, label it as uncorrupted, otherwise label it as corrupted.
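To make the detection step concrete, the following Python sketch implements the two-stage examination described above. The window sizes (here a 21x21 first-stage window and a 3x3 second-stage window) and the exact cluster-boundary convention are assumptions for illustration, since the original values are not reproduced in this text.

import numpy as np

def is_center_suspicious(window):
    """One BDND-style examination (Alg. 2): split the sorted window intensities
    into three clusters and check whether the central pixel falls outside the
    middle (uncorrupted) cluster."""
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    v = np.sort(window.ravel())              # sorted intensity vector
    mid = v.size // 2                        # index of the median
    d = np.diff(v)                           # differences between consecutive sorted values
    b1 = v[np.argmax(d[:mid])]               # left boundary: largest gap below the median
    b2 = v[mid + np.argmax(d[mid:])]         # right boundary: largest gap above the median
    # middle cluster is approximately [b1, b2]; outside it, the pixel is suspicious
    return not (b1 <= center <= b2)

def classify_pixels(img, win1=21, win2=3):
    """Label every pixel as corrupted (1) or uncorrupted (0).
    win1 / win2 are hypothetical first- and second-stage window sizes."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    r1, r2 = win1 // 2, win2 // 2
    pad = np.pad(img, r1, mode='reflect')
    for i in range(h):
        for j in range(w):
            big = pad[i:i + win1, j:j + win1]
            if is_center_suspicious(big):
                small = pad[i + r1 - r2:i + r1 + r2 + 1,
                            j + r1 - r2:j + r1 + r2 + 1]
                if is_center_suspicious(small):
                    labels[i, j] = 1         # fails both examinations: corrupted
    return labels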

2.2 Threshold Process

Figure 3: Threshold process example to facilitate understanding of Alg. 3.

Finding the threshold value for each pixel is a crucial point for generating the binary pattern. Using a corrupted central pixel as the threshold value, as LBP does, would harm the noise robustness of the descriptor. Likewise, using a region that is too small or too large to obtain the median as a threshold value would also harm the descriptor: it leads to a biased median value, due to missing information in a small region or the inclusion of too many pixels in a large region. To obtain the threshold value, an adaptive window and the pixel classification are therefore used to reach maximum robustness.

Alg. 3 represents the threshold process of the proposed descriptor, which starts by checking whether the current pixel is classified as corrupted or uncorrupted. If the current pixel is classified as uncorrupted, its threshold value is equal to the pixel value itself (as in LBP). Otherwise, a window is imposed around the current pixel and the number of uncorrupted pixels is counted. If the uncorrupted pixels outnumber the corrupted ones, the threshold value is the median of the uncorrupted pixels inside this window. Otherwise, the window is enlarged by one pixel in all directions. This process is repeated until the maximum window size is reached, in which case the threshold value is the median of all uncorrupted pixels inside that window.

Fig. 3 illustrates an example of obtaining the threshold value after classifying the pixels, using the following procedure:

  • The current central pixel is classified as a corrupted pixel, so a window is imposed around it.

  • The next step consists of checking whether the number of uncorrupted pixels is greater than the number of corrupted ones. In the given example it is not, so this window is rejected and enlarged.

  • In the enlarged window, the uncorrupted pixels outnumber the corrupted ones. This window is accepted, and the threshold value is obtained as the median of the uncorrupted pixels inside it.

Algorithm 3: Generation of local thresholds
Input: the original image, the image of labeled pixels, and the maximum window size.
Output: the pixel threshold values and the corresponding window sizes.
For each pixel position:
  1. If the pixel is labeled as uncorrupted, set its threshold to its own intensity value.
  2. Otherwise, impose a window around the pixel and count the uncorrupted and corrupted pixels inside it.
  3. If the uncorrupted pixels outnumber the corrupted ones, or the maximum window size has been reached, set the threshold to the median of the uncorrupted pixels in the window and record the window size.
  4. Otherwise, enlarge the window by one pixel in all directions and return to step 2.
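A minimal Python sketch of the threshold rule in Alg. 3 follows. The starting 3x3 window and the default maximum window size are assumed values, since the exact sizes are not reproduced in this text.

import numpy as np

def local_threshold(img, labels, i, j, max_win=9):
    """Adaptive threshold for pixel (i, j) following Alg. 3.
    labels == 0 marks uncorrupted pixels, 1 corrupted ones.
    Returns the threshold value and the window size that produced it."""
    if labels[i, j] == 0:
        return float(img[i, j]), 1           # uncorrupted pixel: behave like LBP
    win = 3                                  # assumed starting window size
    while True:
        r = win // 2
        rows = slice(max(i - r, 0), i + r + 1)
        cols = slice(max(j - r, 0), j + r + 1)
        patch = img[rows, cols]
        good = patch[labels[rows, cols] == 0]        # uncorrupted pixels in the window
        if good.size > patch.size - good.size or win >= max_win:
            # accept the window; fall back to all pixels if none are uncorrupted
            value = np.median(good) if good.size else np.median(patch)
            return float(value), win
        win += 2                             # enlarge by one pixel in all directions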

2.3 Generate the binary pattern

To reach the highest performance in texture classification, the descriptor should balance the classification goals of robustness to noise, discriminativeness, and low computational cost. The LBP descriptor conveys local structures, but to achieve better performance, discriminative properties should be exploited by considering image patches instead of single pixels. To provide more information to the descriptor, these patches do not intersect with the central pixel's threshold window (Sec. 2.2), and each patch size is determined adaptively based on the pixels of that patch. Fig. 4 and Alg. 4 illustrate the binary pattern module of the proposed descriptor.

Figure 4: Binary module scheme (Alg. 4), showing the current central pixel, its corresponding window size (Alg. 3), and the neighborhood patches with their central pixels.

The binary pattern module (Alg. 4) represents the procedure for forming the binary pattern. The module starts by locating the neighborhood patches around the central pixel. For each patch, a window is imposed around its center. If the uncorrupted pixels outnumber the corrupted ones, the window is accepted and the patch value is the median of the uncorrupted pixels in that window. Otherwise, the window is enlarged, and the process continues until the predefined maximum window size is reached. Once every neighborhood patch value has been found, the binary pattern is computed by a simple comparison between the patch values and the central pixel's threshold value, where each patch is represented in the binary pattern by 0 or 1.

Algorithm 4: Generation of binary patterns
Input: the original image, the image of labeled pixels, the maximum window size, the pixel threshold values, and the corresponding window sizes.
Output: the binary pattern (RAMBP).
For each pixel position:
  1. Place the neighborhood patch centers at a distance from the central pixel determined by the central pixel's threshold window size (Alg. 3), so that the patches do not intersect that window.
  2. For each patch, impose a window around its center and count the uncorrupted and corrupted pixels inside it.
  3. If the uncorrupted pixels outnumber the corrupted ones, or the maximum window size has been reached, set the patch value to the median of the uncorrupted pixels in the window.
  4. Otherwise, enlarge the window by one pixel in all directions and repeat step 2 for that patch.
  5. Compare each patch value with the central pixel's threshold value and set the corresponding bit of the binary pattern to 0 or 1.
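The sketch below combines the previous steps into a RAMBP code for one pixel. The eight-patch layout and the patch-center distance derived from the threshold window size are illustrative assumptions rather than the authors' exact configuration; the threshold and its window size are taken from the threshold step sketched earlier.

import numpy as np

def patch_value(img, labels, i, j, max_win=9):
    """Adaptive median of a neighbourhood patch centred at (i, j): the same
    enlarge-until-accepted rule as the threshold window, applied in Alg. 4."""
    win = 3                                  # assumed starting patch window size
    while True:
        r = win // 2
        rows = slice(max(i - r, 0), i + r + 1)
        cols = slice(max(j - r, 0), j + r + 1)
        patch = img[rows, cols]
        good = patch[labels[rows, cols] == 0]
        if good.size > patch.size - good.size or win >= max_win:
            return float(np.median(good) if good.size else np.median(patch))
        win += 2

def rambp_code(img, labels, i, j, threshold, thr_win, max_win=9):
    """RAMBP binary code for pixel (i, j): one bit per neighbourhood patch.
    threshold and thr_win come from the threshold step (Alg. 3)."""
    d = thr_win // 2 + 2                     # assumed patch-centre distance, outside the threshold window
    offsets = [(-d, -d), (-d, 0), (-d, d), (0, d),
               (d, d), (d, 0), (d, -d), (0, -d)]
    h, w = img.shape
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        pi = min(max(i + di, 0), h - 1)      # clamp patch centres inside the image
        pj = min(max(j + dj, 0), w - 1)
        if patch_value(img, labels, pi, pj, max_win) >= threshold:
            code |= 1 << bit                 # each patch contributes 0 or 1
    return code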

Dataset (ref.)   Classes   Images   Challenges
[28]             24        4320     Rotation changes
[28]             24        960      Inca illuminant, rotations
[28]             24        4800     Illumination variations, rotation changes
[28]             68        2720     Inca illuminant, rotations
[29]             61        5612     Illumination variation, rotation and pose changes, specularities, shadowing
[30]             111       999      Large number of classes, lack of intraclass variations
[30]             111       999      Rotation changes, large number of classes, lack of intraclass variations
[31]             11        4752     Pose changes, illumination changes, scale changes
[32]             250       25000    Strong illumination changes, rotation changes, large number of classes

Table 1: Summary of the used Datasets.

Figure 17: Example textures from the two evaluated datasets with different impulse noise densities.

3 Experiments and results

The experiments were carried out on a 3.50 GHz Intel Core i7 processor with 32 GB of RAM under Matlab. Nine texture datasets, among the most commonly used in the literature, were employed in these experiments. Tab. 1 summarizes the datasets used, their numbers of classes and images, and the main challenges of each.

To evaluate the robustness of the proposed approach, the k-nearest neighbor (k-NN) classifier has been used. k-NN is recognized as one of the most popular and simplest classification methods; it is used here with the distance defined as

(1)

where the operands are the feature vectors of two different textures. k-NN is adopted with k = 1 for most experiments, but this parameter has been varied to test its influence on the performance consistency.
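As an illustration of the classification protocol, the following hedged sketch applies k-NN over descriptor histograms. Since the exact distance of Eq. (1) is not reproduced in this text, an L1 distance between histograms is assumed; train_hists and train_labels are hypothetical NumPy arrays of training features and class labels.

import numpy as np

def knn_classify(query_hist, train_hists, train_labels, k=1):
    """k-NN over descriptor histograms with an assumed L1 distance.
    train_hists: (N, D) array of training histograms; train_labels: (N,) class ids.
    Odd values of k avoid the tie problem mentioned in the figures."""
    dists = np.abs(train_hists - query_hist).sum(axis=1)   # L1 distance to every training sample
    nearest = np.argsort(dists)[:k]                        # indices of the k closest samples
    classes, counts = np.unique(train_labels[nearest], return_counts=True)
    return classes[np.argmax(counts)]                      # majority vote (the single match when k = 1)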

In order to study the effect of the maximum size of the adaptive window, the performance of RAMBP has been tested on one dataset with different maximum window sizes. Fig. 18 shows the classification score under the different applied noises for different maximum window sizes. As can be seen, a larger window size gives a higher score, but the time complexity also grows considerably. Therefore, a good trade-off should be found between accuracy and time complexity. In the experiments, the adopted maximum window size gives a high classification score while keeping the algorithm fast. Compared with traditional LBP, the proposed method is slower, but it has lower computational complexity and dimensionality than many LBP descriptors designed to address noisy textures.

Figure 18: Illustration of the performance according to the maximum window size that the adaptive window could reach.

In this section, we start by evaluating the proposed method on highly noisy textures, including Salt-and-Pepper noise, Gaussian noise, and Gaussian blur, followed by experimental results on noise-free textures, and finally the evaluation of the proposed method for noisy texture retrieval. Some of the state-of-the-art descriptor results reported in this paper are taken from [20].

3.1 Noisy texture classification

Noise robustness is a crucial criterion for evaluating descriptors. In this experiment, to test noisy textures and evaluate the descriptor robustness more accurately, the random noise generation has been repeated several times over the dataset, and the classification results are averaged over these runs. Noise-free images are used for the training step, while the testing step is performed on the noisy images. This scheme makes noisy texture classification very difficult, since the descriptor does not use any noise information or any prior learning process.
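The evaluation protocol above could be sketched as follows; the noise density, the number of repetitions, and the extract/classify callables are placeholders, not the authors' exact settings.

import numpy as np

def add_salt_and_pepper(img, density, rng):
    """Corrupt a fraction `density` of the pixels with extreme (0 or 255) values."""
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return noisy

def average_noisy_accuracy(test_images, test_labels, extract, classify,
                           density=0.5, repeats=10, seed=0):
    """Repeat the random noise generation `repeats` times and average the accuracy.
    `extract` computes a descriptor from an image; `classify` was fit on noise-free data only."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(repeats):
        correct = sum(int(classify(extract(add_salt_and_pepper(img, density, rng))) == label)
                      for img, label in zip(test_images, test_labels))
        scores.append(correct / len(test_images))
    return float(np.mean(scores))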

Method             Dataset 1: five noise densities (low to high)    Dataset 2: five noise densities (low to high)
[3]                 85.4   15.5   5.4    4.2    4.2                 66.0   9.9    3.8    1.8    1.5
[3]                 31.7   4.2    4.2    4.4    4.2                 11.8   1.5    1.5    1.5    1.5
[3]                 47.1   10.0   4.2    4.2    4.2                 26.5   4.7    2.2    1.5    1.5
[10]                27.3   4.2    4.2    4.2    4.2                 10.7   2.1    1.5    1.5    1.5
[11]                17.3   8.3    4.2    4.2    4.2                 7.6    2.9    1.5    1.6    1.5
[13]                31.0   8.3    4.2    4.2    4.2                 17.0   2.5    1.5    1.5    1.5
[13]                95.8   38.6   20.5   16.6   16.1                76.8   18.6   6.0    4.9    4.2
[12]                39.2   4.2    4.2    4.2    4.2                 18.5   1.5    1.5    1.5    1.5
[33]                27.3   4.2    4.2    4.2    4.2                 12.2   1.5    1.5    1.5    1.5
[34]                74.4   22.1   4.8    5.0    6.3                 40.5   4.7    3.8    2.6    2.7
[15]                71.9   13.5   8.3    4.2    4.2                 38.2   3.7    2.9    2.5    1.9
[4]                 29.8   5.4    4.2    4.2    4.2                 16.5   4.9    1.5    1.5    1.5
[14]                30.8   7.1    6.0    4.4    4.2                 15.9   1.5    1.5    1.3    1.5
[35]                25.2   8.3    4.2    4.2    4.2                 10.3   2.9    1.5    1.5    0.1
[18]                29.0   9.6    4.2    4.2    4.2                 24.5   2.8    1.5    1.5    1.5
[16]                100.0  95.4   20.7   13.8   10.7                100.0  85.0   4.8    1.8    1.5
[19]                100.0  100.0  100.0  85.8   50.2                100.0  99.9   94.0   54.6   19.2
[17]                21.0   12.1   6.0    6.5    4.2                 10.3   5.2    2.3    1.5    1.8
RAMBP (proposed)    100.0  100.0  100.0  99.1   98.5                100.0  100.0  100.0  99.8   90.2

Table 2: Classification scores (%) comparison between the proposed descriptor (RAMBP) and state-of-the-art descriptors under Salt-and-Pepper noise.
Figure 21: Performance of RAMBP under Salt-and-Pepper noise for different values of k in k-NN, where k is varied in steps of two to avoid the tie problem; the noise parameter indicates the Salt-and-Pepper density.

Salt-and-Pepper noise

Impulse noise introduces randomly distributed high or low values over the image. Salt-and-Pepper noise has been applied to both datasets with different noise densities. Highly noisy textures are very challenging, as can be seen in Fig. 17, where textures become visually unrecognizable at high noise densities.

The results of the proposed algorithm are listed in Tab. 2. It can be observed that the classification accuracy is improved by the proposed method. Compared to the different state-of-the-art techniques, RAMBP yields the best results and outperforms the other techniques, especially on highly noisy textures.

As can be seen from the results, using the rotation-invariant uniform scheme decreases the performance of the LBP-based descriptors. It can also be noticed from Tab. 2 that MRELBP offers the second-best performance, but its accuracy drops drastically at high noise densities. AMBP also gives good results and noise robustness under low-density impulse noise, but not under high noise.

Although the previously reported RAMBP performance shows a high score when k-NN with k = 1 provides the single best match among all images, it is important to study the percentage of matched images from the same class. This percentage can be computed using different values of k in k-NN; in other words, an image is classified by majority vote and assigned to the most common class. For example, with k = 1, k-NN returns the nearest image, and the examined image is classified as that image's class. For k = 3, k-NN returns the three nearest images, and the examined image is assigned to the class with the majority of votes among those three images. RAMBP performance has therefore been tested with different values of k in k-NN. We can notice from Fig. 21 the stability and robustness of RAMBP at different Salt-and-Pepper noise densities, where it keeps good accuracy even with a high noise density and a large value of k. Also shown in Fig. 21, the descriptor performance on the dataset with 68 classes decreases at a higher rate, proportional to the noise density; this may be due to its larger number of classes. Nevertheless, the accuracy remains good for different values of k.

Method             Dataset 1   Dataset 2
[3]                 35.0        9.8
[3]                 17.7        8.4
[3]                 16.0        7.9
[10]                17.5        10.4
[11]                11.9        5.6
[13]                12.1        5.2
[13]                59.4        22.0
[12]                22.1        11.9
[33]                19.2        10.3
[34]                24.0        9.0
[15]                12.5        6.1
[4]                 14.8        8.2
[14]                61.9        27.4
[35]                24.6        14.8
[18]                97.1        91.5
[16]                96.5        74.3
[19]                91.5        79.2
[17]                93.1        71.5
RAMBP (proposed)    99.0        95.9

Table 3: Classification scores (%) comparison between the proposed descriptor (RAMBP) and state-of-the-art descriptors for Gaussian noise.
Figure 24: Performance of RAMBP under Gaussian noise for different values of k in k-NN, where k is varied in steps of two to avoid the tie problem; the noise parameter indicates the Gaussian noise standard deviation.

Figure 29: Example textures from the two evaluated datasets with Gaussian noise, showing the changes in pixel values.

Gaussian noise

Gaussian noise is an additive noise that affects the gray values of digital images. Gaussian noise has been added to both datasets. Fig. 29 provides an example of the datasets used after adding Gaussian noise: visually, it is difficult to see the global effect and the difference between noise-free and noisy textures, but it can be seen that the local information and pixel intensities are affected.

Tab. 3 shows the classification results of the proposed method as well as the state-of-the-art descriptors, where RAMBP provides the best performance among all descriptors. The SSLBP descriptor gives the second-best results, followed by MRELBP, AMBP, and deep learning techniques. However, SSLBP yielded poor accuracy under Salt-and-Pepper noise, as indicated in Tab. 2. As can be observed from Tab. 2 and Tab. 3, the proposed method achieved the best results in both experiments and showed good consistency across different types of noise.

To illustrate the stability of RAMBP under Gaussian noise, different values of k in k-NN have been tested, as shown in Fig. 24. We notice only a small decrease in RAMBP accuracy as the value of k increases. Overall, RAMBP provides good stability and robustness even at large values of k in k-NN.

Gaussian blur

Gaussian blur, also known as Gaussian smoothing, is another kind of degradation that affects images and results in the removal of image detail. It also modifies the local structure, which affects local binary patterns. In these experiments, Gaussian blur has been applied to both datasets with different standard deviations. Fig. 40 illustrates an example of the datasets used with Gaussian blur.

Figure 40: Example textures from the two evaluated datasets with different Gaussian blur standard deviations.

Method             Dataset 1: blur standard deviations (low to high)    Dataset 2: blur standard deviations (low to high)
[3]                 99.1    71.8    53.8    39.8                         99.9    55.0    37.8    29.0
[3]                 94.2    46.5    24.6    12.7                         72.4    30.3    16.6    9.7
[3]                 86.9    44.6    26.0    18.1                         57.7    28.3    16.0    9.4
[10]                97.3    59.8    29.4    20.4                         81.7    43.2    25.1    16.7
[11]                98.8    74.8    49.6    23.1                         86.6    55.4    36.1    21.2
[13]                85.4    29.0    18.5    11.9                         58.7    22.5    13.5    10.6
[13]                97.5    73.0    50.9    39.5                         99.0    58.2    40.7    28.3
[12]                95.0    49.8    28.7    16.5                         75.4    33.2    18.4    10.7
[33]                94.0    47.7    28.3    17.1                         73.3    32.0    17.8    10.5
[34]                96.3    49.0    33.1    19.4                         80.1    35.7    21.7    14.1
[15]                100.0   60.2    36.9    23.8                         95.7    35.1    20.6    12.2
[4]                 90.4    61.5    21.9    13.1                         67.7    31.3    16.5    8.7
[14]                100.0   97.1    80.4    44.6                         100.0   97.5    59.1    39.1
[35]                99.4    85.8    65.2    45.4                         87.7    56.0    40.2    30.6
[18]                100.0   100.0   100.0   100.0                        100.0   100.0   100.0   90.6
[16]                100.0   99.0    88.7    52.6                         100.0   99.5    81.4    53.8
[19]                100.0   100.0   93.8    75.4                         99.9    97.9    85.8    61.8
[17]                100.0   100.0   96.5    89.8                         99.6    94.1    83.1    71.8
RAMBP (proposed)    100.0   100.0   100.0   100.0                        100.0   100.0   100.0   99.2

Table 4: Classification scores (%) comparison between the proposed descriptor (RAMBP) and state-of-the-art descriptors for Gaussian blur.
Figure 43: Performance of RAMBP under Gaussian blur for different values of k in k-NN, where k is varied in steps of two to avoid the tie problem; the noise parameter indicates the Gaussian blur standard deviation.

Method             Scores on the seven noise-free datasets (%)
[3]                 99.36   90.55   92.77   88.67   76.48   60.33   86.58
[3]                 99.69   92.16   97.03   90.70   79.22   62.69   94.15
[3]                 86.69   83.68   95.38   89.93   71.73   61.48   93.29
[10]                99.66   93.34   94.66   91.66   82.27   61.93   95.71
[11]                99.45   95.78   97.33   92.34   84.35   64.18   96.74
[13]                95.29   86.69   92.09   87.25   74.57   61.49   88.23
[13]                98.52   97.17   91.24   89.27   76.67   60.19   91.30
[12]                99.66   93.53   97.20   91.09   79.59   61.20   94.23
[33]                99.64   93.55   96.85   90.19   80.08   62.39   95.20
[34]                99.32   95.27   96.11   89.31   80.25   61.30   94.47
[15]                99.22   95.64   96.92   93.40   82.31   66.52   95.81
[4]                 99.46   91.97   94.38   88.73   75.04   61.72   NO
[14]                99.35   98.13   97.02   90.83   78.77   66.67   96.13
[35]                98.78   96.67   94.23   89.74   74.79   63.47   92.82
[18]                99.82   99.36   98.79   89.94   80.03   65.57   96.68
[16]                99.68   98.12   95.64   90.67   79.86   62.73   95.82
[19]                99.82   99.58   97.10   90.86   81.92   68.98   97.28
[17]                80.00   82.30   99.00   98.70   92.10   88.20   99.50
RAMBP (proposed)    99.90   99.70   98.50   94.05   86.98   68.86   97.59

Table 5: Classification scores (%) comparison between the proposed descriptor (RAMBP) and state-of-the-art descriptors for noise-free texture classification.

Tab. 4 depicts the classification scores after applying Gaussian blur. The proposed method shares the best score with the SSLBP method. The latter performs well here because it includes a blurring process in the descriptor generation. However, it must be recalled that SSLBP performed poorly under Salt-and-Pepper noise, as evidenced in Tab. 2. MRELBP and FV-CNN perform well under low levels of blur, but their accuracy decreases sharply under stronger blur.

The accuracy of the proposed method can also be observed in Fig. 43 after varying the value of k in k-NN. The classification accuracy remains high, with only a small decrease as the standard deviation and the value of k increase. It can be seen that RAMBP has high stability and robustness across different values of k. In general, RAMBP achieved the best results compared to the different state-of-the-art techniques, as apparent in Tabs. 2, 3, and 4.

3.2 Noise-free texture classification

Noise-free texture classification is challenging due to the dataset properties mentioned in Tab. 1. In these experiments, seven texture datasets have been used. The training and testing schemes differ from one dataset to another. For the Outex datasets, the testing and training samples are well defined by [3]: the training set has no rotation, while the testing set is rotated by a fixed set of rotation angles.

Another dataset is generated to test rotation invariance by applying a random rotation angle to each sample. For the remaining datasets, the samples of each class were divided equally into training and testing sets using a random selection of the samples; several random train/test splits were generated, and the classification results are averaged over these random partitionings. For the dataset that has four samples per class, training is performed on three samples and testing on the remaining one, and the results are obtained by performing the experiment four times.

The results of the texture classification are depicted in Tab. 5, where RAMBP provides the best results on some datasets and high performance on the others. FV-CNN, SSLBP, and MRELBP also show high and competitive performance. However, since RAMBP does not use any learning process and provides high performance for different kinds of noise, it stands out as the best descriptor across noisy and noise-free texture classification.

3.3 Noisy texture retrieval

Texture retrieval is based on image representation: given a query image, the basic idea is to find the best-matched images and retrieve them [36]. Texture retrieval starts by building the database and extracting its features. To retrieve a query image, its features are extracted and matched against the database feature vectors. Each image in the database is then ranked according to its similarity to the query image. Noisy texture retrieval is more challenging than noise-free texture retrieval, since it depends on the robustness of the image feature representation. So far, little attention has been paid to noisy image retrieval. These experiments illustrate how effective the proposed method is for the noisy image retrieval problem.

Figure 17: RAMBP performance in terms of average Recall and Precision for noisy texture retrieval, with panels for Salt-and-Pepper noise, Gaussian noise, and Gaussian blur.

In the literature, many types of distances have been applied. Here, a distance measure is adopted to compute the distances and similarities between the noisy query image and the noise-free database images. The images are ranked by distance and considered as the k nearest neighbors (k-NN) in the feature space, and the most similar images are returned as the results. Fig. 12 shows the flow chart of the texture retrieval process.

Figure 12: Noisy texture retrieval scheme.

Accuracy is the most common performance measure, but its main drawback is that it hides details that would help in better understanding the behavior of the classification model. Recall and Precision provide a better understanding of the performance by taking both false positives and false negatives into account. The noisy texture retrieval performance of RAMBP has therefore been tested in terms of recall and precision. Recall is the number of relevant images retrieved with respect to the number of all images in the class, while Precision is the number of relevant images retrieved with respect to all retrieved images:

Recall = (number of relevant images retrieved) / (number of all images in the class)   (2)

Precision = (number of relevant images retrieved) / (number of all retrieved images)   (3)

In this experiment, different kinds of noise were applied to the database. The noisy images were used as the query images, while the noise-free images formed the database. Due to the number of images per class in the dataset, the number of nearest neighbors k is tested up to 40 (more precisely, from 1 to 39 with a step of 2 to avoid the tie problem). Fig. 13 provides a simple example of the noisy texture retrieval ranking procedure and how the recall and precision values are calculated.

Figure 13: An example of noisy texture retrieval using RAMBP with the best five results. The right side of the figure depicts an example of calculating the recall and precision values from the ranked results.

To evaluate both the correctness and the accuracy of RAMBP, recall and precision graphs were used, and with the aim of evaluating the descriptor robustness, only textures with medium and high noise levels were tested. The recall and precision values are averaged over all noisy queries. Fig. 17 presents the retrieval performance of the proposed method; as can be noticed, RAMBP shows high robustness and provides high recall and precision rates even under high levels of noise.
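As a hedged sketch of the retrieval measures above, the following function ranks the noise-free database against a noisy query with an assumed L1 distance and computes the recall and precision of Eqs. (2) and (3) for the top-k results; db_hists and db_classes are hypothetical NumPy arrays of database descriptors and class labels.

import numpy as np

def retrieve_and_score(query_hist, query_class, db_hists, db_classes, k):
    """Rank the noise-free database against a noisy query and return
    (recall, precision) for the k best-ranked images."""
    dists = np.abs(db_hists - query_hist).sum(axis=1)      # assumed L1 distance
    ranked = np.argsort(dists)[:k]                         # indices of the k most similar images
    relevant_retrieved = int(np.sum(db_classes[ranked] == query_class))
    n_in_class = int(np.sum(db_classes == query_class))    # all images of the query's class
    recall = relevant_retrieved / n_in_class               # Eq. (2)
    precision = relevant_retrieved / k                     # Eq. (3)
    return recall, precision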

4 Conclusion

Texture classification is a crucial process in many computer vision applications. Existing descriptors achieve good texture classification performance; however, they have some weaknesses and limitations. One of these limitations is the robustness of the descriptor to high levels of noise; another is the robustness to different kinds of noise. To address these limitations, the Robust Adaptive Median Binary Pattern (RAMBP) descriptor has been introduced in this paper.

The RAMBP descriptor takes advantage of pixel classification and adaptive analysis to provide strong discriminativeness and noise robustness. The proposed descriptor has been evaluated on noisy textures, including Salt-and-Pepper noise, Gaussian noise, and Gaussian blur. Experimental results indicate that RAMBP outperforms existing descriptors on highly noisy texture classification and performs as one of the best on noise-free texture classification. The problem of noisy texture retrieval was also addressed, demonstrating the consistency, stability, high performance, and robustness of RAMBP.

Overall, the proposed method provides the best performance in both texture classification and retrieval in the presence of very challenging and diverse types of noise. As future work, the proposed approach will be assessed on other datasets and other types of noise.

References

  1. M. Tuceryan, A. K. Jain, et al., Texture analysis, Handbook of pattern recognition and computer vision 2 (1993) 235–276.
  2. M. Petrou, P. G. Sevilla, Image processing: dealing with texture, Vol. 1, Wiley Online Library, 2006.
  3. T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on pattern analysis and machine intelligence 24 (7) (2002) 971–987.
  4. S. Liao, M. W. Law, A. C. Chung, Dominant local binary patterns for texture classification, IEEE transactions on image processing 18 (5) (2009) 1107–1118.
  5. A. Satpathy, X. Jiang, H.-L. Eng, Lbp-based edge-texture features for object recognition, IEEE Transactions on Image Processing 23 (5) (2014) 1953–1964.
  6. M. Heikkilä, M. Pietikäinen, C. Schmid, Description of interest regions with local binary patterns, Pattern recognition 42 (3) (2009) 425–436.
  7. N. P. Doshi, G. Schaefer, A comprehensive benchmark of local binary pattern algorithms for texture retrieval, in: Pattern Recognition (ICPR), 2012 21st International Conference on, IEEE, 2012, pp. 2760–2763.
  8. L. Nanni, A. Lumini, S. Brahnam, Local binary patterns variants as texture descriptors for medical image analysis, Artificial intelligence in medicine 49 (2) (2010) 117–125.
  9. T. Ahonen, A. Hadid, M. Pietikäinen, Face recognition with local binary patterns, in: European conference on computer vision, Springer, 2004, pp. 469–481.
  10. H. Jin, Q. Liu, H. Lu, X. Tong, Face detection using improved lbp under bayesian framework, in: Image and Graphics (ICIG’04), Third International Conference on, IEEE, 2004, pp. 306–309.
  11. Z. Guo, L. Zhang, D. Zhang, A completed modeling of local binary pattern operator for texture classification, IEEE Transactions on Image Processing 19 (6) (2010) 1657–1663.
  12. J. Chen, V. Kellokumpu, G. Zhao, M. Pietikäinen, Rlbp: Robust local binary pattern., in: BMVC, 2013.
  13. A. Hafiane, G. Seetharaman, B. Zavidovique, Median binary pattern for textures classification, in: International Conference Image Analysis and Recognition, Springer, 2007, pp. 387–398.
  14. L. Liu, Y. Long, P. W. Fieguth, S. Lao, G. Zhao, Brint: binary rotation invariant and noise tolerant texture classification, IEEE Transactions on Image Processing 23 (7) (2014) 3071–3084.
  15. G. Schaefer, N. P. Doshi, Multi-dimensional local binary pattern descriptors for improved texture analysis, in: Pattern Recognition (ICPR), 2012 21st International Conference on, IEEE, 2012, pp. 2500–2503.
  16. A. Hafiane, K. Palaniappan, G. Seetharaman, Joint adaptive median binary patterns for texture classification, Pattern Recognition 48 (8) (2015) 2609–2620.
  17. M. Cimpoi, S. Maji, A. Vedaldi, Deep filter banks for texture recognition and segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3828–3836.
  18. Z. Guo, X. Wang, J. Zhou, J. You, Robust texture image representation by scale selective local binary patterns, IEEE Transactions on Image Processing 25 (2) (2016) 687–699.
  19. L. Liu, S. Lao, P. W. Fieguth, Y. Guo, X. Wang, M. Pietikäinen, Median robust extended local binary pattern for texture classification, IEEE Transactions on Image Processing 25 (3) (2016) 1368–1381.
  20. L. Liu, P. Fieguth, Y. Guo, X. Wang, M. Pietikäinen, Local binary features for texture classification: Taxonomy and experimental study, Pattern Recognition 62 (2017) 135–160.
  21. M. A. Figueiredo, J. M. Bioucas-Dias, R. D. Nowak, Majorization–minimization algorithms for wavelet-based image restoration, IEEE Transactions on Image processing 16 (12) (2007) 2980–2991.
  22. H. Takeda, S. Farsiu, P. Milanfar, Kernel regression for image processing and reconstruction, IEEE Transactions on image processing 16 (2) (2007) 349–366.
  23. A. Buades, B. Coll, J.-M. Morel, A review of image denoising algorithms, with a new one, Multiscale Modeling & Simulation 4 (2) (2005) 490–530.
  24. H. Hwang, R. A. Haddad, Adaptive median filters: new algorithms and results, IEEE Transactions on image processing 4 (4) (1995) 499–502.
  25. P.-E. Ng, K.-K. Ma, A switching median filter with boundary discriminative noise detection for extremely corrupted images, IEEE Transactions on image processing 15 (6) (2006) 1506–1516.
  26. H.-L. Eng, K.-K. Ma, Noise adaptive soft-switching median filter, IEEE Transactions on image processing 10 (2) (2001) 242–251.
  27. X. Zhang, Y. Xiong, Impulse noise removal using directional difference based noise detector and adaptive weighted mean filter, IEEE Signal processing letters 16 (4) (2009) 295–298.
  28. T. Ojala, T. Maenpaa, M. Pietikainen, J. Viertola, J. Kyllonen, S. Huovinen, Outex-new framework for empirical evaluation of texture analysis algorithms, in: Pattern Recognition, 2002. Proceedings. 16th International Conference on, Vol. 1, IEEE, 2002, pp. 701–706.
  29. M. Varma, A. Zisserman, A statistical approach to texture classification from single images, International journal of computer vision 62 (1-2) (2005) 61–81.
  30. P. Brodatz, Textures: a photographic album for artists and designers, Dover Pubns, 1966.
  31. P. Mallikarjuna, M. Fritz, A. T. Targhi, E. Hayman, B. Caputo, J. Eklundh, The kth-tips and kth-tips2 databases (2006).
  32. G. J. Burghouts, J.-M. Geusebroek, Material-specific adaptation of color invariant features, Pattern Recognition Letters 30 (3) (2009) 306–313.
  33. H. Zhou, R. Wang, C. Wang, A novel extended local-binary-pattern operator for texture analysis, Information Sciences 178 (22) (2008) 4314–4325.
  34. A. Fathi, A. R. Naghsh-Nilchi, Noise tolerant local binary pattern operator for efficient texture analysis, Pattern Recognition Letters 33 (9) (2012) 1093–1100.
  35. X. Hong, G. Zhao, M. Pietikainen, X. Chen, Combining lbp difference and feature correlation for texture description, IEEE Transactions on Image Processing 23 (6) (2014) 2557–2568.
  36. A. Pentland, R. W. Picard, S. Sclaroff, Photobook: Content-based manipulation of image databases, International journal of computer vision 18 (3) (1996) 233–254.