Evolving Fuzzy Image Segmentation
Current image segmentation techniques usually require the user to tune several parameters in order to obtain maximum segmentation accuracy, a computationally inefficient approach, especially when a large number of images must be processed sequentially in daily practice. The use of evolving fuzzy systems for designing a method that automatically adjusts parameters to segment medical images according to the quality expectation of expert users has been proposed recently (evolving fuzzy image segmentation, EFIS). However, EFIS suffers from a few limitations when used in practice, mainly due to some fixed parameters. For instance, EFIS depends on auto-detection of the object of interest for feature calculation, a task that is highly application-dependent. This shortcoming limits the applicability of EFIS, which was proposed with the ultimate goal of offering a generic but adjustable segmentation scheme. In this paper, a new version of EFIS is proposed to overcome these limitations. The new EFIS, called self-configuring EFIS (SC-EFIS), uses the available training data to self-estimate the parameters that are fixed in EFIS. In addition, SC-EFIS relies on a feature selection process that does not require auto-detection of a region of interest (ROI). The proposed SC-EFIS was evaluated using the same segmentation algorithms and the same dataset as EFIS. The results show that SC-EFIS can provide the same results as EFIS but with a higher level of automation.
Evolving fuzzy image segmentation (EFIS for short) has recently been introduced to solve the parameter-setting (e.g., fine-tuning) problem of different segmentation techniques. EFIS has been designed with an emphasis on acquiring and integrating user feedback into the fine-tuning process. As a result, EFIS is suitable for all applications, such as medical image analysis, in which an experienced and knowledgeable user provides evaluative feedback of some sort with respect to the quality, i.e., accuracy, of the image segmentation.
Image segmentation is the grouping of pixels to form meaningful clusters that constitute objects (e.g., organs, tumours), a task with various applications in medical image analysis, including measurement, detection, and diagnosis. Image segmentation algorithms can be roughly categorized into two main classes: non-parametric (e.g., atlas-based segmentation) and parametric (e.g., thresholding, region growing). The former is based on a model that usually does not require parameters, whereas the latter is based on parameters that must be adjusted in order to obtain reasonable segmentation results. Parametric segmentation algorithms always face the challenge of parameter adjustment; a parameter tuned for a particular set of images may perform poorly for a different image category.
On the other hand, in a clinical setting such as a hospital, the final outcome of an image segmentation algorithm usually needs to be modified (i.e., manually edited) and approved by an expert (e.g., a radiologist, oncologist, or pathologist). The clinical ramifications of not verifying the correctness of segments include missing a target (resulting in a less effective therapy) or increased toxicity if the target is over-segmented. The frequent expert intervention to correct the results, in fact, generates valuable feedback for a learning scheme that automatically adjusts the segmentation parameters.
EFIS is an image segmentation scheme that evolves fuzzy rules to tune the parameters of a given segmentation algorithm by incorporating user feedback, which is provided to the system as corrected or manually created segmentation results called gold standard images. EFIS represents a new understanding of how image segmentation should be designed in the context of observer-oriented applications. Naturally, EFIS needs to be further improved and extended in order to exploit the full potential of its underlying evolving mechanism in relation to the user feedback. The original design of EFIS requires pre-configuration of several steps, which must be set for a given image set and for the segmentation algorithm into which EFIS is integrated. This limits the efficiency of EFIS: either the algorithm must be pre-configured for each dataset and/or segmentation algorithm, or a fixed pre-configuration may adversely affect its performance. In this paper, we present a new and extended version of EFIS, called self-configuring EFIS (SC-EFIS for short), that removes these limitations by introducing self-configuration into different stages of EFIS, resulting in a higher level of automation.
This paper is organized as follows: In Section II, a brief review of EFIS (evolving fuzzy image segmentation) is provided. In Section III, we critically point out the shortcomings of EFIS. Section IV reviews the literature on feature selection, as this is the major improvement of SC-EFIS over EFIS. In Section V, we present the proposed self-configuring EFIS (SC-EFIS). In Section VI, the experiments are described and the results are presented and analyzed. Finally, Section VII concludes the paper.
II A Brief Review of EFIS
The concept of evolving fuzzy image segmentation, EFIS, was proposed recently. The problem that EFIS attempts to address is parameter adjustment in image segmentation. The basic idea of EFIS is to adjust the parameters of segmentation to increase the accuracy by using user feedback in the form of corrected segments. To do so, EFIS extracts features from a region inside the image and associates them with the best parameter, found exhaustively. Clustering or other methods are then used to generate fuzzy rules, which are continuously updated as new images are processed. The simplified pseudo-code of EFIS is given in Algorithm 1.
EFIS needs to be trained for specific algorithms and image categories. In other words, in order to employ EFIS, the following components must be pre-designated:
Parent algorithm: any segmentation algorithm with at least one parameter that affects its accuracy (e.g., global thresholding, statistical region merging)
Parameter(s) to be adjusted (e.g., thresholds, scales)
Images and corresponding gold standard images
Procedure to find optimal parameters (e.g., brute force or trial-and-error via comparison with the gold standard images)
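The brute-force search named in the last item can be sketched as follows; the `segment` callback, the candidate grid, and the toy arrays are illustrative assumptions, not the exact procedure of EFIS:

```python
import numpy as np

def jaccard(seg, gold):
    """Area overlap between a binary segment and its gold standard."""
    union = np.logical_or(seg, gold).sum()
    return np.logical_and(seg, gold).sum() / union if union else 1.0

def best_parameter(image, gold, segment, candidates):
    """Exhaustively try every candidate value and keep the one whose
    segmentation agrees most with the gold standard image."""
    scores = [jaccard(segment(image, p), gold) for p in candidates]
    return candidates[int(np.argmax(scores))]

# Toy example: global thresholding as the parent algorithm.
image = np.array([[10, 10, 200], [12, 210, 220], [11, 205, 215]])
gold = image > 100                 # stands in for an expert-corrected segment
best_t = best_parameter(image, gold, lambda im, t: im > t,
                        candidates=list(range(0, 256, 16)))
```

Any agreement measure could replace the Jaccard index here; the exhaustive loop is what makes this step practical only offline, during training.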
Once the above-mentioned components are available/defined, the following steps need to be specified:
ROI-detection algorithm: An algorithm that detects the region of interest (ROI) around the subject to be segmented by EFIS.
Procedure for feature extraction around available seed points: Methods like SIFT are used to generate seed points, but a certain number of expressive features must be calculated in the vicinity of each seed point and fed to the fuzzy inference system.
Rule pruning: Upon processing a new image, a new rule is learned only if the features and corresponding output parameters have not been observed previously. In other words, by comparing an input (features plus outputs) with all rules in the database, the information of a new image is added only if it is not captured by existing rules.
Label fusion: When EFIS is used with multiple algorithms at once, the segmentation results are fused using a fusion method, namely the STAPLE algorithm.
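The rule-pruning step above can be illustrated with a minimal sketch; the Euclidean distance threshold `tol` and the toy feature vectors are arbitrary assumptions, not the values used in EFIS:

```python
import numpy as np

def maybe_add_rule(rule_base, features, output, tol=0.5):
    """Append the new (features, optimal parameter) pair only if no stored
    rule already captures it, judged by Euclidean distance."""
    candidate = np.append(features, output)
    if any(np.linalg.norm(rule - candidate) < tol for rule in rule_base):
        return False                      # information already captured
    rule_base.append(candidate)
    return True

rules = []
maybe_add_rule(rules, np.array([0.20, 0.80]), 0.40)         # novel: added
dup = maybe_add_rule(rules, np.array([0.21, 0.79]), 0.41)   # near-duplicate: skipped
```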
EFIS includes two main phases, namely training and testing. In the training phase, images with their gold standard results are fed to the algorithm, and features are extracted from each image. The parent algorithm, e.g., thresholding, is applied to each image and the results are compared to the gold standard image. The algorithm's parameters are changed repeatedly until the best possible result is achieved. The best parameter, i.e., the one yielding the highest agreement with the gold standard image, is stored along with the image features extracted in the previous stage. Once all training images are processed, the fuzzy rules are generated from the stored data using a clustering algorithm.
In the testing phase, new images are first processed to extract features. Next, the image features are fed to the fuzzy inference system to approximate the parameters. The parent algorithm is then applied to the input image using the estimated parameter. EFIS can address both single-parametric and multi-parametric problems. EFIS was applied to three different thresholding algorithms, and significant improvements in terms of segmentation accuracy were achieved.
III Critical Analysis of EFIS
Although EFIS has been demonstrated to improve segmentation results, some of its underlying steps may limit its applicability, mainly because these steps have been designed in an ad-hoc fashion and tailored to the specific test images and algorithms, namely breast ultrasound images and thresholding. In this section, we examine the limitations of EFIS and lay out how they should be addressed via self-configuration.
EFIS calculates the features inside a rectangle that constitutes the region of interest (ROI). Within this region, features are calculated using the scale-invariant feature transform (SIFT) [13, 14]. In designing the ROI-detection algorithm, it is assumed that the ROI will be dark, based on the characteristics of the test images used (breast lesions in ultrasound are hypoechoic, meaning they are darker than the surrounding tissue). This means that EFIS needs a detection algorithm for any new image category (application) to correctly recognize the region of the image containing the object of interest. In addition, as with any other detection algorithm, if detection fails, EFIS will not be able to perform. We will remove this dependency by redesigning the feature extraction stage.
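As an illustration of this application dependence, a naive detector for a dark (hypoechoic) ROI might simply box the darkest pixels; the quantile value is an arbitrary assumption, and the whole approach breaks down as soon as the object of interest is brighter than its background:

```python
import numpy as np

def dark_roi(image, quantile=0.1):
    """Bounding box (rmin, rmax, cmin, cmax) of the darkest pixels; only
    sensible when the object of interest is darker than its surroundings."""
    mask = image <= np.quantile(image, quantile)
    rows, cols = np.where(mask)
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

img = np.full((8, 8), 200)
img[2:5, 3:6] = 20            # dark lesion on a bright background
box = dark_roi(img)           # finds the lesion here, but would box the
                              # background instead if the lesion were bright
```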
In order to calculate features within the ROI, EFIS uses a fixed number of landmarks, called seed points, which are delivered by SIFT. This fixed number of key points is used for all images regardless of their content. Of course, an arbitrarily fixed number of seed points may not be able to characterize all types of images. We will eliminate this limitation of EFIS by automatically setting the number of seed points for different image categories.
EFIS constructs a fixed-size window of pixels around each landmark (seed point) to calculate the features. A self-configuring EFIS has to set the window size automatically during a pre-processing stage in order to optimally define the feature neighbourhood.
EFIS uses a fixed number of manually selected features, namely 18 features that proved to perform well on the breast ultrasound images. Intuitively, such a fixed selection may not be flexible enough to capture the image content. Any set of images with some common characteristics may need a different set of features for the evolving fuzzy system to effectively estimate the parameters of the segmentation.
In the proposed extension of the EFIS algorithm, we address these shortcomings by introducing a pre-processing (self-configuration) stage in which the settings are determined automatically. As apparent from the list above, feature selection is at the core of the lack of automation in EFIS. In the following section, therefore, we briefly review feature selection methods.
IV Feature Selection
Providing relevant features to a learning system increases its ability to generalize and hence elevates its performance. Feature selection is the process of selecting the most relevant features out of a larger group of features so that redundant or irrelevant features are removed. Redundant features add no new information to the system, and irrelevant features may confuse the system and decrease its ability to learn efficiently. Feature selection may be conducted according to one of four general schemes.
Feature selection may also be categorized into three main branches: supervised, semi-supervised, and unsupervised.
IV-A Supervised Feature Selection
In supervised feature selection, the choice of a set of features from a larger number of features is based on one of three criteria: 1) features of a size that optimizes an evaluation measure, 2) features satisfying a condition on the evaluation measure, and 3) features that best match a size and an evaluation measure. Supervised feature selection methods deal primarily with classification problems, in which the class labels are known in advance. Numerous studies have investigated supervised feature selection using information-theoretic measures and the Hilbert-Schmidt independence criterion.
IV-B Semi-Supervised Feature Selection
The concept of semi-supervised feature selection has emerged recently as a means of addressing situations in which insufficient labels are available to cover the entire training data, or in which a substantial portion of the data is unlabelled. Traditional supervised feature selection techniques are generally ineffective under such circumstances, so semi-supervised feature selection is employed instead. A semi-supervised feature selection constraint score that takes the unlabelled data into account has been proposed. The literature also contains numerous semi-supervised techniques based on spectral analysis, a Bayesian network, a combination of a traditional technique with a feature importance measure, or the use of a Laplacian score. Although semi-supervised selection does not require a complete set of class labels, it does need some.
IV-C Unsupervised Feature Selection
Unsupervised feature selection is the process of selecting the most relevant non-redundant features from a larger number of features without the use of class labels. Mitra et al. proposed an unsupervised feature selection algorithm based on feature similarity. They used a maximum information compression index to measure the similarities between features so that similar features could be discarded. He et al. proposed an unsupervised feature selection technique that relies on the Laplacian score to indicate the significance of the features. Zhao et al. used spectral graph theory to develop an algorithm that unifies supervised and unsupervised feature selection. They applied the spectrum of the graph, which contains information about the structure of the graph, in order to measure the relevance of the features. Cai et al. proposed a new unsupervised feature selection algorithm called Multi-Cluster Feature Selection, in which the selected features are those that maintain the multi-cluster structure of the data. Farahat et al. presented a novel unsupervised greedy feature selection algorithm consisting of two parts: a recursive technique for calculating the reconstruction error of the matrix of selected features, and a greedy algorithm for feature selection. The method was tested on six different benchmark data sets, and the results show an improvement over state-of-the-art unsupervised feature selection techniques.
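As one concrete example, the Laplacian score of He et al. can be sketched in a few lines; the neighbourhood size k, the kernel width t, and the toy data are illustrative choices (lower scores mark features that better preserve the local structure of the data):

```python
import numpy as np

def laplacian_scores(X, k=2, t=1.0):
    """Laplacian score for each column of X (samples x features); lower is
    better. Builds a heat-kernel kNN graph, then scores each feature by how
    smoothly it varies across the graph relative to its variance."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    S = np.exp(-d2 / t)                                   # heat-kernel weights
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]               # k nearest, skip self
    mask = np.zeros_like(S, dtype=bool)
    mask[np.repeat(np.arange(n), k), nn.ravel()] = True
    S = np.where(mask | mask.T, S, 0.0)                   # symmetrised kNN graph
    D = np.diag(S.sum(1))
    L = D - S                                             # graph Laplacian
    ones, scores = np.ones(n), []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_t = f - (f @ D @ ones) / (ones @ D @ ones) * ones   # D-weighted centring
        denom = f_t @ D @ f_t
        scores.append((f_t @ L @ f_t) / denom if denom else np.inf)
    return np.array(scores)

# Feature 0 separates two clusters; feature 1 is noise within them.
X = np.array([[0.0, 0.0], [0.1, 1.0], [0.2, 0.5],
              [5.0, 0.2], [5.1, 0.9], [5.2, 0.4]])
scores = laplacian_scores(X)
```

Here the cluster-preserving feature receives the lower (better) score, which is the behaviour the original paper exploits.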
IV-D Features for SC-EFIS
In order to eliminate the major shortcomings of EFIS with respect to inflexible and static feature selection, and in order not to assume the availability of class labels, we chose unsupervised feature selection, specifically the five popular unsupervised feature selection algorithms reviewed above, to characterize images for training the evolving fuzzy system. These five methods, along with an additional correlation-based method, were combined to produce an ensemble of final relevant features to be used for training.
In the remainder of the paper, each of these techniques is referred to by the output matrix it produces.
V Self-Configuring EFIS (SC-EFIS)
This section introduces a new version of EFIS, namely self-configuring evolving fuzzy image segmentation (SC-EFIS), which represents a higher level of automation compared to the original EFIS scheme. The proposed SC-EFIS scheme consists of three phases: a self-configuration phase, a training phase, and an online or evolving phase. In the following, each of these phases is described in detail.
V-A Self-Configuring Phase
In the self-configuring phase (Algorithm 2), all available images are processed in order to determine two crucial factors: 1) the size of the feature area around each seed point, and 2) the final features to be used for the current image category.
The size of the rectangle around each SIFT point to be used for feature calculation is determined from the dimensions of all available images (Algorithm 2). Following this step, the set of features to be used for the available images is selected from a large number of features that are calculated for each image in the vicinity of the SIFT points located across the entire image (since there is no longer an ROI) (Fig. 1). This process starts with the determination of the number of SIFT points to be used for the current image (Algorithm 2). This step is identical to the procedure used in the EFIS training phase, as explained in Section II, with three exceptions: the SIFT points are detected across the entire image (as opposed to selecting SIFT points inside an ROI as a subset of the image), the final number of SIFT seed points is not fixed, and the points returned are separated from each other by a minimum distance in each direction. For all seed points, features are extracted from a rectangle around each point, based on the discrete cosine transform of the image, its gradient magnitude, the approximation coefficient matrix of its wavelet decomposition, and the SIFT descriptors. The following set of features is extracted (Algorithm 2):
The mean, median, standard deviation, covariance, mode, range, minimum, and maximum of the image region and of its DCT, gradient magnitude, and wavelet approximation matrices (32 features)
The mean, median, standard deviation, covariance, range, minimum, maximum, and zero population of the SIFT descriptor matrix (eight features), with the minimum redefined as the smallest value after zero
The contrast, correlation, energy, and homogeneity of the gray-level co-occurrence matrices (computed in four directions: 0°, 45°, 90°, and 135°) of the image region and of its DCT, gradient magnitude, and wavelet approximation matrices (64 features)
The contrast, correlation, energy, and homogeneity of the gray-level co-occurrence matrix (computed in only one direction) of the SIFT descriptor matrix (four features)
A feature matrix generated for each image from all of the above (in this case, 108 features per seed point)
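A dependency-free sketch of the per-seed-point statistics is given below; only the raw patch and its gradient magnitude are shown, and the window half-size is an assumption (the DCT, wavelet, and GLCM features of the full scheme would be computed analogously):

```python
import numpy as np

def patch_stats(m):
    """Mean, median, standard deviation, covariance (variance here), mode,
    range, minimum, and maximum of one matrix, as in the feature list above."""
    vals, counts = np.unique(m, return_counts=True)
    mode = vals[np.argmax(counts)]
    return [m.mean(), np.median(m), m.std(), float(np.cov(m.ravel())),
            mode, m.max() - m.min(), m.min(), m.max()]

def seed_features(image, seed, half=4):
    """Statistics of the patch around one seed point and of its gradient
    magnitude (two of the source matrices used in SC-EFIS)."""
    r, c = seed
    patch = image[max(r - half, 0):r + half + 1,
                  max(c - half, 0):c + half + 1].astype(float)
    gy, gx = np.gradient(patch)
    return np.array(patch_stats(patch) + patch_stats(np.hypot(gx, gy)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
feats = seed_features(img, seed=(10, 20))   # 8 stats x 2 matrices = 16 values
```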
The next step is to calculate different statistical measures over this feature matrix (mean, median, mode, standard deviation, covariance, range, minimum, and maximum). The resulting matrix, in which each row represents one statistical measure, is returned (Algorithm 2, CSF) and appended to the overall feature matrix (Algorithm 2). After all images are processed, the overall feature matrix contains the features of all images, with each image represented by eight rows.
In the last step, the final set of features to be used for the current image category is selected from this matrix. The process starts with the removal of very similar features, based on the correlations between all features: if two features are highly correlated, e.g., with a correlation coefficient of at least 99%, one is kept and the other is discarded. The output of this process is a reduced feature matrix (Algorithm 2).
For any unsupervised feature selection technique, the number of features to be returned must be established in advance. A correlation threshold of 90% is used to determine this number (Algorithm 2). In addition to the correlation-based selection, five different unsupervised feature selection methods are applied: the reduced feature matrix and the desired number of features are passed to each method, and each returns a matrix with its selected features (Algorithm 2). Across the six resulting matrices, any feature selected by at least three of the six methods is appended to a candidate matrix (Algorithm 2). The final feature matrix is then generated by discarding candidate features that are at least 90% correlated (Algorithm 2).
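The correlation filtering and majority voting can be sketched as follows; the feature names and the six selector outputs are made up for illustration:

```python
import numpy as np

def drop_correlated(X, names, thr=0.99):
    """Keep one feature from every group whose pairwise correlation with an
    already-kept feature is at least thr."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, i] < thr for i in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

def vote(selections, min_votes=3):
    """Keep any feature chosen by at least min_votes of the selectors."""
    counts = {}
    for sel in selections:
        for f in sel:
            counts[f] = counts.get(f, 0) + 1
    return sorted(f for f, c in counts.items() if c >= min_votes)

X = np.array([[1., 2., 1.], [2., 4., 3.], [3., 6., 2.], [4., 8., 5.]])
_, kept = drop_correlated(X, ['a', 'b', 'c'])   # 'b' is 2*'a' -> dropped
picks = [{'mean', 'std'}, {'mean', 'range'}, {'mean', 'std'},
         {'std', 'mode'}, {'range'}, {'mean'}]
final = vote(picks)                             # chosen by >= 3 of 6 selectors
```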
V-B Training (Offline) Phase
In this phase, the features selected for the training images are used to train the fuzzy system. A set of images is randomly selected for training (Algorithm 3). An input matrix is created and filled with the rows of the final feature matrix that belong to the training images (Algorithm 3), and an output matrix is filled with the corresponding optimal parameters (Algorithm 3). A pruning step is performed, starting from the second training image, to ensure that the input and output matrices do not contain similar rows (Algorithm 3). The pruned matrices are used for the generation of the initial fuzzy rules (Algorithm 3). The initial fuzzy system is built by creating a set of rules using the Takagi-Sugeno approach to describe the input and output matrices. Based on different features as the input and one optimal parameter as the output, a set of rules is generated whereby the features form the antecedent part and the optimal parameters form the consequent part of the rules.
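A minimal zero-order Takagi-Sugeno sketch of this step: cluster centres form the rule antecedents, the mean optimal parameter per cluster forms the consequent, and inference is a firing-strength-weighted average. The cluster count, Gaussian width, and toy data are all assumptions, not the actual EFIS configuration:

```python
import numpy as np

def build_rules(F, P, n_rules=2, iters=20):
    """Cluster training features with plain k-means; each centre becomes one
    rule antecedent, the mean optimal parameter of its members its consequent."""
    centres = F[:n_rules].astype(float)
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centres) ** 2).sum(-1), axis=1)
        for k in range(n_rules):
            if (labels == k).any():
                centres[k] = F[labels == k].mean(0)
    consequents = np.array([P[labels == k].mean() for k in range(n_rules)])
    return centres, consequents

def infer(x, centres, consequents, sigma=0.3):
    """Gaussian firing strength per rule, then a weighted-average output."""
    w = np.exp(-((x - centres) ** 2).sum(-1) / (2 * sigma ** 2))
    return float(w @ consequents / w.sum())

F = np.array([[0.1, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.9]])
P = np.array([0.2, 0.2, 0.7, 0.7])       # optimal parameters per image
centres, cons = build_rules(F, P)
estimate = infer(np.array([0.15, 0.05]), centres, cons)
```

A test point near the first cluster receives a parameter estimate close to that cluster's consequent, which is the behaviour the rule base is meant to provide.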
V-C Online and Evolving Phase
The evolving process is performed in order to increase the capabilities of the proposed system. For each test image, a feature matrix is filled with the rows of the final feature matrix that belong to the test image (Algorithm 4). Fuzzy inference is applied to these features, a parameter vector is returned, and the final output parameter is calculated (Algorithm 4). The resulting parameter is used for the segmentation of the image (Algorithm 4), and the resulting segment is stored and then displayed to the user for review and eventual correction (Algorithm 4). The best parameter for the current image is then calculated based on the user-corrected segment and stored (Algorithm 4). A pruning procedure is performed on the new rows as described for EFIS, with the exception that the Euclidean distance thresholds differ for different techniques. After pruning, the revised rows are appended to the input and output matrices (Algorithm 4). In the final step, the rule base of the current fuzzy inference system is regenerated using the updated matrices (Algorithm 4), and the process is repeated as long as new images are available.
VI Experiments and Results
This section describes the experiments conducted to test the proposed self-configuring EFIS (SC-EFIS). To build the initial fuzzy system, for each training set, a set of randomly selected images from the data set was used for the extraction of the features along with the optimum parameters as output. This initial fuzzy system was then used to test the proposed method on the remaining images. The initial fuzzy system evolves as long as new (unseen) images are fed into the system and the segmentation results produced by the algorithms are corrected by an expert user in order to generate optimal parameter values. This process drives the evolution of the fuzzy rules for segmentation. The training-testing cycle was repeated 10 times: the results of ten different trials for each segmentation technique and each parent algorithm are presented in order to validate the performance of SC-EFIS. The number of rules was monitored during the evolution process in order to acquire empirical knowledge about the convergence of the evolving process.
Experimental results for three different segmentation techniques (region growing, global thresholding, and statistical region merging) on an image dataset are presented. All experiments were performed using Matlab 64-bit.
VI-A Image Data
The target dataset consists of 35 breast ultrasound scans (the images and their gold standard segments are available online: http://tizhoosh.uwaterloo.ca/Data/) that were segmented by an image-processing expert with extensive experience in breast lesion segmentation (the second author). The images, collected from the Web, are of different dimensions (Figure 2; images resized for the sake of illustration). These are the same images used to introduce EFIS originally.
Ultrasound images are generally difficult to segment, primarily due to the presence of speckle noise and the low level of local contrast. It should be noted that the segmentation of ultrasound images actually requires a complete processing chain (including proper pre-processing and post-processing steps). However, the purpose of using these images was solely to demonstrate that the accuracy of the segmentation can be increased by applying SC-EFIS.
VI-B Evaluation Measures
Considering two segments S_A (generated by an algorithm) and S_G (the gold standard image manually created by an expert), we calculate the average of the Jaccard index (area overlap) J = |S_A ∩ S_G| / |S_A ∪ S_G| and its standard deviation. In addition, the confidence interval (CI) of the Jaccard index is calculated. Finally, we performed t-tests to validate the null hypothesis when comparing the results of a parent algorithm and its evolved version, in order to establish whether any potential increase in accuracy is statistically significant. Ground-truth images were created so that the objects of interest (i.e., lesions and tumours) are labeled white (1) and the background black (0). All thresholding techniques were used consistently to label object pixels in this way, as was done in EFIS.
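For illustration, these measures can be sketched as follows; the normal-approximation 95% confidence interval is an assumption, since the CI construction is not specified here:

```python
import numpy as np

def jaccard(seg, gold):
    """Area overlap |S_A ∩ S_G| / |S_A ∪ S_G| between binary segments."""
    union = np.logical_or(seg, gold).sum()
    return np.logical_and(seg, gold).sum() / union if union else 1.0

def summarize(jaccards):
    """Mean, standard deviation, and a normal-approximation 95% CI."""
    j = np.asarray(jaccards, dtype=float)
    mean, std = j.mean(), j.std(ddof=1)
    half = 1.96 * std / np.sqrt(len(j))
    return mean, std, (mean - half, mean + half)

seg = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gold = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)
j = jaccard(seg, gold)                    # intersection 2, union 3 -> 2/3
mean, std, ci = summarize([0.56, 0.57, 0.59])
```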
To compare with EFIS, the SC-EFIS results are presented for the same parent algorithms, namely region growing (RG), global thresholding, and statistical region merging (SRM). The results are discussed with respect to rule evolution, visual inspection, and accuracy verification using the Jaccard index.
Rule Evolution – Fig. 3 shows the change in the number of rules during the evolution of the thresholding (THR) process. The number of rules initially increases with each incoming image and then begins to decrease as additional images become available. The same behaviour was observed for SRM and RG.
Visual Inspection – A visual inspection of Fig. 4 shows that the results produced by the proposed SC-EFIS for RG represent a substantial improvement over those obtained with FRG (fuzzy RG, in which the initial fuzzy rules are used to estimate the similarity threshold). A visual inspection of Fig. 5 reveals a significant improvement of the SC-EFIS results for SRM over the plain SRM ones.
Accuracy Verification – Ten different trials/runs are presented for each method. Each run is an independent experiment involving different training and testing images. Fig. 6 shows the improvement in the Jaccard index of SC-EFIS for SRM compared with SRM using a fixed scale of 32.
Table I presents a comparison of the results for the RG technique: RG with fuzzy inference, RG with a similarity threshold of 0.17, RG with the best similarity threshold (0.12) for the available data (RG-B), the EFIS-RG technique, and SC-EFIS-RG. The best similarity threshold, determined only for experimental purposes, is found via an exhaustive search that is impractical in real-world applications. The results achieved with SC-EFIS are better than the EFIS results in eight of ten experiments.
Table II presents a comparison of the results for global thresholding with a static (non-evolving) fuzzy system (THR): the results for THR, EFIS-THR, and SC-EFIS-THR. The SC-EFIS results surpass the EFIS ones in six of ten experiments; EFIS produces better results in two experiments and equivalent results in the other two.
Table III presents a comparison of the results for the SRM technique: SRM using fuzzy inference (FSRM), SRM with a scale of 32 (SRM), SRM with the best scale (64) for the available images (SRM-B), determined via exhaustive search, EFIS-SRM, and SC-EFIS-SRM. The results produced by SC-EFIS are superior to the EFIS results in five experiments, inferior in four, and equivalent in the remaining experiment. Of course, both EFIS and SC-EFIS perform better than the parent algorithm.
In general, SC-EFIS is competitive with and can even surpass EFIS with respect to the three segmentation techniques, while offering a higher level of automation.
Switching/Fusion of Results – Finally, the switch/fusion technique was re-examined for use with SC-EFIS. Table IV presents the results of switching and fusion for the same three methods, namely Niblack, SRM (scale = 32), and RG (similarity = 0.17), using EFIS (EFIS-S and EFIS-F) and SC-EFIS (SC-EFIS-S and SC-EFIS-F). The outcomes of EFIS and SC-EFIS are comparable. In addition, the results of EFIS-S and SC-EFIS-S surpass those of SRM, the best individual method.
Table V enables a comparison of the EFIS and SC-EFIS results for global thresholding with different global and local thresholding techniques. The data listed are taken from three experiments selected from Table II. In all three experiments, EFIS and SC-EFIS provide outcomes that are more accurate than those produced by the non-evolving thresholding techniques.
Niblack (local): 56% ± 24%, CI [47%, 65%]
Niblack (local): 57% ± 25%, CI [48%, 66%]
Niblack (local): 59% ± 24%, CI [49%, 68%]
Most image segmentation techniques involve multiple parameters that must be tuned in order to achieve maximum segmentation accuracy. Evolving fuzzy image segmentation (EFIS) was recently proposed to provide evolving, user-oriented parameter adjustment for medical image segmentation. EFIS is a generic segmentation scheme that relies on user feedback to improve the quality of segmentation. Its evolving nature makes the approach attractive for applications that incorporate high-quality user feedback, such as medical image analysis. However, EFIS has some limitations, such as parameters that must be selected prior to running the algorithm and the lack of an automated feature selection component. These drawbacks restrict the use of EFIS to specific categories of images. An improved version of EFIS, called self-configuring EFIS (SC-EFIS), was proposed in this paper. SC-EFIS is a generic image segmentation scheme that does not require the manual setting of some parameters, such as the number of features, or the detection of a region of interest. SC-EFIS operates with the available data and extracts the major parameters necessary for its operation from those data. A comparison of the SC-EFIS results with those obtained with EFIS demonstrates the comparable accuracy of the two schemes, with SC-EFIS offering a much higher level of automation.
-  E. Arvacheh and H. Tizhoosh, Pattern analysis using Zernike moments, in Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC 2005), vol. 2, 2005, pp. 1574–1578.
-  F. Bellal, H. Elghazel, and A. Aussem, A semi supervised feature ranking method with ensemble learning, Pattern Recognition Letters, (2012).
-  J. Cadenas, M. Carmen Garrido, and R. Martínez, Feature subset selection filter-wrapper based on low quality data, Expert Systems with Applications, (2013).
-  D. Cai, C. Zhang, and X. He, Unsupervised feature selection for multi-cluster data, in Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, 2010, pp. 333–342.
-  R. Cai, Z. Zhang, and Z. Hao, Bassum: A bayesian semi-supervised method for classification feature selection, Pattern Recognition, 44 (2011), pp. 811–820.
-  G. Doquire and M. Verleysen, A graph laplacian based approach to semi-supervised feature selection for regression problems, Neurocomputing, (2013).
-  A. K. Farahat, A. Ghodsi, and M. S. Kamel, Efficient greedy feature selection for unsupervised learning, Knowledge and Information Systems, (2012), pp. 1–26.
-  X. He, D. Cai, and P. Niyogi, Laplacian score for feature selection, Advances in Neural Information Processing Systems, 18 (2006), p. 507.
-  L. Huang and M. Wang, Image thresholding by minimizing the measure of fuzziness, Pattern Recognition, 28 (1995), pp. 41–51.
-  M. Kalakech, P. Biela, L. Macaire, and D. Hamad, Constraint scores for semi-supervised feature selection: A comparative study, Pattern Recognition Letters, 32 (2011), pp. 656–665.
-  J. Kittler and J. Illingworth, Minimum error thresholding, Pattern Recognition, (1986), pp. 41–47.
-  T. N. Lal, O. Chapelle, J. Weston, and A. Elisseeff, Embedded methods, in Feature Extraction, Springer, 2006, pp. 137–165.
-  D. Lowe, Object recognition from local scale-invariant features, in Proceeding of the IEEE International Conference on Computer Vision, vol. 2, 1999, pp. 1150–1157.
-  D. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60 (2004), pp. 91–110.
-  J. Martínez Sotoca and F. Pla, Supervised feature selection by clustering using conditional mutual information-based distances, Pattern Recognition, 43 (2010), pp. 2068–2081.
-  P. Mitra, C. Murthy, and S. K. Pal, Unsupervised feature selection using feature similarity, IEEE transactions on pattern analysis and machine intelligence, 24 (2002), pp. 301–312.
-  L. C. Molina, L. Belanche, and À. Nebot, Feature selection algorithms: A survey and experimental evaluation, in IEEE International Conference on Data Mining ICDM, 2002, pp. 306–313.
-  W. Niblack, An Introduction to Digital Image Processing, Strandberg Publishing Company, Birkeroed, Denmark, 1986.
-  A. Othman, H. R. Tizhoosh, and F. Khalvati, EFIS: Evolving fuzzy image segmentation, IEEE Transactions on Fuzzy Systems, 22 (2014), pp. 72–82.
-  Y. Saeys, I. Inza, and P. Larrañaga, A review of feature selection techniques in bioinformatics, Bioinformatics, 23 (2007), pp. 2507–2517.
-  N. Sánchez-Maroño, A. Alonso-Betanzos, and M. Tombilla-Sanromán, Filter methods for feature selection–a comparative study, in Intelligent Data Engineering and Automated Learning-IDEAL 2007, Springer, 2007, pp. 178–187.
-  L. Song, A. Smola, A. Gretton, K. M. Borgwardt, and J. Bedo, Supervised feature selection via dependence estimation, in Proceedings of the 24th international conference on Machine learning, ACM, 2007, pp. 823–830.
-  P.-N. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining, Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2005.
-  H. R. Tizhoosh, Image thresholding using type II fuzzy sets, Pattern Recognition, 38 (2005), pp. 2363–2372.
-  H. R. Tizhoosh, Type II fuzzy image segmentation, in Fuzzy Sets and Their Extensions: Representation, Aggregation and Models, Studies in Fuzziness and Soft Computing, vol. 220, 2008, pp. 607–619.
-  S. Warfield, Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation, IEEE Transactions on Medical Imaging, 23 (2004), pp. 903–921.
-  Z. Zhao and H. Liu, Semi-supervised feature selection via spectral analysis, in Proceedings of the 7th SIAM International Conference on Data Mining, Minneapolis, MN, 2007, pp. 1151–1158.
-  Z. Zhao and H. Liu, Spectral feature selection for supervised and unsupervised learning, in Proceedings of the 24th International Conference on Machine Learning, ACM, 2007, pp. 1151–1157.