Bounded Fuzzy Possibilistic Method
Abstract
This paper introduces the Bounded Fuzzy Possibilistic Method (BFPM) by addressing several issues that previous clustering/classification methods have not considered. In fuzzy clustering, an object's membership values must sum to 1; hence, any object may obtain full membership in at most one cluster. Possibilistic clustering methods remove this restriction, but BFPM differs from previous fuzzy and possibilistic approaches by allowing the membership function to take larger values with respect to all clusters. Furthermore, in BFPM, a data object can have full membership in multiple clusters or even in all clusters. BFPM relaxes the boundary conditions (restrictions) in membership assignment. The proposed methodology satisfies the necessity of obtaining full memberships and overcomes the issues conventional methods have in dealing with overlapping. Analysing objects' movements from their own cluster to another (mutation) is also proposed in this paper. BFPM has been applied in different domains: geometry, set theory, anomaly detection, risk management, disease diagnosis, and other disciplines. Validity and comparison indices have also been used to evaluate the accuracy of BFPM. BFPM has been evaluated in terms of accuracy, fuzzification constant (different norms), objects' movement analysis, and covering diversity. The promising results demonstrate the importance of the proposed methodology in learning methods for tracking the behaviour of data objects, in addition to obtaining accurate results.
Index Terms: Bounded Fuzzy Possibilistic Method, Membership function, Critical object, Object movement, Mutation, Overlapping, Similarity function, Weighted Feature Distance, Supervised learning, Unsupervised learning, Clustering.
I. Introduction
Clustering is a form of unsupervised learning that splits data into different groups, or clusters, by calculating the similarity between objects contained in a dataset [1]. More formally, assume that we have a set of $n$ objects $O = \{o_1, o_2, \ldots, o_n\}$, in which each object is typically described by numerical data of the form $o_j = (x_{j1}, \ldots, x_{jd})$, where $d$ is the dimension of the search space or the number of features [2]. A cluster is a set of objects, and the relation between clusters and objects is often represented by a matrix with values $u_{ij}$, where $u_{ij}$ represents a membership value, $o_j$ is the $j$-th object in the dataset, and $c_i$ is the $i$-th cluster [3]. A partition or membership matrix is often represented as a $c \times n$ matrix $U = [u_{ij}]$, where $c$ is the number of clusters [4]. Crisp, fuzzy, and possibilistic are three types of partitioning methods [5]. Crisp clusters are nonempty, mutually disjoint subsets of $O$:
$M_{crisp} = \{ U \in \mathbb{R}^{c \times n} \mid u_{ij} \in \{0,1\};\ \sum_{i=1}^{c} u_{ij} = 1;\ 0 < \sum_{j=1}^{n} u_{ij} < n \}$  (1)
where $u_{ij}$ is the membership of object $o_j$ in cluster $c_i$. If the object $o_j$ is a member of cluster $c_i$, then $u_{ij} = 1$; otherwise, $u_{ij} = 0$. Fuzzy clustering is similar to crisp clustering [6], but each object can have partial memberships in more than one cluster [7] (to cover overlapping). This condition is stated by Eq. (2), where an object may obtain partial nonzero memberships in several clusters, but only a full membership in one cluster.
$M_{fuzzy} = \{ U \in \mathbb{R}^{c \times n} \mid u_{ij} \in [0,1];\ \sum_{i=1}^{c} u_{ij} = 1;\ 0 < \sum_{j=1}^{n} u_{ij} < n \}$  (2)
According to Eq. (2), each column of the partition matrix must sum to 1. Thus, a property of fuzzy clustering is that, as the number of clusters $c$ becomes larger, the values $u_{ij}$ must become smaller [8]. An alternative partitioning approach is possibilistic clustering [9]. In Eq. (3), the condition $\sum_{i=1}^{c} u_{ij} = 1$ of Eq. (2) is relaxed by substituting it with $\max_i u_{ij} > 0$.
$M_{pos} = \{ U \in \mathbb{R}^{c \times n} \mid u_{ij} \in [0,1];\ \max_{i} u_{ij} > 0 \}$  (3)
Based on Eq. (1), Eq. (2), and Eq. (3), it is easy to see that all crisp partitions are subsets of fuzzy partitions, and fuzzy partitions are subsets of possibilistic partitions, i.e., $M_{crisp} \subset M_{fuzzy} \subset M_{pos}$. Possibilistic methods have some drawbacks, such as offering trivial null solutions [10], and they need to be tuned in advance, as they strongly depend on good initialization steps [11]. AMPCM (Automatic Merging Possibilistic Clustering Method) [12], graded possibilistic clustering [13], and some other approaches such as soft transition techniques [14] have been proposed to cover the issues with possibilistic methods.
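The nesting of the three partition families in Eqs. (1)-(3) can be illustrated with a small sketch (the helper name and tolerances are ours, not part of the paper):

```python
import numpy as np

def partition_type(U, tol=1e-9):
    """Classify a c x n membership matrix (rows: clusters, columns: objects)
    as crisp, fuzzy, or possibilistic, following Eqs. (1)-(3)."""
    U = np.asarray(U, dtype=float)
    in_unit = np.all((U >= -tol) & (U <= 1 + tol))
    col_sums_one = np.allclose(U.sum(axis=0), 1.0, atol=tol)
    binary = np.all(np.isclose(U, 0, atol=tol) | np.isclose(U, 1, atol=tol))
    some_positive = np.all(U.max(axis=0) > tol)
    if binary and col_sums_one:
        return "crisp"
    if in_unit and col_sums_one:
        return "fuzzy"
    if in_unit and some_positive:
        return "possibilistic"
    return "invalid"

print(partition_type([[1, 0], [0, 1]]))          # crisp
print(partition_type([[0.7, 0.2], [0.3, 0.8]]))  # fuzzy
print(partition_type([[0.9, 0.2], [0.8, 0.6]]))  # possibilistic
```

Because the checks are tried in order, every crisp matrix also satisfies the fuzzy conditions, and every fuzzy matrix satisfies the possibilistic ones, mirroring the inclusion $M_{crisp} \subset M_{fuzzy} \subset M_{pos}$.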
I-A. FCM Algorithm
In prototype-based (centroid-based) clustering, the data is described by a set of point prototypes in the data space [15]. There are two important types of prototype-based FCM algorithms. One type is based on the fuzzy partition of a sample set [16], and the other is based on the geometric structure of a sample set in a kernel-based method [17]. The FCM objective function may be defined as [1]:
$J_m(U, V) = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \, \| x_j - v_i \|_A^2$  (4)
where $U$ is the partition matrix, $V = (v_1, \ldots, v_c)$ is the vector of cluster centers (prototypes) in $\mathbb{R}^d$, $m > 1$ is the fuzzification constant, and $\| \cdot \|_A$ is any inner-product $A$-induced norm [18], i.e., $\| x \|_A^2 = x^T A x$, or a distance function such as the Minkowski distance, presented by Eq. (5). Eq. (4) [19] makes use of the Euclidean distance function by assigning $p = 2$ in Eq. (5).
$d(x_j, v_i) = \left( \sum_{k=1}^{d} | x_{jk} - v_{ik} |^{p} \right)^{1/p}$  (5)
In this paper, the Euclidean norm ($L_2$, i.e., $p = 2$) is used for the experimental verifications, although there are some issues with conventional similarity functions in covering diversity in their feature spaces [20].
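As a brief illustration of Eq. (5), a minimal Minkowski distance in Python (the function name is ours); setting $p = 2$ recovers the Euclidean distance used in the experiments:

```python
import numpy as np

def minkowski(x, v, p=2):
    """Minkowski distance of order p (cf. Eq. (5)); p=2 is the Euclidean
    distance, p=1 the Manhattan distance."""
    x, v = np.asarray(x, float), np.asarray(v, float)
    return float(np.sum(np.abs(x - v) ** p) ** (1.0 / p))

# p = 2 (Euclidean) vs p = 1 (Manhattan) for the same pair of points
print(minkowski([0, 0], [3, 4], p=2))  # 5.0
print(minkowski([0, 0], [3, 4], p=1))  # 7.0
```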
I-B. Kernel FCM
In kernel FCM, the dot product $\Phi(x_j) \cdot \Phi(x_k)$ is used to transform the feature vectors through a nonlinear mapping function $\Phi : \mathbb{R}^d \to \mathbb{R}^D$, where $D$ is the dimensionality of the transformed feature space. Eq. (6) presents the nonlinear mapping for the Gaussian kernel [21].
$K(x_j, v_i) = \exp\left( - \| x_j - v_i \|^2 / \sigma^2 \right)$  (6)
where $m$ is the fuzzification parameter, and $d_K$ is the kernel-based distance [22], which replaces the Euclidean function, between the $j$-th and $i$-th feature vectors:
$d_K^2(x_j, v_i) = K(x_j, x_j) + K(v_i, v_i) - 2K(x_j, v_i) = 2\left(1 - K(x_j, v_i)\right)$  (7)
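Eqs. (6) and (7) can be sketched as follows, assuming the Gaussian kernel so that $K(x, x) = 1$ (function names are ours):

```python
import numpy as np

def gaussian_kernel(x, v, sigma=1.0):
    """Gaussian kernel K(x, v) = exp(-||x - v||^2 / sigma^2) (cf. Eq. (6))."""
    x, v = np.asarray(x, float), np.asarray(v, float)
    return float(np.exp(-np.sum((x - v) ** 2) / sigma ** 2))

def kernel_distance_sq(x, v, sigma=1.0):
    """Squared kernel-induced distance (cf. Eq. (7)): since K(x, x) = 1 for
    the Gaussian kernel, ||phi(x) - phi(v)||^2 = 2 * (1 - K(x, v))."""
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))

print(kernel_distance_sq([0, 0], [0, 0]))  # 0.0 (identical points)
```

Note that the kernel distance is bounded above by 2, unlike the unbounded Euclidean distance it replaces.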
The remaining sections of this paper are organized as follows. Section II discusses challenges with conventional clustering and membership functions; numerical examples are explored to clarify the challenges. Section III introduces the BFPM methodology for membership assignments and discusses how the method overcomes the issues with conventional methods; this section also presents an algorithm for clustering problems using the BFPM methodology. Experimental results and cluster validity functions are discussed in Section IV. Discussions and conclusions are presented in Section V.
II. Challenges on Conventional Methods
Uncertainty is the main concept in fuzzy type-I, fuzzy type-II, probability, possibilistic, and other methods [23]. To get a better understanding of the issues with conventional membership functions that deal with uncertainty, several examples from wider perspectives are presented in the following sections. The necessity of a comprehensive membership function for data objects becomes clearer when intelligent systems are used to implement mathematical equations in uncertain conditions. The aim of this paper is to remove restrictions on data objects so that they can participate in as many clusters as they can [24]. The lack of this consideration in learning methods weakens the accuracy and the capability of the proposed methods [25]. The following examples explore the issues with conventional methods in a two-dimensional search space. In higher-dimensional search spaces, big data, and social networks, where overlapping plays the key role [26], [27], the influence of mis-assignments massively skews the final results. Examples are selected from geometry, set theory, medicine, and other domains to highlight the importance of considering objects' participation in more clusters.
II-A. Example from Geometry
Assume $\mu$ is a membership function that assigns a membership degree to each point with respect to each line, where each line represents a cluster. Some believe that a line cannot be a cluster, but we should note that a data pattern or data model can be represented as a function in different norms; here, the data models can be presented as lines in a two-dimensional search space. Now consider the following equation, which describes lines crossing at the origin:
$Y = AX$  (8)
where $A$ is an $m \times d$ coefficient matrix, in which $m$ is the number of lines and $d$ is the number of dimensions (in this example, $d = 2$). From a geometrical point of view, each line containing the origin is a subspace of $\mathbb{R}^2$. Eq. (9) describes the lines as subspaces. Without the origin, each of those lines is not a subspace, since the definition of a subspace comprises the existence of the null vector as a condition, in addition to other properties [28]. When trying to design a fuzzy-based [29] clustering method that can create clusters using the points on all lines, it should be noted that removing or decreasing a membership value, with respect to the origin of each cluster, ruins the subspace.
$W_i = \{ (x, y) \in \mathbb{R}^2 \mid y = a_i x \}, \quad i = 1, \ldots, m$  (9)
For instance, $y = a_1 x$, $y = a_2 x$, $y = a_3 x$, and $y = a_4 x$ are equations representing some of those lines, shown by Eq. (10), with infinitely many data objects (points) on them.
$l_i = \{ (x, y) \in \mathbb{R}^2 \mid y = a_i x \}, \quad i = 1, \ldots, 4$  (10)
To clarify the idea, just two of those lines, $l_1$ and $l_2$, each with five points on them, including the origin, are selected for this example.
The origin is a member of all lines, but for convenience it has been given a different name on each line. The point distances with respect to each line, computed with the Euclidean function, are shown in the matrices below, where $c$ is the number of clusters and $n$ is the number of objects.
A zero value in the first row of the first matrix indicates that the object is on the first line. For example, in the first matrix, the first row shows that all members of the first set are on the first line, and the second row shows how far each of the points on the first line is from the second line (cluster). Likewise, the second matrix shows the data points on the second line.
Membership values are assigned to each point using crisp and fuzzy sets, as shown in the matrices below, by using an example membership function presented by Eq. (11) with respect to Fig. 1 for crisp and fuzzy methods, besides considering the conditions for these methods described by Eq. (1) and Eq. (2).
$u_{ij} = 1 - \frac{d_{ij}}{K}$  (11)
where $d_{ij}$ is the Euclidean distance of object $o_j$ from cluster $c_i$, and $K$ is a constant that is used to normalize the values into $[0, 1]$.
By selecting crisp membership functions in membership assignments, the origin can be a member of just one cluster and must be removed from other clusters. Given the properties of fuzzy membership functions, if the number of clusters increases, the membership value assigned to each object decreases proportionally. For instance, in the case of two clusters, the membership of the origin is $1/2$, but if the number of clusters increases to $m$ lines, using Eq. (9), we obtain $1/m$. Hence, a point will have a membership value that is smaller than the value we would expect to get intuitively (the typicality [9] value of 1 for points on the line). As the number of clusters increases, we obtain ever smaller membership values, since $1/m \to 0$ as $m$ grows. Possibilistic approaches allow data objects to obtain larger values in membership assignments, but PCM (Possibilistic Clustering Method) needs a good initialization to provide accurate clustering [10]. According to the PCM condition, the trivial null solutions should be handled by modifications of the membership assignments.
II-B. Examples from Set Theory
In set theory, we can extract subsets from a superset according to the properties of their members. Usually, members can participate in more than one set (cluster). For instance, a set of natural numbers can be categorized into different subsets: "Even", "Odd", and "Prime" numbers, presented by Fig. 2. According to set theory, we can distinguish some members that participate in other subsets with full memberships. Another example is to categorize a set of numbers into two clusters, numbers divisible by two and numbers divisible by five, presented by Fig. 3. Considering these examples, we see that some members can participate in more clusters, and we cannot remove any of them from any of those sets. In other words, we cannot restrict objects (members) to participate in only one cluster, or to participate only partially in other clusters.
Removing or restricting objects from participation in other clusters leads to losing very important information and consequently weakens the accuracy of learning methods. Members should be treated by a comprehensive method that allows objects to participate in more, even all, clusters as full members with no restriction. From Fig. 3, it is obvious that we cannot say a member such as 10 is only a member of the cluster "divisible by 2", or is half a member of this cluster; it is a full member of both clusters, which cannot be precisely expressed by conventional methods. More formally, for a set of objects that should be clustered into two clusters $A$ and $B$, crisp methods can cover the union $A \cup B$, meaning all objects are categorized into clusters, but cannot provide the intersection $A \cap B$, meaning that no object can participate in more than one cluster, even in mandatory cases such as the presented examples from geometry and set theory. Other conventional methods reduce the memberships assigned to objects that participate in more than one cluster, which means objects cannot be full members of different clusters.
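The divisibility example of Fig. 3 can be reproduced directly; the universe {1, ..., 20} is our illustrative choice:

```python
# Membership of the numbers 1..20 in the clusters "divisible by 2" and
# "divisible by 5": a member such as 10 is a full member of both clusters.
universe = set(range(1, 21))
div2 = {x for x in universe if x % 2 == 0}
div5 = {x for x in universe if x % 5 == 0}

print(sorted(div2 & div5))  # intersection: full members of both clusters
print(sorted(div2 | div5))  # union: objects covered by at least one cluster
```

Here 10 and 20 belong to the intersection, which crisp and fuzzy partitions cannot express with full memberships in both clusters.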
II-C. Examples from Other Domains
Another example is a student from two or more departments who needs to be assigned full memberships from more than one department, as a top student, based on their participation. Assume we plan to categorize a number of students $S$ into some clusters based on their skills with respect to each department or course $D$ (mathematics, physics, chemistry, and so on). Again, assume a very good student $s$ with the potential ability to participate in more than one cluster with the full membership degree (1), as follows:
$\mu_{D_1}(s) = 1$, $\mu_{D_2}(s) = 1$, ..., $\mu_{D_k}(s) = 1$.
As the equations show, $s$ is a full member of several clusters (departments) $D_1, \ldots, D_k$. The following membership assignments are calculated based on crisp and fuzzy methods for a student who is a member of two departments out of four:
As the results show, students can obtain credit from just one department in the crisp method. In the fuzzy method, their memberships and credits are divided by the number of departments. In all domains and disciplines, we need to remove the limitations and restrictions in membership assignments to allow objects to participate in more clusters, so that we can track their behaviour. In medicine, we need to evaluate individuals before they are affected by any disease category, to cut further treatment costs [30]. In such cases, we need to evaluate individuals' participation in other clusters without any restriction, to track their potential ability to move to other clusters.
III. New Learning Methodology (BFPM)
The Bounded Fuzzy Possibilistic Method (BFPM) makes it possible for data objects to have full memberships in several, or even all, clusters, in addition to providing the properties of crisp, fuzzy, and possibilistic methods. The method also overcomes the issues of conventional clustering methods and facilitates the analysis of objects' movements (mutation). This is indicated in Eq. (12) with the normalizing condition $0 < \frac{1}{c} \sum_{i=1}^{c} u_{ij} \le 1$. BFPM allows objects to participate in more, even all, clusters in order to evaluate the objects' potential movement from their own cluster to other clusters in the near future. Knowing the exact type of objects and studying the objects' movement in advance is very important for crucial systems.
$M_{BFPM} = \{ U \in \mathbb{R}^{c \times n} \mid u_{ij} \in [0,1];\ 0 < \frac{1}{c} \sum_{i=1}^{c} u_{ij} \le 1 \}$  (12)
BFPM avoids the problem of reducing objects' memberships when the number of clusters increases. In the geometry example, objects (points) are allowed to obtain full memberships from more than one, even all, clusters (lines) with no restrictions [31]. Based on the membership assignments provided by BFPM, we can obtain the following results for the points on the lines.
The origin can participate in all clusters (lines) as a full member. Objects such as the origin and its like are called critical objects [32]. Addressing the critical objects is very important, as we may need to encourage or prevent objects to/from participating in other clusters. Critical objects are important even in multi-objective optimization problems, as we are interested in finding the solution that fits all objective functions. The arithmetic operations of set theory, the intersection $\cap$ and the union $\cup$, are precisely covered by BFPM. Covering the intersection in learning methods removes the limitations and restrictions in membership assignments, and members can participate in more, even all, clusters.
BFPM has the following properties:
Property 1:
Each data object must be assigned to at least one cluster.
Property 2:
Each data object can potentially obtain a membership value of 1 in multiple clusters, even in all clusters.
Proof of Property 1:
Since Eq. (12) requires $0 < \frac{1}{c} \sum_{i=1}^{c} u_{ij}$, at least one membership $u_{ij}$ must be greater than zero; hence each data object must participate in at least one cluster.
Proof of Property 2:
According to fuzzy membership assignment [33] with respect to $c$ clusters, we have:
$0 \le u_{1j} \le 1$, ..., $0 \le u_{cj} \le 1$.
By replacing the values of zero and one (for fuzzy functions) with the linguistic values $Min$ and $Max$, respectively, we will have:
$Min \le u_{1j} \le Max$, ..., $Min \le u_{cj} \le Max$.
Consequently, summing the $c$ inequalities, we get Eq. (13); as $Max$ and $Min$ are upper and lower boundaries, and regarding the rules in fuzzy sets [34], results from operations on the upper and lower boundaries remain within those boundaries.
$c \cdot Min \le \sum_{i=1}^{c} u_{ij} \le c \cdot Max$  (13)
By considering Eq. (13), the above assumptions, and dividing all sides by $c$, we obtain the following equation:
$Min \le \frac{1}{c} \sum_{i=1}^{c} u_{ij} \le Max$  (14)
Table I. Results on the cancer dataset.
Serum metabolites | Samples No. | Survival(s) | Survival(l) | Stage(l) | Stage(h) | Ave. age | Ctype(s) | Ctype(a)
Diamond cluster | 12 | 63.30% | 36.70% | 83.33% | 16.67% | 64.82 | 57.33% | 42.67%
Cancer samples in square cluster | 21 | 40.91% | 59.09% | 60.87% | 39.13% | 65.31 | 39.13% | 60.87%
Cancer samples in circle cluster | 6 | 50.00% | 50.00% | 66.66% | 33.34% | 58.60 | 33.34% | 66.66%
Finally, we get the BFPM condition, $0 < \frac{1}{c} \sum_{i=1}^{c} u_{ij} \le 1$, by converting the linguistic values back to crisp values.
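A minimal check of the BFPM condition derived above (the function name and tolerance are ours):

```python
def satisfies_bfpm(memberships, eps=1e-9):
    """Check the BFPM condition 0 < (1/c) * sum_i u_ij <= 1 for one object,
    where memberships holds u_1j, ..., u_cj."""
    c = len(memberships)
    if any(u < 0 or u > 1 for u in memberships):
        return False        # each u_ij must lie in [0, 1]
    mean = sum(memberships) / c
    return eps < mean <= 1.0

print(satisfies_bfpm([1.0, 1.0, 1.0]))  # True: full member of all clusters
print(satisfies_bfpm([0.5, 0.5]))       # True: fuzzy-style assignment
print(satisfies_bfpm([0.0, 0.0]))       # False: must join at least one cluster
```

The first case shows the distinguishing feature of BFPM: an object may hold full membership in every cluster simultaneously, which both fuzzy and crisp conditions forbid.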
The second proof provides the most flexible environment for all membership functions that make use of uncertainty in their membership assignments. In other words, BFPM is introduced to assist fuzzy and possibilistic methods in their membership assignments. In conclusion, BFPM presents a methodology in which objects are not only clustered based on their similarities, but the ability of objects to participate in other clusters can also be studied. The method covers the intersection and the union operations with respect to all objects and clusters. If there is a similarity between an object and a cluster, even in one dimension, the object can obtain a membership degree from that cluster; otherwise, the membership degree will be zero. The degree of membership is calculated based on the similarity of the object to the cluster with respect to all dimensions; having similarities in more dimensions results in a higher degree of membership. Moreover, the method considers objects' movement from one cluster to another in prediction and prevention strategies. In crucial systems such as security, disease diagnosis, risk management, and decision-making systems, studying the behaviour of objects in advance is extremely important; neglecting the study of objects' movements leads such systems to irreparable consequences. Algorithm 1 is introduced to assign memberships to objects with respect to each cluster.
The BFPM methodology and the Euclidean distance function have been applied in Algorithm 1. The algorithm makes use of the BFPM membership assignments (Eq. (12)), considering the condition $0 < \frac{1}{c} \sum_{i=1}^{c} u_{ij} \le 1$ to assign membership values. Eq. (15) and Eq. (16) show how the algorithm calculates the memberships and how the prototypes are updated in each iteration. The algorithm runs until reaching the condition $\| V_{new} - V_{old} \| < \varepsilon$.
The value assigned to $\varepsilon$ is a predetermined constant that varies based on the type of objects and the clustering problem.
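Since Eqs. (15) and (16) are not reproduced here, the sketch below uses the standard FCM membership and prototype updates as stand-ins, together with the termination test on the prototypes' change described in the text; it illustrates the loop structure of Algorithm 1, not BFPM's exact membership assignment (all names are ours):

```python
import numpy as np

def fcm_like(X, c, m=2.0, eps=1e-4, max_iter=100, V0=None):
    """Skeleton of a prototype-based clustering loop in the style of
    Algorithm 1. Standard FCM updates stand in for Eqs. (15)-(16);
    iteration stops once the prototypes move less than eps."""
    n, d = X.shape
    # initial prototypes: user-supplied, or evenly spaced sample points
    V = np.array(V0, float) if V0 is not None else X[np.linspace(0, n - 1, c).astype(int)].copy()
    U = np.full((n, c), 1.0 / c)
    for _ in range(max_iter):
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)  # n x c distances
        D = np.fmax(D, 1e-12)                                      # avoid division by zero
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = 1.0 / np.sum((D[:, :, None] / D[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        # prototype update: weighted mean with weights u_ij^m
        V_new = (U.T ** m @ X) / np.sum(U.T ** m, axis=1, keepdims=True)
        done = np.linalg.norm(V_new - V) < eps                     # termination condition
        V = V_new
        if done:
            break
    return U, V

# two well-separated blobs as a toy dataset
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
U, V = fcm_like(X, c=2)
print(U.shape, V.shape)  # (40, 2) (2, 2)
```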
Table II. Accuracy (%) on the Iris dataset for fuzzy, modified FCM, and PCM methods.
Fuzzy | LAC | WLAC | FWLAC | CDFCM | KCDFCM | KFCMF | WEFCM | APCM | BFPM
90.21 | 90.57 | 94.37 | 95.90 | 96.18 | 92.06 | 96.66 | 92.67 | — | 97.33
IV. Experimental Verification
The proposed method has been applied to real applications and problems from different domains, such as medicine (lung cancer diagnosis) [30], intrusion detection systems [32], and banking (financial) systems and risk management (decision-making systems) [35]. The proposed methodology has been utilized in both supervised and unsupervised learning strategies, but in this paper clustering problems have been selected for the experimental verification. BFPM revealed information about lung cancer through the analysis of metabolomics, by evaluating the potential abilities of objects to participate in other clusters. The results on the cancer dataset are presented in Table I and Fig. 4. The figure shows how individuals are clustered with respect to their serum features (metabolites), where the horizontal axis presents objects and the vertical axis shows the memberships assigned to objects. The methodology has also been applied in risk management and security systems to evaluate how objects (packets/transactions) that are categorized in the normal cluster can be risky for the system in the near future. In other words, the potential ability of currently normal (healthy) objects to move to the abnormal (cancer) cluster, and vice versa, is covered by BFPM. In this paper, the benchmark Iris and Pima datasets from the UCI repository [36] have been chosen to illustrate the idea. The datasets used in this experiment were normalized to the range $[0, 1]$. The accuracy of BFPM is compared with other methods; accuracy is mostly measured as the percentage of correctly labelled objects in classification problems, whereas in clustering problems it refers to the evaluation of the distance between the objects and the centers of the clusters, known as measuring the separation and compactness of objects with respect to prototypes [37].
Table II shows the accuracy obtained by BFPM in comparison with recent fuzzy and possibilistic methods: fuzzy k-means [38], Locally Adaptive Clustering (LAC) [38], Weighted Locally Adaptive Clustering (WLAC) [38], Fuzzy Weighted Locally Adaptive Clustering (FWLAC) [38], Collaborative Distributed Fuzzy C-Means (CDFCM) [39], Kernel-based Collaborative Distributed Fuzzy C-Means (KCDFCM) [39], Kernel-based Fuzzy C-Means and Fuzzy clustering (KFCMF) [39], Weighted Entropy-regularized Fuzzy C-Means (WEFCM) [39], and Adaptive Possibilistic C-Means (APCM) [40]. The results for BFPM were achieved with a fixed value of the fuzzification constant $m$.
Table III. Cluster validity indices and their desirable values; $u_{ij}$ is the membership value of object $o_j$ for cluster $c_i$, $d_{ij}$ is the distance between $o_j$ and the $i$-th prototype, and $e_i$ and $e_k$ are the average errors for clusters $c_i$ and $c_k$.
Validity Index | Suitable Value
Eq. (17) index | Maximum
Eq. (18) index | Minimum
DB, Eq. (19) | Minimum
CS, Eq. (20) | Minimum
G, Eq. (21) | Maximum
Table IV. Validity index values for BFPM with different fuzzification constants $m$ (indices listed in the order of Table III).
Validity Index | Dataset | Cluster No. | m=1.4 | m=1.6 | m=1.8 | m=2
Eq. (17) index | Iris | 3 | 0.761 | 0.744 | 0.739 | 0.742
Eq. (17) index | Pima | 2 | 0.758 | 0.660 | 0.574 | 0.573
Eq. (18) index | Iris | 3 | 0.332 | 0.271 | 0.233 | 0.204
Eq. (18) index | Pima | 2 | 0.301 | 0.301 | 0.296 | 0.268
DB | Iris | 3 | 0.300 | 0.249 | 0.232 | 0.225
DB | Pima | 2 | 2.970 | 1.950 | 1.742 | 1.668
CS | Iris | 3 | 5.300 | 7.880 | 10.050 | 12.140
CS | Pima | 2 | 1.440 | 1.830 | 2.100 | 2.230
G | Iris | 3 | 0.051 | 0.047 | 0.046 | 0.045
G | Pima | 2 | 0.054 | 0.036 | 0.032 | 0.031

According to the results, BFPM performs better than the other clustering methods, in addition to revealing the crucial objects and areas in each dataset by allowing objects to show their potential ability to participate in other clusters. This ability makes it possible to track objects' movements from one cluster to another. Fig. 5 and Fig. 6 depict the objects' memberships obtained by BFPM and fuzzy methods, respectively. The memberships are assigned with respect to two clusters: the current cluster, in which the objects are clustered, and the closest cluster. The horizontal axis presents objects, and the vertical axis shows the memberships assigned to objects. The upper points are the memberships assigned to objects with respect to the current cluster, and the lower points depict the objects' memberships with respect to the closest cluster. According to Fig. 5, objects can show their ability to participate in other clusters by obtaining higher memberships based on BFPM membership assignments, while in Fig. 6, which is obtained by fuzzy methods, objects cannot obtain high memberships for other clusters, as fuzzy methods are designed to keep objects completely separated. By comparing the figures, we can conclude that fuzzy methods aim only to cluster data objects, while BFPM not only aims to cluster objects with higher accuracy but also detects critical objects that trap learning methods in their learning procedures. The accuracy of clustering methods can also be evaluated using cluster validity functions or the comparison indices method [41]. Several validity functions have been introduced, some of which are presented in Table III. The DB index [42], shown by Eq. (19), evaluates the performance of a clustering method by maximizing the distance between prototypes on one side and minimizing the distance between each prototype and the objects belonging to the same cluster on the other. The CS index, presented by Eq. (20) [43], is very similar to the DB index, but it works on clusters with different densities. The G index, presented by Eq. (21) [44], evaluates the separation and compactness of data objects with respect to clusters. Separation of a fuzzy partition is defined to check how well the prototypes are separated; compactness of a fuzzy partition measures how close the data objects are within each cluster. The desirable value with respect to each validity index is presented in Table III. Table IV explores the values of the different validity functions for BFPM with respect to different values of the fuzzification constant $m$, for the Iris and Pima datasets with three and two clusters, respectively.
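A minimal sketch of the DB index described above (crisp labels are assumed for simplicity; function and variable names are ours):

```python
import numpy as np

def davies_bouldin(X, labels, V):
    """Davies-Bouldin index (smaller is better): for each cluster pair,
    compares within-cluster scatter to between-prototype separation."""
    c = len(V)
    # average distance from each cluster's points to its own prototype
    e = np.array([np.mean(np.linalg.norm(X[labels == i] - V[i], axis=1))
                  for i in range(c)])
    ratios = np.zeros(c)
    for i in range(c):
        # worst (largest) scatter-to-separation ratio against any other cluster
        ratios[i] = max((e[i] + e[k]) / np.linalg.norm(V[i] - V[k])
                        for k in range(c) if k != i)
    return float(np.mean(ratios))

# two tight, well-separated clusters yield a small DB value
X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
labels = np.array([0, 0, 1, 1])
V = np.array([[0., 0.5], [10., 10.5]])
print(davies_bouldin(X, labels, V))
```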
V. Discussions and Conclusions
This paper introduced the Bounded Fuzzy Possibilistic Method (BFPM) as a new methodology for membership assignments in partitioning methods. The paper provided mathematical proofs presenting BFPM as a superset of conventional methods in membership assignments. BFPM not only avoids decreasing the memberships assigned to objects with respect to all clusters, but also widens the search space for objects to participate in more, even all, clusters as partial or full members, presenting their potential ability to move from one cluster to another (mutation). BFPM facilitates the analysis of objects' movements for crucial systems, while conventional methods aim only to cluster objects, without paying attention to their movements. Tracking the behaviour of objects in advance leads to better performance and gives better insight into prevention and prediction strategies. The necessity of the proposed method has been demonstrated by several examples from geometry, set theory, and other domains.
References
[1] T.C. Havens, J.C. Bezdek, C. Leckie, L.O. Hall, M. Palaniswami, "Fuzzy c-means algorithms for very large data," IEEE Transactions on Fuzzy Systems, vol. 20, no. 6, pp. 1130–1146, 2012.
[2] R. Xu, D. Wunsch, Clustering, IEEE Press Series on Computational Intelligence, 2009.
[3] X. Wang, Y. Wang, L. Wang, "Improving fuzzy c-means clustering based on feature-weight learning," Pattern Recognition Letters, vol. 25, no. 10, pp. 1123–1132, 2004.
[4] W. Pedrycz, V. Loia, S. Senatore, "Fuzzy clustering with viewpoints," IEEE Transactions on Fuzzy Systems, vol. 18, no. 2, pp. 274–284, 2010.
[5] R. Xu, D.C. Wunsch, "Recent advances in cluster analysis," International Journal of Intelligent Computing and Cybernetics, vol. 1, no. 4, pp. 484–508, 2008.
[6] L.A. Zadeh, "Toward extended fuzzy logic: A first step," Fuzzy Sets and Systems, vol. 160, no. 21, pp. 3175–3181, 2009.
[7] S. Eschrich, J. Ke, L.O. Hall, D.B. Goldgof, "Fast accurate fuzzy clustering through data reduction," IEEE Transactions on Fuzzy Systems, vol. 11, no. 2, pp. 262–269, 2003.
[8] D.T. Anderson, J.C. Bezdek, M. Popescu, J.M. Keller, "Comparing fuzzy, probabilistic, and possibilistic partitions," IEEE Transactions on Fuzzy Systems, vol. 18, no. 5, pp. 906–918, 2010.
[9] R. Krishnapuram, J.M. Keller, "A possibilistic approach to clustering," IEEE Transactions on Fuzzy Systems, vol. 1, no. 2, pp. 98–110, 1993.
[10] M. Barni, V. Cappellini, A. Mecocci, "Comments on 'A possibilistic approach to clustering'," IEEE Transactions on Fuzzy Systems, vol. 4, no. 3, pp. 393–396, 1996.
[11] H. Yazdani, "Bounded Fuzzy Possibilistic on different search spaces," IEEE International Symposium on Computational Intelligence and Informatics, pp. 283–288, 2016.
[12] M.S. Yang, C.Y. Lai, "A robust automatic merging possibilistic clustering method," IEEE Transactions on Fuzzy Systems, vol. 19, no. 1, pp. 26–41, 2011.
[13] K. Honda, H. Ichihashi, A. Notsu, F. Masulli, S. Rovetta, "Several formulations for graded possibilistic approach to fuzzy clustering," International Conference on Rough Sets and Current Trends in Computing, Springer-Verlag, vol. 4259, pp. 939–948, 2006.
[14] F. Masulli, S. Rovetta, "Soft transition from probabilistic to possibilistic fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 14, no. 4, pp. 516–527, 2006.
[15] R.J. Hathaway, J.C. Bezdek, "Extending fuzzy and probabilistic clustering to very large data sets," Computational Statistics and Data Analysis, vol. 51, no. 1, pp. 215–234, 2006.
[16] C. Borgelt, Prototype-based Classification and Clustering, University of Magdeburg, 2005.
[17] H.C. Huang, Y.Y. Chuang, C.S. Chen, "Multiple kernel fuzzy clustering," IEEE Transactions on Fuzzy Systems, vol. 20, no. 1, pp. 120–134, 2011.
[18] D.J. Weller-Fahy, B.J. Borghetti, A.A. Sodemann, "A survey of distance and similarity measures used within network intrusion anomaly detection," IEEE Communications Surveys and Tutorials, vol. 17, no. 1, pp. 70–91, 2015.
[19] S.H. Cha, "Comprehensive survey on distance/similarity measures between probability density functions," International Journal of Mathematical Models and Methods in Applied Sciences, vol. 4, no. 1, pp. 300–307, 2007.
[20] H. Yazdani, D. Ortiz-Arroyo, H. Kwasnicka, "New similarity functions," IEEE International Conference on Artificial Intelligence and Pattern Recognition, pp. 47–52, 2016.
[21] D. Vanisri, "Spatial bias correction based on Gaussian kernel fuzzy c-means in clustering," International Journal of Computer Science and Network Solutions, vol. 2, no. 12, pp. 1–8, 2014.
[22] M.S. Yang, H.S. Tsai, "A Gaussian kernel-based fuzzy c-means algorithm with a spatial bias correction," Pattern Recognition Letters, vol. 29, no. 12, pp. 1713–1725, 2008.
[23] C. Hwang, F.C.H. Rhee, "Uncertain fuzzy clustering: Interval type-2 fuzzy approach to c-means," IEEE Transactions on Fuzzy Systems, vol. 15, no. 1, pp. 107–120, 2007.
[24] H. Yazdani, D. Ortiz-Arroyo, K. Choros, H. Kwasnicka, "Applying Bounded Fuzzy Possibilistic Method on critical objects," IEEE International Symposium on Computational Intelligence and Informatics, pp. 271–276, 2016.
[25] H. Yazdani, H. Kwasnicka, "Issues on critical objects in mining algorithms," IEEE International Conference on Artificial Intelligence and Pattern Recognition, pp. 53–58, 2016.
[26] A.E. Sariyuce, B. Gedik, G. Jacques-Silva, K.L. Wu, U.V. Catalyurek, "SONIC: streaming overlapping community detection," Data Mining and Knowledge Discovery, vol. 30, no. 4, pp. 819–847, 2016.
[27] J. Xie, S. Kelley, B. Szymanski, "Overlapping community detection in networks: the state-of-the-art and comparative study," ACM Computing Surveys, vol. 45, no. 4, pp. 1–43, 2013.
[28] G. Strang, Introduction to Linear Algebra, Wellesley-Cambridge Press, 2015.
[29] G. Chen, T.T. Pham, Introduction to Fuzzy Sets, Fuzzy Logic, and Fuzzy Control Systems, CRC Press, 2000.
[30] H. Yazdani, L. Cheng, D.C. Christiani, A. Yazdani, "Bounded Fuzzy Possibilistic Method reveals information about lung cancer through analysis of metabolomics," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 15, no. x, pp. xxxxx, 2018.
[31] H. Yazdani, D. Ortiz-Arroyo, K. Choros, H. Kwasnicka, "On high dimensional searching space and learning methods," in Data Science and Big Data: An Environment of Computational Intelligence, Springer-Verlag, pp. 29–48, 2016.
[32] H. Yazdani, K. Choros, "Intrusion detection and risk evaluation in online transactions using partitioning methods," International Conference on Multimedia and Network Information Systems, Springer, pp. 190–200, 2018.
[33] G.J. Klir, B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall PTR, 1995.
[34] L.A. Zadeh, "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems, vol. 100, no. 1, pp. 9–34, 1999.
[35] H. Yazdani, H. Kwasnicka, "Fuzzy classification method in credit risk," International Conference on Computer and Computational Intelligence, Springer, pp. 495–505, 2012.
[36] A. Asuncion, D. Newman, UCI Machine Learning Repository.
[37] I.J. Sledge, J.C. Bezdek, T.C. Havens, J.M. Keller, "Relational generalizations of cluster validity indices," IEEE Transactions on Fuzzy Systems, vol. 18, no. 4, pp. 771–786, 2010.
[38] H. Parvin, B. Minaei-Bidgoli, "A clustering ensemble framework based on selection of fuzzy weighted clusters in a locally adaptive clustering algorithm," Pattern Analysis and Applications, vol. 18, no. 1, pp. 87–112, 2015.
[39] J. Zhou, C.L.P. Chen, L. Chen, H.X. Li, "A collaborative fuzzy clustering algorithm in distributed network environments," IEEE Transactions on Fuzzy Systems, vol. 22, no. 6, pp. 1443–1456, 2014.
[40] S.D. Xenaki, K.D. Koutroumbas, A.A. Rontogiannis, "A novel adaptive possibilistic clustering algorithm," IEEE Transactions on Fuzzy Systems, vol. 24, no. 4, pp. 791–810, 2016.
[41] X.L. Xie, G. Beni, "A validity measure for fuzzy clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 841–847, 1991.
[42] D.L. Davies, D.W. Bouldin, "A cluster separation measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, pp. 224–227, 1979.
[43] C. Chou, M. Su, E. Lai, "A new cluster validity measure and its application to image compression," Pattern Analysis and Applications, vol. 7, no. 2, pp. 205–220, 2004.
[44] M.R. Rezaee, B.P.F. Lelieveldt, J.H.C. Reiber, "A new cluster validity index for the fuzzy c-means," Pattern Recognition Letters, vol. 19, no. 3, pp. 237–246, 1998.
Hossein Yazdani is a PhD candidate at Wroclaw University of Science and Technology. He was a Manager of Foreign Affairs, DBA, Network Manager, and System Analyst at Sadad Informatics Corp., a subsidiary of Melli Bank of Iran. Currently, he cooperates with the Department of Information Systems, Faculty of Computer Science and Management, and the Faculty of Electronics at Wroclaw University of Science and Technology. His research interests include BFPM, critical objects, machine learning, artificial intelligence, dominant features, distributed networks, collaborative clustering, security, big data, bioinformatics, and optimization.