STAD Research Report 01/2015
Boosted-Oriented Probabilistic Smoothing-Spline Clustering of Series.
Fuzzy clustering methods allow objects to belong to several clusters simultaneously, with different degrees of membership. However, a factor that influences the performance of fuzzy algorithms is the value of the fuzzifier parameter. In this paper, we propose a fuzzy clustering procedure for data (time) series that does not depend on the definition of a fuzzifier parameter.
It comes from two approaches, theoretically motivated for the unsupervised and supervised classification cases, respectively. The first is the Probabilistic Distance (PD) clustering procedure. The second is the well known Boosting philosophy. Our idea is to adopt a boosting perspective for unsupervised learning problems; in particular, we deal with non-hierarchical clustering problems. The aim is to assign each instance (i.e. a series) of a data set to a cluster. We take the representative instance of a given cluster (i.e. the cluster center) as a target instance, a loss function as a synthetic index of the global performance, and the probability of each instance of belonging to a given cluster as the individual contribution of that instance to the overall solution. The global performance of the proposed method is investigated through various experiments.
Keywords: Fuzzy Clustering, Boosting, PD clustering
1 Introduction
We propose a fuzzy approach for clustering data (time) series. The goal of clustering is to discover groups such that objects within a cluster have high similarity among them while being dissimilar to objects in other clusters. Many clustering algorithms for time series have been introduced in the literature. Since clusters can formally be seen as subsets of the data set, one possible classification of clustering methods is according to whether the subsets are fuzzy (soft) or crisp (hard).
Let $S = \{s_1, \dots, s_n\}$ be a data set consisting of $n$ series and let $K$ be an integer, with $2 \le K < n$; the goal is to partition $S$ into $K$ groups.
Crisp clustering methods are based on classical set theory and require that each object of the data set belong to exactly one cluster.
This means partitioning the data into a specified number of mutually exclusive clusters $C_1, \dots, C_K$.
A hard partition of $S$ can be defined as a family of subsets $\{C_k\}_{k=1}^{K}$ that satisfies the following properties (Bezdek, 1981):

$$\bigcup_{k=1}^{K} C_k = S, \qquad C_j \cap C_k = \emptyset \ (j \neq k), \qquad \emptyset \subset C_k \subset S \ (k = 1, \dots, K).$$
Let $u_{ik}$ be the membership function and let $U = [u_{ik}]$ be the $n \times K$ partition matrix. The elements of $U$ must satisfy the following conditions:

$$u_{ik} \in \{0, 1\}, \qquad \sum_{k=1}^{K} u_{ik} = 1 \ \forall i, \qquad 0 < \sum_{i=1}^{n} u_{ik} < n \ \forall k.$$
The $k$th column of $U$ contains the values of the membership function of the subset $C_k$ of $S$.
In a hard partition, $u_{ik}$ is the indicator function: $u_{ik} = 1$ if $s_i \in C_k$ and $u_{ik} = 0$ otherwise.
Following Bezdek (1981), the hard partitioning space is thus defined by

$$M_h = \left\{ U \in \{0,1\}^{n \times K} : \sum_{k=1}^{K} u_{ik} = 1 \ \forall i; \ 0 < \sum_{i=1}^{n} u_{ik} < n \ \forall k \right\},$$

$M_h$ being the space of all possible hard partition matrices for $S$.
Generalizing the crisp partition, a fuzzy partition of $S$ allows the elements of the partition matrix to bear real values in $[0, 1]$ (Kaufman and Rousseeuw, 2009).
The idea of the fuzzy set was conceived by Zadeh (1965). Fuzzy clustering methods do not assign objects to a single cluster but suggest degrees of membership in each group. The larger the membership value of a given object with respect to a cluster, the larger the probability of that object being assigned to that cluster.
Analogously to the crisp conditions, Ruspini (1970) defined the following fuzzy partition properties:

$$u_{ik} \in [0, 1], \qquad \sum_{k=1}^{K} u_{ik} = 1 \ \forall i, \qquad 0 < \sum_{i=1}^{n} u_{ik} < n \ \forall k.$$

The fuzzy partitioning space is the set

$$M_f = \left\{ U \in [0,1]^{n \times K} : \sum_{k=1}^{K} u_{ik} = 1 \ \forall i; \ 0 < \sum_{i=1}^{n} u_{ik} < n \ \forall k \right\}.$$
Several clustering criteria have been proposed to identify fuzzy partitions in $M_f$. Among these proposals, the most popular method is fuzzy $c$-means.
Proposed by Dunn (1973) and developed by Bezdek (1981), fuzzy $c$-means considers each data point as a possible member of multiple clusters, with a membership value. The algorithm is based on the minimization of the following objective function:

$$J_m = \sum_{i=1}^{n} \sum_{k=1}^{K} u_{ik}^{m} \, \lVert x_i - v_k \rVert^2, \qquad m > 1. \tag{1}$$
In equation (1), $m$ is any real number greater than 1, $u_{ik}$ is the degree of membership of $x_i$ in the $k$th cluster, and $\lVert \cdot \rVert$ is any norm expressing the similarity between a measured data point and the center $v_k$. The parameter $m$ is called the fuzzifier or weighting coefficient. To perform fuzzy partitioning, the number of clusters $K$ and the weighting coefficient $m$ have to be chosen. The procedure is carried out through an iterative optimization of the objective function shown above, with the membership values and the cluster centers updated by solving:

$$u_{ik} = \frac{1}{\sum_{j=1}^{K} \left( \dfrac{\lVert x_i - v_k \rVert}{\lVert x_i - v_j \rVert} \right)^{\frac{2}{m-1}}}, \qquad v_k = \frac{\sum_{i=1}^{n} u_{ik}^{m} \, x_i}{\sum_{i=1}^{n} u_{ik}^{m}}.$$
The loop stops when

$$\max_{i,k} \left| u_{ik}^{(t+1)} - u_{ik}^{(t)} \right| < \varepsilon,$$

where $\varepsilon$ is a small number that stops the iterative procedure and $t$ indicates the iteration step. The algorithm is synthesized in Box 1.
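For concreteness, the iteration just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's Box 1: the function name, the Euclidean metric and the convergence test on the membership matrix are our own choices.

```python
import numpy as np

def fuzzy_c_means(X, K, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means sketch: X is (n, p), K clusters, fuzzifier m > 1."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, K))
    U /= U.sum(axis=1, keepdims=True)              # rows sum to one (Ruspini condition)
    for _ in range(max_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]   # centers: weighted means
        # squared Euclidean distance of each point to each center
        D2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        D2 = np.maximum(D2, 1e-12)
        # membership update, equivalent to u_ik = 1 / sum_j (d_ik/d_ij)^(2/(m-1))
        tmp = D2 ** (-1.0 / (m - 1.0))
        U_new = tmp / tmp.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:          # stop when memberships settle
            return U_new, V
        U = U_new
    return U, V
```

With m close to 1 the memberships returned by this sketch become nearly crisp, mimicking k-means; with very large m they approach the uniform value 1/K, which is exactly the sensitivity to the fuzzifier discussed next.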
One of the limitations of fuzzy $c$-means clustering is the choice of the fuzzifier $m$. A large fuzzifier value tends to mask outliers in data sets, i.e. the larger $m$, the more the clusters share their objects, and vice versa. For $m \to \infty$ all data objects have identical membership in each cluster; for $m \to 1$, the method becomes equivalent to $k$-means. The role of the weighting exponent has been well investigated in the literature.
Pal and Bezdek (1995) suggested taking $m \in [1.5, 2.5]$.
Dembélé and Kastner (2003) obtain the fuzzifier with an empirical method, calculating the coefficient of variation of a function of the distances between all objects of the entire data set.
Yu et al. (2004) proposed a theoretical upper bound for $m$ that can prevent the sample mean from being the unique optimizer of the fuzzy $c$-means objective function.
Futschik and Carlisle (2005) search for a minimal fuzzifier value for which the cluster analysis of a randomized data set produces no meaningful results, by comparing a modified partition coefficient for different values of both parameters.
Schwämmle and Jensen (2010) showed that the optimal fuzzifier often takes values far from its frequently used default. The authors introduced a method to determine the value of the fuzzifier without using the current working data set: for high-dimensional data sets, the fuzzifier value depends directly on the dimension of the data set and its number of objects, while for low-dimensional data sets with a small number of objects the authors reduce the search space in which to find the optimal value. According to the authors, this improvement helps in choosing the right parameter and saves computational time when processing large data sets.
On the basis of a robust selection analysis of the algorithm, Wu (2012) found that a large value of $m$ makes the fuzzy $c$-means algorithm more robust to noise and outliers. The author suggested using a fuzzifier value ranging between 1.5 and 4.
Since the weighting coefficient determines the fuzziness of the resulting classification, we propose a method that is independent of the choice of the fuzzifier. It comes from two approaches, theoretically motivated for the unsupervised and supervised classification cases, respectively. The first is the Probabilistic Distance (PD) clustering procedure defined by Ben-Israel and Iyigun (2008). The second is the well known Boosting philosophy. From the PD approach we took the idea of determining the probability of each series belonging to any of the clusters. As this probability is unequivocally related to the distance of each series from the centers, there are no degrees of freedom in determining the membership matrix. From the Boosting approach (Freund and Schapire, 1997) we took the idea of weighting each series according to some measure of badness of fit, in order to define an unsupervised learning process based on a weighted re-sampling procedure. As a learner for the boosting procedure we use a smoothing spline approach. Among the smoothing spline techniques, we chose the penalized spline approach (Eilers and Marx, 1996) because of its flexibility and computational efficiency. This paper is organized as follows: Section 2 contains our proposal, Section 3 reports the results of some experimental evaluation studies, and some concluding remarks are presented in Section 4.
2 Boosted-oriented probabilistic clustering of time series
2.1 The key idea
The boosting approach is based on the idea that a supervised learning algorithm (a weak learner) improves its performance by learning from its errors (Freund and Schapire, 1997). It is an ensemble method that works with a resampling procedure (Dietterich, 2000). The general idea is to run the supervised learning algorithm several times, assigning to each instance of the data set a weight that governs the resampling (with replacement) process during the iterations. The weights are set in such a way that misclassified instances get a larger weight than well-classified instances. In this way, the probability of being included in the sample during the iterations is higher for those instances for which the supervised learning algorithm returns a wrong classification. Boosting algorithms exist for both classification and regression problems (Freund and Schapire, 1997; Dietterich, 2000; Eibl and Pfeiffer, 2002; Gey and Poggi, 2006). In both cases the weighting system combines a synthetic index of the performance of the supervised learning algorithm with some index that represents the individual contribution of a given instance to the overall solution. Our idea is to adapt the boosting philosophy to unsupervised learning problems, especially to non-hierarchical cluster analysis. In such a case no target variable exists, but since the goal is to assign each instance (i.e. a series) to a cluster, we have a target instance. In other words, we switch from a target-variable to a target-instance point of view. We take each cluster center as a representative instance for each series, and we assume as a synthetic index of the global performance a loss function to be minimized. The probability of each instance belonging to a given cluster is assumed to be the individual contribution of that instance to the overall solution.
In contrast to the boosting approach, the larger the probability of a given series being a member of a given cluster, the larger the weight of that series in the resampling process. As a learner, either a smoothing spline technique or a regression model can be used. We decided to use a penalized spline smoother because of its flexibility and computational efficiency. To define the probability of each series belonging to a given cluster we use the PD clustering approach (Ben-Israel and Iyigun, 2008). This approach allows us to define a suitable loss function and, at the same time, to propose a fuzzy clustering procedure that does not depend on the definition of a fuzzifier parameter.
2.2 P-splines in a nutshell
P-splines were introduced by Eilers and Marx (1996) as flexible smoothing procedures combining B-splines (de Boor, 1978) and difference penalties. Suppose we observe a set of data $(x_j, y_j)$, $j = 1, \dots, m$, where $x$ indicates the independent variable (e.g. time) and $y$ the dependent one. We want to describe the available measurements through an appropriate smooth function. Denote by $B_l(x; q)$ the value of the $l$th B-spline of degree $q$ defined on a domain spanned by equidistant knots (in the case of not equally spaced knots our reasoning can be generalized using divided differences). A curve that fits the data is given by $\hat{y}(x) = \sum_{l} \hat{a}_l B_l(x; q)$, where $\hat{a}_l$ (with $l = 1, \dots, L$) are the estimated B-spline coefficients. Unfortunately, the curve obtained by minimizing the residual sum of squares w.r.t. $a$ shows more variation than is justified by the data if a dense set of spline functions is used. To avoid this overfitting tendency it is possible to estimate $a$ using a generous number of bases in a penalized regression framework:

$$\min_{a} \ \lVert y - Ba \rVert^2 + \lambda \lVert D_d a \rVert^2, \tag{4}$$

where $B$ is the B-spline basis matrix, $D_d$ is a $d$th order difference penalty matrix and $\lambda \geq 0$ is a smoothing parameter. Second or third order difference penalties are suitable in many applications.
The optimal spline coefficients follow from (4) as:

$$\hat{a} = (B^{\top} B + \lambda D_d^{\top} D_d)^{-1} B^{\top} y.$$
The smoothing parameter $\lambda$ controls the trade-off between smoothness and goodness of fit: for $\lambda \to \infty$ the final estimate tends to a very smooth (polynomial) trend, while for $\lambda \to 0$ the smoother tends to interpolate the observations.
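A compact implementation makes the ingredients of this estimator explicit. The sketch below uses our own helper names and assumes equally spaced knots, cubic B-splines built with the de Boor recursion, and a second-order difference penalty; it solves the penalized normal equations directly.

```python
import numpy as np

def bspline_basis(x, xl, xr, nseg, deg=3):
    """Dense B-spline basis on equally spaced knots via the de Boor recursion."""
    dx = (xr - xl) / nseg
    knots = xl + dx * np.arange(-deg, nseg + deg + 1)
    x = np.clip(x, xl, xr - 1e-9)          # keep the right endpoint inside the domain
    # degree-0 basis: indicator of each knot interval
    B = ((x[:, None] >= knots[None, :-1]) & (x[:, None] < knots[None, 1:])).astype(float)
    for d in range(1, deg + 1):
        left = (x[:, None] - knots[None, :-(d + 1)]) / (d * dx) * B[:, :-1]
        right = (knots[None, (d + 1):] - x[:, None]) / (d * dx) * B[:, 1:]
        B = left + right
    return B

def pspline_fit(x, y, nseg=20, deg=3, pord=2, lam=1.0):
    """P-spline: solve (B'B + lam D'D) a = B'y and return the fitted curve."""
    B = bspline_basis(x, x.min(), x.max(), nseg, deg)
    D = np.diff(np.eye(B.shape[1]), n=pord, axis=0)   # difference penalty matrix
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ a, a
```

Increasing `lam` pulls the fit toward a very smooth trend; decreasing it lets the spline follow the data more closely, mirroring the two limits described above.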
Popular methods for smoothing parameter selection are the Akaike Information Criterion (AIC) and Cross Validation (CV). AIC estimates the predictive log likelihood by correcting the log likelihood of the fitted model ($\log L$) by its effective dimension (ED): $\mathrm{AIC} = -2 \log L + 2\,\mathrm{ED}$. Following Hastie and Tibshirani (1990) we can compute the effective dimension for the P-spline smoother as $\mathrm{ED} = \mathrm{tr}(H)$, with $H = B (B^{\top} B + \lambda D_d^{\top} D_d)^{-1} B^{\top}$, and, for Gaussian errors,

$$-2 \log L = m \log \hat{\sigma}^{2} + m \log 2\pi + m,$$

where $\hat{\sigma}^{2} = \frac{1}{m}\sum_{j}(y_j - \hat{y}_j)^2$ is the maximum likelihood estimate of $\sigma^{2}$. But $m \log 2\pi + m$ does not depend on $\lambda$, so that term is a constant. Hence the AIC can be written as

$$\mathrm{AIC}(\lambda) = m \log \hat{\sigma}^{2} + 2\,\mathrm{tr}(H).$$

The optimal parameter is the one that minimizes the value of $\mathrm{AIC}(\lambda)$.
Leave-one-out cross validation (LOO-CV) chooses the value of $\lambda$ that minimizes

$$\mathrm{CV}(\lambda) = \frac{1}{m} \sum_{j=1}^{m} \left( \frac{y_j - \hat{y}_j}{1 - h_{jj}} \right)^{2},$$

where $h_{jj}$ is the $j$th diagonal entry of $H$.
Analogous to CV is the generalized cross validation measure (Wahba, 1990)

$$\mathrm{GCV}(\lambda) = \frac{\sum_{j=1}^{m} (y_j - \hat{y}_j)^2}{\left( m - \mathrm{tr}(H) \right)^{2}},$$

where $H = B (B^{\top} B + \lambda D_d^{\top} D_d)^{-1} B^{\top}$. In analogy with cross validation, we select the smoothing parameter that minimizes $\mathrm{GCV}(\lambda)$.
All these selection procedures suffer from two drawbacks: 1) they require the computation of the effective model dimension, which can become time consuming for long data series, and 2) they are sensitive to serial correlation in the noise around the trend. The L-curve (Hansen, 1992) and the derived V-curve criterion (Frasso and Eilers, 2015) overcome these hitches. The L-curve is a parameterized curve comparing the two ingredients of every regularization or smoothing procedure: badness of fit and roughness of the final estimate. For a P-spline smoother, the following quantities can be defined:

$$\psi(\lambda) = \log \lVert y - B\hat{a} \rVert^{2}, \qquad \phi(\lambda) = \log \lVert D_d \hat{a} \rVert^{2}.$$

The L-curve is obtained by plotting $\phi(\lambda)$ against $\psi(\lambda)$. This plot typically shows an L-shaped curve, and the optimal amount of smoothing is located in the corner of the “L” by maximizing a local curvature measure. The V-curve criterion offers a valuable simplification of the search by requiring the minimization of the Euclidean distance between adjacent points on the L-curve; moreover, like plots of AIC or GCV, the graphical presentation of the V-curve has an axis for $\lambda$ that can be read off. The V-curve criterion is computed as follows:

$$V(\lambda) = \sqrt{ \left( \frac{d\psi}{d\lambda} \right)^{2} + \left( \frac{d\phi}{d\lambda} \right)^{2} }. \tag{6}$$
2.3 PD clustering approach
Let $S = \{s_1, \dots, s_n\}$ be a data set consisting of $n$ series and let $C_1, \dots, C_K$ be clusters, with $2 \le K < n$, partitioning $S$. We suppose that each series has the same time domain, of length $T$.
With each cluster $C_k$ is associated a cluster center $c_k$, with $k = 1, \dots, K$.
Let $d_{ik} = d(s_i, c_k)$ be the distance of the $i$th series from the $k$th cluster center.
Let $p_{ik}$ be the probability of the $i$th series belonging to the $k$th cluster.
For each series $s_i$ and each cluster $C_k$, we assume the following relation between probabilities and distances (Ben-Israel and Iyigun, 2008): the product $p_{ik} d_{ik}$ is a constant depending only on $s_i$, which yields

$$p_{ik} = \frac{\prod_{j \neq k} d_{ij}}{\sum_{l=1}^{K} \prod_{j \neq l} d_{ij}}. \tag{8}$$
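In code, this inverse-distance relation is essentially one line. A sketch (the clamping of zero distances is our own safeguard):

```python
import numpy as np

def pd_probabilities(D):
    """PD-clustering memberships from an (n, K) matrix of distances d_ik.

    Enforces p_ik * d_ik = constant over k for each series i, so the
    probabilities are fully determined by the distances (no fuzzifier)."""
    inv = 1.0 / np.maximum(np.asarray(D, float), 1e-12)
    return inv / inv.sum(axis=1, keepdims=True)
```

For instance, distances (1, 3) to two centers give probabilities (0.75, 0.25), and both products p·d equal 0.75.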
2.4 The algorithm
Since the probabilities defined in equation (8) sum to one over the clusters, we use the quantity $1 - \max_k p_{ik}$ as a measure of how well the $i$th series complies with the overall solution of the clustering procedure. It is easy to note that this quantity equals 0 if the series coincides with a cluster center, and equals $(K-1)/K$ if there is maximum uncertainty in assigning the series to any cluster center. For this reason we use as a measure of the cluster compliance of the solution the quantity

$$BC = \sum_{i=1}^{n} \left( 1 - \max_{k} p_{ik} \right). \tag{9}$$

Equation (9) is a synthetic uncertainty clustering measure: the lower its value, the better the solution. It equals zero for a perfect solution (i.e., each series has probability equal to one of belonging to some cluster center). The maximum possible value of equation (9) is $n(K-1)/K$, attained when each series has probability equal to $1/K$ of belonging to each of the clusters. The index allows one to compare overall clustering solutions when the number of clusters differs.
From equation (9) we define the following loss function to be minimized:
Let $\alpha_{ik}$ be the contribution of the $i$th series to generating the $k$th cluster.
Let $Z$ be an indicator matrix whose entries $z_{ik}$ are equal to 1 if the $i$th series is assigned to the $k$th cluster and 0 otherwise.
We define the weight of the $i$th series for the $k$th cluster as
For each cluster $C_k$, the weights are first normalized in this way:
then within each cluster we set
For each cluster $C_k$, a sample is extracted with replacement from $S$, taking into account equation (11). The cluster centers $c_k$, $k = 1, \dots, K$, are then estimated by using a P-spline smoother. These centers are used to compute the membership probabilities according to equation (8) for the next iteration. The cluster centers are thus re-estimated and adaptively updated with an optimal spline smoother.
The choice of the metric depends on the nature of the series. The optimal P-spline smoothing procedure frames our approach within the class of model-based clustering techniques, but any suitable smoother can be adopted. Box 2 shows the pseudo-code of our Boosted-Oriented Smoothing Spline Probabilistic Clustering algorithm.
The procedure described in Box 2 is repeated a number of times because the final solution is sensitive to the initial random choice of the cluster centers.
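One iteration of the procedure can be sketched as follows. This is a hedged reading of Box 2, not the authors' exact pseudo-code: the resampling size and the use of the resample mean as the pre-smoothing center estimate are our own simplifications, and any smoother (the paper uses an optimal P-spline) can be passed in as `smooth`.

```python
import numpy as np

def boosted_pd_step(X, centers, smooth, rng):
    """One boosting iteration: PD probabilities -> weighted resampling
    per cluster -> re-estimated (smoothed) cluster centers."""
    X = np.asarray(X, float)
    n, K = X.shape[0], len(centers)
    # distance of every series from every current center (Euclidean here)
    D = np.array([[np.linalg.norm(x - c) for c in centers] for x in X])
    inv = 1.0 / np.maximum(D, 1e-12)
    P = inv / inv.sum(axis=1, keepdims=True)          # eq. (8)
    new_centers = []
    for k in range(K):
        w = P[:, k] / P[:, k].sum()                   # normalized weights
        idx = rng.choice(n, size=n, replace=True, p=w)
        new_centers.append(smooth(X[idx].mean(axis=0)))
    loss = np.sum(1.0 - P.max(axis=1))                # uncertainty measure, eq. (9)
    return np.array(new_centers), P, loss
```

Higher-probability series are resampled more often, so each center is progressively pulled toward the series it best represents; iterating this step with a P-spline smoother in place of the identity gives the behavior described above.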
3 Experimental evaluation
To evaluate the performance of the proposed algorithm, we conducted three experiments. In estimating the optimal P-spline smoother, we always used the V-curve criterion as in equation (6) to select the optimal parameter, and we used a number of interior knots chosen as a function of the length of the time domain, as suggested by Ruppert (2002). Moreover, we need a measure of the goodness of fuzzy partitions. To this aim, we decided to use the fuzzy variant of the Rand index proposed by Hüllermeier et al. (2012). This index is defined as the complement to 1 of the normalized sum of degrees of discordance. The Rand index developed by Rand (1971) is an external evaluation measure used to compare clustering partitions of a set of data. The problem with evaluating the solution of a fuzzy clustering algorithm through the Rand index is that it requires converting the soft partition into a hard one, losing information.
As shown in Campello (2007), different fuzzy partitions describing different structures in the data may lead to the same crisp partition and thus to the same Rand index value. For this reason the Rand index is not appropriate for fuzzy clustering assessment.
To overcome this problem Hüllermeier et al. (2012) proposed a generalization of the Rand index for fuzzy partitions. We recall some essential background.
Let $P$ be a fuzzy partition of the data set $S$; each element $s_i$ is characterized by its membership vector

$$P(s_i) = (p_{i1}, \dots, p_{iK}), \tag{12}$$

where $p_{ik}$ is the degree of membership of the $i$th series in the $k$th cluster. Given any pair $(s_i, s_j)$, Hüllermeier et al. (2012) defined a fuzzy equivalence relation on $S$ in terms of a similarity measure on the associated membership vectors (12). Generally, this relation is of the form

$$E_P(s_i, s_j) = 1 - \lVert P(s_i) - P(s_j) \rVert,$$

where $\lVert \cdot \rVert$ represents the $L_1$ norm divided by 2, which constitutes a proper metric on the membership vectors and yields values in $[0, 1]$. $E_P(s_i, s_j)$ is equal to 1 if and only if $s_i$ and $s_j$ have the same membership pattern, and it is smaller than 1 otherwise. The basic idea of the authors in reaching the fuzzy extension of the Rand index was to generalize the concept of concordance in the following way.
Given two fuzzy partitions $P$ and $Q$, and considering a pair $(s_i, s_j)$ as concordant if $P$ and $Q$ agree on its degree of equivalence, they defined the degree of concordance as

$$\mathrm{conc}(s_i, s_j) = 1 - \left| E_P(s_i, s_j) - E_Q(s_i, s_j) \right|,$$

and the degree of discordance as

$$\mathrm{disc}(s_i, s_j) = \left| E_P(s_i, s_j) - E_Q(s_i, s_j) \right|.$$

Finally, the distance measure proposed by Hüllermeier et al. (2012) is defined as the normalized sum of degrees of discordance:

$$d(P, Q) = \frac{\sum_{i < j} \mathrm{disc}(s_i, s_j)}{n(n-1)/2}.$$

The direct generalization of the Rand index corresponds to the normalized degree of concordance and is equal to

$$R_E(P, Q) = 1 - d(P, Q),$$

which reduces to the original Rand index when the partitions $P$ and $Q$ are non-fuzzy.
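The whole construction fits in a few lines. A sketch, assuming (as one common choice) the L1 norm divided by 2 as the distance between membership vectors:

```python
import numpy as np

def fuzzy_rand(U, V):
    """Hüllermeier et al. (2012) fuzzy Rand index between two fuzzy
    partitions given as (n, K) membership matrices with rows summing to one.

    E(i, j) = 1 - 0.5 * ||u_i - u_j||_1 is the fuzzy equivalence relation;
    the index is one minus the mean pairwise discordance |E_P - E_Q|."""
    U, V = np.asarray(U, float), np.asarray(V, float)
    EU = 1.0 - 0.5 * np.abs(U[:, None, :] - U[None, :, :]).sum(axis=2)
    EV = 1.0 - 0.5 * np.abs(V[:, None, :] - V[None, :, :]).sum(axis=2)
    iu = np.triu_indices(U.shape[0], k=1)
    return 1.0 - np.abs(EU - EV)[iu].mean()
```

For crisp (one-hot) membership matrices the value coincides with the classical Rand index, as noted in the text.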
As the true fuzzy partition, we always computed the true cluster centers with an optimal P-spline smoother, and then we computed the true probabilities by applying equation (8).
3.1 Simulated data
As a first experiment, we generated clusters of numerical series at equally spaced time points, as described in Coffey et al. (2014). Distinct cluster-specific models were used (the subscript $i$ refers to the series, the subscript $t$ to the time domain); their equations and parameter settings are those of Coffey et al. (2014), with the error term following an autoregressive model of order 1.
Cluster means were chosen to reflect the situation where some series show little variation in value over time (as given by cluster 3) and other series have a distinct signal over time. Cluster sizes differed across clusters. The data set is plotted in Fig. 1.
Figure 1 about here.
Given the nature of the simulated series, we are interested in the similarity of the shapes of the series. For this reason the chosen metric was the Penrose shape distance (Penrose, 1952), defined as

$$d_{ij}^{2} = Q_{ij}^{2} - C_{ij}^{2}, \tag{13}$$

where $Q_{ij}^{2} = \frac{1}{T} \sum_{t=1}^{T} (x_{it} - x_{jt})^{2}$ is the squared average Euclidean distance coefficient and $C_{ij}^{2} = \left( \frac{1}{T} \sum_{t=1}^{T} (x_{it} - x_{jt}) \right)^{2}$ is the squared average difference.
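Up to Penrose's original scaling constants, the squared shape distance is the variance of the coordinate-wise differences between the two series, which gives a one-line implementation (a sketch; the function name is ours):

```python
import numpy as np

def penrose_shape(x, y):
    """Penrose (1952) shape distance between two equal-length series:
    sqrt(mean squared difference minus squared mean difference)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2))
```

A series and any vertically shifted copy of it are at shape distance zero, which is exactly why this metric suits the simulation: it compares shapes, not levels.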
We performed five analyses with increasing numbers of boosting iterations, in all cases with the same number of random starting points. Figure 2 shows the behavior of the BC function as defined in equation (9) during the boosting iterations: the values appear to be non-increasing as the number of iterations increases.
Figure 2 about here.
All the solutions return in fact the same results in terms of estimated centers: for example, figure 3 shows the estimated cluster centers for each cluster as returned by the first analysis.
Figure 3 about here.
For this data set, using the Penrose shape distance, the Fuzzy Rand Index was computed for the solutions obtained with the different numbers of boosting iterations. Even if the solutions in terms of "hard" clustering are the same, the Fuzzy Rand Index values indicate that the partitions returned by the proposed algorithm are really close to the true one.
3.2 Synthetic data set
The Synthetic.tseries data set is freely available from the TSclust R package (Montero and Vilar, 2014).
It consists of three partial realizations of each of six first-order autoregressive models. Figure 4 shows the six groups of series separately.
Figure 4 about here.
Subplot (a) shows an AR(1) process with moderate autocorrelation. Subplot (b) contains series from a bilinear process with an approximately quadratic conditional mean. Subplot (c) is formed by an exponential autoregressive model with a more complex non-linear structure. Subplot (d) shows a self-exciting threshold autoregressive model with a relatively strong non-linearity. Subplot (e) contains series generated by a general non-linear autoregressive model, and subplot (f) shows a smooth transition autoregressive model presenting a weak non-linear structure. As we did not generate these series, we do not report the complete simulation setting; for more details about the generating models we refer to Montero and Vilar (2014), p. 24.
Assuming that the aim of the cluster analysis is to discover the similarity between the underlying models, the "true" cluster solution is given by the six clusters grouping the three series from the same generating model. Given the nature of the data set, we use the periodogram-based distance measure proposed by Caiado et al. (2006), which assesses the dissimilarity between the spectral representations of the time series.
Following also the suggestion of Montero and Vilar (2014), an interesting way to measure the dissimilarity between time series is the frequency domain approach. Power spectrum analysis is concerned with the distribution of the signal power in the frequency domain. The power spectral density is defined as the Fourier transform of the autocorrelation function of a series; it is a measure of the self-similarity of a signal with a delayed version of itself. The classic estimator of the power spectral density of a finite record is the periodogram, introduced by Schuster (1897). Let $x$ and $y$ be two time series of length $T$.
Let $\omega_j = 2\pi j / T$, with $j = 1, \dots, \lfloor T/2 \rfloor$, be the Fourier frequencies of the series.
Let $I_x(\omega_j)$ and $I_y(\omega_j)$ be the periodograms of series $x$ and $y$, respectively.
Finally, the dissimilarity measure between $x$ and $y$ proposed by Caiado et al. (2006) is defined as the Euclidean distance between the periodogram ordinates:

$$d_P(x, y) = \sqrt{ \sum_{j=1}^{\lfloor T/2 \rfloor} \left( I_x(\omega_j) - I_y(\omega_j) \right)^{2} }. \tag{14}$$
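Computing the measure is straightforward with an FFT. A sketch (our own helper names; the raw periodogram is taken at the positive Fourier frequencies after mean removal):

```python
import numpy as np

def periodogram(x):
    """Raw periodogram I(w_j) = |FFT(x)|^2 / T at the positive Fourier
    frequencies w_j = 2*pi*j/T, j = 1, ..., floor(T/2)."""
    x = np.asarray(x, float)
    T = len(x)
    f = np.fft.rfft(x - x.mean())
    return (np.abs(f) ** 2 / T)[1:]       # drop the zero frequency

def periodogram_distance(x, y):
    """Euclidean distance between periodogram ordinates (Caiado et al., 2006)."""
    return float(np.sqrt(np.sum((periodogram(x) - periodogram(y)) ** 2)))
```

Two series generated by the same process have similar periodograms, and hence a small distance, regardless of phase; this is why the measure suits the comparison of underlying models.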
We performed our analysis by setting boosting iterations and random starting points.
Table 1 shows the results of applying our algorithm to the Synthetic.tseries data set. Each series is assigned to the estimated cluster according to the values of the membership probability matrix. In order to obtain the Fuzzy Rand Index, we computed the true cluster centers with a periodogram modelled by a P-spline smoother, and then we computed the true probabilities by applying equation (8) with the periodogram-based distance of equation (14).
Even if the solution in terms of "hard" clustering seems excellent (only one series is misclassified), the Fuzzy Rand Index indicates that the partition returned by the algorithm is also really close to the true fuzzy one.
Table 1 about here.
3.3 A real data example
The "growth" data set is freely available from the internal repository of the R package fda (Ramsay et al., 2012). This data set comes from the Berkeley Growth Study (Tuddenham and Snyder, 1954). The left-hand side of figure 5 shows the growth curves of 93 children, 39 boys and 54 girls, from the age of one year to the age of 18. The right-hand side of the same figure displays the corresponding growth velocities.
Figure 5 about here.
In the framework of cluster analysis this data set has mainly been used for problems of clustering misaligned data (Sangalli et al., 2010a, 2010b). We performed two analyses with 800 boosting iterations and 10 random starting points. In the first partitioning analysis we used the Euclidean distance. The estimated centers of the growth curves and of the growth velocity curves are displayed in the left- and right-hand sides of figure 6, respectively. As can be noted from the left-hand side of the figure, the Euclidean distance discriminates between children growing more and children growing less. On average, as expected, boys grow more than girls.
Figure 6 about here.
Nevertheless, the Euclidean distance does not seem the right measure to use in such a case: researchers are probably interested in the shapes of both the growth and the growth velocity curves over the years. For this reason, we repeated the analysis using the Penrose shape distance as defined in equation (13). Figure 7 shows the estimated centers for both the growth and the growth velocity curves. The recognized centers are very similar to the ones obtained by Sangalli et al. (2010a, 2010b): firstly, as confirmed by comparing tables 4 and 5 with tables 2 and 3, there is a clear separation of boys and girls. Secondly, looking at the right-hand side of figure 7, boys start to grow later but seem to have a more pronounced growth spurt, as can be noticed from the higher peak around the age of 15 years.
Figure 7 about here.
Table 2 about here.
Table 3 about here.
Table 4 about here.
Table 5 about here.
The Fuzzy Rand index is equal to and by using the Euclidean distance for the partitions of growth and growth velocity curves respectively. The Fuzzy Rand index is equal to and by using the Penrose shape distance for the partitions of growth and growth velocity curves respectively.
4 Concluding remarks
In this paper we merged two approaches, theoretically motivated for the unsupervised and supervised classification cases respectively, to propose a new non-hierarchical fuzzy clustering algorithm.
From the Probabilistic Distance (PD) clustering approach (Ben-Israel and Iyigun, 2008) we took the idea of determining the probability of each series belonging to any of the clusters. As this probability is directly related to the distance of each series from the cluster centers, there are no degrees of freedom in determining the membership matrix.
From the Boosting approach (Freund and Schapire, 1997) we took the idea of weighting each series according to some measure of badness of fit, in order to define an unsupervised learning process based on a weighted resampling procedure. In contrast to the boosting approach, the higher the probability of a given instance being a member of a given cluster, the higher the weight of that instance in the resampling process. As a learner we can use any smoothing spline technique. We used a P-spline smoother (Eilers and Marx, 1996) because of its nice properties, and we chose the optimal smoothing parameter with the V-curve criterion defined by Frasso and Eilers (2015). In this way we defined a suitable loss function and, at the same time, proposed a fuzzy clustering procedure that does not depend on the definition of a fuzzifier parameter.
To evaluate the performance of our proposal, we conducted three experiments, one on simulated data and two on data sets known in the literature. The results show that our Boosted-oriented procedure has good performance in terms of data partitioning. Even if the final fuzzy partition is sensitive to the choice of the distance measure, it is independent of any other input parameter. This allows us to define a suitable true fuzzy partition with which to evaluate the final solution in terms of the Fuzzy Rand Index (Hüllermeier et al., 2012). The weighted re-sampling process allows each series to contribute to the composition of each cluster, while the adaptive estimation of the cluster centers allows the algorithm to learn from its progress.
It is worth noting that, as in any partitioning problem, the choice of the distance measure can influence the goodness of the partition.
References
-   Ben-Israel, A., and Iyigun, C. (2008). Probabilistic d-clustering. Journal of Classification, 25(1), 5-26.
-   Bezdek, J.C. (1981). Pattern Recognition with Fuzzy Objective Function Algorithms. Kluwer Academic Publishers, Norwell, MA, USA.
-   Caiado, J., Crato, N., and Peña, D. (2006). A periodogram-based metric for time series classification. Computational Statistics and Data Analysis 50(10), 2668-2684.
-   Campello, R.J. (2007). A fuzzy extension of the rand index and other related indexes for clustering and classification assessment. Pattern recognition letters, 28(7), 833-841.
-   Coffey, N., Hinde, J. and Holian E. (2014). Clustering longitudinal profiles using P-splines and mixed effects models applied to time-course gene expression data. Computational Statistics and Data Analysis, 71, 14-29.
-   de Boor, C. (1978). A Practical Guide to Splines. Springer-Verlag.
-   Dembélé, D., and Kastner, P. (2003). Fuzzy C-means method for clustering microarray data. Bioinformatics, 19(8), 973-980.
-   Dietterich, T.G. (2000). Ensemble methods in machine learning. In J. Kittler and F. Roli (Eds.), First International Workshop on Multiple Classifier Systems, Lecture Notes in Computer Science, 1-15. New York: Springer Verlag.
-   Dunn, J.C. (1973). A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters. Journal of Cybernetics 3, 32-57.
-   Eibl, G, and Pfeiffer, K.P. (2002). How to Make AdaBoost.M1 Work for Weak Base Classifiers by Changing Only One Line of the Code. Proceedings of the 13th European Conference on Machine Learning, 72-83, Springer.
-   Eilers, P.H.C., and Marx, B.D. (1996). Flexible smoothing with B-splines and penalties. Statistical Science, 1(2), 115-121.
-   Frasso, G., and Eilers P.H.C. (2015). L- and v-curves for optimal smoothing. Statistical Modelling, 15(1), 91-111.
-   Freund, Y., and Schapire, R.E. (1997). A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences , 55(1), 119-139.
-   Gey, S., and Poggi, J.M. (2006). Boosting and instability for regression trees. Computational statistics and data analysis, 50(2), 533-550.
-   Golay, X., Kollias, S., Stoll, G., Meier, D., Valavanis, A., and Boesiger, P. (1998). A new correlation-based fuzzy logic clustering algorithm for fMRI. Magnetic Resonance in Medicine, 40(2), 249-260.
-   Hansen, P.C. (1992). Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review, 34(4), 561-580.
-   Hastie, T. J., and Tibshirani, R. J. (1990). Generalized additive models. CRC Press.
-   Heiser, W.J. (2004). Geometrical representation of association between categories. Psychometrika, 69(4), 515-545.
-   Hüllermeier, E., Rifqi, M., Henzgen, S., and Senge, R. (2012). Comparing Fuzzy Partitions: A Generalization of the Rand Index and Related Measures. IEEE Transactions on Fuzzy Systems, 20(3), 546-556.
-   Kaufman, L., and Rousseeuw, P.J. (2009). Finding groups in data: An introduction to cluster analysis, John Wiley & Sons.
-   Montero, P., and Vilar, J.A. (2014). TSclust: An R package for time series clustering. Journal of Statistical Software, 62(1), 1-43.
-   Pal, N.R., and Bezdek, J.C. (1995). On cluster validity for the fuzzy c-means model. IEEE Transactions on Fuzzy Systems, 3(3), 370-379.
-   Penrose, L.S. (1952). Distance, size and shape. Annals of eugenics, 17(1), 337-343.
-   Rand, W.M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336), 846-850.
-   Ramsay, J, Wickham, H., Graves, S, and Hooker, G. (2012). fda: Functional Data Analysis. R package version 2.2.8. URL http://CRAN.R-project.org/package=fda.
-   Ruppert, D. (2002). Selecting the number of knots for penalized splines. Journal of computational and graphical statistics, 11(4).
-   Ruspini, E.H. (1970). Numerical methods for fuzzy clustering. Information Sciences, 2(3), 319-350.
-   Sangalli, L., Secchi, P., Vantini, S., and Vitelli, V. (2010a). Functional clustering and alignment methods with applications. Communications in Applied and Industrial Mathematics, 1(1), 205-224.
-   Sangalli, L., Secchi, P., Vantini, S., and Vitelli, V. (2010b). K-means alignment for curve clustering. Computational Statistics and Data Analysis, 54, 1219-1233.
-   Schuster, A. (1897). On lunar and solar periodicities of earthquakes. Proceedings of the Royal Society of London, 455-465.
-   Schwämmle V., and Jensen O.N. (2010). A simple and fast method to determine the parameters for fuzzy c-means cluster analysis. Bioinformatics, 26(22), 2841-2848.
-   Tuddenham, R.D., and Snyder, M.M. (1954). Physical growth of California boys and girls from birth to eighteen years. University of California Publication in Child Development, 1, 183-364.
-   Wahba, G. (1990). Spline models for observational data. SIAM, Philadelphia, PA.
-   Wu, K.L. (2012). Analysis of parameter selections for fuzzy c-means. Pattern recognition, 45(1), 407-415.
-   Yu J., Cheng Q., and Huang H. (2004). Analysis of the weighting exponent in the FCM. IEEE Transactions on Systems, Man Cybernetics part B, 34(1), 634-639.
-   Zadeh, L.A. (1965). Fuzzy sets. Information and control, 8(3), 338-353.