Supervised functional classification: A theoretical remark and some comparisons

Amparo Baíllo¹ and Antonio Cuevas
¹Corresponding author. Phone: +34 914978640, e-mail: amparo.baillo@uam.es
Departamento de Análisis Económico: Economía Cuantitativa, Univ. Autónoma de Madrid, Spain
Departamento de Matemáticas, Univ. Autónoma de Madrid, Spain
The research of both authors was partially supported by Spanish grant MTM2007-66632 and the IV PRICIT program titled Modelización Matemática y Simulación Numérica en Ciencia y Tecnología (SIMUMAT).
Abstract

The problem of supervised classification (or discrimination) with functional data is considered, with special interest in the popular k-nearest neighbors (k-NN) classifier.

First, relying on a recent result by Cérou and Guyader (2006), we prove the consistency of the k-NN classifier for functional data whose distribution belongs to a broad family of Gaussian processes with triangular covariance functions.

Second, on a more practical side, we check the behavior of the k-NN method when compared with a few other functional classifiers. This is carried out through a small simulation study and the analysis of several real functional data sets. While no global “uniform” winner emerges from these comparisons, the overall performance of the k-NN method, together with its sound intuitive motivation and relative simplicity, suggests that it could represent a reasonable benchmark for the classification problem with functional data.

Key words and phrases. Supervised classification, functional data, projections method, nearest neighbors, discriminant analysis.

AMS 2000 subject classification. Primary 62G07; secondary 62G20.


1. Introduction

1.1 Some background on supervised classification

Supervised classification is the modern name for one of the oldest statistical problems in experimental science: to decide whether an individual, from which just a random measurement X (with values in a “feature space” F endowed with a metric d) is known, belongs either to the population P_0 or to P_1. For example, in a medical problem P_0 and P_1 could correspond to the groups of “healthy” and “ill” individuals, respectively. The decision must be taken from the information provided by a “training sample” (X_1, Y_1), …, (X_n, Y_n), where the X_i, i = 1, …, n, are independent replications of X, measured on randomly chosen individuals, and the Y_i are the corresponding values of an indicator variable which takes values 0 or 1 according to the membership of the i-th individual to P_0 or P_1. Thus the mathematical problem is to find a “classifier” g: F → {0, 1} that minimizes the classification error P(g(X) ≠ Y).

The term “supervised” refers to the fact that the individuals in the training sample are supposed to be correctly classified, typically using “external” non-statistical procedures, so that they provide a reliable basis for the assignment of the new observation. This problem, also known as “statistical discrimination” or “pattern recognition”, is at least 70 years old. Its origin goes back to the classical work by Fisher (1936) where, in the p-variate case F = ℝ^p, a simple “linear classifier” of the form g(x) = 1_{a′x + b > 0} was introduced (1_A stands for the indicator function of a set A).

A deep, insightful perspective on the supervised classification problem can be found in the book by Devroye et al. (1996). Other useful textbooks are Hand (1997) and Hastie et al. (2001). All of them focus on the standard multivariate case F = ℝ^p.

It is not difficult to prove (e.g., Devroye et al., 1996, p. 11) that the optimal classification rule (often called the “Bayes rule”) is

g*(x) = 1_{η(x) > 1/2},     (1)

where η(x) = E(Y | X = x) = P(Y = 1 | X = x). Of course, since η is unknown, the exact expression of this rule is usually unavailable, and thus different procedures have been proposed in order to approximate it. In particular, it can be seen that Fisher’s linear rule is optimal provided that the conditional distributions of X | Y = 0 and X | Y = 1 are both normal with identical covariance matrices. While these conditions look quite restrictive, and it is straightforward to construct problems where any linear rule has a poor performance, Fisher’s classifier is still by far the most popular choice among users.

A simple non-parametric alternative is given by the k-nearest neighbors (k-NN) method, which is obtained by replacing the unknown regression function η in (1) with the regression estimator

η_n(x) = (1/k) Σ_{i=1}^{k} Y_(i),     (2)

where k is a given (integer) smoothing parameter and the Y_(i) are defined as follows: the pairs (X_i, Y_i) are re-indexed as (X_(1), Y_(1)), …, (X_(n), Y_(n)) so that the X_(i)’s are arranged in increasing distance from x, d(X_(1), x) ≤ … ≤ d(X_(n), x); thus η_n(x) is just the average of the Y_i’s corresponding to the k nearest neighbours of x. This leads to the k-NN classifier g_n(x) = 1_{η_n(x) > 1/2}.
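For concreteness, the estimator (2) and the associated classifier can be sketched in a few lines of code (a toy illustration with NumPy; the two-dimensional Gaussian samples and all parameter values are our own arbitrary choices, not taken from the text):

```python
import numpy as np

def knn_classify(x, X_train, y_train, k, dist):
    """k-NN classifier: average the labels of the k nearest
    training points (the estimator (2)) and threshold at 1/2."""
    d = np.array([dist(x, xi) for xi in X_train])
    idx = np.argsort(d)[:k]          # indices of the k nearest neighbours
    eta_hat = y_train[idx].mean()    # k-NN regression estimate of eta(x)
    return int(eta_hat > 0.5)

# Toy usage with the Euclidean metric on R^2
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 2))   # class 0 sample
X1 = rng.normal(3.0, 1.0, size=(50, 2))   # class 1 sample
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)
euclid = lambda a, b: np.linalg.norm(a - b)
print(knn_classify(np.array([2.8, 3.1]), X_train, y_train, k=5, dist=euclid))  # -> 1
```

The same function works verbatim for discretized curves: only the metric `dist` changes (e.g., a supremum or L2 distance between the evaluated trajectories).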

It is well known that, in addition to this simple classifier, several other alternative methods (kernel classifiers, neural networks, support vector machines, …) have been developed and extensively analyzed in recent years. However, when used in practice with real data sets, the performance of Fisher’s rule is often found to be very close to that of the best among all the main alternative procedures. On these grounds, Hand (2006) has argued, in a provocative paper, about the “illusion of progress” in supervised classification techniques. The central idea is that the study of new classification rules often fails to take into account the structure of real data sets and tends to overlook the fact that, in spite of its theoretical limitations, Fisher’s rule is quite satisfactory in many practical applications. This, together with its conceptual simplicity, explains its popularity over the years.

1.2 The purpose and structure of this paper

We are concerned here with the problem of (binary) supervised classification with functional data. That is, we consider the general framework indicated above, but we will assume throughout that the space F where the random elements X take values is a separable metric space of functions. For some theoretical results (Theorem 2) we will impose a more specific assumption by taking F as the space C[0, 1] of real continuous functions defined on a closed finite interval, with the usual supremum norm ‖x‖_∞ = sup_t |x(t)|.

The study of discrimination techniques with functional data is not as developed as the corresponding finite-dimensional theory but, clearly, is one of the most active research topics in the booming field of functional data analysis (FDA). Two well-known books including broad overviews of FDA with interesting examples are Ferraty and Vieu (2006) and Ramsay and Silverman (2005). Other recent more specific references will be mentioned below.

There are of course several important differences between the theory and practice of supervised classification for functional data and the classical development of this topic in the finite-dimensional case, where typically the data dimension p is much smaller than the sample size n (the “high-dimensional” case where p is “large”, and usually p > n, requires a separate treatment). A first important practical difference is the role of Fisher’s linear discriminant method as a “default” choice and a benchmark for comparisons. As we have mentioned, this holds for the finite-dimensional case with “small” values of p, but it is no longer true if functional (or high-dimensional) data are involved. To begin with, there is no obvious way to apply Fisher’s idea in practice in the infinite-dimensional case, as it requires inverting a linear operator, which is not in general a straightforward task in functional spaces; see, however, James and Hastie (2001) for an interesting adaptation of linear discrimination ideas to a functional setting. Then, the question is whether there exists any functional discriminant method, based on simple ideas, which could play a reference role similar to that of Fisher’s method in the finite-dimensional case. The results in this paper suggest (as a partial, not definitive, answer) that the k-NN method could represent a “default standard” in functional settings.

Another difference, particularly important from the theoretical point of view, concerns the universal consistency of the k-NN classifier. A classical result by Stone (1977) establishes that in the finite-dimensional case (with F = ℝ^p) the conditional error of the k-NN classifier,

L_n = P(g_n(X) ≠ Y | (X_1, Y_1), …, (X_n, Y_n)),     (3)

converges in probability (and also in mean) to that of the Bayes (optimal) rule g*, that is, L_n → L* := P(g*(X) ≠ Y), provided that k → ∞ and k/n → 0 as n → ∞. This result holds universally, that is, irrespective of the distribution of the pair (X, Y). The interesting point here is that this universal consistency result is no longer valid in the infinite-dimensional setting. As recently proved by Cérou and Guyader (2006), if the space F where X takes values is a general separable metric space, a non-trivial condition must be imposed on the distribution of (X, Y) in order to ensure the consistency of the k-NN classifier.

The aim of this paper is twofold, with a common focus on the k-NN classifier and in close relation with the two differences, mentioned above, between the classification problem in finite and infinite settings. First, on the theoretical side, we take a further look at the consistency theorem in Cérou and Guyader (2006) by giving concrete non-trivial examples where their consistency condition is fulfilled. Second, from a more practical viewpoint, we carry out numerical comparisons (based both on Monte Carlo studies and real data examples) to assess the performance of different functional classifiers, including k-NN.

This paper is organized as follows. In Section 2 the consistency of the functional k-NN classifier is established, as a consequence of Theorem 2 in Cérou and Guyader (2006), for a broad class of Gaussian processes. In Section 3 other functional classifiers recently considered in the literature are introduced and briefly discussed. They are all compared through a simulation study (based on two different models) as well as six real data examples, very much in the spirit of Hand’s (2006) paper, where the performance of the classical Fisher’s rule was assessed in terms of its discrimination capacity on several randomly chosen data sets.

2. On the consistency of the functional k-NN classifier

In the functional classification problem several auxiliary devices have been used to overcome the extra difficulty posed by the infinite-dimensional nature of the feature space. They include dimension reduction techniques (e.g., James and Hastie 2001, Preda et al. 2007), the use of random projections combined with data-depth measures (Cuevas et al. 2007) and different adaptations to the functional framework of several non-parametric and regression-based methods, including kernel classifiers (Abraham et al. 2006, Biau et al. 2005, Ferraty and Vieu 2003), reproducing kernel procedures (Preda 2007), logistic regression (Müller and Stadtmüller 2005) and multilayer perceptron techniques with functional inputs (Ferré and Villa 2006).

2.1 On the consistency of the functional k-NN classifier

The functional k-NN classifier also belongs to the class of procedures adapted from the usual non-parametric multivariate setup. Nevertheless, unlike most of the above-mentioned functional methodologies, the k-NN procedure works according to exactly the same principles in the finite- and infinite-dimensional cases. It is defined by g_n(x) = 1_{η_n(x) > 1/2}, where η_n is the k-NN regression estimator (2), whose definition is formally identical to that of the finite-dimensional case. The intuitive interpretation is also the same in both cases. No previous data manipulation, projection or dimension reduction technique is required in principle, apart from the discretization process necessarily involved in the practical handling of functional data. In the present section we offer some concrete examples where the k-NN functional classifier is weakly consistent. As we have mentioned in the previous section, this is a non-trivial point, since the k-NN classifier is no longer universally consistent in the case of infinite-dimensional inputs X.

Throughout this section the feature space where the variable X takes values is a separable metric space (F, d). We will denote by μ the distribution of X, defined by μ(B) = P(X ∈ B), where B ranges over the Borel sets of F.

Let us now consider the following regularity assumption on the regression function η(x) = E(Y | X = x):

(BC) Besicovitch condition:  lim_{δ→0} (1/μ(B(x, δ))) ∫_{B(x,δ)} η dμ = η(x),  for μ-a.e. x,

where B(x, δ) is the closed ball with center x and radius δ.

Under (BC), Cérou and Guyader (2006, Th. 2) obtain the following consistency result.

Denote by L_n and L*, respectively, the conditional error associated with the above-defined k-NN classifier and the Bayes (optimal) error for the problem at hand. If (F, d) is separable and condition (BC) is fulfilled, then the k-NN classifier is weakly consistent, that is, L_n → L* in probability as n → ∞, provided that k → ∞ and k/n → 0.

The Besicovitch condition also plays an important role in the consistency of kernel rules (see Abraham et al. 2006).

Cérou and Guyader (2006) have also considered the following more convenient condition (called μ-continuity), which ensures (BC): for every ε > 0 and for μ-a.e. x,

lim_{δ→0} μ({z ∈ B(x, δ) : |η(z) − η(x)| > ε}) / μ(B(x, δ)) = 0.

However, for our purposes, it will be sufficient to observe that the continuity (μ-a.e.) of η also implies (BC). We are interested in finding families of distributions of (X, Y) under which the regression function η is continuous (μ-a.e.) and hence (BC) holds.

From now on we will use the following notation. Let P_i be the distribution of X conditional on Y = i, that is, P_i(B) = P(X ∈ B | Y = i), for i = 0, 1 and B a Borel set of F. We denote by S_i the support of P_i, for i = 0, 1, and S = S_0 ∩ S_1. The expression P_1 ≪ P_0 will denote that P_1 is absolutely continuous with respect to P_0. Also we will assume that p = P(Y = 1) fulfills 0 < p < 1.

The following theorem shows that the property of continuity (resp. μ-continuity) of η, and hence the weak consistency of the k-NN classifier, follows from the continuity (resp. μ-continuity) of the Radon-Nikodym derivative of P_1 with respect to P_0, provided that it exists.

Theorem 1: Assume that 0 < p < 1 and that P_1 ≪ P_0 and P_0 ≪ P_1 on S. Then the following inequality holds for μ-a.e. x, z ∈ S:

|η(x) − η(z)| ≤ (p/(1 − p)) |(dP_1/dP_0)(x) − (dP_1/dP_0)(z)|,

where dP_1/dP_0 denotes the Radon-Nikodym derivative of P_1 with respect to P_0. When P_0 and P_1 are equivalent on the whole space F, the restriction to S may be dropped.

In particular, η is continuous μ-a.e. (resp. μ-continuous) whenever dP_1/dP_0 is continuous μ-a.e. (resp. μ-continuous). Of course, a similar result holds by interchanging the sub-indices 0 and 1 and replacing p by 1 − p.

Proof: Observe that μ = pP_1 + (1 − p)P_0. Then P_i ≪ μ, for i = 0, 1, and we can define the Radon-Nikodym derivatives f_i = dP_i/dμ, for i = 0, 1. From the definition of the conditional expectation we know that η can be expressed by

η(x) = p f_1(x),  for μ-a.e. x.     (4)

Observe that p f_1 + (1 − p) f_0 = 1 μ-a.e., and thus 0 ≤ f_i ≤ 1/min(p, 1 − p), for i = 0, 1. Since P_0 ≪ P_1 and P_1 ≪ P_0 on S then, on this set, we can define the Radon-Nikodym derivatives dP_1/dP_0 and dP_0/dP_1. In this case, it also holds that f_i > 0 μ-a.e. on S, for both i = 0 and i = 1.

Then (see, e.g., Folland 1999), for μ-a.e. x ∈ S,

f_1(x) = (dP_1/dP_0)(x) f_0(x).     (5)

Substituting (5) into expression (4), and using again that p f_1 + (1 − p) f_0 = 1 μ-a.e., we get

η(x) = p (dP_1/dP_0)(x) f_0(x) = p (dP_1/dP_0)(x) / (1 − p + p (dP_1/dP_0)(x)).     (6)

Using this last expression we can see that if p < 1 and if dP_1/dP_0 is continuous μ-a.e. (resp. μ-continuous) on S then η is also continuous μ-a.e. (resp. μ-continuous) on S. To see this it suffices to observe that, for μ-a.e. x, z ∈ S,

|η(x) − η(z)| = p(1 − p) |(dP_1/dP_0)(x) − (dP_1/dP_0)(z)| / [(1 − p + p (dP_1/dP_0)(x))(1 − p + p (dP_1/dP_0)(z))] ≤ (p/(1 − p)) |(dP_1/dP_0)(x) − (dP_1/dP_0)(z)|.

To derive the last inequality we have used that, as P_0 and P_1 are positive measures, the Radon-Nikodym derivative dP_1/dP_0 is non-negative.

In order to combine Theorem 1 and the consistency result in Cérou and Guyader (2006, Th. 2), we are interested in finding distributions of an infinite-dimensional random element X such that P_1 ≪ P_0 and P_0 ≪ P_1 with continuous Radon-Nikodym derivatives. Measures P_0 and P_1 satisfying P_1 ≪ P_0 and P_0 ≪ P_1 on S are said to be equivalent on S.

Let us denote by C[0, 1] the metric space of continuous real-valued functions defined on the interval [0, 1], endowed with the supremum norm ‖x‖_∞ = sup_{t∈[0,1]} |x(t)|. Also let C²[0, 1] be the space of twice continuously differentiable functions defined on [0, 1].

In the next theorem we exhibit a broad class of Gaussian processes fulfilling the conditions of Theorem 2 in Cérou and Guyader (2006); thus the consistency of the k-NN classifier is guaranteed for them. A key element in the proof is the set of results by Varberg (1961) and Jørsboe (1968) providing explicit expressions for the Radon-Nikodym derivative of a Gaussian measure with respect to another one. By the Gaussianity assumption, each model is completely determined by its mean and covariance functions. For the sake of a clearer and more systematic presentation, the statement is divided into three parts: the first one applies to the case where the mean function in both functional populations, with distributions P_0 and P_1 (corresponding to Y = 0 and Y = 1), is common and the difference between the processes lies in the covariance functions (which however keep a common structure). The second part considers the dual case, where the difference lies in the mean functions and the covariance structure is common. Finally, the third part of the theorem generalizes the previous two statements by including the case of different mean and covariance functions.

Theorem 2: Let F = C[0, 1], endowed with the supremum norm.

  1. Assume that X | Y = i, for i = 0, 1, are Gaussian processes on [0, 1], whose mean function is zero and with covariance functions r_i(s, t) = u_i(min(s, t)) v_i(max(s, t)), for i = 0, 1, where u_i, v_i, for i = 0, 1, are positive functions in C²[0, 1]. Assume also that u_i/v_i, for i = 0, 1, and v_0/v_1 are bounded away from zero on [0, 1], that each u_i/v_i is strictly increasing and that u_0′v_0 − u_0v_0′ = u_1′v_1 − u_1v_1′ on [0, 1]. Then η is continuous on C[0, 1].

  2. Assume that X | Y = i, for i = 0, 1, are Gaussian processes on [0, 1], with equal covariance function r(s, t) = u(min(s, t)) v(max(s, t)), where u and v are positive functions and u/v and v are bounded away from zero on [0, 1]. Assume also that the mean function of X | Y = 0 is 0 and that of X | Y = 1 is a function m ∈ C²[0, 1] with m(0) = 0. Then η is continuous on C[0, 1].

  3. Assume that X | Y = i, for i = 0, 1, are Gaussian processes on [0, 1], with mean functions m_i and covariance functions r_i(s, t) = u_i(min(s, t)) v_i(max(s, t)), for i = 0, 1, where u_i, v_i, for i = 0, 1, are positive functions in C²[0, 1] which fulfill the same conditions imposed in (a). Assume also that m_1 − m_0 fulfills the conditions imposed on the mean function in (b). Then η is continuous on C[0, 1].

Therefore, under the assumptions in either (a), (b) or (c), the k-NN classifier discriminating between P_0 and P_1 is weakly consistent when k → ∞ and k/n → 0.

Proof:

  1. Varberg (1961, Th. 1) shows that, under the assumptions of (a), P_0 and P_1 are equivalent measures and the Radon-Nikodym derivative of P_1 with respect to P_0 is given by

    (7)

    where

    and

    Observe that, by the assumptions of the theorem, this function is differentiable with bounded derivative. Thus it is of bounded variation and may be expressed as the difference of two bounded positive increasing functions. Therefore the stochastic integral (7) is well defined and can be evaluated integrating by parts,

    It is clear that this derivative is a continuous functional of x with respect to the supremum norm.

    Now, Theorem 1 implies that η is continuous and, therefore, the Besicovitch condition (BC) holds and, from Theorem 2 in Cérou and Guyader (2006), the k-NN classifier is weakly consistent. Note that the equivalence of P_0 and P_1 implies the coincidence of both supports, S_0 = S_1.

  2. In Jørsboe (1968), p. 61, it is proved that, under the indicated assumptions, P_0 and P_1 are equivalent measures with the following Radon-Nikodym derivative

    where

    and

    Again, integration by parts gives

    (8)

    with

    Thus the Radon-Nikodym derivative, and hence η, are continuous, and the consistency of the k-NN classifier also holds in this case.

  3. Let us denote by P_{m, r} the distribution of the Gaussian process with mean function m and covariance function r. Then η is continuous since (see, e.g., Folland 1999)

    (9)

    and, as we have shown in the proofs of (a) and (b), the Radon-Nikodym derivatives on the right-hand side of (9) are all continuous.

Remark 1 (Application to Ornstein-Uhlenbeck processes). Let X | Y = i, for i = 0, 1, be Gaussian processes on [0, 1], with zero mean and covariance function r_i(s, t) = c_i exp(−θ_i |s − t|), for i = 0, 1, where c_i, θ_i > 0 for i = 0, 1. Assume that c_0 θ_0 = c_1 θ_1. Then these processes satisfy the assumptions in Theorem 2(a).
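As an informal numerical illustration of this remark (not part of the original analysis), the following sketch simulates two zero-mean Ornstein-Uhlenbeck populations with different, arbitrarily chosen parameters and applies the k-NN rule with respect to the supremum norm:

```python
import numpy as np

rng = np.random.default_rng(1)

def ou_path(theta, sigma=1.0, n_grid=100):
    """Exact simulation of a stationary zero-mean Ornstein-Uhlenbeck
    process on an equispaced grid of [0, 1] via its AR(1) recursion."""
    dt = 1.0 / (n_grid - 1)
    a = np.exp(-theta * dt)
    s = sigma * np.sqrt((1.0 - a**2) / (2.0 * theta))
    x = np.empty(n_grid)
    x[0] = rng.normal(0.0, sigma / np.sqrt(2.0 * theta))  # stationary start
    for i in range(1, n_grid):
        x[i] = a * x[i - 1] + s * rng.normal()
    return x

def knn_sup(x, X_train, y_train, k=5):
    """k-NN classifier with respect to the supremum norm."""
    d = np.max(np.abs(X_train - x), axis=1)
    return int(y_train[np.argsort(d)[:k]].mean() > 0.5)

# Two OU populations (theta = 0.5 vs theta = 8; illustrative values only)
n = 50
X_train = np.array([ou_path(0.5) for _ in range(n)] +
                   [ou_path(8.0) for _ in range(n)])
y_train = np.array([0] * n + [1] * n)
X_test = np.array([ou_path(0.5) for _ in range(20)] +
                  [ou_path(8.0) for _ in range(20)])
y_test = np.array([0] * 20 + [1] * 20)
acc = np.mean([knn_sup(x, X_train, y_train) == y
               for x, y in zip(X_test, y_test)])
print(acc)
```

The empirical accuracy is well above chance, in line with the consistency guaranteed by Theorem 2(a) for this kind of process.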

Remark 2 (Application to the Brownian motion). Theorem 2(b) can also be used to consistently discriminate between a Brownian motion without trend (m ≡ 0) and another one with trend (m ≠ 0). It will suffice to consider the case where u(t) = t and v(t) ≡ 1.

Remark 3 (On triangular covariance functions). Covariance functions of type r(s, t) = u(min(s, t)) v(max(s, t)), called triangular, have received considerable attention in the literature. For example, Sacks and Ylvisaker (1966) use this condition in the study of optimal designs for regression problems where the errors are generated by a zero-mean process with covariance function r. It turns out that the Hilbert space with reproducing kernel r plays an important role in the results and, as these authors point out, the norm of this space is particularly easy to handle when r is triangular. On the other hand, Varberg (1964) has given an interesting representation of the processes X(t) with zero mean and triangular covariance function by proving that they can be expressed in the form

X(t) = v(t) W(q(t)),

where W is the standard Wiener process and q is a function, of bounded variation, defined in terms of u and v.

Remark 4 (On plug-in functional classifiers). The explicit knowledge of the conditional expectation (6) in the cases considered in Theorem 2 could be exploited from the statistical point of view, as it suggests the use of “plug-in” classifiers obtained by replacing η in (1) with suitable parametric or semiparametric estimators.

Remark 5 (On equivalent Gaussian measures and their supports). According to a well-known result by Feldman and Hájek, any given pair of Gaussian processes are either equivalent or mutually singular. In the first case both measures P_0 and P_1 have a common support, so that Theorem 1 is applicable with S = S_0 = S_1. As for the identification of the support, Vakhania (1975) has proved that if a Gaussian process, with trajectories in a separable Banach space F, is non-degenerate (i.e., the distribution of any non-trivial linear continuous functional is not degenerate), then the support of the process is the whole space F. Again, expression (6) for the regression function suggests the possibility of investigating nonparametric estimators of the Radon-Nikodym derivative dP_1/dP_0, which would in turn provide plug-in versions of the Bayes rule with no further assumptions on the structure of the Gaussian processes involved, apart from their equivalence.

3. Some numerical comparisons

The aim of this section is to compare numerically the performance of several supervised functional classification procedures already introduced in the literature. The procedures are the k-NN rule, computed both with respect to the supremum norm and the L2 norm, and the other discrimination rules reviewed in Section 3.1. One objective of this numerical study is to gain some insight into which classification procedures perform well regardless of the type of functional data under consideration and could thus be considered a sort of benchmark for the functional discrimination problem. Section 3.2 contains a Monte Carlo study carried out on two different functional data generating models. In Section 3.3 we consider six functional real data sets taken from the literature.

3.1 Other functional classifiers

Here we review other classification techniques that have been used in the literature in the context of functional data. From now on we denote by t_1, …, t_N the nodes where the functional predictor X has been observed.

Partial Least Squares (PLS) classification

Let us first describe the procedure in the context of a multivariate predictor X. PLS is actually a dimension reduction technique for regression problems with predictor X and response Y (which in the case of classification takes only two values, 0 or 1, depending on which population the individual comes from). The dimension reduction is carried out by projecting X onto a lower-dimensional space such that the coordinates of the projected X, the PLS coordinates, are uncorrelated with each other and have maximum covariance with Y. Then, if the aim is classification, Fisher’s linear discriminant is applied to the PLS coordinates of X (see Barker and Rayens 2003, Liu and Rayens 2007). In the case of a functional predictor (see Preda et al. 2007), the above-described procedure is applied to the discretized version of X, (X(t_1), …, X(t_N)). Here we have chosen the number of PLS directions, among the values 1, …, 10, by cross-validation.
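The dimension reduction step can be sketched as follows (a minimal NIPALS-style PLS1 implementation with NumPy; the toy curves, the choice of three components and the omission of the final Fisher discriminant step are our own simplifications):

```python
import numpy as np

def pls_fit(X, y, n_comp):
    """Minimal PLS1 (NIPALS with X-deflation). Returns the centering
    vector and the rotation R such that scores = (X - mean) @ R."""
    mu = X.mean(axis=0)
    Xc = X - mu
    yc = y - y.mean()
    W, P = [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                 # direction of maximal covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                    # score of this component
        p = Xc.T @ t / (t @ t)        # loading
        Xc -= np.outer(t, p)          # deflate X
        W.append(w); P.append(p)
    W, P = np.column_stack(W), np.column_stack(P)
    R = W @ np.linalg.inv(P.T @ W)    # rotation mapping X to PLS scores
    return mu, R

# Toy discretized curves: two classes differing in mean shape
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
X = np.vstack([rng.normal(0, 1, (40, 50)),
               np.sin(2 * np.pi * t) + rng.normal(0, 1, (40, 50))])
y = np.array([0.0] * 40 + [1.0] * 40)
mu, R = pls_fit(X, y, n_comp=3)
scores = (X - mu) @ R                 # PLS coordinates; feed these to LDA
print(scores.shape)                   # -> (80, 3)
```

In the full procedure, Fisher’s rule would then be trained on `scores`, and a new curve would be classified from its own scores `(x - mu) @ R`.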

Reproducing Kernel Hilbert Space (RKHS) classification

We will also define this technique initially for a multivariate predictor. For simplicity, we will assume that X takes values in ℝ^N. Let K(·, ·) be a function defined on ℝ^N × ℝ^N. An RKHS with kernel K is the vector space generated by all finite linear combinations of functions of the form K(x, ·), for x ∈ ℝ^N, endowed with the inner product given by ⟨K(x, ·), K(z, ·)⟩ = K(x, z). RKHS’s are frequently used in the context of machine learning (see Evgeniou et al. 2002, Wahba 2002); for their applications in statistics the reader is referred to the monograph by Berlinet and Thomas-Agnan (2004). In this work we use the Gaussian kernel K(x, z) = exp(−γ‖x − z‖²), where γ > 0 is a fixed parameter. The classification problem is solved by plugging a regression estimator of the type f(x) = Σ_i c_i K(X_i, x) into the Bayes classifier. When X is a random function, this procedure is applied to the discretized (X(t_1), …, X(t_N)). The coefficients c_i are chosen to minimize a penalized risk functional of the form Σ_i (Y_i − f(X_i))² + λ‖f‖²_K, where λ > 0 is a penalization parameter. In this work the values of the parameters γ and λ have been chosen by cross-validation via a leave-one-out procedure. According to our results, it seems that the performance of the RKHS methodology is rather sensitive to changes in these parameters and even to the starting point of the leave-one-out procedure mentioned.
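A minimal sketch of this regularized RKHS regression (in its kernel-ridge form, which follows from the representer theorem; the synthetic data and the fixed values of the kernel and penalization parameters are illustrative assumptions, and no cross-validation is performed here):

```python
import numpy as np

def gauss_kernel_matrix(A, B, gamma):
    """Gaussian kernel K(x, z) = exp(-gamma * ||x - z||^2) between rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rkhs_fit(X, y, gamma, lam):
    """Regularized least squares in the RKHS: f = sum_i c_i K(X_i, .),
    with coefficients c = (K + lam * n * I)^{-1} y."""
    K = gauss_kernel_matrix(X, X, gamma)
    n = len(y)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def rkhs_classify(x_new, X, c, gamma):
    """Plug the RKHS regression estimate into the Bayes rule (threshold 1/2)."""
    k = gauss_kernel_matrix(x_new[None, :], X, gamma)[0]
    return int(k @ c > 0.5)

# Toy discretized curves from two populations
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (30, 20)), rng.normal(2, 1, (30, 20))])
y = np.array([0.0] * 30 + [1.0] * 30)
c = rkhs_fit(X, y, gamma=0.05, lam=1e-3)
print(rkhs_classify(X[0] + 0.1, X, c, 0.05))   # near a class-0 curve -> 0
```

The sensitivity to the two parameters mentioned in the text corresponds here to the choices of `gamma` and `lam`.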

Classification via depth measures

The idea is to assign a new observation x to the population, P_0 or P_1, with respect to which x is deeper (see Ghosh and Chaudhuri 2005, Cuevas et al. 2007). From the five functional depth measures considered by Cuevas et al. (2007) we have taken the h-mode depth and the random projection (RP) depth.

Specifically, the h-mode depth of x with respect to the population given by the random element X is defined as D(x) = E K_h(‖x − X‖), where ‖·‖ is a norm on the functional space, K_h is a kernel function (here we have taken the Gaussian kernel) rescaled by the smoothing parameter h. As the distribution of X is usually unknown, in the simulations we actually use the empirical version of D, namely D_n(x) = (1/n) Σ_i K_h(‖x − X_i‖). The smoothing parameter h has been chosen as the 20th percentile of the distances between the functions in the training sample (see Cuevas et al. 2007).
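An empirical version of this depth can be sketched as follows (supremum norm and Gaussian kernel assumed, with the 20th-percentile bandwidth choice mentioned above; the discretized curves are synthetic):

```python
import numpy as np

def h_mode_depth(x, sample, h):
    """Empirical h-mode depth: average Gaussian kernel of the supremum
    distances from x to the sample curves."""
    d = np.max(np.abs(sample - x), axis=-1)
    return np.mean(np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi)))

def depth_classify(x, S0, S1, h0, h1):
    """Assign x to the population with respect to which it is deeper."""
    return int(h_mode_depth(x, S1, h1) > h_mode_depth(x, S0, h0))

rng = np.random.default_rng(4)
S0 = rng.normal(0, 1, (40, 30))          # discretized curves, class 0
S1 = 3 + rng.normal(0, 1, (40, 30))      # class 1, shifted level
# h chosen as the 20th percentile of within-sample sup distances
d0 = np.max(np.abs(S0[:, None] - S0[None, :]), axis=-1)
h0 = np.percentile(d0[np.triu_indices(40, 1)], 20)
d1 = np.max(np.abs(S1[:, None] - S1[None, :]), axis=-1)
h1 = np.percentile(d1[np.triu_indices(40, 1)], 20)
print(depth_classify(2.9 + np.zeros(30), S0, S1, h0, h1))   # -> 1
```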

To compute the RP depth, the training sample is projected onto a (functional) random direction v (independent of the X_i). The sample depth of an observation x with respect to P_i is defined as the univariate depth of the projection of x onto v with respect to the projected training sample from P_i. Since v is a random element, this definition leads to a random measure of depth, but a single representative value is obtained by averaging these random depths over 50 independent random directions (see Cuevas and Fraiman 2008 for a theoretical development of this idea). If we are working with discretized versions (X(t_1), …, X(t_N)) of the functional data, we may take v according to a uniform distribution on the unit sphere of ℝ^N. This can be achieved, for example, by setting v = g/‖g‖, where g is drawn from the standard Gaussian distribution on ℝ^N.
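A rough sketch of this construction, with a simple halfspace-type univariate depth min(F_n(u), 1 − F_n(u)) standing in for whichever univariate depth one prefers (all other choices are illustrative):

```python
import numpy as np

def rp_depth(x, sample, n_proj=50, rng=None):
    """Random-projection depth: project the curve and the sample onto
    random unit directions and average a univariate depth of the
    projections (here a simple halfspace-type depth)."""
    rng = rng if rng is not None else np.random.default_rng()
    n, m = sample.shape
    depths = []
    for _ in range(n_proj):
        g = rng.normal(size=m)
        v = g / np.linalg.norm(g)          # uniform direction on the sphere
        proj_x = x @ v
        proj_s = sample @ v
        F = np.mean(proj_s <= proj_x)
        depths.append(min(F, 1 - np.mean(proj_s < proj_x)))
    return np.mean(depths)

rng = np.random.default_rng(5)
S0 = rng.normal(0, 1, (40, 30))            # discretized training curves
x_central = np.zeros(30)                   # a "typical" curve
x_outlying = 5 + np.zeros(30)              # a clearly atypical curve
print(rp_depth(x_central, S0, rng=rng) > rp_depth(x_outlying, S0, rng=rng))
```

As expected, the central curve receives a larger average depth than the outlying one.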

Moving window rule

The moving window classifier is given by

g_n(x) = 1 if Σ_{i=1}^{n} Y_i 1{d(X_i, x) ≤ h} > Σ_{i=1}^{n} (1 − Y_i) 1{d(X_i, x) ≤ h}, and g_n(x) = 0 otherwise,

where h > 0 is a smoothing parameter. This classification rule was considered in the functional setting, for instance, by Abraham et al. (2006). In this work the parameter h has again been chosen via cross-validation.
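In code, the rule reduces to a majority vote inside a ball of radius h (a sketch with the supremum norm and arbitrary synthetic curves; h is fixed here rather than cross-validated, and ties or empty windows go to class 0):

```python
import numpy as np

def moving_window(x, X_train, y_train, h):
    """Moving window rule: majority vote among the training curves
    lying within supremum distance h of x."""
    d = np.max(np.abs(X_train - x), axis=1)
    in_window = d <= h
    return int(y_train[in_window].sum() > (1 - y_train[in_window]).sum())

rng = np.random.default_rng(6)
X_train = np.vstack([rng.normal(0, 1, (30, 25)),   # class 0 curves
                     rng.normal(4, 1, (30, 25))])  # class 1 curves
y_train = np.array([0] * 30 + [1] * 30)
print(moving_window(4 + np.zeros(25), X_train, y_train, h=4.0))   # -> 1
```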

3.2 Monte Carlo results

In this section we study two functional data models already considered by other authors. More specifically, in Model 1, similar to one used in Cuevas et al. (2007), X | Y = i is a Gaussian process with mean function m_i and covariance function r(s, t), for i = 0, 1. Observe that this model, with smooth trajectories, satisfies the assumptions in Theorem 2, and thus we would expect the k-NN classification rule (with respect to the supremum norm) to perform nicely. Let us note that the value 1.1 in the exponent of the covariance function is in fact the one used in Model 1, p. 487, of Cuevas et al. (2007), although in their work 1.2 was misprinted instead.

Model 2 appears in Preda et al. (2007), but here the functions used to define the mean have been rescaled to have domain [0, 1]. The trajectories of X are given by

X(t) = U h_{(1)}(t) + (1 − U) h_{(2)}(t) + ε(t),     (10)

where U is uniformly distributed on [0, 1], the pair of mean functions (h_{(1)}, h_{(2)}) differs between the two populations, and ε is an approximation to continuous-time white noise. In practice, this means that, in the discretized approximations to X, the variables ε(t_j) are independently drawn from a standard normal distribution.
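A generator in the spirit of (10) can be sketched as follows (the triangular bumps h_i are borrowed from the classical waveform data and are an assumption here, since the exact mean functions are not reproduced above; grid size and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 100)

# Triangular bumps of waveform type, rescaled to [0, 1]
# (assumed forms; not necessarily those used in the paper)
h1 = np.maximum(6 - np.abs(21 * t - 11), 0)
h2 = np.maximum(6 - np.abs(21 * t - 15), 0)
h3 = np.maximum(6 - np.abs(21 * t - 7), 0)

def waveform(label):
    """One trajectory: random convex combination of two bumps plus
    discretized white noise (independent standard normals per node)."""
    u = rng.uniform()
    mean = u * h1 + (1 - u) * h2 if label == 0 else u * h1 + (1 - u) * h3
    return mean + rng.normal(size=t.size)

X0 = np.array([waveform(0) for _ in range(100)])   # population 0
X1 = np.array([waveform(1) for _ in range(100)])   # population 1
print(X0.shape, X1.shape)
```

The additive noise term makes the raw trajectories quite irregular, which is relevant to the smoothing experiment discussed below.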

The simulation results are summarized in Tables 1 and 2. The number of equispaced nodes where the functional data have been evaluated is the same for both models. The number of Monte Carlo runs is 100. In every run we generated two training samples (from P_0 and P_1 respectively), each with sample size 100, and we also generated a test sample of size 50 from each of the two populations. The tables display descriptive statistics of the proportion of correctly classified observations in these test samples.

k-NN (sup) k-NN (L2) PLS RKHS h-modal RP(hM) MWR
Minimum 0.6200 0.6600 0.6000 0.4800 0.6400 0.5400 0.6600
First quartile 0.8000 0.8000 0.8000 0.6600 0.8000 0.7800 0.8000
Median 0.8400 0.8400 0.8400 0.8400 0.8400 0.8400 0.8400
Mean 0.8396 0.8354 0.8371 0.7999 0.8409 0.8260 0.8393
Third quartile 0.8800 0.8800 0.8800 0.9400 0.8800 0.8800 0.8800
Maximum 0.9800 0.9600 0.9800 1.0000 0.9800 0.9800 1.0000
Std. deviation 0.0603 0.0572 0.0668 0.1457 0.0589 0.0725 0.0634
Table 1: Simulation results for Model 1
k-NN (sup) k-NN (L2) PLS RKHS h-modal RP(hM) MWR
Minimum 0.8400 0.8400 0.8800 0.8400 0.8600 0.8400 0.8200
First quartile 0.9200 0.9400 0.9600 0.9600 0.9400 0.9400 0.9400
Median 0.9600 0.9600 0.9800 0.9800 0.9800 0.9600 0.9600
Mean 0.9522 0.9558 0.9686 0.9688 0.9657 0.9522 0.9570
Third quartile 0.9800 0.9800 0.9800 1.0000 1.0000 0.9800 0.9800
Maximum 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Std. deviation 0.0335 0.0355 0.0279 0.0313 0.0308 0.0345 0.0349
Table 2: Simulation results for Model 2

Regarding Model 1, observe that there is little difference between the correct classification rates of the methods, except for the RKHS procedure, which performs worse. In Model 2 the PLS, RKHS and h-modal methods slightly outperform the others. When the Monte Carlo study with this model was carried out, we also applied the k-NN classification procedures to a spline-smoothed version of the trajectories. The result was that the mean correct classification rate increased to 0.9582 in the case of the supremum norm and to 0.9624 in the case of the L2 norm. This, together with the analysis of the flies data in the next subsection, seems to suggest that, when the curves are irregular, smoothing them will enhance the k-NN discrimination procedure.

3.3. Some comparisons based on real data sets

3.3.1. Brief description of the data sets

Berkeley Growth Data: The Berkeley Growth Study (Tuddenham and Snyder 1954) recorded the heights of girls and boys between the ages of 1 and 18 years. Heights were measured at 31 ages for each child. These data have been previously analyzed by Ramsay and Silverman (2002).

ECG data: These are electrocardiogram (ECG) data, studied by Wei and Keogh (2006), from the MIT-BIH Arrhythmia database (see Goldberger et al. 2000). Each observation contains the successive measurements recorded by one electrode during one heartbeat and was normalized and rescaled to have length 85. A group of cardiologists have assigned a label of normal or abnormal to each data record. Due to computational limitations, of the original records in the data set, we have randomly chosen only observations from each group.

MCO data: The variable under study is the mitochondrial calcium overload (MCO), measured every 10 seconds during an hour in isolated mouse cardiac cells. The data come from research conducted by Dr. David García-Dorado at the Vall d’Hebron Hospital (see Ruiz-Meana et al. 2003, Cuevas, Febrero and Fraiman 2004, 2007). In order to assess if a certain drug increased the MCO level, a sample of functions of size was taken from a control group and functions were sampled from the treatment group.

Spectrometric data: For each of 215 pieces of meat a spectrometer provided the absorbance attained at 100 different wavelengths (see Ferraty and Vieu 2006 and references therein). The fat content of the meat was also obtained via chemical processing and each of the meat pieces was classified as low- or high-fat.

Phoneme data: The variable is the log-periodogram (discretized to 150 nodes) of a phoneme. The two populations correspond to phonemes “aa” and “ao” respectively (see more information in Ferraty and Vieu 2006). We have considered a sample of 100 observations from each phoneme.

Medflies data: This dataset was obtained by Prof. Carey from U.C. Davis (see Carey et al. 1998) and has been studied, for instance, by Müller and Stadtmüller (2005). The predictor is the number of eggs laid daily by a Mediterranean fruit fly for a 30-day period. The fly is classified as long-lived if its remaining lifetime past 30 days is more than 14 days and short-lived otherwise. The number of long- and short-lived flies observed was 256 and 278 respectively.

3.3.2. Results

We have applied the classification techniques reviewed in Section 3.1 to the real data sets just described. While carrying out the simulations of Subsection 3.2, we observed that the performance of the RKHS procedure depended heavily on the initial values of the parameters provided to the cross-validation algorithm. In fact, finding initial values for these parameters that would yield results competitive with the other methods took a considerable time. Thus we decided to exclude the RKHS classification method from the study with real data.

We have computed, via a cross-validation procedure, the mean correct classification rates attained by the different discrimination methods on the real data sets. Table 3 displays the results. Since the egg-laying trajectories in the medflies data set are very irregular and spiky, we have computed the correct classification rate for both the original data and a spline-smoothed version. The smoothing leads to a better performance of the k-NN procedure with the supremum metric, just as happened in the simulations with Model 2.
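One standard way to obtain such rates is leave-one-out cross-validation: each curve is classified by the rule trained on the remaining ones, and the proportion of hits is reported. The sketch below illustrates this with a 1-NN stand-in classifier; all names and the synthetic data are our own assumptions, not the actual procedure of the study.

```python
import numpy as np

def one_nn(train_X, train_y, x):
    """1-NN rule under the (squared) L2 distance on a common grid."""
    d = ((train_X - x) ** 2).sum(axis=1)
    return train_y[np.argmin(d)]

def loo_correct_rate(X, y, classify):
    """Leave-one-out cross-validation: classify each curve with a rule
    trained on the remaining n - 1 curves; return the proportion of hits."""
    n = len(y)
    idx = np.arange(n)
    hits = sum(classify(X[idx != i], y[idx != i], X[i]) == y[i] for i in range(n))
    return hits / n

# Two well-separated groups of noisy constant curves
rng = np.random.default_rng(1)
X = np.vstack([0.05 * rng.standard_normal((15, 30)),
               1.0 + 0.05 * rng.standard_normal((15, 30))])
y = np.repeat([0, 1], 15)
print(loo_correct_rate(X, y, one_nn))  # -> 1.0
```

Because the classifier is passed in as a function, the same loop can score each of the competing methods on a common footing.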

Data set                  k-NN (sup)   k-NN (L²)   PLS      h-modal   RP(hM)   MWR
Growth                    0.9462       0.9677      0.9462   0.9462    0.9462   0.9570
ECG                       0.9900       0.9950      0.9825   0.9900    0.8575   0.8850
MCO                       0.8427       0.8315      0.8876   0.7640    0.7079   0.6854
Spectrometric             0.9070       0.8558      0.9163   0.6791    0.6930   0.6558
Phoneme                   0.7300       0.7800      0.7400   0.7300    0.7450   0.6950
Medflies (non-smoothed)   0.5468       0.5412      0.5262   0.4925    0.5056   0.5431
Medflies (smoothed)       0.5712       0.5431      0.5094   0.5075    0.5543   0.5206

Table 3: Mean correct classification rates for the real data sets

As a conclusion, we would say that the k-NN classification methodology with respect to the supremum norm is always among the best-performing ones when the trajectories are smooth. The k-NN procedure with respect to the L² norm and the PLS methodology also give good results, although the latter has the drawback of a much higher computation time.

References

    Abraham, C., Biau, G. and Cadre, B. (2006). On the kernel rule for function classification. Annals of the Institute of Statistical Mathematics 58, 619–633.

    Barker, M. and Rayens, W. (2003). Partial least squares for discrimination. Journal of Chemometrics 17, 166–173.

    Berlinet, A. and Thomas-Agnan, C. (2004). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers.

    Biau, G., Bunea, F. and Wegkamp, M. (2005). Functional classification in Hilbert spaces. IEEE Transactions on Information Theory 51, 2163-2172.

    Carey, J.R., Liedo, P., Müller, H.G., Wang, J.L. and Chiou, J.M. (1998). Relationship of age patterns of fecundity to mortality, longevity, and lifetime reproduction in a large cohort of Mediterranean fruit fly females. Journal of Gerontology, Ser. A 53, 245–251.

    Cérou, F. and Guyader, A. (2006). Nearest neighbor classification in infinite dimension. ESAIM: Probability and Statistics 10, 340-355.

    Cuevas, A., Febrero, M. and Fraiman, R. (2004). An ANOVA test for functional data. Computational Statistics and Data Analysis 47, 111–122.

    Cuevas, A., Febrero, M. and Fraiman, R. (2007). Robust estimation and classification for functional data via projection-based depth notions. Computational Statistics 22, 481–496.

    Cuevas, A. and Fraiman, R. (2008). On depth measures and dual statistics. A methodology for dealing with general data. Manuscript.

    Devroye, L., Györfi, L. and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. Springer-Verlag.

    Evgeniou, T., Poggio, T., Pontil, M. and Verri, A. (2002). Regularization and statistical learning theory for data analysis. Computational Statistics and Data Analysis 38, 421–432.

    Ferraty, F. and Vieu, P. (2003). Curves discrimination: A nonparametric functional approach. Computational Statistics and Data Analysis 44, 161–173.

    Ferraty, F. and Vieu, P. (2006). Nonparametric Modelling for Functional Data. Springer.

    Ferré, L. and Villa, N. (2006). Multilayer perceptron with functional inputs: an inverse regression approach. Scandinavian Journal of Statistics 33, 807–823.

    Fisher, R.A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, 179–188.

    Folland, G. B. (1999). Real analysis. Modern techniques and their applications. Wiley.

    Ghosh, A. K. and Chaudhuri, P. (2005). On maximal depth and related classifiers. Scandinavian Journal of Statistics 32, 327–350.

    Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P., Mark, R., Mietus, J., Moody, G., Peng, C.-K. and Stanley, H.E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101, 215–220.

    Hand, D.J. (1997). Construction and Assessment of Classification Rules. Wiley.

    Hand, D.J. (2006). Classifier technology and the illusion of progress. Statistical Science 21, 1–14.

    Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning. Springer.

    James, G.M. and Hastie, T.J. (2001). Functional linear discriminant analysis for irregularly sampled curves. Journal of the Royal Statistical Society, Ser. B 63, 533-550.

    Jørsboe, O. G. (1968). Equivalence or Singularity of Gaussian Measures on Function Spaces. Various Publications Series, No. 4, Matematisk Institut, Aarhus Universitet, Aarhus.

    Liu, Y. and Rayens, W. (2007). PLS and dimension reduction for classification. Computational Statistics 22, 189–208.

    Müller, H.G. and Stadtmüller, U. (2005). Generalized functional linear models. The Annals of Statistics 33, 774-805.

    Preda, C. (2007). Regression models for functional data by reproducing kernel Hilbert spaces methods. Journal of Statistical Planning and Inference 137, 829–840.

    Preda, C., Saporta, G. and Lévéder, C. (2007). PLS classification of functional data. Computational Statistics 22, 223–235.

    Ramsay, J.O. and Silverman, B.W. (2002). Applied Functional Data Analysis. Methods and Case Studies. Springer-Verlag.

    Ramsay, J.O. and Silverman, B.W. (2005). Functional Data Analysis. Second edition. Springer.

    Ruiz-Meana, M., García-Dorado, D., Pina, P., Inserte, J., Agulló, L. and Soler-Soler, J. (2003). Cariporide preserves mitochondrial proton gradient and delays ATP depletion in cardiomyocites during ischemic conditions. American Journal of Physiology - Heart and Circulatory Physiology 285, 999–1006.

    Sacks, J. and Ylvisaker, N.D. (1966). Designs for regression problems with correlated errors. Annals of Mathematical Statistics 37, 66–89.

    Stone, C. J. (1977). Consistent nonparametric regression. The Annals of Statistics 5, 595-645.

    Tuddenham, R. D. and Snyder, M. M. (1954). Physical growth of California boys and girls from birth to eighteen years. University of California Publications in Child Development 1, 183–364.

    Vakhania, N.N. (1975). The topological support of Gaussian measure in Banach space. Nagoya Mathematical Journal 57, 59–63.

    Varberg, D.E. (1961). On equivalence of Gaussian measures. Pacific Journal of Mathematics 11, 751–762.

    Varberg, D.E. (1964). On Gaussian measures equivalent to Wiener measure. Transactions of the American Mathematical Society 113, 262–273.

    Wahba, G. (2002). Soft and hard classification by reproducing kernel Hilbert space methods. Proceedings of National Academy of Sciences 99, 16524–16530.

    Wei, L. and Keogh, E. (2006). Semi-Supervised Time Series Classification. Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 748–753, Philadelphia, U.S.A.
