Time series averaging from a probabilistic interpretation of a time-elastic kernel
Abstract
In the light of regularized dynamic time warping kernels, this paper reconsiders the concept of time elastic centroid (TEC) for a set of time series. From this perspective, we show that TEC estimation can be readily addressed as a preimage problem. However, this non-convex problem is ill-posed, and obtaining a suboptimal solution may involve heavy computational costs, especially for long time series. We then derive two new algorithms based on a probabilistic interpretation of kernel alignment matrices that expresses the result in terms of probability distributions over sets of alignment paths. The first algorithm is an iterative agglomerative heuristic procedure inspired by a state-of-the-art DTW barycentre averaging algorithm. The second proposed algorithm uses a progressive agglomerative heuristic method that performs classical averaging of the aligned samples but also averages the times of occurrence of the aligned samples. By comparing the classification accuracies obtained by first nearest centroid/medoid classifiers on 45 time series datasets, we show that: i) centroid-based approaches significantly outperform medoid-based approaches, and ii) for the considered datasets, the second algorithm, which combines averaging in the sample space and along the time axis, emerges as the most significantly robust heuristic model for time-elastic averaging, with a promising noise reduction capability.
1 Introduction
Since Maurice Fréchet’s pioneering work [1] in the early 1900s, time-elastic matching of time series or symbolic sequences has attracted much attention from the scientific community in numerous fields such as information indexing and retrieval, pattern analysis, extraction and recognition, data mining, etc. This approach has impacted a very wide spectrum of applications relating to a multitude of socio-economic issues such as the environment, industry, health, energy, defense and so on.
Among other time elastic measures, Dynamic Time Warping (DTW) was widely popularized during the 1970s with the advent of speech recognition systems [2], [3] and numerous variants that have since been proposed to match time series with a certain degree of time distortion tolerance.
The main issue addressed here is time series or shape averaging in the context of a time elastic distance. This is a longstanding issue that is currently becoming increasingly prevalent; it is relevant for summarizing subsets of time series, defining significant prototypes, identifying outliers, performing data mining tasks (mainly exploratory data analysis such as clustering) and speeding up classification, as well as regression or data analysis processes in a big data context.
In this paper, we specifically tackle the question of averaging subsets of time series, not by considering the DTW measure itself, as has already been largely explored, but from the perspective of the so-called regularized DTW kernel (KDTW), which ensures positive definiteness. From this new perspective, the estimation of a time series average or centroid can be readily addressed as a preimage (inverse) problem. However, this approach has some theoretical and practical limitations that are discussed in the following sections. A more promising direct approach is developed here, based on a probabilistic interpretation of kernel alignment matrices, which allows a precise definition of the average of a pair of time series from the expected value of local alignments of samples. The tests carried out so far demonstrate the robustness and efficiency of this approach in comparison to the state-of-the-art approaches.
The structure of this paper is as follows: after an introduction, the second section summarizes the most relevant related studies on time series averaging as well as DTW kernelization. In the third section, we show how the estimation of a timeelastic centroid can be addressed as a preimage problem in the context of the DTW regularized kernel (KDTW). In the fourth section, we derive a probabilistic interpretation from the kernel alignment matrices evaluated on a pair of time series. In the fifth section, we define the average of a pair of time series, and based on this pairwise averaging procedure, we propose two suboptimal algorithms designed for the averaging of any subset of time series.
2 Related works
Time series averaging in the context of (multiple) time elastic distance alignments has been mainly addressed in the scope of the Dynamic Time Warping (DTW) measure [2], [3]. Although other time elastic distance measures such as the Edit Distance With Real Penalty (ERP) [4] or the Time Warp Edit Distance (TWED) [5] could be considered instead, without loss of generality, we remain focused throughout this paper on DTW and its kernelization.
2.1 DTW and time elastic centroid of a pair of time series
A classical formulation of DTW can be given as follows. If $k$ is a fixed positive integer, we define a time series of length $n$ as a multidimensional sequence $X_1^n = X_1 X_2 \cdots X_n$ such that, $\forall i \in \{1, \dots, n\}$, $X_i \in \mathbb{R}^k$.
In other words, a warping path defines a way to travel along both time series simultaneously from beginning to end; it cannot skip a point, but it can advance one time step along one series without advancing along the other, thereby justifying the term time-warping.
If $\delta$ is a distance on $\mathbb{R}^k$, the global cost of a warping path $W$ is the sum of the distances (or squared distances or local costs) between the pairwise-aligned elements of the two time series along $W$, i.e.:
$cost(W) = \sum_{(i,j) \in W} \delta(X_i, Y_j)$
A common choice of distance on $\mathbb{R}^k$ is the one generated by the $L_p$ norm:
$\delta(X_i, Y_j) = \|X_i - Y_j\|_p$
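The cumulative cost of the optimal warping path can be evaluated by dynamic programming. The following illustrative sketch (scalar series, squared differences as the local cost; both are simplifying assumptions, not the paper's exact setting) computes the DTW cost:

```python
import numpy as np

def dtw(x, y):
    """DTW cost of aligning two scalar time series by dynamic programming.

    The local cost is the squared difference; D[i, j] holds the minimal
    cumulative cost of warping x[:i] onto y[:j].
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # advance along x, along y, or along both (no point is skipped)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Note that two series with the same shape but different local time scales (e.g. `[0, 0, 1, 1]` and `[0, 1, 1]`) obtain a zero DTW cost, which is precisely the time-elasticity property.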
2.2 Time elastic centroid of a set of time series
A single alignment path is required to calculate the time elastic centroid of a pair of time series (Def. ?). However, multiple path alignments need to be considered to evaluate the centroid of a larger set of time series. Multiple alignments have been widely studied in bioinformatics [6], and it has been shown that determining the optimal alignment of a set of sequences under the sum of all pairs (SP) score scheme is an NP-complete problem [7] [8]. The time and space complexity of this problem is $O(L^N)$, where $N$ is the number of sequences in the set and $L$ is the length of the sequences, when using dynamic programming to search for an optimal solution [9]. This latter result applies to the estimation of the time elastic centroid of a set of time series with respect to the DTW measure. Since the search for an optimal solution rapidly becomes intractable with increasing $N$, suboptimal heuristic solutions have subsequently been proposed, most of them falling into one of the following three categories.
Progressive heuristics
Progressive heuristic methods estimate the time elastic centroid of a set of time series by combining pairwise centroids (Def. ?). This kind of approach constructs a binary tree whose leaves correspond to the time series of the data set, and whose nodes correspond to the calculation of a local pairwise centroid, such that, when the tree is complete, the root is associated with the estimated data set centroid. The proposed strategies differ in the way the tree is constructed. One popular approach consists of providing a random order for the leaves, and then constructing the binary tree up to the root using this ordering [10]. Another approach involves constructing a dendrogram (a hierarchical agglomerative clustering) of the data set and then using this dendrogram to calculate pairwise centroids, starting with the closest pairs of time series and progressively aggregating series that are farther away [11], as illustrated on the left of Fig. ?. Note that these heuristic methods are entirely based on the calculation of a pairwise centroid, so they do not explicitly require the evaluation of a DTW centroid for more than two time series. Their complexity varies linearly with the number of time series in the data set.
Iterative heuristics
Iterative heuristics are based on an iterated three-step process. For a given temporary centroid candidate, the first step consists of calculating the inertia, i.e. the sum of the DTW distances between the temporary centroid and each time series in the data set. The second step evaluates the best pairwise alignment with the temporary centroid for each time series $X^{(p)}$ in the data set. A new time series $\tilde{X}^{(p)}$ is thus constructed that contains all the samples of time series $X^{(p)}$, but with time stretched or compressed according to the best alignment path. The third step consists of producing a new temporary centroid candidate from the set $\{\tilde{X}^{(p)}\}$ by successively averaging (in the sense of the Euclidean centroid) the samples at every timestamp of the time series. Basically, $c(t) = \sum_p \mathbb{1}_p(t)\, \tilde{X}^{(p)}(t) / \sum_p \mathbb{1}_p(t)$, where $\mathbb{1}_p(t)$ is an indicator function equal to $1$ if time series $\tilde{X}^{(p)}$ is defined at timestamp $t$, and $0$ otherwise.
Thus, the new centroid candidate replaces the previous one, and the process is iterated until the inertia is no longer reduced or the maximum number of iterations is reached. Generally, the first temporary centroid candidate is taken as the DTW medoid of the considered data set. This process is illustrated on the right of Fig. ?. The three steps of this heuristic method were first proposed in [12]. The iterative aspect of this heuristic approach was initially introduced by [13] and refined by [14]. Note that, in contrast to the progressive method, this kind of approach needs to evaluate, at each iteration, all the alignments with the current centroid candidate. The complexity of the iterative approach is therefore higher than that of the progressive approach, the extra computational cost being linear with the number of iterations. More sophisticated approaches have been proposed to escape some local minima: [15] evaluated a genetic algorithm for managing a population of centroid candidates, improving with some success on the straightforward iterative heuristic methods.
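One iteration of this three-step heuristic can be sketched as follows (an illustrative scalar implementation, not the authors' exact DBA code; the backtracked single best path and the squared-difference local cost are simplifying assumptions):

```python
import numpy as np

def dtw_path(x, y):
    """Optimal DTW alignment path as a list of (i, j) sample-index pairs."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from the last cell to recover one optimal path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def dba_update(centroid, dataset):
    """One DBA-style iteration: align every series onto the current
    centroid, bucket the samples mapped to each centroid timestamp,
    then replace each centroid sample by the bucket mean."""
    buckets = [[] for _ in centroid]
    for series in dataset:
        for i, j in dtw_path(centroid, series):
            buckets[i].append(series[j])
    return np.array([np.mean(b) if b else c
                     for b, c in zip(buckets, centroid)])
```

Iterating `dba_update` until the inertia stops decreasing reproduces the stopping rule described above.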
Optimization approaches
Given the entire set of time series $\mathbb{U}$ and a subset of time series $\mathbb{X} \subset \mathbb{U}$, optimization approaches attempt to estimate the centroid of $\mathbb{X}$ from the definition of an optimization problem, which is generally expressed by Equation 1 given below:
$c = \arg\min_{x} \sum_{X \in \mathbb{X}} d_{dtw}(x, X)^2 \qquad (1)$
To our knowledge, the first attempt to use this kind of direct approach for time elastic centroid estimation was recently described in [16].
These authors (op. cit.) derived a solution of their original non-convex constrained optimization problem by integrating a temporal weighting of local sample alignments to highlight the temporal region of interest in a time series data set, thus penalizing the other temporal regions. Two time elastic measures were specifically addressed: i) a dynamic time warping measure between a time series and a weighted time series (representing the centroid estimate), and ii) an (indefinite) kernel DTW called DTAK [17]. Their results are very promising: although the number of parameters to optimize is linear in the size and dimensionality of the time series, the two-step gradient-based optimization process they derived is very computationally efficient and is shown to outperform state-of-the-art approaches on some challenging scalar and multivariate data sets. However, as numerous local optima exist in practice, the method is not guaranteed to converge toward the best possible centroid, which is in any case true of all the other approaches.
2.3 Discussion and motivation
According to the state of the art in time elastic centroid estimation, an exact centroid, if it exists, can be calculated by solving an NP-complete problem whose complexity is exponential in the number of time series to be averaged. Heuristic methods with increasing time complexity have been proposed since the early 2000s. Simple pairwise progressive aggregation is a less complex approach, but one that suffers from its dependence on initial conditions. Iterative aggregation is reputed to be more efficient, but entails a higher computational cost. It can be combined with ensemble methods or soft optimization such as genetic algorithms. The non-convex optimization approach has the merit of directly addressing the mathematical formulation of the centroid problem in a time elastic distance context. This approach nevertheless involves a higher complexity and must deal with a relatively large set of parameters to be optimized (the weights and the samples of the centroid). Its scalability is questionable, specifically for high-dimensional multivariate time series.
It should also be mentioned that some criticisms of these heuristic methods have been made in [18]. Among other drawbacks, the fact that DTW is not a metric (the triangle inequality is not satisfied) could explain the occurrence of unwanted behaviour such as centroid drift outside the cluster of time series to be averaged. It should also be borne in mind that keeping only a single best alignment (even though several may exist, not to mention the nearly optimal ones) can increase the dependence of the solution on the initial conditions and on the order in which the chosen method aggregates the time series, and can potentially affect the convergence rate.
In this study, we do not directly address the issue of time elastic centroid estimation from the DTW perspective, but rather from the point of view of the regularized dynamic time warping kernel (KDTW) [19]. This perspective allows us to consider centroid estimation as a preimage problem, which is in itself another optimization perspective. More importantly, the KDTW alignment matrices can be used to derive a probabilistic interpretation of the pairwise alignment of time series. This leads us to propose a robust interpolation scheme jointly along the time axis and in the sample space. We do not claim that using KDTW and its probabilistic interpretation can solve all or even any of the fundamental questions raised earlier: since the problem tackled here is NP-complete, an exact solution requires exponentially complex computations, and any heuristic method must handle numerous local minima. Our aim is to throw some new light on the problem as well as to obtain new quantitative results showing, in this difficult context, that the proposed alternative approach is worth considering.
2.4 Time elastic kernels and their regularization
Dynamic Time Warping (DTW) [2], [3], as defined in Eq. ?, can be recursively evaluated as
$d_{dtw}(X_1^i, Y_1^j) = \delta(X_i, Y_j) + \min \{ d_{dtw}(X_1^{i-1}, Y_1^{j}),\; d_{dtw}(X_1^{i-1}, Y_1^{j-1}),\; d_{dtw}(X_1^{i}, Y_1^{j-1}) \}$
where $\delta$ is the Euclidean distance (possibly the squared Euclidean distance) defined on $\mathbb{R}^k$ between the two samples of sequences $X$ and $Y$ taken at times $i$ and $j$, respectively.
Apart from the fact that the triangle inequality does not hold for the DTW distance measure, it is furthermore not possible to define a positive definite kernel directly from this distance. Hence, the optimization problem inherent to the learning of a kernel machine is no longer quadratic and, at least for some tasks, this can be a source of limitations.
Regularized DTW: recent studies [20], [19] have led to new guidelines for ensuring that kernels constructed from elastic measures such as DTW are positive definite. A simple instance of such a regularized kernel, derived from [19], can be expressed in the following form, which makes use of two recursive terms:
$K^{xy}_{i,j} = \frac{1}{3} e^{-\nu \delta^2(X_i, Y_j)} \left( K^{xy}_{i-1,j} + K^{xy}_{i,j-1} + K^{xy}_{i-1,j-1} \right)$
$K^{xx}_{i,j} = \frac{1}{3} \left( K^{xx}_{i-1,j}\, e^{-\nu \delta^2(X_i, Y_i)} + \Delta_{i,j}\, K^{xx}_{i-1,j-1}\, e^{-\nu \delta^2(X_i, Y_j)} + K^{xx}_{i,j-1}\, e^{-\nu \delta^2(X_j, Y_j)} \right)$
with $KDTW(X, Y) = K^{xy}_{n,m} + K^{xx}_{n,m}$, where $\Delta_{i,j}$ is the Kronecker symbol, $\nu$ is a stiffness parameter which weights the local contributions, i.e. the distances between locally aligned positions, and $\delta$ is a distance defined on $\mathbb{R}^k$.
The initialization is simply $K^{xy}_{0,0} = K^{xx}_{0,0} = 1$.
The main idea behind this regularization is to replace the min operator (which prevents the symmetrization of the kernel) by a summation operator ($\Sigma$). This allows us to consider not only the best possible alignment, but also all the best (or nearly the best) paths, by summing up their overall costs. The parameter $\nu$ is used to control what is termed the nearly-the-best alignments, penalizing alignments that are too far away from the optimal ones. This parameter can easily be optimized through cross-validation.
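The effect of substituting the min operator by a summation can be illustrated with a simplified one-term recursion (a sketch of the regularization idea only; the exact KDTW of [19] uses the two recursive terms given above):

```python
import numpy as np

def kdtw(x, y, nu=1.0):
    """Simplified sum-over-paths alignment kernel (illustration of the
    regularization idea, not the exact two-term KDTW recursion).

    DTW's min is replaced by a sum, so all alignment paths contribute,
    each weighted by the product of its local terms exp(-nu * cost).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    K = np.zeros((n + 1, m + 1))
    K[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.exp(-nu * (x[i - 1] - y[j - 1]) ** 2)
            # summation over the three predecessor cells instead of min
            K[i, j] = local * (K[i - 1, j] + K[i, j - 1] + K[i - 1, j - 1])
    return K[n, m]
```

The resulting similarity is symmetric, and a larger stiffness `nu` concentrates the path mass around the optimal alignments.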
3 KDTW centroid as a preimage problem
In this section, we tackle the centroid estimation question from a kernelized centroid point of view, the kernel of interest being KDTW.
The Moore-Aronszajn theorem [21] establishes that a reproducing kernel Hilbert space (RKHS) exists uniquely for every positive definite kernel, and vice versa. Let $\mathcal{H}$ be the RKHS associated with a kernel $K$ defined on a set $\mathcal{X}$, and let $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ be the inner product defined on $\mathcal{H}$. In addition, the representer property of the evaluation functional in $\mathcal{H}$ is expressed as: for any $f \in \mathcal{H}$ and any $x \in \mathcal{X}$, $f(x) = \langle f, K(x, \cdot) \rangle_{\mathcal{H}}$.
Denoting by $\Phi$ the map that assigns the kernel function $K(x, \cdot)$ to each input $x$, the reproducing property of the kernel implies that, for any $(x, y) \in \mathcal{X}^2$, $K(x, y) = \langle \Phi(x), \Phi(y) \rangle_{\mathcal{H}}$.
Furthermore, $\|\Phi(x) - \Phi(y)\|^2_{\mathcal{H}}$ is the generalization of the squared Euclidean distance to the feature space $\mathcal{H}$, and it can be expressed in kernel terms as $\|\Phi(x) - \Phi(y)\|^2_{\mathcal{H}} = K(x,x) - 2K(x,y) + K(y,y)$ (the so-called kernel trick).
Finally, the representer theorem [22] states that any function $f$ of an RKHS minimizing a regularized cost functional of the form
$\sum_{p} L(y_p, f(x_p)) + g(\|f\|_{\mathcal{H}})$
with predicted output $f(x_p)$ for input $x_p$ and desired output $y_p$, where $g$ is a strictly monotonically increasing function on $[0, \infty)$, is equivalent to a kernel expansion expressed in terms of the available data: $f(\cdot) = \sum_p \alpha_p K(x_p, \cdot)$ (Equation 4).
Hence, a direct definition of the kernelized centroid of the set $\mathbb{X} = \{x_p\}_{p=1..N}$, expressed in the RKHS feature space $\mathcal{H}$ associated with kernel $K$, can be written as:
$\bar{\Phi} = \arg\min_{h \in \mathcal{H}} \sum_{p=1}^{N} \|h - \Phi(x_p)\|^2_{\mathcal{H}} \qquad (5)$
The representer theorem applies, and $\bar{\Phi}$ thus takes the form given in Equation 4, which allows us to rewrite Equation 5 as follows:
$\bar{\alpha} = \arg\min_{\alpha \in \mathbb{R}^N} \sum_{p=1}^{N} \Big( \sum_{q,r} \alpha_q \alpha_r K(x_q, x_r) - 2 \sum_{q} \alpha_q K(x_q, x_p) + K(x_p, x_p) \Big) \qquad (6)$
Unfortunately, while the kernelized centroid is related to a well-defined quadratic optimization problem in the RKHS space (Equation 6), it is an ill-posed problem in the set $\mathcal{X}$. This is known as the preimage problem, since the preimage of $\bar{\Phi}$ might not exist. Instead, we seek the best approximation, namely the element $\bar{x} \in \mathcal{X}$ whose map $\Phi(\bar{x})$ is as close as possible to $\bar{\Phi}$, as illustrated in Fig. ?.
Hence, if we remove the terms that do not depend upon $x$, the optimization problem becomes:
$\bar{x} = \arg\min_{x \in \mathcal{X}} \Big( K(x, x) - \frac{2}{N} \sum_{p=1}^{N} K(x, x_p) \Big) \qquad (7)$
For KDTW, this non-convex optimization problem cannot be straightforwardly addressed using gradient-based approaches, mainly because the derivative cannot be determined analytically. Moreover, the number of variables (linear in the length of the time series and in the dimensionality of each sample) is generally high, so this approach often encounters combinatorial difficulties related to the number of local minima. A derivative-free method can nevertheless be applied for local modelling of the functional to be optimized. In an attempt to carry out such a preimage formulation to estimate the time elastic centroid of a set of time series, we applied the state-of-the-art BOBYQA algorithm developed for bound-constrained optimization without derivatives [23]. Fig. ? and Fig. ? give the centroid estimations for each category of the CBF and Trace datasets, respectively [24]. In the top left diagram of the figures, the values of the function to be minimized are plotted against the number of iterations. The optimization process is initialized using the medoid of each category. We observe that the required number of iterations is quite high and depends on the number of variables. For the CBF dataset, each time series is made up of 128 samples, while there are 275 samples per series for the Trace dataset. The convergence rate is roughly ten times slower for the Trace dataset compared with the CBF dataset, mainly because the KDTW complexity is quadratic in the length of the time series. The iteration cost becomes somewhat prohibitive for long time series or large time series datasets. Although this approach could possibly be optimized, the parameters need to be carefully set up (basically, the definition of the trust region) and, in any case, as stated above, the optimum provided in this way remains only an estimate of the centroid that is sought.
Finally, note that the functional only starts to decrease after the number of iterations (in this case, twice the length of the time series) initially required for the local estimation of the functional has been reached.
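A derivative-free minimization of the preimage objective of Equation 7 can be sketched as follows (illustrative only: a naive coordinate search stands in for BOBYQA, and a simplified one-term sum-over-paths kernel stands in for the exact KDTW):

```python
import numpy as np

def kdtw(x, y, nu=1.0):
    """Simplified sum-over-paths kernel (stand-in for KDTW)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    K = np.zeros((n + 1, m + 1))
    K[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = np.exp(-nu * (x[i - 1] - y[j - 1]) ** 2)
            K[i, j] = local * (K[i - 1, j] + K[i, j - 1] + K[i - 1, j - 1])
    return K[n, m]

def preimage_centroid(dataset, nu=1.0, n_sweeps=5, step=0.5):
    """Derivative-free preimage sketch: minimize the Eq. 7 objective
    K(x,x) - (2/N) * sum_p K(x, x_p) by coordinate search, starting
    from the Euclidean mean of the set."""
    X = [np.asarray(s, float) for s in dataset]
    def objective(x):
        return kdtw(x, x, nu) - 2.0 / len(X) * sum(kdtw(x, s, nu) for s in X)
    x = np.mean(X, axis=0)
    best = objective(x)
    for _ in range(n_sweeps):
        for i in range(len(x)):
            for delta in (step, -step):
                cand = x.copy()
                cand[i] += delta
                val = objective(cand)
                if val < best:          # accept only improving moves
                    x, best = cand, val
        step *= 0.5                     # shrink the search step each sweep
    return x
```

Even on this toy scale, each objective evaluation costs $O(N n m)$, which illustrates why the iteration cost of the preimage approach grows quickly with the series length.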
4 Probabilistic interpretation of time elastic kernel alignment matrices
In this section, we consider the recursive term $K^{xy}$ used in Equation 3. When evaluating the similarity between two time series $X$ and $Y$ with respective lengths $n$ and $m$, this recursion allows the construction of an alignment matrix $\mathcal{K}$ with entries $\mathcal{K}(i,j)$, $1 \le i \le n$ and $1 \le j \le m$. The cell at location $(i,j)$ contains the summation of the global costs of all alignment paths, as defined in Definition ?, that connect cell $(1,1)$ with cell $(i,j)$. For any alignment path $W$, the global cost is expressed as:
$cost(W) = \prod_{(i,j) \in W} e^{-\nu \delta^2(X_i, Y_j)}$
i.e. the product along the path of the local alignment costs. We can give a probabilistic interpretation of these local costs $e^{-\nu \delta^2(X_i, Y_j)}$: basically, we can assume that they correspond (up to a multiplicative scalar constant) to the local a priori probability of aligning sample $X_i$ with sample $Y_j$. Under this assumption, we eventually attach a probability distribution to the set of all alignment paths, with $cost(W)$ corresponding (up to a multiplicative scalar constant) to the probability attached to alignment path $W$.
Hence, cell $(i,j)$ of matrix $\mathcal{K}$ contains the sum of the probabilities (up to a multiplicative scalar constant) of the paths that connect cell $(1,1)$ to cell $(i,j)$.
Similarly, if, instead of $X$ and $Y$, we evaluate the similarity between $\bar{X}$ and $\bar{Y}$, derived from $X$ and $Y$ by reversing the temporal indices, we obtain an alignment matrix $\bar{\mathcal{K}}$ whose cell $(i,j)$ contains the sum of the probabilities (to within a multiplicative scalar constant) of the paths that connect cell $(n,m)$ to cell $(n-i+1, m-j+1)$.
Finally, properly multiplying the cells of $\mathcal{K}$ with the cells of $\bar{\mathcal{K}}$ yields the Alignment Matrix Average (AMA) defined as:
$AMA(i,j) \propto \mathcal{K}(i,j) \cdot \bar{\mathcal{K}}(n-i+1, m-j+1) \qquad (8)$
whose cell $(i,j)$ contains the sum of the probabilities (up to a normalization constant) of the paths that connect cell $(1,1)$ to cell $(n,m)$ while going through cell $(i,j)$.
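The forward-backward construction of such a matrix can be sketched as follows (illustrative: a one-term sum-over-paths recursion; the backward pass excludes the current cell's local term so that each path's probability mass is counted exactly once, and the result is normalized to sum to one):

```python
import numpy as np

def ama(x, y, nu=1.0):
    """Forward-backward sketch of the Alignment Matrix Average: cell
    (i, j) is proportional to the total probability mass of alignment
    paths from the first to the last cell that pass through (i, j)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    local = np.exp(-nu * np.subtract.outer(x, y) ** 2)  # local alignment probabilities
    F = np.zeros((n, m))  # forward: path mass from (0, 0) to (i, j)
    for i in range(n):
        for j in range(m):
            acc = 1.0 if i == 0 and j == 0 else 0.0
            if i > 0:
                acc += F[i - 1, j]
            if j > 0:
                acc += F[i, j - 1]
            if i > 0 and j > 0:
                acc += F[i - 1, j - 1]
            F[i, j] = local[i, j] * acc
    B = np.zeros((n, m))  # backward: mass from (i, j) to the end, local(i, j) excluded
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if i == n - 1 and j == m - 1:
                B[i, j] = 1.0
                continue
            acc = 0.0
            if i + 1 < n:
                acc += local[i + 1, j] * B[i + 1, j]
            if j + 1 < m:
                acc += local[i, j + 1] * B[i, j + 1]
            if i + 1 < n and j + 1 < m:
                acc += local[i + 1, j + 1] * B[i + 1, j + 1]
            B[i, j] = acc
    M = F * B
    return M / M.sum()  # normalize to a probability distribution over cells
```

Since every admissible path passes through both corner cells, the first and last cells of the resulting matrix carry equal mass, which is a convenient sanity check.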
From this path probability distribution, we can now derive an alignment probability distribution between the samples of $X$ and the samples of $Y$ as follows:
For all $i$, the probability of aligning sample $X_i$ is $P(X_i) = 1$; all samples need to be aligned.
Similarly, for all $j$, the probability of aligning sample $Y_j$ is $P(Y_j) = 1$.
The probability of aligning sample $X_i$ with sample $Y_j$ is $P(X_i, Y_j) = P(Y_j | X_i) P(X_i)$, where $P(Y_j | X_i)$ is the probability that sample $Y_j$ is aligned with sample $X_i$ given that the alignment process visits sample $X_i$. The estimate of $P(Y_j | X_i)$ is obtained by using matrix $AMA$:
$P(Y_j | X_i) = \frac{AMA(i,j)}{\sum_{j'} AMA(i,j')}$
Furthermore, the probability of aligning sample $Y_j$ with sample $X_i$ is also $P(X_i, Y_j) = P(X_i | Y_j) P(Y_j)$. Similarly, the estimate of $P(X_i | Y_j)$ is obtained by using matrix $AMA$:
$P(X_i | Y_j) = \frac{AMA(i,j)}{\sum_{i'} AMA(i',j)}$
Note that the normalization constant mentioned above is eliminated.
Since $P(X_i) = P(Y_j) = 1$, we can finally estimate the probability of aligning sample $X_i$ with sample $Y_j$ by combining the two previous estimates:
$P(X_i, Y_j) \approx \frac{1}{2} \left( \frac{AMA(i,j)}{\sum_{j'} AMA(i,j')} + \frac{AMA(i,j)}{\sum_{i'} AMA(i',j)} \right) \qquad (10)$
Equation 10 forms the basis of our pairwise time elastic time series averaging algorithm given below.
As an example, Fig. ? presents the AMA matrix corresponding to the alignment of a positive half-wave with a sine wave. The three potential alignment paths are clearly identified in light blue and red.
5 Time elastic centroid based on the alignment matrix
Based on the structure of the KDTW kernel and the AMA matrix, and by adapting the so-called DTW Barycenter Averaging (DBA) method developed by [12], [14], [13], we first present the Kernel DTW Barycenter Averaging (KDBA) algorithm for estimating a time elastic centroid of a set of time series according to an iterative agglomerative procedure, as shown in Fig. ?. Secondly, we detail the concept of a time elastic average of a pair of time series (KDTW-PWA), and then develop the progressive agglomerative heuristic presented in Fig. ? that uses KDTW-PWA to estimate another kind of time elastic centroid (KDTWC1) for a set of time series of any cardinality.
5.1 KDTW centroid of a set of time series based on the KDBA algorithm
Following the DBA algorithmic approach [12], [13], we present here the development of our kernelized version, called KDBA. KDBA directly applies the definition of the alignment matrix average (AMA), as given in Equation 8, and its probabilistic interpretation (Equation 10).
Let us consider a set of time series $\{Y^{(p)}\}_{p=1..N}$ and a reference time series $r$. Let $n$ and $m_p$ be the lengths of $r$ and $Y^{(p)}$, respectively. The alignment probability $P_p(i,j)$, with $1 \le i \le n$ and $1 \le j \le m_p$, is obtained from the AMA matrix resulting from the alignment of $r$ with $Y^{(p)}$, according to Equation 10. Algorithm ? computes an average time series $c$ according to the following equation:
$c(i) = \frac{\sum_{p} \sum_{j} P_p(i,j)\, Y^{(p)}(j)}{\sum_{p} \sum_{j} P_p(i,j)}$
Note that the average time series produced by Algorithm ? has the same length as the reference time series $r$.
Algorithm ? can be refined by iterating until no further improvement is obtained [14]. An improvement is observed when the sum of the distances (resp. similarities) between the current average and the new pairwise average provided by KDBA is lowered (resp. increased). Algorithm ? implements this iterative strategy, which will necessarily find a local minimum or will stop when a maximum number of iterations has been reached.
5.2 KDTW average of a pair of time series (KDTW-PWA)
Similarly to DBA, the KDBA algorithm averages a set of time series in the sample space but not along the time axis. Basically, let us suppose we are averaging two triangular-shaped time series such as those represented by the blue crosses and black dots in Fig. ?. When using the DBA or KDBA algorithms with one of the two time series acting as the reference, the calculated average would be the reference distribution itself. However, we would also expect to average the time shift between the two series, thus obtaining the distribution indicated by the red dots in Fig. ?. This is precisely our main motivation for deriving the following pairwise averaging (KDTW-PWA) algorithm, designed to average a pair of time series in the sample space but also along the time axis.
Algorithm ? provides the KDTW-PWA average of the two time series $X$ and $Y$ according to Equation 11, which defines each centroid sample as the expectation, under the alignment probability distribution of Equation 10, of the averaged aligned samples $(X_i + Y_j)/2$, occurring at the averaged timestamp $(i+j)/2$.
As the time indices are discrete (integer values), the time averaging is smoothed between the floor and ceiling integer values, using the smoothing coefficient (line 17 of the algorithm).
Thus, KDTW-PWA jointly averages the sample values of the two time series and their time locations. Equation 11 allows us to interpret the centroid of a pair of time series as the mathematical expectation of the alignment of the two sequences of samples.
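The joint averaging principle can be sketched as follows (illustrative: the alignment probabilities come from a simplified one-term forward-backward recursion, every cell contributes its averaged sample at its averaged timestamp, and the continuous-time smoothing of the actual algorithm is replaced by simple rounding):

```python
import numpy as np

def ama(x, y, nu=1.0):
    """Normalized alignment probability matrix (forward-backward sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    local = np.exp(-nu * np.subtract.outer(x, y) ** 2)
    F = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            acc = 1.0 if i == 0 and j == 0 else 0.0
            if i > 0:
                acc += F[i - 1, j]
            if j > 0:
                acc += F[i, j - 1]
            if i > 0 and j > 0:
                acc += F[i - 1, j - 1]
            F[i, j] = local[i, j] * acc
    B = np.zeros((n, m))
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if i == n - 1 and j == m - 1:
                B[i, j] = 1.0
                continue
            acc = 0.0
            if i + 1 < n:
                acc += local[i + 1, j] * B[i + 1, j]
            if j + 1 < m:
                acc += local[i, j + 1] * B[i, j + 1]
            if i + 1 < n and j + 1 < m:
                acc += local[i + 1, j + 1] * B[i + 1, j + 1]
            B[i, j] = acc
    M = F * B
    return M / M.sum()

def kdtw_pwa(x, y, nu=1.0):
    """Pairwise average sketch: cell (i, j) contributes the averaged
    sample (x[i] + y[j]) / 2 at the averaged timestamp (i + j) / 2,
    weighted by the alignment probability."""
    P = ama(x, y, nu)
    n, m = P.shape
    length = (n + m) // 2
    num, den = np.zeros(length), np.zeros(length)
    for i in range(n):
        for j in range(m):
            t = min(int(round((i + j) / 2.0)), length - 1)
            num[t] += P[i, j] * 0.5 * (x[i] + y[j])
            den[t] += P[i, j]
    return num / np.where(den > 0, den, 1.0)
```

By construction the operation is symmetric in its two arguments, and averaging a series with itself returns the series unchanged.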
As an example, the centroid corresponding to the pairwise alignment of the sine-wave experiment depicted in Fig. ? is presented in Fig. ?. Notice that, in the centroid, the negative half-waves of the sine wave have been filtered out. This is because the negative half-waves do not match the positive half-wave that is aligned with the sine wave.
In Fig. ?, we present a very simple experiment that consists of averaging two identical triangular-shaped time series (on the left of the figure) and two time series with identical triangular shapes but shifted in time (on the right). At the bottom of the figure, the corresponding AMA matrices are presented. The KDTW-PWA distributions, presented in red, are multiplied by a factor of two to facilitate reading of the figure. We can see that, in both situations, the centroid is precisely located at the correct averaged time of occurrence of the two time series, whether or not they are shifted in time. The most likely alignment areas of the AMA matrices are shown in red and the less likely alignment areas in blue. The time shift is clearly visible in the right-hand figure.
[Table 1: centroid estimates obtained with the DBA, iKDBA and pKDTW-PWA algorithms]
5.3 KDTW centroid of a set of time series based on KDTW-PWA
To average a larger set of time series using the pairwise average KDTW-PWA, we simply adopt the progressive agglomerative approach presented in Fig. ?. This heuristic approach, detailed in Algorithm ?, has $O(N)$ complexity, $N$ being the size of the considered set of time series.
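The progressive agglomeration itself is independent of the pairwise operator and can be sketched generically; `pairwise_avg` stands for KDTW-PWA (exercised below with a plain arithmetic mean for illustration only):

```python
import numpy as np

def progressive_centroid(dataset, pairwise_avg):
    """Progressive agglomerative centroid sketch: combine the series
    pairwise, level by level along a binary tree, so that only O(N)
    pairwise averages are computed in total."""
    level = list(dataset)
    while len(level) > 1:
        nxt = [pairwise_avg(level[k], level[k + 1])
               for k in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd count: carry the last series up
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Usage: `progressive_centroid(series_list, kdtw_pwa)` would realize the pKDTW-PWA scheme, while the tree ordering (random, or dendrogram-driven as in Section 2.2) is controlled by the order of `dataset`.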
The figures presented in Table 1 compare the centroid estimates provided by the iterated DBA, iKDBA and pKDTW-PWA algorithms. For this experiment, DBA and iKDBA were iterated at most 20 times. Although the DBA and iKDBA estimates appear quite similar, the centroid estimates provided by the pKDTW-PWA algorithm are much smoother. This is a general property of the latter algorithm, which implements a time-averaging principle based on the time expectation of sample occurrences, thus somehow allowing it to filter noisy data. Note also that the DBA and iKDBA estimates for the CBF data set are close to the results provided by the preimage approach (Fig. ?).
6 Experimentation
The purpose of this experiment is to evaluate the effectiveness of the proposed time elastic averaging methods against a double baseline, namely k-medoid-based approaches and the DBA algorithm. The first baseline allows us to compare centroid-based with medoid-based approaches. The second baseline highlights the advantages we can expect from using positive definite elastic kernels instead of indefinite kernels such as DTW in the context of time series averaging. DBA is also currently considered a state-of-the-art method for averaging a set of sequences consistently with DTW.
For this purpose, we empirically evaluate the effectiveness of the methods using a first nearest centroid/medoid (1NC) classification task on a set of time series derived from widely diverse fields of application. The task consists of representing each category contained in a training data set by estimating its medoid or centroid and then evaluating the error rate of a 1NC classifier on an independent testing data set. Hence, the classification rule consists of assigning to the tested time series the category which corresponds to the closest (or most similar) medoid or centroid according to DTW or KDTW measures.
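The 1-NC evaluation protocol can be sketched as follows (the measure is passed in as a parameter; a plain squared Euclidean distance is used below for illustration, where DTW or a KDTW-derived dissimilarity would be used in the actual experiments):

```python
def one_nc_error_rate(centroids, labels, test_series, test_labels, dist):
    """First-nearest-centroid classification: each test series receives
    the label of the closest class representative under `dist`, and the
    error rate over the test set is returned."""
    errors = 0
    for x, y in zip(test_series, test_labels):
        k_best = min(range(len(centroids)),
                     key=lambda k: dist(x, centroids[k]))
        errors += (labels[k_best] != y)
    return errors / len(test_series)
```

The same routine covers the medoid baseline: the medoid of each training class is simply passed in place of the estimated centroid.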
In [25], a nice generalized k-NC task is described. The authors demonstrate that, by selecting the appropriate number of centroids (using DBA and k-means), they achieve, without loss of accuracy, a 70% speed-up on average compared to the original k-nearest neighbor task. Although, in general, classification accuracy is improved when several centroids are used to represent the TRAIN datasets, our main purpose is to highlight and amplify the discrimination between time series averaging methods: this is why we stick here with the 1-NC task.
The DBA and iKDBA iterative centroid methods are iterated at most 20 times and yield local estimates of the centroid. The pKDTW-PWA progressive agglomerative centroid method is only processed once, and hence is roughly 20 times faster than iKDBA and about 10 times faster than DBA.
A collection of 45 data sets is used to assess the proposed algorithms. The collection includes synthetic and real data sets, as well as univariate and multivariate time series. These data sets are distributed as follows:
42 of these data sets are available at the UCR repository [24]. Basically, we used all the data sets except for StarLightCurves, NonInvasive Fetal ECG Thorax1 and NonInvasive Fetal ECG Thorax2. Although these last three data sets are still tractable, their computational cost is high because of their size and the length of the time series they contain. All the data sets are composed of scalar time series.
One data set, uWaveGestureLibrary_3D was constructed from the uWaveGestureLibrary_XYZ scalar data sets to compose a new set of multivariate (3D) time series.
One data set, CharTrajTT, is available at the UCI Repository [26] under the name Character Trajectories Data Set. This data set contains multivariate (3D) time series and is divided into two equal-sized data sets (TRAIN and TEST) for the experiment.
The last data set, PWM2, which stands for Pulse Width Modulation [27], was specifically defined to demonstrate a weakness in dynamic time warping (DTW) pseudo distance. This data set is composed of artificial scalar time series.
For each dataset, a training subset (TRAIN) is defined as well as an independent testing subset (TEST). We use the training sets to extract single medoids or centroid estimates for each of the categories defined in the data sets.
Furthermore, for KDTW, iKDBA and pKDTW-PWA, the stiffness parameter $\nu$ is optimized using a leave-one-out (LOO) procedure carried out on the TRAIN data sets. The value is selected within a discrete set of candidate values. The value that minimizes the LOO classification error rate on the TRAIN data is then used to provide the error rates that are estimated on the TEST data.
The classification results are given in Table ?. It can be seen from this experiment that:
Centroid-based methods outperform medoid-based methods: DBA yields lower error rates than DTW, as do iKDBA and pKDTWPWA compared to KDTW.
iKDBA outperforms DBA: under the same experimental conditions (a maximum of 20 iterations), the kernelized version of the DTW measure leads to better classification accuracy. To some extent, this confirms previous results obtained for SVM classification [19] on such kinds of datasets.
pKDTWPWA outperforms iKDBA: this result suggests that joint averaging in the sample space and along the time axis improves classification accuracy. As pKDTWPWA provides a centroid estimate in a single agglomerative pass, we conjecture that this method converges faster toward a satisfactory centroid candidate.
The average rankings for all five tested methods, which support our preliminary conclusion, are given at the bottom of Table ?.
Following the study of [28] on statistical tests available to evaluate the significance of differences in error rate between classifiers over multiple data sets, we conducted a Friedman significance test, a non-parametric counterpart of the well-known ANOVA. This test ranks the algorithms for each data set separately: the best-performing algorithm is given rank 1, the second best rank 2, and so on.
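The ranking step and the Friedman statistic can be sketched as below. The error rates are made up purely for illustration (they are not the paper's results), and the statistic follows the standard Friedman chi-square formulation; in practice a library routine such as `scipy.stats.friedmanchisquare` could be used instead.

```python
import numpy as np

# Rows = data sets, columns = methods; lower error rate is better.
# These error rates are invented for the example.
errors = np.array([
    [0.20, 0.18, 0.15, 0.14, 0.13],
    [0.30, 0.28, 0.25, 0.22, 0.21],
    [0.10, 0.12, 0.08, 0.07, 0.06],
    [0.25, 0.24, 0.20, 0.19, 0.18],
])
n, k = errors.shape  # n data sets, k algorithms

# Per-data-set ranks: best method gets rank 1
# (double argsort yields 0-based ranks when there are no ties).
ranks = errors.argsort(axis=1).argsort(axis=1) + 1
avg_ranks = ranks.mean(axis=0)

# Friedman chi-square statistic, to be compared against a chi-square
# distribution with k - 1 degrees of freedom.
chi2_f = (12 * n / (k * (k + 1))) * (np.sum(avg_ranks ** 2) - k * (k + 1) ** 2 / 4)
```

A large `chi2_f` (relative to the chi-square critical value) leads to rejecting the null hypothesis that all methods perform equally.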
According to this test, the null hypothesis is rejected (with a ). Post-hoc tests can then be carried out to compare the algorithms pairwise using the Wilcoxon-Nemenyi-McDonald-Thompson test [29]. For this purpose, we use the R code provided by [30] to generate the parallel coordinate plots and box plots presented in Fig. ? as well as the results reported in Table 2.

Algorithm pair          P-value
DBA vs. DTW             1.98e-05
KDTW vs. DTW            2.99e-03
iKDBA vs. DTW           1.38e-10
pKDTWPWA vs. DTW        1.09e-12
KDTW vs. DBA            7.84e-01
iKDBA vs. DBA           3.10e-01
pKDTWPWA vs. DBA        2.60e-02
iKDBA vs. KDTW          1.90e-02
pKDTWPWA vs. KDTW       4.07e-04
pKDTWPWA vs. iKDBA      8.36e-01
Table 2 reports the P-values for each pair of tested algorithms. This post-hoc analysis partially confirms our previous analysis of the classification results. If we consider that the null hypothesis is rejected when the P-value is less than , the post-hoc analysis shows that centroid-based approaches perform significantly better than medoid-based approaches. Furthermore, KDTW appears to be significantly better than DTW.
In addition, pKDTWPWA is evaluated as significantly better than DBA, but not significantly better than iKDBA, in this experiment. Note also that DBA is not shown to perform significantly better than KDTW.
This post-hoc analysis is summarized in Fig. ?, which shows the ranking graph for the five algorithms tested in our experiments.
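The threshold decisions above can be read directly off Table 2. The snippet below does exactly that, using the P-values reported in the table; the significance level `alpha = 0.05` is an assumption for illustration, since the exact threshold is not stated in this excerpt.

```python
# P-values copied from Table 2 (Wilcoxon-Nemenyi-McDonald-Thompson post-hoc test).
pvalues = {
    ("DBA", "DTW"): 1.98e-05,
    ("KDTW", "DTW"): 2.99e-03,
    ("iKDBA", "DTW"): 1.38e-10,
    ("pKDTWPWA", "DTW"): 1.09e-12,
    ("KDTW", "DBA"): 7.84e-01,
    ("iKDBA", "DBA"): 3.10e-01,
    ("pKDTWPWA", "DBA"): 2.60e-02,
    ("iKDBA", "KDTW"): 1.90e-02,
    ("pKDTWPWA", "KDTW"): 4.07e-04,
    ("pKDTWPWA", "iKDBA"): 8.36e-01,
}

alpha = 0.05  # assumed significance level for this illustration
significant = {pair for pair, p in pvalues.items() if p < alpha}
```

With this threshold, all comparisons against DTW are significant, pKDTWPWA vs. DBA and the two comparisons against KDTW are significant, while KDTW vs. DBA, iKDBA vs. DBA and pKDTWPWA vs. iKDBA are not.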
7 Conclusion
In this paper, we address the reputedly difficult problem of averaging a set of time series in the context of a time elastic distance measure such as Dynamic Time Warping. The new perspective provided by the kernelization of the elastic distance first allows us to cast the averaging of time series as a preimage problem. The latter is unfortunately an ill-posed, non-convex problem that may suffer from a combinatorial number of local optima when dealing with long multidimensional time series. Furthermore, this kind of preimage problem can only be solved using gradient-free optimization procedures, which are computationally very costly since they require extensive function evaluations.
However, this kernelization also allows pairwise kernel alignment matrices to be reinterpreted as probability distributions over alignment paths. Based on this reinterpretation, we propose two distinct algorithms, iKDBA and pKDTWPWA, which rely on iterative and progressive agglomerative heuristic methods, respectively, to compute approximate solutions to the multi-alignment of time series.
We present an extensive experiment carried out on synthetic and real data sets, mostly containing univariate but also some multivariate time series. Our results show that centroid-based methods significantly outperform medoid-based methods in the context of a first nearest neighbour classification task. Most strikingly, the pKDTWPWA algorithm, which integrates joint averaging in the sample space and along the time axis, is significantly better than the state-of-the-art DBA algorithm, with a potentially lower computational cost. Indeed, the simple one-pass progressive agglomerative heuristic procedure used in the pKDTWPWA algorithm can be further optimized.
Acknowledgments
The authors thank the French Ministry of Research, the Brittany Region, the General Council of Morbihan and the European Regional Development Fund that partially funded this research. The authors also thank the promoters of the UCR and UCI data repositories for providing the time series data sets used in this study.
References
[1] M. Fréchet, Sur quelques points du calcul fonctionnel. Thèse, Faculté des sciences de Paris, 1906.
[2] V. M. Velichko and N. G. Zagoruyko, "Automatic recognition of 200 words," International Journal of Man-Machine Studies, vol. 2, pp. 223–234, 1970.
[3] H. Sakoe and S. Chiba, "A dynamic programming approach to continuous speech recognition," in Proceedings of the 7th International Congress on Acoustics, 1971, pp. 65–68.
[4] L. Chen and R. Ng, "On the marriage of lp-norms and edit distance," in Proceedings of the Thirtieth International Conference on Very Large Data Bases (VLDB '04). VLDB Endowment, September 2004, pp. 792–803.
[5] P.-F. Marteau, "Time warp edit distance with stiffness adjustment for time series matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 306–318, Feb. 2009.
[6] K. H. Fasman and S. L. Salzberg, "An introduction to biological sequence analysis," in Computational Methods in Molecular Biology, S. L. Salzberg, D. B. Searls, and S. Kasif, Eds. Elsevier, 1998, pp. 21–42.
[7] L. Wang and T. Jiang, "On the complexity of multiple sequence alignment," Journal of Computational Biology, vol. 1, no. 4, pp. 337–348, 1994.
[8] W. Just, "Computational complexity of multiple sequence alignment with SP-score," Journal of Computational Biology, vol. 8, pp. 615–623, 1999.
[9] H. Carrillo and D. Lipman, "The multiple sequence alignment problem in biology," SIAM J. Appl. Math., vol. 48, no. 5, pp. 1073–1082, Oct. 1988. [Online]. Available: http://dx.doi.org/10.1137/0148063
[10] L. Gupta, D. Molfese, R. Tammana, and P. Simos, "Nonlinear alignment and averaging for estimating the evoked potential," IEEE Transactions on Biomedical Engineering, vol. 43, no. 4, pp. 348–356, April 1996.
[11] V. Niennattrakul and C. Ratanamahatana, "Shape averaging under time warping," in 6th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2009), vol. 02, May 2009, pp. 626–629.
[12] W. Abdulla, D. Chow, and G. Sin, "Cross-words reference template for DTW-based speech recognition systems," in TENCON 2003: Conference on Convergent Technologies for the Asia-Pacific Region, vol. 4, Oct. 2003, pp. 1576–1579.
[13] V. Hautamaki, P. Nykanen, and P. Franti, "Time-series clustering by approximate prototypes," in 19th International Conference on Pattern Recognition (ICPR 2008), Dec. 2008, pp. 1–4.
[14] F. Petitjean, A. Ketterlin, and P. Gançarski, "A global averaging method for dynamic time warping, with applications to clustering," Pattern Recognition, vol. 44, no. 3, pp. 678–693, Mar. 2011. [Online]. Available: http://dx.doi.org/10.1016/j.patcog.2010.09.013
[15] F. Petitjean and P. Gançarski, "Summarizing a set of time series by averaging: From Steiner sequence to compact multiple alignment," Theoretical Computer Science, vol. 414, no. 1, pp. 76–91, Jan. 2012.
[16] S. Soheily-Khah, A. Douzal-Chouakria, and E. Gaussier, "Time series centroid estimation under weighted and kernel dynamic time warping," Personal communication (under submission), 2014.
[17] H. Shimodaira, K. I. Noma, M. Nakai, and S. Sagayama, "Dynamic time-alignment kernel in support vector machine," in Advances in Neural Information Processing Systems 14, T. G. Dietterich, S. Becker, and Z. Ghahramani, Eds. Cambridge, MA: MIT Press, 2002.
[18] V. Niennattrakul and C. Ratanamahatana, "Inaccuracies of shape averaging method using dynamic time warping for time series data," in Computational Science – ICCS 2007, ser. Lecture Notes in Computer Science, Y. Shi, G. van Albada, J. Dongarra, and P. Sloot, Eds. Springer Berlin Heidelberg, 2007, vol. 4487, pp. 513–520. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-72584-8_68
[19] P.-F. Marteau and S. Gibet, "On recursive edit distance kernels with application to time series classification," IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, Jun. 2014. [Online]. Available: http://hal.inria.fr/hal-00486916
[20] M. Cuturi, J.-P. Vert, O. Birkenes, and T. Matsui, "A kernel for time series based on global alignments," in IEEE ICASSP 2007, vol. 2, April 2007, pp. II-413–II-416.
[21] N. Aronszajn, "Theory of reproducing kernels," Transactions of the American Mathematical Society, vol. 68, 1950.
[22] B. Schölkopf, R. Herbrich, and A. J. Smola, "A generalized representer theorem," in Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory (COLT '01/EuroCOLT '01). London, UK: Springer-Verlag, 2001, pp. 416–426. [Online]. Available: http://dl.acm.org/citation.cfm?id=648300.755324
[23] M. J. D. Powell, "The BOBYQA algorithm for bound constrained optimization without derivatives," Aug. 2009.
[24] E. J. Keogh, X. Xi, L. Wei, and C. Ratanamahatana, "The UCR time series classification/clustering datasets," 2006. [Online]. Available: http://www.cs.ucr.edu/~eamonn/time_series_data/
[25] F. Petitjean, G. Forestier, G. Webb, A. Nicholson, Y. Chen, and E. Keogh, "Dynamic time warping averaging of time series allows faster and more accurate classification," in Proceedings of the 14th IEEE International Conference on Data Mining, 2014, pp. 470–479. [Online]. Available: http://dx.doi.org/10.1109/ICDM.2014.27
[26] M. Lichman, "UCI machine learning repository," 2013. [Online]. Available: http://archive.ics.uci.edu/ml
[27] P.-F. Marteau, "Pulse width modulation data sets," 2007. [Online]. Available: http://people.irisa.fr/Pierre-Francois.Marteau/PWM/
[28] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1–30, Dec. 2006. [Online]. Available: http://dl.acm.org/citation.cfm?id=1248547.1248548
[29] M. Hollander and D. Wolfe, Nonparametric Statistical Methods, ser. Wiley Series in Probability and Statistics. Wiley, 1999. [Online]. Available: https://books.google.fr/books?id=RJAQAQAAIAAJ
[30] T. Galili, "R code for the Friedman test post hoc analysis," February 2010. [Online]. Available: http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/