Space-efficient Feature Maps for
String Alignment Kernels
Abstract
String kernels are attractive data analysis tools for analyzing string data. Among them, alignment kernels are known for their high prediction accuracy in string classification when used in combination with SVM in various applications. However, alignment kernels have a crucial drawback: they scale poorly due to their quadratic computation complexity in the number of input strings, which limits large-scale applications in practice. We address this problem by presenting the first approximation of string alignment kernels, which we call space-efficient feature maps for edit distance with moves (SFMEDM), by leveraging a metric embedding named edit sensitive parsing (ESP) and feature maps (FMs) of random Fourier features (RFFs) for large-scale string analyses. The original FMs for RFFs consume a huge amount of memory proportional to the dimension d of the input vectors and the dimension D of the output vectors, which prohibits their large-scale application. We present novel space-efficient feature maps (SFMs) of RFFs that reduce the space from the O(dD) of the original FMs to O(d), with a theoretical guarantee in the form of concentration bounds. We experimentally test SFMEDM on its ability to learn SVM for large-scale string classification with various massive string data, and we demonstrate the superior performance of SFMEDM with respect to prediction accuracy, scalability and computational efficiency.
I Introduction
Massive string data are now ubiquitous throughout research and industry, in areas such as biology, chemistry, natural language processing and data science. For example, e-commerce companies face a serious problem in analyzing huge datasets of user reviews, question answers and purchasing histories [11, 22]. In biology, homology detection from huge collections of protein and DNA sequences is an important part of their functional analyses [25]. There is therefore a strong need to develop powerful methods to make the best use of massive string data on a large scale.
Kernel methods [12] are attractive data analysis tools because they can approximate any (possibly non-linear) function or decision boundary well given enough training data. In kernel methods, a kernel matrix (a.k.a. Gram matrix) is computed from the training data, and non-linear support vector machines (SVM) are trained on the matrix. Although kernel methods achieve high prediction accuracy for various tasks such as classification and regression, they scale poorly due to a quadratic complexity in the number of training data [13, 9]. In addition, classifying a new sample requires, in the worst case, time linear in the number of training data, which limits large-scale applications of kernel methods in practice.
String kernels [10] are kernel functions that operate on strings, and a variety of string kernels using string similarity measures have been proposed [17, 25, 6, 20]. Among state-of-the-art string kernels, alignment kernels are known for high prediction accuracy in string classification tasks, such as remote homology detection for protein sequences [25] and time-series classification [34, 6], when used in combination with SVM. However, alignment kernels have a crucial drawback: as in other kernel methods, they scale poorly due to their quadratic computation complexity in the number of training data. Thus, an important open challenge for large-scale analyses of string data is to develop a kernel approximation for string alignment kernels.
Kernel approximations using feature maps (FMs) have been proposed to solve the scalability issues of kernel methods. FMs project the training data into low-dimensional vectors such that the kernel value (similarity) between each pair of training examples is approximately equal to the inner product of the corresponding pair of low-dimensional vectors. Linear SVM are then trained on the projected vectors, which significantly improves the scalability while preserving prediction accuracy. A variety of kernel approximations using FMs have been proposed, e.g., for Jaccard kernels [18], polynomial kernels [23] and min-max kernels [19], and random Fourier features (RFFs) [24] approximate shift-invariant kernels (e.g., Laplacian and radial basis function (RBF) kernels). However, approximations for string alignment kernels have not been studied.
Several metric embeddings for string distance measures have been proposed for large-scale string processing [4, 2]. Edit sensitive parsing (ESP) [4] is a metric embedding of a string distance measure called edit distance with moves (EDM), which consists of the ordinary edit operations of insertion, deletion and replacement in addition to a substring move operation. ESP maps strings from the EDM space into integer vectors, named characteristic vectors, in the L1 distance space.
Method  Approach  Training time  Training space  Prediction time
GAK [5, 6]  Global alignment  
LAK [25]  Local alignment  
D2KE [30, 31]  Random feature map  
SFMEDM (this study)  ESP  
SFMCGK (this study)  CGK  

To date, ESP has been applied only to string processing tasks such as string compression [21], indexing [29] and edit distance computation [4]; however, as we will see, it has high potential for application to the approximation of alignment kernels. ESP is expected to be effective for approximating alignment kernels because it approximates EDM between strings as an L1 distance between integer vectors.
Contribution. In this paper, we present SFMEDM, the first approximation of alignment kernels for solving large-scale learning problems on string data. The key ideas behind the proposed method are threefold: (i) to project input strings into characteristic vectors leveraging ESP, (ii) to map the characteristic vectors into vectors of RFFs by FMs, and (iii) to train linear SVM on the mapped vectors. However, applying FMs for RFFs to high-dimensional vectors in a direct way requires memory linearly proportional not only to the dimension d of the input vectors but also to the dimension D of the RFF vectors, i.e., O(dD) memory. In fact, the characteristic vectors used as input for FMs tend to be very high dimensional when solving large-scale problems, and the output RFF vectors also need to be high dimensional for achieving high prediction accuracy; these conditions limit the applicability of FMs on a large scale. Although the Fastfood approach [16] and orthogonal random features [32] have been proposed for computing RFFs efficiently in terms of time and memory, they are applicable only to RFFs approximating RBF kernels with a theoretical guarantee. Accordingly, in this study, we present space-efficient FMs (SFMs) that require only O(d) memory to solve this problem and can be used for approximating any shift-invariant kernel, such as a Laplacian kernel. This is an essential property for approximating alignment kernels that has not been taken into account by previous research. Our SFMEDM has the following desirable properties:

Scalability: SFMEDM is applicable to massive string data.

Fast training: SFMEDM trains SVM fast.

Space efficiency: SFMEDM trains SVM space-efficiently.

Prediction accuracy: SFMEDM can achieve high prediction accuracy.
We experimentally test the ability of SFMEDM to train SVM with various massive string datasets, and we demonstrate that SFMEDM has superior performance in terms of prediction accuracy, scalability and computational efficiency.
II Related Work
Several alignment kernels have been proposed for analyzing string data. We briefly review the state of the art, which is also summarized in Table I. Early methods were proposed in [1, 27, 34], but their kernel matrices are known not to be positive definite; thus, they are used with numerical corrections for the deficiency of their kernel matrices.
The global alignment kernel (GAK) [5, 6] is an alignment kernel based on global alignments, originally proposed for time series data. GAK defines a kernel as the summed score of all possible global alignments between two strings. The computation time of GAK is quadratic in both the number of strings and their length, and its space usage is quadratic in the number of strings.
A local alignment kernel (LAK) based on the Smith-Waterman algorithm [28] for detecting protein remote homology was proposed by Saigo et al. [25]. LAK measures the similarity between each pair of strings by summing up the scores obtained from gapped local alignments of the strings. The computation time of LAK is quadratic in both the number of strings and their length, and its space usage is quadratic in the number of strings. Although, in combination with SVM, LAK achieves high classification accuracy for protein sequences, it is applicable only to protein strings because its scoring function is optimized for proteins.
D2KE [30] is a random feature map from structured data to feature vectors such that a distance measure between each pair of structured data is preserved by the inner product between the corresponding pair of mapped vectors. The feature vector for each input is built as follows: (i) D structured data are randomly sampled from the input; (ii) a D-dimensional feature vector is built for each structured datum such that each dimension of the vector is the distance between that datum and one of the sampled data. D2KE has been applied to time series data [31]; however, as we will see, D2KE cannot achieve high prediction accuracy when applied to string data.
Despite the importance of scalable learning with alignment kernels, no previous work has achieved high scalability while preserving high prediction accuracy. We present SFMEDM, the first scalable learning method for string alignment kernels that meets these demands, made possible by leveraging the ideas behind ESP and SFMs.
CGK [2] is another metric embedding for edit distance; it maps input strings over an alphabet, of some maximum length, into strings of a fixed length such that the edit distance between each pair of input strings is approximately preserved by the Hamming distance between the corresponding pair of mapped strings. Recently, CGK has been applied to the problem of edit similarity joins [33]. We also present a kernel approximation of alignment kernels called SFMCGK by leveraging the idea behind CGK together with SFMs.
Details of the proposed method are presented in the next section.
III Edit sensitive parsing
Edit sensitive parsing (ESP) [4] is an approximation method for efficiently computing edit distance with moves (EDM). EDM is a string-to-string distance measure that counts the number of string operations needed to turn one string into another, where a substring move is included as an operation in addition to the typical operations of insertion, deletion and replacement.
Formally, let S be a string of length |S| and S[i] be the i-th character of S. EDM(S, S') for two strings S and S' is defined as the minimum number of edit operations, defined below, needed to transform S into S':
Insertion: a character is inserted at position i in S;
Deletion: the character at position i in S is deleted;
Replacement: the character at position i in S is replaced by another character;
Substring move: a substring of S is moved and inserted at another position.
Computing EDM between two strings is known to be NP-complete [26].
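As a small illustration of the role of the move operation, transforming the string abcdef into defabc requires only a single substring move (relocating the substring abc to the end), whereas with insertions, deletions and replacements alone several single-character edits would be needed; EDM counts such a move as one operation.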
ESP can approximately compute EDM by embedding strings into an L1 vector space via parsing.
Given a string S, ESP builds a parse tree, named an ESP tree, which is illustrated in Figure 1 as an example. The ESP tree is a balanced tree, and each node in the ESP tree belongs to one of three types: (i) a node with three children, (ii) a node with two children and (iii) a node without children (i.e., a leaf). In addition, internal nodes in the ESP tree have the same node label if and only if their children satisfy both of the following conditions: (i) the numbers of those children are the same, and (ii) the node labels of those children are the same in left-to-right order. The height of the ESP tree is O(log |S|) for an input string S of length |S|.
Let V(S) be a d-dimensional integer vector built from the ESP tree T(S) such that each dimension of V(S) is the number of times the corresponding node label appears in T(S); V(S) is called the characteristic vector of S. ESP builds ESP trees such that as many subtrees with the same node labels as possible are built for common substrings of strings S and S', resulting in an approximation of EDM between S and S' by the L1 distance between their characteristic vectors V(S) and V(S'), i.e., EDM(S, S') ≈ ||V(S) - V(S')||_1, where ||·||_1 is the L1 norm. More precisely, the upper and lower bounds of the approximation are

EDM(S, S') = O(||V(S) - V(S')||_1) and ||V(S) - V(S')||_1 = O(log|S| log*|S|) · EDM(S, S'),

where log*|S| is the iterated logarithm of |S|, recursively defined as log* n = 0 for n ≤ 1 and log* n = 1 + log*(log n) for n > 1.
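To make the use of characteristic vectors concrete, the following is a minimal C++ sketch (not the authors' implementation; the label type and function name are illustrative assumptions) that stores a characteristic vector as a sparse map from node labels to counts and computes the L1 distance that approximates EDM.

```cpp
#include <cstdint>
#include <cstdlib>
#include <unordered_map>

// A characteristic vector: node label -> number of occurrences in the ESP tree.
using CharacteristicVector = std::unordered_map<uint64_t, int64_t>;

// L1 distance between two sparse characteristic vectors; by the ESP guarantee,
// this value approximates EDM between the underlying strings.
int64_t l1_distance(const CharacteristicVector& a, const CharacteristicVector& b) {
    int64_t dist = 0;
    for (const auto& [label, count] : a) {
        auto it = b.find(label);
        int64_t other = (it == b.end()) ? 0 : it->second;
        dist += std::llabs(count - other);
    }
    for (const auto& [label, count] : b) {
        if (a.find(label) == a.end()) dist += std::llabs(count);  // labels only in b
    }
    return dist;
}
```

In such a sparse representation, only the node labels that actually occur in an ESP tree are stored, so the memory usage grows with the number of distinct labels rather than with the full dimension d.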
Details of the ESP algorithm are presented in the appendix.
IV Space-efficient Feature Maps
In this section, we present our new SFMs for RFFs, which use space proportional to the dimension d of the characteristic vectors and independent of the RFF target dimension D. The proposed SFMs improve the space usage for generating RFFs from O(dD) to O(d) while preserving the theoretical guarantees (concentration bounds). The method is general and can be used for approximating any shift-invariant kernel.
From an abstract point of view, an RFF is based on a way of constructing a random mapping z_r: R^d → R^2 such that for every choice of vectors x, y ∈ R^d we have

E_r[z_r(x) · z_r(y)] = k(x, y),

where k is the kernel function. The randomness of z_r comes from a vector r ∈ R^d sampled from an appropriate distribution p_k that depends on the kernel function k (see Section V for more details), and the expectation is over the choice of r. For the purposes of this section, all that needs to be known about r is that its coordinates are independently sampled according to the marginal distribution p'_k.
Since |z_r(x) · z_r(y)| ≤ 1, we have Var(z_r(x) · z_r(y)) ≤ 1, i.e., bounded variance; however, this in itself does not imply that z_r(x) · z_r(y) is close to k(x, y). Indeed, a single product z_r(x) · z_r(y) is a poor estimator of k(x, y). The accuracy of RFFs can be improved by increasing the output dimension. Specifically, RFFs use D independent vectors r_1, ..., r_D sampled from p_k and consider the FM

z(x) = sqrt(1/D) (z_{r_1}(x), ..., z_{r_D}(x))

that concatenates the values of the D functions into one 2D-dimensional vector. It can then be shown that z(x) · z(y) ≈ k(x, y) with high probability for sufficiently large D.
To represent the function z, it is necessary to store a matrix containing the vectors r_1, ..., r_D, which uses space O(dD). Our observation for ensuring good kernel approximations is that the vectors r_1, ..., r_D do not need to be fully independent. Instead, for a small integer parameter t, we compute the coordinates of each vector r_i using hash functions chosen from a t-wise independent family, such that for every i, r_i comes from the distribution p_k. Then, instead of storing r_1, ..., r_D, we only store the descriptions of the hash functions, using O(d) memory. At first glance, two issues seemingly concern this approach:

It is unclear how to construct t-wise independent hash functions with output distribution p'_k.

Is t-wise independence sufficient to ensure results similar to the fully independent setting?
We address these issues in the next two subsections.
IV-A Hash functions with distribution p'_k
For concreteness, our construction is based on the following class of t-wise independent hash functions, where p is a prime parameter: for coefficients a_0, a_1, ..., a_{t-1} ∈ {0, ..., p-1} chosen uniformly at random, let

h(x) = ((a_{t-1} x^{t-1} + ... + a_1 x + a_0) mod p) / p,

so that h(x) lies in [0, 1). It can be shown that for any t distinct integer inputs x_1, ..., x_t, the vector (h(x_1), ..., h(x_t)) is uniformly distributed in [0, 1)^t (up to the discretization induced by p).
Let CDF^{-1} denote the inverse of the cumulative distribution function of the marginal distribution p'_k. Then, if u is uniformly distributed in [0, 1), CDF^{-1}(u) follows p'_k. Accordingly, hash functions h_1, ..., h_d can be constructed such that the j-th coordinate of r_i is given as

r_i[j] = CDF^{-1}(h_j(i)),

where the coefficients of each h_j are chosen independently and uniformly at random. We see that for every i, r_i has distribution p_k. Furthermore, for every set of t distinct integer inputs i_1, ..., i_t, the hash values h_j(i_1), ..., h_j(i_t) are independent, and hence the vectors r_{i_1}, ..., r_{i_t} are independent.
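The following is a minimal C++ sketch of this construction (a sketch under stated assumptions, not the paper's Algorithm 1): each input coordinate j keeps its own t random polynomial coefficients, so the stored state is O(td) = O(d) and is independent of D; the prime, the value of t and the Cauchy scale gamma are illustrative choices. For the Laplacian kernel of Eq. (1), the marginal distribution p'_k is Cauchy, whose inverse CDF is gamma * tan(pi (u - 1/2)); for other shift-invariant kernels only that last step changes.

```cpp
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

// Degree-(t-1) polynomial hashing modulo a prime gives t-wise independent
// values in [0, 1); composing with the inverse Cauchy CDF yields coordinate
// r_i[j] on demand instead of storing a d x D matrix of random numbers.
struct TwiseCauchyGenerator {
    static constexpr uint64_t kPrime = 2147483647ULL;  // 2^31 - 1 (assumed choice)
    int t;                                    // independence parameter (small constant)
    double gamma;                             // Cauchy scale (assumed to encode the kernel width)
    std::vector<std::vector<uint64_t>> coef;  // coef[j] = t coefficients for input coordinate j

    TwiseCauchyGenerator(int d, int t_, double gamma_, uint64_t seed)
        : t(t_), gamma(gamma_), coef(d, std::vector<uint64_t>(t_)) {
        std::mt19937_64 rng(seed);
        std::uniform_int_distribution<uint64_t> dist(0, kPrime - 1);
        for (auto& cj : coef)
            for (auto& c : cj) c = dist(rng);
    }

    // h_j(i): t-wise independent uniform value in [0, 1) for RFF index i.
    double uniform(int j, uint64_t i) const {
        uint64_t acc = 0;
        for (int k = t - 1; k >= 0; --k)             // Horner's rule modulo kPrime
            acc = (acc * (i % kPrime) + coef[j][k]) % kPrime;
        return static_cast<double>(acc) / static_cast<double>(kPrime);
    }

    // r_i[j] = CDF^{-1}(h_j(i)) for the Cauchy distribution with scale gamma.
    double cauchy(int j, uint64_t i) const {
        constexpr double kPi = 3.14159265358979323846;
        double u = uniform(j, i);
        return gamma * std::tan(kPi * (u - 0.5));
    }
};
```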
IV-B Concentration bounds
We then show that for RFFs constructed in this way, D = O(1/(δ ε^2)) random features suffice to approximate the kernel function within error ε with probability at least 1 - δ, i.e., arbitrarily close to 1.
Theorem 1
For every pair of vectors x, y ∈ R^d, if the mapping z is constructed as described above with t ≥ 2, then for every ε > 0 it holds that

Pr[|z(x) · z(y) - k(x, y)| ≥ ε] ≤ 1/(D ε^2).
Proof
Our proof follows the same outline as the standard proof of Chebyshev's inequality. Write X_i = z_{r_i}(x) · z_{r_i}(y), so that z(x) · z(y) = (1/D) sum_{i=1}^{D} X_i and E[X_i] = k(x, y). Consider the second central moment:

E[(z(x) · z(y) - k(x, y))^2] = (1/D^2) sum_{i=1}^{D} sum_{j=1}^{D} E[(X_i - k(x, y))(X_j - k(x, y))] = (1/D^2) sum_{i=1}^{D} E[(X_i - k(x, y))^2] ≤ 1/D.

The second equality above uses t-wise independence (t ≥ 2) and the fact that E[X_i] = k(x, y) to conclude that only the D diagonal terms in the expansion have nonzero expectation. Finally, we have:

Pr[|z(x) · z(y) - k(x, y)| ≥ ε] = Pr[(z(x) · z(y) - k(x, y))^2 ≥ ε^2] ≤ E[(z(x) · z(y) - k(x, y))^2] / ε^2 ≤ 1/(D ε^2),

where the first inequality follows from Markov's inequality applied to the nonnegative random variable (z(x) · z(y) - k(x, y))^2, and the second follows from the moment bound above. This concludes the proof.
In the original analysis of RFFs [24], a stronger approximation guarantee was considered; namely, the kernel function was approximated uniformly over all pairs of points in a bounded region of R^d. This kind of result can be achieved by choosing D sufficiently large to obtain strong tail bounds. However, we show that the pointwise guarantee (for a fixed pair x, y) provided by Theorem 1 is sufficient for our application to kernel approximation in Sec. VII.
Dataset  Number of strings  #positives  Alphabet size  Average length

Protein  3,238  96  20  607 
DNA  3,238  96  4  1,827 
Music  10,261  9,022  61  329 
Sports  296,337  253,017  63  307 
Compound  1,367,074  57,536  44  53 
Data  Protein  DNA  Music  Sports  Compound  

Method  ESP  CGK  ESP  CGK  ESP  CGK  ESP  CGK  ESP  CGK 
Time (sec)  
Memory (MB)  
Dimension 
V Scalable Alignment Kernels
We present the SFMEDM algorithm for scalable learning with alignment kernels hereafter. Let us assume a collection of N strings and their labels (S_i, y_i) for i = 1, ..., N, where y_i ∈ {+1, -1}. We define alignment kernels using EDM for each pair of strings S_i and S_j as follows:

K(S_i, S_j) = exp(-EDM(S_i, S_j)/β),

where β > 0 is a parameter. We apply ESP to each S_i for i = 1, ..., N and build ESP trees T(S_1), ..., T(S_N). Since ESP approximates EDM(S_i, S_j) as the L1 distance between the characteristic vectors V(S_i) and V(S_j) built from the ESP trees T(S_i) and T(S_j), i.e., EDM(S_i, S_j) ≈ ||V(S_i) - V(S_j)||_1, K(S_i, S_j) can be approximated as follows:

K(S_i, S_j) ≈ exp(-||V(S_i) - V(S_j)||_1/β).   (1)
Since Eq. (1) is a Laplacian kernel, which is a shift-invariant kernel [24], we can approximate it using FMs for RFFs as follows:

K(S_i, S_j) ≈ z(V(S_i)) · z(V(S_j)),

where z: R^d → R^{2D}. For Laplacian kernels, z(V(S_i)) for each i = 1, ..., N is defined as

z(V(S_i)) = sqrt(1/D) (cos(r_1^T V(S_i)), sin(r_1^T V(S_i)), ..., cos(r_D^T V(S_i)), sin(r_D^T V(S_i))),   (2)

where the random vectors r_m ∈ R^d for m = 1, ..., D are sampled from the Cauchy distribution (with scale determined by β). We shall refer to approximations of alignment kernels leveraging ESP and FMs as FMEDM.
Applying FMs to high-dimensional characteristic vectors consumes O(dD) memory for storing the vectors r_m ∈ R^d for m = 1, ..., D. Thus, we present SFMs for RFFs that use only O(d) memory by applying the t-wise independent hash functions introduced in Sec. IV. We fix t to a small constant in this study, resulting in O(d) memory. We shall refer to approximations of alignment kernels leveraging ESP and SFMs as SFMEDM.
Algorithm 1 generates random numbers following a Cauchy distribution on the fly, using memory independent of the RFF dimension D. Two arrays a and b, initialized with 64-bit random numbers stored as unsigned integers, are used. The function implemented with a and b in Algorithm 1 returns a uniform random number in [0, 1) for given indices i and j as input; this uniform number is then converted into a Cauchy-distributed number via the inverse CDF at line 8. Algorithm 2 implements the SFMs generating the RFFs in Eq. (2). The computation time and memory of SFMs are O(dD) and O(d), respectively.
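Since Algorithms 1 and 2 are given only as pseudocode, the following is a rough, self-contained C++ sketch of the SFM feature-map step under the assumptions above; the sparse-vector representation and the sampler interface (a callable cauchy(j, m) such as the generator sketched in Sec. IV-A) are illustrative assumptions, not the authors' code.

```cpp
#include <cmath>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// One nonzero entry (dimension index, count) of a sparse characteristic vector.
using SparseVector = std::vector<std::pair<int, double>>;

// Computes the 2D-dimensional RFF vector z(V(S)) of Eq. (2) without ever
// materializing the d x D matrix of Cauchy samples: each entry r_m[j] is
// regenerated on demand by the sampler cauchy(j, m).
std::vector<double> sfm_features(const SparseVector& v, int D,
                                 const std::function<double(int, uint64_t)>& cauchy) {
    std::vector<double> z(2 * D);
    const double norm = std::sqrt(1.0 / D);
    for (int m = 0; m < D; ++m) {
        double dot = 0.0;                       // r_m^T V(S), over nonzeros only
        for (const auto& [j, value] : v)
            dot += cauchy(j, static_cast<uint64_t>(m)) * value;
        z[2 * m]     = norm * std::cos(dot);
        z[2 * m + 1] = norm * std::sin(dot);
    }
    return z;
}
```

In this sketch the running time is proportional to D times the number of nonzero entries of V(S), and the only persistent state is the O(d)-size sampler, so no d x D matrix is ever stored.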
Method  Protein  DNA  Music  Sports  Compound 

SFMEDM(D=128)  
FMEDM(D=128)  
SFMEDM(D=512)  
FMEDM(D=512)  
SFMEDM(D=2048)  
FMEDM(D=2048)  
SFMEDM(D=8192)  
FMEDM(D=8192)  
SFMEDM(D=16384)  
FMEDM(D=16384) 
Method  Protein  DNA  Music  Sports  Compound 

SFMCGK(D=128)  
FMCGK(D=128)  
SFMCGK(D=512)  
FMCGK(D=512)  
SFMCGK(D=2048)  
FMCGK(D=2048)  
SFMCGK(D=8192)  
FMCGK(D=8192)  
SFMCGK(D=16384)  
FMCGK(D=16384) 
VI Feature Maps using CGK embedding
CGK [2, 33] is another string embedding using a randomized algorithm. Let S_i for i = 1, ..., N be input strings over an alphabet, and let L be the maximum length of the input strings. CGK maps the input strings from the edit-distance space into strings of a fixed length in the Hamming space; i.e., the edit distance between each pair S_i and S_j of input strings is approximately preserved by the Hamming distance between the corresponding pair S'_i and S'_j of mapped strings. See [33] for the details of CGK.
To apply SFMs, we convert the mapped strings in the Hamming space produced by CGK into characteristic vectors in the L1-distance space as follows. We view each element S'_i[m] for m = 1, 2, ... as the location of a nonzero entry instead of as a character. For example, for a binary alphabet we view each S'_i[m] as a vector of length two: if S'_i[m] is the first symbol, we code it as (1, 0); if it is the second symbol, we code it as (0, 1). We then concatenate those vectors into one vector V(S'_i) whose dimension is the alphabet size times the string length and which has one nonzero entry per position. As a result, the Hamming distance between the mapped strings S'_i and S'_j is, up to a factor of two, the L1 distance between the obtained vectors V(S'_i) and V(S'_j), since each mismatching position contributes exactly two to the L1 distance. By applying SFMs or FMs to V(S'_i), we build vectors of RFFs. We shall call the approximations of alignment kernels using CGK with SFMs (respectively, FMs) SFMCGK (respectively, FMCGK).
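As an illustration of this conversion, here is a small C++ sketch (an assumed representation, not the authors' code) that turns a CGK-mapped string, given as alphabet indices, into a sparse one-hot characteristic vector.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Converts a CGK-mapped string (given as alphabet indices in 0..sigma-1) into a
// sparse one-hot vector of dimension sigma * length: position m with symbol c
// sets dimension m * sigma + c to 1.  Two such vectors then differ in exactly
// two dimensions for every mismatching position of the underlying strings.
std::vector<std::pair<int, double>> one_hot_vector(const std::vector<int>& mapped,
                                                   int sigma) {
    std::vector<std::pair<int, double>> v;
    v.reserve(mapped.size());
    for (std::size_t m = 0; m < mapped.size(); ++m)
        v.emplace_back(static_cast<int>(m) * sigma + mapped[m], 1.0);
    return v;
}
```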
VII Experiments
In this section, we evaluated the performance of SFMEDM with five massive string datasets, as summarized in Table II. The "Protein" and "DNA" datasets each consist of 3,238 human enzymes obtained from the KEGG GENES database [14]. Each enzyme in "DNA" was coded as a string of four types of nucleotide bases (i.e., A, T, G and C). Similarly, each enzyme in "Protein" was coded as a string of 20 types of amino acids. Enzymes belonging to the isomerase class according to their enzyme commission (EC) numbers in "DNA" and "Protein" have positive labels, and the other enzymes have negative labels. The "Music" and "Sports" datasets consist of 10,261 and 296,337 English reviews of musical-instrument and sports products from Amazon [11, 22], respectively. Each review has a rating on a five-level scale; we assigned positive labels to reviews with a rating of four or five and negative labels to the other reviews. The "Compound" dataset consists of 1,367,074 bioactive compounds obtained from the NCBI PubChem database [15]. Each compound was coded as a string representation of its chemical structure called SMILES. The biological activities of the compounds for human proteins were obtained from the ChEMBL database. In this study we focused on the biological activity for the human protein microtubule-associated protein tau (MAPT); the label of each compound corresponds to the presence or absence of biological activity for MAPT.
All the methods were implemented in C++, and all the experiments were performed on one core of a quad-core Intel Xeon CPU E5-2680 (2.8 GHz). The execution of each method was stopped if it did not finish within 48 hours.
Method  Protein  DNA  Music  Sports  Compound 

SFMEDM(D=128)  5  8  11  204  261 
SFMEDM(D=512)  22  34  47  799  1,022 
SFMEDM(D=2048)  93  138  193  3,149  4,101 
SFMEDM(D=8192)  367  544  729  12,179  16,425 
SFMEDM(D=16384)  725  1,081  1,430  24,282  32,651 
SFMCGK(D=128)  14  52  26  452  397 
SFMCGK(D=512)  60  222  104  1,747  1,570 
SFMCGK(D=2048)  237  981  415  7,156  6,252 
SFMCGK(D=8192)  969  3,693  1,688  27,790  25,054 
SFMCGK(D=16384)  1,937  7,596  3,366  53,482  49,060 
D2KE(D=128)  319  4,536  296  8,139  1,641 
D2KE(D=512)  1,250  19,359  1244  34,827  6,869 
D2KE(D=2048)  5,213  76,937  5,018  140,187  28,116 
D2KE(D=8192)  21,208  48h  19,716  48h  48h 
D2KE(D=16384)  43,417  48h  38,799  48h  48h 
LAK  31,718  –  –  –  –
GAK  25,252  48h  101,079  48h  48h 
ESPKernel  20  28  162  48h  48h
STK17  48h  48h  48h 
VII-A Scalability of ESP
First, we evaluated the scalability of ESP and CGK. Table III shows the execution time, memory consumption in megabytes, and dimension of the characteristic vectors generated by ESP and CGK. ESP and CGK were fast enough in practice to build characteristic vectors for large datasets: their executions finished within 60 seconds even for "Compound", the largest dataset, consisting of more than 1 million compounds. At most 1.5 GB of memory was consumed in the execution of ESP. These results demonstrate the high scalability of ESP for massive datasets.
For each dataset, characteristic vectors of very high dimension were built by ESP and CGK. For example, 18-million-dimensional vectors were built by ESP for the "Sports" dataset. Applying the original FMs for RFFs to such high-dimensional characteristic vectors would consume a huge amount of memory, deteriorating the scalability of FMs. The proposed SFMs solve this scalability problem, as shown in the next subsection.
VII-B Efficiency of SFMs
We evaluated the efficiency of SFMs applied to the characteristic vectors built by ESP, and we compared SFMs with FMs. We examined the combinations of characteristic vectors and projected vectors used by SFMEDM, FMEDM, SFMCGK and FMCGK. The dimension D of the projected RFF vectors was examined for D ∈ {128, 512, 2048, 8192, 16384}.
Figure 3 shows the amount of memory consumed by SFMs and FMs for the characteristic vectors built by ESP and CGK for each dataset. A huge amount of memory was consumed by FMs for high-dimensional characteristic vectors and projected vectors: around 1.1 TB and 323 GB of memory would be consumed by FMEDM at the largest D for "Sports" and "Compound", respectively. Such huge amounts of memory make it impossible to build high-dimensional vectors of RFFs. The memory required by SFMs was linear in the dimension of the characteristic vectors for each dataset: only 280 MB and 80 MB of memory were consumed by SFMEDM in the same setting for "Sports" and "Compound", respectively. These results show that, compared with FMEDM, SFMEDM dramatically reduces the amount of required memory.
Figure 3 also shows the execution time for building the projected vectors for each dataset. The execution time increases linearly with the dimension D for each method; for "Compound", SFMs built 16,384-dimensional vectors of RFFs in around nine hours.
We evaluated the accuracy of our approximations of alignment kernels in terms of the average error of RFFs, defined as the average of |z(V(S_i)) · z(V(S_j)) - K(S_i, S_j)| over pairs of strings (S_i, S_j), where K(S_i, S_j) is defined by Eq. (1) and β was fixed. The average error of SFMs was compared with that of FMs for each dataset. Table IV shows the average errors of SFMs and FMs using characteristic vectors built by ESP for each dataset. The average errors of SFMEDM and FMEDM are almost the same for all datasets and dimensions D; thus the accuracy of FMs is preserved by SFMs, while the amount of memory required by FMs is dramatically reduced. The same tendencies were observed for the average errors of SFMs in combination with CGK, as shown in Table V.
VII-C Classification performance of SFMEDM
We evaluated the classification abilities of SFMEDM, SFMCGK, D2KE, LAK and GAK. We used an implementation of LAK downloadable from http://sunflower.kuicr.kyoto-u.ac.jp/~hiroto/project/homology.html. We implemented D2KE in C++ with edit distance as the distance measure for strings. Laplacian kernels with the characteristic vectors of ESP and CGK in Eq. (1) were also evaluated and are denoted as ESPKernel and CGKKernel, respectively. In addition, we evaluated the classification ability of a state-of-the-art string kernel [8], which we refer to as STK17, using an implementation downloadable from https://github.com/mufarhan/sequence_class_NIPS_2017. We used LIBLINEAR [7] for training linear SVM with SFMEDM and SFMCGK, and we trained non-linear SVM with GAK, LAK, ESPKernel and CGKKernel using LIBSVM [3]. We performed three-fold cross-validation for each dataset and measured the prediction accuracy by the area under the ROC curve (AUC). The dimension D of the vectors of RFFs and D2KE was examined for D ∈ {128, 512, 2048, 8192, 16384}. We selected the best setting achieving the highest AUC among all combinations of the kernel parameter and the SVM regularization parameter.
Table VI shows the execution time for building RFFs and computing kernel matrices, in addition to training linear/non-linear SVM, for each method. LAK was applied only to "Protein" because its scoring function is optimized for protein sequences; it took 9 hours to finish, which was the most time-consuming of all the methods on "Protein". The execution of GAK finished within 48 hours only for "Protein" and "Music", taking around seven hours and 28 hours, respectively. The executions of D2KE did not finish within 48 hours for the datasets "DNA", "Sports" and "Compound". In addition, the executions of ESPKernel and CGKKernel did not finish within 48 hours for "Sports" and "Compound". These results suggest that existing alignment kernels are unsuitable for massive string datasets. The executions of D2KE did not finish when large dimensions (e.g., D = 8,192 and 16,384) were used, which shows that creating high-dimensional vectors for achieving high classification accuracy with D2KE is time-consuming. The executions of SFMEDM and SFMCGK finished within 48 hours for all datasets; SFMEDM and SFMCGK took around nine hours and 13 hours, respectively, for "Compound", which consists of 1.3 million strings, in the setting of the largest D.
Figure 5 shows the amounts of memory consumed for training linear/non-linear SVM with each method; here, GAK, LAK, ESPKernel, CGKKernel and STK17 are collectively denoted as "Kernel". "Kernel" required a small amount of memory for the small datasets ("Protein", "DNA" and "Music"), but it required a huge amount of memory for the large datasets ("Sports" and "Compound"): for example, it consumed 654 GB and 1.3 TB of memory for "Sports" and "Compound", respectively. The amounts of memory for SFMEDM, SFMCGK and D2KE were at least one order of magnitude smaller than those for "Kernel"; SFMEDM, SFMCGK and D2KE required 36 GB and 166 GB of memory for "Sports" and "Compound", respectively, in the case of the largest D. These results demonstrate the high memory efficiency of SFMEDM and SFMCGK. Although training linear SVM with vectors built by D2KE was space-efficient, its prediction accuracies were not high, as presented next.
Figure 5 also shows the classification accuracy of each method, where results for methods that did not finish within 48 hours are not plotted. The prediction accuracies of SFMEDM and SFMCGK improved for larger D. The prediction accuracy of SFMEDM was higher than that of SFMCGK for every D on all datasets, and it was also higher than those of all the kernel methods (namely, LAK, GAK, ESPKernel, CGKKernel and STK17). The prediction accuracies of D2KE were worse than those of SFMEDM and did not improve even for large D. These results suggest that SFMEDM achieves the highest classification accuracy while being much more efficient than the other methods in terms of memory and time for building RFFs and training SVM.
VIII Conclusion
We have presented SFMEDM, the first feature maps for string alignment kernels, together with SFMs for computing RFFs space-efficiently. We have demonstrated their ability to learn SVM for large-scale string classification on various massive string datasets, with superior performance in terms of prediction accuracy, scalability and computational efficiency.
SFMEDM opens the door to new application domains such as bioinformatics and natural language processing, in which large-scale string processing with kernel methods has so far been too restrictive.
IX Acknowledgments
We thank Ninh Pham and Takaaki Nishimoto for useful discussions of kernel approximation and edit sensitive parsing. The research of Rasmus Pagh has received funding from the European Research Council under the European Union's 7th Framework Programme (FP7/2007-2013) / ERC grant agreement no. 614331.
References
[1] (2002) Online handwriting recognition with support vector machines: a kernel approach. In Proceedings of the 8th International Workshop on Frontiers in Handwriting Recognition, pp. 49–54.
[2] (2016) Streaming algorithms for embedding and computing edit distance in the low distance regime. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing, pp. 715–725.
[3] (2011) LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, pp. 27:1–27:27.
[4] (2007) The string edit distance matching problem with moves. ACM Transactions on Algorithms 3, pp. 2:1–2:19.
[5] (2007) A kernel for time series based on global alignments. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 413–416.
[6] (2011) Fast global alignment kernels. In Proceedings of the 28th International Conference on Machine Learning, pp. 929–936.
[7] (2008) LIBLINEAR: a library for large linear classification. Journal of Machine Learning Research 9, pp. 1871–1874.
[8] (2017) Efficient approximation algorithms for string kernel based sequence classification. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 6938–6948.
[9] (2003) Interior-point methods for massive support vector machines. SIAM Journal on Optimization 13, pp. 783–804.
[10] (2003) A survey of kernels for structured data. ACM SIGKDD Explorations Newsletter 5, pp. 49–58.
[11] (2016) Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International World Wide Web Conference, pp. 507–517.
[12] (2008) Kernel methods in machine learning. The Annals of Statistics 36, pp. 1171–1220.
[13] (2006) Training linear SVMs in linear time. In Proceedings of the 12th ACM Conference on Knowledge Discovery and Data Mining, pp. 217–226.
[14] (2017) KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Research 45, pp. D353–D361.
[15] (2016) PubChem Substance and Compound databases. Nucleic Acids Research 44, pp. D1202–D1213.
[16] (2013) Fastfood: approximating kernel expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, pp. 244–252.
[17] (2002) The spectrum kernel: a string kernel for SVM protein classification. In Proceedings of the 7th Pacific Symposium on Biocomputing, pp. 566–575.
[18] (2011) Hashing algorithms for large-scale learning. In Advances in Neural Information Processing Systems, pp. 2672–2680.
[19] (2017) Linearized GMM kernels and normalized random Fourier features. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 315–324.
[20] (2002) Text classification using string kernels. Journal of Machine Learning Research 2, pp. 419–444.
[21] (2013) Fully-online grammar compression. In Proceedings of the International Symposium on String Processing and Information Retrieval, pp. 218–229.
[22] (2015) Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 43–52.
[23] (2013) Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 239–247.
[24] (2007) Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pp. 1177–1184.
[25] (2004) Protein homology detection using string alignment kernels. Bioinformatics 20, pp. 1682–1689.
[26] (2002) Edit distance with move operations. In Proceedings of the 13th Symposium on Combinatorial Pattern Matching, Vol. 2373, pp. 85–98.
[27] (2001) Dynamic time-alignment kernel in support vector machine. In Advances in Neural Information Processing Systems, pp. 921–928.
[28] (1981) Identification of common molecular subsequences. Journal of Molecular Biology 147, pp. 195–197.
[29] (2014) Improved ESP-index: a practical self-index for highly repetitive texts. In Proceedings of the 13th International Symposium on Experimental Algorithms, pp. 338–350.
[30] (2018) D2KE: from distance to kernel and embedding. CoRR abs/1802.04956.
[31] (2018) Random warping series: a random features method for time-series embedding. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, pp. 793–802.
[32] (2016) Orthogonal random features. In Proceedings of the 29th International Conference on Neural Information Processing Systems, pp. 1975–1983.
[33] (2017) EmbedJoin: efficient edit similarity joins via embeddings. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 585–594.
[34] (2010) Unsupervised discovery of facial events. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2574–2581.
Appendix A Supplement
A-A Edit sensitive parsing
In this section, we first introduce left preferential parsing (LPP) as a basic algorithm underlying ESP; in the later part of this section, we present the ESP algorithm.
A-A.1 Left preferential parsing (LPP)
The key idea of LPP is to make pairs of nodes preferentially from the left to the right in the sequence of nodes at each level of the ESP tree, and to make a triple of the remaining three nodes when the sequence length is odd. LPP then builds type-2 nodes for these pairs of nodes and a type-1 node for the triple of nodes. In this way, LPP builds an ESP tree in a bottom-up manner.
More precisely, if the length of the sequence at the k-th level of the ESP tree is even, LPP makes pairs of adjacent nodes from the left for all positions and builds type-2 nodes for all the pairs; the (k+1)-th level of the ESP tree is then a sequence of type-2 nodes. If the length of the sequence at the k-th level is odd, LPP makes pairs of adjacent nodes from the left for all but the last three nodes and makes a triple of the last three nodes; LPP builds type-2 nodes for the pairs of nodes and a type-1 node for the triple. Thus, the (k+1)-th level of the ESP tree is a sequence of type-2 nodes except for the last node, which is a type-1 node. LPP builds an ESP tree in a bottom-up manner; that is, it builds the ESP tree from the leaves (i.e., the input string) to the root. See Figure 6 for an example of this ESP-tree construction.
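The following is a minimal C++ sketch of the LPP grouping rule only (an illustration under assumed conventions, not the authors' implementation): it returns, from left to right, the sizes of the blocks into which a level of n nodes is partitioned.

```cpp
#include <vector>

// Left preferential parsing: group a level of n nodes into blocks of two from
// the left; if n is odd, the last three nodes form one block of three.
std::vector<int> lpp_block_sizes(int n) {
    std::vector<int> blocks;
    if (n % 2 == 0) {
        for (int i = 0; i < n / 2; ++i) blocks.push_back(2);
    } else {
        for (int i = 0; i < (n - 3) / 2; ++i) blocks.push_back(2);
        if (n >= 3) blocks.push_back(3);      // remaining three nodes form a triple
        else if (n > 0) blocks.push_back(n);  // degenerate case: a single-node level
    }
    return blocks;
}
```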
A crucial drawback of LPP is that it can build completely different ESP trees even for similar strings. For example, as shown in Figure 1, suppose S' is the string obtained by inserting a character at the first position of S. Although S and S' are similar strings, LPP builds completely different ESP trees T(S) and T(S') for S and S', resulting in a large difference between EDM(S, S') and the L1 distance between the characteristic vectors V(S) and V(S'). Thus, LPP alone lacks the ability to approximate EDM.
A-B The ESP algorithm
ESP uses an engineered strategy while using LPP as a subroutine. ESP classifies a string into substrings of three categories and applies a different parsing strategy to each category. An ESP tree for an input string is built by gradually applying this parsing strategy to the sequences from the lowest to the highest level of the tree.
Given a sequence S, ESP divides S into subsequences of the following three categories: (i) a substring in which all pairs of adjacent node labels are different and whose length is at least a prescribed minimum (formally, a substring starting at position i and ending at position j in S satisfies S[k] ≠ S[k+1] for every i ≤ k < j); (ii) a substring consisting of repetitions of the same node label, with length at least a prescribed minimum (formally, a substring starting at position i and ending at position j satisfies S[k] = S[k+1] for every i ≤ k < j); (iii) a substring belonging to neither category (i) nor (ii).
After classifying a sequence into subsequences of the above three categories, ESP applies a different parsing method to each subsequence according to its category. ESP applies LPP to each subsequence in categories (ii) and (iii) and builds nodes at the next level. For subsequences in category (i), ESP applies a special parsing technique named alphabet reduction.
Alphabet reduction. Alphabet reduction is a procedure for converting a sequence into a new label sequence over an alphabet of size at most three. For each symbol S[i], the conversion is performed as follows. Let S[i-1] be the left adjacent symbol of S[i], and suppose S[i] and S[i-1] are represented as binary integers. Let p be the index of the least significant bit in which S[i] differs from S[i-1], and let bit(p, S[i]) be the value of the p-th bit of S[i]. The label L[i] is defined as 2p + bit(p, S[i]) and is computed for each position i in S. When this conversion is applied to a sequence over an alphabet of size σ, the alphabet size of the resulting label sequence L is at most 2⌈log σ⌉. In addition, an important property of the labels is that all adjacent labels in the label sequence are different, i.e., L[i] ≠ L[i+1] for all i. Thus, this conversion can be applied iteratively to the new label sequence L until its alphabet size is at most six.
The alphabet size is then reduced from six to three as follows. First, each occurrence of label 3 in the sequence is replaced with the least element from {0, 1, 2} that differs from both of its neighbors. The same procedure is then repeated for labels 4 and 5, which generates a new sequence of node labels drawn from {0, 1, 2} in which no two adjacent labels are identical.
Any position i that is a local maximum, i.e., L[i-1] < L[i] > L[i+1], is then selected; such positions are called landmarks. In addition, any position i that is a local minimum, i.e., L[i-1] > L[i] < L[i+1], and is not adjacent to an already chosen landmark is also selected as a landmark. An important property of these landmarks is that any two successive landmark positions i and j satisfy 2 ≤ j - i ≤ 3, because L is a sequence over an alphabet of size three in which no two adjacent labels are identical. Alphabet reduction of a sequence is illustrated in Figure 7.
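A minimal C++ sketch of one round of this label computation is shown below (an illustration of the technique, not the authors' code); the handling of the first position is an assumed convention, and the input is assumed to contain no two equal adjacent symbols, as guaranteed for category (i) substrings.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One round of alphabet reduction: for each position i > 0, find the index p of
// the least significant bit in which s[i] differs from s[i-1] and emit the
// label 2*p + bit_p(s[i]).  For i >= 1, consecutive output labels differ.
std::vector<uint32_t> alphabet_reduction_round(const std::vector<uint32_t>& s) {
    std::vector<uint32_t> labels(s.size());
    if (!s.empty()) labels[0] = s[0];          // assumed convention for the first symbol
    for (std::size_t i = 1; i < s.size(); ++i) {
        uint32_t diff = s[i] ^ s[i - 1];       // nonzero because adjacent symbols differ
        uint32_t p = 0;
        while (((diff >> p) & 1u) == 0u) ++p;  // least significant differing bit
        labels[i] = 2u * p + ((s[i] >> p) & 1u);
    }
    return labels;
}
```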
Finally, type-2 nodes (respectively, type-3 nodes) are built for the subsequences of length two (respectively, three) between landmarks.
The computation time of ESP is O(N log* N) for an input string of length N.