# Kernelized Hashcode Representations for Relation Extraction

###### Abstract

Kernel methods have produced state-of-the-art results for a number of NLP tasks such as relation extraction, but suffer from poor scalability due to the high cost of computing kernel similarities between natural language structures. A recently proposed technique, kernelized locality-sensitive hashing (KLSH), can significantly reduce the computational cost, but is only applicable to classifiers operating on kNN graphs. Here we propose to use random subspaces of KLSH codes for efficiently constructing an explicit representation of NLP structures suitable for general classification methods. Further, we propose an approach for optimizing the KLSH model for classification problems by maximizing an approximation of mutual information between the KLSH codes (feature vectors) and the class labels. We evaluate the proposed approach on biomedical relation extraction datasets, and observe significant and robust improvements in accuracy w.r.t. state-of-the-art classifiers, along with drastic (orders-of-magnitude) speedup compared to conventional kernel methods.

## 1 Introduction

As the field of biomedical research expands very rapidly, developing tools for automated information extraction from biomedical literature becomes a necessity. In particular, the task of identifying biological entities and their relations from scientific papers has attracted significant attention in the past several years [Garg et al.2016, Hahn and Surdeanu2015, Krallinger et al.2008], especially because of its potential impact on developing personalized cancer treatments [Cohen2015, Rzhetsky, Valenzuela-Escárcega et al.2017]. See Fig. 1 for an example of the relation extraction task.

For the relation extraction task, approaches based on convolution kernels [Haussler1999] have demonstrated state-of-the-art performance [Chang et al.2016, Tikk et al.2010]. However, despite their success and intuitive appeal, traditional kernel-trick based methods can suffer from relatively high computational costs, since computing kernel similarities between two natural language structures (graphs, paths, sequences, etc.) can be an expensive operation. Furthermore, to build a support vector machine (SVM) or a k-nearest neighbor (kNN) classifier from training examples, one needs to compute kernel similarities between all pairs of training points, which can be prohibitively expensive for large training sets. Some approximation methods have been developed to improve the scalability of kernel classifiers. One such approach is kernelized locality-sensitive hashing (KLSH) [Kulis and Grauman2009, Joly and Buisson2011], which reduces the number of kernel computations by providing an efficient approximation for constructing kNN graphs. However, KLSH only works with classifiers that operate on kNN graphs. Thus, the question is whether scalable kernelized locality-sensitive hashing approaches can be generalized to a wider range of classifiers.

The main contribution of this paper is a principled approach for building explicit representations for structured data, as opposed to implicit ones employed in prior kNN-graph-based approaches, by using random subspaces of KLSH codes. The intuition behind our approach is as follows. If we keep the total number of bits in the KLSH codes of NLP structures relatively large (e.g., 1000 bits), and take many random subsets of bits (e.g., 30 bits each), we can build a large variety of generalized representations corresponding to the subsets, and preserve detailed information present in NLP structures by distributing this information across those representations.^{1}^{1}1The compute cost of KLSH codes is linear in the number of bits, with the number of kernel computations fixed w.r.t. the number of bits.
The main advantage of the proposed representation is that it can be used with arbitrary classification methods besides kNN, such as random forests (RF) [Ho1995, Breiman2001]. Fig. 2 provides a high-level overview of the proposed approach.

Our second major contribution is a theoretically justified and computationally efficient method for optimizing the KLSH representation with respect to: (1) the kernel function parameters and (2) a reference set of examples w.r.t. which kernel similarities of data samples are computed for obtaining their KLSH codes. Our approach maximizes (an approximation of) mutual information between KLSH codes of NLP structures and their class labels.
^{2}^{2}2See our code here: github.com/sgarg87/HFR.

Besides poor scalability, kernel methods usually involve only a relatively small number of tunable parameters, as opposed to, for instance, neural networks, where the number of parameters can be orders of magnitude larger, allowing for more flexible models capable of capturing complex patterns. Our third important contribution is a nonstationary extension of conventional convolution kernels, in order to achieve better expressiveness and flexibility; we achieve this by introducing a richer parameterization of the kernel similarity function. The additional parameters resulting from our nonstationary extension are also learned by maximizing the mutual information approximation.

We validate our model on the relation extraction task using four publicly available datasets. We observe significant improvements in F1 scores w.r.t. the state-of-the-art methods, including recurrent neural nets (RNN), convnets (CNN), and other methods, along with large reductions in computational complexity as compared to traditional kernel-based classifiers.

In summary, our contributions are as follows: (1) we propose an explicit representation learning for structured data based on kernel locality-sensitive hashing (KLSH), and generalize KLSH-based approaches in information extraction to work with arbitrary classifiers; (2) we derive an approximation of mutual information and use it for optimizing our models; (3) we increase the expressiveness of convolutional kernels by extending their parameterization via a nonstationary extension; (4) we provide an extensive empirical evaluation demonstrating significant advantages of the approach versus several state-of-art techniques.

## 2 Background

As indicated in Fig. 1, we map the relation extraction task to a classification problem, where each candidate interaction, represented by a corresponding (sub)structure, is classified as either valid or invalid.

Let {(x_i, y_i)}_{i=1}^N be a set of N data points, where each x_i is an NLP structure (such as a sequence, path, or graph) and y_i its corresponding class label. Our goal is to infer the class label of a given test data point x_*. Within kernel-based methods, this is done via a convolution kernel similarity function k(x_i, x_j; θ), defined for any pair of structures x_i and x_j with kernel parameters θ, augmented with an appropriate kernel-based classifier [Garg et al.2016, Srivastava, Hovy, and Hovy2013, Culotta and Sorensen2004, Zelenko, Aone, and Richardella2003, Haussler1999].

### 2.1 Kernel Locality-Sensitive Hashing (KLSH)

Previously, Kernel Locality-Sensitive Hashing (KLSH) was used for constructing approximate kernelized k-Nearest Neighbor (kNN) graphs [Joly and Buisson2011, Kulis and Grauman2009]. The key idea of KLSH as an approximate technique for finding the nearest neighbors of a data point is that rather than computing its similarity w.r.t. all other data points in a given set, the kernel similarity function is computed only w.r.t. the data points in the bucket of its hashcode (KLSH code). This approximation works well in practice if the hashing approach is locality sensitive, i.e. data points that are very similar to each other are assigned hashcodes with minimal Hamming distance to each other.

Herein, we outline the generic procedure for mapping an arbitrary data point x to a binary kernel-hashcode c, using a KLSH technique that relies upon the convolution kernel function k(·, ·; θ).

Let us consider a set of data points that might include both labeled and unlabeled examples. As a first step in constructing the KLSH codes, we select a random subset S^R of size M, which we call a reference set; this corresponds to the grey dots in the left-most panel of Fig. 3. Typically, the size of the reference set is significantly smaller than the size of the whole dataset, M ≪ N.

Next, let g(x) be a real-valued vector of size M, whose i-th component is the kernel similarity k(x, x_i^R; θ) between the data point x and the i-th element x_i^R of the reference set. Further, let h_l, l = 1, …, H, be a set of binary-valued hash functions that each take g(x) as input and map it to a binary bit, and let h = (h_1, …, h_H). The kernel hashcode representation is then given as c = h(g(x)).

We now describe a specific choice of hash functions based on nearest neighbors, called Random K Nearest Neighbors (RkNN). For a given l, let S_1^l and S_2^l be two randomly selected, equal-sized and non-overlapping subsets of S^R, with |S_1^l| = |S_2^l| = α. Those sets are indicated by red and blue dots in Fig. 3. Furthermore, let λ_1 be the kernel similarity between x and its nearest neighbor in S_1^l, with λ_2 defined similarly for S_2^l (indicated by red and blue arrows in Fig. 3). Then the corresponding hash function is:

h_l(x) = 1 if λ_1 > λ_2, i.e. if max_{x' ∈ S_1^l} k(x, x'; θ) > max_{x' ∈ S_2^l} k(x, x'; θ); otherwise h_l(x) = 0.   (1)

A pictorial illustration of this hashing scheme is provided in Fig. 3, where x's nearest neighbors in either subset are indicated by the red and blue arrows.
^{3}^{3}3A small value of α, i.e. α ≪ M, should ensure that hashcode bits have minimal redundancy w.r.t. each other.
^{4}^{4}4In RkNN, since the subsets are small, using the single nearest neighbor should be optimal [Biau, Cérou, and Guyader2010].
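As a concrete illustration, the RkNN hashing scheme can be sketched as follows. This is a minimal, self-contained example: an RBF kernel on feature vectors stands in for a convolution kernel on NLP structures, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(x, z, theta=1.0):
    # Toy stand-in for a convolution kernel between two NLP structures;
    # here a "structure" is just a feature vector.
    return np.exp(-theta * np.sum((x - z) ** 2))

def make_rknn_hash(num_refs, alpha, rng):
    # One RkNN hash function: two disjoint random subsets of the
    # reference set, each of size alpha.
    idx = rng.choice(num_refs, size=2 * alpha, replace=False)
    s1, s2 = idx[:alpha], idx[alpha:]
    def h(kernel_vec):
        # kernel_vec[i] = k(x, i-th reference example); the bit is 1 iff
        # x's nearest neighbor in subset 1 is closer than in subset 2.
        return int(kernel_vec[s1].max() > kernel_vec[s2].max())
    return h

def klsh_code(x, reference_set, hash_fns, theta=1.0):
    # M kernel computations per structure, reused by all H hash bits.
    kvec = np.array([rbf_kernel(x, z, theta) for z in reference_set])
    return np.array([h(kvec) for h in hash_fns])

rng = np.random.default_rng(0)
reference = rng.normal(size=(20, 5))                                  # M = 20
hash_fns = [make_rknn_hash(20, alpha=3, rng=rng) for _ in range(16)]  # H = 16
code = klsh_code(rng.normal(size=5), reference, hash_fns)
```

Note that generating more bits only adds cheap max-comparisons over the precomputed kernel vector; the M kernel evaluations per structure are computed once.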

The same principle of random sub-sampling is applied in the KLSH techniques previously proposed in [Kulis and Grauman2009, Joly and Buisson2011]. In [Joly and Buisson2011], h_l is built by learning a (random) maximum margin boundary (RMM) that discriminates between the two subsets S_1^l and S_2^l. In [Kulis and Grauman2009], h_l is obtained from an (approximately) random linear hyperplane in the feature space implied by the kernel; this technique is referred to as “Kulis” here.

In summary, we define klsh as the function, parameterized by θ and S^R, that maps an input data point x to its KLSH code c, using the kernel function k and the set of hash functions h as subroutines:

c = klsh(x; θ, S^R)   (2)

Next, in Sec. 3, we propose our approach of learning KLSH codes as generalized representations of NLP structures for classification problems.

## 3 KLSH for Representation Learning

We propose a novel use of KLSH where the hashcodes (KLSH codes) can serve as generalized representations (feature vectors) of the data points. Since the KLSH property of being *locality sensitive* [Indyk and Motwani1998]^{5}^{5}5See a formal definition of locality-sensitive hashing in [Indyk and Motwani1998, Definition 7 in Sec. 4.2]. ensures that data points in the neighborhood of (or within the same) hashcode bucket are similar, hashcodes should serve as a good representation of the data points.

In contrast to the use of KLSH for kNN, after obtaining the hashcodes for data points, we skip the step of computing kernel similarities between data points in the neighboring buckets.
In kNN classifiers using KLSH, a small number of hashcode bits, corresponding to a small number of hashcode buckets, generates a coarse partition of the feature space that is sufficient for approximate computation of a kNN graph.
In our representation learning framework, however, hashcodes must extract enough information about class labels from the data points, so we propose to generate longer hashcodes, i.e. a large number of bits H.
It is worthwhile noting that for a *fixed number of kernel computations* per structure (M), a large number of hashcode bits H can be generated through the randomization principle, with computational cost linear in H.

Unlike regular kernel methods (SVM, kNN, etc.), we use kernels to build an explicit feature space, via KLSH. Referring to Fig. 3, when using the RkNN technique to obtain the hashcode bit h_l(x) for x, the bit should correspond to finding a substructure in x that is also present in its 1-NN from either the set S_1^l or S_2^l, depending on the bit value being 1 or 0. Thus, the hashcode c represents finding important substructures of x in relation to the reference set. The same should apply for the other KLSH techniques.

#### Random Subspaces of Kernel Hashcodes:

The next question is how to use the binary-valued representations for building a good classifier.

Intuitively, not all the bits may match across the hashcodes of NLP structures in training and test datasets; a single classifier learned on all the hashcode bits may overfit to a training dataset. This is especially relevant for bio-information extraction tasks where there is a high possibility of *mismatch between training and test conditions* [Airola et al.2008, Garg et al.2016]; e.g., in biomedical literature, the mismatch can be due to the high diversity of research topics, limited data annotations, variations in writing styles including aspects like hedging, etc.
So we adopt the approach of building an ensemble of classifiers, with each one built on a random subspace of hashcodes [Zhou2012, Ho1998].

For building each classifier in the ensemble, a random subset of the hash bits is selected; for inference on a test NLP structure, we take mean statistics over the inferred probability vectors from each of the classifiers, as is standard practice in ensemble approaches. Another way of building an ensemble from subspaces of hashcodes is bagging [Breiman1996]. If we use a decision tree as the classifier in the ensemble, this corresponds to a random forest [Ho1995, Breiman2001].

It is highly efficient to train a random forest (RF) with a large number of decision trees, even on long *binary* hashcodes, leveraging the fact that decision trees can be very efficient to train and test on binary features.
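A minimal sketch of this setup follows, with synthetic binary codes standing in for real KLSH codes. Scikit-learn's off-the-shelf `RandomForestClassifier` is one convenient choice, with `max_features` playing the role of the random bit-subspace size; this is an illustrative stand-in, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for KLSH codes: H = 1000 binary bits per example,
# with the label depending on a handful of "informative" bits.
n, H = 500, 1000
codes = rng.integers(0, 2, size=(n, H))
labels = (codes[:, 0] & codes[:, 1]) | codes[:, 2]

# Each tree splits on a random subspace of the hash bits (max_features),
# matching the random-subspace ensemble described above.
rf = RandomForestClassifier(n_estimators=100, max_features=30, random_state=0)
rf.fit(codes, labels)
```

Because the features are binary, each split is a single bit test, which keeps both training and inference fast even for long hashcodes.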

### 3.1 Supervised Optimization of KLSH

In this section, we propose a framework for optimization of KLSH codes as generalized representations for a supervised classification task. As described in Sec. 2.1, the mapping of a data point (an NLP structure x) to a KLSH code c depends upon the kernel function and the reference set (Eq. 2). So, within this framework, we optimize the KLSH codes by learning the kernel parameters θ, and optionally the reference set S^R. One important aspect of our optimization setting is that the parameters under optimization are *shared* jointly by all the hash functions, and are not specific to any one of them.

#### Mutual Information as an Objective Function:

Intuitively, we want to generate KLSH codes that are maximally informative about the class labels. Thus, for optimizing the KLSH codes, a natural objective function is the mutual information (MI) between the KLSH codes and the class labels, I(C; Y) [Cover and Thomas2012]:

θ*, S^{R*} = argmax_{θ, S^R} I(C; Y)   (3)

The advantage of MI as the objective, being a fundamental measure of dependence between random variables, is that it is generic enough for optimizing KLSH codes as generalized representations (feature vectors) of data points to be used with any classifier. Unfortunately, exact estimation of the MI function in high-dimensional settings is an extremely difficult problem due to the curse of dimensionality, with present estimators having very high sample complexity [Kraskov, Stögbauer, and Grassberger2004, Walters-Williams and Li2009, Singh and Póczos2014, Gao, Ver Steeg, and Galstyan2015, Han, Jiao, and Weissman2015, Wu and Yang2016, Belghazi et al.2018].^{6}^{6}6The sample complexity of an entropy estimator for a discrete variable distribution is characterized in terms of its support size S, and it is proven to be not less than S/log S [Wu and Yang2016]. Since the support size for hashcodes is exponential in the number of bits, the sample complexity would be prohibitively high unless the dependence between the hashcode bits is exploited. Instead, here we *propose to maximize a novel, computationally efficient, good approximation of the MI function*.

#### Approximation of Mutual Information:

To derive the approximation, we express the mutual information function as I(C; Y) = H(C) − H(C|Y), with H(·) denoting the Shannon entropy. For binary classification, the expression simplifies to:

I(C; Y) = H(C) − p(y=0) H(C|y=0) − p(y=1) H(C|y=1).

To compute the mutual information, we need to efficiently compute the joint entropy of the KLSH code bits, H(C). We *propose a good approximation of H(C)*, as described below; the same applies to H(C|y=0) and H(C|y=1).

H(C) ≈ Σ_{j=1}^H H(C_j) − TC(C; Z)   (4)

TC(C) = Σ_{j=1}^H H(C_j) − H(C)   (5)

In Eq. 4, the first term is the sum of marginal entropies of the KLSH code bits. Marginal entropies of binary variables can be computed efficiently. Now, let us understand how to compute the second term of the approximation (Eq. 4).
Herein, TC(C; Z) describes the amount of *Total Correlation (multivariate mutual information)*^{7}^{7}7“Total correlation” was defined in [Watanabe1960]. within C that can be explained by a latent variable representation Z:

TC(C; Z) = TC(C) − TC(C|Z)   (6)

An interesting aspect of the quantity TC(C; Z) is that one can compute it efficiently for an optimized Z* that explains the maximum possible Total Correlation present in C, s.t. TC(C|Z*) ≈ 0. In [Ver Steeg and Galstyan2014], an unsupervised algorithm called CorEx^{8}^{8}8https://github.com/gregversteeg/CorEx is proposed for obtaining such a latent variable representation. Their algorithm is efficient for binary input variables, demonstrating low sample complexity even for very high-dimensional inputs.
Therefore it is particularly relevant for computing the proposed joint entropy approximation on hashcodes. For practical purposes, the dimension of the latent representation Z can be kept much smaller than the number of KLSH code bits. This helps keep the cost of computing the proposed MI approximation negligible during the optimization (Eq. 3).

Denoting the joint entropy approximation as Ĥ(C), we express the approximation of the mutual information as:

Î(C; Y) = Ĥ(C) − p(y=0) Ĥ(C|y=0) − p(y=1) Ĥ(C|y=1).

For computational efficiency as well as robustness w.r.t. overfitting, *we use small random subsets of the training set for stochastic empirical estimates of the objective*, motivated by the idea of stochastic gradients [Bottou2010].
With a slight abuse of notation, we denote an empirical estimate obtained from a set of samples by the same expression.
Here it is also interesting to note that the computation of the objective is easy to parallelize, since the kernel matrices and hash functions can be computed in parallel.

It is worth noting that in our proposed approximation of the MI, both terms need to be computed. In contrast, in previously proposed variational lower bounds for MI [Barber and Agakov2003, Chalk, Marre, and Tkacik2016, Alemi et al.2017], MI is expressed as I(C; Y) = H(Y) − H(Y|C), so as to obtain a lower bound simply by upper bounding the conditional entropy term with a cross entropy term, while ignoring the first term as a constant. Clearly, these approaches are not using MI in its true sense, but rather conditional entropy (or cross entropy) as the objective. Further, our approximation of MI also allows semi-supervised learning, as the first term is computable even for hashcodes of unlabeled examples.
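To make the computation concrete, here is a rough sketch of the approximation for binary hashcodes. For simplicity it substitutes a crude k-means clustering for CorEx when building the latent variable Z (so the total-correlation term is only a coarse surrogate, clamped at zero); all function names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def marginal_entropy_sum(C):
    # First term of Eq. 4: sum of per-bit entropies.
    return sum(entropy(np.array([q, 1 - q])) for q in C.mean(axis=0))

def tc_explained(C, z):
    # TC(C; Z) for a deterministic latent z: sum_j I(C_j; Z) - H(Z).
    pz = np.bincount(z) / len(z)
    h_z = entropy(pz)
    total = 0.0
    for j in range(C.shape[1]):
        q = C[:, j].mean()
        h_cj = entropy(np.array([q, 1 - q]))
        h_cj_z = sum(pz[v] * entropy(np.array([C[z == v, j].mean(),
                                               1 - C[z == v, j].mean()]))
                     for v in range(len(pz)) if pz[v] > 0)
        total += h_cj - h_cj_z
    return max(0.0, total - h_z)   # clamp: TC(C; Z) is non-negative

def joint_entropy_approx(C, n_latent=4, seed=0):
    # Eq. 4: H(C) ~= sum_j H(C_j) - TC(C; Z), with Z from clustering.
    z = KMeans(n_clusters=n_latent, n_init=10,
               random_state=seed).fit_predict(C)
    return marginal_entropy_sum(C) - tc_explained(C, z)

def mi_approx(C, y):
    # I(C; Y) ~= H^(C) - sum_y p(y) H^(C | y), for binary labels y.
    h_c = joint_entropy_approx(C)
    h_c_y = sum(float(np.mean(y == v)) * joint_entropy_approx(C[y == v])
                for v in (0, 1))
    return h_c - h_c_y
```

On codes whose bits are noisy copies of the label, this estimate is clearly positive, while on random codes it stays near zero.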

#### Algorithms for Optimization:

Using the proposed approximate mutual information function as an objective, one can optimize the kernel parameters either using grid search or an MCMC procedure.

For optimizing the reference set S^R (of size M) as a subset of the available data, via maximization of the same objective, we propose a greedy algorithm with pseudo code in Alg. 1. Initially, S^R is set to a random subset of the data (line 1). Thereafter, the objective is maximized greedily, updating one element of S^R in each greedy step (line 3); greedy maximization of MI-like objectives has been successful previously [Gao, Ver Steeg, and Guestrin2016, Krause, Singh, and Guestrin2008]. Employing the paradigm of stochastic sampling, for estimating the objective, we randomly sample a small subset of the training data along with class labels (line 4). Also, in a single greedy step, we consider only a small random subset of the data as candidates for selection into S^R (line 5); with enough greedy steps, each element of the data should, with high probability, be seen as a candidate at least once by the algorithm. The number of kernel computations required by Alg. 1 is linear in M and in the small sampling-size constants. Note that θ and S^R can be optimized in an iterative manner.
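The greedy procedure can be sketched compactly as follows. The objective is treated as a black box standing in for the stochastic MI estimate, and the toy set-scoring function and all names are illustrative.

```python
import numpy as np

def greedy_reference_selection(pool_size, M, objective,
                               n_candidates=10, rng=None):
    # Alg. 1 sketch: start from a random reference set (line 1), then
    # sweep over positions, trying a few random candidate swaps per
    # position and keeping a swap only if the objective improves.
    rng = rng or np.random.default_rng(0)
    ref = list(rng.choice(pool_size, size=M, replace=False))
    for pos in range(M):                        # one greedy step per element
        cands = rng.choice(pool_size, size=n_candidates, replace=False)
        best_val = objective(ref)
        for c in cands:
            if c in ref:
                continue
            trial = ref.copy()
            trial[pos] = int(c)
            val = objective(trial)
            if val > best_val:
                ref, best_val = trial, val
    return ref

# Toy usage: score a reference set by the summed "utility" of its elements.
utility = np.random.default_rng(1).random(200)
score = lambda idx: float(utility[idx].sum())
selected = greedy_reference_selection(200, M=10, objective=score)
```

Since a swap is accepted only when it improves the objective, the final set scores at least as well as the random initialization.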

### 3.2 Nonstationary Extension for Kernels

One common principle applicable to all convolution kernel functions K(·, ·), defining similarity between two NLP structures, is: *K is expressed in terms of a kernel function k(·, ·) that defines similarity between any two tokens* (node/edge labels in Fig. 1). Common examples of k, from previous works [Culotta and Sorensen2004, Srivastava, Hovy, and Hovy2013], operate on tokens s and t through their corresponding word vectors w_s and w_t. A Gaussian kernel on word vectors, for example, is stationary, i.e. translation invariant [Genton2001], while a cosine-similarity kernel is nonstationary, although it lacks nonstationarity-specific parameters for learning nonstationarity in a data-driven manner.

There are generic nonstationarity-based parameterizations, unexplored in NLP, applicable for extending any kernel k to a nonstationary one, k_{NS}, so as to achieve higher *expressiveness and generalization* in model learning [Paciorek and Schervish2003, Rasmussen2006]. For NLP, the nonstationarity of convolution kernels can be formalized as in Theorem 1; see the longer version of this paper for a proof.

###### Theorem 1.

A convolution kernel K that is a function of the token kernel k is stationary if k is stationary, and vice versa. From a nonstationary k_{NS}, the corresponding extension of K, K_{NS}, is also guaranteed to be a valid nonstationary convolution kernel.

One simple and intuitive nonstationary extension of k is: k_{NS}(s, t) = a_s a_t k(s, t).
Here, a_s, a_t ∈ {0, 1} are nonstationarity-based parameters; for more details, see [Rasmussen2006]; another choice for the nonstationary extension is based on the concept of process convolution, as proposed in [Paciorek and Schervish2003].
If a_s = 0, the token s is completely ignored when computing the convolution kernel similarity of an NLP structure (tree, path, etc.) that contains the token (node or edge label) s w.r.t. another NLP structure. Thus, the additional nonstationary parameters allow convolution kernels to be expressive enough to decide if some substructures of an NLP structure should be ignored explicitly.^{9}^{9}9This approach is explicit in ignoring sub-structures irrelevant for a given task, unlike the (complementary) standard skipping over non-matching substructures in a convolution kernel.
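A minimal sketch of this gating extension on a token kernel follows; the cosine base kernel and all names are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def base_token_kernel(u, v, word_vecs):
    # Base token similarity, e.g. cosine similarity of word vectors.
    a, b = word_vecs[u], word_vecs[v]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ns_token_kernel(u, v, word_vecs, gate):
    # Nonstationary extension k_NS(s, t) = a_s * a_t * k(s, t):
    # a binary gate of 0 makes the token invisible to the kernel.
    return gate.get(u, 1) * gate.get(v, 1) * base_token_kernel(u, v, word_vecs)

word_vecs = {"phosphorylates": np.array([0.9, 0.1]),
             "activates": np.array([0.8, 0.3])}
gate = {"activates": 0}   # gates would be learned, e.g. by MCMC (not shown)
```

With the gate set to 0, any similarity involving that token vanishes, which is exactly the "ignore this substructure" behavior described above.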

Models | (AIMed, BioInfer) | (BioInfer, AIMed)
--- | --- | ---
SVM [Airola et al.2008] | 0.25 | 0.44
SVM [Airola et al.2008] | 0.47 | 0.47
SVM [Miwa et al.2009] | 0.53 | 0.50
SVM [Tikk et al.2010] | 0.41 (0.67, 0.29) | 0.42 (0.27, 0.87)
CNN [Nguyen and Grishman2015] | 0.37 | 0.45
Bi-LSTM [Kavuluru, Rios, and Tran2017] | 0.30 | 0.47
CNN [Peng and Lu2017] | 0.48 (0.40, 0.61) | 0.50 (0.40, 0.66)
RNN [Hsieh et al.2017] | 0.49 | 0.51
CNN-RevGrad [Ganin et al.2016] | 0.43 | 0.47
Bi-LSTM-RevGrad [Ganin et al.2016] | 0.40 | 0.46
Adv-CNN [Rios, Kavuluru, and Lu2018] | 0.54 | 0.49
Adv-Bi-LSTM [Rios, Kavuluru, and Lu2018] | – | 0.49
KLSH-kNN | – (0.41, 0.68) | – (0.38, 0.80)
KLSH-RF | – (0.46, 0.75) | – (0.37, 0.95)

While the above proposed idea of nonstationary kernel extensions for NLP structures remains general, for the experiments, the nonstationary kernel for similarity between tuples with format (edge-label, node-label) is defined as the product of a kernel on edge labels, k_E, and a kernel on node labels, k_N, with the nonstationarity parameters operating only on edge labels. Edge labels come from syntactic or semantic parses of text with a small vocabulary (see syntactic parse-based edge labels in Fig. 1); restricting the nonstationarity parameters to this small vocabulary serves as a measure of robustness to over-fitting. These parameters are learned by maximizing the same objective, using the well-known Metropolis-Hastings MCMC procedure [Hastings1970].
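Under these assumptions, the tuple kernel can be sketched as follows; the exact-match edge kernel and all names are illustrative simplifications.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ns_tuple_kernel(t1, t2, word_vecs, edge_gate):
    # Similarity between (edge-label, node-label) tuples: product of a
    # gated exact-match kernel on edge labels (the nonstationary part)
    # and a word-vector kernel on node labels.
    (e1, n1), (e2, n2) = t1, t2
    k_edge = edge_gate.get(e1, 1) * edge_gate.get(e2, 1) * float(e1 == e2)
    return k_edge * cosine(word_vecs[n1], word_vecs[n2])

word_vecs = {"BRAF": np.array([0.7, 0.7]), "MEK": np.array([0.6, 0.8])}
edge_gate = {"punct": 0}   # e.g. learn to ignore punctuation edges entirely
```

Because the gates multiply only the edge-label kernel, the learned nonstationarity stays within the small edge-label vocabulary, as described above.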

## 4 Experiments

We evaluate our model “KLSH-RF” (kernelized locality-sensitive hashing with random forest) for the biomedical relation extraction task using four public datasets, AIMed, BioInfer, PubMed45, and BioNLP, as described below.^{10}^{10}10The PubMed45 dataset is available here: github.com/sgarg87/big_mech_isi_gg/tree/master/pubmed45_dataset; the other three datasets are here: corpora.informatik.hu-berlin.de. Fig. 1 illustrates that the task is formulated as a binary classification of extraction candidates. For evaluation, it is standard practice to compute precision, recall, and F1 score on the positive class (i.e., identifying valid extractions).

#### Details on Datasets and Structural Features:

*AIMed and BioInfer:* For the AIMed and BioInfer datasets, cross-corpus evaluation has been performed in many previous works [Airola et al.2008, Tikk et al.2010, Peng and Lu2017, Hsieh et al.2017]. Herein, the task is to identify pairs of interacting proteins (PPI) in a sentence while ignoring the interaction type. We follow the same evaluation setup, using Stanford Dependency Graph parses of text sentences to obtain undirected shortest paths as structural features for use with a path kernel (PK) to classify protein-protein pairs.

*PubMed45 & BioNLP:*
We use the PubMed45 and BioNLP datasets for an extensive evaluation of our KLSH-RF model; for more details on the two datasets, see [Garg et al.2016] and [Kim et al.2009, Kim et al.2011, Nédellec et al.2013]. Annotations in these datasets are richer in the sense that a bio-molecular interaction can involve up to two participants, along with an optional catalyst, and an interaction type from an unrestricted list. In the PubMed45 (BioNLP) dataset, 36% (17%) of the “valid” interactions are such that the interaction involves two participants and a catalyst. For both datasets, we use abstract meaning representation (AMR) parses to build subgraph-based or shortest path-based structural features [Banarescu et al.2013], for use with graph kernels (GK) or path kernels (PK) respectively, as done in recent works evaluating these datasets [Garg et al.2016, Rao et al.2017].
For a fair comparison of the classification models, we use the same bio-AMR parser [Pust et al.2015] as in the previous works. In [Garg et al.2016], the PubMed45 dataset is split into 11 subsets for evaluation, at paper level. Keeping one of the subsets for testing, we use the others for training a binary classifier. This procedure is repeated for all 11 subsets in order to obtain the final F1 scores (mean and standard deviation values are reported over the 11 subsets). For the BioNLP dataset [Kim et al.2009, Kim et al.2011, Nédellec et al.2013], we use the training datasets from years 2009, 2011, and 2013 for learning a model, and the development dataset from year 2013 as the test set; the same evaluation setup is followed in [Rao et al.2017].

Models | PubMed45 | PubMed45-ERN | BioNLP
--- | --- | --- | ---
SVM [Garg et al.2016] | – (0.58, 0.43) | – (0.33, 0.45) | – (0.35, 0.67)
LSTM [Rao et al.2017] | N.A. | N.A. | 0.46 (0.51, 0.44)
LSTM | – (0.38, 0.28) | – (0.42, 0.33) | 0.59 (0.89, 0.44)
Bi-LSTM | – (0.59, 0.43) | – (0.45, 0.40) | 0.55 (0.92, 0.39)
LSTM-CNN | – (0.55, 0.50) | – (0.35, 0.40) | 0.60 (0.77, 0.49)
CNN | – (0.46, 0.46) | – (0.36, 0.32) | 0.60 (0.80, 0.48)
KLSH-kNN | – (0.44, 0.53) | – (0.23, 0.29) | – (0.63, 0.57)
KLSH-RF | – (0.63, 0.55) | – (0.51, 0.52) | – (0.78, 0.53)

In addition to the models previously evaluated on these datasets, we also compare our KLSH-RF model to KLSH-kNN (a kNN classifier with the KLSH approximation).

For the PubMed45 and BioNLP datasets, given the lack of evaluations of previous works on them, we perform an extensive empirical evaluation ourselves of competitive neural network models: LSTM, Bi-LSTM, LSTM-CNN, and CNN. After fine-grained tuning, for the PubMed45 & PubMed45-ERN datasets, the tuned neural architecture was a five-layer network, [8, 16, 32, 16, 8], having 8, 16, 32, 16, and 8 nodes, respectively, in the 1st, 2nd, 3rd, 4th, and 5th hidden layers; for the BioNLP dataset, the tuned neural architecture was a two-layer network, [32, 32].

#### Parameter Settings:

We use GK and PK, both using the same word vectors, with kernel parameter settings the same as in [Garg et al.2016, Mooney and Bunescu2005].

The reference set size M doesn't need tuning in our proposed model; there is simply a trade-off between compute cost and accuracy; by default, we keep M = 100. For tuning any other parameters in our model or the competitive models, including the choice of kernel similarity function (PK or GK), we use 10% of the training data, sampled randomly, for validation purposes. From preliminary tuning, we set the remaining model parameters, and choose RMM as the KLSH technique from the three choices discussed in Sec. 2.1; the same parameter values are used across all the experiments unless mentioned otherwise.

When selecting the reference set randomly, we perform 10 trials and report mean statistics. (Variance across these trials is empirically small.) The same applies for KLSH-kNN. When optimizing the reference set with Alg. 1, we use small sampling sizes (the sampling parameters are easy to tune). We employ 4 cores on an i7 processor, with 16GB memory.

### 4.1 Main Results for KLSH-RF

In the following, we evaluate the simplest version of our KLSH-RF model, optimized by learning the kernel parameters θ via maximization of the MI approximation, as described in Sec. 3.1. In summary, our KLSH-RF model outperforms state-of-the-art models consistently across the four datasets, along with very significant speedups in training time w.r.t. traditional kernel classifiers.

#### Results for AIMed and BioInfer Datasets:

In reference to Tab. 1, KLSH-RF gives an F1 score significantly higher than state-of-the-art kernel-based models (6 pts gain in F1 score w.r.t. KLSH-kNN), and consistently outperforms the neural models. When using AIMed for training and BioInfer for testing, there is a tie between Adv-Bi-LSTM [Rios, Kavuluru, and Lu2018] and KLSH-RF. However, KLSH-RF still outperforms their Adv-CNN model by 3 pts; further, the performance of Adv-CNN and Adv-Bi-LSTM is not consistent, giving a low F1 score when training on the BioInfer dataset for testing on AIMed. For the latter setting of AIMed as a test set, we obtain an F1 score improvement by 3 pts w.r.t. the best competitive models, RNN & KLSH-kNN. Overall, the performance of KLSH-RF is more consistent across the two evaluation settings, in comparison to any other competitive model.

The models based on adversarial neural networks [Ganin et al.2016, Rios, Kavuluru, and Lu2018], Adv-CNN, Adv-Bi-LSTM, CNN-RevGrad, Bi-LSTM-RevGrad, are learned jointly on labeled training datasets and unlabeled test sets, whereas our model is purely supervised. In contrast to our principled approach, there are also system-level solutions using multiple parses jointly, along with multiple kernels, and knowledge bases [Miwa et al.2009, Chang et al.2016]. We refrain from comparing KLSH-RF w.r.t. such system level solutions, as it would be an unfair comparison from a modeling perspective.

#### Results for PubMed45 and BioNLP Datasets:

A summary of main results is presented in Tab. 2. “PubMed45-ERN” is another version of the PubMed45 dataset from [Garg et al.2016], with ERN referring to entity recognition noise. Clearly, our model gives F1 scores significantly higher than SVM, LSTM, Bi-LSTM, LSTM-CNN, CNN, and KLSH-kNN model. For PubMed45, PubMed45-ERN, and BioNLP, the F1 score for KLSH-RF is higher by 6 pts, 8 pts, and 3 pts respectively w.r.t. state of the art; KLSH-RF is the most consistent in its performance across the datasets and significantly more scalable than SVM. Note that standard deviations of F1 scores are high for the PubMed45 dataset (and PubMed45-ERN) because of the high variation in distribution of text across the 11 test subsets (the F1 score improvements with our model are statistically significant, p-value=4.4e-8).

For the PubMed45 dataset, there are no previously published results with a neural model (LSTM). The LSTM model of [Rao et al.2017], proposed specifically for the BioNLP dataset, is not directly applicable for the PubMed45 dataset because the list of interaction types in the latter is unrestricted. F1 score numbers for SVM classifier were also improved in [Garg et al.2016] by additional contributions such as document-level inference, and the joint use of semantic and syntactic representations; those system-level contributions are complementary to ours, so excluded from the comparison.

### 4.2 Detailed Analysis of KLSH-RF

While we already obtain superior results w.r.t. state-of-the-art methods with our basic KLSH-RF model, using just the core optimization of the kernel parameters, in this subsection we analyze how the model can be improved further. In Fig. 4 we present results from optimizing other aspects of the KLSH-RF model: reference set optimization (RO) and nonstationary kernel parameter learning (NS). (In the longer version of this paper, we also analyze the effect of other parameters and of the choice of KLSH technique, under controlled experiment settings.) We report mean values for precision, recall, and F1 scores. For these experiments, we focus on the PubMed45 and BioNLP datasets.

#### Reference Set Optimization:

In Fig. 4(a) and 4(b), we analyze the effect of the reference set optimization (RO) in comparison to random selection, and find that the optimization leads to a significant increase in recall (7-13 pts) for the PubMed45 dataset, along with a marginal increase or decrease in precision (2-3 pts); we used PK for these experiments. For the BioNLP dataset, the improvements are not as significant. Further, as expected, the improvement is more prominent for smaller reference set sizes. Optimizing the reference set takes approximately 2 to 3 hours (Alg. 1).

#### Nonstationary Kernel Learning (NSK):

In Fig. 4(c) and 4(d), we compare the performance of nonstationary kernels to that of traditional stationary kernels (M=100). As proposed in Sec. 3.2, the idea is to extend a convolution kernel (PK or GK) with nonstationarity-based binary parameters (NS-PK or NS-GK), optimized using our MCMC procedure by maximizing the proposed MI-approximation objective. For the PubMed45 dataset with PK, the advantage of nonstationary kernel learning is most prominent, leading to a large increase in recall (7 pts) with only a very small drop in precision (1 pt). The compute time for learning the nonstationarity parameters in our KLSH-RF model is less than an hour.

#### Compute Time:

Compute times for training all the models are reported in Fig. 4(e) for the BioNLP dataset; similar time scales apply to the other datasets. We observe that our basic KLSH-RF model has a very low training cost compared to models like LSTM, KLSH-kNN, etc. (a similar analysis applies to inference cost). The extensions of KLSH-RF, KLSH-RF-RO and KLSH-RF-NS, are more expensive, yet still cheaper than LSTM and SVM.

## 5 Related Work

Besides the related work mentioned in the previous sections, this section discusses the relevant state-of-the-art literature in more detail.

#### Other Hashing Techniques:

In addition to the hashing techniques considered in this paper, other locality-sensitive hashing techniques [Grauman and Fergus2013, Zhao, Lu, and Mei2014, Wang et al.2017] are either not kernel-based, or are defined for specific kernels that are not applicable to hashing of NLP structures [Raginsky and Lazebnik2009]. In deep learning, hashcodes have been used for similarity search, but not for classification of objects [Liu et al.2016].

#### Hashcodes for Feature Compression:

Binary hashing (not KLSH) has been used as an approximate feature-compression technique to reduce memory and computing costs [Li et al.2011, Mu et al.2014]. Unlike these prior approaches, this work proposes to use hashing as a representation learning (feature extraction) technique.

#### Using Hashcodes in NLP:

In NLP, hashcodes have been used only for similarity or nearest-neighbor search over words/tokens in various NLP tasks [Goyal, Daumé III, and Guerra2012, Li, Liu, and Ji2014, Shi and Knight2017]; our work is the first to explore kernel-hashing of general NLP structures, rather than just tokens.

#### Weighting Substructures:

Our idea of skipping substructures, which follows from our principled approach of nonstationary kernels, is somewhat similar to substructure mining algorithms [Suzuki and Isozaki2006, Severyn and Moschitti2013]. Learning the weights of substructures has recently been proposed for regression problems, but not yet for classification [Beck et al.2015].

#### Kernel Approximations:

Besides the proposed model, there are other kernel-based scalable techniques in the literature, which rely on approximations of a kernel matrix or a kernel function [Williams and Seeger2001, Moschitti2006, Rahimi and Recht2008, Pighin and Moschitti2009, Zanzotto and Dell'Arciprete2012, Severyn and Moschitti2013, Felix et al.2016]. However, those approaches serve only as computationally efficient approximations of traditional, computationally expensive kernel-based classifiers; unlike them, our method is not only computationally more efficient but also yields considerable accuracy improvements.

#### Nonstationary Kernels:

Nonstationary kernels have been explored for modeling spatio-temporal environmental dynamics and time series relevant to health care, finance, etc., though they are expensive to learn due to a prohibitively large number of latent variables [Paciorek and Schervish2003, Snelson, Rasmussen, and Ghahramani2003, Assael et al.2014]. Ours is the first work proposing nonstationary convolution kernels for natural language modeling; the number of parameters is constant in our formulation, making it highly efficient in contrast to the previous works.

## 6 Conclusions

In this paper, we propose to use a well-known technique, kernelized locality-sensitive hashing (KLSH), to derive feature vectors from natural language structures. More specifically, we propose to use random subspaces of KLSH codes for building a random forest of decision trees. We find this methodology particularly suitable for modeling natural language structures in supervised settings where there are significant mismatches between the training and the test conditions. Moreover, we optimize the KLSH model in the context of classification with a random forest by maximizing an approximation of the mutual information between the KLSH codes (feature vectors) and the class labels. We apply the proposed approach to the difficult task of extracting information about biomolecular interactions from semantic or syntactic parses of scientific papers. Experiments on a wide range of datasets demonstrate the considerable advantages of our method.

## 7 Acknowledgments

This work was sponsored by the DARPA Big Mechanism program (W911NF-14-1-0364). It is our pleasure to acknowledge fruitful discussions with Karen Hambardzumyan, Hrant Khachatrian, David Kale, Kevin Knight, Daniel Marcu, Shrikanth Narayanan, Michael Pust, Kyle Reing, Xiang Ren, and Gaurav Sukhatme. We are also grateful to anonymous reviewers for their valuable feedback.

## References

- [Airola et al.2008] Airola, A.; Pyysalo, S.; Björne, J.; Pahikkala, T.; Ginter, F.; and Salakoski, T. 2008. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC Bioinformatics.
- [Alemi et al.2017] Alemi, A.; Fischer, I.; Dillon, J.; and Murphy, K. 2017. Deep variational information bottleneck.
- [Assael et al.2014] Assael, J.-A. M.; Wang, Z.; Shahriari, B.; and de Freitas, N. 2014. Heteroscedastic treed bayesian optimisation. arXiv preprint arXiv:1410.7172.
- [Banarescu et al.2013] Banarescu, L.; Bonial, C.; Cai, S.; Georgescu, M.; Griffitt, K.; Hermjakob, U.; Knight, K.; Koehn, P.; Palmer, M.; and Schneider, N. 2013. Abstract meaning representation for sembanking. In Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse.
- [Barber and Agakov2003] Barber, D., and Agakov, F. 2003. The im algorithm: a variational approach to information maximization. In Proc. of NIPS.
- [Beck et al.2015] Beck, D.; Cohn, T.; Hardmeier, C.; and Specia, L. 2015. Learning structural kernels for natural language processing. Transactions of ACL.
- [Belghazi et al.2018] Belghazi, M. I.; Baratin, A.; Rajeshwar, S.; Ozair, S.; Bengio, Y.; Courville, A.; and Hjelm, D. 2018. Mutual information neural estimation. In Proc. of ICML.
- [Biau, Cérou, and Guyader2010] Biau, G.; Cérou, F.; and Guyader, A. 2010. On the rate of convergence of the bagged nearest neighbor estimate. JMLR.
- [Bottou2010] Bottou, L. 2010. Large-scale machine learning with stochastic gradient descent. In Proc. of COMPSTAT.
- [Breiman1996] Breiman, L. 1996. Bagging predictors. Machine learning.
- [Breiman2001] Breiman, L. 2001. Random forests. Machine learning.
- [Chalk, Marre, and Tkacik2016] Chalk, M.; Marre, O.; and Tkacik, G. 2016. Relevant sparse codes with variational information bottleneck. In Proc. of NIPS.
- [Chang et al.2016] Chang, Y.-C.; Chu, C.-H.; Su, Y.-C.; Chen, C. C.; and Hsu, W.-L. 2016. Pipe: a protein–protein interaction passage extraction module for biocreative challenge. Database.
- [Cohen2015] Cohen, P. R. 2015. Darpa’s big mechanism program. Physical biology.
- [Collins and Duffy2002] Collins, M., and Duffy, N. 2002. Convolution kernels for natural language. In Advances in neural information processing systems, 625–632.
- [Cover and Thomas2012] Cover, T. M., and Thomas, J. A. 2012. Elements of information theory.
- [Culotta and Sorensen2004] Culotta, A., and Sorensen, J. 2004. Dependency tree kernels for relation extraction. In Proc. of ACL.
- [Felix et al.2016] Felix, X. Y.; Suresh, A. T.; Choromanski, K. M.; Holtmann-Rice, D. N.; and Kumar, S. 2016. Orthogonal random features. In Proc. of NIPS.
- [Ganin et al.2016] Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; and Lempitsky, V. 2016. Domain-adversarial training of neural networks. JMLR.
- [Gao, Ver Steeg, and Galstyan2015] Gao, S.; Ver Steeg, G.; and Galstyan, A. 2015. Efficient estimation of mutual information for strongly dependent variables. In Proc. of AISTATS.
- [Gao, Ver Steeg, and Galstyan2016] Gao, S.; Ver Steeg, G.; and Galstyan, A. 2016. Variational information maximization for feature selection. In Proc. of NIPS.
- [Garg et al.2016] Garg, S.; Galstyan, A.; Hermjakob, U.; and Marcu, D. 2016. Extracting biomolecular interactions using semantic parsing of biomedical text. In Proc. of AAAI.
- [Genton2001] Genton, M. G. 2001. Classes of kernels for machine learning: a statistics perspective. JMLR.
- [Goyal, Daumé III, and Guerra2012] Goyal, A.; Daumé III, H.; and Guerra, R. 2012. Fast large-scale approximate graph construction for nlp. In Proc. of EMNLP.
- [Grauman and Fergus2013] Grauman, K., and Fergus, R. 2013. Learning binary hash codes for large-scale image search. Machine learning for computer vision.
- [Hahn and Surdeanu2015] Hahn, M. A. V.-E. G., and Surdeanu, P. T. H. M. 2015. A domain-independent rule-based framework for event extraction. In Proc. of ACL-IJCNLP System Demonstrations.
- [Han, Jiao, and Weissman2015] Han, Y.; Jiao, J.; and Weissman, T. 2015. Adaptive estimation of shannon entropy. In Proc. of IEEE International Symposium on Information Theory.
- [Hastings1970] Hastings, W. K. 1970. Monte carlo sampling methods using markov chains and their applications. Biometrika.
- [Haussler1999] Haussler, D. 1999. Convolution kernels on discrete structures. Technical report.
- [Ho1995] Ho, T. K. 1995. Random decision forests. In Proceedings of the Third International Conference on Document Analysis and Recognition.
- [Ho1998] Ho, T. K. 1998. The random subspace method for constructing decision forests. IEEE transactions on pattern analysis and machine intelligence.
- [Hsieh et al.2017] Hsieh, Y.-L.; Chang, Y.-C.; Chang, N.-W.; and Hsu, W.-L. 2017. Identifying protein-protein interactions in biomedical literature using recurrent neural networks with long short-term memory. In Proc. of IJCNLP.
- [Indyk and Motwani1998] Indyk, P., and Motwani, R. 1998. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proc. of STOC.
- [Joly and Buisson2011] Joly, A., and Buisson, O. 2011. Random maximum margin hashing. In Proc. of CVPR.
- [Kavuluru, Rios, and Tran2017] Kavuluru, R.; Rios, A.; and Tran, T. 2017. Extracting drug-drug interactions with word and character-level recurrent neural networks. In Proc. of IEEE International Conference on Healthcare Informatics.
- [Kim et al.2009] Kim, J.-D.; Ohta, T.; Pyysalo, S.; Kano, Y.; and Tsujii, J. 2009. Overview of bionlp’09 shared task on event extraction. In Proc. of BioNLP Workshop.
- [Kim et al.2011] Kim, J.-D.; Pyysalo, S.; Ohta, T.; Bossy, R.; Nguyen, N.; and Tsujii, J. 2011. Overview of bionlp shared task 2011. In Proc. of BioNLP Workshop.
- [Krallinger et al.2008] Krallinger, M.; Leitner, F.; Rodriguez-Penagos, C.; and Valencia, A. 2008. Overview of the protein-protein interaction annotation extraction task of biocreative ii. Genome biology.
- [Kraskov, Stögbauer, and Grassberger2004] Kraskov, A.; Stögbauer, H.; and Grassberger, P. 2004. Estimating mutual information. Physical Review E.
- [Krause, Singh, and Guestrin2008] Krause, A.; Singh, A.; and Guestrin, C. 2008. Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies. JMLR.
- [Kulis and Grauman2009] Kulis, B., and Grauman, K. 2009. Kernelized locality-sensitive hashing for scalable image search. In Proc. of CVPR.
- [Li et al.2011] Li, P.; Shrivastava, A.; Moore, J. L.; and König, A. C. 2011. Hashing algorithms for large-scale learning. In Proc. of NIPS.
- [Li, Liu, and Ji2014] Li, H.; Liu, W.; and Ji, H. 2014. Two-stage hashing for fast document retrieval. In Proc. of ACL.
- [Liu et al.2016] Liu, H.; Wang, R.; Shan, S.; and Chen, X. 2016. Deep supervised hashing for fast image retrieval. In Proc. of CVPR.
- [Miwa et al.2009] Miwa, M.; Sætre, R.; Miyao, Y.; and Tsujii, J. 2009. Protein–protein interaction extraction by leveraging multiple kernels and parsers. International Journal of Medical Informatics.
- [Mooney and Bunescu2005] Mooney, R. J., and Bunescu, R. C. 2005. Subsequence kernels for relation extraction. In Proc. of NIPS.
- [Moschitti2006] Moschitti, A. 2006. Making tree kernels practical for natural language learning. In Proc. of EACL.
- [Mu et al.2014] Mu, Y.; Hua, G.; Fan, W.; and Chang, S.-F. 2014. Hash-svm: Scalable kernel machines for large-scale visual classification. In Proc. of CVPR.
- [Nédellec et al.2013] Nédellec, C.; Bossy, R.; Kim, J.-D.; Kim, J.-J.; Ohta, T.; Pyysalo, S.; and Zweigenbaum, P. 2013. Overview of bionlp shared task 2013. In Proc. of BioNLP Workshop.
- [Nguyen and Grishman2015] Nguyen, T. H., and Grishman, R. 2015. Relation extraction: Perspective from convolutional neural networks. In Proc. of the Workshop on Vector Space Modeling for Natural Language Processing.
- [Paciorek and Schervish2003] Paciorek, C. J., and Schervish, M. J. 2003. Nonstationary covariance functions for gaussian process regression. In Proc. of NIPS.
- [Peng and Lu2017] Peng, Y., and Lu, Z. 2017. Deep learning for extracting protein-protein interactions from biomedical literature. In Proc. of BioNLP Workshop.
- [Pighin and Moschitti2009] Pighin, D., and Moschitti, A. 2009. Efficient linearization of tree kernel functions. In Proc. of CoNLL.
- [Pust et al.2015] Pust, M.; Hermjakob, U.; Knight, K.; Marcu, D.; and May, J. 2015. Parsing english into abstract meaning representation using syntax-based machine translation. In Proc. of EMNLP.
- [Raginsky and Lazebnik2009] Raginsky, M., and Lazebnik, S. 2009. Locality-sensitive binary codes from shift-invariant kernels. In Proc. of NIPS.
- [Rahimi and Recht2008] Rahimi, A., and Recht, B. 2008. Random features for large-scale kernel machines. In Proc. of NIPS.
- [Rao et al.2017] Rao, S.; Marcu, D.; Knight, K.; and Daumé III, H. 2017. Biomedical event extraction using abstract meaning representation. In Proc. of BioNLP Workshop.
- [Rasmussen2006] Rasmussen, C. E. 2006. Gaussian processes for machine learning.
- [Rios, Kavuluru, and Lu2018] Rios, A.; Kavuluru, R.; and Lu, Z. 2018. Generalizing biomedical relation classification with neural adversarial domain adaptation. Bioinformatics.
- [Rzhetsky] Rzhetsky, A. The big mechanism program: Changing how science is done. In DAMDID/RCDL.
- [Severyn and Moschitti2013] Severyn, A., and Moschitti, A. 2013. Fast linearization of tree kernels over large-scale data. In Proc. of IJCAI.
- [Shi and Knight2017] Shi, X., and Knight, K. 2017. Speeding up neural machine translation decoding by shrinking run-time vocabulary. In Proc. of ACL.
- [Singh and Póczos2014] Singh, S., and Póczos, B. 2014. Generalized exponential concentration inequality for rényi divergence estimation. In Proc. of ICML.
- [Snelson, Rasmussen, and Ghahramani2003] Snelson, E.; Rasmussen, C. E.; and Ghahramani, Z. 2003. Warped gaussian processes. In Proc. of NIPS.
- [Srivastava, Hovy, and Hovy2013] Srivastava, S.; Hovy, D.; and Hovy, E. H. 2013. A walk-based semantically enriched tree kernel over distributed word representations. In Proc. of EMNLP.
- [Suzuki and Isozaki2006] Suzuki, J., and Isozaki, H. 2006. Sequence and tree kernels with statistical feature mining. In Proc. of NIPS.
- [Tikk et al.2010] Tikk, D.; Thomas, P.; Palaga, P.; Hakenberg, J.; and Leser, U. 2010. A comprehensive benchmark of kernel methods to extract protein–protein interactions from literature. PLoS Comput Biol.
- [Valenzuela-Escárcega et al.2017] Valenzuela-Escárcega, M. A.; Babur, O.; Hahn-Powell, G.; Bell, D.; Hicks, T.; Noriega-Atala, E.; Wang, X.; Surdeanu, M.; Demir, E.; and Morrison, C. T. 2017. Large-scale automated reading with reach discovers new cancer driving mechanisms.
- [Ver Steeg and Galstyan2014] Ver Steeg, G., and Galstyan, A. 2014. Discovering structure in high-dimensional data through correlation explanation. In Proc. of NIPS.
- [Walters-Williams and Li2009] Walters-Williams, J., and Li, Y. 2009. Estimation of mutual information: A survey. In Proc. of International Conference on Rough Sets and Knowledge Technology.
- [Wang et al.2017] Wang, J.; Zhang, T.; Sebe, N.; Shen, H. T.; et al. 2017. A survey on learning to hash. TPAMI.
- [Watanabe1960] Watanabe, S. 1960. Information theoretical analysis of multivariate correlation. IBM Journal of research and development.
- [Williams and Seeger2001] Williams, C. K., and Seeger, M. 2001. Using the nyström method to speed up kernel machines. In Proc. of NIPS.
- [Wu and Yang2016] Wu, Y., and Yang, P. 2016. Minimax rates of entropy estimation on large alphabets via best polynomial approximation. IEEE Transactions on Information Theory.
- [Zanzotto and Dell’Arciprete2012] Zanzotto, F., and Dell’Arciprete, L. 2012. Distributed tree kernels. In Proc. of ICML.
- [Zelenko, Aone, and Richardella2003] Zelenko, D.; Aone, C.; and Richardella, A. 2003. Kernel methods for relation extraction. JMLR.
- [Zhao, Lu, and Mei2014] Zhao, K.; Lu, H.; and Mei, J. 2014. Locality preserving hashing. In Proc. of AAAI.
- [Zhou2012] Zhou, Z.-H. 2012. Ensemble methods: foundations and algorithms.

## Appendix A Dataset Statistics

The number of valid/invalid extractions in each dataset is shown in Tab. 3.

| Datasets | No. of Valid Extractions | No. of Invalid Extractions |
|---|---|---|
| PubMed45 | 2,794 | 20,102 |
| BioNLP | 6,527 | 34,958 |
| AIMed | 1,000 | 4,834 |
| BioInfer | 2,534 | 7,132 |

## Appendix B Nonstationarity of Convolution Kernels for NLP

###### Definition 1 (Stationary kernel [Genton2001]).

A *stationary* kernel between vectors $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^d$ is one that is translation invariant:

$$k(\boldsymbol{x}, \boldsymbol{y}) = k^{S}(\boldsymbol{x} - \boldsymbol{y});$$

that is, it depends only upon the lag vector $\boldsymbol{x} - \boldsymbol{y}$, and not on the data points themselves.

In the NLP context, stationarity of convolution kernels is formalized as follows.

###### Theorem 2.

A convolution kernel $K(\cdot, \cdot)$, defined as a function of an underlying kernel $k(\cdot, \cdot)$, is stationary if $k$ is stationary. Moreover, from a nonstationary $k^{NS}$, the corresponding extension $K^{NS}$ of $K$ is also guaranteed to be a valid nonstationary convolution kernel.

###### Proof.

Suppose we have a vocabulary set $\mathcal{V}$, and we randomly generate a set of discrete structures $\mathcal{S}$ using $\mathcal{V}$. For a kernel $k(\cdot, \cdot)$ that defines the similarity between a pair of labels, consider the stationary case, in which its value is invariant w.r.t. a translation of a label $l$ to $l + t$. Replacing the labels in the structures with their translated counterparts, we obtain a set of new structures $\mathcal{S}^t$. Using a convolution kernel $K$, as a function of $k$, we obtain the same kernel (Gram) matrix on the set $\mathcal{S}^t$ as on $\mathcal{S}$. Thus $K$ is also invariant w.r.t. the translation of the structure set $\mathcal{S}$ to $\mathcal{S}^t$, hence a stationary kernel (Def. 1). For establishing the nonstationarity property, following the above logic with a nonstationary $k^{NS}$, we obtain a Gram matrix on the set $\mathcal{S}^t$ that differs from the one on $\mathcal{S}$. Therefore $K^{NS}$ is not invariant w.r.t. the translation of the set $\mathcal{S}$ to $\mathcal{S}^t$, hence a nonstationary kernel (Def. 1). ∎
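To make the intuition above concrete, here is a minimal sketch of how a label kernel can be made nonstationary by attaching binary relevance weights to labels; the function name and the exact gating form are illustrative assumptions, not necessarily the paper's parameterization.

```python
def make_nonstationary(k, beta):
    """Wrap a label kernel k with binary relevance weights.

    `beta` maps each label to 0 or 1; labels weighted 0 are effectively
    skipped. Because the weight depends on the label itself (not just a
    lag between labels), the resulting kernel is not translation
    invariant, i.e., it is nonstationary.
    """
    def k_ns(a, b):
        # Labels absent from beta default to weight 1 (kept as-is).
        return beta.get(a, 1) * beta.get(b, 1) * k(a, b)
    return k_ns
```

Any convolution kernel built on top of `k_ns` then ignores the gated-out labels, which is precisely what breaks the invariance used in the proof above.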

## Appendix C Brief on MCMC Procedure for Optimizing Nonstationary Parameters

Denoting all the nonstationary parameters as $\boldsymbol{\beta}$, we initialize $\boldsymbol{\beta}$ as the first sample of the MCMC. To produce a new sample $\boldsymbol{\beta}^*$ from the current sample $\boldsymbol{\beta}$ in the Markov chain, we randomly pick one of the parameters and flip its binary value from 0 to 1 or vice versa. The new sample is accepted with probability $\min\{1, \mathcal{I}(\boldsymbol{c}^*; \boldsymbol{y}) / \mathcal{I}(\boldsymbol{c}; \boldsymbol{y})\}$, with hashcodes $\boldsymbol{c}^*$ and $\boldsymbol{c}$ computed using the kernel parameter samples $\boldsymbol{\beta}^*$ and $\boldsymbol{\beta}$, respectively. This procedure is performed for a fixed number of samples, after which the MCMC sample with the highest objective value is accepted.
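The accept/reject loop described above can be sketched as follows; `objective` stands in for the MI-approximation score of the hashcodes induced by a parameter sample, and the function name and defaults are illustrative assumptions.

```python
import random

def mcmc_optimize(num_params, objective, num_samples=2000, seed=0):
    """Metropolis-style search over binary nonstationarity parameters.

    `objective(beta)` must return a positive score; it stands in for the
    MI approximation between the hashcodes computed under `beta` and the
    class labels. Returns the best sample seen and its score.
    """
    rng = random.Random(seed)
    beta = [0] * num_params              # initial sample of the chain
    score = objective(beta)
    best_beta, best_score = beta[:], score
    for _ in range(num_samples):
        proposal = beta[:]
        i = rng.randrange(num_params)
        proposal[i] ^= 1                 # flip one random binary parameter
        new_score = objective(proposal)
        # Accept improvements always; otherwise accept with probability
        # equal to the ratio of objective values (Metropolis rule).
        if new_score >= score or rng.random() < new_score / max(score, 1e-12):
            beta, score = proposal, new_score
            if score > best_score:
                best_beta, best_score = beta[:], score
    return best_beta, best_score
```

With a monotone toy objective such as `lambda b: sum(b) + 1.0`, the chain drifts toward the all-ones configuration while still occasionally exploring worse states.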

## Appendix D More Experiments

##### Analyzing Hashing Parameters

In Fig. 5(a) and 5(b), we compare the performance of all three KLSH techniques within our model. For these experiments, the relevant hashing parameter is held fixed. We found that Kulis is highly sensitive to its value, in contrast to RMM and RkNN; accuracy numbers drop with Kulis for higher values (those results are not shown here).
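As a rough illustration of the Random-kNN-style hashing compared here, each bit of a kernelized hashcode can be derived by comparing an input's kernel similarity to two random subsets of the reference set; this is a simplified sketch under assumed names, not the exact construction of any of the three techniques.

```python
import random

def klsh_bits(x, reference_set, kernel, num_bits=8, subset_size=3, seed=0):
    """Compute a kernelized locality-sensitive hashcode for `x`.

    Each bit compares x's maximum kernel similarity to two disjoint
    random subsets of the reference set; similar inputs tend to receive
    similar codes because they favor the same subsets.
    """
    rng = random.Random(seed)            # fixed seed: same hash functions
    bits = []
    for _ in range(num_bits):
        pool = rng.sample(range(len(reference_set)), 2 * subset_size)
        a, b = pool[:subset_size], pool[subset_size:]
        sim_a = max(kernel(x, reference_set[i]) for i in a)
        sim_b = max(kernel(x, reference_set[i]) for i in b)
        bits.append(1 if sim_a > sim_b else 0)
    return bits
```

Only kernel evaluations against the (small) reference set are needed per input, which is what makes hashing cheap relative to computing full Gram matrices.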

For the PubMed45 dataset, we also vary the bagging parameters (using PK). As mentioned previously, for obtaining random subspaces of kernel hashcodes, we can either use bagging, i.e., random subsets of the training dataset (with resampling), or explicitly take random subsets of the hashcode bits. In Fig. 5(c) and Fig. 5(d), we present results for both approaches, as two variants of our KLSH-RF model, with PK. We can see that the gain in accuracy from increasing the number of decision trees is marginal beyond a minimal threshold, while for a low number of decision trees (15, 30) the F1 score drops significantly. In Fig. 5(c), we decrease this value only down to 100, since the number of sampled hashcode bits is 30. We also note that, despite the high number of hashcode bits, classification accuracy improves only if we have a minimal number of decision trees.

## Appendix E Convolution Kernels Expressions in Experiments

Convolution kernels belong to a class of kernels that compute similarity between discrete structures [Haussler1999, Collins and Duffy2002]. In essence, the convolution-kernel similarity function $K(S_1, S_2)$ between two discrete structures $S_1$ and $S_2$ is defined in terms of a function $k(\cdot, \cdot)$ that characterizes the similarity between a pair of tuples or labels. In the following, we describe the exact expressions for the convolution kernels used in our experiments, while the proposed approaches are generically applicable to any convolution kernel operating on NLP structures.

##### Graph/Tree Kernels

In [Zelenko, Aone, and Richardella2003, Garg et al.2016], the kernel similarity between two trees $T_1$ and $T_2$ is defined recursively. If the root labels do not match, i.e., $k(T_1.r, T_2.r) = 0$, then $K(T_1, T_2) = 0$; otherwise

$$K(T_1, T_2) = k(T_1.r, T_2.r) + K_c(T_1.c, T_2.c),$$

where $K_c(T_1.c, T_2.c)$ is a kernel over the child subsequences under the root nodes $T_1.r$ and $T_2.r$: it sums, over pairs of equal-length child subsequences, the kernels $K$ between the subtrees rooted at the corresponding child nodes, down-weighting longer subsequences by a decay factor $\lambda$. Note that we use exactly the same formulation as in [Garg et al.2016].
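A much-simplified recursive sketch of such a tree kernel is given below; it aligns children by position instead of summing over child subsequences, so it only illustrates the recursive structure, not the full formulation.

```python
def tree_kernel(t1, t2, node_kernel, lam=0.5):
    """Simplified convolution kernel between trees.

    A tree is a tuple (label, children). The similarity of two trees is
    the similarity of their root labels plus a decayed sum over
    recursively compared children. Children are aligned by position, a
    simplification of the subsequence-based child kernel.
    """
    label1, kids1 = t1
    label2, kids2 = t2
    k_root = node_kernel(label1, label2)
    if k_root == 0:
        return 0.0                       # mismatched roots contribute nothing
    k_children = sum(
        tree_kernel(c1, c2, node_kernel, lam)
        for c1, c2 in zip(kids1, kids2)
    )
    return k_root + lam * k_children
```

The decay factor `lam` plays the same role as the penalty on long child subsequences in the full kernel: deeper matches contribute less.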

##### Path/Subsequence Kernels

Let $S_1$ and $S_2$ be two sequences of tuples; following [Mooney and Bunescu2005], the kernel is defined as

$$K(S_1, S_2) = \sum_{\boldsymbol{i}, \boldsymbol{j} : |\boldsymbol{i}| = |\boldsymbol{j}|} \lambda^{l(\boldsymbol{i}) + l(\boldsymbol{j})} \prod_{s=1}^{|\boldsymbol{i}|} k\big(S_1[\boldsymbol{i}_s], S_2[\boldsymbol{j}_s]\big).$$

Here, $\prod_s k(S_1[\boldsymbol{i}_s], S_2[\boldsymbol{j}_s])$ is the similarity between the tuples in the subsequences indexed by $\boldsymbol{i}$ and $\boldsymbol{j}$, of equal length; $l(\cdot)$ is the actual length of a subsequence in the corresponding sequence, i.e., the difference between its end index and start index (subsequences do not have to be contiguous); $\lambda \in (0, 1)$ is used to penalize long subsequences.

For both kernels above, dynamic programming is used for efficient computation.
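For intuition, the subsequence kernel above can also be computed by naive enumeration (the dynamic-programming version is what makes it practical); this brute-force sketch is exponential in sequence length and is for illustration only.

```python
from itertools import combinations

def subsequence_kernel(s1, s2, tuple_kernel, lam=0.75):
    """Naive subsequence kernel between two sequences of tuples.

    Enumerates all pairs of equal-length (non-contiguous) subsequences,
    multiplies per-position tuple similarities, and penalizes each
    subsequence by lam raised to its span (end index minus start index).
    """
    total = 0.0
    for n in range(1, min(len(s1), len(s2)) + 1):
        for idx1 in combinations(range(len(s1)), n):
            for idx2 in combinations(range(len(s2)), n):
                sim = 1.0
                for i, j in zip(idx1, idx2):
                    sim *= tuple_kernel(s1[i], s2[j])
                    if sim == 0.0:
                        break            # a zero factor kills this term
                span = (idx1[-1] - idx1[0]) + (idx2[-1] - idx2[0])
                total += (lam ** span) * sim
    return total
```

For example, with a 0/1 label match as `tuple_kernel` and `lam=0.5`, comparing `["a", "b"]` with itself yields 2.25: two single-token matches of span 0 contribute 1 each, and the full-sequence match contributes 0.5².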