Subgraph Similarity Search in Large Graphs
One of the major challenges in applications related to social networks, computational biology, collaboration networks, etc., is to efficiently search for similar patterns in their underlying graphs. These graphs are typically noisy and contain thousands of vertices and millions of edges. In many cases, the graphs are unlabeled and the notion of similarity is not well defined. We study the problem of searching for an induced subgraph of a large target graph that is most similar to a given query graph, assuming that both the query graph and the target graph are undirected and unlabeled. We use graphlet kernels to define graph similarity; graphlet kernels are known to perform better than other kernels in several applications.
Our algorithm maps topological neighborhood information of vertices in the query and target graphs to vectors. This local topological information is then combined to find a target subgraph whose global topology is highly similar to that of the given query graph. We tested our algorithm on several real world networks, such as the Facebook, Google Plus, YouTube and Amazon networks, most of which contain thousands of vertices and millions of edges. Our algorithm detects highly similar matches when queried on these networks. Our multi-threaded implementation takes about one second to find the match on a 32 core machine, excluding the time for one time preprocessing. The computationally expensive parts of our algorithm can further be scaled to standard parallel and distributed frameworks like map-reduce.
Similarity based graph searching has attracted considerable attention in the context of social networks, road networks, collaboration networks, software testing, computational biology, molecular chemistry, etc. In these domains, the underlying graphs are large, with tens of thousands of vertices and millions of edges. Subgraph searching is fundamental to these applications, where occurrences of a query graph in the large target graph have to be identified. Searching for an exact occurrence of an induced subgraph isomorphic to the query graph is known as the subgraph isomorphism problem, which is NP-complete for undirected unlabeled graphs.
The presence of noise in the underlying graphs and the need to search for 'similar' subgraph patterns are characteristic of these applications. For instance, in computational biology, the data is noisy due to possible errors in data collection and differing experimental thresholds. In object-oriented programming, querying typical object usage patterns against the target object dependency graph of a program run can identify deviating locations indicating potential bugs . In molecular chemistry, identifying similar molecular structures is a fundamental problem. Searching for similar subgraphs also plays a crucial role in the mining and analysis of social networks. Subgraph similarity search is therefore more natural in these settings than exact search. In the subgraph similarity search problem, an induced subgraph of the target graph that is 'most similar' to the query graph has to be identified, where similarity is defined using some distance function. Solution quality and computational efficiency are the two major challenges in these search problems. In this work, we assume that both the target graph and the query graph are unlabeled and undirected.
Most applications work with a distance metric to define similarity between two entities (graphs in our case). Popular distance metrics include Euclidean distance, Hamming distance, Edit distance, Kernel functions  etc. We use graph kernel functions to define graph similarity.
Kernels are symmetric functions that map pairs of entities from a domain to real values indicating their similarity. Positive definite kernels not only define similarity between pairs of entities but also allow implicitly mapping objects to a high-dimensional feature space and operating in that space without ever computing the explicit mapping: the kernel yields inner products between feature vectors without computing them in the feature space, which is usually computationally cheaper. This approach is referred to as the kernel trick or kernel method. Kernel methods have been widely applied to sequence data, graphs, text, images, videos, etc., as many standard machine learning algorithms, including support vector machines (SVM) and principal component analysis (PCA), can work directly with kernels.
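As a textbook illustration of the kernel trick (not part of this paper's method), the degree-2 polynomial kernel computed directly on low-dimensional inputs agrees with the inner product of an explicit quadratic feature map, without ever materializing that map:

```python
import math

# Illustration of the kernel trick with the degree-2 polynomial kernel:
# k(x, y) = (x . y)^2 equals the inner product <phi(x), phi(y)> in an
# explicit quadratic feature space, but never materializes phi.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_kernel(x, y):
    # implicit computation: stays in the 2-D input space
    return dot(x, y) ** 2

def feature_map(x):
    # explicit degree-2 feature space for a 2-D input vector
    return [x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1]]
```

For any 2-D inputs, `poly_kernel(x, y)` and `dot(feature_map(x), feature_map(y))` coincide; the implicit form avoids the higher-dimensional computation.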
Kernels have been successfully applied in the past in the context of graphs . There are several existing graph kernels based on various graph properties, such as random walks in the graphs , cyclic patterns , graph edit distance , shortest paths , frequency of occurrences of special subgraphs  and so on.
Graphlet kernels are defined based on the occurrence frequencies of small induced subgraphs, called graphlets, in the given graphs . Graphlet kernels have been shown to provide good SVM classification accuracy in comparison to the random walk kernel and the shortest path kernel on different data sets, including protein and enzyme data . Graphlet kernels are also of theoretical interest: it is known that under certain restricted settings, if two graphs have distance zero with respect to their graphlet kernel value then they are isomorphic . Improving the efficiency of computing the graphlet kernel has also been studied . Graphlet kernel computation can moreover be scaled to parallel and distributed settings in a fairly straightforward manner. In our work, we use graphlet kernels to define graph similarity.
Similarity based graph searching has been studied in the past under various settings. In many of the previous works, it is assumed that the graphs are labeled. In one class of problems, a large database of graphs is given and the goal is to find the most similar match in the database with respect to the given query graph . In the second class, given a target graph and a query graph, subgraph of the target graph that is most similar to the query graph needs to be identified . Different notions of similarity were also explored in the past for these classes of problems.
In , approximate matching of a query graph in a database of graphs is studied. The graphs are assumed to be labeled; structural information is stored in a hybrid index structure based on a B-tree index, important vertices of the query graph are matched first, and the match is then extended progressively. In , graph similarity search on labeled graphs from a large database under minimum edit distance is studied. In , an algorithm is given for computing the top-k approximate subgraph matches for a given query graph in a large labeled target graph; the target graph is converted into a set of multidimensional vectors based on the labels in vertex neighborhoods, and only matches above a user defined threshold are computed. With higher threshold values, the match reduces to a trivial vertex-to-vertex label matching. In , label matching is performed while simultaneously preserving pairwise vertex proximity; the query time is proportional to the product of the numbers of vertices of the query and target graphs. Subgraph matching in a large target graph deployed on a distributed memory store was studied in ; this work looks for exact rather than similarity matches. In , efficient distributed subgraph similarity search is studied, retrieving matches whose number of missing edges is below a given threshold. Though different techniques have been studied in the past for similarity search in various settings, to the best of our knowledge little work has been done on subgraph similarity search in large unlabeled graphs: in many of the previous works, either the vertices are assumed to be labeled or the graphs are small, with hundreds of vertices.
We consider undirected graphs with no vertex or edge labels. We use graphlet kernel to define similarity between graphs. We give a subgraph similarity matching algorithm that takes as input a large target graph and a query graph and identifies an induced subgraph of the target graph that is most similar to the query graph with respect to the graphlet kernel value.
In our algorithm, we first compute vertex labels for vertices in both query and target graph. These labels are vectors in some fixed dimension and are computed based on local neighborhood structure of vertices in the graph. Since our vertex labels are vectors, unlike many of the other labeling techniques, our labeling allows us to define the notion of similarity between vertex labels of two vertices to capture the topological similarity of their corresponding neighborhoods in the graph. We build a nearest neighbor data structure for vertices of the target graph based on their vertex labels. Computing vertex label for target graph vertices and building the nearest neighbor data structure are done in the preprocessing phase. Using nearest neighbor queries on this data structure, vertices of the target graph that are most similar to the vertices of the query graph are identified. Using this smaller set of candidate vertices of target graph, a seed match is computed for the query graph. Using this seed match as the basis, our algorithm computes the final match for the full query graph.
We study the performance of our algorithm on several real life data sets, including the Facebook, Google Plus, YouTube, road and Amazon networks provided by the Stanford Large Network Dataset Collection (SNAP)  and the DBLP network . We conduct a number of experimental studies to measure search quality and run time efficiency. For instance, when searching these networks with their communities as query graphs, the computed match and the query graph have a similarity score close to 1, where 1 is the maximum possible similarity score. In about 30% of the cases, our algorithm identifies the exact match, and in about 80% of the cases, the vertices of the exact match are present in the pruned set computed by the algorithm. We validate our results by showing that similarity scores between random subgraphs, and between random communities, in these networks are significantly lower. We also query communities across networks and in noisy networks and obtain matches with significantly high similarity scores. We further use our algorithm to search for dense subgraphs and identify subgraphs with significantly high density.
The computationally expensive parts of our algorithm can easily be scaled to standard parallel and distributed computing frameworks such as map-reduce. The networks in our experiments have up to millions of vertices and edges. Our multithreaded implementation of the search algorithm takes close to one second for the search phase on these networks on a 32 core machine. This excludes the time taken by the one time pre-processing phase.
A graph is an ordered pair comprising a set of vertices and a set of edges. We consider only undirected graphs with no vertex or edge labels. A subgraph of a graph is a graph whose vertex set is a subset of the original vertex set and whose edge set is a subset of the original edge set. An induced subgraph is a subgraph whose edge set consists of all edges of the original graph whose endpoints both lie in the chosen vertex subset.
Graphlets are fixed size non-isomorphic induced subgraphs of a large graph. The graphlet sizes typically considered in applications are small. For example, Figure 1 shows all possible non-isomorphic graphlets of size 4; there are 11 of them, of which 6 are connected. The set of connected graphlets of this size is shown in Figure 2.
If two graphs are isomorphic then clearly their corresponding graphlet vectors are identical. The reverse need not be true in general. But it is conjectured that if two graphs on n vertices have identical graphlet vectors with respect to graphlets of size n-1, then the graphs are isomorphic. The conjecture has been verified for small values of n. Kernels based on similarity of graphlet vectors provide a natural way to express similarity of the underlying graphs.
Graphlet vectors are in fact an explicit embedding of graphs into a vector space whose dimension equals the number of graphlets of the chosen size. Graphlet kernels have been shown to give better classification accuracies in comparison to other graph kernels, like the random walk kernel and the shortest path kernel, for certain applications . Larger values of the graphlet kernel indicate higher similarity between the two graphs.
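To make the graphlet vector concrete, the following minimal sketch counts the two connected size-3 graphlets (the 2-edge path and the triangle) in a graph given as an edge list. The function and data layout are illustrative only; the paper works with larger graphlet sizes, where the same idea applies with more isomorphism classes.

```python
from itertools import combinations

def graphlet_vector_3(vertices, edges):
    """Return counts of the two connected size-3 graphlets: the 2-edge
    path and the triangle. Enumerates every 3-subset of vertices and
    classifies the induced subgraph by its number of edges."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    path = triangle = 0
    for a, b, c in combinations(vertices, 3):
        m = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if m == 2:
            path += 1
        elif m == 3:
            triangle += 1
    return (path, triangle)
```

Normalizing such count vectors and taking their dot product yields a graphlet kernel value for a pair of graphs.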
3 Graphlet vector based vertex labeling
Computing vertex labels that capture topological neighborhood information of corresponding vertices in the graph and comparing vertex neighborhoods using their labels is crucial in our matching algorithm. Our vertex labels are graphlet vectors of their corresponding neighborhood subgraphs.
Given a fixed positive integer d and a graph, the depth-d neighborhood of a vertex is the set of all vertices (including the vertex itself) reachable from it in d or fewer edges. The vertex label of a vertex is the graphlet vector of the subgraph induced by its depth-d neighborhood, computed with respect to graphlets of some fixed size. We note that the graphlet vector of a vertex therefore has two implicit parameters, the depth d and the graphlet size. To avoid overloading the notation, we assume them to be fixed constants and specify them explicitly only when required. Both are parameters to our final algorithm.
For each vertex of the graph, its vertex label is given by this graphlet vector. Given the vertex labels of two vertices, we define the similarity between them as the dot product of the two label vectors, as in equation (1). Larger similarity values indicate higher topological similarity between the neighborhoods of the two vertices. Computing the vertex labels of the target graph is done in the preprocessing phase. Implementation details of the vertex labeling algorithm are discussed in the next section.
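The labeling and label similarity above can be sketched as follows. This is a minimal stdlib sketch that uses size-3 graphlet counts for brevity (the paper uses larger graphlet sizes) and reads the similarity as a normalized dot product; all names are illustrative.

```python
from itertools import combinations

def bfs_neighborhood(adj, v, depth):
    """All vertices reachable from v (inclusive) in at most `depth` edges."""
    seen = {v}
    frontier = [v]
    for _ in range(depth):
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen

def vertex_label(adj, v, depth=2):
    """Graphlet vector (size-3 counts: paths, triangles) of the subgraph
    induced by the depth-d neighborhood of v."""
    nbrs = bfs_neighborhood(adj, v, depth)
    path = tri = 0
    for a, b, c in combinations(nbrs, 3):
        # edges among a, b, c all lie inside nbrs, so this is the induced subgraph
        m = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if m == 2:
            path += 1
        elif m == 3:
            tri += 1
    return (path, tri)

def label_similarity(x, y):
    """Normalized dot product of two label vectors, in [0, 1]."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = sum(a * a for a in x) ** 0.5
    ny = sum(b * b for b in y) ** 0.5
    return dot / (nx * ny) if nx and ny else 0.0
```

Two vertices whose neighborhoods have similar shapes then receive similar labels, which is exactly what the matching phases exploit.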
Our subgraph similarity search algorithm has two major phases: a one time pre-processing phase and the query graph matching phase. Each of these phases comprises sub-phases, as given below. Details of each subphase are discussed in the subsequent sections.
Pre-processing Phase: This phase has two subphases:
Vertex labels of all vertices of the target graph are computed.
A k-d tree based nearest neighbor data structure is built on the vertices of the target graph using their label vectors.
Matching Phase: This phase is further divided into four subphases:
Selection Phase: In this phase, vertex labels for vertices of the query graph are computed first. Each query vertex then selects a subset of vertices from the target graph closest to it with respect to the Euclidean distance between label vectors.
Seed Match Generation Phase: In this phase, a one to one mapping of a subset of query graph vertices to target graph vertices is obtained with highest overall similarity score. Subgraph induced by the mapped vertices in the target graph is called the seed match. The seed match is obtained by solving a maximum weighted bipartite matching problem.
Match Growing Phase: The above seed match is used as a basis to compute the final match for the query graph.
Match Completion Phase: This phase tries to match those vertices of the query graph that are left unmatched in the previous phase.
Computation of vertex labels
In this phase, the vertex label for each vertex of the target graph is computed. Computing a label requires the depth and graphlet size parameters, which are provided as inputs to the search algorithm. For each vertex, a breadth first traversal up to the given depth is performed, starting from the vertex, to obtain its depth-bounded neighborhood. The subgraph induced by this neighborhood is then used to compute the graphlet vector as described in the previous section. The pseudo code is given in Algorithm 1.
Most of the time in the pre-processing phase is spent computing the graphlet vectors. Methods to improve the efficiency of this computation, including sampling techniques, are discussed in . We do not make use of sampling in our implementation. We remark that computing graphlet frequencies can easily be scaled to parallel or distributed computing frameworks such as map-reduce.
Nearest neighbor data structure on the target graph
After computing vertex labels for the target graph, a nearest neighbor data structure on its vertices, based on their label vectors, is built. We use k-d trees for the nearest neighbor data structure . k-d trees are known to be efficient when the dimension of the vectors is less than 20 . Since the graphlet sizes we work with are small, the dimension of the label vector does not exceed 10.
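The paper builds a k-d tree (a library structure such as scipy.spatial.cKDTree would serve); as a self-contained stand-in, the following brute-force version answers the same k-nearest-neighbor queries over label vectors. Names and data layout are illustrative.

```python
import heapq

def k_nearest(labels, query, k):
    """Return the k vertices whose label vectors are closest to `query`
    in Euclidean distance. `labels` maps vertex -> label vector.
    Brute force; a k-d tree returns the same answer faster for the
    low-dimensional labels used here."""
    def dist2(v):
        # squared Euclidean distance preserves the nearest-neighbor order
        return sum((a - b) ** 2 for a, b in zip(labels[v], query))
    return heapq.nsmallest(k, labels, key=dist2)
```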
In the following, we describe the four subphases of the matching phase.
Selection Phase
The vertex labels for all vertices of the query graph are computed first using Algorithm 1. For each query vertex, we compute the set of its k closest vertices in the target graph with respect to the Euclidean distance between label vectors, by querying the k-d tree built in the pre-processing phase. In our experiments, we usually fix k as 10. The union of these sets over all query vertices forms the pruned vertex set considered in the subsequent seed match generation phase. Clearly, the size of this set is at most k times the number of query vertices, which is typically much smaller than the number of vertices in the target graph.
Seed Match Generation Phase
In this phase, we obtain a one to one mapping of a subset of vertices of the query graph to the target graph with highest overall similarity score. We call the subgraph induced by the mapped vertices in the target graph the seed match. To compute it, we define a bipartite graph with weighted edges, where one part is the vertex set of the query graph and the other part is the pruned vertex set of the target graph obtained in the previous step. Each query vertex is connected to each of its nearest neighbors in the target graph as computed in the previous step. The edge weights are defined as follows.
The weight of the edge between a query vertex q and a target vertex t is defined in the following manner. Let α be a fixed scale factor provided as a parameter to the search algorithm. We recall that the similarity between the label vectors of q and t is given by equation (1). Consider the neighbors of t in the target graph, excluding t itself, that are connected to at least one query vertex in the bipartite graph. For each such neighbor, take the maximum similarity value it attains over its neighbors in the bipartite graph. The weight of the edge between q and t is then the label similarity of q and t plus α times the sum of these maximum values.
We now solve maximum weighted bipartite matching on this graph to obtain a one to one mapping between a subset of query vertices and the pruned target vertices. Defining edge weights in this fashion takes into account not only the similarity value between the pair, but also the strength of similarity of the target vertex's neighbors to the remaining query vertices. We thereby try to ensure that, of two target vertices with equal similarity to a query vertex, the one whose neighbors in the target graph also have high similarity to query vertices is preferred in the final maximum weighted bipartite matching solution.
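One plausible reading of this weight function (the exact symbols are elided in the text, so treat the details as an assumption) can be sketched as follows: the weight of edge (q, t) adds to the label similarity of the pair the scale factor times, for each neighbor of t that has candidate query partners, the best similarity that neighbor attains.

```python
def edge_weight(q, t, sim, t_adj, bip_nbrs, alpha):
    """Weight of bipartite edge (q, t): label similarity sim[(q, t)] plus
    alpha times the best similarity each neighbor v of t (in the target
    graph) attains over its candidate query vertices bip_nbrs[v].
    Data layout is illustrative: `sim` maps (query, target) pairs to
    similarity values, `t_adj` is the target graph adjacency, and
    `bip_nbrs[v]` is the set of query vertices adjacent to v in the
    bipartite graph."""
    w = sim[(q, t)]
    for v in t_adj[t]:
        if v != t and bip_nbrs.get(v):
            w += alpha * max(sim[(u, v)] for u in bip_nbrs[v])
    return w
```

The maximum weighted matching itself can then be computed with any standard assignment solver (e.g. the Hungarian algorithm) over these weights.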
Consider the solution obtained for the bipartite matching and the subgraphs induced by the matched vertices in the query and target graphs under this matching. The connectivity of these two subgraphs may differ; for instance, their numbers of connected components could differ. Therefore, we do not include all matched vertices in the seed match. Instead, we use the largest connected component of the matched subgraph of the target graph as the seed solution: the vertices of a maximum cardinality connected component, together with their mapped partners in the query graph, form the seed match. The pseudo code for seed match computation is given in Algorithm 2.
Match Growing Phase
After computing the seed match in the query graph and its mapped vertices in the target graph, we use this seed match as the basis to compute the final match. The final solution is computed in an incremental fashion, starting with an empty match. In each iteration, we include a new pair of vertices in the solution, one from the query graph and one from the target graph. In order to do this, we maintain a list of candidate pairs and, in each iteration, include the pair with maximum similarity value in the final solution. We use a max heap to maintain the candidate list. The candidate list is initialized with the mapped pairs of the seed match obtained in the previous phase; each pair is inserted into the heap with weight equal to the label similarity of its two vertices.
We recall that the mapped pairs obtained from the previous phase have stronger similarity with respect to the modified weight function: a high weight indicates not only that a pair's label similarity is high but also that their neighbors share high similarity values. Hence these pairs are preferred in the solution over other pairs with similar label similarity. By initializing the candidate list with these preferred pairs, the matching algorithm ensures that the incremental solution starts from them; because of the heap data structure, the remaining pairs are considered later in decreasing order of their similarity value. Moreover, as will be discussed below, the incremental matching tries to ensure that the partial match constructed so far in the target graph is connected: new pairs added to the candidate list are chosen from the neighborhood of the current partial match.
The incremental matching might still match vertex pairs with low similarity if such pairs are available in the candidate list. Candidate pairs with low similarity should be treated separately, since there can be genuine pairs with low values. For instance, consider the boundary vertices of an optimal subgraph match in the target graph. Boundary vertices are also connected to vertices outside the matched subgraph, so their local neighborhood structure differs from that of their counterparts in the query graph. In other words, their graphlet vectors can be very dissimilar and their similarity value very low, even though they are expected to be matched in the final solution. In order to find such genuine pairs, we omit pairs with similarity value below some fixed threshold in this phase and handle them in the next phase.
In each iteration of the incremental matching, a pair with maximum similarity value is removed from the candidate heap and added to the final match, after which the candidate list is modified as follows. We call a vertex unmatched if it is not yet present in the final match. The algorithm maintains two invariants: (a) the pairs present in the candidate list form a one to one mapping, and (b) a query vertex that enters the candidate list stays there (possibly with multiple changes to its paired partner vertex) until it is included in the final match. Consider the unmatched neighbors, in the query graph, of the newly matched query vertex that are not yet in the candidate list, and the unmatched neighbors, in the target graph, of the newly matched target vertex. For each such query vertex, find the target-side neighbor with maximum similarity to it. The pair is added to the candidate list if the query vertex is absent from the list and the similarity exceeds the threshold; if the query vertex is already present, its current pair is replaced when the new pair has a higher similarity value. The incremental algorithm is given in Algorithm 3. The candidate list modification is described in Algorithm 4.
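The heap-driven growing loop can be sketched as follows. This is a simplified version of Algorithms 3 and 4: it omits the pair-replacement bookkeeping of invariant (b) and keeps only the core pop/extend cycle; all names are illustrative.

```python
import heapq

def grow_match(seed_pairs, sim, q_adj, t_adj, threshold):
    """Greedily grow a match from `seed_pairs`. Repeatedly pop the
    candidate pair with highest similarity, add it to the match, then
    push, for each unmatched query-side neighbor of the new pair, its
    best unmatched target-side neighbor if the similarity clears
    `threshold`. `sim(q, t)` is the label similarity function."""
    match = {}
    heap = [(-sim(q, t), q, t) for q, t in seed_pairs]
    heapq.heapify(heap)
    queued = {q for _, q, _ in heap}
    while heap:
        _, q, t = heapq.heappop(heap)
        if q in match or t in match.values():
            continue  # stale entry
        match[q] = t
        # extend the candidate list from the new pair's neighborhoods
        for qn in q_adj[q]:
            if qn in match or qn in queued:
                continue
            cands = [tn for tn in t_adj[t] if tn not in match.values()]
            if not cands:
                continue
            best = max(cands, key=lambda tn: sim(qn, tn))
            if sim(qn, best) >= threshold:
                heapq.heappush(heap, (-sim(qn, best), qn, best))
                queued.add(qn)
    return match
```

Because new candidates come only from neighborhoods of already matched pairs, the partial match in the target graph stays connected, mirroring the behavior described above.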
Match Completion Phase
In this phase, vertices of the query graph that are left unmatched in the previous phase due to similarity values below the threshold are handled. Typically, boundary vertices of the final matched subgraph in the target graph remain unmatched in the previous phase: as discussed earlier, such boundary vertices and their matched partners have low label similarity because their local neighborhood topologies vastly differ, so neighborhood similarity is ineffective for such pairs. To handle them, we try to match unmatched query vertices with unmatched neighbors of the current match in the target graph, using a different similarity function to compare potential pairs. Consider an unmatched query vertex q and an unmatched target vertex t adjacent to the current match. Let A denote the matched neighbors of t in the target graph, and let B denote the matched partners, in the target graph, of the matched neighbors of q in the query graph. We define the similarity between q and t as the standard Jaccard coefficient |A ∩ B| / |A ∪ B|.
We use a fixed threshold that is provided as a parameter to the algorithm. We define a bipartite graph with edge weights as follows: for each candidate pair, we insert an edge weighted by its Jaccard similarity if that similarity exceeds the threshold. We then compute a maximum weighted bipartite matching on this graph and include the matched pairs in the final solution. In our experiments, the number of unmatched query graph vertices is very small. The pseudo code is given in Algorithm 5.
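The completion-phase similarity can be sketched as the following Jaccard computation (names illustrative; `match` maps query vertices to their matched target vertices):

```python
def completion_similarity(q, t, match, q_adj, t_adj):
    """Jaccard coefficient between (a) the matched partners, in the
    target graph, of the matched neighbors of query vertex q and
    (b) the already-matched neighbors of target vertex t."""
    matched_targets = set(match.values())
    mapped = {match[u] for u in q_adj[q] if u in match}
    t_matched = {v for v in t_adj[t] if v in matched_targets}
    union = mapped | t_matched
    return len(mapped & t_matched) / len(union) if union else 0.0
```

Intuitively, a boundary pair is accepted when the already-matched parts of the two neighborhoods largely coincide, even though the raw label vectors of q and t disagree.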
We remark that our search algorithm outputs the matched subset of vertices in the target graph along with their corresponding mapped vertices in the query graph.
In this section, we conduct experiments on various real life graph data sets, including social networks, collaboration networks, road networks, the YouTube network and the Amazon network, as well as on synthetic graph data sets.
5.1 Experimental Data sets
Facebook and Google Plus Networks
We conduct experiments on the Facebook and Google Plus undirected graphs provided by the Stanford Large Network Dataset Collection (SNAP) . The Facebook graph contains around 4K vertices and 88K edges; its vertices represent anonymized users and an undirected edge connects two friends. The Google Plus graph contains 107K vertices and 13M edges and likewise represents users as vertices with an edge between two friends. The data set also contains lists of user circles (user communities), where a user circle is specified by its corresponding set of vertices. We use these user circles as query graphs and query them against the entire Facebook network. We also query Facebook circles against the Google Plus network to find similar circles across networks, and we experiment with querying Facebook circles against the Facebook network after introducing random noise to it.
DBLP Collaboration Network
We use the DBLP collaboration network downloadable from . This network has around 317K vertices and 1M edges. The vertices of this graph are authors who publish in any conference or journal, and an edge exists between any two co-authors. All authors who contribute to a common conference or journal form a community. The data set provides a list of such communities by specifying their corresponding sets of vertices. We use these communities as query graphs.
YouTube Network
The YouTube network is downloaded from . The network has about 1M vertices and 2M edges. Vertices in this network represent users and an edge exists between two users who are friends. In YouTube, users can create groups that other users can join. The data set provides a list of user groups by specifying their corresponding sets of vertices. We consider these user-defined groups as our query graphs.
California Road Network
We use the road network of California obtained from  in our experiments. This network has around 2M vertices and 3M edges. Vertices of this network are road endpoints or road intersections, and the edges are the roads connecting these intersections. We use randomly chosen subgraphs from this network as query graphs.
Amazon Network
The Amazon network is a product co-purchasing network downloaded from . This network has around 334K vertices and 925K edges. Each vertex represents a product and an edge exists between products that are frequently co-purchased . All products under a certain category form a product community. The data set provides a list of product communities by specifying their corresponding sets of vertices. We use product communities as query graphs and query them against the Amazon network.
The statistics of the data sets used are listed in Table ?.
All the experiments are carried out on a 32 core 2.60GHz Intel(R) Xeon(R) server with 32GB RAM. The server has Ubuntu 14.04 LTS. Our implementation uses Java 7.
The computationally most expensive part of our algorithm is the computation of label vectors for all vertices of a graph. The preprocessing phase that computes label vectors for each vertex of the graph is multi-threaded and thus executes on all 32 cores. Similarly, in the matching phase, computing label vectors for all vertices of the query graph is also multi-threaded and uses all 32 cores. The remaining phases use only a single core.
To evaluate the accuracy of the result obtained by our similarity search algorithm, we compute the graphlet kernel value between the query graph and the subgraph of the target graph induced by the vertices of the final match. We use this value to show the similarity between the query graph and the obtained match, and we refer to it as the similarity score in our experiments. We recall that the similarity score lies in the range [0, 1], where 1 indicates maximum similarity.
There are six parameters in our algorithm: (1) the graphlet size, (2) the BFS depth d for vertex label computation, (3) the number k of nearest neighbors retrieved from the k-d tree, (4) the scale factor α in the edge weight function, and (5, 6) the similarity thresholds for the match growing and match completion phases. In all our experiments we fix the graphlet size. We performed experiments with different values of d and k on different data sets and, based on the results, chose ranges for these parameters. Even for million vertex graphs, a small BFS depth showed good results. We fix the scaling factor α and the two thresholds to constant values.
Experiment 1: This experiment shows the effect of the BFS depth d on the final match. We performed experiments with different values of d and observed that beyond a depth of 2, there is very little change in the similarity scores of the final match, while the time to compute graphlet vectors increases with depth. Thus, the BFS depth was taken to be 2 for most of our experiments. Table ? shows the similarity scores of querying Amazon communities on the Amazon network and DBLP communities on the DBLP collaboration network for different values of d. These results are averaged over 150 queries.
Experiment 2: For each of the data sets discussed earlier, we perform subgraph querying against the same network. For each network, we use the given communities as query graphs and measure the quality of the search result. That is, we query Facebook communities against the Facebook network, DBLP communities against the DBLP network, YouTube groups against the YouTube network and Amazon product communities against the Amazon network. For the road network, we use randomly chosen induced subgraphs from the network as query graphs. The second column of Table ? shows the similarity score of the match. All results are averaged over 150 queries. The average community (query graph) size is around 100 for Facebook, around 40 for DBLP, around 50 for YouTube and around 300 for Amazon. Query graphs for the road network have about 500 vertices.
To validate the quality of our solution, we do the following for each network. We compute the similarity score between random induced subgraphs from the same network; these random subgraphs contain 100 vertices. We also compute the similarity score between different communities from the same network. All results are averaged over 150 scores. Table ? shows the result. The results show that the similarity score of our match is close to 1 and is significantly better than the scores between random subgraphs and between communities in the same network. For the road network, the third column shows the average similarity between its query subgraphs.
Table ? shows two quantities: the number of queries (out of 150) that yielded the exact match (i.e., the query graph is an induced subgraph of the network), and the percentage of queries for which the vertices of the exact target match are present in the pruned subset of target-graph vertices obtained after the selection phase. The table shows that, for about ?% of the query graphs, our algorithm identifies the exact match. Also, for about ?% of the queries, the vertices of the ideal match are present in our pruned vertex set after the selection phase.
Table ? shows the timing results corresponding to Experiment 2. The timing information covers only the matching phase and excludes the one-time preprocessing phase. The columns report the time taken (in seconds) to compute the label vectors for all vertices of the query graph, and the time taken (in seconds) for the entire matching phase (including the label vector computation). We recall that the label vector computation is multithreaded on 32 cores, while the remaining part executes in a single thread. It can be seen that the label vector computation is the computationally expensive part and the remaining phases take much less time.
Experiment 3: In all previous experiments, query graphs were induced subgraphs of the target network. In this experiment, we evaluate the quality of our solution when the query graph is not necessarily an induced subgraph of the target graph. For this, we conduct two experiments. In the first, we use facebook communities as query graphs and query them against the google plus network. To validate the quality of our solution, we measure the similarity score of the query graph with a random induced subgraph of the target graph with the same number of vertices. In the second, we create a modified facebook network by randomly removing 5% of its original edges. We use this modified network as the target graph and query the original facebook communities against it. Here also, we validate the quality of our solution by measuring the similarity score of the query graph with a random induced subgraph of the same number of vertices in the target graph. Table ? shows the results. Values for both experiments are averaged over 150 scores. The results show that the similarity score of our match is close to 1 and is significantly better than a random match.
| | Final Match | Random Subgraph |
| Facebook with random noise | 0.933662 | 0.701198 |
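The noisy target graph in the second experiment can be produced by deleting a uniformly random 5% of the edges. A sketch under the assumption of an edge-list representation (function name and seeding are our own):

```python
import random

def remove_random_edges(edges, fraction, seed=None):
    """Return a copy of `edges` with `fraction` of them removed uniformly at random."""
    rng = random.Random(seed)
    k = int(len(edges) * fraction)
    drop = set(rng.sample(range(len(edges)), k))
    return [e for i, e in enumerate(edges) if i not in drop]

# Example: a 1000-edge path graph loses 5% of its edges.
edges = [(u, u + 1) for u in range(1000)]
noisy = remove_random_edges(edges, 0.05, seed=42)
print(len(noisy))  # -> 950
```

Sampling edge indices (rather than vertices) keeps the vertex set intact, so the query communities still refer to valid vertices in the perturbed target graph.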
Experiment 4: We use our matching algorithm to identify dense subgraphs in large networks. In particular, we search for dense subgraphs in the DBLP and google plus networks. For this, we first generate dense random graphs using the standard Erdős–Rényi $G(n, p)$ model. We then use these random graphs as query graphs and query them against the DBLP and google plus networks. We use the standard definition of the density of an undirected graph $G = (V, E)$: $\mathrm{density}(G) = \frac{2|E|}{|V|(|V|-1)}$.
The average density of our random query graphs is 0.9. We queried these dense random graphs against the DBLP and google plus networks. Table ? shows the results. Column 2 shows the similarity score between the query graph and the obtained match. Column 3 shows the density of the obtained match. The results are averaged over 150 queries. The similarity score of the matched result is close to 1 for google plus. For DBLP, the score is close to 0.8, primarily because DBLP does not have dense subgraphs with about 500 vertices. Also, the density of the obtained match is close to that of the query graph, which is 0.9.
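The dense query graphs and the density measure can be sketched as follows, assuming the standard $G(n, p)$ construction (each of the $\binom{n}{2}$ possible edges is included independently with probability $p$); parameter values here are illustrative:

```python
import random

def gnp_random_graph(n, p, seed=None):
    """Erdos-Renyi G(n, p): include each of the n*(n-1)/2 possible edges with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

def density(n, edges):
    """Fraction of possible edges present: 2|E| / (n * (n - 1))."""
    return 2 * len(edges) / (n * (n - 1))

edges = gnp_random_graph(100, 0.9, seed=7)
print(density(100, edges))  # concentrates around p = 0.9
```

The expected density of a $G(n, p)$ graph is exactly $p$, which is why choosing $p$ close to 0.9 yields query graphs of average density about 0.9.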
The computationally most expensive parts of our algorithm are the vertex label computations for the vertices of the query and target graphs. Since the target-graph computation is a one-time preprocessing step, it can easily be scaled to a distributed framework using the standard map-reduce paradigm: the label computation for each vertex can be a separate map/reduce job. The vertex label computation for the query graph is performed for every search; it too can be parallelized using standard OpenMP/MPI frameworks, since each vertex label can be computed independently. As shown in the experimental results, the remaining phases take much less time even with a serial implementation. Parts of them can also be parallelized to further improve search efficiency.
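Because each vertex's label depends only on its own neighborhood, the computation is embarrassingly parallel. A minimal sketch of the per-vertex parallelism using Python's `concurrent.futures` in place of the paper's OpenMP/MPI implementation (the label function below is a stand-in, not the actual graphlet label):

```python
from concurrent.futures import ThreadPoolExecutor

def label_vector(vertex, adj):
    """Stand-in for the per-vertex graphlet label computation:
    here it simply returns the sorted degrees of the vertex's neighbors."""
    return sorted(len(adj[w]) for w in adj[vertex])

def all_label_vectors(adj, workers=32):
    """Compute label vectors for every vertex; each vertex is an independent task."""
    vertices = list(adj)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves input order, so results line up with `vertices`.
        return dict(zip(vertices, pool.map(lambda v: label_vector(v, adj), vertices)))

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(all_label_vectors(adj, workers=4))  # -> {0: [1, 2], 1: [2], 2: [1, 2], 3: [2]}
```

In a map-reduce setting the same decomposition applies: the map stage emits one label vector per vertex, and no reduce-side aggregation is needed beyond collecting the results.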
- N. Shervashidze, T. Petri, K. Mehlhorn, K. M. Borgwardt, and S. Vishwanathan, “Efficient graphlet kernels for large graph comparison,” in International conference on artificial intelligence and statistics, 2009, pp. 488–495.
- T. T. Nguyen, H. A. Nguyen, N. H. Pham, J. M. Al-Kofahi, and T. N. Nguyen, “Graph-based mining of multiple object usage patterns,” in Proceedings of the 7th joint meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering. ACM, 2009, pp. 383–392.
- D. Haussler, “Convolution kernels on discrete structures,” Citeseer, Tech. Rep., 1999.
- F. Desobry, M. Davy, and W. J. Fitzgerald, “A class of kernels for sets of vectors,” in ESANN. Citeseer, 2005, pp. 461–466.
- R. Kondor and T. Jebara, “A kernel between sets of vectors,” in ICML, vol. 20, 2003, p. 361.
- S. Vishwanathan and A. J. Smola, “Fast kernels for string and tree matching,” Kernel methods in computational biology, pp. 113–130, 2004.
- S. Hido and H. Kashima, “A linear-time graph kernel,” in Ninth IEEE International Conference on Data Mining (ICDM ’09). IEEE, 2009, pp. 179–188.
- D. K. Hammond, P. Vandergheynst, and R. Gribonval, “Wavelets on graphs via spectral graph theory,” Applied and Computational Harmonic Analysis, vol. 30, no. 2, pp. 129–150, 2011.
- N. Shervashidze and K. M. Borgwardt, “Fast subtree kernels on graphs,” in Advances in Neural Information Processing Systems, 2009, pp. 1660–1668.
- T. Gärtner, P. Flach, and S. Wrobel, “On graph kernels: Hardness results and efficient alternatives,” in Learning Theory and Kernel Machines. Springer, 2003, pp. 129–143.
- H. Kashima and A. Inokuchi, “Kernels for graph classification,” in ICDM Workshop on Active Mining, vol. 2002. Citeseer, 2002.
- T. Horváth, T. Gärtner, and S. Wrobel, “Cyclic pattern kernels for predictive graph mining,” in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2004, pp. 158–167.
- M. Neuhaus and H. Bunke, “Edit distance based kernel functions for attributed graph matching,” in Graph-Based Representations in Pattern Recognition. Springer, 2005, pp. 352–361.
- K. M. Borgwardt and H.-P. Kriegel, “Shortest-path kernels on graphs,” in Fifth IEEE International Conference on Data Mining (ICDM ’05). IEEE, 2005, 8 pp.
- R. C. Bunescu and R. J. Mooney, “A shortest path dependency kernel for relation extraction,” in Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2005, pp. 724–731.
- H. Fröhlich, J. K. Wegner, F. Sieker, and A. Zell, “Optimal assignment kernels for attributed molecular graphs,” in Proceedings of the 22nd international conference on Machine learning. ACM, 2005, pp. 225–232.
- J. Ramon and T. Gärtner, “Expressivity versus efficiency of graph kernels,” in First International Workshop on Mining Graphs, Trees and Sequences, 2003, pp. 65–74.
- S. Menchetti, F. Costa, and P. Frasconi, “Weighted decomposition kernels,” in Proceedings of the 22nd international conference on Machine learning. ACM, 2005, pp. 585–592.
- D. Shasha, J. T. Wang, and R. Giugno, “Algorithmics and applications of tree and graph searching,” in Proceedings of the twenty-first ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM, 2002, pp. 39–52.
- X. Yan, P. S. Yu, and J. Han, “Graph indexing: a frequent structure-based approach,” in Proceedings of the 2004 ACM SIGMOD international conference on Management of data. ACM, 2004, pp. 335–346.
- S. Zhang, S. Li, and J. Yang, “Gaddi: distance index based subgraph matching in biological networks,” in Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology. ACM, 2009, pp. 192–203.
- M. Mongiovi, R. Di Natale, R. Giugno, A. Pulvirenti, A. Ferro, and R. Sharan, “Sigma: a set-cover-based inexact graph matching algorithm,” Journal of bioinformatics and computational biology, vol. 8, no. 02, pp. 199–218, 2010.
- S. Zhang, J. Yang, and W. Jin, “Sapper: Subgraph indexing and approximate matching in large graphs,” Proceedings of the VLDB Endowment, vol. 3, no. 1-2, pp. 1185–1194, 2010.
- X. Wang, A. Smalter, J. Huan, and G. H. Lushington, “G-hash: towards fast kernel-based similarity search in large graph databases,” in Proceedings of the 12th international conference on extending database technology: advances in database technology. ACM, 2009, pp. 472–480.
- A. Khan, N. Li, X. Yan, Z. Guan, S. Chakraborty, and S. Tao, “Neighborhood based fast graph search in large networks,” in Proceedings of the 2011 ACM SIGMOD International Conference on Management of data. ACM, 2011, pp. 901–912.
- A. Khan, Y. Wu, C. C. Aggarwal, and X. Yan, “NeMa: Fast graph search with label similarity,” in Proceedings of the VLDB Endowment, 2013.
- Z. Sun, H. Wang, H. Wang, B. Shao, and J. Li, “Efficient subgraph matching on billion node graphs,” Proceedings of the VLDB Endowment, vol. 5, no. 9, pp. 788–799, 2012.
- Y. Yuan, G. Wang, J. Y. Xu, and L. Chen, “Efficient distributed subgraph similarity matching,” VLDB journal, vol. 24, pp. 369–394, 2015.
- Y. Tian and J. M. Patel, “Tale: A tool for approximate large graph matching,” in IEEE 24th International Conference on Data Engineering (ICDE 2008). IEEE, 2008, pp. 963–972.
- W. Zheng, L. Zou, X. Lian, D. Wang, and D. Zhao, “Graph similarity search with edit distance constraint in large graph databases,” in Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, 2013, pp. 1595–1600.
- J. Leskovec and A. Krevl, “SNAP Datasets: Stanford large network dataset collection,” http://snap.stanford.edu/data, Jun. 2014.
- “DBLP Network,” http://dblp.uni-trier.de/db/.
- N. Przulj, D. Corneil, and I. Jurisica, “Supplementary information: Efficient estimation of graphlet frequency distributions in protein-protein interaction networks,” 2005.
- G. T. Heineman, G. Pollice, and S. Selkow, Algorithms in a Nutshell. O’Reilly Media, Inc., 2008.