A Hierarchical Aggregation Framework for Efficient Multilevel Visual Exploration and Analysis

To appear in Semantic Web Journal (SWJ), 2016

Abstract

Data exploration and visualization systems are of great importance in the Big Data era, in which the volume and heterogeneity of available information make it difficult for humans to manually explore and analyse data. Most traditional systems operate in an offline way, limited to accessing preprocessed (static) sets of data. They also restrict themselves to small dataset sizes, which can be easily handled with conventional techniques. However, the Big Data era has made available a great number and variety of big datasets that are dynamic in nature; most of them offer API or query endpoints for online access, or the data is received in a stream fashion. Therefore, modern systems must address the challenge of on-the-fly scalable visualizations over large dynamic sets of data, offering efficient exploration techniques, as well as mechanisms for information abstraction and summarization. Further, they must take into account different user-defined exploration scenarios and user preferences. In this work, we present a generic model for personalized multilevel exploration and analysis over large dynamic sets of numeric and temporal data. Our model is built on top of a lightweight tree-based structure which can be efficiently constructed on-the-fly for a given set of data. This tree structure aggregates input objects into a hierarchical multiscale model. We define two versions of this structure, which adopt different data organization approaches, each well-suited to a particular exploration and analysis context. In the proposed structure, statistical computations can be efficiently performed on-the-fly. Considering different exploration scenarios over large datasets, the proposed model enables efficient multilevel exploration, offering incremental construction and prefetching via user interaction, as well as dynamic adaptation of the hierarchies based on user preferences. A thorough theoretical analysis is presented, illustrating the efficiency of the proposed methods. The presented model is realized in a web-based prototype tool, called SynopsViz, that offers multilevel visual exploration and analysis over Linked Data datasets. Finally, we provide a performance evaluation and an empirical user study employing real datasets.

Nikos Bikakis, George Papastefanatos, Melina Skourla and Timos Sellis

Keywords: visual analytics, multiscale, progressive, incremental indexing, linked data, multiresolution, visual aggregation, binning, adaptive, hierarchical navigation, personalized exploration, data reduction, summarization, SynopsViz

1 Introduction

Exploring, visualizing and analysing data is a core task for data scientists and analysts in numerous applications. Data exploration and visualization enable users to identify interesting patterns, infer correlations and causalities, and support sense-making activities over data that are not always possible with traditional data mining techniques [IdreosPC15, DadzieLP09]. This is of great importance in the Big Data era, where the volume and heterogeneity of available information make it difficult for humans to manually explore and analyse large datasets.

One of the major challenges in visual exploration is related to the large size that characterizes many datasets nowadays. Considering the visual information seeking mantra, “overview first, zoom and filter, then details on demand” [Shneiderman96], gaining an overview is a crucial task in the visual exploration scenario. However, offering an overview of a large dataset is an extremely challenging task. Information overloading is a common issue in large dataset visualization; a basic requirement for the proposed approaches is to offer mechanisms for information abstraction and summarization.

The above challenges can be overcome by adopting hierarchical aggregation approaches (for simplicity, we also refer to them as hierarchical) [EF10]. Hierarchical approaches allow the visual exploration of very large datasets in a multilevel fashion, offering an overview of a dataset, as well as an intuitive and usable way for finding specific parts within it. Particularly, in hierarchical approaches, the user first obtains an overview of the dataset (both its structure and a summary of its content) before proceeding to data exploration operations, such as roll-up and drill-down, filtering out a specific part of it, and finally retrieving details about the data. Therefore, hierarchical approaches directly support the visual information seeking mantra. Also, hierarchical approaches can effectively address the problem of information overloading, as they provide information abstraction and summarization.

A second challenge is related to the availability of APIs and query endpoints (e.g., SPARQL) for online data access, as well as to cases where data is received in a stream fashion. These settings pose the challenge of handling large sets of data in a dynamic fashion, precluding a preprocessing phase such as traditional indexing. In this respect, modern techniques must offer scalability and efficient processing for on-the-fly analysis and visualization of dynamic datasets.

Finally, the requirement for on-the-fly visualization must be coupled with the diversity of preferences and requirements posed by different users and tasks. Therefore, the proposed approaches should provide the user with the ability to customize the exploration experience, allowing her to organize data in different ways according to the type of information or the level of detail she wishes to explore.

Considering the general problem of exploring big data [Shneiderman08, MortonBGM14, bs16, IdreosPC15, HeerK12b, GGL15], most approaches aim at providing appropriate summaries and abstractions over the enormous number of available data objects. In this respect, a large number of systems adopt approximation techniques (a.k.a. data reduction techniques) in which partial results are computed. Existing approaches are mostly based on: (1) sampling and filtering [FisherPDs12, ParkCM15, AgarwalMPMMS13, ImVM13, BattleSC13] and/or (2) aggregation (e.g., binning, clustering) [EF10, JugelJM15, JugelJHM14a, LiuJH13, hw13, bcs15, LinsKS13, AbelloHK06, RodriguesTPTTF13]. Similarly, some modern database-oriented systems adopt approximation techniques using query-based approaches (e.g., query translation, query rewriting) [BattleSC13, JugelJM15, JugelJHM14a, VartakMPP14, WuBM14]. Recently, incremental approximation techniques have been adopted; in these approaches, approximate answers are computed over progressively larger samples of the data [FisherPDs12, AgarwalMPMMS13, ImVM13]. In a different context, an adaptive indexing approach is used in [ZoumpatianosIP14], where the indexes are created incrementally and adaptively throughout exploration. Further, in order to improve performance, many systems exploit caching and prefetching techniques [TauheedHSMA12, KalininCZ14, JayachandranTKN14, bcs15, ChanXGH08, KhanSA14, DoshiRW03]. Finally, other approaches adopt parallel architectures [EMJ16, KamatJTN14, KalininCZ15, ImVM13].

Addressing the aforementioned challenges, in this work we introduce a generic model that combines personalized multilevel exploration with online analysis of numeric and temporal data. At its core lies a lightweight hierarchical aggregation model, constructed on-the-fly for a given set of data. The proposed model is a tree-based structure that aggregates data objects into multiple levels of hierarchically related groups based on the numeric or temporal values of the objects. Our model also enriches groups (i.e., aggregations/summaries) with statistical information regarding their contents, offering richer overviews and insights into the detailed data. An additional feature is that it allows users to organize data exploration in different ways, by parameterizing the number of groups, the range and cardinality of their contents, the number of hierarchy levels, and so on. On top of this model, we propose three user exploration scenarios and present two methods for efficient exploration over large datasets: the first achieves the incremental construction of the model based on user interaction, whereas the second enables dynamic and efficient adaptation of the model to the user's preferences. The efficiency of the proposed model is illustrated through a thorough theoretical analysis, as well as an experimental evaluation. Finally, the proposed model is realized in a web-based tool, called SynopsViz, that offers a variety of visualization techniques (e.g., charts, timelines) for multilevel visual exploration and analysis over Linked Data (LD) datasets.

Contributions. The main contributions of this work are summarized as follows.

  • We introduce a generic model for organizing, exploring, and analysing numeric and temporal data in a multilevel fashion.

  • We implement our model as a lightweight, main memory tree-based structure, which can be efficiently constructed on-the-fly.

  • We propose two tree structure versions, which adopt different approaches for the data organization.

  • We describe a simple method to estimate the tree construction parameters, when no user preferences are available.

  • We define different exploration scenarios assuming various user exploration preferences.

  • We introduce a method that incrementally constructs and prefetches the hierarchy tree via user interaction.

  • We propose an efficient method that dynamically adapts an existing hierarchy to a new one, considering the user's preferences.

  • We present a thorough theoretical analysis, illustrating the efficiency of the proposed model.

  • We develop a prototype system which implements the presented model, offering multilevel visual exploration and analysis over LD.

  • We conduct a thorough performance evaluation and an empirical user study, using the DBpedia 2014 dataset.

Outline. The remainder of this paper is organized as follows. Section 2 presents the proposed hierarchical model, and Section 3 provides the exploration scenarios and methods for efficient hierarchical exploration. Then, Section 4 presents the SynopsViz tool and demonstrates its basic functionality. The evaluation of our system is presented in Section 5. Section 6 reviews related work, while Section 7 concludes this paper.

2 The HETree Model

In this section we present HETree (Hierarchical Exploration Tree), a generic model for organizing, exploring, and analysing numeric and temporal data in a multilevel fashion. Particularly, HETree is defined in the context of multilevel (visual) exploration and analysis. The proposed model hierarchically organizes arbitrary numeric and temporal data, without requiring it to be described by a hierarchical schema. We should note that our model is not bound to any specific type of visualization; rather, it can be adopted by several “flat” visualization techniques (e.g., charts, timelines), offering scalable and multilevel exploration over non-hierarchical data.

In what follows, we present some basic aspects of our working scenario (i.e., a visual exploration and analysis scenario) and highlight the main assumptions and requirements employed in the construction of our model. First, the input data in our scenario can be retrieved directly from a database, but may also be produced dynamically, e.g., from a query or from data filtering (e.g., faceted browsing). Thus, we consider that data visualization is performed online; i.e., we do not assume an offline preprocessing phase in the construction of the visualization model. Second, users can specify different requirements or preferences with respect to the data organization. For example, a user may prefer to organize the data as a deep hierarchy for a specific task, while for another task a flat hierarchical organization is more appropriate. Therefore, even if the data is not dynamically produced, the data organization must be dynamically adapted to the user preferences. The same also holds for any additional information (e.g., statistical information) that is computed for each group of objects; this information must be recomputed when the groups of objects (i.e., the data organization) are modified.

From the above, a basic requirement is that the model must be constructed on-the-fly for any given data and user preferences. Therefore, we implement our model as a lightweight, main-memory tree structure, which can be efficiently constructed on-the-fly. We define two versions of this tree structure, following data organization approaches well-suited to the visual exploration and analysis context: the first version considers fixed-range groups of data objects, whereas the second considers fixed-size groups. Finally, our structure allows efficient on-the-fly statistical computations, which are extremely valuable in the hierarchical exploration and analysis scenario.

The basic idea of our model is to hierarchically group data objects based on values of one of their properties. Input data objects are stored at the leaves, while internal nodes aggregate their child nodes. The root of the tree represents (i.e., aggregates) the whole dataset. The basic concepts of our model can be considered similar to a simplified version of a static 1D R-Tree [Guttman84].

Regarding the visual representation of the model and data exploration, we consider that both sets of data objects (leaf node contents) and entities representing groups of objects (leaf or internal nodes) are visually represented, enabling the user to explore the data in a hierarchical manner. Note that our tree structure organizes data in a hierarchical model without setting any constraints on the way the user interacts with these hierarchies. As such, different strategies can be adopted regarding the traversal policy, as well as the nodes of the tree that are rendered in each visualization stage.

In the rest of this section, preliminaries are presented in Section 2.1. In Section 2.2, we introduce the proposed tree structure. Sections 2.3 and 2.4 present the two versions of the structure. Finally, Section 2.5 discusses the specification of the parameters required for the tree construction, and Section 2.6 presents how statistics computations can be performed over the tree.

2.1 Preliminaries

In this work we formalize data objects as RDF triples. However, the presented methods are generic and can be applied to any data objects with numeric or temporal attributes. Hence, in the following, the terms triple and (data) object will be used interchangeably.

We consider an RDF dataset consisting of a set of RDF triples. As input data, we assume a set of RDF triples D, where the triples in D have as objects either numeric (e.g., integer, decimal) or temporal values (e.g., date, time). Let t = ⟨s, p, o⟩ be an RDF triple; t.s, t.p and t.o represent, respectively, the subject, predicate and object of the RDF triple t.

Given input data D, S is the ordered set of RDF triples produced from D, where triples are sorted based on their objects' values in ascending order. Let sᵢ denote the i-th triple of S, with s₁ the first triple. Then, for each i ≤ j, we have that sᵢ.o ≤ sⱼ.o. Also, S and D contain exactly the same triples; i.e., t ∈ S iff t ∈ D.

Figure 1 presents a set of 10 RDF triples, representing persons and their ages. In Figure 1, we assume that the subjects (the persons p1–p10) are instances of a class Person and that the predicate age is a datatype property with integer range.

Figure 1: Running example input data (data objects)

Example 1. In Figure 1, given an RDF triple t describing a person and her age, t.s is the person, t.p is the predicate age, and t.o is the age value. Also, given that all triples of Figure 1 comprise the input data D and that S is the ordered set of D based on the object values in ascending order, s₁ is the triple with the smallest age value and s₁₀ the triple with the largest one.

Assume an interval I = [a, b), where a ≤ b; then, for a value x, x ∈ I iff a ≤ x < b. Similarly, for I = [a, b], we have that x ∈ I iff a ≤ x ≤ b. Let I⁻ and I⁺ denote the lower and upper bound of the interval I, respectively; that is, given I = [a, b), then I⁻ = a and I⁺ = b. The length of an interval I is defined as I⁺ − I⁻.

In this work we assume rooted trees. The number of children of a node is its degree. Nodes with degree 0 are called leaf nodes; any non-leaf node is called an internal node. Sibling nodes are nodes that have the same parent. The level of a node is defined by letting the root node be at level zero. Additionally, the height of a node is the length of the longest path from the node to a leaf; a leaf node has a height of 0.

The height of a tree is the maximum level of any node in the tree. The degree of a tree is the maximum degree of a node in the tree. An ordered tree is a tree in which the children of each node are ordered. A tree is called a d-ary tree if every internal node has no more than d children. A full d-ary tree is a tree in which every internal node has exactly d children. A perfect d-ary tree is a full d-ary tree in which all leaves are at the same level.

2.2 The HETree Structure

In this section, we present the HETree structure in more detail. HETree hierarchically organizes numeric and temporal data into groups; intervals are used to represent these groups. An HETree is defined by the tree degree and the number of leaf nodes. Essentially, the number of leaf nodes corresponds to the number of groups into which the input data objects are organized, while the tree degree corresponds to the (maximum) number of groups into which a group is split at the lower level.

Given a set of data objects (RDF triples) D, a positive integer ℓ denoting the number of leaf nodes, and a positive integer d denoting the tree degree, an HETree(D, ℓ, d) is an ordered d-ary tree with the following basic properties.

  • The tree has exactly ℓ leaf nodes.

  • All leaf nodes appear in the same level.

  • Each leaf node contains a set of data objects, sorted in ascending order based on their values. Given a leaf node n, n.data denotes the data objects contained in n.

  • Each internal node has at most d child nodes. Let n be an internal node; n.cᵢ denotes the i-th child of n, with n.c₁ being the leftmost child.

  • Each node corresponds to an interval. Given a node n, n.I denotes the interval of n.

  • At each level, all nodes are sorted based on the lower bounds of their intervals. That is, let n be an internal node; for any i < j, we have that n.cᵢ.I⁻ ≤ n.cⱼ.I⁻.

  • For a leaf node, its interval is bounded by the values of the objects included in this leaf node. Let n be a leaf node containing the objects sᵢ, …, sⱼ of S, where S is the ordered object set resulting from D; then, we have that n.I⁻ = sᵢ.o and n.I⁺ = sⱼ.o.

  • For an internal node, its interval is bounded by the union of the intervals of its children. That is, let n be an internal node having k child nodes; then, we have n.I⁻ = n.c₁.I⁻ and n.I⁺ = n.cₖ.I⁺.
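To make these properties concrete, the following minimal Python sketch models an HETree node; the class and field names (Node, interval, children, data) are ours for illustration and do not come from the paper's pseudocode.

    class Node:
        """An HETree node: a leaf holds sorted data objects, an internal
        node holds an ordered list of children; both cover an interval."""

        def __init__(self, low, high):
            self.interval = (low, high)  # (lower bound, upper bound) of covered values
            self.parent = None           # parent node (None for the root)
            self.children = []           # child nodes, ordered left to right
            self.data = []               # data objects (leaves only), sorted by value

        @property
        def is_leaf(self):
            return not self.children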

Furthermore, we present two different approaches for organizing the data in the HETree. Assume a scenario in which a user wishes to (visually) explore and analyse the historic events from DBpedia [AuerBKLCI07] per decade. In this case, the user orders historic events by their date and organizes them into groups of equal ranges (i.e., decades). In a second scenario, assume that a user wishes to analyse, in the Eurostat dataset, the gross domestic product (GDP) organized into fixed-size groups of countries. In this case, the user is interested in finding information like the range and the variance of the GDP values over the top-10 countries with the highest GDP. In this scenario, the user orders countries by their GDP and organizes them into groups of equal size (e.g., 10 countries per group).

In the first approach, we organize data objects into groups such that the object values of each group cover an equal range of values. In the second approach, we organize objects into groups such that each group contains the same number of objects. In the following sections, we present the two approaches for organizing the data in the HETree in detail.

2.3 A Content-based HETree (HETree-C)

In this section we introduce a version of the HETree, named HETree-C (Content-based HETree). This HETree version organizes data into equally sized groups. The basic property of the HETree-C is that each leaf node contains approximately the same number of objects and the content (i.e., objects) of a leaf node specifies its interval. For the tree construction, the objects are first assigned to the leaves and then the intervals are defined.

An HETree-C is an HETree with the following extra property: each leaf node contains either ⌈|D|/ℓ⌉ or ⌊|D|/ℓ⌋ objects. Particularly, the leftmost |D| mod ℓ leaves contain ⌈|D|/ℓ⌉ objects each, while the rest of the leaves contain ⌊|D|/ℓ⌋ objects each. We can equivalently define the HETree-C by providing the number of objects per leaf, instead of the number of leaves ℓ.

Figure 2: A Content-based HETree (HETree-C)

Example 2. Figure 2 presents an HETree-C constructed over the set of objects D from Figure 1, for given values of ℓ and d. As we can observe, all the leaf nodes contain an equal number of objects (here, ℓ divides |D| exactly). Regarding the leftmost leaf, its interval is bounded by the values of the first and last objects it contains.

The HETree-C Construction

We construct the HETree-C in a bottom-up way. Algorithm 1 describes the HETree-C construction. Initially, the algorithm sorts the object set in ascending order based on the objects' values (line 1). Then, the algorithm uses two procedures to construct the tree nodes: one creating the leaf nodes and one creating the internal nodes. Finally, the root node of the constructed tree is returned (line 4).

[Algorithm 1: HETree construction — pseudocode not reproduced here]

The leaf-creation procedure (Procedure 1) constructs the ℓ leaf nodes (lines 4–16). The first |D| mod ℓ leaves receive ⌈|D|/ℓ⌉ objects each, while the rest of the leaves receive ⌊|D|/ℓ⌋ objects each (lines 6–9). Finally, the set of created leaf nodes is returned (line 17).

[Procedure 1: HETree-C leaf creation — pseudocode not reproduced here]

The internal-node procedure (Procedure 2) builds the internal nodes in a recursive manner. Given a set of nodes, their parent nodes are created (lines 4–16); then, the procedure calls itself using the parent nodes as input (line 21). The recursion terminates when the number of created parent nodes is equal to one (line 17), i.e., when the root of the tree has been created.

Computational Analysis. The computational cost of the HETree-C construction (Algorithm 1) is the sum of three parts. The first is sorting the input data, which in the worst case can be done in O(|D| log |D|), employing a linearithmic sorting algorithm (e.g., merge sort). The second part is the leaf-creation procedure, which requires O(|D|) for scanning all data objects. The third part is the internal-node procedure, whose cost is linear in the number of internal nodes of the tree. Note that the maximum number of internal nodes in a d-ary tree corresponds to the number of internal nodes in a perfect d-ary tree of the same height, and that the number of internal nodes of a perfect d-ary tree of height h is (dʰ − 1)/(d − 1). In our case, the height of the tree is ⌈log_d ℓ⌉; hence, the maximum number of internal nodes is at most (d·ℓ − 1)/(d − 1), and the internal-node procedure in the worst case requires O((d·ℓ − 1)/(d − 1)). Therefore, since ℓ ≤ |D|, the overall computational cost of the HETree-C construction in the worst case is O(|D| log |D|).

[Procedure 2: internal-node creation — pseudocode not reproduced here]
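The following Python sketch illustrates the bottom-up HETree-C construction just described: sort, create (nearly) equally sized leaves, then repeatedly group d nodes under a parent until a single root remains. It reuses the Node class from the earlier sketch and follows the spirit of Algorithm 1 and Procedures 1–2, not their exact pseudocode.

    def build_hetree_c(objects, num_leaves, degree):
        """objects: list of (subject, value) pairs; returns the root Node."""
        objects = sorted(objects, key=lambda t: t[1])       # sort by object value
        q, r = divmod(len(objects), num_leaves)
        leaves, pos = [], 0
        for i in range(num_leaves):                         # leaf creation
            size = q + 1 if i < r else q                    # leftmost r leaves get one more
            chunk = objects[pos:pos + size]
            pos += size
            leaf = Node(chunk[0][1], chunk[-1][1])          # interval: first/last value
            leaf.data = chunk
            leaves.append(leaf)
        level = leaves
        while len(level) > 1:                               # build internal levels bottom-up
            parents = []
            for i in range(0, len(level), degree):
                group = level[i:i + degree]
                parent = Node(group[0].interval[0], group[-1].interval[1])
                parent.children = group
                for child in group:
                    child.parent = parent
                parents.append(parent)
            level = parents
        return level[0]

For instance, with ten input objects, build_hetree_c(data, 5, 3) yields five leaves of two objects each under a two-level hierarchy (two internal nodes plus the root).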

2.4 A Range-based HETree (HETree-R)

The second version of the HETree is called HETree-R (Range-based HETree). HETree-R organizes data into equally ranged groups. The basic property of the HETree-R is that each leaf node covers an equal range of values. Therefore, in HETree-R, the data space defined by the objects values is equally divided over the leaves. As opposed to HETree-C, in HETree-R the interval of a leaf specifies its content. Therefore, for the HETree-R construction, the intervals of all leaves are first defined and then objects are inserted.

An HETree-R is an HETree with the following extra property: the interval of each leaf node has the same length, i.e., each leaf covers an equal range of values. Formally, let S be the sorted object set resulting from D, and let n = |D|; each leaf node's interval has length ρ = (sₙ.o − s₁.o)/ℓ. Therefore, the interval of the i-th leaf (counting from the left) is [s₁.o + (i−1)·ρ, s₁.o + i·ρ), with the rightmost leaf's interval closed on the right; for example, the leftmost leaf's interval is [s₁.o, s₁.o + ρ). The HETree-R is equivalently defined by providing the interval length ρ, instead of the number of leaves ℓ.

Example 3. Figure 3 presents an HETree-R constructed over the set of objects D (Figure 1), for given values of ℓ and d. As we can observe from Figure 3, each leaf node covers an equal range of values; particularly, the interval of each leaf has length ρ, as defined above.

Figure 3: A Range-based HETree (HETree-R)

The HETree-R Construction

This section studies the construction of the HETree-R structure. The HETree-R is also constructed in a bottom-up fashion.

As with the HETree-C version, Algorithm 1 is used for the HETree-R construction. The only difference is the leaf-creation procedure (line 2), which creates the leaf nodes of the HETree-R and is presented in Procedure 3.

The procedure constructs the ℓ leaf nodes (lines 2–9) and assigns equal-length intervals to them (lines 4–8); it then traverses all objects in S (lines 10–12) and places each one in the appropriate leaf node (line 12). Finally, it returns the set of created leaves (line 13).

[Procedure 3: HETree-R leaf creation — pseudocode not reproduced here]

Computational Analysis. The computational cost of the HETree-R construction (Algorithm 1) for sorting the input data (line 1) and creating the internal nodes (line 3) is the same as in the HETree-C case. The leaf-creation procedure (line 2) requires O(|D|), since each object can be placed in its leaf in constant time and ℓ ≤ |D|. Using the computational costs for the first and the third part from Section 2.3.1, we have that, in the worst case, the overall computational cost of the HETree-R construction is O(|D| log |D|).
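A corresponding sketch for the HETree-R leaf creation: equal-width intervals are fixed first, and each (already sorted) object is then placed in its leaf with a constant-time index computation. Internal nodes are built exactly as in the HETree-C sketch; names and details are again illustrative.

    def create_r_leaves(sorted_objects, num_leaves):
        """sorted_objects: list of (subject, value) pairs sorted by value."""
        lo, hi = sorted_objects[0][1], sorted_objects[-1][1]
        rho = (hi - lo) / num_leaves or 1.0          # interval length (guard all-equal case)
        leaves = [Node(lo + i * rho, lo + (i + 1) * rho) for i in range(num_leaves)]
        for subj, val in sorted_objects:
            i = min(int((val - lo) / rho), num_leaves - 1)  # clamp the max value into the last leaf
            leaves[i].data.append((subj, val))
        return leaves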

2.5 Estimating the HETree Parameters

In our working scenario, the user specifies the parameters required for the HETree construction (e.g., the number of leaves ℓ and the degree d). In this section, we describe our approach for automatically calculating the HETree parameters based on the input data, when no user preferences are provided. Our goal is to derive the parameters from the input data, such that the resulting HETree addresses some basic guidelines set by the visualization environment. In what follows, we discuss the proposed approach in detail.

An important parameter in hierarchical visualizations is the minimum and maximum number of objects that can be effectively rendered at the most detailed level. In our case, these numbers correspond to the number of objects contained in the leaf nodes. The proper calculation of these numbers is crucial, such that the resulting tree avoids overloaded visualizations.

Therefore, in the HETree construction, our approach considers the minimum and the maximum number of objects per leaf node, denoted as λmin and λmax, respectively. Besides the number of objects rendered at the lowest level, our approach considers perfect d-ary trees, such that a more “uniform” structure results (i.e., all the internal nodes have exactly d child nodes). The following example illustrates our approach to calculating the HETree parameters.

Example 4. Assume that, based on an adopted visualization technique, the ideal number of data objects to be rendered on a specific screen is between λmin and λmax objects.

Now, let's assume that we want to visualize an object set D using an HETree-C. Based on the number of objects |D| and the per-leaf bounds, we can estimate bounds for the number of leaves. Let ℓmin and ℓmax denote the lower and the upper bound for the number of leaves; we have that ℓmin = ⌈|D|/λmax⌉ and ℓmax = ⌊|D|/λmin⌋.

Hence, our HETree-C should have between ℓmin and ℓmax leaf nodes. Since we consider perfect d-ary trees, from Table 1 we can identify the tree settings that conform to this number-of-leaves guideline; the candidate setting (i.e., number of leaves and degree) is indicated in Table 1 using dark-grey colour. Note that settings with d = 2 are not examined, since visualizing two groups of objects in each level is considered too few under most visualization settings; hence, in any case we only assume settings with d ≥ 3. Therefore, an HETree-C with the indicated number of leaves and degree is a suitable structure for our case.

Now, let's assume that we want to visualize a larger object set. Following a similar approach, we obtain new bounds ℓmin and ℓmax; the candidate settings are indicated in Table 1 using light-grey colour. Hence, in this case, three settings satisfy the considered guideline; we refer to them as S1, S2 and S3, each with its own number of leaves and degree.

In the case where more than one setting satisfies the considered guideline, we select the preferred one according to the following set of rules. From the candidate settings, we prefer the setting which results in the highest tree (1st Criterion). In case the highest tree is constructed by more than one setting, we consider the distance δ between the resulting number of objects per leaf λ and the centre of λmin and λmax (2nd Criterion); i.e., δ = |λ − (λmin + λmax)/2|. The setting with the lowest δ value is selected. Note that, based on the visualization context, different criteria and preferences may be adopted.

In our example, setting S1 is selected from the candidate settings, since it constructs the highest tree; settings S2 and S3 construct trees of lower height.

Now, assume a scenario where only S2 and S3 are candidates. In this case, since both settings result in trees of equal height, the 2nd Criterion is considered: the δ values of the two settings are computed, and the setting with the smaller δ is preferred.

In the case of HETree-R, a similar approach is followed, assuming a normal distribution over the values of the objects.

               Degree
Height      3      4      5      6
1           3      4      5      6
2           9     16     25     36
3          27     64    125    216
4          81    256    625   1296
5         243   1024   3125   7776
6         729   4096  15625  46656
Table 1: Number of leaf nodes for perfect d-ary trees
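A small Python sketch of this parameter estimation, under the assumptions above: enumerate perfect d-ary settings (d ≥ 3), keep those whose leaf count falls within the bounds derived from |D| and the per-leaf limits, and rank them by tree height and then by the distance δ. Function and variable names are illustrative.

    import math

    def estimate_parameters(num_objects, lam_min, lam_max, degrees=(3, 4, 5, 6), max_h=8):
        """Return (num_leaves, degree) of the preferred perfect d-ary setting,
        or None if no setting satisfies the bounds."""
        l_min = math.ceil(num_objects / lam_max)        # fewest leaves allowed
        l_max = num_objects // lam_min                  # most leaves allowed
        centre = (lam_min + lam_max) / 2
        candidates = []
        for d in degrees:                               # only settings with d >= 3
            for h in range(1, max_h + 1):
                leaves = d ** h                         # perfect d-ary tree leaf count
                if l_min <= leaves <= l_max:
                    delta = abs(num_objects / leaves - centre)     # 2nd criterion
                    candidates.append((-h, delta, leaves, d))      # 1st criterion: height
        if not candidates:
            return None
        _, _, leaves, d = min(candidates)               # highest tree, then smallest delta
        return leaves, d

For example, estimate_parameters(10000, 25, 50) prefers a degree-3 tree of height 5 (243 leaves, about 41 objects per leaf) over the shallower degree-4 and degree-6 alternatives.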

2.6 Statistics Computations over HETree

Data statistics are a crucial aspect in the context of hierarchical visual exploration and analysis. Statistical information over groups of objects (i.e., aggregations) offers rich insights into the underlying (i.e., aggregated) data. In this way, useful information regarding different sets of objects with common characteristics is provided. Additionally, this information may guide the users through their navigation over the hierarchy.

In this section, we present how statistics computation is performed over the nodes of the HETree. Statistics computations exploit two main aspects of the HETree structure: (1) the internal nodes aggregate their child nodes; and (2) the tree is constructed in bottom-up fashion. Statistics computation is performed during the tree construction; for the leaf nodes, we gather statistics from the objects they contain, whereas for the internal nodes we aggregate the statistics of their children.

For simplicity, we assume here that each node contains the following extra fields, used for simple statistics computations, although more complex or RDF-related statistics (e.g., most common subject, subject with the minimum value) can also be computed. Given a node n, n.N denotes the number of objects covered by n; n.μ and n.σ² denote the mean and the variance of the objects' values covered by n, respectively. Additionally, we record the minimum and the maximum values, denoted as n.min and n.max, respectively.

Statistics computations can easily be performed within the construction algorithm (Algorithm 1) without any modifications. The following example illustrates these computations.

Example 5. In this example we assume the HETree-C presented in Figure 2. Figure 4 shows the HETree-C with the computed statistics in each node. When all the leaf nodes have been constructed, the statistics for each leaf are computed directly from the objects it contains: the number of objects, their mean and variance, and their minimum and maximum values. Following this process, we compute the statistics for all leaf nodes.

Figure 4: Statistics computation over HETree

Then, for each parent node we construct, we compute its statistics using the already computed statistics of its child nodes. Consider an internal node n with child nodes c₁, …, cₖ. The count is n.N = c₁.N + … + cₖ.N, the minimum is n.min = min(c₁.min, …, cₖ.min), and the maximum is n.max = max(c₁.max, …, cₖ.max). The mean value is computed by combining the children's mean values, weighted by their counts: n.μ = (c₁.N·c₁.μ + … + cₖ.N·cₖ.μ) / n.N. Similarly, for the variance we have n.σ² = (c₁.N·(c₁.σ² + c₁.μ²) + … + cₖ.N·(cₖ.σ² + cₖ.μ²)) / n.N − n.μ².

A similar approach is also followed in the case of HETree-R.

Computational Analysis. Most of the well-known statistics (e.g., mean, variance, skewness) can be computed in time linear in the number of elements; hence, the computation cost over a set of k numeric values is O(k). For a leaf node containing k objects, the cost of the statistics computations is O(k); thus, the cost over all leaf nodes is O(|D|). For an internal node, the cost is O(d), since its statistics are computed by aggregating the statistics of its (at most d) child nodes. Considering that (d·ℓ − 1)/(d − 1) is the maximum number of internal nodes (Section 2.3.1), in the worst case the cost for the internal nodes is O(d·ℓ). Therefore, the overall cost for statistics computations over an HETree is O(|D| + d·ℓ), i.e., O(|D|) for ℓ ≤ |D| and constant d.
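The bottom-up computation can be sketched as follows: leaf statistics are computed from raw values, and parent statistics merge child statistics with the count-weighted formulas of Example 5. The dictionary representation is an illustrative choice.

    def leaf_stats(values):
        """Statistics of a leaf, computed directly from its object values."""
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n      # population variance
        return {"n": n, "mean": mean, "var": var,
                "min": min(values), "max": max(values)}

    def merge_stats(children):
        """Parent statistics aggregated from child statistics in O(d)."""
        n = sum(c["n"] for c in children)
        mean = sum(c["n"] * c["mean"] for c in children) / n
        # each child's mean of squares is var + mean^2; combine, then re-centre
        var = sum(c["n"] * (c["var"] + c["mean"] ** 2) for c in children) / n - mean ** 2
        return {"n": n, "mean": mean, "var": var,
                "min": min(c["min"] for c in children),
                "max": max(c["max"] for c in children)}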

3 Efficient Multilevel Exploration

In this section, we exploit the HETree structure in order to efficiently handle different multilevel exploration scenarios. Essentially, we propose two methods for efficient hierarchical exploration over large datasets. The first method incrementally constructs the hierarchy via user interaction; the second one achieves dynamic adaptation of the data organization based on user’s preferences.

3.1 Exploration Scenarios

In a typical multilevel exploration scenario, referred to here as the Basic exploration scenario (BSC), the user explores a dataset in a top-down fashion. The user first obtains an overview of the data through the root level, and then drills down to more fine-grained contents for accessing the actual data objects at the leaves. In BSC, the root of the hierarchy is the starting point of the exploration and, thus, the first element to be presented (i.e., rendered).

The described scenario offers basic exploration capabilities; however, it does not cover use cases with user-specified starting points other than the root, such as starting the exploration from a specific resource or from a specific range of values.

Consider the following example, in which the user wishes to explore the DBpedia infoboxes dataset to find places with very large population. Initially, she selects the populationTotal property and starts her exploration from the root node, moves down the right part of the tree, and ends up at the rightmost leaf, which contains the most highly populated places. Then, she is interested in viewing the area size (i.e., the areaTotal property) for one of the highly populated places, and also in exploring places with similar area size. Finally, she decides to explore places based on the water area size they contain (i.e., the areaWater property). In this case, she prefers to start her exploration from places whose water area size lies within a given range of values.

In this example, besides the BSC scenario, we consider two additional exploration scenarios. In the Resource-based exploration scenario (RES), the user specifies a resource of interest (e.g., an IRI) and a specific property; the exploration starts from the leaf containing the specified resource and proceeds in a bottom-up fashion. Thus, in RES, the data objects contained in the same leaf as the resource of interest are presented first. We refer to that leaf as the leaf of interest.

The third scenario, named the Range-based exploration scenario (RAN), enables the user to start her exploration from an arbitrary point in the hierarchy by providing a range of values; the user starts from a set of internal nodes and can then move up or down the hierarchy. The RAN scenario begins by rendering all sibling nodes that are children of the node covering the specified range of interest; we refer to these nodes as the nodes of interest.

Note that, regarding the adopted rendering policy, in all scenarios we only render nodes belonging to the same level: either a set of sibling nodes, or the data objects contained in the same leaf.

Regarding the “navigation-related” operations, the user can move down or up the hierarchy by performing a drill-down or a roll-up operation, respectively. A drill-down operation over a node n enables the user to focus on n and render its child nodes; if n is a leaf node, the set of data objects contained in n is rendered. Conversely, the user can perform a roll-up operation on a set of sibling nodes N; the parent node of N, along with the parent's sibling nodes, is rendered. Finally, a roll-up operation applied to a set of data objects renders the leaf node that contains them along with its sibling leaves, whereas a drill-down operation is not applicable to a data object.
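Under this rendering policy, the two operations can be sketched as follows, using the Node class from Section 2 (with its parent references); this is an illustrative sketch, not the tool's implementation.

    def drill_down(node):
        """Focus on a node: render its children, or its data objects if it is a leaf."""
        return node.data if node.is_leaf else node.children

    def roll_up(siblings):
        """Render the parent of the given sibling nodes along with the parent's siblings."""
        parent = siblings[0].parent
        if parent is None:                 # already at the root level
            return siblings
        if parent.parent is None:          # the parent is the root: render it alone
            return [parent]
        return parent.parent.children      # the parent together with its siblings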

3.2 Incremental HETree Construction

In the Web of Data, the dataset might be dynamically retrieved from a remote site (e.g., via a SPARQL endpoint); as a result, in all exploration scenarios we have assumed that the HETree is constructed on-the-fly at the time the user starts her exploration. In the previous DBpedia example, the user explores three different properties; although only a small part of each hierarchy is accessed, the whole hierarchies are constructed and the statistics of all nodes are computed. Considering the recommended HETree parameters for the employed properties, this scenario requires many thousands of nodes to be constructed over the three properties together. The construction of the hierarchies for large datasets poses a time overhead (as shown in the experimental section) and, consequently, increases the response time in user exploration.

In this section, we introduce the ICO (Incremental HETree Construction) method, which incrementally constructs the HETree based on user interaction. The proposed method goes beyond incremental tree construction, aiming at further reducing the response time during the exploration process by “pre-constructing” (i.e., prefetching) the parts of the tree that will be visited by the user in her next roll-up or drill-down operation. Hence, a node is not constructed when the user visits it for the first time; instead, it has been constructed in a previous exploration step, in which the user was on a node from which it can be reached by a roll-up or a drill-down operation. In this way, our method offers incremental construction of the tree, tailored to each user's exploration. Finally, we show that, during an exploration scenario, ICO constructs the minimum number of HETree elements.

Figure 5: Incremental HETree construction example. ➊ Resource-based (RES) exploration scenario; ➋ Range-based (RAN) exploration scenario

Employing the ICO method in the DBpedia example, the populationTotal hierarchy will construct only a small number of nodes: the root along with its children, plus the nodes reached in each of the lower tree levels during navigation. The areaTotal hierarchy will construct only the leaf node containing the requested resource along with its siblings. Finally, the areaWater hierarchy will initially contain either a set of sibling leaf nodes or a set of sibling internal nodes (along with the nodes reachable in one step), depending on whether the user's input range corresponds to leaves or to internal nodes.

Example 6. We demonstrate the functionality of ICO through the following example. Assume the dataset used in our running examples, describing persons and their ages. Figure 5 presents the incremental construction of the HETree of Figure 3 for the RES and RAN exploration scenarios. Blue colour is used to indicate the HETree elements that are presented (i.e., rendered) to the user in each exploration stage.

In the RES scenario (upper flow in Figure 5), the user specifies “http://persons.com/p6” as her resource of interest; all data objects contained in the same leaf as the resource of interest are initially presented to the user. ICO initially constructs that leaf along with its sibling leaves; these correspond to the nodes that the user can reach in a next (roll-up) step. Next, the user rolls up and the sibling leaves are presented to her; at the same time, their parent node and its siblings are constructed. Note that all elements which are accessible by moving either down (the leaves' data objects) or up (the parent-level nodes) have already been constructed. Finally, when the user rolls up again, the parent-level nodes are rendered, and their own parent, along with the children of the other newly presented internal nodes, is constructed.

In the RAN scenario (lower flow in Figure 5), the user specifies a range of interest. The nodes covering this range are initially presented along with their siblings. Also, ICO constructs their parent node and its siblings, since these are accessible by one exploration step. Then, the user performs a roll-up, and ICO constructs the elements reachable from the newly presented nodes (as described in the RES scenario above).

In the beginning of each exploration scenario, ICO constructs a set of initial nodes: the nodes initially presented, as well as the nodes potentially reached by the user's first operation (i.e., the required HETree elements). The required HETree elements of an exploration step are the nodes that the user can reach by performing one exploration operation. Hence, in the RES scenario, the initial nodes are the leaf of interest and its sibling leaves. In RAN, the initial nodes are the nodes of interest, their children, and their parent node along with its siblings. Finally, in the BSC scenario, the initial nodes are the root node and its children.

In what follows we describe the construction rules adopted by ICO through the user exploration process. These rules provide the correspondences between the types of elements presented in each exploration step and the elements that ICO constructs. Note that these rules are applied after the construction of the initial nodes, in all three exploration scenarios. The correctness of these rules is verified later in Proposition 3.2.

Rule 1: If a set of internal sibling nodes N is presented, ICO constructs: the parent node of N along with the parent's siblings, and the children of each node in N.

Rule 2: If a set of leaf sibling nodes is presented, ICO does not construct anything (the required nodes have been previously constructed).

Rule 3: If a set of data objects is presented, ICO does not construct anything (the required nodes have been previously constructed).

The following proposition shows that, in all cases, the required HETree elements have been constructed earlier by ICO.

Proposition 1. In any exploration scenario, the HETree elements a user can reach by performing one operation (i.e., required elements), have been previously constructed by ICO.

Also, the following theorem shows that over any exploration scenario ICO constructs only the required HETree elements.

Theorem 1. ICO constructs the minimum number of HETree elements in any exploration scenario.

ICO Algorithm

In this section, we present the incremental HETree construction algorithm. Note that we include the pseudocode only for the HETree-R version, since the only differences from the HETree-C version are the way the nodes' intervals are computed and the fact that the dataset is initially sorted. In the analysis of the algorithms, both versions are studied.

Here, we assume that each node contains the following extra fields. Given a node n, n.parent denotes the parent node of n, and n.h denotes the height of n in the hierarchy. Additionally, given a dataset D, D.min and D.max denote the minimum and the maximum object value in D, respectively. The user preferences regarding the exploration's starting point are represented as an interval P. In the RES scenario, given that the value of the explored property for the resource of interest is v, we have P = [v, v]. In the RAN scenario, given that the range of interest is [a, b], we have P⁻ = a and P⁺ = b. In the BSC scenario, the user does not provide any preferences regarding the starting point, so we have P⁻ = D.min and P⁺ = D.max. Finally, according to the definition of the HETree, a node n encloses a data object (i.e., triple) t if n.I⁻ ≤ t.o and t.o ≤ n.I⁺.
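A trivial sketch of how the starting-point interval P is derived in each scenario, under the definitions above (names illustrative):

    def starting_interval(scenario, d_min, d_max, v=None, a=None, b=None):
        """Compute P for the three exploration scenarios."""
        if scenario == "RES":
            return (v, v)            # value of the explored property for the resource
        if scenario == "RAN":
            return (a, b)            # user-provided range of interest
        return (d_min, d_max)        # BSC: no preference, whole value domain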

The ICO algorithm (Algorithm 2) implements the incremental method for HETree-R. The algorithm uses two procedures to construct all required nodes (available in Appendix B): the first (Procedure 4) constructs the nodes which can be reached by a roll-up operation, whereas the second (Procedure 5) constructs the nodes which can be reached by a drill-down operation. Additionally, these procedures exploit two auxiliary procedures (Appendix B), Procedure 6 and Procedure 7, which are used for computing node intervals and constructing nodes.

[Algorithm 2: ICO (incremental HETree-R construction) — pseudocode not reproduced here]

The algorithm is invoked at the beginning of the exploration scenario, in order to construct the initial nodes, as well as every time the user performs an operation. The algorithm takes as input the dataset D, the tree parameters ℓ and d, the starting point P, the currently presented (i.e., rendered) elements R, and the HETree H constructed so far. ICO begins with the currently presented elements R being empty (lines 1–5). Based on the starting point P, the algorithm computes the interval V corresponding to the sibling nodes that are first presented to the user, as well as their hierarchy height (line 3). For the sake of simplicity, the details of computing V and the height are omitted; for example, in the RES scenario, the interval of the leaf that contains the resource of interest with object value v can be determined in constant time from D.min and the leaf interval length ρ. Following a similar approach, we can easily compute V in the other scenarios.

Based on V, the algorithm constructs the sibling nodes that are first presented to the user (line 4). Then, the algorithm constructs the rest of the initial nodes (lines 6–9). In the RES case, V is the interval covering the leaf that contains the resource of interest along with its sibling leaves; hence, all the initial nodes are constructed in line 4 and the algorithm terminates (line 5) until the user's next operation.

After the first call, in each ICO execution the algorithm initially checks whether the parent node of the currently presented elements has already been constructed, or whether all the nodes that enclose data objects have been constructed (line 6). If not, the roll-up construction procedure (line 7) is used to construct the parent node, as well as the parent's siblings. In the case that the presented elements are not leaf nodes or data objects (line 8), the drill-down construction procedure (line 9) is used to construct all their children. Finally, the algorithm returns the updated HETree (line 10).
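As an illustration of the drill-down side of this process for HETree-R, the following sketch materializes the children of a node on demand: child intervals are derived by splitting the node's interval into d equal parts, and leaf children locate their objects with binary search instead of a full scan. The function name, parameters, and the use of the bisect module are our assumptions, not the paper's procedures.

    import bisect

    def construct_children(node, sorted_objects, degree, children_are_leaves):
        """Materialize the d children of an HETree-R internal node on demand.

        sorted_objects: list of (subject, value) pairs sorted by value."""
        lo, hi = node.interval
        step = (hi - lo) / degree
        values = [v for _, v in sorted_objects]
        for i in range(degree):
            child = Node(lo + i * step, lo + (i + 1) * step)
            child.parent = node
            if children_are_leaves:
                # locate the enclosed objects via binary search on the sorted values
                # (by convention, the global maximum goes into the rightmost leaf)
                a = bisect.bisect_left(values, child.interval[0])
                b = bisect.bisect_left(values, child.interval[1])
                child.data = sorted_objects[a:b]
            node.children.append(child)
        return node.children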

Computational Analysis

Here we analyse the incremental construction for both HETree versions.

Number of Constructed Nodes. Regarding the number of initial nodes constructed in each scenario: in the RES scenario, at most d leaf nodes are constructed; in the RAN scenario, at most d² + 2d nodes are constructed (the nodes of interest, their children, and the parent level); finally, in the BSC scenario, d + 1 nodes are constructed (the root and its children).

Regarding the maximum number of nodes constructed by each operation in the RES and RAN scenarios: (1) a roll-up operation constructs at most d + d² nodes — the parent node along with its siblings, and the children of the newly presented nodes; (2) a drill-down operation constructs at most d² nodes, i.e., the children of the newly presented nodes. As for the BSC scenario: (1) a roll-up operation does not construct any nodes; (2) a drill-down operation constructs at most d² nodes.

Discussion. The worst-case computational cost is higher in HETree-R than in HETree-C, for all exploration scenarios. Particularly, in the HETree-R worst case, ICO must build leaves that together contain the whole dataset, so the computational cost is O(|D|) for all scenarios. In HETree-C, the contents of the constructed leaves are fixed in advance (roughly |D|/ℓ objects each), so each operation is cheaper than in the HETree-R worst case. A detailed computational analysis for both HETree-R and HETree-C is included in Appendix C.

[Table 2: Summary of Adaptive HETree Construction — for each modification type (tree degree, number of leaves) and each case of Sections 3.3.1 and 3.3.2: the computational complexity of constructing the new tree, the number of leaf and internal nodes constructed from scratch, and the number derived from nodes of the existing tree, for both tree construction and statistics computations; a Full Construction column is included for comparison. The detailed formulas are given in Appendix D.]

3.3 Adaptive HETree Construction

In a (visual) exploration scenario, users may wish to modify the organization of the data by providing user-specific preferences for the whole hierarchy or part of it. The user can select a specific subtree and alter the number of groups presented in each level (i.e., the tree degree) or the size of the groups (i.e., the number of leaves). In this case, a new tree (or part of it) pertaining to the new parameters provided by the user must be constructed on-the-fly.

For example, consider the HETree-C of Figure 6, representing the ages of persons. A user may navigate to an internal node where she prefers to increase the number of groups presented in each level; thus, she increases the degree of that node's subtree, and the subtree is adapted to the new parameter, as depicted in the bottom tree of Figure 6. On the other hand, the user prefers to explore the right subtree in less detail; she chooses to increase the size of the groups by reducing the number of leaves for that subtree. In both cases, constructing the subtree from scratch based on the user-provided parameters and recomputing the statistics entails a significant time overhead, especially when the user preferences are applied to a large part of, or the whole, hierarchy.

Figure 6: Adaptive HETree example

In this section, we introduce the ADA (Adaptive HETree Construction) method, which dynamically adapts an existing HETree to a new one, considering a set of user-defined parameters. Instead of constructing the tree and computing the nodes' statistics from scratch, our method reconstructs the new part(s) of the hierarchy by exploiting the existing elements (i.e., nodes, statistics) of the tree. In this way, ADA reduces the overall construction cost and enables the on-the-fly reorganization of the visualized data. In the example of Figure 6, the new left subtree can be derived from the old one just by removing a level of internal nodes, while the new right subtree results from merging leaves together and aggregating their statistics.

Let T denote the existing HETree and T′ the new HETree corresponding to the new user preferences for the tree degree d′ and the number of leaves ℓ′. Note that T′ could also denote a subtree of an existing HETree (in the scenario where the user modifies only a part of it); in this case, the user indicates the reconstruction root of T′.

Then, ADA identifies the following elements of T: (1) The elements of T that also exist in T′. For example, the leaf nodes of T′ may already exist as internal nodes of T at some level, and the statistics of T′ nodes at one level may be equal to the statistics of T nodes at a corresponding level. (2) The elements of T that can be reused (as “building blocks”) for constructing elements of T′. For example, each leaf node of T′ may be constructed by merging several leaf nodes of T, and the statistics of a T′ node may be computed by aggregating the statistics of several T nodes.

Consequently, we consider that an element (i.e., a node or a node's statistics) in T′ can be: (1) constructed/computed from scratch, (2) reused as-is from T, or (3) derived by aggregating elements of T.

Table 2 summarizes the ADA reconstruction process. Particularly, the table includes: (1) the computational complexity of constructing T′; (2) the number of leaves and internal nodes of T′ constructed from scratch; and (3) the number of leaves and internal nodes of T′ derived from nodes of T. The lower part of the table presents the corresponding results for the computation of node statistics in T′. Finally, the second table column presents the results of constructing T′ from scratch (Full Construction).

The following example demonstrates the ADA results, considering a DBpedia exploration scenario.

Example 7. The user explores the populationTotal property of the DBpedia dataset. The default system organization for this property is a hierarchy with a given degree. The user modifies the tree parameters to better fit her visualization needs as follows. First, she decides to render more groups in each hierarchy level and increases the degree (1st Modification). Then, she observes that the results overflow the visualization area and that a smaller degree fits better; thus, she re-adjusts the tree degree to a smaller value (2nd Modification). Next, she navigates through the data values and decides to increase the groups' size by a factor of three, i.e., to divide the number of leaves by three (3rd Modification). Again, she revises her decision and re-adjusts the final group size to twice the default size (4th Modification).

Table 3 summarizes the number of nodes constructed by a Full Construction and by ADA for each modification, along with the required statistics computations. Considering the whole set of modifications, ADA constructs only a small fraction of the nodes that are created in the case of the full construction, and likewise computes statistics for only a small fraction of the nodes.

[Table 3: Full Construction vs. ADA over the DBpedia exploration scenario (cell values: Full / ADA) — number of constructed nodes and number of statistics computations for each of the four modifications]

In the next sections, we present the reconstruction process in detail through the example trees of Figure 7. Figure 7a presents the initial tree T, which is an HETree-C. Figures 7b–7e present several reconstructed trees T′. Blue dashed lines are used to indicate the elements (i.e., nodes, edges) of T′ which do not exist in T. Regarding statistics, we assume that in each node we compute the mean value; in each T′, we present only the mean values that are not known from T. Also, in the mean value computations, the values that are reused from T are highlighted in yellow. All reconstruction details and the computational analysis for each case are included in Appendix D.

Figure 7: Adaptive HETree construction examples

The User Modifies the Tree Degree

Regarding the modification of the degree parameter, we distinguish the following cases:

The user increases the tree degree. We have that d′ > d; based on the d′ value, we distinguish the following cases:

d′ = dᵏ, for an integer k ≥ 2: Figure 7a presents T with degree d, and Figure 7d presents the reconstructed T′ with degree d′ = d². T′ results by simply removing the nodes with height 1 and connecting the nodes with height 2 directly to the leaves.

In general, in this case T′ results from T by simply removing tree levels from T (keeping every k-th level). Additionally, there is no need to compute any new statistics, since the statistics for all nodes of T′ remain the same as in T.
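A sketch of this case for d′ = d² (assuming a subtree of even height, as in Figure 7d): every other internal level is spliced out by re-linking grandchildren directly to their grandparent; no statistics are recomputed. The function name is illustrative.

    def square_degree(node):
        """Adapt a subtree from degree d to d*d by removing alternate levels."""
        if node.is_leaf or node.children[0].is_leaf:
            return node                    # no internal level left to splice out
        grandchildren = []
        for child in node.children:        # drop the children level ...
            for gc in child.children:      # ... and adopt the grandchildren
                gc.parent = node
                grandchildren.append(gc)
        node.children = grandchildren
        for gc in grandchildren:           # continue on the next kept level
            square_degree(gc)
        return node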

d′ = k·d, for an integer k ≥ 2 with d′ not a power of d: An example is presented in Figure 7c. In this case, the leaves of T (Figure 7a) remain leaves in T′, and all internal nodes up to the reconstruction root of T′ are constructed from scratch. As for the node statistics, we can compute the mean values for the nodes with height 1 by aggregating already computed mean values from T.

In general, except for the leaves, we construct all internal nodes from scratch. For the internal nodes of height 1, we compute their statistics by aggregating the statistics of the leaves, whereas for internal nodes of height greater than 1, we compute their statistics from scratch.

Any other d′ > d: In any other case where the user increases the tree degree, all nodes in T′ except for the leaves are constructed from scratch. In contrast with the previous case, the leaves' statistics from T cannot be reused and, thus, the statistics are recomputed for all internal nodes in T′.

The user decreases the tree degree. Here we have that d′ < d; based on the d′ value, we distinguish the following two cases:

d = d′ᵏ, for an integer k ≥ 2: Assume that now Figure 7d depicts T, while Figure 7a presents T′. We can observe that T′ contains all nodes of T, as well as a set of extra internal nodes. Hence, T′ results from T by constructing some new internal nodes between the existing levels.

Any other d′ < d: This case is handled in the same way as the last case above, where the user increases the tree degree.

The User Modifies the Number of Leaves

Regarding the modification of the number of leaves parameter, we distinguish the following cases:

The user increases the number of leaves. In this case we have that ℓ′ > ℓ; hence, each leaf of T is split into several leaves in T′, and the data objects contained in a leaf must be reallocated to the new leaves of T′. As a result, all nodes (both leaves and internal nodes) in T′ have different contents compared to the nodes in T and must be constructed from scratch, along with their statistics.

In this case, constructing T′ requires O(|D|), since the sorting phase is avoided (the data is already sorted).

The user decreases the number of leaves. In this case we have that ℓ′ < ℓ; based on the ℓ′ value, we distinguish the following three cases:

ℓ′ = ℓ/dᵏ, for an integer k ≥ 1: Consider that Figure 7a presents T. A reconstruction example of this case is presented in Figure 7b. In Figure 7b, we observe that each leaf in T′ results from merging dᵏ leaves of T. Then, T′ results from T by replacing the nodes of height k with the new leaves. Finally, the nodes of T with height less than k are not included in T′.

Therefore, in this case, the reconstructed tree is constructed by merging the leaves of the initial tree and removing the initial tree's internal nodes at or below the merge height. Also, we do not recompute the statistics of the new leaves, as these are inherited directly from the removed internal nodes at the merge height.

In the second case, as previously, the leaves of the reconstructed tree are constructed by merging leaves of the initial tree, and their statistics are computed by aggregating the statistics of the merged leaves. In this case, however, all internal nodes of the reconstructed tree have to be constructed from scratch.

In the third case, in contrast to the two previous cases, where each leaf of the reconstructed tree fully contains leaves of the initial tree, a leaf of the reconstructed tree may partially contain leaves of the initial tree. A new leaf fully contains an initial leaf when it contains all of the data objects belonging to that leaf; it partially contains an initial leaf when it contains only a subset of that leaf's data objects.

An example of this case is shown in Figure 7e, which depicts a tree reconstructed from the initial tree of Figure 7a. Here, a leaf of the reconstructed tree fully contains two leaves of the initial tree and partially contains a third one, since one of that leaf's values belongs to a different leaf of the reconstructed tree.

Due to this partial containment, we have to construct all leaves and internal nodes from scratch and recalculate their statistics. Still, the statistics of the fully contained leaves can be reused, by aggregating them with the individual values of the data objects taken from the partially contained leaves. For example, as we can see in Figure 7e, the mean value of a new leaf is computed by aggregating the mean values of its two fully contained leaves with the individual values obtained from the partially contained leaf.
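A minimal sketch of this mixed aggregation, assuming (count, mean) summaries for the fully contained leaves plus raw values taken from a partially contained leaf (the numbers are illustrative, not those of Figure 7e):

def combine(full_summaries, partial_values):
    """Mean of a new leaf from the (count, mean) pairs of its fully
    contained leaves plus individual values from partially contained ones."""
    n = sum(c for c, _ in full_summaries) + len(partial_values)
    total = sum(c * m for c, m in full_summaries) + sum(partial_values)
    return n, total / n

# two fully contained leaves plus two values from a partially contained one
print(combine([(3, 5.0), (3, 7.0)], [9.0, 11.0]))  # -> (8, 7.0)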

Figure 8: System architecture

4 The SynopsViz Tool

Based on the proposed hierarchical model, we have developed a web-based prototype called SynopsViz. The key features of SynopsViz are summarized as follows: (1) It supports the aforementioned hierarchical model for RDF data visualization, browsing and analysis. (2) It offers automatic on-the-fly hierarchy construction, as well as user-defined hierarchy construction based on user preferences. (3) It provides faceted browsing and filtering over classes and properties. (4) It integrates statistics with visualization; visualizations have been enriched with useful statistics and data information. (5) It offers several visualization techniques (e.g., timeline, chart, treemap). (6) It provides a large number of dataset statistics regarding the data level (e.g., number of sameAs triples), the schema level (e.g., most common classes/properties), and the structure level (e.g., entities with the largest in-degree). (7) It provides numerous metadata related to the dataset: licensing, provenance, linking, availability, understandability, etc. The latter can be considered useful for assessing data quality [ZRMP+13].

In the rest of this section, Section 4.1 describes the system architecture, Section 4.2 demonstrates the basic functionality of SynopsViz, and Section 4.3 provides technical information about the implementation.

4.1 System Architecture

The architecture of SynopsViz is presented in Figure 8. Our scenario involves three main parts: the Client UI, the SynopsViz back-end, and the Input data. The Client part corresponds to the system's front-end, offering several functionalities to end-users, such as hierarchical visual exploration and faceted search (see Section 4.2 for more details). SynopsViz consumes RDF data as Input data; optionally, OWL-RDF/S vocabularies/ontologies describing the input data can be loaded. Next, we describe the basic components of SynopsViz.

In the preprocessing phase, the Data and Schema Handler parses the input data and infers schema information (e.g., property domain(s)/range(s), class/property hierarchies, types of instances, types of properties). The Facet Generator generates class and property facets over the input data. The Statistics Generator computes several statistics regarding the schema, instances and graph structure of the input dataset. The Metadata Extractor collects dataset metadata. Note that the model construction does not require any preprocessing; it is performed online, according to user interaction.

During runtime, the following components are involved. The Hierarchy Specifier is responsible for managing the configuration parameters of our hierarchy model (e.g., the number of hierarchy levels, the number of nodes per level) and for providing this information to the Hierarchy Constructor. The Hierarchy Constructor implements our tree structure. Based on the selected facets and the hierarchy configuration, it determines the hierarchy of groups and the contained triples. The Statistics Processor computes statistics about the groups included in the hierarchy. The Visualization Module enables the interaction between the user and the back-end, supporting several operations (e.g., navigation, filtering, hierarchy specification) over the visualized data. Finally, the Hierarchical Model Module maintains the in-memory tree structure of our model and communicates with the Hierarchy Constructor for the model construction, the Hierarchy Specifier for the model customization, the Statistics Processor for the statistics computations, and the Visualization Module for the visual representation of the model.
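For illustration only, the kind of configuration the Hierarchy Specifier hands to the Hierarchy Constructor might look as follows (the field names are hypothetical, not the tool's actual API):

from dataclasses import dataclass

@dataclass
class HierarchyConfig:
    """Illustrative stand-in for the parameters the Hierarchy Specifier
    manages; field names are ours, not SynopsViz's."""
    num_levels: int = 4       # number of hierarchy levels (tree height)
    nodes_per_level: int = 3  # number of nodes per level (tree degree)

config = HierarchyConfig(num_levels=5, nodes_per_level=4)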

Figure 9: Web user interface

4.2 SynopsViz In-Use

In this section we outline the basic functionality of the SynopsViz prototype. Figure 9 presents the web user interface of the main window. The SynopsViz UI consists of the following main panels: the Facets panel, which presents and manages facets on classes and properties; the Input data control panel, which enables the user to import and manage input datasets; the Visualization panel, the main area where interactive charts and statistics are presented; and the Configuration panel, which handles visualization settings.

Initially, users select a dataset from a number of offered real-world LD datasets (e.g., DBpedia, Eurostat) or upload their own. Then, for the selected dataset, users can examine several of its metadata and explore several of its statistics.

Using the facets panel, users can navigate and filter data based on classes, numeric properties and date properties. In addition, the facets panel provides information about the classes and properties (e.g., number of instances, domain(s), range(s), IRI) through the UI.

Users can visually explore data based on property values. Particularly, area charts and timeline-based area charts are used to visualize the resources considering the user's selected properties. Class facets can also be used to filter the visualized data. Initially, the top level of the hierarchy is presented, providing an overview of the data organized into top-level groups; the user can interactively drill down (i.e., zoom in) and roll up (i.e., zoom out) over the groups of interest, down to the actual values of the input data (i.e., LD resources). At the same time, statistical information concerning the hierarchy groups as well as their contents (e.g., mean value, variance, sample data, range) is presented through the UI (Figure 10a). At the most detailed level (i.e., LD resources), several visualization types are offered, namely area, column, line, spline and areaspline charts (Figure 10b).

In addition, users can visually explore data through the class hierarchy. Selecting one or more classes, users can interactively navigate over the class hierarchy using treemaps (Figure 10c) or pie charts (Figure 10d). Property facets can also be used to filter the visualized data. In SynopsViz, the treemap visualization has been enriched with schema and statistical information. For each class, schema metadata (e.g., number of instances, subclasses, datatype/object properties) and statistical information (e.g., the cardinality of each property, min/max values for datatype properties) are provided.

Finally, users can interactively modify the hierarchy specifications. Particularly, they can increase or decrease the level of abstraction/detail presented by modifying both the number of hierarchy levels and the number of nodes per level.

A video presenting the basic functionality of our prototype is available at youtu.be/n2ctdH5PKA0. Also, a demonstration of the SynopsViz tool is presented in [bsp14].

4.3 Implementation

SynopsViz is implemented on top of several open-source tools and libraries. The back-end of our system is developed in Java; the Jena framework is used for RDF data handling and Jena TDB for disk-based RDF storage. The front-end prototype is developed using HTML and JavaScript. Regarding visualization libraries, we use Highcharts for the area, column, line, spline, areaspline and timeline-based charts, and Google Charts for the treemap and pie charts.

(a) Groups of numeric RDF data (Area chart)
(b) Numeric RDF data (Column chart)
(c) Class hierarchy (Treemap chart)
(d) Class hierarchy (Pie chart)
Figure 10: Numeric data & class hierarchy visualization examples

5 Experimental Analysis

In this section we present the evaluation of our approach. In Section 5.1, we present the dataset and the experimental setting. Then, in Section 5.2 we present the performance results and in Section 5.3 the user evaluation we performed.

5.1 Experimental Setting

In our evaluation, we use the well-known DBpedia 2014 LD dataset. Particularly, we use the Mapping-based Properties (cleaned) dataset, which contains high-quality data extracted from Wikipedia infoboxes. This dataset contains millions of triples and includes a large number of numeric and temporal properties of varying sizes, ranging from around 50 triples up to 762K triples for the largest property (see Figure 11).

Regarding the methods used in our evaluation, we consider our HETree hierarchical approaches, as well as a simple non-hierarchical visualization approach, referred to as FLAT. FLAT serves as a competitor to our hierarchical approaches. It provides single-level visualizations, rendering only the actual data objects; i.e., it is the same as the visualization provided by SynopsViz at the most detailed level. In more detail, the FLAT approach corresponds to a column chart in which the resources are sorted in ascending order based on their object values; the horizontal axis contains the resources' names (i.e., the triples' subjects), and the vertical axis corresponds to the objects' values. By hovering over a resource, a tooltip appears with the resource's name and object value.

Regarding the HETree approaches, the tree parameters (i.e., number of leaves, degree and height) are automatically computed following the approach described in Section 2.5. In our experiments, the lower and upper bounds on the number of objects rendered at the most detailed level have been set to values that, given the default Highcharts settings, are reasonable for our screen size and resolution.

Finally, our back-end system is hosted on a server with a quad-core CPU at 2GHz and 8GB of RAM, running Windows Server 2008. As a client, we used a laptop with an i5 CPU at 2.5GHz and 4GB of RAM, running Windows 7 and Firefox 38.0.1, over an ADSL2+ Internet connection. Additionally, in the user evaluation, the client is equipped with a 24″ (1920×1200) screen.

5.2 Performance Evaluation

In this section, we study the performance of the proposed model, as well as the behaviour of our tool, in terms of construction and response time, respectively. Section 5.2.1 describes the setting of our performance evaluation, and Section 5.2.2 presents the evaluation results.

Setup

In order to study the performance, a number of numeric and temporal properties from the employed dataset are visualized using the two hierarchical approaches (i.e., HETree-C/R), as well as the FLAT approach. We select one set of properties from each type; each set contains properties of varying sizes, starting from small properties with around 50 triples up to the largest ones.

In our experiment, for each of the three approaches, we measure the tool response time. Additionally, for the two hierarchical approaches we also measure the time required for the HETree construction.

Note that in the hierarchical approaches, through user interaction, the server sends to the browser only the data required for rendering the current visualization level (although the whole tree is constructed at the back-end). Hence, when a user requests a visualization we have the following workflow. Initially, our system constructs the tree. Then, the data regarding the top-level groups (i.e., the root node's children) are sent to the browser, which renders the result. Afterwards, based on user interactions (i.e., drill-down, roll-up), the server retrieves the required data from the tree and sends it to the browser. Thus, the tree is constructed the first time a visualization is requested for the given input dataset; for any further user navigation over the hierarchy, the response time does not include the construction time. Therefore, in our experiments, in the hierarchical approaches, as response time we measure the time required by our tool to provide the first response (i.e., render the top-level groups), which corresponds to the slowest response in our visual exploration scenario; a sketch of this workflow is given after the following measure definitions. We consider the following measures in our experiments:

Construction Time: the time required to build the HETree structure. This time includes (1) the time for sorting the triples; (2) the time for building the tree; and (3) the time for the statistics computations.

Response Time: the time required to render the charts, starting from the time the client sends the request. This time includes: (1) the time required by the server to compute and build the response; in the hierarchical approaches, this corresponds to the Construction Time plus the time required by the server to build the JSON object sent to the client, while in the FLAT approach it corresponds to the time spent sorting the triples plus the time for the JSON construction; (2) the time spent in client-server communication; and (3) the time required by the visualization library to render the charts in the browser.
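The following self-contained Python sketch mirrors the workflow above under simplifying assumptions: a toy HETree-C-style builder over plain numeric values (all names are ours, not SynopsViz's actual API), a cache so the tree is built only on the first request, and a JSON response containing only the top-level groups.

import json
from statistics import mean

class Node:
    def __init__(self, values=None, children=None):
        self.children = children or []
        self.values = values if values is not None else \
            [v for c in self.children for v in c.values]
        self.count, self.mean = len(self.values), mean(self.values)

def build_hetree_c(values, num_leaves, degree):
    values = sorted(values)                        # sorting phase
    size = -(-len(values) // num_leaves)           # ceil: objects per leaf
    level = [Node(values=values[i:i + size])
             for i in range(0, len(values), size)]
    while len(level) > 1:                          # group `degree` nodes per parent
        level = [Node(children=level[i:i + degree])
                 for i in range(0, len(level), degree)]
    return level[0]

_cache = {}

def first_response(prop, values):
    """Build (only on the first request) and cache the tree; serialize
    just the top-level groups for the browser."""
    root = _cache.get(prop)
    if root is None:
        root = _cache[prop] = build_hetree_c(values, 9, 3)
    groups = [{"min": min(c.values), "max": max(c.values),
               "count": c.count, "mean": c.mean} for c in root.children]
    return json.dumps(groups)

print(first_response("rankingWins", list(range(27))))

In this sketch, the 9 leaves and degree 3 match the smallest property of Table 4 below.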

Property (#Triples) | Tree Characteristics: #Leaves, Degree, Height, #Nodes | HETree-C: Construction Time (msec), Response Time (msec) | HETree-R: Construction Time (msec), Response Time (msec) | FLAT: Response Time (msec)
Numeric Properties
  rankingWins 9 3 2 13 5 324 1 323 415
  distanceToBelfast 9 3 2 13 7 337 4 329 419
  waistSize 16 4 2 21 10 346 9 336 440
  fileSize 27 3 3 40 18 347 16 345 575
  hsvCoordinateValue 81 3 4 121 74 403 50 383 980
  lineLength 81 3 4 121 77 409 55 391 1,463
  powerOutput 243 3 5 364 234 560 217 540 2,583
  width 729 3 6 1,093 506 830 467 799 6,135
  numberOfPages 729 3 6 1,093 2,888 3,219 2,403 2,722 12,669
  inseeCode 2,187 3 7 3,280 4,632 4,962 4,105 4,436 19,119
  areaWater 2,187 3 7 3,280 4,945 5,134 5,274 5,457 29,538
  populationDensity 2,187 3 7 3,280 6,803 7,127 6,080 6,404 44,262
  areaTotal 6,561 3 8 9,841 16,158 16,482 13,298 13,627 219,018
  populationTotal 19,683 3 9 29,524 31,141 31,473 25,866 26,196 1,523,675
  lat 19,683 3 9 29,524 73,528 73,862 71,784 72,106 –
Temporal Properties
  retired 9 3 2 13 8 330 4 327 425
  endDate 27 3 3 40 17 339 16 339 468
  lastAirDate 64 4 3 85 34 359 30 359 853
  buildingStartDate 81 3 4 121 73 406 53 384 1,103
  latestReleaseDate 243 3 5 364 162 496 146 480 1,804
  orderDate 243 3 5 364 210 542 195 523 2,011
  decommissioningDate 243 3 5 364 405 735 383 717 3,423
  shipLaunch 729 3 6 1,093 1,772 2,094 1,595 1,919 6,935
  completionDate 729 3 6 1,093 1,987 2,311 1,793 2,121 7,814
  foundingDate 729 3 6 1,093 2,745 3,069 2,583 2,905 8,699
  added 2,187 3 7 3,280 5,912 5,943 6,244 6,265 33,846
  activeYearsStartDate 6,561 3 8 9,841 10,368 10,702 8,952 9,282 107,587
  releaseDate 6,561 3 8 9,841 19,122 19,451 16,526 16,856 950,545
  deathDate 19,683 3 9 29,524 32,990 33,313 27,936 28,271 –
  birthDate 59,049 3 10 88,573 85,797 86,120 83,982 84,314 –
Table 4: Performance Results for Numeric & Temporal Properties

Results

Table 4 presents the evaluation results for the numeric (upper half) and the temporal properties (lower half). The properties are sorted in ascending order of their number of triples. For each property, the table contains the number of triples, the characteristics of the constructed HETree structure (i.e., number of leaves, degree, height, and number of nodes), as well as the construction and the response time for each approach. The reported time measurements are averages over repeated executions.

Regarding the comparison between HETree and FLAT, the FLAT approach cannot provide results for the largest properties, indicated in the last rows of both the numeric and the temporal properties with "–" in the FLAT response time. For the rest of the properties, we can observe that the HETree approaches clearly outperform FLAT in all cases, even for the smallest property (i.e., rankingWins). As the size of the properties increases, the difference between the HETree approaches and FLAT increases as well. In more detail, for the large properties (i.e., the numeric properties larger than populationDensity (12th row), and the temporal properties larger than added (11th row)), the HETree approaches outperform FLAT by one order of magnitude.

Regarding the time required for the construction of the HETree structure, from Table 4 we can observe the following. The performance of the two HETree structures is very close for most of the examined properties, with HETree-R performing slightly better than HETree-C (especially for the relatively small numeric properties). Furthermore, we can observe that the response time follows a trend similar to the construction time. This is expected, since the communication cost, as well as the time required for constructing and rendering the JSON object, are almost the same in all cases.

Regarding the comparison between the construction and the response time in the HETree approaches, from Table 4 we can observe the following. For the smaller properties (i.e., the numeric properties smaller than width (8th row), and the temporal properties smaller than decommissioningDate (7th row)), the response time is dominated by the communication cost and the time required for the JSON construction and rendering. For a property with a small number of triples (i.e., waistSize), only about 3% of the response time is spent on constructing the HETree. For a property with a larger number of triples (i.e., buildingStartDate), about 18% of the time is spent on constructing the HETree. Finally, for the largest property for which the time spent on communication, JSON construction and rendering still exceeds the construction time (i.e., powerOutput), about 42% of the time is spent on constructing the HETree.

(a) All Properties (50 to 762K triples)
(b) Small Properties (50 to 20K triples)
Figure 11: Response Time w.r.t. the number of triples

Figure 11 summarizes the results of Table 4, presenting the response time of all approaches w.r.t. the number of triples. Particularly, Figure 11a includes all property sizes (i.e., 50 to 762K triples). Further, in order to allow a more precise observation over the small property sizes, for which the difference between FLAT and the HETree approaches is smaller, Figure 11b reports separately the properties with fewer than 20K triples. Once again, we observe that HETree-R performs slightly better than HETree-C. Additionally, from Figure 11b we can see that for the smallest properties the performance of the two HETree approaches is almost identical. We can also observe the significant difference between FLAT and the HETree approaches.

Although our method clearly outperforms the non-hierarchical method, as we can observe from the above results, constructing the whole hierarchy up front cannot provide an efficient solution for very large datasets. As discussed in Section 3.2, efficient exploration over large datasets requires incremental hierarchy construction. In the incremental exploration scenario, the number of hierarchy nodes that have to be processed and constructed is significantly smaller compared to the non-incremental one.

For example, adopting a non-incremental construction for populationTotal, approximately 29.5K nodes must be constructed up front (along with their statistics). On the other hand, with the incremental approach (as analysed in Section 3.2), at the beginning of each exploration scenario only the initial nodes are constructed. The initial nodes are the nodes initially presented, as well as the nodes potentially reached by the user's first operation.

In the RES scenario, the initial nodes are the leaf of interest (one node) and its sibling leaves. In the RAN scenario, the initial nodes are the nodes of interest, their children, and their parent node along with its siblings. Finally, in the BSC scenario, the initial nodes are the root node (one node) and its children. In all three scenarios, the number of initially constructed nodes is bounded by small multiples of the tree degree. Therefore, in the populationTotal case, only a handful of nodes are initially constructed in each of the RES, RAN, and BSC scenarios, instead of 29.5K.

5.3 User Study

In this section we present the user evaluation of our tool, where we have employed three approaches: the two hierarchical and the FLAT. Section 5.3.1 describes the user tasks, Section 5.3.2 outlines the evaluation procedure and setup, Section 5.3.3 summarizes the evaluation results, and Section 5.3.4 discusses issues related to the evaluation process.

Tasks

In this section we describe the different types of tasks that are used in the user evaluation process.

Type 1 [Find resources with a specific value]: This type of task requests the resources having a given value as object. For this task type, we define task T1 by selecting a value that corresponds to several resources. Given this task, the participants are asked to provide the number of resources that pertain to this value. In order to solve this task, the participants first have to find a resource with the given value and then check which of the nearby resources have the same value.

Type 2 [Find resources in a range of values]: This type of task requests the resources having values within a given range. We define two tasks of this type by selecting different combinations of range bounds, such that tasks considering different numbers of resources are defined. The first task, T2.1, considers a relatively small set of resources, whereas the second task, T2.2, considers a relatively larger set. Given these tasks, the participants are asked to provide the number of resources included in the given range. A task of this type can be solved by first finding a resource with a value inside the given range, and then exploring the nearby resources in order to identify all resources within the range.

Type 3 [Compare distributions]: This type of task asks the participant to identify whether more resources appear above or below a given value. For this type, we define task T3 by selecting a value near the median. Given this task, the participants are asked to provide the number of resources appearing either above or below the given value. Answering this task requires the participants to locate the given value and determine the number of resources appearing on either side of it.

Table 5: Average Task Completion Time (sec)
         Small Property                    Large Property
         FLAT  HETree-C  HETree-R         FLAT  HETree-C  HETree-R
T1
T2.1
T2.2
T3

Table 6: Error Rate (%)
         Small Property                    Large Property
         FLAT  HETree-C  HETree-R         FLAT  HETree-C  HETree-R
T1
T2.1
T2.2
T3