Query Evaluation and Optimization in the Semantic Web
Abstract
We address the problem of answering Web ontology queries efficiently. An ontology is formalized as a Deductive Ontology Base (DOB), a deductive database that comprises the ontology’s inference axioms and facts. A cost-based query optimization technique for DOB is presented. A hybrid cost model is proposed to estimate the cost and cardinality of basic and inferred facts. Cardinality and cost of inferred facts are estimated using an adaptive sampling technique, while techniques of traditional relational cost models are used for estimating the cost of basic facts and conjunctive ontology queries. Finally, we implement a dynamic-programming optimization algorithm to identify query evaluation plans that minimize the number of intermediate inferred facts. We modeled a subset of the Web ontology language OWL Lite as a DOB, and performed an experimental study to analyze the predictive capacity of our cost model and the benefits of the query optimization technique. Our study has been conducted over synthetic and real-world OWL ontologies, and shows that the techniques are accurate and improve query performance.
To appear in Theory and Practice of Logic Programming (TPLP)
Edna Ruckhaus
and Eduardo Ruiz
and María-Esther Vidal
Computer Science Department
Universidad Simón Bolívar
1 Introduction
Ontology systems usually provide reasoning and retrieval services that identify the basic facts that satisfy a requirement, and derive implicit knowledge using the ontology’s inference axioms. In the context of the Semantic Web, the number of inferred facts can be extremely large. On the one hand, the amount of basic ontology facts (domain concepts and Web source annotations) can be considerable; on the other hand, Open World reasoning in Web ontologies may yield a large space of choices. Therefore, efficient evaluation strategies are needed in inference engines for Web ontologies.
In our approach, ontologies are formalized as a deductive database called a Deductive Ontology Base (DOB). The extensional database comprises all the ontology language’s statements that represent the explicit ontology knowledge. The intensional database corresponds to the set of deductive rules which define the semantics of the ontology language. We provide a cost-based optimization technique for Web ontologies represented as a DOB.
Traditional query optimization techniques for deductive database systems include join-ordering strategies, and techniques that combine a bottom-up evaluation with top-down propagation of query variable bindings in the spirit of the Magic-Sets algorithm [Ramakrishnan and Ullman (1993)]. Join-ordering strategies may be heuristic-based or cost-based; some cost-based approaches depend on the estimation of the join selectivity; others rely on the fan-out of a literal [Staudt et al. (1999)]. Cost-based query optimization has been successfully used by relational database management systems; however, these optimizers are not able to estimate the cost or cardinality of data that do not exist a priori, which is the case of intensional predicates in a DOB.
We propose a hybrid cost model that combines two techniques for cardinality and cost estimation: (1) the sampling technique proposed in [Lipton and Naughton (1990), Lipton et al. (1990)] is applied for the estimation of the evaluation cost and cardinality of intensional predicates, and (2) a cost model à la System R is used for the estimation of the cost and cardinality of extensional predicates and the cost of conjunctive queries.
Three evaluation strategies are considered for "joining" predicates in conjunctive queries. They are based on the Nested-Loop, Block Nested-Loop, and Hash Join operators of relational databases [Ramakrishnan and Gehrke (2003)]. To identify a good evaluation plan, we provide a dynamic-programming optimization algorithm that orders the subgoals in a query, considering estimates of each subgoal’s evaluation cost.
We modeled a subset of the Web ontology language OWL Lite [McGuinness and Harmelen (2004)] as a DOB, and performed experiments to study the predictive capacity of the cost model and the benefits of the ontology query optimization techniques. The study has been conducted over synthetic and real-world OWL ontologies. Preliminary results show that the cost-model estimates are quite accurate and that optimized queries are significantly less expensive than non-optimized ones.
Our current formalism does not represent the OWL built-in constructor ComplementOf. We stress that in practice this is not a severe limitation. For example, this operator is not used in any of the three real-world ontologies that we have studied in our experiments; and in the survey reported in [Wang (2006)], only 21 ontologies out of 688 contain this constructor.
Our work differs from other systems in the Semantic Web that combine a Description Logics (DL) reasoner with a relational DBMS in order to solve the scalability problems for reasoning with individuals [Calvanese et al. (2005), Haarslev and Moller (2004), Horrocks and Turi (2005), Pan and Hefflin (2003)]. Clearly, all of these systems use the query optimization component embedded in the relational DBMS; however, they do not develop cost-based optimization for the implicit knowledge, that is, there is no estimation of the cost of data not known a priori.
Other systems use Logic Programming (LP) to reason on large-scale ontologies. This is the case of the projects described in [Grosof et al. (2003), Hustadt and Motik (2005), Motik et al. (2003)]. In Description Logic Programs (DLP) [Grosof et al. (2003)], the expressive intersection between DL and LP without function symbols is defined. DL queries are reduced to LP queries and efficient LP algorithms are explored. The project described in [Hustadt and Motik (2005), Motik et al. (2003)] reduces a knowledge base to a Disjunctive Datalog program. Both projects apply Magic-Sets rewriting techniques, but to the best of our knowledge, no cost-based optimization techniques have been developed. The OWL Lite species of the OWL language proposed in [Bruijn et al. (2004)] is based on the DLP project; it corresponds to the portion of the OWL Lite language that can be translated to Datalog. All of these systems develop LP reasoning with individuals, whereas in the DOB model we develop Datalog reasoning with both domain concepts and individuals.
In [Eiter et al. (2006)], an efficient bottom-up evaluation strategy for HEX-programs based on the theory of splitting sets is described. In the context of the Semantic Web, these non-monotonic logic programs contain higher-order atoms and external atoms that may represent RDF and OWL knowledge. However, their approach does not include determining the best evaluation strategy according to a certain cost metric.
In the next section we describe our DOB formalism. Following this, we describe the DOBS system architecture. Then, we model a subset of OWL Lite as a DOB and present a motivating example. Next, we develop our hybrid cost model and query optimization algorithm. We describe our experimental study and, finally, we point out our conclusions and future work.
2 The Deductive Ontology Base (DOB)
In general, an ontology knowledge base can be defined as:
Definition 1 (Ontology Knowledge Base)
An ontology knowledge base O is a pair O = ⟨F, A⟩, where F is a set of ontology facts that represent the explicit ontology structure (domain) and source annotations (individuals), and A is a set of axioms that allow the inference of new ontology facts regarding both domain and individuals.
We will model O as a deductive database which we call a Deductive Ontology Base (DOB). A DOB is composed of an Extensional Ontology Base (EOB) and an Intensional Ontology Base (IOB). Formally, a DOB is defined as:
Definition 2 (DOB)
Given an ontology knowledge base O = ⟨F, A⟩, a DOB D is a deductive database composed of a set of built-in EOB ground predicates representing F and a set of built-in IOB predicates representing A, i.e., rules that define the semantics of the EOB built-in predicates.
The IOB predicate and DOB query definitions follow the Datalog language formalism [Abiteboul et al. (1995)]. Next, we provide the definitions related to query answering for DOBs.
Definition 3 (Valid Instantiation)
Given a Deductive Ontology Base D, a set C of constants in D, a set V of variables, a rule r, and an interpretation M of D that corresponds to its Minimal Perfect Model [Abiteboul et al. (1995)], a valuation θ¹ is a valid instantiation of r if and only if θ(r) evaluates to true in M.
¹Given a set of variables V and a set of constants C, a mapping or valuation θ is a function θ : V → C.
Definition 4 (Intermediate Inferred Facts)
Given a Deductive Ontology Base D and a query Q, a proof tree for Q wrt D is defined as follows:
- Each node in the tree is labeled by a predicate in D.
- Each leaf in the tree is labeled by a predicate in D’s EOB.
- The root of the tree is labeled by Q.
- For each internal node v, including the root, if v is labeled by a predicate p defined by a rule r whose head is p and whose body is the conjunction of the predicates q1,…,qn, then, for each valid instantiation θ of r, the node v has a subtree whose root is θ(p) and whose children are respectively labeled θ(q1),…,θ(qn).
The valuations needed to define all the valid instantiations in the proof tree correspond to the Intermediate Inferred Facts of Q.
The number of intermediate inferred facts measures the evaluation cost of the query Q. Additionally, since the valid instantiations of Q in the proof tree correspond to the answers of the query, the cardinality of Q corresponds to the number of such instantiations.
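To make the cost metric concrete, the following is an illustrative sketch (not the paper's engine) that counts intermediate inferred facts while computing the transitive closure of a subclass relation by naive bottom-up iteration. The rule mirrors the OWL Lite DOB rule areSubClasses(C1,C2) :- isSubClass(C1,C3), areSubClasses(C3,C2); the EOB facts below are invented for illustration.

```python
def infer_subclasses(is_subclass):
    derived = set(is_subclass)       # base case: every EOB isSubClass fact
    inferred = 0                     # cost metric: facts inferred by the rule
    changed = True
    while changed:
        changed = False
        # one application of the recursive rule over the facts derived so far
        new = {(x, y) for (x, z) in is_subclass
                      for (z2, y) in derived if z == z2}
        fresh = new - derived
        inferred += len(fresh)
        if fresh:
            derived |= fresh
            changed = True
    return derived, inferred

eob = {("suv", "car"), ("car", "vehicle"), ("vehicle", "thing")}
closure, cost = infer_subclasses(eob)   # 6 facts in the closure, 3 inferred
```

The counter `inferred` plays the role of the evaluation cost: the fewer facts a plan forces the engine to derive, the cheaper the query.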
Note that the sets of EOB and IOB built-in predicates of a DOB define an ontology framework, so our model is not tied to any particular ontology language. To illustrate the use of our approach, we focus on OWL Lite ontologies.
3 The DOBS System’s Architecture
DOBS is a system that allows an agent to pose efficient conjunctive queries against a set of ontologies. The system’s architecture can be seen in Figure 1.
A subset of a given OWL ontology is translated into a DOB using an OWL Lite to DOB translator. EOB and IOB predicates are stored as a deductive database. Next, an analyzer generates the ontology’s statistics: for each EOB predicate, the analyzer computes the number of facts or valid instantiations in the DOB (cardinality), and the number of different values for each of its arguments (nKeys); for each IOB predicate, an adaptive sampling algorithm [Lipton and Naughton (1990)] is applied to compute cardinality and cost estimates.
When an agent formulates a conjunctive query, the DOBS system’s optimizer generates an efficient query evaluation plan. The dynamic-programming optimizer is based on a hybrid cost model: it uses the ontology’s EOB and IOB statistics, and estimates the cost of a query according to the different evaluation strategies implemented. Finally, an execution engine evaluates the query plan and produces the query answer.
4 OWL Lite DOB
An OWL Lite ontology contains: (1) a set of axioms that provides information about classes and properties, and (2) a set of facts that represents individuals in the ontology, the classes they belong to, and the properties they participate in.
Restrictions allow the construction of class definitions by restricting the values of their properties and their cardinality. Classes may also be defined through the intersection of other classes. Object properties represent binary relationships between individuals; datatype properties correspond to relationships between individuals and data values belonging to primitive datatypes.
The subset of OWL Lite represented as a DOB does not include domain and range class intersection. Also, primitive datatypes are not handled; therefore, we do not represent ranges for Datatype properties².
²EquivalentClasses, EquivalentProperties, and allDifferent axioms, and the cardinality restriction, are not represented because they are syntactic sugar for other language constructs.
4.1 OWL Lite DOB Syntax
Our formalism, DOB, provides a set of EOB built-in predicates that represents all the axioms and restrictions of an OWL Lite subset.
EOB predicates are ground, i.e., no variables are allowed as arguments. A set of IOB built-in predicates represents the semantics of the EOB predicates. We have followed the OWL Web Ontology Language Overview presented in [McGuinness and Harmelen (2004)].
Table 1 illustrates the EOB and IOB built-in predicates for an OWL Lite subset³. Note that some predicates refer to domain concepts (e.g., isClass, areClasses), and some to instance concepts (e.g., isIndividual, areIndividuals).
³We assume that the class owl:Thing is the default value for the domain and range of a property.
4.2 OWL Lite DOB Semantics
A model-theoretic semantics for an OWL Lite (subset) DOB is as follows:
Definition 5 (Interpretation)
An interpretation I consists of:
- A non-empty interpretation domain corresponding to the union of the sets of valid URIs of ontologies, classes, object and datatype properties, and individuals. These sets are pairwise disjoint.
- A set of interpretations of the EOB and IOB built-in predicates in Table 1.
- An interpretation function which maps each n-ary built-in predicate to an n-ary relation over the interpretation domain.
Definition 6 (Satisfiability)
Given an OWL Lite DOB D, an interpretation I, and a predicate p in D, I satisfies p iff:
- p is an EOB predicate and the tuple of its arguments belongs to the interpretation of p in I, or
- p is an IOB predicate defined by a rule, and whenever I satisfies each predicate in the body of the rule, I also satisfies the predicate in its head.
Definition 7 (Model)
Given an OWL Lite DOB D and an interpretation I, I is a model of D iff for every predicate p in D, I satisfies p.
4.3 Translation of OWL Lite to OWL Lite DOB
A definition of a translation map from OWL Lite to OWL Lite DOB is the following:
Definition 8 (Translation)
Given an OWL Lite theory O and an OWL Lite DOB theory D, an OWL Lite to DOB Translation is a function T : O → D.
Given an OWL Lite ontology O, the corresponding OWL Lite DOB ontology T(O) is defined as follows:
- (Base Case) If x is an axiom or fact belonging to the sets of axioms or facts of O, then an EOB predicate is defined according to the EOB mappings in Table 2.
- If x is an OWL Lite inference rule, then an IOB predicate is defined according to the IOB mappings in Table 3.
The translation ensures that the following theorem holds:
Theorem 1
Let O and D be OWL Lite and OWL Lite DOB theories, respectively, and let T be an OWL Lite to DOB Translation such that T(O) = D; then, for every axiom or fact x of O, O ⊨ x if and only if D ⊨ T(x).
5 A Motivating Example
Consider a 'cars and dealers' domain ontology carsOnt and Web source ontologies source1 and source2. Source source1 publishes information about all types of vehicles and dealers, whereas source2 is specialized in SUVs.
The OWL Lite ontologies can be seen in Table 4.
A portion of the example’s EOB can be seen in Table 5.
To illustrate a rule evaluation, we will take a query q that asks for the Web sources that publish information about 'traction':
q(O) :- areClasses(C,O), isDProperty(traction,C).
The answer to this query corresponds to all the ontologies with classes characterized by the property traction, i.e., ontologies source1, source2 and carsOnt.
If we invert the ordering of the two predicates in q, we obtain an equivalent query q':
q'(O) :- isDProperty(traction,C), areClasses(C,O).
The cost, or total number of inferred facts, for q is larger than the cost for q'. In q, the number of instantiations or cardinality of the first intensional predicate areClasses(C,O) is twelve, four for each ontology, as source1 and source2 inherit the classes in carsOnt. The cost of inferring these facts depends on the cost of evaluating the areClasses rule. In q', for the first subgoal isDProperty(traction,C), we have one instantiation: isDProperty(traction,suv). Again, the cost of inferring this fact depends on the cost of the isDProperty predicate.
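The effect of the subgoal ordering can be sketched with a toy nested-loop evaluation; the relation contents below are invented for illustration (they do not reproduce the twelve instantiations of the example), and the counter simply adds the facts produced by the first subgoal to the join results.

```python
# Invented toy relations for the 'cars and dealers' example.
are_classes = [("car", "carsOnt"), ("dealer", "carsOnt"), ("suv", "carsOnt"),
               ("car", "source1"), ("dealer", "source1"), ("suv", "source1"),
               ("suv", "source2")]
is_dproperty = [("traction", "suv")]

def ordering_cost(first, second, key1, key2):
    """Facts produced by the first subgoal plus the join results."""
    matches = [(a, b) for a in first for b in second if a[key1] == b[key2]]
    return len(first) + len(matches), matches

# q : areClasses(C,O), isDProperty(traction,C) -- join on C
cost_q, _ = ordering_cost(are_classes, is_dproperty, 0, 1)
# q': isDProperty(traction,C), areClasses(C,O) -- join on C
cost_qp, matches = ordering_cost(is_dproperty, are_classes, 1, 0)
answers = {b[1] for (_, b) in matches}   # the ontologies O in the answer
```

Both orderings return the same answer set, but starting with the more selective subgoal produces far fewer intermediate facts, which is exactly what the optimizer exploits.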
Note that statistics on the size and argument values of the EOB isDProperty predicate can be computed, whereas statistics for the IOB areClasses predicate have to be estimated, as its data are not known a priori. Once the cost of each query predicate is determined, we may apply a cost-based join-ordering optimization strategy.
6 DOB Hybrid Cost Model
The process of answering a query relies on inferring facts from the predicates in the DOB. Our cost metric is focused on the number of intermediate facts that need to be inferred in order to answer the query. The objective is to find an order of the predicates in the body of the query, such that the number of intermediate inferred facts is reduced. We will apply a join-ordering optimization strategy à la System R using Datalog-relational equivalences [Abiteboul et al. (1995)]. To estimate the cardinality and evaluation cost of the intensional predicates, we have applied an adaptive sampling technique. Thus, we propose a hybrid cost model which combines adaptive sampling and traditional relational cost models.
6.1 Adaptive Sampling Technique
We have developed a sampling technique that is based on the adaptive sampling method proposed by Lipton, Naughton, and Schneider [Lipton and Naughton (1990), Lipton et al. (1990)]. This technique assumes that there is a population P of all the different valid instantiations of a predicate p, and that P is divided into partitions according to the possible instantiations of one or more arguments of p. Each element in P is related to its evaluation cost and cardinality, and the population is characterized by the statistics mean and variance.
The objective of the sampling is to identify a sample S of the population P, such that the mean and variance of the cardinality (resp. evaluation cost) of S are valid to within a predetermined accuracy and confidence level.
To estimate the mean of the cardinality (resp. cost) of P to within a given relative error with a given confidence level, the sampling method assumes an urn model. The urn has one ball per partition, and samples are repeatedly taken from it until the sum s of the cardinalities (resp. costs) of the m samples taken so far is greater than a stopping threshold; the estimated mean of the cardinality (resp. cost) is then s/m. The threshold depends on constants associated with the relative error and the confidence level, on the cardinality (resp. cost) variance and mean of P, and on an upper bound b on the partition cardinality (resp. cost). Since the statistics of P are unknown, the upper bound b is replaced by an estimate.
To approximate b for the cost and cardinality estimates, we apply Double Sampling [Ling and Sun (1992)]. In the first stage we randomly evaluate n0 samples and take the maximum cardinality (resp. cost) value among them as the estimate of b.
It has been shown that only a few samples are necessary for the distribution of the sum to begin to look normal. Thus, the stopping threshold may be improved by the central limit theorem [Lipton et al. (1990)]. This improvement allows us to achieve accurate estimates with lower sampling bounds.
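The urn-model loop above can be sketched as follows. The stopping threshold is left abstract: in Lipton and Naughton's analysis it is derived from the error, the confidence level, and the bound b obtained by Double Sampling, whereas all concrete numbers below are purely illustrative.

```python
import random

def adaptive_mean(partition_sizes, threshold, rng):
    """Draw partitions (balls) with replacement until the accumulated
    cardinality (resp. cost) s exceeds the threshold; estimate mean = s/m."""
    s, m = 0, 0
    while s <= threshold:
        s += rng.choice(partition_sizes)   # size of the sampled partition
        m += 1
    return s / m

# toy population: the partition sizes of some intensional predicate
sizes = [2, 4, 6, 8, 10]                   # true mean is 6
est = adaptive_mean(sizes, threshold=60, rng=random.Random(7))
```

Because the loop stops on the accumulated sum rather than on a fixed sample count, skewed populations automatically draw more samples, which is the key property of the adaptive scheme.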
6.1.1 Estimating cardinality.
Given an intensional predicate p, the cardinality of p corresponds to the number of valid instantiations of p (Definition 3). In our previous example, the number of ontology values obtained in the answer of the query q is estimated using this metric.
To estimate the cardinality of p, we execute the adaptive sampling algorithm explained before, selecting any argument of p and partitioning P according to the chosen argument. The cardinality estimate is n × (s/m), where n is the number of partitions, i.e., the number of different instantiations of the chosen argument.
Note that once the cardinality of the non-instantiated predicate p is estimated, we can estimate the cardinality of an instantiated predicate by using the selectivity value(s) of the instantiated argument(s).
6.1.2 Estimating cost.
The cost of p measures the number of intermediate inferred facts needed to evaluate it (Definition 4). For instance, to estimate the cost of a binary predicate p, we consider the different instantiation patterns that the predicate can have, i.e., we independently estimate the cost for p(b,f), p(f,b), and p(b,b), where b and f indicate that the argument is bound and free, respectively.
The computation of several cost estimates is necessary because in Datalog top-down evaluation [Abiteboul et al. (1995)], the cost of an instantiated intensional predicate cannot be accurately estimated from the cost of a non-instantiated predicate (using selectivity values). Instantiated arguments will propagate into the IOB rule’s body through sideways passing, and the cost varies according to the binding pattern. For example, the cost of areSubClasses(C1,C2) with C1 bound may be smaller than its cost with both arguments free, i.e., the bound argument C1 "pushes" instantiations into the definition of the rule:
areSubClasses(C1,C2) :- isSubClass(C1,C3), areSubClasses(C3,C2).
making its body predicates more selective.
For p(b,f), p(f,b), and p(b,b), we partition P according to the bound arguments. In these cases we are estimating the cost of a single partition; therefore, the cost estimate is s/m.
Finally, to estimate the cost of p(f,f), we choose an argument of p and partition P according to the chosen argument. To reduce the cost of computing the estimate, we choose the most selective argument. The cost estimate is n × (s/m).
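A hedged sketch connecting the sampled per-partition estimates to these binding patterns; `sample_mean` stands in for the adaptive sampling estimator, and all numbers are invented for illustration.

```python
def estimated_cost(pattern, n_partitions, sample_mean):
    if "b" in pattern:
        # a bound argument selects a single partition, so the estimate is
        # the sampled mean cost s/m of one partition
        return sample_mean(pattern)
    # p(f,f): n partitions times the mean cost per partition
    return n_partitions * sample_mean(pattern)

# pretend the sampling stage reported these per-partition mean costs
means = {"ff": 3.5, "bf": 2.0, "fb": 1.5, "bb": 0.25}
cost_ff = estimated_cost("ff", 12, means.get)   # free pattern: 12 * 3.5
cost_bf = estimated_cost("bf", 12, means.get)   # one partition: 2.0
```

The separate entries per binding pattern reflect the point made above: a bound argument can make the rule body much more selective, so one sampled mean cannot serve all patterns.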
6.1.3 Determining the number of partitions .
For both cost and cardinality estimates, we need to determine the number of possible instantiations, n, of the chosen argument. This value depends on the semantics of the particular predicate. For instance, for an interpretation I, the number of possible instantiations of the first argument of areClasses is bounded by the number of valid class URIs, and that of its second argument by the number of valid ontology URIs; for an EOB predicate, n corresponds to its number of EOB facts. These cardinalities are precomputed offline, and we assume that the values are uniformly distributed.
6.2 System R Technique
To estimate the cardinality and cost of the conjunction of two or more predicates, we use the cost model proposed in System R. The cardinality of the conjunction of predicates p1,…,pn is the product of the cardinalities of the predicates, multiplied by a selectivity factor. The selectivity factor reflects the impact of the sideways-passing variables in reducing the cardinality of the result. This value is computed assuming that sideways-passing variables are independent and each is uniformly distributed [Selinger et al. (1979)]. For cost estimation, we consider three evaluation strategies:
- Nested-Loop Join: for each valid instantiation in p1, we retrieve the valid instantiations in p2 with a matching "join" argument value. The cost corresponds to the estimated cost of p1 plus, for each of its valid instantiations, the estimated cost of p2 with the "join" arguments instantiated, i.e., all the sideways-passing variables from p1 to p2 are bound in p2. These binding patterns were considered during the sampling-based estimation of the cost of p2.
- Block Nested-Loop Join: predicate p1 is evaluated into blocks of fixed size, and then each block is "joined" with p2.
- Hash Join: a hash table is built for each predicate according to its join argument. The valid instantiations of predicates p1 and p2 with the same hash key are joined together.
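The cost expressions for the three strategies follow the usual relational shapes; the sketch below uses assumed symbols (cost1/card1 for the first predicate, cost2_bound for the second predicate evaluated with its join arguments bound, cost2_free for its unrestricted evaluation, B for the block size) and is not claimed to be the paper's exact formulas.

```python
import math

def nested_loop_cost(cost1, card1, cost2_bound):
    # evaluate p1 once, then p2 once per valid instantiation of p1
    return cost1 + card1 * cost2_bound

def block_nested_loop_cost(cost1, card1, B, cost2_bound):
    # p2 is probed once per block of p1 instantiations
    return cost1 + math.ceil(card1 / B) * cost2_bound

def hash_join_cost(cost1, cost2_free):
    # each predicate is evaluated once and hashed on the join argument
    return cost1 + cost2_free

nl = nested_loop_cost(10, 5, 2)             # 10 + 5 * 2 = 20
bnl = block_nested_loop_cost(10, 5, 2, 2)   # 10 + 3 * 2 = 16
hj = hash_join_cost(10, 7)                  # 10 + 7     = 17
```

With these toy numbers, blocking and hashing both reduce the number of probes of the second predicate, which is why enlarging the optimizer's strategy space pays off in the experiments.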
Although the sampling technique is appropriate for estimating a single predicate, it may be inefficient for estimating the size of a conjunction of two or more predicates.
The sampling algorithm in [Lipton and Naughton (1990)] assumes that, for a conjunction of two predicates p1 and p2, if the size of p1 is n, the query is n-partitionable, i.e., for each valid instantiation in p1, the corresponding partition of p2 contains all the valid instantiations in p2 that "join" with it. Therefore, when the size of the first predicate in a query is small, the required sample size may be comparatively large. This problem extends to conjunctive queries with several subgoals, so when the number of intermediate results is small, the sampling time may be as large as the evaluation time.
6.3 Query Optimization
In Figure 6, we present the algorithm used to optimize the body of a query. The proposed optimization algorithm extends the System R dynamicprogramming algorithm by identifying orderings of the EOB and IOB predicates in a query. During each iteration of the algorithm, the best intermediate subplans are chosen based on cost and cardinality. In the last iteration, final plans are constructed and the best plan is selected in terms of the cost metric.
During each iteration i between 2 and n-1, different orderings of the predicates are analyzed. Two subplans are considered equivalent if and only if they are composed of the same predicates. A subplan P1 is better than a subplan P2 if and only if the cost and cardinality of P2 are greater than the cost and cardinality of P1, respectively. If the cost of P2 is greater than the cost of P1, but the cardinality of P1 is greater than the cardinality of P2, i.e., they are incomparable, then the equivalence class is annotated with the two subplans.
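The dynamic-programming search can be sketched as follows. The cost and cardinality numbers and the combination rules (nested-loop cost, independence for cardinality) are simplified stand-ins for the hybrid estimates, and, unlike the annotated equivalence classes described above, ties between incomparable plans are broken here by cost alone.

```python
def best_ordering(preds, cost, card):
    # plans[set of predicates] = (total cost, total cardinality, ordering)
    plans = {frozenset([p]): (cost[p], card[p], (p,)) for p in preds}
    for size in range(2, len(preds) + 1):
        # extend every (size-1)-subplan by one more predicate
        for subset, (c, k, order) in list(plans.items()):
            if len(subset) != size - 1:
                continue
            for p in preds:
                if p in subset:
                    continue
                key = frozenset(subset | {p})
                new_cost = c + k * cost[p]   # nested-loop style combination
                new_card = k * card[p]       # simplistic cardinality rule
                if key not in plans or new_cost < plans[key][0]:
                    plans[key] = (new_cost, new_card, order + (p,))
    return plans[frozenset(preds)]

# invented estimates echoing the motivating example
cost = {"areClasses": 12, "isDProperty": 1}
card = {"areClasses": 12, "isDProperty": 1}
total, _, order = best_ordering(list(cost), cost, card)
```

On these toy estimates the search keeps one best plan per predicate set and recovers the cheap ordering of the motivating example, placing the selective subgoal first.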
7 Experimental Results
An experimental study was conducted for synthetic and real-world ontologies. Experiments on synthetic ontologies were executed on a SunBlade 150 (650MHz) with 1GB RAM; experiments on real-world ontologies were executed on a SunFire V440 (1281MHz) with 16GB RAM. Our system was implemented in SWI-Prolog 5.6.1.
We have studied three real-world ontologies: Travel [Shell (2002)], EHR_RM [Protege staff (1999)], and GALEN [Open Clinical Organization (2001)].
Our cost metrics are the number of intermediate facts for synthetic and real-world ontologies, and the evaluation time for real-world ontologies. In our experiments, the sampling parameters e (the error), p (the confidence level), and n0 (the size of the sample for the first stage) were set to 0.2, 0.7, and 7, respectively. We developed two sets of experiments according to the evaluation strategies considered: (1) the Nested-Loop join evaluation strategy, and (2) the combination of Nested-Loop, Block Nested-Loop, and Hash join evaluation strategies. Our study consisted of the following:
- Cost Model Predictive Capability: In Figure 2a, we report the correlation between the estimated values and the actual cost for synthetic ontologies considering the Nested-Loop Join evaluation strategy. Synthetic ontologies were randomly generated following a uniform distribution. We generated ten ontology documents and three chain and star queries with three subgoals for each ontology; the cost of each ordering was estimated with our cost model, and each ordering was then evaluated against the ontology; this gives us a total of six hundred queries. The correlation is 0.92.
Figure 2: (a) Correlation of estimated cost to actual cost (log. scale), nested-loop join, synthetic ontologies; (b) correlation of estimated cost to actual cost (log. scale), nested-loop join, GALEN.
Figure 3: (a) #Pred. optimal ordering vs. #Pred. worst ordering, nested-loop join, synthetic ontologies; (b) #Pred. optimal ordering vs. #Pred. median ordering, nested-loop join, synthetic ontologies.
Figure 4: (a) #Pred. optimal ordering vs. #Pred. worst ordering, nested-loop join, EHR_RM; (b) #Pred. optimal ordering vs. #Pred. worst ordering, combination of evaluation strategies, EHR_RM.
In Figure 2b, we report the same correlation metric for the real-world ontology GALEN; the value is 0.62. In Table 7, we present correlation values for the real-world ontologies Travel and EHR_RM for our two sets of experiments: the accuracy of the Nested-Loop join cost model is similar to the accuracy of the cost model that considers the combination of the three evaluation strategies.
- Cost improvements: We also conducted experiments to study cost improvement using the optimizer. We evaluated all the orderings of each query, then we ran the optimizer and evaluated the optimized query. Figure 3a reports the ratio of the cost of the optimal ordering to the cost of the worst ordering, considering only nested-loop join, for queries against synthetic ontologies. For synthetic ontologies, this ratio is less than 10% for most of the queries. We also computed the proportion of the optimal ordering cost with respect to the median ordering cost. The results for synthetic ontologies show that the optimal ordering cost is less than 40% of the median for fifteen of twenty queries; this result can be observed in Figure 3b.
In Figure 4a, we report the ratio of the cost of the optimal ordering to the cost of the worst ordering considering only nested-loop join for EHR_RM. Additionally, Figure 4b reports the same metric considering the combination of the three evaluation strategies. We can observe that the ratio improves when the combination of the different strategies is considered: for nested-loop join the mean of this ratio is 0.10, whereas for the combination of strategies the mean is 0.07; this is because the optimizer searches a larger space of possibilities, increasing the chance of finding better query plans.
In general, we may state that the results show a significant improvement in the evaluation cost for the optimized queries with respect to the worst-case and median-case query orderings. This property holds for synthetic and real-world ontologies. However, for synthetic ontologies we notice that for star-shaped queries, the difference between the median cost and the optimal cost is very small; this indicates that the form of the query may influence the cost improvement achieved by the optimizer.
Finally, we would like to point out that we also studied the use of an adaptive sampling technique for the cost estimation of the conjunction of two or more predicates (instead of the System R cost model). Although the sampling technique gives a correlation result similar to that of the combination of sampling and the System R cost model, the time required to compute the cost estimate may be as large as the time needed to evaluate the query. In Figure 5, we can observe that the time difference is marginal.
8 Conclusions and Future Work
We have developed a cost model that combines System R and adaptive sampling techniques. Adaptive sampling is used to estimate data that do not exist a priori, i.e., the cardinality and cost of intensional rules in the DOB. The experimental results show that, in general, our proposed techniques produce a significant improvement in the evaluation cost of the optimized query.
Currently, we are developing a hybrid optimization mechanism that combines Magic Sets and our costbased technique; the idea is to first identify a good ordering, and then apply Magic Sets rewritings to reduce the program that evaluates the query. Initial experiments show that this combined solution outperforms the behavior of each individual technique.
We plan to apply similar optimization techniques for conjunctive queries to DL ontologies. Initially, we will work on ABox queries, extending the techniques proposed in [Sirin and Parsia (2006)]. In a next stage, we will consider mixed TBox and ABox conjunctive queries.
References
 Abiteboul et al. (1995) Abiteboul, S., Hull, R., and Vianu, V. 1995. Foundations of Databases. AddisonWesley.
 Bruijn et al. (2004) Bruijn, J., Polleres, A., and Fensel, D. 2004. OWL Lite WSML Working Draft. DERI Institute. http://www.wsmo.org/2004/d20/v0.1/20040629/.
 Calvanese et al. (2005) Calvanese, D., Giacomo, G. D., Lembo, D., Lenzerini, M., and Rosati, R. 2005. Tailoring OWL for Data Intensive Ontologies. In Proceedings of the Workshop on OWL: Experiences and Directions.
 Eiter et al. (2006) Eiter, T., Ianni, G., Schindlauer, R., and Tompits, H. 2006. Towards Efficient Evaluation of HEXPrograms. In Proceedings of the NMR International Workshop on NonMonotonic Reasoning.
 Grosof et al. (2003) Grosof, B., Horrocks, I., Volz, R., and Decker, S. 2003. Description Logic Programs: Combining Logic Programs with Description Logic. In Proceedings of the WWW International World Wide Web Conference.
 Haarslev and Moller (2004) Haarslev, V. and Moller, R. 2004. Optimization Techniques for Retrieving Resources Described in OWL/RDF Documents: First results. In Proceedings of the KR Knowledge Reasoning Conference.
 Horrocks and Turi (2005) Horrocks, I. and Turi, D. 2005. The OWL Instance Store: System Description. In Proceedings of the CADE International Conference on Automated Deduction. 177–181.
 Hustadt and Motik (2005) Hustadt, U. and Motik, B. 2005. Description Logics and Disjunctive Datalog: The Story so Far. In Proceedings of the DL International Workshop on Description Logics.
 Ling and Sun (1992) Ling, Y. and Sun, W. 1992. A Supplement to Samplingbased Methods for Query Size Estimation in a Database System. SIGMOD Record 21, 4, 12–15.
 Lipton and Naughton (1990) Lipton, R. and Naughton, J. 1990. Query Size Estimation by Adaptive Sampling (Extended Abstract). Proceedings of the ACM SIGMOD International Conference on Management of Data, 40–46.
 Lipton et al. (1990) Lipton, R., Naughton, J., and Schneider, D. 1990. Practical Selectivity Estimation through Adaptive Sampling. Proceedings of the ACM SIGMOD International Conference on Management of Data, 1–10.
 McGuinness and Harmelen (2004) McGuinness, D. and Harmelen, F. V. 2004. OWL Web Ontology Language Overview. http://www.w3.org/tr/owlfeatures/.
 Motik et al. (2003) Motik, B., Volz, R., and Maedche, A. 2003. Optimizing Query Answering in Description Logics using Disjunctive Deductive Databases. In Proceedings of the KRDB International Workshop on Knowledge Representation meets Databases. 39–50.
 Open Clinical Organization (2001) Open Clinical Organization. 2001. GALEN Common Reference Model. http://www.openclinical.org/.
 Pan and Hefflin (2003) Pan, Z. and Hefflin, J. 2003. DLDB: Extending Relational Databases to Support Semantic Web Queries. In Proceedings of the PSSS Workshop on Practical and Scalable Semantic Systems.
 Protege staff (1999) Protege staff. 1999. Protege OWL: Ontology Editor for the Semantic Web. http://protege.stanford.edu/plugins/owl/owllibrary/.
 Ramakrishnan and Gehrke (2003) Ramakrishnan, R. and Gehrke, J. 2003. Database Management Systems. Mc Graw Hill.
 Ramakrishnan and Ullman (1993) Ramakrishnan, R. and Ullman, J. D. 1993. A survey of research on deductive database systems. Journal of Logic Programming 23, 2, 125–149.
 Selinger et al. (1979) Selinger, P., Astrahan, M., Chamberlin, D., Lorie, R., and Price, T. 1979. Access Path Selection in a Relational Database Management System. In Proceedings of the ACM SIGMOD International Conference on Management of Data. 23–34.
 Shell (2002) Shell, M. 2002. SchemaWeb Website. http://www.schemaweb.info.
 Sirin and Parsia (2006) Sirin, E. and Parsia, B. 2006. Optimizations for Answering Conjunctive Abox Queries. In Proceedings of the DL International Workshop on Description Logics.
 Staudt et al. (1999) Staudt, M., Soiron, R., Quix, C., and Jarke, M. 1999. Query Optimization for Repository-Based Applications. In Proceedings of the ACM Symposium on Applied Computing (SAC). 197–203.
 Wang (2006) Wang, T. 2006. Gauging Ontologies and Schemas by Numbers. In Proceedings of the EON Workshop on Evaluation of Ontologies for the Web.