Query Reranking As A Service


Abstract

The ranked retrieval model has rapidly become the de facto way for search query processing in client-server databases, especially those on the web. Despite the extensive efforts in the database community on designing better ranking functions/mechanisms, many such databases in practice still fail to address the diverse and sometimes contradicting preferences of users on tuple ranking, perhaps (at least partially) due to the lack of expertise and/or motivation for the database owner to design truly effective ranking functions. This paper takes a different route on addressing the issue by defining a novel query reranking problem, i.e., we aim to design a third-party service that uses nothing but the public search interface of a client-server database to enable the on-the-fly processing of queries with any user-specified ranking functions (with or without selection conditions), no matter if the ranking function is supported by the database or not. We analyze the worst-case complexity of the problem and introduce a number of ideas, e.g., on-the-fly indexing, domination detection and virtual tuple pruning, to reduce the average-case cost of the query reranking algorithm. We also present extensive experimental results on real-world datasets, in both offline and live online systems, that demonstrate the effectiveness of our proposed techniques.


1 Introduction

Problem Motivation: The ranked retrieval model has rapidly replaced the traditional Boolean retrieval model as the de facto way for query processing in client-server (e.g., web) databases. Unlike the Boolean retrieval model, which returns all tuples matching the selection condition of a search query, the ranked retrieval model orders the matching tuples according to an often proprietary ranking function and returns the top-k tuples matching the selection condition (with possible page-turn support for retrieving additional tuples).

The ranked retrieval model naturally fits the usage patterns of client-server databases. For example, the short attention span of clients such as web users demands that the most desirable tuples be returned first. In addition, to achieve a short response time (e.g., for web databases), it is essential to limit the number of returned results to a small value k. Nonetheless, the ranked retrieval model also places more responsibilities on the web database designer, as the ranking function design now becomes a critical feature that must properly capture the needs of database users.

In an ideal scenario, the database users would have fairly homogeneous preferences on the returned tuples (e.g., newer over older product models, cheaper over more expensive goods), so that the database owner can provide a small number of ranking functions from which the database users can choose to fulfill their individual needs. Indeed, the database community has developed many ranking function designs and techniques for the efficient retrieval of top-k query answers according to a given ranking function.

The practical situation, however, is often much more complex. Different users often have diverse and sometimes contradicting preferences over numerous factors. Even more importantly, many database owners simply lack the expertise, resources, or even motivation (e.g., in the case of government web databases created for policy or legal compliance purposes) to properly study the requirements of their users and design the most effective ranking functions. For example, many flight-search websites, including Kayak, Google Flights, Skyscanner, Expedia, and Priceline, offer only limited ranking options over a subset of the attributes; none of these, for example, helps with ranking based on cost per mile. Similar limitations apply to websites such as Yahoo! Autos (resp. Blue Nile) if we want to rank the results by, say, mileage per year (resp. the sum of depth and table percent). As a result, there is often a significant gap, in terms of both design and diversity, between the ranking function(s) supported by the client-server database and the true preferences of the database users. The objective of this paper is to define and study the query re-ranking problem, which bridges this gap for real-world client-server databases.

Query Re-Ranking: Given the challenge for a real-world database owner to provide a comprehensive coverage of user-preferred ranking functions, in this paper we develop a third-party query re-ranking service which uses nothing but the public search interface of a client-server database to enable the on-the-fly processing of queries with user-specified ranking functions (with or without selection conditions), no matter if the ranking function is supported by the database or not.

This query re-ranking service can enable a wide range of interesting applications. For example, one may build a personalized ranking application using this service, offering users the ability to remember their preferences across multiple web databases (e.g., multiple car dealers) and apply the same personalized ranking over all of them despite the lack of such support by these web databases. As another example, one may use the re-ranking service to build a dedicated application for users with disabilities, special needs, etc., to enjoy appropriate ranking over databases that do not specifically tailor to their needs.

There are two critical requirements for a solution to the query re-ranking service: First, the output query answer must precisely follow the user-specified ranking function, i.e., there is no loss of accuracy and the query re-ranking service is transparent to the end user as far as query answers are concerned. Second, the query re-ranking service must minimize the number of queries it issues to the client-server database in order to answer a user-specified query. This requirement is crucial for two reasons: First is to ensure a fast response time to the user query, given that queries to the client-server database must be issued on the fly. Second is to reduce the burden on the client-server database, as many real-world ones, especially web databases, enforce stringent rate limits on queries from the same IP address or API user (e.g., Google Flight Search API allows only 50 free queries per user per day).

Problem Novelty: While extensive studies have focused on translating an unsupported query into multiple search queries supported by a database, there has been no research on translating the ranking requirements of queries. Most closely related to our problem are the existing studies on crawling client-server databases [15], as a baseline solution for query re-ranking is to first crawl all tuples from the client-server database and then process the user query and ranking function locally. The problem, however, is the high query cost. As proved in [15], the number of queries that have to be issued to the client-server database for crawling ranges from linear in the database size in the best case to quadratic and higher in worse cases. As such, it is often prohibitively expensive to apply this baseline to real-world client-server databases, especially large-scale web databases that constantly change over time.

Another seemingly simple solution is for the third-party service to retrieve more than k tuples matching the user query, say c·k tuples for some c > 1, by using the “page-down” feature provided by a client-server database (or the techniques of [16, 17] when such a feature is unavailable), and then locally re-rank the retrieved tuples according to the user-specified ranking function and return the top-ranked ones. There are two problems with this solution. First, since many client-server databases choose not to publish the design of their proprietary ranking functions (e.g., simply naming it “rank by popularity” in web databases), results returned by this approach will have unknown error unless all tuples satisfying the user query are retrieved. Second, when the database ranking function differs significantly from the user-specified one, this approach may have to issue many page-downs (i.e., a large c) in order to retrieve the real top answers according to the user-specified ranking function.

Finally, note that our problem stands in sharp contrast with existing studies on processing top-k queries over traditional databases using pre-built indices and/or materialized views (e.g., [4, 10]). The key difference here is the underlying data access model: unlike prior work, which assumes complete access to the data, we face a restricted top-k search interface provided by the database.

Outline of Technical Results: We start by considering a simple instance of the problem, where the user-desired ranking function is on a single attribute, and develop Algorithm 1D-RERANK to solve it. Note that this special, 1D, case not only helps with explaining the key technical challenges of query reranking, but also can be surprisingly useful for real-world web databases. For example, a need often arising in flight search is to maximize or minimize the layover time, so as to either add a free stopover for a sightseeing day trip or to minimize the amount of agonizing time spent at an airport. Unfortunately, while flight-search websites like Kayak offer the ability to specify a range query on layover time, they do not support ranking according to this attribute. The 1D-RERANK algorithm handily addresses this need by enabling a “Get-Next” primitive: given a user query q, an attribute A_i, and the top-h tuples satisfying q according to A_i, it finds the “next”, i.e., (h+1)-th ranked, tuple.

In the development of 1D-RERANK, we rigorously prove that, in the worst-case scenario, retrieving even just the top-1 tuple requires crawling the entire database. Nonetheless, we also show that the practical query cost tends to be much smaller. Specifically, we find that a key factor (negatively) affecting query cost is what we refer to as “dense regions”, i.e., a large number of tuples clustered together within a small interval (on the attribute under consideration). The fact that a dense region may be queried again and again (by the third-party query reranker) during the processing of different user queries prompts us to propose an on-the-fly indexing idea that detects such dense regions and proactively crawls their top-ranked tuples, avoiding wasted queries when processing future user queries. We demonstrate theoretically and experimentally the effectiveness of such an index in reducing the overall query cost.

To solve the general problem of query reranking for an arbitrary user-desired ranking function (rather than just 1D), a seemingly simple solution is to directly apply a classic top-k query processing algorithm that leverages sorted access to each attribute, e.g., Fagin's algorithm or the threshold algorithm (TA) [9], by calling the “Get-Next” primitive provided by 1D-RERANK as a subroutine. The problem with this simple solution, however, is that it incurs a significant waste of queries when applied to client-server databases, mainly because it fails to leverage the multi-predicate (conjunctive) queries supported by the underlying database. We demonstrate in the paper that this problem is particularly significant when a large number of tuples satisfying a user query feature extreme values on one or more attributes.

To address the issue, we develop MD-RERANK (i.e., Multi-Dimensional Rerank), a query re-ranking algorithm that identifies a small number of multi-predicate queries to directly retrieve the top-ranked tuples for a user query. We note a key difference between the 1D and MD cases: in the 1D case, a single query is enough to cover the subspace outranking a given tuple, while the MD case requires a much larger number of queries due to the more complex shape of the subspace. We develop two main ideas, namely direct domination detection and virtual tuple pruning, to significantly reduce the query cost of MD-RERANK. In addition, as in the 1D case, we observe the high query cost incurred by “dense regions” and include in MD-RERANK our on-the-fly indexing idea to reduce the amortized cost of query re-ranking.

Our contributions also include a comprehensive set of experiments on real-world web databases, both in an offline setting (for the freedom to control the database settings) and through live online experiments over real-world web databases. Specifically, we constructed a top-k web search interface in the offline experiments and evaluated the performance of the algorithms in different situations, varying parameters such as the database size, the system k, and the system ranking function. In addition, we tested our algorithms live online over two popular websites, namely Yahoo! Autos and Blue Nile, the largest online diamond retailer. The experimental results verify the effectiveness of our proposed techniques and their superiority over the baseline competitors.

The rest of the paper is organized as follows. We provide the preliminary notions and problem definition in § 2. Then, we consider the 1D case in § 3, proving a lower bound on the worst-case query cost of query reranking and developing the on-the-fly indexing idea that, as our theoretical analysis demonstrates, significantly reduces the query cost of 1D-RERANK in practice. In § 4, we study the general query reranking problem and develop the other two ideas, direct domination detection and virtual tuple pruning, for MD-RERANK. After discussing extensions in § 5, we present a comprehensive set of experimental results in § 6. We discuss related work in § 7, followed by final remarks in § 8.

2 Preliminaries

2.1 Database Model

Database: Consider a client-server database D with n tuples over m ordinal attributes A_1, . . . , A_m. Let V(A_i) be the value domain of A_i. The database may also have other, categorical, attributes; but since they are usually not part of any ranking function, they are not the focus of our attention in this paper. We assume each tuple t to have a non-NULL value on each (ordinal) attribute A_i, which we refer to as t[A_i]. Note that if NULL values do exist in the database, the ranking function usually substitutes them with a default value (e.g., the mean or an extreme value of the attribute); in that case, we simply treat each occurrence of NULL as the substituted value. In most parts of the paper, we make the general positioning assumption [20], i.e., no two tuples share the same value on any attribute, before introducing a simple post-processing step that removes this assumption in § 5.

Query Interface: Most client-server databases allow users to issue certain “simplistic” search queries. Often these queries are limited to conjunctive ones with predicates on one or a few attributes. Examples here include web databases, which usually allow such conjunctive queries to be specified through a form-like web search interface. Formally, we consider search queries of the form

q: SELECT * FROM D WHERE x_1 < A_{i1} < y_1 AND · · · AND x_s < A_{is} < y_s,

where {A_{i1}, . . . , A_{is}} is a subset of the ordinal attributes, and (x_j, y_j) is a range within the value domain of A_{ij}.

A subtle issue here is that our definition of q only includes open ranges (x, y), i.e., x < A_i < y, while real-world client-server databases may offer closed ranges, i.e., x ≤ A_i ≤ y, or a combination of both. We note that these minor variations do not affect the studies in this paper, because it is easy to derive the answer to q even when only closed ranges are allowed by the database: one simply needs to find values arbitrarily close to the limits, say x + ε and y − ε for an arbitrarily small ε > 0, and substitute (x, y) with [x + ε, y − ε]. In the case where the value domains are discrete, substitutions can be made to the closest discrete values in the domain.
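
The following is a minimal sketch (in Python) of the ε-substitution just described; the function name, the value of ε, and the example bounds are our own illustrative choices, not part of any specific database's API.

```python
# Minimal sketch of the open-to-closed range substitution described above.
# EPSILON and the example bounds are illustrative assumptions.

EPSILON = 1e-9  # arbitrarily small; for discrete domains, use the domain step

def open_to_closed(x, y, epsilon=EPSILON):
    """Translate the open range (x, y) into a closed range [x', y'] accepted
    by an interface that only supports closed-range predicates."""
    return x + epsilon, y - epsilon

# Emulating "WHERE 100 < price < 200" on a closed-range interface:
lo, hi = open_to_closed(100, 200)
print(lo, hi)  # 100.000000001 199.999999999
```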

As discussed in § 1, once a client-server database receives a query q from a user, it often limits the number of returned tuples to a small value k. Without causing ambiguity, we use R(q) to refer to the set of tuples actually returned by q, match(q) to refer to the set of tuples matching q (which can be a proper superset of the returned tuples when more than k tuples match), and |match(q)| to refer to the number of tuples matching q. When |match(q)| > k, we say that q overflows, because only k tuples can be returned. Otherwise, if 0 < |match(q)| ≤ k, we say that q returns a valid answer. At the other extreme, we say that q underflows when it returns empty, i.e., |match(q)| = 0.

System Ranking Function: In most parts of the paper, we make a conservative assumption that, when q overflows, the database selects the k returned tuples from match(q) according to a proprietary system ranking function unbeknownst to the query reranking service. That is, we make no assumption about the system ranking function whatsoever. In § 5, we also consider cases where the database offers more ranking options, e.g., ORDER BY according to a subset of the ordinal attributes.

2.2 Problem Definition

The objective of this paper is to enable a third-party query reranking service which enables a user-specified ranking function for a user-specified query q, when the query is supported by the underlying client-server database but the ranking function is not.

User-Specified Ranking Functions: We allow a user of the query reranking service to specify a ranking function u which takes as input the user query q and the ordinal attribute values of a tuple t, and outputs the ranking score u(q, t) of t for q. The smaller the score is, the higher ranked t will be in the query answer, i.e., the more likely t is included in the query answer when |match(q)| > k. Without causing ambiguity, we also write u(q, t) as u(t) when the context (i.e., the user query being processed) is clear.

We support a wide variety of user-specified ranking functions with only one requirement: monotonicity. Given a user query q, a ranking function u is monotonic if and only if there exists an order of values for each attribute domain, which we represent as ≺ (with x ≺ y indicating that x is higher-ranked than y), such that there do not exist two possible tuple values t and t′ with u(t) < u(t′) yet t′[A_i] ≺ t[A_i] for all i ∈ [1, m].

Intuitively, the definition states that if t outranks t′ according to u, then t has to outrank t′ on at least one attribute according to the order ≺. In other words, t cannot outrank t′ if it is dominated [5] by t′. Another interesting note here is that we do not require all user-specified ranking functions to follow the same attribute-value order ≺. For example, one ranking function may prefer higher prices while another prefers lower prices. We support both ranking functions so long as each is monotonic according to its own order of attribute values.
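
As a quick illustration of the monotonicity requirement, the snippet below (our own example, with a hypothetical linear scoring function and a “smaller is better” order on both attributes) checks that a tuple dominated by another can never outrank it:

```python
# Illustration of the monotonicity requirement with a linear scoring
# function; the weights and tuples are illustrative assumptions.

def u(t, weights=(0.5, 0.5)):
    """User-specified score: smaller score = higher rank (monotonic)."""
    return sum(w * v for w, v in zip(weights, t))

def dominates(a, b):
    """a dominates b: a is at least as preferred on every attribute
    (here: a[i] <= b[i]) and differs from b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

t1, t2 = (1.0, 2.0), (2.0, 3.0)
assert dominates(t1, t2) and u(t1) < u(t2)  # consistent with monotonicity
```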

Performance Measure: To enable query reranking, we have to issue a number of queries to the underlying client-server database. It is important to understand that the most important efficiency factor here is the total number of queries issued to the database, not the computational time. The rationale is that almost all client-server databases enforce a query-rate limit by allowing only a limited number of queries per day from each IP address, API account, etc.

Problem Definition: In this paper, we consider the problem of query reranking in a “Get-Next”, i.e., incremental processing, fashion. That is, for a given user query q, a user-specified ranking function u, and the top-h tuples satisfying q according to u, we aim to find the No. h+1 tuple. When h = 0, this means finding the top-1 tuple for the given q and u. One can see that finding the top-h tuples for q and u can be easily accomplished by repeatedly calling the Get-Next function. The reason why we define the problem in this fashion is to address the real-world scenario where a user first retrieves the top-h answers and, if still unsatisfied with the returned tuples, proceeds to ask for the No. h+1. By supporting incremental processing, we can progressively return top answers while paying only the incremental cost.

Query Reranking Problem: Consider a client-server database D with a top-k interface and an arbitrary, unknown, system ranking function. Given a user query q, a user-specified monotonic ranking function u, and the top-h (h can be greater than, equal to, or smaller than k) tuples satisfying q according to u, discover the No. h+1 tuple for q while minimizing the number of queries issued to D.
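
The following sketch shows how a client of the Get-Next primitive assembles a top-h answer; `get_next` here is a local stand-in for 1D-RERANK/MD-RERANK (it scans an in-memory list), so the function names and data are illustrative only.

```python
# Assembling a top-h answer from repeated Get-Next calls. The in-memory
# DATA list and the stand-in get_next are illustrative assumptions.

DATA = [(3, 'a'), (1, 'b'), (2, 'c'), (5, 'd')]

def get_next(q_filter, u, top_so_far):
    """Return the highest-ranked (smallest u) matching tuple not yet seen."""
    rest = [t for t in DATA if q_filter(t) and t not in top_so_far]
    return min(rest, key=u) if rest else None

def top_h(q_filter, u, h):
    answers = []
    for _ in range(h):              # h incremental Get-Next calls
        t = get_next(q_filter, u, answers)
        if t is None:
            break                   # fewer than h tuples match q
        answers.append(t)
    return answers

print(top_h(lambda t: True, lambda t: t[0], 3))  # [(1, 'b'), (2, 'c'), (3, 'a')]
```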

3 1D-RERANK

We start by considering the simple 1D version of the query reranking problem which, as discussed in the introduction, can also be surprisingly useful in practice. Specifically, for a given attribute A_i, a user query q, and the h tuples having the minimum values of A_i among match(q) (i.e., the tuples satisfying q), our goal here is to find the tuple t_{h+1} which satisfies q and has the (h+1)-th smallest value of A_i among match(q), while minimizing the number of queries issued to the underlying database.

3.1 Baseline Solution and Its Problem

1D-BASELINE

Baseline Design: Since our focus here is to discover the No. h+1 tuple given q, A_i, and the top-h tuples, without causing ambiguity, we use t_{h+1} as a short-hand for it and t_h for the No. h tuple. A baseline solution for finding t_{h+1} is to start by issuing to the underlying database the query q_0: SELECT * FROM D WHERE A_i > t_h[A_i] AND sel(q), where sel(q) represents all selection conditions specified in q. If h = 0, this query simply becomes SELECT * FROM D WHERE sel(q).

Note that the answer to q_0 must be non-empty, because otherwise there would be only h tuples matching q. Let t′ be the returned tuple having the minimum A_i. Given t′, the next query we issue is q_1: WHERE t_h[A_i] < A_i < t′[A_i] AND sel(q). In other words, we narrow the search region on A_i to “push the envelope” and discover any tuple with A_i even “better” than what we have seen so far.

If q_1 returns empty, then t_{h+1} = t′. Otherwise, we construct and issue q_2, q_3, . . . in a similar fashion. More generally, with t_j being the tuple with minimum A_i returned by q_j, the next query we issue is q_{j+1}: WHERE t_h[A_i] < A_i < t_j[A_i] AND sel(q). We stop when q_{j+1} returns empty, at which time we conclude t_{h+1} = t_j. Algorithm 1, 1D-BASELINE, depicts the pseudo-code of this baseline solution.

Leveraging History: An implementation issue worth noting for 1D-BASELINE is how to leverage the historic query answers we have already received from the underlying client-server database. This applies not only during the processing of a user query, but also across the processing of different user queries.

During the processing of user query q, for example, we do not have to start with the full range of A_i as stated in the basic algorithm design. Instead, if we have already “seen” tuples matching q with A_i > t_h[A_i] in the historic query answers, then we can first identify such a tuple with the minimum A_i, denoted by t_0, and then start the searching process with the range (t_h[A_i], t_0[A_i]], a much smaller region that can yield significant query savings, as shown in the query cost analysis below.

More generally, this exact idea applies across the processing of different user queries. What we can do is to inspect every tuple we have observed in historic query answers, identify those that match the user query being processed, and order these matching tuples according to the attribute under consideration. By doing so, the more queries we have processed, the more likely we can prune the search space for t_{h+1} based on historic query answers, and thereby reduce the query cost of re-ranking.

1:  t_0 = argmin_{t ∈ history: t matches q, t[A_i] > t_h[A_i]} t[A_i]
2:  R = Top-k(WHERE t_h[A_i] < A_i ≤ t_0[A_i] AND sel(q))
3:  while R is overflow
4:      t_0 = argmin_{t ∈ R} t[A_i]
5:      R = Top-k(WHERE t_h[A_i] < A_i < t_0[A_i] AND sel(q))
6:  return argmin_{t ∈ R ∪ {t_0}} t[A_i]
Algorithm 1 1D-BASELINE
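
A runnable Python rendering of 1D-BASELINE is sketched below against a mock top-k interface; the system order is deliberately arbitrary (list order) to mimic an unknown proprietary ranking function, and K and the data values are our own.

```python
# 1D-BASELINE over a mock top-k interface (illustrative K and data).

K = 2
DB = [7.0, 3.0, 9.0, 1.0, 4.0, 8.0]   # Ai values of the tuples matching q

def top_k(lo, hi):
    """Mock interface: tuples with lo < Ai < hi in 'system' order, capped
    at K. Returns (returned_tuples, overflow_flag)."""
    matching = [v for v in DB if lo < v < hi]
    return matching[:K], len(matching) > K

def one_d_baseline(th_value):
    """Find the smallest Ai value strictly greater than th_value."""
    best = float('inf')
    result, overflow = top_k(th_value, best)
    while result:
        best = min(result)                  # push the envelope
        if not overflow:                    # valid answer: best is exact
            return best
        result, overflow = top_k(th_value, best)
    return best                             # the narrowed range came back empty

print(one_d_baseline(float('-inf')))        # 1.0, the top-1 on Ai
```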

Negative Result: Lower Bound on Worst-Case Query Cost

While simple, 1D-BASELINE has a major problem in query cost, as the cost depends on the correlation between A_i and the system ranking function, which we know nothing about and have no control over. For example, if the system ranking function is exactly according to A_i, then the query cost of finding t_{h+1} is 2: q_0 returns t_{h+1}, and q_1 returns empty to confirm that t_{h+1} is indeed the “next” tuple. On the other hand, if the system ranking function is the exact opposite of A_i (i.e., returning tuples with maximal A_i first), then the query cost of the baseline solution is in the order of (|match(q)| − h)/k in the worst-case scenario, because every tuple satisfying q will be returned before t_{h+1} is revealed at the end. Granted, this cost can be “amortized” thanks to the leveraging-history idea discussed above, because the issued queries reveal not just the top-(h+1) but the complete ranking of all tuples matching q. Nonetheless, the query cost is still prohibitively high when q matches a large number of tuples.

While it might be tempting to try to “adapt to” such ill-conditioned system ranking functions, the following theorem shows that the problem is not fixable in the worst-case sense. Specifically, there is a lower bound of ⌈n/k⌉ on the query cost required for query reranking given the worst-case data distribution and the worst-case system ranking function.

Theorem 1

For any n, there exists a database of n tuples such that finding the top-ranked tuple on an attribute through a top-k search interface requires at least ⌈n/k⌉ queries, i.e., as many as are needed to retrieve all the tuples.

Proof.

Without loss of generality, consider a database with only one attribute A_1 and an unknown ranking function. Let (0, 1) be the domain of A_1. Note that this means (1) the query re-ranking algorithm can only issue queries of the form SELECT * FROM D WHERE x_1 < A_1 < x_2, where 0 ≤ x_1 < x_2 ≤ 1, (2) the returned tuples will be ranked in an arbitrary order, and (3) the objective of the query re-ranking algorithm is to find the tuple with the smallest A_1.

For any given query re-ranking algorithm A, consider the following query processing mechanism M for the database: during the processing of all queries, we maintain a min-query-threshold θ with initial value 1. If a query q issued by A has a lower bound not equal to 0, i.e., q: WHERE x_1 < A_1 < x_2 with x_1 > 0, M returns whatever tuples already returned in historic query answers fall into the range (x_1, x_2). It also sets θ = min(θ, x_1).

Otherwise, if q is of the form WHERE 0 < A_1 < x_2, then M returns an overflowing answer with k tuples. These k tuples include those in the historic query answers that fall into (0, min(x_2, θ)). If more than k such tuples exist in the history, we choose an arbitrary size-k subset. If fewer than k such tuples exist, we fill up the remaining slots with arbitrary values in the range (θ/2, min(x_2, θ)). We also set θ to be θ/2.

There are two critical observations here. First, for any query sequence of length smaller than ⌈n/k⌉, we can always construct a database D of at most n tuples such that the query answers generated by D are consistent with what M produces. Specifically, D would simply be the union of all returned tuples. Note that our maintenance of θ ensures this consistency.

The second critical observation is that no query re-ranking algorithm can find the tuple with the smallest A_1 without issuing at least ⌈n/k⌉ queries. The reason is simple: since fewer queries cannot reveal all n tuples, we can add a tuple with A_1 = θ/2 to the database, where θ is its value after processing all queries. One can see that the answers to all queries can remain the same. As such, for any n, there exists a database containing n tuples such that finding the top-ranked tuple on an attribute requires at least ⌈n/k⌉ queries, which according to [15] is sufficient for crawling the entire database in a 1D space.

3.2 1D-RERANK

Given the above result, we have to shift our attention to reducing the cost of finding t_{h+1} in an average-case scenario, e.g., when the tuples are more or less uniformly distributed on A_i (instead of forming a highly skewed distribution as constructed in the proof of Theorem 1). To this end, we start this subsection by considering a binary-search algorithm. After pointing out the deficiency of this algorithm when facing certain system ranking functions, we introduce our idea of on-the-fly indexing for the design of 1D-RERANK, our final algorithm for query reranking with a single-attribute user-specified ranking function.

1D-BINARY and its Problem

The binary search algorithm departs from 1D-BASELINE in the construction of q_{j+1}: given t_j, instead of issuing q_{j+1}: WHERE t_h[A_i] < A_i < t_j[A_i] AND sel(q), we issue here the lower half of the remaining search space, i.e.,

q_{j+1}: WHERE t_h[A_i] < A_i < (t_h[A_i] + t_j[A_i])/2 AND sel(q).

This query has two possible outcomes: if it returns non-empty, we consider the returned tuple with minimum A_i, say t_{j+1}, and construct q_{j+2} according to t_{j+1}. The other possible outcome is for q_{j+1} to return empty. In this case, we issue q′_{j+1}: WHERE (t_h[A_i] + t_j[A_i])/2 ≤ A_i < t_j[A_i] AND sel(q), which has to return non-empty, as otherwise t_{h+1} = t_j. In either case, the search space (i.e., the range in which t_{h+1}[A_i] must reside) is reduced by at least half. Algorithm 2, 1D-BINARY, depicts the pseudocode.

1:  t_0 = argmin_{t ∈ history: t matches q, t[A_i] > t_h[A_i]} t[A_i]
2:  do
3:      q′ = WHERE t_h[A_i] < A_i < (t_h[A_i] + t_0[A_i])/2 AND sel(q)
4:      R = Top-k(q′)
5:      if R is underflow
6:          q′ = WHERE (t_h[A_i] + t_0[A_i])/2 ≤ A_i < t_0[A_i] AND sel(q)
7:          R = Top-k(q′)
8:      if R is not underflow
9:          t_0 = argmin_{t ∈ R} t[A_i]
10:  while R is overflow
11:  return t_0
Algorithm 2 1D-BINARY
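
The same mock interface from the 1D-BASELINE sketch makes the binary-search variant concrete; again, the bounds and data are illustrative, and a real deployment would substitute actual interface calls.

```python
# 1D-BINARY over a mock top-k interface (illustrative K and data).

K = 2
DB = [7.0, 3.0, 9.0, 1.0, 4.0, 8.0]

def top_k(lo, hi):
    matching = [v for v in DB if lo < v < hi]
    return matching[:K], len(matching) > K

def one_d_binary(lo, hi):
    """Smallest Ai value in (lo, hi); None if the range is empty."""
    candidate = None                        # best tuple seen so far
    while True:
        mid = (lo + hi) / 2.0
        result, overflow = top_k(lo, mid)   # probe the lower half first
        if not result:                      # underflow: try the upper half
            result, overflow = top_k(mid, hi)
            if not result:
                return candidate            # nothing left below candidate
            lo = mid
        best = min(result)
        if not overflow:                    # we saw everything below 'best'
            return best
        candidate, hi = best, best          # keep searching (lo, best)

print(one_d_binary(0.0, 10.0))              # 1.0
```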

Query Cost Analysis: While the design of 1D-BINARY is simple, its query-cost analysis yields an interesting observation which motivates the indexing-based design of our final 1D-RERANK algorithm. Let

δ(q, A_i) = min { y − x : |{t ∈ match(q) : x ≤ t[A_i] ≤ y}| > k },     (1)

i.e., the width of the narrowest range on A_i that covers more than k tuples matching q. An important observation here is that the execution of 1D-BINARY must conclude once the search space is reduced to a width smaller than δ(q, A_i), because no such range can cover t_{h+1} while matching more than k tuples. Thus, the worst-case query cost of 1D-BINARY is

2 · min( ⌈log_2 (|V_i(q)| / δ(q, A_i))⌉, (|match(q)| − h)/k ),     (2)

where |V_i(q)| is the range of A_i among the tuples satisfying q, i.e., |V_i(q)| = max_{t ∈ match(q)} t[A_i] − min_{t ∈ match(q)} t[A_i]. Note that the second input to the min function in (2) arises because every pair of queries issued by 1D-BINARY, i.e., q_{j+1} and q′_{j+1}, must return at least k tuples never seen before that satisfy q, as otherwise the loop terminates.

The query-cost bound in (2) illustrates both the effectiveness and the potential problem of Algorithm 1D-BINARY. On one hand, one can see that 1D-BINARY performs well when the tuples matching q are uniformly distributed on A_i, because in this case the expected value of δ(q, A_i) is on the order of |V_i(q)| · k/|match(q)|, leading to a query cost of O(log(|match(q)|/k)).

On the other hand, 1D-BINARY still incurs a high query cost (as bad as the Θ(n/k) indicated by Theorem 1) when two conditions are satisfied: (1) the system ranking function is ill-conditioned, i.e., negatively correlated with A_i, and (2) within match(q) there are densely clustered tuples with extremely close values on A_i, leading to a small δ(q, A_i). Unfortunately, once the two conditions are met, the high query cost of 1D-BINARY is likely to be incurred again and again for different user queries, leading to an expensive reranking service. It is this observation that motivates our index-based reranking idea discussed next.

Algorithm 1D-RERANK: On-The-Fly Indexing

Oracle-based Design: According to the above observation, densely clustered tuples cause a high query cost for 1D-BINARY. To address the issue, we start by considering an ideal scenario where there exists an oracle which identifies these “dense regions” and reveals the tuple with minimum A_i in them without costing us any queries. Of course, no such oracle exists in practice. Nevertheless, what we shall do here is analyze the query cost of 1D-BINARY given such an oracle, and then show how this oracle can be “simulated” with a low-cost on-the-fly indexing technique.

Specifically, for any given region R ⊆ V(A_i), we call R a dense region if and only if it covers at least β·k tuples and has width |R| ≤ β·k·|V(A_i)|/(α·n), where α and β are parameters. In other words, the density of tuples in R is more than α times higher than under the uniform distribution (which yields an expected n·|R|/|V(A_i)| tuples in R). The setting of α and β is a subtle issue which we specifically address at the end of this subsection. Given the definition of dense regions, the oracle functions as follows: upon given a user query q, an attribute A_i, and a range R as input, the oracle either returns empty if R is not dense, or a tuple t which (1) satisfies q, (2) has t[A_i] ∈ R, and (3) features the smallest A_i among all tuples satisfying (1) and (2).
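
Under this formalization, the density test is a one-liner; the α and β values and the numbers below are illustrative assumptions, not prescribed settings.

```python
# Density test for a candidate region on attribute Ai, assuming the
# formalization above: at least beta*k tuples within a width of at most
# beta*k*|V|/(alpha*n). alpha, beta, and all numbers are illustrative.

def is_dense(width, count, n, domain_width, k, alpha=8.0, beta=4.0):
    """True iff the region packs alpha times the uniform tuple density."""
    return count >= beta * k and width <= beta * k * domain_width / (alpha * n)

# 10,000 tuples on a domain of width 1,000, interface k = 10:
print(is_dense(width=0.5, count=60, n=10_000, domain_width=1_000, k=10))  # True
```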

With the existence of this oracle, we introduce a small yet critical revision to 1D-BINARY: terminating the binary search whenever the width of the search space becomes narrower than the dense-region threshold, i.e., β·k·|V(A_i)|/(α·n). Then, we call the oracle with the remaining search space as input. Note that doing so may lead to two possible returns from the oracle:

One is when the region is indeed dense. In this case, the oracle will directly return t_{h+1} to us with zero cost. The other possible outcome is an empty return, indicating that the region is not really dense, instead containing more than k (otherwise 1D-BINARY would have already terminated) but fewer than β·k tuples. Note that this is not a bad outcome either, because it means that by following the baseline technique (1D-BASELINE) on the remaining search space, we can always find t_{h+1} within β·k queries.

Algorithm 3 depicts the pseudocode of 1D-RERANK, the revised algorithm. The following theorem shows its query cost, which follows directly from the above discussions.

1:  t_0 = argmin_{t ∈ history: t matches q, t[A_i] > t_h[A_i]} t[A_i]
2:  while t_0[A_i] − t_h[A_i] > β·k·|V(A_i)|/(α·n)
3:      q′ = WHERE t_h[A_i] < A_i < (t_h[A_i] + t_0[A_i])/2 AND sel(q)
4:      R = Top-k(q′)
5:      if R is underflow
6:          q′ = WHERE (t_h[A_i] + t_0[A_i])/2 ≤ A_i < t_0[A_i] AND sel(q)
7:          R = Top-k(q′)
8:      if R is not underflow
9:          t_0 = argmin_{t ∈ R} t[A_i]
10:     if R is valid: break
11:  if R is not valid
12:      t_0 = look up t_{h+1} at ORACLE(q, A_i, (t_h[A_i], t_0[A_i]))
13:  return t_0
Algorithm 3 1D-RERANK
Theorem 2

The query cost of 1D-RERANK, with the presence of the oracle, is O(log(α·n/(β·k)) + β·k).

Proof.

The query cost of 1D-RERANK, with the presence of the oracle, is the summation of the following costs:

  • C_1: the query cost of following 1D-BINARY until the search space becomes narrower than the dense-region threshold, and

  • C_2: the query cost of discovering t_{h+1} in the remaining region, using the oracle.

Following 1D-BINARY takes a constant number of queries per halving of the search space. Because the binary search stops at width β·k·|V(A_i)|/(α·n), C_1 is in the order of log(α·n/(β·k)). As discussed previously, if the oracle does not cover the remaining region, the region is not dense and contains fewer than β·k tuples. Then, following 1D-BASELINE, at most β·k queries are required to discover t_{h+1}, i.e., C_2 is O(β·k). Consequently, the query cost of 1D-RERANK, with the presence of the oracle, is O(log(α·n/(β·k)) + β·k).

Note that the query cost indicated by the theorem is very small: with α set to a large value and β to a small constant, the cost is logarithmic in n plus a term linear in k, substantially smaller than that of 1D-BINARY. Of course, the oracle does not exist in any real system. Thus, our goal next is to simulate this oracle with an efficient on-the-fly indexing technique.

On-The-Fly Indexing: Our idea for simulating the oracle is simple: once 1D-RERANK decides to call the oracle with a range R, we invoke the 1D-BASELINE algorithm on SELECT * FROM D WHERE A_i ∈ R to find the tuple t with the smallest A_i in the range. If t satisfies the user query q being processed, then we can stop and output t. Otherwise, we call 1D-BASELINE on WHERE A_i ∈ R AND A_i > t[A_i] to find the No. 2 tuple in the range, and repeat this process until finding one that satisfies q. All tuples discovered during the process are then added into the “dense index” that is maintained throughout the processing of all user queries.

Algorithm 4 depicts the on-the-fly index building process. Note that the index we maintain is essentially a set of 3-tuples

⟨A_i, R, T⟩,     (3)

where A_i is an attribute, R is a range in V(A_i) (non-overlapping with other indexed ranges of A_i), and T contains all (top-ranked) tuples we have discovered that have A_i ∈ R.

1:  if ORACLE(A_i, R) exists in the index
2:      return argmin_{t ∈ T: t matches q} t[A_i]
3:  t = 1D-BASELINE(WHERE A_i ∈ R)
4:  add t to T
5:  while t does not satisfy q
6:      t = 1D-BASELINE(WHERE A_i ∈ R AND A_i > t[A_i])
7:      add t to T
8:  return t
Algorithm 4 ORACLE
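
The sketch below mirrors the 3-tuple index layout ⟨A_i, R, T⟩ and the oracle's lookup-then-crawl behavior; the data layout, function names, and the stubbed crawler (standing in for repeated 1D-BASELINE calls) are our own simplifications.

```python
# On-the-fly dense index: answer from already-crawled ranges when possible,
# crawl and index otherwise. Layout and names are illustrative.

INDEX = {}   # attribute -> list of (lo, hi, crawled_values)

def oracle(attr, lo, hi, q_matches, crawl):
    """Smallest value in (lo, hi) on `attr` satisfying the user query."""
    for r_lo, r_hi, values in INDEX.get(attr, []):
        if r_lo <= lo and hi <= r_hi:                 # range already crawled
            hits = [v for v in values if lo < v < hi and q_matches(v)]
            return min(hits) if hits else None
    values = sorted(crawl(lo, hi))                    # repeated 1D-BASELINE
    INDEX.setdefault(attr, []).append((lo, hi, values))
    hits = [v for v in values if q_matches(v)]
    return min(hits) if hits else None

crawler = lambda lo, hi: [v for v in (1.10, 1.15, 1.20) if lo < v < hi]
print(oracle('price', 1.0, 1.3, lambda v: v > 1.12, crawler))   # 1.15
print(INDEX)   # the range is now indexed for future user queries
```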

Note that this simulation does differ a bit from the ideal oracle. Specifically, it does not really determine if the region is dense or not. Even if the region is not dense, this simulated oracle still outputs the correct tuple. What we would like to note, however, is that this difference has no implication whatsoever on the query cost of 1D-RERANK. Specifically, what happens here is simply that the on-the-fly indexing process pre-issues the queries 1D-RERANK is supposed to issue when the oracle returns empty. The overall query cost remains exactly the same.

Another noteworthy design in on-the-fly indexing is the call of 1D-BASELINE on SELECT * FROM D WHERE A_i ∈ R, a query that does not “inherit” the selection conditions of the user query being processed. This might appear wasteful, as 1D-BASELINE could issue fewer queries with a narrower input query. Nonetheless, the rationale here is that a dense region might be covered by multiple user queries repeatedly. By keeping the index construction generic to all user queries, we reduce the amortized cost of indexing, as the dense index can make future reranking processes more efficient.

Parameter Settings: To properly set the two parameters for the dense index, α and β, we need to consider not only the query cost derived in Theorem 2, but also the cost of building the index, which is considered in the following theorem:

Theorem 3

The total query cost incurred by on-the-fly indexing (for processing all user queries) is at most

Σ_{i=1}^{m} Σ_{j=1}^{n} c_{ij},     (4)

where c_{ij} = 1 if there exists j′ ∈ [max(1, j − β·k + 1), j] such that

t_{j′+β·k−1}[A_i] − t_{j′}[A_i] ≤ β·k·|V(A_i)|/(α·n),     (5)

and c_{ij} = 0 otherwise. Here t_j refers to the j-th ranked tuple according to A_i in the entire database.

Proof.

The discovery of each tuple in a dense region takes at most an amortized cost of one query, because 1D-BASELINE guarantees the discovery of unseen tuples by every non-underflowing query, i.e., every tuple in the dense region is discovered by one and only one query. Thus the query cost is at most equal to the number of tuples in dense regions. A tuple is in a dense region with regard to dimension A_i if, sorting the tuples on A_i, we can construct a window containing it, with width less than the dense-region threshold, that holds at least β·k tuples. Suppose the tuple is ranked j-th based on A_i; Equation 5 checks the existence of such a window around it. The total cost, thus, is at most the number of tuples for which this equation holds, which is reflected in Equation 4.

One can see from the above theorem and Theorem 2 how α and β impact the query cost: the larger α is, the fewer dense regions there will be, leading to a lower indexing cost. On the other hand, the per-query reranking cost increases at a log scale with α. Similarly, the larger β is, the fewer dense regions there will be (because a larger β reduces the variance of tuple density), while the per-query reranking cost increases linearly with β. Given the different rates of increase of the per-query reranking cost with α and β, we should set α to be a larger value to leverage its log-scale effect, while keeping β small to maintain an efficient reranking process.

Specifically, we set α to a large value and β to a small constant. One can see that the per-query reranking cost of 1D-RERANK in this case remains logarithmic in n plus a term linear in k. While the indexing cost depends on the specific data distribution (after all, we are bounded by Theorem 1 in terms of worst-case performance), the large value of α makes it extremely unlikely for the indexing cost to be high. In particular, note that even if the density surrounding each tuple follows a heavy-tailed scale-free distribution, the setting of α still makes the number of dense regions, and therefore the query cost of indexing, a constant. We shall verify this intuition and perform a comprehensive test of different parameter settings in the experimental evaluations.

4 MD-RERANK

In this section, we consider the generic query reranking problem, i.e., over any monotonic user-specified ranking function. We start by pointing out the problem of a seemingly simple solution: implementing a classic top-k query processing algorithm such as TA [9] by calling 1D-RERANK as a subroutine. The problem illustrates the necessity of properly leveraging the conjunctive queries supported by the search interface of the underlying database. To do so, we start with the design of MD-BASELINE, a baseline technique similar to 1D-BASELINE. Despite the similarity, we shall point out a key difference between the two cases: MD-BASELINE requires many more queries because of the more complex shape of what we refer to as a tuple's “rank-contour”, i.e., the subspace (e.g., a curve in 2D space) containing all possible tuples that have the same user-defined ranking score as a given tuple t. To reduce this high query cost, we propose Algorithm MD-BINARY, which features two main ideas, direct domination detection and virtual tuple pruning. Finally, we integrate the dense-region indexing idea with MD-BINARY to produce our final MD-RERANK algorithm.

4.1 Problem with TA over 1D-RERANK

Figure 1: Illustration of problem with TA over 1D-RERANK

A seemingly simple solution to the generic query reranking problem is to directly apply a classic top-k query processing algorithm, e.g., the threshold algorithm (TA) [9], over the Get-Next primitive offered by 1D-RERANK. While we refer readers to [9] for the detailed design of TA, it is easy to see that 1D-RERANK offers all the data structures required by TA, i.e., sorted access to each attribute. Note that the random-access requirement does not apply here because, as discussed in the preliminaries, the search interface returns all attribute values of a tuple without the need to access each attribute separately. Since TA supports all monotonic ranking functions, this simple combination solves the generic query reranking problem defined in § 2.
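
For concreteness, here is a compact TA loop driven purely by per-attribute sorted access, which is what Get-Next provides; sorted access is emulated on a local list (each `stream[depth]` lookup stands for one 1D-RERANK call), and the data and scoring function are illustrative.

```python
# TA over per-attribute sorted access (emulated locally; illustrative data).

TUPLES = [(0.1, 0.9), (0.4, 0.3), (0.8, 0.2)]    # (A1, A2) per tuple

def ta_top1(u):
    streams = [sorted(range(len(TUPLES)), key=lambda i: TUPLES[i][a])
               for a in range(2)]                # sorted access per attribute
    best, best_score = None, float('inf')
    for depth in range(len(TUPLES)):
        frontier = []
        for a, stream in enumerate(streams):
            i = stream[depth]                    # one "Get-Next" on attribute a
            frontier.append(TUPLES[i][a])
            if u(TUPLES[i]) < best_score:
                best, best_score = TUPLES[i], u(TUPLES[i])
        if best_score <= u(tuple(frontier)):     # TA threshold test
            break                                # no deeper tuple can do better
    return best

print(ta_top1(lambda t: t[0] + t[1]))            # (0.4, 0.3)
```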

While simple, this solution suffers from a major efficiency problem, mainly because it does not leverage the full power provided by client-server databases. Note that, by exclusively calling 1D-RERANK as a subroutine, this solution focuses on just one attribute at a time and does not issue any multi-predicate (conjunctive) queries supported by the underlying database (unless such predicates are copied from the user query). The example in Figure 1 illustrates the problem: in the example, there is a large number of tuples with extreme values on both attributes (i.e., tuples on the x- and y-axes). Since this TA-based solution focuses on one attribute at a time, these extreme-value tuples have to be enumerated first, even when the system ranking function completely aligns with (e.g., equals) the user-desired ranking function. In other words, no matter what the system/user ranking function is, discovering the top-1 reranked tuple requires sifting through at least half of the database in this example.

On the other hand, one can observe from the figure the power bestowed by the ability to issue multi-predicate conjunctive queries. As an example, consider the case where the system ranking function is well-conditioned and returns tuple t as the result of SELECT * FROM D. Given t, we can compute its rank-contour, i.e., the line/curve that passes through all 2D points with user-defined score equal to u(t), the score of t. The curve in the figure depicts an example. Given the rank-contour, we can issue the smallest 2D query encompassing the contour, e.g., q in Figure 1, and immediately conclude that t is the No. 1 tuple when q returns t and nothing else (assuming k > 1). This represents a significant saving over the query cost of implementing TA over 1D-RERANK.

4.2 MD-Baseline

Discovery of Top-1

To leverage the power of multi-predicate queries, we start by developing a baseline algorithm similar to 1D-BASELINE. The algorithm starts by discovering the top-1 tuple t according to an arbitrary attribute, say A_1. Then, we compute the rank-contour of t (according to the user ranking function, of course), specifically the values v_i at which t's rank-contour intersects each dimension, i.e., for i ∈ [1, m],

u(⟨b_1, . . . , b_{i−1}, v_i, b_{i+1}, . . . , b_m⟩) = u(t),     (6)

where b_j denotes the highest-ranked (per the order ≺) value in the domain of A_j. Figure 2 depicts an example of v_1 and v_2 for the two dimensions.

We now issue m queries of the form (assuming, without loss of generality, that smaller values rank higher)

q_i: SELECT * FROM D WHERE A_i < v_i AND v_1 ≤ A_1 AND · · · AND v_{i−1} ≤ A_{i−1},     (7)

where the trailing predicates make the m queries mutually exclusive. Again, Figure 2 shows an example of q_1 and q_2 for the 2D space.

Figure 2: Example of MD-BASELINE

One can see that the union of these (mutually exclusive) queries covers in its entirety the region “underneath” the rank-contour of t. Thus, if none of them overflows, we can safely conclude that the No. 1 tuple must be either t or one of the tuples returned by these queries. If at least one query overflows and returns a tuple t′ with score u(t′) < u(t), i.e., a tuple that ranks higher than t, we restart the entire process with t′.
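
For a linear ranking function, solving Equation (6) for each v_i is simple algebra; the sketch below (weights, domain minima, and target score are illustrative, and we assume “smaller is better” on every attribute) is the kind of constant-time solver also referred to in the discussion of Equation (8) below.

```python
# Contour intersections v_i of Equation (6) for a linear ranking function
# u(t) = sum(w_i * t[i]), assuming smaller values rank higher on every
# attribute. All inputs are illustrative.

def contour_intersections(weights, domain_mins, target_score):
    """v[i]: where the contour u(.) = target_score crosses dimension i,
    holding every other attribute at its best (minimum) value."""
    v = []
    for i, w in enumerate(weights):
        rest = sum(wj * mj for j, (wj, mj) in
                   enumerate(zip(weights, domain_mins)) if j != i)
        v.append((target_score - rest) / w)    # solve w_i*v_i + rest = c
    return v

print(contour_intersections([2.0, 1.0], [0.0, 0.0], 4.0))   # [2.0, 4.0]
```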

Otherwise, for each query q_i that overflows, we “partition” it further into m + 1 queries. Let t′ be the tuple returned by q_i. We compute for each attribute A_j a value x_j such that

u(⟨b_1, . . . , b_{j−1}, x_j, b_{j+1}, . . . , b_m⟩) = u(t′),     (8)

where the b_r are, as above, the highest-ranked values of the other attributes, this time restricted to the search region of q_i. Intuitively, x_j can be understood as follows: in order for a tuple in q_i to outrank t′, the highest-ranked tuple discovered so far in q_i, it must either “outperform” t′ on at least one attribute, i.e., have A_j < x_j for some j, or it must dominate t′. Examples of x_1 and x_2 are shown in Figure 2.

Note that, while any monotonic (user-defined) ranking function yields a unique solution for each x_j, the complexity of computing it can vary significantly depending on the design of the ranking function. Nonetheless, recall from § 2 that our main efficiency concern is the query cost of the reranking process rather than the computational cost of solving for x_j locally (which does not incur any additional queries to the underlying database). Furthermore, the most extensively studied ranking function in the literature, a linear combination of multiple attributes, features a constant-time solver for x_j.

Given the x_j, we are now ready to construct the m + 1 queries we issue. The first m queries cover those tuples outperforming t′ on A_1, . . . , A_m, respectively, while the last one covers those tuples dominating t′. Specifically, q_i^j (j ∈ [1, m]) is the AND of q_i and

A_j < x_j AND x_1 ≤ A_1 AND · · · AND x_{j−1} ≤ A_{j−1},     (9)

with the trailing predicates again ensuring mutual exclusion. The last query is the AND of q_i and A_1 < t′[A_1] AND · · · AND A_m < t′[A_m], i.e., covering the space dominating t′.

Once again, at any time during the process, if a query returns a tuple t″ with u(t″) < u(t), we restart the entire process with t″. Otherwise, for each query that overflows, we “partition” it into m + 1 queries as described above.

In terms of query cost, recall from § 3 our idea of leveraging the query history by checking whether any previously discovered tuples match the query we are about to issue. Given this idea, each tuple will be retrieved at most once by MD-BASELINE. Since each tuple we discover triggers at most m + 1 queries, which are mutually exclusive with each other, one can see that the worst-case query cost of MD-BASELINE for discovering the top-1 tuple is O(m · n).

Discovery of Top-h

We now discuss how to discover the top-h (h > 1) tuples satisfying a given query. To start, consider the discovery of the No. 2 tuple after finding the top-1 tuple t_1. What we can do is pick an arbitrary attribute, say A_1, and partition the search space into two parts: A_1 < t_1[A_1] and A_1 > t_1[A_1]. Then, we launch the top-1 discovery algorithm on each subspace. Note that during the discovery, we can reuse the historic query answers, e.g., by starting from the tuple(s) we have already retrieved in each subspace that have the smallest score. One can see that one of the two discovered top-1s must be the actual No. 2 tuple of the entire space.

Once the No. 2 tuple t_2 is discovered, in order to discover the No. 3 tuple, we only need to further split the subspace from which we just discovered t_2 (into two parts). For example, if we discovered t_2 from A_1 > t_1[A_1], then we can split that subspace again into t_1[A_1] < A_1 < t_2[A_1] and A_1 > t_2[A_1]. One can see that the No. 3 tuple must be either the top-1 of one of these two parts or the top-1 of A_1 < t_1[A_1], which we have already discovered; a runnable sketch of this bookkeeping appears below. As such, the discovery of each tuple in the top-h, say No. j, requires launching the top-1 discovery algorithm exactly twice, over the two newly split subspaces of the subspace from which the No. j−1 tuple was discovered. Thus, the worst-case query cost for MD-BASELINE to discover all top-h tuples is O(h · m · n).
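
The sketch below implements this bookkeeping with top-1 discovery stubbed by a local scan (a real deployment would invoke the query-based top-1 procedure); the data, scoring function, and splitting attribute are illustrative.

```python
# Top-h discovery via recursive subspace splitting on A1 (illustrative data).

TUPLES = [(3, 1), (1, 4), (2, 2), (5, 0)]
u = lambda t: t[0] + t[1]                   # user ranking, smaller is better

def top1(lo, hi):                           # stand-in for query-based top-1
    cand = [t for t in TUPLES if lo < t[0] < hi]
    return min(cand, key=u) if cand else None

def top_h(h):
    answers, spaces = [], [(float('-inf'), float('inf'))]
    tops = {s: top1(*s) for s in spaces}
    for _ in range(h):
        live = [s for s in spaces if tops[s] is not None]
        if not live:
            break                           # fewer than h tuples exist
        s = min(live, key=lambda s: u(tops[s]))
        answers.append(tops.pop(s))
        spaces.remove(s)
        t = answers[-1]
        for part in ((s[0], t[0]), (t[0], s[1])):  # split s on A1 at t[A1]
            spaces.append(part)
            tops[part] = top1(*part)        # two fresh top-1 discoveries
    return answers

print(top_h(3))   # [(3, 1), (2, 2), (5, 0)] -- ties broken arbitrarily
```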

4.3 MD-Binary

Problem of MD-Baseline

A main problem of MD-BASELINE is its poor performance when the system ranking function is negatively correlated with the user-desired ranking function. To understand why, consider how MD-BASELINE compares with the 1D-BASELINE algorithm discussed in § 3. Both algorithms are iterative in nature, and the objectives of each iteration are almost identical in both algorithms: once a tuple is discovered, find another tuple that outranks it according to the input ranking function. The difference, however, is that while it is easy to construct in 1D-BASELINE a query that covers only those tuples which outrank the discovered tuple (on the attribute under consideration), doing so in the MD case is impossible.

Figure 3: Illustration of problem with MD-Baseline

The reason for this difference is straightforward: observe from Figure 3 that, when there is more than one, say two, attributes, the subspace of tuples outranking a tuple t is roughly “triangular” in shape. On the other hand, only “rectangular” queries are supported by the database. This forces us to issue at least m queries to “cover” the subspace outranking t (without covering, and returning, t itself).

The problem with this “coverage” strategy in MD-BASELINE, however, is that the rectangular queries it issues may match many tuples that actually rank lower (i.e., have larger u(·)) than t according to the desired ranking function. For example, half of the space covered by q_1 in Figure 3 is occupied by tuples that rank lower than t. This means that, when the system ranking function is negatively correlated with our desired one, queries like q_1 in Figure 3 are most likely going to return tuples that rank lower than t. This outcome has two important ramifications for the efficiency of MD-BASELINE: first, it significantly slows down the process of iteratively finding a tuple that outranks the previous one; second, within each iteration, it slows down the pruning of the search space. For example, observe from Figure 3 that, after q_1 returns t′, the pruning effect on the space covered by q_1 is minimal, i.e., only the dark subspace in the top-right corner of q_1.

Design of MD-Binary

We propose two ideas in MD-Binary to address the two ramifications of MD-Baseline, respectively:

Direct Domination Detection: The intuition of this idea can be stated as follows: when a query such as q_1 returns a tuple t′ that ranks lower than t, we attempt to “test” whether this is indeed caused by the absence of higher-ranked tuples in q_1, or by the ill-conditioned nature of the system ranking function. As discussed above, there is no way to efficiently cover the entire subspace of tuples outranking t. Thus, what we do here is find the single query q_d which (1) is a subquery of q_1, (2) only covers the subspace outranking t, and (3) has the maximum volume among all queries that satisfy (1) and (2).

Figure 4: Design of MD-Binary: Example 1

For example, when q_1 in Figure 3 returns t′, we issue q_d (marked in green) in Figure 4, which covers roughly half of the “triangular” subspace underneath the rank-contour of t in q_1. As another example, if q_1 in Figure 2 returns a tuple with lower rank than t, then the max-volume query would be q_d in Figure 5, which covers almost all of the subspace outranking t in q_1. One can see from these examples that, if the returned t′ is an artifact of the ill-conditioned system ranking function while there are abundant tuples outranking t, then q_d is likely to return such a tuple and successfully push MD-BINARY to the next iteration. If, on the other hand, q_d returns empty, we use the next idea to further partition q_1, in order to determine whether there is any tuple in it that outranks t.

Virtual Tuple Pruning: We now address the second problem of MD-BASELINE, i.e., the lack of pruning power when the system ranking function is negatively correlated with the desired one. To this end, our idea is to prune the search space according to not the returned tuple, but a virtual tuple created for the purpose of maximizing the pruned subspace. Figure 4 illustrates an example: instead of partitioning q_1 with t′ as in Figure 3, which results in minimal pruning, we “create” a virtual tuple t_v which maximizes the reduction of the search space, as marked in gray in Figure 4.

Figure 5: Design of MD-Binary: Example 2

Figure 4 represents one possible outcome of virtual tuple pruning, when t_v happens to dominate the tuple t′ returned by q_1. The other possible outcome is depicted in Figure 5, where t_v does not dominate t′. In this case, if we still split as in Figure 4, then one of the subspaces would return t′ again, making the query answer useless. As such, we split into three pieces in this scenario, as shown in Figure 5.

The more general design of virtual tuple pruning for an m-D database is shown in Algorithm 5, which also incorporates the direct domination detection idea. Note from the algorithm that, depending on the values of t_v and t′ on the m attributes, the number of split subspaces can range from m, when t_v dominates t′, to 2m − 1, when t_v dominates t′ on all but one attribute.
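
To make the virtual-tuple idea concrete in 2D with a linear ranking function, one natural placement (our illustration; the paper's exact construction may differ) puts t_v on the rank-contour u = c so that the pruned rectangle anchored at the query's lower corner has maximum area:

```python
# One way to place the virtual tuple t_v for u(x, y) = w1*x + w2*y in 2D:
# maximize the pruned rectangle (x - l1)*(y - l2) subject to w1*x + w2*y = c,
# where (l1, l2) is the query's lower corner. Illustrative construction.

def virtual_tuple_2d(w1, w2, c, l1, l2):
    x = (c + w1 * l1 - w2 * l2) / (2 * w1)   # stationary point of the area
    y = (c - w1 * x) / w2                    # keep t_v on the contour
    return max(x, l1), max(y, l2)            # clip into the query region

print(virtual_tuple_2d(1.0, 1.0, 4.0, 0.0, 0.0))   # (2.0, 2.0)
```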

1:  apply 1D-RERANK on A_1 to find a starting tuple t; set threshold = u(t)
2:  add the queries in Equation 7 to the (empty) queue
3:  while the queue is not empty
4:      q′ = queue.delete()
5:      R = Top-k(q′); t′ = argmin_{s ∈ R} u(s)
6:      if u(t′) < threshold
7:          t = t′; threshold = u(t′); goto Line 2
8:      if q′ is valid: continue
9:      q_d = the max-volume subquery of q′ underneath the rank-contour of t
10:     R′ = Top-k(q_d)
11:     if R′ is not underflow
12:         t = argmin_{s ∈ R′} u(s); threshold = u(t); goto Line 2
13:     for each subspace of the virtual-tuple split of q′
14:         if t_v dominates t′: add the corresponding m queries to the queue
15:         else: add the corresponding 2m − 1 queries to the queue
16:  return t
Algorithm 5 MD-BINARY

One can see from the design that virtual tuple pruning does not affect the correctness of the algorithm: so long as u(t_v) ≥ u(t), every tuple in the pruned region (the region dominated by t_v) ranks below t, and the union of the split subspaces still covers all tuples in q_1 that may outrank t. On the other hand, the benefit of the idea can be readily observed from Figure 4: instead of yielding only a small reduction of the search space as in Figure 3, we can now prune half of the space in q_1 that ranks below t (in this 2D case, of course). The experimental results in § 6 demonstrate the effectiveness of virtual tuple pruning.

4.4 MD-RERANK

Just like the 1D case, the query cost of MD-BINARY may increase significantly when there is a dense cluster of tuples right above the rank-contour of the top-1 tuple. In this case, the splitting in MD-BINARY may have to continue many times before all tuples in the cluster are excluded from the search space. Once again, our solution to this problem is index-based reranking: as in the 1D case, we proactively record densely located tuples in an index once we encounter them, so that we do not incur a high query cost every time a query triggers a visit to the same dense region.

More specifically, MD-RERANK follows MD-BINARY until a remaining search space (1) is covered by an already crawled region in the index, or (2) has volume smaller than β·k·Vol(Ω)/(α·n), where Vol(Ω) is the volume of the entire data space and α and β are the same parameters as in 1D. In the former case, since the search space has already been crawled, we can directly reuse the crawled tuples. In the latter case, we follow the same procedure as in 1D-RERANK, i.e., we crawl the space and, if it indeed turns out to be dense (by containing at least β·k tuples), we include the crawled tuples in the index. Algorithm 6 depicts the pseudocode of MD-RERANK.
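
The trigger condition is a straightforward volume test; as before, α, β, and all numbers are illustrative assumptions carried over from the 1D discussion.

```python
# Volume test that triggers crawling/indexing in MD-RERANK (assumed
# parameterization; alpha, beta, and all numbers are illustrative).

from math import prod

def is_small_region(box, domain, n, k, alpha=8.0, beta=4.0):
    """box/domain: per-attribute (lo, hi) ranges of the query / data space."""
    vol = prod(hi - lo for lo, hi in box)
    total = prod(hi - lo for lo, hi in domain)
    return vol < beta * k * total / (alpha * n)

box = [(0.0, 0.01), (0.0, 0.01)]            # a tiny query box
dom = [(0.0, 1.0), (0.0, 1.0)]              # the whole data space
print(is_small_region(box, dom, n=10_000, k=10))   # True
```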

1:  follow MD-BINARY
2:  during the process, for each query q′:
3:      if Vol(q′) < β·k·Vol(Ω)/(α·n)
4:          R = q′ with the selection conditions of q removed
5:          if ORACLE(R) exists in the index
6:              return argmin_{t ∈ T(R): t matches q} u(t)
7:          t = MD-BASELINE(R); add t to temp
8:          while t does not satisfy q
9:              t_1 = MD-BASELINE(R AND A_1 < t[A_1])
10:             t_2 = MD-BASELINE(R AND A_1 > t[A_1])
11:             t = the higher-ranked of t_1 and t_2; add t_1 and t_2 to temp
12:         add temp to the index
Algorithm 6 MD-RERANK

5 Discussions

General Positioning Assumption: In previous discussions, we made the general positioning assumption, i.e., each tuple has a unique value on each attribute, for simplicity. We now consider the removal of this assumption. Note that its removal for MD-RERANK is extremely simple: the only tuple(s) that can be missed by MD-RERANK are those that have the exact same value on every single attribute. Thus, the only post-processing step required for removing the assumption is to form a fully specified query according to the No. h+1 tuple just discovered. If more than one, say r, tuples are returned, they become the No. h+1 to No. h+r top-ranked tuples. Removing the assumption for 1D-RERANK is slightly more complex. For example, if we are running it over attribute A_i, the removal of the general positioning assumption means the query SELECT * FROM D WHERE A_i = t[A_i] might overflow. In this case, our solution is to call the crawling algorithm of [15] to discover, one at a time, the tuples satisfying this query, as all of these tuples have the same rank for the purpose of 1D-RERANK.

Multiple/Known System Ranking Functions: Another interesting issue arising in practice is when the client-server database offers more than one ranking function, oftentimes allowing ranking over a specific attribute. For example, Amazon.com offers not only a proprietary “rank by popularity”, whose design is unknown, but also ranking by price, an attribute usually involved in user-specified ranking functions. An interesting implication of such a “public” ranking function is that it might boost the performance of the TA-1D algorithm discussed at the beginning of § 4. Specifically, since TA can simply use the public ranking function on the attribute instead of calling 1D-RERANK, it may have an even lower query cost than MD-RERANK when the user-desired ranking function aligns well with the system one.

Point Predicates: In this paper, we focused on cases where the attributes involved in the ranking function are numeric attributes that support range queries. While this is often the case in practice (as evidenced by real-world websites such as the aforementioned Blue Nile, where all attributes such as price, carat, clarity, etc., are available as range predicates), there are also cases where a ranking attribute with only a small number of domain values can only be specified as a point predicate (i.e., of the form A_i = x) in the database search interface. For 1D-RERANK, this is often a blessing, because it simplifies the task to querying the attribute values in preference order (plus the crawling-based provision discussed above for the general positioning assumption). On the other hand, it makes MD-RERANK much more costly, because a conjunctive query now covers a much smaller space than in the range case. Thus, an intuition here is to prefer the TA-1D algorithm over MD-RERANK when a large number of attributes are searchable as point predicates only. Due to space limitations, we leave a comprehensive study of this issue to future work.

6 Experimental Evaluation

6.1 Experimental Setup

In this section, we present our experimental results over a number of real-world datasets, both offline and online. We started with the offline case by testing over a real-world dataset we had already collected. Specifically, we constructed a top-k web search interface over it, and then executed our algorithms through the interface. This offline setting enabled us to not only verify the correctness of our algorithms, but also investigate how the performance of query reranking changes with various factors such as the database size, the system ranking function, the settings of the system search interface, etc. We followed the offline tests with live online experiments over two real-world web databases: the largest online diamond retailer and a popular auto search website. In all these experiments, we applied the extensions described in § 5 to resolve the general positioning assumption, which may not hold in practice.

Offline Dataset: We used the flight on-time dataset published by the US Department of Transportation (DOT)2. A wide range of third-party websites use this dataset to identify the on-time performance of flights, routes, airports, airlines, etc. It consists of 457,013 flight records of 14 US carriers during the month of May 2015. It has 28 attributes, out of which we selected the following 8 attributes for ranking: Dep-Delay, Taxi-Out, Taxi-In, Arr-Delay-New, CRS-Elapsed-Time, Actual-Elapsed-Time, Air-Time, and Distance. For the purpose of the experiments, we considered two system ranking functions: 0.3·AIR-TIME + TAXI-IN (SR1) and −0.1·DISTANCE − DEP-DELAY (SR2). In general, SR1 has a positive correlation with the user-specified ranking functions we tested, while SR2 has a negative one. We set SR1 as the default ranking function in the experiments. The value of k offered by the database (system-k) is set to 10 by default.
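For concreteness, the two system ranking functions can be written as simple scoring functions (here in Python); the convention that a lower score ranks higher is our assumption for illustration.

def sr1(t):
    # SR1: positively correlated with most of the tested user functions
    return 0.3 * t["Air-Time"] + t["Taxi-In"]

def sr2(t):
    # SR2: negatively correlated with most of the tested user functions
    return -0.1 * t["Distance"] - t["Dep-Delay"]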

Online Experiments: We conducted live experiments over two real-world websites: Blue Nile (BN) and Yahoo! Autos (YA).

Blue Nile3 is the largest online diamond retailer in the world. At the time of our experiments, its catalog had 117,641 diamonds. We considered Carat, Depth, LengthWidthRatio, Price, and Table as the ranking attributes, and Clarity, Color, Cut, Fluorescence, Polish, Shape, and Symmetry for filtering. The domains of the ranking attributes are [0.23, 22.74], [0.45, 0.86], [0.49, 0.89], [$220, $4,506,938], and [0.75, 2.75], respectively. BN supports multiple ranking functions: ordering by each attribute individually, as well as by the derived attribute price-per-carat.

Yahoo! Autos is a popular website for buying used cars4. We considered the 13,169 cars listed for sale within 30 miles of New York City. We treated Price, Mileage, and Year as the ranking attributes, and BodyStyle, DriveType, Transmission, Name, and Model as the filtering attributes. The cars had prices between $0 and $50,000, mileage between 0 and 300,000, and were manufactured between 1993 and 2016. The default ranking function is “distance from a predefined location” (which is not monotonic). Additionally, the website supports ranking by each of the numerical attributes individually.

Performance Measures: As explained in § 2, our algorithms always return the precise query answer. After verifying the correctness in all offline experiments, we turn our attention to the key performance measure, efficiency, which is measured by the number of queries issued to the web database.
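Since the number of issued queries is the sole efficiency measure, the experimental harness only needs a thin counting wrapper around the search interface; the following sketch, with an assumed inner interface, illustrates the instrumentation.

class CountingInterface:
    # Wraps an assumed web search interface and counts every issued query.
    def __init__(self, inner):
        self.inner = inner
        self.cost = 0

    def query(self, predicate):
        self.cost += 1   # each call to the underlying database counts once
        return self.inner.query(predicate)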

Figure 6: 1D: Impact of Database Size (SR1)
Figure 7: 1D: Impact of Database Size (SR2)
Figure 8: 1D: Impact of System-k
Figure 9: 1D: Impact of ε and s

6.2 1D Experiments

Constructing Workload of User Preference Queries: We tested a diverse set of user-specified queries of the form SELECT * FROM D WHERE ⟨selection condition⟩ ORDER BY ⟨ranking attribute⟩. Specifically, we randomly selected different subsets of filtering attributes for the WHERE clause, while choosing the (1D) ranking attribute uniformly at random (see the sketch below). This approach has a number of appealing properties. First, it covers diverse cases that include ideal, worst-case, and typical scenarios. Second, since 1D-RERANK uses on-the-fly indexing to amortize the cost across different user-issued queries, our diverse query workload simulates a real-world scenario where the service is used by multiple users. For each experimental configuration, we executed each of the queries and report the average query cost. Specifically, for the DOT dataset, we constructed 32 queries, of which 25% do not have any filtering condition. For BN, we constructed a set of 20 queries, of which 4 have no filtering conditions; these values are 15 and 2 for YA, respectively.
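A minimal sketch of this workload generator follows; the attribute lists, the per-attribute value sampler, and the cap on the number of filters are illustrative assumptions.

import random

def make_1d_query(filter_attrs, ranking_attrs, sample_value, max_filters=3):
    # Pick 0..max_filters filtering attributes (0 yields a query with no
    # WHERE condition) and one ranking attribute, uniformly at random.
    chosen = random.sample(filter_attrs, random.randint(0, max_filters))
    where = {a: sample_value(a) for a in chosen}
    order_by = random.choice(ranking_attrs)
    return where, order_by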

Experiments over the Real-world Dataset

Impact of Database Size and System Ranking Function: We started by testing the impact of the database size on our algorithms for the two system ranking functions SR1 and SR2. To test databases of varying sizes, we drew 10 simple random samples of a given size from the DOT dataset and measured the average query cost for the entire workload over these 10 smaller databases. Figures 6 and 7 show the average query cost for retrieving the top-1 tuple under SR1 and SR2, respectively. As expected, the database size has a negligible impact on the query cost. Also note from the figures that, consistent with our theoretical analysis, 1D-RERANK significantly outperformed both 1D-BASELINE and 1D-BINARY. One can also note that the change of system ranking function has a major impact on the performance comparison between 1D-BASELINE and 1D-BINARY, yet a negligible impact on 1D-RERANK, again consistent with our theoretical discussions.

Figure 10: 1D: Impact of Query Order in 1D-RERANK
Figure 11: 1D: Top-k Query Cost (BN)
Figure 12: 1D: Top-k Query Cost (YA)
Figure 13: MD: Impact of Database Size (SR1)
Figure 14: MD: Impact of Database Size (SR2)
Figure 15: MD: Impact of System-k
Figure 16: MD: Top-k Query Cost (BN)
Figure 17: MD: Top-k Query Cost (YA)

Impact of Value of k: Figure 8 shows the average (accumulative) query cost for retrieving the top-1 to top-10 tuples when system-k varies from 1 to 10. There are two key observations from the figure: First, our query cost increases (about) linearly with the number of desired top answers, demonstrating scalability to a large desired answer size. Second, the query cost, as expected, decreases when the system offers a larger k.

Impact of 1D-RERANK Parameters ε and s: Recall from § 2 that the performance of 1D-RERANK can be parameterized by ε and s. We conducted two experiments to empirically verify their impact. In one experiment, we fixed s and varied ε; in the other, we fixed ε and varied s. Figure 9 shows the average query cost for both settings. As our theoretical results suggest, the recommended settings of ε and s resulted in (almost) optimal performance. One can see that tuning these parameters further in either direction does not have much effect on the query cost, yet can significantly increase the index size.

Impact of Query Order on 1D-RERANK: Recall that 1D-RERANK constructs its index on the fly. As such, when queries are issued in different orders, the index being maintained may differ. To test whether the order of user queries has a major effect on the performance of 1D-RERANK, we ran an experiment using SR1 with three query-issuing orders: (1) from low to high selectivity (i.e., from more general to narrower queries), (2) from high to low selectivity, and (3) random order. Figure 10 shows that the query issuance order has a negligible effect on the query cost of 1D-RERANK.

Online Experiments

We also conducted two live experiments, over Blue Nile and Yahoo! Autos, aiming to retrieve the top-k tuples for each of the user queries in the workload. The default system-k values for BN and YA are 30 and 15, respectively, with the system ranking function being the default for each website, i.e., descending price per carat for BN and distance from the predefined location for YA.

Figures 11 and 12 show the average query cost for retrieving the top-k tuples. As expected, 1D-RERANK significantly outperforms the other algorithms on both websites. For BN, while 1D-BINARY performed well in the beginning, it required a higher query cost for larger values of k. That is because the binary-search approach keeps dividing the search region in half until the issued query underflows; it is therefore likely to end up with an underflowing query that contains fewer tuples, leading to less saving in query cost. For YA, note that 1D-BINARY does not benefit much from these savings and is hence outperformed by 1D-BASELINE.

6.3 MD Experiments

In this subsection, we compare the performance of MD-RERANK against three baseline methods: the aforementioned “TA over 1D-RERANK”, as well as MD-BASELINE and MD-BINARY. Once again, we tested both offline and online settings.

Constructing Workload of User Preference Queries: The workload is constructed using a process similar to the one described in § 6.2. However, the ranking functions are constructed by selecting a subset of the ranking attributes and choosing a weight between 0 and 1 for each of them (see the sketch below). The workload consists of 32, 12, and 10 queries for DOT, BN, and YA, respectively, of which 8, 3, and 2 have no filtering conditions.
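A minimal sketch of this MD workload construction, under the same illustrative assumptions as the 1D generator above:

import random

def make_md_ranking(ranking_attrs):
    # Choose a nonempty random subset of ranking attributes and a weight
    # between 0 and 1 for each, yielding a linear user ranking function.
    subset = random.sample(ranking_attrs, random.randint(1, len(ranking_attrs)))
    weights = {a: random.uniform(0, 1) for a in subset}
    return lambda t: sum(w * t[a] for a, w in weights.items())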

Experiments over the Real-world Dataset

Impact of Database Size and System Ranking Function: The experimental setup was similar to the 1D experiments in § 6.2. We evaluated our algorithms for different database sizes and the system ranking functions SR1 and SR2. Figures 13 and 14 show the results for SR1 and SR2, respectively. In both cases, MD-RERANK significantly outperformed all three competing baselines. One may notice an increase in the query cost of the algorithms as the database size increases in Figure 14, and a decrease in Figure 13. That is because when the system and user-specified ranking functions are anti-correlated, the more tuples the database has, the more queries are required to find the top tuples for the user-specified ranking function (since more tuples are ranked above them by SR2). The reverse holds for SR1.

Impact of System-k: We then varied system-k, the number of tuples returned by the web database, and measured the average query cost to retrieve the top-k tuples for the query workload. Figure 15 shows the results. As expected, higher values of system-k required a lower query cost to obtain the top-k tuples. When system-k = 1, our algorithms were not able to exploit the savings from valid (non-overflowing) queries, resulting in a substantial query cost.

Online Experiments

We applied MD-RERANK, as well as TA over 1D-RERANK, to retrieve the top-k tuples for each query in the workload. Figure 16 shows the average query cost for the BN experiment. As shown in the figure, MD-RERANK outperformed TA significantly. The results for the YA experiment are shown in Figure 17. The substantial difference in query cost between the algorithms can be explained by the negative correlation between the ranking attributes in YA queries (for example, cars with higher mileage are usually cheaper). Hence, the TA algorithm had to issue many GetNext operations before finding the top tuples.

7 Related Work

Top-k discovery methods can be divided into three main categories: (sorted/random) access-based methods, layering-based approaches, and view-based techniques. The first category takes advantage of data access methods. For example, NRA [9] assumes the existence of one sorted list of tuples per attribute and finds the top-k by exploring these lists alone, while TA [9] applies both random and sorted access. More advanced algorithms in this category include CA [9], Upper/Pick [2], and the method of [13]. The next category comprises algorithms, such as ONION [4] and [19], that pre-process the data and index the layers of extremum tuples guaranteed to contain the top-k. View-based methods such as PREFER [10] and LPTA [6] employ materialized views to speed up top-k discovery. While prior work focused on minimizing the storage overhead of indices/materialized views and the computational overhead of processing top-k queries, our focus is on minimizing the number of queries issued to the underlying database. This fundamentally different data access model also leads to a different cost model. For example, much prior work, such as [9] and [3], assumes a separate cost for accessing each attribute and/or evaluating each predicate in the top-k query, while in our problem all attributes of a tuple are returned at once.

Hidden Databases: Most prior work on hidden databases relates to sampling, crawling, and aggregate estimation. Works such as [7, 18] propose efficient algorithms for collecting unbiased, low-variance random samples from a given hidden database, and [8, 11] provide unbiased aggregate estimators. While [15, 14, 12] aim at crawling the entire hidden database, [1] crawls only the skyline (the set of maxima).

Top-k Queries over Hidden Databases: To the best of our knowledge, this is the first paper on reranking the query results of a hidden database. The only prior work on top-k processing in hidden databases is [16]. Assuming full knowledge of the system ranking function and attribute domains, its goal is to go beyond the top-k limitation of the database interface by partitioning the query space.

8 Final Remarks

In this paper, we introduced the novel problem of query reranking: a third-party service that takes a client-server database with a proprietary ranking function and enables query processing according to any user-specified ranking function. To enable query reranking while minimizing the number of queries issued to the underlying database, we developed 1D-RERANK and MD-RERANK for user-specified ranking functions that involve a single attribute and an arbitrary set of attributes, respectively. Theoretical analysis and extensive experimental results on real-world databases, in both offline and online settings, demonstrate the effectiveness of our techniques and their superiority over the baseline solutions.

Footnotes

  1. Note that any constant factor here (besides 2) works too; in general, the range can be scaled by any constant factor greater than 1.
  2. downloaded from http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time
  3. http://www.bluenile.com/diamond-search
  4. https://autos.yahoo.com/used-cars/

References

  1. A. Asudeh, S. Thirumuruganathan, N. Zhang, and G. Das. Discovering the skyline of web databases. VLDB, 2016.
  2. N. Bruno, S. Chaudhuri, and L. Gravano. Top-k selection queries over relational databases: Mapping strategies and performance evaluation. TODS, 2002.
  3. K. C.-C. Chang and S.-w. Hwang. Minimal probing: supporting expensive predicates for top-k queries. In SIGMOD. ACM, 2002.
  4. Y.-C. Chang, L. Bergman, V. Castelli, C.-S. Li, M.-L. Lo, and J. R. Smith. The onion technique: indexing for linear optimization queries. In SIGMOD, 2000.
  5. J. Chomicki. Preference formulas in relational queries. TODS, 2003.
  6. G. Das, D. Gunopulos, N. Koudas, and D. Tsirogiannis. Answering top-k queries using views. In VLDB, 2006.
  7. A. Dasgupta, G. Das, and H. Mannila. A random walk approach to sampling hidden databases. In SIGMOD, 2007.
  8. A. Dasgupta, X. Jin, B. Jewell, N. Zhang, and G. Das. Unbiased estimation of size and other aggregates over hidden web databases. In SIGMOD, 2010.
  9. R. Fagin, A. Lotem, and M. Naor. Optimal aggregation algorithms for middleware. Journal of Computer and System Sciences, 66(4):614–656, 2003.
  10. V. Hristidis and Y. Papakonstantinou. Algorithms and applications for answering ranked queries using ranked views. VLDB Journal, 2004.
  11. W. Liu, S. Thirumuruganathan, N. Zhang, and G. Das. Aggregate estimation over dynamic hidden web databases. VLDB, 2014.
  12. J. Madhavan, D. Ko, Ł. Kot, V. Ganapathy, A. Rasmussen, and A. Halevy. Google’s deep web crawl. VLDB, 2008.
  13. A. Marian, N. Bruno, and L. Gravano. Evaluating top-k queries over web-accessible databases. ACM Trans. Database Syst., 29(2), 2004.
  14. S. Raghavan and H. Garcia-Molina. Crawling the hidden web. VLDB, 2000.
  15. C. Sheng, N. Zhang, Y. Tao, and X. Jin. Optimal algorithms for crawling a hidden database in the web. VLDB, 2012.
  16. S. Thirumuruganathan, N. Zhang, and G. Das. Breaking the top-k barrier of hidden web databases. In ICDE. IEEE, 2013.
  17. S. Thirumuruganathan, N. Zhang, and G. Das. Rank discovery from web databases. VLDB, 2013.
  18. F. Wang and G. Agrawal. Effective and efficient sampling methods for deep web aggregation queries. In EDBT, 2011.
  19. D. Xin, C. Chen, and J. Han. Towards robust indexing for ranked queries. In VLDB, 2006.
  20. P. B. Yale. Geometry and symmetry. Courier Corporation, 1968.