Towards Tight Bounds for the Streaming Set Cover Problem
We consider the classic Set Cover problem in the data stream model. For n elements and m sets we give a O(1/δ)-pass algorithm with strongly sub-linear Õ(mn^δ) space and logarithmic approximation factor.
Finally, we show that any randomized one-pass algorithm that distinguishes between covers of size 2 and 3 must use a linear (i.e., Ω(mn)) amount of space. This is the first result showing that a randomized, approximate algorithm cannot achieve a space bound that is sublinear in the input size.
This indicates that using multiple passes might be necessary in order to achieve sub-linear space bounds for this problem while guaranteeing small approximation factors.
The Set Cover problem is a classic combinatorial optimization task. Given a ground set U of n elements and a family of m sets F = {S_1, …, S_m}, where each S_i ⊆ U, the goal is to select a subfamily I ⊆ F such that I covers U, i.e., the union of the sets in I equals U, and the number of sets in I is as small as possible. Set Cover is a well-studied problem with applications in many areas, including operations research [GW97], information retrieval and data mining [SG09], web host analysis [CKT10], and many others.
Although the problem of finding an optimal solution is NP-complete, a natural greedy algorithm, which iteratively picks the "best" remaining set (the one covering the most yet-uncovered elements), is widely used. The algorithm often finds solutions that are very close to optimal. Unfortunately, due to its sequential nature, this algorithm does not scale very well to massive data sets (e.g., see Cormode et al. [CKW10] for an experimental evaluation). This difficulty has motivated a considerable research effort whose goal was to design algorithms that are capable of handling large data efficiently on modern architectures. Of particular interest are data stream algorithms, which compute the solution using only a small number of sequential passes over the data and limited memory. In the streaming Set Cover problem [SG09], the set of elements is stored in memory in advance; the sets are stored consecutively in a read-only repository, and an algorithm can access the sets only by performing sequential scans of the repository. However, the amount of read-write memory available to the algorithm is limited, and is smaller than the input size (which could be as large as mn). The objective is to design an efficient approximation algorithm for the Set Cover problem that performs few passes over the data and uses as little memory as possible.
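The greedy algorithm the paragraph above refers to is easy to state in code. The following is a minimal offline implementation (function name and instance are ours, for illustration only):

```python
def greedy_set_cover(universe, sets):
    """Offline greedy: repeatedly pick the set covering the most
    still-uncovered elements. Gives the classic O(log n) approximation."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        # index of the set with the largest intersection with `uncovered`
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("instance has no cover")
        cover.append(best)
        uncovered -= sets[best]
    return cover
```

Note that each iteration scans all sets, which is exactly the sequential behavior that makes the algorithm hard to run on massive streamed inputs.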
The last few years have witnessed a rapid development of new streaming algorithms for the Set Cover problem, in both the theory and applied communities; see [SG09, CKW10, KMVV13, ER14, DIMV14, CW16]. Figure 1.1 presents the approximation and space bounds achieved by those algorithms, as well as the lower bounds.
[Figure 1.1: table of upper and lower bounds for streaming Set Cover; rows include Geometric Set Cover (Theorem 4.22) and Sparse Set Cover (Theorem 6.38), both randomized (R).]
Related work. The semi-streaming Set Cover problem was first studied by Saha and Getoor [SG09]. Their result for the Max k-Cover problem implies a O(log n)-pass O(log n)-approximation algorithm for the Set Cover problem that uses Õ(n) space. Adapting the standard greedy algorithm for Set Cover with a thresholding technique also leads to a O(log n)-pass O(log n)-approximation using Õ(n) space. In the Õ(n)-space regime, Emek and Rosén studied one-pass streaming algorithms for the Set Cover problem [ER14] and gave a deterministic greedy-based O(√n)-approximation for the problem. Moreover, they proved that their algorithm is tight, even for randomized algorithms. The lower/upper bound results of [ER14] apply also to a generalization of the Set Cover problem, the ε-Partial Set Cover problem, in which the goal is to cover a (1 − ε) fraction of the elements and the size of the solution is compared to the size of an optimal cover of the full Set Cover instance. Very recently, Chakrabarti and Wirth extended the result of [ER14] and gave a trade-off streaming algorithm for the Set Cover problem in multiple passes [CW16]. They gave a deterministic algorithm with p passes over the data stream that returns a O(n^(1/(p+1)))-approximate solution of the Set Cover problem in Õ(n) space. Moreover, they proved that a substantially better approximation factor in p passes using Õ(n) space is not achievable even by randomized protocols, which shows that their algorithm is tight up to lower-order terms. Their result also works for the ε-Partial Set Cover problem.
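The thresholding technique mentioned above can be sketched as follows. This is our own illustrative rendering, not the exact algorithm of the cited works: in pass j, any set covering at least n/2^j yet-uncovered elements is admitted, so each admitted set covers at least half as much as the best current set, and O(log n) passes suffice.

```python
def threshold_greedy_cover(universe, stream_of_sets):
    """Multi-pass streaming greedy sketch: pass j admits any set covering
    >= n / 2^j still-uncovered elements. O(log n) passes, O(log n)
    approximation; only the uncovered-element set is kept in memory."""
    uncovered = set(universe)
    cover = []
    threshold = len(uncovered)
    while uncovered and threshold >= 1:
        for i, s in enumerate(stream_of_sets):  # one sequential pass
            if len(s & uncovered) >= threshold:
                cover.append(i)
                uncovered -= s
        threshold /= 2
    return cover
```

Unlike the exact greedy algorithm, a pass never needs random access to the repository, which is what makes the variant stream-friendly.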
In a different regime, first studied by Demaine et al., the goal is to design "low" approximation algorithms (depending on the computational model, the factor could be O(1) or O(log n)) in the smallest possible space [DIMV14]. They proved that any constant-pass deterministic algorithm with such a low approximation factor requires Ω(mn) space. This shows that, unlike in the Õ(n)-space regime, to obtain a sublinear-space "low" approximation streaming algorithm for the Set Cover problem in a constant number of passes, using randomness is necessary. Moreover, [DIMV14] presented a O(ρ·4^(1/δ))-approximation algorithm that makes O(4^(1/δ)) passes and uses Õ(mn^δ) memory space.
The Set Cover problem is not polynomially solvable even in restricted instances with points in the plane as elements and geometric objects (either all disks, all axis-parallel rectangles, or all fat triangles) in the plane as sets [FG88, FPT81, HQ15]. As a result, there has been a large body of work on designing approximation algorithms for geometric Set Cover problems. See for example [MRR14, AP14, AES10, CV07] and references therein.
1.1 Our results
Despite the progress outlined above, some basic questions remained open. In particular:
Is it possible to design a single-pass streaming algorithm with a "low" approximation factor that uses sublinear (i.e., o(mn)) space?
If such single pass algorithms are not possible, what are the achievable trade-offs between the number of passes and space usage?
Are there special instances of the problem for which more efficient algorithms can be designed?
In this paper, we make significant progress on each of these questions. Our upper and lower bounds are depicted in Figure 1.1.
On the algorithmic side, we give a O(1/δ)-pass algorithm with an O(ρ/δ) approximation factor, where ρ is the approximation guarantee of the offline solver used as a subroutine. This yields a significant improvement over the earlier algorithm of Demaine et al. [DIMV14], which used an exponentially larger number of passes. The trade-off offered by our algorithm matches the lower bound of Nisan [Nis02] that holds at the endpoint of the trade-off curve, i.e., for δ = Θ(1/log n), up to poly-logarithmic factors.
Our algorithm exhibits a natural tradeoff between the number of passes and space, which resembles tradeoffs achieved for other problems [GM07, GM08, GO13]. It is thus natural to conjecture that this tradeoff might be tight, at least for "low enough" approximation factors. We present the first step in this direction by showing a lower bound for the case when the approximation factor is equal to 1, i.e., the goal is to compute the optimal set cover. In particular, by an information-theoretic lower bound, we show that any streaming algorithm that computes an optimal set cover using O(1/δ) passes must use Ω̃(mn^δ) space (even assuming exponential computational power), in a suitable regime of m and n. Furthermore, we show that a stronger lower bound holds if all the input sets are sparse, that is, if their cardinality is small compared to n.
We also consider the problem in the geometric setting, in which the elements are points in the plane and the sets are either discs, axis-parallel rectangles, or fat triangles. We show that a slightly modified version of our algorithm finds an O(ρ)-approximation in a constant number of passes using space near-linear in n.
Finally, we show that any randomized one-pass algorithm that distinguishes between covers of size 2 and 3 must use a linear (i.e., Ω(mn)) amount of space. This is the first result showing that a randomized, approximate algorithm cannot achieve a sub-linear space bound.
Recently, Assadi et al. [AKL16] generalized this lower bound to any approximation ratio α. More precisely, they showed that approximating Set Cover within any factor α in a single pass requires Ω̃(mn/α) space.
Our techniques: Basic idea. Our algorithm is based on the idea that whenever a large enough set is encountered, we can immediately add it to the cover. Specifically, we guess (up to a factor of two) the size k of the optimal cover. Thus, a set is "large" if it covers roughly a 1/k fraction of the remaining elements. A small set, on the other hand, can cover only a "few" elements, and we can store (approximately) which elements it covers by storing (in memory) its projection onto an appropriate random sample. At the end of the pass, we have (in memory) the projections of the "small" sets onto the random sample, and we compute the optimal set cover for this projected instance using an offline solver. By carefully choosing the size of the random sample, this guarantees that only a small fraction of the set system remains uncovered. The algorithm then makes an additional pass to find the residual set system (i.e., the yet-uncovered elements), making two passes in each iteration, and continuing to the next iteration.
Thus, one can think about the algorithm as being based on a simple iterative "dimensionality reduction" approach. Specifically, in two passes over the data, the algorithm selects a "small" number of sets that cover all but a 1/n^δ fraction of the uncovered elements, while using only Õ(mn^δ) space. By performing the reduction step O(1/δ) times, we obtain a complete cover. The dimensionality reduction step is implemented by computing a small cover for a random subset of the elements, which also covers the vast majority of the elements in the ground set. This ensures that the remaining sets, when restricted to the random subset of the elements, occupy only Õ(mn^δ) space. As a result, the procedure avoids the complex set of recursive calls used by Demaine et al. [DIMV14], which leads to a simpler and more efficient algorithm.
Geometric results. Further using techniques and results from computational geometry, we show how to modify our algorithm so that it achieves almost optimal bounds for the Set Cover problem on geometric instances. In particular, we show that it gives a constant-pass O(ρ)-approximation algorithm using near-linear space when the elements are points in the plane and the sets are either discs, axis-parallel rectangles, or fat triangles. In particular, we use the following surprising property of the set systems that arise out of points and disks: the number of distinct sets is nearly linear as long as one considers only sets that contain "a few" points.
More surprisingly, this property extends, with a twist, to certain geometric range spaces that might have a quadratic number of shallow ranges. Indeed, it is easy to show an example of n points in the plane where there are Ω(n²) distinct rectangles, each one containing exactly two points; see Figure 1.1. However, one can "split" such ranges into a small number of canonical sets, such that the number of shallow sets in the canonical set system is near-linear. This enables us to store the small canonical sets encountered during the scan explicitly in memory, and still use only near-linear space.
We note that the idea of splitting ranges into small canonical ranges is an old idea in orthogonal range searching. It was used by Aronov et al. [AES10] for computing small ε-nets for these range spaces. The idea, in the form we use it, was further formalized by Ene et al. [EHR12].
Lower bounds. The lower bounds for multi-pass algorithms for the Set Cover problem are obtained via a careful reduction from Intersection Set Chasing. The latter is a communication complexity problem where players need to solve a certain "set-disjointness-like" problem in a bounded number of rounds. A recent paper [GO13] showed that this problem requires n^(1+Ω(1/p))/p^O(1) bits of communication for p rounds. This yields our desired trade-off of Ω̃(mn^δ) space in O(1/δ) passes for exact protocols for Set Cover in the communication model, and hence in the streaming model. Furthermore, we show a stronger lower bound on the memory space of sparse instances of Set Cover, in which all input sets have cardinality at most s. By a reduction from a variant of Equal Pointer Chasing, which maps the problem to a sparse instance of Set Cover, we show that in order to have an exact streaming algorithm for s-Sparse Set Cover with small space, a larger number of passes is necessary; more precisely, we prove a space lower bound for any exact multi-pass randomized algorithm for s-Sparse Set Cover in a suitable range of the parameters.
Our single-pass lower bound proceeds by showing a lower bound for a one-way communication complexity problem in which one party (Alice) has a collection of sets, and the other party (Bob) needs to determine whether the complement of his set is covered by one of Alice's sets. We show that if Alice's sets are chosen at random, then Bob can decode Alice's input by employing a small collection of "query" sets. This implies that the amount of communication needed to solve the problem is linear in the description size of Alice's sets, which is Θ(mn) bits.
iterSetCover:
    for k = 1, 2, 4, 8, … do in parallel:   // Try in parallel all possible (2-approx) sizes of optimal cover
        repeat Θ(1/δ) times:                // Repeat for Θ(1/δ) iterations
            Let R be a sample of the uncovered elements of appropriate size
            for each set S in the stream:   // By doing one pass
                if S covers a large fraction of R then   // Size Test
                    add S to sol; remove the elements of S from R
                else
                    keep S ∩ R              // Store the set explicitly in memory
            add the sets returned by algOfflineSC on the stored projections to sol
            update the set of uncovered elements   // By doing an additional pass over data
    return best sol computed in all parallel executions.
2 Streaming Algorithm for Set Cover
In this section, we design an efficient streaming algorithm for the Set Cover problem that matches the lower bound results we already know about the problem. In the Set Cover problem, for a given set system (U, F), the goal is to find a subfamily I ⊆ F such that I covers U and its cardinality is minimum. In the following, we sketch the iterSetCover algorithm (see also Figure 1.3).
In the iterSetCover algorithm, we have access to the algOfflineSC subroutine that solves the given Set Cover instance offline (using linear space) and returns a ρ-approximate solution, where ρ could be anywhere between 1 and Θ(log n) depending on the computational model one assumes. Under exponential computational power, we can achieve the optimal cover of the given instance of Set Cover (ρ = 1); however, assuming P ≠ NP, ρ cannot be better than (1 − o(1)) ln n [Fei98, RS97, AMS06, Mos12, DS14] given polynomial computational power.
Let n be the initial number of elements in the given ground set. The iterSetCover algorithm needs to guess (up to a factor of two) the size k of the optimal cover. To this end, the algorithm tries, in parallel, all powers of two between 1 and m. This step will only increase the memory space requirement by a factor of O(log m).
Consider the run of the iterSetCover algorithm in which the guess is correct (i.e., |OPT| ≤ k ≤ 2|OPT|, where OPT is an optimal solution). The idea is to go through Θ(1/δ) iterations such that each iteration only makes two passes, and at the end of each iteration the number of uncovered elements is reduced by a factor of n^δ. Moreover, the algorithm is allowed to use Õ(mn^δ) space.
In each iteration, the algorithm starts with the current ground set of uncovered elements and copies it to a leftover set L. Let R be a large enough uniform sample of the elements of L. In a single pass, using R, we estimate the size of every set; a set detected to be large is added to the solution immediately (thus avoiding the need to store it in memory). Formally, if a set S covers a sufficiently large fraction of the yet-uncovered elements of R, then it is a heavy set, and the algorithm immediately adds it to the output cover. Otherwise, if a set S is small, i.e., it covers only few of the uncovered elements of R, the algorithm stores the set in memory. Fortunately, it is enough to store its projection over the sampled elements explicitly (i.e., S ∩ R); this requires remembering only the indices of the elements of S ∩ R.
In order to show that a solution of the Set Cover problem over the sampled elements is a good cover of the initial Set Cover instance, we apply the relative (p, ε)-approximation sampling result of [HS11] (see Definition 2.4); for this, it is enough for R to be of size Õ(k·n^δ). Using relative (p, ε)-approximation sampling, we show that after two passes the number of uncovered elements is reduced by a factor of n^δ. Note that the relative (p, ε)-approximation sampling improves over the Element Sampling technique used in [DIMV14] with respect to the number of passes.
Since in each iteration we pick O(ρk) sets and the number of uncovered elements decreases by a factor of n^δ, after Θ(1/δ) iterations the algorithm picks O(ρk/δ) sets in total and covers all elements. Moreover, the memory space of the whole algorithm is Õ(mn^δ) (see Lemma 2.2).
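A compact single-machine simulation of one parallel run of iterSetCover (for one guess k) may clarify the structure of an iteration. The sample size, the size-test threshold, and the greedy stand-in for algOfflineSC below are our own illustrative choices, not the paper's exact parameters:

```python
import random

def iter_set_cover(universe, sets, k, delta=0.5, seed=0):
    """Simulation of one parallel run of iterSetCover for a guessed
    optimal-cover size k; constants are illustrative only."""
    rng = random.Random(seed)
    n = len(universe)
    uncovered = set(universe)
    cover = []
    rounds = 0
    while uncovered and rounds < 4 / delta:
        rounds += 1
        sample_size = min(len(uncovered), int(4 * k * n**delta) + 20)
        sample = set(rng.sample(sorted(uncovered), sample_size))
        projections = {}
        for i, s in enumerate(sets):           # "first pass" over the stream
            proj = s & sample
            if proj and len(proj) * k >= len(sample):  # Size Test: heavy set
                cover.append(i)
                sample -= proj
            else:
                projections[i] = proj          # store projection in memory
        # offline-solve the projected instance (greedy stand-in for algOfflineSC)
        while sample and projections:
            best = max(projections, key=lambda i: len(projections[i] & sample))
            if not projections[best] & sample:
                break
            cover.append(best)
            sample -= projections[best]
        for i in cover:                        # "second pass": residual system
            uncovered -= sets[i]
    return cover, uncovered
```

The two set-difference updates correspond to the two passes of an iteration: the size test during the scan, and recomputing the uncovered elements afterwards.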
In the rest of this section, we prove that the iterSetCover algorithm, with high probability, returns a O(ρ/δ)-approximate solution of Set Cover in O(1/δ) passes using Õ(mn^δ) memory space.
The number of passes the iterSetCover algorithm makes is O(1/δ).
In each of the Θ(1/δ) iterations of the iterSetCover algorithm, the algorithm makes two passes. In the first pass, based on the set of sampled elements R, it decides whether to pick a set or keep its projection over R (i.e., S ∩ R) in memory. Then the algorithm calls algOfflineSC, which does not require any passes over the data. The second pass is for computing the set of uncovered elements at the end of the iteration. We need this pass because we only know the projections of the sets we picked in the current iteration over R, and not over the original set of uncovered elements. Thus, in total we make O(1/δ) passes. Also note that the runs for the different guesses of the value of k are executed in parallel, and hence the total number of passes remains O(1/δ).
The memory space used by the iterSetCover algorithm is Õ(mn^δ).
In each iteration of the algorithm, it picks during the first pass at most O(ρk) sets, which requires Õ(ρk) memory. Moreover, in the first pass we keep the projections of the sets whose projection over the uncovered sampled elements is small. Since there are at most m such sets, and each stored projection has size Õ(n^δ), the total required space for storing the projections is bounded by Õ(mn^δ).
Since in the second pass the algorithm only updates the set of uncovered elements, the amount of space required in the second pass is Õ(n). Thus, the total required space to perform each iteration of the iterSetCover algorithm is Õ(mn^δ). Moreover, note that the algorithm does not need to keep the memory space used by the earlier iterations; thus, the total space consumed by the algorithm is Õ(mn^δ).
Next, we show that the sets picked before calling algOfflineSC indeed cover a large number of sampled elements.
With probability at least 1 − 1/poly(m), all sets that pass the "Size Test" in the iterSetCover algorithm cover a large fraction of the uncovered elements.
Let S be a set whose size is below the threshold. In expectation, |S ∩ R| is proportionally small, and by a Chernoff bound, for a large enough sample R, the probability that S nevertheless passes the Size Test is at most 1/poly(m).
Applying the union bound over all m sets, with probability at least 1 − 1/poly(m), all sets passing the "Size Test" indeed have large size.
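A quick numerical sanity check (ours) of this argument: a set covering well below a 1/k fraction of the uncovered elements essentially never passes a size test with threshold |R|/k on a reasonably large sample R.

```python
import random

def passes_size_test(small_set, uncovered, sample_size, k, rng):
    """Sample uniformly and apply the threshold |S ∩ R| * k >= |R|."""
    sample = rng.sample(sorted(uncovered), sample_size)
    hits = sum(1 for e in sample if e in small_set)
    return hits * k >= sample_size  # "heavy" verdict

rng = random.Random(1)
uncovered = set(range(10_000))
k = 10
small = set(range(400))              # covers a 1/25 < 1/(2k) fraction
false_pos = sum(passes_size_test(small, uncovered, 2_000, k, rng)
                for _ in range(200))
# expected hits = 80, threshold = 200: a huge Chernoff deviation
assert false_pos == 0
```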
In what follows, we define the relative (p, ε)-approximation sample of a set system and state the result of Har-Peled and Sharir [HS11] on the minimum number of sampled elements required to get a relative (p, ε)-approximation of a given set system.
Let (U, R) be a set system, i.e., U is a set of elements and R is a family of subsets (ranges) of the ground set U. For given parameters 0 < p, ε < 1, a subset Z ⊆ U is a relative (p, ε)-approximation for (U, R) if for each r ∈ R we have that if |r| ≥ p|U| then
(1 − ε)·|r|/|U| ≤ |r ∩ Z|/|Z| ≤ (1 + ε)·|r|/|U|.
If the range is light (i.e., |r| < p|U|), then it is required that
|r|/|U| − εp ≤ |r ∩ Z|/|Z| ≤ |r|/|U| + εp.
Namely, Z is a (1 ± ε)-multiplicative good estimator for the size of ranges that are at least a p-fraction of the ground set.
The following lemma is a simplified variant of a result of Har-Peled and Sharir [HS11]; indeed, a set system with m sets can have VC dimension at most log m. This simplified form also follows by a somewhat careful but straightforward application of Chernoff's inequality.
Let (U, R) be a finite set system, and let 0 < p, ε, q < 1 be parameters. Then a random sample Z of U of size at least (c/(ε²p))·(log|R| + log(1/q)), for an absolute constant c, is a relative (p, ε)-approximation for all ranges in R, with probability at least 1 − q.
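The guarantee can be checked empirically. The following toy experiment (ours; it illustrates the definition, not the proof) verifies the multiplicative bound for heavy ranges and the additive bound for light ranges on random inputs:

```python
import random

rng = random.Random(7)
n = 20_000
U = range(n)
p, eps = 0.05, 0.5
# a few random ranges of assorted sizes, and one generous uniform sample
ranges = [set(rng.sample(U, rng.randrange(1, n // 2))) for _ in range(30)]
m = 8_000
Z = rng.sample(U, m)
for r in ranges:
    true_frac = len(r) / n
    est_frac = sum(1 for x in Z if x in r) / m
    if true_frac >= p:   # heavy range: multiplicative guarantee
        assert (1 - eps) * true_frac <= est_frac <= (1 + eps) * true_frac
    else:                # light range: additive guarantee
        assert abs(est_frac - true_frac) <= eps * p
```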
Assuming the guess k is correct, after any iteration, with probability at least 1 − 1/poly(m), the number of uncovered elements decreases by a factor of n^δ, and the iteration adds O(ρk) sets to the output cover.
Let L be the set of uncovered elements at the beginning of the iteration, and note that the total number of sets picked during the iteration is at most O(ρk) (see Lemma 2.3). Consider all possible such collections of picked sets and observe that their number is at most m^O(ρk). Let H be the collection that contains all possible sets of uncovered elements at the end of the iteration, i.e., for each candidate collection, the subset of L it leaves uncovered; again |H| ≤ m^O(ρk). Setting the parameters p, ε, and q appropriately (with p = Θ(1/n^δ)), and noting that |R| is large enough, by Lemma 2.5 the sample R is a relative (p, ε)-approximation of (L, H) with probability 1 − 1/poly(m). Let C be the collection of sets picked during the iteration, which covers all elements of R. Since R is a relative (p, ε)-approximation of (L, H) with probability at least 1 − 1/poly(m), the number of elements of L left uncovered by C is at most O(p|L|) = O(|L|/n^δ).
Hence, in each iteration we pick O(ρk) sets, and at the end of the iteration the number of uncovered elements is reduced by a factor of n^δ.
The iterSetCover algorithm computes a set cover of (U, F) whose size is within a factor of O(ρ/δ) of the size of an optimal cover, with probability at least 1 − 1/poly(m).
Consider the run of iterSetCover for which the value of k is between |OPT| and 2|OPT|. In each of the Θ(1/δ) iterations made by the algorithm, by Lemma 2.6, the number of uncovered elements decreases by a factor of n^δ, where n is the number of initial elements to be covered by the sets. Moreover, the number of sets picked in each iteration is O(ρk). Thus, after Θ(1/δ) iterations, all elements are covered and the total number of sets in the solution is O(ρk/δ) = O(ρ|OPT|/δ). Moreover, by Lemma 2.6 and a union bound, the success probability over all iterations is at least 1 − 1/poly(m).
The algorithm makes O(1/δ) passes, uses Õ(mn^δ) memory space, and finds a O(ρ/δ)-approximate solution of the Set Cover problem with high probability.
Furthermore, given enough passes, the iterSetCover algorithm matches the known lower bound on the memory space of the streaming Set Cover problem up to a poly-logarithmic factor in m, where m is the number of sets in the input.
As for the lower bound, note that by a result of Nisan [Nis02], any randomized ((1/2)·log n)-approximation protocol for Set Cover in the one-way communication model requires Ω̃(m) bits of communication, no matter how many rounds it makes. This implies that any randomized constant-pass ((1/2)·log n)-approximation algorithm for Set Cover requires Ω̃(m) space, even under the exponential computational power assumption.
By the above, the iterSetCover algorithm makes O(1/δ) passes and uses Õ(mn^δ) space to return a O(1/δ)-approximate solution under the exponential computational power assumption (ρ = 1). Thus, by letting δ = 1/log n, we obtain a O(log n)-approximation streaming algorithm using Õ(m) space, which is optimal up to poly-logarithmic factors.
Theorem 2.8 provides a strong indication that our trade-off algorithm is optimal.
3 Lower Bound for Single Pass Algorithms
In this section, we study the Set Cover problem in the two-party communication model and give a tight lower bound on the communication complexity of randomized protocols solving the problem in a single round. In two-party Set Cover, we are given a set U of n elements, and each of the two players, Alice and Bob, has a collection of subsets of U, denoted F_A and F_B. The goal is to find a minimum-size cover of U from F_A ∪ F_B while communicating the fewest number of bits from Alice to Bob (in this model, Alice communicates to Bob, and then Bob should report a solution).
Our main lower bound result for single-pass protocols for Set Cover is the following theorem, which implies that the naive approach in which one party sends all of its sets to the other one is optimal.
Any single-round randomized protocol that approximates Set Cover within a factor better than 3/2 with bounded error probability requires Ω̃(mn) bits of communication, for a suitable range of m and n.
We consider the case in which the parties want to decide whether there exists a cover of size 2 for U in F_A ∪ F_B or not. If either of the parties has a cover of size at most 2 for U on its own, then the problem becomes trivial. Thus, the question is whether there exist S_A ∈ F_A and S_B ∈ F_B such that S_A ∪ S_B = U.
A key observation is that to decide whether there exist S_A ∈ F_A and S_B ∈ F_B such that S_A ∪ S_B = U, one can instead check whether there exist S_A ∈ F_A and S_B ∈ F_B such that (U ∖ S_A) ∩ (U ∖ S_B) = ∅. In other words, we need to solve an OR of a series of two-party Set Disjointness problems. In the two-party Set Disjointness problem, Alice and Bob are given subsets A and B of [n], and the goal is to decide whether A ∩ B is empty with the fewest possible bits of communication. Set Disjointness is a well-studied problem in communication complexity, and it has been shown that any randomized protocol for Set Disjointness with bounded error probability requires Ω(n) bits of communication [BJKS04, KS92, Raz92].
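The observation is easy to state in code (our illustration): a cover of size 2 exists if and only if some pair of complements is disjoint.

```python
def has_cover_of_size_two(universe, alice_sets, bob_sets):
    """S_A ∪ S_B = U for some pair iff the complements are disjoint."""
    u = set(universe)
    return any((u - a).isdisjoint(u - b)
               for a in alice_sets for b in bob_sets)

U = set(range(6))
assert has_cover_of_size_two(U, [{0, 1, 2}, {0, 5}], [{2, 3, 4, 5}, {1, 2}])
assert not has_cover_of_size_two(U, [{0, 1, 2}, {0, 5}], [{1, 2}])
```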
We can think of the following extensions of the Set Disjointness problem.
(Many vs One)-Set Disjointness: In this variant, Alice has m subsets of [n], F_A = {A_1, …, A_m}, and Bob is given a single set B ⊆ [n]. The goal is to determine whether there exists a set A ∈ F_A such that A ∩ B = ∅.
(Many vs Many)-Set Disjointness: In this variant, each of Alice and Bob is given a collection of m subsets of [n], and the goal is to determine whether there exist A ∈ F_A and B ∈ F_B such that A ∩ B = ∅.
Note that deciding whether two-party Set Cover has a cover of size 2 is equivalent to solving the (Many vs Many)-Set Disjointness problem on the complements of the input sets. Moreover, any lower bound for (Many vs One)-Set Disjointness clearly implies the same lower bound for the (Many vs Many)-Set Disjointness problem. In the following theorem, we show that any single-round randomized protocol that solves (Many vs One)-Set Disjointness with small error probability requires Ω̃(mn) bits of communication.
Any single-round randomized protocol for (Many vs One)-Set Disjointness with sufficiently small error probability requires Ω̃(mn) bits of communication, for a suitable range of m and n.
The idea is to show that if there exists a single-round randomized protocol for the problem with o(mn) bits of communication and small error probability, then with constant probability one could distinguish 2^Ω(mn) distinct inputs using o(mn) bits, which is a contradiction.
Suppose that Alice has a collection F_A of m uniformly and independently random subsets of [n] (each element belongs to each of her subsets independently with probability 1/2). Let us assume that there exists a single-round protocol Π for (Many vs One)-Set Disjointness with small error probability using o(mn) bits of communication. Let algExistsDisj be Bob's algorithm in protocol Π. Then we show that one can recover Ω(mn) random bits with constant probability using the algExistsDisj subroutine and the message sent by the first party in protocol Π. The algRecoverBit algorithm, shown in Figure 3.1, recovers the random bits using protocol Π and algExistsDisj.
To this end, Bob receives the message M communicated by protocol Π from Alice and considers subsets of [n] of sizes t and t + 1, for a suitable t = Θ(log m). Note that M is communicated only once, and thus the same M is used for all queries that Bob makes. Then, at each step, Bob picks a random subset T of [n] of size t and solves the (Many vs One)-Set Disjointness problem with input (F_A, T) by running algExistsDisj. Next, we show that if T is disjoint from a set in F_A, then with high probability there is exactly one set in F_A which is disjoint from T (see Lemma 3.11). Thus, once Bob finds out that his query T is disjoint from a set in F_A, he can query all the sets T ∪ {i} and recover the set (or union of sets) in F_A that is disjoint from T. By a simple pruning step, we can detect the candidates that arise from more than one set of F_A and only keep the sets of F_A.
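The decoding step can be simulated directly. In the sketch below (ours), the one-way message is replaced by oracle access to Alice's sets, so it illustrates only the query structure, not the communication bound; note also that under this oracle convention a query hit by several sets yields a candidate contained in all of them, which the pruning step removes since such a candidate is not maximal.

```python
import random
from math import ceil, log2

def exists_disjoint(family, query):
    """Stand-in for algExistsDisj: does some set in `family` avoid `query`?"""
    return any(s.isdisjoint(query) for s in family)

def recover_sets(family, n, rng, probes=600):
    """Recover the family via random disjointness queries of size t ~ log2(m),
    followed by the size-(t+1) queries T ∪ {i}, mimicking algRecoverBit."""
    m = len(family)
    t = max(1, ceil(log2(m)))
    candidates = set()
    for _ in range(probes):
        q = set(rng.sample(range(n), t))
        if not exists_disjoint(family, q):
            continue
        # element i lies in every set avoiding q iff q ∪ {i} has no avoider
        s = frozenset(i for i in range(n) if i not in q
                      and not exists_disjoint(family, q | {i}))
        candidates.add(s)
    # pruning: keep only candidates that are maximal under inclusion
    return {s for s in candidates if not any(s < r for r in candidates)}

rng = random.Random(3)
n, m = 16, 4
family = [set(rng.sample(range(n), n // 2)) for _ in range(m)]
assert recover_sets(family, n, rng) == {frozenset(s) for s in family}
```

With m sets over [n], a successful recovery pins down all mn input bits, which is the heart of the Ω(mn) argument.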
In Lemma 3.14, we show that the number of queries Bob is required to make in order to recover F_A is polynomial in m and n.
Let T be a random subset of [n] of size t = Θ(log m), and let F be a collection of m random subsets of [n]. The probability that there exists exactly one set in F that is disjoint from T is at least a positive constant.
The probability that T is disjoint from exactly one set in F is at least m·2^(−t) − 2·C(m,2)·2^(−2t).
First, we bound the first term. For an arbitrary set S ∈ F, since any element is contained in S with probability 1/2, the probability that S is disjoint from T is 2^(−t).
Moreover, since there exist C(m,2) pairs of sets in F, and for each pair S, S′ the probability that both S and S′ are disjoint from T is 2^(−2t), the claim follows by the Bonferroni inequality.
A family of sets F is called intersecting if and only if for any sets S, S′ ∈ F, either both S ∖ S′ and S′ ∖ S are non-empty or both are empty; in other words, there exists no pair of distinct sets S, S′ ∈ F such that S ⊆ S′. Let F_A be a collection of m subsets of [n]. We show that, with high probability, after testing polynomially many queries, the algRecoverBit algorithm recovers F_A completely if F_A is intersecting. First, we show that with high probability the collection F_A is intersecting.
Let F be a collection of m uniformly random subsets of [n]. With probability at least 1 − m²·(3/4)^n, F is an intersecting family.
For a fixed pair, the probability that S ⊆ S′ is (3/4)^n, since for each element the only forbidden outcome among the four equally likely ones is that it belongs to S but not to S′, and there are at most m² pairs of sets in F. Thus, with probability at least 1 − m²·(3/4)^n, F is intersecting.
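The (3/4)^n figure can be verified by direct enumeration for small n (our check): for each element, exactly three of the four equally likely membership outcomes are compatible with S ⊆ S′.

```python
from itertools import product

n = 6
total = contained = 0
# enumerate all pairs (S, S') of subsets of [n] via characteristic vectors
for s_bits in product((0, 1), repeat=n):
    for t_bits in product((0, 1), repeat=n):
        total += 1
        # S ⊆ S' fails only on an element with i ∈ S and i ∉ S'
        if all(not (a and not b) for a, b in zip(s_bits, t_bits)):
            contained += 1

# exactly 3 of the 4 per-element outcomes keep S ⊆ S', hence (3/4)^n overall
assert contained == 3 ** n
assert total == 4 ** n
```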
The number of distinct inputs of Alice (collections of m random subsets of [n]) that are distinguishable by algRecoverBit is 2^Ω(mn).
There are 2^(nm) ordered collections of m subsets of [n], and by Observation 3.12 almost all of them are intersecting. Since we can only recover the sets in the input collection and not their order, the number of distinct input collections distinguished by algRecoverBit is at least 2^(nm)/m!, which is 2^Ω(mn) when m is not too large.
By Observation 3.12, and by only considering the case in which F_A is intersecting, we have the following lemma.
Let F be a collection of m uniformly random subsets of [n], and suppose that F is intersecting. After testing polynomially many queries, with at least constant probability, F is fully recovered, where the probability accounts for the success rate of protocol Π for the (Many vs One)-Set Disjointness problem.
By Lemma 3.11, for each T of size t, the probability that T is disjoint from exactly one set in a random collection of m sets is at least a constant. Given that T is disjoint from exactly one set, due to the symmetry of the problem, the chance that T is disjoint from a specific set S is Ω(1/m). After c·m·log m queries, where c is a large enough constant, for any S ∈ F, the probability that there is no query that is disjoint only from S is at most 1/poly(m).
Thus, after trying O(m log m) queries, with probability at least 1 − 1/poly(m), for each S ∈ F we have at least one query that is disjoint only from S (and not from any other set in F).
Once we have a query subset T which is disjoint from a single set S only, we can ask the n − t queries T ∪ {i} of size t + 1 and recover S: an element i ∉ T belongs to S if and only if no set of F is disjoint from T ∪ {i}. Note that if T is disjoint from more than one set in F simultaneously, this process (asking queries of size t + 1) will end up recovering a combination of those sets. Since F is an intersecting family with high probability (Observation 3.12), the pruning step in the algRecoverBit algorithm guarantees that, at the end of the algorithm, what we return is exactly F. Moreover, the total number of queries the algorithm makes is at most O(mn log m).
Thus, after testing O(mn log m) queries, F will be recovered with probability at least a constant, where the constant accounts for the success probability of the protocol Π for (Many vs One)-Set Disjointness.
Let Π be a single-round protocol for (Many vs One)-Set Disjointness with small error probability and o(mn) bits of communication. Then algRecoverBit recovers F_A with constant success probability using o(mn) bits of communication.
By Observation 3.13, since algRecoverBit distinguishes 2^Ω(mn) distinct inputs with constant probability of success (by Corollary 3.15), the size of the message sent by Alice should be Ω(mn). This proves Theorem 3.10.
Proof of Theorem 3.9: As we showed earlier, the communication complexity of (Many vs One)-Set Disjointness is a lower bound on the communication complexity of Set Cover. Theorem 3.10 showed that any single-round protocol for (Many vs One)-Set Disjointness with small error probability requires Ω̃(mn) bits of communication. Thus, any single-round randomized protocol for Set Cover with small error probability requires Ω̃(mn) bits of communication.
Since any p-pass streaming α-approximation algorithm for a problem P that uses s bits of memory yields a p-round two-party α-approximation protocol for P using O(p·s) bits of communication [GM08], by Theorem 3.9 we have the following lower bound for the Set Cover problem in the streaming model.
Any single-pass randomized streaming algorithm for Set Cover that computes a better than 3/2-approximate solution with constant probability requires Ω̃(mn) memory space (for a suitable range of m and n).
4 Geometric Set Cover
In this section, we consider the streaming Set Cover problem in geometric settings. We present an algorithm for the case where the elements are a set of n points in the plane and the sets are either all disks, all axis-parallel rectangles, or all β-fat triangles (which for simplicity we call shapes), given in a data stream. As before, the goal is to find a minimum-size cover of the points from the given sets. We call this problem the Points-Shapes Set Cover problem.
Note that the description of each shape requires O(1) space, and thus the Points-Shapes Set Cover problem is trivially solvable in O(m + n) space. In this setting, the goal is to design an algorithm whose space is sub-linear in m. Here we show that almost the same algorithm as iterSetCover (with slight modifications) uses space near-linear in n to find an O(ρ)-approximate solution of the Points-Shapes Set Cover problem in a constant number of passes.
A triangle is called β-fat (or simply fat) if the ratio between its longest edge and the height on this edge is bounded by a constant β (there are several equivalent definitions of β-fat triangles).
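This definition translates directly into a small helper (ours; the constant β = 4 is an arbitrary illustrative choice):

```python
import math

def is_fat_triangle(p, q, r, beta=4.0):
    """A triangle is beta-fat if longest_edge / height_on_that_edge <= beta.
    The height on an edge e equals 2 * area / |e|."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    area = abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2
    longest = max(dist(p, q), dist(q, r), dist(r, p))
    if area == 0:
        return False  # degenerate triangle: infinitely "thin"
    height = 2 * area / longest
    return longest / height <= beta

assert is_fat_triangle((0, 0), (1, 0), (0.5, 0.9))      # near-equilateral
assert not is_fat_triangle((0, 0), (10, 0), (5, 0.1))   # long sliver
```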
Let (P, D) be a set system such that P is a set of points and D is a collection of shapes in the plane. A canonical representation of (P, D) is a collection C of regions such that the following conditions hold. First, each c ∈ C has a O(1)-size description. Second, for each c ∈ C, there exists a shape D ∈ D that contains it. Finally, for each D ∈ D, there exist regions c_1, …, c_t ∈ C such that P ∩ D = P ∩ (c_1 ∪ … ∪ c_t), for some constant t.
(Lemma 4.18 in [EHR12]) Given a set P of n points in the plane and a parameter k, one can compute a set C of axis-parallel rectangles with the following property. For an arbitrary axis-parallel rectangle r that contains at most k points of P, there exist two axis-parallel rectangles in C whose union has the same intersection with P as r.
(Theorem 5.6 in [EHR12]) Given a set P of n points in the plane, a parameter k, and a constant β, one can compute a set C of regions, each having a O(1)-size description, with the following property. For an arbitrary β-fat triangle that contains at most k points of P, there exist nine regions from C whose union has the same intersection with P as the triangle.
Using the above lemmas we get the following lemma.
Let P be a set of n points in the plane, and let D be a set of shapes (discs, axis-parallel rectangles, or fat triangles) such that each set in D contains at most k points of P. Then, in a single pass over the stream of sets D, one can compute a canonical representation of (P, D). Moreover, the size of the canonical representation is near-linear in n (for fixed k), and the space requirement of the algorithm is of the same order.
For the case of axis-parallel rectangles and fat triangles, first we use Lemma 4.18 and Lemma 4.19 to compute the set C offline, which requires space near-linear in n. Then, by making one pass over the stream of sets D, we can find the canonical representation by picking all the regions of C that are contained in some set of D. For discs, however, we just make one pass over the sets and keep a maximal subfamily in which the projections of any two discs on P are different, i.e., no two kept discs cover the same subset of P. By a standard technique of Clarkson and Shor [CS89], it can be proved that the size of this canonical representation is bounded by O(nk). Note that this is just counting the number of discs that contain at most k points, namely the at most k-level discs.
In the first pass, the algorithm picks all the sets that cover a large number of yet-uncovered elements. Next, we sample a set R of the uncovered points. Since we removed all the ranges that have large size in the first pass, the sizes of the remaining ranges restricted to the sample are small. Therefore, by Lemma 4.20, the canonical representation of the projected set system has small size, and we can afford to store it in memory. We use Lemma 4.20 to compute the canonical representation in one pass. The algorithm then uses the canonical sets to find a cover for the points of R. Next, in one additional pass, the algorithm replaces each canonical set in the cover by one of its supersets in D.
Finally, note that in the algorithm of Section 2, we assume that the size of the optimal solution is at most k. Thus, it is enough to stop the iterations once the number of uncovered elements is less than k; then we can pick an arbitrary set for each of the remaining uncovered elements. This adds only k more sets to the solution. Using this idea, we can reduce the size of the sampled set of elements, which helps us achieve near-linear space in the geometric setting. Note that the final pass of the algorithm can be embedded into the previous passes, but for the sake of clarity we describe it separately.