Online Facility Location on Semi-Random Streams


In the streaming model, the order of the stream can significantly affect the difficulty of a problem. A t-semirandom stream was introduced as an interpolation between random-order (t = 1) and adversarial-order (t = n) streams, where an adversary intercepts a random-order stream and can delay up to t elements at a time. IITK Sublinear Open Problem #15 asks to find algorithms whose performance degrades smoothly as t increases. We show that the celebrated online facility location algorithm achieves an expected competitive ratio of O(log t / log log t). We present a matching lower bound: any randomized algorithm has an expected competitive ratio of Ω(log t / log log t).

We use this result to construct an O(1)-approximate streaming algorithm for k-median clustering that stores O(k log t) points and has O(k log t) worst-case update time. Our technique generalizes to any dissimilarity measure that satisfies a weak triangle inequality, including k-means, M-estimators, and ℓ_p norms. The special case t = 1 yields an optimal O(k)-space algorithm for random-order streams as well as an optimal O(nk)-time algorithm in the RAM model, closing a long line of research on this problem.


One of the fundamental theoretical questions in the streaming model is to understand how the stream order impacts computation. In the adversarial-order model, results must hold under any order, whereas in the random-order model the order is selected uniformly at random. The order of the stream can strongly affect the resources required to solve a problem. For example, for streams of integers where the stream may be read sequentially multiple times, determining the median using polylogarithmic space requires Θ(log n) passes in adversarial order [33] but only O(log log n) passes in random order [17].

As demonstrated by the median problem, there can be an exponential gap in the resources required for random-order and adversarial-order streams. To interpolate between these two extremes, Guha and McGregor introduced two notions of semirandom order, the t-bounded-adversary model and the ε-generated-random model, in which the adversary has limited power.

These models capture the notion of an adversary with limited power and establish a spectrum of semirandom orders interpolating between the fully random and fully adversarial cases. IITK Sublinear Open Problem #15 [27] asks:

How do these notions relate to each other? Can we develop algorithms whose performance degrades smoothly as the stream ordering becomes “less-random” using either definition? For a given application, which notion is more appropriate?

We respond to the first question by showing that no non-trivial relations hold between these models. One can verify that t = 1 and ε = 0 correspond to random order, and that t = n and ε = 1 correspond to adversarial order. However, we show that these models are incomparable in the sense that an ε-generated adversary requires ε = 1 − o(1) to simulate the action of a t-bounded adversary for any t ≥ 2, and that a t-bounded adversary requires t = Ω(n) to simulate the action of an ε-generated adversary for any ε > 0.

We answer the second question by proving matching bounds for the online facility location problem that show the performance degrades smoothly as t increases. These are the first bounds for semirandom streams that match at all values of t. Previous results matched only for t sufficiently small. For example, the result of [17] shows how to return the median in O(log log n) passes when t is polylogarithmic in n. However, for larger t that algorithm's guarantee degrades sharply, whereas even at t = n there are polylogarithmic-space algorithms that return the median in only O(log n) passes [33].

Our results provide evidence that t-bounded adversarial order is a viable model of semirandomness. We address the third question by complementing our positive results for the t-semirandom model with an argument showing that the ε-generated random-order model is uninteresting for a wide class of problems. A more complete discussion of IITK Open Problem #15 is included in Section 6.

1.1 Our Contributions

We present results for online facility location and a large class of clustering problems. In Section 3, we provide a novel analysis showing that the online facility location algorithm of [31] is O(log t / log log t)-competitive in expectation on t-semirandom streams. Adapting Meyerson's original argument to a t-bounded adversary is possible but results in an O(t) expected competitive ratio. We introduce a different analysis that permits this exponential improvement. We complement this result by presenting a matching lower bound in Section 4: any randomized algorithm for online facility location is Ω(log t / log log t)-competitive in expectation. See Table ? for a comparison with existing results.

In Section 5, we present a streaming algorithm for clustering with any function that satisfies a weak triangle inequality (this includes k-median, k-means, M-estimators, and ℓ_p norms). Our algorithm stores O(k log t) points and has O(k log t) worst-case update time. As shown in Table ?, we match the state-of-the-art for adversarial-order streams and provide the first results for intermediate values of t. We remark that our algorithm respects sparsity by only storing a weighted subset of the input. Another notable property of our clustering algorithm is that it is oblivious to the actual values of k and t: the algorithm takes a single input parameter, and the output is valid as long as k log t is bounded by that parameter. This may be useful for practical applications where the number of clusters or the power of the adversary is unknown. For example, if the data exhibits a hierarchical structure, then the resolution of the result (measured by the number of clusters) degrades smoothly as the power of the adversary increases.

The special case t = 1 yields the first optimal-space result for clustering on random-order streams. In the RAM model, where we can shuffle the input into random order in linear time, this implies an optimal O(nk)-time algorithm and closes a long line of research on the problem.

As a blackbox used by our clustering algorithm, we present a method to compress a weighted set of 2k distinct points to a weighted set of at most k distinct points in linear time while incurring less than twice the optimal cost of clustering to k points. Our algorithm, based on 2-coloring a nearest-neighbor graph, is presented in Section 5.2 as it may be of independent interest.

1.2 Prior Work

Random-Order Streams: There has been increasing interest in designing algorithms for data streams that arrive in random order, and in recent years the model has become quite popular. Random-order streams have been considered for problems including rank selection [33], frequency moments [3], entropy [21], submodular maximization [32], and graph matching [26]. Lower bounds that hold even under the assumption of random order have been developed using multi-party communication complexity [9]. Semirandom-order streams, in both the t-bounded and ε-generated models, have been considered for rank selection [17]. The stochastic streaming model, which takes the random-order assumption a step further by assuming that stream elements are independent samples from an unknown distribution, has also attracted attention [19]. The stochastic streaming model is strictly easier than the random-order model, since any stochastic stream is automatically in random order.

Online Facility Location: The study of online facility location was initiated by Meyerson [31]. He provided a simple randomized algorithm and proved that for random-order streams it is O(1)-competitive in expectation. Later, Fotakis [15] showed that for adversarial-order streams any randomized algorithm has an expected competitive ratio of Ω(log n / log log n) and proved that Meyerson's randomized algorithm achieves this bound; he also presented a novel deterministic algorithm that achieves this bound. For Euclidean space, a simple and practical deterministic algorithm was provided by [2].

Streaming Metric k-median and k-means Clustering: The streaming k-median and k-means problems have only been considered in the adversarial-order model. These problems are well-studied; here we mention only the metric-space results that achieved an improvement in the space bound over the previous state-of-the-art. The first streaming solution computed a 2^{O(1/ε)}-approximation for any ε > 0 and stored O(n^ε) points [22]. Later, an algorithm storing only O(k log^2 n) points was provided [10]. The current state-of-the-art O(1)-approximation stores O(k log n) points [6]. A variety of other results are known for Euclidean space.

RAM-Model Metric k-median and k-means Clustering: The history of fast O(1)-approximations in the RAM model is summarized in Table ?, omitting results that do not improve the runtime for any value of k. These results for k-median generalize to k-means with a larger constant in the approximation ratio. We conclude this line of research with an optimal O(nk)-time algorithm, matching the Ω(nk) time lower bound for any randomized algorithm [30]. We remark that there exists a fast algorithm for k-means in Euclidean space [1], but it relies on the principal axis theorem and therefore does not generalize to k-median or to other metric spaces.


Let (X, d) be a metric space. In the facility location problem with parameter f > 0 (called the facility cost), we are given a set D ⊆ X called demands. The problem is to compute a set F ⊆ X called facilities and to connect each demand to a facility. To connect demand p to facility q, we incur cost d(p, q). We also incur cost f for each facility opened. The objective is to compute F such that the total cost is minimized. Defining d(p, F) = min_{q ∈ F} d(p, q), the total cost is f|F| + Σ_{p ∈ D} d(p, F) by connecting each demand to the nearest facility.

In online facility location, we receive D as a stream of points. When point p arrives, we may open a facility (incurring facility cost f) and then must connect p to a facility (incurring the connection cost). Observe that if we open a facility at the location of p, there is no connection cost. The problem is online because the decisions to open a facility and to connect are irrevocable, meaning that a facility can never be closed and that p cannot be reconnected if a closer facility opens later.

For the k-median problem, there is no facility cost, but the number of facilities (here called centers) is fixed at k. The goal is to compute a set C ⊆ X of k centers that minimizes the total cost COST(D, C) = Σ_{p ∈ D} d(p, C). When the input arrives as a stream, we seek to design algorithms that require a minimal amount of memory.

The optimal cost for k-median is OPT_k = min_{|C| = k} COST(D, C). For facility location, the optimal cost is the minimum of fj + OPT_j where j ranges over all positive integers. An α-approximation is a solution with cost at most α times the optimum.
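These definitions can be made concrete with a small brute-force sketch (for illustration only; `opt_k` restricts centers to the demand points themselves, which by a lemma in Section 5.2 increases the optimum by less than a factor of two):

```python
import math
from itertools import combinations

def cost(demands, facilities):
    """Connection cost: each demand connects to its nearest facility."""
    return sum(min(math.dist(p, q) for q in facilities) for p in demands)

def opt_k(demands, k):
    """Brute-force OPT_k with centers restricted to the demand points."""
    return min(cost(demands, C) for C in combinations(demands, k))

def opt_facility_location(demands, f):
    """Facility location optimum: min over j of f*j + OPT_j."""
    n = len(demands)
    return min(f * j + opt_k(demands, j) for j in range(1, n + 1))

demands = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0)]
```

With facility cost f = 10, the optimum opens two facilities (one per pair), paying 20 in facility cost and 2 in connection cost.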

A t-semirandom stream is the result of a random-order stream that has been intercepted by a t-bounded adversary (see Definition ?). Imagine that the stream of n elements is a deck of n cards, initially shuffled into random order. The adversary draws cards into his hand from the deck. He may give any card from his hand to the algorithm. The restriction is that he can hold at most t cards in his hand at any time. This means that if he has a full hand of t cards, he cannot draw a new card until giving one to the algorithm. See Figure ? for an example.
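The card-deck process can be sketched in a few lines; `delay_policy` below is a hypothetical stand-in for whatever rule the adversary uses to pick which card to release:

```python
import random

def t_semirandom_stream(elements, t, delay_policy):
    """Generate a t-semirandom order: shuffle, then let an adversary
    holding at most t cards choose which card to release next."""
    deck = list(elements)
    random.shuffle(deck)          # the initial random order
    hand, out = [], []
    for card in deck:
        hand.append(card)         # draw the next card from the deck
        if len(hand) == t:        # full hand: must release a card
            out.append(hand.pop(delay_policy(hand)))
    while hand:                   # deck exhausted: flush the hand
        out.append(hand.pop(delay_policy(hand)))
    return out

# t = 1 forces immediate release (pure random order);
# t = n lets the adversary impose any order it likes.
stream = t_semirandom_stream(range(10), t=3, delay_policy=lambda h: 0)
```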

3 Online Facility Location

The algorithm of Meyerson [31] is simple and elegant. Let f be the facility cost parameter. When a point p arrives, let δ be the distance between p and the nearest open facility. With probability min(δ/f, 1), we open a facility at p and pay facility cost f. Otherwise, we connect p to the nearest facility and pay connection cost δ. We write OFL to refer to this algorithm.
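A minimal sketch of OFL (Euclidean points are an illustrative choice; the algorithm works in any metric space):

```python
import math
import random

def ofl(stream, f):
    """Meyerson's online facility location: returns (facilities, total_cost)."""
    facilities, cost = [], 0.0
    for p in stream:
        if not facilities:
            facilities.append(p)      # first point always opens (delta is unbounded)
            cost += f
            continue
        delta = min(math.dist(p, q) for q in facilities)
        if random.random() < min(delta / f, 1.0):
            facilities.append(p)      # open a facility at p, pay f
            cost += f
        else:
            cost += delta             # connect p to its nearest facility
    return facilities, cost

pts = [(random.random(), random.random()) for _ in range(200)]
facs, total = ofl(pts, f=0.5)
```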

Partition D into optimal clusters D_1, …, D_k with centers c_1, …, c_k, so that OPT_k = Σ_i COST(D_i, c_i). Lemmas ?, ?, and ? bound the expected connection cost and expected facility cost on each D_i by O(log t / log log t) · (f + COST(D_i, c_i)). We obtain the result by summing over all i.

We now provide the results used in the proof of Theorem ?. Let S be a set of points. Given a center point c, define A = COST(S, c) and r = A/|S|, the average connection cost of S to c. We partition S into three pieces, in order of increasing distance from c, as follows:

Defining COST(P) = Σ_{p ∈ P} d(p, c) for any set P ⊆ S, we decompose A = COST(S_1) + COST(S_2) + COST(S_3).

For convenience we normalize the facility cost to f = 1. Let p_1, …, p_m be the order of arrival of the points. Define δ_i to be the distance from p_i to the nearest open facility upon arrival. There is a probability of min(δ_i, 1) that point p_i opens as a facility; otherwise p_i incurs connection cost δ_i.

Let C_j denote the expected connection cost incurred before a facility is opened when OFL is run on the suffix p_j, …, p_m. For convenience define C_{m+1} = 0. Observe that C_m ≤ 1. We seek to prove that C_1 ≤ 1.

We write the recursive formula C_j = (1 − min(δ_j, 1))(δ_j + C_{j+1}). Observe that (1 − δ)(δ + 1) = 1 − δ^2 ≤ 1, with the maximum occurring when δ = 0. Assuming inductively that C_{j+1} ≤ 1, observe that C_j ≤ (1 − min(δ_j, 1))(δ_j + 1) ≤ 1. We conclude that C_1 ≤ 1.

Observe that Fact ? can be used to simultaneously bound both the expected connection cost and the expected facility cost incurred before the first facility opens. We use this in Lemmas ?–? to bound both types of cost with the same argument.

By Fact ?, the expected connection cost of points in S_1 before a facility opens in S_1 is less than f. The facility cost is exactly f when the first facility opens in S_1. After a facility has opened at some point q ∈ S_1, we may bound the connection cost of each subsequent point p ∈ S_1 by d(p, q) ≤ d(p, c) + d(c, q) via the triangle inequality. The result follows by summing over all points.

The proof of the next lemma is similar to the previous one.

We partition S_2 into annuli S_2^{(1)}, …, S_2^{(ℓ)}, defining S_2^{(i)} to contain the points of S_2 whose distance from c lies in the i-th geometric range. By Fact ?, the expected connection cost of points in S_2^{(i)} before a facility opens in S_2^{(i)} is less than f. The facility cost is exactly f when the first facility opens in S_2^{(i)}. After a facility opens in S_2^{(i)}, we may bound the connection cost of each subsequent point of S_2^{(i)} by the triangle inequality, since all points of an annulus are at comparable distance from c. The result follows by summing over all i.

Fixing an order, let e_j denote the expected connection cost of the j-th point of S_3. For i ≥ 0, let N_i denote the number of points of S_3 that arrive after exactly i points of S have arrived.

The first points of S_3 trivially incur expected cost at most f each, since a point whose nearest facility is farther than f opens a facility with probability 1. If a point p arrives after a point q, then d(p, F) ≤ d(p, q) + d(q, F) by the triangle inequality. Hence for a point p that arrives after i points of S, taking expectations and then the minimum over the i preceding points of S shows that the expected connection cost of p decreases as i grows. The result follows by summing over all points of S_3.

The remainder of the proof depends on the assumption of t-semirandom order. For precision, we continue to use E for expectation over the randomness used by OFL and introduce E_σ for expectation over the randomness of the stream order. Let M_i be the number of points of S_3 that occur after exactly i points of S in the initial random-order stream, before being intercepted by the adversary. Since the adversary may hold at most t points, if the adversary has received i points of S then the algorithm has received at least i − t points of S. This provides a relation between the N_i and the M_i for every i. We can view M_i as the number of balls in the i-th bin when we randomly drop |S_3| balls into |S| + 1 bins.

Observe that the per-point bound of Lemma ? decreases as more points of S arrive. The adversary's optimal strategy against our bound in Lemma ? is therefore to delay points of S as long as possible. We therefore take the worst-case bounds N_0 ≤ M_0 + ⋯ + M_t and N_i ≤ M_{i+t} for every i ≥ 1. We rewrite the expected cost as a sum weighted by these worst-case counts.

The difficulty is that the per-point cost bound and the counts M_i are dependent random variables. Although they cannot affect each other directly, both depend on the prefix of the stream ending on the i-th point of S. To overcome this, we split the sum into two pieces, and for each piece we find an upper bound on one of the two factors that holds independently of the prefix.

The next lemma quantifies the intuition that if many points of S have arrived then there must be a facility very close to c. This bound suffices after a constant fraction of S has arrived.

Let Z be the indicator random variable of the event that no facility is open near c after the current point is processed. Let P be the set of points of S that have arrived. Fact ? implies that the expected connection cost accumulated by the points of P is bounded. Thus there must be some p ∈ P whose expected distance to the nearest facility is at most this bound divided by |P|. Since d(c, F) ≤ d(c, p) + d(p, F), we obtain a bound on the expected distance from c to the nearest facility.

By Markov's inequality, both d(p, F) and d(c, p) are small with constant probability, and a union bound combines the two events. The result follows since |P| is a constant fraction of |S| by assumption.

If we drop b balls into m bins and condition on the number of balls in j of the bins, the expected number of balls in any other bin is at most that of dropping all b balls into the remaining m − j bins. This bound blows up towards the end of the stream but suffices for the first half.
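The claim can be sanity-checked numerically: conditioning on one bin being empty raises the expectation in another bin from b/m toward b/(m − 1), consistent with the stated bound (a Monte Carlo sketch, not part of the proof):

```python
import random

def avg_bin0(balls, bins, trials=20000, cond=lambda counts: True):
    """Monte Carlo estimate of the expected number of balls in bin 0,
    conditioned on an event `cond` that only inspects the other bins."""
    total = hits = 0
    for _ in range(trials):
        counts = [0] * bins
        for _ in range(balls):
            counts[random.randrange(bins)] += 1
        if cond(counts):
            total += counts[0]
            hits += 1
    return total / hits

base = avg_bin0(12, 4)                             # unconditional: near 12/4 = 3
cond = avg_bin0(12, 4, cond=lambda c: c[1] == 0)   # bin 1 empty: near 12/3 = 4
```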

Let σ denote the order of the prefix of the stream ending on the i-th point of S. Observe that conditioning on σ can only increase the expected number of balls in a later bin up to the extreme case in which σ contains none of the balls. This implies the conditional bound of Fact ?.

By causality, points that arrive after the i-th point of S do not affect σ. Therefore the per-point cost bound is a constant once σ is fixed. We can now separate the expectation:

We have the constraint given by Fact ?. Since the bound is increasing, the sum is maximized at the extremal configuration in which each count is as large as possible. We bound the resulting sum directly, and by Markov's inequality the remaining term is small. We may assume the relevant count is large, since otherwise the claim is immediate and there is nothing to show. We conclude the stated bound.

We can now provide a bound on the expected cost of OFL that holds for t-semirandom-order streams.

We substitute the bounds of Lemmas ? and ? into Lemma ?. The last step is to bound the resulting sum for the worst-case adversary and observe that it is O(log t / log log t).

3.1 Application to Online Facility Location

Our main result for online facility location is now a simple corollary of Theorem ?. As shown in Section 5, Theorem ? yields results for both online facility location and k-median clustering by applying the theorem with different choices of the facility cost f and the number of centers k.

Let F* be an optimal facility set for D. Define A* = COST(D, F*) and observe that A* is the connection cost associated with the optimal solution with facility set F*. The optimal cost for the facility location problem is then OPT = f|F*| + A*. Set k = |F*| and apply Theorem ? to D with these values of f and k. This shows that the total expected cost is at most O(log t / log log t) · (f|F*| + A*) = O(log t / log log t) · OPT, which implies the result.

Remark on Aspect Ratio: Suppose that we are working in a metric space of aspect ratio Δ (the ratio of the largest to the smallest interpoint distance). Revisiting the proof of Theorem ? with this assumption yields a bound of O(log min(t, Δ) / log log min(t, Δ)) on both the expected facility cost and the expected connection cost. We may therefore replace t with min(t, Δ) in our results. This justifies the lower bound in the following section being constructed in a metric space of aspect ratio polynomial in t. Our upper and lower bounds match for all choices of t and Δ by substituting min(t, Δ) for t.

4 Lower Bound

We present a lower bound on the expected competitive ratio of any randomized algorithm for the online facility location problem. The bound holds even when the algorithm can open a facility at any location in the metric space. The proof works by constructing a sequence of points that converge to the location of an optimal facility. At each step, there are enough possible locations of the optimal facility that no algorithm can guess (except with negligible probability) the correct location until it is too late.

We will construct a family of inputs and show that any deterministic algorithm has at least a certain competitive ratio when run on an input selected uniformly at random from this family. The result immediately extends to randomized algorithms by Yao’s principle.

The Metric Space: Let w ≥ 2 be a positive integer (the branching factor) and let h be the depth; both are functions of t fixed below. The points of the metric space are the nodes of a complete w-ary tree of depth h. This means that the root is at depth 0 and the leaves are at depth h. The distance between a node at depth j and any of its children decreases geometrically with j. The distance between other nodes is obtained by summing the distances along the shortest path between them.

The Family of Inputs: The family of inputs is enumerated by the possible strings σ = (σ_1, …, σ_h) of numbers in {1, …, w}. We now describe how to construct the input associated with σ. Define v_0 to be the root. Recursively define v_j to be the σ_j-th child of v_{j−1}. For each j, place a prescribed number of points at node v_j. We let the input size be n. Since the non-root points number fewer than n, there will be some remaining points; place all remaining points at the root. The randomized input is to select a member of this family uniformly at random.
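The construction can be sketched as follows (the branching factor, depth, and per-level point counts here are illustrative placeholders, not the paper's exact choices):

```python
import random

def build_input(sigma, points_per_level, n):
    """Return the multiset of demand locations for the input indexed by sigma.
    A location is the path from the root: () is the root, (2, 0) is the
    0th child of the root's 2nd child."""
    demands = []
    node = ()
    for depth, branch in enumerate(sigma):
        node = node + (branch,)                 # descend to the sigma_j-th child
        demands += [node] * points_per_level[depth]
    demands += [()] * (n - len(demands))        # remaining points at the root
    return demands

w, h = 3, 4
sigma = tuple(random.randrange(w) for _ in range(h))
stream = build_input(sigma, points_per_level=[w ** (d + 1) for d in range(h)], n=200)
```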

The Optimal Cost: The optimal cost of the input associated with the string σ is at most the cost of the solution that places facilities at the root and at v_h, connecting all non-root points to v_h. The distance between v_j and v_h is the sum of geometrically decreasing edge lengths, so every non-root point is close to v_h. Therefore the optimal cost is O(f) for every input in the family.

The t-Bounded Adversary's Strategy: There are at most t points not located at the root. This implies that regardless of the order in which the points are dealt, a t-bounded adversary can deterministically ensure that the points arrive in non-decreasing order of depth. The adversary does this by simply sending along any elements at the root while storing the at most t other elements until they are ready to be sent in non-decreasing order of depth. We consider this arrival order.

The Algorithm’s Optimal Strategy: We define a cost scheme for the algorithm that charges strictly less than the original cost (therefore any lower bound in this easier scheme is valid for the original problem). This greatly simplifies the analysis by allowing us to isolate an optimal strategy.

Suppose that if a facility is open at a certain node, then the connection cost of a point at any ancestor node is zero. With this modification, we can define an optimal strategy when a point arrives. If there is an open facility at any descendant node, then connect this point with zero cost. Otherwise, open a facility with cost f at any descendant leaf node and then connect with zero cost.

If there is no open facility at a descendant node, the second option is optimal since the nearest facility cannot be closer than the parent node. The algorithm, aware of the family of inputs, knows how many points in total are coming at this node, and their combined connection cost would exceed f. Therefore it is optimal to pay f and open a new facility.

Given that we will open a new facility, placing it at a descendant leaf node is optimal because it minimizes the connection cost of future points in the stream. Without loss of generality, we have our deterministic algorithm always open at the descendant leaf node obtained by repeatedly moving down to the first child, starting from the current node.

The Algorithm’s Expected Cost: Using the algorithm’s strategy defined above, one can see that the algorithm incurs zero connection cost. For the input associated with σ, the number of facilities opened beyond the first is the number of σ_j not equal to 1, since the previously opened facility lies below v_j only when σ_j = 1. The probability that σ_j ≠ 1 is 1 − 1/w. The expected total cost is then at least f · h · (1 − 1/w). Using w sufficiently large shows that an expected cost of Ω(f · h) is unavoidable.

The competitive ratio is therefore Ω(log t / log log t), as desired. The result extends to randomized algorithms by Yao’s principle.

5 k-Median Clustering on Streams

In this section, we present an O(1)-approximation streaming algorithm for k-median clustering that stores O(k log t) points and has O(k log t) worst-case update time. The extension to other functions is sketched in Section 5.1. Our algorithm is based on the doubling algorithm of [10]. Among our innovations is a compression subroutine that permits us to improve the update time and approximation ratio of the original algorithm. This routine is described in Section 5.2 and comes with the following guarantee:

Throughout this section, as in Theorem ?, we overload notation by writing COST(A, B) where B is a weighted set (with total weight equal to that of A). Recall that for an unweighted set B, the function COST(A, B) denotes the minimum connection cost of connecting the demand set A to the facility set B, where each facility can service an unlimited number of demands. When B is a weighted set, we let COST(A, B) denote the minimum connection cost under the constraint that each facility must service exactly its weight in demands.

We now present Algorithm ? to maintain a weighted set S which, we show, determines an O(1)-approximation to the k-median clustering of the stream. Observe that the main loop of Lines ?–? always begins with a freshly compressed S, which ensures by Theorem ? that the while-loop of Lines ?–? always begins with S containing at most half of its maximum allowed number of points.

The variable in question is an upper bound on the increase of the cost during the current instance of the while-loop. Since the cost increases by at most the facility cost in each iteration, the termination condition of Line ? ensures that this bound holds. It remains to show that the cost when the while-loop began was suitably bounded. Let us recursively assume that when the previous while-loop began, the cost was bounded in terms of the previous value of the lower bound. During that instance, less than the budgeted cost was incurred, and on Line ? the compression step incurred a bounded additional cost; both quantities are controlled by Line ?. Therefore the total cost is bounded as desired.
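A schematic rendering of the doubling strategy in Python (the budget `cap`, the initial lower bound, and `naive_compress` are illustrative placeholders, not the paper's exact pseudocode; the real algorithm restarts a phase on the summary plus the remaining stream and uses the compression routine of Section 5.2):

```python
import math
import random

def naive_compress(S, m):
    """Placeholder for the compression routine: keep the m heaviest points
    and fold each dropped point's weight into its nearest kept point."""
    S = sorted(S, key=lambda pw: -pw[1])
    kept, rest = S[:m], S[m:]
    for p, w in rest:
        j = min(range(len(kept)), key=lambda i: math.dist(p, kept[i][0]))
        kept[j] = (kept[j][0], kept[j][1] + w)
    return kept

def doubling_k_median(stream, k, t):
    """Maintain a weighted summary S with a doubling lower bound L on OPT."""
    cap = max(2, math.ceil(k * (1 + math.log(t + 1))))  # illustrative budget
    L, S = 1.0, []
    for p in stream:
        f = L / k                       # facility cost for the current phase
        delta = min((math.dist(p, q) for q, _ in S), default=float("inf"))
        if random.random() < min(delta / f, 1.0):
            S.append((p, 1))            # open a facility at p
        else:                           # connect p: add its weight to nearest
            j = min(range(len(S)), key=lambda i: math.dist(p, S[i][0]))
            S[j] = (S[j][0], S[j][1] + 1)
        if len(S) > cap:                # phase budget exhausted
            L *= 2                      # double the lower bound ...
            S = naive_compress(S, cap // 2)   # ... and compress the summary
    return S

pts = [(random.random(), random.random()) for _ in range(300)]
S = doubling_k_median(pts, k=3, t=4)
```

Note that the summary's total weight always equals the number of points seen, since opening adds weight 1, connecting moves weight 1 onto an existing point, and compression preserves weight.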

The next lemma addresses a subtle issue that only arises for 1 < t < n. Observe that any segment of a random-order (t = 1) stream is in random order, and that any segment of an adversarial-order (t = n) stream is in adversarial order. However, a segment of a t-semirandom-order stream for 1 < t < n is not necessarily in t-semirandom order (or even in semirandom order for a moderately larger t), because the adversary may have as many as t points in storage when the segment begins. Instead, we analyze a segment as two separate t-semirandom streams, one coming from the adversary's storage and the other consisting of those points that the adversary has not yet received.

Observe that the while-loop runs OFL with a facility cost proportional to the current lower bound divided by k. Setting the parameters of Theorem ? accordingly and plugging in this facility cost shows that on a t-semirandom stream, OFL incurs less than the budgeted connection cost in expectation and opens fewer than the budgeted number of facilities in expectation. Observe that this bound holds for the points of a stream even when it is interlaced with points from another stream.

For a segment of the t-semirandom stream, let X be the adversary's storage at the beginning of the segment and let Y be the remaining points of the segment. Applying the previous argument twice, we see that on this segment OFL incurs less than twice the budgeted connection cost and opens fewer than twice the budgeted number of facilities, in expectation. The terms involving the optimal cost did not double, since OPT(X) + OPT(Y) ≤ OPT(X ∪ Y). With constant probability, the cost and number of facilities are at most twice these bounds by Markov's inequality.

Suppose that the current lower bound is at least OPT. Then with constant probability, the while-loop incurs less than its cost budget and opens fewer facilities than its facility budget. Since the while-loop begins with S of bounded size, the termination condition means that either the cost budget or the facility budget was exhausted. We conclude that, with constant probability, the lower bound is less than OPT when the while-loop terminates.

For correctness of the algorithm, the result of the preceding lemma needs to hold only for the most recent termination of the while-loop. We therefore apply it only once, avoiding an extra logarithmic factor in the space bound that would result from applying the lemma at each loop iteration.

After processing a point, Algorithm ? waits on Line ? for the next point. We now extend the previous lemma to hold on this line, and therefore after each point has been processed.

Let S′ and L′ be the states of S and of the lower bound at the beginning of the current iteration of the main loop. We condition upon the event of Lemma ?, which occurs with the probability stated there. From Line ? we infer that either S = S′ or the while-loop has terminated at least once during this iteration.

In the case that S = S′, the result is immediate. Otherwise, COST(S′, S) is bounded by the guarantee of the compression routine, and the cost of S′ against the stream so far is bounded by assumption. Applying the triangle inequality to each point of the stream combines the two bounds: the first inequality is by Lemma ? and the second is the event we have conditioned upon. This result holds regardless of how many iterations of the while-loop have occurred, since the cost is non-decreasing as points are added to the stream.

We now state our main theorem for clustering. Amplifying the probability of success is simple: run independent instances of Algorithm ? in parallel and return the set S from an instance of minimal cost.

Combining Lemmas ? and ? shows that S determines an O(1)-approximation with constant probability. It is immediate from the pseudocode that the storage is O(k log t) points. As for the update time, each iteration of the while-loop requires O(k log t) time. By Theorem ?, if the nearest-neighbor function for S has been computed, then Line ? terminates in time linear in |S|. We must show how to ensure, within the worst-case update time, that the nearest-neighbor function for S has been computed before each time that Line ? is executed.

Given the nearest-neighbor function for a set of points, observe that we can insert a point into the set and update the function in time linear in the size of the set. Beginning with the nearest-neighbor function of the first two points of S, we simply update with the next three points of S each time one point is received from the stream. The while-loop runs at least |S|/3 times before each time that Line ? executes. Therefore the nearest-neighbor function for all of S is guaranteed to have been computed by the time the while-loop terminates.
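A sketch of the linear-time incremental update: inserting a point compares it against every stored point, repairing both its own nearest neighbor and everyone else's.

```python
import math

def nn_insert(points, nn, p):
    """Insert p into the point set and update the nearest-neighbor map
    nn[q] = (neighbor, distance) in O(|points|) time."""
    best, best_d = None, float("inf")
    for q in points:
        d = math.dist(p, q)
        if d < nn[q][1]:
            nn[q] = (p, d)            # p becomes q's new nearest neighbor
        if d < best_d:
            best, best_d = q, d       # track p's own nearest neighbor
    nn[p] = (best, best_d)
    points.append(p)

pts = [(0.0, 0.0), (10.0, 0.0)]
nn = {pts[0]: (pts[1], 10.0), pts[1]: (pts[0], 10.0)}
nn_insert(pts, nn, (1.0, 0.0))
```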

By tweaking parameters and refining the analysis, one can improve the constant in the approximation guarantee (in particular, the space blows up as the constant approaches its limiting value). It is well-known that if S is a summary with COST(D, S) ≤ α · OPT, then any β-approximation of the weighted set S is an O(αβ)-approximation of D [10]. Therefore Theorem ? implies that S carries enough information to determine an O(1)-approximation of the k-median clustering of D. Our constant does not guarantee a particularly low approximation ratio. However, Algorithm ? can be used as a building block for a more accurate solution. Using the technique of [5], we can use Algorithm ? to maintain an ε-coreset, which carries enough information to determine a (1 + ε)-approximation for any ε > 0. This technique essentially converts the constant in the approximation factor into a constant in the size of the coreset. The only space required in addition to Algorithm ? is the space needed to store the ε-coreset. As an example, for k-median in Euclidean space, small coresets are known [14], implying that our result can be used to determine a (1 + ε)-approximation in correspondingly small space.

As a corollary to Theorem ?, we obtain an O(nk)-time approximation algorithm for the RAM model. In light of the Ω(nk) time lower bound of [30], the runtime is optimal.

Shuffle the input into random order in O(n) time. Set t = 1 and run Algorithm ? in O(nk) time, followed by the offline algorithm of [30] on the stored points, to obtain by Theorem ? a set of k points whose cost is O(OPT_k) with constant probability. Repeat this a constant number of times and output the solution of minimal cost.

5.1 Extension to Other Functions

We have assumed that we are in a metric space, but we can weaken this assumption. Suppose that throughout our results we replace the metric d with an arbitrary symmetric positive-definite function D. If D satisfies the triangle inequality, then D is a metric and our result for k-median applies directly. However, suppose that D satisfies only a weak triangle inequality for some λ ≥ 1: D(x, z) ≤ λ(D(x, y) + D(y, z)) for all points x, y, z.

All of our proofs go through with larger constants. The bound of Theorem ? and the guarantee of the compression routine of the next subsection generalize with constants depending on λ. For any function satisfying a weak triangle inequality with a constant λ, our results for both online facility location and clustering carry through with larger constants. An example application is that our results generalize to ℓ_p norms. Another important case is k-means, which corresponds to λ = 2.

Recall that the maximum likelihood estimator for the mean of Gaussian data is the μ that minimizes Σ_i ||x_i − μ||^2. To handle outliers more robustly, the statistics community introduced M-estimators, which generalize maximum likelihood estimation by minimizing Σ_i ρ(x_i − μ) for some positive-definite function ρ. An M-estimator along with a positive integer k defines a clustering problem: find a set C of k points that minimizes Σ_i min_{c ∈ C} ρ(x_i − c). Observe that we recover k-means for ρ(x) = ||x||^2 and k-median for ρ(x) = ||x||. The convergence and robustness properties of M-estimators have been well-studied, but we also observe that the function ρ usually satisfies a weak triangle inequality with a very low value of λ. Evidently we can let λ be any value such that ρ(x − z) ≤ λ(ρ(x − y) + ρ(y − z)) for all x, y, z. In Table ? we have calculated tight values of λ for the most commonly used M-estimators.
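The weak triangle inequality constant can be estimated numerically for a candidate ρ; the sketch below does this in one dimension for the k-means dissimilarity (a Monte Carlo check, not a proof; the observed ratio approaches λ = 2 from below):

```python
import random

def max_lambda_ratio(rho, trials=10000):
    """Numerically estimate the weak-triangle-inequality constant of a
    one-dimensional dissimilarity rho(a, b): the largest observed value
    of rho(x, z) / (rho(x, y) + rho(y, z))."""
    worst = 0.0
    for _ in range(trials):
        x, y, z = (random.uniform(-10, 10) for _ in range(3))
        denom = rho(x, y) + rho(y, z)
        if denom > 0:
            worst = max(worst, rho(x, z) / denom)
    return worst

sq = lambda a, b: (a - b) ** 2      # k-means dissimilarity (squared distance)
est = max_lambda_ratio(sq)          # bounded above by lambda = 2
```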

These results imply that O(k log t) space suffices to approximate M-estimators on a data stream. We remind the reader that we can use the technique of [5] to maintain a coreset which permits a (1 + ε)-approximation to the optimal M-estimator.

5.2 The Compression Routine

We present a deterministic algorithm that accepts a weighted set A of 2k distinct points along with a positive integer k and returns a weighted set B of at most k distinct points such that COST(A, B) < 2 · OPT_k(A). If the nearest-neighbor graph on A has been computed, then the routine terminates in O(k) time.

In what follows, there must be a way to order the points of A. This is necessary for a technical detail that comes up in Lemma ?: we need a consistent way to break ties. In practice, this can simply be the order in which the algorithm loops through the points of A.

We use the nearest-neighbor function to define a directed graph G on the vertex set A as follows: each point has a single outgoing edge, pointing to its nearest neighbor in A, with ties broken according to the fixed ordering of A.

G possesses the special structure of not containing any cycles of length greater than 2.

Let (v_1, …, v_ℓ) be a cycle such that for each i we have an edge from v_i to v_{i+1} (indices interpreted modulo ℓ). We will show that ℓ ≤ 2.

By definition of G, it must be that d(v_{i+1}, v_{i+2}) ≤ d(v_i, v_{i+1}), since v_i is a candidate nearest neighbor of v_{i+1}. Then the chain of inequalities d(v_1, v_2) ≥ d(v_2, v_3) ≥ ⋯ ≥ d(v_ℓ, v_1) ≥ d(v_1, v_2) implies equality throughout. Let v_j be the element of the cycle that is greatest according to the ordering of A. Since the predecessor and the successor of each cycle element are equidistant from it, the criterion for breaking ties in Definition ? forces each out-edge toward the earlier candidate; applied around the cycle, this is only consistent if the predecessor and successor of v_j coincide, and so ℓ = 2.

In light of Lemma ?, let us consider the structure of the directed graph G. Removing the edges in length-2 cycles, we are left with a forest (a collection of trees directed toward their roots). Considering the full graph along with these 2-cycles, we see that each component is a pair of trees whose roots are coupled. This forest of “bi-trees” can be 2-colored, and the following lemma shows that we can do this efficiently.

We say that $N(x)$ is the parent of $x$, and that $x$ is a child of $N(x)$. Each vertex has exactly one parent, and the edges point to the parent. For each point $x \in X$, we store a pointer to its parent as well as a list of pointers to its children. Given the function $N$, this can be accomplished in $O(n)$ time.

As reasoned above, the graph can be partitioned into bi-tree components. We use the following iterative procedure until all vertices have been colored: (1) Select any uncolored vertex; (2) Walk along the edges until reaching the two roots; (3) Color each root a different color; (4) Recursively color each child vertex the opposite color than its parent.

For Step 2, we will know we have located the roots when we return to the vertex we just left. This process (of moving from $x$ to $N(x)$ until reaching the root) terminates in time proportional to the depth of the tree. Since each component is traversed only once, a total of $O(n)$ time is spent during Step 2 over all iterations of this procedure.

For Step 4, finding a child takes $O(1)$ time since we have stored a list of children with each vertex. Therefore a total of $O(n)$ time is spent during Step 4 over all iterations of this procedure.
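The four-step coloring procedure can be sketched as follows (our own code; it assumes the parent function is given as an array `succ` in which every cycle is a 2-cycle of coupled roots, as Lemma ? guarantees):

```python
def two_color(succ):
    """2-color the components of the graph with edges i -> succ[i], where
    every cycle is a 2-cycle of coupled roots."""
    n = len(succ)
    children = [[] for _ in range(n)]   # child lists, built once in O(n)
    for i, p in enumerate(succ):
        children[p].append(i)
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue                     # Step 1: pick any uncolored vertex
        # Step 2: walk parent pointers until we return to the vertex we
        # just left -- that pair of vertices is the coupled roots.
        a = start
        while succ[succ[a]] != a:
            a = succ[a]
        b = succ[a]
        # Step 3: the two roots receive different colors.
        color[a], color[b] = 0, 1
        # Step 4: each child takes the color opposite to its parent.
        stack = [a, b]
        while stack:
            v = stack.pop()
            for c in children[v]:
                if color[c] is None:
                    color[c] = 1 - color[v]
                    stack.append(c)
    return color

# A toy parent function: 0 and 1 are coupled roots; 2 hangs off 0; 3 off 1; 4 off 3.
succ_demo = [1, 0, 0, 1, 3]
colors = two_color(succ_demo)
```

Every edge of the demo graph joins opposite colors, i.e. the coloring is proper.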

We will need the following technical lemma to bound the cost of the solution we produce. Recall that the optimal cost is defined using centers from anywhere in the metric space. The lemma says that if we restrict to centers from $X$ itself, then the optimal cost increases by less than a factor of two.

Let $C = \{c_1, \ldots, c_k\}$ be a set of $k$ points achieving the optimal cost. For each $c_i$, let $\hat{c}_i$ be the closest point of $X$ to $c_i$. Any element $x$ that was connected to $c_i$ can instead be connected to $\hat{c}_i$ with a cost of $d(x, \hat{c}_i) \le d(x, c_i) + d(c_i, \hat{c}_i) \le 2\, d(x, c_i)$, since $d(c_i, \hat{c}_i) \le d(c_i, x)$ by the choice of $\hat{c}_i$. Moreover, the cost of this cluster increased by strictly less than a factor of two: if $\hat{c}_i \ne c_i$ then the cost strictly decreased for $\hat{c}_i$ itself, and if $\hat{c}_i = c_i$ then the cost stayed the same. Define $\hat{C} = \{\hat{c}_1, \ldots, \hat{c}_k\}$. Then $\hat{C} \subseteq X$ is a set of at most $k$ points whose cost is strictly less than twice the optimal.
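The factor-two bound in the construction in fact holds for any set of centers, not only optimal ones. The following sketch (our own code; 1-dimensional points, $k$-median cost) checks this empirically by snapping random centers to their nearest data points:

```python
import random

def cost(points, centers):
    """k-median cost: each point pays its distance to the closest center."""
    return sum(min(abs(p - c) for c in centers) for p in points)

def snap_to_data(points, centers):
    """The lemma's construction: replace each center by its closest data point."""
    return [min(points, key=lambda p: abs(p - c)) for c in centers]

rng = random.Random(1)
ratios = []
for _ in range(200):
    pts = [rng.uniform(0, 100) for _ in range(30)]
    ctrs = [rng.uniform(0, 100) for _ in range(3)]
    # Ratio of the snapped cost to the original cost; always strictly below 2.
    ratios.append(cost(pts, snap_to_data(pts, ctrs)) / cost(pts, ctrs))
```

Note that snapping can even decrease the cost, so the ratio may dip below one; the lemma only promises it never reaches two.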

The algorithm is presented in the next theorem. The basic idea is to 2-color $G$ and then eliminate one of the colors by relocating those points to their image under $N$. Since $N$ maps each point to a point of the opposite color, this transformation increases the weights of one color while completely eliminating the other.

Let the function $w$ map each point of $X$ to its weight. By Lemma ?, we 2-color $G$ in $O(n)$ time. Let $A$ and $B$ be the partition into the two colors of what remains of $X$ after removing the points with the top values of $w$. Let $A$ be the larger component (by number of points) and note that $A$ contains at least half of the remaining points.

Build $X'$ from $X$ as follows: for each $x \in A$, increment $w(N(x))$ by $w(x)$ and delete $x$. This procedure terminates in $O(n)$ time. By the definition of a 2-coloring, $N(x) \notin A$ for every element $x \in A$, and so $X'$ contains at most half of the remaining points along with the set-aside heavy points.

Observe that the transformation moves each point $x \in A$ by exactly $d(x, N(x))$, its distance to its nearest neighbor in $X$. Any solution of $X$ using centers from $X$ itself must move each non-center point by at least the distance to its nearest neighbor, so the total movement incurred is bounded by the cost of the optimal solution with centers restricted to $X$. This completes the proof, since Lemma ? guarantees that restricting centers to $X$ increases the optimal cost by less than a factor of two.
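The theorem's transformation can be sketched end-to-end as follows (our own code, not the paper's; it works on the real line, breaks distance ties by the smaller index, and for brevity omits the step of setting aside the heaviest points before coloring):

```python
def halve(points, weights):
    """One round of the routine: build the nearest-neighbor graph, 2-color it,
    then eliminate the larger color class by folding each of its points onto
    its nearest neighbor and adding its weight there."""
    n = len(points)

    def nn(i):  # nearest neighbor, distance ties broken by smaller index
        return min((j for j in range(n) if j != i),
                   key=lambda j: (abs(points[i] - points[j]), j))

    succ = [nn(i) for i in range(n)]
    # The undirected nearest-neighbor graph is a forest of bi-trees, hence
    # bipartite: 2-color it with a depth-first search per component.
    adj = [set() for _ in range(n)]
    for i, j in enumerate(succ):
        adj[i].add(j)
        adj[j].add(i)
    color = [None] * n
    for s in range(n):
        if color[s] is None:
            color[s] = 0
            stack = [s]
            while stack:
                v = stack.pop()
                for u in adj[v]:
                    if color[u] is None:
                        color[u] = 1 - color[v]
                        stack.append(u)
    # Eliminate the larger color class: each dropped point folds onto its
    # nearest neighbor, which has the opposite color and therefore survives.
    count0 = color.count(0)
    drop = 0 if count0 >= n - count0 else 1
    w = dict(enumerate(weights))
    for i in range(n):
        if color[i] == drop:
            w[succ[i]] += w.pop(i)
    kept = sorted(w)
    return [points[i] for i in kept], [w[i] for i in kept]

new_pts, new_wts = halve([0.0, 1.0, 3.0, 3.5, 7.0, 7.2], [1, 1, 1, 1, 1, 1])
```

On this input the six unit-weight points collapse to three points of weight two each; the total weight is conserved, as the routine requires.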

To complete the guarantee of Theorem ?, observe that we can return the exact cost of the transformation in $O(n)$ time.

6 Discussion of Models for Semirandom Order

IITK Open Problem #15 addresses computational models of semirandom order for streams and asks how the models of $t$-bounded adversarial order and $t$-generated random order relate to each other [27]. One can verify that one extreme of the parameter in each model is equivalent to random order, while the other extreme is equivalent to adversarial order. However, we demonstrate in the following two lemmas that no other relations hold between these models.

Let the stream consist of elements . Let denote the identity of the element in the initial random-order stream. For any , a -bounded adversary can ensure that for all . Let be the event that . In random-order, observe that and that the are mutually independent. Therefore the uniform distribution assigns probability mass to the orders satisfying for some . Any distribution that assigns probability mass to must satisfy

Let $S$ be a subset of the elements. In random order, let $E$ be the event that at least one element of $S$ arrives among the first elements. This probability is strictly less than one, so we can create a distribution under which $E$ occurs with probability one.

In the $t$-bounded adversarial-order model, elements are sent in random order but intercepted by an adversary who manipulates the order in which elements arrive for the algorithm. Observe that an element among the last to be sent cannot be among the first to arrive. If $S$ is small enough, then with positive probability all elements of $S$ are among the last to be sent, in which case $E$ cannot occur; therefore the probability of $E$ is bounded away from one. The result follows by choosing the parameters accordingly.

Our matching bounds for online facility location show a non-trivial degradation of performance in the $t$-bounded adversarial-order model that smoothly interpolates between random order and adversarial order. This supports the claim that $t$-bounded adversarial order is a viable model of semi-randomness. In contrast, it is trivial to show matching bounds in the $t$-generated model: any bound in expectation which holds in random order and in adversarial order implies a corresponding bound in the $t$-generated model. As a result, the $t$-generated model is not interesting for a wide class of problems. This class is rather large, since any bound that holds with sufficiently high probability implies a bound in expectation.

We conclude with an open question:

Open Question: All the results in this paper degrade smoothly between the random-order and adversarial-order bounds as $t$ increases. However, some problems exhibit a sharp phase transition. For example, the size of the largest component in an Erdős-Rényi graph $G(n, p)$ jumps from $O(\log n)$ to $\Theta(n)$ as $p$ crosses $1/n$. In the $t$-bounded adversarial-order model, is it always the case that bounds degrade smoothly as $t$ increases? Alternatively, do problems exist that exhibit a sharp jump in some quantity of interest (i.e. time, space, or approximation factor) when $t$ increases by only a constant factor around some critical value?


  1. Most of the results shown in Table ? actually output more than $k$ centers instead of exactly $k$. However, we observe that the result of [29] implies that any such solution can be converted to a solution of exactly $k$ centers with only a constant-factor loss.
  2. We use the word “set” to actually mean “multiset”. Multisets may contain multiple copies of the same element.
  3. The aspect ratio of a metric space is the ratio between the maximum and minimum non-zero distance between points.
  4. If node is contained in the subtree of node , we say that is a descendant of and that is an ancestor of .
  5. An efficiently computed solution will have a larger approximation factor; both the $k$-median and $k$-means problems are MAX-SNP hard. See the related work section of [4] for a survey of hardness results.


  1. Adaptive sampling for k-means clustering.
    Ankit Aggarwal, Amit Deshpande, and Ravi Kannan. In APPROX-RANDOM, pages 15–28, 2009.
  2. A simple and deterministic competitive algorithm for online facility location.
    Aris Anagnostopoulos, Russell Bent, Eli Upfal, and Pascal Van Hentenryck. Inf. Comput., 194(2):175–202, November 2004.
  3. Better bounds for frequency moments in random-order streams.
    Alexandr Andoni, Andrew McGregor, Krzysztof Onak, and Rina Panigrahy. CoRR, abs/0808.2222, 2008.
  4. The hardness of approximation of Euclidean k-means.
    Pranjal Awasthi, Moses Charikar, Ravishankar Krishnaswamy, and Ali Kemal Sinop. In SoCG, volume 34, pages 754–767, 2015.
  5. New frameworks for offline and streaming coreset constructions.
    Vladimir Braverman, Dan Feldman, and Harry Lang. CoRR, abs/1612.00889, 2016.
  6. Streaming k-means on well-clusterable data.
    Vladimir Braverman, Adam Meyerson, Rafail Ostrovsky, Alan Roytman, Michael Shindler, and Brian Tagiku. In SODA, pages 26–40, 2011.
  7. Robust lower bounds for communication and stream computation.
    Amit Chakrabarti, Graham Cormode, and Andrew McGregor. In STOC, pages 641–650, 2008.
  8. Tight lower bounds for selection in randomly ordered streams.
    Amit Chakrabarti, T. S. Jayram, and Mihai Pǎtraşcu. In SODA, pages 720–729, 2008.
  9. Near-optimal lower bounds on the multi-party communication complexity of set disjointness.
    Amit Chakrabarti, Subhash Khot, and Xiaodong Sun. In IEEE Conference on Computational Complexity, pages 107–117, 2003.
  10. Better streaming algorithms for clustering problems.
    Moses Charikar, Liadan O’Callaghan, and Rina Panigrahy. In STOC, pages 30–39, 2003.
  11. On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications.
    Ke Chen. SIAM J. Comput., 39(3):923–947, August 2009.
  12. Stochastic streams: Sample complexity vs. space complexity.
    Michael Crouch, Andrew McGregor, Gregory Valiant, and David P. Woodruff. In ESA, volume 57, pages 32:1–32:15, 2016.
  13. Streaming algorithms for estimating the matching size in planar graphs and beyond.
    Hossein Esfandiari, Mohammad T. Hajiaghayi, Vahid Liaghat, Morteza Monemizadeh, and Krzysztof Onak. In SODA, pages 1217–1233, 2015.
  14. A unified framework for approximating and clustering data.
    Dan Feldman and Michael Langberg. In STOC, pages 569–578, 2011.
  15. On the competitive ratio for online facility location.
    Dimitris Fotakis. Algorithmica, 50(1):1–57, December 2007.
  16. Revisiting the direct sum theorem and space lower bounds in random order streams.
    Sudipto Guha and Zhiyi Huang. In ICALP, pages 513–524, 2009.
  17. Approximate quantiles and the order of the stream.
    Sudipto Guha and Andrew McGregor. In PODS, pages 273–279, 2006.
  18. Lower bounds for quantile estimation in random-order and multi-pass streaming.
    Sudipto Guha and Andrew McGregor. In ICALP, pages 704–715, 2007.
  19. Space-efficient sampling.
    Sudipto Guha and Andrew Mcgregor. In AISTATS, volume 2, pages 171–178, 2007.
  20. Stream order and order statistics: Quantile estimation in random-order streams.
    Sudipto Guha and Andrew McGregor. SIAM J. Comput., 38(5):2044–2059, January 2009.
  21. Streaming and sublinear approximation of entropy and information distances.
    Sudipto Guha, Andrew McGregor, and Suresh Venkatasubramanian. In SODA, pages 733–742, 2006.
  22. Clustering data streams: Theory and practice.
    Sudipto Guha, Adam Meyerson, Nina Mishra, Rajeev Motwani, and Liadan O’Callaghan. IEEE Trans. on Knowl. and Data Eng., 15(3):515–528, March 2003.
  23. Sublinear time algorithms for metric space problems.
    Piotr Indyk. In STOC, pages 428–434, 1999.
  24. Approximation algorithms for metric facility location and k-median problems using the primal-dual schema and Lagrangian relaxation.
    Kamal Jain and Vijay V. Vazirani. J. ACM, 48(2):274–296, March 2001.
  25. Approximating matching size from random streams.
    Michael Kapralov, Sanjeev Khanna, and Madhu Sudan. In SODA, pages 734–751, 2014.
  26. Maximum matching in semi-streaming with few passes.
    Christian Konrad, Frédéric Magniez, and Claire Mathieu. In APPROX-RANDOM, 2012.
  27. List of open problems in sublinear algorithms: Problem 15.
    Andrew McGregor.
  28. The shifting sands algorithm.
    Andrew McGregor and Paul Valiant. In SODA, pages 453–458, 2012.
  29. The online median problem.
    Ramgopal R. Mettu and C. Greg Plaxton. SIAM J. Comput., 32(3):816–832, March 2003.
  30. Optimal time bounds for approximate clustering.
    Ramgopal R. Mettu and C. Greg Plaxton. Mach. Learn., 56(1-3):35–60, June 2004.
  31. Online facility location.
    A. Meyerson. In FOCS, pages 426–, 2001.
  32. Randomized composable core-sets for distributed submodular maximization.
    Vahab Mirrokni and Morteza Zadimoghaddam. In STOC, pages 153–162, 2015.
  33. Selection and sorting with limited storage.
    J. I. Munro and M. S. Paterson. In SFCS, pages 253–258, 1978.
  34. The average-case complexity of counting distinct elements.
    David P. Woodruff. In ICDT, pages 284–295, 2009.