Faster Algorithms for some Optimization Problems on Collinear Points

Ahmad Biniaz, Ian Munro (Cheriton School of Computer Science, University of Waterloo)
Prosenjit Bose, Anil Maheshwari, Michiel Smid (School of Computer Science, Carleton University, {jit, anil, michiel}@scs.carleton.ca)
Paz Carmi (Department of Computer Science, Ben-Gurion University of the Negev, carmip@cs.bgu.ac.il)
July 12, 2019
Abstract

We propose faster algorithms for the following three optimization problems on collinear points, i.e., points in dimension one. The first two problems are known to be NP-hard in higher dimensions.

1. Maximizing total area of disjoint disks: In this problem the goal is to maximize the total area of nonoverlapping disks centered at the points. Acharyya, De, and Nandy (2017) presented an O(n²)-time algorithm for this problem. We present an optimal O(n)-time algorithm (assuming the points are given in sorted order).

2. Minimizing sum of the radii of client-server coverage: The points are partitioned into two sets, namely clients and servers. The goal is to minimize the sum of the radii of disks centered at servers such that every client is in some disk, i.e., in the coverage range of some server. Lev-Tov and Peleg (2005) presented an O(n³)-time algorithm for this problem. We present an O(n²)-time algorithm, thereby improving the running time by a factor of n.

3. Minimizing total area of point-interval coverage: The input points belong to an interval I. The goal is to find a set of disks of minimum total area, covering I, such that every disk contains at least one input point. We present an algorithm that solves this problem in O(n²) time.

1 Introduction

Range assignment is a well-studied class of geometric optimization problems that arises in wireless network design, and has a rich literature. The task is to assign transmission ranges to a set of given base station antennas such that the resulting network satisfies a given property. The antennas are usually represented by points in the plane. The coverage region of an antenna is usually represented by a disk whose center is the antenna and whose radius is the transmission range assigned to that antenna. In this model, a range assignment problem can be interpreted as the following problem. Given a set of points in the plane, we must choose a radius for each point, so that the disks with these radii satisfy a given property.

Let P be a set of points in d-dimensional Euclidean space. A range assignment for P is an assignment of a transmission range (radius) r(p) to each point p of P. The cost of a range assignment, representing the power consumption of the network, is defined as cost(r) = ∑_{p∈P} r(p)^α for some constant α ⩾ 1. We study the following three range assignment problems on a set of n points on a straight line (1-dimensional Euclidean space).

Problem 1

Given a set {p_1, …, p_n} of collinear points, maximize the total area of nonoverlapping disks centered at these points. The nonoverlapping constraint requires r_i + r_{i+1} to be no larger than the Euclidean distance between p_i and p_{i+1}, for every i ∈ {1, …, n−1}, where r_i denotes the radius of the disk centered at p_i.

Problem 2

Given a set of collinear points that is partitioned into two sets, namely clients and servers, the goal is to minimize the sum of the radii of disks centered at the servers such that every client is in some disk, i.e., every client is covered by at least one server.

Problem 3

Given a set of input points on an interval, minimize the total area of disks covering the entire interval such that every disk contains at least one input point.

In Problem 1 we want to maximize the sum of the squared radii, in Problem 2 we want to minimize the sum of the radii, and in Problem 3 we want to minimize the sum of the squared radii. These three problems are solvable in polynomial time in 1 dimension. Both Problem 1 and Problem 2 are NP-hard in dimension d, for every d ⩾ 2, and both have a PTAS [2, 3, 4].

Acharyya et al. [2] showed that Problem 1 can be solved in O(n²) time. Eppstein [7] proved that an alternate version of this problem, where the goal is to maximize the sum of the radii, can be solved in polynomial time for any constant dimension d. Bilò et al. [4] showed that Problem 2 is solvable in polynomial time by reducing it to an integer linear program with a totally unimodular constraint matrix. Lev-Tov and Peleg [9] presented an O(n³)-time algorithm for this problem. They also presented a linear-time 4-approximation algorithm. Alt et al. [3] improved the ratio of this linear-time algorithm to 3. They also presented an O(n log n)-time 2-approximation algorithm for Problem 2. Chambers et al. [6] studied a variant of Problem 3, on collinear points, where the disks are centered at the input points; they showed that the best solution with two disks gives a constant-factor approximation. Carmi et al. [5] studied a similar version of the problem for points in the plane.

1.1 Our Contributions

In this paper we study Problems 1–3. In Section 2, we present an algorithm that solves Problem 1 in linear time, provided that the points are given in sorted order along the line. This improves the previous best running time by a factor of n. In Section 3, we present an algorithm that solves Problem 2 in O(n²) time; this also improves the previous best running time by a factor of n. In Section 4, we first present a simple O(n³)-time algorithm for Problem 3. Then, with a more involved proof, we show how to improve the running time to O(n²).

2 Problem 1: Disjoint Disks with Maximum Area

In this section we study Problem 1: Let p_1, …, p_n be points on a straight line that are given in sorted order. We want to assign to every point p_i a radius r_i such that the disks with these radii do not overlap and their total area, or equivalently ∑ r_i², is as large as possible. Acharyya et al. [1] showed how to obtain such an assignment in O(n²) time. We show how to obtain such an assignment in linear time.

Theorem 1.

Given n collinear points in sorted order in the plane, in O(n) time, we can find a set of nonoverlapping disks centered at these points that maximizes the total area of the disks.

With a suitable rotation we assume that the line is horizontal. Moreover, we assume that p_1, …, p_n is the sequence of points of P in increasing order of their x-coordinates. We refer to a set of nonoverlapping disks centered at points of P as a feasible solution. We refer to the disks in a feasible solution S that are centered at p_1, …, p_n as D_1, …, D_n, respectively. Also, we denote the radius of D_i by r_i; it might be that r_i = 0. For a feasible solution S we define area(S) = ∑_{i=1}^{n} r_i². Since the total area of the disks in S is π·area(S), hereafter, we refer to area(S) as the total area of the disks in S. We call D_i a full disk if it has p_{i−1} or p_{i+1} on its boundary, a zero disk if its radius is zero, and a partial disk otherwise. For two points p and q, we denote the Euclidean distance between p and q by |pq|.

We briefly review the O(n²)-time algorithm of Acharyya et al. [1]. First, compute a set D of disks centered at points of P that contains an optimal solution. For every disk D ∈ D that is centered at a point p, define a weighted interval whose length is 2r, where r is the radius of D, and whose center is p. Set the weight of this interval to be r². Let I be the set of these intervals. The disks corresponding to the intervals in a maximum-weight independent set of the intervals in I form an optimal solution to Problem 1. By construction, these disks are nonoverlapping, centered at points of P, and maximize the total area. Since a maximum-weight independent set of intervals that are given in sorted order of their left endpoints can be computed in time linear in the number of intervals [8], the time complexity of the above algorithm is essentially dominated by the size of D. Acharyya et al. [1] showed how to compute such a set D of size O(n²) and order the corresponding intervals in O(n²) time. Therefore, the total running time of their algorithm is O(n²).
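The maximum-weight independent set step can be illustrated with a standard weighted interval scheduling sketch. The O(n log n) binary-search variant below (the function name and interval representation are ours, not from the paper) conveys the idea; the linear-time algorithm of [8] additionally exploits the sorted order of the endpoints.

```python
import bisect

def max_weight_disjoint_intervals(intervals):
    # intervals: list of (left, right, weight). Returns the maximum total
    # weight of a subset of pairwise non-overlapping intervals (touching
    # at an endpoint is allowed, matching non-overlapping disks).
    iv = sorted(intervals, key=lambda t: t[1])   # sort by right endpoint
    rights = [r for (_, r, _) in iv]
    n = len(iv)
    dp = [0.0] * (n + 1)        # dp[k] = best value using first k intervals
    for k in range(1, n + 1):
        l, r, w = iv[k - 1]
        # rightmost earlier interval ending at or before l
        j = bisect.bisect_right(rights, l, 0, k - 1)
        dp[k] = max(dp[k - 1], dp[j] + w)
    return dp[n]
```

A disk of radius r centered at p corresponds to the interval (p − r, p + r, r²), so the returned value is the optimal area(S) over the candidate set.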

We show how to improve the running time to O(n). In fact, we show how to find such a set D of size O(n) and order the corresponding intervals in O(n) time, provided that the points of P are given in sorted order.

2.1 Computation of D

In this section we show how to compute a set D with a linear number of disks such that every disk in an optimal solution for Problem 1 belongs to D.

Our set D is the union of three sets D_0, D_1, and D_2 of disks that are computed as follows. The set D_0 contains all the full disks and zero disks that are centered at points of P. We compute D_1 by traversing the points of P from left to right as follows; the computation of D_2 is symmetric. For each point p_i with 1 < i < n we define its signature as

 s(p_i) = + if |p_{i−1} p_i| ⩽ |p_i p_{i+1}|,  and  s(p_i) = − if |p_{i−1} p_i| > |p_i p_{i+1}|.

Set s(p_1) = − and s(p_n) = −. We refer to the sequence s(p_1), …, s(p_n) as the signature sequence of P. Let S be the multiset that contains all contiguous subsequences s(p_i), …, s(p_j) of the signature sequence, with i < j, such that s(p_i) = s(p_j) = −, and s(p_k) = + for all i < k < j; if j = i + 1, then there is no such k. For example, if the signature sequence is −++−−+−, then S = {−++−, −−, −+−}. Observe that for every sequence s(p_i), …, s(p_j) in S we have that

 |p_i p_{i+1}| ⩽ |p_{i+1} p_{i+2}| ⩽ |p_{i+2} p_{i+3}| ⩽ ⋯ ⩽ |p_{j−1} p_j|,  and  |p_{j−1} p_j| > |p_j p_{j+1}|.

Every plus sign in the signature sequence belongs to at most one sequence in S, and every minus sign belongs to at most two sequences in S. Therefore, the size of S (the total length of its sequences) is at most 2n. For each sequence s(p_i), …, s(p_j) in S we add some disks to D_1 as follows. Consider the full disk at p_j that has p_{j+1} on its boundary (for j = n, take the full disk with p_{n−1} on its boundary). Iterate on k = j − 1, j − 2, …, i. In each iteration, consider the disk that is centered at p_k and touches the disk of the previous iteration. If this disk does not contain p_{k−1} and its area is smaller than the area of the previous disk, then add it to D_1 and proceed to the next iteration; otherwise, terminate the iteration. See Figure 1. This finishes the computation of D_1. Notice that D_1 contains at most 2n disks. The computation of D_2 is symmetric; it is done in a similar way by traversing the points from right to left (all plus signatures become minus and vice versa).
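As a rough illustration of the signature machinery, the following sketch computes the signature sequence and the index ranges of the sequences in S. The boundary convention s(p_1) = s(p_n) = − follows the reconstruction above and, like all names here, is an assumption of this sketch rather than code from the paper.

```python
def signature_sequences(xs):
    # xs: sorted x-coordinates p_1..p_n (0-based here). For interior i,
    # s[i] = '+' if the gap to the left of p_i is at most the gap to its
    # right, else '-'. Boundary signatures are set to '-' (assumed).
    # Returns the (start, end) index pairs of the subsequences
    # "- + ... + -" that make up the multiset S.
    n = len(xs)
    s = ['-'] * n
    for i in range(1, n - 1):
        s[i] = '+' if xs[i] - xs[i - 1] <= xs[i + 1] - xs[i] else '-'
    seqs = []
    prev_minus = 0                      # s[0] == '-'
    for j in range(1, n):
        if s[j] == '-':
            seqs.append((prev_minus, j))  # one '-' ... '+'* ... '-' run
            prev_minus = j
    return seqs
```

Between two consecutive minus signs all signatures are plus by construction, so each returned pair is exactly one sequence of S; each minus can appear in at most two pairs, matching the 2n bound on the size of S.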

The number of disks in D is O(n). The signature sequence can be computed in linear time. Having the signature sequence, we can compute the multiset S, and the disks in D_1 as well as the corresponding intervals, as in [1] and described before, in sorted order of their left endpoints in O(n) total time. Then the sorted intervals corresponding to the disks in D can be computed in linear time by merging the sorted intervals that correspond to the sets D_0, D_1, and D_2. It remains to show that D contains an optimal solution for Problem 1. To that end, we first prove two lemmas about the structural properties of an optimal solution.

Lemma 1.

Every feasible solution S for Problem 1 can be converted to a feasible solution S′ in which D_1 and D_n are full disks and area(S′) ⩾ area(S).

Proof.

Recall that p_1 is the leftmost point of P. We prove this lemma for D_1; the proof for D_n is similar. Since S is a feasible solution, we have r_1 + r_2 ⩽ |p_1 p_2|. Let S′ be the solution that is obtained from S by making D_1 a full disk, with p_2 on its boundary, and making D_2 a zero disk. Since we do not increase the radius of D_2, it does not overlap D_3, and thus S′ is a feasible solution. In S′, the radius of D_1 is |p_1 p_2| ⩾ r_1 + r_2, and we have (r_1 + r_2)² ⩾ r_1² + r_2². This implies that area(S′) ⩾ area(S). ∎

Lemma 2.

If D_i, with 1 < i < n, is a partial disk in an optimal solution, then r_i < max{r_{i−1}, r_{i+1}}.

Proof.

The proof is by contradiction; let S be an optimal solution in which r_i ⩾ max{r_{i−1}, r_{i+1}} for some partial disk D_i. First assume that D_i touches at most one of D_{i−1} and D_{i+1}. By slightly enlarging D_i and shrinking its touching neighbor we can increase the total area of S. Without loss of generality suppose that D_i touches D_{i−1}. Since r_i ⩾ r_{i−1},

 (r_i + ε)² + (r_{i−1} − ε)² = r_i² + r_{i−1}² + 2(r_i ε − r_{i−1} ε + ε²) > r_i² + r_{i−1}²,

for any ε > 0. This contradicts the optimality of S. Now, assume that D_i touches both D_{i−1} and D_{i+1}, and, without loss of generality, that r_{i−1} ⩽ r_{i+1}. See Figure 2. We obtain a solution S′ from S by enlarging D_i as much as possible, and simultaneously shrinking both D_{i−1} and D_{i+1}. This makes D_{i−1} a zero disk, makes D_i a full disk with p_{i−1} on its boundary, makes D_{i+1} a zero or a partial disk, and does not change the other disks. The difference between the total areas of S′ and S is

 ((r_i + r_{i−1})² + (r_{i+1} − r_{i−1})²) − (r_{i−1}² + r_i² + r_{i+1}²) = r_{i−1}² + 2 r_{i−1}(r_i − r_{i+1}) > 0;

this inequality is valid since r_{i−1} > 0 and r_i ⩾ r_{i+1}. This contradicts the optimality of S. ∎

Lemma 3.

The set D contains an optimal solution for Problem 1.

Proof.

It suffices to show that every disk D_i, centered at p_i, in an optimal solution S belongs to D. By Lemma 1, we may assume that both D_1 and D_n are full disks. If D_i is a full disk or a zero disk, then it belongs to D_0. Assume that D_i is a partial disk. Since S is optimal, D_i touches at least one of D_{i−1} and D_{i+1}, because otherwise we could enlarge D_i.

First assume that D_i touches exactly one disk, say D_{i+1}. We are going to show that D_i belongs to D_1 (if D_i touches only D_{i−1}, by a similar reasoning we can show that D_i belongs to D_2). Notice that r_i < r_{i+1}, because otherwise we could enlarge D_i and shrink D_{i+1} simultaneously to increase area(S), which contradicts the optimality of S. Since D_i is partial and touches D_{i+1}, the disk D_{i+1} is either full or partial. If D_{i+1} is full, then it has p_{i+2} on its boundary, and thus s(p_{i+1}) = −. By our definition of S, the signature s(p_{i+1}) is the last element of some sequence in S. Then by our construction of D_1, both D_i and D_{i+1} belong to D_1, where D_{i+1} plays the role of the full disk. Assume that D_{i+1} is partial. Then D_{i+1} touches D_{i+2}, because otherwise we could enlarge D_{i+1} and shrink D_i simultaneously to increase area(S). Recall that r_i < r_{i+1}. Lemma 2 implies that r_{i+1} < max{r_i, r_{i+2}}. This implies that r_{i+1} < r_{i+2}, and thus |p_i p_{i+1}| = r_i + r_{i+1} < r_{i+1} + r_{i+2} = |p_{i+1} p_{i+2}|, that is, s(p_{i+1}) = +. Since D_{i+1} is partial and touches D_{i+2}, the disk D_{i+2} is either full or partial. If D_{i+2} is full, then it has p_{i+3} on its boundary, and thus s(p_{i+2}) = −. By a similar reasoning as above, based on the definition of S and the construction of D_1, we get that D_i, D_{i+1}, and D_{i+2} are in D_1. If D_{i+2} is partial, then it touches D_{i+3} and again by Lemma 2 we have r_{i+2} < r_{i+3} and consequently s(p_{i+2}) = +. By repeating this process, we stop at some point p_k, with i < k ⩽ n, for which D_k is a full disk with p_{k+1} on its boundary; notice that such a k exists because D_n is a full disk and consequently D_{n−1} is a zero disk. To this end we have that s(p_k) = − and s(p_m) = + for all i < m < k, so s(p_{i+1}), …, s(p_k) is a plus sequence ending with a minus. Thus, it is a subsequence of some sequence in S. Our construction of D_1 implies that all the disks D_i, …, D_k belong to D_1.

Now assume that D_i touches both D_{i−1} and D_{i+1}. By Lemma 2, r_i is strictly smaller than the radius of the larger of these two disks, say D_{i+1}. By a similar reasoning as in the previous case we get that D_i belongs to D_1 (or to D_2 if the larger disk is D_{i−1}). ∎

3 Problem 2: Client-Server Coverage with Minimum Radii

In this section we study Problem 2: Let P be a set of n points on a straight line that is partitioned into two sets, namely clients and servers. We want to assign to every server a radius such that the disks with these radii cover all clients and the sum of the radii is as small as possible. Bilò et al. [4] showed that this problem can be solved in polynomial time. Lev-Tov and Peleg [9] showed how to obtain such an assignment in O(n³) time. Alt et al. [3] presented an O(n log n)-time 2-approximation algorithm for this problem. We show how to solve this problem optimally in O(n²) time.

Theorem 2.

Given a total of n collinear clients and servers, in O(n²) time, we can find a set of disks centered at servers that cover all clients and for which the sum of the radii of the disks is minimum.

Without loss of generality assume that the line is horizontal, and that p_1, …, p_n is the sequence of points of P in increasing order of their x-coordinates. We refer to a disk with radius zero as a zero disk, to a set of disks centered at servers and covering all clients as a feasible solution, and to the sum of the radii of the disks in a feasible solution as its cost. We denote the radius of a disk D by r(D), and denote by D(p, q) the disk that is centered at the point p and has the point q on its boundary.

We describe a top-down dynamic programming algorithm that maintains a table T with entries T(1), …, T(n). Each table entry T(k) represents the cost of an optimal solution for the subproblem P_k that consists of the points p_1, …, p_k. The optimal cost of the original problem will be stored in T(n); the optimal solution itself can be recovered from T. In the rest of this section we show how to solve a subproblem P_k. In fact, we show how to compute T(k) recursively by a top-down dynamic programming algorithm. To that end, we first describe our three base cases:

• There is no client in P_k. In this case T(k) = 0.

• There are some clients in P_k but no server. In this case T(k) = +∞.

• There are some clients and exactly one server, say s, in P_k. In this case T(k) is the radius of the smallest disk that is centered at s and covers all the clients of P_k.

Assume now that the subproblem P_k has at least one client and at least two servers. We are going to derive a recursion for T(k).

Observation 1.

Every disk in any optimal solution has a client on its boundary.

Lemma 4.

No disk contains the center of some other non-zero disk in an optimal solution.

Proof.

Our proof is by contradiction. Let D_1 and D_2 be two disks in an optimal solution such that D_1 contains the center of D_2. Let c_1 and c_2 be the centers of D_1 and D_2, respectively, and let r_1 and r_2 be the radii of D_1 and D_2, respectively. See Figure 3(a). Since D_1 contains c_2, we have |c_1 c_2| ⩽ r_1. Let D be the disk of radius max{r_1, |c_1 c_2| + r_2} that is centered at c_1. Notice that D covers all the clients that are covered by D_1 and D_2. By replacing D_1 and D_2 with D we obtain a feasible solution whose cost is smaller than the optimal cost, because max{r_1, |c_1 c_2| + r_2} < r_1 + r_2. This contradicts the optimality of the initial solution. ∎

Let c be the rightmost client in P_k. For a disk D that covers c, let ψ(D) be the smallest index for which the point p_{ψ(D)} is in the interior or on the boundary of D, i.e., ψ(D) is the index of the leftmost point of P that is in D. See Figure 3(b).

We claim that only one disk in an optimal solution can cover c: if two disks cover c and their centers lie on the same side of c, we get a contradiction to Lemma 4, and if their centers lie on different sides of c, then by removing the disk whose center is to the right of c we obtain a feasible solution with smaller cost. Let S* be an optimal solution (with minimum sum of the radii) that has a maximum number of non-zero disks. Let D be the disk in S* that covers c. All the clients that lie in D are covered by D, and thus they do not need to be covered by any other disk. As a consequence of Lemma 4, the servers that are in D and the servers that lie to the right of D cannot be used to cover any client to the left of D. Therefore, given D, the problem reduces to a smaller instance that consists of the points to the left of D, i.e., p_1, …, p_{ψ(D)−1}. See Figure 3(b). Thus, the cost of the optimal solution for the subproblem P_k can be computed as T(k) = T(ψ(D) − 1) + r(D).

In the rest of this section we compute a set D_k of disks, each of which covers c. Then we claim that D belongs to D_k. Therefore, we can compute T(k) by this recursion:

 T(k) = min{ T(ψ(D) − 1) + r(D) : D ∈ D_k }.

We compute D_k in two phases. In the first phase, for every server s we add the disk D(s, c) to D_k. In the second phase, we consider the servers that are to the left of c and the servers that are to the right of c separately. Let s_1, …, s_m be the sequence of servers that are to the left of c, from left to right; see Figure 4. For every i ∈ {1, …, m}, let a_i be the left intersection point of the boundary of D(s_i, c) with the line. Set a_{m+1} = c. Let C_i be the longest sequence of consecutive clients between a_i and a_{i+1} that lie just before a_{i+1}; there is no server between a_{i+1} and the leftmost point of C_i. For every client c′ ∈ C_i we add the disk D(s_{i+1}, c′) to D_k, as in Figure 4, where s_{m+1} is taken to be the leftmost server to the right of c. Now we consider the servers s′_1, …, s′_{m′}, from left to right, that are to the right of c. See Figure 4. Notice that the optimal solution does not contain a disk that is centered at any of the servers s′_2, …, s′_{m′}, because otherwise we could replace such a disk D by a smaller disk centered at s′_1 that covers the same clients that are covered by D. Thus, we simply discard s′_2, …, s′_{m′}. For s′_1, the left intersection point of the boundary of D(s′_1, c) with the line is c itself (notice that here we have a_{m+1} = c), and its clients were already handled above; see Figure 4. This finishes the computation of D_k. In the first phase we added one disk to D_k for every server. In the second phase we added one disk for every client in C_i, for all i. The sets C_i are pairwise disjoint because each C_i contains only clients that lie between a_i and a_{i+1}. Thus, the total number of disks added to D_k in the second phase is at most the number of clients. Therefore, the total number of disks in D_k is at most n. The set D_k, and consequently the entry T(k), can be computed in O(n) time. Therefore, our dynamic programming algorithm computes all entries of T in O(n²) time.
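The prefix recursion behind T can be illustrated by the following brute-force sketch. It enumerates all candidate disks (one per maximal run of covered clients and server), so it runs in cubic time rather than the O(n²) achieved above with the pruned candidate set D_k; all names are ours.

```python
def min_sum_radii(clients, servers):
    # 1-D client-server coverage: cover every client coordinate by disks
    # centered at server coordinates, minimizing the sum of the radii.
    # T[k] = optimal cost for the first k clients (sorted left to right);
    # the last disk covers a suffix run of clients j..k for some j.
    cs = sorted(clients)
    n = len(cs)
    INF = float('inf')
    T = [0.0] + [INF] * n
    for k in range(1, n + 1):
        for j in range(1, k + 1):
            # one disk covers clients j..k; pick the cheapest server for it
            for s in servers:
                r = max(abs(s - cs[j - 1]), abs(s - cs[k - 1]))
                if T[j - 1] + r < T[k]:
                    T[k] = T[j - 1] + r
    return T[n]
```

The paper's algorithm replaces the inner double loop by the O(n)-size candidate set D_k, which is what brings the total cost down to O(n²).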

To complete the correctness proof of our algorithm, it only remains to show that D belongs to D_k. If D has c on its boundary, then D has been added to D_k in the first phase of the computation of D_k. Assume that D does not have c on its boundary. Let c′, with c′ ≠ c, be the client on the boundary of D (such a client exists by Observation 1). Let s be the center of D. Since c is the rightmost client and c′ is on the boundary of D, the client c′ is to the left of c (but s can be to the left or to the right of c). See Figure 5. Observe that c is in the interior of D. The intersection of the line with the disk difference D ∖ D(s, c) consists of two line segments; let ℓ be the one to the left.

Lemma 5.

There is no server on ℓ, and there is no server s′ such that the boundary of D(s′, c) intersects ℓ.

Proof.

We prove both statements by contradiction. To prove the first statement, assume that ℓ contains a server s′; see Figure 5(a). We can replace D by D(s, c) and the smallest disk that is centered at s′ and covers all the clients on ℓ. These two new disks cover all the clients that were covered by D. If s′ is strictly inside ℓ, then the sum of the radii of these two disks is smaller than the radius of D, so this replacement reduces the optimal cost, which contradicts the optimality of S*. If s′ is the right endpoint of ℓ, then the sum of the radii of these two disks is equal to the radius of D, so this replacement increases the number of non-zero disks without increasing the optimal cost; this contradicts our choice of S* (notice that, by Lemma 4, s′ has a zero disk in S*).

To prove the second statement, let s′ be a server such that the boundary of D(s′, c) intersects ℓ; see Figure 5(b). Since this boundary intersects ℓ, the server s′ lies to the left of s. In this configuration, the disk D(s′, c′) is contained in D (no matter whether s′ is to the left or to the right of c). Also, r(D(s′, c′)) is smaller than r(D), and D(s′, c′) covers all the clients that are covered by D. Thus, by replacing D with D(s′, c′) we obtain a feasible solution whose cost is smaller than the optimal cost, which is a contradiction. ∎

Let C be the set of all clients on ℓ (including c′). By the first statement of Lemma 5, there is no server among or to the right of the clients in C on ℓ, so these clients are consecutive. By the second statement of Lemma 5, no point a_i of the second-phase construction lies in the interior of ℓ, so the clients in C all lie just before the point a that corresponds to s. These constraints imply that C is contained in the set C_i associated with the server s. Therefore, the disk D = D(s, c′) is contained in D_k, since it was added in the second phase of the construction. This finishes the correctness proof.

4 Problem 3: Point-Interval Coverage with Minimum Area

Let I be an interval on the x-axis in the plane. We say that a set of disks covers I if I is a subset of the union of the disks in this set. Let P be a set of n points on I such that both endpoints of I are in P. A point-interval coverage for the pair (P, I) is a set S of disks that covers I such that (i) the disks in S have their centers on I and (ii) every disk in S contains at least one point of P. See Figure 6. The point-interval coverage problem is to find such a set of disks with minimum total area. In this section we show how to solve this problem in O(n²) time.

Theorem 3.

Given n points on an interval, in O(n²) time, we can find a set of disks covering the entire interval such that every disk contains at least one input point and the total area of the disks is minimum.

If we drop condition (ii), then the problem can be solved in linear time by using Observation 2 (which is stated below). Let Problem 3′ be the version of the point-interval coverage problem with the additional constraint that (iii) every point of P is assigned to exactly one of the disks in S that contain it. Notice that any solution for Problem 3′ is also a solution for Problem 3. Conversely, any solution for Problem 3 can be transformed into a solution for Problem 3′ by assigning every point to one of the disks containing it. Thus, these two problems are equivalent. Therefore, without loss of generality, in the rest of this section we study the version of the point-interval coverage problem with the three constraints (i), (ii), and (iii). First we prove some lemmas about the structural properties of an optimal point-interval coverage. We say that a disk is anchored at a point p if it has p on its boundary. We say that two intersecting disks touch each other if their intersection is exactly one point, and we say that they overlap otherwise.

Lemma 6.

In any optimal solution for the point-interval coverage problem, exactly one point is assigned to each disk.

Proof.

Our proof is by contradiction. Consider a disk D in an optimal solution that is assigned two points p and q. Without loss of generality assume that p is to the left of q. Let a and b be the two intersection points of the boundary of D with the x-axis, and let x be a point on the x-axis that is strictly between p and q. Let D_1 and D_2 be the disks with diameters ax and xb, respectively; see Figure 7(a). The total area of D_1 and D_2 is smaller than the total area of D. Also, D_1 and D_2 together cover the same interval as D does. Remove D from the optimal set and add D_1 and D_2 to the resulting set. Assign p to D_1 and q to D_2. This gives a solution with smaller total area, which is a contradiction. ∎

Lemma 7.

There is no pair of overlapping disks in any optimal solution for the point-interval coverage problem.

Proof.

Our proof is by contradiction. Consider two overlapping disks D_1 and D_2 in an optimal solution. Let p_1 and p_2 denote the points that are assigned to D_1 and D_2, respectively. We differentiate between the following two cases.

• D_2 is a subset of D_1, or vice versa. Assume that D_2 is a subset of D_1. Let a and b be the two intersection points of the boundary of D_1 with the x-axis, and let x be a point on the x-axis strictly between p_1 and p_2. Let D′_1 and D′_2 be the disks with diameters ax and xb. The total area of D′_1 and D′_2 is smaller than the total area of D_1 and D_2. Moreover, D′_1 and D′_2 together cover the same interval as D_1 does. Remove D_1 and D_2 from the optimal set and add D′_1 and D′_2 to the resulting set. Assign p_1 to the one of D′_1 and D′_2 that contains it, and assign p_2 to the other disk. This results in a solution whose total area is smaller than the optimal area, which is a contradiction.

• D_1 is not a subset of D_2, nor vice versa. See Figure 7(b). Let a_1 and b_1 be the left and right intersection points of the boundary of D_1 with the x-axis, respectively. Let a_2 and b_2 be the left and right intersection points of the boundary of D_2 with the x-axis, respectively. Assume that a_1 is to the left of a_2; this implies that a_1, a_2, b_1, b_2 is the sorted sequence of these points from left to right. If p_2 ≠ a_2, then we shrink D_2 (while anchored at b_2) by a small amount and reduce the total area of the optimal set, which is a contradiction. Assume that p_2 = a_2; similarly, assume that p_1 = b_1. In this configuration we shrink D_1 (while anchored at a_1) and D_2 (while anchored at b_2) simultaneously until they touch each other as in Figure 7(b). Then we assign p_1 to D_2, and p_2 to D_1. This gives a valid solution whose total area is smaller than the optimal area, which is a contradiction. ∎

Lemma 6 implies that the number of disks in every optimal solution for the point-interval coverage problem is equal to the number of points in P, and Lemma 7 implies that these disks may touch each other but do not overlap. This enables us to order the disks of every optimal solution from left to right such that any two consecutive disks touch each other; let D_1, …, D_n be this ordering. Let p_1, …, p_n be the points of P from left to right. Then for every i ∈ {1, …, n}, the point p_i is assigned to the disk D_i; see Figure 6.

Lemma 8.

In any optimal solution, if the intersection point of D_i and D_{i+1} does not belong to P, then D_i and D_{i+1} have equal radii.

Proof.

Let x be the intersection point of D_i and D_{i+1}. Let a be the left intersection point of the boundary of D_i with the x-axis, and let b be the right intersection point of the boundary of D_{i+1} with the x-axis; see Figure 7(c). We proceed by contradiction, and assume, without loss of generality, that r_i is smaller than r_{i+1}. We enlarge D_i (while anchored at a) and shrink D_{i+1} (while anchored at b) simultaneously by a small value, keeping the two disks touching; since x does not belong to P, the point assigned to D_{i+1} remains in D_{i+1}. This gives a valid solution whose total area is smaller than the optimal area, because our gain in the area of D_i is smaller than our loss from the area of D_{i+1}. This contradicts the optimality of our initial solution. ∎

The following lemma and observation play important roles in our algorithm for the point-interval coverage problem, which we describe later.

Lemma 9.

Let R be a positive real number, and let r_1, …, r_k be a sequence of positive real numbers such that r_1 + ⋯ + r_k = R. Then

 ∑_{i=1}^{k} r_i² ⩾ ∑_{i=1}^{k} (R/k)² = R²/k, (1)

i.e., the sum on the left-hand side of (1) is minimized when all the r_i are equal to R/k.

Proof.

If f is a convex function, then, by Jensen's inequality, we have

 f( (1/k) ∑_{i=1}^{k} r_i ) ⩽ (1/k) ∑_{i=1}^{k} f(r_i).

Since the function f(x) = x² is convex, it follows that

 (R/k)² = f(R/k) = f( (1/k) ∑_{i=1}^{k} r_i ) ⩽ (1/k) ∑_{i=1}^{k} r_i²,

which, in turn, implies Inequality (1). ∎

The minimum sum of the radii of a set of disks that cover I is |I|/2. The following observation is implied by Lemma 9, by setting R = |I|/2 and letting k be the number of disks.

Observation 2.

For any fixed k, the minimum total area of k disks covering I is obtained by a sequence of k disks of equal radius such that every two consecutive disks touch each other; see Figure 8.

We refer to the covering of I that is introduced in Observation 2 as the unit-disk covering of I with k disks. Such a covering is called valid if it is a point-interval coverage for (P, I).
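Lemma 9 and Observation 2 can be sanity-checked numerically: among positive radii with a fixed sum, the equal-radius choice minimizes the sum of squares. The sketch below (values and names are ours) compares the equal split against random partitions of R.

```python
import random

def sum_sq(rs):
    # total (normalized) area of disks with radii rs
    return sum(r * r for r in rs)

R, k = 10.0, 4
equal = [R / k] * k
best = sum_sq(equal)            # = R^2 / k = 25.0, the Lemma 9 bound
random.seed(1)
for _ in range(1000):
    # random partition of R into k positive parts
    cuts = sorted(random.uniform(0, R) for _ in range(k - 1))
    rs = [b - a for a, b in zip([0.0] + cuts, cuts + [R])]
    assert sum_sq(rs) >= best - 1e-9   # no partition beats the equal split
print(best)  # 25.0
```

This is exactly why the algorithm only ever needs the unit-disk covering as the "no split point" case of the recursion.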

4.1 A Dynamic-Programming Algorithm

In this subsection we present a simple O(n³)-time dynamic-programming algorithm for the point-interval coverage problem. In Subsection 4.2 we show how to improve the running time to O(n²).

First, we review some properties of an optimal solution for the point-interval coverage problem that enable us to present a top-down dynamic programming algorithm. Let D_1, …, D_n be the sequence of disks in an optimal solution for this problem. Recall that, as a consequence of Lemma 7, the intersection of every two consecutive disks in this sequence is a point. If there is no i for which the intersection point of D_i and D_{i+1} belongs to P, then Lemma 8 implies that all disks have equal radius, and thus the solution is a valid unit-disk covering. Assume that for some i the intersection point of D_i and D_{i+1} is a point p ∈ P. Notice that p is assigned to either D_i or D_{i+1}; this implies that either p = p_i or p = p_{i+1}. In either case, the solution is the union of the optimal solutions of two smaller problem instances (P_1, I_1) and (P_2, I_2), where I_1 and I_2 are the portions of I to the left and to the right of p, respectively, and P_1 and P_2 are the corresponding subsets of P, with p belonging to exactly one of them.

We define a subproblem and represent it by four indices (i, j, i′, j′), where 1 ⩽ i < j ⩽ n and i′, j′ ∈ {0, 1}. The indices i and j indicate that the subproblem concerns the interval [p_i, p_j]. The point set of the subproblem contains the points of P that are on [p_i, p_j], provided that p_i belongs to it if and only if i′ = 1, and p_j belongs to it if and only if j′ = 1. For example, if i′ = 1 and j′ = 0, then the point set is {p_i, p_{i+1}, …, p_{j−1}}. We define T(i, j, i′, j′) to be the cost (total area) of an optimal solution for subproblem (i, j, i′, j′). The optimal cost of the original problem will be stored in T(1, n, 1, 1). We compute T(i, j, i′, j′) as follows. If the unit-disk covering is a valid solution for this subproblem, then by Observation 2 it is optimal, and thus we assign its total area to T(i, j, i′, j′). Otherwise, as we discussed earlier, there is a point p_k with i < k < j that is the intersection point of two consecutive disks in the optimal solution. This splits the problem into two smaller subproblems, one to the left of p_k and one to the right of p_k. The point p_k is assigned either to the left subproblem or to the right subproblem. See Figure 9 for an instance in which the unit-disk covering is not valid and p_k is assigned to the right subproblem. In the optimal solution, p_k is assigned to the side that minimizes the total area, which is

 T(i, j, i′, j′) = min{ T(i, k, i′, 1) + T(k, j, 0, j′), T(i, k, i′, 0) + T(k, j, 1, j′) }.

Since we do not know the value of k, we try all possible values i < k < j and pick the one that minimizes T(i, j, i′, j′).

There are three base cases for the above recursion. (1) No point of P is assigned to the current subproblem: we assign +∞ to the table entry, which indicates that this solution is not valid. (2) Exactly one point of P is assigned to the current subproblem: we cover the interval of the subproblem with one disk whose diameter is that interval, and assign its area to the table entry. (3) More than one point of P is assigned to the current subproblem and the unit-disk covering is valid: we assign the total area of this unit-disk covering to the table entry.

The total number of subproblems is at most 4n², because each of i and j takes at most n different values, and each of i′ and j′ takes two different values. The time to solve each subproblem is proportional to the time for checking the validity of the unit-disk covering for this subproblem plus the iteration of k from i + 1 to j − 1; these can be done in O(n) total time. Thus, the running time of our dynamic programming algorithm is O(n³).
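The four-index recursion and its base cases can be sketched with straightforward memoization. The function names, the 0-based indexing, and the merged validity/cost check are ours; this sketch keeps the simple cubic-time structure of this subsection rather than the refinement of Subsection 4.2, and it returns the normalized area ∑ r² (the π factor is dropped, as in the text).

```python
from functools import lru_cache

def min_area_coverage(xs):
    # Point-interval coverage on sorted points xs; xs[0] and xs[-1] are
    # the endpoints of the interval I. Implements T(i, j, i', j').
    n = len(xs)

    def unit_cost(i, j, ip, jp):
        # Cost of the unit-disk covering of [xs[i], xs[j]] with one disk
        # per assigned point, or None if that covering is not valid.
        m = (j - i - 1) + ip + jp          # number of assigned points
        if m == 0:
            return None
        d = (xs[j] - xs[i]) / m            # common disk diameter
        pts = range(i + (1 - ip), j + jp)  # indices of assigned points
        for t, p in enumerate(pts, start=1):
            # the point of rank t must lie in the t-th disk
            if not ((t - 1) * d <= xs[p] - xs[i] <= t * d):
                return None
        return m * (d / 2) ** 2

    @lru_cache(maxsize=None)
    def T(i, j, ip, jp):
        u = unit_cost(i, j, ip, jp)
        best = u if u is not None else float('inf')
        for k in range(i + 1, j):          # split at p_k, assigned L or R
            best = min(best,
                       T(i, k, ip, 1) + T(k, j, 0, jp),
                       T(i, k, ip, 0) + T(k, j, 1, jp))
        return best

    return T(0, n - 1, 1, 1)
```

For equally spaced points the unit-disk covering is valid, so, e.g., three points spanning an interval of length 2 give three disks of radius 1/3 and total normalized area 1/3.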

In the next subsection we present a more involved dynamic-programming algorithm that improves the running time to O(n²). Essentially, the improved algorithm verifies the validity of the unit-disk coverings for all subproblems in O(n²) total time.

4.2 Improving the running time

We describe a top-down dynamic programming algorithm that maintains a table T with entries T(j, j′), where 2 ⩽ j ⩽ n and j′ ∈ {0, 1}. Each entry T(j, j′) represents the cost of an optimal solution for the subproblem that consists of the interval [p_1, p_j] and a point set containing p_1, …, p_{j−1}. If j′ = 1, then p_j also belongs to the point set, whereas if j′ = 0, then it does not. The optimal cost of the original problem will be stored in T(n, 1); the optimal solution itself can be recovered from T. In the rest of this section we show how to solve subproblem (j, j′). If the unit-disk covering is a valid solution for this subproblem, then we assign its total area to T(j, j′). Otherwise, there must be some point p_i with 1 < i < j that is the intersection point of two consecutive disks in the optimal solution. Let i be the largest such index. This choice of i implies that the interval [p_i, p_j] is covered by a unit-disk covering, and thus we only need to solve the subproblem to the left of p_i optimally, for the two cases where p_i is and is not assigned to the left subproblem. Let U(i, j, i′, j′) denote the cost of the unit-disk covering for the problem instance (i, j, i′, j′) (as defined in the previous section). Then

 T(j, j′) = min{ T(i, 1) + U(i, j, 0, j′), T(i, 0) + U(i, j, 1, j′) }.

Since we do not know the value of i, we try all possible values 1 < i < j and pick the one that minimizes T(j, j′).

The total number of subproblems is O(n), and the time to solve each subproblem T(j, j′) is proportional to the total time for the iterations of i from 2 to j − 1 plus the time for computing the unit-disk coverings involved and checking their validity. Let u(j) denote the time for computing and checking the validity of the unit-disk coverings for all instances (i, j, i′, j′) with 2 ⩽ i < j. Then the time to compute T(j, j′) is O(j) + u(j). Therefore, the running time of our algorithm, i.e., the time to compute T(n, 1), is

 ∑_{j=1}^{n} ( O(j) + u(j) ) = O(n²) + ∑_{j=1}^{n} u(j) = O(n²) + ∑_{j=1}^{n} ∑_{i=2}^{j−1} u(i, j, i′, j′),

where u(i, j, i′, j′) denotes the time for computing the unit-disk covering of the instance (i, j, i′, j′) and checking its validity. In the rest of this section we show how to do this, for all i and j, in O(n²) total time. This implies that the total running time of our algorithm is O(n²).

Take any j ∈ {2, …, n}. We show how to check the validity of the unit-disk covering for (i, j, i′, j′), where i iterates from j − 1 down to 2. We are going to show how to do this in O(j) total time, over all values of i. This will imply that we can check the validity of the unit-disk coverings for all i and j in O(n²) time. We describe the procedure for the case when i′ = 0 and j′ = 1; the other three cases can be handled similarly. Recall that [p_i, p_j] is the interval and {p_{i+1}, …, p_j} is the point set associated with (i, j, 0, 1). Notice that the length of the interval is |p_i p_j|, and the number of points is n_ij = j − i. Then the diameter of each disk in the unit-disk covering of this instance is d_ij = |p_i p_j| / n_ij. In order to have a valid unit-disk covering, the following conditions are necessary and sufficient (see Figure 10):

 |p_i p_{i+1}| ⩽ d_ij,
 d_ij ⩽ |p_i p_{i+2}| ⩽ 2·d_ij,
 2·d_ij ⩽ |p_i p_{i+3}| ⩽ 3·d_ij,
 3·d_ij ⩽ |p_i p_{i+4}| ⩽ 4·d_ij,
 ⋮
 (n_ij − 1)·d_ij ⩽ |p_i p_j| ⩽ n_ij·d_ij.
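A direct, per-instance check of these conditions can be sketched as follows; done naively it takes O(n) time per instance, whereas the charging argument above shares the work across all values of i. The function name and 0-based indexing are ours.

```python
def valid_unit_covering(xs, i, j):
    # Validity check for the unit-disk covering of [xs[i], xs[j]] in the
    # case i' = 0, j' = 1: the points xs[i+1..j] must fall into
    # consecutive disks of common diameter d_ij, one point per disk.
    m = j - i                       # n_ij: number of points = number of disks
    d = (xs[j] - xs[i]) / m         # common disk diameter d_ij
    for t in range(1, m + 1):
        gap = xs[i + t] - xs[i]     # |p_i p_{i+t}|
        if not ((t - 1) * d <= gap <= t * d):
            return False
    return True
```

Each condition in the chain simply says that the t-th assigned point lies inside the t-th disk of the equal-diameter covering.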