
# Minimizing the Aggregate Movements for Interval Coverage

Aaron M. Andrews and Haitao Wang
Department of Computer Science, Utah State University, Logan, UT 84322, USA
Email: aaron.andrews@aggiemail.usu.edu, haitao.wang@usu.edu
###### Abstract

We consider an interval coverage problem. Given $n$ intervals of the same length on a line $\ell$ and a line segment $B$ on $\ell$, we want to move the intervals along $\ell$ such that every point of $B$ is covered by at least one interval and the sum of the moving distances of all intervals is minimized. As a basic geometry problem, it has applications in mobile sensor barrier coverage in wireless sensor networks. The previous work solved the problem in $O(n^2)$ time. In this paper, by discovering many interesting observations and developing new algorithmic techniques, we present an $O(n\log n)$ time algorithm. We also show an $\Omega(n\log n)$ time lower bound for this problem, which implies the optimality of our algorithm.

## 1 Introduction

In this paper, we consider an interval coverage problem. Given $n$ intervals of the same length on a line $\ell$ and a line segment $B$ on $\ell$, we want to move the intervals along $\ell$ such that every point of $B$ is covered by at least one interval and the sum of the moving distances of all intervals is minimized.

The problem has applications in barrier coverage of mobile sensors in wireless sensor networks. For convenience, we will introduce and discuss the problem from the barrier coverage point of view. Given a set $S = \{s_1, s_2, \ldots, s_n\}$ of $n$ points on a line $\ell$, say, the $x$-axis, each point represents a sensor. Let $x_i$ be the coordinate of $s_i$ on $\ell$ for each $1 \le i \le n$. For any two coordinates $x$ and $x'$ with $x \le x'$, we use $[x, x']$ to denote the interval of $\ell$ between $x$ and $x'$. The sensors of $S$ have the same covering range, denoted by $z$, such that for each $1 \le i \le n$, sensor $s_i$ covers the interval $[x_i - z, x_i + z]$. Let $B$ be a line segment of $\ell$ and we call $B$ a "barrier". We assume that the length of $B$ is no more than $2nz$ since otherwise $B$ could not be fully covered by these sensors. The problem is to move all sensors along $\ell$ such that each point of $B$ is covered by at least one sensor of $S$ and the sum of the moving distances of all sensors is minimized. Note that although sensors are initially on $\ell$, they may not be on $B$. We call this problem the min-sum barrier coverage, denoted by MSBC.
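To make the objective concrete, the following small Python sketch (our own illustration, not part of any algorithm in this paper; all names are ours) checks whether a candidate set of destinations covers the barrier and, if so, evaluates the min-sum objective. The problem asks for destinations minimizing this value.

```python
# Illustrative helper: sensors originally at x, common range z, barrier [0, L],
# candidate destinations y. Returns sum(|x_i - y_i|) if y covers the barrier,
# and None otherwise.

def coverage_cost(x, y, z, L):
    # Sort the destination intervals by left endpoint and sweep the barrier.
    intervals = sorted((yi - z, yi + z) for yi in y)
    reach = 0.0  # rightmost covered point of the barrier so far
    for lo, hi in intervals:
        if lo > reach:          # an uncovered gap [reach, lo] remains
            break
        reach = max(reach, hi)
    if reach < L:
        return None             # the barrier is not fully covered
    return sum(abs(xi - yi) for xi, yi in zip(x, y))
```

For instance, with $x = (3, 5)$, $z = 1$, and $B = [0, 4]$, the destinations $(1, 3)$ cover the barrier at total cost $4$.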

The problem MSBC has been studied before and Czyzowicz et al. gave an $O(n^2)$ time algorithm. In this paper, we present an $O(n\log n)$ time algorithm and we show that our algorithm is optimal.

### 1.1 Related Work

A Wireless Sensor Network (WSN) uses a large number of sensors to monitor some surrounding environmental phenomena. Each sensor is equipped with a sensing device with limited battery-supplied energy. The sensors process the data obtained and forward the data to a base station. Intrusion detection and border surveillance constitute a major application category for WSNs. A main goal of these applications is to detect intruders as they cross the boundary of a region or domain. For example, research efforts were made to extend the scalability of WSNs to the monitoring of international borders [9, 10]. Unlike the traditional full coverage [12, 16, 17], which requires an entire target region to be covered by the sensors, the barrier coverage [3, 4, 7, 8, 10] only seeks to cover the perimeter of the region to ensure that any intruders are detected as they cross the region border. Since barrier coverage requires fewer sensors, it is often preferable to full coverage. Because sensors have limited battery-supplied energy, it is desirable to minimize their movements.

If the sensors have different ranges, Czyzowicz et al. proved that the problem MSBC is NP-hard.

The min-max version of MSBC has also been studied, where the objective is to minimize the maximum movement of all sensors. If the sensors have the same range, Czyzowicz et al. gave an $O(n^2)$ time algorithm, and later Chen et al. presented an $O(n\log n)$ time solution. If the sensors have different ranges, Czyzowicz et al. left it as an open question whether the problem is NP-hard, and Chen et al. answered the open problem by giving an $O(n^2\log n)$ time algorithm.

Mehrandish et al. [13, 14] considered another variant of the one-dimensional barrier coverage problem, where the goal is to move the minimum number of sensors to form a barrier coverage. They [13, 14] proved the problem is NP-hard if sensors have different ranges and gave polynomial time algorithms otherwise. In addition, Li et al. considered the linear coverage problem, which aims to set an energy level for each sensor to form a coverage such that the cost of all sensors is minimized. There, the sensors are not allowed to move, and the more energy a sensor has, the larger the covering range of the sensor and the larger the cost of the sensor. Another problem variation is to maximize the barrier coverage lifetime subject to the limited battery powers.

Bhattacharya et al. studied a two-dimensional barrier coverage problem in which the barrier is a circle and the sensors, initially located inside the circle, are moved to the circle to minimize the sensor movements; the ranges of the sensors are not explicitly specified but the destinations of the sensors are required to form a regular $n$-gon on the circle. Algorithms for both the min-sum and min-max versions were given there, and subsequent improvements were made in [6, 15].

Some other barrier coverage problems have been studied. For example, Kumar et al.  proposed algorithms for determining whether a region is barrier covered after the sensors are deployed. They considered both the deterministic version (the sensors are deployed deterministically) and the randomized version (the sensors are deployed randomly), and aimed to determine a barrier coverage with high probability. Chen et al.  introduced a local barrier coverage problem in which individual sensors determine the barrier coverage locally.

### 1.2 Our Approaches

If the covering intervals of all sensors intersect the barrier $B$, we call this case the containing case. If the sensors whose covering intervals do not intersect $B$ are all on one side of $B$, then it is called the one-sided case. Otherwise, it is the general case.

In Section 2, we introduce notations and briefly review the $O(n^2)$ time algorithm of Czyzowicz et al. Based on that algorithm, by using a different implementation and designing efficient data structures, we give an $O(n\log n)$ time algorithm for the containing case in Section 3.

To solve the one-sided case, the containing case algorithm does not work and we have to develop different algorithms. To this end, we discover a number of interesting observations on the structure of the optimal solution, which allow us to obtain an $O(n\log n)$ time algorithm. The one-sided case algorithm uses the containing case algorithm as a first step and applies a sequence of so-called "reverse operations". The one-sided case is discussed in Section 4.

In Section 5, we solve the general case in $O(n\log n)$ time. To this end, we generalize the techniques for solving the one-sided case. For example, we show a monotonicity property of the one-sided case (in Section 4), which is quite useful for the general case. We also discover new observations on the solution structures. These observations help us develop efficient algorithmic techniques. All these efforts lead to the $O(n\log n)$ time algorithm for the general case.

Section 6 concludes the paper, where we prove the $\Omega(n\log n)$ time lower bound (even for the containing case) by an easy reduction from sorting.

We should point out that although the paper is relatively long, the algorithm itself is simple and easy to implement. In fact, the most complicated data structures used in the algorithm are balanced binary search trees! The lengthy (and sometimes tedious) proofs are all devoted to discovering the observations and showing the correctness, which eventually lead to a simple, elegant, efficient, and optimal algorithm. Discovering these observations turns out to be quite challenging and is actually one of our main contributions.

## 2 Preliminaries

In this section, we introduce some notations and sketch the $O(n^2)$ time algorithm given by Czyzowicz et al. Below we will use the terms "line segment" and "interval" interchangeably, i.e., a line segment of $\ell$ is also an interval and vice versa. Let $L$ denote the length of $B$. Without loss of generality, we assume the barrier $B$ is the interval $[0, L]$ of $\ell$. For short, sensor covering intervals are called sc-intervals.

We assume the sensors of $S$ are already sorted, i.e., $x_1 \le x_2 \le \cdots \le x_n$ (otherwise we sort them in $O(n\log n)$ time). For each sensor $s_i$, we use $I(s_i)$ to denote its covering interval. Recall that $z$ is the covering range of each sensor, and thus the length of each sc-interval is $2z$. We assume $B$ is not fully covered by the sc-intervals in the input since otherwise the solution would be trivial. An easy but important observation given by Czyzowicz et al. is the following order preserving property: there always exists an optimal solution in which the order of the sensors is the same as that in the input. Note that this property does not hold if sensors have different ranges.

Sensors will be moved during the algorithm. For any sensor $s_i$, suppose its location at some moment is $x_i'$; the value $x_i - x_i'$ is called the displacement of $s_i$ (here we use $x_i - x_i'$ instead of $x_i' - x_i$ in the definition in order to ease the discussions later). Hence, if the displacement of $s_i$ is positive (resp., negative), then $s_i$ is to the left (resp., right) of its original location in the input.

In the sequel, we define two important concepts: gaps and overlaps, which were also used by Czyzowicz et al.

A gap refers to a maximal sub-segment of $B$ such that each point of the sub-segment is not covered by any sensors (e.g., see Fig. 1). Each endpoint of any gap is an endpoint of either an sc-interval or $B$. Specifically, consider two adjacent sensors $s_i$ and $s_{i+1}$ such that $x_i + z < x_{i+1} - z$. If $0 \le x_i + z$ and $x_{i+1} - z \le L$, then the interval $[x_i + z, x_{i+1} - z]$ is on $B$ and defines a gap, and $s_i$ and $s_{i+1}$ are called the left and right generators of the gap, respectively. If $0 < x_1 - z$, then $[0, x_1 - z]$ is a gap and $s_1$ is the only generator of the gap. Similarly, if $x_n + z < L$, then $[x_n + z, L]$ is a gap and $s_n$ is the only generator. For any gap $g$, we use $|g|$ to denote its length. For simplicity, if a gap $g$ has only one generator $s_i$, then the left/right generator of $g$ is $s_i$.

To solve the problem MSBC, the essential task is to move the sensors to cover all gaps by eliminating overlaps, defined as follows. Consider two adjacent sensors $s_i$ and $s_{i+1}$. The intersection $I(s_i)\cap I(s_{i+1})$ defines an overlap if it is not empty (e.g., see Fig. 1), and we call $s_i$ and $s_{i+1}$ the left and right generators of the overlap, respectively. Consider any sensor $s_i$. If $I(s_i)$ is not completely on $B$, then the sub-interval of $I(s_i)$ that is not on $B$ defines an overlap and $s_i$ is its only generator (e.g., see Fig. 1). A subtle situation appears when $I(s_i)\cap I(s_{i+1})$ contains an endpoint of $B$ in its interior. Refer to Fig. 2 as an example, where $0$ is in the interior of $I(s_i)\cap I(s_{i+1}) = [c, b]$ with $a = x_i - z$ and $c = x_{i+1} - z$. According to our definition, $s_i$ and $s_{i+1}$ together define an overlap $[c, b]$; $s_i$ itself defines an overlap $[a, 0]$; $s_{i+1}$ itself defines an overlap $[c, 0]$. However, to avoid some tedious discussions, we consider the union of $[c, b]$ and $[c, 0]$, which is $[c, b]$, as a single overlap defined by $s_i$ and $s_{i+1}$ together, but still $s_i$ itself defines the overlap $[a, 0]$. Symmetrically, if $I(s_i)\cap I(s_{i+1})$ contains $L$ in its interior, then we consider $I(s_i)\cap I(s_{i+1})$ as a single overlap defined by $s_i$ and $s_{i+1}$, and $s_{i+1}$ itself defines an overlap that is the portion of $I(s_{i+1})$ outside $B$.

For any overlap $o$, we use $|o|$ to denote its length. For simplicity, if an overlap $o$ has only one generator $s_i$, then the left/right generator of $o$ is $s_i$. We should point out that according to our above definition of overlaps, if an overlap has two different generators, then these two generators must be two adjacent sensors (e.g., $s_i$ and $s_{i+1}$ for some $i$). In other words, if the sc-intervals of two non-adjacent sensors (e.g., $s_i$ and $s_{i+2}$) intersect, their intersection does not define any overlap.

Clearly, the total number of overlaps and gaps is $O(n)$.

Figure 2: $I(s_i)\cap I(s_{i+1})$ contains $0$ in its interior. In this case, we consider $s_i$ and $s_{i+1}$ together defining an overlap $[c,b]$ and $s_i$ itself defining an overlap $[a,0]$.
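The gaps and overlaps of an input configuration can be enumerated by a single left-to-right scan. The following Python sketch is our own illustration (names are ours); for simplicity it reports raw pieces and does not apply the special merging rule of Fig. 2.

```python
# Enumerate gaps and overlaps for sorted sensor positions x, range z,
# barrier [0, L]. Gaps are uncovered pieces of the barrier; overlaps are
# intersections of adjacent sc-intervals or parts hanging outside [0, L].

def gaps_and_overlaps(x, z, L):
    gaps, overlaps = [], []
    # Boundary gaps at the two ends of the barrier.
    if x[0] - z > 0:
        gaps.append((0.0, min(x[0] - z, L)))
    if x[-1] + z < L:
        gaps.append((max(x[-1] + z, 0.0), L))
    for a, b in zip(x, x[1:]):          # adjacent sensors s_i, s_{i+1}
        if a + z < b - z:               # disjoint sc-intervals
            lo, hi = max(a + z, 0.0), min(b - z, L)
            if lo < hi:                 # keep only the part on the barrier
                gaps.append((lo, hi))
        else:                           # intersecting: an overlap
            overlaps.append((b - z, a + z))
    for xi in x:                        # sc-interval parts outside the barrier
        if xi - z < 0:
            overlaps.append((xi - z, min(xi + z, 0.0)))
        if xi + z > L:
            overlaps.append((max(xi - z, L), xi + z))
    return sorted(gaps), overlaps
```

For example, sensors at $1$ and $5$ with $z = 1$ and $L = 6$ leave the single gap $[2, 4]$ and no overlap.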

To solve MSBC, the goal is to move the sensors to cover all gaps by eliminating overlaps. We say a gap/overlap $v$ is to the left (resp., right) of another gap/overlap $v'$ if the left generator of $v$ is to the left (resp., right) of the left generator of $v'$ (in the case of Fig. 2, where the overlaps $[c,b]$ and $[a,0]$ have the same left generator $s_i$, $[a,0]$ is considered to be to the left of $[c,b]$).

For any two indices $i$ and $j$ with $i \le j$, let $S_{i,j} = \{s_i, s_{i+1}, \ldots, s_j\}$.

Below we sketch the $O(n^2)$ time algorithm of Czyzowicz et al. for the containing case where every sc-interval intersects $B$. The algorithm "greedily" covers all gaps from left to right one by one. Let $g_1, g_2, \ldots$ denote the gaps sorted from left to right. Suppose the first $i-1$ gaps have just been covered completely and the algorithm is about to cover the gap $g_i$.

Let $o_r^i$ (resp., $o_l^i$) be the closest overlap to the right (resp., left) of $g_i$. We will cover $g_i$ by using either $o_r^i$ or $o_l^i$. To determine using which overlap to cover $g_i$, the costs $C(o_r^i)$ and $C(o_l^i)$ are defined as follows. Let $S_1$ be the set of sensors between the right generator of $g_i$ and the left generator of $o_r^i$. Define $C(o_r^i)$ to be $|S_1|$. The intuition of this definition is that if we shift all sensors of $S_1$ to the left for an infinitesimal distance $\delta$ (such that the gap $g_i$ becomes shorter), then the sum of the moving distances of all sensors of $S_1$ is $\delta\cdot C(o_r^i)$. As will be clear later, the current displacement of each sensor in $S_1$ may be positive but cannot be negative. The cost $C(o_l^i)$ is defined in a slightly different way. Let $S_2$ be the set of sensors between the left generator of $g_i$ and the right generator of $o_l^i$, and let $S_2'$ be the subset of sensors of $S_2$ whose displacements are positive. If we shift all sensors in $S_2$ to the right for an infinitesimal distance $\delta$, although the sum of the moving distances of all sensors of $S_2$ is $\delta\cdot|S_2|$, the total moving distance contributed to the sum of the moving distances of all sensors of $S$ is actually $\delta\cdot(|S_2| - 2|S_2'|)$ because the sensors of $S_2'$ are moved towards their original locations. Hence, the cost $C(o_l^i)$ is defined to be $|S_2| - 2|S_2'|$. Note that the sensors in $S_1$ or $S_2$ are consecutive in their index order.

If $C(o_r^i) < C(o_l^i)$, we move each sensor in $S_1$ leftwards by distance $\min\{|g_i|, |o_r^i|\}$, and we call this a left-shift process. Note that if there is any gap $g$ between two sensors in $S_1$, then the above shift process will move $g$ leftwards as well, but the size and the generators of $g$ do not change, and thus in the later algorithm we can still use $g$ without causing any problems. If $|g_i| \le |o_r^i|$, then after the left-shift process $g_i$ is covered completely and we proceed on the next gap $g_{i+1}$. Otherwise, $o_r^i$ is eliminated and $g_i$ is only partially covered. We proceed on the remaining $g_i$.

If $C(o_r^i) \ge C(o_l^i)$, we move each sensor in $S_2$ rightwards by distance $\min\{|g_i|, |o_l^i|, d_{\min}\}$, where $d_{\min}$ is the smallest displacement of the sensors in $S_2'$, and we call this a right-shift process. If $d_{\min}$ is the smallest among the three values, then the process makes the displacement of at least one sensor in $S_2'$ become zero and we call the process a positive-displacement-removal right-shift process (or PDR process for short). After the process, if $g_i$ is only partially covered, we proceed on the remaining $g_i$; otherwise we proceed on the next gap $g_{i+1}$.

The algorithm finishes after all gaps are covered. To analyze the running time, observe that there are $O(n)$ shift processes in total. To see this, note that each shift process covers a gap completely, or eliminates an overlap, or is a PDR process. An observation is that if the displacement of a sensor $s_k$ was positive but is made zero during a PDR process, then the displacement of $s_k$ will never become positive again because all uncovered gaps are to the right of $s_k$. Therefore, the number of PDR processes is at most $n$. Since the number of gaps and overlaps is $O(n)$, the total number of shift processes in the algorithm is $O(n)$. Each shift process can be done in $O(n)$ time, and thus the algorithm runs in $O(n^2)$ time.

## 3 The Containing Case

In this section, we present our algorithm that solves the containing case of MSBC in $O(n\log n)$ time. The high-level scheme of our algorithm is the same as the $O(n^2)$ time algorithm described in Section 2, but we design efficient data structures such that each shift process can be implemented in $O(\log n)$ amortized time. More specifically, our algorithm maintains an overlap tree $T_o$, a position tree $T_p$, a left-shift tree $T_l$, and a global variable $\gamma$.

### 3.1 The Overlap Tree $T_o$

We store each gap/overlap by recording its generators. Consider any gap $g_i$ (which may have been partially covered previously). Our algorithm needs to compute the two overlaps $o_l^i$ and $o_r^i$. To this end, we maintain all overlaps in a balanced binary search tree $T_o$, called the overlap tree, using the indices of the left generators of the overlaps as "keys". We can find the two overlaps $o_l^i$ and $o_r^i$ in $O(\log n)$ time by searching $T_o$ with the index of the left generator of $g_i$. The tree $T_o$ can also support each deletion of an overlap in $O(\log n)$ time when the overlap is eliminated.

Furthermore, $T_o$ can help us to compute the costs $C(o_r^i)$ and $C(o_l^i)$ in the following way. After $o_r^i$ is found, we have $C(o_r^i) = |S_1| = j_2 - j_1 + 1$, where $j_2$ is the index of the left generator of $o_r^i$ and $j_1$ is the index of the right generator of $g_i$. Hence, $C(o_r^i)$ can be computed in $O(1)$ time. Similarly, we can obtain $|S_2|$. However, to compute $C(o_l^i) = |S_2| - 2|S_2'|$, we also need to know the size $|S_2'|$, which will be discussed later.

### 3.2 The Position Tree $T_p$

Recall that the algorithm needs to do the left or right shift processes, each of which moves a sequence of consecutive sensors by the same distance. To achieve the overall $O(n\log n)$ time for the algorithm, we cannot explicitly move the involved sensors for each shift process. Instead, we use the following position tree $T_p$ to perform each shift implicitly in $O(\log n)$ time.

The tree $T_p$ is a complete binary tree of $n$ leaves and $O(\log n)$ height. The leaves from left to right correspond to the sensors in their index order. For each $1 \le i \le n$, leaf $i$ (i.e., the $i$-th leaf from the left) stores the original location $x_i$ of sensor $s_i$. Each node of $T_p$ (either an internal node or a leaf) is associated with a shift value. Initially the shift values of all nodes of $T_p$ are zero. At any moment during the algorithm, the actual location of each sensor $s_i$ is $x_i$ plus the sum of the shift values of the nodes in the path from the root to leaf $i$ (actually this sum of shift values is exactly the negative value of the current displacement of $s_i$), which can be obtained in $O(\log n)$ time.

Now suppose we want to do a right-shift process that moves a sequence of sensors $S_{i,j}$ rightwards by a distance $\delta$. We first find a set $V$ of nodes of $T_p$ such that the leaves of the subtrees of all these nodes correspond to exactly the sensors in $S_{i,j}$. Specifically, $V$ is defined as follows. Let $v_a$ be the lowest common ancestor of leaves $i$ and $j$. Let $\pi_1$ be the path from the parent of leaf $i$ to $v_a$ (excluding $v_a$ itself). For each node $v$ in $\pi_1$, if the right child of $v$ is neither in $\pi_1$ nor leaf $i$, then the right child of $v$ is in $V$. Leaf $i$ is also in $V$. The rest of the nodes of $V$ are defined in a symmetric way on the path from the parent of leaf $j$ to $v_a$. The set $V$ can be easily found in $O(\log n)$ time by following the two paths from the root to leaf $i$ and leaf $j$. For each node in $V$, we increase its shift value by $\delta$. This finishes the right-shift process, which can be done in $O(\log n)$ time. Similarly, each left-shift process can also be done in $O(\log n)$ time.

After the algorithm finishes, we can use $T_p$ to obtain the locations of all sensors in $O(n)$ time.
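The position tree can be sketched compactly as an array-based binary tree with per-node shift values (our own simplified re-implementation with 0-based indices; the canonical node set $V$ is found by the standard bottom-up walk rather than an explicit LCA computation).

```python
# A sketch of the position tree T_p: a range shift touches O(log n) canonical
# nodes, and a sensor's current location is its original location plus the
# sum of the shift values on the root-to-leaf path.

class PositionTree:
    def __init__(self, original):
        self.n = len(original)
        self.size = 1
        while self.size < self.n:
            self.size *= 2
        self.orig = list(original)
        self.shift = [0.0] * (2 * self.size)   # one shift value per node

    def range_shift(self, i, j, delta):
        """Move sensors i..j (inclusive) by delta (positive = rightwards)."""
        l, r = i + self.size, j + self.size + 1   # half-open leaf range [l, r)
        while l < r:
            if l & 1:                  # l is a right child: take it whole
                self.shift[l] += delta
                l += 1
            if r & 1:                  # r - 1 is a left child: take it whole
                r -= 1
                self.shift[r] += delta
            l >>= 1
            r >>= 1

    def location(self, i):
        """Current location of sensor i: original plus path shifts."""
        v, s = i + self.size, 0.0
        while v >= 1:
            s += self.shift[v]
            v //= 2
        return self.orig[i] + s
```

Each `range_shift` and each `location` query walks one root-to-leaf path, so both run in $O(\log n)$ time.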

### 3.3 The Left-Shift Tree $T_l$ and the Global Variable $\gamma$

It remains to compute the size $|S_2'|$ and the smallest displacement $d_{\min}$ of the sensors in $S_2'$. Our goal is to compute them in $O(\log n)$ time. This is one main difficulty in our containing case algorithm. We propose a left-shift tree $T_l$ to maintain the displacement information of the sensors that have positive displacements (i.e., their current positions are to the left of their original locations).

The tree $T_l$ is a complete binary tree of $n$ leaves and $O(\log n)$ height. The leaves from left to right correspond to the sensors. For each leaf $i$, denote by $\pi(i)$ the path in $T_l$ from the root to leaf $i$. Each node $v$ of $T_l$ is associated with the following information.

1. If $v$ is a leaf, say leaf $i$, then $v$ is associated with a flag, which is set to "valid" if the current displacement of $s_i$ is positive and "invalid" otherwise. Initially all leaves are invalid. If the flag of leaf $i$ is valid/invalid, we also say the sensor $s_i$ is valid/invalid. Thus, $S_2'$ is the set of valid sensors of $S_2$.

2. As in the position tree $T_p$, regardless of whether $v$ is an internal node or a leaf, $v$ maintains a shift value $\textit{shift}(v)$. At any moment during the algorithm, for each leaf $i$, the sum of the shift values of the nodes in the path $\pi(i)$ is exactly the negative value of the current displacement of the sensor $s_i$.

3. Node $v$ maintains a min value $\textit{min}(v)$, which is equal to $d$ minus the sum of the shift values of the nodes in the path from $v$ to the root, where $d$ is the smallest displacement among all valid leaves in the subtree rooted at $v$; further, the index of the corresponding sensor that has the above smallest displacement is also maintained in $v$ as $\textit{index}(v)$.

If no leaves in the subtree of $v$ are valid, then $\textit{min}(v) = \infty$ and $\textit{index}(v)$ is undefined.

4. Node $v$ maintains a num value $\textit{num}(v)$, which is the number of valid leaves in the subtree of $v$. Initially $\textit{num}(v) = 0$ for all nodes.

The tree $T_l$ can support the following operations in $O(\log n)$ time each.

set-valid

Given a sensor $s_i$, the goal of this operation is to set the flag of the $i$-th leaf to valid.

To perform this operation, we first find leaf $i$, denoted by $v$. We set its flag to valid, $\textit{index}(v) = i$, and $\textit{min}(v) = d - q(v)$, where $d$ is the current displacement of $s_i$ and $q(v)$ is the sum of the shift values of the nodes in the path $\pi(i)$ (both can be obtained by following $\pi(i)$; in fact $d = -q(v)$). Next, we update the min and index values of the other nodes in the path $\pi(i)$ in a bottom-up manner. Beginning from the parent of $v$, for each node $u$ in $\pi(i)$, we set $\textit{min}(u) = \min\{\textit{min}(u_l) + \textit{shift}(u_l),\ \textit{min}(u_r) + \textit{shift}(u_r)\}$, where $u_l$ and $u_r$ are the left and right children of $u$, respectively, and we set $\textit{index}(u)$ to $\textit{index}(u_l)$ if $u_l$ gives the above minimum value and to $\textit{index}(u_r)$ otherwise.

Finally, we update the num values of the nodes in the path $\pi(i)$ by increasing $\textit{num}(u)$ by one for each node $u$ in $\pi(i)$.

Hence, the set-valid operation can be done in $O(\log n)$ time.

set-invalid

Given a sensor $s_i$, the goal of this operation is to set the flag of the $i$-th leaf to invalid.

We first find leaf $i$, set it invalid, set its min value to $\infty$, and set its num value to $0$. Then, we update the min, index, and num values of the nodes in the path $\pi(i)$ similarly as in the above set-valid operation. We omit the details. The set-invalid operation can be done in $O(\log n)$ time.

left-shift

Given two indices $i$ and $j$ with $i \le j$, as well as a distance $\delta > 0$, the goal of this operation is to move each sensor in $S_{i,j}$ leftwards by $\delta$. It is required that $\delta$ be small enough such that any valid (resp., invalid) sensor before the operation is still valid (resp., invalid) after the operation.

The operation can be performed in a similar way as we did on the position tree $T_p$, with the difference that we also need to update the shift, min, and index values of some nodes. Specifically, we first compute the set $V$ of nodes, as defined for the position tree $T_p$, and then for each node of $V$, we decrease its shift value by $\delta$.

Next, we update the min and index values. An easy observation is that only the nodes on the two paths $\pi(i)$ and $\pi(j)$ need to have their min and index values updated. Specifically, for $\pi(i)$, we follow it from leaf $i$ in a bottom-up manner, and for each node $u$ in $\pi(i)$, we update $\textit{min}(u)$ and $\textit{index}(u)$ in the same way as we did in the set-valid operation. We do the same for the path $\pi(j)$. The time for performing this operation is $O(\log n)$.

right-shift

Given two indices $i$ and $j$ with $i \le j$, as well as a distance $\delta > 0$, the goal of this operation is to move each sensor in $S_{i,j}$ rightwards by $\delta$. Similarly, $\delta$ is small enough such that any valid (resp., invalid) sensor before the operation is still valid (resp., invalid) after the operation.

This operation can be performed in a symmetric way as the above left-shift operation and we omit the details.

find-min

Given two indices $i$ and $j$ with $i \le j$, the goal is to find the smallest displacement and the corresponding sensor among all valid sensors in $S_{i,j}$.

We first find the set $V$ of nodes as before. For each node $v \in V$, we compute the smallest displacement among all valid leaves in its subtree, which is equal to $\textit{min}(v)$ plus the sum of the shift values of the nodes in the path from $v$ to the root. These smallest displacements for all nodes in $V$ can be computed in $O(\log n)$ time in total by traversing the two paths $\pi(i)$ and $\pi(j)$ in a top-down manner. The smallest displacement among all valid sensors in $S_{i,j}$ is the minimum among the above smallest displacements, and the corresponding sensor can be immediately obtained by using the value $\textit{index}(v)$ associated with each node $v$ of $V$. Thus, each find-min operation can be done in $O(\log n)$ time.

find-num

Given two indices $i$ and $j$ with $i \le j$, the goal is to find the number of valid sensors in $S_{i,j}$.

We first find the set $V$ of nodes as before, and then return the sum of the values $\textit{num}(v)$ for all nodes $v \in V$. Hence, $O(\log n)$ time is sufficient for performing the operation.
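The five operations above can be realized with standard lazy-propagation bookkeeping. The sketch below is our own simplified Python re-implementation: it stores displacements directly in a recursive lazy segment tree instead of the paper's path-sum scheme (so the stored values differ, but the interface and the $O(\log n)$ bounds are the same), and its set-valid assumes the sensor's current displacement is zero, which is how the algorithm invokes it.

```python
import math

class LeftShiftTree:
    """Tracks valid (positive-displacement) sensors; 0-based indices."""

    def __init__(self, n):
        self.n = n
        self.mn = [math.inf] * (4 * n)    # min displacement over valid leaves
        self.idx = [-1] * (4 * n)         # sensor index attaining that min
        self.num = [0] * (4 * n)          # number of valid leaves below
        self.lazy = [0.0] * (4 * n)       # pending displacement change

    def _apply(self, v, delta):
        self.lazy[v] += delta
        if self.mn[v] != math.inf:
            self.mn[v] += delta

    def _push(self, v):
        if self.lazy[v] != 0.0:
            self._apply(2 * v, self.lazy[v])
            self._apply(2 * v + 1, self.lazy[v])
            self.lazy[v] = 0.0

    def _pull(self, v):
        l, r = 2 * v, 2 * v + 1
        self.num[v] = self.num[l] + self.num[r]
        if self.mn[l] <= self.mn[r]:
            self.mn[v], self.idx[v] = self.mn[l], self.idx[l]
        else:
            self.mn[v], self.idx[v] = self.mn[r], self.idx[r]

    def _set(self, v, lo, hi, i, valid):
        if lo == hi:
            self.lazy[v] = 0.0
            self.mn[v] = 0.0 if valid else math.inf
            self.idx[v] = i if valid else -1
            self.num[v] = 1 if valid else 0
            return
        self._push(v)
        mid = (lo + hi) // 2
        if i <= mid:
            self._set(2 * v, lo, mid, i, valid)
        else:
            self._set(2 * v + 1, mid + 1, hi, i, valid)
        self._pull(v)

    def set_valid(self, i):    # assumes s_i's displacement is currently 0
        self._set(1, 0, self.n - 1, i, True)

    def set_invalid(self, i):
        self._set(1, 0, self.n - 1, i, False)

    def _shift(self, v, lo, hi, i, j, delta):
        if hi < i or j < lo:
            return
        if i <= lo and hi <= j:
            self._apply(v, delta)
            return
        self._push(v)
        mid = (lo + hi) // 2
        self._shift(2 * v, lo, mid, i, j, delta)
        self._shift(2 * v + 1, mid + 1, hi, i, j, delta)
        self._pull(v)

    def left_shift(self, i, j, d):   # moving left increases displacement
        self._shift(1, 0, self.n - 1, i, j, d)

    def right_shift(self, i, j, d):  # moving right decreases displacement
        self._shift(1, 0, self.n - 1, i, j, -d)

    def _query(self, v, lo, hi, i, j):
        if hi < i or j < lo:
            return (math.inf, -1, 0)
        if i <= lo and hi <= j:
            return (self.mn[v], self.idx[v], self.num[v])
        self._push(v)
        mid = (lo + hi) // 2
        a = self._query(2 * v, lo, mid, i, j)
        b = self._query(2 * v + 1, mid + 1, hi, i, j)
        best = a if a[0] <= b[0] else b
        return (best[0], best[1], a[2] + b[2])

    def find_min(self, i, j):   # (smallest displacement, its sensor index)
        mn, k, _ = self._query(1, 0, self.n - 1, i, j)
        return mn, k

    def find_num(self, i, j):   # number of valid sensors among s_i..s_j
        return self._query(1, 0, self.n - 1, i, j)[2]
```

Lazy values are pushed to children only along the $O(\log n)$ nodes a query or update visits, so all five operations stay logarithmic.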

In addition, our algorithm maintains a global variable $\gamma$, which is the index of the rightmost sensor that has ever been moved to the left. We will use $\gamma$ to determine whether we should do a set-valid operation on a sensor in the left-shift tree $T_l$ and to make sure the total number of set-valid operations on $T_l$ in the entire algorithm is at most $n$. Initially, $\gamma = 0$. As will be clear later, the variable $\gamma$ never decreases during the algorithm.

### 3.4 The $O(n\log n)$ Time Algorithm

Using the three trees $T_o$, $T_p$, $T_l$ and the global variable $\gamma$, we implement the algorithm described in Section 2 in $O(n\log n)$ time, as follows.

The initialization of these trees can be easily done in $O(n)$ time. Suppose the algorithm is about to consider gap $g_i$. We assume the three trees and $\gamma$ have been correctly maintained. We first use the overlap tree $T_o$ to find the two overlaps $o_l^i$ and $o_r^i$ in $O(\log n)$ time, as discussed earlier. The two numbers $|S_1|$ and $|S_2|$, as well as the cost $C(o_r^i)$, are also determined. Next, we find $|S_2'|$ by doing a find-num operation on $T_l$ with the index of the right generator of $o_l^i$ and the index of the left generator of $g_i$. The cost $C(o_l^i) = |S_2| - 2|S_2'|$ is thus obtained. Depending on whether $C(o_r^i) < C(o_l^i)$, we have two main cases.

#### 3.4.1 Case $C(o_r^i) < C(o_l^i)$

If $C(o_r^i) < C(o_l^i)$, we do a left-shift process that moves all sensors in $S_1$ leftwards by distance $\delta = \min\{|g_i|, |o_r^i|\}$. Note that $S_1 = S_{j_1,j_2}$ with $j_1$ being the index of the right generator of $g_i$ and $j_2$ being the index of the left generator of $o_r^i$. To implement the above left-shift process, we first do a left shift on the position tree $T_p$, as described earlier. Then, we update the left-shift tree $T_l$ and the variable $\gamma$ in the following way.

Since $o_r^i$ is an overlap and the gaps that have been covered are all to the left of $g_i$, no sensor to the right of the left generator of $o_r^i$ has ever been moved. Specifically, sensor $s_k$ has never been moved, for any $k > j_2$. This implies that $\gamma \le j_2$.

If $\gamma < j_1$, then for each sensor $s_k$ with $j_1 \le k \le j_2$, we first do a set-valid operation on $s_k$ on $T_l$ and then do a left-shift operation on $s_k$ (i.e., with $i = j = k$) on $T_l$ with distance $\delta$.

If $\gamma \ge j_1$, we have the following lemma.

###### Lemma 1

If $\gamma \ge j_1$, then right before the above left-shift process, all sensors in $S_{j_1,\gamma}$ have positive displacements and thus are valid.

Proof: We consider the situation right before the above left-shift process.

First of all, we claim that the displacement of $s_\gamma$ must be positive. Indeed, according to the definition of $\gamma$, if the displacement of $s_\gamma$ is not positive, then there must be a shift process previously in the algorithm that moved $s_\gamma$ rightwards. However, since the gaps that have been considered by the algorithm are all to the left of $g_i$ and thus to the left of $s_\gamma$, the sensor $s_\gamma$ never had any chance to be moved rightwards. The claim thus follows. Hence, if $\gamma = j_1$, the lemma is trivially true.

If $\gamma > j_1$, assume to the contrary that there is a sensor $s_k$ with $j_1 \le k < \gamma$ whose displacement is not positive. Since the displacement of $s_\gamma$ is positive, the above situation can only happen if the algorithm covered a gap between $s_k$ and $s_\gamma$, which contradicts the fact that all gaps that have been covered by the algorithm are to the left of $g_i$ and thus to the left of $s_k$. Thus, the lemma follows.

If $\gamma \ge j_1$, for each $s_k$ with $\gamma < k \le j_2$, we first do a set-valid operation on $s_k$ and then do a left-shift operation on $s_k$ with distance $\delta$ in $T_l$. Finally, we do a left-shift operation for the sensors in $S_{j_1,\gamma}$ on $T_l$ with distance $\delta$. Based on Lemma 1, the tree $T_l$ is now correctly updated.

Note that during the above left-shift process, we did multiple set-valid operations and each of them is followed immediately by a left-shift operation. An observation is that the total number of set-valid operations in the entire algorithm is at most $n$, because the sensors that are set to valid during these left-shift processes have never been set to valid before, as their indices are larger than $\gamma$. The number of left-shift operations immediately following these set-valid operations is thus also at most $n$.

Finally, we update $\gamma$ to $j_2$.

If $\delta = |g_i| < |o_r^i|$, we proceed on the next gap $g_{i+1}$. Otherwise, $o_r^i$ is eliminated and we delete it from the overlap tree $T_o$. Since $g_i$ is only partially covered, we proceed on the remaining $g_i$ with the same approach (in the special case $|g_i| = |o_r^i|$, we proceed on $g_{i+1}$).

#### 3.4.2 Case $C(o_r^i) \ge C(o_l^i)$

If $C(o_r^i) \ge C(o_l^i)$, we perform a right-shift process that moves all sensors in $S_2$ rightwards by distance $\delta = \min\{|g_i|, |o_l^i|, d_{\min}\}$, where $d_{\min}$ is the smallest displacement of the sensors in $S_2'$. Let $j_1$ be the index of the right generator of $o_l^i$ and $j_2$ be the index of the left generator of $g_i$. Hence, $S_2 = S_{j_1,j_2}$.

To implement the right-shift process, we first do a find-min operation on $T_l$ with indices $j_1$ and $j_2$ to compute $d_{\min}$. Then, we update the position tree $T_p$ by doing a right-shift operation for the sensors in $S_{j_1,j_2}$ with distance $\delta$. Since no sensor is moved leftwards in the above process, we do not need to update $\gamma$.

Next, we update the other two trees $T_o$ and $T_l$, depending on which of the three values $|g_i|$, $|o_l^i|$, and $d_{\min}$ is the smallest.

If $d_{\min}$ is the smallest, we do a right-shift operation with indices $j_1$ and $j_2$ for distance $\delta = d_{\min}$ on $T_l$. Recall that the find-min operation can also return the sensor that gives the sought smallest displacement. Suppose the above find-min operation on $T_l$ returns $s_k$ whose displacement is $d_{\min}$, with $j_1 \le k \le j_2$. Since the displacement of $s_k$ now becomes zero, we do a set-invalid operation on $s_k$ in $T_l$. Note that although it is possible that $k = \gamma$, we do not need to update $\gamma$.

We should point out a subtle situation where multiple sensors in $S_2'$ had displacements equal to $d_{\min}$. For handling this case, we do another find-min operation on $T_l$ with indices $j_1$ and $j_2$. If the smallest displacement found by the operation is zero, then we do the set-invalid operation on $T_l$ on the sensor returned by this find-min operation. We keep doing the find-min operations until the smallest displacement found above is larger than zero. Although there may be multiple set-invalid and find-min operations during the above procedure, the total number of these operations is $O(n)$ in the entire algorithm. To see this, it is sufficient to show that the number of set-invalid operations is $O(n)$ because there is exactly one find-min operation following each set-invalid operation. After each set-invalid operation, say, on a sensor $s_k$, we claim that the sensor $s_k$ will never be set to valid again in the algorithm. Indeed, since the displacement of $s_k$ was positive, according to the definition of $\gamma$, we have $k \le \gamma$. Since each set-valid operation is only on sensors with indices larger than $\gamma$ and the value $\gamma$ never decreases, $s_k$ will never be set to valid again in the algorithm. In fact, $s_k$ will never be moved leftwards again in the algorithm because $s_k$ is to the left of $g_i$ and all gaps that will be covered in the algorithm are to the right of the left endpoint of $g_i$ and thus are to the right of $s_k$.

This finishes the discussion for the case $\delta = d_{\min}$. Below we assume $\delta = \min\{|g_i|, |o_l^i|\} < d_{\min}$.

We do the right-shift operation with indices $j_1$ and $j_2$ for distance $\delta$ on $T_l$. Since $\delta < d_{\min}$, no valid sensor in $S_2$ will become invalid due to the right-shift. If $\delta = |o_l^i|$, we delete $o_l^i$ from $T_o$ since it is eliminated. If $\delta = |g_i|$, we proceed on the next gap $g_{i+1}$; otherwise, we proceed on the remaining $g_i$.

The algorithm finishes after all gaps are covered. The above discussion also shows that the running time of the algorithm is bounded by $O(n\log n)$.

## 4 The One-Sided Case

In this section, we solve the one-sided case in $O(n\log n)$ time, by using our algorithm for the containing case in Section 3 as an initial step. In the one-sided case, the sensors whose covering intervals do not intersect $B$ are all on one side of $B$, and without loss of generality, we assume it is the right side. Specifically, we assume $x_1 + z \ge 0$ holds. We assume at least one sc-interval does not intersect $B$ since otherwise it would become the containing case. Note that this implies $x_n - z > L$.

We use "configuration" to refer to a specification of where each sensor is located. For example, in the input configuration, each sensor $s_i$ is located at $x_i$.

A sequence of consecutive sensors $s_i, s_{i+1}, \ldots, s_j$ are said to be in attached positions if for each $k$ with $i \le k < j$, the right endpoint of the covering interval of $s_k$ is at the same position as the left endpoint of the covering interval of $s_{k+1}$.

### 4.1 Observations

First, we show in the following lemma that the special case where no sc-interval intersects $B$, i.e., $x_1 - z > L$, can be easily solved in $O(n)$ time.

###### Lemma 2

If $x_1 - z > L$, we can find an optimal solution in $O(n)$ time.

Proof: If $x_1 - z > L$, then all sensor covering intervals are strictly to the right side of $B$. According to the order preserving property, in the optimal solution $I(s_1)$ must have its left endpoint at $0$ (i.e., $s_1$ is at $z$). Note that we need at least $m = \lceil L/(2z)\rceil$ sensors to fully cover $B$. Since all sensors have their covering intervals strictly to the right side of $B$ and no sc-interval intersects $B$, in the optimal solution the sensors in $S_{1,m}$ must be in attached positions. Therefore, the optimal solution has a very special pattern: $s_1$ is at $z$, the sensors in $S_{1,m}$ are in attached positions, and the other sensors are at their original locations. Hence, we can compute this optimal solution in $O(n)$ time.
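The special-case pattern described in the proof can be sketched directly (our own illustration, assuming sorted positions `x` with `x[0] - z > L`; its optimality is exactly the statement of Lemma 2).

```python
import math

# Lemma 2 special case: all sc-intervals lie strictly right of [0, L].
# The first ceil(L / (2z)) sensors move into attached positions starting
# with I(s_1) = [0, 2z]; everyone else stays put.

def special_case_solution(x, z, L):
    m = math.ceil(L / (2 * z))          # sensors needed to cover [0, L]
    targets = list(x)
    for k in range(m):                  # s_{k+1} is centered at (2k + 1) z
        targets[k] = (2 * k + 1) * z
    # Every moved sensor travels leftwards, so the cost is a plain difference.
    cost = sum(xi - t for xi, t in zip(x[:m], targets[:m]))
    return targets, cost
```

For example, with sensors at $10, 12, 20$, range $z = 2$, and $L = 7$, two sensors suffice: they move to $2$ and $6$ at total cost $14$.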

In the following, we assume that at least one sc-interval intersects the barrier. Consider the largest index such that the corresponding sc-interval intersects the barrier; this index is less than the total number of sensors, due to our assumption that at least one sc-interval does not intersect the barrier. This index partitions the sensors into two groups: those whose sc-intervals intersect the barrier and those whose sc-intervals are strictly to its right.

Our containing case algorithm is not applicable here, and one can easily verify that the cost function we used in the containing case does not work for the sensors whose sc-intervals do not intersect the barrier. More specifically, suppose we want to move such a sensor leftwards to cover a gap; there will be an "additive" cost, i.e., the sensor has to move leftwards by a certain distance before its covering interval touches the barrier. Recall that the cost we defined on overlaps in the containing case is a "multiplicative" cost, and the above additive cost is not consistent with the multiplicative cost. To overcome this difficulty, we have to use a different approach to solve the one-sided case.

Our main idea is to somehow transform the one-sided case to the containing case so that we can use our containing case algorithm. Let be any optimal solution for our problem. By slightly abusing notation, depending on the context, a “solution” may either refer to the configuration of the solution or the sum of moving distances of all sensors in the solution. If no sensor of is moved in , then we can compute by running our containing case algorithm on the sensors in . Otherwise, let be the largest index such that sensor is moved in . If we know , then we can easily compute in time as follows. First, we “manually” move all sensors in leftwards to such that the left endpoints of their covering intervals are at . Then, we apply our containing case algorithm on all sensors in , which now all have their covering intervals intersecting (which is an instance of the containing case), and let be the solution obtained above. Based on the order preserving property, the following lemma shows that is .

###### Lemma 3

The solution obtained above is an optimal solution.

Proof: Since is moved in , must intersect in . Based on the order preserving property, for each , intersects in , which implies that the location of in must be to the left of . On the other hand, since no sensor with is moved, sensors in are useless for computing . Therefore, is essentially the optimal solution for the containing case on after each sensor in is moved leftwards to , i.e., . The lemma thus follows.

By the above discussion, one main task of our algorithm is to determine .

For each candidate index, we "manually" move all sensors after the last sensor whose sc-interval intersects the barrier, up to the candidate index, leftwards such that the left endpoints of their covering intervals are at the right endpoint of the barrier; the manual cost is the sum of these moving distances. We use a configuration to denote the sensor positions after the above manual movement, where the configuration contains only the sensors up to the candidate index (i.e., the sensors after the candidate index do not exist in it). The base configuration is the input configuration restricted to the sensors whose sc-intervals intersect the barrier. For each candidate index, suppose we apply our containing case algorithm on the corresponding configuration and call the result its containing case solution; the total cost of the candidate is its manual cost plus its containing case solution.
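The manual costs of all candidate indices can be accumulated in one linear scan with prefix sums. A sketch under our assumed notation (sorted centers `xs`, radius `z`, barrier right endpoint `beta`, and `lam` the number of sensors whose sc-intervals intersect the barrier; all names are our own, not the paper's):

```python
def manual_costs(xs, z, beta, lam):
    """W[i] = total distance to manually move sensors lam+1, ..., i
    (1-indexed) leftwards so that the left endpoint of each covering
    interval is at beta.  Each such sensor moves (x_i - z) - beta."""
    n = len(xs)
    W = [0.0] * (n + 1)
    for i in range(lam + 1, n + 1):
        W[i] = W[i - 1] + (xs[i - 1] - z - beta)  # prefix-sum accumulation
    return W
```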

The above discussion leads to the following lemma.

###### Lemma 4

The optimal solution equals the minimum, over all candidate indices, of the manual cost plus the corresponding containing case solution.

### 4.2 The Algorithm Description and Correctness

Lemma 4 leads to a straightforward algorithm for the one-sided case, by computing the total cost of each candidate index separately, as suggested above. In the sequel, by exploring the solution structures, we present an O(n log n) time solution. The algorithm itself is simple, but the observations behind it are not trivial to discover.
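The straightforward algorithm from Lemma 4 can be sketched as follows, treating the containing case algorithm as a black box `solve_containing(centers, z, beta)`; the function name, parameters, and surrounding notation are our own assumptions, not the paper's:

```python
def one_sided_naive(xs, z, beta, lam, solve_containing):
    """For each candidate index i (lam <= i <= n), manually move sensors
    lam+1..i so that their covering intervals start at beta, solve the
    resulting containing-case instance on the first i sensors, and return
    the cheapest total cost (a sketch under our assumptions)."""
    best = float('inf')
    manual = 0.0
    for i in range(lam, len(xs) + 1):
        if i > lam:
            manual += xs[i - 1] - z - beta          # move sensor i to beta
        centers = xs[:lam] + [beta + z] * (i - lam)  # moved sensors stack at beta
        best = min(best, manual + solve_containing(centers, z, beta))
    return best
```

With a black-box solver this runs one containing-case computation per candidate; the incremental algorithm below avoids recomputing from scratch.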

Our algorithm computes the containing case solution for every candidate index. Since it is easy to compute all manual costs in linear time, we focus on computing the containing case solutions. The main idea is the following. Suppose we already have the solution of the current candidate index, which can be considered as being obtained by our containing case algorithm. To compute the solution of the next candidate index, since we have an additional overlap defined by the next sensor at the right endpoint of the barrier, i.e., its sc-interval, we modify the current solution by "reversing" some shift processes that were performed by the containing case algorithm, i.e., using the new overlap to cover some gaps that were covered by other overlaps before. The details are given below.

We first compute on the configuration . If , then for each ; in this case, we can start from computing and use a similar idea to the following algorithm. To make the discussion more general, we assume , and thus .

Consider our containing case algorithm for computing . Recall that our containing case algorithm consists of shift processes and each shift process covers a gap using an overlap. Let be the shift processes performed in the algorithm in the inverse order of time (e.g., is the last process), where is the total number of processes in the algorithm. For each , let be the gap covered in the process by using/eliminating an overlap, denoted by . Note that each gap/overlap above may not be an original gap/overlap in the input configuration but only a subset of an original gap/overlap. It holds that for each . We call the gap list of . For each , we use to denote the cost of when the algorithm uses to cover in the process . Note that the above process information can be explicitly stored during our containing case algorithm without affecting the overall running time asymptotically. We will use this information later. Note that according to our algorithm the gaps in are sorted from right to left.
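Storing the process information amounts to recording, for each shift process, the gap it covered, the overlap it consumed, and the cost of that overlap. A minimal bookkeeping sketch (the record layout and names are our own assumptions):

```python
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    gap: tuple      # (left, right) endpoints of the (sub)gap covered
    overlap: tuple  # (left, right) endpoints of the (sub)overlap used
    cost: int       # cost of the overlap when it was used, i.e., the
                    # number of sensors shifted in this process

# Append one ShiftRecord per shift process while the containing case
# algorithm runs; reading the list backwards gives the processes in
# inverse order of time, with their gaps sorted from right to left.
```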

Next, we compute , by modifying the configuration . Compared with , the configuration has an additional overlap defined by at , and we use to denote it. We have the following lemma.

###### Lemma 5

holds if one of the following happens: (1) the coordinate of the right endpoint of is strictly larger than ; (2) is to the right of ; (3) is to the left of and the cost is not greater than the number of sensors between and .

Proof: We prove Case (3) first.

Suppose that we run our containing case algorithm on both and simultaneously. We use to denote the algorithm on and use to denote the algorithm on . Below we will show that every shift process of and is exactly the same, which proves .

Consider any shift process . We assume that the processes before it in both algorithms are the same, which holds trivially for the first process. In , the process covers gap by using overlap .

If is to the right of , then since is the rightmost overlap in , algorithm also uses to cover , which is the same as .

If is to the left of , then depending on whether is the only overlap to the right of , there are two cases.

1. If is not the only overlap to the right of , then let be the closest overlap to among the overlaps to the right of . According to our containing case algorithm, the current process of the algorithm only depends on the costs of the two overlaps and . Hence, algorithm uses the same shift process to cover as that in , i.e., it uses to cover .

2. If is the only overlap to the right of , then the current process of algorithm depends on the costs of the two overlaps and . In the following, we show that , and thus algorithm also uses to cover , as in .

Recall that the list of gaps is sorted from right to left by their generators. Thus, the gaps are sorted from left to right. Since is the only overlap to the right in , there is no overlap in to the right of for any with . Hence, algorithm will have to use the overlaps to the left of to cover for each with . In other words, all overlaps are to the left of all gaps , which implies that the above list of overlaps is sorted from right to left and .

Since is to the right of , the cost , which is the number of sensors between and , is no less than the number of sensors between and . Since in Case (3) the number of sensors between and is at least , we obtain that .

The above shows that every shift process of and is the same, which proves that holds for Case (3).

The proofs of the first two cases are similar to the above, and we only sketch them below.

Case (1) means that sensor still defines an overlap, say , to the right of . If we run our containing case algorithm on to compute , sensor will not be moved since the overlap is to the right of . Hence, holds.

Case (2) means the last shift process covers using that is to the right of . If we run our containing case algorithm on , overlap will never have any chance to be used to cover any gap, because is the rightmost overlap of . Hence, holds.

To compute , we first check whether one of the three cases in Lemma 5 happens, which can be done in constant time using the process information stored when computing . If any of the three cases happens, we are done with computing . Below, we assume none of the cases happens.

Let be the number of sensors between and , which would be the cost of the overlap if it were there right before we cover . Note that since we know the generators of , can be computed in constant time (e.g., if has two generators, , where is the index of the right generator of ).

Define the "unit revenue" (or savings) to be the difference between the cost of the overlap used before and the cost of the new overlap, i.e., the amount saved per unit length if we use the new overlap to cover the gap instead. Note that the unit revenue is positive, since otherwise the third case of Lemma 5 would happen. Hence, it is possible to obtain a better solution by using the new overlap to cover the gap instead of the overlap used before.

If , then we use to cover . Specifically, we move all sensors in leftwards by distance , where is the index of the right generator of the overlap . The above essentially “restores” the overlap and covers by eliminating . We refer to it as a reverse operation (i.e., it reverses the shift process that covers by using in the algorithm for computing ). Due to , after the reverse operation, is fully covered by and is eliminated. We will show later in Lemma 6 that the current configuration is . Note that . Again, is restored in . Finally, we remove from the list .
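Under the multiplicative cost model, re-covering a gap with the cheaper new overlap saves the unit revenue times the gap length. The cost update of one reverse operation can be sketched as follows (our own simplification; the function and parameter names are assumptions, not the paper's notation):

```python
def reversed_cost(S_prev, gap_len, cost_old, cost_new):
    """Cost after one reverse operation: the gap of length gap_len is now
    covered by the new overlap (cost cost_new) instead of the overlap
    used before (cost cost_old); delta = cost_old - cost_new is the
    unit revenue, and the saving is delta * gap_len."""
    delta = cost_old - cost_new
    assert delta > 0  # otherwise case (3) of Lemma 5 applies, nothing changes
    return S_prev - delta * gap_len
```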

If