On the Complexity of Barrier Resilience
for Fat Regions and Bounded Ply
(A preliminary version of this work appeared at the 9th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics.)
In the barrier resilience problem (introduced by Kumar et al., Wireless Networks 2007), we are given a collection of regions of the plane, acting as obstacles, and we would like to remove the minimum number of regions so that two fixed points can be connected without crossing any region. In this paper, we show that the problem is NP-hard when the collection only contains fat regions with bounded ply (even when they are axis-aligned rectangles of aspect ratio ). We also show that the problem is fixed-parameter tractable (FPT) for unit disks and for similarly-sized -fat regions with bounded ply and pairwise boundary intersections. We then use our FPT algorithm to construct an -approximation algorithm that runs in time, where .
The barrier resilience problem asks for the minimum number of spatial regions from a collection that need to be removed, such that two given points and are in the same connected component of the complement of the union of the remaining regions. This problem was originally posed in 2005 by Kumar et al. [16, 17], motivated by sensor networks. In their formulation, the regions are unit disks (sensors) in some rectangular strip , where each sensor is able to detect movement inside its disk. The question is then how many sensors need to fail before an entity can move undetected from one side of the strip to the opposite one (that is, how resilient to failure the sensor system is). Kumar et al. present a polynomial time algorithm to compute the resilience in this case. They also consider the case where the regions are disks in an annulus, but their approach cannot be used in that setting.
1.1 Related Work
Despite the seemingly small change from a rectangular strip to an annulus, the second problem still remains open, even for the case in which regions are unit disks in . There has been partial progress towards settling the question: Bereg and Kirkpatrick  present a factor -approximation algorithm for the unit disk case. This result was very recently improved to a 1.5-approximation by Chan and Kirkpatrick . On the negative side, Alt et al. , Tseng and Kirkpatrick , and Yang [26, Section 5.1] independently showed that if the regions are line segments in , the problem is NP-hard. Tseng and Kirkpatrick  also sketched how to extend their proof for the case in which the input consists of (translated and rotated) copies of a fixed square or ellipse.
The problem of covering barriers with sensors has received a lot of attention in the sensor network community (e.g., [6, 7, 13]). In the algorithms community, closely related problems involving region intersection graphs have also become quite popular. Gibson et al.  study a problem that is, in a sense, the opposite of ours: given a set of points and disks separating them (i.e., every path between two points intersects some disk), compute the maximum number of disks one can remove while keeping the points separated. They present a constant-factor approximation algorithm for this problem. Later, Penninger and Vigan showed that the problem is NP-complete . Recently, Cabello and Giannopoulos  gave a cubic-time algorithm for the case where only two points have to be kept separated, for barriers that are arbitrary connected curves (under some mild assumptions).
We present constructive results for two natural restricted variants of the problem. In Section 3 we show that the problem is fixed-parameter tractable in the resilience when the regions are unit disks. We then extend this approach to other shapes that resemble unit disks. This resemblance is measured with the following three restrictions: all regions are of similar size, region boundaries have pairwise intersections, and the collection of regions has bounded ply  (that is, no point of the plane is covered by too many sensors). Such restrictions are similar in spirit to previous results that bound the union complexity of fat (and non-fat) regions [8, 10, 24]. In Section 4 we also show that the FPT result can be used to obtain an approximation scheme. Formal definitions of fatness, ply, and more detailed descriptions of our restrictions are given in Section 3.2. In particular, the constructive results apply to the original unit disk coverage setting when the collection of disks (or, in general, fat objects) has bounded ply.
As a complement to these algorithms, in Section 5 we show that the problem is NP-hard even when the input is a collection of fat regions of arbitrary shape in . The result holds even if regions consist of axis-aligned rectangles of aspect ratio and . Our results rely on tools and techniques from both computational geometry and graph theory.
We denote with and the points that need to be connected, and with the set of regions that represent the sensors. To simplify the presentation of our results, we make the following general position assumption: all intersections between boundaries of regions in consist of isolated points. We say that a collection of objects in the plane are pseudodisks if the boundaries of any two of them intersect at most twice.
We formally define the concepts of resilience and thickness introduced in . The resilience of a path between two points and , denoted , is the number of regions of intersected by . Given two points and , the resilience of and , denoted , is the minimum resilience over all paths connecting and . In other words, the resilience between and is the minimum number of regions of that need to be removed to have a path between and that does not intersect any region of . Note that sometimes we will assume that neither nor are contained in any region of , since such regions must always be counted in the minimum resilience paths, hence we can ignore them (and update the resilience we obtain accordingly).
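The definition of resilience lends itself to a direct (very inefficient, but instructive) computation: try all subsets of regions in increasing size and test whether the two points become connected in the complement. The sketch below works on an abstract encoding of the arrangement, where each cell records the set of regions covering it; the names (`cells`, `adj`, `resilience`) and the encoding are ours, used only to illustrate the definition.

```python
from collections import deque
from itertools import combinations

def resilience(cells, adj, s, t, regions):
    """Brute-force resilience on an abstract arrangement.

    cells:   dict cell -> set of region ids covering that cell
    adj:     dict cell -> list of neighboring cells
    s, t:    cells containing the two points to connect
    regions: all region ids

    Returns the minimum number of regions whose removal connects s and t
    through cells not covered by any remaining region.
    """
    regions = list(regions)
    for size in range(len(regions) + 1):
        for removed in combinations(regions, size):
            rm = set(removed)
            if cells[s] - rm:      # s itself still covered: no path can start
                continue
            # BFS restricted to cells whose covering regions were all removed
            seen, queue = {s}, deque([s])
            while queue:
                c = queue.popleft()
                if c == t:
                    return size
                for n in adj[c]:
                    if n not in seen and not (cells[n] - rm):
                        seen.add(n)
                        queue.append(n)
    return None
```

On a small instance with a direct "expensive" corridor and a cheap detour, the search correctly prefers removing the single detour region.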
Often it will be useful to refer to the arrangement (i.e., the subdivision of the plane into faces; see  for a formal definition) induced by the regions of , which we denote by . Based on this arrangement we define a weighted dual graph as follows. There is one vertex for each face (i.e., 2-dimensional cell) of . Each pair of neighboring cells is connected in by two directed edges, and . The weight of an edge is if, when traversing from the starting cell to the destination one, we enter a region of (or if we leave a region; note that no other option is possible under our general position assumption).
The thickness of a path between and , denoted , equals the number of times enters a region of when traveling from to (possibly counting the same region multiple times). Given two points and , the thickness of and , denoted , is the value , where is a shortest path in from the cell of to the cell of , and equals the number of regions that contain . Also note that the resilience (or thickness) between two points only depends on the cells to which the points belong. Hence, we can naturally extend the definitions of thickness to encompass two cells of , or a cell and a point. Unless otherwise stated, we will use to denote a path with minimum resilience, and for one of minimum thickness.
Note that thickness and resilience can be different (since entering the same region several times has no impact on the resilience, but is counted every time for the thickness). In fact, the thickness between two points can be efficiently computed in polynomial time using any shortest path algorithm for weighted graphs (for example, using Dijkstra’s algorithm). However, as we will see later, the thickness (and the associated shortest path) will help us find a path of low resilience.
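The thickness computation described above is a plain shortest-path run on the weighted dual graph. The sketch below uses Dijkstra's algorithm with `heapq`; as a simplifying assumption (the exact weights are elided in the text), crossing from one cell to an adjacent one costs the number of regions entered, which under general position is 1 when entering a region and 0 when leaving one. The names and encoding are ours.

```python
import heapq

def thickness(cells, adj, s, t):
    """Thickness between cells s and t via Dijkstra on the dual graph.

    cells: dict cell -> set of region ids covering that cell
    adj:   dict cell -> list of neighboring cells
    Edge weight u -> v is the number of regions entered, |cells[v] - cells[u]|.
    The regions already containing s are added to the path cost, matching
    the definition of thickness in the text.
    """
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d + len(cells[s])
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v in adj[u]:
            nd = d + len(cells[v] - cells[u])  # regions entered on this step
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None
```

Note how a path that enters the same region twice pays twice, exactly the way thickness differs from resilience.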
Throughout the paper we often use the following fundamental property of disks, already observed in . In the statement below, “well-separated” is in the sense used in , that is, the distance between and is at least . (The well-separatedness of and is used to prove a factor 2 instead of 3; everything still works for points that are not well-separated, at a slight increase in the constants. Our most general statements for -fat regions do not make this requirement.)
Lemma 1 (, Lemma 1)
Let be a set of unit disks, and let be a path from to of minimum resilience. If are well-separated, then encounters no disk of more than twice.
Corollary 1 ()
When the regions of are unit disks, the thickness between two well-separated points is at most twice their resilience.
Note that a crucial property in the above results is that all disks have the same size. In Figure 2 we show problem instances with a single large disk that has to be traversed a linear number of times in any minimum resilience path. The same instance is then modified in Figure 2 so that the radius of the larger disk is only times larger than the radius of the other disks (at the expense of concentrating all disks at the same point).
3 Fixed-parameter tractability
In this section we introduce a single-exponential fixed-parameter tractable (FPT) algorithm, where the parameter is the resilience of the optimal solution. Thus, our aim is to obtain an algorithm that given a problem instance, determines whether or not there is a path of resilience between and , and runs in time for some constant and some polynomial function .
For clarity we first explain the algorithm for the special case of unit disks. Afterwards, in Section 3.2, we show how to adapt the solution to the case in which is a collection of -fat objects. Note that for treating the case of unit disk regions we assume that and are well-separated, so we can apply Lemma 1. This requirement is afterwards removed in Section 3.2.
First we give a quick overview of the method of Kumar et al.  for open belt regions. Their idea consists of considering the intersection graph of together with two additional artificial vertices , with some predefined adjacencies. There is a path from the bottom side to the top side of the belt if and only if there is no path between and in the graph. Hence, computing the resilience of the network is equal to finding a minimum vertex cut between and .
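The vertex cut at the heart of the method of Kumar et al. can be computed via the standard node-splitting reduction to maximum flow: each deletable vertex becomes an in/out pair joined by a unit-capacity edge, while the artificial vertices get infinite internal capacity. The sketch below is an illustration of that classical reduction (with Edmonds-Karp for the flow), not their exact implementation; all names are ours.

```python
from collections import defaultdict, deque

def min_vertex_cut(edges, s, t):
    """Minimum vertex cut separating s and t in an undirected graph,
    where s and t themselves may not be deleted (infinite capacity).
    Uses node splitting + Edmonds-Karp max-flow."""
    INF = float('inf')
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual direction

    nodes = {s, t}
    for u, v in edges:
        nodes.update((u, v))
    for v in nodes:                       # split each vertex into in/out
        add((v, 'in'), (v, 'out'), INF if v in (s, t) else 1)
    for u, v in edges:                    # undirected edge -> two INF arcs
        add((u, 'out'), (v, 'in'), INF)
        add((v, 'out'), (u, 'in'), INF)

    src, snk = (s, 'out'), (t, 'in')
    flow = 0
    while True:
        parent = {src: None}              # BFS for an augmenting path
        q = deque([src])
        while q and snk not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if snk not in parent:
            return flow
        path, v = [], snk                 # recover path and bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= b
            cap[(v, u)] += b
        flow += b
```

By max-flow min-cut, the returned flow value equals the minimum number of (non-artificial) vertices whose removal disconnects s from t.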
We start by giving a bird’s-eye view of our algorithm. Let be a path of minimum resilience from to , and let be any known path that starts at , passes through , and reaches an unbounded region. Assume that somehow we know that and do not cross (other than at and ). Then, we can cut open through , effectively splitting the regions of traversed by the path into two. Topologically speaking, we get something that is homeomorphic to an open belt region, and thus we can solve the problem as such: construct the intersection graph, connect the split regions of to either of the artificial vertices depending on which side of the cut they lie in, and look for a minimum vertex cut (see Figure 3, left). Note that, when doing this cut, it is possible that a disk is split into more than one component. Whenever this happens, we must identify the portions as one (i.e., once one portion is entered, entering the other portions of the same disk is free).
Thus, the problem is easy once we have a path that does not cross . Unfortunately, finding such a path is difficult. Instead, we use several observations to compute a (possibly non-simple) path that cannot have many crossings with , and guess where (if any) these crossings happen. Naturally, we do not know how the two paths interact, but we will try all possibilities and return the one whose resulting resilience is smallest. A fixed crossing pattern decomposes into subpaths whose endpoints are in (see Figure 3, right). Although the subpaths are unknown, we can compute them via the usual open belt region approach. The main problem is that the different sub-problems are not independent (removing a single region may be useful for several subpaths). Thus, rather than finding a vertex cut that isolates the single source from the single sink, we are given a list of sources and sinks that need to be pairwise disconnected from each other. In the literature, this problem is known as the vertex multicut problem , and several FPT approaches are known.
We now present some observations that will allow us to have a nice choice of (i.e., find a path in which the number of crossings with does not depend on ). Consider a minimum resilience path of shortest length between the cells containing and in , and let be the number of disks traversed by . Since has shortest length, it does not enter and leave the same region unless it helps reduce resilience. Since we assumed that is not contained in any region, is exactly the thickness of and . We observe that cells with high thickness to or can be ignored when we look for low resilience paths.
The minimum resilience path between and cannot traverse cells whose thickness to or is larger than .
Proof: We argue about thickness to ; the argument with respect to is analogous. Let be a path of minimum resilience between and , and let be the resilience of . Also, let be a minimum-thickness path from to . Recall that does not enter a disk more than twice, hence the thickness of is at most . Assume, for the sake of contradiction, that the thickness of some cell traversed by is greater than . Let be the portion of from to . Since the thickness of from to is at most , the triangle inequality implies that the thickness of is less than .
Now, by concatenating and , we would obtain a path that connects with whose thickness is less than , contradicting the thickness of cell .
For simplicity in the exposition, we will also bound the region to consider (thus, we discard regions with very high resilience since they will not be traversed by ). Let be the union of the cells of the arrangement that have thickness from at most ; we call the domain of the problem. Observe that is connected, but need not be simple (see Figure 4).
For simplicity in the explanation, we add additional disks surrounding so as to make sure that the unbounded face has thickness more than . This does not affect the asymptotic behavior of our algorithm, but it removes the need to consider some degenerate situations. Note that the number of cells remaining in might still be quadratic, hence asymptotically speaking the instance size has not decreased (the purpose of this pruning will become clear later).
There exists a point on the outer boundary of and a tree that spans , , and that has total thickness . (The thickness of the tree is defined as the thickness of the paths that compose it.)
Proof: Pick any point on the outer boundary of and consider the tree obtained by joining the shortest paths from to , and to . Note that the two paths may go through the same cell of , see Figure 4. The exact paths chosen are not important provided that they have no proper crossings. By definition, the thickness of each of these paths cannot exceed and , respectively, hence the lemma is shown.
Let be the path from to that traverses the tree from the previous lemma. We “cut open” through , removing it from our domain. Note that cells that are traversed by are split into two copies (or three in the case of the cell containing ) of the same Jordan curve (see Figure 5).
Consider now a minimum resilience path , and let denote its resilience. This path can cross several times, and it can even coincide with in some parts (shared subpaths). Although we do not know how and where these crossings occur, we can guess (i.e., try all possibilities) the topology of with respect to . For each disk that passes through, we consider two cases: if goes through it, it will be part of the solution, and can be ignored from now on (increasing the total resilience by one). Otherwise, we make it an obstacle, removing it from the domain, see Figure 5. In that way we know the exact behavior of in the regions traversed by . Additionally, we guess how many times and share part of their paths (either for a single crossing in one cell, or for a longer shared subpath). For each shared subpath, we guess from which cell arrives and leaves.
We call each such configuration a crossing pattern between and . More formally, a single crossing is described by a tuple of four cells: the first cell that the two paths have in common for that crossing, and the cell that visits right before entering it; similarly, the last cell that the two paths have in common, and the cell that enters right afterwards. A crossing pattern is described by a sorted list of all the crossings that and have.
For any problem instance , there are at most crossing patterns between and , where .
Proof: First, for all disks in , we guess whether or not they are also traversed by . By Lemma 3, has thickness at most , so at most that many disks are traversed (hence up to choices for which disks are traversed by ).
Now observe that cannot traverse many cells of : when moving from a cell to an adjacent one, we either enter or leave a disk of . Since we cannot leave a disk we have not entered and has thickness at most , we conclude that at most cells will be traversed by (other than the starting and ending cells).
We now bound the number of (maximal) shared subpaths between and : recall that passes through exactly disks, and visits each disk at most twice. Hence, there cannot be more than shared subpaths. For each shared subpath we must pick two of the cells traversed in (as candidates for first and last cell in the subpath). By the previous observation there are at most candidates for first and last cell (since that is the maximum number of cells traversed by ). Additionally, for each shared subpath we must determine from which side entered and left the subpath; in most cases we have two options for entering and leaving (since most cells are split into two by ). However, it could happen that the first, last (or even both) cells are the cell containing . The cell containing was split into three, and thus we have three options for which part of the cell enters or leaves. That is, in the worst case there are three possibilities where enters and three possibilities where leaves the path, which gives nine options overall. Since these choices are independent, in total we have at most possibilities.
That is, in order to determine a crossing pattern, we must fix which disks of are traversed by , as well as how many crossings between and occur and where they happen. The bounds for each of these terms are and , respectively. Since these choices are independent, and using the fact that , we obtain:
Note that the bound is very loose, since most of the choices will lead to an invalid crossing pattern. However, the importance of the lemma is in the fact that the total number of crossing patterns only depends on .
Our FPT algorithm works by considering all possible crossing patterns, finding the optimal solution for a fixed crossing pattern, and returning the solution of smallest resilience. From now on, we assume that a given pattern has been fixed, and we want to obtain the path of smallest resilience that satisfies the given pattern. If no path exists, we simply discard it and associate infinite resilience to it.
3.1 Solving the problem for a fixed crossing pattern
Recall that the crossing pattern gives us information on how to deal with the disks traversed by . Thus, we remove all cells of the arrangement that contain one or more disks that are forbidden to . Similarly, we remove from the disks that must cross. After this removal, several cells of our domain may be merged.
Since we do not use the geometry, we may represent our domain by a disk (possibly with holes). After the transformation, each remaining region of becomes a pseudodisk, and becomes a collection of disjoint partial paths, each of which has its endpoints on the boundary of (see Figure 6), but is otherwise not yet fixed. To solve the subproblem associated with the crossing pattern we must remove the minimum number of disks so that all partial paths are feasible.
We consider the intersection graph between the remaining regions of . That is, each vertex represents a region of , and two vertices are adjacent if and only if their corresponding regions intersect. Similarly to , we must augment the graph with boundary vertices. The partial paths split the boundary of into several components. We add a vertex for each component (these vertices are called boundary vertices). We connect each such vertex to vertices corresponding to pseudodisks that are adjacent to that piece of boundary (Figure 6). Let be the resulting graph associated to crossing pattern . Note that no two boundary vertices are adjacent.
We now create a secondary graph as follows: the vertices of are the boundary vertices of . We add an edge between two vertices if there is a partial path that separates the vertices in (Figure 6). Two vertices connected by an edge of are said to form a forbidden pair (each partial path that would create the edge is called a witness partial path). We first give a bound on the number of forbidden pairs that can have.
Any crossing pattern has at most forbidden pairs.
Proof: By definition, only adds edges between boundary vertices. Thus, it suffices to show that has at most boundary vertices. Since partial paths cannot cross, each such path creates a single cut of the domain. This cut introduces a single additional boundary vertex (except the first partial path, which introduces two). Recall that we can map the partial paths to crossings between paths and and, as argued in the proof of Lemma 4, these paths can cross at most times. Thus, we conclude that there cannot be more than boundary vertices.
The following lemma shows the relationship between the vertex multicut problem and the minimum resilience path for a fixed pattern.
There are vertices of whose removal disconnects all forbidden pairs if and only if there are disks in whose removal creates a path between and that obeys the crossing pattern .
Proof: Consider the regions of inside that are not covered by any disk after the disks have been removed and let be their union. By definition, there is a path between and with the fixed crossing pattern if all partial paths are feasible (i.e., there exists a path connecting the two endpoints that lies entirely within ). The reasoning for each partial path is analogous to the one used by Kumar et al. . If all partial paths are possible, then no forbidden pair can remain connected in , since, by definition, each forbidden pair disconnects at least one partial path (the witness path). On the other hand, as soon as one forbidden pair remains connected, there must exist at least one partial path (the witness path) that crosses the forbidden pair. Thus, if a forbidden pair is not disconnected, there can be no path connecting and for that crossing pattern.
Using Lemma 6, we can transform the barrier resilience problem to the following one: given two graphs , and on the same vertex set, find a set of minimum size so that no pair is connected in . This problem is known as the (vertex) multicut problem . Although the problem is known to be NP-hard if , there exist several FPT algorithms on the size of the cut and on the size of the set [18, 25]. Among them, we distinguish the method of Xiao (, Theorem 5) that solves the vertex multicut problem in roughly time, where is the number of vertices to delete, , and is the number of vertices of .
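For intuition only, here is what a vertex multicut instance looks like when solved by exhaustive search over deletion sets of size at most k; the FPT algorithm of Xiao cited above is far more sophisticated and this sketch should not be mistaken for it. All names are ours, and terminal vertices (the boundary vertices, in our setting) are excluded from deletion.

```python
from itertools import combinations

def vertex_multicut(vertices, edges, pairs, k):
    """Brute-force vertex multicut: find <= k non-terminal vertices whose
    removal disconnects every forbidden pair, or None if impossible.

    vertices: all vertex names
    edges:    undirected edges as (u, v) tuples
    pairs:    forbidden pairs that must end up disconnected
    """
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    terminals = {v for p in pairs for v in p}
    candidates = [v for v in vertices if v not in terminals]

    def separated(removed):
        rm = set(removed)
        for a, b in pairs:
            seen, stack = {a}, [a]      # DFS from a avoiding removed vertices
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w not in rm and w not in seen:
                        seen.add(w)
                        stack.append(w)
            if b in seen:
                return False
        return True

    for size in range(k + 1):           # smallest deletion set first
        for removed in combinations(candidates, size):
            if separated(removed):
                return list(removed)
    return None
```

The returned set plays the role of the disks to delete for the fixed crossing pattern.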
Let be a collection of unit disks in , and let and be two well-separated points. There exists an algorithm to test whether , for any value , and if so, to compute a path with that resilience, in time, where .
Proof: Recall that our algorithm considers all possible crossings between and . For any fixed crossing pattern , our algorithm computes , and all associated forbidden pairs. We then execute Xiao’s FPT algorithm  for solving the vertex multicut problem. By Lemma 6, the number of removed vertices (plus the number of disks that were forced to be deleted by ) will give the minimum resilience associated with .
Regarding the running time, the most expensive part of the algorithm is running an instance of the vertex multicut problem for each possible crossing pattern. Observe that the parameters and of the vertex multicut problem are bounded by functions of as follows: and (the first claim is direct from the definition of resilience, and the second one follows from Lemma 5). Hence, a single instance of the vertex multicut problem will need time. By Lemma 4 the number of crossing patterns is bounded by . Thus, by multiplying both expressions we obtain the bound on the running time, and the theorem is shown.
We remark that the importance of this result lies in the fact that an FPT algorithm exists. Hence, although the dependency on is high, we emphasize that the bounds are rather loose. We also note that both the minimum resilience path and the disks to be deleted can be reported.
3.2 Extension to Fat Regions
We now generalize the algorithm to consider more general shapes. A region is -fat if there exist two concentric disks and whose radii differ by at most a factor , such that (whenever the constant is not important, the region is simply called fat). Figure 7 shows an example of a -fat region. However, for our algorithms it is not sufficient to assume that the regions are fat. We impose three restrictions on our fat regions, which make them more like disks: (1) the collection of regions has bounded ply , (2) all regions have similar size, allowing us to assume the radius of is , and the radius of is , and (3) any two regions have intersections between their boundaries. Together, these three restrictions ensure that no minimum resilience path traverses a given region more than a constant number of times, making the thickness within a constant factor of the resilience. We formally describe each restriction, and illustrate how its removal impacts the path complexity.
- Bounded ply
The arrangement formed by a collection of regions is said to have bounded ply if no point is contained in more than elements of . As we illustrate in Figure 8, we can place regions of similar size and bounded region complexity (but unbounded ply) forming a corridor. In particular, the minimum resilience path between and may be forced to leave and reenter another similarly-sized region times. Note that this construction is not possible for unit disks, and therefore unit disk instances do not require bounded ply; however, as soon as we allow a disk with larger radius (e.g., a disk of radius , ), the bounded ply restriction is required.
- Similar size
We assume without loss of generality that the radius of is and the radius of is ; in this case we will call a -fat unit region. As previously shown in Figure 2, with a single larger region we can create a corridor of small interlocking regions with constant ply, and partially cover it with a large region to force the optimal resilience path to leave and reenter the large region times.
- Bounded region complexity
Our final assumption is that the fat regions cannot be too complex. In particular, we assume that any two region boundaries have pairwise intersections, ensuring that the intersection between any two regions has connected components. As shown in Figure 8, we can create a corridor with two regions that have pairwise boundary intersections with a third region, forcing the minimum resilience path to leave and reenter this third region times. Note that such complex regions can be formed, for example, by taking the union of circles with radius 1, with centers that are spaced apart on a line.
Although these restrictions may seem excessive, previous results have made similar assumptions on input regions, and for the same reason we do here: worst-case configurations are possible even with the simplest inputs. For example, to bound the union complexity of fat -covered regions, Efrat  assumes constant algebraic complexity, that is, that region boundaries can be represented by algebraic polynomials, implying that the region boundaries have at most pairwise intersections. Whitesides and Zhao , when defining -admissible curves, impose further restrictions on their (non-fat) regions, requiring the difference of any two regions to be connected, in order to guarantee a linear-size union boundary (see also [1, 20] for alternative proofs of this result). Lastly, de Berg  assumes constant density, which bounds the number of regions that can intersect any small disk, similar in spirit to ply.
To our knowledge, no existing definition of fatness implies any of our three assumptions. Fortunately, our assumptions are not overly restrictive. Indeed, they are representative of cases that we are likely to encounter in practice: it is inefficient to place sensors so that many of them cover the same region, sensor ranges are typically of similar size, and limiting the boundary intersections encompasses both unit disks and pseudodisks as special cases.
Let be a set of -fat unit regions forming an arrangement with ply , and bounded region complexity. Let be an optimal solution. In the sequence of regions of found when going from to in an optimal way, no region of appears more than times.
Proof: Let be a region in , and consider its containing disk with center . Analogously to the original argument by Bereg and Kirkpatrick , we note that every time the optimal path visits and leaves , it must do so to avoid some other region. This other region must intersect , and since it is -fat unit, it must contain a unit disk centered at distance at most from .
Therefore all regions intersecting have their unit disks centered at distance at most from . In particular, their unit disks are totally contained in a disk of radius centered at . A simple area argument shows that at most disjoint unit disks fit into a disk of radius . Since the ply is bounded by , overall there can be up to regions intersecting . Recall that, by our fatness assumption, two regions can intersect in only connected components. Therefore, the number of times an optimal path can reenter region is proportional to the number of other regions that intersect it, which is bounded by .
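The area argument above can be written out explicitly. The symbol names below are our own, since the constants are elided in the text: each region contains a unit disk and is contained in a concentric disk of radius $\beta$, the ply is $k$, and two region boundaries cross at most $\lambda$ times.

```latex
% If a region S intersects the containing disk of R (radius \beta, center c_R),
% the centers are close:
\[
S \cap R \neq \emptyset \;\Longrightarrow\; \|c_S - c_R\| \le 2\beta,
\]
% so the unit disk inscribed in S lies inside B(c_R, 2\beta + 1).
% Packing disjoint unit disks into that disk gives at most
\[
\frac{\pi(2\beta + 1)^2}{\pi \cdot 1^2} = (2\beta + 1)^2
\]
% disjoint unit disks; with ply k, at most k(2\beta + 1)^2 regions intersect R.
% Each of them meets R in O(\lambda) connected components, so the optimal path
% reenters R at most
\[
O\!\left(\lambda \, k \, \beta^2\right)
\]
% times, matching the constant-factor relation between thickness and resilience.
```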
We note that our bound is asymptotically tight; Figure 9 illustrates a matching lower bound construction.
When the regions of are -fat unit regions forming an arrangement with ply , and bounded region complexity, the thickness between two points is at most times their resilience.
This change in the upper bound of the thickness in terms of the resilience implies similar changes in Lemmas 2, 3, 4 and 5. The following lemmas summarize these changes; they are proved in the same way as their counterparts for disks, thus we only sketch the differences with the original proofs (if any).
When the regions of are -fat unit regions forming an arrangement with ply , and bounded region complexity, the minimum resilience path between and cannot traverse cells whose thickness to or is larger than .
Proof: We use the same reasoning as in the proof of Lemma 2. On the one hand there is the minimum thickness path between and , whose thickness is . On the other hand, we also have the minimum resilience path between the same points, whose thickness is at most by Corollary 2. Assume now that some cell traversed by has thickness from , for some . The alternative path goes from to , via , and its thickness is at most . The bound we need is obtained for the value of that makes both expressions equal, which is , leading to the claimed value.
Thus, for -fat objects our domain now becomes the union of the cells of the arrangement that have thickness from at most .
There exists a (possibly non-simple) path whose thickness is at most , that connects to a point on the outer boundary of and passes through .
For any problem instance , there are at most crossing patterns between and .
Proof: Let and . We proceed as in the proof of Lemma 4. Recall that previously we had crossing patterns, but now we must use the bounds that depend on instead. What before was now becomes , and the terms now become . Making these changes in the previous expression, we obtain that the number of crossings is bounded by
Since (and by simplifying the expression), this is upper bounded by
Finally, we apply that both , and obtain the desired bound.
Any crossing pattern has at most forbidden pairs.
Proof: As in the unit disk case, each crossing between and creates an additional vertex in the boundary (i.e., a potential vertex of ). Further note that and can cross at most times (since they traverse at most that many cells of ). A bound on the number of vertices of immediately implies a quadratic bound on the number of edges in as well. Thus, we obtain that the number of forbidden pairs is at most , as claimed.
With these results in place, the rest of the algorithm remains unchanged: the only additional property of unit disks that we use is the fact that they are connected, to be able to phrase the problem as a vertex cut in the region intersection graph.
Let be a collection of connected -fat unit regions of bounded region complexity in forming an arrangement of ply , and let and be two points. Let be a parameter. There exists an algorithm to test whether , and if so, to compute a path with that resilience, in time, where .
Proof: As before, the running time is bounded by the product of the number of crossing patterns and the time needed to solve a single instance of the vertex multicut problem. By Lemmas 10 and 11, these bounds now become and , respectively. The product of the two is dominated by the second term, which proves the theorem.
In this section we present an efficient polynomial-time approximation scheme (EPTAS) for computing the resilience of an arrangement of disks of bounded ply . The general idea of the algorithm is very simple: first, we compute all pairs of regions that can be reached by removing at most disks, for . Then, we compute a shortest path in the dual graph of the arrangement of regions, augmented with some extra edges. We prove that the length of the resulting path is a -approximation of the resilience.
As in the previous section, we first consider the case in which is a set of unit disks in (note that this time we have the additional constraint that no point is covered by more than disks). Let be the arrangement induced by the regions of , and let be the dual graph of . Recall that has a vertex for every cell of , and a directed edge between all pairs of adjacent cells, of cost when entering a disk and cost when leaving a disk. For any given , let be the graph obtained from by adding, for each pair of cells with resilience at most , a shortcut edge of cost .
For a pair of cells of , we can test whether is smaller than , and if so, compute it, in time (where ) by applying Theorem 1 to a point and a point . Since the number of pairs of cells of the arrangement is also bounded by a polynomial in , we obtain an EPTAS overall, since is a constant that depends only on and . Again, we emphasize that the bounds presented in this section are not tight; our objective is only to show the existence of an EPTAS for this problem.
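The augmented shortest-path step of this pipeline can be sketched as follows. The names are illustrative assumptions: `dual_edges` holds the dual-graph arcs (cost 1 when entering a disk, 0 when leaving), and `shortcuts` holds the precomputed small resiliences between cell pairs; the FPT resilience computation of Theorem 1 is abstracted into that dictionary rather than implemented here.

```python
import heapq

def augmented_shortest_path(n_cells, dual_edges, shortcuts, s, t):
    """Dijkstra on the dual graph augmented with shortcut edges.

    dual_edges: iterable of (u, v, cost) arcs between adjacent cells.
    shortcuts: dict mapping (u, v) to a precomputed resilience value.
    Returns the cost of a cheapest s-to-t path in the augmented graph.
    """
    adj = {c: [] for c in range(n_cells)}
    for u, v, cost in dual_edges:
        adj[u].append((v, cost))
    for (u, v), cost in shortcuts.items():
        adj[u].append((v, cost))
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")
```

On a path of four cells with unit-cost arcs, adding a shortcut of cost 1 between the first and third cells reduces the path cost from 3 to 2, mirroring how shortcut edges let the approximation pay the true resilience on short stretches.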
Let , where has ply , and let , be any two points inside . Then the resilience between and in is at most .
Proof: Let be the number of disks that contain or (or both). Clearly these disks must be removed. Also notice that , since contains both points and no point is contained in more than disks. Now we analyze how many other disks may need to be removed too.
Consider a minimum resilience path between and among those that stay inside . For each disk (containing neither nor ) that needs to be removed in an optimal solution, there must be another disk that intersects , so that and together separate and inside . We call such a pair of disks a separating pair. Thus, if the resilience is , there must be at least disjoint separating pairs intersecting (by disjoint we refer to the identities of the disks, not to the regions they occupy). Let and be the diametral pair on that is orthogonal to segment . We claim that one of the disks of any separating pair must cover either or . Indeed, assume to the contrary that there exist two unit disks and that separate and but contain neither nor (nor or ). Without loss of generality, we may assume that both and lie on the boundary of . Observe that in order to separate and , the union of and must cross segment and cannot cross segment (otherwise it would contain or , since , and are unit disks). However, the only possible way of doing so is if and are tangent to , and either or (see Figure 10). But then and are not separated, a contradiction.
That is, for each separating pair we have a unique disk that covers either or . Since no point is contained in more than disks (and contains both and ), we conclude that there cannot be more than separating pairs, completing the proof of the lemma.
The previous lemma implies that in an optimal resilience path, if a disk appears twice, the two entry points have resilience at most apart (when counting the cells traversed by the path between the two occurrences of the disk). Note that a lower bound of is also easy to construct, so the result is (asymptotically speaking) tight.
To prove the result in this section it will be convenient to focus on the sequence of disks encountered by a path when going from to . It turns out that such a problem is essentially a string problem, where each symbol represents a disk encountered by the path. In that context, the thickness will be equivalent to the number of symbols of the string (recall that we assume that is not contained in any disk), and the resilience to the number of distinct symbols.
Let be a string of symbols from some alphabet , such that no symbol appears more than twice. Let be a substring of . We define to be the length of , and to be the number of distinct symbols in . Clearly, . Let and be two fixed integers such that . We define the cost of a substring of to be:
Note that, in the string context, acts as the resilience, as the thickness, and is the approximation we compute. Intuitively, if is short (i.e., length at most ) we can compute the exact value . If has a symbol whose two appearances are far away we will use a “shortcut” and pay (i.e., for unit disk regions, by Lemma 12, we have ). Otherwise, we will approximate by .
Given a long string, we wish to subdivide into a segmentation , composed of disjoint segments (i.e., substrings of ), that minimizes the total cost . Clearly, .
Let be a sequence. There exists a segmentation such that , where .
Proof: Let be an integer such that , with exact value to be specified later. First, we consider all pairs of equal symbols in that are more than apart. We would like to take all of these pairs as separate segments; however, we cannot take segments that are not disjoint. So, we greedily take the leftmost symbol whose partner is more than further to the right, and mark this as a segment. We recurse on the substring remaining to the right of the rightmost occurrence. (In fact, we could choose any disjoint collection such that after their removal there are no more segments of this type longer than .) Finally, we segment the remaining pieces greedily into pieces of length . Figure 11 illustrates the resulting segmentation.
Now, we prove that the resulting segmentation has a cost of at most . First, consider a symbol to be counted if it appears in only one short (blue) segment, and to be double-counted if it appears in two different short segments. Suppose is double-counted. Then the distance between its two occurrences must be smaller than , as otherwise it would have formed a long (red) segment. Therefore, it must appear in two adjacent short segments. The leftmost of these two segments has length exactly , but only of these symbols can have a partner in the next segment. So, at most a fraction of the symbols are double-counted.
Second, we need to analyze the cost of the long (red) segments. In the worst case, all symbols in the segment also appear in another place, where they were already counted. In this case, the true cost would be , and we pay too much. However, we can assign this cost to the at least symbols in the segment; since each symbol appears only twice they can be charged at most once. So, we charge at most to each symbol. The total cost is then bounded by . To optimize the approximation factor, we choose such that ; more precisely we take .
Recall that for our resilience approximation we have (Lemma 12). Thus, the actual value of is obtained by solving for , which leads to .
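The greedy segmentation from the proof can be sketched directly on strings. Below, `L` plays the role of the (elided) length threshold and the function name is illustrative; "long" segments span far-apart symbol pairs, and everything else is chopped into "short" pieces of length at most `L`.

```python
def greedy_segmentation(s, L):
    """Segment s following the greedy rule of the proof: repeatedly take
    the leftmost symbol whose partner lies more than L positions to the
    right as a 'long' segment, continue to the right of that partner, and
    finally chop the leftover material into 'short' pieces of length <= L.
    Assumes every symbol appears at most twice in s."""
    segments = []
    buf_start = 0  # start of the material not yet cut into short pieces
    i = 0
    while i < len(s):
        j = s.find(s[i], i + 1)  # position of the partner of s[i], if any
        if j != -1 and j - i > L:
            # flush the buffered prefix into short pieces of length <= L
            for k in range(buf_start, i, L):
                segments.append(("short", s[k:min(k + L, i)]))
            segments.append(("long", s[i:j + 1]))
            i = buf_start = j + 1
        else:
            i += 1
    for k in range(buf_start, len(s), L):
        segments.append(("short", s[k:k + L]))
    return segments
```

For example, on `"abcadef"` with threshold 2, the far-apart pair of `a`s forms a long segment `"abca"`, and the remainder `"def"` is cut into short pieces `"de"` and `"f"`; concatenating all segment bodies recovers the input string.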
4.2 Application to resilience approximation
We now show that the shortest path between any in is a -approximation of their resilience. Let be a path from to in , and let be the sequence that records every disk of we enter along , plus the disks that contain the start point of , added at the beginning of the sequence, in any order. Then we have .
For every path from to and every segmentation of , there exists a path from to in of cost at most .
Proof: We describe how to construct a path in based on . For every segment of , we create a piece of path whose length in is at most the cost of the segment .
There are three types of segments. The first type are segments that start and end with the same symbol , which corresponds to a disk . For those, we make a shortcut path that stays inside , as per Lemma 12. The second type are segments whose length is at most . For those, by definition, contains a shortcut edge whose cost is exactly the resilience between the corresponding cells of . The third type are the remaining segments. For those, we simply use the piece of that corresponds to .
For any , it holds .
Proof: Let be a path from to of optimal resilience . Then, consider the sequence , that is, the sequence of disks that enters. Now, by Lemma 13, there exists a segmentation of of cost at most . By Lemma 14, there exists a path in of equal or smaller cost. Figure 12 illustrates this.
Now, consider the path that our algorithm produces. The resilience of is smaller than the cost of in , which is smaller than the cost of in , which is smaller than times the resilience of . That is: .
Let be a set of unit disks of ply in . We can compute a path between any two given points whose resilience is at most in time, where .
Proof: The running time of the algorithm is dominated by the preprocessing stage: determining if the resilience between every pair of vertices of is at most . Since is an arrangement of disks with ply at most , it has cells (we thank the anonymous referee who pointed this out to us, allowing the dependency on to be lowered). We execute the algorithm of Theorem 1 for every pair of cells (thus, times), and we obtain the desired bound.
4.3 Extension to fat regions
As in Section 3.2, we now generalize the result to arbitrary -fat unit regions. We again assume that our collection of regions has bounded ply , and that the region boundaries have pairwise intersections. As in Section 3.2, for simplicity in the notation our analysis assumes that the region boundaries have at most two pairwise intersections, implying that the intersection between any two overlapping regions has one connected component. However, our results generalize to pairwise intersections between region boundaries.
Let , where has ply , and let , be any two points inside . Then the resilience between and in is at most .
Proof: The resilience between and is upper-bounded by the number of regions that intersect . We can give an upper bound using a simple packing argument. Since and belong to a -(unit)fat region , they are both inside a circle with center and radius . Any other -fat region that interferes with the path from to must intersect . Such an intersecting region, being also -fat, must contain a unit-disk whose center cannot be more than away from . Therefore all regions intersecting have their unit-disks centered at distance at most from . Moreover, such disks are totally contained in a disk of radius centered at . As in the proof of Lemma 7, we can show that at most disjoint unit-disks fit into a disk of radius . Since the ply is at most , the maximum number of unit-disks inside a disk of radius in is .
As before, the rest of the arguments do not rely on the geometry of the regions anymore, and we can proceed as in the disk case. The only difference is that the value of doing a shortcut has increased to .
Let be a set of -fat regions of ply in . We can compute a path between any two points whose resilience is at most in time, where .
In this section we show that computing the resilience of certain types of fat regions is NP-hard. We recall that several NP-hardness results for other shapes are already known, but most of them are for skinny objects. For example, hardness for the case in which regions are line segments in was shown in [2, 23] and [26, Section 5.1]. Our contribution is to show that hardness holds for the case in which ranges have bounded fatness (i.e., ranges are not skinny). The only hardness proof that we know of for objects of positive area is by Tseng , who shows that if the regions are rotations and translations of a fixed square or ellipse the problem is NP-hard.
In addition to showing that the problem is difficult for other shapes, our construction is of independent interest, since it is completely different from those given in , , , and [26, Section 5.1]. Moreover, our proof has the advantage of being easy to extend to other shapes. We also note that the construction of Tseng uses several rotations of a fixed shape (i.e., 3 for a square, 4 for an ellipse), whereas our construction only needs two different rotations of the same shape.
First we show NP-hardness for general connected regions, and later we extend it to axis-aligned rectangles of aspect ratio and . We start the section by establishing some useful graph-theoretic results.
Let be a graph, and let be a point in the plane. Let be an embedding of into the plane, which behaves properly (vertices go to distinct points, edges are curves that do not meet vertices other than their endpoints and do not triple cross), and such that is not on a vertex or edge of the embedding. We say is an odd embedding around if it has the following property: every cycle of has odd length if and only if the winding number of the corresponding closed curve in the plane around is odd. We say a graph is oddly embeddable if there exists an odd embedding for it (Figure 13 shows some examples). We claim that vertex cover is NP-hard for this constrained class of graphs. The proof of this statement is based on two observations.
Every tripartite graph is oddly embeddable.
Proof: The vertices of a tripartite graph can be divided into three groups such that there are no internal edges in any of these groups. Now, consider a triangle around . We create an embedding where all vertices in are close to one corner of , the vertices in are close to a second corner, and the vertices in are close to the remaining corner. All edges are straight line segments. See Figure 13.
Consider the graph obtained from by contracting all vertices in to a single vertex ; is a triangle (or a subgraph of a triangle). Now consider any cycle in , and project it to . Since there were no edges in connecting vertices within a group , this does not change the length of the cycle, nor does it change the winding number around . Any two consecutive edges from to , and back from to , do not influence the parity of the length of the cycle, nor the winding number around , so we can remove them from the cycle. We are left with a cycle of length and winding number or , for some integer . Clearly, is odd if and only if is odd. Therefore, is an odd embedding of , as required.
The maximum independent set problem in a graph asks for the largest set of vertices in the graph such that no two vertices in the set are connected by an edge. This problem is well-known to be NP-hard on general graphs. In fact, it remains NP-hard for tripartite graphs. A simple proof is included for completeness, and because we need the argument later. Note that a minimum vertex cover is the complement of a maximum independent set, hence by proving the NP-hardness of maximum independent set, we are also proving that minimum vertex cover is NP-hard.
Let be a graph. Let be obtained from by subdividing every edge into an odd number of pieces, by adding an even number of new vertices. Let be the total number of vertices added. Then has a maximum independent set if and only if has a maximum independent set with .
Proof: For every independent set in , there is a corresponding independent set in with : for every pair of extra vertices on an edge, we can always add one of the two to an independent set. Conversely, for every independent set in , there is a corresponding independent set in with : cannot use both extra vertices on an edge, so if we simply remove all extra vertices we remove at most elements from (clearly, if we remove fewer than vertices from this way, we can remove more vertices until has the desired cardinality).
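The correspondence in this lemma can be checked by brute force on small graphs. The sketch below uses illustrative helper names and an exhaustive search that is only meant for tiny instances; subdividing each edge with an even number `2k` of internal vertices grows the maximum independent set by exactly `k` per edge.

```python
from itertools import combinations

def max_independent_set_size(n, edges):
    """Exhaustive maximum-independent-set search (tiny graphs only)."""
    for size in range(n, -1, -1):
        for cand in combinations(range(n), size):
            cs = set(cand)
            if all(u not in cs or v not in cs for u, v in edges):
                return size
    return 0

def subdivide(n, edges, k):
    """Replace each edge by a path with 2k internal vertices (an even
    number, as the lemma requires); returns the new vertex count and
    the new edge list."""
    new_edges, nxt = [], n
    for u, v in edges:
        chain = [u] + list(range(nxt, nxt + 2 * k)) + [v]
        nxt += 2 * k
        new_edges.extend(zip(chain, chain[1:]))
    return nxt, new_edges
```

For a triangle (maximum independent set of size 1), subdividing each of the 3 edges with 2 extra vertices yields a 9-cycle, whose maximum independent set has size 4 = 1 + 3, matching the lemma's count.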
From the above observations, it follows that maximum independent set is also NP-hard on tripartite graphs, and hence, also on oddly embeddable graphs. Since our construction does not increase the maximum vertex degree, and vertex cover is known to be NP-hard for graphs with maximum degree three, we obtain the following.
Minimum vertex cover on oddly embeddable graphs of maximum degree 3 is NP-hard.
Given an embedded graph , we say that a curve in the plane is an odd Euler path if it does not go through any vertex of and it crosses every edge of an odd number of times.
Let be a point in the plane, and an oddly embedded graph around . Then there exists an odd Euler path for that starts at and ends in the outer face. Moreover, such path can be computed in polynomial time.
Proof: First, we insert an even number of extra vertices on every edge of such that in the resulting embedded graph , every edge crosses at most one other edge. Now we construct an Euler path that crosses every edge of exactly once; note that this path will therefore cross every edge of an odd number of times. Consider a pair of crossing edges and the four vertices concerned. For each pair of consecutive vertices (vertices that are not endpoints of the same edge), find a path in the graph that does not go around (when seen as a cycle, after adding the crossing).
The parity of the length of this path does not depend on which path we take: if there would be an even-length path and an odd-length path between the same two vertices, both of which do not go around , then there would be an odd cycle that does not contain , which contradicts the odd embeddability of . Now, if the path has even length, we identify these two vertices. Note that of the four pairs of vertices involved in a crossing (i.e., ignoring the two pairs forming edges in ), exactly two pairs will have odd length connecting paths, so effectively we “flatten” the crossing. We do this for all crossings, and call the resulting multigraph . See Figure 14. (If the two crossing edges belong to different connected components of , there are no paths connecting their vertices; in this case we make an arbitrary choice of which vertices to identify.)
Now is planar. Furthermore, by construction, all faces of have even length, except the one containing and the outer face. Therefore, the dual multigraph of has only two vertices of odd degree, and hence has an Euler path between these vertices. Furthermore, this Euler path crosses every edge of exactly once, and therefore every edge of an odd number of times. Note that the proof is constructive. Moreover, both the transformations and the Euler path can be done in polynomial time, hence such path can also be obtained in polynomial time.
Let be a given point in the plane, and an oddly embedded graph (not necessarily planar) around . Furthermore, let be a curve that forms an odd Euler path from to the outer face. Then we can construct a set of connected regions such that a minimum set of regions from to remove corresponds exactly to a minimum vertex cover in .
Proof: If is self-intersecting, then we can rearrange the pieces between self-intersections to remove all self-intersections. Thus we assume that is a simple path.
If crosses any edge of more than once, we insert an even number of extra vertices on that edge such that afterwards, every edge is crossed exactly once. Let be the resulting graph. Since we inserted an even number of vertices on every edge, finding a minimum vertex cover in will give us a minimum vertex cover in .
Now, for each vertex in , we create one region in . This region consists of the point where is embedded, and the pieces of the edges adjacent to up to the point where they cross . Figure 15 shows an example (the regions have been dilated by a small amount for visibility; if the embedding has enough room this does not interfere with the construction). Note that all regions are simply connected.
Finally, we create one more special region in that forms a corridor for . Then is duplicated at least times to ensure that crossing this “wall” will always be more expensive than any other solution. Figure 15 shows this.
Now, in order to escape, anyone starting at must roughly follow so as not to cross the wall. This means that for every edge of that passes, one of the regions blocking the path (one of the vertices incident to the edge) must be disabled. The smallest number of regions to disable to achieve this corresponds to a minimum vertex cover in .
Combining this result with Corollary 3, we obtain our first hardness result for the barrier resilience problem.
The barrier resilience problem for a collection of connected regions is NP-hard.
5.1 Extension to fat regions
We now adapt the previous approach to also work for a much more restricted class of regions: axis-aligned rectangles of sizes and for any (as long as depends polynomially on ). For simplicity, we limit to have maximum degree 3. Maximum independent set is still known to be NP-hard in that case , and making them tripartite does not change the maximum degree.
The idea of the reduction is the following. We start from a sufficiently spacious (but polynomial) embedding of , as illustrated in Figure 16. On each edge we add a large even number of extra vertices. Each new vertex will be replaced by a rectangle, so every edge in will become a chain of overlapping rectangles, like the green rectangles in Figure 16. Therefore the first phase consists of replacing the embedding of by an equivalent embedding of rectangles. We call these rectangles graph rectangles (green in the figures). Some care must be taken in the placement of graph rectangles around degree-3 vertices and in crossings, so that the rest of the construction can be made to work. Next, we place wall rectangles (orange in the figures; these consist of many copies of the same rectangle) across each graph rectangle. The gaps between adjacent wall rectangles should cover the overlapping part of two adjacent graph rectangles, so that a path can pass through them only when one of the two graph rectangles is removed. Then, we find a curve from that goes through every gap exactly once (note that exists, by Lemma 17). Figure 16 illustrates this phase of the construction. Finally, we add more wall rectangles around , to force any potential minimum resilience path from that does not go through the wall rectangles to be homotopic to . Figure 16 shows the final set of rectangles. Now, computing an optimal resilience path among this set of rectangles corresponds to a maximum independent set in .
For the construction to work, there needs to be enough space to place the wall rectangles. It is clear that this is possible far away from the graph rectangles, but close to the graph rectangles we proceed as follows: first, Figure 17 shows the placement of rectangles along an edge of . Figure 17 shows how to place the rectangles at degree-3 vertices. Crossings are handled as shown in Figure 17. These gadgets force some of the gaps in the chain to join each other. But this is no problem if every edge has enough rectangles. Also, note that at the center of the construction in Figure 17 there are two overlapping green rectangles, which belong to the two crossing chains. This is the only place where we vitally use the fact that the regions are not pseudodisks.
Let be a given point in the plane, and an oddly embedded graph with maximum vertex degree (not necessarily planar) around . Furthermore, let be a curve that forms an odd Euler path from to infinity. Then we can construct a set of axis-aligned rectangles of aspect ratio such that a minimum set of regions from to remove corresponds exactly to a minimum vertex cover in .
Proof: We first add groups of extra vertices on every edge of , in an even number per edge, so that we have room to place the rectangles. Then we replace edges by chains of rectangles as in Figure 17, and connect the orange (wall) rectangles to force the only optimal path from to the outer face to be along the Euler path . The path may have to be rerouted locally close to the crossings, but since there is a sufficiently large number of crossings with every edge anyway, this is always possible. Orange rectangles have to be duplicated sufficiently many times again, to make sure that no optimal path will ever cross them.
The barrier resilience problem for regions that are axis-aligned rectangles of aspect ratio is NP-hard.
A similar approach can likely be used to show NP-hardness for other classes of regions as well. However, it seems that a necessary property for our approach is that the regions are able to completely cross each other: in other words, the regions in cannot be pseudodisks. (A similar fact was also observed in .)
The authors would like to thank some anonymous referees for their thorough check of a previous version of this document. M.K. was partially supported by the ELC project (MEXT KAKENHI No. 24106008). M.L. was supported by the Netherlands Organisation for Scientific Research (NWO) under grant 639.021.123. R.I.S. was partially supported by projects MINECO MTM2015-63791-R/FEDER and Gen. Cat. DGR 2014SGR46, and by MINECO through the Ramón y Cajal program.
-  P. K. Agarwal, J. Pach, and M. Sharir. Surveys on Discrete and Computational Geometry: Twenty Years Later, volume 453 of Contemporary Mathematics, chapter State of the Union (of Geometric Objects). AMS, 2008.
-  H. Alt, S. Cabello, P. Giannopoulos, and C. Knauer. On some connection problems in straight-line segment arrangements. In Proc. EuroCG, pages 27–30, 2011. Also available as CoRR abs/1104.4618.
-  S. Bereg and D. G. Kirkpatrick. Approximating barrier resilience in wireless sensor networks. In Proc. ALGOSENSORS, pages 29–40, 2009.
-  S. Cabello and P. Giannopoulos. The complexity of separating points in the plane. Algorithmica, 74(2):643–663, 2016.
-  D. Y. C. Chan and D. G. Kirkpatrick. Multi-path algorithms for minimum-colour path problems with applications to approximating barrier resilience. Theoretical Computer Science, 553:74–90, 2014.
-  C.-Y. Chang, C.-Y. Hsiao, and C.-T. Chang. The k-barrier coverage mechanism in wireless visual sensor networks. In Proc. IEEE WCNC, pages 2318–2322, 2012.
-  D. Z. Chen, Y. Gu, J. Li, and H. Wang. Algorithms on minimizing the maximum sensor movement for barrier coverage of a linear domain. Discrete & Computational Geometry, 50(2):374–408, 2013.
-  M. de Berg. Improved bounds on the union complexity of fat objects. Discrete Comput. Geom., 40(1):127–140, July 2008.
-  M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars. Arrangements and duality. In Computational Geometry: Algorithms and Applications, pages 165–182. Springer, 2008.
-  A. Efrat. The complexity of the union of -covered objects. SIAM J. Comput., 34:775–787, 2005.
-  M. Garey, D. Johnson, and L. Stockmeyer. Some simplified NP-complete graph problems. Theoretical Computer Science, 1(3):237–267, 1976.
-  M. Gibson, G. Kanade, and K. Varadarajan. On isolating points using disks. In Proc. ESA, pages 61–69, 2011.
-  S. He, J. Chen, X. Li, X. Shen, and Y. Sun. Cost-effective barrier coverage by mobile sensor networks. In Proc. INFOCOM, pages 819–827, 2012.
-  T. C. Hu. Multi-commodity network flows. Operations Research, 11(3):344–360, 1963.
-  M. Korman, M. Löffler, R. I. Silveira, and D. Strash. On the complexity of barrier resilience for fat regions. In Proc. ALGOSENSORS, pages 201–216, 2013.
-  S. Kumar, T.-H. Lai, and A. Arora. Barrier coverage with wireless sensors. In Proc. MOBICOM, pages 284–298, 2005.
-  S. Kumar, T.-H. Lai, and A. Arora. Barrier coverage with wireless sensors. Wireless Networks, 13(6):817–834, 2007.
-  D. Marx. Parameterized graph separation problems. Theoretical Computer Science, 351(3):394–406, 2006.
-  G. L. Miller, S. Teng, W. P. Thurston, and S. A. Vavasis. Separators for sphere-packings and nearest neighbor graphs. J. ACM, 44(1):1–29, 1997.
-  J. Pach and M. Sharir. On the boundary of the union of planar convex sets. Discrete & Computational Geometry, 21(3):321–328, 1999.
-  R. Penninger and I. Vigan. Point set isolation using unit disks is NP-complete. CoRR, abs/1303.2779, 2013.
-  K.-C. R. Tseng. Resilience of wireless sensor networks. Master’s thesis, University of British Columbia, 2011.
-  K.-C. R. Tseng and D. Kirkpatrick. On barrier resilience of sensor networks. In Proc. ALGOSENSORS, pages 130–144, 2011.
-  S. Whitesides and R. Zhao. K-admissible collections of Jordan curves and offsets of circular arc figures. Technical Report SOCS 90.08, School of Computer Science, McGill University, 1990.
-  M. Xiao. Simple and improved parameterized algorithms for multiterminal cuts. Theory of Computing Systems, 46(4):723–736, 2010.
-  S. Yang. Some Path Planning Algorithms in Computational Geometry and Air Traffic Management. PhD thesis, State University of New York, Stony Brook, 2012.