Weak Visibility Queries of Line Segments in Simple Polygons and Polygonal Domains
In this paper we consider the problem of computing the weak visibility polygon of any query line segment pq (or WVP(pq)) inside a given polygon P. Our first non-trivial algorithm runs in simple polygons and needs O(n^3 log n) time and O(n^3) space in the preprocessing phase to report WVP(pq) of any query line segment pq in time O(log n + |WVP(pq)|). We also give an algorithm to compute the weak visibility polygon of a query line segment in a non-simple polygon with h pairwise-disjoint polygonal obstacles with a total of n vertices. Our algorithm needs O(n^2 log n) time and O(n^2) space in the preprocessing phase and computes WVP(pq) in query time O((k + n) log n + |WVP(pq)|), in which k is an output sensitive parameter of at most O(nh'), where h' is the number of holes visible from pq, and |WVP(pq)| is the output size. This is the best query-time result on this problem so far.
Keywords: Computational Geometry, Visibility, Line Segment Visibility
Two points inside a polygon P are visible to each other if their connecting segment remains completely inside P. The visibility polygon of a point q in a simple polygon P is the set of points of P that are visible from q. The visibility problem has also been considered for line segments: a point v is said to be weakly visible from a line segment pq if there exists a point w on pq such that v and w are visible to each other. The problem of computing the weak visibility polygon (or WVP) of pq inside a polygon P is to compute all points of P that are weakly visible from pq.
If P is a simple polygon, WVP(pq) can be computed in linear time (6); (11). For a polygon with holes, the weak visibility polygon has a more complicated structure. Suri and O'Rourke (10) showed that the weak visibility region can be computed in O(n^2) time if output as a union of O(n^2) triangular regions. They also showed that WVP(pq) can be output as a polygon in O(n^4) time. Their algorithm is worst-case optimal, as there are polygons with holes whose weak visibility polygon from a given segment has Ω(n^4) vertices.
The query version of this problem has received less attention. It is shown in (3) that a simple polygon can be preprocessed in O(n^3 log n) time and O(n^3) space such that, given an arbitrary query line segment inside the polygon, O(log n + k) time is required to recover the k weakly visible vertices. This result was later improved by (1), in which the preprocessing time and space were reduced to O(n^2 log n) and O(n^2), respectively, at the expense of a higher query time of O(log^2 n + k). In a recent work, we presented an algorithm to report WVP(pq) of any pq in O(log^2 n + |WVP(pq)|) time by spending O(n^3 log n) time and O(n^3) space for preprocessing (8). Later, Chen and Wang considered the same problem and, by improving the preprocessing time of the visibility algorithm of Bose et al. (3), reduced the preprocessing time to O(n^3) (5). In another work (9), we showed that WVP(pq) can be reported in the near optimal time of O(log n + |WVP(pq)|), after preprocessing the input polygon in O(n^3 log n) time and O(n^3) space.
1.1 Our results
In the first part of this paper, we present an algorithm for computing the weak visibility polygon of any query line segment in a simple polygon P. We build a data structure in O(n^3 log n) time and O(n^3) space that can compute WVP(pq) in O(log n + |WVP(pq)|) time for any query line segment pq. A preliminary version of this result appeared in (8).
In the second part of the paper, we consider the problem of computing WVP(pq) in polygonal domains. For a polygon with h holes and a total of n vertices, our algorithm needs O(n^2 log n) preprocessing time and O(n^2) space. We can then compute WVP(pq) in O((k + n) log n + |WVP(pq)|) time. Here k is an output sensitive parameter of at most O(nh'), where h' is the number of holes visible from pq, and |WVP(pq)| is the size of the output polygon. Considering the extra cost of preprocessing, our algorithm is an improvement over the previous result of Suri and O'Rourke (10).
1.2 Visibility decomposition
Let P be a polygon with a total of n vertices, and let q be a point inside P. The visibility sequence of q is the sequence of vertices and edges of P that are visible from q. A visibility decomposition of P partitions P into a set of visibility regions, such that all points inside each region have the same visibility sequence. This partition is induced by the critical constraint edges, which are the chords of the polygon, each induced by two vertices of P, such that the visibility sequences of points on their two sides are different.
The visibility sequences of two neighboring visibility regions, which are separated by an edge of the decomposition, differ in only one vertex. This fact is used to reduce the space complexity of maintaining the visibility sequences of the regions (3). This is done by defining sink regions: a sink is a region whose visibility sequence is smaller than those of all of its adjacent regions. It is therefore sufficient to maintain only the visibility sequences of the sinks, from which the visibility sequences of all other regions can be computed. By constructing a directed dual graph (see Figure 1) over the visibility regions, one can maintain the difference between the visibility sequences of neighboring regions (3).
1.3 A linear time algorithm for computing WVP
Here, we present the linear-time algorithm of Guibas et al. for computing WVP(pq) of a line segment pq inside P, as described in (4). This algorithm is used for computing weak visibility polygons in an output sensitive way, as explained in Section 2. For simplicity, we assume that pq is an edge of P, but we will show that this can be extended to any line segment inside the polygon.
Let SPT(p) denote the shortest path tree in P rooted at p. The algorithm traverses SPT(p) using a DFS and checks the turn at each vertex v_i of P. If the shortest path makes a right turn at v_i, then we find the descendant v_j of v_i in the tree with the largest index (see Figure 2). As there is no visible vertex between v_j and v_{j+1}, we can compute the intersection point z of the edge v_j v_{j+1} and the ray from u through v_i in O(1) time, where u is the parent of v_i in SPT(p). Finally, the counter-clockwise boundary of P is removed from v_i to z by inserting the segment v_i z.
Let P' denote the remaining portion of P. We follow the same procedure on P', this time using SPT(q): the algorithm checks every vertex to see whether the shortest path from q makes its first left turn there. If so, we cut the polygon at that vertex in a similar way. After finishing this second pass, the remaining portion of P is WVP(pq).
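The turn tests that drive both passes reduce to a cross-product sign check. The following is a minimal sketch with hypothetical helper names (not from the original algorithm), illustrating the right/left-turn predicate and the "first left turn" test along a root-to-leaf path of the tree:

```python
def turn(a, b, c):
    """Sign of the cross product (b - a) x (c - b):
    > 0 for a left (counter-clockwise) turn at b,
    < 0 for a right (clockwise) turn, 0 for collinear points."""
    return (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])

def first_left_turn(path):
    """Index of the vertex where the polyline `path` makes its first
    left turn, or None if the path is a clockwise-only convex chain."""
    for i in range(1, len(path) - 1):
        if turn(path[i - 1], path[i], path[i + 1]) > 0:
            return i
    return None
```

For example, the path (0,0), (1,0), (2,-1), (2,0) turns right at (1,0) and makes its first left turn at (2,-1).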
2 Weak visibility queries in simple polygons
In this section, we show how to modify the algorithm of Section 1.3 so that the WVP can be computed efficiently in an output sensitive way. An important part of this algorithm is computing the shortest path trees. Therefore, we first show how to compute these trees in an output sensitive way. Then, we present a preliminary version of our algorithm. Later, in Section 2.3, we show how to improve this algorithm.
2.1 An output sensitive algorithm for computing SPT
The Euclidean shortest path tree SPT(p) of a point p inside a simple polygon of size n can be computed in O(n) time (6). In this section we show how to preprocess P, so that for any given point p we can report any part of SPT(p) in an output sensitive way.
The shortest path tree is composed of two kinds of edges: the primary edges, which connect the root to its directly visible vertices, and the secondary edges, which connect two vertices of P (see Figure 3). We further distinguish two kinds of secondary edges: a 1st type secondary edge (1st type for short) is a secondary edge that is connected to a primary edge, and a 2nd type secondary edge (2nd type for short) is a secondary edge that is not connected to a primary edge. We show how to store these edges in the preprocessing phase, so that they can be retrieved efficiently at query time.
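This edge classification can be sketched as follows (a toy sketch; the `parent` map and all names are invented for illustration and are not the paper's data structure):

```python
def classify_spt_edges(parent, root):
    """Classify shortest-path-tree edges given a child -> parent map.
    Primary edges leave the root; a secondary edge (u, v) is 1st type
    if it attaches to a primary edge (u is a child of the root),
    otherwise it is 2nd type."""
    primary, first_type, second_type = [], [], []
    for v, u in parent.items():          # tree edge from u (parent) to v
        if u == root:
            primary.append((u, v))
        elif parent.get(u) == root:
            first_type.append((u, v))
        else:
            second_type.append((u, v))
    return primary, first_type, second_type
```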
The primary edges of SPT(p) can be computed using the algorithm for computing visibility polygons of query points (3). More precisely, with a preprocessing cost of O(n^3 log n) time and O(n^3) space, a pointer to the sorted list of the visible vertices of a query point can be computed in O(log n) time.
For computing the secondary edges of the SPT, we store, in the preprocessing phase, all the possible values of the secondary edges of each vertex. Having these values, we can locate the appropriate list at query time and report the edges without any further cost.
Each vertex v of P has O(n) potential parents in the SPT. For each potential parent of v, it may have O(n) 2nd type edges in the SPT. Therefore, for a vertex v, O(n^2) space is needed to store all the possible combinations of the 2nd type edges. Computing and storing these potential edges can be done in O(n^2) time. At query time, when we arrive at the vertex v, we use these data to extract the 2nd type edges of v in the SPT. Computing these data for all the vertices of P needs O(n^3) time and space.
The parent of a 1st type edge of the SPT is the root of the tree. As the root can be in any of the O(n^3) visibility regions, we need to consider all these potential parents to compute the possible combinations of the 1st type edges of a vertex. Considering all the regions, the possible 1st type edges can be computed in O(n^3) time and space.
Lemma 1. Given a simple polygon P, we can build a data structure of size O(n^3) in O(n^3 log n) time, so that for a query point p, the shortest path tree SPT(p) can be reported in O(log n + |T|) time, where |T| is the size of the part of the tree to be reported.
In Section 2.3 we will show how to improve the preprocessing time and space by a linear factor.
2.2 Computing the query version of WVP
In this section, we use the linear-time algorithm presented in Section 1.3 for computing the WVP in a simple polygon. This algorithm is not output sensitive by itself; see the example of Figure 4. As stated in Section 1.3, to compute WVP(pq), we first traverse SPT(p) using DFS and check the turn at every vertex of P. Consider a vertex v. As we traverse the shortest path tree, all the children of v must be checked, which can cost O(n) time. When we traverse SPT(q), however, a sub-polygon having v on its boundary will be cut off. Therefore, the time spent on processing the children of v in SPT(p) is redundant.
To achieve an output sensitive algorithm, we build the data structure explained in the previous section, so that the SPT of any point inside the polygon can be computed at query time. Also, we store some additional information about the vertices of the polygon in the preprocessing phase. We say that a vertex v of a simple polygon is left critical (LC for short) with respect to a point p, if the shortest path from p to v makes its first left turn at v. In other words, each shortest path from p to a non-LC vertex is a convex chain that makes only clockwise turns at its internal nodes. The critical state of a vertex is whether or not it is LC. If we have the critical state of all the vertices of the polygon with respect to a point p, we say that we have the critical information of p.
The idea is to modify the algorithm of Section 1.3 and make it output sensitive. The outline of the algorithm is as follows: In the first round, we traverse SPT(p) using DFS. At each vertex, we check whether this vertex is left critical with respect to q or not. If so, we are sure that the descendants of this vertex are not visible from pq, so we postpone its processing to the time it is reached from SPT(q), and check other branches of SPT(p). Otherwise, we proceed with the algorithm and check whether the path makes a right turn at this vertex. In the second round, we traverse SPT(q) and perform the normal procedure of the algorithm.
Lemma 2. All the vertices traversed in SPT(p) and SPT(q) are vertices of WVP(pq).
Proof. Assume that, when traversing SPT(p), we meet a vertex v that is not in WVP(pq), and let u be the parent of v in SPT(p). In this case, u or one of its ancestors must be LC with respect to q, otherwise the algorithm would detect v as a WVP vertex. Therefore, v cannot be met while traversing SPT(p). The same argument can be applied to SPT(q).
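The two-round traversal just described can be sketched as follows (a minimal sketch with invented names; the LC predicates are supplied as functions, and the turn tests of Section 1.3 are omitted for brevity):

```python
def traverse(spt_p, spt_q, root_p, root_q, is_lc_wrt_q, is_lc_wrt_p):
    """Two-round DFS: round 1 walks SPT(p) but postpones every subtree
    rooted at a vertex that is left critical w.r.t. q; round 2 walks
    SPT(q) symmetrically. Each spt_* maps a vertex to its children."""
    visited = []

    def dfs(tree, v, is_lc):
        visited.append(v)
        for child in tree.get(v, []):
            if not is_lc(child):        # postpone LC subtrees
                dfs(tree, child, is_lc)

    dfs(spt_p, root_p, is_lc_wrt_q)     # round 1
    dfs(spt_q, root_q, is_lc_wrt_p)     # round 2
    return visited
```

By Lemma 2, the vertices collected this way all belong to WVP(pq), which is what makes the traversal output sensitive.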
In the preprocessing phase, we compute the critical information of a point inside each visibility region, and assign this information to that region. At query time, upon receiving a line segment pq, we find the regions containing p and q. Using the critical information of these two regions, we can apply the algorithm and compute WVP(pq).
As there are O(n^3) regions in the visibility decomposition, O(n^4) space is needed to store the critical information of all the vertices. For each region, we compute the SPT of a point inside it, and by traversing the tree, we update the critical information of each vertex with respect to that region. For each region, we assign an array of size O(n) to store this information. We also build the structure described in Section 2.1 for computing SPTs, which takes O(n^3 log n) time and O(n^3) space. At query time, we locate the visibility regions of p and q in O(log n) time. As the processing time spent at each vertex is O(1), by Lemma 2, the query time is O(log n + |WVP(pq)|).
Lemma 3. Using O(n^4) time to preprocess a simple polygon and construct a data structure of size O(n^4), it is possible to report WVP(pq) in O(log n + |WVP(pq)|) time.
Until now, we assumed that pq is a polygonal edge. This can be generalized to any line segment pq inside P.
Lemma 4. Let pq be a line segment inside a simple polygon P. We can decompose P into two sub-polygons P_1 and P_2, such that each sub-polygon has pq as a polygonal edge. Furthermore, the critical information of P_1 and P_2 can be computed from the critical information of P.
Proof. We find the intersection points of the supporting line of pq with the boundary of P. Then, we split P into two simple polygons P_1 and P_2, both having pq as a polygonal edge. The visibility regions of P_1 and P_2 are subsets of the visibility regions of P. Therefore, we have the critical information and the SPT edges of these regions. The primary edges of SPT(p) and SPT(q) can also be divided into those in P_1 and those in P_2. See the example of Figure 5.
2.3 Improving the algorithm
In this section we improve the preprocessing cost of Lemma 3. To do this, we improve the parts of the algorithm of Section 2.2 that need O(n^4) preprocessing time and space. We show that it is sufficient to compute the critical information and the 1st type edges of only the sink regions (see Section 1.2 for the definition of the sink regions). For any query point p in a non-sink region, the 1st type edges of SPT(p) can be computed from the 1st type edges of the sink regions (Lemma 5). Also, the critical information of the other regions can be deduced from the critical information of the sink regions (Lemma 6). As there are O(n^2) sinks in a simple polygon, the preprocessing time and space of our algorithm are reduced to O(n^3 log n) and O(n^3), respectively.
At query time, if both p and q belong to sink regions, we have the critical information of their regions and we can proceed with the algorithm. On the other hand, if one of these points lies in a non-sink region, Lemmas 5 and 6 show that the secondary edges and the critical information of that point can be retrieved in O(|WVP(pq)|) time.
Lemma 5. Assume that, for a visibility region R, the 1st type secondary edges have been computed. For a neighboring region R' that shares a common edge with R, these edges can be updated in constant time.
Proof. When a view point crosses the common border of two neighboring regions, exactly one vertex v becomes visible or invisible to it (3). In Figure 6, for example, when the point crosses the border, a 1st type edge ending at v becomes a primary edge, and all the 2nd type edges attached to v become 1st type edges. We can see that no other vertex is affected by this movement. Processing these changes can be done in constant time, since it involves only the following: removing a secondary edge (the one ending at v), adding a primary edge (the one from the root to v), and moving an array pointer (the edges attached to v) from the 2nd type edges to the 1st type edges. Note that we know the exact positions of these elements in their corresponding lists. The only edge involved in these changes (i.e., the edge corresponding to the crossed critical constraint) can be identified in the preprocessing phase. Therefore, the time spent at query time is O(1).
Lemma 6. The critical information of a point can be maintained between two neighboring regions that share a common edge in constant time.
Proof. Suppose that we want to maintain the critical information of p, and p is crossing the critical constraint defined by u and v, where u and v are two reflex vertices of P. The only vertices directly affected by this change are u and v. Depending on the critical states of u and v w.r.t. p, four situations may occur (see Figure 11). In the first three cases, the critical state of v will not change. In the fourth case, however, the critical state of v will change. Before the crossing, the shortest path from p to v makes a left turn at u; therefore, both u and v are LC w.r.t. p. However, after the crossing, u is no longer on this path and v is no longer LC. This means that the critical state of all the children of v in SPT(p) may change as well.
To handle these cases, we modify the way the critical information of each vertex w.r.t. p is stored. At each vertex v, we store two additional values: the number of LC vertices met on the path from the root to v (including v itself), called its critical number, and its debit number, which is the amount that is to be propagated in the subtree of the vertex. If a vertex is LC, its critical number is greater than zero (see Figure 12). Also, if a vertex has a non-zero debit number, the critical numbers of all its descendants must be adjusted by this number. Computing and storing these additional numbers along with the critical information does not change the time and space requirements.
Now let us consider the fourth case in Figure 11. When v becomes visible to p, it is no longer LC w.r.t. p. Therefore, the critical number of v must be changed to zero, and the critical numbers of all the descendants of v in SPT(p) must be decreased by one. However, instead of changing the critical numbers of the descendants of v, we decrease the debit number of v by one, indicating that the critical numbers of its descendants in SPT(p) must be decreased by one. The actual propagation happens at query time when we traverse SPT(p). If p moves along the reverse path, i.e., when v becomes invisible to p, we handle the tree in the same way by adding one to the debit number of v and propagating this addition at query time.
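The critical/debit bookkeeping above is a form of lazy propagation and can be illustrated with a toy sketch (the class and all names are invented for this illustration; the real structure is stored per sink region):

```python
class CriticalInfo:
    """Critical numbers with lazy ('debit') propagation over an SPT.
    crit[v] counts the LC vertices on the path from the root to v
    (inclusive); debit[v] is a pending offset to be added to every
    descendant of v when the tree is traversed at query time."""

    def __init__(self, children, crit):
        self.children = children            # vertex -> list of children
        self.crit = dict(crit)              # vertex -> critical number
        self.debit = {v: 0 for v in crit}   # vertex -> pending offset

    def make_visible(self, v):
        """Fourth case of Figure 11: v becomes directly visible, so it
        is no longer LC; its descendants owe one fewer LC ancestor."""
        self.crit[v] = 0
        self.debit[v] -= 1

    def resolved(self, root):
        """Query-time DFS that pushes the accumulated debit downward
        and returns the effective critical number of every vertex."""
        out = {}

        def dfs(v, acc):
            out[v] = self.crit[v] + acc
            for c in self.children.get(v, []):
                dfs(c, acc + self.debit[v])

        dfs(root, 0)
        return out
```

The constant-time update only touches crit[v] and debit[v]; the O(size-of-output) propagation is deferred to the query-time traversal, matching the lemma's bounds.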
In the preprocessing phase, we build the directed dual graph of the visibility regions. In this graph, every node represents a visibility region, and an edge between two nodes corresponds to the gain of one vertex of the visibility sequence in one direction, and its loss in the other direction. We compute the critical information and the 1st type edges of all the sink regions. By Lemmas 5 and 6, any two neighboring regions have the same critical information and secondary edges, except at one vertex. We associate this vertex with the corresponding dual edge.
At query time, we locate the region containing the point p, and follow any path from this region to a sink. As each arc of the path represents a vertex that is visible to p, and therefore to pq, the number of arcs in the path is O(|WVP(pq)|). When traversing the path from the sink back to the region of p, we update the critical information and the secondary edges of the affected vertex in each region. At the original region, we obtain the critical information and the 1st type edges of p. We perform the same procedure for q. Having the critical information and the 1st type edges of p and q, we can compute WVP(pq) with the algorithm of Section 2.2. In general, we have the following theorem:
Theorem 7. A simple polygon P can be preprocessed in O(n^3 log n) time and O(n^3) space such that, given an arbitrary query line segment pq inside the polygon, WVP(pq) can be computed in O(log n + |WVP(pq)|) time.
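The sink-to-region walk used at query time can be sketched as follows (all names are invented for this sketch; `parent_arc` encodes, for each non-sink region, the dual arc toward a sink and the single vertex whose visibility that arc represents):

```python
def info_at(region, sinks_info, parent_arc):
    """Walk from `region` to a sink along the directed dual graph,
    then replay one single-vertex update per arc on the way back.
    `sinks_info` maps each sink region to its stored visibility data."""
    path = []
    r = region
    while r not in sinks_info:          # follow arcs until a sink is hit
        nxt, gained = parent_arc[r]
        path.append(gained)
        r = nxt
    info = set(sinks_info[r])           # data stored at the sink
    for v in reversed(path):            # one O(1) update per arc
        info.add(v)
    return info
```

Since every arc on the path corresponds to a vertex visible from the query region, the walk costs time proportional to the output, as in the theorem.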
3 Weak visibility queries in polygons with holes
In this section, we propose an algorithm for computing weak visibility polygons in polygonal domains. Let P be a polygon with h holes and a total of n vertices, and let pq be a query line segment. We use the idea presented in (12) and convert the non-simple polygon P into a simple polygon P'. Then, we use the algorithms for computing the WVP in simple polygons to compute a preliminary version of WVP(pq). With some additional work, we obtain the final WVP(pq).
A hole H can be eliminated by adding two cut-diagonals connecting a vertex of H to the boundary of P. By cutting along these diagonals, we obtain another polygon in which H lies on the boundary. We repeat this procedure for all the holes and produce a simple polygon P'.
Let L be the supporting line of pq. For simplicity, we assume that all the holes are on the same side of L. Otherwise, we can split the polygon along L and generate two sub-polygons that satisfy this requirement. To add the cut-diagonals, we select the nearest point of each hole to L, and perform a ray shooting query from that point in the direction away from L, to find the first intersection with a point of P (see Figure 13). This point can be a point on the border of P or a point on the border of another hole. We select the shot segment to be the cut-diagonal. Finding the nearest points of the holes can be done in O(n) time. Also, performing the ray shooting procedure for each hole can be done in O(log n) time, using a ray shooting structure built in the preprocessing phase. Therefore, adding the cut-diagonals can be done in a total time of O(n). The resulting simple polygon P' has n + O(h) vertices. As h is O(n), the number of vertices of P' is also O(n).
Having a simple polygon P', we compute WVP(pq) in P' by using the algorithm of Section 1.3. Next, we add the parts of the polygon that can be seen through the cut-diagonals. An example of the algorithm can be seen in Figure 14. First, we compute WVP(pq) in P'. Then, for each segment of the cut-diagonals that can be seen from pq, we recursively compute the parts of P that are visible from pq through that diagonal. This leads to the final WVP(pq).
3.1 Computing visibility through cut-diagonals
For computing WVP(pq), we must update the WVP computed in P' with the parts that are visible through the cut-diagonals. To do this, we define the partial weak visibility polygon. Suppose that a simple polygon Q is divided by a diagonal d into two parts, Q_1 and Q_2. For a line segment pq in Q_1, we define the partial weak visibility polygon to be the polygon WVP_Q(pq) ∩ Q_2. In other words, it is the portion of Q_2 that is weakly visible from pq through d. To compute it, one can compute WVP_Q(pq) by the algorithm of Section 1.3, and then report the part that lies in Q_2.
Lemma 8. Given a polygon Q and a diagonal d which cuts Q into two parts Q_1 and Q_2, for any query line segment pq in Q_1, the partial weak visibility polygon of pq through d can be computed in O(|Q|) time.
Lemma 8 only holds for simple polygons, but we use its idea in our algorithm. Assume that P has only one hole H and that this hole has been eliminated by the cut c. Let c' be another cut which lies on the supporting line of c and on the other side of H, such that one endpoint of c' is on the border of H and the other is on the border of P. We can also eliminate H by c' and obtain another simple polygon P''. Now Lemma 8 can be applied to the polygon P'' to answer partial weak visibility queries through the cut c. Following the terminology used by (12), we call this the partial weak visibility algorithm of the cut c.
By performing this algorithm once for each hole, and assuming that P has been cut into a simple polygon, we can extend the method to h > 1 holes. This leads to h data structures of total size O(nh) for storing the simple polygons on which Lemma 8 is applied. Using these data structures, we can find the parts of P that are visible from pq through the cut-diagonals.
3.2 The algorithm
We first add the cut-diagonals to make a simple polygon P'. Then, we compute the WVP of pq in P' and find the set of cut-diagonal segments that are visible from pq. If a segment of the cut-diagonal of a hole is visible from pq, we use Lemma 8 and replace that segment with the partial weak visibility polygon of pq through that segment. We continue this for every cut-diagonal segment that can be seen from pq. Due to the nature of visibility, this procedure terminates. If we have processed m segments of the cut-diagonals, we end up with O(m) simple polygons of size O(n). It can be easily shown that the union of these polygons is WVP(pq).
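A worklist view of this expansion step can be sketched as follows (names are invented for the sketch; `partial_wvp` stands in for the Lemma 8 computation, returning the region seen through a segment together with the cut-diagonal segments it newly exposes):

```python
from collections import deque

def expand_wvp(initial_segments, partial_wvp):
    """Replace every visible cut-diagonal segment by the region seen
    through it; newly exposed segments are enqueued in turn."""
    pieces, queue, seen = [], deque(initial_segments), set()
    while queue:
        seg = queue.popleft()
        if seg in seen:                 # handle each segment once
            continue
        seen.add(seg)
        region, new_segs = partial_wvp(seg)
        pieces.append(region)
        queue.extend(new_segs)
    return pieces
```

In the paper, termination follows from the visibility argument of Lemma 10 (a segment seen through another cannot see it back); the `seen` set plays that role in this toy sketch.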
Now let us analyze the running time of the algorithm. The cut-diagonals can be added in O(n) time. Computing the WVP in P' with the algorithm of Theorem 7 takes O(log n + |WVP(pq)|) time. In addition, for each segment of the cut-diagonals that appears in the WVP, we perform the algorithm of Lemma 8 in O(n) time. In general, we have the following lemma:
Lemma 9. The time needed to compute WVP(pq) as a set of O(m) simple polygons of size O(n) is O(mn), where m is the number of cut-diagonal segments that appear in the WVP during the algorithm.
Lemma 10. The upper bound of m is O(h^2), and this bound is tight.
Proof. We have selected the cut-diagonals in such a way that the query line segment pq does not intersect the supporting line of any of the cut-diagonals. Also, the cut-diagonals do not intersect each other. Therefore, if pq sees a cut-diagonal d_1 through another cut-diagonal d_2, then pq cannot see d_2 through d_1. Hence, the upper bound of m is O(h^2). Figure 15 shows a sample in which this bound is tight.
3.3 Improving the algorithm
In the algorithm of the previous section, we may perform the algorithm of Lemma 8 up to O(h) times for each hole, resulting in the high running time of O(nh^2). In this section, we show how to change this algorithm and improve the final result.
A vertex of the polygon can see the line segment pq directly or through the cut-diagonals. More precisely, a vertex can see up to O(h) parts of pq through different cut-diagonals. These parts can be categorized by the critical constraints that are tangent to the holes, pass through the vertex, and cut pq. The next lemma puts a limit on the number of these critical constraints.
Lemma 11. The number of the critical constraints that cut pq is O(nh'), where h' is the number of holes visible from pq.
Proof. Let n_i denote the number of vertices of the hole H_i. There are three kinds of constraints:
- For each vertex v that is not on the border of H_i and is visible from pq, there are at most two critical constraints through v that are tangent to H_i and cut pq. Therefore, the total number of these constraints is O(nh').
- The number of the critical constraints induced by two vertices of the same hole H_i that cut pq is O(n_i). We also have the sum of all n_i equal to O(n).
- The number of the critical constraints that cut pq and do not touch any hole is O(n) (3).
Putting these together, we can prove the lemma.
We preprocess the polygon so that, at query time, we can efficiently find the critical constraints that cut pq. There are O(n) critical constraints passing through each vertex of P. Therefore, the set of critical constraints can be computed in O(n^2) time and space. As the critical constraints passing through a vertex can be treated as a simple polygon (see Figure 16), we build a ray shooting data structure for each vertex in O(n) time and space, so that the ray shooting queries can be answered in O(log n) time. At query time, we find the critical constraints of each vertex v that cut pq in O((k_v + 1) log n) time, or in total time of O((k + n) log n) for all the vertices. Here k_v is the number of constraints that pass through v and cut pq, and k is the sum of the k_v.
By performing an angular sweep through these lines around each vertex, we can find the visible parts of pq and the visible cut-diagonals from that vertex in O((k_v + 1) log n) time. We store these parts at the vertex, indexed by the visible cut-diagonal of each part. Performing this procedure for all the vertices of P, including the vertices of the holes, and storing the visible parts of pq at each vertex can be done in O((k + n) log n) time and O(k + n) space. So, we have the following lemma:
Lemma 12. Given a polygonal domain with h disjoint holes and a total of n vertices, it can be preprocessed into a structure of O(n^2) space in O(n^2 log n) preprocessing time so that, for any query line segment pq, the critical constraints that cut pq can be computed and sorted in O((k + n) log n) time, where k = O(nh').
In the rest of the paper we show that these critical constraints form an arrangement that can be used to compute WVP(pq).
We define WVP_0(pq) to be the part of WVP(pq) that can be seen directly from pq. Let c_i be the cut-diagonal of the hole H_i. We define WVP_i(pq) to be the part of WVP(pq) that can be seen through c_i. It is clear that WVP(pq) is the union of all WVP_i(pq), for i >= 0.
Now, we show how to compute WVP_i(pq). First notice that H_i lies in the upper half plane of L. Let P_i be the part of P that is above c_i. As pq can see P_i through different parts of c_i, WVP_i(pq) may not be a simple polygon.
Let S_i be the set of critical constraints originating from the vertices of P_i that can see pq and directly cut pq, plus the critical constraints that can see pq, hit the border of P_i, and cut pq just before they hit c_i. Each critical constraint is distinguished by one or two reflex vertices; we call each of these vertices an anchor of the critical constraint. Also, each of these critical constraints may cut the border of P_i at most twice. Let T_i be the set of segments on the border of P_i that result from these cuts. It is clear that |T_i| = O(|S_i|).
Let A_i = S_i ∪ T_i, and let Arr(A_i) be the arrangement induced by the segments of A_i. We show that Arr(A_i) partitions P_i into visible and invisible regions.
Lemma 13. For each point x of P_i that is visible from pq, there is a segment in S_i that can be rotated around its anchor until it hits x, while remaining visible from pq.
Proof. As x is visible from pq, it must be visible from some point w of pq such that the segment wx cuts c_i (see Figure 18). We rotate the segment counterclockwise about x until it hits some vertex u. Notice that the case u = x is possible and does not require separate treatment. Next, we rotate the segment clockwise about u until it hits another vertex. We continue the rotations until the segment reaches one of the endpoints of pq, or the lower part of the segment hits a point of the polygon, or the segment reaches the endpoint q. Let u' be the last vertex that the segment hits on its upper part. As we only rotate the segment clockwise, this procedure will end. It is clear that the resulting segment lies on a critical constraint in S_i, and we can reach the point x by rotating it counterclockwise about u'.
Lemma 14. All the points of a cell in Arr(A_i) have the same visibility status w.r.t. pq.
Proof. Suppose that the points x and y are in the same cell C, and that x is visible and y is invisible from pq. Let xy be the line segment connecting x and y, and let z be the nearest point to x on xy that is invisible from pq. According to Lemma 13, there is a segment s with a vertex a as its anchor such that if we rotate s around a, it will hit x. We continue to rotate s until it hits z. As z is invisible from pq, the segment through z must lie on a critical constraint. This means that we have another critical constraint from a vertex that sees pq, and it crosses the cell C. Thus, the assumption that C is a cell in Arr(A_i) is contradicted.
To compute the final WVP(pq), we have to compute the union of all WVP_i(pq). WVP_0(pq) is a simple polygon of size O(n), which can be represented by O(n) line segments. Also, each WVP_i(pq) can be represented by the arrangement of O(k_i) line segments, where k_i is the number of critical constraints in S_i. It can be easily shown that the sum of all k_i is O(k). Therefore, WVP(pq) can be represented as the arrangement of O(k + n) line segments.
In the next section, we consider the problem of computing the boundary of WVP(pq).
3.4 Computing the boundary of WVP(pq)
We showed how to output WVP(pq) as an arrangement of O(k + n) line segments. Here, we show that WVP(pq) can be output as a polygon in O((k + n) log n + |WVP(pq)|) time.
Balaban (2) showed that, using a data structure of size O(N), one can report the intersections of N line segments in O(N log N + I) time, where I is the number of intersections. This algorithm is optimal, because at least Ω(N log N + I) time is needed to report the intersections. Here, we have O(k + n) line segments, so reporting all the intersections needs O((k + n) log n + |WVP(pq)|) time and O(k + n) space. Within the same running time, we can classify the edge fragments by using the method of Margalit and Knott (7), while reporting the line segment intersections. We can summarize this in the following theorem:
Theorem 15. A polygonal domain with h disjoint holes and a total of n vertices can be preprocessed in O(n^2 log n) time to build a data structure of size O(n^2), so that the weak visibility polygon of an arbitrary query line segment pq within the domain can be computed in O((k + n) log n + |WVP(pq)|) time and O(k + n + |WVP(pq)|) space, where |WVP(pq)| is the size of the output, which is O(n^4) in the worst case, k is an output sensitive parameter of at most O(nh'), and h' is the number of holes visible from pq.
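The quantity I (the number of segment intersections) that drives the bound above can be illustrated with a simple quadratic stand-in for Balaban's algorithm (illustrative only; the actual query uses the O(N log N + I) reporting algorithm):

```python
def seg_intersections(segments):
    """Count pairwise intersections among 2D segments, each given as a
    pair of endpoints. A brute-force O(N^2) stand-in for Balaban's
    O(N log N + I) algorithm, used only to illustrate the quantity I."""
    def orient(a, b, c):
        return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])

    def on_seg(a, b, c):  # c collinear with ab: is c within the box?
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    def intersects(s, t):
        (a, b), (c, d) = s, t
        o1, o2 = orient(a, b, c), orient(a, b, d)
        o3, o4 = orient(c, d, a), orient(c, d, b)
        if ((o1 > 0) != (o2 > 0) and (o3 > 0) != (o4 > 0)
                and 0 not in (o1, o2, o3, o4)):
            return True                  # proper crossing
        for (u, v, w) in ((a, b, c), (a, b, d), (c, d, a), (c, d, b)):
            if orient(u, v, w) == 0 and on_seg(u, v, w):
                return True              # endpoint touches the other segment
        return False

    count = 0
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            if intersects(segments[i], segments[j]):
                count += 1
    return count
```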
4 Conclusion
We considered the problem of computing the weak visibility polygon of line segments in simple polygons and polygonal domains. In the first part of the paper, we presented an algorithm to report WVP(pq) of any line segment pq in a simple polygon of size n in O(log n + |WVP(pq)|) time, by spending O(n^3 log n) time preprocessing the polygon and maintaining a data structure of size O(n^3).
In the second part of the paper, we considered the same problem in polygons with holes. We presented an algorithm to compute WVP(pq) of any pq in a polygon with h pairwise-disjoint polygonal obstacles and a total of n vertices in O((k + n) log n + |WVP(pq)|) time, by spending O(n^2 log n) time preprocessing the polygon and maintaining a data structure of size O(n^2). The factor k is an output sensitive parameter of size at most O(nh'), where h' is the number of holes visible from pq, and |WVP(pq)| is the size of the output.
- B. Aronov, L. J. Guibas, M. Teichmann and L. Zhang. Visibility queries and maintenance in simple polygons. Discrete and Computational Geometry, 27(4):461-483, 2002.
- I.J. Balaban. An optimal algorithm for finding segment intersections. In Proc. 11th Annu. ACM Sympos. Comput. Geom., pages 211-219, 1995.
- P. Bose, A. Lubiw, and J. I. Munro. Efficient visibility queries in simple polygons. Computational Geometry: Theory and Applications, 23(3):313-335, 2002.
- S. K. Ghosh. Visibility Algorithms in the Plane. Cambridge University Press, New York, NY, USA, 2007.
- D. Z. Chen and H. Wang. Weak visibility queries of line segments in simple polygons. In Proc. 23rd International Symposium on Algorithms and Computation (ISAAC), pages 609-618, 2012.
- L. J. Guibas, J. Hershberger, D. Leven, M. Sharir, and R. E. Tarjan. Linear time algorithms for visibility and shortest path problems inside triangulated simple polygons. Algorithmica, 2:209-233, 1987.
- A. Margalit and G.D. Knott. An algorithm for computing the union, intersection or difference of two polygons. Comput. & Graph., 13:167-183, 1989.
- M. Nouri Bygi and M. Ghodsi. Weak visibility queries in simple polygons. In Proc. 23rd Canad. Conf. Comput. Geom., 2011.
- M. Nouri Bygi and M. Ghodsi. Near optimal line segment weak visibility queries in simple polygons. CoRR, abs/1309.7803, 2013.
- S. Suri and J. O'Rourke. Worst-case optimal algorithms for constructing visibility polygons with holes. In Proc. 2nd Annu. ACM Sympos. Comput. Geom., pages 14-23, 1986.
- G. T. Toussaint. A linear-time algorithm for solving the strong hidden-line problem in a simple polygon. Pattern Recognition Letters, 4:449-451, 1986.
- A. Zarei and M. Ghodsi. Efficient computation of query point visibility in polygons with holes. In Proc. of the 21st Symp. on Comp. Geom., pages 314-320, 2005.