Planar Visibility Counting^{†}^{†}thanks: Supported
by DFG projects Me872/121 within SPP 1307
and by Zi1009/12.
The last author is grateful to
(né Otfried Schwarzkopf)
for the opportunity to visit KAIST.
Abstract
For a fixed virtual scene (=collection of simplices) and given observer position , how many elements of are weakly visible (i.e. not fully occluded by others) from ? The present work explores the trade-off between query time and preprocessing space for these quantities in 2D: exactly, in the approximate deterministic, and in the probabilistic sense. We deduce the existence of an space data structure for that, given and time , allows one to approximate the ratio of occluded segments up to an arbitrary constant absolute error; here denotes the size of the Visibility Graph—which may be quadratic, but typically is just linear in the size of the scene . On the other hand, we present a data structure constructible in preprocessing time and space with similar approximation properties and query time , where is an arbitrary parameter. We describe an implementation of this approach and demonstrate the practical benefit of this parameter for trading memory for query time in an empirical evaluation on three classes of benchmark scenes.
1 Motivation and Introduction
Back in the early days of computer graphics, hidden surface removal (and visible surface calculation) was a serious computational problem: for a fixed virtual 3D scene and given observer position, (partition and) select those scene primitives which are (at least partly) visible to the observer. Because of its importance, this problem has received considerable scientific attention, with many suggestions of deep combinatorial and geometric algorithms for its efficient solution. The situation changed entirely when the (rather unsophisticated) z-buffer algorithm became available in common consumer graphics cards: with direct hardware support and massive parallelism (one gate per pixel), it easily outperforms software-based approaches with their (usually huge factors hidden in) asymptotic big-Oh running times [McKe87]. For a fixed resolution, the z-buffer can render scenes of triangles online in time essentially linear in with a small constant. However, even this may be too slow in order to visualize virtual worlds consisting of several hundreds of millions of triangles at interactive frame rates. Computer graphics literature is filled with suggestions of how to circumvent this problem; for example by approximating (in some intuitive, informal sense) the observer’s views. Here, the benefit of a new algorithm is traditionally demonstrated by evaluating it, and comparing it to some previous ‘standard’ algorithm, on a few ‘standard’ benchmark scenes and on selected hardware. We on the other hand are interested in algorithms with provable properties, and to this end restrict to
1.1 Conservative Occlusion Culling
Definition 1
Objects which are hidden from the observer behind (possibly a collection of) other objects may, but need not, be filtered from the stream sent to the rendering hardware, whereas any at least partially visible object must be visualized.
Here, “conservative” reflects that the rendering algorithm must not affect the visual appearance compared to the brute-force approach of sending all objects to the hardware. Occlusion culling can speed up the visualization particularly of very large scenes (e.g. virtual worlds as in Second Life or World of Warcraft) where, composed from literally billions of triangles, typically ‘just’ some few millions are actually visible at any instant. Other scenes, or viewpoints within a scene, admit no sensible occlusion; for instance the leaves of a virtual forest naturally do not fully screen sight to the sun or to each other, and similarly for CAD scenes of lattice or similar constructions. In such cases, spending computational effort on occlusion culling is futile and actually leads to a net performance loss. Between those extremes, and particularly for an observer moving between occluded and free parts of a large scene, the algorithmic overhead of more or less thoroughly filtering out hidden primitives generally trades off against the benefit of reduced rendering complexity. Put differently: graphics hardware taking care of the visibility problem anyway opens the chance to hybridize with software performing either coarse (and quick) or careful (and slow) culling and leaving the rest to the z-buffer.
1.2 Adaptive Occlusion Culling
It is our purpose to explore this trade-off and to make algorithms adapt to each specific virtual scene and observer position in order to exploit it in a well-defined and predictable way. To this end we propose so-called visibility counts (the number of primitives weakly visible from a given observer position) as a quantitative measure of how densely occluded a rendering frame is, and of whether, and by how much, occlusion culling can therefore be expected to pay off. For technical reasons employed in Sections 2.6 and later, the formal notion slightly more generally captures the visibility of ‘target’ scenes through ‘occluder’ scenes:
Definition 2 (visibility count)
For a scene of ‘geometric primitives’ , a subset of ‘targets’ , and an observer position , let
and denote by the number of objects in weakly visible (i.e. not fully occluded) from through . Here, denotes the (relatively open) straight line segment connecting and .
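The point-to-point visibility underlying Definition 2 can be made concrete in a few lines. The following Python sketch is illustrative only (not an algorithm from this paper) and assumes nondegenerate position, so that the "relatively open" sight segment only needs to be tested for proper crossings with the occluders:

```python
def segments_cross(p, q, a, b):
    """True iff the open segment pq properly crosses the segment ab
    (shared endpoints and collinear touches are ignored: the sight
    segment of Definition 2 is relatively open)."""
    def orient(u, v, w):
        # twice the signed area of triangle (u, v, w)
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, occluders):
    """Is point q visible from observer p through the occluder
    segments, in the geometric sense of Definition 2?"""
    return not any(segments_cross(p, q, a, b) for (a, b) in occluders)
```

Weak visibility of a whole target segment then asks whether *some* point on it passes this test.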
For scenes and observer positions with , occlusion culling is likely to pay off; whereas for it is not. Quantitatively we have the following
Hypothesis 1.1
Each culling algorithm can be assigned a threshold function such that, for scenes and observer positions with visibility ratios (significantly) beyond , it yields a net rendering benefit and (significantly) below does not.
1.3 Combinatorial Geometry and Randomized Computation
Adaptivity constitutes an important issue in Computational Geometry; for instance in the context of Range Searching problems whose running time is preferably output-sensitive, i.e. of the form where denotes the overall number of objects and those that are actually reported; compare [Chaz86].
Adaptivity is of course a big topic in computer graphics as well. However this entire field, driven by the impetus to quickly visualize (e.g. at 20fps) concrete scenes in newest interactive video games, generally focuses on innovative heuristics and techniques at a tremendous pace. We on the other hand are interested in algorithms with provable properties based on formal and sound analyses and in particular with respect to welldefined measures of adaptivity. This of course calls for an application of computational and combinatorial geometry [BKOS97, Edel87].
Paradigm 1.2 (Computational Geometry in Computer Graphics)
For interactive visualization of very large virtual scenes of size , algorithms must run in sublinear time , , using preprocessed data structures of almost linear space , , provably!
Here (time and) space complexity refers to the number of (operations on) unit-size real coordinates used (performed) by an algorithm—as opposed to, e.g., rationals of varying bit-length (see however Item d) in Section 5 below). Also, visibility is considered in the geometric sense (as opposed to e.g. pixel-based notions): point is visible from observer position if both can be connected by an ideal light ray (=straight-line segment not intersecting any other part of the scene); recall Definition 2. Our algorithm features a parameter to trade preprocessing space for query time.
Randomized algorithms are quite common in computer science for their efficiency and implementation simplicity. They have also entered the field of computer graphics. Here these techniques are employed to render only a small random sample of the (typically very large) scene in such a way that it appears similar to the entire scene [WFP*01, KKF*04, WW*06]. Our goal, on the other hand, is to approximate the count of visible objects (Definition 2), not their appearance.
1.4 Visibility
Visibility comprises a highly active field of research, both heuristically and in the sound framework of computational geometry [CCSD03]. Particularly the latter has proven combinatorially and algorithmically nontrivial already in the plane [ORou87, Ghos07]. Here the case of (simple) polygons is well studied [CAF07]; and so is point–point, point–segment, and segment–segment visibility for scenes of noncrossing line segments, captured e.g. in the Visibility Graph data structure [GhMo91]. Its nodes correspond either to segments or to segment endpoints (or to both: a bipartite graph); and two nodes get joined by an edge if one can partly see the other. Weak segment–segment visibility for instance amounts to the questions (namely, for each pair of segments and ) of whether there exist points and such that is visible from .
We, too, ask for weak segment visibility; however in our case the observer is not restricted to positions on segments of the scene but may move freely between them. For instance we shall want to efficiently calculate visibility counts for singleton targets :
Problem 1
Fix a collection of non-intersecting segments in the plane and one further segment . Preprocess into an almost linear (or merely worst-case subquadratic) size data structure such as to decide in sublinear time queries of the following type: Given , is (partly) visible through ?
1.5 Overview
An empirical verification of Hypothesis 1.1 in dimensions 2, , and , is the subject of a separate work [FJZ09]. Our aim here is to explore the complexity of calculating the visibility counts, thus providing rendering algorithms with the information for deciding whether to cull or not. In view of the large virtual scenes and the high frame rates required by applications, we have to consider both computational resources, query time and preprocessing space, simultaneously. Section 2 focuses on the problem of calculating visibility counts exactly, mostly based on the Visibility Space Partition; our main result here is a preprocessing algorithm with output-sensitive running time. Section 3 weakens the problem to approximate calculations: Section 3.1 first shows the existence of a rather small data structure with logarithmic query time. However, this data structure seems hard to construct in reasonable time, so Sections 3.2ff consider approaches based on random sampling. Section 4 describes an implementation and evaluation of this algorithm.
2 Exact Visibility Counting
This section recalls combinatorial worst-case approaches for calculating visibility counts according to Definition 2. Many efficient algorithms are known for visibility reporting problems, that is, for determining the view of an observer [Pocc90]; however, since reporting may involve output of linear size, such approaches are generally inappropriate for our goal of counting in sublinear time. On the other hand, logarithmic time becomes easily feasible when permitting quartic space in the worst case, based on the Visibility Space Partition (VSP). The main result of this section, Theorem 2.2, yields an output-sensitive time algorithm for computing the VSP of a given set of line segments in the plane.
2.1 Reverse Painter’s Algorithm
Prior to the hardware z-buffer, Painter’s Algorithm was sometimes considered as a means of hidden surface elimination (at least in the 2D case): draw all objects in back-to-front order, thus making closer ones paint over (and thus correctly cover) those further away. This of course relies on being able to efficiently find such an order: which is easily seen to be impossible in general unless we ‘cut’ some objects. Now two-dimensional BSP Trees provide a means to find such an order and a way to cut objects appropriately without increasing the overall size too much. We report from [BKOS97, Section 12]:
Fact 2.1
Given a collection of noncrossing line segments in the plane, a BSP Tree of can be constructed in time and space .
Now instead of drawing the cut segments in back-to-front order (relative to the observer), feeding them into an Interval Tree in front-to-back order reveals exactly which of them are weakly visible and which are not. Since insertion into an Interval Tree of size takes time , we conclude [TeSe91]:
Lemma 1
Given a collection of noncrossing line segments in the plane and an observer position , can be calculated in time and space .
Notice that preprocessing into a BSP Tree accelerates the running time ‘only’ by a constant factor.
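The Interval Tree step of Lemma 1 can be mimicked by a plain sorted-interval union (asymptotically slower than a balanced tree, but illustrative). This sketch assumes the cut segments are already supplied in front-to-back order, e.g. by a BSP traversal, and that their angular extents around the observer are given as non-wrapping intervals; a segment is weakly visible iff its interval is not already fully covered on arrival:

```python
def add_interval(covered, lo, hi):
    """Insert [lo, hi] into a sorted list of disjoint intervals,
    merging any overlaps."""
    out = []
    for a, b in covered:
        if b < lo or a > hi:
            out.append((a, b))          # disjoint: keep as-is
        else:
            lo, hi = min(lo, a), max(hi, b)  # overlap: absorb
    out.append((lo, hi))
    return sorted(out)

def fully_covered(covered, lo, hi):
    """Since `covered` is a union of disjoint intervals, [lo, hi] is
    covered iff it fits inside a single one of them."""
    return any(a <= lo and hi <= b for a, b in covered)

def sweep_cover(intervals_front_to_back):
    """Return the indices of angular intervals that are at least partly
    uncovered when processed front-to-back: the weakly visible ones."""
    covered, visible = [], []
    for i, (lo, hi) in enumerate(intervals_front_to_back):
        if not fully_covered(covered, lo, hi):
            visible.append(i)
        covered = add_interval(covered, lo, hi)
    return visible
```

With a balanced structure instead of the list, each step drops to logarithmic time, matching the bound quoted above.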
2.2 Rotational Sweep
One can improve Lemma 1 by a logarithmic factor:
Lemma 2
Given a collection of noncrossing line segments in the plane and an observer position , can be calculated in time and space .
Proof
(Sketch) First mark all segments invisible. Then consider the endpoints of in angular order around while keeping track of the order of the segments according to their proximity to the observer, the closest one thus being visible: whenever a new segment starts, insert it into an appropriate data structure in time ; whenever one ends, remove it. Since the initial sorting also takes time , we remain within the claimed bounds. ∎
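A naive quadratic-time variant of this sweep is easy to state: between two consecutive endpoint directions, the frontmost segment is constant, so probing one ray per angular interval identifies exactly the weakly visible segments (assuming nondegenerate position). A Python sketch, replacing the balanced status structure by brute-force ray shooting:

```python
import math

def ray_segment_t(p, d, a, b):
    """Distance parameter t > 0 where the ray p + t*d hits segment ab,
    or None if it misses."""
    ex, ey = b[0] - a[0], b[1] - a[1]
    denom = d[0] * ey - d[1] * ex
    if abs(denom) < 1e-12:
        return None                      # ray parallel to the segment
    t = ((a[0] - p[0]) * ey - (a[1] - p[1]) * ex) / denom
    u = ((a[0] - p[0]) * d[1] - (a[1] - p[1]) * d[0]) / denom
    return t if t > 1e-9 and -1e-9 <= u <= 1 + 1e-9 else None

def visible_segments(p, segments):
    """Indices of segments weakly visible from p: probe one ray per
    angular interval between consecutive endpoint directions."""
    angles = sorted({math.atan2(q[1] - p[1], q[0] - p[0])
                     for s in segments for q in s})
    visible = set()
    m = len(angles)
    for i in range(m):
        gap = (angles[(i + 1) % m] - angles[i]) % (2 * math.pi)
        mid = angles[i] + gap / 2        # direction strictly inside the interval
        d = (math.cos(mid), math.sin(mid))
        hits = [(t, j) for j, (a, b) in enumerate(segments)
                if (t := ray_segment_t(p, d, a, b)) is not None]
        if hits:
            visible.add(min(hits)[1])    # frontmost segment on this probe
    return visible
```

The proper rotational sweep of Lemma 2 replaces the inner loop by a balanced tree ordered by proximity, updated at each endpoint event.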
2.3 Visibility Space Partition
Lemmas 1 and 2 work without any, and do not benefit asymptotically from, preprocessing of the fixed scene . On the other hand, by the so-called locus approach—storing all visibility counts in a Visibility Space Partition (VSP)—they can later be recovered in logarithmic running time [Schi01]:
Lemma 3

For a collection of noncrossing line segments in the plane, there exists a partition of into convex cells such that, for all observer positions within one cell , is the same.

The data structure indicated in a) and including for each cell its corresponding visibility count uses storage and can be computed in time . Then given an observer position , its corresponding cell , and the associated visibility count, can be identified in time .

When charging only real data and operations (more specifically: if a bit string is considered to occupy one memory cell and the union of two of them to be computable within one step), the above data structure including for each cell its visibility uses storage and can be calculated in time .

Item a) extends to the case of simplices in dimensional space in that the number of convex cells with equivalent observer visibility can be bounded by .
Proof

Draw lines through all pairs of the segment endpoints. It is easy to see that, in order for a near segment to appear in sight, the observer has to cross one of these lines; compare Lemma 5 below. Hence, within each of the cells they induce, the subset of segments weakly visible remains the same; compare Figure 1.

lines induce an arrangement of overall complexity, and can be constructed in time [BKOS97, Section 8.3]. The visibility number associated with each cell is bounded by and hence can be stored using bits; its calculation according to Lemma 2 takes time each. Finally, the planar subdivision induced by the edges of the arrangement can be turned into a data structure supporting point location in [BKOS97, Theorem 6.8].

In the proof to b), constructing the arrangement (i.e. the planar partition into cells) and determining the visibility count of each cell were two separate steps which we now merge using divideandconquer: In the first phase calculate the VSP of the first two segments of , then that of the next, and so on; in each VSP store, for each cell, the visibility vector, i.e. the 0/1 bitstring recording which segments from are visible (1) and which are not (0). In the next phase overlay the first two VSPs of two segments into one of the first four segments, and store for each refined cell the union of the 0/1 bitstrings: thus keeping track of its visibility; similarly for the next two VSPs of the next four segments. Then proceed to VSPs of eight segments each; and so on. We therefore have phases; and, according to [BKOS97, Theorem 2.6], the last (as well as each previous) phase takes time .

Similarly to the proof of a), consider all vertices of the simplices. Any tuple of them induces a hyperplane; and a change in sight requires the observer to cross some of these hyperplanes. hyperplanes in space induce an arrangement of complexity [Edel87]. ∎
2.4 Size of Visibility Space Partitions
Lemma 3a+b) bounds the size of the VSP data structure by order . It turns out that this bound is sharp in the worst case—but not for many ‘realistic’ examples. This is due to many of the lines employed in the proof of Lemma 3a) inducing unnecessarily fine subdivisions of viewpoint space. In Figure 4a) for instance, the dotted parts are dispensable.
In order to avoid trivialities, we want to restrict to nondegenerate segment configurations . However, this notion is subtle because the lines defining the VSP are typically degenerate: many (more than two) of them meet in one common (segment end)point.
Definition 3
A family of segments in the plane is nondegenerate if

any two segments meet only in their common endpoints.

No three endpoints share a common line;

Any two lines, defined by pairs of endpoints, do meet.
We have already referred to (and implicitly employed in Lemma 3 a refinement of) the Visibility Space Partition; so here finally comes the formal
Definition 4
For two nondegenerate collections and of segments in the plane, partition all viewpoints into classes having equal visibility . Moreover let denote the collection of connected components of these equivalence classes. The size of is the number of line segments forming the boundaries of these components.
Observe that indeed constitutes a planar subdivision: a coarsening of the convex polygons induced by the arrangement of lines from the proof of Lemma 3a). In fact a class of viewpoints of equal visibility can be disconnected and delimited by very many segments; hence merely counting the number of classes or cells does not reflect the combinatorial complexity. Lemma 3a) and Lemma 4a) correspond to [Mato02, Exercise 6.1.7].
Lemma 4

Even for a singleton target , there exists a nondegenerate line segment configuration such that has separate connected components.

For each , there exists a nondegenerate configuration of at least segments admitting a convex planar subdivision of complexity such that, from within each cell, the view to is constant; i.e. has linear size.

The size of is at most quadratic in the size of the Visibility Graph of (recall Section 1.4).

A data structure as in Lemma 3 can be calculated in time and space .
Since the Visibility Graph itself can have at most quadratically more edges than vertices, Item c) strengthens Lemma 3a). Empirically we have found that a ‘random’ scene typically induces a VSP of roughly quadratic size. This agrees with a ‘typical’ scene having a linear-size Visibility Graph according to [ELPZ07].
Proof (Lemma 4)

Figure 2 is a small modification of [ORou87, Fig. 8.13]. The long bottom line segment is visible from the upper half iff the observer can peep through two successive gaps simultaneously, i.e. from any position on the stripes but not from the ellipses. When moving from an ellipse to another, flashes into sight and is then hidden again. There are such ellipses. It is easy to see that this example is combinatorially stable under small perturbation and hence can be made nondegenerate.

Consider Figure 4: Within each cell of the segment arrangement, all segments are always visible; hence is also a cell of the VSP. And the exterior of gives rise to another 9 VSP cells.

Recall the proof of Lemma 3a); but throw in the observation that crossing the line induced by two endpoints and (say, of segments , respectively) does not induce a change in visibility if and are occluded from each other by some further segment : compare Figure 4. Hence it suffices to consider at most as many lines as the number of edges in the Visibility Graph; and these induce an arrangement of at most quadratic complexity .
2.5 OutputSensitive VSP Calculation
In view of the large variation of VSP sizes from order to order according to Lemma 4, the algorithms indicated in Lemma 3b+c) for their calculation are reasonable only in the case of large VSPs. We now present an output-sensitive improvement of Lemma 4d):
Theorem 2.2
Proof
We start as in the proof of Lemma 3 with the order lines induced by all pairs of segment endpoints. Now the idea is to extend Lemma 4c), namely to take into consideration only those parts the lines are cut into whose crossing actually changes the visibility. Indeed, these sublines constitute the boundaries of the cells of the VSP and therefore determine its complexity.

Take such a line passing through segment endpoints and of segments and . To an observer crossing , the visibility can, but need not, change—and we want to determine if and where it does. First observe that the middle part of can be disposed of right away (unless and are endpoints of the same segment, but these cases give rise to only combinatorial complexity anyway) because crossing it never changes the visibility; compare Figure 4.
Now consider the two remaining unbounded rays of , starting from and starting from . If, say, intersects some other segment in some point , then traversing the part of beyond that point does not affect the visibility either, as is ‘shielded’ from sight by anyway; again cf. Figure 4. So let in this case, otherwise; and similarly for .
Now crossing at some point may or may not alter the visibility of (at least one of) (the latter being a segment ‘opposite’ to along ) but if it does so, then it does so at every point of . Hence we will either keep the whole , or drop it entirely; similarly for . 
Now since those two alternatives—namely keeping or dropping —depend only on , they can be distinguished in constant time. Moreover, after preprocessing time and space for , each can be decomposed into the two parts and as the result of a ray shooting query among in logarithmic time; see e.g. [Pocc90, Theorem 3.2].
The line parts kept will in general intersect each other. So next cut them into nonintersecting maximal subsegments. By the above observations, these constitute the boundaries of the VSP. And as a standard segment intersection problem, they can be determined in time ; cf. e.g. [BKOS97, Section 2.5].
The resulting (sub)segments give rise to a planar subdivision. For instance they cannot contain leaf (i.e. degree-1) vertices: circling around such a vertex one way would change the visibility and the other way would not. Therefore the data structure admitting logarithmic-time point location in the VSP can be calculated in space and time ; recall [BKOS97, Theorem 6.8].
Determining the visibilities as in the proof of Lemma 3b) yields a factor overhead; and the divide-and-conquer approach of Lemma 3c) seems inapplicable because of the correlations between segments in Step ii), namely cutting off induced by at the first further segment hit. On the other hand, each (and similarly for ) by construction induces a definite change in visibility when crossed: we may presume this information to have been stored with at the beginning of Step ii). Hence we may start at one arbitrary cell of the arrangement, calculate its visibility according to Lemma 1, and then traverse the rest of the arrangement cell by cell while keeping track of the visibility changes induced by (and stored with) each cell boundary. ∎
2.6 Visibility of One Single Target: Trading Time for Space
The query time obtained in Lemma 3 is very fast: logarithmic (i.e. optimal) where, according to Paradigm 1.2, sublinear suffices. Quite intuitively it should be possible to reduce the memory consumption at the expense of increasing the time bound. We achieve this for the case of one target, that is, the decision version of visibility :
Theorem 2.3
For each , Problem 1 can be solved, after time and space preprocessing, within query time .
Such a trade-off result from time to space has become famous in the general context of structural complexity [HPV77]. Note that, obeying sublinear time, we can get arbitrarily close to cubic space—yet remain far from the joint resource consumption of an interval tree (Section 2.1). But first comes the already announced
Lemma 5
Fix a collection of noncrossing segments in the plane. Let denote the lines induced by the pairs of endpoints of segments in . For an observer moving in the plane, the weak visibility of can change only as she crosses

either one of the lines intersecting

or some line supporting a segment ;
compare Figure 5.
Proof
Standard continuity argument: Let denote the observer’s position and suppose point is visible, i.e. the segment does not intersect . Now move until is just about to become hidden behind . Then start moving on such as to remain visible. Keep moving and adjusting : this is possible (at least) as long as the line through and avoids all endpoints of . ∎
Proof (Theorem 2.3)
Consider, as in the proof of Lemma 3, the lines induced by pairs of segment endpoints of . Consider the intersections of these lines with (if any). Partition into subsegments , each intersecting of the above lines. For each piece , take the arrangement of size induced by those lines intersecting , and all lines through one endpoint of and one of some , and all lines supporting segments from . By Lemma 5, within each cell of , the weak visibility of is constant (either yes or no) and can be stored with : Doing so for each () and each of the cells of uses memory of order as claimed; and corresponding time according to Lemma 3c).
Then, given a query point , locating in each arrangement takes total time ; and yields the answer to whether is weakly visible from or not. Now itself is of course visible iff some is: a disjunction computable in another steps. ∎
Scholium 2.4
For each , Problem 1 can be solved, after preprocessing in time into an size data structure, within query time where denotes the size (number of edges) of the Visibility Graph of .
Proof
Instead of considering, and partitioning into groups, all lines induced by pairs of segment endpoints, do so only for the lines induced by pairs of segment endpoints visible to each other. ∎
3 Approximate Visibility Counting
Lacking deterministic exact algorithms for calculating visibility counts satisfying both time and space requirements, we now resort to approximations: of up to prescribable absolute error or, equivalently, of the visibility ratio up to absolute error ; recall Hypothesis 1.1.
Remark 1
Relative errors make no sense as there is always a viewpoint with .
Corollary 2, the main result of this section, presents a randomized approximation within sublinear time using almost cubic space in the worst case and almost linear space in the ‘typical’ one.
3.1 Deterministic Approach: Relaxed VSPs
Visibility space partitions, and the algorithms based upon them, are so memory-expensive because they discriminate (i.e. introduce separate arrangement cells for) observer positions whose visibility differs by as little as one; recall Definition 4. It seems that considerably more (time and) space efficient algorithms may be feasible by partitioning observer space into (or merely covering it by) coarser classes:
Definition 5
Fix and collections and of nonintersecting segments in the plane. Some covering of is called a relaxed VSP of if
In the sequel we shall restrict to relaxed VSPs which constitute planar subdivisions (i.e. each being a simple polygon); and refer to their size in the sense of Definition 4.
Indeed, such VSPs allow for locating a given observer position in logarithmic time to yield a cell which, during preprocessing, had been assigned a value approximating up to absolute error at most .
Example 1
For , the trivial planar subdivision is a relaxed VSP of . In particular the quartic lower size bound of Lemma 4b) applies only to 0-relaxed VSPs but breaks down for .
This example suggests that much smaller (e.g. worst-case quadratic) sizes might become feasible when considering relaxed VSPs for, say, or even . Indeed we have the following lower and upper bounds:
Proposition 1

For each there exists a nondegenerate family of segments in the plane such that any relaxed VSP has size at least .

There also exist such families such that any relaxed VSP has size at least .

Let be a nondegenerate family of segments in the plane and the size of its VSP. Then there exists a relaxed VSP of size .

There also exists a relaxed VSP of size , where denotes the size of the Visibility Graph of .

In fixed dimension , (Definition 5 and) Item c) generalizes to relaxed VSPs of size .
Recall that , thus leaving a quadratic gap between a) and d) for small; and between b) and d) for large. Item c) improves on d) in cases where asymptotically does not exceed .
Proof

Consider Figure 6 with segments which obviously generalizes to arbitrary . Moving from segment along the arrow, each time crossing a dashed line amounts to an increase in visibility from via , and so on up to entire . Hence to obtain cells of viewpoints with visibility varying by at most , we must keep at least every st dashed border, that is out of . By symmetry, the same argument applies when moving from segment along the arrow, or from any other segment.

A closer look at Figure 6 reveals it to induce a VSP of size : For any segment, there are linearly many separate cones from which it can be seen through the gaps between the other segments; hence we have quadratically many cones, of which almost any two intersect; compare Figure 7, particularly its right part.
Now in order to argue about relaxed VSPs, replace in Figure 6 each single segment by scaled and shifted copies as indicated in the left of Figure 7; that is we now have a scene of size . Observe that, when entering and passing through a cone of visibility, the number of segments visible increases from its original value to an additional (drawn in levels of gray). The visibility number thus varies by , requiring a relaxed VSP to subdivide the cone; indeed the entire cone! By the above considerations, these necessary boundaries induce an arrangement of complexity which, expressed in the size of the scene, is as claimed. 
Classify the cells of according to their visibility count; and let denote the number of boundary segments of separating a cell with visibility count from one with visibility count . Since any boundary segment does so for some ,
Hence by the pigeonhole principle there exists such that . Now keep from exactly those boundary segments that either separate cells with visibility count from ones with visibility count , or cells with visibility count from ones with visibility count , or cells with visibility count from visibility count , and so on.
By the above considerations, this planar subdivision has complexity at most . And by construction, it joins cells having visibility counts , , , …, ; but maintains their separation from cells with visibility count , because those boundary segments are precisely the ones deliberately kept. Hence the joined coarsened (i.e. super) cells indeed contain only viewpoints with visibility count differing by at most . 
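The pigeonhole step above can be made concrete. The following sketch takes a hypothetical array b (where b[j] counts the VSP boundary edges separating cells of visibility count j from cells of count j+1) and selects the residue class whose threshold boundaries are cheapest to keep; by pigeonhole, the cheapest class carries at most a 1/(k+1) fraction of all edges:

```python
def choose_threshold_class(b, k):
    """b[j] = number of VSP boundary edges between cells of visibility
    count j and count j+1 (hypothetical input).  Keeping only the edges
    at thresholds j ≡ r (mod k+1) merges cells whose counts differ by at
    most k; return the residue r minimizing the number of kept edges,
    together with that count."""
    def cost(r):
        return sum(c for j, c in enumerate(b) if j % (k + 1) == r)
    r = min(range(k + 1), key=cost)
    return r, cost(r)
```

For b = [10, 1, 10, 1, 10, 1] and k = 1, the odd thresholds are far cheaper, and indeed the chosen class keeps only 3 of the 33 edges, well within the guaranteed half.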
Recall the collection of lines used in the proof of Lemma 4c). Now let , , and apply the Cutting Lemma of Chazelle, Friedman, and Matoušek [Mato02, Lemma 4.5.3]:
There exists a subdivision of the plane into generalized (i.e. not necessarily closed) triangles such that the interior of each is intersected by at most lines from .
Since visibility changes occur only when crossing lines in , the latter means that the visibility counts within each differ by at most .

Instead of generalized triangles as in d), now employ simplicial cuttings of size according to [Mato02, Theorem 6.5.3]. ∎
We remark that Item c) considers a certain subarrangement of the 0relaxed VSP, whereas the planar subdivision due to Item d) uses cell boundaries not necessarily belonging to the VSP. Also, the size of the cuttings employed in Items d) and e) is known to be optimal in general; but this optimality does not necessarily carry over to our application in visibility, recall the gaps between lower and upper bounds in Items a) to d).
Substituting yields
Corollary 1
For each collection of non-degenerate segments in the plane and , there exists a data structure of size that allows one to approximate, given , the visibility ratio up to absolute error in time .
Notice that we only claim the existence of such small data structures. In order to construct them, the proofs of Proposition 1c) and d) both proceed by first calculating the 0-relaxed VSP and then coarsening it. Specifically for Proposition 1d), a Triangular Cutting can be obtained in time [Agar90]; but determining the visibility count for each triangle costs according to Lemma 2, or can be taken from the 0-relaxed VSP. The preprocessing time for Proposition 1c+d) thus is, up to polylogarithmic factors, that of Theorem 2.2, i.e. roughly : independent of, and not taking advantage of large values of, . For the configuration from Lemma 4a), for instance (recall Figure 2), this results in a preprocessing time of order although the resulting relaxed VSP has size only . Alternatively, apply Lemma 2 to (one point from) each triangle to obtain a running time of roughly the output size, which is still off the optimum by one order of magnitude. And finally, the asymptotically ‘small’ size and computation time of Triangular Cuttings hide some large constants in the big-Oh notation, which are believed to prevent practical applicability.
3.2 Random Sampling
Both the size and the query time of the data structure due to Corollary 1 are rather low; but because of the unfavourable preprocessing time and hidden big-Oh overhead indicated above, we now turn to random sampling, based on a rather simple generic algorithm:
Algorithm 3.1

Guess a sample target of size .

Calculate the count of objects in visible through .

Return the ratio ;

and hope that it does not deviate too much from the ‘true’ value .
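Algorithm 3.1 can be sketched as follows, with a hypothetical predicate `visible(s, p, scene)` deciding weak visibility of object s from viewpoint p with respect to the whole scene (sampling is i.i.d., i.e. with replacement):

```python
import random

def approximate_visibility_ratio(scene, p, m, visible):
    """Sketch of Algorithm 3.1: (i) draw a sample of m objects i.i.d.
    from the scene, (ii) count how many are visible from p -- with
    respect to the *entire* scene, cf. Remark 2 -- and (iii) return
    the empirical ratio as an estimate of the true visibility ratio."""
    sample = [random.choice(scene) for _ in range(m)]
    count = sum(1 for s in sample if visible(s, p, scene))
    return count / m
```

Note that the visibility predicate receives the full scene as an argument, reflecting that occlusion must be tested against all of it, not merely against the sample.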
Item iv) is justified by the following
Lemma 6
Fix and , then choose as independent identically distributed random draws from . It holds
In other words: in Algorithm 3.1, taking a sample size (quadratic in the desired absolute accuracy, but) constant with respect to the scene size suffices to achieve the desired approximation with constant probability; increasing it slightly further amplifies the success probability exponentially.
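For concreteness (and as a hedge: Lemma 6 presumably rests on a Chernoff/Hoeffding-type tail bound, and its constants may differ from the ones used here), Hoeffding's inequality yields an explicit sample size that is independent of the scene size:

```python
import math

def hoeffding_sample_size(eps, delta):
    """Smallest m guaranteeing, via Hoeffding's inequality
    P(|empirical ratio - true ratio| >= eps) <= 2*exp(-2*m*eps^2),
    a failure probability of at most delta: m >= ln(2/delta)/(2*eps^2).
    Quadratic in 1/eps, logarithmic in 1/delta, independent of n."""
    return math.ceil(math.log(2 / delta) / (2 * eps * eps))
```

For example, absolute error 0.1 with 95% confidence requires only 185 samples, no matter whether the scene has a thousand or a billion segments.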
Remark 2

It is easy to see that a fixed relative accuracy can be attained, for , only by samples of size : if only one segment is visible, it must be sampled in order to be detected.

Also, the visibility of the sample must crucially be considered with respect to the entire scene, i.e. rather than .
3.3 The VC-Dimension of Visibility
Note that the random experiment, and the probability analysis of its properties in Lemma 6, holds for each but not uniformly in . For our purposes this means resampling at every frame. On the other hand, the above considerations have not exploited any geometry. An important connection between combinatorial sampling and geometric properties is captured by the Vapnik–Chervonenkis dimension [AlSp00]:
Fact 3.2
Let be a set and a collection of subsets . Denote by
(1) 
the VC-dimension of .

For , , .

Let be random of . Then with probability at least , it holds for each : .

Let be random of . Then with probability at least it holds for each : .
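Fact 3.2c) can be illustrated on a toy range space: points 0..n-1 with intervals [a, b] as ranges, a family of VC-dimension 2. (This is an illustrative stand-in, not the visibility range space of the lemma below.) The following hypothetical helper measures the quantity that the ε-approximation guarantee bounds, namely the worst deviation between true and sampled density over all ranges simultaneously:

```python
def max_interval_deviation(n, sample):
    """Largest deviation, over all intervals [a, b] in {0, ..., n-1},
    between the true fraction of points lying in [a, b] and the
    fraction of the sample lying in [a, b]."""
    m = len(sample)
    worst = 0.0
    for a in range(n):
        for b in range(a, n):
            true = (b - a + 1) / n
            est = sum(1 for x in sample if a <= x <= b) / m
            worst = max(worst, abs(true - est))
    return worst
```

A sample drawn uniformly achieves small deviation for every interval at once, which is exactly the "once and for all viewpoints" flavour exploited below.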
Lemma 7
Fix a collection of non-crossing -simplices in .

Define and . Then .

A random subset of cardinality satisfies with constant (and easily amplifiable) probability that, whenever at least a fraction of the simplices of are visible from through , then so is some simplex of .

A random subset of cardinality satisfies with constant (yet easily amplifiable) probability that deviates from absolutely by no more than .

The bound obtained in a) is asymptotically optimal with respect to : in there exist non-degenerate collections of line segments such that .
Proof

Items b) and c) follow by plugging Item a) into Fact 3.2b+c).
3.4 Main Result
Lemma 7c) enhances Lemma 6: the latter is concerned with the probability that a constant-size sample be representative (i.e. approximate the visibility ratio) with respect to a fixed viewpoint; i.e., our application would (have to) resample in each frame! The former lemma, on the other hand, asserts that a polylogarithmic-size sample, drawn once and for all, is suitable with respect to all viewpoints! In particular we may preprocess the visibility of each separately according to Theorem 2.3 and obtain, employing Scholium 2.4:
Corollary 2
Given , a collection of non-crossing segments in the plane (), and , where denotes the size of the Visibility Graph of . Then a randomized algorithm can preprocess within time and space into a data structure that, with high probability, has the following property: given , one can approximate the visibility ratio up to absolute error at most in time .
4 Empirical Evaluation
The present section demonstrates the practical applicability of the algorithm underlying Corollary 2: it is neither trivial nor too hard to implement, the constants hidden in big-Oh notation are modest, and query time can indeed be traded for memory.
Measurements were obtained on an Intel® Core™2 Duo CPU 6700 running at 2.66GHz under openSUSE 11.0, equipped with 4GB of RAM. The implementation is written in Java version 6 update 11. Calculations on coordinates use exact rational arithmetic based on BigIntegers.
4.1 Benchmark Scenes
We consider three kinds of ‘virtual scenes’ in 2D, that is, collections of non-intersecting line segments; compare Figure 9:

Sparse scenes representing forestlike virtual environments with longrange visibility;

cellular scenes representing architectural virtual environments with visibility essentially limited to the room the observer is presently in;

and an intermediate of both.
As indicated by the above classification, these scenes contain some regularity. More precisely, their respective visibility ratios obey qualitative deterministic laws; see Figure 10a). On the other hand, these scenes are constructed using a random process, which means instances of any desired size can be produced.
Specifically, all scenes arise from throwing one randomly oriented segment into each square of an grid. For Scene A, these segments are then shrunk by a factor to yield an average visibility count proportional to ; see Figure 10a). For Scene B, each segment is sequentially grown so as to just touch some other one (remember we want to comply with Definition 3); as expected (and corresponding to Lemma 4b), this results in constant visibility counts. Scene C, finally, arises from Scene B by shrinking the segments again; here the visibility count grows roughly proportionally to .
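The construction of the sparse scene type A) can be sketched as follows; this is a hypothetical rendition of the generator (the actual benchmark code may differ in details such as the shrinking factor):

```python
import math, random

def scene_a(n, shrink=0.5):
    """Generate a sparse 'forest' scene: one randomly oriented segment
    per cell of an n x n grid, shrunk by the given factor so that
    segments stay within their cells and leave long sight lines.
    Returns a list of ((x1, y1), (x2, y2)) pairs, n*n segments total."""
    segments = []
    for i in range(n):
        for j in range(n):
            cx, cy = i + 0.5, j + 0.5            # cell centre
            theta = random.uniform(0, math.pi)   # random orientation
            r = 0.5 * shrink                     # half-length after shrinking
            dx, dy = r * math.cos(theta), r * math.sin(theta)
            segments.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return segments
```

Scene B) would then grow each segment until it touches another one, and scene C) would shrink those grown segments again.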
[Figure 10b): variance of the output of the randomized algorithm.]
Figure 10b) indicates the quality of approximation attained by Corollary 2: recall that the preprocessing step randomly selects elements of the scene as targets, and the proof shows that asymptotically this sample is ‘representative’ for the entire scene with high probability. Our implementation chose , which turned out to yield a practically good approximation indeed. More precisely, Figure 10b) displays the confidence interval estimates for scenes A) and C) from various viewpoints, normalized to mean 0 (i.e. after subtracting the mean).
4.2 Memory Consumption versus Query Time
Preprocessing space and query time are two major resource constraints for many applications such as the one we aim at. We have thus performed extensive measurements of these quantities for the scene types A) to C) mentioned above. It turns out that for A) our data structure takes roughly linear space and for B) roughly quadratic space, whereas for C) it grows strictly faster; see Figure 11a). Here we refer to the setting . For scene C) we have additionally employed the tradeoff featured by Corollary 2 to reduce memory consumption at the expense of query time; specifically, scene E) means scene C) with , and scene D) refers to . It turns out that the latter effectively reduces the size to quadratic; see Figure 11a).
On the other hand, the saved memory is paid for by an increase in query time; cf. Figure 11b). Indeed, at scenes of size , each query is estimated to take about , that is, as long as one frame may last at an interactive rate of . Scene E), that is, scene C) with instead of , is estimated to still remain far below this limit for much larger scenes ; not to mention scenes A) to C), i.e. with .
4.3 Conclusion
Our benchmarks range up to , where the data structure hit an overall memory limit of 16GB. This may at first seem to fall far short of the original sizes aimed at in Section 1.1. On the other hand,

the measurements obtained turn out to depend smoothly on and thus give a sufficient indication of, and permit extrapolating with convincing significance, the behavior on larger scenes.

By proceeding from single geometric simplices to entire ‘virtual objects’ (e.g. a house or a car) as rendering primitives, one can in practice easily save a factor of 100 or 1000.

Temporal and spatial coherence of an observer moving within a virtual scene suggests that visibility counting queries need not be performed in each frame separately. Moreover, our CPU-based algorithm can be run concurrently to the graphics processing unit (GPU). These two improvements in running time can then be traded for an additional saving in memory.

In order to provide and access the 16GB mentioned above, we employed secondary storage (a hard disk): with an unfavorable increase in preprocessing time but surprisingly little effect on the query time.
These observations suggest that our algorithm’s practicality can be extended to much larger than the above ; yet doing so is beyond the purpose of the present work.
It is thus fair to claim, as the main benefit of our contribution in Corollary 2, a practically relevant approach (as opposed to Corollary 1) to approximate visibility counting, based on the ability to trade (otherwise prohibitively quartic) preprocessing space for (otherwise almost negligible logarithmic) query time.
5 Perspectives

We have treated the observer’s position as an input newly given from scratch for each frame. In practice, however, is more likely to move continuously and with bounded velocity through the scene. This should be exploited algorithmically, e.g. in the form of a visibility count maintenance problem [Pocc90, Section 3.2].

How does Theorem 2.3 extend from 2D to 3D, what is the typical size of a 3D VSP?

The quartic worst-case size of 2D VSPs (and quadratic typical, yet even of order for 3D) arises from visibility considered with respect to perspective projections; whereas for orthographic projections, it drops to (in 2D; in 3D: order ) [Schi01].

The counterexamples in Lemma 4a) and Lemma 7d) and also Proposition 1a) employ (after scaling the entire scene to unit size) very short and/or very close segments. We wonder if such worst cases can be avoided in the bit cost model, i.e. with respect to denoting the total binary length of the scene description on an integer grid.
5.1 Remarks on Lower Bounds
We have presented various data structures and algorithms for visibility counting, trading preprocessing space for query time. It would be most interesting to complement these results with corresponding lower bounds of the form: preprocessing space requires, in some appropriate model of computation, query time at least . Unfortunately, techniques for Friedman’s Arithmetic Model, which have proven so very successful for range query problems [Chaz90], do not apply to the non-faithful semigroup weights of geometric counting problems, not to mention the decision Problem 1; and approximate, rather than exact, counting makes proofs even more complicated.
On the other hand, we do have some lower bounds: namely on the sizes of VSPs and relaxed VSPs in Lemma 4a) and Proposition 1a+b). These immediately translate to lower bounds on the sizes of Linear Branching Programs, based on the observation that any such program needs a different leaf for each different convex cell of inputs leading to the same output value, compare [DFZ02].
But then again, this seemingly natural model of computation is put into question when considering the algorithms from Sections 2.1 and 2.2: both have, in spite of Lemma 4a), merely (weakly) linear size. Superficially, Lemma 2 seems to employ transcendental functions for angle calculations; but these can be avoided by comparing slopes instead of angles, at the cost of employing divisions. These, again, can be replaced; yet multiplications do remain and make the algorithm, as a branching program, inherently non-linear.
5.2 Visibility in Dimensions
We had deliberately restricted to the planar case of line segments. Many virtual scenes in interactive walkthrough applications can be described as dimensional: buildings of various heights yet rooted on a common plane.
But what about the full 3D case? Here we observe a quadratic ‘almost’ lower bound on the joint running time of preprocessing and querying for the 3D counterpart of Problem 1. To this end, recall the following famous
Problem 2 (3SUM)
Given an element subset or , do there exist such that ?
It admits an easy algebraic time algorithm but is not known to be solvable in subquadratic time. Similar to Boolean Satisfiability (SAT) and the theory of NP-completeness, 3SUM has led to a rich family of problems mutually reducible to one another in softly linear time and hence called 3SUM-complete; for example, it holds [GaOv95, Section 6.1]:
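The quadratic-time algorithm alluded to is the classical one: sort the input, then for each fixed element scan the remainder with two pointers for a pair completing a zero sum. A sketch (for the common variant asking for three distinct indices):

```python
def three_sum_exists(a):
    """Classical O(n^2) test for 3SUM: after sorting, fix each element
    a[i] and use two pointers lo, hi to search the suffix for a pair
    with a[i] + a[lo] + a[hi] == 0.  The three indices stay distinct."""
    a = sorted(a)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # sum too small: move lower pointer up
            else:
                hi -= 1   # sum too large: move upper pointer down
    return False
```

Each of the n outer iterations moves the two pointers at most n steps in total, giving the quadratic bound.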
Fact 5.1
Given a collection of opaque horizontal triangles in space, one further horizontal triangle , and a viewpoint . The question of whether some point of is visible from through (called VisibleTriangle) is 3SUM-complete.
In particular there is no 3D counterpart to the Interval Tree solving the corresponding 2D problem in time , recall Section 2.1.
References
 [Agar90] P. Agarwal: “Partitioning Arrangements of Lines I: An Efficient Deterministic Algorithm”, pp.449–483 in Discrete Comput. Geom. vol.5 (1990).
 [AlSp00] N. Alon, J.H. Spencer: “The Probabilistic Method”, 2nd Edition, Wiley (2000).
 [BKOS97] M. de Berg, M. van Kreveld, M. Overmars, O. Schwarzkopf: “Computational Geometry, Algorithms and Applications”, Springer (1997).
 [CAF07] S. Charneau, L. Aveneau, L. Fuchs: “Exact, Robust and Efficient Full Visibility Computation in Plücker Space”, pp.773–782 in Visual Comput. vol.23 (2007).
 [CCSD03] D. Cohen-Or, Y.L. Chrysanthou, C.T. Silva, F. Durand: “A Survey of Visibility for Walkthrough Applications”, pp.412–431 in IEEE Transactions on Visualization and Computer Graphics vol.9:3 (2003).
 [Chaz86] B. Chazelle: “Filtering Search: A New Approach to Query-Answering”, pp.703–724 in SIAM J. Comput. vol.15:3 (1986).
 [Chaz90] B. Chazelle: “Lower Bounds for Orthogonal Range Searching: II. The Arithmetic Model”, pp.439–463 in J. ACM vol.37:3 (1990).
 [DFZ02] V. Damerow, L. Finschi, M. Ziegler: “Point Location Algorithms of Minimum Size”, pp.5–9 in Proc. 14th Canadian Conf. on Computational Geometry (CCCG 2002).
 [Edel87] H. Edelsbrunner: “Algorithms in Combinatorial Geometry”, Springer (1987).
 [ELPZ07] H. Everett, S. Lazard, S. Petitjean, L. Zhang: “On the Expected Size of the 2D Visibility Complex”, pp.361–381 in Int. J. Comput. Geom. vol.17:4 (2007).
 [FJZ09] M. Fischer, C. Jähn, M. Ziegler: “Adaptive Mesh Approach for Predicting Algorithm Behavior with Application to Visibility Culling in Computer Graphics”, submitted.
 [FuSe93] T.A. Funkhouser, C.H. Séquin: “Adaptive display algorithm for interactive frame rates during visualization of complex virtual environments”, pp.247–254 in Proc. 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’93).
 [GaOv95] A. Gajentaan, M. Overmars: “On a Class of Problems in Computational Geometry”, pp.165–185 in Computational Geometry: Theory and Applications vol.5:3 (1995).
 [GhMo91] S.K. Ghosh, D. Mount: “An Output Sensitive Algorithm for Computing Visibility Graphs”, pp.888–910 in SIAM J. Comput. vol.20 (1991).
 [Ghos07] S.K. Ghosh: “Visibility Algorithms in the Plane”, Cambridge University Press (2007).
 [HPV77] J.E. Hopcroft, W.J. Paul, L.G. Valiant: “On Time Versus Space”, pp.332–337 in Journal of the ACM vol.24:2 (1977).
 [KKF*04] J. Klein, J. Krokowski, M. Fischer, M. Wand, R. Wanka, F. Meyer auf der Heide: “The Randomized Sample Tree: A Data Structure for Externally Stored Virtual Environments”, pp.617–637 in Presence vol.13:6, MIT Press (2004).
 [Mato02] J. Matoušek: “Lectures on Discrete Geometry”, Springer Graduate Texts in Mathematics vol.212 (2002).
 [McKe87] M. McKenna: “Worst-Case Optimal Hidden Surface Removal”, pp.19–28 in ACM Transactions on Graphics vol.6 (1987).
 [MoRa95] R. Motwani, P. Raghavan: “Randomized Algorithms”, Cambridge University Press (1995).
 [ORou87] J. O’Rourke: “Art Gallery Theorems and Algorithms”, Oxford University Press (1987).
 [OvWe88] M. Overmars, E. Welzl: “New Methods for Constructing Visibility Graphs”, pp.164–171 in Proc. 4th ACM Symposium on Computational Geometry (1988).
 [PlDy90] H. Plantinga, Ch.R. Dyer: “Visibility, Occlusion, and the Aspect Graph”, pp.137–160 in Int. Journal Computer Vision vol.5:2 (1990).
 [Pocc90] M. Pocchiola: “Graphics in Flatland Revisited”, pp.85–96 in Proc. 2nd Scandinavian Workshop on Algorithms Theory, Springer LNCS vol.447 (1990).
 [PoVe96] M. Pocchiola, G. Vegter: “The Visibility Complex”, pp.279–308 in International Journal of Computational Geometry & Applications vol.6:3 (1996).
 [Schi01] R.D. Schiffenbauer: “A Survey of Aspect Graphs”, TRCIS200101 Brooklyn University (2001).
 [Tell92] S.J. Teller: “Visibility Computations in Densely Occluded Polyhedral Environments”, Dissertation University of California Berkeley (1992).
 [TeSe91] S.J. Teller, C.H. Séquin: “Visibility Preprocessing For Interactive Walkthroughs”, pp.61–69 in Computer Graphics vol.25:4 (1991).
 [WFP*01] M. Wand, M. Fischer, I. Peter, F. Meyer auf der Heide, W. Straßer: “The Randomized Buffer Algorithm: Interactive Rendering of Highly Complex Scenes”, in Proc. 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2001).
 [WW*06] P. Wonka, M. Wimmer, K. Zhou, S. Maierhofer, G. Hesina, A. Reshetov: “Guided Visibility Sampling”, pp.494–502 in Proc. of SIGGRAPH 2006 (2006).