Measuring Visibility using Atmospheric Transmission and Digital Surface Model


Abstract

Reliable and exact assessment of visibility is essential for safe air traffic. In order to overcome the drawbacks of the currently subjective reports from human observers, we present an approach to automatically derive visibility measures by means of image processing. It first exploits an image-based estimation of the atmospheric transmission, describing the portion of the light that is not scattered by atmospheric phenomena (e.g., haze, fog, smoke) and reaches the camera. Once the atmospheric transmission is estimated, a 3D representation of the vicinity (digital surface model, DSM) is used to compute depth measurements for the haze-free pixels and then derive a global visibility estimation for the airport. Results on foggy images demonstrate the validity of the proposed method.


1 Motivation

In order to guarantee safe air traffic, controllers have to rely on precise forecasts and measurements of the current weather situation. These have to be compiled and reported according to official regulations (e.g., visual flight rules (VFR) flight¹) every 30 minutes in an international standard format². The decisions the controllers take do not only impact safety but also influence the air traffic and its economic repercussions in case of delays [1].

All major airports operate dedicated sensor systems to assess the current weather situation. Besides “classic” parameters like wind, pressure, humidity, and temperature, there are point-like measurements of visibility and cloud cover information. For reporting the prevailing visibility in the airport vicinity, observers currently compile their reports by integrating sensor measurements with visual observation of known landmarks, like buildings, mountain tops, etc. (Figure 1).

Figure 1: Landmarks around the airport of Graz (Austria).

Human estimation is naturally subjective to the individual observer and error-prone. Visibility sensors employ the principle of forward scattering to provide accurate and reliable information regarding visibility changes in the atmosphere. These sensors (usually located at the start and end of the runways) are very precise but only give very local (i.e., at the position of the sensor) measurements.

In contrast to Andreu et al. [2], the approach presented in this paper does not aim at emulating the controllers' procedure, which results in a limited number of visibility measurements (up to the number of defined landmarks scattered around the airport), but instead intends to obtain a dense (i.e., for each pixel in the image) visibility estimation. In order to quantify visibility, we propose a three-step procedure. First, we use a single image haze removal technique in order to recover the atmospheric transmission at every pixel of the image. Second, with the help of a very precise (e.g., 1 m resolution) digital surface model of the vicinity of the airport, a depth map is generated for the whole image. Finally, statistics about the depth of the scene points whose corresponding pixels are deemed visible give the visibility in the field of view of the camera.

A short overview of the relevant literature in this field is presented in Section 2, followed by the methods applied for estimating the atmospheric transmission in Section 3. Then, it is shown in Section 4 how, with the help of a very precise digital surface model of the airport vicinity, a depth map can be generated. In Section 5 we combine the estimated atmospheric transmission and the depth map to derive a global visibility estimation for the airport. Finally, Section 6 concludes with final remarks.

2 Related work

Weather and other atmospheric phenomena, such as haze, fog, mist, rain, and snow, greatly reduce the visibility of distant regions in images of outdoor scenes. Quantifying the amount of atmospheric scattering, or removing it (e.g., dehazing or de-weathering), is a challenging problem because the degree to which it affects each pixel depends on the depth of its corresponding scene point. This leads to an under-constrained problem if the input is only a single image. Therefore, many methods propose using multiple images or additional information in order to alleviate that inherent problem.

Recent dehazing methods like [4] and [14] are able to dehaze single images by making assumptions about the contrast difference between haze-free and hazy scenes as well as assumptions about the transmission and the surface shading. Even though these methods result in visually appealing images, they are either not physically valid or fail to handle images with heavy haze.

In order to remove haze, Schechner et al. [13] successfully made use of multiple images taken with different polarizer orientations, an approach which is difficult to adapt to single images from standard webcams.

Depth-based methods, where depth information or heuristics about the scene are provided either by interactive user inputs or by photo-realistic 3D models, have been successful in de-weathering images. Narasimhan et al. [12] addressed the question of de-weathering a single image using simple additional information (i.e., sky, vanishing point) provided interactively by the user. Kopf et al. [8] take advantage of the availability of accurate 3D models and try to estimate stable values for the haze function directly from the relationship between the colors in the image and those of the rendered 3D model. However, in our case, even though we have access to a fairly precise digital surface model generated by laser scanning, this 3D model has no color information, which prevents the use of Kopf et al.'s approach.

3 Atmospheric Transmission

3.1 Dark Channel Prior

In computer vision, the model widely used to describe the formation of an image perturbed by haze is [11], [10], [14]:

I(x) = J(x) t(x) + A (1 - t(x))    (1)

where I(x) is the observed intensity, J(x) is the scene radiance, A is the global atmospheric light, and t(x) is the atmospheric transmission describing the portion of the light that is not scattered and reaches the camera. The first term on the right-hand side is called direct attenuation, and the second term is called air-light. When the atmosphere is homogeneous, the transmission is attenuated exponentially with the scene depth and can be expressed as t(x) = e^{-β d(x)}, where β is the scattering coefficient of the atmosphere and d(x) the scene depth.

He et al. [5] propose using a dark channel prior for single image haze removal. This prior is partially inspired by the well-known dark-object subtraction technique [3] used in multispectral remote sensing systems. The idea is that in most local regions which do not cover the sky, some pixels (called dark pixels) normally have very low intensity in at least one color (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is mainly contributed by the air-light. Therefore, these dark pixels can directly provide an accurate estimation of the haze transmission. For an arbitrary image J, its dark channel is given by J^dark(x) = min_{y ∈ Ω(x)} min_{c ∈ {r,g,b}} J^c(y), where J^c is a colour channel of J and Ω(x) is a local patch centred at x. Using the concept of a dark channel, if J is an outdoor haze-free image then, except for the sky region, the intensity of J's dark channel is low and tends to zero. Figure 2 shows a foggy image of the airport surroundings (left) and its corresponding dark channel computed over local patches of size 15x15 pixels (center).

Figure 2: Foggy image, its dark channel computed over 15x15 patches, and transmission map after guided filtering with the source image.
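As an illustration, a minimal sketch of the dark channel computation in Python with NumPy and OpenCV; the 15x15 patch size mirrors the text, while the function name, the assumed [0, 1] input range, and all other details are assumptions rather than the authors' implementation:

```python
import cv2
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel of an RGB image with values in [0, 1].

    Per pixel, take the minimum over the three colour channels, then take the
    minimum over a local square patch (implemented as a morphological erosion).
    """
    min_channel = image.min(axis=2)  # per-pixel minimum over R, G, B
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_size, patch_size))
    return cv2.erode(min_channel, kernel)  # local minimum over the patch
```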

3.2 Estimating the transmission

As in [5], the atmospheric light A is automatically estimated from the most haze-opaque pixels (the top 0.1% brightest pixels in the dark channel). If the transmission can be assumed constant (denoted as t̃(x)) over the local patch Ω(x), Eq. (1), normalized by A^c for each colour channel, can be rewritten as:

min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c ) = t̃(x) min_{y ∈ Ω(x)} min_c ( J^c(y) / A^c ) + 1 - t̃(x)    (2)

Since the scene radiance J is by definition haze-free, its dark channel is close to zero, so Eq. (2) leads to:

t̃(x) = 1 - min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c )    (3)

where the minimum term is the dark channel of the normalized hazy image. In practice, even on clear days the atmosphere is not absolutely free of particles, meaning that some haze still exists when we look at distant objects. A very small amount of haze can therefore optionally be kept for distant objects by introducing a constant parameter ω (0 < ω ≤ 1) into Eq. (3):

t̃(x) = 1 - ω min_{y ∈ Ω(x)} min_c ( I^c(y) / A^c )    (4)

Note that this method for estimating the atmospheric transmission is based on a prior (the dark channel) reflecting the statistics of outdoor images. That prior may become invalid for objects that are similar to the atmospheric light (e.g., white walls) or that have no shadow cast on them. As a result, this method might underestimate the transmission of these objects and overestimate the haze layer. However, images of the vicinity of an airport do not contain many such problematic objects and contain plenty of cast shadows.
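The atmospheric light estimation and the transmission of Eq. (4) could be sketched as follows, building on the dark_channel helper above; ω = 0.95 is a typical value from [5] rather than one prescribed by this paper, and the helper names are hypothetical:

```python
import numpy as np

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Estimate A from the top 0.1% brightest pixels in the dark channel."""
    flat_dark = dark.ravel()
    n_top = max(1, int(flat_dark.size * top_fraction))
    indices = np.argpartition(flat_dark, -n_top)[-n_top:]  # most haze-opaque pixels
    candidates = image.reshape(-1, 3)[indices]
    return candidates[candidates.sum(axis=1).argmax()]  # brightest candidate as A

def estimate_transmission(image, A, omega=0.95, patch_size=15):
    """Eq. (4): t = 1 - omega * dark_channel(I / A)."""
    normalized = image / np.maximum(A, 1e-6)  # per-channel normalization by A
    return 1.0 - omega * dark_channel(normalized, patch_size)
```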

3.3 Refining the transmission

Using Eq. (4) for computing the transmission results in a very coarse map. Since the transmission is not always constant over local patches, the map has to be refined in order to capture the sharp edge discontinuities and outline the profile of the objects. We follow the approach of He et al. [5] and apply a guided filter (with the original hazy image used as the guidance image) to refine the transmission map [6]. Figure 2 (right) shows the corresponding refined version of the transmission map.
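A possible refinement step, assuming the guided filter implementation shipped with the opencv-contrib package (cv2.ximgproc); the radius and regularization values are illustrative, not those used in the paper:

```python
import cv2
import numpy as np

def refine_transmission(image_bgr, coarse_t, radius=60, eps=1e-3):
    """Refine the coarse transmission map with the hazy image as guidance."""
    guide = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    return cv2.ximgproc.guidedFilter(guide, coarse_t.astype(np.float32), radius, eps)
```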

4 Depth map generation

For this study we identified 16 ground control points (GCPs) manually for a part (i.e., the camera field of view) of the airport vicinity (Figure 3). The 3D reference coordinates of the GCPs were derived from a combined LiDAR³-SRTM⁴ DSM. For the northern part, in the vicinity of the tower of the airport Graz, we used a LiDAR DSM which was acquired with 4 points per square metre in 2009. For more distant areas we used the globally available version 4 of the SRTM DSM, which was acquired using single-pass interferometry at C-band with a ground resolution of about 90 m in 2000 [7]. Figure 3 shows a painted relief of the generated combined LiDAR-SRTM DSM. To facilitate the identification of the buildings given in the LiDAR DSM, we additionally used Google Maps (see Figure 4), in particular its oblique views.

Figure 3: Painted relief of the combined LiDAR-SRTM DSM south of the tower (position outlined at the center of the red circle in the lower left part of the image) of the airport of Graz (Austria). Blue areas indicate no data.
Figure 4: Identification of GCPs in one image of the webcam of the airport Graz (Austria) (left) and in the combined LiDAR-SRTM DSM and Google Maps (right).

Based on the well-known (linearized) collinearity equations, we used the GCP information to orient this image [9]. Besides the extrinsic orientation parameters (camera position and rotation), we limited the intrinsic parameters to the focal length only. For a more precise distance map, the principal point and camera distortion parameters could also be derived. A coarse position of the webcam and the knowledge that the camera is pointing to the south were sufficient to set up the orientation with a root mean square error of less than 4 pixels (with respect to the GCP reprojection error). Once the image is oriented, for every pixel of the image the first intersection of the line of sight (LOS) with the combined LiDAR-SRTM DSM is calculated and the distance assigned to the raw distance map. More precisely, we restrict ourselves to image pixels below the detected horizon. As the combined LiDAR-SRTM DSM is only 2.5-dimensional, this calculation fails if the LOS passes below, e.g., the roof of a building; in this case the intersection is found with an object behind the actual building. We therefore applied a simple reconstruction of facades by projecting the eaves line to the ground. A simple search in the image line direction then ensures that the distance is always decreasing when starting from the horizon. Figure 5 shows the raw distance map of the airport Graz (Austria) and the corrected distance map. All distances are given in logarithmic scale, ranging from less than 60 m to more than 16 km.

Figure 5: Raw depth map of the airport Graz (Austria) (left) and corrected depth map (right). Depths are given in logarithmic scale from 60m up to 16km.
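A rough sketch of this depth map generation is given below. Here cv2.solvePnP stands in for the linearized collinearity adjustment, and a simple ray march stands in for the exact LOS-DSM intersection; dsm_height is a hypothetical height lookup (in metres) in the same metric frame as the GCPs, with Z assumed to be the height axis. The facade reconstruction and the monotonicity correction described above are omitted:

```python
import cv2
import numpy as np

def orient_camera(gcp_world, gcp_image, focal, image_size):
    """Estimate the camera pose from GCPs; intrinsics limited to the focal length."""
    w, h = image_size
    K = np.array([[focal, 0.0, w / 2.0],
                  [0.0, focal, h / 2.0],
                  [0.0, 0.0, 1.0]])  # principal point fixed at the image centre
    ok, rvec, tvec = cv2.solvePnP(np.asarray(gcp_world, dtype=np.float64),
                                  np.asarray(gcp_image, dtype=np.float64), K, None)
    return K, rvec, tvec

def pixel_depth(K, rvec, tvec, pixel, dsm_height, step=5.0, max_range=16000.0):
    """March along a pixel's line of sight and return the first DSM hit (metres)."""
    R, _ = cv2.Rodrigues(rvec)
    camera = -R.T @ tvec.ravel()  # camera centre in world coordinates
    ray = R.T @ (np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0]))
    ray /= np.linalg.norm(ray)
    for d in np.arange(step, max_range, step):
        p = camera + d * ray
        if p[2] <= dsm_height(p[0], p[1]):  # ray dropped below the surface
            return d
    return np.nan  # no intersection (sky pixel above the horizon)
```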

5 Visibility estimation

Figure 6 shows the result of detecting haze on two hazy images taken at different times of day under different weather conditions (low-level clouds and fog banks). The border of the haze is delineated by finding the boundaries of the region obtained by thresholding the transmission map computed using the methods described in Section 3.

As can be seen in both examples of Figure 6, the border between hazy and non-hazy regions of the image is delineated using a threshold of 0.75 (heuristically estimated) on the corresponding transmission maps. In the right-hand example, one can notice that the fog bank over the runway is accurately segmented, which is of importance for air traffic.

Figure 6: Haze border (highlighted in red) obtained by applying a global threshold of 0.75 to the corresponding transmission maps.
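The haze border of Figure 6 could be reproduced along these lines; the 0.75 threshold is the heuristic value from the text, while the function itself is a hypothetical sketch:

```python
import cv2
import numpy as np

def draw_haze_border(image_bgr, transmission, threshold=0.75):
    """Outline the boundary between hazy and haze-free regions in red."""
    visible = (transmission >= threshold).astype(np.uint8)
    contours, _ = cv2.findContours(visible, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outlined = image_bgr.copy()
    cv2.drawContours(outlined, contours, -1, (0, 0, 255), 2)  # red in BGR
    return outlined
```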

Deriving a visibility distance estimation from the depth and transmission maps is done by a statistical analysis of the pixels of the depth map whose transmission value in the corresponding transmission map is above a given threshold. One can then either choose as visibility distance the overall maximum depth of the concerned pixels or, in order to avoid possible outliers, a simple percentile statistic over the gathered depths (i.e., discarding the top 1%).
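A minimal sketch of this statistic, assuming the transmission and depth maps are co-registered arrays; the default threshold and percentile follow the values reported below, and the function name is an assumption:

```python
import numpy as np

def visibility_distance(depth_map, transmission, threshold=0.75, percentile=99.0):
    """Visibility = high percentile of the depths of pixels deemed haze-free."""
    visible_depths = depth_map[(transmission >= threshold) & np.isfinite(depth_map)]
    if visible_depths.size == 0:
        return 0.0  # nothing deemed visible
    return float(np.percentile(visible_depths, percentile))
```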

Table 1 reports visibility distance estimations from an initial evaluation conducted together with the Austro Control observers. It uses a global threshold value of 0.75 on the transmission map and takes the distance corresponding to the 99th percentile rank of the depth map histogram (with a bin width of 10 m) as the resulting visibility distance.

Measured | Reported (METAR) | Measured | Reported (METAR) | Measured | Reported (METAR)
102m     | 250m             | 731m     | 500m             | 6768m    | 6000m
110m     | 300m             | 443m     | 600m             | 10537m   | 10km
371m     | 300m             | 791m     | 800m             | 12178m   | 10km
Table 1: Comparison of measured to officially reported visibility (METAR).

Globally, the results over a broad range of distances (from as close as 250 m up to 10 km) match the official reports well. Some discrepancies can be explained by the difficulty for the controller to evaluate the visibility distance due to the low number of landmarks in some distance ranges (e.g., in the range [600 m, 800 m], cf. Figure 1) and also by the subjectivity of the concept of visibility as defined by human beings (e.g., when is a landmark visible and when not?). Clearly, the value heuristically chosen as the global threshold on the transmission has a major impact on the visibility results. No sensitivity analysis has been performed on this parameter yet, since the results presented here are preliminary and mainly intended as a proof of concept.

6 Conclusion

In this paper, we have proposed an approach to automatically derive visibility measures at airport sites by combining an image processing method with a digital surface model of the airport vicinity. The image processing method, equivalent to an image haze removal technique, enables us to first recover the atmospheric transmission at every pixel in the image. Then, with the help of a very precise digital surface model of the airport vicinity, a depth map is generated for every pixel. Finally, gathering the depths of the pixels which are visible gives the visibility in the field of view of the camera. It was shown that this approach gives preliminary results comparable to the visibility estimations given by human controllers. Further work has to be done in order to automatically determine an optimal value of the transmission threshold (i.e., by exploiting local features for threshold derivation), which is a critical parameter of the proposed approach. Also, instead of being global, the threshold values could be locally adapted to different parts of the image; we will investigate this in future work.

Acknowledgments.

This research was supported by the TAKE OFF program, an initiative of the Austrian “Federal Ministry for Transport, Innovation and Technology” under contract number 843985.

Footnotes

  1. Visual meteorological conditions: http://en.wikipedia.org/wiki/Visual_meteorological_conditions
  2. METAR: MÉTéorologique Aviation Régulière
  3. Light Detection and Ranging
  4. Shuttle Radar Topography Mission

References

  1. S. S. Allan, J. A. Beesley, J. E. Evans, and S. G. Gaddy. Analysis of Delay Causality at Newark International Airport. In Proceedings of the 4th USA/Europe Air Traffic Management R&D Seminar, Santa Fe, New Mexico, US, December 2001. Available online from: http://www.atmseminar.org/papers.cfm?seminar_ID=4.
  2. J.P. Andreu, H. Ganster, E. Schmidt, M. Uray, and H. Mayer. Comparative Study of Landmark Detection Techniques for Airport Visibility Estimation. In Proceedings of the 35th Annual Workshop of the Austrian Association for Pattern Recognition (ÖAGM), Graz, Austria, May 2011. Available online from: http://oagm2011.joanneum.at/papers/46.pdf.
  3. P. Chavez. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data. Remote Sensing of Environment, 24:459–479, 1988.
  4. R. Fattal. Single Image Dehazing. In ACM SIGGRAPH 2008 Papers, SIGGRAPH ’08, pages 72:1–72:9, 2008.
  5. K. He, J. Sun, and X. Tang. Single Image Haze Removal Using Dark Channel Prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341–2353, Dec 2011.
  6. K. He, J. Sun, and X. Tang. Guided Image Filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(6):1397–1409, 2013.
  7. A. Jarvis, H.I. Reuter, A. Nelson, and E. Guevara. Hole-filled SRTM for the globe Version 4. Available from the CGIAR-CSI SRTM 90m Database: http://srtm.csi.cgiar.org/, 2008.
  8. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski. Deep Photo: Model-based Photograph Enhancement and Viewing. ACM Trans. Graph., 27(5):116:1–116:10, 2008.
  9. K. Kraus. Photogrammetry - Geometry from Images and Laser Scans - 2nd edition. Walter de Gruyter, Berlin, 2007.
  10. S.G. Narasimhan and S.K. Nayar. Chromatic Framework for Vision in Bad Weather. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 598–605, Jun 2000.
  11. S.G. Narasimhan and S.K. Nayar. Vision and the Atmosphere. International Journal on Computer Vision, 48(3):233–254, Jul 2002.
  12. S.G. Narasimhan and S.K. Nayar. Interactive (De)weathering of an Image using Physical Models. In ICCV Workshop on Color and Photometric Methods in Computer Vision (CPMCV), Oct 2003.
  13. Y.Y. Schechner, S.G. Narasimhan, and S.K. Nayar. Instant Dehazing of Images using Polarization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume I, pages 325–332, Dec 2001.
  14. R. T. Tan. Visibility in bad weather from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.