# Fast single image dehazing via multilevel wavelet transform based optimization

###### Abstract

The quality of images captured outdoors can be degraded by poor weather conditions such as fog, dust, and the atmospheric scattering of other particles. This problem brings extra challenges to high-level computer vision tasks like image segmentation and object detection. However, previous studies on image dehazing suffer from a heavy computational workload and corruption of the original image, such as over-saturation and halos. In this paper, we present a novel image dehazing approach based on the optical model for hazed images and regularized optimization. Specifically, we convert the non-convex, bilinear problem concerning the unknown haze-free image and light transmission distribution to a convex, linear optimization problem by estimating the atmospheric light constant. Our method is further accelerated by introducing a multilevel Haar wavelet transform: the optimization is applied to the low-frequency sub-band decomposition of the original image instead of the full image. This dimension reduction significantly improves the processing speed of our method and exhibits the potential for real-time applications. Experimental results show that our approach outperforms state-of-the-art dehazing algorithms in terms of both image reconstruction quality and computational efficiency. For implementation details, source code can be publicly accessed via http://github.com/JiaxiHe/Image-and-Video-Dehazing.

###### keywords:

Image dehazing, Image enhancement, Regularized optimization, Haar wavelet transform, Sub-band image model

Journal: Elsevier

## 1 Introduction

Haze is a result of the accumulation of dust and smoke particles in the atmosphere. It degrades visibility and contrast in images because light is scattered by haze particles along the transmission path between the object and the sensor. This problem affects many high-level applications, e.g., photography stylization rosa17 (), semantic segmentation xing16 (), scene understanding christos18 (), and video surveillance goyal18 (), to name a few. Image dehazing endeavors to post-process hazed images to remove the haze effects and reconstruct the original scene. In recent years, a great deal of progress has been made on this crucial topic of image processing singh-18 (). Wang et al. wangw18 () categorize image dehazing methods into an enhancement-based group and a restoration-based group, where the effectiveness of the latter is backed by a sound physical model. However, the optical image model in its original form has several unknown parameters, making single image dehazing an ill-posed problem. A more serious problem is that the haze-free image and the light transmission distribution in the atmosphere are bilinearly coupled. As a result, the image restoration process becomes a non-convex problem, which is computationally expensive and prone to converging to a sub-optimal solution.

Early efforts to mitigate this problem often require additional information to restore image visibility. For instance, Schechner et al. proposed removing haze by taking two images of the same scene through a polarizer at different orientations Schechner2006 (); Schechner2007 (). Hautiere et al. Hautiere2007 () used rough 3D geometrical information of the scene for analysis; Kopf et al. Kopf2008 () also used the depth information of the scene. Recently, Luan et al. luan17 () proposed a learning framework that automatically extracts haze-relevant features and estimates the light transmission distribution by sampling hazed/clear image pairs. The applications of these methods are limited by their requirement for additional information, which is not always available in practice. For this reason, methods that use only a single hazed image as input have gained increasing attention. To obtain prior information from the hazed image, He et al. He2011 () proposed the dark channel prior (DCP), which was later improved in Guided (). However, the DCP-based method involves inefficient pixel-wise computation and over-correction when estimating the light transmission distribution. Gao Gao2014 () observed that the contrast and saturation of images can be increased by using the negative image to rectify the hazed image, which is faster than the DCP-based transmission map.

An alternative source of prior knowledge about haze is haze scenery modeling. A number of methods Fattal2008 (); Bayesian-defog (); Mutimbu (); Wang2014 () used the Markov random field (MRF) or its variants to estimate the depth information based on a statistical analysis of the spatial and contextual dependency of physical phenomena. While some of these methods, e.g., Wang2014 (), could produce improved results, they generally suffered from additional computational complexity. On the other hand, some MRF-based fast or real-time dehazing methods share the problem of distorted colors and obvious halos due to their model deficiency. These methods Tarel2009 (); Ancuti2011 (); Zhang2011 (); Kim2013 (); Zhu2015 () improved the processing speed at the cost of dehazing quality. Specifically, Kim et al. Kim2013 () estimated the transmission distribution by maximizing block-wise contrast while minimizing the information loss due to pixel-level over-saturation. The method was further refined by employing a hierarchical searching technique to detect sky regions. Unfortunately, the detected region can be wrong when bright objects are placed at a close distance. The algorithm of Ancuti et al. Ancuti2011 () significantly reduced the complexity of DCP by modifying the block-based approach to a pixel-wise one. Although this method has an impressively fast processing time, pixel-wise haze detection is not robust and often suffers from large recognition errors, similar to Kim2013 (). Consequently, the dehazing quality of these two methods is not always visually pleasing. Their time complexity advantage was later surpassed by Zhu et al. Zhu2015 (), who reported a faster processing speed. Zhao et al. Zhao-mof () provide a systematic comparison of state-of-the-art dehazing methods to date.

We propose a new regularized optimization method termed multilevel wavelet transform based optimization (MWTO) to solve the dehazing problem. Our approach balances image dehazing quality against processing speed. The formulation of the optimization problem is based on the optical image model, where the haze-free image and the light transmission distribution are unknown. Inspired by Yang2012 (), we resolve the non-convexity of the original problem by treating the bilinearly coupled terms as a single variable he16 (). To further improve computational efficiency, we perform the discrete Haar wavelet transform (DHWT) on the hazed image to derive a sub-band hazed image model with reduced dimension. Due to the low pass and smoothness characteristics of the light transmission distribution, a piecewise constant assumption on the light transmission distribution is introduced. Under this assumption, solving the dehazing problem on the reduced-dimension sub-band hazed image model is sufficient to solve the original dehazing problem. This property significantly reduces the theoretical computational complexity of our method.

Image dehazing with MWTO has several advantages over its peer methods. First, the formulation of the regularized optimization is a systematic and deterministic approach to a feasible solution for reconstructing the haze-free image given the optical image model. The regularized optimization problem is computationally efficient and can be readily implemented with standard procedures and software, such as CVX in Matlab and CVXOPT in Python. Second, the regularization terms of the optimization formulation provide flexibility in computing the dehazing solution by incorporating a priori knowledge of the image and the atmospheric light transmission distribution, which can guide the computation to a meaningful solution. Finally, the proposed sub-band decomposition procedure enables significant dimension reduction. These advantages result in high-quality dehazing in very limited time. Experiments suggest that our approach is even faster than linear models, such as that of Zhu2015 ().

The remainder of this paper is organized as follows. Section 2 provides a brief background of the optical model of haze and its application to the wavelet-decomposed images. Section 3 describes the convex transformation of the sub-band image model and the fusion of multilevel light transmission distribution estimations. In Section 4, we evaluate our method and compare to the state-of-the-art methods from many perspectives. Section 5 concludes the paper with future directions.

## 2 Hazed Image Modeling

### 2.1 The optical image model

According to the optical model of haze, the hazed image has two additive components. The first component represents the reflected light from the object surface, i.e., the clear image. The second component is the scattering transmission, i.e., the haze. We denote the observed digital image in the RGB color space as $\mathbf{I}$ and the haze-free image scene as $\mathbf{J}$, both of dimension $M \times N$ with $M$ and $N$ being positive integers, where $(i,j)$ are indices of the 2-dimensional index set $\Omega$. Let $c \in \{r, g, b\}$ be the color channel index; then $\mathbf{I}_c, \mathbf{J}_c \in \mathbb{R}^{M \times N}$ are matrices with non-negative entries denoted by $I_c(i,j)$ and $J_c(i,j)$, respectively. The optical model can be written as:

$$\mathbf{I}_c = \mathbf{J}_c \circ \mathbf{T} + a_c (\mathbf{1} - \mathbf{T}), \qquad (1)$$

where each $a_c$ is the atmospheric light constant of the corresponding color channel, $\mathbf{T}$ is the transmission distribution representing the portion of the light, not being scattered, that illuminates the camera sensors, $\circ$ denotes the elementwise multiplication operation^{1}^{1}1In the remainder of this paper, we follow this notation and use the symbols $\oplus$, $\ominus$, and $\oslash$ to represent elementwise addition, subtraction, and division respectively., and $\mathbf{1}$ is the matrix of appropriate dimension with all-one entries. The hazed scene image is the result of the attenuated image intensity through the scattering transmission path, together with the scattered atmospheric light.

In the following discussion, we assume the entries of $\mathbf{I}_c$, $\mathbf{J}_c$, and $\mathbf{T}$ are unitized, such that $\mathbf{0} \preceq \mathbf{I}_c \preceq \mathbf{1}$, $\mathbf{0} \preceq \mathbf{J}_c \preceq \mathbf{1}$, and $\mathbf{0} \preceq \mathbf{T} \preceq \mathbf{1}$, where $\mathbf{0}$ is the matrix with all-zero entries and $\preceq$ ($\succeq$) is the elementwise operation of $\leq$ ($\geq$) on matrices.

In practice, $\mathbf{I}$ is the only observable image, while $\mathbf{J}_c$, $\mathbf{T}$, and $a_c$ are unknown. The objective of image dehazing is to estimate $\mathbf{T}$, as well as $a_c$ and $\mathbf{J}_c$, so as to reconstruct the composed haze-free color image $\mathbf{J}$.
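To make the forward direction of this model concrete, the following NumPy sketch synthesizes a hazed image from a clear image, a transmission map, and airlight constants. The toy scene, transmission values, and airlight are illustrative placeholders, not data from the paper:

```python
import numpy as np

def add_haze(J, T, a):
    """Synthesize a hazed image I = J * T + a * (1 - T), channel by channel.

    J : (M, N, 3) haze-free image with entries in [0, 1]
    T : (M, N) light transmission distribution, entries in [0, 1]
    a : length-3 atmospheric light constants, one per RGB channel
    """
    J = np.asarray(J, dtype=float)
    T = np.asarray(T, dtype=float)
    # Broadcast T over the color channels and a over the spatial grid.
    I = J * T[..., None] + np.asarray(a) * (1.0 - T[..., None])
    return np.clip(I, 0.0, 1.0)

# A toy 2x2 scene: full transmission leaves J untouched,
# zero transmission yields the pure airlight color.
J = np.ones((2, 2, 3)) * 0.2
T = np.array([[1.0, 0.5], [0.5, 0.0]])
I = add_haze(J, T, a=[0.8, 0.8, 0.8])
```

Inverting this synthesis, given only `I`, is exactly the ill-posed problem the rest of the paper addresses.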

### 2.2 The sub-band image model

Given the assumption of the optical image model that the transmission rate is evenly distributed in the atmosphere, the frequency response of haze in images should be mainly distributed within the low-frequency sub-band. This hypothesis is verified in Fig. 1. Based on this low pass characteristic of the light transmission distribution, we apply the DHWT to rapidly decompose the image model into a bank of frequency sub-bands. The decomposed image model in the low-frequency band thus already contains the information of the light transmission distribution and can be used for its estimation. This sub-band image model with reduced dimension results in a significant reduction of the computational complexity of the dehazing process.

Specifically, when the atmosphere is homogeneous, the light transmission distribution can be represented by

$$\mathbf{T} = e^{-\beta \mathbf{D}}, \qquad (2)$$

where $\mathbf{D}$ is the distance map from the target object to the camera, $\beta$ is the scattering coefficient depending on the hazy medium, and the exponential is taken elementwise. Both $\mathbf{D}$ and $\beta$ are strictly positive, which implies that $\mathbf{T}$ is elementwise bounded by the all-zero and all-one matrices, i.e., $\mathbf{0} \prec \mathbf{T} \prec \mathbf{1}$. Sampled from the physical world, pixels in each geometric local patch share approximately the same depth value and constitute a region or object. Abrupt jumps of pixel depth values, in contrast, constitute edges of objects or regional boundaries. Therefore, it is reasonable to assume that the distance map $\mathbf{D}$ is piecewise constant for most images. Since $\mathbf{T}$ is a continuous map of $\mathbf{D}$ in (2), $\mathbf{T}$ is also piecewise constant. With this consideration, it is assumed that the $M \times N$ dimensional distribution $\mathbf{T}$ is $2$-patch piecewise constant in the sense that $T(2i-1, 2j-1) = T(2i-1, 2j) = T(2i, 2j-1) = T(2i, 2j)$, for $i = 1, \ldots, M/2$ and $j = 1, \ldots, N/2$. Using this assumption and the DHWT, the sub-band image model can be further specified as follows.
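The relation (2) and the piecewise constant behavior it inherits from the depth map can be illustrated with a small NumPy sketch; the depth values and scattering coefficient below are made-up placeholders:

```python
import numpy as np

# Transmission from a piecewise-constant depth map via T = exp(-beta * D).
# beta and the depth values are illustrative, not from the paper.
beta = 0.1
D = np.array([[10.0, 10.0, 30.0, 30.0],
              [10.0, 10.0, 30.0, 30.0]])
T = np.exp(-beta * D)  # a constant-depth patch yields a constant-transmission patch
```

Each constant-depth patch of `D` maps to a constant-transmission patch of `T`, with values strictly between 0 and 1.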

Let $\mathbf{W}$ be the 2-dimensional DHWT matrix of appropriate dimension, composed of the low pass averaging Haar transform matrix $\mathbf{W}_L$ and the high pass difference Haar transform matrix $\mathbf{W}_H$, that is,

$$\mathbf{W} = \begin{bmatrix} \mathbf{W}_L \\ \mathbf{W}_H \end{bmatrix}, \qquad (3)$$

$$\mathbf{W}_L = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 & & & & \\ & & 1 & 1 & & \\ & & & & \ddots & \end{bmatrix}, \qquad (4)$$

$$\mathbf{W}_H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & -1 & & & & \\ & & 1 & -1 & & \\ & & & & \ddots & \end{bmatrix}. \qquad (5)$$

Subsequently, the single-level DHWT of $\mathbf{I}_c$ and $\mathbf{J}_c$, $c \in \{r, g, b\}$, gives their transformed matrices with four dimension-$\frac{M}{2} \times \frac{N}{2}$ sub-band blocks, i.e.,

$$\mathbf{W} \mathbf{I}_c \mathbf{W}^{T} = \begin{bmatrix} \mathbf{I}_c^{A} & \mathbf{I}_c^{H} \\ \mathbf{I}_c^{V} & \mathbf{I}_c^{D} \end{bmatrix}, \qquad (6)$$

$$\mathbf{W} \mathbf{J}_c \mathbf{W}^{T} = \begin{bmatrix} \mathbf{J}_c^{A} & \mathbf{J}_c^{H} \\ \mathbf{J}_c^{V} & \mathbf{J}_c^{D} \end{bmatrix}, \qquad (7)$$

where the superscripts $A$, $H$, $V$, and $D$ indicate the low-frequency approximation, horizontal, vertical, and diagonal sub-band blocks of the wavelet transform, respectively. If the light transmission distribution $\mathbf{T}$ is $2$-patch piecewise constant, it can be verified that its single-level DHWT is

$$\mathbf{W} \mathbf{T} \mathbf{W}^{T} = \begin{bmatrix} 2\mathbf{T}_s & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}, \qquad (8)$$

where $\mathbf{T}_s$ is the low pass sub-band distribution of the DHWT of $\mathbf{T}$ satisfying $T_s(i,j) = T(2i-1, 2j-1)$,

with $(i,j) \in \Omega_s$ being the 2-dimensional index set of the low pass sub-band distribution $\mathbf{T}_s$.

Using (6), (7), and (8), we derive the DHWT-based optical image model as (9):

$$\begin{bmatrix} \mathbf{I}_c^{A} & \mathbf{I}_c^{H} \\ \mathbf{I}_c^{V} & \mathbf{I}_c^{D} \end{bmatrix} = \begin{bmatrix} \mathbf{J}_c^{A} \circ \mathbf{T}_s & \mathbf{J}_c^{H} \circ \mathbf{T}_s \\ \mathbf{J}_c^{V} \circ \mathbf{T}_s & \mathbf{J}_c^{D} \circ \mathbf{T}_s \end{bmatrix} + \begin{bmatrix} 2 a_c (\mathbf{1} - \mathbf{T}_s) & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}, \qquad (9)$$

where the low pass sub-band block of the matrix equation (9) presents a DHWT sub-band image model with a reduced dimension of $\frac{M}{2} \times \frac{N}{2}$ as follows:

$$\mathbf{I}_c^{A} = \mathbf{J}_c^{A} \circ \mathbf{T}_s + 2 a_c (\mathbf{1} - \mathbf{T}_s). \qquad (10)$$

Note that this low pass sub-band image model with reduced dimension has exactly the same form as the original optical image model in (1), except that the airlight doubles the color shift effect in the low-frequency sub-band. Meanwhile, the high-frequency sub-band blocks have the following hazed model:

$$\mathbf{I}_c^{H} = \mathbf{J}_c^{H} \circ \mathbf{T}_s, \qquad (11)$$

$$\mathbf{I}_c^{V} = \mathbf{J}_c^{V} \circ \mathbf{T}_s, \qquad (12)$$

$$\mathbf{I}_c^{D} = \mathbf{J}_c^{D} \circ \mathbf{T}_s, \qquad (13)$$

where the high-frequency coefficients are free from color shift and are only weakened by the down-sampled transmission function $\mathbf{T}_s$. Consequently, instead of recovering $\mathbf{J}_c$, the sub-band image model only requires recovering $\mathbf{J}_c^{A}$ and $\mathbf{T}_s$, such that the high-frequency coefficients can be easily derived by:

$$\mathbf{J}_c^{\Delta} = \mathbf{I}_c^{\Delta} \oslash \mathbf{T}_s, \quad \Delta \in \{H, V, D\}, \qquad (14)$$

where $\oslash$ denotes the elementwise division operation on matrices.
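These sub-band relations can be checked numerically with a small single-level Haar transform. The matrix construction below is a standard orthonormal Haar sketch, and the scene, airlight, and transmission values are synthetic:

```python
import numpy as np

def haar_matrix(n):
    """Single-level 1-D Haar matrix W = [W_L; W_H] for even n (orthonormal)."""
    h = n // 2
    WL = np.zeros((h, n))
    WH = np.zeros((h, n))
    for k in range(h):
        WL[k, 2 * k:2 * k + 2] = 1.0 / np.sqrt(2.0)   # low pass averaging rows
        WH[k, 2 * k] = 1.0 / np.sqrt(2.0)             # high pass difference rows
        WH[k, 2 * k + 1] = -1.0 / np.sqrt(2.0)
    return np.vstack([WL, WH])

rng = np.random.default_rng(0)
n, a = 4, 0.8
J = rng.uniform(0.0, 1.0, (n, n))          # one color channel of the clear scene
Ts = np.array([[0.9, 0.4], [0.6, 0.3]])    # downsampled transmission
T = np.kron(Ts, np.ones((2, 2)))           # 2-patch piecewise-constant T
I = J * T + a * (1.0 - T)                  # optical model, one channel

W = haar_matrix(n)
B = W @ I @ W.T                            # 2-D DHWT of the hazed channel
JA = (W @ J @ W.T)[:2, :2]                 # low pass block of the clear channel
```

Numerically, the low pass block of `B` equals `JA * Ts + 2 * a * (1 - Ts)`, and each high-frequency block of `B` is the corresponding clear-image block scaled elementwise by `Ts`, matching the structure of the sub-band model.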

Finally, $\mathbf{J}$ can be reconstructed using the inverse discrete Haar wavelet transform. Pairing equations (1) and (9), we have the general sub-band hazed image model when multiple levels of wavelet decomposition are recursively performed:

$$\mathbf{I}_c^{A_n} = \mathbf{J}_c^{A_n} \circ \mathbf{T}_n + 2^{n} a_c (\mathbf{1} - \mathbf{T}_n), \qquad (15)$$

where $n$ is the level of wavelet decomposition, $2^{n} a_c$ is the low-frequency sub-band's airlight, and $\mathbf{T}_n$ is the sub-band's light transmission distribution, which has the form of

$$T_n(i,j) = T\big(2^{n}(i-1)+1,\; 2^{n}(j-1)+1\big). \qquad (16)$$

Subsequently, the dehazing algorithm is only performed on the low-frequency sub-band block, and the high-frequency coefficients are recovered by dividing by the corresponding $\mathbf{T}_n$.
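The recursive decomposition can be sketched as repeatedly applying the single-level transform and keeping only the low-frequency block. This toy implementation assumes square images whose side length is divisible by $2^n$:

```python
import numpy as np

def haar_matrix(n):
    """Single-level 1-D Haar matrix W = [W_L; W_H] for even n (orthonormal)."""
    h = n // 2
    WL = np.zeros((h, n))
    WH = np.zeros((h, n))
    for k in range(h):
        WL[k, 2 * k:2 * k + 2] = 1.0 / np.sqrt(2.0)
        WH[k, 2 * k] = 1.0 / np.sqrt(2.0)
        WH[k, 2 * k + 1] = -1.0 / np.sqrt(2.0)
    return np.vstack([WL, WH])

def multilevel_lowpass(I, levels):
    """Recursively apply the 2-D Haar transform and keep only the
    low-frequency approximation block at each level."""
    A = np.asarray(I, dtype=float)
    for _ in range(levels):
        h = A.shape[0] // 2
        W = haar_matrix(A.shape[0])
        A = (W @ A @ W.T)[:h, :h]
    return A

# Each level of the unnormalized averaging doubles a constant image's value,
# so two levels on an all-ones 8x8 image give a 2x2 block of 4s.
A2 = multilevel_lowpass(np.ones((8, 8)), levels=2)
```

The factor-of-$2^n$ growth of the constant block mirrors the $2^n a_c$ airlight scaling in the multilevel model.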

## 3 Image Dehazing via Optimization

### 3.1 Atmospheric light estimation

Estimating the atmospheric light constants $a_c$, $c \in \{r, g, b\}$, is an important starting point for image dehazing. Many previous studies, e.g., Narasimhan2002 (); Tarel2009 (); He2011 (), estimated $a_c$ from the most haze-opaque region, though it may be affected by white objects in the scene wangw17 (). Others attempted to find this region by sophisticated techniques, e.g., hierarchical searching Kim2013 (). However, as aforementioned, the robustness of these techniques is questionable. For simplicity, we take the brightest pixel after filtering as the estimate of $a_c$ airlight (), i.e.,

$$\hat{a}_c = \max_{(i,j) \in \Omega} \; \min_{(p,q) \in w(i,j)} I_c(p,q), \qquad (17)$$

where $w(i,j)$ is a local window centered at $(i,j)$. Using this result, we assume that the estimates $\hat{a}_c$, $c \in \{r, g, b\}$, have been obtained and are used in the hazed image models for further estimation of $\mathbf{T}$ and $\mathbf{J}_c$.
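A plausible reading of this brightest-pixel-after-filtering estimate is a minimum filter followed by a global maximum, which suppresses isolated white speckles; the window radius below is an illustrative choice, not a value from the paper:

```python
import numpy as np

def estimate_airlight(Ic, w=1):
    """Estimate the airlight of one color channel as the maximum of a
    minimum-filtered image (square window of radius w).

    An isolated bright pixel is removed by the minimum filter, while a
    bright region larger than the window survives and sets the estimate.
    """
    M, N = Ic.shape
    filt = np.empty_like(Ic)
    for i in range(M):
        for j in range(N):
            filt[i, j] = Ic[max(0, i - w):i + w + 1,
                            max(0, j - w):j + w + 1].min()
    return filt.max()

# An isolated speckle (1.0) on a 0.5 background is ignored ...
Ic = np.full((5, 5), 0.5)
Ic[2, 2] = 1.0
a1 = estimate_airlight(Ic)   # 0.5

# ... while a 3x3 bright region drives the estimate.
Ic2 = np.full((7, 7), 0.5)
Ic2[2:5, 2:5] = 0.9
a2 = estimate_airlight(Ic2)  # 0.9
```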

### 3.2 Linear formulation of the sub-band image model

Recall the low dimensional DHWT-based sub-band model in (10), where the low sub-band blocks $\mathbf{I}_c^{A}$ and the estimated airlight $\hat{a}_c$, $c \in \{r, g, b\}$, are known. Joint estimation of $\mathbf{J}_c^{A}$ and $\mathbf{T}_s$ is still a bilinearly coupled problem. However, we observe that if we consider $\mathbf{J}_c^{A} \circ \mathbf{T}_s$ as a whole, the sub-band hazed image model can be converted to a convex, linear optimization problem. For this reason we introduce the substitution:

$$\mathbf{U}_c = \mathbf{J}_c^{A} \circ \mathbf{T}_s.$$

Then, the sub-band image model (10) can be written as

$$\mathbf{I}_c^{A} = \mathbf{U}_c + 2 \hat{a}_c (\mathbf{1} - \mathbf{T}_s), \qquad (18)$$

where the variable $\mathbf{I}_c^{A}$ and the estimated airlight $\hat{a}_c$ are known. The image dehazing problem is thus formulated as solving for $\mathbf{U}_c$ and $\mathbf{T}_s$ from the new sub-band model (18). The solutions for $\mathbf{U}_c$ and $\mathbf{T}_s$ are sufficient for further estimation of the wavelet transformed sub-band image blocks. To elaborate, given $\mathbf{U}_c$ and $\mathbf{T}_s$, it follows from (9), (14), and the substitution $\mathbf{U}_c = \mathbf{J}_c^{A} \circ \mathbf{T}_s$ that the wavelet transformed sub-band image blocks can be estimated by

$$\mathbf{J}_c^{A} = \mathbf{U}_c \oslash \mathbf{T}_s, \qquad \mathbf{J}_c^{\Delta} = \mathbf{I}_c^{\Delta} \oslash \mathbf{T}_s, \quad \Delta \in \{H, V, D\}. \qquad (19)$$

The reconstruction of the haze-free image matrices $\mathbf{J}_c$ can then be obtained by the inverse DHWT, combining equations (7) and (19).

### 3.3 Regularized optimization for dehazing

Note that although the linear formulation of the sub-band image model guarantees closed-form solutions to $\mathbf{U}_c$ and $\mathbf{J}_c^{A}$ once $\mathbf{T}_s$ is determined, there are an infinite number of solutions in a continuous space. In order to find feasible and meaningful solutions for $\mathbf{U}_c$ and $\mathbf{T}_s$, incorporating available knowledge about the image and haze conditions into appropriate constraints on $\mathbf{U}_c$ and $\mathbf{T}_s$ is an indispensable step. The constraints can regulate the solutions $\mathbf{U}_c$ and $\mathbf{T}_s$ to satisfactory values. Moreover, the original problem is not compromised, since the linear transformation (18) guarantees an optimal $\mathbf{J}_c^{A}$ whenever $(\mathbf{U}_c, \mathbf{T}_s)$ is an optimal solution to (18).

Based on the sub-band image model (18), which is linear in $\mathbf{U}_c$ and $\mathbf{T}_s$, the general formulation of the proposed regularized convex optimization for dehazing is written as:

$$\min_{\mathbf{U}_c, \mathbf{T}_s} \; \phi(\mathbf{U}_c, \mathbf{T}_s) \qquad (20)$$

$$\text{s.t.} \quad \mathbf{I}_c^{A} = \mathbf{U}_c + 2 \hat{a}_c (\mathbf{1} - \mathbf{T}_s), \quad \mathbf{0} \preceq \mathbf{T}_s \preceq \mathbf{1},$$

where $\phi(\cdot)$ denotes a convex regularization function of $\mathbf{U}_c$ and $\mathbf{T}_s$ to be selected.

A naïve selection of the regularization function can be the mean squared contrast function of $\mathbf{J}_c^{A}$ given by Kim2013 ():

$$C_{\mathrm{MSE}} = \sum_{(i,j) \in \Omega_s} \frac{\big(J_c^{A}(i,j) - \bar{J}_c^{A}\big)^2}{N_s} = \sum_{(i,j) \in \Omega_s} \frac{\big(I_c^{A}(i,j) - \bar{I}_c^{A}\big)^2}{T_s(i,j)^2 \, N_s}, \qquad (21)$$

where $\Omega_s$ denotes the domain of the 2-dimensional pixel index in the low-frequency sub-band block, $\bar{J}_c^{A}$ and $\bar{I}_c^{A}$ are the average pixel values of $\mathbf{J}_c^{A}$ and $\mathbf{I}_c^{A}$, respectively, and $N_s$ is the total pixel number. Since the haze effect reduces the degree of contrast in images, the general idea of the image dehazing process is to enhance the level of image contrast. A higher value of $C_{\mathrm{MSE}}$ indicates a higher contrast of the image. Equation (21) implies that the image's contrast is inversely proportional to the square of the transmission function $\mathbf{T}_s$. Therefore, reducing the value of $\mathbf{T}_s$ can improve the contrast of the image. To implement this consideration, a term $\|\mathbf{T}_s\|_F^2$ is introduced into the regularization function to penalize the values of $\mathbf{T}_s$, where $\|\cdot\|_F$ denotes the Frobenius norm higham2002 ().

However, the sub-band image model (18) used for the proposed regularized optimization is primarily an elementwise equation of image pixels. Straightforward elementwise operations based on this image model may not well represent and reconstruct the dependency and connectivity of image pixels with their adjacent neighborhoods. Therefore, it is important that the regularization function takes into account the dependency and connectivity properties of pixels. As described in Section 2.2, the depth map is piecewise smooth, which leads to the piecewise constant characteristics of $\mathbf{T}$. Moreover, the down-sampled $\mathbf{T}_s$ also inherits the piecewise constant characteristics. Within a reasonable range, such characteristics extend to the transformed sub-band distribution $\mathbf{T}_s$ at every decomposition level. To promote the low pass and piecewise constant characteristics of $\mathbf{T}_s$, a well-known total variation term $\|\mathbf{T}_s\|_{TV}$ TV () is introduced into the regularization function, where $\|\cdot\|_{TV}$ denotes the total variation norm.

With the above considerations, we specify the regularization function of our proposed method as:

$$\phi(\mathbf{U}_c, \mathbf{T}_s) = \|\mathbf{T}_s\|_F^2 + \lambda \|\mathbf{T}_s\|_{TV},$$

where $\lambda$ is a weighting parameter balancing the penalty weights of the regularization terms to guide the optimization solution to satisfactory values. A guideline for the value of $\lambda$ is to set it small when the haze is thick, emphasizing contrast enhancement; when the haze is thin, $\lambda$ should be relatively larger. As a result, the regularized convex optimization for image dehazing is formulated as:

$$\min_{\mathbf{U}_c, \mathbf{T}_s} \; \|\mathbf{T}_s\|_F^2 + \lambda \|\mathbf{T}_s\|_{TV} \qquad (22)$$

$$\text{s.t.} \quad \mathbf{I}_c^{A} = \mathbf{U}_c + 2 \hat{a}_c (\mathbf{1} - \mathbf{T}_s), \quad \mathbf{0} \preceq \mathbf{T}_s \preceq \mathbf{1}.$$

Model (22) is called single-level wavelet transform based optimization (SWTO), which is the essential ingredient of its multilevel extension.
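The paper solves the SWTO model with the Split Bregman iteration SplitBregman (). Purely as an illustration of the Frobenius-plus-TV regularizer's effect, the following sketch minimizes a smoothed version of the objective over a box constraint by projected gradient descent, omitting the data-fidelity constraint of the full model; all parameter values (lower bound, step size, weighting) are made up:

```python
import numpy as np

def tv_grad(T, eps=1e-6):
    """Gradient of a smoothed anisotropic total variation of T."""
    gx = np.diff(T, axis=1)
    gy = np.diff(T, axis=0)
    gx = gx / np.sqrt(gx ** 2 + eps)
    gy = gy / np.sqrt(gy ** 2 + eps)
    g = np.zeros_like(T)
    g[:, :-1] -= gx
    g[:, 1:] += gx
    g[:-1, :] -= gy
    g[1:, :] += gy
    return g

def solve_transmission(shape, lam=0.01, t_lo=0.3, eta=0.1, iters=500):
    """Projected gradient descent for min ||T||_F^2 + lam * TV(T)
    subject to the elementwise box constraint t_lo <= T <= 1."""
    T = np.ones(shape)
    for _ in range(iters):
        T = np.clip(T - eta * (2.0 * T + lam * tv_grad(T)), t_lo, 1.0)
    return T

T_hat = solve_transmission((4, 4))
```

With no data term, both regularizers favor a flat, small transmission map, so the iterate settles at the constant lower bound; in the full model the constraint in (22) would pull the solution away from that trivial point.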

### 3.4 Extension to multilevel sub-band models

Obviously, the conditions for the aforementioned SWTO model are directly applicable to the original hazed image model (1) and to the sub-band image models with higher-level wavelet decomposition. By extending the $2$-patch piecewise constant assumption on the light transmission distribution to a $2^n$-patch piecewise constant assumption at level $n$, the multilevel sub-band model provides further reduced model dimension and computational complexity. Let $\mathbf{W}_n$ be the $n$-th level DHWT matrix, obtained by recursively applying the single-level transform to the low-frequency sub-band block of the previous level.

Similarly to the single-level DHWT-based optical image model (9), we derive the multilevel DHWT-based optical image model in an iterated function form as:

$$\mathbf{I}_c^{A_n} = \mathbf{J}_c^{A_n} \circ \mathbf{T}_n + 2^{n} \hat{a}_c (\mathbf{1} - \mathbf{T}_n), \quad \mathbf{I}_c^{A_0} = \mathbf{I}_c, \; \mathbf{T}_0 = \mathbf{T}, \qquad (23)$$

where the component matrices $\mathbf{W}_L$ and $\mathbf{W}_H$ of each level's transform are the low pass averaging Haar transform matrix and the high pass difference Haar transform matrix, respectively. The process of recursively solving $\mathbf{T}_n$ and the original transmission distribution $\mathbf{T}$ from model (23) is illustrated in Fig. 2.

## 4 Experiments and Results

We conduct extensive experiments on a large collection of hazed images and simulated hazed images to evaluate the performance of our proposed MWTO algorithm. Among these, the well-known “Canyon”, “Desk”, “Hill”, “House”, “Lily”, “Mountain”, and “River” images are presented. Moreover, we compare with Kim2013 () on more images from its supplementary materials to show the quality improvement obtained by adding a new regularization term to the conventional function.

We implemented the state-of-the-art image dehazing methods, including Fattal’s method Fattal2008 (), Tarel’s method Tarel2009 (), He’s method Guided (), Meng’s method Meng (), Kim’s method Kim2013 (), Wang’s method Wang2014 (), Nishino’s method Bayesian-defog (), Zhu’s method Zhu2015 (), and Berman’s method Dana2016 (). Among these, the computations of the algorithms of Tarel et al., He et al., Meng et al., Zhu et al., and Berman et al. use the Matlab codes provided by the authors. Performance comparisons with the other algorithms use the results presented in the corresponding publications and on the authors’ websites. The implementation of the MWTO method uses the regularized convex optimization software of the Split Bregman iteration algorithm SplitBregman (). All of the algorithms were executed on an HP-Z420 workstation with a 3.30 GHz Intel E5-1660 CPU and parallel computing disabled. We set the multilevel parameter to $n = 2$ for simplicity, and the weighting parameter $\lambda$ is set according to the haze thickness, following the guideline in Section 3.3.

The comparison involves many perspectives, such as subjective evaluation, objective evaluation, and computational complexity analysis. More specifically, subjective evaluation is based on both simulated haze and natural haze datasets ihaze (); ohaze (); objective evaluation consists of quantitative visibility assessment Tarel2008 (), mean squared error (MSE) zwang04 (), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) wangw17 (); mngpp (); computational complexity analysis consists of a comparison of running times and a theoretical analysis of our model's scalability.

### 4.1 Subjective evaluation

In the classic problem setting of image dehazing, only the hazy image is provided and the “ground truth” image is hard to obtain. Figs. 3, 4, 5, 6, 7, and 9 compare the experimental results (dehazed images) of the proposed method (MWTO) with the algorithms presented in Tarel2009 (); Guided (); Meng (); Kim2013 (); Zhu2015 (); Dana2016 (). Additionally, we present the dehazing results of Fig. 8, where simulated haze is applied to the original image; Figs. 11 and 12, from two recently published datasets ihaze (); ohaze (); and Fig. 10, where a “ground-truth” photograph taken on a haze-free day is available.

It can be observed that our proposed method usually produces a more faithful and balanced contrast and better color reconstruction over the whole image. In the “Canyon” image, there are fringe artifacts in the result of Tarel’s algorithm Tarel2009 (). He’s method Guided () produces the most oversaturated effect, because it tends to underestimate the transmission function. Meng’s method Meng () attempts to improve He’s method Guided () by including a boundary constraint for restoring bright colors and limiting over-saturation. However, this constraint does not work well in areas such as sky and clouds. Berman’s method Dana2016 () seems better than the results of Guided () and Meng (), though it suffers from a similar over-saturation effect.

This over-saturation problem is common and also present in the “Desk” image and the “Sofa” image. He’s method Guided (), Meng’s method Meng (), and Berman’s method Dana2016 () show over-correction of color contrast, especially on the objects in the lower left corner of the “Desk” image. Halos around the pink hexagonal object in the middle are salient, which may be caused by defects of the DCP technique Guided () and the haze-line technique Dana2016 (), since pixel-wise estimation does not consider the continuity of adjacent pixels. Zhu’s method Zhu2015 () does not have the problem of over-saturation and halos; however, its dehazing effect is barely perceivable. In contrast, the result of MWTO is satisfying for the “Desk” image in terms of hue and detail reconstruction. In the “Sofa” image, He’s method Guided (), Meng’s method Meng (), Ancuti’s method airlight (), and Fattal’s method Fattal2008 () all suffer from an overcorrected, darkened effect. Only Berman’s method Dana2016 () produces a result close to the ground truth, but it shows a surrealistic outlook. Our proposed method does not completely remove the haze, but still preserves the correct saturation and details.

In the “Hill” image, Tarel’s algorithm Tarel2009 () again introduces heavy color distortion and undesired artifacts. He’s method Guided () and Wang’s method Wang2014 () cause over-saturation and loss of detailed information (see the over-exposed and whitened clouds in Wang’s method Wang2014 ()). Berman’s method Dana2016 () produces nice colors; however, the over-sharpening around the left hill and between gaps in the clouds makes it look visually unreal. Zhu’s method Zhu2015 () is unable to thoroughly remove the haze, especially over the central forest. In contrast, our result is pleasing in terms of visual effect and local details. In the “House” image, many methods generate halos or over-saturation around the left tree branch Fattal2008 (); Guided (); Kim2013 (); Meng (); Zhu2015 (); Dana2016 (), while our proposed method offers a good visual effect with color enhanced and information preserved. Fig. 7 shows that DCP-based techniques can severely darken the image when there is heavy haze Tarel2009 (); Guided (); Fattal2008 (). In this situation, Zhu’s method Zhu2015 () again fails to sufficiently remove the haze, and Meng’s method Meng () and Berman’s method Dana2016 () cause over-saturation in the sky area. In comparison with those methods, our proposed method presents more natural colors and a satisfying contrast.

In addition to natural hazed images, we further investigate simulated haze, which can be very useful for understanding haze formation and for virtual scene manipulation. Fig. 8 highlights the advantage of having an optical image model when the haze is simulated. We notice that Zhu’s method Zhu2015 () cannot completely remove the haze. Meng’s method Meng () brings about artifacts and a slight color distortion. He’s method Guided (), Meng’s method Meng (), and Berman’s method Dana2016 () all cast a shadow on the lower petals, which can be avoided if the transmission map is estimated with a physically sound model. Our method can effectively solve the inverse problem of haze formation and produce a close estimate of the haze-free image.

The color distortion problem can sometimes be severe, especially in contrast enhancement based methods, because bright colors are more visually perceivable. For instance, in Fig. 9, the hue of the ground and horses is completely changed by Kim’s method Kim2013 (). In Fig. 12, Fattal’s method Fattal2008 () and Berman’s method Dana2016 () significantly change the colors as well. In Fig. 10, given the haze-free image for comparison, Kim’s method Kim2013 (), He’s method Guided (), and Berman’s method Dana2016 () all produce rather yellowish, distorted results. Zhu’s method Zhu2015 () tackles the color distortion problem by using the color attenuation prior. Recently, Lian’s method Lian-18 () considers the detail mapping separately from the global intensity mapping of haze. These two methods do not suffer from the color problem, but in terms of perceived naturalness, our model is still among the best of the compared methods (see Fig. 10).

### 4.2 Objective evaluation

Unlike subjective evaluation, which is manually conducted and can introduce bias, we also quantitatively evaluate the quality of the dehazed image. One such method is to set up a reference image by optimizing all the parameters of the dehazing algorithm and to calculate the distance between the dehazed image and the reference image. We consider the MSE as the distance measure, calculated as in equation (24), where $M$ and $N$ are the image dimensions, $\mathbf{R}$ is the reference image, and $\mathbf{D}$ is the dehazed image. Other distance measures, such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are also mentioned in the literature wangw17 (). It is worth mentioning that these metrics only serve as references and are often not well aligned with the visual effects. Another well-known quantitative evaluation is Hautiere's method Tarel2008 (), which defines three measures for visibility: the visible edge ratio $e$, the percentage of saturated pixels in all color channels $\sigma$, and the visible edge normalized gradient $\bar{r}$, as in (25).

$$\mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big(R(i,j) - D(i,j)\big)^2 \qquad (24)$$

The numbers of visible edges in the original image and the dehazed image are denoted by $n_0$ and $n_r$, respectively; $r_i$ denotes the normalized gradient at pixel $P_i$ on the visible edges of the dehazed image; and $n_s$ is the number of saturated (black or white) pixels.

$$e = \frac{n_r - n_0}{n_0}, \qquad \bar{r} = \exp\Big(\frac{1}{n_r} \sum_{P_i} \log r_i\Big), \qquad \sigma = \frac{n_s}{MN} \qquad (25)$$
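The MSE of (24) and the related PSNR can be computed directly; this is a generic sketch, not the authors' evaluation code:

```python
import numpy as np

def mse(R, D):
    """Mean squared error between reference R and dehazed D."""
    R = np.asarray(R, dtype=float)
    D = np.asarray(D, dtype=float)
    return np.mean((R - D) ** 2)

def psnr(R, D, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    e = mse(R, D)
    return np.inf if e == 0 else 10.0 * np.log10(peak ** 2 / e)

# A uniform 0.5 error on a [0, 1] image:
err = mse(np.zeros((2, 2)), np.full((2, 2), 0.5))   # 0.25
snr = psnr(np.zeros((2, 2)), np.full((2, 2), 0.5))  # 10*log10(4) ≈ 6.02 dB
```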

Table 1 reports the average Hautiere visibility descriptors as well as the MSE calculated from the “Lily” image mentioned in Section 4. Our method outperforms all the others, including He’s method Guided (), Kim’s method Kim2013 (), Meng’s method Meng (), Zhu’s method Zhu2015 (), and Berman’s method Dana2016 (), in terms of MSE, and achieves the second best results in terms of $e$, $\sigma$, and $\bar{r}$. This observation confirms the good balance and stable performance of MWTO. It is also important to note that these results are obtained with a much faster computation speed.

We also report the average PSNR and SSIM obtained across the I-HAZE ihaze () and O-HAZE ohaze () datasets in Table 2. We find that our method works better on indoor scenes than on outdoor scenes.

| Metric | He’s Guided () | Kim’s Kim2013 () | Meng’s Meng () |
|---|---|---|---|
| $e$ | 1.2 | 0.7 | 2 |
| $\sigma$ | 0.7 | 0.8 | 0.06 |
| $\bar{r}$ | 1.4 | 1.3 | 2 |
| MSE | 0.51 | 0.44 | 0.44 |

| Metric | Zhu’s Zhu2015 () | Berman’s Dana2016 () | Our method |
|---|---|---|---|
| $e$ | 1.2 | 1.8 | 1.4 |
| $\sigma$ | 0 | 0.6 | 0.05 |
| $\bar{r}$ | 0.8 | 1.4 | 1.6 |
| MSE | 0.33 | 0.34 | 0.31 |

| Metrics | He’s Guided () | Meng’s Meng () | Fattal’s Fattal2008 () | Ancuti’s airlight () | Berman’s Dana2016 () | Our method |
|---|---|---|---|---|---|---|
| PSNR-indoor | 15.285 | 14.574 | 12.421 | 16.632 | 15.942 | 16.619 |
| SSIM-indoor | 0.711 | 0.750 | 0.574 | 0.770 | 0.767 | 0.619 |
| PSNR-outdoor | 16.586 | 17.443 | 15.630 | 16.855 | 16.610 | 15.347 |
| SSIM-outdoor | 0.735 | 0.753 | 0.707 | 0.747 | 0.750 | 0.392 |

| Algorithm | Canyon (707×565) | Desk (1200×956) | Hill (576×768) |
|---|---|---|---|
| He’s Guided () | 14.5 | 24.4 | 17.2 |
| Meng’s Meng () | 3.8 | 9.4 | 3.9 |
| Zhu’s Zhu2015 () | 2.8 | 4.1 | 3.8 |
| Berman’s Dana2016 () | 2.9 | 6.4 | 3.0 |
| Our method | 0.8 | 1.6 | 0.8 |

| Algorithm | House (440×448) | Lily (640×480) | River (440×260) |
|---|---|---|---|
| He’s Guided () | 7.8 | 12.0 | 4.3 |
| Meng’s Meng () | 2.8 | 3.5 | 2.2 |
| Zhu’s Zhu2015 () | 2.1 | 3.0 | 1.8 |
| Berman’s Dana2016 () | 1.6 | 2.1 | 1.1 |
| Our method | 0.5 | 0.7 | 0.4 |

### 4.3 Computational complexity

Fast processing speed is a key advantage of MWTO. Unlike most of the compared methods, which use filters to accelerate processing, our method leverages the DHWT, which preserves the maximum information and significantly speeds up the dehazing process. We conducted experiments on 10 images and observed consistent results; Table 3 reports 6 of them due to space limits. These algorithms were tested on the same processor using the same MATLAB configuration. He’s method Guided (), though it uses a guided filter instead of the soft-matting technique of its predecessor He2011 (), does not seem favorable in terms of speed. Meng’s method Meng () and Zhu’s method Zhu2015 () are faster, which may be the result of employing a linear model. Berman’s method Dana2016 () further improves the processing speed on average. However, our method with two-level DHWT is already faster than Berman’s method Dana2016 () and requires only a fraction of its processing time on average. By applying a larger multilevel parameter, our method can be further accelerated with a minimal trade-off in quality. We have successfully deployed this algorithm for real-time processing of videos with higher computational power.

We observe that the processing time increases monotonically with image size. To compare model scalability, we investigate the slopes shown in Fig. 13. The running time of Cai’s learning-based method cai16 () is interpolated from values reported in the literature, while the running times of the other methods are obtained through our experiments. Zhu’s method Zhu2015 () and Berman’s method Dana2016 () exhibit approximately linear complexity in the number of pixels, while the other methods, especially He’s Guided (), have much higher computational complexity. Our model has quasi-linear complexity in the image size, i.e., O(N log N) in the number of pixels N. We attribute this exceptional speed to the significant dimension-reduction effect of the 2-dimensional DHWT, which is highly efficient while maintaining satisfying dehazing quality.
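The slope analysis behind Fig. 13 can be sketched as a log-log regression of running time against pixel count. The snippet below (our illustration, using the six timings for our method from Table 3; with so few coarse measurements the estimate is necessarily rough) fits the complexity exponent with `numpy.polyfit`:

```python
import numpy as np

# (width * height, seconds) pairs for our method, taken from Table 3.
sizes = np.array([707*565, 1200*956, 576*768, 440*448, 640*480, 440*260])
times = np.array([0.8, 1.6, 0.8, 0.5, 0.7, 0.4])

# Fit log(time) = alpha * log(pixels) + c; alpha is the slope in Fig. 13.
# A slope near 1 indicates (quasi-)linear scaling in the number of pixels.
alpha, c = np.polyfit(np.log(sizes), np.log(times), 1)
print(f"estimated complexity exponent: {alpha:.2f}")
```

The same fit applied to a competing method’s timings gives its slope for direct comparison.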

## 5 Conclusion

Convex optimization based on the optical model of the image can efficiently perform high-quality image dehazing. The method has several advantages: it is consistent with the physical theory of haze formation, and the inverse problem can be solved directly. For these reasons, it suffers less from over-saturation and halos, which are common in other dehazing algorithms. In this paper, we introduce a multilevel wavelet transform to reduce the dimension of the original image and perform optimization-based dehazing on the low-frequency sub-band. Finally, the dehazed image is recovered using the inverse discrete wavelet transform. This strategy produces dehazing results comparable to state-of-the-art methods, while the computational complexity of the proposed method is reduced to quasi-linear in the image size. Experiments from various aspects support these findings. In future work, we will study new loss terms and parameter learning, such as the optimal level of the wavelet transform and the regularizer weighting.

## 6 Acknowledgement

The authors thank Jean-Philippe Tarel, Kaiming He, Qingsong Zhu, Gaofeng Meng, Dana Berman, Yuankai Wang, Raanan Fattal, and Ko Nishino for providing their algorithms and experimental results.

## References

- (1) R. Azami, D. Mould, Detail and color enhancement in photo stylization, in: Proceedings of the ACM symposium on Computational Aesthetics, 2017.
- (2) F. Xing, E. Cambria, W.-B. Huang, Y. Xu, Weakly supervised semantic segmentation with superpixel embedding, in: IEEE International Conference on Image Processing (ICIP), 2016, pp. 1269–1273.
- (3) C. Sakaridis, D. Dai, S. Hecker, L. V. Gool, Model adaptation with synthetic and real data for semantic dense foggy scene understanding, in: The European Conference on Computer Vision (ECCV), 2018, pp. 687–704.
- (4) K. Goyal, J. Singhai, Review of background subtraction methods using gaussian mixture model for video surveillance systems, Artificial Intelligence Review 50 (2) (2018) 241–259.
- (5) D. Singh, V. Kumar, A comprehensive review of computational dehazing techniques, Archives of Computational Methods in Engineering (2018) 1–19. doi:10.1007/s11831-018-9294-z.
- (6) W. Wang, F. Chang, T. Ji, X. Wu, A fast single-image dehazing method based on a physical model and gray projection, IEEE Access 6 (2018) 5641–5653.
- (7) S. Shwartz, E. Namer, Y. Y. Schechner, Blind haze separation, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2, 2006, pp. 1984–1991.
- (8) Y. Y. Schechner, Y. Averbuch, Regularized image recovery in scattering media, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 29 (9) (2007) 1655–1660.
- (9) N. Hautiere, J. Tarel, D. Aubert, Toward fog-free in-vehicle vision systems through contrast restoration, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
- (10) J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, D. Lischinski, Deep photo: Model-based photograph enhancement and viewing, in: ACM Transactions on Graphics, Vol. 27, 2008, pp. 1–10.
- (11) Z. Luan, Y. Shang, X. Zhou, Z. Shao, G. Guo, X. Liu, Fast single image dehazing based on a regression model, Neurocomputing 245 (2017) 10–22.
- (12) K. He, X. Tang, Single image haze removal using dark channel prior, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 33 (12) (2011) 2341–2353.
- (13) K. He, J. Sun, X. Tang, Guided image filtering, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 35 (6) (2012) 1397–1409.
- (14) Y. Gao, H.-M. Hu, S. Wang, B. Li, A fast image dehazing algorithm based on negative correction, Signal Processing 103 (2014) 380–398.
- (15) R. Fattal, Single image dehazing, ACM Transactions on Graphics 27 (3) (2008) 1–9.
- (16) K. Nishino, L. Kratz, S. Lombardi, Bayesian defogging, International Journal of Computer Vision 98 (3) (2012) 263–278.
- (17) L. Mutimbu, A. Robles-Kelly, A relaxed factorial markov random field for colour and depth estimation from a single foggy image, in: IEEE International Conference on Image Processing (ICIP), 2013, pp. 355–359.
- (18) Y.-K. Wang, C.-T. Fan, Single image defogging by multiscale depth fusion, IEEE Transactions on Image Processing 23 (11) (2014) 4826–4837.
- (19) J.-P. Tarel, N. Hautière, Fast visibility restoration from a single color or gray level image, in: IEEE International Conference on Computer Vision (ICCV), 2009, pp. 2201–2208.
- (20) C. O. Ancuti, C. Ancuti, C. Hermans, P. Bekaert, A fast semi-inverse approach to detect and remove the haze from a single image, in: Asian Conference on Computer Vision (ACCV), 2010, pp. 501–514.
- (21) J. Zhang, L. Li, Y. Zhang, G. Yang, X. Cao, J. Sun, Video dehazing with spatial and temporal coherence, The Visual Computer 27 (6) (2011) 749–757.
- (22) J.-H. Kim, W.-D. Jang, J.-Y. Sim, C.-S. Kim, Optimized contrast enhancement for real-time image and video dehazing, Journal of Visual Communication and Image Representation 24 (3) (2013) 410–425.
- (23) Q. Zhu, J. Mai, L. Shao, A fast single image haze removal algorithm using color attenuation prior, IEEE Transactions on Image Processing 24 (11) (2015) 3522–3533.
- (24) D. Zhao, L. Xu, Y. Yan, J. Chen, L.-Y. Duan, Learning intensity and detail mapping parameters for dehazing, Signal Processing: Image Communication 74 (2019) 253–265.
- (25) Z. Yang, C. Zhang, L. Xie, Robustly stable signal recovery in compressed sensing with structured matrix perturbation, IEEE Transactions on Signal Processing 60 (9) (2012) 4658–4671.
- (26) J. He, C. Zhang, R. Yang, K. Zhu, Convex optimization for fast image dehazing, in: International Conference on Image Processing (ICIP), 2016, pp. 2246–2250.
- (27) J.-P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui, D. Gruyer, Vision enhancement in homogeneous and heterogeneous fog, IEEE Intelligent Transportation Systems Magazine 4 (2) (2012) 6–20.
- (28) S. G. Narasimhan, S. K. Nayar, Vision and atmosphere, International Journal of Computer Vision 48 (3) (2002) 233–254.
- (29) W. Wang, X. Yuan, X. Wu, Y. Liu, Fast image dehazing method based on linear transformation, IEEE Transactions on Multimedia 19 (6) (2017) 1142–1155.
- (30) C. Ancuti, C. O. Ancuti, C. D. Vleeschouwer, A. C. Bovik, Night-time dehazing by fusion, in: IEEE International Conference on Image Processing (ICIP), 2016, pp. 2256–2260.
- (31) N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM: Society for Industrial and Applied Mathematics, 2002.
- (32) L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena 60 (1) (1992) 259–268.
- (33) G. Meng, Y. Wang, J. Duan, S. Xiang, C. Pan, Efficient image dehazing with boundary constraint and contextual regularization, in: IEEE International Conference on Computer Vision (ICCV), 2013, pp. 617–624.
- (34) D. Berman, T. Treibitz, S. Avidan, Non-local image dehazing, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1674–1682.
- (35) T. Goldstein, S. Osher, The split bregman method for l1-regularized problems, SIAM Journal on Imaging Sciences 2 (2) (2009) 323–343.
- (36) C. O. Ancuti, C. Ancuti, R. Timofte, C. D. Vleeschouwer, I-HAZE: a dehazing benchmark with real hazy and haze-free indoor images (2018). arXiv:1804.05091.
- (37) C. O. Ancuti, C. Ancuti, R. Timofte, C. D. Vleeschouwer, O-HAZE: a dehazing benchmark with real hazy and haze-free outdoor images (2018). arXiv:1804.05101.
- (38) N. Hautiere, J.-P. Tarel, J. Lavenant, D. Aubert, Blind contrast enhancement assessment by gradient ratioing at visible edges, Image Analysis & Stereology Journal 27 (2008) 87–95.
- (39) Z. Wang, A. Bovik, H. Sheikh, E. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing 13 (4) (2004) 600–612.
- (40) D. Singh, V. Kumar, Image dehazing using moore neighborhood-based gradient profile prior, Signal Processing: Image Communication 70 (2019) 131–144.
- (41) X. Lian, Y. Pang, A. Yang, Learning intensity and detail mapping parameters for dehazing, Multimedia Tools and Applications 77 (12) (2018) 15695–15720.
- (42) B. Cai, X. Xu, K. Jia, C. Qing, D. Tao, Dehazenet: An end-to-end system for single image haze removal, IEEE Transactions on Image Processing 25 (11) (2016) 5187–5198.