LEDNet: Deep Illumination-aware Dehazing with Low-light and Detail Enhancement

Guisik Kim and Junseok Kwon
School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea
specialre@naver.com   jskwon@cau.ac.kr
  
Abstract

We present a novel dehazing and low-light enhancement method based on an illumination map that is accurately estimated by a convolutional neural network (CNN). In this paper, the illumination map is used as a component for three different tasks, namely, atmospheric light estimation, transmission map estimation, and low-light enhancement. To train CNNs for dehazing and low-light enhancement simultaneously based on the retinex theory, we synthesize numerous low-light and hazy images from normal hazy images in the FADE dataset. In addition, we further improve the network using detail enhancement. Experimental results demonstrate that our method surpasses recent state-of-the-art algorithms quantitatively and qualitatively. In particular, our haze-free images present vivid colors and enhanced visibility without a halo effect or color distortion.


1 Introduction

Dehazing is a fundamental pre-processing step for improving the performance of numerous computer vision applications in outdoor environments (e.g., object tracking and detection for autonomous driving, and scene understanding), because most computer vision algorithms are extremely sensitive to illumination and weather conditions. Dehazing algorithms have shown remarkable improvements owing to recent developments in deep learning. Despite the great success of deep learning approaches, dehazing still has many challenges, for example, dehazing in low-light environments. Traditional dehazing algorithms can be categorized into two groups: those that use additional information, e.g., the image depth or a 3D model from multiple sensors and multiple images [27, 36, 40], and those that use only a single image [19, 12]. Convolutional neural network (CNN)-based dehazing approaches have recently emerged and successfully obtained highly accurate results [38, 6, 29]. Low-light enhancement is also an important task, which can be used together with dehazing as a pre-processing step to improve the performance of high-level vision algorithms. For example, a multi-scale retinex method [37] enhances color images based on the retinex theory, and LightenNet [30], MSRNet [42], and LLNet [32] have been developed to enhance illumination using deep learning.

Figure 1: Combination of two baselines, MSCNN and LIME. (a) Input, (b) dehazing + low-light enhancement (post-processing), (c) low-light enhancement (pre-processing) + dehazing, (d) our LEDNet.

While dehazing and low-light enhancement have gained a large amount of attention, their deep-learning-based approaches have difficulty gathering training data for supervised learning. For example, for dehazing problems, it is very challenging to obtain pairs of hazy and haze-free images for the same scenes simultaneously. In addition, it is difficult to obtain the ground-truth of hazy images that contain low-light pixels at the same time. To alleviate this problem, we synthesize training data by generating ground truth (i.e., weakly illuminated hazy) images given hazy images. In this paper, we deal with a composite of hazy and low-light images, and conduct dehazing and low-light enhancement in a unified framework. Images of outdoor scenes frequently contain simultaneous hazy regions and low-light areas. Thus, our unified approach is more capable of handling real-world images. In addition, traditional algorithms tend to reduce the images’ brightness in the dehazing process, which can even degrade image quality. Therefore, dehazing and low-light enhancement tasks should be considered at the same time, as in our approach. As a unified framework, we present a dehazing algorithm based on the illumination map. The illumination map estimated by the proposed deep network is utilized to obtain the atmospheric light and transmission for dehazing, and to adjust the image brightness for both dehazing and low-light enhancement.

Fig.1 shows the motivation of our method. Conventional dehazing methods using low-light enhancement as pre-processing or post-processing induce over-saturation problems in bright areas (e.g., the tower or the river) and cannot solve dehazing and low-light enhancement problems simultaneously, as shown in Figs.1(b) and (c). In contrast, our LEDNet can perform dehazing and low-light enhancement concurrently, and can alleviate over-saturation problems.

The contributions of our method are threefold.

• We present a unified framework, LEDNet (Low-light Enhancement and Dehazing with Detail Network), for dehazing and low-light enhancement. In our method, dehazing is performed based on the illumination map.

• The illumination map is recycled for various uses, namely, atmospheric light estimation, transmission estimation, and low-light enhancement. It prevents dehazing from darkening images and from saturating bright areas, while simultaneously enhancing low-light regions.

• We present a novel dataset based on the retinex theory, which enables us to deal with haze and low-light simultaneously during training.

Fig.2 shows the procedure of our method.

Figure 2: Overall procedure of the proposed method. Our method estimates the illumination map with a novel deep neural network. The estimated illumination map is used for both dehazing and low-light enhancement. Then, we enhance details using the proposed detail network. Please refer to the supplementary material for more details on the proposed architecture.

2 Related works

Dehazing: Dehazing algorithms have progressed considerably in the past few years. Recently, single-image dehazing has gained popularity compared to methods that use additional information [27, 36, 40]. The latter have the advantage of exploiting information that cannot be acquired from the image itself, but they have not been actively studied because the necessary additional information is frequently unavailable. Dehazing algorithms based only on a single image [19, 12, 13], most notably the dark channel prior [19], show impressive dehazing results. The dark channel has also been used as a haze-relevant feature [44, 24]. Tang et al. proposed a random-forest-based regression model for dehazing using multiple haze-relevant features [44]. Owing to the rapid development of deep learning, several CNN-based dehazing methods have been proposed; nevertheless, many non-deep-learning algorithms are still being studied. For example, Color-line [13] produced haze-free images with very natural colors in terms of color restoration. Haze-line [4] also produced accurate haze-removal results. Kim et al. presented an efficient method of estimating atmospheric light using quad-tree searching [26]. Chen et al. reduced various artifacts that appear after dehazing via gradient residual minimization [8]. Ancuti et al. proposed a semi-inverse method [1] for haze detection and a multi-scale fusion method using multiple features [2]. He et al. solved dehazing with convex optimization to find the optimal solution in the wavelet domain [18]. Kim and Kwon used the illumination as pixel-wise atmospheric light according to the retinex theory [24]. Zhu et al. proposed a new prior called the color attenuation prior [48]. Meng et al. estimated the transmission using boundary constraints and contextual regularization [33]. Choi et al. presented a haze-measurement method for dehazing [9]. Cai et al. developed DehazeNet [6], an end-to-end dehazing method with regression networks. Ren et al. proposed a coarse-to-fine approach for dehazing by presenting multi-scale convolutional neural networks (MSCNN) [38]. In AOD-Net [29], the atmospheric scattering model was reformulated, and an end-to-end trainable model for both dehazing and object detection was proposed. Ling et al. generated synthetic hazy images by considering the light wavelength of each color channel [31]. Ren et al. [39] proposed a fusion-based method based on three inputs from white balancing, contrast enhancing, and gamma correction. Zhang and Patel [47] removed hazy regions using dense connections and pyramid modules.

Contrary to these methods, our method performs dehazing and low-light enhancement in a unified framework based on the illumination map.

Low-light Enhancement: Retinex-based methods [41] enhance images by dividing them into illumination and reflectance components. High dynamic range algorithms (e.g., multi-exposure image fusion [3] and a single-image contrast enhancer [7]) improve images by using multiple exposures. Dong et al. observed that inverted low-light images look similar to hazy images and presented a low-light enhancement method using the haze model [11]. Recently, deep-learning-based methods, e.g., MSRNet [42] and LLNet [32], have made significant improvements in low-light image enhancement. LightenNet [30] directly obtains the illumination map using a CNN architecture, whereas the method in [41] indirectly estimates the illumination using gradients and color constancy. Most traditional algorithms that use the illumination map typically aim to estimate color constancy [22], perform image rendering in outdoor environments [21], or enhance low-light images [16]. Contrary to these methods, our method utilizes the illumination map for various tasks, namely, atmospheric light estimation, transmission estimation, and low-light enhancement.

3 Proposed Method

The haze model is defined by Koschmieder [28] as follows:

$I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr),$   (1)

where $I(x)$, $J(x)$, $A$, and $t(x)$ indicate a hazy image, a haze-free image, the atmospheric light, and the transmission, respectively. The haze-free image is then obtained by solving (1) for $J(x)$:

$J(x) = \dfrac{I(x) - A}{t(x)} + A.$   (2)
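As a concrete illustration, the NumPy sketch below simply inverts the haze model as in (2); the image, atmospheric light, and transmission values are placeholders rather than outputs of the proposed network.

```python
# A minimal sketch of Eq. (2): recover J(x) from I(x) given A and t(x).
# Inputs here are illustrative placeholders only.
import numpy as np

def recover_scene_radiance(hazy, airlight, transmission, t_min=0.1):
    """Invert the Koschmieder haze model: J = (I - A) / t + A."""
    t = np.clip(transmission, t_min, 1.0)[..., None]     # avoid division by ~0
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)

# toy usage: a random "hazy" image with values in [0, 1]
hazy = np.random.rand(64, 64, 3)
transmission = np.full((64, 64), 0.6)
restored = recover_scene_radiance(hazy, airlight=0.9, transmission=transmission)
```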

We observe that many low-level vision tasks are interrelated; hence, recent super resolution [10], retinex [30], and dehazing [6, 31] algorithms have adopted deep neural networks with similar architectures. These deep neural networks typically consist of feature extraction, non-linear mapping, and reconstruction layers. In this context, our method adopts one of the weak illumination-enhancement approaches, LightenNet [30], for dehazing, because it has achieved great success in solving low-level vision problems [10, 30, 6, 31]. Our method produces the illumination map based on residual blocks with side convolution and then refines the map into elements for three different tasks:

• The illumination map is transformed into pixel-wise atmospheric light through an identity mapping, because the illumination itself is the atmospheric light. Then, we add the global atmospheric light to the pixel-wise atmospheric light, which contains relatively local information.

• The illumination map is used for transmission estimation, because the illumination can be considered the degree of light reaching an observer. Inverted low-light images have properties similar to hazy images, as experimentally verified in [11, 16]; for example, LIME [16] utilizes the illumination of low-light images as the transmission. We observe that inverted hazy regions look like illumination images. Thus, we obtain the inverted transmission map from the illumination map.

• The illumination map is used for low-light enhancement [30, 16]. Although the illumination map already serves the two tasks above, it can be re-used for low-light enhancement. The illumination improves weakly illuminated regions while maintaining clearly illuminated areas; therefore, it does not cause saturation or color distortion.

3.1 Dehazing Based on the Retinex Theory

Dehazing algorithms and the retinex theory are interrelated. Dong et al. [11] used the haze model for low-light enhancement, because the inverted dark regions of low-light images typically have high brightness values, like hazy regions. Galdran et al. [14] proposed a way of solving the dehazing problem directly using retinex. LIME [16] also inverts low-light images to estimate the illumination. In contrast to these methods, our method inverts a hazy image $I(x)$ into $R(x) = 1 - I(x)$. Then, the inverted haze-free image is obtained using (2):

$J_R(x) = \dfrac{R(x) - A}{t(x)} + A,$   (3)

where $R(x)$ is the inverted hazy image and $J_R(x)$ is the corresponding inverted haze-free image. The haze-free image can be recovered by $J(x) = 1 - J_R(x)$. Given $I(x)$, our dehazing method aims to find the atmospheric light $\hat{A}(x)$ and the transmission $t(x)$ based on the illumination map $T(x)$, in which $T(x)$, $\hat{A}(x)$, and $t(x)$ are obtained by the illumination map estimation step, the atmospheric light estimation step, and the transmission estimation step, respectively. Then, our low-light enhancement method improves the image brightness in the low-light enhancement step. Each step is explained in the following subsections in order.
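The inverted-image formulation can be sketched as follows, assuming the pixel-wise atmospheric light and transmission have already been estimated (the placeholders below are illustrative, not the paper's estimates).

```python
# A schematic sketch of Eq. (3): dehaze the inverted image R = 1 - I,
# then invert the result back to obtain J.
import numpy as np

def dehaze_inverted(hazy, a_hat, t, t_min=0.1):
    """Apply Eq. (3) to R(x) = 1 - I(x) and return J(x) = 1 - J_R(x)."""
    inverted = 1.0 - hazy                                      # R(x)
    t = np.clip(t, t_min, 1.0)[..., None]
    j_inv = (inverted - a_hat[..., None]) / t + a_hat[..., None]   # Eq. (3)
    return np.clip(1.0 - j_inv, 0.0, 1.0)                      # J(x)

hazy = np.random.rand(48, 48, 3)
a_hat = np.full((48, 48), 0.8)     # pixel-wise atmospheric light (placeholder)
t = np.full((48, 48), 0.5)         # transmission (placeholder)
haze_free = dehaze_inverted(hazy, a_hat, t)
```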

Figure 3: Contribution of the proposed atmospheric light. (a) Nighttime hazy image, (b) MSCNN, (c) LEDNet with only global atmospheric light, (d) LEDNet with local atmospheric light.

3.2 Illumination Map Estimation Step

Illumination estimation is a major theme in low-light enhancement literature. Recently, LIME [16] and LightenNet [30] have shown impressive accuracy in illumination estimation. According to the retinex theory, an image can be divided into illumination and reflectance components, which can be considered a clear illuminated image and a weakly illuminated image, respectively:

$S(x) = C(x) \cdot T(x),$   (4)

where $S(x)$, $C(x)$, and $T(x)$ indicate the weakly illuminated image, the clear illuminated image, and the illumination map, respectively. As shown in (4), if the illumination map is estimated from an observed image, a clearly illuminated image can be obtained by removing weakly illuminated regions. To obtain an accurate illumination map, we present a deep neural network consisting of four residual blocks and a side-convolution layer fed from each residual block. The side-convolution layer helps generate richer features in the structure of iterative residual blocks. The initial illumination map has one channel and is normalized to values between 0 and 1. Because the illumination map should be locally smooth while maintaining edge information, we refine it by applying the guided filter [20] and relative total variation filtering [46]. The guide image for the guided filter is the minimum channel of the input image.
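For illustration, a minimal single-channel guided filter [20] applied to a coarse illumination map is sketched below, with the minimum channel of the input as the guide; the radius and regularization values are illustrative, and the relative total variation step [46] is omitted.

```python
# A self-contained sketch of the refinement step: grayscale guided filtering
# of a coarse illumination map (placeholder network output).
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed via 2-D cumulative sums."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # zero row/col for the sum trick
    h, w = img.shape
    s = (c[2*r+1:2*r+1+h, 2*r+1:2*r+1+w] - c[2*r+1:2*r+1+h, :w]
         - c[:h, 2*r+1:2*r+1+w] + c[:h, :w])
    return s / float((2 * r + 1) ** 2)

def guided_filter(guide, src, r=15, eps=1e-3):
    """Single-channel guided filter of He et al. [20]."""
    mean_g, mean_s = box_filter(guide, r), box_filter(src, r)
    cov = box_filter(guide * src, r) - mean_g * mean_s
    var = box_filter(guide * guide, r) - mean_g * mean_g
    a = cov / (var + eps)
    b = mean_s - a * mean_g
    return box_filter(a, r) * guide + box_filter(b, r)

hazy = np.random.rand(96, 96, 3)
coarse_T = np.random.rand(96, 96)             # network output (placeholder)
guide = hazy.min(axis=2)                      # minimum channel of the input
refined_T = np.clip(guided_filter(guide, coarse_T), 0.0, 1.0)
```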

Figure 4: Analysis of each step of LEDNet. (a) Input images, (b) results of LEDNet used only for low-light enhancement, (c) results of LEDNet used only for dehazing, and (d) results of the full LEDNet.

3.3 Atmospheric-light Estimation Step

Atmospheric light and the transmission are key factors for solving the dehazing problem. In traditional approaches, the atmospheric light is obtained by finding the maximum value of the dark channel, as in He et al. [19], or by using quad-tree searching [26] and its variants. However, inaccurate transmission or atmospheric light estimation can cause color distortion and degrade the quality of haze-free images. To solve these problems, we present a simple but robust estimation method for the transmission and atmospheric light. Because the illumination represents the light in the atmosphere, our method uses the illumination to obtain the atmospheric light based on the retinex theory.

Conventional dehazing algorithms [19, 26, 43] assume that the atmospheric light is the same throughout the entire image; Kim and Kwon [24] disagree with this assumption and estimate pixel-wise atmospheric light. Unlike these methods, we argue that the atmospheric light has both local and global ingredients, which respectively correspond to the illumination and the global airlight. From a physics point of view, the illumination has different local values because it is generated as the global airlight is reflected in different ways by particles in the atmosphere. The atmospheric light is then a combination of the illumination (i.e., the local ingredient) and the global airlight (i.e., the global ingredient):

$\hat{A}(x) = \dfrac{1}{2}\bigl(A_g + T(x)\bigr),$   (5)

where $\hat{A}(x)$, $A_g$, and $T(x)$ indicate the refined atmospheric light, the global atmospheric light, and the illumination, respectively. In (5), $A_g$ is obtained by taking the maximum value of the illumination. Fig.3 shows dehazing results based on the proposed atmospheric light. Conventional approaches (e.g., MSCNN [38] and our LEDNet with only global atmospheric light) cannot accurately dehaze nighttime hazy images and produce over-saturated results, as shown in Figs.3(b) and (c), in which the images contain hazy regions, local light sources, and low-light areas concurrently. In contrast, the proposed LEDNet with local atmospheric light preserves local details after dehazing.
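A sketch of this step is given below. Taking the pixel-wise light as the average of $A_g$ and $T(x)$ is one simple reading of combining the global and local ingredients in (5) and should be treated as an assumption, not the exact rule used in the paper.

```python
# A hedged sketch of the atmospheric-light step: A_g = max(T), then a simple
# average of the global and local (illumination) components as pixel-wise light.
import numpy as np

def atmospheric_light(illumination):
    a_global = float(illumination.max())       # global airlight A_g = max(T)
    return 0.5 * (a_global + illumination)     # pixel-wise A_hat(x), cf. Eq. (5)

T = np.clip(np.random.rand(96, 96), 0.05, 1.0)  # refined illumination (placeholder)
A_hat = atmospheric_light(T)
```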

3.4 Transmission Estimation Step

As mentioned before, the illumination indicates the light reaching observers as the global airlight is reflected by particles in the atmosphere. Therefore, the illumination has been utilized as the transmission in Dong et al. [11] and LIME [16]. Our method also substitutes the illumination for the transmission: the transmission is obtained as $1 - T(x)$, analogous to the way the transmission is estimated by inverting the dark channel in He's method [19]. Then, (3) can be modified as follows:

$J_R(x) = \dfrac{R(x) - \hat{A}(x)}{\max\bigl(1 - T(x),\, t_0\bigr)} + \hat{A}(x),$   (6)

where the atmospheric light and the transmission in (3) are replaced by $\hat{A}(x)$ and $1 - T(x)$, respectively. To prevent excessive haze removal, the denominator is bounded by a constant $t_0$, which is set empirically and fixed in all experiments.
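The transmission and dehazing computation of (6) can be sketched as follows; the bound $t_0$ below is an illustrative value, not the one used in the experiments.

```python
# A minimal sketch of Eq. (6): transmission 1 - T(x), bounded by t0, is used to
# dehaze the inverted image; the result is inverted back to the haze-free image.
import numpy as np

def dehaze_with_illumination(hazy, T, t0=0.2):
    a_hat = 0.5 * (float(T.max()) + T)          # pixel-wise light, as sketched above
    t = np.maximum(1.0 - T, t0)                 # transmission from the illumination
    inverted = 1.0 - hazy                       # R(x) = 1 - I(x)
    j_inv = (inverted - a_hat[..., None]) / t[..., None] + a_hat[..., None]
    return np.clip(1.0 - j_inv, 0.0, 1.0)       # J(x) = 1 - J_R(x)

hazy = np.random.rand(96, 96, 3)
T = np.clip(np.random.rand(96, 96), 0.05, 0.95) # refined illumination (placeholder)
haze_free = dehaze_with_illumination(hazy, T)
```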

3.5 Low-light Enhancement Step

Because hazy regions have relatively high intensity values, the overall brightness typically decreases after the hazy regions are removed. Furthermore, during the dehazing process, dark areas (e.g., haze-free areas) can become darker because dehazing algorithms are applied globally and uniformly to the whole image. This problem can be solved by increasing the overall brightness of the image; however, in that case, the hazy regions may be insufficiently removed. To deal with this problem, we utilize the illumination map obtained in the illumination map estimation step:

$T_\gamma(x) = T(x)^{\gamma},$   (7)

where $T_\gamma(x)$ is obtained by applying gamma correction to the illumination map $T(x)$. In (7), $\gamma$ adjusts the degree of improvement of the low-light regions and is set empirically. We then obtain the enhanced result using $T_\gamma$ as follows:

$\hat{J}(x) = \dfrac{J(x)}{T_\gamma(x)},$   (8)

where $J(x)$ is an estimated haze-free image and $\hat{J}(x)$ is the enhanced output. By using $T_\gamma$ in (8), the result enhances the brightness of dark regions while maintaining the values of bright regions.
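A sketch of (7) and (8) is given below; the $\gamma$ value is illustrative, since the paper fixes it empirically.

```python
# A sketch of the low-light enhancement step: gamma-correct the illumination map
# and divide the haze-free image by it, in retinex style (Eqs. (7)-(8)).
import numpy as np

def enhance_low_light(haze_free, T, gamma=0.8, eps=0.05):
    T_gamma = np.power(np.clip(T, eps, 1.0), gamma)            # Eq. (7)
    return np.clip(haze_free / T_gamma[..., None], 0.0, 1.0)   # Eq. (8)

haze_free = np.random.rand(96, 96, 3)
T = np.random.rand(96, 96)
enhanced = enhance_low_light(haze_free, T)
```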

3.6 Detail Enhancement Step

In this step, we use residual learning to preserve details during dehazing and low-light enhancement. We add a skip connection to the VDSR [25] network to better reflect the details of the original image. The final output is obtained by a weighted linear combination, $J_{final}(x) = w\, J_{d}(x) + (1 - w)\, \hat{J}(x)$, of the output of the detail enhancement step, $J_{d}(x)$, and the initial result of LEDNet, $\hat{J}(x)$. The weight $w$ is set empirically.
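The blending can be sketched as follows; the detail-network output is mocked and the weight $w$ is an illustrative value.

```python
# A sketch of the detail-enhancement blend: linearly combine the detail-network
# prediction (a VDSR-style residual branch, mocked here) with the LEDNet result.
import numpy as np

def blend_detail(led_output, detail_output, w=0.5):
    """Final output as a weighted linear combination of the two branches."""
    return np.clip(w * detail_output + (1.0 - w) * led_output, 0.0, 1.0)

led_output = np.random.rand(96, 96, 3)      # dehazed + low-light enhanced result
detail_output = np.random.rand(96, 96, 3)   # detail-network prediction (placeholder)
final = blend_detail(led_output, detail_output)
```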

4 Implementation Details

Training Data Generation:

Figure 5: New approach for generating training data for both dehazing and low-light enhancement tasks. Colors represent different illumination values, from low (blue) to high (red).

For training, supervised deep learning methods require a large amount of data along with the corresponding ground truth. In the case of haze removal and low-light enhancement, the data must be captured outdoors, and it is not trivial to collect images that simultaneously contain hazy areas and low-light regions. Furthermore, it is even more difficult to obtain their corresponding ground truth. Therefore, many studies attempt to construct training datasets by synthesizing haze or low-light onto clear images. Unlike previous approaches that synthesize low-light onto clear images, our method synthesizes low-light onto hazy images, and thus a synthetic image simultaneously contains hazy areas and low-light regions.

We synthesize low-light onto an inverted hazy image using (4), based on the retinex theory. Fig.5 shows our data generation scheme. For this, $T(x)$ in (4) is replaced with a constant value that is randomly selected from nine candidate values. Then, we generate randomly illuminated patches from inverted hazy images of the FADE dataset [9]; the generation parameters are fixed empirically in all experiments. To make the illumination map locally smooth, neighboring patches are generated with overlapping pixels.
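The data generation scheme can be sketched as follows; the patch size, overlap, and the nine candidate illumination values are illustrative choices, not the ones used to build the actual dataset.

```python
# A sketch of the data-generation step: invert a hazy image and multiply it by a
# piecewise-constant illumination map drawn from a small discrete set (Eq. (4)).
import numpy as np

def synthesize_low_light_hazy(hazy, patch=32, stride=24,
                              levels=np.linspace(0.1, 0.9, 9)):
    rng = np.random.default_rng(0)
    h, w = hazy.shape[:2]
    inverted = 1.0 - hazy
    illum = np.ones((h, w))
    for y in range(0, h, stride):             # overlapping patches; later patches
        for x in range(0, w, stride):         # overwrite the overlap in this sketch
            illum[y:y+patch, x:x+patch] = rng.choice(levels)
    weak = inverted * illum[..., None]        # Eq. (4): S = C * T
    return weak, illum                        # network input / ground-truth pair

hazy = np.random.rand(128, 128, 3)
weak_input, gt_illumination = synthesize_low_light_hazy(hazy)
```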

Training and Parameter Setting: We use a large set of synthesized patches for training, which is sufficient to train the networks. The parameters of each layer are learned using the mean squared error (MSE) as the loss function:

$L(\Theta) = \dfrac{1}{N}\sum_{i=1}^{N}\bigl\| F(x_i;\Theta) - y_i \bigr\|^2,$   (9)

where $N$ is the training batch size, $x_i$ is the $i$-th weakly illuminated hazy patch, $y_i$ is the corresponding ground-truth patch, and $F(\cdot;\Theta)$ is the mapping function with parameters $\Theta$. The ADAM optimizer is used to find the optimal $\Theta$ in (9) that minimizes $L(\Theta)$. To speed up training, the filter weights of each layer are initialized from a random Gaussian distribution with a small standard deviation, and the filter biases are initialized to zero.

We initialize the learning rate and decay it by a fixed factor every five epochs. With these parameters, LEDNet is trained for a fixed number of epochs with mini-batches. We conduct the experiments on a workstation with an NVIDIA GeForce GTX TITAN XP GPU. The network is implemented in MATLAB 2018a. The training loss converges quickly, within minutes.
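For reference, a PyTorch sketch of this training setup (MSE loss as in (9), ADAM, and step learning-rate decay every five epochs) is shown below; the toy network and all hyper-parameter values are placeholders, and the original implementation is in MATLAB.

```python
# A hedged sketch of the training loop: MSE loss, Adam, and step LR decay.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))          # stand-in for LEDNet
criterion = nn.MSELoss()                                     # Eq. (9)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(10):
    inputs = torch.rand(8, 3, 41, 41)       # weakly illuminated hazy patches (toy)
    targets = torch.rand(8, 1, 41, 41)      # ground-truth illumination patches (toy)
    optimizer.zero_grad()
    loss = criterion(net(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()                        # decay the learning rate every 5 epochs
```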

The detail network is trained using the IAPR TC-12 dataset [15]. Because we need to pay attention to the details of haze-free images, we conduct this training separately on clear images, without combining it with the illumination network. For stable learning, we add a skip connection to the VDSR network.

5 Experiments

Our method (LEDNet) was compared with eight state-of-the-art dehazing algorithms: non-deep learning based methods [13, 4, 48] and deep learning based methods [38, 6, 29, 47, 39]. For the qualitative comparison, we used the dataset provided by Fattal [13] and real-world datasets. We utilized the visibility level descriptor (VLD) [17] and four image quality evaluation methods [35, 34, 45, 9], which are no-reference quantitative evaluation metrics.

Figure 6: Contribution of the detail network. (a) Input image, (b) estimated details, (c) result of LEDNet without the detail network, with a crop from the orange bounding box, (d) result of LEDNet with the detail network, with a crop from the red bounding box.
Figure 7: Comparison of dehazing results 1. (a) Input, (b) Color-line [13], (c) DehazeNet [6], (d) MSCNN [38], (e) Haze-line [4], (f) AOD-Net [29], (g) DCPDN [47], (h) LEDNet.
Figure 8: Comparison of dehazing results 2. (a) Input, (b) MSCNN [38], (c) DehazeNet [6], (d) CAP [48], (e) AOD-Net [29], (f) DCPDN [47], (g) GFN [39], (h) LEDNet.
Figure 9: Low-light enhancement results. (a) Input image, (b) LIME [16], (c) and (d) LEDNet with two different values of $\gamma$.

5.1 Ablation Study

To evaluate each step of LEDNet, we conducted an ablation test, for which two variants were additionally implemented: one that uses the proposed LEDNet only for low-light enhancement without the detail network, and one that uses it only for dehazing without the detail network. LEDNet produces three different images based on the illumination map: the first is a low-light enhanced image, the second is a haze-free image, and the last is an image obtained by conducting dehazing together with low-light and detail enhancement. Fig.4 shows intermediate results for each step. The first row shows an image commonly used for HDR, in which low-light and clearly illuminated areas exist at the same time. The second row is an image widely used for dehazing, in which the overall region is hazy and most dehazing algorithms lose information after dehazing. As demonstrated in Fig.4(b), the low-light-enhancement-only variant improved the visibility of the low-light image. However, it failed to remove hazy regions, and even made those regions hazier as the average brightness increased. In Fig.4(c), the dehazing-only variant removed hazy regions and presented vivid colors in hazy images. It also enhanced the brightness of some dark areas in low-light images, helped by the increase in contrast after dehazing. However, it caused information loss in dark areas. To solve the two problems of dehazing and low-light enhancement simultaneously, we present the unified network LEDNet. As shown in Fig.4(d), LEDNet improved the contrast and brightness of dark areas in low-light images. In addition, LEDNet produces haze-free images with enhanced details, while maintaining vivid colors. Fig.6 compares LEDNet without and with the detail enhancement step: LEDNet with the detail network in Fig.6(d) preserves more details than LEDNet without it in Fig.6(c).

Figure 10: Image enhancement results. (a) Input image captured by the BlackBerry Passport camera, from the DPED [23] dataset, (b) DPED [23], (c) LEDNet.
| Metric | DehazeNet [6] | MSCNN [38] | CAP [48] | AOD-Net [29] | DCPDN [47] | GFN [39] | LEDNet |
| VLD (edge $e$) | 1.063 | 0.522 | 0.75 | 1.035 | 0.618 | 1.528 | 1.276 |
| VLD (saturation $\Sigma$) | 2.384 | 0.006 | 0 | 0.01 | 0 | 3.439 | 0.151 |
| VLD (contrast $\bar{r}$) | 1.29 | 1.202 | 1.229 | 1.252 | 1.449 | 1.685 | 2.197 |
| FADE | 1.685 | 2.294 | 2.196 | 1.756 | 2.332 | 1.488 | 1.753 |
| NIQE | 2.606 | 2.627 | 2.612 | 2.747 | 3.173 | 2.76 | 3.141 |
| BRISQUE | 24.18 | 23.55 | 26.38 | 19.63 | 27.86 | 20.18 | 18.61 |
| PIQE | 32.47 | 33.36 | 32.38 | 32.56 | 39.08 | 31.52 | 30.48 |
Table 1: Comparison with state-of-the-art dehazing methods using no-reference image quality measurement. Red, blue, and green denote the best, the second best, and the third best results, respectively.

5.2 Results on Real-world datasets

We experimented with real-world datasets because the characteristics of indoor synthetic haze and real-world haze are very different. Fig.7 shows experimental results on the Fattal dataset [13]. The compared algorithms produced accurate dehazing results without a halo effect, the most problematic artifact after dehazing. However, although Color-line produced natural images in terms of color restoration, saturation occurred in sky and bright areas. Red rectangles highlight the areas where saturation and color distortion occur. In the first row, the sky regions are saturated, while the white river and white house turn yellowish and distorted. In addition, black artifacts occur in the lower-left building. DehazeNet, MSCNN, and AOD-Net clearly removed hazy regions, but areas with low brightness become darker and lose image details. Yellow rectangles highlight the areas where the brightness is insufficient after dehazing. Haze-line and DCPDN produce unnatural results. In contrast to the aforementioned methods, LEDNet obtained accurate haze-free images while improving the brightness of low-light regions. Moreover, LEDNet avoided saturation and color distortion problems in most areas.

Fig.8 shows the dehazing results on another real-world dataset. MSCNN accurately removed hazy regions in Fig.7, but it failed to remove many hazy regions in this dataset, as shown in Fig.8. GFN produced good dehazing results on most images, but it lost local details of the image in the third row. The proposed LEDNet removed hazy regions properly, while recovering appropriate brightness and preserving details.

Fig.9 shows low-light enhancement results of LIME and LEDNet with different parameters. For this experiment, we utilized low-light images. LIME [16] is a state-of-the-art algorithm for low-light image enhancement based on the illumination. Because LIME was originally designed to solve the low-light image enhancement problem, it produced good results on most images. On the other hand, LEDNet was not developed only for low-light image enhancement. Nevertheless, it improved the low-light image quality in terms of vivid colors and unveiled low-light regions. In the first row, LIME enhanced the brightness of the front leaves significantly, assuming that areas close to the viewpoint should have high illumination values; however, the result is unnatural. In the second row, the LIME result has overly high values, like a hazy image. In contrast to LIME, LEDNet produced relatively natural results. Using a large value of $\gamma$ results in a large amount of improvement for low-light regions. Fig.10 shows results on the recently released image enhancement dataset [23]. LEDNet can also be used for image quality enhancement, in addition to dehazing and low-light enhancement.

5.3 No-Reference Image Quality metric

There is typically no ground truth in dehazing problems. Thus, synthesized ground truth together with the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) has been widely used to evaluate dehazing methods. However, these metrics have difficulty in accurately measuring qualitative improvements [5]. Recently, to alleviate this problem, no-reference image quality metrics have been proposed, namely the naturalness image quality evaluator (NIQE) [35], the blind/referenceless image spatial quality evaluator (BRISQUE) [34], and the perception-based image quality evaluator (PIQE) [45]. In this paper, we use these no-reference image quality metrics together with the conventional metrics VLD [17] (i.e., edge $e$, contrast $\bar{r}$, and saturation $\Sigma$) and FADE [9] (i.e., a haze density measurement). The best method has the largest values of $e$ and $\bar{r}$, and the smallest values of the other metrics.

As reported in Table 1, LEDNet and GFN preserved many edges after dehazing, as indicated by their larger values of $e$. Although CAP and DCPDN produced the smallest values of $\Sigma$, their qualitative improvement was not significant. On the other hand, MSCNN and our LEDNet also had small values of $\Sigma$, while looking qualitatively better. In terms of FADE, GFN, DehazeNet, and the proposed LEDNet showed the best performance. In addition, LEDNet produced images with the best perceptual quality and is thus the best method in terms of NIQE, BRISQUE, and PIQE.

5.4 Discussion

To demonstrate the effectiveness of the proposed LEDNet, we evaluated several dehazing methods in terms of various no-reference evaluation metrics. Nevertheless, these metrics are still insufficient to qualitatively measure the quality of haze-free images, and there is no fundamentally accurate measurement for dehazing. In addition, the illumination of real-world images is completely different from the illumination of synthesized hazy images. Our LEDNet typically performed well in removing real-world illumination; however, it had difficulty in removing synthesized illumination. Conversely, other methods produced unnatural results on real-world images.

6 Conclusion

In this paper, we proposed a unified framework, LEDNet, for dehazing and low-light enhancement based on the illumination map. We use the illumination map for three different tasks: atmospheric light estimation, transmission estimation, and low-light enhancement. In addition, detail enhancement is performed by the detail network. LEDNet allows haze removal and low-light enhancement to proceed simultaneously. Our haze-free images contain vivid colors and unveil dark regions after dehazing. In particular, the visibility of weakly illuminated areas in hazy and low-light images is significantly improved. Experimental results show that LEDNet surpasses other state-of-the-art methods not only for dehazing but also for low-light enhancement.

References

  • [1] C. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert. A fast semi-inverse approach to detect and remove the haze from a single image. In ACCV, 2011.
  • [2] C. O. Ancuti and C. Ancuti. Single image dehazing by multi-scale fusion. TIP, 22(8):3271–3282, 2013.
  • [3] K. Ma and Z. Wang. Multi-exposure image fusion: A patch-wise approach. In ICIP, 2015.
  • [4] D. Berman, T. Treibitz, and S. Avidan. Non-local image dehazing. In CVPR, 2016.
  • [5] Y. Blau, R. Mechrez, R. Timofte, T. Michaeli, and L. Zelnik-Manor. The 2018 pirm challenge on perceptual image super-resolution. In ECCV, 2018.
  • [6] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao. DehazeNet: An end-to-end system for single image haze removal. TIP, 25(11):5187–5198, 2016.
  • [7] J. Cai, S. Gu, and L. Zhang. Learning a deep single image contrast enhancer from multi-exposure images. TIP, 27(4):2049–2062, 2018.
  • [8] C. Chen, M. N. Do, and J. Wang. Robust image and video dehazing with visual artifact suppression via gradient residual minimization. In ECCV, 2016.
  • [9] L. K. Choi, J. You, and A. C. Bovik. Referenceless prediction of perceptual fog density and perceptual image defogging. TIP, 24(11):3888–3901, 2015.
  • [10] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. TPAMI, 38(2):295–307, 2016.
  • [11] X. Dong, G. Wang, Y. Pang, W. Li, J. Wen, W. Meng, and Y. Lu. Fast efficient algorithm for enhancement of low lighting video. In ICME, 2011.
  • [12] R. Fattal. Single image dehazing. TOG, 27(3):72, 2008.
  • [13] R. Fattal. Dehazing using color-lines. TOG, 34(13):13:1–13:14, 2014.
  • [14] A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío. On the duality between retinex and image dehazing. In CVPR, 2018.
  • [15] M. Grubinger, P. Clough, H. Müller, and T. Deselaers. The iapr tc-12 benchmark: A new evaluation resource for visual information systems. In Workshop OntoImage, 2006.
  • [16] X. Guo, Y. Li, and H. Ling. LIME: Low-light image enhancement via illumination map estimation. TIP, 26(2):982–993, 2017.
  • [17] N. Hautiere, J.-P. Tarel, D. Aubert, and E. Dumont. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Analysis & Stereology, 27(2):87–95, 2011.
  • [18] J. He, C. Zhang, R. Yang, and K. Zhu. Convex optimization for fast image dehazing. In ICIP, 2016.
  • [19] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. In CVPR, 2009.
  • [20] K. He, J. Sun, and X. Tang. Guided image filtering. In ECCV, 2010.
  • [21] Y. Hold-Geoffroy, K. Sunkavalli, S. Hadap, E. Gambaretto, and J.-F. Lalonde. Deep outdoor illumination estimation. In CVPR, 2017.
  • [22] Y. Hu, B. Wang, and S. Lin. FC4: Fully convolutional color constancy with confidence-weighted pooling. In CVPR, 2017.
  • [23] A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, and L. Van Gool. DSLR-quality photos on mobile devices with deep convolutional networks. In ICCV, 2017.
  • [24] G. Kim and J. Kwon. Robust pixel-wise dehazing algorithm based on advanced haze-relevant features. In BMVC, 2017.
  • [25] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, 2016.
  • [26] J.-H. Kim, J.-Y. Sim, and C.-S. Kim. Single image dehazing based on contrast enhancement. In ICASSP, 2011.
  • [27] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski. Deep photo: Model-based photograph enhancement and viewing. TOG, 27(5):116, 2008.
  • [28] H. Koschmieder. Theorie der horizontalen sichtweite: kontrast und sichtweite. Keim & Nemnich, 1925.
  • [29] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng. AOD-Net: All-in-one dehazing network. In ICCV, 2017.
  • [30] C. Li, J. Guo, F. Porikli, and Y. Pang. LightenNet: a convolutional neural network for weakly illuminated image enhancement. PRL, 104:15–22, 2018.
  • [31] Z. Ling, G. Fan, J. Gong, and S. Guo. Learning deep transmission network for efficient image dehazing. Multimed Tools Appl., 78(1):213–236, 2019.
  • [32] K. G. Lore, A. Akintayo, and S. Sarkar. LLNet: A deep autoencoder approach to natural low-light image enhancement. PR, 61:650–662, 2017.
  • [33] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan. Efficient image dehazing with boundary constraint and contextual regularization. In ICCV, 2013.
  • [34] A. Mittal, A. K. Moorthy, and A. C. Bovik. No-reference image quality assessment in the spatial domain. TIP, 21(12):4695–4708, 2012.
  • [35] A. Mittal, R. Soundararajan, and A. C. Bovik. Making a “completely blind” image quality analyzer. SPL, 20(3):209–212, 2013.
  • [36] S. G. Narasimhan and S. K. Nayar. Contrast restoration of weather degraded images. TPAMI, 25(6):713–724, 2003.
  • [37] Z.-u. Rahman, D. J. Jobson, and G. A. Woodell. Multi-scale retinex for color image enhancement. In ICIP, 1996.
  • [38] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang. Single image dehazing via multi-scale convolutional neural networks. In ECCV, 2016.
  • [39] W. Ren, L. Ma, J. Zhang, J. Pan, X. Cao, W. Liu, and M.-H. Yang. Gated fusion network for single image dehazing. In CVPR, 2018.
  • [40] L. Schaul, C. Fredembach, and S. Süsstrunk. Color image dehazing using the near-infrared. In ICIP, 2009.
  • [41] C.-T. Shen and W.-L. Hwang. Color image enhancement using retinex with robust envelope. In ICIP, 2009.
  • [42] L. Shen, Z. Yue, F. Feng, Q. Chen, S. Liu, and J. Ma. MSR-net: Low-light image enhancement using deep convolutional network. arXiv preprint arXiv:1711.02488, 2017.
  • [43] M. Sulami, I. Geltzer, R. Fattal, and M. Werman. Automatic recovery of the atmospheric light in hazy images. In ICCP, 2014.
  • [44] K. Tang, J. Yang, and J. Wang. Investigating haze-relevant features in a learning framework for image dehazing. In CVPR, 2014.
  • [45] N. Venkatanath, D. Praneeth, M. C. Bh, S. S. Channappayya, and S. S. Medasani. Blind image quality evaluation using perception based features. In NCC, 2015.
  • [46] L. Xu, Q. Yan, Y. Xia, and J. Jia. Structure extraction from texture via relative total variation. TOG, 31(6):139, 2012.
  • [47] H. Zhang and V. M. Patel. Densely connected pyramid dehazing network. In CVPR, 2018.
  • [48] Q. Zhu, J. Mai, and L. Shao. A fast single image haze removal algorithm using color attenuation prior. TIP, 24(11):3522–3533, 2015.