DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs


Abstract

We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions. Moreover, they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training, and, to circumvent the need for ground-truth images, we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as its loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches for a variety of natural images.

1 Introduction

Figure 1: Schematic diagram of the proposed method.

High Dynamic Range Imaging (HDRI) is a photography technique that helps capture better-looking photos in difficult lighting conditions. It stores the entire range of light (or brightness) that is perceivable by human eyes, instead of the limited range achievable by cameras. Due to this property, all objects in the scene look clear in an HDR image, without being saturated (too dark or too bright).

The popular approach for HDR image generation is called Multi-Exposure Fusion (MEF), in which a set of static LDR images (further referred to as an exposure stack) with varying exposure is fused into a single HDR image. The proposed method falls under this category. Most MEF algorithms work better when the exposure bias difference between the LDR images in the exposure stack is minimal. They therefore require more LDR images (typically more than 2) in the exposure stack to capture the whole dynamic range of the scene, which increases storage, processing time, and power requirements. In principle, the long-exposure image (captured with a high exposure time) has better colour and structure information in dark regions, and the short-exposure image (captured with a low exposure time) has better colour and structure information in bright regions. Though fusing extreme-exposure images is practically more appealing, it is quite challenging: existing approaches fail to maintain uniform luminance across the image. Thus, we propose to work with exposure-bracketed image pairs as input to our algorithm.

In this work, we present a data-driven learning method for fusing exposure-bracketed static image pairs. To our knowledge, this is the first work that uses a deep CNN architecture for exposure fusion. The initial layers consist of a set of filters that extract common low-level features from each image of the input pair. These low-level features are then fused for reconstructing the final result. The entire network is trained end-to-end using a no-reference image quality loss function.

We train and test our model with a huge set of exposure stacks captured with diverse settings (indoor/outdoor, day/night, side-lighting/back-lighting, and so on). Furthermore, our model does not require parameter fine-tuning for varying input conditions. Through extensive experimental evaluations we demonstrate that the proposed architecture performs better than state-of-the-art approaches for a wide range of input scenarios.

The contributions of this work are as follows:

  • A CNN-based unsupervised image fusion algorithm for fusing static exposure-bracketed image pairs.

  • A new benchmark dataset that can be used for comparing various MEF methods.

  • An extensive experimental evaluation and comparison study against 7 state-of-the-art algorithms for a variety of natural images.

The paper is organized as follows. In Section 2, we briefly review related work from the literature. In Section 3, we present our CNN-based exposure fusion algorithm and discuss the details of our experiments. In Section 4, we provide fusion examples, and we conclude the paper with an insightful discussion in Section 5.

2 Related Works

Many algorithms have been proposed over the years for exposure fusion, but the main idea remains the same in all of them: compute weights for each image, either locally or pixel-wise, and obtain the fused image as the weighted sum of the images in the input sequence.
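This generic template — per-pixel weights normalized to sum to one, followed by a weighted sum — can be sketched as follows. This is an illustrative NumPy example, not any specific published method; the well-exposedness weight is one hypothetical hand-crafted choice of the kind the methods below use.

```python
import numpy as np

def fuse_weighted(images, weights, eps=1e-12):
    """Per-pixel weighted-sum fusion: the common template behind most MEF methods."""
    w = np.stack(weights).astype(float)
    w /= w.sum(axis=0) + eps                 # normalize the weights at every pixel
    return (np.stack(images) * w).sum(axis=0)

def well_exposedness(y, sigma=0.2):
    """Example hand-crafted weight: favor pixels near mid-gray (y in [0, 1])."""
    return np.exp(-((y - 0.5) ** 2) / (2 * sigma ** 2))

# Toy input pair: one under-exposed, one over-exposed image
under = np.full((4, 4), 0.1)
over = np.full((4, 4), 0.9)
fused = fuse_weighted([under, over],
                      [well_exposedness(under), well_exposedness(over)])
```

Since 0.1 and 0.9 are equally far from mid-gray, both images receive equal weight here and the fusion reduces to their average; real methods differ mainly in how the weight maps are computed and smoothed.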

Burt et al. [3] performed a Laplacian pyramid decomposition of the image and the weights are computed using local energy and correlation between the pyramids. Use of Laplacian pyramids reduces the chance of unnecessary artifacts. Goshtasby et al. [5] take non-overlapping blocks with highest information from each image to obtain the fused result. This is prone to suffer from block artifacts. Mertens et al. [16] perform exposure fusion using simple quality metrics such as contrast and saturation. However, this suffers from hallucinated edges and mismatched color artifacts.

Algorithms that make use of edge-preserving filters, such as bilateral filters, are proposed in [19]. As these do not account for the luminance of the images, the fused image has dark regions, leading to poor results. A gradient-based approach to assigning weights was put forward by Zhang et al. [28]. In a series of papers by Li et al. [9], [10], different approaches to exposure fusion have been reported. In their early work they solve a quadratic optimization to extract finer details and fuse them. In later work [10], they propose a Guided Filter based approach.

Figure 2: Architecture of proposed image fusion CNN illustrated for input exposure stack with images of size h\times w. The pre-fusion layers C1 and C2 that share same weights, extract low-level features from input images. The feature pairs of input images are fused into a single feature by merge layer. The fused features are input to reconstruction layers to generate fused image Y_{fused}.

Shen et al. [22] proposed a fusion technique using quality metrics such as local contrast and color consistency. The random walk approach they perform gives a global optimum solution to the fusion problem set in a probabilistic fashion.

All of the above works rely on hand-crafted features for image fusion. These methods are not robust in the sense that parameters must be varied for different input conditions (say, linear and non-linear exposures), and filter sizes depend on image sizes. To circumvent this parameter tuning, we propose a feature-learning-based approach using CNNs, learning features suitable for fusing exposure-bracketed images. Recently, Convolutional Neural Networks (CNNs) have shown impressive performance across various computer vision tasks [8]. While CNNs have produced state-of-the-art results in many high-level tasks such as recognition ([7], [21]), object detection [11], segmentation [6], semantic labelling [17], and visual question answering [2], their performance on low-level image processing problems such as filtering [4] and fusion [18] has not been studied extensively. In this work, we explore the effectiveness of CNNs for the task of multi-exposure image fusion.

To our knowledge, the use of CNNs for multi-exposure fusion has not been reported in the literature. The closest machine learning approach is based on a regression method called Extreme Learning Machine (ELM) [25], which feeds saturation level, exposedness, and contrast into a regressor to estimate the importance of each pixel. Instead of using hand-crafted features, we use the data to learn a representation directly from the raw pixels.

3 Proposed Method

In this work, we propose an image fusion framework using CNNs. Within the span of a couple of years, Convolutional Neural Networks have shown significant success in high-end computer vision tasks. They have been shown to learn complex mappings between input and output with the help of sufficient training data. A CNN learns its model parameters by optimizing a loss function so that its prediction is as close as possible to the ground truth. For example, let us assume that input x is mapped to output y by some complex transformation f. The CNN can be trained to estimate a function \hat{f} that minimizes the difference between the expected output y and the obtained output \hat{y} = \hat{f}(x). The distance between y and \hat{y} is calculated using a loss function, such as the mean squared error. Minimizing this loss leads to a better estimate of the required mapping function.

Let us denote the input exposure sequence and the fusion operator as I and O(I). The input images are assumed to be registered and aligned using existing registration algorithms, thus avoiding camera and object motion. We model O(I) with a feed-forward process F_W(I), where F denotes the network architecture and W denotes the weights learned by minimizing the loss function. As the expected output is absent for the MEF problem, the squared error loss or any other full-reference error metric cannot be used. Instead, we use the no-reference image quality metric MEF SSIM, proposed by Ma et al. [15], as the loss function. MEF SSIM is based on the structural similarity index metric (SSIM) framework [27]. It makes use of the statistics of a patch around each pixel from the input image sequence to compare with the result, and measures the loss of structural integrity as well as luminance consistency at multiple scales (see the MEF SSIM loss function section for more details).

An overall scheme of the proposed method is shown in Figure 1. The input exposure stack is converted into the YCbCr color space. The CNN is used to fuse the luminance channels of the input images, since the structural details of an image are present in the luminance channel and brightness variation is more prominent there than in the chrominance channels. The obtained luminance channel is combined with the chroma (Cb and Cr) channels generated using the method described in Section 3.3. The following subsections detail the network architecture, the loss function, and the training procedure.

3.1 DeepFuse CNN

The learning ability of a CNN is heavily influenced by the right choice of architecture and loss function. A simple and naive architecture is a series of convolutional layers connected sequentially, with the exposure image pair stacked along the third dimension as input. Since the fusion then happens in the pixel domain itself, this type of architecture does not exploit the feature learning ability of CNNs to a great extent.

The proposed network architecture for image fusion is illustrated in Figure 2. The architecture has three components: feature extraction layers, a fusion layer and reconstruction layers. As shown in Figure 2, the under-exposed and the over-exposed images (Y_1 and Y_2) are input to separate channels (channel 1 consists of C11 and C21, and channel 2 consists of C12 and C22). The first layer (C11 and C12) contains 5\times 5 filters that extract low-level features such as edges and corners. The weights of the pre-fusion channels are tied: C11 and C12 (and likewise C21 and C22) share the same weights. The advantage of this architecture is three-fold. First, we force the network to learn identical features for the input pair; that is, F11 and F21 are the same feature type, so we can simply combine the respective feature maps via the fusion layer: the first feature map of image 1 (F11) and the first feature map of image 2 (F21) are added, and the same applies to the remaining feature maps. Adding the features also resulted in better performance than other choices of combining features (see Table 1). In feature addition, similar feature types from both images are fused together. Optionally, one can concatenate features instead; the network then has to figure out the weights to merge them. In our experiments, we observed that feature concatenation can achieve similar results by increasing the number of training iterations and increasing the number of filters and layers after C3. This is understandable, as the network needs more iterations to figure out appropriate fusion weights. In this tied-weights setting, we enforce the network to learn filters that are invariant to brightness changes. This is observed by visualizing the learned filters (see Figure 9): a few high-activation filters have center-surround receptive fields (typically observed in the retina).
These filters have learned to remove the mean from the neighbourhood, effectively making the features brightness invariant. Second, the number of learnable filters is reduced by half. Third, as the network has a small number of parameters, it converges quickly. The features obtained from C21 and C22 are fused by the merge layer. The result of the fusion layer is then passed through another set of convolutional layers (C3, C4 and C5) to reconstruct the final result (Y_{fused}) from the fused features.
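The tied-weights/add-fusion structure can be sketched in a few lines of NumPy. This is a minimal illustration, not the trained network: the single-layer branches, filter counts and random weights are stand-ins for the real C11–C5 stack.

```python
import numpy as np

def conv2d(img, kernels):
    """'Same'-size 2-D filtering of one image with a bank of kernels (n, kh, kw)."""
    n, kh, kw = kernels.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    h, w = img.shape
    out = np.empty((n, h, w))
    for i in range(h):
        for j in range(w):
            out[:, i, j] = np.tensordot(kernels, padded[i:i + kh, j:j + kw],
                                        axes=([1, 2], [0, 1]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def deepfuse_forward(y1, y2, w_pre, w_rec):
    # Tied pre-fusion weights: the SAME filter bank w_pre processes both inputs,
    # so the k-th feature map of each image is the same feature type.
    f1 = relu(conv2d(y1, w_pre))
    f2 = relu(conv2d(y2, w_pre))
    fused = f1 + f2                       # merge layer: element-wise addition
    # Reconstruction: collapse the fused feature maps back to one luminance image.
    return sum(conv2d(fmap, w_rec)[0] for fmap in fused)

rng = np.random.default_rng(0)
y1, y2 = rng.random((8, 8)), rng.random((8, 8))
w_pre = rng.standard_normal((4, 5, 5)) * 0.1   # 4 shared 5x5 filters for both branches
w_rec = rng.standard_normal((1, 5, 5)) * 0.1
out = deepfuse_forward(y1, y2, w_pre, w_rec)
```

Note that because the branches share weights and the merge is addition, the output is invariant to swapping the two input exposures — a property the real architecture shares.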

MEF SSIM loss function

In this section, we discuss how the loss is computed without a reference image, using the MEF SSIM image quality measure [15]. Let {y_k}, k = 1, 2, denote the image patches extracted at a pixel location p from the input image pair, and let y_f denote the patch extracted from the CNN output (the fused image) at the same location p. The objective is to compute a score that quantifies the fusion performance given the input patches and the fused image patch.

In the SSIM [27] framework, any patch can be modelled using three components: structure (s), luminance (l) and contrast (c). A given patch y_k is decomposed into these three components as

y_k = \|\tilde{y}_k\| \cdot \frac{\tilde{y}_k}{\|\tilde{y}_k\|} + \mu_{y_k} = c_k \cdot s_k + l_k,

where \|\cdot\| is the \ell_2 norm of the patch, \mu_{y_k} is the mean value of y_k, and \tilde{y}_k = y_k - \mu_{y_k} is the mean-subtracted patch. As a higher contrast value means a better image, the desired contrast value \hat{c} of the result is taken as the highest contrast value among the inputs, i.e., \hat{c} = \max_{k=1,2} c_k.

The structure of the desired result, \hat{s}, is obtained by a weighted sum of the structures of the input patches:

\bar{s} = \frac{\sum_{k=1}^{2} w(\tilde{y}_k)\, s_k}{\sum_{k=1}^{2} w(\tilde{y}_k)}, \qquad \hat{s} = \frac{\bar{s}}{\|\bar{s}\|},

where the weighting function w(\cdot) assigns weights based on the structural consistency between the input patches. It assigns equal weights when the patches have dissimilar structural components; when all input patches have similar structures, the patch with higher contrast is given more weight, as it is more robust to distortions. The estimated \hat{s} and \hat{c} are combined to produce the desired result patch as

\hat{y} = \hat{c} \cdot \hat{s}.

As luminance comparison in the local patches is insignificant, the luminance component is discarded from the above equation; comparing luminance at lower spatial resolution does not reflect global brightness consistency. Instead, performing this operation at multiple scales effectively captures global luminance consistency at coarser scales and local structural changes at finer scales. The final image quality score for pixel p is calculated in the SSIM framework as

Score(p) = \frac{2\sigma_{\hat{y} y_f} + C}{\sigma_{\hat{y}}^2 + \sigma_{y_f}^2 + C},

where \sigma^2 denotes variance and \sigma_{\hat{y} y_f} is the covariance between \hat{y} and y_f. The total loss is calculated as

Loss = 1 - \frac{1}{N} \sum_{p \in P} Score(p),

where N is the total number of pixels in the image and P is the set of all pixels in the input image. The computed loss is backpropagated to train the network. The better performance of MEF SSIM is attributed to its objective, which maximizes structural consistency between the fused image and each of the input images.
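The per-pixel score above can be made concrete with a small NumPy sketch. This is a simplified single-scale illustration: we use the contrast norm \|\tilde{y}_k\| itself as the weight w(\cdot), which is one plausible choice but not necessarily the exact weighting of Ma et al. [15].

```python
import numpy as np

def mef_ssim_patch_score(patches, fused, C=0.03 ** 2):
    """Single-scale MEF-SSIM-style score at one pixel location.

    patches : list of flattened input patches y_k at that location
    fused   : flattened patch y_f from the fused image
    """
    tildes = [p - p.mean() for p in patches]        # mean-subtracted patches
    contrasts = [np.linalg.norm(t) for t in tildes]
    c_hat = max(contrasts)                          # desired contrast: the highest one
    # Desired structure: weighted sum of unit structures s_k, re-normalized.
    weights = contrasts                             # simplified choice of w(.)
    s_bar = sum(w * t / (c + 1e-12) for w, t, c in zip(weights, tildes, contrasts))
    s_bar /= sum(weights) + 1e-12
    s_hat = s_bar / (np.linalg.norm(s_bar) + 1e-12)
    y_hat = c_hat * s_hat                           # desired (mean-free) result patch
    # SSIM-style structural comparison between y_hat and the fused patch.
    f = fused - fused.mean()
    cov = np.mean(y_hat * f) - y_hat.mean() * f.mean()
    return (2 * cov + C) / (np.var(y_hat) + np.var(f) + C)

# Two toy patches with the same structure, the second with higher contrast:
p1 = np.array([0.0, 1.0, 2.0, 3.0])
p2 = np.array([0.0, 2.0, 4.0, 6.0])
score_perfect = mef_ssim_patch_score([p1, p2], p2)  # fused == desired -> score near 1
```

The training loss is then 1 minus the mean of this score over all pixel locations; a fused patch that matches the desired structure and contrast scores 1, while a structureless (flat) patch scores near 0.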

Table 1: Choice of blending operators: average MEF SSIM scores of 23 test images generated by CNNs trained with different feature blending operations. The maximum score is highlighted in bold. Results illustrate that adding the feature tensors yields the best performance. Results by the addition and mean methods are similar, as the two operations differ only by a scaling factor. Refer to the text for more details.
Product Concatenation Max Mean Addition
0.8210 0.9430 0.9638 0.9750 0.9782

3.2 Training

Figure 3: Results for House image sequence. Image courtesy of Kede Ma. Best viewed in color.
Figure 4: Results for House image sequence. Image courtesy of Kede Ma. Best viewed in color.

We have collected 25 exposure stacks that are publicly available [1]. In addition, we have curated 50 exposure stacks with different scene characteristics. The images were taken with a standard camera setup and tripod. Each scene consists of 2 low dynamic range images with an EV difference. The input sequences are resized to 1200\times 800. We gave priority to covering both indoor and outdoor scenes. From these input sequences, 30000 patches of size 64\times 64 were cropped for training. We fix the learning rate and train the network for 100 epochs, with all the training patches being processed in each epoch.
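The patch preparation step can be sketched as follows. This is an illustrative NumPy example; the crop sampling and the stand-in image pair are our own simplifications, and only the 64\times 64 patch size comes from the text.

```python
import numpy as np

def extract_patch_pairs(under, over, n_patches, size=64, seed=0):
    """Crop aligned (same-location) patch pairs from a registered exposure pair."""
    rng = np.random.default_rng(seed)
    h, w = under.shape
    pairs = []
    for _ in range(n_patches):
        i = rng.integers(0, h - size + 1)   # same top-left corner for both images,
        j = rng.integers(0, w - size + 1)   # since the pair is registered
        pairs.append((under[i:i + size, j:j + size],
                      over[i:i + size, j:j + size]))
    return pairs

# Stand-in registered pair at the 1200x800 working resolution
under_img = np.zeros((800, 1200))
over_img = np.ones((800, 1200))
pairs = extract_patch_pairs(under_img, over_img, n_patches=8)
```

Each element of `pairs` is one training sample: the two luminance patches that the twin pre-fusion branches receive.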

3.3 Testing

We follow the standard cross-validation procedure to train our model and test the final model on a disjoint test set to avoid over-fitting. At test time, the trained CNN takes the test image sequence and generates the luminance channel (Y_{fused}) of the fused image. The chrominance components of the fused image, Cb_{fused} and Cr_{fused}, are obtained as a weighted sum of the input chrominance channel values.

The crucial structural details of the image tend to be present mainly in the Y channel. Thus, different fusion strategies are followed in the literature for Y and Cb/Cr fusion ([18], [24], [26]). Moreover, the MEF SSIM loss is formulated to compute a score between two gray-scale (Y) images, so measuring MEF SSIM for the Cb and Cr channels may not be meaningful. Alternatively, one could fuse the RGB channels separately using different networks. However, there is typically a large correlation between the RGB channels; fusing them independently fails to capture this correlation and introduces noticeable color differences, and MEF SSIM is not designed for RGB channels. Another alternative is to regress RGB values in a single network, convert them to a luminance image, and compute the MEF SSIM loss on it. Here, the network can focus more on improving the luminance channel, giving less importance to color. However, we observed spurious colors in the output that were not present in the input.

We follow the procedure used by Prabhakar et al. [18] for chrominance channel fusion. If x_1 and x_2 denote the Cb (or Cr) channel values at any pixel location for the image pair, then the fused chrominance value x is obtained as

x = \frac{x_1 |x_1 - \tau| + x_2 |x_2 - \tau|}{|x_1 - \tau| + |x_2 - \tau|},

i.e., the two chrominance values are weighted by their deviation from \tau, whose value is chosen as 128. The intuition behind this approach is to give more weight to good color components and less to saturated color values. The final result is obtained by converting the {Y_{fused}, Cb_{fused}, Cr_{fused}} channels into an RGB image.
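The chrominance rule above is simple enough to sketch directly. This NumPy version follows the weighting described in the text with \tau = 128; the fallback for pixels where both inputs are exactly neutral (zero weight on both sides) is our own assumption.

```python
import numpy as np

def fuse_chroma(x1, x2, tau=128.0):
    """Fuse a Cb (or Cr) channel pair: weight each value by its distance from neutral."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    w1, w2 = np.abs(x1 - tau), np.abs(x2 - tau)
    denom = w1 + w2
    fused = (x1 * w1 + x2 * w2) / np.where(denom == 0, 1.0, denom)
    # Assumption: if both inputs are exactly neutral, keep the neutral value.
    return np.where(denom == 0, tau, fused)
```

A pixel whose chroma is exactly 128 (no color information) receives zero weight, so the more saturated input dominates; two equally saturated but opposite chroma values average back to neutral.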

Figure 5: Comparison of the proposed method with Mertens et al. The zoomed region of the result by Mertens et al. in (d) shows that some highlight regions are not completely retained from the input. The zoomed region of the result by Mertens et al. in (j) shows that fine details of the lamp are missing.
Figure 6: Comparison of the proposed method with Li et al. [9], Li et al. [10] and Shen et al. [23] for Balloons and Office. Image courtesy of Kede Ma.

4 Experiments and Results

We have conducted an extensive evaluation and comparison study against state-of-the-art algorithms for a variety of natural images. For evaluation, we have chosen standard image sequences covering different image characteristics, including indoor and outdoor, day and night, natural and artificial lighting, and linear and non-linear exposure. The proposed algorithm is compared against seven best-performing MEF algorithms: (1) Mertens09 [16], (2) Li13 [10], (3) Li12 [9], (4) Ma15 [14], (5) Raman11 [20], (6) Shen11 [23] and (7) Guo17 [12]. To evaluate the performance of the algorithms objectively, we adopt MEF SSIM. Although a number of other IQA models for general image fusion have been reported, none of them makes adequate quality predictions of subjective opinions [15].

4.1 DeepFuse - Baseline

So far, we have discussed training the CNN model in an unsupervised manner. One interesting variant is to train the CNN model with the results of other state-of-the-art methods as ground truth. This experiment tests the capability of the CNN to learn complex fusion rules from the data itself, without the help of the MEF SSIM loss function. The ground truth is selected as the better of the Mertens [16] and GFF [10] results based on the MEF SSIM score. The choice of loss function to calculate the error between ground truth and estimated output is crucial for training a CNN in a supervised fashion. The Mean Squared Error, or \ell_2 loss, is generally chosen as the default cost function for training CNNs and is desired for its smooth optimization properties. While the \ell_2 loss function is better suited for classification tasks, it may not be a correct choice for image processing tasks [29]. It is also a well-known phenomenon that MSE does not correlate well with human perception of image quality [27]. To obtain visually pleasing results, the loss function should correlate well with the HVS, like the Structural Similarity Index (SSIM) [27]. We have experimented with different loss functions, namely \ell_1, \ell_2 and SSIM.
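The contrast between MSE and an SSIM-style loss can be made concrete with a global (single-window) SSIM, a deliberate simplification of the per-window index of [27]:

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM: luminance, contrast and structure compared globally."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def ssim_loss(x, y):
    return 1.0 - ssim_global(x, y)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
shifted = img + 0.1              # same structure, globally biased brightness
mse = ((img - shifted) ** 2).mean()
```

A uniform brightness shift yields a tiny, structure-blind MSE, while the SSIM luminance term penalizes it; the supervised baseline below uses per-window SSIM rather than this global simplification.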

Figure 7: Comparison of the proposed method with Ma et al. [14] for Table sequence. The zoomed region of the result by Ma et al. shows the artificial halo artifact around the edges of the lamp. Image courtesy of Kede Ma.

The fused image appears blurred when the CNN is trained with the \ell_2 loss function. This effect, termed regression to the mean, is due to the fact that the \ell_2 loss compares the result and the ground truth pixel by pixel. The \ell_1 loss gives a sharper result than \ell_2, but it has a halo effect along the edges. Unlike \ell_1 and \ell_2, results by the CNN trained with the SSIM loss function are both sharp and artifact-free. Therefore, SSIM is used as the loss function to calculate the error between generated output and ground truth in this experiment.

The quantitative comparison between the DeepFuse baseline and the unsupervised method is shown in Table ?. The MEF SSIM scores show the superior performance of unsupervised DeepFuse over the baseline in almost all test sequences. The reason is that, for the baseline, the amount of learning is upper-bounded by the other algorithms, as its ground truth comes from Mertens et al. [16] or Li et al. [10]; the baseline method does not exceed either of them.

The idea behind this experiment is to combine the advantages of all previous methods while avoiding the shortcomings of each. From Figure 4, we observe that though DF-baseline is trained with the results of other methods, it can produce results that do not exhibit any of the artifacts observed in those results.

Figure 8: Comparison of the proposed method with Ma et al. [14]. A close-up look at the results for Lighthouse sequence. The results by Ma et al. show a halo effect along the roof and lighthouse. Image courtesy of Kede Ma.

4.2 Comparison with State-of-the-art

Comparison with Mertens et al.: Mertens et al. [16] is a simple and effective weighting-based image fusion technique with multi-resolution blending to produce smooth results. However, it suffers from the following shortcomings: (a) it picks the “best” parts of each image for fusion using hand-crafted features like saturation and well-exposedness. This approach works well for stacks with many exposure images, but for exposure image pairs it fails to maintain uniform brightness across the whole image. Compared to Mertens et al., DeepFuse produces images with consistent and uniform brightness across the whole image. (b) Mertens et al. does not preserve the complete image details of the under-exposed image. In Figure 5(d), the details of the tile area are missing in Mertens et al.'s result; likewise, in Figure 5(j), the fine details of the lamp are not present. DeepFuse, in contrast, has learned filters that extract features like edges and textures in C1 and C2, and preserves the finer structural details of the scene.

Comparison with Li et al. [9][10]: Similar to Mertens et al. [16], Li et al. [9][10] also suffer from the non-uniform brightness artifact (Figure 6). In contrast, our algorithm produces a more pleasing image with clear texture details.

Comparison with Shen et al. [23]: The results generated by Shen et al. show contrast loss and non-uniform brightness distortions (Figure 6). In Figure 6(e1), the brightness distortion appears in the cloud region: the clouds between the balloons look darker than the other regions. This distortion can be observed in the other test images as well (Figure 6(e2)). DeepFuse (Figure 6(f1) and (f2)), however, has learnt to produce results without any of these artifacts.

Figure 9: Filter visualization. Some of the filters learnt in the first layer resemble Gaussian, Difference of Gaussian, and Laplacian of Gaussian filters. Best viewed electronically, zoomed in.

Comparison with Ma et al. [14]: Figures 7 and 8 show comparisons between the results of Ma et al. and DeepFuse for the Lighthouse and Table sequences. Ma et al. proposed a patch-based fusion algorithm that fuses patches from the input images based on their patch strength, which is computed using a power weighting function on each patch. This weighting scheme introduces an unpleasant halo effect along edges (see Figures 7 and 8).
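A rough sketch of patch-strength weighting in this spirit is shown below; the exact strength measure, exponent, and patch decomposition of [14] differ, and the names here are illustrative only.

```python
import numpy as np

def patch_strength(patch, p=4):
    """Strength of a patch: the l2 norm of its zero-mean signal, raised to a
    power `p` (the power weighting referred to above; the value is illustrative)."""
    signal = patch - patch.mean()
    return np.linalg.norm(signal) ** p

def fuse_patch(patches, p=4):
    """Weighted combination of co-located patches from each exposure.
    High-contrast patches dominate, which near strong edges can produce
    the halo effect discussed above."""
    weights = np.array([patch_strength(q, p) for q in patches])
    weights = weights / (weights.sum() + 1e-12)
    return sum(w * q for w, q in zip(weights, patches))
```

With a flat patch and a high-contrast patch as inputs, the flat patch receives (near-)zero weight, so the fused patch is essentially the high-contrast one.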

Comparison with Raman et al. [20]: Figure 4(f) shows the fused result by Raman et al. for the House sequence. The result exhibits color distortion and contrast loss. In contrast, the proposed method produces a result with vivid colors and better contrast.

After examining the results through both subjective and objective evaluations, we observed that our method faithfully reproduces all the features in the input pair. We also notice that the results obtained by DeepFuse are free of artifacts such as darker regions and mismatched colors. Our approach preserves the finer image details along with higher contrast and vivid colors. The quantitative comparison between the proposed method and existing approaches in Table ? also shows that the proposed method outperforms the others on most of the test sequences. From the execution times shown in Table 2, we observe that our method is roughly 3-4× faster than Mertens et al. DeepFuse can easily be extended to more input images by adding additional streams before the merge layer. We have trained DeepFuse for sequences with 3 and 4 images. For sequences with 3 images, the average MEF SSIM score is 0.987 for DF and 0.979 for Mertens et al.; for sequences with 4 images, it is 0.972 for DF and 0.978 for Mertens et al. We attribute the dip in performance on 4-image sequences to insufficient training data; with more training data, DF can be trained to perform better in such cases as well.

Figure 10: Application of DeepFuse CNN to multi-focus fusion. The images in the first two columns are input images with varying focus. The all-in-focus result by DeepFuse is shown in the third column. Images courtesy of Liu et al. and Slavica Savic.

4.3 Application to Multi-Focus Fusion

In this section, we discuss the possibility of applying our DeepFuse model to other image fusion problems. Due to the limited depth-of-field of present-day cameras, only objects within a limited depth range are in focus, and the remaining regions appear blurry. In such scenarios, Multi-Focus Fusion (MFF) techniques are used to fuse images taken with varying focus into a single all-in-focus image. The MFF problem is very similar to MEF, except that the input images vary in focus rather than in exposure. To test the generalizability of the CNN, we used the already trained DeepFuse CNN to fuse multi-focus images without any fine-tuning for the MFF problem. The DeepFuse results on a publicly available multi-focus dataset (Figure 10) show that the CNN filters have learnt to identify the properly focused regions in each input image and successfully fuse them together. This also suggests that the learnt CNN filters are generic and could be applied to general image fusion.
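For intuition only, the sketch below fuses a multi-focus pair by picking, per pixel, the input with the highest local variance. This hand-crafted focus measure is a stand-in for the learned DeepFuse features, not the method used in the paper, and all names are illustrative.

```python
import numpy as np

def local_contrast(img, k=3):
    """Local variance over a k x k window as a crude focus measure
    (a stand-in for the learned CNN features)."""
    H, W = img.shape
    out = np.zeros_like(img)
    pad = np.pad(img, k // 2, mode='edge')
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + k, j:j + k].var()
    return out

def fuse_multifocus(images):
    """Per pixel, select the input image with the highest local contrast."""
    sharp = np.stack([local_contrast(im) for im in images])
    idx = sharp.argmax(axis=0)
    stack = np.stack(images)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

A learned model replaces the hard per-pixel selection with feature-level merging, which avoids the seam artifacts that such winner-take-all rules tend to produce.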

Table 2: Computation time: running time in seconds of different algorithms on a pair of images. The numbers in bold denote the least time taken to fuse. †: tested with an NVIDIA Tesla K20c GPU; ‡: tested with an Intel Xeon @ 3.50 GHz CPU.

Resolution   Ma [14]‡   Li [10]‡   Mertens [16]‡   DF†
512×384      2.62       0.58       0.28            0.07
1024×768     9.57       2.30       0.96            0.28
1280×1024    14.72      3.67       1.60            0.46
1920×1200    27.32      6.60       2.76            0.82

5 Conclusion and Future Work

In this paper, we have proposed a method to efficiently fuse a pair of images with varied exposure levels to produce an output that is artifact-free and perceptually pleasing. DeepFuse is the first unsupervised deep learning method to perform static MEF. The proposed model extracts a set of common low-level features from each input image. The feature pairs from the input images are fused into a single feature map by the merge layer. Finally, the fused features are passed to the reconstruction layers to obtain the final fused image. We train and test our model on a large set of exposure stacks captured with diverse settings. Furthermore, our model requires no parameter fine-tuning for varying input conditions. Finally, through extensive quantitative and qualitative evaluation, we demonstrate that the proposed architecture performs better than state-of-the-art approaches for a wide range of input scenarios.
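The data flow described above (shared feature layers, additive merge, reconstruction) can be sketched as follows. The layer shapes and random weights are purely illustrative, a real model is trained with the MEF SSIM loss, and the naive convolution is written for readability rather than speed.

```python
import numpy as np

def conv2d(x, kernels):
    """Minimal 'same' 2-D convolution: x is HxWxCin, kernels is kxkxCinxCout."""
    k = kernels.shape[0]
    pad = k // 2
    H, W, _ = x.shape
    Cout = kernels.shape[-1]
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((H, W, Cout))
    for i in range(H):
        for j in range(W):
            # Contract the k x k x Cin window against all Cout kernels at once.
            out[i, j] = np.tensordot(xp[i:i + k, j:j + k], kernels, axes=3)
    return out

class TinyDeepFuse:
    """Paper-style data flow: shared feature layers (C1, C2), merge by
    addition, then reconstruction layers. Weights are random here; filter
    counts and kernel sizes are illustrative."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.c1 = rng.normal(0, 0.1, (5, 5, 1, 16))
        self.c2 = rng.normal(0, 0.1, (7, 7, 16, 32))
        self.c3 = rng.normal(0, 0.1, (7, 7, 32, 32))
        self.c4 = rng.normal(0, 0.1, (5, 5, 32, 1))

    def features(self, y):
        # Same C1/C2 weights are applied to every input image (tied weights).
        h = np.maximum(conv2d(y[..., None], self.c1), 0)
        return np.maximum(conv2d(h, self.c2), 0)

    def fuse(self, y1, y2):
        merged = self.features(y1) + self.features(y2)  # merge layer: addition
        h = np.maximum(conv2d(merged, self.c3), 0)       # reconstruction
        return conv2d(h, self.c4)[..., 0]                # fused luminance map
```

Extending to more inputs, as discussed in Section 4.2, only means summing more `features(...)` terms before the reconstruction layers.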

In summary, the advantages offered by DF are as follows: 1) Better fusion quality: DF produces better fusion results even for extreme exposure image pairs. 2) SSIM over ℓ2: In [29], the authors report that the ℓ2 loss outperforms the SSIM loss function. In their work, the authors implemented an approximate version of SSIM and found it to perform sub-par compared to ℓ2. We have implemented the exact SSIM formulation and observed that the SSIM loss function performs much better than MSE and ℓ1. Further, we have shown that a complex perceptual loss such as MEF SSIM can be successfully incorporated with CNNs in the absence of ground truth data. These results encourage the research community to examine other perceptual quality metrics and use them as loss functions to train neural networks. 3) Generalizability to other fusion tasks: the proposed fusion is generic in nature and could easily be adapted to other fusion problems as well. In our current work, DF is trained to fuse static images. For future research, we aim to generalize DeepFuse to fuse images with object motion as well.
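For reference, the exact single-scale SSIM of Wang et al. [27] is shown below, computed over whole images for brevity. MEF SSIM [15] additionally builds a patch-wise reference signal from the input stack, which is omitted here, so this is only an illustration of the loss structure, not the training objective itself.

```python
import numpy as np

def ssim(x, y, L=1.0):
    """Exact single-scale SSIM of Wang et al. [27] over whole images.
    `L` is the dynamic range; C1, C2 use the standard constants."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def ssim_loss(x, y):
    """SSIM turned into a minimizable loss: identical images give loss 0."""
    return 1.0 - ssim(x, y)
```

Because every term is built from means, variances, and covariances, the loss is differentiable with respect to the fused image, which is what makes SSIM-style objectives usable for end-to-end CNN training.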

Footnotes

  1. The exposure bias value indicates the amount of exposure offset from the auto exposure setting of a camera. For example, EV 1 is equal to doubling the auto exposure time (EV 0).
  2. In a user survey conducted by Ma et al. [15], the Mertens and GFF results were ranked better than other MEF algorithms.

References

  1. http://www.empamedia.ethz.ch/hdrdatabase/index.php.
    EMPA HDR image database. Accessed: 2016-07-13.
  2. VQA: Visual question answering.
    S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. In Proceedings of the IEEE International Conference on Computer Vision, 2015.
  3. Enhanced image capture through fusion.
    P. J. Burt and R. J. Kolczynski. In Proceedings of the International Conference on Computer Vision, 1993.
  4. Image denoising via CNNs: An adversarial approach.
    N. Divakar and R. V. Babu. In New Trends in Image Restoration and Enhancement, CVPR workshop, 2017.
  5. Fusion of multi-exposure images.
    A. A. Goshtasby. Image and Vision Computing, 23(6):611–618, 2005.
  6. Mask R-CNN.
    K. He, G. Gkioxari, P. Dollár, and R. Girshick. arXiv preprint arXiv:1703.06870, 2017.
  7. Deep residual learning for image recognition.
    K. He, X. Zhang, S. Ren, and J. Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.
  8. Deep learning.
    Y. LeCun, Y. Bengio, and G. Hinton. Nature, 521(7553):436–444, 2015.
  9. Fast multi-exposure image fusion with median filter and recursive filter.
    S. Li and X. Kang. IEEE Transaction on Consumer Electronics, 58(2):626–632, May 2012.
  10. Image fusion with guided filtering.
    S. Li, X. Kang, and J. Hu. IEEE Transactions on Image Processing, 22(7):2864–2875, July 2013.
  11. R-fcn: Object detection via region-based fully convolutional networks.
    Y. Li, K. He, J. Sun, et al. In Advances in Neural Information Processing Systems, 2016.
  12. Detail-enhanced multi-scale exposure fusion.
    Z. Li, Z. Wei, C. Wen, and J. Zheng. IEEE Transactions on Image Processing, 26(3):1243–1252, 2017.
  13. Multi-focus image fusion with dense SIFT.
    Y. Liu, S. Liu, and Z. Wang. Information Fusion, 23:139–155, 2015.
  14. Multi-exposure image fusion: A patch-wise approach.
    K. Ma and Z. Wang. In IEEE International Conference on Image Processing, 2015.
  15. Perceptual quality assessment for multi-exposure image fusion.
    K. Ma, K. Zeng, and Z. Wang. IEEE Transactions on Image Processing, 24(11):3345–3356, 2015.
  16. Exposure fusion.
    T. Mertens, J. Kautz, and F. Van Reeth. In Pacific Conference on Computer Graphics and Applications, 2007.
  17. Recurrent convolutional neural networks for scene parsing.
    P. H. Pinheiro and R. Collobert. arXiv preprint arXiv:1306.2795, 2013.
  18. Ghosting-free multi-exposure image fusion in gradient domain.
    K. R. Prabhakar and R. V. Babu. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2016.
  19. Bilateral filter based compositing for variable exposure photography.
    S. Raman and S. Chaudhuri. In Proceedings of EUROGRAPHICS, 2009.
  20. Reconstruction of high contrast images for dynamic scenes.
    S. Raman and S. Chaudhuri. The Visual Computer, 27:1099–1114, 2011.
  21. Enabling my robot to play pictionary: Recurrent neural networks for sketch recognition.
    R. K. Sarvadevabhatla, J. Kundu, et al. In Proceedings of the ACM on Multimedia Conference, 2016.
  22. Exposure fusion using boosting laplacian pyramid.
    J. Shen, Y. Zhao, S. Yan, X. Li, et al. IEEE Trans. Cybernetics, 44(9):1579–1590, 2014.
  23. Generalized random walks for fusion of multi-exposure images.
    R. Shen, I. Cheng, J. Shi, and A. Basu. IEEE Transactions on Image Processing, 20(12):3634–3646, 2011.
  24. Image enhancement method via blur and noisy image fusion.
    M. Tico and K. Pulli. In IEEE International Conference on Image Processing, 2009.
  25. Extreme learning machine based exposure fusion for displaying HDR scenes.
    J. Wang, B. Shi, and S. Feng. In International Conference on Signal Processing, 2012.
  26. Exposure fusion based on steerable pyramid for displaying high dynamic range scenes.
    J. Wang, D. Xu, and B. Li. Optical Engineering, 48(11):117003–117003, 2009.
  27. Image quality assessment: from error visibility to structural similarity.
    Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  28. Reference-guided exposure fusion in dynamic scenes.
    W. Zhang and W.-K. Cham. Journal of Visual Communication and Image Representation, 23(3):467–475, 2012.
  29. Loss functions for neural networks for image processing.
    H. Zhao, O. Gallo, I. Frosio, and J. Kautz. arXiv preprint arXiv:1511.08861, 2015.