Fast and High Quality Highlight Removal
from A Single Image
Abstract
Specular reflection exists widely in photography and causes the recorded color to deviate from its true value, so fast and high-quality highlight removal from a single natural image is of great importance. In spite of the progress in highlight removal over the past decades, achieving wide applicability to the large diversity of natural scenes remains quite challenging. To handle this problem, we propose an analytic solution to highlight removal based on an ℓ2 chromaticity definition and the corresponding dichromatic model. Specifically, this paper derives a normalized dichromatic model for the pixels with identical diffuse color: a unit circle equation of projection coefficients in two subspaces that are orthogonal to and parallel with the illumination, respectively. In the former, illumination-orthogonal subspace, which is specular-free, we can conduct robust clustering with an explicit criterion to determine the cluster number adaptively. In the latter, illumination-parallel subspace, a property called the pure diffuse pixels distribution rule (PDDR) helps map each specular-influenced pixel to its diffuse component. In terms of efficiency, the proposed approach involves few complex calculations, and thus can remove highlights from high-resolution images fast. Experiments show that this method achieves superior performance in various challenging cases.
I. Introduction
Color information describes the scene's reflectance behaviors and plays an important role in various computer vision tasks, such as segmentation, recognition, matching and intrinsic image retrieval. An image recording only the diffuse reflection characterizes the color distribution of the scene, but the intensities of widely existing non-Lambertian surfaces, such as the scene displayed in Fig. 1(a), largely deviate from the true color information in the specular regions. There is also work based directly on the specular component, such as shape from specular reflection [1]. Therefore, separating the highlight from the diffuse component in images of non-Lambertian scenes is of crucial importance.
I-A. Related Works
Highlight removal has been studied for decades, as reviewed in [2]. Physically, the degree of light polarization can be considered a strong indicator of specular reflection, while diffuse reflection is considered unpolarized. Therefore, some polarization-based methods with hardware assistance have been proposed, such as [3, 4, 5] and [6]. Highlight also exhibits varying behaviors under different illumination directions or from different views, so some approaches remove highlights from multiple images. For example, Feris et al. [7] use a set of images captured from the same point of view but with different flash positions to restore the diffuse component. With a moving light source, Sato and Ikeuchi [8] introduce a method exploiting color signature analysis using the dichromatic model. Similarly, Lin and Shum [9] use linear basis functions to separate diffuse and specular components from two images captured under different light directions. Instead of using multi-illumination inputs, Lin et al. [10] alternatively propose a method using multi-baseline stereo, based on the observation that the diffuse component is view-independent while the specular component is view-dependent. Statistical techniques can also help highlight removal.
By thresholding the gradient histograms, defined as the difference between the histograms of two different intensities, Chen et al. [11] reconstruct the specular field successfully, but their method requires more than 200 input images. Differently, exploiting some statistical properties of natural scenes, Weiss [12] formulates the recovery of the diffuse component as a maximum likelihood estimation problem from an image sequence with the same diffuse component but different specular components. Yang et al. [13] resort to statistical methods to remove specularity from two images with non-overlapping specular highlights. Although able to successfully remove the specularity, approaches with hardware assistance or from multiple images are much less practical than single-image based approaches.
Removing highlights from a single image often requires known illumination, which can be either calibrated or estimated computationally [14][15][16]. Some works [17][18][19] require robust color segmentation for accurate specular detection, which is quite challenging. Tan and Ikeuchi [20] make use of the difference between specular and diffuse pixels in their proposed specular-free image to remove the highlight effects. They iteratively replace the chromaticity at a specular position with that of the neighboring pixel with the maximum diffuse chromaticity until the algorithm converges. To reduce the high computation cost of such a greedy searching strategy, Yang et al. [21] accelerate this method by introducing bilateral filtering. Similarly, Mallick et al. [22] propose a PDE algorithm, which iteratively erodes the specular channel in the SUV color space, as in [23]. But these propagation methods cannot handle large-area highlights. Also using local properties, Kim et al. [24] propose an optimization algorithm utilizing the dark channel prior proposed in [25]. Another optimization-based method is proposed by Akashi and Okatani [26], who formulate the separation as a non-negative matrix factorization problem. This method is sensitive to the initial values, and one must run it several times to get the most reasonable result. Besides, both optimization-based methods are very slow. Researchers have also spent efforts on highlight removal techniques based on material clustering. Tan et al. [27] and Shen and Zheng [28] both propose to conduct specular-independent material clustering and relate the specular intensity to its specular-free counterpart. The strategy that clusters materials first and then recovers the intrinsic diffuse colors in each cluster is promising. However, in the clustering step, the specular-free image in [28] is channel-order-independent and cannot discriminate colors like [1 2 3], [2 1 3] and [3 2 1].
In the step identifying the pure diffuse pixels of each cluster, both methods either involve some approximations or make strong assumptions, and fail in some cases, such as materials approaching white and strong specularity.
I-B. Our Approach
This paper focuses on highlight removal from a single image and targets wide applicability to the large variety of natural scenes. Besides, the approach is designed to be memory-saving and of low running cost so as to handle high-resolution images fast.
For a non-Lambertian surface, the reflectance can be represented by a linear combination of diffuse and specular components. Highlight removal thus naturally becomes a signal separation problem. Adopting a two-step strategy similar to [27], we decompose the highlight removal procedure into two subproblems: (i) discriminating the intrinsic colors (i.e., diffuse reflectance) of the pixels, whether affected by specular highlight or not; (ii) finding the pure diffuse pixels and then recovering the diffuse component in each cluster. Accurate identification of pixels with the same material is a nontrivial problem due to the influence of specularity. Within each cluster, distinguishing the pure diffuse pixels from those affected by specularity is also difficult.
To avoid the influence of specularity, material clustering in a specular-free subspace is favorable. With known illumination, we directly project the image intensities into two subspaces, orthogonal to and parallel with the illumination direction, respectively. It is noteworthy that previous methods are based on the conventional chromaticity definition, and some approximations exist in the succeeding derivation of their highlight removal methods. In this paper, we propose an ℓ2 chromaticity definition and obtain an analytical highlight removal solution. From the newly defined chromaticity and the corresponding normalized dichromatic model, we derive an explicit analytical expression, a unit circle equation, between the parallel and orthogonal components of the pixels with the same chromaticity. Guided by this expression, an error term is defined to measure the color uniformity among the pixels and adaptively determine the number of diffuse colors in the clustering. Fig. 1(b) displays the clustering results of the scene in Fig. 1(a). We can see that the clustering preserves the smoothly varying reflectance on the squashes and the inter-reflection between them. To find the pure diffuse pixels in each cluster, a strictly defined property under the normalized dichromatic model can be derived: within each cluster, the pixels with different specular strengths form a circular arc lying on the plane spanned by the illumination and the illumination-orthogonal directions. The pixels move along the arc monotonically with increasing specular strength, and the pure diffuse pixels are located at one end. We name this pattern the pure diffuse pixels distribution rule (PDDR). This rule is similar to the model proposed by Finlayson and Drew in [29], where the pixels of the same material lie on a straight line; however, that model needs at least 4 channels. With the PDDR, we can easily determine the specular strength of each pixel and recover the diffuse intensity.
Although bearing some similarity to the two-step strategy in [27] and [28], our approach differs largely from these two works in finding the pure diffuse pixels and demonstrates higher performance. Based only on the common assumption that there exist pure diffuse pixels for each material in the scene, our PDDR property makes the detection of these pixels very easy. Besides, without any approximation in the whole derivation, the detected pure diffuse pixels and the highlight removal results are of high accuracy. In contrast, in order to find the pixels unaffected by specular highlight, both [27] and [28] make additional assumptions. Tan et al. [27] assume that the 1st PCA basis of the diffuse reflectance is wavelength-independent, which results in unpleasant color bias and high sensitivity to noise; the required calibration of the RGB response curves also limits its practicality. In [28], the criterion for recovering the diffuse color fails for materials whose color approaches white. Besides, the assumption in [28] that the number of specular-affected pixels in each cluster does not exceed a threshold makes that method fail on scenes with widespread highlights.
In summary, the proposed approach has advantages over previous approaches in multiple aspects.

- Due to the newly defined chromaticity, the normalized dichromatic model and the PDDR property, we can achieve robust scene-adaptive material clustering and accurate recovery of the diffuse colors, even when they are similar to the illumination.
- Making use of the global structure instead of only local information, the proposed method can handle images with large-area or strong specularity.
- Without complex optimization, our method can remove highlights fast from high-resolution images.
II. The Normalized Dichromatic Model
In this paper, we normalize the widely used dichromatic model [30] by the ℓ2 norm and correspondingly derive an orthogonal decomposition strategy for the surface appearance.
For a color camera, the imaging process can be formulated as

  I_c(x) = w_d(x) ∫_Ω R(λ, x) L(λ) q_c(λ) dλ + w_s(x) ∫_Ω L(λ) q_c(λ) dλ.   (1)

Here I_c(x) is the intensity of channel c at pixel x, with c ∈ {r, g, b} indexing the camera channel and x representing the 2D location. On the right-hand side of the equation, the two terms are respectively the diffuse and specular components, with w_d(x) and w_s(x) being the corresponding strengths. In each term, λ denotes the wavelength with range being Ω, while R(λ, x), L(λ) and q_c(λ) represent the surface reflectance, the illumination spectrum and the camera's spectral response of channel c, respectively.
Denoting the pure diffuse component and specular highlight as D_c(x) = ∫_Ω R(λ, x) L(λ) q_c(λ) dλ and S_c = ∫_Ω L(λ) q_c(λ) dλ, Eq. (1) can be represented by

  I_c(x) = w_d(x) D_c(x) + w_s(x) S_c.   (2)
The diffuse component represents the inherent color of the surface and the specular highlight implies the color of the illumination, as illustrated in Fig. 2. Different from the previous definition of color chromaticity as used in [14][24][27][28][31], we propose an ℓ2 definition as

  σ_c(x) = I_c(x) / ‖I(x)‖₂.   (3)

In this equation, ‖I(x)‖₂ = (Σ_c I_c(x)²)^(1/2), with I_c(x) being the intensity of the c-th channel.
Similarly, we denote Λ(x) and Γ as the chromaticity of the diffuse component and the specular highlight respectively:

  Λ_c(x) = D_c(x) / ‖D(x)‖₂,   Γ_c = S_c / ‖S‖₂.   (4)
Based on the above definition and assuming that all the pixels are illuminated by the identical illumination color (i.e., Γ is position-independent), the normalized reflectance σ(x) can be written as

  σ(x) = α(x) Λ(x) + β(x) Γ.   (5)

Here the coefficients α(x) = w_d(x) ‖D(x)‖₂ / ‖I(x)‖₂ and β(x) = w_s(x) ‖S‖₂ / ‖I(x)‖₂.
The chromaticity of the diffuse component Λ(x) can be projected into two subspaces, one parallel with and the other orthogonal to the illumination direction Γ, as illustrated by the two planes in Fig. 2. The projection can be conducted according to the orthogonal projection algorithm proposed by Chang [32] as

  Λ(x) = c₁(x) Γ + c₂(x) Γ⊥(x),   (6)

in which Γ⊥(x) ⊥ Γ, and the items c₁(x) and c₂(x) are the projection coefficients along the two directions, respectively. Since Λ(x) is normalized, we can easily derive that

  c₁²(x) + c₂²(x) = 1.   (7)
Substituting Eq. (6) into Eq. (5), we obtain

  σ(x) = [α(x) c₁(x) + β(x)] Γ + α(x) c₂(x) Γ⊥(x),   (8)

which can be further simplified into

  σ(x) = p(x) Γ + q(x) Γ⊥(x)   (9)

by setting p(x) = α(x) c₁(x) + β(x) and q(x) = α(x) c₂(x). Since σ(x), Γ and Γ⊥(x) are normalized, we have

  p²(x) + q²(x) = 1.   (10)
As shown in Fig. 2, under the proposed chromaticity definition, all the reflectances with identical diffuse chromaticity satisfy this equation, i.e., lie on the dash-dot unit circle.
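The derivation above can be checked numerically. The following is a minimal sketch, assuming a known unit-norm illumination chromaticity Γ; the diffuse color and specular strengths are illustrative values, not data from the paper.

```python
import numpy as np

def l2_chromaticity(I):
    """Eq. (3): divide each RGB vector by its l2 norm."""
    return I / np.linalg.norm(I, axis=-1, keepdims=True)

def decompose(sigma, Gamma):
    """Eqs. (6)-(9): split a unit chromaticity sigma into the coefficient p
    along the illumination direction Gamma and the coefficient q along the
    orthogonal direction."""
    p = sigma @ Gamma                      # parallel coefficient p(x)
    residual = sigma - np.outer(p, Gamma)  # part lying in the orthogonal subspace
    q = np.linalg.norm(residual, axis=-1)  # orthogonal coefficient q(x)
    return p, q

# white illumination, one diffuse color, increasing specular strengths
Gamma = np.ones(3) / np.sqrt(3.0)
D = np.array([0.8, 0.5, 0.1])                  # arbitrary diffuse color
ws = np.linspace(0.0, 2.0, 5)                  # specular strengths w_s
I = D[None, :] + ws[:, None] * Gamma[None, :]  # Eq. (2) with w_d = 1

p, q = decompose(l2_chromaticity(I), Gamma)
print(np.allclose(p ** 2 + q ** 2, 1.0))  # True: the unit circle of Eq. (10)
print(np.all(np.diff(p) > 0))             # True: p grows with specular strength
```

Pixels sharing a diffuse color keep a fixed orthogonal direction, so only (p, q) move along the unit circle as w_s varies; this is the geometry exploited in the next section.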
III. Scene-Adaptive Highlight Removal
According to the normalized dichromatic model, the orthogonal direction Γ⊥(x) is solely determined by the diffuse chromaticity and can be used to discriminate different materials. Under the guidance of the PDDR property, the coefficient p(x) corresponding to the illumination direction Γ can be used to find the pure diffuse pixels and remove the specular highlight in each cluster. Summing up these properties in both directions, we propose a highlight removal framework, illustrated in Fig. 3, using the image in Fig. 1(a) as a running example.
Given the input specular-contaminated image and the corresponding illumination (shown in the top-left block), we first compute the normalized specular-free component of each pixel, as shown in the bottom-left block. Afterwards, we iteratively conduct specular-free clustering using KMeans, increasing the number of clusters until convergence, with the convergence criterion defined from the fitting error to the analytical model in Eq. (10), as visualized in the top row, middle column of Fig. 3. The evolution of the clustering results is displayed in the lower two rows of the middle column of Fig. 3: the upper one in the image space and the lower one in the parameter space. Thereafter, we separate the specular and diffuse components at each pixel, with the results shown in the rightmost column. The following subsections detail the two successive steps: material clustering and diffuse component recovery.
III-A. Material Clustering
For a specularity-influenced image, such as the one displayed in Fig. 4(a), it is difficult to locate the pixels with the same diffuse reflectance in the original space because of the influence of specularity, as shown in Fig. 4(b). To avoid the effects of the specularity, we cluster the materials in the illumination-orthogonal subspace, in which the pixels with the same diffuse color but different specular strengths cluster well, as shown in Fig. 4(c), which corresponds to the normalized data along Γ⊥ in Fig. 2. Without loss of generality, the proposed algorithm determines the number of clusters adaptively.
We start from a cluster number no larger than the number of true material types, and increase it successively until it reaches the correct number. Usually, we can set the initial cluster number to 1 to be safe. In each iteration, we first project the chromaticity of the input image into the illumination-orthogonal subspace for clustering. Then, for each cluster, we replace Γ⊥(x) with the cluster center and re-project the normalized reflectance σ(x) onto Γ and the cluster center to get the coefficients p(x) and q(x). According to Eq. (10), if the sum of squares of the coefficients is close to 1, i.e., the points lie on a unit circle, the cluster center represents this cluster well. On the contrary, if more pixels than a given ratio (e.g., 10%) have coefficients deviating from the unit circle, we suppose that the preset cluster number is incorrect. We use the fitting error to the unit circle as the clustering precision, and set the deviation threshold to 0.1 empirically. After checking all the clusters, we increase the cluster number by the number of clusters not meeting the precision criterion and go into the next iteration. Fig. 4(d) shows the clustering result and the corresponding error for Fig. 4(a) with an incorrect cluster number; the result obviously deviates from the dichromatic model.
We terminate the iteration when the cluster number stops increasing; the correct clustering result of Fig. 4(a) is visualized in the leftmost image of Fig. 4(e). The corresponding fitting error to the proposed unit circle model and its distribution are also displayed. Experimentally, the algorithm converges within 5 iterations for most scenes.
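The iteration above can be sketched in a few lines. This is a toy illustration under simplifying assumptions: a tiny deterministic k-means stands in for the KMeans step, the scene is synthetic with two materials, and the helper names (`adaptive_cluster`, `kmeans`) are ours, not from the paper.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # minimal k-means with farthest-point initialization (deterministic)
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def adaptive_cluster(sigma, Gamma, max_rounds=10):
    """Grow the cluster number until every cluster fits the unit circle of
    Eq. (10): re-projected coefficients (p, q) must satisfy p^2 + q^2 ~ 1
    for at least 90% of the member pixels."""
    ortho = sigma - (sigma @ Gamma)[:, None] * Gamma[None]  # specular-free part
    ortho /= np.linalg.norm(ortho, axis=1, keepdims=True)
    k = 1
    for _ in range(max_rounds):
        labels, centers = kmeans(ortho, k)
        bad = 0
        for j in range(k):
            member = sigma[labels == j]
            if len(member) == 0:
                continue
            center = centers[j] / np.linalg.norm(centers[j])
            p, q = member @ Gamma, member @ center
            if np.mean(np.abs(p ** 2 + q ** 2 - 1.0) > 0.1) > 0.10:
                bad += 1  # this cluster mixes materials: subdivide it
        if bad == 0:
            return labels, k
        k += bad
    return labels, k

# synthetic two-material scene under white illumination
rng = np.random.default_rng(1)
Gamma = np.ones(3) / np.sqrt(3.0)
D = np.array([[0.9, 0.2, 0.1], [0.1, 0.8, 0.3]])
mats = rng.integers(0, 2, 200)
I = D[mats] + rng.uniform(0.0, 0.5, 200)[:, None] * Gamma[None]
sigma = I / np.linalg.norm(I, axis=1, keepdims=True)
labels, k = adaptive_cluster(sigma, Gamma)
print(k)  # 2: one cluster per material
```

Because the orthogonal direction of a pixel is independent of its specular strength, the clustering is undisturbed by highlights, which is the point of working in the specular-free subspace.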
III-B. PDDR-Based Diffuse Component Recovery
Theoretically, the pure diffuse pixels in each cluster are those with the minimum value of p(x), which corresponds to the smallest parallel component. Thus, the leftmost pixels on each circular arc in Fig. 4(e) are the pure diffuse pixels, as illustrated in Fig. 2. However, noise always exists in real cases, so we treat the pixels falling around the first peak of the histogram of p(x) as purely diffuse, i.e., β(x) = 0. The histograms of p(x) for the four regions in Fig. 4(a) are plotted in Fig. 4(f), with the solid dots denoting the coefficients corresponding to the pure diffuse reflections.
After finding the pure diffuse pixels in each cluster, it is easy to remove the specular highlight. From Eq. (6), recovering the diffuse component at x is equivalent to finding the coefficients along Γ and along Γ⊥(x). Since q(x) can be calculated by directly projecting σ(x) onto Γ⊥(x), the problem can be further simplified into finding the ratio of c₁(x) to c₂(x). On one hand, for the pixels with the same diffuse component, this ratio is invariant to the specular strength. On the other hand, the ratio of c₁(x) to c₂(x) is equivalent to the ratio of p(x) to q(x) for the pure diffuse pixels with β(x) = 0. Thus, the ratio in each cluster can be directly calculated from the known pure diffuse pixels.
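Within one cluster, the recovery described above reduces to a few vector operations. A minimal sketch, assuming the cluster's orthogonal direction is known and w_d = 1 for all pixels; the helper name `recover_diffuse` is ours.

```python
import numpy as np

def recover_diffuse(I, Gamma, Gamma_perp):
    """Recover the diffuse component of every pixel in one cluster."""
    norm = np.linalg.norm(I, axis=1)
    p = (I @ Gamma) / norm       # parallel coefficient of the chromaticity
    q = (I @ Gamma_perp) / norm  # orthogonal, specular-free coefficient
    pure = np.argmin(p)          # PDDR: pure diffuse pixels have minimal p
    ratio = p[pure] / q[pure]    # c1/c2, invariant over the whole cluster
    # keep each pixel's orthogonal component and rebuild the parallel one
    return (norm * q)[:, None] * (ratio * Gamma + Gamma_perp)[None, :]

Gamma = np.ones(3) / np.sqrt(3.0)
D = np.array([0.8, 0.4, 0.2])        # the cluster's diffuse color
Gp = D - (D @ Gamma) * Gamma         # its orthogonal direction
Gp /= np.linalg.norm(Gp)
ws = np.array([0.0, 0.2, 0.6, 1.0])  # pixel 0 is purely diffuse
I = D[None] + ws[:, None] * Gamma[None]
print(np.allclose(recover_diffuse(I, Gamma, Gp), D))  # True for every pixel
```

The specular component follows for free as the residual I minus the recovered diffuse part.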
IV. Experiments and Analysis
To validate the proposed method, a series of experiments is conducted. Firstly, we quantitatively test our method on both synthetic and real data with ground truth, and demonstrate its higher performance compared with previous methods. Then, we run our algorithm on several images including non-Lambertian reflection and provide comparisons with several state-of-the-art (STAR) methods. Next, considering that the above experiments assume known illumination color and do not consider the influence of improper clustering, we perform two experiments to test the robustness of our approach to inaccurate illumination color and clustering. Lastly, we discuss the efficiency of our approach, in comparison with two other existing methods achieving near real-time processing on VGA images.
For performance comparison with STAR, we compare with the most cited method [21], the newest one [26], and one also based on material clustering [28]. Since the authors of [26] only provide results on real data, we do not include their results in the synthetic experiments. Neither source code nor running results of [24] are available, so this comparison is omitted.
In terms of parameter setting, we only need to set the threshold for stopping cluster subdivision. In our implementation, we subdivide a cluster when more than 10% of the pixels within it deviate more than 0.1 from the unit circle. In order to be robust to sensor noise, we force the pixel number within each cluster to be larger than 300.
TABLE I: PSNR (dB) comparison on images with Gaussian noise of standard deviation σ = 0, 3 and 6.

Scenes   σ   Proposed   Shen et al. [28]   Yang et al. [21]
Synth    0     51.4          29.8               39.0
         3     34.4          29.3               34.3
         6     32.4          27.5               29.4
Fruits   0     40.4          38.9               37.6
         3     37.4          35.5               34.0
         6     35.1          31.8               30.1
Masks    0     34.2          34.1               32.2
         3     33.0          32.5               29.8
         6     32.0          29.9               27.5
IV-A. Quantitative Evaluation
Separating the diffuse and specular components of a close-to-white surface is challenging. To illustrate the advantage of our method in this situation, we synthesize images with diffuse chromaticity being [0.7053 0.7053 0.0705], [0.6667 0.6667 0.3333] and [0.5965 0.5965 0.5369] respectively, and illumination chromaticity [0.5774 0.5774 0.5774], as shown in columns 1-3 of Fig. 5. Although behaving well in the first two columns, [28] cannot cope with the scene whose diffuse color approaches white, i.e., the 3rd column, due to the approximation in its criterion for recovering the diffuse color. Yang et al. [21] give plausible results, but the separated specular component tends to be weaker than the ground truth. In comparison, our approach demonstrates the best performance. This is mainly because we strictly follow the dichromatic reflection model, while the assumptions made by [28] and [21] are violated in such cases.
To test the performance on textured images, we compare our method with [28] and [21] using images with ground truth diffuse components, as shown in columns 4-6 of Fig. 5. The 4th column shows a synthetic image, and the other two are from [21]. Since noise always exists in data capture, we also add Gaussian white noise with standard deviation σ = 3 and 6 to the input images. PSNR is used as the evaluation metric, as shown in Table I. The results on the three textured images consistently show our superior performance, and the superiority is more prominent at higher noise levels.
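For reference, the PSNR metric used in this evaluation is the standard one; the peak value of 255 below is an assumption about the image dynamic range, not a detail stated in the paper.

```python
import numpy as np

def psnr(recovered, ground_truth, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = np.asarray(recovered, float) - np.asarray(ground_truth, float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# a uniform error of 5 gray levels gives ~34.15 dB
print(round(psnr(np.full((4, 4), 120.0), np.full((4, 4), 125.0)), 2))  # 34.15
```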
IV-B. Influence of Inaccurate Illumination or Material Clustering
When the accurate illumination color is unavailable, one can use an illumination estimated by existing methods, such as [14] and [15]. To test the robustness to estimation deviations of the illumination color, we compare the performance of our highlight removal algorithm on the last two scenes in Fig. 5 with the ground truth illumination color and with that estimated by Tan et al. [14]. The ground truth illumination color in Fig. 5 is pre-calibrated to white (i.e., [r g b] = [0.577 0.577 0.577]), while the estimated illuminations for these two scenes are [r g b] = [0.600 0.588 0.542] and [r g b] = [0.633 0.575 0.518], respectively. The results are shown in Fig. 6. Comparing the results from the estimated (left column) and accurate illumination (right column), we can see that the visual performance does not drop much. Quantitatively, the PSNR of the recovered diffuse component of the 'Fruits' scene drops by 2.3 dB (from 40.4 dB to 38.1 dB), and that of the 'Masks' scene by 1.2 dB (from 34.2 dB to 33.0 dB). The scores lead to the same conclusion: the deviation of the illumination estimators does not noticeably affect our final highlight removal performance.
Another factor that might degrade the final performance is the accuracy of the material clustering. Because of noise, precise clustering cannot be guaranteed even with a theoretically strict reflection model and clustering criterion. It is worth noting that the proposed approach is clustering based but does not require accurate clustering, which is prone to noise. In practice, we tend to use a strict clustering criterion and over-segment the materials when noise exists. Even with a cluster number larger than the real number of material types, our approach still works well as long as there exist pure diffuse pixels in each cluster. For example, the scene in Fig. 7(a) includes 5 kinds of materials but is clustered into 8 groups (the blue magnets are over-segmented), as visualized in Fig. 7(b), where different clusters are labeled with different grey levels. From the separated diffuse and specular components displayed in Fig. 7(c) and Fig. 7(d), one can see that we can get a promising separation even with an incorrect clustering.
IV-C. Results on Non-Lambertian Natural Scenes
In this experiment, we run our algorithm on a variety of natural images affected by specularity. Fig. 8 displays our results in comparison with those of Akashi and Okatani [26]'s, Shen et al. [28]'s and Yang et al. [21]'s algorithms. All the images are captured by ourselves using a Nikon D7000 camera with a 50 mm f/1.8D lens. Demosaiced 16-bit raw data is used as input. The illumination colors are all normalized to white by channel-wise division.
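The channel-wise normalization mentioned above can be sketched as follows; the illumination value is an arbitrary example, and the mean-preserving rescaling is one common convention rather than the paper's exact formula.

```python
import numpy as np

def normalize_illumination(img, illum_rgb):
    """Divide each channel by the illumination color so the light becomes
    white; rescaling by the mean keeps the overall brightness comparable."""
    illum = np.asarray(illum_rgb, dtype=float)
    return img / (illum / illum.mean())[None, None, :]

# a gray scene lit by a warm light carries the illumination's color cast...
img = np.ones((2, 2, 3)) * np.array([0.6, 0.5, 0.4])
# ...which channel-wise division removes:
print(np.allclose(normalize_illumination(img, [0.6, 0.5, 0.4]), 0.5))  # True
```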
TABLE II: Running time comparison in seconds; each scene is tested at a lower and a higher resolution.

Scenes      Resolution (pixels)   Ours (Matlab)   Shen et al. [28] (C++)   Yang et al. [21] (C++)
Apples                                0.011              0.046                   0.097
                                      0.195              0.855                   1.722
Magnets                               0.022              0.023                   0.089
                                      0.119              0.199                   0.625
Butterfly                             0.020              0.026                   0.100
                                      0.110              0.189                   0.809
Peppers                               0.020              0.031                   0.157
                                      0.084              0.273                   0.883
Overall, the algorithm produces promising visual results, which validates its effectiveness on real non-Lambertian scenes. The differences from the STAR algorithms indicate our superiority in some challenging cases. For images with slight, small-scale specularity, such as the 'Apples' scene in the top row, all four methods give good separation results. In the 'Butterfly' scene, the chromaticity of the pink wing region (r = 0.7715, g = 0.3505, b = 0.5330) is highly similar to the normalized white illumination (r = g = b = 0.5774), and there is some color bias in [26]'s, [28]'s and [21]'s results. In contrast, we can still recover the diffuse component correctly. Besides, our algorithm exhibits superior performance in handling large-area highlights on glossy surfaces, such as the highlights on the 'Peppers' surface. Akashi and Okatani's approach [26] also performs well on this example. In contrast, Shen et al.'s [28] and Yang et al.'s [21] algorithms cannot handle this situation, because the assumption in [28] that the number of specular pixels is below a certain threshold is violated in this case, and the propagation strategy adopted in [21] only applies to small local specular regions. Strong specularity when the surface approaches a mirror is another challenging case for highlight removal, e.g., the colored 'Magnets' scene in the bottom row. In this example, the methods in [26], [28] and [21] are all incapable of removing the specular component cleanly. Instead, our algorithm gives a decent separation as long as the specular region is not caused by pure specular reflection. We can even discriminate the sub-regions with and without inter-reflection on the same magnet. Besides, we also test our approach on two outdoor scenes to demonstrate its wide applicability, as shown in Fig. 9.
IV-D. Running Time Analysis and Comparison
Our highlight removal algorithm does not involve complex calculations, and is thus highly efficient in terms of both storage and computation. We test the efficiency on an Intel Xeon 2.27 GHz CPU workstation with a 64-bit Windows 7 system. Roughly, processing a 500×600 pixel image generally takes 0.02 s, with slight variation due to the diverse number of materials in different images. The most time-consuming module of our approach is clustering. Although the adopted KMeans clustering slows down at extremely high resolution, we can easily handle such cases via a downsampling strategy. Specifically, we first downsample the original image to a lower-resolution version, which contains all the materials in the original one. Then we apply the proposed method to the low-resolution image to conduct material clustering, and recover the diffuse chromaticity of each cluster. Finally, each pixel in the high-resolution input image is assigned the cluster label and the corresponding diffuse chromaticity of the cluster with the highest correlation in the specular-free space.
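The label propagation step of this downsampling strategy can be sketched as below in a toy setup, where `centers` are the per-cluster specular-free directions assumed to have been found on the downsampled image (all helper names are ours).

```python
import numpy as np

def specular_free(sigma, Gamma):
    """Unit specular-free direction of each chromaticity (its component
    orthogonal to the illumination), which is unaffected by w_s."""
    ortho = sigma - (sigma @ Gamma)[:, None] * Gamma[None]
    return ortho / np.linalg.norm(ortho, axis=1, keepdims=True)

def assign_full_resolution(sigma_full, centers, Gamma):
    """Give every full-resolution pixel the label of the cluster whose
    specular-free direction correlates best with its own."""
    feats = specular_free(sigma_full, Gamma)
    return np.argmax(feats @ centers.T, axis=1)  # highest correlation wins

# toy full-resolution scene with two known materials
rng = np.random.default_rng(0)
Gamma = np.ones(3) / np.sqrt(3.0)
D = np.array([[0.9, 0.2, 0.1], [0.1, 0.8, 0.3]])
mats = rng.integers(0, 2, 1000)
I = D[mats] + rng.uniform(0.0, 1.0, 1000)[:, None] * Gamma[None]
sigma = I / np.linalg.norm(I, axis=1, keepdims=True)
# per-cluster directions, as if recovered from the downsampled image
centers = specular_free(D / np.linalg.norm(D, axis=1, keepdims=True), Gamma)
print((assign_full_resolution(sigma, centers, Gamma) == mats).all())  # True
```

Because the specular-free direction of a pixel does not depend on its specular strength, the assignment is exact regardless of how strong the highlight at that pixel is.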
In Table II, we compare our running time with the fast highlight removal approaches proposed by Shen et al. [28] and Yang et al. [21]. Since optimization-based methods are usually of much higher computational cost, we omit comparisons with them here. From the data we can see that, even implemented in Matlab, our algorithm is still faster than the other two methods implemented in C++. Benefiting from the downsampling strategy, our efficiency superiority is more prominent at higher resolutions.
V. Conclusions and Discussions
This paper proposes a new highlight removal approach by defining an ℓ2-normalized dichromatic model and deriving a strict formulation to separate the diffuse and specular components. Without mathematical approximations or strong assumptions on the scene, the approach can handle a large diversity of cases on which previous methods fail. Besides, the proposed approach involves few complex calculations and thus achieves fast processing.
Limitations. Although widely applicable, our approach may produce artifacts in some extreme cases. Firstly, the diffuse component of gray (including white) objects or pure specular regions may lose energy because the orthogonal coefficients q(x) are zero, as for the mirror reflection on the rubber balls and the gray-scale tiles of the color checker in Fig. 10(a). Such cases are beyond the scope of the dichromatic model, and addressing them needs user interaction or inpainting. Our method cannot distinguish pixels with the same hue but different saturations either. Secondly, we assume that for each kind of material there exist pure diffuse pixels. For a scene totally covered by highlights, such as the example in the first column of Fig. 10(b), the specularity can be largely reduced, but the recovery is still slightly affected by the illumination, as demonstrated by the comparison between the recovery in the middle column and the benchmark obtained with a polarizer in the right column. Highlight removal in such cases is intrinsically ill-posed for all methods based on the dichromatic model, and is beyond the scope of this paper.
Future extensions. We plan to extend the current approach to remove specularity in videos. Utilizing the temporal redundancy will further raise the performance and efficiency. For one thing, the sparsity of abrupt temporal changes will raise the robustness to noise; for another, the slight variation of material types between adjacent frames will accelerate processing, since we do not need to start from 1 cluster for every frame. By taking some priors into consideration, such as the sparsity of materials in the scene, separating the diffuse and specular components under unknown illumination is also possible. Besides, highlight removal under multiple illuminations with different colors is interesting and worth studying.
References
 [1] J. Lellmann, J. Balzer, A. Rieder, and J. Beyerer, “Shape from specular reflection and optical flow,” International Journal of Computer Vision, vol. 80, no. 2, pp. 226–241, 2008.
 [2] A. Artusi, F. Banterle, and D. Chetverikov., “A survey of specularity removal methods,” Computer Graphics Forum, vol. 30, no. 8, pp. 2208–2230, 2011.
 [3] L. Wolff, “Polarizationbased material classification from specular reflection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 11, pp. 1059–1071, 1990.
 [4] S. Nayar, X. Fang, and T. Boult, “Separation of reflection components using color and polarization,” International Journal of Computer Vision, vol. 21, no. 3, pp. 163–186, 1997.
 [5] D. Kim, S. Lin, K. Hong, and H. Shum, “Variational specular separation using color and polarization,” in Proceedings of the IAPR Workshop on Machine Vision Applications, 2002.
 [6] S. Umeyama and G. Godin, “Separation of diffuse and specular components of surface reflection by use of polarization and statistical analysis of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 639–647, 2004.
 [7] R. Feris, R. Raskar, K.H. Tan, and M. Turk, “Specular reflection reduction with multiflash imaging,” in Proceedings of the IEEE Brazilian Symposium on Computer Graphics and Image Processing, 2004.
 [8] Y. Sato and K. Ikeuchi, “Temporalcolor space analysis of reflection,” Journal of the Optical Society of America A, vol. 11, no. 11, pp. 2990–3002, 1994.
 [9] S. Lin and H. Shum, “Separation of diffuse and specular reflection in color images,” in Proceedings of International Conference on Computer Vision and Pattern Recognition, 2001.
 [10] S. Lin, Y. Li, S. Kang, X. Tong, and H. Shum, “Diffusespecular separation and depth recovery from image sequences,” in Proceedings of European Conference on Computer Vision, 2002.
 [11] T. Chen, M. Goesele, and H.-P. Seidel, “Mesostructure from specularity,” in Proceedings of International Conference on Computer Vision and Pattern Recognition, 2006.
 [12] Y. Weiss, “Deriving intrinsic images from image sequences,” in Proceedings of International Conference on Computer Vision, 2001.
 [13] Q. Yang, S. Wang, N. Ahuja, and R. Yang, “A uniform framework for estimating illumination chromaticity, correspondence, and specular reflection,” IEEE Transactions on Image Processing, vol. 20, no. 1, pp. 53–63, 2011.
 [14] R. Tan, K. Nishino, and K. Ikeuchi, “Illumination chromaticity estimation using inverse-intensity chromaticity space,” in Proceedings of International Conference on Computer Vision and Pattern Recognition, 2003.
 [15] M. S. Drew, H. R. V. Joze, and G. D. Finlayson, “The zeta-image, illuminant estimation, and specularity manipulation,” Computer Vision and Image Understanding, vol. 127, pp. 1–13, 2014.
 [16] L. Shi and B. Funt, “Dichromatic illumination estimation via Hough transforms in 3D,” in Proceedings of International Conference on Computer Graphics, Imaging and Visualization, 2008.
 [17] R. Bajcsy, S. Lee, and A. Leonardis, “Detection of diffuse and specular interface reflections and interreflections by color image segmentation,” International Journal of Computer Vision, vol. 17, no. 3, pp. 241–272, 1996.
 [18] G. Klinker, S. Shafer, and T. Kanade, “The measurement of highlights in color images,” International Journal of Computer Vision, vol. 2, no. 1, pp. 7–32, 1988.
 [19] P. Koirala, P. Pant, M. Hauta-Kasari, and J. Parkkinen, “Highlight detection and removal from spectral image,” Journal of the Optical Society of America A, vol. 28, no. 11, pp. 2284–2291, 2011.
 [20] R. Tan and K. Ikeuchi, “Separating reflection components of textured surfaces using a single image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 178–193, 2005.
 [21] Q. Yang, S. Wang, and N. Ahuja, “Real-time specular highlight removal using bilateral filtering,” in Proceedings of European Conference on Computer Vision, 2010.
 [22] S. Mallick, T. Zickler, P. Belhumeur, and D. Kriegman, “Specularity removal in images and videos: A PDE approach,” in Proceedings of European Conference on Computer Vision, 2006.
 [23] S. Mallick, T. Zickler, D. Kriegman, and P. Belhumeur, “Beyond Lambert: reconstructing specular surfaces using color,” in Proceedings of International Conference on Computer Vision and Pattern Recognition, 2005.
 [24] H. Kim, H. Jin, S. Hadap, and I. Kweon, “Specular reflection separation using dark channel prior,” in Proceedings of International Conference on Computer Vision and Pattern Recognition, 2013.
 [25] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” in Proceedings of International Conference on Computer Vision and Pattern Recognition, 2009.
 [26] Y. Akashi and T. Okatani, “Separation of reflection components by sparse non-negative matrix factorization,” in Proceedings of Asian Conference on Computer Vision, 2014.
 [27] R. Tan and K. Ikeuchi, “Reflection components decomposition of textured surfaces using linear basis functions,” in Proceedings of International Conference on Computer Vision, 2005.
 [28] H. Shen and Z. Zheng, “Real-time highlight removal using intensity ratio,” Applied Optics, vol. 52, no. 19, pp. 4483–4493, 2013.
 [29] G. D. Finlayson and M. S. Drew, “4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities,” in Proceedings of International Conference on Computer Vision, 2001.
 [30] S. A. Shafer, “Using color to separate reflection components,” Color Research & Application, vol. 10, no. 4, pp. 210–218, 1985.
 [31] C. Huynh and A. RoblesKelly, “A solution of the dichromatic model for multispectral photometric invariance,” International Journal of Computer Vision, vol. 90, no. 1, pp. 1–27, 2010.
 [32] C.I. Chang, “Orthogonal subspace projection (OSP) revisited: a comprehensive study and analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 502–518, 2005.