Realistic Hair Simulation Using Image Blending


Mohamed Attia1, Mohammed Hossny1, Saeid Nahavandi1, Anousha Yazdabadi2 and Hamed Asadi2
1Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University
2School of Medicine, Deakin University
Abstract

In this work, we propose a realistic hair simulator for dermoscopic images based on image blending. The simulator can be used for benchmarking and validating hair removal methods, and for data augmentation to improve computer-aided diagnostic tools. We adopted a popular image blending implementation to superimpose realistic hair masks onto lesion images. The method produces realistic hair according to a predefined hair mask; since hair is drawn onto hair-free areas, the generated images and masks can serve as ground truth for hair segmentation and removal methods. We achieved a realism score of 1.65, compared to 1.59 for the state-of-the-art hair simulator.

I Introduction

Automated hair removal is one of the most important applications of recent advances in artificial intelligence and deep learning [1], because hair artefacts affect automated skin cancer diagnosis tools such as lesion segmentation and classification into malignant and benign [2, 3]. Lesions are usually occluded by hair and other artefacts, as shown in Fig. 1. Most importantly, the wide adoption of deep learning in recent computer vision tasks in general, and in medical imaging in particular, has made artefact removal a necessary preprocessing step [4, 5, 6, 7]. Hair removal is thus a crucial part of automated analysis, improving classification accuracy and enhancing the ability of models to generalise [8]. Many digital hair removal methods have been proposed to remove superimposed hair in a non-invasive fashion. They are based on two main steps: hair segmentation and hair removal [9]. These methods can handle different hair structures. However, they have always been validated on relatively small datasets whose images share many properties in terms of size, colour and illumination. Consequently, these methods have limited capability to generalise to larger numbers of images, and their code is not publicly available for comparison.

Several studies tried to overcome these problems by simulating hair as black curved lines with random lengths and curvatures. However, the resultant hair was unrealistic, with solid black colours. To ensure a high degree of realism, simulated hair should have realistic curvature and colour. In this work, we use Poisson blending to blend real hair masks into lesion images and thereby simulate hair occlusion: pre-segmented hair masks are blended with hair-free images. This method achieves more realistic results with a high degree of variability. Image blending techniques aim at merging an image with a background so that source and destination are highly compatible. One of their main advantages is that the blended image depends on both the source and the destination; thus, for every pair of images, the hair shape and colour differ.

Quantitative analysis of the proposed hair simulator is one of the most challenging parts. Most recent work on hair simulation relied on qualitative analysis without comparison to real images. The lack of quantitative image quality metrics limits parametric analysis and benchmarking of such methods. However, recent advances in blind image quality assessment facilitate the characterisation of natural images based on morphology and colour compatibility, and these methods have been validated against human observers.

The main contribution of this work is a novel methodology for realistic hair simulation using image blending, intended as a tool for validating hair segmentation and inpainting methods. We also use a blind image quality tool for quantitative analysis of the proposed method. The rest of this paper is organised as follows. Section II describes the state-of-the-art methods documented in the literature. Section III elaborates the proposed methodology, followed by a discussion of the results of the conducted experiments in Section IV. In Section V, conclusions are drawn.

Fig. 1: Skin images are usually occluded by hair artefacts. These artefacts affect the accuracy of computer aided diagnostic tools.

II Related Work

To solve the hair occlusion problem, many methods have been introduced. As shown in Table I, they differ in their assumptions about hair morphology and in the hair detection method. Some methods rely on morphological operations for hair detection [10, 11], while others are based on matched filters [12, 13].

Koehoorn et al. proposed a comprehensive framework for segmenting hair of different morphologies and colours [9]. They segmented hair using multi-scale skeletonisation, which detects hair at different scales, including short, long, straight and curly hair. They compared their method against other hair detection methods on over 300 manually annotated images. Still, this remains a relatively small dataset, owing to the difficulty of manual annotation. Moreover, the comparison was not based on publicly available datasets, which limits the ability to reproduce or validate the results.

To partially address the limited-data problem, several hair synthesis methods have been proposed. These methods make assumptions about the external morphology of hair, chiefly that hair has a thin structure with varying width, and rely on simplified mathematical models to synthesise skin hair. As a result, the synthesised hair suffers from several artefacts.

Denton et al. proposed one of the earliest attempts to simulate hair on skin lesions [14]. She et al. used hair morphology to design a simulator that produces a single black hair, 100 pixels long and 3 pixels wide, with orientation limited to horizontal, vertical and 45° [15]. They later improved this simulator and incorporated it into a skin lesion simulator to facilitate the analysis of skin lesions, modelling hair as straight and curved black lines [16]. However, the results were based on fixed-colour hair simulation.

Mirzaalian et al. proposed a hair simulator based on randomly generated curved lines [17]. Medial curves are generated by a random curve synthesiser, then thickened by dilation with a disk structuring element of varying radius to simulate average human hair. The generated mask is coloured using a preset palette limited to yellow, brown, white, black and grey, and a Gaussian filter is applied at the edges to blend the hair with the background. However, the resulting hair colours and structures are not realistic, as shown in Fig. 2. A sketch of this class of simulator is given below.
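For concreteness, the following sketch illustrates this class of curve-based simulator. The curve generator, parameter values and colour palette are our own illustrative assumptions, not the exact implementation of [17]:

```python
import numpy as np
import cv2

def simulate_hair_curves(image, n_hairs=20, seed=None):
    """Illustrative curve-based simulator in the spirit of Mirzaalian et
    al. [17]; parameters and palette are assumptions, not their code."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    hair_layer = np.zeros_like(image)
    hair_mask = np.zeros((h, w), np.uint8)
    palette = [(20, 20, 20), (40, 60, 100), (190, 190, 190)]  # BGR: black, brown, grey
    for _ in range(n_hairs):
        # Random medial curve: a jittered polyline from a random start point.
        steps = np.cumsum(rng.integers(-40, 41, size=(6, 2)), axis=0)
        pts = np.clip(steps + rng.integers(0, [w, h]), 0, [w - 1, h - 1])
        curve = np.zeros((h, w), np.uint8)
        cv2.polylines(curve, [pts.astype(np.int32).reshape(-1, 1, 2)],
                      isClosed=False, color=255, thickness=1)
        # Thicken the curve with a disk structuring element of random radius.
        r = int(rng.integers(1, 4))
        disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
        curve = cv2.dilate(curve, disk)
        # Colour the hair from the preset palette.
        hair_layer[curve > 0] = palette[int(rng.integers(len(palette)))]
        hair_mask |= curve
    # Gaussian-blur the mask edges so the hair blends into the background.
    alpha = cv2.GaussianBlur(hair_mask, (5, 5), 0).astype(np.float32)[..., None] / 255.0
    return (alpha * hair_layer + (1.0 - alpha) * image).astype(np.uint8)
```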

To address this problem, we use pre-segmented hair masks and superimpose them on lesion images via blending. Image blending has been used for data augmentation to produce images with corresponding ground truth [18, 19]. Its main advantage is that it yields realistic results with the expected variation in morphology and appearance, along with the ground truth [20, 21, 22].

Many approaches have been proposed to assess the realism of composite and synthesised images using parametric metrics that characterise images by noise level, blur level and colour transformations. However, such methods are hard to generalise to images from different sources. Zhu et al. proposed a non-parametric methodology for assessing image realism [23]. They trained a deep convolutional model based on the VGG architecture as a generalisable, non-task-specific discriminative classifier of realistic versus fake images, using an end-to-end paradigm [24].

Method | Hair detection | Compared with | #images | Code
Dull razor [10] | Morphological closing | – | 5 | A
E-shaver [25] | Prewitt edge detector | Dull razor [10] | 50 | N/A
Schmid et al. [26] | Morphological closing | – | 200 | N/A
Zhou et al. [27] | Line detection, curve fitting | Schmid et al. [26] | 460 | N/A
Huang et al. [28] | Multi-scale matched filters | Dull razor [10] | 20 | N/A
Fiorese et al. [29] | Top-Hat operator | Dull razor [10] | 20 | N/A
Xie et al. [30] | Top-Hat operator | DullRazor [10] | 40 | N/A
Abbas et al. [13] | Derivatives of Gaussian | Dull razor [10], Xie et al. [31], Zhou et al. [27] | 100 | N/A
Maglogiannis et al. [32] | Bottom-Hat transform, Laplacian, Laplacian of Gaussian, Sobel | Dull razor [10] | 10 | N/A
Mirzaalian et al. [17] | Matched filters | – | 40 real, 94 synthetic | A
Du et al. [33] | Top-Hat transform, multi-scale curvilinear matched filtering | DullRazor [10], Xie et al. [34], Huang et al. [28], Fiorese et al. [29], Abbas et al. [13], Virtual Shaver [35] | 60 | N/A
Virtual shaver [35] | Multi-scale skeletonisation, morphological operators | Dull razor [10], Xie et al. [31], Huang et al. [28], Fiorese et al. [29], Abbas et al. [13] | over 300 | A
TABLE I: Summary of hair segmentation methods in terms of hair detection method, comparison to other implementations, number of test images and availability of the implementation. "A" stands for available code; "N/A" for code not available.
Fig. 2: Simulated hair using the method proposed by Mirzaalian et al. [17]. The simulated images suffer from unrealistic hair structure and colours incompatible with the skin lesion.

III Proposed Method

Image blending can be mathematically formulated as in Eqn. 1:

$$I_{out} = M \odot S + (1 - M) \odot T \qquad (1)$$

where $I_{out}$ is the output image, $S$ is the hair mask source, $T$ is the hair-free image and $M$ is the binary mask for the hair location. However, image blending faces two main challenges: harmonisation of the image colours and blending of the image borders without "colour bleeding". These challenges can be addressed by applying additional constraints on the gradients along the boundaries of the region to be replaced so that it matches the enclosing image [36, 37]. Thus, the image editing must be done along the boundaries using a guidance vector field, estimated for each colour channel separately [38].
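As a reference point, Eqn. 1 without any harmonisation amounts to a direct cut-and-paste. A minimal Python sketch (assuming 8-bit images of equal size) makes the seam problem concrete:

```python
import numpy as np

def composite(source, destination, mask):
    """Direct compositing per Eqn. 1: source pixels where the binary mask
    is set, destination pixels elsewhere. No harmonisation is performed,
    so visible seams appear along the mask boundary."""
    m = (mask > 0)[..., None].astype(source.dtype)  # broadcast mask over colour channels
    return m * source + (1 - m) * destination
```

The hard boundary left by this direct composite is exactly what the guidance-field formulation below removes.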

Pérez et al. proposed an image blending algorithm based on the Poisson equation. They solve a large, sparse linear system to find the optimal solution of the Poisson equation under a constraint on the guidance vector field, as shown in Eqn. 2 [38]:

$$\min_{f} \iint_{\Omega} \left\lvert \nabla f - \mathbf{v} \right\rvert^{2} \quad \text{s.t.} \quad f\big|_{\partial\Omega} = f^{*}\big|_{\partial\Omega} \qquad (2)$$

where $f$ is the output image, $\mathbf{v}$ is the guidance vector field, $\partial\Omega$ is the boundary of the masked area $\Omega$ to be blended, and $f^{*}$ is the destination image on that boundary. Despite the robustness of the mathematical solution of the Poisson equation, it works well only when the intensity changes smoothly across the boundary; otherwise, the blended image suffers from bleeding artefacts [36].
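For illustration, Poisson blending of a hair source into a hair-free lesion can be performed with OpenCV's seamlessClone; the file names below are placeholders, and source, destination and mask are assumed to share the same dimensions:

```python
import cv2

source = cv2.imread("hair_source.png")            # image containing hair
destination = cv2.imread("hair_free_lesion.png")  # hair-free lesion image
mask = cv2.imread("hair_mask.png", cv2.IMREAD_GRAYSCALE)

# Centre the clone on the mask's bounding box so the hair stays in place.
x, y, w, h = cv2.boundingRect(mask)
center = (x + w // 2, y + h // 2)

# Poisson blending as in Pérez et al. [38].
blended = cv2.seamlessClone(source, destination, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)
```

cv2.MIXED_CLONE, which keeps the stronger of the source and destination gradients at each pixel, is a common alternative when thin structures such as hair would otherwise bleed.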

Dizdaroğlu et al. proposed a method that uses colour information along with the gradients to reduce colour bleeding [39], as shown in Eqn. 3:

$$\min_{f} \iint_{\Omega} \left( \left\lvert \nabla f - \nabla g \right\rvert^{2} + \left\lvert f - g \right\rvert^{2} \right) \quad \text{s.t.} \quad f\big|_{\partial\Omega} = f^{*}\big|_{\partial\Omega} \qquad (3)$$

where $f$ is the output image, $g$ is the source image, $\partial\Omega$ is the boundary of the masked area $\Omega$ to be blended, and $f^{*}$ is the destination image on that boundary. Adding the source image's gradient as the guidance term in the minimisation reduces colour bleeding. However, the output then suffers from the source colours dominating the blended areas.

Afifi et al. proposed a modified Poisson blending that minimises the objective over not only the source but also the destination. They perform blending in two steps: first, the source and the target are blended according to the complement of the mask to produce an intermediate image; then, this intermediate image is used as the source for blending with the target according to the original mask.

In our proposed method, we adopt this modified version. It suits our application because the hair morphology changes according to both the source and the destination, which helps in data augmentation: the hair takes a different external appearance for each combination of source and destination images. A minimal sketch of the two-step procedure is shown below.
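The sketch uses OpenCV's seamlessClone as the underlying Poisson solver; this is our assumption for illustration, and Afifi et al.'s own implementation may differ:

```python
import cv2

def two_step_blend(source, target, mask):
    """Two-step blending: (1) blend source into target under the mask
    complement to get an intermediate image, then (2) blend the
    intermediate into the target under the original mask.
    All images are assumed to share the same size; in practice the mask
    complement may need slight erosion so its ROI fits inside the image."""
    inv_mask = cv2.bitwise_not(mask)
    # Step 1: blend source and target under the complement of the mask.
    x, y, w, h = cv2.boundingRect(inv_mask)
    intermediate = cv2.seamlessClone(source, target, inv_mask,
                                     (x + w // 2, y + h // 2), cv2.NORMAL_CLONE)
    # Step 2: blend the intermediate into the target under the original mask.
    x, y, w, h = cv2.boundingRect(mask)
    return cv2.seamlessClone(intermediate, target, mask,
                             (x + w // 2, y + h // 2), cv2.NORMAL_CLONE)
```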

IV Experimental Results

We prepared the hair masks using an ensemble of hair segmentation methods, namely those proposed by Koehoorn et al. [9] and Zhou et al. [40]. For validation and comparison we used skin images from the publicly available ISIC archive [41], which comprises 1279 images in total. We manually selected the hair-free images, obtaining 72 in total, and superimposed the segmented hair masks on them for simulation.
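The pairing loop can be sketched as follows, assuming a hypothetical directory layout and reusing the two_step_blend sketch from Section III; each simulated image is saved together with its mask, which serves as segmentation ground truth:

```python
import glob
import os
import cv2

# Illustrative layout (an assumption): hair images with pre-segmented
# masks, plus the manually selected hair-free images.
hair_images = sorted(glob.glob("hair/*.png"))
hair_masks = sorted(glob.glob("masks/*.png"))
hair_free = sorted(glob.glob("hair_free/*.png"))
os.makedirs("simulated", exist_ok=True)

for i, free_path in enumerate(hair_free):
    target = cv2.imread(free_path)
    for j, (src_path, mask_path) in enumerate(zip(hair_images, hair_masks)):
        source = cv2.imread(src_path)
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        # Resize each hair/mask pair to the target so coordinates align.
        size = (target.shape[1], target.shape[0])
        source = cv2.resize(source, size)
        mask = cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST)
        blended = two_step_blend(source, target, mask)  # sketch from Section III
        # Save the simulated image with its ground-truth hair mask.
        cv2.imwrite(os.path.join("simulated", f"{i:03d}_{j:03d}.png"), blended)
        cv2.imwrite(os.path.join("simulated", f"{i:03d}_{j:03d}_mask.png"), mask)
```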

For qualitative analysis, we superimposed random masks on hair-free images and compared the resulting images to study the effect of hair colour on different skin lesions. As shown in Fig. 3, the proposed method blends hair onto skin with a more realistic appearance than the method of Mirzaalian et al. [17]. The proposed method synthesised hair on different skin structures and also reproduced other image artefacts such as ruler markers, as shown in Fig. 3.

For quantitative analysis, we used realismCNN [23], a blind image quality metric that assesses the colour compatibility within an image to gauge the realism of the objects it contains. We achieved a realism score of 1.65 with realismCNN, against 1.59 for the method proposed in [17].

Fig. 3: Simulation results for the proposed method (columns: input image, hair mask, simulated image). Hair is synthesised according to the mask; the method also reproduces other artefacts such as ruler markers.

V Conclusion

In this work, we proposed a novel methodology for realistic hair simulation. We utilised image blending to obtain simulated hair along with the corresponding mask. The proposed method can serve as a realistic hair simulator for data augmentation and for validating hair segmentation and inpainting methods. Using a state-of-the-art image blending technique to simulate hair on hair-free images, the method matches the hair colour to both source and destination; the output hair is therefore realistic and highly compatible with the destination image, and the colours of the simulated hair masks show a high degree of variability.

Acknowledgement

This research was fully supported by the Institute for Intelligent Systems Research and Innovation (IISRI) at Deakin University.

References

  • [1] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, p. 115, 2017.
  • [2] M. Hassan, M. Hossny, S. Nahavandi, and A. Yazdabadi, “Skin lesion segmentation using gray level co-occurance matrix,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2016.
  • [3] M. Attia, M. Hossny, S. Nahavandi, and A. Yazdabadi, “Skin melanoma segmentation using recurrent and convolutional neural networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2015.
  • [4] A. Abobakr, M. Hossny, and S. Nahavandi, “Body joints regression using deep convolutional neural networks,” Systems, Man and Cybernetics (SMC), IEEE International Conference on, pp. 3281–3287, 2016.
  • [5] A. Abobakr, D. Nahavandi, J. Iskander, M. Hossny, S. Nahavandi, and M. Smets, “RGB-D human posture analysis for ergonomic studies using deep convolutional neural network,” pp. 2885–2890, 2017.
  • [6] ——, “A kinect-based workplace postural analysis system using deep residual networks,” pp. 1–6, 2017.
  • [7] K. Saleh, M. Hossny, and S. Nahavandi, “Kangaroo vehicle collision detection using deep semantic segmentation convolutional neural network,” IEEE Conference on Digital Image Computing: Techniques and Applications (DICTA), 2016.
  • [8] Y. Yuan, M. Chao, and Y.-C. Lo, “Automatic skin lesion segmentation using deep fully convolutional networks with jaccard distance,” IEEE transactions on medical imaging, vol. 36, no. 9, pp. 1876–1886, 2017.
  • [9] J. Koehoorn, A. Sobiecki, P. Rauber, A. Jalba, and A. Telea, “Efficient and effective automated digital hair removal from dermoscopy images,” Mathematical Morphology-Theory and Applications, vol. 1, no. 1.
  • [10] T. Lee, V. Ng, R. Gallagher, A. Coldman, and D. McLean, “Dullrazor®: A software approach to hair removal from images,” Computers in biology and medicine, vol. 27, no. 6, pp. 533–543, 1997.
  • [11] H. Zhou, L. Wei, D. Creighton, and S. Nahavandi, “Inpainting images with curvilinear structures propagation,” Mach. Vision Appl., vol. 25, no. 8, pp. 2003–2008, Nov. 2014. [Online]. Available: http://dx.doi.org/10.1007/s00138-014-0635-0
  • [12] Q. Abbas, I. Fondón, and M. Rashid, “Unsupervised skin lesions border detection via two-dimensional image analysis,” Computer methods and programs in biomedicine, vol. 104, no. 3, pp. e1–e15, 2011.
  • [13] Q. Abbas, M. E. Celebi, and I. F. García, “Hair removal methods: a comparative study for dermoscopy images,” Biomedical Signal Processing and Control, vol. 6, no. 4, pp. 395–404, 2011.
  • [14] Z. She, A. W. G. Duller, Y. Liu, and P. J. Fish, “Simulation and analysis of optical skin lesion images,” Skin Research and Technology, vol. 12, no. 2, pp. 133–144. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.0909-752X.2006.00140.x
  • [15] Z. She, P. Fish, and A. Duller, “Simulation of optical skin lesion images,” in Engineering in Medicine and Biology Society, 2001. Proceedings of the 23rd Annual International Conference of the IEEE, vol. 3.   IEEE, 2001, pp. 2766–2769.
  • [16] Z. She, A. Duller, Y. Liu, and P. Fish, “Simulation and analysis of optical skin lesion images,” Skin Research and Technology, vol. 12, no. 2, pp. 133–144, 2006.
  • [17] H. Mirzaalian, T. K. Lee, and G. Hamarneh, “Hair enhancement in dermoscopic images using dual-channel quaternion tubularness filters and mrf-based multilabel optimization,” IEEE Transactions on Image Processing, vol. 23, no. 12, pp. 5486–5496, 2014.
  • [18] G. Georgakis, A. Mousavian, A. C. Berg, and J. Kosecka, “Synthesizing training data for object detection in indoor scenes,” arXiv preprint arXiv:1702.07836, 2017.
  • [19] D. Park and D. Ramanan, “Articulated pose estimation with tiny synthetic videos,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 58–66.
  • [20] A. Khoreva, R. Benenson, E. Ilg, T. Brox, and B. Schiele, “Lucid data dreaming for object tracking,” arXiv preprint arXiv:1703.09554, 2017.
  • [21] S. Tang, M. Andriluka, A. Milan, K. Schindler, S. Roth, and B. Schiele, “Learning people detectors for tracking in crowded scenes,” in Computer Vision (ICCV), 2013 IEEE International Conference on.   IEEE, 2013, pp. 1049–1056.
  • [22] N. Duchateau, M. Sermesant, H. Delingette, and N. Ayache, “Model-based generation of large databases of cardiac images: synthesis of pathological cine mr sequences from real healthy cases,” IEEE transactions on medical imaging, vol. 37, no. 3, pp. 755–766, 2018.
  • [23] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros, “Learning a discriminative model for the perception of realism in composite images,” in Computer Vision (ICCV), 2015 IEEE International Conference on, 2015.
  • [24] X. Liu, B. V. Kumar, Y. Ge, C. Yang, J. You, and P. Jia, “Normalized face image generation with perceptron generative adversarial networks,” in Identity, Security, and Behavior Analysis (ISBA), 2018 IEEE 4th International Conference on.   IEEE, 2018, pp. 1–8.
  • [25] K. Kiani and A. R. Sharafat, “E-shaver: An improved dullrazor® for digitally removing dark and light-colored hairs in dermoscopic images,” Computers in biology and medicine, vol. 41, no. 3, pp. 139–145, 2011.
  • [26] P. Schmid-Saugeona, J. Guillodb, and J.-P. Thirana, “Towards a computer-aided diagnosis system for pigmented skin lesions,” Computerized Medical Imaging and Graphics, vol. 27, no. 1, pp. 65–78, 2003.
  • [27] H. Zhou, M. Chen, R. Gass, J. M. Rehg, L. Ferris, J. Ho, and L. Drogowski, “Feature-preserving artifact removal from dermoscopy images,” in Proc. SPIE, vol. 6914, 2008, pp. 69 141B–69 141B.
  • [28] A. Huang, S.-Y. Kwan, W.-Y. Chang, M.-Y. Liu, M.-H. Chi, and G.-S. Chen, “A robust hair segmentation and removal approach for clinical images of skin lesions,” in Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE.   IEEE, 2013, pp. 3315–3318.
  • [29] M. Fiorese, E. Peserico, and A. Silletti, “Virtualshave: automated hair removal from digital dermatoscopic images,” in Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE.   IEEE, 2011, pp. 5145–5148.
  • [30] F. Xie and A. C. Bovik, “Automatic segmentation of dermoscopy images using self-generating neural networks seeded by genetic algorithm,” Pattern Recognition, vol. 46, no. 3, pp. 1012–1019, 2013. [Online]. Available: http://dx.doi.org/10.1016/j.patcog.2012.08.012
  • [31] F. Xie, Y. Li, R. Meng, and Z. Jiang, “No-reference hair occlusion assessment for dermoscopy images based on distribution feature,” Computers in biology and medicine, vol. 59, pp. 106–115, 2015.
  • [32] I. Maglogiannis and K. Delibasis, “Hair removal on dermoscopy images,” in Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE.   IEEE, 2015, pp. 2960–2963.
  • [33] X. Du, I. Lee, and B. Anthony, “Hair segmentation using adaptive threshold from edge and branch length measures,” Computers in Biology and Medicine, 2017.
  • [34] F.-Y. Xie, S.-Y. Qin, Z.-G. Jiang, and R.-S. Meng, “Pde-based unsupervised repair of hair-occluded information in dermoscopy images of melanoma,” Computerized Medical Imaging and Graphics, vol. 33, no. 4, pp. 275–282, 2009.
  • [35] J. Koehoorn, A. C. Sobiecki, D. Boda, A. Diaconeasa, S. Doshi, S. Paisey, A. Jalba, and A. Telea, “Automated digital hair removal by threshold decomposition and morphological analysis,” in International Symposium on Mathematical Morphology and Its Applications to Signal and Image Processing.   Springer, 2015, pp. 15–26.
  • [36] Z. Zhu, J. Lu, M. Wang, S. Zhang, R. R. Martin, H. Liu, and S.-M. Hu, “A comparative study of algorithms for realtime panoramic video blending,” IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2952–2965, 2018.
  • [37] M. Wang, Z. Zhu, S. Zhang, R. Martin, and S.-M. Hu, “Avoiding bleeding in image blending,” in Image Processing (ICIP), 2017 IEEE International Conference on.   IEEE, 2017, pp. 2139–2143.
  • [38] P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Transactions on graphics (TOG), vol. 22, no. 3, pp. 313–318, 2003.
  • [39] B. Dizdaroğlu and C. İkibaş, “An improved method for color image editing,” EURASIP Journal on Advances in Signal Processing, vol. 2011, no. 1, p. 98, 2011.
  • [40] H. Zhou, X. Li, G. Schaefer, M. E. Celebi, and P. Miller, “Mean shift based gradient vector flow for image segmentation,” Computer Vision and Image Understanding, vol. 117, no. 9, pp. 1004–1016, 2013. [Online]. Available: http://dx.doi.org/10.1016/j.cviu.2012.11.015
  • [41] M. Berseth, “Isic 2017-skin lesion analysis towards melanoma detection,” arXiv preprint arXiv:1703.00523, 2017.