An Image Fusion Scheme for Single-Shot High Dynamic Range Imaging with Spatially Varying Exposures


Chihiro Go, Yuma Kinoshita, Sayaka Shiota, and Hitoshi Kiya (kiya@tmu.ac.jp)

The authors are with the Department of Computer Science, Tokyo Metropolitan University, Tokyo, 191-0065 Japan.

Summary: This paper proposes a novel multi-exposure image fusion (MEF) scheme for single-shot high dynamic range imaging with spatially varying exposures (SVE). Single-shot imaging with SVE enables us not only to produce images without color-saturated regions from a single-shot image but also to avoid ghost artifacts in the produced ones. However, the number of exposures is generally limited to two, and moreover, it is difficult to decide the optimum exposure values before photographing. In the proposed scheme, a scene segmentation method is applied to input multi-exposure images, and the luminance of the input images is then adjusted according to both the number of scenes and the relationship between exposure values and pixel values. The proposed method with the luminance adjustment allows us to mitigate these two issues. In this paper, we focus on dual-ISO imaging as one form of single-shot imaging. In an experiment, the proposed scheme is demonstrated to be effective for single-shot high dynamic range imaging with SVE, compared with conventional MEF schemes with exposure compensation.

Keywords: high dynamic range imaging, multi-exposure image fusion, exposure compensation, spatially varying exposures

1 Introduction

The low dynamic range (LDR) imaging sensors used in modern digital cameras cannot capture the wide dynamic range of a real scene. This limitation causes blocked-up shadows and blown-out highlights in images taken by digital cameras. In addition, those images often have low contrast. For this reason, a lot of high dynamic range (HDR) imaging techniques have been reported.

The most common approach for HDR imaging is multi-exposure image fusion (MEF). In this approach, images covering the HDR of real scenes are generated by fusing a set of differently exposed images, called multi-exposure images. To produce high-quality fused images, three or more multi-exposure images are generally utilized as inputs for MEF. However, MEF often causes ghost-like artifacts in fused images. This is because the movement of cameras and subjects makes it difficult to capture suitable multi-exposure images. Although various fusion methods robust against ghost-like artifacts have been proposed [1, 2, 3, 4], their performance is still limited.

For capturing images covering the HDR of a scene without ghost-like artifacts, camera devices having a wide-dynamic-range image sensor [5] or spatially varying exposures (SVE) have been studied. Although the former devices are very expensive and not yet widespread, the latter approach can be applied to commonly used digital cameras. In SVE-based imaging, an image is captured with a single shutter by varying exposures for each pixel on an imaging sensor, and multiple sub-images are obtained by separating the image for each exposure. Varying exposures is done by spatially changing shutter speeds or ISO speeds. Dual-ISO imaging is one of the SVE-based imaging methods, in which the ISO speed alternates every two lines in a single raw Bayer image [6, 7, 8]. In [9, 10, 11], the exposure time alternates row-wise in a single raw Bayer image, giving two exposure times. In the Quad Bayer pixel structure, integration can be divided into long-time integration and short-time integration for every two pixels in the Quad array [12]. However, in these methods, there is a trade-off between the number of exposures and the resolution of captured sub-images. Hence, the number of sub-images, which is generally two, is smaller than the number of images used in conventional MEF methods. For this reason, the scene information cannot be sufficiently expressed in resulting images.

Because of this situation, we propose a new image fusion method for single-shot imaging with SVE. The proposed method generates more multi-exposure images from the two images captured by SVE and fuses them into a single high-quality image. To generate those images, a new scene segmentation method is applied to the input multi-exposure images. After that, exposure compensation for the input images is automatically performed so that the generated multi-exposure images clearly show all regions in a scene [13, 14, 15, 16]. The generated multi-exposure images can be used with any MEF method.

We evaluate the effectiveness of the proposed method in terms of the quality of generated images through two simulations. In the simulations, the proposed method is compared with conventional MEF methods by using objective quality metrics: the tone-mapped image quality index (TMQI) [17], MEF structural similarity (MEF-SSIM) [18], statistical naturalness, and discrete entropy. Experimental results show that the proposed method can produce higher-quality images than conventional fusion methods for single-shot high dynamic range imaging with SVE.

2 Preparation

Here, we summarize dual-ISO imaging and a conventional fusion scheme for this imaging.

2.1 Dual-ISO imaging

Sony Corp. provides an imaging sensor product that can capture SVE images with the Quad Bayer array [12]. Canon Inc. also provides some cameras that can capture SVE images by changing the ISO speed of the image sensor line by line with firmware called Magic Lantern [6]. SVE images are generally expressed in a raw image format. Here, we focus on Canon's dual-ISO imaging.

A raw Bayer image sensed with a dual-ISO sensor is illustrated in Fig. 1, where the ISO speed alternates every two lines in the Bayer image [6]. By using the dual-ISO sensor, raw images with two exposures are produced. The Bayer image captured with two ISO speeds is fused as shown in Fig. 2. Each fusion step in Fig. 2 is briefly explained below.

Figure 1: Raw Bayer image sensed with dual-ISO sensor
Figure 2: Conventional method for dual-ISO imaging

A Separation and interpolation

A raw image with a size of $H \times W$ is first divided into two raw images with a size of $\frac{H}{2} \times W$, according to the difference in ISO speed (see Fig. 3).

(a) Separation
(b) Interpolation
Figure 3: Separation and interpolation

Next, interpolation processing is applied to each raw image to produce two raw images $x_1$ and $x_2$ with a size of $H \times W$.
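For reference, the following is a minimal Python sketch of these two steps, assuming the raw Bayer image is stored as a NumPy array whose ISO speed alternates every two rows; the simple row-repetition interpolation is an illustrative placeholder, not the interpolation filter used in [6].

    import numpy as np

    def separate_by_iso(raw):
        # Split a dual-ISO raw Bayer image (H x W) into two half-height
        # images according to the row-wise ISO pattern (two rows per ISO).
        rows = np.arange(raw.shape[0])
        first_iso = (rows // 2) % 2 == 0  # rows 0,1, 4,5, ... -> one ISO
        return raw[first_iso], raw[~first_iso]

    def interpolate_rows(half):
        # Upsample a half-height image back to full height by repeating
        # each two-row Bayer pair; this placeholder keeps the Bayer
        # pattern aligned but is far cruder than a real interpolator.
        h2, w = half.shape
        full = np.empty((2 * h2, w), dtype=half.dtype)
        full[0::4] = half[0::2]
        full[1::4] = half[1::2]
        full[2::4] = half[0::2]
        full[3::4] = half[1::2]
        return full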

B Demosaicing

To obtain two RGB images $I_1$ and $I_2$, an image demosaicing algorithm is applied to the two raw images $x_1$ and $x_2$.

C Image fusion

A fused image $I_F$ is produced as

I_F = \mathcal{F}(I_1, I_2),   (1)

where $\mathcal{F}$ indicates a fusion function that fuses two images into a single image.

In conventional single-shot imaging, the number of exposures is generally limited to two, and moreover, it is difficult to decide the optimum exposure values before photographing, as described above. We aim to resolve these two issues.

3 Proposed method

In order to resolve the two issues of conventional single-shot imaging, we propose a novel image fusion method for single-shot imaging. The outline of the proposed method is shown in Fig. 4, where the main contribution of this work is the scene segmentation-based exposure compensation. The exposure compensation consists of the following five steps (see Fig. 5).

Figure 4: Outline of proposed method
Figure 5: Details of scene segmentation-based exposure compensation

A Local contrast enhancement

Since the number of exposures is generally limited to two, $x_1$ and $x_2$ cannot always represent a scene clearly, unlike general multi-exposure images. A local contrast enhancement algorithm is used to enhance the detailed information in $x_1$ and $x_2$. In this paper, local contrast enhancement using the dodging-and-burning algorithm [19] is performed as

\hat{L}_i(p) = \frac{L_i(p)^2}{L_{a,i}(p)},   (2)

where $L_i(p)$ is the luminance value of $x_i$ at pixel $p$, and $L_{a,i}(p)$ is the local average of luminance around pixel $p$. Here, a bilateral filter is used to obtain $L_{a,i}(p)$, as in [19]. Figure 6(a) shows images without any local enhancement, and Fig. 6(b) shows images with the local enhancement.
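A minimal sketch of eq. (2) in Python is given below, assuming the luminance map is a single-channel float array in [0, 1]; the bilateral filter parameters are illustrative, not values from the paper.

    import cv2
    import numpy as np

    def enhance_local_contrast(lum, sigma_color=0.1, sigma_space=16):
        # Dodging-and-burning style enhancement (eq. (2)): divide the
        # squared luminance by its bilateral-filtered local average.
        lum32 = lum.astype(np.float32)
        local_avg = cv2.bilateralFilter(lum32, -1, sigma_color, sigma_space)
        return lum32 ** 2 / np.maximum(local_avg, 1e-6)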

(a) Images without local contrast enhancement
(b) Images with local contrast enhancement
(c) Scene segmentation result
Figure 6: Example of scene segmentation

The luminance at every pixel is required to calculate eq. (2), but a pixel of a raw image has only a red, green, or blue value. In this paper, in order to obtain luminance from $x_i$, $L_i(p)$ is calculated by using $x_i(p)$ and its eight surrounding pixels, as shown in Fig. 7.

(a) Red
(b) Green1
(c) Green2
(d) Blue
Figure 7: Block for luminance calculation

When $x_i(p)$ is a red value as in Fig. 7(a), the luminance is calculated as

L_i(p) = 0.299 R(p) + 0.587 G(p) + 0.114 B(p),   (3)

where

R(p) = x_i(p), \quad G(p) = \frac{1}{4} \sum_{q \in \mathcal{N}_{+}(p)} x_i(q), \quad B(p) = \frac{1}{4} \sum_{q \in \mathcal{N}_{\times}(p)} x_i(q),   (4)

and $\mathcal{N}_{+}(p)$ and $\mathcal{N}_{\times}(p)$ denote the four horizontally/vertically adjacent pixels and the four diagonally adjacent pixels of $p$, respectively. When $x_i(p)$ is a blue value as in Fig. 7(d), the luminance is calculated in the same manner, with

B(p) = x_i(p), \quad G(p) = \frac{1}{4} \sum_{q \in \mathcal{N}_{+}(p)} x_i(q), \quad R(p) = \frac{1}{4} \sum_{q \in \mathcal{N}_{\times}(p)} x_i(q).   (5)

Similarly, when $x_i(p)$ is a green value as in Fig. 7(b) or 7(c), the luminance is calculated in the same way for each case. Eqs. (4) and (5) are a simple demosaicing algorithm; other demosaicing algorithms can also be applied to $x_1$ and $x_2$.
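A minimal sketch of this per-pixel luminance calculation for a red Bayer location follows; the BT.601 luminance weights and the neighbor averaging mirror eqs. (3) and (4) as reconstructed above and are assumptions rather than the authors' exact choice.

    import numpy as np

    def luminance_at_red(x, i, j):
        # x: raw Bayer image (float array); (i, j): location of a red pixel.
        r = x[i, j]
        g = (x[i - 1, j] + x[i + 1, j] + x[i, j - 1] + x[i, j + 1]) / 4.0
        b = (x[i - 1, j - 1] + x[i - 1, j + 1] +
             x[i + 1, j - 1] + x[i + 1, j + 1]) / 4.0
        return 0.299 * r + 0.587 * g + 0.114 * b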

B Scene segmentation

The goal of the proposed segmentation is to separate the set of pixels $P$ into $M$ areas $P_1, P_2, \dots, P_M$, where each area has a specific brightness range of the image and the areas satisfy $P_1 \cup P_2 \cup \dots \cup P_M = P$. These results are used for exposure compensation.

The proposed segmentation method differs from typical segmentation ones in two ways.

  • It pays no attention to the structure of images, e.g., edges.

  • Each area is allowed to include spatially non-contiguous regions.

For the segmentation, a Gaussian mixture distribution is utilized to model the luminance distribution of the input images in this paper. After that, pixels are classified by using a clustering algorithm based on a Gaussian mixture model (GMM).

To obtain a model considering both luminance values $L_1(p)$ and $L_2(p)$, we regard the luminance values at a pixel $p$ as a 2-dimensional vector $\bm{v}(p) = (L_1(p), L_2(p))^\top$, where $\top$ denotes the transpose of a vector. By using a GMM, the distribution of $\bm{v}$ is given as

p(\bm{v}) = \sum_{k=1}^{K} \pi_k \mathcal{N}(\bm{v} \mid \bm{\mu}_k, \bm{\Sigma}_k),   (6)

where $K$ indicates the number of mixture components, $\pi_k$ is the $k$th mixing coefficient, and $\mathcal{N}(\bm{v} \mid \bm{\mu}_k, \bm{\Sigma}_k)$ is a 2-dimensional Gaussian distribution with mean $\bm{\mu}_k$ and covariance matrix $\bm{\Sigma}_k$.

To fit the GMM to the given data $\{\bm{v}(p)\}$, the variational Bayesian algorithm [20] is utilized. Compared with the traditional maximum likelihood approach, one of its advantages is that the variational Bayesian approach can avoid overfitting even when we choose a large $K$. For this reason, unnecessary mixture components are automatically removed by using this approach together with a large $K$, which serves as the maximum of the partition number $M$ in this paper.

Here, let $\bm{z} = (z_1, z_2, \dots, z_K)^\top$ be a $K$-dimensional binary random variable having a 1-of-$K$ representation, in which a particular element $z_k$ is equal to 1 and all other elements are equal to 0. The marginal distribution over $\bm{z}$ is specified in terms of the mixing coefficients $\pi_k$, such that

p(z_k = 1) = \pi_k.   (7)

For $\pi_k$ to be a valid probability, it must satisfy

0 \leq \pi_k \leq 1   (8)

together with

\sum_{k=1}^{K} \pi_k = 1.   (9)

A cluster for an observation $\bm{v}$ is determined by the responsibility $\gamma(z_k)$, which is given as a conditional probability:

\gamma(z_k) = p(z_k = 1 \mid \bm{v}) = \frac{\pi_k \mathcal{N}(\bm{v} \mid \bm{\mu}_k, \bm{\Sigma}_k)}{\sum_{j=1}^{K} \pi_j \mathcal{N}(\bm{v} \mid \bm{\mu}_j, \bm{\Sigma}_j)}.   (10)

When a pixel $p$ is given and $k$ satisfies

k = \arg\max_{k'} \gamma(z_{k'}),   (11)

the pixel is assigned to the subset $P_k$ of $P$. Figure 6 shows an example of scene segmentation. In this example, the image was segmented into $M$ parts.
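The clustering step can be sketched with scikit-learn's variational Bayesian GMM, which implements the algorithm of [20]; the upper bound k_max and the remaining hyperparameters are illustrative, not the paper's settings.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    def segment_by_luminance(lum1, lum2, k_max=10):
        # Cluster pixels by their 2-D luminance vectors (L1(p), L2(p))
        # with a variational Bayesian GMM; unnecessary components are
        # driven to near-zero weights automatically.
        features = np.stack([lum1.ravel(), lum2.ravel()], axis=1)
        gmm = BayesianGaussianMixture(n_components=k_max,
                                      covariance_type='full',
                                      random_state=0).fit(features)
        labels = gmm.predict(features)  # arg-max responsibility, eq. (11)
        return labels.reshape(lum1.shape)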

C Exposure compensation

$M$ multi-exposure images are created from the two images $x_1$ and $x_2$ by using the result of scene segmentation, $P_1, P_2, \dots, P_M$. The scaled luminance $L'_m$, which clearly represents an area $P_m$, is obtained by

L'_m(p) = \alpha_m L(p),   (12)

where $L$ denotes the luminance of the input image and the scale factor $\alpha_m$ indicates the degree of adjustment for the $m$th scaled luminance $L'_m$. In the following, how to determine the parameter $\alpha_m$ is discussed.

Given $P_m$ as a subset of $P$, the approximate brightness of an area is calculated as the geometric mean of the luminance values on $P_m$. We thus estimate an adjusted multi-exposure image so that the geometric mean of its luminance equals the middle gray of the displayed image, or 0.18 on a scale from zero to one, as in [21].

The geometric mean of luminance on a pixel set $P_m$ is calculated using

G(L \mid P_m) = \exp\left( \frac{1}{|P_m|} \sum_{p \in P_m} \log\bigl(\max(L(p), \epsilon)\bigr) \right),   (13)

where $\epsilon$ is set to a small value to avoid singularities at $L(p) = 0$. From eq. (13), the parameter $\alpha_m$ is calculated as

\alpha_m = \frac{0.18}{G(L \mid P_m)}.   (14)

By using this exposure compensation, exposure values are automatically adjusted even when the input exposure values do not provide appropriate brightness.
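Eqs. (13) and (14) amount to a log-average followed by a division; a minimal sketch, assuming lum is a luminance map and mask is the boolean map of $P_m$:

    import numpy as np

    def scale_factor(lum, mask, eps=1e-6, middle_gray=0.18):
        # Eq. (13): geometric mean of luminance over P_m, with eps
        # avoiding log(0); eq. (14): alpha_m = 0.18 / G(L | P_m).
        values = np.maximum(lum[mask], eps)
        geo_mean = np.exp(np.log(values).mean())
        return middle_gray / geo_mean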

D Combining adjusted luminance and input images

A set of luminance values adjusted by the scene segmentation-based exposure compensation is combined with an input image to obtain $M$ adjusted raw images $\hat{x}_1, \hat{x}_2, \dots, \hat{x}_M$. The adjusted pixel value is computed by

\hat{x}_m(p) = \frac{L'_m(p)}{L(p)} x(p) = \alpha_m x(p),   (15)

where $x$ denotes the input raw image corresponding to $L$. As a result, $M$ raw images are prepared according to eq. (15).
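Since $L'_m(p)/L(p) = \alpha_m$, eq. (15) reduces to one multiplication per output image. A minimal sketch continuing the previous snippet, where alphas would be computed as, e.g., [scale_factor(lum, labels == m) for m in range(M)]:

    def adjusted_raw_images(raw, alphas):
        # Eq. (15): the m-th adjusted raw image scales every pixel of
        # the input raw image by alpha_m, so that area P_m is rendered
        # around middle gray.
        return [a * raw for a in alphas]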

E Demosaicing

Since $\hat{x}_1, \hat{x}_2, \dots, \hat{x}_M$ are raw images, a demosaicing algorithm is carried out to obtain RGB images $\hat{I}_1, \hat{I}_2, \dots, \hat{I}_M$. In this paper, we apply the image demosaicing algorithm of [22] to $\hat{x}_m$. In the proposed method, other demosaicing algorithms can be applied not only to $x_1$ and $x_2$ but also to $\hat{x}_m$. Demosaicing algorithms have some influence on the quality of generated images, such as the presence of artifacts.

F Image fusion

A final image $I_F$ is produced by using the $M$ RGB images, as

I_F = \mathcal{F}(\hat{I}_1, \hat{I}_2, \dots, \hat{I}_M),   (16)

where $\mathcal{F}$ indicates a function that fuses $M$ multi-exposure images into a single image. Any existing MEF method is applicable to the proposed scheme. The fusion method proposed by Mertens et al. [23] is used as $\mathcal{F}$ in this paper.
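OpenCV provides an implementation of the exposure fusion of Mertens et al. [23]; a minimal sketch of eq. (16), assuming float RGB inputs in [0, 1]:

    import cv2
    import numpy as np

    def fuse_mertens(rgb_images):
        # Fuse the demosaiced multi-exposure RGB images with exposure
        # fusion [23] via OpenCV's MergeMertens (eq. (16)).
        merger = cv2.createMergeMertens()
        fused = merger.process([im.astype(np.float32) for im in rgb_images])
        return np.clip(fused, 0.0, 1.0)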

4 Simulation

In the experiments, the proposed scheme is demonstrated to be effective for single-shot HDR imaging with SVE. The performance of the proposed scheme was compared with that of conventional MEF methods [6, 24].

4.1 Simulation with HDR images

A Dataset

In this experiment, input SVE images were generated by using HDR images, according to [21]. The procedure for generating them consists of three steps. First, an LDR image $v$ with 0 EV was generated from an HDR image so that the geometric mean of its luminance equals 0.18, as in [21]. Next, according to the following equation, the exposure values of every two lines in $v$ were changed into $+X$ EV and $-X$ EV, respectively:

v'(i, j) = \begin{cases} 2^{X} v(i, j), & \lfloor i/2 \rfloor \bmod 2 = 0 \\ 2^{-X} v(i, j), & \text{otherwise}, \end{cases}   (17)

where $i$ and $j$ denote the row and column indices. Finally, a raw Bayer image was obtained by removing two of the RGB components at each pixel in $v'$. To generate the SVE images, we used 28 HDR images selected from a database [25].

From each HDR image, four SVE image sets with four different values of $X$ were generated. In MEF, it is not yet known how to decide optimal exposure values; this issue is usually overcome by capturing many differently exposed images. However, with SVE imaging, images with many exposure values cannot be captured. The proposed method aims to mitigate this issue under the limited number of exposure values.
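A minimal sketch of the SVE simulation of eq. (17), assuming ldr is the 0-EV image as a float array in [0, 1] and x_ev is the exposure offset $X$; the subsequent Bayer sampling step is omitted.

    import numpy as np

    def simulate_sve(ldr, x_ev):
        # Scale alternating two-row bands by +X EV and -X EV (eq. (17)).
        out = ldr.astype(np.float32).copy()
        rows = np.arange(ldr.shape[0])
        band = (rows // 2) % 2 == 0
        out[band] *= 2.0 ** x_ev
        out[~band] *= 2.0 ** (-x_ev)
        return np.clip(out, 0.0, 1.0)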

B Objective metrics

The quality of images produced by each method was evaluated with two objective metrics: TMQI [17] and MEF-SSIM [18]. TMQI measures the quality of an image tone-mapped from an HDR image, and it consists of structural fidelity and statistical naturalness. To calculate structural fidelity, an HDR image is used as a reference; in contrast, statistical naturalness does not need any reference. MEF-SSIM is based on a multi-scale SSIM framework and a patch consistency measure. It keeps a good balance between local structure preservation and global luminance consistency. For each score, a larger value means higher quality.

C Result

Tables 1 and 2 show the average scores over the 28 images. From Table 1, it is confirmed that the proposed method had higher TMQI scores than the conventional methods. From Table 2, the proposed method had higher scores than the conventional methods under two of the four exposure conditions, although the method of Yang et al. had higher MEF-SSIM scores than the proposed method under the other two conditions. In addition, the difference between results with and without the local enhancement is shown in the tables; in particular, the image quality was improved by performing the local contrast enhancement in terms of MEF-SSIM. We also confirmed the influence of demosaicing algorithms. For "proposed (with Eqs. (4) and (5))" in the tables, Eqs. (4) and (5) were applied not only to $x_1$ and $x_2$ but also to $\hat{x}_m$. These results were lower than those of the other proposed variants using the demosaicing algorithm [22], although they were still higher than those of the conventional MEF methods.

Figure 8 shows examples of output images produced by each method. The proposed method could express both bright and dark areas for all exposure values. In contrast, the method of Yang et al. could not preserve the relative luminance, especially when the ratio of the exposure values was high. The proposed method could preserve the relative luminance even under such conditions.

Figures 9 and 10 show box-plots of the TMQI and MEF-SSIM scores for fused images. From the results, the proposed method is demonstrated not only to have high scores but also to have a narrow score range. This means that the proposed method almost always provided good results under various conditions.

No correction                          0.2061  0.2047  0.2012  0.1981
Alex [6]                               0.2051  0.2048  0.2033  0.2005
Yang et al. [24]                       0.2069  0.2034  0.1960  0.1882
Proposed (without local enhancement)   0.2072  0.2065  0.2054  0.2044
Proposed (with local enhancement)      0.2073  0.2066  0.2055  0.2045
Proposed (with Eqs. (4) and (5))       0.2073  0.2065  0.2054  0.2043
Table 1: Average scores of TMQI (columns correspond to the four exposure conditions)
No correction                          0.6346  0.6134  0.5698  0.5099
Alex [6]                               0.3251  0.3250  0.3250  0.3248
Yang et al. [24]                       0.6805  0.6772  0.6321  0.5848
Proposed (without local enhancement)   0.6602  0.6293  0.6041  0.5924
Proposed (with local enhancement)      0.6666  0.6633  0.6564  0.6658
Proposed (with Eqs. (4) and (5))       0.6652  0.6542  0.6477  0.6552
Table 2: Average scores of MEF-SSIM (columns correspond to the four exposure conditions)
(a)-(d) No exposure compensation, Alex [6], Yang et al. [24], and the proposed method under the first exposure condition; (e)-(h), (i)-(l), and (m)-(p) the same four methods under the other three exposure conditions.
Figure 8: Examples of fused images
Figure 9: Experimental results (TMQI). Boxes span from the first quartile $q_1$ to the third quartile $q_3$, and whiskers show the maximum and minimum values in the range of $[q_1 - 1.5(q_3 - q_1), q_3 + 1.5(q_3 - q_1)]$. The band inside each box indicates the median.
Figure 10: Experimental results (MEF-SSIM). Boxes and whiskers are drawn in the same manner as in Fig. 9.

4.2 Simulation with photographing

A Dataset

Photographs taken by a Canon EOS 5D Mark III camera were directly used as input images. We used Magic Lantern [26], which is firmware for performing dual-ISO sensing. The shutter speed and the aperture were set by the auto exposure of the camera at ISO 800; this condition means that the exposure value is 0 EV at ISO 800. For the dual-ISO imaging, ISO 200 and ISO 3200 correspond to -2 EV and +2 EV, respectively. We used nine dual-ISO sensing images.

B Objective metrics

In this experiment, there are no ideal images that can serve as references. Thus, we used discrete entropy and statistical naturalness as objective quality metrics, which do not require any reference images. Discrete entropy represents the amount of information in an image. For each score, a larger value means higher quality.

C Result

Tables 3 and 4 show the average scores over the nine images. From Table 3, the proposed method had high scores for all exposure values. From Table 4, the proposed method had higher scores than the conventional methods under two of the three exposure conditions, although the method of Yang et al. had a higher discrete entropy score in the remaining case. Figure 11 shows examples of output images produced by each method. Compared with the conventional methods, the proposed method successfully represents the information of dark areas for all exposure values. In contrast, the method of Yang et al. does not represent the information of dark areas.

Figures 12 and 13 show box-plots of statistical naturalness and discrete entropy for fused images. As in Figs. 9 and 10, the proposed method is demonstrated to offer high-quality images under various conditions.

No correction                          0.0229  0.0303  0.0385
Alex [6]                               0.0082  0.0078  0.0075
Yang et al. [24]                       0.0794  0.1336  0.1466
Proposed (without local enhancement)   0.1722  0.1469  0.1458
Proposed (with local enhancement)      0.1885  0.1579  0.1585
Table 3: Average scores of statistical naturalness (columns correspond to the three exposure conditions)
No correction                          5.0301  5.2525  5.5203
Alex [6]                               4.1619  4.1092  4.1349
Yang et al. [24]                       5.5740  6.1244  6.5076
Proposed (without local enhancement)   6.3579  6.1785  6.0886
Proposed (with local enhancement)      6.3610  6.1886  6.0997
Table 4: Average scores of discrete entropy (columns correspond to the three exposure conditions)
(a)-(d) No exposure compensation, Alex [6], Yang et al. [24], and the proposed method under the first exposure condition; (e)-(h) and (i)-(l) the same four methods under the other two exposure conditions.
Figure 11: Examples of fused images
Figure 12: Experimental results (statistical naturalness). Boxes and whiskers are drawn in the same manner as in Fig. 9.
Figure 13: Experimental results (discrete entropy). Boxes and whiskers are drawn in the same manner as in Fig. 9.

5 Conclusion

In this paper, we proposed a new image fusion scheme for single-shot imaging with SVE, in which the number of exposures is generally limited to two. To overcome this limitation, exposure compensation for the input images is automatically performed so that the generated multi-exposure images clearly show all regions in a scene. The multi-exposure images generated by the proposed method can be used with any MEF method. We evaluated the effectiveness of the proposed method in terms of four objective quality metrics: TMQI, MEF-SSIM, statistical naturalness, and discrete entropy. Experimental results showed that the proposed method can produce higher-quality images than conventional fusion methods.

References

  • [1] E.A. Khan, A.O. Akyuz, and E. Reinhard, “Ghost removal in high dynamic range images,” Image Processing, 2006 IEEE International Conference on, pp.2005–2008, IEEE, 2006.
  • [2] C. Lee and E.Y. Lam, “Computationally Efficient Truncated Nuclear Norm Minimization for High Dynamic Range Imaging,” IEEE Transactions on Image Processing, vol.25, no.9, pp.4145–4157, September 2016.
  • [3] Y. Kinoshita, T. Yoshida, S. Shiota, and H. Kiya, “Pseudo multi-exposure fusion using a single image,” Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp.263–269, IEEE, 2017.
  • [4] Y. Kinoshita, S. Shiota, and H. Kiya, “A pseudo multi-exposure fusion method using single image,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol.101, no.11, pp.1806–1814, 2018.
  • [5] M.D. Tocci, C. Kiser, N. Tocci, and P. Sen, “A versatile hdr video production system,” ACM Transactions on Graphics (TOG), vol.30, no.4, p.41, 2011.
  • [6] A1EX, “Dynamic range improvement for some canon dslrs by alternating iso during sensor readout.” http://acoutts.com/a1ex/dual_iso.pdf, 2013.
  • [7] S. Hajsharif, J. Kronander, and J. Unger, “HDR reconstruction for alternating gain (ISO) sensor readout,” April 2014.
  • [8] R. Gil Rodríguez and M. Bertalmío, “High quality video in high dynamic range scenes from interlaced dual-iso footage,” 2016.
  • [9] H. Cho, S.J. Kim, and S. Lee, “Single-shot high dynamic range imaging using coded electronic shutter,” Computer Graphics Forum, 2014.
  • [10] J. Gu, Y. Hitomi, T. Mitsunaga, and S. Nayar, “Coded rolling shutter photography: Flexible space-time sampling,” Computational Photography (ICCP), 2010 IEEE International Conference on, pp.1–8, 2010.
  • [11] V.G. An and C. Lee, “Single-shot high dynamic range imaging via deep convolutional neural network,” 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2017, Kuala Lumpur, Malaysia, December 12-15, 2017, pp.1768–1772, 2017.
  • [12] SONY, “IMX294CJK — Sony Semiconductor Solutions.” https://www.sony-semicon.co.jp/products_en/new_pro/may_2017/imx294cjk_e.html.
  • [13] Y. Kinoshita, S. Shiota, and H. Kiya, “Automatic exposure compensation for multi-exposure image fusion,” IEEE International Conference on Image Processing (ICIP), pp.883–887, 2018.
  • [14] Y. Kinoshita, T. Yoshida, S. Shiota, and H. Kiya, “Multi-exposure image fusion based on exposure compensation,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.1388–1392, 2018.
  • [15] Y. Kinoshita and H. Kiya, “Automatic exposure compensation using an image segmentation method for single-image-based multi-exposure fusion,” APSIPA Transactions on Signal and Information Processing, vol.7, p.e22, 2018.
  • [16] Y. Kinoshita and H. Kiya, “Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion,” IEEE Trans. Image Processing., pp.1–1, 2019, doi: 10.1109/TIP.2019.2906501.
  • [17] H. Yeganeh and Z. Wang, “Objective quality assessment of tone-mapped images,” IEEE Transactions on Image Processing, vol.22, no.2, pp.657–667, 2013.
  • [18] K. Ma, K. Zeng, and Z. Wang, “Perceptual quality assessment for multi-exposure image fusion,” IEEE Transactions on Image Processing, vol.24, no.11, pp.3345–3356, 2015.
  • [19] Y. Huo, F. Yang, and V. Brost, “Dodging and burning inspired inverse tone mapping algorithm,” Journal of Computational Information Systems, vol.9, no.9, pp.3461–3468, 2013.
  • [20] C.M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer-Verlag, Berlin, Heidelberg, 2006.
  • [21] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital images,” ACM transactions on graphics (TOG), vol.21, no.3, pp.267–276, 2002.
  • [22] H.S. Malvar, L.-W. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp.485–488, May 2004.
  • [23] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: A simple and practical alternative to high dynamic range photography,” Computer Graphics Forum, vol.28, no.1, pp.161–171, 2009.
  • [24] Y. Yang, W. Cao, S. Wu, and Z. Li, “Multi-scale fusion of two large-exposure-ratio images,” IEEE Signal Processing Letters, vol.25, no.12, pp.1885–1889, 2018.
  • [25] hdrlabs, “sIBL Archive.” http://www.hdrlabs.com/sibl/archive.html.
  • [26] “Magic Lantern.” http://www.magiclantern.fm.
Chihiro Go received his B.Eng. degree from Tokyo University of Agriculture and Technology, Japan, in 2017. He is a Master's course student at Tokyo Metropolitan University, Japan. His research interests are in the area of image processing.

Yuma Kinoshita received his B.Eng. and M.Eng. degrees from Tokyo Metropolitan University, Japan, in 2016 and 2018, respectively. Since 2018, he has been a Ph.D. student at Tokyo Metropolitan University. He received the IEEE ISPACS Best Paper Award in 2016, the IEEE Signal Processing Society Japan Student Conference Paper Award in 2018, and the IEEE Signal Processing Society Tokyo Joint Chapter Student Award in 2018. His research interests are in the area of image processing. He is a student member of IEEE and IEICE.

Sayaka Shiota received her B.E., M.E., and Ph.D. degrees in intelligence and computer science, engineering, and engineering simulation from the Nagoya Institute of Technology, Nagoya, Japan, in 2007, 2009, and 2012, respectively. From February 2013 to March 2014, she worked as a Project Assistant Professor at the Institute of Statistical Mathematics. In April 2014, she joined Tokyo Metropolitan University as an Assistant Professor. Her research interests include statistical speech recognition and speaker verification. She is a member of the Acoustical Society of Japan (ASJ), the IEICE, the ISCA, APSIPA, and the IEEE.

Hitoshi Kiya received his B.E. and M.E. degrees from Nagaoka University of Technology in 1980 and 1982, respectively, and his Dr.Eng. degree from Tokyo Metropolitan University in 1987. In 1982, he joined Tokyo Metropolitan University, where he became a Full Professor in 2000. From 1995 to 1996, he attended the University of Sydney, Australia, as a Visiting Fellow. He is a Fellow of IEEE, IEICE, and ITE. He currently serves as President of APSIPA; he served as Inaugural Vice President (Technical Activities) of APSIPA in 2009-2013 and as Regional Director-at-Large for Region 10 of the IEEE Signal Processing Society in 2016-2017. He was also President of the IEICE Engineering Sciences Society in 2011-2012, where he served as Vice President and Editor-in-Chief for the IEICE Society Magazine and Society Publications. He was an Editorial Board Member of eight journals, including IEEE Transactions on Signal Processing, Image Processing, and Information Forensics and Security, Chair of two technical committees, and a Member of nine technical committees, including the APSIPA Image, Video, and Multimedia Technical Committee (TC) and the IEEE Information Forensics and Security TC. He has organized many international conferences, in such roles as TPC Chair of IEEE ICASSP 2012 and General Co-Chair of IEEE ISCAS 2019. Dr. Kiya is a recipient of numerous awards, including six best paper awards.
