Towards Learning Fine-Grained Disentangled Representations from Speech


Abstract

Learning disentangled representations of high-dimensional data is currently an active research area. However, compared to the field of computer vision, less work has been done for speech processing. In this paper, we provide a review of two representative efforts on this topic and propose the novel concept of fine-grained disentangled speech representation learning.


Yuan Gong, Christian Poellabauer

Department of Computer Science and Engineering

University of Notre Dame, IN 46556, USA

ygong1@nd.edu, cpoellab@nd.edu

1 Introduction

Representation learning is a fundamental challenge in machine learning and artificial intelligence. While there are multiple criteria for an ideal representation, disentangled representations (illustrated in Figure 1), which explicitly separate the underlying causal factors of the observed data, have been of particular interest, because they can be useful for a large variety of tasks and domains [1, 2, 3, 4, 5]. For example, in [5], the authors show that disentangling latent factors corresponding to pose and identity in photos of human faces improves the performance of both pose estimation and face verification. Learning disentangled representations from high-dimensional data is not a trivial task, and multiple techniques, such as β-VAE [1], InfoGAN [2], and DC-IGN [3], have been developed to address this problem. While disentangling natural image representations has been studied extensively, much less work has focused on natural speech, leaving a rather large void in the understanding of this problem. In this paper, we first present a short review and comparison of two representative efforts on this topic [6, 7]; both involve an auto-encoder and can be applied to the same task (i.e., voice conversion), but their key disentangling algorithms and underlying ideas are very different.

In [6], the authors propose an unsupervised factorized hierarchical variational autoencoder (FHVAE). The key idea is to assume that the speech data is generated from two separate latent variable sets z1 and z2, where z1 contains segment-level (short-term) variables and z2 contains sequence-level (long-term) variables that are further conditioned on an s-vector μ2. The model leverages the multi-scale nature of speech, i.e., different factors affect speech at different time scales (e.g., speaker identity affects the fundamental frequency and volume of the speech signal at the sequence level, while the phonetic content affects the speech signal at the segment level). By training the autoencoder in a sequence-to-sequence manner, z1 can be forced to encode segment-level information (e.g., speech content), while z2 and μ2 can be forced to encode sequence-level information (e.g., speaker identity). In the experiments, by keeping z1 fixed and changing z2, speech with the same content, but from different speakers, can be synthesized naturally, demonstrating the clean separation between content and speaker information. Further, the learned s-vector is shown to be a stronger feature than the conventional i-vector in the speaker verification task, demonstrating that it encodes speaker-level information well. In [6] and subsequent efforts [8, 9, 10], the authors further showed that the disentangled representation is also helpful for speech recognition. These efforts convey two primary insights: 1) by adding appropriate prior assumptions on the latent variables, speech content information and speaker-level information can be separated in an unsupervised manner; 2) the learned disentangled representations are useful for improving both speech synthesis and broader inference tasks.
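The multi-scale generative assumption above can be sketched as follows (a minimal numpy illustration, not the actual FHVAE model; the dimensions, noise scales, and function names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_utterance_latents(n_segments, d_seg=8, d_seq=8):
    """Sample latents for one utterance under an FHVAE-style assumption."""
    # one sequence-level "s-vector" mu2 per utterance; z2 is drawn near it,
    # so z2 is roughly constant within an utterance (e.g., speaker identity)
    mu2 = rng.normal(size=d_seq)
    z2 = mu2 + 0.1 * rng.normal(size=d_seq)
    # independent segment-level latents z1, one per short segment
    # (e.g., phonetic content, which changes quickly over time)
    z1 = rng.normal(size=(n_segments, d_seg))
    return z1, z2, mu2

z1_a, z2_a, _ = sample_utterance_latents(5)
z1_b, z2_b, _ = sample_utterance_latents(5)

# voice conversion in this scheme: keep the content latents z1_a and
# swap in the other utterance's speaker latent z2_b before decoding
# (the decoder itself is omitted here)
converted = (z1_a, z2_b)
```

The point of the sketch is only the factorization: z2 varies across utterances but not within one, while z1 varies segment by segment, which is what lets the two be swapped independently.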

Figure 1: Illustration of disentangled representations (using an auto-encoder as an example). We expect to learn a structured latent variable set z = {z1, z2, …, zn}, where each element corresponds to one independent underlying causal factor of the data.

Different from [6], in [7], the authors propose a supervised approach based on adversarial training [11, 12, 13, 14] (illustrated in Figure 2 (left)). In addition to a regular auto-encoder, the authors add a regularization term to its objective function to force the latent variables z (i.e., the encoding) to not contain speaker information. This is done by introducing an auxiliary speaker verification classifier C. C is trained to correctly identify the speaker from the latent variables (i.e., to minimize the misclassification loss L_cls), while the encoder is trained to maximize L_cls, i.e., to avoid encoding speaker information in z. Both z and the speaker label y are fed to the decoder for reconstruction, and the complete objective of the auto-encoder is hence to minimize L_rec − λL_cls (where L_rec is the point-wise L1-norm reconstruction loss and λ is a weighting parameter). By alternately training the auto-encoder and C, z is learned to be an encoding of the speech content information. Further, the residual information of the speech is reconstructed through another GAN and an auxiliary classifier. The experiments show that such a scheme can be successfully applied to a voice conversion task. The main insight of this work is how to use supervision to conduct representation disentanglement.
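The sign structure of this adversarial objective can be illustrated with a toy computation (a sketch with made-up numbers; the function names, the λ value, and the probability vector are our own assumptions, not taken from [7]):

```python
import numpy as np

def l1_reconstruction_loss(x, x_hat):
    # point-wise L1-norm reconstruction loss, averaged over elements
    return float(np.mean(np.abs(x - x_hat)))

def speaker_classification_loss(probs, speaker_label):
    # cross-entropy loss of the auxiliary speaker classifier C
    return float(-np.log(probs[speaker_label] + 1e-12))

x = np.array([0.2, -0.5, 1.0])      # toy input frame
x_hat = np.array([0.1, -0.4, 0.9])  # toy reconstruction
probs = np.array([0.7, 0.2, 0.1])   # C's speaker posterior computed from z
lam = 0.1

L_rec = l1_reconstruction_loss(x, x_hat)
L_cls = speaker_classification_loss(probs, speaker_label=0)

# the classifier C minimizes L_cls; the auto-encoder minimizes
# L_rec - lam * L_cls, i.e., it *maximizes* L_cls so as to strip
# speaker information from z while keeping the reconstruction faithful
autoencoder_objective = L_rec - lam * L_cls
```

The two minimizations run alternately on the same L_cls, which is what makes the training adversarial: the better C gets at identifying the speaker, the stronger the pressure on the encoder to remove that information.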

In summary, although the implementations are very different, both works add constraints to the latent variables in order to learn disentangled representations. Such a constraint can be a prior assumption (in the unsupervised case) or a regularization term (in the supervised case). While both efforts show good empirical performance on real tasks and lay the groundwork for future efforts, the learned disentangled representations are relatively coarse-grained. That is, in [6], z1 and z2 in fact correspond to general fast-changing and slow-changing information, i.e., z1 may contain other fast-changing information such as emotion, while z2 may contain slow-changing factors such as background and channel noise. In [7], the authors actually separate out speaker information and general non-speaker information, where the latter may contain a lot of detailed information. Coarse-grained disentangled representations are sufficient for some tasks, such as voice conversion, but might be limited for others. Next, we discuss the need for (and benefits of) fine-grained disentangled representations.

2 Fine-Grained Disentangled Speech Representation

Natural speech inherently contains both linguistic and paralinguistic information; common types of paralinguistic information include gender, age, health status, personality, friendliness, mood, and emotion (sorted from long-term to short-term) [15]. In the original representation of natural speech, these types of information are entangled together. In fact, however, many of them are essentially independent of, or only weakly correlated with, each other as well as with the linguistic content, raising the possibility of disentangling them in some latent space. Natural speech signals can thus be viewed as being produced by multiple fine-grained causal factors, and disentangling these factors leads to the following benefits:

Synthesis: Learning fine-grained disentangled representations enables more flexible speech synthesis. Assume that disentangled latent variables corresponding to age, personality, friendliness, emotion, and content have been learned; we may then be able to synthesize speech signals corresponding to arbitrary combinations of these factors, according to the requirements of the application scenario. Further, this may support novel AI applications, such as speech style transfer and predicting the future voice of a given subject (similar technology has been adopted in computer vision, e.g., image style transfer [16] and face aging [17]). In contrast, a coarse-grained disentangled representation [6, 7] may only support a simple voice conversion task.

Inference: Learning fine-grained disentangled representations can also enable more accurate inference and reasoning. When we attempt to predict one target variable, we usually want to eliminate the interference of other factors. For example, a speech recognition system is expected to be emotion-independent, while a speech emotion recognition system is expected to be text-independent. Historically, manually designed algorithms have been used to eliminate the effects of unrelated factors; e.g., speaker normalization [18, 19] and speaker adaptation [20, 21] are commonly used to eliminate the impact of speaker variability. However, it is difficult to manually design algorithms for all underlying factors. Previous work in representation learning has shown that by disentangling different (independent) factors, all corresponding inference tasks can gain performance improvements [5]. In addition, the learned disentangled and interpretable representation helps us understand the inner working mechanism of the machine learning model.

The question is: how can we learn fine-grained disentangled representations from speech? Following the discussion in the previous section, we can either add a prior assumption (unsupervised) or a regularization term (supervised) to the latent variables. However, for unsupervised approaches, when many factors must be disentangled, designing such prior assumptions is difficult and needs to be done very carefully. Hence, we first consider a fully supervised solution.

Assume that we want to disentangle n independent factors f1, f2, …, fn of natural speech. Further assume that we have a dataset with complete annotations {y1, y2, …, yn} corresponding to each factor for each sample. We can then extend the approach in [7] to learn disentangled latent variables {z1, z2, …, zn} corresponding to more than two factors. As illustrated in Figure 2 (right), we build n auto-encoders, each used to learn the latent variables corresponding to one factor. In order to guarantee disentanglement, for each auto-encoder we further need to build n−1 auxiliary predictors. Since auto-encoder i attempts to learn the latent variable zi, we train each predictor to correctly predict one of the other factors fj (j ≠ i) based on zi (i.e., to minimize the misprediction loss Lj), and train the auto-encoder to maximize the minimum of the losses of these predictors. The training is conducted alternately, and during training, the ground-truth annotations are fed to the decoder for successful reconstruction. Hence, the complete loss of auto-encoder i is L_rec − min_{j≠i} λj·Lj, where L_rec is the point-wise L1-norm reconstruction loss and λj is a parameter controlling the disentanglement degree of each factor. Note that predictors having the same target factor cannot be re-used across different auto-encoders, because they are based on different latent variables.
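The per-auto-encoder loss described above can be sketched as follows (a toy computation; the concrete loss values and λ weights are made-up numbers):

```python
def autoencoder_i_loss(l_rec, predictor_losses, lambdas):
    """Loss of auto-encoder i: L_rec - min_{j != i} lambda_j * L_j.

    predictor_losses holds the n-1 auxiliary predictors' losses L_j,
    each computed from the latent z_i; subtracting their weighted
    minimum means the auto-encoder maximizes that minimum, pushing
    z_i to be uninformative about *every* other factor, not just the
    one that is easiest to hide.
    """
    weighted = [lam * l for lam, l in zip(lambdas, predictor_losses)]
    return l_rec - min(weighted)

# toy values for auto-encoder i with n - 1 = 3 auxiliary predictors
l_rec = 0.25
predictor_losses = [1.2, 0.4, 2.0]  # L_j for the other factors f_j
lambdas = [0.1, 0.1, 0.1]           # disentanglement weights lambda_j

loss_i = autoencoder_i_loss(l_rec, predictor_losses, lambdas)
```

Note that the minimum is driven by the predictor that currently succeeds best at recovering its factor from z_i (here the second one, with weighted loss 0.04), so the encoder's gradient always targets its worst remaining leak.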

Figure 2: Illustration of supervised representation learning. Left: the approach in [7]. Right: the proposed approach for learning fine-grained disentangled representation.

However, in practice, no speech dataset exists with complete annotations for every factor; most speech datasets only have a limited number of annotations. Nevertheless, natural speech has a relatively fixed set of variation factors, which makes it possible to use multiple datasets to cooperatively learn disentangled representations. This is very different from natural images, which exhibit much larger variation. For example, handwritten digit images have the factors “number” and “writing style”, which are completely different from the factors of human face images; therefore, it is difficult to use a handwritten digit dataset and a human face dataset together to learn disentangled representations. In contrast, although most speech datasets are collected for different purposes and hence have different annotations, they share the same factors, such as content, emotion, age, and gender.

Assume that we have a set of datasets D1, D2, …, Dn, where each Di has annotations for one different factor. In this practical setting, two things become more complicated: 1) without the ground-truth label, the loss of a predictor cannot be calculated; 2) the encoder aims to remove the information unrelated to the desired factor, while the decoder does not have the ground-truth labels of the removed information with which to reconstruct the input. To solve these challenges, we only use the samples with the corresponding ground-truth labels to train the predictors, and use the certainty of the prediction as the regularizer for the auto-encoder (e.g., for discrete labels, the negative entropy of the predicted label distribution). In the same iteration, we also feed this prediction to the decoder. In this approach, we penalize high-certainty predictions rather than correct predictions, and hence the ground-truth label is not needed. We feed the prediction as the ground truth to the decoder, compensating for the information the encoder removes (according to this prediction) so that the input can be reconstructed.
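This certainty regularizer can be sketched as follows (a minimal numpy illustration; using negative entropy as the certainty measure is our reading of the scheme, and the probability vectors are made-up examples):

```python
import numpy as np

def certainty(probs):
    # negative entropy of the predicted label distribution: largest
    # (near 0) for a confident, peaked prediction and smallest for a
    # uniform one. Penalizing it pushes the encoder to remove the
    # factor's information without needing the ground-truth label.
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(probs * np.log(probs + 1e-12)))

confident = certainty([0.98, 0.01, 0.01])  # peaked: high certainty
uncertain = certainty([1/3, 1/3, 1/3])     # uniform: -log(3), no certainty
```

A confident predictor thus incurs a larger penalty on the encoder than an uncertain one, which is exactly the "penalize high-certainty rather than correct predictions" behavior described above.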

In this paper, we extend the idea of [7] toward fine-grained disentangled speech representation learning. The details of the training procedure, the convergence conditions, and the empirical performance need to be further explored and are left as future work.

References

  • [1] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, “beta-vae: Learning basic visual concepts with a constrained variational framework,” 2016.
  • [2] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “Infogan: Interpretable representation learning by information maximizing generative adversarial nets,” in Advances in Neural Information Processing Systems, 2016, pp. 2172–2180.
  • [3] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” in Advances in Neural Information Processing Systems, 2015, pp. 2539–2547.
  • [4] E. Banijamali, A.-H. Karimi, A. Wong, and A. Ghodsi, “Jade: Joint autoencoders for dis-entanglement,” arXiv preprint arXiv:1711.09163, 2017.
  • [5] S. Reed, K. Sohn, Y. Zhang, and H. Lee, “Learning to disentangle factors of variation with manifold interaction,” in International Conference on Machine Learning, 2014, pp. 1431–1439.
  • [6] W.-N. Hsu, Y. Zhang, and J. Glass, “Unsupervised learning of disentangled and interpretable representations from sequential data,” in Advances in Neural Information Processing Systems, 2017, pp. 1876–1887.
  • [7] J.-c. Chou, C.-c. Yeh, H.-y. Lee, and L.-s. Lee, “Multi-target voice conversion without parallel data by adversarially learning disentangled audio representations,” arXiv preprint arXiv:1804.02812, 2018.
  • [8] W.-N. Hsu, H. Tang, and J. Glass, “Unsupervised adaptation with interpretable disentangled representations for distant conversational speech recognition,” arXiv preprint arXiv:1806.04872, 2018.
  • [9] W.-N. Hsu and J. Glass, “Extracting domain invariant features by unsupervised learning for robust automatic speech recognition,” arXiv preprint arXiv:1803.02551, 2018.
  • [10] H. Tang, W.-N. Hsu, F. Grondin, and J. Glass, “A study of enhancement, augmentation, and autoencoder methods for domain adaptation in distant speech recognition,” arXiv preprint arXiv:1806.04841, 2018.
  • [11] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer et al., “Fader networks: Manipulating images by sliding attributes,” in Advances in Neural Information Processing Systems, 2017, pp. 5967–5976.
  • [12] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096–2030, 2016.
  • [13] G. Louppe, M. Kagan, and K. Cranmer, “Learning to pivot with adversarial networks,” in Advances in Neural Information Processing Systems, 2017, pp. 981–990.
  • [14] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “Stargan: Unified generative adversarial networks for multi-domain image-to-image translation,” arXiv preprint, vol. 1711, 2017.
  • [15] B. Schuller and A. Batliner, Computational paralinguistics: emotion, affect and personality in speech and language processing.    John Wiley & Sons, 2013.
  • [16] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414–2423.
  • [17] G. Antipov, M. Baccouche, and J.-L. Dugelay, “Face aging with conditional generative adversarial networks,” in Image Processing (ICIP), 2017 IEEE International Conference on.    IEEE, 2017, pp. 2089–2093.
  • [18] V. Sethu, E. Ambikairajah, and J. Epps, “Speaker normalisation for speech-based emotion detection,” in Digital Signal Processing, 2007 15th International Conference on.    IEEE, 2007, pp. 611–614.
  • [19] C. Busso, A. Metallinou, and S. S. Narayanan, “Iterative feature normalization for emotional speech detection,” in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on.    IEEE, 2011, pp. 5692–5695.
  • [20] N. Ding, V. Sethu, J. Epps, and E. Ambikairajah, “Speaker variability in emotion recognition-an adaptation based approach,” in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on.    IEEE, 2012, pp. 5101–5104.
  • [21] T. Rahman and C. Busso, “A personalized emotion recognition system using an unsupervised feature adaptation scheme,” in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on.    IEEE, 2012, pp. 5117–5120.