Deep Voice 2: Multi-Speaker Neural Text-to-Speech

Abstract

We introduce a technique for augmenting neural text-to-speech (TTS) with low-dimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-of-the-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline to Deep Voice 1 but constructed with higher-performance building blocks, and demonstrate a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high-quality audio synthesis and preserving the speaker identities almost perfectly.

1 Introduction

Artificial speech synthesis, commonly known as text-to-speech (TTS), has a variety of applications in technology interfaces, accessibility, media, and entertainment. Most TTS systems are built with a single speaker voice; multiple speaker voices are provided by maintaining distinct speech databases or model parameters for each speaker. As a result, developing a TTS system with support for multiple voices requires much more data and development effort than a system which only supports a single voice.

In this work, we demonstrate that we can build all-neural multi-speaker TTS systems which share the vast majority of parameters between different speakers. We show that not only can a single model generate speech from multiple different voices, but also that significantly less data is required per speaker than when training single-speaker systems.

Concretely, we make the following contributions:

  1. We present Deep Voice 2, an improved architecture based on Deep Voice 1 [2].

  2. We introduce a WaveNet-based [13] spectrogram-to-audio neural vocoder, and use it with Tacotron [19] as a replacement for Griffin-Lim audio generation.

  3. Using these two single-speaker models as a baseline, we demonstrate multi-speaker neural speech synthesis by introducing trainable speaker embeddings into Deep Voice 2 and Tacotron.

We organize the rest of this paper as follows. Section 2 discusses related work and what makes the contributions of this paper distinct from prior work. Section 3 presents Deep Voice 2 and highlights the differences from Deep Voice 1. Section 4 explains our speaker embedding technique for neural TTS models and shows multi-speaker variants of the Deep Voice 2 and Tacotron architectures. Section 5.1 quantifies the improvement for single speaker TTS through a mean opinion score (MOS) evaluation and Section 5.2 presents the synthesized audio quality of multi-speaker Deep Voice 2 and Tacotron via both MOS evaluation and a multi-speaker discriminator accuracy metric. Section 6 concludes with a discussion of the results and potential future work.

2 Related Work

We discuss the related work relevant to each of our claims in Section 1 in order, starting from single-speaker neural speech synthesis and moving on to multi-speaker speech synthesis and metrics for generative model quality.

With regards to single-speaker speech synthesis, deep learning has been used for a variety of subcomponents, including duration prediction [24], fundamental frequency prediction [16], acoustic modeling [23], and more recently autoregressive sample-by-sample audio waveform generation [13]. Our contributions build upon recent work in entirely neural TTS systems, including Deep Voice 1 [2], Tacotron [19], and Char2Wav [18]. While these works focus on building single-speaker TTS systems, our paper focuses on extending neural TTS systems to handle multiple speakers with less data per speaker.

Our work is not the first to attempt a multi-speaker TTS system. For instance, in traditional HMM-based TTS synthesis [21], an average voice model is trained using multiple speakers’ data, which is then adapted to different speakers. DNN-based systems [22] have also been used to build average voice models, with i-vectors representing speakers as additional inputs and separate output layers for each target speaker. Similarly, [5] uses a shared hidden representation among different speakers with speaker-dependent output layers predicting vocoder parameters (e.g., line spectral pairs, aperiodicity parameters etc.). For further context, [20] empirically studies DNN-based multi-speaker modeling. More recently, speaker adaptation has been tackled with generative adversarial networks (GANs) [7].

We instead use trainable speaker embeddings for multi-speaker TTS. The approach was investigated in speech recognition [1], but is a novel technique in speech synthesis. Unlike prior work which depends on fixed embeddings (e.g. i-vectors), the speaker embeddings used in this work are trained jointly with the rest of the model from scratch, and thus can directly learn the features relevant to the speech synthesis task. In addition, this work does not rely on per-speaker output layers or average voice modeling, which leads to higher-quality synthesized samples and lower data requirements (as there are fewer unique parameters per speaker to learn).

In order to evaluate the distinctiveness of the generated voices in an automated way, we propose using the classification accuracy of a speaker discriminator. Similar metrics such as an “Inception score” have been used for quantitative quality evaluations of GANs for image synthesis [17]. Speaker classification has been studied with both traditional GMM-based methods [14] and more recently with deep learning approaches [11].

3 Single-Speaker Deep Voice 2

In this section, we present Deep Voice 2, a neural TTS system based on Deep Voice 1 [2]. We keep the general structure of Deep Voice 1 [2], as depicted in Figure 1 (the corresponding training pipeline is depicted in Appendix Section 7). Our primary motivation for presenting an improved single-speaker model is to use it as the starting point for a high-quality multi-speaker model.

One major difference between Deep Voice 2 and Deep Voice 1 is the separation of the phoneme duration and frequency models. Deep Voice 1 has a single model to jointly predict phoneme duration and frequency profile (voicedness and time-dependent fundamental frequency, F_0). In Deep Voice 2, the phoneme durations are predicted first and then are used as inputs to the frequency model.

In the subsequent subsections, we present the models used in Deep Voice 2. All models are trained separately using the hyperparameters specified in Appendix Section 8. We will provide a quantitative comparison of Deep Voice 1 and Deep Voice 2 in Section 5.1.

Figure 1: Inference system diagram: text is first converted to phonemes with a pronunciation dictionary, then phoneme durations are predicted, the features are upsampled and F_0 is generated, and finally F_0 and the phonemes are fed to the vocal model.

3.1 Segmentation Model

Estimation of phoneme locations is treated as an unsupervised learning problem in Deep Voice 2, similar to Deep Voice 1. The segmentation model is a convolutional-recurrent architecture with a connectionist temporal classification (CTC) loss [6] applied to classify phoneme pairs, which are then used to extract the boundaries between them. The major architecture changes in Deep Voice 2 are the addition of batch normalization and residual connections in the convolutional layers. Specifically, Deep Voice 1's segmentation model computes the output of each layer as

$$h^{(l)} = \mathrm{relu}\big(W^{(l)} * h^{(l-1)} + b^{(l)}\big), \qquad (1)$$

where $h^{(l)}$ is the output of the $l$-th layer, $W^{(l)}$ is the convolution filterbank, $b^{(l)}$ is the bias vector, and $*$ is the convolution operator. In contrast, Deep Voice 2's segmentation model layers instead compute

$$h^{(l)} = \mathrm{relu}\big(h^{(l-1)} + \mathrm{BN}\big(W^{(l)} * h^{(l-1)}\big)\big), \qquad (2)$$

where BN denotes batch normalization [8]. In addition, we find that the segmentation model often makes mistakes for boundaries between silence phonemes and other phonemes, which can significantly reduce segmentation accuracy on some datasets. We introduce a small post-processing step to correct these mistakes: whenever the segmentation model decodes a silence boundary, we adjust the location of the boundary with a silence detection heuristic.1
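
Below is a minimal PyTorch sketch of one such Deep Voice 2 segmentation block (Eq. 2); the channel count and kernel width are illustrative placeholders, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBNConvBlock(nn.Module):
    """One segmentation-model convolution block with batch norm and a residual
    connection, as in Eq. (2). Sizes are illustrative, not the paper's values."""

    def __init__(self, channels=64, kernel_size=5):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, h):
        # h: (batch, channels, time). Deep Voice 1 computes relu(W * h + b);
        # Deep Voice 2 adds batch normalization and a residual connection.
        return F.relu(h + self.bn(self.conv(h)))

# Usage: y = ResidualBNConvBlock()(torch.randn(2, 64, 100))
```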

3.2 Duration Model

In Deep Voice 2, instead of predicting a continuous-valued duration, we formulate duration prediction as a sequence labeling problem. We discretize the phoneme duration into log-scaled buckets and assign each input phoneme to the bucket label corresponding to its duration. We model the sequence with a conditional random field (CRF) with pairwise potentials at the output layer [10]. During inference, we decode discretized durations from the CRF using the Viterbi forward-backward algorithm. We find that quantizing the duration prediction and introducing the pairwise dependence implied by the CRF improves synthesis quality.
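
As an illustration of the discretization step, the sketch below maps continuous durations to log-scaled bucket labels; the bucket count and duration range are assumptions for the example, not the paper's settings.

```python
import numpy as np

def log_bucketize(durations_sec, n_buckets=32, d_min=0.01, d_max=1.0):
    """Map continuous phoneme durations (seconds) to log-scaled bucket labels.

    The bucket count and duration range are illustrative assumptions.
    """
    edges = np.geomspace(d_min, d_max, n_buckets + 1)   # log-spaced boundaries
    d = np.clip(durations_sec, d_min, d_max)
    return np.clip(np.digitize(d, edges) - 1, 0, n_buckets - 1)

# Example: bucket labels for three phoneme durations
print(log_bucketize(np.array([0.02, 0.08, 0.30])))
```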

3.3 Frequency Model

After decoding from the duration model, the predicted phoneme durations are upsampled from per-phoneme input features to per-frame inputs for the frequency model.2 The Deep Voice 2 frequency model consists of multiple layers: first, bidirectional gated recurrent unit (GRU) layers [4] generate hidden states from the input features. From these hidden states, an affine projection followed by a sigmoid nonlinearity produces the probability that each frame is voiced. The hidden states are also used to make two separate normalized F_0 predictions. The first prediction, $f_{\mathrm{GRU}}$, is made with a single-layer bidirectional GRU followed by an affine projection. The second prediction, $f_{\mathrm{conv}}$, is made by adding up the contributions of multiple convolutions with varying convolution widths and a single output channel. Finally, the hidden state is used with an affine projection and a sigmoid nonlinearity to predict a mixture ratio $\omega$, which is used to weigh the two normalized frequency predictions and combine them into

$$\tilde{f} = \omega \cdot f_{\mathrm{GRU}} + (1 - \omega) \cdot f_{\mathrm{conv}}. \qquad (3)$$

The normalized prediction $\tilde{f}$ is then converted to the true frequency prediction via

$$F_0 = \mu_{F_0} + \sigma_{F_0} \cdot \tilde{f}, \qquad (4)$$

where $\mu_{F_0}$ and $\sigma_{F_0}$ are, respectively, the mean and standard deviation of F_0 for the speaker the model is trained on. We find that predicting F_0 with a mixture of convolutions and a recurrent layer performs better than predicting with either one individually. We attribute this to the hypothesis that including the wide convolutions reduces the burden on the recurrent layers to maintain state over a large number of input frames, while still processing the entire context information efficiently.
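
The sketch below shows how the two normalized predictions and the mixture ratio could be combined and then denormalized as in Eqs. (3) and (4); the tensor shapes and names are illustrative.

```python
import torch

def combine_f0(f_gru, f_conv, omega, mu_f0, sigma_f0):
    """Combine the two normalized F0 predictions and denormalize (Eqs. 3-4).

    f_gru, f_conv: per-frame normalized predictions from the GRU and
    convolution branches; omega: per-frame mixture ratio in (0, 1);
    mu_f0, sigma_f0: speaker-level F0 mean and standard deviation.
    All names are illustrative.
    """
    f_tilde = omega * f_gru + (1.0 - omega) * f_conv   # Eq. (3)
    return mu_f0 + sigma_f0 * f_tilde                  # Eq. (4)

# Toy example with random per-frame values
t = 8
f0 = combine_f0(torch.randn(t), torch.randn(t), torch.sigmoid(torch.randn(t)),
                mu_f0=180.0, sigma_f0=30.0)
```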

3.4 Vocal Model

The Deep Voice 2 vocal model is based on a WaveNet architecture [13] with a two-layer bidirectional QRNN [3] conditioning network, similar to Deep Voice 1. However, we remove the 1 × 1 convolution between the gated tanh nonlinearity and the residual connection. In addition, we use the same conditioner bias for every layer of the WaveNet, instead of generating a separate bias for every layer as was done in Deep Voice 1.3

4 Multi-Speaker Models with Trainable Speaker Embeddings

In order to synthesize speech from multiple speakers, we augment each of our models with a single low-dimensional speaker embedding vector per speaker. Unlike previous work, our approach does not rely on per-speaker weight matrices or layers. Speaker-dependent parameters are stored in a very low-dimensional vector, so there is near-complete weight sharing between speakers. We use speaker embeddings to produce recurrent neural network (RNN) initial states, nonlinearity biases, and multiplicative gating factors used throughout the networks. Speaker embeddings are initialized randomly from a uniform distribution and trained jointly via backpropagation; each model has its own set of speaker embeddings.

To encourage each speaker's unique voice signature to influence the model, we incorporate the speaker embeddings into multiple portions of the model. Empirically, we find that simply providing the speaker embeddings to the input layers does not work as well for any of the presented models besides the vocal model, possibly due to the high degree of residual connections present in the WaveNet and due to the difficulty of learning high-quality speaker embeddings. We observe that several patterns tend to yield high performance (a minimal sketch of these patterns follows the list):

  • Site-Specific Speaker Embeddings:

    For every use site in the model architecture, transform the shared speaker embedding to the appropriate dimension and form through an affine projection and a nonlinearity.

  • Recurrent Initialization:

    Initialize recurrent layer hidden states with site-specific speaker embeddings.

  • Input Augmentation:

    Concatenate a site-specific speaker embedding to the input at every timestep of a recurrent layer.

  • Feature Gating:

    Multiply layer activations elementwise with a site-specific speaker embedding to allow speaker-dependent information flow.4
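
Below is a minimal PyTorch sketch of the site-specific speaker embedding transform and the three usage patterns; the dimensions and the softsign nonlinearity are assumptions for illustration, not confirmed hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiteSpecificEmbedding(nn.Module):
    """Project the shared speaker embedding to one site-specific vector.

    Dimensions and the softsign nonlinearity are illustrative assumptions.
    """
    def __init__(self, speaker_dim=16, site_dim=128):
        super().__init__()
        self.proj = nn.Linear(speaker_dim, site_dim)

    def forward(self, speaker_emb):
        # Affine projection followed by a squashing nonlinearity.
        return F.softsign(self.proj(speaker_emb))

# Given a site-specific embedding g of shape (batch, site_dim) and a
# recurrent-layer input x of shape (batch, T, feature_dim):
# 1. Recurrent initialization: h0 = g.unsqueeze(0)   # initial GRU hidden state
# 2. Input augmentation:       x = torch.cat([x, g.unsqueeze(1).expand(-1, x.size(1), -1)], dim=-1)
# 3. Feature gating:           h = h * g             # elementwise gate on activations
```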

Next, we describe how speaker embeddings are used in each architecture.

4.1 Multi-Speaker Deep Voice 2

Each of the Deep Voice 2 models has its own set of speaker embeddings. These can nevertheless be viewed as chunks of one larger speaker embedding, trained independently.

Figure 2
Figure 3
Figure 4

Segmentation Model

In the multi-speaker segmentation model, we use feature gating in the residual connections of the convolutional layers. Instead of Eq. (2), we multiply the batch-normalized activations by a site-specific speaker embedding:

$$h^{(l)} = \mathrm{relu}\big(h^{(l-1)} + \mathrm{BN}\big(W^{(l)} * h^{(l-1)}\big) \cdot g_s\big), \qquad (5)$$

where $g_s$ is a site-specific speaker embedding. The same site-specific embedding is shared across all the convolutional layers. In addition, we initialize each of the recurrent layers with a second site-specific embedding. Similarly, each layer shares the same site-specific embedding, rather than having a separate embedding per layer.
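
Continuing the earlier segmentation block sketch, a hedged sketch of the gated variant in Eq. (5) is shown below; the conv and bn modules and the name g_s are placeholders.

```python
import torch
import torch.nn.functional as F

def gated_segmentation_layer(h, conv, bn, g_s):
    """Multi-speaker segmentation block with feature gating (Eq. 5).

    h: (batch, channels, time); conv, bn: the block's Conv1d and BatchNorm1d
    modules; g_s: (batch, channels) site-specific speaker embedding,
    broadcast over the time axis. Names are illustrative.
    """
    gated = bn(conv(h)) * g_s.unsqueeze(-1)   # feature gating
    return F.relu(h + gated)                  # residual connection
```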

Duration Model

The multi-speaker duration model uses speaker-dependent recurrent initialization and input augmentation. A site-specific embedding is used to initialize RNN hidden states, and another site-specific embedding is provided as input to the first RNN layer by concatenating it to the feature vectors.

Frequency Model

The multi-speaker frequency model uses recurrent initialization, which initializes the recurrent layers (except for the recurrent output layer) with a single site-specific speaker embedding. As described in Section 3.3, the recurrent and convolutional output layers in the single-speaker frequency model predict a normalized frequency, which is then converted into the true F_0 by a fixed linear transformation. The linear transformation depends on the mean and standard deviation of the observed F_0 for the speaker. These values vary greatly between speakers: male speakers, for instance, tend to have a much lower mean F_0. To better adapt to these variations, we make the mean and standard deviation trainable model parameters and multiply them by scaling terms which depend on the speaker embeddings. Specifically, instead of Eq. (4), we compute the F_0 prediction as

$$F_0 = \mu_{F_0} \cdot \big(1 + \mathrm{softsign}\big(V_\mu^\top g_f\big)\big) + \sigma_{F_0} \cdot \big(1 + \mathrm{softsign}\big(V_\sigma^\top g_f\big)\big) \cdot \tilde{f}, \qquad (6)$$

where $g_f$ is a site-specific speaker embedding, $\mu_{F_0}$ and $\sigma_{F_0}$ are trainable scalar parameters initialized to the F_0 mean and standard deviation on the dataset, and $V_\mu$ and $V_\sigma$ are trainable parameter vectors.
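
A sketch of the speaker-conditioned denormalization in Eq. (6) follows, assuming softsign scaling as described; the parameter names are illustrative.

```python
import torch
import torch.nn.functional as F

def speaker_f0(f_tilde, g_f, mu_f0, sigma_f0, v_mu, v_sigma):
    """Speaker-dependent F0 denormalization (Eq. 6). Names are illustrative.

    f_tilde: normalized per-frame prediction; g_f: site-specific speaker
    embedding (1-D tensor); mu_f0, sigma_f0: trainable scalars initialized to
    the dataset F0 mean/std; v_mu, v_sigma: trainable parameter vectors.
    """
    mu = mu_f0 * (1.0 + F.softsign(v_mu @ g_f))
    sigma = sigma_f0 * (1.0 + F.softsign(v_sigma @ g_f))
    return mu + sigma * f_tilde

# Toy example: 16-dimensional site-specific embedding, 8 frames
d, t = 16, 8
f0 = speaker_f0(torch.randn(t), torch.randn(d), 180.0, 30.0,
                torch.randn(d), torch.randn(d))
```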

Vocal Model

The multi-speaker vocal model uses only input augmentation, with the site-specific speaker embedding concatenated onto each input frame of the conditioner. This differs from the global conditioning suggested in [13] and allows the speaker embedding to influence the local conditioning network as well.

Without speaker embeddings, the vocal model is still able to generate somewhat distinct-sounding voices because of the distinctive features provided by the frequency and duration models. However, having speaker embeddings in the vocal model increases the audio quality. We indeed observe that the embeddings converge to a meaningful latent space.

4.2 Multi-Speaker Tacotron

In addition to extending Deep Voice 2 with speaker embeddings, we also extend Tacotron [19], a sequence-to-sequence character-to-waveform model. When training multi-speaker Tacotron variants, we find that model performance is highly dependent on model hyperparameters, and that some models often fail to learn attention mechanisms for a small subset of speakers. We also find that if the speech in each audio clip does not start at the same timestep, the models are much less likely to converge to a meaningful attention curve and recognizable speech; thus, we trim all initial and final silence in each audio clip. Due to the sensitivity of the model to hyperparameters and data preprocessing, we believe that additional tuning may be necessary to obtain maximal quality. Thus, our work focuses on demonstrating that Tacotron, like Deep Voice 2, is capable of handling multiple speakers through speaker embeddings, rather than comparing the quality of the two architectures.

Character-to-Spectrogram Model

The Tacotron character-to-spectrogram architecture consists of a convolution-bank-highway-GRU (CBHG) encoder, an attentional decoder, and a CBHG post-processing network. Due to the complexity of the architecture, we leave out a complete description and instead focus on our modifications.

We find that incorporating speaker embeddings into the CBHG post-processing network degrades output quality, whereas incorporating speaker embeddings into the character encoder is necessary. Without a speaker-dependent CBHG encoder, the model is incapable of learning its attention mechanism and cannot generate meaningful output (see Appendix Section 10.2 for speaker-dependent attention visualizations). In order to condition the encoder on the speaker, we use one site-specific embedding as an extra input to each highway layer at each timestep and initialize the CBHG RNN state with a second site-specific embedding.

We also find that augmenting the decoder with speaker embeddings is helpful. We use one site-specific embedding as an extra input to the decoder pre-net, one extra site-specific embedding as the initial attention context vector for the attentional RNN, one site-specific embedding as the initial decoder GRU hidden state, and one site-specific embedding as a bias to the tanh in the content-based attention mechanism.

Figure 5: Tacotron with speaker conditioning in the Encoder CBHG module and decoder, with two ways to convert spectrogram to audio: Griffin-Lim or our speaker-conditioned Vocal model.

Spectrogram-to-Waveform Model

The original Tacotron implementation in [19] uses the Griffin-Lim algorithm to convert spectrograms to time-domain audio waveforms by iteratively estimating the unknown phases.5 We observe that minor noise in the input spectrogram causes noticeable estimation errors in the Griffin-Lim algorithm and degrades the generated audio quality. To produce higher-quality audio with Tacotron, instead of using Griffin-Lim, we train a WaveNet-based neural vocoder to convert from linear spectrograms to audio waveforms. The model used is equivalent to the Deep Voice 2 vocal model, but takes linear-scaled log-magnitude spectrograms instead of phoneme identity and F_0 as input. The combined Tacotron-WaveNet model is shown in Figure 5. As we will show in Section 5.1, the WaveNet-based neural vocoder significantly improves single-speaker Tacotron as well.
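
For reference, the Griffin-Lim baseline that the neural vocoder replaces can be approximated with librosa as below; the iteration count and FFT parameters are placeholders, not the settings used by Tacotron.

```python
import numpy as np
import librosa

def griffin_lim_baseline(magnitude_spec, n_iter=60, hop_length=256, win_length=1024):
    """Invert a linear magnitude spectrogram to a waveform via Griffin-Lim.

    The iteration count and FFT/hop parameters are illustrative assumptions.
    """
    return librosa.griffinlim(magnitude_spec, n_iter=n_iter,
                              hop_length=hop_length, win_length=win_length)

# magnitude_spec: (1 + n_fft // 2, frames) array of linear magnitudes
wav = griffin_lim_baseline(np.abs(np.random.randn(513, 100)).astype(np.float32))
```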

5 Results

In this section, we will present the results on both single-speaker and multi-speaker speech synthesis using the described architectures. All model hyperparameters are presented in Appendix Section 8.

5.1 Single-Speaker Speech Synthesis

We train Deep Voice 1, Deep Voice 2, and Tacotron on an internal English speech database containing approximately 20 hours of single-speaker data. The intermediate evaluations of models in Deep Voice 1 and Deep Voice 2 can be found in Table 3 within Appendix Section 7. We run an MOS evaluation using the crowdMOS framework [15] to compare the quality of samples (Table 1). The results show conclusively that the architecture improvements in Deep Voice 2 yield significant gains in quality over Deep Voice 1. They also demonstrate that converting Tacotron-generated spectrograms to audio using WaveNet is preferable to using the iterative Griffin-Lim algorithm.

Table 1: Mean Opinion Score (MOS) evaluations with 95% confidence intervals of Deep Voice 1, Deep Voice 2, and Tacotron. Using the crowdMOS toolkit, batches of samples from these models were presented to raters on Mechanical Turk. Since batches contained samples from all models, the experiment naturally induces a comparison between the models.

Model  Samp. Freq.  MOS
Deep Voice 1  16 kHz
Deep Voice 2  16 kHz
Tacotron (Griffin-Lim)  24 kHz
Tacotron (WaveNet)  24 kHz

5.2 Multi-Speaker Speech Synthesis

We train all the aforementioned models on the VCTK dataset with 44 hours of speech, which contains 108 speakers with approximately 400 utterances each. We also train all models on an internal dataset of audiobooks, which contains 477 speakers with 30 minutes of audio each (for a total of 238 hours). The consistent sample quality observed from our models indicates that our architectures can easily learn hundreds of distinct voices with a variety of different accents and cadences. We also observe that the learned embeddings lie in a meaningful latent space (see Figures 6 and 7 and Appendix Section 10 for more details).

In order to evaluate the quality of the synthesized audio, we run MOS evaluations using the crowdMOS framework and present the results in Table 2. We purposefully include ground truth samples in the set being evaluated, because the accents in the datasets are likely to be unfamiliar to our North American crowdsourced raters and will thus be rated poorly due to the accent rather than the model quality. By including ground truth samples, we are able to compare the MOS of the models with the ground truth MOS and thus evaluate model quality rather than data quality; however, the resulting MOS may be lower, due to the implicit comparison with the ground truth samples. Overall, we observe that the Deep Voice 2 model can approach an MOS value that is close to the ground truth, when the low sampling rate and companding/expanding are taken into account.

A multi-speaker TTS system with high sample quality but indistinguishable voices would result in high MOS, but fail to meet the desired objective of reproducing the input voices accurately. To show that our models not only generate high-quality samples but also generate distinguishable voices, we also measure the classification accuracy of a speaker discriminative model on our generated samples. The speaker discriminator is a convolutional network trained to classify utterances by speaker, trained on the same dataset as the TTS systems themselves. If the voices were indistinguishable (or the audio quality were low), the classification accuracy would be much lower for synthesized samples than it is for the ground truth samples. As shown in Table 2, the classification accuracy demonstrates that samples generated from our models are as distinguishable as the ground truth samples (see Appendix Section 11 for more details). The classification accuracy is only significantly lower for Tacotron with WaveNet, and we suspect that generation errors in the spectrogram are exacerbated by the WaveNet, as it is trained with ground truth spectrograms.
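
As a sketch of how this distinguishability metric could be computed, assuming a trained classifier clf with a scikit-learn style predict() and per-utterance features for the synthesized samples (all names here are hypothetical):

```python
import numpy as np

def speaker_accuracy(clf, features, speaker_labels):
    """Fraction of synthesized utterances assigned to the correct speaker.

    clf is assumed to expose predict(); features are per-utterance inputs
    (e.g. MFCC statistics); speaker_labels are the intended speaker IDs.
    """
    predictions = clf.predict(features)
    return float(np.mean(predictions == np.asarray(speaker_labels)))
```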

Table 2: MOS and classification accuracy for all multi-speaker models. To obtain MOS, we use the crowdMOS toolkit as detailed in Table 1. We also present classification accuracies of the speaker discriminative models (see Appendix Section 11 for details) on the samples, showing that the synthesized voices are as distinguishable as ground truth audio.

Dataset  Multi-Speaker Model  Samp. Freq.  MOS  Acc.
VCTK  Deep Voice 2 (20-layer WaveNet)  16 kHz  2.87 ± 0.13  99.9%
VCTK  Deep Voice 2 (40-layer WaveNet)  16 kHz  3.21 ± 0.13  100%
VCTK  Deep Voice 2 (60-layer WaveNet)  16 kHz  3.42 ± 0.12  99.7%
VCTK  Deep Voice 2 (80-layer WaveNet)  16 kHz  3.53 ± 0.12  99.9%
VCTK  Tacotron (Griffin-Lim)  24 kHz  1.68 ± 0.12  99.4%
VCTK  Tacotron (20-layer WaveNet)  24 kHz  2.51 ± 0.13  60.9%
VCTK  Ground Truth Data  48 kHz  4.65 ± 0.06  99.7%
Audiobooks  Deep Voice 2 (80-layer WaveNet)  16 kHz  2.97 ± 0.17  97.4%
Audiobooks  Tacotron (Griffin-Lim)  24 kHz  1.73 ± 0.22  93.9%
Audiobooks  Tacotron (20-layer WaveNet)  24 kHz  2.11 ± 0.20  66.5%
Audiobooks  Ground Truth Data  44.1 kHz  4.63 ± 0.04  98.8%
Figure 6
Figure 7

6 Conclusion

In this work, we explore how entirely-neural speech synthesis pipelines may be extended to multi-speaker text-to-speech via low-dimensional trainable speaker embeddings. We start by presenting Deep Voice 2, an improved single-speaker model. Next, we demonstrate the applicability of our technique by training both multi-speaker Deep Voice 2 and multi-speaker Tacotron models, and evaluate their quality through MOS. Finally, we use our speaker embedding technique to create high-quality text-to-speech systems and conclusively show that neural speech synthesis models can learn effectively from small amounts of data spread among hundreds of different speakers.

The results presented in this work suggest many directions for future research. Future work may test the limits of this technique and explore how many speakers these models can generalize to, how little data is truly required per speaker for high quality synthesis, whether new speakers can be added to a system by fixing model parameters and solely training new speaker embeddings, and whether the speaker embeddings can be used as a meaningful vector space, as is possible with word embeddings.

7 Training Deep Voice 2

Figure 8: System diagram for the training procedure for Deep Voice 2.

For convenience, we depict the training procedure for Deep Voice 2 in Figure 8, similar to the corresponding figure in [2]. The pronunciation dictionary can be substituted by a grapheme-to-phoneme model, as in [2], to handle words that are missing from the dictionary.

For frequency extraction during training, similar to [2], we use the Praat software, which may be substituted by another fundamental frequency and voicedness estimation algorithm.

The intermediate evaluation metrics for Deep Voice 2 models are compared to their Deep Voice 1 counterparts in Table 3. We obtain significant improvements in these evaluation metrics for all the models, but we also note that the overall synthesis quality cannot be fully quantified by these metrics.

Table 3: Comparison of segmentation, duration, and frequency models for Deep Voice 1 and Deep Voice 2, both trained with the same single-speaker data and evaluated as described in [2].

Model  Evaluation metric  Deep Voice 1  Deep Voice 2
Segmentation  Phoneme pair error rate  7%  0.1%
Duration  Mean absolute error  38.0 ms  17.5 ms
Frequency  Mean absolute error  29.4 Hz  24.5 Hz

8 Model Hyperparameters

All hyperparameters of the models used in this paper are provided in Table ?.

To speed up the training of the character-to-spectrogram model in Tacotron in our experiments, we added a penalty term in the form of a CTC loss (obtained from the attention hidden states) to the overall loss function. We do not have clear, conclusive evidence that it improves the overall audio quality, but we observed faster convergence in some cases.

The learning rate is presented as a triple consisting of the initial learning rate, the decay factor, and the decay interval: the initial learning rate is decayed by the given factor after every given number of iterations. All models use the Adam optimization technique [9].

Convolutional layers are presented as a tuple specifying the number of convolutional layers, the number of filters (output channels) in each layer, and the filter size, where the filter height is in frequency bins and the filter width is in time frames.

Any unspecified details or hyperparameters for Deep Voice 2 are identical to those from the best models used in the original implementation of Deep Voice 1 [2]. Similarly, any unspecified details or hyperparameters for our Tacotron implementation are identical to those from the best models used in the original implementation of Tacotron [19].

9 Training Time for Multi-Speaker Models

We present the details of training time for multi-speaker models in Table ?. We use the same optimized WaveNet inference kernels described in Deep Voice 1. For detailed analysis and corresponding techniques to optimize the inference time, we refer the readers to [2].

10 Interpretation of Learned Embeddings

In this section, we explore the consequences of speaker-dependent models on intermediate model outputs, model activations, and the distributions of the learned embeddings.

10.1 Speaker-Dependent Fundamental Frequency Profiles

Figure 9: Time-dependent fundamental frequency profiles (collectively including the F_0 and voicedness) generated by the model when the inputs and embeddings correspond to Speaker S_{6} or S_{24}. Speaker S_{6} is a 23-year-old female with a Southern England accent and Speaker S_{24} is a 24-year-old male with an Indian accent. The pronounced sentence is "Six spoons of fresh snow peas five thick slabs of blue cheese and maybe a snack for her brother Bob". Spikes in the ground truth are caused by estimation errors in the Praat software, and we observe that our frequency model can compensate for them in some cases, having learned from other, correct ground truth samples.

To demonstrate the significance of the speaker embeddings, we run inference for the frequency model with the speaker embedding vector corresponding to the actual speaker and with that of a different speaker. As shown in Figure 9, while the input phoneme features are dominant in determining the overall shape of the fundamental frequency profiles, the actual values are highly speaker-dependent. For example, when the speaker embedding vector of a male is substituted with the speaker embedding vector of a female, the output frequency profile would cause generation of a female voice, even though all the other input features correspond to the original male speaker.

10.2 Speaker-Dependent Attention Plots

Figure 10: Attention plots for three speakers speaking at different speeds. Speaker S_{25} is a 22-year-old male with a Scottish accent, Speaker S_{4} is a 23-year-old female with a Southern England accent, and Speaker S_{16} is a 22-year-old male with a London accent. The pronounced sentence is "Six spoons of fresh snow peas five thick slabs of blue cheese and maybe a snack for her brother Bob".

Figure 10 shows the learned attention plots for three different speakers who talk at different speeds. It demonstrates that the modifications to the Tacotron encoder architecture are highly effective in making the attention model speaker-dependent, such that the attention focuses on different portions of the input text depending on the speech features of the speaker.

10.3 Principal Components of the Embeddings

Figure 11: Principal components of the learned embeddings of the 80-layer vocal model, shown with the genders and regions of the speakers.
Figure 12: Principal components of the learned embeddings of the character-to-spectrogram model, shown with the genders and regions of the speakers.

We explore the latent space of the learned speaker embeddings by visualizing them in a lower-dimensional space. Figure 11 and Figure 12 show the first two principal components of the learned embeddings of the vocal model and the character-to-spectrogram model, respectively. Although they are initialized randomly and trained entirely on a loss function related to generative quality, we can observe discriminative patterns in the learned embeddings. The gender of the speaker is the most apparent discriminative pattern: even a linear classifier fit on the shown two-dimensional space can classify gender with very high accuracy. We also observe apparent discriminative patterns for the region of the speaker.6 In the two-dimensional space, the Great Britain and North America regions in particular appear highly separable.
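
A minimal sketch of the projection used for such plots, assuming the learned embeddings have been extracted into a (num_speakers, embedding_dim) array:

```python
import numpy as np
from sklearn.decomposition import PCA

def embedding_pca(speaker_embeddings, n_components=2):
    """Project learned speaker embeddings onto their first principal components."""
    return PCA(n_components=n_components).fit_transform(speaker_embeddings)

# embeddings: (num_speakers, embedding_dim) array extracted from a trained model;
# the shapes here are illustrative (e.g. 108 VCTK speakers, 16-dimensional embeddings).
coords = embedding_pca(np.random.randn(108, 16))
```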

11 Speaker Discriminative Model

To compute multi-speaker classification accuracy, we use a speaker discriminative model trained on the ground truth data set of multiple speakers. Although using another discriminator model such as Deep Speaker [11] or other methods would also suffice, we choose to create our own deep learning based discriminative model. We note that our accuracy results on the test set are on par with the state-of-the-art speaker classification methods in the literature.

Figure 13: Architecture for the speaker discriminator.

Our architecture is depicted in Figure 13. We use mel-frequency cepstral coefficients (MFCCs) computed after resampling the input to a constant sampling frequency. We then employ two-dimensional convolutional layers convolving over both time and cepstral frequency bands, with a relu nonlinearity clipped to a maximum of six after each convolutional layer. The last convolutional layer is followed by a max-pooling layer. We then mean-pool over time for all utterance timesteps and apply a fully connected layer with a relu nonlinearity, followed by a fully connected output layer with a softmax nonlinearity and a cross-entropy classification loss. In order to avoid overfitting to the dataset, we apply dropout after every relu nonlinearity.
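
A hedged PyTorch sketch of a discriminator along these lines is shown below; the layer counts, channel sizes, MFCC dimension, and dropout rate are assumptions rather than the values used in our experiments.

```python
import torch
import torch.nn as nn

class SpeakerDiscriminator(nn.Module):
    """Convolutional speaker classifier over MFCC features.

    Layer counts, channel sizes, and dropout rate are illustrative assumptions.
    """
    def __init__(self, n_mfcc=20, n_speakers=108, channels=32, dropout=0.5):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU6(),                        # relu clipped at six
            nn.Dropout(dropout),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU6(),
            nn.Dropout(dropout),
            nn.MaxPool2d(kernel_size=(1, 2)),  # max-pool after the last conv layer
        )
        self.fc = nn.Sequential(
            nn.Linear(channels * n_mfcc, 128),
            nn.ReLU6(),
            nn.Dropout(dropout),
            nn.Linear(128, n_speakers),        # softmax applied inside the loss
        )

    def forward(self, mfcc):
        # mfcc: (batch, 1, n_mfcc, time)
        h = self.convs(mfcc)                   # (batch, channels, n_mfcc, time')
        h = h.mean(dim=-1)                     # mean-pool over time
        return self.fc(h.flatten(1))           # logits for cross-entropy loss

# Usage: logits = SpeakerDiscriminator()(torch.randn(4, 1, 20, 300))
```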

In order to demonstrate that the classification results are not sensitive to the choice of hyperparameters of the discriminative model, we report the classification accuracy for other choices in this section. Hyperparameters for all the discriminator models are available in Table ?. Only the results for the models that yielded the highest validation set accuracy are presented in Table 2.

Footnotes

  1. We compute the smoothed normalized audio power as $p[n] = \left(x[n]^2 / x_{\max}^2\right) * g[n]$, where $x[n]$ is the audio signal, $g[n]$ is the impulse response of a Gaussian filter, $x_{\max}$ is the maximum value of $x[n]$, and $*$ is one-dimensional convolution. We assign the silence phoneme boundaries when $p[n]$ exceeds a fixed threshold. The optimal parameter values for the Gaussian filter and the threshold depend on the dataset and audio sampling rate.
  2. Each frame is ensured to be 10 milliseconds. For example, if a phoneme lasts 20 milliseconds, the input features corresponding to that phoneme will be repeated over 2 frames. If it lasts less than 10 milliseconds, it is extended to a single frame.
  3. We find that these changes reduce model size by a factor of 7 and speed up inference by 25%, while yielding no perceptual change in quality. However, we do not focus on demonstrating these claims in this paper.
  4. We hypothesize that feature gating lets the model learn the union of all necessary features while allowing speaker embeddings to determine what features are used for each speaker and how much influence they will have on the activations.
  5. Estimation of the unknown phases is done by repeatedly converting between frequency- and time-domain representations of the signal using the short-time Fourier transform and its inverse, substituting the magnitude of each frequency component with the predicted magnitude at each step.
  6. The regions are determined according to https://en.wikipedia.org/wiki/Regional_accents_of_English

References

  1. Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code.
    O. Abdel-Hamid and H. Jiang. In ICASSP, 2013.
  2. Deep voice: Real-time neural text-to-speech.
    S. O. Arik, M. Chrzanowski, A. Coates, G. Diamos, A. Gibiansky, Y. Kang, X. Li, J. Miller, J. Raiman, S. Sengupta, and M. Shoeybi. In ICML, 2017.
  3. Quasi-recurrent neural networks.
    J. Bradbury, S. Merity, C. Xiong, and R. Socher. In ICLR, 2017.
  4. Learning phrase representations using rnn encoder-decoder for statistical machine translation.
    K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. arXiv:1406.1078
  5. Multi-speaker modeling and speaker adaptation for DNN-based TTS synthesis.
    Y. Fan, Y. Qian, F. K. Soong, and L. He. In IEEE ICASSP, 2015.
  6. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks.
    A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. In ICML, 2006.
  7. Voice conversion from unaligned corpora using variational autoencoding wasserstein generative adversarial networks.
    C.-C. Hsu, H.-T. Hwang, Y.-C. Wu, Y. Tsao, and H.-M. Wang. arXiv:1704.00849
  8. Batch normalization: Accelerating deep network training by reducing internal covariate shift.
    S. Ioffe and C. Szegedy. arXiv preprint arXiv:1502.03167
  9. Adam: A method for stochastic optimization.
    D. Kingma and J. Ba. arXiv:1412.6980
  10. Neural architectures for named entity recognition.
    G. Lample, M. Ballesteros, K. Kawakami, S. Subramanian, and C. Dyer. In Proc. NAACL-HLT, 2016.
  11. Deep speaker: an end-to-end neural speaker embedding system.
    C. Li, X. Ma, B. Jiang, X. Li, X. Zhang, X. Liu, Y. Cao, A. Kannan, and Z. Zhu. arXiv preprint arXiv:1705.02304
  12. SampleRNN: An unconditional end-to-end neural audio generation model.
    S. Mehri, K. Kumar, I. Gulrajani, R. Kumar, S. Jain, J. Sotelo, A. Courville, and Y. Bengio. arXiv:1612.07837
  13. Wavenet: A generative model for raw audio.
    A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. arXiv:1609.03499
  14. Speaker verification using adapted gaussian mixture models.
    D. A. Reynolds, T. F. Quatieri, and R. B. Dunn. Digital signal processing
  15. Crowdmos: An approach for crowdsourcing mean opinion score studies.
    F. Ribeiro, D. Florêncio, C. Zhang, and M. Seltzer. In IEEE ICASSP, 2011.
  16. Median-based generation of synthetic speech durations using a non-parametric approach.
    S. Ronanki, O. Watts, S. King, and G. E. Henter. arXiv:1608.06134
  17. Improved techniques for training gans.
    T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. In NIPS, 2016.
  18. Char2wav: End-to-end speech synthesis.
    J. Sotelo, S. Mehri, K. Kumar, J. F. Santos, K. Kastner, A. Courville, and Y. Bengio. In ICLR2017 workshop submission, 2017.
  19. Tacotron: Towards end-to-end speech synthesis.
    Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, et al. In Interspeech, 2017.
  20. A study of speaker adaptation for DNN-based speech synthesis.
    Z. Wu, P. Swietojanski, C. Veaux, S. Renals, and S. King. In Interspeech, 2015.
  21. Robust speaker-adaptive hmm-based text-to-speech synthesis.
    J. Yamagishi, T. Nose, H. Zen, Z.-H. Ling, T. Toda, K. Tokuda, S. King, and S. Renals. IEEE Transactions on Audio, Speech, and Language Processing
  22. On the training of DNN-based average voice model for speech synthesis.
    S. Yang, Z. Wu, and L. Xie. In Signal and Information Processing Association Annual Summit and Conference (APSIPA), Asia-Pacific, 2016.
  23. Unidirectional long short-term memory recurrent neural network with recurrent output layer for low-latency speech synthesis.
    H. Zen and H. Sak. In IEEE ICASSP, 2015.
  24. Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices.
    H. Zen, Y. Agiomyrgiannakis, N. Egberts, F. Henderson, and P. Szczepaniak. arXiv:1606.06061