High Fidelity Speech Synthesis
with Adversarial Networks
Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images. However, their application in the audio domain has received limited attention, and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech. To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech. Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyse the audio both in terms of general realism and in terms of how well the audio corresponds to the utterance that should be pronounced. To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS – Mean Opinion Score) and novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS. We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to that of state-of-the-art models, and that, unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator. Listen to GAN-TTS reading this abstract at https://storage.googleapis.com/deepmind-media/research/abstract.wav.
The Text-to-Speech (TTS) task is the conversion of text into speech audio. In recent years, the TTS field has seen remarkable progress, sparked by the development of neural autoregressive models for raw audio waveforms such as WaveNet (wavenet), SampleRNN (samplernn) and WaveRNN (wavernn). A notable limitation of these models is that they are difficult to parallelise over time: they predict each time step of an audio signal in sequence, which is computationally expensive and often impractical. A lot of recent research on neural models for TTS has focused on improving parallelism by predicting multiple time steps in parallel, e.g. using flow-based models (parallel-wavenet; clarinet; waveglow; flowavenet). Such highly parallelisable models are better suited to running efficiently on modern hardware.
An alternative approach for parallel waveform generation would be to use Generative Adversarial Networks (GANs, gans). GANs currently constitute one of the dominant paradigms for generative modelling of images, and they are able to produce high-fidelity samples that are almost indistinguishable from real data. However, their application to audio generation tasks has seen relatively limited success so far. In this paper, we explore raw waveform generation with GANs, and demonstrate that adversarially trained feed-forward generators are indeed able to synthesise high-fidelity speech audio. Our contributions are as follows:
We introduce GAN-TTS, a Generative Adversarial Network for text-conditional high-fidelity speech synthesis. Its feed-forward generator is a convolutional neural network, coupled with an ensemble of multiple discriminators which evaluate the generated (and real) audio based on multi-frequency random windows. Notably, some discriminators take the linguistic conditioning into account (so they can measure how well the generated audio corresponds to the input utterance), while others ignore the conditioning, and can only assess the general realism of the audio.
We propose a family of quantitative metrics for speech generation based on Fréchet Inception Distance (FID, fid) and Kernel Inception Distance (KID, kid), where we replace the Inception image recognition network with the DeepSpeech audio recognition network.
We present quantitative and subjective evaluation of GAN-TTS and its ablations, demonstrating the importance of our architectural choices. Our best-performing model achieves a MOS comparable to that of the state-of-the-art WaveNet, and establishes GANs as a viable option for efficient TTS.
2 Related Work
2.1 Audio generation
Most neural models for audio generation are likelihood-based: they represent an explicit probability distribution and the likelihood of the observed data is maximised under this distribution. Autoregressive models achieve this by factorising the joint distribution into a product of conditional distributions (wavenet; samplernn; wavernn; deepvoice). Another strategy is to use an invertible feed-forward neural network to model the joint density directly (waveglow; flowavenet). Alternatively, an invertible feed-forward model can be trained by distilling an autoregressive model using probability density distillation (parallel-wavenet; clarinet), which enables it to focus on particular modes. Such mode-seeking behaviour is often desirable in conditional generation settings: we want the generated speech signals to sound realistic and correspond to the given text, but we are not interested in modelling every possible variation that occurs in the data. This reduces model capacity requirements, because parts of the data distribution may be ignored. Note that adversarial models exhibit similar behaviour, but without the distillation and invertibility requirements.
Many audio generation models, including all of those discussed so far, operate in the waveform domain: they directly model the amplitude of the waveform as it evolves over time. This is in stark contrast to most audio models designed for discriminative tasks (e.g. audio classification): such models tend to operate on time-frequency representations of audio (spectrograms), which encode certain inductive biases with respect to the human perception of sound, and usually discard all phase information in the signal. While phase information is often inconsequential for discriminative tasks, generated audio signals must have a realistic phase component, because fidelity as judged by humans is severely affected otherwise. Because no special treatment for the phase component of the signal is required when generating directly in the waveform domain, this is usually more practical.
Tacotron (tacotron) and MelNet (melnet) constitute notable exceptions, and they use the Griffin-Lim algorithm (griffinlim) to reconstruct missing phase information, which the models themselves do not generate. Models like Deep Voice 2 & 3 (deepvoice2; deepvoice3) and Tacotron 2 (tacotron2) achieve a compromise by first generating a spectral representation, and then using a separate autoregressive model to turn it into a waveform and fill in any missing spectral information. Because the generated spectrograms are imperfect, the waveform model has the additional task of correcting any mistakes. Char2wav (char2wav) uses intermediate vocoder features in a similar fashion.
2.2 Generative adversarial networks
Generative Adversarial Networks (GANs, gans) form a subclass of implicit generative models that relies on adversarial training of two networks: the generator, which attempts to produce samples that mimic the reference distribution, and the discriminator, which tries to differentiate between real and generated samples and, in doing so, provides a useful gradient signal to the generator. Following rapid development, GANs have achieved state-of-the-art results in image (sagan; biggan; stylegan) and video generation (dvdgan), and have been successfully applied for unsupervised feature learning (bigan; ali; bigbigan), among many other applications.
Despite achieving impressive results in these domains, limited work has so far shown good performance of GANs in audio generation. Two notable exceptions include WaveGAN (wavegan) and GANSynth (gansynth), which both successfully applied GANs to simple datasets of audio data. The former is the most similar to this work in the sense that it uses GANs to generate raw audio; results were obtained for a dataset of spoken commands of digits from zero to nine. The latter provides state-of-the-art results for a dataset of single note recordings from various musical instruments (NSynth, nsynth) by training GANs to generate invertible spectrograms of the notes. adversarial_vocoding propose an adversarial vocoder model that is able to synthesise magnitude spectrograms from mel-spectrograms generated by Tacotron 2, and combine this with phase estimation using the Local Weighted Sums technique (lws).
To the best of our knowledge, GANs have not yet been applied at large scale to non-visual domains. Two seconds of audio at 24kHz (a commonly used sampling frequency for speech, because the absence of frequencies above 12kHz does not meaningfully affect fidelity) has a dimensionality of 48,000, which is comparable to that of 128×128 RGB images. Until recently, high-quality GAN-generated images at such or higher resolution were uncommon (sagan; stylegan), and it was not clear that training GANs at scale would lead to extensive improvements (biggan).
Multiple discriminators have been used in GANs for different purposes. For images, lapgan; stackgan; progressive-growing proposed to use separate discriminators for different resolutions. Similar approaches have also been used in image-to-image transfer (munit) and video synthesis (tganv2). dvdgan, on the other hand, combine a 3D-discriminator that scores the video at lower resolution and a 2D-frame discriminator which looks at individual frames. In adversarial feature learning, bigbigan combine outputs from three discriminators to differentiate between joint distributions of images and latents. Discriminators operating on windows of the input have been used in adversarial texture synthesis (patchgan) and image translation (im2im; cyclegan).
Our text-to-speech models are trained on a dataset which contains high-fidelity audio of human speech with the corresponding linguistic features and pitch information. The linguistic features encode phonetic and duration information, while the pitch is represented by the logarithmic fundamental frequency $\log F_0$. We do not use ground-truth duration and pitch for subjective evaluation; we instead use duration and pitch predicted by separate models. The dataset is formed of variable-length audio clips containing single sequences, spoken by a professional voice actor in North American English. For training, we sample 2-second windows (filtering out shorter examples) together with the corresponding linguistic features. The sampling frequency of the audio is 24kHz, while the linguistic features and pitch are computed for 5ms windows (at 200Hz). This means that the generator network needs to learn how to convert the linguistic features and pitch into raw audio, while upsampling the signal by a factor of 120. We apply a μ-law transform to account for the logarithmic perception of volume (see Appendix C).
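The relationship between the feature rate and the audio rate fixes the generator's upsampling ratio; a small arithmetic sketch, assuming the 24kHz audio rate and 200Hz (5ms-window) feature rate used in the paper:

```python
# Arithmetic behind the conditioning-to-audio upsampling described above,
# assuming 24kHz audio and linguistic features at 200Hz (one frame per 5ms).
audio_rate_hz = 24_000
feature_rate_hz = 200

upsampling_factor = audio_rate_hz // feature_rate_hz  # 120 samples per frame
print(upsampling_factor)  # 120

# A 2-second training window pairs 48,000 audio samples with 400 feature frames.
samples = 2 * audio_rate_hz
frames = 2 * feature_rate_hz
assert samples == frames * upsampling_factor
```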
A summary of the generator $G$'s architecture is presented in Table 2 in Appendix A.2. The input to $G$ is a sequence of linguistic and pitch features at 200Hz, and its output is the raw waveform at 24kHz. The generator is composed of seven blocks (GBlocks, Figure 1(a)), each of which is a stack of two residual blocks (resnet_v2). As the generator produces raw audio (e.g. a 2s training clip corresponds to a sequence of 48,000 samples), we use dilated convolutions (dilated_conv) to ensure that the receptive field of $G$ is large enough to capture long-term dependencies. The four kernel size-3 convolutions in each GBlock have increasing dilation factors. Convolutions are preceded by Conditional Batch Normalisation (cond-batch-norm), conditioned on the linear embeddings of the noise term $z$ in the single-speaker case, or the concatenation of $z$ and a one-hot representation of the speaker ID in the multi-speaker case. The embeddings are different for each BatchNorm instance. A GBlock contains two skip connections, the first of which performs upsampling if the output frequency is higher than the input, and it also contains a size-1 convolution if the number of output channels differs from the input. GBlocks 3–7 gradually upsample the temporal dimension of the hidden representations by a total factor of 120, while the number of channels is reduced by GBlocks 3, 6 and 7. The final convolutional layer with a Tanh activation produces a single-channel audio waveform.
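To illustrate why dilation matters for covering long-term dependencies, the snippet below computes the receptive field of a stack of 1-D convolutions; the kernel size of 3 and the doubling dilation schedule are illustrative assumptions, not the exact configuration from Table 2:

```python
def receptive_field(layers):
    """Receptive field (in samples) of stacked stride-1 1-D convolutions,
    given (kernel_size, dilation) pairs."""
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Hypothetical GBlock: four kernel-size-3 convolutions with doubling dilations.
gblock = [(3, 1), (3, 2), (3, 4), (3, 8)]
print(receptive_field(gblock))        # 31 samples with dilation
print(receptive_field([(3, 1)] * 4))  # only 9 samples without dilation
```

Stacking such blocks (and upsampling between them) multiplies the context seen by the output, which is how a feed-forward network can cover hundreds of milliseconds of audio.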
3.3 Ensemble of Random Window Discriminators
Instead of a single discriminator, we use an ensemble of Random Window Discriminators (RWDs) which operate on randomly sub-sampled fragments of the real or generated samples. The ensemble allows for the evaluation of audio in different complementary ways, and is obtained by taking the Cartesian product of two parameter spaces: (i) the size of the random windows fed into the discriminator; (ii) whether the discriminator is conditioned on linguistic and pitch features. For example, in our best-performing model, we consider five window sizes (from 240 to 3600 samples), which yields 10 discriminators in total. Notably, the number of discriminators only affects the training computation requirements, as at inference time only the generator network is used, while the discriminators are discarded. However, thanks to the use of relatively short random windows, the proposed ensemble leads to faster training than conventional discriminators.
Using random windows of different sizes, rather than the full generated sample, has a data augmentation effect and also reduces the computational complexity of RWDs, as explained next. In the first layer of each discriminator, we reshape (downsample) the input raw waveform to a constant temporal dimension by moving consecutive blocks of $k$ samples into the channel dimension, where $k$ is the downsampling factor (e.g. $k = 15$ for input window size 3600 and temporal dimension 240). This way, all the RWDs have the same architecture and similar computational complexity despite their different window sizes. We confirm these design choices experimentally in Section 5.
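The reshape can be sketched in NumPy; the window size 3600 and target length 240 (so $k = 15$) are taken from the sizes discussed in the ablation study:

```python
import numpy as np

# Move blocks of k consecutive samples into the channel dimension, so that
# windows of different sizes all reach the same temporal length.
window = np.arange(3600, dtype=np.float32)   # a (3600,) single-channel window
k = 15
reshaped = window.reshape(3600 // k, k)      # temporal dim 240, k channels

assert reshaped.shape == (240, 15)
# Consecutive samples now sit side by side along the channel axis.
assert np.array_equal(reshaped[0], np.arange(15))
```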
The conditional discriminators have access to linguistic and pitch features, and can measure whether the generated audio matches the input conditioning. This means that random windows in conditional discriminators need to be aligned with the conditioning frequency, to preserve the correspondence between the waveform and the linguistic features within the sampled window. This limits the valid window positions to multiples of the conditioning signal's period (200Hz, i.e. every 5ms). The unconditional discriminators, on the contrary, only evaluate whether the generated audio sounds realistic regardless of the conditioning. The random windows for these discriminators are sampled without constraints, at the full 24kHz frequency, which further increases the amount of effective training data. More formally, we define conditional and unconditional RWDs as stochastic functions:
where $x$ and $c$ are respectively the waveform and the linguistic features, $\theta$ is the set of network parameters, and $\lambda$ is the frequency ratio between $x$ and $c$.
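A minimal sketch of the two sampling schemes, assuming 2s clips at 24kHz and one conditioning frame per 120 audio samples (the function name and constants are illustrative):

```python
import random

CLIP_LEN = 48_000   # 2s at 24kHz
HOP = 120           # audio samples per conditioning frame

def sample_window_start(window_size, conditional):
    """Pick a valid start offset for a random discriminator window."""
    if conditional:
        # Align with a conditioning-frame boundary to keep the
        # waveform/linguistic-feature correspondence intact.
        n_starts = (CLIP_LEN - window_size) // HOP + 1
        return random.randrange(n_starts) * HOP
    # Unconditional windows may start at any sample offset.
    return random.randrange(CLIP_LEN - window_size + 1)

start = sample_window_start(3600, conditional=True)
assert start % HOP == 0 and start + 3600 <= CLIP_LEN
```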
The final ensemble discriminator combines the outputs of the different RWDs.
In Section 5 we describe other combinations of RWDs, as well as a full, deterministic discriminator, which we used in our ablation study.
3.4 Discriminator Architecture
The full discriminator architecture is shown in Figure 2. The discriminators consist of blocks (DBlocks) that are similar to the GBlocks used in the generator, but without batch normalisation. The architectures of the standard and conditional DBlocks are shown in Figures 1(b) and 1(c) respectively. The only difference between the two is that in the conditional DBlock, the embedding of the linguistic features is added after the first convolution. The first and the last two DBlocks do not downsample (i.e. they keep the temporal dimension fixed). Apart from that, we add at least two downsampling blocks in the middle, with downsampling factors depending on the window size, so as to match the frequency of the linguistic features (see Appendix A.2 for details). Unconditional RWDs are composed entirely of standard DBlocks. In conditional RWDs, the input waveform is gradually downsampled by DBlocks until the temporal dimension of the activation is equal to that of the conditioning, at which point a conditional DBlock is used. This joint information is then passed to the remaining DBlocks, whose final output is average-pooled to obtain a scalar. The dilation factors in the DBlocks' convolutions follow a fixed pattern of small values: unlike the generator, the discriminator operates on a relatively small window, and we did not observe any benefit from using larger dilation factors.
We provide subjective human evaluation of our model using Mean Opinion Scores (MOS), as well as quantitative metrics.
We evaluate our model on a set of sentences, using human evaluators. Each evaluator was asked to rate the subjective naturalness of a sentence on a 1–5 Likert scale; we compare against the scores reported by parallel-wavenet for WaveNet and Parallel WaveNet.
Although our model was trained to generate 2-second audio clips, with the starting point not necessarily aligned with the beginning of a sentence, we are able to generate samples of arbitrary length. This is feasible due to the fully convolutional nature of the generator, and is carried out using a convolutional masking trick, detailed in Appendix A.1. Human evaluators scored full sentences with a length of up to 15 seconds.
4.2 Speech Distances
We introduce a family of quantitative metrics for generative models of speech, which include the unconditional and conditional Fréchet DeepSpeech Distance (FDSD, cFDSD) and Kernel DeepSpeech Distance (KDSD, cKDSD). These metrics follow common metrics used in evaluation of GANs for images, Fréchet Inception Distance (FID, fid) and Kernel Inception Distance (KID, kid).
FID and KID compute the Fréchet distance and the Maximum Mean Discrepancy (MMD, mmd) respectively between representations of reference and generated distributions extracted from a pre-trained Inception network (inception_v3). To obtain analogous metrics for speech, we extract the features from an open-source implementation of an accurate speech recognition model, DeepSpeech2 (deepspeech2). Specifically, we use the implementation available in the NVIDIA OpenSeq2Seq library (openseq2seq) and extract features from the last layer, whose output is used in the CTC loss during training. We use representations in the resulting feature space to compute the Fréchet distance and MMD (See Appendix B.1 for details).
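As a rough sketch of the Fréchet part of these metrics (with random features standing in for DeepSpeech2 activations, and the matrix square root computed via the eigenvalues of the covariance product):

```python
import numpy as np

def frechet_distance(X, Y):
    """Fréchet distance between Gaussians fitted to two feature sets
    of shape (n_samples, dim)."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    cov_x = np.cov(X, rowvar=False)
    cov_y = np.cov(Y, rowvar=False)
    # tr((cov_x cov_y)^(1/2)) via the eigenvalues of the product.
    eigvals = np.linalg.eigvals(cov_x @ cov_y).real
    tr_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    diff = mu_x - mu_y
    return diff @ diff + np.trace(cov_x) + np.trace(cov_y) - 2.0 * tr_sqrt

rng = np.random.default_rng(0)
same = rng.normal(size=(2000, 4))
shifted = rng.normal(loc=1.0, size=(2000, 4))
print(frechet_distance(same, same))     # near zero for identical sets
print(frechet_distance(same, shifted))  # clearly positive for shifted sets
```

In the actual metric, the rows of `X` and `Y` would hold DeepSpeech2 features of reference and generated audio.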
We note that fad proposed a similar metric, Fréchet Audio Distance. This metric, however, has been designed for music datasets and uses a music classifier as a feature extractor; therefore it is not well-suited to evaluate text-to-speech models.
As conditioning plays a crucial role in our task, we compute two variants of these metrics, conditional (cFDSD, cKDSD) and unconditional (FDSD, KDSD). Both the Fréchet and the Kernel distance provide scores with respect to a reference real sample, and require the real and generated samples to be independent and identically distributed. Assume that $x$ and $\hat{x}$ are drawn from the real and generated distributions respectively, while $c$ is drawn from the distribution of linguistic features. In the conditional case, cFDSD and cKDSD compute distances between the conditional distributions $p(x \mid c)$ and $p(\hat{x} \mid c)$. In the unconditional case, FDSD and KDSD compare $p(x)$ and $p(\hat{x})$.
Both metrics are estimated using 10,000 generated and reference samples, drawn with the same linguistic features (in the conditional case) or with independently sampled linguistic features (in the unconditional case). This procedure is detailed in Appendix B.3.
The main reason for using both the Fréchet and the Kernel distance is the popularity of FID in the image domain, despite its estimator being biased, as shown by kid. Thanks to the availability of an unbiased estimator of MMD, this issue does not apply to the kernel-based distances: for instance, they yield values close to zero for real data, which enables meaningful comparison in the conditional case. We give more details on these distances in Appendix B.2.
In this section we discuss the experiments, comparing GAN-TTS with WaveNet and carrying out ablations that validate our architectural choices.
As mentioned in Section 3, the main architectural choices made in our model include the use of multiple RWDs, conditional and unconditional, with a number of different downsampling factors. We thus consider the following ablations of our best-performing model:
the full-input, deterministic discriminator;
a single conditional RWD;
multiple conditional RWDs;
a single conditional and a single unconditional RWD;
five independent conditional and five independent unconditional RWDs;
10 RWDs without downsampling, but with different window sizes;
10 RWDs with longer windows.
All other parameters of these models were the same as in the proposed one. In Appendix D we present details of the hyperparameters used during training.
Table 1 presents quantitative evaluations of the proposed model, together with benchmarks and other variants of GAN-TTS that we considered in this work.
Our best model achieves scores that are slightly worse than, yet comparable to, those of the strong WaveNet and Parallel WaveNet baselines. Such performance had not previously been achieved with adversarial techniques, and compares very favourably with parametric text-to-speech models. These results are, however, not directly comparable due to dataset differences; for instance, WaveNet and Parallel WaveNet were trained on 65 hours of data, somewhat more than GAN-TTS.
Our ablation study confirms the importance of combining multiple RWDs. The deterministic full discriminator achieved the worst scores. All multiple-RWD models achieved better results than a single RWD; all models that used unconditional RWDs were better than those that did not. Comparing the 10-discriminator models, it is clear that combinations of different window sizes were beneficial, as a simple ensemble of 10 fixed-size windows was significantly worse. All three 10-RWD models with varying discriminator sizes achieved similar mean opinion scores, with the downsampling model with base window size 240 performing best.
We also observe a noticeable correlation between human evaluation scores (MOS) and the proposed metrics, which demonstrates that these metrics are well-suited for the evaluation of neural audio synthesis models.
Random window discriminators.
Although it is difficult to say why RWDs work much better than the full discriminator, we conjecture that this is because of the relative simplicity of the distributions that the former must discriminate between, and the number of different samples that can be drawn from these distributions. For example, the largest-window discriminators used in our best model discriminate between distributions supported on a 3600-dimensional space, and there are respectively 371 and 44,401 different windows that can be sub-sampled from a 2s clip (real or generated) by conditional and unconditional RWDs of effective window size 3600. The full discriminator, on the other hand, always sees full real or generated examples, sampled from a distribution supported on a 48,000-dimensional space.
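The window counts quoted above follow directly from the clip and window sizes:

```python
# Number of distinct sub-windows of size 3600 in a 2s, 48,000-sample clip.
CLIP_LEN, WINDOW, HOP = 48_000, 3_600, 120  # HOP: conditioning-frame period

conditional_windows = (CLIP_LEN - WINDOW) // HOP + 1   # aligned starts only
unconditional_windows = CLIP_LEN - WINDOW + 1          # any start offset

print(conditional_windows)    # 371
print(unconditional_windows)  # 44401
```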
Our generator has a larger receptive field (590ms, i.e. 118 steps at the frequency of the linguistic features) and three times fewer FLOPs (0.64 MFLOP/sample) than Parallel WaveNet (receptive field: 320ms, 1.97 MFLOP/sample). However, the discriminators in our ensemble compare windows of shorter sizes, from 10ms to 150ms. Since these windows are much shorter than the entire generated clips, training with ensembles of such RWDs is faster than with a full discriminator. In terms of depth, our generator has 30 layers, half as many as Parallel WaveNet, while the depths of the discriminators vary between 11 and 17 layers, as discussed in Appendix A.2.
The proposed model enjoyed very stable training, with gradual improvement of subjective sample quality and decreasing values of the proposed metrics. Despite training for as many as 1 million steps, we did not experience the model collapses often reported in the GAN literature and studied in detail by biggan.
We have introduced GAN-TTS, a GAN for raw audio text-to-speech generation. Unlike state-of-the-art text-to-speech models, GAN-TTS is adversarially trained, and the resulting generator is a feed-forward convolutional network. This allows for very efficient audio generation, which is important in practical applications. Our architectural exploration led to the development of a model with an ensemble of unconditional and conditional Random Window Discriminators operating at different window sizes, which respectively assess the realism of the generated speech and its correspondence with the input text. We showed in an ablation study that each of these components is instrumental to achieving good performance. We have also proposed a family of quantitative metrics for generative models of speech: (conditional) Fréchet DeepSpeech Distance and (conditional) Kernel DeepSpeech Distance, and demonstrated experimentally that these metrics rank models in line with Mean Opinion Scores obtained through human evaluation. As they are based on the publicly available DeepSpeech recognition model, they will be made available for the machine learning community. Our quantitative results, as well as subjective evaluation of the generated samples, showcase the feasibility of text-to-speech generation with GANs.
We would like to thank Aäron van den Oord, Andrew Brock and the rest of the DeepMind team.
Appendix A Architecture details
A.1 Masking convolutions to generate longer samples
Since our generator is a fully-convolutional network, in theory it is capable of generating samples of arbitrary length. However, since deep learning frameworks usually require processing fixed-size samples in batches for efficiency reasons, our inputs of different lengths need to be zero-padded to fit in a fixed-size tensor. Convolutional layers, including the ones used in our model, often pad their inputs to create outputs of the desired dimensionality, hence we only need to ensure that the padded part of the input tensors to all layers is always zero. As shown in Figure 3, this would not normally be the case after the second convolutional layer, since convolutions (with kernel sizes greater than one) would propagate non-zero values outside the border between meaningful input and padding. A simple way to address this issue is masking, i.e. multiplying the input by a zero-one mask tensor, directly before each convolutional layer. This enables batched sampling of utterances of different length, which is efficient on many hardware platforms, optimised for batching.
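A toy NumPy illustration of the leak and the masking fix; the 'same'-padded convolution helper is a stand-in for a real framework layer:

```python
import numpy as np

def conv1d_same(x, kernel):
    """'Same'-padded, stride-1 1-D convolution."""
    pad = len(kernel) // 2
    padded = np.pad(x, pad)
    return np.array([padded[i:i + len(kernel)] @ kernel
                     for i in range(len(x))])

x = np.array([1.0, 1.0, 1.0, 0.0, 0.0])     # 3 real samples + 2 padding
mask = np.array([1.0, 1.0, 1.0, 0.0, 0.0])

h = conv1d_same(x, kernel=np.ones(3))
print(h)            # non-zero values have leaked into the padded region
h = h * mask        # re-masking before the next layer restores the zeros
print(h)
```

Applying the mask before every convolutional layer keeps the padded tail at zero throughout the network, so batched samples of different lengths do not contaminate each other's outputs.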
A.2 Architecture details
|layer|temporal dim.|frequency|channels|
|conv, kernel size 3|400|200Hz|768|
|conv, kernel size 3|48000|24kHz|1|
|factors|num. blocks|depth|factors|num. blocks|depth|
In Table 2 we present the details of the generator architecture. Overall, the generator has 30 layers, most of which are parts of dilated residual blocks.
Table 3 shows the numbers of residual DBlocks and the downsample factors in these blocks, for the different initial downsample factors of the RWDs.
All conditional discriminators eventually add the representations of the waveform and the linguistic features. This happens once the temporal dimension of the main residual stack has been downsampled to that of the linguistic features, i.e. by a total factor of 120. Downsampling is carried out via an initial reshape operation (by a factor varying per RWD) and then in residual blocks, whose downsample factors are the prime divisors of the remaining factor, in decreasing order. For unconditional discriminators, we use only the two largest prime divisors.
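The factorisation rule can be sketched as follows; the helper name is ours, and the per-RWD factors themselves are listed in Table 3:

```python
def prime_factors_decreasing(n):
    """Prime factorisation of n, largest factors first."""
    factors, p = [], 2
    while n > 1:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    return sorted(factors, reverse=True)

# Total downsampling between 24kHz audio and 200Hz features is 120 = 5*3*2*2*2.
print(prime_factors_decreasing(120))        # [5, 3, 2, 2, 2]
# After an initial reshape by e.g. 15, the residual blocks handle the rest.
print(prime_factors_decreasing(120 // 15))  # [2, 2, 2]
```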
Appendix B DeepSpeech distances - details
Our evaluation metrics extract high-level features from raw audio using the pre-trained DeepSpeech2 model from the NVIDIA OpenSeq2Seq library (openseq2seq). Let $w$ be a 20ms window of raw audio at 24kHz, and let $f$ be a function that maps such a window through the DeepSpeech2 network up to the 1600-dimensional output of the layer labeled ForwardPass/ds2_encoder/Reshape_2:0. We use default values for all settings of the DeepSpeech2 model; $f$ also includes the model's preprocessing layers.
For a 2s audio clip $x \in \mathbb{R}^{48000}$, we define
$$F(x) = \frac{1}{N} \sum_{i=0}^{N-1} f\!\left(x_{[240i \,:\, 240i + 480]}\right), \qquad (4)$$
where $x_{[a:b]}$ denotes a vector slice and $N = 199$ is the number of windows.
The function therefore computes 1600 features for each 20ms window, sampled evenly with 10ms overlap, and then takes the average of the features along the temporal dimension.
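Shape-wise, the computation looks like this; a random linear map stands in for the DeepSpeech2 encoder, so only the window slicing and averaging are faithful:

```python
import numpy as np

rng = np.random.default_rng(0)
fake_encoder = rng.normal(size=(480, 1600))  # stand-in for the real encoder

clip = rng.normal(size=48_000)               # 2s of audio at 24kHz
# 20ms windows (480 samples) taken every 10ms (240 samples).
windows = np.stack([clip[i:i + 480]
                    for i in range(0, len(clip) - 480 + 1, 240)])
features = windows @ fake_encoder            # one feature vector per window
clip_features = features.mean(axis=0)        # average over time

print(windows.shape)        # (199, 480)
print(clip_features.shape)  # (1600,)
```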
B.2 Metrics in distribution space
Given samples $X = \{x_i\}_{i=1}^m$ and $Y = \{y_j\}_{j=1}^n$, with $x_i, y_j \in \mathbb{R}^d$, where $d$ is the representation dimension, the Fréchet distance and MMD can be computed using the following estimators:
$$\widehat{FD}(X, Y) = \lVert \mu_X - \mu_Y \rVert^2 + \mathrm{Tr}\!\left(\Sigma_X + \Sigma_Y - 2\left(\Sigma_X \Sigma_Y\right)^{1/2}\right), \qquad (5)$$
$$\widehat{\mathrm{MMD}}^2(X, Y) = \frac{1}{m(m-1)} \sum_{i \neq i'} k(x_i, x_{i'}) + \frac{1}{n(n-1)} \sum_{j \neq j'} k(y_j, y_{j'}) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j), \qquad (6)$$
where $\mu_X$, $\mu_Y$ and $\Sigma_X$, $\Sigma_Y$ are the means and covariance matrices of $X$ and $Y$ respectively, while $k$ is a positive definite kernel function. Following kid, we use the polynomial kernel
$$k(x, y) = \left(\tfrac{1}{d}\, x^{\mathsf{T}} y + 1\right)^3.$$
Estimator (5) has been found to be biased (kid), even for large sample sizes. For this reason, FID estimates for real data (i.e. when $X$ and $Y$ are both drawn independently from the same distribution) are positive, even though the theoretical value of the metric is zero. KID, however, does not suffer from this issue, thanks to the use of the unbiased estimator (6). These properties also apply to the proposed DeepSpeech metrics.
The lack of bias in an estimator is particularly important for establishing scores on real data for conditional distances. In our conditional text-to-speech setting, we cannot sample two independent real samples with the same conditioning, and for this reason we cannot estimate the value of cFDSD for real data, which would be positive due to the bias of estimator (5). For cKDSD, however, we know that such an estimator would have given values very close to zero, had we been able to evaluate it on two real i.i.d. samples with the same conditioning.
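A NumPy sketch of the unbiased estimator (6) with the cubic polynomial kernel, illustrating the near-zero value obtained on two i.i.d. samples from the same distribution:

```python
import numpy as np

def polynomial_kernel(X, Y):
    """k(x, y) = (x.y / d + 1)^3."""
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def mmd2_unbiased(X, Y):
    """Unbiased estimator of the squared MMD between two feature sets."""
    m, n = len(X), len(Y)
    k_xx = polynomial_kernel(X, X)
    k_yy = polynomial_kernel(Y, Y)
    k_xy = polynomial_kernel(X, Y)
    # Exclude diagonal terms so the within-set averages are unbiased.
    sum_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    sum_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2.0 * k_xy.mean()

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
also_real = rng.normal(size=(1000, 8))
fake = rng.normal(loc=0.5, size=(1000, 8))

print(mmd2_unbiased(real, also_real))  # close to zero: i.i.d. samples
print(mmd2_unbiased(real, fake))       # clearly positive: shifted samples
```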
B.3 Distance estimation
Let $G$ and $F$ represent the generator function and the function that maps audio to DeepSpeech2 features, as defined in Eq. 4. Let
$$X = \{F(x_i)\}_{i=1}^{N}, \qquad \hat{X} = \{F(G(z_i, c_i))\}_{i=1}^{N},$$
where $(x_i, c_i)$ are jointly sampled real examples and linguistic features, and $z_i$ are latent noise vectors. In the conditional case, we use the same conditioning in the reference and generated samples, comparing the conditional distributions $p(F(x) \mid c)$ and $p(F(G(z, c)) \mid c)$:
$$\mathrm{cFDSD} = \widehat{FD}(X, \hat{X}), \qquad \mathrm{cKDSD} = \widehat{\mathrm{MMD}}^2(X, \hat{X}).$$
In the unconditional case, we compare $p(F(x))$ and $p(F(G(z, c)))$, drawing the conditioning $c'_i$ for the generated examples independently of the real ones:
$$\mathrm{FDSD} = \widehat{FD}(X, \hat{X}'), \qquad \mathrm{KDSD} = \widehat{\mathrm{MMD}}^2(X, \hat{X}'), \qquad \hat{X}' = \{F(G(z_i, c'_i))\}_{i=1}^{N}.$$
Appendix C μ-law preprocessing
Many generative models of audio use the μ-law transform to account for the logarithmic perception of volume. Although μ-law is typically used in the context of non-uniform quantisation, we use the transform without the quantisation step, as our model operates in the continuous domain:
$$f(x) = \mathrm{sign}(x)\,\frac{\ln(1 + \mu\lvert x\rvert)}{\ln(1 + \mu)},$$
where $x \in [-1, 1]$, and $\mu = 2^{8} - 1 = 255$ for 8-bit encoding or $\mu = 2^{16} - 1 = 65535$ for 16-bit encoding.
Our early experiments showed better performance for models generating μ-law-transformed audio than for those generating non-transformed waveforms. We used the 16-bit transformation.
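The continuous transform can be sketched directly; the function name is ours:

```python
import numpy as np

def mu_law(x, mu=2**16 - 1):
    """Continuous (non-quantised) mu-law companding of a waveform in [-1, 1];
    mu = 2**16 - 1 corresponds to the 16-bit variant."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

x = np.array([-1.0, -0.01, 0.0, 0.01, 1.0])
y = mu_law(x)
print(y)
# Endpoints and zero are preserved, while quiet samples are boosted.
assert y[2] == 0.0 and y[-1] == 1.0
assert y[3] > 0.3            # 0.01 maps to a much larger magnitude
assert np.allclose(y, -y[::-1])
```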
Appendix D Training details
We train all models with a single discriminator step per generator step, using a learning rate for the discriminator that is double that of the generator. We use the hinge loss (geometricgan) and the Adam optimizer (adam).
Following biggan, we use spectral normalisation (spectralnorm) and orthogonal initialisation (ortho-init) in both the generator and the discriminators, apply off-diagonal orthogonal regularisation (ortho-reg; biggan), and keep an exponential moving average of the generator weights (with a decay rate of 0.9999) for sampling. We also use cross-replica BatchNorm (batch-norm), which aggregates batch statistics from all devices across which a batch is split, with standing statistics during sampling. The latter means that we accumulate batch statistics from 100 forward passes through the generator before the actual sampling takes place, allowing for inference at arbitrary batch sizes.
In fact, accumulating standing statistics makes the BatchNorm layers in the generator independent of any characteristics of the samples produced during inference. This technique is thus vital for sampling audio of unspecified length: producing samples that are longer than those used during training typically requires using a smaller batch size, with partially padded samples (See Appendix A.1). These smaller batches would naturally have different statistics than the batches used during training.
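The moving average over generator weights is itself simple; a minimal sketch with the 0.9999 decay mentioned above:

```python
def ema_update(avg_weights, new_weights, decay=0.9999):
    """One exponential-moving-average step over a list of scalar weights."""
    return [decay * a + (1.0 - decay) * w
            for a, w in zip(avg_weights, new_weights)]

# The averaged copy trails the live weights very slowly: after 10,000 steps
# with a constant live weight of 1.0, it has covered only ~63% of the gap.
avg = [0.0]
for _ in range(10_000):
    avg = ema_update(avg, [1.0])
print(avg[0])   # roughly 1 - 0.9999**10000, i.e. about 0.632
```

At sampling time, the averaged weights are used in place of the live training weights.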
We trained our models on Cloud TPU v3 Pods with data parallelism over 128 replicas for 1 million generator and discriminator updates, which usually took up to 48 hours.
Figure 4 presents the stable and gradual decrease of cFDSD during training.