Unsupervised pretraining transfers well across languages
Morgane Rivière, Armand Joulin, Pierre-Emmanuel Mazaré, Emmanuel Dupoux

Facebook AI Research; École des Hautes Études en Sciences Sociales

Abstract: Cross-lingual and multi-lingual training of Automatic Speech Recognition (ASR) has been extensively investigated in the supervised setting. This assumes the existence of a parallel corpus of speech and orthographic transcriptions. Recently, contrastive predictive coding (CPC) algorithms have been proposed to pretrain ASR systems with unlabelled data. In this work, we investigate whether unsupervised pretraining transfers well across languages. We show that a slight modification of the CPC pretraining extracts features that transfer well to other languages, being on par with or even outperforming supervised pretraining. This shows the potential of unsupervised methods for languages with few linguistic resources.

Index Terms: Unsupervised pretraining, low resources, cross-lingual
1 Introduction

Learning phoneme representations remains a challenge for the large number of languages with limited supervised resources. A common approach is to pre-train these representations on a large supervised corpus in other languages and transfer them to the low-resource language [12, 14]. For example, Vesely et al. learn a shared representation on a supervised multilingual dataset and finetune it on the target language. This pre-training works even between distant languages, but it requires massive supervised corpora in the same domain.
Recently, several works [27, 15] have proposed promising methods to train monolingual audio representations without supervision. In particular, Schneider et al. show that the unsupervised pre-training method of van den Oord et al. improves the quality of automatic speech recognition (ASR) on several competitive benchmarks. In this paper, we investigate whether similar unsupervised pre-training methods can be leveraged in a cross-lingual setting to improve the quality of phoneme representations for low-resource languages.
We focus on the contrastive predictive coding (CPC) method of van den Oord et al., since Schneider et al. have shown its benefit for pre-training ASR features. CPC is a form of forward modeling in feature space: it predicts the representations of nearby future windows in an audio sequence while contrasting them with windows from other sequences or windows more distant in time. We introduce several modifications to the original approach that stabilize the training and lead to better phoneme representations. We use our modified CPC model to pre-train phoneme representations in English, namely on Librispeech, and transfer them to several low-resource languages from the Common Voice database.
In this paper, we obtain several results related to transferring features pre-trained without supervision across languages. First, pre-trained phoneme representations outperform representations trained from scratch in the target language, even though we do not use any supervision during pre-training. Surprisingly, we also observe that the gap between unsupervised and supervised pre-training is relatively small when the same pre-training corpus is used. Finally, scaling unsupervised pre-training to larger unlabelled datasets further reduces the gap with supervised pre-training features, and even surpasses them for some low-resource languages.
2 Related work
2.1 Multilingual pre-training for speech recognition
A common way to improve speech recognition in low-resource languages is to train multilingual speech recognition systems with shared components [24, 25, 14]. For example, Stolcke et al. train features for phoneme classification in a different language, and Burget et al. share the parameters of a Gaussian Mixture Model. Closer to our work, several works share the parameters of a neural network encoder, using feedforward networks [29, 12, 14] or LSTMs. The model is then finetuned on the target low-resource language to fit its specificities. The sampling of languages during pre-training can also be biased towards languages related to the target language. Another approach is to encourage a language-independent encoder with an adversarial loss. As opposed to our work, this line of research focuses on supervised pre-training, which restricts its impact to domains and languages with large supervised resources.
2.2 Unsupervised learning of features
Many unsupervised learning approaches have been proposed for speech, and we focus on those based on contrastive learning [4, 30, 23]. In particular, Time Contrastive Learning learns audio features by discriminating between time windows. Our work closely follows van den Oord et al., where a contrastive loss is used to predict forward representations in an audio sequence. Their Contrastive Predictive Coding (CPC) objective function is similar to the objective of word2vec, applied to sequences instead of words. Contrastive approaches are also related to exemplar self-supervision [7, 2, 13]. However, CPC has the advantage of making no assumption about the nature or number of the training data samples. Recently, variants of CPC have been applied to monolingual ASR and to images.
3 Method

In this section, we briefly introduce the approach of van den Oord et al. and refer the reader to the original paper for details. We also present several modifications that improve the resulting representations and stabilize the training. We have made our code as well as our experiments publicly available.*
3.1 Contrastive Predictive Coding
Unsupervised training of neural networks relies on building a pretext task that requires discriminative features to be solved. The pretext task used in Contrastive Predictive Coding (CPC)  is forward modeling, i.e., predicting the future states of a sequence from its past. The particularity of CPC is to frame forward modeling as a reconstruction of future representations, not future inputs. Past and future representations are built from the same model, and a contrastive loss ensures that temporally nearby representations are pushed closer than temporally distant ones.
More precisely, given an audio sequence split into discrete time steps, or windows, we embed the input signal $x_t$ at each time step $t$ with an encoder, and then form the current phoneme representation by applying a sequence model to the resulting sequence of embeddings, i.e.,

$$z_t = g_{\mathrm{enc}}(x_t), \qquad c_t = g_{\mathrm{ar}}(z_1, \dots, z_t),$$

where $g_{\mathrm{enc}}$ is the encoder and $g_{\mathrm{ar}}$ is the sequence model, parametrized by $\theta_{\mathrm{enc}}$ and $\theta_{\mathrm{ar}}$ respectively. In CPC, the encoder is a 5-layer convolutional network (kernel sizes: 10, 8, 4, 4, 4; stride sizes: 5, 4, 2, 2, 2) and the sequence model is a Gated Recurrent Unit (GRU) network. The encoder has a downsampling factor of 160, meaning that for a 16 kHz input, each feature encodes 10 ms of audio.
Given this phoneme embedding $c_t$, the pretext task in CPC is to predict the $K$ next future representations, i.e., $z_{t+k}$ for $k \in \{1, \dots, K\}$. CPC also pushes away representations from a random subset $\mathcal{N}_t$ of negative examples, or "distant" windows. Overall, the loss function at a time step $t$ is thus:

$$\mathcal{L}_t = -\frac{1}{K} \sum_{k=1}^{K} \log \frac{\exp\left(z_{t+k}^{\top} A_k c_t\right)}{\sum_{\tilde{z} \in \mathcal{N}_t \cup \{z_{t+k}\}} \exp\left(\tilde{z}^{\top} A_k c_t\right)}, \quad (1)$$

where $A_k$ is a linear classifier. There are many ways to sample the "distant" windows; we follow van den Oord et al. by sampling negatives within the same speaker. The parameters $\theta_{\mathrm{enc}}$, $\theta_{\mathrm{ar}}$ and $A_1, \dots, A_K$ are learned with stochastic gradient descent.
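To make the objective concrete, the following is a minimal PyTorch sketch of the setup described above (convolutional encoder, GRU sequence model, per-step linear classifiers $A_k$, and the contrastive loss). The hidden dimension, the number of prediction steps, and the negative sampling (uniform over the batch rather than strictly within-speaker) are illustrative simplifications rather than our exact training configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCModel(nn.Module):
    """Minimal CPC sketch: conv encoder g_enc, GRU sequence model g_ar,
    and one linear predictor A_k per prediction step k."""
    def __init__(self, dim=256, n_predictions=12):
        super().__init__()
        kernels, strides = [10, 8, 4, 4, 4], [5, 4, 2, 2, 2]
        layers, in_ch = [], 1
        for k, s in zip(kernels, strides):
            layers += [nn.Conv1d(in_ch, dim, k, stride=s, padding=k // 2), nn.ReLU()]
            in_ch = dim
        self.g_enc = nn.Sequential(*layers)          # overall downsampling factor of 160
        self.g_ar = nn.GRU(dim, dim, batch_first=True)
        self.predictors = nn.ModuleList(
            nn.Linear(dim, dim, bias=False) for _ in range(n_predictions))

    def forward(self, waveform):                     # waveform: (B, 1, T) raw audio
        z = self.g_enc(waveform).transpose(1, 2)     # (B, T', dim) window embeddings
        c, _ = self.g_ar(z)                          # (B, T', dim) context embeddings
        return z, c

def cpc_loss(z, c, predictors, n_negatives=128):
    """InfoNCE-style loss: for each step k, score the true future z_{t+k}
    against negatives drawn from the batch (a simplification of the
    within-speaker sampling used in the paper)."""
    B, T, D = z.shape
    total, K = 0.0, len(predictors)
    for k, A_k in enumerate(predictors, start=1):
        pred = A_k(c[:, :T - k])                     # predictions of z_{t+k}, (B, T-k, D)
        target = z[:, k:]                            # true futures, (B, T-k, D)
        pos = (pred * target).sum(-1, keepdim=True)  # positive scores, (B, T-k, 1)
        idx = torch.randint(0, B * T, (B, T - k, n_negatives), device=z.device)
        negs = z.reshape(B * T, D)[idx]              # random "distant" windows, (B, T-k, N, D)
        neg = torch.einsum('btd,btnd->btn', pred, negs)
        logits = torch.cat([pos, neg], dim=-1)       # the positive is class 0
        labels = torch.zeros(logits.shape[:-1], dtype=torch.long, device=z.device)
        total = total + F.cross_entropy(logits.reshape(-1, logits.shape[-1]),
                                        labels.reshape(-1))
    return total / K
```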
3.2 Modifications to Contrastive Predictive Coding
Stabilization of the training
We observe empirically that the training of CPC is unstable and can converge to poor solutions. The reason is the presence of batch normalization between the layers of the encoder. Indeed, batch normalization computes statistics over the whole batch, and since the encoder is shared across a sequence, these statistics leak information between past and future windows. This makes minimizing eq. (1) trivial when batch normalization is activated, resulting in instability. We fix this issue by replacing batch normalization with a channel-wise normalization that plays a similar role in conditioning internal representations. As opposed to batch normalization, its statistics are not shared across the batch or the sequence and therefore do not leak information (see Supplementary Section S1.1 for details).
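For illustration, a minimal sketch of this kind of channel-wise normalization is given below: each frame of each sequence is normalized across its channels independently, so no statistics are shared across the batch or across time. The exact module used in our released code may differ in details such as the affine parameters or the epsilon value.

```python
import torch
import torch.nn as nn

class ChannelNorm(nn.Module):
    """Normalize each time step of each sequence across its channels.

    Unlike batch normalization, no statistics are computed over the batch
    or over the time axis, so nothing leaks between past and future windows."""
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1))
        self.eps = eps

    def forward(self, x):                               # x: (batch, channels, time)
        mean = x.mean(dim=1, keepdim=True)               # per-sample, per-frame statistics
        var = x.var(dim=1, keepdim=True, unbiased=False)
        x = (x - mean) * torch.rsqrt(var + self.eps)
        return x * self.weight + self.bias
```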
Improving the model
The prediction of future representations is made by linear classifiers on top of a phoneme embedding, as shown in eq. (1). The motivation is to encourage the phoneme embeddings to encode linearly separable phonemes. However, the future representations are not phoneme representations themselves; they are embeddings of the time window. Comparing the outputs of a sequence model and an encoder with a linear classifier may thus not result in linearly separable phoneme representations. Several alternatives are possible, such as adding a sequence model on top of the future representations. In practice, we find that replacing each linear classifier with a single Transformer layer works well (see Supplementary Section S1.2 for details). This layer accesses the entire sequence of context embeddings $c_t$ to predict a particular $z_{t+k}$. We also observe that reducing the dimension of the convolutional layers does not impact performance while reducing the memory footprint. Finally, using an LSTM instead of a GRU slightly improves the performance.
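As an illustration, one way to realize this with standard building blocks is to replace each linear classifier $A_k$ by a small self-attention layer over the sequence of context embeddings; the number of heads and feed-forward dimension below are illustrative assumptions, not the exact configuration used in our experiments.

```python
import torch.nn as nn

class TransformerPredictor(nn.Module):
    """Predict z_{t+k} from the whole sequence of context embeddings (c_1, ..., c_T)
    with a single Transformer encoder layer, instead of a per-step linear map."""
    def __init__(self, dim=256, n_heads=8, ffn_dim=512, dropout=0.1):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, dim_feedforward=ffn_dim,
            dropout=dropout, batch_first=True)

    def forward(self, c):          # c: (batch, time, dim)
        return self.layer(c)       # one prediction per time step, aligned with c
```

As with the linear classifiers, one such predictor is instantiated per prediction offset $k$ and plugged into the contrastive loss in place of $A_k$.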
3.3 Transferring features to phoneme classification
In this work, we evaluate the quality of phoneme representations trained without supervision when transferred across languages. Standard cross-lingual approaches finetune their pre-trained network on the target language. While this improves the quality of the resulting representations, it does not assess the quality of the pre-trained representations themselves. Instead, we freeze the model after pre-training and simply learn a linear classifier for the target language. Specifically, we perform the linear classification on a concatenation of consecutive windows to match the average duration of a phoneme. We then use the CTC loss between our model predictions and the non-aligned phoneme transcriptions. This procedure explicitly measures the linear separability of the phoneme representations once transferred to a target language.
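A minimal sketch of this transfer procedure is shown below, reusing the CPCModel sketch from Section 3.1: the CPC network is frozen, consecutive context features are concatenated, and a single linear layer is trained with the CTC loss against non-aligned phone transcriptions. The number of concatenated windows, the phone inventory size, and the helper names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FrozenLinearPhoneClassifier(nn.Module):
    """Linear phone classifier on top of frozen CPC features.
    Consecutive windows are concatenated to roughly match the average
    duration of a phoneme (the group size here is illustrative)."""
    def __init__(self, cpc_model, feature_dim=256, n_phones=50, n_windows=4):
        super().__init__()
        self.cpc_model = cpc_model.eval()              # frozen feature extractor
        for p in self.cpc_model.parameters():
            p.requires_grad = False
        self.n_windows = n_windows
        self.linear = nn.Linear(feature_dim * n_windows, n_phones + 1)  # +1 for CTC blank

    def forward(self, waveform):                       # (B, 1, T) raw audio
        with torch.no_grad():
            _, c = self.cpc_model(waveform)            # (B, T', D) frozen context features
        B, T, D = c.shape
        T = T - T % self.n_windows                     # drop the incomplete last group
        grouped = c[:, :T].reshape(B, T // self.n_windows, D * self.n_windows)
        return self.linear(grouped).log_softmax(-1)    # (B, T'', n_phones + 1)

# Training step with the CTC loss against non-aligned phone transcriptions:
# log_probs of shape (B, T'', C) must be transposed to (T'', B, C) for nn.CTCLoss.
# ctc = nn.CTCLoss(blank=n_phones)
# loss = ctc(log_probs.transpose(0, 1), phone_targets, input_lengths, target_lengths)
```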
4 Experimental setting
Pre-training on the Librispeech dataset
We pre-train models on the English Librispeech dataset (LS). We consider both the 100 h and 360 h splits of clean data. For the supervised pre-training model, we use aligned phone labels provided for Librispeech.
Transferring to the Common Voice database
After the pre-training, we freeze the parameters of our models and transfer the features across languages. We consider the Common Voice database.
Measuring phoneme separability on Zerospeech2017
Zerospeech2017 is a dataset designed to measure the phoneme separability of unsupervised models in different languages. We consider the English, Mandarin and French benchmarks and report the ABX score on them. The ABX score measures the discriminability between phonemes by estimating the probability that speech segments are closer to one another when they encode the same phoneme than when they do not (the distance being the DTW-realigned average frame-wise cosine distance).
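For illustration, a simplified sketch of this distance and of the decision made for a single (A, B, X) triplet is given below; the official ZeroSpeech evaluation additionally aggregates such comparisons over many triplets, speakers and phonetic contexts, which is omitted here.

```python
import numpy as np

def frame_cosine_dist(a, b):
    """Pairwise cosine distances between feature sequences of shape (Ta, D) and (Tb, D)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

def dtw_mean_dist(a, b):
    """DTW-realigned average frame-wise cosine distance between two segments."""
    dist = frame_cosine_dist(a, b)
    Ta, Tb = dist.shape
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    length = np.zeros((Ta + 1, Tb + 1))        # number of frames on the best path
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
            pi, pj = min(moves, key=lambda m: cost[m])
            cost[i, j] = dist[i - 1, j - 1] + cost[pi, pj]
            length[i, j] = length[pi, pj] + 1
    return cost[Ta, Tb] / length[Ta, Tb]

def abx_error(a, b, x):
    """1 if X (same phoneme as A) is wrongly closer to B, 0.5 on ties, 0 otherwise."""
    d_ax, d_bx = dtw_mean_dist(a, x), dtw_mean_dist(b, x)
    return 1.0 if d_ax > d_bx else (0.5 if d_ax == d_bx else 0.0)
```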
5 Results

5.1 Within-language results
In this set of experiments, we compare the original CPC with our modified version on two within-language tasks: phoneme discriminability on the English Zerospeech2017 dataset, and phoneme linear separability on Librispeech-100. In Table 1, we compare our ABX score with that of the toplines from the Zerospeech leaderboards. It is interesting to note that the original CPC does not perform well on this metric, but our modified version is on par with the state of the art. Overall, our modified CPC surpasses the original model on phoneme classification and even matches unsupervised approaches dedicated to phoneme separability. In Table 2, we show that our modifications to CPC lead to an improvement of 3.4 points in phoneme classification compared to the original CPC implementation.
| Model | ABX ↓ | ABX ↓ |
|---|---|---|
| *Trained on ZeroSpeech2017 (45 h)* | | |
| Supervised topline | 6.9 | 5.3 |
| Heck et al. | 8.7 | 6.2 |
| Chorowski et al. | 8.0 | 5.5 |
| *Trained on Librispeech-360* | | |
| Model | French (ABX ↓) | Mandarin (ABX ↓) |
|---|---|---|
| *Trained within language* | | |
| Heck et al. | 11.7 / 8.7 | 7.4 / 7.9 |
| Chorowski et al. | 10.8 / 7.5 | 11.2 / 10.7 |
| *Trained on English (Librispeech-360)* | | |
5.2 Cross-lingual transfer of phoneme features
In a first experiment, we consider the problem of phoneme classification across languages on the Common Voice database. In Table 3, we report the phone error rate (PER) of linear classifiers trained on top of the phoneme features pre-trained with and without supervision. We also compare with a model trained from scratch on the target dataset. The training set of each target dataset is only one hour long, so the model trained from scratch performs poorly. Pre-trained features, on the other hand, significantly improve the performance in all languages, even without any finetuning. First, when pre-training on the 100 h split of Librispeech, our modified CPC outperforms the original CPC by 5.4 points on average. However, supervised pre-training still performs slightly better than our unsupervised pre-training on the same corpus. An advantage of unsupervised pre-training is that it can be applied to any larger unannotated dataset. We show the benefit of this by pre-training our modified CPC on the 360 h of unlabelled data from Librispeech, which matches the performance of the supervised model. This result not only confirms previous findings, but also shows that unsupervised pre-training can match supervised pre-training given enough data (see Supplementary Section S2 for experiments with the larger Libri-light dataset).
In a second experiment, we compare the quality of our pre-trained features against other unsupervised methods on Zerospeech2017. In Table 4, we compare, on French and Mandarin, the ABX score of our approach trained on English Librispeech with that of unsupervised methods trained on these languages. Surprisingly, our English features transferred to other languages are competitive with the toplines of the leaderboard. This result further shows that unsupervised pre-trained features generalize well across languages.
Impact of finetuning phoneme features
We also study the impact of fine-tuning the phoneme features instead of freezing them. We use a few hours of speech in the target languages for this experiment. In Table 5, we compare frozen features with fine-tuned ones. As in the experiments with one hour of speech, our approach is on par with supervised pre-training when the features are frozen. We also observe a boost of several points in performance for all pre-training methods when we fine-tune the features. Our approach remains relatively competitive with supervised pre-training, but is slightly worse on average.
6 Discussion

Pre-training in a given language, with or without supervision, can produce features that are usable across other languages and other domains. Moreover, these features can be mapped to a set of phonemes even with extremely low-resource datasets and unaligned labels; they are usable with a very simple linear model and can be trained at low cost. Finally, although supervised pre-training tends to be better than unsupervised pre-training, the gap between them is small and can be greatly reduced with a larger amount of unlabelled data. We did not attempt to push numbers in order to achieve the best possible phone error rates in the low-resource languages, as we only tested a linear separation layer for phoneme classification. Further work is needed to establish how these pre-trained features can best be used in the low-resource setting, and with other ASR tasks.
S1 Supplementary methods
We describe here ablation experiments comparing our reimplementation of the original CPC model with the improvements we made to this model.
S1.1 Changing the normalization method
In order to make the training more stable, we replaced the batch normalization in the original model with layer normalization. The results are illustrated in Table S1.
| Model | | |
|---|---|---|
| *Trained on Librispeech-100* | | |
| CPC + Layer norm (LN) | 12.0 | 8.7 |
S1.2 Choosing the right predictor design
We compared several alternatives to the linear prediction model initially presented in the original CPC paper. We supposed that if the prediction network is too simple, then the auto-regressive network will perform a significant part of the prediction task. Thus we thought that a more complex prediction architecture would improve the quality of our output features. The results of our experiments are compiled in Table S2.
| Model | | |
|---|---|---|
| *Trained on Librispeech-100* | | |
| CPC + LN | 12.0 | 8.7 |
| CPC + LN + Conv8 | 13.4 | 9.2 |
| CPC + LN + FFD | 11.7 | 8.56 |
| CPC + LN + transformer | 9.5 | 7.3 |
| CPC + LN + transformer + dropout | 9.3 | 6.8 |
S2 Supplementary results
Here, we present results on the CPC features trained on the recently released Libri-light 60K dataset. As seen in Table S3, we now beat both the Bottleneck and Supervised features on all languages except one. The comparison between Bottleneck and CPC features is displayed in Figure S1.
* Code and data available at https://github.com/facebookresearch/CPC_audio. This is the extended reprint of: Rivière, M., Joulin, A., Mazaré, P.E. and Dupoux, E. (2020). Unsupervised pretraining transfers well across languages. In ICASSP 2020.
References

- (2019) Massively multilingual adversarial speech recognition. arXiv:1904.02210.
- (2017) Unsupervised learning by predicting noise. In ICML.
- (2010) Multilingual acoustic modeling for speech recognition based on subspace Gaussian mixture models. In ICASSP.
- (2005) Learning a similarity metric discriminatively, with application to face verification. In CVPR.
- (2019) Unsupervised speech representation learning using WaveNet autoencoders. arXiv:1901.08810.
- (2018) Sequence-based multi-lingual low resource speech recognition. In ICASSP.
- (2014) Discriminative unsupervised feature learning with convolutional neural networks. In NIPS.
- (2017) The zero resource speech challenge 2017. arXiv:1712.04313.
- (2017) Multilingually trained bottleneck features in spoken language recognition. Computer Speech & Language 46, pp. 252–267.
- (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML.
- (2017) Feature optimized DPGMM clustering for unsupervised subword modeling: a contribution to Zerospeech 2017. In ASRU.
- (2013) Multilingual acoustic models using distributed deep neural networks. In ICASSP.
- (2018) Learning deep representations by mutual information estimation and maximization. arXiv:1808.06670.
- (2013) Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In ICASSP.
- (2016) Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. In NIPS.
- (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167.
- (2020) Libri-light: a benchmark for ASR with limited or no supervision. In INTERSPEECH.
- (2020) Learning robust and multilingual speech representations.
- (2019) Multilingual speech recognition with corpus relatedness sampling. arXiv:1908.01060.
- (2013) Distributed representations of words and phrases and their compositionality. In NIPS.
- (2013) Evaluating speech features with the minimal-pair ABX task: analysis of the classical MFC/PLP pipeline. In INTERSPEECH.
- (2019) wav2vec: unsupervised pre-training for speech recognition. arXiv:1904.05862.
- (2015) FaceNet: a unified embedding for face recognition and clustering. In CVPR.
- (2001) Language-independent and language-adaptive acoustic modeling for speech recognition. Speech Communication.
- (2006) Cross-domain and cross-language portability of acoustic features estimated by multilayer perceptrons. In ICASSP.
- (2019) Contrastive multiview coding. arXiv:1906.05849.
- (2018) Representation learning with contrastive predictive coding. arXiv:1807.03748.
- (2017) Attention is all you need. In NIPS.
- (2012) The language-independent bottleneck features. In SLT.
- (2009) Distance metric learning for large margin nearest neighbor classification. JMLR.